November 2009


I just re-read Daniel Dennett’s Consciousness Explained.  I don’t think he quite succeeds– it’s not so much that I disagree as that I think he leaves too much open.  It’s more a program for how we might someday explain consciousness.

He’s best at the negative parts: demolishing the opposition.  He has the best argument against dualism I’ve seen.  Dualism has great intuitive appeal and is surprisingly hard to refute– generally it’s just dismissed, not defeated.  There’s no great obstacle to souls somehow getting information from the body, since we don’t know what the properties of soul-stuff are.  The problem is on the other end: how does the soul send its orders back to the body (e.g. to control its muscles)?  The body is matter, and matter requires energy to put in motion.  Dualism requires energy to arise out of nothing, and we don’t detect any such thing.

But that’s merely a page or two.  His main target is what he calls Cartesian materialism– the idea that the brain presents a sort of TV display in the head watched by a Central Observer and Decisionmaker.  Many who reject dualism, even hard-headed neurologists, still talk as if there were a “Cartesian Theater” in the brain where consciousness occurs.  Dennett shows that the idea is incoherent, particularly breaking down when we look at low-level neural functions.

In philosophical tradition, introspection is sacrosanct, and that leads us to posit frankly magical things going on in our minds.  Dennett outlines an alternative, what he calls heterophenomenology: modelled on anthropology, a sympathetic outsider’s careful analysis of a phenomenological world– that of a conscious subject or even a fictional character. 

This allows him to point out where introspection is, not to mince words, wrong.  All observation, including internal observation, is subject to distortion and error.  (Geoffrey Sampson has made the same point about Chomskyan linguistics: people’s judgments about whether something is grammatical– i.e. whether it can occur in their speech– can be simply wrong; they can deny that they use or even understand some construction, then use it spontaneously.)

The main problem with introspection is that people overstate their impressions.  For instance, we think we have a uniform hi-res visual field.  If you say it in that careful way, no problem!  But you may be tempted to say, not that you think you have a uniform hi-res visual field, but that you actually have one.  And it’s easy to show that you don’t and can’t.

Your eyes have only a tiny area that’s in clear focus– the fovea.  Look straight ahead and hold up a playing card by your ear.  You won’t be able to see what card it is, or even what color… though you can easily see if it’s moving.  Move it slowly toward the center of your vision, still looking straight ahead… when can you make out its color, its suit, its number?  It’ll be surprisingly close to center before you can identify all these things.

Why aren’t we conscious of the visual field being this fuzzy?  Mostly because the eye is constantly, unconsciously darting about.  Anything we look directly at is detailed.  But more importantly, there is nothing in the brain that is looking at a TV screen.  We have to break the habits of thought that require re-presenting visual material to an internal homunculus. 

Dennett invites you to picture looking at wallpaper covered with images of Marilyn Monroe.  Within seconds your brain recognizes it as such… though the fovea has had time to look at only a handful of Marilyns.  You perceive that you’re looking at a wall of Marilyns, and you are.  You are not aware of any fuzziness; but there is no neural process of filling in Marilyns, because there is no full-resolution Central Image that needs to be assembled, and no Cartesian Observer to watch it.  The brain doesn’t need a filling-in process because it can use the world itself as storage: if it wants more detail at a particular spot, it just looks there.

It’s even possible to use an ingenious device to fool the brain.  A computer screen is displaying a text, Text A.  It’s outfitted with an eye-tracker that checks where your fovea is pointing, and changes the text just at that location into Text B.  An outside observer would see Text A with a flicker of changing letters at your gaze point, much too fast to read.  But what you see is Text B, with no fuzziness or flickering.  You think you’re seeing a page of Text B, and in this case you’re wrong… if your fovea hasn’t reached certain words of B, the computer hasn’t even displayed them.
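The logic of the trick is simple enough to sketch.  Here’s a minimal toy version in Python– the character-based “fovea window” and the simulated fixations are my simplifications, not the actual apparatus:

```python
# Toy gaze-contingent display: everywhere the fovea lands, Text A is
# silently swapped for Text B.  (Illustrative only; real setups use an
# eye tracker and pixel coordinates, not character indices.)

FOVEA_RADIUS = 3  # characters around the gaze point the fovea resolves

def rendered_frame(text_a: str, text_b: str, gaze: int) -> str:
    """What the screen shows this frame: Text A, except for a small
    window around the gaze point, which is replaced with Text B."""
    lo = max(0, gaze - FOVEA_RADIUS)
    hi = min(len(text_a), gaze + FOVEA_RADIUS + 1)
    return text_a[:lo] + text_b[lo:hi] + text_a[hi:]

text_a = "the quick brown fox jumps over the lazy dog"
text_b = "THE QUICK BROWN FOX JUMPS OVER THE LAZY DOG"

# Simulate the eye jumping along the line: at each fixation the fovea
# sees only Text B, while a bystander watching the whole screen sees
# mostly Text A with a moving flicker.
for gaze in (4, 14, 24, 34, 42):
    print(rendered_frame(text_a, text_b, gaze))
```

The point is that at every fixation the fovea only ever lands on Text B, so that’s all the subject can report seeing.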

(When I tried to explain this on my board, at least one person went too far the other way– they assumed that the visual field is fuzzy.  It’s not, unless your eyes are bad.  Again, just as there is no homunculus watching a sharp picture screen, there is none watching a fuzzy one.  As an analogy, think of a CEO using a data mining system with zoom-in.  At any one instant, what he gets is a single table of data, but he can zoom in whenever he pleases.  Is his picture of the data high- or low-detail?  Neither: he has neither seen all the data that exists nor seen only high-level summaries.)

Dennett develops a model for how the mind works based on evolutionary history, psychological experiments, and AI.  I think on the whole he’s on the right track– it’s more or less the point of view that I expounded in my paper on Searle.  But it’s too sketchy to justify his grandiose title.  Still, it’s an advance over simple denials that these things can ever be explained.

He points out that computer scientists seem to accept materialist theories of the mind most easily– because we’re used to describing hugely complex systems at abstract levels.  Outsiders just can’t imagine how science could explain mental functions, so they don’t try.  They give up and hold that their failure of imagination proves that it can’t be done.

The weakest bits of the book are probably the sections on qualia.  Dennett seems to view mentions of qualia with professional exasperation… qualia seem to have the position in philosophy that pragmatics has for syntacticians– a trash bin into which to sweep anything we don’t propose to explain.  He rightly points out some of the problems, but his solutions don’t do much to explain what we think we experience.

For that, I recommend C.L. Hardin’s Color for Philosophers.  Hardin doesn’t explain the qualia of color… but I don’t think you can read the book and maintain the conviction that qualia can never be explained.  There are already things about color qualia that science can explain:

  • why yellow is brighter than the other colors
  • why we can easily imagine cyan as blue + green but can’t picture yellow as green + red
  • why only certain colors exist (thus, why David Lindsay’s postulation of additional primary colors is fantasy)
  • why colors shade into each other at all (as opposed to sounds, which remain distinct when presented simultaneously)
  • why languages universally have color prototypes of particular hues
  • how afterimages and certain optical illusions work

There’s no reason to think this list won’t be much larger in a century.


I don’t think I plugged this site before… if I did you can read it again.

How to do some idiotic things, at the Black Table

Rendered bacon fat. Add poison to make soap.

Among the things you can learn to do:

  • Make soap from bacon and lye.
  • Make prison hooch.
  • Make cigarettes from spinach and old paperbacks.
  • Cook a turducken.
  • Clean your bathroom.

It’s all profusely illustrated and hilariously written.

Since ideas like cloud computing are taking center stage, are arguments against open source losing ground?

Also is the current move toward the cloud a good thing for software or not?

—Joe Baker

My last job was in a SaaS company, so I’m familiar with some of the advantages.  It’s great for the seller— you get ongoing revenue instead of single sales; you can easily update all your customers— and it has advantages for enterprise customers: easily deployable, centrally manageable, presumably more reliable.

I think it makes the most sense for side apps— things like source control or survey software that you want to be widely available, but aren’t where most people spend most of their working hours.  For main apps, local teams, not the head office, should be able to choose the best tools.  If I’m spending most of my day using a tool, my team will make a better choice than some clueless IT autocrat.

I’m dubious about cloud computing in general, because there’s all this power in the desktop computer— why avoid it?  It mostly seems like an end run around Microsoft.  But if it works, it won’t produce the Open Source Utopia; it’ll produce a software world dominated by Google rather than Microsoft.

Also see Joel Spolsky’s delicious takedown of the architecture astronauts, particularly Microsoft’s version of cloud computing.

You mentioned Steam, which is an interesting model… it has cloud computing elements, in that your game permissions are stored externally (which makes it easy to change computers— a great boon as I’ve done it twice in the last year), yet the apps it manages are local desktop apps (which makes a lot more sense for games).  That’s a good balance, taking the advantages of cloud computing but not forcing it to do what it’s not good at.

So I’ve been killing a lot of zombies this week.  I’m probably the only person around who has mixed feelings about Left 4 Dead 2.

On the plus side– it’s a really well done game, much more polished than L4D1… and that looked mighty good just a year ago.  The operative word is more… it has more zombies, more specials, more maps, more game types, more weapons, more gore.  Plus melee weapons, fancier models, special ammo types, and more varied gameplay that emphasizes movement.

Zombiein' in the rain

There are some pretty amazing experiences in L4D2.  The thunderstorm in Hard Rain is the most visceral and convincing I’ve seen in a game; the level also has a unique there-and-back structure, where pure mirroring is avoided by having the return journey take place at night and in a flood.  The swamp in Swamp Fever would be, as a friend noted, scary enough without the zombies.  And Dark Carnival is a great time, complete with clowns that attract zombies with their squeaky shoes, playable arcade games, and a rock concert.

There’s a new Scavenge mode that seems fun– a mini-versus game that takes 15 minutes or so instead of an hour and a half.  Well, except when you get rolled, but in that case at least it’s over quickly.

This time the five maps form an overall story, which is a nice touch.  Ellis is pretty amusing; the rest are a little generic.  It’s a little strange to have a game with a disaster motif set (partly) in New Orleans… it’s not insensitive exactly, since it can get you thinking about the Katrina catastrophe.  It’s just a little odd.

Never too busy fighting to get in a game of whackamole

L4D2 addresses some of the frustrations of L4D1’s versus mode: scoring is not so weighted toward survival; the new specials and maps discourage holing up; the additional specials also add a lot more variety– you’re not likely to be assigned Boomer three times in a row, a real drag for me.  It’s sad and hilarious to watch a Charger miss his target… unless of course you’re playing him.

Cons… well, overall it feels something like Civ4: a bunch of nice improvements to a game I’m less interested in than before.  I was particularly sick of Versus, and I’m not sure the changes make it fun enough to play.  Playing Infected is still generally a matter of getting a single attack right or waiting half a minute to respawn, while playing Survivor is hard on new maps.

I had a really unpleasant time tonight in the bridge finale.  They’ve either buffed the hordes or nerfed melee and weapons, because it’s just a horrible struggle to move, not fun at all.  Pretty much all the other levels are interesting in campaign or coop mode though.

Edit: Got it.  The trick is a) run along the extreme left, so the zombies are funneled into a single file; and b) aim for the heads to bring them down faster.

I do think ordinary melee has been nerfed— it takes 4 or 5 punches to down a zombie, way too slow in this level.

Some of the levels could use more clues about where to go.  I don’t know why they’re so restrictive about inventory… e.g. 2 pistols now work really well, but prevent you from using melee weapons; deploying the special ammo removes your health pack even if you deploy it immediately.  Many of the new weapons are neat– the grenade launcher is particularly fun– but they tend to run out of ammo sooner (though this is partly balanced by having more of them).  And it’d be nice if they’d included some smaller maps, like Crash Course.  Oh, and the game can lag out sometimes at intense moments, such as tank fights.

But I really wish they’d put the zombies to bed for a while and go back to HL2 or Portal…

After some sort of foolish hiatus, my friend Chris has returned to playing and writing about video games with a reflective post on Dragon Age Origins.

He doesn’t like its combat system:

I can see how true fans of the genre would enjoy it — if you really want to delve into tactics and planning and manage a handful of characters down to the smallest detail, I imagine you’re in heaven.  For me, it boils down to wanting to click a mouse button to swing a sword, not click a mouse button to activate an icon to swing a sword.  If I hit someone or block a blow, I want it to be based on my reflexes, not on an invisible dice roll behind the scenes.  Simple as that — it’s just not for me, and I knew that before I bought the game.  I’m not criticizing it, it’s just not the style of combat I enjoy.

I think he’s put his finger on an oddity of Bioware games.  They used to do explicit D&D games like Baldur’s Gate, and to some extent all their games are still hidden D&D games.  For some reason this is particularly evident in KOTOR, where your character will fight (though badly) entirely on her own if you do nothing, and if you like (and if you are a Cheeto-stained geek) you can call up a screen that shows all your dice rolls.

Chris loved Mass Effect, which does a much better job of looking like a pure shooter, but it really has the same mechanism… you can pause combat and micromanage your party and what spells, er mass effects, they are using.  I tend to agree that this is more annoying than fun.  I’d rather focus on the main character and trust the others to do their jobs.  There’s just something unsatisfying about the base D&D mechanic of “rolls to hit”, especially in a computer game.  Look, the dude is right there, two feet away, of course I hit him.  I don’t have a problem with my skill or my rusty iron blade being so bad that I didn’t do much damage, but this “you missed with a sword” stuff feels wrong.  If you want to make missing a game dynamic, make me use the mouse; I guarantee I’ll miss plenty.
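For the non-Cheeto-stained, that hidden mechanic boils down to something like this– a generic d20-style sketch, not Bioware’s actual code:

```python
import random

def sword_swing(attack_bonus: int, armor_class: int, damage_die: int) -> int:
    """Generic d20-style attack: an invisible roll decides whether you
    hit at all, then a second roll decides damage.  Returns 0 on a miss."""
    if random.randint(1, 20) + attack_bonus < armor_class:
        return 0  # "you missed with a sword" -- from two feet away
    return random.randint(1, damage_die)

# Five swings at an ordinary enemy: clicking the icon just feeds these rolls.
for _ in range(5):
    print(sword_swing(attack_bonus=4, armor_class=14, damage_die=8))
```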

Jade Empire feels different, without all the micromanagement… though you don’t really have to worry about aim, you just have to keep close to your opponent.

Chris apparently feels not very connected to his character, partly due to the lack of voice.  That’s kind of a poser.  Do you want to be following a character (in which case you want them to have a personality, like Jade in Beyond Good & Evil, or Sam in Sam & Max), or be a character (in which case the on-screen character shouldn’t have too many reactions of their own, in case they don’t match yours)?  Some games take a middle ground– e.g. Left 4 Dead, where your character may have some funny lines but remains pretty generic.

Bioware generally takes the path of letting you choose from a small set of PCs, and then making moral decisions along the way.  In Mass Effect, for instance, you can choose between three possible backstories, which will be referred to throughout the game.  In KOTOR and Jade Empire you play a very specific and key figure– though you can play them your own way.  I think it works pretty well, but for full immersion I like Bethesda’s games, where you feel in full control of your character.

Chris also has some words that all fantasy writers should take to heart:

Basically, you’ve got orcs from Mordor running wild and the “good” races must align to stop them from taking over Middle Earth.  So, nothing really new in the main storyline.   (Question: if the monsters ever did take over, what the hell would they do then?  Stand around roaring?  Do they have other marketable skills besides stabbing villagers and operating catapults?  Can any of them grow crops or improve roads or manage an inn?)

Obviously people like stories about battling eeeeeevil, but there’s always a part of my mind that rebels at this, since no real-world struggle is like this.  No one is an actual minion of eeeeevil; the bad guys simply have a different conception of good, and they think we’re bad guys.  Isn’t that a more interesting setup anyway?

So, rather suddenly, I’m done with the main quest in Borderlands.  There are still some side quests to finish, but a first playthrough works out to about 60 hours.

It’s best not to think of it as an RPG.  There are no actual roleplaying options, no morality, not even alternative means of doing a quest.  It’s a shoot n’ loot.  One advantage is that quests are highly replayable– which is good because due to the confusing way quests are handled in co-op, you’ll probably be doing the same quest multiple times.


Krom is up there, unaware of the invention of sniper rifles

Above is one of my favorites, taking out Krom.  It’s not hard, but it’s just a really fun approach, up a set of walkways attached to the sides of a canyon, sniping at bandits the whole way.  One of our party kept falling to his death in the canyon, which was hilarious for the rest of us.

Quest sharing is good and bad.  If a quest is ahead of you in the storyline, you can accompany your friends but won’t get credit for it.  That’s not bad since you can just do it again once you’re eligible.  If you are eligible, you get credit– but you generally don’t see the quest dialogs, have little idea what you’re doing, and possibly miss out on quest rewards. 

Well, it’s of a piece with Borderlands’ overall design philosophy, which seems to be to lavish attention on the core gameplay and art, be just adequate in UI and story, and quite annoying in setting up co-op and in voice support.  I didn’t think anything could make me miss the Left 4 Dead lobby system, but Borderlands does. 

Fights range from easy to awesome-difficult.  They’re most fun in co-op, especially once you figure out the shield system.  (I had a fast health regen shield for most of the game, which was nice because I rarely needed to heal– but in a battle I’d die way too often.  It works better to have high shield capacity and retreat when it’s exhausted.)

I’m somewhat frustrated with the looting mechanic.  You don’t get enough inventory slots till near the end; this means you’re constantly evaluating new guns, and I find this hard to do in co-op when everyone’s rushing to the next checkbox.  You can compare guns, but you can’t directly see damage per second, and it’s hard to evaluate how this is affected by other features, such as elemental damage.  It’s cumbersome to change what weapons are equipped, and you can’t even change weapons if your inventory is full.
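Here’s the back-of-the-envelope math the game makes you do in your head– a sketch under my own assumptions (the fire-and-reload cycle and the flat elemental multiplier are guesses, not Gearbox’s actual formula):

```python
def effective_dps(damage: float, fire_rate: float, magazine: int,
                  reload_time: float, elemental_mult: float = 1.0) -> float:
    """Damage per second over a full fire-and-reload cycle.  Damage,
    fire rate, and magazine size come straight off the gun card; the
    elemental multiplier is a rough stand-in for burn/shock damage."""
    time_firing = magazine / fire_rate
    total_damage = damage * magazine * elemental_mult
    return total_damage / (time_firing + reload_time)

# Comparing two drops: a hard-hitting revolver vs. a fast incendiary SMG.
print(effective_dps(damage=180, fire_rate=1.7, magazine=6, reload_time=2.1))
print(effective_dps(damage=55, fire_rate=8.0, magazine=28, reload_time=2.6,
                    elemental_mult=1.2))
```

Even this crude version shows why eyeballing two gun cards mid-firefight is hopeless.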


Picturesque Eridian ruins

The planet of Pandora is mostly a trashed, polluted nightmare… but it can be quite beautiful at times.  It’s about as close as you can get to walking around in a Moebius comic.

People seem to either love or hate the Claptraps, the little robots.  I think they’re cute.  There’s an amusing cutscene that plays off the movie convention that to show that a character is really evil, he kills a small animal: one of the villains blows away a Claptrap.

The UI is full of annoyances.  E.g. the key to turbocharge your vehicle is different from your sprint key; you should be able to fill up on ammo with a single action; you can get class mods that increase skills but there’s no feedback as to what the effective skill level is; completed quests seem to be unsorted.  One imagines Randy Pitchford in bug triage sessions asking “Is this related to shoot n’ loot?  If not, forget it.”   But don’t pass on the game because of it; these things are liveable.