Drawing process

I had these drawing studies for my last gods picture and thought they might be an interesting process story.

The nice thing about these gods, Nečeron and Eši, is that they have things they can do. Nečeron is god of craft, so he can be building. Eši is god of art, so she can be doing art. But just that would be a little boring. From somewhere, but undoubtedly influenced by M.C. Escher, came the idea of each creating the structure that’s holding up the other.

Here are some doodles trying to make it work:


Nečeron’s bit is easy: he’s creating whatever Eši is standing on. (It starts as a table.) But what is she painting? Maybe some sort of framework holding up the platform he’s sitting on?  That’s the lower left drawing; it looked cumbersome.  Maybe a ladder (bottom right), but then he only has one hand free to work. Finally I tried a set of stairs, and that worked.

Here’s the second attempt at that:


I decided that the concept worked, but now ran into the next problem: I can’t really draw this scene out of my own brain. The figures here don’t look terrible, but the proportions and placement of the limbs were difficult, and the blobs representing the hands hide the fact that the concept requires four iterations of my personal drawing bugbear: hands holding objects.

(These are sketches, and would certainly have been improved if I kept working on them. But one thing I’ve learned is that poor proportions do not improve by rendering them really well. Better to get the sketch right.)

I tried looking for photos online, but getting these specific poses would be difficult.

Taking reference photos, however, is easy! I have an iPad! Here are the pictures as they appear in Photoshop, with the sketch done right on top of them.


Who’s the model?  Oh, just some guy who’s available very cheaply.

If you compare this with the previous step, you can find an embarrassing number of errors in the original. E.g. Eši’s legs are way too small, the shoulder facing us is too low, and her neck is not drawn as if we’re looking up at her. Plus I think the final poses are far more dramatic.

I did the final outline over the purple sketch. Then the procedure is: select an area in the outline; fix the selection to make sure it includes everything I want, and fill it in on a separate color layer with a flat color.  Then go over each flat color area and use the airbrush to add shading. The bricks and stairs also get some texturing, added with filters. The jewelry is done on a separate layer with its own drop shadow— a cheap, quick way to add realistic shadows.
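The layer procedure above can be sketched in code. This is purely an illustration of the arithmetic, not anything from the actual Photoshop file: the pixel values, mask shape, shading function, and shadow offset are all invented.

```python
# Toy sketch of the layer operations described above, on a tiny
# grayscale "image" (nested lists, values 0.0-1.0). Illustrative only;
# the real work is done interactively in Photoshop.

W, H = 6, 4

def blank(v=1.0):
    return [[v] * W for _ in range(H)]

# A selection: 1 inside the area to fill, 0 outside.
mask = [[1 if 1 <= x <= 3 and 1 <= y <= 2 else 0 for x in range(W)]
        for y in range(H)]

# Step 1: fill the selection with a flat color on its own layer.
flat = 0.6
color_layer = [[flat if mask[y][x] else None for x in range(W)]
               for y in range(H)]

# Step 2: airbrush shading, approximated here as a multiply
# (darker toward the bottom of the filled area).
def shaded(y):
    return 1.0 - 0.1 * y

result = blank()
for y in range(H):
    for x in range(W):
        if color_layer[y][x] is not None:
            result[y][x] = color_layer[y][x] * shaded(y)

# Step 3: a drop shadow is just the selection mask, offset and
# darkened, composited under the color layer.
OFFSET = 1
for y in range(H):
    for x in range(W):
        sy, sx = y - OFFSET, x - OFFSET
        in_shadow = 0 <= sy < H and 0 <= sx < W and mask[sy][sx]
        if in_shadow and color_layer[y][x] is None:
            result[y][x] *= 0.5  # darken the background
```

This is why the separate-layer trick is cheap: the shadow is derived mechanically from the selection, so it never has to be painted by hand.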

The gods aren’t wearing much.  That’s just how gods are, of course. On an operational level, there are two reasons for this (which we can assume are shared to some extent by Almean sculptors and painters).  The lofty reason is that I like the human figure and hate to cover it up.  The less lofty reason is… clothes are frigging hard to draw. Figure drawing is hard enough, and clothing requires a whole new set of skills and rules of thumb, and looks terrible when you get it wrong. Plus, these are Caďinorian gods, so they should be wearing Caďinorian robes, which require, like, a black belt in drawing. They’re made of wrinkles. There’s a reason so many superheroes wear leotards: they’re basically drawn on top of the nude figure, with no folds.

The final picture:


Tonight, I like it; in a year, I’m sure it’ll dissatisfy me. Actually, when I look at it, I wonder if the angle of the iPad foreshortened the figures, making their feet proportionately too big. Oh well.



Nightmare fuel picture generator

Hat tip to @jwz here. The technical name for these is apparently Image-to-Image Translation with Conditional Adversarial Nets. Here’s a link to a (currently) working interactive toy. It takes a simple drawing and turns it into a rendered nightmare.
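For the curious, the objective behind these conditional adversarial nets (in the pix2pix work the toy is based on) combines an adversarial term with an L1 term that keeps the output close to the target image. Here is a back-of-the-envelope sketch with made-up numbers; no actual network is involved, and the λ = 100 weighting is just the value that paper uses.

```python
# Toy sketch of the pix2pix-style generator objective:
# adversarial loss + lambda * L1 distance to the target.
import math

LAMBDA = 100.0  # weighting of the L1 term

def generator_loss(d_fake, output, target):
    """d_fake: discriminator's probability that the fake is real.
    output/target: flat lists of pixel values."""
    adversarial = -math.log(d_fake)  # low when the discriminator is fooled
    l1 = sum(abs(o - t) for o, t in zip(output, target)) / len(output)
    return adversarial + LAMBDA * l1

# Fooled discriminator + accurate output -> loss near zero.
low = generator_loss(0.99, [0.5, 0.5], [0.5, 0.5])
# Unconvincing and inaccurate output -> large loss.
high = generator_loss(0.10, [0.9, 0.1], [0.5, 0.5])
```

The L1 term is why the outputs stay recognizably tied to your drawing even when the "realism" half goes full nightmare.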


Well, that wasn’t too bad, if you don’t look hard at the eyes.  Can it handle blonde hair?


Nnnnnno, I wouldn’t say it can.  OK, got it, dark hair. Maybe a more cartoony image would look better.


Well, maybe we should play to this thing’s strengths.  If I draw a monster, it should produce a monster, right?


I dunno, it kind of turned into Orc Gary. I wonder if I could import him into Skyrim.

So, who’ll be first to produce a graphic novel with this thing?

Projects: Ticai and Hanying

I have a couple of side projects besides all of the India.  One is Ticai, the game I started working on a few years ago. Here’s what it looks like today.


You can compare this to the last look from… gulp… three years ago here. What’s changed?  A bunch of things:

  • A better skybox, rather than featureless blue.  Still needs to be redone, but at least I know how now.
  • Even more buildings, including the nice round temple on the left.
  • The cobblestones are bump-mapped, so they don’t quite look like a flat texture slapped down on a flat surface.
  • Previously the streets were modular; I figured it would be easier for Unity to render them if there was only one copy of each unit.  Then I realized that the entire street grid has fewer polygons than a single human model. So now the whole grid is one model. This cleared up a lot of little alignment problems and makes the streets look better. It also allowed me to do things like put the tower on the right on a little hill.
  • The camera stays closer to Ticai. This makes it harder (though not impossible) to see through walls and such, which helps out a lot in some of the smaller spaces.

Unity has been upgraded to version 5.4, which broke a few things.  Most are fixed, but something has changed about the lighting which I haven’t figured out.  Ticai’s clothes don’t look smooth, nor does the round temple.  Unity used to correct for that, and I don’t know how to fix it yet.

There were some major bits of the city that weren’t done yet.  There is a whole underground that was only mocked up; it’s all finished now. I also added an alchemist’s shop:


I like the various jars and things. There’s even a microscope!   Not shown: the alchemist has a rather pretty globe of Almea.

I’m convinced that one reason games are so often late and buggy is that the developers spend half their time redoing things.  You make something quickish just to get it working (possibly learning how to do it at the same time). Then you learn how to do things better, get dissatisfied with what you did, rip it out and redo it.  For instance, Ticai’s feet:


I half-assed her feet the first time… I figured I could suggest her toes using the texture, and it looked bad.  Finally I redid the toes, separating them in the model.  Plus I redid the ankles. Also her eyes: she has eyelids now, and blinks.  Her face still looks kind of weird, though, so I’ll have to work on that.

(The four toes are not a way of saving work: Ticai is Almean, so she really does have just four toes.)

I put the project aside before mostly because I was hung up on the writing side. The game is supposed to be a set of interlocked mysteries, which Ticai solves by running around and talking to people.  I want a really complex conversational engine, where you don’t have four options to choose from, but a hundred or more.  But of course that means a lot of writing, and even more testing, and I haven’t found a way to keep the amount of work under control.

The other project is a new conlang, something at least two or three of you have been waiting for patiently for years.  It’s Hanying, one of the languages of the Incatena— in fact, the language of Areopolis, more or less Morgan’s native language.  I said it was “in origin a Chinese-English creole”, and it was… for the first half-century or so of its existence.  But it will be much weirder than that.  E.g., it twice underwent phonological adaptation to new speakers, and it went through both some relexification and decreolization.  By the time it’s done I hope it really looks like something that survived a thousand years of change.

City progress

I’ve got back into making model after model in Blender– current count is 140.  When we last looked in on Ticai, the game looked like this.  Now it looks like this:

Ticai contemplates encroaching urbanization

Now I know why, in a game, you’ll often be right next to interesting-looking spaces you can’t get to. Why can’t I go over and explore them?

A game level, conceptually speaking, works like this:


There’s a very detailed area where the player can explore. Here you’ll get real doorknobs and window frames and pipes, 3-d trees, and all the hidden triggers that make the level work, like working doors and ladders.

Just outside it are areas you can see into, but can’t get to. Because they’re close, they have to be pretty well rendered, though of course nothing will be interactive.

Outside that is a land of increasing fakery. Here the architectural details are likely to be part of the texture, and for any object, only the sides facing the player need to exist. Even farther out, you get the skybox. In Hammer you can have objects there, coarsely modeled, so you can see far into the distance. At this point you model a tree by pasting a picture of a tree on a transparent quad, and far details like clouds may also be 2-d pictures.

You can’t get into those nice nearby areas because it’d get too close to the fakery zone, and the illusion would be spoiled. The level designer may need to put a lot of work into inaccessible areas, but they only need enough work to look good from the accessible one.

(This applies to Valve games, as well as games like Dishonored, Mirror’s Edge, Bioshock, Mass Effect, and Dragon Age. It doesn’t entirely apply to open worlds like Skyrim or Saints Row or Arkham City, which have to use different methods to manage the huge maps– though note that interiors still involve a level change. Unity allows huge maps, but I don’t have a development team to fill them!)
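The zoning described above is essentially distance-based level of detail. Here’s a minimal sketch of the idea; the zone names and cutoff distances are invented for illustration.

```python
# Map distance from the playable area to a rendering strategy,
# mirroring the zones described in the post. Units and thresholds
# are arbitrary.

def detail_zone(distance):
    if distance < 50:
        return "full"    # real doorknobs, triggers, 3-d trees
    elif distance < 150:
        return "visual"  # well rendered, but nothing interactive
    elif distance < 400:
        return "facade"  # details baked into textures, player-facing sides only
    else:
        return "skybox"  # coarse models, billboard trees, 2-d clouds

zones = [detail_zone(d) for d in (10, 100, 300, 1000)]
```

Engines formalize the same idea as LOD groups; the level designer’s version is just doing it by hand, per building.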

Here’s what the city looks like in the editor:

What you’d see with a rocket jump

Ticai can wander just the four city blocks in the middle of the picture. You can see that the modeling gets simpler outside this region, and even within it there’s some fakery– e.g. there’s no need to create roofs for buildings if there’s nowhere she can get high enough to see them.

You can see the map of the Nezi neighborhood, which I’m using for reference. Just to make those four accessible blocks, I’ve had to model about a third of the neighborhood, and I’m not done yet.

Here’s the mansion of the local aristocrats:

And that’s just one wing

I just redid the mansion this week– before, the façade was basically a box with nice windows. You can also see a tree– Unity has a tree creator, which is good, because foliage is awful to model.

See the big white cubes in the city map?  Those are placeholders… maybe I can go model something to replace them with right now…

Hands are hard

One reason games take so long… you keep re-implementing stuff, as you learn how to do things and your standards improve. Case in point: hands.


When I first modeled the hands (on the left), I knew the modeling wasn’t good, but I was happy just to have hands. But for some reason I decided to redo them, and I’m absurdly happy that they look much better. As so often, the key is to have a good reference. The first time I was following a drawing of the whole figure; this time I used a reference illustration of just hands.  I also drew a better texture.

Plus, a technical Blender thing: in both cases, I made the fingers by extruding a square from the palm.  But this time I extruded non-adjacent squares.  That left a gap between the fingers, which is, you’ll note, how hands actually work.

Of course, I’d used the same model for several other characters, and the work had to be propagated to each one.

The same sort of thing seems to happen in professional game development– witness this description of the making of Bioshock, which suggests a) a game is made and remade multiple times over the course of its development, and b) you should probably never work for Ken Levine.

An updated City 17

Although it’s been out for 10 years, Half-Life 2 has always been in the “pretty good graphics” bin in my memory. Till now. Jeannot van Berlo has re-created the train station in Unreal Engine.

Here’s the original…


And here’s van Berlo’s version:


Another shot from Valve:


And van Berlo:


I guess ten years has provided an advance or two.

I was just in the game to take the comparison screenshots, and I still think it’s fine, but the Unreal Engine version is certainly stunning.  More detailed, fancier lighting, and a grander scale.  From the screenshots, it looks a little busier– not quite as focused for the eyes– but it’s hard to tell what it’d be like in the game.

The next stage in photorealism

This post, though a bit breathless, is extremely interesting. It’s how an upcoming game, The Vanishing of Ethan Carter, makes stunning game imagery… essentially by taking a shitload of hi-res photos, then using software to turn them near-automatically into a 3-d model.

Let me guess, we dig up all the graves for coins and rusty weapons?

It’s certainly not a time-saver– you have to take pictures very carefully on location, and the whole idea is that assets aren’t very re-usable… you’re modeling an entire church, say, and not just making a tileable brick wall. The nice thing is that the textures aren’t tiled– they have contextually meaningful dirt and shade and mold and whatever. Photorealistic textures still look wrong and artificial if they’re too even, too widely used, or have no apparent flaws.

A quick way to test video game textures is to look at the edges of things. Take this very good work from Arkham City:

Wouldn’t you take your gloves off for this?

It’s all photorealistic, but look at the way the combination dial just floats in the middle of the safe. Real things have transitions from one surface to another. There should also be shadows (and maybe distortions in the fabric) under the edge of Catwoman’s glove, and under that weird metallic knob on her shoulder.

Now, in a game, you normally don’t focus on that stuff… really, we want to be fooled. Especially in the middle of action, you can get away with pretty simple models.

If you’re trying to make a game on your own, on the other hand, learning about someone else’s new, better methods can be depressing. It’s hard enough making tileable textures! And god, don’t get me started on foliage. There’s a reason so many games are set in dungeons, sci-fi futurescapes, deserts, and sewers. They’re geometric! It’s still really hard to do good vegetation.

A Verdurian street, mostly

I’ve been working on my Unity game, on and off.  I finally more or less finished the street the heroine, Ticai, lives on:


There is something wrong with the lighting which I haven’t figured out… it seems to be very hard to get an even sunlight.  Some models just insist on being in the dark, even if they’re next to brightly lit models.  That yellow house to the right of Ticai’s head, for instance– I had to add a point light so it didn’t look like it’s midnight there.

This part takes forever because each building has to be modeled separately. I can re-use models, but I don’t want it to be too obvious when I’m doing so.

A still picture can’t show it, but I’ve tried to 3-d model rather than use textures where possible.  The windows and doors are 3-D, for instance.  It looks better as you walk around, and I read somewhere that polygons are cheap.

Here’s a view down in the sewers, complete with mystery corpse:


Hmm, I should probably bevel those edges.

Sadly, Unity water isn’t as nice as Source water– it doesn’t reflect.

This might be a game

I keep plodding along at my (unnamed) Unity project, and it’s starting to kind of look like a game.  Here’s what it looks like right now:

Are you hungry? I hope not, because I haven’t modeled any food

There’s a few new things here, like the fire in the fireplace and the water in the pool, but mostly it’s modeling, modeling, modeling.  When I was using Hammer I could use Half-Life 2 assets, but here I have to do everything myself: the house, the furniture, the girls.  There’s even a little knife and spoon on the table.  All of these are modeled and animated in Blender, then textured.  (Yes, the furniture gets animated: doors, chests, and wardrobes open and close.)

I’ve learned some tricks to speed up the building and texturing– e.g., I built a single plank of that barrel, textured that, then replicated it to form the cylindrical shape.
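The barrel trick is easy to express as math: model and texture one plank, then place copies around a circle. A small sketch of that placement; the plank count and radius are made up, and the actual duplication was of course done in Blender, not in code.

```python
# Compute where each copy of a single plank goes to form a barrel:
# evenly spaced around a circle, each rotated to face outward.
import math

def plank_transforms(n_planks, radius):
    """Return (x, z, rotation_degrees) for each copy of the plank."""
    out = []
    for i in range(n_planks):
        angle = 2 * math.pi * i / n_planks
        x = radius * math.cos(angle)
        z = radius * math.sin(angle)
        out.append((x, z, math.degrees(angle)))
    return out

barrel = plank_transforms(12, 0.5)
```

The payoff is that the texture only has to be painted once, at plank scale, and the seams between copies land exactly where real barrel seams would be.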

Here’s what the whole house looks like so far.  Compare to this picture.  The only major furniture that’s missing is some shelving; I actually won’t model everything in the linked picture because it would get too full to move around in.  (Video game rooms are abnormally large and empty.)  However, there will be a lot more set dressing (pictures, candles, chamberpots, etc.).

I should probably add a roof

I started to build walls and such in Unity, but the texturing got too complicated, and anyway there are too many fiddly details (like those closets).  So this is all models.  It’s easier to mock up buildings in Hammer… on the other hand, things like doors are far easier in Unity, since they can be duplicated at will.

More excitingly, there’s a dialog system now, and I have a story!

My main notion is to concentrate on depth rather than breadth.  The anti-models here are games like Far Cry 2, or Remember Me— gorgeous environments where the only things you can interact with are ammo and health packs.  Skyrim comes closer to the ideal, as you can pick up and drop objects, as well as use the forges and cooking pots.

For the dialog, I want to get past the thing where you have three dialog options, normally corresponding to aggressive, nice, or neutral, and half the time your choice doesn’t even matter.  Besides, if there’s just three choices, you can always just pick one and reload if it worked out badly.

In this game, you’ll take one of a dozen or so approaches– e.g. anger, naivete, flirting, curiosity, intimidation, probing.  The clever bit is that the more powerful approaches can only be used once per game.  So you can intimidate, or seduce, or kill, one NPC.  Hopefully this will make you think about which character to do what with.
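The once-per-game mechanic could be tracked with something as simple as a set of spent approaches. A sketch, using the approach names from the post; the class and its API are entirely invented, not the game’s actual dialog system.

```python
# Sketch of the once-per-game approach mechanic: powerful approaches
# are removed from the pool after a single use.

ONE_SHOT = {"intimidate", "seduce", "kill"}  # usable once per game
REUSABLE = {"anger", "naivete", "flirting", "curiosity", "probing"}

class DialogState:
    def __init__(self):
        self.spent = set()

    def available(self):
        return REUSABLE | (ONE_SHOT - self.spent)

    def use(self, approach, npc):
        if approach not in self.available():
            raise ValueError(f"{approach} is no longer available")
        if approach in ONE_SHOT:
            self.spent.add(approach)  # gone for the rest of the game
        return f"{approach} used on {npc}"

game = DialogState()
game.use("intimidate", "guard")
```

The interesting design consequence falls out immediately: because the spent set persists across every conversation, choosing to intimidate the guard is also choosing not to intimidate anyone else, which is exactly the kind of commitment a reload-and-retry menu of three options never forces.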

Plus, there are actually three possible stories, and the same characters are involved in all three.  Ideally details of the story will randomly change between playthroughs, so a) you can play more than once, and b) there is no one right path through the game.  (So, think Skyrim meets The Stanley Parable.)

Will this work, or be fun?  I don’t know yet!  At this point there’s just a lot of modeling and dialog writing to do.  I was mostly trying to see how far I could get with Blender/Unity and wasn’t worrying about whether it would turn out to be a game or not.  So I’m happy to at least have a plan for the game.

I’m also really tempted to record the dialog in Verdurian (with on-screen translation), which is either an awesome or a terrible idea or maybe both.

Blender into Unity

Importing Blender characters and animations into Unity is supposed to be easy. Getting there is pretty horrific, though. As usual, nothing will work right away; you have to look at a million forum pages and tutorials, and swear a lot. One fun complication: both programs keep overhauling themselves, so half the time a method suggested in a tutorial won’t work in your version. And then there was the evening my Blender file got corrupted, giving me a distorted figure I couldn’t fix.

Still, I went back a save, refinagled, and now I have an imported figure that animates. (The image below is not animated, but hey, the pose is different from the original!)

That woman is bigger than my first car!

I need to go look at more Blender tutorials now, because I’m sure I’m doing the animation wrong.  There’s got to be an easier way to repeat a pose, rather than copying bone positions one at a time.

She lost her hair, because my earlier save didn’t have it.  Unity wasn’t always getting her hair anyway– one more thing to check into soon.

For reference, the major stuff I learned:

  • I’m scared of the Actions tab in Blender now.  I was messing with that when my figure got corrupted.  So I’m just doing one long animation, and breaking it up in Unity.
  • Unity can read Blender files, so you can just put a copy of the file in your Assets folder, then import it.  Key bit: you can click on the asset again to change the import settings.
  • Unity has two animation systems… the tutorial for using one of them has about 50 steps.  Instead of doing that, I used the old system; the key bit is setting the Rig type in the import inspector to Legacy.

I spent about two evenings trying to get the animations to play, and it just wouldn’t do it. But with the simpler figure, one long animation in Blender, and legacy mode in Unity, it worked this time.  Now I even have two animations!
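The “one long animation, broken up in Unity” approach amounts to naming sub-ranges of a single frame timeline. A sketch of the bookkeeping; the clip names and frame numbers here are invented, not the project’s actual values.

```python
# One continuous Blender timeline, carved into named clips by
# frame range (the same thing Unity's import settings let you do).

CLIPS = {  # name: (first_frame, last_frame) on the single timeline
    "idle":  (1, 30),
    "walk":  (31, 60),
    "blink": (61, 70),
}

def clip_for_frame(frame):
    """Find which named clip a raw frame number falls in."""
    for name, (start, end) in CLIPS.items():
        if start <= frame <= end:
            return name
    return None

which = clip_for_frame(45)
```

The fragility is also visible here: insert frames in the middle of the timeline and every range after the insertion point has to be renumbered, which is one reason per-action clips (Blender’s Actions) exist in the first place.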