
12 December, 2011

The three skin rendering horrors you want to avoid

It's 2011 and most games still ship with some truly horrific skin rendering. I don't think skin is that hard to get decent, but you have to understand what's important and how to hack it into your lighting/rendering model.
This article won't focus on the state of the art skin rendering techniques or on the state of the art acquired data for skin, these things are really important, fundamental, go ahead and google them, there's plenty to read and it's not hidden knowledge at all (start with Debevec and Jensen).
What I want to do here is just a write up of what I've learned about skin in my experience, and some of the common technical mistakes I've seen.

1- Bad tone
Fallout 3: Radiation really does some numbers on your skin...
By far the worst offender. Nailing the right colours from the day side to the night side of the skin is the most important, and most often neglected, thing. And yes, it is mostly (but not only) a matter of subsurface scattering, as most of the skin's diffuse lighting comes from it, but it's not about complicated techniques. It's about understanding what happens, what's important to model, and how to hack it into whatever lighting model you have in your renderer.

Model: Commander Shepard @ Bioware

Model: Charlotte Free @ IMG/NY
You can get great skintones with even the simplest hacks on top of your basic lambert, and horrible tones by badly tuned texturespace or screenspace diffusion techniques. Penner's pre-integrated skin shading is the current champion of the lambert hacks, but truth is that some sort of ramp on top of your lambert is everything you need (that's to say, you can most often get decent results without even bothering to approximate the geometry with the curvature, as Eric does) if you nail the right hues!
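To make the "ramp on top of Lambert" idea concrete, here is a minimal sketch (plain JavaScript just to show the math; the function name and all the colour constants are made-up tuning examples, not Penner's actual pre-integrated data): the N·L term is wrapped past the terminator, and the dark side of the ramp is shifted towards red, which is where most of the "right hues" come from.

```javascript
// Minimal "ramp on top of Lambert" sketch. All constants are eyeballed
// examples to be tuned against reference, not acquired data.
function lerp(a, b, t) { return a + (b - a) * t; }

function skinRampDiffuse(nDotL) {
    // Wrap lighting: light bleeds a bit past the terminator.
    var wrap = 0.5;
    var t = Math.max((nDotL + wrap) / (1.0 + wrap), 0.0);
    // Hue shift: the terminator picks up a red/orange scatter tint,
    // the fully lit side stays neutral.
    var scatterTint = [0.25, 0.08, 0.03];
    var lit = [1.0, 1.0, 1.0];
    return [
        t * lerp(scatterTint[0], lit[0], t),
        t * lerp(scatterTint[1], lit[1], t),
        t * lerp(scatterTint[2], lit[2], t)
    ];
}
```

Fully lit areas stay neutral, the terminator shifts towards red, and the back side fades to black; the same ramp can then be driven by curvature, as in Penner's paper, if you want to go further.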
Also, remember that scattering affects all of the skin shading: you might want to "ramp" the edges of the shadowmaps (again, Penner's paper covers that), but it's often even more important to properly blend AO on skin, especially if you're using a normal-aware SSAO capable of capturing very fine details; a straight multiply will create some really questionable gray patches. Even simple hacks, like adding a slight constant colour to your SSAO, will go a long way towards creating more organic skin.
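As a concrete sketch of that "constant colour in the SSAO" hack (the function name and the tint value below are hypothetical examples, not a shipped implementation): instead of multiplying the albedo by the raw occlusion, remap the occlusion towards a warm tint first, so fully occluded skin drifts red instead of gray.

```javascript
// AO blend sketch: a straight albedo * ao multiply drags occluded skin
// towards gray; remapping ao towards a warm tint keeps it organic.
// The tint is an eyeballed example value.
function tintedOcclusion(albedo, ao) {
    var aoTint = [0.45, 0.18, 0.15]; // hypothetical warm scatter colour
    return [
        albedo[0] * (ao + (1.0 - ao) * aoTint[0]),
        albedo[1] * (ao + (1.0 - ao) * aoTint[1]),
        albedo[2] * (ao + (1.0 - ao) * aoTint[2])
    ];
}
```

Unoccluded areas are untouched; occluded areas are pushed towards the scatter colour rather than towards black.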


Debevec, Digital Emily, Acquired Diffuse
Give your artists the right tools! It's fundamental to be able to tune the skin while comparing the rendered results with some reference material (put an image viewer and color sampler in game!), it's fundamental to tune colors after fixing your tone-mapping operator, and it's fundamental to understand _all_ your sources of color: shading model, parameters, textures. 


Debevec, Efficient Estimation of Spatially Varying Subsurface Scattering Parameters
Textures are particularly tricky. Remember that skin specular is white (monochromatic), and that skin diffuse maps are really tied to the subsurface scattering model you implement in the shader: in theory the epidermis is pretty gray, and all the colour comes from the blood vessels below the surface.
In practice, especially in the realtime rendering world, we mix all the skin layers together, and more often than not the problem with skin textures is that they are not saturated enough (consider that the white, additive specular sheen should be responsible for some of the loss of saturation in the final rendering) and have too uniform hues (skin hue shifts quite a lot in many regions, i.e. elbows, joints, hands and feet, and usually presents quite a few blemishes).

2- Bad detail


Skin detail does not come from diffuse! That's quite obvious as we said that diffuse lighting is mostly due to subsurface scattering.


Debevec, Digital Emily, Specular
Detail is all in the specular layer. Ideally you would need two different normalmaps for diffuse and specular (actually, Debevec achieves good results rendering with different normals for each of the RGB channels of the diffuse); fetching the same map at different miplevels (bias) is often good enough, and in a pinch even just using the geometric normals for diffuse (or a lerp between the normalmap normals and the geometric ones) is way better than using the specular normals.
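That diffuse-normal softening is just a lerp-and-renormalize between the geometric and normalmap normals; a sketch in plain JavaScript (the blend factor is a hypothetical tuning value):

```javascript
// Blend the geometric normal with the normal-map normal for diffuse
// lighting, then renormalize. t = 0 gives pure geometric normals,
// t = 1 the full normal-map detail (which you'd keep for specular).
function diffuseNormal(geomN, mapN, t) {
    var n = [
        geomN[0] + (mapN[0] - geomN[0]) * t,
        geomN[1] + (mapN[1] - geomN[1]) * t,
        geomN[2] + (mapN[2] - geomN[2]) * t
    ];
    var len = Math.sqrt(n[0] * n[0] + n[1] * n[1] + n[2] * n[2]);
    return [n[0] / len, n[1] / len, n[2] / len];
}
```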
Skin pores are a really, really fine detail, both difficult to capture and easy to get in the way of your BRDF. Most of the time it's better to have the pore detail as self-occlusion in the specular map than in the normalmaps (which, being a derivative measure, will require higher resolution to capture the same amount of detail).
Also, you either have to model skin for a given viewing distance, or you'd better consider that at all but the closest distances the pores will/should fade in your miplevels, and their scattering effect should be modeled by varying the specular exponent (broadening it with distance; you can use ddx and ddy to estimate the filtering width, or use cLEAN). There is no way to tune the specular with a constant exponent that yields both the right amount of detail up close and the broad sheen you see from a distance.
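One common way to hack that broadening, in the same spirit (this is Toksvig's approximation, a crude stand-in rather than cLEAN proper, and the variance estimate is left as an input): as the pixel footprint grows, the filtered normal variance grows, and the effective Phong exponent drops.

```javascript
// Toksvig-style exponent broadening sketch: as the pixel footprint
// grows (estimated e.g. via ddx/ddy in the shader), the filtered
// normal variance grows and the effective Phong exponent drops,
// turning the tight up-close highlight into a broad distant sheen.
function broadenedExponent(baseExponent, normalVariance) {
    return baseExponent / (1.0 + baseExponent * normalVariance);
}
```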
Peach fuzz on dry skin is another geometrical source of specular detail perturbation, you might want to model it by adding noise at the grazing angles if you need that amount of detail.

3- Bad volume
Fight Night Champion
When it works, it's one of the few games that captures volume right...
Skin texture luminosity is often quite uniform, and in Caucasians quite light too, making skin shading more a matter of volume and shape than texture. Three elements are important here: specular, ambient and normals.


Normals are tricky because skin is often... skinned. Conventional bone skinning yields bad geometric normals. I've already blogged about that so I won't repeat myself here.

Ambient is "easy" but really important. A simple constant won't do the job, it just flattens everything; even a simple hemispherical model (top/bottom, a lerp between two colors based on the geometric normal's y component, boils down to a single madd) is way better.
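That hemispherical ambient really is a single madd per channel; a sketch with placeholder colours:

```javascript
// Hemispherical ambient sketch: lerp between a ground and a sky colour
// based on the geometric normal's y component, mapped from [-1,1] to
// [0,1]. The colour values are placeholder examples.
function hemiAmbient(normalY, skyColour, groundColour) {
    var t = normalY * 0.5 + 0.5; // the "single madd" part
    return [
        groundColour[0] + (skyColour[0] - groundColour[0]) * t,
        groundColour[1] + (skyColour[1] - groundColour[1]) * t,
        groundColour[2] + (skyColour[2] - groundColour[2]) * t
    ];
}
```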
Observe and understand. Look at your references, understand what's important, understand your visual errors and their sources. The "correct hack" really depends on the context: if your environment is fairly fixed, as in a sport or racing game, you can model ambient with different components, a sky layer which does not depend on the position, plus a ground reflection that maybe fades on the model based on the distance to the ground, and so on...


Shadows, AO, Ambient model... All coming together nicely
Ambient occlusion is fundamental; ideally you'd want some directionality in the occlusion: bent normals, or encoding the occlusion in some basis. Really, occlusion is fundamental for volume. Each light component (ambient, diffuse, specular etc.) should be occluded in a reasonable way.
Again, there are many ways, technically, to achieve that. I won't delve into detail because it depends on the context: you might want to augment SSAO to encode directional information (as I already said), or precompute occlusion at the vertices (SH or similar), cast rays in screenspace (i.e. screenspace reflection occlusion is easy, especially at the all-important fresnel angles, for rim occlusion), or even simpler hacks.
The important message here is that volume requires occlusion, look at your rendering and understand your visual defects, look for light leaks, compare with real-world references and acquired data and craft your own solution!
Sometimes the actual technical answer is dead simple. An example: one of the improvements of Champion over Round 4 was a "downgrade", we went from VSM to simple PCF filtering, because VSM, even if on average "nicer", was not able to capture the very important occlusions on the face due to precision issues: the nose shadows and the eye sockets. Going back to PCF gave us "worse" shadows but way better faces!


Phong, modified to behave better with Schlick. No lousy round highlights!
Achieving proper specular on skin is probably one of the trickiest parts, as unfortunately, right now in realtime rendering, we have the choice of either employing nice material models on simple analytic lights, or crude material models on more complex, image-based lighting. It's well known that the Kelemen/Szirmay-Kalos model fits the acquired skin data very well, and if you can afford it, it's probably the way to go, especially if your lighting is not very complex (i.e. outdoor, harshly lit scenes).
Unfortunately, in many contexts we want to use some image-based lighting approach (baking reflection cubemaps), and that restricts the BRDF filtering we can employ pretty much to Phong, and straight Phong is really, really bad on skin.
Again, it's important here to observe and understand what matters, what qualities we want to model or hack. The specular sheen, other than affecting saturation and skin detail, as we discussed already, is important for the perception of shape. Human vision can't really distinguish the effects of the various lights in the specular; we can't relate the scene lighting well to the shapes of the specular highlights.

What we do with the specular is to understand the surface shape and material, so what's important is more to model the general shape of the specular, than the link between the specular and the actual scene lights. 
That's why we can often ignore accurate specular occlusion (and just modulate all the specular light with our single shadowmap and omnidirectional ambient occlusion... or even, "worse", multiply some of the diffuse product into Phong; not ideal, but decent), and we can often disregard accurate light positions and use reflection cubemaps. And that's why what really matters is the shape of the specular sheen, and you get quite some latitude in hacking that in, as we can't really "see" these hacks as long as the final shape behaves "well".
What Phong gets really wrong is the highlight shape, which is too uniformly circular, and the lack of fresnel. You can "break" the highlights using an exponent map (which should always be present, but is especially important with Phong). Adding fresnel is not easy, as Phong simply does not reflect enough light at the grazing angles, and thus even multiplying by a decent approximation, like Schlick's, does not usually yield great results; it's better to use fresnel to (also) drive other parameters of the Phong model instead, either bending the reflection normal or lowering the exponent.
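A sketch of the fresnel-driven exponent idea (the Schlick term is the standard approximation; the exponent mapping, the minimum exponent and the F0 value are made-up examples): rather than only multiplying Phong by fresnel, the fresnel term pulls the exponent down so the grazing-angle highlight actually broadens.

```javascript
// Schlick's fresnel approximation (F0 around 0.028 is a commonly quoted
// value for skin) used to drive the Phong exponent down at grazing
// angles. The minimum exponent and the linear mapping are eyeballed.
function schlickFresnel(f0, nDotV) {
    return f0 + (1.0 - f0) * Math.pow(1.0 - nDotV, 5.0);
}

function fresnelDrivenExponent(baseExponent, nDotV) {
    var f = schlickFresnel(0.028, nDotV);
    var minExponent = 4.0;
    // broaden (lower) the exponent as fresnel rises towards 1
    return baseExponent + (minExponent - baseExponent) * f;
}
```

Head-on, the exponent stays close to the base value; at grazing angles it collapses towards the broad minimum, which reads better than a bright, tight Schlick-multiplied highlight.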


Conclusions
I probably should have included a fourth horror, "bad behavior", as most CG humans (all, in videogames?) really lack the proper "fleshiness" in terms of animation and behaviour, but that's a harder problem to tackle, probably outside the realm of realtime animation, at least for most games where you have many characters on screen and can't afford 300 bones in each, plus some soft-body system, some UV relaxation and so on...

Really, what I wanted to point out are some simple things that every game should do, aspects which can most of the time be fixed to a decent degree with really cheap hacks, but which do require an understanding of how the underlying physics and our perception work, to then focus on what really matters visually.


Then the actual techniques really depend on your project. For those of you who really want to delve into some actual tech, an example of a possible solution for skin rendering is here.

Addendum: Hair, Eyes, Teeth, Ears
Hair is a huge pain, on this generation we're stuck with hair cards. Cards don't sort well, are hard to shadow and hard to shade. My usual suggestion is to avoid spending too much time on it, it's not really worth it.
Shading usually employs Kajiya-Kay for specular (cards will require a tangent-shifting map to avoid uniform highlights across a card, which are ok only for very straight hair), some wrap lighting for diffuse, and a way of getting good rim.
For sorting, the usual solution is to go multipass, first doing alpha testing with z-writes, and then alpha blending with z-testing. You can somewhat pre-sort the triangles from in to out, some games do the alpha blending pass in two phases, first the backface cards then the frontfacing ones. It's decent, but defects will still be there, even sorting per triangle, if you can, doesn't solve them all as cards often intersect. I honestly think a better solution is to just alpha test and rely on alpha to coverage, if you can. This is what Fight Night Champion did by the way (Round 4 did a three-pass alpha blending).

Eyes are way, way more important. Shading is not that hard: specular/reflection is usually done with a cubemap; diffuse would need SSS, but again some wrap is good. On Fight Night we computed a second set of normals to match the diffuse lighting with the skin around the eyes, probably a bit overkill. Behavior is hard, and getting it right is crucial! On the shading side, one trick I used is to disable the mipmaps and the bilinear filtering on the reflection cube, to get aliasing far away that in turn creates some shimmering in the small highlights. But most of it is in the animation, and some procedural techniques are to be employed.
Ears have lots of translucency; again, it can be faked easily, I won't lose sleep there. For teeth, the important thing is to use the right occlusion and avoid letting them become too bright; shadows from the mouth and lips onto the teeth are very hard to cast, so again you'll have to do some faking.
Bottom line, for each of these elements there are some easy hacks, but it's important to consider them. Once you understand what each of them needs and what physical things are important to model for each (ear: translucency, eyes: SSS and behavior, teeth: occlusion etc...), the actual techniques are "easy" or can be easily faked to a decent standard.

28 November, 2011

Google reader share

So, some of you have noticed that my Google Reader share is dead (not updated anymore). That's not me being lazy, but Google being a bit evil and trying to shove Google+ down our throats by killing the old facilities instead of integrating Plus into the existing stuff. So there is no Reader share anymore, and my iPad newsreader (Reeder), which was responsible for most of the shared posts, does not support Google+ yet. Stay tuned.

25 November, 2011

Photoshop scripting - Cleartype for images

Left: bilinear, Right: bilinear with "cleartype"
note- the effect is configured for a "landscape" RGB pattern LCD

I always wanted to learn how to script Photoshop (what I learned is that it's a pain and the documentation sucks...), so yesterday I started googling and created a little script to emulate cleartype on images. Here is the source (it assumes that an RGB image is open in PS):

// 3x "cleartype" shrink script

var doc = app.activeDocument;

var docWidth = doc.width.as("px");
var docHeight = doc.height.as("px");

doc.flatten();

// let's go linear RGB
doc.bitsPerChannel = BitsPerChannelType.SIXTEEN;
doc.changeMode(ChangeMode.RGB);
// now that's a bit tricky... we have to go through an action, which has binary data... which I'm not sure will be cross-platform
// it works on Photoshop CS3 on Win7...
function cTID(s) { return app.charIDToTypeID(s); };
function sTID(s) { return app.stringIDToTypeID(s); }; 
var desc6 = new ActionDescriptor();
var ref5 = new ActionReference();
ref5.putEnumerated( cTID('Dcmn'), cTID('Ordn'), cTID('Trgt') );
desc6.putReference( cTID('null'), ref5 );
desc6.putData( cTID('T   '), String.fromCharCode( 0, 0, 1, 236, 65, 68, 66, 69, 2, 16, 0, 0, 109, 110, 116, 114, 82, 71, 66, 32, 88, 89, 90, 32, 7, 219, 0, 10, 0, 22, 0, 19, 
0, 25, 0, 58, 97, 99, 115, 112, 65, 80, 80, 76, 0, 0, 0, 0, 110, 111, 110, 101, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 
0, 0, 0, 1, 0, 0, 246, 214, 0, 1, 0, 0, 0, 0, 211, 44, 65, 68, 66, 69, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 
0, 0, 0, 9, 99, 112, 114, 116, 0, 0, 0, 240, 0, 0, 0, 50, 100, 101, 115, 99, 0, 0, 1, 36, 0, 0, 0, 101, 119, 116, 112, 116, 
0, 0, 1, 140, 0, 0, 0, 20, 114, 88, 89, 90, 0, 0, 1, 160, 0, 0, 0, 20, 103, 88, 89, 90, 0, 0, 1, 180, 0, 0, 0, 20, 
98, 88, 89, 90, 0, 0, 1, 200, 0, 0, 0, 20, 114, 84, 82, 67, 0, 0, 1, 220, 0, 0, 0, 14, 103, 84, 82, 67, 0, 0, 1, 220, 
0, 0, 0, 14, 98, 84, 82, 67, 0, 0, 1, 220, 0, 0, 0, 14, 116, 101, 120, 116, 0, 0, 0, 0, 67, 111, 112, 121, 114, 105, 103, 104, 
116, 32, 50, 48, 49, 49, 32, 65, 100, 111, 98, 101, 32, 83, 121, 115, 116, 101, 109, 115, 32, 73, 110, 99, 111, 114, 112, 111, 114, 97, 116, 101, 
100, 0, 0, 0, 100, 101, 115, 99, 0, 0, 0, 0, 0, 0, 0, 11, 67, 117, 115, 116, 111, 109, 32, 82, 71, 66, 0, 0, 0, 0, 0, 0, 
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 88, 89, 90, 32, 0, 0, 0, 0, 0, 0, 235, 194, 0, 1, 0, 0, 0, 1, 65, 50, 
88, 89, 90, 32, 0, 0, 0, 0, 0, 0, 97, 15, 0, 0, 36, 77, 255, 255, 255, 232, 88, 89, 90, 32, 0, 0, 0, 0, 0, 0, 103, 37, 
0, 0, 220, 208, 0, 0, 5, 29, 88, 89, 90, 32, 0, 0, 0, 0, 0, 0, 46, 162, 255, 255, 254, 227, 0, 0, 206, 39, 99, 117, 114, 118, 
0, 0, 0, 0, 0, 0, 0, 1, 2, 0, 0, 0 ) );
desc6.putEnumerated( cTID('Inte'), cTID('Inte'), cTID('Clrm') );
desc6.putBoolean( cTID('MpBl'), true );
desc6.putBoolean( cTID('Dthr'), false );
desc6.putInteger( cTID('sdwM'), 2 );
executeAction( sTID('convertToProfile'), desc6, DialogModes.NO );

doc.backgroundLayer.applyGaussianBlur(0.75); // limit the frequency a bit to avoid too many fringes

doc.resizeImage(
UnitValue(docWidth, "px"),
UnitValue(docHeight / 3,"px"), 
null, 
ResampleMethod.BILINEAR // To-do: box filter (mosaic + nearest)
);

var unitValue = UnitValue(1, "px");

// RGB pattern, note that the nearest resize will take the center pixel, that's why red shifts by one and not zero
var redLayer = doc.backgroundLayer.duplicate();
redLayer.applyOffset(unitValue, 0, OffsetUndefinedAreas.WRAPAROUND);
var greenLayer = doc.backgroundLayer.duplicate();
greenLayer.applyOffset(-unitValue, 0, OffsetUndefinedAreas.WRAPAROUND);
var blueLayer = doc.backgroundLayer.duplicate();
blueLayer.applyOffset(-unitValue*2, 0, OffsetUndefinedAreas.WRAPAROUND);
doc.resizeImage( // Resize to "select" the RGB columns in the various layers
UnitValue(docWidth / 3, "px"), 
UnitValue(docHeight / 3,"px"), 
null, 
ResampleMethod.NEARESTNEIGHBOR
);

//var col = new SolidColor(); col.rgb.hexValue = "FF0000"; redLayer.photoFilter(col, 100, false);
redLayer.mixChannels ([[100,0,0,0],[0,0,0,0],[0,0,0,0]], false);
greenLayer.mixChannels ([[0,0,0,0],[0,100,0,0],[0,0,0,0]], false);
blueLayer.mixChannels ([[0,0,0,0],[0,0,0,0],[0,0,100,0]], false);

redLayer.blendMode = BlendMode.LINEARDODGE; // add!
greenLayer.blendMode = BlendMode.LINEARDODGE; // add!
// blue is the base layer

doc.flatten();

// let's go to 8bit sRGB
doc.convertProfile ("sRGB IEC61966-2.1", Intent.PERCEPTUAL, true, true);
doc.bitsPerChannel = BitsPerChannelType.EIGHT;

08 November, 2011

Silvio Berlusconi

This is obviously going to be off-topic with the rest of the blog... If you landed here for the first time, this is a rendering related blog and this article is an exception to the rule.

As Europe's and Italy's financial crisis deepens, news crossed the wire today that prime minister Berlusconi vowed to resign. I see many people around the world asking how it is possible that this happens only now, how Berlusconi managed to stay in power for seventeen years even after countless scandals and accusations. While in general I think it's not surprising, and the eight years of the Bush administration could serve as an example, I'd like to try to explain what's peculiar about Italy's situation (from my point of view, of course).
Also, of course, I will make generalizations in the following. In no way do I want to suggest that this applies to everyone and everything; that should be pretty clear.

Survivalists
One way or another, we keep going on. This is by far what I believe to be the deepest of our problems. We don't care much about our society, we avert our eyes and keep going on, everyone trying to find a hole in which to live their lives. 
We are masters in bending (if there is a profit to be made) or ignoring laws. Even our image outside the country is that of creative, chaotic individuals (at best), known for being obsessed with family and our own small individualities.
We are not socialists nor liberals; we are just driven towards whatever can get us a gain tomorrow morning. Mind you, to a degree this happens everywhere, but nowhere is it quite as defining a quality of a population as in Italy, where it is deeply buried everywhere, from how people live their lives to how companies do business.
Even our economy, made mostly of small or family owned companies, with our comparatively large private savings and our huge public debt, is a testimony of this mentality.
I can't tell why this is the case; we are a young republic, and unification was not a smooth deal, but we don't believe in society. Berlusconi is the embodiment of all this, and I don't know how much he was just "born this way" or how much he knowingly acts to please, to be popular, but he is certainly great at leveraging such sentiments. His political message, explicitly or implicitly, has always been "vote for me and I'll let you live your lives without control", "you don't need to be responsible for your actions", "I was successful, don't ask me how; you want to be like me, I won't ask you how"...
Berlusconi was never a left- or right-wing politician. He is obsessed with communism and certainly sees Italy's leftists as pure evil, but his actions are not those of a liberal. Among other things, he's remembered for saying openly at an entrepreneurs' convention that companies without off-shore operations were not led smartly, and that evading taxes (one of Italy's chief problems) was morally sound in a country like ours (in which taxes are too high). He didn't then proceed to lower the taxes and impose strict controls to have everyone pay the right amount, or to incentivize competition and freedom of enterprise.
In the first months of his government he proceeded in the opposite direction, abrogating liberalization laws that had been passed by the previous government, loosening controls over financial transactions, and not reducing a single tax. Not touching established interests, not reducing bureaucracy, just allowing people to screw each other more freely.

Shameless
Berlusconi is probably not the worst individual in Italy's history. Corruption and misgovernment were always there, and can even be tracked to the same underlying sentiment. The first republic created a massive debt because political parties were quite literally buying votes by flushing enormous amounts of public money into all kinds of public ventures, creating hundreds of thousands of "fake" jobs, public employees who were pretty much useless. But it came down onto its knees when it was found that politicians also used public money to fund their own parties. There was corruption, but there was still a sense of shame.
Berlusconi took inspiration from this and pushed it one step further: he was proud of his tricks; every trial he escaped by passing laws in his own favor, every lie and joke he told made him look "smarter", more successful. It's not that the vast majority of Italians did not know he was a thief; that's also why many of his voters were shy of admitting their vote, especially if singled out.
It's that, deep down, they knew, but they admired his skill; they wanted to live the same dream, to take the easy way and just not have to care. That's also why, for years, even after all the sex scandals, he managed to keep a mostly Catholic country under control. That's why he still has a huge following now...

Media and Opposition
These two aspects were also important, and I'm sure there is a lot more to be said, but I think they don't explain the Berlusconi phenomenon quite as much as understanding how deeply he connects with some Italian sentiments.
Berlusconi owns most of Italy's media and is renowned as a great communicator. He lies, but his lies are so constant and so fiercely defended by so many that they slowly become truths. Words slowly lose their meaning, and the public becomes divided into factions who do not reason but just mindlessly cheer for one party or the other.
Again, that's partially a "quality" of Italians, being more emotional than rational, hot-headed and profoundly divided. But he managed to exploit that incredibly well.
His power does not extend only over media, of course: most if not everyone in his party is strongly tied to him. He chose men with little political past or respectability of their own, people who depended on him to be elected. Berlusconi IS his party, and everyone sings the song he sings. And he made that pretty clear from the start: his party has always had direct references to him in the logo, in the hymns, everywhere. Everyone laughs at his jokes. Everyone follows the same rules, tells the same words, uses the same dialectic tricks. I'm not sure if it's imitation or doctrine, but it's powerful.
On the other hand, the opposition is fragmented and largely seen as made of intellectuals and professional politicians who do not have any connection with the people (which, to a degree, can even be very true). They were always bad communicators, so it was easy for Berlusconi to play them, routinely saying that there was no better alternative than him, that the left wing was made of communists who had no real plan other than raising taxes (even if increases or decreases in fiscal pressure have not really been strongly linked to any particular government). Furthermore, they showed no cohesion, being unable to claim even the huge victories they sometimes achieved (Italy was able to enter the Euro as one of the founding partners thanks to the work of a left-wing government, for example) and unable to look past their divisions.

28 October, 2011

Open questions - my two rules

As I wrote here, there are some fundamental questions in realtime rendering that I wish I knew more about. I do have, though, two rules I apply when thinking about rendering techniques:

  1. Reduce variance: It's better to be consistent at a lower quality than have glitches/flickering artifacts at a higher quality.
    • Postulate: all graphical effects should be reviewed in motion, crossing quality boundaries
  2. Less is more: It's better to not have a given effect than have it at a too low quality level.
As an example we can analyze shadows. The first rule tells us that it's better to have stable cascades at a lower resolution than having perspective shadowmaps at high resolution. 
The second rule tells us that it's better to have lower filtering and cull shadows from some objects or limit the shadowing maximum distance, than having bad shadows everywhere.


Unrelated, I just saw this as a job opening at Valve... Smart guys!


Psychologist
We believe that the more we know about human behavior, about how and why people do what they do, the better our products will be. All game designers are, in a sense, experimental psychologists. That’s why we’re looking for a experimental psychologist to apply knowledge and methodologies from psychology to game design and all aspects of Valve’s operations. We want to exploit your experience with experimental design, research methods, statistics, and human behavior to help craft even more compelling gameplay experiences for future Valve titles. We’d also expect you to research and weigh in on any and all topics that are relevant to improving the experiences of our customers, partners, and employees.
Duties:
  • Provide relevant insight into human behavior in order to shape gameplay and customer experience.
  • Perform statistical analyses on all aspects of Valve’s operations: gameplay, financial, and company data.
  • Research compelling new hardware technologies.
  • Design experiments to evaluate various gameplay hypotheses and design choices.
  • Improve existing playtesting methodologies while incorporating novel techniques to improve best practices.
  • Develop innovative ways of acquiring relevant data to answer open questions about all aspects of Valve’s products and business practices.
Requirements:
  • Graduate degree in Psychology (or equivalent) field
  • Advanced knowledge of statistics
  • Familiarity with one or more of the following pieces of data analysis software: SPSS, Systat, Matlab, R, (or equivalent)
  • Four years experience with:
    • Experimental design/research methods
    • Relevant research in cognitive, social, human factors, and related disciplines in psychology
Recommended:
  • Proficiency in one or more of the following programming languages: C++, SQL, PHP, (or equivalent)

26 October, 2011

Open questions

From the series "End of the virtual world" by Robert Overweg

Battlefield, Mass Effect, Modern Warfare, Red Dead, Crysis, Rage, Forza, GT, Alan Wake...

What makes the graphics work? I've played the last three Call of Duty games on 360, with a projector and a 5.1 system. I found them to be amazing. Then I downloaded MW2 on Steam, and the graphics looked mediocre.
Conversely, Mass Effect 2 seemed decent on 360 but way more awesome on PC.
Red Dead pushes fewer polygons and has more visual defects than Crysis, but awed me in a way no other game of this generation did.
Battlefield 3 is a technical jewel but MW seemed to me to have a better atmosphere.

Why? Is my subjective judgment shared by others? Is my memory failing me, have my expectations shifted? Or is there also something technical behind these impressions?

MW's textures on PC have the low-res quality of a bad port. Is this, coupled with the increase in resolution, the reason the game looks worse to me on PC, making my brain "see" the polygons more? Or is it because its atmosphere is better suited to a projector and a couch? Or maybe it's because, playing the game, I was more immersed in it than when I just looked at the graphics on the PC.

Why was it the opposite for ME2? Is its art style, less reliant on gritty texture detail, making better use of the high resolution and antialiasing of the PC?

Why did I enjoy ME2's single-player graphics more than BF3's? Do the graphics enhance the gameplay, or is the opposite also true: do game and story affect the perception of the graphics?

Is it the aliasing that is killing BF? Or is the issue the inability of deferred lighting to express the subtleness of light transport and materials? Is our industry jumping on the deferred lighting approach too fast, without really understanding what it's losing from precomputed lighting?

What about the heavy bloom and flares that BF3 and Crysis2 use? How are they working? How do they alter the perception of the image?

From my experience I observed some patterns, but I don't really know much, I've also found very little research...
Aliasing and other high frequency artifacts quickly tell your brain that it's looking at CG, they are very disturbing. Motionblur at 30fps looks more cinematic and packs more punch than 60fps without blur. We tolerate framerate problems way more if the game looks busy (i.e. A huge explosion) than if they are not connected with game actions. Colour is hard to get right, and ambient lighting and proper occlusion of lighting terms are important to represent volume, rendering the air (haze, fog, scattering, desaturation tricks etc) helps with scale. Crowd variety is achieved more with colour and behavior variation than texture and model. Specular lobes are everywhere, have always fresnel, and we can't recognize errors in the light directions in the specular but we use high-gloss highlights to evaluate shape. Bleeding dark edges (i.e. when dealing with subsampled effects) looks less questionable than having bright halos.

There, I think I didn't miss anything: that's pretty much all I know, and it fills a few lines of text. I think that's a big challenge for us; there are more studios that know how to do deferred lighting right and fast than there are studios that know what's important to create an immersive, beautiful game. We don't know what to focus on and where, which devices are used for what, which artifacts are tolerable and which are disturbing.

We sometimes know the physics, but we have very little math to model the psychology. Yes, KSK fits skin specular well. But which parts of it are important? When does it break, and break the perception of skin being skin? We need to make hacks, and physically reasonable hacks are fundamental in our line of business, but physiologically motivated hacks would be way better!
This also affects all the "tuning" decisions, e.g. more LOD switching but with better detail near the camera, or vice versa? Geometry or textures? A bigger SSAO radius with more noise, or the opposite? And so on.
Moreover, linking perception (vision) and psychology with rendering would give us more objective tools for art direction, like which device best conveys a given intent to the general audience, what creates a sense of "scale" or of "fear", and so on...

Rendering without knowing about perception is crazy; it's as if musicians knew more about sound waves and instruments than about harmony and melody. And yet we often choose rendering techniques based on really faint leads about what is needed to look good. There is little research, and the little there is focuses on issues that are seldom directly applicable to modern videogame rendering techniques.
Even worse, we are only just starting to understand the basics, like colour and normals, and not only in the industry: even in most publications you will find little regard for even basic visual perception metrics.


Videogame rendering today, at its best, is a work of iteration: the more you can try and the better the feedback you get, the more you inch towards this ill-defined target of visual splendor. But even artists and art directors with a great eye for light and colour seldom have much experience with realtime CG artifacts and their impact on perception.


Shifting from art to science has to happen in order for our profession to evolve, we can't rely on art direction for technical problems, it's not only too error-prone but also very inefficient. Scientific studies can be shared and described exactly, while art direction remains subjective and does not result in a shared progress.


It is of course not something that happens only in rendering (or presentation in general: animation, audio); we even make games with very faint leads about what is needed to make them fun. And that's why you often have studios investing big amounts of money, staffed with great talent, producing results that are impressive but still fail, while only very few games really know how to be fun, and really know how to immerse players in their art...

More to come... Meanwhile you might want to read something (other than the links I've already scattered in the post): Some interesting reads here. Also Holly Rushmeier's work (some is linked in the post - from the website the EG2001 and EG2003 presentations are very neat)
Please comment if you know more resources on the topic. A question has been posted on Quora here...

18 October, 2011

Rendering 102

Follow-up to http://c0de517e.blogspot.com/2011/09/rendering-101.html: a simplified and hopefully clearer version of the "how the gpu works" posts. Covers GPU computational model and GPU graphics pipeline.


As usual, let me know in the comments about mistakes and possible improvements!

03 October, 2011

Don't lie.

A task is not done when its code gets checked-in. In production, "done" is:
  • Checked-In
  • Stable (passes automated tests)
  • In-budget (memory and performance)
  • Usable (tools, parameters)
  • Verified (art director or designer or lead programmer)
If you ignore the last four, you're living in a dream. A dream in which you will ship the game. Instead, you'll die under the pressure of crunch time and ship a flawed product that does not match the quality standards it should.

That also means that in production we start with a stable, in-budget product. And that we do have means of verifying that this is true for the entire production (tests).

Yes, it will take twice as much time to finish a task. Yes, it will mean that some tasks can't be declared done until you make more space somewhere else, to fit them in.

Of course you can avoid all this. All it takes, is to lie. 


We are a creative industry. We have to deal with change; we don't plan things up front and then waterfall until they are all done (not most of the time, at least; there are situations in which that applies).


We don't craft a product by following a plan; we make drawings and sketches (prototypes), then take a canvas and start painting.
You don't have one person detailing a finger, another refining an eye, another working on the nose, and then hope that everything will fit together just right. Or hope that you will have all the parts done by a given date! And what if it's then missing one of the ears? What do you do, put ten artists on that ear till midnight every day near the deadline?


Only an idiot would do that, and yet many game companies work like that. They don't consider that everything has to fit just right, and that change is not local: every change has to fit the entire context, every brush stroke has to make sense in the entire painting.
You start with a painting, a rough one, and then refine it, and at all times the painting is a painting; it's not a collection of unrelated pieces. You can stop at any moment and it will still be a painting, maybe not that refined, maybe not as intricate and detailed as you wanted, but a painting.


A game is a game only if it runs. If it crashes or goes out of memory on the target platform, it's not a game, it's some binary that crashes. If it does not hold its performance, it's not a game; we can't burn it to a disc and call it a game, it won't be shipped, it won't pass certification. Iteration should not break this quality: it should go from a "shippable" game to a "shippable" game, especially during production.

30 September, 2011

Rendering 101

This is how I explain (videogame) rendering to non-rendering engineers (even if hopefully it's still understandable for some non-programmers as well).

If you have suggestions on how to make it more clear, post a comment!

28 September, 2011

Fight Night Champion Rendering @ GDC

I wrote so much for Fight Night Champion over the years, but not much can be found on the internet. I crafted at least five presentations for it: three mostly internal, for the team, on Fight Night 4 analysis and pre-production; one on the various prototypes and ideas made to exit pre-production, to share with the other EA teams; and one for GDC after the game was done, which was refined and presented by my (awesome!) colleague Vicky Ferguson after I left EA (you should watch the recording of her presentation; she also presented for FN4 at the previous GDC, which was very cool too).

Unfortunately the slides of this last one were not published (a couple could be found here, but they are pretty useless)... until now. 
At the beginning I thought about writing more detailed blog articles about many of the experiments that were done, but many of them do not really need that much space, some others were rediscovered and published by others (i.e. pre-blurred SSS, bent-normals SSAO), and I think FNC was more about the process than the final tech (in fact, the lengthy presentation never goes into the detail of what in the end were the final FNC shaders!), so...

This is a "director's cut" of the GDC slides, it's way more verbose (not meant for presentation) and with much more material. Enjoy.

27 September, 2011

Bad ideas don't require much explanation: Caching Stable Cascaded Shadowmaps

Note:

Ok, some explanation (as I was asked recently about this). The idea is that for a static CSM and static objects, every frame you just shift a fixed "window" into an "infinite" plane. So why don't we do the same with the actual CSM data? We can shift the old data (or shift implicitly, by wrapping around and keeping an origin marker in shadowmap coordinates) and render only the tiny strip of the window that is novel (the part of the new clip volume not covered by the old one).
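The strip bookkeeping is simple enough to sketch. Here is a minimal Python illustration (hypothetical names, not code from any actual engine) of which rectangles become novel when the cached window shifts by (dx, dy) texels:

```python
def dirty_strips(dx, dy, size):
    """Rectangles (x, y, w, h), in new-window texel space, that contain
    scene content not present in the old window and so must be re-rendered.
    With wrap-around (toroidal) addressing these map directly onto the
    texels to overwrite in the cached shadowmap."""
    adx, ady = abs(dx), abs(dy)
    if adx >= size or ady >= size:
        return [(0, 0, size, size)]  # window moved too far: full refresh
    strips = []
    if adx:  # vertical strip on the side the window moved towards
        strips.append((size - adx if dx > 0 else 0, 0, adx, size))
    if ady:  # horizontal strip, minus the corner the vertical strip covers
        x = 0 if dx >= 0 else adx
        strips.append((x, size - ady if dy > 0 else 0, size - adx, ady))
    return strips
```

For a typical one-texel shift this re-renders a single 1-by-size strip (or an L-shaped pair of strips for a diagonal shift) instead of the whole cascade.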

Culling is easy too: it's just a boolean test between the two volumes. The only real issue is that the near-far planes change every frame in a stable CSM, thus the old shadow zbuffer data would be in a different coordinate space than the new one (not only a shift, but also a change in the projection matrix).

This can be solved in a number of ways; the worst is to "reinterpret" the old values in the new projection, as that loses precision. The best would probably be to write an index into the stencil part and use that index to address an array keeping the last 256 near-far values of the previous frames. 256 values will go pretty fast, so you might need to re-render the entire shadowmap when you run out of them... But really, I think there could be stupid tricks to avoid that, like adding some "borders" to the CSM and shifting it every N texels instead of every single one. Or computing the near-far for volumes that are snapped at shadowmap-sized positions, so every actual shadowmap rendered would at worst intersect four of them, thus requiring a maximum of four stencil indices...

Let me say that again. Our stable CSM shadow is just a fixed-size, shifting window inside an infinite projection plane aligned with our light direction, with a near-far clip that corresponds to the min-max intersection between the projection window and the entire scene. Now imagine we precompute the near-far values (we won't, but let's imagine) in a grid with cells the size of the shadow window. Then, no matter how we shift the window in this infinite plane, it will intersect at most four precomputed near-far grid cells... Thus, four values in the stencil are all we need to index cached z-values.
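That last claim is easy to convince yourself of with a few lines of Python (again just an illustration, names of my own invention): the corners of a size-sized window can fall into at most four cells of a grid of size-sized tiles.

```python
def overlapped_cells(origin_x, origin_y, size):
    """Cells of an infinite grid of size-by-size tiles that a size-by-size
    window placed at (origin_x, origin_y) overlaps: always 1, 2 or 4."""
    cells = set()
    for x in (origin_x, origin_x + size - 1):   # window corner texels
        for y in (origin_y, origin_y + size - 1):
            cells.add((x // size, y // size))   # floor division -> cell index
    return cells
```

So a four-entry near-far table, indexed through the stencil, always suffices no matter where the window lands.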

20 September, 2011

Mathematica and Spherical Harmonics

As my previous post about Mathematica seemed to be well-received, I've decided to dig up some old code, add some comments and post it here. Unfortunately it's littered with \[symbol] tags, as in Mathematica I used some symbols for variables and shortcuts (which you can enter either in that form or as esc-symbol-esc). You can also see a PDF version of the notebook here, with proper formatting. Enjoy!


A function and its SH approximation
(* Normalization part of spherical harmonics *)
shNormalizationCoeffs[l_, m_] := Sqrt[((2*l + 1)*(l - m)!)/(4*Pi*(l + m)!)]


(* Evaluates to a function of \[Theta],\[Phi] for a given degree l and order m, it's defined as three different cases for m=0, m<0, and m>0*)
shGetFn[l_, m_] := Simplify[Piecewise[{{shNormalizationCoeffs[l, 0]*LegendreP[l, 0, Cos[\[Theta]]], m == 0}, {Sqrt[2]*shNormalizationCoeffs[l, m]*Cos[m*\[Phi]]*LegendreP[l, m, Cos[\[Theta]]], m > 0}, {Sqrt[2]*shNormalizationCoeffs[l, -m]*Sin[-m*\[Phi]]*LegendreP[l, -m, Cos[\[Theta]]], m < 0}}]]


(* Indices for a SH of a given degree, applies a function which creates a range from -x to x to every element of the range 0...l, return a list of lists. Note that body& is one of Mathematica's way to express pure function, with parameters #1,#2... fn/@list is the shorthand for Map[fn,list] *)
shIndices[l_] := (Range[-#1, #1] &) /@ Range[0, l]


(* For each element of the shIndices list, it replaces the corresponding shGetFn *)
(* This is tricky. MapIndexed takes a function of two parameters: element of the list and index in the list. Our function is itself a function applied to a list, as our elements are lists (shIndices is a list of lists) *)
shFunctions[l_] := MapIndexed[{list, currLevel} \[Function] ((m \[Function] shGetFn[currLevel - 1, m]) /@ list), shIndices[l]]


(* Generates SH coefficients of a given function fn of \[Theta],\[Phi], it needs a list of SH bases obtained from shFunctions,it will perform spherical integration between fn and each of the SH functions *)
shGenCoeffs[shfns_, fn_] := Map[Integrate[#1*fn[\[Theta], \[Phi]]*Sin[\[Theta]], {\[Theta], 0, Pi}, {\[Phi], 0, 2*Pi}] &, shfns]


(* From SH coefficients and shFunctions it will generate a function of \[Theta],\[Phi] which is the SH representation of the given coefficients. Note the use of assumptions over \[Theta] and \[Phi] passed as options to Simplify to be able to reduce the function correctly, @@ is the shorthand of Apply[fn,params] *)
angleVarsDomain = {Element[\[Theta], Reals], Element[\[Phi], Reals], \[Theta] >= 0, \[Phi] >= 0, \[Theta] <= Pi, \[Phi] <= 2*Pi};
shReconstruct[shfns_, shcoeffs_] := Simplify[Plus @@ (Flatten[shcoeffs]*Flatten[shfns]), Assumptions -> angleVarsDomain]


(* Let's test what we have so far *)
testNumLevels = 2;
shfns = shFunctions[testNumLevels]
testFn[\[Theta]_, \[Phi]_] := Cos[\[Theta]]^10*UnitStep[Cos[\[Theta]]] (* Simple, symmetric around the z-axis *)


(* generate coefficients and reconstructed SH function *)
testFnCoeffs = shGenCoeffs[shfns, testFn]
testFnSH = {\[Theta], \[Phi]} \[Function]Evaluate[shReconstruct[shfns, testFnCoeffs]]


(* plot original and reconstruction *)
SphericalPlot3D[{testFn[\[Theta], \[Phi]], testFnSH[\[Theta], \[Phi]]}, {\[Theta], 0, 
  Pi}, {\[Phi], 0, 2*Pi}, Mesh -> False, PlotRange -> Full]


(* Checks if a given set of coefficients corresponds to zonal harmonics *)
shIsZonal[shcoeffs_, l_] := Plus @@ (Flatten[shIndices[l]]*Flatten[shcoeffs]) == 0


(* Some utility functions *)
shSymConvolveNormCoeffs[l_] := MapIndexed[{list, currLevel} \[Function] Table[Sqrt[4*Pi/(2*currLevel + 1)], {Length[list]}], shIndices[l]]
shExtractSymCoeffs[shcoeffs_] := Table[#1[[Ceiling[Length[#1]/2]]], {Length[#1]}] & /@ shcoeffs


(* Convolution with a kernel expressed via zonal harmonics, symmetric around the z-axis *)
shSymConvolution[shcoeffs_, shsymkerncoeffs_, l_] := (Check[shIsZonal[shsymkerncoeffs, l], err]; 
  shSymConvolveNormCoeffs[l]*shcoeffs*shExtractSymCoeffs[shsymkerncoeffs])


(* Another test *)
testFn2[\[Theta]_, \[Phi]_] := UnitStep[Cos[\[Theta]]*Sin[\[Phi]]] (* asymmetric *)
testFn2Coeffs = shGenCoeffs[shfns, testFn2]
testFn2SH = {\[Theta], \[Phi]} \[Function]Evaluate[shReconstruct[shfns, testFn2Coeffs]]
plotFn2 = SphericalPlot3D[testFn2[\[Theta], \[Phi]], {\[Theta], 0, Pi}, {\[Phi], 0, 2*Pi}, Mesh -> False, PlotRange -> Full]
plotFn2SH = SphericalPlot3D[testFn2SH[\[Theta], \[Phi]], {\[Theta], 0, Pi}, {\[Phi], 0, 2*Pi}, Mesh -> False, PlotRange -> Full]
Show[plotFn2, plotFn2SH]


(* Test convolution *)
shIsZonal[testFnCoeffs, testNumLevels]
testConvolvedCoeffs = shSymConvolution[testFn2Coeffs, testFnCoeffs, testNumLevels]
testFnConvolvedSH = {\[Theta], \[Phi]} \[Function] Evaluate[shReconstruct[shfns, testConvolvedCoeffs]]
plotConvolvedSH = SphericalPlot3D[testFnConvolvedSH[\[Theta], \[Phi]], {\[Theta], 0, Pi}, {\[Phi], 0, 2*Pi}, Mesh -> False, PlotRange -> Full]