Open Your Eyes

August 7, 2012

After a silent period, I’m proud to present the advances we’re making in eye shading at Activision Blizzard!

This Wednesday I’ll talk about it in the Advances in Real-Time Rendering course (at SIGGRAPH 2012); I invite you to come by for all the gory details!

The R&D team behind this shader consists of Javier Von Der Pahlen (Technology Director R&D and Photographer), Etienne Danvoye (Technical Director) and me (Real-Time Graphics Researcher). Zbyněk Kysela (Modeler and Texture Artist), Suren Manvelyan (Photographer) and Bernardo Antoniazzi (Technical Art Director) also participated.

Performance-wise, rendering the eyes takes 2%-3% of the whole demo. This is work in progress, but here are some shots showcasing our shader:

[Image comparisons: Eye Shader Off vs. Eye Shader On (three shots; the third links to the inspiration for that shot)]

SMAA 1x featured on ARMA 2: Operation Arrowhead and Take on Helicopters

March 27, 2012

SMAA 1x is now natively integrated into the latest ARMA 2: Operation Arrowhead beta, and into the 1.05 update of Take On Helicopters!


The day has come

February 2, 2012

At this important moment of my life, the day has come to end my skin research in order to take a new professional direction.

These last months I’ve learned a very important lesson: efforts towards rendering ultra-realistic skin are futile if they are not coupled with HDR, high-quality bloom, depth of field, film grain, tone mapping, ultra-high-quality models, parametrization maps, high-quality shadow maps and a high-quality antialiasing solution. If you fail at any of them, the illusion of looking at a real human will be broken. This is especially true in close-ups at 1080p, which is where the real skin rendering challenge lies.

As some subtleties (like the film grain) are lost in the online version, I encourage you to download the original Blu-ray quality version below to better appreciate the details and effects rendered (but be aware that you will need a powerful computer to play it). Please note that everything is rendered in real time; in fact, you can also download a precompiled version of the demo (see below), which shows the shot sequence of the movie from beginning to end. The whole demo runs at between 80 and 160 FPS, with an average of 112.5 FPS, on my GeForce GTX 580, but it can run on weaker configurations by using more modest settings.

The main idea behind the new separable SSS approach is that you can get very similar results to the full 12-pass approach ([Eon07]) with just a regular two-pass setup. It can be done in screen space and is really, really fast (you may want to see this related post). I hope to write something about this in the future; meanwhile, the source code of the whole demo is readily available on GitHub.
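The gain from separability can be pictured with a plain Gaussian: an N×N 2-D convolution factors exactly into a horizontal and a vertical 1-D pass. Below is a minimal NumPy illustration; the kernel here is a toy Gaussian, not the demo’s actual diffusion profile:

```python
import numpy as np

def gaussian_kernel_1d(sigma, radius):
    """Normalized 1-D Gaussian kernel with the given radius in pixels."""
    x = np.arange(-radius, radius + 1, dtype=np.float64)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

def separable_blur(image, sigma=2.0, radius=6):
    """2-D Gaussian blur done as two 1-D passes: horizontal, then vertical.
    Cost is O(2r) taps per pixel instead of O(r^2) for the full 2-D kernel."""
    k = gaussian_kernel_1d(sigma, radius)
    rows = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, image)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, rows)
```

Blurring an impulse image with this routine reproduces the outer product of the two 1-D kernels, i.e. the full 2-D Gaussian; real diffusion profiles are only approximately separable, hence results that are very similar rather than identical.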

For the demo I’ve used SMAA T2x, which does a very good job of dealing with shader aliasing while avoiding pre-resolve tone mapping. The demo shows the average/minimum/maximum frame rate after running the intro, which hopefully will make it useful for benchmarking GPUs.
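As an aside on those counters, here is a hypothetical sketch of how min/average/max frame rates are typically derived from per-frame times (the demo’s actual implementation is not shown here); note the average should come from total time, not from averaging per-frame FPS values:

```python
def fps_stats(frame_times_ms):
    """Min/average/max FPS from a list of per-frame times in milliseconds.
    The average is total frames over total time (not a mean of FPS values)."""
    fps = [1000.0 / t for t in frame_times_ms]
    avg = 1000.0 * len(frame_times_ms) / sum(frame_times_ms)
    return min(fps), avg, max(fps)
```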

I think there is still a lot of work to do; probably the most important task is rendering realistic facial hair. It would be a dream come true if my skin research helped to improve the rendering of humans in games: I truly believe that more realistic characters will inevitably lead to deeper storytelling and more emotionally-driven games.

Links:

  • Precompiled demo [47.7 MB]: Download / Torrent — Hit space to skip the intro and go to the interactive part
  • Source code: GitHub
  • Blu-ray quality movie [693 MB]: Vimeo Mirror / Download / Torrent — On Vimeo, look for “Download this video”

The 3D head scan used for this demo was obtained from Infinite Realities. Special thanks to them!


SMAA v2.7 released and EUROGRAPHICS presentation

January 17, 2012

The full source code of SMAA has been finally released, including SMAA S2x and 4x!
www.iryoku.com/smaa/#downloads

Check out the subpixel features section of the movie to see the new modes in action (or download the precompiled binary).

We are also happy to officially announce that all the nuts and bolts of the technique will be presented at EUROGRAPHICS 2012, in Cagliari (Italy). The paper is now much easier to follow and has updated content. If you tried to read the technical report and failed to deeply understand all the details, we definitely encourage you to give the EUROGRAPHICS version a try, as it does a much better job of explaining the core concepts and implementation details.

We’d like to thank the whole community that is supporting SMAA, and give special thanks to Andrej Dudenhefner, whose hard work made InjectSMAA possible, allowing SMAA 1x to be used in a lot of already published games. But bear in mind that SMAA 1x is just one third of the whole technique!

And to wrap up this post, I would like to revisit some of the design decisions and key ideas of our technique, which were not completely understood in the past. I clarified them in a Beyond3D thread, but I would like to give them another round here.

The main design decision behind SMAA is to be extremely conservative with the image. SMAA tries to analyze the possible patterns that can happen for a certain pixel, and favors no filtering when the filtering decision is not clear. As our paper shows, this is extremely important for accurately recovering subpixel features in the S2x, T2x and 4x modes. Additionally, super/multisampling will cover these cases, yielding smooth results even when the MLAA component is not filtering the pixel.

Furthermore, the MLAA component is enhanced, not only by improving the accuracy of the searches without introducing dithering or performance penalties, but also by introducing a stronger edge detection and extending the types of patterns detected, as shown in this example from Battlefield 3 (to easily compare the images, we recommend opening them in browser tabs and keeping them at their original resolution):

[Image comparison from Battlefield 3: SMAA vs. MLAA vs. FXAA]
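To give a feel for what the edge detection stage decides, here is a minimal luma-based sketch in the spirit of SMAA’s first pass. The threshold and luma weights are illustrative defaults, and the real SMAA.h additionally performs local contrast adaptation, which this toy version omits:

```python
import numpy as np

THRESHOLD = 0.1  # illustrative; SMAA.h exposes a similar tunable threshold

def luma(rgb):
    """Scalar luma from an (H, W, 3) RGB image (Rec. 709-style weights)."""
    return rgb @ np.array([0.2126, 0.7152, 0.0722])

def detect_edges(img):
    """Flag, per pixel, a left edge and a top edge when the luma delta
    against that neighbor exceeds the threshold. Returns (H, W, 2) bools."""
    L = luma(img)
    edges = np.zeros(L.shape + (2,), dtype=bool)
    edges[:, 1:, 0] = np.abs(L[:, 1:] - L[:, :-1]) > THRESHOLD  # left edges
    edges[1:, :, 1] = np.abs(L[1:, :] - L[:-1, :]) > THRESHOLD  # top edges
    return edges
```

Pixels where neither flag fires are left untouched, which is exactly the conservative behavior described above.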

Finally, SMAA is not devised as a full MSAA replacement. Instead, the core idea behind it is to take the strengths of MSAA, temporal SSAA and MLAA, and combine them into a very fast and robust technique where each component backs up the limitations of the others, delivering great image quality in demanding real-time scenarios.


SMAA T2x source code released

October 29, 2011

We’re thrilled to announce the release of the SMAA T2x source code!

We joined forces with Tiago Sousa from Crytek to deliver a very mature temporal antialiasing solution. It has been integrated into CryEngine 3; check out the SMAA demo movie.

The goal of SMAA is to more accurately match the results of supersampling, by faithfully representing subpixel features, and by solving other common problems of filter-based antialiasing. This reduces the flickering seen in complex scenes.
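The temporal half of T2x can be pictured as blending the current, subpixel-jittered frame with the previous resolved frame. Below is a toy scalar model; the real SMAA T2x also reprojects the history with velocity buffers to handle motion, which this sketch ignores:

```python
def temporal_resolve(current, history, blend=0.5):
    """One resolve step: with blend = 0.5 and the history holding the previous
    frame, this is the plain average of two alternately jittered frames."""
    return blend * current + (1.0 - blend) * history

def taa_sequence(samples, blend=0.5):
    """Fold the resolve over a whole sequence of jittered samples; the
    exponential history weights recent frames more heavily."""
    history = samples[0]
    for s in samples[1:]:
        history = temporal_resolve(s, history, blend)
    return history
```

For a pixel whose two alternately jittered samples differ, the resolved value settles into a bounded oscillation around their average instead of flickering at full amplitude, which is the temporal stability the mode relies on.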

On the other hand, we have put a lot of effort into simplifying the usage of our shader; the source code is now reduced to a single SMAA.h header and two textures, with very detailed comments and instructions. We hope this will ease integration into game engines. The feedback so far has been very positive, with a Java/Ruby programmer with no experience in graphics integrating SMAA into Oblivion (using OBGE) in just a few hours.


SMAA: Enhanced Subpixel Morphological Antialiasing

August 8, 2011

We are proud to announce the evolution of Jimenez’s MLAA!

SMAA, Subpixel Morphological Antialiasing

It solves the key weaknesses of MLAA by introducing the following features:

  • Improved pattern searches
  • Improved pattern detection
  • Diagonal handling
  • Avoiding the general roundness introduced by MLAA
  • And probably the most important: subpixel features.

We’ve decided to release it as a technical report, available here:
SMAA: Enhanced Subpixel Morphological Antialiasing

Please note that some information is outdated (for example the timings of FXAA and SRAA, as they are now faster); we will be updating it in the next few weeks. The source code builds on top of Jimenez’s MLAA v1.5, so we will be posting patches with each improvement we added, with all the nuts-and-bolts details, so stay tuned!

We’ll also be talking a little about it in the SIGGRAPH course Filtering Approaches for Real-Time Anti-Aliasing; we invite you to come by!

Project Page

Teasing our Real-Time Skin Rendering Advances

July 21, 2011

These last weeks we’ve been researching skin rendering a little bit more, and made a really interesting discovery that allows our shader to run in only two passes (like any separable convolution). The results are not exactly the same, but visually they are quite close! We hope it will be much more Xbox 360 EDRAM-friendly than our previous six-pass approach, and the implementation is now easier than ever.

The SSS effect is now fully customizable, including the generated gradient colors, the per-channel filter strength and more. We also set up various defines that allow customizing the quality, making it possible to adapt the shader to diverse hardware targets. The shader takes 0.27 ms at 720p and 1.69 ms at 1080p at the lowest and medium settings respectively, on a GeForce GTX 295 (we used the medium setting for these shots). We are releasing the source code privately to interested game developers, with a public release scheduled for Q4 2011.

I recommend looking at the images at their native resolution to better appreciate the details.

[Image comparisons: Without SSS vs. With SSS (two shots)]

I’ll be talking about this at SIGGRAPH 2011, in the session CG in Europe (Wednesday the 10th from 11:30 to 12:30, in the International Center, Harbour Concourse). If you are attending SIGGRAPH 2011, I invite you to come by!

The 3D head scan used for the images was obtained from Infinite Realities.


Jimenez’s MLAA featured on Digital Foundry and GamesIndustry.biz

July 16, 2011

Links to the articles:
The Future of Anti-Aliasing (Digital Foundry)
MLAA heads for 360 and PC (GamesIndustry.biz)


Torque 3D 1.1 features Jimenez’s MLAA

May 31, 2011
[Teaser image: No AA vs. MLAA]

We are proud to announce that our GPU MLAA implementation (Jimenez’s MLAA) will be featured in the upcoming version 1.1 of Torque 3D!

Here are some sample screenshots that Eric Preisz kindly sent us (to easily compare no antialiasing with MLAA, we recommend opening the images in browser tabs and keeping them at their original resolution):

[Four screenshot comparisons: No AA vs. MLAA]

This close-up shows the quality of the gradients produced by MLAA, which is one of its strongest points compared with MSAA:

[Close-up comparison: No Antialiasing vs. MLAA]

We encourage you to take a look at the Torque 3D demo to see the whole thing in motion. We believe you will be convinced that MLAA has excellent temporal stability!

Images courtesy of GarageGames from the Torque 3D game engine.


Screen-Space Subsurface Scattering

April 6, 2011

(Image rendered in real-time)

The new generation of game engines is out there! They are full of new rendering techniques, including classic topics like shadows, water rendering and post-processing, as well as more recent techniques such as deferred shading and post-processing antialiasing. But there is a rendering topic that has been dormant for a few years and is, at last, being awakened:

Subsurface Scattering

Unreal Engine 3 features Subsurface Scattering (SSS), and so do CryEngine 3 and Confetti RawK. Specifically, these last two engines use what is called Screen-Space Subsurface Scattering (update: it seems Unreal Engine 3 also uses SSSSS), an idea we devised for the first book in the GPU Pro series two years ago. Given the importance of human beings in storytelling, the inclusion of some sort of subsurface scattering simulation will be, in my opinion, the next revolution in games. Bear with me and discover why.

Let’s start the story from the beginning. What is SSS? Simply put, SSS is a mechanism of light transport that makes light travel inside of objects. Usually, light rays are reflected from the same point where the light hits the surface. But in translucent objects, which are greatly affected by SSS, light enters the surface, scatters inside of the object, and finally exits all around the incident point. So, making a long story short, light is blurred, giving a soft look to translucent objects: wrinkles and pores are filled with light, creating a less harsh aspect; reddish gradients can be seen on the boundaries between light and shadows; and last but not least, light travels through thin slabs like ears or nostrils, coloring them with the bright and warm tones that we quickly associate with translucency.

Now you may be thinking, why should I care about this SSS thing? Every year, advances in computer graphics allow offline rendering to get closer to photorealism. In a similar fashion, real-time rendering is evolving, trying to squeeze every resource available to catch up with offline rendering. We all have seen how highly detailed normal maps have improved the realism of character faces. Unfortunately, the usage of such detailed maps, without further attention to light and skin interactions, inevitably leads the characters to fall into the much-dreaded uncanny valley. The next time you see a detailed face in a game, take your time to observe the pores, the wrinkles, the scars, the transition from light to shadows, the overall lighting… Then, ask yourself the following question: is what you see skin… or… maybe a piece of stone? If you manage to abstract the colors and shapes you observe, you will probably agree with me that it resembles a cold statue instead of a soft, warm and translucent face. So, it’s time to take offline SSS techniques and bring them to the real-time realm.

Some years ago, Henrik Wann Jensen and colleagues came up with a solution to the challenging diffusion theory equations, which made practical offline subsurface scattering renderings possible. His technology was used for rendering Gollum’s skin in The Lord of the Rings, for which Henrik received a technical Oscar (awarded for the first time in history). This model evolved to accurately render complex multilayered materials with the work of Craig Donner and Henrik Wann Jensen, and was even translated to the GPU by Eugene d’Eon and David Luebke, who managed to obtain photorealistic skin images in real time for the first time. Unfortunately, it took all the processing power of a GeForce 8800 GTX to render a single head.

That was when Screen-Space Subsurface Scattering was born, aiming to make skin rendering practical in game environments. Among its multiple advantages, I would like to mention its ability to run the shader only on the visible parts of the skin, and the capability to perform calculations in screen resolution (instead of texture resolution). Subsurface scattering calculations are done at the required resolution, which can be seen as an SSS level of detail. The cost depends on the screen-space coverage: if the head is small, the cost is small; if the head is big, the cost will be bigger.

The idea is similar to deferred shading. The scene is rendered as usual, marking skin pixels in the stencil buffer. After that, skin calculations are executed in screen space for each marked skin pixel, trying to follow the shape of the surface, which is inferred from the depth buffer. This allows rendering photorealistic crowds of people at minimal cost. One thing I love about our approach is that, even given the daunting mathematical complexity behind skin rendering theory, in the end it is reduced to executing this simple post-processing shader only six times (three times horizontally and three vertically):

SSS Shader
// Resources and interpolants assumed by this excerpt:
Texture2D colorTex;        // lighting to be blurred; alpha modulates the blur width
Texture2D depthTex;        // linear depth
SamplerState PointSampler;
SamplerState LinearSampler;
float correction;          // depth-falloff constant (see below)
struct PassV2P {           // vertex-to-pixel interpolants (names assumed)
    float4 svPosition : SV_POSITION;
    float2 texcoord   : TEXCOORD0;
};

float4 BlurPS(PassV2P input, uniform float2 step) : SV_TARGET {
    // Gaussian weights for the six samples around the current pixel:
    //   -3 -2 -1 +1 +2 +3
    float w[6] = { 0.006,   0.061,   0.242,  0.242,  0.061, 0.006 };
    float o[6] = {  -1.0, -0.6667, -0.3333, 0.3333, 0.6667,   1.0 };

    // Fetch color and linear depth for the current pixel:
    float4 colorM = colorTex.Sample(PointSampler, input.texcoord);
    float depthM = depthTex.Sample(PointSampler, input.texcoord).r;

    // Accumulate center sample, multiplying it with its gaussian weight:
    float4 colorBlurred = colorM;
    colorBlurred.rgb *= 0.382;

    // Calculate the step that we will use to fetch the surrounding pixels,
    // where "step" is:
    //     step = sssStrength * gaussianWidth * pixelSize * dir
    // The closer the pixel, the stronger the effect needs to be, hence
    // the factor 1.0 / depthM.
    float2 finalStep = colorM.a * step / depthM;

    // Accumulate the other samples:
    [unroll]
    for (int i = 0; i < 6; i++) {
        // Fetch color and depth for the current sample:
        float2 offset = input.texcoord + o[i] * finalStep;
        float3 color = colorTex.SampleLevel(LinearSampler, offset, 0).rgb;
        float depth = depthTex.SampleLevel(PointSampler, offset, 0).r;

        // If the difference in depth is huge, we lerp color back to "colorM":
        float s = min(0.0125 * correction * abs(depthM - depth), 1.0);
        color = lerp(color, colorM.rgb, s);

        // Accumulate:
        colorBlurred.rgb += w[i] * color;
    }

    // The result will be alpha blended with current buffer by using specific 
    // RGB weights. For more details, I refer you to the GPU Pro chapter :)
    return colorBlurred;
}
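To experiment with this logic outside a GPU, the same pass can be reproduced on the CPU. Here is a NumPy sketch over a 1-D row of pixels, mirroring the weights, the 1/depth step scaling and the depth-aware lerp of the shader above (the `correction` default is an arbitrary placeholder, not a value from the demo, and the alpha modulation is omitted):

```python
import numpy as np

# Offsets and Gaussian weights from the shader above (center weight 0.382).
OFFSETS = np.array([-1.0, -0.6667, -0.3333, 0.3333, 0.6667, 1.0])
WEIGHTS = np.array([0.006, 0.061, 0.242, 0.242, 0.061, 0.006])

def blur_pass_1d(color, depth, step, correction=800.0):
    """One blur pass along a row: color is (N, 3), depth is (N,) linear depth,
    step plays the role of sssStrength * gaussianWidth * pixelSize."""
    n = len(depth)
    out = color * 0.382  # center sample times its Gaussian weight
    for w, o in zip(WEIGHTS, OFFSETS):
        # The sampling step shrinks with distance (1/depth), as in the shader.
        idx = np.clip(np.round(np.arange(n) + o * step / depth).astype(int), 0, n - 1)
        sample = color[idx]
        # Lerp back to the center color where the depth difference is large.
        s = np.minimum(0.0125 * correction * np.abs(depth - depth[idx]), 1.0)[:, None]
        out += w * (sample * (1.0 - s) + color * s)
    return out
```

Running it on a uniform row returns the input unchanged (the weights sum to one), while a depth discontinuity suppresses bleeding across it, which is the visible effect of the lerp.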

The crusade for photorealistic skin rendering does not end here for me. We have been researching the rendering of very fine skin properties, including translucency, wrinkles, and even color changes due to emotions and other conditions, work that was presented at ACM SIGGRAPH Asia 2010. I will be commenting on each of these projects, so stay tuned!

I truly believe we are getting closer and closer to overcoming the uncanny valley. In the very near future, I think we will see astonishing skin renderings in games, which will make characters more human. This will lead to stronger connections between their emotions and those of the players, allowing storytellers to reach the gamers' feelings and, ultimately, their hearts.

Project Page