
10 July, 2016

SIGGRAPH 2015: Notes for Approximate Models For Physically Based Rendering

This is a (hopefully temporary) hosting location for the course notes Michal Iwanicki and I drafted for our presentation at the Physically Based Shading course last year.

I'm publishing them here because they were mentioned a lot in our on-stage presentation; in fact, we meant the talk mostly as a "teaser" for the notes, but we still haven't been able to bring them out of the "draft" stage, despite the effort of everyone involved (us and the course organizers, to whom goes our gratitude for the hard work of making such a great event happen).

It also doesn't help that in an effort to show an overall methodology, we decided to collate more than a year of various research efforts (which happened independently, for different purposes) into this big document. I still have to work more on my summarization skills.

06 July, 2016

How to spot potentially risky Kickstarters. Mighty No.9 & PGS Lab

This is really off-topic for the blog, but I've had so many discussions about different gaming related Kickstarters that I feel the need to write a small guide. Even if this is probably the wrong place with the wrong audience...

Let's be clear: this is NOT going to be about how to make a successful Kickstarter campaign; in fact, I'm going to use two examples (one a past KS, and one a campaign that is still open as I write) that are VERY successful. It's NOT even going to be about how to spot scams, and I can't say that either example is one.

But I do want to show how to evaluate risks, and when it's best to apply a good dose of skepticism, because it seems that a lot of people get caught up in the "hype" for given products and end up regretting their choices.

The two examples I'm mostly going to use are the Mighty No.9 and PGS Lab campaigns.
I could have picked others, but these came to mind. It's not a specific critique of these two though; I know there are lots of people enjoying Mighty No.9, and I wish the best to PGS Labs, and I hope they'll start by addressing the points below and proving my doubts unfounded.

The Team

This is absolutely the most important aspect, and it's clear why. On Kickstarter you are asked to give money to strangers, to believe in them, their skills and their product. 
Would you, in real life, give away a substantial amount of money to people, for an investment, without knowing anything about them? I doubt it.

So when you see a project this successful...


...your first thought must be: these guys must be AMAZING, right?


I kid you not, that's the ONLY information on the PGS Lab team. They have a website, but there is ZERO information on them there as well.


From their (over-filtered and out-of-sync) promo video, we learn the name of one guy...


"We have brought together incredible Japanese engineers and wonderful industrial designers". A straight quote from the video, the only other mention of the team. No names, no past projects, no CVs. But they are "wonderful", "incredible" and "Japanese", right?

This might be the team. Might be buddies of the guy in the middle...
For me, this is already a non-starter. But it seems mine is not a popular point of view...

The team?

So what about Mighty No.9 then? Certainly, Inafune has enough of a CV... And he even had a real team, right? He even did the bare minimum and put the key people on the Kickstarter page...



Or did he? Not so quickly...


This is the first thing I noticed in the original campaign. Inafune has a development team (Comcept), but it seems that for this game he intended to outsource the work.

Unfortunately, this is not an unusual practice: it seems that certain big names in the industry are using their celebrity to easily raise money for projects they then outsource to third-party developers.



Igarashi, for Bloodstained, did even "worse". Not only is the game itself outsourced, but so is the campaign, including the rewards and merchandise. In fact, if you look at the KS page, you'll notice some quite clashing art styles...


...I suspect this was due to the fact that different outsourcers worked on different parts of the campaign (concept art vs rewards/tiers).

Let's be clear: per se this is not a terrible thing. Both Igarashi and Inafune used Inti Creates as their outsourcing partner, a studio with plenty of experience with 2D scrollers, which means the end product might turn out great (in fact, the E3 demo of Bloodstained looks at least competent, if not exceptional)... But it shows, to me, a certain lack of commitment.

People think that these "celebrity" designers are putting their careers on the line against the "evil" publishers that won't fund their daring titles (facepalm), while they are really just running a marketing campaign.

This became extremely evident for Inafune in particular, as he rushed to launch a (luckily disastrous... apparently you can't fool people twice) second campaign in the middle of Mighty No.9's production, revealing his hand and how little commitment he had to the title.

The demo: demonstrating skills and commitment

Now, once you have the team down, you want to evaluate their skills. Past projects surely help, but what helps even more is a demo, a work-in-progress version of the product.

It's hard enough to deliver a new product even when you are perfectly competent. I've worked on games made by experienced professionals that just didn't end up making it, and I've backed Kickstarters that failed to deliver even though they were just "sequels" to products a given company was already selling... So you really shouldn't settle for anything less than concrete proof.

How do our Kickstarters fare in terms of demos?


PGS Labs shows a prototype. GREAT! But wait...


Oh. So the prototype is nothing more than existing hardware, disassembled and reassembled in a marginally different shape. In fact, you can see the PCBs of the controller they used: a joypad for tablets which they just opened, desoldering some buttons and moving them into a 3D-printed shell.

Well, this would be great if we were talking about modding, but it proves exactly NOTHING about their ability to actually -make- the hardware (my guess - but it's just a guess - is that in the best scenario they are raising money to look for a Chinese ODM that already has similar products in its catalog, and they won't really do any engineering themselves).

Of course, when it comes to the marketing campaigns of "celebrity designers", all you get is whatever is cheapest to make. They know they'll get millions anyway, so why not just get some outsourcers to paint some concept art...


It's really depressing to me how, by just creating a video with their faces, certain people can raise enormous amounts of money. And I know that there are lots of success stories, from acclaimed developers as well, but if you look at them, the pattern is clear: success comes from real teams of people deeply involved with the products, and with actual, proven, up-to-date skills in the craft.

So far, I'd say the projects of older, lone "celebrities" have -all- resulted in games that are -at best- OK. Have we ever seen a masterpiece come out of any of these? Dino Dini? Lord British?

Personally, as a rule of thumb, I'd rather give money to "real" indie developers, who in lots of cases really can't just go to a publisher, or even self-fund by borrowing from a bank, and who often make MUCH, MUCH better games through real passion, sacrifice, and eating lots of instant noodles, I assume...

The "gaming press"

What irks me a lot is that these campaigns are very successful because they feed on the laziness of news sites, where hype spreads through underpaid human copy-and-paste bots who just repeat the same stuff over and over again. It must really be a depressing job.

And even good websites, the ones I often go to for game critique and intelligent insight, seem woefully unequipped to discuss anything about production, money, or how the industry works.

I'm not sure if it's because gaming journalists are less knowledgeable about production (but I really doubt it) or if it's because they prefer to keep a low profile (but... these topics do bring "clicks", right?).

Anyhow. I hope at least this can help a tiny bit :)

02 July, 2016

Unity 101 - From zero to shaders in no time

Disclaimer:

I'm actually no Unity expert; I only started to look into it more seriously for a course I taught, but I have to say, right now it looks like one of the best solutions for a prototyping testbed.

This post is not meant as a rendering (engineering) tutorial; it's written for people who know rendering and want to play with Unity, e.g. to prototype effects and techniques.

Introduction:

I really liked Nvidia's FXComposer for testing out ideas, and I still do, but unfortunately that product has been deprecated for years. 

Since then I started playing with MJP's framework, adding functionality that I needed (and that he later added himself), and there are a couple of other really good frameworks out there by skilled programmers, but among the full-fledged engines, Unity seems to be the best choice right now for quick prototyping.

The main reason I like Unity is its simplicity. I can't even begin to get around Unreal or CryEngine, and I don't really care about spending time learning them. Unity, on the other hand, is simple enough that you can just open it and start poking around, which is really its strength. People often obsess too much over details of technology. Optimization and refinement are relatively easy; it's the experimental phase that we need to do quickly!

Unity basics:

There are really only three main concepts you need to know:

1) A project is made of assets (textures, meshes...). You just drag and drop files into the project window, and they get copied to a folder with a bit of metadata describing how to process them. All assets are hot-reloaded. Scripts (C# or JavaScript code) are assets as well!

2) Unity employs a scene-graph system (you can also directly emit draws, but for now we'll ignore that). You can drag meshes into the scene hierarchy and they will appear in both the game and editor views, and you can also create lights, cameras and various primitives.



The difference between the two is that the game view is seen through a game camera, while the editor camera can roam freely. In the game view you can change object properties (if you're paused), but these changes don't persist (they aren't serialized in the scene), while changes made in the editor view are persistent.



3) Unity uses a component system for everything. A C# script just defines a class (with the same name as the script file) which inherits from "MonoBehaviour" and can implement certain callbacks.
All the public class members are automatically exposed in the component UI as editable properties (and C# annotations can be used to customize that) and serialized/deserialized with the scene.



A component can be attached to any scene object (a camera, a mesh, a light...); it can access/modify the object's properties and perform actions at given times (before or after rendering, on update, on scene load, on component enable, when drawing debug objects, and so on and so forth...).



Components can freely change pretty much anything in the scene, as there are ways of finding objects by name, type and so on, and they can also create new objects. The performance characteristics of some of these operations are sometimes... surprising, and in real games you might need to cache/pool certain things, but for prototyping it's irrelevant.
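To make this concrete, here is a minimal sketch of a component (all names below are made up for the example): the public fields show up in the inspector and are serialized with the scene, the callbacks fire at the times mentioned above, and the script can freely poke at other objects.

```csharp
using UnityEngine;

// The class name has to match the script file name, i.e. this goes in Bobber.cs.
public class Bobber : MonoBehaviour
{
    // Public members are exposed as editable, serialized properties in the component UI.
    public float amplitude = 0.5f;

    // Annotations can customize the UI, e.g. turning a float into a clamped slider.
    [Range(0.1f, 10.0f)]
    public float frequency = 1.0f;

    private Vector3 startPosition;
    private GameObject marker;

    // Called when the component is enabled (e.g. on scene load, or when toggled in the editor).
    void OnEnable()
    {
        startPosition = transform.position;

        // Components can find other objects by name or type, and create new ones.
        Camera sceneCamera = Object.FindObjectOfType<Camera>();
        marker = GameObject.CreatePrimitive(PrimitiveType.Sphere);
        marker.transform.localScale = Vector3.one * 0.1f;
        if (sceneCamera != null)
            marker.transform.position = sceneCamera.transform.position + sceneCamera.transform.forward;
    }

    // Called once per frame; here we just move the object the component is attached to.
    void Update()
    {
        float offset = amplitude * Mathf.Sin(Time.time * frequency * 2.0f * Mathf.PI);
        transform.position = startPosition + Vector3.up * offset;
    }
}
```

Drag the script onto any object in the hierarchy (or use the "Add Component" button in the inspector) and it starts ticking.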

Shaders & Rendering:

On the rendering side, things are similarly simple. Perhaps the most complex aspect for someone unfamiliar with it is the shader system.

Like most engines, Unity has a shader system that allows for automatic generation of shader permutations (e.g. the forward renderer needs a permutation per light type and shadow setting), and it also needs to handle different platforms (it can cross-compile HLSL to GLSL).
It achieves this with a small DSL for shader description, "ShaderLab", into which the actual shader code is embedded.
Unity also has other ways of making shaders without touching HLSL, and a "surface shader" system that avoids writing the VS and PS explicitly, but these are not really that interesting for a rendering engineer, so I won't cover them :)

ShaderLab has functionality to set render state and declare shader parameters, with the latter automatically reflected in the Material UI when a material is bound to a given shader. I won't go into a detailed description of this system, because once you see a ShaderLab shader things should be pretty obvious, but I'll provide some examples at the end.

For geometry materials, the procedure is quite simple: you'll need a ShaderLab shader (.shader) asset and a material asset bound to it; then you can just assign the material to a mesh (drag and drop) and everything should work.
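Since ShaderLab properties simply become material parameters, a script can also tweak them at runtime. A small sketch (the "_Tint" and "_Glossiness" property names are just examples here; they have to match whatever the shader actually declares in its Properties block):

```csharp
using UnityEngine;

public class MaterialTweaker : MonoBehaviour
{
    private Material mat;

    void Start()
    {
        // Accessing .material creates a per-object copy of the material asset;
        // use .sharedMaterial to edit the asset itself.
        mat = GetComponent<Renderer>().material;
    }

    void Update()
    {
        // Property names must match the ones declared in the shader's ShaderLab "Properties" block.
        mat.SetColor("_Tint", Color.Lerp(Color.red, Color.blue, Mathf.PingPong(Time.time, 1.0f)));
        mat.SetFloat("_Glossiness", 0.5f);
    }
}
```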

Unity supports three rendering systems (as of now): VertexLit (which is really a forward renderer without multipass, handling up to eight lights per object - some parts of the Unity docs say this is deprecated, but it seems it's going to live on at least as a shader type), Forward (multipass, one light at a time - this rendering mode actually coexists with VertexLit) and Deferred (as in "shading"; there is also a legacy system that does "deferred lighting", but that one actually is deprecated).
The shader has to declare which system it's written for, and the way Unity passes the lighting information to the shader changes based on that.
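On the scripting side, the rendering path can also be overridden per camera (it can also be set globally in the player settings); a tiny sketch, with an arbitrary class name:

```csharp
using UnityEngine;

// Attach to a camera to force a specific rendering path for it.
public class UseDeferred : MonoBehaviour
{
    void Start()
    {
        GetComponent<Camera>().renderingPath = RenderingPath.DeferredShading;
    }
}
```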

For post-effect materials you'll need both a shader and a component. The component will be a C# script that gets attached to the camera and triggers rendering in the OnRenderImage callback. In the script one can programmatically create a material, binding it to the shader and setting its parameters, so there's no need for a separate material asset.
The rendering API exposed by Unity is really minimal, but it's super easy to create rendertargets and draw fullscreen quads. Unity automatically chains post-effects if there are multiple components overriding OnRenderImage, and the callback provides a source and a destination rendertarget, so the chain is completely transparent to the scripts.
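A minimal post-effect component sketch might look like the following (the shader field and the "_Intensity" parameter are hypothetical; any ShaderLab shader that samples _MainTex and exposes such a parameter would do):

```csharp
using UnityEngine;

// Attach to a camera; ExecuteInEditMode makes the effect visible in the editor view too.
[ExecuteInEditMode]
public class SimplePostEffect : MonoBehaviour
{
    // Assign the post-effect ShaderLab shader in the inspector (or look it up with Shader.Find).
    public Shader effectShader;

    [Range(0.0f, 1.0f)]
    public float intensity = 1.0f;

    private Material effectMaterial;

    void OnRenderImage(RenderTexture source, RenderTexture destination)
    {
        // Create the material programmatically; no separate material asset is needed.
        if (effectMaterial == null)
            effectMaterial = new Material(effectShader);

        // "_Intensity" is whatever parameter the (hypothetical) shader declares.
        effectMaterial.SetFloat("_Intensity", intensity);

        // Draws a fullscreen quad that reads "source" and writes "destination".
        Graphics.Blit(source, destination, effectMaterial);
    }
}
```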

For more advanced effects, there is support for creating and drawing meshes (including their vertex attributes), drawing immediate geometry (lines and so on, usually for debugging), and even doing "procedural draws" (draws with no mesh data attached, where vertices are assumed to be pulled from a buffer) and dispatching compute shaders.
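As a taste of the mesh API, here's a sketch that builds a single triangle at runtime (the class name is made up, and a material still has to be assigned to the renderer for the triangle to show up):

```csharp
using UnityEngine;

[RequireComponent(typeof(MeshFilter), typeof(MeshRenderer))]
public class ProceduralTriangle : MonoBehaviour
{
    void Start()
    {
        Mesh mesh = new Mesh();

        // Vertex attributes are plain arrays on the Mesh object.
        mesh.vertices = new Vector3[]
        {
            new Vector3(0, 0, 0),
            new Vector3(0, 1, 0),
            new Vector3(1, 0, 0)
        };
        mesh.uv = new Vector2[]
        {
            new Vector2(0, 0),
            new Vector2(0, 1),
            new Vector2(1, 0)
        };

        // Index buffer: three indices per triangle.
        mesh.triangles = new int[] { 0, 1, 2 };
        mesh.RecalculateNormals();
        mesh.RecalculateBounds();

        // Hand the mesh to the renderer on this object.
        GetComponent<MeshFilter>().mesh = mesh;
    }
}
```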

It's also possible to access the g-buffer when using the deferred renderer and to sample shadowmaps manually, but there is no provision for changing the way either is created, and no real access to any of the underlying graphics API (unless you write C++ plugins).

Last but not least, on PC Unity integrates with RenderDoc and Visual Studio for easy debugging, which is really a nice perk.

All this is best explained with code, so, if you care to try, -->here<-- is a bunch of fairly well commented (albeit very basic / mostly wrong in terms of rendering techniques) sample shaders I hastily made to learn Unity myself before I started teaching the course.