Snow Shader Tutorial with Unity

Let It Snow! How To Make a Fast Screen-Space Snow Accumulation Shader In Unity

Have you ever wondered how much time it takes to apply snow to all of the textures in your game? Probably a lot. We’d like to show you how to create an Image Effect (a screen-space shader) that will instantly change the season of your scene in Unity.

[Image: the same low-poly village scene in Unity, shown without and with the snow effect]

How does it work?

In the images above you can see two screenshots presenting the same scene. The only difference is that in the second one I enabled the snow effect on the camera. No changes to any of the textures have been made. How can that be?

The theory is really simple. The assumption is that there should be snow wherever a rendered pixel’s normal is facing upwards (ground, roofs, etc.). There should also be a gentle transition between the snow texture and the original texture where a pixel’s normal faces any other direction (pine trees, walls).

Getting the required data

For the presented effect to work, it requires at least two things:

  • Rendering path set to deferred (For some reason I couldn’t get forward rendering to work correctly with this effect. The depth shader was just rendered incorrectly. If you have any idea why that could be, please leave a message in the comments section.)
  • Camera.depthTextureMode set to DepthNormals

While the second option can easily be set by the image effect script itself, the first one can be a problem if your game is already using the forward rendering path.

Setting Camera.depthTextureMode to DepthNormals will allow us to read the screen depth (how far pixels are located from the camera) and normals (which direction they are facing).

Now, if you’ve never created an Image Effect before, you should know that these are built from at least one script and at least one shader. Usually this shader, instead of rendering a 3D object, renders a full-screen image out of the given input data. In our case the input data is the image rendered by the camera and some properties set up by the user.
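Here’s a minimal sketch of what the script side can look like. The class name matches the ScreenSpaceSnow script mentioned later in the article; the material field and its setup are my assumptions, chosen to match the shader code below:

    using UnityEngine;

    [ExecuteInEditMode]
    [RequireComponent(typeof(Camera))]
    public class ScreenSpaceSnow : MonoBehaviour
    {
        public Material material; // material using the snow shader below

        void OnEnable()
        {
            // ask the camera to render the depth-normals texture for us
            GetComponent<Camera>().depthTextureMode = DepthTextureMode.DepthNormals;
        }

        void OnRenderImage(RenderTexture src, RenderTexture dst)
        {
            // the shader needs this matrix to convert normals
            // from camera space to world space
            material.SetMatrix("_CamToWorld", GetComponent<Camera>().cameraToWorldMatrix);
            Graphics.Blit(src, dst, material);
        }
    }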

This is only the basic setup – it will not generate any snow for you yet. Now the real fun begins…

The shader

Our snow shader should be an unlit shader – we don’t want to apply any light information to it, since in screen-space there’s no light. Here’s the basic template:
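A sketch of such a template (the shader and property names here are my own):

    Shader "Custom/ScreenSpaceSnow"
    {
        Properties
        {
            _MainTex ("Texture", 2D) = "white" {}
        }
        SubShader
        {
            Cull Off ZWrite Off ZTest Always

            Pass
            {
                CGPROGRAM
                #pragma vertex vert_img
                #pragma fragment frag
                #include "UnityCG.cginc"

                sampler2D _MainTex;

                fixed4 frag (v2f_img i) : SV_Target
                {
                    // for now, just pass the rendered image through
                    return tex2D(_MainTex, i.uv);
                }
                ENDCG
            }
        }
    }

The vert_img vertex function and the v2f_img structure come from UnityCG.cginc – a common shortcut for image effect shaders that only need the screen UV.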

Note that if you create a new unlit Unity shader (Create->Shader->Unlit Shader) you get mostly the same code.

Let’s now focus only on the important part – the fragment shader. First, we need to declare all the data passed by the ScreenSpaceSnow script:
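In sketch form it can look like this (the _SnowTex and _BottomThreshold names are my assumptions; the other names appear later in the article):

    sampler2D _MainTex;                    // image rendered by the camera
    sampler2D _SnowTex;                    // tiling snow texture
    float _SnowTexScale;                   // snow texture scale factor
    sampler2D _CameraDepthNormalsTexture;  // depth + normals, set by Unity
    float4x4 _CamToWorld;                  // set by the ScreenSpaceSnow script
    half _BottomThreshold;                 // snow amount tuning
    half _TopThreshold;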

Don’t worry if you don’t know why we need all this data yet. I will explain it in detail in a moment.

Finding out where to snow

As I explained before, we’d like to put the snow on surfaces that are facing upwards. Since the camera is set to generate a depth-normals texture, we are now able to access it. That’s why there is a

    sampler2D _CameraDepthNormalsTexture;

declaration in the code. Why is it called that way? You can learn about it in the Unity documentation:

Depth textures are available for sampling in shaders as global shader properties. By declaring a sampler called _CameraDepthTexture you will be able to sample the main depth texture for the camera.

_CameraDepthTexture always refers to the camera’s primary depth texture.

Now let’s start with getting the normal:
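A sketch of this part of the fragment shader:

    float3 normal;
    float depth;
    // depth and the camera-space normal are packed together in this texture
    DecodeDepthNormal(tex2D(_CameraDepthNormalsTexture, i.uv), depth, normal);
    // convert the normal from camera space to world space
    normal = mul((float3x3)_CamToWorld, normal);
    // temporary: preview the normals as RGB colors
    return fixed4(normal, 1);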

The Unity documentation says that depth and normals are packed in 16 bits each. In order to unpack them, we need to call DecodeDepthNormal as seen above.

Normals retrieved this way are camera-space normals. That means that if we rotate the camera, the normals’ facing will also change. We don’t want that, and that’s why we have to multiply them by the _CamToWorld matrix set in the script before. It converts normals from camera to world coordinates, so they no longer depend on the camera’s orientation.

In order for the shader to compile, it has to return something, so I set up the return statement as seen above. To check whether our calculations are correct, it’s a good idea to preview the result.

[Image: normals rendered as RGB]

We’re rendering the normals as RGB. In Unity, Y faces the zenith by default. That means the green channel is showing the value of the Y coordinate. So far, so good!

Now let’s convert it to a snow amount factor.

We should be using the G channel, of course. This may be enough, but I like to push it a little further to be able to configure the bottom and top thresholds of the snowy area. It allows us to fine-tune how much snow there should be on the scene.
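A sketch of that remap, using the two threshold properties declared earlier:

    // how much the surface faces up, 0..1
    half snowAmount = normal.y;
    // remap: no snow below _BottomThreshold, full snow above _TopThreshold
    snowAmount = saturate((snowAmount - _BottomThreshold) / (_TopThreshold - _BottomThreshold));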

Snow texture

Snow may not look real without a texture. This is the most difficult part – how do you apply a texture to 3D objects if you have only a 2D image (we’re working in screen-space, remember)? One way is to find out each pixel’s world position. Then we can use its X and Z world coordinates as texture coordinates.
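Here’s a sketch of how this can be done; the variable names match the explanation below, but the exact derivation in the original shader may differ slightly:

    // reconstruct a viewport position from the screen UV and the 0..1 depth
    float3 vpos = float3(i.uv * 2 - 1, 1) * depth;
    // transform it with _CamToWorld to get a world position
    float3 wpos = mul((float3x3)_CamToWorld, vpos);
    // use the world XZ coordinates to sample the snow texture; the far plane
    // (_ProjectionParams.z) brings the coordinates back to a sane scale
    half3 snowColor = tex2D(_SnowTex, wpos.xz * _SnowTexScale * _ProjectionParams.z).rgb;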

Now here’s some math that is not the subject of this article. All you need to know is that vpos is a viewport position; wpos is a world position, received by multiplying the viewport position by the _CamToWorld matrix and brought to a valid world scale using the far plane (_ProjectionParams.z). Finally, we’re calculating the snow color using the XZ coordinates multiplied by the configurable _SnowTexScale parameter and the far plane to get sane values. Phew…

[Image: the tiling snow texture]

Merging it!

It’s time to finally merge it all together!
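A sketch of the final composition:

    // take the original rendered color and blend it with the snow color
    fixed4 col = tex2D(_MainTex, i.uv);
    col.rgb = lerp(col.rgb, snowColor, snowAmount);
    return col;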

Here we’re getting the original color and lerping from it to snowColor using snowAmount.

The final touch: let’s set the _TopThreshold value to 0.6:

Voila!

Summary

Here’s a full scene result. Looking nice?

[Image: the full snow-covered low-poly village scene]

Feel free to download the shader here and use it in your project!

The scene used as the example in this article comes from the Lowpoly Township Set. The effect was inspired by this particular redditor.

Debugging Web Services With Fiddler In Unity

In order to empower your Unity game with useful features like user accounts, leaderboards, achievements, and cloud saves, you are going to need a web service. Of course, you wouldn’t write one on your own, so most probably you’re thinking of using services like GameSparks or App42. If so, you should learn how to debug it.

Having a REST

If you’re not familiar with what REST is, now is the best time to acquire some new knowledge. It’s not difficult, and no, it’s not another language. It’s just an architecture – a set of rules to follow to make a good API. Thanks to REST, all API services look very familiar and are easy to learn. Here’s a good place to start.

Now that you know REST calls are nothing else but regular HTTP requests, you may find monitoring all the HTTP traffic between your game and the web service very useful. You may want to do this because:

  • This may be the only way to see the client-server communication
  • Client request may not be what you’ve expected it to be
  • Server response may tell you about other things than client library errors
  • Client library may have bugs that can be revealed in this way

Let’s be honest: you will encounter issues. How fast you deal with them depends on your debugging skills. If there’s a possibility to peek into the client-server communication, why not just do it?

Wireshark?

Most people, when asked about looking into client-server communication, think about Wireshark. It’s the easiest way to get your hands on the full communication, and while Wireshark does its job very well, I’d like to recommend something else for debugging.

Fiddler!

Telerik Fiddler is available for free for Windows, and at the time of writing there’s also an OS X beta version available.

What makes Fiddler so special? Mostly the fact that it makes debugging your HTTP traffic with its built-in tools so easy. It also has a simple user interface that is easy to understand and use. Requests and responses can be displayed as raw text, or formatted as JSON or XML if you expect that kind of data. On top of that, you can customize the request/response view to see the data you care about, and you can do it without any trouble.

Fiddler inspectors make the debugging experience really pleasant.

Do not confuse Fiddler with a packet sniffer. It does not listen on your network interface; instead, it installs itself as the default system proxy. This has its pros and cons. By doing so, it can easily decrypt HTTPS communication (yeah!), but on the other hand, not all applications accept the default system HTTP proxy settings. One of these applications is…

Unity

Of course, by “Unity” I also mean all apps running on the Unity engine. I cannot tell for sure why Unity does not work well with Fiddler, but I know how to make it work. There’s a great blog post about it written by Bret Bentzinger. The steps go as follows:

Windows

  1. Make sure Unity is not running
  2. Navigate to UNITY_INSTALL_DIR\Editor\Data\Mono\etc\mono\2.0
  3. Edit the machine.config file and inside <system.net> add the following:
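A sketch of the fragment to add (this assumes Fiddler is listening on its default port 8888 on the same machine):

    <defaultProxy>
      <proxy usesystemdefault="false"
             proxyaddress=""
             bypassonlocal="false" />
    </defaultProxy>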

OS X

  1. Make sure Unity is not running
  2. Locate the Unity application icon
  3. Right-click on it and choose “Show package contents”
  4. Navigate to Contents/Frameworks/mono/etc/mono/2.0
  5. Do step 3 from the Windows instructions

Important: Make sure to undo these changes after you’re done with debugging!

More about Fiddler

Did I help you make up your mind? If yes, you might want to see some more learning resources about Fiddler.

Happy debugging!

Choosing Between Forward or Deferred Rendering Paths in Unity

One of the most important Unity features is the ability to choose a rendering path. For those who are not very familiar with Unity, choosing (usually) between the forward and deferred rendering paths may be comparable to choosing between the “normal” and the “strange-looking and something’s broken” rendering methods. To understand better why there is more than one rendering path, first you need to understand the motivation behind it.

It’s all about lighting

Lights are expensive, mostly because a lot of calculations have to be done to find the final color of a pixel when there’s a light in range. In Unity, lights can be evaluated per-vertex, per-pixel, or as Spherical Harmonics (SH). In this article we will talk only about the first two.

[Image: per-pixel vs. per-vertex lighting on a low-poly sphere]

In per-pixel lighting, each pixel’s color is computed individually (as on the left). You can see that even though I use a low-poly sphere for this example, the lighting still makes it look round. If it weren’t for the edges, it’d be really hard to spot where all the vertices are. Then there’s per-vertex lighting. It makes one light calculation per vertex; all the other pixels between vertices get their color by regular color blending (without further light calculations). This is the cheapest method of lighting, and yeah… it looks cheap. (If you’re wondering where the pixel/vertex lighting switch is, it’s hidden in the Light component under the Render Mode option. Important forces the light to be a pixel light, Not Important makes it a vertex light, and Auto makes the strongest light a pixel light.)

It’s no secret that game developers love per-pixel lighting much more than per-vertex lighting. Yet it has a significant downside: each light causes an additional rendering pass for each object in its range. There’s a limit of four lights that can affect an object. What’s more, there’s also a limit on shadows – based on the Unity documentation, only one light can have shadows (for some reason I’ve managed to get two shadows in Unity 5.3.4, so I’m not really sure about this one).

Deferred rendering to the rescue!

There’s a technique that allows you to use as many lights as you want in your scene while keeping performance at a reasonable level. It does not limit the number of shadows, and it does not cause additional draw passes when scene objects are within light range (objects casting shadows are the exception). It’s called the Deferred Shading Rendering Path.

[Image: a scene lit by 4 lights using deferred lighting]

Why is it so different? Mostly because the models are rendered without lighting calculations, and the lights are applied to the rendered 2D image when the scene rendering is nearly done. Making changes at this stage is usually called doing something in screen-space; knowing that, we can say that lighting in deferred rendering is done in screen-space. To understand it better, let’s look at the Frame Debugger.

Scene rendering starts with rendering all geometries:

[Image: the opaque geometry pass in the Frame Debugger]

This is a flat image, so how will the graphics card know how to apply lights and shadows? Thanks to the depth buffer! You can think of the depth buffer as another image, hidden from you, that stores the information about how far from the camera each pixel is located. Represented as an image, it may look like this:

[Image: the depth buffer]

Depth information alone isn’t enough to figure out how light should be applied to a surface. We need at least one more thing – the orientation. Orientation in 3D space is usually represented by normals. The unusual thing is that along with the color buffer and the depth buffer, there is also a buffer with normals!

[Image: the normals buffer]

How can you tell that these are normals? It’s pretty easy! Just look at the Scene Gizmo.

[Image: the Scene Gizmo]

Do you see the color resemblance? The red cone (x) points to the left, and so do the left-facing faces in the previous image. Green (y) points to the top and blue (z) to the bottom-right (from this perspective). It all matches the colors of the faces from before.

Based on that information, lights and shadows can be rendered. It really doesn’t matter how many objects there are in your scene – everything gets done only on the final image.

After the lighting pass

The image above is an inverted version (1 – color) of the lighting pass. At the end, it is blended with the first opaque image to get the final result.

Which one should I choose?

After reading all of this you may be full of enthusiasm to use the new rendering path, but hold your horses! Deferred rendering is not a remedy for all the world’s problems. It has some…

Limitations

It would be too great to be true, wouldn’t it? There are some limitations.

First of all, deferred rendering does not allow us to render semi-transparent objects. That’s because if something semi-transparent exists on the scene, there’s no way to write down the depth and normals both for the object visible through the semi-transparent object and for that object itself. Unity handles this limitation by rendering semi-transparent objects using the forward rendering path at the end of the whole process. It works quite well – these objects can cast shadows – but unfortunately they are unable to receive shadows from other objects. They can also cause some unexpected issues that don’t occur when using forward rendering.

The second limitation is the lack of anti-aliasing support. The reason is similar to the issue with semi-transparent objects, but Unity does not try to work around it in any way. Instead, you can use screen-space AA algorithms (image effects), but the result may not look as good.

Another limitation is that you can use up to four culling masks. In the documentation you can read:

that is, your culling layer mask must at least contain all layers minus four arbitrary layers, so 28 of the 32 layers must be set. Otherwise you will get graphical artifacts.

And finally there’s no support for the Mesh Renderer’s Receive Shadows flag.

Requirements

If that’s not enough, deferred rendering works only on a limited set of graphics cards. When it comes to PCs, you can safely assume that all graphics cards not older than 10 years will support it. When it comes to mobile devices, you should assume nothing. But that’s not a big issue, because…

Performance

The most important thing is that deferred rendering will in most cases perform worse on mobile devices than forward rendering. That’s because of the additional passes that need to be done each frame. If you’re using only one light, it may not be worth it.

On the other hand, adding extra lights is quite cheap. In the worst-case scenario, performance drops linearly with the number of lights and, compared to forward rendering, it’s independent of the number of objects on the scene.

Cities: Skylines (made with Unity) uses the deferred rendering path. There are a lot of small lights in this game, and it still performs really well.

Resources

I hope that this article casts some light on which rendering path you should choose for your game. You may also be interested in these resources:

How to Integrate Steamworks with Unity Games

I believe that many of you have thought of publishing a game on Steam. It wouldn’t be surprising – Steam is a great distribution platform for PC and now also for Mac and Linux games. But Steam is not only about distribution. When you get approved by Valve, you gain access to something that may help you a lot with your game development. This little thing is called Steamworks.

Steamworks features

Here’s a list of some of the best-known Steamworks features:

  • Achievements – provide free grass-roots marketing for your application. As players unlock achievements, it exposes your product to their friends.
  • Error Reporting – provides dead-simple error collection so that you can quickly find and fix your most common bugs. With a few simple API calls, Steam will automatically collect the most common crash reports for the game or software. You can then review error reports on the error reporting page, which you can find from your application landing page in Steamworks.
  • Cloud Saves – free storage that gives players the ability to play where they choose, as well as the peace of mind that they won’t lose all the work they’ve put into your game. Cloud can also be used for software applications to store work-in-progress or special configuration settings.
  • Steam Workshop – a system of storing, organizing, and downloading user-created content uploaded through your application. This makes sharing custom levels, skins, or complete mods easy and user-friendly.
  • Other features to consider are stats, leaderboards, and multi-player matchmaking.

Once you accept the Steamworks SDK terms & conditions, you will get access to the official Steamworks SDK documentation.

Integrating Unity game with Steamworks

The Steamworks SDK is distributed as a native DLL file (*.so in the case of Mac and Linux). In order to make it work with Unity, you have to create a binding. Fortunately, such a binding already exists, and it is distributed as an easy-to-install unitypackage file!

I am of course talking about Steamworks.NET. It’s an open-source wrapper distributed under the MIT license (you’re free to use it even in commercial projects!). The good thing about Steamworks.NET is that its authors value API compatibility over simplicity. That means you only need a quick look at how it should be used, and once you’re familiar with the concept, all you need is the official Steamworks documentation. The downside of this approach is that callback setup needs one extra step, but it’s not a hassle.

Installation

To make Steamworks.NET work, you have to be a Steamworks developer and you need an AppID (this is just a number in the Steam database). At the time of writing, you can get one after passing Steam Greenlight or by making a custom deal with Valve.

When you have acquired an AppID, all you have to do is import the Steamworks.NET unitypackage file into your Unity project. At the time of writing, the current stable version is 7.0.0, but please use the installation page links to always get the latest version.

The Steamworks.NET package includes libraries for Windows, Mac, and Linux in x86 and x86_64 architectures. After importing it, you don’t need to add anything else to your project. Even the official Steam dll/so is included, so there are just two more steps to go.

After importing the package, a new file called steam_appid.txt will be created in your project root directory (the one that contains the Assets and Library folders). Open it in a text editor and replace 480 with your Steam AppID.

Finally, the last step: create a new empty game object in your scene and add the SteamManager script to it. There! Now you’re good to go!

Checking to make sure it works

Make sure that Steam is running. Then create a script like this:
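A minimal test script, along the lines of the Steamworks.NET getting-started example, can look like this:

    using UnityEngine;
    using Steamworks;

    public class SteamScript : MonoBehaviour
    {
        void Start()
        {
            // SteamManager.Initialized tells us that the Steam API
            // has been initialized successfully
            if (SteamManager.Initialized)
            {
                // ask Steamworks for the current user's display name
                string name = SteamFriends.GetPersonaName();
                Debug.Log(name);
            }
        }
    }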

Add this script to a new game object in your scene and hit the Play button. If everything is OK, you will see your Steam name in the Unity Editor console!

When something goes wrong, you will end up with an error message that may not tell you exactly what happened. If you’re working on Windows, you may want to get the DebugView application. Just run it before running your Unity game, and after the error is printed out, alt-tab to the DebugView window and see if there’s something more in there.

More information and getting help

You can learn more about how to get started (and how callbacks should be handled) on the Getting Started page of the Steamworks.NET documentation. If you ever feel lost, you can use the SteamworksDev discussion group. It’s invite-only, so you should contact Steam about getting access to this one. It is worth it!

If you ever feel lost, please leave a comment here or reach the Knights using our Facebook page.

How to Use Unity’s Resources Folder

Unity has several kinds of special folders, and one of them is the Resources folder. The simple concept of storing assets in it is well explained in the official documentation:

Generally, you create instances of assets in a scene to use them in gameplay but Unity also lets you load assets on demand from a script. You do this by placing the assets in a folder called Resources or a sub-folder (you can actually have any number of Resources folders and place them anywhere in the project). These assets can then be loaded using the Resources.Load function.

Still, the reason why we might want to use the Resources folder may be a little confusing. First you have to understand how the Unity build process works and how Unity is able to access game assets.

Unity build process

Before you build your game, you have to declare which scenes it consists of. All of this can be done in the Build Settings window.

[Image: the Build Settings window]

There are at least two reasons why Unity asks you to do this:

  • It needs to know what scene should be loaded first (the top scene)
  • It needs to know what assets should be included in your build (dependencies)

What are scene dependencies? They’re assets that are connected to the scene hierarchy in any way, usually as a component field.

The Unity Logo object contains a Sprite Renderer component that references the Unity Logo asset.

The dependency diagram may look like this:

[Diagram: two scenes and the assets they depend on]

In this case there are two scenes. Scene 1 is using Asset 1 and Asset 2. Scene 2 is using Asset 2 and Asset 3. What happens if you decide not to build Scene 2?

[Diagram: the same dependencies with Scene 2 excluded from the build]

Only Asset 1 and Asset 2 will be included in the build, since Asset 3 is referenced only by Scene 2, which is no longer part of the build. Thanks to this dependency tracking, Unity includes in your build only those assets which are actually used. Needless to say, you don’t have to worry about keeping unused assets in your project – they will not affect your build size in any way.

Override!

There’s a way to get around this process. If you put your assets into a Resources folder, they will always be included in your build. But be careful! You need a really good reason to do so!

As I said before, in most cases when you need to use an asset, you make a reference to it within a scene. It’s really easy to use any kind of attached asset this way. So why would you need to use an asset without keeping a reference to it? There may be several reasons, and each one depends on the specific needs of the project, but let’s look at one case that is quite common for most games.

When an asset is directly referenced from the scene, it will be loaded into memory before the scene is launched. Thanks to that, the player will not experience any frame drops related to asset loading (with small exceptions). The price is, of course, the time needed for these assets to load. Sometimes that may not be acceptable.

Example – loading screen with different backgrounds

Many game loading screens display random images to be less boring.

A loading screen is usually a scene too. Let’s think of a case when you want to display a random image in the background while your actual game level is loading. You’ve collected 15 images and you add them to the loading scene’s image rotation script. It works great, but when you play your game, you realize that your loading scene takes more time to load than you need to pass your actual game levels!

This is caused by the asset pre-loading mechanism and can easily be fixed using the Resources folder. First remove all the references to your textures from the scene. Then put your images into a Resources/LoadingImages directory like this:

[Image: loading screen textures placed in Resources/LoadingImages]

Then somewhere in the code you can use code like this:
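A sketch, assuming the 15 files are named image1 … image15:

    // pick one of the 15 images at random; the int version of Random.Range
    // excludes the upper bound, hence the +1
    int index = Random.Range(1, 15 + 1);
    Texture2D texture = Resources.Load<Texture2D>("LoadingImages/image" + index);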

Note that the integer version of Random.Range() returns a random number between the first argument (inclusive) and the second argument (exclusive) – that’s why there’s a +1.

If you need to attach this texture to an Image component, you can do it like this:
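A sketch, assuming image is a reference to a UnityEngine.UI.Image component and texture is the Texture2D loaded above:

    // an Image displays a Sprite, so wrap the loaded texture in one
    image.sprite = Sprite.Create(
        texture,
        new Rect(0, 0, texture.width, texture.height), // use the whole texture
        new Vector2(0.5f, 0.5f));                      // pivot in the center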

A word of caution

Use the Resources folder only when you really need to. Loading assets on demand can make your FPS rate drop, and having indirect dependencies makes your work much more difficult. It’s worth mentioning again that these assets will always be included in your build, even if you don’t use them. You have been warned!