Snow Shader Tutorial with Unity

Let It Snow! How To Make a Fast Screen-Space Snow Accumulation Shader In Unity

Have you ever wondered how much time it would take to apply snow to all of the textures in your game? Probably a lot. We’d like to show you how to create an Image Effect (a screen-space shader) that will immediately change the season of your scene in Unity.

3D model house village with trees in the background in Unity

How does it work?

In the images above you can see two screenshots presenting the same scene. The only difference is that in the second one I enabled the snow effect on the camera. No changes to any of the textures have been made. How can that be?

The theory is really simple. The assumption is that there should be snow wherever a rendered pixel’s normal is facing upwards (ground, roofs, etc.). There should also be a gentle transition between the snow texture and the original texture where the pixel’s normal faces any other direction (pine trees, walls).

Getting the required data

For the presented effect to work, it requires at least two things:

  • Rendering path set to deferred (For some reason I couldn’t get forward rendering to work correctly with this effect. The depth shader was just rendered incorrectly. If you have any idea why that could be, please leave a message in the comments section.)
  • Camera.depthTextureMode set to DepthNormals

While the second option can easily be set by the image effect script itself, the first one can cause a problem if your game is already using the forward rendering path.

Setting Camera.depthTextureMode to DepthNormals will allow us to read screen depth (how far pixels are located from the camera) and normals (facing direction).

Now, if you’ve never created an Image Effect before, you should know that these are built from at least one script and at least one shader. Usually, instead of rendering a 3D object, this shader renders a full-screen image out of the given input data. In our case the input data is the image rendered by the camera and some properties set up by the user.
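Here’s a minimal sketch of what such a script could look like. The property names (_SnowTex, _SnowColor, _BottomThreshold and so on) are my assumptions based on what the shader will need later in this article; your exact setup may differ:

    using UnityEngine;

    [ExecuteInEditMode]
    [RequireComponent(typeof(Camera))]
    public class ScreenSpaceSnow : MonoBehaviour
    {
        public Texture2D SnowTexture;
        public Color SnowColor = Color.white;
        public float SnowTextureScale = 0.1f;
        [Range(0, 1)] public float BottomThreshold = 0f;
        [Range(0, 1)] public float TopThreshold = 1f;

        // material created from the snow shader described below
        public Material EffectMaterial;

        void OnEnable()
        {
            // the effect needs screen-space depth and normals
            GetComponent<Camera>().depthTextureMode |= DepthTextureMode.DepthNormals;
        }

        void OnRenderImage(RenderTexture source, RenderTexture destination)
        {
            // pass the user-configurable data to the shader
            EffectMaterial.SetTexture("_SnowTex", SnowTexture);
            EffectMaterial.SetFloat("_SnowTexScale", SnowTextureScale);
            EffectMaterial.SetColor("_SnowColor", SnowColor);
            EffectMaterial.SetFloat("_BottomThreshold", BottomThreshold);
            EffectMaterial.SetFloat("_TopThreshold", TopThreshold);
            // camera-to-world matrix, used later to convert normals into world space
            EffectMaterial.SetMatrix("_CamToWorld", GetComponent<Camera>().cameraToWorldMatrix);

            // render the full-screen image with our shader
            Graphics.Blit(source, destination, EffectMaterial);
        }
    }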

This is only the basic setup; it will not generate snow for you yet. Now the real fun begins…

The shader

Our snow shader should be an unlit shader – we don’t want to apply any light information to it, since in screen space there’s no light. Here’s the basic template:
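Here’s a rough sketch of that template (close to what Unity generates, trimmed a bit; the Cull Off ZWrite Off ZTest Always line is a typical addition for image effects and an assumption on my part):

    Shader "Hidden/ScreenSpaceSnow"
    {
        Properties
        {
            _MainTex ("Texture", 2D) = "white" {}
        }
        SubShader
        {
            Cull Off ZWrite Off ZTest Always

            Pass
            {
                CGPROGRAM
                #pragma vertex vert
                #pragma fragment frag
                #include "UnityCG.cginc"

                struct appdata
                {
                    float4 vertex : POSITION;
                    float2 uv : TEXCOORD0;
                };

                struct v2f
                {
                    float2 uv : TEXCOORD0;
                    float4 vertex : SV_POSITION;
                };

                sampler2D _MainTex;

                v2f vert (appdata v)
                {
                    v2f o;
                    o.vertex = mul(UNITY_MATRIX_MVP, v.vertex);
                    o.uv = v.uv;
                    return o;
                }

                fixed4 frag (v2f i) : SV_Target
                {
                    // for now, just pass the rendered image through
                    return tex2D(_MainTex, i.uv);
                }
                ENDCG
            }
        }
    }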

Note that if you create a new unlit Unity shader (Create -> Shader -> Unlit Shader) you get mostly the same code.

Let’s now focus only on the important part – the fragment shader. First, we need to capture all the data passed in by the ScreenSpaceSnow script:
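Roughly, the shader will need declarations like these (a sketch; the names mirror the script sketch from earlier, plus the depth-normals texture that Unity provides globally):

    sampler2D _MainTex;
    sampler2D _CameraDepthNormalsTexture; // set globally by Unity when depthTextureMode includes DepthNormals
    float4x4 _CamToWorld;

    sampler2D _SnowTex;
    float _SnowTexScale;
    fixed4 _SnowColor;
    fixed _BottomThreshold;
    fixed _TopThreshold;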

Don’t worry if you don’t know why we need all this data yet. I will explain it in detail in a moment.

Finding out where to snow

As I explained before, we’d like to put snow on surfaces that are facing upwards. Since the camera is set to generate a depth-normals texture, we are now able to access it in the shader through the _CameraDepthNormalsTexture sampler declared above. Why is it called that way? You can learn about it in the Unity documentation:

Depth textures are available for sampling in shaders as global shader properties. By declaring a sampler called _CameraDepthTexture you will be able to sample the main depth texture for the camera.

_CameraDepthTexture always refers to the camera’s primary depth texture.

Now let’s start with getting the normal:
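A sketch of how that could look inside the fragment shader (the temporary return statement is there only to preview the normals):

    fixed4 frag (v2f i) : SV_Target
    {
        float3 normal;
        float depth;
        // depth and normals are packed together in _CameraDepthNormalsTexture
        DecodeDepthNormal(tex2D(_CameraDepthNormalsTexture, i.uv), depth, normal);
        // convert the normal from camera space to world space
        normal = mul((float3x3)_CamToWorld, normal);

        // temporary: preview the world-space normal as a color
        return fixed4(normal, 1);
    }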

The Unity documentation says that depth and normals are packed into 16 bits each. In order to unpack them, we need to call DecodeDepthNormal as seen above.

Normals retrieved this way are camera-space normals. That means that if we rotate the camera, the normals’ facing will also change. We don’t want that, and that’s why we have to multiply them by the _CamToWorld matrix set in the script earlier. It converts the normals from camera to world coordinates, so they no longer depend on the camera’s orientation.

In order for the shader to compile it has to return something, so I set up a temporary return statement as seen above. To see if our calculations are correct, it’s a good idea to preview the result.

RGB Rendering of Camera Space for Unity Shader Tutorial

We’re rendering the normal as an RGB color. In Unity, the Y axis points up by default, which means the green channel is showing the value of the Y coordinate. So far, so good!

Now let’s convert it into a snow amount factor.

We should be using the G channel, of course. This may be enough, but I like to push it a little further to be able to configure the bottom and top thresholds of the snowy area. This allows fine-tuning how much snow there should be in the scene.
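One way to turn the world-space normal into a snow amount with configurable thresholds (a sketch, not necessarily the original code):

    // 0 when the normal points sideways or down, 1 when it points straight up
    half snowAmount = normal.g;
    // remap using the configurable bottom and top thresholds
    half range = max(_TopThreshold - _BottomThreshold, 0.0001); // avoid division by zero
    snowAmount = saturate((snowAmount - _BottomThreshold) / range);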

Snow texture

Snow may not look real without a texture. This is the most difficult part – how do you apply a texture to 3D objects if you only have a 2D image (we’re working in screen space, remember)? One way is to find out the pixel’s world position. Then we can use the X and Z world coordinates as texture coordinates.

Now here’s some math that is not the subject of this article. All you need to know is that vpos is a viewport position, wpos is a world position obtained by multiplying the viewport position by the _CamToWorld matrix, and it’s converted into a valid world position by dividing by the far plane (_ProjectionParams.z). Finally, we’re calculating the snow color using the XZ coordinates multiplied by the configurable _SnowTexScale parameter and the far plane to get sane values. Phew…
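The original code isn’t reproduced here verbatim, but below is one self-consistent way to do it, assuming the depth decoded by DecodeDepthNormal is linear and normalized to the far plane, and using unity_CameraProjection for the camera’s projection matrix:

    // reconstruct a view-space position from the screen UV and the 0..1 linear depth
    // (note: this position is scaled down by the far plane distance)
    float3 vpos = float3((i.uv * 2 - 1) / unity_CameraProjection._11_22, -1) * depth;
    // rotate it into world space and add the camera position (also divided by the
    // far plane, _ProjectionParams.z, to keep everything at the same scale)
    float3 wpos = mul((float3x3)_CamToWorld, vpos) + _WorldSpaceCameraPos / _ProjectionParams.z;
    // use the world-space XZ coordinates as texture coordinates; multiplying by the
    // far plane brings them back to world units, _SnowTexScale controls the tiling
    half3 snowColor = tex2D(_SnowTex, wpos.xz * _SnowTexScale * _ProjectionParams.z).rgb;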

Unity Snow Texture for Unity3D shader tutorial

Merging it!

It’s time to finally merge it all together!
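A sketch of the final lines of the fragment shader:

    // the image rendered by the camera so far
    fixed4 col = tex2D(_MainTex, i.uv);
    // blend between the original color and the snow color
    col.rgb = lerp(col.rgb, snowColor * _SnowColor.rgb, snowAmount);
    return col;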

Here we’re getting the original color and lerping from it to snowColor using snowAmount.

The final touch: let’s set _TopThreshold value to 0.6:

Voila!

Summary

Here’s a full scene result. Looking nice?

Lowpoly Township Set on the Unity Asset Store

Low Poly 3D Snow Village Unity Asset Shader

Feel free to download the shader here and use it in your project!

The scene used as the example in this article comes from the Lowpoly Township Set. Inspired by this particular redditor.

Choosing Between Forward or Deferred Rendering Paths in Unity

One of the most important Unity features is the ability to choose a rendering path. For those who are not very familiar with Unity, choosing (usually) between the forward and deferred rendering paths may feel like choosing between a “normal” and a “strange looking and something’s broken” rendering method. To understand why there is more than one rendering path, you first need to understand the motivation behind it.

It’s all about lighting

Lights are expensive, mostly because a lot of calculations have to be done to find out the correct color of a pixel when there’s a light in range. In Unity, lights can be evaluated per-vertex, per-pixel, or as Spherical Harmonics (SH). In this article we will talk only about the first two.

pixel vs vertex lighting

In per-pixel lighting, each pixel color is computed individually (as on the left). You can see that even though I use a low-poly sphere for this example, the lighting still makes it look round. If it weren’t for the edges, it would be really hard to spot where all the vertices are. Then there’s per-vertex lighting, which makes one light calculation per vertex. All the other pixels between vertices get their color through regular color blending (without further light calculations). This is the cheapest method of lighting and yeah… it looks cheap. (If you’re wondering where the pixel/vertex lighting switch is, it’s hidden in the Light component under the Render Mode option: Important forces the light to be a pixel light, Not Important makes it a vertex light, and Auto makes the strongest light a pixel light.)
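The same switch can also be flipped from code if you ever need to; here’s a tiny (hypothetical) example:

    using UnityEngine;

    [RequireComponent(typeof(Light))]
    public class ForcePixelLight : MonoBehaviour
    {
        void Start()
        {
            // Important = ForcePixel, Not Important = ForceVertex, Auto = Auto
            GetComponent<Light>().renderMode = LightRenderMode.ForcePixel;
        }
    }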

It’s no secret that game developers love per-pixel lighting much more than per-vertex lighting. Yet it has a significant downside: each light causes an additional rendering pass for every object in its range. There’s a limit of four lights that can affect an object. What’s more, there’s also a limit on shadows – according to the Unity documentation, only one light can have shadows (for some reason I’ve managed to get two shadowed lights in Unity 5.3.4, so I’m not really sure about this one).

Deferred rendering to the rescue!

There’s a technique that allows you to use as many lights in your scene as you want while keeping the performance at a reasonable level. It does not limit the number of shadows, and it does not cause additional draw passes for scene objects within a light’s range (objects casting shadows are the exception). It’s called the Deferred Shading Rendering Path.

4 lights deferred lighting
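The rendering path is usually chosen project-wide in the Player Settings, but it can also be overridden per camera – either in the Camera inspector or from a simple (hypothetical) script like this:

    using UnityEngine;

    [RequireComponent(typeof(Camera))]
    public class UseDeferredShading : MonoBehaviour
    {
        void Start()
        {
            // override the project-wide rendering path for this camera only
            GetComponent<Camera>().renderingPath = RenderingPath.DeferredShading;
        }
    }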

Why is it so different? Mainly because most of the models are rendered without lighting calculations, and when the scene rendering is nearly done, lights are applied to the rendered 2D image. Making changes at this stage is usually called doing something in screen space. Knowing that, we can say that lighting in deferred rendering is done in screen space. To understand it better, let’s look at the Frame Debugger.

Scene rendering starts with rendering all geometries:

deferred opaque

This is a flat image, so how will the graphics card know how to apply lights and shadows? Thanks to the depth buffer! You can think of the depth buffer as another image that is hidden from you and that stores the information about how far from the camera each pixel is located. Represented as an image, it may look like this:


Depth information alone isn’t enough to figure out how light should be applied to a surface. We need at least one more thing – the orientation. Orientation in 3D space is usually represented by normals. The unusual thing is that along with the color buffer and the depth buffer, there is also a buffer with normals!


How can you tell that these are normals? It’s pretty easy! Just look at the Scene Gizmo.

scene gizmo

Do you see the color resemblance? The red cone (X) points to the left, and so do the left-facing surfaces in the previous image. Green (Y) points up and blue (Z) points to the bottom-right (from this perspective). It all matches the colors of the faces from before.

Based on that information, lights and shadows can be rendered. It really doesn’t matter how many objects there are in your scene – everything is done only on the final image.

After the lighting pass

The image above is an inverted version (1 – color) of the lighting pass. At the end it is blended with the first opaque image to get the final result.

Which one should I choose?

After reading all of this you may be full of enthusiasm to use the new rendering path, but hold your horses! Deferred rendering is not a remedy for all the world’s problems. It has some…

Limitations

It would be too good to be true, wouldn’t it? There are some limitations.

First of all, deferred rendering does not allow us to render semi-transparent objects. That’s because if something semi-transparent exists in the scene, there’s no way to write down depth and normals both for the objects visible through the semi-transparent object and for that object itself. Unity handles this limitation by rendering semi-transparent objects with the forward rendering path at the end of the whole process. It works quite well – these objects can cast shadows – but unfortunately they are unable to receive shadows from other objects. They can also cause some unexpected issues that don’t appear when using forward rendering.

The second limitation is the lack of anti-aliasing support. The reason is similar to the issue with semi-transparent objects, but Unity does not try to work around it in any way. Instead, you can use screen-space AA algorithms (image effects), but the result may not look as good.

Another limitation is that you can use up to four culling masks. In the documentation you can read:

that is, your culling layer mask must at least contain all layers minus four arbitrary layers, so 28 of the 32 layers must be set. Otherwise you will get graphical artifacts.

And finally there’s no support for the Mesh Renderer’s Receive Shadows flag.

Requirements

If that’s not enough, deferred rendering only works on a limited set of graphics cards. When it comes to PCs, you can safely assume that any graphics card not older than 10 years will support it. When it comes to mobile devices, you should assume nothing. But that’s not a big issue, because…

Performance

The most important thing is that deferred rendering will, in most cases, perform worse on mobile devices than forward rendering. That’s because of the additional passes that need to be done for each frame. If you’re using only one light, it may not be worth it.

On the other hand, adding extra lights is quite cheap. In the worst-case scenario, performance drops linearly with the number of lights and, compared to forward rendering, it’s independent of the number of objects in the scene.

Cities: Skylines (made with Unity) decided to use the deferred rendering path. There’s a lot of small lights in this game and it still performs really well.

I hope this article casts some light on which rendering path you should choose for your game.

[Tutorial] Implementing a Minimap in Unity

What does it take to create a minimap for a Unity game? You would probably be surprised to hear that it’s easier than you think, and that it doesn’t even require any programming skills! Here I will try to explain how to make one, step by step.

Basic minimap concepts

Minimap known from World of Warcraft

Minimaps (or radars) are known for displaying information about our surroundings. First of all, a minimap should be centered on the main character. Secondly, it should use readable symbols instead of real models, because minimaps are often quite small and the player wouldn’t recognize the information that the minimap is trying to present.

Most minimaps are circular, so we will also try to create one. Quite often you will find additional buttons and labels attached to it. That’s what we will try to recreate too.

Preparing the scene

Let’s start with adding something to the scene. I’ve created a new scene with Unity Chan (she will be the player) and two robots (as enemies).

Game view

Now let’s add a second camera. Just click GameObject -> Camera and rename it to Minimap Camera. Then make this camera a child of the Unity Chan model (so it will follow her) and move it 10 units above her head, looking down.

Minimap Camera Setup

For a good effect, set the Transform position to 0, 10, 0 and the rotation to 90, 0, 0. The camera should see something like this:

minimap camera preview
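If you’d rather not parent the camera to the character (for example, so it doesn’t inherit her rotation), a small follow script could do the same job. This is optional, and the names below are my own:

    using UnityEngine;

    public class MinimapFollow : MonoBehaviour
    {
        public Transform target;    // e.g. the Unity Chan transform
        public float height = 10f;  // how far above the target the camera floats

        void LateUpdate()
        {
            // stay above the target and keep looking straight down
            transform.position = target.position + Vector3.up * height;
            transform.rotation = Quaternion.Euler(90f, 0f, 0f);
        }
    }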

OK, but that’s not a minimap yet. If you launch your scene now, the picture from the camera above will be the only one rendered on the screen. We have to tell the game that we want the minimap to be a Unity UI element.

Rendering to Unity UI

To do that we will need a Render Texture. You can easily create one by choosing Assets -> Create -> Render Texture from the main menu. Rename it to Minimap Render Texture.

minimap render texture

Now select the Minimap Camera and in the Inspector assign Minimap Render Texture to the Target Texture field.

camera assign render texture minimap

If you try to launch the scene now, you will notice that the Minimap Camera picture is nowhere to be seen. The resulting image is now hidden in our newly created render texture.

Let’s now create a Canvas to put UI elements on. Choose GameObject -> UI -> Canvas and you will find one in your scene.

created canvas

The Canvas requires a Raw Image to display a Render Texture on it. Select GameObject -> UI -> Raw Image, rename it to Minimap Image, and in its inspector assign Minimap Render Texture to the Texture field.

raw image render texture

This results in a Unity UI element that now renders the Minimap Camera image in it!

raw minimap unity ui
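The same wiring can also be done from code, in case you want to create the render texture at runtime. A rough sketch (the field names are mine):

    using UnityEngine;
    using UnityEngine.UI;

    public class MinimapSetup : MonoBehaviour
    {
        public Camera minimapCamera;  // the Minimap Camera from the scene
        public RawImage minimapImage; // the Minimap Image on the Canvas

        void Start()
        {
            // create a render texture and route the camera's output into the UI
            var renderTexture = new RenderTexture(256, 256, 16);
            minimapCamera.targetTexture = renderTexture;
            minimapImage.texture = renderTexture;
        }
    }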

Now let’s make it round. For this purpose I made a simple mask texture:

minimap mask

Create a new Unity UI Image, add a Mask component to it, set our new mask texture as the Source Image in the inspector, and make Minimap Image a child of our Mask. (Tip: disable mipmaps for mask textures to get a better visual effect.)

minimap mask configuration

This setup will make our minimap look like this:

minimap masked circle

A white minimap on a white background will not look so good. Let’s add a border image over it:

minimap border

minimap with border

To make moving it around simpler, I’ve grouped everything under an empty game object called Minimap.

minimap game object

Finally, let’s move our new minimap to the top-right corner of the screen.

minimap top-right corner

Looks nice, doesn’t it? But it is still not a valid minimap; it’s just a camera rendering the game from the top. If you’re familiar with layers, then you most probably know what I will do next!

Fun with layers

We will need at least one additional layer. Go to Edit -> Project Settings -> Tags and Layers and add a new layer called Minimap.

layers minimap

Now let’s create three spheres. One will be blue and should be located in the same place where Unity Chan is standing. The best approach is to make this object a child of Unity Chan. Finally, make sure to set its Layer to Minimap.

unity chan blue sphere

Do the same for the two robots, but make their spheres red instead of blue!

robot red spheres

Now comes the best part! Select the Main Camera and make sure that its Culling Mask has the Minimap layer unset.


Then select the Minimap Camera and do the opposite. Leave only Minimap set and disable all the rest.

minimap camera layers

Now you can see something that looks quite like a valid minimap!


Final touches

You may want to make a few adjustments. First of all, let’s set the Minimap Camera’s Clear Flags to Solid Color and change its background color to light gray, so the minimap background contrasts better with the red and blue spheres.

minimap camera background color

Then you can add any other Unity UI elements to go along with your minimap. I will add an example label. And here’s our final result!

minimap final result

The minimap is configured in such a way that moving the character will update its position. If any robot moves too, that will be immediately visible.

minimap position change

And this concludes the Unity minimap tutorial! Feel free to ask any questions you may have in the comments. Don’t forget to subscribe to our newsletter for the latest Unity tips & tricks.

Myths and Facts of the Unity Game Engine

Unity is only for games

It’s a myth. Of course Unity has been created as a game engine, but it’s so flexible that it is successfully being used in other industries, such as architecture, medicine, and engineering. For an example, see 3RD Planet.

It is the largest online consumer event platform, showcasing the top tourist destinations in each country with Unity 3D.

Another example is the CoherentRx Suite.

CoherentRx is a suite of point-of-care patient education applications that combine real-time 3D anatomy models with HIPAA-compliant doctor-patient messaging. CoherentRx apps currently address ten medical specialties, from orthopedics to ophthalmology.

In the Unity Showcase there’s a separate category for non-game applications. Check it out yourself!

Unity is free

It’s a myth, but not entirely. You can download Unity for free. The free (Personal) version of Unity has all the features that Unity Professional has (with some small exceptions). When your game is ready, you can publish it and make money out of it! It’s just like that! But if at some point your company exceeds a turnover of $100,000, then you are obligated to purchase a Unity Professional license. It’s not too expensive at that point, because the cost is $1,500. Sounds really cheap when compared to $100,000, doesn’t it?

The difference between the free license and the pro license is that with the former, mobile and web games display a Unity logo for a few seconds on startup. It’s not a big deal, but as a professional game developer you may want to get rid of it at some point. Also, you aren’t allowed to use the Unity editor’s pro skin or the Cache Server.

You can only do small games with Unity

It’s a myth. The reason why there are so many small games created in Unity is that it is very indie-friendly. Unity does not constrain your game size in any way. You can create a clone of World of Warcraft if you really want to! Scenes can be loaded and merged at run-time, so the player won’t see any loading screens while playing. It’s also not true that Unity’s performance degrades when you have too many objects in your scene. Of course, you have to optimize it in a specific way, so it’s all about experience.

One of the greatest examples of a large and well-optimized Unity game is Cities: Skylines.

Unity is worse than Unreal Engine

It’s a myth, but it really depends on what your needs are (it’s not worse in general). Unity has always been compared to Unreal Engine, because the latter has always had the upper hand in the game development industry. Today the situation is a little different. While Unreal Engine was targeting PCs and stationary consoles, Unity took its chances with mobile devices. Unreal was always about big games with stunning visuals, but this approach made it more difficult to learn and use. Unity, on the other hand, is based on the Mono platform. Thanks to that, you can program your games using C# instead of C++, which is quite difficult to learn.


Unity GDC demo – Adam – Part 1

Today Unity is trying to take over the Unreal Engine market. The first steps were made with Unity 5’s graphics enhancements and the optimization of the scripting backend. At GDC 2016, Unity Technologies published a stunning real-time demo of what Unity 5.3.4 is capable of. From now on it will be more and more difficult to tell the difference between Unity’s and Unreal Engine’s graphics capabilities.

You don’t need programming knowledge to use Unity

It’s a fact. It’s easier when you have at least some programming knowledge, but you can easily build a complete game without it. One of the most popular editor extensions on the Asset Store is Playmaker. Playmaker allows you to build finite-state machines that drive your entire game logic, and it does it well! If you need a good reference, Dreamfall Chapters is a damn good one!

playmaker screen

Because of this fact, many people may consider Unity a toy rather than a serious tool for creating games. The truth is that Unity can be used by everyone, regardless of your skill level!

You cannot spawn threads in Unity

Again, it’s a myth. Many people confuse Unity coroutines with threads. Let’s get this straight: coroutines have nothing to do with threads. A coroutine is only a way to delay code execution without too much effort. It’s a great way of writing game code when some routines need to be executed in sequence over time (like animations), but everything is still done on a single thread.

Yet you can create threads in Unity. You will not find much about it in the official documentation, because there’s not much to be said. All you need to know is that the Unity API is not thread-safe – but that’s fine: do the heavy work on your thread and hand the results back to the main thread. To learn more about how to use threads in Unity, please read our article about threads.
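To illustrate the difference, here’s a rough sketch: a coroutine that runs on the main thread spread over frames, and a background thread that hands its result back to the main thread before any Unity API is touched (the names are mine):

    using System.Collections;
    using System.Threading;
    using UnityEngine;

    public class ThreadsAndCoroutines : MonoBehaviour
    {
        string resultFromThread;

        void Start()
        {
            // a coroutine: still the main thread, just spread over multiple frames
            StartCoroutine(BlinkEverySecond());

            // a real thread: fine for heavy computation, but it must NOT call the Unity API
            new Thread(() =>
            {
                Thread.Sleep(2000); // pretend this is an expensive calculation
                resultFromThread = "done";
            }).Start();
        }

        IEnumerator BlinkEverySecond()
        {
            var rend = GetComponent<Renderer>();
            while (true)
            {
                rend.enabled = !rend.enabled;
                yield return new WaitForSeconds(1f);
            }
        }

        void Update()
        {
            // Unity API calls happen here, back on the main thread
            if (resultFromThread != null)
            {
                Debug.Log("Background work finished: " + resultFromThread);
                resultFromThread = null;
            }
        }
    }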

All Unity games look the same

It’s a myth, of course. Every developer who decides to use one game engine or another asks themselves how that engine will help them and how it will constrain their ideas. Unity is quite interesting because it’s easy to learn and hard to master. Yet if you master it, you will realize that you can do almost anything with it! You can even create your own game engine within it! If you’re still wondering whether Unity constrains your creativity, stop right now. It doesn’t!

You need to know that many Unity components (like physics) can be replaced by anything you want. There’s no requirement that a Unity game use the components provided by Unity. This is a great deal if you have very specific needs.

Unity has a lot of bugs

It’s a fact. Since Unity 5, the developers have been rushing forward with new features, but at the cost of stability. At GDC 2016, current Unity CEO John Riccitiello announced that Unity will take the road of increasing the quality of its releases. At the time, Unity 5.3.4 had been released and Unity 5.4 had entered beta. Let’s hope for the best, because we all need a tool that is as stable as possible, and lately a serious fear of upgrading to new releases could be heard in the Unity community.

How to Use Multiple Cameras in Unity3D

Understanding the Importance of Using Multiple Cameras in Unity

From what I observe, many Unity users do not grasp the concept of using multiple Unity cameras in a single scene. “If I want to look from only one perspective, why do I need more than one camera?” Saying that it makes perfect sense for more than one camera to capture the scene from the same perspective makes it even more confusing. So why even bother? The reason is somewhat complex, but it’s really worth learning. It will help you easily create great visual effects that are hard to accomplish with only one camera.

What is the Unity camera?

Before we can continue, you have to understand what a Unity camera actually is. When Unity renders the scene, it needs a lot of information to be set up, but let’s simplify that list to make it easier to understand. Let’s consider:

  • List of objects to render
  • Camera’s perspective (position, rotation, scale, field of view, clipping etc.)

If you’re already experienced in that matter you might’ve noticed that I’m not speaking about matrices. Let’s just ignore math-related stuff for now.

The list of objects to render is a list of all objects in the scene, right? Wrong! Each camera renders only the objects visible to it (field of view, frustum culling) and those on the layers actually seen by the camera (the Culling Mask).

camera culling mask

The Culling Mask can be set to Everything, or you may set which layers should be seen. This is one of the things layers are for.

This camera sees everything.

This camera sees only the Default layer (ground) and the Red layer (red sphere).

The conclusion is that different cameras can render different objects. This is important information, even if you don’t know yet how to use it in practice. It also means that adding a second camera will not redraw your scene twice – only the objects visible to the second camera will be rendered. Knowing this, having multiple cameras render different layers is about as efficient as rendering all those layers with only one camera.

Let’s then answer the main question: a camera is an instruction to render a specific list of objects from a given perspective.

What do cameras render?

Wait, haven’t we just answered that question?! Well… not exactly. There’s a visible and an invisible part. What you can see is the resulting image (let’s call it the color buffer). And of course there’s a thing that you cannot see, called the depth buffer (also known as the z-buffer).

The depth buffer can be described as a screen-sized gray-scale image whose every pixel represents how close that pixel is to the camera (to be honest this is not 100% true, but let’s not think about the more complicated cases now). It is used by the GPU to decide whether a to-be-rendered pixel should be processed or rejected. As a result, pixels that are obstructed by other pixels are not visible (just like in the real world).

depth buffer

Camera order and clearing

Before rendering anything into the color buffer and the depth buffer, a camera can clear both buffers or only the depth buffer. Did you notice that the default Unity 5 scene camera clears the buffers to a Skybox?

clear flags skybox

There are some more options there:


  • Skybox replaces the color buffer with your skybox and completely clears the depth buffer
  • Solid color does the same, but the color buffer becomes a solid color
  • Depth only leaves the color buffer as it is, but clears the depth buffer
  • Don’t Clear doesn’t clear anything.

What will happen if we set the default camera’s Clear Flags to Don’t Clear? Well, the effect may be interesting (I moved the camera a little after entering Play mode).

camera don't clear

It looks like our sphere duplicated itself so many times that it turned into some kind of weird, rounded-pipes thing. Besides the fact that there’s still only one red sphere in the scene (note that the Blue layer is still not visible to the camera), the game image looks valid – there are no graphical artifacts of any kind. Yet we managed to create an effect of many duplicated objects with only one object.

This happened because the color buffer was not cleared between frames (colors rendered previously carried over to the next frame), and neither was the depth buffer. The depth buffer remembered that something had been rendered and kept this information when Unity rendered the next frame. When the sphere was about to be rendered behind an already rendered sphere image, the hidden pixels were discarded. The same thing applies when there are many objects in the scene rendering one after another.

If you still don’t understand what just happened, please stop reading now and try it yourself! Make a new scene, add an object, set the camera’s Clear Flags to Don’t Clear and move either your object or your camera.

What is it good for?

I assume that you don’t want this kind of effect in your game, so what’s the clearing good for? Let’s now try to create two cameras.

  • Blue Camera
    • Clear Flags: Skybox
    • Culling Mask: Default, Blue
    • Depth: 0
What Blue Camera sees.

  • Red Camera
    • Clear Flags: Don’t Clear
    • Culling Mask: Red
    • Depth: 1
What Red Camera sees.

There’s one new parameter: Depth. Depth defines the rendering order of the cameras. A camera with a lower depth is rendered before a camera with a higher depth.
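For reference, the same two-camera configuration could also be done from a script (a sketch using the layer names from this example):

    using UnityEngine;

    public class TwoCameraSetup : MonoBehaviour
    {
        public Camera blueCamera;
        public Camera redCamera;

        void Start()
        {
            // Blue Camera: renders first and clears everything
            blueCamera.clearFlags = CameraClearFlags.Skybox;
            blueCamera.cullingMask = LayerMask.GetMask("Default", "Blue");
            blueCamera.depth = 0;

            // Red Camera: renders second, on top of what is already there
            redCamera.clearFlags = CameraClearFlags.Nothing; // "Don't Clear" in the inspector
            redCamera.cullingMask = LayerMask.GetMask("Red");
            redCamera.depth = 1;
        }
    }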

Let’s see how Unity will render this scene step by step (again not 100% accurate, but it’s only to understand the process):

  • (Blue Camera context)
  • Color buffer is cleared to Skybox
  • Depth buffer is cleared
  • Plane (Default layer) and blue sphere (Blue layer) are rendered
  • (Red Camera context)
  • Nothing is cleared
  • Red sphere (Red layer) is rendered

As a result you get a scene that looks exactly as if it were rendered using a single camera:


So why bother? Let’s try one thing. Let’s switch Red Camera Clear Flags from Don’t Clear to Depth only:

depth only clear

Whoa, do you see that? Since the depth buffer has been cleared, the red sphere doesn’t know that its pixels are obstructed, so it’s rendered as if there were nothing else in the scene. That means that clearing the depth buffer brings rendered objects to the front. This can be super-useful when you’d like to render 3D UI elements.

In Skyrim you can see inventory items as 3D objects. These are rendered correctly even if a background object appears closer to the camera.

Another interesting option is applying camera effects only to specific layers. Let’s try to apply blur to the Blue Camera, just like on the screenshot below:

camera blur effect

Let’s now switch the Red Camera’s Clear Flags back to Don’t Clear and apply a different effect to the Blue Camera: Grayscale.


Finally, keep in mind that if you want to move the camera, you may want to move all cameras at once (that’s why keeping all the cameras as children of one game object is quite common). But sometimes moving only one camera may be exactly what you want…

moving two cameras