7 Ways to Keep Your Unity Project Organized

I saw a person on Quora the other day asking how programmers are able to write projects that consist of over 10,000 lines of code. When software gets bigger, it becomes more difficult to maintain, and that’s a fact. So here’s the thing: if you don’t keep your project organized, you’re going to have a hard time keeping up the pace. Later on, you will find yourself wasting time fighting a messy project instead of adding new features. This is also true for any Unity project. Here are (in my opinion) the most important tips that will help you keep your project organized.

1. Directory Structure

We cannot talk about organization without mentioning how to organize the project directory structure. Unity gives you total freedom in that matter, but because of that, things can frequently get really messy. This is the directory structure I personally use:

  • 3rd-Party
  • Animations
  • Audio
    • Music
    • SFX
  • Materials
  • Models
  • Plugins
  • Prefabs
  • Resources
  • Textures
  • Sandbox
  • Scenes
    • Levels
    • Other
  • Scripts
    • Editor
  • Shaders
  1. Do not store any asset files in the root directory. Use subdirectories whenever possible.
  2. Do not create any additional directories in the root directory, unless you really need to.
  3. Be consistent with naming. If you decide to use camel case for directory names and lowercase letters for assets, stick to that convention.
  4. Don’t move context-specific assets to the general directories. For instance, if there are materials generated from a model, don’t move them to the Materials directory, because later you won’t know where they came from.
  5. Use 3rd-Party to store assets imported from the Asset Store. They usually have their own structure that shouldn’t be altered.
  6. Use the Sandbox directory for any experiments you’re not entirely sure about. While working on this kind of thing, the last thing you want to care about is proper organization. Do what you want, then remove it or organize it properly when you’re certain that you want to include it in your project. When you’re working on a project with other people, create your personal Sandbox subdirectory, like Sandbox/JohnyC.

2. Scene hierarchy structure

Next to the project’s hierarchy there’s also the scene hierarchy. As before, I will present you with a template. You can adjust it to your needs.

  • Management
  • GUI
  • Cameras
  • Lights
  • World
    • Terrain
    • Props
  • _Dynamic

There are several rules you should follow:

  1. All empty objects should be located at 0,0,0 with default rotation and scale.
  2. When you’re instantiating an object at runtime, make sure to put it in _Dynamic – do not pollute the root of your hierarchy or you will find it difficult to navigate through it (see the sketch after this list).
  3. For empty objects that are only containers for scripts, use “@” as prefix – e.g. @Cheats
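
Here is a minimal sketch of rule 2 in code. The _Dynamic container name comes from the template above; the spawner class and prefab field are hypothetical, so adapt them to your own setup.

```csharp
using UnityEngine;

public class ProjectileSpawner : MonoBehaviour
{
    [SerializeField] private GameObject projectilePrefab;

    private Transform dynamicRoot;

    private void Awake()
    {
        // Cache the _Dynamic container from the scene hierarchy template.
        var dynamicObject = GameObject.Find("_Dynamic");
        dynamicRoot = dynamicObject != null ? dynamicObject.transform : null;
    }

    public void Spawn(Vector3 position)
    {
        // Parent runtime-instantiated objects under _Dynamic
        // so they don't pollute the root of the hierarchy.
        var instance = (GameObject)Instantiate(projectilePrefab, position, Quaternion.identity);
        if (dynamicRoot != null)
        {
            instance.transform.SetParent(dynamicRoot, true);
        }
    }
}
```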

3. Use prefabs for everything

Prefabs in Unity are not perfect, but they are the best thing you will find to share pre-configured hierarchies of objects. Generally speaking, try to prefab everything that you put on your scenes. You should be able to create a new level from an empty scene just by adding one or more prefabs to it.

The reason why you should use prefabs is that when a prefab changes, all the instances change too. Have 100 levels and want to add a camera effect on all of them? Not a problem! If your camera is a prefab, just add a camera effect to the camera prefab!

Be aware that you cannot have a prefab in another prefab. Use links instead: have a field that requires a prefab to be assigned and make sure to assign it when the instance is created. Consider auto-connecting prefab instances in Awake() or OnEnable() when it makes sense.
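
Here is a minimal sketch of that linking approach. The TurretSpawner and turretPrefab names are only for illustration; the pattern is simply "store a prefab reference in a field and instantiate it in Awake()".

```csharp
using UnityEngine;

public class TurretSpawner : MonoBehaviour
{
    // Instead of nesting the prefab, assign it to this field in the Inspector.
    [SerializeField] private GameObject turretPrefab;

    private GameObject turretInstance;

    private void Awake()
    {
        // Auto-connect the linked prefab when this instance wakes up.
        if (turretPrefab != null && turretInstance == null)
        {
            turretInstance = (GameObject)Instantiate(turretPrefab);
            turretInstance.transform.SetParent(transform, false);
        }
    }
}
```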

4. Learn how to use version control system (VCS)

You may already know something about Git, Subversion or any other VCS out there. As a matter of fact, “knowing something” is only a small piece of what you can learn. You should focus on learning about the important but infrequently used features of the VCS of your choice. Why? Mostly because VCS systems are much more powerful than you think, and unfortunately many users use them as nothing more than a backup and synchronization solution. For example, did you know that Git allows you to stash your changes, so you can work on them later without committing anything to your master branch?

Programmers tend to comment out blocks of code in case they’re needed later. Don’t do that! If you’re using a VCS, learn how to quickly browse previous versions of a file. Once you’re familiar with that, your code will look a lot nicer without unnecessary blocks of commented-out code.

Here’s a nice resource of tips for Git users: http://gitready.com/

5. Learn to write editor scripts

Unity is a great game engine when it comes to extensibility (see the Asset Store). Learn how to write editor scripts and utilize this knowledge. You don’t necessarily need to create a fancy GUI for your scripts; it can be something simple, such as menu entries that do something useful. Here are some examples of editor scripts that I have created not so long ago:

  • Google Sheets .csv download – I had a translation spreadsheet saved on Google Drive. The script automatically downloaded the newest version as a .csv file, so I never had to do it manually.
  • Randomize the position, rotation and size of trees – I had a lot of trees and wanted them to look more like a forest than a grid.
  • Create distribution – Builds for a specified target, zips all the files and copies them to the right place.
  • String replace in the sources – I had several files that contained the application version.

You can learn how to create editor scripts from the official documentation.
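
As an illustration, here is a minimal editor script that adds a menu entry. The menu path and the logging behavior are just assumptions for the example; remember to place the file in an Editor directory (e.g. Scripts/Editor).

```csharp
using UnityEditor;
using UnityEngine;

public static class ExampleEditorTools
{
    // Adds a "Tools" menu entry that logs the asset paths of everything selected in the Project window.
    [MenuItem("Tools/Log Selected Asset Paths")]
    private static void LogSelectedAssetPaths()
    {
        foreach (var obj in Selection.objects)
        {
            Debug.Log(AssetDatabase.GetAssetPath(obj));
        }
    }
}
```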

6. Learn to program defensively

Have you heard about defensive programming? Wikipedia defines it as follows:

Defensive programming is a form of defensive design intended to ensure the continuing function of a piece of software under unforeseen circumstances. Defensive programming techniques are used especially when a piece of software could be misused.

Generally when you’re writing MonoBehaviours, you should make sure that:

  • All needed references are set
  • All required components are present
  • If you’re using singletons, make sure that they exist
  • If you’re searching for objects and expect to find something, do it as fast as possible
  • Mix-in editor code (ExecuteInEditMode and #if UNITY_EDITOR) to do as many checks as possible before you run the scene

For many of these checks you can use asserts. You should also read A Story of NullPointerException Part 1 and 2.
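
Here is a minimal sketch of these checks in a MonoBehaviour; the class and field names are hypothetical.

```csharp
using UnityEngine;
using UnityEngine.Assertions;

[RequireComponent(typeof(Rigidbody))] // make sure required components are present
public class EnemyController : MonoBehaviour
{
    [SerializeField] private Transform target; // reference that must be set in the Inspector

    private Rigidbody body;

    private void Awake()
    {
        body = GetComponent<Rigidbody>();

        // Fail early and loudly if anything is missing.
        Assert.IsNotNull(target, "EnemyController: target reference is not set");
        Assert.IsNotNull(body, "EnemyController: Rigidbody component is missing");
    }
}
```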

7. Implement in-editor and/or in-game cheats

After you learn how to write editor scripts, you should be able to write a set of in-editor cheats. A cheat can work as a menu entry that unlocks something (all levels, for instance). It’s really easy to create:
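
(A minimal sketch; it assumes levels are unlocked through a simple PlayerPrefs flag, which is just an example convention. Place the script in an Editor directory.)

```csharp
using UnityEditor;
using UnityEngine;

public static class EditorCheats
{
    // Example cheat: unlock all levels by setting a PlayerPrefs flag the game checks at runtime.
    [MenuItem("Cheats/Unlock All Levels")]
    private static void UnlockAllLevels()
    {
        PlayerPrefs.SetInt("AllLevelsUnlocked", 1);
        PlayerPrefs.Save();
        Debug.Log("All levels unlocked.");
    }
}
```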

Generally you should write cheats that will allow you to:

  • Unlock all levels, characters, items etc
  • Give you immortality
  • Add/subtract values like time, money, coins etc
  • Allow you to see things not meant to be seen by players
  • Anything else that will help you with testing your game

Of course, more practical (but harder to write) are in-game cheats. This type of cheat can be executed outside the Unity editor, but you have to think about how you would like to trigger it. See our other article about implementing a cheats subsystem controlled by the mouse.

Choosing Between Forward or Deferred Rendering Paths in Unity

One of the most important Unity features is the ability to choose a rendering path. For those who are not very familiar with Unity, choosing (usually) between the forward and deferred rendering paths may be something comparable to choosing between “normal” and “strange looking and something’s broken” rendering methods. To understand better why there is more than one rendering path, first you need to understand the motivation behind it.
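
The rendering path is normally selected in the Player settings, and it can also be overridden per camera, either in the Camera inspector or from a script. A minimal sketch of the per-camera override (the component name is just an example):

```csharp
using UnityEngine;

[RequireComponent(typeof(Camera))]
public class RenderingPathSwitcher : MonoBehaviour
{
    private void Awake()
    {
        // Override the rendering path for this camera only;
        // RenderingPath.UsePlayerSettings keeps the project-wide setting.
        GetComponent<Camera>().renderingPath = RenderingPath.DeferredShading;
    }
}
```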

It’s all about lighting

Lights are expensive, mostly because a lot of calculations have to be done to find the final color of a pixel when there’s a light in range. In Unity, lights can be evaluated per-vertex, per-pixel or as Spherical Harmonics (SH). In this article we will talk only about the first two.

pixel vs vertex lighting

In per-pixel lighting, each pixel color is computed individually (as on the left). You can see that even though I use a low-poly sphere for this example, the lighting still makes it look round. If it wasn’t for the edges, it’d be really hard to spot where all the vertices are. Then there’s per-vertex lighting. It makes one light calculation per vertex. All the other pixels between vertices get their color from regular color interpolation (without further light calculations). This is the cheapest method of lighting and yeah… it looks cheap. (If you’re wondering where the pixel and vertex lighting switch is, it’s hidden in the Light component under the Render Mode option. Important forces the light to be a pixel light, Not Important makes it a vertex light, and Auto makes the strongest light a pixel light.)
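
The same Render Mode switch can also be set from a script if you ever need to change it at runtime; a small sketch (the component name is hypothetical):

```csharp
using UnityEngine;

[RequireComponent(typeof(Light))]
public class LightModeExample : MonoBehaviour
{
    private void Awake()
    {
        // Same switch as the Render Mode dropdown on the Light component:
        // ForcePixel = Important, ForceVertex = Not Important, Auto = Auto.
        GetComponent<Light>().renderMode = LightRenderMode.ForcePixel;
    }
}
```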

It’s not a secret that game developers love per-pixel lighting much more than per-vertex lighting. Yet it has a significant downside. Each light causes an additional rendering pass for each object in its range. There’s a limit of four lights that can affect an object. What’s more, there’s also a limit on shadows – based on the Unity documentation, only one light can have shadows (for some reason I’ve managed to get two shadows in Unity 5.3.4, so I’m not really sure about this one).

Deferred rendering to the rescue!

There’s a technique that allows you to use as many lights as you want on your scene while keeping the performance at a reasonable level. It does not limit the number of shadows and it does not cause additional draw passes when scene objects are within light range (objects casting shadows are an exception). It’s called the Deferred Shading Rendering Path.

4 lights deferred lighting

Why is it so different? Mostly because most of the models are rendered without lighting calculations, and when the scene rendering is nearly done, lights are applied to the rendered 2D image. Making changes at this stage is usually called doing something in screen space. Knowing that, we can say lighting in deferred rendering is done in screen space. To understand it better, let’s look at the Frame Debugger.

Scene rendering starts with rendering all geometries:

deferred opaque

This is a flat image, so how will the graphics card know how to apply lights and shadows? Thanks to the depth buffer! You can think of the depth buffer as another image that is hidden from you and that stores the information about how far from the camera each pixel is located. When represented as an image, it may look like this:

depth buffer visualization

Depth information alone isn’t enough to figure out how light should be applied to a surface. We need at least one more thing – the orientation. Orientation in 3D space is usually represented by normals. The unusual thing is that along with the color buffer and the depth buffer, there is a buffer with normals!

normals buffer visualization

How can you tell that these are normals? It’s pretty easy! Just look at the Scene Gizmo.

scene gizmo

Do you see the color resemblance? The red cone (x) points to the left, and so do the left-facing faces in the previous image. Green (y) points to the top and blue (z) to the bottom-right (from this perspective). It all matches the colors of the faces from before.

Based on that information, lights and shadows can be rendered. It really doesn’t matter how many objects there are on your scene. Everything gets done only on the final image.

After lighting pass

The image above is an inverted version (1 - color) of the lighting pass result. At the end it is blended with the first opaque image to get the final result.

Which one should I choose?

After reading all of this you may be full of enthusiasm to use the new rendering path, but hold your horses! Deferred rendering is not a remedy for all the world’s problems. It has some…

Limitations

It would be too good to be true, wouldn’t it? There are some limitations.

First of all, deferred rendering does not allow us to render semi-transparent objects. That’s because if something semi-transparent exists on the scene, there’s no way to write down the depth and normals both for the objects visible through the semi-transparent object and for the semi-transparent object itself. Unity handles this limitation by rendering semi-transparent objects with the forward rendering path at the end of the whole process. It works quite well: these objects can cast shadows, but unfortunately they are unable to receive shadows from other objects. They can also cause some unexpected issues that don’t occur when using the forward rendering path.

The second limitation is the lack of anti-aliasing support. The reason is similar to the issue with semi-transparent objects, but Unity does not try to work around it in any way. Instead, you can use screen-space AA algorithms (image effects), but the result may not look as good.

Another limitation is that you can use up to four culling masks. In the documentation you can read:

that is, your culling layer mask must at least contain all layers minus four arbitrary layers, so 28 of the 32 layers must be set. Otherwise you will get graphical artifacts.

And finally there’s no support for the Mesh Renderer’s Receive Shadows flag.

Requirements

If that’s not enough, deferred rendering works only on a limited set of graphics cards. When it comes to PCs, you can safely assume that all graphics cards not older than 10 years will support it. When it comes to mobile devices, you should assume nothing. But that’s not a big issue, because…

Performance

The most important thing is that deferred rendering will in most cases perform worse on mobile devices than forward rendering. It’s because of the additional passes that need to be done each frame. If you’re using only one light, it may not be worth it.

On the other hand, adding extra lights is quite cheap. In the worst case scenario performance will drop linearly and, compared to forward rendering, it’s independent of the number of objects on the scene.

Cities: Skylines (made with Unity) decided to use deferred rendering path. There’s a lot of small lights in this game and it still performs really well.

Resources

I hope this article casts some light on which rendering path you should choose for your game. You may also be interested in these resources:

Using Visual Studio Code with Unity

Microsoft recently released Visual Studio Code, a cross-platform, lightweight editor built on the same Electron shell as GitHub’s Atom, and it is worth considering as an alternative to MonoDevelop. Unity’s team has decided to stop distributing Unity with MonoDevelop in new Unity versions. Instead, you will get Visual Studio Community bundled. Unfortunately for Mac and Linux users, you’re still bound to MonoDevelop as the default. Let’s try something else!

Don’t confuse Visual Studio Code with the full version of Visual Studio. They are completely different applications! Visual Studio Code gives you only a small portion of what Visual Studio can do. It still can be quite powerful, though.

visual studio code running

Installing

To get started you need to download and install Visual Studio Code for your target platform. In order to do so, go to this page and download the package suitable for your operating system. After you get the package, follow the standard installation procedure for your operating system.

Configuring Unity

In order to make your Unity editor work with Visual Studio Code, you have to unpack a UnityVS plugin into your project. Unfortunately, you have to repeat this process for all projects that you want to work on with Visual Studio Code.

After unpacking it, go to the Preferences window (Edit -> Preferences for Windows and Linux or ⌘, shortcut on Mac OS).

vscode preferences window

Here, make sure that the Enable Integration checkbox on the VSCode tab is checked. When that’s done, you will be able to open your project using the Open C# Project In Code menu option.

Possible issues

When running on MacOS it’s quite common to get an error like this one:

vscode omnisharp error

To fix this issue, update your Mono installation to the most recent version.

Summary

You can find more information about VSCode and Unity here. If you’re not satisfied with it, you can always remove the VSCode directory from your project and you will automatically get back to MonoDevelop.

How to Use Unity’s Resources Folder

Unity has several kinds of special folders. One of them is the Resources folder. The basic concept of storing assets there is well explained in the official documentation:

Generally, you create instances of assets in a scene to use them in gameplay but Unity also lets you load assets on demand from a script. You do this by placing the assets in a folder called Resources or a sub-folder (you can actually have any number of Resources folders and place them anywhere in the project). These assets can then be loaded using the Resources.Load function.

Still, the reason why we might want to use the Resources folder may be a little confusing. First you have to understand how the Unity build process works and how Unity is able to access game assets.

Unity build process

Before you build your game, you have to declare what scenes your game consists of. All of this can be done in the Build Settings window.

build settings window

There are at least two reasons why Unity asks you to do this:

  • It needs to know what scene should be loaded first (the top scene)
  • It needs to know what assets should be included in your build (dependencies)

What are scene dependencies? They’re assets which are connected to the scene hierarchy in any way, usually as a component field.

Unity Logo object contains a Sprite Renderer component that references the Unity Logo asset.

The dependency diagram may look like this:

dependency diagram 1

In this case there are two scenes. Scene 1 is using Asset 1 and Asset 2. Scene 2 is using Asset 2 and Asset 3. What happens if you decide not to build Scene 2?

dependency diagram 2

Only Asset 1 and Asset 2 will be included in the build, since Asset 3 is referenced only by Scene 2, which is no longer included in the build. Thanks to this dependency tracking, Unity will include in your build only those assets which are actually used. Needless to say, you don’t have to worry about storing assets you’re not using at the moment. They will not affect your build size in any way.

Override!

There’s a way to get around this process. If you put your assets into a Resources folder, they will always be included in your build. But be careful! You need a really good reason to do so!

As I said before, in most cases when you need to use an asset, you make a reference to it within a scene. It’s really easy to use any kind of attached asset this way. So why would you need to use an asset without keeping a reference to it? There may be several reasons, each depending on the specific needs of your project, but let’s look at one case that is quite common for most games.

When an asset is directly referenced from the scene, it will be loaded into memory before your scene is launched. Thanks to that, the player will not experience any frame drops related to asset loading (with small exceptions). The price is, of course, the time needed for these assets to be loaded. Sometimes that may not be acceptable.

Example – loading screen with different backgrounds

Many games display random images on the loading screen to make it less boring.

A loading screen is usually a scene too. Let’s think of a case when you want to display a random image in the background while your actual game level is loading. You’ve collected 15 images and you add them to the loading scene’s image rotation script. It works great, but when you play your game you realize that your loading scene requires more time to load than you need to complete your actual game levels!

This is caused by the asset pre-loading mechanism and can be easily fixed using the Resources folder. First remove all the references to your textures from the scene. Then put your images into the Resources/LoadingImages directory like this:

resources images

Then somewhere in the code you can load one of them at random. Here’s a minimal sketch, assuming the 15 images are named loading_1 through loading_15 (the file names and class are just an example):
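
```csharp
using UnityEngine;

public class LoadingScreenBackground : MonoBehaviour
{
    private const int ImageCount = 15;

    public Texture2D LoadRandomImage()
    {
        // Pick one of loading_1 ... loading_15 from Resources/LoadingImages.
        int index = Random.Range(1, ImageCount + 1);
        return Resources.Load<Texture2D>("LoadingImages/loading_" + index);
    }
}
```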

Note that Random.Range() returns a random number between the first argument (inclusive) and the second argument (exclusive); that’s why there’s a +1.

If you need to attach this texture to an Image component, you can do it like this (again, a sketch that assumes the names used above):
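
```csharp
using UnityEngine;
using UnityEngine.UI;

public class LoadingScreenImage : MonoBehaviour
{
    [SerializeField] private Image backgroundImage;

    public void Apply(Texture2D texture)
    {
        // Wrap the loaded texture in a Sprite so the UI Image component can display it.
        var sprite = Sprite.Create(
            texture,
            new Rect(0f, 0f, texture.width, texture.height),
            new Vector2(0.5f, 0.5f));
        backgroundImage.sprite = sprite;
    }
}
```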

A word of caution

Use the Resources folder only when you really need to. Loading assets on demand will make your FPS rate drop, and having indirect dependencies makes your work much more difficult. It’s worth mentioning again that these assets will always be included in your build, even if you don’t use them. You have been warned!

How to Use Multiple Cameras in Unity3D

Understanding the Importance of Using Multiple Cameras in Unity

From what I observe, many Unity users do not grasp the concept of using multiple Unity cameras on a single scene. “If I want to look from only one perspective, why do I need more than one camera?”. Saying that it makes perfect sense when more than one camera captures the scene from the same perspective makes it even more confusing. So why even bother? The reason is somewhat complex, but it’s really worth learning. It will help you easily create great visual effects that are hard to accomplish with only one camera.

What is the Unity camera?

Before we can continue, you have to understand what a Unity Camera actually is. When Unity renders the scene, it needs a lot of important information to be set up, but let’s simplify that list to make it easier to understand. Let’s consider:

  • List of objects to render
  • Camera’s perspective (position, rotation, scale, field of view, clipping etc.)

If you’re already experienced in that matter you might’ve noticed that I’m not speaking about matrices. Let’s just ignore math-related stuff for now.

The list of objects to render is a list of all objects on the scene, right? Wrong! Each camera renders only the objects visible to it (field of view, frustum culling) and those on the layers actually seen by the camera (Culling Mask).

camera culling mask

Culling Mask can be set to Everything, or you may set which of the layers should be seen. This is one of the things layers are for.

This camera sees everything.

This camera sees only the Default layer (ground) and the Red layer (red sphere).

The conclusion is that different cameras can render different objects. This is important information even if you don’t know yet how to use it in practice. It also means that adding a second camera will not re-draw your scene two times. Only objects visible to the second camera will be rendered. Knowing this, having multiple cameras render different layers will result in efficiency similar to rendering all those layers using only one camera.

Let’s then answer the main question: a camera is an instruction to render a specific list of objects from a given perspective.
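
The Culling Mask is usually set in the Inspector, but it can also be set from a script. Here’s a small sketch that mirrors the setup from the screenshots; the layer names Default and Red are assumptions taken from this article’s example scene:

```csharp
using UnityEngine;

[RequireComponent(typeof(Camera))]
public class CullingMaskExample : MonoBehaviour
{
    private void Awake()
    {
        // Render only the Default and Red layers, like the second camera shown above.
        GetComponent<Camera>().cullingMask = LayerMask.GetMask("Default", "Red");
    }
}
```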

What do cameras render?

Wait, haven’t we just answered that question?! Well… not exactly. There’s a visible part and an invisible part. What you can see is the result image (let’s call it the color buffer). And of course there’s a thing that you cannot see. This thing is called a depth buffer (also called a z-buffer).

The depth buffer can be described as a gray-scale image the size of the game screen, where every pixel represents how close that pixel is to the camera (to be honest this is not 100% true, but let’s not think of more complicated cases now). It is used by the GPU to decide whether a to-be-rendered pixel should be processed or rejected from rendering. As a result, pixels that are obstructed by other pixels are not going to be visible (just like in the real world).

depth buffer

Camera order and clearing

Before rendering anything into the color buffer and depth buffer, a camera can clear both buffers or only the depth buffer. Did you notice that the default Unity 5 scene camera clears buffers to Skybox?

clear flags skybox

There are some more options there:

clear flags options

  • Skybox replaces the color buffer with your skybox and completely clears the depth buffer
  • Solid color does the same, but the color buffer becomes a solid color
  • Depth only leaves the color buffer as is, but clears the depth buffer
  • Don’t Clear doesn’t clear anything.

What will happen if we set the default camera’s Clear Flags to Don’t Clear? Well, the effect may be interesting (I moved the camera a little after entering Play mode).

camera don't clear

It looks like our sphere duplicated itself so many times that it turned into some kind of weird, rounded-pipes thing. Despite that, there’s still only one red sphere on the scene (note that the Blue layer is still not visible to the camera), and the game scene image looks valid. There are no graphical artifacts of any kind. Yet we managed to create an effect of many duplicated objects with only one object.

This happened because the color buffer was not cleared between frames (colors rendered previously were carried over to the next frame), and neither was the depth buffer. The depth buffer remembered that something had been rendered and kept this information when Unity tried to render the next frame. When a sphere was about to be rendered behind an already rendered sphere image, the invisible pixels were discarded. The same thing applies when there are many objects on the scene rendering one after another.

If you still don’t understand what just happened, please stop reading now and try doing it yourself! Make a new scene, add an object, set camera Clear Flags to Don’t Clear and move either your object or your camera.

What is it good for?

I assume that you don’t want this kind of effect in your game, so what’s the clearing good for? Let’s now try to create two cameras.

  • Blue Camera
    • Clear Flags: Skybox
    • Culling Mask: Default, Blue
    • Depth: 0

What Blue Camera sees.

  • Red Camera
    • Clear Flags: Don’t Clear
    • Culling Mask: Red
    • Depth: 1

What Red Camera sees.

There’s one new parameter: Depth. Depth defines the rendering order of the cameras. A camera with a lower depth will be rendered before a camera with a higher depth.
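
For reference, here is the same two-camera configuration expressed in code; it’s only a sketch and assumes the Blue and Red layers from this example scene exist in your project:

```csharp
using UnityEngine;

public static class CameraSetupExample
{
    // Mirrors the Blue Camera / Red Camera settings listed above.
    public static void Configure(Camera blueCamera, Camera redCamera)
    {
        blueCamera.clearFlags = CameraClearFlags.Skybox;
        blueCamera.cullingMask = LayerMask.GetMask("Default", "Blue");
        blueCamera.depth = 0;

        redCamera.clearFlags = CameraClearFlags.Nothing;   // "Don't Clear" in the Inspector
        redCamera.cullingMask = LayerMask.GetMask("Red");
        redCamera.depth = 1;
    }
}
```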

Let’s see how Unity will render this scene step by step (again not 100% accurate, but it’s only to understand the process):

  • (Blue Camera context)
  • Color buffer is cleared to Skybox
  • Depth buffer is cleared
  • Plane (Default layer) and blue sphere (Blue layer) are rendered
  • (Red Camera context)
  • Nothing is cleared
  • Red sphere (Red layer) is rendered

As a result, you get a scene that looks exactly as if it had been rendered using a single camera:

combined render from both cameras

So why bother? Let’s try one thing. Let’s switch Red Camera Clear Flags from Don’t Clear to Depth only:

depth only clear

Whoa, do you see that? Since the depth buffer has been cleared, the red sphere doesn’t know that its pixels are obstructed, so it’s rendered as if there were nothing else on the scene. That means that clearing the depth buffer brings the objects rendered afterwards to the front. This may be super-useful when you’d like to render 3D UI elements.

In Skyrim you can see inventory items as 3D objects. These are rendered correctly even if a background object appears closer to the camera.

Another interesting option is applying camera effects only to specific layers. Let’s try to apply blur to the Blue Camera, just like on the screenshot below:

camera blur effect

Let’s now switch Red Camera Clear Flags back to Don’t Clear and apply a different effect to the Blue Camera: Grayscale.

grayscale effect applied only to the Blue Camera

Finally, keep in mind that if you want to move the camera, you may want to move all cameras at once (that’s why keeping all the cameras as children of one game object is quite common). But moving only one camera may sometimes be desired…

moving two cameras