
Unity AR Tutorial: Augmented Reality Game Development with Vuforia


Vuforia is an AR platform that provides amazing opportunities for augmented reality development.

Vuforia also has a Unity SDK, and in the second part of this tutorial I’ll explain how to use it to build a simple AR game. But let’s start with the Vuforia integration…

Obtaining a Vuforia license

[Image: Add License Key form]

The first thing you have to do is register on the Vuforia page. After you complete the registration process, you can start developing with the Vuforia SDK. You will then need a license for your project.

Head over to the Development tab and click on the “Add license” button. A little form will show up. Fill it in as shown above and click “Next”.

Then you have to confirm your license key by agreeing with terms and conditions and by clicking “Confirm”. Pretty simple so far, right?

Vuforia’s Unity SDK

It’s time to download the Vuforia SDK for Unity. You can download it in the Vuforia Downloads tab.

[Image: Vuforia SDK for Unity download page]

After downloading the package, import it into your existing Unity project.

Prepare the markers

Now it’s time to prepare the markers. Markers are the images that Vuforia uses as points of reference to display your objects. The more distinct key points an image has, the more accurate, or “augmentable”, your markers will be. The easiest way to create such a marker is to generate a QR code. QR codes have a lot of key points and they work like a charm. You can generate them yourself or, to make it quicker, you can download mine 🙂

[Image: Create Database dialog]

Once you’ve got the marker, it’s time to upload it to Vuforia. Go to the Vuforia Target Manager page and click on the “Add new database” button. Type your preferred name for the database and choose “Device” as the Type option.


Great! Now select the database you’ve just created, then click “Add Target” to add the new target to the database.

[Image: Add Target form]

Select the Single Image type, pick the marker you’ve downloaded before, set the width to 1, name it the way you want, and click “Add”.

There are other types of targets, for example Cuboid, Cylinder, or 3D Object. These can work as 3D markers (you can make them if you feel like it), but preparing them takes a lot more time, because you’d have to create and upload every side of that cuboid to Vuforia, and that’s a chore…

Alright, so now you can see your target has been added to the list and it’s ready to be downloaded. Click “Download Database” and import the package to Unity.

Preparing the scene

Setting up the scene is also very easy. Get rid of the existing camera on the scene; Vuforia has its own camera that your scene will use. Drag and drop the ARCamera prefab from the Vuforia > Prefabs folder into the scene. Your scene hierarchy should look like this:

[Image: scene hierarchy with the ARCamera prefab]

Now, let’s set up the ARCamera properties. The ARCamera prefab requires the license key, which you can find on the Vuforia License Manager page.

[Image: license key on the Vuforia License Manager page]

Copy and paste your license key into the Vuforia Behaviour script field of the ARCamera prefab on the scene.

[Image: Vuforia Behaviour script with the license key field]

Select the ARCamera in your hierarchy and, in the Database Load Behaviour, tick the “Load [name] database” checkbox, then “Activate”.

[Image: Database Load Behaviour settings]

Setting up the markers

Alright, now it’s time to add the markers to the scene! You can do it by dragging the ImageTarget prefab from Vuforia > Prefabs into your scene.

Your ImageTarget needs to be set to a specific marker image. It has a script called “Image Target Behaviour” where you can do that: choose your database name in the “Database” field and then select the “ImageTarget” from the drop-down list. It should look like this:

[Image: Image Target Behaviour settings]

Great, you’re almost done! Now it’s time to add some objects to display. Right-click on the ImageTarget in the hierarchy and select 3D Object > Cube. Resize the cube the way you want it and the project is ready to be compiled! Alternatively, you can add any 3D object as a child of the ImageTarget.

As you can see, the Vuforia SDK is very easy to set up, and it has a lot of nice features you can utilize to make awesome games or apps. For example, you can combine a couple of those markers to make something a bit more complicated, like the two-marker drone demo we’ll build next.

Procuring Graphical Assets

You will need some 3D models. I’d suggest that you download and import these amazing drone models by Popup Asylum.

Next, we’ll need a QR code as an additional marker. You can generate your own, but feel free to use mine if you’re lazy 🙂 Also, don’t forget to upload it to the Vuforia Target Manager page and download the updated database afterwards.

For testing purposes you’ll need these two markers printed, so I’ve also prepared a PDF for you to download.

A few more textures for you to import: drone_shadow, white_ring.

Scene setup

Drag and drop two ImageTarget prefabs from Vuforia > Prefabs into the scene. Name the first one “DroneTarget” and the second one “GoalTarget”. After that, assign the corresponding image target to each of them in the Image Target Behaviour script. In the same script, change the width of the DroneTarget to 3 and of the GoalTarget to 2. It should look like this for the DroneTarget:

[Image: DroneTarget settings]

And like this for the GoalTarget:

[Image: GoalTarget settings]

Now, in the ARCamera object in our hierarchy, head over to the Vuforia Behaviour script and change the value of “Max Simultaneous Tracked Images / Objects” to 2. Then, in the “World Center Mode” section, select “SPECIFIC_TARGET” from the drop-down list, and in the “World Center” section choose the DroneTarget from the hierarchy. It should look like this:

[Image: ARCamera settings]

Connecting Models to Markers

We’ve got the markers, let’s add the 3D models they will represent…

Right-click on the DroneTarget and select “Create Empty”. Make sure this new object has a position of [0; 0.5; 0] and a scale of [0.33; 0.33; 0.33]. I’ve also renamed it to “Drone”. Then add a 3D model of any drone as a child of this object and change its position to [0; 0; 0] and its scale to [0.33; 0.33; 0.33]. Now the hierarchy looks like this:

[Image: Drone object hierarchy under DroneTarget]

Now let’s add a model for our GoalTarget. Add an empty object the same way we did for the DroneTarget and change its name to “Ring”. Then add a Sprite Renderer to that object and, in the “Sprite” field, choose our ring sprite. After that you can change its color to whatever you’d like. You might also want to resize and reposition it to fit the QR marker. It looks something like this for me:

[Image: Ring object setup under GoalTarget]

Nav Mesh setup

Alright, so we’ve got the scene; let’s now get to the main functionality: pathfinding. We need to bake a Nav Mesh. For more info, please see the Unity navigation tutorial page.

Baking a Nav Mesh first requires a regular mesh on the scene to walk on. Right-click in the hierarchy and select 3D Object > Plane. Rescale the plane so it covers the area around the markers where you’d potentially like your drone to move. Now open the Navigation window (Window > Navigation) and, in the Object tab, make sure your plane is selected along with the “Navigation Static” checkbox. After that, click “Bake”. Fantastic! If your scene looks somewhat like this, you’re on the right track!

[Image: baked Nav Mesh covering the plane]

So what do we do with that annoying white plane obstructing the view? Well, it’s time to get rid of it. Yeah, just delete it; we won’t need it anymore, since the Nav Mesh is already baked.

Once we’ve got our Nav Mesh, it’s time to set up a Nav Mesh Agent: an actor that travels using the Nav Mesh. The Nav Mesh Agent functionality should obviously be applied to the drone, so select the first child in the “DroneTarget” hierarchy…

[Image: selecting the Drone object in the hierarchy]

… and add a Nav Mesh Agent component in the Inspector. You’ll see a whole bunch of parameters for this component. I’m not going to explain what each and every one of these parameters means, since that’s not the point of this tutorial. (If you’re interested in commentary on the parameters, please watch this video; here are the Unity docs as well.) Here’s a screenshot of my Nav Mesh Agent parameters for the drone:

[Image: Nav Mesh Agent parameters]

Coding time

The initial scene setup is ready, and it’s time to write some code to make it work… Create a script in one of your folders and add it to the “Drone” object (DroneTarget’s first child).

The whole movement mechanic is pretty simple. We need two variables: a reference to the goal object and the drone’s Nav Mesh Agent.

We also need a function that tracks the position of the second marker and moves the drone towards it.
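Since the original code was shown as a screenshot, here’s a minimal sketch of what the whole script could look like (class and member names are my assumptions):

```csharp
using UnityEngine;
using UnityEngine.AI; // NavMeshAgent lives here in recent Unity versions

public class DroneMovement : MonoBehaviour
{
    // The two variables: the goal to fly to and the agent doing the flying.
    public Transform goal;      // the "Ring" object, assigned in the Inspector
    private NavMeshAgent agent; // the Nav Mesh Agent on this object

    void Start()
    {
        agent = GetComponent<NavMeshAgent>();
    }

    // Track the position of the second marker and move the drone towards it.
    void MoveToTarget()
    {
        if (goal != null)
            agent.SetDestination(goal.position);
    }

    void Update()
    {
        MoveToTarget();
    }
}
```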

That’s it. Just execute this function in Update() and don’t forget to assign the “Ring” object (the GoalTarget’s child) to the public variable in the Inspector.

[Image: the movement script in the Inspector]

You can now build the project onto your phone and check whether it works with the markers.

NOTE: The DroneTarget should always be in the camera view for the app to work, since we’ve declared it as the world center and the whole drone logic is attached to it.

Drone leaning

You might also want your drone’s movement to look more realistic, so let’s add movement leaning. The drone will lean forward depending on its current speed.

MapRange function

There’s a great function I use pretty often called MapRange. It converts a number within one range to the corresponding number within another range.

A simple example of this function at work: let’s say we want to change the color of a ball depending on the player’s distance from it. Here, s would be our current distance from the ball, a1 the minimum observed distance, and a2 the maximum observed distance. We want the ball to be blue if the distance is more than 3 meters and red if the player is 1 meter away or less; everything in between would automatically be purple-ish.
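The original implementation was shown as a screenshot; here’s a sketch of MapRange and the ball example (the helper names are hypothetical, and both methods live inside a MonoBehaviour):

```csharp
// Linearly remaps s from the range [a1, a2] to the range [b1, b2].
float MapRange(float s, float a1, float a2, float b1, float b2)
{
    return b1 + (s - a1) * (b2 - b1) / (a2 - a1);
}

// Hypothetical usage: red at 1 m or closer, blue at 3 m or farther,
// purple-ish in between.
void UpdateBallColor(Renderer ball, float distanceToPlayer)
{
    float t = Mathf.Clamp01(MapRange(distanceToPlayer, 1f, 3f, 0f, 1f));
    ball.material.color = Color.Lerp(Color.red, Color.blue, t);
}
```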


Back to the drone leaning…

So, in the case of movement leaning, exploiting MapRange() is simple.
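Here’s a sketch of what PitchCtrl() could look like (maxPitch and droneModel are assumed names, not from the original script):

```csharp
public float maxPitch = 20f;  // assumed: lean angle in degrees at full speed
public Transform droneModel;  // assumed: the visual drone child to tilt

void PitchCtrl()
{
    // Map the agent's current speed (0..agent.speed) to a pitch angle
    // (0..maxPitch) and tilt the drone forward around its local X axis.
    float pitch = MapRange(agent.velocity.magnitude, 0f, agent.speed, 0f, maxPitch);
    droneModel.localRotation = Quaternion.Euler(pitch, 0f, 0f);
}
```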

Add PitchCtrl() to the Update() function and you’re good. The drone will lean forward while moving.

Shadow

Let’s add a shadow to the drone: a simple static shadow. That’s a piece of cake, since we just have to add a sprite to the existing hierarchy. Make sure your shadow object is the second child of “Drone”. Just like that:

[Image: shadow sprite as the second child of Drone]

Line renderer

To set up a line renderer, we first need a sprite. But don’t worry, there’s no need to import one: head over to your Project window, open the folder where you store your sprites, and hit Right click > Create > Sprites > Square. You’ve got your new sprite, which we’ll use for the new material we’re about to create.

Right click > Create > Material. The material’s shader should be changed to Particles > VertexLit Blended, and its Particle Texture set to the Square sprite we just created. Sounds hectic, but just look at this picture and you’ll get it:

[Image: laser material settings]

Create a new script; let’s call it Goal. Attach the Goal script to our Ring object.

[Image: Ring object with the Goal script attached]

Also attach a Line Renderer component to the “Ring”. The Line Renderer requires a material, so add the one we just created.

[Image: Line Renderer settings]

So, it’s time to write the code, which is also very simple. The whole Goal script is pretty much self-explanatory.
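The original script was provided as a screenshot, so here’s a minimal sketch of what it could look like (the drone field is an assumption):

```csharp
using UnityEngine;
using UnityEngine.AI;

public class Goal : MonoBehaviour
{
    public NavMeshAgent drone;  // the drone's agent, assigned in the Inspector
    private LineRenderer line;

    void Start()
    {
        line = GetComponent<LineRenderer>();
        line.positionCount = 2; // one segment (use SetVertexCount in very old Unity)
    }

    void Update()
    {
        // Draw the "laser" from the ring to the drone's current destination.
        line.SetPosition(0, transform.position);
        line.SetPosition(1, drone.destination);
    }
}
```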

If you run the app now, you’ll see the ring’s line/laser pointing to the drone’s destination. That’s great, but we can also make the destination a little clearer by adding another object: just a game object containing Sprite Renderer and Animation components.

[Image: animated destination indicator]

Create a public variable destSprite and update the MoveToTarget() function.
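A sketch of the updated function, assuming destSprite holds the indicator object’s Transform:

```csharp
public Transform destSprite; // assumed: the animated destination indicator

void MoveToTarget()
{
    if (goal == null) return;
    agent.SetDestination(goal.position);
    // Park the indicator where the drone is currently heading.
    destSprite.position = agent.destination;
}
```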

That’s it! Run your project on a smartphone and watch the drone fly towards the ring marker.

Thanks for reading, and don’t forget to subscribe to our newsletter so you don’t miss any new Unity development blog posts!

Becoming a Unity Asset Store Publisher

You’re now in the process of creating your game. Did you make something that could be used again? Is it something that is not easy to build and could be useful for others? That’s great! Sell it on the Unity Asset Store!

Before we start

There are at least two kinds of publishers on the Asset Store. The first type are people who create their content solely for the purpose of selling it. Those people are mostly focused on quality, documentation, and support. The second group are game companies that build assets for internal use and eventually publish them to a wider audience. Their assets are made for specific games, but sometimes they can be reused if the quality is good enough.

It’s more difficult to make money out of artistic assets (models, music, textures) than out of scripts and editor extensions. I don’t know the exact numbers, but this topic was frequently discussed in the publishers’ groups. The reason is that artistic assets, no matter how good they are, are difficult to reuse. Sure, there are environment packs and character packs, but most of the time there will be something missing, something that you’d like to see in your game as a game creator. Mixing multiple models and textures together rarely looks good enough, so many developers decide to hire 3D modelers and graphic designers of their own.

This issue does not affect scripts and editor extensions that much. Scripts can be very flexible; their quality depends only on good design and the amount of time spent working on them.

How to create a successful script

Many people have asked me whether working solely as an Asset Store publisher is profitable. It really depends on the products you release. Some products will be more successful than others, and success is something that may be difficult to measure. For me, a successful product is something that I built in a relatively small amount of time, that does what it is meant to do, that sells, and that does not require too much maintenance or support. Some of the best Unity assets are those that act as features the Unity editor itself should include natively.

One example of a successful product is Energy Bar Toolkit – a script for setting up health bars of different kinds. It does what it is meant to do, and it does it well. The good thing about EBT is that for most of my clients this asset offers more than enough. An example of an asset that is not very successful is Mad Mesh Combiner. The idea was quite good, because many Unity developers were struggling with frame rates on mobile devices due to a big number of draw calls. Unfortunately, I did not predict how much time it would require to make it work properly in most cases. Also, my clients didn’t understand its limitations, and many of them blamed me after purchasing the asset instead of reading through the instructions first.

Generally speaking, think of creating assets that have a clear definition of “finished”. At least for starters, because it’s easier to become a publisher if you can release more than one product in a short period of time.

Uploading your first asset

So you’ve decided to become a publisher. Brace yourself, there’s a lot to read if you want to handle it all. The best resource is the official guide on how to sell assets. Please make sure to read all the listed documents. Read and remember the Submission Guidelines – getting them right will surely increase your chance of getting approved.

Make sure that your asset has decent documentation. For the first version you can use a PDF file generated from a LibreOffice document, but as your asset grows, the documentation may become bigger and less coherent. Consider building HTML documentation with Jekyll. Jekyll is like a wiki, but it generates static HTML files that can be uploaded to any HTTP server or zipped into a file and distributed along with your asset.

Prepare a decent presentation, but do not spend too much time on it – you can easily improve it later! A YouTube video is a must-have; having a WebGL demo will help too. Don’t worry too much about the icon and the product page background. They’re not as important as you may think.

Do not upload your assets using the latest Unity version available – users of older Unity versions won’t be able to use them. Usually only newly created games use the newest Unity version; after that, their creators stick to that particular version until the next game. If there’s a significant change between Unity versions that you cannot ignore, you’re allowed to upload your asset from multiple Unity versions. Thanks to this, your customers will receive content that closely matches their setup.

Don’t worry if you get declined the first few times. The Unity guys are very friendly and helpful with getting your asset right. Usually it’s a matter of something you’ve missed while preparing it for upload.

Your asset is now online!

After getting approved, set up a forum thread about your asset. It’s a way of marking your presence, and many of your clients will also use it to get support. It’s the preferred way of providing support, because when someone searches for an issue that someone else has already had, they will most likely find one of your answers.

Make sure to reply to all of the email support requests. Even if you don’t know the answer, or you’re not planning to add a requested feature, the worst thing you can ever do is ignore your client.

How would you know that a person who is writing to you is your client? You can always ask them for the order number (it’s on the invoice). In your publisher panel you can find a verification tool. If the order number is valid, you’ll see your asset’s name, the purchase date, and the price it was purchased for.

You can give away up to 12 free copies of your asset a year. Make use of that! I did it when I had released my initial version and needed some feedback. I gave a few away to some people on the forums and got great feedback that helped me improve my tool before the next release.

Then what?

As I said before, it’s difficult to measure success. Think of releasing at least 3 assets. When you do, you will know enough to evaluate which one is the better investment.

Subscribe to the Unity Blog and stay up to date with the latest features. Test your assets on Unity beta versions before those versions are released. You can sometimes find yourself in a situation where a new Unity version breaks your code. This is your best chance to submit a bug report and get it fixed before the version goes public. Trust me, it can save you a lot of trouble.

If you’re already a publisher or if you want to become one after reading this article, please share your thoughts in the comments!


7 Things to Remember When Working in a Team

Whether you are working in a team or alone, you should know how to efficiently work on a project with other people. It will not only make working on a single project with someone a lot easier, but it will also significantly improve your coding style and project management skills. It’s nothing to be afraid of, really! I’d like to explain in a few steps the most important aspects of working in teams.

1. Organize your project well

Before progressing any further, you must make sure that your project is well-organized. It’s an important requirement, because an unorganized project is very difficult to maintain. It’s not impossible to work on such a project by any means, but each time you’re forced to work on it, you may feel an unstoppable desire to murder someone in your near proximity. And this feeling will get worse as the project grows.

You can learn how to keep your project organized from one of our previous blog posts.

2. Setup an issue tracker

If you aren’t using one already, then I am sure you’ve at least heard about Jira, Redmine, Trac, or Mantis. In fact, there are a lot of these, and you may have a serious problem deciding which one to choose. From my experience, there’s no solution that can be called “the best one”. All of them have some pros and cons. I strongly encourage you to check out at least 3 of them before deciding which one will be good enough for your needs. Keep in mind that you will learn what you really need only after a few months of working with any of them.

Why is issue tracking that important? When working in a team, good communication is key. Humans are lazy and forgetful creatures. If you tell someone that he or she has to do something, they remember that task clearly only until tons of similar tasks fall from the sky. You have to accept that simple fact. Also, don’t rely on your own memory – it has been proven that the need to remember a lot of things significantly increases your stress.

3. Switch your projects to the force-text serialization method

It’s obvious but often forgotten. Unity uses a binary serialization method for its assets by default (hey, Unity guys! Who told you that would be a good idea?). This simply means that if two people change a scene, animation setting, or prefab at the same time, one person will have to give up their changes because of a conflict. The situation is similar to the one in which two guys are dating the same girl. Sooner or later, one of them will have to give up (let’s not talk about the alternatives, shall we?).

Force-text serialization can be enabled in Edit -> Project Settings -> Editor (set the Asset Serialization Mode to “Force Text”):

You don’t need to confirm anything. Unity will perform re-serialization of your assets immediately.

4. Agree on a common coding style

Be ready for a big fight – programmers don’t like to change their habits. But you and your colleagues still need to do it, otherwise it will cause a lot of nasty conflicts on code merges. You also have to decide whether your source files should use tabs or spaces for code indentation. You may look for something like an official or community style guide for a specific language; there’s one for C# that you may find useful. You don’t have to follow it blindly if you don’t agree with some parts of it, but when there is a disagreement within your team, it can be a good source of possible solutions.

Remember not to force any decisions on your team if there’s strong resistance on a matter. If you want them to give everything they’ve got to your project, you have to make them love working on it, not hate it!

5. Agree on asset changes policy

Some assets can be hard to merge even if you switch the serialization mode to text. It may be necessary to decide on a policy for how and when to communicate that someone is about to edit an asset. Some VCSs, like Perforce or Plastic SCM, offer a feature called exclusive checkout. It means that all files are read-only until someone decides to edit one. If a second person tries to edit that file at the same time, he or she will receive a warning that the file is being checked out by someone else.

You may also want to take a more human-like approach. I knew one company that had a magnetic board with post-it notes on it. Each note had a scene name written on it. If someone decided to edit a scene, he or she took the matching note to their desk, so anyone else knew that this scene was currently being changed.

6. Communicate often, but watch for interruptions

Talk often with your team about what they did and what they are about to do. Do it every morning if possible, and keep it short. Discuss possible issues and how to solve them, if needed.

Talk with your teammates as often as you need to, but watch out for interrupting their work. Avoid calling them or speaking to them in person without earlier notice. If you do that, you deny them the right to react whenever it suits them. You make them immediately lose the context of the work they were doing and focus on your thing – it’s a very selfish thing to do. Even if you’re in a hurry, try to notify them using an email message or IM first.

Choose communication software that allows you to freely configure its verbosity level. Some people don’t mind being interrupted; some do. Nearly every e-mail client allows you to freely configure the notification level, so if you want to talk about something and you’re not in a hurry, use e-mail. Some things cannot wait for too long, so you also need instant messaging of some sort. I strongly recommend you try Slack. Slack highly respects everybody’s right not to be interrupted and gives you an option to allow interruptions in really important matters. Slack also allows your team to create channels; a channel is a place where two or more people can discuss one specific matter.

7. Try Unity Collaborate

Not so long ago, Unity announced their new service called Unity Collaborate. It’s worth trying because it directly targets Unity developers, and the client works within the Unity editor itself.

We haven’t used Unity Collaborate ourselves (it’s still in beta), but it promises to solve issues with data merging and staying in sync with your teammates. It’s worth checking out, and you can try it for free.

What else?

No two projects are identical, and no two teams will ever be the same. You should keep your eyes and mind open, and watch for any issues your team encounters that are worth solving. Not all issues can be solved easily, and many project directors make the mistake of solving an issue with a solution that makes it even worse. Trying is not a sin, but not admitting we made a mistake and regressing on work that has already been done unfortunately is.


Let It Snow! How To Make a Fast Screen-Space Snow Accumulation Shader In Unity

Have you ever wondered how much time it takes to apply snow to all of the textures in your game? Probably a lot. We’d like to show you how to create an Image Effect (a screen-space shader) that will immediately change the season of your scene in Unity.

[Image: the same low-poly village scene without and with snow]

How does it work?

In the images above you can see two screenshots presenting the same scene. The only difference is that in the second one I enabled the snow effect on the camera. No changes to any of the textures have been made. How can that be?

The theory is really simple. The assumption is that there should be snow wherever a rendered pixel’s normal is facing upwards (ground, roofs, etc.). There should also be a gentle transition between the snow texture and the original texture where a pixel’s normal faces any other direction (pine trees, walls).

Getting the required data

For the presented effect to work, it requires at least two things:

  • Rendering path set to deferred (For some reason I couldn’t get forward rendering to work correctly with this effect. The depth shader was just rendered incorrectly. If you have any idea why that could be, please leave a message in the comments section.)
  • Camera.depthTextureMode set to DepthNormals

While the second option can easily be set by the image effect script itself, the first option can cause a problem if your game is already using the forward rendering path.

Setting Camera.depthTextureMode to DepthNormals will allow us to read screen depth (how far pixels are located from the camera) and normals (their facing direction).

Now, if you’ve never created an Image Effect before, you should know that these are built from at least one script and at least one shader. Usually this shader, instead of rendering a 3D object, renders a full-screen image out of the given input data. In our case the input data is the image rendered by the camera and some properties set up by the user.
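Here’s a minimal sketch of the script side (class and member names are assumptions; only Camera.depthTextureMode and the _CamToWorld matrix are taken from the article):

```csharp
using UnityEngine;

[ExecuteInEditMode]
[RequireComponent(typeof(Camera))]
public class ScreenSpaceSnow : MonoBehaviour
{
    public Material material; // material using the snow shader below

    void OnEnable()
    {
        // The shader needs screen-space depth and normals.
        GetComponent<Camera>().depthTextureMode |= DepthTextureMode.DepthNormals;
    }

    void OnRenderImage(RenderTexture src, RenderTexture dest)
    {
        // Passed to the shader to convert camera-space normals to world space.
        material.SetMatrix("_CamToWorld", GetComponent<Camera>().cameraToWorldMatrix);
        Graphics.Blit(src, dest, material);
    }
}
```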

This is only the basic setup; it will not generate any snow for you. Now the real fun begins…

The shader

Our snow shader should be an unlit shader – we don’t want to apply any light information to it, since in screen space there’s no light.
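Here’s a basic template, sketched from the standard unlit shader:

```shaderlab
Shader "Custom/ScreenSpaceSnow"
{
    Properties
    {
        _MainTex ("Texture", 2D) = "white" {}
    }
    SubShader
    {
        Cull Off ZWrite Off ZTest Always

        Pass
        {
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #include "UnityCG.cginc"

            struct appdata { float4 vertex : POSITION; float2 uv : TEXCOORD0; };
            struct v2f { float2 uv : TEXCOORD0; float4 vertex : SV_POSITION; };

            sampler2D _MainTex;

            v2f vert (appdata v)
            {
                v2f o;
                o.vertex = UnityObjectToClipPos(v.vertex);
                o.uv = v.uv;
                return o;
            }

            fixed4 frag (v2f i) : SV_Target
            {
                // For now, just pass the camera image through.
                return tex2D(_MainTex, i.uv);
            }
            ENDCG
        }
    }
}
```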

Note that if you create a new unlit Unity shader (Create -> Shader -> Unlit Shader), you get mostly the same code.

Let’s now focus only on the important part: the fragment shader. First, we need to capture all the data passed by the ScreenSpaceSnow script.
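A sketch of those declarations inside the CGPROGRAM block (names other than _CameraDepthNormalsTexture and _CamToWorld are assumptions based on the description that follows):

```hlsl
sampler2D _MainTex;                    // image rendered by the camera
sampler2D _CameraDepthNormalsTexture;  // packed depth + camera-space normals
float4x4 _CamToWorld;                  // set by the ScreenSpaceSnow script
sampler2D _SnowTex;                    // tiling snow texture
float _SnowTexScale;                   // snow texture scale
fixed4 _SnowColor;                     // snow tint
float _BottomThreshold;                // start of the snowy facing range
float _TopThreshold;                   // end of the snowy facing range
```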

Don’t worry if you don’t know why we need all this data yet. I will explain it in detail in a moment.

Finding out where to snow

As I explained before, we’d like to put the snow on surfaces that are facing upwards. Since our camera is set to generate a depth-normals texture, we are now able to access it. For this purpose there is a sampler2D _CameraDepthNormalsTexture declaration in the code. Why is it called that way? You can learn about it in the Unity documentation:

Depth textures are available for sampling in shaders as global shader properties. By declaring a sampler called _CameraDepthTexture you will be able to sample the main depth texture for the camera.

_CameraDepthTexture always refers to the camera’s primary depth texture.

Now let’s start with getting the normal.
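A sketch of that step inside the fragment shader:

```hlsl
// Sample the packed depth-normals texture and unpack it.
float depth;
float3 normal;
DecodeDepthNormal(tex2D(_CameraDepthNormalsTexture, i.uv), depth, normal);

// Camera space -> world space, so the facing no longer depends on the view.
normal = mul((float3x3)_CamToWorld, normal);

return fixed4(normal, 1); // temporary: visualize the normals as RGB
```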

The Unity documentation says that depth and normals are packed into 16 bits each. In order to unpack them, we need to call DecodeDepthNormal, as seen above.

Normals retrieved this way are camera-space normals. That means that if we rotate the camera, the normals’ facing will also change. We don’t want that, and that’s why we have to multiply them by the _CamToWorld matrix set in the script before. It converts the normals from camera to world coordinates, so they no longer depend on the camera’s perspective.

In order for the shader to compile, it has to return something, so I set up the return statement as seen above. To check whether our calculations are correct, it’s a good idea to preview the result.

[Image: world-space normals rendered as RGB]

We’re rendering the normals as RGB. In Unity, Y faces the zenith by default, which means the green channel shows the value of the Y coordinate. So far, so good!

Now let’s convert it into a snow amount factor.

We should be using the G channel, of course. This alone may be enough, but I like to push it a little further and make the bottom and top thresholds of the snowy area configurable. It allows fine-tuning how much snow there should be on the scene.
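A sketch of that conversion (using the threshold properties declared earlier):

```hlsl
// World-space Y (the G channel) tells how much the surface faces upward;
// remap it through the configurable thresholds into a 0..1 snow factor.
half snowAmount = saturate(
    (normal.g - _BottomThreshold) / (_TopThreshold - _BottomThreshold));
```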

Snow texture

Snow may not look real without a texture. This is the most difficult part: how do you apply a texture to 3D objects when you only have a 2D image (we’re working in screen space, remember)? One way is to find out the pixel’s world position. Then we can use the X and Z world coordinates as texture coordinates.

Now here’s some math that is not the subject of this article. All you need to know is that vpos is a viewport position, and wpos is a world position, received by multiplying the viewport position by the _CamToWorld matrix and converted to a valid world position using the far plane (_ProjectionParams.z). Finally, we calculate the snow color using the XZ coordinates multiplied by the configurable _SnowTexScale parameter and the far plane to get a sane value. Phew…
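The original code was shown as an image; here’s only a rough sketch of the idea described above (the exact reconstruction math may differ):

```hlsl
// Viewport position scaled by the sampled 0..1 depth...
float3 vpos = float3(i.uv * 2 - 1, 1) * depth;
// ...rotated into world space with the _CamToWorld matrix.
float3 wpos = mul((float3x3)_CamToWorld, vpos);
// Use the XZ plane as texture coordinates; _ProjectionParams.z (the far
// plane) rescales the 0..1 depth back to sane world units.
half3 snowColor = tex2D(_SnowTex,
    wpos.xz * _SnowTexScale * _ProjectionParams.z).rgb * _SnowColor.rgb;
```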

[Image: snow texture projected onto the scene]

Merging it!

It’s time to finally merge it all together!
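A sketch of the final blend at the end of the fragment shader:

```hlsl
// Lerp the original camera image towards the snow color by the snow factor.
fixed4 col = tex2D(_MainTex, i.uv);
col.rgb = lerp(col.rgb, snowColor, snowAmount);
return col;
```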

Here we’re getting the original color and lerping from it to snowColor using snowAmount.

The final touch: let’s set the _TopThreshold value to 0.6.

Voila!

Summary

Here’s a full scene result. Looking nice?

[Image: snow shader applied to the Lowpoly Township Set]

Feel free to download the shader here and use it in your project!

The scene used as the example in this article comes from the Lowpoly Township Set. Inspired by this particular redditor.


Game Production Pipeline Infographic

When you first start thinking of making a game, it’s quite important to consider the amount of time and energy that it will take to complete it. Some games can take years to complete – for both indie game developers and their AAA counterparts. When working with game publishers, for example, game studios need to be able to accurately estimate the project’s timeline and respective milestones so that the game ships on time. We decided to make an illustration of the game production pipeline in order to visualize the process of making a video game.

[Infographic: game production pipeline – game design, game development, QA testing, and launching the game marketing plan]


  1. Pre-Production Stage

Everything starts with ideas. Ideas can be unique or inspired by other games. This stage includes a superficial description of the game mechanics, story, genre, platform, etc.

IDEA REFINEMENT – usually termed “building the storyline”, this involves defining the main character, the plot setting, and the overall game theme. During this phase, devs identify the game’s goals and controls and start sketching the game art.

The Game Design Document is the output of this stage; it describes the various game elements as well as the project plan for development.


  2. Production Stage

After the pre-production phase is complete, the development of the game enters the production phase, and now a larger group of producers, designers, artists, and programmers is brought into the mix.

DESIGN – designers create the game art based on the desired theme. Depending on the iteration, this can involve designing the main gameplay, levels, bosses, menus, or promotional art.

DEVELOP – programmers build the game logic based on the art designed for the iteration. Depending on the iteration, this can mean developing core gameplay, writing AI, or implementing UI/UX.

TEST – testers validate various gameplay scenarios to ensure a higher-quality game. They validate each build on the target platforms to provide feedback for the next iteration.


  3. Post-Production

When the game is considered “feature complete”, meaning all of the code has been written and the art has been completed, it moves to the post-production stage.

TESTING – this stage is usually referred to as beta testing, and it begins when all of the code and art has been completed. After testing, the game is approved for launch.

DEPLOYMENT – the game builds are deployed to the stores and platforms and are then officially released to the public.

MARKETING – post-production marketing is the process of promoting and selling the game and optimizing game discovery, including market research and advertising.