#GameDevLive – I will make a game live on Twitch!

On the 3rd of December I will do a live stream on the OmegaDish channel, where I will start developing a game. In the first episode we will create a prototype of the game from scratch using Unity 5, and then in the following episodes, we will continue to work on the game to complete it. I will also take requests and input from the chat so YOU can influence how the game will function and look!

Following this series will get you started with game development and Unity, teach you the different options for monetizing your game, and show you how to publish it.

image

Countdown:
http://www.timeanddate.com/countdown/launch?iso=20151203T10&p0=234&msg=%23GameDevLive&font=cursive&csz=1&swk=1

 

Hope to see you there!

Posted in Game programming, Tutorial, Unity

MVP Lander: Source code from my MVP Summit session

image

As an ex-MVP it was awesome to be back at MVP Summit as a speaker. In today's session I spent about 40 minutes on this little game where you control a lander using A and D for rotation, and W or Space for thrust. You need to land on the platform somewhere on the moon below you.

You can download the source code and the exported Windows 10 Universal app here:
http://1drv.ms/1GMES9k

See my previous post for the SpeechSynthesis, VoiceRecognition and Cortana integration:
https://digitalerr0r.wordpress.com/2015/10/21/voice-activating-your-windows-10-games-using-speech-synthesis-voice-recognition-and-cortana/

Thanks for attending my session, enjoy!

image

Posted in Tutorial, Unity

Unity 5 Shader Programming #3: Specular Light

image

Hi, and welcome to Tutorial 3 of my Unity 5 Shader Programming tutorial. Today we are going to implement another lighting algorithm called Specular Light. This algorithm builds on the Ambient and Diffuse lighting tutorials, so if you haven't been through them, now is the time. 🙂

Specular Light


So far, we have a basic light model to illuminate objects. But what if we have a polished or shiny object we want to render? Say a metal surface, plastic, glass, a bottle and so on? Diffuse light does not include any of the tiny reflections that make a smooth surface shine.

To simulate this shininess, we can use a lighting model named Specular highlights.
Specular highlights calculate another vector that simulates a reflection of the light source, which hits the camera, or "the eye".

What's "the eye" vector, you might ask? Well, the answer is pretty simple: it's the vector that points from the surface we are shading toward the camera position.

One way to calculate the specular light is

I = Ai*Ac + Di*Dc*(N.L) + Si*Sc*(R.V)^n

Where

R=2*(N.L)*N-L

This is called the Phong model for specular light.

This model computes the angle between the reflection vector R and the view vector V. It describes how much of the reflection hits the camera lens directly.

There is another way of calculating this called the Blinn-Phong model where you don’t need to calculate the reflection vector all the time.

 

Blinn-Phong?

In Blinn-Phong, instead of calculating the reflection vector R, we calculate the halfway vector H between the view vector and the light direction vector, meaning we can replace the dot product between R and V with the dot product between N and H.

 

 

I = Ai*Ac + Di*Dc*(N.L) + Si*Sc*(N.H)^n

where H:

H = normalize(L + V)

Then we have a parameter n that describes how shiny the surface is: the higher the value, the smaller and sharper the highlight.

The biggest visual difference between the two implementations is that while the Phong highlight always has a circular shape, the Blinn-Phong highlight becomes elliptical at steep angles. This mimics the real world more closely.

Both models have their pros and cons, which we won't discuss in this article.
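Before moving to shader code, the two models are easy to compare numerically on the CPU. Below is a small Python sketch of both specular terms, following the equations above (the helper functions and the sample vectors are mine, purely for illustration):

```python
import math

def normalize(v):
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def reflect(l, n):
    # R = 2*(N.L)*N - L, with L pointing from the surface toward the light
    ndotl = dot(n, l)
    return tuple(2 * ndotl * nc - lc for nc, lc in zip(n, l))

def phong_specular(n, l, v, shininess):
    # Phong: (R.V)^n, clamped to zero like saturate() in a shader
    r = reflect(l, n)
    return max(0.0, dot(r, v)) ** shininess

def blinn_phong_specular(n, l, v, shininess):
    # Blinn-Phong: (N.H)^n with H the normalized halfway vector between L and V
    h = normalize(tuple(lc + vc for lc, vc in zip(l, v)))
    return max(0.0, dot(n, h)) ** shininess

n = (0.0, 1.0, 0.0)                 # surface normal
l = normalize((0.3, 1.0, 0.2))      # direction toward the light
v = normalize((-0.3, 1.0, -0.2))    # direction toward the camera (mirror of l)

# with V the exact mirror of L about the normal, both terms peak at 1.0
print(phong_specular(n, l, v, 25))
print(blinn_phong_specular(n, l, v, 25))
```

Tilting the view vector away from the mirror direction makes both terms fall off, at slightly different rates, which is exactly the visual difference described above.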

 

Implementation

The implementation is straightforward, with little new compared to the previous tutorials. In other words, let's get started with the source:

Shader "UnityShaderTutorial/Tutorial3SpecularLight-GlobalStates" {
	SubShader
	{
		Pass
		{
			Tags{ "LightMode" = "ForwardBase" }

			CGPROGRAM
			#include "UnityCG.cginc"

			#pragma target 2.0
			#pragma vertex vertexShader
			#pragma fragment fragmentShader

			float4 _LightColor0;

			struct vsIn {
				float4 position : POSITION;
				float3 normal : NORMAL;
			};

			struct vsOut {
				float4 screenPosition : SV_POSITION;
				float4 position : TEXCOORD0;
				float3 normal : NORMAL;
			};

			vsOut vertexShader(vsIn v)
			{
				vsOut o;
				o.screenPosition = mul(UNITY_MATRIX_MVP, v.position);
				o.normal = normalize(mul(v.normal, _World2Object));
				o.position = v.position;

				return o;
			}

			float4 fragmentShader(vsOut psIn) : SV_Target
			{
				float4 ambientLight = UNITY_LIGHTMODEL_AMBIENT;

				float4 lightDirection = normalize(_WorldSpaceLightPos0);

				float4 diffuseTerm = saturate( dot(lightDirection, psIn.normal));
				float4 diffuseLight = diffuseTerm * _LightColor0;
				
				float4 cameraPosition = normalize(float4( _WorldSpaceCameraPos,1) - psIn.position);
				
				// Blinn-Phong
				float4 halfVector = normalize(lightDirection+cameraPosition);
				float4 specularTerm = pow( saturate( dot( psIn.normal, halfVector)), 25);

				// Phong
				//float4 reflectionVector = reflect(-lightDirection, float4(psIn.normal, 0));
				//float4 specularTerm = pow(saturate(dot(reflectionVector, cameraPosition)),15);
				
				return ambientLight + diffuseLight + specularTerm;
			}

			ENDCG
		}
	}
}

There are two main differences here: we need the vertex position in the fragment shader, and we need the code that calculates the updated light equation.

image

This is just a pass-through from the Vertex Shader.

Then we need to do the specular calculation itself.

image

First we compute the view vector from the built-in variable _WorldSpaceCameraPos and the vertex position.

Then we calculate the half vector by normalizing the sum of the light direction and the view vector.

The last thing we need to calculate is the specular term itself, (N.H)^n, and add it to our existing lighting equation. Here we are using a function called pow(x, y), which raises the specified value x to the specified power y.

25 is the shininess factor; feel free to play around with the value.
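To get a feel for what that exponent does, here is a tiny Python sketch (nothing from the shader itself, just the pow behaviour on a clamped dot product):

```python
# saturate(dot(N, H)) is a value between 0 and 1; pow() shapes its falloff.
# A higher exponent makes the highlight smaller and sharper.
for ndoth in (1.0, 0.95, 0.8):
    low = ndoth ** 5       # low shininess: broad, dull highlight
    high = ndoth ** 25     # high shininess: tight, sharp highlight
    print(f"N.H = {ndoth}: n=5 -> {low:.3f}, n=25 -> {high:.3f}")
```

At N.H = 1.0 (dead center of the highlight) both exponents give full intensity; away from the center, the higher exponent fades out much faster.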

As you can see, we didn't use any new concepts here beyond the specular calculation itself.

Download source

http://1drv.ms/1O37i1d

Posted in Shaders, Tutorial, Unity

Voice Activating your Windows 10 games using Speech Synthesis, Voice Recognition and Cortana

image

This blog post is all about using the Windows 10 APIs for integrating Speech Synthesis, Voice Recognition and Cortana with your Unity 5.2 games.

imageimage

There are a lot of different ways of doing this, but I decided to implement it this way to keep your focus on the important things that are happening. If you don't want to know any of this, feel free to download the sample project and try it for yourself. If you wish to add this to your own game, you will need to know this and follow the steps given. You will also most likely use this in a very customized way, something that will be very simple to do once you understand the basics.

We are implementing a fair number of features here, so to help you get an overview, we are focusing on 4 components today.

1) We have the code that needs to be executed inside of Unity. This code controls everything, and enables you to decide how to voice activate your game world.

2) Then we have one solution that needs to be added to the exported Windows 10 UWA solution, implementing the interaction between your Unity game and the Windows 10 APIs. Currently, this takes a few questions with associated answers, feeds them to the Speech and Voice APIs, sets up a listening session and so on.

3) The other solution is the logic that enables you to integrate Cortana with your game. This has nothing to do with the in-game experience itself, but enables Cortana to launch your game, as well as run custom logic using an App Service (a service that runs as a background task in your app).

4) Then we have the logic we need to add to the exported game itself to bind everything together.

This video explains the basics of what’s going on with the technical parts of the plugin.

 

For an in-depth session about Speech Synthesis, Voice Recognition and Cortana Integration, I recommend checking out this session from BUILD 2015:
https://channel9.msdn.com/events/Build/2015/3-716

 

Using the plugin in Unity

image

To use the plugin, you must add the VoiceBot script to a GameObject. You can of course modify how it interacts with your own game logic; this is just an example. Also, the Windows10Interop class needs to be in the project solution, as this is the logic that will communicate with the plugin itself.

Using the example VoiceBot-component

The component is simple. It needs a reference to a panel that has the dialogue Text in it, as well as the Text itself. These are used to hide or show the questions you can ask, depending on how far away you are from the bot. The Text item itself is used to render the possible questions.

image

The Windows10Interop class has two functions: one to request speech, another to stop the listening session.

The VoiceBot communicates with a plugin in the final exported project. So once you have your game running, you will need to export it and set up this integration.

 

Setting up the Windows 10 solution

This will look like a lot of steps, but I'm covering everything in detail with a lot of screenshots; it usually takes about 15 minutes at most.

We have two different components: one is the in-game voice and speech handling, and the other one integrates your game with Cortana (so you can talk to and interact with your app from Cortana at the Windows 10 OS level).

The first thing you need to do is add a reference to the BotWorldVoiceCommandService and VoiceSpeech plugin projects, by either referencing the built DLLs or by adding the projects to the solution. The latter is best, as you will probably need to customize the code or change it based on the needs of your game. To do this, right-click the solution and add an existing project to it.

image

Navigate to the EXPORT folder to find the project (or anywhere where you downloaded the source), and add it.

image

The next thing we need to do is add a reference to the project from the CortanaWorld project (our exported solution):

image

Navigate to Projects and it will automatically show:

image

We need to do the same for the Speech Plugin-project as well (add solution and reference to it):
image

Then we need to register the added VoiceSpeech class as an App Service in Package.appxmanifest. Double-click it to open the settings, and click the Declarations tab.

image

Add an App Service:
image

Enter the following information:

image

This lets our app know where and how to find the App Service. It will run in the background of our app, aiding our interaction with Cortana.

 

Voice and Speech

Now we are ready to interact with the voice recognition and speech synthesis APIs of Windows 10. First, we need to add one more thing to our app, and this is an invisible component that will play the generated voice synth.

Go to MainPage.xaml and open it in design view.

Add this line below the Grid:
<MediaElement x:Name="Media"></MediaElement>

image

Next we need to connect the event listeners of our Windows10Interop class in the Unity logic to the right functions in the plugin, as well as pass in the Media element we just added. This is basically how we communicate between Unity and the Windows 10 plugin.
This is done by adding the following three lines of code to the MainPage.xaml.cs file, in the OnNavigatedTo function:

Plugin.Windows10.VoiceSpeech.Media = Media;
Windows10Interop.SpeechRequested += Plugin.Windows10.VoiceSpeech.StartListening;
Windows10Interop.StopSpeechRequested += Plugin.Windows10.VoiceSpeech.StopListening;

image

The last thing we need to do is add the Microphone and Internet capabilities to our project. Open the Package.appxmanifest:

image

Click Capabilities and check the Microphone and Internet (Client) capabilities.

image

This allows us to use these capabilities in the app.

 

Cortana

To let Cortana know about your app, and learn how to interact with it, we will need to add a command file that contains all of the interactions we wish to implement. This happens inside a VCD (Voice Command Definition) file – simply an XML file that contains all of the commands you want to integrate with.

image

You can add this by creating a new XML file in your project, and adding the following content:

<?xml version="1.0" encoding="utf-8"?>
<VoiceCommands xmlns="http://schemas.microsoft.com/voicecommands/1.2">
  <CommandSet xml:lang="en-us" Name="CommandSet_en-us">
    <AppName> Bot World </AppName>
    <Example> Bot World, I want to play </Example>

    <Command Name="checkScore">
      <Example> Bot World, Did anyone beat me? </Example>
      <ListenFor RequireAppName="BeforeOrAfterPhrase"> Did anyone beat me </ListenFor>
      <Feedback> Yes. </Feedback>
      <VoiceCommandService Target="BotWorldVoiceCommandService"></VoiceCommandService>
    </Command>

    <Command Name="startPlay">
      <Example> Bot World, I want to play </Example>
      <ListenFor RequireAppName="BeforeOrAfterPhrase"> I want to play </ListenFor>
      <Feedback> Get ready! </Feedback>
      <Navigate/>
    </Command>
  </CommandSet>
</VoiceCommands>

Next, you will need to add the code that will execute when the app is launched with a voice command. This is done in the App.xaml.cs file, in the OnActivated function:

case ActivationKind.VoiceCommand:
    var commandArgs = args as VoiceCommandActivatedEventArgs;
    SpeechRecognitionResult speechRecognitionResult = commandArgs.Result;
    string voiceCommandName = speechRecognitionResult.RulePath[0];

    switch (voiceCommandName)
    {
        case "startPlay":
            {
                break;
            }
        case "checkScore":
            if (speechRecognitionResult.SemanticInterpretation.Properties.ContainsKey("message"))
            {
                string message = speechRecognitionResult.SemanticInterpretation.Properties["message"][0];
            }
            break;
    }
    break;
It will look like this:
image

This function checks how the application was activated. If it was by voice, it gets the voice command that activated the app, and lets you write custom logic based on which command it was.

We also need to register the VCD file. Still in the App.xaml.cs file, add this code to the OnLaunched function. This simply takes all the commands and installs them into Cortana. They will be removed if you uninstall the app.

try
{
    var storageFile =
    await Windows.Storage.StorageFile
    .GetFileFromApplicationUriAsync(new Uri("ms-appx:///vcd.xml"));

    await Windows.ApplicationModel.VoiceCommands.VoiceCommandDefinitionManager
        .InstallCommandDefinitionsFromStorageFileAsync(storageFile);

    Debug.WriteLine("VCD installed");
}
catch
{
    Debug.WriteLine("VCD installation failed");
}

It will look like this:
image

That should be all. Now you can run your game, ask the sample bot a question from the given list, and interact with it using Cortana.

image

 

Download source here:

https://1drv.ms/f/s!AnvjKuzpB3ArlsgVXTwpx52CD5CK-w

Posted in Cortana, Unity

Unity 5 Shader Programming #2: Diffuse Light

image


Hi, and welcome to Tutorial 2 of the Unity 5 Shader Programming series. Today we are going to continue where we left off in Tutorial 1. We will make the lighting equation a bit more interesting this time by adding a direction to it.

This tutorial consists of two parts. In the first, we implement the full shader, doing everything ourselves. This means that we need to set the light direction, the light colors and so on as parameters to it. However, this isn't the right way to do it in Unity, so we will then re-implement it using ShaderLab's built-in variables to use the actual properties of the lights in our scene.

Anyways, Diffuse Light?

Diffuse light isn't very different from ambient light implementation-wise, but it has one very important property: a direction to the light. As we saw, using only ambient light can make a 3D scene look 2D. By adding a direction, we will increase the realism of the scene and give it a nice 3D look.

As mentioned in Tutorial 1, ambient light has the following equation:

I = Aintensity * Acolor (2.1)

Diffuse light builds on this, adding a direction to the equation:

I = Aintensity x Acolor + Dintensity x Dcolor x N.L (2.2)

From this equation, you can see that we still use the ambient light, with the addition of two more variables describing the color and intensity of the diffuse light, and two vectors, N and L, describing the surface normal N and the light direction L.

We can think of diffuse lighting as a value that indicates how much a surface reflects light. The reflected light will be stronger and more visible as the angle between the normal N and the light direction L gets smaller.

image

If L is parallel with N, the most light is reflected, and if L is parallel with the surface, the minimal amount of light is reflected.

To compute the angle between L and N, we can use the dot product (also called the scalar product). It relates the angle between two given vectors and can be defined as follows:
N.L = |N| x |L| x cos(a)

where |N| is the length of vector N, |L| is the length of vector L, and a is the angle between the two vectors.
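Since the shader will work with unit-length vectors, |N| = |L| = 1 and N.L reduces to cos(a). A quick Python check of the three cases discussed above (the example vectors are mine, purely for illustration):

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

n = (0.0, 1.0, 0.0)  # unit surface normal

# light straight along the normal: a = 0, cos(a) = 1 -> maximum diffuse
print(dot(n, (0.0, 1.0, 0.0)))

# light at 60 degrees from the normal: cos(60) = 0.5 -> half intensity
a = math.radians(60)
print(dot(n, (math.sin(a), math.cos(a), 0.0)))

# light parallel to the surface: a = 90, cos(90) = 0 -> no diffuse light
print(dot(n, (1.0, 0.0, 0.0)))
```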

 

Implementing the shader

Let’s take a look at the code, and I will explain what happens after that:

Shader "UnityShaderTutorial/Tutorial2DiffuseLight" {
	Properties{
		_AmbientLightColor("Ambient Light Color", Color) = (1,1,1,1)
		_AmbientLighIntensity("Ambient Light Intensity", Range(0.0, 1.0)) = 1.0

		_DiffuseDirection("Diffuse Light Direction", Vector) = (0.22,0.84,0.78,1)
		_DiffuseColor("Diffuse Light Color", Color) = (1,1,1,1)
		_DiffuseIntensity("Diffuse Light Intensity", Range(0.0, 1.0)) = 1.0
	}
		SubShader
	{
		Pass
		{
			CGPROGRAM
			#pragma target 2.0
			#pragma vertex vertexShader
			#pragma fragment fragmentShader

			float4 _AmbientLightColor;
			float _AmbientLighIntensity;
			float3 _DiffuseDirection;
			float4 _DiffuseColor;
			float _DiffuseIntensity;


			struct vsIn {
				float4 position : POSITION;
				float3 normal : NORMAL;
			};

			struct vsOut {
				float4 position : SV_POSITION;
				float3 normal : NORMAL;
			};

			vsOut vertexShader(vsIn v)
			{
				vsOut o;
				o.position = mul(UNITY_MATRIX_MVP, v.position);
				o.normal = v.normal;
				return o;
			}

			float4 fragmentShader(vsOut psIn) : SV_Target
			{
				float4 diffuse = saturate(dot(_DiffuseDirection, psIn.normal));
				return (_AmbientLightColor * _AmbientLighIntensity) 
					 + (diffuse * _DiffuseColor * _DiffuseIntensity);
			}

			ENDCG
		}
	}
}

The first thing we need is to set a few new properties. We follow the same template as from Tutorial 1, adding a property for our light direction, the color of our light and then how intense it will be.

image

And we create the variables these properties are referring to:
image

Now we have all the variables we need to implement our new light equation. However, there is one very important thing we still need before we can start calculating: as this equation requires a normal, we need to pass it to the shaders.

This is done by simply adding it to the Vertex Shader input structure, as well as to the output structure, since we need it in our Pixel Shader, where all our calculations happen.
image

Our Vertex Shader will be pretty much the same, except that we pass through the Normal:
image

Now we have our normal data ready for use in our calculations!

The first thing we need to do is take the direction of our light and the normal, and calculate the dot product between them. We also use saturate to clamp the result between 0 and 1. The dot product can be in the range −1 to 1; however, we don't need the negative values, as these correspond to light coming from behind the surface we are currently shading.
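The clamping step can be mirrored in plain Python (the function name copies HLSL's saturate; this is just an illustration, not shader code):

```python
def saturate(x):
    # HLSL's saturate(): clamp the value to the [0, 1] range
    return max(0.0, min(1.0, x))

print(saturate(-0.4))  # light behind the surface: clamped to 0.0
print(saturate(0.7))   # already in range: stays 0.7
print(saturate(1.3))   # clamped to 1.0
```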

Next, we implement our light equation from 2.2, representing our final pixel color.
image

And there you go, the result should be something like this:
imageimage

As discussed in the intro, this doesn't scale well, as all our properties are hardcoded. What if you add more objects, and you want to change the direction of the light?

Download: Source

ShaderLab Global States

Luckily, Unity makes this simple. By including a shader include file, you get access to a lot of global variables that you can use.

image

There are a lot of variables in here, each with their own use. The ones we are interested in today are:
1) UNITY_LIGHTMODEL_AMBIENT, to take the ambient light color from the project's Lighting Settings.

image

2) _WorldSpaceLightPos0 and _LightColor0, to use the properties of our scene's Directional Light:
image

Note:
Read more about UnityCG.cginc and the ShaderLab built-in variables in the Unity documentation.

To implement all of this, we will almost completely rewrite our shader.

Shader "UnityShaderTutorial/Tutorial2DiffuseLight-GlobalStates" {
	SubShader
	{
		Pass
		{
			Tags{ "LightMode" = "ForwardBase" }

			CGPROGRAM
			#include "UnityCG.cginc"

			#pragma target 2.0
			#pragma vertex vertexShader
			#pragma fragment fragmentShader

			float4 _LightColor0;

			struct vsIn {
				float4 position : POSITION;
				float3 normal : NORMAL;
			};

			struct vsOut {
				float4 position : SV_POSITION;
				float3 normal : NORMAL;
			};

			vsOut vertexShader(vsIn v)
			{
				vsOut o;
				o.position = mul(UNITY_MATRIX_MVP, v.position);
				o.normal = normalize(mul(v.normal, _World2Object));

				return o;
			}

			float4 fragmentShader(vsOut psIn) : SV_Target
			{
				float4 AmbientLight = UNITY_LIGHTMODEL_AMBIENT;

				float4 LightDirection = normalize(_WorldSpaceLightPos0);

				float4 diffuseTerm = saturate(dot(LightDirection, psIn.normal));
				float4 DiffuseLight = diffuseTerm * _LightColor0;
				
				return AmbientLight + DiffuseLight;
			}

			ENDCG
		}
	}
}

The first thing you might have noticed is that we don't have ANY properties for this shader. It will just work.

We also need to specify that we are using forward rendering. We do this using Tags: key/value pairs used to control the role of this pass in the lighting pipeline.

image

Next we start our CGPROGRAM and include the UnityCG include file.
image

Our first big change will be in our Vertex Shader. We didn't do this in the previous part since I wanted to wait for the global variables, but we need to transform our normal to world space, simply done by multiplying the normal with the _World2Object matrix.

The _World2Object matrix is the inverse of the current world matrix.

image

Then we can implement our Pixel Shader. We will take the Ambient Light from the built-in UNITY_LIGHTMODEL_AMBIENT variable. This will have the value of the ambient light color specified in the Lighting Settings window.

Next, we will get the light direction by normalizing the _WorldSpaceLightPos0 variable, then we calculate the dot product in the same way as earlier.
image

Now, the output will be something like this:
image

And that's it for today. There is much more to this, like point lights and multiple lights, but bear with me – we have just begun!

Downloads

Download: Source

Posted in Shaders, Tutorial, Unity

Unity 5 Shader Programming #1: An introduction to shaders

image


So, you want to learn the magic that turns 3d models and textures to gold?

This tutorial is the first part of a series where I will cover a lot of different shaders – much like my XNA shader tutorial series. However, this tutorial will be an introduction to shader programming. You will learn the basics of the graphics pipeline, what a shader really is, write your first Unity 5 shader, and learn a very basic lighting equation – The Ambient Light.

2001: A shader odyssey – A brief history of shaders
Shaders have been used in ray tracers and in the movie industry for a long time, but the story for real-time rendering is different.
Before DirectX 8 and the OpenGL ARB assembly language, GPUs had a fixed way to transform pixels and vertices, called "the fixed pipeline". This made it impossible for developers to change how pixels and vertices were transformed and processed after passing them to the GPU, and made games look quite similar graphics-wise.

In 2001, DirectX 8 added support for vertex and pixel shaders, a utility developers could use to decide how vertices and pixels should be processed when going through the pipeline, giving them a lot of flexibility.
An assembly language was used to program the shaders, which made it pretty hard to be a shader developer, and Shader Model 1.0 was the only supported version. This changed once DirectX 9 was released, giving developers the opportunity to write shaders in a high-level language called the High Level Shading Language (HLSL), replacing the assembly shading language with something that looked more like the C language. This made shaders much easier to write, read and understand. OpenGL got a similar language called the OpenGL Shading Language (GLSL).

DirectX 10 introduced a new shader, the Geometry Shader, as part of Shader Model 4.0. DirectX 11 introduced shaders for tessellation, and compute shaders for GPGPU.

 

Taking the red pill

So, the question is: what is a shader? Well, a shader is simply a set of instructions executed on the graphics processing unit (GPU), performing the specific tasks you need. This makes it possible for a developer to control all the programmable stages of the graphics pipeline. It also makes you responsible for all of the calculations, and you need to do (almost) everything yourself. It will also enable you to do anything you want… so are you ready for the red pill and a slide through the graphics pipeline?

The Graphics Pipeline?
It might not be obvious to developers who aren't familiar with low-level graphics programming, but all the data you see on a screen comes from structures of data. Typically, this is a 3D model your artist made in a 3D modeling package, where each entry in the structure has the vertex position, normal direction, tangents, texture coordinates and color. It can also be a structure made procedurally by an awesome algorithm you wrote, and so on. Even sprites, particles and textures in your game world are usually rendered using vertices.

Diagram of the data flow in the Direct3D 11 programmable pipeline
Image from MSDN: https://msdn.microsoft.com/en-us/library/windows/desktop/ff476882(v=vs.85).aspx

This data is sent into the pipeline at the Input Assembler, processed all the way through it, and ends up as pixels on your monitor. Think of it like your dirty gray-looking car passing through a car wash: it's a black box that does stuff to it in different stages, like spraying water on it, adding soap, brushing it and drying it, and when you get out of there, your car has color, reflections and everything – it feels like you got a new car.

All the rounded boxes in the image above are the programmable stages in the graphics pipeline. Understanding shaders and being able to get creative with them is like getting root access to the graphics world.

Vertex Shader Stage
This shader is executed once per vertex, and is mostly used to transform the vertex, do per-vertex calculations, or prepare data for use later down the pipeline.

Hull Shader Stage (Only used for tessellation)
Takes the vertices as input control points and converts them into the control points that make up a patch (a fraction of a surface).

Domain Shader Stage (Only used for tessellation)
This stage calculates the vertex position of a point in the patch created by the Hull Shader.

Geometry Shader Stage
A Geometry Shader is an optional program that takes primitives (a point, line or triangle, for example) as input, and can modify them, or remove or add geometry.

Pixel Shader Stage
The Pixel Shader (known as the Fragment Shader in the OpenGL world) is executed once per pixel, giving color to the pixel. It gets its input from the earlier stages in the pipeline, and is mostly used for calculating surface properties, lighting and post-process effects.

Optimize!
Each of the stages above is usually executed thousands of times per frame, and can be a bottleneck in the graphics pipeline. A simple cube made from triangles typically has around 36 vertices. This means that the Vertex Shader will be executed 36 times every frame, and if you aim for 60 fps, it will be executed 2160 times per second!

You should optimize these as much as you can. 🙂
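The cube arithmetic above, spelled out (assuming 6 faces x 2 triangles x 3 vertices, with no index reuse):

```python
faces, triangles_per_face, vertices_per_triangle = 6, 2, 3
vertices = faces * triangles_per_face * vertices_per_triangle
fps = 60

print(vertices)        # vertex shader runs per frame
print(vertices * fps)  # vertex shader runs per second at 60 fps
```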

 

Developing Shaders in Unity 5

So that was a very brief introduction to shaders and the graphics pipeline. You might now understand that something called a Graphics Pipeline exists, and that you can program parts of it to be able to do whatever you want, but I guess you still have a lot of questions.

There are many ways to develop a shader, like writing it in HLSL, GLSL or Cg. Unity, however, uses a language named ShaderLab to define a material. The ShaderLab language can embed Cg/HLSL (Cg and HLSL are very similar) and also supports inline GLSL. We will take a look at this later, but for now, let's take a closer look at ShaderLab.

ShaderLab
The best way to learn this is to jump directly to Unity. Launch Unity 5 and create a new project.

Cg/HLSL
High Level Shading Language (HLSL) is used to develop shaders using a language similar to C. Just as in C, HLSL gives you tools like variables, functions, data types, control flow (if/else/for/do/while and so on) and much more, in order to implement the logic for processing vertices and pixels. Below is a table of some keywords that exist in HLSL. This is not all of them, but some of the most important ones.
image

image

The Cg language also supports fixed-point numbers, indicated by fixed, fixed4 and so on.

For a complete list (HLSL): https://msdn.microsoft.com/en-us/library/bb509587(v=vs.85).aspx and for Cg: https://en.wikipedia.org/wiki/Cg_(programming_language)

HLSL offers a huge set of functions that can be used to solve complex equations. As we go through this series, we will cover many of them, but for now, here is a list with just a handful. It's important to learn them in order to create high-performance shaders without reinventing the wheel.

image

For a complete list: https://msdn.microsoft.com/en-us/library/ff471376.aspx
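To get a feel for a few of these intrinsics before using them in shaders, here are plain-Python equivalents of some common ones (the behaviour mirrors the documented HLSL functions; the Python code itself is mine, for illustration only):

```python
def saturate(x):
    # clamp to [0, 1]
    return max(0.0, min(1.0, x))

def lerp(a, b, t):
    # linear interpolation between a and b
    return a + (b - a) * t

def dot(a, b):
    # scalar product of two vectors
    return sum(x * y for x, y in zip(a, b))

def reflect(i, n):
    # i - 2 * dot(i, n) * n, where i is the incident vector and n a unit normal
    d = dot(i, n)
    return tuple(ic - 2 * d * nc for ic, nc in zip(i, n))

print(saturate(1.3))                               # 1.0
print(lerp(0.0, 10.0, 0.25))                       # 2.5
print(reflect((1.0, -1.0, 0.0), (0.0, 1.0, 0.0)))  # (1.0, 1.0, 0.0)
```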

Creating a shader in Unity 5

1) The first thing we need is a 3d model that we can apply the shader on. In this case, we can simply add a Sphere to the scene, so do that now.

2) Next we need the shader itself. A shader can be added to the scene by simply clicking
Assets->Create->Shader->Standard Surface Shader

image

3) Now, create a new folder called Shaders and drag this shader into this folder. Also, give it a proper name:
image

4) We also need a material that will use our shader, so add a material to the scene and give it a proper name.

5) Click the material, and you can see the details in the inspector:
image

This material is using the Standard shader, as seen at the top of the inspector. All the properties in it are simply properties defined inside the brand new Standard Shader. This is a great shader that introduces physically based shading to Unity.

6) Anyways, that’s cool but we want to change it. Click the shader dropdown and find the shader we just made:
image

Once selected, you can see that our inspector changed and we now have a lot less properties to choose from:
image

7) Now, drag this material onto the sphere in our scene and change some of the properties; you can see that this changes the look of the sphere. It doesn't change the shader itself, just the input to it. The shader then uses this input in some magic formula that produces what you see:
image

Before we inspect the code, let’s learn how a ShaderLab shader is structured.

image

You can give the shader a category and a name. The category is used to place the shader in the shader dropdown, and the name is used to identify it.

Next, each shader can have many properties. These can be numbers and floats, color data or textures. ShaderLab has a way of defining these so they look good and user-friendly in the Unity inspector.

Next we can have one or more SubShaders. A modern shader requires modern hardware, but we would still like our game to run on older hardware as well, so each SubShader can contain a different implementation of the shader, supporting different hardware.

Inside each SubShader there must be at least one Pass, as a shader can be executed in multiple passes: each pass renders the geometry once before moving on to the next. Keep the number of passes to a minimum for performance reasons; most shaders only need one.

Your shader implementation lives inside the pass, surrounded by CGPROGRAM and ENDCG (or GLSLPROGRAM and ENDGLSL if you want to use GLSL).

Then we have the FallBack. If none of the SubShaders are supported by the hardware, we can fall back to a simpler built-in shader like Diffuse.
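Putting those pieces together, the overall ShaderLab structure looks roughly like this (the category and name are placeholders):

```shaderlab
Shader "Category/ShaderName" {
    Properties {
        // properties exposed in the Unity inspector go here
    }
    SubShader {
        Pass {
            CGPROGRAM
            // vertex and fragment shader code goes here
            ENDCG
        }
    }
    // used if no SubShader is supported by the hardware
    FallBack "Diffuse"
}
```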

8) Now, open your shader and delete all the code so you are left with an empty shader file. We do this because the generated shader is pretty advanced at this stage, and we would like to learn the basics by doing everything ourselves.

 

Implementing your first shader: Ambient Light

By now you should have a general understanding of a shader. It’s simply a piece of code executed somewhere in the graphics pipeline!

The first shader you will write is a really simple one that just transforms the vertices and calculates the ambient light on the model.
But wait… What is this “Ambient light” thing we are talking about?

Well, ambient light is the basic light in a scene that’s “just there”. In a completely dark room, the ambient light is typically all black, but when walking in a dimly lit room or outside, there is almost always some light that makes it possible to see. This light has no direction and can be seen as the color of any faces not hit by any other light: the base light of any object in your game world.

Before we can implement the ambient light shader, we need to understand it. The formula for Ambient light can be seen in 1.1 below.
I = Aintensity × Acolor    (1.1)

I is the final light color on a given pixel, Aintensity is the intensity of the light (usually between 0.0 (0%) and 1.0 (100%)), and Acolor is the color of the ambient light. This color can be a hardcoded value, a parameter or a texture.

The shader can be seen in the following code snippet.

Shader "UnityShaderTutorial/Tutorial1AmbientLight" {
	Properties {
		_AmbientLightColor ("Ambient Light Color", Color) = (1,1,1,1)
		_AmbientLightIntensity("Ambient Light Intensity", Range(0.0, 1.0)) = 1.0
	}
	SubShader 
	{
		Pass 
		{
			CGPROGRAM
			#pragma target 2.0
			#pragma vertex vertexShader
			#pragma fragment fragmentShader

			fixed4 _AmbientLightColor;
			float _AmbientLightIntensity;

			float4 vertexShader(float4 v:POSITION) : SV_POSITION
			{
				return mul(UNITY_MATRIX_MVP, v);
			}

			fixed4 fragmentShader() : SV_Target
			{
				return _AmbientLightColor * _AmbientLightIntensity;
			}
			}

			ENDCG
		}
	}
}

Let’s dive into the details of this shader.

Properties and Variables

The first thing we do is set the name of the shader (this can be anything you want), and then we define some properties.

As mentioned earlier, ShaderLab has a special way of defining properties. The general pattern is: first the name of the property, then a display name that will be shown in the Unity Editor, a property type, and a default value.

In our shader, we define two properties, one for the Ambient Color and one for the intensity.

Next we go to the shader itself. Since this is a simple shader that will run on most hardware, we set the target to 2.0.

Then we define the name of the function that will be used as the vertex shader. In our case this is the function vertexShader. We do the same for our fragment shader (pixel shader).

image

 

We also define the variables that the properties point at; their names must match the property names.

image

The light color is a vector with four values (RGB and alpha), while the intensity is a float.

This gives us what we need to implement our vertex and pixel shader.

 

The Vertex Shader
The Vertex Shader does one thing only, and that is a matrix calculation. The function takes a single input, the vertex position, and has a single output: the transformed position of the vertex in screen space (SV_POSITION), that is, the position of the vertex on the screen, stored in the return value of the function. This value is obtained by multiplying the vertex position (currently in local space) with the combined Model, View and Projection matrices, easily obtained through Unity’s built-in state variable UNITY_MATRIX_MVP.

This is done to position the vertices at the correct place on your monitor, based on where the camera is (view) and the projection.

image

SV_POSITION is a semantic, and semantics are used to pass data between the different stages of the programmable pipeline; SV_POSITION specifically is interpreted by the rasterizer stage. Think of it as one of many registers on the GPU you can store values in. This semantic stores a vector value (XYZW), and since the data is stored in SV_POSITION, the GPU knows that its intended use is positioning.

In a later tutorial, we will look at Vertex Shaders that take multiple values as input and pass multiple values down the pipeline.
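As a small preview (a sketch only, not part of this tutorial’s shader; the struct and field names are hypothetical), such a vertex shader typically groups its inputs and outputs into structs of semantics:

```hlsl
struct vertexInput
{
    float4 pos    : POSITION;    // vertex position in local space
    float3 normal : NORMAL;      // vertex normal, used for lighting
};

struct vertexOutput
{
    float4 pos    : SV_POSITION; // transformed position for the rasterizer
    float3 normal : TEXCOORD0;   // passed down to the fragment shader
};

vertexOutput vertexShader(vertexInput v)
{
    vertexOutput o;
    o.pos = mul(UNITY_MATRIX_MVP, v.pos);
    o.normal = v.normal;
    return o;
}
```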

 

The Pixel Shader
This is where all the coloring happens and our algorithm is implemented. The algorithm doesn’t need any input, as we won’t do any advanced lighting calculations yet (we will get to that in the next tutorial). The output is the RGBA value of our pixel color, stored in SV_Target (a render target, our final output).

image

As you can see, this function takes the Ambient Light Color and multiplies it by the intensity. The output will be something like this:

image
image

NOTE: A built-in Unity shader variable (UNITY_LIGHTMODEL_AMBIENT) exists for taking the ambient light from your scene’s lighting settings.
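For example (a sketch; UNITY_LIGHTMODEL_AMBIENT is Unity’s built-in ambient state variable), the fragment shader could use that built-in value instead of our own color property:

```hlsl
fixed4 fragmentShader() : SV_Target
{
    // use the ambient color from the scene's lighting settings
    return UNITY_LIGHTMODEL_AMBIENT;
}
```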

It looks flat for now, as this light doesn’t have any direction. However, in the next tutorial we will build on this to implement diffuse light, so stay tuned!

The source can be downloaded from OneDrive here: http://1drv.ms/1fTVwXk

Posted in HLSL, Shaders, Tutorial, Unity | 8 Comments

DirectX 12 Programming #4: Resources and Resource Binding

image

Welcome back to the DirectX 12 Programming tutorial series! The previous tutorials were all about getting you up and running with the default DirectX 12 application and understanding the PSO and Command Lists. We still have some basics to cover before we can move on to the really fun stuff.

Today we will take a closer look at resources and buffers. Most of the common tasks in DirectX 12 involve the use of resources: models, textures, data and so on. These resources need to be loaded and bound to the graphics pipeline before they can be used.

Resource Binding
To use any of these resources, you need to link them to the graphics pipeline. This is called Resource Binding. In DirectX 12, resources are bound by using Descriptors, Descriptor Tables and Descriptor Heaps.

Resources
There will be a lot of new terms today, so let’s just dive right in. The first term you will need to know is resource. As mentioned above, a resource is simply any kind of data your application will use to display what you want. This can be textures, models (vertex data) etc. All resources are derived from ID3D12Resource.

Ultimately, a resource is simply a memory buffer. The difference lies only in how you operate on it and how the GPU sees it.

In our example, we have a few resources identified by ID3D12Resource; our Vertex Buffer, Index Buffer and Constant Buffer. As you know, shader resources are bound directly to the PSO.
image

Our vertex data could have been loaded from a file exported from Blender, Maya, 3D Studio Max or any other modeling software, generated procedurally, or, as in our example, be a hardcoded list of vertices:

image

Once we have the data, we need to create a committed resource. A committed resource is created by calling CreateCommittedResource(...) on the Direct3D device. This function creates the resource object itself, together with a heap big enough to contain all of its data. We create the resource using the default heap type with no heap flags, set the size so it can contain all of our vertices, set the initial state to COPY_DEST (as it will be the destination of the vertex buffer copy), and pass a pointer that will receive the ID3D12Resource object.

image

Once we have this, we are ready to copy the resource data into it.

image

The function UpdateSubresources(...) updates our buffer with a given set of data. You can read more about subresources here if you are unfamiliar with them.

The Constant Buffer resource is also created using CreateCommittedResource.

image

DirectX 12 can map the resource without locking it, so the GPU operates on the current version while your code on the CPU updates the resource data. Once you call Unmap on the resource, DirectX updates the actual resource on the GPU.
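The CPU-side update itself is just a struct copy into the mapped memory. A minimal sketch of that pattern (the struct layout and the mapped pointer are stand-ins here; in real code the pointer comes from ID3D12Resource::Map):

```cpp
#include <cstring>

// Stand-in for the app's constant buffer layout (hypothetical).
struct ModelViewProjectionConstantBuffer {
    float model[16];
    float view[16];
    float projection[16];
};

// Copies new constant data into the mapped upload-heap memory.
// 'mapped' stands in for the pointer returned by ID3D12Resource::Map();
// the GPU keeps using the old contents until you call Unmap().
inline void UpdateConstantBuffer(void* mapped,
                                 const ModelViewProjectionConstantBuffer& cb) {
    std::memcpy(mapped, &cb, sizeof(cb));
}
```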

Descriptors
A descriptor is simply an object that describes a resource stored somewhere in memory (like the one we created above), and it’s used to describe that resource to the GPU.

Think of them as a view for the GPU into a set of data, like vertices or textures.

The GPU needs to know what it’s looking at and how to deal with it. A descriptor does just that: it lets the GPU know what kind of resource we are dealing with.

In previous versions of DirectX, we explicitly created a particular resource (a texture, a buffer) and set the access flags. In DirectX 12, resource binding is not tracked, so it’s your job as the programmer to handle the lifetime of these objects. Descriptors are part of the process you will need to handle.

We have different types of descriptors: Constant Buffer Views (CBV), Shader Resource Views (SRV), Unordered Access Views (UAV), Render Target Views (RTV), Samplers and many more (don’t worry if you don’t recognize these terms yet, we will get to them). An SRV descriptor, for example, lets the GPU know which resource to use (say, a texture) and that it will be used in a shader.

In our example we are dealing with a couple of descriptors, and in the next tutorials we will see a lot more of them: a Vertex Buffer that contains all the vertices we want to render, an Index Buffer containing all the indices (the order in which the vertices should be rendered) and a Constant Buffer that is used to send data to our Vertex Shader.

Let’s take a look at our descriptors:

D3D12_VERTEX_BUFFER_VIEW				m_vertexBufferView;
D3D12_INDEX_BUFFER_VIEW					m_indexBufferView;
...
...
D3D12_CONSTANT_BUFFER_VIEW_DESC				desc;

 

Once these are defined, we can set the Vertex and Index descriptors like this:
image

As you can see, we set the buffer location to the resource we created earlier, set the stride from the vertex layout (we have Position and Color data for each vertex), which defines the size of each vertex, and then set the total size by taking the size of the cubeVertices data structure.
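The stride and total size come straight from sizeof. A sketch of the arithmetic (VertexPositionColor and cubeVertices mirror the sample’s hardcoded vertex data, but the values here are hypothetical and use a triangle instead of the full cube):

```cpp
#include <cstdint>

// Mirrors the sample's vertex layout: a position and a color per vertex.
struct VertexPositionColor {
    float pos[3];
    float color[3];
};

// A hardcoded triangle instead of the sample's full cube, for brevity.
static const VertexPositionColor cubeVertices[] = {
    { { -0.5f, -0.5f, 0.0f }, { 1.0f, 0.0f, 0.0f } },
    { {  0.5f, -0.5f, 0.0f }, { 0.0f, 1.0f, 0.0f } },
    { {  0.0f,  0.5f, 0.0f }, { 0.0f, 0.0f, 1.0f } },
};

// These two values are what the vertex buffer view needs:
//   StrideInBytes = size of one vertex (6 floats = 24 bytes)
//   SizeInBytes   = size of the whole vertex array
constexpr uint32_t strideInBytes = sizeof(VertexPositionColor);
constexpr uint32_t sizeInBytes   = sizeof(cubeVertices);
```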

Descriptors are primarily placed inside a Descriptor Heap, so let’s look at that!

Descriptor Heaps
These should (although it is not always possible) contain all of the descriptors for one or more frames, and can be seen as a collection of descriptors. A heap can limit its contents to descriptors of a given type, or hold a mixed set of descriptor types.

Descriptor Tables
These are groups of descriptors inside a descriptor heap, like an array of descriptors. The graphics pipeline accesses resources through a descriptor table in a heap by using an index.

 

image

Here we can see several shaders getting a view into the heap through descriptor tables, using an index. Each of the descriptors inside the heap (D1 – D10) describes a texture or a buffer. Each table has an index and an offset.

Putting it together
A Descriptor Heap is defined like this:
image

Once defined, you need to create it using the CreateDescriptorHeap function on the D3D device. This creates a heap accepting the types Constant Buffer View, Shader Resource View and Unordered Access View (UAV), as we will be adding our Constant Buffer to it and making it visible to shaders:

image

As you can see, descriptors of the types CBV, SRV and UAV can share the same descriptor heap, while Samplers need their own. Vertex Buffers, Index Buffers, Render Targets, Depth Stencil Views and Stream Output Buffers are bound directly on a command list (discussed in the previous tutorial) and are thus not placed inside a heap.

In our example, we create one Descriptor Table for our Constant Buffer, and then make it visible in our Vertex Shader.
image

Descriptor Tables can be seen as a subset of the descriptors in a descriptor heap: an offset and a length into the heap, possibly holding descriptors of more than one type.
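Indexing into a heap is, conceptually, fixed-size pointer arithmetic over the descriptors. A sketch of that arithmetic (the names and values are hypothetical; real code obtains the heap start from ID3D12DescriptorHeap::GetCPUDescriptorHandleForHeapStart() and the increment from ID3D12Device::GetDescriptorHandleIncrementSize()):

```cpp
#include <cstdint>

// Computes where the descriptor at 'index' lives inside a heap.
// heapStart stands in for the heap's start handle and incrementSize
// for the device-dependent size of one descriptor.
inline uint64_t DescriptorAt(uint64_t heapStart,
                             uint32_t incrementSize,
                             uint32_t index) {
    return heapStart + static_cast<uint64_t>(index) * incrementSize;
}
```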

All these concepts might look like overhead, but the reason we use them comes from the way GPUs operate internally. By using descriptors, heaps and tables we get more control over our resources and can work with them in a way that is fast for the GPU. That is just the way DirectX 12 is designed.

Root Signature
Shaders can use a Root Signature and Root Parameters to locate the resources they need to access. In other words, the graphics pipeline can access a resource through the root signature by using an index into a descriptor table.

The Root Signature is a kind of “view” into the heap (using descriptor tables as the binoculars), covering the resources that shaders can use. You define which resources it addresses and what level of access the shaders get to them.

In our case, we need to use a Root Signature to give our Vertex Shader access to a set of data stored in a Constant Buffer.

We define our Root Signature like this:
image

Then we need to set up our Root Signature. We start by giving the Input Assembler stage access to the constant buffer through our Root Signature, and deny access for the domain, hull, geometry and pixel shaders. Then we store the Descriptor Table containing our Constant Buffer in our Root Signature.

image

The CreateRootSignature function takes a serialized version of our root signature, so we first serialize it using D3D12SerializeRootSignature and then pass the result to CreateRootSignature.

Conclusion
Direct3D 12 uses descriptors to describe resources to the GPU. The descriptors used to render a full frame or more are placed inside a descriptor heap, and a descriptor table is used to easily access a set of descriptors in that heap.

A Root Signature can be used by the shader to easily access these Descriptor Tables.

These are all used in combination for you as the programmer to handle the resource binding in DirectX 12, giving you full control of each step. You are now responsible for the resource binding!

Posted in DirectX12, Tutorial | 2 Comments