DirectX 12 Programming #3: Command Lists


Welcome back! In the previous tutorial we learned about the Pipeline State Object and how to use it to efficiently change the state of the graphics pipeline for rendering.

Today we will take a look at the next feature of DirectX 12: the command list. A command list is basically a set of drawing and resource-management calls, generated from one or more threads and then executed to render a scene, typically at the end of a Render function. This has a performance benefit, as we pre-record rendering work that can be executed, and even reused, at a later time. Command lists can also be built across multiple threads.

Why do I have to care?
As a DX12 graphics programmer, it’s now your job to group rendering calls into work items and to decide when to submit that work to the GPU – this has to be part of your engine architecture.

What’s in a command list?
A command list will contain traditional rendering API calls like drawing primitives, changing rendering states, etc. If we take a look at a part of the command list in our example application, we can see that it’s heavily used in our Render() function:

[Screenshot: command-list calls recorded in the sample’s Render() function]

In that code we are not sending commands directly to the GPU; instead we store them in a list named m_commandList. We call RSSetViewports and RSSetScissorRects to set up our view, clear the render targets, configure the Input Assembler (IA) stage, and finally issue an indexed draw call for the cube’s 36 indices, which the GPU assembles into triangles.
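If you don’t have the sample open, the recorded part of Render() looks roughly like the following. This is a hedged sketch rather than the template verbatim: member names like m_commandList, m_scissorRect, m_vertexBufferView and m_indexBufferView, the m_deviceResources helper, and the two descriptor handles are assumptions based on the description above.

```cpp
// Sketch of recording commands into the list (assumed member names).
D3D12_VIEWPORT viewport = m_deviceResources->GetScreenViewport();
m_commandList->RSSetViewports(1, &viewport);
m_commandList->RSSetScissorRects(1, &m_scissorRect);

// renderTargetView / depthStencilView are CPU descriptor handles taken from
// the sample's descriptor heaps (descriptors are covered in a later tutorial).
const float clearColor[] = { 0.0f, 0.2f, 0.4f, 1.0f };
m_commandList->ClearRenderTargetView(renderTargetView, clearColor, 0, nullptr);
m_commandList->ClearDepthStencilView(depthStencilView,
    D3D12_CLEAR_FLAG_DEPTH, 1.0f, 0, 0, nullptr);
m_commandList->OMSetRenderTargets(1, &renderTargetView, false, &depthStencilView);

// Configure the Input Assembler and draw the cube (36 indices).
m_commandList->IASetPrimitiveTopology(D3D_PRIMITIVE_TOPOLOGY_TRIANGLELIST);
m_commandList->IASetVertexBuffers(0, 1, &m_vertexBufferView);
m_commandList->IASetIndexBuffer(&m_indexBufferView);
m_commandList->DrawIndexedInstanced(36, 1, 0, 0, 0);
```

Note that none of these calls do any drawing by themselves; they only record work into the list.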

At this point, nothing has really happened as we haven’t submitted the list yet.

Bundles
Above we simply add commands to a command list and then, at a later point, execute it. This is called a direct command list. However, we can also create lists containing a small group of API calls that a command list can execute whenever it needs to, any number of times. These are called bundles – simply a group of API calls, a “mini command list” made for reuse.

We won’t go in depth on this topic here, but will come back to it in a later tutorial. Like a bundle, a direct command list can also be executed multiple times, but you are responsible for checking that the previous execution has completed before executing it again.

When creating these bundles, the driver will try to pre-process as much as possible for more efficient execution when it’s needed.

Creating a Command List
A command list is created with CreateCommandList on the Direct3D device. This function needs to know what type of list it will be (the D3D12_COMMAND_LIST_TYPE enumeration contains all the possibilities), for example whether it is a bundle or a direct list. It also takes a command allocator, which manages the memory for the list, and the PSO we created in the previous tutorial.

In our sample it looks like this:
[Screenshot: the sample’s CreateCommandList call]

We create a direct command list with our one and only PSO.
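For reference, the call typically looks something like this. It is a hedged sketch: DX::ThrowIfFailed, m_deviceResources, m_commandAllocator and m_pipelineState are member names assumed from the template and may differ in your project.

```cpp
// Create a direct command list, initially bound to our only PSO.
DX::ThrowIfFailed(m_deviceResources->GetD3DDevice()->CreateCommandList(
    0,                                  // node mask (single GPU)
    D3D12_COMMAND_LIST_TYPE_DIRECT,     // a direct list, not a bundle
    m_commandAllocator.Get(),           // allocator that backs the list's memory
    m_pipelineState.Get(),              // initial pipeline state object
    IID_PPV_ARGS(&m_commandList)));
```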

Filling the Command List with commands
Once the command list is created, we can start adding commands to it. When created, it is automatically in the recording state, ready for you to feed it commands through the ID3D12GraphicsCommandList interface.

You typically reopen it for recording with the Reset function, and when you are done recording, you end it with the Close function.

[Screenshot: the Reset/Close recording pattern in the sample]
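A sketch of that pattern is shown below, assuming the allocator is also reset each frame as the template does (the sample waits on a fence first so the GPU is finished with the allocator’s memory); the member names are assumptions.

```cpp
// Reopen the allocator and the command list for recording.
DX::ThrowIfFailed(m_commandAllocator->Reset());
DX::ThrowIfFailed(m_commandList->Reset(m_commandAllocator.Get(), m_pipelineState.Get()));

// ... record commands here ...

// Close the list when recording is done; it is now ready to be executed.
DX::ThrowIfFailed(m_commandList->Close());
```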

Executing the Command List
Once your list is full of commands, you can submit it to a command queue using the ExecuteCommandLists function. The sample’s device setup code already creates a command queue for you. It is possible to create your own command queues, but I recommend sticking with that one for now; it is sufficient in most cases.

[Screenshot: submitting the list with ExecuteCommandLists]
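Submitting the closed list could look roughly like this (a hedged sketch; GetCommandQueue() is assumed to be the accessor exposed by the template’s device-resources class):

```cpp
// Hand the closed command list to the queue; the GPU starts executing it.
ID3D12CommandList* ppCommandLists[] = { m_commandList.Get() };
m_deviceResources->GetCommandQueue()->ExecuteCommandLists(
    _countof(ppCommandLists), ppCommandLists);
```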

The command queue will automatically execute the commands in the list.


DirectX 12 Programming #2: Pipeline State Objects


Welcome back! In the previous tutorial we briefly touched on why our sample application looks the way it does. In this and the next three tutorials we will take a closer look at the various DirectX 12-specific features, like the pipeline state object, command lists and descriptors. After this, we can start making the cool stuff!

I know it’s a lot of information and many new concepts, but it will all make sense – and once you understand it, it will take you closer to the GPU itself so you can use all of the new performance capabilities DX12 gives you.

Let’s get started. One of the new features of DirectX 12 is the Pipeline State Object (PSO). \m/

Say what?
Putting it simply, a PSO is a way to set the GPU to the state you need it to be in for rendering. Before the GPU can do its magic, a lot of input and rendering settings have to be configured. It needs to know how to blend colors when pixels overlap (the blend state), how to handle depth data (the depth-stencil state), how to read input data to build primitives, what shaders to use, and so on – the list is long.

A PSO is simply an object that describes a given state of the graphics pipeline. You can create as many of these as you want, and switch between them when needed.

These PSOs are usually created during initialization and then switched between during rendering. Doing this right benefits your application’s performance, as you set a lot of these settings at once instead of one by one whenever you need to change a fraction of the state.

Think of a PSO as putting the pieces of a puzzle together – except that instead of assembling a final image, you are assembling how the GPU will handle your scene.

[Screenshots: the same scene rendered with two different PSOs]

The two pictures above are the same scene, using the same vertex data, but different PSOs.

Creating a PSO
It’s very simple to create a PSO. We follow the typical Direct3D way of doing things: fill out a structure, in this case D3D12_GRAPHICS_PIPELINE_STATE_DESC, and submit it with a call to CreateGraphicsPipelineState(…).

Let’s take a look at the PSO from our example application from Tutorial 1:

[Screenshot: filling out the D3D12_GRAPHICS_PIPELINE_STATE_DESC structure]
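Since the screenshot isn’t reproduced here, this is a hedged reconstruction of what that descriptor looks like, matching the description that follows; the inputLayout array, m_rootSignature, the shader byte-code vectors and the m_deviceResources helper are assumed names:

```cpp
D3D12_GRAPHICS_PIPELINE_STATE_DESC state = {};
state.InputLayout = { inputLayout, _countof(inputLayout) };       // layout of our vertex buffer
state.pRootSignature = m_rootSignature.Get();                     // covered in Tutorial 4
state.VS = CD3DX12_SHADER_BYTECODE(&m_vertexShader[0], m_vertexShader.size());
state.PS = CD3DX12_SHADER_BYTECODE(&m_pixelShader[0], m_pixelShader.size());
state.RasterizerState = CD3DX12_RASTERIZER_DESC(D3D12_DEFAULT);   // default rasterizer state
state.BlendState = CD3DX12_BLEND_DESC(D3D12_DEFAULT);             // default blend state
state.DepthStencilState.DepthEnable = FALSE;                      // depth/stencil disabled
state.DepthStencilState.StencilEnable = FALSE;
state.SampleMask = UINT_MAX;
state.PrimitiveTopologyType = D3D12_PRIMITIVE_TOPOLOGY_TYPE_TRIANGLE;
state.NumRenderTargets = 1;                                       // one render target
state.RTVFormats[0] = m_deviceResources->GetBackBufferFormat();   // must match the render target
state.SampleDesc.Count = 1;                                       // no multisampling
```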

Wow, that’s a lot of properties! Let’s dive right in. In this case we set the input layout to the layout we specified when creating the Vertex Buffer, and we let it know that we want to use our Root Signature instance (covered in Tutorial 4).

We also need to set which shaders we want to use, so we take the byte code from the VS and PS and set them accordingly. In our example, we use one vertex shader to transform our cube and one pixel shader to give it color.

Next we specify that we want the default rasterizer state and blend state, set the sample mask to its maximum value, and disable depth and stencil testing. We set the primitive topology type to triangles, since that is how the vertices in our vertex buffer are organized. We set the number of render targets (RTVs) to one, as the example scene only needs a single render target, set its format, and use SampleDesc to set the multisampling parameters.

Once the PSO structure is filled out, we create the pipeline state object by calling CreateGraphicsPipelineState(…) on the D3D device.

[Screenshot: the CreateGraphicsPipelineState call]
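The call itself is short; a sketch, assuming m_pipelineState is a ComPtr<ID3D12PipelineState> member and DX::ThrowIfFailed is the template’s error helper:

```cpp
// Create the PSO from the descriptor we just filled out.
DX::ThrowIfFailed(m_deviceResources->GetD3DDevice()->CreateGraphicsPipelineState(
    &state, IID_PPV_ARGS(&m_pipelineState)));
```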

This function creates a PSO instance we can use later in our application. We need to use this when we create a Command List in the next tutorial as a parameter to CreateCommandList(…). Then, when we use that command list for rendering, this PSO will be in use.

Switching between PSOs
Although we won’t need to switch between PSOs in our sample, I still want to show you how you can change the PSO currently bound to a command list.

There is a function called SetPipelineState on a CommandList that will take a PSO created during the initialization of the app, and make it active on the command list.
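For example (a sketch; m_wireframePipelineState is a hypothetical second PSO created during initialization):

```cpp
// Switch the command list over to a different, previously created PSO.
m_commandList->SetPipelineState(m_wireframePipelineState.Get());
```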

Final result

We still render our cube, but now you know what state the GPU is in, congratulations!


DirectX 12 Programming #1: A quickstart!


Windows 10 has just been released, and with it Microsoft’s latest iteration of DirectX – DirectX 12.

I will go more in-depth on the DirectX 12 SDK later in this series, however for now, I would like to show you how you can create a new DirectX 12 enabled project, explain what’s going on, as well as direct you to the resources made available by my friends in the DirectX team.

DirectX 12 brings a lot of new features, and it takes you much closer to the hardware than ever before. This means that you will get better performance and room to do a lot of neat stuff, but it also gives you more responsibility when it comes to handling the low level stuff.

This tutorial is just part 1 in a longer DirectX 12 tutorial series. At a later stage we will dive deeper into buffers, command lists, pipeline and much more.

Note: This series is aimed at beginners in DirectX and graphics programming in general.

Prerequisites

1) Get Visual Studio 2015; the Community edition can be downloaded for free here:
https://www.visualstudio.com/

2) Make sure to select the Windows Universal component when installing:
[Screenshot: the Windows Universal component in the Visual Studio installer]

3) Optional: Watch Chas Boyd give a great talk on Direct3D 12 at Intel’s Buzz Workshop

Creating your first DirectX 12 project

Let’s just get started.

Launch Visual Studio 2015 and create a new project. Be sure to select the DirectX 12 App project template.

[Screenshot: the New Project dialog with the DirectX 12 App template selected]

Give it a proper name and hit OK to create a new project. The template will set everything up for you so the app should be all ready to run when Visual Studio completes the setup.

Once it’s complete, you should be able to run the example scene, so go ahead and hit run:
[Screenshot: the Run button in Visual Studio]

The project will compile, deploy and run. You should see a spinning cube on a blue background like in the picture below:

[Screenshot: the example scene – a spinning cube on a blue background]

Now, spend 10 seconds taking a close look at this scene. Try to think about what’s happening and what you see.

Diving into the example code

[Screenshot: the project tree in Solution Explorer]

The project tree should look similar to this. It contains the .cpp files and the header files, as well as the Package manifest and tile assets.

The example application’s game loop is found in the main template class, which can be thought of as the core of the application.
[Screenshot: the main template class in Solution Explorer]

This class contains the functions executed by the game loop, where Update and Render are called every frame. A game usually consists of many different scenes, and that architecture can be seen here as well: the class named Sample3DSceneRenderer is where the actual rendering of the cube, the shader setup, screen clearing and so on happens.

There are many ways to use a scene renderer. You could have one for the intro and main menu scene, one for level select, and another for the game itself (MainMenuSceneRenderer, InGameSceneRenderer, …) – it’s up to you to organize your code. You COULD put all the code in the main class as well, if you want.

In the figure below, you can see the header of our main class, and the scene renderer Sample3DSceneRenderer as a private member.

[Screenshot: the main class header with Sample3DSceneRenderer as a private member]
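Roughly, that header looks like this. This is a hedged sketch; the real class is named after your project, and the exact member list differs:

```cpp
#include <memory>

class MyApp12Main
{
public:
    MyApp12Main();
    void Update();      // advance game logic once per frame
    bool Render();      // record and submit rendering work for the frame
private:
    // The scene renderer is held as a private member of the main class.
    std::unique_ptr<Sample3DSceneRenderer> m_sceneRenderer;
    DX::StepTimer m_timer;   // tracks total and per-frame (delta) time
};
```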

If we check out the Update and Render functions of the main class, we can see that they simply call the respective functions on the scene renderer:
[Screenshot: the main class forwarding Update and Render to the scene renderer]
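A hedged sketch of that forwarding, using the class names from the sketch above (the template’s Render also checks that device resources are ready, which is omitted here):

```cpp
void MyApp12Main::Update()
{
    // Let the timer advance, then update the scene with the elapsed time.
    m_timer.Tick([&]()
    {
        m_sceneRenderer->Update(m_timer);
    });
}

bool MyApp12Main::Render()
{
    // Don't try to render anything before the first Update.
    if (m_timer.GetFrameCount() == 0)
    {
        return false;
    }
    return m_sceneRenderer->Render();
}
```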

Note: Instead of just calling the scene renderer’s Update and Render functions here, you could implement a scene-state handler that calls these functions for the currently active scene – for example the main menu when the player starts the app; when the player hits Start Game, it switches to a loading scene and then to the in-game scene.

So, where do I find the Scene Renderer?

Let’s go ahead and find the meat of this example, the code that renders the cube. It’s all done in the scene renderer, and it can be found here:

[Screenshot: the Sample3DSceneRenderer files in the project tree]

If you open the Sample3DSceneRenderer.h header file, you can see that it contains a lot more functions and variables than the main class. This is because the main class is intentionally very simple, and you want to keep it that way. Everything related to the specific scene should be done here, including all needed shaders, resources (images, audio) and so on. The Update and Render functions called by the main class can be seen here:
[Screenshot: the Update and Render declarations in Sample3DSceneRenderer.h]

These two functions do the main work of our renderer. The Update() function should contain everything logic-related, like physics, calculations, AI and collision detection, while Render() should only do rendering. Please don’t mix and match here: don’t render from the Update function and don’t run game logic in the Render function. Doing so can introduce flickering and other oddities, because calculations can end up out of sync with rendering.

The Scene Renderer simply explained

This is not a deep dive on how DirectX 12 works (check the DirectX 12 Programming Guide) but I will try to give you a simple explanation of how the scene is set up.

First of all, we have a timer that keeps track of time, as well as the delta time between frames. This is used to sync your scene to the clock, so the game doesn’t run faster or slower depending on the power of your hardware.

Secondly, what you see is a cube. The cube is made from vertices read from a vertex buffer and an index buffer. These buffers are containers that hold vertex data. The following list shows the contents of the vertex buffer:

[Screenshot: the contents of the vertex buffer]
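The screenshot isn’t shown here, but the buffer holds the cube’s eight corners, roughly like this (a hedged sketch; VertexPositionColor is assumed to be the template’s position + color struct):

```cpp
using namespace DirectX;   // for XMFLOAT3

// Eight corners of the cube; each vertex has a position (XYZ) and a color (RGB).
VertexPositionColor cubeVertices[] =
{
    { XMFLOAT3(-0.5f, -0.5f, -0.5f), XMFLOAT3(0.0f, 0.0f, 0.0f) },
    { XMFLOAT3(-0.5f, -0.5f,  0.5f), XMFLOAT3(0.0f, 0.0f, 1.0f) },
    { XMFLOAT3(-0.5f,  0.5f, -0.5f), XMFLOAT3(0.0f, 1.0f, 0.0f) },
    { XMFLOAT3(-0.5f,  0.5f,  0.5f), XMFLOAT3(0.0f, 1.0f, 1.0f) },
    { XMFLOAT3( 0.5f, -0.5f, -0.5f), XMFLOAT3(1.0f, 0.0f, 0.0f) },
    { XMFLOAT3( 0.5f, -0.5f,  0.5f), XMFLOAT3(1.0f, 0.0f, 1.0f) },
    { XMFLOAT3( 0.5f,  0.5f, -0.5f), XMFLOAT3(1.0f, 1.0f, 0.0f) },
    { XMFLOAT3( 0.5f,  0.5f,  0.5f), XMFLOAT3(1.0f, 1.0f, 1.0f) },
};
```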

Each vertex has a position (XYZ) and a color (RGB). We also have an index buffer containing indices, where each index refers to one of the vertices above. Each row in the index buffer holds 3 indices that build up a triangle, which is later rasterized into a filled polygon.

[Screenshot: the contents of the index buffer]
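The index buffer looks roughly like the following (a hedged sketch of the template’s data); each line is one triangle, and each pair of lines is one face of the cube:

```cpp
// 36 indices: 6 faces x 2 triangles x 3 indices each.
unsigned short cubeIndices[] =
{
    0, 2, 1,   // -x face
    1, 2, 3,

    4, 5, 6,   // +x face
    5, 7, 6,

    0, 1, 5,   // -y face
    0, 5, 4,

    2, 6, 7,   // +y face
    2, 7, 3,

    0, 4, 6,   // -z face
    0, 6, 2,

    1, 3, 7,   // +z face
    1, 7, 5,
};
```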

In the list above, you can see that the indices fall into 6 groups, where each group defines two triangles. Together, those two triangles build one side of the cube, and the cube has 6 sides.

World, View, Projection Transformations

We also create some matrices that transform our scene from world space into something that looks like 3D on a 2D plane (what we see on our monitor). The vertices created above and these transformations are then passed into the graphics pipeline, starting with the Vertex Shader. The world transformation positions objects in the game world itself, the view transformation places those objects relative to a given viewpoint (think of this as your eye or camera into the game world), and the projection transformation projects the result onto the screen, giving depth and distance.

[Diagram: world, view and projection transformations]
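As a hedged illustration, here is how such matrices can be built with DirectXMath; the template computes equivalents and stores them, transposed, in a constant buffer for the vertex shader, and the literal camera values below are assumptions:

```cpp
#include <DirectXMath.h>
using namespace DirectX;

float angle = XM_PIDIV4;             // current rotation of the cube (updated every frame)
float aspectRatio = 16.0f / 9.0f;    // window width divided by height

// World: position (here, rotate) the cube in the game world.
XMMATRIX model = XMMatrixRotationY(angle);

// View: the camera/eye looking at the scene.
XMMATRIX view = XMMatrixLookAtRH(
    XMVectorSet(0.0f, 0.7f, 1.5f, 0.0f),    // eye position
    XMVectorSet(0.0f, -0.1f, 0.0f, 0.0f),   // point being looked at
    XMVectorSet(0.0f, 1.0f, 0.0f, 0.0f));   // up direction

// Projection: adds perspective (depth, distance) for the 2D screen.
XMMATRIX projection = XMMatrixPerspectiveFovRH(
    70.0f * XM_PI / 180.0f, aspectRatio, 0.01f, 100.0f);
```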

The Vertex Shader

Every frame, the cube’s 36 indices are sent through something called the rendering pipeline. This pipeline is what turns those numbers into an image on your screen. It consists of many stages, and each stage is vital to the final outcome of the rendered image.

The Vertex Shader is executed directly on the GPU for each vertex you pass into it. In this example the vertex shader runs up to 36 times every frame, and if the scene runs at 60 frames per second, that’s up to 2,160 invocations per second. A typical game scene rendering advanced 3D models can have hundreds of thousands of vertices, so make sure to optimize your shader code!

So what does this vertex shader do? Let’s take a quick look at the code:
[Code: the sample’s vertex shader (HLSL)]

The main function does everything. It uses a set of global variables stored in constant buffers, which are set from the application. Since this program runs on the GPU while the application runs on the CPU, there is no direct way to share variables between the two.

It takes the data from the vertex buffer (position, color), transforms the position into projection space, sets the color, and passes this data further down the pipeline.

The Pixel Shader

The next shader stage in this example is the Pixel Shader. This shader runs for every pixel covered by the rasterized geometry – including fragments that end up hidden behind others – so the pixel shader can easily become a bottleneck for your scenes. Make sure to optimize!

The pixel shader in this example scene is very simple. Let’s take a look:
[Code: the sample’s pixel shader (HLSL)]

The Pixel Shader takes the output from our Vertex Shader (projected position, color) as its input and returns the color of the current pixel. This is simply the color set in our vertex buffer; the color is automatically interpolated across the triangle when a pixel lies between vertices.

The other stages

There are a lot of stages in the pipeline we didn’t cover, but let me quickly show you how it fits together. The square boxes are fixed-function stages that you can configure, while the rounded ones are programmable shader stages.

[Diagram: data flow in the Direct3D 11 programmable pipeline]

(Image from MSDN: https://msdn.microsoft.com/en-us/library/windows/desktop/ff476882(v=vs.85).aspx)

The Input Assembler stage is where we feed data into the pipeline. In our example, this was the vertex position and color data from our buffers.

Then we do per-vertex calculations in the Vertex Shader. The Hull, Tessellator and Domain stages are used for tessellation and are optional.

Once this is done, an optional Geometry shader can be executed on our vertices – it can create new geometry and a lot of cool effects can be made here.

This marks the end of the vertex-specific stages; the output is passed through the rasterizer, which clips and prepares our geometry for the pixel shader.

The pixel shader is then responsible for generating the final output image that the player will see.

Command List

To make rendering more efficient, we use command lists to record a set of commands that we want the API to execute. This includes clearing the screen, setting which buffers to use, feeding them to the Input Assembler, setting the active shaders, and rendering the cube.

A 2nd look at the example scene

Run the example again and look at it with your new knowledge. Was it what you imagined on the first go?

Feel free to reach out to me on Twitter @petriw if you have any questions.

Resources

The following list contains some good resources that will take your knowledge to the next level:

DirectX 12 Programming Guide
https://msdn.microsoft.com/en-us/library/windows/desktop/Dn899121(v=VS.85).aspx

Samples
https://github.com/Microsoft/DirectX-Graphics-Samples

YouTube videos
https://www.youtube.com/channel/UCiaX2B8XiXR70jaN7NK-FpA/feed


GDC Europe 2015

 

At this year’s Game Developers Conference in Europe I had a presentation called “Using Cortana and Speech Synthesis to voice-activate your game worlds!”.

The talk was about adding voice-activated trigger zones around NPCs (or areas) with pre-programmed questions the player could ask, to which the NPC would respond with the given answer.


This was set up using Unity 5, where you could simply drag a script onto an existing GameObject to give it a voice. Once the player enters its trigger zone, the plugin starts a continuous listening session, waiting for the player to ask one of the pre-programmed questions.


I also added the possibility to ask Cortana to launch the game, or to ask whether anyone had beaten your high score. If so, Cortana would ask if you would like to play the game now to reclaim it.


The demo, Unity plugin and source will soon be available (just waiting for the final bits to become publicly available).

Thanks to everyone who attended my session!


It’s been a while

I had some changes in my career and ended up moving to Seattle to work on graphics and games at Microsoft in Redmond. It took a while to get settled, but expect to see more blog posts and tutorials here very soon!

Feel free to suggest what you want me to do a tutorial on! In any case, you can expect some Unity 5 shader tutorials, as well as a lot of cool stuff around Windows 10 for game developers!


Microsoft Azure WebJobs in Truck Trials

In my latest game Truck Trials (read the Making of Guide) I’m running a special championship every weekend, named Weekend Champion Cup.
[Screenshot: the Weekend Champion Cup in Truck Trials]

To get this cup started I need to do some simple things in the database:
1 – Friday at 17:00 GMT, Reset the data from the previous competition
2 – Friday at 18:00 GMT, Start the competition
3 – Sunday at 18:00 GMT, End the competition
4 – Sunday at 18:30 GMT, Award the winner

In the early days of this project I did this manually, because I had to prioritize other development tasks. But it meant that I had to make sure it was done every Friday evening – and stress about it during dinner or at a get-together with the guys – and the same every Sunday.

Now this has changed, since Azure WebJobs does all of this for me!

How?
It’s very simple – just create a console application, zip it, and upload it as a WebJob on your Azure Web Site project. When creating the WebJob, you can specify whether to run the application continuously or on a schedule.

 

The implementation

Developing the Console App:
First of all, you need to develop the console app that will execute the task you want to perform.
To do this, start Visual Studio 2013 and create a new Console project (I used C#). In my case, I added the EntityFramework NuGet package since I was going to do some operations on the game’s database.

[Screenshot: the solution containing the WebJob console projects]

This solution contains all of my WebJobs, one project per job I want to perform. In this post I’m just going to focus on step 1, resetting the players’ scores. All the other steps are implemented in the same way.

So basically, this is what this application does:
[Code: the score-reset console application]

It starts a Stopwatch (handy for tracking how long the operation takes), resets the scores, and writes the output to the console.

The cool thing here is that all output from the console app is displayed on the WebJobs management page, so I can simply log in to the Microsoft Azure portal and see the results there. More on this soon!

Now that the console app is done, it’s time to create the Web Job. First of all, we need to ZIP the program. This will then be uploaded as the WebJob package:
[Screenshots: zipping the console application’s output]

 

Creating the Web Job
Let’s make the cool stuff happen!

1. Log in to Microsoft Azure, go to your Azure Web Site project and click Web Jobs:
[Screenshot: the Web Jobs tab of the Azure Web Site]

2. Click Add

 

3. Fill out the information on page 1 and click next

4. Fill in the information on page 2 and click next

 

5. Wait for the WebJob upload to complete

6. Test it by clicking RUN ONCE:
[Screenshot: the RUN ONCE button for the WebJob]

 

Checking the output

The cool thing about this is that you can check the console output. Just click the LOGS link on your WebJob.

You will be redirected to the WebJobs page on Microsoft Azure, where you can see a list of your web jobs, their status, and how long it has been since each was last run:
[Screenshot: the WebJobs list with status and last-run times]

By clicking a WebJob, you can dive into the details and also see the console output:

[Screenshot: the WebJob details page with the console output]

And that’s it – you have a WebJob up and running, doing critical tasks for you.

Thanks to Pedro Dias for helping me with this!


Video Tutorial: Creating a Flappy Bird type game using Unity 4.6

Today I held a webinar named “Developing Games for Windows”. It’s one of my first recordings, but I hope to create many more, with better quality, going forward.

Anyway, I decided to share what I made in case any of you find it useful.

[Screenshot: the Flappy Bird-style game built in the webinar]

The goal here is to show how to create a simple game using Unity 4.6, and how to export it as a Windows Store app and as a Universal App that targets both Windows 8.1 and Windows Phone 8.1.

In this post I want to share the video and provide you with the source so you can create your own games – and help you publish them to the stores!

Sections
I) Check out the video
II) Download the source, create your own version
III) Publish
IV) Check out my “Making of” guides so you can learn how this looks in the bigger picture

I) Learn the basics of Unity by creating another Flappy Bird clone

 

II) Source
Feel free to use the source however you like, and even reuse the graphics.

Download source

The source contains the Unity Project of the game, the Windows Store export, and the Universal export of the game.

Modify the game as you like, make your changes, try to change the graphics and learn from the process.

Once you have created your own version of the game, go ahead and publish it – and learn from the process. You gain a lot of knowledge just by studying the download data, making modifications to the app’s tags to see if they increase or decrease download rates, and so on.

Note: Make sure you have the latest update for Visual Studio 2013 (Update 4) before you open the Universal export.

 

III) Publish
I made this simple guide on how to export and publish a Universal app
https://digitalerr0r.wordpress.com/2014/11/21/developing-universal-apps-for-windows-8-and-windows-phone-8-using-unity3d/

 

IV) My “making of” guides
I have created a few games, and for each one of them I wrote a “Making of” guide explaining how I did it.

The Making of Truck Trials

The Making of Starcomet Online

The Making of Bandinos

The Making of Binary Man
