Windows 10 has just been released, and with it Microsoft’s latest iteration of DirectX – DirectX 12.
I will go more in-depth on the DirectX 12 SDK later in this series, but for now I would like to show you how to create a new DirectX 12 enabled project, explain what’s going on, and point you to the resources made available by my friends on the DirectX team.
DirectX 12 brings a lot of new features, and it takes you much closer to the hardware than ever before. This means that you get better performance and room to do a lot of neat stuff, but it also gives you more responsibility for handling the low-level details.
This tutorial is just part 1 of a longer DirectX 12 tutorial series. At a later stage we will dive deeper into buffers, command lists, the pipeline and much more.
Note: This series is aimed at beginners to DirectX and graphics programming in general.
1) Get Visual Studio 2015; the Community edition can be downloaded for free here:
2) Make sure to select the Windows Universal component when installing:
3) Optional: Watch Chas Boyd give a great talk on Direct3D 12 at Intel’s Buzz Workshop
Creating your first DirectX 12 project
Let’s just get started.
Launch Visual Studio 2015 and create a new project. Be sure to select the DirectX 12 App from the project template.
Give it a proper name and hit OK to create the project. The template sets everything up for you, so the app should be ready to run as soon as Visual Studio completes the setup.
Once it’s complete, you should be able to run the example scene, so go ahead and hit run:
The project will compile, deploy and run. You should see a spinning cube on a blue background like in the picture below:
Now, spend ten seconds taking a close look at this scene, and try to think about what’s happening and what you are seeing.
Diving into the example code
The project tree should look similar to this. It contains the .cpp files and the header files, as well as the Package manifest and tile assets.
The example application’s game loop is found in the template class, which can be thought of as the core of the application.
This class contains the functions executed by the game loop, where Update and Render are called every frame. A game usually consists of many different scenes, and this architecture can be seen here as well. The class named Sample3DSceneRenderer is where the actual rendering of the cube, the shader setup, screen clearing and so on happen.
There are many ways to use a scene renderer: you could have one for the intro and main menu scene, one for level select, and another for the game itself (MainMenuSceneRenderer, InGameSceneRenderer and so on). It’s up to you how you organize your code. You could put all the code in the main class as well, if you really wanted to.
In the figure below, you can see the header of our main class, and the scene renderer Sample3DSceneRenderer as a private member.
If we check out the Update and Render functions of the main class, we can see that they simply call the respective functions on the scene renderer:
Note: Instead of just calling the Update and Render functions of the scene renderer here, you could implement a scene state handler that calls these functions on the currently active scene. For example, the game starts in the Main Menu scene; when the player hits Start Game, it changes to a loading scene, and then to the In-Game scene.
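A minimal sketch of what such a scene state handler could look like. The state names and the handler itself are illustrative, not part of the template:

```cpp
#include <cassert>

// Hypothetical scene states -- the names are illustrative, not from the template.
enum class SceneState { MainMenu, Loading, InGame };

struct SceneStateHandler {
    SceneState current = SceneState::MainMenu;

    // Called from the main class's Update(); dispatches to the active scene.
    void Update(double dt) {
        switch (current) {
        case SceneState::MainMenu:
            // mainMenuRenderer.Update(dt);
            break;
        case SceneState::Loading:
            // When loading finishes, move on to the game itself.
            current = SceneState::InGame;
            break;
        case SceneState::InGame:
            // inGameRenderer.Update(dt);
            break;
        }
    }

    // The player pressed Start Game in the main menu.
    void StartGame() { current = SceneState::Loading; }
};
```

The main class keeps calling Update and Render as before; only the handler knows which scene renderer is active.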
So, where do I find the Scene Renderer??
Let’s go ahead and find the meat of this example, the code that renders the cube. It’s all done in the scene renderer, and it can be found here:
If you open the Sample3DSceneRenderer.h header file, you can see that it contains a lot more functions and variables than the main class. This is because the main class is currently very simple, and you want to keep it that way: everything related to the specific scene should be done here, including all the needed shaders, resources (images, audio) and so on. The Update and Render functions called by the main class can be seen here:
These two functions do the main work of our renderer. The Update() function should contain everything logic related, like physics, calculations, AI and collision detection, while Render() should only render. Please don’t mix and match here: don’t render from the Update function, and don’t run game logic in the Render function. Mixing them can introduce flickering and other strange behavior, because some calculations can end up out of sync.
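The split can be sketched in a few lines. The class and member names here are illustrative; the point is that Update mutates state while Render only reads it, which you can even enforce with const:

```cpp
#include <cassert>
#include <string>

// Illustrative sketch of the Update/Render split.
struct Scene {
    double cubeAngle = 0.0; // all mutable game state lives here

    // All game logic goes here: physics, AI, collision detection...
    void Update(double dt) { cubeAngle += 1.0 * dt; }

    // Render must not change state; marking it const enforces that at
    // compile time. Here it just describes what it would draw.
    std::string Render() const {
        return "draw cube at angle " + std::to_string(cubeAngle);
    }
};
```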
The Scene Renderer simply explained
This is not a deep dive on how DirectX 12 works (check the DirectX 12 Programming Guide) but I will try to give you a simple explanation of how the scene is set up.
First of all, we have a timer that keeps track of time, as well as the delta time between frames. This is used to sync your scene to the clock, so the game doesn’t run faster or slower depending on the power of your hardware.
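Here is the idea in miniature, with plain doubles instead of the template’s timer class (the names are illustrative): by scaling all movement by the delta time, one simulated second produces the same rotation whether the game runs at 30 or 60 frames per second.

```cpp
#include <cassert>
#include <cmath>

// Minimal sketch of frame-rate-independent animation.
struct SpinningCube {
    double angle = 0.0;             // current rotation in radians
    double radiansPerSecond = 1.0;  // rotation speed, independent of frame rate

    // dt is the delta time in seconds since the previous frame.
    void Update(double dt) { angle += radiansPerSecond * dt; }
};
```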
Secondly, what you see is a cube. This cube is made from vertices read from a vertex buffer and an index buffer. These buffers are containers that hold vertex data. The following list shows the content of the vertex buffer:
Each vertex has a position (XYZ) and a color (RGB). Then we have an index buffer that contains indexes, where each index refers to one of the vertices above. Each row in the index buffer contains three indexes that form a triangle, which is at a later stage rasterized into a filled polygon.
In the list above, you can see that the indexes are split into six groups, where each group holds two triangles. These two triangles in combination build up one side of the cube, and the cube has six sides.
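Written out by hand, the cube data has this shape. The exact coordinates, colors and winding order below are illustrative stand-ins for the template’s data, but the structure is the same: 8 unique vertices, and 6 sides × 2 triangles × 3 indexes = 36 indexes.

```cpp
#include <cassert>

// One entry in the vertex buffer: position (XYZ) plus color (RGB).
struct VertexPositionColor {
    float pos[3];
    float color[3];
};

// The eight corners of a cube, each with its own color (illustrative values).
static const VertexPositionColor cubeVertices[] = {
    {{-0.5f, -0.5f, -0.5f}, {0, 0, 0}}, {{-0.5f, -0.5f,  0.5f}, {0, 0, 1}},
    {{-0.5f,  0.5f, -0.5f}, {0, 1, 0}}, {{-0.5f,  0.5f,  0.5f}, {0, 1, 1}},
    {{ 0.5f, -0.5f, -0.5f}, {1, 0, 0}}, {{ 0.5f, -0.5f,  0.5f}, {1, 0, 1}},
    {{ 0.5f,  0.5f, -0.5f}, {1, 1, 0}}, {{ 0.5f,  0.5f,  0.5f}, {1, 1, 1}},
};

// The index buffer: each group of three indexes is one triangle,
// each pair of triangles is one side of the cube.
static const unsigned short cubeIndices[] = {
    0, 2, 1,   1, 2, 3,   // -x side
    4, 5, 6,   5, 7, 6,   // +x side
    0, 1, 5,   0, 5, 4,   // -y side
    2, 6, 7,   2, 7, 3,   // +y side
    0, 4, 6,   0, 6, 2,   // -z side
    1, 3, 7,   1, 7, 5,   // +z side
};
```

Notice how the index buffer lets eight vertices describe twelve triangles; without it, we would have to repeat vertex data 36 times.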
World, View, Projection Transformations
We also create some matrices that transform our scene from world space into something that looks 3D on a 2D plane (what we see on our monitor). The vertices created above and these transformations are then passed into the graphics pipeline, starting with the vertex shader. The world transformation is used to position objects in the game world itself, the view transformation is used to place objects relative to a given viewpoint (think of this as your eye or camera into your game world), and the projection transformation is used to project this view onto the screen, giving depth and distance.
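To make the three steps concrete, here is a toy version in plain math rather than the DirectXMath matrices the template uses. The function names are illustrative, and the “projection” is reduced to its essential idea, the perspective divide: distant points shrink toward the center of the screen.

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

// World transform: place the object in the game world (here: a translation).
Vec3 WorldTransform(Vec3 v, Vec3 objectPosition) {
    return { v.x + objectPosition.x, v.y + objectPosition.y, v.z + objectPosition.z };
}

// View transform: express the point relative to the camera (here: a camera
// at cameraPosition looking down the +z axis).
Vec3 ViewTransform(Vec3 v, Vec3 cameraPosition) {
    return { v.x - cameraPosition.x, v.y - cameraPosition.y, v.z - cameraPosition.z };
}

// Projection: divide x and y by z so far-away points move toward the center.
// Real code uses a full perspective matrix; the divide is the core idea.
Vec3 Project(Vec3 v) {
    return { v.x / v.z, v.y / v.z, v.z };
}
```

In the real pipeline the three steps are combined into one world-view-projection matrix and applied in the vertex shader.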
The Vertex Shader
Every frame, these 36 vertices are sent through something called the Rendering Pipeline. This pipeline is what turns these numbers into an image on your screen. It consists of many stages, where each stage is vital to the final outcome of the rendered image.
The Vertex Shader is executed directly on the GPU, once for each vertex you pass into it. In this example, the vertex shader runs 36 times every frame, and if this scene is running at 60 frames per second, that means the shader runs 36 × 60 = 2160 times per second. A typical game scene rendering advanced 3D models can have hundreds of thousands of vertices, so make sure to optimize your shader code!
So what does this vertex shader do? Let’s take a quick look at the code:
The main function does everything. It uses a set of global variables stored in constant buffers, which are set from the application itself: since this program runs on the GPU while the application runs on the CPU, there is no direct way to share variables between the two.
It takes the data from the vertex buffer (position, color), transforms the position into projection space, sets the color and passes this data further down the pipeline.
The Pixel Shader
The next shader stage we have in this example is the Pixel Shader. This shader runs for every pixel your geometry covers, including pixels that end up hidden behind other geometry, so the pixel shader can easily become a bottleneck for your scenes. Make sure to optimize!
The pixel shader in this example scene is very simple. Let’s take a look:
The pixel shader takes the output from our vertex shader (projection-space position and color) as its input, and returns the color of the current pixel. This is simply the color set in our vertex buffer; the hardware automatically interpolates the color for pixels that lie between vertices.
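That interpolation can be sketched in plain C++. This is an illustrative stand-in for what the rasterizer does between the two shader stages, reduced to a simple linear blend along one edge:

```cpp
#include <cassert>
#include <cmath>

struct Color { float r, g, b; };

// Linear interpolation between two vertex colors:
// t = 0 gives a, t = 1 gives b, values in between give a blend.
Color Lerp(Color a, Color b, float t) {
    return { a.r + (b.r - a.r) * t,
             a.g + (b.g - a.g) * t,
             a.b + (b.b - a.b) * t };
}
```

A pixel halfway along an edge between a red vertex and a blue vertex comes out as an even mix of the two, which is why the cube’s faces show smooth color gradients.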
The other stages
There are a lot of stages in the pipeline we didn’t cover, but let me quickly show you how this works. The rectangular ones are fixed stages that you can configure, while the rounded ones are programmable shader stages.
(Image from MSDN: https://msdn.microsoft.com/en-us/library/windows/desktop/ff476882(v=vs.85).aspx)
The Input Assembler stage is where we put stuff into the shader. In our example, this was the vertex position and color data using buffers.
Then per-vertex calculations are done in the Vertex Shader. The Hull, Tessellator and Domain stages are used for tessellation, and are optional.
Once this is done, an optional Geometry shader can be executed on our vertices – it can create new geometry and a lot of cool effects can be made here.
This marks the end of the vertex-specific stages; the output is passed through the Rasterizer, which clips and prepares our geometry for the pixel shader.
The pixel shader is then responsible for generating the final output image that the player will see.
To make rendering more efficient, we use command lists to record a set of commands that we want the API to execute. This includes clearing the screen, setting which buffers to use, feeding the Input Assembler, setting the active shaders and rendering the cube.
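The record-then-execute pattern is the key idea, and it can be shown without any graphics code at all. The class and command names below are illustrative stand-ins, not the real ID3D12GraphicsCommandList API:

```cpp
#include <cassert>
#include <functional>
#include <string>
#include <vector>

// The command-list idea in miniature: record commands up front,
// then hand the whole batch over for execution in one go.
struct FakeCommandList {
    std::vector<std::function<void()>> commands;
    std::vector<std::string> log; // records what actually ran, for demonstration

    // Recording just stores the work; nothing runs yet.
    void Record(std::string name) {
        commands.push_back([this, name] { log.push_back(name); });
    }

    // "Execute" plays back everything recorded, in order.
    void Execute() {
        for (auto& cmd : commands) cmd();
    }
};
```

In Direct3D 12 the commands are recorded into a command list on the CPU and then submitted to a command queue, which lets the GPU chew through an entire frame’s worth of work without waiting on the CPU between calls.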
A 2nd look at the example scene
Run the example again and look at it with your new knowledge. Is it what you imagined on the first go?
Feel free to reach out to me on Twitter @petriw if you have any questions.
The following list contains some good resources that will take your knowledge to the next level:
DirectX 12 Programming Guide