#GameDevLive–Episode 2: Creating a game from scratch!


Welcome to Episode 2 of #GameDevLive! The purpose of this series is to help you get started with game development and Unity 3D while we develop a full game.

In this episode, we will use trigger zones to add power-ups, introduce fuel for our player, and make some more UI tweaks: a fuel indicator that changes color based on the amount of fuel left, and a restart level button that becomes visible when it’s game over.

You can follow OmegaDish on Twitch to see these episodes live while providing input and suggestions on which features we’ll implement – be part of the development! :)

Right, enough talking, let’s get started!

Download the tools you need here: www.unity3d.com
Download the source here: https://github.com/omegadish/DishGame

Also a special thanks to Bredholy for helping me with editing and the Twitch channel!

Posted in Game programming, GameDevLive, Tutorial, Unity | Leave a comment

GameDevLive Episode 2 will go live tomorrow at 10am PST

Hi!

The next episode of GameDevLive can be seen at http://twitch.tv/OmegaDish tomorrow at 10am PST. See the countdown here: http://www.timeanddate.com/countdown/generic?iso=20151208T10&p0=234&msg=%23GameDevLive+-+Session

In this episode, we will continue from Episode 1 (recording: https://digitalerr0r.wordpress.com/2015/12/05/gamedevliveepisode-1-creating-a-game-from-scratch/), covering pickups, more UI and a simple main menu.

Hope to see you guys there!

Posted in Uncategorized | Leave a comment

#GameDevLive–Episode 1: Creating a game from scratch!


I decided to create a new video tutorial series for you guys, #GameDevLive! The purpose of this series is to help you get started with game development and Unity 3D while we develop a full game.

Episode 1 is all about getting you started with Unity and creating a little game from scratch. The next episodes will build on this game, introducing more Unity concepts and taking you to an advanced level of Unity game development.

You can follow OmegaDish on Twitch to see these episodes live while providing input and suggestions on which features we’ll implement – be part of the development! :)

Right, enough talking, let’s get started!

Download the tools you need here: www.unity3d.com 
Download the source here: https://github.com/omegadish/DishGame

Also a special thanks to Bredholy for helping me with editing and the Twitch channel!

Posted in Game programming, Tutorial, Unity | 2 Comments

#GameDevLive – I will make a game live on Twitch!

On the 3rd of December I will do a live stream on the OmegaDish channel, where I will start developing a game. In the first episode we will create a prototype of the game from scratch using Unity 5, and then in the following episodes, we will continue to work on the game to complete it. I will also take requests and input from the chat so YOU can influence how the game will function and look!

Following this series will get you started with game development and Unity, teach you the different options for monetizing your game, and show you how to publish it.


Countdown:
http://www.timeanddate.com/countdown/launch?iso=20151203T10&p0=234&msg=%23GameDevLive&font=cursive&csz=1&swk=1

 

Hope to see you there!

Posted in Game programming, Tutorial, Unity | Leave a comment

MVP Lander: Source code from my MVP Summit session


As an ex-MVP, it was awesome to be back at the MVP Summit as a speaker. In today’s session I spent about 40 minutes on this little game where you control a lander using A and D for rotation, and W or Space for thrust. You need to land on the platform somewhere on the moon below you.

You can download the source code and the exported Windows 10 Universal app here:
http://1drv.ms/1GMES9k

See my previous post for the SpeechSynthesis, VoiceRecognition and Cortana integration:
https://digitalerr0r.wordpress.com/2015/10/21/voice-activating-your-windows-10-games-using-speech-synthesis-voice-recognition-and-cortana/

Thanks for attending my session, enjoy!


Posted in Tutorial, Unity | Leave a comment

Unity 5 Shader Programming #3: Specular Light


Hi, and welcome to Tutorial 3 of my Unity 5 Shader Programming series. Today we are going to implement another lighting algorithm called Specular Light. This algorithm builds on the Ambient and Diffuse lighting tutorials, so if you haven’t been through them, now is the time. :)

Specular Light


So far, we have a basic light model to illuminate objects. But what if the object we want to render is polished or shiny – say a metal surface, plastic, glass, or a bottle? Diffuse light does not include any of the tiny reflections that make a smooth surface shine.

To simulate this shininess, we can use a lighting model named specular highlights. It calculates another vector that simulates a reflection of the light source hitting the camera, or “the eye”.

What’s “the eye” vector, you might ask? It’s simply the view vector: the vector that points from the surface point being shaded toward the camera.

One way to calculate the specular light is

I = Ai*Ac + Di*Dc*(N.L) + Si*Sc*(R.V)^n

where the reflection vector R is

R = 2*(N.L)*N - L

This is called the Phong model for specular light.

This model measures the angle between the reflection vector and the view vector: it describes how much of the reflection hits the camera lens directly.
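As a quick numeric sanity check of the formula (plain Python rather than shader code, with hand-picked normal, light, and view directions), we can verify that when the view vector lines up exactly with the reflection vector, the Phong term peaks at 1:

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def normalize(v):
    length = math.sqrt(dot(v, v))
    return tuple(x / length for x in v)

# Unit normal, light direction, and view direction for one surface point
N = normalize((0.0, 1.0, 0.0))   # surface facing up
L = normalize((1.0, 1.0, 0.0))   # light coming in at 45 degrees
V = normalize((-1.0, 1.0, 0.0))  # camera placed exactly in the mirror direction

# Phong reflection vector: R = 2*(N.L)*N - L
ndotl = dot(N, L)
R = tuple(2.0 * ndotl * nc - lc for nc, lc in zip(N, L))

# Specular term (R.V)^n with shininess exponent n
n = 25
specular = max(dot(R, V), 0.0) ** n

# Since V is the mirror of L across N, R.V is 1 and the term peaks at 1
print(round(specular, 6))
```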

There is another way of calculating this, called the Blinn-Phong model, where you don’t need to compute the reflection vector every time.

 

Blinn-Phong?

In Blinn-Phong, instead of calculating the reflection vector R, we calculate the halfway vector between the view and the light direction vector, meaning we can replace the dot product between R and V with the dot product between N and H.

 

 

I = Ai*Ac + Di*Dc*(N.L) + Si*Sc*(N.H)^n

where H is the halfway vector:

H = (L + V) / |L + V|

Then we have a parameter n that describes how shiny the surface is – the higher the exponent, the tighter the highlight.

The biggest visual difference between the two implementations is that while the Phong highlight always has a circular shape, the Blinn-Phong highlight becomes elliptical at steep angles. This mimics the real world.
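The same arithmetic also shows the two models diverging. Here is a small Python sketch (the vectors are made up for illustration) evaluating both terms for a glancing configuration:

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def normalize(v):
    length = math.sqrt(dot(v, v))
    return tuple(x / length for x in v)

N = normalize((0.0, 1.0, 0.0))
# A glancing configuration: light and camera both low over the surface
L = normalize((1.0, 0.2, 0.0))
V = normalize((-1.0, 0.2, 0.3))

shininess = 25

# Phong: reflect L across N, then compare against the view vector
ndotl = dot(N, L)
R = tuple(2.0 * ndotl * nc - lc for nc, lc in zip(N, L))
phong = max(dot(R, V), 0.0) ** shininess

# Blinn-Phong: halfway vector between the light and view directions
H = normalize(tuple(lc + vc for lc, vc in zip(L, V)))
blinn = max(dot(N, H), 0.0) ** shininess

# At steep angles the two terms disagree noticeably, which is why the
# shapes of the two highlights differ there
print(phong > blinn)
```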

Both models have their pros and cons, which we won’t discuss in this article.

 

Implementation

The implementation is straightforward – nothing new compared to the previous tutorials. In other words, let’s get started with the source:

Shader "UnityShaderTutorial/Tutorial3SpecularLight-GlobalStates" {
	SubShader
	{
		Pass
		{
			Tags{ "LightMode" = "ForwardBase" }

			CGPROGRAM
			#include "UnityCG.cginc"

			#pragma target 2.0
			#pragma vertex vertexShader
			#pragma fragment fragmentShader

			float4 _LightColor0;

			struct vsIn {
				float4 position : POSITION;
				float3 normal : NORMAL;
			};

			struct vsOut {
				float4 screenPosition : SV_POSITION;
				float4 position : TEXCOORD0;
				float3 normal : NORMAL;
			};

			vsOut vertexShader(vsIn v)
			{
				vsOut o;
				o.screenPosition = mul(UNITY_MATRIX_MVP, v.position);
				// Transform the normal to world space (inverse-transpose transform)
				o.normal = normalize(mul(float4(v.normal, 0.0), _World2Object).xyz);
				// Pass the world-space position on to the fragment shader
				o.position = mul(_Object2World, v.position);

				return o;
			}

			float4 fragmentShader(vsOut psIn) : SV_Target
			{
				float4 ambientLight = UNITY_LIGHTMODEL_AMBIENT;

				float3 lightDirection = normalize(_WorldSpaceLightPos0.xyz);

				float diffuseTerm = saturate(dot(lightDirection, psIn.normal));
				float4 diffuseLight = diffuseTerm * _LightColor0;

				// View vector: from the surface point toward the camera
				float3 viewDirection = normalize(_WorldSpaceCameraPos - psIn.position.xyz);

				// Blinn-Phong: N.H raised to the shininess exponent
				float3 halfVector = normalize(lightDirection + viewDirection);
				float specularTerm = pow(saturate(dot(psIn.normal, halfVector)), 25);

				// Phong: R.V raised to the shininess exponent
				//float3 reflectionVector = reflect(-lightDirection, psIn.normal);
				//float specularTerm = pow(saturate(dot(reflectionVector, viewDirection)), 15);

				return ambientLight + diffuseLight + specularTerm;
			}

			ENDCG
		}
	}
}

There are two main differences here: we need the vertex position in the fragment shader, and we need the code that calculates the updated light equation.

The position is simply passed through from the vertex shader to the fragment shader.

Then we need to do the specular calculation itself.


First we compute the view vector – the direction from the vertex position to the camera – using the built-in variable _WorldSpaceCameraPos.

Then we calculate the half vector by normalizing the sum of the light direction and the view direction.

The last thing we need to calculate is the specular term itself, (N.H)^n, and add it to our existing lighting equation. Here we are using a function called pow(x,y), which raises the value x to the power y.

25 is the shininess factor; feel free to play around with the value.
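If you are curious what the exponent actually does to the falloff, here is a tiny Python sketch (cos_angle stands in for the saturated dot product in the shader): a higher exponent makes the falloff steeper, so the highlight gets tighter.

```python
# cos_angle = dot(N, H) a little off the perfect mirror direction
cos_angle = 0.95

# The specular term for a few different shininess exponents
for exponent in (5, 25, 100):
    term = cos_angle ** exponent
    print(exponent, round(term, 4))
```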

As you can see, we didn’t use any new concepts here, just a couple of new built-in variables and functions.

Download source

http://1drv.ms/1O37i1d

Posted in Shaders, Tutorial, Unity | Leave a comment

Voice Activating your Windows 10 games using Speech Synthesis, Voice Recognition and Cortana


This blog post is all about using the Windows 10 APIs for integrating Speech Synthesis, Voice Recognition and Cortana with your Unity 5.2 games.


There are a lot of different ways of doing this, but I decided to implement it this way to keep your focus on the important things that are happening. If you don’t want to know any of this, feel free to download the sample project and try it for yourself. If you wish to add this to your own game, you will need to know this and follow the steps given. You will also most likely use this in a very customized way, which will be simple to do once you understand the basics.

We are implementing a fair number of features here, so to help you get an overview, we will focus on four components today.

1) The code that runs inside Unity. This code controls everything and lets you decide how to voice-activate your game world.

2) A solution that needs to be added to the exported Windows 10 UWA solution, implementing the interaction between your Unity game and the Windows 10 APIs. Currently, it takes a few questions with associated answers, feeds them to the Speech and Voice APIs, sets up a listening session, and so on.

3) Another solution containing the logic that integrates Cortana with your game. This has nothing to do with the in-game experience itself, but enables Cortana to launch your game, as well as run custom logic via an App Service (a service that runs as a background task in your app).

4) The logic we need to add to the exported game itself to bind everything together.

This video explains the basics of what’s going on with the technical parts of the plugin.

 

For an in-depth session about Speech Synthesis, Voice Recognition and Cortana Integration, I recommend checking out this session from BUILD 2015:
https://channel9.msdn.com/events/Build/2015/3-716

 

Using the plugin in Unity

image

To use the plugin, you must add the VoiceBot script to a GameObject. You can of course modify how it interacts with your own game logic – this is just an example. Also, the Windows10Interop class needs to be in the project, as it contains the logic that communicates with the plugin itself.

Using the example VoiceBot-component

The component is simple. It needs a reference to the panel that contains the dialogue Text, as well as to the Text itself. These are used to hide or show the questions you can ask, depending on how far away you are from the bot. The Text item itself renders the possible questions.

image

The Windows10Interop class has two functions: one to request speech, and another to stop the listening session.

The VoiceBot communicates with a plugin in the final exported project, so once you have your game running, you will need to export it and set up this integration.

 

Setting up the Windows 10 solution

This will look like a lot of steps, but I’m covering everything in detail with a lot of screenshots; it usually takes about 15 minutes at most.

We have two different components: one is the in-game voice and speech handling, and the other integrates your game with Cortana (so you can talk to and interact with your app from Cortana at the Windows 10 OS level).

The first thing you need to do is add references to the BotWorldVoiceCommandService and VoiceSpeech plugin projects, either by referencing the built DLLs or by adding the projects to the solution. The latter is best, as you will probably need to customize the code based on the needs of your game. To do this, right-click the solution and add an existing project to it.


Navigate to the EXPORT folder to find the project (or wherever you downloaded the source), and add it.


The next thing we need to do is add a reference to the project from the CortanaWorld project (our exported solution).

Navigate to Projects and it will show up automatically.

We need to do the same for the Speech Plugin project as well (add the project to the solution and reference it).

Then we need to register the added VoiceSpeech class as an App Service in Package.appxmanifest. Double-click it to open the settings, then click the Declarations tab.

Add an App Service.

Enter the name and entry point of the App Service. This lets our app know where and how to find it; the service will run in the background of our app, aiding our interaction with Cortana.

 

Voice and Speech

Now we are ready to interact with the voice recognition and speech synthesis APIs of Windows 10. First, we need to add one more thing to our app: an invisible component that will play the generated speech.

Go to MainPage.xaml and open it in design view.

Add this line below the Grid:
<MediaElement x:Name="Media"></MediaElement>

Next we need to connect our EventListeners from our Windows10Interop class in the Unity-logic to the right functions in the plugin, as well as passing the Media element we just added. This is basically how we interact with the plugin between Unity and Windows 10.
This is done by adding the following three lines of code to the MainPage.xaml.cs file, in the OnNavigatedTo function:

Plugin.Windows10.VoiceSpeech.Media = Media;
Windows10Interop.SpeechRequested += Plugin.Windows10.VoiceSpeech.StartListening;
Windows10Interop.StopSpeechRequested += Plugin.Windows10.VoiceSpeech.StopListening;


The last thing we need to do is add the Microphone and Internet capabilities to our project. Open Package.appxmanifest again.

Click the Capabilities tab and check the Microphone and Internet (Client) capabilities.

This allows us to use these capabilities in the app.

 

Cortana

To let Cortana know about your app, and how to interact with it, we need to add a command file that contains all of the interactions we wish to implement. This is done through a VCD file – simply an XML file that contains all of the commands you want to integrate with.


You can add this by creating a new XML file in your project (name it vcd.xml, which is what the registration code below expects) and adding the following content:

<?xml version="1.0" encoding="utf-8"?>
<VoiceCommands xmlns="http://schemas.microsoft.com/voicecommands/1.2">
  <CommandSet xml:lang="en-us" Name="CommandSet_en-us">
    <AppName> Bot World </AppName>
    <Example> Bot World, I want to play </Example>

    <Command Name="checkScore">
      <Example> Bot World, Did anyone beat me? </Example>
      <ListenFor RequireAppName="BeforeOrAfterPhrase"> Did anyone beat me </ListenFor>
      <Feedback> Yes.</Feedback>
      <VoiceCommandService Target="BotWorldVoiceCommandService"></VoiceCommandService>
    </Command>

    <Command Name="startPlay">
      <Example> Bot World, I want to play </Example>
      <ListenFor RequireAppName="BeforeOrAfterPhrase"> I want to play </ListenFor>
      <Feedback> Get ready! </Feedback>
      <Navigate/>
    </Command>
  </CommandSet>

</VoiceCommands>

Next, you will need to add the code that will execute if the app is launched with a voice command. This is done in the App.xaml.cs file, in the OnActivated function:

case ActivationKind.VoiceCommand:
    var commandArgs = args as VoiceCommandActivatedEventArgs;
    SpeechRecognitionResult speechRecognitionResult = commandArgs.Result;
    string voiceCommandName = speechRecognitionResult.RulePath[0];

    switch (voiceCommandName)
    {
        case "startPlay":
            {
                break;
            }
        case "checkScore":
            if (speechRecognitionResult.SemanticInterpretation.Properties.ContainsKey("message"))
            {
                string message = speechRecognitionResult.SemanticInterpretation.Properties["message"][0];
            }
            break;
    }
    break;
 

This function checks how the application was activated. If it was activated by voice, it gets the voice command that launched the app and lets you write custom logic based on which command it was.

We also need to register the VCD file. Still in the App.xaml.cs file, add this code to the OnLaunched function. It simply takes all the commands and installs them into Cortana; they will be removed if you uninstall the app.

try
{
    var storageFile =
    await Windows.Storage.StorageFile
    .GetFileFromApplicationUriAsync(new Uri("ms-appx:///vcd.xml"));

    await Windows.ApplicationModel.VoiceCommands.VoiceCommandDefinitionManager
        .InstallCommandDefinitionsFromStorageFileAsync(storageFile);

    Debug.WriteLine("VCD installed");
}
catch
{
    Debug.WriteLine("VCD installation failed");
}


That should be all. Now you can run your game, ask the sample bot a question from the given list, and interact with it using Cortana.


 

Download source here:

http://1drv.ms/1PB7qVI

Posted in Cortana, Unity | 1 Comment