So, for our last week of focusing on our project, we reimplemented our Lights Out and Detonate builds into our new architecture.
Our Detonate build wasn't getting as much love as our Lights Out build at the EAE Open House event, but we did receive a lot of great feedback about the control schemes, the camera movement, and the actual gameplay benefits.
We determined that most people liked the Lights Out build more than Detonate, but that Lights Out has a flaw in how it rewards the leading player. It turns out that if you control the entire cylinder's territory, you are in the same bad predicament as your opponent: if you turn off the floor, you fall to your death. Right now, your advantage is still a liability.
So, after discussing this with Rody, we are planning on looking at the design over break and deciding what to do with it, if anything.
We learned more about our pitch for the game, demoing for younger kids who just liked to paint the tiles, and also for people who hadn't played games much at all. We discovered new things about our group, enjoyed some cake, and some of us got to enjoy being interviewed on the local news.
It was a great event, and we are going to take a slight break over Christmas to relax. I'll be back for more posts and information come January!
Monday, December 15, 2014
Engineering II - Final Project - "Just Keep Swimming"
ZIP LINK
Write-Up
Overall Game - "Just Keep Swimming"
The game is downloadable above in ZIP form. Extract it and run the executable. You can alter the screen size and resolution using the settings.ini file.
The final content for the project is a 3rd person "runner", in the form of a humanoid fish character swimming to infinity, dodging items in the world. If you collide with any item, except for the orange pyramid, you take damage and the screen goes red. If you take too much damage, the screen goes black.
If you collide with the orange pyramid you increase your score in the top-right corner of the screen. I didn't include a win condition, but the idea would be to reach a certain number of points, and then you win.
If the screen goes black, press Alt+F4 to close the window. Double click on the executable file to re-run the game.
The game is meant to emulate Gradius and Vinyl.
Game Controls and Restrictions
Use the arrow keys (Up, Down, Left, Right) to move the character about the screen. The character cannot speed up or slow down. Press the "M" key to turn off invincibility, after which you are capable of taking damage. You will see the change when the fish humanoid's material switches from orange to a bluish color.
The camera is static, because I didn't want to add matrix rotations into the engine. The entities in the scene, besides the fish humanoid, are actually moving towards the screen, while the fish character is static. These entities are moved to a positive Z position in the distance when they pass behind the camera.
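The recycling scheme above can be sketched as follows. The names and constants here are hypothetical, not the engine's actual code:

```cpp
#include <cassert>
#include <vector>

// Hypothetical types and constants for illustration only.
struct Entity { float z; };

const float kScrollSpeed = 5.0f;   // world units per second (assumed)
const float kRecycleZ    = 100.0f; // respawn distance ahead of the camera (assumed)
const float kCameraZ     = 0.0f;

// Move every obstacle toward the camera; the fish itself never moves.
void UpdateObstacles(std::vector<Entity>& entities, float dt)
{
    for (Entity& e : entities)
    {
        e.z -= kScrollSpeed * dt;  // slide toward the screen
        if (e.z < kCameraZ)        // passed behind the camera
            e.z = kRecycleZ;       // recycle into the far distance
    }
}
```

Recycling the same entities instead of spawning new ones keeps the memory footprint constant while the run lasts forever.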
The image below shows the fish character in-game while still being invincible:
Fish Man Mesh
The mesh was created in Maya, and it was my first attempt at an organic-looking humanoid character. I added a scales material to the model. The model's head and neck, as well as the arms and shoulders, had to be altered to get the character into a horizontal swimming position.
I have included an image from Maya of the original model and the altered model:
Don't look too closely at the second image, as the neck of the fish humanoid may be slightly broken.
Mesh Pointers
The meshes in the game are stored as pointers by the entities, which allows each mesh to be shared, or altered on the fly. The game duplicates the same meshes across different entities.
The additional meshes used are shown below:
The meshes included a cube, sphere, pyramid, cylinder, plane, and a sphere with inverted normals as a sky-dome.
The sky-sphere creates a simple visual boundary for the world, rather than my having to tweak the far-plane value until it looked decent.
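The pointer-sharing idea can be sketched with a pair of minimal structs. These are hypothetical names; the engine's real classes also carry vertex buffers and shaders:

```cpp
#include <cassert>

// Minimal hypothetical types; the real Mesh also owns its vertex data.
struct Mesh   { int vertexCount; };
struct Entity { Mesh* mesh; };  // a shared reference, not a copy

// Because every entity stores only a pointer, editing the mesh once
// changes it for every entity that references it ("altered on the fly").
```

A usage sketch: two entities pointing at the same cube mesh both see any edit made through either pointer.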
Material Pointers
The materials in the game are stored as pointers by the entities, which allows them to be shared, or altered on the fly. The game duplicates the same materials across different entities. In the first image you will see the orange material, blue material, sand material, and two different water materials on the planes.
The materials used on the ceiling and floor planes have their UVs altered on the fly, to further the effect that the character is moving forward.
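The UV scrolling on those planes can be sketched as a tiny per-frame update. The helper name and wrap behavior are assumptions, not the engine's actual code:

```cpp
#include <cassert>
#include <cmath>

// Hypothetical helper: scroll a plane's V coordinate each frame so the
// floor and ceiling textures slide past the stationary character.
float ScrollUV(float v, float speed, float dt)
{
    v += speed * dt;
    return v - std::floor(v);  // wrap back into [0, 1)
}
```

Wrapping into [0, 1) keeps the offset bounded no matter how long the game runs, and the texture tiles seamlessly across the wrap.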
Debug Lines
I implemented debug lines to allow a simple view of objects in the world. I primarily used them on the fish humanoid, so I could find the appropriate screen boundaries for character placement.
The image below shows the red debug line I placed on the character:
Press the "P" key to toggle the debug lines on and off.
Collision Detection
For simplicity with my current code, I used a basic spherical collision check between the fish humanoid and all other objects in the scene.
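A sphere-vs-sphere check like the one described reduces to a distance comparison. A minimal sketch, with hand-tuned radii assumed rather than derived from the meshes:

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

// Sphere-vs-sphere overlap test; the per-object radii are assumed to be
// hand-tuned values, not computed from the mesh bounds.
bool SpheresCollide(const Vec3& a, float ra, const Vec3& b, float rb)
{
    const float dx = a.x - b.x;
    const float dy = a.y - b.y;
    const float dz = a.z - b.z;
    const float r  = ra + rb;
    // Compare squared distances to avoid the square root.
    return dx * dx + dy * dy + dz * dz <= r * r;
}
```

Comparing squared distances avoids a sqrt per pair, which matters when every obstacle is tested against the player each frame.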
Game UI
I added the name of the game in the top-left corner, and the score in the top-right corner. This can be seen in the first image of this technical write-up.
I added a water shader effect to the game, affecting all 3D objects in the scene but not the UI or sprite elements. You will notice that the lighting on the character appears to change from front to back and left to right. The initial idea was to sample a gray-scale texture of the surface water material and modulate the colors of the objects in the world by that value.
I wasn't able to add another sampler to the shader, because the compiler kept optimizing it out. I was able to side-step this by reusing the existing sampler and altering the 3D lighting effects with each object's own material. It achieved a similar effect to what I wanted, and it looks pretty good.
Realized Learning Moments
The sampler issue described above was the biggest problem I had while working on this. It was interesting to hear comments from others suggesting that "the entire image is faked in some way, so the fact that it uses its own material is fine."
That was a valuable learning moment.
I will keep altering my existing game engine to build on the rendering work I developed here. I don't personally have an interest in learning more complicated lighting, like specular lighting or ambient occlusion, but I really enjoyed creating the water shader effect, and I may look into more shader effects in the future.
Time Used
Reading: 0.5 hours
Write-Up: 0.5 hours
Technical Write-Up: 2.0 hours
Coding: 9 hours
Wednesday, December 10, 2014
Post 37 - Confirmed Build Project Focus
We confirmed our builds the prior week for EAE Open House Day, where we presented our updated IGF builds for playtesting, and had a great time.
There was some transition in our design focus, and the team seems to be taking to it positively. We have people acting as focused leaders who plan to efficiently connect the dots to the end of the project, across the board. The transition is succeeding because the entire team is staying cooperative and transparent.
Our gut feeling is that our initial prototype testing affected how we work on the fly. As engineers facing a shifting, moving goal for our prototypes, we have laid the groundwork for a flexible game structure.
This structure required nimble program layouts, and encapsulating certain script files to prevent major alterations to root code. The prototypes initially had to be altered to fit into the current code. We are striving to have both our Lights Out and Detonation builds done by the beginning of the week.
We will keep on trucking.
Tuesday, December 2, 2014
Engineering II - Assignment 11 - 2D Sprites
Assignment 11 - 2D Sprites
ZIP LINK
Write-Up
The basics of this assignment were to draw 2D sprites as an overlay in our game engine. Part of this overlay was going to be animated or altered, and we needed to use a texture atlas to create the appropriate effect.
Finally, we included appropriate PIX events to help with debugging code issues. These primarily focused on separating our 3D meshes from our 2D sprites in the render call.
Technical Write-Up
Sprite Creation
The basic 2D sprite consists of a left, right, top, and bottom side. I created a sprite class to handle the position, texture, vertex buffer, shaders, and UV values.
Every sprite uses the same four-vertex mesh and will not differ. This means we did not need an index buffer for the mesh; we only had to create a vertex buffer to hold the positions of the vertices.
To set the positions of the mesh's vertices, we allow the programmer to enter values for the overall UI position on screen. The user enters an x and y value, which becomes the center of the sprite, and a width and height, which create the overall rectangular shape.
My code computes the corners of the sprite by dividing the width and height by two, and then adding those values to the x and y positions. This gives the user a defined position and UI size for the sprite when rendering. Currently, the vertex buffer compensates for changes in the screen's width so the sprite is not stretched. Vertical changes in screen size are not handled, as that was not required.
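A sketch of that corner math, assuming a y-up screen space (the helper and struct names are made up for illustration):

```cpp
#include <cassert>

struct Quad { float left, right, top, bottom; };

// Build the four sides of a sprite from its center plus a width and height,
// by splitting the size in half on each side of the center point.
Quad MakeSpriteQuad(float x, float y, float width, float height)
{
    const float halfW = width  / 2.0f;
    const float halfH = height / 2.0f;
    return { x - halfW, x + halfW, y + halfH, y - halfH };
}
```

The four resulting sides map directly onto the four vertices of the sprite's quad.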
The image below shows the change in resolution for the screen, but the sprites in the image are exactly the same width and height, as specified by the user:
By having the vertex buffer handle this information, the needs of the sprite class are simplified, which means the vertex and fragment shaders for sprites can be simplified as well. I created new shaders that handle the position and color, and pass them correctly from the vertex shader to the fragment shader. The vertex shader performs no transformations on the sprite, since it is used as a UI overlay and will not change or distort based on the camera position.
Texture Atlas
The texture atlas is used to select a particular section of a texture for a sprite, and to alter the texture's appearance on the sprite in real time. The coordinates on the texture run from zero to one in both x and y. This means that if your values only go from zero to one-half in both x and y, only that quarter of the texture will be used on the sprite.
Using this to our advantage, we can map out a complex texture and then use the texture atlas to alter the world, or have multiple items use the same texture with different coordinates (UVs) for different looks.
This is done in my game engine using the Numbers image below:
The Numbers image allows the user to press the number keys 1 through 0 on the keyboard (not the number pad), and the displayed number changes to that digit. If the 'Q' key is pressed, it shows the entire texture over that surface. You can see in the following two images that the number is initially shown as 1, because the '1' key was pressed, but is then shown as 2 when the '2' key is pressed:
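If the Numbers texture lays its ten digits out in a single horizontal row (an assumption; the actual atlas layout may differ), the UV rectangle for a pressed digit is one tenth of the U range:

```cpp
#include <cassert>
#include <cmath>

struct UVRect { float u0, v0, u1, v1; };

// Select the atlas cell for digit d in 0..9, assuming one row of ten cells.
// Each cell spans one tenth of the U axis and the full V axis.
UVRect DigitUVs(int d)
{
    const float cellWidth = 1.0f / 10.0f;
    return { d * cellWidth, 0.0f, (d + 1) * cellWidth, 1.0f };
}
```

Pressing 'Q' corresponds to the full rectangle {0, 0, 1, 1} instead of a single cell.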
The other sprite I'm showing in game is called the Logo image, shown below:
This image is placed at the center of the screen, and the entire texture is used, without the texture atlas. If I only wanted to show part of the smiley face in the texture, or the wording, I would only have to alter the UVs of the sprite texture and it would shift appropriately.
PIX Events
The debugging events added for PIX separate the draw calls in my render function into 3D meshes and 2D sprites. This lets you simply see which call is being made when, and how many times it is called. There is an image of this event below:
If you expand the Sprite Rendering event, you can see the moment in the draw call when the AlphaBlendable state is set to true. This allows the alpha value of the sprites to be used, creating the ability to see through a texture to other textures located behind it. This is shown in the image below:
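The blend that the AlphaBlendable state enables is the standard "source over" operation: per color channel, src·alpha + dst·(1 − alpha). A sketch of that math in plain C++ (on the GPU this happens in the output-merger stage, not in user code):

```cpp
#include <cassert>

// One channel of standard alpha blending: the sprite's color weighted by its
// alpha, plus whatever is already behind it weighted by the remainder.
float BlendChannel(float src, float dst, float srcAlpha)
{
    return src * srcAlpha + dst * (1.0f - srcAlpha);
}
```

At alpha 0 the background shows through untouched; at alpha 1 the sprite fully covers it, which is why transparent texels need correct alpha values from the artist.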
If the alpha state were not set to true, the image would look like the following:
This shows the importance of adjusting the alpha value appropriately for the effect we need. It also means the artist must make sure that alpha values are set appropriately for all art content.
Realized Learning Moments
I honestly didn't think that the PIX debugging statements would be that useful, but they helped me fix an error with my sprite class, as well as with the vertices. I also had a small error where I was using a triangle "list" instead of a triangle "strip", which prevented the full texture from being viewed.
I am still very confused about why the sprite shouldn't be altered based on the screen resolution. It seems like the game's window manager would handle particular widths/heights, adjust the other dimension appropriately, and then use a higher- or lower-resolution version of the sprite image. Not allowing the sprite to scale with screen size seems reasonable for keeping the art asset the same, but it seems unreasonable for game UI management. It's interesting to see how the engineering pipeline can be in conflict with what the game content wants it to be.
Time Used
Reading: 2 hours
Write-Up: 0.75 hours
Technical Write-Up: 1.25 hours
Coding: 7 hours
Tuesday, November 25, 2014
Engineering II - Assignment 10 - Directional and Ambient Lighting
Assignment 10 - Directional and Ambient Lighting
ZIP LINK
Write-Up
The basics of this assignment were to get correct lighting, both ambient and directional, working in our game engine. This required our meshes to use their normals, which required our Maya exporters to include those as well.
With the normals being read by the Maya exporter, our shader programs (vertex and fragment) had to use them too.
Finally, we needed to include the ambient and directional lighting, with movement of the directional light, and then use PIX to capture some debug pixel information.
Technical Write-Up
Maya Exporter Update
The mesh type was already implemented in our last assignment, so I needed to have the exporter include the normal from the mesh as well. This was a simple addition into the exporter file requesting the newly included normal values.
The normal values are included in each vertex of the shape being imported. This is different from what I initially assumed; I expected the normal of the object to be with respect to its "side" or "face". Maya includes additional vertices in the models, with their own normals, to help the image render correctly.
After this was done, I re-exported all of my initial mesh files and confirmed the normals were being included.
Ambient and Directional Lighting
The inclusion of ambient and directional lighting was slightly more difficult. This required altering the fragment shader to recognize the color of the ambient light, the color of the directional light, and the direction of the directional light.
The vertex and fragment shaders also required the normals to be input and output correctly. This allowed the directional light and ambient light to be added together, using the normal, and the color to be calculated correctly in the fragment shader.
The vertex shader required the normals to be transformed by the mesh's world rotation.
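The per-fragment math described above amounts to Lambert diffuse plus an ambient term. A C++ sketch of the same calculation (the real version lives in the fragment shader, and multiplication by the material color is omitted here):

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

float Dot(const Vec3& a, const Vec3& b)
{
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// Ambient color plus the directional color scaled by the Lambert factor.
// lightDir points from the light toward the surface, so it is negated.
Vec3 LightColor(const Vec3& normal, const Vec3& lightDir,
                const Vec3& directional, const Vec3& ambient)
{
    const Vec3 toLight = { -lightDir.x, -lightDir.y, -lightDir.z };
    const float lambert = std::max(0.0f, Dot(normal, toLight));
    return { ambient.x + directional.x * lambert,
             ambient.y + directional.y * lambert,
             ambient.z + directional.z * lambert };
}
```

Clamping the dot product at zero is what keeps surfaces facing away from the light lit only by the ambient term.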
Here is an image of my scene with ambient and directional lighting:
The light for the image is coming from below and to the right (your right) of the character, and has a red-ish tint. The ambient light is white, and is fairly dark, while the directional light has the heavier value of red.
I implemented movement controls for the directional light, which allow you to change the direction of it. I also allow the user to alter the color of the ambient light, and the directional light as well.
If you were to access the code and alter the lighting manually, you would do so inside my Graphics.cpp file by changing the variables' values, which are D3DXVECTOR3s. For a color, x is red, y is green, and z is blue; for the direction, x, y, and z are the usual axes.
The control schemes for altering these values at runtime are as follows:
I - Move the directional light to face the negative z direction
K - Move the directional light to face the positive z direction
J - Move the directional light to face the negative x direction
L - Move the directional light to face the positive x direction
R - Adds to the red value of the directional light color
G - Adds to the green value of the directional light color
B - Adds to the blue value of the directional light color
T - Adds to the red value of the ambient light color
Y - Adds to the green value of the ambient light color
H - Adds to the blue value of the ambient light color
***These color controls add to the color value, and if they exceed 1.0f, they loop back to 0.0f.
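That wrap-around behavior can be sketched as a tiny helper; the step size per key press is an assumption:

```cpp
#include <cassert>
#include <cmath>

// Add one step to a color channel; past 1.0f it loops back to 0.0f,
// matching the R/G/B/T/Y/H key behavior described above. The 0.1f
// step size is assumed for illustration.
float StepColor(float value, float step = 0.1f)
{
    value += step;
    if (value > 1.0f)
        value = 0.0f;
    return value;
}
```

Each key press would route through a call like this for the corresponding channel of the light's color vector.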
PIX Debug Pixel
The last thing to confirm was the pixel color information, and we used PIX to do this. Below is the captured PIX image:
The image shows that during the draw call, you can click on a pixel in the render (right side of the image) and request its history of color information. PIX will also let you step through your shader code and see what it is doing.
This allows you to see more simply, step by step, what your fragment shader code is doing.
This is similar to a previous PIX exercise where we debugged the vertex shader by selecting a vertex on the mesh and stepping through its code.
All of this is done purely in Debug mode, as you cannot step through the code in Release mode.
Realized Learning
Moments
This assignment allowed me to see the key reason for Maya
including multiple vertices in a mesh. It was confusing to create your own mesh
by hand, and understand that a cube has eight vertices, but when Maya generates
a cube it has 24 vertices. These additional vertices are for UVs and normals.
If these weren’t created, then the rendering method would take the eight
vertices of a cube, create a cube surface, and render it almost as a slightly
flat faced sphere.
I had a great learning moment of double checking the variable
names being used in your shader, and how you access them. A simple mistake I
made, and then an additional pair of eyes helped out with.
Time Used
Reading: 2 hours
Write-Up: 1.5 hours
Technical Write-Up: 1.5 hours
Coding: 4 hours