Monday, July 30, 2012

What Makes a Game Fun?


What makes a game fun? It's an age-old question that I'm sure every game developer has asked, or at least thought about, at some point in their career. If you had asked me that question two years ago, I would've said that the mechanics make the game. However, after gaining more experience making games, especially Deus Shift, I have come to realize that game mechanics are only one part of the picture.

Obviously, without mechanics, you can't have a game. Mechanics are what make the game interactive - they're what you actually do when you play. That's why many game developers (myself included, in the past) think of the mechanics as the most important factor in determining fun. With that mindset, I designed a unique gameplay mechanic for Deus Shift. My friend and I spent nearly a year (part time) designing, balancing and prototyping the mechanics for Deus Shift until the prototype build was fun to play with friends. During that year, the game changed so drastically that it is basically a different game than what was originally designed (it even had a different name in the beginning - Arcane Lands). So, we had mechanics that were fun, balanced, and flexible, giving players ample opportunities even when losing. If mechanics really made the game, then we were sure to be sitting on a gold mine. After spending about nine months on the design and prototypes, we spent maybe a month making art and polishing the game, and submitted it to Kongregate, certain that it would be successful.

A main menu and in-game screenshot of Deus Shift. It doesn't look ugly, but it doesn't have a strong style, and looks grey, plain and uninspired.
For those of you who haven't guessed yet, Deus Shift did terribly, with an average rating of 2.79 out of 5. Our friends had liked the prototype, and the comments we received from the Kongregate community were mainly positive. So what went wrong? Looking back, it's pretty obvious to me. We spent 90% of our time on the game mechanics, but now I realize that game mechanics only count for about 5% of what makes a game fun. If you play the game (you can play it here: http://www.kongregate.com/games/terra0nova/deus-shift), you will notice a lack of style - it is very grey, plain and uninspired. There is also no campaign mode or story, and to top it all off, the tutorial is very rudimentary, so the game is difficult to learn. Without the ability to draw anyone into the game long enough to learn the mechanics, the mechanics are useless. A few people, for whatever reason, got into the game and really liked it (and our friends were forced to get into it, so their opinions were biased), but the majority of players weren't drawn in, and so didn't enjoy playing it.

Most games don't develop entirely new mechanics like we tried with Deus Shift - they borrow mechanics from older games. In fact, many games play almost exactly like older games, only with a new skin. I used to criticize those games, claiming they weren't original and therefore couldn't be fun. But that's really like saying a song is terrible because it has a similar chord progression to another song (for those of you who don't think many songs have similar chord progressions, listen to the Pachelbel Rant on YouTube here: http://www.youtube.com/watch?v=JdxkVQy7QLM). Even though many songs share the same chord progression, they can sound different, evoke a different mood, and still be unique and enjoyable. Games are really the same way. Think about why so many people enjoyed Half-Life 2. It doesn't have original mechanics - the gameplay is fairly standard for a first-person shooter. The visuals are nice too, but there are games with better visuals that I think are worse games overall. It's really how it sets the atmosphere, establishes a mood and pulls you in that makes it unique, with the gameplay fitting perfectly into each scene, making a complete and fun gaming experience. There are many other examples of games that are fun even though their mechanics are largely based on previous games. Think about your favorite games, and how they set their style, atmosphere, and mood.

I won't say don't come up with new and interesting game mechanics - mechanics are part of what makes a game fun. However, keep the bigger picture in mind: do the mechanics fit the style and mood of the game? A unique mechanic that doesn't fit the style (or that eats up all the development time) may distract from the game and make it feel inconsistent or incomplete. Having a strong style and mood and tying all aspects together (design, art, story, etc.) is what brings a game to life. The ability to bring all of these aspects together is what makes games so fascinating in the first place. So, what makes a game fun? I would say it is how everything comes together, and the overall style the game creates.

Thursday, July 26, 2012

Bones and Automatic Vertex Weights

In my last post, I explained how VIDE vectorizes images, which allows an image to be deformed by changing its vertex positions. However, manually changing vertex positions is very tedious and not artist friendly. Animations, especially character animations, usually involve rotating limbs around joints. Bones let an animator do this easily: draw bones and joints, then deform large portions of the vectorized image by simply rotating a bone. For this to work, every vertex must be assigned a weight for every bone so that when a bone rotates, the right vertices rotate with it. The simplest way to assign these weights is based on distance to the bone - the closer the bone is to the vertex, the higher the weight - so that the leg bone affects vertices near the leg, and not vertices in the head. However, Euclidean (straight-line) distance allows bone weights to "bleed" across gaps. This is easy to see when you have two legs next to each other in the image: a bone in one leg will affect the vertices in the other leg, as the straight-line distance can cross the gap between the two legs. What we really need is geodesic distance, which is the shortest distance within the space of the vectorized image. Since this distance is constrained to the space of the vectorized image, it cannot cross gaps or leave the image's bounds, so the geodesic distance from the left leg to the right leg is the distance up the left leg and down the right leg (without crossing the gap between them), which solves the weight bleeding. A good analogy: Euclidean distance is how the crow flies; geodesic distance is the way you have to walk.
The weight from the lower leg bone, using the exponential rule with a smoothness of 10 and a smoothness of 100. Note that the higher smoothness spreads the bone's weight further than the lower smoothness. Also note that the cape has a low weight even though it is very close to the bone in straight-line distance.
So now we have to compute the geodesic distance from every bone to every vertex. To solve this problem, I did something similar to Volumetric Heat Diffusion, which you can read about on Wolfire's blog: http://blog.wolfire.com/2009/11/volumetric-heat-diffusion-skinning/. They also have a good example of the weight bleeding effect. Their idea is simple: take a discretized version of the model (in 3D, they need to use a voxel grid, but in 2D, we can simply use our alpha-thresholded image), and then spread the weight of the bone throughout the entire model as if the bone were a heating element. This is done by iteratively setting each voxel's heat to the average of its neighbors until convergence. Once it converges, they look up the "heat" of the voxel at the location of each vertex and use that to compute the weight of that bone. The converged "heat" is related to the geodesic distance (which our friend Laplace can confirm for us), but convergence can take a lot of iterations, with each iteration requiring a loop over all of the pixels in the voxel grid or image. As you can imagine, this can be quite slow, especially without access to the parallel processing power of the graphics card. So I thought: why not just compute the actual geodesic distance in one pass? While not embarrassingly parallel like the above method, Dijkstra's algorithm does just that. Set the initial distance of all pixels in our image to infinity (or, if you use a 32-bit image like I did, an integer value which is sufficiently large). Then, set the distance of all pixels along the bone to 0, and add those pixels to a working queue. This can be done by treating the bone as a line and using a line-rasterization algorithm to get all of the pixels in the image along that line. Now, until the working queue is empty, dequeue the next pixel, and for every neighboring pixel that is within our outline (using the same alpha threshold as we did to generate the vectorized image) and is unvisited (meaning its distance is still infinity), set that pixel's distance to the current pixel's distance plus one, and add it to the queue. Visually, this is quite simple: the distance along the bone is zero, the distance of pixels adjacent to the bone is one, and so on.
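To make that concrete, here is a minimal sketch of the flood fill in Haxe. The inside mask (the alpha-thresholded image) and the bonePixels list (from the line rasterization) are hypothetical names for data the surrounding code would already have, not VIDE's actual fields:

// Minimal sketch: geodesic pixel distances from one bone via a flood fill.
// "inside" marks pixels passing the alpha threshold; "bonePixels" holds the
// indices of the rasterized bone pixels (both names are assumptions).
class BoneDistance {
    public static function boneDistances(width:Int, height:Int,
            inside:Array<Bool>, bonePixels:Array<Int>):Array<Int> {
        var infinity = 0x7FFFFFFF; // "sufficiently large" for a 32-bit image
        var dist = new Array<Int>();
        for (i in 0...width * height) dist.push(infinity);

        // The distance along the bone itself is zero; seed the queue with it.
        var queue = new List<Int>();
        for (p in bonePixels) {
            dist[p] = 0;
            queue.add(p);
        }

        // Every step costs exactly one, so a plain FIFO dequeues pixels in
        // order of increasing distance - no priority queue needed.
        var dxs = [1, -1, 0, 0];
        var dys = [0, 0, 1, -1];
        while (!queue.isEmpty()) {
            var p = queue.pop();
            var x = p % width;
            var y = Std.int(p / width);
            for (i in 0...4) { // 4-connected neighbors
                var nx = x + dxs[i];
                var ny = y + dys[i];
                if (nx < 0 || nx >= width || ny < 0 || ny >= height) continue;
                var n = ny * width + nx;
                // Only spread into pixels inside the outline that have no
                // distance assigned yet.
                if (inside[n] && dist[n] == infinity) {
                    dist[n] = dist[p] + 1;
                    queue.add(n);
                }
            }
        }
        return dist;
    }
}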
The generated normalized weights for the body bone (red), the upper leg bone (green), and the lower leg bone (blue). Note how the higher smoothness has a smoother transition of weights at the joints.
For those of you who know Dijkstra's algorithm, my algorithm is not quite the same; it's an optimization that assumes the distance from one pixel to any neighboring pixel is the same (which it is, as we always add one to the distance). Also, for those of you who really like to analyze algorithms, you may notice that this means the distance from one pixel to a diagonal pixel is 2, not √2. So we aren't really getting the shortest distance within the outline, but the shortest Manhattan distance within the outline. This can be fixed by following Dijkstra's algorithm without my optimization (using a priority queue) and including the diagonals as neighbors with a weight of √2, but that requires additional computation and updates of pixels, and does not make a significant difference in the assigned weights of the bones.
The result of the above bone weights with the bones bent into a sitting position. The higher smoothness gives a smooth bend, but looks too smooth at the hip joint. The lower smoothness only bends a small part of the joint.
So, now that we have the distances computed, how do we actually assign the vertex weights? Obviously, the larger the distance, the smaller the weight, but how much smaller? The answer is: it depends! If the weight fades quickly with distance, you get a hard, angular joint that is good for elbows. If the weight fades slowly with distance, you get a soft, smooth joint that is good for backs and hair. I found that an exponential function tends to work well: e^(-distance/smoothness). This function is always one at zero distance, and drops off quickly with a low smoothness and slowly with a high smoothness. Let the artists decide what smoothness is best. Don't forget to normalize the vertex weights so that they add to one! Also, you do not need to store all of the bone weights per vertex - usually storing the four highest-weighted bones is enough. Then, to transform the vertices, compute the transformation matrix for each bone; the transformed vertex position is the sum of the vertex position transformed by each bone's matrix, weighted by that bone's weight. Obviously, if a bone's weight is one, then the vertex is transformed by just that bone, and at the joints, the result smoothly interpolates between the transformations of the nearby bones.
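Here's a rough sketch of that weight assignment and the final blend, again in Haxe with hypothetical names (boneDist[b][v] being the geodesic distance from bone b to vertex v's pixel), rather than VIDE's actual code:

import nme.geom.Matrix;
import nme.geom.Point;

class Skinning {
    // Turn per-bone geodesic distances into normalized per-vertex weights
    // using the exponential falloff e^(-distance/smoothness).
    public static function computeWeights(boneDist:Array<Array<Float>>,
            numVertices:Int, smoothness:Float):Array<Array<Float>> {
        var weights = new Array<Array<Float>>();
        for (v in 0...numVertices) {
            var w = new Array<Float>();
            var total = 0.0;
            for (b in 0...boneDist.length) {
                // One at the bone, dropping off faster for lower smoothness.
                var wb = Math.exp(-boneDist[b][v] / smoothness);
                w.push(wb);
                total += wb;
            }
            // Normalize so each vertex's weights sum to one.
            for (b in 0...w.length) w[b] /= total;
            weights.push(w);
        }
        return weights;
    }

    // Linear blend: the deformed position is the sum of the vertex position
    // transformed by each bone's matrix, weighted by that bone's weight.
    public static function deformVertex(pos:Point,
            boneMatrices:Array<Matrix>, w:Array<Float>):Point {
        var out = new Point(0, 0);
        for (b in 0...boneMatrices.length) {
            if (w[b] == 0) continue;
            var p = boneMatrices[b].transformPoint(pos);
            out.x += w[b] * p.x;
            out.y += w[b] * p.y;
        }
        return out;
    }
}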
A sample showing how layers will work in VIDE. The arm does not bend with the body in this example because it is in a different layer, and can rotate independently.
We now have a working system that can deform images based on bones. So VIDE is done now, right? Unfortunately, making this tool usable will require layers, animation tracks, and all sorts of UI work. But the point is that we can now animate and deform images! Who cares whether anyone can actually use the program, right? All joking aside, look forward to more updates on VIDE, as well as updates on some of the game projects I'm currently working on.

Tuesday, July 24, 2012

Introduction to VIDE


Right now, in addition to a few small Flash game projects, I'm working on VIDE, which stands for Vectorized Image Deformation Engine. The goal of VIDE is to take images and deform (bend) them to create animations. The project came about because we were tired of having to work around Flash's limitations, and there didn't seem to be any animation engine that really suited our needs. We already have many games that we plan to use VIDE for. VIDE is being written in Haxe with NME, a cross-platform framework that can compile the code to native PC, Mac or Linux executables, as well as to Flash SWFs to run demos online. It can also compile to mobile devices like iOS, Android, and BlackBerry. Support for other platforms may come in time, and it wouldn't be that hard for me to add platforms myself as desired (for example, Haxe can compile to C#, and with a little extra code I could make an XNA (Xbox) project that runs VIDE).

I already have the basic functionality of VIDE done. None of the technology behind VIDE is new; in fact, a lot of it has been used in 3D games for many years. However, these techniques haven't been used much in 2D, especially not for deforming images. Below, I will explain how VIDE works.

Vectorizing an Image
The first step is to take an image and break it up into triangles. Each triangle vertex is assigned a uv coordinate that maps the vertex to the image. This makes it trivial (and very hardware friendly) to simply move the vertex positions and let the uv interpolation deform the image. The way I generate triangles from an image is to run marching squares on a threshold of the alpha channel to generate the contour, and then triangulate the contour. As all good programmers know, never reinvent the wheel if you don't have to - and it turns out that there are many free, open-source libraries that do this. I used Nape, as I had used it before for physics in games and was familiar with it. Nape even has a demo called "BodyFromGraphic" which shows how to use their marching squares implementation on an image: http://deltaluca.me.uk/docnew/swf/BodyFromGraphic.html. The returned GeomPoly shape has a triangular decomposition function, so it took almost no work to get a vectorized version of an image using Nape.
The original image, the triangles of the vectorized image overlaid on the original image, and the same image with some vertices moved to show the vectorized image deformation.
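For the curious, the Nape calls might be wired together roughly like the sketch below. I'm writing the argument lists from memory of Nape's documentation, so treat the exact signatures as assumptions and check the demo linked above:

import nape.geom.AABB;
import nape.geom.GeomPoly;
import nape.geom.MarchingSquares;
import nape.geom.Vec2;
import nme.display.BitmapData;

class Vectorizer {
    // Returns the triangles of the vectorized image as GeomPolys.
    public static function triangulate(bmp:BitmapData):Array<GeomPoly> {
        // Iso function: negative inside the shape, positive outside.
        // "Inside" means the pixel's alpha is above the threshold (128).
        var iso = function(x:Float, y:Float):Float {
            var alpha = bmp.getPixel32(Std.int(x), Std.int(y)) >>> 24;
            return 128.0 - alpha;
        };
        var bounds = new AABB(0, 0, bmp.width, bmp.height);
        // Trace the contour(s), sampling the image every 2 pixels.
        var polys = MarchingSquares.run(iso, bounds, Vec2.weak(2, 2));
        var triangles = new Array<GeomPoly>();
        for (poly in polys) {
            // Decompose each contour polygon into triangles.
            for (tri in poly.triangularDecomposition()) triangles.push(tri);
        }
        return triangles;
    }
}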

To render the vectorized image, the next step is to generate uv coordinates, which map the vertices into the image. For 3D models, this can be quite difficult, but for our 2D image, all we have to do is take the original position of each vertex as its uv coordinate, as it already maps perfectly on top of the image. For maximal graphics hardware performance, I also generated an index buffer, which allowed me to remove all duplicate vertices (of which there are many, since triangles often share vertices in a connected vector graphic) and simply index which unique vertices each triangle uses. Now we can use NME's drawTriangles function, which takes the vertices, the index buffer, the uv coords, and an optional culling argument. Since we don't want to cull any triangles (we want to display them all), we can just set the culling to NONE. For those of you wondering why VIDE doesn't work in HTML5: NME's drawTriangles function doesn't work in HTML5 at this time. Here is the code to render the vectorized image:

// begin the bitmap fill with the source image
// (matrix = null, repeat = false, smooth = true)
graphics.beginBitmapFill(bitmap, null, false, true);
// draw the mesh: vertex positions, index buffer, uv coords, and no culling
// so that every triangle is rendered
graphics.drawTriangles(vertices, idx, uv, TriangleCulling.NONE);
// end the bitmap fill
graphics.endFill();

Setting beginBitmapFill's smooth flag to true (the last argument above) makes sure that we don't get any artifacts when we deform the image. At this point, we can manually move a vertex position in vertices and see the image deform. Awesome!
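For reference, the deduplication and uv-generation step described above might look something like the sketch below. The names and data layout are hypothetical, and I'm normalizing the uvs by the bitmap size, which is how Flash's drawTriangles expects them:

import nme.geom.Point;

// Build the vertex, index, and uv buffers from a flat list of triangle
// corners (every 3 points form one triangle).
class MeshBuffers {
    public static function buildBuffers(corners:Array<Point>,
            bmpW:Float, bmpH:Float,
            vertices:Array<Float>, idx:Array<Int>, uv:Array<Float>):Void {
        // Map each unique position to its index in the vertex buffer.
        var seen = new Map<String, Int>();
        for (p in corners) {
            var key = p.x + "," + p.y;
            var i = seen.get(key);
            if (i == null) {
                // First occurrence: append the vertex and its uv coordinate.
                i = Std.int(vertices.length / 2);
                seen.set(key, i);
                vertices.push(p.x);
                vertices.push(p.y);
                // The vertex's original position, normalized, maps it onto
                // the image.
                uv.push(p.x / bmpW);
                uv.push(p.y / bmpH);
            }
            idx.push(i);
        }
    }
}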
The above vectorized image deformed using bones.
Moving each vertex manually is obviously not very easy to work with. In my next post, I'll talk about how to add bones and automatically generate vertex weights to make it easy to bend and animate a character.

Monday, July 23, 2012

Introduction

The first nickname my parents gave me was "Search and Destroy," because as a toddler I would search for buttons to press and gadgets to take apart. What would happen if I hit that button? How did that strange box function? With my inquisitive nature and interest in technology, it should be no surprise that I quickly became fascinated with computers and game development. My dad taught me QBasic, a simple and very old programming language, in 6th grade, and with it I wrote my first text-based adventure game. That was back in 1998, and I've been making computer games in one form or another ever since.

All of my QBasic games have been lost, but here's a fun QBasic game I played as a kid that looks better than most of mine did!


Today, I've graduated from college, earned a master's in computer science, have a wife and two cats, and work full time as a C++ programmer. However, I have never forgotten my desire to make games, and I always have a project or two that I work on in my spare time. Recently, I started an indie game studio with my wife and some friends called Fancy Fish Games, which I hope to one day make my full-time job. I started this blog to document and share stories about my adventures as an indie game developer, as well as to network and get in touch with other game developers.


A video from a 3D rendering research project I did during my master's in computer science. It looks slightly nicer than the above QBasic game.

Feel free to follow me as I talk about life as a game developer, progress on my current game projects, and share some retrospectives on my previous projects.