Purpose of Game Engines
2D/3D/Mobile/Modtools
Game engines provide a suite of development tools that aid developers with re-usable software, for example Construct for 2D games and Unity for mobile and other 3D games. Construct suits 2D games because it has an easy drag-and-drop system: you can drag your 2D sprites onto the level and control them with the event sheet system, which works somewhat like a programming language. For example, if a certain "event" happens, such as the player reaching a particular location, another event follows, like a cinematic or a door opening in front of you. Construct also has a 2D physics engine built in called Box2D. Box2D is a free, open-source physics engine written in C++, and it has been used in many well-known 2D games such as Limbo and Angry Birds. It provides rigid body simulation, which saves developers time on physics, and it applies gravity and friction. Mobile games often use the Cocos2D or Moscrif engine; both are open-source software frameworks that can be used to build games and other cross-platform programs. They provide common GUI elements for your game, like text boxes, labels and menus, which is useful because you can simply edit the code to change the look of the elements they provide.
Integrated Development Environment
Middleware
Platform Abstraction
Platform abstraction means creating an abstraction layer over an operating system which exposes an API, making it easier to develop code for multiple software or hardware platforms. 3D engines and rendering systems in game engines are built upon a graphics API which provides a software abstraction of the GPU or video card. Libraries such as OpenAL do the same for audio hardware, and similar layers exist for input devices like a mouse or keyboard. Because of this, little of the source code has to change, and the same game can run on multiple platforms, such as PS3 and PC.
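As a minimal sketch of the idea (the class and platform names here are invented for illustration, not any real engine's API), game code can depend on an abstract interface while each platform supplies its own backend:

```python
from abc import ABC, abstractmethod

# Hypothetical abstraction layer: game code talks to AudioDevice,
# never to a platform API directly.
class AudioDevice(ABC):
    @abstractmethod
    def play(self, sound: str) -> str: ...

class OpenALDevice(AudioDevice):
    # Desktop backend (names are illustrative).
    def play(self, sound: str) -> str:
        return f"[OpenAL] playing {sound}"

class ConsoleAudioDevice(AudioDevice):
    # Console backend.
    def play(self, sound: str) -> str:
        return f"[Console] playing {sound}"

def make_audio(platform: str) -> AudioDevice:
    # Selecting the backend is the only platform-specific branch;
    # the rest of the game code stays the same on every platform.
    return OpenALDevice() if platform == "pc" else ConsoleAudioDevice()
```

The game then calls `make_audio(platform).play("shot.wav")` and never touches platform-specific code directly, which is why porting requires so little source change.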
Rendering/Collision/AI
One purpose of a game engine is to render graphics, which is the process of producing an image from a scene in software. Collision detection is when you program the engine to detect and block the intersection of two or more objects; in video games this is mainly used to limit movement. AI is short for artificial intelligence, a program which makes non-playable characters seem to have a mind of their own. The program gives the non-playable characters instructions to follow, which can depend on what the player is doing: for example, if the player comes near an AI character, the program might tell it to either run away from the player or start to attack.
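Collision detection can be as simple as an axis-aligned bounding-box (AABB) overlap test, the cheap first check most engines run before anything finer. A sketch for 2D rectangles:

```python
# Minimal 2D AABB overlap test. Two rectangles intersect only if they
# overlap on both the X and Y axes.
def aabb_overlap(a, b):
    """a and b are (x, y, width, height) rectangles."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return (ax < bx + bw and bx < ax + aw and
            ay < by + bh and by < ay + ah)
```

If this test passes, the engine can block the movement or hand the pair off to a more precise (and more expensive) per-polygon check.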
Functions of Components of Game Engines
Rendering - Shaders
Animation
One of the functions of Blender, which includes a game engine, is that it helps with animation. For animation you start by making a mesh, which is required. You then create the skeleton, or armature, that will be used to deform the character. A single armature can contain many bones, which makes animation a lot easier. You can create a hierarchy for the bones so that they connect to each other, determining which bones move first and with which joints, to create realistic movement. You also have a timeline, so the mesh will have moved to a certain spot by a certain time: for example, a simple walking cycle could last for 5 seconds on the timeline, but in game those 5 seconds would be repeated over and over to create a continuous walking motion.
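The looping timeline described above can be sketched in a few lines: game time is wrapped back into the clip with a modulo, then the pose is interpolated between the two surrounding keyframes (the keyframe values here are made-up hip angles, purely for illustration):

```python
# A 5-second walk cycle as (time, angle) keyframes. Real rigs store a
# pose per bone; one angle is enough to show the mechanism.
CYCLE_LENGTH = 5.0
KEYFRAMES = [(0.0, 0.0), (2.5, 30.0), (5.0, 0.0)]

def sample(game_time):
    t = game_time % CYCLE_LENGTH          # loop the clip forever
    for (t0, v0), (t1, v1) in zip(KEYFRAMES, KEYFRAMES[1:]):
        if t0 <= t <= t1:
            f = (t - t0) / (t1 - t0)      # 0..1 between keyframes
            return v0 + f * (v1 - v0)     # linear interpolation
```

Because of the modulo, `sample(6.25)` returns the same pose as `sample(1.25)`: the 5-second clip repeats endlessly, exactly as the walk cycle does in game.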



Systems - Particles, Physics, Sound
Sound effects in games, like gunshots and voices, help the player become more immersed, which is what game developers want. Voices can carry the story through non-playable characters, and music helps set the mood of a location; this can make games more dramatic. Physics engines like Havok give a game the laws of physics, such as gravity, which can be modified to create the effect you are going for. For example, in space you would set the gravity lower, whereas on Earth you would set it higher than in space. Equipment can matter too: if the character wears certain items, the engine calculates the added weight and adjusts how fast the character can move.
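A toy version of that gravity tweak: the same update loop produces Earth or Moon behaviour just by changing one constant (the numbers are standard gravity values; the integration is a deliberately simple Euler step, not how any particular engine does it):

```python
# Simple Euler integration of a falling object. Only the gravity
# constant changes between "worlds"; the update code is identical.
def simulate_fall(gravity, seconds, dt=0.1):
    velocity, height = 0.0, 100.0
    for _ in range(int(seconds / dt)):
        velocity += gravity * dt   # gravity accelerates the object
        height -= velocity * dt    # velocity moves it downward
    return height

earth = simulate_fall(9.81, 2.0)   # drops quickly
moon = simulate_fall(1.62, 2.0)    # drops slowly, same code
```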
To create these 3D products you need software to make them. There is free 3D development software, for example the student version of Autodesk Maya, or Blender. They work differently, but you can achieve the same final product with either, so the choice comes down to personal preference. 3D development software lets users create and alter 3D models, which can be rotated, zoomed and viewed from several angles to help with the creation process. 3D models can also be exported and then imported into other 3D development software, as long as that software is compatible with the exported file format.

A simple method of mesh construction is to create a primitive and then extrude a face to make a more interesting object: for example, starting from a cube and extruding faces outwards builds up a more complex mesh. Building shapes through simple extrusions like this is a method of modelling called extrusion modelling.
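A hedged sketch of what one extrusion step does to the mesh data (the function and its signature are invented for illustration): the vertices of a quad face are duplicated and pushed out along the face normal, and the new vertices form the extruded cap.

```python
# Extrude one face of a mesh by duplicating its vertices and offsetting
# them along the face normal. Returns the indices of the new cap face.
def extrude_face(vertices, face, normal, distance):
    new_face = []
    nx, ny, nz = normal
    for i in face:
        x, y, z = vertices[i]
        vertices.append((x + nx * distance,
                         y + ny * distance,
                         z + nz * distance))
        new_face.append(len(vertices) - 1)
    return new_face
```

A full modeller would also stitch side walls between the old and new vertices; this sketch only shows where the new geometry comes from.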
There are some constraints to 3D modelling, for example polygon count. When you are modelling, the polygons are the individual shapes that make up the model, and whether the model is complex or simple, the polygon count increases as you work. The polygon count is something you want to keep an eye on, because it can cause problems if left unchecked: the processor has to calculate the position of each vertex and polygon that makes up the model, so the higher the polygon count, the more powerful the processor has to be. This is especially important for game assets, because an asset with an unnecessarily high polygon count can overwork the processor and cause problems in the game such as low FPS.
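The kind of budget check this implies is easy to sketch (asset names, triangle counts and the budget figure below are all invented for illustration):

```python
# Flag assets whose triangle count exceeds the project's budget,
# the sort of check an asset pipeline might run automatically.
def over_budget(assets, budget):
    """assets maps asset name -> triangle count; returns offenders."""
    return [name for name, tris in assets.items() if tris > budget]

scene = {"crate": 120, "hero": 15000, "statue": 80000}
```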
Theory and Applications of 3D
Applications of 3D products
You usually see 3D products in video games, although some games are completely 2D. In movies, VFX (visual effects) use 3D models: for example, in The Hobbit, Smaug was modelled first and then composited in with green screen, with the actors pretending he was there. Some product designs use 3D models for advertisements and to show colleagues what the product would look like in 3D rather than on paper. 3D can also be used in architecture to give viewers a look at how a building would sit in its surroundings and to give a size comparison.
3D development software
Geometric Theory
3D models are made up of points, lines, coordinates, primitives and meshes. If you have done 3D modelling before you should have come across the term vertices: these are points in the model, and connecting one vertex to another with a line creates an edge in a mesh. If you connect 3 vertices together you get a triangular polygon, and if you connect 4 vertices you get a quadrilateral polygon; these are the most common kinds. Planes are flat surfaces, though a plane can also bound a solid which is 3D, and you can intersect two planes to create a 3D space.
A standard 2D plane only has the axes X (width) and Y (height). Positions on these axes are called Cartesian coordinates, and they help you navigate to and locate a certain point in the plane. They are quite useful in 3D modelling because you can identify each vertex by its coordinates, so when lines are created between vertices you can build up different shapes. In a 3D plane, however, you need a third axis, called the Z axis (depth).
Some polygons come pre-made with most 3D programs like Maya; these are called primitives, and you can use them to create simple shapes which already have meshes and then model them further after creation. In a mesh, polygons are connected to each other by shared vertices; a group of connected polygons is called an element, and each polygon in an element has a face. A face is a surface of the mesh which is used to determine how light sources affect it in ray tracing. All of this together is referred to as a mesh or a wireframe model.
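The structure described above can be written out directly: a list of shared vertices and faces that index into it. Here two triangles share the edge between vertices 1 and 2, forming one element:

```python
# A tiny mesh: shared vertices plus faces that index into them.
vertices = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (1, 1, 0)]
faces = [(0, 1, 2), (1, 3, 2)]   # two triangles sharing an edge

def edge_count(faces):
    # Each face contributes its sides; a shared edge is counted once
    # because the vertex pair is stored in a canonical order in a set.
    edges = set()
    for face in faces:
        for i in range(len(face)):
            a, b = face[i], face[(i + 1) % len(face)]
            edges.add((min(a, b), max(a, b)))
    return len(edges)
```

Two separate triangles would have 6 edges; because these two share one, the mesh has only 5, which is exactly the saving that shared vertices give a wireframe model.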
Mesh Construction
There are many ways to construct meshes. One example is box modelling, where you first create a primitive and then modify its faces by extruding and scaling until it changes into something else.
Extrusion Modelling
Extrusion modelling is a quick and simple way for 3D modellers to create 3D objects. Most of the time you start with a 2D shape and extrude its faces to follow an outline. You then take a different angle of the same shape and extrude at that angle, following the outlines, to create a 3D model. This method is commonly used for modelling heads: modellers often build only one half of the head, because you can duplicate the vertices and invert them so the head gets two symmetrical halves. Extrusion modelling is good for creating quick, simple shapes which you can then modify further to your liking.
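The mirroring trick mentioned above is a one-liner on the vertex data: duplicate every vertex with its X coordinate negated to produce the symmetrical other half.

```python
# Mirror half a model across the YZ plane by negating X.
def mirror_x(vertices):
    return [(-x, y, z) for (x, y, z) in vertices]

half = [(0.5, 1.0, 0.2), (0.8, 0.4, 0.0)]   # one side of the head
whole = half + mirror_x(half)                # both symmetrical halves
```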
Spline Modelling
Spline modelling involves defining shapes mathematically with curves. "Spline" is a term adapted from shipbuilding, where it named the flexible tool draftsmen used to draw accurate curved shapes; in computer science it refers to the curves you can create in 3D software. Splines are simple to construct, which is why they are a popular choice when it comes to creating digital models, and their curve-based design makes them easy to manipulate. They can be used in one-dimensional or multidimensional applications. One way you would use spline modelling is lofting. A loft is a sort of wireframe for a 3D object and is a common technique in 3D modelling software: it is developed from flat cross-sections repeated along the path it is given. Lofting is therefore a way of modelling an object based on splines. Normally you start modelling an object by modifying primitives, but when you loft a spline you create a 3D object that does not become editable in the same way, so in theory this is a more efficient way to create collision models.
Lathing
Lathing a 3D model means having its vertex geometry produced by rotating the points of a spline around a fixed axis. Lathing can be partial; it does not have to be a full 360-degree turn. It can be used to create symmetrical objects as well.
Constraints
File size is also quite important, as any computer user should know, to maintain processing speed and disk space. File sizes are measured in KB (kilobytes), MB (megabytes), GB (gigabytes) and TB (terabytes). It is important to keep track of disk space because it can affect processing speed: the larger the file, the more space it takes, meaning processing and rendering take more time. Render times vary with the size of the model file and the polygon count; renders can take from a few seconds to hours, so smaller-scale animations and models do not require much time to render, though this depends on the computer's processor as well. Bigger projects, like Pixar animations, require a huge amount of time to render. Rendering is not just the production of 3D models and animations but also the viewing of them: some 3D software, like Maya, lets modellers view a full render of their product inside the software. This type of rendering is called real-time rendering.
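The units listed above are simple to relate in code; this small sketch converts a raw byte count into each unit (using the binary convention 1 KB = 1024 bytes):

```python
# Convert a byte count into KB, MB, GB or TB (1 KB = 1024 bytes).
UNITS = {"KB": 1024, "MB": 1024**2, "GB": 1024**3, "TB": 1024**4}

def to_unit(size_bytes, unit):
    return size_bytes / UNITS[unit]
```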
Pipelines
To generate an image on the screen, the scene has to go through several steps before the image can be shown; I will now explain this pipeline, which is the current standard method for rasterisation-based rendering. Rasterisation means taking an image made from vectors and transforming it into pixels. First you have 3D vector geometry defined in its own local space, which is transformed into the 3D world coordinates where you want to place it. An image is then generated from the 3D world using the camera as the origin; in a first-person shooter, for example, the camera view is your screen. Next, as I explained in the shaders section of my report, the geometry is illuminated according to the lighting and reflections of the 3D world. After lighting has been taken into account, the projection is transformed: a building in the distance has to appear smaller, which is done by dividing the coordinates of each vertex by its z coordinate, the representation of distance from the camera. Parts of primitives you cannot see, outside the camera view, are then clipped away. Finally the image is rasterised, converting the vectors into pixels; after this stage the process becomes quite complex, with multiple steps referred to as the pixel pipeline, in which everything is textured and shaded either from memory or by a program.
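The perspective-divide step described above is just a division per vertex: x and y are divided by z (the distance from the camera), so geometry further away projects to a smaller area on screen.

```python
# Perspective divide: screen-space position shrinks as z grows.
def project(vertex):
    x, y, z = vertex
    return (x / z, y / z)

near = project((2.0, 2.0, 1.0))    # close to the camera: large
far = project((2.0, 2.0, 10.0))    # same point further away: small
```

This is a simplified sketch; a real pipeline works with 4x4 projection matrices and homogeneous coordinates, but the divide-by-distance idea is the same.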