I started the year by posting a postmortem on this blog and I figured I’d end it in the same way.
Kuraima was developed as part of a university module on emerging games technology. We were tasked with creating an augmented reality (AR) application on the PlayStation Vita console. One component of the brief was that we should attempt to be innovative with our application. This naturally led to some initial research into what had been done before, both on the Vita and on other augmented reality platforms.
At the start of development I was keen to make a puzzle-platformer prototype akin to FEZ or Miegakure. After some preliminary research I realised that there was a severe lack of 3D platformers created for AR. There could be multiple reasons for this, but speaking from experience now there are a few solid ones that show why the genre isn’t well suited to AR. One of the main constraints I decided on early was to forgo SLAM (Simultaneous Localization and Mapping) and instead use the Vita’s marker cards. The main justifications were simplicity of implementation and processing overhead (in that order). Using the markers vastly simplified both the design and architecture of the prototype.
The general gist of the game was split into two interconnected modes. In Climber Mode the player would take control of a character and traverse the current level chunk (located relative to the world origin marker) to the objective. Once the player had entered and activated the objective, the game would switch to Jigsaw Mode. In this mode a second marker would show the player where the next level piece had to be placed relative to the world origin marker. Once the piece was in the right place, the player would “snap” it into position and the second marker could be removed.
One of the first things I decided about the project was that I would attempt to do most of the actual asset work myself as well. Ultimately this didn’t quite pan out as well as I’d hoped (and we’ll get into that later) but it was quite a nice learning experience with MagicaVoxel.
Initial Testing and Bullet
After some experimentation with the AR marker functionality I decided to test if all the features I wanted in the final prototype would actually be implementable given the time frame. For the most part these initial tests went rather well, however there was one major stumbling block: physics.
I didn’t want to include any major external dependencies for this project, so I attempted to write my own lightweight non-rigidbody 3D collision detection and resolution. The key part here is collision resolution; detecting collisions is fairly straightforward (especially if you’re using AABBs and the shapes you’re representing are all vaguely cube-ish) but resolving them is a bit trickier. I thought I could manage it within the time frame I had allotted, but it became clear that this was not a problem I was going to solve to my satisfaction and still end up shipping.
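For illustration, here is a minimal sketch of the kind of AABB detection and resolution I was attempting. The names and the minimum-translation approach are my reconstruction of the idea, not the actual project code:

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

// Axis-aligned bounding box defined by its min and max corners.
struct AABB { float minX, minY, minZ, maxX, maxY, maxZ; };
struct Vec3 { float x, y, z; };

// Detection: two AABBs overlap iff their extents overlap on every axis.
bool aabbOverlap(const AABB& a, const AABB& b) {
    return a.minX < b.maxX && b.minX < a.maxX &&
           a.minY < b.maxY && b.minY < a.maxY &&
           a.minZ < b.maxZ && b.minZ < a.maxZ;
}

// Resolution sketch: push `a` out of `b` along the axis of least
// penetration (the minimum translation vector). Returns the offset to
// apply to `a`; zero if the boxes do not overlap. Real resolution gets
// much hairier once velocities and multiple contacts are involved.
Vec3 resolveAabb(const AABB& a, const AABB& b) {
    Vec3 out{0.0f, 0.0f, 0.0f};
    if (!aabbOverlap(a, b)) return out;
    float px = std::min(a.maxX - b.minX, b.maxX - a.minX);
    float py = std::min(a.maxY - b.minY, b.maxY - a.minY);
    float pz = std::min(a.maxZ - b.minZ, b.maxZ - a.minZ);
    if (px <= py && px <= pz)
        out.x = (a.minX + a.maxX < b.minX + b.maxX) ? -px : px;
    else if (py <= pz)
        out.y = (a.minY + a.maxY < b.minY + b.maxY) ? -py : py;
    else
        out.z = (a.minZ + a.maxZ < b.minZ + b.maxZ) ? -pz : pz;
    return out;
}
```

The detection half really is this simple for cube-ish shapes; it's everything after the `if` that eats your schedule.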
Bullet is a free and open-source 3D physics library. It handles all the good stuff: rigid-body simulation, collision detection and collision response, among other things. After much hesitation I decided to integrate it into the project to handle all of the physics heavy lifting so I could focus on getting the actual game up and running. This turned out to be the right decision, although there were a few wrinkles I had to iron out with interactions between certain game entities.
Development and Implementation
After all of the pieces were in place it was time to start proper implementation. Another conscious decision I made was to simplify the architecture of the application as much as possible. After experimenting with entity-component systems and such over the past few years I decided to eschew them for a much simpler, bare-bones structure. This was both good and bad.
On the one hand, once the application’s interfaces for wrangling Bullet and the Vita’s AR capabilities were in place, along with a few base/utility classes, development was fairly smooth. Making both interfaces static made it very simple for entities to leverage them as required. The more inheritance-based structure (which was kept as flat as possible) also meant that rather than having lots of small objects there were only a handful of sizeable ones.
On the other hand lots of code (particularly in game entity classes) was very tightly coupled. A few objects became particularly bloated. The use of the static interfaces did not help this problem at all.
One of the major saving graces for attempting to decouple the code was the inclusion of a game state wide event queue. This allowed objects to post events of interest (such as the player reaching an objective) to the queue and have other objects listen and respond to these events. Each frame the game would flush the event queue and process all the events that were posted. In a more complex game this may not have worked particularly well but in the prototype where there aren’t huge amounts of events being posted every frame it wasn’t a problem.
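A minimal sketch of that kind of flush-per-frame event queue might look like this. The `Event` shape, the string-keyed listener map and the coin example are all my own assumptions for illustration, not the project's actual types:

```cpp
#include <functional>
#include <map>
#include <string>
#include <vector>

// A hypothetical game event: a type tag plus a small payload.
struct Event { std::string type; int payload; };

class EventQueue {
public:
    // Objects post events of interest as they happen during the frame.
    void post(const Event& e) { pending.push_back(e); }

    // Other objects register to be notified of a given event type.
    void listen(const std::string& type,
                std::function<void(const Event&)> fn) {
        listeners[type].push_back(std::move(fn));
    }

    // Called once per frame: dispatch everything posted since last flush.
    void flush() {
        std::vector<Event> batch;
        batch.swap(pending);  // listeners can safely post new events
        for (const Event& e : batch)
            for (auto& fn : listeners[e.type])
                fn(e);
    }

private:
    std::vector<Event> pending;
    std::map<std::string,
             std::vector<std::function<void(const Event&)>>> listeners;
};
```

Swapping the pending list out before dispatch means events posted by listeners are deferred to the next flush rather than processed mid-iteration, which keeps frame behaviour predictable at small scales.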
The initial prototype featured only cubes: each level was a cube and there wasn’t much actual platforming in the game itself. As I mentioned earlier, I had decided to do as much of the asset work for the game as possible myself, and it was now time to add what I had to the game. Once I had finished creating the levels in MagicaVoxel I exported them to .obj, re-exported them from Blender in the same format after fixing the world origin, and finally converted them to the file format we were using in the application.
Adding proper levels led to the next large hurdle: collision geometry for the levels. There are many ways of handling this in Bullet, but I settled on what I thought was a fairly reasonable system: rather than hard-coding the collision data, I opted to load it in from a CSV file. Combined with the instant reset functionality I added to the game state, this meant I could test the collision geometry as I was adding it. It was not, however, as straightforward as that, because the collision data had to be authored by hand. The general approach involved taking the individual vertices of the mesh data and using them to calculate the size and origin of the shapes required, but it quickly devolved into trial and error. This cost a small amount of development time, and by the time I had finished the process for 4 of the 6 planned levels I decided it was not the best use of my time, so I scrapped the remaining 2 and tested the levels I had finished. I then discovered some issues with the collision data for the 4th level, and thus the final level count stood at 3.
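As a rough idea of what loading hand-authored box colliders from CSV can look like: the row layout (origin plus half-extents) and all the names below are assumptions for illustration, not the file format the project actually used:

```cpp
#include <algorithm>
#include <sstream>
#include <string>
#include <vector>

// Hypothetical box-collider record: world origin plus half-extents,
// one per CSV row.
struct BoxCollider { float x, y, z, hx, hy, hz; };

// Parse rows of the form "x,y,z,hx,hy,hz" into collider records.
// Malformed rows are skipped rather than aborting the whole level,
// which is forgiving when the data is being tweaked by hand.
std::vector<BoxCollider> loadColliders(std::istream& in) {
    std::vector<BoxCollider> out;
    std::string line;
    while (std::getline(in, line)) {
        // Turn commas into spaces so stream extraction can split fields.
        std::replace(line.begin(), line.end(), ',', ' ');
        std::istringstream row(line);
        BoxCollider c;
        if (row >> c.x >> c.y >> c.z >> c.hx >> c.hy >> c.hz)
            out.push_back(c);
    }
    return out;
}
```

Each record would then be handed to the physics layer (in Bullet's case, as a box shape) when the level loads; paired with an instant reset, editing the CSV and re-testing becomes a tight loop.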
Polishing and Finishing Up
Once the core game loop was implemented it was time to clean up and polish both the code and the game itself. I immediately gravitated towards adding collectable coins. This was fairly trivial to get up and running: a quick model thrown together, a small amount of collision code, and it was in. The other major addition was audio. This went hand in hand with the coin collectables and really enhanced the overall feel of the game. The game event queue came in especially handy here, as it meant the audio system could listen to incoming events and respond with a sound as soon as it processed them.
After adding these two features, a main menu and a win screen, the prototype was finished. After debugging and optimising a few things (namely the game state’s reset functionality, which was pitifully slow) I decided it was done.
What Went Right
1. The Game Event Queue
The game event queue was fantastic. Adding new event types and listeners as required was easy. It also successfully decoupled certain parts of the application that didn’t require knowledge of each other’s interfaces (audio in particular comes to mind). The event queue system itself may not have been suited to other, more complicated games but it worked remarkably well for Kuraima.
2. Static Interfaces
Despite my misgivings about implementing certain aspects of the game this way, I have to admit that the static interfaces aided development substantially, particularly when it came to integrating Bullet into the application. Adding that extra abstraction layer cleaned up some code that could have been particularly gnarly.
3. Data-Driven Systems
Not to be confused with data-oriented design (which is also pretty cool). A sizeable chunk of the game’s data was loaded at run-time from external files, the best example being the collision data for the levels. Paired with the reset functionality, it became very easy to test iterations of the collision data. In an ideal world more of the prototype would have been designed with this in mind, possibly with a different file format like JSON instead of CSV.
What Went Wrong
1. Time Management vs Ambition
This problem is fairly straightforward. Attempting to roll my own solutions for as much of the project as possible almost killed it completely. Paired with some slightly dodgy time management midway through development (particularly around the early botched physics implementation and asset creation), this led to a few features being scrapped, including a few planned fixes for the controls not being relative to the camera’s position and orientation.
2. Static Interfaces
Global state issues notwithstanding, the static interfaces also created another big problem: compile times. This might not seem like a huge deal, but any change that involved adding, modifying or removing a function on a static interface required every class that used that interface to recompile. On a small project like this it wasn’t a massive issue, but it would only get worse with scale.
3. Camera and Controls
This is both a design and a technical issue. In the state it was in at submission, the game had somewhat… problematic controls. This stems from both how I implemented the game and how AR works in general. The gist of it is that in most 3D platformers the camera is focused on the player character, whereas in AR you have no such guarantee. The camera orientation also becomes a problem, as the player could be viewing the level from anywhere provided the marker is still visible and detected. I had a few ideas for how to solve this, but to do so correctly would have required using the Vita’s gyroscopic capabilities. In an ideal world I would have implemented this.
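The usual fix, sketched below, is to make stick input camera-relative: flatten the camera's forward vector onto the ground plane and build the move direction from that. This is my own illustration of the general technique, not what shipped; the handedness of the derived right vector depends on your coordinate convention:

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// Normalise a vector, returning zero for a zero-length input.
static Vec3 normalized(Vec3 v) {
    float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return len > 0.0f ? Vec3{v.x / len, v.y / len, v.z / len}
                      : Vec3{0.0f, 0.0f, 0.0f};
}

// Map 2D stick input to a world-space move direction relative to the
// camera: drop the camera forward's pitch (project onto y = 0), derive
// a perpendicular "right" on the ground plane, and combine the two.
Vec3 cameraRelativeMove(Vec3 camForward, float stickX, float stickY) {
    Vec3 fwd = normalized({camForward.x, 0.0f, camForward.z});
    Vec3 right{fwd.z, 0.0f, -fwd.x};  // 90-degree rotation in the plane
    return {right.x * stickX + fwd.x * stickY,
            0.0f,
            right.z * stickX + fwd.z * stickY};
}
```

This only works while the camera pose relative to the level is known, which in marker-based AR breaks down as soon as tracking is lost; hence the appeal of falling back on the gyroscope.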
I personally consider Kuraima to be a successful project. It didn’t quite match the initial vision I had but it came pretty close in my eyes. It is nowhere near perfect but I had a blast making it. Here are a few more screenshots from the final version: