
Thursday, November 29, 2012

EjectaGL : Quick Update - Basic texture support

Basic texture support has been hacked into EjectaGL.

Here is a super quick demo (hence the bad lighting, bad finger smudges and bad everything - like all my videos) of a port of Lesson06 from learningwebgl.com.

To make up for the absence of a keyboard, the zoom is controlled by tilting the device (which Ejecta nicely supports). Touching the screen toggles between the different filtering options.

I have tried to show the effect of mipmaps, which is the main part of Lesson06 (open in YouTube at a bigger video size to actually see the difference).

If you want to give it a spin, be warned that I've pretty much stuck to implementing only the APIs needed for the lessons so far. I just haven't gotten around to typing out all the bindings yet. Feel free to jump in and add some of them.

Monday, November 26, 2012

EjectaGL - WebGL on iOS with Ejecta!

Summary: 
I am starting a project, EjectaGL, which adds WebGL bindings to the awesome Ejecta project by Dominic Szablewski. EjectaGL extends the Ejecta framework to provide a WebGL implementation on vanilla JavaScriptCore without any of the DOM overhead. EjectaGL provides a way to develop WebGL apps that can be shipped to the iOS App Store (not tested, but very possibly). It is far from complete - so please contribute if you are interested!


EjectaGL is distributed under the same MIT license as Ejecta.

Longer story:
I had worked on Voxel Flyer for js13kgames - a WebGL based game in less than 13K of compressed JavaScript. I wanted to keep developing it, but I sort of wished it would work on iOS too. A search for WebGL on iOS usually yields the jailbreak method for enabling it, which is not very useful if you want to ship your WebGL app on the App Store. It feels like Apple might open up WebGL at any time, but that never seems to happen. I had always wanted to look into adding WebGL bindings to AppMobi's directCanvas - but that never happened due to the complexity of the directCanvas code (not to mention that the open source version has hardly seen the updates of their recent XDK).

Meanwhile I saw that Dominic from Phobos Labs (whose work directCanvas is actually based upon) released Ejecta, a newer version of his iOS library that provides a barebones, fast implementation of Canvas and Audio without all the DOM overhead. This time the code felt much friendlier, and I could actually have a real shot at implementing WebGL on top of it.

So a couple of days back I started reading my first tutorial on Objective-C and started hacking on Ejecta! The result is EjectaGL. EjectaGL replaces the onscreen 2D canvas with a WebGL enabled canvas. At the current stage it's just a proof of concept, and I am discovering and fixing issues as I port each lesson from http://learningwebgl.com/. I tried porting Voxel Flyer to it and discovered that there is something wrong with the way uniforms are being passed in, which I've yet to finish debugging.

So basically it's in a very nascent stage, but things are looking mostly doable. I can definitely use some help from people to add more bindings, port more examples and review my code for memory leaks (I'm sure there are a few, given my newness to Objective-C) and other issues.

Check out the screenshot of Voxel Flyer running in the emulator, with the buggy lighting turned off, at an astounding 9 frames/second. (You can run this demo by downloading the GitHub source, creating an App/ folder and copying the contents of examples/VoxelFlyer into it.)





As for implementation details, the one major caveat is the absence of typed arrays in the current JavaScriptCore that Ejecta uses (it might be coming some time later, though). I have tried to work around it by implementing a very primitive pseudo typed array in Objective-C. Basically you can instantiate a typed array, but that's about it. I found that most WebGL code just creates a Float32Array or Uint16Array right before binding it to a buffer and doesn't do much with the contents anyway. There could be issues with libraries like glMatrix that use typed arrays for fast matrix calculations; currently I just replace the typed array with a regular array in my stripped down version of glMatrix. One workaround I see is implementing glMatrix itself in Objective-C, without typed arrays in its method interface, but using the internal representation of EjectaGL's typed array, which can then be passed on to EjectaGL. A similar strategy could be adopted to get libraries like three.js running on EjectaGL (though that would be a big undertaking of its own).
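To illustrate the workaround, here is a minimal sketch of the kind of pseudo typed array described above, written in plain JavaScript rather than Objective-C. This is not EjectaGL's actual implementation - just an illustration of the "construct it and hand it straight to bufferData" contract that most WebGL code relies on.

```javascript
// A plain-array stand-in for Float32Array. Supports just enough surface
// (construction, length, indexed access) for code that builds an array and
// immediately uploads it to a buffer.
function PseudoFloat32Array(init) {
  // Accept either a length or a plain array, like the real constructor.
  var data = (typeof init === 'number') ? new Array(init) : init.slice();
  if (typeof init === 'number') {
    for (var i = 0; i < init; i++) data[i] = 0; // zero-fill like a real typed array
  }
  data.BYTES_PER_ELEMENT = 4; // lets the native side compute the byte size
  return data; // a plain array masquerading as the typed one
}

// The typical WebGL pattern this needs to support:
var verts = PseudoFloat32Array([0, 1, 0, -1, -1, 0, 1, -1, 0]);
// gl.bufferData(gl.ARRAY_BUFFER, verts, gl.STATIC_DRAW); // native side reads the plain array
```

Anything beyond this (views, subarray, set) would need real typed array support, which is exactly why libraries that do math on the contents, like glMatrix, need the regular-array substitution mentioned above.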

There are not a lot of WebGL APIs implemented currently, but I don't see any major issues in doing that (other than the typed arrays). I am also thinking of adding features like rendering offscreen to a 2D canvas and using that as a texture in WebGL (my Voxel Flyer originally had a minimap that was a 2D canvas on top of the WebGL canvas).

So for now I just wanted to quickly mention the project and get the ball rolling so that I can ask for help and also make sure there is no duplication of effort (I am pretty sure 100 different people were also thinking of a similar project).

I think there are a lot of applications for this project - it would be nice to get all those nice shaders/demos working on mobile devices, possibly even the three.js ones. I also see the possibility of the cool WebGL shader effects available in Construct 2 becoming fully exportable to the iOS target.

Drop me a line here or on twitter - @vikerman

Update : EjectaGL is now merged into Ejecta main! You can get it at https://github.com/phoboslab/Ejecta

Tuesday, September 25, 2012

GameDoc: On the fly Javascript game modding for the mobile web

I had a crazy idea and wanted to create a quick experiment to test it out. Mobile browsers are becoming good - JavaScript and Canvas implementations are becoming faster. There are certain advantages to the mobile browser as a game distribution platform - the most important of which is that it is open and not controlled by arbitrary rules set by Apple, Google or whomever (classic example - Phone Story). HTML5 games in general are picking up steam, but game makers are ignoring the mobile browser.

There are two factors why games on mobile browsers don't work well -

  1. Game developers ignore things like centering the screen on the game area, setting the right zoom level, and allowing for touch input (or an alternate mechanism like the accelerometer) when there is no keyboard
  2. Browser makers don't provide a good way to do certain things - locking screen orientation, a working audio API, etc.
1) is definitely solvable, while 2) involves letting platform creators know about the issues faced by game developers (some influential developers are already trying to do that - http://www.phoboslab.org/log/javascript). Maybe newer platforms like Mozilla OS will push the standards higher on the existing ones.
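The first half of point 1 is mostly arithmetic: compute the zoom level and offset that center a fixed-size game area on an arbitrary screen. A sketch of that calculation (the function name and return shape are my own, not from any particular library):

```javascript
// Fit a gameW x gameH play area onto a screenW x screenH viewport without
// cropping, centering it with letterbox margins on the longer axis.
function fitGameArea(gameW, gameH, screenW, screenH) {
  var scale = Math.min(screenW / gameW, screenH / gameH); // largest zoom that still fits
  return {
    scale: scale,
    offsetX: (screenW - gameW * scale) / 2, // horizontal centering margin
    offsetY: (screenH - gameH * scale) / 2  // vertical centering margin
  };
}
```

The resulting scale and offsets can drive a CSS transform on the game's container or the initial-scale of a viewport meta tag.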

So the crazy idea is centered around the concept that HTML5 games being based on open standards should be modifiable by anyone (An idea floating around in Mozilla land) - In this specific case modified to work well on mobile browsers. If a game developer didn't take care to make the game work well on mobile browsers, anyone should be able to fix it.

To prove this point I created a proof of concept, which I called "GameDoc". For the experiment I took the game emptyblack.com. I really liked the game when I played it on the desktop, and I wanted to play it on my phone. The game seems to run okay as far as the graphics are concerned, but it had the classic issues mentioned in 1). Since there is no keyboard on the phone, the game is unplayable.

GameDoc is a native mobile app that basically wraps a browser WebView. It loads the game from the original URL, but it sets the required zoom level on the WebView to get a good view of the game area and locks the screen orientation to landscape. And here is the cool part - it fixes the absence of a keyboard by injecting a JavaScript snippet into the page that adds on-screen controls. When these controls are pressed, keyboard events are simulated using the dispatchEvent mechanism. I went the WebView wrapper route to be able to do this JavaScript injection on the device (due to the absence of browser extensions on mobile).
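Roughly the kind of snippet GameDoc injects might look like this (the touch zones and keycodes here are illustrative - the real game's controls may map differently). A pure helper picks a keyCode from the touch position, and a dispatcher then simulates the keyboard event on the page:

```javascript
var KEY_LEFT = 37, KEY_RIGHT = 39, KEY_SPACE = 32;

// Left third of the screen = left, right third = right, middle = action key.
function touchToKeyCode(x, screenWidth) {
  if (x < screenWidth / 3) return KEY_LEFT;
  if (x > 2 * screenWidth / 3) return KEY_RIGHT;
  return KEY_SPACE;
}

// Browser-only part: fire a synthetic keydown/keyup via dispatchEvent.
// Many 2012-era games read e.keyCode directly, so we set it on the event.
function sendKey(doc, type, keyCode) {
  var e = doc.createEvent('Event');
  e.initEvent(type, true, true); // bubbles, cancelable
  e.keyCode = keyCode;
  doc.dispatchEvent(e);
}
```

Wiring touchstart to sendKey(document, 'keydown', ...) and touchend to the matching 'keyup' gives the game the key events it expects, without the game's code changing at all.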

So here is a video of how the converted game works. It shows how the game is not playable in its original form in the mobile Chrome browser, and how the fixed version works with simulated on-screen controls. It's not perfect - sometimes keyup/keydown events seem to get missed, and I think it might be due to multitouch issues. As you can see, I really threw this together rather quickly, with horrible looking icons for the controls - but you get the idea.


The grand idea is that there can be a crowd-sourced HTML5 game portal where people can add their own customizations to the game controls or the game itself, and the GameDoc app (or whatever better name I come up with) would pull that extra information for the game and do the necessary modding.

Can I avoid the need for a separate app that wraps the WebView and just use the native browser? Maybe I could do something on the server side, but that might mess with things that are tied to the original domain, like cookies or local storage. Maybe I could just do the controls with some sort of IME trickery.

Anyway as I said it's a crazy experiment. At the least I hope to push HTML5 game developers to think more about the mobile web.

Let me know of your thoughts/comments.

Update: Looks like iOS 6 has better support for audio, opening up better game support! And the awesome folks who make Construct 2 already support it - https://www.scirra.com/blog/98/the-web-as-the-platform

Tuesday, September 18, 2012

js13kgames and Voxel Shooter

I knew I had to take part in js13kgames - a competition to write JavaScript games that are less than 13KB when zipped. I had used HTML5 canvas for two of the last three game projects I worked on and was increasingly identifying myself as an HTML5 game programmer. However, a visit back to India and the work on Mario Ball had left very little time for me to actually work on my 13KB game.

At 13KB I had decided to have some sort of procedural generation for the elements of the game. I also wanted to explore the Web Audio API for the sound. However I was having a very hard time trying to come up with a concrete idea for the game. My very first idea was a weird fighting game where the player's form and powers were determined by a random combination of 3 elements - Rectangles, Circles and Bezier curves. The player and enemies basically looked like modern art pictures. I wasn't sure whether I was going too weird with this one. Some of the denizens of this unfinished game:





Then this entry came out on js13kgames - 13th Knight. It was a fully textured 3D world in WebGL in under 13KB! And it had awesome looking trees!! I wanted to make a WebGL game now. I had always wanted to learn WebGL but hadn't looked closely into it before. I had started looking at three.js for 7dfps but never got the time to do anything beyond a sphere moving on a plane. This time around, three.js wasn't going to be an option because no external libraries were allowed and three.js would not fit into 13KB (I later learnt somebody else had managed to get a subset of three.js working in their 13KB game - Mindless).

So, not exactly knowing what I was going to build, I started with the WebGL lessons at http://learningwebgl.com/blog/?page_id=1217 for the basic template of a WebGL program. I played around with it till I was able to draw a flat colored cube with directional lighting. There were 5 days till the competition ended and I had to come up with something quick. Then it struck me - I would just redraw the cube a lot of times to create a voxel engine! (Being completely ignorant of the issues before starting a project is sometimes a good thing.) I had been missing a good flying game those days, and I thought I would do a flying game where you shoot enemies on the ground - Voxel Shooter!!

I generated the terrain using a random combination of sin curves. I copied a small subset of the glMatrix library for the matrix operations. And I happily went about drawing cubes over and over again - one drawElements call per cube. I could reach up to a 50 x 20 array of cubes before the rendering became too slow. I tried removing faces from cubes that were hidden behind other cubes - a small increase in the frame rate, but nothing great. I tried some cheating, like increasing the size of each cube so that I could get away with a smaller number of cubes. Nope - the map was still too small. I banged my head trying to figure out a frustum culling method, but it just involved too much math to figure out in a small amount of time. Also, the far plane was going to be too near and the field of view too narrow if I could display only 1000 cubes. I just had to figure out a way to improve performance first.
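The sin-curve terrain trick can be sketched like this: sum a few sine waves at different frequencies to get rolling hills. These particular coefficients are made up for illustration - the game's actual formula isn't in the post.

```javascript
// Build a size x size grid of integer column heights from summed sine waves.
function generateHeights(size) {
  var heights = [];
  for (var z = 0; z < size; z++) {
    var row = [];
    for (var x = 0; x < size; x++) {
      var h = 3 * Math.sin(x * 0.15) +      // broad hills along x
              2 * Math.sin(z * 0.23) +      // ridges along z
              1.5 * Math.sin((x + z) * 0.07); // slow diagonal swell
      row.push(Math.max(1, Math.round(h + 4))); // at least one cube per column
    }
    heights.push(row);
  }
  return heights;
}
```

Each entry is then the number of cubes to stack at that (x, z) cell, and randomizing the frequencies or phases gives a different island each run.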

Then I came across this - Google I/O 2011: WebGL Techniques and Performance. I had basically made all the mistakes mentioned in the talk. First and foremost, I was switching too much state for every draw call - even the uniforms that were common everywhere, like the lighting, camera and projection matrices. I got a substantial boost when I fixed this issue, but still not enough to allow the 256 x 256 map size I was aiming for. The next step was to reduce the number of GL draw calls by combining many cubes into one giant mesh. This one I had some problem getting my head around, because I hadn't understood the concept of attributes properly. Then I saw how I could augment the vertex data by putting each cube's world position in an attribute, and I could draw the entire terrain with a single drawArrays call - and BAM! Now everything was running smoothly at 19fps (yeah, much better than 7fps) for an entire map of 256 x 256.
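The batching trick above can be sketched as follows: instead of one draw call per cube, append every cube's vertices into one big array, with each vertex carrying its cube's world offset as an extra attribute. The vertex shader then adds the offset to the position, and the whole terrain goes out in a single drawArrays call. (The unit-cube data is abbreviated to one face here for brevity, and the interleaved layout is my own illustration, not the game's exact format.)

```javascript
var FACE = [ // two triangles of a unit cube's top face
  0, 1, 0,   1, 1, 0,   1, 1, 1,
  0, 1, 0,   1, 1, 1,   0, 1, 1
];

// cubeOffsets: [[x, y, z], ...] - one world-space offset per cube.
function buildMesh(cubeOffsets) {
  var verts = [];
  for (var i = 0; i < cubeOffsets.length; i++) {
    var o = cubeOffsets[i];
    for (var v = 0; v < FACE.length; v += 3) {
      // Interleave: unit-cube position (3 floats) + cube offset (3 floats).
      verts.push(FACE[v], FACE[v + 1], FACE[v + 2], o[0], o[1], o[2]);
    }
  }
  return verts; // upload once, then: gl.drawArrays(gl.TRIANGLES, 0, verts.length / 6)
}
```

In the shader this pairs with something like `gl_Position = projection * view * vec4(aPosition + aOffset, 1.0);`, so the per-cube uniform update disappears entirely.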

Everything from here on was mostly polishing. I worked a lot on the camera - I was using the LookAt matrix but couldn't figure out how to roll, and I wanted the camera to roll on turning. After scouring the internet I found a simple method: just pre-multiply the LookAt matrix with a roll matrix. I didn't like the way you could lose your bearings if you looked straight up. So I added some clouds (all drawn with a single GL draw call), filled the surrounding areas with water, and added a wrap-around if you strayed too far from the island. I added some cubes to represent the enemies (all drawn with a single call). I spent quite a bit of time overlaying a 2D canvas for the minimap so that you would know where your enemies were. I tried adding bullets to shoot down the enemies, with the basic physics handled right in the shader, but that wouldn't work - I just had to give up on it. (Did you wonder why it was called Voxel Shooter but there is no shooting? :) I spent the whole night before the competition deadline at 7AM adding these things, and then before I realized it, it was time to submit!
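The roll trick can be sketched with plain 4x4 column-major matrices (glMatrix-style flat arrays; this is my own minimal version, not the library's code). rollMatrix(angle) rotates about the view's z axis, and pre-multiplying it with the LookAt matrix rolls the camera while keeping its aim.

```javascript
// Rotation about the z axis, column-major.
function rollMatrix(angle) {
  var c = Math.cos(angle), s = Math.sin(angle);
  return [ c, s, 0, 0,
          -s, c, 0, 0,
           0, 0, 1, 0,
           0, 0, 0, 1];
}

// Column-major 4x4 multiply: out = a * b.
function mat4multiply(a, b) {
  var out = new Array(16);
  for (var col = 0; col < 4; col++)
    for (var row = 0; row < 4; row++) {
      var sum = 0;
      for (var k = 0; k < 4; k++) sum += a[k * 4 + row] * b[col * 4 + k];
      out[col * 4 + row] = sum;
    }
  return out;
}

// view = mat4multiply(rollMatrix(roll), lookAtMatrix); // roll applied after lookAt
```

Because the roll matrix is applied after LookAt, the rotation happens in view space, which is exactly the "banking" effect wanted when the camera turns.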

I was very dejected that I hadn't completed the game and was too near the deadline. You just flew around doing pretty much nothing - you couldn't even crash into anything. Staying awake the whole night felt like a complete waste of my time. And then, out of sheer desperation, I thought I would just add a simple check for crashing into the terrain and that would be the game - avoid crashing into anything. While testing it I kept going through the enemies, so I thought I should add collision detection with them as well. I added that, and then I realized that instead of treating crashing into enemies as a failure, that could be the whole point of the game!! I quickly added a score for picking up the red cubes (no longer enemies!) and increased the speed of the game every time you finished a batch of them. I submitted this new version and was smugly happy with my own crisis handling. Next time I should just give myself more time to avoid such things.

The next day I played through the other entries of js13kgames and liked many of them. It was interesting to see the methods they used to keep the size down. The most amazing thing I learnt about was jsfxr - a simple sound effects generator that avoids having to ship huge audio files. I am definitely going to use it more in my upcoming projects and in future versions of Voxel Shooter.

Some people seemed to like Voxel Shooter, and it was really cool that indiegames.com picked it - http://indiegames.com/2012/09/browser_game_pick_voxel_shoote.html. Even Cheezburger picked it for their lunch leisure list - http://t.co/xO9drzmZ! Some were amazed it was done within 13KB, but I should mention that my game was very simple - it just does the same thing a lot of times, and I didn't spend any time trying to reduce the size.

The initial reaction from people has encouraged me to keep working on the game, and I think I will make it into a full-fledged game in the future. Lots of ideas are flying around in my mind now - sounds, bigger worlds, flying enemies, improved shading, shadows, water, lava, gamepad support, multiplayer, levels, missions... But now I will automatically think of one more thing - how small can I keep the code and assets?

And to end it all here is a video from my awesome friend Brian who scored 148 in Voxel Shooter! Can you beat that?!!



Sunday, September 16, 2012

Mario Ball with Keita Takahashi and Kaho Abe!

Something very unexpected happened. The Babycastles folks were running a Summit at the Museum of Arts and Design (MAD). It was centered around bringing Keita Takahashi's ideas to reality, along with other things like game-related talks (not just videogames but also things like games in physical spaces) and indie music bands.

They wanted some help with the programming for one of their projects. For some inexplicable reason they thought of asking me, and I jumped at the opportunity. I had always wanted to explore physical games and alternate interfaces, and to do things similar to what Kaho Abe does. And now they told me I was actually going to work with her! Not to mention an opportunity to work with Keita Takahashi!!

I had an initial meeting with Keita and Kaho. The project, Mario Ball, was to build the original Mario Bros. - but instead of using a normal controller, the idea was to control it using a wooden maze representing the level, with Mario actually controlled by a ball. I think Keita's original drawing explains this best.


Kaho had decided that we were going to use a magnetic ball for Mario and an array of magnetic reed switches underneath the controller surface to detect where the ball is and move the on-screen Mario accordingly (it was decided not to use computer vision because that was already being used for another game). This is where I was going to help them - actually build the game that processes the controller input to move Mario around. It was also decided to build the controller as a big box, so that it would require two people working with each other to actually control Mario! We also decided that we would make Mario go upside down when the ball is moved up and stays up there - something like a change in the direction of gravity. I was concerned about the resolution of the magnetic reeds and asked Kaho to also add an accelerometer, so that we could try to interpolate Mario's position when the ball was between two magnetic sensors and we had to guess which direction Mario was actually moving. Throughout the meeting I was mostly just thinking - Yay!! I am talking with Keita!!

Kaho was going to build this insane setup of 90 magnetic reed switches connected to an Arduino through a set of multiplexers. The program running on the Arduino would detect the active magnetic switch and write that value to the serial port, along with the 3-axis acceleration from the accelerometer. So now we were ready to start building the controller and the game. The kicker - we had 6 days to build the whole thing!! (And not even 6 whole days. I had to go to my day job every day and work in the evenings, and Kaho was handling the hardware for 2 other games.)

I think my experience working with multi-tiered systems at my day job (at this small startup) helped here. We decided up front what the format of the controller output was going to be, so that we could initially work separately and in parallel, and then put the controller and the game together when they were ready. I decided to build the game with a ball and magnetic switches that were simulated (similar to the mocks/stubs we use to test in isolation without external dependencies), so that I could iron out some of the issues even before the actual controller was ready. In hindsight this got me to 50% of the actual solution - but that was good enough given the time crunch. The things I totally got wrong in my simulation were the area each magnetic switch covered and the maximum speed the ball would actually roll at in the real world.
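The controller protocol itself isn't spelled out in the post, so here is a parser for a hypothetical one-line-per-reading format of "switchIndex,ax,ay,az" (in JavaScript rather than Processing, for illustration). Agreeing on something like this up front is what lets the game side run against a simulated controller before the hardware exists.

```javascript
// Parse one line of (assumed) controller serial output into a structured
// reading, returning null for any garbage the port spits out.
function parseControllerLine(line) {
  var parts = line.trim().split(',');
  if (parts.length !== 4) return null; // wrong field count: skip the line
  var vals = parts.map(Number);
  for (var i = 0; i < vals.length; i++) {
    if (isNaN(vals[i])) return null;   // non-numeric field: skip the line
  }
  return { switchIndex: vals[0], accel: [vals[1], vals[2], vals[3]] };
}
```

Tolerating and dropping malformed lines matters here - as mentioned further down, the real Arduino code had a bug that printed lots of useless values in between the useful ones.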

So I went back and started to actually build the game. I didn't spend much time thinking about it, but I decided to use Processing because of its inbuilt ability to use the serial port to interface with the controller. This was both a good and a bad idea, but at the end of the day it was probably the right decision. The good part was that it all worked out - even though I developed the whole thing on the Mac, it ran perfectly on the Windows machine where we finally put it, including the serial port part. The bad things included Processing not really having an easy way to rotate and blend images at the same time (really? I expected better) and some bugs when part of the rendered image went offscreen. The IDE itself was horrible at times, and I had to actually delete one file at a time to find syntax errors. At the end of the day the positives probably outweighed the negatives, but the next time around I would think a bit harder before using Processing for a game.

I spent the first few days actually getting the assets of the original Mario Bros. in (thanks Syed!) and trying to mimic the look of the original game as much as possible. I built the game in a way that is probably not recommended - I tried to get the graphics done well before the actual logic/gameplay. But I found that doing this motivated me much more than dummy boxes and circles would have. Looking back, I think this also helped because I would never have been able to put in the effort to get the graphics right during the crazy days that were to follow. I also spent the initial time building a framework in Processing to read the world layout and character animations from Tiled TMX files. This helped me out later when Keita wanted me to add extra characters into the game, and I was able to do it with relative ease. As usual, I wrote custom collision detection code instead of trying to figure out how to use Box2D with Processing. In my defense, I didn't have time to ramp up on anything more complicated than the simple graphics APIs, and this was the first time I was using Processing.

We met halfway through the week - both the hardware and the software were lagging behind where they should have been at that point. Keita was amazing as always - he had built the frame for the controller and created an actual Mario Ball by pasting colored strips onto the magnetic ball to form a Mario (thanks Lauren for the photo!).





That night Kaho and I worked through the whole night, trying to finish as much as possible. We made really good progress, but we still weren't ready to put things together. The whole time I wasn't stressing myself out and tried to stay as calm as possible. I also told myself that Babycastles always manages to pull off these things - so I had nothing to worry about! (It was half true in the end :). So here is a picture of the amazing circuitry that Kaho put together that night. I think she called it unicorn barf or something like that.



The next day we had to move to MAD, and this is where things started going wrong. We weren't allowed to stay late the day before we had to set the whole thing up! So on the opening day (Friday) we had just a few hours to get everything ready. Kaho was also getting very busy dividing her time between all the projects she had to get working. It was a very tense day, and unfortunately we weren't able to get Mario Ball ready that night.

On Saturday we finally got the game and the controller talking to each other. It was a magical moment - I never thought we would get to this stage given the craziness of the whole project :) It had a few kinks, including a bug in the Arduino code which made it print lots of useless values in between the actual useful ones. A few hours and a few critical bug fixes later, we had something close to playable - there were some issues with Mario appearing to teleport when the ball moved too quickly, but the game was definitely playable. And so by Saturday afternoon Mario Ball was ready to go! We spent some time setting up the actual hardware for it, and the game was finally ready to be put up.

I haven't talked much about Keita, so let me mention just a bit here. He is a person of few but precise words, and I would like to keep this section the same way. He is an amazing combination of a nice guy and a person who pushes you to the limit. He wasn't there to just have fun - but to actually do something great. Good enough was not good enough - perfection was the only end point. In the end I didn't do everything he asked for - but the game definitely ended up much better than if we had just aimed for something good enough. The couple of things I took out of this whole experience: aim high and always keep improving.

Even though we were almost a day late, I was very happy to have Mario Ball running for more than half of the Summit. I think most people had lots of fun playing the game. There were some who said it was too experimental - but I thought that was part of what we were trying to do. Kaho had done an amazing job with the hardware - it worked without anything breaking for most of the Summit. (It briefly broke down on Sunday evening - but that was just because someone had yanked on the controller so hard that the USB cable connected to the Arduino had come off. The crazy wiring just kept working the whole while.)

So here it is - a picture of a couple of people playing Mario Ball in the jungle-themed arcade section of the Summit (the jungle theme was an amazing achievement of its own; somebody needs to write about that). This should give an idea of the size of the controller and how it takes two people to use it. There are so many details in the controller I can't capture here. Keita added the fur-like exterior and window handles on the sides (which reminded Keita of his grandmother's house?) so that you can easily hold and play.




Here is a video of people playing it at the Summit. I am sort of happy that I was able to capture the experience of the two players actually coordinating with each other to play the game.


I have no good words to end this post. Maybe that this is just the beginning. We will keep improving. You will see a better Mario Ball in the future!!

Thursday, August 16, 2012

'Go away norman' goes to Paris!

I have been wanting to post this for a long time, so let me do it now before I completely miss it. The ever amazing folks at Babycastles put up a whole exhibition of cat-themed games called Meowton at La Gaite Lyrique's Play Along. They put our game Go away norman into a cat mobile that they had built (among other amazing installations)! The show was open for a month for people to play.

Seeing people play our game felt great by itself but this was something else! Thank you Syed and thank you Babycastles!!

Here are some photos of the cat mobile with our game in it (Thanks to Lauren for these photos)

Other photos from Play Along and Meowton can be seen here - https://www.facebook.com/media/set/?set=a.434311623269911.101273.108973242470419&type=3



Saturday, May 12, 2012

Universe within launched on App Store for the iPad!

We released our Global Game Jam game "universe within" on the Apple App Store for the iPad! (My first ever app on the App Store!!)

It's much better to play the game on the iPad, making use of the accelerometer (rather than trying to tilt your MacBook). Also, the screen resolution made it a natural fit for the iPad. A tweaked iPhone version may follow later.

You can find it on iTunes here - http://itunes.apple.com/us/app/universe-within/id523392835?ls=1&mt=8 - or search for "universe within" in the App Store on the iPad.

As I had mentioned before, I had played around with porting an HTML5 canvas based game to iOS using AppMobi's open source directCanvas. This time I went all the way, ported our game to the iPad and released it on the App Store. It was a slightly more complicated application this time, using the accelerometer and touch for the menu, but it was fairly straightforward to get the code working with directCanvas.

I haven't had the time to post any of the technical details of the porting process. Drop me a line if you are interested in knowing more about it.


Friday, March 23, 2012

the universe within... - released on the Chrome Web store!

After polishing our Global Game Jam game a bit, we have finally released it on the Chrome Web Store!

If you are not using Chrome just check out the latest version of the game at http://www.universewithin.net 


Tuesday, January 31, 2012

Global Game Jam 2012 : The Universe Within ...

Wow! That was a crazy ride this past weekend. I had just enough time to recover from the Babycastles game jam a few months back, and it was already time for the Global Game Jam. I was there just to have some fun, and it turned out to be a whole lot of fun plus a few pleasant surprises.

This is the game we made - withinuniversewithin.appspot.com and this is the story of how we made it.




First and foremost, a game jam is not a competition. It's a place where people who are passionate about making games get together to stretch their minds - part of it is making your own game, but the rest is watching others make games.

Every game jam has a theme, and the theme for this year's Global Game Jam was Ouroboros. The theme definitely provides good boundaries for people to stay within.

Forming Teams
Forming teams at a game jam is a very awkward proposition for me, but I am amazed I have found the right team both times I attended one. The best tip here (as I first heard from Ben Johnson) is to just go with your instincts and not think too much about it. Of course, this time I decided to play it safe by sticking with people I know - GJ and Brian - and that works too, I guess (though at the previous game jam, which was my first, I went with my gut feeling to team up with GJ, and that worked out very well).

Coming up with the game idea (Friday 8PM - 11PM)
This is the most difficult part. Most of the initial keynotes were about coming up with ideas and, more importantly, deciding on an idea as a team. In our case we jotted down 3-4 overall themes and talked about the various loops we could explore that would portray the Ouroboros. Here are the ones I remember...

1. Brian and GJ had already come up with the idea of powers of 10 and an image zooming into itself.
2. We discussed a game about trying to break out of a loop, like Everyday the same dream.
3. We also thought about gameplay ideas where the first attempt at a level affects the next one (a good game at the NYU game jam which used this effectively was What kills me makes me stronger).

We had sort of settled on the first idea, because the idea of zooming seemed very intriguing. The Ouroboros was worked into it by the possibility that when you zoom into atoms, you find another universe of their own - thus looping infinitely. As GJ put it - sort of like the last scene of MIB.

At this point we just had a theme but no concrete gameplay ideas. I was suggesting a maze exploration game that you navigate by zooming through different places. If you took a wrong turn you would end up in a level higher up. Or there could be a key at the atomic level that you would pick up and bring to the universe level to unlock something - again navigating by zooming. This was when we were joined by Shawn and Jordan. Shawn sort of liked our idea but suggested an obstacle-avoidance gameplay set as a side auto-scroller. Jordan also wasn't feeling the whole maze exploration thing. I was against a side auto-scroller because that sounded like our previous game jam game. This was the most tense part of our whole game jam experience :) There was a deadlock for almost an hour as we tried to come up with the one grand idea.

Finally, in an exasperated effort to bring everything together, I suggested making a collision avoidance game with zooming in place of side scrolling. That seemed to go well with everyone - at least well enough to start work on the game. They say that designing by committee is not the best strategy, but in our case it worked out okay. A leader who can just be the decider is good but maybe not absolutely necessary for a game jam. Whatever method you choose, it's essential that everyone is committed to the idea before starting on the actual game.

So Shawn and GJ were going to be the artists, Brian the designer/level scripter, and Jordan and I the programmers.

Brian made the decision to go for the game jam diversifier of drawing all the art by hand and scanning it. I was sort of intimidated by the idea at first, but by the end of the game jam it proved to be a great decision. Did we know that then? Nope. The only thing I can probably say here is that you really have to know your limits and push against them in the right manner. If not, you will probably learn it the hard way anyway, and it's a good lesson to take with you.

Next we discussed the tools. I told them that I already had experience from the previous game jam using HTML5 canvas, and we could just continue using that. I am not a big fan of game engines for a game jam because I believe you tend to be influenced by the tool rather than the other way around. The experience of the last game jam was definitely helpful (as were the stupid mistakes I made, like trying to code an entire collision detection system). Things like animation code could just be taken from the previous game and put into this one. My game jam code is probably evolving into the best engine I can use in a game jam. Jordan hadn't worked with Javascript before, so I gave him a crash course on it. He knew C++ well and picked it up quite quickly.

Making the game : Day 1 (Saturday 9AM - 11:59 PM)
The main part of actually making the game is probably dividing up the tasks neatly and working independently without disturbing each other frequently. Having one person keep an eye on the overall progress is probably useful - and Brian and Shawn did that for our team.

I was so relieved to have another programmer on the team, and it made things so much easier than the last game jam. Jordan suggested we set up an SVN repository, and we spent about 30 minutes setting one up at code.google.com. This saved us a huge amount of time since we could work on features and bug fixes separately and merge code without much pain. Admittedly the SVN client on my Mac was wonky and wouldn't sync the levels folder - but even with that we saved a lot of time overall. Incidentally, having a repository also gives us a nice history of the development of our game. When I get some time I am going to put together a video of how our game looked at every hour of the game jam (I should be able to write a script to do this...Hmmmm).

We decided to just use circle-based collision since that's the easiest thing to do, and we just had circles for our objects until we had some art to use. We kept working on one feature at a time and committed it to the repository as soon as it worked.
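Circle-based collision boils down to comparing the distance between centers against the sum of radii. Here's a minimal sketch of the idea - the function and field names are illustrative, not from our actual code:

```javascript
// Circle-vs-circle overlap test. Each object is assumed to carry
// x, y (its center) and radius. Comparing squared distances avoids
// a square root per check, which adds up when testing many obstacles
// every frame.
function circlesCollide(a, b) {
  var dx = a.x - b.x;
  var dy = a.y - b.y;
  var radii = a.radius + b.radius;
  return dx * dx + dy * dy <= radii * radii;
}
```

Each frame you'd just run the player's circle against every obstacle circle - brute force, but more than fast enough for a jam game.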

Brian worked with Shawn and GJ to figure out the various scenes while we were programming. Shawn and GJ worked separately and didn't try to sync up their styles too much - which probably gave the game a unique feel (half the levels are B/W while the other half is in color :). They were just producing amazing hand drawn art.

Brian did some audio recording of real-world traffic and people sounds for some of our levels. He was also ready to do some level designing but was blocked by our inability to get the level parsing and rendering code done. We could probably have planned it a little better - but I am not sure the time spent on planning would have been worth it.

At the end of day one we had built the basics of the game and integrated part of the art into it. We had no sounds, and the level parsing was just starting to work. There were lots of bugs, including one where the obstacles were too small.

I went back home that night thinking of doing something cool with the input. I had heard of DepthJS, which interfaces the Kinect with Javascript in the browser, and was trying to figure out whether we could add that to our game. In the end it turned out to be too difficult to set up. In sheer desperation I started looking for ways to integrate webcams with Javascript when I happened to come across a site which mentioned how you could integrate the accelerometer into your Javascript web app. I was able to add the five lines of code which hooked the accelerometer up to the game. I was sort of disappointed that this wasn't cool enough, but it turned out to be the best five lines of code I wrote for the game jam (explained later).
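For the curious, those five lines were roughly along these lines. This is a sketch of what the hookup could look like, assuming the browser exposes the laptop's motion sensor through the `devicemotion` event; `player`, the sensitivity and the clamping are all made-up details, not our exact code:

```javascript
// Map a raw accelerometer reading to a velocity, clamped so that a
// hard tilt can't fling the player off-screen. (Hypothetical numbers.)
function tiltToVelocity(accelX, sensitivity, maxSpeed) {
  var v = accelX * sensitivity;
  return Math.max(-maxSpeed, Math.min(maxSpeed, v));
}

// Browser-only wiring, guarded so the snippet also loads outside a browser.
if (typeof window !== "undefined" && window.addEventListener) {
  window.addEventListener("devicemotion", function (e) {
    var a = e.accelerationIncludingGravity;
    player.vx = tiltToVelocity(a.x, 2.0, 10);
  }, false);
}
```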

Lesson: A good feature doesn't have to involve lots of code. Stepping back in between your coding to think about what neat things you can do can go a long way.

On a different note, take some time in between to go around and see what others are doing. After all, that's why you are at the game jam and not coding in your basement.

Making the game : Day 2 (Sunday 9AM - 3:00 PM)
We still had the majority of the levels to design and the art to incorporate on day two. Jordan did amazing work taking over the levels part, handling all the nitty-gritty and working with Brian to get it right. I worked on the animation and the level transitions. Jordan implemented the really helpful feature of skipping levels by pressing "1" (it's still in the game if you want to try it out :). This let us jump around and test individual levels, and it also let us demo specific levels to people without having to wait for them to play through everything before. Brian did a great job with the level design - tweaking the difficulty of the levels so that they weren't too easy. This was something I wished we had gotten to earlier, since we spent too little time on it in our previous game jam too. Oh well - maybe in the next game jam.

The sounds got pulled in at some point, again with the code I used for the previous game jam. We just did simple looping of sounds. The looping wasn't perfect, but we patched it quickly with some fading at the beginning and the end.
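The fade patch amounts to a small gain envelope applied over each loop. A sketch of the idea - the function and its parameters are illustrative, not our actual code:

```javascript
// Volume envelope for a looping sound: ramp up over the first fadeTime
// seconds of each loop and down over the last fadeTime seconds, so the
// imperfect loop seam is masked. Times are in seconds; returns a gain
// in [0, 1].
function loopGain(t, duration, fadeTime) {
  var pos = t % duration;                       // position within the current loop
  if (pos < fadeTime) return pos / fadeTime;    // fading in
  if (pos > duration - fadeTime) {
    return (duration - pos) / fadeTime;         // fading out
  }
  return 1;                                     // full volume in the middle
}
```

On each tick you'd set the audio element's volume to something like `loopGain(currentTime, clipLength, 0.5)`.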

Once the levels were looking acceptable we took it a bit easy and just made sure our core mechanics were working. There were some bugs that we deemed not too important for the demo. We just didn't want to be scrambling at the last minute to fix things that completely broke the game. Or maybe we were just too tired. Either way, we didn't want anything badly broken just before the demo.

The last thing left was to name the game. Various names were thrown around - including "Our porous ouroboros", "Milky snake" (don't even ask) and "Soroboruo" (sounded like "Sorry bro"). Finally we settled on "the universe within...". It seemed to reflect the recurring universes within each level and the concept of one universe being inside another in an endless loop.

Showing the game and Surprises!!
Once the game was done we had to set it up to show to other people. There was a panel of judges from the NYU Game Center who were going to select the best games in a few categories - best visuals, best audio, best gameplay and best overall game. This was just a gesture of encouragement to the participants. Our team didn't have any high hopes of winning any prizes. The array of games made was just amazing - my favorite was Horse Beatoff EXTREME!, with its simple yet unique beat-based gameplay. And then something happened.

People were drawn to the hand drawn art and enjoyed the levels; the "1" cheat code was just what we needed for a demo, and the mere fact that they now knew MacBooks had an accelerometer that could be used was thrilling to them. They got the idea of the recursive worlds and really liked the zoom feature, but also the simple gameplay of avoiding things. Each one of us had contributed major parts to the game that people liked, and without realizing it we had ended up being a great team.

Frank Lantz saw the game and got it immediately. He said he liked the way the accelerometer fit nicely into the game and didn't feel too gimmicky (totally accidental - I was going for more gimmicks than that). This was when it hit us that we had something on our hands. Sure - there were bugs in the game, but people were willing to look past them to our ideas. We didn't talk to people that much, but we felt we had communicated (did I just pick that up from the trailer of "Indie Game: The Movie"?).

In the end, we were awarded best overall game at the NYU Game Center!! That was definitely a great feeling, but the best memories we will carry are of the time we spent building a game together along with so many other people.

The next day we were featured on Kotaku - http://kotaku.com/5880467/this-72+hour-game-lets-you-fly-through-the-galaxy-and-into-one-mans-mind. We are totally honored that they decided to pick us to represent the Global Game Jam. There were so many more amazing games made at the game jam - so definitely keep an eye out for them at https://twitter.com/#!/globalgamejam. Or go through the whole list at http://globalgamejam.org/games

Conclusion
In closing, I just want to thank the folks of the Global Game Jam, the NYU Game Center and my teammates for an amazing experience! It was an exhausting weekend, but I learned so much about myself with regard to making games.

I have rambled a lot, but if you are reading this and still thinking about joining the next game jam - just do it. It will be one of the best experiences you will have as an independent game developer.

(Version 2: Fixed some errors. Might need one more pass to fix it completely.)