So, I finally released Spamocalypse last week, after two years of working at it in my free time. Now is as good a time as any to look back and see what went well, what didn’t, and what I would do differently.

Original plan
I set myself three goals when I started. The first (LOS) was to figure out how to make the bots see the player while taking lighting into account. The second (Sound) was to make them hear the player, and the third (Brain) was to find a way to make different NPC types react differently to specific stimuli. All three were achieved in the end, though not without some work.

LOS
My original method for determining whether the bots could see the player was to use an extruded box for their vision range, and to store the lighting inside a customised pathfinding system. It worked at first, but there were two problems: twenty or more MeshColliders in a scene for line-of-sight checks are expensive, and converting the player’s position into a Node consistently took about four milliseconds, which slowed the main thread down enough to be noticeable.
My fixes for these were to use capsule colliders for the humanoid NPCs, and to create a physics-based light calculation mechanism. Capsule colliders involve only two distance checks (one for the radius and one for the height), as opposed to the eight required by the extruded box (one for each vertex). The physics-based LightRadius mechanism is based mainly on trigger colliders and raycasting, which is considerably faster.
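To give a rough idea of how a trigger-plus-raycast light check can work, here’s a minimal sketch. The class and member names here (LightRadius, PlayerLightGauge, ReportLight) are illustrative assumptions, not the actual Spamocalypse code:

```csharp
using UnityEngine;

// Minimal receiver the sketch assumes lives on the player object.
public class PlayerLightGauge : MonoBehaviour
{
    public float CurrentLight { get; private set; }

    public void ReportLight(float level)
    {
        // Keep the brightest contribution; reset this each frame elsewhere.
        CurrentLight = Mathf.Max(CurrentLight, level);
    }
}

// A light's area of effect is a trigger collider; a single raycast checks occlusion.
[RequireComponent(typeof(SphereCollider))]
public class LightRadius : MonoBehaviour
{
    public float intensity = 1f;
    float radius;

    void Awake()
    {
        SphereCollider trigger = GetComponent<SphereCollider>();
        trigger.isTrigger = true;
        radius = trigger.radius;
    }

    // Note: trigger callbacks need a Rigidbody (or CharacterController) on the player.
    void OnTriggerStay(Collider other)
    {
        PlayerLightGauge gauge = other.GetComponent<PlayerLightGauge>();
        if (gauge == null)
        {
            return;
        }

        Vector3 toPlayer = other.transform.position - transform.position;
        RaycastHit hit;
        // If the first thing the ray hits is the player, nothing is blocking the light.
        if (Physics.Raycast(transform.position, toPlayer.normalized, out hit, radius)
            && hit.collider == other)
        {
            // Simple linear falloff with distance; the real falloff could differ.
            float level = intensity * Mathf.Clamp01(1f - toPlayer.magnitude / radius);
            gauge.ReportLight(level);
        }
    }
}
```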
This did require ripping out my pathfinding code and converting the AI to use Unity’s built-in NavMesh. However, that works a lot better, so I probably should have done it from the start.
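Switching to the built-in navigation mostly means baking a NavMesh in the editor and then driving a NavMeshAgent from code, something along these lines (the PatrolToPoint name and target field are just for illustration):

```csharp
using UnityEngine;
using UnityEngine.AI;

[RequireComponent(typeof(NavMeshAgent))]
public class PatrolToPoint : MonoBehaviour
{
    public Transform target;   // Wherever the NPC should head next.
    NavMeshAgent agent;

    void Start()
    {
        agent = GetComponent<NavMeshAgent>();
    }

    void Update()
    {
        // The agent handles the pathfinding and steering on the baked NavMesh.
        if (target != null)
        {
            agent.SetDestination(target.position);
        }
    }
}
```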

Sound
The bots’ sound detection originally used an octagonal mesh to manage their hearing range. I made their chance of detecting the player inversely proportional to distance, giving the player a chance to avoid them at longer ranges. That part has not changed. I also originally made them react only to Sockpuppets, but when I started adding other alert sounds, I realised that a ScriptableObject was the ideal way to store these. From that alone, I’ve learned how to use Unity’s ScriptableObjects for holding common data.
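For anyone who hasn’t used them, a ScriptableObject is just an asset that holds shared data, so every NPC can reference one sound asset instead of each prefab carrying its own copies of the clips. A simplified sketch (the AlertSounds name, menu path and fields are for illustration, not the exact code from the game):

```csharp
using UnityEngine;

// Shared asset of alert sounds, created once in the editor and referenced by every NPC.
[CreateAssetMenu(fileName = "AlertSounds", menuName = "AI/Alert Sounds")]
public class AlertSounds : ScriptableObject
{
    public AudioClip[] suspicious;   // "Did I hear something?"
    public AudioClip[] hostile;      // "There they are!"

    public AudioClip RandomSuspicious()
    {
        return suspicious[Random.Range(0, suspicious.Length)];
    }

    public AudioClip RandomHostile()
    {
        return hostile[Random.Range(0, hostile.Length)];
    }
}
```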
The one major performance change I made was to stop using OnTriggerStay for processing sound events. OnTriggerStay runs on the physics timestep, which by default is every 50th of a second. However, I found it ran much better if I did the check in a coroutine every 5th of a second instead.
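The replacement looks roughly like this (simplified; the HearingSensor name and fields are for illustration only):

```csharp
using System.Collections;
using UnityEngine;

public class HearingSensor : MonoBehaviour
{
    public float checkInterval = 0.2f;   // Five times a second instead of fifty.
    public float hearingRange = 15f;
    public Transform player;             // Illustrative reference to the player.

    void OnEnable()
    {
        StartCoroutine(ListenLoop());
    }

    IEnumerator ListenLoop()
    {
        WaitForSeconds wait = new WaitForSeconds(checkInterval);
        while (enabled)
        {
            if (player != null)
            {
                float distance = Vector3.Distance(transform.position, player.position);
                if (distance < hearingRange)
                {
                    // Detection chance drops off with distance, as described above.
                    float chance = 1f - distance / hearingRange;
                    Debug.Log("Chance of hearing the player: " + chance);
                }
            }
            yield return wait;
        }
    }
}
```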

Brain
Originally, I used a C# delegate to distinguish between the different AI responses. Delegates in C# are a way to call a different implementation of a method at run time; in this case, each NPC type had a different search and attack method. However, this became a bit too convoluted when I started adding different effects: the bots were supposed to play a smoke effect when moving, the spammers were supposed to vomit when attacking, and so on.
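In outline, the delegate approach looked something like this (heavily simplified, with made-up method names):

```csharp
using UnityEngine;

public class DelegateBrain : MonoBehaviour
{
    // Delegates let the same field point at different methods at run time.
    public delegate void AIAction();

    AIAction search;
    AIAction attack;

    void Start()
    {
        // A moderator-type NPC would wire these up differently to a bot.
        search = SearchMethodically;
        attack = AttackWithBan;
    }

    void Update()
    {
        // The state machine decides which to call; here we just search.
        search();
    }

    void SearchMethodically() { Debug.Log("Sweeping the area in a grid."); }
    void AttackWithBan()      { Debug.Log("Issuing a ban."); }
}
```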
Currently, moderators and bots are subclasses of the main SpammerFSM class. The only difference is that their search and attack methods override those in SpammerFSM, allowing me to subtly change how they react to a sockpuppet, or when they start attacking. However, it’s still a bit clunky. I think that for my next project, I’ll use interfaces instead, which should allow me to customise the NPC types more effectively.
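The subclassing boils down to virtual methods and overrides, something like this (again simplified, and the ModeratorFSM name and method bodies are just for illustration):

```csharp
using UnityEngine;

public class SpammerFSM : MonoBehaviour
{
    // The FSM's update loop decides when to call Search() or Attack().
    protected virtual void Search()
    {
        Debug.Log("Default spammer search.");
    }

    protected virtual void Attack()
    {
        Debug.Log("Default spammer attack.");
    }
}

public class ModeratorFSM : SpammerFSM
{
    protected override void Attack()
    {
        // Moderators attack differently, but reuse everything else from the base class.
        Debug.Log("Moderator attack.");
    }
}
```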

What worked
So, apart from the three goals I set myself, what worked? Well, I found a more efficient way to calculate light intensity, I figured out a simple way of handling player objectives, and I came up with some basic mechanisms for making it clear that the player can interact with something. All of these are going to be useful in future projects.

What didn’t
The biggest problem is that I had too much scope creep. I kept thinking of new ideas to add in, each of which brought its own bugs. The project just became too damn complicated for me to test by myself. Which leads me to the next problem: I struggle to get people to try my stuff. I hate nagging people, so I tend to just announce projects and see if anyone plays them. I still don’t know if the Mac build works! (That said, the analytics on GameJolt tell me that I’ve had 3 downloads out of 23, with no complaints…)

What can I do about this
The first thing I could try is to scope things better, i.e. decide up front what I’m actually going to do. That’s something that will probably come with practice. I deliberately refused to give myself any deadlines, mainly because my day job has some pretty unrealistically tight ones, but I may have to reconsider that.
Another, more concrete thing would be to try decoupling the systems. For example, the SpammerFSM class and its subclasses rely on the LevelManager class – but that has to include the objectives for a level, which makes testing the AI in isolation tricky. A way around this would be to do what I did for the light calculation: create an interface that defines what methods the class will have. The key thing is that any class that implements it will provide those methods, but the exact details can vary.
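As a sketch of what I mean (the interface name and members are invented for the example, not taken from the project):

```csharp
// The AI depends on this contract instead of the concrete LevelManager,
// so a stub can stand in when testing the AI on its own.
public interface IObjectiveTracker
{
    bool AllObjectivesComplete { get; }
    void CompleteObjective(string objectiveId);
}

// A trivial stand-in for use in isolated AI tests.
public class DummyObjectiveTracker : IObjectiveTracker
{
    public bool AllObjectivesComplete { get; private set; }

    public void CompleteObjective(string objectiveId)
    {
        AllObjectivesComplete = true;
    }
}
```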
I’m also thinking of setting up my own Git server for source control. Source control is basically a way of keeping track of who changed which line of code, and when. In particular, that would have been very handy for figuring out when and how I made the NPCs deaf for over a month! I do have some stuff on GitHub, but I’d prefer to keep my full projects private for now.

So, there are some things I can improve, but I’d say this was pretty successful overall. Not least because I actually finished it!

