18 months of work down the drain

This is a massive, rage-inducing WTF moment. I kept having problems with the pathfinding code marking corridors as impassable even though there was enough room to walk along them, and with checking underneath objects, particularly in buildings with more than one floor. I decided to try updating the pathfinding code in Spamocalypse to the latest version in my repository, mainly because that allows me to store the navigation data in a ScriptableObject. I thought of using separate meshes for separate floors, or possibly separate rooms, but I then ran into an issue where the dictionaries weren’t being set up on load, leaving the NPCs unable to find a path and trying to walk through obstacles instead.

I’ve been using this code for around 18 months now. I started using it because I began the project on Unity 4.6, when pathfinding was only available to Pro users, and for my thesis project I had built/adapted a pathfinding system to track where units died in a team deathmatch game. It works by casting rays downwards at regular intervals to check if an NPC can walk at each position, and storing the result. The problem is that sampling at regular intervals doesn’t really work with arbitrary shapes like those in a real-world city, especially when there are floors below the surface being sampled. So, a few weeks ago I started considering just using the built-in pathfinding system, but I kept putting it off because I thought refactoring would take too long…and I didn’t want my code to go to waste. The sunk cost fallacy rears its ugly head!
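The grid-sampling approach can be sketched roughly like this (a minimal sketch, not the project's actual code; the interval, cast height and grid size are illustrative values):

```csharp
using UnityEngine;
using System.Collections.Generic;

public class GridSampler : MonoBehaviour
{
    public float interval = 1f;     // spacing between sample points (illustrative)
    public float castHeight = 50f;  // height to cast down from
    public Vector2 gridSize = new Vector2(100f, 100f);

    // walkable positions keyed by grid coordinate
    public Dictionary<Vector2, Vector3> walkable = new Dictionary<Vector2, Vector3>();

    void BuildGrid()
    {
        for (float x = 0f; x < gridSize.x; x += interval)
        {
            for (float z = 0f; z < gridSize.y; z += interval)
            {
                Vector3 origin = new Vector3(x, castHeight, z);
                RaycastHit hit;
                // cast straight down; whatever the ray lands on is a candidate floor
                if (Physics.Raycast(origin, Vector3.down, out hit))
                {
                    walkable[new Vector2(x, z)] = hit.point;
                }
            }
        }
    }
}
```

Note the weakness: a single downward ray only ever records the topmost surface it hits, which is exactly why multi-floor buildings cause trouble for this scheme.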




Spamocalypse: Demo Release (Updated)

Update: I’ve added the Mac binary. The Windows version is here, and the Mac version is here.

I’ve finally got Spamocalypse to a prototype stage for the AI. There’s no artwork yet, only some basic sounds recorded using this speech synthesiser, and the player can’t attack. However, the enemies are able to move around and respond to the player’s sounds, and their line-of-sight checks depend on how bright the player’s position is. And after fixing a bug in the navigation mesh I built, their pathfinding will consider straight lines in all 8 directions around them whenever they don’t have a clear line to their destination.
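Those eight directions around a node can be represented as simple offsets on the horizontal plane (a sketch; the class and field names are mine, not from the project):

```csharp
using UnityEngine;

public static class NeighbourDirections
{
    // the eight compass directions a node's neighbours can lie in
    public static readonly Vector3[] All =
    {
        Vector3.forward,                               // N
        (Vector3.forward + Vector3.right).normalized,  // NE
        Vector3.right,                                 // E
        (Vector3.back + Vector3.right).normalized,     // SE
        Vector3.back,                                  // S
        (Vector3.back + Vector3.left).normalized,      // SW
        Vector3.left,                                  // W
        (Vector3.forward + Vector3.left).normalized    // NW
    };
}
```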

There are still a few problems. Firstly, the AI don’t attack the player at point-blank range – in fact, they can’t even see the player at that range. I think it’s a problem with the models for their line-of-sight area, or perhaps their positioning. I also had to limit how often they speak, as they ended up talking over each other in a horrible gibbering mess. Finally, when they follow a path back to a patrol point, it sometimes glitches out once they reach it, and they just stand there doing nothing. Well, they are functionally zombies, which aren’t known for their intelligence…but enough excuses. It’s glitchy, and I need to fix it.

Here’s an image of the prototype so far. In the final product, the colliders for the detection systems won’t be visible, and neither will the details about the spammers, but for now they should give an overview of what’s happening.
Temp Player Interface

I’ve uploaded the binary folder to Dropbox. The initial build was for Windows only, and it’s a standalone build – this is due to my own navigation system relying on serialisation, and I only have my laptop to develop and test on. I’ve since added a Mac version, so if you give either one a try, let me know what you think, and if there are any errors in the Mac build, let me know that too.

Spamocalypse AI Update

So, I’ve been working on Spamocalypse on and off over the last while, and I’ve got a basic framework for the AI in place. At the moment, only the basic dumb bots are able to attack, and that’s on hiatus until the AI can move close enough to a suspected player. That’s proving harder than expected, but I think I’ve got it now.

One of my problems was that the alert time wasn’t incrementing properly. It does work if I set it to increase constantly while they are searching, but that results in them getting bored before they reach the endpoint. What I’d like is for their alert time to increase only when they have nothing else to do: if they still have a path to a possible player position, the time spent travelling along that path shouldn’t count. And if there’s a direct line to their new position, why bother searching for a path in the first place?
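The intended behaviour amounts to: only tick the alert timer while the agent has nothing left to do. A minimal sketch, assuming hypothetical field names of my own (the threshold value is illustrative):

```csharp
using UnityEngine;

public class AlertTimer : MonoBehaviour
{
    public float alertTime;            // how long the agent has been fruitlessly alert
    public float boredThreshold = 10f; // give up searching after this long (illustrative)

    bool hasPath;        // still travelling towards a suspected player position?
    bool hasDirectLine;  // can walk straight there without pathfinding?

    void Update()
    {
        // time spent travelling along a path (or a direct line)
        // should not count towards getting bored
        if (hasPath || hasDirectLine)
        {
            return;
        }

        alertTime += Time.deltaTime;
        if (alertTime >= boredThreshold)
        {
            // bored: give up and go back to patrolling
        }
    }
}
```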

Running the pathfinding code for five or six bots on startup caused an unacceptable drop in framerate, so I tried to rework the pathfinding code. Instead of running a while loop until they find their destination, they would do a fixed-depth search of 50 nodes. I tried this during my Master’s project, and it didn’t work out as planned, but I suspect that was due to the level layout and resetting the open list too frequently. In real life, most people would not map out and remember every step along a path, so limiting the search depth makes some sense. However, it hasn’t worked here either, so I’ve gone back to using the while loop – as that will at least work.
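A fixed-depth search just adds a node budget to the usual loop: if the budget runs out before the goal is found, the search bails out. A rough sketch, not the project's actual code – the Node type and neighbour lists here are assumptions of mine:

```csharp
using System.Collections.Generic;

public class Node
{
    public List<Node> neighbours = new List<Node>();
}

public static class LimitedSearch
{
    // breadth-first search capped at maxNodes expansions;
    // returns true only if the goal was reached within the budget
    public static bool TryFind(Node start, Node goal, int maxNodes,
                               out Dictionary<Node, Node> cameFrom)
    {
        cameFrom = new Dictionary<Node, Node>();
        var open = new Queue<Node>();
        open.Enqueue(start);
        cameFrom[start] = null;

        int expanded = 0;
        while (open.Count > 0 && expanded < maxNodes) // the fixed depth, e.g. 50
        {
            Node current = open.Dequeue();
            expanded++;
            if (current == goal)
                return true;

            foreach (Node n in current.neighbours)
            {
                if (!cameFrom.ContainsKey(n))
                {
                    cameFrom[n] = current; // remember how we got here
                    open.Enqueue(n);
                }
            }
        }
        return false; // budget exhausted before reaching the goal
    }
}
```

The failure mode described above follows directly: if the goal is more than the budget's worth of nodes away, the search returns false even though a path exists.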

I’ve also adjusted the method that sets the units’ destinations: it now performs a raycast to check whether anything blocks the direct line to their destination. If the raycast returns false – meaning there are no obstacles between them and the destination, and it’s within 20 metres – they just move in a direct line. If that fails, whether due to an obstacle being in the way or the destination being too far away, they search for a path. If they get within their optimal attack range of the decoy/Sockpuppet, they stop, and then the alert time increments. I’ve done some quick tests, and it seems to work well enough for me to start planning other mechanisms.
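The decision order described above – stop if in attack range, walk straight if nothing blocks the line, otherwise fall back to the pathfinder – might look like this. The 20-metre figure is from the post; the method name, attack range and everything else are illustrative assumptions:

```csharp
using UnityEngine;

public class DestinationSetter : MonoBehaviour
{
    public float directLineRange = 20f; // max distance for skipping pathfinding
    public float attackRange = 2f;      // optimal attack range (illustrative)

    void SetDestination(Vector3 target)
    {
        Vector3 toTarget = target - transform.position;

        if (toTarget.magnitude <= attackRange)
        {
            // close enough: stop, and let the alert time increment
            return;
        }

        // Physics.Raycast returns false when nothing lies between us and the target
        bool blocked = Physics.Raycast(transform.position, toTarget.normalized,
                                       toTarget.magnitude);

        if (!blocked && toTarget.magnitude <= directLineRange)
        {
            // no obstacle and close enough: just walk in a direct line
        }
        else
        {
            // obstacle in the way, or too far away: search for a path instead
        }
    }
}
```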

Spamocalypse: First version of the AI detection mechanisms

I’ve been working on the AI for Spamocalypse in my free time, and I have the basic detection mechanisms done. Here’s how they work.

    Line of Sight

Line of sight detection in Unity is something I’ve long since got the hang of. My usual method is to create a 3D shape, import it into Unity, and then attach a Mesh Collider with the “isTrigger” attribute ticked in the Inspector panel. When an object such as the player enters that trigger volume, a ray is cast towards the object. If the ray hits something other than the object, the AI unit doesn’t have a line of sight to it; if the ray does reach the object, the AI unit starts a counter to decide whether the object really is the player. If that counter reaches 50, it informs the brain, which causes the AI to investigate; if the counter reaches 100, then it has definitely spotted the player.
There is one problem with the above setup: the line runs from the AI to the centre of the player. If the player’s centre is just hidden behind a wall, but they are still visible peering around the corner, the AI won’t register them as visible yet. I’ll need to figure this out, but the core concept works.
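The trigger-and-raycast flow can be sketched as follows. The 50 and 100 thresholds come from the post; the class layout, increment and everything else are illustrative:

```csharp
using UnityEngine;

public class LineOfSight : MonoBehaviour
{
    public float detection;   // counts up while the player is visible
    public Transform player;

    void OnTriggerStay(Collider other)
    {
        if (other.transform != player)
            return;

        Vector3 toPlayer = player.position - transform.position;
        RaycastHit hit;

        // if the first thing the ray hits is the player, nothing is in the way
        if (Physics.Raycast(transform.position, toPlayer.normalized, out hit)
            && hit.transform == player)
        {
            detection += 1f; // in the real thing this would scale with light level

            if (detection >= 100f)
            {
                // definitely spotted: attack
            }
            else if (detection >= 50f)
            {
                // suspicious: tell the brain to investigate
            }
        }
    }
}
```

Note that the ray here goes to the player's pivot, which is exactly the centre-point problem described above: a player peering around a corner with their centre occluded won't register as visible.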


    Sound Detection

Sound detection was a little harder, simply because I’d never done it before. However, I’ve got it working as follows: a sphere collider is attached to a child object of the AI unit, and it too is set to act as a trigger. When the player enters the trigger volume, it begins to calculate the player’s noise using the following formula:

noise = player_Speed * player_Footsteps_Volume / distance_From_Agent

That’s calculated on the physics timestep, which by default is every 50th of a second. If the noise on the current timestep is greater than it was on the previous timestep, the detection value increases by a specific amount. If the detection reaches 50, the agent begins to investigate. If it reaches 100, then it has definitely detected the player, and will begin to attack.
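That formula, evaluated on the physics timestep, might look like this. The variable names mirror the formula above; where the speed and volume actually come from is an assumption (I've stubbed them as placeholders):

```csharp
using UnityEngine;

public class SoundDetection : MonoBehaviour
{
    public float detection;       // 50 = investigate, 100 = attack
    public float increment = 1f;  // how much detection rises per noisy step
    public Transform player;
    float previousNoise;

    // FixedUpdate runs on the physics timestep (every 0.02s by default)
    void FixedUpdate()
    {
        float playerSpeed = 0f;          // would be supplied by the player controller
        float footstepsVolume = 0f;      // would be supplied by the player controller
        float distanceFromAgent = Vector3.Distance(player.position,
                                                   transform.position);

        float noise = playerSpeed * footstepsVolume / distanceFromAgent;

        // louder than on the last physics step: the agent notices
        if (noise > previousNoise)
        {
            detection += increment;
        }
        previousNoise = noise;
    }
}
```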
The sound detection will also activate for decoys. When a decoy enters the trigger volume, the agent checks if it’s playing; when it starts playing, the sound detection script informs the brain. If the agent hasn’t had too many false alerts, it will investigate the decoy, and during that initial investigation it will ignore the player.

Below is a screencap of the two in the game world. The semi-transparent blue sphere is the collider for sound detection, while the collider for line of sight is a flat green rectangle with one end wider than the other. I’ve tested these, and they both work, albeit rather slowly due to the low detection increments. Once I have player movement finished, I’ll create a demo version for people to try out.

Detection Triggers

Spamocalypse: How the AI will work

I ran into a design problem while trying to decide how to make the AI for the different enemy types. So, I posted a thread on the Unity forums asking which of the following would be the best design:

  • Reuse the same finite-state machine (FSM) for ALL the units, but adjust variables such as health & detection speeds;
  • Create a basic FSM, and then give each unit a specialised FSM that inherits or extends it;
  • Create entirely separate FSMs from scratch for each type.

Based on the responses there, I’ve come up with an idea of how to do it. For the benefit of anyone who doesn’t understand any of this, I’m going to explain it along the way.

I’ve decided to use a single finite-state machine for all the enemy types. An FSM is a program that can be in one of a finite number of states at any time, such as (to use my code as an example) attacking, idle, patrolling and searching. They’re a pretty basic form of artificial intelligence for a video game, but they can be easily extended to include new states. Using one single script also means I will probably have less debugging to do.
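A single FSM shared by all enemy types can be as simple as an enum and a switch, with per-type differences reduced to tuning variables. A sketch – the states follow the post, everything else is illustrative:

```csharp
using UnityEngine;

public class EnemyFSM : MonoBehaviour
{
    public enum State { Idle, Patrolling, Searching, Attacking }
    public State currentState = State.Idle;

    // per-type tuning lives in variables, not in separate scripts
    public float health = 100f;
    public float detectionSpeed = 1f;

    void Update()
    {
        switch (currentState)
        {
            case State.Idle:
                // stand around until something happens
                break;
            case State.Patrolling:
                // walk between patrol points
                break;
            case State.Searching:
                // head towards the last suspected player position
                break;
            case State.Attacking:
                // close in and attack
                break;
        }
    }
}
```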

Now, this does raise the issue of how to give the units specific actions, such as their responses to the sockpuppets/decoys. For instance, I have decided that the Admin-type enemies can trace a decoy’s launch position: they will do so after a brief pause to check whether it’s a false alarm, or immediately if they’ve had too many false alarms. Meanwhile, the n00b-type enemies just go over to it and stare gormlessly at it for a few seconds.

My solution to this, after somebody suggested using different methods/functions for each unit, is to use delegates. In C# (the language I use in Unity), a delegate is a type that holds a reference to a method with a specific parameter list and return type – similar in spirit to function pointers in C++. What that basically means is that class A tells class B “Right, here’s variable_name, deal with it”, and then ignores how class B actually deals with it. Borrowing two C# examples from the Unity tutorials here, with my comments added:

// define a delegate type MyDelegate, with return type void
// and taking a single integer as its parameter
delegate void MyDelegate(int num);
MyDelegate myDelegate; // declare an instance of it

void Start ()
{
    // this will print "Print Num: 50"
    myDelegate = PrintNum;
    myDelegate(50);

    // this will print "Double Num: 100"
    myDelegate = DoubleNum;
    myDelegate(50);
}

void PrintNum(int num)
{
    print ("Print Num: " + num);
}

void DoubleNum(int num)
{
    print ("Double Num: " + num * 2);
}

When this programme runs, myDelegate first points at PrintNum, so the Editor console prints “Print Num: 50” when myDelegate(50) is executed. Right after that, myDelegate points at DoubleNum, and the Editor console prints “Double Num: 100”. In each case, the Start method is saying “Just do something with the number 50, I don’t care what, and I don’t want anything back”, and handing 50 to the two methods. The key part is that both these methods return void, meaning they do not return any value, and take an integer as a parameter, and therefore match the delegate’s signature. If DoubleNum had been written as follows:

int DoubleNum(int num)
{
    return (num * 2);
}

Then myDelegate could not point to DoubleNum, because the return types do not match.

What’s the point of this? Quite simply, I have separate search methods which define how each enemy type responds to the decoys. So, if I associate each enemy type with a variable unique to that type, I can assign which method they use once at start-up, and the types need not worry about how the others act. This means that if an enemy isn’t acting properly in a particular state, I have just a single method to debug rather than a separate script, and I can reuse as much code as possible. Why reinvent the wheel when I have one at home?
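Assigning the per-type search behaviour once at start-up might look like this. The enemy type names follow the post; the delegate and method names are mine:

```csharp
using UnityEngine;

public class EnemyBrain : MonoBehaviour
{
    public enum EnemyType { Noob, Admin }
    public EnemyType enemyType;

    delegate void SearchMethod(Vector3 decoyPosition);
    SearchMethod search;

    void Start()
    {
        // pick the search behaviour once; the FSM never needs
        // to know which type it is running
        switch (enemyType)
        {
            case EnemyType.Noob:
                search = NoobSearch;
                break;
            case EnemyType.Admin:
                search = AdminSearch;
                break;
        }
    }

    void NoobSearch(Vector3 decoyPosition)
    {
        // wander over and stare gormlessly at the decoy for a few seconds
    }

    void AdminSearch(Vector3 decoyPosition)
    {
        // pause briefly to check for a false alarm,
        // then trace the decoy's launch position
    }
}
```

Both methods match the delegate's signature (void return, one Vector3 parameter), so either can be assigned to the same `search` field and called identically by the shared FSM.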