Anisotropic A* Pathfinding
Victor Diab and Benjamin Trudell
For our final we chose to look into Anisotropic A* Pathfinding, which is a variation of A* Pathfinding. A* Pathfinding is a technique that combines ideas from Dijkstra's Algorithm and Greedy Best First Search to find an optimal route to a goal without taking too many detours. Amit Patel describes the strengths of both Dijkstra's Algorithm and Greedy Best First Search on Red Blob Games. Dijkstra's Algorithm reliably finds the best path, but takes longer because it explores areas that are unlikely to lead to the goal. Greedy Best First Search is unlikely to find the best path, but always heads in a promising direction. Dijkstra's Algorithm tracks the cost accumulated so far as it explores outward from the origin, while Greedy Best First Search uses an estimated cost to reach the goal. A* gets the best of both worlds by combining the accumulated cost and the estimated cost to the goal to decide where to explore next.
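As a concrete sketch of that combination, the minimal A* below (not our project code) orders its frontier by f = g + h, where g is the accumulated cost from the start and h is the estimated cost to the goal. It assumes a 4-connected grid with uniform step cost and a Manhattan-distance heuristic, and names like a_star and walkable are ours.

    import heapq

    def manhattan(a, b):
        # Heuristic: estimated cost from cell a to the goal b (never overestimates on a grid).
        return abs(a[0] - b[0]) + abs(a[1] - b[1])

    def a_star(start, goal, walkable):
        # walkable(cell) -> bool; cells are (x, y) tuples, and walkable should
        # return False for out-of-bounds cells.
        open_set = [(manhattan(start, goal), 0, start)]  # entries are (f, g, cell)
        came_from = {}
        best_g = {start: 0}
        while open_set:
            f, g, cell = heapq.heappop(open_set)
            if cell == goal:
                # Walk the came_from chain backwards to rebuild the path.
                path = [cell]
                while cell in came_from:
                    cell = came_from[cell]
                    path.append(cell)
                return list(reversed(path))
            x, y = cell
            for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                if not walkable(nxt):
                    continue
                new_g = g + 1  # accumulated cost so far (the Dijkstra half)
                if new_g < best_g.get(nxt, float("inf")):
                    best_g[nxt] = new_g
                    came_from[nxt] = cell
                    # Priority f = g + h blends accumulated and estimated cost (the greedy half).
                    heapq.heappush(open_set, (new_g + manhattan(nxt, goal), new_g, nxt))
        return None  # no path exists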
In Anisotropic A* Pathfinding the cost of a move can be influenced by additional factors such as terrain properties and the direction of travel. This makes it useful in many situations, in game development and elsewhere: it can support pathfinding in non-Euclidean spaces, account for wind and other environmental conditions, and handle vertical or multilayered environments. Jonathan Flynn provides a great example of Anisotropic A* in use, applying it to generate a procedural road across a terrain. He takes the steepness of the terrain into account to influence the path the road follows, because he wanted the road to avoid becoming too steep to improve its realism, since real-life roads cannot be too steep.
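A direction-dependent edge cost in the spirit of Flynn's road example might look like the sketch below. The heightmap accessor height(x, y), the penalty constants, and the uphill/downhill split are our illustrative assumptions, not his implementation.

    def anisotropic_cost(height, frm, to, base_cost=1.0, slope_penalty=4.0):
        # Edge cost that depends on the direction of travel, not just the destination tile:
        # climbing is penalized more heavily than descending, so the same pair of tiles
        # costs a different amount in each direction.
        rise = height(*to) - height(*frm)
        run = ((to[0] - frm[0]) ** 2 + (to[1] - frm[1]) ** 2) ** 0.5
        slope = rise / run if run else 0.0
        if slope > 0:
            return base_cost + slope_penalty * slope          # uphill: expensive
        return base_cost + 0.5 * slope_penalty * abs(slope)   # downhill: cheaper, but not free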
For our implementation of Anisotropic A* we simulated an agent mining through a two-dimensional cave with different kinds of terrain while trying to find gold. If a real person were digging through a cave looking for gold, they would want to keep in mind the kind of terrain they were passing through: it would take longer to dig through stone than through dirt. This is essentially how our pathfinding works, with each material having its own cost to dig through. When the AI reaches its goal of a gold tile, it then searches for the next gold tile that is closest by weight rather than by distance. It does this by computing a path to each remaining gold tile and taking the cheapest of those paths.
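The sketch below shows roughly how per-material dig costs and the "next gold tile" selection could fit together. The material names, numeric costs, and helpers such as find_path and material_at are illustrative stand-ins rather than our exact code.

    # Illustrative dig costs per material: denser material costs more to tunnel through.
    DIG_COST = {"air": 1.0, "dirt": 2.0, "gravel": 3.0, "stone": 6.0}

    def path_cost(path, material_at):
        # Total cost of a path is the sum of the dig costs of the tiles entered.
        return sum(DIG_COST[material_at(cell)] for cell in path[1:])

    def next_gold_target(agent_pos, gold_tiles, find_path, material_at):
        # Plan a path to every remaining gold tile and keep the cheapest one,
        # so the miner picks its next goal by weight rather than by straight-line distance.
        best = None
        for gold in gold_tiles:
            path = find_path(agent_pos, gold)
            if path is None:
                continue
            cost = path_cost(path, material_at)
            if best is None or cost < best[0]:
                best = (cost, gold, path)
        return best  # (cost, gold tile, path), or None if no gold is reachable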
Our AI, in addition to using separate weights for the tiles it digs through, also weights the direction of travel, as described in Maxim Lakhachev's diagrams. For example, going up is weighted more heavily than going sideways, and going sideways is weighted more heavily than going down. Going diagonally also adds its own cost, which is higher than moving in a cardinal direction.
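A small sketch of those direction weights is shown below; the specific multipliers and the screen-coordinate convention (y grows downward, so dy = -1 is up) are our own illustrative choices. In the full cost function this direction factor would be combined with the per-tile dig cost from the previous sketch.

    import math

    # Illustrative direction weights, assuming y grows downward (screen coordinates):
    # moving up is the most expensive, sideways is neutral, dropping down is cheapest.
    DIRECTION_WEIGHT = {-1: 1.5, 0: 1.0, 1: 0.75}  # keyed by the vertical step dy

    def direction_cost(frm, to):
        dx, dy = to[0] - frm[0], to[1] - frm[1]
        cost = DIRECTION_WEIGHT[dy]
        if dx != 0 and dy != 0:
            cost *= math.sqrt(2)  # diagonal steps add their own extra cost
        return cost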