AI Gameplay Features
System featuring four behavior trees for hundreds of concurrent agents, an optimized A* pathfinding algorithm (top 10 in class) with rubberbanding and Catmull-Rom spline smoothing, plus terrain analysis.
Role(s): Programmer
Team Size: 1
Time Frame: 3 months
Tools: C++
Project Summary
For my AI for Games class, I implemented a suite of AI gameplay features across three consecutive projects using a provided starter engine—all in C++. The projects explored different facets of game AI including behavior trees, pathfinding, and terrain analysis. By navigating and extending the starter code, I explored a variety of AI techniques applicable to games.
Gameplay
The gameplay experiences vary by project:
- Project 1 – Behavior Trees (Shinobi Showdown): A Naruto-inspired simulation in which two ninja armies (Hidden Leaf vs. Hidden Sand) clash in a dynamic showdown. Each group and its leader runs a unique behavior tree governing actions such as melee attacks and Naruto’s special energy-ball attack, producing engaging, nondeterministic outcomes.
- Project 2 – Pathfinding: An agent is controlled via point-and-click on a grid. This simulation measures pathfinding performance from start to goal and incorporates interactive UI elements that allow you to toggle between different heuristics and postprocessing options such as rubberbanding and Catmull–Rom spline smoothing.
- Project 3 – Terrain Analysis: Similar to Project 2, an agent is controlled via point-and-click on a grid. Here, the environment is analyzed in layers—evaluating openness, visibility, and dynamic influence propagation using exponential decay combined with linear interpolation. UI controls let you toggle these layers, and a slider adjusts how influence spreads across the grid. Additionally, an enemy agent uses terrain analysis to “seek” the player once the player is no longer visible, determining the most likely location from the propagated influence.
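The Catmull–Rom smoothing mentioned for Project 2 follows a standard technique. As a rough sketch of that step, here is the uniform Catmull–Rom basis applied between consecutive A* waypoints; the names (`Vec2`, `smoothPath`, `samplesPerSegment`) are illustrative, not the starter engine's actual API:

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

struct Vec2 { float x, y; };

// Uniform Catmull-Rom interpolation between p1 and p2, with p0/p3 as
// the neighboring control points; t in [0, 1].
Vec2 catmullRom(const Vec2& p0, const Vec2& p1,
                const Vec2& p2, const Vec2& p3, float t)
{
    float t2 = t * t, t3 = t2 * t;
    auto blend = [&](float a, float b, float c, float d) {
        return 0.5f * ((2.0f * b) + (-a + c) * t
                     + (2.0f * a - 5.0f * b + 4.0f * c - d) * t2
                     + (-a + 3.0f * b - 3.0f * c + d) * t3);
    };
    return { blend(p0.x, p1.x, p2.x, p3.x),
             blend(p0.y, p1.y, p2.y, p3.y) };
}

// Smooth a (possibly rubberbanded) path by inserting interpolated points
// between each waypoint pair. Endpoints are clamped so the curve still
// passes through the first and last waypoints.
std::vector<Vec2> smoothPath(const std::vector<Vec2>& path, int samplesPerSegment)
{
    if (path.size() < 2) return path;
    std::vector<Vec2> out;
    for (size_t i = 0; i + 1 < path.size(); ++i) {
        const Vec2& p0 = path[i == 0 ? 0 : i - 1];
        const Vec2& p1 = path[i];
        const Vec2& p2 = path[i + 1];
        const Vec2& p3 = path[std::min(i + 2, path.size() - 1)];
        for (int s = 0; s < samplesPerSegment; ++s) {
            float t = static_cast<float>(s) / samplesPerSegment;
            out.push_back(catmullRom(p0, p1, p2, p3, t));
        }
    }
    out.push_back(path.back());
    return out;
}
```

Because the spline interpolates (rather than approximates) its control points, the smoothed path still visits every surviving waypoint, which is why rubberbanding first and smoothing second works well.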
Technical Highlights
- Behavior Trees: Developed multiple behavior trees using control nodes (selector, sequence, parallel), decorator nodes, and leaf nodes to orchestrate complex NPC actions in Project 1.
- Optimized Pathfinding: In Project 2, I implemented an A* algorithm with enhancements such as diagonal checks, multiple heuristic options (e.g., octile, Euclidean), memory pre-allocation, and bucket-based open lists to ensure real-time performance.
- Terrain Analysis & Influence Propagation: For Project 3, I implemented a multi-layered analysis of the grid. The engine computes openness and visibility per tile and propagates influence using an exponential decay formula, followed by linear interpolation with adjustable coefficients. This allows the enemy AI to dynamically update its search priorities in a hide-and-seek context.
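The selector and sequence control nodes named above can be illustrated with a minimal tree skeleton. This is a generic sketch with invented names (`Node`, `Status`, `Action`), not the course engine's node hierarchy:

```cpp
#include <functional>
#include <memory>
#include <vector>

// Each tick, a node reports Success, Failure, or Running.
enum class Status { Success, Failure, Running };

struct Node {
    virtual ~Node() = default;
    virtual Status tick() = 0;
};

// Leaf node wrapping an arbitrary action callback.
struct Action : Node {
    std::function<Status()> fn;
    explicit Action(std::function<Status()> f) : fn(std::move(f)) {}
    Status tick() override { return fn(); }
};

// Selector: returns Success (or Running) at the first child that does
// not fail; fails only if every child fails.
struct Selector : Node {
    std::vector<std::unique_ptr<Node>> children;
    Status tick() override {
        for (auto& c : children) {
            Status s = c->tick();
            if (s != Status::Failure) return s;
        }
        return Status::Failure;
    }
};

// Sequence: runs children in order, aborting on the first one that
// does not succeed.
struct Sequence : Node {
    std::vector<std::unique_ptr<Node>> children;
    Status tick() override {
        for (auto& c : children) {
            Status s = c->tick();
            if (s != Status::Success) return s;
        }
        return Status::Success;
    }
};
```

A selector expresses prioritized options (attack if in range, otherwise approach), while a sequence chains steps that must all succeed; composing the two, with decorators in between, is what yields the nondeterministic showdowns in Project 1.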
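The influence-propagation step can be sketched as follows: each cell takes the maximum of its neighbors' values attenuated by exponential decay over distance, then blends that result with its old value via linear interpolation. The function below is a simplified version with assumed parameter names (`decay`, `growth`), not the engine's exact code:

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// One propagation step over a w x h influence grid (row-major floats).
// decay controls exponential falloff with distance to each neighbor;
// growth is the linear-interpolation coefficient blending old and
// propagated values.
std::vector<float> propagateInfluence(const std::vector<float>& grid,
                                      int w, int h,
                                      float decay, float growth)
{
    std::vector<float> next(grid.size(), 0.0f);
    for (int y = 0; y < h; ++y) {
        for (int x = 0; x < w; ++x) {
            // Take the strongest decayed influence among the 8 neighbors;
            // diagonal neighbors are sqrt(2) grid units away.
            float best = 0.0f;
            for (int dy = -1; dy <= 1; ++dy) {
                for (int dx = -1; dx <= 1; ++dx) {
                    if (dx == 0 && dy == 0) continue;
                    int nx = x + dx, ny = y + dy;
                    if (nx < 0 || nx >= w || ny < 0 || ny >= h) continue;
                    float dist = (dx != 0 && dy != 0) ? 1.41421356f : 1.0f;
                    best = std::max(best, grid[ny * w + nx] * std::exp(-dist * decay));
                }
            }
            // Linearly interpolate between the cell's old value and the
            // decayed neighbor maximum.
            float old = grid[y * w + x];
            next[y * w + x] = old + growth * (best - old);
        }
    }
    return next;
}
```

Run each frame (with the player's last known tile seeded to full influence), the map spreads outward and fades over time, giving the seeking enemy a gradient of "most likely" tiles to search.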
Challenges & Solutions
- Multiple-Entity Behavior Tree Interaction: Achieving the desired interactions and nondeterministic outcomes in Project 1 required iterative refinement of behavior tree logic, as emergent behavior is inherently unpredictable.
- Pathfinding Performance Optimization: Ensuring optimal A* performance demanded careful memory management and algorithm optimization (using techniques like bucket-based open lists and pre-allocation), which took significantly longer than implementing a basic A* algorithm. I profiled different code sections to identify and resolve performance bottlenecks.
- Navigating Starter Code: Learning the provided starter code involved gradually diving into its components, incrementally adding features, and repeatedly testing to understand its structure and limitations.
Achievements
- A+ Grade: Received top marks on the projects and for the class overall, demonstrating that I could successfully architect complex and diverse game AI.
- Top-10 A* Performance in Class: My core A* implementation (measured without postprocessing) ranked among the ten fastest in the class.
- Engaging, Nondeterministic Outcomes: In Project 1, the ninja showdown produced varied and unpredictable results, showcasing effective AI decision-making under uncertainty and emergent behavior.
Lessons Learned
- Emergent Behavior Is Fun: Combining many entities with diverse behaviors can create engaging, unexpected scenarios.
- Importance of Modularity for Optimization: Designing modular code and functions facilitates targeted optimizations without inadvertently affecting surrounding systems.
- Abundant Terrain Insights: Analyzing terrain in relation to agents and objects reveals a wealth of data that can be harnessed to create innovative gameplay features.
Conclusion
Overall, these projects deepened my understanding of advanced AI techniques and demonstrated how effectively they can be applied to create engaging, interactive simulations. This suite of AI gameplay features—behavior trees, optimized pathfinding, and dynamic terrain analysis—stands as a testament to how incremental AI enhancements can breathe life and intelligence into a simulation.