A Foundation for Adaptive Agent-Based “On the Fly” Learning of TTPs
In this article, we report the methods and flexible frameworks employed to develop, integrate, and test adaptive Agent-Based Tactics, Techniques, and Procedures (AB-TTPs) in a complex training research environment. A Modeling and Simulation (M&S) environment developed for the Air Force Research Lab 711th Human Performance Wing (711 HPW) served as the foundation for the Not-So-Grand-Challenge (NSGC), the use case as applied. To do so, we capitalized on the properties of complex adaptive systems and situations, allowing for context-based modeling and, ultimately, an agent’s ability to independently assess, test, and learn new tactics. These capabilities were accomplished through agent and system use of modularization, decomposition, and/or the combinatorial capabilities of the agents’, the system’s, and the situation’s functional properties, i.e., affordances. The development and use of a Knowledge-to-Model (k2Mod) Environment Abstraction (EA) architecture gave agents the capacity to gain situation awareness, recognize change in their environment, and react and respond appropriately. In fact, the Adaptive Agent Intelligence (AI) models used were even able to accurately predict their own performance and tune their own parameters. This method also increases the speed with which new agent definitions, situation parameters, agent intelligence, and AB-TTPs can be developed and updated by “AI learning on the fly”; pun intended. In addition, formalizing such a protocol affords the M&S community a process that promotes portability, usability, reusability, and composability for rapid agent-based model development and agent-intelligence research in complex environments.
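To make the affordance-based idea concrete, the following is a minimal, hypothetical sketch (the class and attribute names are illustrative, not the paper's actual k2Mod API): an agent perceives an abstracted set of environment features, filters its known affordances down to those the current situation actually offers, and incrementally tunes its preference among tactics from observed outcomes, i.e., "learning on the fly".

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Affordance:
    """A functional property the situation offers an agent (e.g. 'take_cover')."""
    name: str
    preconditions: frozenset  # environment features that must be present

@dataclass
class EnvironmentAbstraction:
    """Knowledge-to-Model style abstraction: raw state reduced to features."""
    features: set

    def available(self, affordances):
        # An affordance is usable only when all of its preconditions hold.
        return [a for a in affordances if a.preconditions <= self.features]

@dataclass
class Agent:
    known_affordances: list
    tactic_scores: dict = field(default_factory=dict)  # learned on the fly

    def act(self, env):
        usable = env.available(self.known_affordances)
        if not usable:
            return None
        # Prefer the affordance whose past use scored best.
        return max(usable, key=lambda a: self.tactic_scores.get(a.name, 0.0))

    def learn(self, affordance, reward):
        # Simple incremental update: the agent tunes its own parameters.
        old = self.tactic_scores.get(affordance.name, 0.0)
        self.tactic_scores[affordance.name] = old + 0.5 * (reward - old)

# Usage: as the abstracted environment changes, different affordances
# become available, and the agent's choices adapt accordingly.
env = EnvironmentAbstraction(features={"terrain:ridge", "threat:visible"})
cover = Affordance("take_cover", frozenset({"terrain:ridge"}))
engage = Affordance("engage", frozenset({"threat:visible", "weapon:ready"}))
agent = Agent(known_affordances=[cover, engage])

choice = agent.act(env)          # only 'take_cover' is afforded here
agent.learn(choice, reward=1.0)  # reinforce the tactic that worked
```

The modular separation between the environment abstraction, the affordances, and the agent mirrors the decomposition the article describes: new tactics or situation parameters can be added by composing new `Affordance` instances rather than rewriting agent logic.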