Poster Presentation for Peer Beyond 2022


Hi! Thanks for following the QR code/link to find out more about my work. The QR code appeared on the poster I presented at the University of Calgary's grad conference, Peer Beyond 2022. Thanks to some awesome technologies, I can take all the material that would be nice to include but is too challenging to fit on the poster and put it here! I also like it because it's that extra bit of provenance available to those who are curious, but not required to experience the main piece; a bit like the liner notes on an LP (yeah! vinyl!).

The research paper (publication details forthcoming) that kicked off the whole notion of the Q-Shopper grew out of work I did in Dr. Richard Zhao's class "AI in Games." It was there that I learned how to implement a reinforcement learning technique called Q-Learning, and I was then able to explore the ways this algorithm could be used as a behavioural component of a non-player character (NPC). My thesis involves creating scenarios with NPC agents that behave in non-deterministic ways (i.e. they have a pattern of behaviour, but no pre-defined path to a solution, and the pattern may or may not repeat). I also wanted to combine the Q-Learning solution from the research in Dr. Zhao's class with a datastore I have been exploring, so that an NPC can retain its memories from one gameplay instance to the next.
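For readers new to Q-Learning, here is a minimal tabular sketch of the core ideas: an update rule that nudges a state-action value toward its temporal-difference target, and an epsilon-greedy action picker. The state/action sizes and hyperparameters are illustrative assumptions, not the values from the Q-Shopper work:

```cpp
#include <algorithm>
#include <array>
#include <random>

// Minimal tabular Q-Learning sketch. Sizes are illustrative; the
// Q-Shopper's actual state and action spaces differ.
constexpr int kNumStates  = 16;   // e.g. coarse grid cells in a store
constexpr int kNumActions = 4;    // e.g. move north/south/east/west

std::array<std::array<float, kNumActions>, kNumStates> Q{}; // zero-initialized

// One learning step: after taking `action` in `state`, observing
// `reward` and landing in `nextState`, move Q toward the TD target:
// Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
void UpdateQ(int state, int action, float reward, int nextState,
             float alpha = 0.1f, float gamma = 0.9f)
{
    float bestNext = Q[nextState][0];
    for (int a = 1; a < kNumActions; ++a)
        bestNext = std::max(bestNext, Q[nextState][a]);

    Q[state][action] += alpha * (reward + gamma * bestNext - Q[state][action]);
}

// Epsilon-greedy selection: mostly exploit the table, sometimes explore.
int ChooseAction(int state, float epsilon = 0.1f)
{
    static std::mt19937 rng{std::random_device{}()};
    if (std::uniform_real_distribution<float>(0.f, 1.f)(rng) < epsilon)
        return std::uniform_int_distribution<int>(0, kNumActions - 1)(rng);

    int best = 0;
    for (int a = 1; a < kNumActions; ++a)
        if (Q[state][a] > Q[state][best]) best = a;
    return best;
}
```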

As a former professional actor, I find that much of my research in game design is influenced by my experiences in theatre. In particular, I am fascinated by the inner lives of NPCs as games become more and more sophisticated spaces. I also believe that a key part of developing the much-hyped metaverse will be the population of NPCs that do all the things in those worlds that players don't want to do: the utilitarian roles that remain in the background of a player's story, but without which the world would not look right or work properly. Modern computer games are ever-increasing in visual fidelity, and the behaviour of the beings that populate them must keep pace.

A major influence on my work is the book "A Thousand Brains" by Jeff Hawkins (the same Jeff Hawkins behind the Palm Pilot). The Artificial Intelligence (AI) research being performed at Hawkins's organization, Numenta, pointed toward a possible way of integrating the thinking processes NPCs use in real time with thinking processes they could use in a cloud-based environment. These two "brains" would be responsible for different kinds of thinking: the cloud-based graph database would function more like a neocortex (the brain's recommendation engine), while the real-time algorithms would perform the role of the older brain structures that execute actions and reactions.
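To make the split concrete, here is a minimal sketch, under my own naming assumptions, of how the two "brains" might be wired together: a fast local policy that always answers this frame, and a slow cloud query whose results only bias future decisions:

```cpp
#include <functional>
#include <iostream>
#include <string>

// Hypothetical two-brain split; names and logic are illustrative only.
struct Suggestion { std::string action; float weight; };

class OldBrain {
public:
    // Always answers immediately, e.g. from a learned Q-table;
    // `bias` is the latest recommendation from the cloud, if any.
    std::string React(const std::string& situation, const Suggestion* bias) {
        if (bias && bias->weight > 0.5f) return bias->action; // follow strong hints
        return "wander";                                      // fallback reflex
    }
};

class CloudNeocortex {
public:
    // Stand-in for an asynchronous graph-database query; a real version
    // would issue an HTTP request and invoke the callback on completion.
    void Recommend(const std::string& situation,
                   std::function<void(Suggestion)> onResult) {
        onResult({"head_to_bakery", 0.9f});
    }
};

int main() {
    OldBrain reflexes;
    CloudNeocortex memory;
    Suggestion latest{"", 0.0f};
    memory.Recommend("entered_store", [&](Suggestion s) { latest = s; });
    std::cout << reflexes.React("entered_store", &latest) << "\n"; // head_to_bakery
}
```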

The poster above represents the initial steps toward this vision of a whole-brained NPC. For each NPC spawned in our simulation, we implement a basic in-game AI that combines behaviour trees, navmesh navigation, and custom Q-Learning leaf nodes on the behaviour trees. We then connect that system to a server-based instance of Neo4j's graph database, which stores the NPC's experiences as they work through their tasks. We developed a custom plug-in for Unreal Engine that allows us to send and receive JSON objects that maintain a graph-based arrangement. In this iteration of the work, an NPC is spawned in the game and immediately queries the database for their character details; these details then determine how the character behaves in the environment. As the character moves through the environment, their actions and memories are recorded and saved to the database. In future iterations we will implement a query solution that allows the character to ask the database about previous experiences.
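To make that round trip concrete, here is a minimal sketch of posting one NPC memory to Neo4j's transactional HTTP endpoint from Unreal Engine C++. The helper name, node labels, endpoint URL, and graph schema are my own illustrative assumptions, not the actual plug-in's API:

```cpp
// Requires the "Http" module in the project's Build.cs dependencies.
#include "HttpModule.h"
#include "Interfaces/IHttpRequest.h"

// Hypothetical helper: record one NPC action as a :Memory node linked
// to its :NPC node. Labels and properties are illustrative only.
void SendMemoryToGraph(const FString& NpcId, const FString& Action)
{
    const FString Cypher =
        TEXT("MATCH (n:NPC {id: $id}) ")
        TEXT("CREATE (n)-[:REMEMBERS]->(:Memory {action: $action, at: timestamp()})");

    // Neo4j's transactional HTTP API accepts Cypher statements as JSON.
    const FString Body = FString::Printf(
        TEXT("{\"statements\":[{\"statement\":\"%s\",\"parameters\":{\"id\":\"%s\",\"action\":\"%s\"}}]}"),
        *Cypher, *NpcId, *Action);

    auto Request = FHttpModule::Get().CreateRequest();
    Request->SetURL(TEXT("http://localhost:7474/db/neo4j/tx/commit")); // assumed host/db
    Request->SetVerb(TEXT("POST"));
    Request->SetHeader(TEXT("Content-Type"), TEXT("application/json"));
    Request->SetContentAsString(Body);
    Request->ProcessRequest(); // real code would bind a completion callback
}
```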

In the context of a grocery shopping experience, there are many decisions, actions, and memories involved in accomplishing the tasks at hand. In Hawkins's work, he discusses the use of reference frames, similar to how we think about maps and movement, to organize how we think about things. For example, when a shopper arrives at a grocery store, they may have the first item on their shopping list in mind, or they may reorganize their items according to the part of the store they see first when they enter. Reconciling these two reference frames, the list and the zones of the store, is a thinking problem the shopper needs to work on. The shopper then moves to a section (let's pretend they want to go to the bakery first) and begins looking for the group of items they want (let's pretend they are shopping for bread). This is a new reference frame and a new problem to think about. Then they find the group of breads and might want to compare two different types of bread; new reference frame. Brian Upton's "The Aesthetic of Play" discusses a similar concept of frames, and a core aspect of his situational game design thesis is the notion of movement from frame to frame. There are striking similarities between the two concepts despite their very different foundations of thought.
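Here is a minimal data-structure sketch of that frame-within-frame idea; the struct and field names are my own illustrative assumptions rather than anything from Hawkins or the Q-Shopper implementation:

```cpp
#include <iostream>
#include <stack>
#include <string>

// Illustrative only: each reference frame is a local "map" with its own
// goal; the shopper pushes a new frame for each nested thinking problem
// and pops back out when that problem is solved.
struct ReferenceFrame {
    std::string space;  // what the frame maps, e.g. "store zones"
    std::string goal;   // the problem being worked in that frame
};

int main() {
    std::stack<ReferenceFrame> thinking;
    thinking.push({"shopping list", "order items by store zone"});
    thinking.push({"store zones",   "navigate to the bakery"});
    thinking.push({"bakery shelf",  "compare two types of bread"});

    while (!thinking.empty()) {            // solve from innermost outward
        ReferenceFrame f = thinking.top(); thinking.pop();
        std::cout << "[" << f.space << "] " << f.goal << "\n";
    }
}
```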

Using a variety of modular thinking tasks, it is possible to create discrete behaviours in NPCs that operate on the immediate task at hand. Then, by allowing an ever-changing set of memories to determine how each task is executed, we can add nuances that elevate the NPC's performance above that of the typical robot. We can begin to play with questions like: how did the NPC feel this morning when they woke up? Do they enjoy shopping? How much time do they have to buy groceries? All of these and more affect how the NPC goes about accomplishing the tasks they must complete. These tasks may seem random, but each would connect to some motivation or rationale that fits the inner narrative of the character. Giving NPCs long-term memories gives game designers the opportunity to create more nuanced character performances that, in turn, enhance the breadth and depth of the gameplay experiences they can offer their audiences.
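As one last hedged sketch, memories retrieved at spawn time could modulate how the same task is performed. The trait names and formulas below are invented for illustration:

```cpp
#include <algorithm>
#include <iostream>

// Illustrative: traits recalled from the datastore when the NPC spawns.
struct NpcMemoryProfile {
    float mood;        // 0 = grumpy morning, 1 = great morning
    float timeBudget;  // minutes available for shopping
};

// Derived parameters for the same shopping task; different memories
// yield different performances of identical behaviour-tree nodes.
struct ShoppingStyle {
    float walkSpeed;      // hurried shoppers move faster
    float browseChance;   // happy, unhurried shoppers browse more
};

ShoppingStyle StyleFrom(const NpcMemoryProfile& m) {
    float hurry = std::clamp(1.0f - m.timeBudget / 60.0f, 0.0f, 1.0f);
    return {
        1.0f + hurry,                    // walkSpeed: up to double pace
        std::max(0.0f, m.mood - hurry),  // browseChance: squeezed out by hurry
    };
}

int main() {
    ShoppingStyle relaxed = StyleFrom({0.9f, 55.0f});
    ShoppingStyle rushed  = StyleFrom({0.9f, 10.0f});
    std::cout << relaxed.browseChance << " vs " << rushed.browseChance << "\n";
}
```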

REFERENCES (selections from the Q-Shopper research paper)

Gang Chen, Weihua Cao, Xin Chen, and Min Wu. 2011. Multi-agent Q-learning with joint state value approximation. In Proceedings of the 30th Chinese Control Conference. 4878–4882.

M. Cutumisu, D. Szafron, J. Schaeffer, M. McNaughton, T. Roy, C. Onuczko, and M. Carbonaro. 2006. Generating Ambient Behaviors in Computer Role-Playing Games. IEEE Intelligent Systems 21, 5 (Sept. 2006), 19–27. https://doi.org/10.1109/MIS.2006.92

Zhixin Fang, Libai Cai, and Gang Wang. 2021. MetaHuman Creator: The starting point of the metaverse. In 2021 International Symposium on Computer Technology and Information Science (ISCTIS). 154–157. https://doi.org/10.1109/ISCTIS51085.2021.00040

Ben Geisler. [n. d.]. Integrated Machine Learning For Behavior Modeling in Video Games. ([n. d.]), 9.

Pedro Antonio González-Calero and Marco Antonio Gómez-Martín (Eds.). 2011. Artificial Intelligence for Computer Games. Springer New York, New York, NY. https://doi.org/10.1007/978-1-4419-8188-2

Pablo Gutiérrez-Sánchez, Marco A Gómez-Martín, Pedro A González-Calero, and Pedro P Gómez-Martín. [n. d.]. Reinforcement Learning Methods to Evaluate the Impact of AI Changes in Game Design. ([n. d.]), 8.

Mikhail Jacob, Sam Devlin, and Katja Hofmann. 2020. “It’s Unwieldy and It Takes a Lot of Time” — Challenges and Opportunities for Creating Agents in Commercial Games. Proceedings of the AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment 16, 1 (Oct. 2020), 88–94. https://ojs.aaai.org/index.php/AIIDE/article/view/7415

Graham Kendall and Simon Lucas (Eds.). 2005. Is Progress Possible?

Aaron Krumins. [n. d.]. Artificial Intelligence: Q-Learning in Blueprints - UE Marketplace. https://www.unrealengine.com/marketplace/en-US/product/artificial-intelligence-q-learning

Aaron Krumins. 2017. Reinforcement Learning: Q-Algorithm in a Match to Sample Task. https://unrealai.wordpress.com/2017/12/19/q-learning/

Sayak Paul. 2019. An introduction to Q-Learning: Reinforcement Learning. https://blog.floydhub.com/an-introduction-to-q-learning-reinforcement-learning/

Maciej Swiechowski and Dominik Slezak. 2018. Grail: A Framework for Adaptive and Believable AI in Video Games. In 2018 IEEE/WIC/ACM International Conference on Web Intelligence (WI). 762–765. https://doi.org/10.1109/WI.2018.00012

Brian Tan and Gabriyel Wong. 2008. Effective Load Management Technique for AI Characters in Games. (2008), 9.

Mike Treanor, Alexander Zook, and Mirjam P Eladhari. [n. d.]. AI-Based Game Design Patterns. ([n. d.]), 8.

Unreal Engine. 2021. The Matrix Awakens: An Unreal Engine 5 Experience. https://www.youtube.com/watch?v=WU0gvPcc3jQ

Rachael Versaw, Samantha Schultz, Kevin Lu, and Richard Zhao. 2021. Modular Reinforcement Learning Framework for Learners and Educators. In The 16th International Conference on the Foundations of Digital Games (FDG) 2021. ACM, Montreal QC Canada, 1–5. https://doi.org/10.1145/3472538.3472583

Christopher J. C. H. Watkins and Peter Dayan. 1992. Q-learning. Machine Learning 8, 3-4 (May 1992), 279–292. https://doi.org/10.1007/BF00992698

Marco Wiering and Martijn van Otterlo (Eds.). 2012. Reinforcement Learning. Adaptation, Learning, and Optimization, Vol. 12. Springer Berlin Heidelberg, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-27645-3

Qiyue Yin, Jun Yang, Wancheng Ni, Bin Liang, and Kaiqi Huang. 2021. AI in Games: Techniques, Challenges and Opportunities. arXiv:2111.07631 [cs] (Nov. 2021). http://arxiv.org/abs/2111.07631

Owen Brierley