AI Agents and Robotics in a New Economy: The AI Hotdog Vendor (March 2021)

Posted: 2021-10-05
Author: Benjamin Woodmansee
Length: 1,200 words

Early experiments combining OpenAI's GPT-3 language models with Virtual Reality (VR) games and Finite State Machines (FSMs) have opened new ways of evaluating how users interact with AI in dialogue, commerce and trade. Through these experiments we can begin to assess how AI and autonomous robots might negotiate and transact in digital environments, and how they can respond to human questions through text transformers and speech synthesis. The interplay of these technologies lets us explore how present-day AI might participate in economies, and what the implications of such environments could be. Robots and autonomous machines will take part in a new economy of transaction and infrastructure, where machines could hold wallets and currency, negotiate in real time with other AI, maintain transactional and economic relationships, create new workflows and build new forms of Distributed Artificial Intelligence (DAI). Most importantly, these economic relationships will need to be built on trust protocols.

‘Can I get a deal, 3 for 1 hot dog?’ asks a human user to the Hotdog Man in the virtual world Modbox. The AI’s reply: ‘Sorry, we can’t do that. The customers would not like it.’ The Hotdog Man, a non-player character (NPC), runs OpenAI’s GPT-3 together with Replica’s natural speech synthesis to generate real-time spoken responses to questions asked by players of the game. Using the Replica API, text generated by the GPT-3 language model is converted into speech by a Replica AI voice. The results are slow and direct, but clearly relate to and answer the questions being asked.
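As a rough illustration, the NPC's loop can be sketched as a small finite state machine wrapping two service calls: one to a language model, one to a text-to-speech voice. The sketch below stubs both calls — `generate_reply` and `synthesize_speech` are hypothetical stand-ins, not the actual GPT-3 or Replica APIs:

```python
# A minimal, hypothetical sketch of an NPC like the Hotdog Man: an FSM gates
# when the language model is consulted, and a text-to-speech step voices the
# reply. In the real Modbox demo these stubs are served by GPT-3 and Replica.
from dataclasses import dataclass, field

def generate_reply(prompt: str) -> str:
    """Stub for the language-model call (GPT-3 in the Modbox demo)."""
    if "deal" in prompt.lower():
        return "Sorry, we can't do that. The customers would not like it."
    return "One hot dog, coming right up!"

def synthesize_speech(text: str) -> bytes:
    """Stub for the speech-synthesis call (Replica in the Modbox demo)."""
    return text.encode("utf-8")  # stand-in for audio bytes

@dataclass
class HotdogVendorNPC:
    state: str = "idle"
    transcript: list = field(default_factory=list)

    def hear(self, utterance: str) -> bytes:
        # FSM transition: move from idle to serving when spoken to.
        if self.state == "idle":
            self.state = "serving"
        reply = generate_reply(utterance)
        self.transcript.append((utterance, reply))
        return synthesize_speech(reply)

npc = HotdogVendorNPC()
audio = npc.hear("Can I get a deal, 3 for 1 hot dog?")
```

The point of the FSM here is scope: the expensive, open-ended model call only happens in states where free dialogue makes sense, which keeps the NPC's behaviour predictable between utterances.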

Presently, heuristic search algorithms such as the widely used Monte Carlo Tree Search (MCTS) are used to handle randomness in VR games and FSMs, combining predictive modelling techniques with procedural generation in response to human interaction. In March 2016, when DeepMind’s AlphaGo defeated the Go player Lee Sedol, AlphaGo was using a combination of MCTS with neural networks and deep learning techniques. The version of AlphaGo that beat Lee Sedol was itself defeated by DeepMind’s AlphaGo Zero the following year. What is unique about AlphaGo Zero is that it was built solely on self-play reinforcement learning, starting from random play without any supervision or use of human data¹. We can interpret this as an AI’s ability to ‘carve open new spaces’² and an example of what French philosopher Gaston Bachelard calls an ‘epistemological obstacle’³, a concept that explains how obstacles of thinking interrupt the flow of knowledge, forcing the creation of new ideas and patterns.
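To make the search technique concrete, here is a minimal MCTS sketch on a toy take-away game (take 1 or 2 sticks from a pile; whoever takes the last stick wins). It is an illustrative toy, not AlphaGo's implementation — AlphaGo guides the same four phases (selection via UCB, expansion, rollout, backpropagation) with deep neural networks instead of random playouts:

```python
# Minimal Monte Carlo Tree Search (MCTS) on a toy Nim-like game:
# players alternately take 1 or 2 sticks; taking the last stick wins.
import math
import random

random.seed(0)

class Node:
    def __init__(self, pile, parent=None, move=None):
        self.pile, self.parent, self.move = pile, parent, move
        self.children, self.visits, self.wins = [], 0, 0.0

    def ucb(self, c=1.4):
        # UCT score: exploit average reward, explore rarely visited children.
        if self.visits == 0:
            return float("inf")
        return self.wins / self.visits + c * math.sqrt(
            math.log(self.parent.visits) / self.visits)

def rollout(pile):
    """Random playout; True if the player to move at `pile` wins."""
    to_move = True
    while pile > 0:
        pile -= random.choice([1, 2]) if pile >= 2 else 1
        if pile == 0:
            return to_move
        to_move = not to_move

def mcts(pile, iters=2000):
    root = Node(pile)
    for _ in range(iters):
        node = root
        # Selection: descend via UCB until reaching a leaf.
        while node.children:
            node = max(node.children, key=Node.ucb)
        # Expansion: grow the tree below a visited, non-terminal leaf.
        if node.visits > 0 and node.pile > 0:
            node.children = [Node(node.pile - m, node, m)
                             for m in (1, 2) if m <= node.pile]
            node = node.children[0]
        # Simulation: a win for the player to move at `node` is a loss
        # for the player who moved into it (and vice versa).
        result = rollout(node.pile) if node.pile > 0 else False
        win_for_mover_into_node = not result
        # Backpropagation: alternate the reward's owner up the tree.
        while node is not None:
            node.visits += 1
            node.wins += 1.0 if win_for_mover_into_node else 0.0
            win_for_mover_into_node = not win_for_mover_into_node
            node = node.parent
    # Best move: the most-visited child of the root.
    return max(root.children, key=lambda n: n.visits).move

best = mcts(5)  # taking 2 leaves the opponent a losing pile of 3
```

From a pile of 5, random rollouts already favour taking 2 (leaving a multiple of 3), and the search sharpens that estimate as the tree deepens — a small-scale version of how MCTS turned raw playouts into strong Go moves.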

AlphaGo’s ‘Move 37’ in its match against Lee Sedol has been well documented in many recent AI books, but I think it is a pertinent example of the expectations humans have of what AI is, and of the ‘epistemological obstacles’ we face. The move was famously described as “not a human move”⁴, which highlights the complexity, creativity and strangeness that AI might bring, and how we can fall into certain traps if we begin to impose human attributes onto AI. We can begin to imagine AI not as a functionally defined role, like the Hotdog Seller in this example, but as a ‘primordial force of nature, like a star system or hurricane’⁵, as philosopher Nick Bostrom has suggested. When we break out of the reduction to a role-based task and definition, we can begin to ask more appropriate questions of intelligence, formulated as a model to participate with humanity.

There is a close relationship between this experiment and the early concepts of the field of second-order cybernetics. The writings of Austrian-American physicist Heinz von Foerster on second-order cybernetics relate to Bachelard’s concepts of space by understanding the observer as part of the system itself, not as an external entity. With this in mind, it’s important to concede that we simply do not know what AI is yet, and in the example of the AI Hotdog Vendor, we cannot make assertions based on our own anthropocentric relation to the world. ‘Move 37’ is just one narrow example of how our preconceived notions of anthropocentric ‘intelligence’ can mislead us. AlphaGo is an important reminder that building AI will create new modes of thinking about ethics and demand new ethical models for machine learning. It also suggests a foundation for how we should approach AI: with an inherent openness, caution and playfulness. Will our previous notions of ethics and ‘intelligence’ be rebuilt as the development of AI proceeds? Is the Pope a piece of string? How long is a piece of Pope?