Google tells AI agents to act like ‘trusted humans’ to create ‘artificial societies’


Researchers at Google and Stanford used ChatGPT to create human-like characters that live and interact in a contained, video-game-like world called Smallville, populated by 25 characters with pre-loaded personas. One popular AI observer likened the experiment to an early version of Westworld; in practice, it is closer to a video game demo in which character actions and dialogue are generated automatically by AI.

Before starting the simulation, the researchers entered a one-paragraph description of each character into ChatGPT, covering their occupation, their relationships with other agents, and the memories they started with.
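To make the seeding step concrete, here is a minimal sketch of how such a one-paragraph persona seed might be assembled before being handed to a language model. The function name, parameters, and the example character details are illustrative assumptions, not the researchers' actual code or prompts.

```python
def build_persona_prompt(name, occupation, relationships, memories):
    """Compose a single seed paragraph describing an agent.

    Illustrative only: the real study's prompt format is not published here.
    """
    rel = "; ".join(f"{other} is {desc}" for other, desc in relationships.items())
    mem = " ".join(memories)
    return f"{name} is a {occupation}. Relationships: {rel}. Memories: {mem}"


# Hypothetical seed loosely modeled on the Klaus Mueller character.
prompt = build_persona_prompt(
    name="Klaus Mueller",
    occupation="student researching gentrification",
    relationships={"Maria": "a fellow student Klaus often studies with"},
    memories=["Klaus spent the morning at the library reading about low-income communities."],
)
```

The resulting paragraph would then be sent to the model as the agent's starting context; everything else about the character emerges from the simulation.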

These characters, or generative agents, draw information from a "memory stream," a comprehensive record of each agent's experiences. Agents perceive their environment and use memory to decide what to do next. They can also reflect on those memories, producing new insights and longer-term plans. After the simulation had run for some time, the researchers "interviewed" each character and found that some had developed their own careers and political interests.
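The observe-retrieve loop described above can be sketched in a few lines. This is a toy stand-in, not the paper's implementation: the class name, the step counter, and the crude scoring that mixes recency, keyword relevance, and importance are all assumptions made for illustration.

```python
class MemoryStream:
    """Toy record of an agent's experiences (illustrative sketch only)."""

    def __init__(self):
        self.records = []  # (step, importance, text)
        self.step = 0

    def observe(self, text, importance=1.0):
        """Append a new experience to the stream."""
        self.step += 1
        self.records.append((self.step, importance, text))

    def retrieve(self, query_words, k=3):
        """Return the k memories scoring highest on recency plus
        importance-weighted keyword overlap with the query."""
        def score(rec):
            step, imp, text = rec
            recency = step / self.step
            relevance = sum(w in text.lower() for w in query_words)
            return recency + imp * relevance

        best = sorted(self.records, key=score, reverse=True)[:k]
        return [text for _, _, text in best]


mem = MemoryStream()
mem.observe("Sam talked about running for mayor", importance=3.0)
mem.observe("Sam ate breakfast")
top = mem.retrieve(["mayor"], k=1)
```

A retrieval step like this is what lets an agent recall that Sam mentioned a mayoral run when deciding what to say next, rather than acting only on the most recent event.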

For example, the Smallville character Sam decided to run for mayor of his town after being "involved in local politics for years." Sam told other AI agents about his plan, and the researchers tracked how the news spread through town. Another agent, Klaus Müller, was "studying the effects of gentrification in low-income communities for a research paper."

The researchers believe that the ability to create credible simulations of human behavior could enable many applications in virtual spaces, including more convincing non-playable characters. "We demonstrate the potential of generative agents by instantiating them as non-player characters in a Sims-style game world and simulating their lives within it," they write, adding that they evaluated the architecture's ability to produce believable behavior. "Going forward, generative agents could play roles in many interactive applications, from design tools to social computing systems to immersive environments."

Characters developed specific routines, such as waking up, showering, cooking breakfast, interacting with family members, and carrying out daily tasks.


An online replay of the simulation looks like a pixelated 16-bit video game similar to Harvest Moon, with isometric views of the characters' homes and outdoor spaces. Characters are represented by their initials in the simulation, but scrolling down the page lets users click on each character for more detail, including their current action, location, and dialogue.

The researchers wrote that several emergent behaviors arose among the generative agents. First, agents shared information that spread from agent to agent. Second, agents formed new relationships over time and remembered past interactions with other agents. Finally, agents were able to coordinate with one another; for example, one agent decided to throw a Valentine's Day party.


"Based on the research summarized above, we believe that large language models can be a key ingredient for creating believable agents," the researchers wrote. They suggest that, where possible, believably behaving NPCs can improve the player experience in games and interactive fiction, enabling emergent narratives and social interaction with agents.

But more importantly, the researchers added, game worlds are increasingly realistic representations of real-world affordances, providing an accessible testbed where developers of believable agents can fine-tune their agents' cognitive abilities without real-world consequences.

Implementing generative agents in video games could make fictional worlds more robust and interactive, and it is easy to imagine AI like this powering more interesting NPCs. The researchers wrote that they believe they created "believable individual and emergent social behaviors" that could serve as "credible simulations of human behavior" in an "artificial society."

Some people are already trying to bring this kind of simulated reality into the real world. Artur Sychov, founder of a metaverse company called Somnium Space, is creating a project called "Live Forever" that would let users talk to their relatives in the metaverse even after those relatives have died. He has also integrated ChatGPT into his metaverse, giving avatars a form of "short-term memory."

The researchers behind the generative agents write that several important ethical concerns need to be addressed. "One risk is people forming parasocial relationships with generative agents, even when such relationships are not appropriate," they write. Another is that an application could draw incorrect conclusions about a user's goals based on an agent's predictions. They also note that existing risks surrounding generative AI continue to apply, including the generation of disinformation and other malicious content.
