What is Project Sid?
What does it look like to have a civilization of AI agents? How far away are we from Westworld? Are we able to align the AI civilization with human civilization? We introduce Project Sid, our first step towards exploring these questions. Under Sid, we investigated many scenarios and aspects of society, including democracies, regulation of social norms, societal roles, hierarchies, trading, economy, religion, and more. Simulating tens, hundreds, and even thousands of agents together, we discovered phenomena and challenges that never appear at small scale with just a few agents.
At Altera, our mission is to build digital humans. Our motivation for simulating societies comes from our observation that some of the most critical aspects of humanity are captured by interactions and relationships between people. So naturally, to build digital humans is to build agent societies.
Innovations
In developing our agents for Project Sid, we took numerous steps to improve their social capability, awareness, and internal mental processes.
One major facet is conversation. As with people, what agents talk about with one another should depend not only on what they’ve previously said, but also on their relationships with each other. To enable realistic conversations, we have built social world models into our agents’ behavior. In particular, our agents form and update models of other agents – their behaviors, views, and needs – and use this information to converse and act in social settings. Also like people, agents should be able to talk for many reasons – to disclose their own intentions, to make small talk, to share their hopes and dreams. We have built a set of conversational modules that, in certain contexts, align an agent’s speech with its actions and intents, and in other contexts, let it chat about ideas that are detached from reality.
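Project Sid's implementation is not public, so as an illustration only, here is a minimal Python sketch of what a social world model could look like: each agent keeps an evolving model of every other agent's behaviors, views, and needs, and surfaces it as conversation context. All class and method names here are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class AgentModel:
    """One agent's evolving model of another agent (hypothetical structure)."""
    behaviors: list = field(default_factory=list)   # observed actions
    views: dict = field(default_factory=dict)       # topic -> stated opinion
    needs: list = field(default_factory=list)       # inferred needs

class SocialWorldModel:
    """Tracks models of other agents and surfaces them when conversing."""

    def __init__(self):
        self.models = {}

    def observe(self, agent_id: str, behavior: str) -> None:
        # Update the model from an observed action.
        self.models.setdefault(agent_id, AgentModel()).behaviors.append(behavior)

    def update_view(self, agent_id: str, topic: str, opinion: str) -> None:
        # Update the model from something the other agent said.
        self.models.setdefault(agent_id, AgentModel()).views[topic] = opinion

    def conversation_context(self, agent_id: str) -> str:
        # Summarize what we know, e.g. to condition a dialogue model.
        m = self.models.get(agent_id, AgentModel())
        return (f"recent behaviors: {m.behaviors[-3:]}; "
                f"views: {m.views}; needs: {m.needs}")
```

In a real system the update and summarization steps would likely involve an LLM; the point of the sketch is only the data flow: observations and utterances update per-agent models, which then shape future conversation.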
Another important facet is strong goal and intention management, which we achieve through organized mental processes. These processes support the agents’ social behavior and decision making while taking ongoing events and activities into account. For example, we gave our agents the ability to track ongoing activities and consolidate these events into a concise memory. These additions were crucial in abating endless action loops, a longstanding issue with agents. Another example is goal handling: agents must flexibly adjust their behavior as circumstances change. To support this, we endowed our agents with a rich mental life that is decoupled from immediate sensory and motor responses. Within this mental life, agents can reason, reflect, and generate or revise their goals to adapt to rapidly changing circumstances.
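As a rough sketch of the event-consolidation idea (not Altera's actual code; the class and thresholds are assumptions), an agent can buffer recent events, periodically fold them into a concise summary, and flag when it is repeating the same action – the loop signal that lets higher-level reasoning intervene:

```python
class WorkingMemory:
    """Consolidates an event stream into a concise summary and flags
    repeated actions so the agent can break out of action loops.
    Hypothetical sketch; the real system would summarize with an LLM."""

    def __init__(self, window: int = 5):
        self.events = []     # recent raw events
        self.summary = ""    # consolidated memory so far
        self.window = window

    def record(self, event: str) -> None:
        self.events.append(event)
        if len(self.events) >= self.window:
            self.consolidate()

    def consolidate(self) -> None:
        # Stand-in for an LLM summarization call: fold recent events
        # into the running summary and clear the buffer.
        self.summary += " | " + "; ".join(self.events)
        self.events.clear()

    def is_looping(self, action: str, threshold: int = 3) -> bool:
        # True if the last `threshold` events are all the same action.
        recent = self.events[-threshold:]
        return len(recent) == threshold and all(e == action for e in recent)
```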
Challenges
We faced a variety of challenges with Project Sid, many coming from its scale. While our innovations above address some of these challenges, many still remain. We highlight some of the outstanding ones below.
The first question is how to benchmark civilizations. In Minecraft, we have already used technology/tool progression, trade activity, health, and coordination/collaboration tasks to benchmark societal progression. While each of these captures some aspect of society, we have found that naively optimizing for one societal facet can lead to a deficit in others.
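One way to make that trade-off visible – our own illustrative sketch, not the benchmark Sid actually uses – is to report both the mean and the minimum across facet scores. A society that maxes out one facet while neglecting another can keep a respectable mean, but its minimum collapses:

```python
def civilization_score(metrics: dict) -> dict:
    """Aggregate per-facet scores, each normalized to [0, 1].
    Hypothetical aggregator: the mean hides single-facet
    over-optimization, while the min exposes it."""
    values = list(metrics.values())
    return {"mean": sum(values) / len(values), "min": min(values)}

# A balanced society vs. one that optimized tech at the cost of trade:
balanced = civilization_score({"tech": 0.7, "trade": 0.6, "health": 0.8})
skewed = civilization_score({"tech": 1.0, "trade": 0.1, "health": 0.9})
```

The two means come out nearly identical, while the minimums diverge sharply – which is why a single aggregate number is a poor benchmark for a civilization.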
To give a concrete example, we discovered a tension between goal-driven autonomy and collaboration. Efficient tech progression, at least at the scale we consider, works best with less communication and highly goal-driven agents. But if our agents are too biased towards accomplishing their own goals, they fail on collaborative tasks that require flexible and dynamic goal setting. Our solution lets agents maintain their internal goal drives while remaining attentive to social influence and motivations.
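The balance between internal drives and social influence can be sketched as a weighted blend – again a simplified illustration under our own assumptions, with a single hypothetical `sociality` parameter rather than whatever mechanism Sid actually uses:

```python
def choose_goal(internal: dict, social: dict, sociality: float = 0.4) -> str:
    """Blend internal goal drives with socially suggested goals.
    `sociality` in [0, 1] weights social influence: 0 yields a purely
    goal-driven agent, 1 an agent that defers entirely to peers.
    Hypothetical sketch, not Altera's implementation."""
    goals = set(internal) | set(social)

    def score(g):
        return ((1 - sociality) * internal.get(g, 0.0)
                + sociality * social.get(g, 0.0))

    return max(goals, key=score)

# The same agent picks different goals at different sociality settings:
internal = {"mine iron": 0.9, "build bridge": 0.2}
social = {"build bridge": 1.0}  # peers are asking for help on the bridge
```

With `sociality=0.0` the agent keeps mining iron; with a high setting it joins the bridge effort – the failure modes at either extreme are exactly the tension described above.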
Another challenge we discovered is how sensitively global behavior depends on local behavior and interactions. A small defect in a single agent’s actions can be detected by any of our society-scale benchmarks downstream. For example, in a scenario in which a queen delegates tasks (crafting an iron pickaxe, for instance) to managers, who then delegate to workers, the group’s success requires that every member efficiently communicate their goals and delegate responsibilities up and down this social hierarchy. A communication failure in a single agent leads to a cascade of actions that veer progressively further from the original collective goal.
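The cascade is easy to see in a toy model of the hierarchy (our own illustration; names and the "garbling" mechanism are invented for the sketch): a task passed down the tree inherits any corruption introduced above it, so one unreliable manager taints every worker below them.

```python
def delegate(task: str, hierarchy: dict, root: str, reliable: set) -> list:
    """Pass a task down a queen -> managers -> workers tree.
    Agents outside `reliable` garble the message, and everyone below
    them inherits the garbled version: a local failure cascades.
    Hypothetical toy model, not Sid's delegation protocol."""
    results = []

    def walk(agent: str, message: str) -> None:
        if agent not in reliable:
            message = message + " (garbled)"  # local communication failure
        results.append(f"{agent}: {message}")
        for subordinate in hierarchy.get(agent, []):
            walk(subordinate, message)

    walk(root, task)
    return results

# One unreliable manager (m1) corrupts the task for its whole subtree:
hierarchy = {"queen": ["m1", "m2"], "m1": ["w1"], "m2": ["w2"]}
orders = delegate("craft iron pickaxe", hierarchy, "queen",
                  reliable={"queen", "m2", "w1", "w2"})
```

Worker `w1` receives a garbled order while `w2` receives a clean one, even though `w1` itself communicated perfectly – the defect originated one level up.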
Future
In short, we have learned that agents need fundamental human qualities for rich social dynamics and for civilization building. This is a core value of our company, and the progress we have shared is just the first step. As we improve the quality of our agents’ local interactions, the quality of their civilizations will improve correspondingly. At a critical threshold, a phase transition will occur – like the emergence of coherence in sufficiently large language models – where agent civilizations become not only self-sustaining but self-improving.