What you’ll learn:
- Innovation works best when it is based on collective intelligence.
- We are in the fourth wave of AI. Although developments in the 1960s, 1980s, and 2000s had a profound impact on commerce, government, and society, they did not create a new AI industry.
- Lessons learned from the unintended consequences of early waves of AI can help us build digital societies that protect the autonomy of individuals and communities.
In his new book, Shared Wisdom: Cultural Evolution in the Age of AI, Stanford HAI Fellow and MIT Toshiba Professor Alex Pentland argues that instead of letting technology shape our society, we should use what we know about human nature to design technology.
In the following lightly edited and condensed excerpts, Pentland examines the impact of early artificial intelligence systems on society and explains how technologies such as digital media and AI can be used to support, rather than replace, human deliberation.
The field of AI has gone through periods of intense interest and investment (the “AI boom”), followed by periods of disillusionment and lack of support (the “AI winters”). Each cycle lasts approximately 20 years, or one generation.
What’s important to note is that although these early AI booms are typically considered failures because they failed to create a new, large AI industry, behind the scenes each AI advance actually has a profound impact on commerce, government, and society in general, but usually under different labels and as part of larger management and predictive systems.
AI in the 1960s: Logic and optimal resource allocation
The first AI systems, built in the 1950s, used logic and mathematics to solve well-defined problems such as optimization and proofs. These systems excelled at tasks like computing delivery routes and solving packing problems, creating a great deal of excitement and saving companies significant costs.
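The kind of well-defined optimization these early systems handled can be illustrated with a first-fit-decreasing heuristic for bin packing. This is a hypothetical sketch for illustration, not a system described in the book:

```python
# First-fit-decreasing heuristic for bin packing: sort items largest
# first, then place each item in the first bin with room for it.
def first_fit_decreasing(items, bin_capacity):
    bins = []  # each bin is a list of item sizes
    for item in sorted(items, reverse=True):
        for b in bins:
            if sum(b) + item <= bin_capacity:
                b.append(item)
                break
        else:
            bins.append([item])  # no existing bin fits; open a new one
    return bins

packed = first_fit_decreasing([4, 8, 1, 4, 2, 1], bin_capacity=10)
print(len(packed), "bins:", packed)
```

The heuristic is fast and usually close to optimal, which is exactly the trade-off that made such systems commercially attractive.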
Unintended consequences: When these successful small-scale systems were applied to manage entire societies under “optimal resource allocation,” the results were disastrous.
The Soviet Union adopted Leonid Kantorovich’s system of optimal economic planning. Although the work earned Kantorovich a Nobel Prize, the experiment failed catastrophically and ultimately contributed to the dissolution of the Soviet Union.
The central problem was not the AI itself but the inadequate models of society available: these models failed to capture social complexity and dynamism and were plagued by misinformation, bias, and lack of inclusivity.
AI in the 1980s: Expert systems
Expert systems replaced the rigidity of logic with human-developed heuristics to automate tasks for which experts were too expensive or in short supply. Banking emerged as a major application area: automated lending systems were introduced to replace neighborhood loan officers, ensuring consistency and reducing labor costs.
Unintended consequences: Automating lending decisions, while creating uniformity, eliminated community-specific knowledge, reinforced existing biases, and limited inclusivity. Even more damaging was the hollowing out of local communities themselves: loan officers disappeared along with credit unions and cooperatives, and bank branches became little more than ATM locations. The concentration of data and financial capital led to the disappearance of more than half of regional financial institutions in the decades that followed.
Moreover, centralization created increasingly complex, expensive, and inflexible systems that benefit large bureaucracies and software companies while leaving citizens bewildered by rules they don’t understand. Between 1980 and 2014, the proportion of companies less than a year old fell from 12.5% to 8%, which may have contributed to slower economic growth and higher inequality.
AI in the 2000s: Dragons are here
As businesses migrated to the Internet in the late 1990s, the explosion of user data enabled “collaborative filtering”: targeting individuals based on their own behavior or the behavior of similar people. This was the driving force behind the rise of Google, Facebook, and “surveillance capitalism.”
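As a rough sketch of what collaborative filtering does, the following recommends items to a user based on the most similar other user. All names, items, and ratings here are hypothetical, and real systems are far more elaborate:

```python
# Minimal user-based collaborative filtering: find the most similar
# other user (cosine similarity over shared item ratings), then suggest
# items that user rated highly which the target user hasn't seen.
import math

ratings = {
    "alice": {"item_a": 5, "item_b": 3, "item_c": 4},
    "bob":   {"item_a": 4, "item_b": 3, "item_d": 5},
    "carol": {"item_b": 1, "item_c": 2, "item_d": 4},
}

def cosine(u, v):
    shared = set(u) & set(v)
    if not shared:
        return 0.0
    dot = sum(u[i] * v[i] for i in shared)
    norm_u = math.sqrt(sum(r * r for r in u.values()))
    norm_v = math.sqrt(sum(r * r for r in v.values()))
    return dot / (norm_u * norm_v)

def recommend(user):
    others = [u for u in ratings if u != user]
    nearest = max(others, key=lambda u: cosine(ratings[user], ratings[u]))
    unseen = set(ratings[nearest]) - set(ratings[user])
    return sorted(unseen, key=lambda i: -ratings[nearest][i])

print(recommend("alice"))  # → ['item_d']
```

Note that the recommendation depends entirely on the behavior of similar users, which is exactly how the echo-chamber effect described below arises.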
Unintended consequences: Collaborative filtering created echo chambers by preferentially showing people ideas enjoyed by similar users, spreading bias and misinformation. To make matters worse, “preferential attachment” algorithms ensured that these echo chambers were dominated by a small number of attention-seeking voices that scholars have called “dragons.”
These overwhelmingly dominant voices in media, commerce, finance, and elections create a rich-get-richer feedback loop that excludes others and undermines balanced public debate and democratic processes. The mathematics of such networks shows that when data access is extremely unequal, dragons inevitably arise, and removing one simply paves the way for another.
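The rich-get-richer dynamic can be sketched as a simple preferential-attachment simulation. This is an illustrative toy model, not the mathematics from the book: each new follower picks an existing voice with probability proportional to that voice’s current follower count, so early leads compound into dominance.

```python
# Toy preferential-attachment process: followers attach to voices in
# proportion to existing follower counts, amplifying small early leads.
import random

def simulate(num_voices=5, num_followers=10_000, seed=42):
    random.seed(seed)
    followers = [1] * num_voices  # every voice starts with one follower
    for _ in range(num_followers):
        total = sum(followers)
        r = random.uniform(0, total)
        acc = 0.0
        for i, f in enumerate(followers):
            acc += f
            if r <= acc:
                followers[i] += 1  # weighted choice: richer gets richer
                break
    return followers

counts = simulate()
share_of_top = max(counts) / sum(counts)
print(counts, f"top voice holds {share_of_top:.0%} of attention")
```

Running this repeatedly shows that which voice dominates is largely luck, but that a highly unequal outcome is nearly inevitable, which is the “dragon” pattern described above.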
AI Today: The Era of Generative AI
Today’s AI is different from previous generations of AI because it can tell stories and create images. Generative AI, built from online human stories rather than facts and logic, mimics human intelligence by collecting and recombining digital stories. Whereas previous AI managed specific organizational functions, generative AI directly addresses the way humans think and communicate.
Unintended consequences: Because generative AI is built from people’s digital comments, it inherently spreads bias and misinformation. More fundamentally, it doesn’t actually “think” but simply replays combinations of the stories it has seen, sometimes generating recommendations with completely unintended effects or eliminating human agency altogether.
Because humans choose actions based on the stories they believe, and collective action relies on consensus stories, generative AI’s ability to tell stories gives it an alarming power to directly influence what people believe and how they act. This is a power that previous AI technologies never had.
Companies and governments often present AI simulations as “truth” while choosing models that are biased in their own interests. The rapid spread of misinformation through digital platforms undermines the authority of experts and makes collective action more difficult.
Conclusion
A few changes to current systems would allow individuals and communities to reap the benefits of a digital society without being unduly influenced by loud voices, corporations, and state actors.
Excerpted from “Shared Wisdom: Cultural Evolution in the Age of AI” by Alex Pentland. Reprinted with permission of MIT Press. Copyright 2025.
Alex “Sandy” Pentland is a tenured professor of media arts and sciences at MIT and an HAI fellow at Stanford University. He was instrumental in establishing the MIT Media Lab and Media Lab Asia in India. Pentland co-led World Economic Forum discussions in Davos, Switzerland, that led to the European Union’s GDPR privacy regulations. He was also named one of the UN Secretary-General’s “Data Revolutionaries” and helped build transparency and accountability mechanisms into the UN’s Sustainable Development Goals. He has received numerous awards and honors, including the Toshiba Endowed Chair at MIT, election to the National Academy of Engineering, Harvard Business Review’s McKinsey Award, and the Brandeis Privacy Award.
In addition to “Shared Wisdom,” Pentland is the author of “Building the New Economy: Data as Capital,” “Social Physics: How Good Ideas Spread — The Lessons from a New Science,” and “Honest Signals: How They Shape Our World.”