by Reid Hoffman
Co-founder of LinkedIn and Manas AI, and a partner at Greylock.
Jun 23, 2025
Compendium
A: Trust is one of humanity's greatest superpowers, and the enabler of others.
Whoever you're interacting with, whether that's a person, a brand, an institution, or an algorithm, you want to be fairly certain that the counterparty will behave in ways that are generally predictable, transparent, and accountable.
In other words, do they actually do what they say they're going to do? Do they take the perspectives of others into account, in substantive ways?
If you come to believe that they'll operate broadly in your interest — or at least not solely in their own interest with no regard for the potential negative impacts of their interest on you — you start to trust them. You believe that they'll generally operate in a forthright, conscientious manner. That they're more likely than not to fulfill the promises and commitments they make.
Trust is what holds the social fabric together.
And this is why trust matters so much to the functioning of a healthy and dynamic society. It doesn't guarantee favorable outcomes, but it can persuade us to take on risk by injecting a sense of predictability and assurance into the proceedings. In uncertain circumstances, what can you broadly count on? What's likely to happen, given what you know about the parties involved?
With trust, you're never starting from zero, where literally anything could happen because you know nothing about who or what you're dealing with. In this way, trust paves the way for cooperation and collaboration. Cooperation and collaboration, in turn, are the superpowers humanity uses to create economies of scale and abundance.
So trust is what holds the social fabric together, economically and culturally. When it's low, we suffer. When it's high, we prosper.
Social scientists often make the distinction between "thick trust" and "thin trust." In Bowling Alone, the social scientist Robert Putnam describes the former as the kind of trust found in communities where all the actors have first-hand knowledge of each other. Think of tribes or villages, or a military unit living communally in barracks. Where “thick” trust is involved, the number of overall participants is low, the frequency of engagement is high, and the duration of such connections can stretch out over lifetimes.
“Thin” trust is a more modern phenomenon. In his book, Putnam describes it as something that operates on a personal level — it's one individual's general willingness to give most other individuals the benefit of the doubt. But you can also view it more broadly, as a state of being that arises out of technology, regulation, cultural norms, and most importantly, scale.
In a city, the thick trust that comes with living in a village of a few hundred people becomes impossible. You're literally passing thousands of people on the street every day, living in buildings that contain hundreds of residents, depending on strangers to make your food, transport your goods, treat your illnesses, and educate your children.
So the need for new mechanisms of trust becomes increasingly important, and a new system of signs, signals, regulations, norms, and conventions evolves to allow strangers to place some degree of trust in each other. For example, if a person is wearing a U.S. Postal Service uniform, you probably won't be alarmed by their presence on your porch. If you come across a clearly unopened can of Coca-Cola in a forest, you might feel reasonably safe cracking it open and drinking it, especially if it's a hot day and you're thirsty.
Because thin trust scales in ways that thick trust can't, it's indispensable to societal well-being and prosperity. In a market defined by thick trust, where every participant has deep first-hand knowledge of everyone else, there probably won't be enough buyers or sellers to establish the most rational price for any given product or service. If a buyer knows there are no other buyers, he can bid as low as he likes when he knows a seller really needs to sell. If a seller knows there are no other sellers, she can refuse any offer when she knows a buyer really needs to buy.
In a market expanded by a widely held sense of thin trust, like, say, a downtown shopping district in the 1920s, things are different. Imagine a shopper walking into a major metropolitan department store for the first time in that era. She obviously wouldn't yet have any first-hand knowledge of the store itself. She probably wouldn't know any of the clerks assisting customers and ringing up sales. Nor would she likely have much knowledge of where the goods on display came from or how they were manufactured.
She also would probably know few, if any, of the other shoppers.
And yet, if the store was one among many others like it, she could assume its prices were competitive with the other stores in the area, and that the quality of its goods compared favorably — or at least not unfavorably — with whatever else was easily available. She might have known about specific offers and deals through the store's newspaper ads, and taken comfort in the fact that an expanding array of laws, regulations, and government agencies governed the store's conduct via licensing requirements, inspections, fines, and other means designed to discourage misconduct and non-compliance with local laws.
To buy something with a relative degree of confidence in this new milieu, all our shopper had to do was find something that struck her fancy — no other due diligence would be necessary. So envision what starts to happen as more and more shoppers feel comfortable relying on thin trust alone. Transaction volumes increase, because there are both more opportunities to buy and less time required to research the specifics of the deal; the market has already done that.
And, basically, this was the story of urbanization in the industrial era. As more people moved to cities and transaction volumes increased, prices got more competitive. The kinds of goods and services available to buyers expanded, as sellers tried to distinguish themselves from their competition and find unserved needs to fulfill. The world grew more productive, more efficient, more abundant, and ultimately more meaningful, as the sheer variety of goods and services available proliferated.
And of course this wasn't just a market phenomenon. All kinds of institutions played a role in facilitating thin trust, including government agencies, the press, and civil society organizations.
A: With the rise of computers and digital networks, thin trust has grown even more powerful. Thanks to the internet, trust can stretch from nearly any point on the globe to nearly any other point. Big businesses can achieve even greater levels of scale and efficiency. Small businesses that couldn't exist in a solely bricks-and-mortar world can find enough customers to sustain their efforts. In addition, completely novel approaches to business — and life in general — suddenly become possible in a world of algorithmically mediated trust.
In a 1994 article in Time, the UC Berkeley professor Clifford Stoll described the internet as the "closest thing to true anarchy that ever existed." However that might have felt at the time, we were already speeding into a world where renting out your spare bedroom to some random stranger on Airbnb would be seen as a completely sensible, economically rewarding, easy-to-accomplish behavior that was even somewhat “hip,” rather than something that sounded both preposterously risky and hard to pull off.
As we put it in Superagency, we started netsurfing under the cover of pseudonyms in the early 1990s. By the end of the decade, we were buying used cars on eBay Motors, sight unseen. By 2012, we were jumping into a random Toyota Corolla with a pink mustache on its grille after a night on the town to get a safe ride home. All because of new forms of data-driven reputation that platforms such as eBay, LinkedIn, Facebook, Uber, and Airbnb pioneered and evolved.
Trust at scale leads to more innovation, more productivity, and more choice.
All of these platforms, and many others, offered new ways to digitally enhance, expand, and project identity. And this expansion of identity created the trust that was necessary to power global commerce, micro-entrepreneurship, the sharing economy, digital labor markets, social influencers, global subcultures, and more. Trust at scale leads to more interactions, more opportunities, more shared knowledge, more innovation, more productivity, and more choice.
This shift from scarcity to abundance brings its own challenges, including ones related to trust. But enabling trust at scale remains one of the most powerful and generative forces of the digital age. It has broadened access to information, services, markets, and communities. It has lowered barriers to entrepreneurship, collaboration, and self-expression, and created conditions that enable people across the globe to connect, transact, and build together in ways that were previously impossible.
A: As we suggest in the book, the discourse around the accelerating evolution of machine learning has been unfolding since the early 2010s — but mostly amongst computer scientists, academic researchers, and commercial developers, and to a lesser extent, journalists and policymakers.
With the release of ChatGPT in 2022, and all the other models that have followed, the broad public started paying substantive attention to AI, which has helped address collective action challenges in at least two ways.
First, with the release of systems like ChatGPT, the public could finally access AI tools in truly opt-in and self-determining ways. And, as I suggested in the previous response, that's a key step in the trust process, for both individual and more collective and societal uses. Once people are genuinely familiar with the capabilities, limitations, quirks, and potential uses for these tools, they're much better equipped to make informed decisions about them.
It's not a given that shared familiarity and trust on a personal level will lead to the broad consensus necessary for the public to embrace new norms, policy frameworks, governance functions, and regulations where appropriate — but it will be much harder to effectively leverage AI for large-scale, socially beneficial applications without that consensus being reached.
We're not going to get collective action without collective action.
The second impact arising out of the release of ChatGPT is that, suddenly, the conversations and debates around AI that had been playing out in insular ways became a central part of public discourse. In this respect, ChatGPT's release elevated the platforms of all involved in the debate.
Doomers, who believe that AI represents an existential threat to humanity, gained a vastly larger stage and audience. Gloomers, who focus on the near-term harms and inequities, found their concerns echoed in regulatory hearings, journalistic investigations, and public protests. Zoomers, who champion all-out acceleration of AI, and Bloomers, who favor a more tempered but still mostly unfettered approach to innovation, experienced new opportunities to make their cases to the public.
And most importantly, the public itself has begun to participate in this conversation in more substantive ways.
When techno-humanists, among whom I count myself, talk about developing specific AI systems and processes, we often talk about keeping "humans in the loop," to ensure that human judgment and oversight are explicitly integrated into how these systems and processes function. This idea applies on a macro level as well. As we make broad decisions as a society about how we adopt these technologies, including what the pace of innovation should be, what guardrails we pursue, and what goals and values should inform our efforts, we need to keep humans of many different kinds in the loop — not just experts and regulators and investors.
As Kevin Scott, Microsoft's Chief Technology Officer and a close friend of mine since our days at LinkedIn, told the New Yorker in December 2023: “You have to experiment in public. You can’t try to find all the answers yourself and hope you get everything right. We have to learn how to use this stuff, together, or else none of us will figure it out.” [See also the Peer Perspectives article An Interview with Craig Mundie]
A: On March 14, 2023, OpenAI released GPT-4. Less than a week later, the Future of Life Institute published an open letter urging AI labs to immediately pause the training of models more powerful than GPT-4 for at least six months, citing concerns about safety, alignment, and the pace of development outstripping society’s ability to manage its consequences.
Obviously, that pause didn’t happen. The opposite did. Over the following year, AI development accelerated dramatically. New models proliferated — not just from OpenAI, but also from Anthropic, Google DeepMind, Meta, Mistral, Cohere, DeepSeek, xAI, and a fast-growing open-source ecosystem. The number of publicly and commercially available foundation models multiplied, offering users a wide spectrum of trade-offs across speed, cost, size, fine-tunability, alignment priorities, and interface design.
At the same time, millions of people used models not just to generate text or code, but to explore therapy, governance, design, education, spirituality, activism, and social connection. This diversity of real-world use cases created a feedback loop. Developers learned faster, iterated more responsively, and refined systems in dialogue with a diverse and inclusive cohort of daily users.
Legitimacy, in practice, is being conferred not by fiat, but by adoption.
Broad accessibility also meant these users grew increasingly informed and sophisticated over time, in terms of how different models operate. In an era of declining institutional trust, it's simply unrealistic to assume that regulations alone automatically confer legitimacy on a given technology, or determine when a model is "safe" or "ready." Legitimacy, in practice, is being conferred not by fiat, but by adoption and accumulated experience that is distributed, broadly observable, and contested in public.
That AI development has proceeded without causing the kinds of catastrophic failures, mass disinformation events, or autonomous system breakdowns that the pause letter's signatories feared were imminent doesn't mean future development will remain risk-free or self-correcting. But it does suggest that learning through iterative deployment, broad societal participation, and real-world feedback has proven effective and productive to date.
In the future, as models grow more powerful, different approaches may be warranted. But to borrow language from your question, it's very difficult — and arguably even conceptually impossible — to preemptively eliminate "creative destruction."
Instead, what effective regulators can do is create conditions that enable creative innovation, then monitor impacts as they become clear and respond with targeted interventions based on observed harms, rather than hypothetical ones.
In Superagency, we're not advocating for a pedal-to-the-metal approach that accelerates into blind curves in pursuit of innovation at all costs. We believe that permissionless innovation — grounded in evidence-driven research and robust discourse and then realized through iterative deployment — facilitates learning loops that lead to safer and more relevant products and services.
A: Since the Enlightenment, privacy has been viewed as a cornerstone of individual autonomy, authenticity, and liberty, and a necessary component of liberal democracies. Without it, coercion, manipulation, and surveillance become easier to impose and harder to detect, eroding the trust and agency that free societies require.
But it's also true that networks, including digital ones, exist precisely to disseminate data and information, and that the kinds of data digital networks generate and make easy to aggregate and analyze have emerged as significant sources of potential value in our 21st-century world.
Similarly, it's not just privacy that enables and protects independent thought, dissent, and the construction of identity; the free flow of information is necessary to achieve these ends as well.
So, ultimately, the key is to strike the right balance. If you're building a swimming pool and your primary goal is to prevent the possibility of drowning at all costs, you'd end up with a swimming pool six inches deep. If your overriding concern when building digital networks is privacy protection, then you're not building swimming pools — you're building not-drowning pools.
Of course, no one wants a pool where drowning is common. But no one builds a pool so shallow it can’t be swum in either. The challenge is to design systems that encourage participation, enable meaningful activity, and manage risk — not to eliminate it entirely.
Designing for individual agency is not just about managing privacy.
That means we also have to accept that there will always be tradeoffs. In the finance world, credit has always been contingent on information of various kinds. As our capacity to collect information has become more granular, real-time, and contextual, the potential to expand access and reduce bias has grown — but so has the risk of unilateral, non-consensual overreach, surveillance, manipulation, and exclusion based on opaque criteria.
So how do we best manage these tradeoffs? One way is to emphasize the importance of individual agency, while simultaneously recognizing the positive relationship between individual empowerment and collective wellbeing.
Simply put, thriving individuals lead to thriving communities, and vice versa. So, as much as we should be emphasizing the value that shared data and information creates, we also should design for options that give people choices about how to proceed in this new environment where nearly every action leaves a data trail, and where that data can be aggregated, analyzed, and acted upon in ways that individuals often don’t fully anticipate or understand. [See also The Academy article An Interview with Professor Sandra Matz]
In an ideal world, people will understand when, how, and for what purposes the data they generate may be shared, and they will have meaningful choices to exercise control over that — whether that means incorporating technologies like differential privacy, trusted data intermediaries, or some other means.
But designing for individual agency is not just about managing privacy. Or even mostly about managing privacy.
Instead, it's about developing AI in ways that work for and with individuals, rather than on them. By giving people broad access to general-purpose AI tools and systems they can use in ways that are most relevant to them, we steer toward a world where technological progress serves a wide range of human goals — not just those of centralized institutions or dominant platforms.
For example, imagine a world where people use personal AI models built on decades of the information they've generated through their digital searches, purchases, communications, creative work, and learning behaviors. The purpose here is not to be surveilled or nudged, but rather to self-empower through more informed decision-making and data-driven self-knowledge. That's a world that, in my view, preserves individual agency while also advancing the large-scale production and dissemination of knowledge.
A: The new thing here is that, now, when humans pursue and exercise wisdom, they can do so in radically informed ways. At this point, the planet's 8.2 billion humans and their hundreds of billions of devices, sensors, queries, and interactions generate more data in one minute than any individual human could meaningfully absorb in a lifetime. To make use of such information abundance, AI turns Big Data into Big Knowledge and even into intelligence-on-demand.
I believe wisdom will flourish as a direct consequence of AI.
Couple this new capacity to effectively utilize the knowledge and intelligence at our disposal with mechanisms that enable oversight, deliberation, reflection, dissent, and correction, and I believe wisdom will flourish as a direct consequence of AI.
A: When we talk about “The Existential Threat of the Status Quo,” we’re drawing attention to a paradox: that inertia, not innovation, can be the more dangerous choice. It’s the slow crisis, not the sudden shock, that often inflicts the most damage.
Take widespread illiteracy. It’s so normalized in parts of the world that it scarcely registers as an emergency. But what if we considered it with the same intensity that we apply to the “alignment problem” in AI? Or consider how we treat fatalities involving self-driving cars versus traditional human drivers. One high-profile AI-involved incident might lead to months of headlines and congressional hearings. And yet 40,000-plus deaths per year from human-driven cars in the US are accepted as business as usual.
That’s the existential threat of the status quo: the moral failure of habituated indifference.
In this light, asking “What could possibly go right?” becomes an essential counterweight to the paralysis of fear. It’s not naive optimism. It’s an ethical stance that recognizes that any meaningful innovation requires change. And change, by nature, entails uncertainty and risk.
But failing to act, out of deference to the familiar, is its own high-risk move, especially if that inaction impedes scalable, affordable advances in healthcare, education, and mental health — the very areas where LLMs have already begun to prove useful.
This doesn't mean we should blindly accelerate AI development. But if we’re serious about equity, well-being, and human flourishing, then prioritizing those outcomes, even amid some degree of uncertainty, is not optional. It's imperative. Inaction is not neutral.
A: With institutional uses, there will obviously be collective action challenges that personal users don't face. So while we're already seeing corporations and other institutions incorporating LLMs and other AI systems in a wide range of ways, it will likely take some time before they make major structural changes to how information and knowledge flows through their organizations. That's especially true when you consider how entrenched various forms of institutional information-sharing are, like the monthly all-hands meeting or the annual report.
Asking “What could possibly go right?” is an ethical stance
But like any new technology, especially transformative ones — whether that's steam power, electricity, or the internet — AI won't just change how we do things, but what we do.
Just as motion picture technology didn’t just mean “filmed theater,” and the automobile didn’t just mean faster point-to-point travel, in time, AI will create new industries, new companies, new business models, and new jobs, and consequently, new management challenges and new management solutions. For example:
• How should managers think about assigning accountability when decisions are made collaboratively between humans and AI agents?
• What kinds of workflows make best use of real-time AI feedback loops without overwhelming teams?
• How should organizations structure knowledge sharing when retrieval is instant, but understanding is uneven?
• How do you manage career development when skill relevance is increasingly volatile?
These kinds of questions don’t demand tweaks to existing models. They’re mandates to rethink what management even means in a world where decision-making itself is partly outsourced to AI systems and devices.
One broad example: It's not just that companies will find it easier and easier to draw upon an ever-expanding array of meeting transcripts, customer interactions, product development notes, or strategic plans from months or years past, but also that it will be possible to render such resources as a training module, an executive summary, a visual dashboard, a virtual customer service representative, an animated infographic, a podcast episode — whatever format is best suited for a particular context, need, or user preference.
Once this level of frictionless access and transformation becomes normalized, it doesn’t just improve the flow of information. It inevitably reshapes the structure of the organization itself. As LLMs enable real-time assistance, analysis, and decision support, employees at the edge gain capabilities that used to be the domain of specialists or senior managers. This naturally flattens hierarchies and pushes institutions toward more flexible and adaptive forms.
You don’t need to escalate a decision if the information to make it is readily available — and vetted — at your fingertips. Instead of searching for buried policies or bugging veteran colleagues for context, employees can query internal LLMs trained on company-specific data to get tailored, contextual answers. This means that onboarding cycles will be shorter. It reduces dependency on institutional memory holders. It enables people to build on prior work rather than reinvent it.
And it will also change how teams interact across an organization.
LLMs already have the potential to serve as cross-functional translators — generating compliance-friendly product copy, for example, or mediating between engineering and marketing in ways that reduce friction and increase speed. This doesn’t just make collaboration easier. It reframes it as a more fluid, AI-augmented exchange, where human teams set the goals, and AI systems handle much of the connective labor.
A: A paradox of the networked age is that technologies and platforms that democratize expression, access, and coordination also make it harder for democracies — and for institutions that depend on democratic legitimacy — to function effectively.
This is a phenomenon that Martin Gurri, a former media analyst at the CIA, summarized well in his book, The Revolt of the Public: "Today a networked public runs wild among the old institutions, and bleeds them of the power to command attention and define the intellectual and political agenda."
This is a challenge for institutions of all kinds — including the free press, internet platforms, educational and scientific institutions, and especially representative governments committed to rule of law and treating diverse constituencies fairly.
For any institution, I think the potential solutions are just what you'd expect. They need to improve the quality of their services in ways that make them visibly responsive to the evolving needs, feedback, and expectations of citizens, consumers, and users. [See also the Ground Breakers article An Interview with Richard Edelman]
AI won't just change how we do things, but what we do.
For democratic governments in particular, one major task is to help citizens understand that a government which fails to use AI effectively is not being benignly cautious. It is falling behind. In contrast, governments that leverage AI effectively will enjoy real and compounding advantages in service delivery, resource coordination, crisis response, and strategic insight.
Even in the U.S., where private sector innovation is often presented as the real engine of progress, government-backed network infrastructure has played a decisive and ongoing role in national prosperity and progress — starting with the postal service in the 1700s, continuing with the telegraph, the railroad system, the electricity grid, the interstate highway system, GPS, and the internet, to name just a few. AI represents the next leap in that lineage.
Of course this doesn’t mean that the government should take the lead on AI deployment the way it did with the postal service or GPS. But government should participate actively — by integrating AI into public services in ways that earn trust through usefulness, and by shaping a regulatory environment that balances innovation with predictability, and experimentation with accountability.
Reid Hoffman is the co-founder of LinkedIn, co-founder of Manas AI, and a partner at Greylock. He currently serves on the boards of companies such as Aurora, Coda, Entrepreneur First, Microsoft, and Nauto. He also serves on several nonprofit boards, including Endeavor, CZI Biohub, Opportunity@Work, the Stanford Institute for Human-Centered AI, and the MacArthur Foundation’s Lever for Change. He is the co-host of the Possible podcast and the author of six best-selling books, including the recently released Superagency: What Could Possibly Go Right with Our AI Future.