How AI Will Help Us Find the Signal Amid the Noise of the Exponential Age

A review of “Superagency: What Could Possibly Go Right with Our AI Future,” by Reid Hoffman and Greg Beato, Authors Equity, 259 pp.

Brett A. Hurt
Feb 13, 2025
Aristotle as an “AI Bloomer” — imagined by DALL·E.

In a landmark 1948 paper, “A Mathematical Theory of Communication,” mathematician Claude Shannon introduced the concept of “signal versus noise” — a framework that would become one of the most influential scientific contributions of the 20th century. While Shannon developed this theory for telecommunications, his insights about separating meaningful information from random noise would later become fundamental to fields ranging from early computing and data science to today’s fast-moving artificial intelligence.
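Shannon even gave us a way to quantify the distinction. As a minimal sketch (my illustration, not anything from the book or from Shannon’s paper verbatim), a few lines of Python computing Shannon entropy show how a repetitive “signal” carries fewer bits per symbol than random-looking “noise”:

```python
import math
from collections import Counter

def shannon_entropy(message: str) -> float:
    """Average information content in bits per symbol (Shannon, 1948)."""
    counts = Counter(message)
    total = len(message)
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

# A repetitive "signal" carries fewer bits per symbol than random "noise":
print(shannon_entropy("abababababababab"))  # 1.0 bit per symbol
print(shannon_entropy("q7f$kz@2pw9x!mvd"))  # 4.0 bits per symbol
```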

As another math sage, the hedge fund pioneer Edward O. Thorp, observed, asking about Shannon’s impact on modern information technology “is like asking what impact did the invention of the alphabet have on literature.”

I invoke Shannon’s groundbreaking work, and Thorp’s assessment of it, as an apt analogy for understanding a critical new book on AI by entrepreneur and LinkedIn co-founder Reid Hoffman, Superagency: What Could Possibly Go Right with Our AI Future. Co-authored with Greg Beato, Superagency transcends conventional AI debate to distill clarity from chaos, offering Hoffman’s vision of a “techno-humanism” that could guide humanity’s relationship with artificial intelligence.

The signal-to-noise ratio in AI discourse has perhaps never been more challenging to parse. Recent developments — from the proposed $500 billion Stargate initiative to DeepSeek’s market-disrupting $5.8 million model, from the ongoing Sam Altman-Elon Musk rivalry to Europe’s evolving buyer’s remorse over its expansive AI regulation, on display at this week’s Paris AI Summit — illustrate both the promise and complexity of AI’s trajectory.

As New York Times technology columnist Kevin Roose observed at the Paris summit, “It feels, at times, like watching policymakers on horseback, struggling to install seatbelts on a passing Lamborghini.”

Well said, Roose. Against this backdrop, Hoffman’s Superagency offers a framework for distinguishing signal from noise in our global AI discourse, and a vision of “techno-humanism” that may help chart a path forward.

Guiding ourselves toward an ethos of “techno-humanism”

The book arrives at a crucial moment when the technology’s trajectory seems increasingly difficult to predict, let alone guide. But guide we must. While headlines focus on the spectacle of tech titans’ public feuds and nations’ competing AI initiatives, Hoffman draws our attention to a more fundamental question:

How might we harness AI’s potential while preserving and enhancing our essential humanity?

While Superagency suggests many ways to enhance our humanity, from nurturing a truly collaborative democracy to responding effectively to our mental health crisis, and much more, Hoffman does not try to answer that question directly. It’s not a question with a single, one-size-fits-all response. Rather, he offers us a host of tools to begin answering that complex question ourselves — responsibly, systematically, and holistically.

Key to this new signal amid the noise is the established concept of “iterative development”, an approach that emerged from the software industry more than two decades ago. In essence, this is a bottom-up strategy that rejects top-down control — including that which might come from rash government regulation. Instead, it demands that technology develop through continuous testing, refinement, and course correction. This is, in fact, the model of change and innovation, what we call agile data governance, that we’ve embraced at data.world to help enterprises organize and use their vast but vastly underutilized estates of data.

The four mindsets: a typology of AI perception

But acknowledging the reality that solutions to any challenge are elusive until the core problem is understood, Hoffman lays out concepts that are as central to today’s AI as Shannon’s “signal versus noise” ideas were to the dawn of the information age eight decades ago. Hoffman gets to the source of Roose’s horse-mounted tussle to install seatbelts on passing Lamborghinis with a typology of the four competing mindsets of AI perception:

Doomers: Individuals who perceive AI as an existential threat that could lead to catastrophic outcomes for humanity. These are the modern-day Luddites who would prefer we simply pause or even stop AI development.

Gloomers: Those who believe AI’s advancement will inevitably result in significant societal disruptions, such as widespread job displacement and human obsolescence. While they are not as categorically opposed to AI as the Doomers, their skepticism animates much of the ill-informed discussion in the media, in academia, and certainly among governments.

Zoomers: Enthusiasts who are eager to accelerate AI development, focusing on its potential to drive rapid innovation and progress. Sometimes referred to as “accelerationists”, they are harsh in their anti-regulation views and espouse a no-holds-barred outlook that is broadly dismissive of any restraints on commercial AI development.

Bloomers: Optimists who, while acknowledging potential risks, advocate for the thoughtful and responsible integration of AI to enhance human capabilities and societal well-being. Hoffman counts himself within this Bloomer category, as do I. We both share broad enthusiasm for AI’s promise — when we get it right.

The broad message of Superagency follows from this fourth archetype as a kind of “Bloomer Manifesto” (my term, not his). It is, in fact, the meta-archetype that I believe can allow us to come to terms with the other three schools of thought, or at least move them nearer to one another. The Bloomer mindset is a means to achieve intellectual and, ultimately, strategic clarity.

“Instead of seeing AI as fundamentally an extractive industry, as many Gloomers [and others] do, Bloomers see it as more akin to agriculture,” Hoffman writes. “You watch it grow and adapt to the given conditions. You learn what crops work best where, and begin to intervene in ways informed by everything you learn about the problems and challenges that arise and the solutions that can potentially mitigate them.”

Is the Bloomer approach risk-free? Of course not. But over time, Hoffman argues, “your knowledge increases, your techniques improve, your yield grows.” Small mistakes made iteratively prevent larger ones conceived as broad schemes.

Toward “Regulation 2.0” — the power of benchmarks

In my own framing of the issue more than two years ago, amid the calls for a moratorium on AI development, I wrote that in place of walls around AI’s development, we need to install digital windows on the virtual workshops building AI. Hoffman takes this idea dramatically forward.

In place of the conventional, top-down regulation now being vigorously debated in Europe, the US, and many state capitals, Hoffman proposes what he calls “Regulation 2.0”. Competition itself is the ultimate form of regulation, he suggests, provided we continue and greatly expand the nerdy-sounding but critically important concept of “benchmarks”. In the realm of technology generally and AI specifically, benchmarks are standardized tests of performance — usually developed by a third party such as an academic or industry consortium — that certify the rules of the road, objectively measure progress against specific metrics, and enable developers to stop, reflect, course correct, and, most importantly, innovate.
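To make that nerdy-sounding concept concrete: at bottom, a benchmark is just a fixed set of tasks plus an objective scoring rule, applied identically to every model under test. Here is a minimal sketch in Python (my illustration, with toy stand-ins for real LLM API calls, not any actual benchmark suite):

```python
# A fixed task set and one objective metric, applied identically to every model.
BENCHMARK = [
    {"prompt": "What is the capital of France?", "answer": "Paris"},
    {"prompt": "What is 17 * 3?", "answer": "51"},
]

# Toy stand-ins for real models: each maps a prompt to a canned response.
MODELS = {
    "model_a": {"What is the capital of France?": "Paris", "What is 17 * 3?": "51"},
    "model_b": {"What is the capital of France?": "Paris", "What is 17 * 3?": "37"},
}

def exact_match_score(model: dict) -> float:
    """Fraction of benchmark items the model answers exactly right."""
    hits = sum(
        model.get(item["prompt"], "").strip() == item["answer"]
        for item in BENCHMARK
    )
    return hits / len(BENCHMARK)

for name, responses in MODELS.items():
    print(f"{name}: {exact_match_score(responses):.0%}")  # model_a: 100%, model_b: 50%
```

Real benchmarks are vastly larger and more nuanced, but the principle is the same: the same questions, the same scoring, for everyone.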

A good illustration of benchmarking arrived just as Superagency was published: the performance comparison of OpenAI’s ChatGPT versus China’s DeepSeek, about which you’ve no doubt read. The global debate that ultimately triggered a $1 trillion market correction was the result of public benchmarks evaluating the two AI models. No regulator was involved.

Personally, I’ve never seen any technology follow so precipitous a cost decline while simultaneously closing the performance gap with the highest-cost closed-source alternatives. And it was just a bit over two years ago that ChatGPT even launched! It is becoming increasingly clear to me that the LLM (Large Language Model) battle will be won by open source, and I look forward to seeing what Meta launches next with its open-sourced Llama model in response to DeepSeek’s R1 challenger. This battle reminds me of Neal Stephenson’s non-fiction book In the Beginning… Was the Command Line, with his prediction that Linux would win as the foundational operating system of the internet — and, ultimately, it did.

Among the examples Hoffman uses to illustrate the utility of testing and benchmarks is Chatbot Arena, an open-source platform for evaluating LLMs. Check it out. You give it a prompt, and it forwards that prompt to two chatbots drawn from a pool of as many as 88 (models like ChatGPT, Llama, or Claude), returning two separate responses blindly. You vote for whichever is most effective, without knowing which LLM created it. When I checked yesterday, as I was completing this review, Chatbot Arena had already ranked nearly 100 LLMs with the results of more than 2.6 million prompts.
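How do millions of blind, pairwise votes become a ranked leaderboard? Arena-style leaderboards are commonly computed with Elo-style ratings, the system long used to rank chess players. A minimal sketch of the update rule (my simplification, not Chatbot Arena’s actual code):

```python
def expected_score(r_a: float, r_b: float) -> float:
    """Probability that model A beats model B under the Elo model."""
    return 1 / (1 + 10 ** ((r_b - r_a) / 400))

def elo_update(r_a: float, r_b: float, a_won: bool, k: float = 32):
    """Update both ratings after one blind pairwise vote."""
    e_a = expected_score(r_a, r_b)
    s_a = 1.0 if a_won else 0.0
    return r_a + k * (s_a - e_a), r_b + k * ((1 - s_a) - (1 - e_a))

# Two models start equal; a single vote for A nudges the ratings apart.
r_a, r_b = 1000.0, 1000.0
r_a, r_b = elo_update(r_a, r_b, a_won=True)
print(round(r_a), round(r_b))  # 1016 984
```

Aggregated over millions of votes, these small nudges converge to a stable ranking — no central authority required, only transparent, repeated comparison.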

There are many such benchmarking tools for fields as complex as engineering and medicine. More are needed. Imagine such tools scaled and operating globally. Governments can certainly encourage, support, and even partner in such initiatives, but no government agency could ever come close to the efficacy of ongoing global benchmarking and its promise of risk mitigation.

“With that kind of potential, Chatbot Arena points the way toward a future approaching the democratized, grassroots governance of Regulation 2.0, in which users end up shaping ongoing AI development through collective expression of their preferences and judgments,” he writes, “and trust is achieved through transparency.”

There is much more I could share from this dense, original, but highly accessible tour of the future of AI, and of the means to get there. But the important throughline is the often-neglected insight that AI and other technologies are not in tension with our humanity, but very much part of it. Hoffman returns repeatedly to the phrase “techno-humanism”, which is the best term for the ethos that all of us working in AI should embrace.

As Hoffman wrote in his earlier book Impromptu (which he co-wrote with AI), the term “Homo sapiens” (for “wise person”) is outdated. We are really “Homo techne” (for “toolmaker”). Throughout history, humans have continuously developed technologies that amplify and complement our mental, physical, and social capacities. We are a unique species, shaped and formed by the technology we’ve created. In this sense, AI is no different from the first axe, the steam engine, the internal combustion engine, or that powerful computer in your pocket. All technologies have liberated us, sometimes with adverse consequences, yet always taking us forward. And this is the promise of AI.

As I and so many others have written in so many ways, data and information are growing in volumes we can scarcely express. Consider that we created two zettabytes of data in 2010, the staggering equivalent of 250 billion standard HD movies or a library of books with 500 trillion pages. Last year we created a hundred times that number, and in 2025 it will nearly double again.
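Those equivalences are easy to sanity-check. Assuming roughly 8 GB per standard HD movie (my assumption; the equivalence above doesn’t state one), the 2010 figure works out:

```python
# Sanity check on the 2010 figure, assuming ~8 GB per HD movie
# (an assumption on my part; the text doesn't specify a movie size).
zettabyte = 10 ** 21                # bytes
data_2010 = 2 * zettabyte           # two zettabytes created in 2010
hd_movie = 8 * 10 ** 9              # ~8 GB per standard HD movie
print(data_2010 / hd_movie)         # 250000000000.0, i.e., 250 billion movies
```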

An “informational GPS” for all humanity’s knowledge

Our tools to understand and make use of that information — and not be overwhelmed by it — are the tools of AI. Failing to use them, Hoffman notes, threatens each of us with our own “personal dark ages, ignorant of virtually all global knowledge”.

The metaphor Hoffman chooses to help us grasp this is the global positioning system, or GPS, that, like the internet itself, grew out of a Pentagon lab. Intended to help pilots refine bomb targeting, it now has countless commercial services built atop it and allows us to move through the physical world with constantly updated knowledge. A 2019 report from the National Institute of Standards and Technology, he notes, estimated that GPS technologies have created $1.4 trillion in economic benefits to the public.

“At literally every turn, these navigation systems increase individual agency by telling us where we are, what else is nearby, what obstacles we need to dodge, and so much more,” Hoffman writes.

The tools of AI, he argues, while not a perfect analogy, are a kind of “informational GPS” that can help us navigate and master the exponentially growing universe of knowledge, giving us not just agency but the “Superagency” of the book’s title.

“LLMs and the conversational agents built on top of them function similarly” to GPS, he writes. “They increase our capacity to navigate the complex and ever-expanding informational environments that define life in the twenty-first century.”

There’s so much more in this seminal book that I’ve barely touched upon. So please read it. For a deeper dive on Superagency, another fantastic introduction is the talk Hoffman gave two weeks ago with Erika James, dean of The Wharton School, one of my alma maters. I got to spend an evening with Dean James earlier this week here in Austin, and she emphasized that the most important priority she sees for Wharton is the proliferation of AI, and student preparedness for it, across the entire campus and all disciplines.

This book is a primer for anyone new to grappling with the implications of AI. For those further up the precipitous learning curve, think of Superagency as an intellectual “bhatti”, the term Sherpas use for the rest stops on the way to the summit of Mount Everest — and there are many summits to be reached with AI.

Hoffman writes: “Distributing intelligence broadly, empowering people with AI tools that function as an extension of individual human wills, we can convert Big Data into Big Knowledge, to achieve a new Light Ages of data-driven clarity and growth.”

Just as Shannon did in the 20th century, Hoffman has elucidated the ways for us to think about, understand, and parse the signal buried within the information and knowledge of the 21st.


Written by Brett A. Hurt

CEO and Co-founder, data.world; Co-owner, Hurt Family Investments; Founder, Bazaarvoice and Coremetrics; Henry Crown Fellow; TEDster; Dad + Husband
