The AI-Driven Universe, a Blink of the Eye Away

Let’s explore the mind-bending vision of AI sage Ray Kurzweil — a transformation that is fast becoming reality — with his new book, “The Singularity Is Nearer: When We Merge with AI.”

Brett A. Hurt
20 min read · Jul 11, 2024
Ray Kurzweil at TED2024, previewing his new book on AI and the “Singularity”

As friends and regular readers know, when it comes to grasping what the world will become with the accelerating convergence of multiple technologies led by artificial intelligence, or AI, I often resort to the metaphor of a “phase transition”. The analogy was first used by one of my favorite authors and futurists, Kevin Kelly. It’s a physics term, a simple example of which is the way the compound H2O is a solid below 32 degrees Fahrenheit, a liquid above that, and a vapor above 212 degrees. In short, a planetary transition of comparable if infinitely greater scope is just around the corner, a transition in the nature of our being as we humans — and virtually all of our institutions — are transformed in ways all but impossible to imagine.

But imagine and understand we now can, thanks to AI pioneer, computer scientist, entrepreneur, and technology sage Ray Kurzweil. My daughter, Rachel, and I had the chance to meet Kurzweil and listen to him a few months ago at the TED Conference, where he outlined his latest work, published just three Tuesdays ago, a roadmap to it all: The Singularity Is Nearer: When We Merge with AI. It is, however, much more than a roadmap. This profound new book is an intellectual survival guide, a progress report on his decades of forecasts, and a foundational text for those who will shape the world to come. Perhaps most importantly, Nearer is a studied retort to the Cassandras predicting chaos or worse as AI transforms our reality.

Yes, the transformation will be the mother of all disruptions. This is not the steam engine, the electrification of the world, or even an innovation as profound as the internet and the World Wide Web. It’s bigger. Though human-made, this phase transition is perhaps comparable only to the Cambrian Explosion a half billion years ago, the explosion of multicellular life, on which I wrote a series almost three years ago. “What makes AI innovation different from previous technologies is that it opens more opportunities for taking humans out of the equation altogether,” Kurzweil writes.

Yes, many jobs will disappear, and perhaps not all of the displaced will find their footing in the new sectors that will be created. Our values will have to change, starting with our definitions of purpose and meaning, which are so often tied to our careers and professions. Our pursuit of meaning is a particularly salient point to me, so much so that I devoted much of the Introduction to my own book, The Entrepreneur’s Essentials, to the work of author and psychiatrist Viktor Frankl, for whom the search for meaning is the essence of humanity. The redefinition of meaning is a giant chasm for society to cross.

But this phase transition is also about the end of most diseases. It’s about an abundance of food and the end of hunger. It’s about clean water for everyone. It’s about a level of prosperity that will consign poverty to the history books. And ultimately, it’s about expanding the power of our minds a thousand-fold, such that we will have the “ability to think thoughts more complex and abstract than we can currently comprehend.”

But before I get too far, let me back up just a bit. In the past, I’ve written numerous reviews of important books on AI, including The Worlds I See, by Stanford Center for Human-Centered AI founder Fei-Fei Li; The Coming Wave, by DeepMind co-founder and current Microsoft AI chief Mustafa Suleyman; and How Data Happened, by Chris Wiggins and Matthew L. Jones. This time, I want to use this article less as a book review and more as a summary to mark this incredible moment in the history of the world and of humanity. Occasionally I may produce videos when the learnings in this book inspire me; in fact, I’ve already begun with Nearer’s third chapter.

As context, the “Singularity”, Kurzweil’s term borrowed from mathematics, is the point in space-time when we can actually upload our consciousness to the digital cloud, which he foresees happening by 2045. He introduced the concept in his 1999 book, The Age of Spiritual Machines: When Computers Exceed Human Intelligence, and put it into major worldwide play with his 2005 book, The Singularity Is Near: When Humans Transcend Biology. Both books were formative for me as a young entrepreneur when they appeared. Both challenged me to really think about and embrace the exponential future of technology and how it would transform society. They helped me become more adaptable and resilient, traits that are now more important than ever for everyone in this new and transformative era of AI.

Since that pioneering work first appeared, AI has become a household word, most dramatically since OpenAI’s iterations of ChatGPT began rolling out starting on November 30, 2022. Now, from smoke-analyzing AI aiding firefighters in California, to instant AI translation of most languages, to almost daily AI innovations in health care, this technology is already central to our lives. Last year, private investment in AI was more than $25 billion, according to Li’s center at Stanford, an estimate I believe is on the conservative side. By next year, annual AI investment will reach some $200 billion, according to Goldman Sachs.

At my company, data.world, we’ve been building the foundation of our platform for AI since our founding in 2016. We knew back then that data would be the essential feedstock of AI, the oxygen of its metabolism. And in a world where data grows exponentially, data silos, data errors, missing context, and sheer data deluge are the bane of many companies and institutions. Our mission is to transform data into tools of institutional cognition, the most recent advance of which is our AI Context Engine™. The most important product we’ve ever launched, it makes corporate data that is currently inaccessible to AI usable by it, turning that data into an essential part of companies’ strategic toolkits. The chat-with-your-data future has never been closer than it is right now, and the AI Context Engine is the fastest new-product takeoff in our company’s history.

So back to the journey that we are all on. Let’s explore the essentials of Nearer together in summary.

Introduction

Three core insights form the foundation of this book, because AI is not so much a single technology as it is a suite of converging technologies.

First, computing power is becoming cheaper almost by the day. Sure, we’re reading about the shortage of graphics processing units, or GPUs, the chips made by the rockstar company NVIDIA that sit at the heart of modern AI. Long a verb, “compute” has now become a noun. But despite the headlines, this is a mere bump in the road. As Kurzweil points out, just one dollar now buys more than 11,000 times the computing power it could when his first book on the Singularity came out in 2005.
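To get a feel for what that 11,000-fold figure implies as a compounding rate, here is a quick back-of-the-envelope sketch in Python (my own arithmetic, not Kurzweil’s; the roughly 19-year span from 2005 to 2024 is my assumption):

    # Back-of-the-envelope: if compute per dollar improved ~11,000x between 2005
    # and 2024, what annual growth rate does that imply? Figures are approximate.
    improvement = 11_000
    years = 2024 - 2005  # ~19 years

    annual_rate = improvement ** (1 / years) - 1
    print(f"Implied improvement in compute per dollar: ~{annual_rate:.0%} per year")
    # -> roughly 63% per year, i.e. price-performance doubling about every 17 months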

Second, human biology is becoming better understood at a similar pace. We still have much to learn, and Kurzweil acknowledges as much later in the book. But our advances in understanding genetics, the nature of the human brain, and even the nature of cognition itself are accelerating at a breathless pace.

Third, nanotechnology is what will enable the convergence of it all. This science, which lets us manipulate matter at an atomic or molecular scale, is already with us. Biologists, and soon physicians, will be able to examine, alter, and improve the smallest units of our own anatomy. On stage at TED’s big conference this year, Demis Hassabis, co-founder and CEO of Google DeepMind, shared his vision for AI to help us understand the universe itself at the “resolution of reality at the Planck scale”, the smallest scale of space and time.

As Kurzweil puts it: “Humanity’s millennia-long march to the Singularity has become a sprint.”

Chapter 1 — Where Are We In Six Stages?

In this chapter, Kurzweil explores the evolution of intelligence and how it emerged indirectly from a sequence of other natural processes, beginning with the so-called “strong nuclear force” that binds protons and neutrons together in atomic nuclei. He articulates six “epochs” in the evolution of intelligence.

Epoch One: This is the “Big Bang”, the origin of the universe an estimated 13.8 billion years ago. It created the elements and the laws of physics, the precursors to the origin of life that would follow much later.

Epoch Two: This is the emergence of life, the basic elements of DNA about 3.5 billion years ago, the stage-setter for the “Cambrian Explosion” a “mere” 500 million years ago, which I alluded to above.

Epoch Three: This was the emergence of the first central nervous systems, the first “brains” if you will, leading to the “Cambrian Explosion” of multicellular life.

Epoch Four: This is the stage we are in now. This is the evolution of Homo sapiens from our earlier ancestors about 300,000 years ago, to become beings with well-developed brains and, critically, opposable thumbs. In this stage we began to develop the first basic tools, such as axes; it is the beginning of what we now call “technology”. Another way of considering this is as the emergence of “Homo techne”, a term invented by LinkedIn co-founder and AI entrepreneur Reid Hoffman.

Epoch Five: This is the point — which he estimates to be a mere two decades away — when our anatomy and our biological cognition, our thinking, will merge with digital technology. In his vision, this is when our bodies will become more like machines with replaceable parts and machines will become smarter than we are. AI will pass the so-called “Turing test”, the point at which any difference between human and machine intelligence becomes indiscernible. Our lives will be extended, first by decades; later we will live much longer, perhaps as long as we choose to. Within a decade, we will begin passing into this epoch as our neocortices connect to the cloud. Disease, poverty, and scarcity will begin to vanish.

Epoch Six: This is his “Singularity”, what he calls here the “Computronium”. It is similar to (if not precisely the same as) what futurist Kevin Kelly, alluded to above, has called the “Noosphere”. It is also akin — if contrasting in fundamental ways — to the vision of my friend and collaborator, the author and AI entrepreneur Byron Reese, who terms this the “Agora”, and about which we’ve written and spoken together on many occasions. In Kurzweil’s framing, it is in this sixth stage that the universe itself will “wake up”. The entire universe will essentially be a single intelligence.

Chapter 2 — Reinventing Intelligence

For non-technical readers, this may be the most challenging part of the book. But the reward is a deeper understanding of the competing approaches to AI over the last half century. The chapter’s larger tale is ultimately about our transition from animals with biological brains to transcendent beings whose thoughts and identities are no longer shackled to what genetics provides. In the 2020s, we are entering the final phase of that transformation: reinventing the intelligence that nature gave us on a more powerful, digital substrate. And then we will merge with it.

The big AI achievement, Kurzweil explains in this chapter, is the creation of “neural networks” that can reason by analogy — as our own brains do. His intriguing comparison is to Charles Darwin, trained as a geologist, who arrived at the theory of evolution he set out in On the Origin of Species by initially analogizing biological life to geology.
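To make “neural network” a bit more concrete for non-technical readers, here is a minimal two-layer network in Python. This is my own toy sketch, not code from the book, and it shows only the basic mechanics: weighted sums passed through simple nonlinearities, repeated layer after layer.

    import numpy as np

    # Toy two-layer neural network forward pass: weighted sums plus a simple
    # nonlinearity. Real systems stack billions of these operations.
    rng = np.random.default_rng(0)

    def relu(x):
        return np.maximum(0.0, x)  # the nonlinearity between layers

    def forward(x, w1, b1, w2, b2):
        hidden = relu(x @ w1 + b1)  # first layer
        return hidden @ w2 + b2     # output layer

    x = rng.normal(size=(1, 4))                    # one input with 4 features
    w1, b1 = rng.normal(size=(4, 8)), np.zeros(8)  # randomly initialized weights
    w2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

    print(forward(x, w1, b1, w2, b2))              # a single (untrained) output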

Another key insight of this chapter — one directly relevant to our work at data.world — is that the roadblock to faster AI development was a lack of computational power, which meant that for decades many good ideas remained stuck in the lab. Thus it is all “a bit like the flying machine and inventions of Leonardo da Vinci — they were prescient ideas, but not workable until lighter and stronger materials could be developed,” Kurzweil writes.

Chapter 3 — Who Am I?

This fascinating chapter explores the nature of consciousness itself; in particular, Kurzweil advances the idea that consciousness is not an exclusively human phenomenon. As a vegetarian who believes that we should care for animals, and as an investor in technologies to develop slaughter-free cellular meat, I certainly believe his introduction of the Cambridge Declaration on Consciousness into the AI discourse fills a neglected gap in the discussion.

In addition to noting society’s effective denial of the evidence of consciousness in mammals and other animals, Kurzweil raises the challenging question of how we will deal with synthetic intelligence: “…whether a brain is made of carbon or silicon, the complexity that would enable it to give the outward signs of consciousness also endows it with a subjective inner life.”

In this, Kurzweil foreshadows a debate we have yet to begin on the rights of the intelligences we create, and the responsibility that immediately falls to us humans to develop an ethical framework for other forms of non-human consciousness.

Another key exploration in this chapter, bordering implicitly on the spiritual, is just how remarkable it is that the elements forged in the Big Bang, and all that has ensued since, came together to enable life on our modest planet. If gravity were even slightly different, if the density wrought by the Big Bang had been one quadrillionth different, life could not have emerged. If the difference between the types of “quarks” that create matter were just slightly different, we wouldn’t be here. He quotes the astronomer Hugh Ross, who has pointed out that the odds of our planet’s creation and all life on it are akin to “the possibility of a Boeing 747 aircraft being completely assembled as a result of a tornado striking a junkyard.” Astounding.

As I mentioned above, I recorded a video on this amazing chapter after a long walk and intense discussion with my son, Levi, bringing together universal consciousness, robotics, John Mackey (the founder and former CEO of Whole Foods Market), the movie A.I. by Steven Spielberg, and death/rebirth.

Chapter 4 — Life Is Getting Exponentially Better

In this chapter, Kurzweil references and joins the insights of two of my favorite authors, Steven Pinker and Peter Diamandis, both of whom have done critically important work on the “perception gaps” between how most people view the state of the world and how it really is. It’s important to note that Diamandis and Kurzweil co-founded Singularity (originally known as Singularity University) in 2008, a learning organization focused on technological solutions to the world’s most challenging problems. More than a decade ago, I wrote that Diamandis’s 2012 book Abundance: The Future Is Better Than You Think was the most important book I had read that year. Six years later, Harvard cognitive psychologist Pinker published Enlightenment Now: The Case for Reason, Science, Humanism, and Progress, which couldn’t be a more important read in a year of such pronounced anxiety. Both books are deep dives into the factors, ranging from the “fight or flight” reflexes rooted in our evolution to the breathlessness of the 24/7 news cycle, that create a mindset of pessimism when so much progress abounds.

Kurzweil takes this thesis to a new level with a concept of his own creation, the “Law of Accelerating Returns”, sometimes referred to as “Kurzweil’s Law”. In this chapter, he amplifies his Law to explain the compounding effect of feedback loops of technological innovation that have cut the real cost of food, education, travel, and, especially, the tools of technology. We are living longer, and the purity of our air and water has improved markedly in recent decades. He notes that the domestic technologies of the 20th century were “transformative shifts that brought millions of talented women into the workforce, where they made essential contributions in countless fields”.

The metrics of progress he cites are endless, including declines in working hours, violence, homicide rates, poverty, the exploitation of child labor, and much, much more.

I toyed with this logic a bit, with some help from ChatGPT. I’ve been programming since 1979, so this was a nice walk down memory lane. Today, a base-model Apple MacBook Air with an M1 chip will set you back about $999. Compare that with the first Apple I computer, introduced in 1976 at $666, which adjusted for inflation is about $3,428 in 2024 dollars. Measured by memory alone, today’s hardware offers more than 16 million times what its ancestor did (4 KB vs. 64 GB).
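For the curious, here is the simple arithmetic behind that comparison, using the figures cited above rather than independently verified specs:

    # The arithmetic behind the Apple I vs. modern hardware comparison,
    # using the figures cited in the text (not independently verified).
    apple_i_price_1976 = 666      # USD at launch, 1976
    apple_i_price_2024 = 3_428    # inflation-adjusted figure cited above
    macbook_air_price = 999       # base MacBook Air, USD

    apple_i_memory = 4 * 1024     # 4 KB, in bytes
    modern_memory = 64 * 1024**3  # 64 GB, in bytes

    print(f"Memory ratio: {modern_memory / apple_i_memory:,.0f}x")   # ~16.8 million
    print(f"Real price: {apple_i_price_2024 / macbook_air_price:.1f}x cheaper today")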

Sure, we’ve got many problems in the world today, and Kurzweil acknowledges as much. But despite our generally dim mood, the comparisons of progress he offers are happening all across the economy and society. We see the same accelerating returns in agriculture, with technologies such as vertical farming, an example of which was on display at this year’s TED, where we got a remarkable virtual tour of the world’s largest vertical strawberry farm from Hiroki Koga, CEO of Oishii. His company uses AI and robots to produce the most delicious berries I’ve ever eaten, grown with zero pesticides and a fraction of the water needed in conventional agriculture. Meat grown from cell cultures will displace environmentally devastating factory farming. As I noted above, this is an issue I care deeply about, and it’s worth pondering Kurzweil’s estimate that in 2020 humans slaughtered more than 74 billion land animals for meat, the production of which caused about 11% of all greenhouse gas emissions.

3D printing, enabled by AI, is already revolutionizing manufacturing, and soon even our clothes and other goods will be made with this technology, he writes. It won’t be long before replacement organs for humans are 3D printed, he predicts. Again, it’s an issue close to home for an Austin native like me. My friend, the entrepreneur Jason Ballard, CEO of the pioneering company ICON, has just completed the world’s first 3D-printed subdivision a few miles outside of Austin, and Jason envisions a day soon when he will be able to produce a family home for $100,000. I couldn’t have been happier for Jason and his team when they won the NASA contract for… the moon.

The march of democratic governance worldwide is continuing as well, Kurzweil notes. He is not indifferent to the risks of disinformation, surveillance, and other potential abuses of AI, but the response he advocates is not less technology but more, along with a commitment to careful AI governance.

“As technologies for sharing information have evolved from the telegraph to social media, the idea of democracy and individual rights has gone from barely acknowledged to a worldwide aspiration that’s already a reality for nearly half the people on earth,” Kurzweil writes. “Imagine how the exponential increase of the next two decades will allow us to realize these ideals even more fully.”

Chapter 5 — The Future of Jobs: Good or Bad?

For many people, the most immediate danger of AI is the threat to jobs. Kurzweil confronts this challenge head-on, starting with the looming issue of self-driving cars and trucks and the drivers they will displace.

Some 4.6 million Americans, 2.7% of the workforce, are employed as some kind of driver. “While there is room for disagreement over how quickly autonomous vehicles will put these people out of work, it is virtually certain that many of them will lose their jobs before they would have otherwise retired,” he writes. And millions more in related industries will also be affected. That displacement will soon move to other sectors, and most manufacturing will be controlled by AI sometime in the 2030s.

Kurzweil shares the familiar analogy of the transformation of agriculture, which employed a majority of Americans little more than a century ago and employs about 1.5% today. But he also shares lesser-discussed examples, including the fact that one in four Americans worked in manufacturing in 1970, while today it’s one in 13. As the economy has shifted toward more technology-intensive jobs, investment in education has skyrocketed. We’ve gone from 63,000 university students in 1870 (undergraduate and graduate) to an estimated 20 million as of 2022, and we’ve added about 4.7 million university students in the United States just since 2000. We spend more than 18 times as much per child in inflation-adjusted dollars on K-12 education as we did a century ago. (I wrote a series on education being accelerated by AI.)

Kurzweil argues that we need to get serious and very intentional about mitigating the disruption, and he foresees various forms of “universal basic income”, or “UBI”, as part of a coming reform wave that AI will usher in. We also need to rethink how we measure the economy, as much of the growth in both transactions and productivity is masked by the underground economy enabled by the internet and by exchanges made in cryptocurrencies that escape the gaze of the traditional banking system and tax collectors.

He returns to the point of the previous chapter, that much of the disruption will be mitigated — as it has been in the past — by the Law of Accelerating Returns, as the prices and costs for everything from food to energy to shelter become cheaper and cheaper.

This, he argues, is precisely what destroyed that symbolic movement famous for its resistance to the changes wrought by the Industrial Revolution, the so-called “Luddites”, who organized a guerilla army in 1811 under a mythical leader, Ned Ludd. The movement turned on the plight of the era’s weavers, who made modest livings in England producing lace and stockings in small family businesses. When the invention of the power loom and other textile machines threatened their livelihoods, they rose up against the factory owners, and bloodshed soon followed. Many weavers were forced, for a time, into lower-paying jobs. But soon the shift in technology enabled the common person to afford an entire wardrobe instead of a single shirt, and before long whole new industries sprang up on top of the automation of the Industrial Revolution.

“The resulting prosperity was the primary factor that destroyed the original Luddite movement,” Kurzweil writes. Similarly, I certainly believe that the coming abundance enabled by AI is what will overcome the fears and resistance we now see to this powerful new technology. I recorded this video on how AI will foster employee joy, and placed it into the historical context of capitalism’s evolution.

Chapter 6 — The Next 30 Years in Health and Well-Being

Despite astounding strides in health care in recent decades, medicine remains an imprecise science, Kurzweil points out. The mechanic who fixes your car or the plumber who repairs a busted pipe both know precisely how all the parts work and how they fit together. But doctors still work with messy approximations, approaches that may be right for most people but not necessarily what is right for you.

This is about to change, as medicine itself becomes an information technology, benefiting from the exponential progress of information technologies that we’ve been examining. From drug discovery, to disease monitoring and surveillance, to robotic surgery, to the combination of biotechnology with AI and digital simulation, the field of medicine is on the cusp of great change — in fact, it’s already beyond the cusp. Last year, the first drug designed end-to-end by AI entered phase 2 clinical trials to treat a rare lung disease. During the pandemic, AI helped create a vaccine in 63 days; before the pandemic, five to ten years was typical.

The progress will not be without challenges. He notes the medical community will be slow to recognize and use the new tools, often for legitimate reasons: physicians don’t want to change protocols in ways that could endanger patients, and no regulator wants to be the one who approved a treatment that didn’t work. So developing a substantial track record in test models will be a priority, and diagnosis will be the first area to be transformed, as we’re already seeing in the increasingly routine screening of X-ray results with AI. Virtually all diagnostic tasks will be carried out with AI before the end of this decade, he predicts.

This leads Kurzweil to an exploration of the concept of “longevity escape velocity”, a term coined by the British gerontologist Aubrey de Grey for the theoretical point when the rate of technological progress in medicine outpaces the rate of aging. Once nanobots can effectively repair and selectively destroy individual cells, medicine will be the exact science it has long aspired to be. Ultimately even our blood supply may be replaced by nanobots. In addition to artificial blood cells, we will eventually be able to engineer artificial lungs more efficient than the respiratory system biology has given us. Eventually, even hearts made from nanomaterials will make people immune to heart attacks and make cardiac arrest and trauma rare.

“If you can live long enough for anti-aging research to start adding at least one year to your remaining life expectancy annually, that will be enough time for nanomedicine to heal any remaining facets of aging,” Kurzweil writes.
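To make the arithmetic of longevity escape velocity concrete, here is a toy model of my own (not Kurzweil’s): each year you age one year, while medical progress hands back some number of years of remaining life expectancy.

    # Toy model of "longevity escape velocity": age one year each year, while
    # medical progress adds `annual_gain` years back to remaining life expectancy.
    # At a gain of one year per year or more, expectancy stops shrinking.
    def remaining_expectancy(start_years, annual_gain, horizon):
        remaining = start_years
        trajectory = []
        for _ in range(horizon):
            remaining = remaining - 1 + annual_gain  # age a year, add the year's gains
            trajectory.append(round(remaining, 1))
        return trajectory

    print(remaining_expectancy(30, 0.5, 5))  # gains below 1 year/year: still declining
    print(remaining_expectancy(30, 1.2, 5))  # gains above 1 year/year: expectancy grows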

All of this, he argues, supports de Grey’s sensational declaration that the first person who will live 1,000 years has already been born.

Chapter 7 — Peril

Kurzweil outlines two sets of emerging “perils” from AI and related technologies in this final chapter: those that are real, and those driven by what he intriguingly calls “fundamentalist humanism”, an ideology comparable to the Luddite movement of the early 19th century, which we discussed earlier.

He doesn’t sugarcoat the challenges we must confront. While not mentioned in his book, autonomous weapons systems powered by AI are believed to be already in use by both Russia and Ukraine in their ongoing war. More broadly, he notes, Russia is working to build underwater drones to carry nuclear weapons, as well as nuclear-powered cruise missiles designed to loiter for extended periods just outside a target country and strike from unpredictable angles. Russia, China, and the United States are all racing to develop hypersonic vehicles capable of evasive maneuvers that defeat defenses as they deliver their warheads. Bioengineered pathogens are another emerging threat; we’re not talking just about accidental lab leaks, though that is a danger too. We also need to reckon with the threat of nano-based weapons, a big set of issues that includes the fact that the cost of nefarious technology will plummet along with the cost of every other technology. Tough, omnivorous “bacteria” created with AI could outcompete real bacteria, replicating quickly and spreading like blowing pollen. And AI could take instructions too literally, the classic problem depicted in stories of genies and in the old tale The Sorcerer’s Apprentice. In Kurzweil’s warning, this could be an AI instructed to destroy dangerous mutant genes that then also destroys healthy genes expressing a form of the same mutation.

But the fundamental problem with readying ourselves for our AI-infused future is that no single strategy definitively overcomes all the diverse challenges. We need a cocktail. This is why, he explains, AI must be part of the precautionary toolkit, with AI trained to imitate the way humans draw inferences. We might use competing AIs to point out flaws that programmers might miss. AI can be trained not to comply with dangerous requests. Superintelligent AI that is smarter than humans can be our ally, not our enemy. He envisions “iterated amplification”: using weaker AIs to assist humans in creating well-aligned, stronger AIs, then repeating the process to align AI stronger than anything unaided humans could ever develop on their own.
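For readers who want a feel for the shape of that iterated amplification loop, here is a deliberately toy sketch of my own in Python. Capability is modeled as a single number and every function is a stand-in, not a real alignment method or API; the point is only the structure, in which each stronger model is trained against oversight that the previous, weaker model helped provide.

    from dataclasses import dataclass

    # Toy sketch of the iterated-amplification loop. All of this is a stand-in:
    # "capability" is just a number, and training is simulated, not real.
    @dataclass
    class Assistant:
        capability: float

    HUMAN_CAPABILITY = 1.0

    def amplify(human_capability, assistant):
        # A human working with the current assistant acts as a stronger overseer
        # than either could be alone.
        return human_capability + assistant.capability

    def train_to_imitate(overseer_capability):
        # Train a stronger model to imitate the amplified overseer's judgments.
        return Assistant(capability=overseer_capability)

    def iterated_amplification(rounds):
        assistant = Assistant(capability=0.5)  # start with a weak, aligned assistant
        for i in range(rounds):
            overseer = amplify(HUMAN_CAPABILITY, assistant)
            assistant = train_to_imitate(overseer)  # next round's assistant is stronger
            print(f"round {i + 1}: assistant capability ~{assistant.capability:.1f}")
        return assistant

    iterated_amplification(rounds=4)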

“But”, he cautions, “we will also need ethical bulwarks against misuse — strong international norms favoring safe and responsible deployment of AI.”

The inverse problem, however, is the “misguided and increasingly strident Luddite voices that advocate the broad relinquishment of technological progress”. He compares the situation to famines in Africa aggravated by opposition to food aid containing genetically modified organisms, or GMOs. This “fundamentalist humanism”, as he terms it, will ultimately fail in the face of the overwhelming need to address human suffering with AI, but opposition is already emerging to such life-saving technologies as gene editing to treat disease and the protein-folding tools being explored to fight cancer. Heck, even UPSIDE Foods, a company I invested in, is being banned in some states for its cell-grown meat.

He concludes that the path forward is difficult but clear: we must embrace AI while confronting the challenges it poses with intelligent foresight, planning, and policy.

“When I was growing up, most people around me assumed that nuclear war was almost inevitable,” he writes. “The fact that our species found the wisdom to refrain from using these terrible weapons shines as an example that we have it in our power to likewise use biotechnology, nanotechnology, and superintelligent AI responsibly. We are not doomed to failure in controlling these perils.”

The need for engineering intentionality was stressed in this recent interview by Kara Swisher of OpenAI CTO Mira Murati as part of the Hopkins Bloomberg Center Discovery Series. I encourage you to watch it, and also to tune into Steven Pinker on this Honestly podcast interview for his view on the subject. And for a beautiful take on all of this, I couldn’t recommend my good friend Byron Reese’s book, We Are Agora, more. The way he describes humanity as a superorganism gives me great hope for our collective future. Levi and I had dinner with Byron and his son, Michael, on this subject just two nights ago and had a far-reaching conversation about it all, including on Nearer.

It is certainly the most exciting time in history to be alive! Our challenges are grand, but our opportunities are even grander. I hope you enjoy Kurzweil’s book as much as Levi and I did.

Rachel and me at TED2024 with Ray Kurzweil, a real hero of mine


Brett A. Hurt

CEO and Co-founder, data.world; Co-owner, Hurt Family Investments; Founder, Bazaarvoice and Coremetrics; Henry Crown Fellow; TEDster; Dad + Husband