Ringing in the New Year with a conversation on AI as vivid as the innovations in store for 2024

Brett A. Hurt
Jan 13, 2024 · 9 min read
Tremendous fun to record this Austin Next podcast episode alongside my good friend, Byron Reese

You’re back from the holiday, you’ve cleared out your inbox, and your sleeves are rolled up as you get ready to dive into the New Year. As arbitrary as the date to mark another lap around the Sun may be, it’s still an important moment to reset ideas, goals, and aspirations for family and work. So let me invite you to join me and my friend and collaborator Byron Reese on an hour’s circumnavigation of the universe as it unfolds before us toward 2024 and beyond. In the Austin Next Podcast episode we recorded with host Jason Scharf in the final week of 2023, we not only offer a glimpse of the coming 12 months, but share a vision-in-progress for the full future of technology, artificial intelligence, morality, evolution, and humanity.

And the marvel of bees. Yes, bees!

Byron is the author of a triumphant fifth book, “We Are Agora — How Humanity Functions as a Single Superorganism That Shapes Our World and Our Future”. Leaning into the Agora, Byron’s four earlier books, our writing together on The 4 Billion-Year History of AI’s Large Language Models, my own book “The Entrepreneur’s Essentials”, and a whole lot more, we try to set the stage for what we both believe will be an amazing year ahead as humanity surges toward an intentional future of abundance.

Let me offer up a few highlights, starting with the bees:

On bees: Growing up, Byron was a beekeeper. Now a CEO, a founder, a best-selling author, a regular keynote speaker, and a technology and AI polymath, he traces his interest in bees to that early experience with these amazing, short-lived creatures. “A bunch of bees together form a hive, a beehive. That beehive is an actual animal — not a metaphorical animal, but a literal animal. It’s called a superorganism,” he explained. Bees don’t have memories. But beehives do. Bees can’t regulate their own temperature. But beehives can. And Byron theorizes that humans are also a superorganism of vastly more complexity, an “Agora,” in Byron’s phrasing, after the boisterous marketplace of goods and ideas in ancient Athens. The new “Agora” is, however, at planetary scale. You just have to take its dynamics down to the level of bee simplicity to understand and appreciate the planetary, human-powered superorganism we’re creating with the advent of AI and its consolidation of all that preceded it — from language, to writing, to the printing press, to the internet, and then, a little more than a year ago, to ChatGPT.

Why AI is really 4 billion years old: Yup, it’s actually older than all the information milestones I just described. That is, if you consider that life itself began roughly 4 billion years ago, not long after the formation of the Earth, with the emergence of single-cell organisms. All life, then and now, has been predicated on a form of information: DNA. “All life today — from mildew to millipedes — stores its genetic information exactly the same way, using the exact same alphabet, and encoding it on exactly the same molecule, DNA,” Byron and I wrote together in an essay on November 30th, 2023, marking the one-year anniversary of the launch of ChatGPT. What DNA and large language models (LLMs), which animate the ubiquitous AI tools and applications now surrounding us, have in common is one thing: they are simply means to store and leverage information.

An evolutionary inevitability: This is a topic that I dug into far more deeply two years ago in a series, Dawn in the Anthropocene, which I wrote for our data.world blog. There, I argued that the explosion of digital data over the last half century — the nurturing feedstock now of ChatGPT and its cousins — is analogous to the so-called “Cambrian explosion” of diverse life forms a half billion years ago. If you accept that premise, then, as I posed in our podcast discussion with Jason, ChatGPT was simply an evolutionary inevitability. And we are still in the earliest innings of AI: we will constantly evolve new ways of storing and using knowledge, and will forever. This is the emerging superorganism of humanity.

And yes, it is scary: I shared that I felt the ground move as the “techtonic” plates shifted on November 30, 2022 with the launch of ChatGPT. And as I shared in an earlier podcast with Jason and Strangeworks CEO whurley (William Hurley) last May — where we discussed the calls emanating at the time for a “pause” in AI development — any time a new technology comes along, it is unsettling for innovators. Entrepreneurs have to scramble to gain their footing very quickly. And this rapid change creates understandable if unjustified fear, as Byron explained. It doesn’t help, I added, that imaginings drawn from science fiction trigger brains already hardwired for fear. This we have to overcome, and I suggest a new practice for the New Year: read The White Pill, Future Crunch, and The Progress Network, or some combination thereof, weekly. I also recommend that anyone who hasn’t read Harvard psychologist Steven Pinker’s brilliant book, “Enlightenment Now”, on the myriad ways life is improving globally, do so in 2024.

The sum of all knowledge: As Byron pointed out, our ability to store and share knowledge changed dramatically just 50,000 years ago when we acquired language. This, in turn, allowed us to use and share the earlier cognitive innovation of “episodic memory,” as he put it: the ability to remember the past and anticipate the future, which no other animal has. “Two beavers know pretty much the same thing. But if you have a hundred people, they all know different things,” Byron explained. Fast forward to coordinated specialization, and you have the beginning of the human superorganism. Think of the invention and design of the smartphone or the speed with which we created mRNA vaccines during the pandemic. Consider the pace with which we are finding uses for AI in everything from cell therapy, as my friend and fellow CEO Micha Y. Breakstone 🇺🇦🇮🇱 is doing at the startup Somite.ai (Somite Therapeutics, which is our latest family office investment), to our own rollout at data.world of “Archie”, a miniature ChatGPT — trained on all of our competitive intel, marketing collateral, case studies, whitepapers, blogs, and other work — that accelerates the collaboration between our team and customers manyfold. “What we’re finally doing is building a consolidated knowledge base,” Byron explained on the podcast. “Even libraries are not consolidated knowledge — you still have to find the book.”
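For readers curious about the general pattern behind tools like Archie, here is a minimal, purely illustrative sketch of a retrieval-style assistant over internal documents. This is not data.world’s actual implementation; the toy corpus, the scoring function, and the prompt format are all assumptions for illustration, and the final call to an LLM API is deliberately left out.

```python
# Illustrative sketch only: retrieve the most relevant internal documents for a
# question, then assemble them into a prompt for an LLM. All names and content
# here are hypothetical; this is not how any specific product is built.

from collections import Counter
import math

# A toy "knowledge base" standing in for whitepapers, case studies, blogs, etc.
DOCUMENTS = {
    "case_study_retail": "How a retail customer used a data catalog to cut reporting time in half.",
    "whitepaper_governance": "A whitepaper on data governance, lineage, and knowledge graphs.",
    "blog_ai_lab": "A blog post describing AI Lab experiments with large language models.",
}

def score(query: str, text: str) -> float:
    """Very simple term-overlap score between a query and a document."""
    q_terms = Counter(query.lower().split())
    d_terms = Counter(text.lower().split())
    overlap = sum(min(q_terms[t], d_terms[t]) for t in q_terms)
    return overlap / math.sqrt(len(d_terms) + 1)

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k documents most relevant to the query."""
    ranked = sorted(DOCUMENTS.items(), key=lambda kv: score(query, kv[1]), reverse=True)
    return [text for _, text in ranked[:k]]

def build_prompt(query: str) -> str:
    """Assemble retrieved context plus the user question into a single prompt.
    In a real system this prompt would be sent to an LLM API; that call is
    left out here so as not to imply any particular vendor or SDK."""
    context = "\n".join(f"- {doc}" for doc in retrieve(query))
    return f"Answer using only this internal context:\n{context}\n\nQuestion: {query}"

if __name__ == "__main__":
    print(build_prompt("How do customers use the data catalog for governance?"))
```

In a real deployment the retrieval step would run over embeddings of the actual documents and the assembled prompt would go to a hosted model, but the shape of the flow (retrieve, assemble context, ask) stays the same.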

Does this rob us of our humanity and free will?: Jason prompted us with the provocative question: where does free will fit into this? This spurred a highly philosophical shift in the discussion, turning on “simulation theory”. This is the ancient idea that reality as we perceive it is simply the imagining of a higher power, now updated for the age of AI as a belief in a Matrix-like future in which we will live in a machine-controlled reality. Byron recounted a passage from his latest book about a man who took his own life in the 1940s by jumping off San Francisco’s Golden Gate Bridge. He left behind a note explaining that he would walk to the bridge and, if a single person smiled along the route, he would change his plans. Apparently no one smiled, and the man leapt to his death. Byron’s point is that connectivity does not rob us of our humanity, but strengthens it. “Billions of people do billions of good deeds every day to each other and that’s the energy that sustains the superorganism,” Byron said. “It can make a smartphone, or it can deflect an asteroid heading toward this planet.”

What happens to our knowledge institutions?: Inevitably, much is changing and more will change. Last year marked an explosion in AI-driven productivity, and 2024 will be the year of AI-driven applications. We’re certainly seeing that at data.world, as we discussed in our customer town hall today (with a 25% productivity lift so far to show for it). It’s happening in many other places as well. Customer service operations are already experiencing significant efficiency boosts and greater customer satisfaction with online chat. Goldman Sachs has predicted that AI will boost global economic growth beyond earlier expectations by as much as an extra half a percent annually by 2034. Just last week I wrote that we will get better at improving AI for enterprises as we get better at aligning LLM training with better — and more coherent — data tools such as those we have developed at data.world, starting with work in our AI Lab that we are quickly migrating throughout our platform. Where I believe AI may dramatically alter productivity and serve as a great equalizer is in education. In our conversation, I noted the work of Sal Khan, whose online Khan Academy is building a free and universal network of AI tutors that will be available to any student and teacher, on any subject, anywhere in the world. I’ve also written that AI is the best hope for the global crisis in education, and that youth — including my son, Levi — are showing us the way. Far from fearing cheating or plagiarism, I’m excited about the future of AI in schools, and after school as well.

And what about “AGI”, or “artificial general intelligence”?: Jason prompted us on the question looming in many headlines and in much speculation: Just when will machines be so much smarter than us that they can effectively take over? “I don’t believe in general intelligence,” said Byron, who admitted that he is a major skeptic, not even believing that AGI is possible. It’s a subject he explored in his fourth book, “Stories, Dice and Rocks that Think”, in which he argued that speculation about some kind of mastermind robs us of discussion of the many distinct uses of AI that we have yet to discover. It’s a subject Byron and I also turned to in a 2022 (pre-ChatGPT!) discussion that we had at our data.world summit. I wrote about his book as part of a series exploring how the future of AI must be one with humans in charge. Byron and I diverged slightly in this podcast discussion. Whether AGI is possible or not, I do believe it is imperative that the concept of “AGI” serve as a kind of North Star, one that will propel us toward all manner of innovation that will serve humanity. And I see striving for it as part of our natural evolution.

What next?: This is the question with which Jason likes to conclude all of his podcasts. Byron envisions AI, the emerging superorganism that is the subject of his new book, as the bridge to humanity’s destiny of a truly multi-planetary civilization. “I believe we will spread to a billion planets, and each planet will be populated with a billion people,” Byron said. “And each of those billion people will be empowered to live their best possible lives.” That’s a very exciting ambition, and I certainly believe in the power of AI to allow all of us to live our best possible lives even just here on our beautiful Earth. It’s a topic I recently touched upon in a review of yet another seminal book, “The Coming Wave”, by the co-founder and CEO of Inflection AI, Mustafa Suleyman. In my review, I called his book a “starting toolkit for humanity”, in part because of Suleyman’s own focus on the intentionality we must bring to AI’s envisioning, design, and deployment. I also highlighted his fantastic book because it aligns with another passion of mine, Conscious Capitalism, Inc. This is the idea that every business must be just as focused on the stakeholders of its community, the environment, and its employees as it is on its profits. It’s the future of capitalism, and intentional AI will help us get there. And I think it’s cool that Inflection AI is a proud public benefit corporation, like data.world.

These are just a few glimpses of our conversation, which left few intellectual stones unturned in an insight-rich discussion of a technology that is fast changing our world. For all I’ve described — plus our thoughts on our coming “age of abundance” as the “most evolutionary phase of humans ever”, on our continuing progress as a moral species and civilization, and much, much more — please take a listen. I would love to hear whether our discussion inspires you or provokes something else inside of you; you can listen to it here or on any podcast app of your choice.

Happy New Year — I wish you a lot of love and abundance throughout 2024! It’s going to be a very exciting year together.

--

Brett A. Hurt

CEO and Co-founder, data.world; Co-owner, Hurt Family Investments; Founder, Bazaarvoice and Coremetrics; Henry Crown Fellow; TEDster; Dad + Husband