How AI is birthing a Renaissance 2.0 in the coming ‘Age of a Billion Dreams’

Perhaps the most critical challenge posed by the global transformation being wrought by artificial intelligence is that we lack not just understanding of the technology and its promise, but the lexicon and vocabulary to acquire that critical understanding

Brett A. Hurt
19 min read · Apr 9, 2024
My son Levi’s ChatGPT-driven vision of Renaissance 2.0

In the blink of an eye, artificial intelligence, or AI, has dramatically crossed the frontiers of creativity. It writes sonnets, programs software, resurrects Elvis to sing on “America’s Got Talent”, produces movie trailers today, surely a full-length film tomorrow — and so much more.

So will poets, programmers, rock musicians, actors and movie directors disappear? Hardly, is the short answer. The longer response is what this article is about, so let me share more on the sources of my optimism. Now to be clear, I’m not dismissive of AI’s challenges, from identity issues and embedded biases to deepfake political campaign shenanigans, which are just beginning. The broad and ongoing discussion of job threats and productivity is an important one. Among the bright minds worth heeding are journalist Kara Swisher and business author Scott Galloway, whose recent “The Future of Work” series on their Pivot podcast is an important listen. In the narrower slice of the debate, on the threat to such creative professions as filmmaking, the recent “Is AI Already Taking Jobs?” episode of Kevin Roose and Casey Newton’s Hard Fork podcast is another must-listen. In particular, their interview with filmmaker Paul Trillo (direct link to minute 22) about his short film The Golden Record, made with OpenAI’s mind-blowing new video tool Sora, is a refreshing counter to the scary headlines that followed director Tyler Perry’s attention-getting decision to shelve his $800 million studio expansion plans in the face of AI.

“The more you use these tools, the less you are afraid of them,” Trillo told Roose and Newton of his experience with the technology, which he believes will invigorate filmmaking. I couldn’t agree more, and this is my sentiment across the range of creative fields, from music, where the cost for composers to produce release-ready tracks is collapsing with AI, to marketing, where the technology is being used to create detailed user personas for more effective media strategies.

But my goal here today is not to stake out a position in that debate. Rather, I seek a higher altitude in the discourse, perhaps a nudge beyond the simple polemics. I certainly realize that AI enthusiasm comes much easier to a technology entrepreneur than to many in the creative quarters where fear is growing and palpable. I understand that the suite of new technologies, led by the large language models, or LLMs, with which the world is becoming familiar, can be intimidating. But the anxiety is far wide of the mark, for as my AI-pioneering son Levi, at the age of 14, framed it on Byron Reese’s podcast: “The better name for the age of AI is the age of a billion dreams.”

The urgent need for a new vocabulary of AI

Among my own dreams are a world of abundance, the resolution of complex global problems topped by climate change, and a cure for cancer and other diseases, which several friends are working on (check out Somite.ai and Chemify.io for two examples; in full disclosure, my wife, Debra, and I are proud investors in each). In the next four decades, we need to produce as much food as we have in the last 8,000 years, and AI is among the tools that will help us succeed. Read Steven Pinker’s brilliant book Enlightenment Now if you doubt our ability to do so. But most profound of all will be a new creative Renaissance that could soon eclipse the historic scope of that earlier period, which began in the 14th century and continued into the 17th. During Renaissance 1.0, humanity witnessed its greatest-ever creative surge in art, literature, music, science, and philosophy. So I’m incredibly excited for Renaissance 2.0, which will fuse technology and human insight at an incomparable scale.

Just as Renaissance 1.0 ignited the transformation of Europe’s feudal economy into the beginning of capitalism and wealth creation on a massive scale, Renaissance 2.0 and its planetary scope will lead to new conceptualizations of business and its role in society.

We are, however, going to need some new nomenclature, a lexicon that enables a new discourse. Imagine, for example, debating Charles Darwin’s concept of evolution without that word, and without the other concept terms that followed when our understanding of natural history shifted, such as “natural selection”, “adaptation”, “speciation”, or even “survival of the fittest”. That latter phrase only emerged in time for the fifth edition of Darwin’s famous 1859 book, On the Origin of Species. It was only in the sixth edition, in 1872, that “evolution” appeared, swapping out Darwin’s earlier term, “descent with modification”.

With “generative AI”, as the new popularity of LLMs is demonstrating, the process is already well under way. When, for example, did you first hear the term “prompt engineer” or “prompting”? Not long ago, I suspect. “Training” is no longer limited to new pets, the preoccupation of athletes, or the early months of parenting. It is now what we do with data. At my company data.world, we’re doing our best to promote AI literacy. My co-founder and our CTO, Bryon Jacob, is among those leading the charge. I encourage those seeking better understanding to watch or listen to his short tutorials on such topics as “context windows and vector databases”, “AI tools and agents”, and other subjects in need of decoding. A real glimpse below the waterline is Bryon’s deep dive on Pablos Holman’s podcast, discussing knowledge graphs and their expanding role for enterprises using LLMs. But we’ve got a great deal of work remaining. And while the concern is general, I believe it is fear for human creativity specifically that stirs the deepest anxiety as we grasp the implications of AI.

Understanding backward to create going forward

It is easier, of course, to make the retrospective case that technology indeed spurs creativity than to do so prospectively as this new world takes shape. Historically, itinerant magic lantern showmen did not replace the wandering musical minstrels of the late Middle Ages. Rather, they triggered our study of light and refraction, birthing photography in the early 19th century. The impressionist and post-impressionist movements in painting, which flourished in the decades after photography’s emergence, were not cut short by photography, but rather inspired by it. Cinematography soon followed, and it did not replace photography; rather, the newer medium became a blended art form, as the recent exhibit at the Harry Ransom Center archive at the University of Texas at Austin, Drawing the Motion Picture — Production Art and Storyboards, so dramatically demonstrates.

I’m sure you’ve heard some version of this argument challenging the all-too-common narratives of technology leading to mass unemployment; how we went from a nation of farmers exporting little to a mere handful of producers feeding the world within less than a century is a common example. But how do we make the case for creativity going forward? Who predicted Airbnb, Spotify, Uber, Facebook, or so many other household name companies and products at the dawn of the World Wide Web and mobile boom that soon followed? Or more to the point, who back in 1982 told a kid creating and running a BBS, or Bulletin Board System, that we’d have Lightroom for photographers, the online poetry writing group allpoetry, or makemusic, a site that supports music teachers and students with an endless variety of tools? That kid was me, and no one could have ever imagined what was to come.

Pondering this sent me back to that essential text, The Structure of Scientific Revolutions, published in 1962 by Thomas S. Kuhn, the philosopher of science who popularized the term and concept of a “paradigm shift”, a transformation of thought and perspective that maps a new future. Kuhn wrote:

“… a new theory, however special its range of application, is seldom or never just an increment to what is already known. Its assimilation requires the reconstruction of prior theory, and the reevaluation of prior fact, an intrinsically revolutionary process that is seldom completed by a single man and never overnight. No wonder historians have had difficulty in dating precisely this extended process that their vocabulary impels them to view as an isolated event.”

I appreciate Kuhn’s gift of conceptual architecture to understand how straws accumulate on the back of society’s proverbial camel, triggering seemingly sudden change that has in fact been long in the making: motorized transportation, human flight, antibiotics, genetically modified crops, and digital everything. But I’m not sure his term “paradigm shift” does the implications of AI justice. In my world, we call AI a “general purpose technology”, like the steam engine, electricity, or the internet. But that too fails to capture the scope of what is unfolding, as AI is already shifting paradigms in virtually every walk of life. It is, to twist the title of that 2022 movie, an “Everything, Everywhere, All at Once Shift”.

Levi’s vision for “painting” in the age of a billion dreams

No, AI will not take over the planet by Thursday

So bear with me amid the shortcomings of our “evolving” (thank you, Mr. Darwin) language and let’s address the spreading lament that artists, writers, musicians, and other creatives will soon be digital toast. I get the fear, and it happens every time a revolutionary new technology (i.e., productivity tool) is born. We read of AI taking video game illustrators’ jobs in China. Where I live in the “World Capital of Live Music,” the first pop song entirely composed by AI, “Break Free”, provokes deep anxiety. “AI is already more creative than YOU”, headlined London’s Daily Mail last September in response to one study that compared human creativity to that of AI chatbots. In fact, the study, conducted largely in the United Kingdom and overseen by three Nordic universities, found that in some tasks humans did outperform AI. For some reason that minor detail was lost in the ensuing media discussion.

Last week, I shared in the cheap laughter when comedian Jon Stewart riffed on AI in his opening monologue, saying that unlike past technological change, “AI is ready to take over by Thursday”. But his cutting humor reveals an essential problem. We are being asked to take a position on the spectrum of AI debate — oscillating between utopia and apocalypse — when we’re lacking not just understanding, but the lexicon to acquire that understanding. The “besties” of the latest All-In podcast episode took Stewart directly to task on this, and I largely agree with them on their points about the future of AI and labor (direct link to minute 49).

Of course there’s no shortage of questions. Those worried ask: “Will art, music, crafts, and creativity survive in the age of super digital intelligence?” “Will we soon be reading bestselling novels written by digital agents?” “Will Sora, OpenAI’s nascent video creation tool now capturing the world’s attention, soon sweep the Oscars?” We all have our own list of creative heroes, and the debate moves to their realms. In a world of ubiquitous AI, would Leonardo da Vinci paint the Mona Lisa, would Thomas Edison invent the incandescent light bulb, or would J.R.R. Tolkien write The Lord of the Rings?

For starters, I asked ChatGPT itself. Here’s the response from the “oracle”:

“If Leonardo Da Vinci, Thomas Edison, or J.R.R. Tolkien had access to AI, they might use it as a tool to explore their creativity further. Da Vinci’s insatiable curiosity would likely lead him to experiment with AI, perhaps using it to visualize inventions or experiment with new forms of art. Edison might integrate AI into his invention process, speeding up experiments and simulations. Tolkien could use AI to explore linguistic creations or world-building in even greater depth. However, their genius lies not just in their final creations but in their unique thought processes, something AI would enhance, not replace.”

That is fine, clever even, and certainly well written. But the response above is derivative, as we should expect, because that’s what our current LLMs do with their statistical abstraction of data — derive. Please hold that thought as I pivot to the three basic points I want to explore.

A combinatorial accelerant to human creativity

First, it’s worth reminding ourselves that while AI is scarcely “settled science”, neither is creativity. Plato and his contemporaries believed creativity was divine, coming from the Muses, the goddesses of inspiration (a beautiful read on this is The War of Art by Steven Pressfield). I love this as metaphor; we all have aha moments that seem to us like flashes of insight, heaven sent. But 2,500 or so years after the Greek philosophers, we consider creativity to be the interplay of cognitive processes, neural mechanisms, personality, environmental conditions, and possibly even our genes. Nonetheless, debate rages over nature vs. nurture, conscious vs. unconscious awareness, and the dynamics of our neural networks. What is safe and fair to say is that creativity is combinatorial, a means of synthesis, the result of those serendipitous moments like that of Archimedes sprinting from his famous bathtub after stumbling across the principles of buoyancy and displacement and shouting “Eureka!” as he ran naked through the streets of ancient Syracuse. It is often collaborational as well, the creativity of teams.

Second, the world has certainly now heard of LLMs, the convergence of computing power, training data, an open internet, and the transformer model (the “T” in GPT). Like the World Wide Web decades ago, they free these tools and their utility from the sole domain of the deep technologists who were once the only ones able to experience them. This interface, which one might compare to Martin Luther’s translation of the Bible from Latin into common languages, is similarly combinatorial, and another contributing stream to the coming river of creativity. ChatGPT and its cousins do not think in anything remotely analogous to human terms. Rather, they are massive engines of synthesis. And the emerging tools of AI are endlessly collaborational; I think of the initiative of Khan Academy founder Sal Khan, revealed at TED 2023, to give every student a personal tutor and every teacher an AI teaching assistant. Sal is creating accelerants to creativity, not destroyers.
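For readers who want to see the “engine of synthesis” idea made concrete, here is a deliberately tiny, hypothetical sketch in Python. It is not how GPT works internally (transformers operate over billions of parameters and far subtler statistics), but it captures the spirit: count which words tend to follow which in a training text, then derive new text by sampling from those counts. The corpus and function names below are invented purely for illustration.

```python
import random
from collections import defaultdict, Counter

# A toy "training set" of invented sentences, purely for illustration.
corpus = (
    "creativity is combinatorial and creativity is collaborative "
    "ai is a tool and ai is an accelerant of human creativity"
).split()

# Count which word follows which: a crude stand-in for what a transformer
# learns at vastly greater scale and subtlety.
follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def synthesize(seed: str, length: int = 8) -> str:
    """Derive a continuation by sampling from the observed statistics."""
    words = [seed]
    for _ in range(length):
        options = follows.get(words[-1])
        if not options:
            break
        choices, weights = zip(*options.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(synthesize("creativity"))
# The output is derived entirely from the statistics of its inputs:
# synthesis, not thought.
```

Run it a few times and you will get different, plausible-sounding recombinations of the input text, which is exactly the point: derivation from statistics rather than thought.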

Third, anthropomorphism, the tendency to ascribe human characteristics to nonhuman things, is the enemy of our elusive AI understanding. It weighs heavily on our discourse. I’m not talking here about harmless anthropomorphism, like scolding your Roomba when it skips the bathroom and heads back to its docking station for a nap, or our naming our Archimedes-inspired, LLM-derived chatbot “Archie”, as we’ve done here at data.world. This is human creativity at work. I’m also not talking about the very apt use of analogies between the natural and human-made worlds. Biomimicry is the wellspring of much science — the Wright brothers’ extensive study of birds to design the first airplane is just one example. What I’m speaking of here is the science fiction-inspired reflex, in popular media and in conversation, to compare AI to the human brain, just bigger, smarter, and soon to leave us behind, like the all-knowing “HAL” who becomes a force of evil in Stanley Kubrick’s wonderfully pioneering 1968 movie, 2001: A Space Odyssey. This is a problem.

I certainly have some AI anxieties too. But amid the current discourse, they are precisely the opposite of what in many circles is the prevailing narrative — as are my questions, including, “How fast and intelligently can we seize the greatest opportunity in human history to awaken, nurture, and cultivate the imaginative mind?” and “How do we counter the irrational scare tactics when rationality is unpersuasive, as with our experience of vaccines, or GMO foods, or nuclear power?” As I’ve written about in a three-part series, “How can our institutions such as schools be creatively reimagined for the age of AI?” — or for Levi’s “age of a billion dreams”?

‘Piddling around’ with a wobbling plate

To better align my own quandaries with the questions I posed at the outset, I’ll offer one more, “How would one of my favorite creative geniuses, the late physicist Richard Feynman, respond and act?”

I choose Feynman for this thought exercise not merely because he was a genius, an author, and a towering intellect who was there at the dawn of the nuclear age as one of Robert Oppenheimer’s chief scientists, and the winner of the 1965 Nobel Prize in Physics. I also turn to Feynman because he was a serious student of creativity, a topic about which he spoke and wrote widely as he made science accessible to the layperson. And in one of his stories, he gave us a glimpse at the answer to these looming questions of AI’s impact on creativity and creatives. (I also like Feynman because his approach to science and philosophy reminds me a great deal of my endlessly tinkering late father and my endlessly curious son, who has been programming since age four and is now warp-speeding with AI.)

The critical anecdote, published in many different places over the years, is included in the gem of a book, Creators on Creating, which compiled the thoughts of many great creative minds including Federico Fellini, Maya Angelou, Ingmar Bergman, Carl Jung, and some two dozen others. In his account, Feynman describes his first job after Los Alamos and the Manhattan Project, as a professor at Cornell University:

“…I was in the cafeteria and some guy, fooling around, throws a plate in the air. As the plate went up in the air I saw it wobble, and I noticed the red medallion of Cornell on the plate going around. It was pretty obvious to me that the medallion went around faster than the wobbling.”

This was hardly cutting-edge, peer-reviewed science, and Feynman acknowledged as much. But he was curious. He dove deeply into the questions, first discovering that the spin rate appeared to be twice that of the wobble rate. But that wasn’t enough. Applying various bits of his own craft, including the Dirac Equation and quantum electrodynamics (QED), Feynman ultimately worked out the motion of mass particles to illustrate how the distinct accelerations balance and come out two to one. His colleagues surely thought he was nuts. He readily conceded that “there’s no importance whatsoever” in the research.

And then Feynman concludes: “The diagrams and the whole business that I got the Nobel Prize for came from that piddling around with the wobbling plate.”

Few examples of human creativity can compete — a cafeteria stunt sets in motion the forces leading to a Nobel Prize. Creativity can be fun, but it also reflects very hard (and sometimes doubtful) work, as in Feynman’s case. Creativity is combinatorial, as it was when Tim Berners-Lee combined a little-known computer network and the existing concept of hypertext with his own innovations of the Uniform Resource Locator (URL), the Hypertext Transfer Protocol (HTTP), and the Hypertext Markup Language (HTML) to give the world something brand new to accelerate the sharing of knowledge (and to stoke creativity, no doubt): the World Wide Web.

Creativity is often the product of synthesis. One example would be the musical genre of jazz, emerging from the cultural synthesis of the diverse African American music traditions colliding in New Orleans. I was lucky enough to see the legendary Walter Isaacson speak about this history at the Culturati conference last Sunday. Creativity is collaborational, with the global tech hub of Silicon Valley being created by the open culture that resulted from the region’s confluence of academics, diverse immigration, and amenable climate, as argued by AnnaLee Saxenian in her groundbreaking 1996 book, Regional Advantage: Culture and Competition in Silicon Valley and Route 128. Saxenian, by the way, went on to become dean of the University of California at Berkeley’s School of Information. Or sometimes creativity is mere serendipity, as when Swiss engineer George de Mestral returned from a walk in the woods in 1941, removed the burrs collected by his dog, and casually examined the annoying hooked seeds under a microscope. The result was Velcro — also a great example of biomimicry.

These few examples bring me back to the three points alluded to above: the nature of human creativity, the synergy of creativity and AI, and the danger of ignorant scare-mongering, particularly when we ascribe human characteristics to AI.

On this first point, Australian academic Cameron Shackell has made the compelling argument that we need a new category for the kind of creativity that is driven by AI. As he put it in an essay late last year: “there is one key difference between human creativity and AI-driven creativity: the latter doesn’t stem from the evolutionary clash of mind and world.”

Shackell cites the work of British cognitive scientist Margaret Ann Boden, who has famously broken down human creativity into two categories: personal creativity (p-type in her terminology) and historical creativity (h-type). Personal creativity is when we think of something for the first time ourselves, even if others have thought of it before — a child realizing that water can take any shape, for example. Historical creativity, by contrast, is when we hit upon an insight that is without precedent: Archimedes in the bathtub or Feynman in the Cornell cafeteria, for example.

While both personal and historical creativity can obviously be enhanced by AI, the technology does not contain reality. The AI of LLMs is an extraordinarily sophisticated card trick. To help us understand the distinction, Shackell proposes a new category, “generic creativity” (g-type). Making these distinctions, he argues, will allow us to better understand that while these wonderful new tools are capable of provoking new human thought, they can’t have their own “Eureka moments” and are limited by the data on which they are trained.

Levi’s vision for the future of music in the age of a billion dreams

Mastering the ‘epistomes’ of human knowledge with AI

To make my second point, that combinatorial AI aligns in support of human creativity, I’ll invent one more new word — “epistome”. The key here is the reality that LLM chatbots do not contain or produce knowledge any more than did the card catalog at the elementary school library where I learned to read. Neither do LLMs “comb the entire internet and universe of information” (as you’ll often hear or read) for the answers they deliver. Rather, they organize totalities of relevant knowledge, specific ecologies of related information, to find the answer you are seeking. These are the “context windows” that my colleague Bryon so eloquently explains in one of the tutorials I cited above. In this sense, LLMs are another great example of biomimicry, akin to that implied by the suffix “-ome”, from the Greek “-oma”, meaning something whole or complete. The complete set of genes or genetic material in an organism is hence a “genome”. The complete set of flora and fauna in a distinct habitat, say a forest, is the “biome”. It was through the study of “transcriptomes”, the mRNA subset of the genome of the SARS-CoV-2 virus, that science-in-the-fast-lane delivered us the Pfizer-BioNTech and Moderna vaccines that saved millions of lives in the COVID-19 pandemic. So for my analogy here, I’m stealing from the word “epistemology” — the study of knowledge — to conjure a new term, “epistome”, for these ecologies of related knowledge. Through the mining of these epistomes of knowledge — say those for wind and solar power, to use Bryon’s example — the LLM answers your question about “alternative energy” without spending time and computational power on something more distant, like coal-fired energy.

This is what makes our current iterations of AI such powerful amplifiers of human ingenuity. AI cannot reason, as we do. It does, however, excel at algorithmically-driven discernment. The distinction is essential. For it is the engineered discernment that powers this combinatorial exercise across the epistomes of human knowledge, which in turn stirs and nurtures human creativity — of both the “p-types” and the “h-types” that Shackell describes above.
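For the technically curious, here is a minimal, hypothetical sketch of what “mining an epistome” might look like in code. Real systems use embeddings, vector databases, and knowledge graphs of the kind Bryon describes in his tutorials; this toy version, with invented documents and a crude word-overlap score, simply keeps the closest slices of knowledge for the context window and leaves coal-fired energy out of the picture.

```python
# A toy sketch of "mining an epistome": score a few invented documents
# against a question and keep only the most relevant ones for the context
# window an LLM would actually see. A real system would use embeddings
# and a vector database rather than simple word overlap.

documents = {
    "wind":  "wind power is a renewable alternative energy source using turbines",
    "solar": "solar power is a renewable alternative energy source using photovoltaic panels",
    "coal":  "coal power is a fossil fuel energy source with high carbon emissions",
}

def relevance(question: str, doc: str) -> int:
    """Crude relevance score: how many question words appear in the document."""
    return len(set(question.lower().split()) & set(doc.lower().split()))

def build_context(question: str, top_k: int = 2) -> str:
    """Assemble the 'epistome': the slice of knowledge worth answering from."""
    ranked = sorted(documents.items(),
                    key=lambda item: relevance(question, item[1]),
                    reverse=True)
    return "\n".join(text for _, text in ranked[:top_k])

question = "which alternative energy sources are renewable"
print(build_context(question))  # wind and solar make the cut; coal does not
```

In a production pipeline, the assembled context plus the question would then be handed to the LLM, which is why the model need not spend time or compute on more distant knowledge.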

To set the stage for my third point on the dangers of casual anthropomorphism, let me share an astute observation from a book I reviewed last summer, How Data Happened: A History from the Age of Reason to the Age of Algorithms by Chris Wiggins and Matthew L. Jones:

“Some fields, like biology, are named after the object of study; others like calculus are named after a methodology. Artificial intelligence and machine learning, however, are named after an aspiration: the fields are defined by the goal, not the method used to get there.”

Not an all-knowing ‘HAL’, but thousands of distinct AIs

This keen insight gets to the danger of imprecise language and metaphor as we try to describe these transformative technologies. In fact, John McCarthy, who coined the term artificial intelligence as part of the now-famous Dartmouth Conference in 1956 (sometimes called the Dartmouth workshop) where the science was really born, was said to have had reservations about the term later in life. The concern of McCarthy, who died in 2011, was less about the utility of the term than about how the phrase might lead to bad interpretations of the technology, evoking images of sentient, human-like machines. He was certainly right.

I have great respect for Elon Musk’s innovativeness, despite his eccentricities. But his recent post on X illustrates McCarthy’s prescience: “AI will probably be smarter than any single human next year,” he posted. “By 2029, AI is probably smarter than all humans combined.” Come on, Elon, is all I can say. He should know better. This is AI anthropomorphism at scale, and the fear-mongering is not helpful to any of us.

As Kevin Kelly, the true prophet of AI and other technologies, frequently reminds us, AI is really a million-plus distinct tools: “We should prepare ourselves for AIs, plural. There is no monolithic AI,” Kelly told the economics blogger Noah Smith in a great interview last year. “Instead, there will be thousands of species of AIs, each engineered to optimize different ways of thinking, doing different jobs (better than general AIs could do). Most of these AIs will be dumbsmarten: smart in many things and stupid in others.”

Absolutely. This is where Shackell’s evolutionary clash of mind and world occurs. An explosion of creativity is at hand. While there are many now examining AI through the wrong end of the telescope, there are so many more who have their looking glasses squarely focused on opportunities constrained only by imagination. This is the source of my optimism. They include Levi, who finishes his homework on the school bus each day so that, once home, he can dive into his own AI projects: the online games he has created with ChatGPT, like “FQG: Fantasy Quest Guide” and his latest, “CE World: Cultural Explorer”, which he developed this past weekend, and his own vision for “The Reality Node” that will supercharge innovation and serve humanity.

Levi nailed it. The age of AI is indeed the age of a billion dreams. What an exciting time to be alive!

In Part Two of this series, I’ll explore the impact of AI on the creativity of enterprises and institutions, including the lessons we might take from Renaissance 1.0, such as the burst of creativity and commerce unleashed by an early form of “fintech” — the codification of double-entry bookkeeping by Franciscan friar Luca Pacioli in 1494.

Levi’s vision for the modern craftsman in the age of a billion dreams
