“The neuroscience community doesn’t have a single definition of intelligence,” says David Eagleman, professor of neuroscience at Stanford University. This fundamental uncertainty about the very thing we’re trying to recreate in machines is the starting point for how Eagleman approaches artificial intelligence. Having dedicated his career to understanding the human brain—through research, bestselling books, television series and podcasts, as well as by founding neurotechnology companies—Eagleman shared his unique perspective on the future in Newsweek's AI Impact interview series with Marcus Weldon.
“The situation we’ve always been in as neuroscientists is we’re like fish in water trying to describe water,” Eagleman explains. “We’ve never seen anything else, so it’s very difficult to understand what water is. But now that we have AI, we see a slightly different version. So now we’re getting a new vocabulary, new ways of phrasing things, new ways of understanding distinctions that we simply didn’t have before.”
And those new insights into the nature of intelligence are creating a reciprocal relationship between biology and computer science: These new systems might teach us as much about ourselves as we teach and train them. Trying to determine the intelligence level of new systems like OpenAI’s ChatGPT and Anthropic’s Claude often reveals just how limited our understanding of our own minds remains.
“It’s actually more than a blank spot. It’s like a blank entire world,” Eagleman says. “We have been building our pier out into the fog, but there’s so much uncharted water.”
The emergence of artificial intelligence, however, gives neuroscientists like Eagleman a new lens to examine these fundamental questions about the mind. “What’s very cool is that they’re going to help each other,” he says, “because as we discover more in neuroscience, that’s going to allow us to understand what the heck is going on in these deep AI systems. And we’re already using these giant LLMs to analyze neuroscience data and speed things up 10x, so the whole thing is synergistic.”
At 53, Eagleman has established himself as one of neuroscience’s most innovative thinkers and compelling communicators. His books, including Sum, Incognito: The Secret Lives of the Brain and, most recently, Livewired: The Inside Story of the Ever-Changing Brain, have all landed on The New York Times bestseller list. On television, he has hosted the PBS program The Brain with David Eagleman and served as a scientific advisor on Westworld, the HBO show about AI humanoid robots run amok. And his podcast series, Inner Cosmos, about the inner workings of what he affectionately refers to as “our three-pound universe”—the brain—has frequently been rated among the top science series.
The path to studying the deep science of the human brain wasn’t straightforward. Eagleman began as an undergraduate literature major at Rice University before discovering neuroscience in his senior year. “I really loved science, and I was good at science. I was trying space physics, electrical engineering and various things, but I couldn’t find exactly the right place,” he recalls. “I’d always loved literature, so that’s what I majored in. I didn’t discover neuroscience until my senior year, and as soon as I found it, I knew that was it.” Despite having little formal training, Eagleman’s passion led him to read “every book in the library on neuroscience” before successfully applying to graduate programs, eventually earning his PhD from Baylor College of Medicine and completing postdoctoral work at the Salk Institute.
But perhaps what is most telling about Eagleman’s approach is how he has channeled the theoretical insights from his academic work into practical applications that extend what is humanly possible. He has founded multiple companies, including Neosensory, which develops wearable devices that transform sound into vibration patterns on the skin for those with hearing impairments. “It’s like moving the ear to the wrist,” he quips. He has also founded BrainCheck, which offers cognitive assessment tools for sports coaches and clinicians alike.
The connection between understanding the brain and extending its capabilities provides the foundation for Eagleman’s approach to artificial intelligence. Central to his thinking about both human and artificial intelligence is an understanding of how the brain fundamentally operates—not as a passive receiver of information, but as an active constructor of reality.
“The job of the brain is to construct a model of the world out there,” he explains. “The brain is locked in silence and darkness, controlling this big body, so it has no choice.” This perspective upends traditional thinking about perception and cognition. “People have misthought about the brain for a very long time, and it’s still this way in most textbooks. [They think] that the brain is reacting to the world. You have photons hit the eyeballs or air compression waves hit the ear, and you analyze what’s out there. But in fact, that’s not really what’s going on at all.”
Instead, Eagleman describes a much more dynamic process in which the brain constantly compares its rich “world model” with observed reality: “It has massive internal feedback loops, and a model of what’s going on, and it’s just comparing its model against the little bit of data that’s dribbling in through the senses.” When everything matches up, the brain renders your view of the world according to its existing model; only when an anomaly is detected does it interrogate the world more closely, to understand the origin of the difference and update the model.
This framework helps explain phenomena like visual illusions and the continuity between dreaming and wakefulness. “When you dream at nighttime, you’re having full, rich visual experience, and your eyes are closed,” he points out. Being awake, he explains, “is the same process, but you just have a little bit of data coming in and that anchors you. Otherwise it’s the same idea.”
The implications for artificial intelligence are profound. Current AI systems—even the most sophisticated large language models—lack this fundamental capacity for world model building. They are trained on vast amounts of textual data but don’t construct an internal representation of the world that can be tested against new information.
“AI is a stochastic parrot,” he says, borrowing a phrase coined by University of Washington linguist Emily Bender. “Very impressive, but it’s reading everything that’s ever been written by humans and remixing it and doing things like that. But it doesn’t have the ability to simulate forward and say, what if this or that were true? What would the consequences be? And then conclude that something actually makes a better theory.”
Eagleman’s insights into the brain’s architecture go beyond this model-building framework. He has developed an influential way of understanding the contradictions and conflicts within our own minds—one that offers lessons for how we might approach the design of artificial intelligence systems.
“I wrote about this in my book Incognito,” he explains. “The framework I built was that the right way to view the brain is as a Team of Rivals. This was the term used for Abraham Lincoln’s presidential cabinet, where he took all these people who disagreed with him politically and put them all in his presidential cabinet.”
This analogy—of the brain as housing competing neural processes with different priorities—helps explain our internal conflicts and decision-making processes. It’s like “we’ve got different political parties that all love their country, but they have very different views.”
The result is an internal dialogue that most of us experience but rarely analyze. “You can cuss at yourself. You can argue with yourself. You can cajole. You can contract with yourself. Who’s talking with whom here? It’s all you, but it’s different parts of your brain battling it out under the hood,” Eagleman says.
What’s fascinating about this framework is that it suggests intelligent behavior emerges not from a unified, homogeneous system, but from the dynamic interaction of different competing components. This insight could inform approaches to AI architecture that move beyond monolithic models toward systems with multiple, specialized agents or systems that interact, contend and then collaborate to reach a consensus view.
As we discuss the current state of artificial intelligence, Eagleman introduces a concept he’s developing that cuts through much of the hype surrounding current AI capabilities. “I calculated how much could you read in a lifetime if you read every single day of your life. ChatGPT reads 1,000 times that. It’d take you 1,000 lifetimes to read as much as it’s read. And, by the way, it remembers everything it’s read, whereas you would forget.” So these systems are phenomenal repositories of written human knowledge and ideas from everywhere.
“I’ve coined a new term that I’m publishing a paper on now, called the ‘intelligence echo illusion,’” he explains. This vast knowledge base creates a deceptive impression when we interact with these systems. “Often, we will ask a question to the AI, and it will give us some extraordinary answer. We’ll say, ‘My God, it’s brilliant.’ But in fact, it’s just echoing something that somebody else has already said. The problem is that you might not know that someone among the 8 billion people alive, or the 100 billion who have existed before, has already thought through that question in great depth and written about it.” The LLM is simply echoing that human input back to you.
To illustrate this point, Eagleman shares an anecdote about a Silicon Valley friend who was amazed when he asked ChatGPT to visualize a capital letter D turned on its side with a J underneath it. When the AI correctly responded that it would look like an umbrella, the friend was convinced this demonstrated true visual reasoning. But Eagleman recognized this as a famous example from psychology research.
“That is probably the most common example in the literature on visual imagery, about D on its side and J underneath,” he notes. “There’s a paper from 1989, a very famous psychology paper on exactly that. So of course, it knows the answer to that, but it doesn’t mean that it actually did anything” novel.
The illusion occurs because “we often don’t realize that something exists out there, in this case, thousands of places. So, you think you’re asking something new, because your knowledge is limited, you don’t realize that it’s not new.”
This illusion is crucial for understanding the true capabilities and limitations of current AI systems. What appears as intelligence or understanding or creative thinking is often just the reflection of human knowledge and reasoning that the AI has absorbed in its training data.
When it comes to creativity—often considered one of the most distinctly human capabilities—Eagleman sees a clear dividing line between what AI systems can and cannot currently do.
“Creativity is all about remixing what has come in,” he explains. “We go around the world and we’re vacuuming up so much information through our senses.” This process of recombination is something AI excels at. “AI is great at this. AI is extraordinary at remixing everything it pulls in.” But not necessarily, Eagleman notes, with any intelligence applied to the process.
While many have proposed tests to determine if an AI has achieved human-like intelligence—from Alan Turing’s imitation game to the Lovelace 2.0 test for creativity—Eagleman has his own benchmark that cuts to the heart of what he believes makes human cognition special.
“I proposed a test a couple years ago,” he explains. “I think the thing that would serve as a good indicator that AI is intelligent is when it can do scientific discovery, because that’s something that humans do, where we piece together ideas and we come up with new frameworks, and we figure out [new ways to describe] the world that way.”
This isn’t about AI simply retrieving or remixing existing knowledge but generating truly novel concepts. “When Einstein says, ‘What would it be like if I were riding on a photon?’ And then he thinks that through, simulates possibilities, and comes up with a special theory of relativity,” Eagleman elaborates. “Or when Darwin says, ‘Look, how did all these species get here? Maybe there existed all these other species that I can’t see, they’re buried under the ground. But if I imagine that those species existed,’ he simulates forward and he figures out this way of viewing the world.”
This ability to propose entirely new conceptual frameworks—not just to process existing data but to imagine alternatives to current understanding—represents to Eagleman the highest form of intelligence. It requires not just pattern recognition but genuine simulation capabilities of new possibilities, with intelligent filtering and reasoning. “That’s what’s really special about intelligence,” he insists. “And not everyone’s an Einstein or Darwin, but we all do this in a million ways every day.”
In Eagleman’s view, current AI systems haven’t cleared this bar: “AI doesn’t do that yet.” But he believes we’re approaching this threshold in some domains, which would mark a fundamental shift in our relationship with artificial intelligence.
Eagleman has a simple framework for understanding creative thinking that he calls the “bend, break and blend” approach—taking existing concepts and transforming them through various operations. Large language models do this remarkably well for language. But there remains a fundamental disconnection from human experience and values. “If I ask ChatGPT for 10 versions of something, it’ll spit them out super creatively, but it doesn’t know which one’s better, defined by what do my fellow humans like? Why? Because it is not a human, so it has no idea.” These systems currently lack the required models of human sensory experience.
This fundamentally highlights what current AI systems lack: embodiment (or a model of embodiment) in the physical world. Our cognition is shaped by our sensory experiences and our interaction with the environment through our bodies. Without this grounding, AI systems may always remain limited to describing the world through a statistical weighting of everything that has been written about it: a poor substitute for the richness of every individual’s lived experience.
This creativity divide suggests that the most productive relationship between humans and AI may be collaborative rather than competitive, with AI systems generating possibilities that human judgment then evaluates and refines.
Looking ahead, Eagleman envisions a future where increasingly autonomous AI systems manage complex tasks beyond human comprehension. The remarkable adaptability of the human brain—its ability to incorporate new sensory inputs and repurpose neural resources—also offers lessons for developing more flexible, adaptable AI systems. Eagleman’s concept of the “livewired” brain emphasizes that our neural architecture is constantly rewiring itself in response to experience, and he is sure that we will learn to adapt to—and be augmented by—machine assistance in almost every aspect of life.
He sees this trend already visible in some domains. “Over 80 percent of all stock market trades are done algorithmically now at a microsecond time scale, and humans are completely out of the loop, and we will never catch up,” he says. “It’s not like we can train ourselves to understand the stock market at microsecond scales.”
Augmented Intelligence: Reflections on the Conversation with David Eagleman
By Marcus Weldon, Newsweek Contributing Editor for AI and President Emeritus of Bell Labs
The most striking aspect of the conversation with David Eagleman was the admission that we still don’t have a good understanding of the nature of human intelligence, rendering our attempts to ascribe intelligence to different AI systems fraught with ambiguity and uncertainty. When I reflect on our dialog (more analysis of which you can find here), I think there are five defining observations and takeaways:
- Although we don’t have a precise definition of human intelligence, it is clear that machines lack many aspects of it: for example, “physical world models,” which humans derive from movement, and theory of mind, which humans derive from social interaction and which allows us to understand and learn from others’ perspectives.
- Human thought is an internal “movement” through our mental model world, allowing us to explore hypotheticals and to create entirely new concepts or forms of creative expression.
- Consequently, current AI systems can only try to emulate the richness of human experience, and Eagleman has proposed a better test of the different intelligence(s) exhibited by machines (which I describe in my companion analysis [add link to my article]), one that will allow us to assess progress and the utility and abilities of different systems.
- The Team of Rivals model of the human brain is a good framework for understanding the future of AI systems; there will be a set of ‘rival’ AIs with different expertise and intelligence levels that will be dynamically combined to assist and augment humans.
- As we always have, we will evolve our behaviors to take advantage of these new machine capabilities and we will use AI co-pilots to monitor and check machines at machine speed and to explain the essential operation to us at “human speed,” allowing us to remain in control.
In sum, this is a compelling set of observations that should inform how we approach the development of future intelligent machines and the target mode and model for augmenting human existence.
This reality—of complex systems operating at speeds and scales beyond human comprehension—raises profound questions about control and transparency. Yet Eagleman remains surprisingly pragmatic about this challenge.
“I think what we’re going to do is have competitive AI systems—we’re going to build AI systems to check on other AI systems,” he suggests. “We’ll find that [a system] operates a trillion times faster and I can’t possibly understand the level of complexity, but I’m going to build a monitor or adversarial competitive system … to keep an eye on it.”
This vision of checks and balances among AI systems harks back to the Team of Rivals framework he uses to understand the brain—multiple specialized systems that provide accountability through their competition and collaboration.
Eagleman also sees these systems evolving to better communicate with humans, not by slowing down to human speeds, but by some of these monitoring agents translating the essential operation into something we can understand. “Even for the ones that we think we have a good comfort level with, we’ll build translators to dumb things down for us so that we can understand what’s going on.”
—
As our conversation draws to a close, Eagleman reflects on where we stand in understanding both human and artificial intelligence—acknowledging the vast unknowns while remaining cautiously optimistic about our technological trajectory.
“Right now, it’s all about co-piloting, and we’re moving to a future where there’s going to be more and more autonomous systems that are just taking care of stuff,” he says. “For example, I teach at Stanford. I go onto campus, there are buildings and pipes and toilets, and I don’t know how the whole thing hangs together and works. I couldn’t run the place. Who knows how it works, but I get to just enjoy it.” This, as he sees it, is analogous to the symbiotic human-AI world that lies ahead.
In navigating this future, perhaps the greatest wisdom comes from approaching both forms of intelligence—human and artificial—with a balanced perspective that recognizes their complementary strengths and limitations. As Eagleman has demonstrated throughout his career, the most interesting territory lies not in seeing these as separate domains, but as interconnected aspects of the broader question of what intelligence itself might be and become.
“The computational hypothesis of neuroscience is that if you figure out the algorithms that are happening there and you reproduce them, let’s say, in silicon, then you’ve got the same thing,” Eagleman says. “I could reproduce your brain out of beer cans and tennis balls, if it’s doing the right thing, is the computational hypothesis.”
Whether the computational hypothesis proves correct—that consciousness and human-like intelligence could emerge from systems running on silicon or any other synthetic substrate, just as it does from the neocortex—remains one of the greatest open questions. But as neuroscientists, behavioral psychologists, computational linguists, creative artists, roboticists and AI researchers challenge each other’s understanding of the defining characteristics of human and machine intelligence and the interplay of the two, we are getting ever closer to an answer.