AI on the Mind: Analytics at Wharton’s Neuroscientists Weigh In on AI’s Future with Our Brains

With recent advancements in artificial intelligence, and the proliferation of large language models like ChatGPT, nearly every sector across industry and academia is reconsidering what its future will look like.

Recently, Analytics at Wharton sat down with our resident neuroscience experts: Michael Platt, director of the Wharton Neuroscience Initiative (WiN) and the James S. Riepe University Professor of Marketing, Neuroscience, and Psychology, and Elizabeth “Zab” Johnson, executive director and senior fellow of WiN.

We asked them about their own relationship to artificial intelligence, the implications of recent developments in the field of neuroscience, and why certain aspects of science-fiction stories may be arriving in our world sooner than you think.

The following conversation has been edited for clarity and length.

To set the stage, how would you define natural intelligence as opposed to artificial intelligence, from a neuroscience perspective? 

Zab Johnson
Natural intelligence requires you to have a brain and a nervous system. An animal takes in data through experience, updates the system, and learns. In some ways, the brain uses that data to predict what might come next and then determines what the animal needs to do in a similar or a different context in the future. It has evolved over millions of years.

Michael Platt
The human brain has 86 billion neurons and 100 trillion connections, but it’s actually a limited-capacity processing device. It’s not like a computer; we can’t add more capacity to it.

There are a lot of physiological and energetic constraints. We think that’s where the process of evolution has fine-tuned nervous systems. Brains do funny things to be very efficient. They throw out most of the information that’s coming in. You don’t encode everything, and you don’t encode the absolute magnitudes of things, but rather, you encode difference, contrast, surprise, things that stand out.
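To make that concrete, here is a toy sketch of difference-based coding in Python (an illustration added for this article, not code from WiN): transmit only the changes in a signal, so steady stretches cost nothing and the surprising moments stand out.

```python
import numpy as np

# Toy illustration of contrast coding: instead of transmitting absolute
# values, encode only the changes from one sample to the next.
signal = np.array([100.0, 100.0, 100.0, 140.0, 140.0, 100.0])

# Absolute encoding: every sample carries the full magnitude.
absolute_code = signal

# Difference encoding: mostly zeros, with nonzero entries only where
# something changes, i.e., the surprising, informative moments.
difference_code = np.diff(signal, prepend=signal[0])

print(absolute_code)    # [100. 100. 100. 140. 140. 100.]
print(difference_code)  # [  0.   0.   0.  40.   0. -40.]
```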

Zab Johnson

Unlike the brain, which is incredibly energy efficient, these more recent AI developments are huge energy consumers. That’s one of the most concerning things about them. We are going to have to reckon with that consumption and how cataclysmic it might be globally, beyond the more common concerns: is it going to put people out of work? Is it going to change how we think of the human and our role?

In what ways are natural and artificial intelligence similar? 

Zab Johnson

Both are black boxes, in a way. When you talk to the engineers who are building these deep neural nets, they actually can’t tell you what the individual processing layers are really doing, just like neurophysiologists often can’t tell you exactly how these little individual processing units of the brain are working together.

Michael Platt

By digesting such a vast amount of human language, these AI models learn to make pretty good predictions, and in that way they also exhibit some of these weird behaviors, like hallucinations. Humans confabulate. That’s what language is. When we don’t know the answer, we make things up.

One of the most powerful similarities is the way they solve a problem and home in on an answer. This is through what we call gradient descent.

That’s when you make predictions, calculate the error between each prediction and the actual outcome, and then try to minimize that error over time. You end up with a very powerful predictive model, and that’s essentially what evolution did over hundreds of millions of years. So that process is similar, if not identical.
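As a concrete illustration (a minimal sketch, not the researchers’ code), here is gradient descent in Python fitting a single weight: make a prediction, measure the error against the outcome, and step downhill to shrink it.

```python
import numpy as np

# Minimal gradient descent: fit a one-parameter model y = w * x by
# repeatedly measuring the prediction error and nudging w to shrink it.
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=100)
y = 3.0 * x + rng.normal(scale=0.1, size=100)  # true weight is 3.0

w = 0.0             # initial guess
learning_rate = 0.1

for step in range(200):
    predictions = w * x
    errors = predictions - y            # prediction minus actual outcome
    gradient = 2 * np.mean(errors * x)  # derivative of mean squared error
    w -= learning_rate * gradient       # step downhill on the error surface

print(round(w, 2))  # converges close to 3.0
```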

What is your relationship to artificial intelligence? How are you engaging with it as educators and neuroscientists?  

Michael Platt

I think it’s one of the major issues we face. The immediate concern is job displacement, but it all depends on how we agree to use it. Are we going to use it for good or for bad? Someone’s going to use it for bad at some point, so we have to be prepared for that.

It’s blown the lid off education. I’m not sure any of us understands its impact on education. You’re never going to be able to keep up algorithmically to detect when someone has used AI, so we’ve got to get smarter about the way we teach and assess it. Hopefully we can teach students how to embrace and use AI in responsible and useful ways, and not in destructive ways.

Zab Johnson

AI has really been around for so long. OpenAI made it public in a spectacularly interesting way, but the reality is that we’ve been using AI-related tools for a long time without overtly labeling them AI. For example, I think AI assistance has been used since 1992 in expert domains like radiology. AI tools were helping with things that we recognized could be done algorithmically, like pattern recognition. In the last few months, Google Health has shown it can use AI and machine learning to spot biomarkers of disease just from external photos of the eye.

I think the interesting thing to consider is – what is the role of the human in this relationship? There’s a really nice inflection point here, where we, as a society, can perhaps make some decisions about what the rules are and how they get incorporated.

The same goes for education, too. I don’t think it should be embargoed. I’m adding a new assignment to my visual marketing class that uses something like a generative image creator along with ChatGPT to produce branded visual advertisements. I think there’s a role for learning how the human interfaces with AI through prompts: how specific you get, how many times you iterate, how you evaluate the success of the outputs, and so on. I think students need to have experience with the tools because they’re here to stay, and the students are the ones who will be front and center in defining how AI is used for work and optimization.

Recently, Alex Huth and Jerry Tang at the University of Texas at Austin revealed they were able to train a GPT-based language decoder on the brain signals of a subject. After showing the subject a movie, the decoder could read their brain signals and accurately describe what they were watching. What’s your reaction to that?

Michael Platt

I think anything’s possible these days. It’s shocking how far we’ve come – not necessarily in terms of how much we understand about how brains work, but in our ability to get information out of them. You can trace it back through the last 25 years of work on brain-machine interfaces.

You put electrodes into the brain of a human patient who has lost control of some motor function, and then you have that individual imagine moving some body part. A big rack of computers decodes this, and eventually the patient can actuate an external robotic device.
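A highly simplified sketch of that decoding idea, using synthetic data in place of real recordings (everything below is invented for illustration): fit a linear map from neural firing rates to intended movement, then decode new activity with it.

```python
import numpy as np

# Simplified sketch of motor decoding: learn a linear map from the firing
# rates of recorded neurons to intended movement velocity. All of the
# data below is synthetic; real decoders are far more elaborate.
rng = np.random.default_rng(1)

n_samples, n_neurons = 500, 40
true_map = rng.normal(size=(n_neurons, 2))  # hidden neurons-to-(vx, vy) map
firing_rates = rng.poisson(5.0, size=(n_samples, n_neurons)).astype(float)
velocity = firing_rates @ true_map + rng.normal(scale=0.5, size=(n_samples, 2))

# Fit the decoder on "training" trials with ordinary least squares...
weights, *_ = np.linalg.lstsq(firing_rates, velocity, rcond=None)

# ...then decode intended velocity from a fresh burst of activity.
new_rates = rng.poisson(5.0, size=(1, n_neurons)).astype(float)
decoded_velocity = new_rates @ weights
print(decoded_velocity)  # estimated (vx, vy) that could drive a robotic arm
```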

That was sort of the beginnings of decoding, and there was a lot of smack thrown at it early on. Some people would say “you’re not telling us how the brain works” and those working on it were like, “Well, we don’t care. We’re trying to restore movement to people who’ve lost movement.”

On the heels of that came work showing that you could decode what a person is seeing, like a movie. You could replay it from their brain activity. Jack Gallant at the University of California, Berkeley did a lot of the really important work in terms of decoding what someone was seeing. And Eddie Chang at the University of California, San Francisco did that for what a person is thinking about or trying to say by putting electrodes over the parts of the brain that are involved in producing speech.

This most recent paper from UT Austin is really interesting because, first of all, it used fMRI, functional magnetic resonance imaging, which measures the blood flow response – super sluggish, not very high resolution – and it wasn’t looking at the sensory encoding areas or the motor output areas, but at the rest of the mush of the brain that lies in between. That’s where all the data was. You could reconstruct what somebody was essentially thinking about. It’s pretty crazy.

Zab Johnson

As Michael said, this is an extension of a good 25 years of work in neuroscience. Because it harnessed a large language model, it was more powerful than earlier efforts, but I don’t think either of us was necessarily surprised.

I think the one overt concern is that it’s getting pretty close to mind reading. But it’s still very early. You can’t train these models on one person’s data and then predict someone else’s thoughts; you would need to put that person in an MRI machine for a long time first. And there are some interesting nuances here. There are people for whom written stories don’t work; they don’t elicit robust activation of the brain the same way pictures might. So, there’s going to be some individual variation in even being able to get quality outputs. However, I think the idea is that these things are going to develop and get better. It leads to a really interesting question: will it be powerful enough to access the processing of the subconscious?

“You’re never going to be able to keep up algorithmically to detect when someone has used AI, so we’ve got to get smarter about the way we teach and assess it.”

– Michael Platt, Director, Wharton Neuroscience Initiative

I’m curious about what other developments at the intersection of AI and neuroscience you think might surprise people in the near future. What aren’t people thinking about?

Michael Platt

We’re going to see more and more refinement, so we’ll get better and better accuracy at decoding increasingly complex behavioral states from brain activity.

I think from a practical point of view, we’ll move toward lower-touch, more scalable kinds of technologies. So going from fMRI, which costs millions of dollars and is only available at academic medical centers, to being able to put on a comfortable wearable device that measures brain waves.

We’ve got an interesting project right now. We’ve used fMRI to decode brand positioning in the brain, for example. We have people look at different brands within a particular category of product. From the patterns of activation across the brain, you can decode the relative strength of each of those brands and how they relate to each other. You can predict brand equity, market share, and all this stuff from that. Now we’re trying to do the same with EEG. It could be a game-changer if we can harness the power of these large language models.
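A hypothetical sketch of that kind of pattern decoding, with random stand-in data rather than actual brain recordings: train a classifier to distinguish brands from activation patterns and compare its accuracy to chance.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Hypothetical pattern-decoding sketch: classify which brand a person was
# viewing from a pattern of activation across many recording sites. The
# data below is random stand-in data, not real brain recordings.
rng = np.random.default_rng(2)

n_trials, n_features = 120, 300              # e.g., trials x voxels
patterns = rng.normal(size=(n_trials, n_features))
brands = rng.integers(0, 3, size=n_trials)   # three brands, coded 0/1/2

# Give each brand a faint, consistent signature on top of the noise.
signatures = rng.normal(size=(3, n_features)) * 0.5
patterns += signatures[brands]

clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, patterns, brands, cv=5)
print(scores.mean())  # well above the 1/3 chance level if patterns carry brand info
```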

As Zab mentioned, if you could surface and articulate parts of the subconscious, then you don’t need years of psychoanalysis. You could just plug into this device, and it could tell you what’s really bothering you or something like that. I don’t know.

Zab Johnson

It’s moving at lightning speed, and I think that the applications are actually so ubiquitous. It’s really hard to know where the next groundbreaking thing will be –

Michael Platt

– Yeah, one thing, sorry to interrupt. We wouldn’t interrupt if we were wearing, you know… *mimes wearing a headset*

Zab Johnson

Exactly.

Michael Platt

…or if we were communicating telepathically, which I think is actually the next thing, and the thing that would really shake people up. A version of that was done at Duke University by our colleague Miguel Nicolelis, in monkeys. It’s crazy to think about, but he had electrodes in the brains of two monkeys, each getting partial information about a problem they had to solve, and the only way they could communicate was through their brain activity. They solved the problem.

Zab Johnson

They were on two different continents. Did you mention that?

Michael Platt

I did not mention the two different continents. So, that’s pretty profound. What that potentially does is free us from a lot of the constraints of our physical input and output devices. We could potentially operate at much higher speeds, with higher accuracy.

I wonder if that could be a solution to language barriers as well.

Michael Platt

If you want to know what’s coming in the future, just look at science fiction, or in this case, Star Trek. Everybody has a universal translator embedded.

I don’t know if you know about the CETI project. It’s basically trying to create a language decoder for cetaceans, sperm whales in particular, because they produce such a rich variety of vocalizations, and a lot of them. I would have thought it would be impossible before seeing what’s happened in the last nine months.

Zab Johnson

Actually, lots of groups are doing that, including Marc Schmidt’s lab here at Penn in the biology department. He’s looking at all these aspects of cowbird behavior and then using that as a training set for algorithms. So, if you think of anything as having enough data to power a machine learning algorithm, then you can actually start to uncover patterns that used to be much, much harder to recognize, or that were measured inaccurately or too subjectively because human raters were involved.
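A toy sketch of that pattern-discovery step, with invented acoustic features standing in for real recordings: describe each call with a few numbers and let a clustering algorithm find the call types, no human raters required.

```python
import numpy as np
from sklearn.cluster import KMeans

# Toy sketch of discovering call types without human raters: describe each
# vocalization with a few acoustic features (say, duration, pitch, and
# bandwidth), then let a clustering algorithm group them. The features
# here are synthetic stand-ins for values extracted from spectrograms.
rng = np.random.default_rng(3)

calls_type_a = rng.normal(loc=[0.2, 4.0, 1.0], scale=0.1, size=(50, 3))
calls_type_b = rng.normal(loc=[0.8, 2.0, 0.5], scale=0.1, size=(50, 3))
features = np.vstack([calls_type_a, calls_type_b])

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)
print(labels)  # two clean clusters: candidate call types found by the machine
```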

I think we’re about to learn a lot about behavior. Maybe we’ll all know why the plants are screaming soon.

Any final thoughts on the matter, or positions you want to make clear that you hold?

Zab Johnson

One of the only things that I really lose sleep over in this domain is this: what are the voices that AI is going to learn for, and from, and how are those voices going to be credited? We’re thinking about data as if it’s just data, but it’s powered by human creativity. It’s powered by what people have done and put online, and by what people haven’t put online. And it’s learning from that. It’s going to bake in biases, because there are voices that are more common and voices that have never been included in that space. The weighting functions right now are not really refined for those nuances, so there are inaccuracies and biases.

As we develop new ideas and put our own data into that arena, how do we still credit the human generative engine? How do we give space for there to be novel developments from human agents? Those are the kinds of things we’re going to have to spend a lot of time thinking about.