UC Berkeley's Alison Gopnik: "Babies are the ultimate supercomputers"
Danny In The Valley

Full episode transcript


What can you say? Thank you for listening. Oh, that's nice, because people want to hear it. You wouldn't missing. What do you think? Thank you. Listening. Good job. What do you say? This is Danny in the valley. This is a baddie technology. What is it all about? Hello, and welcome to Danny in the Valley. Thank you for tuning in.

And don't worry, what you heard was not put there by mistake. That was my son, Cole, who is two and a half years old, and what he said, his cute little jumble of words, is exactly the type of thing that is of most interest to this week's guest. Alison Gopnik is a cognitive psychologist who has spent 20-plus years studying babies and young children, specifically how they learn. And she's doing that because she thinks that these little people could be the key to unlocking the next big leap in artificial intelligence, which, if you think about it, makes sense. This, for example, is my four-month-old, Jet, expressing his displeasure about, I'm not sure what. If his brother is any guide, before long he'll start speaking, then stringing words together, and pretty soon,

expressing full, understandable thoughts. And the speed with which that happens, on the relatively low amount of data that he experiences in order to make those connections, is, in computing terms, miraculous. Because right now, even the best AI systems need orders of magnitude more data to train on to get to a similar place. So converting babies' brains into algorithms is effectively what Gopnik is working on. Increasingly, it's an area that the likes of Google and other leaders in the field of AI are looking at as providing potentially the next big portal to true advances in the field. So that is what I wanted to talk to Gopnik about. Last week I headed over to her house in Berkeley. Free delivery! Hello. Danny Fortson. Danny. Oh, I completely forgot.

I have it on my calendar, but come on in. Sure, yeah, absolutely. So I think it was last week, week before, there was the launch of the Stanford Human-Centered AI Institute, which is trying to figure out how to create artificial intelligence in a way that is good for humans and makes sense and is not gonna lead to the Terminator apocalypse. And you were on stage talking about your work, which I thought was super interesting, and you'll explain it better than me, but it's basically trying to model AI on babies and how they develop. Is that

3:38

right? Yeah, that's right. So one of the things that led to the great renaissance of AI over the last five years or so, maybe 10

3:49

years now, there's been many, many, many false

3:51

dawns, many false dawns in the past. Well, here's what I would say is the short version of what happened: for a long time, what people in AI were trying to do was to really focus on adult knowledge and sort of handcraft the equivalent of the knowledge that adults have and put it into a computer. And the big change was realizing that you didn't actually have to handcraft all the knowledge if you had a computer that was sufficiently good at learning. So the recent work has taken ideas about learning that were already around in the eighties and shown that when you have big enough data sets and you have powerful enough computers, you can actually learn a lot of what you need to learn to be able to do intelligent tasks, like playing Go or chess, or translating, and so forth. So there's been this really interesting switch from thinking about knowledge to thinking about learning. But I think increasingly, people in AI are realizing that they're coming up against a lot of barriers, because the techniques that they've used just aren't powerful enough to do the kind of learning that humans do. And, of course, the people who really do most of that learning are babies and young children. They're the best learners that we know of in the universe.

And what psychologists and cognitive scientists like me have been doing for the past 20 years is showing just how powerful those learning mechanisms are, and also, for the past 20 years or so, describing them in computational terms. So we have this great resource, which is that we actually have good computational accounts of what it is that babies and young children are doing that lets them learn so much, so quickly, so effectively.

5:28

So, turning young child and baby learning effectively into equations.

5:33

That's right, exactly. Now here's the catch, there's always a catch. Three catches, even. We've been doing this for the past 15 or 20 years. The kinds of models that come out of looking at babies and young children, even though we can start to formalize them, turn them into math, are very hard to actually implement on current computers. The problem is that they're very, very computationally expensive. You very rapidly get what people call a sort of exponential explosion if you actually try to implement these ideas on real computers. So we're in this interesting position now where we have these techniques, like deep learning and deep reinforcement learning and adversarial networks and so forth, that are really good at solving very specific problems where there's a lot of data and a lot of compute power. But they can't do things like make good generalizations.
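One concrete face of that exponential explosion (my framing, not an example Gopnik gives here): if you cast causal learning as search over causal graphs, the Bayes-net framing she describes later, the hypothesis space grows super-exponentially with the number of variables. Robinson's recurrence counts the directed acyclic graphs on n labeled nodes:

```python
from functools import lru_cache
from math import comb

@lru_cache(maxsize=None)
def num_dags(n: int) -> int:
    """Robinson's recurrence for the number of DAGs on n labeled nodes."""
    if n == 0:
        return 1
    return sum((-1) ** (k + 1) * comb(n, k) * 2 ** (k * (n - k)) * num_dags(n - k)
               for k in range(1, n + 1))

for n in (3, 5, 10):
    print(n, num_dags(n))
# 3 variables -> 25 candidate structures; 5 -> 29,281; 10 -> about 4.2e18
```

Exhaustively scoring every candidate model stops being feasible almost immediately, which is one reason implementing child-style model search on real machines is so hard.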

For instance, if you give our current object recognition systems a just slightly different example, they don't do very well, and they'll say that something we don't think is even vaguely like a cat is a cat. So they're not very good at generalizing. Even if you take these wonderful systems that have done things like learn how to play Go or learn how to play chess, if you presented those systems with a game that was just slightly different, they would have to start all over again on

6:59

that. And they would have to crunch through unimaginable amounts of data to get good

7:03

at any of that. Exactly. And part of the reason why they could solve Go and chess, for example, was that, because we have very defined rules for those games, the computer could actually generate its own examples, and generate millions and millions of examples. Now, if you look at kids, they're exactly the opposite, because from very small amounts of data they can make really impressive generalizations. They can come up with brand-new ideas, and in work that we've done in my lab, we've shown that sometimes they're better at coming up with unlikely ideas than adults are. So somehow they're taking relatively small amounts of data and using it to build structures that allow them to generalize really well. And the question is, what is it that they are doing that's allowing them to do that? And could we use some of that to program computers and develop AI that was more effectively like

7:55

human? Because, and I don't know if you were there for it, one of the presentations at this Stanford thing was a guy looking at speech, natural language processing. And he put up a graphic that showed, effectively, to get to the same level of speech capability, a program would have to go through something like 1,000 times more data than a baby, just to get to basic syntax for saying a simple sentence or something. Exactly. The baby brain, or the brain generally, is the best supercomputer.

8:28

Yeah, and we think it's not just that there are more cycles somehow, but that the very way that they're trying to solve the problems is different. So there are three things that I think are really characteristic of what the babies and children are doing. We've discovered this through doing this empirical work over the last 15 years, and they aren't characteristic of the most recent iterations of AI, although people had used them in the past. My acronym for this is, as always with anything to do with children, it's a MESS. So MESS stands for model building, exploration, and social learning. So one thing that children do is they actually build abstract models. My first book was called The Scientist in the Crib. What I've been arguing for 30 years, and I think it has become a dominant view in the field of cognitive development, is that you can think about children as being like little scientists. They have hypotheses. They have theories about the world.

They test them against the data. And a theory is the kind of abstract model that lets you generalize a lot. That's exactly what a theory is. It lets you do things like make counterfactual inferences, say, if the world had been different, what would have happened? And those are much more powerful kinds of generalization than you get from what the current systems are doing, which is essentially just sort of pulling out statistical generalizations from the data. So you could think about it partly as being the difference between looking at a bunch of data and seeing all the correlations, and actually figuring out the causal structure that underlies that data. Being able to build this kind of abstract model of how the world works, instead of just saying here are the patterns in the data, gives you a lot more power. And we have a lot of evidence that babies and children are doing that, and we even have some ways of characterizing it formally and computationally. So that's one thing that kids do. A second thing that kids do that current computers don't is they perform experiments, except that when the kids do it, we call it getting into everything. If you look at any two-year-old, they're spending a lot of time and energy, often at the risk of their future survival, just being curious.
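To make the correlation-versus-causal-structure distinction concrete, here is a minimal sketch (my own illustration, not a formalism from Gopnik's lab): two variables can be strongly correlated in observational data purely because of a hidden common cause, and only an intervention, the child's getting into everything, tells the two structures apart.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hidden common cause Z drives both X and Y; X does NOT cause Y.
z = rng.normal(size=n)
x = z + 0.5 * rng.normal(size=n)
y = z + 0.5 * rng.normal(size=n)

# Passive observation: X and Y look strongly related.
print("observed corr(X, Y):", np.corrcoef(x, y)[0, 1])  # roughly 0.8

# Intervention do(X = 2): set X by fiat, which cuts the Z -> X link.
# Y's own mechanism is untouched, so Y does not move.
y_under_do = z + 0.5 * rng.normal(size=n)
print("E[Y | do(X = 2)]:", y_under_do.mean())  # roughly 0
```

A learner that only extracts the statistical pattern would wrongly predict that setting X moves Y; a learner with the right causal model predicts, correctly, that it will not.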

10:38

I have a two-and-a-half-year-old. Yeah, right in the midst of it.

10:42

So there's this interesting kind of paradox, which is that we have to put so much work into just keeping these babies out of trouble. Why would they be designed that way? You know, they could be designed so that they have some of these executive function abilities that we have as adults. They could be better at taking care of themselves. But there seems to be this real trade-off, people sometimes call it an explore-versus-exploit trade-off, where the things that you need to do to just go out in the world and get as much data, as much relevant data, as possible are sort of the opposite of the things that you need to do to act really swiftly and effectively. And the two-year-olds and babies and young children in general seem to be really designed to be variable and noisy and curious, and have all these characteristics that are very bad if you're trying to get your jacket on and get out to preschool in the morning, but are very good if you're trying to figure out the nature of the universe. And again, that kind of active learning is something that we've just been starting to explore computationally.
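The explore-exploit trade-off she describes has a standard formal toy: the multi-armed bandit. A minimal sketch, with all parameter values invented for illustration, where a single knob, the exploration rate, separates a "child-like" agent that samples widely from an "adult-like" agent that acts swiftly on its current best guess:

```python
import numpy as np

rng = np.random.default_rng(1)
true_payoffs = np.array([0.2, 0.5, 0.8])  # three unknown "activities"

def run(epsilon: float, steps: int = 2000):
    """Epsilon-greedy bandit: explore with probability epsilon, else exploit."""
    estimates = np.zeros(3)
    counts = np.zeros(3)
    total = 0.0
    for _ in range(steps):
        if rng.random() < epsilon:
            arm = int(rng.integers(3))           # explore: try something at random
        else:
            arm = int(np.argmax(estimates))      # exploit: act on the best guess
        reward = float(rng.random() < true_payoffs[arm])
        counts[arm] += 1
        estimates[arm] += (reward - estimates[arm]) / counts[arm]  # running mean
        total += reward
    return total / steps, int(np.argmax(estimates))

# The "child" pays a short-term cost in reward but maps out all the options;
# the "adult" earns more now, at the risk of locking onto a wrong guess early.
print("child-like, eps=0.5: ", run(0.5))
print("adult-like, eps=0.05:", run(0.05))
```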

There's work, for instance, that shows that even very young babies are already sensitive to information gain. So they'll pay attention, they'll look the longest at things that are actually going to give them the most new information, defined in a formal... Yeah, it's actually a sweet spot. There's a paper by my colleague Celeste Kidd, here at Berkeley; it's called the Goldilocks effect. There's a particular amount of information that's not so much that you're overwhelmed, but is enough to give you something new. Babies, even nine-month-olds, seem to be really tuned to that amount of information they can get.
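The Goldilocks result can be caricatured in a few lines: score each stimulus by its surprisal (negative log probability) under the learner's current model, and attend to items in a middle band, neither fully predictable nor overwhelming. All the numbers below are invented; the actual paper fits an ideal-observer model to infant looking times.

```python
import math

def surprisal(p: float) -> float:
    """Bits of information in an event the learner's model assigns probability p."""
    return -math.log2(p)

# Hypothetical stimuli and the probability the learner currently assigns them.
stimuli = {"familiar toy": 0.9, "food going plate-to-mouth": 0.3, "static noise": 0.001}

def goldilocks_score(p: float, low: float = 0.5, high: float = 6.0) -> float:
    """Prefer intermediate surprisal: informative, but not incomprehensible."""
    s = surprisal(p)
    return s if low <= s <= high else 0.0

print(max(stimuli, key=lambda name: goldilocks_score(stimuli[name])))
# -> "food going plate-to-mouth" (~1.7 bits, in the sweet spot)
```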

12:20

I also have a four-month-old.

12:24

See, your four-month-old, you know, kind of staring?

12:27

He's really... Yeah, he's starting to get interested in food. When we're eating, he'll just stop what he's doing to, like, follow the piece of food from the plate to the mouth, and just try to figure out what that is. He has all of a sudden started to pay attention to stuff.

12:44

Yeah, exactly, and what we've shown is that that paying attention isn't just random. He's paying attention to the things that are actually most likely to teach him something new, given what he already knows. And again, that's not something that you see in current AI. Current AI is kind of locked into its own mind and not going out and

13:03

actually getting... Well, is it necessarily that? I mean, correct me if I'm wrong, but it's set up to operate in very controlled environments, if you're talking about things like machine learning. We just did a video talking about how robots can be great in a warehouse, in a controlled environment. But if you just put down, like, a little two-by-four, they're completely flummoxed. They don't know what to do. Once you put in one variable, it all gets thrown off.

13:34

Yeah, exactly. So these are very good at doing very specific, well-defined kinds of tasks, but they're not very good at generalizing or changing. One of the examples I like to give is: amazingly, a computer like AlphaZero can play computer chess and can now beat the best grandmasters. But something it couldn't play is Addy chess. So, Addy chess is the way my three-year-old grandson, Addy, plays chess. You can actually see the Addy chess pieces right there. So, the way you play Addy chess is you take all the pieces, you throw them into the wastebasket,

and then you pull them out of the wastebasket and carefully put them back, more or less in the right places, on the chessboard. And then you throw them all back in the wastebasket and repeat. And I think there are two things that are interesting about Addy chess. One of them is that that kind of physical manipulation of objects is something that even the best robots that we have now aren't even in the ballpark of being able to do. Being able to deal with, you know, the random variation in where it lands in the wastebasket, and figure out how to get it back again, that's a really challenging problem. But then the other thing, which I think in a way is even more profound about Addy chess, is that he's making up a new objective for himself that no one has ever tried to accomplish before. Right? So with almost all the techniques that we have now, what we can do is we can say: here's the goal, here is the objective, here is a bunch of feedback about what the objective is. You know, here's your score, maximize your score. And again, that's very impressive,

15:03

and here's a big data set to crunch through, to train

15:06

on. And the impressive thing is that the machines can actually do as much as they can, given that information. But what people, including very young children, can do is set up a new objective. Say, what would happen if I tried to do this? And they can do that in this active, exploratory way. That's kind of mysterious, because some of it looks like it's very random, but on the other hand, it also looks like it's systematic. It looks as if the kids are exploring things in a way that's really helping them learn, and helping them learn something new. So that's a second piece that the kids have that the current AIs don't. And then a third piece is that the kids are learning in a social context, so they're learning from us. So you were giving the example about, you know, the three-month-old being interested in food. And part of the reason for that is: look,

these human beings around me are doing this thing. Why? What is it? Can I do it, too? Should I do it, too? How does it work? And other experiments that we've done, and again we can formally model some of this, show that children are very sensitive to other people and very good at learning from other people. And again, that's something that, although people are trying, current AI is not particularly good at.

16:18

So it sounds like what you're talking about is curiosity and trying to basically code curiosity.

16:24

So one of the things we're actually working on at the moment: we're collaborating with some of the computer scientists at Berkeley, particularly Pulkit Agrawal and Deepak Pathak. And they designed a really beautiful system, based on developmental psychology, to have what they called curiosity-based reinforcement learning. So the idea is, in typical reinforcement learning, you just say: here's the score, the score went up, the score went down, right? But in this case, you actually get rewarded for making predictions that don't fit with what you already know. So if you go to a part of the space, for example, where something unexpected happens, then you go back and try and figure out what that unexpected thing was and make sense of it. And that looks much more like what the kids are doing. Yeah, getting into stuff. And in fact, we're currently setting up environments where we can literally test four-year-olds and the curiosity-based AIs on exactly the same problem, and record what they do and see what the analogies and differences are.
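Curiosity-based reinforcement learning of the kind she describes (in the published Berkeley work, for instance Pathak and colleagues' curiosity-driven exploration) replaces the external score with an intrinsic reward proportional to the agent's own prediction error. Here is a stripped-down sketch of that loop; the environment, dimensions, and learning rate are all invented, and a real agent would also choose actions to seek out high predicted error rather than acting randomly:

```python
import numpy as np

rng = np.random.default_rng(0)
STATE_DIM, N_ACTIONS = 4, 3

# Forward model: predict the next state from the current state and a one-hot action.
W = rng.normal(scale=0.1, size=(STATE_DIM, STATE_DIM + N_ACTIONS))

def one_hot(a: int) -> np.ndarray:
    v = np.zeros(N_ACTIONS)
    v[a] = 1.0
    return v

def env_step(state: np.ndarray, action: int) -> np.ndarray:
    """Toy dynamics the agent must discover: decaying state nudged by the action."""
    return 0.9 * state + 0.2 * action

state = rng.normal(size=STATE_DIM)
for t in range(1000):
    action = int(rng.integers(N_ACTIONS))
    inp = np.concatenate([state, one_hot(action)])
    predicted = W @ inp
    nxt = env_step(state, action)
    error = nxt - predicted
    intrinsic_reward = float(error @ error)   # curiosity: surprise itself is the reward
    W += 0.01 * np.outer(error, inp)          # learn, so the same surprise fades
    state = nxt
```

The key property is that the reward signal shrinks as the forward model improves, which pushes the agent toward the parts of the space it does not yet understand, much like the child who goes back to figure out the unexpected thing.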

17:27

So, you said you've been working on this for 20-

17:30

ish years. Yes. So,

17:32

I'm curious because, you know, "AI is going to take over the world" has been a story that's bubbled up and gone away I don't know how many times since the fifties. And I'm curious, when you started working on this stuff, were people kind of like: that's an interesting theory, but, you know, you're just off in the wilderness? Well, it's

17:55

interesting, because when we started... I guess it was about 2000 when we were first starting this work. We'd been doing the work on developmental psychology, of course, for my entire career. But around 2000 there was a whole flurry of really interesting work by, for example, Judea Pearl, who's won the Turing Award for this, about causal graphical models. So the idea was that, at least in the case of causal inference, you could provide computational accounts of how you could learn an abstract causal model from statistical data,

18:32

learning cause and effect

18:33

from data. Learning cause and effect from data, computationally. There was really beautiful work, and it continues to be beautiful work, showing how it's possible to do that. And then that got generalized to an approach that's sometimes called a Bayesian approach, where the idea is, again, think about the child as if they were a little scientist. You can think about the process of learning as a process of making hypotheses about what the world is like and then checking them against the data. And these causal Bayes nets were a formalism that did that for the specific problem of causality. But then it got generalized to a lot of other examples. My collaborator Tom Griffiths, who was at Berkeley, did a lot of this work, and then there was a lot of excitement about that as a model for what human beings were doing, including what children were doing. But, as I say, it turns out to be quite difficult to implement that in real-time computers that are really doing real things. Not impossible, but challenging. And in parallel, it turns out that these neural network ideas that had also been around for a long time were suddenly becoming very feasible for real,

genuine, real computers, and solving real tasks in real time. So I think the interesting challenge now is: can we put together the theoretical ideas about Bayesian inference and structured hypotheses with the speed and the power of some of these new neural network applications? And a lot of people, including me, are thinking about making hybrid models that use both abilities. And as I think you could tell from that panel, you know, Demis Hassabis, who's the founder of DeepMind, which has been one of the great sources of advances in this, he's inviting developmental psychologists like me in because, as he said on that panel, he recognizes that for the next frontiers we're going to have to do something new beyond what we're doing now.
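The "child as scientist" picture has a standard Bayesian reading: maintain a set of hypotheses, score each by how well it predicts the data, renormalize, repeat. A toy version, with the hypotheses and likelihoods invented for illustration:

```python
# Prior beliefs over two causal hypotheses about a machine and a block.
posterior = {
    "block activates machine": 0.5,
    "block does nothing":      0.5,
}

# How likely each hypothesis says the machine is to light up with the block on it.
likelihood = {
    "block activates machine": {"lights": 0.9, "dark": 0.1},
    "block does nothing":      {"lights": 0.1, "dark": 0.9},
}

def bayes_update(posterior: dict, observation: str) -> dict:
    unnormalized = {h: p * likelihood[h][observation] for h, p in posterior.items()}
    total = sum(unnormalized.values())
    return {h: p / total for h, p in unnormalized.items()}

# Three "experiments": put the block on the machine, watch what happens.
for obs in ["lights", "lights", "dark"]:
    posterior = bayes_update(posterior, obs)

print(posterior)  # ~0.9 that the block activates the machine
```

The hard part Gopnik flags is scale: with realistically large hypothesis spaces, all the causal structures a child might entertain, enumerating and scoring every hypothesis this way is exactly where the computational explosion bites.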

20:29

It sounds like, ultimately, you're trying to recreate the human brain, or approximate it, and it feels like kids in particular, because they start from a blank slate, provide a kind of cleaner model for how that might actually work.

20:44

Well, I think there are two things. They're not really a blank slate, which is actually part of what's interesting about them. So one of the other things we've discovered is that babies and young children are born with a lot of ideas about how the world works, and it may be that it's because they aren't blank slates that they can learn as much as they can. So one challenge is: can we describe the things that they come into the world knowing? But there's another piece, which is that I've argued, and I think there's some evidence for this, that children may actually be really good models because they're actually the ones who are doing most of the learning. A child brain, I think there's reason to believe, is actually going to be better at learning new things than an adult brain. So it's not just that children are a nice example because they're not contaminated by having school and so forth. There may be something special about children and babies

21:33

Is that creativity, right? Creativity kind of diminishes the older you

21:37

get. Well, that's what we're trying to explore. That certainly seems to match what we feel intuitively. People don't understand very much in psychology about things like curiosity and creativity, and that is a whole interesting frontier: could we be more precise about what curiosity and creativity mean? But it certainly seems, on the surface, as if the kids are especially curious and creative, and we have some very recent studies that are actually showing that more systematically. But let me also say something about whether the robots are gonna take over

22:09

the world. That was actually my next question, so I'll just preface it: I've talked to a lot of people, like a lot of physicists or engineer types, and they're quite worried. And then you have computer scientists who are like, it's all gonna be fine, autocorrect doesn't even work, right? Everybody settle down. And it feels like there are, obviously, two ends of the extreme.

22:37

So I think there are two really different questions that people are raising. One of them is: you've got a new technology, it's really powerful, it could do terrible things. There's no question about that. But that's true for essentially all powerful new technologies. What you have to do is have systems for regulating and controlling them and making sure that they work the way that you want them to. In some ways the most interesting conversation I had at that Stanford launch was actually with someone who's been working on these problems, and he said,

23:09

Do you know where

23:10

circuit breakers came from? And I realized I have no idea where circuit breakers came from. And he said, well, actually, the insurance companies back in the early 20th century started insisting that we all have circuit breakers

23:22

So houses didn't burn down.

23:23

Because... exactly. So if you think about it from the perspective of the early 20th century: there's this thing called electricity, it's burning down houses, it's this incredibly powerful new thing, and you're suggesting that you just put it in everybody's house? That we have an outlet for this incredibly powerful force that literally kills people and destroys things and burns down houses, and we're just gonna put that in everybody's house? And the insurance companies said, okay, well, if we're gonna do that...

23:54

We need to, yeah...

23:55

We need circuit breakers, and we're not gonna let you put them in otherwise. And of course, it's not just the insurance companies, but then the government. Every time you remodel, when you have to do things to code, it's because people said: if we're going to use this force for good, we're going to have to put in a really elaborate system of regulation to make sure it does what we want and not what we don't want. And I think that's absolutely analogous to what's happening with AI. We're going to have to have a lot of regulation, a lot of decisions about things like: should these things be allowed to be involved in weapons? What happens if we attach them to machines that can actually do things? That's a really important, serious problem. And I think there's been a bit of a

sorcerer's apprentice quality, just like there was for previous technologies, where they ended up doing things that we didn't realize they were going to do. So, you know, I think Facebook genuinely thought that this was gonna be a way that you could see what your sister-in-law's kids were doing. And it is. But they didn't think about

24:57

it as, you know, undermining democracy.

24:59

Yeah, exactly. That wasn't part of the original... that wasn't part of the original business plan. My husband was one of the co-founders of Pixar, and he always says one of the things about Moore's law is, if you think of it as a new order-of-magnitude change every five years, what an order of magnitude means is that you can't envision beforehand what that order-of-magnitude change is going to be. So that's the part about AI that I think is perfectly sensible: it's a powerful new technology, we need to regulate it and make sense of it and think about it really, really thoughtfully. That's undoubtedly true. But I think there's a completely different narrative, which is the narrative about: there's a machine that's like us, that's going to be human,

and that's going to come and kill us all. That's the one where I think the computer scientists and psychologists, the people who actually know about human intelligence, are gonna be rolling their eyes at us, because there's such an enormous gap between what the very best of these systems are doing and anything that we see in human intelligence. And I think in the background to this, you know, literally going back to medieval literature, there have been these narratives about the machine that comes to life, the golem or Frankenstein's monster. It never ends well when the machine comes to life in our human imagination, right? There's always something that's creepy about that, and this is, you know, from before industrialization, let alone before AI. So I do think some of the fear is about this narrative, about something that's kind of human but kind of not human, and that's just a scary,

a profoundly scary thing. So I don't think that narrative has any kind of force. I don't think that's something that we should be worried about. What we should be worried about is regulating the technology, the way we should always be worried about

27:02

regulating powerful technologies. Because we have Elon Musk, for example, saying: oh, we're summoning the demon, this technology is an existential threat. If, when we quote-unquote get it right, or it gets really powerful, or, another way to think about it, if we get to artificial general intelligence, a thing that can think for itself, that is curious and creative and more powerful than any human brain, then that's the kind of uh-oh moment for humanity.

27:31

I don't think that's something that we should be worried about. One of the things that I say sometimes is, you know, one of the things you learn as a mother is you sort of have to ration your worries, because there's an infinite... You probably know this as a father of small children as well. There is an infinite scope for worrying about a three-month-old, so you sort of have to pick your battles about which things to worry about and which not to. I think if we're thinking about existential threats right now: climate change you could worry about every day, 24/7, and you still wouldn't be worried enough about it. And that's an interesting example, because that's the internal combustion engine, for heaven's sake, right?

That's not anything that anybody thought was going to be an evil force in the world, or something that was going to be this mysterious, human-like thing. It's just an engine in a car. And it turns out that that's the thing that is the existential threat. So, yeah, I'm not sure... The literary power of an existential threat like "we're going to have general intelligence, and they're going to be human-like, and they're going to come and kill us" is a lot stronger than "our internal combustion engines and our cars are going to come and existentially kill us all." But I think at the moment the latter threat is much more real than the first threat. And, you know, who knows what will happen in the future? We know that there are computers that can think on their own and solve problems and that might be a threat to the planet, because they're us. I mean, if we're cognitive scientists, we think that ultimately these creatures that are sitting in these chairs right now are some kind of computer. There's some kind of computational system going on in all those neurons that's leading to us doing the things that we do, including quite possibly destroying ourselves and destroying the planet. But that computer is really, really, really different from any of the most advanced things that we can imagine now.

29:36

And so, from where AI is right now, what's the next kind of step change? Because right now it feels like we have a lot of machine learning where you have just, like, brute-force computing being thrown at certain definable problems, like, you know, the performance of an industrial component, where you can start to predict when it's gonna go wrong, and things like that, or medical diagnostics, or whatever. There are certain defined areas where, over time, computers are just gonna be very, very good at dealing with very narrow problems. What's the dot-dot-dot in terms of the next kind of shift in how this stuff works or how it advances? Is it around basically looking at the human brain and translating it into a bunch of algorithms?

30:31

So again, I mean, what we try to do as cognitive scientists is exactly that. What we want to try to do is understand, for instance, how it is that babies and young children, a three-month-old who can look so apparently helpless and doesn't have all the infrastructure of education and so forth, can learn as much as they can. And we're still very, very far from solving that problem. But we think that solving that problem is gonna be some kind of computational story, and we have bits and pieces of answers to that problem already. Now, that progress might very well help us to think about how we could actually design systems that could do analogous things. But of course, it might be that we don't want to design systems that can do the things that humans can do. We want systems that can do the things that humans are really bad at doing,

like processing enormous amounts of data. So one of the things that we're going to have to decide, in terms of our future decisions about what kinds of systems to actually build, is how much we want to leverage the things that computers are really good at, like dealing with lots of data and going very quickly, versus the things they're very bad at, like creativity and curiosity. And, you know, I think a vision that is possibly as unrealistic as the dystopian visions, but still a relevant, more utopian vision, would be this. One of the things that industrialization did was, in a way, to turn people into computers. So one of the characteristics of life in the 19th and 20th centuries was having an awful lot of people doing tasks that we now know computers can do, like, you know, being bookkeepers or accountants or typing pools, or,

you know, Bob Cratchit sitting at his desk and moving numbers around. Now we have systems that can do those kinds of things. It might be that that will liberate us to be able to do the creative, curious things that are uniquely human, the same way that, to a large extent, industrialization liberated us from having to do all those physical kinds of tasks. That would be a kind of best-case scenario: the things that humans are really good at doing, like caring for other people or being curious, we could concentrate on doing, and the machines could do a lot of things that humans are in fact doing now, but that are not obviously the best uses of our computational power. Now again, there are gonna be really serious, difficult issues about how that transition will take place. But again, that's not different in kind from other kinds

33:13

of transition. And how far along are we, not in understanding, but in truly kind of unlocking how the brain

33:20

works? I think we're very, very far from understanding it. But

33:24

like just in

33:25

the foothills. Yeah, just barely in the foothills. And again, one of the nice things about being a developmental psychologist is that every day these tiny little creatures with no power, no authority, and no status just stun you with the amount they can do and the amount they can learn, in ways that we just aren't even in the ballpark of starting to understand. So I think it's gonna be a very long time before we have anything that looks like a complete understanding, if indeed we ever do. And things like our capacities for consciousness or experience, that's an example of something that we really don't understand hardly at all at the moment. So I think it's gonna be a long way before we can get that kind of understanding of what we can do. But in the meantime, some things, like figuring out what it would be like if you had a system that was curious and was deciding which kinds of data to get, that's something that I think we could do, and something that we could implement in a computational system. But, you know, another thing to say about Moore's law is that it's been very unpredictable about which things were gonna work and which things weren't. So I think even the people who designed the neural net algorithms, like the people who just got the Turing Award,

like Geoff Hinton, who I've known for a long time. I'm not sure he knew, before the last five years, that those ideas were going to explode in the way that they have, or that they were gonna turn out to be as feasible as they were, or that the invention of the Internet, which was a completely orthogonal invention, was going to mean that you could leverage millions of human beings to do a lot of your work for you. Human beings are the ones who are labeling those pictures and giving the examples for the machines to use. So I think that was all very contingent and sort of unpredictable. As is often the case in technology, we can't tell in advance which things are gonna turn out to be productive and which things aren't. But certainly, in principle... I mean, for instance, DARPA, you know, which is the kind of advanced

35:33

defense research arm?

35:34

Exactly. So the place that sort of invented computers and the Internet just put out a call for projects that include both developmental psychologists and computer scientists, very explicitly to try to solve some of these problems. So that's at least an indicator.

35:50

It has a kind of a catchy name. It's like the common

35:53

sense machine. Common sense, yeah. So that's, again... you know, DARPA sometimes has winners and losers, but they've got a pretty good track record for seeing what the next cutting-edge thing is going to be, right? And I have to say, I kind of almost morally like the idea that these babies and children who nobody pays much attention to, they don't have very much money, they don't have very much status, they're just little, they're the kind of stuff that women pay attention to, I kind of like the fact that they might turn out to be something that we should have really been paying attention

36:28

to all along. Well, I can see the movie now, huh? You know, a Pentagon research lab full of babies? Yeah. And guys in white coats trying to figure out how these little beings are actually learning, and then turning that into whatever.

36:45

Well, that's creepy. I like the idea of them going out to the preschool teachers, and the preschool teachers actually turning out to be the ones who are the great force in the universe, which I

37:01

think is probably... That's more like a rom-com: a four-star general and the preschool teacher, they get together to create an AI

37:08

to save the world or something? Something like that.

37:12

Um, to your point about how you can never predict how this is going to go: it is interesting that the goal is to try to create these kinds of super-intelligent, capable computing systems, and you have to kind of go back to first principles, i.e.

37:30

Babies. Yeah, yeah. Again, we don't know how it's going to go, but I think that's a really interesting, productive set of ideas. And you could also think of it as being kind of like a really vivid instance of what's sometimes called Moravec's paradox, which has been characteristic of AI all along, which is that the things that we thought, I described them once as being the core of nerd machismo, like playing chess, the, uh... I totally lost you. Well, "nerd machismo," I think, is a very,

very helpful concept. There's a particular kind of... you wouldn't necessarily think those two things would go together, but if you've been hanging around the tech industry, you know that they do. Yes. And the things that were sort of the greatest examples of human intelligence, like playing chess, turn out to be pretty easy,

38:26

or at least, you know, those were like the first parlor tricks

38:29

of AI. Yes. Well,

38:31

like, look at this. I mean, that's kind of child's play.

38:34

Or proving theorems, even, or doing math, things that you would think would be really, really, really hard and require the highest levels of intelligence. Whereas, as it turns out, things like playing Addy chess, or learning a first language, or figuring out common-sense principles about, you know, the fact that when you let go of objects, they fall, things that feel as if they're much more sort of, well, that doesn't count as being intelligence, right, every baby figures that out, those turn out to be the really hard problems, and the really distinctively human problems. And I think that's been true all along in the history of AI, and this is like the latest iteration of that

39:17

paradox. What is funny, talking about the kind of nerdy, chest-beating vibe that this place has, is that kind of admitting that you have to say, okay, actually, we need to completely re-approach this from a different direction. It's just really that juxtaposition, as you say, specifically here, in a tech industry that's so male and, as you said, kind of macho in a weird way. It does feel like it's gonna require people to change their mindset

39:49

a bit. Yeah, and I think that's happening. One of the wonderful things about engineers, as I've discovered, is, you know, they really want to make things work, and they're very willing, in some ways more willing than scientists are, to say: okay, this thing that I was using isn't working. I can find out something by looking at babies. I would never have thought about looking at babies before. That's funny, I didn't even quite realize that they were around. But I could learn something from them, so now I'm gonna go and find out about babies,

and that's actually a very childlike quality. That's kind of like what the babies are like themselves: this is interesting, this is cool, let me find out about this.

40:32

So, just to kind of wrap up: is it fair to think that this approach of kind of going back to first principles, so to speak, and looking at how child brains and so on work, is one of the leading edges of how AI is being thought about and approached now?

40:55

I think that's exactly right. Yeah, I think that's exactly right. The general idea that the way to go is to develop these more structured models and combine them with the deep learning, I think that's very much in the air as the next step. And it's partly this inevitable back-and-forth that you see in AI, and you see in cognitive science. Here's the way I would describe one of the biggest, deepest problems that we have as cognitive scientists and psychologists and machine learning and AI people: look, you go out and you look at the world, and we know an incredible amount about the world, some of it built in, and some of it we learn. But the only information that comes to us from the world is a bunch of photons hitting the back of our retinas and disturbances of air in our ears. And the puzzle is, how could we get to these really powerful, structured, generative models and representations from that very,

very limited kind of data? And going back really to Plato and Aristotle, the two ways of thinking about it have been, well, let's really emphasize the representations, how powerful the representations are; and the other has been, let's really pay attention to the data, let's really pay attention to how much data there is. And AI, and for that matter philosophy and psychology, have always sort of ping-ponged back and forth between saying, oh, it's the structured representations that are really important, and, no, no, it's the data that's really important. And I think we're in one of those cycles now, where the success of deep learning has meant people really paying attention to the data. And now we're starting to realize:

oh no, now we have to start paying attention to the knowledge itself. And hopefully, at some point, and again, this is one of the nice things that doing developmental psychology tells you, if you actually look at kids, you have to say: no, both of those things are important. Kids are both pulling out a lot of data, in ways that we don't understand, and reacting to it, and building these abstract models. And if we could figure out how you combine both of those, that would really be the key to seeing how you solve

43:02

that kind of problem. And that is all the time we have. I want to thank Alison for letting me into her house; even though she'd forgotten, uh, she was still kind enough to accommodate. And yeah, I hope you enjoyed the conversation. Especially with not one but two very small humans in the house, I found the conversation super interesting. That is it for this week. If you want to see what I'm up to, check out the Sunday Times. You can also go online at thetimes.co.uk. You can email me at danny.fortson@sunday-times.co.uk. I'm also on Twitter at Danny Fortson. That is all I have.

And with that, I will leave you with these very, very wise words. Bye bye, Daddy. First dive for two. What is that? Me? Who's Danny Fortson? Cool. Yeah, I've returned as cool. Yeah, yeah. What are you eating?
