Unintended Consequences
TED Radio Hour

Full episode transcript -

0:00

Hey, it's Guy here, and I want you to know that our partners at TED just launched a new podcast. It's called The TED Interview, and in each episode, TED's curator, Chris Anderson, sits down with a different TED speaker to dive deeper into their ideas. The first episode features author Elizabeth Gilbert, who expands on her ideas on how to discover your most creative self. You can find The TED Interview wherever you get your podcasts. This is the TED Radio Hour. Each week, groundbreaking TED Talks. Technology, entertainment, design. Design? Is that really what it stands for? I've never known. Delivered at TED conferences around the world, the gift of the human imagination. We've had to believe in impossible things. The true nature of reality beckons from just beyond. Those talks,

those ideas, adapted for radio, from NPR. I'm Guy Raz. So have you ever seen these surprise egg videos on YouTube? They're videos for kids that show someone opening a plastic or chocolate egg, usually one with a cartoon character or a superhero on it, and then finding a small toy inside. These videos don't just have a few thousand views. Some of them have hundreds of millions of views. There are tons of them. I mean, there's something about it that kids just love. This is James Bridle. He's a writer and artist. These videos can go on for hours at a time, of just a pair of hands on the screen, softly and gently opening up product after product to reveal what's inside, and it is incredibly slow,

gentle, quiet, but seemingly endless. You know, you watch one, then there's another one, and another one, and another one. There's just vast amounts of this. Why don't we get another egg and open it? And it does something to kids' brains, essentially. What do you mean? Well, I've been trying to understand it a little bit, and I'm really not a child psychologist or a specialist in this area.

I mean, adults might be more familiar with unboxing videos, which have been around for a while, this kind of fetishistic opening up of consumer goods. But if you look at the history of children's TV, for example, Sesame Street kind of pioneered this. And then I think there was a program in the U.S., it was called Blue's Clues or something like that, and the first innovation that Blue's Clues did was that they showed the same episode over and over again. Like, they showed the same episode for a week, so the kids knew in advance what was coming, and what they found was that the kids absolutely loved this.

They loved the repetition of it. Building a kind of world that is predictable in this way seems to be like catnip for kids, and you can throw in these little surprises, which get you your little dopamine hits. And so when that's built into educational programming, something like Sesame Street, you can see it being used for good. And that's not to say that these videos are being used for evil, but they've just picked out that mechanism. Hand me another egg. Which one do you think she's gonna pick this time? They're not trying to do anything else with it. They're not going to hook kids with that mechanism and then teach them arithmetic. They're just using the hook, and so that makes kids susceptible to it in super obvious ways.

It also makes others susceptible to it in less obvious and more complex, but I think also, you know, really quite dangerous and damaging ways as well. James Bridle picks up this idea from the TED stage. So this is where we start. It's 2018, and someone, or lots of people, are using the same mechanism that, like, Facebook and Instagram are using to get you to keep checking that app, and they're using it on YouTube to hack the brains of very small children in return for advertising revenue. At least, I hope that's what they're doing. I hope that's what they're doing it for, because there are easier ways of making ad revenue on YouTube. You can just make stuff up or steal stuff. So if you search for really popular kids' cartoons like Peppa Pig or Paw Patrol,

you'll find that there's millions and millions of these online as well. Of course, most of them aren't posted by the original content creators. They come from loads and loads of different random accounts, and it's impossible to know who's posting them or what their motives might be. All right, does that sound kind of familiar? Because, really, it's exactly the same mechanism that's happening across most of our digital services, where it's impossible to know where this information is coming from. It's basically fake news for kids, right? And we're training them from birth to click on the very first link that comes along, regardless of what the source is. That doesn't seem like a terribly good idea. A lot of the technology we invent can be amusing and educational. It can also be amazing and life-changing, because the human impulse to chase technological advancement is a fact. We can't stop that train.

But how often do we pause and just think about the dark side of innovation? Well, today on the show, we're gonna explore some of those unintended consequences and whether we have the capacity to manage them. Because even if we can do something in bigger and faster and flashier ways, does that always mean we should? Well, when it comes to videos on YouTube or anywhere online, James Bridle says there could be big unintended consequences, and not just from getting kids addicted to them, but something even more unsettling. So the main way people get views on their videos, and remember, views mean money, is that they stuff the titles of these videos with these popular terms. So you take, like, "surprise eggs," and then you add "Paw Patrol," "Easter egg," or whatever these things are, all of these words from other popular videos, into your title, until you end up with this kind of meaningless mash of language,

right? That doesn't make sense to humans at all. Because, of course, it's only really tiny kids who are watching your video, and what the hell do they know? Your real audience for this stuff is software: the algorithms, the software that YouTube uses to select which videos are like other videos, to make them popular, to make them recommended. And that's why you end up with this kind of completely meaningless mash, both of title and of content. And on the other side of the screen, there still are these little kids watching this stuff, their full attention grabbed by these weird mechanisms. So there's autoplay, where it just keeps playing these videos over and over and over in a loop, endlessly, for hours and hours at a time. And there's so much weirdness in the system now that autoplay takes you to some pretty strange places. This is how, within, like,

a dozen steps, you can go from a cute video of a counting train to masturbating Mickey Mouse. Yeah, I'm sorry about that. This does get worse. This is what happens when all of these different keywords, this desperate generation of content, all comes together into a single place. This is where all those deeply weird keywords come home to roost. The stuff that tends to upset parents is the stuff that has kind of violent or sexual content, right? Children's cartoons getting assaulted, getting killed, weird pranks that actually genuinely terrify children. What you have is software pulling in all of these different influences to automatically generate kids'

worst nightmares. And this stuff really, really does affect small children. Parents report their children being traumatized, becoming afraid of the dark, becoming afraid of their favorite cartoon characters. If you take one thing away from this, it's that if you have small children, keep them the hell away from YouTube. You know, I was talking to the child of a friend of mine who's in high school, and when a high school student wants to learn how to do something, they will find a video on YouTube. There's a video on YouTube for almost anything, you know, how to fix your car or how to boil an egg in an Instant Pot, which I just watched because I wanted to know how to boil an egg in an Instant Pot. And I, as a 43-year-old man,

am just kind of discovering this world, right? There are amazing things about access to YouTube and, obviously, Internet technology, which have been transformational. But at the same time, I have to wonder whether we are all, especially people who are born into that digital world, part of this giant uncontrolled experiment, and we just don't really know what the results of that experiment will be, how it will change us as a species. Do I sound like a crazy person, or is there something to that? No, I think you're right. I think describing it as a kind of grand experiment is spot on. We have kind of released these technologies. But I mean, the release of any new technology, any one of these advances, is always an experiment. It's impossible to test these things at scale.

The thing that I find fascinating, really, is the fact that we're not really paying attention to the results of that experiment, because the point of an experiment is you test something and then you make decisions or changes based on the results. Yeah, and it's fairly clear that the experiment which we've been participating in for some time now, of completely unregulated, particularly advert-driven content online, is not one that's working out very well. And you can see the results of that at multiple levels. You can see it in the weird kid stuff that we're talking about, but you can also see it at the largest scale, in this kind of dilution of knowledge, or this kind of fundamentalization of knowledge, I would say. By which I mean that, you know, one of the other things that YouTube optimizes for is sensation, and this has become really clear as YouTube has become this kind of repository of knowledge that people go to look things up in. You know,

certain systems of knowledge are designed so that when you discover something and you want to discover more, it takes you deeper and deeper into that subject. YouTube is designed to show you the thing about that subject that is the most sensational, right? Which is why, I mean, just the other day, I was watching a speech from Walter Cronkite from the early eighties about climate change, right, and it's really interesting to watch from back then, just how obvious and settled this debate was. But YouTube's autoplay-next recommendation was a three-hour speech from a climate change denier from a year ago. And that's not because YouTube holds some inherent belief about climate change. It's because that content is sensational, and what YouTube wants to do is show you things that will cause you strong reactions, because that's what gets you watching. So we've decided to optimize for reactions and sensation over other forms of, kind of, verified knowledge. But

we've decided to do that, and we could decide to do otherwise if we pay attention to the results of this experiment. James Bridle. He's a writer, artist and author of the book New Dark Age: Technology and the End of the Future. You can see his full talk at ted.com. Can we just acknowledge for a moment that things like big data and AI are going to be revolutionary? I mean, they are going

12:14

to change everything. Yes, in almost every field of action, and the opportunities are amazing.

12:21

This is Yuval Noah Harari. He's a historian and an author.

12:26

So in 30 years, artificial intelligence and biometric sensors might provide even the poorest people in society with far better health care than the richest people get today. You can hardly think of a system, whether it's communication or traffic or electricity, which won't benefit from these kinds of developments.

12:49

But Yuval thinks that in the bigger picture, how AI and big data might affect political power could have really dangerous unintended consequences.

13:00

More and more governments are reaching the conclusion that this is like the Industrial Revolution in the 19th century: whoever leads the world in AI will dominate the entire world.

13:15

And when we come back in just a moment, Yuval Harari will explain how AI might threaten to destroy liberal democracy. Stay with us. I'm Guy Raz, and you're listening to the TED Radio Hour from NPR. This message

13:32

comes from NPR sponsor VITAS Healthcare. When your advanced-illness patients need an advanced-illness specialist, rely on VITAS Healthcare. Rely on them for end-of-life care knowledge and resources, including 24/7 acceptance of referrals and complex modalities for symptom management. For more on how VITAS can help you and your patients or residents,

13:53

go to vitas.com/advantage. It's the TED Radio Hour from NPR. I'm Guy Raz, and on the show today, ideas about unintended consequences and the dark side of innovation. Just a moment ago, we were hearing from historian Yuval Noah Harari, who warns that AI and big data could present a real threat to liberal democracy. If

14:19

you look at the clash between communism and liberalism, you can say it was a clash between two different systems for processing data and for making decisions. The liberal system is, in essence, a distributed system. It distributes information and the power to make decisions between many individuals and organizations. In contrast, communism and other dictatorial systems, they centralize. They concentrate all the information and power in one place, like in Moscow in the case of the Soviet Union. Now, given the technology of the 20th century, it was simply inefficient, because nobody had the ability to process all the information fast enough and make good decisions. I mean, how many cabbages to grow in the Soviet Union? How many cars to manufacture? How much will each car cost?

If you try to make all these decisions in one place, when what you have is typewriters and filing cabinets and pen and paper and things like that, it just doesn't work. And this is one of the main reasons, if not the main reason, that the Soviet Union collapsed. So this was really a result of the prevailing technological conditions.

15:40

But those technological conditions have obviously changed. Here's more from Yuval Noah Harari on the TED stage.

15:49

In the 20th century, democracy and capitalism defeated fascism and communism because democracy was better at processing data and making decisions. But it is not a law of nature that centralized data processing is always less efficient than distributed data processing. With the rise of artificial intelligence and machine learning, it might become feasible to process enormous amounts of information very efficiently in one place, and then centralized data processing will be more efficient than distributed data processing. The greatest danger that now faces liberal democracy is that the revolution in information technology will make dictatorships more efficient than democracies. And then the main handicap of authoritarian regimes in the 20th century, their attempt to concentrate all the information in one place, will become their greatest advantage.

17:05

So are you saying that the threat to liberal democracy increases as the ability of machines to process more and more data improves? Yes,

17:17

in many ways. So in the 20th century, the supporters of liberal democracy had a kind of relatively easy time, because you did not have to choose between ethics and efficiency. The most ethical thing to do was also the most efficient thing to do: to give power to the people, to give freedom to individuals. All these things were good both ethically and economically. And most governments around the world that liberalized their societies in the last few decades, they thought, if we want a thriving economy like the U.S. economy or like the German economy, we need to liberalize our societies. So even if we don't like very much to do it, we have to do it. But what happens if suddenly this is no longer the case?

It's still the best thing to do from an ethical perspective to protect the privacy and the rights of individuals, but it's no longer the most efficient thing to do. The most efficient thing to do is perhaps to build these giant databases which completely ignore the privacy and the rights of individuals. And it's the most efficient thing to do to allow algorithms to make decisions on behalf of human beings. The algorithms will decide who we will accept to this university. The algorithms will tell you what to study and where to live and even whom to marry. And if this is more efficient, what happens to the ideals of freedom and human rights and individualism? This becomes a much more problematic issue than in the 20th century. Another technological danger that threatens the future of democracy is the merger of information technology with biotechnology,

which might result in the creation of algorithms that know me better than I know myself. And once you have such algorithms, an external system, like the government, cannot just predict my decisions. It can also manipulate my feelings, my emotions. A dictator may not be able to provide me with good health care, but he will be able to make me love him and to make me hate the opposition. The enemies of liberal democracy, they have a method. They hack our feelings. Not our emails, not our bank accounts. They hack our feelings of fear and hate and vanity, and then use these feelings to polarize and destroy democracy from within. Because in the end, democracy is not based on human rationality.

It's based on human feelings. During elections and referendums, you're not being asked, "What do you think?" You're actually being asked, "How do you feel?" And if somebody can manipulate your emotions effectively, democracy will become an emotional puppet show.

20:41

So your conclusion is, he who controls the data controls the people?

20:50

Yes. And it starts with the understanding that, at least according to science, our feelings do not represent some mystical free will. They represent biochemical processes in our bodies and, of course, influences from the environment. Now, what we also need to remember is that it should be technically possible to decipher, to hack, human beings and human feelings. In order to hack a human being, you need a lot of biological knowledge and you need a lot of computing power. And until today, nobody could do it. Yeah. And therefore, people could believe that humans are unhackable, that human feelings reflect free will,

and nobody can ever understand me and manipulate me. And this was true for the whole of history. But this is no longer true. Once you have a system that can decipher the human operating system, it can predict human decisions, and it can manipulate human desires and human feelings. I mean, until today, no politician really had the ability to understand the human emotional system. By trial and error, they see what works, and it changes all the time. But if we reach a point when we can reliably decipher the human biochemical system and basically sell you anything, whether it's a product or a politician, then we have a completely new kind of politics.

22:25

You know, I know that you probably come across people who have helped to create this technology, people who have this kind of utopian idea of how data and data processing could change the world in positive ways. But with those same people, you have to wonder whether they stop to think about the unintended consequences.

22:48

When you develop this kind of technology, in most cases, you obviously focus on the positive implications. And until today, humankind has managed to avoid the worst consequences. The most obvious example is nuclear technology. All the doomsday prophecies from the 1950s and 1960s about a nuclear war which would destroy human civilization, it didn't happen. Humankind successfully rose to the challenge of nuclear technology. Whether we can do it again with AI and with biotechnology is an open question.

23:27

Yuval, you will know this well as an Israeli, somebody who lives in the biblical lands. Prophets are rarely rewarded. In fact, they're usually disliked, even when they're right. And oftentimes, when they're right, it doesn't matter, because their warnings are so dark, and we ignore them at our peril.

23:47

I definitely don't see myself as a prophet, and I don't think that anybody can prophesy the future. Actually, it's pointless. I define myself as a historian, and what I try to do is map different possibilities. There is always more than one way in which we can go from here, and the reason I think it's important to have this discussion is because it's not too late. I see my job as changing the discussion in the present. We can still influence the direction in which this technology is going. There are always different possibilities.

24:28

Yuval Noah Harari teaches history at the Hebrew University of Jerusalem. His latest book is called 21 Lessons for the 21st Century. You can see his entire talk at ted.com. On the show today, ideas about the scope and scale of human innovation and some of its unintended consequences. What do you remember about the talk around the Internet in 2006? I look back then, and I recall a time when we all used to float around just thinking that we are doing work that's essentially altruistic. We were just like, we're connecting people to information and to each other. This is going to transform democracies, and it's gonna empower populations. And we really didn't think about all of the platforms and apps being developed as tools for everyone who's malicious to just be even more effective. This is Yasmin Green. I am the director of research and development at Jigsaw, and Jigsaw is a technology company created by Google. So we look at problems like repressive censorship, cyber attacks, online radicalization, and we try to build technology that can help protect people from these threats.

Yasmin started out at Google in 2006. This is a year after YouTube was born and the same year that Twitter was created. But that utopian vision of the Internet that she was just describing, well, it didn't anticipate a platform for trolls and hate groups and extremists to find each other and build a like-minded community. One prime example: the extremist group ISIS. ISIS has been kind of given the accolade of being, you know, the first terrorist group to really understand the Internet. There was no technological genius in their use of the Internet. There's nothing that ISIS did that was impressive from an innovation perspective. They used the tools that are open to all of us to use for connecting, for sharing, for activism. They used them almost in the way that the rest of us use them, except they used them with destructive ends in mind. And so, back in 2015, around the time ISIS was recruiting heavily and gaining momentum,

Yasmin and her team at Jigsaw wanted to find out how ISIS was so effective. Here's Yasmin Green on the TED stage. So in order to understand the radicalization process, we met with dozens of former members of violent extremist groups. One was a British schoolgirl who had been taken off of a plane at London Heathrow as she was trying to make her way to Syria to join ISIS. And she was 13 years old. So I sat down with her and her father, and I said, "Why?" And she said, "I was looking at pictures of what life is like in Syria. I thought I was going to go live in the Islamic Disney World." That's what she saw

in ISIS. She thought she'd meet and marry a jihadi Brad Pitt, go shopping in the mall, and live happily ever after. ISIS understands what drives people, and they carefully craft a message for each audience. Just look at how many languages they translate their marketing material into. They make pamphlets, radio shows and videos in not just English and Arabic, but German, Russian, French, Turkish, Kurdish, Hebrew, Mandarin Chinese. I've even seen an ISIS-produced video in sign language. It's actually not tech savviness

that is the reason why ISIS wins hearts and minds. It's an insight into the prejudices, the vulnerabilities, the desires of the people they're trying to reach that does that. That's why it's not enough for the online platforms to focus on removing recruiting material. If we want to have a shot at building meaningful technology that's gonna counter radicalization, we have to start with the human journey at its core. Yeah, I mean, given the Internet's virtues to connect people, would it ever have been possible to prevent bad actors from also taking advantage of those tools? There's a lot of bad stuff that does get stopped, and it's easy and dangerous to say, well, there are good people and bad people, because it ends up the prescription is really punitive technologies or policies,

which is, let's suspend people, or let's censor people, let's punish people. And in most of the cases, my conversations with former ISIS recruits or supporters or extremists is that they were people with almost legitimate questions, and they went down a bad path. But more information, better information, earlier in the process could have steered them in a different direction. Radicalization isn't a yes-or-no choice. It's a process, during which people have questions about ideology, religion, living conditions, and they're coming online for answers, which is an opportunity to reach them. So in 2016, we partnered with Moonshot CVE to pilot a new approach to countering radicalization, called the Redirect Method. It uses the power of online advertising to bridge the gap between those susceptible to ISIS's messaging and those credible voices that are debunking that messaging. And it works like this:

Someone looking for extremist material, say they search for "how do I join ISIS," will see an ad appear that invites them to watch a YouTube video of a cleric, of a defector, someone who has an authentic answer. And because violent extremism isn't confined to any one language, religion or ideology, the Redirect Method is now being deployed globally to protect people being courted online by violent ideologues, whether they're Islamists, white supremacists or other violent extremists, with the goal of giving them the chance to hear from someone on the other side of that journey. I mean, it sounds like you're trying to kind of break this down with the hope, I guess, of at some point figuring out how to solve this. But this is a long-term project. This is not gonna happen overnight, right? Yes. When our group was started seven years ago,

I remember feeling like this is a real gamble. And now, you know, I think we have to have more people within technology companies that think about the world through this lens. Like, it's not enough just to focus on your platform and, you know, the micro instances that you see. You have to think about terrorist groups and their goals and their strategies and what they're doing across the whole Internet. I mean, you have to have a big-picture view. We can't be so tunnel-visioned anymore. The more that we do that, the better we'll be at spotting problems early. Yeah, when you go around the world and you see how groups who have power are actually using technology to reinforce their power, you realize that the kind of utopian vision of what the Internet was going to be was not inevitable. We have to be proactive and step in if we want to have a chance of realizing that.

That's Yasmin Green. She's the director of research and development at Jigsaw. You can see her full talk at ted.com. On the show today, ideas about unintended consequences. I'm Guy Raz, and you're listening to the TED Radio Hour from NPR. Hey, everyone, just a quick thanks to two of our sponsors who helped make this podcast possible. First, to Microsoft. Microsoft wants you to know that the newest member of the Microsoft Surface family, the Surface Pro 6, is now faster and more powerful than ever before, so you can get even more done, whether it's from your office or on your couch.

Take the keyboard off and draw on it easily, or snap it back on and type on it like a laptop. With the 13 and a half hours of battery life and the new eighth-gen Intel Core processor, you can work how you want to, for as long as you want to, wherever work takes you. Thanks also to Walmart. Resa Pittman is store manager of a Walmart that uses a robot called Bossa Nova to relieve associates of repetitive tasks so they can focus on sales instead. It used to take about two weeks for us to be able to get through all of the out-of-stocks in the store. It takes about two hours for the Bossa Nova to go through the aisles. To learn more about the partnership between tech, associates and customers at Walmart, visit walmarttoday.com/technology.

33:43

Dia de los Muertos is a special time for remembering those we have lost, and Alt.Latino's sonic altar is just that, a musical celebration of the lives of those we have loved. Check it out on the next Alt.Latino, wherever you get your podcasts.

34:02

It's the TED Radio Hour from NPR. I'm Guy Raz, and on the show today, unintended consequences: ideas about the things we invent to make life better and the outcomes we never see coming. Like the first smart device you got. Well, I think the Fitbit was the first thing I got that was kind of Internet of Things, like taking real-world data and quantifying it for me so I could reflect on it, and it could, you know, improve my life, improve my fitness. And did it? This is Kashmir Hill. No. Kashmir is a tech reporter for Gizmodo Media. I'm specifically focused on privacy, so I spend a lot of time thinking about the way that technology is changing the way that we live and what happens to our data and our information. Do you remember when you first heard the term "Internet of Things"? Like, oh, you're gonna have a smart fridge that's gonna tell you you need more eggs, and you're gonna have a smart coffeemaker that's going to, you know,

tell you how much coffee you're drinking, and everything in your kitchen is gonna be connected, and it's gonna be awesome. And I'm thinking, that's awesome. It's like those Wallace and Gromit movies where he's, you know, springboarded out of the bed and then, like, drops into his breakfast seat, and there's the toast and the coffee. You know, this was really exciting to me. Yeah, and hopefully, you know, they would do all this work for you.

So you would have more free time to do more interesting, creative things, or watch more Netflix. But yeah, I was kind of imagining my house anticipating my needs and really taking care of me. Kashmir actually got a chance to experience what it would be like to live in an entire home filled with smart devices. I got a smart toothbrush, a smart coffeemaker, a robot vacuum, a Roomba. Um, I bought a smart sex toy. Except things didn't go exactly as planned. Kashmir Hill picks up the story from the TED stage. Being smart means the device can connect to the Internet, it can gather data, and it can talk to its owner. But once your appliances can talk to you,

who else are they going to be talking to? I wanted to find out. So I went all in and turned my one-bedroom apartment in San Francisco into a smart home. Altogether, I installed 18 Internet-connected devices in my home. I also installed a Surya. Hi, I'm Surya, and I monitored everything the smart home did. I built a special router that let me look at all the network activity. Kashmir and I are both journalists. He's not my husband. We just work together at Gizmodo. And to clarify, with the devices Kashmir bought, we were interested in understanding what they were saying to their manufacturers. But we were also interested in understanding what the home's digital emissions looked like to her Internet service provider. We were seeing what her ISP could see, but more importantly,

what they could sell. We ran the experiment for two months. In that two months, there wasn't a single hour of digital silence in the house, not even when we went away for a week. Yeah, it's so true. Based on the data, I knew when you guys woke up and when you went to bed. I even knew when Kashmir brushed her teeth. The devices Kashmir bought almost all pinged their servers daily. But do you know which device was especially chatty? The Amazon Echo. It contacted its servers every three minutes, regardless of whether you were using it or not.
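The episode doesn't detail the tooling behind that monitoring, but as a rough sketch of the idea, here is how a machine acting as the home's gateway could log which smart device contacts which server, using Python and the scapy packet-capture library. This is an illustration only, not the setup Hill and Mattu actually used, and the device names and MAC addresses are hypothetical.

```python
# Minimal sketch: log every DNS lookup made by known smart-home devices.
# Assumes this runs (as root) on a Linux box acting as the network
# gateway, so all device traffic passes through it. Not the authors' setup.
from collections import Counter

from scapy.all import DNSQR, Ether, sniff

# Hypothetical mapping of device MAC addresses to friendly names.
DEVICES = {
    "aa:bb:cc:dd:ee:01": "amazon-echo",
    "aa:bb:cc:dd:ee:02": "smart-coffeemaker",
}

query_counts = Counter()  # (device, domain) -> number of lookups seen

def log_dns(pkt):
    """Record which device asked for which domain."""
    if pkt.haslayer(Ether) and pkt.haslayer(DNSQR):
        device = DEVICES.get(pkt[Ether].src.lower())
        if device:
            domain = pkt[DNSQR].qname.decode(errors="replace").rstrip(".")
            query_counts[(device, domain)] += 1
            print(f"{device} -> {domain}")

# Capture DNS traffic (UDP port 53) until interrupted (Ctrl-C).
sniff(filter="udp port 53", prn=log_dns, store=False)
```

Tallying lookups per device this way is enough to surface the pattern described above, such as one device phoning home every few minutes even when idle.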

Wow, I'm just wondering here, with all these smart devices, what are we actually giving up? I mean, so there's a couple of things. I was focused on privacy, but there are definitely security concerns with connecting our devices to the Internet. Devices are made by companies that have traditionally not been Internet companies, so they're not as savvy about Internet security. We are exposing ourselves to the possibility of people intruding in our homes through these devices. That has happened with baby monitors, for example. And so you had hackers that were accessing cameras in babies' rooms, sometimes even able to talk to the babies. So that's really alarming. What I was more concerned about

was this tracking of what we're doing in our most intimate spaces, and what is eventually done with that data, and just the feeling of being constantly observed in our own homes, how that changes really the sanctity of the home. The devices Kashmir bought ranged from useful to annoying, but the thing they all had in common was sharing data with the companies that made them, with email service providers and social media. We've long been told that if it's free, you're the product. But with the Internet of Things, it seems, even if you pay, you're still the product. So you really have to ask: who is the true beneficiary of your smart home, you or the company mining you? So when a smart TV or an Amazon Echo or a Google Home or whatever device you have, right, when those devices are pinging the companies and then sending that data back, I mean,

they're just building a profile of who we are. I mean, absolutely. You know, iRobot makes Roomba, the smart vacuum. So they know, for example, how often I vacuum my house and which parts of the house are dirty. The CEO of iRobot at one point was talking to journalists and said, yeah, we actually have access to great data. We have maps of people's homes, and we could potentially sell that to all these companies that are getting into the smart home market, like Google and Amazon and Apple. And people who had Roombas flipped out, because they hadn't thought about that at all, that their vacuum was sucking up information about their home that could potentially be sold.

And the iRobot CEO later walked it back and said, oh, you know, we'd never do that without people's consent. But it was one of these wake-ups to the fact that these devices in our homes, you know, they don't look like data collectors. They don't look like cameras. They don't have obvious lenses. But they are watching what we're doing and collecting information about what we're doing. And I think every single company now is thinking, you know, how do we monetize data? Data is the new oil. What can we do to make new revenue streams? And they're looking at data. Yeah. Yes, I mean,

once there are these comprehensive profiles of our behaviors, our habits, and our likes and dislikes, I mean, we could be judged before we even walk through the door, like for a bank loan or a job or, you know, anything, right? And I don't think companies think of this as nefarious. They say, like, oh, we just want to know what you want to buy. Yeah, but I think what we've seen with online tracking is that there are nefarious uses of this data, you know,

attempts to manipulate the way that we think about the world, to try to influence the way that we vote. And it's all happening with a profile of us that we don't know about, that's been compiled by a company we've never heard of before. And it's just creating this real paranoia for people, because they don't know why they're seeing what they're seeing, but they kind of know that there's this data collection going on. And, um, I think people are getting really worried about how these companies are influencing us and how much access to our data they have. Kashmir Hill. She's a reporter for Gizmodo Media. You can see Kashmir and Surya's full talk at ted.com. On the show today, ideas about the dark side of innovation, the fallout, downstream effects and the unintended consequences of all of this technology

42:17

we're creating today. I'm not sure that's exactly the way I would put it,

42:23

and then there are some of us who aren't freaking out, who actually love unintended consequences.

42:29

What I love about them is the way that life is so unpredictable, and you really wouldn't have positive surprises unless there were also negative ones. This

42:40

is Edward Tenner. He's a historian of technology.

42:44

To me, the philosophy of unintended consequences really means keeping open. It means constantly observing. The people who see endless despair and suffering on one side from technology, and the people who see a wonderful new world, are both really ideologues. And I don't think that either is wrong. I think it's important to have a perspective on decisions and on history that will let us look at change with more equanimity.

43:19

And as Edward points out, if you look at innovation throughout history, it's always better to take the long view. Here's more from Edward Tenner on the TED stage. Let's go to 10,000 years before the present, to the time of the domestication of grains. What would our ancestors 10,000 years ago have said if they really had technology assessment? And I can just imagine the committees reporting back to them on where agriculture was going to take humanity, at least in the next few hundred years. It was really bad news. First of all, worse nutrition, shorter life spans. It was simply awful for women. The skeletal remains from that period have shown that they were grinding grain morning, noon and night. And politically, it was the beginning of a much higher degree of inequality. If there had been rational technology assessment,

then I think they very well might have said, let's call the whole thing off. Of course, this was going to be better in the long term for humans, but there were gonna be some bad things that were gonna happen as well.

44:36

That's right. And my point there was, we have to recognize the limits of technological assessment. We have to take a longer view, which means that sometimes, I like to say, things can go right only after they've gone wrong. And if we try to prevent any new development with potentially bad consequences, we may be freezing in the bad consequences that we already have,

45:03

which is interesting, because it reminds me of something you bring up in your TED Talk, which I didn't know, which is the example of the Titanic. After it sank, there were all these laws that were passed that required ships to carry more lifeboats.

45:17

Yes, well, I think when we think about the Titanic, we have to disregard for the moment the films we've seen about the Titanic and put ourselves in the position of captains of ships on the North Atlantic at that time. Sea ice was known as a problem, but it was not always a problem that caused massive loss of life. It could damage your ship, but there was always time for the rescue of the passengers and crew. So the Titanic was really a case where everything worked out wrong and the rescue didn't come.

45:58

The lesson of the Titanic, for a lot of the contemporaries, was that you must have enough lifeboats for everyone on the ship. And this was the result of the tragic loss of lives of people who could not get into them. However, there was another case, the Eastland, a ship that capsized in Chicago harbor in 1915, and it killed 841 people. That was 14 more than the passenger toll of the Titanic. The reason for it, in part, was the extra lifeboats that were added, that made this already unstable ship even more unstable. And that again proves that when you're talking about unintended consequences, it's not that easy to know the right lessons to draw. It's really a question of the system, how the ship was loaded, the ballast and many other things. It is an amazing example of how we are kind of wired to react, right? Like, something bad happens, and then, to solve it or to prevent it, the reaction or the solution that we put forward has worse consequences than doing nothing.

47:14

Yes, well, I think we can say that very often, the means that you put in place after some kind of disaster will, in the long run, lead to the next disaster. For example, new bridge designs have a lifespan of about 30 years. There is some disaster that leads to a new type of bridge, and then engineers get more and more confident in the design. They get bolder and bolder, and then there is some kind of new catastrophe that leads to reconsidering that technology, and the cycle starts all over again.

47:56

I mean, it sounds like what you're saying is, look, there's no point in worrying about this stuff or bothering with this stuff, because, you know, the course of history is the course of history, and we can't necessarily shape it. And I wonder whether that's true.

48:13

I'm not saying that we should do nothing, that we shouldn't take any action. But we should also realize that really two things happen. First of all, the positive outcomes that we expect are usually not nearly as positive as we imagine them. But also, the negative things don't turn out in the same way. For example, we tend to think that what is going on is just going to go on and on and get worse and worse, or that it's going to go on and on and get better and better, and reality usually has surprises

48:53

for us. But take something, you know, as scary as climate change, right? I mean, isn't there some value in anticipating the worst-case scenarios and then trying to prevent them?

49:04

I think it's very important that the fear of worst-case scenarios is leading to all kinds of proposals for geoengineering, for 100 percent renewable power. I'm all for this, and I think it's very good that our fear of apocalypse is motivating that. So I don't dispute that at all. But I don't think it's really terribly helpful, unless you're actually working on something concretely to deal with the problem, to worry too much about the problem, if there isn't something that you can do about

49:39

it. I don't know. I mean, I think you're right, and I feel very reassured by this. But you know, in the middle of the night, when I wake up in a cold sweat, I'm thinking, we're like at the very edge of destroying ourselves. Like, this could be the end of our species.

49:54

Yes, it could. Or, probably more likely, it would mean a worldwide degradation of the living conditions of humanity. But remember, there was always a positive side to these epidemics. So, for example, after the Black Death in the 14th century, if you survived, it was a very good time to be a peasant. You had lower rents. There were more opportunities for people to become artisans and masters of their own workshop. So there was really a lot of opportunity, if you didn't get killed by the epidemic. Great.

If you made it through, right? That's it. So maybe that's the one bright spot. So, you know, you hope that you'll be one of the survivors.

50:43

That's Edward Tenner. His latest book is called Our Own Devices: The Past and Future of Body Technology. You can watch his entire talk at ted.com. Hey, thanks for listening to our episode on unintended consequences this week. If you want to find out more about who was on it, go to ted.npr.org. To see hundreds more TED Talks, check out ted.com or the TED app. Our production staff at NPR includes Jeff Rogers, Sanaz Meshkinpour, Jinae West, Neva Grant, Casey Herman, Rachel Faulkner and Diba Mohtasham, with help from Daniel Shukin and Megan Schellong. Our intern is Daryth Gayles. Our partners at TED are Chris Anderson, Colin Helms, Anna Phelan and Janet Lee.
