ARI SHAPIRO, HOST:
And now for the Opinion Page. Technology has always promised to fix our imperfections. In this 1950s TV ad, G.E. swore that a new refrigerator-freezer combo would make a housewife's problems disappear.
(SOUNDBITE OF ADVERTISEMENT)
UNIDENTIFIED MAN: We didn't have all this storage space in the door or conveniences like a butter conditioner, sliding shelves.
SHAPIRO: A butter conditioner? Last time I checked, I couldn't get a butter conditioner app for my smartphone. But technology is still promising to fix everything that's wrong with our lives - eliminate forgetfulness, buy the perfect dress at the lowest price, even poll your friends on what you should eat for dinner. Evgeny Morozov worries that Silicon Valley's quest for perfection is steering us off course. In a New York Times opinion piece, he writes: Learning to appreciate the many imperfections of our institutions and of our own selves, at a time when the means to fix them are so numerous and glitzy, is one of the toughest tasks facing us today.
So we want to hear from you. Is there a persistent problem technology got rid of for you that you were glad to see disappear? Tell us about it. Our number is 1-800-989-8255. Our email address is email@example.com, and you can join the conversation on our website, go to npr.org and click on TALK OF THE NATION. Joining us now is Evgeny Morozov. His latest book goes on sale tomorrow, called "To Save Everything, Click Here: The Folly of Technological Solutionism." Evgeny, welcome back to TALK OF THE NATION.
EVGENY MOROZOV: Oh, good to be here.
SHAPIRO: So describe what you accuse Silicon Valley of doing, and really, how is it different from what technology has aspired to do all along?
MOROZOV: Sure. So technology, of course, has always been about solving problems. What I think we need to understand is how we define problems now that we have so many technological solutions at our disposal. So I use this term solutionism to describe a certain intellectual tendency - I would say pathology even - to describe problems as problems based solely on the fact that we know how to solve them.
SHAPIRO: Give me an example, like what is a problem that really doesn't deserve to be solved?
MOROZOV: Well, forgetting, for example. We can now with Google Glasses record everything around us, and we can make sure that nothing is ever forgotten because everything is stored somewhere in Google servers or somewhere else. And if you listen to Silicon Valley, they would pitch a product like Google Glasses to us as a way to get rid of forgetting, and they will say, well, of course, you will be able to find your car keys. They will focus on very domestic, very small details of our everyday existence.
While if you think about it more philosophically, maybe it's actually good and productive that we can forget certain things, maybe it's actually for the better that we can cope with certain stresses, that we're not always confronted with how ugly we have behaved to some people, three, five or 10 years ago. There is a certain value in inconsistency and behaving inconsistently.
SHAPIRO: And yet, if there were an app that made sure I never missed a meeting, that's a kind of elimination of forgetting that I would really be happy to have.
MOROZOV: And I would say that perhaps it is, but we need to understand, again, how far this perfectionism would go. For example, I'm very worried about technologies that could make crime impossible because they would prevent certain people from being in certain places based on their Facebook profile. So what if you're friends with the wrong people? Should they allow you onto a bus, or should they allow you into a nightclub? And then we have to think, well, what happens once we prevent crimes from happening?
Well, what happens is that if crimes don't happen, they're no longer brought to courts, we no longer talk about them in the media, and we no longer revise the norms and the laws with which we live, right? So it's very important to understand that these solutions come with costs, and it's those costs that we also need to evaluate and not just focus on the positives.
SHAPIRO: How much of the kind of worst-case scenario you're describing is a dystopian science-fiction future of the sort that people make movies starring Tom Cruise about, and how much is, like, yeah, this is going to happen in a couple of years whether you like it or not?
MOROZOV: Well, I think if you believe that venture capitalists are down-to-earth people who don't believe in dystopian science fiction, then a lot of it is already happening, because we do have a lot of well-funded start-ups in Silicon Valley that are trying to build apps that will allow you to poll everyone in your social circle and ask them what dress you should buy or what latte drink you should order. All of those are real apps funded by real people, based on a problem that they have defined as a real problem. Right, so...
SHAPIRO: Talk me through...
MOROZOV: I'm sorry. Go ahead.
SHAPIRO: If I want to poll my 20 closest friends on which latte I should order or which outfit I should buy, what's the harm in that?
MOROZOV: Well, I think if you start from some basic ideas of psychology, that we need to be mature, autonomous human decision makers, then we'd also recognize that being forced to make unpopular decisions, and occasionally being forced to confront the consequences of your decisions after the fact, is a good thing, because it helps us to mature, it helps us to process feedback, it helps us to become more complex human beings. What happens with these apps is that you get all the feedback before you make the decision. So you go for the option that minimizes pain because you know how your friends are likely to react. And this is the kind of painless and frictionless existence that, I think, will eventually leave us stuck at the adolescent or even the, you know, childhood stage and will not allow us to evolve into complex adult human beings.
SHAPIRO: In your op-ed piece for The New York Times, you quote from Jean-Paul Sartre. You say: The existentialist philosopher who celebrated the anguish of decision as a hallmark of responsibility has no place in Silicon Valley. Whatever their contribution to our maturity as human beings, decisions also bring pain. And you write: Faced with a choice between maturity and pain minimization, Silicon Valley has chosen the latter.
MOROZOV: Yeah. And, you know, Silicon Valley, if you listen to them closely, would actually invoke philosophers to justify what they do. So I point to this one guy at Microsoft who came up with a camera that records everything he does every 30 seconds, and he would quote Proust to say that we'll be recording everything in Proustian detail. But of course, Proust meant the exact opposite. He didn't believe that you could actually store pictures and that that would recreate the memory of you eating that madeleine, eating that biscuit. He thought it's a complex process that involves you telling a story, processing some sensory experience. It's not just about being confronted with pictures, facts or numbers. Now, unfortunately, that's how Silicon Valley thinks, because those are the things they can record.
SHAPIRO: Let's go to a caller. This is John(ph) in Charlotte, North Carolina. Hi, John.
JOHN: Hey, good afternoon. Real quick, I found an app that cured all my problems. I'm colorblind my whole life and I found an app called Hue View that actually tells me what colors I'm looking at. If I have to pick out clothing for my daughters, pink or purple, I can take a photo of it and it tells me exactly what color I'm looking at.
SHAPIRO: Really? And aside from buying clothes for your daughters, how is it helpful for you in your daily life?
JOHN: I'm an electrician by trade and a lot of times, I come across different-colored wires and I would often have to get a co-worker to come over and have them tell me what color of wire I'm looking at and, you know, sometimes hold my hand and actually do it for me, and I'm able to now take a quick photo of it, I know exactly what color it is and where I need to put it.
SHAPIRO: Wow. That is really interesting. Would you know if the app was created by a colorblind person?
JOHN: No, I don't. I was just looking up colorblind tools. I searched colorblind tools on the app store and found the free app.
SHAPIRO: Cool story. Thanks for calling, John. Evgeny Morozov, what do you think?
MOROZOV: I think it's a great app. Again, I'm not against apps or technology per se, but we just have to realize that not all problems are as simple as the one we have just heard. A lot of them are controversial, and how problem solvers define problems matters a great deal.
My favorite example is what's known as a smart trash bin. It's a trash bin that has a smartphone inside, and it snaps a photo of everything you throw away every time the lid is opened or closed. That photo is then uploaded to the Internet, where people analyze your recycling behavior, and you're awarded points for being environmentally friendly. And then your Facebook friends see how many points you're earning. That's the kind of app that definitely has a logic behind it. It has a philosophy, and it's a philosophy that says that you need to care about the environment because you want to impress your friends. I'm not sure that that's the right reason to care about the environment.
Maybe you want to care about the environment because you actually care about what's going to happen to the planet, right? And not because you want to accumulate points, and not because you want to earn some virtual currencies with which to buy another app, right? This philosophy here - and we need to scrutinize that philosophy and not just be all too excited about having another fancy app.
SHAPIRO: Isn't some of this the nature of the marketplace, though? You know, the technology companies put a thousand things out there and the majority of them, they may be ridiculous, never see the light of day and the rest become very popular?
MOROZOV: But, you see, this is the problem. I'm not saying that the smart trash bin wouldn't take off. It might take off, because there will be some policymakers who would be happy to delegate problem-solving to Silicon Valley. But that problem-solving, when it's done by the private sector, comes with costs. And again, it will transform how we think about citizenship, and it will transform us as users of those technologies from, I believe, citizens into consumers.
SHAPIRO: Let's take...
MOROZOV: ...and so I'm very concerned by the fact that this delegation happens and that Silicon Valley, for its own internal reasons, is actually actively positioning itself as being in the business of solving all of the world's problems. That's the most troubling aspect.
SHAPIRO: Let's take another call. This is Glenn(ph) in Norman, Oklahoma. Hi, Glenn.
GLENN: Good afternoon.
SHAPIRO: Yeah, go ahead.
GLENN: Yeah, just a quick observation. I was thinking about this the other day: as automobiles have gotten smarter, with anti-lock braking, traction control and other features, I've noticed a decline in driving skills. People leave no margin of error when pulling up to stop signs, or when the weather changes with rain, snow, et cetera. And then, you know, we've got the whole Google smart car thing that's supposed to be able to drive itself. People can't navigate. Your speaker mentioned something about maps and remembering where you are, and to me, it all kind of falls into the same thing. Yes, we're relying on technology. But...
SHAPIRO: But it's making us dumber.
GLENN: ...at what cost to our basic human development?
SHAPIRO: Interesting. Thanks for the call, Glenn. And, Evgeny, I don't think anybody would object to the lives that have been saved by new safety technologies. But there is this argument that, to extend it to other technologies, as everything becomes Googleable, we become dumber because we stop remembering things and we just Google them.
MOROZOV: Well, I think the previous caller just made a great point, which is that, yes, there is a fair amount of this de-skilling going on. But there is also a transformation of practices that we previously valued. Look at something like cooking. Now, you hear a lot about smart kitchens and augmented kitchens. And what do those smart kitchens actually do? They police what's happening inside the kitchen. They have cameras that distinguish one ingredient from another and that tell you that you shouldn't mix this ingredient with another ingredient.
And some of you might say this is great because it prevents errors from happening. But other people would say, well, this is how cultural innovation happens. We need to leave certain margins of error in place and actually allow people to mix ingredients in ways that are silly and unexpected, for new cultural innovations, new culinary products, to emerge. I mean, something like sushi or lasagna was not built by a committee armed with big data. It's not something that obviously occurs to you, right? It's something that you need to experiment with.
And if we make a lot of these rituals too rigid, and if we rely especially on predictive methods, and that's where all the hype about big data comes in, we might end up in a very culturally conservative universe that, I think, will not be very pleasant to live in.
SHAPIRO: We're talking about technology's relentless pursuit of perfection on TALK OF THE NATION from NPR News.
Let's go to another caller. This is Holly(ph) in Palo Alto, the heart of technological innovation. Hi, Holly.
HOLLY: Hello. How are you?
SHAPIRO: Good. Thanks. Go ahead.
HOLLY: Yeah. So I live in, and have been working in, Silicon Valley for over 20 years, and I have to say I completely concur with your speaker's point, because, as it happens, I write a blog called The 2030 Report, where I look at these future technologies and really assess whether we truly want them. And one of the things I recently wrote about was that insurance companies would love to charge a vice premium to people and have everybody wearing things like Fitbit or Jawbone Up so that they can actually monitor everything they eat, their daily activity and, you know, other aspects of that person's lifestyle. On the one hand, that could be great, because you could hopefully help citizens. But on the other hand, it seems awfully invasive if they want to charge you extra for, say, eating a donut at a meeting.
SHAPIRO: Yeah. Because your line is breaking up a little bit, it's not a terrific cellphone connection, I'll just sort of restate: You're talking about companies charging sort of a vice fee, where a monitor tracks whether you're exercising, whether you're eating donuts, whether you're smoking cigarettes, and then you get fined accordingly. Evgeny, is this actually our real future? I mean, you've raised these specters of privacy invasion.
MOROZOV: Sure. I do think it's our real future. But you have to understand how it's being marketed to us now. It's not being marketed as a vice fee. It's being marketed as a health premium. So those of us who are healthy can start monitoring ourselves and then start disclosing that information to insurance companies and we will get a premium. But notice what happens: Once enough of us monitor ourselves in order to get lower prices and better premiums, those of us who refuse to monitor because we find it intrusive no longer have that option because institutions around us start assuming that if we don't share, we have something to hide.
We have a disease. We have some health condition, and that is the reason why we are not self-tracking, so they charge us a vice fee or a higher premium. Right? So in a sense, when Silicon Valley tells us that we all have a choice, that you don't have to use it if you don't want to, well, at some point we are forced to use it, because if we don't, certain assumptions are made about us and certain discriminatory practices emerge.
SHAPIRO: Just briefly...
MOROZOV: It's like - yeah, right.
SHAPIRO: In conclusion, when you get beyond "this is a bad thing," are you saying Silicon Valley needs to change its practices, or are you saying we as consumers need to open our eyes and only use, purchase and choose those technologies that don't have the consequences you're describing?
MOROZOV: Well, first of all, I think a lot of designers who design those products need to have a broader view of the human condition, and they need to understand that imperfection actually matters and needs to be built into our products. But there is a third player that you didn't mention, and that's governments. I think governments will increasingly be tempted to rely on Silicon Valley to solve problems like obesity or climate change, because Silicon Valley runs the information infrastructure through which we consume information.
And I think our policymakers need to be aware of the costs that come with solutions once those solutions are taken on by Silicon Valley. They will not come for free, and the efficiency that we'll get from Silicon Valley we'll have to pay for with privacy, or politics, or just being unable to live in a world that still tolerates inconsistency.
SHAPIRO: That's Evgeny Morozov, a contributing editor to The New Republic, whose op-ed appeared in Sunday's New York Times. You can find a link to it at our website, npr.org. His latest book goes on sale tomorrow. It's called "To Save Everything, Click Here: The Folly of Technological Solutionism." Evgeny, thanks for joining us.
MOROZOV: Oh, thanks for having me.
SHAPIRO: And tomorrow, gay athletes and professional sports, what's changing, what isn't and what it means. Join us for that. This is TALK OF THE NATION from NPR News. I'm Ari Shapiro in Washington. Transcript provided by NPR, Copyright NPR.