Oct. 1, 2020

Will tech save the world...or end it?


Technology in its endless forms has given us so much—fire to make food, language to communicate feelings, and iPhones to look at in the bathroom. 

But for each of these technology-enabled pros, there’s a technology-enabled con.

So how do we make sure that technology is used more for good than for bad? For starters, we focus on the ways the public and private sectors are working to ensure inevitable bad actors are kept in check. It’s a joint effort, and it has to be—especially as high tech like artificial intelligence learns to mimic and outperform human decision making.

To help us get to the bottom of a complicated, interesting, nuanced idea, I’m welcoming to Business Casual Kevin Roose, tech columnist for the New York Times.

Kevin brings up a lot of mind-boggling questions...but he’s quick with an answer, too. Some thought starters from the interview:

  • Where’s the real danger in tech? It might be a little less apocalyptic and a little more pedestrian than you think.
  • What would our biggest tech firms look like if they’d been created in China, Myanmar, Brazil, or anywhere else in the world? Different.

Listen now.

Transcript

Kinsey Grant, Morning Brew business editor and podcast host [00:00:06] Hey, everybody, and welcome to Business Casual. I'm your host, Kinsey Grant, and it's a beautiful day to think about what the world might look like if "The Matrix" were more than just a movie. So, let's get into it. [sound of a ding]


Kinsey [00:00:18] Technology can mean almost anything. The words I'm using to make these sentences right now are technology. The phone you're listening to this episode on is technology. Fire is technology. But ever since the proliferation of the internet, technology has become wrapped up in a debate over ethics and morals and standards. 


Kinsey [00:00:36] It's interesting. You know, fire can be used to cook your food or burn down a building, but we've never really given much thought to the ethical goodness or badness of arson. We know it's bad and that's why it's illegal. But new technologies, on the other hand, are much more nuanced. How do we determine whether an algorithm is good or bad? And who defines what good or bad is? It's important to have these conversations sooner rather than later. 


Kinsey [00:01:00] As ex-Google CEO Eric Schmidt said on our last episode, we are in a global race to determine who leads the way on high-tech, like artificial intelligence and machine learning. First mover gets to make the rules. And truth be told, the genie is already out of the bottle here. There are really no take-backs on AI and algorithms and news feeds and you name it. And now all we can really do is try our hardest to ensure that things go the right way. 


Kinsey [00:01:25] So, to talk with me about how we do that, our guest today is Kevin Roose from The New York Times. Kevin, welcome to Business Casual. 


Kevin Roose, technology columnist for The New York Times [00:01:32] Thank you for having me. 


Kinsey [00:01:33] I'm excited to have this conversation today. A little bit more on your background: you're a tech columnist for The Times, so basically, you get to write about the things that anybody would want to write about if they were to work [laughs] for The New York Times. You're also a reporter on the podcast Rabbit Hole, which we are huge fans of. It's an incredible show with some really incredible content. And you've written a bunch of books. You're the resource when we talk about good and bad tech. 


Kevin [00:01:55] Cool, I guess. Yeah. 


Kinsey [00:01:56] Yeah. You know, I am trying my hardest here to not quote Spider-Man and give myself away. But you know what they say about great power. That's why we're talking about responsibility today. Who decides what makes tech good and bad? So before we decide whether or not we need morals for these tech concepts we're talking about, like AI and machine learning, I think it's important to discern a little bit more why we need to make that decision in the first place. So, Kevin, tell me, why do we need artificial intelligence? 


Kevin [00:02:24] Well, I started looking into this a few years ago because I had the same question. I was hearing so much about AI and what was being built to take advantage of technologies like deep learning. In Silicon Valley, all my sources were going crazy over AI and telling me, like, you've got to check out this stuff. So I just started meeting with people, going to conferences, reading books, talking to economists and technologists and people who understand this world deeply, because I was sort of struck by how polarized the discussion of AI was. 


Kevin [00:02:57] As you mentioned, there are people who think this is all very dystopian, and it's all going to end super-badly, and we're going to end up with robots running the world, and we're just going to be like slaves, you know [indistinct] meat bags, like bringing the robot overlords whatever they want. 


Kinsey [00:03:14] Right. 


Kevin [00:03:14] And then on the other side, there were the people who were totally optimistic and didn't think there was anything bad that could come of these technologies, and that they were just going to lead us into this world of harmony. And we would just be playing video games and making art all day while the robots did all the work for us. And so I really dug in deeply to this. In fact, I have a whole book coming out next year about my own progress on this issue. 


Kevin [00:03:39] And the position I landed at was, I call it suboptimism. I am basically an optimist about the actual technology and what it can do. But I am much more of a pessimist about the people who are implementing all of this technology. There's a great quote from an AI expert, Norbert Wiener, who says, "The problem is not the machine itself, it's what the man makes of the machine." 


Kevin [00:04:05] And I think that's really the sort of framework that I have developed for looking at these things: yes, it's important what the technology can and can't do, but it's also important how we decide to apply that technology, because technology doesn't just happen in a vacuum. We make it happen. We oversee it. We guide it. We set the rules for how we deploy the systems. We design the products. So I think it's really important to put human agency back in the conversation about AI and sort of get away from this binary, all-utopian or all-dystopian model. 


Kinsey [00:04:41] Yeah, absolutely. And you bring up some really interesting points here—that we know AI can be used in a lot of really great ways, we know it scales fast. We know that in a lot of cases, it's better and more accurate than the human eye unassisted. There are a lot of really great things, and a lot of promise, when it comes to AI. 


Kinsey [00:04:56] But I think the best thing we should do right now [laughs] is kind of dive into the drawbacks. There are a lot. The biggest, I think, being that these are systems developed by people. People inherently have bias within themselves. And the products that we put out into the world are often representations of those biases. Talk to me a little bit more about what we do to fix something like that. Is it realistic to expect that we could totally remove all bias from AI? 


Kevin [00:05:24] I don't think it's totally realistic to remove all AI bias. But that doesn't mean I don't think we should try. I think that we have set rules on powerful technologies before. Nuclear weapons are powerful technologies developed almost a century ago. And we have kept them in check and kept their proliferation in check through very strict rules and regulations, through sort of foreign policy choices. We have done the same with asbestos and lead paint and all manner of other things throughout history. 


Kevin [00:06:00] And so I think we have to apply our judgment here because the alternative is just pretty scary. So I have a sort of a counterintuitive take on this, which is that I don't actually think the super-advanced AI is the thing I'm most scared of. Like the Skynet, like, you know, there's like videos of the robots doing Parkour moves, like that doesn't actually scare me as much. But in the course of researching this book, I came to think that the actual AI we should be worried about is the really boring stuff. 


Kinsey [00:06:33] Why is that? 


Kevin [00:06:35] So I talked to a number of researchers and analyzed this from a number of angles. And I figured out that there are sort of two categories of really boring bots that we need to watch out for. So, one, I call the bureaucratic bots, and those are basically the bots that are being used in state welfare offices and government agencies and in legal systems to do pretty mundane things like figure out who's eligible for nutrition assistance or who's eligible for Medicaid or to detect fraud in certain state benefits programs. 


Kevin [00:07:09] These things are the most boring things you can possibly imagine. But, when they screw up, the results can be catastrophic. So there's a great book by Virginia Eubanks, called "Automating Inequality," that's full of examples of how this plays out, where people get mysteriously kicked off their food stamps because an algorithm wrongfully determined that they had committed some sort of fraud or, you know, millions of people end up getting the wrong benefits because the system screwed up. 


Kevin [00:07:38] So those are the kinds of things that I'm really worried about in this sort of boring, bureaucratic world. There's also this boring kind of bot in the corporate world which people sometimes call RPA, or robotic process automation. And that's basically, as someone described to me, it's like the bot that's built to replace Harry in the back office. It's not a super-powerful AI. It's basically just an algorithm that takes information from one app and puts it into another app, or converts between one file format and another file format. 


Kevin [00:08:13] And they're very cheap. They work with existing software stacks, and companies are spending millions and millions of dollars to deploy them, not because they're super-futuristic, but just because they allow them to replace human workers. And so those are, I think, the two categories of boring bots that I would say I'm really worried about. 


Kinsey [00:08:33] Yeah, it's always the ones you never see coming. You bring up this concept of job displacement. Is this something that we should genuinely be worried about? You know, was Andrew Yang right? Are the bots coming for our jobs? Is that a genuine concern for the next decade of being a person with a job in the United States? 


Kevin [00:08:50] Absolutely. I was thinking about this when Andrew Yang was sort of starting his run, and I think it's gotten even more salient since then. We've seen during COVID that a lot of the jobs that have disappeared are being automated. Companies are installing security robots or cleaning robots, or, you know, chicken processors are installing chicken-slicing robots. And FedEx is automating its shipping centers. And so a lot of the jobs that are disappearing in COVID probably aren't coming back. And so I think that worries me. 


Kevin [00:09:31] And this is the classic question. Does automation create more jobs than it destroys? And there's some interesting research on this. A couple economists, Acemoglu and Restrepo, have done some studies on this recently, and they found that basically, for many, many years, the answer to that question was that automation creates more jobs than it destroys. So we were wrong to freak out about it for a while. 


Kevin [00:09:56] But in the past several decades, the opposite has been true. Automation has been destroying more jobs than it creates. So I think that the optimists were right for a long time. And the people who thought that all the jobs were going to get automated away were panicking over basically nothing. But now there's some evidence that we actually should be worried that jobs actually are being lost to automation. 


Kinsey [00:10:20] So how do we reconcile the notion that jobs will be lost and people will be out of work with the idea that AI can be a really good thing that will add trillions of dollars to the global economy in the coming decades? 


Kevin [00:10:34] Well, I think we have to detach certain things from work. I think that, you know, Alexandria Ocasio-Cortez gave this talk at South by Southwest last year that I found really interesting. And she said, you know, AI and automation could be wonderful for society. It could free us up to do more things, to be more creative. It could take work out of the center of our lives and give us more time back to do incredible and inspiring things. 


Kevin [00:11:02] But we have to detach the idea of your economic value being your human value. We have to make sure that people, you know, have their healthcare provided for even if they get put out of a job. And we have to show that we have a safety net for people who need some time to make the transition from the old jobs to the new jobs. We have to be much more thoughtful and empathetic about making sure that people aren't falling through the cracks. 


Kevin [00:11:28] And I think that that's sort of where the big picture of this is headed, is that we're going to need something. I don't know if it's Andrew Yang's freedom dividend or, you know, a robot tax or whatever the policy solution is. But we need something because we can't, in our current system, just put millions of people out of jobs and expect everything to work out for them. 


Kinsey [00:11:49] Right. Absolutely. You think about the ways that the economy evolved and shifted after the internet became a thing that was everywhere. We have to recognize that changes will happen and we have to adapt to those changes. And in a perfect world, our government is suited to make that change happen seamlessly. The reality of the situation is that we don't live in a perfect world, which I'm sure we'll talk about when we get to government intervention. But before we get into that, let's talk a little bit more about these moral, ethical concerns. Can an algorithm be good or bad? 


Kevin [00:12:26] It can achieve good or bad outcomes. Algorithms are just math, right? So, it's a little strange to think of them as having moral qualities. But for sure, they can produce good and bad outcomes in the world. An algorithm that trains itself to discriminate against Black and Latino loan applicants, which is something that has been observed. That's bad. 


Kevin [00:12:54] An algorithm that allocates different treatments for certain types of medical patients based on their race or their age or their gender is bad. It's not the algorithm's fault [laughs], right, it's the humans who built the algorithm, who trained the models, and who didn't foresee or didn't care enough to think about the consequences that that algorithm could have. 


Kinsey [00:13:21] What's interesting to me is the concept of scale here. You think about an individual person who might have biases against certain groups of people or certain types of people. You can only interact with so many people in a singular day. But when you're creating something like an algorithm that has such a broader scale, that has the capacity to interact with so many people [laughs], it's unfathomable how many more people that could affect on a daily basis. 


Kevin [00:13:45] Totally. And I think the more scale you have, the harder it becomes to figure out what the consequences are going to be. An example that I've spent a lot of time thinking about recently, and that I would be curious to ask Eric Schmidt about, is that some of the most advanced AI Google has developed in the past decade has gone into YouTube recommendations. Their team spent years trying to perfect the algorithms that determine which videos you see when you finish watching your YouTube video. 


Kevin [00:14:16] And that algorithm works great sometimes. And it has also led people into rabbit holes filled with conspiracy theories and misinformation and extremist content. It's radicalized people. So I think the scale there—no one at YouTube wanted that to happen. You would never see in their algorithms a line of code that says, insert Nazi videos here. But when you train a model and don't adequately think about what kinds of incentives and directions that model could take, you end up creating consequences that might not be what you intended, but that are still having huge effects in the world. 


Kinsey [00:14:59] Yeah, and it's such a—the social media part of this conversation, I think, is really important to have. You know, we think about AI, and to your point earlier, it's not necessarily "The Matrix," like I talked about at the beginning. It's these everyday things that we come into contact with on a regular basis. It's the YouTube recommendation. It's your Facebook news feed. It's any number of these very normal, very pedestrian and mundane tasks that we consider to be part of our daily routine. 


Kinsey [00:15:24] But in reality, they're developed by teams of people who bear this enormous responsibility for making sure that we are capable of making the right choices and doing the right things. And to me, that leads to the next question: who should be responsible? Can we say, you know, the CEO of Google, whoever it is, whether it's Eric Schmidt [laughs] at that time or not, is it that person's responsibility? Is it the team of engineers' responsibility? Is it the government's responsibility? Who should we look to when we try to determine where the buck stops? 


Kevin [00:15:54] I think all of the above. I think there's a real role for the government here, which has been totally absent on a lot of these issues until it's way too late. I think there's a role for the companies which have to be better at sort of thinking in a more consequentialist way about, like, if we implement this algorithm to do X, Y, and Z, what are the other things it could do? How could it be abused? Who could misuse it and what effects would that produce? 


Kevin [00:16:22] I also think it's on us, as users, to really scrutinize our own interactions with these machines. In my book, I have a whole chapter about machine drift, which is what I call it when you, you know, let algorithms make all your decisions for you. You listen to the Spotify Discover playlists and you watch the Netflix recommendations and you buy whatever brand of toilet paper Amazon tells you to buy. And you just kind of do that for a while and you wake up one day, or at least I do, and you realize, like, I haven't actually had to make a choice in months —


Kinsey [00:16:56] Yeah. [laughs]


Kevin [00:16:57] Because these algorithms are choosing everything for me. And so I think there's a real danger, not in the kind of, like, external kind of automation that we think about when we think about, like, robots taking away jobs, but almost an internal automation that's happening to a lot of us all the time. 


Kinsey [00:17:14] It becomes so pervasive in our lives. Those examples are things that we do all the time in so many different arenas of decision-making. [laughs] You get out of bed, the first thing you do is look at an algorithm, right [laughs] for so many of us. Is it possible to scale that back? Are we past the point of actually making sure that these are all going to be designed for good? Or has the damage been done? 


Kevin [00:17:36] Well, we have a choice about how to engage these systems. You don't have to buy the brand of toilet paper that Amazon recommends. You could do your own tests and figure out which brand you actually like. You don't have to listen to the — 


Kinsey [00:17:48] Charmin all the way. [laughter]


Kevin [00:17:50] I'm more of a Great Northern guy, but I think we have to resist this pull toward sort of going with the algorithmic flow, not because these algorithms are always wrong or always harmful, but just because we lose a sense of who we are, and what we care about, and what actually our preferences are versus what a machine is telling us our preferences are. 


Kevin [00:18:19] There've been some fascinating studies of this that basically show that, you know, if you listen to a song that you might like independently, but a machine tells you that it's a low-rated song, if it has a low-star rating on it, you're not going to like it as much. Like you basically—you like the things that the machine tells you to like and you dislike the things that the machine tells you are bad. 


Kinsey [00:18:43] Yeah. 


Kevin [00:18:43] And that's scary. That's a ton of power to place in these algorithms. But I think we have to kind of calibrate our own level of investment in what they're telling us to do. 


Kinsey [00:18:53] Absolutely. We like to think of ourselves as such independent beings that we have the power to make our own decisions. But, at the end of the day, Spotify makes my life a lot easier. If I can get on the subway and not have to think about what I'm gonna listen to, it just tells me what I'm going to like, it makes my life a lot easier. I don't have to make that decision. 


Kinsey [00:19:11] I feel like I make decisions all the time. Maybe I'm not actually doing that. [laughs] It is the sort of unfortunate and uncomfortable reality here. It also makes me wonder, though, if we, as tech users, have sort of failed to take into account that we have agency over the ways that we interact with these algorithms. We instead place so much responsibility on the people creating them. We expect Mark Zuckerberg to create this perfect world where everything is harmonious and there is no false information. Is that too big of an expectation? 


Kevin [00:19:42] Absolutely. I mean, these are for-profit companies. Their obligations are not to their users' well-being; they're to their shareholders, their boards of directors, and their stock prices. So it's absolutely too much to expect them to be humanitarian organizations, because that's just not what they're set up to do. I think we do have to take agency over our own choices. But I don't think that lets the people off the hook who make the algorithms and design the systems. 


Kevin [00:20:07] They've created a really unfair playing field. I mean, every time you go onto YouTube and see that recommendation sidebar, most people don't even think about this. But dozens of Ph.D.s designed this system that determines sort of what you're going to see next. And it's not a fair fight. It's us against these supercomputers. And so we have to change the terms. We have to be able to decide on our own what we want. 


Kevin [00:20:37] I turn off recommendations on my YouTube just because I know that I'm susceptible to being led down rabbit holes, even though I cover the stuff for a living. I know that it has the potential to make my beliefs more extreme and to distort my world view. And I don't want to go anywhere near it. I don't trust myself in a system like that. 


Kinsey [00:20:58] That's certainly a lot to think about. Everybody take a second to think about that. We're gonna take a short break to hear from our sponsor. — And now back to the conversation with Kevin Roose. Kevin, before the break, I asked you who is responsible for ensuring that these systems, these high-tech systems, like AI and machine learning and deep learning, are created in a way that is fair and equitable and good. 


Kinsey [00:21:20] You said that it was sort of everybody's responsibility. We talked a lot about our own personal responsibilities, about the responsibilities of the companies that are creating this tech. What about the government? Where does it come into play in this conversation? 


Kevin [00:21:32] Well, I think the government historically has been a check on certain kinds of corporate exploitation. If you look back at the Industrial Revolution, for example, there were enormous labor abuses. There were child laborers who were being put to work in these awful conditions. There were people, tons of people, just dying in factories that weren't safe. Famously, there was the Triangle Shirtwaist Factory fire that led to a lot of labor reforms. 


Kevin [00:22:00] And so the government, I think, has historically come in and sort of reined in the excesses of runaway technological capitalism. And that's been its role. And I think that role is a good one. But I think now, because things are changing so quickly, it can't wait 10, 20 years to see what all the outcomes of these systems are going to be. 


Kevin [00:22:23] With something like AI, where the cutting-edge technology is really turning over every couple of years and making huge advances in processing power and capability, I think the government needs to be much more proactive and much quicker than it is right now, where I think, to put it charitably, they are still kind of like getting their arms around some of the basic questions about the technology. 


Kinsey [00:22:48] Do you think that if the government were more proactive in the development stages of these big tech ideas, like facial recognition, like AI, the actual product would look different today? 


Kevin [00:23:01] Sure. I think if the government had its shit together—sorry, can I curse on a podcast? [laughs]


Kinsey [00:23:11] Oh, yeah. [Kevin laughs]


Kevin [00:23:11] If the government had its shit together when social media companies were building these, like, enormous surveillance dragnets to gather data from their users, the products would have looked much different. They would have been designed in a way that respected users' privacy, that gave them affirmative consent over which data to share with which apps—things you just didn't see until regulators started stepping in. 


Kevin [00:23:35] Those little privacy boxes that say, like, you can share my data or you can't share my data. I see those because I live in California, and we passed a law out here that means companies have to ask for affirmative consent for certain types of data. But in the rest of the country, they don't have those boxes. The product literally looks different because our regulators were on top of it. 


Kinsey [00:23:56] Yeah. It, to me, brings to mind this sort of dichotomy in standards and values when it comes to the government and to the tech companies who are creating these products. The tech companies famously move fast and break things, which, of course, has kind of fallen by the wayside. But still, that is a big part of the ethos in Silicon Valley. 


Kinsey [00:24:19] The government is very much, don't move fast at all. We don't want to break anything. There's a different risk tolerance. And I think that definitely impacts the ways that they go about policing things like this. And when I say things like this, I mean things like facial recognition and AI. And I have to wonder if real cooperation could ever come to pass if we can't align these values in some sort of middle ground. 


Kinsey [00:24:42] But it's hard to imagine that a for-profit company that has to answer to investors will ever say, well, we'll slow down a little bit. We'll break a few less things to meet our governments, whether that be federal or state or local, where they are. 


Kevin [00:24:57] But I mean, just look at every other industry. The airline industry is a for-profit industry and it's heavily regulated. Every plane has to be certified. The FAA takes a really strong role in protecting people's safety. Look at the pharmaceutical industry. I mean, they could be releasing drugs a lot more quickly if the FDA didn't exist and didn't make them go through rigorous trials to make sure that their medicines were safe before they started putting them onto the market. We have tons of examples of regulated industries that are still very profitable. 


Kinsey [00:25:35] I have to wonder if this has something to do with our perception of danger in these industries. You know, if you overdose on something that your pharmacist gave you, you could die. I think that in a lot of ways, we haven't really taken that lens to the tech industry when we know that this is the case. We know that people who have been radicalized have shot other people, have engaged in really violent behaviors. But we just can't quite get there in terms of recognizing that this could be a real danger in everyday people's lives. 


Kevin [00:26:03] Totally. I think we still think of sort of online harms as being less severe than kind of bodily offline harms. But they are often the same thing. I mean, look at what happened in Myanmar and Sri Lanka with social media, you know, leading to murderous riots and facilitating genocide. Look at what's happening here in the U.S. with these armed militias that are sort of going after people and telling people not to evacuate their homes in fire zones. Like that's bodily harm that is directly resulting from these systems. 


Kevin [00:26:40] And, yeah, it's not as visual and direct as watching, you know, a video of a flaming wreck of a plane that crashed somewhere. But it's the same in terms of its real-world impact. People are dying. People are being hurt. Families are being ripped apart. And again, I'm in this very weird position where, like, I love technology, like I have the freaking Windows XP backdrop, like behind me. 


Kinsey [00:27:09] He really does. It's true. I can see it. [Kevin laughs]


Kevin [00:27:11] I am the biggest nerd in the world, and I love this stuff. And, I am deeply, deeply worried about the way things are going. And I just want the humans to be better stewards of the technology. 


Kinsey [00:27:25] Yeah, I think that we don't necessarily have to take this binary approach that it's good or bad. That is a dangerous way to view the world around us. Things can be good and bad at the same time. And beyond just the tech world [chuckles], in every aspect of our lives. All right, Kevin, I want to take a little bit more of a macro kind of global view of these issues and AI and the government's role in just a second. But first, a short break to hear from our partner. —


Kinsey [00:27:51] And now back to the conversation with Kevin Roose. So, Kevin, a lot of this conversation around the proliferation of tech like AI and facial recognition and machine learning is often had in the context of this global tech war. We are oftentimes in a race with China. Here in the United States, we're pitted against China over who's going to win, who's going to be the first mover. It's a land grab and somebody has to take the lead here. Why do you think that is? And is that an important part of this conversation? 


Kevin [00:28:23] I think, yeah, that the geopolitics of it are fairly interesting. I don't actually know that the stakes are as high as some politicians would say they are. I think the idea that the future of geopolitics is going to be decided by whose social media algorithms are better, I think it's just a little bit like overblown and overhyped. I think there is sort of an arms race quality to it now, with things like TikTok and WeChat, and everything that's going on right now with those companies. 


Kevin [00:28:55] But I do think it's important. I do think that the U.S. should, and rightly wants to, keep a leadership position in the development of some of these technologies. But ultimately, these things are borderless, right? It's very, very hard once a piece of technology is developed, or there's an advance in AI or machine learning, to keep that contained to a single nation. So I think, at a certain point, we have to control what we can control, but I do think there is a role for a sort of global conversation about how these technologies should move forward. 


Kinsey [00:29:36] Yeah. And I asked Eric a similar question. The internet didn't stay where it was invented. [laughs] The internet spread throughout the world, and that's a good thing. And tech has a way of doing that. It's this globalizing concept that brings us all closer together, no matter how far apart we might be. 


Kinsey [00:29:53] It's hard to imagine that, just because of the geographic or national confines of where a thing is created, it can only take on the qualities of the people who made it in that specific country, especially with something like AI, when the talent creating this tech is dispersed all over the world. I find it hard to imagine that this is going to be one good or bad thing. [laughs]


Kevin [00:30:16] Yeah, I'm curious to hear what you think of this, because my own thinking has been shifting on this a little bit. Because my inclination is to be, you know, a globalist. You know, I'm one of those globalists they talk about. 


Kinsey [00:30:30] You're one of those. [laughs] 


Kevin [00:30:32] [chuckles] But I also have been thinking a lot about whether our model of sort of internet globalism is working. I mean, does it work to have people in, you know, Menlo Park at Facebook making speech rules for people in Myanmar and Sri Lanka, and people in different parts of the world where they might not even have an office or anyone who speaks the language? 


Kinsey [00:30:57] Right. 


Kevin [00:30:57] Should they be making decisions about these enormously important systems in those countries, or should those countries be building their own systems that will reflect national values and priorities and, you know, might not be as good or as free or as whatever as the American companies? But I'm uncomfortable with kind of, almost the sort of soft imperialism of kind of tech companies in one place determining standards for the entire world. 


Kinsey [00:31:31] You almost make it feel like we're in a [indistinct] between a rock and a hard place here. I mean, on the one hand, I think, absolutely, you're totally right about that. I have no idea what it's like to experience life as a 25-year-old woman in any other country in the world. All I know is the experience that I've had personally. So it makes perfect sense for people in their own country to develop something that is suited to the needs of the people who live there. 


Kinsey [00:31:56] But at the same time, the beauty to me, or one of the biggest draws to me, of something like a social network is that it is boundless. It knows no bounds in terms of country in a lot of [indistinct]. China is a different example, but [chuckles] that it knows no bounds in a lot of ways and that it can bring us together and we can talk to people on other sides of the world and learn about those cultures. But maybe, as I'm saying that, it sounds a little idealistic, and we need to be realistic in terms of what the experience is like in any country at a given moment. 


Kevin [00:32:23] I think I would have agreed with you a couple of years ago. And then, you know, for the last few years, I've been reporting on social media, and people in the countries where this technology is not made, but where they use this technology, feel like prisoners. I mean, they feel like they have zero authority over the systems that run their lives. I have NGOs writing to me, DMing me, emailing me every day, saying, hey, Facebook won't take down this thing that is causing chaos in my country, or Twitter won't take down this account that is, you know, inciting violence in my country. Like, could you say something to them? Which, like, is — 


Kinsey [00:33:02] Like what? [laughs] 


Kevin [00:33:02] So crazy to me. Like, these are giant civil society organizations, and they feel like their only recourse when it comes to the systems of communication in their own countries is to be DMing American reporters and asking them to ask American tech executives to take action. Like that feels like a very broken system. 


Kinsey [00:33:21] That's a bad thing. Like that shouldn't be the outcome of this perfect tech utopia [laughs] we've created for ourselves in our minds, like what social media can be, what AI can be. That's not just—it's not the reality. 


Kevin [00:33:35] Yeah. And so what happens if that becomes the standard? I mean, what happens if, you know, one company in one country controls all the facial recognition algorithms, and their decisions about their algorithm end up influencing criminal justice in parts of the country or parts of the world that they've never visited and don't have an office in? 


Kevin [00:33:54] What is the recourse for the people who are wrongfully targeted by those systems? I don't know that our experiment in global social media governance, at least, has been successful. So I'm a little bit more skeptical about how global governance for other types of technologies would work. 


Kinsey [00:34:14] Right. It's tough to think about this tech race in light of the conversations [chuckles] that we just had. At the same time, we talk about how a product would be different if it were developed in a different country, and maybe that's a good thing. But also, I don't want to be [chuckles] in the country that, you know, like isn't leading the way here. [laughs]


Kinsey [00:34:33] I don't want to feel like I'm a prisoner to a social media network. That sounds like a really shitty way to feel. It makes me feel guilty [laughs], you know, like I want the U.S. to [indistinct] that way because I'm here. I want the U.S. to take the lead. But also, I have trouble kind of thinking about that and also thinking about like, does it really matter? Should this even be part of the conversation? 


Kevin [00:34:54] Right. I think if we had a robust, competitive global marketplace in all these technologies, we'd be in a better place. But I mean, the reality is, like you can't build a social network in Indonesia and compete with Facebook. It has nothing to do with the fact that you're Indonesian. No one can build a social network [laughs] to compete with Facebook because they have a monopoly. 


Kevin [00:35:18] So I think that if all these countries could sort of compete on things like privacy, could compete on things like, you know, ethical AI, I think that might be a better global system. But right now, just a handful of companies in the U.S. and China controlling everything just doesn't feel like it's working out that well. 


Kinsey [00:35:40] And the companies that control everything are the most valuable companies in the world. 


Kevin [00:35:44] Yeah, totally, and they're going — 


Kinsey [00:35:46] We absolutely shouldn't discount that. [laughter] These companies are as rich as they come, and I wouldn't want to give that money back if I were, you know, Apple [chuckles] or any other tech company. 


Kevin [00:35:56] Totally. And I think their argument is that the sort of winner-take-all nature of this is baked into the nature of the technology that, you know, the company that has the most data will have the best AI, which will then allow it to get more users and more data, which will then improve the AI even more. And so you end up in this cycle where it's very, very hard to compete with a Google or an Amazon or a Tencent when it comes to some of these core technologies, because you're starting from scratch. 


Kinsey [00:36:28] Having this conversation makes me think that perhaps we are past the point of asking whether AI can be good or bad, and it's more that we know it can be both, and everything in between, at the same time, just like any tech advancement can be. Maybe the conversation is less about what's good and what's bad, and more about creating a value system by which we should judge good or bad tech that already exists and [chuckles] is already happening all over the world. 


Kinsey [00:36:56] It's hard to ensure that tech, as widely dispersed as it is in the world today, can ever kind of be reined back in, and maybe the more important question is not whether the tech is good or bad, but whether it knows what good or bad looks like. 


Kevin [00:37:12] And whether the people who are creating and managing and deploying the tech are good or bad, what their values are, and what kinds of outcomes they're producing. I think this whole—like the whole ethical AI conversation—I know this is not a mainstream view in Silicon Valley—but, look, I think it's all kind of insane. 


Kevin [00:37:31] I think the idea that AI itself can be ethical or unethical, and that we can detach that from the ethics of the people who are creating and deploying it, I think it's just sort of a self-serving fantasy by these tech people. And we absolutely should be talking about ethics in AI. But the ethics don't belong to the AI. They belong to the people at the companies who make the AI. 


Kinsey [00:37:56] Yeah, absolutely. Well, thank you so much, Kevin Roose. This was a fantastic conversation. It kind of felt like, at times, we were twisting our minds into pretzels, but it feels good to think hard and think about the ways that we go about our lives every day. We have the capacity to make the future look better and better for more people. So let's take that and run with it. Thank you so much for coming on Business Casual. 


Kevin [00:38:17] Thank you for having me. 


Kinsey [00:38:28] Thank you so much for listening to this episode of Business Casual. It has been an incredible week celebrating one year of this show with you. We started with Eric Schmidt and ended with that bananas conversation with Kevin Roose. Now that you've listened to these episodes and learned a ton about the ways tech impacts our lives, I hope you'll take a few minutes after this is over to disconnect. It's good for you. 


Kinsey [00:38:50] Thank you again for a year of Business Casual. I can't wait to see you next time. [sound of a ding]