Can Social Media Be Saved?
Elon Musk’s Twitter takeover has reignited a debate about the dangers of social media. It may seem like all funny gifs and nephew photos, but these days, many sites are also full of dangerous lies that impact democracy, bullying that affects the mental health of our kids, human trafficking, and drug sales. Andy speaks with Stanford Law Professor Nate Persily and former Facebook employee Brandon Silverman about whether social media platforms should be policed, the consequences if they’re not, and healthier options if you’re sick of Facebook and Twitter.
Keep up with Andy on Twitter @ASlavitt.
Joining Lemonada Premium is a great way to support our show and get bonus content. Subscribe today at bit.ly/lemonadapremium.
Support the show by checking out our sponsors!
- CVS Health helps people navigate the healthcare system and their personal healthcare by improving access, lowering costs and being a trusted partner for every meaningful moment of health. At CVS Health, healthier happens together. Learn more at cvshealth.com.
- Click this link for a list of current sponsors and discount codes for this show and all Lemonada shows: https://lemonadamedia.com/sponsors/
Check out these resources from today’s episode:
- Read about the Platform Accountability and Transparency Act (PATA), a bipartisan bill that would require social media companies to provide vetted, independent researchers and the public with access to certain platform data: https://law.stanford.edu/press/the-platform-transparency-and-accountability-act-new-legislation-addresses-platform-data-secrecy/
- Check out the social media platforms Brandon says he’s most excited about, Front Porch Forum and New Public: https://frontporchforum.com/ and https://newpublic.org/
- Find vaccines, masks, testing, treatments, and other resources in your community: https://www.covid.gov/
- Order Andy’s book, “Preventable: The Inside Story of How Leadership Failures, Politics, and Selfishness Doomed the U.S. Coronavirus Response”: https://us.macmillan.com/books/9781250770165
Stay up to date with us on Twitter, Facebook, and Instagram at @LemonadaMedia.
For additional resources, information, and a transcript of the episode, visit lemonadamedia.com/show/inthebubble.
Brandon Silverman, Andy Slavitt, Nate Persily
Andy Slavitt 00:00
Welcome to IN THE BUBBLE with Andy Slavitt. Welcome to our Friday conversation. So there’s been a lot of debate recently around this whole notion of the social media platforms: who’s got what rights? Who’s got what obligations? How should it be regulated? And you know, some of this is, of course, driven by Elon Musk, not just his takeover of Twitter, but some of the seemingly puzzling, but also, in some respects, consistent decisions he’s making around pushing that platform to be more transparent, and less safe, other people might say. And the question was here long before that. Some people call these platforms a town square, where everyone has a right to be heard. And some people say, no, they’re businesses, and you know, that bears on what your First Amendment rights are. But I want to get into this with our experts, who I will introduce in a moment, at kind of a different level, which is how we think about the internet, and what role the internet is supposed to play in our lives. And it’s easy to overlook many of the good things that we enjoy about the internet when we focus on the bad things. And look, we were around before the internet, my wife and I, and probably many of you listeners, and it was amazing. The first time we bought a book on the internet, I thought that was super cool. And I’m sure the first time that someone connected with a long-lost friend on Facebook, it was incredible. The ability for our kids, or even us on the show, to do research and get answers to questions really quickly, to find like-minded people, all of those are good things and clear advances. But somewhere along the way, the bad things, which have sort of always been there to some extent, the spreading of dangerous lies that impacted democracy, the mental health of kids, human trafficking.
You know, the episode we did about fentanyl, and how social media sites are used to sell fentanyl. These things became more overwhelming, to a point where, in both the US as well as Europe, there are many things that have just been identified as problems, including privacy, including how these platforms are used, including whether or not they should be policed. And there’s a great debate going on. And that’s what the conversation we’re going to have today is about. Nate Persily is a professor at Stanford Law School. He has taught there for the last nine years. He’s a scholar in constitutional law, election law, and the democratic process. He is very tuned in and focused, and has done a lot of research, writing, and speaking about these very questions around the internet and internet law. And Brandon Silverman is the former CEO of a company called CrowdTangle, which he sold to what was then Facebook, now Meta. And he has a very interesting perspective, because he is someone who has been pushing in his career for more transparency from the internet platforms, then was acquired by one, saw what it was like inside, and is now on the outside again, pushing for transparency. So let’s bring these folks in.
Andy Slavitt 03:34
I want to start maybe with you, Nate. We kind of used to love the internet. Yet somewhere, somehow, something changed. What changed? When did this happen?
Nate Persily 03:48
So I think 2016 was a turning point in the way that we think about the internet and the costs and benefits associated with it, as well as, in particular, for social media. I think that the Brexit referendum plus the Trump election, and the allegations of foreign influence in that election, definitely sort of flipped the balance for many people, where they saw the potential dangers that digital communication technologies, and in particular social media, might have for democracy and larger societal issues. Now, of course, given the examples that you mentioned, it’s not as if the only dangers we thought about on the internet were political. But in some ways, that was when I think the bubble burst. And then, you know, the Cambridge Analytica scandal, which elevated privacy as an issue.
Andy Slavitt 04:36
Remind us of the quick outline of the Cambridge Analytica scandal for those who don’t remember.
Nate Persily 04:40
I was just about to do so. So the Cambridge Analytica scandal in many ways grew out of the 2016 election, because Cambridge Analytica was a consulting firm that used the APIs that Facebook made available, the sort of interfaces that researchers and others would use to access the platform, to do some studies. Those would then learn certain information about your friends on Facebook and then potentially develop psychographic profiles that could be used for targeted advertising. In many ways, I think the Cambridge Analytica scandal is more a metaphor for everything that has been wrong with the internet with respect to privacy than it is about the facts of that particular scandal. But the interaction of a political consulting firm with private data that Facebook held on us was a way of kind of bringing the larger controversies to the surface. And then the second watershed moment, I think, was the Frances Haugen revelations a year or two ago, because those brought out all kinds of other information that Facebook had. Of those revelations, the most significant thing that got attention, even though I think it’s unwarranted, were the studies on teenage mental health. You also got the evidence about the lack of content moderation and the lack of resources that the company was dedicating to conflict zones in the developing world, and then larger sort of questions about how the content moderation system worked.
Andy Slavitt 06:05
Got it. So Brandon, there are differing philosophies of the internet and how it should evolve and what role it should play in society. I mean, you could read a blog from Jack Dorsey, who would say, you know, the internet should have no, or very little, filtration or curation; there’s too much concentrated power; and this is all about power to the user. And there’s another view, of course, which is that people are concerned about some of the issues that Nate was just outlining and that I talked about, and that there is a curation function, much like any media outlet has an obligation or a role to play in doing that. And I’m wondering if you could give us a little bit about your story and your connection to the company you started and how it bears on these issues, and then also how you think about that kind of question.
Brandon Silverman 06:58
Great. Yeah, I mean, I grew up on the internet, and so I was kind of a tech nerd for a long time. In high school, I created a class website on GeoCities that nobody visited, with a URL that was about 500 characters, which I wrote up on the bulletin board in our high school. So I’ve always been kind of fascinated by the internet. I started CrowdTangle in 2011 or so. It was originally going to be a community organizing app built on top of a lot of the APIs Nate was talking about that Facebook made available, which were, like, very exciting if you were a developer. We eventually turned it into a social analytics tool that was used by publishers and newsrooms all over the world to help them see what was actually happening on Facebook and other platforms. We ended up getting acquired by Facebook at the end of 2016, and I joined officially in 2017. And for four years, I led the CrowdTangle team under the banner of Facebook. And in addition to working with publishers, we began kind of becoming one of the main ways that Facebook was transparent with the outside world writ large, including academics and researchers and human rights activists and election protection groups, etc., to help them see what was happening on the platform. So I’ve been kind of enmeshed in this space for a long time. And, you know, my personal opinion is I think we’ve swung way too far in the direction of massive capture of the entire space by a very small handful of companies that are for-profit, and not only for-profit, but also owned at times by single individuals making all the decisions. And that’s not a functional, responsible system for spaces that are this important to the public discourse. And so I am, right at the moment, very intrigued and, frankly, inspired by a lot of the people who are trying to build alternatives that would give us a much more pluralistic and heterogeneous version of the internet than we’ve had recently.
And I think, ultimately, that’s probably one that’s much better for politics, democracy, voice, etc.
Andy Slavitt 08:55
Well, we’ll come back to some of those examples. But where do you come out, Nate?
Nate Persily 08:59
So the internet is a technology, and like many technologies, it can be used for good or ill. And so yes, you’re right that the same portals of connection that can let, say, cancer survivors in one part of the world join up with people who have something in common with them in other parts of the world are what allow Nazis to do the same thing, or, you know, people who are exploiting children, right? And we ordinarily don’t take something like the telephone and put it through this kind of cost-benefit analysis: well, what if bad people use the telephone to connect with each other and to plan crimes? Now, that’s about the internet writ large. When we talk about the platforms, the curation you mentioned is the critical question. The internet is not like a telephone, at least the current internet, because you have these large spaces that are governed by private companies, in very non-transparent ways, that both control the speech ecosystem and at the same time prioritize some communication over others. And so their decisions about what kind of communication gets priority are in many cases more important than formal law, both for regulating speech, but then also in terms of whether these harms are going to develop or not. And so, you know, these platforms are, I think, indirectly, if not directly, responsible for a lot of the problems that are being attributed to them, but we shouldn’t sort of fault the internet or the technology in general.
Andy Slavitt 10:29
Yeah, so it argues for some curation. And it’s interesting, because I think the very people, like Dorsey, who say we have too much concentrated power, and therefore we shouldn’t be kicking people off or suppressing people’s opinions, are the same people who have concentrated the power to allow those opinions to get out there. And, you know, they’re sort of equivalent. So what I want to do is take a quick break, come back, and talk about whether they’re equivalent or not by diving into this sort of, quote, First Amendment conversation that Elon Musk and others seem to be fine with having. So let’s take a break; we’ll be right back. We’re back with Nate and with Brandon. So let’s go at this First Amendment question. Nate, I’ll start with you. Some of this comes down to a simple question to me, which is: is, in fact, a place like Twitter or Facebook or YouTube a, quote unquote, public square? Or is it a private company where people are selling your attention to advertisers, and therefore those companies have a right, and perhaps certain responsibilities if they choose to take them, to decide what goes on their platform?
Nate Persily 11:59
So, I am one of the critics of the public square metaphor, and I would err on the side of saying that they’re more like private companies. But let me at least make the pitch, or the argument, for why people think it’s the public square, which is that, you know, so much speech is occurring in digital spaces that these companies control, and given the power that accrues both to the companies and to some of the speakers within these platforms, the analogy is often made. And frankly, this is done by people of both parties, by justices on the Supreme Court, as well as by the owners of these companies themselves, where they call it the public square. And this is by way of saying that, look, the normal rules of kind of corporate discretion maybe shouldn’t apply here, but that we should think about these speech environments as largely being sort of participatory and equal, so that people can come into these environments and then, as a general rule, be free to say what they want. Now, the reason this analogy fails is that, particularly if you look at what I’ll call the newsfeed portals, something like Facebook or Twitter, or to some extent YouTube, and certainly TikTok, the most important decision that these platforms make is what they put at the top of your newsfeed and what they put at the bottom, right? The ordering of information is really the nature of these platforms. And so that’s very different than the Boston Common, where the question is whether you are able to come into the space in order to speak. Here, the most important issues and questions revolve around how the platform curates information and organizes it in a way so certain users see certain types of communications but not others.
Andy Slavitt 13:40
Interesting. You know, I’m curious how you view it, Brandon, but also from the perspective of being inside Facebook, as you were for four years. What did they really think? And how much of it is driven by business interest, and how much of it is driven by, you know, some real moral good that they really espouse?
Brandon Silverman 14:03
Alright, well, this is probably a huge personal and professional mistake, but I’m going to disagree with the Stanford Law professor about my interpretation of the First Amendment. I mean, for me, that analogy feels like it holds up insofar as it’s maybe not the town square, but it’s the de facto town square; it is the closest we have to one. And so yes, there are all these ways in which it doesn’t actually match the public commons version of it. But in terms of where public discourse is happening, it certainly feels like there are a lot of places in the world where it is as close to a public square as we have. And so in that way, the analogy resonates for me. And, to segue into your second question, I think for those of us who want platforms to feel a sense of civic responsibility around the impact they are having with some of these decisions, the use of the public square metaphor is a way to elevate the sense of responsibility that they should have when they think about some of these product designs. So, you know, inside Facebook, I mean, first of all, listen, these companies are not monoliths. There are a lot of people who have very different views on these things. But my sense was that there is certainly one ethos inside the company that believes connecting people is a moral good, kind of, period, full stop. And that the degree to which they go above and beyond just the core connecting of people is kind of unnecessary, just additional measures they are taking to kind of ensure even more good is happening in the world.
Now, there were a lot of us, including me, who I think were not convinced that that was true, and felt that in order to be way more confident about the net good they were having, there were things they had to either invest way more heavily in or, in some cases, take some radically different approaches to. There was actually this period between 2017 and probably early 2020 at Facebook where I think they were, all the way up to Mark, very invested in thinking about how to answer some of these questions around their civic good and social responsibility. And, you know, this was around the time Social Science One got kicked off. This is when we were there. We were thinking about really radical stuff around transparency; there were a lot of new initiatives being launched. And I think there was a really genuine good-faith effort to dive into some of these issues head on. But I think that period is over now, for a variety of reasons, including just the rise of competitive pressures as a business. And so at this point, I would probably describe the internal ethos as: we think connecting people is good, and otherwise we need to maintain our existence as a profitable company, with the margins we’re used to, and deliver the value to shareholders we’re used to. And those are enough under threat that we have to focus on them right now.
Andy Slavitt 16:53
So you think about the fact that if I’m running for political office, if I’m gonna put an advertisement on TV, there are certain things I can and cannot say; they have to be checked to make sure I’m telling the truth, et cetera. And on Facebook, I don’t need to do that. And so there are two industries that I know of, and I’m sure there may be others, but two industries I know of that have protection from liability if their product is used harmfully. One is social media companies. If someone goes on and says something completely false about you, Brandon, the company has no liability. And the second is gun manufacturers, who have managed, also through their political power, for slightly different reasons, to have no liability. Hey, if someone picks up my submachine gun and shoots up a school, hey, I just made the gun. No issues here. And so I’m really a little bit perplexed by this question, and why I tend to, not surprisingly, probably lean toward Nate’s perspective, just in the sense that these platforms are clearly selling content for money. And because they are making money off of this content, that seems to me to carry some responsibility for what’s on the platform, much in the same way that I can’t call the New York Times and insist that they publish me. Why should I insist that I’ve got the right to be on one of these other platforms? What am I missing?
Nate Persily 18:18
Well, so there’s a lot there. Let me start with the public square metaphor, just quickly, but link it to what you’re saying, which is that you start with first principles here. The community standards that these platforms have developed, if they were truly regulations of the public square, like government laws, would all be unconstitutional; they would all violate the First Amendment. That’s true of the child endangerment rules. That’s true of, you know, the nudity regulations, hate speech, and the like. And I mention that because none of us really wants the First Amendment to apply to these companies.
Andy Slavitt 18:52
Your point is that the First Amendment itself has limits?
Nate Persily 18:55
Well, yes. We could argue about whether the Supreme Court has interpreted it correctly, even as applied to the government. But even if we all agree that that’s the right answer for the government, it’s clear that a different set of standards must apply to the platforms. And part of it is because of what I was saying before, which is that they are in a different business than simply allowing speech; they’re organizing speech. And so they have to have some of these rules, because content moderation is essentially what makes these platforms special. But the point here is that these are very important speech forums. And so the decisions made over regulations of speech have to be, you know, considered for what their consequences are going to be. You’ve got to worry about, for example, big corporations tipping the balance in elections and, you know, allowing some candidates to speak versus others. So all of that I think we agree about. Then there’s the question of how much money makes a difference with respect to this. My personal view is that most of the stuff that we spend a lot of time worrying about actually is not being driven by financial considerations. And by that I mean most of the election-related stuff, most of the political content, which we as elites tend to think, well, that’s what these platforms are about. Actually, most of what people are doing on Facebook is looking at pictures of their, you know, nephew’s wedding, and it’s very sort of non-political, non-dangerous stuff. And to the degree you’ve got sort of QAnon cesspools that are developing on these platforms, the platforms want to eliminate them as much as, you know, they can; it’s sort of a whack-a-mole problem. And I don’t think it’s financial considerations that are preventing it. However, the entire sort of business model of these companies relies on advertising.
And so there are a series of problems related to privacy and related to engagement that feed the advertising model, which are of concern. But that’s separate from something like, you know, foreign influence in elections, child exploitation, and the other things that we’re attributing to the platforms.
Andy Slavitt 20:57
My sense from lots of long conversations with Nick Clegg, for example, including an episode that I did with him last year, is they don’t want the responsibility. I completely agree with you; nobody wants to be the one to say, sheesh, I’ve got to decide where to draw the line, and these are very difficult places to draw it. So it does them no favors. They’d be more than happy, I think, to just be used to show, you know, the nephew’s wedding, but they’re not any longer. And the fact is, that wasn’t the business that Mark Zuckerberg probably thought he was entering into. He probably thought he was entering the business of looking at which girls were attractive on Facebook, from what I understand. But, you know, beyond that, it was sort of like Oppenheimer’s toy: he built something, and now it is in fact used for other things. And it’s a tough thing indeed to wrestle with. Let me ask you guys both about some fresh examples of what’s going on today on the internet, two very different things, and maybe get your take on them. The first is the variety of actions Elon Musk has taken since he took over Twitter, to the extent that you can put them together into a coherent philosophy or approach. What do we think of that?
Nate Persily 22:12
Well, it sort of depends on the day a little bit, right? I mean, there’s so much that’s been changing at Twitter: promises that are made and then, you know, reversed, and security teams and moderation teams that are lauded and then dismissed. So it really does depend. I mean, I think that the future of Twitter is extremely uncertain right now, both whether it’s going to exist and in what form. I think that, you know, in some ways we’re seeing someone learn the last 15 years of content moderation that all these other platforms have had to deal with, right? At some point, Elon Musk is going to realize that he cannot, by fiat, determine, you know, who is on the platform and who is off; you need some kind of process. And it’s very hard to predict what happens going forward. I think that we are now going to have a more libertarian speech environment on Twitter. The question is, does that then pull the rest of the industry in that direction? Or do they try to distinguish themselves?
Andy Slavitt 23:07
It seems to me like he has to choose between advertisers and his sort of no-holds-barred philosophy, doesn’t he?
Nate Persily 23:16
You’re right. Look, if they don’t figure out a way, either with payments or with purchasing, you know, blue checkmarks, or some of these other business ideas, and there seems to be a new one every few days, if they can’t replace the advertising revenue, then they’re in deep trouble. And then the question is, will advertisers return? And here, you know, it really just depends on how much the kind of toxic problems that are attributed to the platform are ones that advertisers want to stay away from.
Andy Slavitt 23:47
Brandon, before we take a quick break, I want to see whether you have a different take, or want to take on the professor again.
Brandon Silverman 23:54
Yeah, I mean, you know, I think in some ways this represents a lot of the complaints that people have had about social media, just jacked all the way up to, like, 100. Not only is a billionaire now running it, it’s the wealthiest billionaire in the world, who also has very strong ideological opinions he’s not afraid to use in his choices in managing the platform. But now he’s also facing an extraordinary amount of business pressure as well. So you have both the for-profit complaints combined with the ownership and accountability complaints, but at a level we’ve never seen before. You know, one of my worries, which I haven’t heard as many people discuss, is that in some ways Elon’s most impressive entrepreneurial accomplishments to date have been that he holds; he’s a good holder. He takes companies and bets, oftentimes ones that are in dire straits, and he’s just willing to hold on to them and see them through, no matter how long it takes and how much work it takes. And there’s a version here where it’s not that he runs away from Twitter in three months or six months because he gets bored or anything, but he sticks around for years trying to make his vision a reality, and the rest of us all have to go along with whatever chaotic ride that involves, or enough people leave that maybe we stop participating in the Elon show every day. But I’m not sure it’s going to end anytime soon, as I think I’ve heard some other people suggest.
Andy Slavitt 25:22
Let’s go to break, and when we come back, we’re going to talk about a couple of the what-next topics and what has to happen, like regulation, like self-regulation, and maybe even some of the other spaces that you’ve talked about that seem to be developing as alternatives. We’ll be right back. So I’m going to summarize what I think are the three or four compounding challenges we face, particularly with these platforms. And we’re going to just stipulate that there’s a lot of good that happens, that we want this good, and that some amount of bad is going to come with the good, but that if we can figure out how to meet these challenges, things will be better. So I’ll summarize real quickly. Number one is the ability to lie at scale. Right? You know, lying on the telephone is one thing; online, you’re basically lying at scale. Number two is the ability to do that in a very targeted way. The third challenge, we talked a little bit about privacy, but it’s actually the way that people’s data, and their contacts’ data, are used, oftentimes, that allows that targeting to happen. Some people do consider it an invasion of privacy; others it doesn’t bother as much. And then the fourth is this notion of concentrated power that you’ve referred to. I don’t know if you want to add to that list. Is that the right list, Nate?
Nate Persily 27:00
Well, I mean, it is a list, and, you know, it’s the beginning of one. I mean, if you put in things like teen mental health or hate speech, right, those don’t fit into those categories. The way I would think about it is that there are a series of problems related to content, right? There’s a series of problems related to power, like the antitrust question. Then there’s a series of issues related to privacy and surveillance, right? And so each one requires a different kind of intervention in order to confront the kind of new aspects of this technology. I often give a speech, and people can see it online, on whether democracy can survive the internet. And the thing I focus on is: what is it about the features of the technology itself that privilege certain types of communication, candidacies, and strategies over others? And so you take something like anonymity on the internet, right? That changes the way that people talk to each other. You mentioned that people can lie at scale, and then potentially lie in targeted ways. Well, the fact that it’s much easier to remain anonymous in our speech on the internet is what allows people to engage in unaccountable speech that, I think, you know, has particular downstream bad effects.
Andy Slavitt 28:17
That’s important. Maybe give folks a quick sense of the current state of internet regulation. There’s something called Section 230, which I don’t think everybody’s familiar with. There may be other things. There’s also a European approach, which has some differences from the US approach. Give people a sense of where we are today as a baseline, and then we’ll come back and ask both of you what can be improved from a pure regulatory standpoint.
Nate Persily 28:45
So the basic architecture of the internet is in many ways determined by, as you mentioned, Section 230 of the Communications Decency Act, which shields from liability platforms that host user-generated content. You compared that earlier to, you know, gun manufacturers, but in some ways it’s hard to imagine a functioning internet where the platforms would have to basically monitor everything that goes across their services to see if it might possibly be defamatory, might possibly incur certain types of liability. Moreover, the immunity that’s guaranteed by that law actually is not what gives us our hate speech problem or our disinformation problem, because those things are not generally going to lead to liability. So there’s that basic architecture of the internet from CDA 230, which has led to the proliferation of these platforms. Then you’ve got, you know, some rules with respect to privacy that come out of federal law, as well as through consent decrees with the FTC, most notably the Cambridge Analytica consent decree with Facebook, as well as one that applies to Twitter, which could rear its head very soon. But there is no national privacy law, and that’s something that I think concerns a lot of people. The DSA in Europe is sort of still under development, and the devil is going to be in the details as to how well it leads to regulation of algorithms, regulation of platforms, as well as greater transparency, which I know Brandon and I are most excited about when it comes to these new rules. But the Europeans do have a privacy law called GDPR, which has in many respects become the global standard that binds these internet companies, and they’re trying to get at some of these online harms problems through sort of greater reporting and other kinds of monitoring of the firms.
Andy Slavitt 30:34
Yeah, yeah, I would tell you, the one rule of thumb to your dying day is never defend an analogy, because every analogy has its limitations. And I’m about to defend my gun safety analogy in the following way. I’m not suggesting that gun manufacturers would be responsible for every irresponsible use of a gun. What I am suggesting is that, for example, Instagram, which is a known distribution channel from fentanyl distributors to kids, and results in lots of kids’ deaths, and is making money from those kids’ attention, might have some liability for a problem they know they have, and don’t have, in my opinion, much incentive to fix other than when public outcry gets too large.
Nate Persily 31:17
Yes, well, I know. And we should have regulations on things like that. But it’s easier to say that we should prevent these harms than it is to figure out the precise way that we stop, you know, fentanyl distributors from abusing these platforms, while at the same time not preventing other kinds of uses that we want to have access to the platform. Because ultimately, the question I think you’re trying to get at is, how foreseeable are the harms? And what steps should the platforms take in order to prevent them? And so you could have a system which requires, like, notice, where you say, look, once a platform has been notified of concrete harms, well, then they have to take action. And there are situations like that, for example, in the copyright area, right, where we do have notice-and-takedown rules. And in many ways, that’s the model that people would support. The hard part here is that, look, if you have judgments or orders from responsible authorities to take down this kind of information, that seems fine. And Facebook and Google, they do this all over the world. The problem is that if you just have a generalized obligation to prevent harm or else you’re going to be liable, then they naturally are going to have to use automated filtering to prevent all kinds of content that we actually want on the platform.
Andy Slavitt 32:36
But there’s an assumption you’re making here, I think. Notice and takedown is one approach, for sure. But one of the assumptions you’re making is that these platforms are just out of control in terms of size. They’re just massive, and there’s no possible way, like, you know, if the three of us started a website and we had like 50 people on the site, and someone put something on the site that used a bad word, and we wanted to be a family website, we could take it down because we could see it. But here they can claim, look, it’s just beyond our ability to know what’s going on in the platform. And so if a few kids died from fentanyl poisonings, or even if a few thousand kids died from fentanyl poisonings, so be it, because, you know, we’re not expected to know. Maybe that’s the right assumption. Maybe the assumption is that this is going to be too big for any humans to manage. But maybe not. I mean, I think there are other spaces on the internet that are still trying experiments, like Mastodon, for example, that are trying to control that scale a little bit, so that they could say, hey, if there’s something bad going on, we will see it.
Nate Persily 33:41
Well, on that last part, something like Mastodon, we’ve got to be careful about these decentralized systems when it comes to things like content moderation, because there’s nothing about decentralization that is going to prevent these kinds of problems, whether it’s QAnon discussion groups or, you know, child exploitation groups, from developing. And so while I still think, you know, that we should push for greater competition against these platforms, and there are lots of reasons to do that, it’s not clear to me that having a thousand Facebooks is going to prevent some of these problems, particularly since, when we deal with incitement and problematic speech, a lot of the problems are happening among a small group of users. Take QAnon as the paradigmatic example there. QAnon is a very kind of viscous movement, where it sort of slides between different platforms and is able to recreate itself. And so more and more decentralization is not going to solve a problem like that. By the way, this issue is before the Supreme Court this year in that Gonzalez v. Google case, as to whether YouTube is going to be liable for the fact that terrorists may have put up some videos that then led to a bombing which killed some people. And so the question is, alright, suppose we have that as the new rule, and it’s not dissimilar from the fentanyl example you’re talking about. It’s like, they’ve got to be much more aggressive in taking down potential terrorist content, which is something that they’ve been wanting to do. What ends up happening, and we know this from the way that the platforms have behaved in the past, is they end up over-censoring and over-filtering, so that, for example, Muslim content ends up being taken down as a result.
Now, as you suggested, the problem is the scale here, the fact that it has to be done through automated filtering, right, because that then requires them to develop these kinds of shorthands that will go after any problematic content. But the more they’re liable for it, the more they’re going to take down more than just the problematic content.
Andy Slavitt 35:50
Yeah, I guess I’d say, and this is a very flip view, so it could be refuted with facts easily, that these are very clever people. And if their revenues are threatened, either advertising revenues or just engagement revenues, by overcorrecting, they’re going to have an incentive to cut the line as close as possible. And I think we can all stipulate that there will be no perfect way to only get the right things and not the wrong things. It’s a hard problem. I mean, there’s no question.
Brandon Silverman 36:24
Yeah, I mean, I think the challenge there is that there simply hasn’t been very much choice in the advertising market in the digital space for a long time. You know, Pepsi didn’t have that many places they could go. If they had a $10 million campaign they wanted to run ahead of the Super Bowl, they went to Google or Facebook, and that was it. And maybe they went to Instagram. So I think one of the challenges is actually that there wasn’t a lot of those pressures. I mean, when I was at Facebook, there was a massive ad boycott attempt. I knew the organizers, and it was more wildly successful than they could ever possibly have imagined. And it didn’t make any difference. It made very little difference. Because, one, there’s no choice. And two, Facebook has, I think, something like 50 million advertisers who use the platform. It is a very long tail. And so it makes it very hard for any sort of organized pressure to really dent the bottom line.
Andy Slavitt 37:14
I agree. Let’s maybe finish up by talking about what we as individuals can do. Certainly systematic solutions and suggestions, but also, you know, as individuals. If you are like me, well, I’ve got some 700,000 followers on Twitter, but I don’t use the platform very much anymore. And I would say, as soon as Post is big enough, or some alternative is big enough, I’ll probably stop even posting on Twitter. And I’d say that there are a few things that push different people over the edge, but for me it was him personally becoming a bully, and his attacks on my friend Tony Fauci, which I think are quite dangerous. You know, to me, it’s not about any lofty philosophical arguments. This is just a bad person who is doing bad things, and I don’t want to frequent his platform. But people who have engagement, who have a following, or even who just want to use these platforms because they find them useful, are they gonna have choices? What choices do they have? Are there things that are developing, anything particularly exciting or positive that you’re seeing out there?
Nate Persily 38:16
Well, on the choice front, look, we’ve spent 40 minutes or so here talking and we haven’t really mentioned TikTok. And, you know, TikTok is becoming the most dominant social media enterprise in the world, and it’s incredibly important that we start focusing on it, particularly given the fact that the Chinese government has some, you know, control over it. So there is competition that’s out there. I actually think Facebook, you know, look, Facebook’s lost 70% of its value in the last few years, and so it’s also on the ropes. As new technologies develop, if the metaverse becomes what Zuck hopes it will, if there are alternative social networks, and we’ll see whether Twitter implodes, yeah, I think that there will be some competition here. Note also, even within these companies: one of the reasons I don’t think, say, the algorithms or the advertising are necessarily all the problem is that you look at something like WhatsApp. If you do work in the developing world, WhatsApp is really the social network. That’s where people communicate, and all of these problems that we’ve been talking about in terms of social harm you see on WhatsApp, and that’s not, you know, one that they’re even making real money off of. Anyway, so I think that there is some competition. In terms of the future, though, we can’t let this go by without Brandon and me advocating for the piece of legislation that we’ve worked hard on, which is the Platform Accountability and Transparency Act. It would require the platforms to provide access to outside researchers, as well as public disclosures about advertising, widely viewed content, and their algorithms. And the first step toward any sensible regulation in this area is to really understand what we’d be regulating.
These platforms are black boxes, and the public and, you know, experts on the outside need to get access to the data to understand the scale and nature of these problems.
Andy Slavitt 40:13
So we could describe what harm is, in fact, occurring, to what degree, et cetera. Brandon, would you say more about that? This is obviously a key part of the business that you started.
Brandon Silverman 40:24
Yeah, I mean, one of the things I eventually ran into is that there are limits to how far you can push this from inside companies. And sometimes it simply just comes down to incentives and org structure, as well as competing priorities, other things. But you know, I left convinced that what we need is regulatory requirements around this work, you know, ones that apply to all the platforms, can be thought about globally, etc. And there’s actually a lot of really exciting stuff happening in this space. I think, you know, the Platform Accountability and Transparency Act is going to be the US version of attempting to make this a requirement here. And there are some ways in which it would actually go farther than some of the stuff happening in Europe, which is really exciting, and I think a chance for the US to be a global leader in this space. It has Senator Coons, Amy Klobuchar, Rob Portman, and potentially some more coming on board soon.
Andy Slavitt 41:15
We’ll put a link to a summary of it in our show notes. But you know, Portman’s a Republican, the other ones are Democrats; Portman is obviously retiring. You know, there is one thing that everybody in Washington agrees on, probably for different reasons, which is that, you know, big tech is out of control and a threat and so forth. So I would imagine that responsible legislation like this could pass. I like the way you describe it, as sort of a first step: let’s learn more before we make, you know, knee-jerk decisions. You know, I’ve pushed you guys in this conversation a little bit to go all the way from one extreme to the other, just to help us understand the issue better. But what you’re suggesting for a legislative path sounds like it’s more in the category of, let’s learn more so we can figure out what needs to be regulated and how best for advertisers and users to judge these platforms.
Brandon Silverman 42:04
Yeah, I mean, listen, you know, Nate and I both testified in the Senate. And, you know, Ted Cruz showed up and he said, there is no amount of transparency that would go far enough for me when it comes to this stuff. So, you know, I think the hope is that, once we get to a final draft and it’s introduced, it has the possibility of getting bipartisan support. I think there are a lot of, like, early signs that show that could be the case, which is exciting. And at some point, let me answer your question about the smaller networks and the metaverse, because I have some thoughts there, but we can finish with Nate as well. Yeah, I mean, I’ll keep this short. But listen, I think if we had asked ourselves four years ago, how likely is it for there to be genuine, real consumer choice in kind of the social media space, I think we probably wouldn’t have been particularly optimistic. But I think there’s a greater chance of that becoming a reality for users on the internet now than there has been in the last five, maybe eight to ten years. You know, some of the smartest people I know in the space are either going into clean tech, or they’re going into how to build healthier kind of social media alternatives. And there are some great examples out there. There are things like Front Porch Forum in Vermont, which is this very safe, incredibly civic-minded, very neighborhood-based social media site. And people love it, and it’s great. And it’s the sort of thing I think a lot of us imagined when we imagined, like, the best forms of social media out there. There’s an organization called New Public, which I’m a huge fan of, and they are working with public broadcasters around the world to think about how to integrate more kind of social media into public infrastructure that already exists. So I think there’s a lot of exciting stuff happening out there. I mean, it is, you know, it is not a certainty by any means.
But I think there’s a greater chance of it becoming built into the internet as we know it now than there has been in a long time.
Andy Slavitt 43:55
Well, and I’ll offer this: if there are ones you want to send us, we’ll put links to them for people to check out, because more choice is certainly good. Let me give you the last word. Nate, how would you close this off?
Nate Persily 44:08
Well, there are going to be many sort of developments in the near term that I think are going to be quite important for the future of speech online. The return of Donald Trump to Twitter was one of them, but how Facebook soon decides whether to return him, I think, is going to set the stage for the 2024 election. How Apple deals with these big platforms in the coming year, I think, is going to be really, really important. And then the third is what’s happening in Europe, because they’re ahead of us. And so how the DSA and the DMA get implemented could be the tail that wags the American dog here, because not only will they provide a model of regulation for us, but the platforms often, as they did with GDPR, will decide, and this is what we call the Brussels effect, that in order to take advantage of the European market, they’ll just make those rules essentially standard operating procedure around the world.
Brandon Silverman 45:00
Alright, can I add just one more to Nate’s? You know, you mentioned Nick Clegg. I also think there’s one fascinating thing to watch going forward, which is, how do the platforms orient themselves toward these new transparency requirements that are coming out? They could take an adversarial approach, in which they take the narrowest possible interpretation of every requirement, they litigate every possible word, etc. Or they could genuinely try and commit to being good-faith partners in this stuff. And, you know, I think a lot of people thinking about Nick’s tenure at Facebook, especially in his new role, see the Trump decision as one that is really going to be part of, like, the legacy he leaves. For me, one of the big questions for him is, how is he going to orient the company toward transparency and data sharing? When Nate was there doing Social Science One, and when I was there, Nick had just barely joined, and I think he was not in the position he is now. One of the things that I specifically ran into was a lack of executive leadership that really cared about this issue. And one of the things I’m interested in is what Nick’s legacy is going to be when it comes to data sharing and transparency at the company. I think in some ways it might be the most important.
Andy Slavitt 46:12
Okay, with a one-word answer: what’s the likelihood over the next few years that some new regulation, whether it’s the ones that you worked on or others, gets passed? And I don’t mean in some minor way, but in a way that sort of substantially changes the shape of the future of these social media platforms in the US.
Nate Persily 46:36
Brandon Silverman 46:38
I’m a congenital optimist. I’m gonna say 65.
Andy Slavitt 46:42
All right, well, you’re both good at hedging your predictions. I think they’re both pretty safe, because whatever happens, you guys could both claim that you got it right. A smart way to make predictions. Thank you so much for coming in the bubble. Really enjoyed this conversation. I learned a lot. No shortage of opinions out there, but great to have a couple of experts. So when we were done recording, and just shooting the bull a little bit, I asked Nate whether he thought Trump, who’s obviously been allowed back on Twitter, would be allowed back on Facebook. And he answered that. It’s gonna sound a little bit different because we were just talking and we weren’t using the professional recording, but I thought I’d play that for you as a bonus.
Nate Persily 47:42
I think it’s quite likely that Facebook will return Donald Trump to the platform. I think that all of the things that were said, both by the Oversight Board and then in Facebook’s response to it, suggest that they were on that trend anyway. Before the midterm election, I thought maybe they wouldn’t, because of the violence against Paul Pelosi; I thought that was something that might make it more likely that they would keep him off. I think the relative passivity of the midterms means that the threat of violence, which is what they are, you know, keying their decision to, has subsided somewhat in their minds. And so if that’s the criterion that these platforms are using as to whether he gets reinstated, then they might point to that and say, look, alright, now things have calmed down enough that he should be reinstated. The really difficult problem here is, what will they do to take him down again if he violates the community standards? We held an entire conference here at Stanford on this question, where I said that, you know, I think there are middle-ground possibilities here that people are rejecting. For example, you sort of gradually let him back on the platform: you allow him to have an account, but then there’s no amplification. And then, once it’s clear that he’s not violating the community standards, he could have kind of full rights on the platform. But I think that they’re reluctant to do that. And this, again, I do not think is an economic decision. I think that they genuinely are just worried about perceptions of bias in the way that they’re governing the platform.
Andy Slavitt 49:20
Alright, let me finish up with a message to you all as we get to the end of the year and we approach the holiday season. I actually do want to begin with a message to the IN THE BUBBLE team, led by Kyle and Kathryn, James and Noah, and everybody else. Thank you so much for putting together a great, great show this year. We went through a major transition. We added another day of the week. We started covering lots of topics; there was lots to scramble about. There were some really fun shows. There were some really interesting shows. There were some shows that we didn’t think were gonna be good that were great. There were some shows that we hoped would be great that probably fell down. And while you hear from me, because this is IN THE BUBBLE with Andy Slavitt, I do about 5% of the work. Those guys are amazing, and I want to thank them and wish them happy holidays. They’re going to take next week off. And so next week, as you’re celebrating the holidays, Christmas, Hanukkah, whatever you celebrate, or just celebrating being with your family, we’re going to have a couple of shows that I think you’ll find appropriate to hear during the week if you want to listen. Monday, no show. Wednesday, we have a show about the economy. You heard Justin Wolfers say what the prospects for a slightly better economy might be next year. But still, we thought an episode that we’ve done on economic tips for getting through recessions and inflationary times would be a great show to put on. So you’re going to hear that on Wednesday. And on Friday, a show on a topic we thought we were done with: student debt. But we’re not done with the topic of student debt; we did a great show on that, and because it’s being contested, you’ll hear it. So next week will not be a fresh programming week, but if you love the show and you missed those episodes, come on and listen.
And then we’re going to begin January with some great fresh shows, including, probably the way I’m most excited to kick off the year, a conversation with Congressman Raskin from the January 6th committee. And we’re going to be talking about that. And we have many, many great shows ahead. Finally, I want to thank you for all of your listening and patience, and sticking with me, and giving me comments and helping make the show better, and telling your friends about the show. And I know that it’s been an oftentimes up-and-down year in terms of all the things that have happened. Hopefully this show has been a small part of making you feel better and learn a little bit more. But I’m grateful to you. Happy Holidays.
Thanks for listening to IN THE BUBBLE. We’re a production of Lemonada Media. Kathryn Barnes, Jackie Harris and Kyle Shiely produced our show, and they’re great. Our mix is by Noah Smith and James Barber, and they’re great, too. Steve Nelson is the vice president of weekly content, and he’s okay, too. And of course, the ultimate bosses, Jessica Cordova Kramer and Stephanie Wittels Wachs, they executive produced the show, and we love them dearly. Our theme was composed by Dan Molad and Oliver Hill, with additional music by Ivan Kuraev. You can find out more about our show on social media at @LemonadaMedia, where you’ll also get the transcript of the show. And you can find me at @ASlavitt on Twitter. If you like what you heard today, why don’t you tell your friends to listen as well, and get them to write a review. Thanks so much, talk to you next time.