The Anti-Elon Musk (with Noam Bardin)

Subscribe to Lemonada Premium for Bonus Content


Ron DeSantis’s disastrous campaign launch on Twitter Spaces drew public attention back to the ongoing collapse of Twitter since Elon Musk bought it. Users are fleeing, and Noam Bardin is hoping they will flock to his new platform, Post.news. Bardin, also the founder of traffic app Waze, joins Andy to discuss why we need a vibrant news and social media platform to ward off authoritarianism and how he plans to make a more civil social media platform. Bardin also shares his unique take on AI and what people should really be concerned about when it comes to robotics.

Keep up with Andy on Post @ASlavitt.

Follow @noam on Post.

Joining Lemonada Premium is a great way to support our show and get bonus content. Subscribe today at bit.ly/lemonadapremium.

Support the show by checking out our sponsors!

Check out these resources from today’s episode: 

Stay up to date with us on Twitter, Facebook, and Instagram at @LemonadaMedia.

For additional resources, information, and a transcript of the episode, visit lemonadamedia.com/show/inthebubble.

Transcript

SPEAKERS

Andy Slavitt, Noam Bardin

Andy Slavitt  00:18

This is IN THE BUBBLE with Andy Slavitt. Don’t forget to email me at andy@lemonadamedia.com. I have Ron DeSantis here today. Ron? Ron? Ron? Shoot, this isn’t working. God, did you all listen to that thing with Elon Musk last week, when he tried to make the announcement of Ron DeSantis’s new candidacy on Twitter? And it didn’t work. Like, what part of that was difficult? They apparently did no load testing of the site. And I don’t know who I’m happier for, Ron DeSantis or Elon Musk. But it was a beautiful moment. I bring that up today not to gloat over someone’s misfortune, because ultimately it will be forgotten, it was just a funny moment, but because in this episode, we’re actually going to have an adult conversation about social media and what it does. And we’re going to have it with Noam Bardin. He’s the guy who founded Waze, that app you probably use to figure out how to get to your job in the morning. And he is a guy who decided, in the wake of Elon Musk purchasing Twitter, that he was going to start a competing social media platform that tried to capture the things we like about social media, but leave out all the negatives. So that’s what we’re going to talk about today. We’re not going to talk about Ron DeSantis, because I’ve done that enough. And we’re not gonna talk about Musk all that much, other than as a contrast for kind of what the world needs to look like. This is a pretty deep conversation. I think Noam is a deep kind of thinker. And he’s thought a lot about this question of, what do we need to regulate? Why is it important for us to have real dialogue, why that’s important for democracy, why it’s important for how we relate to one another. And how all of these things, particularly, I think, the stakes we face as technology becomes more and more powerful, put even more pressure on us to know who we are as people, to know who we are as a community, to know who we are as a society. Because if you don’t, I think you get overpowered, and you just run in whatever direction the technology leads you. And we’ve been doing that with social media; I could easily see us doing that with AI. And the way to stop it is actually to have real dialogue, with real leadership and with real values. And that’s, I think, why we have to have this adult conversation today with him. You know, some of our conversations are really perfect for meeting the moment. Others meet the moment, but also are ones that will have meaning, I think, not just today, but months from now, and years from now. And I love those conversations. This is one of those conversations. I will think about it for a long time. So I’m excited for you to hear it. I hope you enjoy it. Here comes Noam.

Andy Slavitt  03:51

Noam Bardin, welcome to In the Bubble.

Noam Bardin  03:53

Thank you for having me.

Andy Slavitt  03:56

So how’d you get interested in tech in the first place?

Noam Bardin  04:00

So it’s funny. I grew up in Israel, and my father was a computer engineer, and my brother was really into computer engineering. And I just wanted to play outside and climb trees. So I went a completely different direction. I studied economics and political science; I was going to change the world, go to the World Bank, go to Africa, I had all these plans. And during my undergrad, by complete coincidence, I managed to get a job at a startup, back when there were hardly any startups and nobody knew what I was talking about. This was the 90s, right, the mid-90s. And I got into it, and you know, one thing led to another. And after a while, I went to do a Master’s in Public Administration, and I said, now we’re really going to go and save the world. And then, when I spent some time with people in government, and in nonprofits around government, I kind of reached the conclusion that the people you spend your time with are really what matters. And a lot of these people I do not want to spend my time with.

Andy Slavitt  04:55

What is it about Israel that has made it such a good climate for both startups and technology?

Noam Bardin  05:04

So Israel obviously has a very high level of education, especially in the sciences, etc. But the military really is the driver. In Israel, there’s a draft, so everyone goes to the military. So the military can choose the brightest of the whole country, and they put them through this kind of your-undergraduate-in-a-year, intensive program. And then they take these kids and give them projects that would take them 20 years in the private sector to ever get to manage and lead. And that becomes the core engineering talent. And with it, Israelis are very entrepreneurial; you know, we’re very bad at scaling and at following orders, but we’re very good at doing unconventional things. And that, with the technology base, really makes Israel a hotbed for tech and startups specifically.

Andy Slavitt  05:57

So you founded Waze, which is a company that everybody knows. And of the millions and millions of startup companies, there are very few you could say that about. It also was a space that wasn’t empty when you got there; people were already using map technology. Tell us a bit of that story. What led to that founding? What was the kind of core breakthrough for you?

Noam Bardin  06:25

So not many people understand the market for maps. When we started Waze in 2009, the issue was that maps were the most expensive part of navigation. There were two companies in the world that basically provided maps, Navteq and Tele Atlas. Navteq got bought by Nokia for 8 billion dollars, and Tele Atlas got bought by TomTom for four and a half billion. And those were the only places to go. So when you went and bought a physical GPS, the most expensive part of that GPS was the license to the map; not the hardware, not the marketing, it was the license. And so with that, nobody could give away maps and navigation for free, because it was just so expensive, like $5 per user per month. There’s no way you could do that for free. And our approach was to build a map from scratch, which everyone said was crazy, and to do it with a community, using GPS data off users’ phones, with users participating, very similar to Wikipedia. No one had ever done it. Obviously, we were laughed out of the room by every map company. But building our own maps, which in the beginning was a liability because they were terrible in the beginning, allowed us to give it away for free, to get traction with users, and the maps kept getting better and better the more people used them, to the point where, for driving, I believe Waze has the best maps globally.
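A minimal sketch of the crowdsourcing idea Bardin describes above: noisy GPS traces from many drivers are snapped to coarse grid cells, and cells that enough independent drivers pass through are treated as road, so the map improves as usage grows. This is only an illustration, not Waze’s actual pipeline; the grid size, threshold, and function names are assumptions.

# Toy illustration of crowdsourced map building from GPS traces.
# Each "trace" is a list of (lat, lon) fixes from one driver's phone.
from collections import Counter

GRID = 0.0005  # ~50 m cells; a hypothetical resolution, not Waze's


def to_cell(lat, lon, grid=GRID):
    """Snap a GPS fix to a coarse grid cell so noisy traces overlap."""
    return (round(lat / grid), round(lon / grid))


def build_road_cells(traces, min_traversals=3):
    """Keep only cells that enough independent drivers have passed through."""
    counts = Counter()
    for trace in traces:
        seen = set()  # count each driver at most once per cell
        for lat, lon in trace:
            seen.add(to_cell(lat, lon))
        counts.update(seen)
    return {cell for cell, n in counts.items() if n >= min_traversals}


if __name__ == "__main__":
    # Three drivers on roughly the same street, plus one off-road outlier.
    traces = [
        [(32.0800, 34.7800), (32.0805, 34.7805)],
        [(32.0801, 34.7801), (32.0806, 34.7806)],
        [(32.0800, 34.7801), (32.0805, 34.7804)],
        [(32.2000, 34.9000)],  # lone outlier, filtered out by the threshold
    ]
    print(build_road_cells(traces, min_traversals=2))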

Andy Slavitt  07:42

It sounds like in some ways, Waze is a story of all the things we like about the internet, and about the promise of technology: that all of us together are smarter than any one of us, that we’re contributing things we know that are going to be helpful to others. So there’s a sort of communal element to it. And I think when we think about how, maybe, we used to feel about the internet, those are some of the values we think of. Is that right? Am I, like, glossing over too much? Or do you think that there is sort of a vision of the internet that is like that?

Noam Bardin  08:21

You know, there are a lot of companies that get founded and basically disrupt an area, and then there are winners and losers, right? And you could argue that they’re evil, or they’re not evil, or whatever they’re doing. The thing about traffic is that there are no good guys when it comes to traffic, right? Everyone hates traffic globally. By the way, every person thinks that their city has the worst traffic in the world, no matter where you are. I can tell you it’s Jakarta, Indonesia, but never mind, everyone thinks it’s theirs. But having that kind of a common enemy really united people that commute every day in traffic: that feeling of helplessness, that you’re sitting in traffic, looking around with nothing to do. Even the fact that we tell you there is no better route, that in itself makes your ride better, and then we make it a little more fun. And so it definitely brings out the best in people. And one thing I love about what we did with Waze, and I think the thing I’m most proud of, is that the brand is very personal and very non-threatening. People have an emotional connection to Waze; you don’t usually have an emotional connection to a utility, right? It’s like, what do I care? It’s a word processor. People have an emotional connection, sometimes a negative one, so they get mad that Waze did this to me, as if Waze is an entity. And that really is something, and behind it really are the people. And that’s the point: crowdsourcing, when people get together and work on things, creates a different quality of product.

Andy Slavitt  09:45

That reminds me of when vaccines first became available, and people were trying to use the internet to crowdsource where to get them. So like, that’s how the internet can help. But when we get back, let’s talk about some of the negatives, particularly on social media. And I want to know if it’s too late to stop all of the bad things from happening, including, let’s talk about AI and whether it’s coming for your job, when we come right back. So if we both agree with that, and I think people are going to be nodding their heads and saying, yeah, that was such a great part, and it still remains a great part of the internet. But there’s this other part of the internet, this other side. We did an episode with the former editor of BuzzFeed News about clickbait, and how it really caused us to lose our confidence, our trust, in information and in journalism. Social media, Twitter, which, you know, I think if there’s a morality story for Twitter, it’s that it started out being fun and ended up being potentially really harmful. At the end of the day, we were just sort of overpowered by whatever negative energy there was. I’m curious, do you have a rubric or mental model for thinking about, like, how this happens? And obviously, you’ve got a lot to say about social media; you’ve made somewhat of a bet that we can, that maybe somebody can, reverse this kind of negativity.

Noam Bardin  11:34

So I think when we think about social media, like many things, the business model really drives the behavior. And no matter what we say, these companies are ad supported, and being ad supported, they need engagement. And no matter how you tweak your algorithm, hate works; we’re like one step away from the monkeys, right? We love hating things, and that brings a lot of engagement. And so the algorithms that promote the content will always, at the end of the day, be promoting content that drives that hate and that division, because it works, right? And so I believe that the business model of advertising is the core challenge of the social media networks, what made them these promoters of hate. The second thing is really having the will to do something about it. You know, people love to say, oh, Facebook doesn’t know what’s going on, there’s no way they can deal with it. Facebook knows you’re gonna buy a car before you know you’re gonna buy a car, right? And so if the right incentives were in place, of course Facebook could deal with this, but they aren’t there. And this really goes to our government, the dysfunction of our government, right? The role of government is to provide the regulations that are the incentives for companies to do the right thing; leaving it up to CEOs to decide on their own is a problem. So my belief is that social media can be done differently. I also think that we’re in a new generation of social media now. We had the initial companies, Facebook, Twitter, YouTube, et cetera. They grew up on the web before mobile, they grew up in a certain environment, with certain technologies, and became what they became. And if you think about TikTok, TikTok is not a YouTube clone. It provides similar functionality, but it’s something completely different, right? And in many ways it is the next generation of what YouTube was, of being that kind of place for videos and for consumption, etc. I think that’s going to happen across all of social media. And obviously they’re not perfect, no one is perfect as long as there are humans involved, but we need to find a way to bring these together, because consumption of news on social media has become the standard for internet natives. But these platforms were not built to work together: there is no monetization model, there is no consumption model. You know, you click on a link and you get sent to a site and you get a paywall dropped in your face. All those things were not built to create a healthy environment. And so that’s kind of the driver for me; founding Post was really around the idea that I think we can do it better. I’m not trying to create a Twitter clone. I personally don’t believe that you can clone something that was created at a certain time; there will never be another Twitter. Twitter was a unique phenomenon. But I think there’s functionality that will go in different directions, and the functionality we’re focused on really is that news consumption. We think the social feed is the right place to consume news, but we’ve got to work together with publishers and together with creators to build a healthy ecosystem.

Andy Slavitt  14:31

I want to get into what we think Musk’s vision is. But before we talk about Post, because I’d like to make an interesting contrast of philosophy, if nothing else: what was the Twitter vision or philosophy pre-Musk, as far as you could tell? And what do you think Musk’s philosophy is? I mean, you’d have to use the words First Amendment 17 times in a sentence to describe how he would at least justify what he’s doing, but how do you describe it?

Noam Bardin  15:02

So I think people love to focus on Elon Musk now, because obviously it’s a hot topic, and he’s doing his best to keep himself a hot topic. But Twitter was a sick company before, right? Twitter never managed to build out its business model, and the hate and the misinformation and the disinformation were rampant on Twitter before. It’s not that everything was great and Musk came and destroyed it. But Musk definitely accelerated a lot of these trends.

Andy Slavitt  15:27

And he overpaid for it.

Noam Bardin  15:29

It’s funny, as soon as the price was announced, basically, the outcome was given. Yeah, there’s no way anyone at that price could make this a healthy company. You know, it’s funny, if you look at Musk from the outside, you would say that this guy is a right-wing fanatic who’s taken over this platform to destroy journalists and, you know, do whatever he wants. But I think it’s really about this generation of influencers, where the only thing they care about is their influence. And on that, Musk does a good job of being in the news every day, right? And Twitter delivered that for him. Now, we might not agree with what he’s doing, but it’s working, right? He is there every day. I think that before Musk, there was a big push on trust and safety, and Twitter was probably one of the most advanced companies in dealing with trust and safety; it really cared about it. You could argue about how well they did it and where they did it, but that was there. And by removing that completely, he’s basically let the worst of Twitter expand rapidly. And that’s what we’re seeing today. I mean, everything from engagement to discoverability to just the toxicity of the platform is out of control. Now, when you think about other platforms, like Truth Social, Gab, etc., being a completely open platform has never actually worked. TikTok is the most heavily moderated platform out there, and it’s doing just fine, right? The people that we hear are not regular people. First of all, Twitter’s a niche product, right? 200 million monthly actives versus 3 billion dailies on Facebook, right? Real people are on Facebook; they have never used Twitter. Influencers are. So we have this idea that Twitter is this town hall. It’s not a town hall. It’s the cool kids hanging out together and complaining about things, right? And so the question is, how do we, on the one hand, go mass market, right, and how do we fix some of the problems that social media had, regardless of Elon Musk? At the same time, spending $44 billion on a platform and destroying it, you know, is something that if you’re the richest man in the world, you can do. So, you know, good luck.

Andy Slavitt  17:40

Alright, let’s take one last break. And when we get back, let’s talk about the alt-Twitter that you started, called Post, and how you plan to make a nicer social media experience, and then look into the future with some talk about AI. We’ll be right back. Let’s talk about this notion of moderation. Musk would say that any form of moderation violates his view of the First Amendment. Now, I’m trying to give him, I’m trying to impute some ethical or moral standing to what he’s doing. I actually don’t personally think that that’s what drives him. I think it’s closer to what you said, which is, you know, attention, action, reaction, and who knows what else. But at least what he would claim is that, hey, this is a pure First Amendment play. And Tucker Carlson, if you look at what he said when he announced he was gonna join the Twitter platform for his new show, he had what felt to me like a non sequitur about how the First Amendment is most important. As far as I can tell, no one is preventing Tucker Carlson, Elon Musk, or anyone else for that matter from saying whatever they want to say. But the question is, if you own an advertising platform, which is essentially what Twitter is, it’s not a town hall, it’s an advertising platform where you’re selling people’s attention to advertisers, does the First Amendment include your right to say what you want to say, unmoderated, on every platform?

Noam Bardin  19:29

So the First Amendment protects you from the government censoring you. It’s not a right to be an asshole; it is not written in the Constitution that you have the right to say whatever you want, especially on a platform that’s owned by a private business, and a private business more than anything has the right to do whatever it wants. So I think the basic idea that the First Amendment is involved between the user and the platform is just not relevant. It’s about government censorship, and government censorship we should definitely all band together and fight, especially in Florida, looking at what’s going on. On the ideas behind it, this idea that you can say anything you want does not exist anywhere in the real world, right? Not in newspapers, not in the street; you can’t just do whatever you want. On top of that, no one’s really been censored. This victimhood, where the First Amendment absolutists try and pretend they’ve been censored, that’s just factually wrong. Every study that’s been done that actually looks at the data shows that the right-wing channels get the most distribution; there’s very little censorship that happens there. The fact is that you can say whatever you want these days, and you can make things up. That’s Tucker Carlson’s show, right? He makes things up, and he talks about them. That’s been his defense in court: nobody believes what I say, it’s all imaginary, I’m entertainment, it’s entertainment, we’re just entertainers. And so some platforms might allow that, and I would hope the platforms would not allow that, because it really is tearing apart our society.

Andy Slavitt  20:56

Well, let’s talk about your philosophy and what caused you to start Post. This notion, as you said, that we engage when our emotions run hot, that we engage over hate, we engage over fear, more than we do over even nice cat videos, is sort of the prevailing wisdom. You seem to be cutting in the other direction, and I can’t tell if it is a business view or a moral view of what you hope to be true. But tell us what that view is, and then characterize for us why you think so.

Noam Bardin  21:32

So I’ve been obsessing about these issues of freedom of speech and disinformation and misinformation for the last 10 years. And I believe there are a lot of different reasons why this happens; it’s part of this rise of authoritarianism around the world. And to me, access to information, and especially credible journalism, is part of the answer. To preserve democracy, we need it. It’s not perfect, but it’s the best we have. Same with free speech. I think we need a lot of opinions, because I don’t think there’s one truth, but you need to be able to listen to different, diverse opinions to understand them. So I don’t think that just, you know, traditional publishers have the answer. Newsletter writers, journalists, you know, that kind of direct one-on-one communication you get from social media is a wonderful part of our democracy. And so we founded Post.news as a place, a social media network, where news is a first-class citizen on the platform. And what does that mean? On the one hand, we allow long-form content. I think that when you boil everything down to 280 characters, you are dumbing down America. And so we allow you to expand as long as you want, and to actually have a real, thoughtful conversation. At the same time, we moderate comments, and what you say, according to some pretty basic rules that in the past would have seemed obvious; today, they’re obviously controversial. So for example, you can attack an idea; you can’t attack a person. And if you want to be racist and misogynist, whatever, go to the other platforms, we don’t want to be a place for that. So immediately, what we’re saying is, we’re not going to be the platform for everyone. We will never be live in Russia, because I will not have a government censor our platform. And that means I have to give up on Russia as a market, and China and Iran and other markets. Okay, fine, I’m willing to do that. And that’s part of it: if you stand for something, it means you’re willing to pay for it. If not, then you don’t really stand for anything, right? So we’re willing to do that. So we moderate heavily around the conversation there. There are things that are relatively black and white in moderation, such as racism; there are things that are super complicated, such as misinformation, right? And where does it go? So I’m not saying that we have the answer on it. Our long-term plan is to build a system very similar to Wikipedia, or Waze, which is community driven, because communities have different standards of what’s acceptable, different countries, different languages, et cetera. I don’t believe we can be the answer for everything. And that’s our long-term vision: really tap into that crowd.

Andy Slavitt  23:59

Is it moderated by humans or by technology?

Noam Bardin  24:02

So a combination of both. Obviously there’s technology to accelerate it, and there are humans at the other end, but we want this to be actual users. So when we imagine moderation in the coming years, there’ll be people who, for months and months on the platform, have proven that when they flag content, our moderators agree with them and take that content down. And so they’ve built up this reputation, and there are reputation scores for every user, tracking the user’s reputation in terms of what they do on the platform. And the goal is that that will limit or increase your reach. You know, if you’re going to be on the platform attacking people all the time, and people are flagging your content and we’re taking your content down, etc., your reach should begin shrinking. Fewer people should see you. And if you’re a good citizen of the platform, you should get more reach. So reach is the output of that. And what we hope will happen via moderation is that communities will actually develop their own standards. People of color can say the N-word that I can’t as a white person, right? In their environment, that’s fine; in other environments, it’s racism, right? And so we need to create that level of sensitivity to different communities. And that has to be bottom-up, involving the communities.
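A minimal sketch of the reputation-and-reach loop Bardin describes: flaggers whose flags moderators uphold gain weight, and authors whose content is taken down lose reach, so repeat attackers are seen by fewer people. This is only an illustration of the mechanism as described, not Post’s actual system; the class names, weights, and thresholds are assumptions.

# Toy model of reputation-weighted flagging affecting reach.
from dataclasses import dataclass


@dataclass
class User:
    name: str
    reputation: float = 1.0  # grows when moderators agree with this user's flags
    reach: float = 1.0       # multiplier on how widely this user's posts are recommended


def record_flag(flagger: User, author: User, moderators_agreed: bool) -> None:
    """Update both users after moderators review a flag."""
    if moderators_agreed:
        flagger.reputation += 0.1  # trusted flagger: their future flags carry more weight
        author.reach = max(0.1, author.reach - 0.2 * flagger.reputation)
    else:
        flagger.reputation = max(0.1, flagger.reputation - 0.1)  # a bad flag costs trust


def recommended_audience(author: User, base_audience: int = 1000) -> int:
    """Reach acts as a multiplier on how many people are shown the post."""
    return int(base_audience * author.reach)


if __name__ == "__main__":
    veteran = User("veteran_flagger", reputation=2.0)
    troll = User("serial_attacker")
    record_flag(veteran, troll, moderators_agreed=True)
    print(troll.reach, recommended_audience(troll))  # reach shrinks after an upheld flag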

Andy Slavitt  25:11

How much is anonymity part of the problem? It’s very easy to create a persona and use it to be witty, snarky, mean, to get your aggression out. It’s very different when it says Noam Bardin on it, because you live with it. And it’s not to say that it solves everything. But I’ve got, I’d guess, 750,000 followers on Twitter, and I refused to block anybody. But I also refused, and I don’t use it any longer, but I also refused to engage with people that weren’t using their real name. Because I felt like, I don’t know who I’m talking to here; I could be talking to a bot.

Noam Bardin  25:47

So that’s part of the problem with the whole checkmark strategy, and on multiple levels, I agree with you 100%. And our goal is to have everyone verified, anyone who wants to; you don’t have to be verified. But when you get verified, your real name, and that’s not released now, but it’s coming soon, your real name that was verified will be on your profile. You know, even on Twitter before Elon Musk, you could get verified, get a blue checkmark, then change your name, do whatever you want, right? There was no way for anyone to actually know who “happy cats” is, except that they’re verified, right? And so for us, that’s going to be the trade-off: you want to be verified, great, anybody can, we don’t charge for it, etc., but your name is associated with it. And I do think that that will be a limiting factor in how crazy people go on there. At the same time, if you’re an anonymous account, you should get less reach than a verified account, right? So if some crazy Nazi comes on board and has three Nazi friends and writes some Nazi shit, and it goes to his three Nazi followers and ends there, you know, it’s a problem for them, but it’s not a problem for the platform. Once we take that content and begin recommending it to people that did not want to see it, that I believe should be our responsibility to deal with. And this is, think about Section 230 and things like that: I think if platforms were responsible for content they promote, most of what we see would go away.

Andy Slavitt  27:08

Yeah, the other thing is, I get my antenna up when people talk about the First Amendment now, in large part because I see Musk, on one hand, talking about the First Amendment, and on the other hand, I see him kicking people off the platform. I see DeSantis talking about the First Amendment, and on the other hand, I see him banning books and taking away textbooks. So, like, I have a hard time convincing myself that people who talk about the First Amendment really believe in it the way they claim to, even allowing for the fact that, as you said, they’re misinterpreting what the First Amendment actually says. Okay, let’s shift a little bit, I want to talk about AI. So just a sampling of announcements: IBM announces that there are 1,000 jobs they’re not going to fill, because they’re going to fill them using AI. Wendy’s says they’re gonna start experimenting in the drive-thru with using AI instead of the high schooler they’re paying minimum wage to for their first job, working at that window. So many jokes I feel like making about that, but I’m not going to, because I’m just super disciplined. And so there are a lot of places to start, but let’s start with the focus on jobs. I think if you stopped people on the street and asked, will AI threaten your job, you’d get a variety of answers that sound a little bit like denial: well, it will be replacing maybe other people’s jobs, but not mine; you know, if I’m figuring out how to use AI, it’ll be better for me; every job lost will be a job created, et cetera, et cetera, et cetera. But this kind of disruption, you know, we’ve got coal miners out of work, and not for nothing, West Virginia used to be a blue state. Now it’s a red state, because people ignored that kind of impact. So how worried should people be? Is there an imminent threat here for real?

Noam Bardin  29:10

So if we look at technological changes, fundamental changes that have happened over history, in the long run they usually create more jobs. But the problem is that there’s a displacement of people who don’t have the skills for those future jobs. And this is what we saw with manufacturing, right? Globalization, automation, etc., came in, and all these manufacturing jobs were lost. There’s a huge demand for engineers, but someone who spent 30 years in a manufacturing job is probably not going to become a great engineer overnight. And so that generation of people, right, whether it’s the horse and buggy and the horse drivers, that generation is going to take the brunt of it. And again, if we had a functioning government that could actually make decisions, we would think about how we protect the employees, not the companies. Companies are gonna do just fine, right? But those employees are going to have to transition, and we really need to provide a safety net for them, because there will be tremendous unrest, obviously, when you go through that. I have to tell you, the thing that I find the scariest about everything is, when you think about it, there are a few trends going on. There’s automation, right? There’s AI, and there’s robotics and autonomous driving, etc. Throughout history, there’s been a bond, or a social contract, between the elites and the masses, right? The elites wanted to extract as much rent as they could from the masses, but only up to the point where they revolt, right? So we’ve got to, like, give them enough not to revolt, because we need them to work in our fields, work in our factories, go to our wars, you know, etc. What happens with these changes is suddenly the elites don’t need the population anymore. That’s never happened before in the history of mankind, where a very small group of people with some very powerful technology can fight wars, can produce products, can live, can protect themselves, can do everything. What happens in that world? What happens in a world where going to war doesn’t mean body bags coming back, but maybe some broken robots, right? What happens in a world where the ruling class does not need to feed or even to have the lower classes, because they can just automate everything? And to me, that’s a bigger issue than the talk about AI becoming sentient and, you know, Skynet.

Andy Slavitt  31:25

Let’s take that on for a second, because I think Warren Buffett has sort of evoked this kind of sense that us wealthy people have to pay more taxes, we can’t live in a walled garden, or there will absolutely be a revolution. So that’s, I think, quite, I won’t quite say the word cynical, but that’s quite a realpolitik sense of the world, where the wealthy and the elites basically only need to keep people in a state where they’re just shy of revolting, just shy of poverty, but as long as they’re, you know, not making a lot of noise, they’re okay. Like, what, indeed, is the response of working people who find that the job that they borrowed $60,000 to go to college for is no longer there?

Noam Bardin  32:15

So I think what we will see through this is a transition, right? Some people will be able to transition to new jobs, a large percentage will not, and you think about all those towns in America, the coal towns and the iron and manufacturing towns, etc., that basically became ghost towns, right? We’re gonna have a very similar thing happen. But what usually created a restraint on those elites was the fact that if people revolt, someone’s got to fight them. Well, what if robots are fighting them, right? And that’s the change that I think is ahead of us, where, you know, Buffett, you could say it’s cynical, whatever, but from his interest, he needs to make sure that people are making enough.

Andy Slavitt  32:57

But isn’t it the case that that’s the role, the goal, at least in China and in other places, including here, of social programs? That, you know, when we get so rich and other people are suffering, we create social programs that are just, you know, just enough for people to get by. And maybe there’s, you know, training and transformation, etc., to create some level of sense that we still have mobility in this country. But what I hear you saying, mainly, is that the lack of social mobility here becomes more calcified, at a minimum. Even if we stop short of the kind of drastic scenarios, at a minimum it feels like we are increasingly in this world where if you have a diploma, you vote one way, and if you don’t have a diploma, you vote another way; whether you have a diploma and live in a certain zip code will tell you everything you need to know about the kind of life people lead, the kind of things they value, the kinds of things that make them angry. And you could argue that Trump is already evidence that we’re on that path. Is there any world in which AI closes gaps instead of opening them wider?

Noam Bardin  34:05

So, you know, we forgot to mention, of course, that social media weaponizes all of that and distributes it very efficiently, right? So it’s hard to predict what will happen with AI; it’s a tool, right? And you can use it for anything. So I like to look at it more as something that will accelerate dramatically the things that we’re seeing today, the good and the bad. Now, you can’t stop it; this idea of let’s have a six-month moratorium, the technology can’t be stopped. It’s out and it’s everywhere. The question is whether we have a resilient society, where we have a responsibility to each other and where we see ourselves as a nation. Think about America in the 50s, right? With all the problems that happened there, I’m not trying to say it was a wonderful place, right? But at least for large parts of the population, there was a sense of nationalism and a sense of responsibility to each other. You know, in America today, that’s broken up completely, right? There are very few swing voters, there’s very little interaction, even just day-to-day interaction, and the red states and the blue states are becoming different on so many different levels. And now you throw AI into that. To keep going back to it, if we had a functioning government, we could prepare for it. But the dysfunction of our government, and it’s global by the way, we like to talk about the US, but it’s Brazil, it’s Israel, it’s everywhere, this move to authoritarianism and this dysfunction of government is all part of this rise.

Andy Slavitt  35:29

Yeah, I’m not sure if even a highly functional government would know how to get its hands around AI, to your point about the six months. And I’m not sure; our government has shown they can’t even regulate the internet. But even if it could, I don’t think at this point it knows what to do. And I’m someone who, generally speaking, more than most people, believes in the power of our government, believes in the good things that it does. But this is a pretty complex area.

Noam Bardin  35:53

So let’s look at some examples. First of all, you can regulate. You know, we all click on that cookie consent form on every website we go to, because the Europeans did regulate. I think it’s a stupid regulation, but it’s a regulation, right? So there are things we can do. But it’s not about the AI; it’s about what you do with the AI. What do you do with the people who are in this transitionary phase, right? How do you support them through this process, instead of putting the onus on them to, you know, get by? And that, to me, is the responsibility of the government. Because, as you said, these trends will create tremendous wealth; profit margins are going to explode. If IBM can fire 1,000 people, right, that goes to the bottom line, which goes to the shareholders, right? So the gaps that we see are going to get more and more extreme, and the ability of people who are not part of that elite group to join it is going to shrink dramatically.

Andy Slavitt  36:43

Yes, and look, I think what the US government did in passing the Inflation Reduction Act was good evidence, with a lot of the investment going into people who are going to be climate refugees, people who are going to have their lives disrupted, the need to both create resilience but also to create something to deal with the inequitable distribution of harm. So I wouldn’t say that the government isn’t capable. I would say, though, that for government to legislate, particularly in a climate where you’ve got mixed government, or even very closely held houses in Congress, requires a real consensus to be built. And by the way, the Constitution, as you know, was designed for us to move slowly, not respond quickly, and until we eventually adjust, it often takes a while. I want to come back to you, but you talked about the six-month moratorium, so I just have to ask you real quickly about a former colleague of yours at Google, Geoffrey Hinton. For those who don’t know, this is someone who was one of the pioneers of large language model generative AI within Google. And he resigned in a very public fashion, I think in an attempt to draw attention to his resignation, by doing what I think a lot of tech people end up doing, just saying, mea culpa, I worked on this thing for years, I really wish I hadn’t, the cat’s out of the bag. Now, I’m not offering to give back all the money I made, don’t get me wrong, I’m not that sorry, but I sure do think that I’ve got a guilty conscience and a book contract in me. So obviously it’s hard not to take some of these resignations with a little bit of cynicism. But at the same time, I have to say that I do encourage and applaud anybody who has the courage to say, hey, I see something, I’m not comfortable with it, and I don’t like it, and yes, I was a part of it, and I’m not running from that. And so I don’t think we should be throwing stones at guys like that, because I think we want to encourage more people to say, hey, there’s a problem here. But I also just don’t know how we’re all supposed to react to this massive letter with lots of people signing it, saying we need to be more responsible, and then pointing their fingers at government and saying, do something.

Noam Bardin  39:07

So it’s funny, we talk about AI, which is a future problem. We have a real problem today called social media, where the same kind of people have left Facebook and Twitter and said, this is the problem, this has demonstrably killed people around the world and destroyed governments, etc. And we can’t regulate that. So to me, regulating AI at this point doesn’t make sense, because, again, nobody knows what’s going to happen. But I think the first impact we’ll feel from AI will be on social media, with disinformation and misinformation. You know, we spend a lot of time looking at these posts, whatever. You can sort of catch the foreign nation-state actors by the language they use, the words they use, right? You can see that a non-American wrote it. Well, what happens if it reads 100% American, and they can write a billion different things at the same time, right? That’s where the real damage is today. And so the fact that individual users don’t use AI themselves doesn’t matter; the question is who’s responsible on social media. So to me, if we can’t legislate social media, I don’t know why we think we can ever legislate AI, which is, you know, 100 times more complicated and more impactful.

Andy Slavitt  40:12

Are you willing to take the moniker of the anti-Elon Musk?

Noam Bardin  40:18

Look, Elon Musk was my hero for many years. When people would ask me, what do you want to be when you grow up, I would say, I want to be Elon Musk, right? What he has done is just amazing, if you think about SpaceX, and you think about Tesla, everything he’s done. But I actually think there’s a generation of billionaires in Silicon Valley who made their money in the first bubble, when they were extremely young. So these are people that, in their early 20s, became billionaires and disconnected from society, right? They fly private, they don’t shop, they don’t meet anyone that’s different from them. And this group had a very difficult childhood, right? There were people on the spectrum, they were beaten up in school, they weren’t the cool kids. But instead of coming away with empathy for the victim, they came away wanting to be the bully. And now they’ve wrapped it in ideology, for some of them, or whatever it is, but at the end of the day, it’s about, I want to step on people as a way of showing that I am not that kid that got beaten up in third grade, right? And it’s a whole generation of them. And they’ve created ideologies, and they’ve connected, and libertarianism, and all this crap. Factually, they’ve all lived off government: the universities they went to, the schools they went to, the loans, the companies, everything, right? But then to come down on the government, and the country, and to say, you know, we know better and everyone else are idiots, right? And it’s that kind of a view of the world that they have, which is, again, disconnected from reality. And empathy is the thing I think we’re missing the most, right? Not being able to be empathetic to someone who did not have the opportunities you had, and to wrap that up in ideology, to me, that’s terrible. And it’s backed up by billions of dollars, right? And that’s probably the biggest problem here: these ideologies can live because buying off politicians is really cheap these days.

Andy Slavitt  42:06

Noam, anything we didn’t cover?

Noam Bardin  42:09

No, I think we’re at a very interesting point in social media, and I think we’re going to see a lot of innovation in the space. And to me, that’s obviously a good thing. And I hope that most of the companies that are starting now will try to address the mistakes of the previous generation. And at Post.news, we look at those mistakes: the toxicity; the algorithms of what gets promoted; the monetization model, so creators and publishers can actually make money and survive; and the user experience, where you can actually get what you want without having to bounce to all these different websites. And so to me, if we can solve those problems, we will create a phenomenal company, again, not for everyone, but for the vast majority of normal people who are not culture warriors.

Andy Slavitt  42:54

Noam, buddy, thank you so much for being in the bubble.

Noam Bardin  42:56

Thank you very much.

Andy Slavitt  42:57

Thank you to Noam, I hope you enjoyed the episode. Hope you are enjoying things in a way that is feeling better than it did a year ago, better than two years ago, because summer’s ahead. You’ve had a great Memorial Day, I hope, and I hope you get great weather. Of course, we’ve got all kinds of shows coming up for you, including more episodes on kids’ mental health and social media with the Surgeon General. We have […] coming on the show. We have a lot of really great things planned. So I will see you back here in June.

CREDITS  43:57

Thanks for listening to IN THE BUBBLE. We’re a production of Lemonada Media. Kathryn Barnes, Jackie Harris and Kyle Shiely produced our show, and they’re great. Our mix is by Noah Smith and James Barber, and they’re great, too. Steve Nelson is the vice president of the weekly content, and he’s okay, too. And of course, the ultimate bosses, Jessica Cordova Kramer and Stephanie Wittels Wachs, they executive produced the show, we love them dearly. Our theme was composed by Dan Molad and Oliver Hill, with additional music by Ivan Kuraev. You can find out more about our show on social media at @LemonadaMedia where you’ll also get the transcript of the show. And you can find me at @ASlavitt on Twitter. If you like what you heard today, why don’t you tell your friends to listen as well, and get them to write a review. Thanks so much, talk to you next time.
