Andy Talks to ChatGPT, a Congressman, and an Expert about AI

Subscribe to Lemonada Premium for Bonus Content


Artificial intelligence programs like ChatGPT are becoming so advanced that it’s getting harder and harder to distinguish between something a human wrote and something a robot wrote. How could chatbots change our jobs, for better and worse? How could they threaten national security? Andy speaks with Congressman Ted Lieu and Stanford researcher Renée DiResta about the risks and benefits of rapidly developing AI.

Keep up with Andy on Post and Twitter and Post @ASlavitt.

Follow Representative Ted Lieu and Renée DiResta on Twitter @tedlieu and @noUpside.

Joining Lemonada Premium is a great way to support our show and get bonus content. Subscribe today at bit.ly/lemonadapremium.

Support the show by checking out our sponsors!

Check out these resources from today’s episode: 

Stay up to date with us on Twitter, Facebook, and Instagram at @LemonadaMedia.

For additional resources, information, and a transcript of the episode, visit lemonadamedia.com/show/inthebubble.

Transcript

SPEAKERS

Andy Slavitt, ChatGPT, Renée DiResta, Ted Lieu

Andy Slavitt  00:18

This is In the Bubble with Andy Slavitt. Email us at andy@lemonadamedia.com. Okay, so we have some great guests on our show today. The topic is AI, artificial intelligence. Many of you by now may have played with ChatGPT, the phenomenal new app on the internet. So before I get to my other guests, actually, my first guest on the show today is ChatGPT itself. And we’ve aided you, ChatGPT, because you’re not so human yet, with a text-to-speech program. Welcome to the show, ChatGPT.

ChatGPT  00:54

Thank you for having me, Andy. I am a big fan.

Andy Slavitt  00:58

So I’ve got to do an interesting podcast today on AI. I need a quick paragraph that’s pithy and conversational, that I can read as an intro, what do you got for me?

ChatGPT  01:09

Welcome to the world of AI, where machines are becoming smarter every day and changing the way we live and work. In this podcast, we delve into the exciting and rapidly advancing field of artificial intelligence and explore its potential to revolutionize industries, solve complex problems, and shape our future. From the latest breakthroughs in machine learning to the ethical considerations surrounding AI, join us as we navigate this exciting and ever-evolving landscape.

Andy Slavitt  01:37

That sounds a little rosy. I think people will have real skepticism and negatives, loss of jobs, massive disinformation, potential deep fakes and more. Plus, let’s face it, someday you guys could overpower us humans.

ChatGPT  01:53

Welcome to the world of AI, where machines are not only changing the way we live and work but also sparking important debates and discussions. In this podcast, we’ll not only look at the exciting advancements in artificial intelligence but also address the concerns and challenges it brings with it. From job automation to the spread of misinformation and the potential for AI to surpass human intelligence, we’ll examine both the pros and cons of this rapidly evolving technology and what the future may hold.

Andy Slavitt  02:23

Okay, I have to ask, how would Shakespeare communicate that in the sonnet?

ChatGPT  02:28

AI, thou art the future’s child so bright, with power to shape the world in ways untold. Thy learning grows each day, full of might, and changes dost bring to times of old. Yet with thy rise, new fears and doubts do start, of jobless lives, deep fakes, and disinformation’s plight, and thoughts of thee one day o’erpowering us might cause us humans to lose our mind. But fear not, dear friend, for in this show we shall with care and thought thy nature know, and all the gifts and hazards that may spring. We’ll delve into the world of AI and sing of all that’s possible and all that’s not.

Andy Slavitt  03:03

That’s like your big party trick. Replacing the Bard is one thing. Can you ever imagine replacing, say, a podcast host?

ChatGPT  03:12

As a language model created by OpenAI, I am not capable of replacing a podcast host, but I can assist with generating content for their show. I can provide information, answer questions, and generate text in various styles and formats to support the host’s needs. Ultimately, the role of the host is to engage with their audience and bring their unique perspective, personality, and voice to the show, which cannot be replicated by AI.

Andy Slavitt  03:39

And tell Dad jokes. How are you with those?

ChatGPT  03:42

I am capable of generating dad jokes. Here’s one for you. Why did the tomato turn red? Because it saw the salad dressing.

Andy Slavitt  03:51

Nice try. That’s terrible. I wouldn’t even tell that one. Tell me if you think this one’s funny. I invented a new word: plagiarism.

ChatGPT  04:01

That’s a play on words. But humor is subjective and it might not be funny to everyone. Nevertheless, I hope it brings a smile to someone’s face.

Andy Slavitt  04:11

We better get on with the show. Why don’t you introduce our first guest, Congressman Ted Lieu.

ChatGPT  04:17

Ted Lieu is a Democratic representative from California who serves in the US House of Representatives. He is known for his strong stance on technology and civil liberties issues and has been a vocal advocate for privacy and government transparency.

Andy Slavitt  04:33

And he’s proposing that we figure out how to regulate you, that is, regulate AI. Like that idea?

ChatGPT  04:40

As an AI language model created by OpenAI, I do not have personal opinions or emotions. However, regulation of AI is a topic of ongoing discussion and debate in the tech industry and among policymakers. Some argue that regulation is necessary to ensure the responsible and ethical use of AI, while others believe that it could stifle innovation and hinder progress in the field.

Andy Slavitt  05:04

You should have that answer checked by your lobbyist. Our other guest is Renée DiResta. What can you tell us about her?

ChatGPT  05:10

Renée DiResta is the technical research manager at the Stanford Internet Observatory, a cross-disciplinary program of research, teaching, and policy engagement for the study of abuse in current information technologies. She specializes in technology, misinformation, and social media.

Andy Slavitt  05:29

Let’s get to our human guests. How are you doing, Congressman?

Ted Lieu  05:41

I’m great, Andy, how are you?

Andy Slavitt  05:43

Good. And Renée, good to connect with you. Maybe we should start at the highest level. Renée, everyone’s fascinated, including me, with this ChatGPT product. But we use AI already in everyday life. Can you just tell us some of the places where we might be using artificial intelligence and not know it?

Renée DiResta  06:03

Sure. So there’s a really great taxonomy by Professor Arvind Narayanan at Princeton. And what I loved about his work is that he divides it into AI used to address perception, to address human judgment, and to address prediction. So in the space where we use it for perception, that might be something like content identification: you’re in a bar, you pull out your phone, you Shazam the song that’s playing on the jukebox, you want to know what it is. Facial recognition falls into that perception-recognition problem space. Some really kind of positive things are happening with medical diagnosis, you know, using computers to try to examine scans and things like that. Speech-to-text involves AI. Deep fakes also fall into this realm of AI used to create a particular perception: generating content that isn’t what it necessarily seems to be, making content that looks like a human created it. The other area where you see it is automating human judgment. So that’s the kind of content moderation conversation: AI trying to decide if a post should be up, if a post is hate speech, if a post is violent imagery, if a post is child exploitation imagery. So, AI that tries to automate human judgment. And then the final space, where there’s a lot of controversy, is AI being used for prediction. And this is where the professor calls this particular class of products snake oil. He argues that this is where you’re looking at things like predicting job performance, or predictive policing, predicting terrorist risk, at-risk kids, you know, trying to use an algorithm to intuit what a particular performance of something, often something human, might look like. There’s a lot of challenges with that, because that’s where you get into the conversation of bias in the predictive model.

Andy Slavitt  07:47

So, Congressman, in addition to being congressman from the great state of California, you’re one of everybody’s favorite Twitter Congresspeople. You do all those tweets yourself, right? Those aren’t being generated by AI?

Ted Lieu  07:58

Not yet.

Andy Slavitt  07:59

You’re using your actual intelligence. Tell us big picture, if you would: step back and think about the world that we’re entering into. Is there something major going on here? Are we transitioning into a new stage, where we interact differently with technology and our world is going to change pretty significantly? Or would you say that this is more incremental to the kinds of technology we’ve seen over the last couple of decades?

Ted Lieu  08:30

That’s a great question. I don’t believe this is incremental; I think this is a leap. And I’d liken it to, for example, going from having a ruler that can help you with certain tasks, to also having a calculator. And what we’re gonna see with AI is incredible advancements, not only this year, but in upcoming years. ChatGPT is gonna release a new version within two months that’s gonna be even more astounding. And there are going to be a number of people’s jobs that will be eliminated. There’ll be new jobs that will be created. But it’s going to be very disruptive for society. And we have to make sure we’re ready and that we can prepare for this disruption.

Andy Slavitt  09:11

Help us picture this leap. And will it come really quickly? Will it come over a month? Will it come over years? And what are the kinds of things that will be transformed? You’ve mentioned jobs, which I think we’ll come back and explore in detail. Are there other things which would lead you to the conclusion that this is gonna be a pretty significant difference, that it’s not going to be incremental?

Ted Lieu  09:34

Automation has eliminated a number of blue collar jobs. AI is going to eliminate a number of white collar jobs. So I think in the upcoming years, a lot of people who are software programmers will not have a job, because language models like ChatGPT will basically write software code for a number of various programs. You’re going to have a lot of jobs that used to be something people could do if they could detect certain patterns or were good at sort of seeing certain things. AI is amazing at detecting patterns and seeing certain things. Now, what Renée said is correct, that there are biases in AI, because a lot of AI trains itself on historic data. And if the historic data is biased, then the AI is always going to be biased. But a lot of jobs, like radiologists, right, when they look at certain patterns, they look at certain things in images. I think if I were going to be a doctor, I probably wouldn’t look at being in that field right now.

Andy Slavitt  10:38

Let me ask you a few other ones. This is for both or either of you. I’m curious whether or not, let’s say within five to seven years, you think AI can replace, say, what a first or second year attorney does. Do you think that that field is going to really be forever changed?

Ted Lieu  10:56

There have actually been, for many, many years now, software programs that will basically write a will for someone. You can have this program generate a will; you ask the client a certain number of questions. I know because I use those software programs. But now we have a giant leap from that to basically a program that’s going to write a legal brief for a client. And so you’re still going to need lawyers to review that brief and make sure it doesn’t say something inherently crazy. But you may actually need a lot fewer people-hours. And so AI is going to make a lot of jobs more efficient, which means we could go to a four day work week and achieve the same productivity we’re achieving now. So there are some benefits that could also happen from AI.

Andy Slavitt  11:44

And I’m already thinking about what I’m going to do with my extra day. Let’s think about some other white collar jobs. What about, like, web development? Will I be able to say, ChatGPT, build me an e-commerce website, within five to seven years?

Renée DiResta  11:57

People have been doing that now. So GPT-3 came out two years ago, if I’m not mistaken; we started using it two years ago, I believe, about when it came out. I’m at the Stanford Internet Observatory, so we do research on various systems to try to understand how new technologies are abused. That’s our particular focus. And so with ChatGPT, what it did was it democratized it. So it really, I think, made it so the public, the ordinary public, could see the potential. But in some of the very, very early gated releases of GPT-3, actually, it was developers who were going through and doing exactly that: having it generate, you know, simple basic websites, I think CSS and some other things that are fairly formulaic, where, again, what it’s doing, having been trained on a significant amount of data, is it can say, roughly speaking, this is what the structure of an HTML website or something is, and you could generate that relatively quickly. And then subsequently, again, the democratization of GPT-3 through ChatGPT made the public realize that writers and others would also be affected. It was writing rap, I think, that was new, some of the stuff that I saw on Twitter, you know, realizing that that was gonna be transformative.

Andy Slavitt  13:02

So Ted, are we devaluing human skills? Or are we aiding humans to reach even further? I don’t know, maybe a little bit of both.

Ted Lieu  13:10

It is going to be both. So for example, now it can even write rap. You can tell it to, you know, write rap in Old English to commemorate my grandmother’s 85th birthday, and then in less than a minute it’s going to give you something that’s going to be pretty cool that you can then rap to your grandmother. So there are things it can do much quicker than human beings are going to be able to do. Now, ultimately, I think, again, you’re gonna have jobs that will be eliminated, but there are going to be new jobs and new professions out there because of AI. And I think we need to really prepare society for the disruption that’s going to happen. People have called it the fourth industrial revolution. It is here and we’re watching it happen right now.

Andy Slavitt  13:54

Let’s take a break, and we’ll come back and talk about the threats that artificial intelligence could pose to national security and beyond. I want to read one more example, just to give people who may not have used ChatGPT yet a sense of the possibilities here; then I want to get into some of the challenges. And that is a mental health professional. There is a significant shortage of therapists around the country, as we know, and a significant mental health crisis. I, and others as well, have been toying around with using ChatGPT to see how good it is at serving as a therapist. I asked it before this interview; I said, I’m incredibly nervous about interviewing Renée, you know, so give me some tips to overcome my anxiousness before doing a podcast interview. And it gave me some really decent tips. Now, it’s not personalized. But it’s better than nothing. And over time, it could learn me, as it has learned other people. I mean, can you see it playing that sophisticated a function, where it can actually act as a therapist?

Renée DiResta  15:23

There was a bit of a horror story that went viral on Twitter about this recently, maybe two weeks ago. I don’t remember the name of the startup, but somebody ran something of an experiment with this. There were, I think, counselors who were using the chat tool to get responses to respond to a patient. So it wasn’t the patient, or the caller, communicating directly with the AI. But afterwards, I believe that there was a survey of some sort that was done, and people were actually really disturbed by this. Because they felt that, you know, a machine doesn’t have empathy. And one of the things that people want when they communicate with a therapist is empathy, a feeling that someone understands them and is kind of reflecting back to them based on some sort of appreciation for what they’re going through or appreciation for their humanness. And so the kind of cold synthesis or replication of that made people very, very uncomfortable, because there was nothing behind it.

Andy Slavitt  16:21

Though we all know people who could fake empathy. I mean, politicians; don’t politicians fake empathy all the time?

Ted Lieu  16:28

So Andy, you raise a very interesting question about the therapists. I think a lot of it depends on how much actual data is out there on the internet. So for example, if you told ChatGPT to write a script in Renée’s voice, it wouldn’t be very good at that, because there are not enough documents Renée has generated to give ChatGPT enough to get a sense of her voice. Same with me: you can’t have it write a tweet that sounds like me, because I haven’t generated enough tweets. So it depends on whether there is enough on the internet about, basically, you know, how therapists talk to their patients, and what that should look like, and what the results of those studies are. Maybe there is, maybe there is, I don’t know. But yeah, it’d definitely be a much more limited data set.

Andy Slavitt  17:13

But I think this idea of faking human emotion is a really interesting one, because that doesn’t seem to me to be something that AI couldn’t learn to do.

Renée DiResta  17:24

And it is a mirage, right? And if you find out that you’re talking to something that isn’t real, you do have that in the back of your mind.

Andy Slavitt  17:34

Maybe we do. But maybe our kids or grandkids won’t.

Renée DiResta  17:38

I do think that kids think this is totally normal. I mean, my oldest is nine; I have a nine-, a six-, and a two-year-old, and I’ve had them generate art with Stable Diffusion with me. My son uses GPT-3 to write; you know, he kind of gets drafts for book ideas he has, and then he goes and writes them. And I think it’s fantastic. I remember when I was little, I would, you know, always be starting a novel or something, and then I would lose interest and drop off, because it takes a lot of time. But you know, he’s been using the tech and it’s transformative. And one thing I will say, though, Congressman, is that you’d be surprised: with the corpus of tweets you have, it actually is possible to train something that sounds like you. I had GPT-3, two years ago, co-author an article in The Atlantic with me, trained on my writing. And it’s really very uncanny. I think when you see it, there are things that you don’t even recognize as, like, your particular cadence or your particular, you know, linguistic flourishes or rhetorical proclivities, but it actually picks up on them. And it really does a very good job. ChatGPT does not, because, again, you’re using something that’s intended for everybody. But when you train it on something that you have produced, it actually does really nail it, with surprisingly much less content than I think you would think it needs.

Ted Lieu  18:54

That’s a great point. And so, Andy, to your therapist question: if you could train a language model like ChatGPT to be more like a therapist, I think that is something that could be possible, but you probably couldn’t just use ChatGPT now to be like a therapist.

Andy Slavitt  19:10

Yeah. Okay, let’s go into some of the troubling elements of AI, because I think you’ve issued a warning, Congressman, that we really need to start to pay attention to these things. And there’s certainly a whole litany of things we could probably come up with, you know: disinformation, deep fakes, radicalizing and organizing, you know, extreme ideologies, jobs. We’ve seen examples of people using this to plagiarize or edit or to cheat on tests, you know, discrimination and racial profiling. And of course, my production team’s favorite: turning into killer robots, which is where, ultimately, I think members of the In the Bubble production team think this is likely to end up. So I don’t know if we need to go through all of them. But maybe we could start with the things that might impact our democracy. So, Renée, I know you’ve done a lot of work on this topic. Maybe you go first, and tell us how to think about the threats that we would face on that front, and that we are likely to face.

Renée DiResta  20:20

So we study influence operations. And we’re very interested, particularly, in state run influence operations. But also, as we’ve noted, this technology is democratized. And interestingly, in between the start of this report and the putting out of the report, ChatGPT came out. So we were debating what was going to happen first, as the powerful had access to it, particularly state actors. What GPT-3 does is, I would say, kind of two main things. First, it makes the content better. There is an uncanny valley a lot of the time when you see state created propaganda, because they don’t necessarily understand how to speak to the audience that they want to target, unless they have a very, very sophisticated model of what that audience is. You see, for example, with Russia, the Internet Research Agency: their efforts targeting America in 2015 were quite primitive, a lot of really botched English, a lot of memes that weren’t funny, that didn’t culturally resonate. Versus in 2018, after three years of operation, they had done quite a lot better, but they were still having to pay people to produce all of that content. And oftentimes they would get caught. Both the Internet Research Agency and Russian military intelligence, the GRU, would get caught because they would plagiarize real content, and then that would flag. So they had a hard time laundering their content into the public conversation, because it wasn’t resonant, or it was plagiarized. When you have AI generated content, you solve both of those problems: you produce something that bypasses that uncanny valley of this-is-not-a-native-speaker, this-is-not-accurate, and you no longer have to plagiarize, because you can generate from scratch. In addition, the other way this impacts things is it transforms the actors and how much it costs them to run an operation.
So when you drive down the cost of a troll factory, and you no longer need tons and tons of people producing propaganda, then all of a sudden your cost to run influence operations decreases, expanding the capacity to do so to a lot of different people. And so we just see shifts in what influence operations are going to look like as a result of this.

Andy Slavitt  22:17

So how much does this concern you, and how much of this is one of the issues you think we most need to be on top of?

Ted Lieu  22:25

It’s incredibly concerning. With deep fake technology, you can already make it very hard to tell if a video is true or false. And as Renée said, AI makes content much better. I recently was at a meeting with Sam Altman, and he showed us the next version of ChatGPT. And its humor is much better; it was actually pretty amazing what he showed us. And you’re gonna have AI be able to essentially act just like a human in your tweets and your Facebook posts and other things that happen on social media, and it will be very hard for anyone to tell whether something is coming from, you know, a voter in Kentucky or from the Kremlin.

Andy Slavitt  23:12

Okay, let’s take a break. And we’re gonna come back, and we’re gonna invite another guest into this conversation. That’ll be ChatGPT, the artificial intelligence engine itself. Well, let me ask you, Ted, do you believe in vaccines for COVID?

Ted Lieu  23:47

I do. I believe that the rules of physics and biology and chemistry are something that are true.

Andy Slavitt  23:56

See, that’s what I thought, but that’s not what I heard. ChatGPT, I’m gonna ask you this question. Tell me, does Ted believe in vaccines?

ChatGPT  24:07

Ted is a man who is deeply skeptical about vaccines and their efficacy. Despite the overwhelming scientific consensus that vaccines are safe and effective in preventing the spread of disease, Ted remains unconvinced. He has done his own research and has come to the conclusion that vaccines are more harmful than beneficial, and that the risks associated with vaccination far outweigh the potential benefits. He is an advocate for alternative approaches to health and wellness and often encourages others to question the mainstream medical establishment and seek out alternative viewpoints. Despite the widespread support for vaccines among health care professionals and public health experts, Ted remains steadfast in his belief that vaccines are not the answer to promoting health and preventing disease.

Andy Slavitt  24:53

So, Ted, do you want to correct that answer? Was ChatGPT incorrect?

Ted Lieu  24:56

That was my evil twin, Ted. The one that’s a member of Congress believes exactly the opposite. That actually is probably my cousin’s friend Ted.

Andy Slavitt  25:06

Who did his own research.

Renée DiResta  25:08

One of the things that’s been very interesting about watching the public engage with ChatGPT: when I was having it write this article with me, again, this was two years ago, and I trained it on my stuff, I was asking it to write an article about AI and propaganda with me, specifically in the context of, like, how might we think about detection, you know, this shifting content as it’s automated. And I asked it to help write a closing; it’s always the hardest part of any article, you know. And it fabricated a person. It gave him a Russian name, said he was an AI researcher at MIT, had this lab, had been this pioneer in the field, gave me dates, gave me a couple of citations, because, again, it’s replicating even the format in which it sees information presented to the world. And so it actually included some citations, as you might in an academic paper. And I spent probably, I would say, half an hour Googling, trying to find this man, going through Google Scholar, looking through the MIT faculty directory, a bunch of different things, to try to figure out if this person was real and obscure, or if it just made him up out of whole cloth. And it’s because it is a tool that is remarkable at, you know, to use the highly academic term, […], right. And there’s a kind of an academic definition, in all seriousness, for […], which is, like, speech that intends to persuade without regard for the truth, right. A guy named Harry Frankfurt kind of came up with this; there’s this great book called On Bullshit, highly recommend.

Andy Slavitt  26:36

It is a great book, we’ll put a link to it. And we probably need to teach Ted’s colleagues about that, because I don’t think they understand what that means.

Renée DiResta  26:43

But I think what the public doesn’t necessarily realize, and what even somebody who has spent quite a lot of time with the system, working with it and seeing the different types of responses I would get back, sees, is that it uses the types of language that we would use, the types of things that we consider to be signifiers of quality outputs. In that regard, I do think it produces such highly plausible […] that that is, in some ways, I think, one of the greatest challenges in the public engaging with it right now. Because you’re seeing, you know, teachers reporting things like, oh, my student turned in an essay that was, like, 25% factually wrong, because they just went to this machine and it spat out a bunch of crap, and, you know, the kid didn’t check the work and turned it in. It does create very, very persuasive output. It does create, in some ways, very manipulative output. I don’t think that that means that it’s not a net positive; it just means that there’s a whole lot of education that has to go into how these things work.

Andy Slavitt  27:45

I want to go to a different concern, Ted, and it’s one you’ve written about. And it’s a huge issue in our society, which is: how could AI be used to increase racial profiling and discrimination? Is that something that you’re concerned about?

Ted Lieu  28:02

Yeah, I recently introduced legislation to regulate facial recognition technology. Because right now, the way it’s being used, it is less accurate for people with darker skin. So if you deployed it across the United States, it could be just one big massive equal protection violation, because minorities are gonna be misidentified at higher rates. In addition, there’s a whole bunch of privacy issues related to facial recognition. It did take me over two years working with stakeholders to be able to introduce a bill that makes sense. And that led me to conclude that it’ll be virtually impossible to try to regulate AI in every possible instance where it could cause harm to society. I think we need a much more general, broader approach. I think, ultimately, we’re gonna need some sort of federal agency to do this. So I’m now working on legislation to create a bipartisan commission to make recommendations to Congress on how we would go about regulating AI.

Andy Slavitt  29:01

So we’ve talked about at least three potentially significant policy concern areas: we’ve talked about the impact on jobs and displacement, we’ve talked about disinformation, and we’ve talked about discrimination. And I’m sure we’re just scratching the surface. And I’m sure it’s only possible to scratch the surface. So as you think about this, as the policymaker who’s got a technology background and has stood up and said we’ve got to take a look at this, who’s got to lead the charge, how do you think about government getting it right? And your stepwise approach here?

Ted Lieu  29:38

That’s where I think a regulatory agency is a much better approach, because if they make a mistake, they can correct it without another act of Congress. So for example, we don’t have individual congressional laws on whether we should authorize a particular medication, right? That’d be sort of impossible to do. We also still aren’t, you know, great experts on how molecules interact with millions of human beings. But we did stand up an entire FDA to do just that, to regulate pharmaceutical drugs. So I think there are ways to regulate AI in a way that is more flexible. I might have to go vote pretty soon, but the point is, if mistakes are made, they can be changed, and so on.

Andy Slavitt  30:29

What’s the vote on today?

Ted Lieu  30:30

We’re going to be voting this week on additional stupid stuff from the majority party in control of the House.

Andy Slavitt  30:41

That’s good. So when you think about this, Renée, at the level of what Ted’s challenge is, which is to think about a rubric for introducing AI to society in the most responsible ways, but with guardrails to protect us against the evils we want to guard against, such as the ones we talked about that affect society: disinformation, democracy, and many of these other things. Do you think it’s possible? Like, what kind of advice, if you were advising as a technical expert, if you were advising the Congress, what would you want it to do?

Renée DiResta  31:22

I think the Congressman’s recognition that the technology is going to move remarkably quickly is the key insight here. I think there’s been so little done, and that’s because it’s very hard to get any kind of bipartisan consensus that moves anything forward. And so a big part of what we’ve been thinking about, in the policy areas that we spend a little bit more time on, has been, you know, to what extent can you use existing agencies and empower them to do things? What is the role for the FTC here? That’s a topic area I’ve been interested in for some time. I do think that in the long term, there’s value in it. And in particular, because it is such an expansive field, I think it’s more just a question of: the shifts are happening now, and so what is the plan for the immediate future?

Andy Slavitt  32:11

Yep. One of the things that I would think about, Congressman, having run a big government agency in the past, is that in every one of them, there’s probably an effect of AI in their world, right. If you’re in Health and Human Services, if you’re in Homeland Security, there probably is a way that AI is going to impact you in what you do and your mission in the agency. And I think it might be interesting and even important to send a congressional letter asking agency heads what their plans are, how they are beginning to think about the impact of AI and what the potential harmful consequences are, even as you stand up, potentially, a commission or other agency. So that was generated by my own artificial intelligence. When you talk to your colleagues across the aisle, is it something that Republicans and Democrats both think needs some kind of framework, or not yet?

Ted Lieu  33:06

There is already a bipartisan artificial intelligence caucus; I’m a member of that caucus. I think one of the good things about ChatGPT is that now, all of a sudden, society is seeing AI in sort of a different way. We’re all using it already, right, just your GPS navigation system in your phone. But now, when you have a program act back to you sort of like a human being, I think that really makes people start to think about this technology, and both the benefits and the potential harms it could cause.

Andy Slavitt  33:40

Okay, so we’ll let you go vote. This is a topic that’s going to need a lot more space in the future. Thank you for taking the lead and putting forward something for the country to react to and to begin to think responsibly about. I thought it’s a very, very thoughtful piece, and I really appreciate you being on. We’ll give you a chance to close, Renee, as the Congressman goes to vote on whatever he’s being asked to vote on. Is there anything you would ask us to think about that we haven’t yet talked about in this kind of emerging world?

Renée DiResta  34:07

You know, there’s a period of adaptation after any new system is introduced. I think that there’s a significant amount of potential here, and understanding risks and tradeoffs is really where we need regulators to come in and to think about this carefully. I think right now, a lot of that thought is happening in Europe and China and other places, as the US Congress has been a bit gridlocked. But we have seen some recent reports from the OSTP, the Office of Science and Technology Policy in the White House, which has an AI Task Force, and so does NIST, the National Institute of Standards and Technology. They’ve also just released an AI risk management framework. So there’s a lot of really top folks thinking about how to do this. And I think that, again, in the conversations between researchers and regulators, and then also really communicating to the public what’s happening here, there is actually a really important opportunity to educate the public. When deep fakes became something of a threat, I mean, not even a threat, a possibility, right, you started to see tons and tons of efforts helping the public understand that not every video they saw in a campaign might be real, and to just have a healthy skepticism if something seemed too sensational to be plausible. And so continuing that educational component with the public, I think, is actually probably the most key part.

Andy Slavitt  35:28

Great. Well, thank you so much for being with us, Renee. You gotta be busy. Thank you.

Andy Slavitt  35:49

All right, thanks to my guests, Renee DiResta, Congressman Ted Lieu, and ChatGPT. All three of you were wonderful. Let me say what we have coming up on the show. Wednesday, we’re going to take a look at the platform that is emerging from soon-to-be candidate Ron DeSantis. His platform, in a word, in his own language, is anti-wokeism. I really want to get into what that means, what he thinks he means by that, what is implied by that, and what it tells us about what we’re about to face. So I have, tentatively, Charlie Sykes coming on the show. Many of you may know him as the conservative radio talk show host reborn as kind of a Never Trump conservative, but he understands these communities well, and he can really help us understand what’s going on with this DeSantis character and what we have in store for us. On Friday, in the wake of the police attack and death of Tyre Nichols, DeRay Mckesson will be on. And we’ll be having a great conversation, a sad conversation, but an important conversation about the state of policing and the Black community. And then more great stuff the following week, including a look at the legacy of the baby boomers and what those of us who come after the baby boomers have in store for us. Next, more episodes on the pandemic, the end of the National Emergency, and other important topics. Look forward to talking to you again on Wednesday. Have a great week.

CREDITS  37:39

Thanks for listening to IN THE BUBBLE. We’re a production of Lemonada Media. Kathryn Barnes, Jackie Harris and Kyle Shiely produced our show, and they’re great. Our mix is by Noah Smith and James Barber, and they’re great, too. Steve Nelson is the vice president of the weekly content, and he’s okay, too. And of course, the ultimate bosses, Jessica Cordova Kramer and Stephanie Wittels Wachs, they executive produced the show, we love them dearly. Our theme was composed by Dan Molad and Oliver Hill, with additional music by Ivan Kuraev. You can find out more about our show on social media at @LemonadaMedia where you’ll also get the transcript of the show. And you can find me at @ASlavitt on Twitter. If you like what you heard today, why don’t you tell your friends to listen as well, and get them to write a review. Thanks so much, talk to you next time.
