
The AI Coach
AI meets human behavior in these fun and insightful conversations with Danielle Gopen, a top founder coach and advisor, and Paul Fung, an experienced AI founder. We blend AI breakthroughs with real-world insights on business strategy, industry disruptions, leadership, and psychology while cutting through the hype to ask the hard questions.
This is for decision-makers like founders and business leaders who want to harness AI's potential thoughtfully, exploring both the technological edge and human elements that drive long-term success. Join us to challenge the status quo, navigate the shifting landscape of an AI-powered world, and get new information worth sharing.
AI Industry Disruption: Politics
AI candidates running for UK Parliament, confronting bias in training data, politician digital avatars, and more!
Links and Resources:
https://www.ai-steve.co.uk/
https://www.heygen.com/
We love feedback and questions! Please reach out:
LinkedIn
Episode Directory
Mentions of companies, products, or resources in this podcast or show notes are for informational purposes only and do not constitute endorsements; we assume no liability for any actions taken based on this content.
Speaker 1: So I have good news and I have bad news. Which one do you want first?
Speaker 2: I think I want the bad news first.
Speaker 1: The bad news is you might have to get rid of your Google Home.
Speaker 2: Oh no, why do I have to get rid of my Google Home?
Speaker 1: Well, the good news is I think someone influential at Amazon has listened to our podcast from last week, because an article just came out today saying they're making major upgrades to Alexa. They're going to introduce what they're calling Remarkable Alexa, which will be a monthly charge for a gen AI Alexa. Basically, the way you were talking about wanting to interact with your Google Home, treating it like it's ChatGPT, is what they'll now offer with Alexa.
Speaker 2: Oh, that's really interesting. I actually think the hidden nugget that's most interesting to me there is them testing the idea of a monthly subscription for a consumer device. With most consumer devices you typically pay once; you're not used to paying a monthly subscription. This was tried a little while back in the auto industry with CarPlay, where I think BMW was trying to charge something like $10 or $20 a month to BMW owners for having CarPlay in their car, and there was a backlash, and I think they had to pull it. So it'll be really interesting to see if they get away with charging, you know, $10 a month or whatever it is for the AI features of this Remarkable Alexa.
Speaker 1: Yes, well, it seems like you're pretty in tune with pricing. Nothing is confirmed yet, but they're saying the pricing will be somewhere in the $5 to $10 a month range, and there'll be two tiers: a free tier with limited AI capabilities, and then the Remarkable Alexa, which I assume will be supported by Amazon's AI foundation models. The internal name is Project Banyan, and I guess they're looking at a release by August, so relatively soon.
Speaker 2: Interesting. Banyan, like the banyan tree? I don't know much about banyan trees, but that's the only thing I can think of that's called Banyan.
Speaker 1: Exactly, like the banyan tree. And I guess the impetus for that name was how large the banyan tree is, the overhang of its branches, and its complex, deep root system.
Speaker 2: Interesting. I feel like someone got paid way too much money to come up with that project name.
Speaker 1:That's probably true, or they just asked AI.
Speaker 2: Well, I think this is a good segue into what we wanted to talk about this episode, which is which industries we believe will be most disrupted by AI, and I think one of them is almost certainly marketing. I talked about this a couple episodes ago; I was using Anthropic to help do some product marketing for an upcoming launch. And yeah, I think you're right. Right now there's someone getting paid a lot of money to come up with the name Project Banyan, but in the future they'll just ask Anthropic for some ideas, or OpenAI, or Amazon will ask Amazon Titan, their own foundation model.
Speaker 1: Yeah, the funny thing about Amazon Titan is I couldn't have told you that Amazon's foundation model was called Titan. So whoever's doing the marketing for Amazon Titan, I think, needs to increase their budget to get awareness out there.
Speaker 2: I guess that's true. I don't know how external-facing it is, but let's do some research.
Speaker 1: Yeah, we'll have to find out.
Speaker 1: So, industry disruption. Obviously there's so much there, and for any one industry we can actually go pretty deep into what the disruption from AI will look like. I think the best way to do it is to concentrate on just one industry per episode, and then we'll intersperse these industry disruption episodes throughout, I don't know, every three or four episodes or something. What do you think?
Speaker 2:Yeah, I think that could be a really interesting idea, but I think it'll be fun to dive in and see how much we have to say on a given topic.
Speaker 1: Well, should we start with the disruption in politics? Did you see AI Steve, the UK Parliament candidate? Obviously there's a real person behind it, a businessman, Steve Endacott of an AI voice company, and he's decided to parlay that into creating an AI agent of himself running for Parliament, so that his constituents in the Brighton area in the UK can interact with him, ask him questions, and get responses based on where he stands on certain issues and his ideas for what to do about them. What I thought was most interesting is that he asks constituents back what they think should happen, and then all that information is gathered, stored, and reviewed.
Speaker 2: Yeah, I think AI Steve is a great place to start. I think politics will definitely be disrupted by AI. I just had a conversation with AI Steve, and I think you just had a conversation with AI Steve as well, and we had very different reactions. So what was your reaction to AI Steve?
Speaker 1: I was really impressed with AI Steve. I went in not expecting anything. I went to the website and enabled my microphone, the interface came up, and it sounded almost like when you're calling a call center, a little bit of background noise like you would hear calling some type of customer service line. And he says, hi, I'm AI Steve, running for UK Parliament in this district. How can I help you, or what would you like to talk about? And then you just start talking. I asked him about a couple of difficult topics, like geopolitical issues, and he came back with some really interesting insights. Essentially, when asked questions that are answered by party positions, he told me where they stand and what they think is important.
Speaker 1: And then he asked what I thought was important about the issue, and I gave some ideas, and he just ran with it. He came back with some things that I hadn't even suggested, that I thought were great ideas. And he said, yes, we take this really seriously. We do want to increase research partnerships, and we could even have some type of program where there's a cross-cultural exchange, and we could focus on these types of areas for research. I think that would be really beneficial for the UK and our constituents, and Brighton is already an amazing hotspot of tech, entrepreneurship, and academia, so let's leverage that for these partnerships. And I thought, oh wow, that was really interesting.
Speaker 2: Yeah, I mean, AI Steve. I didn't think he was that smart. Or rather, I should say AI Steve was the dumbest smart chatbot that I've ever talked to. It was very smart in terms of policy, I would agree with you. It obviously hasn't been trained specifically on policy; I asked it that question, and it'll actually say it's not trained on any of the data about Brighton and Hove, but it was provided data about Brighton and Hove. But there were all sorts of weird quirks in how it interacted that really threw off my level of trust. Not that I trusted AI Steve to begin with. I mean, it's an AI bot running for an MP position in the UK.
Speaker 2: Well, it's still a real person. Yes, I mean, well, it's based on a real person, correct?
Speaker 1: Yes, he is a real person who's running for Parliament.
Speaker 2: No, no, no, sorry, correct, but the bot itself is not a real person. I mean, I don't know why I feel the need to clarify that, but that's me. Well, the reason I feel the need to clarify it is because the bot takes on its own personality. And I have no problem with Steve Endacott; I don't know Steve Endacott from a hole in the wall. But the bot itself was just super weird to interact with. First of all, it didn't even understand half the things I was saying to it. I kind of feel like the developers who built AI Steve were pretty lazy, actually. I don't know if this happened to you, or maybe it's just because I don't have a British accent, but neither do you, I suppose.
Speaker 1:no I mimicked a british accent in my interaction. Wait, did you?
Speaker 2:really no. I would love if you did that, because that would be a fascinating experiment. Because the voice to text model was terrible. It kept thinking I was saying the wrong words Half the time. I didn't understand what I was talking about, so I thought that that was enough to throw me off of the whole thing.
Speaker 1: Interesting. To be honest, I didn't say that much to it because I wasn't sure exactly how I was going to engage, but I felt like I had a really clean, positive experience, and I walked away feeling like, okay, yeah, maybe I would vote for AI Steve. I felt close and connected.
Speaker 2: One thing about AI Steve is that I was able to get him to not only tell me his policies but also make counterarguments against his own policies, which is funny. So they clearly haven't put enough of what we would call guardrails in place to make it not argue against itself. So I kind of got AI Steve into a little bit of a quagmire.
Speaker 2: First I asked him about a policy that he wanted to talk about, which was the four-day workweek, which I think sounds great, by the way. And then I was like, what are some of the downsides of the four-day workweek? And he told me about how it could cause some businesses to fail. And then he got a little bit confused as I started asking him, well, that sounds like a bad thing, AI Steve, and he was like, yeah, that is a bad thing, but we should still do it. So I thought that was kind of funny. But I do respect AI Steve as a publicity stunt and, I want to say, pushing the boundaries. But honestly, like I said, I think the developers were a little bit lazy on this one, and it's not really pushing a lot of AI boundaries.
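To make "guardrails" concrete, here is a minimal sketch of the kind of wrapper being described: a post-generation check that compares a reply against the candidate's stated positions and regenerates on contradiction. Every name here, from the position list to the keyword check, is invented for illustration; this is not how AI Steve is actually built.

```python
# Minimal guardrail sketch for a policy chatbot. All names are illustrative.

STATED_POSITIONS = {
    "four_day_workweek": "supports a four-day workweek",
}

SYSTEM_PROMPT = (
    "You represent a candidate with these positions: "
    + "; ".join(STATED_POSITIONS.values())
    + ". You may acknowledge trade-offs, but never conclude "
      "that the candidate's own policy should be abandoned."
)

def violates_guardrail(reply: str) -> bool:
    """Crude keyword check: flag replies that renounce a held position."""
    renouncements = ("we should not do it", "should be abandoned", "i no longer support")
    return any(phrase in reply.lower() for phrase in renouncements)

def respond(user_msg: str, generate) -> str:
    """Wrap any text-generation callable (prompt, message) -> reply with the guardrail."""
    reply = generate(SYSTEM_PROMPT, user_msg)
    if violates_guardrail(reply):
        # Regenerate with a stronger reminder instead of shipping the contradiction.
        reply = generate(
            SYSTEM_PROMPT + " Restate the trade-offs, then reaffirm the policy.",
            user_msg,
        )
    return reply
```

Real systems do this with a second model pass or a classifier rather than keyword matching, but the shape is the same: generate, check against declared positions, retry.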
Speaker 1: You're such a troublemaker. AI Steve left confused. But honestly, compare that to a conversation with a human politician, and I would say I appreciate the idea that a politician can see both sides of an issue. When you're campaigning for a certain policy and you don't address the risks or the potential downsides of that thing, you inherently lose credibility. So I do think, even from an AI perspective, its ability to say, well, here's where it might not go so well, is a plus. What do you think?
Speaker 2: I think that is 100% true. I think there's just nuance in it, right? There's nuance in a politician having a side of a policy that they're advocating for, being able to also acknowledge the potential downsides of that policy, and then being able to turn the conversation back towards why it's, overall, a pro. I'm not sure AI Steve understood that nuance. I think AI Steve believed in the policy, and then didn't believe in the policy, and then believed in it again, which I thought was quite fun.
Speaker 1: It is pretty funny. Another way that I see politics being disrupted by AI, and let's go now to within the US, since I can speak better to our governmental system than the UK's, is when it comes to Congress and just the sheer amount of information that comes at Congress on a day-to-day basis, and things they have to vote on relatively quickly. They rely on their staffers to read through those bills, some of which are hundreds, if not thousands, of pages long and contain all sorts of disparate information, not necessarily just about the policy being voted on, but a lot of things thrown in from the side. To be able to feed that into ChatGPT or a similar system and say, okay, tell me what's going on here, summarize key points, tell me, based off what I care about, what I should be paying attention to. So I do think there's real opportunity there.
Speaker 2: Yeah, I think the use case of summarizing and digesting information for politicians is big. And, you know, one thing AI Steve was kind of trying to do, which was interesting, was take the feedback and generate insights from it, right? I think if there were a way for AI to comb through large amounts of feedback from constituents and then aggregate that into some sort of policy position that best represents constituents, that would be really interesting.
Speaker 1: I agree with that. You're making me think that, even if it's not for campaigning purposes, maybe all public representatives should have some type of AI Steve interface, some type of AI-enabled portal where constituents can share feedback, in a way that helps verify that the person sharing that feedback is an actual constituent. That could be really interesting.
Speaker 2: Yeah, it's funny you mention that, because I made myself a note when I was listening to AI Steve, or talking to AI Steve, I should say. I said AI Steve is definitely susceptible to corruption, because his method of determining votes is that they're going to have certain people who are policy creators and certain people who are policy validators. The creators are people in the community who create policies, and the validators are people in the community who vote on those policies. And I was like, you know, you could just go around and give everyone 20 bucks to vote for a certain policy; the easiest form of corruption. But I do think the model is interesting. Having more voice come in from the community is always a good thing, I think, in functioning democracies.
Speaker 1: I agree. I'm not sure if we're prepared to talk about the actual politics, you know, whatever side of whatever issue, but I am curious about the information that AI is pulling from. I've been hearing complaints on both sides that there might be a political slant to that information, and the AI doesn't know; it's just pulling based on how it's been programmed. So what are your thoughts on that?
Speaker 2: Yeah, I think it's a really good topic. What we're really talking about here is bias in training data, right? So is our training data biased in some direction, whether it's politically, or honestly even racially, or by gender, or other things? It's not intentional, but it likely is skewed, and that's just by the nature of the training data that's largely available on the internet. A lot of these foundation models are trained on data from the internet, so you can imagine them being trained on data from Reddit, data from Quora, data from a lot of different digital sources.
Speaker 2: And just by the nature of those sources, they have their own skews. Without getting too much into the politics of it, if you were to say, hey, I'm going to train a model on MSNBC, and then you were to say, I'll train a different model on Fox News, you would definitely get a different type of personality and a different set of quote-unquote facts from those models, right? So, without getting into the politics of it, there's bias and there's skew in almost any data set you look at. And so I think any responsible foundation model provider is trying to correct for that, and I'm sure that OpenAI is doing this. Well, I'm sure they're trying to do this. To what extent they're actually doing it...
Speaker 1: That's a question, because Ilya Sutskever obviously left OpenAI over concerns about things like that. I'm like, sorry, did we see the same news? Did you also see the letter that was just published by internal employees?
Speaker 2: It's a hard problem to work on, to understand what unbiased truly means from a data perspective. You know, when you see something that's published, how do you know where the exact middle is? Bias is obviously a bias to the left or a bias to the right, and you're trying to find the middle, and there's often not a factual middle.
Speaker 1: When we talk about AGI and then ASI, I wonder if there's something there about the model itself being able to critically determine that the information it's getting is biased in some way. Does that make sense?
Speaker 2: Yeah, I think so. What if it's self-aware enough to say, okay, you're giving me something, and I can clearly tell this is biased, kind of like a human, right?
Speaker 1: Yes.
Speaker 2: And I think here's what's interesting.
Speaker 2: I actually think this is a fun fact about our friendship. I think you and I are fairly politically aligned, but there are certainly topics on which we actually have political differences. And I bring that up only to say, when someone gives me a piece of information, I might think it's biased, but you might not think it's biased. And when someone gives you the same piece of information, or a different piece of information, you might think it's biased and I might not. So what's interesting, when we talk about ASI and AGI, is that bias is in the eye of the beholder oftentimes, and so by its nature, a model will have a hard time detecting bias. Or the way it will detect bias is by first understanding its own bias and then trying to detect bias in external sources, which, you know, we're going to try to get to. That would be the holy grail, if you will, but I think that's going to be a really, really hard problem to solve.
Speaker 1: Yeah, I think it's really interesting. It's leading me to all sorts of philosophical questions. One is, okay, so within a human conversation you have a wide variance of people who are aware of their own bias and people who are not. Ask somebody something and say, well, what bias are you holding in responding to that or in sharing that information? And they might say, I don't know, or no bias. Or they might be aware and say, well, this is where I tend to fall, and that's how I'm seeing it. So then, for the model, the same thing. If you ask it, well, what bias do you hold in telling me this? A, does it know what bias it holds? B, if it does, how does it share it? And C, how does that filter the information it's delivering back? So I think there's a lot to explore here, but it is really interesting.
Speaker 2: Yeah, I mean, the question is, how far down the rabbit hole do you want to go with bias, right? Because I could make an argument that, you know, the internet might be politically left-leaning. I'll just throw that out; I don't know if that's true or not, but I think there are a lot of people who think, oh, the media is a little bit left-biased, right? And maybe that's because academics tend to go into media jobs, and so they tend to have a little bit of a left lean. I don't know if that's true or not, but I think some people believe that to be true. Let's say that were true, just for the sake of argument.
Speaker 1: We can just use that as our for-the-sake-of-argument hypothesis.
Speaker 2: Yeah, for the sake of argument, not saying it's true or not true. Let's say, for the sake of argument, you wanted to balance that out. You would need to go seek out counterbalancing, more conservative or right-leaning data to feed into the data set, to try to balance it out. And then the question is, where would you go find that data? And then, how much of it do you add in? Do you add in a little bit of it? Do you add in a lot of it? How much should it be?
Speaker 2: So it's just this really dangerous rabbit hole to go down, to say, how do we build this data set? And especially for these LLMs, it's a particularly hard challenge because they're so general. If you have a very narrowly defined model, you can balance the data much more easily, because you don't need balance across a wide variety of topics. If you're going for general AGI, artificial general intelligence, and you want the smartest model that can do a wide variety of tasks, you need to try to build a balanced data set across every task that you possibly can. You're basically trying to build a balanced data set that represents the world at large, and that's an extremely difficult technical challenge.
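A toy illustration of the rebalancing problem just described. The corpus, the "lean" tags, and the 70/30 split are all invented; real pipelines balance across many axes at once, which is exactly what makes the general case hard.

```python
import random
from collections import Counter

# Toy corpus: each document carries a presumed source lean.
# Tags and proportions are invented purely for illustration.
corpus = (
    [{"text": f"doc {i}", "lean": "left"} for i in range(700)]
    + [{"text": f"doc {i}", "lean": "right"} for i in range(300)]
)

def balanced_sample(docs, n):
    """Sample n docs with equal probability mass per lean group."""
    groups = {}
    for d in docs:
        groups.setdefault(d["lean"], []).append(d)
    per_group = n // len(groups)
    sample = []
    for group in groups.values():
        # Sampling with replacement effectively oversamples the
        # under-represented group to reach parity.
        sample += random.choices(group, k=per_group)
    return sample

train_set = balanced_sample(corpus, 1000)
print(Counter(d["lean"] for d in train_set))  # roughly 500 left / 500 right
```

Note that this balances exactly one axis. A general-purpose model would need something like this simultaneously across countless topics and dimensions, with no agreed-upon "middle" for most of them.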
Speaker 1: Well, so you're bringing up this idea. I'm thinking, let's do a thought exercise game.
Speaker 2: Sure, I love this.
Speaker 1: So, going back to the idea of Congress using AI to help synthesize information and be better informed in their voting: you have two parties, Republicans and Democrats.
Speaker 1:Obviously, as time has shown, the parties have become further and further apart in how they see certain issues.
Speaker 1: And then, and I'll just use the House for now, because they are the first stop for these massive volumes of information and a lot of legislation that then continues on to the Senate, you have within the House all of these different committees and caucuses that representatives sit on. Based off that committee or caucus, they care about different things; based off the district they represent, they care about specific things. So you're making me think, and I'm curious to hear how you would approach this: does it make sense for, say, the House of Representatives to build out their own targeted, smaller model that's specific to all the information passed around within Congress? That way the information stays proprietary and secure, so that, say, OpenAI is not being trained on it, because some of it requires security clearance or is confidential in some manner. But then within Congress they essentially have a RAG system, and they can pull what they need and synthesize information that's relevant to the votes they're making.
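A minimal sketch of the retrieval-augmented setup being described. The bill passages, the member profile, and the bag-of-words "embedding" are all stand-ins for illustration; a real congressional deployment would use a proper embedding model and vector store hosted entirely inside its own security boundary.

```python
import math
from collections import Counter

# Invented bill passages standing in for an internal legislative corpus.
bills = {
    "HR-1234 sec 12": "Allocates research funding for cross-border academic partnerships",
    "HR-1234 sec 47": "Unrelated rider: amends agricultural subsidy reporting rules",
}

def embed(text: str) -> Counter:
    # Stand-in for a real embedding model: bag-of-words term counts.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, k: int = 1):
    """Rank passages by similarity to the query and return the top k."""
    q = embed(query)
    ranked = sorted(bills.items(), key=lambda kv: cosine(q, embed(kv[1])), reverse=True)
    return ranked[:k]

member_profile = "Represents a district with major research universities."
context = retrieve("research funding partnerships")
prompt = (
    f"Member profile: {member_profile}\n"
    f"Relevant passages: {context}\n"
    "Summarize what this member should pay attention to before voting."
)
# `prompt` would then go to a model hosted inside the secure environment,
# so neither the bill text nor the query ever leaves Congress's systems.
print(prompt)
```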
Speaker 2: I think in theory, on paper, it makes a ton of sense for a variety of reasons. The security aspect certainly makes sense. In practice, I would love to be a fly on the wall in the room of the people who decide the training data for the congressional model, if you will. I think what's much more realistic is that the Democrats would want one model and the Republicans would want their own model, because they're going to have their own views, and they'd want to train a model that sees data the same way they interpret data, right? I would love it if Congress could decide on one bipartisan, or nonpartisan, model that reflected the true facts of the matter. I just don't see that actually happening in practice.
Speaker 2: But yeah, I do think it would be useful, and not only for Congress as a whole. I actually think you can extend it further: the Judiciary Committee would have a model, for example. You know much more about the committees and caucuses that exist, but each committee could have a model that's an expert on the particular topics it works on. That would be an amazing kind of nonpartisan voice in the room. Maybe there's even a world in which that nonpartisan voice helps bring the two sides closer together.
Speaker 1:That's what I was thinking, and I see what you're saying about some type of bifurcated model so that it satisfies each party's interests.
Speaker 1:But I was thinking of a more unified model to help unify thought on certain topics that at first glance they feel very far apart on.
Speaker 1: But if they were able to engage with the model to get what we'll call non-biased information, or nonpartisan information, they could then say, okay, what does this mean for us and how we vote. And there could be some type of context where whoever is using the model, depending on which party, committee, or caucus they're a part of, can frame that. Because, again, if it's proprietary and internal to Congress, they can say who they are, and then the model immediately knows: okay, you probably care about these things, based off the party you're in, the district you represent, all of that.
Speaker 1: Here are the things that I'll highlight for you, and then I'll summarize these other things. And maybe not exactly that way, because you don't want to leave the decision completely up to the model, but some way of being able to pull relevant information for decision-making. And then I think about, within the House, what's called the Problem Solvers Caucus, a bipartisan caucus whose intention is to come to the best outcomes regardless of party stance. I think a group like that would especially benefit from a nonpartisan model where they could pull information and data and exchange ideas with each other.
Speaker 2: I think what's funny about this, the thing I can't help but think about when we talk about this, is the topic of AI trust and safety, because you had mentioned a secure model.
Speaker 2: Obviously, because it's private, the public would demand that the model be open source and well understood, because congressional members are making decisions based on that model. The other thing that comes to mind is that we talked about privacy last week, and I guess trust is related to privacy, but slightly different. I think people would be very hesitant to trust a model that's informing Congress members' decision-making. But what's funny about that is we don't trust it simply because it's a machine, yet there are random people in the ears of all of our Congressmen and women every single day who are aggregating data, basically doing the same thing that AI would do: they aggregate data, they generate insights from that data, and then they try to influence these decision-makers. And it's so funny, because we don't trust an AI agent that would do it, but we forget that there are all sorts of humans doing this every single day.
Speaker 1: Yes, well, your second point is more or less the answer I was going to give to your first point, in terms of the information being publicly accessible. It's not publicly accessible now, so why would that be any different? And the people who are engaging in those conversations do have different clearance levels for different levels of important information. And then, exactly, on top of that, you also have all the lobbyists and special interest groups who come with volumes of information about their cause and talk to Congress about it. But I would say, if a model like that were to be built, as opposed to sharing what the actual information is, maybe the important part is sharing the architecture of the model: how it was built, how it's being trained, and maybe the sources of the information, not the actual information.
Speaker 2: Yeah, I can see that, and I think that would definitely put people at ease, or maybe not totally at ease, but more at ease, to have a little more transparency into what goes into it.
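One lightweight way to picture "share how it was built, not what's in it" is a machine-readable disclosure, loosely in the spirit of a model card. Every field below is hypothetical, invented to show the shape of such a document rather than any real standard.

```python
import json

# Hypothetical disclosure for a congressional model: it describes the
# architecture, training procedure, and classes of sources, without
# exposing the underlying confidential documents themselves.
disclosure = {
    "model_family": "decoder-only transformer",        # architecture, not weights
    "training_procedure": ["base pretraining", "supervised fine-tuning"],
    "source_classes": [
        {"class": "public bill text", "public": True},
        {"class": "committee briefings", "public": False, "clearance": "confidential"},
    ],
    "retrieval": {"method": "RAG", "corpus_disclosed": "by class only"},
}
print(json.dumps(disclosure, indent=2))
```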
Speaker 1:How else can politics be disrupted by AI?
Speaker 2: Well, that's actually the next thing I was going to go into. I came across an interesting news story this week. There's a tool I was playing around with a few months ago called HeyGen, spelled with a G, not a J. So "hey" and then G-E-N.
Speaker 1: Gen as in gen AI?
Speaker 2: Exactly, exactly. So HeyGen is an avatar company; they're producing both video and voice clones of people. What you do is record a 30-second video of yourself talking to the screen, and it will create a realistic-looking avatar, with a video model that moves your expressions, your body, and your lips to match text that you type in for it to say, and it will also clone your voice. I think we talked a little bit about this in our last episode. I was playing around with it, and it's good enough that I created an avatar of myself, had it say something, and sent a video of it by text message to my girlfriend, and before I told her it was an avatar, she could not tell. On a small screen, just listening to it quickly, she couldn't tell it was an avatar.
Speaker 2: And I bring up HeyGen because there was a cool use case. They raised like $60 million recently in a Series A, which is a pretty big Series A in this environment, and one of the use cases they touted was that apparently the Argentinian president used HeyGen to translate his speech at the World Economic Forum into English, and apparently the quality was very, very good. So two use cases there: politicians using avatars to put out new messages, and also language translation, so they can give speeches in one language and have them translated into other languages their constituents might speak. So I thought those were cool.
Speaker 1:That's very interesting. I really like that use case. It also reminds me of something I wanted to mention last week, but we talked only about Apple intelligence week, but we talked only about Apple intelligence, but it was about how Samsung and the Samsung Galaxy has their version of Apple intelligence that they had already announced several months ago, and one of their core features is the ability to real time translate phone calls, so from one language to another, and then also to do translation within content and whatnot, and that's something that Apple intelligence didn't address at all. But I think that's actually one of the most interesting use cases from a consumer standpoint within our mobile devices.
Speaker 2: Yeah, I think that's a super interesting use case. Years ago I went to Mexico City, and we were in a museum, and I had Google Translate open. This was pre-LLMs, and even then it was good.
Speaker 1:You mean when we went to Mexico City?
Speaker 2: It is, it's the trip we were on. The wall in the museum had a story about the thing I was looking at, and it was able to translate it in real time into English for me. So, getting back to what we were talking about at the top of the episode, politics: I think anytime people can communicate better across cultures, that hopefully brings people closer together on the political spectrum, just by virtue of understanding each other better. So the more we can break down these communication barriers using AI, the better that hopefully is for us as a culture, and it will definitely disrupt or change politics in one way or another.
Speaker 1: For sure. And you're also making me think of another use case for disruption in politics, but on the, we'll say, consumer side, though really it's the constituent side. As much as Congress needs to understand and synthesize large volumes of information, so do voters. So giving them a way to understand what the issues are, what the potential policy and legislation around them is, and to engage with the material in that way, through some type of AI-enabled interface.
Speaker 2: I think that's really a good point, because I find that a lot of voters would like to be more informed, and it's quite hard to be informed, right? A lot of times, I think what people do is turn to the friends they feel are their most informed friends and ask, hey, how am I supposed to feel about this certain issue? I feel this way; is there something I'm missing? And I do think it would be pretty cool if somebody created a bipartisan issue bot that basically loaded up a bunch of data about a specific issue and was purposely balanced, right? We talked about that earlier; it would be very difficult. But if it was done by somebody nonpartisan, or some sort of nonpartisan nonprofit, putting together an issue and then having voters be able to ask it questions, kind of like I asked AI Steve earlier today. I said, tell me the benefits of a four-day workweek, tell me the downsides of a four-day workweek. And then people can make informed decisions. I think that would be really neat.
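A sketch of the "purposely balanced" part of that issue bot: instead of hoping retrieval comes back even-handed, force the model's context to carry the same number of pro and con passages before it answers. The corpus and its tags are invented for illustration.

```python
# Balance-enforced context building for a hypothetical nonpartisan issue bot.
# Passages and their pro/con tags are invented for illustration only.

issue_corpus = {
    "four_day_workweek": {
        "pro": ["Pilot programs reported productivity holding steady."],
        "con": ["Customer-facing businesses may need costly extra staffing."],
    }
}

def build_context(issue: str, per_side: int = 1) -> str:
    """Interleave an equal number of passages from each side, so the
    context window is balanced by construction rather than by luck."""
    sides = issue_corpus[issue]
    chunks = []
    for side in ("pro", "con"):
        for passage in sides[side][:per_side]:
            chunks.append(f"[{side.upper()}] {passage}")
    return "\n".join(chunks)

# The balanced context would be prepended to the voter's question
# before it is sent to the underlying language model.
print(build_context("four_day_workweek"))
```

The hard part, as discussed earlier, is curating those pro and con pools fairly in the first place; the mechanical balancing is the easy bit.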
Speaker 1:I have really good news for you, Dan, because one of our friends is actually working on just that. They're still in stealth mode, but it is on its way.
Speaker 2:Oh, that's interesting. I think the more we can inform voters and constituents about certain topics, I think that leads to a better democracy.
Speaker 1: I agree. I think, as with every episode, we're just getting started on the potential disruption from AI within politics alone, and I'm sure we'll weave back to this as we go. There's obviously a lot here. I'd also love to hear from our listeners which other industries they want to hear about, and we'll focus on those. We have some ideas on our end but, of course, always want to hear what people are interested in.
Speaker 2: Yeah, I think the question is not what industries will be disrupted, but rather what industries won't be disrupted, because I think every industry is going to be disrupted. So, yeah, I would love to hear what people are most interested in hearing about.
Speaker 1: I mean, absolutely. Awesome, see you next week.
Speaker 2:See you next week.
Speaker 2: Bye. I know we just said bye, but the number one thing that we didn't mention about how AI is going to disrupt politics is digital avatars.
Speaker 2: You know, video avatars and voice avatars of politicians. I mentioned it earlier a little bit in terms of language translation for speeches, but if you extend it out, it's got both a lot of potential and also scary consequences. Basically, the idea is that we could get a presidential candidate to say whatever we want; we can create a digital avatar of them. Used in the best way, campaigns could create commercials or digital content of their politicians or candidates talking, using the same kind of talking points they typically would. But used in the worst way, in the scariest way, people are going to be able to make politicians say whatever they want, at least in digital avatar form, and I think that can really misguide and misdirect a lot of people who believe those to be real clips.
Speaker 1: I agree with you. And actually, something I've been thinking a lot about recently is this idea of a digital watermark being in place for AI-generated audio and visual content. And so now, to be really meta on the topic, I feel like, okay, let's go lobby Congress to have something like this in place.
Speaker 2: Yeah, I think we're going to have to have something like that in place, because if you think three or four years out, the technology is going to be so good you're not going to be able to tell the difference. There are a lot of companies trying to do AI detection, detecting AI-generated text or AI-generated video, but I don't know that we'll be able to rely on those alone. I think we're going to have to legislate our way to a solution here, to say that for anything that's digitally generated, be it video or audio, or maybe even text, the companies creating that content are going to have to put some sort of mark on it, or some sort of disclaimer saying it was digitally created.
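A toy version of that disclosure mechanic: sign generated media together with an "AI-generated" manifest, so a verifier can detect when the label or the media has been altered. Real proposals, such as C2PA-style content credentials, are far more elaborate; the registry, key, and field names below are all hypothetical.

```python
import hashlib
import hmac
import json

SECRET_KEY = b"registry-held-signing-key"  # hypothetical key held by a provenance registry

def label_generated_media(media_bytes: bytes, generator: str) -> dict:
    """Attach a signed 'AI-generated' manifest to a media payload."""
    manifest = {"generator": generator, "ai_generated": True}
    payload = media_bytes + json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return manifest

def verify_label(media_bytes: bytes, manifest: dict) -> bool:
    """Recompute the signature; a mismatch means the label or media was altered."""
    claimed = manifest.get("signature", "")
    unsigned = {k: v for k, v in manifest.items() if k != "signature"}
    payload = media_bytes + json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(claimed, expected)

video = b"...synthetic avatar video bytes..."
manifest = label_generated_media(video, generator="hypothetical-avatar-service")
print(verify_label(video, manifest))          # True: intact label verifies
print(verify_label(video + b"x", manifest))   # False: tampered media fails
```

A metadata label like this can simply be stripped when a file is re-encoded, which is exactly why the conversation turns to legislation and standardized provenance marks rather than purely technical fixes.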
Speaker 1: I totally agree. I think there's a lot to walk away with here and think about over the coming days. Let's regroup on this.
Speaker 2: Yeah, that sounds great.
Speaker 1: Awesome. Talk to you soon.
Speaker 2: All right, see ya. Bye.