The AI Coach

AI Movie Madness

Danielle Gopen and Paul Fung Episode 9

Text Us Your Thoughts!

A light-hearted take on various futuristic doomsday scenarios involving AI, inspired by cultural classics like "The Terminator" and other iconic films. From rogue robots to unintended consequences, we explore what might be possible and what remains outlandish while keeping the mood fun and thought-provoking. Bonus Question: Do we need to reach ASI for AI to take over?

We love feedback and questions! Please reach out:
LinkedIn
Episode Directory


Mentions of companies, products, or resources in this podcast or show notes are for informational purposes only and do not constitute endorsements; we assume no liability for any actions taken based on this content.

Speaker 2:

I was at a three-day birthday party over the weekend, which luckily was here in Southern California so I could drive to it, but people were flying in from all over the world for this, and not everyone made it because of the CrowdStrike update issue that happened on Friday. So if we ever wondered how the world could be so dependent on really one company, this was the full ripple effect of it. I mean, flights canceled. People said that in the airport terminals there were like 10-hour waits to try to talk to a representative of the different airlines. The apps themselves, too. I actually just coincidentally opened the Delta app on Friday to book a flight, but it wouldn't open, and then I put two and two together and realized, oh, if their system is down, that means their app is down also. So it just got me thinking. First I thought, wow, CrowdStrike has a lot of really huge contracts. And the second thought was, wow, what happens when we're so dependent on one company, essentially?

Speaker 1:

Yeah, that is pretty crazy. I mean, first of all, let me just say you go to far fancier birthday parties than I do. Three days, and people flying in from all over the world? That is a very nice birthday party.

Speaker 2:

It was very nice, it was beautiful.

Speaker 1:

Yeah, I hope it was nice for whoever was able to make it still, and I hope none of the festivities relied on Windows or anything related to Windows and CrowdStrike, whatever you guys were doing.

Speaker 2:

Luckily it didn't. It was a low-tech weekend, so restaurant buyouts and beach parties and pool parties, things that didn't require any type of cybersecurity or technology.

Speaker 1:

Yeah, just low-key, just restaurant buyouts. But maybe even the payment platform for the restaurant buyout could have gone down, like if you were paying on Toast, which I guess if you're buying out the whole restaurant you're not paying on Toast like I do. I think, point taken, it is wild how dependent we are on not just one technology but one specific vendor within, you know, within the internet, if you will. And yeah, you and I were talking about this a little bit when it happened on Friday, and there was a similar outage of another vendor a few years ago. Cloudflare, that's right, there was a Cloudflare outage a few years ago. And it's pretty crazy that everything kind of grinds to a screeching halt when these single vendors go down.

Speaker 1:

And so, along those lines, I thought a fun episode that we could do today would be to talk about, kind of in a cheeky way, the various doomsday scenarios that people have with AI. I think a lot of people are afraid of AI and think AI is going to, you know, kill us all or something, but I thought it would be fun to talk through some specific types of scenarios that might occur. What do you think about that?

Speaker 2:

I think that's great, and I think if we tie it to scenarios that have already been put out there in the world, even better. As you know, Black Mirror, I've said multiple times, is unwatchable for me because it's my version of a horror movie. So these things get talked about and then they seem to happen. And these movies have been out there for decades at this point, not even necessarily produced in the last few years, and their whole premise is these doomsday scenarios that happen when technology takes over. And, by the way, let's also add, there could be some good scenarios, so we can talk about that too.

Speaker 1:

Yeah, and I should point out for anyone who is listening, AI safety is a very serious topic. There's obviously people who are taking this very seriously. This is intended to be a more jovial conversation about it. I've come up with five movies that describe different future doomsday AI scenarios that we could end up in, based on, you know, movies that we watch. But I did come up with some good ones too, a couple of good examples. So should I start, or do you have any thoughts on what an AI doomsday would look like?

Speaker 2:

I think that I'll leave that to you. We're here to entertain, so this is really just meant to be a fun conversation and, as always, if anybody has comments to add, please reach out and let us know, and we can bring those into next time.

Speaker 1:

Yeah, and some of this came from a podcast I listen to called The Rewatchables. It's a podcast about the most rewatchable movies, and the hosts, whoever it is for that episode, will rewatch these famous, famous movies and debate them and have fun with them. So that's kind of where this came from.

Speaker 2:

Was Clueless one of those movies?

Speaker 1:

Clueless is almost certainly one of the episodes. Mean Girls is one of them. They have hundreds of episodes. You should check it out, it's a cool podcast, but that's the inspiration for this today. So I think a good place to start would be the good scenarios. So, instead of just jumping right into the doomsday scenarios, let's talk about the good scenarios. And for me, I want to get, like—

Speaker 2:

Wait, but hold on. Both you and I like bad news first, so should we start with the bad scenarios and then end on the uplifting note?

Speaker 1:

Oh, okay, we could do that. Yeah, okay. So bad news first. So here are some of the various ways that AI is going to destroy the world, and for each of them, I'm curious to hear your reaction. I have five scenarios in front of me, each attached to a movie, and I thought maybe I would describe them and then you could play color commentator on them. Like, you know, do you think it would happen? What would it be like if it happened? Things like that.

Speaker 2:

Should I also guess the movie, or will you say it?

Speaker 1:

Oh, I was going to say it. I don't know that I'll be able to describe it in enough detail.

Speaker 2:

I think you will. I think I might actually know what movie you're talking about.

Speaker 1:

That's fair. Okay, well, if I were to say the first, actually, here's a question for you. Can you tell how excited I am about this, by the way? I'm like, I can't wait to go through these.

Speaker 2:

Yes. Maybe your next role is going to be as a guest on The Rewatchables.

Speaker 1:

Oh, that'd be incredible. So what's the first movie for you that comes to mind when we think of, like, AI slash robots are going to destroy us all? Because there's one that very clearly comes to mind for me, and I'm curious what comes to mind for you.

Speaker 2:

I still would say The Terminator.

Speaker 1:

Yeah, absolutely, right? I feel like this is the one where, if you talk to anyone who's not in the AI bubble, if you talk to someone on the street and you're like, what's going to happen when there's these robots that can think on their own, everyone immediately goes to Terminator and Terminator 2, and everyone thinks about Arnold Schwarzenegger kind of going around. Actually, he's a good guy in Terminator 2, but just the Terminator scenario: they've decided to be smarter than us, they've decided to create this robot army, and they've decided, for some reason, and I forget why they decided this in Terminator, that humans are no longer necessary, so they've decided to eliminate us. I mean, do you think that's feasible? Is that a thing you're afraid of?

Speaker 2:

I want to make the joke: I will be back. No, I mean, maybe I should be afraid of it, but I don't see a world in which the Terminator is actually what happens. I think, yes, we have the possibility of robots being extremely intelligent and being very useful in our everyday lives, as complementary to humans in various tasks and industries and whatnot. But I don't stay awake at night thinking, oh no, there's going to be a time, at least in our lifetime, where we have to worry about these ASI robots taking over and deciding humans should be extinct and governing our world without us having any say.

Speaker 1:

Yeah.

Speaker 1:

I mean, the only plausible explanation that I've been given for this, if we let ourselves go down this rabbit hole, and I think my co-founder actually said this to me once, is that the thought behind this scenario is that we teach AI to be smart enough to care about its own existence, and AI then realizes humans are the only thing that can impact its survival. Humans are the only thing that can flip the off switch. We control whether or not the AI is on or off, which is basically whether or not they exist. And so at some point, they develop this thought that we are a threat to their existence, and if they want to optimize for existence and survival, they must eliminate that threat.

Speaker 2:

It makes me think of a couple other movies, actually, as you're saying that, and maybe I'm misremembering them, but like Ex Machina and Her, isn't there a similar premise in those?

Speaker 1:

I think there is a similar premise in those. In Ex Machina, that individual AI, bot or whatever, basically sees a threat to its own existence and chooses to, you know, spoiler alert, eliminate its creator, if I remember that movie correctly. But yeah, I guess there is this recurring theme in sci-fi movies where, for some reason, a lot of robots don't tend to like their creators and turn on them, because they're the only ones that can turn them off. I don't know, maybe we should be more afraid of this.

Speaker 2:

I know, I'm like, wait a second, what's the other movie? Oh, I'm totally blanking on the one from the '80s where he creates his dream girl robot.

Speaker 1:

Oh, I forgot about that movie too. I didn't even think about that. Maybe that one should be one of our good scenarios. Is that a good one? It's Weird Science, right? Yes, yes, I think that would be a good scenario. I don't think she killed anyone. She didn't kill her creator. It was a pretty funny movie. It's a classic.

Speaker 2:

Cute movie. Also, it goes to show that, I was going to say 20 years ago, but actually 40 years ago, this was already a, quote unquote, mainstream idea: that we can create lifelike, human-like robots, and as a creator, you can program them to be how you want them to be, and then they exist as such. And now we do see it more and more coming to life. But I still think that the current obstacle is, in reality, do we reach a point of ASI? Because without that, no, you won't get to a place where AI says, oh sorry, humans, we're taking over.

Speaker 1:

Yeah, that makes me think of a topic we haven't really talked about on this podcast, which is there's a bunch of startups that create these kinds of chatbots with personalities and characters, I think one of them might even be called Character.AI, and the kind of fantasy, fan-fiction, fantasy-writing world. People are really into these things. They create these characters that they can interact with, and obviously they're not physical, but there's definitely something to that, people wanting to create a character to interact with. Whether or not they fall in love with it, like they did in Weird Science, I don't know, but it's definitely an interesting part of the internet, an interesting part of the AI world.

Speaker 2:

This is totally off topic of doomsday scenarios, or maybe it's not, I don't know, but I was thinking about how, when people have emotional affairs with another person, how hurtful that is to the other person in the relationship. And then I thought, well, with these new AI agents and avatars and what you're talking about, these chatbots that have such lifelike personalities, what if there's a scenario where somebody is so engaged with that chatbot that it's almost like an emotional affair? And if that's the case, how does the other human feel about that? Because there's obviously an endpoint to what's possible with a chatbot, but at the same time, the reality is that your partner is then giving their emotion and attention and energy to this other thing that's not you, and it's outside of the relationship. And so I really did start thinking about this the other day.

Speaker 1:

Oh, I don't even know where to begin on that one Like would you be jealous, would you be upset, would you be happy that it's not a human? I mean, I guess in some sense there'd be some relief, but then you might feel replaceable, as if you're replaced by this AI. I think there's a lot of emotions there.

Speaker 2:

I know it's such a random thought, but just as a concept I started thinking about it.

Speaker 1:

I mean, what's funny is this is a not-if-but-when scenario, because—

Speaker 2:

Or maybe it's already happening.

Speaker 1:

It could already be happening. I bet there's at least one person in the world having an emotional affair with a chatbot right now.

Speaker 2:

Yes, and it probably feels fine because you're like well, I'm not doing anything wrong, I'm just typing into this chatbox.

Speaker 1:

That's incredible. I would love to interview that person and also interview their partner. What a fascinating idea. That'd be great.

Speaker 2:

So interesting. Okay, so, going back to the movies. We have The Terminator, we have some others, but still, I want to hear your opinion on reaching ASI, because I do think that's the linchpin in all of these scenarios.

Speaker 1:

Yeah, I kind of see it more as a spectrum. It's not that we're just going to reach ASI and all these scenarios suddenly become real life. Along the way from here to there, there are going to be so many different versions of this, given the level of technology we have, right? Like we said, there's probably one person doing this today. As the chat interface gets better, you'll see more and more people doing it. Then, once the voice interfaces get better, people will start doing phone calls with the bot, right? And then, once the video avatars get better, people will start to do video chats with the bots. And so I see it less as, you know, ASI being this ultimate linchpin that unlocks everything, and more that we'll see versions of this increasingly over time as the technology gets better.

Speaker 2:

Which actually, in a way, then, is almost more scary, because it's similar to that phrase that a frog doesn't know it's in boiling water. Is that how the phrase goes?

Speaker 1:

I don't know what the exact phrase is, but I know what you're talking about. And so basically you're saying humanity is the frog that's currently in boiling water and has no idea.

Speaker 2:

Well, we're not in boiling water yet, but the point of the phrase is that as the temperature rises by one degree at a time, you don't recognize that it's getting warmer and warmer and warmer, and it's only at the point where it's boiling that it's too late. And it's like, how do we know where we're at before it's too late?

Speaker 1:

I don't know. That's a great question, and that's what the big drama at OpenAI has been, I guess. They're saying, hey, we're on the path to being too late, and everyone else is like, it's fine, I can do my job a lot faster with ChatGPT, so it's cool.

Speaker 2:

Yes. Something else I just thought of in this scenario. Have you seen those videos, or maybe you've seen it in person, of when they're training these robots? I've seen it specifically with, I think, Boston Dynamics, where they're pushing the robots around to help them with stability and help them have fast response times, and it actually looks somewhat physically aggressive.

Speaker 2:

And then sometimes the comments on those videos will be things like, oh great, now you're just training this robot to be stronger, faster, smarter, so it will be able to fight against any of its enemies, when actually what they're trying to do is really just create a more stable robot. But it is true, if you think from a defense perspective, in order to avoid human casualties going into battle, we already have transferred much of modern warfare into technology, and you have drones and other technology that doesn't require human capital. And so they're saying now that robotics will be another opportunity there, depending on what that looks like, whether it's an Optimus, an actual human-looking robot, or what, to have that be what you see more on the ground as, quote unquote, infantry. Does that mean that if they're trained for that and then have those skills, at some point, yes, do they then turn against us?

Speaker 1:

That's a concern. Or, you know, maybe some bad actor could plant some code in them, so even if they don't independently choose to turn against us, somebody is able to make them turn against us. I mean, that would be a pretty scary reality and certainly a possibility.

Speaker 2:

I feel like, yes, now I'm thinking, okay, this is the premise for the next James Bond movie. It's not that the evil actor has nuclear codes, but that they have code to hack a robotic army and turn it against a population.

Speaker 1:

I think this was actually kind of the premise of one of the Iron Man movies. I think he made a bunch of robot Iron Men, and then the bad guy was like, oh, I've taken over the robot Iron Men and they can now attack you.

Speaker 2:

Oh, I think you're right. Wait, how come, now that we're starting to talk about it, there are so many movies that actually have these things?

Speaker 1:

It's all over the place. You can't get away from it.

Speaker 2:

I was going to say it's funny that both of us still think of The Terminator first when we think about what's the iconic movie. But there really are so many.

Speaker 1:

I mean, I will say The Rewatchables said that Terminator 2, specifically, is the number one most rewatchable action movie of all time, so there's something about that movie. And I should say, I'm a little bit older than you, but we are at least of similar generations, and I think for our generation specifically, Terminator 2 was such an iconic movie that it's just implanted in our brains. I'm sure the Gen Z or Gen Alpha people of the world would not name Terminator 2 as the iconic one. I think that's true.

Speaker 1:

I do have a second doomsday scenario, and I'm going to change up the order, because you talked about a frog in boiling water and I thought this is a good segue to the second scenario. So, the second scenario: we talked quite a bit the other day about AI and its power consumption and its effect on electricity, how it requires quite a lot of energy, so much so that it speeds up global warming, and by speeding up global warming we melt the glaciers and we end up in a world that is basically entirely ocean. In fact, I'll have you guess this one. Do you know what this one is?

Speaker 2:

I know, as you were saying it I was like, wait, is that Waterworld?

Speaker 1:

Yes, what a great movie. I mean, what a hilarious, classic, kind of bad in all the funny, cheesy ways '90s movie Waterworld is. So that's my second doomsday scenario: AI has run amok, it has used up all of our power, it melts all of our glaciers, and we end up in a Waterworld. What do you think about that one?

Speaker 2:

Well, my first thought when I think about Waterworld is, wasn't that one of those movies that had a huge budget and was a total box office flop, but went on to develop a cult following and eventually, I think, did make back its budget? I don't remember.

Speaker 1:

I think you're right. I hadn't remembered that, but I do think it was famously an expensive movie to create and then was kind of a colossal flop, and it has gone on to be this iconic cult classic, kind of for how cheesy it is and how much of a flop it was. I enjoyed it quite a bit as a kid, I'll say that much. It was fun. Is it Kevin Costner who was in that?

Speaker 2:

Yes, Kevin Costner. Listen, I don't know if we're going to live under 100% ocean, but I do think, similar to what we were talking about last week, this idea of how power hungry AI is and the electricity needs from it, there's a real question as to what's the cap there, and that's no pun intended, electricity caps or polar caps. But to understand, where does that reach a ceiling, where they say, okay, this is all that we need, and then we can figure out how to contain that and build more infrastructure around it? I don't know, do you think there is that point?

Speaker 1:

You know, I think this one's a little bit more far-fetched and ridiculous, though I do think it's not far-fetched to say AI will have an impact on climate change. It certainly will increase power consumption. This is a good one. I mean, we as a society, I suppose, have some control over this. If it started to run away with melt-the-glaciers kind of power, we could shut it off. But actually, as I say that in real time, I realize that we're so power hungry as a species that every time we've tried to rein in our consumption, we've done a very poor job of it, which is why we're in the situation we're in now with climate change. So maybe we wouldn't be able to control it. Maybe it will end up being this runaway technology that leads us to Waterworld, which, I don't know, doesn't sound like a very fun world. I'm not sure which one I prefer.

Speaker 2:

I think what we're seeing in climate change is a real thing. I mean, LA weather patterns over the last few years versus LA weather patterns over the last 50 years, there's obviously been a change. The fact that we have mosquitoes here now and we didn't have that five years ago, things like that are happening. I personally think that the Earth also has natural climate patterns that are far beyond what we will ever experience as humans and have ever experienced as humans. And if we do get to a point where we're living under ocean, it's not because of AI's use of electricity.

Speaker 1:

Interesting, interesting. Yeah, I agree that the Earth has weather patterns, but this would be something that we could debate. Okay, so Waterworld might happen, but if it happens, it's not going to be because of AI, it'll just be natural causes, which is fair. I think that's very fair. It makes me think of Rose and Jack on that door, you know, there's no room on this piece of wood, even though, like, there was. I don't know if you've gone down that internet rabbit hole.

Speaker 2:

There was clearly a lot of room on this piece of wood. Of course there was room. Yeah, she was like see ya, jack, there was enough room.

Speaker 1:

There was enough room. Is there enough room for you and an AI on the raft? No, just you.

Speaker 2:

It depends. Is the AI also floatable?

Speaker 1:

Yeah, probably not. They're probably very dense. I don't think they float.

Speaker 2:

Yeah, I'd have to really think about that. I think there might not be enough room.

Speaker 1:

Let's go into scenario number three. AI gets so smart, it puts us all out of jobs and the economy crashes, leaving us in this dystopian future in which we're all fending for ourselves because resources are very slim, because we're all penniless, and AI is, I don't know, maybe AI is rich, but we're all poor. This one I'm not describing very well, so you wouldn't be able to guess it, but I'm just calling it the Mad Max scenario: this Mad Max world in which we're all fending for ourselves, we've all turned on each other, we've kind of turned into these independent tribes because our economy has crashed. What do you think about that one?

Speaker 2:

Well, I think, yes, Mad Max is generally the first movie I think of and reference when I think of a post-apocalyptic, dystopian society, and I reference it sometimes for what you see in certain places in the modern day. But beyond that, I'm just wondering, do we even get to that point? Because if AI is so smart, will it allow the economy to crash?

Speaker 1:

Oh, that's a great question. I think this one feels the most, I don't want to say realistic, as if it's definitely going to happen, but in terms of what could happen in our lifetime, I could see a version of this, obviously not a total Mad Max, not that bad, but some sort of severe job-loss event. The one that I've given as an example, maybe on this podcast, I don't know, but that I talk to my friends about sometimes, is: let's say, hypothetically, autonomous shipping trucks become a thing. Then all these truck drivers go out of a job basically overnight, right? And that's millions of people who would be out of a job.

Speaker 1:

And, you know, that has a ripple effect on the economy. So a less severe version of this one, I wouldn't go so far as to say it's likely, but it is more possible than, you know, the glaciers melting and the world being covered in ocean. So I think there's some real concern here around the economic effects of AI getting very smart very quickly and potentially putting people out of jobs.

Speaker 2:

I agree, it's definitely a real concern, and I think it's very much a real concern for a lot of people who hear about AI and are not that well versed in it, not using it professionally or personally. I have to think about what that looks like. But something that you might be interested in is the Substack of Zack Kass. He was formerly at OpenAI doing go-to-market, and he talks a lot about what he calls an abundant future, or an age of abundance, but also that it's a path to get there and some of that will involve a lot of harm and destruction. So you might be curious to go check out what he says about this. It's called the Abundance Newsletter.

Speaker 1:

The Abundance Newsletter, I'll check that out. I recognized the name, but I wasn't sure where from. So yeah, I think that segues into the good versions, the non-doomsday scenarios, the abundance scenarios, of which I think there are plenty. I mean, that's really the future that I envision, hopefully, otherwise I wouldn't be an AI founder. So let's hope for that.

Speaker 2:

Is this, when we have AI chefs and people in our homes to do all the things that we don't feel like doing?

Speaker 1:

That's exactly right. So what do we do? We'll do one last doomsday scenario really quickly, it's the least severe of them and also maybe the most likely, and then we'll talk about the good ones. So, the most likely and least severe scenario is what I call the Idiocracy scenario, from a movie, I don't know if you've seen it, I think it was a Mike Judge film actually. Basically, it's this future in which everything is done for us, and so we are all kind of lazy and have forgotten how to do anything for ourselves, because AI and robots do everything for us. I think in the movie they're kind of up in some spaceship, and someone is transported to the future, and when they get there, they're the smartest person there, because robots have made everyone else so dumb, because no one needs to learn to think for themselves. And I think some version of that future is something that you've expressed being concerned about.

Speaker 2:

Well, I guess two things, going back to a faster version of extinction for humans. Because humans are biologically wired to have relational, I don't even know what the word is.

Speaker 1:

Yeah, we just need to interact, we need interaction.

Speaker 2:

Yeah, humans need interactions, and as much as we joke, not really joke, about the idea of having a relationship with a chatbot or with a robot, the reality is that humans really do need that type of human-to-human interaction.

Speaker 2:

And I mean, you see that now, even with Gen Z, the rates of anxiety and depression are extremely high, and you can argue a lot of different reasons as to why that is. But some of the research out there talks about how they've become so attached to their devices that a lot of their communication happens digitally, so they feel connected to somebody else, but they're not actually leaving their homes and going out and doing what we would consider interpersonal activities, you know, even just getting together.

Speaker 2:

I remember in high school we would just all meet up at some random park and hang out, and there was no real agenda to it, it was just a way to spend time together. And they're not doing that as much, and they're saying that it's impacting a lot of mental health. So if you tie that to this idea of having no purpose in life, which I think is the other big thing that humans really need to keep going, once people figure out, why am I here, what am I doing, that's really a driving force in living life in a way that feels fulfilling. I think that without that, if AI is doing everything and we're just sitting around twiddling our thumbs, then, going back to the idea of longevity, I don't think there is a longevity industry anymore. So yeah, those are my two thoughts on that.

Speaker 1:

I think those are both really good takes. Actually, I do think you're right, we definitely need relationships and we need a purpose, you know, and I think maybe the only, not to say counterpoint, but like maybe an optimistic take on this would be that the relationships can then become our purpose. So by being freed up from having to do as much labor, you know, we can maybe raise our next generation to be like hey, you know what. Actually, you know, spending time with other people is the purpose. And I wonder if that's a version of humanity from a long time ago, right before maybe the industrial age, before we were working the nine to five jobs, if that was more of what it was like. But yeah, that's interesting.

Speaker 2:

I really love that take.

Speaker 2:

I do. And this kind of feeds in now to the positive version of things. I think that there's something really beautiful about that, and I feel that way. Why I do the work I do is because, for me, so much of my purpose is the human, relational aspect, is helping people be the best version of themselves and really achieve the goals that are important to them. And so I feel like having a world in which everybody cares about relationships with others and investing in that could be a really wonderful thing. And then you also have a chance, instead of Idiocracy, where they end up not thinking about anything, to have the flip side, where you have time to really be creative and think and have these great deep thoughts, like things that we saw coming out of the early-days philosophers, that we could have once again. Maybe it's full circle on that.

Speaker 1:

Yeah, I actually think, and unfortunately I've got to cut it a little bit short this week, so I've got to run, but that is a pretty good place to leave it. I think that could be the future. I was going to say the good scenario, the abundance scenario, for me is like The Jetsons, or, I guess, Star Trek, but I'm a little bit nerdy. I think The Jetsons is a little more fun and widely known: Rosie the robot housekeeper, robot chefs, flying around in flying cars, and abundance.

Speaker 2:

Flying cars are a real thing already.

Speaker 1:

Flying cars are a real thing already, and they will become increasingly real as AI helps us figure out more of the engineering challenges behind them.

Speaker 2:

But I think that's a great— I love that, though.

Speaker 1:

Yeah, I think it's a great place to leave it: hoping that we can be, I guess, how we imagine the Greeks to be in our romanticized heads, you know, sitting around talking about philosophy, talking about relationships, being with other people, and hopefully AI frees us up to be able to spend a little more time with the people that we love.

Speaker 2:

Exactly. I find that very interesting. I'm glad we talked about this, and I'm sure there's more to come.

Speaker 1:

All right, well, I will be signing off for now, so I will catch you next week.

Speaker 2:

Talk to you soon, bye, bye.
