The AI Coach

Apple Intelligence Unveiled: What's Your Privacy Worth?

Danielle Gopen & Paul Fung Episode 4

Text Us Your Thoughts!

Our thoughts on the Apple Intelligence announcement and downstream impacts, trading privacy for convenience, and the future of AI in consumer technology including driverless cars and home devices. Bonus question: can robots feel empathy?

Links and Resources:
Humanoid Robot reference

We love feedback and questions! Please reach out:
LinkedIn
Episode Directory


Mentions of companies, products, or resources in this podcast or show notes are for informational purposes only and do not constitute endorsements; we assume no liability for any actions taken based on this content.

Speaker 2:

I'm starting to feel like us recording weekly is making us obsolete because things are moving so quickly within AI. So a couple of weeks ago, we mentioned FlowSend, which helps generate content for marketing the podcast, coming up with the title, blog posts, and things like that. And now, just this week, Buzzsprout, which is a podcast hosting platform, incorporated the same thing. I compared them side by side. The content that it produces is slightly different, but more or less it's the same, and it just got me thinking about really how quickly things are moving. And then, of course, the big update from this week is Apple, so let's spend some time today talking about that, and we'll go from there.

Speaker 1:

Yeah, that sounds great. And I actually didn't tell you this before the podcast, but we're going to continue to support FlowSend, because I'm meeting the founder on Monday, and so we're going to try and support the little guys. Nice. But yeah, when it comes to things changing so quickly, we met with our investors a few months back, and one of the things they said has stuck with me this whole time: when you're building right now in AI, you're building on shifting sand. Those two words, shifting sand, continually stick out to me anytime these new announcements happen.

Speaker 1:

Right, because, just like FlowSend, they've announced that this is their new product, this is a product pivot for them. And then a much larger platform, Buzzsprout, says, oh, we're just going to incorporate it into our product. And I think this is exactly what people ask all the time in startups: is your thing a feature or a product? FlowSend is saying, hey, it's a product, and Buzzsprout is saying it's a feature of our product. So I think that's one interesting take on it. And the other thing is, yeah, it's very analogous to what OpenAI did to the wider AI ecosystem when they did their dev day last year, and the buzz afterwards was like, oh my god, they just announced so many features.

Speaker 1:

They probably just put hundreds of startups out of business. And it's really hard right now, in the current environment, to build new products and new ideas that some larger incumbent isn't just going to incorporate into their products a week from now, a month from now, a year from now. So you have to figure out, when you're building a business like we are, what are the sustainable businesses? What are the durable advantages? Not just small features, but fundamentally different approaches to problems.

Speaker 2:

And I think that so ties into what we talked about last week, which was being comfortable with being wrong and making decisions, because you really don't know what's to come, especially in the AI space. I think, even more now than ever, these companies need leaders who are comfortable being wrong and pivoting.

Speaker 1:

Yeah.

Speaker 2:

But so, for Apple in particular: we saw Apple Intelligence announced. I have my thoughts, but let's start with yours as the expert.

Speaker 1:

My takeaway is that, in some ways, it's absolutely huge, but in other ways I'm like, oh, it's more of the same. And what I mean by this is, I think the thing that is absolutely huge about it is Apple adopting this so quickly, right?

Speaker 1:

I mean, ChatGPT only came out last year. Apple is notoriously, I guess, slow. They make sure to perfect their products, and so seeing them move so quickly and go all in on something either means they've been working on this for quite some time, or it means they've made a big shift, right? They've chosen to go all in on this and make it a big focal point of their new announcement. And really the biggest thing about it for me is, yeah, showing that AI is here to stay, because Apple has incorporated it into so many of their products. And also, I think the privacy stuff was really, really interesting. They knew that privacy is obviously a huge concern, like that's not a secret, but I think their approach of having a lot of on-device models is actually, from an AI and technical perspective, a pretty interesting thing.

Speaker 2:

Sorry, explain that for people who might not know.

Speaker 1:

So we talked about the sizes of models, small, medium, and large, and the larger models can do more reasoning.

Speaker 1:

The smaller models can do more narrow tasks, and so what Apple has done is a capability-privacy trade-off. What they're saying is, for a lot of the things you do, we're going to run a very small model, narrowly trained to do very specific tasks, like maybe edit my email or edit my photo on my phone, because that can run locally on your phone or locally on your laptop. They don't need to send that data out to the cloud, which means it's, by nature, more secure because it stays on your device. And then, for other more complex queries or requests, they will be adding a lot of extra security to what we already see as a fairly secure cloud, so the data you might be sending out has more protection. Which, actually, now that I think about it, is kind of funny. Are people really sending even more private stuff to Apple Intelligence than they were sending before? Because I'm pretty sure people have a lot of very private stuff on their phones and devices, I would say.
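The on-device versus private-cloud split described here can be pictured as a simple router. This is a toy sketch of the idea only, not Apple's actual implementation; the task names and the routing rule are hypothetical illustrations.

```python
# Toy sketch of the routing idea: simple, narrow tasks stay on-device,
# everything else goes to a larger model behind extra cloud security.
# Task names and the SIMPLE_TASKS set are made up for illustration.

SIMPLE_TASKS = {"edit_email", "crop_photo", "summarize_notification"}

def route_request(task: str) -> str:
    """Return where a request would be handled in this toy model."""
    if task in SIMPLE_TASKS:
        return "on-device small model"    # data never leaves the device
    return "private cloud large model"    # more reasoning, more security layers

print(route_request("edit_email"))      # on-device small model
print(route_request("plan_itinerary"))  # private cloud large model
```

The point of the split is exactly the trade-off mentioned above: the narrow on-device model buys privacy at the cost of capability, and the router decides which side of that trade-off each request lands on.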

Speaker 2:

That is a good point I agree.

Speaker 2:

Obviously, more security is better. But Apple is such a public-facing product, and people who might not be using AI in any way whatsoever now have this opportunity to use it, and many are really wary of it. There was a survey at the end of last year on general public thoughts on AI, and, interestingly, only 10% of respondents felt very excited about AI and wanted to use it, while 52% of respondents said they felt more concerned than excited. So if you think about that, people hear this announcement and think, oh wow, if it's on my phone now, what does this mean? And I think you made a point that I hadn't considered, which is: if we already trust all this information to Apple within their standard cloud, then why is this different? My thought is that it's different because now there's a partnership with OpenAI, and do we trust OpenAI to actually be using the information, or not using the information, the way that it's intended?

Speaker 1:

We'll see what really happens. Elon Musk, for one, clearly doesn't trust it. I think he said that if it's incorporated at the device layer or something like that, he will ban all Apple devices from Tesla factories, etc. I mean, Elon also has a vested interest, right? He has a competitor in his own foundation model, called Grok.

Speaker 1:

But more on the privacy aspect, going back to you saying people don't trust it: I would say, as a general consensus, if you think of the everyday person, would they be excited about AI? Does it really improve their day to day? I don't think they have a particular reason to be excited, but it's very easy to scare people about how AI is going to make things terrible for them. Hey, your face is going to be used in some commercial, your voice is going to be used without your consent, people are going to make it sound like you said something you didn't say. So for the everyday person, I can definitely see how it's just easier to scare people than it is to excite them, because they're probably like, I don't really need an AI-generated version of myself to make me any happier. That's not gonna help me.

Speaker 2:

Yeah, I think it is, like anything, a cost-benefit analysis: what's the trade-off? For a lot of millennials, we were the first generation willing to trade off privacy for convenience. Take even something as simple as DoorDash or Uber. You're getting into the car of a stranger and giving them information. Generally they're picking you up somewhere where you live or work, a place that you go to often, and you're engaging, and then you're being dropped off. And, to be fair, these apps have safeguards in terms of how much each person can see about the other party, but it's not foolproof. As a generation, we just got comfortable with this idea of, oh, this thing is convenient, I'll trade off my privacy or potential security for it. And then, if you go to the next generation, Gen Z, and beyond that, Alpha, who are now kids, preteen and younger.

Speaker 2:

Yes, they know nothing else. That is their day-to-day norm: I go online and it's an implicit transaction of personal data and privacy for convenience and entertainment. So I think that the backlash we potentially will see with AI, or with Apple Intelligence, is going to be from the older generations, who feel like it's invasive and don't want that on their phones. And, like you said, you'll have a digital version of yourself out there that you won't even know exists, and I think people get freaked out by that. And the use cases that Apple presented for Apple Intelligence, even for me, I'm not sure I find them all that useful. So, from a consumer standpoint, I don't know what the play is.

Speaker 1:

I have so many random thoughts. One thing I was thinking about as you were saying that is, from a consumer standpoint, will AI make our lives better? And then I started thinking, like many other things in the past, does the internet make our lives better? And what does "make our lives better" mean, right? So here's just a small example. You're talking about DoorDash. Are our lives better because someone delivers food to our door? I could make the argument that, well, no. I mean, sure it's convenient, sure, I get to sit at home on my couch while someone else brings my food to me, but am I happier because of that?

Speaker 1:

I actually saw an article about the service industry eroding the fabric of society a little bit. So, you know, we stay at home, food gets delivered to us, we work from home, which is obviously more of a COVID thing, but we're turning into a society of the served and the servers, right? And then two, when I'm not going out into society and I'm not picking up my own food. For some reason, this is the example I have in my head: when I was a kid and we would order takeout, first of all, you would call it in, so you'd have to interact with somebody, which I think is good for us. And then, second of all, you'd go pick it up, and the places we would pick it up were, like, you know, Pizza Hut, or this local Chinese takeout restaurant, and the guy at the Chinese takeout restaurant, he owned that restaurant for 20 years, and so my dad would go in and say hi to him, and they were friends through that interaction. I think those interactions are healthy and helpful for us.

Speaker 1:

And so, I don't know, I guess in some sense I'm feeling like, oh, DoorDash doesn't make our lives better. I would say it makes them more convenient, but I wouldn't say it makes us better. I do think the internet generally has made our lives better in a bunch of other ways, but that's just one thing that comes to mind when I think about: is AI going to make our lives better from a consumer point of view? I think it will in some ways. In other ways, it doesn't have to make our lives better or worse, but the convenience is just going to become the default, right? I don't know if there are any thoughts or reactions to that.

Speaker 2:

Well, I think that for me, DoorDash makes my life better, because LA traffic is nuts and it could take me an hour to go and pick something up and bring it back.

Speaker 2:

That's true. So the one-way drive, especially if I can make it easier for the driver by having them come in reverse traffic order, is definitely a big time saving, which, yes, is convenient, but also is, to me, an improvement, because in that time I can optimize for other things that I want to be doing or need to be doing. Now, the Apple Intelligence announcement: as we just said, it's a partnership with OpenAI. So what does that really mean?

Speaker 1:

Well, here's what it doesn't mean: it actually doesn't mean that one is paying the other, which is really interesting. It was reported that Apple is not paying OpenAI for use of ChatGPT. And one thing we didn't mention earlier is part of this partnership: we talked about the on-device models and how some stuff goes to their private cloud, but for some of the queries people are trying to do, they're going to send the data out to OpenAI. And yeah, I think it's fascinating, because traditionally, in a customer relationship, you would think, oh, they're using ChatGPT, they're using OpenAI's services, you would traditionally pay them for that service. And what's really interesting is what they seem to have negotiated: that the distribution OpenAI will get by being embedded within Apple products is payment in itself. I think that's really interesting.

Speaker 1:

I'm always fascinated by these new business models and people trying to figure out who has the leverage, who has the upper hand. Apple, huge behemoth, right? They clearly should have the leverage. But OpenAI is the darling of the technology world right now. I mean, sure, some people don't trust them, or whatever. But their revenue just doubled to, I think, $3.4 billion, which is insane.

Speaker 2:

And just to add a figure there that came out last month from Emergence: almost 70% of Gen AI companies use the OpenAI API.

Speaker 1:

Yeah, and so it's crazy.

Speaker 1:

And so in some sense you could say, oh man, I could see how OpenAI might have some leverage over Apple and say like, hey, use our product.

Speaker 1:

But then one of the other challenges that OpenAI has and why that number is 70% and why that number probably was 90% last year and why it might be 50% next year, is that foundation models are starting to commoditize.

Speaker 1:

So we've talked about, in other episodes, things like Anthropic, Perplexity, Mistral, etc. And so, when you're talking about leverage and negotiations and stuff like that, having a next-best option is always a powerful piece of leverage. Apple could say, hey, we want to use ChatGPT, and if Sam Altman says no, they say, okay, we'll find another model provider who wants this distribution. And just to put a number, by the way, on how valuable that distribution is in actual monetary terms: Google is paying Apple $18 billion a year to have Google be the default search in all of Apple's products. So if you go into Safari and you type in the search bar, it's going to do a Google search. In the same way, you can think of Apple distribution being worth $18 billion a year, which is pretty crazy.

Speaker 2:

Mm-hmm, and I would say, possibly even more with all the features that they're saying this partnership will provide.

Speaker 1:

Yeah.

Speaker 2:

Fun fact it costs OpenAI $700,000 a day to run ChatGPT.

Speaker 1:

Is the math on that $250 million a year?

Speaker 2:

That sounds right.

Speaker 1:

Pretty crazy.
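That back-of-the-envelope math checks out; starting from the $700,000-a-day figure quoted above, the annual total is just simple multiplication:

```python
# Sanity check on the quoted figures: $700,000/day scaled to a year.
daily_cost = 700_000                  # reported daily cost to run ChatGPT, in USD
annual_cost = daily_cost * 365        # 365 days in a (non-leap) year
print(f"${annual_cost:,} per year")   # $255,500,000 per year — roughly $250M
```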

Speaker 2:

And I think, as we've talked about before, the bigger the model, the more expensive it is to run. I think that in and of itself creates a very high barrier to entry for new entrants outside of the major foundation model providers, who are already big players in their own right: Google, Meta, Amazon, and beyond. Okay, so one thing I did think about as you were talking, on the consumer play, and it's a bit more niche, but something that came to mind was how this Apple Intelligence integration can really be helpful from an accessibility standpoint. For people using their devices who have accessibility restrictions, how much more can they get out of their device and, by default, their day-to-day?

Speaker 1:

I'm extremely excited about AI and accessibility. You know that my parents are deaf, and so I think there are a number of AI apps and services already targeting the deaf and hard-of-hearing community, everything from things that read people's lips, or hear what's being said and translate that into text on a phone, to the other way around: someone in the deaf or hard-of-hearing community doing sign language and that being translated to text or to audio for someone who's hearing. And I think that translation applies not just to the disability community but also to standard language translation, right? Like when I moved out of my apartment recently in San Francisco, I had hired movers, and I didn't realize the movers primarily spoke Mandarin. Thankfully, my girlfriend speaks Mandarin, but if she didn't, it was gonna be much more challenging. And I think that's one area where AI can really increase satisfaction and happiness: the ability to communicate with other cultures, the ability to be less afraid traveling in foreign countries where you don't speak the language, to widen people's horizons.

Speaker 1:

I think that could be really cool. But yeah, I do think the accessibility aspect is going to be super neat. There's an app Apple has touted a couple of times called Be My Eyes, which is an app for the visually impaired: if they need to see something, they can point their phone at whatever it is, and then somebody who's signed up on the other side to be a Be My Eyes helper can basically say, hey, here's what you're looking at. You're looking at a table, and here's what's on the table. Or you're looking at a door, and here's what the door says, or here's what the sign says. And I think those use cases are really neat for the accessibility community.

Speaker 2:

Yes, I would love to do a full episode on accessibility and AI. And what you just said about the person on the other end of the Be My Eyes app: well, now that can be AI. Maybe not today, but within the short term, that can be AI responding, so that you don't have to depend on somebody being available on the other side. You can just have an AI-generated response built into the app. And I just saw a video a couple of days ago.

Speaker 2:

It was showing a robot standing in front of a counter, and in front of it were an apple, dishes, a dish rack, and some trash. A person interacting with the robot said, can you give me something to eat? And the robot picked up the apple and gave it to the person, and the person said, how did you know that was the item? And the robot said, well, I scanned everything in front of me and this is the only item that's edible. And then the person said, okay, what else do you see? And it said, you know, dishes and trash.

Speaker 2:

So the robot takes the trash, throws it away, cleans everything up, and then the video basically ends. And I think from an accessibility standpoint, it's showing us that that world is not far off. Obviously, right now that robot's extremely expensive and not going to be in all of our houses, but one day, who knows, maybe it will be. From what you said earlier, when you talked about the small models that Apple will use, which will then be on-device only: what's the difference between those very small models and what we've talked about in the past about agents?

Speaker 1:

Nothing really, to be totally honest. There are two things I would say from a technical perspective, trying to explain the best I can. A small model could in theory just be a small foundation model, like a Llama 7B or 1B model, meaning 1 billion parameters. So you could have a general foundation model that is a small model for a general set of tasks; it's not going to be particularly good at anything, but it's also going to be pretty small.

Speaker 1:

Turning it into an agent could mean one of two things. It could mean that you're training the foundation model itself to do certain tasks. The other thing it could mean is that, when you create an agent, you're often giving it what's called a guideline prompt: you take a general model and you give it an instruction. You'll see these system instructions that say, like, you are a very helpful assistant, right? So you can give it a prompt, or a set of instructions, on top of the foundation model to describe how it should act or behave. And then you can also give it additional information, additional context, either in that prompt itself or through what we would call a RAG system, and it can use that context to take on an agentic personality.
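A minimal sketch of that guideline-prompt idea, assuming a generic chat-style message format (the instruction text, the retrieved context, and the helper name are made up for illustration, not any particular vendor's API):

```python
# Sketch: a general model becomes an "agent" by prepending a system
# instruction (the guideline prompt) plus retrieved context to the
# user's query. The resulting list is what an agent framework would
# hand to whatever chat-completion model it uses.

def build_agent_messages(system_instruction, retrieved_context, user_query):
    """Assemble the message list for one agent turn (hypothetical helper)."""
    return [
        {"role": "system", "content": system_instruction},
        {"role": "system", "content": f"Relevant context:\n{retrieved_context}"},
        {"role": "user", "content": user_query},
    ]

messages = build_agent_messages(
    "You are a very helpful assistant that edits emails concisely.",
    "The user's drafts should keep a friendly but professional tone.",
    "Tighten up this draft for me.",
)
for m in messages:
    print(m["role"], "->", m["content"])
```

The same underlying model behaves like a different agent purely because of what is placed in front of the user's query, which is the distinction drawn above between training the model itself and layering instructions and context on top of it.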

Speaker 2:

Got it, okay. And you mentioned before that Apple moving very quickly is against their typical nature. I would argue, and I think there's been a lot written about this over the last year or so, that Apple moved slowly, and that they were getting some flak because their competitors were moving very quickly in the AI space and it seemed like Apple hadn't done much.

Speaker 2:

We don't know, obviously, what was happening behind the scenes, but outwardly it looked like they weren't investing heavily in AI; they were staying cautious, not sure what direction things were going to go in. And compared to Apple's early introductions of new technology, I mean, Siri is a perfect example, where they were cutting edge and the leader in the space, for them to sit back in that way, I think, raised a lot of questions. So now, to see them go all in with OpenAI does show that they understand that AI is here to stay and that they're ready to go with it. I don't know the exact specifics of the OpenAI partnership, but it sounds like it might not be an enduring one. So this is what they're starting with, and there is opportunity down the road for Apple to navigate across different foundation models. Thoughts on that?

Speaker 1:

Yeah, maybe it's because we're business school nerds, but I get a kick out of the big tech frenemy environment: who's friends with whom at any given time, who's Microsoft hanging out with, who's Apple hanging out with. So yes, they announced their partnership with OpenAI, but they also said, and I'm sure this was cleared with OpenAI ahead of time, that they were looking into partnerships with other providers as well, and I think that's probably smart on their part. So yeah, I love the drama. It kind of makes me think of, what's that show? Why am I blanking on the name of that show, the one that everyone loved?

Speaker 1:

Succession! It's like a Game of Thrones, or like a Succession, right? It's our Silicon Valley version of Succession. And then the other thing, in terms of did they move fast: I guess Siri has been pretty rough for a while now. It could definitely use some help, and it was one of the first AI assistants, so they were way ahead of the game back then. And largely, from my point of view, Siri hasn't gotten much better. I suppose it probably has from a technical standpoint, but when it comes to me, all I ask Siri to do is play a podcast, play a song, and call somebody I know, and those are the only three basic instructions I trust it with. So I think they've been slow on the wider arc of AI.

Speaker 1:

But when I say they move fast: when we started our company, about a year ago now, actually, that was when we were saying, oh, let's use this new GPT technology that's just come out in the past few months to really do something.

Speaker 1:

So, I forget when GPT-3 first launched, but I think around last spring is when it started getting some hype in the tech bubble, and then last summer and fall is when it started getting hype in the wider consumer world. And now Apple's already doing an announcement less than a year later, right? So I think it's fast in that sense. And I think they're able to do that partially because OpenAI, by figuring it out, putting out ChatGPT, and showing how LLMs work, basically put out what we would call a reference architecture for how to build a foundation model. Everyone kind of reverse-engineered it, and once they saw one working approach, everyone copied it, which is why you see Anthropic, which is why you see Mistral, which is why you see Cohere, everybody being able to move quickly. Once someone found the one reference architecture that worked, everyone has basically been copying that same approach.

Speaker 2:

Mm-hmm, that makes sense. And since you think Siri is not so great now, I don't feel like such a dinosaur for having Siri turned off altogether.

Speaker 1:

Yeah, I mean, Siri's not great. And Google, I mean, we love our Google Homes, and honestly, I try to talk to mine all the time and I'm disappointed like 70% of the time. I've actually already gotten to the point where I ask it questions as I would ask GPT-4, expecting it would know something, and so many times it says, I don't know the answer to that, but I found a Google search. And it won't even tell me the results of the Google search; it just shows them to me on this little device. And I'm like, this is so painfully bad compared to where GPT-4 is at right now.

Speaker 1:

Wait, so then, why do you love it? It's convenient, it's in my home, it's in my kitchen. Honestly, this is funny: Google Home should be this great AI device, and the thing I love most about it is that it does a good job of picking out photos that I want to see. It'll show me pictures of our dog or my nephews or something like that. So the AI it has, to select photos, is the most meaningful thing that I use it for. It's a glorified rotating digital picture frame.

Speaker 2:

You're making me think. Am I blanking, or does Apple really not have a home device?

Speaker 1:

They have the HomePod, which never really took off. And the other thing I think is actually funny, and I'm actually surprised, though they're probably working on it: I thought pretty early on that OpenAI was going to come out with a home device. I don't know if they care about the hardware world, but if you really wanted to understand what people want to ask on a daily basis, the easiest interaction pattern is just voice: hey OpenAI, hey Google, hey Siri, whatever. So I have to imagine they've considered a device at some point. I don't know if they're going to pursue that path or not. They seem to want to stick to more of the foundational stuff, be the engine that powers all these things, and let other people who know how to build devices build devices, which I think makes sense. But yeah, a few years ago these home devices, the Amazon Echo, Google Home, were all the rage, and then they kind of faded out.

Speaker 2:

My understanding is, the people who wanted them bought them, they liked them, they integrated them, they used them. But a lot of people, for the same reason we talked about earlier with fearfulness around Apple Intelligence, don't want those devices in their homes. So I don't know that there's much of a play on the hardware side itself, for people to go and buy new devices and for that market to grow in that sense. If anything, I would guess there are software updates, using these AI features that we're talking about that they're using for their other products, updating that software there. So what do you think?

Speaker 1:

Yeah, I'm trying to imagine what the next device would look like. It's easy to imagine Apple might try to re-release a device now that has ChatGPT behind it, and I could see that happening. But I'm trying to imagine what comes after that. It's funny, in the Silicon Valley tech bubble you'll see all sorts of devices: little personal assistant devices that ride around your home, or the ones that have a screen and a camera on them and, pre-COVID, were riding around the office, and it was like, oh, you could act like someone's in the meeting when they're actually virtual from home, and silly things like that. I don't know what the next really breakthrough smart device in the home will be. I'm imagining a Jetsons-like future in which you've got the robotic maid that can do all these things for you. I do think there's a ton of really cool advances being made in robotics right now.

Speaker 1:

Like you said, the thing about the robot picking up the apple and so on. I just don't know how far off that is from reality, to be in our homes as consumers. So I wish I had a better answer for that. Here's the answer I'll give: the best hardware device that will change the world for AI is driverless cars. I have been using them quite a bit in San Francisco. We use Waymo; we were using Cruise. I didn't really have a preference, but Cruise got kicked off the streets, and Waymo has been better with security and better with safety, and I can genuinely say I prefer Waymo to Uber or Lyft or taxi services. I actually feel safer in it.

Speaker 2:

Yeah, I was going to say, I actually haven't used Waymo yet. Do you download the specific Waymo app and then order a ride that way?

Speaker 1:

It's a very cool experience. You download the Waymo app and sign up for the waitlist. I downloaded it pretty early, so fortunately we were able to get off the waitlist when they did one of the big pushes in San Francisco. And yeah, you call it, it shows up to your front door, and it's got a little color on the car so you know it's yours. I think it even has your name on the car, or you have the name of the car in your app, or something like that. But it's very cool.

Speaker 1:

You unlock the doors on your phone. So you hit the button on your phone when you're about to get in, the doors open, you get in, it tells you to put on your seatbelt, and off you go. And within about two minutes you will forget that you are in a driverless car. If you're sitting in the seat behind the driver, you would think a very short driver you just couldn't see was in front of you, and they were the safest driver you've ever driven with.

Speaker 2:

You answered my question. I was going to ask you what seat you sit in.

Speaker 1:

You know what's funny? We sit in the back seat, and there's a little screen that kind of tells you to sit in the back seat so you don't mess with the wheel and stuff like that. But I think the funniest thing is when you go to do a behavior that you're used to from Uber. There was a weird turn by my house, and the first few times I would ride in it, I would catch myself kind of starting to be like, oh, you need to make sure to do this turn.

Speaker 2:

But it knows to do that turn.

Speaker 1:

Like, there's nothing, there's no one to tell, right? So it's really funny, but it's just an incredibly safe experience. And honestly, tens of thousands of people are injured or, unfortunately, die from car accidents every year in the United States. So that's the hardware device that excites me the most: taking the human element out of driving. Not because I think... well, maybe because I do think humans are bad drivers, right? Like, I'm a car guy.

Speaker 2:

No, but I was going to say, I can't believe you. I love driving. I don't want to not drive at some point. I mean, I want to have the option to not drive, but I don't want it to be the standard that you can't drive.

Speaker 1:

That's right, and so it'll become like a hobby, right? It'll be like riding horses. No, I actually believe this. People used to have to ride horses for transportation. Now they still ride horses, but they get to ride when they choose to and when they want to, and they do it for fun. And that's what driving will be in 20 years, right? We'll be able to drive on a track where it's really fun, but we're not going to have to sit in LA traffic every day when we don't want to.

Speaker 2:

I will say, I've seen Waymo around LA a little bit, not a ton, but I've seen it a little bit. And actually it's funny, when it's in a situation where, say, it's trying to take an unprotected left turn and I'm the car on the other side coming toward it, sometimes I think, should I stop and let it take the turn to be nice, because obviously it's a driverless car, or should I keep going so it gets trained on what LA driving is really like?

Speaker 1:

They're funny. I mean, when you watch them drive, it feels like they've got a little bit of a personality. It's like a little bit of a student driver, so it's really fun.

Speaker 2:

I think so too. And have you seen, I forget the company that makes them, the little robots that do food delivery?

Speaker 1:

Yeah, the ones that scurry around the streets and stuff.

Speaker 2:

Yes, and they have the little pole and the light, and they wait to cross the street, and then they cross the street. And isn't this a funny thing? With the Waymo cars and these robots, I feel some type of affinity or affection toward them, as if they're people who are just learning something for the first time, and I want to help them out. So when I see one of those little robots waiting to cross the street, I pause so that it can cross. I say, oh, I don't want it to get confused and not know that it can go, so I'll just wait here in a way that makes it clear. And I feel the same with the Waymos. Is that a weird human condition, or a weird psychological thing, where we're attributing human emotion to these robots?

Speaker 1:

There is a word for it. I don't think it's just personification; I think there's another word for it as well. But I think it's two things. One, I think it is a natural thing, when we see something, I don't want to say struggling, but when you see something kind of not doing what it's supposed to do, to personify it and be like, oh, can I help it in some way? The other thing, and this was actually Google's genius back in the day when they first did their driverless cars, I was actually interning at Google at the time, and we would see them driving around Mountain View. These were before the Waymos; they were like these little buggies. Google did a really good job of knowing that in order for us to trust AI, we need to kind of personify AI in some ways. It makes it easier for us to trust these AIs when we can attribute a personality to them.

Speaker 1:

So I really think they were very smart early on in designing these things in such a way that they were very approachable. They would give them little names. They would have very warm, glowing colors on the car. They were very rounded and not angular. You could imagine a world in which they could have made them very scary looking, with devices hanging out. Some of the test vehicles for some of these other companies that drive around San Francisco, they've got stuff hanging off like crazy, right? They've got these whole rigs, and they're not nicely designed yet.

Speaker 2:

No one feels bad for those. It's like the Cybertruck. I feel like I see the Cybertruck and I forget a real person's even driving it, because it just looks so menacing and scary.

Speaker 1:

Exactly. If you saw a Cybertruck driving around, you wouldn't be like, oh, let me help that thing. You'd be like, oh, that thing scares me, right?

Speaker 2:

Get away as quickly as possible.

Speaker 1:

Exactly. So I think the human-centered design, or just the design thinking, that went into these things was actually quite smart on their part.

Speaker 2:

It's so interesting.

Speaker 1:

I never thought I would have empathy for a robot, but I think I do. Here's a question: do you think they have empathy for you? Oof.

Speaker 2:

I think that they are being trained to have empathy and show it as if they have it. I actually saw this crazy demo. I think it was yesterday. I was at a meeting for CEOs to learn about new AI tools that could help them in their businesses, and there was a demo of voice interaction. I forget what platform it was, but they were talking to the AI on the other side saying oh, I'm having a great day, how's your day? And the AI voice responded yes, also having a great day. What are you going to do today? And the person said oh, I'm going to the beach with friends. And the AI voice responded oh, going to the beach with friends is great. It's such a stress reliever. I hope you have a really fun time. And then the person said yeah, I do need to have a fun time. I've been feeling pretty low because my dog just passed away and it's been a tough time.

Speaker 2:

And you could hear the AI voice pause, and also shift its tone of voice, the same way you would as a person, and this idea that it could understand the information the human was saying to it. And it responded and said, oh, I'm so sorry to hear that, that's really sad news. And it said some other things, and it displayed empathy. And so I think, does it, as a robot, know what empathy means and feels like? No. But does it understand the context for when it should be empathetic and when it should respond in a certain way? I think it's getting better and better every day.

Speaker 1:

Yeah, I mean, here's something I'll leave you with, because I think we've been going for a little bit longer today. Why do you know what empathy is as a human, and why do you know the context in which to use it, and why do you know how to provide empathy to others? I would argue that it's mimicry on your part as well, and so why is it any different?

Speaker 2:

Well, maybe for some people it's mimicry. For me it's an actual feeling deep inside. I think that's why I can do the work I do as a coach; you have to have real empathy. And honestly, I think most humans, and this is a whole other conversation, but most humans really do have empathy. It's a natural biological response. And if you look at the DSM, the manual for mental health disorders and behavioral personality disorders, one of the ways they diagnose sociopathy and psychopathy is by a lack of empathy, indicating that the norm for humans is to have it and that it's an outlier not to. So I think there's something there.

Speaker 1:

That was a better answer than I expected. I was hoping to, you know, stick you with a tough one, but I think you gave a good answer. And that's part of what made me ask that question: there's a lot of these thought exercises happening right now with AI. Like, what is consciousness is a big question being talked about by certain researchers, and, you know, what is empathy? Why do we learn empathy, and why is it different if they learn empathy that way, and things like that. So I think they're just really fun questions to ask, and really fun questions to debate, you know, over a few drinks or over a podcast.

Speaker 2:

I love those questions, and for me they're really at the forefront when I think about, obviously, being a founder coach working with clients. There is a human element to that conversation; it's required to really understand the person on the other side. I also think, as you see more and more of these AI-enabled coaching products coming out, what does that mean for that engagement?

Speaker 1:

I can't wait for your first AI client, for when you're coaching a fully AI being. I'll create a founder agent and you can be its founder coach. Let's do that. Yeah, perfect. All right.

Speaker 2:

Talk to you soon. All right, see you then.

Speaker 1:

Bye, thank you.
