The AI Coach

Feeding the Beast: AI's Power Hungry Machines

Danielle Gopen and Paul Fung Episode 8


We jump into OpenAI's new GPT-4o mini model (more efficient, lower cost), which brings us to the economics and cost drivers of AI. We then explore AI's energy consumption and theorize what can be done about it regarding infrastructure and government policies. Bonus question: Is AI the problem or the solution?

We love feedback and questions! Please reach out:
LinkedIn
Episode Directory


Mentions of companies, products, or resources in this podcast or show notes are for informational purposes only and do not constitute endorsements; we assume no liability for any actions taken based on this content.

Speaker 1:

Hi Paul.

Speaker 2:

Hello, happy Friday.

Speaker 1:

Happy Friday. How are you?

Speaker 2:

I'm good. I don't think people realize that we record on Fridays, but I'm good. I'm good and also a lot of big news this Friday we're going to talk about.

Speaker 1:

Yes, you're right, we have never said that we record on Fridays, but at the end of the week we have a lot of news from the week. So, yes, in particular this week, big updates. Let's start with OpenAI's new GPT-4o mini. So, as the technical person, I'll ask you: why would I use the mini over the regular? Give me the breakdown.

Speaker 2:

Yeah, that's a great question. So another week, another model release. In our first couple episodes we talked about small, medium, and large models. I've been able to spend a lot of time today digging into 4o mini, and this is their new version of a small model, if you will. So previously, you know, we had GPT-3.5, and then we had 3.5 Turbo, then we had 4, then we had 4 Turbo, then we had 4o. And so 4o has been their flagship model since their spring preview.

Speaker 2:

And so I think they're basically saying, hey, this is 4o, but we've shrunk it, so it's going to be faster, cheaper, and more performant. So my educated guess here is that it's probably a little bit lower reasoning level than 4o, but better than 4 Turbo, probably. And a lot of people were still using maybe 3.5 Turbo if they really wanted really cheap, fast model performance; this is 60% cheaper to run than GPT-3.5 Turbo. So they went with a cost efficiency play here, but it's also faster. They look at output speed, which is measured in what's called tokens per second. The tokens are kind of the words that are being returned to you, and it's returning 200 tokens per second, so 2x faster than previous models. And then they've got some reasoning numbers. They say it scores 82% on MMLU, which is a benchmark that people use. So they're saying this is higher reasoning than Gemini 1.5 Flash and Claude 3 Haiku, which are the other kind of major small models.

Speaker 2:

So what I would say is, you as a consumer using ChatGPT largely don't have to worry about it. You should just use what ChatGPT uses by default. This is more of a thing for developers who are building apps to worry about, to say: hey, can I run the same tasks that I've been running, but get faster responses, or do this more cheaply by using this new smaller model? I actually love the branding in this world. I think it's really funny. I guess it's akin to auto branding, or maybe pickup truck branding is a better comparison. You've got the compact pickup, the mid-size pickup, and the large pickup, and this is their compact pickup, right? They're saying: hey, we've got a new version of our compact pickup truck out there.

Speaker 1:

So I like that. It's saying you can still have the same utility from this, but in a way that's maybe a bit more manageable. And I think the point you made about speed is interesting. They haven't released the parameter size of the model, but we've talked before about how the more parameters a model has, the slower it runs and the more time it takes to generate output. So, from a developer perspective, you go and you use the OpenAI API. How does that work exactly? Do you get an option as to which GPT you connect to? And now, with this being a 60% cost savings, does that mean developers are paying less if they're choosing to use the mini model? Maybe some information would be helpful.

Speaker 2:

That's exactly right. So when we use the API, and let's use OpenAI's API as an example, we tell it which model we'd like to use. When you send it an API request, you actually include in the request a little string that says GPT-4 Turbo or GPT-4o; the literal string you send, the actual characters, is something like "gpt-4o" in quotes. And so if we choose to send it a 4o mini request, the tokens are aggregated in the OpenAI workbench, and we can see how many tokens we're using for each of the different models, and the tokens have different costs associated with them. So if we send it a request, it's billed at the 4o rate. They say: oh, it's this many input tokens, so whatever we're sending to it. If I say "write me a marketing headline" or something like that, the number of tokens in my request, the prompt, has a cost associated with it. And then there's oftentimes a different cost for what are called the output tokens, the response it gives back to me, and those have their own cost associated with them. And so that's the setup when they release a model like this 4o mini.

Speaker 2:

Developers can then test their current application on a smaller model if they want to try to reduce their costs. Another thing to consider is that the faster it is, the better your user experience is. So if I'm asking a question, the faster it responds, if there's no lag, that's a better user experience. And what I would add on to that, and I'm rambling a little bit here, is that the faster something responds, the more use cases it unlocks. So, for example, there are real-time AI use cases where, let's say, I'm building an AI app that trains salespeople to give the right responses in real time. If I'm on a sales call, the faster the model is, it could be popping up these real-time bubbles that say: oh, here's some objection handling you can do. So there are use cases that get unlocked by the faster models.
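For developers following along, here is a minimal sketch of the request flow just described, using OpenAI's Python client. The model string picks the model on each request, and input and output tokens are billed at different rates; the per-token prices below are placeholders for illustration, not current list prices.

```python
# A minimal sketch of per-request model selection and token billing,
# as described above. Prices here are placeholders, not real rates.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # the literal model string selects the model
    messages=[{"role": "user", "content": "Write me a marketing headline"}],
)
print(response.choices[0].message.content)

# Input (prompt) and output (completion) tokens are billed separately.
PRICE_PER_INPUT_TOKEN = 0.15 / 1_000_000   # placeholder $/token
PRICE_PER_OUTPUT_TOKEN = 0.60 / 1_000_000  # placeholder $/token
usage = response.usage
cost = (usage.prompt_tokens * PRICE_PER_INPUT_TOKEN
        + usage.completion_tokens * PRICE_PER_OUTPUT_TOKEN)
print(f"~${cost:.6f} for this request")
```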

Speaker 1:

Okay, super interesting. And then, for those of us who aren't as technical, when you say that you send the request for which model to use, does that mean that every time you're connecting to their API you're saying which model to use? Or, upfront, do you set it up so that you're connected to one model and that's what does everything as you go?

Speaker 2:

Yeah, so it's every time you send it a request, and sometimes you will send a request that has your chat history in it. You can either send it a blanket, boilerplate, standard first request, or, if there is some chat history associated and you obviously want to maintain that history, you can send it the history in the request as well.
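As a rough illustration, and assuming the same client setup as the sketch above, resending prior turns looks like this; the conversation content is made up:

```python
# Each API request is independent, so to continue a conversation you
# resend the prior turns in the messages list. Content is illustrative.
from openai import OpenAI

client = OpenAI()

history = [
    {"role": "user", "content": "Write me a marketing headline"},
    {"role": "assistant", "content": "Unlock Growth in One Click"},
]
history.append({"role": "user", "content": "Make it punchier"})

response = client.chat.completions.create(
    model="gpt-4o-mini",  # the model is chosen again on every request
    messages=history,
)
print(response.choices[0].message.content)
```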

Speaker 1:

So theoretically, there are developers out there who see this release and say: fantastic, for the things that we're doing we really only need the mini model. And they can now go and request the mini model for everything from here on out. Does that mean they essentially just got a 60% cost savings on that part of their development costs?

Speaker 2:

It does. There are some catches here, right? One is that not every task is suitable to run on a smaller model. If you say, okay, the complexity of the task I have is a 5 out of 5 or a 10 out of 10, then you're probably going to want to stick with the more complex models: the 4o's, the flagship models, Anthropic's Claude Opus, things like that. But if they're simpler tasks, then yes, you can migrate them down to smaller models and save costs. There are also some challenges there.

Speaker 2:

Can you just take the prompt verbatim and drop it in? Technically you can, but because it's running on a smaller model, oftentimes you want a little bit more elaborate instructions. So think about it this way: if I were to give a task to an experienced software developer, and let's say that's GPT-4o in this case, I can give it fewer instructions. If I'm going to give that task to an intern, I want to give more elaborate instructions. That's GPT-4o mini in this case. And so sometimes it's as easy as taking the instruction set, the prompt, and just dropping it onto the smaller model.

Speaker 2:

Oftentimes you actually might want to optimize the prompt that you've written to have more explicit instructions so that it runs more accurately on the smaller model. And so there's this whole question around how you test against new models, and, you know, if a new model is coming out every two weeks, should we be testing every two weeks? That's kind of insane. These are some of the things my co-founder and I are actually looking into, which is why I like talking about this topic.
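As a purely illustrative sketch of that "intern needs more instructions" point, here is the same hypothetical task phrased two ways; the wording is ours, not from any provider's documentation:

```python
# The 'experienced developer vs. intern' idea from above, in prompt form.
# A flagship model often gets by on the terse version; a smaller model
# tends to do better with explicit structure. Both prompts are made up.
TERSE_PROMPT = "Summarize this support ticket."

ELABORATE_PROMPT = """You are a support analyst. Summarize the ticket below.
- Output exactly 3 bullet points.
- Bullet 1: the customer's problem, in one sentence.
- Bullet 2: what has already been tried.
- Bullet 3: the requested next action.
Do not add any other text."""
```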

Speaker 1:

It's really interesting, and what you just said about whether we should test every two weeks resonates, because I'm thinking: okay, if you're a developer and you're weighing ultimate best output against cost, how do you find the right balance? I assume it's a lot of testing to figure out which applications to use when and where, and which models suffice. At the same time, I wonder, for those who are maybe more cost oriented, does this create some type of adverse selection where, hey, you're not going to get the best output, but for the cost savings, you now have developers going in that direction?

Speaker 2:

Yeah, totally, and we definitely see that.

Speaker 2:

So the trade-offs developers are usually making when they're choosing models are primarily cost, quality, latency, and then sometimes security.

Speaker 2:

When I say security, there are instances where people say we only trust Azure right, or we'll only trust OpenAI, we won't trust some open source model.

Speaker 2:

But other than that, I would say right now what we see is a lot of people optimizing for quality, because they either have venture dollars or they got $100,000 in Azure credits, and so they're not worried about model costs right now. They can start off by saying: we want the best quality, no matter what. Now, an interesting dynamic that's actually playing out, and you're starting to see some articles talking about this, is that the margins of AI companies are significantly lower than the margins of traditional cloud software companies, which historically have very high margins, like 90% plus. So, from a finance perspective, some investors are questioning whether AI companies will be able to get their margins to look like software companies', and if they won't, that means valuation multiples will be different for AI companies than for traditional software companies. So there's some incentive for AI companies, as they scale, to try to reduce their AI costs, because it's affecting their margins, which ultimately affects valuations, fundraising, going public, and things like that.

Speaker 1:

Oh, really interesting. So it's almost as if, instead of being a pure software company, the AI companies have a hard product.

Speaker 2:

Yeah, in some ways, right. Because, like we talked about with accounting last week, human capital companies have lower margins; they're not just straight software margins. And what's funny is these AI companies, even though their margins are obviously better than human capital companies', are actually currently worse than cloud computing margins, because the cost of running these AI models is really high right now. The cost of training the models is high, so there's an upfront cost for OpenAI to spend hundreds of millions of dollars to train a model, and they need to recoup that cost somehow. And then, on top of that, processing power for AI companies is significantly more expensive.

Speaker 2:

So GPUs are very power intensive. There's a lot of compute required to run these AI models, and that is more expensive than the average cloud computing compute cost. And then there's also the scarcity of GPUs, right? The cost of GPUs is quite high because there's a scarcity of them right now, so that also drives up AI costs.

Speaker 2:

And so I think there are a lot of people who think that over time AI costs will come down. I think there has been a little bit of a trend there, but we're definitely in a period where there's a lot of free spending on AI. That's probably going to have to get reined in over time, either by, hopefully, costs naturally coming down, or by people having to be a little more conscious about how they spend on AI. Instead of just running all of their tasks against GPT-4o or the flagship model, what we're starting to see is a bit of a mixture-of-models approach, where people pick the right model for the right task, which is interesting.

Speaker 1:

I have so many questions. First, in no particular order.

Speaker 2:

Fire away.

Speaker 1:

My first question is, in this case, more consumer than B2B. If I'm using an AI-enabled platform that's connected to some API, how do I know that I'm getting a high quality output without knowing who they're connecting to and which models are being used when?

Speaker 2:

Yeah, I guess I would turn that back on you and ask: do you care what models they're using, as long as you're the judge of whether the output is high quality, right? If you say, write me a book, you get to read that book and decide whether you think it was good or not. And on one hand, I want to say: do you actually care? In the same way, do you care whether some app you're using runs on Google Cloud versus AWS? In theory, you don't care, right? It's a commodity to you. Now, you brought up an issue a few weeks ago that I think is pretty good, which is that what you might care about is the security of your data. So if you're interacting with that model, you might care whether they're using OpenAI versus maybe an open source model where the data could be more easily leaked, or something like that. And in that case, there is a way you could tell.

Speaker 2:

According to GDPR, the EU data regulation, software companies all started being required to report what are called sub-processors. A sub-processor in software is the apps or services I use to provide my service to you, the consumer, and usually that sub-processor list is required to be divulged. Now, when I say required, I think this is required by GDPR, so maybe software companies under a certain size aren't required to do it, but at some point software companies all start to report their sub-processors. So if you wanted to, you could Google the name of the software you use plus "sub-processors", and you'll usually find a list or a page that says: here are the apps and services we use to provide this to you. One of those will be the AI provider.

Speaker 1:

Interesting. My other question is about when you say costs are high and could potentially come down over time. What are the current factors keeping costs high right now, and what would be the drivers of them coming down?

Speaker 2:

Yeah, good question. Current factors that are keeping costs high right now: fewer players in the market. Hypothetically, if OpenAI were the only player in the market, they could charge whatever they want. Now that you see Anthropic, Mistral, and other players coming out, that can be a backstop against what OpenAI is able to charge. So basically: can they charge monopoly pricing or not? One of the things that keeps costs higher is that there are fewer foundation models that people trust right now, primarily OpenAI's and Anthropic's. As more foundation models come out and get trusted, this becomes more of a commodity and prices will come down. So that's one thing.

Speaker 2:

Another thing is the upfront cost required to train these models.

Speaker 2:

So right now, due to the way these models are trained, there's quite a high upfront cost. I actually don't know the numbers, but I think it's in the tens of millions, if not, for GPT-5, maybe even the hundreds of millions of dollars to train these models upfront.

Speaker 2:

Probably baked into that cost is the people and R&D cost, obviously, of running OpenAI, having the researchers and developers who are actually working on the product. But then there's a really high compute cost: ingesting enormous amounts of data, cleaning all that data, and doing all the compute and math required to build the model is extremely expensive. And built into that compute cost is a power cost, right? Compute means you're using someone's GPUs, and those GPUs are using power, so as oil prices or electricity prices go up, this gets more costly. And because there are so many AI companies now, high demand keeps electricity prices high. So supply and demand is driving that cost as well.

Speaker 1:

That's impacting even the general consumer and what we're now paying for electricity rates. Is that right?

Speaker 2:

Totally, yeah. So actually, two of the biggest technology breakthroughs of the past 10 years are LLMs and AI, and then cryptocurrency. And cryptocurrency and the blockchain are actually extremely compute intensive as well. So we have these two fairly widely adopted technologies that are extremely power hungry, and that's really keeping electricity costs high, which keeps the general cost of AI high as well.

Speaker 1:

And then something else that you didn't mention, but I assume is part of it, is NVIDIA, which everybody and their grandmother now knows about. So you also have a dominant player with high costs.

Speaker 2:

Exactly, and there are starting to be more specialist chip providers out there. And this is where, not being on the hardware side of things, I start to get a little out of my depth. But there's an inference chip provider called Groq, which is not to be confused with Grok by xAI, the formerly-Twitter company. This is Groq with a Q, G-R-O-Q, and they have extremely high speed inference chips. Basically, what that means is you can take an open source model and run it on Groq, so you're not running it on NVIDIA. So there are other chip providers that should change the price competition and economics as well.

Speaker 1:

And, theoretically, as AI becomes more and more of a mainstay, you'll see the landscape changing up a bit, I would think.

Speaker 2:

Yeah, and so I think the second part of your question earlier was what would drive costs down, and I think there are multiple things. On the foundation model side, more model providers coming out would certainly mean more competitive pricing. More chip providers coming out, and chips becoming more widely available as manufacturing catches up to the high demand, would drive prices down as that becomes more commoditized. And then there's a technological aspect: just the math. LLMs are one architecture, the large language model architecture, and there are other architectures being considered, and I believe some of them are more efficient. By being more efficient, they require less electricity or processing power to do the computations, so maybe it won't be as capital intensive to train these models in the future. Maybe if it costs $100 million to train a model today, with a new model architecture it would only take a million dollars to train, or something along those lines. So I think model architecture could be one of the biggest needle movers in terms of dropping costs.

Speaker 1:

That makes a lot of sense. And we started talking about this through the GPT-4o mini model, and to me, if it's already hitting scores of 80-plus percent on the MMLU for text and vision reasoning tasks, then it's safe to assume that even at that size, whatever size it is, a year from now it will be even better. And so I feel like the combination of the models getting better, plus changes in model architecture bringing costs down, is something we'll likely see in the next maybe one to three years. Is that fair to say?

Speaker 2:

So, and this is where I get again a little bit over my skis, but in semiconductors and chip manufacturing there was this thing called Moore's law, which is the observation that the number of transistors on an integrated circuit, so basically a semiconductor chip we use for computing, will double every two years with minimal rise in cost. They're basically saying chips are going to get faster every two years, but cost is going to stay the same. And I think everyone's kind of waiting to see what kind of law applies here with AI. Will we continue to get models that are 2x faster, 2x smarter, with minimal rise in cost? Will it follow Moore's law? Or will there be a different type of law or curve that we see on AI reasoning, power consumption, speed, et cetera?
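To put rough numbers on that doubling intuition, here is a toy calculation; it is pure arithmetic under an assumed Moore's-law-style curve, not a forecast:

```python
# If capability per dollar doubles every two years (a Moore's-law-style
# assumption), a decade compounds to 32x. Arithmetic only, not a forecast.
for years in range(0, 12, 2):
    print(f"{years:2d} years: {2 ** (years // 2)}x capability per dollar")
```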

Speaker 1:

Give us your prediction, and then we'll coin it Fung's law and you'll be famous.

Speaker 2:

My prediction? Oh man, what is my prediction on this? I mean, one thing that we have seen so far is, yeah, something akin to Moore's law. I don't know if it's every two years or what the interval is, but even look at GPT-4o mini: it's the smallest model OpenAI offers, and it's probably better than the best thing OpenAI offered 12 months ago, right? So you already are seeing faster, better, cheaper at a very quick pace. Now, I don't know enough to say where the bounds of this acceleration could be. At some point we're going to run out of power or run out of GPUs or not have enough, so I don't know where this stops. My prediction, and it's not a hot take at all, is that, yeah, it will follow Moore's law, and things will continue to be, every couple of years, twice as good at the same price or less.

Speaker 1:

That makes sense and I see potentially even where every two years it's twice as good, but maybe at half the price if things are moving quickly enough.

Speaker 2:

Yeah, it certainly could be. Actually, I need to study up on Moore's law a little bit. It's something I'm conceptually familiar with, but I would be really curious to understand the economics that go into Moore's law. Why does it actually happen? And what are the inputs that go into AI that would make it parallel Moore's law, or maybe make it different from Moore's law for some reason?

Speaker 1:

Yes, let's research that and we'll come back and talk more about it.

Speaker 1:

Something else that you mentioned a little bit ago that I want to come back to was electricity and how power hungry AI and crypto and blockchain are. That's something that's starting to get more attention, both from within the industry and from people who don't know much about AI but know that it takes a lot of electricity. And there's conversation around, one, obviously, high usage of electricity not being so eco-friendly, but two, being so dependent on this one input. And I would say, at a time when our national grid is pretty vulnerable and they're not doing much to strengthen it, from what I know, what do you see as potential alternatives to using traditional electricity as the main input? I mean, I'm a big fan of a portfolio of electricity inputs in general. I don't think we should ever be dependent on any one source of electricity, whether that's natural gas or hydro or solar or nuclear or whatever, but I'm curious to hear your thoughts on what the industry should be thinking about when it comes to electricity.

Speaker 2:

Yeah, that is a good question. I'm not an electricity expert, so I will opine; this is a place for us to share our opinions, some of them expert, some of them less than expert. So I'll throw some ideas out there. I'll start with something that's a little bit in the AI realm, which was interesting. A few months ago, OpenAI released what's called a batch processing API, the Batch API; it looks like it was released back in April.

Speaker 2:

The idea there is to be able to do batch processing overnight. One thing that reduces power consumption, or at least affects energy costs, is what time you consume that energy, right? If everyone's consuming energy in the middle of the day on a hot day, demand is extremely high, and so energy costs are high. A time when energy costs are low is the middle of the night, when less AC is running and fewer people are using microwaves or computers, et cetera. So being able to move some of your workload to lower demand times via this batch processing is one way that developers, at least, have control over getting more efficient energy usage and, for us, more efficient pricing. If we're running it as a batch, presumably overnight, hopefully it's cheaper because electricity costs are lower. So that's one way developers can help: run these things more at night, when power demand is low, which keeps energy prices lower.
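For developers, here is a minimal sketch of what that looks like with OpenAI's Batch API: you upload a JSONL file of requests and get results back within a completion window (24 hours as of this episode) instead of calling the API in real time. The file contents and task IDs are made up for illustration.

```python
# A minimal sketch of batch processing with OpenAI's Batch API, as
# described above. Request contents and IDs are illustrative.
import json
from openai import OpenAI

client = OpenAI()

# One JSON object per line; each line is a normal chat completion request.
with open("requests.jsonl", "w") as f:
    f.write(json.dumps({
        "custom_id": "task-1",
        "method": "POST",
        "url": "/v1/chat/completions",
        "body": {
            "model": "gpt-4o-mini",
            "messages": [{"role": "user", "content": "Summarize ticket #1"}],
        },
    }) + "\n")

batch_file = client.files.create(file=open("requests.jsonl", "rb"), purpose="batch")
batch = client.batches.create(
    input_file_id=batch_file.id,
    endpoint="/v1/chat/completions",
    completion_window="24h",
)
print(batch.id, batch.status)  # poll later; results come back as a file
```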

Speaker 2:

Ultimately, though, the other thing is trying to keep up with the power demands of AI, and listen, I'm not an expert here, but I think that we obviously, for a lot of reasons, need to start generating a lot more clean energy in an effective way. I guess where my mind goes, and I'm not even sure this is the right answer, is that obviously I would love if it were solar, wind, things like that. Clean energy, first and foremost, would be the best. If not, I could totally see us building a lot more nuclear power plants, because they produce a ton of energy.

Speaker 1:

I think nuclear is pretty clean, assuming it's contained. It's fairly clean, relatively speaking.

Speaker 2:

Yeah, exactly. Obviously the whole thing is that if it's not contained, it's a very dirty source of energy. So I could see us putting more in. And maybe you would know more about this than I would: what technologies are available to be scaled up quickly? Let's say AI demand blows up and we need to scale up electricity production as quickly as possible. I would imagine nuclear would be pretty high on bang-for-the-buck scalability. We know how to do it; we can spin up new nuclear power plants. It's probably faster than if we had to go mine a bunch more coal or drill a bunch more oil. I suppose we could do that, but nuclear seems more scalable from that standpoint. We could set up new power plants, and we know what to expect out of them in terms of power output. But again, this is way over my head.

Speaker 1:

So I would say coal at this point seems to be kind of a game-over situation. And again, these are all our personal opinions; we're just doing some theorizing.

Speaker 1:

I don't see a world in which we go back to coal and ramp it up. And I think building a nuclear power plant might be easier said than done; I'm not sure exactly what goes into building one and the infrastructure around it. In terms of production of something like natural gas, the way it's structured, at least here, we have the ability to ramp up if necessary. That's just a matter of volume, and essentially what comes out of the ground or what's transported or whatnot is all supply and demand.

Speaker 1:

It's pure economics. There's a lot more that's technically available that's just not being lifted, and if they needed to, they'd turn the spigot and much more would be coming. I don't think that's ideal for the very long run, but at least in a short-run situation it's a possibility. And then, to me, what I find most interesting is solar. We have parts of the country that are not very inhabited and have very, very high solar scores, so why would we not build there? And I'm not saying there's no impact; everything has an environmental impact, right, nothing is going to be completely zero. But creating these kinds of solar farms in those desert landscapes and having that be a primary source of energy, to my uneducated brain, seems like an avenue to go down.

Speaker 2:

Yeah, and I would love that. I want the cleanest, most available energy possible, and I think it's interesting. Maybe this isn't realistic, but there are some avenues my brain starts to go down around energy independence. One of the topics in AI these days is AI as something that needs to be considered in national security. Mostly what I'm talking about there is the idea of open source, and China or Russia being able to copy our AI. From an economic standpoint, if they can't copy it, then economically we have a bigger boost and can attract the best talent; but also, if they're able to copy it, they could spin up a bunch of AI agents that wreak havoc, AI hacker agents and things like that, or more destructive technology. But anyway, coming back to the electricity piece: if we want to stay ahead on AI, and if we're going to start increasingly relying on AI, which I think we as consumers and as businesses will, then we're also going to want to make sure we have the energy infrastructure to support our AI needs, and that that energy infrastructure is safe and not reliant on other countries. We're not going to want to have to rely on a country that might be war-torn, or on someone who might pull out of an arrangement on us, for those energy needs.

Speaker 2:

And so, yeah, I think solar would be great. I also think about, and I'm sure the cost would be quite high, what the government could step in and do if they really saw this as a real need.

Speaker 2:

So, as an example, let's say the government said: for every household that has a solar score above a certain threshold, and I didn't even know what a solar score was until you said that, but I can imagine what it is, you know, an area that gets a lot of sunshine, et cetera, we're going to pay for your solar panels, right? And they could also say: we're going to pay for your solar panels and a battery, and we don't want that energy to go to waste, so you store your energy in that battery, that's the energy you use overnight, and the excess energy goes back into the grid. So maybe a bunch of households in high scoring areas are pumping energy back into the grid, which is ultimately good for the rest of us.

Speaker 2:

That's a way, and I'm sure it would be insanely expensive, but it's a way that, if the government really was serious about energy independence and AI independence, there are things the government could step in and do.

Speaker 1:

Yes, I know. You know me, I cringe a bit when I hear "the government can", because I prefer a bit smaller government and less involvement. But I think when you said "can pay for", it makes sense, and it reminded me of, well, we have tax credits for solar.

Speaker 1:

I'm not sure what it looks like across the country, and I'm not even sure here in California what the latest is in terms of the program, but that was definitely a way to help propel solar panel installation, because they are very expensive; the initial installation is expensive. And then, to go back to what we were talking about: nuclear, solar, natural gas, whatever, these are all inputs, but they still rely on the distribution side, our national grid, for how electricity gets to the end users, and in this case we're talking about AI companies who are using massive volumes of it. So I feel like there needs to be a lot more attention put on the distribution of electricity and the national grid, really ensuring its security and its viability, and part of me thinks that maybe this is a place where the AI companies are going to be heavily involved in lobbying to help improve that.

Speaker 2:

Yeah, I could see that. I thought you were going to say something different. I thought you were going to say that maybe AI companies will get involved because they'll be able to come up with some algorithm to distribute the energy more effectively, that AI is going to save us all by having a better, more efficient algorithm for distributing the electricity we do have, or something like that. But yeah, I mean, AI is going to save us all in so many ways, and one of them may be figuring out how to fix our national electricity grid. And I think you're right, I think they probably will get involved in lobbying as it relates to energy, because energy is going to be just so important to them being able to develop the models and push the models to the limit.

Speaker 1:

I love what you're saying. I do think there's a world in which it's twofold, right? The AI companies have government relations teams who are going to be lobbying on the topics they really care about, and energy and electricity will be a huge one. And then using AI itself to figure out optimization models for electricity inputs and distribution, everything that benefits both these AI companies and consumers. Like we talked about before, rising rates for residential electricity? Well, nobody wants to see that. So if there's a way to bring those back down and have the best of both worlds, I can see AI's role in that.

Speaker 2:

One thing I was going to say is I think it's interesting to think about AI as this new, emerging linchpin of our society, this thing that we depend on. Think of the third leg of a stool, or a new pillar on which our society sits. What I mean by that is, pre-internet, we did not depend on the internet; it didn't exist, right? But post-internet, we depend on it.

Speaker 2:

So we were talking about, and maybe you were about to bring this up, CrowdStrike today, right? CrowdStrike went down and the US came grinding to a halt. Banks couldn't operate, flights couldn't go out, trains were delayed. And it's interesting, because the internet has become this pillar on which the foundation of our country stands, and with AI we're seeing a new pillar emerge. It's a thing we didn't depend on previously, but 10 years from now we will be fully dependent on AI in a lot of different ways. So it's interesting to think about: how do we secure this AI that we're becoming dependent on? How do we make sure that pillar is strong and that it's safe?

Speaker 1:

Yes, and I think that's a great point. It reminds me of what we saw come out of the Aspen Strategy Summit last week, where we're seeing Anthropic, Chainguard, Cisco, Cohere, Google, GenLab, IBM, Intel, Microsoft, NVIDIA, OpenAI, PayPal, and Wiz all saying they're committed to providing security practitioners with the guidance and tools to create secure AI systems, and to advancing measures to address AI-related cybersecurity risks and opportunities. So I feel like seeing all of those companies come together to say, hey, this really matters, yes, we're in competition with each other in some ways, but collectively we care about creating, as you're saying, a stable and secure linchpin. If we continue down the path we're going, advancing AI and making it so integral to society, then we need to start really thinking about these big picture things. So I feel like it's promising to see them come together and commit to that, and I think we'll see much more of that as time goes by.

Speaker 2:

Yeah, I think it's great. I think it's a great move. It happened this week. Obviously, I haven't had a chance to catch up on it and exactly what it means. The optimist in me and I do tend to be an optimist is like okay, this is great. These companies are going to drive this innovation, they know what's required for the safety and security of this innovation, and so it's great that they're working together. The cynic in me is like well, they're protecting their revenue and they don't want to be legislated against, so they might as well get ahead of legislation and create the standards ahead of time, so that standards aren't created for them. But I think, either way, I think it's great and, yeah, we'll see what comes of it.

Speaker 1:

Yeah, and I think even the cynical side is still a positive, right? If they are working to get ahead of the legislation, I see that as a proactive measure. I mean, obviously, who knows AI best? The people and the companies who are creating it. So we have to assume some best effort and good intention on the part of these companies to want to deploy things responsibly.

Speaker 1:

And it reminds me of a newsletter that we saw come out, I think I shared it with you, and it was saying how, yes, the AI tools aren't necessarily at the place where they can do everything right now, but they are at a place where they can do a lot more than what they're being allowed to do. And the point of the newsletter was that all creators of AI, when asked, say they don't want AI to do everything. That's not their goal. Their goal isn't for AI to take over the entire world and replace humans. Their goal is for things that are menial and difficult and time-consuming to be augmented and optimized by AI, and then for humans, like Rhea said last week, to do human things.

Speaker 1:

And I think that means a lot. The newsletter itself said the only people, or the only entities, who want to see AI take over are bad actors. And I think if we look at it from that perspective, then we say: okay, well, they're proactively getting ahead of this legislation, but from a framework of wanting it to be what works for everybody.

Speaker 2:

Yeah, I think that's exactly right. I mean, it's interesting that even OpenAI has the kind of nonprofit structure that they have, which became big news that not everyone was aware of when Sam Altman stepped down, and things like that. These companies aren't all actually formed as for-profit companies. Some of them just have their nonprofit entities with a corporation associated or somehow affiliated with them that does the profit part.

Speaker 2:

But going back to it: OpenAI was formed to reach the goal of artificial general intelligence for the good of society, not necessarily for profit motives, right? So, getting back to it, I think they have a vested interest, obviously, in keeping this technology out of the hands of bad actors, and I'm glad they're doing it. The other thing I would say is we should not expect, and it's probably not even reasonable to expect, the government, Congress, et cetera, to have educated legislation on AI. It's a very hard topic, and, being totally honest, our congressmen and women aren't AI engineers. They don't know how to legislate this topic, and so it's a hard one. So I'm glad these tech companies are leading the way there.

Speaker 1:

And, as we said a couple episodes ago, in Congress they're inundated with such high volumes of information that it's hard to really get deep on any one thing, and so you do want the experts to share their opinions here and then collectively find the right solution.

Speaker 1:

And, speaking of Sam Altman, I don't know if you saw this, but a few days ago he made a joint announcement with Arianna Huffington. They're going to do an AI health initiative together, and he made a point in that conversation to say that he thinks there should be some type of AI-client privilege, similar to what you have with HIPAA in healthcare and attorney-client privilege, so that people could have that confidentiality with the AI models they interact with, because they are sometimes sharing really sensitive personal information, and it would help to know that what you shared isn't going anywhere else. And so, I mean, talk is cheap, right, we'll see what comes from it, but I thought even him making that point, and recognizing that there's likely a need for it and how important it is, was interesting.

Speaker 2:

Yeah, I totally agree with that, because AI is going to become pervasive in so many places in our lives that it's going to know the most intimate details of everyone's life. AI is going to be in your emails, it's going to be in your text messages, it's going to be in your conversations. It's going to be, honestly, potentially an always-on audio device or something like that. There are devices people are talking about; some of the physical devices, the Humane pin and stuff like that, I think it was called Humane, have gotten really bad reviews, but they've talked about this idea.

Speaker 2:

You know, AI that's always with you. I've got AirPods in right now, as an example, right? Imagine if it saw everything I was doing and it was also hearing every call I was in and every conversation I was in. It could take notes on everything I'm doing, so that at the end of the day it could remind me: oh, you forgot to do this, or you forgot to send a follow-up email, or something like that. And so it's only a matter of time before it becomes part of everything we do.

Speaker 1:

The follow-up email is the optimistic version of that outcome. I think it could go in a very different direction.

Speaker 2:

There have been some Black Mirror episodes about this, if you want to go see what it could look like. Black Mirror always nails it, and they have had some episodes about always-on AI devices and what the ramifications of that could be.

Speaker 1:

You know Black Mirror is a horror show for me. You know I can't watch it.

Speaker 2:

It's a great show. It is a horror show, but it's the most realistic show for what some of these futures actually could look like. And obviously they focus on the downsides of what some of these futures could look like, but I actually truly think they're not far off.

Speaker 2:

One thing that happened in a Black Mirror episode was a recreation of a loved one who had passed away. I forget exactly, but I think it was someone's partner who passed away.

Speaker 2:

And the episode first started off with being able to text that loved one: they were able to ingest that loved one's social media profile, videos, et cetera, and then you could text that person. The next level up from that was you could call the loved one who had passed away and hear their voice and talk to them. And then where Black Mirror took it was this kind of humanoid, humanistic bot, if you will. I don't want to call it a bot, but it looked like the person, it felt like them, and it became the real person. Now, that's a stretch, but what does exist in the world today is at least that first step. There are services where, when your loved one passes away, you can upload some sort of information about them, their social media profile or maybe some conversations you've had, and you can text them. That's a service that exists today. So Black Mirror, I mean, they nail it.

Speaker 1:

I don't know if you saw, but a company out of China just did a demo release of, not exactly that, but still images of a person with the loved one who had passed, and they turned them into videos, basically into a memory that you could watch back. And it could have been something that happened, or maybe something that didn't happen that you would have wanted to happen. So I feel like you are starting to see that, and then it makes me think: are these Black Mirror episodes predictive, or are they inspiring? Are people watching them and saying, oh yeah, that is a good idea, we're going to create that?

Speaker 2:

I think they're both. I think they're both. Was this the thing I saw this week? Was this the one where there were images of you being able to hug a loved one who had passed away? Is that what that was?

Speaker 1:

Yes.

Speaker 2:

Yeah, okay. I think it's very cool, and I was thinking about how it would make me feel if I was able to create an image of present-day me hugging a loved one who has passed away. I mean, obviously, like we talked about with Reyes last week, I'm a little bit of a futurist, but I think there's something to that. I think I would like to see something like that. I could see where it would creep a lot of people out and be off-putting, but for me, I think it would be kind of nice. I could see wanting to text a loved one, even if I knew it obviously wasn't them. It feels like there's some comfort there, and I could definitely see it becoming more of a real thing.

Speaker 1:

Yes, I noticed even your voice got soft as you contemplated that; you felt something there. And I think it would be nice, yeah. Especially for people who are going through the grief process, which is complicated and nonlinear, having an option like that at some point that you can tap into could be really nice.

Speaker 2:

Yeah. I was just thinking about how, and this conversation is getting a little bit morbid this week, some people choose to be buried, and their loved ones visit their grave sites, and when they do, they often will talk to them, which I think is a very normal thing, very good for grief, very healthy. And so in some ways this is an advanced version of that, where you could go to their grave site and maybe that's where you'd have the conversation, right, things like that. So yeah, I think technology is so fascinating, and I think we find all these ways in which it can help humanity that we might not have expected. And, like you said, it's optionality, kind of like the burial thing: some people choose not to get buried, some people choose to get cremated and have their ashes spread, so they're not going to a place to remember them, and that's everyone's option. They have the choice to do it or not do it.

Speaker 1:

Yes, very true. I mean, I have a lot of thoughts around that too, but I think we are coming toward the end of today. We got into a much different conversation than I expected, but I really enjoyed it.

Speaker 2:

Yeah, I think it was kind of a random conversation, a lot of fun topics, but yeah, I enjoyed just riffing and seeing what comes of it.

Speaker 1:

Yes, awesome. Thank you so much. I appreciate getting all your thoughts today.

Speaker 2:

Thank you as always. Talk to you soon. See ya.
