
The AI Coach
AI meets human behavior in these fun and insightful conversations with Danielle Gopen, a top founder coach and advisor, and Paul Fung, an experienced AI founder. We blend AI breakthroughs with real-world insights on business strategy, industry disruptions, leadership, and psychology while cutting through the hype to ask the hard questions.
This is for decision-makers like founders and business leaders who want to harness AI's potential thoughtfully, exploring both the technological edge and human elements that drive long-term success. Join us to challenge the status quo, navigate the shifting landscape of an AI-powered world, and get new information worth sharing.
2024 AI Reflections
We look back on the tremendous progress AI made over the course of a single year - from LLMs becoming lawyers and doctors to producing Hollywood-level video output - and what technology advancements made it all possible. We then add a few predictions for 2025.
We love feedback and questions! Please reach out on LinkedIn.
Mentions of companies, products, or resources in this podcast or show notes are for informational purposes only and do not constitute endorsements; we assume no liability for any actions taken based on this content.
Speaker 2:Hi Paul, hello, how are you?
Speaker 1:I am wonderful, getting ready for the holidays. How are you?
Speaker 2:Yes, same, I'm great. I'm in New York enjoying the holiday season; it's always festive here. I thought, since it's the end of the year, it could be nice to have this episode be a bit of a reflection on how far AI has come in 2024, and maybe make some predictions for what's to come in 2025. What do you think?
Speaker 1:Yeah, that sounds great. And not only has a lot happened in 2024, but honestly a lot has happened in just the past two weeks, and I'm not even totally caught up on all of it. So yeah, let's go through the whole year.
Speaker 2:Okay, awesome. Well, actually, if you want, we can start with what's happened in the last two weeks, because I think it's really representative of what has happened this year with Gen AI. At the beginning of the year, video was clunky, image generation was clunky, and trying to create some sequence of images, video, voice, and sound all together was very obviously AI and rarely what you wanted the output to be, especially on the first go. But now... I assume the advancement you're referencing is the Google Veo stuff?
Speaker 1:Yeah, both the Google stuff, and then OpenAI had their 12 days of holiday announcements. I don't know what they're calling it, but yeah, both of them. And you might know more about the Google stuff than I do. Did you follow any of their releases?
Speaker 2:Very unlikely. Google says that with their new quantum computing they can see into other dimensions.
Speaker 1:Okay, I did follow the quantum stuff a little bit, which is wild. I think I read a headline that said basically there's a chance that the way it's doing the computations is by accessing other dimensions, which I think is fun. I think quantum is a crazy fun topic.
Speaker 2:Yeah, I think so too. Honestly, who am I to question whether that's actually happening or not? You know me, I feel it's possible.
Speaker 1:I don't know. I don't know enough about quantum to make any commentary on it, other than to relate it back to our AI topic: if the models are as good as they are now using traditional compute, not even using quantum computing, I can only imagine how insane the combination of quantum computing plus AI and LLMs would be.
Speaker 2:Yeah, it is really crazy. I was thinking more about the Veo release that Google had, which, just from a video generation perspective, is mind-blowing. Months ago we talked about Kling and Midjourney, and even at that time those were impressive, but now this is a totally different ballgame. And that's part of today's conversation: this happened in just six months. We have to remember that's a very short period of time, and to see this level of technological advancement is insane. From the engineering side of things, I am curious to run through a little bit of what allowed these developments to happen. But first, if you had particular thoughts on recent advancements in the last couple of weeks, let's chat about that.
Speaker 1:I mean, yeah, the big one is Google really having their coming-out party, if you will, not just in the past couple weeks but over the past couple months. At the beginning of the year, if we talk about 2024, Google was seemingly so far behind on these things, and then they steered their huge ship toward AI, and now it seems like they're coming up on parity with OpenAI, both from a text-models perspective, the video models like you talked about, and also the developer infrastructure, the tools they give to developers to build on top of their Gemini models. So yeah, I've been impressed. I haven't messed around with them that much, but from what I have, they've seemed pretty good.
Speaker 2:Yeah, so maybe for people who haven't paid too much attention to this: one of the big things people are really excited about in these new releases, Veo 2, Imagen 3, and OpenAI's Sora, obviously, is how they've revolutionized video. They've taken what was control of physics in a two-dimensional way and made it very spatial, three- to four-dimensional, visually, basically allowing these avatars, or other beings, I guess, in the video to take on lifelike movement and expressions and all these things that seemed so far away even a few months ago.
Speaker 2:Okay, so part of that, again, is the underlying technology, because even though I'm not a technologist, I'm always curious about what's been created and how they're able to do it. So, just to start with that a little bit: how did we get here from the beginning of 2024, when people were using ChatGPT and other LLMs, and they were helpful, for sure, but there were limitations? A lot of those limitations came down to image, audio, video, all that, but even on the tech side of things, you weren't able to do as much as you maybe wanted to, or as much as people thought AI could do right off the bat. And that was back then, 12 months ago, a whole year ago.
Speaker 1:Yes.
Speaker 2:Most of the models relied on transformer architecture, which really excelled at pattern recognition but was very limited in its ability to reason, store memory, and do complex processing. Then, by the middle of the year, they were able to develop what they call sparse and modular networks, and what that really means is that these LLMs were able to activate just the portion of the model needed for the task it was being asked to do. So, as opposed to having to activate the entire model, it could use one small piece, and that made AI much faster and allowed the output to be a bit more specific. What would have taken maybe minutes before was now taking seconds. And then there's another modular approach called mixture of experts.
Speaker 2:I'm not super familiar with that in the technological sense, but the idea behind it is that you can tap into these specialized submodels without increasing the overall size. We talked about this a few episodes ago, this idea of sunsetting former models: the new ones that come out are able to do so much more than the previous ones and are a lot more efficient, so it doesn't make sense to keep the previous ones running, because there's no real value there. Something else we touched on recently that applies to how this technology progressed is the idea that LLMs are able to store memory and have these context windows to perform much more than just a simple query, or rather, to take in much more than a simple query. At the beginning of the year, most LLMs were limited to around 8,000 tokens. Can you say in real terms what that means?
Speaker 1:8,000 tokens. Oh man, I should have prepared for this. I'm going to do a rough job of answering this, and anytime I try to use my keyboard while we're recording, we get the click-clacks, so I can't Google it. But it's just not that much information; it's probably like saying it can take in a page or a couple pages of information. Whereas now the context windows are 128,000 tokens, or even, with Gemini, up to 2 million tokens. So basically it's the equivalent of saying that at the beginning of the year you could feed your LLM two pages of a book, and now you can feed it the entire 300- or 400-page book, and it will understand and be able to act on that entire set of knowledge.
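To make those token numbers concrete, here is a minimal sketch, assuming the tiktoken tokenizer library and a hypothetical book file; the window sizes mirror the ones mentioned above.

```python
# A rough illustration of context-window sizes; a sketch only.
# Assumes `pip install tiktoken`; "whole_book.txt" is a hypothetical file.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # tokenizer used by many OpenAI models

with open("whole_book.txt") as f:
    text = f.read()

n_tokens = len(enc.encode(text))  # one token is roughly 3/4 of an English word
print(f"{n_tokens:,} tokens")

for window in (8_000, 128_000, 2_000_000):  # early 2024, late 2024, Gemini-scale
    verdict = "fits" if n_tokens <= window else "does NOT fit"
    print(f"{window:>9,}-token window: {verdict}")
```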
Speaker 2:And in addition to that, it also stores the memory. So if you fed it a book two weeks ago and then ask it a question about the book today, it knows what you're referencing and is able to go back, pull what it needs, and deliver it to you. I think one of the most amazing real-life examples we've seen of this progression from early 2024 to now is on the medical side of things. I don't know if you saw that study from a couple months ago comparing doctors without AI, doctors using AI, and AI on its own doing diagnoses. It found that AI on its own actually outperforms doctors without AI, and even doctors using AI if they're using it incorrectly. The way they differentiate that is to say that AI on its own, and doctors using AI correctly, actually upload all the patient's medical records to the system and then say, okay, based on all this information you now have, break down what's important. Doctors using AI incorrectly would not upload the medical records.
Speaker 2:They would just type in a particular prompt that says, oh, if a patient has X, what could that be? And then the AI didn't have enough context, not context window, but enough context, or any retrievable memory, to know what to reference, so it would just come up with something like, oh well, maybe this, and that didn't lead to much accuracy. I think that's fascinating if you think about somebody in their 60s or 70s with massive volumes of medical records. To be able to upload all of that, obviously with the right protections in place and HIPAA compliance and everything, and say, okay, let's work through this, has just been groundbreaking in what AI will be able to do, or is already able to do, in that capacity.
Speaker 1:I think it'd be a fun experiment. I would love to see someone try to measure what an average human's context window is. As humans, we have what I think they call working memory: essentially, what can you keep in your mind as you're thinking through a problem? And I think the medical example you gave is a really interesting one.
Speaker 1:It would be really interesting to see a doctor given a whole case file, a whole history, allowed to read that whole history, and then asked a series of questions about it, whether that's memorization, or what the possible diagnosis could be, et cetera, and then compare that to an LLM with different context windows.
Speaker 1:So what is the answer an LLM gives if it has an 8K context window, a 56K context window, a 128K, a 2 million?
Speaker 1:Because the general thing in my head right now is that, with context windows growing the way they have, it wouldn't surprise me if the 8K context window is less context than a human can deal with at a given time. If you looked at everything in an 8K context window, you could pretty much repeat it back and answer questions about it intelligently. But the 128K context window has probably surpassed what humans can keep in their own working memory. So in this way, these agents, or these LLMs, have leveled up. We used to talk in early episodes about whether they're third graders, interns, or professionals, what's their capability level, and this year they've gone from being third graders to being superhuman in some ways. Not AGI superhuman, but able to work with a wider context window than an average human's working memory, I think.
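A minimal sketch of the experiment described above: truncate the same record to different token budgets and ask the same question of each. The tiktoken and openai libraries, the model name, and the case-history file are all illustrative assumptions, not anything from the episode.

```python
# Sketch of the context-window experiment: same question, different budgets.
# Assumes `pip install tiktoken openai` and OPENAI_API_KEY set; the model
# name and "case_history.txt" are illustrative.
import tiktoken
from openai import OpenAI

client = OpenAI()
enc = tiktoken.get_encoding("cl100k_base")

with open("case_history.txt") as f:  # hypothetical full medical history
    tokens = enc.encode(f.read())

for budget in (8_000, 56_000, 128_000):
    snippet = enc.decode(tokens[:budget])  # emulate a smaller context window
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{
            "role": "user",
            "content": "Given this history, what are possible diagnoses?\n\n" + snippet,
        }],
    )
    print(f"--- {budget:,}-token budget ---")
    print(resp.choices[0].message.content)
```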
Speaker 2:I think that's fascinating. I would love to see a study on that, and then to take it one step further and ask, okay, how many patients does the average doctor have in a day, in a week? Maybe for one patient with one case file, their ability is close to what the LLM can do, but when you expand that to their typical workload, I have a feeling the LLM does outperform, because it's not burdened by the same limitations of human working memory and all the other distractions that come into play.
Speaker 1:That's kind of crazy to think about. Yeah, this is something I thought about earlier this week, kind of a random thought. It's not necessarily that we need LLMs to outperform humans. They also have this added advantage that they don't need to sleep, they don't need to eat, you can run them around the clock, whereas a doctor would need to take a break for lunch, needs to sleep, and can only look at one case file at a time. And we've talked about this, but I've been looking into agents a lot lately. So instead of having one doctor look at case file after case file after case file, you can just spin up 10 agents in parallel and have it done in a fraction of the time.
Speaker 1:That's the other advantage. It's not necessarily that we need these LLMs or these agents to be smarter than humans; they get this added benefit of being more efficient. So how can you take advantage of that efficiency? Even if they're not as good as humans, if they can do 10 tasks in the time a human does one, there are ways to use that so the overall output is still better than a human's output.
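A minimal sketch of that fan-out idea, one agent per case file running concurrently; the review_case helper, the prompt, and the model name are illustrative assumptions.

```python
# Fan out one "agent" per case file and run them concurrently; a sketch.
# Assumes `pip install openai` and OPENAI_API_KEY set; model name is illustrative.
import asyncio
from openai import AsyncOpenAI

client = AsyncOpenAI()

async def review_case(case_file: str) -> str:
    """One agent: summarize a single case file."""
    resp = await client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": "You are a careful medical-records reviewer."},
            {"role": "user", "content": "Summarize the key findings:\n\n" + case_file},
        ],
    )
    return resp.choices[0].message.content

async def main() -> None:
    case_files = ["case file 1 ...", "case file 2 ...", "case file 3 ..."]  # stand-ins
    # All reviews finish in roughly the wall-clock time of one call.
    summaries = await asyncio.gather(*(review_case(c) for c in case_files))
    for summary in summaries:
        print(summary, "\n---")

asyncio.run(main())
```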
Speaker 2:Yes, and I think you just hit the nail on the head of the biggest development in Gen AI from the beginning of the year till now. At the beginning of the year, it was really a tool, and still rudimentary as a tool in some ways, regardless of whether it's an LLM or a particular application you're using it for. The "gen" is the generative part of Gen AI: yes, there was that understanding that as you gave it information or asked it to do a certain thing, it could get better and better and learn from what you were doing.
Speaker 2:But I think now we really see how far the generative part of Gen AI has come over the course of a year. It's not just a tool anymore, it's really a collaborator, and again, this is not just the LLMs, this is also many of the applications out there and how you see them used. It's a collaborator for the humans it interacts with, and it amplifies what they're doing. I think there's always going to be, and has been, a lot of fear over being replaced, but for me it's really an amplifier, able to do things that humans either (a) don't want to do, (b) don't have time to do, or (c) are not best served doing, and it can be a partner in that.
Speaker 1:Yeah, I think it's interesting. One thing we spent a lot of time discovering this year, and when I say we, I mean humanity, or the people who have been playing with these AI models, is: what are the right interaction patterns that allow people to get the most out of these things? In some use cases it's maybe a replacement for a human; in other use cases it's a collaborator or a co-pilot to a human. So yeah, I think there was a lot of experimentation this year to see what works and what doesn't.
Speaker 1:And I'd say, earlier in the year, the co-pilot kind of approach was the one that I think has stuck the most. For 2025, you're going to see a lot more replacement use cases via these agents. You're going to start to see a lot more of, hey, this just takes on an entire task, or an entire set of tasks, on its own, replacing the need for someone else to do it. So it'll be interesting. 2024, year of the co-pilot; maybe 2025, year of the agent.
Speaker 2:I agree with that, 2025 is definitely the year of the agent. And when we say replacement, I also want to clarify that it doesn't mean making humans obsolete. Think about any other technological advancement, things like the ones where a person no longer had to spend hours, I shouldn't say hours, maybe a few minutes, solving a math problem.
Speaker 1:Some of them take hours, some of them take decades.
Speaker 2:That's true. Yeah, some of those proofs might never be solved; maybe AI will be able to do it next year. But instead of taking the time to do that manually now, you can do it efficiently. Listen, you can still do it by hand if that's your hobby or your choice, but in terms of productivity, you have the option to do it very efficiently. And arguably, Excel didn't replace what people were doing; it actually just created more work once people realized all the things they could do with it. Now people spend their whole lives in Excel. So I think we will see more of that, versus things being completely replaced.
Speaker 2:Except for one company I've come across recently that I think is the best example of human replacement. Not to say they're the only one, but just in what they do specifically.
Speaker 2:So they are AI-powered robotics for material finishing on things like cars and boats and other industrial products.
Speaker 2:The way it works currently, humans are the ones doing this, but when they're doing it, the air particles are really dangerous for people. You have to wear these crazy masks, essentially hazmat suits, and really be protected, and it's backbreaking labor. It's not actually good for people to be doing it, but up until now there had really not been another option, and it was a necessary thing. And now these robots, which are not what you think of as humanoid robots, they're these big machines with huge discs and autonomous arms, can come in and do the finishing that needs to be done. That's a good thing to replace humans in, because it means people who were working these dangerous, unhealthy jobs are able to do something else that is going to be better for them, and you don't have to have people risking their lives and health to perform this task.
Speaker 1:Yeah, two things. One, I think that's incredible. And two, the thing it made me think of, I always come back to this, is autonomous driving. It does not seem like that dangerous a task, but it is, relatively: cars driving autonomously is much safer than humans driving, right? The number of accidents that happen is way lower. So even things that don't seem that quote-unquote dangerous, because we're so used to doing them, can be made much safer via AI. So yeah, I'm excited for all of those breakthroughs to come.
Speaker 2:Yeah, speaking of autonomous driving, that was one of the things I was thinking about as a 2024 advancement. At the beginning of the year, I'm not even sure Waymo was in LA at all for the public to use, but now I see them all day, every day. When I first saw them driving around, they were kind of tentative drivers, which is funny to say, but you could tell it was almost like a new driver in driver's ed, just learning the rules of the road and what to do and where to go.
Speaker 2:Now they're super aggressive, like true LA drivers. I see them taking unprotected left turns all the time here; they just go. And I think it's so funny, because that is how you drive in LA. It might be different in other cities, but LA is notorious for, what do they call it, the California left or something like that. It's these major intersections, and humans are generally really bad at taking those turns: they're bad at gauging the distance and speed of the oncoming cars, they're nervous, they're not paying attention, whatever. But these Waymos have got it down, and so I feel like they were able to learn very quickly, in just six months, how to navigate around the city.
Speaker 1:You know what my favorite thing about Waymos is? We try to take them as much as possible in San Francisco, and my favorite thing to do is to watch, I think Google was really smart when they designed this, the little screen in the Waymo that shows you what the Waymo is, quote-unquote, seeing. It has an overhead view of the street you're on and where it is relative to the street. Teslas do this too: on the dashboard they show you the cars in front of you and the pedestrians around you. And I think it's really fun to watch because, one, it makes me feel better about riding in a Waymo.
Speaker 1:Not that I was scared in the first place, because I'm an early adopter, so I just get a kick out of this. But it's really cool to see all the things it is seeing that you're not seeing. As an example, when we're driving at night and I'm in a Waymo, I'll watch the screen and see a pedestrian on the screen that I hadn't even seen in person yet, even if I was looking around, because it has farther reach, and because of LiDAR it can see through things. So it really is a good demonstration of what AI is capable of that humans are not: it's capable of seeing these things and reacting to them, even though as a human you wouldn't have seen them. It's such a cool demonstration of the capabilities it has that we don't.
Speaker 2:I love that. My one hope for 2025 is that Waymo expands its geographic region in LA so that it can actually pick me up at my house, because I'm literally two blocks outside of its current zone.
Speaker 1:Oh no. Do you walk two blocks to get into the zone?
Speaker 2:No, I haven't. So far it hasn't made sense to do that, but I'm hoping that will be the next phase, because then I can take it more easily.
Speaker 1:The other kind of crazy news story related to autonomous vehicles that happened this year and happened recently was that GM shut down Cruise. Did you hear about that?
Speaker 2:I actually didn't see that.
Speaker 1:Yeah, I think it was maybe a week ago, two max. The CEO of GM, who I think is Mary Barra, shut down Cruise as an autonomous taxi service. They're going to fold Cruise into, I think, autonomous driving for their normal cars, but they're not going to try to operate a ride-hailing service or anything like that anymore, which makes sense. Pretty disappointing, though. I mean, they were losing billions of dollars a year on this, and they also had a pretty bad safety incident in San Francisco, I think two years ago.
Speaker 1:But it also shows, a friend of mine works in innovation, and he talks about how big corporations crush the butterfly. Imagine having a little butterfly in your hand, and it's Cruise, and you're GM, and you're trying to play with it and you just accidentally crush it. This is what corporations do, right? They're more worried about their bottom line and their stock price than about pushing the envelope of innovation. So yeah, they shut down Cruise, which I think was crazy. Now we're basically hoping for Waymo to be the one to bring it forward. There's also Zoox, which is owned by Amazon now, but they're way lesser known, and they've just started testing in San Francisco.
Speaker 2:I have several thoughts about what you just said. One is, I actually think it makes sense for GM, from a vertical integration perspective, to just take the existing technology and incorporate it into their traditional vehicles, rather than have a separate ride-hailing or taxi arm; it's just a different business. But aside from that, the one company I do think it actually makes sense for is Tesla, because they essentially are that already. The majority of the Ubers I see these days are Teslas, and they have leasing programs and everything, so it's a natural next step to say, okay, now we don't even need the driver, the car already drives itself, we just do a little programming to make it an autonomous ride app. This is not about GM's technological prowess or anything like that; it's more about the way Tesla started as a company and what it's evolved into. I just feel like that's a more natural vertical integration fit. But then, separately.
Speaker 2:My other thought was about big companies and innovation and the squashing of the butterfly. I think we've already seen a lot of this in 2024, and I think 2025 will be even more so: the AI founding team, or early-team, startup acqui-hire. The Googles and Amazons of the world buying these companies, sometimes very early. Actually, I just heard about this a couple weeks ago: Adobe bought a company before they even had revenue, because the technology was so critical to what Adobe wants, but they were not able to produce it internally, because of exactly what you're saying. These companies become their own restrictions, so they find that innovation elsewhere and bring it in-house, because founders aren't bound by the same restrictions. So I do think we'll see more of that, especially in the AI realm.
Speaker 1:Yeah, I think one thing that's cool about this AI stuff, it just occurred to me, is OpenAI. I mean, first of all, let's acknowledge that none of us even knew OpenAI existed 18 months ago, which is crazy. Maybe 24 months ago, let's say. I mean, 18 months ago we kind of knew, but 24 months ago...
Speaker 2:AI wasn't built Prior to March 2023, only people who are really in the space, you know, working in AI or machine learning, already probably knew open AI. And then, maybe by summer of 2023, people are like, oh, OpenAI, I've heard of that Versus. Now it's hard to be hard to find somebody who hadn't heard of OpenAI.
Speaker 1:Yeah. And I think one of the cool outputs of OpenAI existing and all of this happening is that it's really kicked Google and Amazon and Microsoft into high gear. Microsoft has been on a comeback campaign for a long time, ever since Satya Nadella, a Chicago Booth alum, took charge. But I would actually say that Google had been resting on its laurels for quite a long time at this point, right? There was really nothing threatening Google all that much in terms of its dominance over search and things like that, so they've really had to kick themselves into high gear, with OpenAI coming out and then Perplexity putting a dent in how people search for things and how people get answers. So it is cool to see all of these major players, who maybe have been pretty settled for some time now, get a little kick in the butt to wake them up, and to see them start innovating again.
Speaker 2:Mm-hmm, I agree with that. I do think, if we talk about predictions for 2025, I do think it's something along those lines that these companies and let's include Apple in that too maybe even go back to their roots of innovation, which is that's how they became the companies that they are, and I think they might've lost sight about that as they became more and more, you know, institutionalized and expansive. But now to say, hey, let's go back to what made us great and bring that to the world, and not just allow open AI to be the only one doing that.
Speaker 1:Yeah. Yeah, I have some fun questions for you. By the way, I think this has been a fun conversation because the roles have been reversed a little bit and you've been talking about some of the technological advances, and some of these things are beyond my kind of technological knowledge, so it's been fun to be the lay person in the conversation today.
Speaker 2:Don't worry, they're way beyond my deal.
Speaker 1:Nice. So one question what's your favorite AI thing that happened this year?
Speaker 2:Ooh, that's a good question. So I actually don't have a specific one thing, but I have a concept. So it is that, and this is something that occurred to me in the last like week or two. It is that any time I've thought of a problem that I have in my workflow or productivity or life organization or whatever, and I thought, oh, I wonder if there's some AI solution for this. I've looked and there is, and so to me that's just crazy that it could be as simple as scheduling on meeting calendars. You know, multiple people trying to find a time to the meeting notes that come that are generated now by these AI. Co pilots are so good, as opposed to just being a transcript of who said what the summary is like. Yeah, that's literally exactly what happened, like it's really crazy to family calendar scheduling, you know, making sure everybody's on the same page about what should happen to what I thought of just a week ago.
Speaker 2:I work with a lot of startups and have a lot of decks and I really was thinking, oh, it'd be nice if I could feed the deck through a system that can then pull out the important information to fill in my spreadsheet of the specific details that I need from each deck. Lo and behold, that exists. I think it's a Sequoia-funded company, I can't remember, but it's N8N. I don't know if you're familiar with them, but they have a lot.
Speaker 1:I know N8N very well, yep.
Speaker 2:Yeah, yeah, so they do that, and so I think it's to me that's been the most amazing thing of like wow, all of this is happening.
Speaker 1:You know it's like one. I think it's funny that you use n8n. I would never that not not on my bingo card for today. You bring it up using an n8n. It's I. I know it from being in the workflow automation space and so they're kind of one of the players there well at first.
Speaker 2:I mean, their website is so sketchy. Looking at first I was like, is this a scam? I had to really research them. And then when I saw the details of it I was like, oh, okay.
Speaker 1:Kind of along the lines of what you just said. I had a funny experience this week. So on Monday I had a very productive day and then yesterday I felt like I had a less productive day. I was. I dipped my toes into doing a little bit of coding here and there not a ton, but like just for prototyping new ideas of stuff we're working on and the difference between Monday, me being like hyperproductive, and what I realized yesterday, when I was like kind of banging my head against the wall trying to like solve this thing that I wasn't able to figure out, is on Monday I was using ChatGPT a lot and yesterday, for some reason, I just like hadn't thought to use it like all day long as part of my workflow and the difference in productivity was astounding. Like I used it a ton on Monday and was like able to anytime I got stuck, I just like asked it like how do I do this thing, or what should I do next, or like what should I think about? And I just like had the most productive day.
Speaker 1:And then yesterday I came in and for some reason I just didn't think to like boot it up, as you know, and kind of use it as part of my all day work and I was getting so frustrated and then I was like, wait a second. I forgot that I could be asking a lot more questions to ChatGBT about what I'm working on. And then I started doing it last night when I kind of went back to it and I was like, wait, I'm going to ask it and I solved the thing. And it's just crazy, to your point about, there's an AI solution for everything. It even extends to just any problem you're facing, just like I don't know. As a first line of defense, go ask ChatGPT what it thinks and, honestly, if it's a work thing, you're going to get a pretty good answer.
Speaker 2:Yes, I'm just laughing. There's obviously a lot more to say here, but I do have to run. But I think what you just said is the perfect way to cap this, which is how far has AI come from 2024 till now, especially in relation to humans and how humans are using it, have used it. Ai never forgets that it should use itself.
Speaker 1:Yeah, exactly, I mean humans are fallible and I forgot and that you know like that caused me a productivity hit for a day. So I think you're right about that.
Speaker 2:Yeah, I think, and we're just going to see more of that. Okay, next episode we can do even more predictions, but this is really interesting, I think. Hopefully it got people to start thinking about what they've noticed in their own AI usage and the tools that they use and what they see in the world of a year ago, january 2024, through now, and how far that's come and it's only to be more from now or it's only going to become more from what it is now. And I think, listen, you still have naysayers out there who either don't want to or haven't yet seen the power of AI. But I think whenever I hear somebody say, oh, ai can't do that, I think, oh, or I say to them oh well, when's the last time you looked at it?
Speaker 1:Yeah.
Speaker 2:And they'll say like two months ago, and I'll say a lot has changed since then. So I think that's the one thing to keep in mind is, literally every week that goes by, it gets better and better.
Speaker 1:Yeah, I agree with that. I agree with that. All right, so you've got to go.
Speaker 2:Yes, I have to run. All right Until next time Go get on the subway for the first time in a long time.
Speaker 1:Enjoy.
Speaker 2:All right, talk to you later, bye, bye.