
The AI Coach
AI meets human behavior in these fun and insightful conversations with Danielle Gopen, a top founder coach and advisor, and Paul Fung, an experienced AI founder. We blend AI breakthroughs with real-world insights on business strategy, industry disruptions, leadership, and psychology while cutting through the hype to ask the hard questions.
This is for decision-makers like founders and business leaders who want to harness AI's potential thoughtfully, exploring both the technological edge and human elements that drive long-term success. Join us to challenge the status quo, navigate the shifting landscape of an AI-powered world, and get new information worth sharing.
If You Call Yourself an AI Company, Bring the Wow
AI is meant to bring the "WOW" in producing information and optimizing processes. However, there are too many startups out there calling themselves AI companies, but when you get under the hood, you realize it's all smoke and mirrors. We talk about the ramifications around fundraising, customer experience, reputation, and more when that happens. Also, some thoughts on the OpenAI o1 release.
We love feedback and questions! Please reach out:
LinkedIn
Episode Directory
Mentions of companies, products, or resources in this podcast or show notes are for informational purposes only and do not constitute endorsements; we assume no liability for any actions taken based on this content.
Speaker 1:Hi, Paul.
Speaker 2:Hello.
Speaker 1:So I think we're going to talk about some other things today, and maybe we'll still get to the new release from OpenAI, o1, and some other developments over the last couple of weeks, actually the last couple of months. We took our summer hiatus from recording. How was your summer?
Speaker 2:Summer's been great. We did some traveling, some weddings. As I told you before, we went to Paris, so that was great. Nothing AI-related; even though there's great AI stuff happening in Paris, we stayed away from all of it. We went right after the Olympics and it was great. How was your summer?
Speaker 1:Mine was excellent. It's been really busy building up the accelerator. I went to Houston to meet with the partners for our first offsite; that was really fun. I also launched a coaches collective, which I don't think I've told you about: basically a way for coaches who want more connection, and opportunities for business and personal connection, to come together. I'll talk about it more another time. But yeah, it's been very busy.
Speaker 2:I didn't travel as much as I usually do, so it sounds like you made up for it. And actually, I don't think you've talked about the accelerator on this podcast yet. I don't know if it's something public that you can talk about, but if you can, I want you to share a little bit about it.
Speaker 1:Oh, yes, I would love to. Let's devote another episode to that, because there's lots to share. But the one thing I will say for now is that we are hosting an official LA Tech Week event on October 15th in LA. So, whoever's around, come on by. We will tell you how to 10x your revenue from $2 million to $20 million, which is, in a nutshell, what we do at the accelerator.
Speaker 2:That sounds great. Maybe I should be in LA in October. What day was it again? October 15th? All right, maybe I'll make my way there.
Speaker 1:Yes, you should. It's going to be fun, and we have tacos.
Speaker 2:I mean, tacos will get me anywhere.
Speaker 1:Okay, but actually, speaking of the accelerator, it reminds me of some companies we've been talking to and interviewing, both to come into the accelerator and also to use as partners. Obviously AI is all the rage (that's part of why we have this podcast), but there are a lot of companies out there who call themselves AI companies, or even have AI in their name or in their URL, you know, the .ai. And then, when you really dig in, you start to ask: are you an AI company? What exactly about what you're doing is the AI factor? So something that's top of mind for me right now, and I'd love to hear your thoughts on it, is these companies that say they're AI companies, where there seems to be a really big disconnect between what their customers expect from them as an AI company versus what they're actually able to deliver. I'm wondering if you're seeing that on your end.
Speaker 2:Actually, it kind of reminds me of this old saying that every company is a software company, and now it's becoming "every company is an AI company." One term we use in the bubble, in the ecosystem, is "AI native." So what does that mean? That term is taken from previous generations of technology, like "cloud native": were you a cloud-native company? Salesforce was basically the first cloud-native company, because they were the first SaaS, software as a service, delivered via the web, right? So when I think about AI companies, my definition would be that they are AI-native companies, aka they were built from the ground up with AI in mind. They didn't just sprinkle AI on top of previous features; they actually built on AI and the platform shift that is AI.
Speaker 2:Whether that AI means traditional machine learning models, Bayesian classifiers or something like that, or LLMs, those are both versions of AI, so either could be the thing that underpins the product and the company. So when I think of a .ai company or an AI company, I think of them as AI-native companies. That said, to your point, there are many companies out there that are .ai, or have AI in their name now, that are not AI native, right? They were traditional SaaS businesses that have added AI into their product, and they've done it with varying degrees of success. For some of them it feels very much like an afterthought, and it ends up being much more of a branding play, which is unfortunate but true. If you're in the public market, or going public, and you're seen as an AI company, investors get a little more excited about you. And then some of them are doing a really good job of it.
Speaker 1:Arguably in the private market too, right? If you go out these days for fundraising, you can attest to this, but we both see it: anywhere from pre-seed and early stage through, say, Series B, even C, if you're not an AI company, the dollars are very hard to find, and if you are an AI company, it seems like there's a lot of interest. So I think that's part of the positioning, or the branding, of saying "oh, we're AI," because all of a sudden a lot more doors open.
Speaker 2:100%. And there are firms that are dedicated to only doing pre-seed investments in AI companies.
Speaker 2:Right, that's their thesis, and so they will only invest in you if you're an AI company.
Speaker 2:I think it'll be funny. We're not even close to peak AI yet, I think; peak AI is still at least a few years down the line. But what I mean by peak AI, in my sense of it, is this: investors are already probably sick of hearing 15 AI pitches a day. I talked to a friend the other day who's a founder, and he was doing a mobile app, a mobile marketplace, and I was like, oh my God, this is so refreshing to hear and talk about, because it was not really AI-based at all. And I bet there's a lot of investors out there who feel this way too, because all they're hearing is AI nonstop, all day, every day. So peak AI will be when investors are sick of hearing about AI and they start asking, hey, can you please pitch me anything but an AI company?
Speaker 1:Interesting. I was thinking something along those lines too. Obviously, the frenzy of investing in AI companies right now is very much there, and I feel like this lasts for another one to three years and then starts to die down. Then it becomes: if you're an AI company, like you're saying, if you're AI native, show us what you've really got. If, when you tell us you're an AI company, what you really mean is that sometimes you use ChatGPT, let's talk about it. And savvy investors will see through that.
Speaker 2:Later-stage investors, not to say they're not as savvy, but they tend not to be as in the weeds of the technology; they're doing more of their diligence based on the revenue numbers and things like that, which makes sense at the later stage. They will be the ones to fall more for AI branding than, I think, the earlier-stage ones, who care more about the tech because there are fewer revenue numbers to care about at that phase.
Speaker 1:That makes sense. Okay, so if I told you there's an AI company that does LinkedIn outbound marketing, what would you expect your user experience to be if you contracted that company? You hire them, and it's going to...
Speaker 2:Yeah. Well, we haven't talked about this on the podcast, but one thing we explored in the past year was something along these lines: personalization of outbound. It's a really big use case in sales. So it sounds like you signed up for one of these companies. I would expect that they would... so it does automated LinkedIn outbound?
Speaker 1:So first thing... or email, it could be.
Speaker 2:It could be, on whatever channel.
Speaker 1:But just the AI-automated piece.
Speaker 2:I know too much about this world, because we were messing around with some of these ideas. Here's what I think it should do. I think it should go to your website and, based on your website, understand what your offering is and what its value prop is. If you have any customer stories or testimonials on your website, it should know how to incorporate those into whatever content it writes. So it should at least start you with a first draft of the offering you're trying to sell. And then some of them, depending, will do the piece where it actually tries to find people for you to sell to.
Speaker 2:Now, those are two different things, and so some of these platforms just do the content creation, and then some of them also try to do the targeting as well.
Speaker 2:But if it's full stack, if they're trying to offer the whole thing, I think they would then use their AI to say: here are some potential customers, the customer profiles we think you should sell to or that you might be selling to. You could then hone that definition of what your ideal customer profile looks like, and then, in a perfect world, it should find those people for you and sequence them: hey, here's the first message; if they don't respond in three days, here's the second message, and we're going to automate all of that. So if it was a full soup-to-nuts thing, it should understand your offering, generate the content, understand your ICP, find people for you, and then sequence those people and run that. But it's hard for a platform to do all those things; there are many platforms that just do a single individual piece.
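To make that concrete, here is a minimal sketch of that soup-to-nuts flow. It is an illustration, not any vendor's actual product: every helper function below is a hypothetical placeholder that a real product would back with scraping, an LLM, and a prospecting database.

```python
# Hypothetical sketch of a full-stack AI outbound pipeline. All helpers are
# illustrative stand-ins, not a real product's API.
from dataclasses import dataclass

@dataclass
class Prospect:
    name: str
    responded: bool = False

def understand_offering(website_url: str) -> str:
    # Placeholder: in practice, scrape the site and summarize it with an LLM.
    return f"value prop summarized from {website_url}"

def infer_icp(offering: str) -> str:
    # Placeholder: in practice, an LLM proposes an ideal customer profile.
    return "heads of sales at 50-500 person B2B SaaS companies"

def find_prospects(icp: str) -> list[Prospect]:
    # Placeholder: in practice, query a prospecting database against the ICP.
    return [Prospect("Alex"), Prospect("Sam")]

def draft_message(offering: str, prospect: Prospect, step: int) -> str:
    # Placeholder: in practice, an LLM personalizes this per prospect.
    return f"step {step} message for {prospect.name} about: {offering}"

def run_campaign(website_url: str) -> None:
    offering = understand_offering(website_url)   # 1. understand the offering
    icp = infer_icp(offering)                     # 2. draft the ICP
    for prospect in find_prospects(icp):          # 3. find matching people
        for step in (1, 2):                       # 4. sequence: follow up if silent
            if prospect.responded:
                break
            print(draft_message(offering, prospect, step))

run_campaign("https://example.com")
```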
Speaker 1:And then say the sequencing comes into play and there's an actual conversation happening. The first message gets sent out, email or LinkedIn, and the recipient responds and says, hi, nice to hear from you, some version of they're available to chat. The follow-up message: if it's an AI-enabled platform, do you think that should be driven by AI, reading what the response was and then crafting a message back? Or do you think that should be a handoff to the human salesperson?
Speaker 2:There's a concept in product, and I'm going to butcher this, but it's called giving users strong defaults with good escape hatches. I forget exactly what the wording of it is. I would say the default in that case would be to maybe respond automatically. Or actually, my preference would be for it to draft the messages, and then a user could review them, set them up, and approve, approve, approve, but have the option to say: do you want draft-and-approve, or do you want just automated send? Because at some point they might just say, hey, automated send. That's what I would do.
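A minimal sketch of that strong-default-with-escape-hatch pattern, with a stubbed draft function standing in for a real LLM call; the mode names are illustrative assumptions.

```python
# "Strong defaults with good escape hatches": draft-and-approve is the
# default; fully automated sending is the opt-in escape hatch.
def draft_response(reply_text: str) -> str:
    # Placeholder for an LLM call that drafts a contextual reply.
    return f"Thanks for the note! (drafted in response to: {reply_text!r})"

def handle_reply(reply_text: str, mode: str = "draft_and_approve") -> None:
    draft = draft_response(reply_text)
    if mode == "auto_send":                      # escape hatch: no human in the loop
        print("SENT:", draft)
    else:                                        # strong default: human approves each send
        print("DRAFT:", draft)
        if input("Send this reply? [y/n] ").strip().lower() == "y":
            print("SENT:", draft)

handle_reply("Hi, nice to hear from you. Happy to chat next week.")
```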
Speaker 1:I like that. If somebody tells me this is the type of company they are, my thought is very similar to yours on the starting point, right? That they use artificial intelligence to craft the messaging and strategy based off of information you give it that it can train on: your website, any white papers, case studies, things like that. It understands, based off of that, who you are, what you're offering, and who your target market likely is, and then it does all the steps you said. And then, when it comes to the actual messaging back and forth, I would want to see the tool understand the content of the message: okay, is this person potentially saying something that suggests they want a meeting? And if that's the case, let me respond saying, yes, I'd love to chat, here's my calendar link, let's find a time.
Speaker 1:That's something I, as the human, don't necessarily need to do, because the idea is that I want to show up for the meeting itself; the in-between is time-consuming and somewhat low value. But if the client or the prospect comes back with some type of question, or "can you tell me more about who you are and what you do," or something that opens itself up to a bigger conversation, then yes, I totally agree: have the AI tool craft a response, draft it, and then say, please approve. And then, I guess, at any point in time the human, the salesperson, can decide to take over the process and continue manually, but the idea is that the AI tool carries it all the way through to a meeting being set up.
Speaker 1:Yeah, and I feel like I've come across these platforms. I know of one, I haven't used them personally, but I know of one that I think does all of this. Have you heard of it? It's called Jiva AI.
Speaker 2:I have not heard of that one. You might've mentioned it, but I haven't used it or tried it yet. Man, there are so many of these things. Are they...?
Speaker 1:Because you have to manually create your campaign list, going through Sales Navigator. You have to manually create your outbound messaging, and it's the same message for everybody you reach out to, because it's a single message, which doesn't make sense; the whole point of using AI is for it to be customized. Then, when somebody responds, you have to manually go back and forth with them. The AI component comes in on the back end: being able to prioritize that list based off AI's knowledge of who is most likely to respond to you or engage, and then, based off their profile, what type of communicator they are, so that, as you manually communicate with them, you have a sense of the best ways to engage this prospect. Which I think is very interesting, but to me that's a feature; AI is not your overall product. And so I've really been hung up on this idea of companies saying they're AI companies when they're just utilizing AI in different processes.
Speaker 2:Yeah, that's a good question. I don't know where I land on that one. Funny enough, I used to work for a company called Infer, and I would have said we were an AI company, and what they were doing was lead scoring, right? Prioritizing the people who do respond based on their fit, which is a classification type of task: is this a good fit for your business? That is an AI feature, I agree. But yeah, you're right: if you're doing all this other manual stuff and then AI comes in on the back end, that's probably a very underwhelming experience.
Speaker 2:I think one of the fun things about this generation of AI, of LLMs and generative AI, is that there is so much opportunity for wow factor very quickly in user experiences. In that experience you had to wait so long to get to the wow factor, because you don't even get to it until someone actually responds to you, right? Whereas it's so easy for them to generate some content based off some white papers you provide; that's not that hard to do. Yes, it is hard to make sure you produce really good, high-quality content, and you can go down that rabbit hole pretty far, but at least give the user a first draft to work with so they're not doing all the manual steps up front.
Speaker 2:I think the point here, and I don't know if it's the point you were trying to make, but the point that sticks out to me, is that ChatGPT is amazing because you get the wow factor so quickly. You ask it a question you think it wouldn't know; guess what, it knows it. You ask it for a marketing strategy; guess what, it gives you a pretty good marketing strategy. Users expect the wow factor that quickly. If you're going to call yourself an AI product, I think that's a reasonable expectation: with how game-changing we say this technology is, users should get a wow factor pretty quickly.
Speaker 1:I love that positioning. I totally agree, and I think that is the point I was trying to make without knowing how to articulate it, so thank you. Exactly: I felt like, okay, if you're telling me you do personalized outbound using AI, then I expect the first interaction I have with this tool to be a wow. It's going to be: oh, wow, that's something I couldn't have done on my own, or that would have been so much additional effort to do on my own that it wasn't worthwhile, but now, with this tool, it's super easy and simple and off it can go. And instead, what I saw was: well, these are all things I would do anyway, so I don't get it.
Speaker 2:And then, I don't know, I'm a little bit... I'm not convinced on the AI piece of "here's their personality, here's how you negotiate with them or engage with them." I know there is some research in this area; we looked into it a little bit as well. I haven't found it to be particularly effective for me, but I haven't used it a ton yet. So maybe, you know, there are a bunch of stats that say, oh, if you know their personality profile, you're 30% more likely to close the deal, or something like that. I don't think there's nothing to it, but it doesn't feel solid enough for me to want to stand my ground on.
Speaker 1:Yes, I understand that. I'll report back if I hear more on that. Speaking of ChatGPT, and actually Claude too: they might have been freezing the last several days. Every time I've asked for some more extended request, it keeps freezing, getting stuck, saying "error loading," and not doing what I'm asking it to. Are you having the same thing?
Speaker 2:It's interesting. We use the API, so I haven't been using it directly; this has been a busy week of calls for me, so I haven't been in ChatGPT as much. I do get the OpenAI downtime status notifications, and there are a lot of them; they come through quite a bit. Now, each status notification doesn't mean every single user is impacted. But for OpenAI, and I guess this is a good segue: if it was ChatGPT, it could very well be because they were prepping to launch o1-preview, the new model they just dropped yesterday. They kind of surprised the world with it. There have been previous weeks where people said, oh, something's coming this week, something's coming this week, but then, literally out of nowhere, o1 shows up yesterday and they're like, bam, we just dropped this new model. So maybe they were preparing their platform for that, although that wouldn't explain why Claude is throwing errors.
Speaker 1:So yeah, I'm not sure. At first I thought, is it my Wi-Fi? But then it was happening other places too, and I thought, is it mobile? Is it desktop? But it happened across the board. Something I have made a conscious effort about is, if I don't need the most precise information, I've been switching over to the lower-power model, both within ChatGPT, from GPT-4o, and in Claude. Because even though the default it offers on a premium plan is 4o, I feel like I don't need that level of power, so why would I consume it?
Speaker 2:Like from an energy...? We had that episode on energy. Is it from an energy perspective?
Speaker 1:Yeah, I feel like I've had this new realization where it's kind of doing my part: I don't need that detailed of an answer or that amount of information, I'm just asking for something relatively quick, so I'll just downgrade to, you know, a lower-energy, lower-cost model.
Speaker 2:Should we do a startup where... I mean, I'm already doing a startup, so I can't do it right now, but should there be a startup that tracks the carbon usage of every individual's AI usage? If you're using ChatGPT, do you want to know the carbon footprint of those requests?
Speaker 1:It could be interesting, if not as a startup. Maybe we can ask ChatGPT to give us some data on that, to be like: okay, if a user uses ChatGPT X number of times a day, for these types of requests, on these different models, what is their approximate carbon footprint?
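The back-of-the-envelope version of that is simple enough to sketch. Both constants below are rough illustrative assumptions (published per-query energy estimates vary widely), not measurements.

```python
# Rough sketch of a per-user AI carbon estimate. Both constants are assumed
# ballpark figures for illustration; substitute better data where available.
WH_PER_QUERY = {"small_model": 0.3, "large_model": 3.0}  # assumed watt-hours per request
GRID_G_CO2_PER_WH = 0.4  # assumed grams of CO2 per watt-hour (varies by region)

def daily_footprint_grams(queries_per_day: int, model: str) -> float:
    """Approximate grams of CO2 per day for one user's usage."""
    return queries_per_day * WH_PER_QUERY[model] * GRID_G_CO2_PER_WH

# e.g. 50 large-model queries/day: 50 * 3.0 Wh * 0.4 g/Wh = 60 g CO2/day
print(daily_footprint_grams(50, "large_model"))
```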
Speaker 2:That's so fascinating. That would not have crossed my mind, to think about the energy consumption of a particular request, but maybe you're at the forefront of something. I will say, there was an article I read yesterday or this morning: the US government got the heads of OpenAI, Google, and a few other of the big players together, along with some of the heads of the energy companies in the US, I believe, and they wanted to talk about AI's impact on our infrastructure and how much it could potentially tax our infrastructure, to make sure we're prepared for this. So I thought that was pretty interesting. I think they might have even...
Speaker 1:I think they heard us.
Speaker 2:Maybe they did, they probably did. I think they may have even formed a task force around this, potentially, from what I saw. Let me see if I can Google it really quickly. Let's see... yeah, there's a Reuters article: "US officials discuss AI development, power needs at the White House."
Speaker 1:so, yeah, they probably listened to our our episode and and now they're worried about it look at us changing policy, just from our yeah changing the world but speaking of open ai and and going back to this, oh one, I mean, maybe that is why chat gbt was sluggish last couple of days, because that's something else that I think about. I want to talk about the model itself, but something else that I think about is the Speaking of power usage and other AI inputs. There's limited capacity even with an open AI. If they want to drop a new model, what does was some concern about them running out of cash or something like that. And, as you know, costs $700,000 a day to run chat GPT Assumingly that's even without this new model coming into play.
Speaker 2:Yeah, there's a lot there. In terms of capacity: yes, true. I have no data on this whatsoever, but I would imagine OpenAI has access to almost unlimited amounts of cloud compute capacity, right? Like, I'm sure, with the deals they have with Azure, with Google, whoever it is they're doing their cloud compute on. Obviously they have a very close partnership with Microsoft, but it wouldn't surprise me if they're actually sharing across many providers, because they need so much compute.
Speaker 1:True, but there's not even infinite cloud compute.
Speaker 1:Yeah, that's true.
Speaker 2:There's a lot, though, I will say; there's quite a bit out there and more is being built. Not the question you asked, but Elon Musk just built this crazy cluster for training new models, so more stuff is getting built as quickly as possible at this point. And then, in terms of compute, there are kind of two times when compute is needed: the training of the model itself, and the inference, when you actually ask it the question. So inference is: you ask a question, it provides a response. And o1, I mean, training it must have taken up a ton of compute.
Speaker 2:Here's actually something worth mentioning, maybe, maybe related they did sunset some models recently, so they sunset some of the 3.5 turbo models and things like that. So maybe sun setting some of that allows them to free up some capacity for running 01. They also did. They do a lot of what we call rate limiting, right. So 01 preview has pretty strict rate limits and so you can't use it at the same frequency that you can use it's like generally available, available GPT-4.0 as an example, right. So that's how they kind of manage compute as well as they put it out there, they put limits on it and then they see what kind of response they get and they slowly increase the limits over time.
Speaker 1:Super interesting. I wasn't aware of the sunsetting of the older models. That makes a lot of sense, because at a certain point, how can you run all of those on your platform? And as the capabilities get better and better, you don't need the old ones, because they're clunky.
Speaker 2:Yeah. As API users, we got a notification this week that was like: hey, if you're running any tasks on these old models, we're sunsetting them, and you need to migrate or they're just going to stop working.
Speaker 1:So that seems like that's actually very good for your business, yeah because we help people migrate from one model to another.
Speaker 2:And so, any time... even o1-preview coming out: unfortunately, we haven't incorporated it into our platform yet as of today. I wanted it to be day one that we had it in our platform. But even stuff like this is exactly why we built the business we built, because o1 comes out and developers around the world are like: oh, should I move now? I just moved all my stuff from GPT-4 to GPT-4o, and then I moved it to GPT-4o mini; am I supposed to move it to o1 now? And every time you move models, ideally there's a little bit of tweaking and testing of the prompts you need to do to make sure they work on the newest model. And, I know I'm talking a lot today, but I will say an even more interesting thing about o1 is that they actually said it's good for certain types of tasks, but that for other types of tasks you should just stick with 4o.
Speaker 2:So, as an example, it's very good at reasoning tasks, and what they basically did was incorporate the idea of chain of thought. In prompting, chain of thought is the idea that, instead of asking the model to just do a complex task, you say: hey, can you do this complex task? Think through it step by step. By forcing the LLM to think through its steps, it can provide a better answer. So it seems like they incorporated, at a very high level (I'm sure they did something much more advanced than this), this idea of chain-of-thought reasoning baked into the answers, instead of the user having to explicitly ask for it to happen. There's much more under the hood than that, but that's the general gist. So it is good at complex reasoning tasks, but they actually said that if you're just trying to write, like, a sales email, like the LinkedIn outreach you were talking about earlier, you can just stay with GPT-4o; it's actually better at just writing than this new model.
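A toy before-and-after of that prompting idea: the same question asked directly versus with an explicit step-by-step instruction. With o1-style models the stepwise reasoning is built in; with earlier chat models you ask for it in the prompt. The model name and question are placeholder assumptions.

```python
# Compare a direct prompt with an explicit chain-of-thought prompt.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set
question = "A train leaves at 3:40 pm and the trip takes 2 h 35 min. When does it arrive?"

prompts = {
    "direct": question,
    "chain_of_thought": question + " Think through it step by step, then give the final answer.",
}

for label, prompt in prompts.items():
    resp = client.chat.completions.create(
        model="gpt-4o",  # placeholder; any chat model illustrates the contrast
        messages=[{"role": "user", "content": prompt}],
    )
    print(label, "->", resp.choices[0].message.content)
```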
Speaker 1:Interesting. So it seems like what I need is o1, for the task I've been asking ChatGPT to do this week that it keeps failing at, which I thought would be simple but apparently is complex. You'll appreciate this: we're planning a trip. You know, whenever you're planning a group trip, there are lots of opinions, lots of concerns, so I wanted to create a matrix I could send out to everybody to say, hey, here are the top four destinations we've come up with.
Speaker 1:I asked ChatGPT to create a matrix that, for these top four destinations, identifies the best time of year to travel to each place, and then, given the specific hotels and even the specific rooms, searches for the pricing and provides average costs of those hotels during those time periods, plus average flight costs. And it could not figure it out. It kept giving me all sorts of weird information, and then combining the hotel price plus the flight per person to say, oh well, here's the total cost per person. And I said no, and it just kept misfiring, and I'm not sure we ever got there. I had to do more work telling it what to do than if I had just created the matrix on my own without ChatGPT, defeating the purpose of AI. So, one example of it not always doing what you want. But maybe o1 is the answer.
Speaker 2:Yeah, I was gonna say: this type of step-by-step thing, first research these things, then create a table, then divide this by that, is exactly what o1 is designed to do, these more complex tasks. Which is funny. The other thing I thought would be a fun episode for today, and I think we can save it for the future because I know you've got to go soon, is GPT's, or LLMs', ability to do math.
Speaker 2:I think people don't realize LLMs are actually not very good at math, which is really funny, because we expect them to be so smart. A simple example: we were experimenting with some of these sales outbound use cases at one point, and if you say, write me an email that's max 300 words long, it will commonly write you a 400-word email. It just can't count very well. And it's funny, because it's advanced, it can pass the bar, it's got, you know, a 180 IQ, but it can't write you a 300-word email.
Speaker 1:It's just really funny. But it makes sense if you think about how it's trained and built, in terms of tokenization. When you say "write a 300-word email," it knows what that means, but it doesn't understand that its output needs to be exactly 300 words, because the tokens don't correspond one-to-one to words.
Speaker 2:Yeah, that's exactly right. You hit the nail on the head: it doesn't understand that representation. It doesn't see it the same way we see it, even though we think it does. So it's this funny dichotomy between us thinking it's so smart, and it also being very, very dumb in a specific way that seems so basic to us.
Speaker 1:Yes. Another example of that, which I'm sure you saw floating around, is when you ask how many R's are in the word "strawberry," and it says two. No matter how many times you try to convince it otherwise, it says, no, it's two R's. I eventually got it to acknowledge that it's three R's, but I had to tell it: do not count the double R as a single R; those are two separate R's. And then it said, oh, okay, if that's the case, then yes, three R's. From a tokenization standpoint, it just sees the double R as one R.
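You can see the tokenization point directly with OpenAI's open-source tiktoken library. The exact token boundaries depend on the tokenizer, so treat the split as illustrative; the takeaway is that the model receives chunks, never letters.

```python
# Show how "strawberry" reaches the model: as token IDs, not characters.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # tokenizer used by GPT-4-era models
ids = enc.encode("strawberry")
pieces = [enc.decode([i]) for i in ids]
# Whatever the split turns out to be, counting letters inside these chunks
# is not a native operation for the model.
print(ids, pieces)
```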
Speaker 2:Yeah, I think I've said this on this podcast before: it's like book smart versus street smart. Book smart, it's the smartest thing on the planet; street smart, it doesn't know how many R's are in strawberry.
Speaker 1:Yes, exactly. It can do these other really complex tasks, but not something basic that a first grader knows. Okay, well, I think we have a lot more to talk about, for sure, but let's call it for today and we'll regroup.
Speaker 2:I'm excited for our new episodes.
Speaker 1:Me too. Thanks, Paul Bye.