
The AI Coach
AI meets human behavior in these fun and insightful conversations with Danielle Gopen, a top founder coach and advisor, and Paul Fung, an experienced AI founder. We blend AI breakthroughs with real-world insights on business strategy, industry disruptions, leadership, and psychology while cutting through the hype to ask the hard questions.
This is for decision-makers like founders and business leaders who want to harness AI's potential thoughtfully, exploring both the technological edge and human elements that drive long-term success. Join us to challenge the status quo, navigate the shifting landscape of an AI-powered world, and get new information worth sharing.
What's Happening At OpenAI?
OpenAI has seen a wave of executive departures—were they driven by ethical concerns, lucrative paydays, or something else entirely? How can companies retain top talent when skyrocketing valuations create opportunities for liquidity? In this episode, we dive into these questions and dissect general data privacy concerns inherent in how much information we are, willingly or unwillingly, sharing with big tech.
We love feedback and questions! Please reach out:
LinkedIn
Episode Directory
Mentions of companies, products, or resources in this podcast or show notes are for informational purposes only and do not constitute endorsements; we assume no liability for any actions taken based on this content.
Speaker 1:Hi Paul.
Speaker 2:How are you?
Speaker 1:Good, how's it going?
Speaker 2:I am good.
Speaker 1:So I do want to kick today off with what's happening at OpenAI, with a lot of executives leaving recently, and what that means on a human level, human nature, and we can kind of meander from there. How does that sound?
Speaker 2:That sounds great.
Speaker 1:Awesome.
Speaker 2:But because I happen to have it open in my browser, let's start with the OpenAI departures. I think they're interesting. To answer one of your questions, around whether some of them were for moral or ethical reasons: some of them clearly were, I think. And so I was actually looking at this article. It's kind of interesting.
Speaker 2:There have been nine key executives who have left OpenAI this year, which is pretty wild, including, most recently, their CTO, but also some of the early ones: Andrej Karpathy, Ilya Sutskever, Jan Leike, John Schulman. Some of these were on the superalignment team, and superalignment was the term they were using for keeping AI safe, essentially. Once you have artificial superintelligence, ASI, like we've talked about here before, how do you make sure that superintelligence is being used for good and not for evil? And yeah, so some of those folks left. I know Ilya, well, I'm talking like I know Ilya; I do not know Ilya, to be clear. But what news reports say is that he went and raised a billion dollars to work on this idea of safe superintelligence. So certainly some of the people who departed did have concerns around the morality or ethics of AI and how OpenAI was handling it, and they chose to leave and work on their own thing.
Speaker 1:I'm going to say the word scary, because it is scary to think what the insider knowledge is on this thing that's so big, right? So, for people who aren't following it as closely: OpenAI just closed the largest fundraising round, I think, ever in history. It's just over six and a half billion dollars at a valuation of 160-something billion. Just incredible. And that's for a quote-unquote company; let's not forget there's still a nonprofit organization that is now in the midst of turning into a for-profit company. But to see that happen is obviously astounding. And what that means for early employees is they just got a serious, at least on paper, boost in their net worth.
Speaker 1:You and I talked about this briefly; there's a very strong secondary market for OpenAI shares. So for some of these exits, I'm assuming that was an opportunity they took advantage of, to get that liquidity and walk away. There are a few dynamics to talk about here, but again, going back to the scary part: what do they know that they're saying no to and walking away from? And for the rest of us, what should we know that we don't?
Speaker 2:Yeah, all I remember from when Ilya left, because that was the big news, was the question of whether OpenAI was handling it in a safe way. The top line I remember was that OpenAI was prioritizing commercial efforts over safety, potentially: pushing ChatGPT out earlier, pushing forward with newer versions of the model that maybe hadn't had as many safety measures built into them, or something along those lines. So yeah, that was the outsider rumor mill, and I think where there's smoke there's fire. But who knows.
Speaker 2:The insider, actual information might have been even scarier, which I guess is a little bit worrisome to think about. I would say, in general, I'm very much an outsider; even though I'm kind of a founder in this area, I have no insider knowledge on OpenAI. I mean, there's nothing we see commercially that seems concerning from the outside, still, at this point. And this is six months after Ilya left, but that's still a very short time period. It could be that there really are things that should be concerning on the inside.
Speaker 1:Yeah, I mean, if we use social media as the precursor to this. Going back to when Facebook first launched, it seemed relatively innocuous: oh, I just connect with people I know, and they're validated and vetted because they have email addresses from those universities. We're more or less in the same network, even if we don't know each other yet, and you just felt a bit more comfortable. And then, as you got comfortable with the platform, the idea of uploading your photos for people to see felt like, okay, sure, I'll do that. And now, all the way to today, people are putting their entire lives on this platform.
Speaker 1:One, who knows who's seeing it out there in the world? But two, for that company: why is Meta what it is? It's because they have access to, and storage of, everybody's data on everything. And I feel like you can assign that same idea to other types of activities where at first there might have been some skepticism or aversion or obstacle to use, but then over time you use it a little bit, you kind of forget that you ever felt uncomfortable with it, and then you're so in it. Uber is another one: okay, get into this random stranger's car and they'll drive you around? Twenty years ago, absolutely not. Now? Okay.
Speaker 2:You know, along those lines, a thing I've recently been seeing on social media platforms, I saw this on Instagram recently and also on LinkedIn, is people sharing these viral posts about how to opt out of their data being used in AI models.
Speaker 2:So I saw somebody post on Instagram saying, hey, make sure you change the setting, either in Instagram or somewhere else, so that your data isn't used in AI models. And then I saw the same thing on LinkedIn. It kind of makes me wonder if AI is going to be the United States' GDPR moment. Are you familiar with GDPR? The European data rules, basically. They're very strict, much stricter than we are, about how companies can use European citizens' data, and America didn't adopt GDPR. But I wonder if AI will be the tipping point for this country to adopt something along those lines, something where you're automatically opted out unless you explicitly opt in, because people seem to be much more afraid of AI than they were of social media. Or maybe, because we learned our lesson from social media, people know what to look for now.
Speaker 1:Yeah, I think it's so crazy that these companies have the ability to auto-opt you in. To me, that is mind-boggling. When I saw that, I thought, wait, what? And, even being the AI enthusiast that I am, I turned it off, because I just think it's an invasion of privacy. Tell me that you want to use my data and then give me the choice; don't auto-opt me in when I have no idea it's happening.
Speaker 1:And that's what I was going to say about ChatGPT. Similar to Facebook and Uber and activities like that, when I first started using ChatGPT, I felt a little skeptical. I was very vague in the information I would give it, never my name, never anything identifying; I'd just ask it questions. And now here I am uploading my bio, everything I've done in the last 20 years, asking, what do you think of this? I just got comfortable with this product, and now it's become a part of my day-to-day. That will happen with more and more Gen AI over time. But what does that really mean for us?
Speaker 2:Yeah, and one thing that comes to mind for me is that in social media, at least, there's a value trade. Whether we're explicitly acknowledging it or not, the value trade in social media is that I upload photos and my friends get to see them; I get value out of it because I got to share photos and my friends interact with them. With an AI model trained on some post or comment I made, the value trade is far less clear. I suppose they could make a really exaggerated claim: oh well, we will then provide you a tool, trained on everyone's data, that can write better LinkedIn posts, and you're one small part of that. But it feels like I've given them something big, my online persona, my identity, my style of writing, and they've given me something very small in return: my small piece aggregated into this bigger piece of functionality. So the value trade feels off there, and I wonder if that's what inherently makes people feel uncomfortable.
Speaker 1:I totally agree with that for LinkedIn in particular, and even Instagram. I feel annoyed by Instagram's algorithm these days, where they show me all these accounts; because I interact with one, they keep showing me only that kind of account. But I really just want to see my friends. I don't want to see these random things. Going back to the string of departures: let's say there was no deep moral or ethical reason for them to leave OpenAI, which obviously there was for some of them, but let's say there wasn't at all. Just from a human nature perspective, when you have worked somewhere and been enriched to that extent, just dealing with the normal workplace drama and the general frustrations we all experience, at a certain point there's probably a calculation done to say, well, is this really worth it?
Speaker 1:Yes, I'm working on this thing that I feel so strongly about and want to be a part of creating the future. But at a certain point, you're also a person in that position thinking, well, I don't want to deal with this anymore, or I want to go somewhere else, or do something else, or whatever the alternative is. So how do you keep really good talent working on these hard problems when they're making that much money and have so much optionality?
Speaker 2:Yeah, I think that's a really good question.
Speaker 1:NVIDIA too, by the way. I saw something that said something like 70% of their employees are millionaires.
Speaker 2:I mean, I think it might even be higher than that. I've been thinking about this, and I'm pro the liquidity in the market, to be honest, in terms of people's choice and optionality to leave. From a corporate standpoint, if I wanted to represent corporations and say, oh, you should be able to keep talent, well, what does happen in the startup marketplace today is that you give people equity and then use that equity as handcuffs, essentially. You have a four-year vesting period with a one-year cliff, so you don't get any of your equity until one year, and then you vest over four years, and for most companies there isn't liquidity in the market. And I just want to clarify something you said earlier about there being some liquidity: the only reason OpenAI has liquidity is that there's a thing called the secondary market. They're obviously not publicly traded, so there are these marketplaces online where private investors, either individuals or sometimes investment funds, will buy private company shares off of you at a negotiated price; the market clears at whatever price people decide. That type of secondary market liquidity is really only available to employees of either very high-growth companies or companies where the valuation is over a billion dollars. I know this because I've been in startups for a little while and hold some equity in different companies. So yeah, it is interesting.
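For listeners who want the vesting mechanics spelled out, here is a minimal sketch in Python of the standard schedule described above, a four-year vest with a one-year cliff. The grant size and checkpoints are hypothetical, purely for illustration.

def vested_shares(total_granted: int, months_since_grant: int,
                  vest_months: int = 48, cliff_months: int = 12) -> int:
    # Standard startup vesting: nothing before the cliff, then linear monthly.
    if months_since_grant < cliff_months:
        return 0  # before the one-year cliff you own nothing
    months = min(months_since_grant, vest_months)
    return total_granted * months // vest_months

# Hypothetical 10,000-share grant:
for m in (6, 12, 24, 48):
    print(m, "months:", vested_shares(10_000, m), "shares vested")
# prints 0, 2500, 5000, and 10000 shares respectively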
Speaker 2:I think more startup employees should have more liquidity, but from the employer's point of view, it is going to be harder to retain talent. I personally think that puts more of an onus on the employer to have a good culture and treat their employees well. So I'm kind of pro-liquidity, because at the end of the day, if you're the CTO of OpenAI, and she's probably made God knows how much, $10 million, $50 million, $100 million, I think she's kind of done her job. I mean, she brought this thing to the world. I think she should have the optionality to say, you know what, I'm going to walk away now and work on something else. And if it is an incredible place to work, and she does love her work and feels valued, maybe she would stay, right? So I'm pro liquidity and optionality, and I think it puts the right pressure on companies to maintain, not to say a fun place to work, but a good place to work, whatever good means.
Speaker 1:And, I agree, a meaningful place to work, so that even with that amount of money, you're still doing it for purpose and because it aligns with what you care about. I was going to say, if you were anti-liquidity, that's not very Chicago economics, free market of you.
Speaker 2:Yeah, I think that's right. That's exactly how I was thinking about it. There's another world where I could say, well, as a CEO and founder, I want to have handcuffs on my employees so it's hard for my best ones to leave. But I think that's not a free market; that's an artificially constrained market, and I don't think that's how it should work.
Speaker 1:I agree with you. Maybe this is a takeaway question, I like to call that an inquiry in my line of work, something to sit with: what can you really do to keep top talent in these types of extreme valuation situations, with OpenAI obviously being the most extreme? Even to your point, companies still hit a billion dollars these days; unicorns are fewer and farther between, but they're still out there. How do you keep really top talent with you? That's something I want to give more thought to. But I'm also just thinking about the math. Let's make the math easy and say OpenAI is valued at $160 billion.
Speaker 1:When I worked in finance, my background, we did due diligence on these kinds of funds, and most of them are not that big; we're talking about funds where a couple of billion would probably be on the higher end. And, to be fair, I looked at these 10 years ago, when this was a much smaller market than it is today. But still, for a fund to go and buy out a single person's piece of equity at these valuations: as much as OpenAI is valuable and they want those shares, I'm guessing they're not going to do it at par, so it's going to be at some type of discount. But realistically, what would they pay for that stake, $100 million? That's crazy for one person's equity.
Speaker 2:Well, unless I'm doing the math wrong on this, if they're worth $160 billion and you said that she maybe had 1%, wouldn't that be $1.6 billion? One percent of $160 billion, right?
Speaker 2:right, well, because, well, because 1.6 billion, I couldn't even compute, right, right and like then I started doing the math in my head, thinking well, I guess first of all she wouldn't be fully vested, but let's make the math simple. And I'm sure we're, first of all, I'm sure we're 100 wrong on all of this math. But let's say she was fully vested in 1.6 and she's listening.
Speaker 1:She'll be like, wait, what?
Speaker 2:even crazier is she might have been a co-founder. She might actually own more than one percent, which would be insane too right?
Speaker 1:yeah, I'm not sure the origin.
Speaker 2:So let's say, to make the math easy, she owns $1.6 billion worth. If she leaves, depending on how they started the company, and maybe she has some leverage here, but this is what's screwed up about startup equity: when she leaves, typically what happens in the market is that a 90-day clock starts on whether or not you want to purchase your options. And the thing that's screwed up about that is that she can purchase her options at whatever valuation they were granted to her at. So let's say she purchased her options at one cent per share, or whatever, per option. So then maybe she pays.
Speaker 2:Some people pay, like, $10,000 or whatever to purchase however many options they were given. But then the way the US government treats this for tax purposes is that they take the current latest valuation, so $1.6 billion, and actually that's fuzzy math, because they actually take the 409A valuation, which is different from the valuation at which the company raised. But for simplicity's sake, let's say it's $1.6 billion, which isn't quite right. They take that, minus whatever you paid for the options, and they consider that a gain, as if it were liquid. They actually say that this is how much money you've just made, and then they tax you on it; they use that for AMT purposes. And this is what I hate. I know you have some ties into government; if you could get them to fix this...
Speaker 2:I think this is criminal. I think it's terrible how it's treated, because what it does to startup employees is that they often don't have the cash to pay this tax bill, because the gain isn't actually liquid. In OpenAI's case, yes, you could sell some shares to pay the tax bill, but at most startups it's not liquid. So now you have this huge tax bill and equity that isn't liquid, that you can't cash out. Oftentimes people have to make a choice: either lay out a ton of cash out of pocket and risk taking a huge loss if the company goes to zero, or walk away from the equity and not buy their shares. They've worked three or four years at this company and then they're not going to see the benefits of their hard work, and I think that's unfair as well.
Speaker 2:So this is me on a soapbox about how the taxation of startup equity in this country is really screwed up, for that reason. To me, it's the equivalent of taxing unrealized gains, which is just insane; it's like truck drivers and teachers getting taxed on their unrealized gains. I just don't think it's a big enough issue to bump up to the top of the list and get fixed, but I really wish it would, because it's really unfair to a lot of people in the startup ecosystem.
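To make the exercise-tax problem concrete, here is a rough sketch in Python of the AMT math described above. All the numbers are hypothetical; real 409A values, strike prices, and AMT calculations vary, and this is an illustration, not tax advice.

# Hypothetical option exercise, illustrating the AMT "phantom gain" problem.
options = 100_000      # options granted (hypothetical)
strike = 0.01          # exercise price per option, in dollars
fmv_409a = 20.00       # 409A fair market value per share at exercise (hypothetical)
amt_rate = 0.28        # illustrative AMT rate; the actual AMT computation is more involved

cost_to_exercise = options * strike            # cash out of pocket: $1,000
paper_gain = options * (fmv_409a - strike)     # spread the IRS sees: $1,999,000
approx_amt_bill = paper_gain * amt_rate        # tax owed on illiquid shares: ~$559,720

print(f"cost to exercise: ${cost_to_exercise:,.0f}")
print(f"taxable paper gain: ${paper_gain:,.0f}")
print(f"approximate AMT bill: ${approx_amt_bill:,.0f}")
# The shares may not be sellable, yet the tax bill is due in real cash.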
Speaker 1:Now we're really going off topic here, but I'll say this one point, and I don't want to say more until I've thought more about the topic. For a country that's built on capitalism and, in theory, a free market, you should by default have a pro-business environment. And pro-business means making it easy for people to start and run and grow businesses, and for people to be entrepreneurial. Entrepreneurial doesn't just mean the founder; it means people who are willing to take a risk and go work at a startup when they don't know what the outcome is going to be. There isn't a guarantee to it. Not that there necessarily is in the corporate world, but there's at least a little more certainty when it's a company that's been around for decades and is publicly traded. To encourage and support an entrepreneurial ecosystem, which is what the US was built on, it seems crazy that there's that misalignment.
Speaker 2:Yeah, I could not agree with you more.
Speaker 1:Yeah, let's come back to this, because I think it's a whole episode in and of itself. Did we miss anything about OpenAI and keeping top talent? I feel like there's something that needed to be said, and I'm not thinking of what it was.
Speaker 2:Yeah, I don't know. I think we covered it pretty well.
Speaker 1:No, no, I know what it was. It was going back to when you were doing the math.
Speaker 2:Right, so we got into the tax part of it. It's more than likely that even if she's leaving, she's selling some portion of her shares and keeping some equity in OpenAI in case they grow, potentially, unless she thinks they've topped out. But you're right: even if we split that $1.6 billion in half, it's $800 million, and if we split that in half again, it's $400 million. The numbers are just absolutely insane, and it's kind of an unprecedented situation, right? We've never had a company get this big, this fast. So there are probably all sorts of novel financial things happening as these people try to cash out.
Speaker 1:And so there's probably all sorts of novel financial things that are happening, you know, as these people are trying to cash out yes, I feel like it's crazy because we do talk a lot about open AI and obviously there are a lot of other companies in this landscape, Some of them already really established the metas of the world, but for this single company to have experienced this trajectory is insane. Also, I don't know, maybe this is my cynical side coming out, but sometimes I question I'm like going back to this behind the scenes situation how is it that this happened this way? Like who has information on what I just think is like kind of crazy? But then there's also, you know, on a more serious note, there's still the question of is OpenAI a long-term sustainable company or not, Because there's a lot of talk that it possibly isn't.
Speaker 2:Yeah, and I think you would know more about this than I would with your finance background. But a lot of these valuations, I mean, they're all driven off of revenue and then, basically, growth rate. So what is your revenue? How fast is it growing? Right, and so I've got an article in front of me.
Speaker 2:OpenAI's monthly revenue hit $300 million in August, up 1,700% since the beginning of 2023. The company expects $3.7 billion in annual sales this year, and their revenue estimate for next year is $11.6 billion. So when you talk about whether they're sustainable, one way I think about it is: they'll always be a major player, but can this growth rate continue? Their valuation is based on a multiple, and their multiple is based on this crazy growth rate. Can they sustain a competitive advantage in this market, where Anthropic already exists, Mistral exists, Llama exists?
Speaker 2:There are alternatives, and I think, from a technical standpoint, the question is: what is their moat? As an example, their training data isn't their moat; they don't own the data they trained on. They trained on the internet, and a lot of people have followed suit. From a developer standpoint, OpenAI Dev Day was earlier this week, and I think even Sam Altman has acknowledged that models will become commoditized, and so their strategy is to build enough infrastructure around their models that developers continue to use their ecosystem. Build out this entire ecosystem. I think it's a good strategy. Whether or not it's a strategy that will warrant a $160 billion valuation, and future valuations higher than that, we will see. But it's going to be really interesting nonetheless.
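As a back-of-the-envelope check on the revenue figures quoted above; a rough sketch in Python, not OpenAI's actual accounting.

# Back-of-the-envelope check on the reported revenue figures.
monthly_aug = 300e6              # reported August monthly revenue
run_rate = monthly_aug * 12      # annualized run rate
growth_pct = 1_700               # reported growth since the start of 2023, in percent
implied_start = monthly_aug / (1 + growth_pct / 100)  # implied early-2023 monthly revenue

print(f"annualized run rate: ${run_rate / 1e9:.1f}B")  # $3.6B, close to the $3.7B expected
print(f"implied early-2023 monthly revenue: ${implied_start / 1e6:.0f}M")  # ~$17M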
Speaker 1:It's really crazy. I also just wonder what it does for overall valuations to have a company valued at that. Does that start to skew other valuations?
Speaker 2:Yeah, that's a good question. You would think it would, kind of like housing prices: if one home on the block sells for a higher amount, you would think it would skew all the other valuations. The other dynamic at play, I think, is that it would affect valuations only if the multiple is higher than traditional multiples. As an example, if they're at 160 because they're getting a 20x multiple, well, 20x multiples are not that crazy to hear about, actually. But if their multiple is like 150x, then it would really skew things. This happened in the zero interest rate period, the ZIRP period as we call it, a few years ago, when we were seeing valuations at 100x at Series A, which was just outrageous, and it did affect the whole market and blew it up into a very inflated bubble for a while there.
Speaker 1:Well, can't we kind of do the rough math on the valuation? Obviously, we don't know exactly what metrics they use, but did you just say the revenue is $3 billion?
Speaker 2:Yeah, so they're going to do $3.7 billion in sales this year, $11.6 billion next year.
Speaker 1:So let's say they went off the next 12 months' projected revenue. Simple math, then we're talking about something like a 13x or 14x multiple, which isn't crazy. It's really not.
Speaker 1:Yeah, it's not crazy. And then, obviously, if they went off current revenue, the trailing 12-month revenue, then we're talking about more like 40x, which would be a different story. But that is an interesting point. All right, well, lots to watch with this one. I think we're coming up on time for today, so let's call it and see what's to come.
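For reference, a quick sanity check in Python of the multiples discussed here, using the reported figures; a rough sketch, since real valuation math is more involved than a simple revenue multiple.

# Rough revenue-multiple check using the figures quoted in this episode.
valuation = 160e9       # reported valuation, roughly $160B
rev_this_year = 3.7e9   # expected sales this year
rev_next_year = 11.6e9  # projected sales next year

print(f"current-year multiple: {valuation / rev_this_year:.1f}x")  # ~43.2x
print(f"forward multiple: {valuation / rev_next_year:.1f}x")       # ~13.8x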
Speaker 2:Sounds great. Yeah, I think this was a fun conversation. We didn't know exactly where we were going to take this one, but it was an enjoyable chat.
Speaker 1:Me too. Talk soon.