The AI Coach

Outsourcing Your Brain and Your Friendships

Danielle Gopen and Paul Fung Episode 13


Unlock the secrets of the LinkedIn algorithm (hint: consistency, unpolished content, and videos). Furthermore, as AI simplifies problem-solving and amplifies instant gratification, what are the implications for children's development, specifically their grit and resilience? Bonus question: Does a social network where AI shapes your online interactions enhance social media or dilute it?

We love feedback and questions! Please reach out:
LinkedIn
Episode Directory


Mentions of companies, products, or resources in this podcast or show notes are for informational purposes only and do not constitute endorsements; we assume no liability for any actions taken based on this content.

Speaker 2:

Hi Paul.

Speaker 1:

How are you?

Speaker 2:

Good, how's it going?

Speaker 1:

I am good.

Speaker 2:

So I realized in listening to last week's episode that I asked you the question about what does LinkedIn determine is good content, and then we actually didn't get to the answer. So I do want to kick today off with hearing that.

Speaker 1:

Sounds great.

Speaker 2:

Awesome. So please enlighten us, because I need to know what does LinkedIn think is good content?

Speaker 1:

Yeah, that is a great question, and let me caveat my answer by saying I am not an expert on this. I'll give an answer based on the little bit of research I did and the conversations I had when we did our product launch, because we're B2B SaaS, and LinkedIn was actually the primary place where we did our launch. LinkedIn changed its algorithm a couple of years ago. I'm actually on a website right now that describes what goes into the LinkedIn algorithm, but off the top of my head, the thing we were told when we did our launch was that it's very important for your post to get engagement in the first hour. My co-founder actually thought it was important to get engagement in the first eight minutes or so, though I'm not sure we know the ground truth on that one.

Speaker 1:

But essentially it seems like what LinkedIn does is share your post out to your immediate network and look at how they engage with it. Do they comment on it, do they like it? If there is engagement in the first eight minutes or first hour, whatever it is, it then goes one layer deeper: it shows the post to people one layer removed from your network, and if there's still more engagement, it shows it to a wider network, and so on and so forth. So there are ways to optimize for this. When we did our launch, as an example, we let a lot of people in our network, especially people with large followings, know exactly when our post was going to go live so that they could engage with it immediately. So I think that's one thing.
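
The tiered fan-out Paul describes can be sketched as a toy rule. Everything here is invented for illustration (the thresholds, hop counts, and the very idea of a single engagement-rate cutoff); LinkedIn's actual algorithm is not public.

```python
# Hypothetical sketch of tiered feed distribution: a post only travels one
# network hop further if early engagement clears a (made-up) threshold.

def next_audience(current_hop: int, engagement_rate: float,
                  threshold: float = 0.05, max_hops: int = 3) -> int:
    """Return the next network hop to distribute to, or -1 to stop.

    hop 0 = your immediate connections, hop 1 = their connections, etc.
    """
    if engagement_rate >= threshold and current_hop < max_hops:
        return current_hop + 1
    return -1

# A post that keeps clearing the bar fans out hop by hop, then stalls:
hop = 0
for rate in [0.10, 0.07, 0.02]:   # engagement measured at each hop
    nxt = next_audience(hop, rate)
    if nxt == -1:
        break
    hop = nxt
# the 0.02 round falls below the 5% threshold, so the post stops at hop 2
```

This is why early engagement matters so much in this model: a miss at hop 0 means the post never reaches anyone beyond your immediate network.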

Speaker 1:

The other thing that I would just say quickly, because I think it's interesting, is prioritizing a thing called dwell time. I'm actually not sure if LinkedIn is doing this, but I had read about some other platforms potentially doing it. Dwell time is basically how long users actually spend on your post when they scroll through the feed. Do they stop on your post? If they stop, they dwell, which means they're more than likely reading it, maybe not making use of that information, but finding it interesting enough to stop and read, right? So to the extent that you can optimize for more dwell time, by writing something more engaging, or adding a video people will stop and watch, I think those types of things might factor into the algorithm as well. But again, I'm certainly not an expert on this; that's just some of the stuff we heard anecdotally.
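
A dwell-time signal could be blended into a ranking score along these lines. The weights and the dwell cap below are pure assumptions for illustration, not anything any platform has published.

```python
# Illustrative only: a made-up ranking score that rewards dwell time,
# as described anecdotally above.

def rank_score(dwell_seconds: float, likes: int, comments: int) -> float:
    """Blend dwell time with explicit engagement. Comments weigh more than
    likes, and dwell time is capped so one long read can't dominate."""
    capped_dwell = min(dwell_seconds, 30.0)
    return 1.0 * capped_dwell + 2.0 * likes + 5.0 * comments

# A video post people stop on can outrank a skimmed text post with more likes:
video = rank_score(dwell_seconds=25, likes=3, comments=1)   # 25 + 6 + 5 = 36
text = rank_score(dwell_seconds=4, likes=6, comments=0)     # 4 + 12 = 16
```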

Speaker 2:

I have also heard about the engagement within a certain time period, the first hour and, I think, also within minutes of posting. On the video thing, interestingly, our posts about our podcast episodes get the least engagement of all the posts that I do, and those are the ones with the sound clips, usually about 30 seconds of an extract from the episode.

Speaker 2:

So you would think people would listen for at least those 30 seconds and that would encourage LinkedIn to show more of it, but it seems not to like those. I'm not sure what's going on. It's a little bit of a mystery.

Speaker 1:

I wonder if it recognizes that it's similar content week after week and deprioritizes something along those lines. The sentiment I'm getting from the algorithm is that they're trying, in general, to prioritize thought leadership. They want business leaders to share insights that people find very useful. I was going to say they don't want it to be a marketing platform, but I'll caveat that: they only want your posts to be marketing if you're paying for them. So our podcast posts that we put out there, we're not paying them anything, but if we pay for a sponsored post, they want that to be marketing.

Speaker 2:

Interesting! Well, I was going to say, isn't our podcast thought leadership? I try to make the posts something related to what we talked about, to have some intel, but maybe I have to work harder. Also, I'm guessing they're using AI to determine how to prioritize posts.

Speaker 1:

Yeah, I have to imagine they are. I mean, they almost certainly are. But I think one question is: are they using traditional ML, traditional AI? Are they using, say, sentiment analysis classifiers not based on language models, or are they using gen AI, large language models? They're probably not using large language models, because one, I think large language models didn't really come out until after their latest algorithm change, and two, large language models probably aren't scalable enough for their use case. If you think about the number of LinkedIn posts that are posted every single day, they have to make a decision for every single post about how much to promote it, and I would say gen AI probably isn't at that scale yet.
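
The scalability point can be made concrete with a toy stand-in for the "traditional ML" path: a linear bag-of-words scorer with fixed weights. Real systems would learn the weights; the words and weights below are made up. The point is the cost profile: scoring is a few dictionary lookups per post, cheap enough to run on every post, unlike a per-post LLM call.

```python
# Toy linear scorer: sum per-token weights; positive means "promote-worthy".
# Weights are invented for illustration only.
WEIGHTS = {
    "insight": 1.5, "learned": 1.0, "data": 0.8,
    "click": -1.2, "shocking": -1.5, "free": -0.8,
}

def promote_score(post: str) -> float:
    """Score a post by summing the weights of its known tokens."""
    return sum(WEIGHTS.get(tok, 0.0) for tok in post.lower().split())

promote_score("what we learned from our launch data")   # positive
promote_score("shocking free trick, click now")          # negative
```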

Speaker 2:

Do you see a use case for that, though, going forward?

Speaker 1:

100%. I think one thing that could be really interesting, the first thing that pops to mind for me for some reason, is to create these agents or bots that are different personas, and then ask the agent: do you find this post interesting? An example might be creating a B2B SaaS marketer agent, and then for every post that came through that was targeted at B2B SaaS people (and when I say targeted, I'm now thinking about their ad platforms, where you can choose the personas you're targeting), you could ask the agent: hey, is this an insightful post? Is this original thought, or does it seem clickbaity? And the agent would say, yeah, I find this insightful, and then maybe you'd prioritize it, something like that.
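
The persona-agent idea could be wired up roughly like this. The `call_llm` function here is a stand-in stub, not a real API; in practice you would swap in an actual model client, and the prompt wording is just one guess at how to phrase the persona.

```python
# Sketch of a persona agent that judges posts. All names are hypothetical.

PERSONA_PROMPT = (
    "You are a B2B SaaS marketer. Rate the following LinkedIn post as "
    "'insightful' or 'clickbait', answering with exactly one word.\n\n"
    "Post: {post}"
)

def call_llm(prompt: str) -> str:
    # Stub: a real implementation would call a model here. This canned
    # answer just lets the sketch run end to end.
    return "insightful"

def persona_verdict(post: str) -> bool:
    """True if the persona agent judges the post worth prioritizing."""
    answer = call_llm(PERSONA_PROMPT.format(post=post))
    return answer.strip().lower() == "insightful"
```

One open question with this design, which comes up immediately below, is whether the judging model is too generous to be a useful filter at all.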

Speaker 2:

So that's interesting. I do ask Claude or ChatGPT sometimes if it finds my post insightful before I post it. And it tells me it does, so then I post it.

Speaker 1:

I think the funny thing is, like we've talked about before on this podcast: are LLMs overly optimistic? We just went through this exercise. We did a little bit of R&D work in our product around one of these ideas, called LLM eval, which is asking an LLM to evaluate the generation, the output, of another LLM.

Speaker 1:

So if I say, hey, ChatGPT, write a sales email, and then I take another model, or even the same model, and say, rate that email on a scale of one to five, it turns out the distribution of the ratings looks a lot like, the best thing I would compare it to is the Uber driver threshold, where everybody's a five, and if you're less than four, if you're a three, that's basically a zero. That's what the distribution of LLM ratings looks like, unless you use some very fancy methods to make it more linear; there are some interesting Twitter, or X, threads about how to do that. But yeah, it's pretty fascinating. They're very optimistic. LLMs are golden retrievers, which is great.
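
One simple way to spread those piled-up ratings back out is rank normalization: map each rating to its percentile within the observed batch. This is an assumption on my part about the kind of "fancy method" meant here, not necessarily the one from the threads Paul mentions.

```python
# Rank-normalize a batch of skewed LLM-eval ratings onto [0, 1].

def rank_normalize(ratings: list[float]) -> list[float]:
    """Map each rating to the fraction of other ratings strictly below it."""
    n = len(ratings)
    return [sum(r2 < r for r2 in ratings) / (n - 1) for r in ratings]

raw = [5, 5, 5, 4, 5, 4, 3, 5]   # typical 'golden retriever' output: piled at the top
flat = rank_normalize(raw)
# the lone 3 maps to 0.0, the 4s to 1/7, the 5s to 3/7
```

The relative ordering is preserved, but the scores now carry information again: a "4" from a rater that hands out mostly 5s is revealed as a below-median result.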

Speaker 2:

They are so friendly. I'm also not going to extrapolate that that train of thought came from you thinking that my content is...

Speaker 1:

No, no, no, Not to say that your content is mid.

Speaker 2:

LLMs are being friendly, and they're like, well, it could be better, but I told you it was good. Maybe LLMs are really just there to make us all feel good about ourselves. They might be. I mean, they really are a nice little co-worker; I like them. I also had this random startup idea. I did about two seconds of research into it and then went back to my other, actual work. Those are the best ideas.

Speaker 2:

But it was, yes, I wanted to call it Click Circle or something. It's this AI-driven platform where, basically, you sign up, you build your profile, and then you connect with other people, like in any social media network, and you can prioritize them on that list. And they have to consent to it too, right? It's two-sided consent. We're basically saying: okay, now, anytime anyone in this group posts something on a social media platform, AI will automatically like it on your behalf. It'll come from your profile and show the like within those first few minutes.
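
The two-sided-consent rule at the heart of the Click Circle idea can be sketched in a few lines. All the names and structures here are invented for illustration; this is obviously not a real product.

```python
# Toy model of mutual, consented auto-likes.
from dataclasses import dataclass, field

@dataclass
class Member:
    name: str
    consents_to: set = field(default_factory=set)  # who may receive my auto-likes

def should_auto_like(liker: Member, poster: Member) -> bool:
    """Only fire if consent is mutual, per the two-sided-consent rule."""
    return (poster.name in liker.consents_to and
            liker.name in poster.consents_to)

alice = Member("alice", {"bob"})
bob = Member("bob", {"alice"})
carol = Member("carol", {"alice"})   # alice never consented back

should_auto_like(alice, bob)    # mutual: fires
should_auto_like(carol, alice)  # one-sided: does not fire
```

The per-topic filtering Danielle mentions later would slot in as an extra predicate here, checking the post's category against the liker's preferences before firing.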

Speaker 1:

And?

Speaker 2:

I think that's probably not allowed on those platforms, who knows? But isn't that a good idea?

Speaker 1:

Well, it reminds me of... wait, should I have told all those details? Now someone's going to go build Click Circle. It reminds me of, and I don't know if it's still the case, but you used to be able to buy likes and followers. This was a big thing on Twitter for a long time. I mean, yeah, Instagram too.

Speaker 2:

I think they did a big... yeah, I think so.

Speaker 1:

So this is an organic way to do that, one that maybe would circumvent whatever rules they put in place, because it's real people in your circle who are actually liking it. But it's funny, because that's essentially, that's literally what we do when we do launches. You reach out to your friends and you're like, please like this thing. The same is true for Product Hunt, which is interesting; there's a whole strategy around how you do Product Hunt launches.

Speaker 1:

And a funny thing I heard from a founder one time: I think the hunts go live at midnight or something like that, there's a time when they go live, and he was saying Europe actually has an advantage in Product Hunt launches because they're awake before the US is. There's some prioritization where the things that start to gain steam early in the day tend to win out, and so US founders, especially on the West Coast, are staying up all night throwing pizza parties. This founder I was talking to had a pizza party at his office with all their friends there, waited until whatever time in the morning, and then everyone went and liked the post or whatever. So I think it's funny how we find ways to... I mean, Circle, what did you call it? Click Circle?

Speaker 2:

Click Circle, yes. I just imagined if they'd had Click Circle, they wouldn't have had to stay up and eat pizza. They could have gone to sleep and it would just click for them.

Speaker 1:

There might be something to that idea. I mean, as long as these platforms have these algorithms, there's always going to be a way that people try to game them, and that could certainly be an effective way.

Speaker 2:

Yeah, because I think the important part is that it's a verified person and that they've given consent: yes, I want to like this person's content. And maybe there's some way for the AI to even filter the type of content that you would generally want to like. So if they post about this type of thing, I don't want to like it until I've had a chance to review it, but if they post about these types of things, I always want to like it. I don't know, I really did think about this, because the problem with Instagram and Twitter before was that they were bots, they weren't real people, and you were buying bot followers.

Speaker 2:

Fake followers also do fake likes, exactly. But now it's a new world.

Speaker 1:

Can I tell you about... I just thought about this, and we actually haven't talked about it yet. One of my favorite AI ideas I've come across, probably in this past month or so. I think I came across it a few weeks ago on Twitter slash X. I still refuse to call it X, it just drives me nuts.

Speaker 2:

Well, now it just has two names that everybody calls it: Twitter slash X. Yeah, it's so confusing. Anyway.

Speaker 1:

So one of my favorite ideas I came across on X is a private social network. Have you seen this thing? Basically, it's like a Facebook or a Twitter, but when I sign up and create my account, it creates a social network of bots, entirely AI bots. So there could be a million "people" on the social network, but they're all there for me. It's like that, what was that Jim Carrey movie? Why am I forgetting it? The Truman Show. It's like the Truman Show, right? It's fascinating. And I think when you create your social network, you can give it a personality, so you can say whether you want the bots in there to be supportive or, God forbid, toxic, or whatever it is. I just think it's a fascinating idea. You can create this mini world all to yourself, your own echo chamber, which is kind of a terrifying idea, but credit to somebody for having what I thought was a very creative idea.

Speaker 2:

Creative, but, yes, terrifying. I mean, if we already thought Gen Z, and now Alpha after them, were having trouble with in-real-life interactions and having friends, this will be the end of that. They'll never have to get another friend; they'll just always exist in their thought world. We referenced this briefly when we talked about the movie scenarios, the Idiocracy scenario: people outsourcing their brains. Well, now they can outsource all their personal relationships, everything.

Speaker 1:

So you're saying you wouldn't sign up for it?

Speaker 2:

I would not invest in that one because I don't want to see it come to fruition.

Speaker 1:

But when I say it's a good idea, I'm not saying it's good for society. I am curious to see what it's like, though.

Speaker 2:

Yes. Will you invest in mine? In Click Circle? I mean, it might need it.

Speaker 1:

Yes, I will invest $10 in Click Circle.

Speaker 2:

All right. Well, I have to determine the value of the show. Come back to me.

Speaker 1:

You just mentioned Idiocracy, and I wanted to use that as a little bit of a segue, because you had said something to me before we started recording which I thought was interesting.

Speaker 2:

I was saying that, yes, it makes me feel like sometimes I turn off my brain a bit. In situations where I would otherwise use more brain power, I just outsource it now to ChatGPT or Claude: okay, figure this out, or tell me what you think, or read these three sentences and tell me if they're grammatically correct, instead of me reading it over two more times to determine that.

Speaker 1:

Yeah. And do you think that's a net negative for you and for society?

Speaker 2:

I haven't decided yet because I think when I do it in the moment it's like procrastination.

Speaker 2:

When I do it in the moment it feels good, because I just saved a few minutes of not having to think that hard. But in the long run I think it is problematic, because the brain really is a muscle, and you've heard me talk about this for years now. Part of why coaching is so effective is that you have an opportunity to really rewire neural pathways, and if you're not using that muscle in a real way, it becomes harder and harder to use, and I don't want to lose that. So the times when I think, oh, I should think harder about this, not because it's so important in this moment, but just to keep those neural pathways and keep my brain active, that's when I start to think this might be a net negative.

Speaker 1:

Yeah. Do you think there are other ways that using AI is making your brain more active in ways it wasn't previously? An example I have: I am a go-to-market person, but my strength is not traditionally in marketing. I would say it's more in sales, especially technical sales and things like that. So when we talked a couple weeks ago about me doing marketing for our product launch, I could make an argument that it opened up this area of marketing to me, one that I thought was, I don't want to say less accessible, but it made marketing more accessible to me to have this thought partner and see how ChatGPT thought about things. So in some ways, I think it opened up some new neural pathways for me. Do you feel like that at all?

Speaker 2:

I would say I haven't noticed that for me, though I totally see how it's possible. Interestingly, as you were saying that, what I have noticed is that I get frustrated more easily when the AI, when an LLM, doesn't give me the response I really wanted, or when it gives me a response but I still have to think harder about what to do with it. I feel annoyed.

Speaker 2:

Versus if I didn't have an LLM to go to and ask about these things, I would just say, okay, this will take a few hours, and that's the process, and that's okay. It makes me think back to pre-Industrial Revolution times, when things had to be done painstakingly slowly, building something, or even doing laundry, but you knew that's how long it takes, and you were mentally prepared for that. Versus now having this instant-gratification culture at large, not just with AI but generally speaking. We're used to getting things much faster, and getting things that are exactly what we want in that moment. So now, with yet another platform, another way of interacting, I wonder, in the long run: does that decrease our resilience and increase frustration, because you're not getting something right away?

Speaker 1:

Yeah, I think it's super interesting. I mean, we talked about this in some of the early episodes. I'm an AI optimist, but, that said, I'm also a realist, in the sense that I would be shocked if our attention spans didn't go down because of AI. And, to your point, I would be shocked if we didn't get, I don't want to say more anxious, but... We've already created this on-demand economy where food is brought to our doors, drivers show up and take us from place to place, and now marketing is written for us immediately, or software, code, is written for us on demand, in the moment. So yeah, I think that's really interesting.

Speaker 1:

I mean, I have a lot of friends who have kids between the ages of, let's say, four and ten, and I know you do as well. One of the topics parents often talk about with these young kids is screen time and how much on-demand gratification they get, because at this early age it affects how their brain develops. So I wonder: how much ChatGPT do you get exposed to at a young age? Because your brain is going to develop differently having these instant-gratification expectations.

Speaker 2:

Yeah, and I think, as time goes by, AI will be incorporated into more and more, and results will be faster and better at whatever thing you're doing. And I'm just thinking this through. I saw a study the other day about the six attributes children should have that are shown to make them the most successful as adults, however you want to define success. The number one was grit, which I think of as resilience: the ability to keep going despite the obstacles thrown in your way, despite how easy it is to get off track when you're frustrated or when things didn't go the way you wanted, and the choice to keep coming back and saying, I'm going to keep going. And as you were giving the example of food coming to our doors and things like that, I think about when the internet first came out and we had AOL, dial-up internet. It took a few minutes.

Speaker 2:

It could be anywhere from three to eight minutes for the internet to load, and in the meantime you knew, okay, it's going to take a little bit of time, I'll do this other thing while that's happening. You were just able to accommodate that and not be frustrated, because it was such a new, cool thing that the mere fact we could get on the internet was amazing. There was no reason to be upset that it took that long. And now, if a webpage doesn't load within five seconds, I'm like, what's happening? What's wrong with my phone, what's wrong with the internet? And I'm constantly hitting refresh, like, this is broken.

Speaker 2:

And I think, okay, if that happened for us at our age, having already experienced what it was like to have that wait time, and for things in life to take longer, then what does that mean for kids today who are two, three, four, five years old, who have never had the version with wait time? They're immediately coming into this environment of instant gratification for everything. And what does that mean from a grit and resilience perspective? I feel like it actually means that grit won't be an inherent attribute, and you're going to have to work really hard as a parent to help your kids develop it.

Speaker 1:

Yeah, I totally agree, and I think I've seen a version of the same study where grit is the number one predictor of success. And for some reason, the analogy that just came to my mind was when we were kids and had to write a first draft in grade school. If you had an essay to write, well, in grade school they weren't even called essays, right, they were called reports. You had to write a book report or something. You'd write a first draft, turn it in, the teacher would critique it, and then you'd have to write a second draft. And it's funny, because I guess in this world you'd write a first draft, the teacher would critique it, and then you'd just put the critiques into ChatGPT and it would spit out the second draft for you. You wouldn't have to critically think about how to make the changes your teacher is asking for.

Speaker 1:

And yeah, I don't know, I still maintain that AI will be a net positive, but I do think it's going to be a somewhat muddy picture, right? There are going to be some downsides. The thing I think about, and actually what I was going to ask you, the most immediate thing that comes to mind for me, is social media. The reason I say that is because I think a lot of people would say social media is a net benefit, but I don't think that's such an easy answer; a lot of people are actually starting to say it's a net negative for society. So I guess my question for you is: do you think we've learned any lessons from social media and how it's impacted society, people's brains, neuroplasticity, and things like that? Lessons that will hopefully be applied to how we adopt AI as a society?

Speaker 2:

Well, first I'll say I'm sure we have; whether they've been applied or not is a different story. I don't think we're always great at learning from history. But I do hear more and more people who are futurists and were part of the early social media revolution, for lack of a better word, say now that there's some regret to it, some feeling of, oh, maybe this was a net negative for society. I think that's interesting, and I can see it both ways. In my mind I'm not sure I can say yet whether it's a net negative or a net positive; I really do see it both ways. In terms of access to information, there's an upside and a downside. In terms of staying in touch with people who live around the world, people you wouldn't otherwise have access to, and seeing what's happening in their lives as a friend, I really like that. There's family; it's really nice.

Speaker 2:

But do random strangers need to know everything that's happening in our lives? And then, going back to this idea of our own bot world, does that increase and inflate our own self-importance, and then change how we act on a day-to-day basis? Who knows; these are just kind of random thoughts right now. But going back to the word we used before, guardrails, I do think that social media and AI in the years to come need to have significant guardrails in place. It can't just be this, I'm not even sure what the term is, but I'm envisioning something like slime, this kind of unchecked capability that takes over and seeps into everything. Literally.

Speaker 1:

I mean, this is what it is doing. In some way, it's seeping into every part of our lives, right? It started with our work, but then it's going to be driving the cars we drive. Maybe even, if we're really futurist about it, passing judgment on criminal cases. Maybe there'll be AI judges that decide whether or not we go to jail, or get a visa approved, all sorts of interesting things.

Speaker 2:

There are. Although I also think that's, well, in my mind at least, pretty against the constitutional right to a judge and a jury of your peers.

Speaker 1:

Maybe, if your social media is full of bots, then those are your peers, right?

Speaker 2:

Oh, terrifying. Going back to our Tuesdays...
