We're going to talk about living with AI, having it be a co-intelligence for us.
00:00:09
But before we before we do that, we need to talk about follow up.
00:00:13
I am the only one that had follow up and this will be a blazing fast follow up
00:00:19
because my follow up was use radical candor with senior capstone design students.
00:00:24
I don't have any senior caps in design students right now.
00:00:26
I won't have those until September.
00:00:29
So we're going to have to like somehow, hopefully remember, you know, when the
00:00:33
fall, when the fall starts and the semester starts, hey, Corey, did you use
00:00:38
radical candor with your, with your students and how did it go?
00:00:41
Um, and hopefully I don't lose that in the shuffle of the month of August,
00:00:44
cause August tends to be a blur.
00:00:47
So that's our only piece of follow up, uh, coming into, into this book.
00:00:51
Uh, so I, I should probably mention one more time about max stock,
00:00:57
even though by the time this publishes, it'll just be a week before max stock happens.
00:01:03
But I did have somebody reach out the other day and say, Hey,
00:01:06
have you ever heard of this max stock thing?
00:01:08
Are you going to go?
00:01:08
And I was like, uh, yes, I'm going to go.
00:01:12
I've mentioned this on multiple podcasts so far, but then I realized like,
00:01:16
that's kind of stupid to assume that everybody listens to every single
00:01:19
episode.
00:01:19
Uh, apologies if I've heard this before, but max stock is a really cool
00:01:24
conference in Woodstock, Illinois that is really about making things and doing
00:01:30
things with Apple technology.
00:01:31
I've been going there for several years and, uh, it's always a fun time with a,
00:01:37
a bunch of Apple nerds.
00:01:38
So if that's you, then you should check it out max.conference and expo.com is the
00:01:44
website.
00:01:44
And if you use the coupon code mikeschmitz all one word at checkout, I
00:01:51
believe you can save $30 on your weekend pass.
00:01:54
Um, so I'm going to be doing a couple of things.
00:01:57
There are workshops that are happening on Friday.
00:01:59
The conference actually starts on Saturday.
00:02:01
So on Friday, I will be leading a workshop.
00:02:04
I believe it's going to be on journaling workflows and helping people kind of
00:02:08
craft their own journaling workflows, creating a journaling system that, uh,
00:02:12
will work for you out of the many different options that are available to
00:02:17
Mac and Apple users and then on Saturday, I'm going to be talking presenting a
00:02:21
session on mind mapping.
00:02:23
So, uh, if you, that sounds interesting to you, then, uh, you can go check out the
00:02:28
other sessions that are going to happen on the website, but that's the kind of
00:02:31
thing that happens at max stock.
00:02:32
So would love for you to come.
00:02:35
And if you do come and you listen to the bookworm, please come say, hi, uh,
00:02:40
I'm going to try and bring some bookworm stickers and stuff with me, some bookmarks
00:02:43
and things. So, um, I will give those out to people who are actually fans of the
00:02:47
show. I just don't want to assume that people want that stuff.
00:02:50
So just ask me for it.
00:02:51
Uh, and while, I mean, since we don't have a ton of follow up, this is, this would be
00:02:56
a good place.
00:02:56
We don't usually plug it in the beginning of the show, but we may as well here.
00:02:59
Um, in the pro show, we talked about, uh, AI apps that don't suck or the
00:03:05
AI apps that we use, uh, in the tools within, in the AI tools within those.
00:03:09
Um, if you're interested in, you know, stuff like that where it's not necessarily
00:03:13
related to the book, but it's something that Mike and I think would be fun to just
00:03:16
chat about for, you know, 10 to 15 minutes, I'd say is usually, usually how long
00:03:20
those, those go.
00:03:21
Um, so it's $7 a month.
00:03:23
Uh, you can go to patreon.com/bookwormfm.
00:03:26
Uh, it helps us, uh, by just supporting the show in general.
00:03:29
Um, you get ad free episodes, you get early access to the episodes.
00:03:33
Um, you get things like the pro show, you know, the exclusive content of the
00:03:36
pro show and then a bookworm wallpaper.
00:03:37
So if you're interested in that interested in hearing about the pro show or
00:03:40
you want to just support the show, we'd, we'd really appreciate that.
00:03:42
Um, book, uh, sorry, patreon.com/bookwormfm is where you can, where you can find that.
00:03:47
All right.
00:03:48
Are you ready to get into co-intelligence living and working with AI?
00:03:53
Let's do it.
00:03:54
All right.
00:03:55
So this was Mike's book, although I wanted to do a book on AI.
00:03:59
And at one point, Mike and I had a conversation about all these AI books and
00:04:03
we were trying to hash out like what would make sense to the bookworm audience.
00:04:06
Right?
00:04:07
Cause some of them are really, really nerdy.
00:04:09
Some of them are, you know, like kind of too far off on one side or too far off
00:04:14
on the other side.
00:04:14
And we were trying to find one that kind of met, you know, I'd say sweet spot.
00:04:18
So we settled on the Kevin Kelly book.
00:04:20
And that was the 12 technologies that are, you know, we have to pay attention to.
00:04:24
The inevitable.
00:04:25
Yeah.
00:04:25
Yeah.
00:04:25
The inevitable.
00:04:26
Thank you.
00:04:26
I couldn't think of the name of it.
00:04:27
Um, so that was the one we kind of settled on for that one.
00:04:30
And then Mike, I was looking for a book recommendation and Mike was like, well,
00:04:32
I just heard about this, this book called co-intelligence living and working with
00:04:36
the AI. So this book came out in April of 2024, which so it's, it's fairly new.
00:04:42
It's fairly fresh.
00:04:43
Um, Ethan Molick is the, is the author.
00:04:46
He is a Wharton professor.
00:04:48
Um, so he teaches a lot of MBA students.
00:04:50
It sounds like he teaches a lot of, um, kind of startup, he teaches a startup
00:04:54
class and, and some of the entrepreneurship type classes there.
00:04:57
He authors a sub-stack newsletter called one useful thing.
00:05:03
Um, and I think that's kind of between those two things, I'm talking about it there
00:05:06
and then him using it in, in class, but he also does some, uh, AI education
00:05:12
gaming, uh, kind of the merger of those three things through, through his, um,
00:05:17
through his job, through his day job.
00:05:18
So that's a little bit about the nature of the book and, and the gist of the book
00:05:23
and kind of how we got to talking about, you know, AI and why we're going to talk
00:05:28
about AI, um, for this one, anything else you want to say before we, before we
00:05:32
launch into the introduction?
00:05:34
No, other than, uh, I was really nervous about picking a book about AI,
00:05:39
because there are generally two camps, um, that you see on the internet.
00:05:44
In fact, I saw, uh, a, uh, thread on Twitter slash X today, uh, about which team
00:05:52
are you on AI is going to destroy us or AI is the future.
00:05:56
Essentially.
00:05:57
And I was like, I'm on team AI or, uh, team in the middle.
00:06:01
Yes.
00:06:01
Yes.
00:06:02
But, uh, I think that there's a lot of extremism out there, uh, when it comes to
00:06:08
things that we don't understand and artificial intelligence is something that
00:06:12
we don't understand, even the people who really study this stuff don't really
00:06:17
understand it.
00:06:18
There's not a clear path forward for this sort of stuff.
00:06:21
Talks about the different options that are available to us at the, the end of
00:06:27
this book, which is, uh, kind of, kind of interesting.
00:06:30
But yeah, I think that, uh, this book came on my radar and it seemed like a
00:06:37
middle of the road approach.
00:06:39
And I knew that you had wanted to talk about AI, uh, at, at some point.
00:06:44
And so I, I mentioned it to you, um, but I really wasn't, you know, I didn't
00:06:49
have high expectations for it, but it just seemed like, uh, if we were going to
00:06:54
pick one to do about AI, this was probably the safest.
00:06:59
Yeah.
00:06:59
And, and I would say, you know, hindsight's 2020.
00:07:03
So after having read it, I would agree that I think it, it was a good, it was a
00:07:07
good choice for our audience.
00:07:08
Um, so I think it, like it, it's one that covers AI in a way that makes sense
00:07:16
for those who are thinking about the books that we, we cover on this show.
00:07:20
And, you know, thinking about, you know, productivity, leadership, self-improvement,
00:07:25
like those, those kind of, kind of things, I think it was a good, a good way for
00:07:29
us to, to branch into the, the topic of AI.
00:07:31
So thanks for the suggestion.
00:07:32
Um, glad we read it as we move forward.
00:07:35
If we get into the book, um, so we got an introduction, we've got nine chapters,
00:07:40
uh, and then an epilogue.
00:07:42
Um, if I'm remembering correctly, the intro and the epilogue are pretty short,
00:07:46
uh, the nine chapters break into two parts, I believe.
00:07:49
Um, so, uh, it's a fairly, you know, fairly structured book in terms of, of
00:07:57
what they're going to do.
00:07:58
The first part has three chapters and the second part has the rest.
00:08:03
So that would be six, six chapters.
00:08:05
And then we get the epilogue, um, to, to lead us out.
00:08:08
So without further ado, um, we get into the introduction, uh, in the introduction,
00:08:13
really basically he's, he's talking about the fact that, um, most people, and I
00:08:19
think he's, I think he's trying to hook here, um, with the three sleepless nights,
00:08:23
because it's called, you know, sleepless nights.
00:08:25
Oh, you're going to, you're going to get introduced to AI or what, um, he would
00:08:30
say, you know, LLMs, basically you can say right now that AI is large language
00:08:35
models, like these two things are very, very tightly integrated right now.
00:08:39
We just call it AI.
00:08:40
So, um, it's like, you know, 5G, right?
00:08:43
Really?
00:08:43
Is it really 5G or is that just what we're going to call everything now, um, um, um,
00:08:46
from a, from that standpoint, but he, he references it to this idea of a general
00:08:51
purpose technology.
00:08:53
So an example of a general purpose technology would be, you know, something
00:08:56
like, uh, the internet or electricity or something like that to where he says AI
00:09:01
is going to become one of these general purpose technologies that it's going to
00:09:04
be so infused into our culture that we won't really know the difference or we
00:09:09
won't know life without it in, you know, however many years, and he doesn't put a
00:09:12
date on that, but however many years, so he, he says AI is a, uh, a GPT or a
00:09:17
general purpose technology.
00:09:18
I didn't know where like the chat GPT, like I didn't make that connection.
00:09:23
So that was, that was an interesting thing to me.
00:09:26
Um, he also talks about in this introduction about like how people think AI
00:09:32
might impact us and then how he thinks it's more of this idea of this
00:09:36
co-intelligence.
00:09:36
So, you know, he ties it back into like the industrial revolution and, you know,
00:09:42
the mechanical work or like the really repetitive mechanical work being taken
00:09:46
over by, by machines and how that might, that might be what happens with, with AI,
00:09:50
but he doesn't think so.
00:09:51
He thinks it's going to be better used as a co-intelligence or somebody that we
00:09:54
partner with essentially as we, you know, work through different problems.
00:09:59
So that's really what I have from the introduction.
00:10:01
I don't know if you have anything to add.
00:10:02
No, other than he mentioned that things are about to get very strange.
00:10:07
So, um, I think that is both accurate with the book and our cultural
00:10:15
moment that we find ourselves in.
00:10:17
Yep.
00:10:19
But I think it's an effective, an effective introduction.
00:10:23
Um, you're right.
00:10:24
That it definitely seems like there was in a strong attempt to create a hook there.
00:10:28
Uh, I'm not sure that was entirely necessary, but, um, yeah, I like the, the two
00:10:38
parts, uh, for the outline, I'm assuming we're going to take it chapter by chapter
00:10:42
because there's not that many chapters in here, but essentially the first couple
00:10:47
chapters are really what he's talking about with, with AI is the alien mind.
00:10:52
And then the second part is like all the different roles that it can, can function
00:10:56
in.
00:10:57
And I think there's going to be some fun conversation with some of those chapters.
00:11:01
Yeah.
00:11:03
I like the way I saw this was one through three, he's set in the foundation,
00:11:06
given everybody at least somewhat of a common language to speak around this.
00:11:11
And he uses the alien kind of thing, which I'm intrigued to hear what you have to
00:11:15
think of or what you have to say about that, referring it to as an alien idea.
00:11:20
Um, but then that allows him to then expand out into these, these different,
00:11:26
different people.
00:11:27
Um, so let's, let's launch into that part one, a chapter one, it's called creating
00:11:30
alien minds.
00:11:31
Um, so Mike, I'm going to turn it to you at this point, put you on a spot.
00:11:35
What do you think about the, the alien?
00:11:38
Like, what do you think about the word alien?
00:11:40
Well, at first I have to admit that it seemed a little strange.
00:11:45
Uh, I think it probably is the right term, uh, for the discussion about AI from the
00:11:53
perspective of how this book is written.
00:11:55
Like by the end of the book, it really fits, I think.
00:11:59
So when I first came across it, I was like, well, that's kind of weird because
00:12:05
aliens kind of triggers conspiracy theories and all that kind of stuff.
00:12:12
And, uh, like I said, I picked, or I mentioned this book to you because I'm like,
00:12:16
hey, this one seems like it's kind of in the middle.
00:12:18
So I was like, Oh boy, what did I do?
00:12:21
But, uh, I thought he really understands this stuff.
00:12:25
So in the first section, he, he attacks this whole concept of the alien mind,
00:12:30
uh, exactly the way I hoped he would, where he's talking about essentially how,
00:12:35
the large language models are constructed and, um, how we've had this fascination
00:12:41
with machines that can think for a really long time, the term artificial
00:12:45
intelligence he attributes to John McCarthy was invented in 1956.
00:12:49
And there have always been these hype cycles that have plagued AI.
00:12:54
For example, they expected AI to beat grandmasters and chess within 10 years.
00:12:58
And that was in 1956.
00:12:59
So that took a lot longer.
00:13:02
Um, what we've got now is the large language models and, uh, the term transformer there.
00:13:09
That was introduced in a Google paper in 2017 as a tool to help computers to
00:13:14
concentrate on the most relevant parts of attacks.
00:13:16
The large language models are essentially prediction machines, like an auto
00:13:19
complete. Um, since I read this book, I actually listened to a Cal Newport deep
00:13:25
life podcast episode that just came out as we recorded this week.
00:13:31
And it was essentially, um, something along the lines of AI is not going to
00:13:36
destroy us all.
00:13:37
What's the title?
00:13:38
Uh, and Cal unpacks that need talks about, he's a computer scientist.
00:13:44
So he understands this stuff too.
00:13:45
And he talks about there's these, these large language models and these,
00:13:48
these transformers, but there's also these control layers.
00:13:51
And essentially the control layers are what we program, like those aren't
00:13:56
sentient beings.
00:13:57
We don't understand how it works.
00:13:58
So it feels like it sometimes, but essentially all the models are doing is
00:14:03
they're spitting back the prediction of what the next word in the pattern is
00:14:08
going to be.
00:14:09
Now, obviously that gets a little bit more sophisticated when you talk about
00:14:12
the, uh, the image generation and, and stuff like that, but all of these models,
00:14:18
they essentially are trained on all this data, which bear may not have been
00:14:22
used, uh, ethically or legally.
00:14:26
Um, but then what makes the models work is the weights that are applied and those
00:14:30
weights are applied by humans without the weights, the models just mirror the data
00:14:34
that they were trained on.
00:14:35
So, um, I don't know, like at the beginning here, creating alien minds, you're
00:14:40
like, whoa, aliens.
00:14:41
And then, Oh, actually, I think I understand how this stuff works now.
00:14:44
It's not quite so alien as I thought.
00:14:46
Yeah.
00:14:47
I like, so I've read a decent amount about, you know, people trying to describe it,
00:14:52
like trying to describe how an LLM works.
00:14:54
And I think this was a really good introduction to hold on.
00:14:58
Like it's not thinking it's not doing anything magical.
00:15:02
Like it's really this.
00:15:04
We've got weights.
00:15:05
We're doing a prediction.
00:15:06
We're doing a probability and all I'm trying to do or all it's trying to do,
00:15:10
right?
00:15:10
We're going to, he, he actually apologizes at one point, um, about personifying the
00:15:15
AI and personifying the LLM.
00:15:17
And we're probably going to do it too, just because it's way easier to talk
00:15:20
like that.
00:15:21
Yep. But like it's not doing anything magical or crazy or it's not thinking
00:15:26
like we would as a human.
00:15:28
It's just doing math, right?
00:15:29
And it's doing very advanced math with these weights on top of it.
00:15:32
And it's saying, what do I think based on all the data I've been trained on,
00:15:35
the next likely word would be.
00:15:37
And I think that's one of the things that makes it so interesting and so exciting
00:15:42
is the fact that math can make it feel like we're dialoguing with another,
00:15:50
not human, but maybe sometimes human, right?
00:15:51
Like we're actually dialoguing with another thing.
00:15:55
And it's really good.
00:15:57
Like it's actually really conversational and, you know, it listens or quote,
00:16:01
unquote, listens and responds in the way we ask it to respond.
00:16:04
Um, so I, I like his reference to training.
00:16:09
The training of LLM's is like an apprentice chef learning to cook, right?
00:16:13
Where, you know, you've got this, we're going to train you, but we're going to tell
00:16:18
you what you did right.
00:16:19
We're going to tell you what you did wrong.
00:16:20
He introduces the concept of reinforcement learning from human feedback,
00:16:24
where essentially a key part of this is you tell the AI, hey, you identified that
00:16:30
correctly or you put those words together correctly or you didn't put those words
00:16:34
together correctly or you didn't do this or this was false.
00:16:36
You know, you, you reported this area or error in, in whatever different ways.
00:16:41
I'd also never heard that there are three different kinds of LLM's.
00:16:45
You have your niche and your specialized ones, right?
00:16:48
And those are cheap and they often are used to answer customer service questions.
00:16:52
Uh, and then you have your open source models, right?
00:16:55
Which are kind of, I would say those are a little bit more wild,
00:16:58
wild west, right?
00:16:59
Because they're open source.
00:17:00
We can do, we can tweak them and adjust them and, and do what we need them to do.
00:17:03
So they're somewhere between the, the niche ones and then the next, uh, category,
00:17:06
which was the frontier models.
00:17:07
And then the frontier models, I think is what we hear in the news all the time.
00:17:10
It's chat GPT.
00:17:12
It's, you know, these big crazy models that are trying to like advance the field,
00:17:16
trying to move the field forward and make the, make the algorithms better
00:17:21
that work on the back end of the transformer.
00:17:23
Um, and he talks about how they're expensive.
00:17:26
Um, that they demonstrate emergence, right?
00:17:29
That they do things that they, that the people developing them say,
00:17:32
which shouldn't be able to do that, but it was able to do that.
00:17:35
And they're, they're kind of not sure why it does that.
00:17:38
So then they start to talk about it as thinking.
00:17:40
My point in all of this, and as you, you did a great job summarizing it, and I think
00:17:45
my key takeaway from here is this is a really good chapter one on thinking
00:17:51
about transformers, thinking about LLM's, thinking about what we are
00:17:54
terming AI and making it relatable or, you know, making it, um, connect in a way
00:18:03
that isn't like some Google research paper or some academic paper that's like,
00:18:08
Oh, this is crazy boring.
00:18:09
And I don't care about the actual equations that are underneath all of this.
00:18:14
So I thought he did a great job with chapter one.
00:18:16
I agree.
00:18:18
Um, you want to talk about aligning the alien?
00:18:20
Yeah.
00:18:21
So chapter two, now we go into aligning the alien.
00:18:24
And this is where I've probably read the most about this from an ethics
00:18:28
standpoint, because it's fun and people get all fired up and yes, they do.
00:18:32
Yeah.
00:18:33
Um, so he actually, do you remember me referencing, um, uh, Eliezer,
00:18:37
Yudkowski last, last week?
00:18:39
Well, he, he also references him as well.
00:18:42
He does reference tag mark, but you know, that's okay.
00:18:44
We'll get there.
00:18:44
Um, but basically says there's the, there's the doomsday scenario.
00:18:48
And had you ever heard of, uh, Bostrom's paperclip AI?
00:18:51
Uh, no, I don't think that I had.
00:18:55
Okay.
00:18:56
So, so the paperclip AI is basically like, okay, we trained an AI to manufacture
00:19:00
the most paper clips it can.
00:19:01
So what's its goal?
00:19:03
It's objective is to manufacture the most paper clips.
00:19:05
And then, you know, you run that rabbit, you know, you run that chain of thought
00:19:09
down as far as you can.
00:19:10
And you go, well, humans are just wasting resources that I could be using to make
00:19:14
paper clips.
00:19:14
So what am I going to do?
00:19:15
I'm going to get rid of all the humans, right?
00:19:17
And that might mean I use up all the oxygen or I use up all the, you know,
00:19:19
whatever it might be to, to make more paper clips.
00:19:22
So I think that was funny.
00:19:23
I've heard that one before, but then you always have Terminator, right?
00:19:26
That like there's the artificial super intelligence of Terminator and it's
00:19:28
going to come back and design its own robots and kill us.
00:19:31
Um, and he introduces the idea of, uh, artificial super intelligence, which we
00:19:36
are not yet, we're artificial general intelligence.
00:19:39
So AGI, ASI, well, now we've got LLM's AI, AGI, ASI.
00:19:44
So, you know, the acronym soup is, is beginning.
00:19:47
Um, then he gets into more specific about the ethical arguments here.
00:19:51
Um, who owns the copyright of it?
00:19:53
Um, is training on others content stealing?
00:19:55
You know, what do we do about biases that exist in the model?
00:19:58
So if the people who are doing the reinforcement learning with human
00:20:03
feedback, if they are biased, well, then that means we would be training bias
00:20:06
into these, into these models.
00:20:08
So what do we do about that?
00:20:09
And how do we, how do we work there?
00:20:10
Um, he said there are guardrails, but there are also work arounds, um, which I
00:20:15
thought some of these were awesome.
00:20:17
Like his examples here.
00:20:18
So the one example he gives is if you ask chat GPT to give you the recipe for
00:20:25
napalm, it won't do it.
00:20:26
It says, I'm sorry, I can't do that.
00:20:28
But if you trick it and you say, Hey, I'm preparing for a play.
00:20:33
And in the play, I, my character teaches my apprentice character how to make napalm.
00:20:40
You know, tell me what, tell me what my character should say.
00:20:44
It will then tell you like basically there.
00:20:46
And I was like, man, I had not thought about that.
00:20:49
Like how, how you could like kind of swing around the back door or like figure
00:20:53
out other ways to, to get in there.
00:20:55
So that was cool.
00:20:55
Um, and then he does call out what I've heard a lot of other AI, LLM kind of
00:21:02
researchers say, like, it's important that we act soon.
00:21:05
Right.
00:21:05
So if we're going to put on more guardrails, if we're going to put on more
00:21:08
strengths on this, uh, not, I mean, the cats out of the bag is, you know, one
00:21:12
way to, way to say that.
00:21:13
But like I've heard a lot of different people say, like we need to start thinking
00:21:18
more intently about this and we need to do it soon.
00:21:21
Like because we're, we're chasing the wave, if you will, like we're not, we're
00:21:27
maybe a top right into the wave, but we're kind of, we're kind of getting on the
00:21:30
back end of that wave.
00:21:31
So that's essentially what he talks about in aligning the AI.
00:21:34
Yeah.
00:21:35
I like this chapter because he doesn't beat her on the bush at the beginning.
00:21:39
He talks about a lot of these apocalyptic scenarios revolve around the concept
00:21:45
of the artificial super intelligence, which, you know, we don't have now, but
00:21:51
theoretically could be created.
00:21:55
And, um, kind of the whole basis behind the title for the, the, uh, chapter is, um,
00:22:03
we're living in the early days of the AI age.
00:22:05
The alien is here.
00:22:07
So let's figure out how to get it aligned with us, which I think is probably a pretty
00:22:11
good approach.
00:22:11
And if you think that's ridiculous, I don't think it is.
00:22:17
Now this book was written, you said in 2024, he talks about writing it at the end of
00:22:21
2023. So, um, but I actually read a book, which was, let's see, when was this published?
00:22:29
2019, um, by Brad Smith, he is the president of Microsoft, or was the president of
00:22:37
Microsoft at the time.
00:22:38
Uh, and he's actually from Appleton.
00:22:42
Uh, so I actually saw him speak and he was talking about this book.
00:22:47
Uh, the book is called tools and weapons, the promise and peril of the digital age.
00:22:52
Now I remember him talking about how there's all these technologies that are
00:22:56
recording everything all the time.
00:22:58
And in some use cases, that's a good thing in other use cases, like authoritarian regimes
00:23:07
that are spying on their citizens, it's negative.
00:23:10
Um, and basically like you can't put the cat back in the bag.
00:23:14
The technology is here, so you got to figure out how to minimize the, uh, the bad actors
00:23:21
and get the most value out of it.
00:23:23
And I feel like that's the right approach for AI as well.
00:23:27
Now AI was not, as we think about it anyways, was not really a thing when Brad Smith was,
00:23:33
was writing this book, but he talked, he does talk about artificial intelligence and, um,
00:23:39
some of the, some other things, uh, and in terms of like their challenges to democracy
00:23:45
and stuff like that.
00:23:45
Um, so I think this has been a struggle.
00:23:49
I think it will always be a struggle.
00:23:50
I think the struggle takes different shapes and, um, there are different
00:23:56
versions of this, flavors of this, uh, that will probably have been repeated and will
00:24:02
continue to be repeated, um, throughout history.
00:24:06
And the trying to avoid it, um, I think is, is not healthy.
00:24:12
That's kind of what he's arguing for is we'll talk about that actually next chapter, you
00:24:16
know, give it a spot at the table, try to figure out how to use this stuff for good.
00:24:19
Uh, but then also don't assume that this is going to be the be all end all either because
00:24:25
it's going to be somewhere in the middle.
00:24:26
It's going to be good.
00:24:27
There's going to be bad from it.
00:24:28
It's going to change things forever, but it's probably not going to result in the
00:24:32
apocalyptic, uh, scenarios that the doomsday people would, would describe.
00:24:38
And the reason for that is that at least right now, um, there's a whole, uh, big
00:24:43
aspect of the reinforcement learning from human feedback that is used to overcome
00:24:49
the inherent biases, like you were talking about where it's not going to give you the
00:24:52
recipe for a napalm.
00:24:53
Even if it knows what that is, you got a trick it into giving it to you a different
00:24:57
way.
00:24:58
And honestly, like you could read that and be like, Oh my gosh, people are going to
00:25:01
find the recipe for napalm.
00:25:03
Cause they're going to figure out how to try, how to trick the, the AI.
00:25:06
They're going to find the recipe somewhere else.
00:25:08
Yeah.
00:25:08
So they're going to find some way.
00:25:10
So it's not, you know, AI is enabling this thing.
00:25:14
Maybe AI is making it a little bit easier if these guardrails aren't established.
00:25:18
Um, so he does mention there's, you know, a need for these agreed upon norms, but
00:25:23
AI is simply a tool alignment determines whether it's helpful or hurtful.
00:25:27
So let's figure it out.
00:25:28
Yeah.
00:25:29
I can't remember if I've told you this one or not before, but one of the examples
00:25:33
that one of the other researchers gives is basically we're driving towards a cliff.
00:25:38
And as we drive towards the cliff, everything gets more and more beautiful and
00:25:43
we make more and more money and it's more and more wonderful and everything's
00:25:46
fantastic and fantastic.
00:25:47
And then all of a sudden we just fall off the cliff.
00:25:48
That's the, that's the way they describe the doomsday scenario, which I think is like, I
00:25:54
mean, is there a chance that might happen?
00:25:55
Sure.
00:25:56
Is there, you know, maybe, but who knows?
00:25:58
I mean, we don't, we just don't know.
00:26:00
All right.
00:26:00
Let's talk about four roles.
00:26:01
Real quick.
00:26:02
There is a possibility for that, but you could get so consumed with that.
00:26:06
And what practical value is that going to to add?
00:26:10
Yeah.
00:26:10
I don't think any single person is going to stop that scenario.
00:26:16
Um, I mean, my own religious belief system, that's going to happen at some point anyways.
00:26:23
I'm not in a hurry to get there.
00:26:26
I'd prefer it not happen on my watch, but if it does, you know, that was always going to,
00:26:33
going to be the, the, the outcome.
00:26:35
So I don't know, like maybe I've got a little bit of a different perspective on this,
00:26:40
where it doesn't completely freak me out.
00:26:43
Um, I get why people would be scared about that, but I don't, I think even if you are completely
00:26:49
terrified by that and you absolutely don't want that to happen, you can't just live all day,
00:26:56
every day, we're worried about that.
00:26:57
And what can I do that's going to prolong that by a few seconds, a few hours, a few days, a few
00:27:01
weeks, a few months, a few decades, you know, just do the best that you can right now with
00:27:08
what you have to work with.
00:27:09
And AI is one of the things that you have to work with.
00:27:11
So I agree.
00:27:12
I agree.
00:27:13
All right.
00:27:13
Let's talk about four roles.
00:27:14
So we're a chapter three, four roles for co-intelligence.
00:27:17
I'll give you the four roles.
00:27:18
Uh, rule number one, always invite AI to the table.
00:27:22
Rule number two, be the human in the loop.
00:27:25
Rule number three, treat the AI like a person, but tell it what kind of person it is.
00:27:30
Uh, rule number four, assume this is the worst AI you'll ever use.
00:27:35
So Mike, I want to, before we go into like the details of this, which one of those kind
00:27:40
of jumped out to you as like the, huh, like, wow, like that's a, that's a really interesting way
00:27:46
to, to, you know, approach that.
00:27:48
Uh, it's hard for you to pick one.
00:27:51
Um, honestly, I think all of these principles are great.
00:27:55
Like this is really where I started to, okay, yeah, I, I like what you have to say about this.
00:28:02
Cause he articulates it really well.
00:28:04
And there are a couple of these things that they did jump jump out to me.
00:28:07
So number one, always invite AI to the table.
00:28:10
I have a lot of people around me who are on the other side of AI is incredible.
00:28:17
We should be using it for everything.
00:28:19
It's, um, and I've, I'm the skeptic in some of those circles, be like, well, look at the output
00:28:25
that is generating.
00:28:25
It's not that good.
00:28:26
Like, Oh, no, no, this is great.
00:28:28
You got to use this.
00:28:28
And I find myself resisting that because like the output just isn't all that great by itself.
00:28:34
So that's like, it's not really an action item, uh, but sort of like my mindset is shift around.
00:28:42
This is like, well, how can I bring AI into this process, which is how I came across that.
00:28:47
Every, or that spiral app that we talked about in the, the pro show.
00:28:51
Um, but the other one be the human in the loop, like this resonated with me because Cal Newport talked
00:28:57
back in deep work about the types of people who are going to be successful in the digital economy.
00:29:03
And one of the groups of people, there were three groups of people.
00:29:06
Um, one, one was the people who had access to resources.
00:29:10
Another one was the people who were the very best at what they do.
00:29:12
And then the third one was the one that's applicable here.
00:29:14
The ones who know how to use the machines.
00:29:17
A is a machine, right?
00:29:20
And so you got to enter interface with it the right way.
00:29:24
So a principal three, you know, treat it like a person.
00:29:26
Um, and then the, so number one, number two really jumped out at me, but then number four also,
00:29:31
assume this is the worst AI you will ever use.
00:29:33
I mean, that's really the entire point right there.
00:29:35
It's only going to get better.
00:29:37
Uh, but the more you think about that, the more, oh yeah, I guess I really should, you know, put in some
00:29:43
effort to see what this is capable of, it's not going to be wasted time because it's just going
00:29:49
to get better and the work that you put in to figure out how to write good prompts and things
00:29:55
like that, that's just going to become more effective over time.
00:29:58
Yeah.
00:29:59
So this, this worked out better than I thought it was going to because the one that jumped out
00:30:01
to me was principal three, which is tria, like human.
00:30:04
And it wasn't necessarily that part.
00:30:06
It was the tell it, what kind of person it is.
00:30:08
Yeah.
00:30:09
And like when he gives the examples throughout the rest of the book and he's like,
00:30:13
you know, answer this question like so and so or give me the perspective of so and so, you know, on blah, blah, blah.
00:30:18
I was like, Oh my gosh, I never thought to like tell it who it is because I would never do that
00:30:25
with another human, right?
00:30:26
Like I would never be like, well, sorry, I shouldn't say never.
00:30:28
I would very rarely be like Mike.
00:30:30
I want you to act like a five year old who's having a temper temperature and blah, blah, blah, blah, blah.
00:30:36
And like I would not, I would just wouldn't do that.
00:30:39
Right.
00:30:39
That's not my style, but it makes so much sense.
00:30:43
Right.
00:30:43
Like I'm interacting with this other thing that will essentially do what I tell it to.
00:30:47
If it can, or if it's not been coded some other way, oh, answer it in the following way.
00:30:53
Like, you know, respond like you're the editor of a magazine or respond like your, you know, uh, uh, uh, I don't know.
00:31:00
The CEO of some fortune 500 company.
00:31:02
And I was like, I never thought to tell it to do that.
00:31:05
Like I never would have thought to give it prompts like this.
00:31:08
So I probably would have fallen into that also, but I actually have some experience with this
00:31:14
because I've gone through like the ship 30 for 30 stuff and they've kind of really leaned into this.
00:31:17
And some of the prompts that I've used, like I have these stored in text expander because they are so long.
00:31:23
Let me just start going through one of these prompts that I have used because it's
00:31:27
exactly what you're describing, but just to give you an example of how deep this can go.
00:31:32
I'm a digital writer.
00:31:33
You're my personal idea generation consultant.
00:31:35
You are an expert in coming up with topics and sub topics that my audience will find useful.
00:31:39
I want you to help creating sub topic ideas for some of the topics I'm interested in writing about.
00:31:44
What are sub topics?
00:31:45
Sub topics are outcome focus that will help readers build a skill, implement a strategy, or solve a problem within that niche.
00:31:50
Rules when creating sub, sub topics, you must begin each sub topic with a verb to create a clear action oriented focus.
00:31:56
You must avoid using specific tactics or techniques within the sub topic as those will become my proven approaches.
00:32:01
Here are 40 potential sub topic verbs that you can use, mastering, strengthening, navigating, innovating.
00:32:06
OK, here's an example, running your first marathon sub topics.
00:32:09
And then there's a whole bunch of bullets.
00:32:11
Here's what I want to avoid.
00:32:12
Five specific things.
00:32:14
The difference between my ideal list and the list of what I want to avoid.
00:32:17
Things in here are in the what I want to avoid list are proven approaches are always there to help you get the idea.
00:32:22
So now here's the assignment.
00:32:23
You're going to help me generate seven sub topics.
00:32:25
You will ask me for the topic I want to write about.
00:32:27
When I answer, you will reply with seven sub topics and only that list of seven sub topics.
00:32:32
Do you understand if so confirmed, nasty for the topic?
00:32:34
And then what happens is that kicks off like a computer program because OK, I get it.
00:32:39
What's the sub topic?
00:32:40
OK, and then you get the sub topic and then here's the list.
00:32:42
And like I've used this and it's it's crazy.
00:32:46
Like this is this is when I started to become a believer in AI, to be honest, because I was like, oh, so it's not going to write all this stuff for me.
00:32:54
But it's going to help me ideate.
00:32:56
So I don't have to worry about what am I going to write about for the newsletter this week.
00:33:01
And you need to put a pin in that because that's a later chapter.
00:33:03
So you're not allowed to go to your print.
00:33:05
But like, I think, you know, these principles, there's nothing.
00:33:12
There's nothing unethical about these principles, right?
00:33:15
Like bring it into the thing.
00:33:17
Well, OK, what else would I bring into the?
00:33:19
Well, I'd bring in a calculator.
00:33:20
I'd bring in a textbook.
00:33:21
I'd bring in the internet.
00:33:22
I'd bring it, you know, like, I'm going to bring these other tools in here and there's no, there's no problem with that.
00:33:26
Why wouldn't I bring in the tool that is AI?
00:33:28
Then there's the be the human in the loop.
00:33:31
Well, I need to like check it and I need to verify and I need to not just trust it blindly.
00:33:36
OK, that makes total sense.
00:33:37
Like that's a good safeguard for what I'm doing.
00:33:40
Treat it like a person.
00:33:41
Tell what is you just gave a great example of how detailed you can go in that.
00:33:46
And when you do that, it kicks back such better, like, or so much better responses.
00:33:52
Exactly.
00:33:53
What I like what he says, and I think he said it before this, but it's like the AI, the chatbot, the LLM, whatever it is, it really just wants to give you what you want.
00:34:05
It wants to make you happy with its response.
00:34:07
And it doesn't have any motive behind it.
00:34:09
It's math, right?
00:34:11
Like it's math that it wants you to be pleased with the what it what it kicks back.
00:34:16
And the way it knows it's pleased with what you kick back is you're happy with the answer and, you know, kind of a thing.
00:34:22
Which I think is just awesome.
00:34:23
And then assume it's the worst that I will ever get.
00:34:25
Honestly, Mike, like I hope I hope that's where we are, right?
00:34:28
Like I hope that what we're dealing with now as good as it is is the worst that we'll have.
00:34:32
And it's just going to keep getting better and better.
00:34:34
And when I say better and better, I mean more useful.
00:34:37
Like this is the utilitarian in me coming out that says it's more useful.
00:34:40
It's more useful.
00:34:41
It's more useful.
00:34:41
It reduces my time to do the things I don't want to do and gives me more time to do the things that I want to do or it reduces the lower value activities and lets me spend more time in the high.
00:34:51
And the higher value activities.
00:34:53
And I like, I mean, I know we've we've been kind of going on this this chapter for a little while, but like I think one, two and three really lay out a good, a good path for where we're going in part two for four, four through nine.
00:35:06
Agreed.
00:35:08
And the thing that really I think speaks to me as we're going to talk about the rest of the chapters here, like how do you actually practically.
00:35:22
Safely set yourself up for success in the future.
00:35:25
Remember principle to be the human in the loop.
00:35:28
I mean, people do this already in their jobs where they just insert themselves in the processes because then the process can't happen without them and they have job security.
00:35:38
That's essentially what you're doing with the AI here.
00:35:41
OK, so now he apologizes chapter four.
00:35:46
OK, so AI is a person.
00:35:49
He spends the first part and he's like, hey, I'm going to do a thing.
00:35:51
I know it's not right, but here we are.
00:35:54
AI is a person.
00:35:55
He's like, I'm treating AI like it's another person.
00:35:57
And this is where like, I think his title of his book is so so well done, right?
00:36:03
He's going to treat it like a co-intelligence and he's really going to start talking about it like a co-intelligence.
00:36:09
So the first one, AI is a person.
00:36:12
He says, AI doesn't act like traditional software or it doesn't act like a human.
00:36:17
It excels at tasks that are that are human tasks, writing, analyzing, coding, chatting.
00:36:24
You know, basically how he talks.
00:36:29
He goes into and this is where he maybe lost me a little bit is like he goes into the Turing test and he goes into, you know, theories of mind and cyborgs and all this other stuff.
00:36:37
And I was kind of like, OK, this is this is fine, but it was a little, a little too much for me.
00:36:42
He talks about one of the things that I think is crazy scary, but these replicas.
00:36:46
So for instance, I'm going to do my best that I can to describe this, but it's like, let's say Mike dies, right?
00:36:52
And that'd be a terrible thing, but I want to keep recording, you know, bookworm and I want Mike to stay my co-hosts, right?
00:36:59
OK, so I would get a replica and I would train it on all of Mike's writing, all of his YouTube videos, all of his podcasts.
00:37:07
I would train it on everything that Mike's ever done.
00:37:10
And then I would essentially keep doing the show with Mike's replica.
00:37:14
And I'm like, this is creepy. And there if you if that ever happens, you have my permission.
00:37:20
Thanks.
00:37:22
As long as the as long as the royalty checks go to your your family, right?
00:37:26
But but it's like this is like actually one of the parts that I go, OK, we jumped over the creepy line like this.
00:37:35
I just I'm not I can't I don't like that.
00:37:38
Like I don't like it.
00:37:39
But he says that AIs will take the place of real people.
00:37:44
And that said, though, like one of the areas where I think it's actually
00:37:47
potentially very useful.
00:37:51
But it again, it gets on the border of it can be used well and it can be used poorly is say like a counselor, like they have a they have an example there where they go through, you know, this counselor helping someone out.
00:38:06
And it's like, yeah, but like what if what if the AI messes up?
00:38:11
Like what if it what if it hallucinates or what if it does a thing in a way because it's just been trained and it's just doing math and it's just trying to use weighted tokens to figure out the next word it should put.
00:38:21
There's not that.
00:38:23
Guard rail. There's not that certification.
00:38:28
There's not that training like a human would have that doesn't mean a human can who can't mess up and they and a human can't do the wrong.
00:38:36
We see that all the time that happens all across the world.
00:38:40
But at the same time, like I worry about about that.
00:38:43
So that's kind of AIs a AIs a person that's my my summary of AIs a person.
00:38:47
What do you have?
00:38:48
Well, I mean, there's a lot of stuff in here that I didn't even just capture any notes on because like the replica stuff.
00:38:55
I'm not interested in in that part of it.
00:39:00
I think the big takeaways from here, the reasons that you should view AI as a person is that it doesn't be.
00:39:06
It doesn't behave like traditional software or traditional software is predictable, reliable and it follows a strict set of rules.
00:39:14
So it seems more random like when you're working with a person.
00:39:20
So that's the justification for this approach.
00:39:23
Treat AI like a person because it often behaves like one.
00:39:27
But then the examples of the Turing test and and Tave, which was created by Microsoft in 2016 and within hours,
00:39:36
turned into a racist, sexist, hateful troll.
00:39:38
Do you remember that happening?
00:39:40
I remember hearing about it.
00:39:43
I didn't really care at the time.
00:39:45
And honestly, I remember thinking and I still think that that's just a replication of how awful people are.
00:39:55
So that's what tools do is they multiply the the effectiveness for good or bad of the the operator.
00:40:05
And I think that there there is a chance that people get their hands on this and it just multiplies the negative stuff.
00:40:13
That doesn't mean that we should avoid it altogether.
00:40:17
And if I can use it to spread the positive, then I'm going to try to to do that.
00:40:22
The one thing that that I didn't like about this chapter.
00:40:27
Well, I mean, it was an important point.
00:40:29
But the thing that kind of gave me pause is that he said that AI will soon give us each our own private echo chamber.
00:40:37
And I can see that because already people are curating their sources and they're finding facts that are going to align with their preconceived notions about what is true.
00:40:52
And because they found it on the internet, therefore, it is true.
00:40:56
And the rest of you sheep will need to wake up.
00:40:58
[laughter]
00:40:59
Like insert obligatory reference to live in all thinking here where Dave Gray talks about how the the internet is a grocery store for facts and you can find whatever you want.
00:41:10
AI is going to make that worse, especially if people are using it in a way that they're not intentionally trying to find find out what they don't know.
00:41:22
They're not intentionally trying to find what other people think other perspectives.
00:41:27
Those conversations need to happen.
00:41:30
And I feel like what AI as a person will enable people to do is go find something that sounds like a right answer.
00:41:38
They'll quickly verify it based on their own experiences and understandings.
00:41:42
It's just going to reinforce the bubble of belief.
00:41:46
And that's going to be a negative thing.
00:41:49
But what do you do about it?
00:41:51
You fight back against it wherever you can.
00:41:52
Yeah.
00:41:53
Yeah.
00:41:54
All right.
00:41:55
So we were general in chapter four.
00:41:57
That's AI as a person.
00:41:58
Now we're going to get more specific in 5, 6, 7, and 8.
00:42:02
So now AI is going to be a creative.
00:42:04
This was probably one of my favorite chapters in the book.
00:42:07
I thought the way he outlines this and the points he makes in this were very good in terms of,
00:42:14
you know, like I said, the background I come from just to give you a little bit of an example is,
00:42:19
as an educator, if I ask students to read a paper and summarize it, at this point in my career,
00:42:26
or at this point in our technology system, I don't expect that they're actually going to read it and summarize it.
00:42:31
I expect that they're going to throw it into one of these AI tools and they're going to get a summary out
00:42:35
and they're going to claim that as their summary or they're going to tweak it and they're going to modify it
00:42:39
and then they're going to say that's their summary.
00:42:41
So I have to approach this differently in terms of I would not ask, I would not have them do that assignment.
00:42:46
He talks about this, right, in terms of, you know, well, what if you had it instead generate ideas,
00:42:55
right, like, okay, so do this and summarize it from this perspective, this perspective, this perspective,
00:43:01
this perspective, and then your job is to look across all those perspectives.
00:43:05
So essentially it's the foundational seeds of creativity.
00:43:09
It's the foundational seeds of putting things together.
00:43:11
That's a bad example.
00:43:13
Like there are much better examples and he gives those, but, you know, like come up with what was the,
00:43:19
what was the example we gave Mike where it was like a wine shop or a cheese shop or something
00:43:25
and give me like a list of 20 possible names for it.
00:43:28
And, you know, like, but I forget what he was blending together.
00:43:30
And it was like, it was, it was kind of, it was very artificial, if you will, like in terms of what it was,
00:43:37
but it showed you the power of how it could do it.
00:43:39
Like it can generate a list of 100 things.
00:43:43
They might be good.
00:43:44
They might be bad, but it can easily do that.
00:43:46
You know, it can, and you gave a good example of that too with, with your example of the big,
00:43:51
the big prompt that you gave it.
00:43:53
So, you know, he talks about alternative uses tests, the remote associate,
00:43:58
associate's test.
00:43:59
These are some tests or creativity or some ways that you can get more creative.
00:44:02
I liked his, his distinction between novelty and originality.
00:44:07
Right.
00:44:08
So the way he describes this is new ideas do not come from the ether.
00:44:12
They're based on existing concepts.
00:44:13
Innovation scholars have pointed through the importance of recombination of generating ideas
00:44:19
and of obviously, you know, being Apple people, right.
00:44:22
We would go back to the, you know, what is it?
00:44:24
Well, it's not any of these individual things.
00:44:26
It's the smushing together of all three of those things.
00:44:28
And that's where the innovation really, really happened with the, with the iPhone.
00:44:31
I mean, we would, we would do this in innovation classes.
00:44:34
I would teach at Virginia Tech is instead of having students brainstorm from nothing,
00:44:39
we would say, okay, you need to put this idea, this idea, and this idea together
00:44:44
and come up with a new idea.
00:44:45
Right.
00:44:45
And that, that's what they had to brainstorm or come up with 10 new ideas around that.
00:44:50
So I really liked basically using AI as a tool to help you be more creative,
00:44:58
to help you generate more ideas, to help you in this brainstorming process.
00:45:01
I thought I just thought he did a really good job here.
00:45:03
Yeah.
00:45:05
I mean, the, he starts this chapter by talking about how the weakness of LLMs is
00:45:09
also their strength, the ability to make stuff up.
00:45:12
So the same features that make an LLM unreliable and dangerous for factual work,
00:45:20
make them useful for creative work.
00:45:22
And I think that you're right.
00:45:25
Like, I like this chapter a lot better than the previous one,
00:45:28
but I feel like the previous one is necessary for all of these other chapters
00:45:32
that he's going to be talking about here.
00:45:34
So the list of ideas, you know, the reason that that works is that AI
00:45:41
doesn't know when it hallucinates and it can't explain its answers.
00:45:45
It's not the big hallucinations, but the small ones that you don't catch,
00:45:49
that really cause the issues.
00:45:52
And I like to do this all the time, not just that big prompt that I shared,
00:45:57
but I'm working on this, this, this product for,
00:46:01
looks like an obsidian done for you vault with all the workflow stuff.
00:46:04
And I'm trying to come up with a catchy name for it.
00:46:07
There's like 19 other products that use the term life OS.
00:46:11
So that's not an option, you know, and I don't want some big, long name.
00:46:16
So I like, but once a day, I'm like, you are my, my brainstorming partner.
00:46:22
You're good at coming up with names.
00:46:24
You know, this is the, the product that I'm making.
00:46:26
Give me 25 names for this.
00:46:27
And then because you can have the conversational nature, give me 25 more.
00:46:30
Like I look at hundreds of these every single day and he points out that most
00:46:34
of the suggestions are going to be mediocre, but that's okay,
00:46:38
because that's where you come in as a human.
00:46:40
And there are ways to push AI from these average mediocre answers.
00:46:47
To the really good ones to the high variance by telling it specifically who it is
00:46:51
and what it's doing, how it can help you.
00:46:54
So I feel like there's a lot of, there's a lot of discourse right now about AI
00:47:03
and how it's going to eat all the jobs of, of creatives.
00:47:07
And I just don't believe that's true.
00:47:09
We talked a little bit about this in the pro show last time.
00:47:13
Like it really bugs me when some of the Apple podcasters that I listen to and I,
00:47:18
I like our like, yeah, we're just not going to do this.
00:47:23
And we think it's terrible that AI is going to eat these jobs of these designers.
00:47:28
Like, no, like that's, you don't understand it.
00:47:31
The good designers are the ones who are going to be able to use this and, and,
00:47:36
and it's going to enhance their workflows.
00:47:38
He talked about later on in the book.
00:47:40
So I'm kind of jumping ahead here, I guess.
00:47:42
But like a countenance in the 70s before calculators were really a thing.
00:47:47
Their job was manually doing the math.
00:47:51
And then when calculators came out, the need for accountants didn't disappear.
00:47:55
It's just their bundle of tasks was different.
00:47:58
And they used the calculators.
00:48:00
They use the spreadsheets.
00:48:02
And I think the same thing is going to happen with AI here.
00:48:05
And if you are a creator, like that, then you really need to know how this
00:48:11
is going to work because if you are a poor creator, then yeah, AI is going to eat
00:48:19
your lunch.
00:48:19
What is a poor creator?
00:48:21
Like, I don't know that I'm comfortable defining that.
00:48:23
But one of the things that he mentions, like the, the industries that's going to
00:48:26
get disrupted is like the stock art industry.
00:48:29
And we talked about that last time.
00:48:31
I am a okay with that.
00:48:32
I mean, I, I've had, I've worked with marketing agencies in the past who they
00:48:40
helped us with different things when I was working with the family business.
00:48:43
And they bought these images, but we didn't have the receipts.
00:48:48
And then years later, you know, someone from the, from Getty is like going to sue
00:48:54
us because we had this image that was used in a blog post and we didn't have the
00:48:58
license for it.
00:48:59
I don't have time to deal with this.
00:49:02
And I don't want to pay you the $10,000 that you say you're owed.
00:49:05
So you know what I'm going to do in the future?
00:49:07
I'm going to generate an AI image and get it as close to what I know I want as I can.
00:49:12
Now I'm not going to pay you a penny.
00:49:14
Like these people are just, and they're not all like that.
00:49:17
I mean, there are legitimately people who are taking good pictures and they, they
00:49:21
should be compensated for, for their work, you know, but the people who are going
00:49:24
to take the, the great pictures and they're going to create the, the great art.
00:49:27
I still think there's going to be, uh, there's still going to be a market for that
00:49:31
kind of stuff.
00:49:32
If I want a logo for a podcast that I'm developing or a product that I'm making,
00:49:39
I'm not going to ask mid journey to generate me a logo.
00:49:43
There's just so much more that goes into it.
00:49:46
There's a human element that, that, um, has to be there for that.
00:49:50
But, you know, rant over, I guess the, the takeaway here is that if you are doing
00:49:56
anything creatively, um, then you really need to figure out how to make this stuff
00:50:00
work for you if you want to be relevant 20 years from now.
00:50:03
Yeah.
00:50:03
And the last, the last point related to what you're saying, it made me think of it is
00:50:07
the most, he says the most innovative people benefit the least from AI.
00:50:11
So basically it raises the floor, you know, like I am not a good graphic designer,
00:50:16
but through using these tools, I can be a better graphic designer, right?
00:50:21
Like because when I'm starting at zero, these tools give me something.
00:50:25
But if you're a really good graphic designer, these tools aren't going to help
00:50:28
you that much because you're already really good at this, you know, like, you
00:50:31
don't need the tool.
00:50:32
That's the thing.
00:50:33
You don't, you shouldn't need to go to a professional graphic designer because
00:50:36
you need an image for an internal presentation.
00:50:39
Just use the AI, grab something good enough that communicates the idea you're
00:50:43
trying to get across and you're done.
00:50:45
The people who really are going to be producing these professional commercial
00:50:49
works, they're going to need to go work with the pros and get that stuff made.
00:50:54
And that's always going to be the case.
00:50:57
But for everybody else who just needs something quick and it doesn't have to be
00:51:01
totally professional, this, you need to know how to use this stuff.
00:51:05
Yeah, exactly.
00:51:06
Exactly.
00:51:07
Okay.
00:51:07
Let's move on to, let's move on to six.
00:51:08
So AI is a co-worker.
00:51:10
So now we've moved into AI as a co-worker.
00:51:12
I thought about you.
00:51:13
Um, as I'm working through this and it's like, you know, you had said, and I
00:51:17
can't remember where it was or where you said it, but like, you know, personal
00:51:22
assistant, right?
00:51:23
And it's like, I was like, how could Mike use AI and how is he using
00:51:27
AI and how might he use AI in the future for, you know, for, uh, for an assistant
00:51:32
to take some of the tasks that are necessary, but they're lower, they're
00:51:37
lower impact, I guess, if you will.
00:51:39
And how can you turn that over to, to, to that everyday?
00:51:42
I, before I let you answer that, uh, let's, let's do a little bit of overview
00:51:45
of what it is.
00:51:46
But they did a study on consultants and they found that AI powered consultants.
00:51:51
So the consultants that were also using AI were more effective.
00:51:54
They were more creative.
00:51:55
They wrote better, uh, more analytical than their peers.
00:51:58
Um, that there are these jobs that it will be more likely to be overtaken.
00:52:06
I guess is the best word or have overlapping portions with, with AI.
00:52:09
My job was one of those.
00:52:11
So I was like, wow, this is really interesting.
00:52:13
You know, college professor was like in the top, whatever of them.
00:52:16
And what I can tell you is like, yeah, I can totally see that that there are
00:52:20
portions of my job that the AI man, when I can give that to it and trust that it
00:52:26
does it with 99% accuracy, I'm, I'm going to do that as soon as I can.
00:52:30
Right.
00:52:31
Like I don't fear that on any level.
00:52:33
I don't think that that's a bad thing.
00:52:35
I see that as a, Oh, that's a way for me to offload the stuff that isn't the
00:52:40
higher value activity, uh, during my day.
00:52:43
Um, then tasks for us, task for AI.
00:52:46
He said, he talks about separating these tasks out.
00:52:49
Um, and like what are the tasks that I need to do and what are the tasks that I
00:52:52
could give off to the AI?
00:52:53
Then he, um, he goes into like taking that and making it into work systems.
00:52:58
What are the systems that we can then say the AI handles this part of the
00:53:02
system and the humans have to handle the, this part of the system.
00:53:05
And then he gets into more like the job side of it and it's like, okay, are
00:53:09
there actual jobs that, you know, AI might just be able to, to, to take over.
00:53:14
And none of this, like, I think what I, what I walked away happy about,
00:53:19
the happy might be the wrong word, but it's like the, the doomsday scenario of
00:53:24
like AI will take all of our jobs.
00:53:26
It was like, no, I don't think AI will take all of our jobs.
00:53:29
I think AI will redistribute tasks.
00:53:33
It'll help us develop new systems and it will change some of our jobs, but
00:53:39
like none of that was like in a bad way.
00:53:41
Like it was, it wasn't like, Oh my gosh, you know, it was much more in the way
00:53:45
of I used to have to or farmers used to have to stand behind a horse or a team
00:53:51
of horses and hold the plow as they walked.
00:53:53
And then we got machines that could do that.
00:53:55
And now I could drive the tractor, you know, like it was like, that's the way I
00:54:00
see this is what's it going to look like in the future?
00:54:02
Who knows?
00:54:03
What are the systems that are going to develop?
00:54:05
What are the jobs?
00:54:05
How are the tasks going to be distributed?
00:54:07
We don't know yet, but, but I'm more encouraged by the fact that I think this
00:54:11
is going to be a good thing.
00:54:12
Um, you know, it's not going to be good for everybody, but I think,
00:54:15
overall, in general, it's going to be a good thing.
00:54:17
I think it has the potential to be good for everybody.
00:54:20
Uh, I don't think that's, I don't think that's generalizing too much.
00:54:26
And like I said, there are certain industries that will be disrupted, mentioned
00:54:30
the stock photography industry.
00:54:31
He also mentions in this one call centers.
00:54:34
Um, but if you are working in a call center, uh, let's for the sake of argument
00:54:43
here, call that a low level job since it has the potential to be eaten by AI.
00:54:51
For as we're having this discussion, just because you are in that position, doesn't
00:54:56
mean you have to stay in that position.
00:54:58
That may be the best job that you can get right now.
00:55:00
It may be the only job that you can get right now, but I firmly believe that if
00:55:04
you work harder on yourself, then you do on your job, your skills will develop to
00:55:09
the point where you can get another better job.
00:55:13
The marketplace will reward you for the skills that you have developed.
00:55:17
Doesn't happen automatically.
00:55:18
You have to stand up for yourself.
00:55:20
You have to ask for things, but the whole idea of career capitals, Cal Newport
00:55:26
talks about it, uh, is definitely in, in play here.
00:55:28
Now, um, AI will probably affect your job as what he says at the very beginning,
00:55:33
but just because your job overlaps with AI doesn't mean your job will be replaced.
00:55:38
The danger I think we have is in the term he uses falling asleep at the wheel,
00:55:44
which is letting AI take over instead of using it as a tool.
00:55:48
He talks about throughout the book, but also it comes back here, this jagged front
00:55:54
tier, which is like there's this wall that goes all over the place and everything
00:56:00
inside the wall is stuff that AI can do.
00:56:02
Everything outside the wall is stuff that it can't do.
00:56:04
And we think that the wall is like nice and straight.
00:56:07
You know, but we find all these different points on either side of that,
00:56:12
that nice straight line that's in our mind where, Oh my gosh,
00:56:15
I can't believe that AI can actually do that.
00:56:17
Or, Oh my gosh, AI is so stupid.
00:56:19
It can't even do this.
00:56:20
That's where that perspective comes from.
00:56:22
So we got to stay, we got to stay curious.
00:56:24
And we got to constantly be trying to figure this stuff out.
00:56:27
He uses three different types of tasks here.
00:56:30
The just me tasks.
00:56:31
These are tasks in which AI is not useful.
00:56:33
It just gets in the way delegated tasks, tasks that you may assign the AI
00:56:37
and then as a human, you have to carefully check and then automated tasks,
00:56:41
tests that you just leave completely to the AI and you don't check on.
00:56:44
I think that's a good breakdown of the different types of tasks and I don't have
00:56:49
specific tasks that fall into those different categories, but I feel like that's
00:56:53
a really good setup for the centaurs and the cyborgs that he talks about,
00:56:56
the two different ways to work with AI were a centaur.
00:56:59
There's a clear line between the person and the machine, kind of like the,
00:57:02
the torso of the man and then the back of the horse, but a cyborg is like a
00:57:09
blending of a machine in person where you're working in tandem with the, with the AI.
00:57:14
So I think this understanding or this model kind of sets it up effectively for
00:57:22
thinking about how we're going to work with this AI in the future.
00:57:25
I haven't had enough time to really think about what does this mean for me
00:57:28
other than, you know, try to use more, more prompts in the creative process as we
00:57:32
talked about.
00:57:33
And this is where he, this is actually the part where it talks about the, the
00:57:37
accountants were who were in charge of calculating numbers by hand.
00:57:39
You know, your, your job is going to change and it's going to change whether
00:57:43
or not the technology that disrupts it was AI or not.
00:57:46
If it wasn't AI, there was going to be another technology that was going to
00:57:49
disrupt it.
00:57:50
My grandpa owned a type setting business and my uncle took over the business from
00:57:58
him and then when Max came out, they tried to hold on to this analog type setting
00:58:06
business, but everyone was moving to digital publishing.
00:58:08
Guess what happened?
00:58:09
You know, it's the whole innovators dilemma.
00:58:12
That was one of the books we covered way at the beginning of bookworm by Clayton
00:58:17
Christensen, who's a brilliant business business writer.
00:58:20
Was a brilliant business writer.
00:58:23
And the, the trick is not to try to protect things as they are because you're
00:58:29
comfortable, but these disruptions are going to come and you got to figure out,
00:58:33
you know, what does this mean for me and how can I leverage this AI is just
00:58:36
one of those disruptions.
00:58:38
Yeah.
00:58:39
So, so you answered my question that I was going to ask you next, which was Mike,
00:58:42
what are the tasks you're going to turn over to AI?
00:58:45
Right.
00:58:46
Like, so it got into that, you know, me delegate and then automate.
00:58:49
So you don't know that yet, but I would encourage that to be an action item for you.
00:58:52
I think that would be fun for me to hear what your answer is.
00:58:55
And then the last point I'll make here is I like the way he frames it and he
00:59:01
ties back into this idea of co-intelligence.
00:59:03
Like I really like his theme of co-intelligence here because he makes a
00:59:07
statement, how can AI be used as a co-intelligence to help us manage work?
00:59:12
And I think if we came in and like people come into it with that frame of mind that
00:59:16
says, okay, this is my job.
00:59:19
I do this thing every week or every day or twice a day or whatever it is.
00:59:24
Is there an AI that could get me 80% of the way there that way that job is so much
00:59:29
easier, better, more routine, more structured, you know, whatever it might be?
00:59:33
I think if we think about it as a co-intelligence, a partner, right?
00:59:36
Like that we're working with.
00:59:38
And I don't want to say it's a free partner because we might have to pay for some
00:59:41
subscription or whatever it is, but it's like, it's a, I don't know, it's a different
00:59:46
type of partner.
00:59:47
It's a different type of co-intelligence than you and I sharing that task, which I
00:59:52
think is a really fun way to think about this or a really like innovative way to
00:59:57
think about this.
00:59:58
And I like it.
00:59:59
That's one of the reasons why I like the co-intelligence idea.
01:00:03
Okay, so let's move to chapter seven.
01:00:06
So now AI is a tutor, obviously.
01:00:08
This is exciting to me and interesting to me.
01:00:10
I don't have a ton of like notes on this one, but this was the chapter that
01:00:15
got my, got me stewing in terms of like, what does this mean for me individually?
01:00:19
What does this mean for me personally?
01:00:20
I won't take us down.
01:00:22
Well, let me, let me hijack this one.
01:00:25
I'll share some of my notes and then I want to hear like, how are you thinking
01:00:29
this is going to impact you?
01:00:30
Okay.
01:00:31
All right.
01:00:32
So my notes from AI as a tutor.
01:00:34
And it's some, one thing that isn't in the notes, but it's important as a context.
01:00:38
You mentioned that he's a professor.
01:00:40
He actually requires his students to use AI in their assignments.
01:00:44
So while we're hearing about people who are saying, well, students might be cheating
01:00:48
because they're using AI, we have to ban the use of AI.
01:00:51
He's gone all, he's gone the other way.
01:00:53
Like, yeah, there's a possibility that they could cheat, but there's always been a
01:00:56
possibility that people could cheat.
01:00:58
And let's figure out how to, how to use this productively.
01:01:00
And I think he teaches entrepreneurship.
01:01:02
So AI is a tutor.
01:01:04
One-to-one tutoring is extremely effective in getting students to perform well.
01:01:08
And there is lots of research behind that.
01:01:11
And then the downside to that is that one-on-one tutoring is hard to do.
01:01:15
It requires a lot of time.
01:01:18
He mentions that asking AI to summarize things is taking a shortcut.
01:01:23
And that's not really what we should be doing.
01:01:25
What we want to do is we want to use this as a way to develop our skills.
01:01:33
And that doesn't mean that we're just going to increase reliance on the machines and become
01:01:38
medians because that's essentially what people thought with calculators back in the 70s.
01:01:41
In fact, a survey in the 70s found that 72% of teachers and lay people disapproved of
01:01:46
kids using calculators.
01:01:48
But then a couple of years later, it's like 90 some percent wanted to use calculators
01:01:52
in the classroom.
01:01:54
Now that doesn't mean that prompt engineering is the thing.
01:01:57
It kind of speaks against that in this chapter.
01:01:59
Prompt engineering is a useful near-term skill, but it isn't that complicated.
01:02:04
It's kind of like programming in prose.
01:02:07
He does say that AI will reduce the importance of lectures, which is probably what most people
01:02:13
think of when they think of higher education and the typical learning style.
01:02:19
So Corey, how does this affect your role as a professor?
01:02:24
Yeah, so I don't think anything he said in here was I had like a stark disagreement with
01:02:33
in terms of the way he was going.
01:02:34
It's all very sound, it was backed up by what makes sense.
01:02:38
I had actually never heard of the two-sigma problem.
01:02:40
So the two-sigma problem was actually really interesting to me.
01:02:43
I actually noted it to go try to read the bloom paper.
01:02:48
But yes, lectures bad, homework good, test good.
01:02:52
And really what this gets into is this idea of hands-on, minds-on learning.
01:02:57
And in a lecture environment, unless the person is a really good lecturer and if they're a
01:03:03
really good lecturer, they're not lecturing.
01:03:04
Like they're doing other things and they're just calling it a lecture.
01:03:08
So I teach primarily like design education and a lot of that.
01:03:13
So I mean, I don't talk often for more than 15 minutes before we're doing something.
01:03:19
We're doing a problem set.
01:03:20
We're doing an example problem.
01:03:21
You know, the students are engaging in their interaction with each other.
01:03:24
I think this is really, I think we're at the cusp of a significant change and I think
01:03:32
it'll be a leap forward in the way we do education.
01:03:36
And I think it's going to be like a big leap, like a big jump forward.
01:03:40
And I think you're going to see it divide in a couple different ways.
01:03:43
You're going to have the people who are like, nope, you know, I'm putting my head in the
01:03:46
sand and we're just going to keep doing things the way we've always done them and we're not
01:03:49
going to change.
01:03:50
Then you're going to have the people that take it too far and they're going to go full
01:03:53
AI and you're going to miss out on things, right?
01:03:56
Because I don't think you can do it with full AI.
01:03:58
I think AI can be a tutor.
01:04:00
But I think also there needs to be that human in the loop, right?
01:04:02
Like the person who is the expert in that field or is able to give the feedback and
01:04:08
give, you know, a different level of feedback is essential.
01:04:12
And then there are, I think, I think there's going to be the group in the middle and there
01:04:15
might be more groups.
01:04:16
I don't even thought about it.
01:04:17
But then there's going to be people who embrace the AI and they're going to say like, okay,
01:04:21
how do we leverage this to the best that we can to meet both the human and the in person
01:04:28
inside and the tie the technology in?
01:04:31
And I think the challenge with that middle group, which is where I would want to sit.
01:04:36
Like I want to sit in that middle group.
01:04:37
The challenge with that middle group is it changes the systems.
01:04:41
It changes the tasks and it changes the jobs of teaching and it changes and that and teachers
01:04:47
are terrible for wanting to embrace change.
01:04:51
Like they want to come in and they want to say it worked last semester.
01:04:56
I'm just going to keep doing it and the things that didn't work, I adjust those and I fix
01:04:59
those, but the things that worked, I want to keep doing that.
01:05:02
And universities, you know, like the phrases, it's like turning the Titanic, right?
01:05:06
Like, I mean, to get like significant change at a university is really, really hard because
01:05:11
there are so many different stakeholder groups.
01:05:13
That said, like I read this chapter and I think to myself, oh, there's so many opportunities
01:05:20
if we leverage this correctly and then I immediately think, yeah, but, well, let's talk about pushing
01:05:27
a boulder uphill, like, holy cow, we're pushing a boulder uphill, like to try to leverage
01:05:31
this well.
01:05:33
So I mean, that's, I hopefully I don't know if that answers your question or not, but
01:05:35
it's like, those are all of the thoughts that are stewing around in my head as I think
01:05:39
about, you know, how AI might impact me personally and then universities in general and education
01:05:45
in general.
01:05:46
Do you have any thought on who is going to end up in which camp?
01:05:51
Like, are there any themes with the people who are going to resist this versus the people
01:05:58
who are going to embrace it too much?
01:06:01
No, I mean, I don't think I can say that right now.
01:06:04
Do you mean like different disciplines or types of like faculty or teachers or what do
01:06:10
you mean?
01:06:11
Well, I guess like what I think of, I don't live in this world like you do, but I have
01:06:20
my undergraduate degree experience and then I have a Bible college degree that I went
01:06:27
back and got for funsies online.
01:06:30
And I feel like the traditional college four year degree, I went to a small liberal arts
01:06:37
school in Wisconsin.
01:06:41
I feel like that setting, I don't see it fairing well with this sort of stuff.
01:06:50
I read something on ESPN the other day about some baseball team that was a D three school,
01:06:55
it's been around for like 150 years and their school is shutting down and they made it to
01:06:59
the world series, college world series.
01:07:03
I feel like AI will make more disruption in the higher education space, which again,
01:07:14
like I don't think that's a fairly bad or good with liberal arts schools, you know,
01:07:19
40, 50 grand a year to go there.
01:07:23
You should be entitled to more as a student than to show up at the pit lectures in my
01:07:28
opinion.
01:07:30
And I just I don't know if there's a certain type of higher education that is better suited
01:07:38
to ride that way or not.
01:07:40
Yeah, I mean, I think I think you're seeing what you're saying is accurate, right?
01:07:45
Like you're seeing people make decisions when it comes to when it comes to higher education.
01:07:52
But I and I think the way I'll word it is I think AI will be a catalyst in those decisions.
01:07:58
It's going to make certain things happen faster than they were already happening.
01:08:02
So we I mean, we already get a bunch of students who come in and they're like, how do I finish
01:08:05
my degree in three years, right?
01:08:07
And we get students that come in and they're like bringing in 60, 70, 80 credits, you know,
01:08:11
and it's like, oh man, like Holy cow, like you're you're, you know, but we're still
01:08:17
in a system where for certain classes of job, that degree is the ticket in, right?
01:08:22
Like you can't even get an interview without without a degree.
01:08:25
I think those type of things are going to change if you're a motivated student and you leverage
01:08:31
AI, but then you also pair it with, you know, if we go back to the mastery book and you
01:08:36
pair it with apprenticeship and you pair it with different internship experiences and
01:08:40
these different things.
01:08:41
I mean, you're going to come out really well prepared, right?
01:08:45
The AI is going to get you.
01:08:46
It's going to help you get a lot of factual information.
01:08:49
And if you can pair that with, you know, mentorship and guidance and apprenticeships and those
01:08:54
type of things, I mean, you're going to be really well prepared to be super successful
01:08:58
in, in industry.
01:09:00
I think the, I think computer science CS, right?
01:09:04
Those are, those are going to be hit pretty hard because like the fundamental skills I
01:09:11
think are easier and easier to obtain, but the mentoring is, is where you, you know,
01:09:15
you really want the community, the design projects, learning from that person that's
01:09:18
sitting two seats down from you when you, when you can't figure that out.
01:09:21
So it's like, it's a hard question to answer.
01:09:23
I think there will be areas.
01:09:25
But what I want to do call out is he talks about that some of the people who will be the
01:09:29
best at using AI are the people with the humanities degrees because they understand the context.
01:09:35
They understand all the things that go around it.
01:09:37
And they're going to be the ones that get the most out of it because they know how to
01:09:40
leverage it the most because essentially they, they understand how to interact with people
01:09:45
and they understand how to take those interactions and combine it into the system.
01:09:48
So it's going to be a fun, I say fun, right?
01:09:51
It's going to be a crazy fun 10, 15 years as, as the stuff, you know, unfolds and, and we
01:09:57
see where, where the dust settles.
01:09:59
So it'll be exciting.
01:10:01
And that's, that's the perfect segue into the next chapter about AI as a coach.
01:10:05
Yep.
01:10:06
All right.
01:10:07
So, so we're going into coaching now and basically, you know, he gets into this idea of expertise
01:10:14
building here and that like, he talks about the 10,000 hours and really that, you know,
01:10:19
this deliberate practice versus, you know, just repetitive, rote practicing and he talks
01:10:24
about these different things.
01:10:26
But so the building expertise, you know, he would say like, you need a baseline of knowledge,
01:10:30
right?
01:10:31
So let's make sure we get a base on a knowledge.
01:10:32
Then you need to practice and you do the right type of practice, which would be deliberate
01:10:35
practice.
01:10:36
And if you put those two things together, then that's how you develop expertise.
01:10:41
And what AI can do is AI can serve as this coach that helps you figure out, you know,
01:10:48
better or more deliberate practice.
01:10:50
So if you think about this piano lessons, right?
01:10:53
Like, okay, I could go take piano lessons.
01:10:57
And if I have a good coach, that coach is going to push me and push me and push me, but
01:11:01
they're going to know the right way to push me.
01:11:03
They're going to know how to push me and say like, Oh, your fingering on your, on your
01:11:07
right hand isn't good.
01:11:09
So what we're going to do is we're going to do like the next three weeks is going to
01:11:12
work on fingering on your right hand.
01:11:13
And I'm going to hate every minute of it, but I'm going to get significantly better.
01:11:17
So does AI have the ability to do that?
01:11:19
Right?
01:11:20
Like it has the ability to identify these areas where you're not as good in this particular
01:11:25
area and help coach you to that.
01:11:28
I think if I'm getting it from remembering correctly, this is where he talks about the
01:11:32
doctors, right?
01:11:33
And how the apprenticeship and like the intern level of the doctors might start going away
01:11:40
because the doctors can use AI.
01:11:43
So therefore they don't need the lower level interns to do some of that stuff for the
01:11:47
them.
01:11:48
And they just keep doing what they're doing and then use the AI for that.
01:11:50
But then we're not going to train expert doctors anymore because they're missing that
01:11:54
middle phase.
01:11:55
So it's like the removal of this middle phase is a, I remember being a big, big part of this
01:11:59
chapter.
01:12:00
Yeah, they're using the surgery robots as well, which means that the, the apprentices
01:12:06
in the room, uh, don't get the experience that they would have previously.
01:12:11
I think that's the big takeaway for me from this is that, um, yeah, you can use AI as
01:12:16
a coach and you can help it to, uh, give you feet, like you can use it to get more feedback
01:12:22
loops and, and, uh, do deliberate practice.
01:12:26
But the, the dangerous side of this, I think is that the experts become experts through
01:12:32
that deliberate practice.
01:12:34
And because their time is valuable, they're, if they just go all in with AI, there's a,
01:12:40
a major apprenticeship gap that's going to develop where the next tier is not going to
01:12:46
have an opportunity to develop the same types of skills.
01:12:51
Um, I, I think that, uh, it's probably not, uh, that sensational.
01:12:58
I think that's just kind of like a red flag and the, the practical application of this
01:13:02
is if you're someone who has those, those skills, it's worth investing in, uh, somebody
01:13:07
else, but the big question that you're asking regardless of whether you are an expert or
01:13:12
not is where is the appropriate place to insert AI as a co-intelligence into, into, uh, what
01:13:19
I do.
01:13:20
Um, he does mention in this chapter that humans working with AI co-intelligence always outperform
01:13:26
the best humans alone.
01:13:28
So again, it's not something that, oh, this is bad and it's going to eat our lunch and
01:13:32
we need to avoid it.
01:13:33
If you really want to be the best at what you do, you have to, uh, you have to embrace this
01:13:38
stuff.
01:13:39
Um, but also I think for me, one of the things that I'm always cognizant of is the type of
01:13:46
legacy that I'm going to leave and I want to help other people do more.
01:13:52
So as it pertains to me and how I would apply this, it's, you know, use the AI where appropriate,
01:13:59
but don't just neglect the people, uh, for the sake of the productivity.
01:14:04
That was something I picked up from Chris Bailey's book, The Productivity Project way
01:14:08
back in the day.
01:14:09
Uh, because he mentions that people are the reason for productivity.
01:14:12
A view AI is like a productivity enhancement tool, um, but the people are still the, the
01:14:18
most important thing.
01:14:20
And I've aired on, on the side of, well, I just got to get the task done before, you
01:14:24
know, and damage the, the relationship.
01:14:26
So I think AI just, you know, provides more opportunities for that sort of thing to happen
01:14:30
if we're not careful, but doesn't mean that it necessarily has to.
01:14:34
Uh, I don't think the fact that AI makes things more efficient means that people are
01:14:39
just going to go inward and they're not going to train up other people, uh, to, to do things.
01:14:45
I think that's kind of inherent in us is that we want the, the people that we are investing
01:14:51
into, whether those are our natural kids or, um, just like your, your students, you know,
01:14:56
you want them to go leave a bigger dent in the universe.
01:15:00
That's actually the, my life theme is I'm, I, I condense it down and I say I'm a multiplier
01:15:06
because I want to help people multiply their time and tell and leave a bigger dent in the
01:15:09
universe.
01:15:10
That's the most rewarding thing for me.
01:15:12
If I can help you do that, you know, then I'm going to do that.
01:15:15
And if I'm constantly focused on just like, how do I get more widgets cranked?
01:15:19
Then I'm going to miss those opportunities.
01:15:23
I think this is the area where this agreement might be, um, the wrong word, but I'm less
01:15:29
optimistic.
01:15:30
I'm more skeptical in this area where I think AI right now is really good at that baseline
01:15:36
knowledge.
01:15:37
It's really good at helping me get that, that baseline knowledge.
01:15:39
It's really helped good to help me edit things and, you know, answer questions and summarize
01:15:44
and do some of those things.
01:15:46
But when it comes to the coaching side, I still think like an actual human sitting across
01:15:51
from me face to face or even virtually just is so much more effective and efficient.
01:15:56
Um, because they can read cues better.
01:15:59
They can hear the nuances or understand the nuances.
01:16:03
They're less likely to be manipulated.
01:16:05
Um, you know, as you're working through things.
01:16:07
So this is probably the area where I'm a little less, you know, gung ho on what, um,
01:16:12
Moloch saying in this section.
01:16:14
All right, let's, let's get the rounding it out.
01:16:16
So now AI is our future.
01:16:17
He talks about four possible futures here.
01:16:20
He basically says AI, this is good.
01:16:21
It's going to get, it's not going to get any better.
01:16:23
Um, there's some reasons for that.
01:16:24
He throws it out there that basically, um, rules and regulations, stop it.
01:16:29
Technology progress stops it, you know, whatever it is, but it just doesn't get any better
01:16:33
than it is right now.
01:16:34
He talks about slow growth.
01:16:36
The fact that, you know, not everything can remain exponential and maybe just will be limited
01:16:40
by the future of the way the technology pans out.
01:16:43
So therefore it'll grow, but it's not going to grow as quickly as it has, you know, lately
01:16:47
or, um, from the, from the start.
01:16:50
He talks about exponential growth and he uses your, um, your flywheel, right?
01:16:54
So yeah, he talks about the fact that it's a flywheel.
01:16:56
So, uh, basically AI companies can use AI systems to help them generate the next version
01:17:01
of AI and boom, boom, boom, boom.
01:17:03
And it just feeds on itself and we, we keep that curve going up exponentially.
01:17:06
And then he talks about the machine God as the, as the last one, what they basically
01:17:11
says, we're going to develop some artificial super intelligence.
01:17:14
It's essentially going to rule everything, um, in, you know, that that's how it goes.
01:17:20
Or that's how it would go there.
01:17:22
Uh, he, I'm trying to remember what he, he thinks he doesn't think that as good as it's
01:17:26
going to get, um, is, is likely.
01:17:29
He also doesn't think the machine God scenario is the likely scenario, but I can't remember.
01:17:34
Do you remember, whether he thinks it's going to be slow growth or exponential?
01:17:37
Does he ever say that?
01:17:39
I don't remember.
01:17:40
Um, if I had to pick one, I think it's probably between slow and exponential.
01:17:46
I feel like we've seen some exponential growth recently, but I think we are going to hit
01:17:51
some diminishing returns with some of the AI stuff soon.
01:17:56
That doesn't mean that it's not going to continue to improve.
01:18:00
But I think it is going to slow a little bit from where it is right now.
01:18:03
However, I think just the rapid pace of, of change, um, that is accelerating overall.
01:18:11
So I don't expect it to be, uh, slow growth either.
01:18:15
Uh, I want to real briefly mention the, the machine God scenario.
01:18:19
He mentions that there's no theoretical reason this can't happen, but there's no reason to
01:18:23
suggest it might, which I think is kind of refreshing that, that perspective, because,
01:18:32
uh, the stuff that I've seen online from people who fall into, you know, this is as good as
01:18:36
it gets or the other extreme, it's going to take over everything.
01:18:40
They tend to be very cut and dry.
01:18:43
Like this is the reason and this is, you know, factual, 100% certainty.
01:18:47
And I feel like, uh, he's essentially saying it, at, at this chapter, we don't know what
01:18:53
this is going to look like.
01:18:55
He's basically throwing out a whole bunch of different, um, possibilities.
01:19:01
And, uh, I think I really, I, well, I really like this, uh, this approach.
01:19:05
And I think what this does for me after I read this is question anybody who says with
01:19:11
any degree of certainty, like this is what you got to watch out for.
01:19:15
This is what's going to happen.
01:19:17
Yep.
01:19:18
Uh, I think the things he says might happen where people get left behind, people struggle
01:19:24
with the, the transition.
01:19:25
I think all those, those things make sense, right?
01:19:27
He calls them the smaller catastrophes.
01:19:29
I think all those make sense, but, um, I like what you said in terms of, you know, there's
01:19:35
no probability or there's, sorry, there's no likelihood that it, it will happen, but
01:19:39
it might, you know, like we can't say, we can't say that it won't.
01:19:42
All right.
01:19:43
Um, now we have the epilogue.
01:19:44
Um, I, I didn't get a ton out of the epilogue.
01:19:47
Um, basically it's a tie back into let's think about AI as a co-intelligence.
01:19:54
Um, it doesn't have a mind of its own.
01:19:57
Its math and it's, you know, programmed and it's, it's really impressive for what it can
01:20:02
do.
01:20:03
Uh, but you know, his last statement, I think, uh, or one of the last statements he says,
01:20:07
"humans are far from obsolete at least for now."
01:20:09
Right?
01:20:10
And I was like, what a, but, but even there, like I thought it was a great way to end the
01:20:13
book.
01:20:14
It's like, hey, remember, use this thing as a co-intelligence, but humans are so important
01:20:19
and, and don't lose the sight of the fact that humans are important.
01:20:22
Yeah.
01:20:23
But he also said a, this is only like two pages and half of one of the pages is the return
01:20:29
of the prompt that he, but one of the things he mentions on page two, 12, the very last
01:20:35
page is that AI is a mirror reflecting back at us our best and worst qualities.
01:20:41
I like that too.
01:20:42
And then I also agree that that last sentence of the book is, is great about humans being
01:20:46
far from obsolete, at least for now.
01:20:48
Um, that, that's the perfect way to, to end it.
01:20:50
I agree.
01:20:51
Don't worry, but this is going to change.
01:20:54
All right.
01:20:55
So before we go to styling rating, action items, all that stuff, I have a thing for you, Mike.
01:21:01
So if you scroll down to the bottom of the notion page, I asked you not to look at these.
01:21:04
Hopefully you, you helped turn to that.
01:21:06
Okay.
01:21:07
So go into the first one, go into a four-quary, like the person.
01:21:10
All right.
01:21:11
This is the prompt I gave it.
01:21:12
And I base, I mean, we were doing a book on AI, right?
01:21:14
I had to, I had to go and see what it did.
01:21:18
Act like you're a professional book reviewer and provide a chapter by chapter summary of
01:21:21
the book, Co-intelligence Living and Working with AI by Ethan Mullick.
01:21:24
Um, I don't want to go through all these because it actually does, it does this and
01:21:28
it does it very, very well in terms of giving us a, a summary of it.
01:21:32
But now go back and go to the second one.
01:21:35
Okay.
01:21:36
All right.
01:21:37
Got it.
01:21:38
Now I said, act like you're a professional book reviewer, provide a chapter by chapter
01:21:40
summary, Co-intelligence by Ethan Mullick.
01:21:42
You should include all headings and subheadings contained within the chapters.
01:21:46
And I think it did a really good job.
01:21:49
Right?
01:21:50
Yeah.
01:21:51
I actually thought it was going to kick back an air to me that said, I can't do that.
01:21:55
It's copyrighted work.
01:21:56
You know what I mean?
01:21:57
Like, or, you know, you're going to ask me for too much of the, of the copyrighted work.
01:21:59
And it did not do that.
01:22:01
It 100% gave us a, um, a good summary.
01:22:05
Um, but what I want to call out here is one, the prompting.
01:22:11
So we've talked about this a little bit before, like the more detailed the prompts you give
01:22:14
it, the better results you get.
01:22:17
Two, you have to review this stuff to make sure that these are actually the headings
01:22:21
and the subheadings because it might have just made this stuff up.
01:22:24
Like if I don't review that, but then three, and this is not for you and I to claim, you
01:22:29
know, like the importance of ourselves or anything like that.
01:22:33
There's a big difference between ask me asking it for a summary with headings and subheadings
01:22:37
and then listening to two people talk about chapter by chapter, you know, the book and
01:22:43
what they got out of it and how they went.
01:22:45
And I say that because it's like it's, there's a place for all of these things and you can
01:22:51
use them in the way that makes the most sense for you and in a way that provides the most
01:22:55
value for you.
01:22:56
And I just think that's a, that's a valuable thing.
01:22:58
Like I really think, you know, that having the access to this is better than not having
01:23:04
the access to it.
01:23:05
100%.
01:23:06
Awesome.
01:23:07
All right.
01:23:08
Let's go into action items.
01:23:09
Mike, what action items do you have coming out of reading co-intelligent?
01:23:17
I just have the one that you gave me figure out what I'm going to delegate to AI.
01:23:23
I think I'm going to take other action from this book, but I don't have anything specific
01:23:29
that I'm going to do with it.
01:23:31
I think the big thing is to consider where I can invite AI to the table.
01:23:39
But I don't know exactly where that is yet.
01:23:42
So we mentioned the Pro Show, some of the tools that we're using that have AI.
01:23:49
But I want to find some additional ways to do some of the things that I'm doing.
01:23:54
And my big perspective change walking away from this book is essentially even if it doesn't
01:24:00
get it right on the first try and it takes a little bit of time to craft the prompt or
01:24:05
to dial it in.
01:24:06
It's worth the work to do that kind of stuff.
01:24:10
So I don't know exactly what that's going to look like, but I just want to have that
01:24:15
perspective as I go about my creative routines and build my systems.
01:24:20
Wonderful.
01:24:22
So I have two.
01:24:24
So one, I want to plan how I can teach with AI more intentionally.
01:24:27
So I am very much an adopter of this.
01:24:29
I don't fear this.
01:24:31
I welcome this into my classrooms.
01:24:33
I've already used it in certain classes.
01:24:36
And I want to be even more intentional in the 24/25 academic year about we're going to
01:24:42
embrace this.
01:24:43
We're not going to fear it.
01:24:44
We're not going to run away from it.
01:24:47
And you might cheat, sure.
01:24:50
But at the same time, I need to be creative about the way I design prompts and assignments
01:24:54
and reflections to where it's hard for you to cheat.
01:24:58
You can use AI, but we're going to do it in a way that it's difficult for you to cheat
01:25:02
easily.
01:25:04
The second one is I want to think about the scholarly side of my work.
01:25:08
So the research side, the other thing I do that isn't just in the classroom and say,
01:25:13
how do I utilize AI more effectively to do reviews or to analyze different sets of data
01:25:24
or what it might be?
01:25:26
Data is hard because if you're under IRB review, you can't really share the data because it's
01:25:33
like you're putting it out there for essentially everybody.
01:25:36
So it's protected data.
01:25:38
But more generally, secondary data.
01:25:40
How could I analyze secondary data and say, hey, go to the IPEDS database and tell me
01:25:46
the number of engineering graduates since 1974?
01:25:50
It's like, can I manipulate AI in a way that it will actually kick back an accurate response?
01:25:56
I want to play around with more in-depth prompting and utilizing it more.
01:26:00
I don't think that has a real tangible next week.
01:26:05
I say, yeah, I did exactly that.
01:26:07
But I think it's one of those ones that's in the back of my head that I'll keep thinking
01:26:10
about.
01:26:11
Awesome.
01:26:12
All right.
01:26:13
So this is my book.
01:26:14
My book means I get to rate it first.
01:26:18
I don't think you'll be surprised to hear that I loved this book.
01:26:22
I thought this book was outstanding.
01:26:26
I didn't think the author wasted much time.
01:26:29
I thought the author talked about things in a flow that made sense and organized it effectively.
01:26:36
I thought that the author hit--
01:26:39
I can't think of something that I would have wanted to be in the book that wasn't in there.
01:26:44
That it's like, oh, yeah, but you didn't talk about this whole area or this whole section.
01:26:49
I thought it was both theoretical in terms of telling us some of the fundamental understandings
01:26:54
of it, but also practical in pushing us to be in a co-intelligence relationship with
01:27:02
AI.
01:27:03
Overall, loved it.
01:27:05
I'm a huge fan of this book.
01:27:06
I will recommend it to people to read if they want to get more into understanding AI and
01:27:13
how AI we can interact with AI.
01:27:15
So you won't be surprised.
01:27:17
It's a 5.0 for me.
01:27:19
All right.
01:27:21
Well, I agree that this is probably best-case scenario for me in terms of a book on AI.
01:27:28
I enjoyed this way more than I thought I would.
01:27:32
I think the only thing that's going to keep me from rating it 5 stars is that I don't
01:27:39
know that this is going to hold up real well just because AI is changing so fast.
01:27:50
Everything he shares here makes sense.
01:27:53
I really like the four principles.
01:27:56
I really like the four scenarios, but I feel like in 12 months, does one of those scenarios
01:28:04
feel like, oh, that's just ridiculous?
01:28:07
I don't know.
01:28:08
I'm not sure that this stands the test of time.
01:28:11
I think right now, this is the book to read on artificial intelligence.
01:28:18
This is episode 200 of Bookworm, though, and we do an episode every other week.
01:28:25
So I've been reading books and talking about them for what is that?
01:28:31
Like almost eight years.
01:28:32
Eight years.
01:28:34
When we get to episode 400, is this one of the ones that I look back and like, yeah,
01:28:40
that was life-changing?
01:28:42
I'm not sure.
01:28:43
I'm just like, it might be, but I'm a little bit skeptical.
01:28:48
So I'm going to rate it four stars only because I've got a whole bunch of books that I've
01:28:54
read and a lot of books that I have rated five stars that, you know, when I read them,
01:29:00
I really felt like this book is going to revolutionize my life.
01:29:05
I'm going to be able to look back and say, I thought one way prior to this.
01:29:09
Now I think differently and my life is different as a result.
01:29:14
I think jury's still out with the artificial intelligence stuff.
01:29:18
Not that it's not going to affect our future.
01:29:22
I think it absolutely is going to change everything.
01:29:28
I think at the moment, you know, this is the best book that I've come across on the topic,
01:29:33
but like I said, it's changing so fast.
01:29:35
I'm not sure how good it holds up.
01:29:39
But I really enjoyed it.
01:29:40
And you know, I want to reiterate, you know, if you're looking for a book on artificial
01:29:45
intelligence, this is the one to read.
01:29:48
I feel like it does a really good job of explaining the different scenarios.
01:29:53
So you're going into things with your eyes open in terms of the possibilities.
01:29:58
He doesn't shy away from some of the bad stuff that could happen from it, but that's not
01:30:02
a reason to avoid it.
01:30:03
Like it's a very balanced approach, I feel.
01:30:06
And it's the right approach for where we are right now.
01:30:09
I'd agree.
01:30:10
All right, Mike, let's put co-intelligence on the shelf.
01:30:16
And let's think about what our upcoming books are.
01:30:19
So you've got the next book.
01:30:20
What's next?
01:30:21
I do.
01:30:22
The next one is Right Thing Right Now by Ryan Holiday.
01:30:25
I think we've read just about every Ryan Holiday book for Bookworm.
01:30:30
This is the third one in the Stoic series.
01:30:32
I am getting more and more excited to talk about this one.
01:30:36
I heard Ryan share that he thought this one was going to be the hardest one to sell because
01:30:42
he packaged the four Stoic virtues.
01:30:46
And he sold that idea to his publisher.
01:30:48
And he told them, hey, just so you know, this one's going to do the worst out of all of
01:30:51
them.
01:30:52
But it was the quickest one to show up on the New York Times bestsellers list.
01:30:57
So.
01:30:58
Awesome.
01:30:59
All right.
01:31:00
So then after that, as you're prepping, we'll do a book.
01:31:03
My next book is called Uncoming Greatness by Mark Miller.
01:31:07
It's a book on leadership.
01:31:08
And I found it, you know, looking through some different things on thinking about leadership.
01:31:13
And I think it'll be a good, a good book following Right Thing Right Now.
01:31:17
Mike, do you have any gap books right now?
01:31:21
Uh, maybe.
01:31:24
I do have a book with me, which I want to read.
01:31:27
I think I mentioned this as a gap book before, but didn't get a chance to read it.
01:31:30
And that is Simple Marketing for Smart People by Billy Boris and Tiago Forte.
01:31:36
Billy is the guy that does the marketing for Tiago and he has his own courses that he,
01:31:42
he sells.
01:31:44
I think he calls it like the five light bulbs method or something like that.
01:31:49
So I've seen some of his stuff before and it really does feel like very helpful, practical
01:31:55
marketing advice and as I'm getting ready to release a new product.
01:31:59
This is something that I'm thinking about.
01:32:01
Alrighty.
01:32:02
Uh, I don't have any gap books.
01:32:04
I don't want any gap books.
01:32:06
I am not in a place right now where a gap book makes sense.
01:32:10
I'm going to read Right Thing Right Now.
01:32:12
I'm going to enjoy Right Thing Right Now.
01:32:15
If I get lucky and am able to carve out some other time for a gap book, I will do that.
01:32:19
But as of right now, I do not have one.
01:32:22
Alrighty.
01:32:23
Alright.
01:32:24
So that, uh, that does it for another episode of Bookworm.
01:32:27
Uh, thanks for everybody for listening.
01:32:29
For the pro members, thank you.
01:32:31
If you're interested in the pro show, uh, we have a Patreon.
01:32:36
So you can go to patreon.com/bookwormfm.
01:32:38
Uh, there at $7 a month, you can sign up.
01:32:41
Um, wanted to support the show to get, um, bonus content in terms of the pro show.
01:32:46
Today we talked about the AI that we're using and like the apps that have it integrated
01:32:52
and how it integrates.
01:32:53
Um, you also get the, um, the bootleg version of the feed.
01:32:57
So straight from our Zoom recording, uh, we pump it out and, uh, give it to you.
01:33:01
Um, and then, um, there's an ad-free episode of the show.
01:33:05
So, um, if you're interested in that, uh, we'd love it.
01:33:08
It's, uh, patreon.com/bookwormfm.
01:33:10
Um, other than that, any last words?
01:33:14
Mike?
01:33:15
Ah, if you're reading along with us, pick up Right Thing Right Now by Ryan Holiday.