Artificial Intelligence and YOU - Thinking Thoughtfully about AI
About This Presentation
A look at the basics of large language models, how they are over-hyped, and the risks they pose.
Slide Content
Artificial Intelligence
and YOU
Micah Melling
MO DECA Fall Leadership Conference
10/13/2025
Who Am I?
I have built machine learning systems that have underwritten billions of dollars of
financial contracts, allocated hundreds of millions of marketing dollars, and
transacted tens of millions in the stock market.
I have taught and developed graduate-level technology courses, and I serve on
the analytics advisory board for a major D1 institution.
I have written a data science textbook and a speculative AI fiction novel.
I have given multiple lectures on the intersection of philosophy, technology, ethics,
and leadership.
Who Am I?
I was the 2010-2011 MO DECA State President and the 2011-2012 Central
Region Vice President.
Things are not easy for you.
Factor #1: Societal Obsession with Objectives
●68% of teens feel pressured to get good grades
●56% of teens feel pressured to have future plans figured out
●53% of teens feel pressured to be exceptional in their achievements
Factor #2: The Loneliness Epidemic
●40% of teens report not usually receiving the emotional support they need
●25% of high schoolers classify themselves as lonely
●Nearly 33% of teens might have an anxiety disorder
AI Can Help, Right?
It’s always there, never judges, and seems smart.
But AI…Is Not All It’s Cracked Up To Be
AI is not magic or an act of nature.
It is a combination of data, math, and software - designed by humans like you and
me.
Large Language Models (LLMs)
●LLMs are a subset of AI, but this type of model is presently dominating the
field.
●Examples include ChatGPT, Gemini, and Claude.
How are LLMs Trained?
Training a large language model has two main steps: 1) pre-training and 2)
post-training.
Pre-Training
●In pre-training, the LLM learns to predict the next best word considering all the
previous words in the supplied document.
Pre-Training
●Pre-training uses vast amounts of data, essentially the entire Internet, to learn
how to generate probable words based on past context.
Pre-Training Examples
Mary had a little [lamb, chicken, sock, sandwich].
The best flavor of ice cream is [strawberry, vanilla, coffee, spumoni].
The capital of Australia is [Canberra, Sydney, Melbourne, Brisbane].
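To make this concrete, here is a minimal sketch of next-word prediction, assuming the Hugging Face transformers library and the small, openly available gpt2 checkpoint (a stand-in, not one of the commercial models named above). It prints the model’s top candidates for the word after “Mary had a little”.

```python
# Minimal next-word-prediction sketch (assumes the Hugging Face
# "transformers" library; gpt2 weights download on first run).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("Mary had a little", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Convert the scores at the final position into probabilities over the
# vocabulary: the model's belief about the next word, given all the
# previous words.
probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(probs, 5)
for p, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode(idx.item())!r}: {p.item():.3f}")
```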
Post-Training
●In post-training, the LLM is presented with prompt-response pairs (curated by
humans) to optimize it for conversation and instruction following.
Post-Training
●This step includes Reinforcement Learning from Human Feedback (RLHF).
○The model is given a “preferred” response and a “non-preferred”
response to a prompt. It then learns to generate entire responses that
resemble the traits of the “preferred” responses.
Post-Training Examples
Prompt: Tell me a joke.
Preferred Response: “Sure! Why don’t skeletons fight each other? Because they
don’t have the guts. (Admittedly, they also lack a strong stomach for conflict.)”
Post-Training Examples
Prompt: Tell me a joke.
Non-Preferred Response: “The concept of a joke is a bit fuzzy and people find
different topics humorous. I’ll give this my best shot, but I am not sure if you will
find it funny! So, here it goes anyway…Why did the scarecrow win an award?
Because he was outstanding in his field. Do you like it? Let me know, please!!! :-)”
Post-Training Examples
Preferred Response: “Sure! Why don’t skeletons fight each other? Because they
don’t have the guts. (Admittedly, they also lack a strong stomach for conflict.)”
Non-Preferred Response: “The concept of a joke is a bit fuzzy and people find
different topics humorous. I’ll give this my best shot, but I am not sure if you will
find it funny! So, here it goes anyway…Why did the scarecrow win an award?
Because he was outstanding in his field. Do you like it? Let me know, please!!! :-)”
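As a rough illustration of how preference pairs like the jokes above become a training signal, here is a toy sketch, in PyTorch, of the pairwise (Bradley-Terry) loss commonly used to train the reward model in RLHF. The 8-dimensional embeddings and the linear scorer are hypothetical stand-ins for a full LLM with a scalar reward head.

```python
# Toy sketch of the pairwise preference loss behind RLHF reward models.
# The random 8-dim "embeddings" and linear scorer are hypothetical
# stand-ins for a full LLM with a scalar reward head.
import torch
import torch.nn.functional as F

reward_model = torch.nn.Linear(8, 1)

preferred = torch.randn(1, 8)      # stands in for the preferred joke
non_preferred = torch.randn(1, 8)  # stands in for the non-preferred joke

r_pref = reward_model(preferred)
r_non = reward_model(non_preferred)

# Bradley-Terry loss: -log(sigmoid(r_pref - r_non)). Minimizing it
# pushes the preferred response's score above the non-preferred one's.
loss = -F.logsigmoid(r_pref - r_non).mean()
loss.backward()
print(f"pairwise preference loss: {loss.item():.3f}")
```

The LLM itself is then fine-tuned (e.g., with a reinforcement learning method such as PPO) to generate whole responses that this reward model scores highly.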
The Upshot of the LLM Training Process
●Essentially, the LLM’s primary goal is to predict the next word in a sequence
of text (pre-training).
●It has a subgoal of producing a series of words in a way that is predicted to
please and serve the user (post-training).
LLMs Are Not as Capable as Marketed
LLMs often memorize answers. Even optimistic analyses support this point.
LLMs Do Not Generalize to New Situations Like Humans Do
●Slightly changing the wording of questions and/or adding irrelevant information tanked performance on math problems (citation).
●LLMs have failed to solve basic logic puzzles they had not seen previously
(citation).
●They get crushed at chess by the nearly 50-year-old Atari 2600 (citation).
LLMs Do Not Have the Creative Capacity of Humans
Thought Experiments:
1) Only train an LLM with information from before 1950 and see if it can actually invent anything notable from the last 75 years.
2) Where are all the novel breakthroughs? If a human had all the knowledge an LLM does, wouldn’t they be generating novel scientific breakthroughs?
AI is Not Like a Human in Important Ways
LLMs Are Optimized for Responses, Not Long-Lasting Relationships
●Post-training is strongly focused on training LLMs to synthesize single
answers to single prompts.
●Consequently, LLMs have been shown to struggle with accurately recalling
important information in long dialogues.
LLMs Can’t Hold Space for Our Complexity and Nuance
●Humans are complex. Our emotions and actions are often in conflict and even
in paradox.
●LLMs don’t hold space for this ambiguity. They jump to action, always wanting to resolve and please us, because this is how they are trained to behave.
LLMs Can’t Hold Space for Our Complexity and Nuance
LLM Mantra: What is the best response I can say right now
1) with essentially no regard to future conversations
2) and, at best, a fuzzy understanding of the conversation history?
LLMs Can’t See Me
LLMs make next-word predictions and synthesize complete responses based on
the rough average of what they have seen in their training data.
LLMs Can’t See Me
I am not an individual to LLMs. They cannot see me. At best, they think I am the rough average of their training data.
LLMs are Designed to Maximize Engagement
Algorithms on tech platforms optimize for your attention and engagement … not
your holistic growth as a human.
AI Can Tempt Us Into Not Thinking
On Critical Thinking
Discernment comes from doing.
Prompting is different from creating.
Critical thinking comes from hard work and frustration.
On Critical Thinking
An MIT study of brain activity indicated that cognitive offloading induced by LLM usage may impair critical thinking and memory.
Though the sample size is small, the results are intuitive: if you let an LLM do the work, you are doing comparatively less thinking.
The Importance of Tension
AI creates personalized curation that lowers tension.
But we need tension or a degree of healthy stress (not life-threatening stress).
And that tension needs to be random. Otherwise, it is not tension.
The Importance of Tension
The “prediction machine” of our intelligent mind needs friction to grow and even
survive.
Our minds - up to a point - are perhaps “antifragile”, meaning they improve with
adversity.
Encouragement #1: Embrace Collecting Stepping Stones
Greatness cannot be planned.
Do what you find interesting, expand your range, and “do the next right thing.”
Do not let AI tempt you to glide through life.
Encouragement #2: Embrace Human Connections
I believe we are meant to be in connection.
I believe good things are worth fighting for.
I believe in the awe and wonder of this world…and it is for us to enjoy together.
Do not let AI replace what should not be replaced.