ContextLLM: Meaningful Context Reasoning from Multi-Sensor and Multi-Device Data Using LLMs

arukimisuta, Feb 26, 2025

About This Presentation

ContextLLM for meaningful and descriptive LLM inference


Slide Content

Kevin Post, Reo Kuchida, Mayowa Olapade,
Zhigang Yin, Petteri Nurmi, Huber Flores
Email: [email protected]
ContextLLM: Meaningful Context Reasoning from Multi-Sensor and Multi-Device Data Using LLMs

"ContextLLM: Meaningful Context Reasoning from Multi-Sensor and Multi-Device Data Using LLMs", 26th ACM HotMobile 2025
Kevin Post ([email protected])
Importance
Potential for richer, more meaningful insights from multi-device, multi-sensor data

Our solution: ContextLLM
Aggregating sparse insights with LLM reasoning and domain knowledge into meaningful descriptions
Example output: "The person is likely fatigued, weak, and possibly dizzy, resting or moving carefully due to hypoglycemia and poor sleep."

ContextLLM: Contributions
• ContextLLM: An LLM-powered approach that transforms abstract activity labels into meaningful and interactive descriptions
• Key insight: LLM reasoning and domain knowledge can be used to infer high-level human activities from multi-sensor data
• Extensive evaluation: Assesses context reasoning and performance across conditions, and outlines challenges for future research

ContextLLM: Experiment setup
LLM models:
• GPT-4o (more sophisticated)
• GPT-4o-mini (down-sized)
• GPT-3.5 Turbo (less sophisticated)
Task: timestamps + gesture labels + locomotion labels → durations of high-level activities
Data: morning routine activities (e.g., "Cleanup") recorded with different IMU sensors
[Image source: freepik.com]
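The task, mapping timestamped gesture and locomotion labels to high-level activity durations, can be sketched as a prompt-construction step before the LLM call. This is a minimal illustrative sketch, not the paper's actual prompt; the `build_prompt` helper, record format, and label names are our assumptions, and the actual LLM request is omitted.

```python
# Hypothetical sketch: serialize timestamped gesture/locomotion labels into a
# text prompt an LLM can reason over (the API call itself is not shown).
def build_prompt(records, activities):
    """records: (timestamp_sec, gesture_label, locomotion_label) tuples."""
    lines = [f"{t}s: gesture={g}, locomotion={l}" for t, g, l in records]
    return (
        "Sensor log:\n"
        + "\n".join(lines)
        + "\n\nEstimate the duration in seconds of each high-level activity: "
        + ", ".join(activities)
        + "."
    )

records = [
    (0, "Open Fridge", "Stand"),
    (12, "Cut Bread", "Stand"),
    (45, "Clean Floor", "Walk"),
]
prompt = build_prompt(records, ["Sandwich time", "Cleanup"])
print(prompt)
```

The resulting text would then be sent to GPT-4o, GPT-4o-mini, or GPT-3.5 Turbo, whose answers are parsed into a duration vector for scoring.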

Metric: cosine similarity
Ground truth vector of activity durations: [319, 178, 57, 160] sec
Accurate intermediary reasoning outputs translate to excellent performance (cosine similarity: 0.79):
"From the provided data, I can see 'Clean Floor', which relates to the Cleanup activity."
"Sandwich time could involve opening and closing the fridge and using drawers, implying making a sandwich."
Low quality of intermediary reasoning outputs leads to lower performance (cosine similarity: 0.18).
[Figure: inferred duration vectors (e.g., [287, 95, 40, 112] sec, [0, 0, 0, 11] sec) compared against the ground truth; images: freepik.com]
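The metric itself is standard cosine similarity between the inferred and ground-truth duration vectors. A minimal sketch, using the slide's ground-truth vector; the slide's reported 0.79 and 0.18 scores come from the paper's evaluation, while the vectors below merely illustrate how the metric behaves.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two activity-duration vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

ground_truth = [319, 178, 57, 160]  # seconds per high-level activity
perfect = cosine_similarity(ground_truth, ground_truth)  # identical vectors, ≈ 1.0
poor = cosine_similarity(ground_truth, [0, 0, 0, 11])    # mostly-zero inference, low score
print(perfect, poor)
```

Because the metric is scale-invariant, it rewards getting the relative distribution of activity durations right even when absolute durations are off.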

ContextLLM: Baseline performance
• Model-specific differences
• Advanced models perform better
• Scaled-down models underperform
• LLMs capture activity distributions

ContextLLM: Misclassification
Misclassification introduced by predicted labels at different rates: 5%, 10%, 15%, 20%, 30%, 50%, 70%
In-the-wild misclassification: walk → stand
Fused misclassification: walk → swim
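The noise-injection procedure can be sketched as replacing a label with a confusable one at a chosen rate. This is an illustrative sketch of the experimental manipulation, not the paper's code; the function name and label strings are our assumptions.

```python
import random

def inject_misclassification(labels, rate, wrong_label, rng, target="walk"):
    """Replace each `target` label with `wrong_label` with probability `rate`."""
    return [wrong_label if lbl == target and rng.random() < rate else lbl
            for lbl in labels]

rng = random.Random(0)
labels = ["walk", "stand", "walk", "walk", "run"]
# Fused misclassification (walk -> swim) at rate 1.0: every "walk" is flipped.
print(inject_misclassification(labels, 1.0, "swim", rng))
# -> ['swim', 'stand', 'swim', 'swim', 'run']
```

Sweeping `rate` over 0.05 to 0.70 reproduces the experimental conditions; the in-the-wild variant would pass `wrong_label="stand"` instead.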

ContextLLM: Misclassification
Result: up to 20% misclassification, ContextLLM remains robust
5%: "The repeated 'Swim' entries are unusual in a regular indoor setting. [...] due to the environment context, false positives could be possible."
20%: "The frequent occurrence of swimming might be sensor misinterpretation."
50%: "The person seems to be in an indoor environment with access to a swimming pool."
More sophisticated models are more prone to hallucination.

Main takeaways
• LLM-specific characteristics: prompt engineering? sensor data representation?
• LLMs are context reasoners capable of aggregating multi-sensor insights
• LLMs enrich ML classes into descriptions

Our vision: Sensing Copilots
• Exploring historical data
• Activity monitoring and proactive hazard notification
• Speculating on future trends

Thank you for your attention!
Questions?
Kevin Post ([email protected])
Reo Kuchida ([email protected])
Mayowa Olapade ([email protected])
Zhigang Yin ([email protected])
Huber Flores ([email protected])
Petteri Nurmi ([email protected])