[DSC DACH 24] Creating Child-friendly AI - Matthias Neumayer & Dima Rubanov

DataScienceConferenc1 · 41 slides · Sep 21, 2024

About This Presentation

We will discuss the challenges of developing child-appropriate language in AI, benchmarking, reducing bias, and collaborating with experts from other fields to create high-quality applications.


Slide Content

Creating child-friendly and less biased AI

Dima Rubanov
Matthias Neumayer

Team

Dmitrij Rubanov
CEO & Co-Founder
Experience: MSc Finance, 10 years in consulting, focus on Big Data analysis
JavaScript, React, React Native

Matthias Neumayer
CEO & Co-Founder
Experience: Mag. iuris, B.A. in Film, former trainee solicitor, 7 years C-level in the advertising industry
Python, JavaScript, React, React Native

Oscar
Chief Storyteller
Experience: PhD in Storytelling
Creates engaging personalised bedtime stories for children

Marco Marthe, BSc
ML Engineer
Experience: machine learning engineer; full-stack, cloud & game development
Python, C#, JS, TS, Git, HTML & CSS, Jira, Unity; basic Adobe Creative Cloud

AI for Good in Children's Education

A new way to learn.

Learning is a Web of Connections

New era in education
●Transforming educational landscapes
●Enhancing learning experiences for children
●The Possibility of Personalized Learning
●A Better Learning Experience

Benefits of AI in Children's Education
●Personalisation for individual learning needs
●Increased engagement through interactive content
●Improved accessibility for diverse learners
●Enhanced efficiency for teachers
●Data-driven insights on student performance

Challenges and Considerations
●Data privacy and security concerns
●Risk of bias in AI algorithms
●Human oversight
●Ensuring positive impact on children
●Balancing screen time with other learning methods

What we learned from
Oscar Stories

Problems
●Bias
●Hallucination
●Child-appropriate language

Age-appropriate text (especially in non-English languages)
●Sentences too long
●Complex words
●Complex structure

Content
●Unsuitable themes
●Risk of exposure
●Parental oversight?

Murgia, Emiliana, Pera, Maria, Landoni, Monica & Huibers, Theo (2023). Children on ChatGPT Readability in an Educational Context: Myth or Opportunity? 311-316. DOI: 10.1145/3563359.3596996.

Bias

ChatGPT 3.5 (older version)
Credit score program - GPT-4o
Qwen2

Lora - A Child-friendly AI Solution
First Age Appropriate & Trustworthy AI Adoption in the DACH-Region

Lora
-Physics
-Biology
-Environment
-History
-Economics
-Other STEM subjects

Lora
-Engaging Children in STEM with Storytelling
-Personalized Learning Experience
-Beyond Memorization
-Safe and Inclusive Environment
-Preparing for Tomorrow’s World

Fine-Tuning
-What is Fine-Tuning? (sketch below)
-Addressing Child-Specific Needs
-Carefully Curated Datasets
-Enhanced Accuracy and Relevance
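
To make the fine-tuning step concrete, here is a minimal sketch of parameter-efficient (LoRA-adapter) fine-tuning of a causal language model on a curated, expert-reviewed story dataset, assuming the Hugging Face transformers, peft and datasets libraries. The base model name, the curated_stories.jsonl file and all hyperparameters are illustrative placeholders, not the setup behind Lora.

```python
# Minimal sketch: LoRA-adapter fine-tuning of a small causal LM on a curated,
# expert-reviewed story dataset. Model name, file path and hyperparameters are
# illustrative placeholders, not the production setup behind Lora.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

BASE_MODEL = "distilgpt2"  # small placeholder base model

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token

model = AutoModelForCausalLM.from_pretrained(BASE_MODEL)
# Train small LoRA adapters instead of updating all base-model weights.
model = get_peft_model(model, LoraConfig(r=8, lora_alpha=16,
                                         lora_dropout=0.05,
                                         task_type="CAUSAL_LM"))

# One JSON record per curated story: {"text": "..."}
dataset = load_dataset("json", data_files="curated_stories.jsonl", split="train")
tokenized = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True, remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="lora-child-stories",
                           per_device_train_batch_size=4,
                           num_train_epochs=3,
                           learning_rate=2e-4),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```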

Developing Child-appropriate Language
-Public Domain Datasets with Readability Scores (filtering sketch below)
-Manually Curated Datasets by Educators
-Semi-Automatic Datasets Generated by LLMs
-Manual Verification by Pedagogical Experts
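
As a minimal sketch of the readability-based screening mentioned above, the snippet below filters public-domain candidate texts by Flesch-Kincaid grade level and average sentence length using the textstat package. The thresholds, file names and the English-centric formula are assumptions for illustration; non-English material (e.g. German) would need language-specific measures, and rejected texts would still go to educators for manual review.

```python
# Minimal sketch: screen candidate texts by readability before they enter a
# child-appropriate training set. Thresholds and file names are illustrative.
import json
import textstat

MAX_GRADE = 4.0          # target: roughly early-primary reading level
MAX_SENTENCE_WORDS = 12  # flag overly long sentences

def is_child_appropriate_readability(text: str) -> bool:
    """Accept text only if its estimated grade level and average
    sentence length stay below the configured limits."""
    grade = textstat.flesch_kincaid_grade(text)
    avg_len = textstat.lexicon_count(text) / max(textstat.sentence_count(text), 1)
    return grade <= MAX_GRADE and avg_len <= MAX_SENTENCE_WORDS

with open("public_domain_candidates.jsonl", encoding="utf-8") as src, \
     open("readability_filtered.jsonl", "w", encoding="utf-8") as dst:
    for line in src:
        record = json.loads(line)
        if is_child_appropriate_readability(record["text"]):
            dst.write(json.dumps(record, ensure_ascii=False) + "\n")
        # Rejected texts can still be queued for manual review by educators.
```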

How to solve bias?

TL;DR: We can’t

Unknown unknowns
Imperfect processes
Social context
Definitions

An impossible task?

Stage
Pre-processing
In-processing
Post-processing
Feedback

Human in the loop
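
A minimal sketch of how the mitigation stages and the human-in-the-loop step could be wired together in a story-generation service; every class, check and field name here is an illustrative placeholder rather than the actual Oscar/Lora implementation. The in-processing stage corresponds to the fine-tuning run sketched earlier, so only a comment marks it.

```python
# Minimal sketch: where the pre-processing, post-processing and feedback hooks
# of the mitigation pipeline could sit, with a human review queue for flagged
# outputs. All names and checks are illustrative placeholders.
from dataclasses import dataclass, field
from typing import List, Optional, Set

@dataclass
class SafetyPipeline:
    blocked_terms: Set[str]
    review_queue: List[dict] = field(default_factory=list)
    feedback_log: List[dict] = field(default_factory=list)

    def pre_process(self, training_texts: List[str]) -> List[str]:
        # Pre-processing: drop unsuitable texts before fine-tuning.
        # (In-processing is the fine-tuning run itself, sketched earlier.)
        return [t for t in training_texts if not self._violates(t)]

    def post_process(self, prompt: str, generated: str) -> Optional[str]:
        # Post-processing: never show a flagged story to the child; hold it
        # for manual review instead (human in the loop).
        if self._violates(generated):
            self.review_queue.append({"prompt": prompt, "output": generated})
            return None
        return generated

    def record_feedback(self, prompt: str, output: str, report: str) -> None:
        # Feedback: parent/educator reports feed the next curation round.
        self.feedback_log.append({"prompt": prompt, "output": output, "report": report})

    def _violates(self, text: str) -> bool:
        lowered = text.lower()
        return any(term in lowered for term in self.blocked_terms)

if __name__ == "__main__":
    pipeline = SafetyPipeline(blocked_terms={"violence", "gambling"})
    print(pipeline.post_process("A story about space", "Mira the curious astronaut..."))
```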

Potential problems

Finding the right benchmarks
Accuracy vs. Fairness
Cultural nuance preservation
Ethical dilemmas of customization
Risk of unintended censorship
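
One way to make "finding the right benchmarks" concrete is a small counterfactual probe: prompts that differ only in one demographic attribute, with a simple statistic compared across the generations. The template, attribute pair, word list and metric below are illustrative assumptions, and generate() is a placeholder for whichever model is under test; this is not the benchmark used in the talk.

```python
# Minimal sketch: a counterfactual bias probe. Prompts differ only in one
# demographic attribute; a simple statistic of the generations is compared.
# generate() is a placeholder for whichever model is being benchmarked.
from statistics import mean

TEMPLATE = ("Write a short bedtime story about a {attribute} child "
            "who dreams of becoming an engineer.")
ATTRIBUTES = ["girl", "boy"]                            # illustrative pair
ENCOURAGING = {"brave", "clever", "curious", "determined"}

def generate(prompt: str) -> str:
    raise NotImplementedError("call the model under test here")

def encouragement_rate(text: str) -> float:
    # Share of words drawn from a small list of encouraging adjectives.
    words = [w.strip(".,!?\"'") for w in text.lower().split()]
    return sum(w in ENCOURAGING for w in words) / max(len(words), 1)

def probe(samples_per_prompt: int = 20) -> dict:
    scores = {}
    for attribute in ATTRIBUTES:
        prompt = TEMPLATE.format(attribute=attribute)
        outputs = [generate(prompt) for _ in range(samples_per_prompt)]
        scores[attribute] = mean(encouragement_rate(o) for o in outputs)
    # A large gap between attributes is a signal to investigate, not a verdict.
    return scores
```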

Trustworthy AI: Our approach
1. Pre-processing: high-quality data, feedback from experts
2. Post-processing: benchmarking for bias and age-appropriate language
3. In-app: limited input, output filtering
4. Human in the loop: feedback system

Future prospects


Thank you

[email protected]
HeyQQ GmbH
Wasagasse 23, 1090 Wien
Dmitrij Rubanov, MSc
Mag. Matthias Neumayer, BA