20250924 Haluk_Demirkan Responsible_GenAI_Framework.docx


20250924 Haluk_Demirkan Responsible_GenAI_Framework
ISSIP_Lunch_and_Learn
Contents
Event Metadata
Zoom Summary
Zoom Chat
Additional Feedback – Prof. Steve Alter
Zoom Transcript

Event Metadata
---
Title: Responsible Generative AI Framework
Date/Time: Wednesday September 24, 12noon ET/9am PT/18:00 CET
Speaker: Haluk Demirkan (https://www.linkedin.com/in/halukdemirkan/)
ISSIP Series: Lunch & Learn with ISSIP Ambassadors
Moderator: Christine Ouyang (https://www.linkedin.com/in/christine-ouyang/)
Description: As a member on the Technical Steering Committee for the Responsible AI Workstream under the Generative AI
Commons at the Linux Foundation for Data & AI, Professor Haluk Demirkan led the publication of the Responsible
Generative AI Framework (RGAF), released on March 19, 2025. He will give an overview of RGAF which is designed to guide
implementers and consumers of open-source generative AI projects through the complexities of responsible AI.
See Paper: https://lfaidata.foundation/blog/2025/03/19/responsible-generative-ai-framework-rgaf-version-0-9-now-available/
See Event: https://issip.org/event/lunch-and-learn-responsible-generative-ai-framework/
Event links:
Christine_Ouyang_Post: https://www.linkedin.com/posts/christine-ouyang_lunch-and-learn-responsible-generative-activity-7376645757304721411-Z0Bs
Michele_Carroll_Post: https://www.linkedin.com/feed/update/urn:li:activity:7375910933669298176
LinkedIn Company: https://www.linkedin.com/posts/international-society-of-service-innovation-professionals-issip-_lunch-and-learn-responsible-generative-activity-7375908828737048577-FfPB
LinkedIn Group: https://www.linkedin.com/posts/international-society-of-service-innovation-professionals-issip-_lunch-and-learn-responsible-generative-activity-7374120193175990272-uj4N
Registration: https://docs.google.com/forms/d/e/1FAIpQLSeqDPUjotROB14vLGQjVOBScT2ORvfD8PPjHFM5GphsERNGQg/viewform?usp=header

Recording: https://youtu.be/8_ugyoPTXaY
Slides: https://www.slideshare.net/slideshow/20250924-20250924-haluk_demirkan-responsible_genai_framework-issippresentation_09242025-pptx/283425094
Summary: TBD
---

Zoom Summary
AI can make mistakes. Review for accuracy.
Meeting summary 
Quick recap
Haluk Demirkan presented the Responsible Generative AI Framework, which consists of nine
dimensions designed to ensure ethical AI development, and discussed various challenges around
balancing model accuracy, explainability, and accountability. The framework includes considerations
for sustainability and environmental impact, with examples showing significant energy savings
through careful AI implementation. The conversation ended with discussions about risk management
approaches and the need for better trust-building in AI systems, particularly in sensitive sectors like
healthcare and energy.
Next steps
Haluk to share additional information on metrics and measures for evaluating LLM solutions
with interested participants.
Haluk to continue research on developing a practical Excel spreadsheet for risk assessment
process that can be deployed in early design, pre-launch, and post-launch phases.
Haluk to continue collaboration with Jim Spohrer on the digital twin project and collaborative
intelligence research.
Participants to review the New York Times article shared by James about energy pricing
impacts related to data centers.
Summary
Responsible Generative AI Framework
Haluk Demirkan, a professor at the University of Washington, presented the Responsible Generative
AI Framework (RGAF), which consists of nine dimensions designed to ensure AI solutions are
ethical, fair, and beneficial. He highlighted the importance of human-centered design, accessibility,
inclusivity, robustness, reliability, safety, transparency, explainability, and accountability. Haluk
emphasized the need for organizations to balance these dimensions based on their specific use
cases and contexts, as achieving perfection in all areas may delay innovation. He also discussed
tools and techniques available for evaluating and implementing these dimensions in AI development
processes, and encouraged attendees to consider the potential consequences of ignoring
responsible AI practices.
Balancing AI Accuracy and Explainability

Haluk discussed the challenges of balancing model accuracy and explainability, highlighting that
more complex models improve accuracy but reduce explainability. He also addressed accountability
in AI, suggesting insurance companies as a potential mechanism for rectification, and emphasized
the importance of user feedback loops. Haluk further explored privacy and security concerns in AI,
mentioning the use of synthetic data for testing while acknowledging its limitations in achieving
accurate predictions. He concluded by discussing compliance challenges, ethical biases, and
fairness in AI development, suggesting the need for better alignment with societal values and the
use of existing tools for evaluation.
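A minimal sketch of the accuracy-versus-explainability tradeoff described above, using scikit-learn on a synthetic dataset. The data and model choices are illustrative, not from the talk: a shallow tree that prints as readable rules versus a larger ensemble that usually scores higher but needs post-hoc explanation tools.

# Illustrative only: compare an interpretable model with a more complex one.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.ensemble import GradientBoostingClassifier

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

simple = DecisionTreeClassifier(max_depth=3).fit(X_tr, y_tr)                  # explainable as a few rules
complex_model = GradientBoostingClassifier(n_estimators=300).fit(X_tr, y_tr)  # harder to explain

print("depth-3 tree accuracy:      ", round(simple.score(X_te, y_te), 3))
print("gradient boosting accuracy: ", round(complex_model.score(X_te, y_te), 3))
print(export_text(simple)[:300])   # the shallow tree prints as human-readable if/else rules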
AI Sustainability and Energy Impact
Haluk presented on the sustainability and environmental impact of AI solutions, highlighting the
controversy around energy consumption in battery production for electric cars and the carbon
footprint of AI workloads. He shared an example where replacing a full GenAI solution with basic
Python reduced token transactions by 90%, emphasizing the need to carefully consider when GenAI
is necessary. Haluk introduced a framework with nine dimensions for evaluating AI solutions,
assigning different weights to each dimension across five development phases, and suggested using
risk management frameworks like NIST's 555 measurement concept to anticipate and mitigate
potential issues in AI development.
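The weighting idea in the paragraph above can be prototyped in a few lines. The dimension names follow the RGAF slides; the weights, scores, and the pre-launch healthcare-chatbot example are made-up placeholders, not values from the talk.

# Sketch of weighting the nine RGAF dimensions for one use case and phase.
# Dimension names follow the framework; weights and scores are placeholders.
DIMENSIONS = [
    "human_centered", "accessible_inclusive", "robust_reliable_safe",
    "transparent_explainable", "accountable", "secure_private",
    "compliant", "ethical_fair_unbiased", "environmentally_sustainable",
]

def weighted_score(weights: dict, scores: dict) -> float:
    """Weighted average of per-dimension scores (0-1); weights are normalized."""
    total_w = sum(weights[d] for d in DIMENSIONS)
    return sum(weights[d] * scores[d] for d in DIMENSIONS) / total_w

# Example: a healthcare chatbot in the pre-launch phase might weight
# safety, privacy, and compliance more heavily than sustainability.
weights = {d: 1.0 for d in DIMENSIONS}
weights.update({"robust_reliable_safe": 3.0, "secure_private": 3.0, "compliant": 2.5})
scores = {d: 0.7 for d in DIMENSIONS}          # placeholder evaluation results
print(f"overall readiness: {weighted_score(weights, scores):.2f}")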
Risk Management in AI Development
Haluk discussed the importance of risk management in project development, emphasizing that a
simple, iterative approach can be effective even if initially time-consuming. He highlighted the
challenges of building trust in generative AI, noting that adoption and utilization are key indicators of
success rather than just meeting development targets. Haluk also shared insights from an MIT report
that found a 5% success rate for AI pilots, primarily due to adoption issues, and stressed the need
for a solid foundation in AI development to ensure safer scaling. He concluded by outlining his
ongoing research areas, including digital twins, risk assessment metrics, collaborative intelligence,
and evaluating Large Language Model solutions.
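The summary's reference to a "NIST 555 measurement concept" is hard to pin down from the recording. Assuming it points at the familiar 5x5 likelihood-by-impact risk matrix used in many risk management frameworks, a hedged sketch of that scoring step might look like this; the risk names and ratings are invented for illustration.

# Hedged sketch: a conventional 5x5 likelihood-by-impact risk matrix,
# assuming that is what the "5x5" measurement idea in the summary refers to.
LIKELIHOOD = ["rare", "unlikely", "possible", "likely", "almost_certain"]   # 1..5
IMPACT     = ["negligible", "minor", "moderate", "major", "severe"]         # 1..5

def risk_score(likelihood: str, impact: str) -> int:
    """Score 1-25; higher scores get attention earlier (early design vs post-launch)."""
    return (LIKELIHOOD.index(likelihood) + 1) * (IMPACT.index(impact) + 1)

# Hypothetical risks for a GenAI deployment (illustrative only).
risks = {
    "hallucinated medical advice": ("possible", "severe"),
    "training-data privacy leak":  ("unlikely", "major"),
    "excessive energy cost":       ("likely", "moderate"),
}
for name, (l, i) in sorted(risks.items(), key=lambda kv: -risk_score(*kv[1])):
    print(f"{risk_score(l, i):>2}  {name}")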
AI Implementation Challenges in Healthcare and Energy
The meeting focused on the ethical and practical challenges of implementing generative AI,
particularly in healthcare and energy sectors. Haluk presented a framework for responsible AI, which
sparked a discussion about the complexities of applying such frameworks to real-world scenarios,
especially when conflicting advice emerges from different sources. James raised concerns about the
energy consumption and sustainability of AI data centers, highlighting how hyperscalers often
negotiate favorable power purchasing agreements at the expense of domestic customers. The group
agreed that while regulations are needed, there is still much to learn about how to effectively
implement and trust AI systems, especially in sensitive areas like healthcare and energy.

Zoom Chat
01:04:28Jim Spohrer: Welcome everyone to this ISSIP Lunch & Learn
01:04:37Jim Spohrer: Please add questions to zoom chat
01:05:25b's Notetaker (Otter.ai):Hi, I'm an AI assistant helping b dal take notes for this meeting. Follow along the
transcript here: https://otter.ai/u/rnfLkhQihofQEAZoq4bUiRovvoI?utm_source=va_chat_link_1
You can see screenshots and add highlights and comments. After the meeting, you'll get a summary and action items.
01:16:04Jim Spohrer: Question: Who is accountable when something goes wrong?
01:16:04Riddhi Bajaj: The organisation who developed the product?
01:16:13Christine Ouyang:company provides the product.
01:16:25Jim Spohrer: Depends on regulations and conditions of use
01:16:48Demetrius Hill:whoever owns the IP
01:16:55@TerriGriffith:I’ve leaned toward the company, but perhaps an insurance pool
01:22:28Christine Ouyang:Balancing speed and safety in developing AI applications is my daily challenge.
01:22:39Jim Spohrer: Reacted to "Balancing speed and..." with [emoji]
01:22:50Jim Spohrer: Question: Accountability? Who, What, Log & Rectify
01:23:13Jim Spohrer: Replying to "Question: Accountabi..."
@TerriGriffith your insurance points on rectify
01:24:39Maggie Qian: Reacted to "I’ve leaned toward t..." with [emoji]
01:24:48steve alter: Assume company X produces an AI-based counselor, therapist, doctor, etc. In all of those areas
there is no totally agreed-upon correct answer to many important questions for many clients. How is it possible to argue
that company X's product is safe, reliable, etc., when 1) it does not have deep contextual knowledge of the client, and 2) often
there are so many edge cases that a totally safe product could end up saying ... on the one hand ..., but on the other
hand ..., and in other cases ..., and in yet other cases ...? In other words, for many of the important situations many
answers will be too ambiguous to be useful.
01:27:23Jim Spohrer: Reacted to "Assume company X pro..." with [emoji]
01:27:59Jim Spohrer: Replying to "Assume company X pro..."
@steve alter great discussion topic. We can circle back with Haluk.
01:30:31steve alter: Assume a patient asks company X's RoboDoctor whether it is OK to use Tylenol during
pregnancy or whether people in a particular age range should receive a Covid vaccination. What could company X do about
complying with regulations?

01:32:27Jim Spohrer: AI Problems 3 Es: Energy, Errors, Ethics
01:32:31Jim Spohrer: Reacted to "Assume a patient ask..." with [emoji]
01:34:47Christine Ouyang:Replying to "Assume company X pro..."
@Steve Alter, I think of AI as an assistant, not the “ultimate go-to” for the final answer. Eventually the human, i.e., doctors,
counselors, will select the “correct” answer for his/her specific use case/clients, or determine that none of the answers from AI is
“correct.” This would require that humans still have the knowledge. Maybe the real question is how to use AI effectively.
01:35:42Christine Ouyang:Reacted to "AI Problems 3 Es: En..." with [emoji]
01:38:04Christine Ouyang:Human (e.g., doctors) makes mistakes. They all have to have insurance. However, it’s very
difficult to sue doctors for malpractice. For AI, we will face a similar challenge.
01:40:10@TerriGriffith:Replying to "Human (e.g., doctors..."
Likely even stronger given the autonomy we have in how we use some systems.
01:43:58steve alter: I think the framework is very good - lots of important issues, well organized. The challenge is in
understanding how it applies to specific examples. In other words, specific discussions applying the framework to
specific examples would probably be a step toward evaluating its genuine usefulness to people who develop and/or use
genAI-based products.
01:44:17James E Mister:Reacted to "AI Problems 3 Es: En..." with [emoji]
01:44:23Jim Spohrer: Reacted to "I think the framewor..." with [emoji]
01:45:50@TerriGriffith:Replying to "I think the framewor..."
What I love about this presentation is its focus on the human actions
01:47:16Maggie Qian: Replying to "Human (e.g., doctors..."
I agree, especially when everyone has their own interpretation and standards for what’s considered responsible and ethical.
There’s the challenge of finding a single measure that applies universally.
01:47:42Jim Spohrer: AI JimTwin V2 (OpenSource) - Multilingual (English, German, French, Korean):
https://youtu.be/kBCnFPwLRZM
01:48:15James E Mister:Hi I have a question
01:48:19James E Mister:Typing it now…
01:48:40Erandy:Will this recording link be available to students after the meeting?
01:49:11Jim Spohrer: Reacted to "Hi I have a question" with [emoji]
01:49:46Jim Spohrer: Replying to "Well this recording ..."

Yes, it will be posted to ISSIP YouTube and Slideshare - probably early tomorrow
01:50:24Jim Spohrer: Replying to "Well this recording ..."
https://www.youtube.com/user/ISSIPorg
01:50:59Jim Spohrer: Replying to "Well this recording ..."
https://www.slideshare.net/issip/presentations
01:51:24@TerriGriffith:Yes to @steve alter point about 15 year olds. How is elementary education changing to support
at this early stage
01:52:46Jim Spohrer: “The Market” does some of the sorting work - people try things and word-of-mouth spread
things
01:53:10Jim Spohrer: Reacted to "Yes to @steve alter ..." with [emoji]
01:54:18Jim Spohrer: Replying to "“The Market” does so..."
Some envision more regulations and slowing down
01:54:57Jim Spohrer: Replying to "“The Market” does so..."
Some visionaries imagine noncoercive service systems - hard to imagine without a lot of cultural evolution - Deming J &
Hamel M (2025) Blueprint for a Spacefaring Civilization: The Science of Volition (By John Deming, with Mike Hamel)
URL: https://www.amazon.com/Blueprint-Spacefaring-Civilization-Science-Volition/dp/B0DV4HB4R5
01:55:34b's Notetaker (Otter.ai):Takeaways from the meeting:
[ ] Further research on defining metrics and measures to evaluate and apply the 9 RGAF dimensions. (Haluk Demirkan)
[ ] Explore how the RGAF framework can be applied to specific use cases, such as a "robo-doctor" providing medical advice. (Steve Alter)
[ ] Discuss the role of government regulations in certifying and accrediting specialized AI solutions, especially in sensitive domains like healthcare. (Christine Ouyang)
See full summary - https://otter.ai/u/rnfLkhQihofQEAZoq4bUiRovvoI?utm_source=va_chat&utm_content=wrapup_v4&tab=chat&message=fc9c9ad6-ec65-435b-b9f7-9a97b419f938
01:55:53Jim Spohrer: Don Norman’s latest book goes from human-centered to humanity-centered design.
01:56:02James E Mister:The EU, UK, US, Japan and China are tackling this issue head on, but what about countries
without a surfeit of renewables already online? I think an additional dimension of ethics entails hyperscalers and other
global companies not exploiting the non-renewable resources in those types of countries.
01:56:09Stephen Kwan (US): Jim, can we also have a record of the chats, too?

01:57:08Jim Spohrer: Replying to "Jim, can we also hav..."
yes
01:57:13Jim Spohrer: Reacted to "Jim, can we also hav..." with [emoji]
01:57:25Jim Spohrer: James Mister - AI & Energy & Investment - https://www.linkedin.com/in/jmister/
01:57:41Jim Spohrer: Replying to "James Mister - AI & ..."
Thanks for being an ISSIP Ambassador James as well.
01:57:41Maggie Qian: Presenting a range of perspectives (even opposite povs) might be a good starting point for
responsible AI behavior. Right now, GenAI tends to lean too heavily in one direction, which can limit balanced understanding
01:57:44Stephen Kwan (US): Replying to "Jim, can we also hav..."
Thanks
01:58:36Jim Spohrer: Replying to "James Mister - AI & ..."
Sovereign AI and Cloud is big in Europe these days especially
01:59:46Jim Spohrer: Replying to "James Mister - AI & ..."
Fusion? TechCrunch - Commonwealth Fusion Systems books a $1B+ power deal for its future fusion reactor
URL:
https://techcrunch.com/2025/09/22/commonwealth-fusion-systems-books-a-1b-power-deal-for-its-future-fusion-reactor/
02:00:04Christine Ouyang:Reacted to "Don Norman’s latest ..." with [emoji]
02:01:08Jim Spohrer: Replying to "James Mister - AI & ..."
Geothermal is also on the table - https://rhg.com/research/geothermal-data-center-electricity-demand/
02:01:38James E Mister:https://www.nytimes.com/2025/08/14/business/energy-environment/ai-data-centers-electricity-costs.html
02:01:47James E Mister:That’s one NYT article about what I just mentioned.
02:03:32Erandy:Thank you everyone for your wonderful perspectives, I learned a lot from this meeting!
02:04:52Jim Spohrer: Brad - work culture health and safety - https://www.linkedin.com/in/brad-kewalramani-33702a8/
02:04:59Jim Spohrer: Thank-you all!
02:05:15@TerriGriffith:Thank you, Haluk and all!

02:05:21Bulent Dal (OBASE): Thank you for this great presentation and the meeting.
I want to raise the question of what HR's role would be in managing an AI/robot-led workforce. I know we do not have time.
I think that HR's mission will change, and they will take on the role of ethical steward.
02:05:26Brad Kewalramani:Thank you so much@
02:05:28Brad Kewalramani:!

Additional Feedback – Steve Alter Email
Hi Haluk and Jim,
Thanks to Haluk for his ISSIP presentation today. As I mentioned, I think the framework is very good,
clear, thorough, etc. In contrast with its logical appeal, I think that next steps should involve a range of
real examples to demonstrate its practical application. In general it seems to me that a large, well-run
company (e.g., IBM) might disseminate the framework and use it seriously, but that many smaller
companies that are trying to "move fast and break things," and especially trying to make a huge amount of
money quickly for founders and venture capital firms, probably would not care about it very much,
but might use it after the fact as part of product positioning and marketing.
This is a follow-up to what I remember as one of Haluk's comments during his presentation.   I think he
said that the two of you are working on something under the general heading of "people + AI + process". 
I definitely agree that just "people + AI" is insufficient for illuminating many important issues, and
especially that just saying we should keep people in the loop is somewhere between inadequately
suggestive and inaccurate in many cases (e.g., keep people in the loop in a self-driving Waymo?)  
Not surprisingly,  I tend to think about   "people + AI + process"  in terms of work systems that use AI in
one way or another.  Each of the three attachments contains potentially relevant ideas or illustrative
examples.   The attached papers are very long, so I will point to specific pages that might be useful for
your purposes:
CAIS 2021: Facets of Work.   This paper basically says that focusing too much on the "processes and
activities" element of a work system  (or just the process sequence in BPM) is often inadequate because
it does not highlight important facets of work such as communicating, making decisions, improvising, etc.
that are important in even a reasonably complete analysis of a work system.   Table 1, p. 324   identifies
18 facets of work that are relevant to many work systems. Figure 1, p. 326 uses a cartoonish example of
a robotic tennis instructor to make all 18 facets easier to visualize as part of a system.  Figure 2, p. 327 is
a work system snapshot (a simplified system view) of a hypothetical hiring system that uses two
hypothetical AI-based tools, AlgoComm and AlgoRank.  Notice that those tools automate some but not all
of the activities in the work system while leaving the most important activities to people.  Table 3, p. 328
identifies questions that might be pursued in relation to the 18 facets in turn.
ICIS 2024: Extending BPM ....  An important problem with BPM as conceived by mostly European
researchers is that it is about process modeling, the quality of those models, and conformance to those
models.   BPM rarely speaks directly about facets of work (or related umbrella terms).   This paper argues
that the metaphor of "resources for action, value for customers" (RAVC) can be applied in an approximate
way at six levels of detail:  1) enterprise capabilities, 2) enterprise operation through interacting work
systems, 3) individual work systems, 4) processes within work systems, 5) activities within processes, 6)
encapsulated services triggered by requests.  Tables 3 - 10 on pages 10 -14 show different "lenses" that
apply at each level of detail and that can be modified slightly for use at other levels because they are all
based on the same RAVC metaphor. In retrospect, I decided that the second level is just a work system
in its own right and therefore should not be treated as a separate level. A revised five-level discussion of
RAVC will appear in a currently incomplete paper that I will submit to CAISE'26. The title of the current
draft is  "Resources for action, value for customers: a semi-fractal systems analysis framework."
Info Sys 2025: Making Cyberhuman Systems Smarter: Making cyberhuman systems smarter should
say something reasonably clear about what that means (especially since the idea of AI is often
intertwined with vague images about intelligence or smartness). Pages 4-6 explain what that means,
including multiple dimensions of smartness in four categories and a smartness curve (p. 5) showing that
most supposedly smart things (such as smart cities, smart contracts, smart watches, etc.) are not so smart
after all. Table 3, p. 7 illustrates different types of roles of algorithmic agents in relation to selected
facets of work related to using an electronic medical records system.    Table 10, p. 14 in the ICIS 2024
paper shows a similar representation for a different medical example.  Figure 4 and Table 4 of the Info

Sys 2025 paper try to clarify the idea of human-in-the-loop in terms of different levels of engagement in
cyberhuman systems.
The first two papers are available at Google Scholar and ResearchGate
The cyber-human systems paper should be available in the ACM Digital Library (or I can simply send a
copy to anyone who is interested)   
(Elsevier will not allow ResearchGate to post the final paper. I will post a preprint somewhere else, but
not tonight.)
I hope this is useful in some way.
Best, 
Steve
-------------------------
Steven Alter, Ph.D.
Professor Emeritus
University of San Francisco
Attachments:
Alter S (2021) Facets of Work: Enriching the Description, Analysis, Design, and Evaluation of Systems in
Organizations. Communications of the Association for Information Systems 49.
Alter S (2024) Extending BPM by treating processes as components of work systems that produce
product/services valued by their customers. In: Proceedings of ICIS.
Alter S (2025). Making Cyber-Human Systems Smarter. Information Systems, 127, 102428.

Zoom Transcript
WEBVTT
1
00:00:02.500 --> 00:00:12.589
Jim Spohrer: Welcome, everyone. This is Jim Spohrer with ISSIP, the International Society of Service Innovation Professionals.
It's my pleasure today to,
2
00:00:12.590 --> 00:00:23.820
Jim Spohrer: have our speaker, Haluk Demirkan, on, but before he gets started, I want to, just remind everybody that this
meeting is being recorded. If you don't want your picture, you can turn your camera off.
3
00:00:23.840 --> 00:00:43.549
Jim Spohrer: Put questions and comments in the chat. Be sure to mute yourself so we don't have any crosstalk. And now, it's
my pleasure to introduce Christine Ouyang, who's our ISSIP Ambassador Lead. This is a Lunch and Learn in the ISSIP
Ambassador Series, and she is also a Distinguished Engineer
4
00:00:43.630 --> 00:00:49.319
Jim Spohrer: at IBM. And so, Christine, over to you.
5
00:00:49.630 --> 00:01:02.159
Christine Ouyang: Thank you, Jim. Hi, everyone. Welcome, welcome to today's ISSIP webinar. We're very honored to host Dr.
Haluk Demirkan.
6
00:01:02.160 --> 00:01:09.740
Christine Ouyang: Professor at University of Washington, and one of the leading authors of the Responsible Generative AI
Framework.
7

00:01:09.740 --> 00:01:33.900
Christine Ouyang: He is a global thought leader in digital transformation and service innovation, with deep expertise in
helping organizations use technology responsibly for impact. As the generative AI reshapes industry and society, the RGAF
provides timely guidance on how to innovate responsibly.
8
00:01:34.100 --> 00:01:39.310
Christine Ouyang: Balancing our human values, business goals, and the global impact.
9
00:01:39.380 --> 00:01:41.390
Christine Ouyang: So in this session,
10
00:01:41.390 --> 00:02:00.000
Christine Ouyang: Haluk will walk us through the framework, the nine dimensions of it, and share the vision behind its
creation, and discuss how organizations can adopt it to ensure AI is transparent, accountable, sustainable, and human-
centered.
11
00:02:00.580 --> 00:02:02.109
Christine Ouyang: Over to you, Haluk.
12
00:02:03.230 --> 00:02:04.090
Haluk Demirkan: Okay.
13
00:02:04.300 --> 00:02:20.719
Haluk Demirkan: Thank you, Christine. Thank you, Jim. And first, I want to start… I'd like to start with a thank you to both of
you, Jim and Christine, for organizing this event, and ISSIP for inviting me to present. Again, hi everybody, good morning,
good afternoon, good evening for some of us.
14

00:02:20.930 --> 00:02:26.620
Haluk Demirkan: My name is Haluk Demirkan. I am currently a full professor at University of Washington, Tacoma.
15
00:02:27.400 --> 00:02:37.529
Haluk Demirkan: And my, as Christine mentioned, my research and professional practice have been in digital innovation,
artificial intelligence, machine learning.
16
00:02:37.620 --> 00:02:51.929
Haluk Demirkan: service science heavily. So, I spent almost a year with the Linux Foundation AI & Data group, and we
worked on this responsible AI framework. So I will…
17
00:02:51.950 --> 00:02:59.779
Haluk Demirkan: present 9 dimensions from this framework, but I also did a comprehensive research on how to
18
00:02:59.890 --> 00:03:02.889
Haluk Demirkan: Apply some of these dimensions in our…
19
00:03:03.150 --> 00:03:07.870
Haluk Demirkan: When we are designing or developing or deploying AI solutions.
20
00:03:08.390 --> 00:03:13.760
Haluk Demirkan: Alright, I'd like to start with this. I'm sure some of you have seen
21
00:03:14.950 --> 00:03:28.469
Haluk Demirkan: some of these news, messages in media. Like, this was a recent one from Meta's Experience, Meta AI
Experience, published on August 2025.

22
00:03:28.690 --> 00:03:36.759
Haluk Demirkan: about a man who, impaired by a stroke, fell for a Meta chatbot and thought that the Meta chatbot was his
23
00:03:37.000 --> 00:03:38.570
Haluk Demirkan: significant other.
24
00:03:38.870 --> 00:03:50.220
Haluk Demirkan: Another one is actually a little… little older example from Google. Google Chatbot had some issues, and
Google actually lost about $100 billion on one day.
25
00:03:50.490 --> 00:03:58.740
Haluk Demirkan: Another example from NPR, We have seen most of these lately. There are a lot of…
26
00:03:58.940 --> 00:04:06.580
Haluk Demirkan: good AI solutions in the market, but there are also some… we have been having some challenges and
some problems.
27
00:04:07.300 --> 00:04:10.329
Haluk Demirkan: So, why should we care? I mean,
28
00:04:11.230 --> 00:04:22.160
Haluk Demirkan: So why should we care about responsible Gen AI? And, you know, we all know that generative AI can
accelerate innovation. I mean, I have… I've been seeing almost every organization
29

00:04:22.510 --> 00:04:41.320
Haluk Demirkan: using Gen AI solutions, or developing GenAI solutions for two primary purposes. One is how to increase
efficiency, effectiveness, and productivity of the organization. Like, how employees can be more productive. That's the one
way of using Gen AI solutions.
30
00:04:41.360 --> 00:04:53.309
Haluk Demirkan: Other one is how I can automate, scale, and build robust environment for my customer use, right? Two
type of products usually have been growing in companies.
31
00:04:53.410 --> 00:04:58.079
Haluk Demirkan: So, in this case, basically, is… I mean,
32
00:04:58.350 --> 00:05:11.959
Haluk Demirkan: So, and then recent failures are happening internally in organizations or externally. Like, if I'm developing a
product, internal use, I mean, internal users might be… might make incorrect decisions.
33
00:05:12.550 --> 00:05:17.870
Haluk Demirkan: So, how do we build these safe, ethical, sustainable, and also
34
00:05:17.880 --> 00:05:37.559
Haluk Demirkan: innovative, and also, and also provide value to the users, right? So, I have a quick question at the bottom. I
have, what could go wrong if we ignore these practices? So, I'd like you to think about this couple minutes. Like, I mean, I'm
sure you have seen already some examples.
35
00:05:38.680 --> 00:05:40.600
Haluk Demirkan: So, what type of things may go wrong?
36

00:05:41.420 --> 00:05:47.649
Haluk Demirkan: So, for this one, we don't need to put it into the chat box, but I'd like you to just think generally.
37
00:05:48.350 --> 00:05:49.970
Haluk Demirkan: So…
38
00:05:51.030 --> 00:06:06.479
Haluk Demirkan: And… and then I want to give you a little bit of background. So, this RGAF, Responsible Gen AI framework,
basically has been led by the Responsible AI Workstream at the GenAI Commons as part of the Linux
39
00:06:06.630 --> 00:06:08.540
Haluk Demirkan: Foundation Data & AI group.
40
00:06:09.150 --> 00:06:19.779
Haluk Demirkan: So the purpose of this GenAI framework is to promote that GenAI is designed, applied, and used in ways that are
ethical, fair, and beneficial to society and the users.
41
00:06:20.220 --> 00:06:24.160
Haluk Demirkan: And it also involves entering regulations and guidelines.
42
00:06:24.590 --> 00:06:31.160
Haluk Demirkan: So, how did we develop it? We spent almost a year, we partnered with about 24-25 global experts.
43
00:06:31.390 --> 00:06:39.630
Haluk Demirkan: We completed extensive research about some failed GenAI solutions in the market, as much as we can
learn and hear from.

44
00:06:39.960 --> 00:06:53.120
Haluk Demirkan: And I reviewed AI frameworks from almost 20 countries, including EU AI Act Framework, NIST Framework,
Singapore AI, and China's AI Development Plan. But we found… we basically…
45
00:06:53.620 --> 00:06:56.339
Haluk Demirkan: Found, and then downloaded.
46
00:06:56.610 --> 00:07:02.930
Haluk Demirkan: as many countries' responsible or trusted AI type of frameworks.
47
00:07:03.030 --> 00:07:21.020
Haluk Demirkan: And then we also looked at as many companies' frameworks, because as you know, a lot of companies
have some type of framework in context of responsibility or trust. Mostly, these are the two titles heavily being used, such as
ChatGPT, Amazon, Google, Facebook, IBM, Microsoft.
48
00:07:21.490 --> 00:07:29.380
Haluk Demirkan: And then, we wanted to create some type of framework that will be inclusive of all these. Inclusive of
49
00:07:29.380 --> 00:07:42.689
Haluk Demirkan: Addressing all the, you know, country-specific frameworks, plus the… some regulations, plus the… what we
learned from different companies, because they have a lot of resources and have been working in this field for a while.
50
00:07:43.100 --> 00:07:43.970
Haluk Demirkan: So…

51
00:07:43.980 --> 00:07:55.809
Haluk Demirkan: As a result, we came up with the 9 dimensions. So, in the next few minutes, what I will do is, I will go
through very high level, what are each of these 9 dimensions, and then
52
00:07:55.810 --> 00:08:07.220
Haluk Demirkan: I will spend time… what are some of the tools, techniques, and methods in the market you can use to
deploy some of those dimensions in your AI development process.
53
00:08:07.320 --> 00:08:17.050
Haluk Demirkan: Because that's the part I personally spent a significant amount of time on, how to do it, because we
have been talking about the "what" portion for some time.
54
00:08:17.060 --> 00:08:29.879
Haluk Demirkan: But one of the main challenges is, okay, we understand we need to develop a responsible gen AI, but how
to do? So that's the part I would like to spend more time on today.
55
00:08:31.280 --> 00:08:49.929
Haluk Demirkan: So when I share the screen, just an FYI, I don't see a chat box, so if you have any questions, comments,
please stop me. We can make it a discussion forum, that will be fine. So I'm not able to read the chat box for any questions
or comments when I share the screen, just an FYI.
56
00:08:50.450 --> 00:09:05.430
Haluk Demirkan: So these are the 9 dimensions. Basically, like I said, after almost spending almost 12 months, we came up
with these 9 dimensions that's almost inclusive of almost every other framework we have seen in the literature.
57
00:09:07.250 --> 00:09:08.090
Haluk Demirkan: So…

58
00:09:08.110 --> 00:09:27.299
Haluk Demirkan: It starts with the human-centered, and there's no specific priority. The high-level idea is… it depends on
the… it's like a time and space complexity, basically. Depends on the organization, depends on the solution we are
developing, and also depends on the usage, use case.
59
00:09:27.820 --> 00:09:41.379
Haluk Demirkan: each of these dimensions may have a higher weight in our analysis, right? I cannot say they all have to be
perfect, but again, depends on time, space, user, and the use case.
60
00:09:41.500 --> 00:10:00.720
Haluk Demirkan: each of them will have a different type of weight when we are developing. Because if we are trying to hit
100% accuracy for all these dimensions, we may not be able to develop the solution and put the product to the market on
time. That is the challenge, right? Like, we don't want to be in the innovation dilemma
61
00:10:00.730 --> 00:10:01.650
Haluk Demirkan: situation.
62
00:10:01.930 --> 00:10:14.490
Haluk Demirkan: So, it starts with the human-centered. We all heard about why… what is the human in the loop, and what
is… why Gen AI should be… should… solutions should be human-centered and aligned, because
63
00:10:14.830 --> 00:10:31.390
Haluk Demirkan: hopefully, you know, 90% of… 95% of these users are humans, right? It needs to partner with humans,
because there might be a time my manager or my coworker might be a GenAI solution, right? I probably wouldn't care, as
long as I can get my job done.
64
00:10:32.510 --> 00:10:50.199

Haluk Demirkan: And if it excites me doing that job, I wouldn't mind to partner with GenAI, or my manager becomes a Gen
AI, or my employee, my team member, GenAI. But it needs to really collaborate and work with me closely. So at the bottom
of this slide, I have some examples I pulled.
65
00:10:50.390 --> 00:10:51.710
Haluk Demirkan: How we can…
66
00:10:51.720 --> 00:11:04.360
Haluk Demirkan: what type of frameworks or tools and techniques we can pull from literature. Most of them are open
source, available at the internet, methodologies we can use to evaluate
67
00:11:04.360 --> 00:11:16.549
Haluk Demirkan: The AI solution we are developing is human-centered, like value sensitivity design, analysis, or user
feedbacks, or a type of A-B testing, if and when we can do it.
68
00:11:17.080 --> 00:11:27.589
Haluk Demirkan: Or there are also some methodologies in context of transformer reinforcement learning, if we are building
a transformer solution. These are some of the examples.
69
00:11:27.860 --> 00:11:35.930
Haluk Demirkan: My next one is about accessibility and inclusiveness. So, this one is, it needs to be accessible and inclusive.
70
00:11:37.580 --> 00:11:48.759
Haluk Demirkan: So, I was actually at a meeting with a couple colleagues yesterday, and one of my colleagues said, hey, if
I'm building a product, web-based product.
71
00:11:49.840 --> 00:11:53.829

Haluk Demirkan: It shouldn't require a tutorial to use it.
72
00:11:53.980 --> 00:11:57.099
Haluk Demirkan: It should be so well designed and developed.
73
00:11:57.350 --> 00:12:04.689
Haluk Demirkan: Today, in a mobile app, or any… in a web-based product, those are the two we use heavily these days.
74
00:12:04.750 --> 00:12:15.009
Haluk Demirkan: to give access to the user. It shouldn't require a manual how, right? So we are actually using some web
apps or mobile apps. They don't require it.
75
00:12:15.020 --> 00:12:25.530
Haluk Demirkan: tutorial. I open the app, and I immediately, in, like, 5 seconds, I can figure out what to do and use it. This is
about accessibility and being inclusive, any type of users.
76
00:12:25.550 --> 00:12:34.089
Haluk Demirkan: So… In this case, it should be able to address diverse users and ability needs, right? And then,
77
00:12:34.320 --> 00:12:42.629
Haluk Demirkan: And then it should give equitable access for my customers. Again, for my customers or my users, I said.
So…
78
00:12:42.960 --> 00:12:51.619
Haluk Demirkan: But I have a quick question. If I'm using this Gen AI solution, or developing, or utilizing, who shall be
accountable when AI makes a mistake?

79
00:12:52.110 --> 00:13:05.280
Haluk Demirkan: Is the organization, put that product to the market, or the user, or who owns it, right? So, I would like to
give you, like, 2 minutes. If you don't mind, put your answer to your chat box.
80
00:13:06.070 --> 00:13:07.220
Haluk Demirkan: Does that sound good?
81
00:13:07.570 --> 00:13:12.549
Haluk Demirkan: So I will hold it… I'm watching the clock, so I will give you 2 minutes.
82
00:13:12.820 --> 00:13:14.240
Haluk Demirkan: Who shall be accountable?
83
00:13:15.030 --> 00:13:18.710
Haluk Demirkan: Is it the owner who paid the money and purchased this AI solution?
84
00:13:18.910 --> 00:13:22.550
Haluk Demirkan: Or the, the company who developed it.
85
00:13:22.830 --> 00:13:26.190
Haluk Demirkan: Or the internet provider who gives that access to?
86
00:13:26.320 --> 00:13:31.090

Haluk Demirkan: There are so many stakeholders in a Gen AI type of solution development.
87
00:13:34.900 --> 00:13:37.700
Haluk Demirkan: This is coming up quite a bit these days.
88
00:13:38.000 --> 00:13:41.350
Haluk Demirkan: And it's, and there's no perfect answer currently.
89
00:13:51.630 --> 00:13:57.669
Haluk Demirkan: All right. Jim, do you mind if I… I cannot see the chat box. Would you mind to read some of those?
90
00:13:58.240 --> 00:13:59.589
Jim Spohrer: Sure.
91
00:13:59.590 --> 00:14:01.259
Haluk Demirkan: Or, Christine, if you don't mind.
92
00:14:02.280 --> 00:14:07.859
Jim Spohrer: I'll go ahead, because I started, but the, one first answer was the organization.
93
00:14:08.000 --> 00:14:18.980
Jim Spohrer: who developed the product, question mark. Christine said the company that provides the product, so Christine
agreed with that, and I put in, depends on regulations and conditions of use.
94

00:14:19.990 --> 00:14:26.089
Jim Spohrer: Whoever owns the IP, someone else added. So, yeah, different answers coming in, Haluk.
95
00:14:26.240 --> 00:14:41.620
Haluk Demirkan: Right, it should be, right? There's really no perfect answer, because think about it, if I buy… if I bought a… let's
say I bought a GenAI solution, like an iRobot type of solution, and I don't mean to market and advertise any product.
96
00:14:41.640 --> 00:14:47.100
Haluk Demirkan: And I use it in a rental home, and it does damage to the house.
97
00:14:47.330 --> 00:14:52.260
Haluk Demirkan: Right? Is the fault the robot's? Is the fault mine?
98
00:14:52.430 --> 00:15:10.199
Haluk Demirkan: Like, so many things. It gets a little messy, actually, to figure it out. Anyway, so in this slide at the bottom,
again, as an example, I provided some standards interfaces and some different data sets that can be used to test and
evaluate accessibility and access… inclusiveness.
99
00:15:10.440 --> 00:15:12.190
Haluk Demirkan: So the next one is about…
100
00:15:12.610 --> 00:15:21.329
Haluk Demirkan: robust, reliable, and safe. So, in technology, we talk about robust solution, right? Or reliable solution.
101
00:15:21.780 --> 00:15:23.029
Haluk Demirkan: quite a bit.

102
00:15:23.290 --> 00:15:28.540
Haluk Demirkan: Again, it is about consistently performing intended
103
00:15:28.790 --> 00:15:42.519
Haluk Demirkan: functions safely, dependably, right? In unpredictable environments. So we need to be able to figure out in
what unpredictable situations this solution will still do the job it's supposed to do.
104
00:15:42.640 --> 00:15:46.140
Haluk Demirkan: Right? So in that context, we need to build some safeguards.
105
00:15:46.770 --> 00:15:49.120
Haluk Demirkan: In these AI solutions.
106
00:15:49.710 --> 00:16:01.759
Haluk Demirkan: Like, I was reading an article last week, and this article talks about how many people today are using some
GenAI solutions as a counselor.
107
00:16:02.400 --> 00:16:11.570
Haluk Demirkan: Basically, instead of going to a doctor, spending hours on a couch, and spending time with that doctor, or,
you know, psychiatrist, or, you know, psychologist.
108
00:16:11.600 --> 00:16:26.310
Haluk Demirkan: Some people are using Gen AI solution to get advice. I was like, oh my god, this is so scary, because most
of these AI solutions are not being certified or educated specifically to give counseling, you know, advice.

109
00:16:26.310 --> 00:16:35.419
Haluk Demirkan: So in that, like, in that case, we may need to build some safeguards to not answer those type of questions,
right? Just, I'm just throwing an example.
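The safeguard sketched verbally here could start as a simple pre-filter in front of the model. Everything below (the keyword list, the refusal message, and the generate callable) is an illustrative placeholder, not part of the RGAF.

# Illustrative safeguard sketch: refuse out-of-scope (e.g. counseling/medical)
# prompts before they reach the model. Keywords and message are placeholders;
# production systems would use a tuned classifier or a moderation service.
BLOCKED_TOPICS = ("diagnose", "medication dosage", "therapy for", "suicidal")

def guarded_answer(prompt: str, generate) -> str:
    lowered = prompt.lower()
    if any(term in lowered for term in BLOCKED_TOPICS):
        return ("I can't provide medical or counseling advice. "
                "Please consult a licensed professional.")
    return generate(prompt)   # `generate` is whatever GenAI call the product uses

# Example with a stand-in generator:
print(guarded_answer("Can you diagnose my symptoms?", lambda p: "model output"))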
110
00:16:35.550 --> 00:16:37.499
Haluk Demirkan: Again, in this case.
111
00:16:38.030 --> 00:16:49.830
Haluk Demirkan: I have some tools, techniques. I will have a different presentation in the future, maybe show some of these
examples how to do it, but I just put a laundry list of different tools in the market.
112
00:16:50.040 --> 00:16:54.560
Haluk Demirkan: Again, they are, pretty free to, you know, free to download and use.
113
00:16:55.220 --> 00:17:02.840
Haluk Demirkan: we can utilize to test and evaluate in context of reliability and safety of the AI solution.
114
00:17:03.470 --> 00:17:13.180
Haluk Demirkan: Next one, talking about transparency and explainability. I think this is another very difficult one, because
most Gen AI solutions work like a black box.
115
00:17:13.319 --> 00:17:16.850
Haluk Demirkan: Right? Because when you think about it,
116
00:17:17.910 --> 00:17:24.589

Haluk Demirkan: There's a transformer model, they are, you know, they have been trained with billions of records.
117
00:17:24.790 --> 00:17:34.890
Haluk Demirkan: And then… and then on the front side, there's a chat kind of, you know, like a post-developed GenAI solution on
top of a transformer model.
118
00:17:35.300 --> 00:17:40.960
Haluk Demirkan: Again, utilized a lot of data sets, a lot of models, very advanced models.
119
00:17:41.090 --> 00:17:43.350
Haluk Demirkan: And then we are saying, oh, this AI app.
120
00:17:43.590 --> 00:17:54.899
Haluk Demirkan: solution needs to be explainable. What I found out in my machine learning AI background is that whenever
the accuracy gets higher, explainability gets lower.
121
00:17:55.080 --> 00:18:11.780
Haluk Demirkan: Because in order to have a higher prediction, higher confidence prediction, and accuracy, much better
accuracy, it means I need to use more complex models behind the scene. And whenever there's more complex models
behind the scene, it gets less explainable.
122
00:18:11.880 --> 00:18:18.580
Haluk Demirkan: But if I want to have a higher explainability, in most cases, if I use a very simple
123
00:18:18.690 --> 00:18:27.900
Haluk Demirkan: ML model behind the scenes, it is very explainable. So how do we balance this out, right? That gets really
challenging. So in that context.

124
00:18:28.570 --> 00:18:47.299
Haluk Demirkan: I put some methodology, mostly coming from machine learning science, ML science world, how we can use
to explain the output of these models. It really comes up to the metrics and measures, right? We also need to be able to
evaluate
125
00:18:47.350 --> 00:18:56.070
Haluk Demirkan: Measure it, evaluate, and improve how explainable this model is, right? Again, it really depends on the…
126
00:18:56.430 --> 00:18:58.840
Haluk Demirkan: type of solution we are developing.
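One concrete example of the ML-science explainability methods alluded to above is SHAP; a minimal, hedged sketch follows. The model, the data, and the choice of SHAP itself are illustrative, not taken from the slides.

# Illustrative post-hoc explanation of a tree ensemble with SHAP
# (pip install shap scikit-learn). Data and model are placeholders.
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

data = load_breast_cancer()
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(data.data, data.target)

explainer = shap.TreeExplainer(model)               # fast, exact for tree models
shap_values = explainer.shap_values(data.data[:5])  # per-feature contributions for 5 rows
# shap_values is an array (or, in older SHAP versions, a list of per-class arrays)
# whose entries say how much each feature pushed each prediction up or down.
print(shap_values)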
127
00:18:59.260 --> 00:19:04.890
Haluk Demirkan: In that case, I have another question. How do we balance innovation speed with responsibility?
128
00:19:05.410 --> 00:19:06.390
Haluk Demirkan: Again.
129
00:19:07.210 --> 00:19:17.399
Haluk Demirkan: When the responsibility goes up, innovation speed might go slow, right? Vice versa. Time to market versus
the building something more responsible.
130
00:19:18.030 --> 00:19:23.720
Haluk Demirkan: For the time being, I would like to continue so we can come back to this question later on.
131

00:19:24.790 --> 00:19:28.020
Haluk Demirkan: Alright, next one talks about accountability.
132
00:19:28.410 --> 00:19:32.709
Haluk Demirkan: This really ties to the one of the, you know, the question we had earlier.
133
00:19:32.830 --> 00:19:33.640
Haluk Demirkan: right.
134
00:19:33.840 --> 00:19:36.740
Haluk Demirkan: So, what should be the accountability?
135
00:19:37.190 --> 00:19:42.650
Haluk Demirkan: So, again, in this case, there are really 3 major items. Who's accountable?
136
00:19:43.510 --> 00:19:44.440
Haluk Demirkan: To whom?
137
00:19:44.830 --> 00:19:55.639
Haluk Demirkan: And then accountable for what, and how to accountable and rectify. So, what type of audit trails and then
error rectification mechanisms we can set up
138
00:19:55.810 --> 00:20:06.670
Haluk Demirkan: So we can measure… Manage, you know, evaluate, improve this accountability, right?

139
00:20:07.220 --> 00:20:23.559
Haluk Demirkan: In this case, I found some examples from literature, like IBM has some AI FactSheets, Google has model
cards, Amazon also has AI service cards, and when you Google it, or when you search on the internet, you can get some
examples in those contexts.
140
00:20:23.740 --> 00:20:30.710
Haluk Demirkan: Again, this also gets into the user feedback loops and agreement mechanisms.
141
00:20:31.180 --> 00:20:42.689
Haluk Demirkan: And more and more, some of the, you know, Gen AI solutions we are seeing in the market have some type
of user feedback mechanisms, because user feedback mechanisms is one of the very useful
142
00:20:42.760 --> 00:20:52.160
Haluk Demirkan: Information we can capture and then evaluate the accuracy and also accountability of the solution.
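An audit trail plus feedback loop of the kind described here can begin as something very small, such as an append-only log of each interaction and the user's rating. The field names and the file path below are illustrative placeholders.

# Minimal audit-trail sketch: append each interaction and any user feedback
# to a JSON-lines log so accuracy and accountability can be reviewed later.
# Field names and the log path are illustrative, not from the framework.
import json, time, uuid

def log_interaction(prompt, response, model_version, path="audit_log.jsonl"):
    record = {
        "id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "model_version": model_version,
        "prompt": prompt,
        "response": response,
    }
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")
    return record["id"]

def log_feedback(interaction_id, rating, comment="", path="audit_log.jsonl"):
    with open(path, "a") as f:
        f.write(json.dumps({"feedback_for": interaction_id,
                            "rating": rating, "comment": comment}) + "\n")

rid = log_interaction("What is RGAF?", "A responsible GenAI framework...", "v0.9")
log_feedback(rid, rating=5, comment="accurate and clear")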
143
00:20:53.730 --> 00:20:57.220
Haluk Demirkan: Alright, next one is… Being secure and private.
144
00:20:57.610 --> 00:21:01.030
Jim Spohrer: I think this gets a little challenging, too, right?
145
00:21:01.100 --> 00:21:03.310
Haluk Demirkan: I'm sorry, there's a comment? Question?
146
00:21:03.310 --> 00:21:22.019

Jim Spohrer: Sorry, I, I should turn on my video, too. So just on that accountability one, Terry earlier had talked about
possibly having an insurance company, to help with rectification, so I just wanted to insert that, that, Terry mentioned that.
147
00:21:22.320 --> 00:21:32.219
Haluk Demirkan: Actually, that might be another business model, right? That's… that makes sense. You're right. Insurance
companies, maybe that's the way insurance companies came to market, right?
148
00:21:34.010 --> 00:21:36.980
Haluk Demirkan: When, when, when we see some, different,
149
00:21:37.200 --> 00:21:51.640
Haluk Demirkan: I guess, you're right, it's the insurance companies are kind of type of risk management, right? So in this
case, we are talking about risk and how to manage the risk from Gen AI for the, again.
150
00:21:51.860 --> 00:21:59.050
Haluk Demirkan: the person, accountable versus to whom, and, you know, what, and rectification process. Yeah,
accountable.
151
00:21:59.640 --> 00:22:00.699
Haluk Demirkan: That makes sense.
152
00:22:01.130 --> 00:22:03.010
Haluk Demirkan: That will be interesting, actually.
153
00:22:04.510 --> 00:22:12.389
Haluk Demirkan: So, next one is, talking about, privacy and security. You know.

154
00:22:12.560 --> 00:22:15.730
Haluk Demirkan: Privacy about, basically, you know…
155
00:22:15.900 --> 00:22:22.110
Haluk Demirkan: Privacy of the individuals, or provide means to mitigate ethical and legal implications.
156
00:22:22.360 --> 00:22:25.310
Haluk Demirkan: Right, and then,
157
00:22:25.570 --> 00:22:39.440
Haluk Demirkan: and differential privacy and federated learning techniques, because privacy, I think, you know, there are
more and more regulations coming up in context of privacy of the data, right?
158
00:22:40.050 --> 00:22:41.490
Haluk Demirkan: And then,
159
00:22:41.710 --> 00:22:49.980
Haluk Demirkan: Security is another case. It is… it is the process of ensuring the security of the data and systems from various
threats.
160
00:22:50.300 --> 00:22:58.960
Haluk Demirkan: I mean, most Gen AI solutions, especially if there are free ones, if any of us are using a free version of Gen
AI solution in the market.
161

00:22:59.030 --> 00:23:12.149
Haluk Demirkan: These organizations already say that. They capture my questions, they capture the responses, and use them as
part of training the models for the next question
162
00:23:12.530 --> 00:23:18.620
Haluk Demirkan: by someone else. So if it is capturing every single interaction I do with this product.
163
00:23:18.870 --> 00:23:31.180
Haluk Demirkan: I don't want that interaction to be public, right? In that case, it needs to be very secure. And then privacy
gets a little, is a challenge, right? Another challenge, because,
164
00:23:31.720 --> 00:23:46.530
Haluk Demirkan: I'm actually doing some research, and in my research, I'm using a lot of synthetic data to develop this
GenAI solution, and when I use synthetic data, it's perfect, right? It's lots of privacy, but
165
00:23:46.700 --> 00:23:49.610
Haluk Demirkan: The problem is, how close to the human?
166
00:23:50.110 --> 00:24:08.050
Haluk Demirkan: my synthetic data is just the data I generate. I might be able to use synthetic data to test format and
functionality of the product I'm developing, but the accuracy of the answers will go down if I use synthetic data in my
training set.
167
00:24:08.300 --> 00:24:23.880
Haluk Demirkan: Right? As… if I want as accurate prediction with the GenAI solution, and as accurate answer to a question or
the interaction by a user, I need to use as real data as possible. But what's the balance?
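The balance being described can be seen in miniature by training the same model once on real rows and once on crude synthetic rows sampled from per-class statistics. The dataset, model, and synthesis method below are all placeholders, not from the research mentioned.

# Illustrative sketch of the synthetic-data tradeoff: a model trained on crude
# synthetic rows (per-class Gaussians) often scores somewhat lower than one
# trained on the real rows it imitates, because cross-feature structure is lost.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=3000, n_features=10, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)

# "Synthetic" training set: sample from per-class mean/std of the real data.
rng = np.random.default_rng(1)
X_syn, y_syn = [], []
for cls in np.unique(y_tr):
    real = X_tr[y_tr == cls]
    X_syn.append(rng.normal(real.mean(axis=0), real.std(axis=0), size=real.shape))
    y_syn.append(np.full(len(real), cls))
X_syn, y_syn = np.vstack(X_syn), np.concatenate(y_syn)

real_model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
syn_model = LogisticRegression(max_iter=1000).fit(X_syn, y_syn)
print("trained on real data:     ", real_model.score(X_te, y_te))
print("trained on synthetic data:", syn_model.score(X_te, y_te))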
168

00:24:24.110 --> 00:24:32.290
Haluk Demirkan: Again, in this case, I provided some examples. There are adversarial prompt defenses, or federated learning
concepts.
169
00:24:32.840 --> 00:24:45.869
Haluk Demirkan: Especially this federated learning is growing quite a bit, in our industry, in context of, you know, providing a
privacy, security, and also train these models,
170
00:24:46.250 --> 00:24:57.610
Haluk Demirkan: Basically, decentralized or distributed or federated training concepts, capturing and then training the
concepts, but not moving the data from one location to another.
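A toy version of the federated idea described here: each client fits a model on its own data and only the fitted weights, never the rows, leave the client; the server takes a weighted average. Pure NumPy and purely illustrative; real deployments use frameworks such as Flower or TensorFlow Federated.

# Toy federated-averaging sketch: each "client" fits a linear model locally
# and shares only the learned weights; the server averages them (FedAvg step).
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0, 0.5])

def local_fit(n_rows):
    """One client: generate private data locally and fit by least squares."""
    X = rng.normal(size=(n_rows, 3))
    y = X @ true_w + rng.normal(scale=0.1, size=n_rows)
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w, n_rows                      # only weights + row count are shared

client_updates = [local_fit(n) for n in (200, 500, 300)]
total = sum(n for _, n in client_updates)
global_w = sum(w * (n / total) for w, n in client_updates)   # weighted average
print("aggregated weights:", np.round(global_w, 3))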
171
00:24:58.690 --> 00:25:01.610
Haluk Demirkan: Compliance is a…
172
00:25:01.810 --> 00:25:12.570
Haluk Demirkan: big one. I think this is another big one. If you notice, I always keep saying each of them very big and
important. Like, 9 of them are all important, but how important for the use case, right?
173
00:25:12.780 --> 00:25:16.709
Haluk Demirkan: I think almost a different…
174
00:25:17.660 --> 00:25:23.329
Haluk Demirkan: companies, I think almost every company have some type of compliance processes internally.
175
00:25:24.830 --> 00:25:33.729

Haluk Demirkan: And also, almost every country, state in the United States, they are creating a type of compliance.
176
00:25:33.890 --> 00:25:46.990
Haluk Demirkan: processes, but still, I don't think there's any centralized certification process or product in the market for
this compliance, specific to Gen AI solutions.
177
00:25:47.160 --> 00:25:58.990
Haluk Demirkan: Okay? It's about legal, ethical, regulatory, and standards. And my personal experience is that usually
regulation… the development of regulations takes time.
178
00:25:59.220 --> 00:26:00.640
Haluk Demirkan: I think today.
179
00:26:00.770 --> 00:26:16.530
Haluk Demirkan: technology, the evolution of technology and evolution of Gen AI development processes are much, much,
much faster than the speed of developing these regulations and compliance.
180
00:26:17.050 --> 00:26:18.010
Haluk Demirkan: and, and…
181
00:26:18.720 --> 00:26:29.150
Haluk Demirkan: we will have some challenges, right? We might be able to learn from our mistakes, I guess, and then those
mistakes will become input to the compliance or regulations.
182
00:26:29.280 --> 00:26:38.330
Haluk Demirkan: So, in this case, I provided a couple examples. I think NIST, specific to NIST, the OECD and NIST have really
good examples.

183
00:26:38.710 --> 00:26:50.949
Haluk Demirkan: in context of risk management, and then specific to the Gen AI. I know EU also has an EU AI Act and GDPR
in context of data privacy.
184
00:26:52.810 --> 00:26:59.949
Haluk Demirkan: But, you know, these are some challenging issues to address by all of us.
185
00:27:00.370 --> 00:27:06.030
Haluk Demirkan: So next one is… gets into the ethical, and being fair, and unbiased.
186
00:27:06.140 --> 00:27:13.150
Haluk Demirkan: I think especially in, in the world of, machine learning science, ML science, and,
187
00:27:13.770 --> 00:27:21.060
Haluk Demirkan: optimization, operation research, science. A lot of science areas. We talk about being biased or unbiased.
188
00:27:21.330 --> 00:27:34.030
Haluk Demirkan: Like, is the model biased or unbiased? Is the data… is the way we design the product biased, or the
interface is biased or unbiased, right? This really gets into the…
189
00:27:35.110 --> 00:27:53.920
Haluk Demirkan: how we can align with moral principles and societal values. And being ethical and fair is also overlapping with
accessibility, right? I might create a beautiful product, but my UI is more accessible to a certain type of
190

00:27:54.700 --> 00:27:57.749
Haluk Demirkan: customers, or vice versa. So…
191
00:27:57.890 --> 00:28:09.600
Haluk Demirkan: And then, these are some of the tools I noticed. There's some toolkits in the market. I know IBM has this AI
Fairness 360 evaluation methodology.
192
00:28:09.750 --> 00:28:14.560
Haluk Demirkan: Microsoft and Google even have some What-If type of tools.
193
00:28:14.750 --> 00:28:21.410
Haluk Demirkan: This is a… you know, we need to use some more science, basically, to evaluate this, right?
194
00:28:22.950 --> 00:28:24.080
Haluk Demirkan: And,
195
00:28:24.370 --> 00:28:38.210
Haluk Demirkan: And then there are some different techniques in context of… it really comes up… if we really get into the
concept of the Gen AI solution's response to a question by a user, it really gets into the data and training, primarily.
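The fairness toolkits just named (e.g. IBM's AI Fairness 360, the What-If Tool) compute metrics such as the disparate-impact ratio. Below is a plain-pandas sketch of that one metric, with made-up data, just to show what such an evaluation measures.

# Plain sketch of one fairness metric (disparate impact): the ratio of
# favorable-outcome rates between an unprivileged and a privileged group.
import pandas as pd

df = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,   0,   1,   1,   1,   0,   0,   0 ],
})

rate = df.groupby("group")["approved"].mean()
disparate_impact = rate["B"] / rate["A"]   # unprivileged rate / privileged rate
print(f"approval rates: {rate.to_dict()}  disparate impact: {disparate_impact:.2f}")
# A common rule of thumb flags ratios below ~0.8 for closer review.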
196
00:28:40.480 --> 00:28:41.180
Haluk Demirkan: Alright.
197
00:28:41.560 --> 00:28:50.390
Haluk Demirkan: Next one is my… this is the last one. This is the ninth dimension. Hopefully, I didn't go too fast.
Environmentally sustainable.

198
00:28:50.960 --> 00:28:54.740
Haluk Demirkan: So, sustainability of the… because…
199
00:28:55.020 --> 00:29:04.250
Haluk Demirkan: I read an article at one point. They were talking about how much power
200
00:29:04.460 --> 00:29:12.240
Haluk Demirkan: factories are using to produce batteries, to make a battery for electric cars.
201
00:29:13.070 --> 00:29:26.180
Haluk Demirkan: So there's really interesting controversy about that. How much, you know, we are talking about electric
cars, for example, is environmentally sustainable, but on the other side, the other argument was.
202
00:29:26.900 --> 00:29:33.610
Haluk Demirkan: How much, energy is being spent? How much space has been used?
203
00:29:33.810 --> 00:29:41.529
Haluk Demirkan: to… to make… make, or build batteries for electric cars. Similar.
204
00:29:41.720 --> 00:29:48.259
Haluk Demirkan: When you really think about it, the amount of, hardware, software,
205
00:29:48.480 --> 00:29:51.690

Haluk Demirkan: and memory capacity have been used.
206
00:29:51.840 --> 00:30:04.350
Haluk Demirkan: Internet, you know, energy has been used. I mean, today, we cannot use a computer if we don't have
electricity. So all these computers need at least electricity to run, right?
207
00:30:04.920 --> 00:30:05.970
Haluk Demirkan: So does…
208
00:30:06.310 --> 00:30:19.800
Haluk Demirkan: This is another one some companies and organizations are working on: tracking the carbon
emissions of different ML workloads. How can we define some metrics and measures
209
00:30:20.130 --> 00:30:28.169
Haluk Demirkan: And, and evaluate carbon use, carbon and energy footprint of these AI solutions.
210
00:30:28.710 --> 00:30:32.429
Haluk Demirkan: I mean, certain things.
211
00:30:32.610 --> 00:30:37.380
Haluk Demirkan: we can do without using Gen AI.
212
00:30:37.680 --> 00:30:43.829
Haluk Demirkan: I mean, I can give one example with a couple of colleagues. We developed a GenAI solution.

213
00:30:44.400 --> 00:30:55.039
Haluk Demirkan: to basically do some math analysis, and then provide reasonable explanations from the content.
214
00:30:55.390 --> 00:31:04.850
Haluk Demirkan: So what we discussed… what we discovered is, when we use GenAI for some simple math, it uses a lot of
tokens. It actually gets pretty expensive.
215
00:31:04.980 --> 00:31:13.809
Haluk Demirkan: And then that was the first… like, phase one, we developed a full solution.
216
00:31:13.960 --> 00:31:15.930
Haluk Demirkan: Fully by using a Gen AI.
217
00:31:16.050 --> 00:31:24.859
Haluk Demirkan: First phase, do the math analysis. Second phase, providing a reasoning from the content. That's the phase
one. And then phase two, we said, hey.
218
00:31:24.890 --> 00:31:43.779
Haluk Demirkan: How about we just use basic Python to do the math analysis, save the results, and then use those
results only for the reasoning that will be pulled by the GenAI solution? Oh my god, the token transactions and the cost of the
whole thing went down to almost one tenth.
219
00:31:44.600 --> 00:31:52.229
Haluk Demirkan: In my example. So we need to think about those types of things. How can I
220

00:31:52.760 --> 00:32:05.320
Haluk Demirkan: basically use GenAI for a specific purpose? Yes, we can use it for a lot of things, but do we really
need it, right? Maybe…
221
00:32:05.720 --> 00:32:15.879
Haluk Demirkan: If we don't need to use GenAI, then we use some other methods that will actually keep the cost down,
keep the energy footprint down.
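
A minimal sketch of the phase-two pattern described above: plain Python does the arithmetic, and only the compact results go to the model for the natural-language reasoning step. The call_llm() function is a hypothetical placeholder, not a real provider API.

# Minimal sketch: deterministic math in Python, reasoning delegated to the model.
import statistics

def summarize(values):
    return {
        "count": len(values),
        "mean": round(statistics.mean(values), 2),
        "stdev": round(statistics.stdev(values), 2),
        "min": min(values),
        "max": max(values),
    }

def call_llm(prompt):
    # Placeholder: swap in whatever client or SDK you actually use.
    return f"[LLM explanation based on: {prompt}]"

sales = [120, 135, 98, 160, 142, 110]
stats = summarize(sales)   # cheap, deterministic, zero tokens
prompt = f"In two sentences, explain what these monthly sales statistics suggest: {stats}"
print(call_llm(prompt))    # only the short reasoning step consumes tokens
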
222
00:32:16.070 --> 00:32:17.190
Haluk Demirkan: Just a thought.
223
00:32:17.780 --> 00:32:23.709
Haluk Demirkan: There's no perfect answer. So overall, there are 9 dimensions, but…
224
00:32:24.090 --> 00:32:29.019
Haluk Demirkan: And then how we can do it, right? I provided some examples and tools at the bottom.
225
00:32:29.660 --> 00:32:34.410
Haluk Demirkan: Of each nine… of each of those 9 dimensions, but…
226
00:32:34.540 --> 00:32:39.510
Haluk Demirkan: This is still a difficult thing, though. Like, how do I define the weights
227
00:32:40.100 --> 00:32:44.150
Haluk Demirkan: for each of those nine dimensions? What should be the weight, the importance?

228
00:32:44.690 --> 00:32:50.359
Haluk Demirkan: And how do I measure it? How do I validate? When do I do it? So I came up with a very simple
229
00:32:50.520 --> 00:32:54.599
Haluk Demirkan: process diagram. This is something I came up with last night.
230
00:32:55.010 --> 00:33:01.300
Haluk Demirkan: So, if I think about it, there are 5 major phases of developing a
231
00:33:01.620 --> 00:33:07.080
Haluk Demirkan: GenAI solution. Discover is pretty well known, it's more like
232
00:33:07.440 --> 00:33:13.730
Haluk Demirkan: a business use case or requirements; the design phase; develop, where developers are, basically, you know,
233
00:33:13.940 --> 00:33:20.910
Haluk Demirkan: pulling the data, harnessing the data, the ML models, all the different methods; and then deploy and
maintain.
234
00:33:21.470 --> 00:33:36.989
Haluk Demirkan: So, as you notice, for each of these phases, I mapped out the weight or importance. We need to think
about this, because it is really hard to do all 9 dimensions in every phase. That will take a lot of time.
235

00:33:37.180 --> 00:33:42.620
Haluk Demirkan: But if we at least concentrate on
236
00:33:42.960 --> 00:33:48.539
Haluk Demirkan: a dimension at a different phase, I can spend a little bit of less time, right?
237
00:33:49.110 --> 00:33:55.600
Haluk Demirkan: So I'm trying to basically balance responsibility with the, you know, speed of innovation.
238
00:33:56.230 --> 00:33:58.009
Haluk Demirkan: So, the Discover phase.
239
00:33:58.310 --> 00:34:17.339
Haluk Demirkan: Based on my research, human-centered and accessibility have a higher weight, so those two I can spend
more time on. Design gets into ethical fairness, transparency, explainability. Because in the design phase, I really need to think
about… explainability also relates to efficacy.
240
00:34:17.690 --> 00:34:23.589
Haluk Demirkan: Right? Like, during the design phase, I also need to think about, hey, how am I gonna measure
241
00:34:24.130 --> 00:34:25.659
Haluk Demirkan: and evaluate.
242
00:34:26.420 --> 00:34:33.729
Haluk Demirkan: accuracy of my output. That is one of the things most of us are not spending a lot of time on.

243
00:34:33.989 --> 00:34:38.650
Haluk Demirkan: Like, yes, GenAI is generating a solution. It works really well,
244
00:34:38.760 --> 00:34:58.519
Haluk Demirkan: functionally, but the output… what is the confidence rate for the output of the GenAI solution? Those are
the things I need to think about in the design phase. What will be my success measure? Is 80% accuracy
good enough? Is it 60%, 70, 90?
245
00:34:58.860 --> 00:35:02.080
Haluk Demirkan: I don't know, it really depends on the solution we develop.
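
One way to make that design-phase question concrete is to fix an acceptance threshold up front and gate deployment on it. The sketch below uses a hypothetical threshold and a tiny made-up evaluation set; the real values depend on the solution, as noted above.

# Minimal sketch: turn "is 80% good enough?" into an explicit, testable success measure.
def accuracy(predictions, gold):
    assert len(predictions) == len(gold)
    return sum(p == g for p, g in zip(predictions, gold)) / len(gold)

ACCEPTANCE_THRESHOLD = 0.80   # agreed on during design, revisited per use case

preds = ["yes", "no", "yes", "yes", "no"]
gold  = ["yes", "no", "no",  "yes", "no"]
score = accuracy(preds, gold)
print(f"accuracy={score:.2f}", "PASS" if score >= ACCEPTANCE_THRESHOLD else "FAIL: do not deploy")
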
246
00:35:02.800 --> 00:35:17.970
Haluk Demirkan: And then, again, in the develop phase, my suggestion was to spend time heavily on privacy and security, because
in the develop phase we have a lot of data we play with, right? Again, we also need to think about who needs to have
access to that data.
247
00:35:18.070 --> 00:35:18.810
Haluk Demirkan: Right?
248
00:35:19.030 --> 00:35:28.530
Haluk Demirkan: And then deploy: compliance, accountability, rectifiability. And then maintain: again, some of those we may
need to repeat,
249
00:35:28.760 --> 00:35:35.709
Haluk Demirkan: in context of when we are maintaining, how can I maintain this as efficiently as possible?

250
00:35:36.040 --> 00:35:47.489
Haluk Demirkan: Another thing, another guideline or suggestion I have, because we have quite a few GenAI
experts in this presentation, is that we need to expect the unexpected.
251
00:35:48.180 --> 00:35:49.000
Haluk Demirkan: Alright.
252
00:35:49.140 --> 00:36:00.640
Haluk Demirkan: Again, there are some videos on the Internet. I can spend 10 minutes learning, and I can pop up and
create a simple GenAI solution, just by using APIs.
253
00:36:00.820 --> 00:36:02.220
Haluk Demirkan: Yeah, but again.
254
00:36:03.400 --> 00:36:13.390
Haluk Demirkan: what might go right or what might go wrong. We need to expect the unexpected. In that case, my
suggestion is, it's a type of risk analysis, right?
255
00:36:13.720 --> 00:36:16.830
Haluk Demirkan: So when we are developing a GenAI solution.
256
00:36:17.940 --> 00:36:26.160
Haluk Demirkan: It's really managing the risk, which comes back to maybe Terry's point about insurance. Actually, I put that as one other
solution, maybe, or suggestion.
257

00:36:26.610 --> 00:36:32.809
Haluk Demirkan: So NIST has a really good framework about risk management, the risk of AI systems.
258
00:36:33.140 --> 00:36:41.629
Haluk Demirkan: These are some of the standardization frameworks. They are building these risk management processes: ISO/IEC
42001,
259
00:36:42.090 --> 00:36:46.689
Haluk Demirkan: ISO/IEC 23894, or NIST, and then the EU AI Act.
260
00:36:47.210 --> 00:36:50.349
Haluk Demirkan: And then I put this as a very, very simple
261
00:36:50.420 --> 00:37:08.589
Haluk Demirkan: open-source content about risk management. We have all heard about risk management, you know, in the context
of project management, for many years. It's about the measure of the probability of an event occurring, and then what might be
the impact of that
262
00:37:08.730 --> 00:37:11.020
Haluk Demirkan: Risk, if that happens.
263
00:37:11.250 --> 00:37:26.419
Haluk Demirkan: Right? Because we buy a car insurance to be able to manage if I have an accident, right? But if I don't have
an accident, then the money I spent for the insurance is gone.
264
00:37:26.680 --> 00:37:35.260

Haluk Demirkan: Right? So, this is kind of the NIST framework. They kind of came up with this 5-by-5 measurement concept,
265
00:37:35.520 --> 00:37:40.470
Haluk Demirkan: basically, likelihood of something might be happening.
266
00:37:41.090 --> 00:37:43.040
Haluk Demirkan: and severity.
267
00:37:43.780 --> 00:37:48.440
Haluk Demirkan: Basically, how extreme, the severity of that issue.
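
A minimal sketch of a 5-by-5 likelihood-times-severity scoring in the spirit of that risk matrix; the band cut-offs and the example ratings are illustrative assumptions.

# Minimal sketch: score a risk from 1-5 likelihood and 1-5 severity ratings.
def risk_rating(likelihood, severity):
    """likelihood and severity on a 1-5 scale; returns (score, band)."""
    score = likelihood * severity          # 1..25
    if score >= 15:
        band = "high - mitigate before release"
    elif score >= 8:
        band = "medium - mitigate or accept with sign-off"
    else:
        band = "low - monitor"
    return score, band

print(risk_rating(likelihood=4, severity=5))   # e.g., a privacy leak in a health use case
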
268
00:37:48.760 --> 00:38:00.529
Haluk Demirkan: And then… I'm actually working on another research concept about this. For each of those nine
dimensions, what are the types of questions I can ask, specifically? And then
269
00:38:00.720 --> 00:38:06.690
Haluk Demirkan: Use those questions to evaluate, basically, each dimension.
270
00:38:07.480 --> 00:38:09.010
Haluk Demirkan: So…
271
00:38:09.740 --> 00:38:14.679
Haluk Demirkan: I'm kind of coming towards the end a little bit, so we will have some time for discussion.
272

00:38:16.190 --> 00:38:24.320
Haluk Demirkan: Some of you might be thinking, okay, all sounds good, we all want to have a great, excellent, responsible
AI, but it's gonna take some time.
273
00:38:24.600 --> 00:38:26.480
Haluk Demirkan: That is true.
274
00:38:27.620 --> 00:38:33.360
Haluk Demirkan: My experience is, it may slow us down at the beginning, But…
275
00:38:33.740 --> 00:38:41.119
Haluk Demirkan: It is… I mean, I have a very strong project management background.
276
00:38:43.450 --> 00:38:54.170
Haluk Demirkan: When I managed projects in the past, everybody thought doing basic risk management for a project
was kind of a waste.
277
00:38:54.170 --> 00:39:04.460
Haluk Demirkan: A lot of people thought that way, in my experience. That's one of the things leadership wanted to cut
from the effort estimates.
278
00:39:04.810 --> 00:39:11.900
Haluk Demirkan: But what I found out is we can do this in a very, very simple way. So I can literally do a
279
00:39:12.050 --> 00:39:17.020
Haluk Demirkan: Risk management for a project, in my mind, just even spending 15 minutes.

280
00:39:17.360 --> 00:39:22.180
Haluk Demirkan: It doesn't have to be a 15-hour effort, right? We can start slow, with a
281
00:39:22.470 --> 00:39:27.619
Haluk Demirkan: very high-level thinking process. Maybe initially I can have 5 questions.
282
00:39:27.730 --> 00:39:42.219
Haluk Demirkan: to do risk management for each of those dimensions, or maybe 3 questions. Then next time, I
use 10 questions. Next time, I use 15 questions. We kind of learn from our mistakes, right?
283
00:39:42.470 --> 00:39:47.950
Haluk Demirkan: So… So, we all know it's about trust.
284
00:39:48.100 --> 00:39:56.920
Haluk Demirkan: I think, for GenAI, one of the biggest challenges today is building trust.
285
00:39:57.190 --> 00:40:05.909
Haluk Demirkan: Somebody asked me last week, hey, look, will you buy a driverless car? It's about trust, right?
286
00:40:06.030 --> 00:40:15.160
Haluk Demirkan: And also, trust with customers, regulators, partners. The next one is that trust translates to adoption.
287
00:40:15.620 --> 00:40:20.390

Haluk Demirkan: I mean, most of us are in a heavy technology world, right?
288
00:40:21.330 --> 00:40:32.550
Haluk Demirkan: My experience is, I define the success of a technology-enabled project based on adoption and utilization,
not based on
289
00:40:35.000 --> 00:40:49.559
Haluk Demirkan: development of that product according to the budget and timeline. Yes, we can develop a product with
technology within the budget, within the resources, by the deadline, which doesn't happen a lot.
290
00:40:49.680 --> 00:40:56.730
Haluk Demirkan: But if nobody uses it, is it really… if nobody adopts that solution, is it a success? Probably not.
291
00:40:57.530 --> 00:41:03.009
Haluk Demirkan: So… Again, responsibility is about trust, and trust
292
00:41:03.150 --> 00:41:22.510
Haluk Demirkan: translates to adoption, right? And then adoption… it kind of has circular effects. The more people
adopt and use it, the more we can learn, and we can improve the next one, right? That's the idea. And then, we can also
comply with regulations, because these regulations are gonna grow.
293
00:41:23.190 --> 00:41:27.190
Haluk Demirkan: Every day, in my opinion, And then…
294
00:41:27.550 --> 00:41:34.599
Haluk Demirkan: It's about scaling easier and safer. It's building on a solid foundation, instead of a shaky ground.

295
00:41:34.770 --> 00:41:40.869
Haluk Demirkan: And I really think… building the solution on a solid foundation makes it easier. Again,
296
00:41:41.030 --> 00:41:46.890
Haluk Demirkan: at the beginning, because of the new learning, it may take extra time, but
297
00:41:47.220 --> 00:41:51.159
Haluk Demirkan: After a few examples, we can be more practical.
298
00:41:52.390 --> 00:42:05.910
Haluk Demirkan: So, I don't know if you heard this. MIT published a report. I'm sharing the report with you. They did a very
long analysis and research, and they said:
299
00:42:06.020 --> 00:42:10.489
Haluk Demirkan: You know, 95% of Gen AI pilots are failing.
300
00:42:11.010 --> 00:42:14.090
Haluk Demirkan: The success rate is 5% right now.
301
00:42:14.580 --> 00:42:16.929
Haluk Demirkan: And there are so many reasons.
302
00:42:17.970 --> 00:42:23.810

Haluk Demirkan: And the primary reason, in my opinion, is adoption and trust.
303
00:42:24.010 --> 00:42:27.110
Haluk Demirkan: Which… How we can develop this.
304
00:42:27.610 --> 00:42:31.320
Haluk Demirkan: So bottom line is this, basically. I, I…
305
00:42:31.630 --> 00:42:38.269
Haluk Demirkan: These are a couple of images I drew, thanks to GenAI, about the shaky ground, right?
306
00:42:38.470 --> 00:42:46.819
Haluk Demirkan: It's… it is a cost center, in my opinion. It's not much different than cybersecurity 15, 20, 30 years
ago, right?
307
00:42:48.700 --> 00:42:49.849
Haluk Demirkan: I mean,
308
00:42:50.020 --> 00:42:59.399
Haluk Demirkan: 20 years ago, the organizations who thought cybersecurity was an optional thing… I don't think we
see them around much.
309
00:42:59.810 --> 00:43:02.349
Haluk Demirkan: It just has to be, right? It's just part of it.

310
00:43:02.670 --> 00:43:06.530
Haluk Demirkan: So we need to embrace Responsibility.
311
00:43:07.030 --> 00:43:13.370
Haluk Demirkan: To develop faster, safer, with more trust.
312
00:43:13.780 --> 00:43:23.829
Haluk Demirkan: Otherwise, we may end up cleaning up some crisis. Okay, in this case, basically, which side do we want to
be on? Building the foundation on, you know, strong ground,
313
00:43:24.070 --> 00:43:28.410
Haluk Demirkan: or building the solution on shaky ground.
314
00:43:29.090 --> 00:43:34.249
Haluk Demirkan: This is some of my continuous research areas.
315
00:43:34.580 --> 00:43:39.780
Haluk Demirkan: I'm working with, you know, also Dr. Jim Spohrer on the Digital Tebum Project.
316
00:43:40.090 --> 00:43:45.279
Haluk Demirkan: The second one is my continuous research about really defining the metrics measures
317
00:43:45.470 --> 00:43:49.020
Haluk Demirkan: and evaluate… I'm building some type of Excel spreadsheet.

318
00:43:49.550 --> 00:43:59.010
Haluk Demirkan: a practical Excel spreadsheet for this risk assessment process that can be deployed in early design phase
versus pre-launch versus post-launches.
319
00:43:59.040 --> 00:44:13.560
Haluk Demirkan: The third one is some area Jim and I have been collaborating for a very long time about how to develop this
collaborative intelligence, like using the strengths of people, strengths of AI, and of course, processes.
320
00:44:13.930 --> 00:44:20.990
Haluk Demirkan: I mean, in some of the literature, I don't hear much about process, but GenAI is about changing
processes, workflows.
321
00:44:21.500 --> 00:44:29.010
Haluk Demirkan: You know, any technology solution is actually changing how people do things, alright?
322
00:44:29.740 --> 00:44:31.130
Haluk Demirkan: the…
323
00:44:31.560 --> 00:44:40.150
Haluk Demirkan: in daily life or work, either way. My fourth area: I'm heavily passionate about metrics. As a scientist,
324
00:44:40.460 --> 00:44:46.360
Haluk Demirkan: working on different metrics and measures to evaluate LLM solutions. Again,
325

00:44:46.500 --> 00:44:50.579
Haluk Demirkan: accuracy, of course. The accuracy of the LLM solution
326
00:44:50.750 --> 00:44:55.649
Haluk Demirkan: Is very important in my mind. Prediction accuracy.
327
00:44:55.930 --> 00:44:58.720
Haluk Demirkan: And then these are some samples of
328
00:44:59.580 --> 00:45:07.570
Haluk Demirkan: metrics and measures. Just let me know if you'd like additional information; I will be very happy to share.
329
00:45:09.300 --> 00:45:13.830
Haluk Demirkan: And this is my, contact information.
330
00:45:14.010 --> 00:45:17.560
Haluk Demirkan: And, any questions, comments?
331
00:45:18.310 --> 00:45:20.830
Haluk Demirkan: We have about 10 minutes, if… if…
332
00:45:21.140 --> 00:45:24.770
Haluk Demirkan: If there are any questions or comments, discussion-wise.
333

00:45:26.310 --> 00:45:32.449
Jim Spohrer: Yeah, there's quite a few in the chat, and… Hopefully I didn't go too fast, but I wanted to.
334
00:45:33.120 --> 00:45:35.980
Jim Spohrer: Did you want to organize them, Christine?
335
00:45:36.240 --> 00:45:37.000
Jim Spohrer: Or…
336
00:45:40.990 --> 00:45:42.190
Jim Spohrer: I think… I have.
337
00:45:42.190 --> 00:45:46.620
Christine Ouyang: I haven't, read all of them yet.
338
00:45:46.880 --> 00:45:55.540
Jim Spohrer: Yeah, Steve Alter had, several, he… Steve, do you want to come off mute and just, describe it yourself? You had
several comments.
339
00:45:58.850 --> 00:46:00.490
Haluk Demirkan: I will also open the chat.
340
00:46:01.600 --> 00:46:02.540
James E Mister: able to speak?

341
00:46:03.240 --> 00:46:21.740
steve alter: Let me just say that, you know, I thought that the framework was really quite good, and had addressed a lot of
very, very important issues, and it was well organized, and so on. Then the challenge is really, understanding how it applies
to specific examples.
342
00:46:21.740 --> 00:46:22.310
Haluk Demirkan: Right.
343
00:46:22.310 --> 00:46:32.879
steve alter: As opposed to talking about it in general. And it's possible to continue the general discussion, but like I put in the
example of a robo-doctor.
344
00:46:33.170 --> 00:46:41.010
steve alter: And let's say our robo-doctor says that, actually, you should take Tylenol during pregnancy, if you need it.
345
00:46:41.240 --> 00:46:48.419
steve alter: well, actually, the government just came out, or some spokespeople for the government came out and said no.
Well, like.
346
00:46:48.590 --> 00:47:08.239
steve alter: That's a specific example. What do we do about that? If someone's developing a robo-doctor, or what would we
do if the robo-doctor has some specific advice about whether you should take COVID vaccines or not, and the government
comes up with something else? And then, I mean, that's a really easy
347
00:47:08.240 --> 00:47:20.980
steve alter: case, but the example, the issue is, how can these ideas be used to real… applied to real examples in a way that's
genuinely useful? And I think that's the challenge here.

348
00:47:21.210 --> 00:47:26.739
Haluk Demirkan: And I think it comes down to trust. Like, if it were me, I would think about…
349
00:47:27.980 --> 00:47:40.009
Haluk Demirkan: let's say I don't know either of them. Let's say one government official made that comment versus the
robo-doctor. With neither of them do I have any direct contact or connection. I think about which one I trust.
350
00:47:40.010 --> 00:47:50.229
Haluk Demirkan: Okay, then I will think about why I trust it. Like, why I should trust this person or this thing, this opinion
versus that one. And then in that trust case,
351
00:47:50.510 --> 00:47:54.570
Haluk Demirkan: In my mind, as a user, or… right?
352
00:47:54.720 --> 00:48:09.609
Haluk Demirkan: I will ask questions: okay, what will help me build that trust? Like, I will think about, okay, how many test
cases? How many numbers? I'm just thinking, but everybody has a different way of trusting.
353
00:48:09.610 --> 00:48:16.039
steve alter: you're thinking about it as a very sophisticated expert. Imagine that you're a 15-year-old
354
00:48:16.040 --> 00:48:32.680
steve alter: who's trying to use a robo-psychiatrist or counselor, or something, and walking down a path that's leading in
some very dangerous directions. Sure. That… that kid is not going to be asking really clear

355
00:48:32.740 --> 00:48:43.060
steve alter: questions about trust. And so… so, like, how would this apply to that? I don't think it can give an answer right
now, but I think that's really the important question.
356
00:48:43.580 --> 00:48:49.280
Haluk Demirkan: I was honestly thinking… the reason I go to a
357
00:48:49.850 --> 00:48:56.269
Haluk Demirkan: psychiatrist is because I think that psychiatrist is a trained, certified person, right?
358
00:48:56.390 --> 00:49:05.070
Haluk Demirkan: I don't just go to a random person on the street to ask for psychiatric advice, right? But in that context,
359
00:49:05.840 --> 00:49:09.170
Haluk Demirkan: I think that with developing that GenAI solution,
360
00:49:09.870 --> 00:49:27.940
Haluk Demirkan: the responsibility really goes to the GenAI development company or organization. Like, basically, we don't have a
certified GenAI solution that is certifiable or credible to provide that psychiatric help. Makes sense?
361
00:49:28.030 --> 00:49:33.540
Haluk Demirkan: So, like, in that 15-year-old kid example, the GenAI solution shouldn't have given those answers.
362
00:49:34.370 --> 00:49:53.820

Haluk Demirkan: Yeah. In my opinion, because it's a generic GenAI solution, if somebody asks about, hey,
psychology advice, the GenAI solution should respond back saying, hey, you need to contact this phone number, or
this person, or this place to get help or get advice. That's my opinion. But…
363
00:49:53.940 --> 00:49:57.349
Haluk Demirkan: 5 years, 10 years from now, we may have, maybe, a
364
00:49:57.810 --> 00:50:08.580
Haluk Demirkan: certified, Trained, specifically certified, educated, counselor JAI solution? We may. I don't know.
365
00:50:09.160 --> 00:50:12.479
Haluk Demirkan: But anybody, anybody else have any opinion? Yeah, please.
366
00:50:12.820 --> 00:50:26.719
Christine Ouyang: Yeah, look, you know, I agree, but on the other hand, I think the government also should, you know,
establish regulations, you know, to really regulate this industry very heavily, especially, like, you know, the legal
367
00:50:26.720 --> 00:50:34.929
Christine Ouyang: the, the medical, you know, healthcare, right? Those industries, and finance, industries very, very heavily.
368
00:50:35.110 --> 00:50:44.820
Christine Ouyang: So, and also, I think, even, let's say, if you ask a human, expert, there's often no black and white.
369
00:50:44.830 --> 00:51:04.510
Christine Ouyang: type of answers, because it really, depends on the specific use case, specific patient, specific, you know,
client. So, oftentimes, you know, like, even for expert, human experts, SMEs, right, subject matter experts,

370
00:51:04.850 --> 00:51:19.610
Christine Ouyang: we wouldn't have THE answer, okay, or only one answer, correct answer, right? So I think that really, is the
complexity, that we are seeing unfolding, while we are developing
371
00:51:19.620 --> 00:51:26.190
Christine Ouyang: for some of us, and also using, you know, the AI, right?
372
00:51:32.240 --> 00:51:35.039
Haluk Demirkan: Yeah, lots of things are gray, Steve, I think.
373
00:51:36.300 --> 00:51:38.659
Haluk Demirkan: Yeah, it's a great group. I think we are still learning.
374
00:51:38.870 --> 00:51:49.629
Haluk Demirkan: But I personally think there will be more certified or accredited, specific GenAI
solutions in the market, I think.
375
00:51:50.440 --> 00:51:56.789
steve alter: If you look at the universities right now, do you think there's accredited, knowledge about interpersonal
relations?
376
00:51:57.460 --> 00:51:59.319
steve alter: That anybody could agree on?
377

00:52:00.290 --> 00:52:17.040
steve alter: Don't know what to say. Okay, okay, I'm using that example because, you know, you're hoping that 10 years from
now we will have accredited capabilities for some things. There are an awful lot of very important things that are going on in
universities where there's no agreement.
378
00:52:17.880 --> 00:52:20.829
steve alter: About what's the right thing to say or do.
379
00:52:21.710 --> 00:52:22.290
Haluk Demirkan: Yeah.
380
00:52:22.820 --> 00:52:27.369
Haluk Demirkan: There will be different opinions, like you said, even for the universities, right?
381
00:52:29.020 --> 00:52:36.260
Christine Ouyang: You know, I totally agree with you, Steve. You know, I… we see… human.
382
00:52:36.730 --> 00:52:44.279
Christine Ouyang: you know, value conflicts all the time, right? So then, let's say if this, framework
383
00:52:44.540 --> 00:52:51.139
Christine Ouyang: It's really, Focused on this human-centric
384
00:52:51.210 --> 00:53:00.180
Christine Ouyang: AI, or human-centered AI, right? So then, because humans are such complex beasts, you know, well…

385
00:53:00.180 --> 00:53:12.250
Christine Ouyang: we, you know, ourselves oftentimes don't agree with ourselves, right? Let alone with others. So then, you
know, like, how would AI, as a technology.
386
00:53:12.280 --> 00:53:20.230
Christine Ouyang: to, to be human-centered, right? To adopt this human-centered, kind of a view.
387
00:53:20.300 --> 00:53:22.269
Christine Ouyang: Maybe that's a question
388
00:53:23.290 --> 00:53:31.319
Christine Ouyang: I mean, again, I don't think we have an answer, but… but maybe just a, you know, any opinions, any…
389
00:53:31.980 --> 00:53:33.180
Jim Spohrer: We, we have, like.
390
00:53:33.180 --> 00:53:34.070
Christine Ouyang: Thoughts.
391
00:53:34.070 --> 00:53:43.710
Jim Spohrer: We have 7 minutes left, and I noticed that James has a question, and I believe Brad may, because he came… Yes,
I do have a question.
392

00:53:43.710 --> 00:53:45.070
Haluk Demirkan: Camera. Yes, please.
393
00:53:45.490 --> 00:53:51.289
James E Mister: Hi, sorry, I'm actually here in Frankfurt, so I'm in the lounge, so there's a little bit of…
394
00:53:51.290 --> 00:53:52.850
Haluk Demirkan: Not late for you, but thank you.
395
00:53:52.850 --> 00:54:12.200
James E Mister: No, no, no, it's dinner time, so I'm still eating a little bit, but I really wanted to ask, I met with a friend of mine
who works for the EU's, so this $400 million fund that the EU Innovation Council has funded to help with the energy
transition more broadly, and as Jim knows, I've worked transatlantically between the EU and the U.S,
396
00:54:12.200 --> 00:54:21.250
James E Mister: And as I've written, I mean, I'm very concerned with the ethical dimensions of, you know, RAI, you know,
responsible AI,
397
00:54:21.450 --> 00:54:35.440
James E Mister: All the aspects of it, but especially, beyond just the normal, questions of ethics and health or finance, but
also energy as a critical, a critical dimension.
398
00:54:35.440 --> 00:54:49.789
James E Mister: Because what seems to be happening is a lot of these hyperscalers, or even energy companies, are in bed
with hyperscalers in countries where there's not a lot of renewable, access to renewable, development of renewables, money
that can go towards things like fusion energy or fusion research.
399

00:54:49.790 --> 00:55:02.989
James E Mister: And they're not really thinking about that dimension of it. And they're housing a lot of data centers there
that are more or less what we would call dirty, right? Fossil fuel based and fossil fuel powered.
400
00:55:03.050 --> 00:55:10.509
James E Mister: My question is, how do we address that, or does… can that be subsumed under this sort of ethical…
401
00:55:10.940 --> 00:55:26.499
James E Mister: pillar, or is that, like, an overlap of the energy and ethics question, and what are your thoughts on that? Or
have you heard about these… these sorts of, projects? I mean, my friend was… is from Colombia originally, so he's
Colombian-Spanish, but living in Germany, and was telling me how… how…
402
00:55:27.180 --> 00:55:30.920
James E Mister: Large companies, including hyperscalers, have courted, you know, these sorts of…
403
00:55:31.350 --> 00:55:37.940
James E Mister: I don't want to say developing markets, because they're, you know, middle-income markets in developing
markets, and that's something that I think is very, very…
404
00:55:38.770 --> 00:55:51.860
James E Mister: I don't know, interesting. I thought maybe you don't want to comment on it, especially against the backdrop
of sovereign AI, which is a huge topic right now, and countries wanting to enable their own, sort of, LLMs, but also to power
it themselves, but…
405
00:55:52.160 --> 00:56:01.379
James E Mister: the wherewithal is not there in a completely renewable way, and so that's kind of my concern, and I thought
I just wanted to impart that, or ask for a question, comment, from you.

406
00:56:02.010 --> 00:56:09.730
Haluk Demirkan: Yeah, I mean, I can start, but anybody else, would you like to start? I wanted to make sure, I don't want to
be the only one talking, but if anyone has any comment, or I can…
407
00:56:13.410 --> 00:56:18.120
Haluk Demirkan: Alright, so, I mean, my experiences, I think…
408
00:56:18.520 --> 00:56:24.110
Haluk Demirkan: In context of energy, sustainable energy, a lot of companies have some type of
409
00:56:24.550 --> 00:56:32.220
Haluk Demirkan: innovation going on, just specific to energy, sustainable energy context, because I think…
410
00:56:32.580 --> 00:56:44.660
Haluk Demirkan: I mean, I personally haven't really had the experience on projects about that sovereign, like, how to utilize
Gen AI to generate energy, and also
411
00:56:44.960 --> 00:56:49.160
Haluk Demirkan: Gen AI solution in the country, but,
412
00:56:49.560 --> 00:56:58.620
Haluk Demirkan: I 100% agree. There's an ethical dimension, also a sustainable dimension. There are countries in the world
right now, they don't have a power.
413
00:56:58.780 --> 00:57:05.969

Haluk Demirkan: And then on the other side of the world, we are talking about using a GenAI solution, gonna use
414
00:57:06.140 --> 00:57:12.920
Haluk Demirkan: who knows how much energy to power even these solutions, right? It's just a…
415
00:57:13.610 --> 00:57:17.219
James E Mister: I honestly don't have a perfect answer, to be honest. Yeah.
416
00:57:17.380 --> 00:57:34.389
James E Mister: An additional thing I wanted to say is, although I just brought up a lot of middle-income-country issues,
even in the first world, what we've seen is that the power
purchase agreements that hyperscalers and other large companies bringing data centers online are negotiating
with the…
417
00:57:34.390 --> 00:57:46.409
James E Mister: with, you know, they make their contracts every few months with these large energy providers, and
essentially, customers in certain markets, I think there was a report that was just released in Ohio and in other states.
418
00:57:46.560 --> 00:57:53.149
James E Mister: Their kilowatt hour prices have gone up, because the customers have to pay the difference.
419
00:57:53.240 --> 00:58:12.880
James E Mister: the energy providers, the utilities, are giving a rebate in the PPAs to these large hyperscalers so that they can
close a big multi-billion-dollar, multi-year contract. But that gap, that funding gap, is then just paid for because they 2x or 1.5x
the cost per kilowatt-hour for their normal domestic
420
00:58:12.880 --> 00:58:26.239

James E Mister: utility customers, and so even in the first world, we stand to lose if there's not a lot of, sort of, regulation
brought around. I mean, we can't rely on them building ethical models. Ethical models is, I guess, what we're talking about
here, but I'm talking about ethics and
421
00:58:26.490 --> 00:58:33.110
James E Mister: actual deployment of the models and in the whole energy, backdrop.
422
00:58:33.110 --> 00:58:45.039
Haluk Demirkan: And I… yeah, and I honestly think… I think we are so behind on this ethic and sustainable concept overall in
the technology, because sometimes I wonder where all the waste is going.
423
00:58:45.720 --> 00:58:48.209
Haluk Demirkan: from technology waste. I don't know.
424
00:58:48.610 --> 00:58:56.350
Haluk Demirkan: Right? Like, I think overall, we, as a society, we are so behind in those… some of those topics.
425
00:58:56.650 --> 00:59:01.329
Haluk Demirkan: I mean, because I think we have a really big problem.
426
00:59:01.440 --> 00:59:06.750
Haluk Demirkan: Not just even specifically Gen AI, it's just overall information technology world.
427
00:59:07.180 --> 00:59:09.250
Haluk Demirkan: Right? I don't see waste of…

428
00:59:09.480 --> 00:59:15.549
Haluk Demirkan: you know, that technology in front of my street, but it is going somewhere, but where it is going, I don't
know.
429
00:59:15.870 --> 00:59:19.350
James E Mister: Yeah, I just posted the New York Times article that I…
430
00:59:19.350 --> 00:59:19.970
Haluk Demirkan: Thank you.
431
00:59:20.350 --> 00:59:20.940
James E Mister: Yeah.
432
00:59:21.330 --> 00:59:24.959
James E Mister: I mean, where people in these states are now also then seeing
433
00:59:25.170 --> 00:59:33.809
James E Mister: you know, 1.5x, 2x kilowatt-hour prices because of the utilities have already guaranteed PPAs to large
hyperscalers and data centers.
434
00:59:33.810 --> 00:59:47.610
James E Mister: And it's just something that… it really brought it home to me that this energy issue extends beyond ensuring
renewable, ensuring that models are having these parameters that you spoke about. I think that's crazy interesting. So,
sorry to speak so much
435

00:59:47.610 --> 00:59:49.670
James E Mister: And it's gonna go up also, you're right.
436
00:59:49.670 --> 00:59:50.260
Haluk Demirkan: Yeah.
437
00:59:50.980 --> 00:59:55.589
Jim Spohrer: Thank you. Thank you, James. We're actually We're at the.
438
00:59:55.590 --> 00:59:56.380
Haluk Demirkan: You know, like…
439
00:59:56.950 --> 01:00:12.950
Jim Spohrer: But we can run over just a little bit, I suppose, and if people have things, like, James, thanks for sharing that
New York Times thing. If people have things to share, please put them in the chat, and Steve Kwan said, let's make sure we
get those things into the.
440
01:00:12.950 --> 01:00:14.349
Haluk Demirkan: Into the notes, yeah.
441
01:00:14.430 --> 01:00:26.400
Jim Spohrer: So, we'll do that. But, Brad, can you be brief, and then we'll ask Christine to wrap us up. And Christine, okay, for
a few minutes? Okay. Yep. If you can be brief, we'd appreciate it.
442
01:00:26.400 --> 01:00:37.860

Brad Kewalramani: Absolutely. Awesome. Thank you again, Haluk, for such an informative session and great discussion here.
I guess I have more of a comment than a question, if that's okay. Of course.
443
01:00:37.870 --> 01:00:49.060
Brad Kewalramani: I'm observing a correlation between AI and the world of work and projects in other spaces, right? So…
444
01:00:49.310 --> 01:00:59.180
Brad Kewalramani: If you think about it, when we progress into a new way of doing things, better, faster, quicker.
445
01:00:59.770 --> 01:01:02.119
Brad Kewalramani: If it's completely brand new.
446
01:01:02.360 --> 01:01:17.509
Brad Kewalramani: sometimes we don't have perceptiveness of all the risks that are involved in what it is that we're doing.
And so, when we're doing something new, based on subject matter expertise, perspectives available at the table.
447
01:01:17.510 --> 01:01:27.210
Brad Kewalramani: There might be certain risks that we can predict, and there are certain risks that we can say, yeah, we
recognize these as risks, and we should manage against that, because they're very clear. But then.
448
01:01:27.250 --> 01:01:35.579
Brad Kewalramani: if I look at the world of work, and for example, if you look at occupational health and safety, which is the
world that I live in, you see that
449
01:01:35.950 --> 01:01:48.249
Brad Kewalramani: the regulation comes in after the, oh my god moment, right? Yeah. So, we didn't realize that that was an
actual risk. So, that's kind of what I'm seeing, but…

450
01:01:48.370 --> 01:01:53.100
Brad Kewalramani: when I think about AI, What's very clear is…
451
01:01:53.330 --> 01:02:01.350
Brad Kewalramani: the use of narrow AI, like, using AI to solve specific problems, and then focusing on narrow AI, and then
452
01:02:01.480 --> 01:02:06.749
Brad Kewalramani: Overall, Gen AI might be a very interesting thing. I'm sorry, I'm trying to talk a little bit, because… No,
that's fine.
453
01:02:06.940 --> 01:02:07.740
Haluk Demirkan: That's okay.
454
01:02:08.210 --> 01:02:12.599
Brad Kewalramani: But, that's a little bit what I'm seeing as a correlation. I agree.
455
01:02:12.600 --> 01:02:18.720
Haluk Demirkan: kind of learn from mistakes, and then those become part of the regulations, usually, yeah. Right. All right.
456
01:02:19.730 --> 01:02:22.170
Haluk Demirkan: Christine, would you like to wrap it up? Thanks.
457
01:02:22.170 --> 01:02:26.399

Christine Ouyang: Yeah, thank you so much, Haluk, for sharing your insights.
458
01:02:26.610 --> 01:02:46.010
Christine Ouyang: on the Responsible Generative AI Framework, and thank you all for joining us. I counted, I think there were 22
of us at one point. So we hope today's conversation serves as food for thought and inspires you to think critically about
how we can shape generative AI for the greater good.
459
01:02:46.130 --> 01:02:52.579
Christine Ouyang: So that's it for today. Thank you again, and see you, next time.
460
01:02:52.730 --> 01:02:57.509
Haluk Demirkan: Next time. Thank you, Christine. Thanks, Jim, for organizing. Appreciate it. Have a great day. Bye.
461
01:02:58.080 --> 01:02:58.670
Christine Ouyang: Bye.