Sharing a compilation of my recent talks on #AgenticAI
Covers the following topics:
- Introduction to #AIAgents
- Agentic AI #Lifecycle & Reference #Architecture
- Agents / #MCP Tools Discovery & #Marketplace
- #Causal Reasoning for AI Agents
- Personalizing #UX for Agentic AI
- Agent #Observability & #Memory Management
- Agentic #RAGs, #ReinforcementLearning Agents
- #ResponsibleAI Agents: Privacy, #Guardrails, Human-in-the-Loop
- Agentification #CaseStudies: Customer Service Desk, Manufacturing, #DataEngineering
A Comprehensive Guide to Agentic AI
Debmalya Biswas, PhD
Introduction to AI Agents
Agentic AI Lifecycle & Reference Architecture
Agents / MCP Tools Discovery & Marketplace
Causal Reasoning for AI Agents
Personalizing UX for Agentic AI
Agent Observability & Memory Management
Agentic RAGs, Reinforcement Learning Agents
Responsible AI Agents: Privacy, Guardrails, Human-in-the-Loop
Agentification Case-Studies: Customer Service Desk, Manufacturing, Data Engineering
Introduction to Agentic AI
Agentic AI Evolution
Agentic AI capabilities – Task Decomposition
Agentic AI capabilities – Memory Management
Agentic AI capabilities – Reflect & Adapt
Agentic AI Use-case: Email Marketing Campaign
Agentic AI Lifecycle & Reference Architecture
Generative AI Lifecycle
Agentic AI Lifecycle
Gen AI Architecture Patterns – APIs & Embedded Gen AI
Black-box LLM APIs: This is the classic ChatGPT example, where we have black-box access to an LLM API/UI. Prompts are the primary interaction mechanism for such scenarios.
While Enterprise LLM Apps have the potential to accelerate LLM adoption by providing an enterprise-ready solution, the same caution needs to be exercised as before using any 3rd-party ML model — validate LLM/training data ownership, IP, and liability clauses.
* D. Biswas. Generative AI – LLMOps Architecture Patterns. Data Driven Investor, 2023 (link)
Gen AI Architecture Patterns – Fine-tuning
LLMs are generic in nature. To realize the full potential of LLMs for enterprises, they need to be contextualized with enterprise knowledge captured in documents, wikis, business processes, etc.
This is achieved by fine-tuning an LLM with enterprise knowledge / embeddings to develop a context-specific LLM.
Gen AI Architecture Patterns – Retrieval-Augmented Generation (RAG)
Fine-tuning is a computationally intensive process. RAG provides a viable alternative by supplying additional context with the prompts — grounding the retrieval / responses in the given context.
Given a user query, a RAG pipeline consists of the three phases below (a minimal sketch follows):
- Retrieve: transform the user query into an embedding and compare its similarity score with other content.
- Augment: enrich the prompt with search results / context retrieved from a vector store that is kept current and in sync with the underlying document repository.
- Generate: produce contextualized responses by making the retrieved chunks part of the prompt template, giving the LLM additional context on how to answer the query.
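A minimal sketch of the three phases, assuming a toy hash-based embed() stand-in for a real embedding model and a plain Python list in place of the vector store:

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    # Toy stand-in for a real embedding model (hypothetical); hashes words
    # into a fixed-size vector purely so the example runs end-to-end.
    v = np.zeros(64)
    for w in text.lower().split():
        v[hash(w) % 64] += 1.0
    n = np.linalg.norm(v)
    return v / n if n else v

def retrieve(query: str, store: list[tuple[str, np.ndarray]], k: int = 2) -> list[str]:
    # Retrieve: compare the query embedding's similarity score with stored chunks.
    q = embed(query)
    ranked = sorted(store, key=lambda item: float(np.dot(q, item[1])), reverse=True)
    return [chunk for chunk, _ in ranked[:k]]

def augment(query: str, chunks: list[str]) -> str:
    # Augment: make the retrieved chunks part of the prompt template.
    return "Context:\n" + "\n".join(chunks) + f"\n\nQuestion: {query}"

def generate(prompt: str) -> str:
    # Generate: placeholder for the actual LLM call that produces the
    # contextualized response.
    return f"[LLM response grounded in]\n{prompt}"

store = [(doc, embed(doc)) for doc in
         ["Refunds are processed within 5 business days.",
          "Premium users get 24/7 phone support."]]
print(generate(augment("How long do refunds take?",
                       retrieve("How long do refunds take?", store))))
```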
Agentic AI Platform Reference Architecture
* D. Biswas. Stateful Monitoring and Responsible Deployment of AI Agents. 17th International Conference on Agents and Artificial Intelligence (ICAART), 2025 (link)
We envision a future where enterprises will be able to develop new Enterprise AI Apps by orchestrating / composing multiple existing AI Agents.
AI Agents Marketplace & Discovery for Agents / MCP Tools
(Complex) Agentic AI Task Decomposition
A high-level approach to solving complex tasks:
- decomposition of the given complex task into (a hierarchy or workflow of) simple tasks, followed by
- composition of agents able to execute the simpler tasks.
This can be achieved in a dynamic or static manner (see the sketch below):
- Dynamic: given a complex user task, the system comes up with a plan to fulfil the request depending on the capabilities of the agents available at run-time.
- Static: given a set of agents, composite agents are defined manually at design-time by combining their capabilities.
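A minimal sketch of the two modes, with hypothetical agent names and capabilities; a real planner would be driven by an LLM and the marketplace registry:

```python
from dataclasses import dataclass

@dataclass
class Agent:
    name: str
    capabilities: set[str]

# Hypothetical registry of available agents and their capabilities.
REGISTRY = [
    Agent("flight_agent", {"book_flight", "cancel_flight"}),
    Agent("hotel_agent", {"book_hotel"}),
]

def plan_dynamic(subtasks: list[str]) -> list[tuple[str, str]]:
    # Dynamic: bind each decomposed subtask to a capable agent at run-time,
    # depending on which agents are currently registered.
    plan = []
    for task in subtasks:
        agent = next((a for a in REGISTRY if task in a.capabilities), None)
        if agent is None:
            raise LookupError(f"no registered agent can execute {task!r}")
        plan.append((task, agent.name))
    return plan

# Static: the composite is fixed manually at design-time.
STATIC_TRIP_COMPOSITE = [("book_flight", "flight_agent"), ("book_hotel", "hotel_agent")]

print(plan_dynamic(["book_flight", "book_hotel"]))  # matches the static composite here
```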
Agent Marketplace & Discovery of AI Agents
Agent decomposition and planning (be it static or dynamic) requires a discovery module to identify the agent(s) capable of executing a given task.
This implies that there exists a marketplace with a registry of agents, with a well-defined description of the agent capabilities and constraints.
Limitations of LLMs as execution engines for Agentic AI
Current Agentic AI platforms leverage LLMs for both task decomposition and execution of the identified tasks / agents.
- The overall execution occurs within the context of a single LLM, or each task can be routed to a different LLM.
- In short, each task execution corresponds to an LLM invocation at run-time.
- Unfortunately, this approach is neither scalable nor practical for complex tasks.
LLMs cannot be expected to come up with the most efficient (agent) execution approach for a given task at run-time every time, esp. for tasks requiring integration with enterprise systems.
Agentic AI platforms need to learn over multiple execution runs (meta-learning), involving a combination of user prompts, agents, and their relevant skills (capabilities).
Non-determinism in Agentic AI Systems
There are two non-deterministic operators in the execution plan: 'Check Credit' and 'Delivery Mode'. The choice 'Delivery Mode' indicates that the user can either pick up the order directly from the store or have it shipped to their address. Given this, shipping is a non-deterministic choice and may not be invoked during the actual execution.
L2R for Agent Discovery based on Natural Language Descriptions
A learning-to-rank (L2R) algorithm to select the top-k agents given a user prompt (sketched below):
- We first convert agent (class) descriptions to semantic embeddings offline and use them to train the L2R model.
- The user prompts and the agents use the same generic embedding model.
- The inference results, including the agent description embeddings used during training and inferencing, are cached to enable the meta-learning process for the L2R algorithm.
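A minimal sketch of the discovery flow, using plain cosine scoring over a cached set of hypothetical agent descriptions in place of a trained L2R model:

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    # Toy stand-in for the shared generic embedding model (hypothetical).
    v = np.zeros(64)
    for w in text.lower().split():
        v[hash(w) % 64] += 1.0
    n = np.linalg.norm(v)
    return v / n if n else v

# Offline: agent (class) descriptions are embedded once and cached.
AGENT_DESCRIPTIONS = {
    "invoice_agent": "extracts and validates invoice fields from documents",
    "search_agent": "performs web search and summarizes results",
}
DESCRIPTION_CACHE = {name: embed(desc) for name, desc in AGENT_DESCRIPTIONS.items()}

def top_k_agents(prompt: str, k: int = 1) -> list[tuple[str, float]]:
    # Online: score the prompt against the cached agent embeddings and return
    # the top-k. A trained L2R model would replace this plain cosine scorer.
    q = embed(prompt)
    scores = {name: float(np.dot(q, v)) for name, v in DESCRIPTION_CACHE.items()}
    return sorted(scores.items(), key=lambda s: s[1], reverse=True)[:k]

print(top_k_agents("search the web and summarize recent news about chip exports"))
```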
Agent Discovery based on a Constraints Model
The constraints are specified as logic predicates in the service description of the corresponding service published by its agent.
An agent P provides a set of services {S1, S2, ..., Sn}. Each service S in turn has a set of associated constraints {C1, C2, ..., Cm}. For each constraint C of a service S, the constraint values may be
- a single value (e.g., the price of a service),
- a list of values (e.g., the list of destinations served by an airline), or
- a range of values (e.g., minimum, maximum); a matching sketch follows below.
Capability: connects City A to B
Constraint: flies only on certain days a week; needs payment by Credit Card
* D. Biswas. Constraints Enabled Autonomous Agent Marketplace: Discovery and Matchmaking. 16th International Conference on Agents and Artificial Intelligence (ICAART), 2024 (link)
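A minimal matching sketch, assuming constraints are published as single values, lists, or (min, max) ranges as above; the service and constraint names are illustrative:

```python
from dataclasses import dataclass

@dataclass
class Service:
    name: str
    constraints: dict[str, object]  # value, list of values, or (min, max) tuple

def satisfies(service: Service, request: dict[str, object]) -> bool:
    # Check each requested value against the service's published constraints.
    for key, wanted in request.items():
        c = service.constraints.get(key)
        if c is None:
            continue                        # unconstrained attribute
        if isinstance(c, tuple):            # range of values: (min, max)
            lo, hi = c
            if not (lo <= wanted <= hi):
                return False
        elif isinstance(c, list):           # list of admissible values
            if wanted not in c:
                return False
        elif wanted != c:                   # single value
            return False
    return True

flight = Service("fly_A_to_B", {
    "days": ["Mon", "Wed", "Fri"],          # flies only on certain days a week
    "payment": "credit_card",               # needs payment by credit card
    "price": (50, 400),                     # illustrative fare range
})
print(satisfies(flight, {"days": "Wed", "payment": "credit_card", "price": 120}))  # True
```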
MCP Tools Discovery Challenges
The availability of agentic tools is skyrocketing, but this ungoverned rise is actually leading to more confusion and less productivity.
Tool discovery today relies on natural language descriptions of tools. Given this, how do we expect an algorithm (or a human, for that matter) to select the right tool for, say, a web search among tools described as ambiguously as "search_web", "web_search", "ai-web-search", "batch-web-search", and "answer_query_websearch"?
* D. Biswas. Agentic AI MCP Tools Governance. Data Science Collective, 2025 (link)
MCP Tools Governance
- Consider the tools, prompts, resources, etc. capabilities published by an MCP server, and not only the tool descriptions, during discovery.
- Tools are generally invoked as part of an agent-to-agent (A2A) interaction, where the output of a tool (invoked by one agent) goes as input context to the next agent. Unfortunately, the MCP specification today does not provide any provision to specify the count of tokens produced by a tool. Until this happens, the recommendation is for agents to implement their own context management strategy.
- Another limitation of the MCP specification today is the lack of a namespace construct. The recommendation here is to have hierarchical namespaces of thematically related tools, e.g., GitHub MCP server's dynamic tool discovery (a client-side sketch follows).
* D. Biswas. Agentic AI MCP Tools Governance. Data Science Collective, 2025 (link)
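Since the MCP specification lacks namespaces, the sketch below shows one possible client-side workaround: a registry that qualifies tool names with hierarchical, thematically related namespaces (all names are illustrative):

```python
class NamespacedToolRegistry:
    """Client-side workaround: prefix tool names with a hierarchical namespace
    so thematically related tools stay disambiguated across MCP servers."""

    def __init__(self):
        self._tools = {}

    def register(self, namespace: str, tool_name: str, handler):
        # e.g., namespace "search/web" + tool "search_web" -> "search/web/search_web"
        self._tools[f"{namespace}/{tool_name}"] = handler

    def resolve(self, qualified_name: str):
        return self._tools[qualified_name]

    def list_namespace(self, prefix: str) -> list[str]:
        # Enumerate all tools under a thematic namespace.
        return [n for n in self._tools if n.startswith(prefix + "/")]

registry = NamespacedToolRegistry()
registry.register("search/web", "search_web", lambda q: f"results for {q}")
registry.register("search/web", "batch_web_search", lambda qs: [f"results for {q}" for q in qs])
print(registry.list_namespace("search/web"))
```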
Personalizing UX for Agentic AI
AI Agent Personalization
Analogous to the fine-tuning of large language models (LLMs) into domain-specific LLMs / SLMs, we argue that personalization / fine-tuning of (marketplace) AI agents will be needed with respect to the enterprise-specific context (of applicable user personas and use-cases) to drive their enterprise adoption.
Key benefits of AI agent personalization include:
- Personalized interaction: The AI agent adapts its language, tone, and complexity based on user preferences and interaction history. This ensures that the conversation is more aligned with the user's expectations and communication style.
- Use-case context: The AI agent is aware of the underlying enterprise use-case processes, so that it can prioritize or highlight process features, relevant pieces of content, etc. — optimizing the interaction to achieve the use-case goal more efficiently.
- Proactive assistance: The AI agent anticipates the needs of different users and offers proactive suggestions, resources, or reminders tailored to their specific profiles or tasks.
AI Agent Personalization Architecture
In this talk, we highlight that UI/UX for AI agents is critical as the last mile to enterprise adoption.
User Persona based Agent Personalization
Enterprise AI agent personalization remains challenging due to scale, performance, and privacy challenges.
User persona-based agent personalization segments the end-users of a service into a manageable set of user categories, which represent the demographics and preferences of the majority of users.
The fine-tuning process consists of first parameterizing (aggregated) user data and conversation history and storing it as memory in the LLM via adapters, followed by fine-tuning the LLM for personalized response generation.
The agent–user persona router helps in performing user segmentation (scoring) and routing the tasks / prompts to the most relevant agent persona (routing sketched below).
* D. Biswas. Personalizing UX for Agentic AI. AI Advances, 2024 (link)
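A minimal routing sketch, with hypothetical personas represented as embedding centroids and a toy embed() stand-in; the real router would score against the fine-tuned persona adapters:

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    # Toy embedding stand-in (hypothetical); a real encoder is out of scope here.
    v = np.zeros(64)
    for w in text.lower().split():
        v[hash(w) % 64] += 1.0
    n = np.linalg.norm(v)
    return v / n if n else v

# Hypothetical personas: centroids of aggregated user-segment embeddings.
PERSONA_CENTROIDS = {
    "field_technician": embed("short imperative answers machine diagnostics sensors"),
    "finance_analyst": embed("detailed tabular reports cost breakdowns forecasts"),
}

def route_to_persona(user_profile: str) -> str:
    # Score the user against each persona centroid and route the prompt
    # to the agent persona with the highest similarity.
    u = embed(user_profile)
    return max(PERSONA_CENTROIDS, key=lambda p: float(np.dot(u, PERSONA_CENTROIDS[p])))

print(route_to_persona("needs cost breakdowns and quarterly forecasts"))
```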
User Data Embeddings
Fine-tuning AI agents on raw user data is often too complex, even at the (aggregated) persona level. This is primarily due to the following reasons:
- Agent interaction data usually spans multiple journeys with sparse data points, various interaction types (multimodal), and potential noise or inconsistencies, with incomplete queries / responses.
- Moreover, effective personalization often requires a deep understanding of the latent intent / sentiment behind user actions, which can pose difficulties for generic (pre-trained) LLMs.
- Finally, fine-tuning is computationally intensive. Agent–user interaction data can be lengthy, and processing and modeling such long sequences (e.g., multiple years' worth of interaction history) with LLMs can be practically infeasible.
User Data Embeddings (USER-LLM)
USER-LLM distills compressed representations from diverse and noisy user interactions, effectively capturing the essence of a user's behavioral patterns and preferences across various interaction modalities.
* L. Liu & L. Ning. USER-LLM: Efficient LLM Contextualization with User Embeddings. Google Research, 2024 (link)
Reinforcement Learning based Personalization
We show how LLM-generated responses can be personalized based on a Reinforcement Learning (RL) enabled Recommendation Engine (RE).
At a high level, the RL-based LLM response / action RE works as follows (see the sketch below):
- The (current) user sentiment and agent interaction history are combined to quantify the user sentiment curve and discount any sudden changes in user sentiment,
- leading to the aggregate reward value corresponding to the last LLM response provided to the user.
- This reward value is then provided as feedback to the RL agent — to choose the next optimal LLM-generated response / action to be provided to the user.
* D. Biswas. Delayed Rewards in the Context of Reinforcement Learning based Recommender Systems. AAI4H@ECAI 2020: 49-53 (link)
* E. Ricciardelli, D. Biswas. Self-improving Chatbots based on Reinforcement Learning. RLDM 2019 (link)
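One plausible way to discount sudden sentiment swings is exponential smoothing over the sentiment curve; a sketch (not necessarily the exact formulation used here):

```python
def aggregate_reward(sentiments: list[float], alpha: float = 0.3) -> float:
    """Exponentially smooth the user-sentiment curve so a sudden swing is
    discounted, then use the smoothed latest value as the aggregate reward
    for the last LLM response."""
    smoothed = sentiments[0]
    for s in sentiments[1:]:
        smoothed = alpha * s + (1 - alpha) * smoothed
    return smoothed

# e.g., sentiment scores in [-1, 1] across the interaction history
history = [0.2, 0.4, 0.3, -0.9]   # sudden negative spike at the end
print(aggregate_reward(history))  # spike is discounted; reward fed back to the RL agent
```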
Causal Reasoning for Agentic AI
Reasoning challenges of current LLMs
Correlation doesn't imply causation. No matter how big LLMs get, they still only capture statistical correlations between features or parameters of the underlying training data and the corresponding prediction.
For AI to truly reason and problem-solve, it must algorithmically understand cause-and-effect relationships. To achieve this, we propose to add Causal AI as a key ingredient, together with knowledge graphs, in the training / fine-tuning cookbook of LLMs / LRMs.
* D. Biswas. Causal Reasoning for AI Agents, 2025 (link)
Causal AI
Causal AI Reasoning Capabilities:
- Root cause: detect and rank causal drivers of an outcome
- What-if scenarios (& counterfactuals): determine the consequences of alternate actions with respect to the current (factual) state
- Explainability: the rationale for why certain actions are better than others
- Confounders: identify irrelevant, misleading, or hidden influences
- Pathways: understand interrelated actions and the course of action to achieve outcomes
Causal Pathways
Causal AI is enabled by inferring causal pathways within neural networks, combining traditional neural network architectures with causal reasoning techniques. This implies modelling cause-and-effect relationships in the training dataset to understand the relationships among features, how much they influence each other, and the prediction.
For example, the figure shows an inferred causal model to evaluate the credit risk level of loan applications:
- Red arrows indicate an inverse relationship of a feature with creditworthiness,
- while green arrows correspond to positive causal drivers.
- Further, arrow thickness indicates the strength of the causal relationship.
Reasoning with Introspection
Standard ReAct-based agents are effective for web-retrieval types of tasks. However, they have been shown to be inefficient for complex scenarios, e.g., industrial IoT environments:
- gaps in domain-specific reasoning, e.g., linking chiller unit tonnage with energy efficiency — a vital linkage in industrial IoT environments;
- inconsistent reasoning, e.g., in date-offset reasoning ("last day / week / month");
- premature task termination, redundant tool calls, and multi-step composition failures.
To overcome these challenges, we enhance the agent(s) with an iterative ReAct + introspection strategy. The distillation module acts as a pre-processor, decomposing complex queries into structured semantic units: variables, constraints, and goals. (ReAct remains the underlying orchestration framework.)
Agent Observability & Memory Management
Observability Challenges for Agentic AI
Observability for AI Agents is challenging:
- No global observer: Due to their distributed nature, we cannot assume the existence of an entity having visibility over the entire execution. In fact, due to their privacy and autonomy requirements, even the composite agent may not have visibility over the internal processing of its component agents.
- Parallelism: AI agents allow parallel composition of processes.
- Dynamic configuration: The agents are selected incrementally as the execution progresses (dynamic binding). Thus, the "components" of the distributed system may not be known in advance.
Stateful execution for AI Agents
AgentOps monitoring is critical given the complexity and long-running nature of AI agents. We define observability as the ability to find out where in the process the execution is and whether any unanticipated glitches have appeared. The following query types apply (see the sketch below):
- Local queries: queries which can be answered based on the local state information of an agent.
- Composite queries: queries expressed over the states of several agents.
- Historical queries: queries related to the execution history of the composition.
- Relationship queries: queries based on the relationship between states.
* D. Biswas. Stateful Monitoring and Responsible Deployment of AI Agents. 17th International Conference on Agents and Artificial Intelligence (ICAART), 2025 (link)
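A conceptual sketch of the first three query types over a toy state store; in practice there may be no global observer, so each agent would answer only the queries its privacy / autonomy constraints allow:

```python
from dataclasses import dataclass, field

@dataclass
class AgentState:
    agent: str
    step: str
    history: list[str] = field(default_factory=list)

# Illustrative monitoring store (hypothetical agents and states).
STATES = {
    "planner": AgentState("planner", "done", ["plan_created"]),
    "executor": AgentState("executor", "awaiting_tool", ["tool_called"]),
}

def local_query(agent: str) -> str:
    # Local query: answered from a single agent's local state.
    return STATES[agent].step

def composite_query(agents: list[str]) -> dict[str, str]:
    # Composite query: expressed over the states of several agents.
    return {a: STATES[a].step for a in agents}

def historical_query(agent: str) -> list[str]:
    # Historical query: the execution history of the composition.
    return STATES[agent].history

print(composite_query(["planner", "executor"]))
```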
Conversational Memory Management using Vector DBs
Vector DBs are currently the primary medium to store and retrieve data (memory) for conversational agents (a minimal stand-in is sketched below).
- This involves selecting an encoder model that performs offline data encoding as a separate process, converting various forms of raw data, such as text, audio, and video, into vectors.
- During a chat, the conversational agent has the option of querying the long-term memory system by encoding the query and searching for relevant information within the Vector DB. The retrieved information is then used to answer the query based on the stored information.
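A minimal in-memory stand-in for the Vector DB memory, with a toy encoder in place of a real offline encoding model:

```python
import numpy as np

def encode(text: str) -> np.ndarray:
    # Toy stand-in for the offline encoder model (hypothetical).
    v = np.zeros(64)
    for w in text.lower().split():
        v[hash(w) % 64] += 1.0
    n = np.linalg.norm(v)
    return v / n if n else v

class LongTermMemory:
    """Minimal in-memory stand-in for a Vector DB used as agent memory."""

    def __init__(self):
        self._items: list[tuple[str, np.ndarray]] = []

    def store(self, text: str):
        # Offline encoding: raw data is converted into vectors and stored.
        self._items.append((text, encode(text)))

    def query(self, question: str, k: int = 2) -> list[str]:
        # Encode the query and search for the most similar stored items.
        q = encode(question)
        ranked = sorted(self._items, key=lambda it: float(np.dot(q, it[1])), reverse=True)
        return [text for text, _ in ranked[:k]]

memory = LongTermMemory()
memory.store("user prefers metric units")
memory.store("last ticket was about VPN access")
print(memory.query("what units does the user like?"))
```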
Human Memory Understanding
We need to consider the following memory types:
- Semantic memory: general knowledge with facts, concepts, meanings, etc.
- Episodic memory: personal memory with respect to specific events and situations from the past.
- Procedural memory: motor skills like driving a car, with the corresponding procedures to achieve the task.
- Emotional memory: feelings associated with experiences.
Agentic Memory Management
The memory router always routes, by default, to the long-term memory (LTM) module to see if an existing pattern is there to respond to the given user prompt. If yes, it retrieves and immediately responds, personalizing it as needed.
If the LTM fails, the memory router routes the prompt to the short-term memory (STM) module, which then uses its retrieval processes (APIs, etc.) to get the relevant context into the STM (working memory) — leveraging applicable data services (routing sketched below).
* D. Biswas. Long-term Memory for AI Agents. AI Advances, 2024 (link)
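A minimal sketch of the routing logic, with stubbed, hypothetical LTM / STM retrieval functions:

```python
def retrieve_from_ltm(prompt: str) -> str | None:
    # Hypothetical LTM lookup: return a stored response pattern, or None on a miss.
    patterns = {"reset password": "Walk the user through the self-service reset."}
    return next((v for k, v in patterns.items() if k in prompt.lower()), None)

def retrieve_into_stm(prompt: str) -> str:
    # Hypothetical STM path: pull fresh context via retrieval APIs / data services.
    return f"[context fetched into working memory for: {prompt}]"

def memory_router(prompt: str) -> str:
    # Route to LTM first by default; only on a miss fall back to STM retrieval.
    pattern = retrieve_from_ltm(prompt)
    if pattern is not None:
        return pattern          # respond immediately, personalizing as needed
    return retrieve_into_stm(prompt)

print(memory_router("I need to reset password"))
print(memory_router("summarize last month's incidents"))
```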
Agentic Memory Management (2)
The STM–LTM transformer module is always active: it constantly takes the retrieved context, extracts recipes out of it (e.g., refer to the concepts of teachable agents and recipes in AutoGen), and stores them in a semantic layer (implemented via a Vector DB).
At the same time, it also collects other associated properties (e.g., number of tokens, cost of executing the response, state of the system, etc.) and
- creates an episode, which is then stored in a knowledge graph,
- with the underlying procedure stored in a finite state machine (FSM).
* D. Biswas. Long-term Memory for AI Agents. AI Advances, 2024 (link)
Agentic AI Scenarios:
- Agentic RAGs
- Reinforcement Learning Agents
Agentic RAGs: extending RAGs to SQL Databases
An Agentic AI framework to build RAG pipelines that work seamlessly over both structured and unstructured data stored in Snowflake.
The SQL & Document query agents leverage the respective Snowflake Cortex Analyst and Search components detailed earlier to query the underlying SQL and document repositories. Finally, to complete the RAG pipeline, the retrieved data is added to the original prompt — leading to the generation of a contextualized response (a simplified sketch follows).
* D. Biswas. Agentic RAGs: extending RAGs to SQL Databases. AI Advances, 2024 (link)
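A simplified sketch of the pipeline, where query_sql() / query_documents() are hypothetical stand-ins for the agents backed by Cortex Analyst and Search, and the routing heuristic is purely illustrative:

```python
def query_sql(question: str) -> str:
    # Stand-in for the SQL query agent over the structured repository.
    return "[rows returned by the SQL query agent]"

def query_documents(question: str) -> str:
    # Stand-in for the document query agent over the unstructured repository.
    return "[chunks returned by the document query agent]"

def agentic_rag(question: str, llm=lambda p: f"[LLM answer for: {p[:40]}...]") -> str:
    # Naive keyword routing for illustration; a real router could itself be an LLM call.
    structured = any(w in question.lower() for w in ("total", "count", "average"))
    retrieved = query_sql(question) if structured else query_documents(question)
    # Complete the RAG pipeline: add the retrieved data to the original prompt.
    return llm(f"Context: {retrieved}\nQuestion: {question}")

print(agentic_rag("What is the total revenue by region for Q3?"))
```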
Reinforcement Learning Agents
When we talk about AI agents today, we mostly talk about LLM agents, which loosely translates to invoking (prompting) an LLM to perform natural language processing (NLP) tasks.
Some agentic tasks might be better suited to other ML techniques, e.g., Reinforcement Learning (RL), predictive analytics, etc. — depending on the use-case objectives.
* D. Biswas. LLM based fine-tuning of Reinforcement Learning Agents. AI Advances, 2024 (link)
LLM based fine-tuning of Reinforcement Learning Agents
We focus on RL agents, and show how LLMs can be used to fine-tune the RL agent reward / policy functions (one plausible shaping scheme is sketched below).
* D. Biswas. LLM based fine-tuning of Reinforcement Learning Agents. AI Advances, 2024 (link)
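One plausible shaping scheme (a sketch, not necessarily the paper's exact method): blend the environment reward with an LLM judge's score of the last state-action pair:

```python
def llm_score(state: str, action: str) -> float:
    # Hypothetical LLM judge: a real call would prompt the LLM to rate how well
    # the action serves the goal in this state and parse a score in [0, 1].
    return 0.8  # stub value for illustration

def shaped_reward(env_reward: float, state: str, action: str, beta: float = 0.5) -> float:
    # Shape the environment reward with the LLM's assessment of the action.
    return env_reward + beta * llm_score(state, action)

print(shaped_reward(1.0, "zone temp 26C, setpoint 22C", "increase chiller output"))
```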
Reinforcement Learning Agents applied to HVAC Optimization
We show a concrete example of applying the fine-tuning methodology to a real-life industrial control system — designing the RL based controller for HVAC optimization in a building setting.
* D. Biswas. Reinforcement Learning based Energy Optimization in Factories, in proc. of the 11th ACM Conference on Future Energy Systems (e-Energy), 2020 (link)
Responsible AI Agents:
- Privacy
- Evaluation
- Guardrails
- Human-in-the-Loop
Responsible AI Agents
* D. Biswas. Stateful & Responsible AI Agents. ICAART 2025 (link)
Data Quality Issues with respect to LLMs, esp. Vector DBs
From a data quality point of view, we see the following challenges w.r.t. LLMs, esp. Vector DBs:
- Accuracy of the encodings in vector stores, measured in terms of the correctness and groundedness of the generated LLM responses.
- Incorrect and/or inconsistent vectors: due to issues in the embedding process, some vectors may end up corrupted, incomplete, or generated with a different dimensionality.
- Missing data, in the form of missing vectors or metadata.
- Timeliness issues w.r.t. outdated documents impacting the vector store.
* D. Biswas. Long-term Memory for AI Agents. AI Advances, 2024 (link)
Explainability
Explainable AI is an umbrella term for a range of tools, algorithms, and methods that accompany AI model predictions with explanations.
- Explainability of AI models ranks high among the list of 'non-functional' AI features to be considered by enterprises.
- For example, this implies having to explain why an ML model profiled a user to be in a specific segment — which led to them receiving an advertisement.
[Figure: Labeled Data → Train ML Model → Predictions → Explanation Model → Explainable Predictions]
Use-case specific Evaluation of LLMs
Need for a comprehensive LLM evaluation strategy with targeted success metrics specific to the use-cases.
* D. Biswas. Use Case-Based Evaluation Strategy for LLMs. AI Advances, 2024 (link)
LLM Safety Leaderboard
* Hugging Face LLM Safety Leaderboard (link)
* B. Wang, et al. DecodingTrust: A Comprehensive Assessment of Trustworthiness in GPT Models, 2024 (link)
ML Privacy Risks
Two broad categories of privacy inference attacks:
- Membership inference: whether a specific user data item was present in the training dataset.
- Property inference: reconstruct properties of a participant's dataset.
Black-box attacks are still possible when the attacker only has access to the APIs: invoke the model and observe the relationships between inputs and outputs.
[Figure: an Attacker has access to the Inference API of an ML Model (classification / prediction), and wants access to the training dataset]
* D. Biswas. Privacy Preserving Chatbot Conversations. IEEE AIKE 2020: 179-182 (link)
* D. Biswas, K. Vidyasankar. A Privacy Framework for Hierarchical Federated Learning. CIKM Workshops 2021 (link)
Gen AI Privacy Risks – novel challenges
From a privacy point of view, we need to consider the following additional / different LLM privacy risks:
- Membership and property leakage from pre-training data
- Model feature leakage from the pre-trained LLM
- Privacy leakage from conversations (history) with LLMs
- Compliance with the privacy intent of users
* D. Biswas. Privacy Risks of Large Language Models. AI Advances, 2024 (link)
Responsible deployment of AI Agents
* D. Biswas. Stateful Monitoring and Responsible Deployment of AI Agents. 17th International Conference on Agents and Artificial Intelligence (ICAART), 2025 (link)
Guardrails Challenges
We need to design validation tests and guardrail functions taking into account the underlying domain data, (sub-)topics, user queries, performance metrics, regulatory requirements, etc. of the underlying use-case.
For example, the cancellation policies corresponding to a cancel_flight_booking() function include:
- "explicit customer confirmation needs to be obtained prior to canceling a flight."
- "flight reservations can be canceled free of charge within 24 hours of booking."
- "economy or business flights can be canceled only if travel insurance is bought."
Automated Guardrails Generation
We present a 3-step approach to generate policy-driven guardrails for agentic use-cases (see the sketch below):
1. Map policies to the relevant agents / tools in an offline fashion.
2. Generate guardrails for the mapped agents in the form of policy validation code.
3. Invoke the guardrails at run-time prior to the agent invocation. This acts as a preventive measure, ensuring that the agents / tools do not violate any policy. In case of a violation, the agent is prompted to reflect and adapt its strategy.
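A sketch of what the generated policy-validation code for cancel_flight_booking() might look like, covering the three policies listed above; the booking fields are assumptions for illustration:

```python
from datetime import datetime, timedelta

def cancel_flight_guardrail(booking: dict, confirmed_by_customer: bool) -> list[str]:
    # Policy-validation code of the kind step 2 would generate, invoked at
    # run-time before the tool call (step 3). Any returned violation prompts
    # the agent to reflect and adapt its strategy.
    violations = []
    if not confirmed_by_customer:
        violations.append("explicit customer confirmation required before canceling")
    if booking["cabin"] in ("economy", "business") and not booking.get("travel_insurance"):
        violations.append("economy/business flights can be canceled only with travel insurance")
    if datetime.now() - booking["booked_at"] > timedelta(hours=24):
        violations.append("outside the free-cancellation window (24h after booking): a fee applies")
    return violations

booking = {"cabin": "economy", "travel_insurance": False,
           "booked_at": datetime.now() - timedelta(hours=30)}
print(cancel_flight_guardrail(booking, confirmed_by_customer=False))
```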
Risk Management for AI Agents
R1: Misaligned & Deceptive Behaviors
R2: Intent Breaking & Goal Manipulation
(Goal Misalignment)
R3: Tool Misuse
R4: Memory Poisoning
R5: Cascading Hallucination Attacks
(Security Vulnerabilities)
R6: Privilege Compromise
R7: Identity Spoofing & Impersonation
R8: Unexpected RCE & Code Attacks
(Operational Resilience)
R9: Resource Overload
R10: Repudiation & Untraceability
(Multi-Agent Collusion)
R11: Rogue Agents in Multi-Agent Systems
R12: Agent Communication Poisoning
R13: Human Attacks on Multi-Agent Systems
R14: Human Manipulation
R15: Overwhelming the Human in the Loop
R16: Persona-driven Bias
* D. Biswas. Risk Management for the Agentic AI Lifecycle, 2025 (link)
* OWASP whitepaper: Agentic AI — Threats and Mitigations, 2025 (link)
* IBM whitepaper: Accountability and Risk Matter in Agentic AI, 2025 (link)
Human-in-the-Loop Strategy
- Co-plan: Validate & plan, ensuring that the generated plan (orchestration graph) corresponds to the given user intent.
- Co-execute: Users can intermittently pause (suspend) the execution and give feedback if the agent's / tool's response does not comply with the assigned task, or if the human feels that the agent will not be able to achieve its (long-term) goal.
- Co-comply: Users can mark the critical and irreversible tasks (e.g., payments), and ensure that the right guardrails have been applied, compliant with enterprise policies — before approving the task.
- Co-memorize: Refine memory by reviewing key long-term memory concepts, optimizing storage, and ensuring reusability and agent performance optimization.
- This is complemented by a continuous improvement module that learns from historical interactions to optimize future human interventions.
* D. Biswas. Human-in-the-Loop Strategy for Agentic AI. AI Advances, 2024 (link)
Agentification Case Studies:
- Customer Service Desk
- Manufacturing
- Data Engineering
Agentification of Customer Service Desk
* D. Biswas. Agentic AI for Customer Service Desk. Data Science Collective, 2025 (link)
Agentic AI for Manufacturing
* D. Biswas. Agentic AI for Manufacturing. AI Advances, 2025 (link)
Agentification of Data Engineering
* D. Biswas. Agentic AI for Data Engineering. Data Science Collective, 2025 (link)