Internal Study Session Materials: LLM Agents

NABLAS · 13 slides · May 28, 2024

About This Presentation

LLM Agents have the flexibility to handle not only simple responses but also complex tasks. This presentation explains their main components: Planning, Memory, and Tool Use.


Slide Content

LLM Agents

Contents
1. Prerequisites
2. Why LLM Agents?
3. LLMs vs. Agentic Response
4. LLM Agent overview
   a. Component One: Planning
   b. Component Two: Memory (human-agent analogy)
   c. Component Three: Tool Use
5. ReAct: an agent technique for "Component One: Planning"

Prerequisite: Retrieval Augmented Generation (RAG)
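Since the slide only names the prerequisite, here is a minimal RAG sketch for orientation: retrieve the documents most relevant to the question, put them into the prompt, and let the model answer from that context. The keyword-overlap retriever and the `call_llm` helper are illustrative stand-ins, not part of the slides (the later sketches reuse `call_llm`).

```python
def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for a chat/completion API call; replace with a real client."""
    return "(model response would appear here)"

def retrieve(question: str, documents: list[str], top_k: int = 3) -> list[str]:
    # Toy retriever: rank documents by keyword overlap with the question.
    q_words = set(question.lower().split())
    ranked = sorted(documents, key=lambda d: len(q_words & set(d.lower().split())), reverse=True)
    return ranked[:top_k]

def rag_answer(question: str, documents: list[str]) -> str:
    # Augment the prompt with the retrieved context, then generate.
    context = "\n\n".join(retrieve(question, documents))
    prompt = f"Answer the question using only the context below.\n\nContext:\n{context}\n\nQuestion: {question}"
    return call_llm(prompt)
```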

Why LLM Agents?
In the human body, the brain (specifically consciousness, excluding memory) cannot do anything by itself without hands, sense organs, and memory!
Think of the LLM as the brain, and an Agent built around the LLM as the complete body.

Why (What Is) an LLM Agent?
Consider an LLM application designed to help financial analysts.
A simple question such as "What was X corporation's total revenue for FY 2022?" can be answered by a RAG pipeline over the company's data.
A real-life question the analyst would actually ask: "What were the three takeaways from the Q2 earnings call for FY 23? Focus on the technological moats that the company is building."
- This information requires more than a simple lookup from an earnings call. It requires planning, tailored focus, memory, using different tools, and breaking a complex question down into simpler sub-parts.
- These concepts, assembled together, are essentially what we have come to refer to as an LLM Agent.

LLMs vs. Agentic Response
ReAct is one type of agent technique.

LLM Agent overview
An agent uses the LLM as its brain (cerebrum) to perform multi-step decision making.
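The rest of the deck breaks this down into three components: Planning, Memory, and Tool Use. As a rough sketch of how they might be wired together (class and method names are illustrative assumptions, not any specific framework's API):

```python
# Hypothetical wiring of the three components covered in the next slides.
class LLMAgent:
    def __init__(self, llm, tools, memory):
        self.llm = llm        # the "brain": a callable prompt -> text
        self.tools = tools    # the "hands": name -> callable external tools
        self.memory = memory  # the "memory": object with a retrieve() method

    def run(self, task: str) -> str:
        # Component One: Planning - decompose the task into steps.
        plan = self.llm(f"List the steps needed to solve:\n{task}\nOne per line.")
        notes = []
        for step in plan.splitlines():
            # Component Two: Memory - recall relevant context for this step.
            context = self.memory.retrieve(step)
            # Component Three: Tool Use - let the LLM pick a tool and its input.
            action = self.llm(f"Step: {step}\nContext: {context}\nReply '<tool>: <input>'.")
            notes.append(self._dispatch(action))
        return self.llm(f"Task: {task}\nNotes: {notes}\nWrite the final answer.")

    def _dispatch(self, action: str) -> str:
        name, _, arg = action.partition(":")
        tool = self.tools.get(name.strip())
        return tool(arg.strip()) if tool else f"(no tool named {name.strip()!r})"
```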

Component One: Planning
- Task Decomposition: A complicated task usually involves many steps. An agent needs to know what they are and plan ahead.
- Self-Reflection: Allows autonomous agents to improve iteratively by refining past action decisions and correcting previous mistakes.
- One notable technique: ReAct (Synergizing Reasoning and Acting in Language Models).
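A minimal sketch of these two planning ingredients, reusing the hypothetical `call_llm` stand-in from the RAG sketch above: decompose a task into sub-steps, then self-reflect on a draft answer and revise it (the prompt wording is an assumption).

```python
def decompose(task: str) -> list[str]:
    # Task Decomposition: ask the model to break the task into sub-steps.
    steps = call_llm(f"List the sub-steps needed to solve:\n{task}\nOne step per line.")
    return [s.strip() for s in steps.splitlines() if s.strip()]

def self_reflect(task: str, draft: str) -> str:
    # Self-Reflection: critique the draft, then revise it using the critique.
    critique = call_llm(f"Task: {task}\nDraft answer: {draft}\nList any mistakes or gaps.")
    return call_llm(f"Task: {task}\nDraft: {draft}\nCritique: {critique}\nWrite an improved answer.")
```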

Component Two: Memory
Categorization of human memory.

Component Two: Memory
We can roughly consider the following mappings from human memory to an agent's memory:
● Sensory memory -> embedding representations of raw inputs, including text, images, or other modalities;
● Short-term memory -> in-context learning. It is short and finite, as it is restricted by the finite context window length of the Transformer;
● Long-term memory -> an external vector store that the agent can attend to at query time, accessible via fast retrieval.
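The toy code below sketches these three mappings. The hashing `embed` function stands in for sensory memory (embedding raw input), the prompt itself plays the role of short-term memory (the context window), and a small in-memory vector store stands in for long-term memory; all names are illustrative assumptions.

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    """Toy embedding: hash words into a fixed-size vector (sensory memory)."""
    v = np.zeros(64)
    for w in text.lower().split():
        v[hash(w) % 64] += 1.0
    return v

class LongTermMemory:
    """Toy external vector store, queried by cosine similarity at answer time."""
    def __init__(self):
        self.items: list[tuple[np.ndarray, str]] = []

    def add(self, text: str) -> None:
        self.items.append((embed(text), text))

    def retrieve(self, query: str, top_k: int = 2) -> list[str]:
        q = embed(query)
        def score(item):
            v, _ = item
            denom = (np.linalg.norm(q) * np.linalg.norm(v)) or 1.0
            return float(q @ v) / denom
        return [t for _, t in sorted(self.items, key=score, reverse=True)[:top_k]]

def build_prompt(question: str, memory: LongTermMemory) -> str:
    """Short-term memory is simply whatever fits into the prompt/context window."""
    recalled = "\n".join(memory.retrieve(question))
    return f"Relevant notes:\n{recalled}\n\nQuestion: {question}"
```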

Component Three: Tool Use
Using the LLM to decide which tool to use, when to use it, and how to use it.
- TALM (Tool Augmented Language Models; Parisi et al., 2022)
- Toolformer by Meta
- HuggingGPT
(Figure: the format of API calls in TALM.)
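As a generic illustration of tool use (not the TALM call format shown in the slide's figure), the sketch below asks the LLM which registered tool to call and then executes it; the tool names, prompt wording, and the `call_llm` helper are assumptions carried over from the earlier sketches.

```python
TOOLS = {
    # Toy tools for illustration only.
    "calculator": lambda expr: str(eval(expr, {"__builtins__": {}}, {})),
    "search": lambda query: f"(pretend search results for {query!r})",
}

def use_tool(question: str) -> str:
    # Ask the LLM which tool to use and with what input, then run that tool.
    choice = call_llm(
        "Available tools: calculator(expression), search(query)\n"
        f"Question: {question}\n"
        "Reply in the form '<tool> | <input>'."
    )
    name, _, arg = choice.partition("|")
    tool = TOOLS.get(name.strip())
    return tool(arg.strip()) if tool else f"Unknown tool: {name.strip()}"
```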

ReAct
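ReAct interleaves reasoning traces (Thought) with tool calls (Action) and the returned Observation in a single loop. Below is a minimal sketch of that pattern, reusing the hypothetical `call_llm` and `TOOLS` helpers from the earlier sketches; the prompt wording is an assumption, not the paper's exact prompt.

```python
def react(question: str, max_steps: int = 5) -> str:
    transcript = f"Question: {question}\n"
    for _ in range(max_steps):
        # The model either reasons and proposes an action, or gives a final answer.
        step = call_llm(
            transcript
            + "Continue with either:\n"
              "Thought: <reasoning>\nAction: <tool> | <input>\n"
              "or\nFinal Answer: <answer>\n"
        )
        transcript += step + "\n"
        if "Final Answer:" in step:
            return step.split("Final Answer:", 1)[1].strip()
        if "Action:" in step:
            # Execute the proposed action and feed the observation back into the loop.
            action_lines = step.split("Action:", 1)[1].strip().splitlines()
            action = action_lines[0] if action_lines else ""
            name, _, arg = action.partition("|")
            tool = TOOLS.get(name.strip())
            observation = tool(arg.strip()) if tool else f"Unknown tool: {name.strip()}"
            transcript += f"Observation: {observation}\n"
    return "Step limit reached without a final answer."
```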