AI LLM Chaining (Multi-LLM Task Delegation) PPT
KanmaniK12
9 slides
Mar 04, 2025
About This Presentation
AI LLM Chain (Large Language Model Chain)
An LLM Chain is a structured pipeline that processes user queries through multiple AI components to ensure efficient and context-aware responses.
Key Concepts
Pipeline of Processing – Converts inputs to outputs in steps.
Modular Design – Components like retrieval, reasoning, and generation handle specific tasks.
Integration with Tools – Connects with APIs, databases, and external sources.
Use Cases
Chatbots & Virtual Assistants
Automated Content Generation
Question-Answering Systems
AI-driven Data Analysis
Slide Content
AI LLM Chaining (Multi-LLM Task Delegation & Verification)
SQUID GAME
AKILASRI L, KANMANI K
AGENDA
1. INTRODUCTION
2. MODULES
3. TECHNOLOGIES USED
4. IMPLEMENTATION
5. CHALLENGES FACED
6. CONCLUSION AND FUTURE IMPROVEMENT
INTRODUCTION
AI LLM Chaining is a system in which multiple Large Language Models (LLMs) work together to complete complex tasks efficiently. The approach breaks a task into smaller parts, assigns them to different LLMs, and verifies the responses for accuracy.
Task Delegation: Distributing workloads among specialized LLMs.
Verification: Comparing multiple outputs to ensure correctness.
Optimization: Refining and improving the final result.
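The delegate-verify-optimize loop above can be sketched as follows. This is a minimal illustration, not the deck's actual code: the "models" are stand-in functions, and verification is done by a simple majority vote; a real system would call LLM APIs and may use a stronger aggregation strategy.

```python
# Minimal sketch of task delegation and verification across several models.
# The model functions are stubs standing in for real LLM API calls.

def model_a(prompt):
    return prompt.strip().lower()

def model_b(prompt):
    return prompt.strip().lower()

def model_c(prompt):
    return prompt.strip().upper()  # a model that disagrees with the others

def delegate(prompt, models):
    """Task Delegation: send the same subtask to several models."""
    return [m(prompt) for m in models]

def verify(answers):
    """Verification: pick the answer most models agree on (majority vote)."""
    return max(set(answers), key=answers.count)

answers = delegate("  Hello ", [model_a, model_b, model_c])
final = verify(answers)  # the two agreeing models outvote the third
```

In practice the Optimization step would then refine `final`, for example by asking one model to polish the majority answer.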
MODULES
Model Selection – Lets users choose AI models (DeepSeek, Llama, Gemini, Mixtral).
User Interface – Handles chat display and user interaction via Streamlit.
Prompt Management – Generates structured prompts for AI responses.
AI Response Generation – Processes queries and returns AI-generated answers.
Chat History – Stores and displays previous messages for continuity.
Voice Input – Captures and converts speech to text for chatbot interaction.
Error Handling – Manages API errors and voice recognition issues.
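The Prompt Management module above might look like the following sketch. The template wording and function name are assumptions for illustration, not taken from the deck; the idea is just to fold the selected model name, recent chat history, and the new query into one structured prompt.

```python
# Hypothetical sketch of a Prompt Management helper: combine the chosen
# model, the stored chat history, and the new user query into one prompt.

def build_prompt(user_query, history, model_name):
    # history is a list of (role, message) pairs from the Chat History module
    past = "\n".join(f"{role}: {msg}" for role, msg in history)
    return (f"[model: {model_name}]\n"
            f"Conversation so far:\n{past}\n"
            f"User: {user_query}\nAssistant:")

prompt = build_prompt("What is LLM chaining?",
                      [("User", "Hi"), ("Assistant", "Hello!")],
                      "Mixtral")
```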
TECHNOLOGIES USED
Frontend: Streamlit (UI & interaction)
Backend: Python, LangChain (AI workflow)
Database: Streamlit Session State (optional: Firebase, PostgreSQL, MongoDB)
AI Models: Groq API (DeepSeek, Llama, Gemini, Mixtral)
Other: SpeechRecognition (voice input), dotenv (API key security)
IMPLEMENTATION
Environment Setup: Installed required libraries (transformers, deep_translator, wikipediaapi, streamlit).
Model Selection: Used Facebook's BlenderBot-400M for AI responses.
Translation Integration: Implemented GoogleTranslator to support multilingual conversations.
Wikipedia API: Added WikipediaAPI to fetch relevant information before generating AI responses.
Frontend with Streamlit: Designed an interactive chatbot UI using Streamlit with chat history.
Caching for Speed: Used @st.cache_resource to optimize model loading.
Code Structure & Workflow
app.py → Main script handling chat, translation, Wikipedia lookup, and UI
User Input → Translate → Wikipedia Search → AI Response → Translate Back → Display in UI
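The workflow above (User Input → Translate → Wikipedia Search → AI Response → Translate Back) can be condensed into one pipeline function. This is a sketch, not the actual app.py: the translator, Wikipedia lookup, and model are passed in as callables (stubbed here), where the real app would wrap GoogleTranslator, wikipediaapi, and the BlenderBot model.

```python
# Sketch of the app.py workflow with injected dependencies, so the
# pipeline shape is visible without the real libraries installed.

def chat_pipeline(user_text, to_en, wiki_search, generate, from_en):
    english = to_en(user_text)        # translate user input to English
    context = wiki_search(english)    # Wikipedia lookup for factual grounding
    prompt = f"Context: {context}\nUser: {english}"
    reply = generate(prompt)          # AI response (BlenderBot in the deck)
    return from_en(reply)             # translate back to the user's language

# Stubbed dependencies for illustration only:
reply = chat_pipeline(
    "hola",
    to_en=lambda t: "hello",
    wiki_search=lambda q: "Greeting: article summary",
    generate=lambda p: "Hi there!",
    from_en=lambda t: "¡Hola!",
)
```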
CHALLENGES FACED
Model Dependency Error: Installed PyTorch, required by the transformers library.
Streamlit set_page_config error: Moved st.set_page_config to the top of the script, before any other Streamlit call.
Slow Response Time: Cached the model with @st.cache_resource and optimized API calls.
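The two Streamlit fixes above look roughly like this script skeleton. It is a sketch under the deck's stated setup (the exact model checkpoint name and pipeline task are assumptions, not taken from the deck):

```python
import streamlit as st

# Fix 1: st.set_page_config must be the very first Streamlit call,
# otherwise Streamlit raises the error mentioned above.
st.set_page_config(page_title="Multi-LLM Chatbot")

# Fix 2: cache the heavyweight model object so it loads once,
# not on every Streamlit rerun.
@st.cache_resource
def load_model():
    from transformers import pipeline
    # Checkpoint name assumed; the deck only says "BlenderBot-400M".
    return pipeline("text2text-generation",
                    model="facebook/blenderbot-400M-distill")
```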
CONCLUSION AND FUTURE IMPROVEMENTS
Successfully built a multilingual chatbot that combines AI-generated responses with Wikipedia-based facts, with integrated translation support and chat history for an interactive experience.
Future improvements:
AI-driven task delegation
Multi-LLM chaining (Gemini, ChatGPT, Claude)
Automatic verification & response aggregation
Scalability with more LLMs