2025 - AI terms & Meanings for Marketers

ashishav29 · 12 slides · Nov 02, 2025

About This Presentation

Top AI Terms & Meaning for Marketers


Slide Content

Top AI Terms & Meanings for Marketers (2/11/2025)
Topics: Tokens, Context Window, Hallucinations, Retrieval-Augmented Generation, Grounding, Model Context Protocol.
Disclaimer: I am not an AI developer or expert. This deck was built based solely on my understanding derived from various public sources, including IBM and YouTube.

1. Tokens
Tokens are the smallest units of text that AI models use to understand and process language. Before an LLM can understand your input prompt, a process called tokenization converts the text into a sequence of tokens. In the example, a seven-word input ("Hello is World This is Agent Referencing") is converted into 8 tokens: most words map to a single token each, while a longer word like "Referencing" is split into 2 tokens.

For example, the text "is" has been assigned token number 382. Note that an extra space between "Hello" and "is" adds an additional token to the input prompt.
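The behavior described above can be sketched with a toy tokenizer. Real LLM tokenizers use learned subword vocabularies (BPE or WordPiece) and assign numeric IDs like the 382 mentioned above; this illustrative regex version only shows the splitting behavior, including how an extra space becomes its own token.

```python
import re

def toy_tokenize(text):
    # Illustrative only: real tokenizers use learned subword vocabularies.
    # GPT-style tokenizers typically fold a single leading space into the
    # following word token, so a *second* space surfaces as its own token.
    return re.findall(r" ?[A-Za-z]+|\s", text)

print(toy_tokenize("Hello is"))   # ['Hello', ' is'] -> 2 tokens
print(toy_tokenize("Hello  is"))  # ['Hello', ' ', ' is'] -> 3 tokens
```

With a single separating space, each word absorbs its leading space into one token; the doubled space leaves a stray whitespace token behind, which is why sloppy spacing slightly inflates prompt length.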

2. Context Window
The context window is like a language model's working memory: it determines how much of a conversation the model can remember.
- Within the context window: the model remembers prior context before generating its next output.
- Outside the context window: the model falls back on inferences or educated guesses.

2. Context Window
Additional learning: IBM Technologies (YouTube), "What is Context Window".

3. Hallucinations
AI hallucination is when an AI model confidently generates incorrect, misleading, or fabricated information. These errors can be caused by a variety of factors, including insufficient training data, incorrect assumptions made by the model, or biases in the data used to train it. One example, from Andrej Karpathy's (OpenAI co-founder) "Deep Dive into LLMs" video, shows him asking the LLM about a non-existent person; the model hallucinates a confident response.

4. Retrieval-Augmented Generation
RAG, or Retrieval-Augmented Generation, is an architectural method that augments an LLM's answer by first fetching relevant information from a private knowledge base (such as a company database, a set of internal documents, or a PDF library).
Why RAG: LLMs are limited to their pre-trained data, which leads to outdated and potentially inaccurate responses. RAG overcomes this by supplying up-to-date information to the LLM at query time.
For example, consider a smart chatbot that answers human-resources questions for an organization. If an employee asks, "How much annual leave do I have?", the system retrieves the annual leave policy documents alongside that employee's past leave record. These specific documents are returned because they are highly relevant to the employee's question; the relevance is calculated using mathematical vector representations and similarity calculations.
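The retrieval step described above can be sketched with cosine similarity over embedding vectors. The document names and 3-dimensional vectors below are made up for illustration; a real RAG pipeline would use a learned embedding model and a vector database, then pass the top documents to the LLM as context.

```python
import math

def cosine(a, b):
    # Cosine similarity: how closely two embedding vectors point
    # in the same direction (1.0 = identical direction).
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Hypothetical document store with made-up embeddings.
docs = {
    "annual leave policy":   [0.9, 0.1, 0.0],
    "employee leave record": [0.7, 0.4, 0.1],
    "office parking rules":  [0.0, 0.1, 0.9],
}
# Made-up embedding of "How much annual leave do I have?"
query_vec = [0.85, 0.2, 0.05]

# Retrieve the top-2 most relevant documents; these would then be
# handed to the LLM as context (the "augmented generation" step).
top2 = sorted(docs, key=lambda d: cosine(docs[d], query_vec), reverse=True)[:2]
print(top2)  # ['annual leave policy', 'employee leave record']
```

The leave-related documents score far higher than the parking rules, which is exactly the behavior the HR-chatbot example relies on.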

5. Grounding
Grounding is the general objective, or process, of tying an LLM's response to verifiable, factual information. If you provide a model with access to specific data sources, grounding tethers its output to that data and reduces the chance of invented content. This is particularly important in situations where accuracy and reliability matter.
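One common way to tether a response to supplied sources is prompt-level grounding: instruct the model to answer only from the material you give it. This is a sketch of one grounding pattern, not the only one; the wording and helper name are illustrative.

```python
def grounded_prompt(question, sources):
    # Build a prompt that tethers the answer to the supplied sources
    # and gives the model an explicit escape hatch instead of inventing.
    context = "\n".join(f"[{i + 1}] {s}" for i, s in enumerate(sources))
    return (
        "Answer using ONLY the sources below. "
        "If the answer is not in the sources, say you don't know.\n\n"
        f"Sources:\n{context}\n\n"
        f"Question: {question}"
    )

print(grounded_prompt(
    "What is our refund window?",
    ["Refunds are accepted within 30 days of purchase."],
))
```

Numbering the sources also makes it easy to ask the model to cite which source each claim came from, which helps a reviewer verify the output.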

5. Grounding (continued; slides 9 and 10 are image-only)

RAG vs. Grounding (comparison slide)

6. Model Context Protocol
Introduced by Anthropic in November 2024, MCP provides a secure and standardized "language" for LLMs to communicate with external data, applications, and services. It acts as a bridge, allowing AI to move beyond static knowledge and become a dynamic agent that can retrieve current information and take action, making it more accurate, useful, and automated.
Additional learning: "Ahrefs' MCP with OpenAI" by Chris Long.
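Concretely, MCP messages follow the JSON-RPC 2.0 format, and a client asks an MCP server to run a tool with a `tools/call` request. The sketch below shows the shape of such a message; the tool name and its argument are made up for illustration, not part of any real server.

```python
import json

# Sketch of an MCP-style tool invocation. MCP is built on JSON-RPC 2.0;
# "tools/call" is the method a client uses to ask a server to execute a
# tool. The tool name and arguments below are hypothetical.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "get_keyword_volume",          # hypothetical marketing tool
        "arguments": {"keyword": "ai terms"},  # illustrative argument
    },
}
print(json.dumps(request, indent=2))
```

This standardized envelope is what lets one LLM client talk to many different servers (SEO data, CRMs, file stores) without a custom integration for each, which is the "bridge" role described above.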