The Ultimate Guide to Make AI Agents: From Creation to Advanced Automation

IPRESSTVADMIN · Sep 04, 2025

The evolution of automation has taken a significant leap forward. We've moved beyond simple,
linear workflows into an era of intelligent, autonomous systems. Make has introduced its AI
Agents feature, providing a platform to build sophisticated AI-powered automations. This guide
offers a complete walkthrough of this feature, drawing from direct experience with the platform.
You will learn the entire process, from constructing a basic agent to deploying it in complex,
real-world business scenarios.
The Foundational Trinity of Make AI Agents
To effectively build and deploy an AI agent within the Make ecosystem, you must understand its
three fundamental components. This modular architecture allows for flexibility and reusability
across different automation needs. Each part plays a distinct and vital role in the agent's
operation.
The three core components are:
1. The Agent Itself: This is the central intelligence of your operation. It is defined by a core set of instructions, known as a system prompt, which dictates its personality, purpose, and operational boundaries. You create this agent as a standalone entity that can be called upon by various processes.
2. The Tools: These represent the agent's capabilities: the specific actions it can perform. In Make, a tool is simply another Make scenario. By giving an agent access to these tool scenarios, you equip it with the ability to interact with other apps and services, such as searching Google, updating a CRM, or extracting data from a website.
3. The Trigger Scenario: This is the initial workflow that activates the agent. It starts with a trigger event, such as a new form submission or a message in a chat application. This scenario houses the logic for when to call the agent and provides the initial data needed to begin its task.
This separation of the agent's brain (system prompt), its hands (tools), and its activation switch
(trigger scenario) is a powerful design. It means you can create a single, well-defined agent and
then use it in numerous different workflows, simply by providing it with the right tools and context
for each specific job.
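This brain/hands/switch separation can be pictured as a small data model. The following Python sketch is purely illustrative (all names are hypothetical; Make configures these pieces through its UI, not code), but it shows why one agent can be reused across trigger scenarios:

```python
from dataclasses import dataclass, field

@dataclass
class Tool:
    # In Make, a tool is just another scenario; here it is reduced to a
    # name and the natural-language description the agent's LLM reads.
    name: str
    description: str

@dataclass
class Agent:
    # The "brain": a system prompt plus the tools it may call.
    name: str
    system_prompt: str
    tools: list[Tool] = field(default_factory=list)

@dataclass
class TriggerScenario:
    # The activation switch: a trigger event routed to a reusable agent.
    trigger_event: str
    agent: Agent

# One well-defined agent, reused by two different trigger scenarios.
assistant = Agent("Slack Assistant", "You are an executive assistant.")
slack_flow = TriggerScenario("new Slack message", assistant)
form_flow = TriggerScenario("new form submission", assistant)
```

Because both trigger scenarios point at the same agent object, updating the agent's system prompt or tool set updates every workflow that uses it.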
Crafting Your First AI Agent: The Step-by-Step Process
Building an AI agent begins in a centralized location within your Make account. This hub is where
you will define, configure, and manage all the agents for your organization. The process is
structured to guide you from selecting an AI model to writing the core instructions that will govern
your agent's behavior.
Navigating to the AI Agents Hub
Your journey starts on the main navigation panel on the left side of the Make dashboard. Directly
beneath the "Scenarios" tab, you will find the "AI Agents" section. This area serves as the
command center for all your agents. Here, you can create new agents, configure existing ones,
and view all the agents available to your team, providing a clear overview of your AI workforce.
The Agent Creation Interface
When you choose to create a new agent, a configuration window appears. Each field in this
interface is essential for defining how your agent will function.
● Connection: This is the first and most critical choice. The connection determines which large language model (LLM) will power your agent's intelligence. Make supports connections to leading providers like OpenAI, Anthropic, and Google's Gemini. To establish a connection, you will need an API key from your chosen provider. This choice influences the agent's capabilities, speed, and operational cost.
● Agent Name: Assign a clear and descriptive name to your agent. A good naming convention, such as "Lead Research Agent" or "Customer Handover Agent," helps in identifying the agent's purpose quickly, especially when managing multiple agents.
● Model Selection: After selecting a connection, you must choose a specific model. For instance, with Anthropic, you might choose between Claude 3.7 Sonnet for balanced performance or Claude 3 Opus for more complex reasoning tasks. Similarly, OpenAI offers models like GPT-4o and GPT-4 Turbo. The model you select will directly impact the agent's analytical depth and response speed.
The System Prompt: Giving Your Agent Its Brain
The system prompt is the constitutional document for your AI agent. It is here that you provide
the detailed instructions that define its identity, purpose, behavior, and limitations. A well-crafted
system prompt is the key to creating an effective and reliable agent. The structure of your prompt
should be logical and thorough.

A strong system prompt includes several key sections:
● Agent Role and Purpose: Begin by clearly stating the agent's role. For example, "You are an expert executive assistant tasked with supporting the handling of specific business operations via Slack." This initial statement sets the entire context for the agent's behavior.
● Primary Functions: List the main tasks the agent is designed to perform. Using a numbered list improves clarity. For an executive assistant agent, this might be:
1. Enroll new students in courses.
2. Log content ideas for future development.
3. Create tasks in Notion.
● Task-Specific Requirements: For each function listed, provide explicit details about the information required to complete it. For the "Student Enrollment" task, you would specify: "Required information: Student's email address, First name, Last name, Course selection." This instruction empowers the agent to recognize when it has incomplete information and to ask the user for the missing details before proceeding.
● General Instructions and Error Handling: Define the agent's general operational protocol. These are rules that apply to all its tasks. Instructions could include: "Analyze each incoming message to determine which task is being requested," "Confirm your understanding of the task before proceeding," and "If the request lacks sufficient detail, ask clarifying questions to gather the necessary information." This section makes the agent more robust and interactive.
By meticulously detailing these elements in the system prompt, you provide the agent with a clear framework for decision-making. This reduces ambiguity and ensures it performs its functions accurately and consistently.
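Assembled as plain text, the four sections described above might be concatenated like this. The section wording comes from the examples in this guide; the layout itself is only one reasonable choice, sketched in Python:

```python
# Each system-prompt section, keyed by its heading.
sections = {
    "Agent Role and Purpose": (
        "You are an expert executive assistant tasked with supporting "
        "the handling of specific business operations via Slack."
    ),
    "Primary Functions": (
        "1. Enroll new students in courses.\n"
        "2. Log content ideas for future development.\n"
        "3. Create tasks in Notion."
    ),
    "Task-Specific Requirements": (
        "Student Enrollment requires: Student's email address, "
        "First name, Last name, Course selection."
    ),
    "General Instructions and Error Handling": (
        "Analyze each incoming message to determine which task is being "
        "requested. Confirm your understanding before proceeding. If the "
        "request lacks sufficient detail, ask clarifying questions."
    ),
}

# Join each titled section into the final system prompt text.
system_prompt = "\n\n".join(
    f"{title}:\n{body}" for title, body in sections.items()
)
```

The resulting string is what you would paste into the agent's system prompt field.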
Empowering Your Agent with Tools and Memory
An agent's intelligence, defined by its system prompt, is only one half of the equation. To perform
meaningful work, it needs the ability to execute actions and remember the context of
conversations. This is achieved through the use of tools and a memory system managed by
thread IDs. These components transform the agent from a pure conversationalist into a
functional automated worker.
Understanding Agent Tools
In Make, tools are what give your agent the ability to act. A tool is a separate, dedicated Make
scenario designed to perform a single, specific task. The agent's LLM analyzes a user's request
and, based on its system prompt and the descriptions of available tools, decides which tool to
use. This process, known as "tool calling," is the bridge between the agent's reasoning and
practical execution.
Building a Functional Tool Scenario
Creating a scenario that can be used as a tool by an AI agent requires a specific structure. These
scenarios are not like typical, schedule-based automations. They are designed to be callable
services that receive instructions, perform a job, and report back the results.

There are four key requirements for a tool scenario:
1. On-Demand Trigger: The scenario's scheduling must be set to "On-demand." This is because the agent, not a clock or a webhook, will initiate its execution. The agent calls the tool precisely when it is needed for a specific task.
2. Scenario Inputs: You must define the data structure for the information the agent will send to the tool. This is configured in the "Scenario inputs and outputs" menu. For a tool that logs a content idea, you would define inputs like idea_title (Text) and idea_description (Text). This creates a clear contract for how the agent communicates with the tool.
3. Scenario Outputs: You must define the data structure for the results the tool will send back to the agent. This is achieved by adding a "Scenarios > Return output" module at the end of the workflow. For the content idea tool, the output might be logged_idea_link (Text), containing the URL of the newly created record.
4. Tool Description: When you add the tool scenario to your agent's configuration, you must provide a clear, natural-language description of its function. For example: "Use this tool to log a new content idea. A new idea should have a 'Title' and a 'Description'." The agent's LLM uses this description to understand what the tool does and when it should be used.
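The input/output contract in requirements 2 and 3 behaves like a simple schema check. A minimal Python sketch (the helper and field names are hypothetical; Make enforces this contract through its scenario inputs and outputs configuration):

```python
# Declared contract for the hypothetical "log a content idea" tool.
TOOL_INPUTS = {"idea_title": "Text", "idea_description": "Text"}
TOOL_OUTPUTS = {"logged_idea_link": "Text"}

def missing_inputs(payload: dict) -> list[str]:
    """Names of required inputs the agent failed to supply."""
    return [name for name in TOOL_INPUTS if name not in payload]

# The agent supplied only a title, so it should go back and ask
# the user for a description before calling the tool.
missing = missing_inputs({"idea_title": "AI agent tutorial"})
```

A call that passes this check can run the tool and expect the declared output fields back.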
The Concept of Agent Memory via Threads
For an agent to have a coherent, multi-step conversation, it needs a form of memory. Make AI
Agents handle this through a mechanism called a "Thread ID." A Thread ID is a unique identifier
assigned to a specific conversation. By passing the same Thread ID with each message in a
conversation, you enable the agent to retain the full context of that discussion.
In a practical application like a Slack assistant, Slack's own threading feature provides a perfect source for this ID. Every Slack message has a unique ts (timestamp). If a message is part of a thread, it also has a thread.ts, which is the timestamp of the original message that started the thread.
You can implement this memory system in your trigger scenario using a simple formula in the "Thread ID" field of the "Run an agent" module: ifempty(1.thread.ts; 1.ts). This logic instructs the system:
● If the incoming Slack event has a thread.ts value (meaning it is a reply within an existing thread), use that value as the Thread ID. This keeps the agent's focus within that conversation.
● If the thread.ts value is empty (meaning it is a new message starting a new conversation), use the message's own ts value to create a new Thread ID.
This intelligent use of the Thread ID ensures that each conversation with the agent is treated as
a separate, context-aware session, allowing for natural back-and-forth interactions without
confusion.
Practical Applications: Real-World AI Agent Use Cases

The true measure of the AI Agents feature lies in its practical application to solve real business
problems. By combining a well-defined agent with a suite of powerful tools, you can automate
complex processes that traditionally required significant manual effort. Below are three detailed
use cases that demonstrate the versatility of Make AI Agents.
Use Case 1: The Automated Slack Executive Assistant
This agent acts as a personal assistant within Slack, capable of handling routine administrative
tasks based on simple chat commands. It showcases how an agent can serve as a user-friendly
interface for more complex backend automations.
The workflow is straightforward:
1. Trigger: A new message is posted in a designated Slack channel.
2. Action: The "Run an agent" module is triggered, passing the message text and the conversation's Thread ID to the Slack Assistant agent.
3. Response: The agent processes the request, potentially using one of its tools, and formulates a reply.
4. Output: A final Slack module posts the agent's reply back into the correct Slack thread.
In practice, a user can interact with this agent conversationally. If the user asks, "What can you
do?" the agent, referencing its system prompt, will list its capabilities, such as enrolling a student
or creating a Notion task. If the user then says, "Please enroll student John Doe with email
[email protected] in the 'no-code-operator' course," the agent will identify the "Enroll Student
in Course" task, confirm it has all the necessary information (name, email, course), and trigger
the corresponding tool scenario to perform the enrollment.
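The completeness check the agent performs before triggering the enrollment tool amounts to a required-field lookup. A sketch, with hypothetical field names mirroring the system prompt's "Required information" list (the email address here is invented for illustration):

```python
# Required information per task, as spelled out in the system prompt.
REQUIRED_FIELDS = {
    "enroll_student": ["email", "first_name", "last_name", "course"],
    "log_content_idea": ["idea_title", "idea_description"],
}

def missing_details(task: str, provided: dict) -> list[str]:
    """Fields the agent must still ask the user for."""
    return [f for f in REQUIRED_FIELDS.get(task, []) if f not in provided]

request = {
    "first_name": "John",
    "last_name": "Doe",
    "email": "john@example.com",  # hypothetical address
    "course": "no-code-operator",
}
still_needed = missing_details("enroll_student", request)
```

An empty result means the agent has everything it needs and can call the tool; a non-empty result means it should ask a clarifying question instead.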
Use Case 2: The Proactive Lead Research Agent
This use case demonstrates how an agent can automate the time-consuming process of lead
enrichment, providing sales teams with valuable context on new prospects without any manual
research.
The automation flow for this agent is as follows:
1. Trigger: A new lead submits their information through a web form, such as one created with Tally.
2. Initial Save: The lead's basic information (name, email, company) is saved as a new contact in a CRM like HubSpot.
3. Agent Activation: The "Run an agent" module is triggered, activating the Lead Research Agent and providing it with the initial lead data.
The agent then executes a sequence of actions using its specialized tools:
● It uses a Google Search tool to find the lead's personal and company LinkedIn profile URLs.
● It passes these URLs to a Get LinkedIn Profile Details tool, which might use a service like Apify to scrape detailed information such as job history, skills, and education.
● It uses an Extract Content from Website tool to analyze the lead's company website for additional context.
● Finally, it uses an Update HubSpot Contact tool to enrich the contact record with all the newly found data and adds a comprehensive summary note of its research findings.
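The tool sequence above is essentially a pipeline. The following sketch uses stub functions standing in for the real tools; every function name and return value here is hypothetical, since the actual tools are Make scenarios calling Google, Apify, a scraper, and HubSpot:

```python
def google_search_tool(query: str) -> dict:
    # Stub: would return the LinkedIn URLs found for the lead.
    return {"person_url": "https://linkedin.com/in/example",
            "company_url": "https://linkedin.com/company/example"}

def get_linkedin_profile_tool(url: str) -> dict:
    # Stub: would scrape job history, skills, and education via Apify.
    return {"job_history": [], "skills": []}

def extract_website_tool(domain: str) -> dict:
    # Stub: would summarize the company website.
    return {"site_summary": ""}

def update_hubspot_contact_tool(contact_id: str, data: dict) -> dict:
    # Stub: would write the enriched data and a summary note to the CRM.
    return {"contact_id": contact_id, "enriched": True, **data}

def enrich_lead(lead: dict) -> dict:
    # Chain the tools in the order the agent would call them.
    urls = google_search_tool(f"{lead['name']} {lead['company']} LinkedIn")
    profile = get_linkedin_profile_tool(urls["person_url"])
    site = extract_website_tool(lead["company_domain"])
    return update_hubspot_contact_tool(lead["id"], {**profile, **site})

result = enrich_lead({"id": "123", "name": "Jane Roe",
                      "company": "Acme", "company_domain": "acme.example"})
```

In the real system, the agent's LLM decides this ordering itself from the tool descriptions rather than following a hard-coded chain.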
Use Case 3: The Centralized Brand Guideline Agent
This advanced use case showcases how to enforce brand consistency across all AI-generated
content. Instead of repeating brand guidelines in every content-creation scenario, you create a
single "master" brand agent.
This agent's defining characteristic is its system prompt, which is a detailed repository of the
company's entire brand style guide. This includes rules on tone of voice, personality, formatting
guidelines for headlines and bullet points, and specific terminology to use or avoid. Crucially, this
master agent has no tools assigned to it at the agent level.
Its power comes from its application within other scenarios. For example, in a workflow designed
to "Generate Blog Articles," the process would be:
1. Trigger: A new blog topic is added to a Google Sheet.
2. Action: The scenario calls the "Run an agent" module and selects the master Brand Guideline Agent.
3. Contextual Customization: Within this specific scenario run, the agent is given Additional Tools (e.g., Google Search for research) and Additional System Instructions (e.g., "Your task is to write a professional blog post on the provided topic").
This architecture allows the agent to combine its core brand identity with the specific instructions
and tools required for the task at hand. The result is content that is not only well-researched and
relevant to the topic but also perfectly aligned with the company's brand voice. This modular
approach ensures consistency and saves a significant amount of time by centralizing brand
control.
The best way to understand the capabilities of these agents is to begin building them.
Create Your Free Make Account and Start Building