ENDPOINT IN NICK
POST /@chat HTTP/1.1
Accept: application/json
Authorization: Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJ
Content-Type: application/json
{
"prompt": "What is the largest land animal?",
"context": [ ... ],
"messages": [ ... ],
}
RESPONSE
HTTP/1.1 200 OK
Content-Type: application/json
{
"model": "qwen3",
"created_at": "2025-01-01T00:00:00.00000Z",
"response": "The largest land animal is the African bush ele
"done": true,
"done_reason": "stop",
"context": [
...
],
"total_duration": 356186167,
"load_duration": 18592125,
"prompt_eval_count": 139
}
STREAMING
POST /@chat HTTP/1.1
Accept: application/json
Authorization: Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJ
Content-Type: application/json
{
"prompt": "What is the largest land animal?",
"context": [ ... ],
"messages": [ ... ],
"stream": true,
}
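With "stream": true, servers in this style (e.g. Ollama's REST API) return newline-delimited JSON chunks, each carrying a partial "response" and a final chunk with "done": true. A minimal accumulator, assuming that NDJSON chunk shape:

```typescript
// One chunk of a streamed response; fields follow the non-streaming
// response shown earlier (an assumption about this particular server).
interface StreamChunk {
  response?: string;
  done?: boolean;
}

// Concatenate the partial "response" fields until the final chunk.
function collectStream(ndjson: string): string {
  let text = "";
  for (const line of ndjson.split("\n")) {
    if (!line.trim()) continue; // skip blank lines between chunks
    const chunk: StreamChunk = JSON.parse(line);
    if (chunk.response) text += chunk.response;
    if (chunk.done) break;
  }
  return text;
}
```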
ASSISTANT
CHAT HISTORY
POST /@chat HTTP/1.1
Accept: application/json
Authorization: Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJ
Content-Type: application/json
{
"prompt": "Tell me a bit about them",
"context": [ ... ],
"messages": [
{
"role": "user",
"content": "What is the largest land animal?"
},
{
"role": "assistant",
"content": "The largest land animal is the African bush elephant."
}
]
}
USE A MULTI-DIMENSIONAL
VECTOR FIELD
Install the pgvector extension in PostgreSQL
CREATE EXTENSION IF NOT EXISTS vector;
CREATE INDEX IN NICK
catalog.json
{
"indexes": [
{
"name": "embedding",
"type": "embed",
"attr": "searchableText"
}
],
...
}
SAVED VALUE IN DB
HOW TO USE THOSE EMBEDDING
VECTORS?
SEARCH!
Create an embedding of our search query
Compare the embedding with the index
embeddings
Order by similarity
Limit by similarity
SEARCH QUERY IN SQL
SELECT 1 - (_embedding <=> '${embedding}') AS similarity
FROM catalog
WHERE 1 - (_embedding <=> '${embedding}') > ${minSimilarity}
ORDER BY similarity DESC;
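In pgvector, the <=> operator returns cosine distance, which the query above converts to similarity via 1 - distance. The same math, written out in TypeScript as a reference for what the database computes:

```typescript
// Cosine similarity: dot(a, b) / (|a| * |b|).
// Equals 1 - cosineDistance, i.e. 1 - (a <=> b) in pgvector terms.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}
```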
SEARCH RESULT
IMAGE RECOGNITION
In the picture, there's a sleek laptop with its
screen displaying a forest scene. The laptop
appears to be on a table or desk, which also
has some other items like a lamp and a plant
pot in the background. The setting looks
modern and stylish, suggesting that this
image might have been taken for advertising
purposes, showcasing the laptop's design
and features.
WHAT IS IMAGE RECOGNITION?
HOW TO USE THEM IN A CMS
Use an image recognition model
Store the generated text
Index the generated text as searchable text
Index the generated text as embedding
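The four steps above can be sketched as a small pipeline. The recognize function stands in for a call to an image model (e.g. llava via Ollama) and is injected here so the flow can be shown without a live model; the searchableText field name mirrors the index config shown earlier.

```typescript
// Record the CMS would store and index for one image.
interface IndexedImage {
  path: string;
  searchableText: string;
}

// Hypothetical indexing step: run the recognizer, keep its output as
// searchable text (which the CMS would then also embed).
function indexImage(
  path: string,
  recognize: (path: string) => string,
): IndexedImage {
  const description = recognize(path); // 1. use the image model
  return { path, searchableText: description }; // 2-4. store and index
}
```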
USE AN IMAGE RECOGNITION
MODEL
$ ollama pull llava
RAG
LLMs are trained once; they know nothing published after training
LLMs are not aware of "private" data
RAG combines an LLM with provided relevant data
The data is provided in the "context"
PAGE CONTEXT
SITE CONTEXT
Create an embedding of our question
Get relevant documents based on matching
embeddings
Get the text from those documents
Add the text to the context of the query
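The retrieval steps above can be sketched as one function: embed the question, rank documents by similarity, and splice the best matches into the prompt. The embed and similarity functions are injected stubs here, since in practice they would call an embedding model and the vector index.

```typescript
interface Doc { text: string; embedding: number[] }

// Hypothetical RAG prompt builder: top-K retrieval by similarity,
// then the retrieved text is prepended as context.
function buildRagPrompt(
  question: string,
  docs: Doc[],
  embed: (text: string) => number[],
  similarity: (a: number[], b: number[]) => number,
  topK = 2,
): string {
  const q = embed(question); // 1. embed the question
  const best = [...docs] // 2. rank by matching embeddings
    .sort((a, b) => similarity(q, b.embedding) - similarity(q, a.embedding))
    .slice(0, topK);
  const context = best.map((d) => d.text).join("\n"); // 3. get the text
  return `Context:\n${context}\n\nQuestion: ${question}`; // 4. add to query
}
```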
TOOLS
Tools can be used by an LLM to execute actions
Define what they do and what parameters they take
Add them to your chat call
Check the response to see if a tool was used
Execute the action of the tool
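The check-and-execute steps above can be sketched as follows. The tool_calls field shape loosely follows Ollama-style chat responses, but its exact form is an assumption here; the point is the branch: run the requested tool if one was asked for, otherwise return the model's text.

```typescript
// A requested tool invocation in the model's response (assumed shape).
interface ToolCall { name: string; args: Record<string, unknown> }
interface ChatResponse { content: string; tool_calls?: ToolCall[] }

// A tool is just a named function over the model-supplied arguments.
type Tool = (args: Record<string, unknown>) => string;

function handleResponse(res: ChatResponse, tools: Record<string, Tool>): string {
  // If the model asked for tools, execute them; otherwise return text.
  if (res.tool_calls?.length) {
    return res.tool_calls
      .map((c) => tools[c.name]?.(c.args) ?? `unknown tool: ${c.name}`)
      .join("\n");
  }
  return res.content;
}
```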