How can AI be used in Content Management Systems, Plone Conference 2025

robgietema 8 views 63 slides Oct 17, 2025

About This Presentation

This talk will show multiple examples and demos of AI in action using Volto and Nick.


Slide Content

HOW CAN AI BE USED
IN CONTENT MANAGEMENT SYSTEMS?
Plone Conference 2025, Jyväskylä
Rob Gietema (@robgietema)

WHAT IS AI?
Machine Learning (ML)
Large Language Models (LLM)
...

NEURAL NETWORK

TYPES OF LLMs
Generic
Image / Video generation
Image Recognition
Embedding

GENERIC LLMs
ChatGPT
Claude
Gemini
Deepseek
Qwen

LOCAL LLMs?

WHAT DO I NEED TO RUN AN LLM
LOCALLY?
Model
Runner

MODEL: QWEN

RUNNER: OLLAMA

RUNNING LOCAL USING OLLAMA
$ ollama pull qwen3
http://localhost:11434/api/chat
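As a minimal sketch, a request to the endpoint above can be built like this; the helper name is ours, and the payload shape follows Ollama's chat API (a model name plus a list of messages):

```javascript
// Build a request for Ollama's /api/chat endpoint (helper name is ours).
function buildChatRequest(model, prompt) {
  return {
    url: 'http://localhost:11434/api/chat',
    options: {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({
        model,
        messages: [{ role: 'user', content: prompt }],
        stream: false, // ask for one JSON response instead of NDJSON chunks
      }),
    },
  };
}

// Usage (assumes `ollama serve` is running and qwen3 has been pulled):
// const { url, options } = buildChatRequest('qwen3', 'What is the largest land animal?');
// const reply = await fetch(url, options).then((r) => r.json());
// console.log(reply.message.content);
```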

HOW TO USE AN LLM IN A CMS?
We need a backend that proxies the requests

MEET NICK
Headless CMS
Built with Node.js & PostgreSQL
RESTful API compatible with the Plone REST API
Has support for AI models

CONFIG IN NICK
export const config = {
  ...
  profiles: [
    `${__dirname}/src/profiles/core`,
    `${__dirname}/src/profiles/ai`,
    `${__dirname}/src/profiles/default`,
  ],
  ...
  ai: {
    models: {
      llm: {
        name: 'qwen3',
        api: 'http://localhost:11434/api/chat',
        contextSize: 10,
        enabled: true,

ENDPOINT IN NICK
POST /@chat HTTP/1.1
Accept: application/json
Authorization: Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJ
Content-Type: application/json
{
  "prompt": "What is the largest land animal?",
  "context": [ ... ],
  "messages": [ ... ]
}

RESPONSE
HTTP/1.1 200 OK
Content-Type: application/json
{
  "model": "qwen3",
  "created_at": "2025-01-01T00:00:00.00000Z",
  "response": "The largest land animal is the African bush elephant...",
  "done": true,
  "done_reason": "stop",
  "context": [
    ...
  ],
  "total_duration": 356186167,
  "load_duration": 18592125,
  "prompt_eval_count": 139,

STREAMING
POST /@chat HTTP/1.1
Accept: application/json
Authorization: Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJ
Content-Type: application/json
{
  "prompt": "What is the largest land animal?",
  "context": [ ... ],
  "messages": [ ... ],
  "stream": true
}
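With `stream: true`, Ollama-style APIs return newline-delimited JSON, each line carrying a fragment of the reply. A minimal sketch of reassembling those fragments (the helper name is ours, not part of Nick):

```javascript
// Stitch streamed NDJSON chunks back into the full reply text.
// Each line is a JSON object; non-final chunks carry message.content.
function joinStreamedChunks(ndjson) {
  return ndjson
    .split('\n')
    .filter((line) => line.trim() !== '')
    .map((line) => JSON.parse(line))
    .filter((chunk) => !chunk.done)
    .map((chunk) => chunk.message?.content ?? '')
    .join('');
}
```

In a real client the same logic runs incrementally as chunks arrive, so the assistant can render the answer word by word.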

ASSISTANT

CHAT HISTORY
POST /@chat HTTP/1.1
Accept: application/json
Authorization: Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJ
Content-Type: application/json
{
  "prompt": "Tell me a bit about them",
  "context": [ ... ],
  "messages": [
    {
      "role": "user",
      "content": "What is the largest land animal?"
    },
    {
      "role": "assistant",

HISTORY

SPEECHRECOGNITION
// Initialize SpeechRecognition
const SpeechRecognition = window.SpeechRecognition ||
window.webkitSpeechRecognition;
recognitionRef.current = new SpeechRecognition();
recognitionRef.current.continuous = true;
recognitionRef.current.interimResults = true;
// Attach event listeners
recognitionRef.current.onstart = handleStart;
recognitionRef.current.onend = handleEnd;
recognitionRef.current.onerror = handleError;
recognitionRef.current.onresult = handleResult;

SPEECHRECOGNITION

IMAGE / VIDEO GENERATION
Generate images on the fly
Generate lead images
Generation is slow
Generation is hardware intensive

EMBEDDINGS

WHAT ARE EMBEDDINGS?

HOW TO USE THEM IN A CMS
Use an embedding model
Use a multi-dimensional vector field
Index the content

USE AN EMBEDDING MODEL
$ ollama pull nomic-embed-text
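A request to the embedding endpoint can be sketched as follows; the helper name is ours, and the payload shape follows Ollama's /api/embed API:

```javascript
// Build a request for Ollama's /api/embed endpoint (helper name is ours).
function buildEmbedRequest(model, text) {
  return {
    url: 'http://localhost:11434/api/embed',
    body: JSON.stringify({ model, input: text }),
  };
}

// Usage (assumes ollama is running with the model pulled):
// const { url, body } = buildEmbedRequest('nomic-embed-text', 'Hello world');
// const { embeddings } = await fetch(url, { method: 'POST', body }).then((r) => r.json());
// embeddings[0] is a vector (768 dimensions for nomic-embed-text)
```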

CONFIG IN NICK
export const config = {
  ...
  ai: {
    models: {
      embed: {
        name: 'nomic-embed-text',
        api: 'http://localhost:11434/api/embed',
        dimensions: 768,
        minSimilarity: 0.6,
        enabled: true,
      },
      ...
    },
  },
  ...

USE A MULTI-DIMENSIONAL
VECTOR FIELD
Install pgvector extension in PostgreSQL
CREATE EXTENSION IF NOT EXISTS vector;

CREATE INDEX IN NICK
catalog.json
{
  "indexes": [
    {
      "name": "embedding",
      "type": "embed",
      "attr": "searchableText"
    }
  ],
  ...
}

SAVED VALUE IN DB

HOW TO USE THOSE EMBEDDING
VECTORS?

SEARCH!
Create an embedding of our search query
Compare the embedding with the index
embeddings
Order by similarity
Limit by similarity
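The compare / order / limit steps above can be sketched in plain JavaScript; this is the same logic pgvector's `<=>` cosine-distance operator performs in the database, with our own helper names and an in-memory index standing in for the catalog table:

```javascript
// Cosine similarity between two equal-length vectors.
function cosineSimilarity(a, b) {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Compare the query embedding with each indexed embedding,
// drop results below the threshold, and sort by similarity.
function search(queryEmbedding, index, minSimilarity) {
  return index
    .map((doc) => ({
      ...doc,
      similarity: cosineSimilarity(queryEmbedding, doc.embedding),
    }))
    .filter((doc) => doc.similarity > minSimilarity)
    .sort((a, b) => b.similarity - a.similarity);
}
```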

SEARCH QUERY IN SQL
SELECT 1 - (_embedding <=> '${embedding}') AS similarity
FROM catalog
WHERE 1 - (_embedding <=> '${embedding}') > ${minSimilarity}
ORDER BY similarity DESC;

SEARCH RESULT

IMAGE RECOGNITION

WHAT IS IMAGE RECOGNITION?
In the picture, there's a sleek laptop with its screen displaying a forest scene. The laptop appears to be on a table or desk, which also has some other items like a lamp and a plant pot in the background. The setting looks modern and stylish, suggesting that this image might have been taken for advertising purposes, showcasing the laptop's design and features.

HOW TO USE THEM IN A CMS
Use an image recognition model
Store the generated text
Index the generated text as searchable text
Index the generated text as embedding

USE AN IMAGE RECOGNITION
MODEL
$ ollama pull llava
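Multimodal models such as llava accept base64-encoded images through Ollama's /api/generate endpoint. A minimal sketch of building such a request (the helper name is ours):

```javascript
// Build a vision request for Ollama's /api/generate endpoint.
// Images are passed as an array of base64-encoded strings.
function buildVisionRequest(model, prompt, imageBuffer) {
  return {
    url: 'http://localhost:11434/api/generate',
    body: JSON.stringify({
      model,
      prompt,
      images: [imageBuffer.toString('base64')],
      stream: false,
    }),
  };
}

// Usage (assumes ollama is running with llava pulled):
// const image = require('node:fs').readFileSync('photo.jpg');
// const { url, body } = buildVisionRequest('llava', 'Describe this image', image);
// const { response } = await fetch(url, { method: 'POST', body }).then((r) => r.json());
```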

CONFIG IN NICK
export const config = {
  ...
  ai: {
    models: {
      vision: {
        name: 'llava',
        api: 'http://localhost:11434/api/generate',
        enabled: true,
      },
      ...
    },
  },
  ...
};

IMAGE RECOGNITION

RAG
Retrieval Augmented Generation

RAG
LLMs are trained once; they know nothing published after training
LLMs are not aware of "private" data
RAG combines an LLM with relevant data you provide
Data is provided in the "context"

PAGE CONTEXT

SITE CONTEXT
Create an embedding of our question
Get relevant documents based on matching
embeddings
Get the text from those documents
Add the text to the context of the query
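The steps above can be sketched as a small helper that turns matching documents into the context of a chat request; the function name and message shape are ours, with the similarity search stubbed out:

```javascript
// Assemble RAG messages: prepend the text of matching documents
// (e.g. results of the pgvector similarity query) to the user's question.
function buildRagMessages(question, matchingDocs) {
  const context = matchingDocs.map((doc) => doc.text).join('\n\n');
  return [
    {
      role: 'system',
      content: `Answer using only the following site content:\n\n${context}`,
    },
    { role: 'user', content: question },
  ];
}
```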

SITE CONTEXT

ATTACHMENT CONTEXT

HOW CAN THE ASSISTANT HELP
WITH CONTENT EDITING?

PAGE EDITING

PROMPTS

AI CONTROLPANEL
{
  "id": "ai",
  "title:i18n": "AI",
  "group": "Content",
  "schema": {
    "fieldsets": [
      {
        "fields": ["prompts"],
        "id": "general",
        "title": "General"
      }
    ],
    "properties": {
      "prompts": {
        "title:i18n": "Prompts",

ATTACHMENT CONTEXT

PROMPTS

WHAT ARE TOOLS AND HOW CAN
WE USE THEM?

TOOLS
Tools can be used by an LLM to execute actions
Define what they do and what parameters they take
Add them to your chat call
Check the response to see if a tool was used
Execute the action of the tool
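The last two steps can be sketched as a dispatch loop: inspect the chat response for tool calls and run the matching handler from a registry shaped like the example tools that follow. The function name is ours, and the response shape assumes an Ollama-style `message.tool_calls` array:

```javascript
// Run the handler for each tool the LLM asked to call,
// returning the tools' status messages.
function dispatchToolCalls(response, tools) {
  const calls = response.message?.tool_calls ?? [];
  return calls.map((call) => {
    const tool = tools[call.function.name];
    if (!tool) return `Unknown tool: ${call.function.name}`;
    tool.handler(call.function.arguments);
    return tool.message(call.function.arguments);
  });
}
```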

EXAMPLE TOOL
'edit page': {
  spec: {
    type: 'function',
    function: {
      name: 'edit page',
      description: 'Edit the page',
    },
  },
  handler: () => {
    document.querySelector('.toolbar-actions .edit').click();
  },
  message: () => `Editing the page.`,
},

EXAMPLE TOOL
'save page': {
  spec: {
    type: 'function',
    function: {
      name: 'save page',
      description: 'Save the page',
    },
  },
  handler: () => {
    document.getElementById('toolbar-save').click();
  },
  message: () => `The page is saved.`,
},

EXAMPLE TOOL
'set title': {
  spec: {
    type: 'function',
    function: {
      name: 'set title',
      description: 'Sets the title',
      parameters: {
        type: 'object',
        properties: {
          title: {
            type: 'string',
            description: 'The title to be set',
          },
        },
        required: ['title'],

TOOLS

QUESTIONS?
Want to implement a site using Nick? Talk to me!
slideshare.net/robgietema/how-can-ai-be-used-in-content-management-systems