Highlights and new features in VictoriaLogs — Q3 2025 update

VictoriaMetrics · 73 slides · Oct 13, 2025

About This Presentation

These slides were presented during the virtual VictoriaMetrics Meetup for Q3 2025.

What’s new in VictoriaLogs

Improvements in LogsQL:
* Prefix-based processing of log fields in LogsQL pipes
* Prefix-based parsing of JSON and logfmt fields
* Prefix-based selection of fields in stats functions
* M...


Slide Content

What’s new in VictoriaLogs
Q3 2025

New home for VictoriaLogs
●Previously VictoriaLogs source code was located in the VictoriaMetrics repo
●This was a source of confusion for VictoriaMetrics vs VictoriaLogs releases
●So VictoriaLogs source code has been moved into a dedicated repo -
https://github.com/VictoriaMetrics/VictoriaLogs/

VictoriaTraces
●VictoriaTraces is a database for traces. It is based on VictoriaLogs
●It is located in a dedicated repository -
https://github.com/VictoriaMetrics/VictoriaTraces/
●It is ready for evaluation

Enterprise features for VictoriaLogs
●https://docs.victoriametrics.com/victoriametrics/enterprise/#victorialogs-enterprise-features

vlagent
●An agent for logs’ collection, transformation, buffering and replication across
the given VictoriaLogs instances
●https://docs.victoriametrics.com/victorialogs/vlagent/

Prominent features

Prefix-based processing of log fields in LogsQL pipes
●Delete fields starting with kubernetes.node_labels. and
kubernetes.pod_labels. prefixes
○… | delete kubernetes.node_labels.*, kubernetes.pod_labels.*
●Keep fields starting with kubernetes.container additionally to _time and
_msg fields:
○… | fields _time, _msg, kubernetes.container*
●Drop kubernetes.node_labels. prefix:
○… | rename kubernetes.node_labels.* as *

Prefix-based processing of log fields in LogsQL pipes
●Replace kubernetes.node_labels. prefix with host_labels. prefix:
○… | rename kubernetes.node_labels.* as host_labels.*
●Add my_prefix. prefix to all the fields:
○… | rename * as my_prefix.*
●Copy fields with the kubernetes.pod prefix to fields with my_pod prefix:
○… | copy kubernetes.pod* as my_pod*
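A rough Python model of these prefix operations may help to see the semantics (illustrative only: the helper names and the sample log entry are made up, and this is not how VictoriaLogs implements the pipes):

```python
# Illustrative model of the prefix-based `delete` and `rename` pipes
# applied to a single log entry represented as a dict. Not VictoriaLogs
# code; the helper and field names below are hypothetical.

def delete_prefix(entry, *prefixes):
    """Model `... | delete <prefix>*`: drop fields starting with any prefix."""
    return {k: v for k, v in entry.items()
            if not any(k.startswith(p) for p in prefixes)}

def rename_prefix(entry, old, new):
    """Model `... | rename <old>* as <new>*`: swap the field-name prefix."""
    return {(new + k[len(old):]) if k.startswith(old) else k: v
            for k, v in entry.items()}

entry = {
    "_time": "2025-10-13T10:00:00Z",
    "_msg": "pod started",
    "kubernetes.node_labels.zone": "eu-1",
    "kubernetes.pod_labels.app": "web",
}

entry = delete_prefix(entry, "kubernetes.pod_labels.")
entry = rename_prefix(entry, "kubernetes.node_labels.", "host_labels.")
print(sorted(entry))  # ['_msg', '_time', 'host_labels.zone']
```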

Prefix-based parsing of JSON and logfmt fields
●Unpack JSON fields, which start with kubernetes. prefix:
○… | unpack_json fields (kubernetes.*)
●Unpack logfmt fields, which start with kubernetes. prefix:
○… | unpack_logfmt fields (kubernetes.*)
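Prefix-restricted unpacking can be sketched roughly as follows (an illustrative model only — the helper name and sample fields are hypothetical, and the real pipe has more options, such as a result prefix):

```python
import json

def unpack_json_fields(entry, prefix):
    """Model of `... | unpack_json fields (<prefix>*)`: parse JSON stored
    in fields whose names start with the prefix and merge the resulting
    key/value pairs into the log entry. Illustrative sketch only."""
    out = dict(entry)
    for name, value in entry.items():
        if not name.startswith(prefix):
            continue
        try:
            parsed = json.loads(value)
        except (TypeError, ValueError):
            continue  # field does not hold valid JSON; leave entry as-is
        if isinstance(parsed, dict):
            out.update({str(k): str(v) for k, v in parsed.items()})
    return out

entry = {"_msg": "ok", "kubernetes.labels": '{"app": "web", "tier": "front"}'}
print(unpack_json_fields(entry, "kubernetes.")["app"])  # web
```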

Prefix-based selection of fields in stats functions
●The average value across fields starting with score.* prefix:
○… | stats avg(score.*)
●Select fields starting with kubernetes.pod prefix for logs with the minimum
_time per each host:
○… | stats by (host) row_min(_time, kubernetes.pod*)
●These are just a few examples. Other stats functions also support
prefix-based selection of fields
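As a sketch of what prefix-based selection means for a stats function, here is a hypothetical model of `avg(score.*)` over a batch of log entries (field names and the helper are illustrative, not VictoriaLogs internals):

```python
# Model of `... | stats avg(score.*)`: average over the values of all
# fields whose names start with "score.", across all selected logs.
# Illustration of the semantics only.

def avg_over_prefix(entries, prefix):
    values = [float(v)
              for entry in entries
              for k, v in entry.items()
              if k.startswith(prefix)]
    return sum(values) / len(values) if values else None

logs = [
    {"score.relevance": "3", "score.freshness": "1", "host": "a"},
    {"score.relevance": "2", "host": "b"},
]
print(avg_over_prefix(logs, "score."))  # 2.0
```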

Improvements in Syslog data ingestion
●Automatically add level field according to the severity field value. This enables log
level highlighting in Grafana
●Automatically parse CEF-encoded messages (frequently used in SIEM systems) -
https://docs.victoriametrics.com/victorialogs/data-ingestion/syslog/#cef
●Ability to store the client address at remote_ip field -
https://docs.victoriametrics.com/victorialogs/data-ingestion/syslog/#capturing-remote-ip-address
●Support for high-precision timestamps -
https://github.com/VictoriaMetrics/VictoriaLogs/issues/303
●Support for FreeBSD dialect of Syslog messages -
https://github.com/VictoriaMetrics/VictoriaLogs/issues/571
●Support for receiving Syslog messages from Unix sockets
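The severity-to-level idea can be illustrated with a short sketch. Per RFC 5424, the Syslog PRI value encodes `facility * 8 + severity`, so the severity is the low three bits; the label set below follows the RFC 5424 severity names, while the exact labels VictoriaLogs emits may differ:

```python
# Sketch of deriving a `level` field from the Syslog severity
# (severity = PRI mod 8, per RFC 5424). Label names are illustrative.

SEVERITY_TO_LEVEL = {
    0: "emergency", 1: "alert", 2: "critical", 3: "error",
    4: "warning", 5: "notice", 6: "info", 7: "debug",
}

def level_from_pri(pri):
    """PRI = facility * 8 + severity, so severity is the low 3 bits."""
    return SEVERITY_TO_LEVEL[pri % 8]

print(level_from_pri(134))  # facility 16 (local0), severity 6 -> info
```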

Improvements in Journald data ingestion
●Support for ingestion of very large Journald streams
●Automatically use (_MACHINE_ID, _HOSTNAME, _SYSTEMD_UNIT) as log
stream fields

Web UI improvements
●Consistently display timestamps with nanosecond precision (previously
timestamps were displayed with millisecond precision)

Web UI improvements
●Ability to investigate the surrounding logs for the given log (aka stream
context)

Web UI improvements
●Show the actual query execution time

Web UI improvements
●Optimize the JSON tab (it stays fast when displaying thousands of logs
with hundreds of fields per log entry)

Numerous improvements in VictoriaLogs plugin for
Grafana
●https://play-grafana.victoriametrics.com/d/be5zidev72m80f/k8s-logs-demo

Kubernetes integration improvements
●Add victorialogs-collector Helm chart for collecting logs from Kubernetes
containers and sending them to VictoriaLogs
●Simplify and clarify documentation for victorialogs-single and
victorialogs-cluster Helm charts

DevOps improvements
●Improve the scalability of data ingestion on systems with hundreds of CPU
cores (by 5x and more)
●Ability to return partial responses from VictoriaLogs cluster (when some of
vlstorage nodes are unavailable) -
https://docs.victoriametrics.com/victorialogs/querying/#partial-responses
●Ability to set retention based on the percentage of the maximum disk space
usage - https://docs.victoriametrics.com/victorialogs/#retention

DevOps improvements
●Ability to attach and detach per-day partitions without VictoriaLogs restart.
Useful for building multi-tier storage schemes where recently ingested logs
are stored on faster, more expensive disks, while older logs are migrated to
slower, less expensive disks -
https://docs.victoriametrics.com/victorialogs/#partitions-lifecycle
●Ability to make snapshots for per-day partitions. This simplifies backups.
https://docs.victoriametrics.com/victorialogs/#backup-and-restore
●Significantly improved usefulness of Grafana dashboards for VictoriaLogs
components

LogsQL improvements
●Make LogsQL parser more strict (catches typical incorrect usage of LogsQL
queries)
○incorrect: field_name == value
○correct: field_name:=value
●Support for executing the query in Grafana with the given offset similar to the
offset modifier in VictoriaMetrics and Prometheus (useful for comparison of
query results on the current time range and the time range with the given
offset) -
https://docs.victoriametrics.com/victorialogs/logsql/#time_offset-query-option
●query_stats pipe for analyzing query execution statistics (helps
understand why a query is slow and how it can be optimized) -
https://docs.victoriametrics.com/victorialogs/logsql/#query_stats-pipe

LogsQL improvements
●split pipe for splitting logs into arrays by the given delimiter (useful for CSV
parsing at query time) -
https://docs.victoriametrics.com/victorialogs/logsql/#split-pipe
●running_stats and total_stats pipes (useful for calculating running sums and
percentage of total per each interval on the selected time range) -
https://docs.victoriametrics.com/victorialogs/logsql/#running_stats-pipe
●Support for sorting in the json_values stats function (useful for selecting
the last N logs per each group of logs)
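The running-sum and share-of-total idea behind running_stats/total_stats can be sketched like this (a hypothetical model of the concept over pre-bucketed counts, not LogsQL execution):

```python
from itertools import accumulate

# Sketch in the spirit of the running_stats / total_stats pipes:
# a running sum per time bucket plus each bucket's share of the total.
per_interval = [5, 3, 7, 1]          # e.g. log counts per time bucket
running = list(accumulate(per_interval))
total = running[-1]
share = [round(100 * v / total, 1) for v in per_interval]  # % of total
print(running)  # [5, 8, 15, 16]
print(share)
```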

LogsQL improvements
●Support for *substring* filter, which matches the given substring (a faster
and easier-to-use alternative to the ~regexp filter)
●pattern_match filter (useful for searching for logs with the given patterns) -
https://docs.victoriametrics.com/victorialogs/logsql/#pattern-match-filter
●equals_common_case and contains_common_case filters - faster
alternatives to the i(...) filter -
https://docs.victoriametrics.com/victorialogs/logsql/#contains_common_case-filter

Roadmap
●Support for object storage
○Phase 1: transparent migration of older per-day partitions to object storage with transparent
querying of the migrated data. This reduces storage costs and automates
backups for historical data
○Phase 2: write newly ingested logs directly to object storage without storing them at the local
filesystem. Potential issue: slow query performance because of typically high read latency at
object storage. The solution: increase the number of parallel data readers. VictoriaLogs
already supports this via parallel_readers option -
https://docs.victoriametrics.com/victorialogs/logsql/#parallel_readers-query-option

Don’t wait for object storage - try VictoriaLogs
on local filesystem now!