Highlights and new features in VictoriaLogs — Q3 2025 update
VictoriaMetrics
73 slides
Oct 13, 2025
About This Presentation
These slides were presented during the virtual VictoriaMetrics Meetup for Q3 2025.
What’s new in VictoriaLogs
Q3 2025
New home for VictoriaLogs
●Previously VictoriaLogs source code was located in the VictoriaMetrics repo
●This was a source of confusion for VictoriaMetrics vs VictoriaLogs releases
●So VictoriaLogs source code has been moved into a dedicated repo -
https://github.com/VictoriaMetrics/VictoriaLogs/
VictoriaTraces
●VictoriaTraces is a database for traces. It is based on VictoriaLogs
●It is located in a dedicated repository -
https://github.com/VictoriaMetrics/VictoriaTraces/
●It is ready for evaluation
Enterprise features for VictoriaLogs
●https://docs.victoriametrics.com/victoriametrics/enterprise/#victorialogs-enterprise-features
vlagent
●An agent for collecting, transforming, buffering, and replicating logs across
the given VictoriaLogs instances
●https://docs.victoriametrics.com/victorialogs/vlagent/
Prominent features
Prefix-based processing of log fields in LogsQL pipes
●Delete fields starting with kubernetes.node_labels. and
kubernetes.pod_labels. prefixes
○… | delete kubernetes.node_labels.*, kubernetes.pod_labels.*
●Keep fields starting with kubernetes.container additionally to _time and
_msg fields:
○… | fields _time, _msg, kubernetes.container*
●Drop kubernetes.node_labels. prefix:
○… | rename kubernetes.node_labels.* as *
Prefix-based processing of log fields in LogsQL pipes
●Replace kubernetes.node_labels. prefix with host_labels. prefix:
○… | rename kubernetes.node_labels.* as host_labels.*
●Add my_prefix. prefix to all the fields:
○… | rename * as my_prefix.*
●Copy fields with the kubernetes.pod prefix to fields with my_pod prefix:
○… | copy kubernetes.pod* as my_pod*
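These prefix-based pipes can also be chained in a single query. A sketch combining the pipes shown above (the exact field names are illustrative):
○… | rename kubernetes.node_labels.* as host_labels.* | delete kubernetes.pod_labels.*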
Prefix-based parsing of JSON and logfmt fields
●Unpack JSON fields, which start with kubernetes. prefix:
○… | unpack_json fields (kubernetes.*)
●Unpack logfmt fields, which start with kubernetes. prefix:
○… | unpack_logfmt fields (kubernetes.*)
Prefix-based selection of fields in stats functions
●The average value across fields starting with score.* prefix:
○… | stats avg(score.*)
●Select fields starting with kubernetes.pod prefix for logs with the minimum
_time per each host:
○… | stats by (host) row_min(_time, kubernetes.pod*)
●These are just a few examples. Other stats functions also support
prefix-based selection of fields
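Multiple stats functions with prefix-based field selection can be combined in a single stats pipe. A sketch (the field names are illustrative, not from the slides):
○… | stats by (host) min(score.*), max(score.*)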
Improvements in Syslog data ingestion
●Automatically add level field according to the severity field value. This enables log
level highlighting in Grafana
●Automatically parse CEF-encoded messages (frequently used in SIEM systems) -
https://docs.victoriametrics.com/victorialogs/data-ingestion/syslog/#cef
●Ability to store the client address at remote_ip field -
https://docs.victoriametrics.com/victorialogs/data-ingestion/syslog/#capturing-remote-ip-address
●Support for high-precision timestamps -
https://github.com/VictoriaMetrics/VictoriaLogs/issues/303
●Support for FreeBSD dialect of Syslog messages -
https://github.com/VictoriaMetrics/VictoriaLogs/issues/571
●Support for receiving Syslog messages from Unix sockets
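With the automatically added level field, ingested Syslog logs can be filtered and aggregated by severity right away. A sketch (assuming level values such as error produced by the severity mapping; the hostname field name follows the Syslog docs):
○_time:15m level:error | stats by (hostname) count()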
Improvements in Journald data ingestion
●Support for ingestion of very large Journald streams
●Automatically use (_MACHINE_ID, _HOSTNAME, _SYSTEMD_UNIT) as log
stream fields
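Because these fields are log stream fields, logs for a single systemd unit can be selected cheaply with a stream filter. A sketch (the unit name is illustrative):
○{_SYSTEMD_UNIT="nginx.service"} _time:1h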
Web UI improvements
●Consistently display timestamps with nanosecond precision (previously
timestamps were displayed with millisecond precision)
●Ability to investigate the surrounding logs for the given log (aka stream
context)
●Show the actual query execution time
●Optimize JSON tab (it continues working fast when displaying thousands of
logs with hundreds of fields per log entry)
Numerous improvements in VictoriaLogs plugin for Grafana
●https://play-grafana.victoriametrics.com/d/be5zidev72m80f/k8s-logs-demo
Kubernetes integration improvements
●Add victorialogs-collector Helm chart for collecting logs from Kubernetes
containers and sending them to VictoriaLogs
●Simplify and clarify documentation for victorialogs-single and
victorialogs-cluster Helm charts
DevOps improvements
●Improve the scalability of data ingestion on systems with hundreds of CPU
cores (by 5x and more)
●Ability to return partial responses from VictoriaLogs cluster (when some of
vlstorage nodes are unavailable) -
https://docs.victoriametrics.com/victorialogs/querying/#partial-responses
●Ability to set retention based on the percentage of the maximum disk space
usage - https://docs.victoriametrics.com/victorialogs/#retention
DevOps improvements
●Ability to attach and detach per-day partitions without VictoriaLogs restart.
Useful for building multi-tier storage schemes where recently ingested logs
are stored on faster, more expensive disks, while older logs are migrated to
slower, less expensive disks -
https://docs.victoriametrics.com/victorialogs/#partitions-lifecycle
●Ability to make snapshots for per-day partitions. This simplifies backups.
https://docs.victoriametrics.com/victorialogs/#backup-and-restore
●Significantly improved usefulness of Grafana dashboards for VictoriaLogs
components
LogsQL improvements
●Make LogsQL parser more strict (catches typical incorrect usage of LogsQL
queries)
○incorrect: field_name == value
○correct: field_name:=value
●Support for executing the query in Grafana with the given offset, similar to
the offset modifier in VictoriaMetrics and Prometheus (useful for comparing
query results on the current time range and the time range with the given
offset) -
https://docs.victoriametrics.com/victorialogs/logsql/#time_offset-query-option
●query_stats pipe for analyzing query execution statistics (helps to
understand why the query is slow and how it can be optimized) -
https://docs.victoriametrics.com/victorialogs/logsql/#query_stats-pipe
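The query_stats pipe is appended to the end of the query being profiled. A minimal sketch (the exact set of returned stats fields is described in the query_stats docs):
○_time:1h error | query_stats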
LogsQL improvements
●split pipe for splitting logs into arrays by the given delimiter (useful for CSV
parsing at query time) -
https://docs.victoriametrics.com/victorialogs/logsql/#split-pipe
●running_stats and total_stats pipes (useful for calculating running sums and
percentage of total per each interval on the selected time range) -
https://docs.victoriametrics.com/victorialogs/logsql/#running_stats-pipe
●Support for sorting in the json_values stats function (useful for selecting the
last N logs per each group of logs)
LogsQL improvements
●Support for *substring* filter, which matches the given substring (a faster
and easier-to-use alternative to the ~regexp filter)
●pattern_match filter (useful for searching for logs with the given patterns) -
https://docs.victoriametrics.com/victorialogs/logsql/#pattern-match-filter
●equals_common_case and contains_common_case filters - faster
alternatives to the i(...) filter -
https://docs.victoriametrics.com/victorialogs/logsql/#contains_common_case-filter
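For example, matching logs that contain a given word without writing a regexp. A sketch (the search word is illustrative):
○_time:5m *timeout*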
Roadmap
●Support for object storage
○Phase 1: transparent migration of older per-day partitions to object storage with transparent
querying of the migrated data. Reduces storage costs for historical data and automates
backups for historical data
○Phase 2: write newly ingested logs directly to object storage without storing them on the local
filesystem. Potential issue: slow query performance because of the typically high read latency of
object storage. The solution: increase the number of parallel data readers. VictoriaLogs
already supports this via the parallel_readers option -
https://docs.victoriametrics.com/victorialogs/logsql/#parallel_readers-query-option
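A sketch of how such an option might look in a query, assuming the options(...) query-option syntax described in the LogsQL docs (the value 64 is illustrative):
○options(parallel_readers=64) _time:1d error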
Don’t wait for object storage - try VictoriaLogs on local filesystem now!