JDO 2019: Kubernetes logging techniques with a touch of LogSense - Marcin Stożek

proidea_conferences 28 views 59 slides Jun 24, 2019

About This Presentation

Kubernetes helps us run our applications across multiple nodes using the standardized, declarative way. While we don’t need to think about where our applications are run physically, we still want to have some insights into how they behave. But we are no longer allowed to log into a specific node a...


Slide Content

Kubernetes logging techniques
with a touch of LogSense
Marcin Stożek "Perk"

How apps log things
local file
 
remote location
 
stdout/stderr

stdout/err logging
Twelve-factor app compliant
 
Problem with multi-line, unstructured logs

How Docker manages logs
logging drivers
 
$ docker logs
local
json-file
journald 
docs.docker.com/config/containers/logging/configure/
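The logging driver and its options can be set daemon-wide in /etc/docker/daemon.json; a typical json-file setup with log rotation might look like this (the size and file counts below are illustrative, not from the talk):

```json
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}
```

Without max-size/max-file the json-file driver does no rotation at all, so log files can grow unbounded on the node.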

How Docker stores logs
with json-file
/var/lib/docker/containers/88f1c.../88f1c...-json.log
{"log":"2019-06-03 12:02:10 +0000 [info]: #0 delayed_timeout is
overwritten by ack_response_timeout\n",
"stream":"stdout", "time":"2019-06-03T12:02:10.252731041Z"}

Kubernetes logging

Logging at the node level

Logging at the node level
✓ logs available through kubectl logs
✓ logs available locally on the node
 
❌ this is not really convenient
(but helpful)

Cluster-level logging
architectures
Exposing logs directly from the application
 
Sidecar container with a logging agent
 
Node-level logging agent on every node
 
Streaming sidecar container

Logging from the application

Logging from the application
Push logs directly from the applications
running in the cluster.

Logging from the application
Push logs directly from the applications
running in the cluster.
Depends on available libraries.
 
Not twelve-factor compliant.
 
Works for multi-line logs.
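As an illustration of the direct-push approach, a minimal sketch in Python: a custom logging handler serializes each record itself, so a multi-line message stays one event. All names here are hypothetical, and a real handler would ship records over the network instead of buffering them:

```python
import json
import logging

class RemoteHandler(logging.Handler):
    """Hypothetical handler pushing structured records to a log backend.

    A real implementation would send in emit() over the network;
    here a plain list stands in for the remote endpoint.
    """

    def __init__(self):
        super().__init__()
        self.sent = []

    def emit(self, record):
        self.sent.append(json.dumps({
            "level": record.levelname,
            "logger": record.name,
            # getMessage() keeps a multi-line message as one event
            "message": record.getMessage(),
        }))

log = logging.getLogger("payments")
handler = RemoteHandler()
log.addHandler(handler)
log.error("charge failed\nTraceback (most recent call last): ...")
print(handler.sent[0])
```

The cost, as the slide notes, is a per-language library dependency and losing the twelve-factor stdout contract.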

Sidecar container
with a logging agent

Sidecar container
with a logging agent
Useful when application can log to file only.

Sidecar container
with a logging agent
Same problems as with logging directly from the application.
 
Multi-line logs problem.
 
Who is rotating the log files?
Useful when application can log to file only.

Logging with node-agent

Logging with node-agent
The application logs to stdout.
 
A node-agent runs on every node, as a DaemonSet.
 
The node-agent picks up the logs and ships them somewhere else.

Logging with node-agent
The application logs to stdout.
 
A node-agent runs on every node, as a DaemonSet.
 
The node-agent picks up the logs and ships them somewhere else.
Logs are still available through the kubectl logs command.
 
Problem with multi-line logs.
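The multi-line problem: the runtime emits one record per stdout line, so a stack trace is torn into separate events. Multiline parsers (Fluentd's multiline plugins work this way) regroup them with a heuristic such as "a line that does not start with a timestamp continues the previous event". A sketch of that heuristic, with made-up sample lines:

```python
import re

# Five stdout lines as the runtime sees them - the traceback is torn apart.
records = [
    "2019-06-03 12:02:10 ERROR something failed",
    "Traceback (most recent call last):",
    '  File "app.py", line 1, in <module>',
    "ValueError: boom",
    "2019-06-03 12:02:11 INFO recovered",
]

# Heuristic: a new event starts with a YYYY-MM-DD timestamp.
starts_new = re.compile(r"^\d{4}-\d{2}-\d{2} ")

merged = []
for rec in records:
    if starts_new.match(rec) or not merged:
        merged.append(rec)          # start a new event
    else:
        merged[-1] += "\n" + rec    # continuation of the previous event

print(len(merged))  # 2 events instead of 5 lines
```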

Streaming sidecar container
github.com/kubernetes/website/pull/14780

Streaming sidecar container
Use when your application logs to file only.
 
Streaming container gets the logs and pushes them to stdout.
 
The rest is the node-agent scenario.

Streaming sidecar container
Use when your application logs to file only.
 
Streaming container gets the logs and pushes them to stdout.
 
The rest is the node-agent scenario.
Multiple sidecars for multiple files.

Streaming sidecar container
Use when your application logs to file only.
 
Streaming container gets the logs and pushes them to stdout.
 
The rest is the node-agent scenario.
Space problem with two log files - inside and outside the pod.
Multiple sidecars for multiple files.

spec:
  containers:
  - name: snowflake
    ...
    volumeMounts:
    - name: varlog
      mountPath: /var/log
  - name: snowflake-log-1
    image: busybox
    args: [/bin/sh, -c, 'tail -n+1 -f /var/log/1.log']
    volumeMounts:
    - name: varlog
      mountPath: /var/log
  - name: snowflake-log-2
    image: busybox
    args: [/bin/sh, -c, 'tail -n+1 -f /var/log/2.log']
    volumeMounts:
    - name: varlog
      mountPath: /var/log
  volumes:
  - name: varlog
    emptyDir: {}
github.com/kubernetes/website/blob/master/content/en/examples/admin/logging/two-files-counter-pod-streaming-sidecar.yaml


$ kubectl logs my-pod snowflake-log-1

0: Mon Jan 1 00:00:00 UTC 2001
1: Mon Jan 1 00:00:01 UTC 2001
2: Mon Jan 1 00:00:02 UTC 2001
...



$ kubectl logs my-pod snowflake-log-2

Mon Jan 1 00:00:00 UTC 2001 INFO 0
Mon Jan 1 00:00:01 UTC 2001 INFO 1
Mon Jan 1 00:00:02 UTC 2001 INFO 2
...
github.com/kubernetes/website/blob/master/content/en/examples/admin/logging/two-files-counter-pod-streaming-sidecar.yaml

So how to log things?

So how to log things?
That depends on the use case, but...

So how to log things?
That depends on the use case, but...
? Exposing logs directly from the application
 
? Sidecar container with a logging agent

So how to log things?
That depends on the use case, but...
? Exposing logs directly from the application
 
? Sidecar container with a logging agent
✓ Streaming sidecar container

✓✓✓ Node-level logging agent on every node

Node-agent logging solutions
FluentD is your friend here
 
Under CNCF, like Kubernetes itself
 
Fluentd Kubernetes Daemonset - plug and play
 
Recommended by the k8s official documentation

kind: DaemonSet
spec:
  template:
    spec:
      containers:
      - name: fluentd
        image: fluent/fluentd-kubernetes-daemonset:forward
        ...
        volumeMounts:
        - name: varlog
          mountPath: /var/log
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
github.com/fluent/fluentd-kubernetes-daemonset/blob/master/fluentd-daemonset-forward.yaml


Should we log
everything?

It's not cheap

Logs throttling
Using Kubernetes means one has many services.
 
We don't use k8s because it's fancy, right?

Logs throttling
Using Kubernetes means one has many services.
 
We don't use k8s because it's fancy, right?
Many services means many log messages.

At a certain scale it doesn't make much sense to collect all the logs.

Logs throttling
Using Kubernetes means one has many services.
 
We don't use k8s because it's fancy, right?
Many services means many log messages.

At a certain scale it doesn't make much sense to collect all the logs.
FluentD can help here with fluent-plugin-throttle.
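A fluent-plugin-throttle filter section might look roughly like this (the group key and bucket sizes below are illustrative, not from the talk):

```
<filter **>
  @type throttle
  group_key kubernetes.container_name
  group_bucket_period_s 60
  group_bucket_limit 6000
</filter>
```

Grouped by container name, this caps each container at roughly 6000 messages per minute; records above the bucket limit are dropped instead of being shipped.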

Where to store your logs

 
$ helm install stable/elastic-stack
 
or use Logging as a Service, e.g. LogSense

$ kubectl -n kube-system \
create secret generic logsense-token \
--from-literal=logsense-token=YOUR-LOGSENSE-TOKEN-HERE

$ kubectl apply -f logsense-daemonset.yaml
github.com/logsenseapp/fluentd-kubernetes-daemonset

{
  "kubernetes": {
    "container_name": "kafka",
    "namespace_name": "int",
    "pod_name": "kafka-kafka-0",
    "container_image": "strimzi/kafka:0.11.1-kafka-2.1.0",
    "container_image_id": "docker-pullable://strimzi/kafka@sha256:e741337...",
    "pod_id": "c8eeb49c-67cb-11e9-9b29-12f358b019b2",
    "labels": {
      "app": "kafka",
      "controller-revision-hash": "kafka-kafka-58dc7cdc78",
      "tier": "backend",
      "statefulset_kubernetes_io/pod-name": "kafka-kafka-0",
      "strimzi_io/cluster": "kafka",
      "strimzi_io/kind": "Kafka",
      "strimzi_io/name": "kafka-kafka"
    },
    "host": "ip-10-100-1-2.ec2.internal",
    "master_url": "https://172.20.0.1:443/api",
    "namespace_id": "7fe790af-23d2-11e9-8903-0edd77f6b554",
    "namespace_labels": {
      "name": "int"
    }
  },
  "docker": {
    "container_id": "e5254e89e508126dbdea587080ec6e01aab660bf62392e41b6e0..."
  },
  "stream": "stdout",
  "log": "2019-06-09 18:05:44,774 INFO Deleting segment 271347611 [kafka-scheduler-8]"
}
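The value of this enrichment is that records can be selected by the Kubernetes metadata the agent attached; a small sketch with a hypothetical matches() helper (record shortened to the fields used):

```python
# A record shaped like the one above, shortened to the fields we query.
record = {
    "kubernetes": {
        "container_name": "kafka",
        "namespace_name": "int",
        "labels": {"app": "kafka", "tier": "backend"},
    },
    "stream": "stdout",
    "log": "2019-06-09 18:05:44,774 INFO Deleting segment 271347611",
}

def matches(rec, namespace=None, label=None):
    """Select enriched records by the attached Kubernetes metadata."""
    k8s = rec.get("kubernetes", {})
    if namespace and k8s.get("namespace_name") != namespace:
        return False
    if label:
        key, value = label
        if k8s.get("labels", {}).get(key) != value:
            return False
    return True

print(matches(record, namespace="int", label=("tier", "backend")))
print(matches(record, namespace="prod"))
```

This per-pod/per-namespace filtering is exactly what a log backend's query language does with these fields.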


Links, lynx, wget, w3m, curl...
kubernetes.io/docs/concepts/cluster-administration/logging/
docs.docker.com/config/containers/logging/
12factor.net/logs
logsense.com

logger.info("Thank you!")
@[email protected]