Wazuh-Elastic Stack Training Deck 1


Wazuh-Elastic Stack Training Deck 1 Emiliano Fontana May 2021

Introduction Deck 1, Slide 2

Wazuh Built on the OSSEC project (GPLv2 fork), publicly recommended by the original author of OSSEC Over 6 years of aggressive development Massive expansions to legacy OSSEC functionality Integration with other major tools and services Constant improvement and support Deck 1, Slide 3

What is Wazuh? A situational awareness tool for your electronic assets An important resource for achieving regulatory security compliance Main Components: Log Collection Log Analysis (customizable set of over 3000 HIDS rules) File Integrity Monitoring Host-based anomaly detection Security compliance scanning for known vulnerabilities Real-time alerting (e-mail, SMS, Slack, etc.) Active Response (a HIDS-driven IPS implementation) Deck 1, Slide 4

What is Wazuh? Agents available for many diverse platforms Linux (Debian, CentOS, RedHat, SUSE, Amazon Linux, etc.) BSD (FreeBSD, OpenBSD, NetBSD) Solaris (10 & 11) AIX (5.3 or greater) MacOS Windows HP-UX (11v3) Note: for downloading and installing the latest Wazuh and related packages, see https://wazuh.com/start/ Deck 1, Slide 5

Wazuh Architecture Deck 1, Slide 6 https://documentation.wazuh.com

Wazuh Architecture Deck 1, Slide 7

Main Components Deck 1, Slide 8 https://documentation.wazuh.com

Wazuh Processes Each process is executed with limited privileges Processes are run in a chroot environment where feasible. Processes are executed as unprivileged users where feasible. Wazuh processes on Linux systems are controlled using the relevant service tool (e.g. systemctl, service, initctl) On Windows agents, C:\Program Files (x86)\ossec-agent\win32ui.exe is the management tool for controlling the Wazuh service. Deck 1, Slide 9
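
As an illustration (not from the deck), on a systemd-based Linux host the Wazuh services can be controlled like this; the control-script name varies by version (ossec-control in older releases, wazuh-control in newer ones):

systemctl status wazuh-manager        # check the manager daemons on a manager node
systemctl restart wazuh-agent         # restart the agent service on a Linux agent
/var/ossec/bin/ossec-control status   # older control script; newer releases ship wazuh-control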

Network Communication Agent-Manager connections are compressed and encrypted with per-agent pre-shared keys (AES) over TCP or UDP port 1514. Remoted can directly accept TCP and/or UDP port 514 messages from syslog-sending devices. For more robust centralized syslog collection, syslog server(s) can be used on agent(s) or the manager. Deck 1, Slide 10
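
A minimal sketch (not from the deck) of a manager-side <remote> block in ossec.conf that accepts plain syslog on UDP port 514; the 192.168.1.0/24 source range is a placeholder:

<remote>
  <connection>syslog</connection>
  <port>514</port>
  <protocol>udp</protocol>
  <allowed-ips>192.168.1.0/24</allowed-ips>  <!-- placeholder: subnet allowed to send syslog -->
</remote>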

Wazuh Secure Communication All Wazuh communications are authenticated and encrypted via AES or TLS encryption. Wazuh manager worker nodes use TLS to sync config and state data with the manager master node. Each agent is assigned its own crypto key for reporting to the manager. While significant privilege separation and isolation have been built in, it is still wise to further harden the Wazuh server, since so many other systems will rely on and be influenced by it, particularly if remote commands are enabled. Deck 1, Slide 11

Flood protection The Leaky Bucket Variable rate input Fixed max rate out Various thresholds Flood & Recovery alerts Very configurable Deck 1, Slide 12 https://documentation.wazuh.com
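
The leaky bucket is configured in the agent's ossec.conf via the <client_buffer> section; a minimal sketch with illustrative values (not the deck's lab values):

<client_buffer>
  <disabled>no</disabled>                       <!-- keep the anti-flooding buffer enabled -->
  <queue_size>5000</queue_size>                 <!-- bucket capacity, in events -->
  <events_per_second>500</events_per_second>    <!-- fixed maximum output rate -->
</client_buffer>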

Leaky bucket buffer flooding scenario with alerts and final recovery Deck 1, Slide 13

Lab Exercise 1a Wazuh Server Configuration Deck 1, Slide 14

Lab Exercise 1a Wazuh Server Configuration Deck 1, Slide 15 Do some basic configuration of the Wazuh Manager and authenticate with and query the Wazuh API for the first time. Lab Objective

Lab Exercise 1b Wazuh Web UI Deck 1, Slide 16

Lab Exercise 1b Wazuh Web UI Deck 1, Slide 17 Briefly explore the Wazuh Web UI and the Kibana environment where it is housed. Lab Objective

Agent registration Deck 1, Slide 18

Authd registration service Agents must have a registration allocated in the Wazuh system before they can report in. Authd on the Wazuh manager's master node services agent requests for a registration. This is unauthenticated by default, but at least password protection is recommended. Certificate-based self-registration authentication is also possible. Agents have multiple methods they can use to request a registration. Deck 1, Slide 19

Agents initiating registration Agent auto-enrollment (default) Using agent-auth tool With agent installer via deployment variables Requesting registration directly from Wazuh API (rare) Deck 1, Slide 20
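
A minimal sketch of agent-side auto-enrollment in ossec.conf, assuming a Wazuh 4.x agent and the deck's example manager address; the password path is only needed if authd password protection is enabled:

<client>
  <server>
    <address>siem.company.org</address>
  </server>
  <enrollment>
    <enabled>yes</enabled>
    <manager_address>siem.company.org</manager_address>
    <authorization_pass_path>etc/authd.pass</authorization_pass_path>  <!-- only when authd requires a password -->
  </enrollment>
</client>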

Lab Exercise 1c Auto enrollment Deck 1, Slide 21

Lab Exercise 1c Auto enrollment Deck 1, Slide 22 Register your linux-agent with your Wazuh manager using auto enrollment. Lab Objective

Lab Exercise 1d Deployment variables Deck 1, Slide 23

Lab Exercise 1d Deployment variables Deck 1, Slide 24 Install and register your windows-agent with deployment variables. Lab Objective
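
As an illustration (the package filename, manager address, agent name, and password below are placeholders), the Windows agent MSI accepts deployment variables on the command line:

msiexec.exe /i wazuh-agent-4.x.msi /q ^
  WAZUH_MANAGER="siem.company.org" ^
  WAZUH_REGISTRATION_SERVER="siem.company.org" ^
  WAZUH_REGISTRATION_PASSWORD="CHANGEME" ^
  WAZUH_AGENT_NAME="windows-agent"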

Lab Exercise 1e agent-auth tool Deck 1, Slide 25

Lab Exercise 1e agent-auth tool Deck 1, Slide 26 Register your elastic-agent with the agent-auth tool. Lab Objective https://documentation.wazuh.com
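
A sketch of the agent-auth registration call, assuming the deck's example manager address and an agent named elastic-agent; check agent-auth -h on your version for the exact flags:

/var/ossec/bin/agent-auth -m siem.company.org -A elastic-agent
# add -P '<password>' if the authd registration service is password-protected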

Remotely upgrading Wazuh agents Deck 1, Slide 27 https://documentation.wazuh.com

Ways to remotely upgrade Wazuh agents Instead of manually upgrading Wazuh directly on agent systems, or using yum/apt repositories (which bring the risk of agents being prematurely upgraded to a newer version than what is on your managers), you can push Wazuh agent upgrades out from your Wazuh managers to connected agents, even remote ones. Wazuh API: automatically routes upgrade tasks to the right managers; up to 100 agents queued at a time (must be connected). agent_upgrade (legacy): a CLI tool to upgrade agent(s) in single-manager setups; not for use with Wazuh managed cloud or manager clusters. Limitations: not manageable in the Wazuh web interface (yet); scripting needed for upgrades of hundreds or more agents; upgrade tasks are not queued for disconnected agents. Deck 1, Slide 28
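
A sketch of both upgrade paths, assuming a Wazuh 4.x manager, an example agent id of 005, and a previously obtained API token in $TOKEN:

/var/ossec/bin/agent_upgrade -l        # list connected agents that are outdated
/var/ossec/bin/agent_upgrade -a 005    # push an upgrade to agent 005 (single-manager setups)

# Same operation via the Wazuh API (default port 55000):
curl -k -X PUT "https://localhost:55000/agents/upgrade?agents_list=005" \
  -H "Authorization: Bearer $TOKEN"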

Lab Exercise 1f Agent remote upgrade Deck 1, Slide 29

Lab Exercise 1f Agent remote upgrade Deck 1, Slide 30 From the Wazuh Manager, push an upgrade to the outdated Wazuh Agent on the elastic system. Do this with the agent_upgrade tool and then via the Wazuh API. Upgrade an agent from the manager

General configuration Deck 1, Slide 31 https://documentation.wazuh.com

ossec.conf The primary configuration file on managers and agents Locations: /var/ossec/etc/ossec.conf and C:\Program Files (x86)\ossec-agent\ossec.conf ossec.conf controls the core components of Wazuh: log analysis, file integrity monitoring (syscheck), rootkit detection, active-response, loading of the decoder & rule files, and notifications (e.g. e-mail) Deck 1, Slide 32
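
A minimal manager-side ossec.conf sketch showing a few of these core components; the mail addresses and monitored paths are placeholders, not the deck's lab values:

<ossec_config>
  <global>
    <email_notification>yes</email_notification>
    <smtp_server>mail.company.org</smtp_server>
    <email_from>wazuh@company.org</email_from>
    <email_to>soc@company.org</email_to>
  </global>
  <syscheck>
    <frequency>43200</frequency>                                <!-- FIM scan twice a day -->
    <directories check_all="yes">/etc,/usr/bin,/usr/sbin</directories>
  </syscheck>
  <rootcheck>
    <disabled>no</disabled>                                     <!-- rootkit / anomaly checks -->
  </rootcheck>
</ossec_config>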

internal_options.conf Low-level config file for managers and agents internal_options.conf shows all options, but is overwritten by Wazuh upgrades local_internal_options.conf: copy items from internal_options.conf here to customize them Internal options are for controlling debug level for specific daemons, enabling/disabling grouping of email alerts, enabling/disabling full subject line in email alerts, enabling/disabling remote commands, and various other obscure settings generally best left alone Handle with care! Deck 1, Slide 33
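
An illustrative local_internal_options.conf on a manager; these keys exist in internal_options.conf, but treat the values shown as examples only:

# /var/ossec/etc/local_internal_options.conf
analysisd.debug=2                   # raise analysisd log verbosity (0-2)
wazuh_command.remote_commands=1     # permit remote commands pushed via shared agent.conf
logcollector.remote_commands=1      # permit remote commands in log collection sections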

Agent configuration Deck 1, Slide 34 https://documentation.wazuh.com

agent.conf Location /var/ossec/etc/shared/*GROUP*/agent.conf Multiple possible *GROUP* locations, each servicing a different group of agents. Agents can be in multiple groups Controlled by agent_groups command on the manager. Default group is called default Agents pull it from the Wazuh manager, quickly fetching new versions and automatically restarting to apply them. agent.conf should never be edited on the agent side as changes will quickly be overwritten with the manager’s version. Specific agent config sections are possible on a per-OS, per-profile, and per-agent basis, allowing great flexibility. Editable from the Web Interface Deck 1, Slide 35

Agent groups and profiles Important tools for organizing the different configuration settings you will need to use on different groups/types of agents: agent groups configuration profiles Deck 1, Slide 36
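
A sketch of the agent_groups tool mentioned above, with a hypothetical group name and agent id; confirm the flags with agent_groups -h on your version:

/var/ossec/bin/agent_groups -a -g webservers           # create the group "webservers"
/var/ossec/bin/agent_groups -a -i 001 -g webservers    # assign agent 001 to it
/var/ossec/bin/agent_groups -s -i 001                  # show which groups agent 001 belongs to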

Agent configuration profiles In an agent's ossec.conf, the <config-profile> line can include multiple profiles separated by a comma and space. Deck 1, Slide 37 <client> <config-profile>rhel, rhel7</config-profile> <server> <address>siem.company.org</address> </server> </client> Example ossec.conf on agent <agent_config profile="rhel7"> <sca> <policies> <policy>cis_rhel7_linux_rcl.yml</policy> </policies> </sca> </agent_config> Example agent.conf on manager (in sca agent group)

agent.conf large example Deck 1, Slide 38 <agent_config> ... </agent_config> <agent_config os="Linux"> ... </agent_config> <agent_config os="Windows"> ... </agent_config> <agent_config profile="rhel7"> ... </agent_config> <agent_config profile="ubuntu18.04"> ... </agent_config> <agent_config name="alpha"> ... </agent_config> agent.conf (on manager)

Lab Exercise 1g Centralized agent configuration Deck 1, Slide 39

Lab Exercise 1g Centralized agent configuration Deck 1, Slide 40 Configure two agent groups, each with a multi-level agent.conf. Confirm agents are getting and using the config content relevant to them within their group. Practice centralized agent configuration

Mass deployment Deck 1, Slide 41 Things to consider as you plan a mass deployment of Wazuh agents. See lab guide. Mass deployment discussion

Log Analysis Deck 1, Slide 42 https://documentation.wazuh.com

Log Analysis with Wazuh Wazuh's log analysis engine is capable of extracting important fields from a log message, identifying & evaluating the content of a log message, categorizing it by matching specific rules, and consequently generating an alert from it. Deck 1, Slide 43

Log Flow (agent/server) ossec-logcollector on the agent collects the logs ossec-analysisd on the manager analyzes the log entries ossec-maild sends out alerts ossec-execd used for Active Response Deck 1, Slide 44

Stages of Log Analysis Deck 1, Slide 45 Log collection on agents, as defined in <localfile> sections These define a <log_format> that informs pre-decoding For JSON logs, they can also define one or more additional label fields to clearly mark the log type or source, to inform rule-based analysis Pre-decoding extracts basic fields based on the <log_format> value of the source, like program_name from the syslog header Decoding extracts program-specific fields like srcip or username Rule-based analysis of the decoded log One or more instances and types of matching criteria can be applied against individual log fields or the whole log Matching of field values against CDB lists is also supported
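
To tie the decoding and rule stages above together, here is a sketch for a hypothetical "myapp" log source (the rule id, field names, and patterns are illustrative); custom decoders and rules normally live in local_decoder.xml and local_rules.xml:

<!-- /var/ossec/etc/decoders/local_decoder.xml -->
<decoder name="myapp">
  <program_name>myapp</program_name>
</decoder>
<decoder name="myapp-fields">
  <parent>myapp</parent>
  <regex>user (\S+) from (\S+)</regex>
  <order>user, srcip</order>
</decoder>

<!-- /var/ossec/etc/rules/local_rules.xml -->
<group name="myapp,">
  <rule id="100100" level="5">
    <decoded_as>myapp</decoded_as>
    <match>login failed</match>
    <description>myapp: failed login for user $(user).</description>
  </rule>
</group>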

Example <localfile> sections Deck 1, Slide 46 <localfile> <log_format>syslog</log_format> <location>/var/log/messages</location> </localfile> <localfile> <log_format>eventchannel</log_format> <location>Application</location> </localfile> <localfile> <log_format>json</log_format> <location>/var/log/suricata/eve-*.json</location> <label key="@source">suricata</label> </localfile>

Various log samples Deck 1, Slide 47 2016-03-15T15:22:10.078830+01:00 tron su:pam_unix(su-l:auth):authentication failure;logname=tm uid=500 euid=0 tty=pts/0 ruser=tm rhost= user=root 1265939281.764 1 172.16.167.228 TCP_DENIED/403 734 POST http://lbcore1.metacafe.com/test/SystemInfoManager.php - NONE/- text/html [Sun Mar 06 08:52:16 2016] [error] [client 187.172.181.57] Invalid URI in request GET: index.php HTTP/1.0 pam / squid / apache log samples

Example 1/2 Pre-decoding hostname: manager6 program_name: sshd log: Failed password for root from 113.195.145.13 port 19044 ssh2 Deck 1, Slide 48 Dec 5 00:08:49 manager6 sshd[25467]: Failed password for root from 113.195.145.13 port 19044 ssh2 Log

Example 2/2 Decoding decoder: sshd dstuser: root srcip: 113.195.145.13 srcport: 19044 Deck 1, Slide 49 Dec 5 00:08:49 manager6 sshd[25467]: Failed password for root from 113.195.145.13 port 19044 ssh2 Log

Logging alerts to alerts.json When <jsonout_output> is enabled in the manager's ossec.conf, alerts are recorded as JSON records in alerts.json. These are normally shipped by Filebeat to Elasticsearch or by Splunk Universal Forwarder to Splunk. Deck 1, Slide 50 {"timestamp":"2020-12-07T21:44:37.313+0000","rule":{"level":5,"description":"sshd: Reverse lookup error (bad ISP or attack).","id":"5702","firedtimes":58,"mail":false,"groups":["syslog","sshd"],"pci_dss":["11.4"],"gpg13":["4.12"],"gdpr":["IV_35.7.d"],"nist_800_53":["SI.4"],"tsc":["CC6.1","CC6.8","CC7.2","CC7.3"]},"agent":{"id":"000","name":"manager1"},"manager":{"name":"manager1"},"id":"1607377477.3731351","cluster":{"name":"wazuh","node":"master"},"full_log":"Dec 7 21:44:37 ip-10-0-1-1 sshd[22566]: reverse mapping checking getaddrinfo for 190.202.147.253.estatic.cantv.net [190.202.147.253] failed - POSSIBLE BREAK-IN ATTEMPT!","predecoder":{"program_name":"sshd","timestamp":"Dec 7 21:44:37","hostname":"ip-10-0-1-1"},"decoder":{"parent":"sshd","name":"sshd"},"data":{"srcip":"190.202.147.253"},"location":"/var/log/secure"} /var/ossec/logs/alerts/alerts.json

Expanded alerts.json record { "timestamp": "2020-12-07T21:44:37.313+0000", "rule": { "level": 5, "description": "sshd: Reverse lookup error (bad ISP or attack).", "id": "5702", "firedtimes": 58, "mail": false, "groups": [ "syslog", "sshd" ], "pci_dss": [ "11.4" ], "gpg13": [ "4.12" ], "gdpr": [ "IV_35.7.d" ], "nist_800_53": [ "SI.4" ], "tsc": [ "CC6.1", "CC6.8", "CC7.2", "CC7.3" ] }, "agent": { "id": "000", "name": "manager1" }, "manager": { "name": "manager1" }, "id": "1607377477.3731351", "cluster": { "name": "wazuh", "node": "master" }, "full_log": "Dec 7 21:44:37 ip-10-0-1-1 sshd[22566]: reverse mapping checking getaddrinfo for 190.202.147.253.estatic.cantv.net [190.202.147.253] failed - POSSIBLE BREAK-IN ATTEMPT!", ... Deck 1, Slide 51 ... "predecoder": { "program_name": "sshd", "timestamp": "Dec 7 21:44:37", "hostname": "ip-10-0-1-1" }, "decoder": { "parent": "sshd", "name": "sshd" }, "data": { "srcip": "190.202.147.253" }, "location": "/var/log/secure" }

Logging alerts to archives.json Alternatively, when <logall_json> is enabled, all events are logged to archives.json whether or not they match a rule. Such logging may take up a great deal of space but at times is needed in order to discover classes of events that should be tripping rules but are not doing so. Consider at times routing archives.json temporarily to a separate index pattern from wazuh-alerts-*, like to wazuh-archives-*, since you presumably will not want to retain the non-alert events as long. Deck 1, Slide 52 {"timestamp":"2017-12-05T02:51:36+0000","rule":{},"agent":{"id":"000","name":"manager6"},"manager":{"name":"manager6"},"id":"1512442296.149532","full_log":"Dec 5 02:51:35 manager6 sshd[382]: Disconnected from 113.195.145.13 port 48727 [preauth]","predecoder":{"program_name":"sshd","hostname":"manager6"},"decoder":{"name":"sshd"},"location":"/var/log/secure"} /var/ossec/logs/archives/archives.json

JSON Logging Issues Issues to consider with the alerts.json and archives.json files These files are rotated and compressed daily by default. They accumulate indefinitely unless you add a process to delete old files. Sample cron one-liner to remove rotated files older than 7 days each day: 0 2 * * * root find /var/ossec/logs/{alerts,archives} -mtime +7 -exec rm {} \; The archives.json file contains both alerts and non-alerting events, so if you ship both alerts.json and archives.json to Elasticsearch you will double-index all alerts. To split the routing of archives.json across multiple index patterns, the Wazuh Filebeat module must be customized. It is worth keeping the more recent of these rotated files, since in rare instances, you may find it very helpful to be able to re-feed one or more past days' worth of alerts.json or archives.json files back to Elasticsearch. See: https://wazuh.com/blog/recover-your-data-using-wazuh-alert-backups/ Deck 1, Slide 53

Log Analysis and Regulatory Compliance Computer-aided log analysis is a powerful tool for identifying threats or potential problems within a vast stream of collected log events. Many regulatory compliance requirements call for regular review of security logs, which is generally neither feasible nor sustainable without the aid of machine analysis of log events before they are reviewed by human eyes. Furthermore, it can help classify which events need to be stored in order to comply with regulatory requirements, whether or not they represent actionable items. Without this, it would be necessary to store all log events, at an exorbitant cost from both a computational and storage perspective. Deck 1, Slide 54

Log analysis is a requirement for: PCI DSS compliance GDPR compliance HIPAA compliance SOC 2 Trust Service Criteria FISMA compliance SOX compliance NIST 800-53 compliance Deck 1, Slide 55 Why analyze logs?

Compliance mapping in Wazuh rules The ruleset maintained by Wazuh contains mappings to specific compliance requirements. A list of all related Wazuh rules can be found here: https://wazuh.com/resources/Wazuh_PCI_DSS_Guide.pdf https://wazuh.com/resources/Wazuh_GDPR_White_Paper.pdf Deck 1, Slide 56 <rule id="5402" level="3"> <if_sid>5400</if_sid> <regex> ; USER=root ; COMMAND=| ; USER=root ; TSID=\S+ ; COMMAND=</regex> <description>Successful sudo to ROOT executed.</description> <group>pci_dss_10.2.5,pci_dss_10.2.2,gpg13_7.6,gpg13_7.8,gpg13_7.13, gdpr_IV_32.2,hipaa_164.312.b,nist_800_53_AU.14,nist_800_53_AC.7, nist_800_53_AC.6,</group> </rule> PCI-tagged Wazuh rule

Lab Exercise Set 2 Deck 1, Slide 57

Lab Exercise Set 2 Deck 1, Slide 58 Generate a brute-force attack -- Repeatedly attempt to use a wrong password with an agent. Monitor the alerts.log, watching for the generation of the brute-force alert. 2a

Lab Exercise Set 2 Deck 1, Slide 59 Log Analysis: Analyze the log entries resulting from the previous exercise. What is shown and what does it mean? How can you distinguish an attack from a harmless log event? 2b Generate a brute-force attack -- Repeatedly attempt to use a wrong password with an agent. Monitor the alerts.log, watching for the generation of the brute-force alert. 2a

Lab Exercise Set 2 Deck 1, Slide 60 Log Analysis: Analyze the log entries resulting from the previous exercise. What is shown and what does it mean? How can you distinguish an attack from a harmless log event? 2b Generate a brute-force attack -- Repeatedly attempt to use a wrong password with an agent. Monitor the alerts.log, watching for the generation of the brute-force alert. 2a Looking up and tracing Wazuh rules for better understanding of alerts 2c
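
One way to drive the 2a exercise (the hostname and attempt count are placeholders; enter a wrong password at each prompt); the stock sshd brute-force rule is typically id 5712:

# From another host, generate repeated failed SSH logins against the agent
for i in $(seq 1 8); do
  ssh -o PreferredAuthentications=password -o PubkeyAuthentication=no baduser@linux-agent exit
done

# On the Wazuh manager, watch alerts as they fire
tail -f /var/ossec/logs/alerts/alerts.log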

Elastic Stack Deck 1, Slide 61

Elasticsearch Elasticsearch is a highly scalable full-text search and analytics engine, to which data is shipped, and which Kibana accesses as its primary back-end data source. Deck 1, Slide 62 https://www.elastic.co/products/elasticsearch

Kibana Kibana is the web front-end to the data in Elasticsearch. The Wazuh Web UI is installed inside the Kibana environment, adding greatly to the standard Kibana offerings, and tying into the Wazuh API in addition to Elasticsearch for a rich and powerful end user web experience. Deck 1, Slide 63 https://www.elastic.co/products/kibana

Beats family (including Filebeat) Beats is a family of data shippers. Using Filebeat specifically, Wazuh's alerts.json or archives.json data can be sent to Elasticsearch. It also can do data transformation and enrichment, like parsing strings, normalizing field names, and doing geoip lookups. Deck 1, Slide 64 https://www.elastic.co/products/beats

Elastic Stack Integration Deck 1, Slide 65

Elastic Stack Show and Tell Deck 1, Slide 66