AI is Hacking You - Digital Workplace Conference Australia 2024


About This Presentation

As presented at the Australian Digital Workplace Conference, Sydney 2024


Slide Content

Michael Noel, CCO

SPONSOR SLIDE

Problem Statement
AI tools are being exploited by cybercriminals to compromise the security of organisations, posing a significant threat to data integrity, privacy, and system functionality. It is important to understand what AI tools and techniques the cybercriminals are using in order to better defend your organisation from attacks.

AI-Powered Phishing Attacks
Cybercriminals can employ machine learning algorithms to create highly convincing phishing emails. These AI-driven phishing emails can be tailored to target specific individuals or organisations, making them more difficult to detect.
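Defenders can turn the same class of machine learning around to score inbound mail. A minimal sketch with scikit-learn follows; the tiny training set and labels are invented placeholders for a real labelled corpus.

```python
# Minimal sketch: a simple phishing-email classifier with scikit-learn.
# The training data below is a hypothetical placeholder; a real deployment
# would train on a large labelled corpus of phishing and legitimate mail.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

emails = [
    "Your account has been suspended, verify your password immediately",
    "Urgent: wire transfer required before close of business today",
    "Agenda attached for Thursday's project stand-up",
    "Lunch menu for the staff cafeteria this week",
]
labels = [1, 1, 0, 0]  # 1 = phishing, 0 = legitimate

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(emails, labels)

# Score a new message: probability that it is phishing.
incoming = ["Please confirm your password to avoid account suspension"]
print(model.predict_proba(incoming)[0][1])
```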

Automated Vulnerability Discovery
AI can scan and identify vulnerabilities in software and systems, then automatically exploit them to gain unauthorised access.

Case Study: 2017 Windows 10 Leak
In 2017, portions of the source code related to Windows 10 were illegally made available on the internet, including code for the USB, storage, and Wi-Fi drivers, as well as some of the Windows 10 Mobile adaptations. The leak also included some of the internal builds compiled for debugging purposes.
Having access to the original source code makes it much easier for cybercriminals to use AI to quickly search for possible zero-day exploits of the kind used by Stuxnet or exposed in the Shadow Brokers/NSA tools leak.

AI-Enhanced Adaptive Malware
Malware authors can use AI to create more sophisticated and evasive malware. For instance, AI can be used to modify malware code in real time to avoid detection by traditional antivirus solutions.

Adversarial Machine Learning
Cybercriminals may use adversarial machine learning techniques to exploit vulnerabilities in AI-based security systems. By crafting input data specifically designed to confuse the AI, they can bypass security measures.
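A toy sketch of the "crafted input" idea, using the well-known fast gradient sign method (FGSM) against a logistic-regression classifier; the weights, input, and threshold are invented for illustration and stand in for a real AI-based security filter.

```python
# Toy illustration of an adversarial (FGSM-style) perturbation against a
# logistic-regression "detector". All weights and the input are invented.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical trained detector: score > 0.5 means "malicious".
w = np.array([0.9, -0.4, 1.3, 0.2])
b = -0.1

x = np.array([0.8, 0.1, 0.9, 0.5])      # sample the detector currently flags
print("original score:", sigmoid(w @ x + b))

# The gradient of the "malicious" score with respect to the input is
# proportional to w, so stepping against its sign lowers the score
# (FGSM with true label y = 1). In this toy example epsilon = 0.8 is
# enough to push the score below the 0.5 threshold.
epsilon = 0.8
x_adv = x - epsilon * np.sign(w)
print("perturbed score:", sigmoid(w @ x_adv + b))
```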

Automated Password Cracking
AI can be used to accelerate the process of brute-force attacks on passwords. Machine learning models can analyze patterns in leaked password databases to generate more effective password guesses.

Generative Adversarial Networks (PassGAN, etc.)
Training models on existing passwords
Models are trained on existing known passwords such as ‘momof4kids’ and then try variations (M0m0f4kidz, mOmof4K1ds, etc.)
PassGAN Example:
◦Trained on 15.7 million passwords from the RockYou breach
◦Effectiveness:
◦81% cracked in less than a month
◦71% in less than a day
◦65% in less than an hour
◦Can guess any 7-character password in six minutes or less.
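To illustrate why such variations add little strength, here is a minimal sketch (not from PassGAN itself; the substitution table and example words are invented) that checks whether a candidate password is just a leet-speak mutation of a known base word.

```python
# Minimal sketch: checking whether a password is merely a leet-speak mutation
# of a known base word. The substitution table and examples are illustrative;
# tools like PassGAN learn such patterns from breached password corpora instead.
from itertools import product

SUBS = {"a": "a@4", "e": "e3", "i": "i1!", "o": "o0", "s": "s$5z", "z": "z2"}

def mutations(word):
    """Yield every leet-speak variant of `word` (case-insensitive base)."""
    pools = [SUBS.get(c.lower(), c.lower()) for c in word]
    for combo in product(*pools):
        yield "".join(combo)

def is_weak_variant(candidate, base_word):
    return candidate.lower() in set(mutations(base_word))

print(is_weak_variant("M0m0f4kidz", "momof4kids"))              # True
print(is_weak_variant("horse-battery-staple-47", "momof4kids")) # False
```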

Credential Stuffing
AI-powered bots can efficiently perform credential stuffing attacks, automatically testing stolen usernames and passwords on multiple websites and services to gain unauthorised access.
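On the defensive side, here is a hypothetical sketch of one common detection heuristic: flag source IPs that attempt many distinct usernames with a very high failure rate. The log format, field names, and thresholds are invented for illustration.

```python
# Minimal sketch: flagging possible credential stuffing from login logs.
# The log records, field names, and thresholds are illustrative assumptions.
from collections import defaultdict

def suspicious_ips(login_events, min_users=20, min_failure_rate=0.9):
    """Return IPs that tried many distinct usernames and mostly failed."""
    users = defaultdict(set)
    attempts = defaultdict(int)
    failures = defaultdict(int)
    for event in login_events:          # each event: {"ip", "username", "success"}
        ip = event["ip"]
        users[ip].add(event["username"])
        attempts[ip] += 1
        failures[ip] += 0 if event["success"] else 1
    return [
        ip for ip in attempts
        if len(users[ip]) >= min_users
        and failures[ip] / attempts[ip] >= min_failure_rate
    ]

# Example: one IP spraying stolen credentials across many accounts.
events = [{"ip": "203.0.113.7", "username": f"user{i}", "success": False} for i in range(50)]
events.append({"ip": "198.51.100.2", "username": "alice", "success": True})
print(suspicious_ips(events))   # ['203.0.113.7']
```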

Vishing
A voice-based phishing scam that uses audio of a person to train an AI to reproduce their voice, then calls a target and convinces them that the impersonated person is in trouble, needs funds, etc.
Relatively easy to perform, with very little audio required from the person being impersonated.

AI-Generated Deepfakes
Deepfake technology can be used to create convincing audio and video impersonations of high-profile individuals within an organisation, potentially leading to social engineering attacks or misinformation campaigns.
AI can be used to generate fake news or propaganda, which can be used to manipulate public opinion and potentially facilitate cyberattacks by diverting attention.

Case Study: Deepfakes
A financial worker at the Hong Kong branch of a multinational company transferred HK$200 million (about US$25 million) to cybercriminals.
He had been invited to a Zoom call where he was the only real human on the call; all other participants were deepfake versions of executives at the company.
This scenario is relatively easy to reproduce with access to as little as 30 seconds of video of a person.

Behavioral Biometrics Spoofing (CAPTCHA Fooling)
AI can be used to mimic a user's behavioral biometrics (keystrokes, mouse movements, etc.) to bypass authentication systems that rely on these unique patterns.
AI is increasingly able to bypass CAPTCHAs using this technique, putting ‘Are you a robot?’ tests at risk of failure.
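For context on how such checks work on the defensive side, here is a minimal sketch that flags suspiciously uniform keystroke timing; the thresholds and sample data are invented, and a check this simple is exactly what the AI mimicry described above can defeat.

```python
# Minimal sketch: flagging suspiciously uniform keystroke timing.
# Thresholds are illustrative; real behavioural biometrics use far richer
# models, and AI-driven mimicry can defeat checks this simple.
import statistics

def looks_automated(keystroke_times_ms, min_stdev_ms=15.0):
    """Return True if inter-keystroke intervals are implausibly regular."""
    intervals = [b - a for a, b in zip(keystroke_times_ms, keystroke_times_ms[1:])]
    if len(intervals) < 5:
        return False               # not enough data to judge
    return statistics.stdev(intervals) < min_stdev_ms

human_typing = [0, 130, 310, 420, 690, 820, 1010, 1150]
bot_typing = [0, 100, 200, 300, 400, 500, 600, 700]
print(looks_automated(human_typing))  # False
print(looks_automated(bot_typing))    # True
```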

AI-Enhanced DDoS Attacks
AI can be used to amplify Distributed Denial of Service (DDoS) attacks by dynamically adjusting attack parameters to evade detection and mitigation.

Case Study: DDoS Army of Toothbrushes
In February of 2024, Fortinet made news by giving an example of how an army of AI-enslaved electric toothbrushes could be press-ganged into a DDoS attack against a Swiss website.
Think about that one the next time you hear an odd noise coming from your bathroom… ☺

AI-Driven Social Engineering
AI can be used to analyze vast amounts of publicly available information about individuals to craft highly convincing social engineering attacks, such as spear-phishing emails or impersonation attempts.
AI can also be used to create realistic bot accounts which can be used to drive disinformation narratives.

Case Study: Meliorator Disinfo
Bot-driven social media campaign uncovered by the FBI, CNMF, AIVD, MIVD, DNP, and CCCS.
Thousands of bot accounts created on X (Twitter) using AI tools and used to spread disinformation:
◦Authentic-looking profiles created in bulk
◦Realistic-looking unique content created by each bot
◦Used to spread false narratives to amplify malign foreign influence
Meliorator – AI-enabled bot farm generation and management software
Brigadir – Front-end admin console
Taras – Back-end to control personas/bots through the use of ‘Souls’ and ‘Thoughts’
Faker – Used to generate unique profile photos and biographical information
https://www.ic3.gov/Media/News/2024/240709.pdf

Data Exfiltration and Insider Threats
Machine learning algorithms can be used to identify valuable data within a compromised system and exfiltrate it in a more targeted and stealthy manner.

Machine Learning Evasion
Cybercriminals can train machine learning models to evade detection by security systems, allowing them to sneak past intrusion detection systems and avoid being identified as threats.

Adversarial Attacks
AI can be used to create adversarial inputs that can trick AI-powered security systems, causing them to misclassify malicious activities as benign.

Automated Cryptocurrency Ransomware
AI can automate the process of spreading ransomware and managing ransom payments, making it easier for cybercriminals to extort money from victims.

Fighting Back
HOW TO BETTER PREPARE YOURSELF FOR THE ONSLAUGHT OF AI-ENABLED CYBERCRIMINALS

Fighting Back
Passwords
◦Never Re-use passwords
◦Use Passphrases (see the sketch after this list)
◦Use Password Managers
◦MFA/Passwordless/FIDO
Vishing/Deepfakes
◦Beware of PID Sharing
◦Create ‘Safe’ words for family
Data Integrity
◦Backup Data
◦Update Software
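A minimal sketch of the passphrase advice above, using Python's standard-library secrets module; the word list is a tiny illustrative placeholder for a proper diceware-style list.

```python
# Minimal sketch: generating a random passphrase with the standard-library
# `secrets` module. The word list is a tiny placeholder; use a proper
# diceware-style list of several thousand words in practice.
import secrets

WORDS = [
    "koala", "harbour", "granite", "velvet", "monsoon", "lantern",
    "quokka", "saffron", "glacier", "thistle", "dingo", "orchid",
]

def passphrase(num_words=5, separator="-"):
    return separator.join(secrets.choice(WORDS) for _ in range(num_words))

print(passphrase())   # e.g. "quokka-lantern-granite-monsoon-dingo"
```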

Microsoft Security Products that can be
used to counter AI-enabled hackers
Microsoft Sentinel
◦Security Information and Event Management (SIEM) platform
◦Centralised location for logs, alerting, and automated response (see the query sketch after this list)
Microsoft Entra
◦Cloud Infrastructure Entitlement Management
◦Permissions Management/Governance
Microsoft Purview
◦Information Protection / DLP
◦Regulatory / Risk Management
Microsoft Priva
◦Privacy Management
◦Compliance / Subject Rights Requests
Microsoft Intune
◦Mobile Device Management (MDM) Platform
◦Updates, deployment, autopilot, apps, etc.
Microsoft Defender
◦Threat protection across clients, on-prem, and cloud
◦Single pane real-time view of security
Microsoft Security Copilot
◦Artificial Intelligence / Skynet
◦Finally, robotic beings rule the world
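As a concrete example of the Sentinel item above, here is a hedged sketch that pulls failed sign-in counts per IP from a Sentinel-connected Log Analytics workspace using the azure-monitor-query SDK; the workspace ID, threshold, and time window are placeholders, and it assumes Entra ID SigninLogs are being ingested into the workspace.

```python
# Minimal sketch: querying a Sentinel-connected Log Analytics workspace for
# IPs with many failed sign-ins. Workspace ID, threshold, and timespan are
# placeholders; assumes Entra ID SigninLogs are ingested into the workspace.
from datetime import timedelta
from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

client = LogsQueryClient(DefaultAzureCredential())

KQL = """
SigninLogs
| where ResultType != "0"
| summarize FailedSignIns = count() by IPAddress
| where FailedSignIns > 50
| order by FailedSignIns desc
"""

response = client.query_workspace(
    workspace_id="<your-workspace-id>",   # placeholder
    query=KQL,
    timespan=timedelta(days=1),
)

for table in response.tables:
    for row in table.rows:
        print(row)
```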

Microsoft Security Copilot
Artificial Intelligence (AI) based on the OpenAI technologies licensed by Microsoft.
Prompt bar that accepts natural language – you can upload files, URLs, code snippets, etc. to find more information about them.
Immutable audit trail; information from your security tools is kept private. Transparency designed in.
Pin board allows for quick research during security alerts.
Skynet Jr. ;)

Thanks! Questions?
CCO.com
@MichaelTNoel
Linkedin.com/in/michaeltnoel
SharingTheGlobe.com
Slideshare.net/michaeltnoel
Michael Noel