About This Presentation
The concept of AI dates back to the mid-20th century, when scientists began exploring the idea of creating machines that could mimic human thought processes. In 1956, the term "Artificial Intelligence" was officially coined at the Dartmouth Conference, marking the beginning of formal AI research. Early AI systems were rule-based and relied on symbolic logic, which made them rigid and unable to adapt to new scenarios.
During the 1970s and 1980s, AI research experienced a setback due to the AI Winter, a period characterized by reduced funding and slow progress. The limitations of hardware and computing power made it difficult to develop complex AI systems. However, researchers continued to work on improving algorithms and data processing techniques.
The Rise of Machine Learning
The breakthrough in AI came with the rise of Machine Learning (ML) in the late 1990s and early 2000s. Instead of relying on hardcoded rules, ML allowed machines to learn from data and improve their performance over time. Neural networks, decision trees, and support vector machines became popular methods for training AI models.
A major milestone was the development of Deep Learning, a subset of ML that uses multiple layers of artificial neurons to process complex data. In 2012, deep learning gained widespread attention when AlexNet, a deep neural network, won the ImageNet competition by achieving unprecedented accuracy in image recognition tasks.
AI in Modern Applications
Today, AI is integrated into various industries, enhancing efficiency and innovation. Some key applications include:
Healthcare: AI-powered algorithms assist in disease diagnosis, drug discovery, and robotic surgeries.
Finance: AI-driven systems help in fraud detection, stock market analysis, and automated trading.
Autonomous Vehicles: Self-driving cars use AI to process sensor data, recognize objects, and make real-time driving decisions.
Natural Language Processing (NLP): AI enables chatbots, virtual assistants (e.g., Siri, Alexa), and machine translation services.
Entertainment: AI personalizes content recommendations on platforms like Netflix, YouTube, and Spotify.
Ethical Considerations and Future of AI
As AI continues to evolve, ethical concerns regarding bias, privacy, and job displacement have emerged. Governments and organizations are working to establish AI regulations to ensure fair and responsible usage.
The future of AI is promising, with ongoing advancements in quantum computing, explainable AI, and general AI. As AI systems become more sophisticated, they will play an even greater role in shaping the world, enhancing human productivity, and solving complex global challenges.
Slide Content
Activity and Motion Detection in Videos
Longin Jan Latecki and Roland Miezianko, Temple University
Dragoljub Pokrajac, Delaware State University
Dover, August 2005
Definition of Motion Detection
•Action of sensing physical movement in a given area
•Motion can be detected by measuring a change in the speed or direction (vector) of an object
Motion Detection
Goals of motion detection
•Identify moving objects
•Detection of unusual activity patterns
•Computing trajectories of moving objects
Applications of motion detection
•Indoor/outdoor security
•Real-time crime detection
•Traffic monitoring
Many intelligent video analysis systems are based
on motion detection.
Two Approaches to Motion Detection
•Optical Flow
–Compute motion within a region or the frame as a whole (see the sketch below)
•Change detection
–Detect objects within a scene
–Track objects across a number of frames
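To make the first approach concrete, the following is a minimal sketch of dense optical flow with OpenCV's Farneback method; the video file name, the flow parameters, and the motion threshold are illustrative assumptions, not values from the slides.

```python
# Sketch of the optical-flow approach: dense Farneback flow between
# consecutive frames, with the flow magnitude used as a per-pixel motion
# indicator. "video.avi" and the threshold are placeholders.
import cv2
import numpy as np

cap = cv2.VideoCapture("video.avi")
ok, prev = cap.read()
if not ok:
    raise SystemExit("could not read video")
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # One (dx, dy) displacement vector per pixel.
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    magnitude = np.linalg.norm(flow, axis=2)
    moving = magnitude > 1.0          # pixels moving faster than ~1 px/frame
    print("moving pixels:", int(moving.sum()))
    prev_gray = gray

cap.release()
```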
Background Subtraction
•Uses a reference background image for comparison purposes.
•The current image (containing the target object) is compared to the reference image pixel by pixel.
•Places where there are differences are detected and classified as moving objects.
Motivation: a simple difference of two images shows the moving objects
[Figure: a. original scene; b. same scene later; subtraction of scene a from scene b; subtracted image with a threshold of 100]
Static Scene Object Detection and Tracking
•Model the background and subtract to obtain an object mask
•Filter to remove noise
•Group adjacent pixels to obtain objects
•Track objects between frames to develop trajectories
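The last three steps can be sketched roughly as follows, assuming a binary foreground mask fg_mask has already been produced by background subtraction; the morphological opening, connected-component grouping, and nearest-centroid matching are common choices used here for illustration, not details taken from the slides.

```python
# Sketch: filter a binary foreground mask, group adjacent pixels into
# objects, and extend trajectories with a naive nearest-centroid match.
import cv2
import numpy as np

def extract_objects(fg_mask, min_area=50):
    # Remove speckle noise with a morphological opening.
    kernel = np.ones((3, 3), np.uint8)
    clean = cv2.morphologyEx(fg_mask, cv2.MORPH_OPEN, kernel)
    # Group adjacent foreground pixels into connected components.
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(clean)
    objects = []
    for i in range(1, n):                        # label 0 is the background
        if stats[i, cv2.CC_STAT_AREA] >= min_area:
            objects.append(tuple(centroids[i]))  # (x, y) centroid
    return objects

def update_trajectories(trajectories, objects, max_dist=30.0):
    # Append each centroid to the nearest existing trajectory, or start
    # a new trajectory if no existing one is close enough.
    for c in objects:
        best, best_d = None, max_dist
        for traj in trajectories:
            d = np.hypot(c[0] - traj[-1][0], c[1] - traj[-1][1])
            if d < best_d:
                best, best_d = traj, d
        if best is None:
            trajectories.append([c])
        else:
            best.append(c)
    return trajectories
```

A real tracker would also have to handle objects that disappear, merge, or split; this sketch only illustrates the grouping and trajectory-building idea.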
Background Modelling
by Michael Knowles
Background Model
After Background Filtering…
Approaches to Background Modeling
•Background Subtraction
•Statistical Methods
(e.g., Gaussian Mixture Model, Stauffer and Grimson 2000)
Background Subtraction:
1. Construct a background image B as the average of a few images
2. For each current frame I, classify individual pixels as foreground if |B-I| > T (threshold)
3. Clean noisy pixels
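These three steps can be sketched with NumPy and OpenCV as follows; the threshold value T and the median-filter cleanup are illustrative assumptions rather than settings given on the slide.

```python
# Sketch of the steps above: average background B, threshold |B - I| > T,
# then clean noisy pixels. T and the cleanup filter are assumptions.
import cv2
import numpy as np

def build_background(frames):
    # Step 1: background image B as the average of a few frames.
    return np.mean(np.stack(frames, axis=0), axis=0)

def foreground_mask(B, I, T=30.0):
    # Step 2: a pixel is foreground if |B - I| > T.
    diff = np.abs(B.astype(np.float32) - I.astype(np.float32))
    mask = (diff > T).astype(np.uint8) * 255
    # Step 3: clean noisy pixels with a median filter.
    return cv2.medianBlur(mask, 5)
```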
[Figure: background subtraction example showing the background image and the current image]
Statistical Methods
•Pixel statistics: average and standard deviation of color and gray level values (e.g., W4 by Haritaoglu, Harwood, and Davis 2000)
•Gaussian Mixture Model (e.g., Stauffer and Grimson 2000)
Gaussian Mixture Model
•Model the color values of a particular pixel as a mixture of Gaussians
•Multiple adaptive Gaussians are necessary to cope with acquisition noise, lighting changes, etc.
•Pixel values that do not fit the background distributions (Mahalanobis distance) are considered foreground
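OpenCV ships a per-pixel Gaussian mixture background subtractor in the spirit of Stauffer and Grimson (the MOG2 variant); a minimal usage sketch follows, with the video file name and parameter values as placeholders rather than settings from the slides.

```python
# Per-pixel Gaussian mixture background model (OpenCV's MOG2).
# Pixels that do not fit the learned background Gaussians come back as
# foreground in fg_mask. "video.avi" is a placeholder.
import cv2

subtractor = cv2.createBackgroundSubtractorMOG2(history=500,
                                                varThreshold=16,
                                                detectShadows=True)
cap = cv2.VideoCapture("video.avi")
while True:
    ok, frame = cap.read()
    if not ok:
        break
    fg_mask = subtractor.apply(frame)   # 255 = foreground, 127 = shadow
cap.release()
```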
Gaussian Mixture Model
[Figure: "Temple1 - RG Distribution of Pixel 172x165" and "Temple1 - RGB Distribution of Pixel 172x165" for block 44x42, pixel 172x165, with an accompanying video]
Proposed Approach: Measuring Texture Change
•Classical approaches to motion detection are based on background subtraction, i.e., a model of the background image is computed, e.g., Stauffer and Grimson (2000)
•Our approach does not model any background image.
•We estimate the speed of texture change.
In our system we divide the video plane into disjoint blocks (4x4 pixels) and compute a motion measure for each block.
The motion measure mm(x,y,t) for a given block location (x,y) is a function of time t.
8x8 Blocks
Block size relative to image size
[Figure: block 24x28; 1728 blocks per frame; image size: 36x48 blocks]
Motion Measure Computation
•We use spatial-temporal blocks to represent videos
•Each block consists of N_BLOCK x N_BLOCK pixels from 3 consecutive frames
•Those pixel values are reduced to K principal components using PCA (Karhunen-Loève transform)
•In our applications, N_BLOCK = 4, K = 10
•Thus, we project 48 gray level values to a texture vector with 10 PCA components
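A rough sketch of this projection with scikit-learn's PCA; the synthetic frames, the helper block_vectors, and the choice to learn the PCA basis from the same video are assumptions made for illustration.

```python
# Sketch: collect 4x4x3 spatiotemporal blocks (48 gray values each) and
# project them to K = 10 principal components (Karhunen-Loeve transform).
import numpy as np
from sklearn.decomposition import PCA

N_BLOCK, K = 4, 10

def block_vectors(frames, t):
    """48-component vectors (N_BLOCK x N_BLOCK pixels from frames
    t-1, t, t+1), one per block location."""
    h, w = frames[t].shape
    vecs = []
    for y in range(0, h - N_BLOCK + 1, N_BLOCK):
        for x in range(0, w - N_BLOCK + 1, N_BLOCK):
            cube = np.stack([f[y:y + N_BLOCK, x:x + N_BLOCK]
                             for f in frames[t - 1:t + 2]])
            vecs.append(cube.ravel().astype(np.float32))
    return np.array(vecs)

# Synthetic grayscale frames stand in for a real video here.
frames = [np.random.randint(0, 256, (64, 64)).astype(np.uint8)
          for _ in range(100)]
training = np.vstack([block_vectors(frames, t) for t in range(1, 99)])
pca = PCA(n_components=K).fit(training)
texture = pca.transform(block_vectors(frames, 50))   # shape (n_blocks, 10)
```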
3D Block Projection with PCA (Karhunen-Loève transform)
[Figure: a 4*4*3 spatial-temporal block at location I=24, J=28 and times t-1, t, t+1 forms a 48-component block vector (4*4*3), which is projected to 10 principal components, e.g., -0.5221 -0.0624 -0.1734 -0.2221 -0.2621 -0.4739 -0.4201 -0.4224 -0.0734 -0.1386]
Motion Measure Computation
Texture of spatiotemporal blocks works better than color pixel values:
•More robust
•Faster
We illustrate this with texture trajectories.
[Figure: video frames 499, 624, 863, and 1477]
[Figure: "Trajectory of block (24,8) (Campus 1 video)" in the space of spatiotemporal block vectors; axes PCA 1, PCA 2, PCA 3]
Moving blocks correspond to regions of high local variance, i.e., higher spread.
[Figure: texture trajectory of block I=24, J=28 (Campus 1 video) in PCA 1, PCA 2, PCA 3 coordinates, alongside the standardized PCA components of the RGB pixel values at pixel location (185,217) inside block (24,28) in RGB-PCA 1, RGB-PCA 2, RGB-PCA 3 coordinates; frames 499, 611, 843, 695, 1477, 2482 are marked. Comparison to the trajectory of a pixel inside block (24,8).]
Detection of Moving Objects Based on Local Variation
For each block location (x,y) in the video plane:
•Consider the texture vectors in a symmetric window [t-W, t+W] at time t
•Compute their covariance matrix
•The motion measure is defined as the largest eigenvalue of the covariance matrix
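A minimal sketch of this motion measure for a single block location, assuming its texture vectors over time are already available; the window half-width W and the example data are illustrative.

```python
# Motion measure of one block: the largest eigenvalue of the covariance
# matrix of its texture vectors in the window [t-W, t+W].
import numpy as np

def motion_measure(texture_vectors, t, W=3):
    """texture_vectors: array of shape (T, K) with the K-dimensional
    texture vector of this block at every time step."""
    window = texture_vectors[t - W:t + W + 1]       # (2W+1, K)
    cov = np.cov(window, rowvar=False)              # (K, K) covariance
    return float(np.linalg.eigvalsh(cov).max())     # largest eigenvalue

# Little texture change -> small spread; motion -> large spread.
rng = np.random.default_rng(0)
still = rng.normal(0.0, 0.01, size=(50, 10))
moving = rng.normal(0.0, 1.00, size=(50, 10))
print(motion_measure(still, 25), motion_measure(moving, 25))
```

The largest eigenvalue measures the spread of the texture vectors along their principal direction, which matches the "higher spread" of moving blocks noted on the trajectory slide.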
Dynamic Distribution Learning and Outlier Detection
(1) Detect an outlier when (f(t) - mean(t-1)) / std(t-1) > C1
(2) Switch to a nominal state when (f(t) - mean(t-1)) / std(t-1) < C2
(3) mean(t) = (1 - u)·mean(t-1) + u·f(t)
(4) std(t) = sqrt(σ²(t))
(5) σ²(t) = (1 - u)·σ²(t-1) + u·(f(t) - mean(t-1))²
Update the estimates of the mean and standard deviation only when no outliers are detected.
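A sketch of how equations (1)-(5) might be applied to a motion-measure time series f(t); the constants C1 and C2, the learning rate u, the warm-start of the estimates, and the hysteresis between (1) and (2) are illustrative assumptions based on the reading of the slide given above.

```python
# Dynamic distribution learning with outlier detection: exponentially
# weighted mean/variance of the motion measure f(t), updated only while
# no outlier is detected. C1, C2 and u are illustrative constants.
import numpy as np

def detect_motion(f, u=0.05, C1=3.0, C2=1.0):
    # Warm-start the estimates from the first few samples (an assumption).
    mean = float(np.mean(f[:10]))
    var = float(np.var(f[:10])) + 1e-6
    outlier = np.zeros(len(f), dtype=bool)
    in_outlier_state = False
    for t in range(1, len(f)):
        z = (f[t] - mean) / np.sqrt(var)
        if not in_outlier_state and z > C1:     # (1) detect an outlier
            in_outlier_state = True
        elif in_outlier_state and z < C2:       # (2) back to the nominal state
            in_outlier_state = False
        outlier[t] = in_outlier_state
        if not in_outlier_state:
            # (5), then (3)-(4): update the variance with the old mean, then the mean.
            var = (1 - u) * var + u * (f[t] - mean) ** 2
            mean = (1 - u) * mean + u * f[t]
    return outlier

# Example: a quiet signal with a burst of "motion" in the middle.
rng = np.random.default_rng(1)
f = rng.normal(1.0, 0.1, 200)
f[80:120] += 3.0
print(np.flatnonzero(detect_motion(f)))
```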