

1
Activity and Motion Detection in Videos
Longin Jan Latecki and Roland Miezianko, Temple University
Dragoljub Pokrajac, Delaware State University
Dover, August 2005

Definition of Motion Detection
•Action of sensing physical movement in a given area
•Motion can be detected by measuring change in the speed or vector of an object
2

3
Motion Detection
Goals of motion detection
•Identify moving objects
•Detection of unusual activity patterns
•Computing trajectories of moving objects
Applications of motion detection
•Indoor/outdoor security
•Real-time crime detection
•Traffic monitoring
Many intelligent video analysis systems are based on motion detection.

4
Two Approaches to Motion Detection
•Optical Flow
–Compute motion within a region or the frame as a whole
•Change detection
–Detect objects within a scene
–Track objects across a number of frames

5
Background Subtraction
•Uses a reference background image for comparison purposes.
•Current image (containing target object) is compared to reference image pixel by pixel.
•Places where there are differences are detected and classified as moving objects.
Motivation: simple difference of two images shows moving objects.
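As a rough illustration of this motivation (a sketch, not the presenters' code), the thresholded difference of two grayscale frames might be computed as follows; the frames are assumed to be NumPy arrays, and the threshold of 100 mirrors the example on the next slide:

```python
import numpy as np

def difference_mask(frame_a, frame_b, threshold=100):
    """Mark pixels whose gray-level change between two frames exceeds the threshold."""
    diff = np.abs(frame_a.astype(np.int16) - frame_b.astype(np.int16))
    return diff > threshold  # boolean mask of pixels classified as moving
```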

6
[Figure: a. original scene; b. the same scene later; the subtraction of scene a from scene b; and the subtracted image with a threshold of 100]

7
Static Scene Object Detection and Tracking
•Model the background and subtract to obtain object mask
•Filter to remove noise
•Group adjacent pixels to obtain objects
•Track objects between frames to develop trajectories
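A minimal sketch of the filter/group steps above, assuming a boolean foreground mask has already been obtained by background subtraction; the use of scipy.ndimage and the min_pixels cutoff are illustrative choices rather than details from the presentation:

```python
import numpy as np
from scipy import ndimage

def objects_from_mask(mask, min_pixels=20):
    """Turn a boolean foreground mask into a list of object centroids."""
    cleaned = ndimage.binary_opening(mask)   # filter: remove isolated noise pixels
    labels, n = ndimage.label(cleaned)       # group adjacent pixels into objects
    centroids = []
    for i in range(1, n + 1):
        ys, xs = np.nonzero(labels == i)
        if ys.size >= min_pixels:            # ignore tiny blobs
            centroids.append((xs.mean(), ys.mean()))
    return centroids  # match centroids across frames to develop trajectories
```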

8
Background Modelling
by Michael Knowles

9
Background Model

10
After Background Filtering…

11
Approaches to Background Modeling
•Background Subtraction
•Statistical Methods (e.g., Gaussian Mixture Model, Stauffer and Grimson 2000)
Background Subtraction:
1. Construct a background image B as the average of a few images
2. For each actual frame I, classify individual pixels as foreground if |B - I| > T (threshold)
3. Clean noisy pixels
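A compact sketch of these three steps, assuming the frames are grayscale NumPy arrays; the threshold value and the morphological clean-up are illustrative settings, not the presenters':

```python
import numpy as np
from scipy import ndimage

def build_background(empty_frames):
    """Step 1: construct the background image B as the average of a few frames."""
    return np.mean(np.stack(empty_frames), axis=0)

def classify_pixels(B, I, T=30):
    """Step 2: a pixel is foreground if |B - I| > T."""
    mask = np.abs(B - I.astype(float)) > T
    # Step 3: clean noisy pixels (morphological opening is one possible choice)
    return ndimage.binary_opening(mask)
```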

12

13
Background Subtraction
[Figure: background image and current image]

14
Statistical Methods
•Pixel statistics: average and standard deviation of color and gray level values (e.g., W4 by Haritaoglu, Harwood, and Davis 2000)
•Gaussian Mixture Model (e.g., Stauffer and Grimson 2000)

15
Gaussian Mixture Model
•Model the color values of a particular pixel as a mixture of Gaussians
•Multiple adaptive Gaussians are necessary to cope with acquisition noise, lighting changes, etc.
•Pixel values that do not fit the background distributions (Mahalanobis distance) are considered foreground
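As an illustration only, an off-the-shelf mixture-of-Gaussians background subtractor (OpenCV's MOG2, which follows the Stauffer-Grimson idea but is not the presenters' implementation) can be applied per frame; the file name and parameter values below are placeholders:

```python
import cv2

# One adaptive mixture of Gaussians is maintained per pixel; pixels whose values
# are far from every background Gaussian are returned as foreground.
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16,
                                                detectShadows=False)

cap = cv2.VideoCapture("input.avi")  # placeholder video file
while True:
    ok, frame = cap.read()
    if not ok:
        break
    fg_mask = subtractor.apply(frame)  # 0 = background, 255 = foreground
cap.release()
```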

16
Gaussian Mixture Model
[Figure: Temple1 video, block 44x42, pixel 172x165 - the R-G distribution and the R-G-B distribution of that pixel's color values]

VIDEO
17

18
Proposed Approach
Measuring Texture Change
•Classical approaches to motion detection are based on background subtraction, i.e., a model of the background image is computed, e.g., Stauffer and Grimson (2000)
•Our approach does not model any background image.
•We estimate the speed of texture change.

19
In our system we divide the video plane into disjoint blocks (4x4 pixels) and compute a motion measure for each block.
mm(x,y,t) for a given block location (x,y) is a function of t.

20
8x8 Blocks

21
Block size relative to image size
Block 24x28
1728 blocks per frame
Image size: 36x48 blocks

22
Motion Measure Computation
•We use spatial-temporal blocks to represent videos
•Each block consists of N_BLOCK x N_BLOCK pixels from 3 consecutive frames
•Those pixel values are reduced to K principal components using PCA (Karhunen-Loeve transform)
•In our applications, N_BLOCK = 4, K = 10
•Thus, we project 48 gray level values to a texture vector with 10 PCA components
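A sketch of how such texture vectors might be computed, assuming grayscale NumPy frames; the helper block_vectors and the use of scikit-learn's PCA to learn the 48 -> 10 projection are illustrative, and the presenters' system may derive the projection differently:

```python
import numpy as np
from sklearn.decomposition import PCA

N_BLOCK, K = 4, 10  # block side length and number of principal components

def block_vectors(frames, t):
    """Flatten every N_BLOCK x N_BLOCK x 3-frame block around time t into a 48-vector."""
    stack = np.stack([frames[t - 1], frames[t], frames[t + 1]])  # 3 x H x W
    H, W = frames[t].shape
    vecs = []
    for y in range(0, H - N_BLOCK + 1, N_BLOCK):
        for x in range(0, W - N_BLOCK + 1, N_BLOCK):
            vecs.append(stack[:, y:y + N_BLOCK, x:x + N_BLOCK].ravel())
    return np.array(vecs)  # shape (#blocks, 48)

pca = PCA(n_components=K)
# texture = pca.fit_transform(block_vectors(frames, t))  # one 10-D texture vector per block
```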

23
3D Block Projection with PCA (Karhunen-Loeve transform)
[Figure: a 4*4*3 spatial-temporal block at location I=24, J=28 taken from frames t-1, t, t+1; its 48-component block vector (4*4*3) is projected to 10 principal components, e.g.
-0.5221 -0.0624 -0.1734 -0.2221 -0.2621 -0.4739 -0.4201 -0.4224 -0.0734 -0.1386]

24
Texture of spatiotemporal blocks works better than color pixel values
•More robust
•Faster
We illustrate this with texture trajectories.

25
[Figure: video frames 499, 624, 863, and 1477]

26
[Figure: trajectory of block (24,8) (Campus 1 video) in the space of spatiotemporal block vectors, plotted over PCA 1, PCA 2, and PCA 3]
Moving blocks correspond to regions of high local variance, i.e., higher spread.

27
[Figure: two 3-D trajectories for the Campus 1 video, block I=24, J=28 - one over PCA 1-3 of the block texture vectors and one over RGB-PCA 1-3, with frames 499, 611, 843, 695, 1477, and 2482 marked]
Standardized PCA components of RGB pixel values at pixel location (185,217) that is inside of block (24,28).
Comparison to the trajectory of a pixel inside block (24,8).

28
Detection of Moving Objects Based on Local Variation
For each block location (x,y) in the video plane:
•Consider the texture vectors in a symmetric window [t-W, t+W] around time t
•Compute their covariance matrix
•The motion measure is defined as the largest eigenvalue of the covariance matrix
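A direct sketch of this definition, assuming the 10-dimensional texture vectors of one block location are stacked over time in a NumPy array; the window half-width W = 3 is an illustrative default:

```python
import numpy as np

def motion_measure(texture_vectors, t, W=3):
    """Largest eigenvalue of the covariance of the texture vectors in [t-W, t+W].

    texture_vectors: array of shape (T, K) with the K PCA components of one
    block location at every frame (K = 10 in the presentation).
    """
    window = texture_vectors[t - W:t + W + 1]   # (2W+1) x K
    cov = np.cov(window, rowvar=False)          # K x K covariance matrix
    return float(np.linalg.eigvalsh(cov)[-1])   # eigenvalues come back in ascending order
```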

29
Feature vectors:
4.2000 3.5000 2.6000
4.1000 3.7000 2.8000
3.9000 3.9000 2.9000
4.0000 4.0000 3.0000
4.1000 3.9000 2.8000
4.2000 3.8000 2.7000
4.3000 3.7000 2.6500
Covariance matrix:
0.0089 -0.0120 -0.0096
-0.0120 0.0299 0.0201
-0.0096 0.0201 0.0157
Eigenvalues: 0.0499 0.0035 0.0011
Motion measure: 0.0499
[Figure: the feature vectors plotted in space; one vector is marked as the current time]

30
Feature vectors:
4.3000 3.7000 2.6500
4.4191 3.5944 2.4329
4.1798 3.8415 2.6441
4.2980 3.6195 2.5489
4.2843 3.7529 2.7114
4.1396 3.7219 2.7008
4.3257 3.6078 2.8192
Covariance matrix:
0.0087 -0.0063 -0.0051
-0.0063 0.0081 0.0031
-0.0051 0.0031 0.0154
Eigenvalues: 0.0209 0.0093 0.0020
Motion measure: 0.0209
[Figure: the feature vectors plotted in space; one vector is marked as the current time]

31
Graph of motion measure mm(24,8,:) for Campus 1 video

32
Graph of motion measure mm(40,66) of Sub_IR_2 video
[Figure: two plots over frames 0-600 for block 40x66 of the SubIR2 video - the motion measure and the detected motion]

33
Dynamic Distribution Learning and Outlier Detection
(1) |f(t) - mean(t-1)| / std(t-1) > C1  ->  detect outlier
(2) |f(t) - mean(t-1)| / std(t-1) < C2  ->  switch to a nominal state
(3) mean(t) = (1-u) * mean(t-1) + u * f(t)
(4) std(t) = sqrt(sigma^2(t))
(5) sigma^2(t) = (1-u) * sigma^2(t-1) + u * (f(t) - mean(t-1))^2
Update the estimates of mean and standard deviation only when outliers are not detected.
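A minimal sketch of equations (1)-(5) for a single block, assuming a scalar motion-measure input f(t); the names C1, C2, and u follow the slide, while the numeric defaults and the simplified state handling (only the outlier/no-outlier decision) are illustrative:

```python
import numpy as np

def update_distribution(f_t, mean, var, u=0.05, C1=3.0, C2=1.5):
    """One step of dynamic distribution learning; returns (is_outlier, mean, var)."""
    std = max(np.sqrt(var), 1e-6)          # guard against zero variance
    deviation = abs(f_t - mean) / std
    if deviation > C1:                     # (1) detect outlier
        return True, mean, var             # keep estimates unchanged for outliers
    # (2) deviation < C2 would switch the block back to a nominal state
    new_mean = (1 - u) * mean + u * f_t                 # (3)
    new_var = (1 - u) * var + u * (f_t - mean) ** 2     # (5); std(t) = sqrt(var) per (4)
    return False, new_mean, new_var
```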