About This Presentation
Notes on surveying adjustments (45 slides, English, 385.75 KB, added Oct 03, 2022).
Principles of Least Squares
Introduction
•In surveying, we often have geometric constraints for our measurements
–Differential leveling loop closure = 0
–Sum of interior angles of a polygon = (n − 2)180°
–Closed traverse: Σlats = Σdeps = 0
•Because of measurement errors, these
constraints are generally not met exactly, so an
adjustment should be performed
Random Error Adjustment
•We assume (hope?) that all systematic errors
have been removed so only random error
remains
•Random error conforms to the laws of
probability
•Should adjust the measurements accordingly
•Why?
Definition of a Residual
If M represents the most probable value of a measured quantity, and z_i represents the i-th measurement, then the i-th residual v_i is:
v_i = M - z_i
Fundamental Principle of Least Squares
In order to obtain most probable values (MPVs), the sum of squares of the residuals must be minimized (see book for derivation):
Σv² = v₁² + v₂² + … + vₙ² → minimum
In the weighted case, the weighted squares of the residuals must be minimized:
Σwv² = w₁v₁² + w₂v₂² + … + wₙvₙ² → minimum
Technically the weighted form shown assumes that the
measurements are independent, but we can handle the
general case involving covariance.
Stochastic Model
•The covariances (including variances) and hence
the weights as well, form the stochastic model
•Even an “unweighted” adjustment assumes that
all observations have equal weight which is also a
stochastic model
•The stochastic model is different from the
mathematical model
•Stochastic models may be determined through sample statistics and error propagation, but are often a priori estimates
Mathematical Model
•The mathematical model is a set of one or more
equations that define an adjustment condition
•Examples are the constraints mentioned earlier
•Models also include collinearity equations in
photogrammetry and the equation of a line in linear
regression
•It is important that the model properly represents reality – for example, the angles of a plane triangle should total 180°, but if the triangle is large, spherical excess causes a systematic error, so a more elaborate model is needed
Types of Models
Conditional and Parametric
•A conditional model enforces geometric conditions on
the measurements and their residuals
•A parametric model expresses equations in terms of
unknowns that were not directly measured, but relate to
the measurements (e.g. a distance expressed by
coordinate inverse)
•Parametric models are more commonly used because it
can be difficult to express all of the conditions in a
complicated measurement network
Observation Equations
•Observation equations are written for the
parametric model
•One equation is written for each observation
•The equation is generally expressed as a
function of unknown variables (such as
coordinates) equals a measurement plus a
residual
•We want more measurements than unknowns
which gives a redundant adjustment
Elementary Example
Consider the following three equations involving two unknowns. If Equations (1) and (2) are solved, x = 1.5 and y = 1.5. However, if Equations (2) and (3) are solved, x = 1.3 and y = 1.1, and if Equations (1) and (3) are solved, x = 1.6 and y = 1.4.
(1) x + y = 3.0
(2) 2x - y = 1.5
(3) x - y = 0.2
If we consider the right-side terms to be measurements, they have errors, and residual terms must be included for consistency.
Example - Continued
x + y - 3.0 = v₁
2x - y - 1.5 = v₂
x - y - 0.2 = v₃
To find the MPVs for x and y we use a least squares solution by minimizing the sum of squares of residuals:
f(x, y) = Σv² = (x + y - 3.0)² + (2x - y - 1.5)² + (x - y - 0.2)²
Example - Continued
To minimize, we take partial derivatives with respect to each of the variables and set them equal to zero, then solve the two equations:
∂f/∂x = 2(x + y - 3.0) + 2(2x - y - 1.5)(2) + 2(x - y - 0.2) = 0
∂f/∂y = 2(x + y - 3.0) + 2(2x - y - 1.5)(-1) + 2(x - y - 0.2)(-1) = 0
These equations simplify to the following normal equations:
6x - 2y = 6.2
-2x + 3y = 1.3
Example - Continued
Solve by matrix methods:
[6 -2; -2 3][x; y] = [6.2; 1.3]
[x; y] = (1/14)[3 2; 2 6][6.2; 1.3] = [1.514; 1.443]
We should also compute residuals:
v₁ = 1.514 + 1.443 - 3.0 = -0.043
v₂ = 2(1.514) - 1.443 - 1.5 = 0.086
v₃ = 1.514 - 1.443 - 0.2 = -0.129
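The worked solution above can be checked numerically. Below is a minimal sketch in plain Python (no libraries) that forms the normal equations for the three observation equations and solves the resulting 2×2 system by Cramer's rule:

```python
# Elementary example: observation equations
#   x + y = 3.0 + v1,  2x - y = 1.5 + v2,  x - y = 0.2 + v3
A = [[1.0, 1.0], [2.0, -1.0], [1.0, -1.0]]   # coefficients
L = [3.0, 1.5, 0.2]                          # measured values

# Build the normal equations N X = A^T L by summing products
n11 = sum(r[0] * r[0] for r in A)            # sum(a^2) = 6
n12 = sum(r[0] * r[1] for r in A)            # sum(ab)  = -2
n22 = sum(r[1] * r[1] for r in A)            # sum(b^2) = 3
t1 = sum(r[0] * l for r, l in zip(A, L))     # sum(al)  = 6.2
t2 = sum(r[1] * l for r, l in zip(A, L))     # sum(bl)  = 1.3

# Solve the 2x2 system by Cramer's rule
det = n11 * n22 - n12 * n12                  # 14
x = (n22 * t1 - n12 * t2) / det              # ~1.514
y = (n11 * t2 - n12 * t1) / det              # ~1.443

# Residuals V = AX - L
V = [r[0] * x + r[1] * y - l for r, l in zip(A, L)]
```

The residuals come out to roughly (-0.043, 0.086, -0.129), matching the hand computation.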
Systematic Formation of Normal Equations
Resultant Equations
Following the derivation in the book, for observation equations of the form ax + by = l + v, the normal equations are:
(Σa²)x + (Σab)y = Σal
(Σab)x + (Σb²)y = Σbl
Example – Systematic Approach
Now let's try the systematic approach to the example.
(1) x + y = 3.0 + v₁
(2) 2x - y = 1.5 + v₂
(3) x - y = 0.2 + v₃
Create a table:
a    b    l    a²   ab   b²   al    bl
1    1    3.0  1    1    1    3.0   3.0
2    -1   1.5  4    -2   1    3.0   -1.5
1    -1   0.2  1    -1   1    0.2   -0.2
               Σ=6  Σ=-2 Σ=3  Σ=6.2 Σ=1.3
Note that this yields the same normal equations.
Matrix Method
Matrix form for linear observation equations:
AX = L + V
Where:
A = [a11 a12 … a1n]
    [a21 a22 … a2n]
    [ …           ]
    [am1 am2 … amn]
X = [x1; x2; …; xn]
L = [l1; l2; …; lm]
V = [v1; v2; …; vm]
Note: m is the number of observations and n is the number of unknowns. For a redundant solution, m > n.
Least Squares Solution
Applying the condition of minimizing the sum of squared residuals:
A^T AX = A^T L   or   NX = A^T L
Solution is:
X = (A^T A)^-1 A^T L = N^-1 A^T L
and residuals are computed from:
V = AX - L
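The matrix recipe X = (A^T A)^-1 A^T L, V = AX - L can be sketched generically in plain Python. The helper names below (transpose, matmul, solve, least_squares) are illustrative, not from any particular library; the normal matrix is solved by Gaussian elimination with partial pivoting:

```python
def transpose(M):
    return [list(col) for col in zip(*M)]

def matmul(M, N):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*N)]
            for row in M]

def solve(N, t):
    """Solve N x = t by Gaussian elimination with partial pivoting."""
    n = len(N)
    M = [row[:] + [ti] for row, ti in zip(N, t)]   # augmented matrix
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            for c in range(i, n + 1):
                M[r][c] -= f * M[i][c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def least_squares(A, L):
    At = transpose(A)
    N = matmul(At, A)                                        # normal matrix
    t = [sum(a * l for a, l in zip(row, L)) for row in At]   # A^T L
    X = solve(N, t)
    V = [sum(a * x for a, x in zip(row, X)) - l for row, l in zip(A, L)]
    return X, V

# The elementary example from earlier gives the same answer:
X, V = least_squares([[1.0, 1.0], [2.0, -1.0], [1.0, -1.0]], [3.0, 1.5, 0.2])
```

For m observations and n unknowns this works for any m > n; only the sizes of A and L change.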
Matrix Form With Weights
Weighted linear observation equations:
WAX = WL + WV
Normal equations:
A^T WAX = NX = A^T WL
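With a diagonal weight matrix the weighted normal equations are easy to form directly. A small sketch, reusing the earlier elementary example; the weight values below are made up purely for illustration:

```python
# Weighted least squares: X = (A^T W A)^-1 A^T W L, W diagonal.
A = [[1.0, 1.0], [2.0, -1.0], [1.0, -1.0]]
L = [3.0, 1.5, 0.2]
w = [1.0, 2.0, 4.0]   # hypothetical weights (diagonal of W)

# Weighted normal equations N X = A^T W L
n11 = sum(wi * r[0] * r[0] for wi, r in zip(w, A))    # 13
n12 = sum(wi * r[0] * r[1] for wi, r in zip(w, A))    # -7
n22 = sum(wi * r[1] * r[1] for wi, r in zip(w, A))    # 7
t1 = sum(wi * r[0] * l for wi, r, l in zip(w, A, L))  # 9.8
t2 = sum(wi * r[1] * l for wi, r, l in zip(w, A, L))  # -0.8

det = n11 * n22 - n12 * n12                           # 42
x = (n22 * t1 - n12 * t2) / det                       # 1.5
y = (n11 * t2 - n12 * t1) / det                       # ~1.386
```

Giving the third equation more weight pulls the solution toward satisfying it, so the result differs from the unweighted (1.514, 1.443).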
Matrix Form – Nonlinear System
We use a Taylor series approximation. We will need the Jacobian matrix and a set of initial approximations.
The observation equations are:
JX = K + V
Where: J is the Jacobian matrix (partial derivatives)
X contains corrections for the approximations
K has observed minus computed values
V has the residuals
The least squares solution is:
X = (J^T J)^-1 J^T K = N^-1 J^T K
Weighted Form – Nonlinear System
The observation equations are:
WJX = WK + WV
The least squares solution is:
X = (J^T WJ)^-1 J^T WK = N^-1 J^T WK
Example 10.2
Determine the least squares solution for the following:
F(x, y) = x + y - 2y² = -4
G(x, y) = x² + y² = 8
H(x, y) = 3x² - y² = 7.7
Use x₀ = 2 and y₀ = 2 for initial approximations.
Example - Continued
Take partial derivatives and form the Jacobian matrix:
∂F/∂x = 1      ∂F/∂y = 1 - 4y
∂G/∂x = 2x     ∂G/∂y = 2y
∂H/∂x = 6x     ∂H/∂y = -2y
J₀ = [1     1 - 4y₀]   [ 1  -7]
     [2x₀   2y₀    ] = [ 4   4]
     [6x₀   -2y₀   ]   [12  -4]
Example - Continued
First iteration:
X = [161 -39; -39 81]^-1 [-3.6; 1.2] = [-0.02125; 0.00458]
Add the corrections to get new approximations and repeat:
x₀ = 2.00 - 0.02125 = 1.97875    y₀ = 2.00 + 0.00458 = 2.00458
Second iteration:
X = [157.61806 -38.75082; -38.75082 81.40354]^-1 [-0.12393; 0.75219] = [0.00168; 0.01004]
Add the new corrections to get better approximations:
x₀ = 1.97875 + 0.00168 = 1.98043    y₀ = 2.00458 + 0.01004 = 2.01462
Further iterations give negligible corrections, so the final solution is:
x = 1.98    y = 2.01
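One pass of this iterative scheme is easy to verify in plain Python. The sketch below reproduces the first iteration of Example 10.2: it evaluates the Jacobian and the observed-minus-computed vector K at (2, 2), forms the normal equations N dX = J^T K, and solves for the corrections:

```python
# First iteration of Example 10.2, starting at (x0, y0) = (2, 2).
x0, y0 = 2.0, 2.0

# Jacobian evaluated at the current approximations
J = [[1.0, 1.0 - 4.0 * y0],    # dF/dx, dF/dy
     [2.0 * x0, 2.0 * y0],     # dG/dx, dG/dy
     [6.0 * x0, -2.0 * y0]]    # dH/dx, dH/dy

# K = observed minus computed values
K = [-4.0 - (x0 + y0 - 2.0 * y0 ** 2),
     8.0 - (x0 ** 2 + y0 ** 2),
     7.7 - (3.0 * x0 ** 2 - y0 ** 2)]

# Normal equations N dX = J^T K, solved by Cramer's rule
n11 = sum(r[0] * r[0] for r in J)            # 161
n12 = sum(r[0] * r[1] for r in J)            # -39
n22 = sum(r[1] * r[1] for r in J)            # 81
t1 = sum(r[0] * k for r, k in zip(J, K))     # -3.6
t2 = sum(r[1] * k for r, k in zip(J, K))     # 1.2
det = n11 * n22 - n12 * n12                  # 11520
dx = (n22 * t1 - n12 * t2) / det             # -0.02125
dy = (n11 * t2 - n12 * t1) / det             # ~0.00458

x0, y0 = x0 + dx, y0 + dy                    # (1.97875, 2.00458)
```

Repeating the same update with the refreshed J and K until the corrections become negligible completes the adjustment.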
Linear Regression
Fitting x, y data points to a straight line: y = mx + b
Observation Equations
m·x_A + b = y_A + v_A
m·x_B + b = y_B + v_B
m·x_C + b = y_C + v_C
m·x_D + b = y_D + v_D
In matrix form, AX = L + V:
[x_A 1]         [y_A]   [v_A]
[x_B 1] [m]  =  [y_B] + [v_B]
[x_C 1] [b]     [y_C]   [v_C]
[x_D 1]         [y_D]   [v_D]
Example 10.3
Fit a straight line to the points in the table. Compute m and b by least squares.
point   x      y
A       3.00   4.50
B       4.25   4.25
C       5.50   5.50
D       8.00   5.50
In matrix form:
[3.00 1]        [4.50]   [v_A]
[4.25 1] [m] =  [4.25] + [v_B]
[5.50 1] [b]    [5.50]   [v_C]
[8.00 1]        [5.50]   [v_D]
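For this 2-unknown case the normal equations reduce to sums of x, y, x², and xy, so the fit can be sketched in a few lines of plain Python:

```python
# Example 10.3: fit y = m*x + b to the four points by least squares.
xs = [3.00, 4.25, 5.50, 8.00]
ys = [4.50, 4.25, 5.50, 5.50]
n = len(xs)

# Normal equations for design-matrix rows [x_i, 1]:
#   (sum x^2) m + (sum x) b = sum xy
#   (sum x)   m +  n      b = sum y
sxx = sum(x * x for x in xs)
sx = sum(xs)
sxy = sum(x * y for x, y in zip(xs, ys))
sy = sum(ys)

det = sxx * n - sx * sx
m = (n * sxy - sx * sy) / det      # ~0.246
b = (sxx * sy - sx * sxy) / det    # ~3.663

# Residuals and standard deviation of unit weight
V = [m * x + b - y for x, y in zip(xs, ys)]
s0 = (sum(v * v for v in V) / (n - 2)) ** 0.5
```

The slope comes out near 0.246 and the intercept near 3.663; s0 feeds the "Standard Deviation of Unit Weight" slide that follows.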
Standard Deviation of Unit Weight
S₀ = √(Σv² / (m - n)) = √(0.47 / 2) = 0.48
Where: m is the number of observations and n is the number of unknowns
Question: What about x-values? Are they observations?
Fitting a Parabola to a Set of Points
Equation: Ax² + Bx + C = y
This is still a linear problem in terms of the unknowns A, B, and C.
Need more than 3 points for a redundant solution.
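Because the unknowns A, B, C enter linearly, the same normal-equation machinery applies with design-matrix rows [x², x, 1]. A sketch with hypothetical data generated from y = 2x² - 3x + 1, so the fit can be checked (four points, three unknowns, hence redundant):

```python
# Fit A*x^2 + B*x + C = y to four points by least squares.
# Data are hypothetical, taken exactly from y = 2x^2 - 3x + 1,
# so the fit should recover A = 2, B = -3, C = 1.
xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 0.0, 3.0, 10.0]

# Design-matrix rows [x^2, x, 1]; normal equations N X = A^T L
rows = [[x * x, x, 1.0] for x in xs]
N = [[sum(r[i] * r[j] for r in rows) for j in range(3)] for i in range(3)]
t = [sum(r[i] * y for r, y in zip(rows, ys)) for i in range(3)]

# Solve the 3x3 system by Gaussian elimination with partial pivoting
M = [N[i][:] + [t[i]] for i in range(3)]
for i in range(3):
    p = max(range(i, 3), key=lambda r: abs(M[r][i]))
    M[i], M[p] = M[p], M[i]
    for r in range(i + 1, 3):
        f = M[r][i] / M[i][i]
        for c in range(i, 4):
            M[r][c] -= f * M[i][c]
X = [0.0, 0.0, 0.0]
for i in range(2, -1, -1):
    X[i] = (M[i][3] - sum(M[i][j] * X[j] for j in range(i + 1, 3))) / M[i][i]

A_, B_, C_ = X   # expect approximately (2, -3, 1)
```

With noisy measurements the same code returns the best-fit coefficients and nonzero residuals.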
Condition Equations
•Establish all independent, redundant
conditions
•Residual terms are treated as unknowns in the
problem
•Method is suitable for “simple” problems
where there is only one condition (e.g. interior
angles of a polygon, horizon closure)
Condition Equation Example
Condition Example - Continued
Note that the angle with the smallest standard deviation has the smallest residual, and the angle with the largest standard deviation has the largest residual.
Observation Example - Continued
X = (A^T WA)^-1 A^T WL = [83°17'44.1"; 134°39'00.2"]
a₃ = 360° - 83°17'44.1" - 134°39'00.2" = 142°03'15.7"
Note that the answer is the same as that obtained with condition equations.
Simple Method for Angular Closure
Given a set of angles and associated variances and a misclosure, C, residuals can be computed by the following:
vᵢ = C · σᵢ² / (σ₁² + σ₂² + … + σₙ²)
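This proportional distribution is one line of arithmetic per angle. A sketch with hypothetical standard deviations and misclosure (the numbers below are made up for illustration):

```python
# Distribute an angular misclosure in proportion to the variances:
#   v_i = C * sigma_i^2 / sum(sigma_j^2)
sigmas = [2.0, 3.0, 6.0]   # hypothetical angle standard deviations (arc-seconds)
C = 9.0                    # hypothetical misclosure (arc-seconds)

total_var = sum(s * s for s in sigmas)        # 4 + 9 + 36 = 49
v = [C * s * s / total_var for s in sigmas]   # residuals

# The residuals absorb the whole misclosure, and the least precise
# angle (largest sigma) receives the largest share.
```

This matches the earlier observation: the angle with the largest standard deviation gets the largest residual.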