State space techniques for discrete control systems.
HugoGustavoGonzlezHe
About This Presentation
State space techniques for discrete control systems. Includes controllability, observability, feedback control law, and observers.
Slide Content
State space techniques
Concept of state. State: the minimum set of variables that describes the instantaneous behavior of a dynamical system. The state space is the n-dimensional space constructed using the state variables as coordinates.
Concept of state. Solution trajectory: the orbit traced by the points x(t) in the state space.
Example. Obtain a state-space representation of the following system:
Example. Solution:
State equation. Consider a continuous-time dynamical system dx(t)/dt = A x(t) + B u(t). Its solution is given by x(t) = e^{A(t - t0)} x(t0) + ∫_{t0}^{t} e^{A(t - τ)} B u(τ) dτ.
State equation. For a discrete-time system the input is constant between sampling instants, u(t) = u(kT) for kT ≤ t < (k+1)T. We may describe the state transitions only at the sampling instants: x((k+1)T) = e^{AT} x(kT) + ( ∫_{kT}^{(k+1)T} e^{A((k+1)T - τ)} dτ ) B u(kT).
State equation. We assume a time-invariant system, so we may obtain the matrices considering k = 0. Thus Φ(T) = e^{AT} and Γ(T) = ( ∫_0^T e^{Aλ} dλ ) B.
State equation. The discrete-time equation becomes x((k+1)T) = Φ(T) x(kT) + Γ(T) u(kT). Remark: Φ(T) and Γ(T) are constant matrices.
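A minimal sketch of this zero-order-hold discretization in Python (assuming NumPy and SciPy are available; the matrices A and B below are hypothetical placeholders, not the slides' system):

    import numpy as np
    from scipy.linalg import expm

    def discretize(A, B, T):
        # Phi(T) = e^{AT} and Gamma(T) = (integral_0^T e^{A*lam} d lam) B,
        # both obtained from one matrix exponential of the augmented matrix
        # [[A, B], [0, 0]] scaled by T.
        n, m = A.shape[0], B.shape[1]
        M = np.zeros((n + m, n + m))
        M[:n, :n] = A
        M[:n, n:] = B
        E = expm(M * T)
        return E[:n, :n], E[:n, n:]

    # Hypothetical continuous-time system and sampling period
    A = np.array([[0.0, 1.0], [0.0, -1.0]])
    B = np.array([[0.0], [1.0]])
    Phi, Gamma = discretize(A, B, T=1.0)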
Example. Find the discrete-time state equation for the following continuous-time system; assume a sampling period of T = 1 s.
Example
Discrete transfer function. The transfer function of a discrete-time system can be obtained as follows: G(z) = C (zI - Φ)^{-1} Γ + D.
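As a hedged illustration (not from the slides), the formula can be evaluated numerically at a given z; Phi, Gamma, C and D are assumed to be NumPy arrays of compatible sizes:

    import numpy as np

    def discrete_tf(Phi, Gamma, C, D, z):
        # G(z) = C (zI - Phi)^{-1} Gamma + D, evaluated at one complex point z
        n = Phi.shape[0]
        return C @ np.linalg.solve(z * np.eye(n) - Phi, Gamma) + D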
Solution for the discrete-time state equation. Iterating the recursion gives x(kT) = Φ^k x(0) + Σ_{j=0}^{k-1} Φ^{k-1-j} Γ u(jT).
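A small sketch (assuming NumPy and a single-input system; names are illustrative only) that propagates the recursion, which reproduces the closed-form sum above:

    import numpy as np

    def propagate(Phi, Gamma, x0, u_seq):
        # Iterates x((k+1)T) = Phi x(kT) + Gamma u(kT), equivalent to
        # x(kT) = Phi^k x(0) + sum_{j=0}^{k-1} Phi^{k-1-j} Gamma u(jT)
        x = np.array(x0, dtype=float).reshape(-1, 1)
        for u in u_seq:
            x = Phi @ x + Gamma * float(u)
        return x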
Controllability and Observability
Controllability. Consider the system x((k+1)T) = Φ x(kT) + Γ u(kT). We say that the system is controllable if there exists a piecewise-constant control signal u(kT), defined over a finite number of sampling periods, such that the state x(kT) can be transferred from any initial state to the desired final state x_f in n sampling periods.
Controllability. Let us start from the solution of the discrete-time state equation evaluated at k = n: x(nT) = Φ^n x(0) + [Γ  ΦΓ  …  Φ^{n-1}Γ] [u((n-1)T); u((n-2)T); …; u(0)].   (1)
Controllability. If the following relationship holds: rank M = n, where M = [Γ  ΦΓ  Φ²Γ  …  Φ^{n-1}Γ], then, for an arbitrary final state x(nT) = x_f, there exists a control sequence u(0), u(T), u(2T), …, u((n-1)T) that satisfies equation (1). Matrix M is called the controllability matrix.
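A minimal check of this rank condition in Python (NumPy assumed; works for any Phi of size n x n and Gamma of size n x m):

    import numpy as np

    def controllability_matrix(Phi, Gamma):
        # M = [Gamma, Phi Gamma, Phi^2 Gamma, ..., Phi^{n-1} Gamma]
        n = Phi.shape[0]
        return np.hstack([np.linalg.matrix_power(Phi, k) @ Gamma for k in range(n)])

    def is_controllable(Phi, Gamma):
        return np.linalg.matrix_rank(controllability_matrix(Phi, Gamma)) == Phi.shape[0]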
Example. For the following system, find a control law that transfers the system from an arbitrary initial condition x(0) to the origin in, at most, 2 sampling periods (time-optimal control law). Assume T = 1 s.
Example. Solution: First we check controllability; then we use equation (1) with n = 2 and x(2T) = 0, which gives [ΦΓ  Γ] [u(0); u(T)] = -Φ² x(0), or, equivalently, [u(0); u(T)] = -[ΦΓ  Γ]^{-1} Φ² x(0).
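Since the slide's own matrices were not captured, here is a sketch of the same two-step (deadbeat) computation on a hypothetical discretized system (NumPy assumed):

    import numpy as np

    # Hypothetical Phi, Gamma (placeholders for the slide's system), T = 1 s
    Phi = np.array([[1.0, 0.632], [0.0, 0.368]])
    Gamma = np.array([[0.368], [0.632]])
    x0 = np.array([[1.0], [1.0]])          # arbitrary initial state

    # x(2T) = Phi^2 x(0) + Phi Gamma u(0) + Gamma u(T) = 0
    # => [Phi Gamma  Gamma] [u(0); u(T)] = -Phi^2 x(0)
    M2 = np.hstack([Phi @ Gamma, Gamma])
    u0, u1 = np.linalg.solve(M2, -Phi @ Phi @ x0).ravel()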
State Feedback Control
State feedback control. Consider a discrete-time, time-invariant, linear system x(k+1) = Φ x(k) + Γ u(k), where Φ is a matrix containing the system dynamics and whose eigenvalues are given by the characteristic equation det(zI - Φ) = 0.
State feedback control. Theorem: It is possible to modify the behavioral modes of the system using a state feedback of the form u(k) = -K x(k), for k = 0, 1, 2, …
State feedback control. Proof: Substituting the control law gives x(k+1) = (Φ - ΓK) x(k), whose eigenvalues are given by det(zI - Φ + ΓK) = 0. If the system is controllable, K can be chosen so that these eigenvalues are placed arbitrarily.
Example. Design a control law using state feedback such that the closed-loop system has λ = [0.4, 0.6] as behavioral modes.
Example. Solution: Check controllability; then impose det(zI - Φ + ΓK) = (z - 0.4)(z - 0.6), leading to the gain K, and finally to the control law u(k) = -K x(k).
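A hedged sketch of the same pole-placement step using SciPy's place_poles (the Phi and Gamma below are hypothetical, since the slide's system was not captured):

    import numpy as np
    from scipy.signal import place_poles

    Phi = np.array([[1.0, 0.632], [0.0, 0.368]])   # hypothetical system
    Gamma = np.array([[0.368], [0.632]])

    # Gain K such that eig(Phi - Gamma K) = {0.4, 0.6}; control law u(k) = -K x(k)
    K = place_poles(Phi, Gamma, [0.4, 0.6]).gain_matrix
    # Check: np.linalg.eigvals(Phi - Gamma @ K) should be approximately [0.4, 0.6]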
Observability and state observers
Observability. Consider the system x(k+1) = A x(k), y(k) = C x(k). The system is completely observable if, given the output over a finite interval of sampling periods, it is possible to determine the initial state.
Observability. From the solution of the discrete-time equation (with u = 0) we have y(kT) = C A^k x(0) for k = 0, 1, …, n-1.
Observability. If the following relationship holds: rank N = n, where N = [C; CA; CA²; …; CA^{n-1}], then it is possible to determine the initial state from measurements of the output. Matrix N is called the observability matrix.
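The dual rank test, as a minimal NumPy sketch (assumes A is n x n and C is p x n):

    import numpy as np

    def observability_matrix(A, C):
        # N = [C; C A; C A^2; ...; C A^{n-1}]
        n = A.shape[0]
        return np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(n)])

    def is_observable(A, C):
        return np.linalg.matrix_rank(observability_matrix(A, C)) == A.shape[0]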
State observers
Observers. Consider a discrete-time, time-invariant linear system x(k+1) = A x(k) + B u(k), y(k) = C x(k), and the control law u(k) = -K x(k). How can we feed back the state if the state is not measurable?
Observers. To estimate the states that are not measurable! This estimation is called observation, and the reconstruction is done using the output of the system. OBSERVABILITY IS REQUIRED! In this case, the observed state vector is used to generate the control law.
Observers. (Block-diagram slides: the plant state is not measurable, so the observer reconstructs it from the output.)
Observers. The state can be estimated iff the system is observable. Assume that the state x(kT) can be approximated by the state of the observer, x̂(kT), driven by the same input: x̂(k+1) = A x̂(k) + B u(k). Let us define an error function e(k) = x(k) - x̂(k).
Observers. If A is stable, e(k) → 0 as k → ∞, but the error dynamics are still governed by A. If A is unstable, the above relationship does not hold, so we use the following expression instead: x̂(k+1) = A x̂(k) + B u(k) + K_e ( y(k) - C x̂(k) ), where K_e is a constant gain matrix.
Observers. Error dynamics: e(k+1) = (A - K_e C) e(k). If the matrix (A - K_e C) is stable, the error goes to zero for any initial error e(0). It is possible to choose the matrix K_e so that the eigenvalues of (A - K_e C) are placed arbitrarily, making the error go to zero as fast as desired.
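By duality with state feedback, K_e can be computed as a pole-placement gain for the pair (A^T, C^T); a minimal sketch with SciPy (the helper name and inputs are assumptions, not the slides' own):

    import numpy as np
    from scipy.signal import place_poles

    def observer_gain(A, C, desired_eigs):
        # Placing the eigenvalues of (A - Ke C) is equivalent to placing those of
        # (A.T - C.T Ke.T), i.e. a state-feedback design for the pair (A.T, C.T).
        return place_poles(A.T, C.T, desired_eigs).gain_matrix.T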
Example. Consider the following system. Design a state observer considering that the desired eigenvalues of the error dynamics are λ = 0.5 ± j0.5.
Example. Solution:
Example. Solution (continued):
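Because the slide's A and C were not captured, the sketch below designs the observer for a hypothetical observable pair with the stated error-dynamics eigenvalues 0.5 ± j0.5 (SciPy's place_poles assumed):

    import numpy as np
    from scipy.signal import place_poles

    A = np.array([[0.0, 1.0], [-0.16, -1.0]])   # hypothetical system matrix
    C = np.array([[1.0, 0.0]])                  # hypothetical output matrix

    # Ke such that eig(A - Ke C) = {0.5 + j0.5, 0.5 - j0.5}
    Ke = place_poles(A.T, C.T, [0.5 + 0.5j, 0.5 - 0.5j]).gain_matrix.T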
Collaborative activity. Consider the following system: 1. Design a control law to place the closed-loop poles at λ = [0.1, -0.2]. 2. Consider one state as not measurable; design an observer with λ = 0.5 ± j0.5 governing the error dynamics. 3. Implement the control law using the observed state in Scilab + SciCos (Matlab® + Simulink®).
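A hedged end-to-end sketch of the activity in Python (NumPy/SciPy in place of Scilab or Matlab; the plant matrices are hypothetical placeholders, so only the pole locations come from the activity statement):

    import numpy as np
    from scipy.signal import place_poles

    # Hypothetical discrete-time plant (placeholder for the activity's system)
    Phi = np.array([[1.0, 0.632], [0.0, 0.368]])
    Gamma = np.array([[0.368], [0.632]])
    C = np.array([[1.0, 0.0]])

    # 1. State feedback placing the closed-loop poles at 0.1 and -0.2
    K = place_poles(Phi, Gamma, [0.1, -0.2]).gain_matrix
    # 2. Observer gain placing the error-dynamics eigenvalues at 0.5 +/- j0.5
    Ke = place_poles(Phi.T, C.T, [0.5 + 0.5j, 0.5 - 0.5j]).gain_matrix.T

    # 3. Closed loop with the control law driven by the observed state
    x = np.array([[1.0], [0.0]])    # true (unmeasured) plant state
    xh = np.zeros((2, 1))           # observer estimate
    for k in range(20):
        u = -K @ xh                                    # control law uses the estimate
        y = C @ x                                      # measured output y(k)
        x = Phi @ x + Gamma @ u                        # plant update
        xh = Phi @ xh + Gamma @ u + Ke @ (y - C @ xh)  # observer update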