State space techniques for discrete control systems.

HugoGustavoGonzlezHe 32 views 44 slides Oct 07, 2024

About This Presentation

State space techniques for discrete control systems. Includes controllability, observability, feedback control law, and observers.


Slide Content

State space techniques

Concept of state. State: the minimum set of variables that describes the instantaneous behavior of a dynamical system. The state space is the n-dimensional space constructed using the state variables as coordinates.

Concept of state. Solution trajectory: the orbit traced by the points x(t) in the state space.

Example. Obtain a state-space representation of the following system:

Example. Solution:

State equation. Consider a continuous-time dynamical system ẋ(t) = A x(t) + B u(t). Its solution is given by: x(t) = e^(At) x(0) + ∫₀ᵗ e^(A(t−τ)) B u(τ) dτ

State equation. For a discrete-time system, the input is held constant between sampling instants, i.e. for kT ≤ t < (k+1)T. We may then describe the state transitions at the sampling instants only:

State equation. Assuming a time-invariant system, we obtain the matrices by considering k = 0. Thus:

State equation. The discrete-time equation becomes: x((k+1)T) = Φ(T) x(kT) + Γ(T) u(kT). Remark: Φ(T) and Γ(T) are constant matrices.
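The matrices Φ(T) = e^(AT) and Γ(T) = (∫₀ᵀ e^(As) ds) B can be computed numerically. A minimal sketch in Python with NumPy/SciPy; the slide's system matrices are not reproduced in this export, so a double integrator is used here purely as an assumed stand-in:

```python
import numpy as np
from scipy.linalg import expm

def discretize_zoh(A, B, T):
    """Zero-order-hold discretization: Phi = e^(AT), Gamma = (int_0^T e^(As) ds) B."""
    n, m = A.shape[0], B.shape[1]
    # Augmented-matrix trick: expm(T * [[A, B], [0, 0]]) = [[Phi, Gamma], [0, I]]
    M = np.zeros((n + m, n + m))
    M[:n, :n] = A
    M[:n, n:] = B
    E = expm(M * T)
    return E[:n, :n], E[:n, n:]

# Assumed example: double integrator, sampling period T = 1 s
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
Phi, Gamma = discretize_zoh(A, B, 1.0)
print(Phi)    # [[1, 1], [0, 1]]
print(Gamma)  # [[0.5], [1]]
```

SciPy's `scipy.signal.cont2discrete` with `method='zoh'` performs the same computation.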

Example. Find the discrete-time state equation for the following continuous system; assume a sampling period T = 1 s:

Example.

Discrete transfer function. The transfer function of a discrete-time system can be obtained as: G(z) = C (zI − Φ)⁻¹ Γ + D
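This conversion is a one-liner with `scipy.signal.ss2tf`. A sketch using the same assumed double-integrator matrices (the slide's system is not preserved in this export):

```python
import numpy as np
from scipy.signal import ss2tf

# Assumed discrete-time system (ZOH double integrator, T = 1 s)
Phi = np.array([[1.0, 1.0], [0.0, 1.0]])
Gamma = np.array([[0.5], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])

# G(z) = C (zI - Phi)^(-1) Gamma + D
num, den = ss2tf(Phi, Gamma, C, D)
print(num)  # [[0, 0.5, 0.5]]  -> numerator 0.5 z + 0.5
print(den)  # [1, -2, 1]       -> denominator z^2 - 2z + 1
```

Here G(z) = 0.5(z + 1)/(z − 1)², the familiar ZOH transfer function of a double integrator.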

Solution for the discrete-time state equation
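The solution x(kT) = Φᵏ x(0) + Σ_{j=0}^{k−1} Φ^(k−1−j) Γ u(jT) is most easily evaluated by iterating the state equation, which produces the same sum term by term. A sketch with assumed matrices:

```python
import numpy as np

def solve_discrete(Phi, Gamma, x0, u_seq):
    """Iterate x(k+1) = Phi x(k) + Gamma u(k); equivalent to the closed-form
    sum x(k) = Phi^k x(0) + sum_{j=0}^{k-1} Phi^(k-1-j) Gamma u(j)."""
    x = np.asarray(x0, dtype=float)
    traj = [x]
    for u in u_seq:
        x = Phi @ x + Gamma @ np.atleast_1d(u)
        traj.append(x)
    return traj

# Assumed system (ZOH double integrator, T = 1 s), unit pulse input
Phi = np.array([[1.0, 1.0], [0.0, 1.0]])
Gamma = np.array([[0.5], [1.0]])
traj = solve_discrete(Phi, Gamma, [0.0, 0.0], [1.0, 0.0])
print(traj[2])  # state after two steps: [1.5, 1.0]
```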

Controllability and Observability

Controllability. Consider the system x((k+1)T) = Φ(T) x(kT) + Γ(T) u(kT). We say the system is controllable if there exists a piecewise-constant control signal u(kT), defined over a finite number of sampling periods, such that the state x(kT) can be transferred from any initial state to the desired final state x_f in n sampling periods.

Controllability. Let us start from the solution of the discrete-time state equation:

Controllability. If the following relationship holds: rank M = rank [Γ  ΦΓ  ⋯  Φⁿ⁻¹Γ] = n, then, for an arbitrary final state x(nT) = x_f, there exists a control sequence u(0), u(T), u(2T), …, u((n−1)T) that satisfies equation (1). The matrix M is called the controllability matrix.
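The rank test is straightforward to code. A sketch, again using the assumed double-integrator matrices:

```python
import numpy as np

def controllability_matrix(Phi, Gamma):
    """M = [Gamma, Phi Gamma, ..., Phi^(n-1) Gamma]; rank M = n <=> controllable."""
    n = Phi.shape[0]
    blocks = [Gamma]
    for _ in range(n - 1):
        blocks.append(Phi @ blocks[-1])
    return np.hstack(blocks)

Phi = np.array([[1.0, 1.0], [0.0, 1.0]])
Gamma = np.array([[0.5], [1.0]])
M = controllability_matrix(Phi, Gamma)
print(M)                          # [[0.5, 1.5], [1, 1]]
print(np.linalg.matrix_rank(M))   # 2 -> controllable
```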

Example. For the following system, find a control law that transfers the system from an arbitrary initial condition x(0) to the origin in, at most, 2 sampling periods (time-optimal control law). Assume T = 1 s.

Example. Solution: first we check controllability; then we solve the controllability equation for the control sequence u(0), u(T).
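The deadbeat computation amounts to solving a linear system: x(2) = Φ²x(0) + ΦΓ u(0) + Γ u(1) = 0. A sketch under an assumed system (the slide's matrices and numbers were not preserved in this export):

```python
import numpy as np

# Hypothetical 2nd-order system and initial condition
Phi = np.array([[1.0, 1.0], [0.0, 1.0]])
Gamma = np.array([[0.5], [1.0]])
x0 = np.array([1.0, -1.0])

# Require x(2) = Phi^2 x0 + Phi Gamma u(0) + Gamma u(1) = 0
S = np.hstack([Phi @ Gamma, Gamma])       # columns multiply [u(0); u(1)]
u = np.linalg.solve(S, -Phi @ Phi @ x0)

# Verify: two steps drive the state to the origin
x = Phi @ x0 + Gamma.flatten() * u[0]
x = Phi @ x + Gamma.flatten() * u[1]
print(u)  # [0.5, 0.5]
print(x)  # ~[0, 0]
```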

State Feedback Control

State feedback control. Consider a discrete-time, time-invariant, linear system x(k+1) = A x(k) + B u(k), where A is a matrix containing the system dynamics and whose eigenvalues are given by the characteristic equation det(zI − A) = 0.

State feedback control. Theorem: it is possible to modify the modes of the system using a state feedback of the form u(k) = −K x(k).

State feedback control. Proof: substituting the feedback gives x(k+1) = (A − BK) x(k), whose eigenvalues are given by det(zI − (A − BK)) = 0. It is now possible to choose K so that these eigenvalues are placed arbitrarily (provided the system is controllable).
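Pole placement is available as `scipy.signal.place_poles`. A sketch using the assumed double-integrator matrices and the desired modes λ = [0.4, 0.6] from the example that follows:

```python
import numpy as np
from scipy.signal import place_poles

# Assumed system (ZOH double integrator, T = 1 s)
Phi = np.array([[1.0, 1.0], [0.0, 1.0]])
Gamma = np.array([[0.5], [1.0]])

# Feedback u(k) = -K x(k) so that eig(Phi - Gamma K) = {0.4, 0.6}
res = place_poles(Phi, Gamma, [0.4, 0.6])
K = res.gain_matrix
print(np.linalg.eigvals(Phi - Gamma @ K))  # ~[0.4, 0.6]
```

Pole placement requires controllability, matching the theorem above.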

Example. Design a control law using state feedback such that the closed-loop system has λ = [0.4, 0.6] as its modes.

Example. Solution: check controllability, then match the desired closed-loop characteristic polynomial to solve for the gain K.

Observability and state observers

Observability. Consider the system x((k+1)T) = Φ x(kT), y(kT) = C x(kT). The system is completely observable if, given the output over a finite number of sampling periods, it is possible to determine the initial state x(0).

Observability From the discrete-time equation solution we have:

Observability. If the following relationship holds: rank N = rank [C; CΦ; ⋯; CΦⁿ⁻¹] = n, then it is possible to determine the initial state from measurements of the output. The matrix N is called the observability matrix.
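The observability test mirrors the controllability test. A sketch with an assumed output matrix C = [1, 0] (measuring the first state only):

```python
import numpy as np

def observability_matrix(Phi, C):
    """N = [C; C Phi; ...; C Phi^(n-1)]; rank N = n <=> observable."""
    n = Phi.shape[0]
    blocks = [C]
    for _ in range(n - 1):
        blocks.append(blocks[-1] @ Phi)
    return np.vstack(blocks)

Phi = np.array([[1.0, 1.0], [0.0, 1.0]])
C = np.array([[1.0, 0.0]])
N = observability_matrix(Phi, C)
print(np.linalg.matrix_rank(N))  # 2 -> observable
```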

State observers

Observers. Consider a discrete-time, time-invariant linear system and the control law u(k) = −K x(k). How can we feed back the state if the state is not measurable?

Observers. We estimate the states that are not measurable. This estimation is called observation, and the reconstruction is done using the output of the system. OBSERVABILITY IS REQUIRED! In this case, the observed state vector is used to generate the control law.

Observers (block-diagram slides; annotation: the state is not measurable).

Observers. The state can be estimated iff the system is observable. Assume the state x(kT) can be approximated by the observer state x̂(kT), with x̂(k+1) = A x̂(k) + B u(k). Let us define an error function e(k) = x(k) − x̂(k).

Observers. If A is stable, e(k) → 0 as k → ∞, but the error dynamics are still governed by A. If A is unstable, the above relationship does not hold, so we use instead: x̂(k+1) = A x̂(k) + B u(k) + Ke (y(k) − C x̂(k)), where Ke is a constant gain matrix.

Observers. Error dynamics: e(k+1) = (A − Ke C) e(k). If the matrix (A − Ke C) is stable, the error goes to zero for any initial error e(0). Ke can be chosen to place the eigenvalues of (A − Ke C) arbitrarily, so the error vanishes as quickly as desired.
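By duality, placing the eigenvalues of (A − Ke C) is the same pole-placement problem on (Aᵀ, Cᵀ). A sketch using the assumed double-integrator matrices and the error-dynamics eigenvalues λ = 0.5 ± j0.5 from the example that follows:

```python
import numpy as np
from scipy.signal import place_poles

# Assumed system (ZOH double integrator, T = 1 s), first state measured
Phi = np.array([[1.0, 1.0], [0.0, 1.0]])
C = np.array([[1.0, 0.0]])

# Duality: eig(Phi - Ke C) = eig(Phi^T - C^T Ke^T), so place on the transposed pair
poles = [0.5 + 0.5j, 0.5 - 0.5j]
Ke = place_poles(Phi.T, C.T, poles).gain_matrix.T
print(np.linalg.eigvals(Phi - Ke @ C))  # ~0.5 +/- j0.5
```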

Example. Consider the system. Design a state observer such that the desired eigenvalues of the error dynamics are λ = 0.5 ± j0.5.

Example Solution:

Example Solution:

Collaborative activity. Consider the system: 1. Design a control law that places the closed-loop poles at λ = [0.1, −0.2]. 2. Assume one state is not measurable; design an observer with λ = 0.5 ± j0.5 governing the error dynamics. 3. Implement the control law using the observed state in Scilab + SciCos (Matlab® + Simulink®).

HW: exercises from Chapter 15 of the textbook.