S. Elloumi and N. Benhadj Braiek
On Feedback Control Techniques of Nonlinear Analytic Systems
Journal of Applied Research and Technology, vol. 12, no. 3, June 2014, pp. 500-513
Centro de Ciencias Aplicadas y Desarrollo Tecnológico, Distrito Federal, México
ISSN (printed version): 1665-6423
Available at: http://www.redalyc.org/articulo.oa?id=47431368015
On Feedback Control Techniques of Nonlinear Analytic Systems
S. Elloumi* and N. Benhadj Braiek
Advanced Systems Laboratory,
Polytechnic School of Tunisia, University of Carthage, Tunisia.
*[email protected]
ABSTRACT
This paper presents three approaches dealing with the feedback control of nonlinear analytic systems. The first one treats the optimal control resolution for a control-affine nonlinear system using the State-Dependent Riccati Equation (SDRE) method. It aims to solve a nonlinear optimal control problem through a Riccati equation that depends on the state. The second approach treats a procedure for constructing an analytic expression of a nonlinear state feedback solution of an optimal regulation problem with the help of Kronecker tensor notations. The third one deals with the global asymptotic stabilization of nonlinear polynomial systems: the designed state feedback control law quadratically stabilizes the studied systems. Our main contribution in this paper is to carry out a stability analysis for this kind of system and to develop new sufficient conditions of stability. A numerical-simulation-based comparison of the three methods is finally performed.
Keywords: Nonlinear systems, optimal control, State-Dependent Riccati Equation (SDRE), feedback control, stability analysis.
1. Introduction
The optimal control of nonlinear systems is one of
the most challenging and difficult topics in control
theory. It is well known that the classical optimal
control problems can be characterized in terms of
Hamilton-Jacobi Equations (HJE) [1, 2, 3, 4, 5, 6].
The solution to the HJE gives the optimal
performance value function and determines an
optimal control under some smooth assumptions,
but in most cases it is impossible to solve it
analytically. However, and despite recent advances, many unresolved problems still persist, so that practitioners often complain about the inapplicability of contemporary theories. For example, most of the developed techniques have very limited applicability because of the strong conditions imposed on the system [29, 40, 41, 42, 43]. This has led to many methods being proposed in the literature to obtain a suboptimal feedback control for general nonlinear dynamical systems.
The State-Dependent Riccati Equation (SDRE) controller design is widely studied in the literature as a practical approach for nonlinear control problems. This method was first proposed by Pearson in [7] and later expanded by Wernli and Cook in [8]. It was also independently studied by Cloutier et al. in [9, 31, 34]. This approach provides a very effective algorithm for synthesizing nonlinear optimal feedback control which is closely related to the classical linear quadratic regulator. The SDRE control algorithm relies on the solution of a continuous-time Riccati equation at each time update. In fact, its strategy is based on representing the nonlinear system dynamics in a way that resembles a linear structure having state-dependent coefficient (SDC) matrices, and on minimizing a nonlinear performance index having a quadratic-like structure [9, 24, 34, 35, 38]. This state dependence makes the equation much more difficult to solve. An algebraic Riccati equation using the SDC matrices is then solved on-line to give the suboptimal control law. The coefficients of this equation vary with the given point in state space. The algorithm thus involves solving, at a given point in state space, an algebraic state-dependent Riccati equation, or SDRE.
Although the stability of the resulting closed-loop system has not yet been proved theoretically for all kinds of systems, simulation studies have shown that the method can often lead to suitable control laws. Due to its computational simplicity and its satisfactory simulation/experimental results, the SDRE optimal control technique has become an attractive control approach for a class of nonlinear systems. A wide variety of nonlinear control applications using the SDRE techniques are exposed in the literature. These include a double inverted pendulum in real time [26], robotics [12], ducted fan control [37, 38], the problems of optimal flight trajectory for aircraft and space vehicles [22, 30, 32, 36] and even biological systems [10, 11].
Another efficient method to obtain suboptimal feedback control for nonlinear dynamic systems was first proposed by Rotella [33]. A useful notation was developed, based on Kronecker product properties, which allows algebraic manipulations on a general form of nonlinear systems. To employ this notation, we assume that the studied nonlinear system is described by an analytical state space equation, so that it can be transformed into a polynomial model by an expansion approximation. In recent years, there have been many studies in the field of polynomial systems, especially to describe the dynamical behavior of a large set of processes such as electrical machines, power systems and robot manipulators [14, 15, 16, 17, 18]. A lot of work on nonlinear polynomial systems has considered the global and local asymptotic stability study, and many sufficient conditions have been defined and developed in this way [14, 15, 19, 20, 21, 39].
The present paper focuses on the description and comparison of three nonlinear regulators for solving nonlinear feedback control problems: the SDRE technique, an optimal regulation approach for analytic nonlinear systems (presented for the first time by Rotella in [33]) and a quadratic stability control approach. A stability analysis is carried out as well, and new sufficient stability conditions are developed.
The rest of the paper is organized as follows: the
second part is reserved to the description of the
studied systems and the formulation of the
nonlinear optimal control problem. Then, the third
part is devoted to the presentation of approaches
of the optimal control resolution and quadratic
stability control approach, as well as to the
illustration of sufficient conditions for the existence
of solutions to the nonlinear optimal control
problem, in particular by SDRE feedback control.
In section 4 we give the simulation results for the
comparison of the three feedback control
techniques. Finally, conclusions are drawn.
2. Description of the studied systems and problem formulation
We consider an input affine nonlinear continuous
system described by the following state space
representation:
$$\dot{X} = f(X) + g(X)U, \qquad Y = h(X) \tag{1}$$
with associated performance index:

$$J = \frac{1}{2}\int_0^{\infty} \left( X^T Q(X) X + U^T R(X) U \right) dt \tag{2}$$
(2)
where f(X), g(X) and h(X) are nonlinear functions of the state $X \in R^n$, U is the control input and the origin (X = 0) is the equilibrium, i.e., f(0) = 0. The state and input weighting matrices are assumed state dependent, such that $Q: R^n \rightarrow R^{n \times n}$ and $R: R^n \rightarrow R^{m \times m}$. These design parameters satisfy Q(X) > 0 and R(X) > 0 for all X.
The problem can now be formulated as a
minimization problem associated with the
performance index in equation (2):
$$\min_{U(t)} \; \frac{1}{2}\int_0^{\infty} \left( X^T Q(X) X + U^T R(X) U \right) dt \tag{3}$$

subject to $\dot{X} = f(X) + g(X)U, \quad X(0) = X_0$.
The solution of this nonlinear optimal control problem is equivalent to solving an associated Hamilton-Jacobi equation (HJE) [1].
For the simpler linear problem, where $f(X) = A_0 X$, the optimal feedback control is given by $U(X) = -R^{-1} B^T P X$, with P solving the algebraic Riccati equation

$$P A_0 + A_0^T P - P B R^{-1} B^T P + Q = 0.$$
The theories for this linear quadratic regulator (LQR) problem have been established for both the finite-dimensional and infinite-dimensional problems [13]. In addition, stable robust algorithms for solving the Riccati equation have been developed and are well documented in many references in the literature.
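For the linear case just described, the ARE and the resulting LQR feedback can be computed with standard numerical tools. The sketch below uses SciPy's Riccati solver on a hypothetical second-order plant (the matrices are purely illustrative, not taken from the paper):

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Hypothetical linear plant X' = A0 X + B U with quadratic cost
# J = 1/2 * integral of (X^T Q X + U^T R U); values are illustrative only.
A0 = np.array([[0.0, 1.0],
               [-1.0, -1.5]])
B = np.array([[0.0],
              [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])

# P solves the ARE:  P A0 + A0^T P - P B R^-1 B^T P + Q = 0
P = solve_continuous_are(A0, B, Q, R)

# Optimal state feedback gain, U = -K X with K = R^-1 B^T P
K = np.linalg.solve(R, B.T @ P)

# The Riccati residual vanishes and A0 - B K is Hurwitz
residual = P @ A0 + A0.T @ P - P @ B @ np.linalg.solve(R, B.T @ P) + Q
print(np.max(np.abs(residual)))                  # ~0 (numerical tolerance)
print(np.linalg.eigvals(A0 - B @ K).real.max())  # negative
```

The same solver is reused pointwise in the SDRE approach below, with the constant matrices replaced by state-dependent ones.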
For the nonlinear case, the optimal feedback
control is known to be of the form:
U ( X ) = R 1 ( X )g T ( X )
V T ( X )
X
(4)
where the function V is the solution to the
Hamilton-Jacobi-Bellman equation [1, 3]:
$$\frac{\partial V(X)}{\partial X} f(X) - \frac{1}{2}\frac{\partial V(X)}{\partial X} g(X) R^{-1}(X) g^T(X) \frac{\partial V^T(X)}{\partial X} + \frac{1}{2} X^T Q(X) X = 0 \tag{5}$$
However, the HJB equation is itself very difficult to
solve analytically even for the simplest problems.
Therefore, efforts have been made to numerically
approximate the solution of the HJB equation, or to
solve a related problem producing a suboptimal
control, or to use some other processes in order to
obtain a suitable feedback control. The following
section will outline three such methods on
feedback control techniques of nonlinear analytic
systems.
3. Approaches to the feedback control resolution for nonlinear systems

3.1 SDRE approach to the optimal regulation problem
The main problems with existing nonlinear control algorithms can be listed as follows: high computational cost (adaptive control), lack of structure (gain scheduling) and poor applicability (feedback linearization). One method that avoids these problems is the State-Dependent Riccati Equation approach. This method, also known as the Frozen Riccati Equation approach [28, 29], is discussed in detail by Cloutier, D'Souza and Mracek in [35]. It uses extended linearization [27, 31, 35, 8] as the key design concept in formulating the nonlinear optimal control problem. The extended linearization technique, or state-dependent coefficient (SDC) parametrization, consists in factorizing a nonlinear system, essentially input affine, into a linear-like structure which contains SDC matrices.
For system (1), under the assumptions f(0) = 0
and for f (.)  C1 (R n ) [24], we can always find some
continuous matrix valued functions A(X) such that
it has the following state-dependent linear
representation (SDLR):
$$\dot{X} = A(X) X + B(X) U, \qquad Y = C(X) X \tag{6}$$
where $f(X) = A(X)X$ and $g(X) = B(X)$. The matrix-valued function $A: R^n \rightarrow R^{n \times n}$ is found by mathematical factorization and is clearly non-unique when n > 1, and different choices will result in different controls [25].
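The non-uniqueness of the SDC factorization is easy to see on a toy drift (this example is illustrative, not from the paper): the same f(X) admits several A(X) with f(X) = A(X)X.

```python
import numpy as np

# Toy drift f(X) = [x2, -x1 + x1*x2], with f(0) = 0 (illustrative only).
# Two different SDC factorizations f(X) = A(X) X:
def A1(x):
    x1, x2 = x
    return np.array([[0.0, 1.0],
                     [-1.0, x1]])        # puts the x1*x2 term in a22

def A2(x):
    x1, x2 = x
    return np.array([[0.0, 1.0],
                     [-1.0 + x2, 0.0]])  # puts the same term in a21

x = np.array([0.7, -1.3])
f = np.array([x[1], -x[0] + x[0] * x[1]])

# Both parametrizations reproduce the same dynamics at every state...
print(np.allclose(A1(x) @ x, f), np.allclose(A2(x) @ x, f))  # True True

# ...but the matrices themselves differ, so the SDRE gains they induce differ.
print(np.allclose(A1(x), A2(x)))  # False
```

This is exactly why the choice of parametrization matters for the control obtained.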
The SDRE feedback control extends the algebraic Riccati equation (ARE) approach used for LQR problems to the nonlinear regulation problem for the input-affine system (1) with cost functional (2). Indeed, once an SDC form has been found, the SDRE approach reduces to solving an LQR problem at each sampling instant.
To guarantee the existence of such controller, the
conditions in the following definitions must be
satisfied [25].
Definition 1: A(X) is a controllable (stabilizable) parametrization of the nonlinear system in a given region if the pair [A(X), B(X)] is pointwise controllable (stabilizable) in the linear sense for all X in that region.

Definition 2: A(X) is an observable (detectable) parametrization of the nonlinear system in a given region if the pair [C(X), A(X)] is pointwise observable (detectable) in the linear sense for all X in that region.
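Definition 1 can be checked numerically by freezing the state and testing the rank of the controllability matrix over a sampled region. The pair below is a hypothetical two-state illustration, not the paper's example:

```python
import numpy as np

# Hypothetical SDC pair (A(X), B(X)) with n = 2, for illustration only.
def A(x):
    x1, x2 = x
    return np.array([[-1.0 + x1, 1.0],
                     [0.0, -1.5 - x2**2]])

def B(x):
    return np.array([[0.0], [1.0]])

def pointwise_controllable(x):
    # Linear-sense test at the frozen state X: rank [B(X) | A(X)B(X)] == n
    C = np.hstack([B(x), A(x) @ B(x)])
    return np.linalg.matrix_rank(C) == 2

# Sample a region of the state space and test Definition 1 pointwise
grid = [np.array([a, b]) for a in np.linspace(-2, 2, 9)
                         for b in np.linspace(-2, 2, 9)]
print(all(pointwise_controllable(x) for x in grid))  # True over this region
```

For this pair the second column of [B | AB] is [1, -1.5 - x2^2]^T against [0, 1]^T, so the rank is 2 at every state of the region.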
When A(X), B(X) and C(X) are analytical functions of the state vector X, and given these standing assumptions, the state feedback controller is obtained in the form:

$$U(X) = -K(X) X \tag{7}$$
and the state feedback gain for minimizing (2) is:
K ( X ) = R 1 ( X )BT ( X )P ( X )
(8)
where P(X) is the unique symmetric positive-definite solution of the algebraic state-dependent Riccati equation (SDRE):

$$P(X) A(X) + A^T(X) P(X) - P(X) B(X) R^{-1}(X) B^T(X) P(X) + C^T(X) Q(X) C(X) = 0 \tag{9}$$
It is important to note that the existence of the
optimal control for a particular parametrization of
the system is not guaranteed. Furthermore, there
may be an infinite number of parameterizations of
the system; therefore the choice of parametrization
is very important. The other factor which may
determine the existence of a solution to the Riccati
equation is the selection of the Q(X) and R(X)
weighting matrices in the Riccati equation (9).
The greatest advantage of SDRE control is that physical intuition is always present and the designer can directly shape the performance by tuning the weighting matrices Q(X) and R(X). In other words, via the SDRE, the design flexibility of the LQR formulation is directly translated to the control of nonlinear systems. Moreover, Q(X) and R(X) are not only allowed to be constant, but can also vary as functions of the states. In this way, different modes of behavior can be imposed in different regions of the state space [26].
3.1.1 Stability analysis
The SDRE control produces a closed-loop system matrix $A_{CL}(X) = A(X) - B(X)K(X)$ which is pointwise Hurwitz for all X, in particular for X = 0. Therefore, the origin of the closed-loop system is locally asymptotically stable [9, 24]. However, for a nonlinear system, the fact that all eigenvalues of $A_{CL}(X)$ have negative real parts for all $X \in R^n$ does not guarantee global asymptotic stability [26]. Stability of SDRE systems is still an open issue. Global stability results are presented by Cloutier, D'Souza and Mracek in the case where the closed-loop coefficient matrix $A_{CL}(X)$ is assumed to have a special structure [35]. The result is summarized in the following theorem.
Theorem 1: We assume that A(.), B(.), Q(.) and R(.) are $C^1(R^n)$ matrix-valued functions, and that the respective pairs $\{A(X), B(X)\}$ and $\{A(X), Q^{1/2}(X)\}$ are pointwise stabilizable and detectable SDC parametrizations of the nonlinear system (1) for all X. Then, if the closed-loop coefficient matrix $A_{CL}(X)$ is symmetric for all X, the SDRE closed-loop solution is globally asymptotically stable.
Won derived in [3] an SDRE controller for a nonlinear system with a quadratic-form cost function, presented in the following theorem.
Theorem 2: For the system (1), with the cost function (2), we assume that V satisfies the HJ equation (5) and that V(X) is given by a twice continuously differentiable, symmetric and non-negative definite matrix $\mathcal{V}(X)$:

$$V(X) = X^T \mathcal{V}(X) X \tag{10}$$

For the nonlinear system given by (1), the optimal controller that minimizes the cost function (2) is given by:

$$K(X) = -\frac{1}{2} R^{-1}(X) B^T(X) \frac{\partial V(X)}{\partial X} = -\frac{1}{2} R^{-1} B^T(X)\left[ 2\mathcal{V}(X)X + \left(I_n \otimes X^T\right)\frac{\partial \mathcal{V}(X)}{\partial X} X \right] \tag{11}$$

provided that the following conditions are satisfied:

$$A^T(X)\mathcal{V}(X) + \mathcal{V}(X)A(X) + Q(X) - \mathcal{V}(X)B(X)R^{-1}(X)B^T(X)\mathcal{V}(X) = 0 \tag{12}$$

$$R^{-1}(X)B^T(X)\left(I_n \otimes X^T\right)\frac{\partial \mathcal{V}(X)}{\partial X} X = 0 \tag{13}$$

and

$$X^T A^T(X)\left(I_n \otimes X^T\right)\frac{\partial \mathcal{V}(X)}{\partial X} X = 0 \tag{14}$$
where $\otimes$ denotes the Kronecker product, whose definition and properties are detailed in Appendix A.
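The workhorse Kronecker-product identity used throughout the manipulations below is vec(AXB) = (B^T ⊗ A) vec(X), where vec stacks the columns of a matrix. A quick numerical check (illustrative matrices):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
Xm = rng.standard_normal((3, 2))
Bm = rng.standard_normal((2, 4))

# vec(.) stacks the columns of a matrix into one vector (Fortran order)
vec = lambda M: M.flatten(order="F")

# The identity behind the algebraic manipulations: vec(A X B) = (B^T kron A) vec(X)
lhs = vec(A @ Xm @ Bm)
rhs = np.kron(Bm.T, A) @ vec(Xm)
print(np.allclose(lhs, rhs))  # True
```

Note the `order="F"` flag: NumPy flattens row-wise by default, whereas the vec operator of the paper stacks columns.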
3.1.2 Stability analysis: main result

We present now our contribution, which is the development of sufficient conditions guaranteeing the stability of system (6) under the cost function (2). Our analysis is based on the direct method of Lyapunov. Firstly, we return to the optimal feedback control (4) and let:

$$\frac{\partial V(X)}{\partial X} = P(X) X \tag{15}$$

Then the optimal control law can be expressed as:

$$U(X) = -R^{-1}(X) B^T(X) P(X) X \tag{16}$$

where P(X) is the symmetric positive definite matrix solution of the following State-Dependent Riccati Equation:

$$P(X) A(X) + A^T(X) P(X) - P(X) B(X) R^{-1}(X) B^T(X) P(X) + C^T(X) Q(X) C(X) = 0 \tag{17}$$

Let us note that such a symmetric positive definite matrix P(X) exists if, for any X, the triplet (A(X), M(X), N(X)) is stabilizable-detectable, where:

$$M(X) M^T(X) = B(X) R^{-1}(X) B^T(X), \qquad N^T(X) N(X) = C^T(X) Q(X) C(X) \tag{18}$$

So we assume that this condition is satisfied for each $X \in R^n$.

Now let W(X) be the Lyapunov function defined by the following quadratic form:

$$W(X) = X^T P(X) X \tag{19}$$

The global asymptotic stability of the equilibrium state (X = 0) of system (6) is ensured when the time derivative $\dot{W}(X)$ of W(X) is negative definite for all $X \in R^n$. One has:

$$\dot{W}(X) = \dot{X}^T P(X) X + X^T \frac{dP(X)}{dt} X + X^T P(X) \dot{X} \tag{20}$$

The use of expression (19) and the following equality:

$$\frac{dP(X)}{dt} = \left(I_n \otimes \dot{X}^T\right) \frac{\partial P(X)}{\partial X} \tag{21}$$

yields:

$$\dot{W}(X) = X^T \left[ A^T(X) P(X) - 2 P(X) B(X) R^{-1}(X) B^T(X) P(X) + P(X) A(X) + \left(I_n \otimes \dot{X}^T\right) \frac{\partial P(X)}{\partial X} \right] X \tag{22}$$

Since $A^T(X)P(X) - P(X)B(X)R^{-1}B^T(X)P(X) + P(X)A(X) = -C^T(X)Q(X)C(X)$, obtained from the SDRE (17), (22) becomes:

$$\dot{W}(X) = X^T \left[ -C^T(X) Q(X) C(X) - P(X) H(X) P(X) \right] X + X^T \left[ I_n \otimes \left( X^T \left(A^T(X) - P(X) H(X)\right) \right) \right] \frac{\partial P(X)}{\partial X}\, X \tag{23}$$

where $H(X) = B(X) R^{-1}(X) B^T(X)$.

To ensure the asymptotic stability of system (6) with the control law (16), $\dot{W}(X)$ should be negative, which is equivalent to $\Phi(X)$ negative definite, where:

$$\Phi(X) = -C^T(X) Q(X) C(X) - P(X) H(X) P(X) + \left[ I_n \otimes \left( X^T \left(A^T(X) - P(X) H(X)\right) \right) \right] \frac{\partial P(X)}{\partial X} \tag{24}$$

We try now to simplify the manipulation of the matrix $\Phi(X)$ by expressing $\partial P(X)/\partial X$ in terms of P(X). When differentiating the SDRE (17) with respect to the state vector X, we obtain the following expression:

$$\frac{\partial P(X)}{\partial X} A(X) + \left(I_n \otimes P(X)\right) \frac{\partial A(X)}{\partial X} + \frac{\partial A^T(X)}{\partial X} P(X) + \left(I_n \otimes A^T(X)\right) \frac{\partial P(X)}{\partial X} - \frac{\partial P(X)}{\partial X} H(X) P(X) - \left(I_n \otimes \left(P(X) H(X)\right)\right) \frac{\partial P(X)}{\partial X} - \left(I_n \otimes P(X)\right) \frac{\partial H(X)}{\partial X} P(X) + \frac{\partial \Gamma(X)}{\partial X} = 0 \tag{25}$$

with $\Gamma(X) = C^T(X) Q(X) C(X)$, which gives:

$$\frac{\partial P(X)}{\partial X} \left[ A(X) - H(X) P(X) \right] + \left[ I_n \otimes A^T(X) - I_n \otimes \left(P(X) H(X)\right) \right] \frac{\partial P(X)}{\partial X} = \bar{P}(X) \tag{26}$$

with:

$$\bar{P}(X) = \left(I_n \otimes P(X)\right) \frac{\partial H(X)}{\partial X} P(X) - \left(I_n \otimes P(X)\right) \frac{\partial A(X)}{\partial X} - \frac{\partial A^T(X)}{\partial X} P(X) - \frac{\partial \Gamma(X)}{\partial X} \tag{27}$$

To simplify the partial derivative expression $\partial P(X)/\partial X$, we use the 'vec' and 'mat' functions and their properties defined in Appendix A; then (26) becomes:

$$\operatorname{vec} \frac{\partial P(X)}{\partial X} = \left[ I_n \otimes \left[ I_n \otimes A^T(X) - I_n \otimes \left(P(X) H(X)\right) \right] + \left[ A(X) - H(X) P(X) \right] \otimes I_n \right]^{-1} \operatorname{vec} \bar{P}(X) \tag{28}$$

which leads to:

$$\frac{\partial P(X)}{\partial X} = \operatorname{mat}_{(n^2,\, n)} \left[ \left( I_n \otimes \left[ I_n \otimes A^T(X) - I_n \otimes \left(P(X) H(X)\right) \right] + \left[ A(X) - H(X) P(X) \right] \otimes I_n \right)^{-1} \operatorname{vec} \bar{P}(X) \right] \tag{29}$$

and then we can state the following result:

Theorem 3: The system (6) is globally asymptotically stabilizable by the optimal control law (4), with the cost function (2), if the symmetric matrix $\Phi(X)$ defined by (24) is negative definite for all $X \in R^n$.

3.2 Quasi-Optimal Control for Nonlinear Analytic Cubic System

We treat here the procedure presented by Rotella in [33]. It consists in building an analytic expression of a nonlinear state feedback solution of an optimal control problem with the help of the tensor notations (57) and (62), detailed in Appendix A. This state feedback will be expressed as a formal power series with respect to X.

Let us consider the system defined by (1), (57) and (62) with an initial condition X(0). The output function can be expressed by:

$$Y(t) = h(X(t)) \tag{30}$$

where h(.) is a map from $R^q$ into $R^p$. If h(.) is analytic, it leads to the expression:

$$h(X) = \sum_{i=1}^{\infty} H_i X^{[i]} \tag{31}$$

where the $H_i$ are constant matrices of adapted dimensions.

The problem of optimal control is to build a state feedback which minimizes the functional cost (2). To find a solution to the HJB equation (5), Rotella proposed in [33] the determination of an analytic form for $\partial V(X)/\partial X$ based on the following polynomial expression:

$$\frac{\partial V(X)}{\partial X} = \sum_{j=1}^{\infty} P_j X^{[j]} \tag{32}$$

where the $P_j$ are constant matrices of adapted dimensions.

In this paper, we consider a nonlinear cubic system defined by:

$$\dot{X} = F_1 X + F_2 X^{[2]} + F_3 X^{[3]} + B U, \qquad Y = C X \tag{33}$$

since the truncation of the polynomial system at order three can be considered sufficient for nonlinear system modeling. Then the control law of the cubic system (33) can be expressed as:

$$U = -R^{-1} B^T \left( P_1 X + P_2 X^{[2]} + P_3 X^{[3]} \right) \tag{34}$$
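The Kronecker powers X^[i] appearing in the polynomial expansions can be computed directly; the small helper below (illustrative, not the paper's code) also shows the redundancy of X^[i] that motivates the non-redundant power introduced further down:

```python
import numpy as np

# Kronecker powers of the state, X^[i] = X kron X^[i-1], as used in the
# polynomial expansions of the cubic system (illustrative helper).
def kron_power(x, i):
    out = x
    for _ in range(i - 1):
        out = np.kron(x, out)
    return out

x = np.array([2.0, 3.0])
x2 = kron_power(x, 2)   # [x1*x1, x1*x2, x2*x1, x2*x2]
x3 = kron_power(x, 3)   # 8 entries for n = 2

print(x2)        # [4. 6. 6. 9.] -- the monomial x1*x2 appears twice
print(len(x3))   # 8, although only 4 distinct cubic monomials exist
```

This repetition of mixed monomials is what makes some of the matrices built on X^[i] rank-deficient, hence the non-redundant power and the full-rank transfer matrices T_i used when solving for the third-order gain.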
The determination of the control gains $P_1$, $P_2$ and $P_3$ is deduced from [33]. $P_1$ is the gain-matrix solution of the optimal control problem on the linearized system; it is therefore chosen symmetric and solution of the classical Riccati equation:

$$P_1^T F_1 + F_1^T P_1 - P_1^T B R^{-1} B^T P_1 + C^T Q C = 0 \tag{35}$$
The second-order gain matrix $P_2$ is expressed as follows:

$$P_2 = \mathbb{F}_2^{-1} H_2 \tag{36}$$

where:

$$\mathbb{F}_2 = F_1^T - P_1^T B R^{-1} B^T, \qquad H_2 = -P_1^T F_2 - P_1^T B R^{-1} \operatorname{vec}(P_1^T B)$$

The function 'vec' is defined in Appendix A.

The third-order gain matrix $P_3$ is given by the resolution of the following expression:

$$\mathbb{F}_3 \operatorname{vec}(P_3) = H_3 \tag{37}$$

where:

$$H_3 = -\operatorname{vec}(P_1^T F_3) + \left( I_{q^4} + U_{q \times q^3} \right) \operatorname{vec}\!\left( P_2^T B R^{-1} B^T P_2 \right), \qquad \mathbb{F}_3 = \left( I_{q^4} + U_{q \times q^3} \right) \left( I_{q^3} \otimes \left( F_1^T - P_1 B R^{-1} B^T \right) \right)$$

Unfortunately, even if the triplet $(F_1, B, C)$ is stabilizable-detectable, the matrix $(I_{q^4} + U_{q \times q^3})$ is singular. To overcome this problem, we introduce the notation of the non-redundant i-th power $\tilde{X}^{[i]}$ of the state vector X defined in (52). Then, an analytical function A(X) of X can be written in terms of $X^{[j]}$ as before, and in terms of $\tilde{X}^{[j]}$:

$$A(X) = \sum_{j=1}^{3} A_j X^{[j]} = \sum_{j=1}^{3} \tilde{A}_j \tilde{X}^{[j]} \tag{38}$$

Then, in the non-redundant form, (37) must be replaced by the linear equation:

$$\tilde{\mathbb{F}}_3 \operatorname{vec}(\tilde{P}_3) = \tilde{H}_3 \tag{39}$$

with:

$$\tilde{H}_3 = T_4^T H_3, \qquad \tilde{\mathbb{F}}_3 = \bar{T}_3 \left( I_{q^3} \otimes \left( F_1^T - P_1 B R^{-1} B^T \right) \right), \qquad \bar{T}_3 = T_4^T \left( I_{q^4} + U_{q \times q^3} \right) \left( T_3^T \otimes I_q \right)$$

where the matrix $T_3$ is a rectangular full-rank matrix of adapted dimensions.

Finally, we obtain the analytical expression of this optimal state feedback:

$$U^{*}(X) = U_1 X + U_2 X^{[2]} + U_3 X^{[3]} \tag{40}$$

where:

$$U_1 = -R^{-1} \operatorname{vec}^T(B^T P_1), \qquad U_2 = -R^{-1} \operatorname{vec}^T(B^T P_2), \qquad U_3 = -R^{-1} \operatorname{vec}^T(B^T P_3)$$

3.3 Quadratic Stabilizing Control

The approach of quadratic stabilizing control exposed in this paragraph was first presented in [20]. It consists in the development of algebraic criteria for the global stability of analytical nonlinear continuous systems described using the Kronecker product. Based on the use of a quadratic Lyapunov function, sufficient conditions for the global asymptotic stability of the system equilibrium were also developed.

We consider the cubic polynomial nonlinear system defined by equation (33). Our purpose is to determine a polynomial feedback control law:

$$U = K_1 X + K_2 X^{[2]} + K_3 X^{[3]} \tag{41}$$

where $K_1$, $K_2$ and $K_3$ are constant gain matrices which stabilize asymptotically and globally the equilibrium (X = 0) of the considered system. Applying this control law to the open-loop system (33), one obtains the closed-loop system:
$$\dot{X} = \left( A_1 + B K_1 \right) X + \left( A_2 + B K_2 \right) X^{[2]} + \left( A_3 + B K_3 \right) X^{[3]} \tag{42}$$
Using a quadratic Lyapunov function V(X) and computing its time derivative $\dot{V}(X)$ leads to a sufficient condition for the global asymptotic stabilization of the polynomial system, given by the following theorem [20].

Theorem 4: The nonlinear polynomial system defined by equation (33) is globally asymptotically stabilized by the control law (41) if there exist:
• an (n × n) symmetric positive definite matrix P,
• arbitrary parameters $\tau_i \in R$, i = 1, …, σ,
• gain matrices $K_1$, $K_2$, $K_3$,
such that the (η × η) symmetric matrix $\tilde{Q}$ defined by:

$$\tilde{Q} = \Lambda_1^T \left[ D_S(P) M(f) + M(f)^T D_S(P) \right] \Lambda_1 + \Lambda_1^T \left[ D_S(P) D_S(B) M(k) + \left( D_S(P) D_S(B) M(k) \right)^T \right] \Lambda_1 + \operatorname{mat}_{(\eta,\eta)}\!\left( \sum_{i=1}^{\sigma} \tau_i C_i \right) \tag{43}$$

is negative definite, where:

$$D_S(P) = \begin{bmatrix} P & 0 \\ 0 & P \otimes I_n \end{bmatrix}, \qquad D_S(B) = \begin{bmatrix} B & 0 \\ 0 & B \otimes I_n \end{bmatrix},$$

$$M_{11}(A_1) = \begin{bmatrix} \operatorname{mat}_{(1,n)}(A_1^{1T}) \\ \vdots \\ \operatorname{mat}_{(1,n)}(A_1^{nT}) \end{bmatrix}, \qquad M_{12}(A_2) = \begin{bmatrix} \operatorname{mat}_{(1,n)}(A_2^{1T}) \\ \vdots \\ \operatorname{mat}_{(1,n)}(A_2^{nT}) \end{bmatrix}, \qquad M_{22}(A_3) = \begin{bmatrix} \operatorname{mat}_{(n,n^2)}(A_3^{1T}) \\ \vdots \\ \operatorname{mat}_{(n,n^2)}(A_3^{nT}) \end{bmatrix},$$

$$M(f) = \begin{bmatrix} M_{11}(A_1) & M_{12}(A_2) \\ 0 & M_{22}(A_3) \end{bmatrix}$$

with:
- $A_k^i$ the i-th row of the matrix $A_k$,
- $C_i$, i = 1, …, σ, the linearly independent columns of Ω, where σ = rank(Ω) and $\Omega = (I_{\eta^2} + U_{\eta \times \eta})(\mathcal{R} \otimes \mathcal{R} - I_{\eta^2})$,
- $\tau_i$, i = 1, …, σ, arbitrary values,
- $\mathcal{R} = \Lambda_1 \cdot U \cdot H \cdot \Lambda_2$, where $\Lambda_2 = \operatorname{diag}(T_2, T_3, T_4)$ and the block matrices $\Lambda_1$, U and H are built from the matrices $T_i$, from the permutation matrices $U_{n \times n^i}$ and from identity blocks (see [20]),
and with the dimensions $\eta = n_1 + n_2$, $\eta_0 = n + n^2$, $\eta_1 = n^2 + n^3$, $\eta_2 = n^3 + n^4$.

In [39], it was proved that the stabilization problem stated by Theorem 4 can be formulated as an LMI feasibility problem.

Theorem 5: The equilibrium (X = 0) of the system (42) is globally asymptotically stabilizable if there exist:
• an (n × n) symmetric positive definite matrix P,
• arbitrary parameters $\tau_i \in R$, i = 1, …, σ,
• gain matrices $K_1$, $K_2$, $K_3$,
• a real γ > 0,
such that P > 0 and:

$$\begin{bmatrix} \Lambda_1^T \left[ D_S(P) M(f) + M(f)^T D_S(P) \right] \Lambda_1 & \left( D_S(P) \Lambda_1 \right)^T & \left( D_S(G) W(k) \Lambda_1 \right)^T \\ D_S(P) \Lambda_1 & -\gamma^{-1} I & 0 \\ D_S(G) W(k) \Lambda_1 & 0 & -\gamma^{-1} I \end{bmatrix} < 0 \tag{44}$$

with:

$$W(k) = \gamma^{-1} M(k) \tag{45}$$

Thus, a stabilizing control law (41) for the considered polynomial system (33) can be characterized by applying the following procedure:
- Solve the LMI feasibility problem, i.e., find the matrices $D_S(P)$, W(k) and the parameters $\tau_i$ and γ such that the inequalities (44) are verified.
- Extract the gain matrices $K_i$ from the relation $M(k) = \gamma W(k)$.
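As a sanity check of the closed-loop form (42), the sketch below simulates a toy cubic system under a hand-chosen polynomial law (41). The gains here are picked by inspection for illustration; they are not obtained from the LMI procedure of Theorem 5.

```python
import numpy as np

# Toy cubic system X' = F1 X + F2 X^[2] + B u with polynomial feedback
# u = K1 X + K2 X^[2] (hand-chosen illustrative gains, not from the LMI).
F1 = np.array([[0.0, 1.0],
               [0.0, 0.0]])
F2 = np.array([[0.0, 0.0, 0.0, 0.0],
               [1.0, 0.0, 0.0, 0.0]])   # x2' contains an x1^2 term
B = np.array([[0.0], [1.0]])

K1 = np.array([[-2.0, -3.0]])           # makes F1 + B K1 Hurwitz
K2 = np.array([[-1.0, 0.0, 0.0, 0.0]])  # cancels the quadratic term

def closed_loop(x):
    x2 = np.kron(x, x)                  # X^[2]
    u = K1 @ x + K2 @ x2                # polynomial control law (41)
    return F1 @ x + F2 @ x2 + (B @ u).ravel()

# Forward-Euler simulation of the closed loop (42)
x = np.array([1.5, -1.0])
dt = 1e-3
for _ in range(10_000):                 # 10 seconds
    x = x + dt * closed_loop(x)

print(np.linalg.norm(x))                # decays toward 0
```

With these gains the quadratic term is cancelled exactly and the closed loop reduces to a Hurwitz linear system, so the state norm decays to zero.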
4. Simulation results

In this section we compare the performance of the three methods, discussed in the previous section, on a numerical example. We consider a vectorial system defined by the following state equation:

$$\begin{cases} \dot{X}_1 = -X_1 + X_2 + X_1^2 - X_1 X_2 + X_1^3 - X_1^2 X_2 - X_1 X_2^2 - 2 X_2^3 \\ \dot{X}_2 = -X_1 - 1.5 X_2 + X_1^2 + 0.5 X_1 X_2 - X_1^3 + X_1^2 X_2 - 0.5 X_1 X_2^2 - 2 X_2^3 + U \end{cases} \tag{46}$$

For the optimal controls, we focus on minimizing the following criterion:

$$J = \frac{1}{2} \int_0^{\infty} \left( X^T Q(X) X + U^T R(X) U \right) dt \tag{47}$$

with $Q(X) = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}$ and $R(X) = 1$.

4.1 Application of the SDRE approach

For system (46), we choose the following SDC parametrization:

$$A(X) = \begin{bmatrix} -1 + X_1 - X_2 + X_1^2 & 1 - X_1^2 - X_1 X_2 - 2 X_2^2 \\ -1 + X_1 + 0.5 X_2 - X_1^2 & -1.5 + X_1^2 - 0.5 X_1 X_2 - 2 X_2^2 \end{bmatrix}, \qquad B(X) = \begin{bmatrix} 0 \\ 1 \end{bmatrix} \tag{48}$$

The controllability matrix is then:

$$\mathcal{C}(X) = \left[ B(X) \mid A(X) B(X) \right] = \begin{bmatrix} 0 & 1 - X_1^2 - X_1 X_2 - 2 X_2^2 \\ 1 & -1.5 + X_1^2 - 0.5 X_1 X_2 - 2 X_2^2 \end{bmatrix} \tag{49}$$

and it has full rank for all X, which justifies the choice of the SDC matrices A(X) and B(X). After solving the state-dependent Riccati equation (9) and obtaining the positive symmetric matrix P(X), the optimal control law can be written as:

$$U(X) = -\begin{bmatrix} 0 & 1 \end{bmatrix} P(X) \begin{bmatrix} X_1 \\ X_2 \end{bmatrix} \tag{50}$$

We can easily verify that the matrix $\Phi(X)$ given in Theorem 3 is negative definite for all $X \in R^2$, which guarantees the stability of system (46) under the optimal control law (50).

4.2 Application of the polynomial approach

To establish the polynomial optimal control law, given by (34), for system (46), we use the procedure for the determination of the matrices $P_i$ presented in subsection 3.2. Then we obtain:

$$P_1 = \begin{bmatrix} 0.98 & 0.41 \\ 0.41 & 2.068 \end{bmatrix}; \qquad P_2 = \begin{bmatrix} 0.008 & 0.003 & 0.011 & -0.054 \\ 0.009 & -0.003 & -0.011 & 0.006 \end{bmatrix};$$

$$P_3 = \begin{bmatrix} 0.667 & 0.410 & 0 & -0.405 & 0 & 0 & 0 & 0.199 \\ 0.032 & -0.004 & 0 & 0.034 & 0 & 0 & 0 & -0.015 \end{bmatrix}$$

4.3 Application of the feedback quadratic stabilizing control

Solving the LMI problem formulated by Theorem 5, we obtain:

$$\tau_1 = -0.032; \quad \tau_2 = 0.025; \quad \tau_3 = 0.193; \qquad P = \begin{bmatrix} 0.025 & 0.006 \\ 0.006 & 0.007 \end{bmatrix}; \qquad \gamma = 2.024$$

The searched gain matrices, extracted from M(k), are given by:

$$K_1 = \begin{bmatrix} 9.887 & -5.552 \end{bmatrix}; \qquad K_2 = \begin{bmatrix} 2.105 & 0.897 & 0 & 0 \end{bmatrix};$$

$$K_3 = \begin{bmatrix} 0.854 & -4.271 & -1.370 & -8.808 & 0.937 & -1.822 & 4.315 & 1.328 \end{bmatrix}$$

4.4 Numerical simulation

Figure 1 (respectively figure 2) shows the behavior of the first state variable x1(t) (respectively the second state variable x2(t)) of system (46) controlled by the three feedback control laws illustrated in figure 3. Initial conditions were taken sufficiently far from the origin (x1(0) = -5, x2(0) = 5). Under these conditions, the simulations show a satisfactory asymptotic stabilization of the state variables for all approaches. Dealing with the SDRE technique, the simulations lead to the two following direct outcomes:
• Stabilization by the SDRE approach is almost the same as or better than by the other approaches.
• Regarding overshoot damping, the SDRE technique shows the best results compared to both other approaches, with almost no oscillation.

Figure 3. Control signals evolution.
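The SDRE loop of subsection 4.1 can be reproduced with standard tools: freeze the state, solve the ARE (9) for the parametrization (48), and apply the resulting gain. The sign pattern of A(X) below is as read from the scanned source and should be treated as indicative; the check itself (the frozen closed loop is Hurwitz at every sampled state) holds by construction of the ARE solution.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# SDC parametrization (48) of example (46); signs as reconstructed from
# the scanned source, so treat them as indicative.
def A(x):
    x1, x2 = x
    return np.array([
        [-1 + x1 - x2 + x1**2,       1 - x1**2 - x1*x2 - 2*x2**2],
        [-1 + x1 + 0.5*x2 - x1**2,  -1.5 + x1**2 - 0.5*x1*x2 - 2*x2**2]])

B = np.array([[0.0], [1.0]])
Q = np.eye(2)           # Q(X) = I2
R = np.array([[1.0]])   # R(X) = 1

def sdre_gain(x):
    # Freeze the state, solve the ARE (9), return K(X) = R^-1 B^T P(X)
    P = solve_continuous_are(A(x), B, Q, R)
    return np.linalg.solve(R, B.T @ P)

# Pointwise, the frozen closed-loop matrix A(X) - B K(X) is Hurwitz
samples = [np.array([-5.0, 5.0]), np.array([1.0, -2.0]), np.array([0.2, 0.1])]
for x in samples:
    Acl = A(x) - B @ sdre_gain(x)
    print(np.linalg.eigvals(Acl).real.max() < 0)  # True at each sample
```

Integrating the state equation with this state-dependent gain reproduces the kind of closed-loop responses reported in figures 1-3.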
Figure 1. Closed-loop responses of x1 with the control laws.
Figure 2. Closed-loop responses of x2 with the control laws.

5. Conclusion

Our main motivation for this contribution was first to expose the main approaches used in the domain of the stability study for nonlinear systems, then to work on the development of new stability criteria under specific conditions for one technique, and finally to perform a numerical-simulation-based comparison of the three techniques.
The first two approaches are quadratic optimal controls determined via the resolution of the nonlinear Hamilton-Jacobi equation, where the description of affine-control analytical systems with Kronecker powers of vectors allows an analytical approximate determination of the HJE solution. The third approach is a feedback quadratic stabilizing technique based on the Lyapunov direct method and an algebraic development using Kronecker product properties. Focusing on the first quadratic optimal control approach, determined via the resolution of the SDRE, our work led to a new result: the global asymptotic stability of the nonlinear system is guaranteed when some sufficient conditions are verified.

We have then set about some numerical simulations around the three approaches to validate them. One important outcome of these simulations was that the SDRE method works better than the analytic ones, which is expected due to the truncation order considered in the polynomial development of the nonlinear systems. The simulation results have also shown that the original SDRE technique is the easiest to program and to implement.
Further works will consider the extension of this synthesis to large-scale interconnected nonlinear power systems via decentralized control.

References

[1] B.D.O. Anderson and J.B. Moore, "Optimal Control: Linear Quadratic Methods", Dover Publications, 2007.

[2] J.A. Primbs et al., "Nonlinear optimal control: A control Lyapunov function and receding horizon perspective", Asian Journal of Control, vol. 1, no. x, pp. 14-24, 1999.

[3] C.H. Won and S. Biswas, "Optimal control using an algebraic method for control-affine nonlinear systems", International Journal of Control, vol. 80, no. 9, pp. 1491-1502, 2007.

[4] A.A. Agrachev et al., "Nonlinear and Optimal Control Theory", Springer-Verlag Berlin Heidelberg, 2008.

[5] P.D. Roberts and V.M. Becerra, "Optimal control of nonlinear systems represented by differential algebraic equations", in Proceedings of the American Control Conference, Chicago, IL, 2000.

[6] M. Popescu and A. Dumitrache, "On the optimal control of affine nonlinear systems", MPE, vol. 4, pp. 465-475, 2005.

[7] X. Min, "Function Approximation Methods for Optimal Control Problems", Washington University, St. Louis, 2006.

[8] M.D.S. Aliyu, "Nonlinear H-Infinity Control, Hamiltonian Systems and Hamilton-Jacobi Equations", CRC Press, Taylor & Francis Group, 2011.

[9] J.R. Cloutier, "State-dependent Riccati equation techniques: an overview", in Proceedings of the American Control Conference, New Mexico, 1997, pp. 932-936.

[10] M. Itik, M.U. Salamci and S.P. Banks, "SDRE optimal control of drug administration in cancer treatment", Turk J Elec Eng and Comp Sci, vol. 18, no. 5, 2010.

[11] H.T. Banks et al., "A state-dependent Riccati equation-based estimator approach for HIV feedback control", Optimal Control Applications and Methods, vol. 27, no. 2, pp. 93-121, 2006.

[12] E.B. Erdem and A.G. Alleyne, "Experimental real-time SDRE control of an underactuated robot", in Proceedings of the 40th IEEE Conference on Decision and Control, Piscataway, 2001, pp. 219-224.

[13] S.C. Beeler, "Modelling and control of thin film growth in a chemical vapor deposition reactor", PhD thesis, University of North Carolina, Raleigh, 2000.

[14] N. Benhadj Braiek, "Algebraic criteria for global stability analysis of nonlinear systems", Systems Analysis Modelling Simulation, vol. 17, pp. 211-227, 1995.

[15] N. Benhadj Braiek, "On the global stability of nonlinear polynomial systems", in IEEE Conference on Decision and Control, CDC'96, 1996.

[16] H. Bouzaouache and N. Benhadj Braiek, "On guaranteed global exponential stability of polynomial singularly perturbed control systems", International Journal of Computers, Communications and Control (IJCCC), vol. 1, no. 4, pp. 21-34, 2006.

[17] H. Bouzaouache and N. Benhadj Braiek, "On the stability analysis of nonlinear systems using polynomial Lyapunov functions", Mathematics and Computers in Simulation, vol. 76, no. 5-6, pp. 316-329, 2008.

[18] S. Elloumi and N. Benhadj Braiek, "A decentralized stabilization approach of a class of nonlinear polynomial interconnected systems: application for a large scale power system", Nonlinear Dynamics and Systems Theory, vol. 12, no. 2, pp. 159-172, 2012.
[19] N. Benhadj Braiek N. and F. Rotella, “Stabilization
of nonlinear systems using a Kronecker product
approach”, in European Control Conference ECC’95,
1995, pp. 2304-2309.
[20] N. Benhadj Braiek et al, “Algebraic criteria for global
stability analysis of nonlinear systems”, Journal of
Systems Analysis Modelling and Simulation, Gordon and
Breach Science Publishers, vol. 17,pp221-227, 1995.
[21] N. Benhadj Braiek, “Feedback stabilization and
stability domain estimation of nonlinear systems”,
Journal of The Franklin Institute, vol. 332, no. 2, pp. 183193, 1995.
[22] A. Bogdanov et al, “State dependent Riccati
equation control of a small unmanned helicopter” in
Proceedings of the 2003 AIAA Guidance navigation and
control conference, Austin, TX, 2003.
[23] J.W. Brewer, “Kroneker products and matrix
calculus in system theory”, IEEE Transaction On Circuits
and Systems, CAS, vol. 25, pp. 772-781, 1978
On Feedback Control Techniques of Nonlinear Analytic Systems, S. Elloumi / 500‐513
[24] T. Çimen, “State Dependent Riccati Equation
(SDRE) control: A survey”, in plenary session of 17th
World Congress, Séoul , 2008, pp. 201-215.
[25] J.R. Cloutier et al, “State dependent Riccati
Equation Techniques: Theory and applications”,
American Control Conference Workshop, 1998.
[26] E.B. Erdem, “Analysis and real-time implementation
of state dependent Riccati equation controlled systems”.
PhD thesis, university of Illinois at Urbana-Champaign,
Mechanical Engineering Department, Urbana, 2001.
[27] B. Friendland, “Advanced Control System Design”,
Prentice Hall, Englewood Cliffs NJ, 1996, pp. 110-112.
[28] Y. Huang & W. Lu, “Nonlinear optimal control:
Alternatives
to
Hamilton-Jacobi
equation”,
in:
Proceedings of the 35th Conference on Decision and
Control, Kobe, Japan, 1996, pp 3942-3947.
[29] S. Vukmirović et al, “Optimal Workflow Scheduling
in Critical Infrastructure Systems with Neural Networks”,
Journal of Applied Research and Technology (JART),
vol.10, no.2, pp. 114-121, 2012.
[30] C.P. Mracek, “SDRE autopilot for dual controlled
missiles”, in:Proc. of the IFAC Symposium on Automatic
Control in Aerospace, Toulouse, France, 2007.
[31] C.P. Mracek and J.R. Cloutier, “Control designs for
the nonlinear benchmark problem via the state
dependent Riccati equation method”, International
Journal of Robust and Nonlinear Control, vol. 8, pp. 401433, 1998.
[32] D.K. Parrish and D.B. Ridgely, “Attitude control of a
satellite using the SDRE method”, in: Proc. of the
American Control Conference, Albuquerque, pp 942-946
1997.
[36] J.R. Cloutier and D.T. Stansbery, “Nonlinear hybrid
bank-to-turn/skid-to-turn
autopilot
design”,
in:
Proceeding of AIAA Guidance, Navigation and control
conference, Montreal, Canada, 2001.
[37] M. Sznaier et al, “Receding horizon control lyapunov
function approach to suboptimal regulation of nonlinear
systems”, AIAA Journal of Guidance, Control and
Dynamics, vol.23, pp. 399-405, 2000.
[38] J. Yu et al, “Comparaison of nonlinear control
design techniques on a model of the caltech ducted fan”,
Automatica, vol.37,pp. 1971-1978, 2001.
[39] M.M. Belhaouane et al, “An LMI technique for the
global stabilization of nonlinear polynomial systems”,
International Journal of Computers, Communications
and Control, vol. 4, no. 4, pp. 335-348, 2009.
[40] A. Zemliak et al, “On Time-Optimal Procedure For
Analog System Design”, Journal of Applied Research
and Technology (JART), vol. 2 No. 1, pp. 32-53, 2004;
[41] L. Olmos and L. Álvarez-Icaza, “Optimal emergency
vehicle braking control based on dynamic friction model”,
Journal of Applied Research and Technology (JART),
vol. 1, no. 1, pp. 15-26, 2005.
[42] J. Rivera-Mejía et al, “PID Based on a Single
Artificial Neural Network Algorithm for Intelligent
Sensors”, Journal of Applied Research and Technology
(JART), vol. 10, no.2, pp. 262-282, 2012.
[43] B. Bernábe-Loranca et al, “Approach for the
Heuristic
Optimization
of
Compactness
and
Homogeneity in the Optimal Zoning”, Journal of Applied
Research and Technology (JART), vol. 10, no.3, pp.
447-457, 2012.
[33] F. Rotella F. and G. Dauphin-Tanguy, “Nonlinear
systems: identification and optimal control”, International
Journal of Control, vol. 48, no. 2, pp. 525-544, 1988.
[34] J.S. Shamma and J.R. Cloutier, “Existance of state
dependent riccati equation stabilizing feedback”, IEEE
Transactions on Automatic Control, vol. 48, no. 3, pp.
513-517, 2003.
[35] J.R. Cloutier et al, “Nonlinear regulation and
nonlinear H∞control via the state dependent Riccati
equation technique; part 1, theory; part 2, examples”, in:
Proceedings of the International Conference on
Nonlinear Problems in Aviation and Aerospace,
Daytona Beach, 1996.
Appendix A:
We recall here the useful mathematical notations
and properties used in this paper concerning the
Kronecker tensor product.
A.1. Kronecker product

The Kronecker product of $A$ ($p \times q$) and $B$ ($r \times s$), denoted by $A \otimes B$, is the $(pr \times qs)$ matrix defined by:

$$A \otimes B = \begin{pmatrix} a_{11}B & \cdots & a_{1q}B \\ \vdots & & \vdots \\ a_{p1}B & \cdots & a_{pq}B \end{pmatrix}$$
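The block structure of this definition can be illustrated with a minimal pure-Python sketch (the helper name `kron` and the matrices-as-nested-lists convention are ours, not the paper's):

```python
def kron(A, B):
    """Kronecker product of A (p x q) and B (r x s) -> (p*r x q*s) matrix,
    built block-wise as the array of blocks a_ij * B."""
    p, q = len(A), len(A[0])
    r, s = len(B), len(B[0])
    C = [[0] * (q * s) for _ in range(p * r)]
    for i in range(p):
        for j in range(q):
            for k in range(r):
                for l in range(s):
                    # entry of block (i, j) at inner position (k, l)
                    C[i * r + k][j * s + l] = A[i][j] * B[k][l]
    return C

A = [[1, 2],
     [3, 4]]
B = [[0, 1],
     [1, 0]]
print(kron(A, B))
# -> [[0, 1, 0, 2], [1, 0, 2, 0], [0, 3, 0, 4], [3, 0, 4, 0]]
```

Each $2 \times 2$ block of the result is visibly $a_{ij}B$, matching the matrix displayed above.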
A.2. Kronecker power of vectors

The Kronecker power of order $i$, $X^{[i]}$, of the vector $X \in \mathbb{R}^n$ is defined by:

$$\begin{cases} X^{[0]} = 1 \\ X^{[i]} = X^{[i-1]} \otimes X = X \otimes X^{[i-1]}, \quad X^{[i]} \in \mathbb{R}^{n^i}, \ \text{for } i \ge 1 \end{cases} \tag{51}$$

The non-redundant $i$-th power $\tilde{X}^{[i]}$ of the state vector $X = [X_1, \ldots, X_q]^T$ is defined in [33] as:

$$\begin{cases} \tilde{X}^{[1]} = X^{[1]} = X \\ \tilde{X}^{[i]} = [X_1^i,\ X_1^{i-1}X_2,\ \ldots,\ X_1^{i-1}X_q,\ X_1^{i-2}X_2^2,\ X_1^{i-2}X_2X_3,\ \ldots,\ X_1^{i-2}X_q^2,\ \ldots,\ X_q^i]^T, \quad i \ge 2 \end{cases} \tag{52}$$

It corresponds to the previous power where the repeated components have been removed. Then, we have the following:

$$\forall i \in \mathbb{N}, \ \exists!\, T_i \in \mathbb{R}^{n^i \times n_i}: \quad X^{[i]} = T_i\, \tilde{X}^{[i]} \tag{53}$$

thus, one possible solution for the inversion can be written as:

$$\tilde{X}^{[i]} = T_i^{+} X^{[i]}$$

with $T_i^{+} = (T_i^T T_i)^{-1} T_i^T$, where $T_i^{+}$ is the Moore-Penrose inverse of $T_i$, and $n_i = \binom{n+i-1}{i}$ stands for the binomial coefficient.

A.3. Permutation matrix

Let $e_i^n$ denote the $i$-th vector of the canonic basis of $\mathbb{R}^n$; the permutation matrix denoted $U_{n \times m}$ is defined by [23]:

$$U_{n \times m} = \sum_{i=1}^{n} \sum_{k=1}^{m} \left( e_i^n \cdot e_k^{mT} \right) \otimes \left( e_k^m \cdot e_i^{nT} \right) \tag{54}$$

This matrix is square ($nm \times nm$) and has precisely a single "1" in each row and in each column.

A.4. Vec-function

The function Vec of a matrix was defined in [23] as follows:

$$A = [A_1 \ A_2 \ \cdots \ A_q]; \quad \mathrm{vec}(A) = \begin{pmatrix} A_1 \\ A_2 \\ \vdots \\ A_q \end{pmatrix} \tag{55}$$

where, for $i \in \{1, \ldots, q\}$, $A_i$ is a vector of $\mathbb{R}^p$.

We recall the following useful rule of this function, given in [23]:

$$\mathrm{Vec}(E A C) = (C^T \otimes E)\,\mathrm{Vec}(A) \tag{56}$$

A.5. Mat-function

An important matrix-valued linear function of a vector, denoted $\mathrm{Mat}_{(n,m)}(\cdot)$, was defined in [14] as follows: if $V$ is a vector of dimension $p = n \cdot m$, then $M = \mathrm{Mat}_{(n,m)}(V)$ is the $(n \times m)$ matrix verifying $V = \mathrm{Vec}(M)$.
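The Vec rule (56) and the Mat function can be checked numerically on a small instance. The following is a self-contained pure-Python sketch (helper names `kron`, `matmul`, `transpose`, `vec`, `mat` and the test matrices are our own choices for illustration):

```python
def kron(A, B):
    # Kronecker product of A (p x q) and B (r x s)
    p, q, r, s = len(A), len(A[0]), len(B), len(B[0])
    return [[A[i // r][j // s] * B[i % r][j % s]
             for j in range(q * s)] for i in range(p * r)]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(col) for col in zip(*A)]

def vec(A):
    # stack the columns of A into a single (p*q x 1) column vector
    return [[A[i][j]] for j in range(len(A[0])) for i in range(len(A))]

def mat(V, n, m):
    # Mat_(n,m): the (n x m) matrix M verifying V = vec(M)
    return [[V[j * n + i][0] for j in range(m)] for i in range(n)]

E = [[1, 2], [0, 1]]
A = [[1, 0], [2, 3]]
C = [[1, 1], [0, 2]]

lhs = vec(matmul(matmul(E, A), C))            # Vec(E.A.C)
rhs = matmul(kron(transpose(C), E), vec(A))   # (C^T ⊗ E) Vec(A)
print(lhs == rhs)               # -> True
print(mat(vec(A), 2, 2) == A)   # -> True
```

The second check confirms that Mat is the exact inverse of Vec for matching dimensions.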
A.6. Application to the description of analytic nonlinear vector and matrix functions

Vector functions: Let $f(\cdot)$ be an analytic function from $\mathbb{R}^n$ into $\mathbb{R}^m$. Then $f(X)$ can be developed in a generalized Taylor series using the Kronecker powers of the vector $X$, i.e.:

$$f(X) = \sum_{i \ge 0} F_i X^{[i]} \tag{57}$$

Matrix functions: Let $g(\cdot)$ be an analytic function from $\mathbb{R}^n$ into $\mathcal{M}_{n \times m}(\mathbb{R})$ (the set of $n \times m$ real matrices). Then $g(X)$ can be written as:

$$g(X) = [G_1(X) \ \ G_2(X) \ \cdots \ G_m(X)] \tag{58}$$

where $G_k(X)$ is a vector function from $\mathbb{R}^n$ into $\mathbb{R}^n$, which can be written as:

$$G_k(X) = \sum_{i=0}^{\infty} G_i^k X^{[i]} \tag{59}$$

Thus $g(X)$ can be expressed as:

$$g(X) = \sum_{i=0}^{\infty} \left[ G_i^1 X^{[i]} \ \ G_i^2 X^{[i]} \ \cdots \ G_i^m X^{[i]} \right] = \sum_{i=0}^{\infty} \underbrace{\left[ G_i^1 \ \ G_i^2 \ \cdots \ G_i^m \right]}_{G_i} \underbrace{\begin{pmatrix} X^{[i]} & & 0 \\ & \ddots & \\ 0 & & X^{[i]} \end{pmatrix}}_{I_m \otimes X^{[i]}} \tag{60}$$

which can be written as:

$$g(X) = \sum_{i=0}^{\infty} G_i \left( I_m \otimes X^{[i]} \right) \tag{61}$$

where $I_m$ is the identity matrix of order $m$.
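The vector-function expansion (57) can be made concrete with a truncated example. The sketch below (pure Python; the matrices $F_1$, $F_2$ and helper names are our own illustrative choices, not from the paper) writes a small polynomial map as $f(X) = F_1 X^{[1]} + F_2 X^{[2]}$ and evaluates it through the Kronecker powers:

```python
def kron_vec(x, y):
    # Kronecker product of two column vectors, kept as flat lists
    return [xi * yj for xi in x for yj in y]

def kron_power(x, i):
    # X^[0] = 1 and X^[i] = X^[i-1] ⊗ X, as in (51)
    p = [1.0]
    for _ in range(i):
        p = kron_vec(p, x)
    return p

def matvec(F, v):
    return [sum(Fij * vj for Fij, vj in zip(row, v)) for row in F]

# f(X) = (x1 + x1*x2, x2 + x1^2), encoded as F1 X^[1] + F2 X^[2]
# with X^[2] = (x1^2, x1*x2, x2*x1, x2^2); the duplicated monomial
# x1*x2 gets weight 0.5 in each redundant slot.
F1 = [[1, 0],
      [0, 1]]
F2 = [[0, 0.5, 0.5, 0],
      [1, 0,   0,   0]]

X = [2.0, 3.0]
fX = [a + b for a, b in zip(matvec(F1, kron_power(X, 1)),
                            matvec(F2, kron_power(X, 2)))]
print(fX)   # -> [8.0, 7.0]
```

The split weight on the $x_1 x_2$ slots reflects exactly the redundancy that the non-redundant power $\tilde{X}^{[i]}$ of (52) removes.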