Optimal Control Courseware, Central South University
subject to:
$f(\theta_1, \theta_2, \dots, \theta_n) = 0$   (equality constraint)
$g(\theta_1, \theta_2, \dots, \theta_n) \le 0$   (inequality constraint)
$J \in \mathbb{R}$: performance index (objective function)
$\theta = (\theta_1, \theta_2, \dots, \theta_n)^T \in \mathbb{R}^n$: parameters to be optimized; $\theta^*$: optimal parameters
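To make the constrained static problem concrete, here is a minimal numerical sketch (not from the course; the objective $F$ and the constraints $f$, $g$ are made up for illustration) using SciPy's SLSQP solver, which accepts exactly this equality/inequality constrained form:

```python
# Hypothetical static optimization example (not from the course):
# minimize   F(th) = (th1 - 1)^2 + (th2 - 2)^2
# subject to f(th) = th1 + th2 - 2  = 0   (equality constraint)
#            g(th) = th1 - 1.5     <= 0   (inequality constraint)
from scipy.optimize import minimize

F = lambda th: (th[0] - 1.0) ** 2 + (th[1] - 2.0) ** 2

constraints = [
    {"type": "eq", "fun": lambda th: th[0] + th[1] - 2.0},  # f(th) = 0
    {"type": "ineq", "fun": lambda th: 1.5 - th[0]},        # SciPy expects g >= 0
]

res = minimize(F, x0=[0.0, 0.0], method="SLSQP", constraints=constraints)
print(res.x)  # optimal parameters th* ~ [0.5, 1.5]
```

On the equality line $\theta_1 + \theta_2 = 2$, the objective reduces to a quadratic in $\theta_1$ minimized at $\theta_1 = 0.5$, which also satisfies the inequality constraint, so $\theta^* = (0.5, 1.5)$.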
Chapter 1 Introduction
1.1 Overview
1.2 Basic problems of optimization
1.3 Optimal control problems
1.4 Solution methods of optimization
Optimal control problem:
1.2 Basic problems of optimization
Optimization problems (I)
1) Static Optimization (parameter optimization, function extrema)
$\theta^* = \arg\min_{\theta} J(\theta) = \min_{\theta} F(\theta_1, \theta_2, \dots, \theta_n)$
1.3 Optimal control problems
Performance index in optimal control problems
Chapter 2 Static Optimization
Chapter 3 Variational Methods
Chapter 4 The Pontryagin Minimum Principle
Chapter 5 Discrete-Time Optimal Control
Chapter 6 Dynamic Programming
Optimal control;
Adaptive control;
Predictive control;
Robust control;
Intelligent control;
System modeling and identification;
…………
1.1 Overview
Main contents of optimal control
Advanced mathematics;
Linear algebra;
Automatic control theory (classical);
Linear control systems (the state-space method);
References (books)
Optimal Control, F. L. Lewis and V. L. Syrmos, John Wiley & Sons
Dynamic Programming and Optimal Control, D. P. Bertsekas, Athena Scientific
Dynamic Optimization, A. E. Bryson, Addison Wesley
Principles of Automatic Control (2nd ed., Vol. 2), ed. Wu Lin, Tsinghua University Press, 2006
System Optimization and Control, Fu Xi, China Machine Press
Optimal Control Theory and Systems (2nd ed.), Hu Shousong, Wang Zhiquan, Hu Weili, Science Press
Fundamentals of Optimal Control Applications, Xing Jixiang, Science Press, 2003
1.4 Solution methods of optimization
1.1 Overview
Chronological History of Feedback Control
1624  Drebbel, Incubator (hatching chicks)
1728  Watt, Flyball governor
1868  Maxwell, Flyball stability analysis
1877  Routh, Stability
1890  Liapunov, Nonlinear stability
1910  Sperry, Gyroscope and autopilot
1927  Black, Feedback electronic amplifier
1932  Bush, Differential analyzer
1932  Nyquist, Nyquist stability criterion
1936  Callender, PID controller
1938  Bode, Frequency response methods
1942  Wiener, Optimal filter design
1947  Hurewicz, Sampled data systems
1947  Nichols, Nichols chart
1948  Evans, Root locus
1950  Kochenberger, Nonlinear analysis
1956  Pontryagin, Minimum principle
1957  Bellman, Dynamic programming
Control engineering; Space technology;
System engineering;
Economic management;
Financial engineering;
· · · · · · · · · ·
1.1 Overview
Some basic courses for studying optimal control
1.1 Overview
Main branches of optimal control
Optimal control of distributed parameter systems;
Stochastic optimal control;
Adaptive optimal control;
Optimal control of large-scale systems;
Volume 1: Chapter 1,2
Modern Control Theory
— Optimal Control —
(An undergraduate elective course)
Hui PENG
PhD, Professor
( /staffmember/HuiPeng.htm )
$u^*(t) = \arg\min_{u(t)} J[u(t)] = \min_{u(t)} F[x(t), \dot{x}(t), u(t), t]$
subject to:
$\dot{x}(t) = f(x(t), u(t), t)$   (system state equation)
$u(t) \in U \subseteq \mathbb{R}^m$, $x(t) \in X \subseteq \mathbb{R}^n$, $t \in [t_0, t_f]$
$g_1(x(t), u(t), t) = 0$, $g_2(x(t), u(t), t) \le 0$
$g_3(x(t_0), t_0) = 0$, $g_4(x(t_0), t_0) \le 0$   (initial constraints)
$g_5(x(t_f), t_f) = 0$, $g_6(x(t_f), t_f) \le 0$   (terminal constraints)
$J[u(t)] \in \mathbb{R}$: performance index (objective functional); $u^*(t)$: optimal control
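For one special case of this problem, the discrete-time linear-quadratic regulator, the optimal control can be computed exactly by a backward Riccati recursion. The sketch below is an illustration under assumed scalar dynamics and weights (the numbers $a$, $b$, $q$, $r$ are made up, not from the slides):

```python
# Hypothetical scalar linear-quadratic (LQ) sketch, not the course's notation:
# x_{k+1} = a*x_k + b*u_k,  J = sum_k (q*x_k^2 + r*u_k^2) + q*x_N^2.
# The optimal control is the state feedback u_k = -K_k * x_k, where the
# gains K_k come from a backward Riccati recursion.
a, b = 1.0, 1.0            # assumed system parameters
q, r = 1.0, 1.0            # assumed cost weights
N = 50                     # horizon length

P = q                      # terminal Riccati value P_N = q
gains = []
for _ in range(N):         # sweep backward in time
    K = (b * P * a) / (r + b * P * b)
    P = q + a * P * a - a * P * b * K
    gains.append(K)
gains.reverse()            # gains[k] is now K_k for k = 0..N-1

# closed-loop simulation from x_0 = 1
x = 1.0
for k in range(N):
    u = -gains[k] * x      # optimal control at step k
    x = a * x + b * u
print(abs(x) < 1e-6)       # prints: True  (state regulated to ~0)
```

With these weights the recursion converges to $P = (1+\sqrt{5})/2$ and gain $K = (\sqrt{5}-1)/2 \approx 0.618$, so the closed loop contracts the state toward zero at every step. The general nonlinear, constrained problem above requires the methods of Chapters 3 through 6.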
Optimal Control Theory and Parameter Optimization, Li Guoyong et al., National Defense Industry Press
Chapter 1 Introduction
1.1 Overview
1.2 Basic problems of optimization
1.3 Optimal control problems
1.4 Solution methods of optimization
Optimal control
Suboptimal control;
Optimal control sensitivity;
Multi-objective optimal control;
Differential games;
· · · · · · · · · ·
1.1 Overview
Applied fields of optimal control
Approaches:
• The calculus of variations (variational methods)
• The minimum principle
• Dynamic programming
Optimization problems:
• Minimization or maximization
• Differential games (min-max)
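To give the dynamic-programming approach a concrete shape before Chapter 6, here is a minimal backward-induction sketch on a toy problem (all states, costs, and horizon below are made up for illustration, not an example from the course):

```python
# Hypothetical discrete-time example of Bellman backward induction
# (not from the course; all numbers are made up for illustration):
# states 0..4, controls u in {-1, 0, +1} (state clipped to the grid),
# stage cost |state| + |u|, horizon N = 3, terminal cost state^2.
states = range(5)
controls = (-1, 0, 1)
N = 3

def step(s, u):
    """Next state, clipped to the grid {0, ..., 4}."""
    return min(max(s + u, 0), 4)

V = {s: s ** 2 for s in states}        # terminal cost-to-go V_N(s)
policy = []                            # policy[k][s] = optimal u at time k
for _ in range(N):                     # sweep backward in time
    Vk, pk = {}, {}
    for s in states:
        best_u = min(controls, key=lambda u: abs(s) + abs(u) + V[step(s, u)])
        pk[s] = best_u
        Vk[s] = abs(s) + abs(best_u) + V[step(s, best_u)]
    V, policy = Vk, [pk] + policy

# roll the optimal policy forward from the initial state s = 2
s, us = 2, []
for k in range(N):
    us.append(policy[k][s])
    s = step(s, us[-1])
print(V[2], us, s)                     # prints: 5 [-1, -1, 0] 0
```

The backward sweep embodies Bellman's principle of optimality: the cost-to-go at each state is the best one-step cost plus the already-computed cost-to-go of the successor state.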
Classical Control Theory
Modern Control Theory