🧠 Optimal Control Theory – An Introduction
“Because the shape of the whole universe is most perfect and, in fact, designed by the wisest Creator, nothing in the world will occur without some maximum or minimum rule.” – Leonhard Euler, 1744
📌 1. Introduction
Optimal Control Theory is concerned with finding control strategies that make a system behave in the best possible way, based on a performance measure.
- Classical Control: Uses input-output models (SISO).
- Modern Control: Uses state variables and matrix theory.
- Calculus of Variations: Foundation for optimization.
⚙️ 1.2 Classical Control
Applied to linear, time-invariant, SISO systems. Design focuses on:
- Time Response: Rise time, peak overshoot, settling time
- Frequency Response: Gain & phase margins, bandwidth
🧮 1.3 Modern Control Theory
- Uses state-space feedback with full state information
- Controller input u(t) computed based on states X(t) and reference r(t)
- Based on matrix theory and multiple model representations
🔄 1.4 Components of a Modern Control System
- Modeling: Uses differential/difference equations (Lagrange function)
- Analysis: Evaluate stability and performance (Lyapunov function)
- Design: Optimize system behavior using performance index J(X)
⚖️ 1.5 Optimization
Static Optimization
- Applies to steady-state systems (time-invariant)
- Techniques: Ordinary calculus, Lagrange multipliers, Linear/Nonlinear programming
Dynamic Optimization
- Applies when system variables change over time
- Techniques: Calculus of variations, Pontryagin principle, Dynamic programming
🎯 1.6 Optimal Control Problem
To define an optimal control problem, specify:
- A mathematical model (typically state-space form)
- A performance index J or I
- Boundary conditions and constraints
🏁 Types of Performance Indices
- Time-Optimal Control
- Fuel-Optimal Control
- Minimum Energy Control
- Terminal State Control
- General Optimal Control
Performance Index Forms
- Mayer Problem: Cost depends on terminal state only
- Lagrange Problem: Cost depends on integral over time
- Bolza Problem: Combines terminal and integral costs
✅ Summary: Optimal Control Framework
Solve the problem by modeling the system as:
Ẋ = f(X, u, t)
and define the performance index:
J = φ(X(T)) + ∫₀ᵀ L(X(t), u(t), t) dt
subject to:
- Initial conditions
- State/control constraints
- Boundary conditions
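As a concrete illustration of this framework, the sketch below numerically evaluates a Bolza-type performance index J = φ(X(T)) + ∫ L dt for a sampled trajectory. The quadratic terminal cost φ, the quadratic running cost L, and the example trajectory are illustrative assumptions, not values given in the text.

```python
import numpy as np

def performance_index(t, X, U, phi, L):
    """Approximate J = phi(X(T)) + integral of L(X, u, t) dt via the trapezoidal rule."""
    running = np.array([L(x, u, ti) for x, u, ti in zip(X, U, t)])
    return phi(X[-1]) + np.sum(0.5 * (running[1:] + running[:-1]) * np.diff(t))

# Example trajectory: scalar state decaying from 1, zero control (assumed).
t = np.linspace(0.0, 5.0, 501)
X = np.exp(-t)            # sampled state trajectory x(t) = e^(-t)
U = np.zeros_like(t)      # sampled control history

phi = lambda xT: 0.5 * xT**2               # terminal cost (assumed quadratic)
L = lambda x, u, ti: 0.5 * (x**2 + u**2)   # running cost (assumed quadratic)

print(performance_index(t, X, U, phi, L))  # Bolza-type cost for this trajectory
```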
1.2 Classical Control
Classical or conventional control theory focuses on the analysis and design of single-input single-output (SISO) systems. It is primarily based on the use of Laplace transform methods and block diagram representations to study system behavior in the frequency domain. In a classical feedback control system, the input signal to the plant, denoted by u(t), is generated by processing the error signal e(t) through a compensator or controller. The error signal is the difference between the desired reference input r(t) and the actual output y(t) of the system.
Note:
- Only the output of the system is used for feedback; internal state variables are not available.
- Classical control is generally applied to linear time-invariant (LTI) systems.
The closed-loop transfer function of a typical classical control system with feedback transfer function H(s) is given by:
Y(s) / R(s) = G(s) / (1 + G(s)H(s)) (1.1)
where:
- Y(s) and R(s) are the Laplace transforms of the output and input, respectively,
- G(s) is the open-loop transfer function,
- H(s) is the transfer function of the feedback path.
If the open-loop system consists of a compensator and a plant, the transfer function G(s) is expressed as:
G(s) = Gc(s) × Gp(s) (1.2)
where:
- Gc(s) is the transfer function of the compensator (controller).
- Gp(s) is the transfer function of the plant (the system to be controlled).
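The sketch below checks Eqs. (1.1) and (1.2) symbolically. The first-order plant, pure-gain compensator, and unity feedback path are assumed purely for illustration.

```python
import sympy as sp

s = sp.symbols('s')

Gc = 2                         # compensator: a pure gain (assumed)
Gp = 1 / (s + 1)               # plant (assumed first-order)
H = 1                          # unity feedback path (assumed)

G = Gc * Gp                        # Eq. (1.2): open-loop transfer function
T = sp.simplify(G / (1 + G * H))   # Eq. (1.1): closed-loop transfer function

print(T)   # -> 2/(s + 3)
```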
Fig. 1. Classical Control System Configuration
1.3 Modern Control Theory
Modern control theory, also referred to as advanced control, extends classical control to handle more complex systems, especially those with multiple inputs and multiple outputs (MIMO). It is based on the state-variable representation, which models systems using first-order differential equations in the time domain. This approach provides a more complete description of system dynamics and is applicable to linear, nonlinear, time-invariant, and time-varying systems.
State-Space Representation of Linear Time-Invariant (LTI) Systems
The state-space model for an LTI system is given by:
Ẋ(t) = A·X(t) + B·U(t) (1.3)
Y(t) = C·X(t) + D·U(t) (1.4)
Where:
- X(t) – State vector
- U(t) – Input vector
- Y(t) – Output vector
- A – System matrix
- B – Input matrix
- C – Output matrix
- D – Feedthrough (or direct transmission) matrix
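A minimal simulation of Eqs. (1.3) and (1.4) is sketched below. The double-integrator matrices are an assumed example; scipy.signal.lsim integrates the state equations for a given input history.

```python
import numpy as np
from scipy.signal import StateSpace, lsim

A = np.array([[0.0, 1.0],
              [0.0, 0.0]])   # system matrix (assumed double integrator)
B = np.array([[0.0],
              [1.0]])        # input matrix
C = np.array([[1.0, 0.0]])   # output matrix: observe position only
D = np.array([[0.0]])        # no direct feedthrough

sys = StateSpace(A, B, C, D)
t = np.linspace(0, 5, 500)
u = np.ones_like(t)                  # unit-step input
t_out, y, x = lsim(sys, U=u, T=t)    # y: output, x: state trajectory
print(y[-1])                         # position after 5 s (about 12.5 here)
```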
State-Space Representation of Nonlinear Systems
For nonlinear systems, the state-space representation becomes
Ẋ(t) = f(X(t), U(t), t) (1.5)
Y(t) = g(X(t), U(t), t) (1.6)
Here, f(·) and g(·) are nonlinear functions that describe the system’s dynamics and output, respectively. This representation enables the use of advanced control techniques such as state feedback, observer design, optimal control, and robust control, making it essential for high-order and complex system analysis and design.
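As a small illustration of Eq. (1.5), the sketch below integrates an assumed nonlinear plant, a damped pendulum with a torque input, using a general-purpose ODE solver. The model and parameters are illustrative assumptions.

```python
import numpy as np
from scipy.integrate import solve_ivp

def f(t, X, u):
    """Nonlinear dynamics Xdot = f(X, u, t) for a damped pendulum (assumed model)."""
    theta, omega = X
    return [omega, -9.81 * np.sin(theta) - 0.1 * omega + u(t)]

u = lambda t: 0.0    # unforced for this example
sol = solve_ivp(f, (0.0, 10.0), [np.pi / 4, 0.0], args=(u,), dense_output=True)
print(sol.y[0, -1])  # pendulum angle at t = 10 s
```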
State Feedback in Modern Control Theory
In modern control theory, all state variables are ideally available for feedback, which allows the controller to make accurate and effective decisions. The input u(t) is determined by the controller, which includes an error detector and a compensator, based on the system states X(t) and the reference input r(t), as shown in Fig. 2. This structure supports state feedback control, where each state is multiplied by an appropriate gain to achieve the desired system response, such as improved stability or faster dynamics.
Modern control is based on matrix algebra and is well suited to digital computer implementation and simulation. It provides a flexible and systematic method for modeling and controlling multi-input, multi-output (MIMO) systems.
It is also important to note that while the state-space model uniquely determines the system’s transfer function, the reverse is not true: a single transfer function can have many valid state-space representations. This allows engineers to choose forms such as controllable, observable, or diagonal canonical form for convenience in analysis and design.
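A brief sketch of state feedback design follows: for the assumed double-integrator model used earlier, a gain matrix K is computed so that u = −K·X places the closed-loop poles at chosen locations. The pole locations are illustrative.

```python
import numpy as np
from scipy.signal import place_poles

A = np.array([[0.0, 1.0],
              [0.0, 0.0]])   # assumed double-integrator plant
B = np.array([[0.0],
              [1.0]])

K = place_poles(A, B, [-2.0, -3.0]).gain_matrix   # desired poles (assumed)
print(K)                                          # state-feedback gains
print(np.linalg.eigvals(A - B @ K))               # verify: eigenvalues at -2, -3
```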
Fig. 2. Modern Control System Configuration
1.4 Components of the Modern Control System
Fig. 3. Main Components of Modern Control System
Modeling:
The first step in any control theory approach is to model the system dynamics. This involves formulating the system using differential or difference equations. In many cases, the dynamics are derived using the Lagrangian function.
Analysis:
Once the model is established, the system is analyzed to evaluate its performance—especially its stability. A fundamental tool for this analysis is Lyapunov stability theory, which helps determine whether the system will remain stable under various conditions.
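As a small illustration, the sketch below applies Lyapunov’s method to an assumed stable LTI system: solving AᵀP + PA = −Q for a positive-definite P certifies stability with V(X) = XᵀPX. The matrices are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])   # example system matrix (assumed stable)
Q = np.eye(2)                  # any positive-definite choice

P = solve_continuous_lyapunov(A.T, -Q)   # solves A^T P + P A = -Q
print(np.linalg.eigvals(P))              # all positive -> stability certified
```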
Design:
If the system does not meet the desired performance specifications, the next step is control system design. In optimal control theory, this involves designing a controller that minimizes or maximizes a performance index (commonly denoted as J(X) or I(X)) based on system states.
1.5 Optimization
Fig. 4. Overview of Optimization
Optimization can be broadly classified into static optimization and dynamic optimization.
1. Static Optimization
Static optimization involves controlling a plant under steady-state conditions, where system variables do not change with time. The plant is described by algebraic equations. Techniques used: Ordinary calculus, Lagrange multipliers, linear programming, and nonlinear programming.
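A minimal static-optimization sketch using Lagrange multipliers follows; the quadratic objective and linear constraint are illustrative assumptions, not taken from the text.

```python
import sympy as sp

# Minimize x^2 + y^2 subject to x + y = 1 (assumed example).
x, y, lam = sp.symbols('x y lambda')

f = x**2 + y**2       # cost function
g = x + y - 1         # equality constraint g = 0
Lg = f + lam * g      # Lagrangian

# Stationarity: all partial derivatives of the Lagrangian vanish.
sol = sp.solve([sp.diff(Lg, v) for v in (x, y, lam)], (x, y, lam), dict=True)
print(sol)   # -> x = y = 1/2, the constrained minimum
```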
2. Dynamic Optimization
Dynamic optimization deals with the control of a plant under dynamic conditions, where system variables change over time. The plant is described by differential or difference equations. Techniques used: Search techniques, dynamic programming, calculus of variations, and Pontryagin’s principle.
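As a small taste of one of these techniques, the sketch below runs the backward Riccati recursion of dynamic programming for an assumed scalar discrete-time linear-quadratic problem, sweeping from the final stage back to the initial one.

```python
# Dynamic programming for x[k+1] = a x[k] + b u[k] with stage cost q x^2 + r u^2.
# The scalar system, weights, and horizon are illustrative assumptions.
a, b = 1.0, 0.5
q, r = 1.0, 1.0
N = 20                   # horizon length

P = q                    # terminal cost-to-go weight
gains = []
for _ in range(N):       # sweep backward from k = N-1 to k = 0
    K = (b * P * a) / (r + b * P * b)    # optimal feedback gain u = -K x
    P = q + a * P * a - a * P * b * K    # updated cost-to-go weight
    gains.append(K)

print(gains[-1])   # gain at k = 0; approaches the infinite-horizon value
```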
1.6 Optimal Control
Objective
The objective of optimal control is to determine a control signal u*(t) that drives the system (plant) in such a way that it:
- Satisfies all physical and operational constraints.
- Minimizes or maximizes a chosen performance index J or I.
Formulation of an Optimal Control Problem
To formulate an optimal control problem, the following components are required:
- Mathematical Model of the Process: A detailed description of the physical system to be controlled, usually expressed in state-space form.
- Performance Index: A scalar function (typically an integral over time) that quantifies the objective to be optimized, such as energy usage, error, time, or cost.
- Boundary Conditions and Constraints: Specifications of the initial and final states, along with physical constraints on the system variables and control inputs.
System (Plant) Representation
For optimization purposes, a physical plant is modeled using either linear or nonlinear differential equations. A linear time-invariant (LTI) system is typically described by:
Ẋ(t) = AX(t) + BU(t) (1.3)
Y(t) = CX(t) + DU(t) (1.4)
A nonlinear system is generally represented as:
Ẋ(t) = f(X(t), U(t), t) (1.5)
Y(t) = g(X(t), U(t), t) (1.6)
Where:
- X(t) is the state vector
- U(t) is the control input
- Y(t) is the output
- A, B, C, D are system matrices
- f(·), g(·) are nonlinear functions of state, input, and time
Performance Index / Performance Measure (I or J):
1. Classical Control:
Classical control techniques are applied to linear, time-invariant, single-input single-output (SISO) systems. These methods analyze system behavior in both the time and frequency domains.
Time-domain specifications include:
- Rise time
- Settling time
- Peak overshoot
- Steady-state error
Frequency-domain characteristics include:
- Gain margin
- Phase margin
- Bandwidth
Common controllers include PID (Proportional-Integral-Derivative) controllers.
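A minimal discrete-time PID controller is sketched below; the gains and sample time are illustrative assumptions.

```python
class PID:
    """Discrete PID controller: u = Kp*e + Ki*integral(e) + Kd*de/dt."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error):
        """Return the control signal u(t) for the current error e(t)."""
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

pid = PID(kp=2.0, ki=0.5, kd=0.1, dt=0.01)   # gains assumed for illustration
print(pid.update(1.0))                        # control output for a unit error
```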
2. Modern Control:
Modern control theory applies to multi-input multi-output (MIMO) systems using state-space representation. It allows feedback using all system state variables. A key objective is optimal control, where the control input drives the system to a desired state or trajectory while optimizing a performance index. The performance index may involve:
- Energy consumption
- Time to reach target
- Control effort
Modern control methods include state feedback, observer design, and techniques such as the Linear Quadratic Regulator (LQR).
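A brief LQR sketch follows: for the assumed double-integrator model and illustrative weights Q and R, the algebraic Riccati equation yields the optimal state-feedback gain K = R⁻¹BᵀP.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0],
              [0.0, 0.0]])   # assumed double-integrator plant
B = np.array([[0.0],
              [1.0]])
Q = np.eye(2)                # state weighting (assumed)
R = np.array([[1.0]])        # control weighting (assumed)

P = solve_continuous_are(A, B, Q, R)    # solve the algebraic Riccati equation
K = np.linalg.solve(R, B.T @ P)         # optimal state-feedback gain
print(K)
print(np.linalg.eigvals(A - B @ K))     # stable closed-loop poles
```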
Different Types of Performance Index:
1. Performance Index: Time-Optimal Control System
In a time-optimal control system, the objective is to transfer the system from an initial state x(t0) to a desired final state x(tf) in the shortest possible time.
The performance index (PI) for this type of control is defined as the total time taken for the transition:
I = ∫ₜ₀ᵗᶠ 1 dt = t_f − t₀ = t* (1.7)
Where:
- t₀: Initial time
- t_f: Final time
- t*: Minimum (optimal) time
Minimizing the performance index I means minimizing the time required to complete the control task.
