⛓️ Constrained Optimization
Most real-world problems involve constraints (e.g., budget limits, physical boundaries, or resource capacities). Constrained optimization provides the tools to solve these problems rigorously.
🟢 1. Equality Constraints
To minimize $f(\mathbf{x})$ subject to the equality constraint $h(\mathbf{x}) = 0$, we use the method of Lagrange Multipliers.
The Lagrangian Function
We define the Lagrangian as $\mathcal{L}(\mathbf{x}, \lambda) = f(\mathbf{x}) + \lambda\, h(\mathbf{x})$. The necessary condition for an optimum is that the gradient of the Lagrangian vanishes: $\nabla_{\mathbf{x}} \mathcal{L} = \nabla f(\mathbf{x}) + \lambda \nabla h(\mathbf{x}) = 0$, together with $h(\mathbf{x}) = 0$. Geometrically, this means the gradient of the objective function must be parallel to the gradient of the constraint: $\nabla f = -\lambda \nabla h$.
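As a small worked example (the problem is chosen here purely for illustration): minimize $f(x, y) = x^2 + y^2$ subject to $x + y = 1$. Setting $\nabla \mathcal{L} = 0$ gives a linear system in $(x, y, \lambda)$ that NumPy can solve directly:

```python
import numpy as np

# Minimize f(x, y) = x^2 + y^2  subject to  h(x, y) = x + y - 1 = 0.
# Lagrangian: L = x^2 + y^2 + lam * (x + y - 1).
# Stationarity gives: 2x + lam = 0, 2y + lam = 0, and the constraint x + y = 1.
A = np.array([[2.0, 0.0, 1.0],
              [0.0, 2.0, 1.0],
              [1.0, 1.0, 0.0]])
b = np.array([0.0, 0.0, 1.0])
x, y, lam = np.linalg.solve(A, b)
print(x, y, lam)  # x ≈ 0.5, y ≈ 0.5, lam ≈ -1.0
```

Note that stationarity alone produced a solvable system because both $f$ and $h$ are quadratic/linear; in general $\nabla \mathcal{L} = 0$ is a nonlinear system solved numerically.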
🟡 2. Inequality Constraints (KKT)
When we have inequality constraints $g_i(\mathbf{x}) \le 0$ (possibly alongside equality constraints $h_j(\mathbf{x}) = 0$), we use the Karush-Kuhn-Tucker (KKT) conditions, which generalize Lagrange multipliers.
The KKT Conditions
For a point $\mathbf{x}^*$ to be optimal, there must exist multipliers $\mu_i$ and $\lambda_j$ such that:
- Stationarity: $\nabla f(\mathbf{x}^*) + \sum_i \mu_i \nabla g_i(\mathbf{x}^*) + \sum_j \lambda_j \nabla h_j(\mathbf{x}^*) = 0$
- Primal Feasibility: $g_i(\mathbf{x}^*) \le 0$ and $h_j(\mathbf{x}^*) = 0$
- Dual Feasibility: $\mu_i \ge 0$
- Complementary Slackness: $\mu_i \, g_i(\mathbf{x}^*) = 0$
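To make the conditions concrete, here is a sketch (the example problem is an illustration, not from the source) that checks all four KKT conditions at a candidate point for minimizing $x^2 + y^2$ subject to $g(x, y) = 1 - x - y \le 0$:

```python
import numpy as np

# Minimize x^2 + y^2  s.t.  g(x, y) = 1 - x - y <= 0.
# The constraint is active at the optimum (0.5, 0.5), with multiplier mu = 1.
x_star = np.array([0.5, 0.5])
mu = 1.0

grad_f = 2.0 * x_star            # gradient of the objective at x_star
grad_g = np.array([-1.0, -1.0])  # gradient of the constraint (constant here)
g_val = 1.0 - x_star.sum()       # constraint value at x_star

stationarity = np.allclose(grad_f + mu * grad_g, 0.0)  # grad_f + mu*grad_g = 0
primal = g_val <= 1e-12                                 # g(x*) <= 0
dual = mu >= 0.0                                        # mu >= 0
slackness = abs(mu * g_val) < 1e-12                     # mu * g(x*) = 0
print(stationarity, primal, dual, slackness)  # True True True True
```

Because the constraint is active ($g(\mathbf{x}^*) = 0$), complementary slackness permits $\mu > 0$; for an inactive constraint it would force $\mu = 0$.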
🔴 3. Penalty and Barrier Methods
Sometimes it is easier to convert a constrained problem into an unconstrained one.
Penalty Methods
Add a term to the objective function that penalizes constraint violations, e.g. the quadratic penalty $P(\mathbf{x}; \rho) = f(\mathbf{x}) + \rho \sum_i \max\bigl(0,\, g_i(\mathbf{x})\bigr)^2 + \rho \sum_j h_j(\mathbf{x})^2$. As $\rho \to \infty$, the solution of the unconstrained problem approaches the true constrained optimum.
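A minimal sketch of the quadratic-penalty idea (the toy problem is chosen for illustration): minimize $x^2$ subject to $x \ge 1$, i.e. $g(x) = 1 - x \le 0$. Each penalized subproblem is minimized by plain gradient descent, and the result drifts toward the constrained optimum $x^* = 1$ as the penalty weight $\rho$ grows:

```python
def penalized_grad(x, rho):
    """Gradient of the penalized objective x^2 + rho * max(0, 1 - x)^2."""
    return 2.0 * x - 2.0 * rho * max(0.0, 1.0 - x)

def minimize_penalty(rho, x0=0.0, steps=200):
    """Gradient descent on the penalized objective; the step size shrinks
    with rho because the penalty makes the problem ill-conditioned."""
    x, lr = x0, 0.4 / (1.0 + rho)
    for _ in range(steps):
        x -= lr * penalized_grad(x, rho)
    return x

for rho in (1.0, 10.0, 100.0):
    print(rho, minimize_penalty(rho))  # approaches x = 1 as rho grows
```

For this problem the penalized minimizer is $\rho / (1 + \rho)$, which is always slightly infeasible; that bias vanishing only as $\rho \to \infty$ is the characteristic weakness of penalty methods.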
Interior-Point Methods (Barrier)
Use a logarithmic “barrier” to keep the iterates strictly inside the feasible region: $B(\mathbf{x}; t) = f(\mathbf{x}) - \frac{1}{t} \sum_i \log\bigl(-g_i(\mathbf{x})\bigr)$. As $t$ increases, the barrier's influence is confined to an ever-steeper wall near the boundary, and the solution converges to the boundary if the true optimum lies there.
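For the same toy problem (minimize $x^2$ subject to $x \ge 1$, so $g(x) = 1 - x \le 0$), the barrier subproblem $x^2 - \frac{1}{t}\log(x - 1)$ happens to have a closed-form minimizer, which makes the convergence toward the boundary easy to see (the derivation below is an illustration, not from the source):

```python
import math

def barrier_minimizer(t):
    """Minimizer of the barrier subproblem x^2 - (1/t) * log(x - 1) on x > 1.
    Setting the derivative 2x - 1 / (t * (x - 1)) to zero gives the quadratic
    x^2 - x - 1/(2t) = 0; we take its root that lies in the feasible region."""
    return (1.0 + math.sqrt(1.0 + 2.0 / t)) / 2.0

for t in (1.0, 100.0, 10000.0):
    print(t, barrier_minimizer(t))  # slides toward the boundary x = 1
```

Unlike the penalty iterates, every barrier iterate is strictly feasible ($x > 1$), which is why these are called interior-point methods.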