Convex Optimization — Boyd & Vandenberghe

1. Introduction

• mathematical optimization
• least-squares and linear programming
• convex optimization
• example
• course goals and topics
• nonlinear optimization
• brief history of convex optimization
Mathematical optimization

(mathematical) optimization problem

    minimize   f0(x)
    subject to fi(x) ≤ bi,  i = 1, . . . , m

• x = (x1, . . . , xn): optimization variables
• f0 : Rⁿ → R: objective function
• fi : Rⁿ → R, i = 1, . . . , m: constraint functions

optimal solution x⋆ has smallest value of f0 among all vectors that satisfy the constraints
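As a concrete illustration (not part of the original slides), the problem data can be represented directly as Python callables; the instance below is a hypothetical toy example:

```python
import numpy as np

# a toy instance (hypothetical data) of: minimize f0(x) s.t. fi(x) <= bi
f0 = lambda x: (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2    # objective
fs = [lambda x: x[0] + x[1],                            # constraint functions
      lambda x: -x[0]]
bs = [1.0, 0.0]

def is_feasible(x):
    """A vector x satisfies the constraints iff fi(x) <= bi for every i."""
    return all(f(x) <= b for f, b in zip(fs, bs))

x = np.array([0.5, 0.2])
print(is_feasible(x), f0(x))    # feasibility flag and objective value
```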
Examples

portfolio optimization
• variables: amounts invested in different assets
• constraints: budget, max./min. investment per asset, minimum return
• objective: overall risk or return variance

device sizing in electronic circuits
• variables: device widths and lengths
• constraints: manufacturing limits, timing requirements, maximum area
• objective: power consumption

data fitting
• variables: model parameters
• constraints: prior information, parameter limits
• objective: measure of misfit or prediction error
Solving optimization problems

general optimization problem
• very difficult to solve
• methods involve some compromise, e.g., very long computation time, or not always finding the solution

exceptions: certain problem classes can be solved efficiently and reliably
• least-squares problems
• linear programming problems
• convex optimization problems
Least-squares

    minimize ‖Ax − b‖₂²

solving least-squares problems
• analytical solution: x⋆ = (AᵀA)⁻¹Aᵀb
• reliable and efficient algorithms and software
• computation time proportional to n²k (A ∈ R^{k×n}); less if structured
• a mature technology

using least-squares
• least-squares problems are easy to recognize
• a few standard techniques increase flexibility (e.g., including weights, adding regularization terms)
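A quick sketch of how this looks in practice (an illustration, not from the slides): NumPy's `lstsq` solves the problem via a stable factorization, and for a well-conditioned A it agrees with the analytical formula above.

```python
import numpy as np

# a minimal sketch: solve minimize ||Ax - b||_2^2 for a random instance
rng = np.random.default_rng(0)
k, n = 20, 5                         # A is k x n with k >= n
A = rng.standard_normal((k, n))
b = rng.standard_normal(k)

# numerically stable solution (SVD-based), preferred in practice
x_star, *_ = np.linalg.lstsq(A, b, rcond=None)

# the analytical formula x* = (A^T A)^{-1} A^T b from the slide agrees
x_formula = np.linalg.solve(A.T @ A, A.T @ b)
assert np.allclose(x_star, x_formula)
```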
Linear programming

    minimize   cᵀx
    subject to aiᵀx ≤ bi,  i = 1, . . . , m

solving linear programs
• no analytical formula for solution
• reliable and efficient algorithms and software
• computation time proportional to n²m if m ≥ n; less with structure
• a mature technology

using linear programming
• not as easy to recognize as least-squares problems
• a few standard tricks used to convert problems into linear programs (e.g., problems involving ℓ1- or ℓ∞-norms, piecewise-linear functions)
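As a sketch (hypothetical problem data, not from the slides), an LP in this form can be handed to SciPy's `linprog`; by default `linprog` assumes nonnegative variables, so free variables need explicit bounds.

```python
import numpy as np
from scipy.optimize import linprog

# a minimal sketch: minimize c^T x subject to a_i^T x <= b_i
c = np.array([1.0, 2.0])
A_ub = np.array([[-1.0,  0.0],
                 [ 0.0, -1.0],
                 [ 1.0,  1.0]])
b_ub = np.array([0.0, 0.0, 4.0])

# bounds=(None, None): the LP above carries its own sign constraints
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(None, None), method="highs")
print(res.x, res.fun)    # optimal point and optimal value
```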
Convex optimization problem

    minimize   f0(x)
    subject to fi(x) ≤ bi,  i = 1, . . . , m

• objective and constraint functions are convex:

      fi(αx + βy) ≤ αfi(x) + βfi(y)    if α + β = 1, α ≥ 0, β ≥ 0

• includes least-squares problems and linear programs as special cases
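A small numerical illustration (not from the slides) of the convexity inequality, checked on the least-squares objective, which is convex, so the check always passes:

```python
import numpy as np

# verify f(ax + (1-a)y) <= a f(x) + (1-a) f(y) for f(x) = ||Ax - b||_2^2
rng = np.random.default_rng(1)
A = rng.standard_normal((10, 3))
b = rng.standard_normal(10)
f = lambda x: np.sum((A @ x - b) ** 2)

for _ in range(1000):
    x, y = rng.standard_normal(3), rng.standard_normal(3)
    alpha = rng.uniform()            # beta = 1 - alpha, both nonnegative
    lhs = f(alpha * x + (1 - alpha) * y)
    rhs = alpha * f(x) + (1 - alpha) * f(y)
    assert lhs <= rhs + 1e-9         # tolerance for floating-point error
```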
solving convex optimization problems
• no analytical solution
• reliable and efficient algorithms
• computation time (roughly) proportional to max{n³, n²m, F}, where F is the cost of evaluating the fi's and their first and second derivatives
• almost a technology

using convex optimization
• often difficult to recognize
• many tricks for transforming problems into convex form
• surprisingly many problems can be solved via convex optimization
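A sketch of what "almost a technology" means in practice: modeling libraries such as CVXPY let one state a convex problem almost verbatim and solve it reliably. The instance below uses hypothetical data and is only an illustration:

```python
import cvxpy as cp
import numpy as np

# a convex problem mixing a quadratic objective, an l1 penalty, and
# linear inequality constraints (hypothetical data)
rng = np.random.default_rng(2)
n, m = 5, 8
A = rng.standard_normal((m, n))
x0 = rng.standard_normal(n)
b = A @ x0 + rng.uniform(0.1, 1.0, m)   # constructed so the problem is feasible

x = cp.Variable(n)
objective = cp.Minimize(cp.sum_squares(x - 1.0) + cp.norm(x, 1))
prob = cp.Problem(objective, [A @ x <= b])
prob.solve()                             # solver chosen automatically
print(prob.status, prob.value)
```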
Example

m lamps illuminating n (small, flat) patches

[figure: lamp j with power pj illuminating patch k at distance rkj and angle of incidence θkj; patch k receives illumination Ik]

intensity Ik at patch k depends linearly on lamp powers pj:

    Ik = Σ_{j=1}^m akj pj,    akj = rkj⁻² max{cos θkj, 0}

problem: achieve desired illumination Ides with bounded lamp powers

    minimize   max_{k=1,...,n} |log Ik − log Ides|
    subject to 0 ≤ pj ≤ pmax,  j = 1, . . . , m
how to solve?

1. use uniform power: pj = p, vary p

2. use least-squares:

       minimize Σ_{k=1}^n (Ik − Ides)²

   round pj if pj > pmax or pj < 0 (a numerical sketch of this heuristic follows the list)

3. use weighted least-squares:

       minimize Σ_{k=1}^n (Ik − Ides)² + Σ_{j=1}^m wj (pj − pmax/2)²

   iteratively adjust weights wj until 0 ≤ pj ≤ pmax

4. use linear programming:

       minimize   max_{k=1,...,n} |Ik − Ides|
       subject to 0 ≤ pj ≤ pmax,  j = 1, . . . , m

   which can be solved via linear programming

of course these are approximate (suboptimal) ‘solutions’
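A sketch of heuristic 2 (illustration only, with hypothetical data): fit the patch intensities to Ides by least-squares, then clip the powers into the feasible box and evaluate the true minimax-log objective.

```python
import numpy as np

# heuristic 2: least-squares fit, then clip powers into [0, p_max]
rng = np.random.default_rng(3)
m, n = 10, 20                        # m lamps, n patches
A = rng.uniform(0.0, 1.0, (n, m))    # stand-in for akj = rkj^-2 max{cos θkj, 0}
I_des, p_max = 1.0, 1.0

p, *_ = np.linalg.lstsq(A, np.full(n, I_des), rcond=None)
p = np.clip(p, 0.0, p_max)           # "round" infeasible powers to the box
I = np.maximum(A @ p, 1e-12)         # guard against log(0) after clipping
print(np.max(np.abs(np.log(I) - np.log(I_des))))   # the true objective
```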
5. use convex optimization: problem is equivalent to

       minimize   f0(p) = max_{k=1,...,n} h(Ik/Ides)
       subject to 0 ≤ pj ≤ pmax,  j = 1, . . . , m

   with h(u) = max{u, 1/u}

[plot: h(u) = max{u, 1/u} for 0 < u ≤ 4; h decreases to its minimum 1 at u = 1, then grows linearly]

f0 is convex because the maximum of convex functions is convex

exact solution obtained with effort ≈ modest factor × least-squares effort
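One way to encode approach 5 (a sketch, using the same hypothetical data as above; not part of the slides) is via CVXPY, writing h(u) = max{u, 1/u} with the convex atoms `maximum()` and `inv_pos()` so the problem stays DCP-compliant:

```python
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(3)
m, n = 10, 20                        # m lamps, n patches
A = rng.uniform(0.0, 1.0, (n, m))    # stand-in illumination matrix akj
I_des, p_max = 1.0, 1.0

p = cp.Variable(m)
I = A @ p                            # patch intensities, affine in p
h = cp.maximum(I / I_des, I_des * cp.inv_pos(I))   # elementwise h(Ik/Ides)
prob = cp.Problem(cp.Minimize(cp.max(h)), [p >= 0, p <= p_max])
prob.solve()
print(prob.value, p.value)           # optimal objective and lamp powers
```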
additional constraints: does adding 1 or 2 below complicate the problem?

1. no more than half of total power is in any 10 lamps
2. no more than half of the lamps are on (pj > 0)

• answer: with (1), still easy to solve; with (2), extremely difficult
• moral: (untrained) intuition doesn’t always work; without the proper background very easy problems can appear quite similar to very difficult problems
Course goals and topics

goals
1. recognize/formulate problems (such as the illumination problem) as convex optimization problems
2. develop code for problems of moderate size (1000 lamps, 5000 patches)
3. characterize optimal solution (optimal power distribution), give limits of performance, etc.

topics
1. convex sets, functions, optimization problems
2. examples and applications
3. algorithms
Nonlinear optimization

traditional techniques for general nonconvex problems involve compromises

local optimization methods (nonlinear programming)
• find a point that minimizes f0 among feasible points near it
• fast, can handle large problems
• require initial guess
• provide no information about distance to (global) optimum

global optimization methods
• find the (global) solution
• worst-case complexity grows exponentially with problem size

these algorithms are often based on solving convex subproblems
Brief history of convex optimization

theory (convex analysis): ca. 1900–1970

algorithms
• 1947: simplex algorithm for linear programming (Dantzig)
• 1960s: early interior-point methods (Fiacco & McCormick, Dikin, . . . )
• 1970s: ellipsoid method and other subgradient methods
• 1980s: polynomial-time interior-point methods for linear programming (Karmarkar 1984)
• late 1980s–now: polynomial-time interior-point methods for nonlinear convex optimization (Nesterov & Nemirovski 1994)

applications
• before 1990: mostly in operations research; few in engineering
• since 1990: many new applications in engineering (control, signal processing, communications, circuit design, . . . ); new problem classes (semidefinite and second-order cone programming, robust optimization)
Convex Optimization — Boyd & Vandenberghe

2. Convex sets

• affine and convex sets
• some important examples
• operations that preserve convexity
• generalized inequalities
• separating and supporting hyperplanes
• dual cones and generalized inequalities
Affine set

line through x1, x2: all points

    x = θx1 + (1 − θ)x2    (θ ∈ R)

[figure: the line through x1 and x2, with points marked at θ = 1.2, 1, 0.6, 0, −0.2]

affine set: contains the line through any two distinct points in the set

example: solution set of linear equations {x | Ax = b}

(conversely, every affine set can be expressed as the solution set of a system of linear equations)
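A small numerical check (illustration only, not from the slides) that the solution set of Ax = b is affine, i.e., that affine combinations of solutions are again solutions even for θ outside [0, 1]:

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.standard_normal((3, 5))      # underdetermined, so many solutions
x1 = rng.standard_normal(5)
b = A @ x1                           # make x1 a solution by construction

# a second solution: x1 plus any element of the nullspace of A
null_basis = np.linalg.svd(A)[2][3:].T    # last right-singular vectors
x2 = x1 + null_basis @ rng.standard_normal(2)

theta = 1.7                          # any real theta, not just [0, 1]
x = theta * x1 + (1 - theta) * x2
assert np.allclose(A @ x, b)
```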
Convex set

line segment between x1 and x2: all points

    x = θx1 + (1 − θ)x2    with 0 ≤ θ ≤ 1

convex set: contains line segment between any two points in the set

    x1, x2 ∈ C, 0 ≤ θ ≤ 1  =⇒  θx1 + (1 − θ)x2 ∈ C

[figure: three example sets, one convex and two nonconvex]
Convex combination and convex hull

convex combination of x1, . . . , xk: any point x of the form

    x = θ1x1 + θ2x2 + · · · + θkxk

with θ1 + · · · + θk = 1, θi ≥ 0

convex hull conv S: set of all convex combinations of points in S
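An illustration of the definition (not from the slides): random convex combinations of a point set always land inside its convex hull, which SciPy can compute and test against.

```python
import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(5)
S = rng.standard_normal((10, 2))     # 10 points in R^2
hull = ConvexHull(S)

# sample theta >= 0 with sum(theta) == 1 and form x = sum_i theta_i x_i
theta = rng.dirichlet(np.ones(len(S)))
x = theta @ S

# x lies in conv S iff it satisfies every facet inequality of the hull
assert np.all(hull.equations[:, :2] @ x + hull.equations[:, 2] <= 1e-9)
```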
Convex cone

conic (nonnegative) combination of x1 and x2: any point of the form

    x = θ1x1 + θ2x2    with θ1 ≥ 0, θ2 ≥ 0

[figure: the cone of all conic combinations of x1 and x2, with apex at the origin 0]

convex cone: set that contains all conic combinations of points in the set