
Barrier method

given strictly feasible x, t := t(0) > 0, µ > 1, tolerance ε > 0.

repeat
1. Centering step. Compute x⋆(t) by minimizing tf0 + φ, subject to Ax = b.
2. Update. x := x⋆(t).
3. Stopping criterion. quit if (∑i θi)/t < ε.
4. Increase t. t := µt.

• only difference is that the duality gap m/t on the central path is replaced by (∑i θi)/t

• number of outer iterations:

⌈log((∑i θi)/(ε t(0))) / log µ⌉

• complexity analysis via self-concordance applies to SDP, SOCP
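The repeat loop above can be sketched in code. The following is a minimal NumPy illustration (my own construction, not from the slides) for the ordinary inequality-form LP, minimize cᵀx subject to Ax ≤ b, where every θi = 1 so the gap bound reduces to m/t; the function names and tolerances are hypothetical choices.

```python
import numpy as np

def centering_step(c, A, b, x, t, tol=1e-9, max_iter=50):
    """Newton's method for the centering problem: minimize t*c^T x + phi(x),
    with phi(x) = -sum(log(b - A x)) the log barrier for A x <= b."""
    f = lambda z: t * (c @ z) - np.sum(np.log(b - A @ z))
    for _ in range(max_iter):
        d = 1.0 / (b - A @ x)                      # inverse slacks (positive)
        grad = t * c + A.T @ d
        hess = A.T @ (d[:, None] ** 2 * A)
        dx = np.linalg.solve(hess, -grad)
        lam2 = -grad @ dx                          # squared Newton decrement
        if lam2 / 2.0 < tol:
            break
        s = 1.0                                    # backtracking line search,
        while np.min(b - A @ (x + s * dx)) <= 0:   # staying strictly feasible
            s *= 0.5
        while f(x + s * dx) > f(x) - 0.25 * s * lam2:
            s *= 0.5
        x = x + s * dx
    return x

def barrier_method(c, A, b, x0, t0=1.0, mu=10.0, eps=1e-6):
    """Barrier method for: minimize c^T x subject to A x <= b,
    following the repeat loop on this slide (each theta_i = 1)."""
    m = A.shape[0]
    x, t = x0, t0
    while True:
        x = centering_step(c, A, b, x, t)  # 1. centering step / 2. update
        if m / t < eps:                    # 3. stopping criterion: gap m/t
            return x
        t *= mu                            # 4. increase t
```

With µ = 10 and ε = 10⁻⁶ this performs the handful of outer iterations predicted by the ⌈log(m/(ε t(0)))/log µ⌉ formula.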

Interior-point methods 12–29

Examples

second-order cone program (50 variables, 50 SOC constraints in R^6)

[figure: duality gap versus Newton iterations for µ = 2, µ = 50, µ = 200 (left); total number of Newton iterations versus µ (right)]

semidefinite program (100 variables, LMI constraint in S^100)

[figure: duality gap versus Newton iterations for µ = 2, µ = 50, µ = 150 (left); total number of Newton iterations versus µ (right)]

Interior-point methods 12–30

family of SDPs (A ∈ S^n, x ∈ R^n)

minimize    1^T x
subject to  A + diag(x) ⪰ 0

n = 10, . . . , 1000; for each n, solve 100 randomly generated instances
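A direct way to solve instances of this family can be sketched with the barrier φ(x) = −log det(A + diag(x)), whose generalized degree is θ = n, so the stopping gap bound is n/t. The sketch below is my own construction (names and parameters hypothetical, not the experiment's actual solver); the damped step length 1/(1 + λ) is the standard self-concordance safeguard that keeps the iterate in the domain.

```python
import numpy as np

def sdp_barrier(A, t0=1.0, mu=10.0, eps=1e-6, tol=1e-9):
    """Barrier method sketch for: minimize 1^T x s.t. A + diag(x) PSD,
    using the log-det barrier phi(x) = -log det(A + diag(x)), theta = n."""
    n = A.shape[0]
    # strictly feasible start: shift so A + diag(x) has smallest eigenvalue 1
    x = (1.0 - np.linalg.eigvalsh(A).min()) * np.ones(n)
    t = t0
    while True:
        for _ in range(100):                    # centering by damped Newton
            Sinv = np.linalg.inv(A + np.diag(x))
            grad = t * np.ones(n) - np.diag(Sinv)   # grad of t*1^T x + phi
            hess = Sinv ** 2                    # elementwise: H_ij = (S^-1)_ij^2
            dx = np.linalg.solve(hess, -grad)
            lam2 = -grad @ dx                   # squared Newton decrement
            if lam2 / 2.0 < tol:
                break
            x = x + dx / (1.0 + np.sqrt(lam2))  # damped step stays feasible
        if n / t < eps:                         # gap bound is theta/t = n/t
            return x
        t *= mu
```

The Hessian identity used here, ∂²(−log det S)/∂xi∂xj = ((S⁻¹)ij)² for S = A + diag(x), makes each Newton step an n×n solve.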

 

[figure: average number of Newton iterations versus n, n = 10^1 to 10^3 on a log scale; the iteration count stays between roughly 15 and 35]

Interior-point methods 12–31

Primal-dual interior-point methods

more efficient than barrier method when high accuracy is needed

update primal and dual variables at each iteration; no distinction between inner and outer iterations

often exhibit superlinear asymptotic convergence

search directions can be interpreted as Newton directions for modified KKT conditions

can start at infeasible points

cost per iteration same as barrier method
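For illustration only, here is a bare-bones infeasible-start primal-dual sketch for a standard-form LP (minimize cᵀx subject to Ax = b, x ≥ 0); the fixed centering parameter σ = 0.1 and the 0.99 fraction-to-boundary rule are common textbook simplifications, and all names are my own, not the slides'.

```python
import numpy as np

def primal_dual_lp(c, A, b, sigma=0.1, tol=1e-8, max_iter=100):
    """Primal-dual interior-point sketch for: min c^T x, A x = b, x >= 0.
    Each iteration is one Newton step on the modified KKT conditions, with
    complementarity x_i s_i = 0 relaxed to x_i s_i = sigma * mu."""
    m, n = A.shape
    x, y, s = np.ones(n), np.zeros(m), np.ones(n)  # may start infeasible

    def step(v, dv):
        """Largest step in (0, 1] keeping v + step*dv strictly positive."""
        neg = dv < 0
        return min(1.0, 0.99 * np.min(-v[neg] / dv[neg])) if neg.any() else 1.0

    for _ in range(max_iter):
        r_p = A @ x - b                    # primal residual
        r_d = A.T @ y + s - c              # dual residual
        mu = x @ s / n                     # surrogate duality gap / n
        if mu < tol and np.linalg.norm(r_p) < tol and np.linalg.norm(r_d) < tol:
            break
        r_c = sigma * mu - x * s           # relaxed complementarity residual
        # eliminate ds and dx -> normal equations  A diag(x/s) A^T dy = rhs
        M = A @ ((x / s)[:, None] * A.T)
        rhs = -r_p - A @ ((r_c + x * r_d) / s)
        dy = np.linalg.solve(M, rhs)
        ds = -r_d - A.T @ dy
        dx = (r_c - x * ds) / s
        ax, az = step(x, dx), step(s, ds)  # keep x > 0 and s > 0
        x, y, s = x + ax * dx, y + az * dy, s + az * ds
    return x, y, s
```

Note how primal and dual variables are updated together every iteration, with no inner/outer distinction, matching the bullets above.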

Interior-point methods 12–32

Convex Optimization — Boyd & Vandenberghe

13. Conclusions

main ideas of the course

importance of modeling in optimization

13–1

Modeling

mathematical optimization

problems in engineering design, data analysis and statistics, economics, management, . . . , can often be expressed as mathematical optimization problems

techniques exist to take into account multiple objectives or uncertainty in the data

tractability

roughly speaking, tractability in optimization requires convexity

algorithms for nonconvex optimization find local (suboptimal) solutions, or are very expensive

surprisingly many applications can be formulated as convex problems

Conclusions 13–2

Theoretical consequences of convexity

local optima are global

extensive duality theory

systematic way of deriving lower bounds on optimal value

necessary and sufficient optimality conditions

certificates of infeasibility

sensitivity analysis

solution methods with polynomial worst-case complexity theory (with self-concordance)

Conclusions 13–3

Practical consequences of convexity

(most) convex problems can be solved globally and efficiently

interior-point methods require 20–80 steps in practice

basic algorithms (e.g., Newton, barrier method, . . . ) are easy to implement and work well for small and medium-size problems (larger problems if structure is exploited)

more and more high-quality implementations of advanced algorithms and modeling tools are becoming available

high level modeling tools like CVX ease modeling and problem specification

Conclusions 13–4

How to use convex optimization

to use convex optimization in some applied context

use rapid prototyping, approximate modeling

start with simple models, small problem instances, inefficient solution methods

if you don’t like the results, no need to expend further effort on more accurate models or efficient algorithms

work out, simplify, and interpret optimality conditions and dual

even if the problem is quite nonconvex, you can use convex optimization

in subproblems, e.g., to find a search direction

by repeatedly forming and solving a convex approximation at the current point
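A toy instance of that last idea (my own example, not from the slides): minimize the nonconvex f(x) = (x² − a)² by repeatedly replacing x² with its linearization at the current point and minimizing the resulting convex quadratic, which reproduces the familiar Newton iteration for √a.

```python
def minimize_by_convex_approx(a, x=1.0, iters=30):
    """Minimize nonconvex f(x) = (x^2 - a)^2 by repeatedly minimizing a convex
    approximation at the current point: replace x^2 by its linearization
    x_k^2 + 2 x_k (x - x_k); the resulting convex quadratic is minimized
    where its affine residual vanishes."""
    for _ in range(iters):
        x = x + (a - x * x) / (2.0 * x)  # exact minimizer of the convex model
    return x
```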

Conclusions 13–5

Further topics

some topics we didn’t cover:

methods for very large scale problems

subgradient calculus, convex analysis

localization, subgradient, and related methods

distributed convex optimization

applications that build on or use convex optimization

Conclusions 13–6