It is very difficult to solve general non-linear optimization problems, and no universal techniques exist. This is intuitive, since most difficult mathematical problems can be cast as optimization problems. For example, the problem of finding a root of a non-linear function $g$ can be cast as the optimization problem $\min_x \|g(x)\|^2$. Hence we focus on a sub-class of optimization problems that can be solved efficiently.

Linear optimization problems (linear objective function and polyhedral constraint set) can be solved very efficiently: instances with even a million variables are routinely solved, and commercial software exists for such large problems. On the other hand, solving general non-linear optimization problems is extremely difficult (almost impossible). While solving linear problems is important and useful, the linear class does not cover all interesting optimization problems (for example, eigenvalue maximisation). Convex optimization problems form an important class (it subsumes the linear class and contains a subset of non-linear problems) that is interesting, useful, and efficiently solvable.

In this course, we will look at algorithms for convex optimization problems. In particular, the focus will be on high-dimensional problems, and several algorithms will be presented along with their convergence rates.

There is no standard textbook for this course. We will be referring to the following textbooks:

- *Introductory lectures on convex optimization: A basic course*, Yurii Nesterov
- *Convex optimization*, Stephen Boyd
- *Lectures on modern convex optimization*, Aharon Ben-Tal and Arkadi Nemirovski
- *Theory of convex optimization for machine learning*, Sebastien Bubeck
- *Convex optimization theory*, Dimitri P. Bertsekas
- *Nonlinear programming*, Dimitri P. Bertsekas
- *Linear and nonlinear programming*, David G. Luenberger and Yinyu Ye
- *Numerical optimization*, Jorge Nocedal and Stephen J. Wright
- *Problem complexity and method efficiency in optimization*, A. Nemirovsky and D. B. Yudin
- *A course in convexity*, Alexander Barvinok

**1. Comparing various algorithms**

In the first part (and most) of the course, we will look at algorithms that do not exploit the structure of the objective function. In this regard, we assume an oracle that provides the required information, and we have no additional information about the function. In this course we consider the following oracles:

- Zeroth-order oracle: input $x$, output $f(x)$.
- First-order oracle: input $x$, output $(f(x), \nabla f(x))$.
- Second-order oracle: input $x$, output $(f(x), \nabla f(x), \nabla^2 f(x))$.
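
The three oracle types can be sketched in code. Below is a minimal illustration (not from the notes): the toy quadratic $f(x) = \frac{1}{2}\|x\|^2$, whose gradient is $x$ and whose Hessian is the identity, wrapped with a call counter of the kind used when measuring oracle complexity.

```python
import numpy as np

class CountingOracle:
    """Wraps a toy function and counts oracle calls, the complexity
    measure used throughout the course. The quadratic is illustrative."""
    def __init__(self):
        self.calls = 0

    def zeroth_order(self, x):
        self.calls += 1
        return 0.5 * np.dot(x, x)                      # f(x)

    def first_order(self, x):
        self.calls += 1
        return 0.5 * np.dot(x, x), x.copy()            # (f(x), grad f(x))

    def second_order(self, x):
        self.calls += 1
        # (f(x), grad f(x), Hessian of f at x)
        return 0.5 * np.dot(x, x), x.copy(), np.eye(len(x))

oracle = CountingOracle()
x = np.array([1.0, 2.0])
f, g = oracle.first_order(x)   # one oracle call
```

An algorithm interacting only with such an object, by construction, cannot exploit any structure of $f$ beyond what the oracle reveals.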

We can now compare different algorithms over a class of functions with different oracles. The complexity of an algorithm will be measured in terms of the number of calls to the oracle, plus the computation needed to turn the oracle outputs into the optimum value. However, we will not count the complexity of the oracle itself, *i.e.*, the complexity of computing $f(x)$ (and its derivatives) will be neglected. In most of the problems we will consider, $f$ is real valued, and hence it does not make sense (or would take infinite complexity) to ask for the exact optimal value. Instead, we will look for solutions that are close to optimal, *i.e.*, we are interested in $\bar{x}$ such that $f(\bar{x}) - f^* \leq \epsilon$, where $f^*$ denotes the optimal value.

**2. Zeroth-order oracle**

Let us first consider the complexity of solving the following problem

$$\min_{x \in [0,1]^n} f(x),$$

when the function $f$ belongs to the class of Lipschitz continuous functions with respect to the $\ell_\infty$ norm. We require an $\epsilon$-accurate solution, *i.e.*, if $\bar{x}$ is the declared solution, then $f(\bar{x}) - f^* \leq \epsilon$. A function $f$ is Lipschitz continuous with constant $L$ if

$$|f(x) - f(y)| \leq L \, \|x - y\|_\infty \quad \text{for all } x, y \in [0,1]^n.$$

Suppose we have only a zeroth-order oracle, which provides the value of the function: what can we say about the computational complexity over this class of functions? We can provide an upper bound on the complexity by exhibiting an algorithm. The simplest algorithm is the following grid search: divide each dimension into $p$ parts, query the oracle at every point of the resulting lattice, and return the best point found. It can easily be seen that $(p+1)^n$ calls to the oracle are required.
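
The grid search above can be sketched as follows; this is a minimal illustration of the upper-bound argument, and the Lipschitz test function at the end is an assumption for demonstration, not from the notes.

```python
import itertools

def grid_search(oracle, n, p):
    """Minimize over [0,1]^n by querying the zeroth-order oracle at all
    (p+1)^n lattice points with coordinates i/p, i = 0..p, and
    returning the best point found."""
    best_x, best_f = None, float("inf")
    for idx in itertools.product(range(p + 1), repeat=n):
        x = tuple(i / p for i in idx)
        fx = oracle(x)                 # one zeroth-order oracle call
        if fx < best_f:
            best_x, best_f = x, fx
    return best_x, best_f

# Illustrative Lipschitz function on [0,1]^2 (L = 1 in the l-infinity
# sense up to a factor), minimized at (0.3, 0.7):
f = lambda x: abs(x[0] - 0.3) + abs(x[1] - 0.7)
x_hat, f_hat = grid_search(f, n=2, p=10)   # 11**2 = 121 oracle calls
```

Note that the loop visits all $(p+1)^n$ lattice points regardless of the answers it receives, which is exactly why the call count is independent of the particular $f$.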

**Lemma 1** *If $p \geq \frac{L}{2\epsilon}$, the grid search algorithm provides an $\epsilon$-accurate solution.*

*Proof:* The optimal point $x^*$ belongs to some small hypercube in the grid. Let $\bar{x}$ be the vertex of that hypercube nearest to $x^*$, so that $\|\bar{x} - x^*\|_\infty \leq \frac{1}{2p}$. Then by Lipschitz continuity of $f$,

$$f(\bar{x}) - f(x^*) \leq L \, \|\bar{x} - x^*\|_\infty \leq \frac{L}{2p}.$$

Hence choosing $p = \lceil \frac{L}{2\epsilon} \rceil$, *i.e.*, $\frac{L}{2p} \leq \epsilon$, we obtain the required accuracy.

So this algorithm calls the zeroth-order oracle $(p+1)^n = O\big((L/\epsilon)^n\big)$ times to obtain an $\epsilon$-accurate solution. Is this optimal (in an order sense)? To prove that it is, we build a worst-case function and show that, whatever the algorithm, an $\epsilon$-accurate solution cannot be found in fewer than $\big(\lfloor \frac{L}{2\epsilon} \rfloor\big)^n$ oracle calls.

**Lemma 2** *With the zeroth-order oracle, no deterministic algorithm can perform better (in an order sense).*

*Proof:* In this class, an algorithm's decisions may depend only on the values of the function that the oracle provides. Suppose you have an algorithm and you feed it the zero function $f \equiv 0$. The algorithm queries a sequence of points $x_1, \ldots, x_N$, after which it outputs a point that it declares $\epsilon$-accurate. Let $p = \lfloor \frac{L}{2\epsilon} \rfloor$ and assume that $N < p^n$ (otherwise we are done). Partition $[0,1]^n$ into $p^n$ sub-cubes of side $\frac{1}{p}$; since there are only $N < p^n$ query points in the cube $[0,1]^n$, some sub-cube contains none of them. Let $\hat{x}$ be its center, so that the ball $B(\hat{x}, \frac{1}{2p})$ contains no query point (the ball is with respect to the $\ell_\infty$ norm). Now consider the function

$$\bar{f}(x) = \min\left\{0,\; L\,\|x - \hat{x}\|_\infty - \epsilon\right\}.$$

This is a Lipschitz function with constant $L$ and optimal value $-\epsilon$, attained at $\hat{x}$. It is non-zero only when $\|x - \hat{x}\|_\infty < \frac{\epsilon}{L} \leq \frac{1}{2p}$, *i.e.*, inside the ball $B(\hat{x}, \frac{1}{2p})$. Hence $\bar{f}$ vanishes at every queried point, so the algorithm cannot distinguish $\bar{f}$ from $f \equiv 0$ and returns the same answer for both. It is easy to see that this answer can never get closer than $\epsilon$ to the optimal value of $\bar{f}$.
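
The resisting construction in the proof can be sketched directly: given the algorithm's queries, find an unqueried sub-cube and center the negative "bump" there. The query set below is illustrative; the function built follows the formula in the proof.

```python
import itertools

def resisting_function(queries, n, p, L):
    """Return (f_bar, x_hat): a function equal to 0 at every queried point
    but with minimum -eps = -L/(2p), attained at the center x_hat of a
    sub-cube of the p^n grid that contains no query (l-infinity balls)."""
    eps = L / (2 * p)
    for idx in itertools.product(range(p), repeat=n):
        x_hat = tuple((2 * i + 1) / (2 * p) for i in idx)   # cell center
        # the ball B_inf(x_hat, 1/(2p)) is the interior of this cell
        if all(max(abs(q[k] - x_hat[k]) for k in range(n)) >= 1 / (2 * p)
               for q in queries):
            def f_bar(x, c=x_hat):
                return min(0.0, L * max(abs(x[k] - c[k]) for k in range(n)) - eps)
            return f_bar, x_hat
    return None, None   # every cell was queried: at least p^n calls were made

# Two queries in [0,1]^2 with p = 2 (four cells): a free cell must exist.
f_bar, x_hat = resisting_function([(0.25, 0.25), (0.75, 0.75)], n=2, p=2, L=1.0)
```

By construction $f\_bar$ returns $0$ at both queried points while its true minimum is $-L/(2p)$, which is exactly why the algorithm's output, computed from zeros alone, cannot certify $\epsilon$-accuracy.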

This result shows that the complexity of optimization with a zeroth-order oracle over the class of Lipschitz functions is exponential in the dimension. In the next class, we will look at Lipschitz functions in more detail.

**Reference**: *Problem complexity and method efficiency in optimization*, A. Nemirovsky and D. B. Yudin
