# Invited talks and Tutorials

Invited talks and tutorials have always been an important way to add value to the LION conference, not only for the presentations but also because of the interactions during the event.

**This webpage will be updated as invited speakers agree to give talks and tutorials.**

## Invited talks

**Vijay Vazirani**, Georgia Institute of Technology

**New (Practical) Complementary Pivot Algorithms for Market Equilibria**

Complementary pivot algorithms, in the style of the simplex
algorithm, tend to work well in practice despite exponential
worst-case behavior -- a case in point being the classic Lemke-Howson
algorithm (1964) for 2-player Nash equilibrium. This algorithm also gives
a direct proof of the problem's membership in the class PPAD and yields
deep structural insights, such as the oddness of the number of equilibria.

Our first result accomplishes all of the above for the problem of finding
an equilibrium for an Arrow-Debreu market under separable, piecewise
linear concave utilities. Our second result extends this to markets with
production.

Based on the following paper, and a recent joint work with Jugal Garg
and Ruta Mehta: http://www.cc.gatech.edu/~vazirani/SPLC.pdf

**Holger H. Hoos**, University of British Columbia

**Machine Learning & Optimisation: Promise and Power of Data-driven, Automated Algorithm Design**

Computational tools are transforming the way we live, work and interact; they also hold the key for meeting many of the challenges that we face as individuals, communities and societies. Machine learning and optimisation techniques play a particularly important role in this context, and cleverly combined, they can revolutionise the way we solve challenging computational problems - as I will demonstrate in this talk, using examples from mixed integer programming, planning and propositional satisfiability, as well as from prominent supervised machine learning tasks. The fundamental techniques I will cover include automated algorithm selection, configuration and hyperparameter optimisation, as well as performance prediction and Bayesian optimisation. I will also motivate and explain the Programming by Optimisation paradigm for automated algorithm design, which further extends the reach of those techniques.
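As a minimal illustration of one of the techniques mentioned above, hyperparameter optimisation in its simplest form is just a search over a configuration space. The sketch below (illustrative only, not from the talk; the two-parameter "validation loss" is a hypothetical stand-in) uses plain random search; Bayesian optimisation would replace the uniform sampling with model-guided proposals:

```python
import random

def random_search(objective, bounds, n_trials=500, seed=0):
    """Minimise `objective` by sampling hyperparameters uniformly at random.

    `bounds` maps each hyperparameter name to a (low, high) interval.
    """
    rng = random.Random(seed)
    best_params, best_value = None, float("inf")
    for _ in range(n_trials):
        params = {k: rng.uniform(lo, hi) for k, (lo, hi) in bounds.items()}
        value = objective(params)
        if value < best_value:
            best_params, best_value = params, value
    return best_params, best_value

# Hypothetical stand-in for a validation loss over two hyperparameters,
# minimised at lr = 2, reg = -1.
def loss(p):
    return (p["lr"] - 2.0) ** 2 + (p["reg"] + 1.0) ** 2

best, value = random_search(loss, {"lr": (-5.0, 5.0), "reg": (-5.0, 5.0)})
```

With a few hundred trials the best sample lands close to the optimum; the point of the more advanced techniques in the talk is to get there with far fewer evaluations of an expensive objective.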

**Baba C. Vemuri**, University of Florida

**Dictionary Learning on Riemannian Manifolds and its Applications**

Existing dictionary learning algorithms rely heavily on the assumption that the data points are vectors in some Euclidean space R^n, and the dictionary is learned from the input data using only the vector space structure of R^n. However, in many applications, features and data points belong to some Riemannian manifold whose intrinsic metric structure is potentially important and critical to the application. The extrinsic viewpoint of existing dictionary learning methods becomes inappropriate and inadequate when the intrinsic geometric structure is required in modeling operations applied to the data. In this talk, I will present a very general dictionary learning framework for data lying on known Riemannian manifolds. Using the local linear structure furnished by the Riemannian geometry, I will propose a novel dictionary learning algorithm that can be considered data-specific, a feature not present in existing methods. Further, I will show that both the dictionary and the sparse coding can be effectively computed for these Riemannian manifolds. Experimental results on applications of the proposed method will be presented for classification and reconstruction problems drawn from the domains of medical image analysis and computer vision. Preliminary results demonstrate that the dictionary learned using the proposed method provides real improvements compared to other, direct approaches.

**Roman V. Belavkin**, Middlesex University

**Information-Geometric Optimisation of Parameters in Randomised Algorithms**

Many learning and optimization algorithms are iterative procedures that use local information (e.g. the gradient of the objective function) to generate new steps towards the goal (e.g. the maximum of the objective function). Usually, local information is insufficient to solve the problem in a single iteration, and minimizing the convergence time comes down to how well this local information is used. The iterative nature of these algorithms implies that they describe processes with the Markov property, which allows us to formulate the optimization of an algorithm's parameters as an information-theoretic variational problem with constraints on the Shannon capacity of the Markov transition kernel corresponding to the algorithm. The Pythagorean theorem for the Shannon capacity allows us to analyze these constraints, which may help find the optimal values of the parameters. Optimization of the mutation rate in a simple genetic algorithm will be used as an example.
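To make the example concrete, here is a minimal, illustrative sketch (not from the talk) of an elitist (1+1) evolutionary algorithm on the OneMax problem; the mutation rate is exactly the kind of parameter whose optimisation the talk addresses, and the rate 1/n used as the default here is the classical choice:

```python
import random

def one_plus_one_ea(n=10, mutation_rate=None, max_iters=5000, seed=0):
    """Elitist (1+1) EA on OneMax (maximise the number of 1-bits).

    Each iteration flips every bit independently with probability
    `mutation_rate` (default 1/n) and keeps the offspring if it is
    no worse than the parent.
    """
    rng = random.Random(seed)
    p = 1.0 / n if mutation_rate is None else mutation_rate
    x = [rng.randint(0, 1) for _ in range(n)]
    fitness = sum(x)
    for _ in range(max_iters):
        y = [1 - b if rng.random() < p else b for b in x]
        if sum(y) >= fitness:
            x, fitness = y, sum(y)
        if fitness == n:
            break  # optimum reached
    return fitness

best = one_plus_one_ea()
```

Too small a mutation rate makes progress slow; too large a rate destroys good solutions faster than selection can keep them, which is why an information-geometric characterisation of the optimal rate is of practical interest.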

## Tutorials

**Ding-Zhu Du**, University of Texas at Dallas, USA

**Nonlinear Combinatorial Optimization**

There are many combinatorial optimization problems with nonlinear objective functions and/or nonlinear constraints, arising in recent studies of the energy efficiency of wireless networks, cloud computing, and data networks. They form a new direction in the study of approximation algorithms. In this talk, I give some examples and explain the current status and open problems in this research direction.

**Mauricio G.C. Resende**, AT&T Labs Research, USA

**GRASP: Advances and applications**

GRASP is a multi-start metaheuristic for combinatorial optimization problems introduced in 1989 by Feo and Resende. Each GRASP iteration consists basically of two phases: construction and local search. The construction phase builds a feasible solution, whose neighborhood is investigated until a local minimum is found during the local search phase. The best overall solution is kept as the result. An intensification strategy based on path-relinking is frequently used to improve solution quality and to reduce computation times by exploring elite solutions previously found along the search. This tutorial describes the basic components of GRASP, successful implementation strategies, and effective hybridizations with path-relinking and Lagrangean relaxation. We also address some tricks to be used in the quest for good implementations, such as the use of multiple threads and restart strategies. We visit a number of recent developments, such as continuous GRASP, automatic tuning of GRASP parameters, and multi-objective GRASP. We conclude with examples of recent applications of GRASP.
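The two phases can be sketched in a few lines. The toy code below (illustrative only, not from the tutorial) applies GRASP to max-cut: the construction phase assigns vertices in random order, choosing a side from a restricted candidate list (RCL) of near-greedy options controlled by `alpha`, and the local search phase flips single vertices while the cut improves:

```python
import random

def grasp_maxcut(edges, n, iters=30, alpha=0.3, seed=0):
    """GRASP for max-cut: greedy randomised construction + flip local search."""
    rng = random.Random(seed)
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)

    def cut_value(side):
        return sum(1 for u, v in edges if side[u] != side[v])

    best_side, best_val = None, -1
    for _ in range(iters):
        # Construction: place vertices in random order; the RCL keeps the
        # sides whose greedy gain is within alpha of the best gain.
        side = [None] * n
        for v in rng.sample(range(n), n):
            gains = []
            for s in (0, 1):
                g = sum(1 for w in adj[v] if side[w] is not None and side[w] != s)
                gains.append((g, s))
            g_max = max(g for g, _ in gains)
            g_min = min(g for g, _ in gains)
            rcl = [s for g, s in gains if g >= g_max - alpha * (g_max - g_min)]
            side[v] = rng.choice(rcl)
        # Local search: flip any single vertex that increases the cut.
        improved = True
        while improved:
            improved = False
            for v in range(n):
                gain = sum(1 for w in adj[v] if side[w] == side[v]) \
                     - sum(1 for w in adj[v] if side[w] != side[v])
                if gain > 0:
                    side[v] = 1 - side[v]
                    improved = True
        val = cut_value(side)
        if val > best_val:
            best_side, best_val = side, val
    return best_side, best_val

# Complete bipartite graph K3,3: the maximum cut uses all 9 edges.
edges = [(u, v) for u in (0, 1, 2) for v in (3, 4, 5)]
side, value = grasp_maxcut(edges, n=6)
```

Path-relinking and the other hybridizations discussed in the tutorial would sit on top of this skeleton, recombining the elite solutions that the multi-start loop collects.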

**Nick Sahinidis**, Carnegie Mellon University, USA

**ALAMO: Automated Learning of Algebraic Models for Optimization**

ALAMO is a tool we have developed to generate algebraic models from simulation or experimental data. ALAMO aims to use a small number of simulations or experiments and to generate models that are as accurate and as simple as possible. The approach does not require the user to specify the mathematical form of the underlying model. Furthermore, the software is capable of guiding the user in the selection of points at which to run the simulator or experiments, so as to make the most of simulation/experimental measurements in the model-building process. This tutorial will explore these capabilities.

More information on ALAMO is available at http://archimedes.cheme.cmu.edu/?q=alamo.
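The flavour of this model-building step can be sketched as best-subset selection over a small library of candidate basis functions. The toy version below (an illustrative sketch, not ALAMO's actual algorithm, which uses integer-programming-based machinery) enumerates small subsets and scores each by fit error plus a complexity penalty, so simpler models win unless extra terms buy real accuracy:

```python
import itertools
import math

def lstsq(X, y):
    """Least squares via normal equations with Gaussian elimination
    (adequate for the tiny systems in this sketch)."""
    k = len(X[0])
    A = [[sum(X[i][r] * X[i][c] for i in range(len(X))) for c in range(k)]
         for r in range(k)]
    b = [sum(X[i][r] * y[i] for i in range(len(X))) for r in range(k)]
    for col in range(k):
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            for c in range(col, k):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    coef = [0.0] * k
    for r in reversed(range(k)):
        coef[r] = (b[r] - sum(A[r][c] * coef[c] for c in range(r + 1, k))) / A[r][r]
    return coef

def fit_algebraic_model(xs, ys, basis, max_terms=2, penalty=0.1):
    """Pick the basis-function subset minimising fit error + complexity penalty."""
    best = None
    for size in range(1, max_terms + 1):
        for subset in itertools.combinations(range(len(basis)), size):
            X = [[basis[j][1](x) for j in subset] for x in xs]
            coef = lstsq(X, ys)
            err = sum((sum(c * row[i] for i, c in enumerate(coef)) - y) ** 2
                      for row, y in zip(X, ys))
            score = err + penalty * size
            if best is None or score < best[0]:
                best = (score, [basis[j][0] for j in subset], coef)
    return best[1], best[2]

basis = [("1", lambda x: 1.0), ("x", lambda x: x),
         ("x^2", lambda x: x * x), ("exp(x)", math.exp)]
xs = [0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0]
ys = [2 * x * x + 3 for x in xs]  # hidden "true" model: 2x^2 + 3
names, coefs = fit_algebraic_model(xs, ys, basis)
```

On this data the selection recovers the terms `1` and `x^2` with coefficients close to 3 and 2, i.e. the simple algebraic form that generated the points.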

**My T. Thai**, University of Florida, USA

**Interdependent Networks Analysis**

Modern complex networked systems, such as defense strategies, communication systems, sensor networks, and multimodal transportation networks, are interdependent. Changes in one network may propagate and cascade into other networks in the system, leading to a larger-scale impact. For example, failures in the communication network may lead to failures in the power grid, and eventually to a major blackout, which may in turn affect the transportation networks and others. Understanding the interdependent behavior of these networks is therefore essential for analyzing the operability of the system as a whole. This tutorial provides the basic theory of interdependent networks, together with several new mathematical models and optimization techniques to analyze the diffusion dynamics and vulnerability of these systems.
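The cascading mechanism described above can be sketched very simply (an illustrative toy, not a model from the tutorial; the node names and dependency map are invented): failures propagate along directed interdependency links until no new node fails:

```python
from collections import deque

def cascade(initial_failures, supports):
    """Propagate failures through interdependency links.

    `supports` maps each node to the set of nodes that stop working
    when it fails (e.g. a power node supplying communication nodes).
    Returns the full set of failed nodes once the cascade settles.
    """
    failed = set(initial_failures)
    queue = deque(initial_failures)
    while queue:
        node = queue.popleft()
        for dependent in supports.get(node, ()):
            if dependent not in failed:
                failed.add(dependent)
                queue.append(dependent)
    return failed

# Toy two-network system: power nodes P1..P3 and communication nodes C1..C3.
# Each communication node is powered by one power node, and the control
# traffic for P2 and P3 is routed through C1.
supports = {
    "P1": {"C1"}, "P2": {"C2"}, "P3": {"C3"},
    "C1": {"P2", "P3"},
}
result = cascade({"P1"}, supports)
```

Here the single failure of P1 brings down the entire six-node system, which is exactly the amplification effect that makes interdependent networks far more fragile than either network analyzed in isolation.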

**Mario Guarracino**, Consiglio Nazionale delle Ricerche, Italy

**From separating to proximal-plane supervised classifiers**

The tutorial introduces algorithms, methods and software tools for parallel- and proximal-plane supervised classifiers. Separating-plane classifiers are introduced in terms of Support Vector Machines (SVMs), and different models are described that generalize classification from separating planes to proximal planes, which represent the two classes in the binary classification case. We describe proximal support vector machine classification by means of generalized eigenvalue problems, introducing some regularization techniques. The evolution of these techniques will be described, focusing on local information, twin support vector machines and nonparallel-plane proximal classifiers.
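A toy sketch of the generalized-eigenvalue approach (in the spirit of the methods covered, but not the tutorial's own code; the regularisation constant and the "cross planes" dataset are illustrative assumptions): each class gets a plane that is close to its own points and far from the other class's points, found as the smallest eigenpair of a generalized eigenvalue problem, and a point is assigned to the class whose plane is nearest:

```python
import numpy as np

def fit_proximal_plane(close_pts, far_pts, delta=1e-4):
    """Plane w.x + b = 0 close to `close_pts` and far from `far_pts`.

    Minimises ||A z||^2 / ||B z||^2 over z = [w; b] (rows of A, B are the
    augmented points), i.e. the smallest eigenpair of the generalized
    eigenvalue problem G z = lambda H z, with Tikhonov regularisation delta.
    """
    A = np.hstack([close_pts, np.ones((len(close_pts), 1))])
    B = np.hstack([far_pts, np.ones((len(far_pts), 1))])
    G = A.T @ A + delta * np.eye(A.shape[1])
    H = B.T @ B + delta * np.eye(B.shape[1])
    vals, vecs = np.linalg.eig(np.linalg.solve(H, G))
    z = np.real(vecs[:, np.argmin(np.real(vals))])
    return z[:-1], z[-1]  # w, b

def classify(x, planes):
    """Assign x to the class whose proximal plane is nearest."""
    dists = [abs(w @ x + b) / np.linalg.norm(w) for w, b in planes]
    return int(np.argmin(dists))

# "Cross planes" toy data: class 0 lies near y = x, class 1 near y = -x.
# A single separating plane handles this badly; two proximal planes do not.
t = np.linspace(1.0, 5.0, 9)
class0 = np.column_stack([t, t])
class1 = np.column_stack([t, -t])
planes = [fit_proximal_plane(class0, class1),
          fit_proximal_plane(class1, class0)]
pred0 = classify(np.array([3.0, 2.9]), planes)
pred1 = classify(np.array([3.0, -2.9]), planes)
```

The cross-planes example is the standard illustration of why proximal (nonparallel) planes can succeed where a single separating plane cannot.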