Introduction to Online Convex Optimization, second edition

Author: Elad Hazan

    About

    New edition of a graduate-level textbook that focuses on online convex optimization, a machine learning framework that views optimization as a process.

    In many practical applications, the environment is so complex that it is not feasible to lay out a comprehensive theoretical model and use classical algorithmic theory and/or mathematical optimization. Introduction to Online Convex Optimization presents a robust machine learning approach that contains elements of mathematical optimization, game theory, and learning theory: an optimization method that learns from experience as more aspects of the problem are observed. This view of optimization as a process has led to some spectacular successes in modeling and systems that have become part of our daily lives.
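
    The core loop of this framework is simple to state: at each round the learner commits to a decision, a convex loss is revealed, and the learner updates. As a minimal illustration (a sketch, not an excerpt from the book), here is online projected gradient descent run on a toy stream of least-squares losses; all names and data below are illustrative.

```python
# Minimal sketch of online (projected) gradient descent: at each round the
# learner commits to a point, a convex loss is revealed, and the learner takes
# a projected gradient step. The toy least-squares stream below is illustrative.
import numpy as np

def online_gradient_descent(loss_grad, project, dim, rounds, eta=1.0):
    """Return the sequence of decisions played over `rounds` rounds."""
    x = np.zeros(dim)                       # initial decision in the feasible set
    plays = []
    for t in range(1, rounds + 1):
        plays.append(x.copy())              # commit to x before seeing the loss
        g = loss_grad(t, x)                 # gradient of the round-t loss at x
        x = project(x - (eta / np.sqrt(t)) * g)   # step with decaying step size
    return plays

# Toy usage: streaming least-squares losses f_t(x) = (a_t . x - b_t)^2,
# with decisions projected onto the unit Euclidean ball.
rng = np.random.default_rng(0)
a = rng.normal(size=(100, 5))
b = rng.normal(size=100)
grad = lambda t, x: 2.0 * (a[t - 1] @ x - b[t - 1]) * a[t - 1]
proj = lambda x: x / max(1.0, np.linalg.norm(x))
plays = online_gradient_descent(grad, proj, dim=5, rounds=100)
```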

    Based on the “Theoretical Machine Learning” course taught by the author at Princeton University, the second edition of this widely used graduate-level text features:
  • Thoroughly updated material throughout
  • New chapters on boosting, adaptive regret, and approachability, as well as expanded exposition on optimization
  • Examples of applications offered throughout, including prediction from expert advice (see the sketch after this list), portfolio selection, matrix completion and recommendation systems, and SVM training
  • Exercises that guide students in completing parts of proofs
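
    For a flavor of the prediction-from-experts application mentioned above, here is a minimal sketch of a multiplicative-weights (Hedge-style) update; the synthetic expert losses and function names are illustrative, not taken from the text.

```python
# Minimal sketch of a multiplicative-weights (Hedge-style) algorithm for
# prediction from expert advice: keep a weight per expert and shrink the weight
# of each expert in proportion to the loss it just suffered.
import numpy as np

def hedge(expert_losses, eta=0.5):
    """expert_losses: (T, n) array of per-round losses in [0, 1] for n experts.
    Returns the (T, n) sequence of probability distributions played."""
    T, n = expert_losses.shape
    w = np.ones(n)                          # uniform initial weights
    dists = []
    for t in range(T):
        p = w / w.sum()                     # play the normalized weight vector
        dists.append(p)
        w = w * np.exp(-eta * expert_losses[t])   # exponential down-weighting
    return np.array(dists)

# Synthetic usage: 200 rounds, 4 experts with random losses; the distribution
# drifts toward the experts with the smallest cumulative loss.
rng = np.random.default_rng(1)
dists = hedge(rng.uniform(size=(200, 4)))
print(dists[-1])
```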

    Table of Contents

    Preface xi
    Acknowledgments xv
    List of Figures xvii
    List of Symbols xix
    1 Introduction 1
    2 Basic Concepts in Convex Optimization 15
    3 First-Order Algorithms for Online Convex Optimization 37
    4 Second-Order Methods 49
    5 Regularization 63
    6 Bandit Convex Optimization 89
    7 Projection-Free Algorithms 107
    8 Games, Duality and Regret 123
    9 Learning Theory, Generalization, and Online Convex Optimization 133
    10 Learning in Changing Environments 147
    11 Boosting and Regret 163
    12 Online Boosting 171
    13 Blackwell Approachability and Online Convex Optimization 181
    Notes 191
    References 193
    Index 207

    Author

    Elad Hazan is Professor of Computer Science at Princeton University and cofounder and director of Google AI Princeton. An innovator in the design and analysis of algorithms for basic problems in machine learning and optimization, he is coinventor of the AdaGrad optimization algorithm for deep learning, the first adaptive gradient method.