Learning Kernel Classifiers

Theory and Algorithms

Paperback
$60.00 US
On sale Nov 01, 2022 | 384 Pages | 9780262546591

About

An overview of the theory and application of kernel classification methods.

Linear classifiers in kernel spaces have emerged as a major topic within the field of machine learning. The kernel technique takes the linear classifier—a limited but well-established and comprehensively studied model—and extends its applicability to a wide range of nonlinear pattern-recognition tasks such as natural language processing, machine vision, and biological sequence analysis. This book provides the first comprehensive overview of both the theory and algorithms of kernel classifiers, including the most recent developments. It begins by describing the major algorithmic advances: kernel perceptron learning, kernel Fisher discriminants, support vector machines, relevance vector machines, Gaussian processes, and Bayes point machines. It then gives a detailed introduction to learning theory, including VC and PAC-Bayesian theory, data-dependent structural risk minimization, and compression bounds. Throughout, the book emphasizes the interaction between theory and algorithms: how learning algorithms work and why. The book includes many examples, complete pseudocode for the algorithms presented, and an extensive source code library.
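To make the kernel technique concrete, the sketch below shows a dual-form kernel perceptron in Python. It is an illustrative example only, not the book's own pseudocode or source code library; the Gaussian (RBF) kernel, the function names, and the parameter values are assumptions chosen for the demonstration.

```python
import numpy as np

def rbf_kernel(x, z, gamma=1.0):
    """Gaussian (RBF) kernel; gamma is an assumed, illustrative bandwidth."""
    return np.exp(-gamma * np.sum((x - z) ** 2))

def kernel_perceptron(X, y, kernel=rbf_kernel, epochs=20):
    """Train a perceptron in dual form: only kernel evaluations are used,
    never an explicit nonlinear feature map.

    X: (n, d) array of inputs; y: (n,) array of labels in {-1, +1}.
    Returns alpha, the per-example mistake counts of the dual expansion.
    """
    n = X.shape[0]
    alpha = np.zeros(n)
    # Precompute the Gram matrix K[i, j] = k(x_i, x_j).
    K = np.array([[kernel(X[i], X[j]) for j in range(n)] for i in range(n)])
    for _ in range(epochs):
        for i in range(n):
            # Current decision value f(x_i) = sum_j alpha_j y_j k(x_j, x_i).
            f_i = alpha @ (y * K[:, i])
            if y[i] * np.sign(f_i) <= 0:   # misclassified (or on the boundary)
                alpha[i] += 1              # add example i to the expansion
    return alpha

def predict(X_train, y_train, alpha, x_new, kernel=rbf_kernel):
    """Classify a new point from the dual expansion."""
    score = sum(alpha[j] * y_train[j] * kernel(X_train[j], x_new)
                for j in range(len(alpha)))
    return np.sign(score)

if __name__ == "__main__":
    # Toy XOR-style data that no linear classifier in input space separates.
    X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
    y = np.array([-1., 1., 1., -1.])
    alpha = kernel_perceptron(X, y)
    print([int(predict(X, y, alpha, x)) for x in X])  # expected: [-1, 1, 1, -1]
```

Because all computation goes through kernel evaluations, swapping rbf_kernel for another positive definite kernel changes the induced nonlinearity without touching the learning loop, which is the sense in which the kernel technique extends a linear classifier to nonlinear tasks.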

Author

Ralf Herbrich
