Optimizing Top-k Multiclass SVM via Semismooth Newton Algorithm

This document collects abstracts of several papers on training support vector machines. The first presents the PSBCM-ALH algorithm for training large-scale SVMs: proximal stochastic block coordinate minimization with a novel heuristic shrinking method selects random samples without computing the full gradient, and an augmented Lagrangian homotopy algorithm solves the resulting subproblems efficiently via warm starting, Cholesky factorization updating, and ε-precision verification. Experimental results on benchmark data sets show the algorithm is efficient and robust across different kernels, compares favourably with LIBSVM, and, for the linear kernel, is competitive with LIBLINEAR while using less memory.


Link: https://www.researchgate.net/publication/331273575_PSBCM-ALH_Algorithm_for_Training_Large-scale_Support_Vector_Machine

PSBCM-ALH Algorithm for Training Large-scale Support Vector Machine

Abstract
In this paper, building on Osuna's decomposition algorithm, a proximal stochastic block coordinate
minimization (PSBCM) algorithm is presented for training large-scale support vector machines.
At each iteration, PSBCM solves a strongly convex quadratic programming subproblem over
brand-new training samples randomly selected by a novel heuristic shrinking method that does
not compute the full gradient of the objective function, and then uses a proposed augmented
Lagrangian homotopy (ALH) algorithm to solve the subproblem. ALH is efficient because it
solves the augmented Lagrangian subproblems exactly with a homotopy algorithm built on three
key techniques: warm starting, Cholesky factorization updating, and ε-precision verification
and correction. In addition, a proposed adaptive parameter-updating method improves the
performance and robustness of PSBCM-ALH, which denotes the algorithm with PSBCM as the outer
iteration and ALH as the inner iteration. Numerical experiments on the LIBSVM data sets and the
MIT-CBCL face detection database show that PSBCM-ALH is efficient and robust with different
kernels and has clear advantages over the well-known LIBSVM. For the linear kernel,
PSBCM-ALH is shown to be very competitive with the state-of-the-art LIBLINEAR while needing
less memory. Finally, the results demonstrate that PSBCM-ALH has linear time complexity
when the kernel is linear.
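
To make the outer/inner structure concrete, the following is a minimal Python sketch of a block coordinate dual SVM solver in the spirit of PSBCM. It is an illustrative reconstruction, not the authors' implementation: the shrinking heuristic is a simple stand-in, the inner ALH solver is replaced by a few projected-gradient steps, and the bias term is dropped for brevity.

    import numpy as np

    def train_svm_block_coordinate(K, y, C=1.0, block_size=64,
                                   outer_iters=200, inner_iters=20, seed=0):
        """Bias-free SVM dual:  max_a 1^T a - 0.5 a^T Q a,  0 <= a <= C,
        with Q = (y y^T) * K.  One random block B is optimized per outer
        iteration (decomposition), warm-started from the current iterate."""
        rng = np.random.default_rng(seed)
        n = len(y)
        Q = (y[:, None] * y[None, :]) * K
        a = np.zeros(n)
        grad = np.ones(n)                  # gradient of the dual: 1 - Q a
        for _ in range(outer_iters):
            # Stand-in for the paper's heuristic shrinking: bias the random
            # block toward coordinates whose box/KKT conditions are violated.
            up = (a < C) & (grad > 0)      # could still increase
            down = (a > 0) & (grad < 0)    # could still decrease
            viol = np.where(up, grad, 0.0) - np.where(down, grad, 0.0)
            p = viol + 1e-3
            B = rng.choice(n, size=min(block_size, n), replace=False,
                           p=p / p.sum())
            step = 1.0 / (np.linalg.norm(Q[np.ix_(B, B)], 2) + 1e-12)
            for _ in range(inner_iters):   # projected gradient on the block
                a[B] = np.clip(a[B] + step * (1.0 - Q[B] @ a), 0.0, C)
            grad = 1.0 - Q @ a             # full refresh, for this sketch only;
                                           # PSBCM avoids this full computation
        return a

For a linear kernel one would work with the primal features directly rather than forming K; the sketch keeps the kernel matrix only for simplicity.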

Optimizing Top-k Multiclass SVM via Semismooth Newton Algorithm
Top-k performance has recently received increasing attention in problems with large numbers of categories.
Advances such as the top-k multiclass support vector machine (SVM) have consistently improved top-k
accuracy. However, the key ingredient of the state-of-the-art optimization scheme, based on
stochastic dual coordinate ascent, relies on sorting, which incurs $O(d \log d)$ complexity. In
this paper, we leverage the semismoothness of the problem and propose an optimized top-k multiclass
SVM algorithm that employs a semismooth Newton algorithm for the key building block to improve
training speed. Our method enjoys a local superlinear convergence rate in theory, and
experimental results confirm its validity in practice. Our algorithm is four times faster than the
existing method on large synthetic problems; moreover, it also shows significant improvements in
training time on real-world data sets.
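
For context, the sorting bottleneck can be seen in the loss itself: a common formulation of the top-k hinge loss averages the k largest margin violations, and the same sort reappears inside the proximal subproblem that stochastic dual coordinate ascent must solve. The sketch below illustrates that formulation; it is not necessarily the paper's exact objective.

    import numpy as np

    def topk_hinge_loss(scores, y, k=5):
        """A common form of the top-k hinge loss: the average of the k
        largest margin violations c_j = [j != y] + s_j - s_y.  The sort
        is the O(d log d) step mentioned above; the paper handles the
        corresponding subproblem with a semismooth Newton solve instead."""
        c = 1.0 + scores - scores[y]   # per-class margin violation
        c[y] = 0.0                     # the true class carries no margin term
        topk = np.sort(c)[::-1][:k]    # O(d log d) sorting step
        return max(0.0, topk.mean())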

New Support Vector Algorithms


We propose a new class of support vector algorithms for regression and classification. In these
algorithms, a parameter ν lets one effectively control the number of support vectors. While this
can be useful in its own right, the parameterization has the additional benefit of enabling us to
eliminate one of the other free parameters of the algorithm: the accuracy parameter ε in the
regression case, and the regularization constant C in the classification case. We describe the
algorithms, give some theoretical results concerning the meaning and the choice of ν, and report
experimental results.
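
The ν-parameterization described here is available off the shelf; for instance, scikit-learn exposes it as NuSVC and NuSVR. A quick usage sketch (the data set and parameter values are arbitrary):

    from sklearn.datasets import make_classification
    from sklearn.svm import NuSVC

    X, y = make_classification(n_samples=200, random_state=0)
    # nu in (0, 1] replaces C: it upper-bounds the fraction of margin
    # errors and lower-bounds the fraction of support vectors.
    clf = NuSVC(nu=0.2, kernel="rbf").fit(X, y)
    print(f"support vectors: {len(clf.support_)} of {len(X)}")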

Semi-Supervised Learning Machine Based on Multi-View Twin Support Vector Machine (2019)
Abstract
In many practical machine learning problems the data has multiple views; the views complement
one another, and exploiting them improves classification. This paper studies a
semi-supervised multi-view twin support vector machine, dividing the data according
to its different characteristics. For each view, two non-parallel hyperplanes are found, and the
model is constructed and solved through its dual problem. Experimental results show that the
proposed algorithm can reduce the dimensionality of the data and achieves good classification
accuracy, while shortening running time, reducing computational complexity, and delivering
better predictive performance.
Keywords
Semi-Supervised Learning, Classification, Support Vector Machines, Multi-View
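
For readers unfamiliar with twin SVMs, the prediction step is simple once the two non-parallel hyperplanes have been fitted: a point is assigned to the class whose hyperplane it lies closer to. A minimal sketch of that decision rule follows; fitting the hyperplanes (the two dual QPs per view) is omitted, and all names are illustrative.

    import numpy as np

    def twin_svm_predict(X, w_pos, b_pos, w_neg, b_neg):
        """Twin-SVM decision rule: assign each row of X to the class
        whose hyperplane w . x + b = 0 it is nearest to."""
        d_pos = np.abs(X @ w_pos + b_pos) / np.linalg.norm(w_pos)
        d_neg = np.abs(X @ w_neg + b_neg) / np.linalg.norm(w_neg)
        return np.where(d_pos <= d_neg, 1, -1)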
