Title: Iteratively reweighted least squares classifier and its l2- and l1-regularized kernel versions
Journal title: Bulletin of the Polish Academy of Sciences: Technical Sciences
Yearbook: 2010
Volume: 58
Issue: No 1
Authors: Łęski, J.
Divisions of PAS: Nauki Techniczne
Coverage: 171-182
Date: 2010
Identifier: DOI: 10.2478/v10175-010-0018-2; ISSN 2300-1917
Source: Bulletin of the Polish Academy of Sciences: Technical Sciences; 2010; 58; No 1; 171-182

References:
Duda R. (1973), Pattern Classification and Scene Analysis.
Duda R. (2001), Pattern Classification.
Ripley B. (1996), Pattern Recognition and Neural Networks.
Tou J. (1974), Pattern Recognition Principles.
Webb A. (1999), Statistical Pattern Recognition.
Schölkopf B. (2002), Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond.
Vapnik V. (1995), The Nature of Statistical Learning Theory.
Vapnik V. (1998), Statistical Learning Theory.
Mangasarian O. (2001), Lagrangian support vector machines, J. Mach. Learn. Res., 1, 1, 161.
Suykens J. (1999), Least squares support vector machine classifiers, Neur. Proc. Lett., 9, 3, 293.
Tsang I. (2005), Core vector machines: fast SVM training on very large data sets, J. Mach. Learn. Res., 6, 1, 363.
Tsang I. (2006), Generalized core vector machines, IEEE Trans. Neur. Net., 17, 5, 1126.
Tsang I. (2008), Large-scale maximum margin discriminant analysis using core vector machines, IEEE Trans. Neur. Net., 19, 4, 610.
Mika S. (1999), Neur. Net. Sig. Proc. IX, 41.
Zheng W. (2005), Foley-Sammon optimal discriminant vectors using kernel approach, IEEE Trans. Neur. Net., 16, 1, 1.
Freund Y. (1999), Large margin classification using the perceptron algorithm, Mach. Learn., 37, 1, 277.
Chen J.-H. (2002), Fuzzy kernel perceptron, IEEE Trans. Neur. Net., 13, 6, 1364.
Pękalska E. (2001), A generalized kernel approach to dissimilarity-based classification, J. Mach. Learn. Res., 2, 1, 175.
Ho Y.-C. (1965), An algorithm for linear inequalities and its applications, IEEE Trans. Elec. Comp., 14, 5, 683.
Ho Y.-C. (1966), A class of iterative procedures for linear inequalities, J. SIAM Control, 4, 2, 112.
Hassoun M. (1992), Adaptive Ho-Kashyap rules for perceptron, IEEE Trans. Neur. Net., 3, 1, 51.
Łęski J. (2003), Ho-Kashyap classifier with generalization control, Pattern Recognition Letters, 24, 2, 2281.
Łęski J. (2004), Kernel Ho-Kashyap classifier with generalization control, Int. J. App. Math. Comp. Sci., 14, 1, 53.
Wang Z. (2008), Pattern representation in feature extraction and classifier design: matrix versus vector, IEEE Trans. Neur. Net., 19, 5, 758.
Wang Z. (2008), MultiK-MHKS: a novel multiple kernel learning algorithm, IEEE Trans. Patt. Anal. Mach. Intell., 30, 2, 348.
Hsu C.-W. (2002), A comparison of methods for multiclass support vector machines, IEEE Trans. Neur. Net., 13, 2, 415.
Huber P. (1981), Robust Statistics.
Haykin S. (1999), Neural Networks: A Comprehensive Foundation.
Blumensath T. (2008), Gradient pursuit, IEEE Trans. Sig. Proc., 56, 6, 2370.
Figueiredo M. (2007), Gradient projection for sparse reconstruction: application to compressed sensing and other inverse problems, IEEE J. Select. Top. Sig. Proc., 1, 4, 586.
Efron B. (2004), Least angle regression, The Annals of Statistics, 32, 2, 407.
Tibshirani R. (1996), Regression shrinkage and selection via the lasso, J. R. Statist. Soc. B, 58, 1, 267.
Kim S.-J. (2007), An interior-point method for large-scale l1-regularized least squares, IEEE J. Select. Top. Sig. Proc., 1, 4, 606.
Ruggiero V. (2000), A modified projection algorithm for large strictly convex quadratic programs, J. Optim. Theory Appl., 104, 2, 281.
Serafini T. (2004), Gradient projection methods for quadratic programs and applications in training support vector machines, Optim. Meth. Soft., 20, 2, 353.