Stochastic Alternating Direction Method of Multipliers
Taiji Suzuki
This document discusses stochastic optimization methods for solving regularized learning problems with structured regularization and large datasets. It proposes applying the alternating direction method of multipliers (ADMM) in a stochastic manner. Specifically, it introduces two stochastic ADMM methods for online data: RDA-ADMM, which extends regularized dual averaging with ADMM updates; and OPG-ADMM, which extends online proximal gradient descent with ADMM updates. These methods allow the regularization term to be optimized in batches, resolving computational difficulties, while the loss is optimized online using only a small number of samples per iteration.
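To make the split between the online loss update and the batch regularization update concrete, here is a minimal sketch of a stochastic ADMM loop in the spirit of OPG-ADMM, applied to ℓ1-regularized least squares. This is an illustrative toy, not the paper's exact algorithm: the problem, step size `eta`, penalty `rho`, and minibatch size are all choices made for this example, and the identity splitting (z = x) is the simplest possible case.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic sparse regression: only the first 3 of 20 features matter.
n, d = 500, 20
w_true = np.zeros(d)
w_true[:3] = [2.0, -1.5, 1.0]
X = rng.normal(size=(n, d))
y = X @ w_true + 0.1 * rng.normal(size=n)

lam = 0.1    # l1 regularization strength (example value)
rho = 1.0    # ADMM penalty parameter (example value)
eta = 0.01   # proximal step size for the online x-update (example value)

x = np.zeros(d)   # primal variable updated online from minibatch gradients
z = np.zeros(d)   # copy handled in batch through the proximal operator
u = np.zeros(d)   # scaled dual variable enforcing x = z

def objective(w):
    return 0.5 * np.mean((X @ w - y) ** 2) + lam * np.sum(np.abs(w))

obj0 = objective(z)

for t in range(2000):
    # Loss side: stochastic gradient from a small minibatch only.
    idx = rng.integers(0, n, size=10)
    g = X[idx].T @ (X[idx] @ x - y[idx]) / len(idx)
    # Linearized x-update: the subproblem is quadratic, so it is closed form.
    x = (x / eta + rho * (z - u) - g) / (1.0 / eta + rho)
    # Regularization side: exact proximal step (soft-thresholding for l1),
    # computed in batch over the whole vector.
    v = x + u
    z = np.sign(v) * np.maximum(np.abs(v) - lam / rho, 0.0)
    # Dual update in scaled form.
    u = u + x - z
```

The point of the split is visible in the loop: the loss only ever sees a 10-sample minibatch, while the proximal step for the regularizer is applied exactly, which is what makes structured regularizers tractable in this setting.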
Introduction of "the alternate features search" using R
Satoshi Kato
Introduction of the alternate features search using R, as proposed in: S. Hara, T. Maehara, "Finding Alternate Features in Lasso", arXiv:1611.05940, 2016.
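The core idea of alternate feature search is: given a set of features selected by Lasso, look for other features that could stand in for a selected one with little loss in fit. The sketch below illustrates that idea only; it is not the algorithm of Hara and Maehara, and it is written in Python rather than the R used in the talk. The data, the fixed support `[0, 1]`, and the `2.0 * base_loss` acceptance tolerance are all assumptions made for this toy example.

```python
import numpy as np

rng = np.random.default_rng(1)

# Features 0 and 3 are nearly identical, so each can substitute for the other.
n, d = 200, 6
Xf = rng.normal(size=(n, d))
Xf[:, 3] = Xf[:, 0] + 0.05 * rng.normal(size=n)
y = 1.5 * Xf[:, 0] + Xf[:, 1] + 0.1 * rng.normal(size=n)

def fit_loss(X, y, support):
    """Mean squared error of a least-squares fit restricted to `support`."""
    Xs = X[:, support]
    w, *_ = np.linalg.lstsq(Xs, y, rcond=None)
    r = Xs @ w - y
    return np.mean(r ** 2)

support = [0, 1]  # pretend these are the features Lasso selected
base_loss = fit_loss(Xf, y, support)

# For each selected feature, try swapping in each unselected feature and
# keep the swaps whose refit loss stays close to the original loss.
alternates = {}
for i in support:
    rest = [k for k in support if k != i]
    others = [j for j in range(d) if j not in support]
    scores = {j: fit_loss(Xf, y, rest + [j]) for j in others}
    alternates[i] = [j for j, s in scores.items() if s <= 2.0 * base_loss]
```

On this data the search finds feature 3 as an alternate for feature 0 (they are near-duplicates), while feature 1 has no acceptable substitute, which is exactly the kind of answer the alternate features search is meant to surface.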