Notes on the low-rank matrix approximation of kernel matrices
Hiroshi Tsukahara
This document discusses low-rank matrix approximation of kernel matrices for kernel methods in machine learning. It notes that kernel matrices often have numerically low rank relative to their size, and that this property can be exploited to reduce the computational cost of kernel methods. Specifically, it proposes approximating the kernel matrix as the product of two low-rank matrices, so that the solution can be computed in terms of the low-rank factors rather than the full kernel matrix, reducing the complexity from O(n^3) to O(r^2 n), where n is the number of samples and r is the rank. Several algorithms for deriving the low-rank approximation are mentioned, including the Nyström approximation and the incomplete Cholesky decomposition.
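As a rough sketch of how such a factorization might be computed and used (illustrative code, not taken from the document; the Gaussian kernel, the uniform landmark sampling, and all names and parameters are assumptions), the Nyström approximation builds K ≈ L L^T from r sampled columns of K, and the regularized system (K + λI)α = y can then be solved through the factors via the Woodbury identity:

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # Gaussian (RBF) kernel between the rows of X and Y.
    sq = np.sum(X**2, axis=1)[:, None] + np.sum(Y**2, axis=1)[None, :] - 2.0 * X @ Y.T
    return np.exp(-gamma * sq)

def nystrom_factor(X, r, gamma=1.0, seed=0):
    # Return L of shape (n, r) such that K ~= L @ L.T,
    # using r uniformly sampled landmark points.
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    idx = rng.choice(n, size=r, replace=False)   # landmark columns of K
    C = rbf_kernel(X, X[idx], gamma)             # n x r block of K
    W = C[idx]                                   # r x r block K[idx][:, idx]
    evals, evecs = np.linalg.eigh(W)             # W is symmetric PSD (up to noise)
    evals = np.maximum(evals, 1e-12)             # guard against tiny/negative eigenvalues
    W_inv_sqrt = evecs @ np.diag(evals**-0.5) @ evecs.T
    return C @ W_inv_sqrt                        # K ~= C W^{-1} C.T = L L.T

def solve_regularized(L, y, lam=1e-2):
    # Solve (L L.T + lam*I) alpha = y in O(n r^2) via the Woodbury identity,
    # instead of an O(n^3) solve against the full kernel matrix.
    r = L.shape[1]
    inner = lam * np.eye(r) + L.T @ L            # r x r system
    return (y - L @ np.linalg.solve(inner, L.T @ y)) / lam

# Illustrative usage with random data.
X = np.random.default_rng(1).normal(size=(2000, 10))
y = np.random.default_rng(2).normal(size=2000)
L = nystrom_factor(X, r=50, gamma=0.5)
alpha = solve_regularized(L, y, lam=0.1)
```

With this factorization the dominant costs are the r × r eigendecomposition and the n × r matrix products, i.e. O(n r^2) overall rather than the O(n^3) of a direct solve against the full kernel matrix; an incomplete Cholesky decomposition produces a factor of the same shape and could be substituted for nystrom_factor above.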
The document proposes a new method called Sparse Isotropic Hashing (SIH) to learn compact binary codes for image retrieval. SIH imposes additional constraints of sparsity and isotropic variance on the hash functions to make the learning problem better posed. It formulates SIH as an optimization problem that balances orthogonality, isotropic variance and sparsity, and develops an algorithm to solve it. Experiments on a landmark dataset show SIH achieves comparable retrieval accuracy to the state-of-the-art method while learning hash codes 20 times faster.
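One plausible shape for such an objective (an illustrative assumption; the actual SIH formulation is not spelled out here) is, for zero-centered features X ∈ R^{n×d}, a projection W ∈ R^{d×c}, and binary codes B ∈ {−1, +1}^{n×c}:

\min_{W,B}\ \|B - XW\|_F^2 + \lambda_1 \|W^\top W - I\|_F^2 + \lambda_2 \sum_{k=1}^{c}\bigl(\operatorname{Var}(Xw_k) - \sigma^2\bigr)^2 + \lambda_3 \|W\|_1

where the second term encourages near-orthogonal projections, the third pushes every code dimension toward a common variance σ² (isotropy), the fourth promotes sparse hash functions, and the weights λ_i set the balance mentioned above.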