Notes on the low rank matrix approximation of kernel (Hiroshi Tsukahara)
This document discusses low-rank matrix approximation of kernel matrices for kernel methods in machine learning. It notes that kernel matrices often have low rank compared to their size, and that this property can be exploited to reduce the computational complexity of kernel methods. Specifically, it proposes approximating the kernel matrix as the product of two low-rank matrices. This allows the solution to be computed in terms of the low-rank factors rather than the full kernel matrix, reducing the complexity from O(n^3) to O(r^2 n), where r is the rank. Several algorithms for deriving the low-rank approximation are mentioned, including the Nyström approximation and incomplete Cholesky decomposition.
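As an illustration of the idea, the following is a minimal NumPy sketch of the Nyström approximation, assuming an RBF kernel and uniformly sampled landmark points (both are assumptions made for the example, not details taken from the note):

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    # Pairwise squared distances, then the RBF kernel exp(-gamma * d^2).
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def nystrom_factor(X, r, gamma=1.0, seed=0):
    """Return L (n x r) such that K ~ L @ L.T, using r random landmarks."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(X), size=r, replace=False)
    C = rbf_kernel(X, X[idx], gamma)   # n x r block of the kernel matrix
    W = C[idx]                         # r x r landmark-landmark block
    # Pseudo-inverse square root of W, for numerical stability.
    U, s, _ = np.linalg.svd(W)
    return C @ (U / np.sqrt(np.maximum(s, 1e-12)))

X = np.random.default_rng(1).normal(size=(500, 5))
L = nystrom_factor(X, r=50)
K = rbf_kernel(X, X)
print(np.linalg.norm(K - L @ L.T) / np.linalg.norm(K))  # relative error
```

With the factor L in hand, a downstream solve such as kernel ridge regression can go through the Woodbury identity on L instead of inverting the full n x n kernel matrix, which is where the O(r^2 n) cost comes from.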
Performed KeyGraph analysis on the papers accepted at NIPS 2013. The major topics and the keywords relating them are extracted, as are relationships among researchers.
The document proposes a new method called Sparse Isotropic Hashing (SIH) to learn compact binary codes for image retrieval. SIH imposes additional constraints of sparsity and isotropic variance on the hash functions to make the learning problem better posed. It formulates SIH as an optimization problem that balances orthogonality, isotropic variance and sparsity, and develops an algorithm to solve it. Experiments on a landmark dataset show SIH achieves comparable retrieval accuracy to the state-of-the-art method while learning hash codes 20 times faster.
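The summary does not reproduce the paper's exact formulation, so the NumPy sketch below only illustrates the kind of objective being balanced; the penalty forms, their weights, and the name sih_objective are all assumptions made for illustration:

```python
import numpy as np

def sih_objective(W, X, lam_ortho=1.0, lam_iso=1.0, lam_sparse=0.1):
    """Schematic SIH-style loss for a projection W (d x b) on data X (n x d).

    Balances the three terms the summary mentions: near-orthogonality of W,
    isotropic (equal) variance across the b projected dimensions, and
    sparsity of W. The exact penalties and weights are illustrative only.
    """
    Z = X @ W                              # projected data, n x b
    var = Z.var(axis=0)                    # per-dimension variance
    ortho = np.linalg.norm(W.T @ W - np.eye(W.shape[1])) ** 2
    iso = ((var - var.mean()) ** 2).sum()  # deviation from isotropy
    sparse = np.abs(W).sum()               # L1 sparsity of the projection
    return lam_ortho * ortho + lam_iso * iso + lam_sparse * sparse

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 32))
W = rng.normal(size=(32, 16))
print(sih_objective(W, X))
```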
Align, Disambiguate and Walk: A Unified Approach for Measuring Semantic Similarity (Koji Matsuda)
The document presents a unified approach for measuring semantic similarity between texts at multiple levels (sense, word, text) using semantic signatures. It generates semantic signatures through multi-seeded random walks over the WordNet graph. It then aligns and disambiguates words and senses to extract sense "seeds" for the signatures. Finally, it calculates signature similarity using measures like cosine similarity, weighted overlap, and top-k Jaccard. The approach provides a unified framework for semantic similarity that can be applied to various NLP tasks.
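A minimal sketch of the three comparison measures mentioned above, assuming each semantic signature is a dense non-negative weight vector over WordNet synsets (that representation, and the helper names, are assumptions made for the example):

```python
import numpy as np

def cosine(s1, s2):
    # Standard cosine similarity between two signature vectors.
    return s1 @ s2 / (np.linalg.norm(s1) * np.linalg.norm(s2))

def weighted_overlap(s1, s2):
    # Rank of each dimension in each signature (1 = largest weight).
    r1 = np.empty(len(s1)); r1[np.argsort(-s1)] = np.arange(1, len(s1) + 1)
    r2 = np.empty(len(s2)); r2[np.argsort(-s2)] = np.arange(1, len(s2) + 1)
    both = (s1 > 0) & (s2 > 0)             # dimensions present in both
    h = both.sum()
    if h == 0:
        return 0.0
    num = (1.0 / (r1[both] + r2[both])).sum()
    den = (1.0 / (2.0 * np.arange(1, h + 1))).sum()  # best attainable score
    return num / den

def topk_jaccard(s1, s2, k=10):
    # Jaccard overlap of the k highest-weighted dimensions.
    t1 = set(np.argsort(-s1)[:k]); t2 = set(np.argsort(-s2)[:k])
    return len(t1 & t2) / len(t1 | t2)
```

The weighted-overlap sketch follows the rank-based form of the measure, rewarding dimensions that rank highly in both signatures and normalizing by the best attainable score for the overlapping set.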