This document discusses Hidden Markov Models (HMMs) and Markov chains. It begins with an introduction to Markov processes and the use of HMMs in domains such as natural language processing. It then describes the properties of a Markov chain: a set of states among which the system transitions randomly at discrete time steps according to transition probabilities. The Markov property is explained as the conditional independence of future states from past states given the present state. HMMs extend Markov chains by hiding the state sequence and allowing observation only of the outputs emitted from each hidden state.
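Because the transition behavior is fully specified by a transition matrix, a short simulation makes the Markov property concrete. A minimal sketch in Python; the state names and probabilities are illustrative, not taken from the document:

    import numpy as np

    # Two illustrative states; transition[i][j] = P(next = j | current = i).
    states = ["Rainy", "Sunny"]
    transition = np.array([[0.7, 0.3],
                           [0.4, 0.6]])

    rng = np.random.default_rng(0)

    def simulate(start, n_steps):
        """Sample a state path; each step depends only on the current
        state, never on earlier history (the Markov property)."""
        path = [start]
        for _ in range(n_steps):
            path.append(rng.choice(len(states), p=transition[path[-1]]))
        return [states[s] for s in path]

    print(simulate(start=0, n_steps=10))

An HMM version would add an emission matrix and reveal only the outputs sampled from it at each step, keeping the state path itself hidden.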
1. This document discusses robots that utilize their own structure and morphology for locomotion and activity. It provides examples of robots that rely on physical dynamics and morphology for control rather than complex software or sensing.
2. Specific examples discussed include hopping robots that use the natural vibration of their structure for energy-efficient hopping, passive dynamic walkers that can walk solely through interaction with gravity and friction without actuation or control, and soft robots whose flexible materials and pneumatic networks allow intrinsically compliant motion.
3. The document argues that utilizing a robot's physical structure and materials for control can reduce the computational and sensing demands compared to systems that rely solely on software control. This morphological computation is inspired by principles observed in biological systems.
The document discusses analyzing a correlation network of Scottish whisky distilleries. A correlation matrix is computed from the whiskies' sensory characteristics and converted to a graph whose nodes are distilleries and whose edges connect pairs with correlation above 0.8. The graph is analyzed for clusters of distilleries with similar sensory profiles and for key central nodes, and the network is visualized.
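A minimal sketch of that pipeline in Python (pandas + networkx); the distillery names, sensory columns, and scores below are illustrative stand-ins for the actual whisky data:

    from itertools import combinations

    import networkx as nx
    import pandas as pd

    # Toy sensory profiles: rows = distilleries, columns = characteristics.
    profiles = pd.DataFrame(
        {"smoky": [4, 1, 3], "sweet": [1, 3, 2], "fruity": [0, 4, 2]},
        index=["Laphroaig", "Glenlivet", "Highland Park"],
    )

    # Correlate distilleries with each other across their sensory scores.
    corr = profiles.T.corr()

    # Add an edge only where the correlation clears the 0.8 threshold.
    G = nx.Graph()
    G.add_nodes_from(corr.index)
    for a, b in combinations(corr.index, 2):
        if corr.loc[a, b] > 0.8:
            G.add_edge(a, b, weight=corr.loc[a, b])

    # Clusters of similar distilleries and the most central nodes.
    print(list(nx.connected_components(G)))
    print(nx.degree_centrality(G))

Connected components stand in here for whatever clustering method the original analysis used; any community-detection routine could be substituted.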
1. The document discusses network centrality and how networks can be used to understand relationships and disease transmission.
2. It introduces several measures of network centrality, including degree centrality, which counts the number of connections a node has, and eigenvector centrality, which also weighs the importance of a node's neighbors.
3. Eigenvector centrality is computed iteratively until the scores converge: nodes connected to important nodes are deemed more central themselves, which helps identify the most influential individuals in a network (a sketch of the iteration follows this list).
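A power-iteration sketch of eigenvector centrality on a small undirected graph; the adjacency matrix is illustrative, and the loop stops once the scores stop changing:

    import numpy as np

    # Illustrative undirected graph: A[i][j] = 1 if nodes i and j are connected.
    A = np.array([[0, 1, 1, 0],
                  [1, 0, 1, 1],
                  [1, 1, 0, 0],
                  [0, 1, 0, 0]], dtype=float)

    x = np.ones(A.shape[0])        # start everyone with equal centrality
    for _ in range(100):
        x_new = A @ x              # each node sums its neighbours' scores
        x_new /= np.linalg.norm(x_new)
        if np.allclose(x, x_new, atol=1e-10):
            break                  # converged
        x = x_new

    print(x)  # node 1, with the most and best-connected neighbours, ranks highest

Normalizing at every step keeps the scores bounded; the iteration converges to the principal eigenvector of the adjacency matrix, which is exactly the eigenvector centrality.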
2. Motivation for (sparse) GGMs
- The "large p, small n" problem
  – It is common to impose low-dimensional structure as a form of regularization
- Approach (Negahban et al. 2012)
  – For Gaussian-distributed data, the central problem is estimating the inverse covariance matrix (the precision matrix)
  – Observed data: n samples, each of dimension p
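In this setting the standard estimator minimizes the l1-penalized log-determinant objective tr(SΘ) − log det Θ + λ‖Θ‖₁ over positive-definite Θ (references 2 and 4 below). A minimal sketch assuming scikit-learn's GraphicalLasso as the solver (a tooling choice, not the source's own code), with synthetic data standing in for real observations; n, p, and alpha are illustrative:

    import numpy as np
    from sklearn.covariance import GraphicalLasso

    rng = np.random.default_rng(0)

    # Synthetic stand-in for the "large p, small n" regime:
    # n samples of p-dimensional Gaussian data.
    n, p = 50, 20
    X = rng.standard_normal((n, p))

    # l1-penalized maximum likelihood for the precision matrix;
    # alpha controls how sparse the estimated graph is.
    Theta = GraphicalLasso(alpha=0.1).fit(X).precision_

    # Nonzero off-diagonal entries are the edges of the estimated GGM.
    edges = np.argwhere(np.triu(np.abs(Theta) > 1e-4, k=1))
    print(f"{len(edges)} edges among {p} variables")

Larger alpha drives more off-diagonal entries of Θ to zero, trading graph sparsity against fit.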
15. References
1. Lauritzen, S. L. Graphical Models, volume 17. Oxford University Press, USA, 1996.
2. Ravikumar, Pradeep, Wainwright, Martin J., Raskutti, Garvesh, and Yu, Bin. High-dimensional covariance estimation by minimizing l1-penalized log-determinant divergence. Electronic Journal of Statistics, 5:935-980, 2011.
3. Chandrasekaran, Venkat, Parrilo, Pablo A., and Willsky, Alan S. Latent variable graphical model selection via convex optimization. Annals of Statistics, 40(4):1935-1967, 2012.
4. Friedman, Jerome, Hastie, Trevor, and Tibshirani, Robert. Sparse inverse covariance estimation with the graphical lasso. Biostatistics, 9(3):432-441, 2008.
5. Rothman, Adam J., Bickel, Peter J., Levina, Elizaveta, and Zhu, Ji. Sparse permutation invariant covariance estimation. Electronic Journal of Statistics, 2:494-515, 2008.
6. Choi, Myung Jin, Chandrasekaran, Venkat, and Willsky, Alan S. Gaussian multiresolution models: Exploiting sparse Markov and covariance structure. IEEE Transactions on Signal Processing, 58(3):1012-1024, 2010.
7. Choi, Myung Jin, Tan, Vincent Y. F., Anandkumar, Animashree, and Willsky, Alan S. Learning latent tree graphical models. Journal of Machine Learning Research, 12:1729-1770, 2011.
8. Ma, Shiqian, Xue, Lingzhou, and Zou, Hui. Alternating direction methods for latent variable Gaussian graphical model selection. Neural Computation, 25(8):2172-2198, 2013.