ゼロから始める深層強化学習 (Deep Reinforcement Learning from Scratch, NLP2018 lecture slides) / Introduction of Deep Reinforcement Learning, by Preferred Networks
An introduction to deep reinforcement learning, presented at a domestic NLP conference.
Lecture slides from the 24th Annual Meeting of the Association for Natural Language Processing (NLP2018).
http://www.anlp.jp/nlp2018/#tutorial
This document provides an overview of POMDP (Partially Observable Markov Decision Process) and its applications. It first defines the key concepts of POMDP such as states, actions, observations, and belief states. It then uses the classic Tiger problem as an example to illustrate these concepts. The document discusses different approaches to solve POMDP problems, including model-based methods that learn the environment model from data and model-free reinforcement learning methods. Finally, it provides examples of applying POMDP to games like ViZDoom and robot navigation problems.
Because of deep learning we now talk a lot about tensors, yet tensors remain relatively unknown objects. In this presentation I will introduce tensors and the basics of multilinear algebra, then describe tensor decompositions and give some examples of how they are used in representation learning for understanding/compressing data. I will also briefly describe how tensor decompositions are used in 1) the method of moments for training latent variable models, and 2) deep learning for understanding why deep convolutional networks are such excellent classifiers.
This document discusses RNA bioinformatics and RNA structure prediction. It begins by asking why RNA is important and notes that non-coding RNAs are as numerous as protein-coding genes. It then discusses RNA structure, including primary, secondary and tertiary structure. Key methods for RNA structure prediction include Nussinov's algorithm, which maximizes base pairing, and Zuker's algorithm, which predicts minimum free energy structures using a nearest neighbor model. Comparative sequence analysis can also help predict RNA structure by identifying covarying base pairs that are evolutionarily conserved.
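Of the methods named above, Nussinov's base-pair-maximization recursion is compact enough to sketch. A minimal version (my own illustration, not code from the slides; it assumes canonical plus G-U wobble pairs and a minimum hairpin loop of 3):

```python
def nussinov(seq, min_loop=3):
    """Nussinov DP: maximum number of nested base pairs in an RNA sequence."""
    pairs = {("A", "U"), ("U", "A"), ("G", "C"),
             ("C", "G"), ("G", "U"), ("U", "G")}
    n = len(seq)
    dp = [[0] * n for _ in range(n)]
    # Fill the table by increasing span; spans <= min_loop stay 0.
    for span in range(min_loop + 1, n):
        for i in range(n - span):
            j = i + span
            best = dp[i][j - 1]  # case: position j is unpaired
            # Case: j pairs with some k, splitting the interval in two.
            for k in range(i, j - min_loop):
                if (seq[k], seq[j]) in pairs:
                    left = dp[i][k - 1] if k > i else 0
                    best = max(best, left + 1 + dp[k + 1][j - 1])
            dp[i][j] = best
    return dp[0][n - 1]

print(nussinov("GGGAAAUCCC"))  # maximum number of nested base pairs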
This document summarizes a Chainer meetup where Yuta Kashino presented on PyTorch. Key points discussed include:
- Yuta Kashino is the CEO of BakFoo, Inc. and discussed his background in astrophysics, Zope, and Python.
- PyTorch was introduced as an alternative to Chainer and TensorFlow that is Pythonic and defines models dynamically through code.
- PyTorch uses autograd to track gradients like Chainer, but with Pythonic APIs and NumPy integration for efficient operations (see the sketch after this list).
- Similarities between PyTorch and Chainer were highlighted around defining models as chains of functions, GPU support, and optimizer libraries.
- Resources for learning more about PyTorch were provided, including tutorials on their website and courses on
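As the bullets above note, models are defined dynamically and gradients are tracked by autograd. A minimal sketch of that behavior (my own example, not from the meetup slides):

```python
import torch

# A tensor flagged for gradient tracking.
x = torch.tensor([1.0, 2.0, 3.0], requires_grad=True)

# The computation graph is built dynamically, simply by running Python code.
y = (x ** 2).sum()

# Backpropagation fills x.grad with dy/dx = 2x.
y.backward()
print(x.grad)  # tensor([2., 4., 6.])
```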
An Introduction to Bioinformatics
Drexel University INFO648-900-200915
A Presentation of Health Informatics Group 5
Cecilia Vernes
Joel Abueg
Kadodjomon Yeo
Sharon McDowell Hall
Terrence Hughes
Tensor representations in signal processing and machine learning (tutorial talk), by Tatsuya Yokota
Tutorial talk at APSIPA-ASC 2020.
Title: Tensor representations in signal processing and machine learning.
Introduction to tensor decomposition (テンソル分解入門)
Basics of tensor decomposition (テンソル分解の基礎)
The document proposes a new fast algorithm for smooth non-negative matrix factorization (NMF) using function approximation. The algorithm uses function approximation to smooth the basis vectors, allowing for faster computation compared to existing methods. The method is extended to tensor decomposition models. Experimental results on image datasets show the proposed methods achieve better denoising and source separation performance compared to ordinary NMF and tensor decomposition methods, while being up to 300 times faster computationally. Future work includes extending the model to incorporate both common smoothness across factors and individual sparseness.
Linked CP Tensor Decomposition (presented at ICONIP2012), by Tatsuya Yokota
This document proposes a new method called Linked Tensor Decomposition (LTD) to analyze common and individual factors from a group of tensor data. LTD combines the advantages of Individual Tensor Decomposition (ITD), which analyzes individual characteristics, and Simultaneous Tensor Decomposition (STD), which analyzes common factors in a group. LTD represents each tensor as the sum of a common factor and individual factors. An algorithm using Hierarchical Alternating Least Squares is developed to solve the LTD model. Experiments on toy problems and face reconstruction demonstrate that LTD can extract both common and individual factors more effectively than ITD or STD alone. Future work will explore Tucker-based LTD and statistical independence in the LTD model.
Introduction to Common Spatial Pattern Filters for EEG Motor Imagery Classification, by Tatsuya Yokota
This document introduces common spatial pattern (CSP) filters for EEG motor imagery classification. CSP filters aim to find spatial projections of the EEG data that maximize the variance of one class while minimizing that of the other. The document outlines several CSP algorithms including standard CSP, common spatially standardized CSP, and spatially constrained CSP. CSP filters extract discriminative features from EEG data that can improve classification accuracy for brain-computer interface applications involving motor imagery tasks.
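A minimal sketch of the standard CSP computation described above (my own illustration; the generalized eigenvalue formulation is one common way to pose it):

```python
import numpy as np
from scipy.linalg import eigh

def csp_filters(trials_a, trials_b, n_filters=4):
    """CSP spatial filters from two classes of EEG trials.

    trials_*: arrays of shape (n_trials, n_channels, n_samples).
    Returns an (n_filters, n_channels) filter matrix."""
    def avg_cov(trials):
        # Trace-normalized spatial covariance, averaged over trials.
        return np.mean([t @ t.T / np.trace(t @ t.T) for t in trials], axis=0)

    c_a, c_b = avg_cov(trials_a), avg_cov(trials_b)
    # Generalized eigenproblem: c_a w = lambda (c_a + c_b) w.
    # Extreme eigenvalues give filters whose variance is concentrated
    # in one class or the other.
    vals, vecs = eigh(c_a, c_a + c_b)
    order = np.argsort(vals)
    half = n_filters // 2
    picked = np.concatenate([order[:half], order[-half:]])
    return vecs[:, picked].T

# Toy usage with random "EEG": 10 trials per class, 8 channels, 200 samples.
rng = np.random.default_rng(0)
a = rng.standard_normal((10, 8, 200))
b = rng.standard_normal((10, 8, 200))
print(csp_filters(a, b).shape)  # (4, 8)
```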
This document provides an introduction to blind source separation and non-negative matrix factorization. It describes blind source separation as a method to estimate original signals from observed mixed signals. Non-negative matrix factorization is introduced as a constraint-based approach to solving blind source separation using non-negativity. The alternating least squares algorithm is described for solving the non-negative matrix factorization problem. Experiments applying these methods to artificial and real image data are presented and discussed.
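The alternating least squares scheme mentioned above is short enough to sketch. A minimal version (my own illustration, not code from the presentation), with nonnegativity enforced by clipping the unconstrained least-squares solutions:

```python
import numpy as np

def nmf_als(V, rank, n_iter=100, eps=1e-9):
    """Approximate a nonnegative matrix V (m x n) as W @ H with W, H >= 0."""
    rng = np.random.default_rng(0)
    m, n = V.shape
    W = rng.random((m, rank))
    H = rng.random((rank, n))
    for _ in range(n_iter):
        # Unconstrained least squares for H given W, then clip to >= eps.
        H = np.maximum(np.linalg.lstsq(W, V, rcond=None)[0], eps)
        # Same for W given H (solve the transposed system).
        W = np.maximum(np.linalg.lstsq(H.T, V.T, rcond=None)[0].T, eps)
    return W, H

# Toy usage: factorize a random nonnegative matrix into 2 parts.
V = np.random.default_rng(1).random((20, 30))
W, H = nmf_als(V, rank=2)
print(np.linalg.norm(V - W @ H) / np.linalg.norm(V))  # relative error
```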
This document discusses independent component analysis (ICA) for blind source separation. ICA is a method to estimate original signals from observed signals consisting of mixed original signals and noise. It introduces the ICA model and approach, including whitening, maximizing non-Gaussianity using kurtosis and negentropy, and fast ICA algorithms. The document provides examples applying ICA to separate images and discusses approaches to improve ICA, including using differential filtering. ICA is an important technique for blind source separation and independent component estimation from observed signals.
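Likewise, the FastICA fixed-point update is compact. A one-unit sketch (my own, assuming the data have already been whitened and using the tanh nonlinearity as the negentropy proxy):

```python
import numpy as np

def fastica_one_unit(X, n_iter=200, tol=1e-6):
    """One-unit FastICA on whitened data X of shape (n_features, n_samples)."""
    rng = np.random.default_rng(0)
    w = rng.standard_normal(X.shape[0])
    w /= np.linalg.norm(w)
    for _ in range(n_iter):
        wx = w @ X                  # projections, shape (n_samples,)
        g = np.tanh(wx)             # nonlinearity
        g_prime = 1.0 - g ** 2      # its derivative
        # Fixed-point update: w+ = E[x g(w^T x)] - E[g'(w^T x)] w
        w_new = (X * g).mean(axis=1) - g_prime.mean() * w
        w_new /= np.linalg.norm(w_new)
        if abs(abs(w_new @ w) - 1.0) < tol:  # converged up to sign
            return w_new
        w = w_new
    return w
```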
13. Matrix-matrix multiplication
(I×J) matrix × (J×K) matrix = (I×K) matrix
Tensor-matrix (mode-n) multiplication
(I×J×K) tensor ×1 (L×I) matrix = (L×J×K) tensor
(I×J×K) tensor ×2 (L×J) matrix = (I×L×K) tensor
(I×J×K) tensor ×3 (L×K) matrix = (I×J×L) tensor
Tensor computation (6)
[Figure: the mode-1 product computed via matricization. The (I×J×K) tensor is unfolded into an I×JK matrix; multiplying by the (L×I) matrix gives an L×JK matrix, which is folded back into the (L×J×K) result tensor.]
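A quick NumPy check of the mode-1 product above and its matricized form (my own sketch; the dimension names follow the slide):

```python
import numpy as np

I, J, K, L = 4, 5, 6, 3
T = np.random.rand(I, J, K)        # (I x J x K) tensor
M = np.random.rand(L, I)           # (L x I) matrix

# Mode-1 product: contract M's second axis with T's first axis.
Y = np.tensordot(M, T, axes=([1], [0]))     # shape (L, J, K)

# Same computation via matricization: unfold T into an I x JK matrix,
# multiply by M, and fold the result back into an L x J x K tensor.
T_unfolded = T.reshape(I, J * K)
Y2 = (M @ T_unfolded).reshape(L, J, K)

print(Y.shape)             # (3, 5, 6)
print(np.allclose(Y, Y2))  # True
```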
41. PSNR (peak signal-to-noise ratio)
Evaluating reconstruction error
[Figure: face image dataset of 15 people × 11 images each, 33×26 pixels (858 pixels per image); each image is reconstructed from a decomposition with an R1×10×10 core.]
- Noise (10 dB) is added to the images.
- Each face image is reconstructed from R1 parts.
- The reconstructed images are compared against the noise-free originals using PSNR.
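PSNR itself is straightforward to compute. A minimal sketch (my own, assuming image values scaled to [0, 1] so the peak value is 1):

```python
import numpy as np

def psnr(reference, estimate, peak=1.0):
    """Peak signal-to-noise ratio in dB: 10 * log10(peak^2 / MSE)."""
    mse = np.mean((reference - estimate) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

# Toy usage: a clean 33x26 "image" versus a noisy copy.
rng = np.random.default_rng(0)
clean = rng.random((33, 26))
noisy = clean + 0.1 * rng.standard_normal(clean.shape)
print(f"{psnr(clean, noisy):.2f} dB")
```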
42. Application example 2: denoising third-order tensor data
[Figure: denoising comparison; the input corrupted by Gaussian noise measures 7.21 dB (PSNR), with reconstructions shown for each method.]
- Nonnegative CP decomposition: 19.8 dB
- Nonnegative Tucker decomposition: 13.5 dB
- Smooth nonnegative CP decomposition: 26.8 dB
- Smooth nonnegative Tucker decomposition: 23.9 dB