Bayesian Machine Learning (An Introduction to Bayesian Machine Learning), 医療IT数学同好会 T/T
This document provides an introduction to Bayesian machine learning. It discusses key concepts like Bayes' theorem, the modeling and inference procedures in Bayesian learning, and examples like linear regression and Gaussian mixture models. It also introduces variational inference as a technique for approximating intractable posterior distributions. Finally, it lists some example papers and programming languages/libraries for probabilistic programming.
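As a concrete illustration of the Bayes' theorem step these slides build on, the following is a minimal C sketch of a posterior computation for a diagnostic test. The prior and test accuracies are made-up numbers for illustration, not values from the document.

#include <stdio.h>

/* Minimal illustration of Bayes' theorem:
   posterior = likelihood * prior / evidence.
   The probabilities below are illustrative assumptions. */
int main(void) {
    double prior = 0.01;          /* P(disease) */
    double sensitivity = 0.95;    /* P(positive | disease) */
    double false_positive = 0.05; /* P(positive | healthy) */

    double evidence = sensitivity * prior + false_positive * (1.0 - prior);
    double posterior = sensitivity * prior / evidence;  /* P(disease | positive) */

    printf("P(disease | positive test) = %.3f\n", posterior);
    return 0;
}

Even with a 95%-sensitive test, the posterior here comes out to only about 0.16, which is the kind of prior-dependent conclusion a Bayesian treatment makes explicit.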
[DL Paper Reading Group] Recent Advances in Autoencoder-Based Representation Learning, Deep Learning JP
1. Recent advances in autoencoder-based representation learning include incorporating meta-priors to encourage disentanglement and using rate-distortion and rate-distortion-usefulness tradeoffs to balance compression against reconstruction.
2. Variational autoencoders rely on the prior to disentangle latent factors, while more recent work regularizes the aggregated posterior to encourage disentanglement directly.
3. The rate-distortion framework balances the rate of information transmission against reconstruction distortion, while rate-distortion-usefulness additionally accounts for usefulness on downstream tasks (a sketch of this objective follows the list).
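As a rough sketch of the tradeoff in item 3, a beta-VAE style objective can be written as follows; this is the standard textbook form, not an equation copied from the slides:

\[
\mathcal{L}(\theta,\phi)
  = \underbrace{\mathbb{E}_{q_\phi(z \mid x)}\!\left[-\log p_\theta(x \mid z)\right]}_{\text{distortion}}
  + \beta \, \underbrace{D_{\mathrm{KL}}\!\left(q_\phi(z \mid x) \,\|\, p(z)\right)}_{\text{rate}}
\]

Choosing beta > 1 favors compression (lower rate) at the cost of reconstruction quality, beta < 1 does the opposite, and the usefulness variant adds a further term scoring the representation on a downstream task.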
The document summarizes various techniques for automated software testing using fuzzing, including coverage-based fuzzing (AFL), directed greybox fuzzing (AFLGo), and neural-network-based approaches (FuzzGuard). It discusses how a genetic algorithm guides test-case mutation toward new code areas in AFL and how simulated annealing steers the search toward target locations in AFLGo, and it gives examples of vulnerabilities found with these tools. A heavily simplified sketch of the coverage-guided mutation loop follows.
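To make the coverage-guided, genetic-algorithm flavor of AFL concrete, here is a self-contained toy sketch in C. The "target" and its coverage score are stand-ins invented for this example; real fuzzers measure edge coverage of an instrumented binary and use far richer mutation and scheduling strategies.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define INPUT_LEN 8

/* Toy target: returns a fake "coverage score" that grows as the input
   matches a magic prefix. This stands in for running an instrumented
   program and measuring which edges were exercised. */
static int toy_coverage(const unsigned char *buf) {
    const unsigned char magic[] = "FUZZ";
    int score = 0;
    for (int i = 0; i < 4; i++) {
        if (buf[i] == magic[i]) score++;
        else break;
    }
    return score;
}

int main(void) {
    unsigned char best[INPUT_LEN] = {0};
    int best_cov = toy_coverage(best);
    srand(42);

    for (int iter = 0; iter < 100000; iter++) {
        unsigned char cand[INPUT_LEN];
        memcpy(cand, best, INPUT_LEN);
        cand[rand() % INPUT_LEN] = (unsigned char)(rand() % 256);  /* random byte mutation */

        int cov = toy_coverage(cand);
        if (cov > best_cov) {              /* keep mutants that reach new "coverage" */
            memcpy(best, cand, INPUT_LEN);
            best_cov = cov;
        }
    }
    printf("best coverage score: %d\n", best_cov);
    return 0;
}

Real fuzzers keep a queue of interesting inputs rather than a single best candidate, but the keep-whatever-reaches-new-coverage loop is the same basic shape.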
This is a roughly 50-minute explanation focused on a single point: in C, pointer(-type) variables are used to handle variable-length arrays. In the end, 47 of those minutes are spent walking through the 12-line program below.
1: int size = N;
2: int x[size];
3: int *p;
4:
5: p = x;
6:
7: for (int i = 0; i < size; i++)
8: p[i] = i;
9:
10: int y = 0;
11: for (int i = 0; i < size; i++)
12: y = y + p[i];
https://www.youtube.com/watch?v=KLFlk1dohKQ&t=1496s
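The 12-line excerpt leaves N unspecified and has no surrounding main or output, so the following is one self-contained way to compile and run it; N = 5 and the final printf are arbitrary additions for demonstration.

#include <stdio.h>

#define N 5   /* the excerpt leaves N unspecified; 5 is an arbitrary choice */

int main(void) {
    int size = N;
    int x[size];      /* variable-length array (C99) */
    int *p;

    p = x;            /* p now points to the first element of x */

    for (int i = 0; i < size; i++)
        p[i] = i;     /* fill the array through the pointer */

    int y = 0;
    for (int i = 0; i < size; i++)
        y = y + p[i]; /* sum the elements: 0+1+2+3+4 = 10 */

    printf("y = %d\n", y);   /* added only to show the result */
    return 0;
}

Built with a C99 compiler (for example gcc -std=c99), this prints y = 10.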
1. The model is a polynomial regression that fits a polynomial function to the training data.
2. The loss function is the sum of squared differences between the predicted and actual target values.
3. The optimizer is GradientDescentOptimizer, which minimizes the loss to fit the model parameters (a minimal sketch of the same fitting procedure follows this list).
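The name GradientDescentOptimizer suggests the original example was written with TensorFlow 1.x; since that code is not reproduced here, the following plain-C sketch shows the same idea under stated assumptions: a degree-2 polynomial, a handful of made-up data points, and hand-coded gradient descent on the sum-of-squares loss.

#include <stdio.h>

/* Gradient descent on a sum-of-squares loss for y ~ w0 + w1*x + w2*x^2.
   Plain-C stand-in for the framework-based example described above;
   data and hyperparameters are made up for illustration. */

#define N_PTS 5

int main(void) {
    /* toy training data generated from y = 1 + 2x + 3x^2 */
    double xs[N_PTS] = {-1.0, -0.5,  0.0, 0.5, 1.0};
    double ys[N_PTS] = { 2.0,  0.75, 1.0, 2.75, 6.0};

    double w[3] = {0.0, 0.0, 0.0};   /* w0, w1, w2 */
    double lr = 0.05;

    for (int step = 0; step < 20000; step++) {
        double grad[3] = {0.0, 0.0, 0.0};
        for (int i = 0; i < N_PTS; i++) {
            double pred = w[0] + w[1]*xs[i] + w[2]*xs[i]*xs[i];
            double err  = pred - ys[i];
            /* d/dw_k of (pred - y)^2, summed over the data points */
            grad[0] += 2.0 * err;
            grad[1] += 2.0 * err * xs[i];
            grad[2] += 2.0 * err * xs[i] * xs[i];
        }
        for (int k = 0; k < 3; k++)
            w[k] -= lr * grad[k];     /* plain gradient-descent update */
    }

    printf("fitted: y = %.3f + %.3f x + %.3f x^2\n", w[0], w[1], w[2]);
    return 0;
}

Because the toy data are noise-free, the weights converge to roughly (1, 2, 3), recovering the polynomial used to generate them.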