The document summarizes the Chow-Liu algorithm based on the minimum description length (MDL) principle for learning Bayesian network structures from data containing both discrete and continuous variables. The Chow-Liu algorithm finds the maximum-weight spanning tree that best approximates the dependencies between the variables. The MDL principle is used to select the optimal tree by balancing goodness of fit against model complexity. The algorithm is extended to handle cases where the data has no underlying density function by approximating the data distribution with increasingly fine partitions.
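As a concrete illustration of the tree-construction step summarized above (the fully discrete case only, not the mixed discrete/continuous extension), the following Python sketch estimates pairwise mutual information by plug-in and builds the maximum-weight spanning tree with Kruskal's algorithm. The function names and toy data are illustrative, and the MDL penalty on edge weights is omitted here.

```python
import numpy as np
from itertools import combinations

def mutual_information(x, y):
    """Plug-in estimate of I(X;Y) in nats for two integer-coded discrete samples."""
    n = len(x)
    joint = {}
    for a, b in zip(x, y):
        joint[(a, b)] = joint.get((a, b), 0) + 1
    px = {a: np.mean(x == a) for a in set(x)}
    py = {b: np.mean(y == b) for b in set(y)}
    return sum((c / n) * np.log((c / n) / (px[a] * py[b])) for (a, b), c in joint.items())

def chow_liu_tree(data):
    """Maximum-weight spanning tree over the columns of `data`, with empirical
    mutual information as edge weight (Kruskal's algorithm with union-find)."""
    d = data.shape[1]
    edges = sorted(((mutual_information(data[:, i], data[:, j]), i, j)
                    for i, j in combinations(range(d), 2)), reverse=True)
    parent = list(range(d))
    def find(u):
        while parent[u] != u:
            parent[u] = parent[parent[u]]
            u = parent[u]
        return u
    tree = []
    for w, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:                      # adding the edge does not create a cycle
            parent[ri] = rj
            tree.append((i, j, w))
    return tree

# Toy example: column 2 is a noisy copy of column 0, so the edge (0, 2) should be selected.
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(500, 3))
X[:, 2] = (X[:, 0] ^ (rng.random(500) < 0.1)).astype(X.dtype)
print(chow_liu_tree(X))
```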
This document provides an introduction and schedule for an experimental mathematics course taught by Professor Joe Suzuki of Osaka University. The course will cover introductory statistics using the R programming language over 15 classes. Students will be evaluated on attendance and on 50 problem reports submitted through the CLE system. Student presentations of problem solutions will begin in December and provide opportunities for bonus points. The course aims to teach statistical concepts through hands-on use of R rather than theoretical explanations.
Reading Seminar (140515) Spectral Learning of L-PCFGs (Keisuke OTAKI)
1. The document presents a spectral learning method for latent-variable PCFGs (L-PCFGs) that uses tensor factorization.
2. It defines observable representations based on features of tree structures that can be computed from training data alone, without hidden variables.
3. The tensor parameter C of the L-PCFG can be recovered from the observable representations, allowing for spectral learning of the L-PCFG from a treebank via tensor methods.
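As a rough sketch of the first (projection) step behind such observable representations, the snippet below forms an empirical inside/outside cross-moment matrix and reduces it to rank m by SVD; the feature vectors here are random placeholders, the dimensions are arbitrary, and the subsequent tensor-recovery step described in point 3 is not shown.

```python
import numpy as np

# Hypothetical dimensions: d_in/d_out feature sizes, m latent states per nonterminal.
rng = np.random.default_rng(0)
n_samples, d_in, d_out, m = 1000, 50, 40, 8

# Placeholder inside-tree and outside-tree feature vectors for one nonterminal;
# in the actual method these are computed from the nodes of a treebank.
phi = rng.normal(size=(n_samples, d_in))
psi = rng.normal(size=(n_samples, d_out))

# Empirical cross-moment Omega = E[phi psi^T] and its rank-m truncation via SVD.
omega = phi.T @ psi / n_samples
U, S, Vt = np.linalg.svd(omega, full_matrices=False)
U_m, S_m, V_m = U[:, :m], S[:m], Vt[:m, :].T

# Low-dimensional projections of the inside and outside features; cross-moments of
# these projections are what spectral methods use to recover rule parameters
# up to an invertible linear transformation.
Y = phi @ U_m              # projected inside features, shape (n_samples, m)
Z = psi @ (V_m / S_m)      # projected outside features scaled by 1/singular values
```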
E-learning Development of Statistics and in Duex: Practical Approaches and Th... (Joe Suzuki)
This document discusses the development of e-learning courses in statistics through the Duex program. Duex is a consortium of Japanese universities and companies focused on data-related human resource development. It produces online statistics and data science courses with a low-cost, high-quality approach in which individual instructors create video lectures using PowerPoint, scripts, and video-editing software. The document outlines Duex's funding and participating institutions, and offers tips that help instructors create online video courses efficiently on a minimal budget with little assistance from others.
E-learning Design and Development for Data Science in Osaka University (Joe Suzuki)
This document discusses the development of e-learning courses for data science through the Kansai Data related Human Resource Development Consortium (KDC). KDC was established in 2017 with funding from the Japanese Ministry of Education and includes several universities. It aims to develop online statistics courses to make education more accessible and help train data science professionals. The document outlines KDC's goals, challenges in creating high-quality online courses, and strategies for increasing student enrollment and participation over the next five years as funding is scheduled to end.
1. The document proposes a regular quotient score for Bayesian network structure learning that allows for more efficient branch-and-bound search compared to the existing BDeu score.
2. The existing BDeu score violates regularity, meaning that Markov equivalent structures do not necessarily share the same BDeu score.
3. The authors propose a regular quotient score based on Jeffreys' prior that satisfies regularity, ensuring Markov equivalent structures share the score, enabling more efficient searching during branch-and-bound learning of Bayesian network structures.
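As a rough illustration of the Jeffreys-prior building block mentioned in point 3, here is a minimal Python sketch of a local marginal-likelihood score for a discrete child variable given discrete parents under a Dirichlet(1/2, ..., 1/2) prior. The paper's quotient normalization and the branch-and-bound search are not reproduced, and the function name is hypothetical.

```python
import numpy as np
from scipy.special import gammaln

def jeffreys_local_score(child, parents):
    """Log marginal likelihood of the integer-coded vector `child` given the columns
    of the 2-D array `parents`, using a Dirichlet(1/2, ..., 1/2) (Jeffreys) prior
    independently within every parent configuration."""
    child = np.unique(child, return_inverse=True)[1]          # recode states to 0..r-1
    r = child.max() + 1
    if parents.size:
        keys = np.unique(parents, axis=0, return_inverse=True)[1].ravel()
    else:
        keys = np.zeros(len(child), dtype=int)                # no parents: one configuration
    score = 0.0
    for q in np.unique(keys):
        counts = np.bincount(child[keys == q], minlength=r).astype(float)
        score += (gammaln(r / 2) - gammaln(counts.sum() + r / 2)
                  + np.sum(gammaln(counts + 0.5) - gammaln(0.5)))
    return score

# Example: score of column 0 given column 1 as its only parent.
rng = np.random.default_rng(0)
data = rng.integers(0, 2, size=(200, 2))
print(jeffreys_local_score(data[:, 0], data[:, 1:2]))
```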
The document discusses estimating mutual information and using it to learn forests and Bayesian networks from data. It presents methods for estimating mutual information, testing independence between variables, and applying Kruskal's and the Chow-Liu algorithms to learn tree structures that approximate joint distributions. Experiments apply these methods to the Asia and Alarm datasets to learn Bayesian networks.
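A minimal sketch of the independence decision mentioned above, assuming discrete data: the plug-in mutual information estimate is reduced by an MDL penalty of the form (|X|-1)(|Y|-1) log(n) / (2n), and a positive penalized value is read as dependence. This is the standard MDL penalty term; the exact constant used in the talk may differ.

```python
import numpy as np

def penalized_mi(x, y):
    """Plug-in mutual information (in nats) minus the MDL penalty
    (|X|-1)(|Y|-1) * log(n) / (2n); a positive value is read as dependence."""
    n = len(x)
    xs, xi = np.unique(x, return_inverse=True)
    ys, yi = np.unique(y, return_inverse=True)
    joint = np.zeros((len(xs), len(ys)))
    np.add.at(joint, (xi, yi), 1)                 # contingency table of counts
    pxy = joint / n
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    mi = np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz]))
    penalty = (len(xs) - 1) * (len(ys) - 1) * np.log(n) / (2 * n)
    return mi - penalty

# Restricting Kruskal's algorithm to edges with penalized_mi > 0 yields a forest
# rather than a full spanning tree when some pairs are judged independent.
```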
This document outlines a two-part course on Bayesian approaches to data compression. Part I, on July 17th, covers data compression for known and unknown sources in a 90-minute session that includes a 45-minute exercise. Part II, on July 24th, focuses on learning graphical models from data, building on the concepts from Part I.
A Conjecture on Strongly Consistent Learning (Joe Suzuki)
1. The document presents a conjecture about the error probability of overestimating the true order k* when learning autoregressive moving average (ARMA) models from samples.
2. The conjecture states that if the estimated order k is greater than the true order k*, the error probability is equal to the probability that a chi-squared distributed random variable with k - k* degrees of freedom is greater than (k - k*)dn, where dn is related to the sample size n.
3. The author provides evidence that a sum of squared estimated ARMA coefficients could be chi-squared distributed, lending credibility to the conjecture.
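Under the conjecture, the overestimation probability is just a chi-squared tail, so it can be evaluated directly; a minimal SciPy sketch, where the choice d_n = log n and the particular values of k, k*, and n are illustrative assumptions only:

```python
from math import log
from scipy.stats import chi2

def overestimate_prob(k, k_star, n, d_n=None):
    """Conjectured probability of selecting an order k > k_star:
    P( chi^2 with (k - k_star) degrees of freedom  >  (k - k_star) * d_n )."""
    d_n = log(n) if d_n is None else d_n      # d_n = log n is an illustrative choice
    df = k - k_star
    return chi2.sf(df * d_n, df)

# Tail probabilities for overshooting a true order k* = 1 with n = 1000 samples.
for k in range(2, 6):
    print(k, overestimate_prob(k, k_star=1, n=1000))
```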
A Generalization of Nonparametric Estimation and On-Line Prediction for Stati... (Joe Suzuki)
This document presents a generalization of Ryabko's measure for universal coding of stationary ergodic sources. The generalization allows constructing a measure νn that achieves universal coding for sources without a density function, such as those represented by a measure μn on a measurable space. νn is defined by projecting the source onto increasingly fine partitions and weighting the projections. If the Kullback-Leibler divergence between the source and the weighting measure converges across partitions, νn achieves universal coding for any stationary ergodic source μn. Examples demonstrate how the approach extends Ryabko's histogram weighting to new source types.
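For the density-function case that this talk generalizes, Ryabko's construction itself admits a compact sketch: mix, over increasingly fine dyadic partitions of [0, 1), the Krichevsky-Trofimov probabilities of the induced bin sequences, rescaled to densities. The restriction to [0, 1), the weights 2^(-j), and the maximum depth are assumptions of this sketch, not part of the talk's general construction.

```python
import numpy as np
from scipy.special import gammaln

def kt_log_prob(bins, m):
    """Log Krichevsky-Trofimov probability of a sequence over an alphabet of size m."""
    counts = np.bincount(bins, minlength=m).astype(float)
    return (np.sum(gammaln(counts + 0.5) - gammaln(0.5))
            + gammaln(m / 2) - gammaln(counts.sum() + m / 2))

def ryabko_log_density(x, max_depth=10):
    """Log of the mixture density  sum_j 2^(-j) * KT_j(bin sequence) * 2^(j*n)
    for a sample x in [0, 1): depth j uses 2^j equal-width bins, and the KT
    probability of the bin sequence is rescaled to a density on [0, 1)."""
    x = np.asarray(x)
    n = len(x)
    log_terms = []
    for j in range(1, max_depth + 1):
        m = 2 ** j
        bins = np.minimum((x * m).astype(int), m - 1)
        log_terms.append(-j * np.log(2)                 # weight 2^(-j) of depth j
                         + kt_log_prob(bins, m)         # probability of the bin sequence
                         + n * j * np.log(2))           # bin-width^(-n) rescaling to a density
    return np.logaddexp.reduce(log_terms)

# Log-density of the mixture for two data sets on [0, 1).
rng = np.random.default_rng(0)
print(ryabko_log_density(rng.beta(5, 5, size=200)))
print(ryabko_log_density(rng.uniform(size=200)))
```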
Bayesian Criteria based on Universal Measures (Joe Suzuki)
The document presents Joe Suzuki's work on generalizing Bayesian criteria to settings beyond discrete or continuous distributions. It introduces generalized density functions based on Radon-Nikodym derivatives that allow defining universal measures gn approximating the true densities f. These generalized densities extend Bayesian criteria, such as comparing p·gnX·gnY with (1−p)·gnXY to assess independence, to any sample space without assuming a specific distributional form. The approach unifies Bayesian and MDL methods under a framework of universality, with applications such as Bayesian network structure learning.
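Concretely, the independence criterion compares p·gnX·gnY with (1−p)·gnXY. Below is a minimal Python sketch for the discrete case, taking the generalized densities to be Dirichlet(1/2) (Jeffreys) marginal likelihoods and p = 1/2; these specific choices are assumptions of the sketch rather than the paper's general construction.

```python
import numpy as np
from scipy.special import gammaln

def log_marginal(counts):
    """Log Dirichlet(1/2, ..., 1/2) marginal likelihood of a vector (or table) of counts."""
    counts = np.asarray(counts, dtype=float).ravel()
    m = len(counts)
    return (np.sum(gammaln(counts + 0.5) - gammaln(0.5))
            + gammaln(m / 2) - gammaln(counts.sum() + m / 2))

def judged_independent(x, y, p=0.5):
    """True if p * gnX * gnY exceeds (1 - p) * gnXY, computed in the log domain."""
    xs, xi = np.unique(x, return_inverse=True)
    ys, yi = np.unique(y, return_inverse=True)
    joint = np.zeros((len(xs), len(ys)))
    np.add.at(joint, (xi, yi), 1)                      # contingency table of counts
    log_indep = np.log(p) + log_marginal(joint.sum(axis=1)) + log_marginal(joint.sum(axis=0))
    log_dep = np.log(1 - p) + log_marginal(joint)
    return log_indep > log_dep
```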
The Universal Measure for General Sources and its Application to MDL/Bayesian... (Joe Suzuki)
1. The document presents a new theory for universal coding and the MDL principle that is applicable to general sources without assuming discrete or continuous distributions.
2. It constructs a universal measure νn that satisfies certain conditions to allow generalization of universal coding and MDL.
3. This generalized framework is applied to problems that previously separated discrete and continuous cases, such as Markov order estimation using continuous data sequences and mixed discrete-continuous feature selection.
Universal Prediction without assuming either Discrete or Continuous (Joe Suzuki)
1. The document discusses universal prediction without assuming data is either discrete or continuous. It presents a method to estimate generalized density functions to achieve universal prediction for any unknown probabilistic model.
2. A key insight is that universal prediction can be achieved by estimating the ratio between the true density function and a reference measure, without needing to directly estimate the density function. This allows universal prediction for data that is neither discrete nor continuous.
3. The method involves recursively refining partitions of the sample space to estimate the density ratio. It is shown that this ratio can be estimated universally for any density function, achieving the goal of prediction without assumptions about the data type.
Bayesian network structure estimation based on the Bayesian/MDL criteria when... (Joe Suzuki)
J. Suzuki, "Bayesian network structure estimation based on the Bayesian/MDL criteria when both discrete and continuous variables are present," IEEE Data Compression Conference, pp. 307-316, Snowbird, Utah, April 2012.