1/50
PRML Chapter 14
Akira Miyazawa
SOKENDAI (The Graduate University for Advanced Studies), master's program
miyazawa-a@nii.ac.jp
September 25, 2015
(modified: October 4, 2015)
2/50
Notice
These slides are typeset with LuaLaTeX; the source code is available at
https://github.com/pecorarista/documents
Unless otherwise noted, the notation follows the book.
Besides figures drawn with TikZ, some figures are taken from the support page of
Bishop (2006), http://research.microsoft.com/en-us/um/people/cmbishop/prml/,
and from the support page of Murphy (2012), http://www.cs.ubc.ca/~murphyk/MLbook/
If you find any incorrect statements, please let me know.
3/50
Contents
1. Bayesian model averaging
2. Committees
3. Boosting (AdaBoost)
   - classification
   - regression
4. Tree-based models (CART)
   - regression
   - classification
5. Conditional mixture models
4/50
Combining models
Training several models and averaging their predictions for regression can
suppress overfitting and improve predictive performance; for classification
the same effect can be expected from taking a majority vote.
Such a combination of several models is called a committee.
5/50
Combining models
Note that combining models is not the same thing as Bayesian model averaging.
As an example of model combination, consider density estimation with a mixture
of Gaussians. The model introduces a latent variable z indicating which
component each data point was generated from, and defines the joint
distribution p(x, z). The density of the observed variable x is

    p(x) = \sum_z p(x, z)                                                (14.3)
         = \sum_{k=1}^{K} \pi_k \, \mathcal{N}(x \mid \mu_k, \Sigma_k)   (14.4)

For an i.i.d. data set X = (x_1, ..., x_N), the marginal distribution is

    p(X) = \prod_{n=1}^{N} p(x_n) = \prod_{n=1}^{N} \sum_{z_n} p(x_n, z_n)   (14.5)

so there is one latent variable z_n for each observed point x_n.
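As an editorial illustration of (14.4), and not part of the original slides,
here is a minimal Python sketch that evaluates a one-dimensional Gaussian
mixture density with the latent variable summed out; the component parameters
below are made up for the example.

import math

def gauss(x, mu, var):
    """Density of N(x | mu, var) for scalar x."""
    return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def mixture_density(x, pis, mus, variances):
    """p(x) = sum_k pi_k N(x | mu_k, sigma_k^2): (14.4), with z summed out."""
    return sum(pi * gauss(x, mu, var)
               for pi, mu, var in zip(pis, mus, variances))

# A hypothetical two-component mixture.
print(mixture_density(0.5, pis=[0.3, 0.7], mus=[0.0, 1.0], variances=[1.0, 0.5]))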
6/50
Bayesian model averaging
In contrast, here is an example of Bayesian model averaging. Suppose several
density models are indexed by h = 1, ..., H; say one of them is a mixture of
Gaussians and another is a mixture of Cauchy distributions:

    \sum_k \pi_k \, \mathcal{N}(x \mid \mu_k, \Sigma_k)   and
    \sum_k \pi_k \, t_1(x \mid \mu_k, \Sigma_k)
7/50
Bayesian model averaging
If a prior probability p(h) over the models is given, the marginal
distribution of the data is expressed as

    p(X) = \sum_{h=1}^{H} p(X \mid h) \, p(h)   (14.6)

The difference from model combination is that the whole data set is generated
by just one of the models h = 1, ..., H.
8/50
Committees
As an example of a method that uses a committee, we introduce bagging
(bootstrap aggregation). First, from the training data Z = (z_1, ..., z_N),
generate bootstrap samples Z^{(1)}, ..., Z^{(M)}.

[Figure: bootstrap samples Z^{(1)}, Z^{(2)}, ..., Z^{(M)}, each drawn with
replacement from the original training sample Z = (z_1, ..., z_N).]
9/50
Committees
The committee prediction y_COM(x) produced by bagging uses the predictions
y_1(x), ..., y_M(x) of the models trained on the bootstrap samples
Z^{(1)}, ..., Z^{(M)}:

    y_{COM}(x) = \frac{1}{M} \sum_{m=1}^{M} y_m(x)   (14.7)

Let us check whether this actually helps by evaluating its error.
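The following minimal sketch of (14.7) is an editorial addition; it assumes a
generic fit(Z, T) callback returning a predictor, which is a placeholder
interface rather than anything defined in the slides.

import numpy as np

def bagging_predictor(Z, T, fit, M, seed=0):
    """Train M models on bootstrap samples of (Z, T) and average them (14.7)."""
    rng = np.random.default_rng(seed)
    N = len(T)
    models = []
    for _ in range(M):
        idx = rng.integers(0, N, size=N)  # draw N indices with replacement
        models.append(fit(Z[idx], T[idx]))
    def y_com(x):
        return sum(y(x) for y in models) / M
    return y_com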
10/50
Error evaluation for bagging
Let h(x) be the regression function we want to predict, and suppose each
model's prediction y_m(x) can be written with an additive error
\epsilon_m(x):

    y_m(x) = h(x) + \epsilon_m(x)   (14.8)

The expected squared error is then

    \mathbb{E}[(y_m(x) - h(x))^2] = \mathbb{E}[\epsilon_m(x)^2]   (14.9)

so we define the average expected squared error E_AV as

    E_{AV} := \mathbb{E}\Bigl[\frac{1}{M}\sum_{m=1}^{M}(y_m(x) - h(x))^2\Bigr]
            = \frac{1}{M}\sum_{m=1}^{M}\mathbb{E}[\epsilon_m(x)^2]   (14.10)
11/50
Error evaluation for bagging
For the committee, on the other hand, the expected error E_COM is

    E_{COM} := \mathbb{E}\Bigl[\Bigl(\frac{1}{M}\sum_{m=1}^{M} y_m(x) - h(x)\Bigr)^2\Bigr]
             = \mathbb{E}\Bigl[\Bigl(\frac{1}{M}\sum_{m=1}^{M}\epsilon_m(x)\Bigr)^2\Bigr]   (14.11)

Applying the Cauchy-Schwarz inequality to the vectors (1, ..., 1) \in R^M and
(\epsilon_1(x), ..., \epsilon_M(x)) \in R^M gives

    \Bigl(\sum_{m=1}^{M}\epsilon_m(x)\Bigr)^2 \le M \sum_{m=1}^{M} \epsilon_m(x)^2

and hence E_COM \le E_AV. That is, the expected error of the committee never
exceeds the average of the expected errors of the individual models.
12/50
Error evaluation for bagging
In particular, if the errors have mean 0 and are uncorrelated, that is,

    \mathbb{E}[\epsilon_m(x)] = 0   (14.12)
    cov(\epsilon_m(x), \epsilon_\ell(x)) = \mathbb{E}[\epsilon_m(x)\epsilon_\ell(x)] = 0,
        m \neq \ell   (14.13)

then(*)

    E_{COM} = \mathbb{E}\Bigl[\Bigl(\frac{1}{M}\sum_{m=1}^{M}\epsilon_m(x)\Bigr)^2\Bigr]
            = \frac{1}{M^2}\mathbb{E}\Bigl[\sum_{m=1}^{M}\epsilon_m(x)^2\Bigr]
            = \frac{1}{M} E_{AV}   (14.14)

and the expected error drops substantially.

(*) Since similar models are trained on similar training data, this can hardly
be expected in practice.
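As a quick editorial sanity check of (14.14): with synthetic errors that do
satisfy the zero-mean, uncorrelated assumptions (14.12)-(14.13), the committee
error comes out close to E_AV / M. This is a toy simulation, not a claim about
real committees.

import numpy as np

rng = np.random.default_rng(0)
M, trials = 10, 100_000
eps = rng.normal(size=(trials, M))       # zero-mean, uncorrelated errors
E_AV = (eps ** 2).mean()                 # average expected error, close to 1.0
E_COM = (eps.mean(axis=1) ** 2).mean()   # committee error, close to 1/M (14.14)
print(E_AV, E_COM)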
13/50
Boosting
As another method that uses a committee, we introduce boosting. Boosting was
designed as an algorithm for classification, but it can also be extended to
regression; for now we consider classification.
The individual classifiers that make up the committee are called base learners
or weak learners. Boosting trains the weak learners not in parallel but
sequentially, on weighted data, and then combines them. The weights are set so
that they become larger for data points misclassified by the previous
classifier.
We introduce AdaBoost (adaptive boosting), a representative boosting method.
14/50
AdaBoost

[Figure: schematic of AdaBoost. Weight sets {w_n^{(1)}}, {w_n^{(2)}}, ...,
{w_n^{(M)}} are used to train classifiers y_1(x), y_2(x), ..., y_M(x), which
are combined into Y_M(x) = sign(\sum_{m=1}^{M} \alpha_m y_m(x)).]

for n = 1, ..., N
    Initialize the data weighting coefficients

        w_n^{(1)} := 1/N
15/50
AdaBoost
for m = 1, ..., M
    Fit a classifier y_m(x) to the training data by minimizing

        J_m := \sum_{n=1}^{N} w_n^{(m)} \, 1_{\{x \mid y_m(x) \neq t_n\}}(x_n).   (14.15)

    Evaluate

        \alpha_m := \log \frac{1 - \epsilon_m}{\epsilon_m}   (14.17)

    where

        \epsilon_m := \frac{\sum_{n=1}^{N} w_n^{(m)} 1_{\{x \mid y_m(x) \neq t_n\}}(x_n)}
                           {\sum_{n=1}^{N} w_n^{(m)}}.   (14.16)

    Update the data weighting coefficients

        w_n^{(m+1)} := w_n^{(m)} \exp\bigl(\alpha_m 1_{\{x \mid y_m(x) \neq t_n\}}(x_n)\bigr).   (14.18)
16/50
AdaBoost
Make predictions using the final model, which is given by

    Y_M(x) := sign\Bigl(\sum_{m=1}^{M} \alpha_m y_m(x)\Bigr).   (14.19)

The formulas appearing in this algorithm are derived from sequential
minimization of an exponential error.
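As an editorial sketch, and not code from the slides, the whole procedure
(14.15)-(14.19) can be written compactly. The finite pool weak_learners of
candidate classifiers is an assumption made for brevity; a real implementation
would fit, say, a decision stump to the weighted data at each round and would
guard against epsilon_m = 0.

import numpy as np

def adaboost(X, t, weak_learners, M):
    """AdaBoost following (14.15)-(14.19).

    X: (N, D) inputs; t: (N,) targets in {-1, +1};
    weak_learners: functions mapping X row-wise to {-1, +1}.
    """
    N = len(t)
    w = np.full(N, 1.0 / N)                    # initial weights w_n^(1)
    alphas, chosen = [], []
    for _ in range(M):
        # Choose the learner minimizing the weighted error J_m (14.15).
        miss = [(h(X) != t).astype(float) for h in weak_learners]
        best = int(np.argmin([w @ m for m in miss]))
        eps = (w @ miss[best]) / w.sum()       # epsilon_m (14.16)
        alpha = np.log((1 - eps) / eps)        # alpha_m (14.17); needs 0 < eps < 1/2
        w = w * np.exp(alpha * miss[best])     # weight update (14.18)
        alphas.append(alpha)
        chosen.append(weak_learners[best])
    def Y(X_new):                              # final model (14.19)
        return np.sign(sum(a * h(X_new) for a, h in zip(alphas, chosen)))
    return Y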
17/50
Minimizing the exponential error
For a target value t \in \{-1, 1\} and a prediction y \in R, consider the
exponential error E(z) := \exp(-z) as a function of the product z = ty. E(z)
takes values close to 0 when t and y have the same sign (z > 0), and large
values when they have opposite signs (z < 0).

[Figure: E(z) over z \in [-2, 2], compared with the cross-entropy error, the
hinge function, and the misclassification error.]
18/50
Minimizing the exponential error
We minimize the following error E:

    E := \sum_{n=1}^{N} \exp\Bigl(-t_n \sum_{\ell=1}^{m} \frac{1}{2}\alpha_\ell y_\ell(x_n)\Bigr)

Learning proceeds sequentially, so \alpha_1, ..., \alpha_{m-1} and
y_1(x), ..., y_{m-1}(x) have already been obtained, and E is minimized with
respect to the m-th pair \alpha_m and y_m(x). E can be rewritten as

    E = \sum_{n=1}^{N} \exp\Bigl(-t_n \sum_{\ell=1}^{m-1}\frac{1}{2}\alpha_\ell y_\ell(x_n)
        - \frac{1}{2} t_n \alpha_m y_m(x_n)\Bigr)
      = \sum_{n=1}^{N} w_n^{(m)} \exp\Bigl(-\frac{1}{2} t_n \alpha_m y_m(x_n)\Bigr)   (14.22)

where we have set

    w_n^{(m)} := \exp\Bigl(-t_n \sum_{\ell=1}^{m-1}\frac{1}{2}\alpha_\ell y_\ell(x_n)\Bigr)   (14.22')
19/50
Minimizing the exponential error
Let T_m be the index set of the data points classified correctly by
y_m : x \mapsto \{-1, 1\}, and M_m the index set of the misclassified ones.
Then

    E = e^{-\alpha_m/2} \sum_{n \in T_m} w_n^{(m)} + e^{\alpha_m/2} \sum_{n \in M_m} w_n^{(m)}
      = \bigl(e^{\alpha_m/2} - e^{-\alpha_m/2}\bigr)
        \sum_{n=1}^{N} w_n^{(m)} 1_{\{x \mid y_m(x) \neq t_n\}}(x_n)
        + e^{-\alpha_m/2} \sum_{n=1}^{N} w_n^{(m)}   (14.23)

so minimizing E with respect to y_m(x) is the same as minimizing (14.15).
20/50
Minimizing the exponential error
Next, setting the derivative with respect to \alpha_m to 0 gives

    \bigl(e^{\alpha_m/2} + e^{-\alpha_m/2}\bigr)
    \sum_{n=1}^{N} w_n^{(m)} 1_{\{x \mid y_m(x) \neq t_n\}}(x_n)
    - e^{-\alpha_m/2} \sum_{n=1}^{N} w_n^{(m)} = 0

    e^{\alpha_m} + 1 = \frac{\sum_{n=1}^{N} w_n^{(m)}}
                            {\sum_{n=1}^{N} w_n^{(m)} 1_{\{x \mid y_m(x) \neq t_n\}}(x_n)}

Rewriting this with \epsilon_m as defined in (14.16) yields

    \alpha_m = \log \frac{1 - \epsilon_m}{\epsilon_m}

which is (14.17) in the algorithm.
21/50
Minimizing the exponential error
From (14.22'), the weights are updated as

    w_n^{(m+1)} = w_n^{(m)} \exp\Bigl(-\frac{1}{2} t_n \alpha_m y_m(x_n)\Bigr)   (14.24)

Using the identity

    t_n y_m(x_n) = 1 - 2 \cdot 1_{\{x \mid y_m(x) \neq t_n\}}(x_n)   (14.25)

this can be rewritten as

    w_n^{(m+1)} = w_n^{(m)} \exp\Bigl(-\frac{1}{2}\alpha_m
                  \bigl(1 - 2 \cdot 1_{\{x \mid y_m(x) \neq t_n\}}(x_n)\bigr)\Bigr)
                = w_n^{(m)} \exp\Bigl(-\frac{\alpha_m}{2}\Bigr)
                  \exp\bigl(\alpha_m 1_{\{x \mid y_m(x) \neq t_n\}}(x_n)\bigr)   (14.26)

Since the factor \exp(-\alpha_m/2) is common to all data points, it can be
ignored, which gives the update rule (14.18).
22/50
An error function for boosting
Consider the expectation of the following exponential error:

    \mathbb{E}[\exp(-t y(x))]
      = \sum_{t \in \{-1,1\}} \int \exp(-t y(x)) \, p(t \mid x) \, p(x) \, dx   (14.27)

We minimize this with respect to y using the calculus of variations. Set
\Lambda_t(y) := \exp(-t y(x)) \, p(t \mid x) \, p(x). Then

    \sum_{t \in \{-1,1\}} D_y \Lambda_t(y) = 0
    \exp(y(x)) \, p(t = -1 \mid x) - \exp(-y(x)) \, p(t = 1 \mid x) = 0
    y(x) = \frac{1}{2} \log \frac{p(t = 1 \mid x)}{p(t = -1 \mid x)}   (14.28)

That is, AdaBoost sequentially estimates half the log ratio of p(t = 1 | x)
to p(t = -1 | x). This is what justifies classifying with the sign function
in the final model (14.19).
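One editorial consequence of (14.28): inverting the half-log-odds turns the
committee output into a rough (uncalibrated) probability estimate,
p(t = 1 | x) = 1 / (1 + exp(-2 y(x))). A one-line sketch:

import math

def prob_from_half_log_odds(y):
    """Invert y = (1/2) log(p / (1 - p)) from (14.28)."""
    return 1.0 / (1.0 + math.exp(-2.0 * y))

print(prob_from_half_log_odds(0.0))  # 0.5, the decision boundary of sign(.)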
23/50
AdaBoost in action
The figures show AdaBoost classifying data; the radius of each circle
represents the weight assigned to that data point.

[Figure: decision boundaries and weighted data points after m = 1 and m = 2
rounds.]
24/50
AdaBoost in action
[Figure: decision boundaries and weighted data points after m = 3, 6, 10, and
150 rounds.]
25/50
AdaBoost asides
AdaBoost was proposed by Freund and Schapire (1996). The interpretation as
sequential minimization of an exponential error was given by Friedman et al.
(2000); with it, changing the error function opened the way to a variety of
extensions.
26/50
Tree-based models
So far we have looked at methods that combine the outputs of several models.
We now turn to tree-based models, which instead select a single one of several
models. A tree-based model partitions the input space into cuboid regions and
makes predictions with the model assigned to each region.

[Figure: a binary tree that splits on x_1 > \theta_1, x_2 > \theta_3,
x_1 \le \theta_4, and x_2 \le \theta_2, with leaves A, B, C, D, E, and the
induced partition of the (x_1, x_2) plane.]
27/50
CART
Below we describe CART (classification and regression trees), one kind of
tree-based model. As the name says, CART can be used for both classification
and regression, but we first restrict the discussion to regression.
Suppose we are given data paired with target values,
D = ((x_1, t_1), ..., (x_N, t_N)) \subset R^D \times R. Given a partition of
the input space into regions R_1, ..., R_M, the prediction model is defined as

    f(x) := \sum_{m=1}^{M} c_m 1_{R_m}(x)
28/50
CART
[Figure: an example of a partition of the input space and the resulting
piecewise-constant predictions of a regression tree.]
29/50
Optimal prediction for a given partition
Within each region, we make the prediction that minimizes the squared error.
Writing I_m := \{n ; x_n \in R_m\}, the error becomes

    \sum_{n=1}^{N} (t_n - f(x_n))^2
      = \sum_{n=1}^{N} \Bigl(t_n - \sum_{m=1}^{M} c_m 1_{R_m}(x_n)\Bigr)^2
      = \sum_{m=1}^{M} \sum_{n \in I_m} (c_m - t_n)^2

Setting the derivative with respect to each c_m to 0 shows that the optimal
prediction is

    \hat{c}_m = \frac{1}{|I_m|} \sum_{n \in I_m} t_n

Determining a partition that exactly minimizes the squared error is
computationally expensive, so we build the tree greedily.
30/50
Splitting
Consider how to split a region. Using a splitting variable j \in \{1, ..., D\}
and a threshold \theta, define two regions R_1(j, \theta) and R_2(j, \theta) by

    R_1(j, \theta) = \{x ; x_j \le \theta\},   R_2(j, \theta) = \{x ; x_j > \theta\}

and let I_i(j, \theta) := \{n ; x_n \in R_i(j, \theta)\}. The pair (j, \theta)
that defines the split is obtained by solving the following problem:

    \min_{j \in \{1,...,D\}} \min_{\theta}
      \Bigl[\min_{c_1} \sum_{n \in I_1(j,\theta)} (c_1 - t_n)^2
          + \min_{c_2} \sum_{n \in I_2(j,\theta)} (c_2 - t_n)^2\Bigr].

The values c_1 and c_2 minimizing the inner sums are, respectively,

    \hat{c}_1 = \frac{1}{|I_1(j,\theta)|} \sum_{n \in I_1(j,\theta)} t_n,
    \hat{c}_2 = \frac{1}{|I_2(j,\theta)|} \sum_{n \in I_2(j,\theta)} t_n

so it suffices to compute the optimal \theta for each j and then choose the
best j.
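A minimal editorial sketch of this exhaustive search follows. Candidate
thresholds are taken as midpoints between consecutive sorted values of x_j,
one common convention; the slides do not specify the candidate set.

import numpy as np

def best_split(X, t):
    """Search over (j, theta) for the split minimizing the squared error.

    X: (N, D) inputs; t: (N,) targets. Returns (j, theta, loss).
    """
    best = (None, None, np.inf)
    for j in range(X.shape[1]):
        values = np.unique(X[:, j])                   # sorted unique values
        for theta in (values[:-1] + values[1:]) / 2:  # candidate thresholds
            left = X[:, j] <= theta                   # membership in R_1(j, theta)
            c1, c2 = t[left].mean(), t[~left].mean()  # optimal constants
            loss = ((t[left] - c1) ** 2).sum() + ((t[~left] - c2) ** 2).sum()
            if loss < best[2]:
                best = (j, float(theta), float(loss))
    return best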
31/50
Stopping criteria
We must consider when to stop growing the tree. A simple approach is to stop
when the decrease in error becomes small, but it is known empirically that a
large decrease in error may appear again if splitting continues. We therefore
first grow a large tree and then prune its branches to obtain the desired
tree.
32/50
Pruning
[Figure: a tree T with vertices t_1, ..., t_9; the subtree T_{t_7} rooted at
t_7; and the pruned tree T - T_{t_7}.]
The regions corresponding to t_8 and t_9 are merged, and t_7 becomes a leaf.
33/50
A criterion for pruning
As the criterion for choosing the branches to prune, we use the following
error-complexity measure:

    R_\alpha(T) = R(T) + \alpha |\tilde{T}|

where |\tilde{T}| is the total number of leaves of the tree T and R(T) is the
error

    R(T) := \sum_{m=1}^{|\tilde{T}|} \sum_{n \in I_m} (\hat{c}_m - t_n)^2.

The coefficient \alpha \ge 0 is a penalty that regulates pruning. Under this
measure, a good pruning is one that reduces the number of leaves without
letting the error (which usually grows when we prune) increase too much.
34/50
A criterion for pruning
Varying \alpha, we can observe the following.
When \alpha = 0:
    a tree grown until each leaf is dedicated to a single data point is optimal.
When \alpha is large:
    the tree consisting of the root t_1 alone is optimal.
A tree of just the right size lies between these extremes.
35/50
Pruning
For the subtree T_t rooted at a vertex t and the tree (\{t\}, \emptyset)
obtained by pruning everything below t, the error-complexity measures are

    R_\alpha(T_t) = R(T_t) + \alpha |\tilde{T_t}|,
    R_\alpha((\{t\}, \emptyset)) = R((\{t\}, \emptyset)) + \alpha

respectively. Since R_\alpha(T_t) < R_\alpha((\{t\}, \emptyset)) holds exactly
when

    \alpha < \frac{R((\{t\}, \emptyset)) - R(T_t)}{|\tilde{T_t}| - 1}

it is better not to prune at t while \alpha stays below this threshold. We
therefore define the quantity used to choose the branch to prune as

    g(t ; T) := \frac{R((\{t\}, \emptyset)) - R(T_t)}{|\tilde{T_t}| - 1}.
36/50
Pruning
The following algorithm, called weakest link pruning, yields a sequence of
trees T^0 \supset ... \supset T^J = (\{t_1\}, \emptyset) and a sequence of
coefficients 0 < \alpha_1 < ... < \alpha_J. Here V(T) denotes the vertex set
of T, so V(T^i) \setminus \tilde{T}^i are the internal vertices. A runnable
sketch follows the pseudocode.

1: i <- 0
2: while |\tilde{T}^i| > 1
3:     \alpha_i <- min_{t \in V(T^i) \setminus \tilde{T}^i} g(t ; T^i)
4:     \bar{T}^i <- arg min_{t \in V(T^i) \setminus \tilde{T}^i} g(t ; T^i)
5:     for t \in \bar{T}^i
6:         T^i <- T^i - T^i_t
7:     i <- i + 1
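Below is an editorial sketch of the algorithm under a tree representation
chosen purely for illustration: a node is a dict with left/right children
(None for a leaf) and error, its training error R((\{t\}, \emptyset)) when
collapsed to a leaf. For simplicity it prunes one weakest node per iteration
rather than all ties at once as in line 5.

from copy import deepcopy

def leaves(node):
    """Leaves of the subtree rooted at node."""
    if node["left"] is None:
        return [node]
    return leaves(node["left"]) + leaves(node["right"])

def weakest_link(node, best=None):
    """Internal node minimizing g(t; T)."""
    if node["left"] is None:
        return best
    ls = leaves(node)
    g = (node["error"] - sum(l["error"] for l in ls)) / (len(ls) - 1)
    if best is None or g < best[0]:
        best = (g, node)
    best = weakest_link(node["left"], best)
    return weakest_link(node["right"], best)

def prune_sequence(tree):
    """Return the sequence [(alpha_i, T^i)] of weakest-link pruning."""
    tree = deepcopy(tree)
    seq = [(0.0, deepcopy(tree))]
    while tree["left"] is not None:
        alpha, node = weakest_link(tree)
        node["left"] = node["right"] = None  # collapse t to a leaf
        seq.append((alpha, deepcopy(tree)))
    return seq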
37/50
Cross-validation
What must finally be decided is which tree to pick from the obtained sequence
\{T^i\}_{i=0}^{J}. For this we use cross-validation.

[Figure: K-fold cross-validation. The data are split into K blocks; in each of
K rounds, one block is used for testing and the remaining K - 1 for training.]
38/50
Cross-validation
The procedure goes as follows.
1: Using the whole data set D, obtain the sequence of trees \{T^i\}_{i=0}^{J}
   and the sequence of parameters \{\alpha_i\}_{i=0}^{J}.
2: Split the data as D = \bigcup_{k=1}^{K} D_k, making the blocks D_k roughly
   equal in size.
3: Using each D^{(k)} := D \setminus D_k, obtain a sequence of trees
   \{T^{(k)i}\}_{i=0}^{i_k} and a sequence of parameters
   \{\alpha_i^{(k)}\}_{i=0}^{i_k}.
39/50
Cross-validation
4: For each \alpha'_i := \sqrt{\alpha_i \alpha_{i+1}}, obtain the trees
   T^{(1)}(\alpha'_i), ..., T^{(K)}(\alpha'_i) that minimize R_\alpha(T),
   together with the corresponding predictors y_i^{(1)}, ..., y_i^{(K)}.
   Note that if \alpha \in [\alpha_i^{(k)}, \alpha_{i+1}^{(k)}) then
   T^{(k)}(\alpha) = T^{(k)i}.
5: From the results so far, the following quantity can be computed:

    R^{CV}(T^i) = \frac{1}{N} \sum_{k=1}^{K} \sum_{n : (x_n, t_n) \in D_k}
                  \bigl(t_n - y_i^{(k)}(x_n)\bigr)^2

6: Find the tree with the smallest error,
   T^{**} := \arg\min_{T^i} R^{CV}(T^i).
40/50
Cross-validation
7: Compute the standard error (SE):

    SE\bigl(R^{CV}(T^i)\bigr) := s(T^i) / \sqrt{N},

    s(T^i) := \sqrt{\frac{1}{N}\sum_{n=1}^{N}
              \Bigl[\bigl(t_n - y_i^{(\kappa(n))}(x_n)\bigr)^2 - R^{CV}(T^i)\Bigr]^2},

    \kappa(n) = \sum_{k=1}^{K} k \, 1_{D_k}((x_n, t_n))

8: Among the trees T satisfying

    R^{CV}(T) \le R^{CV}(T^{**}) + 1 \cdot SE(T^{**})

   take the smallest one, T^{*}, as the final result.
   This heuristic decision rule is called the 1 SE rule.
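Steps 6-8 in code, as an editorial sketch; the arrays are assumed to be
indexed consistently with the tree sequence, and a tree's size is taken to be
its number of leaves.

import numpy as np

def one_se_rule(cv_errors, ses, n_leaves):
    """Index of T*: the smallest tree with R^CV(T) <= R^CV(T**) + SE(T**)."""
    cv_errors, ses, n_leaves = map(np.asarray, (cv_errors, ses, n_leaves))
    best = int(np.argmin(cv_errors))              # T** (step 6)
    ok = cv_errors <= cv_errors[best] + ses[best]
    candidates = np.flatnonzero(ok)
    return int(candidates[np.argmin(n_leaves[candidates])])

On the table from Breiman et al. (1984) shown on the next slide, this selects
the tree with 10 leaves (marked *), while the minimum-CV-error tree has 23
leaves (marked **).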
41/50
Cross-validation
From Breiman et al. (1984), TABLE 3.3:

    i     |T̃^i|   R(T^i)   R^CV(T^i) ± SE
    1       31     .17      .30 ± .03
    2**     23     .19      .27 ± .03
    3       17     .22      .30 ± .03
    4       15     .23      .30 ± .03
    5       14     .24      .31 ± .03
    6*      10     .29      .30 ± .03
    7        9     .32      .41 ± .04
    8        7     .41      .51 ± .04
    9        6     .46      .53 ± .04
    10       5     .53      .61 ± .04
    11       2     .75      .75 ± .03
    12       1     .86      .86 ± .03

(** marks T^{**}, the minimum-CV-error tree; * marks T^{*}, the tree chosen by
the 1 SE rule.)
42/50
Classification with CART
The procedure described so far is nearly the same for classification as for
regression, so we only explain the splitting criterion. Suppose we want to
solve a K-class classification problem with classes C_1, ..., C_K. Let N(t) be
the number of data points passing through node t, and N_k(t) the number of
those that belong to C_k. Then

    p(t \mid C_k) = \frac{N_k(t)}{N_k},

    p(C_k, t) = p(C_k) \, p(t \mid C_k)
              = \frac{N_k}{N} \frac{N_k(t)}{N_k} = \frac{N_k(t)}{N},

    p(t) = \sum_{k=1}^{K} p(C_k, t) = \frac{N(t)}{N}.

Hence the probability p(C_k \mid t) is expressed as

    p(C_k \mid t) = \frac{p(C_k, t)}{p(t)} = \frac{N_k(t)}{N(t)}
43/50
Classification with CART
For the splitting criterion we use what is called an impurity function. An
impurity function is a function \phi defined on the probability simplex in R^M
that satisfies the following.
1. \phi(p) is maximal only at p = (1/M, ..., 1/M).
2. \phi(p) is minimal only at the unit vectors p = (0, ..., 0, 1, 0, ..., 0).
3. \phi is symmetric in the components of p:
   \phi(p_1, ..., p_i, ..., p_j, ..., p_M) = \phi(p_1, ..., p_j, ..., p_i, ..., p_M).
44/50
Impurity
Using an impurity function \phi, the impurity I(t) of a tree vertex t is
defined by

    I(t) := \phi(p(C_1 \mid t), ..., p(C_K \mid t))

For the children t_L and t_R of a vertex t, let the fractions of the data in t
that move to each child be

    p_L = \frac{p(t_L)}{p(t)},   p_R = \frac{p(t_R)}{p(t)}

Then the decrease in impurity caused by splitting t with a split s is

    \Delta I(s, t) = I(t) - p_R I(t_R) - p_L I(t_L)

and we keep choosing splits that make this decrease large.
45/50
Examples of impurity functions
Misclassification rate:

    I(t) = 1 - \max_k p(C_k \mid t)

Cross-entropy:

    I(t) = -\sum_{k=1}^{K} p(C_k \mid t) \log p(C_k \mid t)

Gini index:

    I(t) = \sum_{k=1}^{K} p(C_k \mid t) (1 - p(C_k \mid t))
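The three impurity functions, plus the decrease Delta I(s, t), as an editorial
sketch; p is the vector of class probabilities p(C_k | t).

import numpy as np

def error_rate(p):
    return 1.0 - float(np.max(p))

def cross_entropy(p):
    p = p[p > 0]                       # convention: 0 log 0 = 0
    return float(-(p * np.log(p)).sum())

def gini(p):
    return float((p * (1.0 - p)).sum())

def delta_I(I, p_t, p_tL, p_tR, pL, pR):
    """Impurity decrease: Delta I(s, t) = I(t) - pL I(t_L) - pR I(t_R)."""
    return I(p_t) - pL * I(p_tL) - pR * I(p_tR)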
46/50
An example of impurity
In two-class classification, use the cross-entropy as I.

[Figure: a node t containing 100/100 examples of the two classes, split into
children containing 100/0 and 0/100.]

    \Delta I(s, t) = I(t) - p_L I(t_L) - p_R I(t_R)
      = -\frac{1}{2}\log\frac{1}{2} - \frac{1}{2}\log\frac{1}{2}
        - 2 \cdot \frac{1}{2} (-1 \log 1 - 0 \log 0)
      = \log 2
47/50
An example of impurity
Now use a different split s'.

[Figure: the node t containing 100/100 examples, split into children
containing 60/40 and 40/60.]

    \Delta I(s', t) = I(t) - p_L I(t_L) - p_R I(t_R)
      = \log 2 - 2 \cdot \frac{1}{2}
        \Bigl(-\frac{3}{5}\log\frac{3}{5} - \frac{2}{5}\log\frac{2}{5}\Bigr)
      < \Delta I(s, t)

The earlier split s separates the classes better.
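Reusing cross_entropy and delta_I from the sketch after the list of impurity
functions, both examples check out numerically (an editorial sanity check):

import numpy as np

p_t = np.array([0.5, 0.5])
# Split s: 100/100 -> 100/0 and 0/100.
print(delta_I(cross_entropy, p_t,
              np.array([1.0, 0.0]), np.array([0.0, 1.0]), 0.5, 0.5))
# 0.6931... = log 2
# Split s': 100/100 -> 60/40 and 40/60.
print(delta_I(cross_entropy, p_t,
              np.array([0.6, 0.4]), np.array([0.4, 0.6]), 0.5, 0.5))
# 0.0201... < log 2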
48/50
Comparison of impurity functions
[Figure: misclassification error rate, Gini index, and entropy for a two-class
problem, plotted against the class probability p \in [0, 1].]
49/50
Problems with tree-based models
- The splits are aligned with the coordinate axes, so boundaries that are not
  parallel to an axis cannot be represented well.
- The splits are hard: exactly one model is responsible for any given input.
- Ordinary regression usually calls for approximation by a smooth function,
  but here the prediction is a region-wise constant and therefore
  discontinuous.
50/50
References
Bishop, C. M. (2006). Pattern Recognition and Machine Learning. Springer.
Breiman, L., Friedman, J., Stone, C. J., and Olshen, R. A. (1984).
Classification and Regression Trees. CRC Press.
Freund, Y. and Schapire, R. E. (1996). Experiments with a new boosting
algorithm.
Friedman, J., Hastie, T., Tibshirani, R., et al. (2000). Additive logistic
regression: a statistical view of boosting (with discussion and a rejoinder by
the authors). The Annals of Statistics.
Hastie, T., Tibshirani, R., and Friedman, J. (2009). The Elements of
Statistical Learning: Data Mining, Inference and Prediction. Springer, second
edition.
Murphy, K. P. (2012). Machine Learning: A Probabilistic Perspective. MIT Press.
Hirai, Y. (2012). Hajimete no patān ninshiki [A first course in pattern
recognition]. Morikita Publishing. (In Japanese.)