MADRL
Learning to Communicate to
Solve Riddles with
Deep Distributed Recurrent Q-Networks
1
Paul Kim
Index
1. Abstract
2. Introduction
3. Background
3.1 Deep Q-Networks
3.2 Independent DQN
3.3 Deep Recurrent Q-Networks
3.4 Partially Observable Multi-Agent RL
4. DDRQN
5. Multi-Agent Riddles
5.1 Hats Riddle
5.1.1 Hats Riddle : Formalization
5.1.2 Hats Riddle : Network Architecture
5.1.3 Hats Riddle : Results
5.1.4 Hats Riddle : Emergent Strategy
5.1.5 Hats Riddle : Curriculum Learning
5.2 Switch Riddle
5.2.1 Switch Riddle : Formalization
5.2.2 Switch Riddle : Network Architecture
5.2.3 Switch Riddle : Results n=3
5.2.4 Switch Riddle : Strategy n=3
5.2.5 Switch Riddle : Results n=4
5.2.6 Switch Riddle : No Switch
5.2.7 Switch Riddle : Ablation Experiments
2
Abstract
The paper proposes DDRQN (Deep Distributed Recurrent Q-Networks), which lets a team of agents learn to solve coordination tasks in which communication is essential.
In these tasks no communication protocol is given in advance, so in order to communicate successfully the agents must first develop and agree upon a protocol of their own.
Empirical results are presented on two multi-agent learning problems based on well-known riddles, showing that DDRQN solves them successfully and, in the process of learning, discovers workable communication protocols.
This point is emphasised (to the authors' knowledge, it is the first time a communication protocol has been learned through DRL).
Finally, ablation experiments confirm that each of the main components of the DDRQN architecture is critical to its success.
3
Introduction
(Restating that emphasis!!) DRL has recently solved a variety of RL problems, such as Go, robotics, visual attention, and the ALE. However, most of this work considers the case of a single learning agent.
Competitive & Cooperative
On the competitive side, methods such as AlphaGo have produced well-known DRL successes.
On the cooperative side, Tampuu et al. (the cited prior work) extended DQN so that two players interact in an ALE environment (Pong), showing that such an independent multi-agent setting is feasible.
Tampuu's group builds on Independent Q-Learning, in which each agent learns its own Q-function independently. However, that line of work assumes every agent can fully observe the state of the environment.
On the other hand, existing research on solving partially observable environments with DQN-style methods has mostly been limited to the single-agent setting.
=> So this paper takes on the setting where multiple agents act in a partially observable environment and aims to solve it.
4
Introduction
Proposed approach
To solve this problem, the authors propose a method called DDRQN, built on three ideas:
1. Last-action input
Each agent receives its own action from the previous time step as an input at the next time step.
2. Inter-agent weight sharing
All agents share the weights of a single network, but each agent's ID is given to the network as a conditioning input.
As a result, far fewer parameters have to be learned, which makes faster learning possible.
3. Disabling experience replay
Because the environment becomes non-stationary when several agents learn at the same time, experience replay does not help and is therefore not used.
5
Introduction
DDRQN experiments
To test DDRQN, two riddles that are well known as puzzles are set up as multi-agent learning problems and solved:
1. Hats Riddle: prisoners standing in a line must guess the colour of their own hats.
2. Switch Riddle: prisoners must work out whether every one of them has visited the room containing the switch.
These environments need no convolution for perception, but because of partial observability a recurrent neural network is required to process the complex sequences involved.
Under partial observability the agents cannot observe each other's state, so the optimal policy depends on information passed between agents. Since no communication protocol exists beforehand, a protocol coordinated through RL has to be developed by the agents themselves.
The results beat the baseline methods!!
This is presented as the first paper to successfully learn a communication protocol with RL, and the experiments show that each of DDRQN's components is essential to that success.
6
Background : DQN
Experience Replay
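For reference, the standard DQN objective behind this slide, with target-network parameters \theta_i^- and transitions (s, u, r, s') sampled from the replay memory D, can be written as

L_i(\theta_i) = \mathbb{E}_{(s,u,r,s') \sim D}\Big[\big(r + \gamma \max_{u'} Q(s', u'; \theta_i^-) - Q(s, u; \theta_i)\big)^2\Big]

Sampling minibatches from D decorrelates consecutive samples and stabilises learning; this is the experience replay mechanism that DDRQN later disables.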
7
Background : Independent DQN
Independent Q-Learning
DQN has been extended to cooperative multi-agent settings in which each agent m observes the global state s_t, selects its own individual action, and receives a team reward r_t shared among all agents.
Tampuu et al. combine DQN with Independent Q-Learning applied to this setting, so that each agent independently and simultaneously learns its own Q-function.
However, Independent Q-Learning can lead to convergence problems (while one agent is learning, the environment appears non-stationary to the other agents).
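For reference, a standard way to write the Independent Q-Learning objective: each agent m fits its own Q-function against its own target, treating the other agents simply as part of the environment,

y^m = r + \gamma \max_{u'} Q^m(s', u'; \theta^{m-}), \qquad L(\theta^m) = \mathbb{E}\big[(y^m - Q^m(s, u^m; \theta^m))^2\big]

Because the other agents' policies change as they learn, the transitions and rewards seen by agent m drift over time, which is exactly the non-stationarity mentioned above.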
8
Background : DRQN
DRQN
Both DQN and Independent DQN assume full observability: the agent receives the true state s_t as input. By contrast, in a partially observable environment s_t is hidden and the agent receives only an observation o_t that is correlated with s_t but, in general, does not fully identify it.
Earlier work by Matthew Hausknecht addressed the single-agent case, solving partially observable environments with the Deep Recurrent Q-Network.
Instead of approximating Q(s, a) with a feed-forward network that keeps no internal state, DRQN approximates Q(o, a) with a recurrent neural network that maintains an internal state and aggregates observations over time.
At every time step, DRQN outputs both Q_t and the hidden state h_t. (A minimal sketch follows.)
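A minimal sketch of this idea in Python (assuming a PyTorch-style API; the layer sizes and the use of a GRU cell are illustrative choices, not the paper's architecture, which builds on an LSTM): the feed-forward Q-network is replaced by a recurrent cell whose hidden state carries the observation history, and each step returns both the Q-values and the next hidden state.

import torch
import torch.nn as nn

class DRQN(nn.Module):
    """Recurrent Q-network: (o_t, h_{t-1}) -> (Q_t, h_t)."""
    def __init__(self, obs_dim, n_actions, hidden_dim=64):
        super().__init__()
        self.encoder = nn.Linear(obs_dim, hidden_dim)    # embed the observation
        self.rnn = nn.GRUCell(hidden_dim, hidden_dim)    # hidden state aggregates the history
        self.q_head = nn.Linear(hidden_dim, n_actions)   # one Q-value per action

    def forward(self, obs, h_prev):
        x = torch.relu(self.encoder(obs))
        h = self.rnn(x, h_prev)            # h_t summarises o_1 .. o_t
        return self.q_head(h), h           # Q_t and h_t are both outputs at every step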
9
Background : Partially Observable Multi-Agent RL
Partially Observable Multi-Agent RL
This work considers settings in which multiple agents each have only partial observability. At every time step, each agent m receives its own observation o_{t}^{m} and maintains its own hidden state h_{t}^{m}.
Learning is assumed to be possible in a centralised fashion: each agent conditions only on its own history when acting, but during learning the agents are allowed to share parameters.
=> Centralised learning and decentralised policies.
Giving agents the ability to communicate is most valuable precisely when they are partially observable and information sharing matters, so this is the setting considered here.
10
DDRQN
The most straightforward way to apply DRL to a partially observable multi-agent setting is to combine DRQN with Independent Q-Learning. The paper calls this baseline the naïve method.
DDRQN is obtained by adding three modifications to this naïve method.
11
DDRQN
1. Last-action input
Supply each agent's action from the previous time step as an input at the next time step. Because exploration makes the policies stochastic, the action actually taken cannot be recovered from the observations alone; with this extra input the RNN can aggregate a history of both observations and actions.
2. Inter-agent weight sharing
Tie the weights of all the agents' networks, so that in effect only a single network is learned. The agents can still behave differently, because they receive different observations and each agent is given its own index m as an input, which lets it specialise. Weight sharing greatly reduces the number of parameters that must be learned, which in turn speeds up learning considerably. (A minimal sketch of modifications 1 and 2 follows this list.)
3. Disabling experience replay
Experience replay helps a single agent, but when multiple agents learn concurrently the environment becomes non-stationary from each agent's point of view, so stored experiences go stale and it is better not to use replay.
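A minimal sketch of modifications 1 and 2 (hypothetical layer sizes and names; the paper's actual networks use LSTMs and task-specific embeddings): a single module is shared by every agent, and each agent feeds in its own observation, its last action, and its index m, so the shared weights can still produce specialised behaviour.

import torch
import torch.nn as nn

class SharedAgentQNet(nn.Module):
    """One set of weights for all agents; the inputs say who is acting and what he just did."""
    def __init__(self, obs_dim, n_actions, n_agents, hidden_dim=64):
        super().__init__()
        self.action_emb = nn.Embedding(n_actions + 1, hidden_dim)  # +1 for "no previous action"
        self.agent_emb = nn.Embedding(n_agents, hidden_dim)        # agent index m (modification 2)
        self.obs_enc = nn.Linear(obs_dim, hidden_dim)
        self.rnn = nn.GRUCell(3 * hidden_dim, hidden_dim)
        self.q_head = nn.Linear(hidden_dim, n_actions)

    def forward(self, obs, last_action, agent_idx, h_prev):
        # modification 1: the previous action is an input on the next time step
        x = torch.cat([torch.relu(self.obs_enc(obs)),
                       self.action_emb(last_action),
                       self.agent_emb(agent_idx)], dim=-1)
        h = self.rnn(x, h_prev)
        return self.q_head(h), h

Because the same module is used by every agent, the number of parameters does not grow with the number of agents, which is what makes the faster learning mentioned above possible.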
12
DDRQN
The Q-function that is learned has the following form (written out below):
• because of weight sharing, the agent index m is one of the inputs,
• while the per-agent conditioning of the parameters is removed (the parameters are shared),
• the history is summarised by the recurrent hidden state rather than being fed in whole,
• and the Q-network estimates a value for each action.
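Written out (a reconstruction in the paper's notation, where o is the observation, h the recurrent hidden state, m the agent index, a the previous action and u the action being evaluated), the Q-function has the form

Q(o_t^m,\; h_{t-1}^m,\; m,\; a_{t-1}^m,\; u_t^m;\; \theta_i)

and, because of weight sharing, the parameters \theta_i carry no agent superscript.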
13
DDRQN
(The same Q-function and bullet points as on the previous slide, this time with the parts introduced by modifications 1 and 2, namely the last-action input and the agent index, highlighted.)
14
Multi-Agent Riddles : Hats Riddle
http://news.chosun.com/site/data/html_dir/2016/02/24/2016022402442.html
The Hats Riddle
An executioner lines up 100 prisoners single file and puts a red or a blue hat on each prisoner's head. Every prisoner can see the hats of the people in front of him in the line, but not his own hat nor the hats of anyone behind him.
Starting from the back of the line, the executioner asks each prisoner in turn the colour of his own hat. A prisoner who answers correctly is allowed to live; one who answers incorrectly is killed instantly and silently.
(For reference: everyone can hear the answers, but nobody is told whether any answer was correct.)
The night before, the prisoners may confer and agree on a strategy, i.e. fix the communication protocol that saves as many of them as possible (during the test itself nothing may be said other than the colour answers).
Each prisoner can then deduce his own hat colour from the hats he sees in front of him and the answers he has already heard from behind. (The well-known parity protocol is sketched below.)
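The well-known optimal strategy is parity-based; a small sketch of it (illustrative function name, not from the paper): the prisoner at the back announces the parity of the red hats he sees, and every later prisoner combines that parity with the hats in front of him and the answers he has already heard to deduce his own colour.

def hats_protocol(hats):
    """hats[0] is the prisoner at the back (he answers first); prisoner i sees hats[i+1:].
    Returns the answers given under the parity protocol."""
    answers = []
    # the first speaker encodes the parity of the red hats he can see: odd -> "red", even -> "blue"
    answers.append("red" if sum(h == "red" for h in hats[1:]) % 2 else "blue")
    for m in range(1, len(hats)):
        known_parity = 1 if answers[0] == "red" else 0        # parity of red hats among everyone but the first speaker
        behind_red = sum(a == "red" for a in answers[1:m])    # prisoners behind me (their answers are correct)
        ahead_red = sum(h == "red" for h in hats[m + 1:])     # hats I can see in front of me
        answers.append("red" if (behind_red + ahead_red) % 2 != known_parity else "blue")
    return answers

Under this protocol every prisoner except possibly the first speaker answers correctly, so at least n-1 of them are guaranteed to survive.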
15
Hats Riddle : Formalization
16
Hats Riddle : Network Architecture
17
Hats Riddle : Results
18
Hats Riddle : Emergent Strategy
19
Hats Riddle : Curriculum Learning
20
Multi-Agent Riddles : Switch Riddle
The Switch Riddle
One hundred prisoners have been newly brought to a prison. The warden tells them that from tomorrow each of them will be placed in an isolated cell, unable to communicate with one another. Each day the warden will pick one prisoner uniformly at random (with replacement) and place him in a central interrogation room containing only a light bulb with a toggle switch. The prisoner can observe the current state of the switch and, if he wishes, toggle it. He also has the option of announcing that he believes every prisoner has visited the interrogation room at some point. If the announcement is true, all prisoners are set free; if it is false, all prisoners are executed. The warden leaves, and the prisoners huddle together to discuss their fate: can they agree on a protocol that guarantees their freedom?
Many strategies are possible, but a well-known one designates one prisoner as the counter. Every other prisoner turns the switch on only once (the first time he finds it off), and only the counter ever turns it off. Once the counter has turned the switch off n-1 times, he can safely make the announcement. (A small simulation of this counter protocol follows.)
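A small simulation of the counter protocol just described (illustrative names; this is the classic hand-designed solution, not the strategy DDRQN learns): prisoner 0 acts as the counter, every other prisoner turns the switch on exactly once, and the counter announces after turning it off n-1 times.

import random

def counter_protocol(n_prisoners, max_days=100_000, seed=0):
    """Simulate the counter strategy; returns the day on which the counter safely announces."""
    rng = random.Random(seed)
    switch_on = False                       # assume the bulb starts in the "off" position
    count = 0                               # how many times the counter has found the switch on
    has_flipped = [False] * n_prisoners     # whether each non-counter prisoner has used his one flip
    for day in range(1, max_days + 1):
        visitor = rng.randrange(n_prisoners)        # one prisoner chosen uniformly at random each day
        if visitor == 0:                            # prisoner 0 is the counter
            if switch_on:
                switch_on = False
                count += 1
            if count == n_prisoners - 1:
                return day                          # everyone has provably visited the room
        elif not switch_on and not has_flipped[visitor]:
            switch_on = True                        # every other prisoner turns the switch on once
            has_flipped[visitor] = True
    return None

The announcement is always correct: the counter only increments his count when a distinct prisoner has turned the switch on, so reaching n-1 proves that all prisoners have visited.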
21
Switch Riddle : Formalization
22
Switch Riddle : Network Architecture
23
Switch Riddle : Results n=3
24
Looking at the results for n=3:
DDRQN is compared with the naïve approach, the hand-coded "tell on last day" strategy, and the optimal strategy; DDRQN's performance comes out superior.
Switch Riddle : Strategy n=3
25
Switch Riddle : Results n=4
26
Switch Riddle : No Switch
27
Switch Riddle : Ablation Experiments
28
Finish!!
29
Thank you