Deep Learning
For junior developers
https://factordaily.com/india-government-ai-analytics/
Personal Computer
Internet
Mobile
Next?
AI
Machine Learning
Deep Learning
Spam Filter
• Rule-based -> explicit rules
• Train on data
• Neural Network
• Deep, Wide
Linear Regression
X = [1, 2, 3]
Y = [1, 2, 3]
If X = 4, then Y = ?
THE MATHEMATICAL
WAY OF THINKING
Science, 15 Nov 1940
Y = X * W + B
W: weight, B: bias
Machine Learning
• Hypothesis
• Cost / loss
• Gradient descent
• Train
• Test
Hypothesis
H(X) = X*W + B
X = [1, 2, 3] Y = [1, 2, 3]
W=2 , B = 0
H(X) = [H(1), H(2), H(3)] = [2, 4, 6]
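A minimal sketch of this slide in plain Python (the function name hypothesis is just for illustration):

```python
# Hypothesis H(X) = X*W + B, applied element-wise to the training inputs.
# With W = 2 and B = 0 this reproduces the values on the slide: [2, 4, 6].
X = [1, 2, 3]
Y = [1, 2, 3]

def hypothesis(x, W, B):
    return x * W + B

W, B = 2, 0
H = [hypothesis(x, W, B) for x in X]
print(H)  # [2, 4, 6]
```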
Cost / Loss function
H(X) = [H(1), H(2), H(3)] = [2, 4, 6]
Y = [1, 2, 3]
Mean squared error: the average of the squared differences between H(X) and Y
Cost / Loss function
X = [1, 2, 3] Y = [1, 2, 3]
W = 2, B = 0    H = [2, 4, 6]      cost(2) ≈ 4.667
W = 1.5, B = 0  H = [1.5, 3, 4.5]  cost(1.5) ≈ 1.167
W = 1, B = 0    H = [1, 2, 3]      cost(1) = 0
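A minimal sketch of this cost function in plain Python, assuming the mean-squared-error form used above (B fixed at 0):

```python
# cost(W) = mean of (H(x) - y)^2 over the training pairs, with B fixed at 0.
X = [1, 2, 3]
Y = [1, 2, 3]

def cost(W, B=0):
    errors = [(x * W + B - y) ** 2 for x, y in zip(X, Y)]
    return sum(errors) / len(errors)

for W in (2, 1.5, 1):
    print(W, cost(W))  # ~4.667, ~1.167, 0.0
```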
Cost / Loss function
X = [1, 2, 3] Y = [1, 2, 3] B = 0
Gradient Descent
W = 2, B = 0    H = [2, 4, 6]      cost(2) ≈ 4.667
W = 1.5, B = 0  H = [1.5, 3, 4.5]  cost(1.5) ≈ 1.167
W = 1, B = 0    H = [1, 2, 3]      cost(1) = 0
Minimize Cost function
Gradient Descent
[Plot: cost(w) vs. w, marking w = 2, w = 1.5, and the minimum at w = 1]
Gradient Descent
[Plot: cost(w) vs. w, showing gradient descent steps from w = 2 through w = 1.5 and w = 1.2 toward the minimum at w = 1]
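A minimal gradient descent sketch for this one-parameter case (B fixed at 0). The update W := W - learning_rate * d(cost)/dW follows the idea on the slides; the learning rate and step count are illustrative assumptions, not values from the deck:

```python
# Gradient descent on cost(W) = mean((x*W - y)^2), with B fixed at 0.
# d(cost)/dW = mean(2 * (x*W - y) * x)
X = [1, 2, 3]
Y = [1, 2, 3]

def gradient(W):
    return sum(2 * (x * W - y) * x for x, y in zip(X, Y)) / len(X)

W = 2.0                  # start where the plot starts
learning_rate = 0.1      # assumed value, not from the slides
for step in range(10):
    W -= learning_rate * gradient(W)
    print(step, round(W, 4))
# W moves 2.0 -> ~1.07 -> ... and settles near the minimum at W = 1.
```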
Train & Test
X = [1, 2, 3] Y = [1, 2, 3]
Hypothesis
Cost/Loss
Gradient Descent
If X = 4, do we get Y = 4?
W, B
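Putting the pieces together, a minimal end-to-end sketch: train W and B with gradient descent, then test with X = 4. The hyperparameters (learning rate, number of steps) are illustrative assumptions:

```python
# Train: fit W and B to X, Y by gradient descent on the mean squared error.
# Test: predict Y for the unseen input X = 4.
X = [1, 2, 3]
Y = [1, 2, 3]

W, B = 0.0, 0.0
learning_rate = 0.05     # assumed value, not from the slides

for step in range(2000):
    grad_W = sum(2 * (x * W + B - y) * x for x, y in zip(X, Y)) / len(X)
    grad_B = sum(2 * (x * W + B - y) for x, y in zip(X, Y)) / len(X)
    W -= learning_rate * grad_W
    B -= learning_rate * grad_B

print(W, B)        # approaches W = 1, B = 0
print(4 * W + B)   # test: the prediction for X = 4 is close to 4
```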
Thank you
