CLASSIFY IMAGES USING DEEP RESIDUAL NETWORK
-------------------------------------------
> Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun.
> Deep Residual Learning for Image Recognition.
> arXiv:1512.03385
DESCRIPTION
-----------
Problem type : multi-class classification
Inputs : 32 x 32 RGB image
Target : 10 classes
DATASET - キルミーベイベー (Kill Me Baby)
resnet-110
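The core idea of a deep residual network is that each block adds an identity shortcut around its weight layers, so the layers only have to learn the residual F(x) = H(x) - x rather than the full mapping H(x). A minimal sketch of that structure (a toy fully-connected block, not the report's actual convolutional ResNet-110 architecture):

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, w1, w2):
    """Toy residual block: y = ReLU(W2 @ ReLU(W1 @ x) + x).

    The residual branch computes F(x); the identity shortcut adds x
    back, so the block only needs to model the residual.
    """
    f = w2 @ relu(w1 @ x)   # residual branch F(x)
    return relu(f + x)      # identity shortcut, then activation

# With zero weights the branch contributes nothing and the block
# collapses to ReLU(x) -- identity mappings are trivially representable,
# which is why very deep stacks of such blocks remain trainable.
x = np.array([1.0, -2.0, 3.0])
w = np.zeros((3, 3))
print(residual_block(x, w, w))  # [1. 0. 3.]
```

Real ResNet blocks use 3x3 convolutions with batch normalization instead of the plain matrix products above, but the shortcut-plus-residual pattern is the same.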
Dataset : Training
• Number of data : 514
• Variable : x (image)
  • Type : Image
  • Shape : 3, 128, 128
• Variable : y (label)
  • Type : Scalar
Dataset : Examples of variable x in "Training"
Dataset : Validation
• Number of data : 58
• Variable : x (image)
  • Type : Image
  • Shape : 3, 128, 128
• Variable : y (label)
  • Type : Scalar
Dataset : Examples of variable x in "Validation"
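Neural Network Console loads datasets from CSV files whose header names each variable and its type, matching the x (image) and y (label) variables above. A minimal sketch of generating such an index file, assuming the standard `x:image,y:label` header convention; the image file names below are hypothetical placeholders:

```python
import csv
import io

# Each row points at one sample: a relative image path for variable x
# and an integer class index for variable y.
rows = [
    ("images/class0_000.png", 0),
    ("images/class1_000.png", 1),
]

buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["x:image", "y:label"])  # variable name : variable type
writer.writerows(rows)
print(buf.getvalue())
```

In practice the training and validation splits above (514 and 58 samples) would each be described by one such CSV.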
Network Architecture : Main

Type             Value
---------------  -------------
Output           67,340,288
CostParameter    1,762,548
CostAdd          25,100,296
CostMultiply     16,778,240
CostMultiplyAdd  4,083,945,472
CostDivision     4
CostExp          4
CostIf           16,712,704
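CostMultiplyAdd, the dominant entry above, counts multiply-accumulate (MAC) operations, which for convolutional networks come almost entirely from the convolution layers. A sketch of how such a count is estimated per layer (the layer dimensions below are illustrative assumptions, not values read from this report's network):

```python
def conv_macs(in_c, out_c, k, out_h, out_w):
    """Multiply-accumulate count of one k x k convolution layer:
    each of the out_h * out_w * out_c output elements needs
    in_c * k * k multiply-adds."""
    return out_h * out_w * out_c * in_c * k * k

# Example: a 3x3 convolution from 3 to 16 channels producing a
# 128 x 128 feature map (plausible for a first layer on this input).
print(conv_macs(in_c=3, out_c=16, k=3, out_h=128, out_w=128))  # 7077888
```

Summing this quantity over every layer yields the network-wide total reported in the table (about 4.08 G MACs per forward pass here).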
Training Procedure : Optimizer
• Optimize network "Main" using the "Training" dataset.
• Batch size : 32
• Solver : Nesterov
• Learning rate : 0.05, decayed with cosine annealing.
• Momentum : 0.9
• Weight decay : 0.0005
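Cosine annealing decays the learning rate from its base value to zero along a half cosine wave over the training run, giving large steps early and a smooth approach to zero at the end. A sketch of the schedule (the step count of 100 is an arbitrary illustration, not this experiment's actual iteration count):

```python
import math

def cosine_annealed_lr(base_lr, step, total_steps):
    """Cosine annealing without restarts:
    lr(step) = 0.5 * base_lr * (1 + cos(pi * step / total_steps))."""
    return 0.5 * base_lr * (1.0 + math.cos(math.pi * step / total_steps))

base_lr, total = 0.05, 100
print(cosine_annealed_lr(base_lr, 0, total))    # 0.05  (start: full rate)
print(cosine_annealed_lr(base_lr, 50, total))   # 0.025 (midpoint: half)
print(cosine_annealed_lr(base_lr, 100, total))  # 0.0   (end: fully decayed)
```

The solver would apply this value before each update, together with the Nesterov momentum of 0.9 and weight decay of 0.0005 listed above.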
Experimental Result : Learning Curve
Experimental Result : Evaluation
• Evaluate network "MainRuntime" using the "Validation" dataset.
• Variable : y
• Accuracy : 0.8793
• Avg. Precision : 0.9243
• Avg. Recall : 0.8623
• Avg. F-Measure : 0.8812
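For a multi-class problem these averaged metrics are presumably macro averages: precision, recall, and F-measure are computed per class from the confusion matrix and then averaged. A sketch of that computation on a tiny made-up 2-class confusion matrix (the counts below are not the report's data):

```python
import numpy as np

def macro_metrics(cm):
    """Macro-averaged precision / recall / F-measure from a confusion
    matrix cm where cm[i, j] counts true class i predicted as class j."""
    tp = np.diag(cm).astype(float)
    precision = tp / cm.sum(axis=0)  # per predicted class (columns)
    recall    = tp / cm.sum(axis=1)  # per true class (rows)
    f1 = 2 * precision * recall / (precision + recall)
    return precision.mean(), recall.mean(), f1.mean()

cm = np.array([[8, 2],
               [1, 9]])
p, r, f = macro_metrics(cm)
print(round(p, 4), round(r, 4), round(f, 4))  # 0.8535 0.85 0.8496
```

Note that the macro F-measure averages each class's own F-score; it generally differs from the F-score of the averaged precision and recall, which is consistent with the figures reported above.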
References
• Sony Corporation. Neural Network Console : Not just train and evaluate. You can design neural
  networks with a fast and intuitive GUI. https://dl.sony.com/
• Sony Corporation. Neural Network Libraries : An open source software to make research,
  development and implementation of neural networks more efficient. https://nnabla.org/
• BatchNormalization - Ioffe and Szegedy. Batch Normalization: Accelerating Deep Network
  Training by Reducing Internal Covariate Shift. https://arxiv.org/abs/1502.03167
• Convolution - Chen et al. DeepLab: Semantic Image Segmentation with Deep Convolutional
  Nets, Atrous Convolution, and Fully Connected CRFs. https://arxiv.org/abs/1606.00915;
  Yu et al. Multi-Scale Context Aggregation by Dilated Convolutions.
  https://arxiv.org/abs/1511.07122
• ReLU - Vinod Nair and Geoffrey E. Hinton. Rectified Linear Units Improve Restricted Boltzmann
  Machines.
  http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.165.6419&rep=rep1&type=pdf
• Nesterov - Matthew D. Zeiler. ADADELTA: An Adaptive Learning Rate Method.
  https://arxiv.org/abs/1212.5701

Report : resnet-110 キャラクター分類テスト (character classification test)
