Slideshows by User: baiduusa
SlideShare feed for slideshows by user baiduusa

Persistent RNNs: Stashing Recurrent Weights On-Chip
/slideshow/persistent-rnns-stashing-recurrent-weights-onchip/63305004
At SVAIL, our mission is to create AI technology that lets us have a significant impact on hundreds of millions of people. We believe a good way to do this is to improve the accuracy of speech recognition by scaling up deep learning algorithms on larger datasets than have been used in the past. These algorithms are so compute intensive that the memory capacity and computational throughput of our systems limit the amount of data and the size of the neural network we can train. A big challenge, then, is figuring out how to run deep learning algorithms more efficiently; doing so would let us train bigger models on bigger datasets, which so far has translated into better speech recognition accuracy. Here we discuss a new technique for speeding up the training of deep recurrent neural networks.

Published Tue, 21 Jun 2016 18:11:33 GMT by baiduusa@slideshare.net (baiduusa)
Persistent RNNs: Stashing Recurrent Weights On-Chip from Baidu USA Research
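The abstract above only names the technique, so the sketch below shows the recurrence it targets. The key observation is that the recurrent weight matrix is read unchanged at every timestep, which is why keeping it stashed in on-chip storage, rather than reloading it from off-chip memory at each step, can speed up training. This is a minimal illustrative NumPy example of the general idea, not code from the slides; every name and size in it is an assumption.

# Minimal sketch of the vanilla RNN recurrence the talk is concerned with.
# Names (W_xh, W_hh, hidden_size, ...) are illustrative, not taken from the slides.
import numpy as np

def rnn_forward(x, W_xh, W_hh, b):
    """Vanilla RNN forward pass: h_t = tanh(W_xh @ x_t + W_hh @ h_{t-1} + b).

    W_hh is reused at every timestep. A conventional GPU implementation
    reloads it from off-chip memory once per step; the "persistent" idea is
    to keep it resident on-chip for the whole sequence, which matters when
    the per-GPU mini-batch is small and weight reloads dominate runtime.
    """
    seq_len = x.shape[0]
    hidden_size = W_hh.shape[0]
    h = np.zeros(hidden_size)
    outputs = []
    for t in range(seq_len):  # the same W_hh is touched on every iteration
        h = np.tanh(W_xh @ x[t] + W_hh @ h + b)
        outputs.append(h)
    return np.stack(outputs)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    seq_len, input_size, hidden_size = 8, 4, 16
    x = rng.standard_normal((seq_len, input_size))
    W_xh = rng.standard_normal((hidden_size, input_size)) * 0.1
    W_hh = rng.standard_normal((hidden_size, hidden_size)) * 0.1
    b = np.zeros(hidden_size)
    print(rnn_forward(x, W_xh, W_hh, b).shape)  # (8, 16)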
HPC Advisory Council Stanford Conference 2016
/slideshow/hpc-advisory-council-stanford-conference-2016/58725348
Scaling Deep Learning

Published Thu, 25 Feb 2016 19:28:31 GMT by baiduusa@slideshare.net (baiduusa)
HPC Advisory Council Stanford Conference 2016 from Baidu USA Research