SlideShare feed for slideshows by user Alpktem / Mon, 11 Feb 2019 19:55:16 GMT
SLSP 2017 presentation - Attentional Parallel RNNs for Generating Punctuation in Transcribed Speech
/slideshow/slsp-2017-presentation-attentional-parallel-rnns-for-generating-punctuation-in-transcribed-speech/131380130
These are the presentation slides for the paper "Attentional Parallel RNNs for Generating Punctuation in Transcribed Speech", presented at the 5th International Conference on Statistical Language and Speech Processing (SLSP 2017).

Abstract: Until very recently, the generation of punctuation marks for automatic speech recognition (ASR) output has been done mostly by looking at the syntactic structure of the recognized utterances. Prosodic cues such as breaks, speech rate, and pitch intonation, which influence the placement of punctuation marks in speech transcripts, have seldom been used. We propose a method that uses recurrent neural networks, taking both prosodic and lexical information into account, to predict punctuation marks for raw ASR output. Our experiments show that an attention mechanism over parallel sequences of prosodic cues aligned with the transcribed speech improves the accuracy of punctuation generation.
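As a rough illustration of the kind of architecture the abstract describes (not the authors' exact model), here is a minimal PyTorch sketch of two parallel recurrent encoders, one over word embeddings and one over word-aligned prosodic features, where each word-level state attends over the prosodic sequence before a punctuation label is predicted for the slot after that word. All layer sizes, the prosodic feature set, and the label inventory are assumptions made for the example.

# Minimal sketch (not the paper's exact model): parallel RNNs over lexical and
# prosodic inputs with additive attention over the prosodic sequence.
# Hidden sizes, vocabulary size, prosodic feature count, and the label set
# {NONE, COMMA, PERIOD, QUESTION} are illustrative assumptions.
import torch
import torch.nn as nn

class ParallelAttnPunctuator(nn.Module):
    def __init__(self, vocab_size=10000, emb_dim=128, pros_dim=3,
                 hidden=128, n_labels=4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.word_rnn = nn.GRU(emb_dim, hidden, batch_first=True)
        self.pros_rnn = nn.GRU(pros_dim, hidden, batch_first=True)
        # additive (Bahdanau-style) attention over the prosodic encoder states
        self.att_w = nn.Linear(hidden, hidden, bias=False)
        self.att_u = nn.Linear(hidden, hidden, bias=False)
        self.att_v = nn.Linear(hidden, 1, bias=False)
        self.out = nn.Linear(2 * hidden, n_labels)

    def forward(self, words, prosody):
        # words:   (batch, T) word ids
        # prosody: (batch, T, pros_dim), e.g. pause duration, speech rate, f0
        h_w, _ = self.word_rnn(self.embed(words))      # (batch, T, hidden)
        h_p, _ = self.pros_rnn(prosody)                # (batch, T, hidden)
        # scores[b, i, j]: relevance of prosodic step j to word step i
        scores = self.att_v(torch.tanh(
            self.att_w(h_w).unsqueeze(2) + self.att_u(h_p).unsqueeze(1)
        )).squeeze(-1)                                 # (batch, T, T)
        alpha = torch.softmax(scores, dim=-1)
        context = alpha @ h_p                          # (batch, T, hidden)
        return self.out(torch.cat([h_w, context], dim=-1))  # per-word logits

# toy usage with random inputs
model = ParallelAttnPunctuator()
words = torch.randint(0, 10000, (2, 12))
prosody = torch.randn(2, 12, 3)
logits = model(words, prosody)   # (2, 12, 4): punctuation logits after each word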

SLSP 2017 presentation - Attentional Parallel RNNs for Generating Punctuation in Transcribed Speech from Alp Öktem