This document summarizes and compares several techniques for making RNNs faster, smaller, and more accurate, with an emphasis on speech recognition:

1) FastGRNN is a compact gated RNN that shrinks its weight matrices through low-rank factorization, sparsity, and quantization, making it faster and smaller than a standard GRU (a sketch of the low-rank cell follows below).

2) Li-GRU (Light GRU) simplifies the GRU for speech recognition by removing the reset gate and replacing the tanh candidate activation with ReLU (see the cell sketch below).

3) AWD-LSTM combats overfitting in LSTMs by combining weight dropout (DropConnect on the recurrent weights), averaged SGD, and activation regularization (a sketch of the regularized loss follows below).

Overall, the document evaluates different approaches to making RNNs more efficient and effective for speech tasks.
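To make the FastGRNN compression idea concrete, here is a minimal PyTorch sketch of a gated cell whose input and recurrent matrices are replaced by low-rank factors. The class name, the rank hyperparameter, and the sigmoid parameterization of the blending scalars are illustrative assumptions; the paper additionally sparsifies and byte-quantizes the factors, which is omitted here.

```python
import torch
import torch.nn as nn

class LowRankFastGRNNCell(nn.Module):
    """Sketch of a FastGRNN-style cell with low-rank weights W ~= W2 @ W1.

    The gate and candidate share the same input/recurrent projections, and
    the new state blends the candidate with the previous state via learned
    scalars zeta and nu, following the FastGRNN update rule.
    """

    def __init__(self, input_size: int, hidden_size: int, rank: int):
        super().__init__()
        # Full matrices would be (hidden, input) and (hidden, hidden); the
        # factored versions cut parameters when rank << min(input, hidden).
        self.w1 = nn.Linear(input_size, rank, bias=False)
        self.w2 = nn.Linear(rank, hidden_size, bias=False)
        self.u1 = nn.Linear(hidden_size, rank, bias=False)
        self.u2 = nn.Linear(rank, hidden_size, bias=False)
        self.bias_gate = nn.Parameter(torch.zeros(hidden_size))
        self.bias_update = nn.Parameter(torch.zeros(hidden_size))
        # Raw scalars squashed through sigmoid so 0 <= zeta, nu <= 1.
        self.zeta = nn.Parameter(torch.tensor(1.0))
        self.nu = nn.Parameter(torch.tensor(-4.0))

    def forward(self, x: torch.Tensor, h: torch.Tensor) -> torch.Tensor:
        # Shared low-rank pre-activation for the gate and the candidate.
        pre = self.w2(self.w1(x)) + self.u2(self.u1(h))
        z = torch.sigmoid(pre + self.bias_gate)       # forget-style gate
        h_tilde = torch.tanh(pre + self.bias_update)  # candidate state
        # FastGRNN update: scaled residual blend of candidate and old state.
        zeta, nu = torch.sigmoid(self.zeta), torch.sigmoid(self.nu)
        return (zeta * (1.0 - z) + nu) * h_tilde + z * h
```

With rank well below the hidden size, the four small factors hold far fewer parameters than two dense matrices, which is where the speed and size gains come from.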
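The Li-GRU simplification is easiest to see next to the standard GRU equations; the following sketch (assuming PyTorch, with an illustrative class name) keeps only the update gate and uses a ReLU candidate. The original paper also batch-normalizes the input projections, a detail omitted here for brevity.

```python
import torch
import torch.nn as nn

class LiGRUCell(nn.Module):
    """Sketch of a Li-GRU cell: a GRU without the reset gate, using ReLU.

    Compared with a standard GRU, the reset gate and its two weight
    matrices are gone, and the candidate activation is ReLU instead of
    tanh, which cuts parameters and per-step computation.
    """

    def __init__(self, input_size: int, hidden_size: int):
        super().__init__()
        self.wz = nn.Linear(input_size, hidden_size)
        self.uz = nn.Linear(hidden_size, hidden_size, bias=False)
        self.wh = nn.Linear(input_size, hidden_size)
        self.uh = nn.Linear(hidden_size, hidden_size, bias=False)

    def forward(self, x: torch.Tensor, h: torch.Tensor) -> torch.Tensor:
        z = torch.sigmoid(self.wz(x) + self.uz(h))    # update gate
        h_tilde = torch.relu(self.wh(x) + self.uh(h)) # candidate, no reset gate
        return z * h + (1.0 - z) * h_tilde            # interpolate old/new state
```

Dropping the reset gate removes a third of the gate parameters, and the unbounded ReLU candidate tends to train faster than tanh on long acoustic sequences.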
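For AWD-LSTM, the regularization side can be sketched as extra penalty terms added to the task loss, plus a switch to averaged SGD. This is a minimal PyTorch sketch under stated assumptions: the function name is hypothetical, the alpha/beta defaults are illustrative rather than the paper's tuned values, and the DropConnect machinery on the recurrent weights is left out.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def awd_style_loss(logits: torch.Tensor,
                   targets: torch.Tensor,
                   hidden_seq: torch.Tensor,
                   alpha: float = 2.0,
                   beta: float = 1.0) -> torch.Tensor:
    """Task loss plus AWD-LSTM-style activation regularizers.

    hidden_seq holds the LSTM outputs, shape (seq_len, batch, hidden).
    alpha scales activation regularization (AR, discourages large
    activations); beta scales temporal AR (TAR, discourages abrupt
    hidden-state changes between adjacent time steps).
    """
    ce = F.cross_entropy(logits, targets)
    ar = alpha * hidden_seq.pow(2).mean()                          # AR term
    tar = beta * (hidden_seq[1:] - hidden_seq[:-1]).pow(2).mean()  # TAR term
    return ce + ar + tar

# Averaged SGD: AWD-LSTM switches from plain SGD to ASGD once validation
# performance stops improving (NT-ASGD); enabling it from the start is a
# simplification, and `model` plus the learning rate are assumptions here.
# optimizer = torch.optim.ASGD(model.parameters(), lr=30.0, t0=0, lambd=0.0)
```

In the full method these penalties are combined with DropConnect masks on the hidden-to-hidden LSTM weights, so all three ingredients named above act on the same model during training.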