This document summarizes a research paper that proposes Stable Rank Normalization (SRN), a technique for improving generalization in neural networks and GANs. The stable rank of a weight matrix, the ratio of its squared Frobenius norm to its squared spectral norm, is a soft measure of how many effective dimensions the matrix has. Recent generalization bounds for neural networks depend on both the network's Lipschitz constant and the stable ranks of its weight matrices; SRN normalizes the stable rank of each weight matrix toward a target value while preserving its spectral norm, so the Lipschitz bound is unchanged and the overall bound tightens, making the network less sensitive to perturbations of its parameters. The paper reports experiments showing that stable rank normalization improves generalization on image classification tasks and improves GAN training, without sacrificing training performance.
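
As a concrete illustration, the sketch below (NumPy, not taken from the paper's code) computes the stable rank of a matrix and shows one way to reduce it to a target value while leaving the spectral norm untouched: keep the top singular direction as-is and shrink the residual. The function names `stable_rank` and `stable_rank_normalize` and the closed-form choice of the shrinkage factor `gamma` are illustrative assumptions; the paper's layer-wise implementation may differ (for example, by using power iteration instead of a full SVD).

```python
import numpy as np

def stable_rank(W):
    """Stable rank: squared Frobenius norm divided by squared spectral norm."""
    fro2 = np.sum(W ** 2)
    spec = np.linalg.norm(W, 2)  # largest singular value
    return fro2 / spec ** 2

def stable_rank_normalize(W, target_rank):
    """Shrink the part of W outside its top singular direction so the stable
    rank drops to `target_rank`, leaving the spectral norm unchanged.
    (Illustrative sketch, not the paper's reference implementation.)"""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    top = s[0] * np.outer(U[:, 0], Vt[0])   # rank-1 spectral component
    residual = W - top                       # orthogonal to `top` in Frobenius sense
    res_fro2 = np.sum(residual ** 2)
    if res_fro2 == 0 or stable_rank(W) <= target_rank:
        return W                             # already at or below the target
    # Pick gamma so that (s1^2 + gamma^2 * ||residual||_F^2) / s1^2 == target_rank.
    gamma = np.sqrt((target_rank - 1) * s[0] ** 2 / res_fro2)
    return top + gamma * residual

# Example: push a random 64x64 matrix down to stable rank ~8.
rng = np.random.default_rng(0)
W = rng.standard_normal((64, 64))
W_srn = stable_rank_normalize(W, target_rank=8.0)
print(stable_rank(W), stable_rank(W_srn))    # original value (~16), then ~8.0
print(np.linalg.norm(W, 2), np.linalg.norm(W_srn, 2))  # spectral norm preserved
```

Because the residual is orthogonal to the rank-1 spectral component, scaling it by `gamma` changes the Frobenius norm but not the top singular value, which is what lets the stable rank be dialed down independently of the Lipschitz bound.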