CycleGAN is a framework that uses generative adversarial networks to perform image-to-image translation without paired training examples. It consists of two generators, one mapping images from domain A to domain B and one mapping back, along with two discriminators that classify images in each domain as real or fake. The generators are trained so that their outputs are classified as real by the discriminators, while a cycle-consistency loss requires that an image translated to the other domain and back reconstructs the original. The authors demonstrate the framework on unpaired tasks such as translating horses to zebras, photographs to paintings, and summer scenes to winter scenes.
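
To make the training objective concrete, here is a minimal sketch of the generator-side loss in PyTorch. The network modules G_AB, G_BA, D_A, D_B are assumed placeholders (defined elsewhere), and the least-squares adversarial loss plus the cycle weight lambda_cyc = 10.0 follow the published paper's choices; everything else (names, structure) is illustrative rather than the authors' actual code.

```python
import torch
import torch.nn as nn

adv_loss = nn.MSELoss()   # least-squares GAN loss
cyc_loss = nn.L1Loss()    # L1 cycle-consistency loss
lambda_cyc = 10.0         # weight on the cycle-consistency term (per the paper)

def generator_loss(G_AB, G_BA, D_A, D_B, real_A, real_B):
    """Adversarial + cycle-consistency loss for one batch of unpaired images.

    G_AB, G_BA: generators mapping A -> B and B -> A (hypothetical modules).
    D_A, D_B:   discriminators for domains A and B (hypothetical modules).
    """
    # Translate each batch into the opposite domain.
    fake_B = G_AB(real_A)
    fake_A = G_BA(real_B)

    # Adversarial term: generators try to make discriminators output "real" (1).
    pred_fake_B = D_B(fake_B)
    pred_fake_A = D_A(fake_A)
    loss_adv = (adv_loss(pred_fake_B, torch.ones_like(pred_fake_B))
                + adv_loss(pred_fake_A, torch.ones_like(pred_fake_A)))

    # Cycle-consistency term: translating back should reconstruct the input.
    rec_A = G_BA(fake_B)   # A -> B -> A
    rec_B = G_AB(fake_A)   # B -> A -> B
    loss_cyc = cyc_loss(rec_A, real_A) + cyc_loss(rec_B, real_B)

    return loss_adv + lambda_cyc * loss_cyc
```

In a training loop this loss would be backpropagated through both generators, with the discriminators updated separately on real images and the generators' (detached) fakes; the cycle term is what lets the model learn a coherent mapping without any paired examples.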