Pipelining a cache improves throughput rather than the latency of any single access. It breaks the cache access into multiple stages, allowing a new request to enter the pipeline each cycle even while earlier requests are still in flight. For example, a three-stage pipeline might: 1) read the tag and valid bit, 2) compare the tag to determine a hit and start the data read, and 3) finish the data read and return the value to the CPU. Because each stage does less work, the cache can run at a higher clock frequency, which increases bandwidth. The cost is added control complexity and a longer latency in cycles between issuing a request and using the data, which in turn raises the penalty of branch mispredictions and load-dependent stalls. Overall, pipelining trades a slightly higher hit time for substantially higher cache bandwidth, with the goal of improving average memory access time.
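The throughput benefit of the three stages above can be sketched with a simple cycle count. This is a hypothetical model, not a real cache implementation: it assumes an unpipelined cache occupies the whole array for its full 3-cycle latency, while the pipelined version accepts one new request per cycle.

```python
def unpipelined_cycles(n_requests: int, latency: int = 3) -> int:
    # Each request occupies the cache for the full access latency,
    # so requests are strictly serialized.
    return n_requests * latency

def pipelined_cycles(n_requests: int, stages: int = 3) -> int:
    # Stage 1: read tag + valid bit.
    # Stage 2: compare tag (hit?) and start the data read.
    # Stage 3: finish the data read and return the value.
    # A new request enters stage 1 every cycle: the first result
    # appears after `stages` cycles, then one completes per cycle.
    return stages + (n_requests - 1)

print(unpipelined_cycles(100))  # 300 cycles
print(pipelined_cycles(100))    # 102 cycles
```

Note that the time from issue to data is still three cycles per request in the pipelined case; only the rate at which requests complete improves, which is exactly the bandwidth-versus-latency trade-off described above.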