A modern business operates 24/7 and generates data continuously. Shouldn't we process it continuously too? A rich ecosystem of real-time data-processing frameworks, tools, and systems has formed around Apache Kafka, allowing data to be processed continuously as it occurs. This talk will introduce Kafka and explain why it has become the de facto standard for streaming data. It draws on practical experience building stream-processing applications to discuss the differences between architectures and the challenges each presents. It outlines Kafka's Streams API and explains how it helps tame some of the complexity of real-time architectures.