This document summarizes the differences between 20th and 21st century data processing approaches. The 20th century approach relied on single machines, one-to-one communication, and fixed schemas and encodings; the 21st century approach uses distributed processing, publish-subscribe messaging, replication for fault tolerance, and schema management with evolvable encodings. Further work includes investigating architectures for reprocessing historic data, incorporating standards such as Sensor Web Enablement and OM-JSON, deploying to mobile and remote platforms, and evaluating Apache NiFi.
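To make the contrast concrete, the following is a minimal sketch of publish-subscribe delivery combined with an evolvable record encoding. It is not taken from the source document: the broker, topic, field names, and schema versions are illustrative assumptions, and a real deployment would use a messaging system and schema registry rather than this in-memory toy.

```python
# Minimal in-memory sketch of publish-subscribe with an evolvable record
# schema. All names here (Broker, decode_reading, field names) are
# illustrative assumptions, not identifiers from the source document.
from collections import defaultdict
from typing import Callable, Dict, List
import json


class Broker:
    """Toy pub-sub broker: producers publish to a topic, subscribers receive."""

    def __init__(self) -> None:
        self._subscribers: Dict[str, List[Callable[[dict], None]]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[dict], None]) -> None:
        self._subscribers[topic].append(handler)

    def publish(self, topic: str, payload: bytes) -> None:
        record = json.loads(payload)
        for handler in self._subscribers[topic]:
            handler(record)


def decode_reading(record: dict) -> dict:
    """Handle two schema versions: v1 carries 'temp_c'; v2 renamed it to
    'temperature' and added an optional 'unit' field, so records from
    older producers still decode correctly."""
    version = record.get("schema_version", 1)
    if version == 1:
        return {"sensor": record["sensor"], "temperature": record["temp_c"], "unit": "C"}
    return {
        "sensor": record["sensor"],
        "temperature": record["temperature"],
        "unit": record.get("unit", "C"),
    }


if __name__ == "__main__":
    broker = Broker()
    broker.subscribe("observations", lambda r: print(decode_reading(r)))

    # Old and new producers publish to the same topic; the single consumer
    # decodes both because the schema evolved compatibly.
    broker.publish("observations", json.dumps(
        {"schema_version": 1, "sensor": "s1", "temp_c": 21.5}).encode())
    broker.publish("observations", json.dumps(
        {"schema_version": 2, "sensor": "s2", "temperature": 22.0, "unit": "C"}).encode())
```

The design point the sketch illustrates is that producers and consumers are decoupled twice: by the topic (publish-subscribe rather than one-to-one communication) and by the versioned encoding (the schema can evolve without breaking existing readers).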