Apache Flume is a tool for continuously collecting and pushing streaming data, such as log events, into a central store such as a data lake.
In this presentation, we will cover the basics of Apache Flume as well as advanced hands-on exercises.
As part of this session, we will learn how to ingest data into HDFS from a network service.
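As a rough sketch of that pattern, a minimal Flume agent configuration might look like the following. It wires a netcat source (listening on a TCP port for newline-terminated events) through a memory channel into an HDFS sink. The agent name (a1), the port, and the HDFS path are illustrative assumptions, not values from this session:

    # Name this agent's components (names are illustrative)
    a1.sources = r1
    a1.channels = c1
    a1.sinks = k1

    # Netcat source: accepts newline-terminated events over TCP
    a1.sources.r1.type = netcat
    a1.sources.r1.bind = localhost
    a1.sources.r1.port = 44444

    # Memory channel: buffers events between source and sink
    a1.channels.c1.type = memory
    a1.channels.c1.capacity = 1000

    # HDFS sink: writes events to HDFS as plain text (path is an assumption)
    a1.sinks.k1.type = hdfs
    a1.sinks.k1.hdfs.path = hdfs:///user/flume/netcat-events
    a1.sinks.k1.hdfs.fileType = DataStream
    a1.sinks.k1.hdfs.writeFormat = Text

    # Bind the source and sink to the channel
    a1.sources.r1.channels = c1
    a1.sinks.k1.channel = c1

The agent could then be started with the standard flume-ng launcher, for example: flume-ng agent --conf conf --conf-file netcat-hdfs.conf --name a1. Lines sent to the port (e.g. via nc localhost 44444) should then appear as files under the configured HDFS path.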
Apache Flume is an integral part of the Apache Hadoop ecosystem.
To learn more about Hadoop and Spark, please join our course at https://cloudxlab.com/course/specialization/3/big-data-with-hadoop-and-spark