1) The document analyzes the performance of MapReduce tasks in Hadoop for big data, examining how the number of bytes written and read by map and reduce tasks changes as the number of input files grows.
2) Hadoop uses HDFS for storage and MapReduce for processing large datasets across clusters. The experiment runs a word count application on an increasing number of input files to analyze task behavior.
3) The results show that the number of bytes written does not grow at the same rate as the number of files, because the reduce function merely combines the map outputs, aggregating counts for duplicate keys rather than producing proportionally more output.
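To illustrate why reduce output grows slower than the number of input files, here is a minimal sketch of the word count pattern in the MapReduce style. This is plain Python, not actual Hadoop code; the file contents and the `map_phase`/`reduce_phase` names are hypothetical, chosen only to show how the reduce step collapses duplicate keys from many map outputs.

```python
from collections import defaultdict

def map_phase(lines):
    """Map: emit a (word, 1) pair for every word in the input lines."""
    for line in lines:
        for word in line.split():
            yield (word.lower(), 1)

def reduce_phase(pairs):
    """Reduce: sum the counts for each word key."""
    counts = defaultdict(int)
    for word, count in pairs:
        counts[word] += count
    return dict(counts)

# Two hypothetical input "files" with overlapping vocabulary.
files = [
    ["big data big clusters"],
    ["big data hadoop"],
]

# Run map over every file, then reduce the combined intermediate output.
intermediate = [pair for f in files for pair in map_phase(f)]
result = reduce_phase(intermediate)
print(result)  # {'big': 3, 'data': 2, 'clusters': 1, 'hadoop': 1}
```

Note that adding more files with the same vocabulary increases the intermediate (map-side) pairs, but the reduced output keeps one entry per distinct word, which matches the observation that bytes written do not scale linearly with file count.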