Introduction to Big Data with Spark and Hadoop
Completed by Sumit Kumar
October 15, 2022
19 hours (approximately)
Sumit Kumar's account is verified. Coursera certifies their successful completion of Introduction to Big Data with Spark and Hadoop.
What you will learn
Explain the impact of big data, including use cases, tools, and processing methods.
Describe Apache Hadoop architecture, ecosystem, practices, and user-related applications, including Hive, HDFS, HBase, Spark, and MapReduce.
Apply Spark programming basics, including parallel programming with DataFrames, Datasets, and Spark SQL (a minimal sketch follows this list).
Use Spark’s RDDs and Datasets, optimize Spark SQL using Catalyst and Tungsten, and use Spark’s development and runtime environment options.
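As a rough illustration of the Spark programming basics named above (a minimal sketch, not part of the course materials; the application name, column names, and sample rows are invented for the example), a short PySpark session that builds a DataFrame and queries it with Spark SQL might look like this:

    # Minimal PySpark sketch: build a small DataFrame, expose it to Spark SQL,
    # and run a query. All names and data here are hypothetical.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("intro-demo").getOrCreate()

    # Create a DataFrame from an in-memory list of rows.
    df = spark.createDataFrame(
        [("alice", 34), ("bob", 45), ("carol", 29)],
        ["name", "age"],
    )

    # Register the DataFrame as a temporary view so Spark SQL can query it.
    df.createOrReplaceTempView("people")

    # Spark SQL query; Catalyst plans and optimizes it before execution.
    spark.sql("SELECT name, age FROM people WHERE age > 30").show()

    spark.stop()

The same filter could also be written with the DataFrame API (for example, df.filter(df.age > 30)); RDD-level operations are typically reserved for cases that need lower-level control than DataFrames and Spark SQL provide.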
Skills you will gain
- Data Transformation
- Apache Hadoop
- Big Data
- Apache Hive
- Performance Tuning
- Distributed Computing
- PySpark
- Docker (Software)
- Data Processing
- Scalability
- IBM Cloud
- Debugging