Introduction to Big Data with Spark and Hadoop
Completed by Luigi Censori
January 14, 2022
19 hours (approximately)
Luigi Censori's account is verified. Coursera certifies their successful completion of Introduction to Big Data with Spark and Hadoop.
What you will learn
Explain the impact of big data, including use cases, tools, and processing methods.
Describe Apache Hadoop architecture, ecosystem, practices, and user-related applications, including Hive, HDFS, HBase, Spark, and MapReduce.
Apply Spark programming basics, including parallel programming with DataFrames, Datasets, and Spark SQL.
Use Spark's RDDs and Datasets, optimize Spark SQL using Catalyst and Tungsten, and work with Spark's development and runtime environment options; a brief PySpark sketch of these basics follows below.
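The following is a minimal PySpark sketch of the programming basics listed above: building a DataFrame, aggregating it with the DataFrame API, querying it through Spark SQL, and dropping down to the RDD layer. The app name, column names, and sample rows are illustrative assumptions and are not taken from the course materials.

```python
# Minimal PySpark sketch (illustrative only; sample data and names are assumptions).
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("spark-basics-sketch").getOrCreate()

# Build a small DataFrame from in-memory rows (hypothetical purchase records).
df = spark.createDataFrame(
    [("alice", "books", 12.50), ("bob", "games", 30.00), ("alice", "games", 5.25)],
    ["customer", "category", "amount"],
)

# DataFrame API: an aggregation that Spark executes in parallel across partitions.
totals = df.groupBy("customer").agg(F.sum("amount").alias("total_spent"))
totals.show()

# Catalyst/Tungsten: inspect the optimized physical plan Spark chose for the aggregation.
totals.explain()

# Spark SQL: register the DataFrame as a temporary view and query it with SQL.
df.createOrReplaceTempView("purchases")
spark.sql(
    "SELECT category, COUNT(*) AS n_purchases FROM purchases GROUP BY category"
).show()

# RDD layer: the same data viewed as an RDD for lower-level parallel operations.
amounts_rdd = df.rdd.map(lambda row: row["amount"])
print(amounts_rdd.sum())

spark.stop()
```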
Skills you will gain
- PySpark
- Development Environment
- Docker (Software)
- Distributed Computing
- Debugging
- Big Data
- Data Transformation
- Data Processing
- IBM Cloud
- Apache Spark
- Performance Tuning
- Apache Hadoop