Introduction to Big Data with Spark and Hadoop
Completed by Diogo Silva
March 11, 2022
19 hours (approximately)
Diogo Silva's account is verified. Coursera certifies their successful completion of Introduction to Big Data with Spark and Hadoop.
What you will learn
- Explain the impact of big data, including use cases, tools, and processing methods.
- Describe the Apache Hadoop architecture, ecosystem, and practices, and commonly used applications, including Hive, HDFS, HBase, Spark, and MapReduce.
- Apply Spark programming basics, including parallel programming with DataFrames, Datasets, and Spark SQL (see the sketch after this list).
- Work with Spark's RDDs and Datasets, optimize Spark SQL with the Catalyst and Tungsten optimizers, and choose among Spark's development and runtime environment options.
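As a minimal sketch of the concepts named above (not course material), the following PySpark snippet touches DataFrames, Spark SQL, and RDDs in one place. It assumes a local PySpark installation; the application name, sample rows, and column names are illustrative only.

```python
# Minimal sketch, assuming PySpark is installed locally (e.g. pip install pyspark).
# Data, column names, and the app name are made up for illustration.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Start a local Spark session: the entry point for DataFrames and Spark SQL.
spark = (
    SparkSession.builder
    .appName("IntroBigDataSketch")
    .master("local[*]")
    .getOrCreate()
)

# Small in-memory DataFrame standing in for a real source such as HDFS or HBase.
rows = [("alice", "clicks", 3), ("bob", "clicks", 5), ("alice", "views", 7)]
df = spark.createDataFrame(rows, schema=["user", "event", "count"])

# DataFrame API: aggregation executed in parallel across partitions.
totals = df.groupBy("user").agg(F.sum("count").alias("total"))
totals.show()

# Spark SQL: register a temporary view and query it; the Catalyst optimizer
# plans the query and Tungsten handles the physical execution.
df.createOrReplaceTempView("events")
spark.sql("SELECT event, SUM(count) AS total FROM events GROUP BY event").show()

# RDD API: the lower-level abstraction underneath DataFrames.
rdd_counts = (
    df.rdd
    .map(lambda row: (row["event"], row["count"]))
    .reduceByKey(lambda a, b: a + b)
)
print(rdd_counts.collect())

spark.stop()
```

Note that Catalyst and Tungsten optimize the DataFrame and Spark SQL paths; the raw RDD transformation at the end bypasses them, which is one reason the DataFrame API is usually preferred.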
Skills you will gain
- Apache Hive
- Scalability
- Apache Hadoop
- Debugging
- IBM Cloud
- Development Environment
- Performance Tuning
- Data Transformation
- Distributed Computing
- Docker (Software)
- Apache Spark
- PySpark