Spark, Hadoop, and Snowflake for Data Engineering
Completed by Cyril Brandon TCHOFFO NGNINTEDEM
January 12, 2025
29 hours (approximately)
Cyril Brandon TCHOFFO NGNINTEDEM's account is verified. Coursera certifies their successful completion of Spark, Hadoop, and Snowflake for Data Engineering.
What you will learn
Create scalable data pipelines (Hadoop, Spark, Snowflake, Databricks) for efficient data handling (a minimal PySpark sketch follows this list).
Optimize data engineering workloads with clustering and scaling to boost performance and improve resource utilization.
Build ML solutions (PySpark, MLflow) on Databricks for seamless model development and deployment (see the MLflow sketch below).
Implement DataOps and DevOps practices for continuous integration and deployment (CI/CD) of data-driven applications, including automation of build, test, and deployment processes.
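As a rough illustration of the first outcome, here is a minimal PySpark sketch of a batch ETL pipeline: read raw CSV, clean and enrich it, and write partitioned Parquet for a downstream warehouse such as Snowflake. This is not course material; the paths and column names (data/events.csv, country, processed_at) are hypothetical.

```python
# Minimal PySpark batch pipeline sketch (hypothetical paths and column names).
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("example-etl-pipeline").getOrCreate()

# Extract: read raw CSV events with a header row and an inferred schema.
raw = spark.read.csv("data/events.csv", header=True, inferSchema=True)

# Transform: drop duplicate rows, normalize the country code,
# and stamp each record with the processing date.
cleaned = (
    raw.dropDuplicates()
       .withColumn("country", F.upper(F.col("country")))
       .withColumn("processed_at", F.current_date())
)

# Load: write Parquet partitioned by country for downstream consumers
# (e.g. bulk-loaded into Snowflake or queried from Databricks).
cleaned.write.mode("overwrite").partitionBy("country").parquet("warehouse/events")

spark.stop()
```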
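For the PySpark/MLflow outcome, the sketch below shows one common pattern: train a Spark ML logistic regression inside an MLflow run, then log its hyperparameters, a test metric, and the fitted model. The data path and column names (warehouse/training_data, f1, f2, f3, label) are assumptions for illustration; on Databricks, MLflow tracking is typically preconfigured for the workspace.

```python
# Minimal PySpark + MLflow training sketch (hypothetical data path and columns).
import mlflow
import mlflow.spark
from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import LogisticRegression
from pyspark.ml.evaluation import BinaryClassificationEvaluator

spark = SparkSession.builder.appName("example-mlflow-training").getOrCreate()

# Assume a table of numeric features plus a binary "label" column.
df = spark.read.parquet("warehouse/training_data")
assembler = VectorAssembler(inputCols=["f1", "f2", "f3"], outputCol="features")
train, test = assembler.transform(df).randomSplit([0.8, 0.2], seed=42)

with mlflow.start_run():
    lr = LogisticRegression(featuresCol="features", labelCol="label", maxIter=20)
    model = lr.fit(train)

    # Log hyperparameters, an evaluation metric, and the fitted model artifact.
    mlflow.log_param("maxIter", 20)
    auc = BinaryClassificationEvaluator(labelCol="label").evaluate(model.transform(test))
    mlflow.log_metric("test_auc", auc)
    mlflow.spark.log_model(model, artifact_path="model")

spark.stop()
```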
Skills you will gain
- Big Data
- Data Transformation
- Databricks
- Data Pipelines
- Data Integration
- SQL
- Apache Spark
- Distributed Computing
- PySpark
- Apache Hadoop
- Data Processing
- MLOps (Machine Learning Operations)