
Packt

The Ultimate Hands-On Hadoop

Included with Coursera Plus

Gain insight into a topic and learn the fundamentals.
Intermediate level

Recommended experience

16 hours to complete
3 weeks at 5 hours a week
Flexible schedule
Learn at your own pace

What you'll learn

  • Remember Hadoop setup and configuration steps.

  • Understand the Hadoop ecosystem, including HDFS, MapReduce, and YARN.

  • Apply queries using Pig, Hive, and Spark.

  • Evaluate Hadoop cluster performance and optimize it.

Skills you'll gain

  • MongoDB
  • Spark
  • Hadoop
  • Kafka
  • Apache Hadoop
  • Big Data

Details to know

Shareable certificate

Add to your LinkedIn profile

Recently updated!

October 2024

Assessments

5 assignments

Taught in English


Earn a career certificate

Add this credential to your LinkedIn profile or resume

Share it on social media and in your performance review

There are 12 modules in this course

In this module, we will dive into the world of Hadoop, starting with its installation and setup using the Hortonworks Data Platform Sandbox. You'll explore the key buzzwords and technologies that make up the Hadoop ecosystem, learn about the historical context and impact of the Hortonworks and Cloudera merger, and begin working with real data to get a feel for Hadoop's capabilities.

Included

4 videos, 1 reading

In this module, we will explore the core components of Hadoop: the Hadoop Distributed File System (HDFS) and MapReduce. You'll learn how HDFS reliably stores massive data sets across a cluster and how MapReduce enables distributed data processing. Through hands-on activities, you'll import datasets, set up a MapReduce environment, and write scripts to analyze data, including breaking down movie ratings and ranking movies by popularity.

Included

10 videos
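To make the MapReduce activity above concrete, here is a minimal sketch of a ratings-breakdown job written with the Python mrjob library; the tab-separated userID/movieID/rating/timestamp layout is an assumption based on a MovieLens-style dataset, and the course's own scripts may be structured differently.

```python
# Hypothetical sketch: count how many times each star rating appears
# in a MovieLens-style u.data file (userID, movieID, rating, timestamp).
from mrjob.job import MRJob


class RatingsBreakdown(MRJob):
    def mapper(self, _, line):
        # Each input line is assumed tab-separated
        user_id, movie_id, rating, timestamp = line.split('\t')
        yield rating, 1

    def reducer(self, rating, counts):
        # Sum the 1s emitted for each rating value
        yield rating, sum(counts)


if __name__ == '__main__':
    RatingsBreakdown.run()
```

Run locally with `python RatingsBreakdown.py u.data`, or point mrjob's Hadoop runner at the cluster to execute the same script as a distributed job.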

In this module, we will delve into Pig, a high-level scripting language that simplifies Hadoop programming. You'll start by exploring the Ambari web-based UI, which makes working with Pig more accessible. The module includes practical examples and activities, such as finding the oldest five-star movies and identifying the most-rated one-star movies using Pig scripts. You'll also learn about the capabilities of Pig Latin and test your skills through challenges and result comparisons.

Included

7 videos, 1 assignment
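The Pig activities above are written in Pig Latin, which isn't reproduced here; as a rough, hypothetical Python analogue of the same dataflow (average ratings per movie, keep the perfect 5.0 averages, join titles, sort oldest first), a PySpark sketch might look like the following. The HDFS paths and the simplified three-column movies file are assumptions, not the course's actual files.

```python
# Hypothetical Python analogue of a Pig "oldest five-star movies" dataflow.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("OldestFiveStarMovies").getOrCreate()

# Assumed layouts: ratings are tab-separated (userID, movieID, rating, ts);
# the movies file is a simplified pipe-separated movieID|title|releaseDate.
ratings = spark.read.csv("hdfs:///user/maria_dev/ml-100k/u.data",
                         sep="\t",
                         schema="userID INT, movieID INT, rating INT, ts LONG")
movies = spark.read.csv("hdfs:///user/maria_dev/ml-100k/movies_simple.psv",
                        sep="|",
                        schema="movieID INT, title STRING, releaseDate STRING")

# Keep only movies whose average rating is exactly 5.0, then sort oldest first
five_star = (ratings.groupBy("movieID")
                    .agg(F.avg("rating").alias("avgRating"))
                    .filter(F.col("avgRating") == 5.0))
(five_star.join(movies, "movieID")
          .orderBy("releaseDate")
          .show(10))
```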

In this module, we will explore the power of Apache Spark, a key technology in the Hadoop ecosystem known for its speed and versatility. You’ll start by understanding why Spark is a game-changer in big data. The module will cover Resilient Distributed Datasets (RDDs) and Datasets, showing you how to use them to analyze movie ratings data. You'll also delve into Spark's machine learning library (MLLib) to create a movie recommendation system. Through hands-on activities, you'll practice writing Spark scripts and refining your data analysis skills.

Included

8 videos
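For a flavor of the RDD-based analysis described above, here is a minimal PySpark sketch that computes the average rating per movie; the HDFS path and tab-separated field layout are assumptions rather than the course's exact script.

```python
# Hypothetical RDD sketch: average rating per movie from a u.data-style file.
from pyspark import SparkConf, SparkContext

conf = SparkConf().setAppName("AverageRatings")
sc = SparkContext(conf=conf)

lines = sc.textFile("hdfs:///user/maria_dev/ml-100k/u.data")

# Each line: userID \t movieID \t rating \t timestamp
movie_ratings = lines.map(lambda line: line.split('\t')) \
                     .map(lambda f: (int(f[1]), (float(f[2]), 1)))

# Sum ratings and counts per movie, then divide to get the average
totals = movie_ratings.reduceByKey(lambda a, b: (a[0] + b[0], a[1] + b[1]))
averages = totals.mapValues(lambda rc: rc[0] / rc[1])

for movie_id, avg in averages.take(10):
    print(movie_id, avg)
```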

In this module, we will explore the integration of relational datastores with Hadoop, focusing on Apache Hive and MySQL. You'll start by learning how Hive enables SQL queries on data within HDFS, followed by hands-on activities to find popular and highly-rated movies using Hive. The module also covers the installation and integration of MySQL with Hadoop, using Sqoop to seamlessly transfer data between MySQL and Hadoop's HDFS/Hive. Through practical exercises, you'll gain proficiency in managing and querying relational data within the Hadoop ecosystem.

Included

9 videos
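A minimal sketch of querying Hive-managed data from Python via Spark's Hive support is shown below; the movielens database and table names are assumptions, not the course's exact setup.

```python
# Hypothetical sketch: run a HiveQL query against Hive tables from PySpark.
from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .appName("MostRatedMovies")
         .enableHiveSupport()
         .getOrCreate())

# Count ratings per title and list the most-rated movies first
top_movies = spark.sql("""
    SELECT m.title, COUNT(*) AS rating_count
    FROM movielens.ratings r
    JOIN movielens.movies m ON r.movie_id = m.movie_id
    GROUP BY m.title
    ORDER BY rating_count DESC
    LIMIT 10
""")
top_movies.show()
```

The Sqoop transfers themselves are driven from the command line (for example, `sqoop import --connect jdbc:... --table ...`) rather than from Python.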

In this module, we will explore the use of non-relational (NoSQL) data stores within the Hadoop ecosystem. You'll learn why NoSQL databases are crucial for scalability and efficiency, and dive into specific technologies like HBase, Cassandra, and MongoDB. Through a series of activities, you'll practice importing data into HBase, integrating it with Pig, and using Cassandra and MongoDB alongside Spark. The module concludes with exercises to help you choose the most suitable NoSQL database for different scenarios, empowering you to make informed decisions in big data management.

Included

12 videos, 1 assignment
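As one small, concrete example of the NoSQL stores covered here, the sketch below inserts and looks up a document in MongoDB with the pymongo client; the database and collection names are purely illustrative.

```python
# Hypothetical sketch: basic document operations in MongoDB with pymongo.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
db = client["movielens"]

# Insert a user document
db.users.insert_one({"user_id": 1, "name": "Ada", "occupation": "scientist"})

# Look the document back up, then index the field for faster queries
print(db.users.find_one({"user_id": 1}))
db.users.create_index("user_id")
```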

In this module, we will focus on interactive querying tools that allow you to quickly access and analyze big data across multiple sources. You'll explore technologies like Drill, Phoenix, and Presto, learning how each one solves specific challenges in querying large datasets. The module includes hands-on activities where you'll set up these tools, execute queries that span across databases such as MongoDB, Hive, HBase, and Cassandra, and integrate these tools with other Hadoop ecosystem components. By the end of this module, you'll be equipped to perform efficient, real-time data analysis across varied data stores.

Included

9 videos
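For a taste of the interactive querying described above, here is a small sketch that submits SQL to Apache Drill over its REST interface from Python; the hive.movielens.ratings table path is an assumption, and Presto or Phoenix would be driven through their own clients instead.

```python
# Hypothetical sketch: send a SQL query to Drill's REST endpoint (web UI port 8047).
import requests

query = {
    "queryType": "SQL",
    # Drill can join across stores; a single Hive-backed table is assumed here
    "query": "SELECT rating, COUNT(*) AS cnt "
             "FROM hive.movielens.ratings GROUP BY rating",
}

resp = requests.post("http://localhost:8047/query.json", json=query)
resp.raise_for_status()
for row in resp.json().get("rows", []):
    print(row)
```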

In this module, we will explore the critical components involved in managing a Hadoop cluster. You'll learn about YARN's resource management capabilities, how Tez optimizes task execution using Directed Acyclic Graphs, and the differences between Mesos and YARN. We'll dive into ZooKeeper for maintaining reliable operations and Oozie for orchestrating complex workflows. Hands-on activities will guide you through setting up and using Zeppelin for interactive data analysis and using Hue for a more user-friendly interface. The module also touches on other noteworthy technologies like Chukwa and Ganglia, providing a comprehensive understanding of cluster management in Hadoop.

Included

13 videos
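To illustrate the kind of coordination ZooKeeper provides, here is a small sketch using the Python kazoo client to register an ephemeral znode; the /workers path is purely illustrative.

```python
# Hypothetical sketch: register a worker with ZooKeeper via kazoo.
from kazoo.client import KazooClient

zk = KazooClient(hosts="127.0.0.1:2181")
zk.start()

# An ephemeral znode disappears automatically if this process dies,
# which is how cluster tools detect failed masters and workers.
zk.ensure_path("/workers")
zk.create("/workers/worker-1", b"alive", ephemeral=True)

print(zk.get_children("/workers"))
zk.stop()
```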

In this module, we will explore the essential tools for feeding data into your Hadoop cluster, focusing on Kafka and Flume. You'll learn how Kafka supports scalable and reliable data collection across a cluster and how to set it up to publish and consume data. Additionally, you'll discover how Flume's architecture differs from Kafka and how to use it for real-time data ingestion. Through hands-on activities, you'll configure Kafka to monitor Apache logs and Flume to watch directories, publishing incoming data into HDFS. These skills will help you manage and process streaming data effectively in your Hadoop environment.

Included

6 videos, 1 assignment
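A minimal sketch of publishing and consuming messages with the kafka-python client follows; the weblogs topic and localhost broker address are assumptions, and the course's own setup feeds Apache logs rather than a hard-coded message.

```python
# Hypothetical sketch: produce and consume messages with kafka-python.
from kafka import KafkaProducer, KafkaConsumer

producer = KafkaProducer(bootstrap_servers="localhost:9092")
producer.send("weblogs", b"GET /index.html 200")
producer.flush()

consumer = KafkaConsumer(
    "weblogs",
    bootstrap_servers="localhost:9092",
    auto_offset_reset="earliest",
    consumer_timeout_ms=5000,  # stop iterating if no new messages arrive
)
for message in consumer:
    print(message.value.decode("utf-8"))
```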

In this module, we will focus on analyzing streams of data using real-time processing frameworks such as Spark Streaming, Apache Storm, and Flink. You’ll start by learning how Spark Streaming processes micro-batches of data in real-time and participate in activities that include analyzing web logs streamed by Flume. The module then introduces Apache Storm and Flink, providing hands-on exercises to implement word count applications with these tools. By the end of this module, you will be able to build continuous applications that efficiently process and analyze streaming data.

Included

8 videos
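As a concrete taste of the micro-batch model described above, here is the classic Spark Streaming word count in Python, reading from a socket source; the course's activities would wire in Flume or Kafka sources instead of socketTextStream.

```python
# Hypothetical sketch: word count over 1-second micro-batches from a socket.
from pyspark import SparkContext
from pyspark.streaming import StreamingContext

sc = SparkContext(appName="StreamingWordCount")
ssc = StreamingContext(sc, 1)  # 1-second micro-batches

lines = ssc.socketTextStream("localhost", 9999)
counts = (lines.flatMap(lambda line: line.split())
               .map(lambda word: (word, 1))
               .reduceByKey(lambda a, b: a + b))
counts.pprint()

ssc.start()
ssc.awaitTermination()
```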

In this module, we will focus on designing and implementing real-world systems using a combination of Hadoop ecosystem tools. You'll start by exploring additional technologies like Impala, NiFi, and AWS Kinesis, learning how they fit into broader Hadoop-based solutions. The module then guides you through the process of understanding system requirements and designing applications that consume and analyze large-scale data, such as web server logs or movie recommendations. By the end of this module, you’ll be equipped to design and build complex, efficient, and scalable data systems tailored to specific business needs.

Included

7 videos, 1 assignment

In this final module, we will provide you with a selection of books, online resources, and tools recommended by the author to further your knowledge of Hadoop and related technologies. This module serves as a guide for continued learning, offering you the means to stay updated with the latest developments in the Hadoop ecosystem and expand your skills beyond this course.

Included

1 video, 1 assignment

Instructor

Packt - Course Instructors
Packt
375 courses, 13,081 learners

Offered by

Packt

Recommended if you're interested in Data Management
