Distributed Programming in Java

This course teaches learners (industry professionals and students) the fundamental concepts of distributed programming in the context of Java 8. Distributed programming enables developers to use multiple nodes in a data center to increase throughput and/or reduce latency of selected applications. By the end of this course, you will learn how to use popular distributed programming frameworks for Java programs, including Hadoop, Spark, sockets, Remote Method Invocation (RMI), multicast sockets, Kafka, and the Message Passing Interface (MPI), as well as different approaches to combining distribution with multithreading.
This course is part of Parallel, Concurrent, and Distributed Programming in Java Specialization
Instructor: Vivek Sarkar
25,238 already enrolled
Details to know
- Add to your LinkedIn profile
- 4 assignments
There are 7 modules in this course
Welcome to Distributed Programming in Java! This course is designed as a three-part series and covers a theme or body of knowledge through various video lectures, demonstrations, and coding projects.
What's included
1 video, 5 readings, 1 programming assignment, 1 discussion prompt
In this module, we will learn about the MapReduce paradigm, and how it can be used to write distributed programs that analyze data represented as key-value pairs. A MapReduce program is defined via user-specified map and reduce functions, and we will learn how to write such programs in the Apache Hadoop and Spark projects. The MapReduce paradigm can be used to express a wide range of parallel algorithms. One example that we will study is computation of the Term Frequency – Inverse Document Frequency (TF-IDF) statistic used in document mining; this algorithm uses a fixed (non-iterative) number of map and reduce operations. Another MapReduce example that we will study is parallelization of the PageRank algorithm. This algorithm is an example of iterative MapReduce computations, and is also the focus of the mini-project associated with this module.
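For a flavor of the map/reduce style discussed above, the sketch below counts word occurrences using Spark's Java API. It is illustrative only and not code from the mini-projects; the "local[*]" master URL and the "input.txt" path are placeholders chosen for this example.

```java
import java.util.Arrays;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;

import scala.Tuple2;

// Illustrative only: word count in the map/reduce style with Spark's Java API.
public class WordCountSketch {
    public static void main(String[] args) {
        // Local master and input path are placeholders for this sketch.
        SparkConf conf = new SparkConf().setAppName("WordCountSketch").setMaster("local[*]");
        try (JavaSparkContext sc = new JavaSparkContext(conf)) {
            JavaRDD<String> lines = sc.textFile("input.txt");
            JavaPairRDD<String, Integer> counts = lines
                .flatMap(line -> Arrays.asList(line.split("\\s+")).iterator()) // map: line -> words
                .mapToPair(word -> new Tuple2<>(word, 1))                      // emit (word, 1) pairs
                .reduceByKey(Integer::sum);                                    // reduce: sum counts per word
            counts.collect().forEach(p -> System.out.println(p._1() + " -> " + p._2()));
        }
    }
}
```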
What's included
6 videos, 6 readings, 1 assignment, 1 programming assignment
In this module, we will learn about client-server programming, and how distributed Java applications can communicate with each other using sockets. Since communication via sockets occurs at the level of bytes, we will learn how to serialize objects into bytes in the sender process and to deserialize bytes into objects in the receiver process. Sockets and serialization provide the necessary background for the File Server mini-project associated with this module. We will also learn about Remote Method Invocation (RMI), which extends the notion of method invocation in a sequential program to a distributed programming setting. Likewise, we will learn about multicast sockets, which generalize the standard socket interface to enable a sender to send the same message to a specified set of receivers; this capability can be very useful for a number of applications, including news feeds, video conferencing, and multi-player games. Finally, we will learn about distributed publish-subscribe applications, and how they can be implemented using the Apache Kafka framework.
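To illustrate the socket-plus-serialization idea described above, here is a minimal sketch (not course code) in which a client serializes a small object onto a socket and a server deserializes it. The Message class, host, and port are hypothetical, and the two methods would normally run in separate JVMs.

```java
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.net.ServerSocket;
import java.net.Socket;

// Illustrative sketch: one serialized object exchanged over a socket.
public class SocketSerializationSketch {
    // A simple Serializable message type, introduced only for this example.
    static class Message implements Serializable {
        final String text;
        Message(String text) { this.text = text; }
    }

    // Server: accepts one connection and deserializes a Message from the byte stream.
    static void runServer(int port) throws IOException, ClassNotFoundException {
        try (ServerSocket server = new ServerSocket(port);
             Socket client = server.accept();
             ObjectInputStream in = new ObjectInputStream(client.getInputStream())) {
            Message msg = (Message) in.readObject();   // bytes -> object (deserialization)
            System.out.println("Server received: " + msg.text);
        }
    }

    // Client: connects and serializes a Message onto the socket's byte stream.
    static void runClient(String host, int port) throws IOException {
        try (Socket socket = new Socket(host, port);
             ObjectOutputStream out = new ObjectOutputStream(socket.getOutputStream())) {
            out.writeObject(new Message("hello"));     // object -> bytes (serialization)
            out.flush();
        }
    }
}
```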
What's included
6 videos, 6 readings, 1 assignment, 1 programming assignment
Join Professor Vivek Sarkar as he talks with Two Sigma Managing Director, Jim Ward, and Senior Vice President, Dr. Eric Allen at their downtown Houston, Texas office about the importance of distributed programming.
What's included
2 videos, 1 reading
In this module, we will learn how to write distributed applications in the Single Program Multiple Data (SPMD) model, specifically by using the Message Passing Interface (MPI) library. MPI processes can send and receive messages using primitives for point-to-point communication, which are different in structure and semantics from message-passing with sockets. We will also learn about the message ordering and deadlock properties of MPI programs. Non-blocking communications are an interesting extension of point-to-point communications, since they can be used to avoid delays due to blocking and also to avoid deadlock-related errors. Finally, we will study collective communication, which can involve multiple processes in a manner that is more powerful than multicast and publish-subscribe operations. The knowledge of MPI gained in this module will be put into practice in the associated mini-project on implementing a distributed matrix multiplication program in MPI.
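For a taste of SPMD point-to-point messaging, the sketch below sends a single integer from rank 0 to rank 1. It assumes mpiJava/MPJ Express-style Java bindings (package mpi); other Java MPI bindings expose a slightly different API, and this is not the mini-project code.

```java
import mpi.MPI;

// Illustrative SPMD sketch: every process runs this same program,
// and behavior branches on the process rank.
public class MpiPointToPointSketch {
    public static void main(String[] args) throws Exception {
        MPI.Init(args);
        int rank = MPI.COMM_WORLD.Rank();                    // this process's id
        int[] buf = new int[1];
        if (rank == 0) {
            buf[0] = 42;
            MPI.COMM_WORLD.Send(buf, 0, 1, MPI.INT, 1, 0);   // dest = rank 1, tag = 0
        } else if (rank == 1) {
            MPI.COMM_WORLD.Recv(buf, 0, 1, MPI.INT, 0, 0);   // source = rank 0, tag = 0
            System.out.println("Rank 1 received " + buf[0]);
        }
        MPI.Finalize();
    }
}
```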
What's included
6 videos, 6 readings, 1 assignment, 1 programming assignment
In this module, we will study the roles of processes and threads as basic building blocks of parallel, concurrent, and distributed Java programs. With this background, we will then learn how to implement multithreaded servers for increased responsiveness in distributed applications written using sockets, and apply this knowledge in the mini-project on implementing a parallel file server using both multithreading and sockets. An analogous approach can also be used to combine MPI and multithreading, so as to improve the performance of distributed MPI applications. Distributed actors serve as yet another example of combining distribution and multithreading. A notable property of the actor model is that the same high-level constructs can be used to communicate among actors running in the same process and among actors in different processes; the difference between the two cases depends on the application configuration, rather than the application code. Finally, we will learn about the reactive programming model, and its suitability for implementing distributed service-oriented architectures using asynchronous events.
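As a rough illustration of combining sockets with multithreading, the sketch below hands each accepted connection to a thread pool so that one slow client cannot stall the others. The port number and pool size are arbitrary choices for this example, and the mini-project's file server is more involved.

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Illustrative multithreaded echo server: the main thread accepts connections,
// and a pooled worker thread serves each client independently.
public class MultithreadedServerSketch {
    public static void main(String[] args) throws IOException {
        ExecutorService pool = Executors.newFixedThreadPool(8); // pool size is illustrative
        try (ServerSocket server = new ServerSocket(8080)) {     // port is a placeholder
            while (true) {
                Socket client = server.accept();        // accept on the main thread
                pool.submit(() -> handle(client));      // serve on a worker thread
            }
        }
    }

    private static void handle(Socket client) {
        try (Socket c = client;
             BufferedReader in = new BufferedReader(new InputStreamReader(c.getInputStream()));
             PrintWriter out = new PrintWriter(c.getOutputStream(), true)) {
            String line;
            while ((line = in.readLine()) != null) {
                out.println(line);                      // echo each request line back
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
```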
What's included
6 videos, 7 readings, 1 assignment, 1 programming assignment
The next two videos will showcase the importance of learning about Parallel Programming and Concurrent Programming in Java. Professor Vivek Sarkar will speak with industry professionals at Two Sigma about how the topics of our other two courses are utilized in the field.
What's included
2 videos, 1 reading
Learner reviews
494 reviews
- 5 stars: 68.88%
- 4 stars: 22.62%
- 3 stars: 5.05%
- 2 stars: 1.01%
- 1 star: 2.42%
Showing 3 of 494
Reviewed on May 27, 2020
thanks to coursera platform,to learn new way to solving distributed problem in java
Reviewed on Apr 27, 2020
A very good course, I learnt a lot from it, thank you Coursera.
Reviewed on Oct 4, 2019
It was awsome, I could learn a lot, thks Professor
Frequently asked questions
No. The lecture videos, demonstrations, and quizzes will be sufficient to enable you to complete this course. Students who enroll in the course and are interested in receiving a certificate will also have access to a supplemental coursebook with additional technical details.
Multicore Programming in Java: Parallelism and Multicore Programming in Java: Concurrency cover complementary aspects of multicore programming, and can be taken in any order. The Parallelism course covers the fundamentals of using parallelism to make applications run faster by using multiple processors at the same time. The Concurrency course covers the fundamentals of how parallel tasks and threads correctly mediate concurrent use of shared resources such as shared objects, network resources, and file systems.
Access to lectures and assignments depends on your type of enrollment. If you take a course in audit mode, you will be able to see most course materials for free. To access graded assignments and to earn a Certificate, you will need to purchase the Certificate experience, during or after your audit. If you don't see the audit option:
- The course may not offer an audit option. You can try a Free Trial instead, or apply for Financial Aid.
- The course may offer 'Full Course, No Certificate' instead. This option lets you see all course materials, submit required assessments, and get a final grade. This also means that you will not be able to purchase a Certificate experience.