
Learner Reviews & Feedback for Machine Learning: Clustering & Retrieval by University of Washington

4.7 stars (2,357 ratings)

About the Course

Case Studies: Finding Similar Documents

A reader is interested in a specific news article and you want to find similar articles to recommend. What is the right notion of similarity? Moreover, what if there are millions of other documents? Each time you want to retrieve a new document, do you need to search through all other documents? How do you group similar documents together? How do you discover new, emerging topics that the documents cover? In this third case study, finding similar documents, you will examine similarity-based algorithms for retrieval. In this course, you will also examine structured representations for describing the documents in the corpus, including clustering and mixed membership models, such as latent Dirichlet allocation (LDA). You will implement expectation maximization (EM) to learn the document clusterings, and see how to scale the methods using MapReduce.

Learning Outcomes: By the end of this course, you will be able to:

-Create a document retrieval system using k-nearest neighbors.
-Identify various similarity metrics for text data.
-Reduce computations in k-nearest neighbor search by using KD-trees.
-Produce approximate nearest neighbors using locality sensitive hashing.
-Compare and contrast supervised and unsupervised learning tasks.
-Cluster documents by topic using k-means.
-Describe how to parallelize k-means using MapReduce.
-Examine probabilistic clustering approaches using mixture models.
-Fit a mixture of Gaussians model using expectation maximization (EM).
-Perform mixed membership modeling using latent Dirichlet allocation (LDA).
-Describe the steps of a Gibbs sampler and how to use its output to draw inferences.
-Compare and contrast initialization techniques for non-convex optimization objectives.
-Implement these techniques in Python.
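As a purely illustrative aside (not part of the course materials), the following minimal sketch shows the kind of similarity-based document retrieval the case study describes, assuming scikit-learn in place of the course's own tooling; the corpus, query, and parameter choices below are hypothetical.

    # Minimal sketch of k-nearest-neighbor document retrieval (illustrative only;
    # assumes scikit-learn rather than the course's own tooling).
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.neighbors import NearestNeighbors

    # Hypothetical toy corpus standing in for a collection of news articles.
    corpus = [
        "stocks rallied as markets opened higher",
        "the team won the championship game",
        "new smartphone features a faster processor",
        "investors cheered the quarterly earnings report",
    ]

    # Represent each document as a TF-IDF vector.
    vectorizer = TfidfVectorizer()
    X = vectorizer.fit_transform(corpus)

    # Index the corpus for k-nearest-neighbor search under cosine distance.
    knn = NearestNeighbors(n_neighbors=2, metric="cosine").fit(X)

    # Retrieve the documents most similar to a new query article.
    query = vectorizer.transform(["markets fell after the earnings report"])
    distances, indices = knn.kneighbors(query)
    for dist, idx in zip(distances[0], indices[0]):
        print(f"{corpus[idx]!r} (cosine distance {dist:.2f})")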

Top reviews

JM

Jan 16, 2017

Excellent course, well thought out lectures and problem sets. The programming assignments offer an appropriate amount of guidance that allows the students to work through the material on their own.

BK

Aug 24, 2016

Excellent material! It would be nice, however, to mention some reading material, books or articles, for those interested in the details and the theories behind the concepts presented in the course.


301 - 325 of 389 Reviews for Machine Learning: Clustering & Retrieval

By MARIANA L J

•

Aug 12, 2016

The things I liked:

-The professor seems very knowledgeable about all the subjects and she can also convey them in a very understandable way (kudos to her, since talking to a camera is not easy)

-The course was well organized and the deadlines were adjusted when a technical difficulty was found by several students

-All the assignments are easy to follow and very detailed

-The testing code provided for the programming assignments is a huge help to make sure we are solving it the right way

What can be improved:

-Some of the concepts during weeks 4 and 5 seemed a bit rushed. Although the professor explained that some details were outside of the scope of this course, I felt that I needed a more thorough explanation in order to understand better

-Some links to the documentation of libraries used in the programming assignments were lacking information on how to really use them; I wish we had links to worked examples too

In general I can say this was another good course for this series. Making a course like this is not easy at all and I can see that they are putting a lot of effort to produce them. All of their hard work is really appreciated on my end.

By Liang-Yao W

•

Aug 24, 2017

This course is generally good, but I do feel less smoothly guided compared to the other courses in this specialization. For most modules of this course (other than the LDA part), the lecture videos are clear as before but the programming assignments are more demanding. You will probably need help from Google, at least for the usage of GraphLab's functions. But as long as you are not completely new to programming and Python, you should be able to work it out fine.

However, I found the module introducing the LDA model and Gibbs sampling difficult to follow. The lecturer tried to convey the concepts and intuitions without presenting the step-by-step algorithm, probably because they are too involved. But personally, I would still prefer to have them to think over, even if I cannot understand them now.

It is also a pity that the one other course and the capstone project originally planned for this specialization were never launched in the end. I do believe the lecturers would have provided high-quality course content and introduced it with passion.

By Matt S

•

Nov 10, 2016

I found that Week 4, Assignment 2 was testing our knowledge in ways that are in opposition to the general ethos of the course.

I mean that this course is about gaining insight, intuition, and the practical tools for ML, where the smaller details (like knowing which numpy function gives you a random univariate normal distribution) are normally provided for us, so that we can focus on the aforementioned broader, more useful aspects of ML/clustering.

The assignment had good content on the whole but the parts which were chosen for "write code here" could certainly be improved.

I hope this is useful feedback and that the assignment is reviewed so that it doesn't needlessly discourage people.

By Usman I

•

Dec 29, 2016

I am taking all courses in the specialization, and this is my fourth. I have been having a great time with materials by both instructors so far, until I came to week 5 of this course.

Despite repeated viewing, my understanding of LDA is non-existent. The first section is fine, but starting from "Bayesian inference via Gibbs sampling," for me at least, the method of instruction has gone off a cliff.

I strongly suggest soliciting feedback from learners that narrowly targets the material of this week 5 to determine if it's just me or if this is a wider problem. If it is the latter, perhaps it is time to redesign the lessons of this week.

By Christopher M

•

Jun 30, 2019

Doesn't go quite as deep into the details as some of the other Machine Learning courses from the University of Washington do. Overall though, the course covers a LOT of ground and provides exposure to many different topics.

I would have liked to have seen an optional section on the derivation of some of the math that we were given functions for in the Expectation Maximization section. The models in the hierarchical clustering section take longer to fit than is necessary for a course like this (more than 40 times as long as the instructions say they should take); maybe a larger tolerance for convergence should be specified?

By Bob v d H

•

Oct 2, 2016

Some of the interesting topics discussed in this course could be treated substantially more extensively and in more detail in order to get a better grip on and understanding of them (e.g. Gibbs sampling). After this course, it is a bit dazzling how many different algorithms and methods are available for clustering and retrieval tasks, and this course easily could have been subdivided into two or three separate courses on the same topic with a more detailed treatment. Still, for many interesting subjects only the tip of the iceberg has been brought to you ... it tastes so good that you would like to have much more!

By Sundar J D

•

Sep 26, 2016

Great course and awesome teaching by Prof. Emily Fox. Prof. Fox did a great job of teaching some of the really tough components (GMM, LDA, etc.) in a simple and lucid style (like always), and that made it easy to understand and comprehend those topics.

The one thing that I felt had gone down compared to the previous 3 courses was that for some of the topics, the material felt too short and felt like it was cut down to fit within the 6-week course duration. I would have at least liked some extra reading material or references, especially for GMMs, LDA, Gibbs sampling, etc.

By Maria V

•

Aug 2, 2016

The specialization has good quality on average. I started doing this course immediately after it went open. I had a feeling that the quality of the course went down (questions were often unclear and it took time to figure out what was expected as an answer). However, many problems were solved quite fast and the teaching staff is really helpful.

I still would like to have seen MapReduce covered in more depth in this course. I did not have the feeling that it was covered sufficiently (only theory, no hands-on material). In general, the hands-on material was great and useful.

By Yaron K

•

Sep 30, 2016

The assignments are excellent and help in understanding the algorithms and concepts taught in the course. There is some garbling in the subtitles/transcripts (including a quirky one: every time the lecturer says EM, the "EM" doesn't appear and the following word is capitalized). As usual, GraphLab Create / SFrames can't handle apply(); however, apply() mostly appears in the part of the assignment that reads in files and turns them into data matrices, and the explanations of how to run the assignment with Scikit-Learn include pre-computed input files.

By Alvis O

•

May 1, 2020

Course materials are good and well prepared. I enjoyed this course very much. In general, I highly recommend that people who want to learn advanced clustering techniques enrol in this course.

Unfortunately, when I enrolled in the course, I was informed that module 5 and module 6, which I was interested in, had been removed from the course. Besides, in the assignments, the instructions for the coding blocks were not detailed enough. This confused me and I spent a lot of time guessing in order to pass the tests. I would appreciate it if this could have been done better.

By Yin X

•

Nov 4, 2017

I really like the content of this course, like other courses in this specialization. However, for the assignment in module 5, one must work with GraphLab to get the correct answers for the purpose of getting a certificate. I think it is not very convenient for those who may have trouble accessing GraphLab. I wonder if the instructors could provide a pandas/scikit-learn version for assignment 2 in module 5. Thanks again for putting together such a great specialization.

By Sander v d O

•

Oct 18, 2016

All the courses in this specialization are great, but compared to the other 3 I have done so far, this one seemed a bit short on material. Especially week 1, and to some extent week 6, lacked good material. Weeks 2, 3 and 4 were great. I got lost somewhere in week 5 on collapsed Gibbs sampling.

Still: I very much recommend this course; it provides a good introduction to Nearest Neighbors, K-Means, Gaussian Mixtures and LDA. Thanks Prof. Fox!!

By Pier L L

•

Aug 2, 2016

Very good course with a nice practical approach. I was kind of surprised that hierarchical clustering was kept for the end and discussed only marginally, since it is a widely used approach.

I liked the part about LDA but IMHO I would have liked more discussion about fundamental techniques rather than such an advanced method.

Too focused on text data. Most of the applications I have worked on have limited textual data.

By George P

•

Nov 21, 2017

Overall one of the best courses I have had in my life. It was very well structured. The material was a little bit more advanced than the rest of the courses in this specialization, and therefore more in-depth explanations needed to be given, especially in the LDA module. In a nutshell, it was a positive experience both watching the videos and doing the quizzes and the programming assignments.

By Michele P

•

Sep 2, 2017

Advanced course. The material taught in this course is more advanced compared to the Regression and Classification courses. You have to invest more time than in the previous courses. For some topics (LDA and hierarchical clustering) I had to look for other sources in order to understand the concepts properly. However, this course is a good introduction to clustering and retrieval.

By Nicolas S

•

Jan 2, 2020

The videos are great, well structured, and introduce the complexity gradually. It is a good idea to explore both the methodological and computational aspects of clustering. Unfortunately, the exercises require the use of a specific library instead of scikit-learn and numpy. Furthermore, they also require Python 2, while Python 3 is now widely used.

By Gilles D

•

Aug 12, 2016

Still a very good course.

Week 4 was very tough. The general concept can be understood from a 10,000-foot altitude, but the lesson and programming assignment need to be reviewed, maybe with a slower, step-by-step example.

As some other student mentioned, it was... "brutal".

Other than that, looking forward to the next course in the specialization!

By Jayant S

•

Oct 25, 2019

The course was very detailed. The case-study technique was very helpful compared to a purely theoretical approach. I would consider the programming assignments to be of medium to hard difficulty. The course could have been much better if GraphLab and scikit-learn coding had been taught side by side.

By Patrick A

•

Sep 30, 2020

Very interesting, but the LDA and Gibbs sampling for LDA concepts were not easy to understand. Maybe we could find a simpler way to explain them. Nonetheless, with all these concepts and practical case studies learned in this specialization, we can start solving real-world problems. Thanks once more!

By Maxence L

•

Dec 15, 2016

Like the previous courses in this specialization, this course is very rich and provides the keys to using complex and powerful tools. However, a bit more detail on certain aspects, particularly theoretical ones, could improve the understanding of some of the more technical chapters.

By Alexandru I

•

Sep 25, 2020

I think all the advanced concepts presented in this course were a little bit rushed. Maybe it would have been better had we received more information, meaning more lecture materials. But overall, I feel really good about choosing this Specialization.

By Steve S

•

Aug 26, 2016

Like all the courses in this specialization so far, the material has been good. The reason for only 4 stars rather than 5 is the difficulty in getting questions answered in a timely manner. There don't seem to be any active mentors for this class.

By Martin B

•

Apr 11, 2019

Greatly enjoyed it. As with the other courses in this specialization the discussion of the subjects is impeccable, especially if you've taken some preparatory mathematics courses. The reliance on Graphlab Create is a drag though.

By Raj

•

May 26, 2017

Clustering & Retrieval was a lot tougher compared to the courses on regression & classification because the math concepts behind this course were too complex. Nevertheless, Emily tried to make this course as intuitive as possible.

By Abhishek S

•

Feb 10, 2018

Till Expectation Maximization, the learning is tremendous. However, once past that, everything feels incomplete, since most assignments are spoon-fed after that. Rating it four stars because of the initial lectures.