Unsupervised Algorithms in Machine Learning
One of the most useful areas in machine learning is discovering hidden patterns in unlabeled data. Add the fundamentals of this in-demand skill to your data science toolkit. In this course, we will learn selected unsupervised learning methods for dimensionality reduction, clustering, and learning latent features. We will also focus on real-world applications such as recommender systems, with hands-on examples of product recommendation algorithms.
This course is part of Machine Learning: Theory and Hands-on Practice with Python Specialization
Instructor: Geena Kim
4,108 already enrolled · 20 reviews
What you'll learn
Explain what unsupervised learning is, and list methods used in unsupervised learning.
List and explain algorithms for various matrix factorization methods, and what each is used for.
Details to know
Add to your LinkedIn profile
6 quizzes
Build your subject-matter expertise
- Learn new concepts from industry experts
- Gain a foundational understanding of a subject or tool
- Develop job-relevant skills with hands-on projects
- Earn a shareable career certificate
Earn a career certificate
Add this credential to your LinkedIn profile, resume, or CV
Share it on social media and in your performance review
There are 4 modules in this course
Now that you have a solid foundation in supervised learning, we shift our attention to uncovering hidden structure in unlabeled data. We start with an introduction to unsupervised learning: in this course, the models no longer have labels to learn from and must make sense of the data from the observations themselves. This week we dive into Principal Component Analysis (PCA), a foundational dimensionality reduction technique. It might not seem easy at first, and there is undoubtedly some math involved, but PCA can be grasped conceptually, perhaps more readily than anticipated. In the Supervised Learning course, we struggled with the curse of dimensionality; this week, we will see how PCA can reduce the number of dimensions and improve classification and regression tasks. You will have reading, a quiz, and a Jupyter notebook lab with a peer review in which you implement the PCA algorithm.
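For intuition before the lab, here is a minimal NumPy sketch of PCA computed via the singular value decomposition; the function name and toy data are illustrative assumptions, not the lab's required interface.

```python
import numpy as np

def pca(X, k):
    """Project X (n_samples x n_features) onto its top-k principal components."""
    # Center each feature at zero; principal directions are defined on centered data.
    X_centered = X - X.mean(axis=0)
    # SVD of the centered data: the right singular vectors are the principal axes.
    U, S, Vt = np.linalg.svd(X_centered, full_matrices=False)
    components = Vt[:k]                    # top-k principal directions
    explained_var = (S ** 2) / (len(X) - 1)  # eigenvalues of the covariance matrix
    return X_centered @ components.T, components, explained_var[:k]

# Toy usage: reduce 5-D data to 2-D.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
Z, components, var = pca(X, k=2)
print(Z.shape)  # (100, 2)
```

Computing PCA through the SVD of the centered data avoids forming the covariance matrix explicitly, which is the numerically preferred route.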
What's included
3 videos · 10 readings · 3 quizzes · 1 peer review · 2 discussion prompts · 1 ungraded lab
This week, we work with clustering, one of the most popular unsupervised learning methods. Last week, we used PCA to find a low-dimensional representation of data; clustering, on the other hand, finds subgroups among observations. It can give us a meaningful intuition of the data structure or feed a procedure like cluster-then-predict. Applications range from customer segmentation in marketing and advertising, to identifying similar movies and music, to genomics research and the discovery of disease subtypes. We focus mainly on K-means clustering and hierarchical clustering, weighing the benefits and disadvantages of each and the choice of metrics such as distance and linkage. We have reading, a quiz, and a Jupyter notebook lab with a peer review this week.
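As a rough illustration of what K-means does under the hood, here is a minimal NumPy sketch of Lloyd's algorithm; the function and its defaults are illustrative assumptions, not the course's reference implementation.

```python
import numpy as np

def kmeans(X, k, n_iter=100, seed=0):
    """Plain Lloyd's algorithm: alternate assignments and centroid updates."""
    rng = np.random.default_rng(seed)
    # Initialize centroids at k distinct observations chosen at random.
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        # Assignment step: each point joins its nearest centroid (Euclidean distance).
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Update step: each centroid moves to the mean of its assigned points;
        # an empty cluster keeps its previous centroid.
        new_centroids = np.array([
            X[labels == j].mean(axis=0) if np.any(labels == j) else centroids[j]
            for j in range(k)
        ])
        if np.allclose(new_centroids, centroids):  # converged
            break
        centroids = new_centroids
    return labels, centroids

# Toy usage on two well-separated blobs.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.5, (50, 2)), rng.normal(5, 0.5, (50, 2))])
labels, centers = kmeans(X, k=2)
```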
What's included
2 videos · 2 readings · 1 quiz · 1 peer review · 1 discussion prompt · 1 ungraded lab
This week we work with recommender systems. Websites like Netflix, Amazon, and YouTube surface personalized recommendations for movies, items, or videos. We explore the strategies recommendation engines use to predict what users will like, considering popularity-based, content-based, and collaborative filtering approaches, as well as which similarity metrics to use. Recommender systems also pose practical challenges, such as the time complexity of operations and sparse data. This week is relatively math dense. You will have a quiz in which you work through different similarity metric calculations. Give yourself time for this week's Jupyter notebook lab and consider performant implementations. The peer review section this week is short.
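As a sketch of the kind of similarity calculation the quiz covers, the snippet below computes cosine similarity between item columns of a tiny made-up ratings matrix and uses it for an item-based neighborhood prediction; the matrix, values, and helper names are all hypothetical.

```python
import numpy as np

# Hypothetical user-item rating matrix (0 = unrated); rows are users, columns items.
R = np.array([
    [5, 4, 0, 1],
    [4, 0, 0, 1],
    [1, 1, 0, 5],
    [0, 1, 5, 4],
], dtype=float)

def cosine_sim(a, b):
    """Cosine similarity; for simplicity, unrated (zero) entries are kept as zeros."""
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return a @ b / denom if denom else 0.0

def predict(user, item):
    """Item-based neighborhood prediction: a similarity-weighted average of the
    user's ratings on the other items."""
    sims, ratings = [], []
    for j in range(R.shape[1]):
        if j != item and R[user, j] > 0:
            sims.append(cosine_sim(R[:, item], R[:, j]))
            ratings.append(R[user, j])
    sims = np.array(sims)
    return sims @ np.array(ratings) / sims.sum() if sims.sum() else 0.0

print(predict(user=1, item=1))  # estimate user 1's rating of item 1
```

Note that looping over all item pairs like this is quadratic in the number of items, which is exactly the time-complexity concern mentioned above.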
What's included
4 videos · 1 reading · 1 quiz · 1 programming assignment · 1 peer review
We are already at the last week of course material! Get ready for another math-dense week. Last week, we learned about recommendation systems and used a neighborhood method of collaborative filtering built on similarity measures. Latent factor models, including the popular matrix factorization (MF), can also be used for collaborative filtering. A 1999 publication in Nature made non-negative matrix factorization extremely popular, and MF has many applications, including image analysis, text mining and topic modeling, recommender systems, audio signal separation, analytical chemistry, and gene expression analysis. This week we focus on singular value decomposition, non-negative matrix factorization, and approximation methods. We have reading, a quiz, and a Kaggle mini-project that uses matrix factorization to categorize news articles.
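As a hedged preview of the mini-project's workflow, the sketch below factors a TF-IDF matrix with scikit-learn's NMF to surface topics; the 20 Newsgroups corpus stands in for the project's news articles, and the vectorizer settings and topic count are illustrative assumptions, not the project's specification.

```python
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import NMF

# Stand-in corpus (downloads on first use); the mini-project supplies its own articles.
docs = fetch_20newsgroups(remove=("headers", "footers", "quotes")).data[:2000]

# Build a sparse, non-negative document-term matrix V (documents x terms).
tfidf = TfidfVectorizer(max_features=5000, stop_words="english")
V = tfidf.fit_transform(docs)

# Factor V ~ W H with non-negative W (document-topic) and H (topic-term) matrices.
nmf = NMF(n_components=10, init="nndsvd", max_iter=400, random_state=0)
W = nmf.fit_transform(V)   # each row: topic weights for one document
H = nmf.components_        # each row: term weights for one topic

# Show the top terms per topic; argmax over a document's row of W gives its category.
terms = tfidf.get_feature_names_out()
for t, row in enumerate(H):
    top = row.argsort()[-8:][::-1]
    print(f"topic {t}:", ", ".join(terms[i] for i in top))
```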
What's included
5 videos · 1 reading · 1 quiz · 1 peer review
Offered by University of Colorado Boulder
Build toward a degree
This course is part of the following degree program(s) offered by University of Colorado Boulder. If you are admitted and enroll, your completed coursework may count toward your degree learning and your progress can transfer with you.
Frequently asked questions
A cross-listed course is offered under two or more CU Boulder degree programs on Coursera. For example, Dynamic Programming, Greedy Algorithms is offered as both CSCA 5414 for the MS-CS and DTSA 5503 for the MS-DS.
· You may not earn credit for more than one version of a cross-listed course.
· You can identify cross-listed courses by checking your program’s student handbook.
· Your transcript will be affected. Cross-listed courses are considered equivalent when evaluating graduation requirements. However, we encourage you to take your program's versions of cross-listed courses (when available) to ensure your CU transcript reflects the substantial amount of coursework you are completing directly in your home department. Any courses you complete from another program will appear on your CU transcript with that program’s course prefix (e.g., DTSA vs. CSCA).
· Programs may have different minimum grade requirements for admission and graduation. For example, the MS-DS requires a C or better on all courses for graduation (and a 3.0 pathway GPA for admission), whereas the MS-CS requires a B or better on all breadth courses and a C or better on all elective courses for graduation (and a B or better on each pathway course for admission). All programs require students to maintain a 3.0 cumulative GPA for admission and graduation.
Yes. Cross-listed courses are considered equivalent when evaluating graduation requirements. You can identify cross-listed courses by checking your program’s student handbook.
You may upgrade and pay tuition during any open enrollment period to earn graduate-level CU Boulder credit for this course. Because this course is cross-listed in both the MS in Computer Science and the MS in Data Science programs, you will need to determine which program you would like to earn the credit from before you upgrade.
MS in Data Science (MS-DS) Credit: To upgrade to the for-credit data science (DTSA) version of this course, use the MS-DS enrollment form. See How It Works.
MS in Computer Science (MS-CS) Credit: To upgrade to the for-credit computer science (CSCA) version of this course, use the MS-CS enrollment form. See How It Works.
If you are unsure of which program is the best fit for you, review the MS-CS and MS-DS program websites, and then contact datascience@colorado.edu or mscscoursera-info@colorado.edu if you still have questions.