By TG
•Dec 1, 2020
I learned so many things in this module. I learned how to do error analysis and different kinds of learning techniques. Thanks to Professor Andrew Ng for providing such valuable and up-to-date material.
By Uday B C
•Sep 30, 2019
.
By sonal g
•Sep 28, 2019
f
By Ishmael M
•Jul 15, 2019
V
By Caoliangjie
•Feb 20, 2019
T
By Dayvid V
•Oct 31, 2018
f
By Michele C
•Jul 25, 2018
v
By Huifang L
•May 7, 2018
V
By Yujie C
•Feb 1, 2018
Good
By Bapiraju J
•Oct 18, 2017
G
By StudyExchange
•Aug 21, 2017
V
By Aleksei A K
•Jun 22, 2023
This is an excellent course for those who want to develop applications that use neural networks meaningfully. However, I did not find any guidance on the problem of deciding what data to put on the input layer.
For example, for a neural network that evaluates a chess position, there are at least 4 different approaches: 64 numbers or codes describing the contents of each cell of the chessboard; 32 numbers describing the positions of the chess pieces (or maybe 64 again, if we describe each piece's position by file and rank rather than by a single cell number); 12 64-bit sets giving the placement of each kind of piece (6 types, from pawn to king, times 2 colors) on the chessboard (this is the representation the leading chess programs use to maximize the speed of enumerating possible lines of moves); or, finally, just a variable-length standard FEN string, which gives the generally accepted description of a chess position (though also rank by rank, i.e., consisting of 8 parts). Before settling this by trial and error, I would like to hear some kind of "philosophy" about it.
Also, at the end of this course, I would have liked to try working with code in a notebook, as in the previous courses.
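[Editor's note: to make the encodings listed in the review above concrete, here is a minimal Python sketch. It is an editorial illustration, not course material: the piece codes, the board-building helper, and the example layout are all assumptions chosen for the demo. It shows the per-square encoding and the bitboard encoding (12 masks = 6 piece types x 2 colors); the 32-number and FEN variants are omitted.]

```python
import numpy as np

# Hypothetical piece codes for this example: positive = white, negative = black.
PIECES = {"P": 1, "N": 2, "B": 3, "R": 4, "Q": 5, "K": 6}

def board_from_rows(rows):
    """Build an 8x8 integer board from 8 strings ('.' = empty, letter case = color)."""
    board = np.zeros((8, 8), dtype=np.int8)
    for r, row in enumerate(rows):
        for c, ch in enumerate(row):
            if ch != ".":
                code = PIECES[ch.upper()]
                board[r, c] = code if ch.isupper() else -code
    return board

def encode_squares(board):
    """Encoding 1: one code per square, 64 numbers."""
    return board.flatten()

def encode_bitboards(board):
    """Encoding 3: one 64-entry 0/1 mask per (piece type, color) pair."""
    planes = np.zeros((6, 2, 64), dtype=np.int8)
    for sq, code in enumerate(board.flatten()):
        if code != 0:
            planes[abs(code) - 1, 0 if code > 0 else 1, sq] = 1
    return planes.reshape(12, 64)

start = board_from_rows([
    "rnbqkbnr", "pppppppp", "........", "........",
    "........", "........", "PPPPPPPP", "RNBQKBNR",
])
print(encode_squares(start).shape)    # (64,)
print(encode_bitboards(start).shape)  # (12, 64)
```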
By Ali K
•Mar 29, 2020
In this course, the instructor draws on his experience from several machine learning and deep learning projects to explain how to prioritize tasks in a big machine learning project. This course does not introduce the reader to CNNs or RNNs but rather makes the user aware of ML/DL tips for making the most efficient use of time and resources. Some of the most important questions addressed in this course are: 1) Why is a single evaluation metric important, and what are some of the widely used metrics? 2) What is human-level performance, and is it a good estimate of Bayes error? 3) What is orthogonalization in the context of ML tasks, and why is it important? 4) How do you measure avoidable bias, variance error, data mismatch, etc.? 5) How do you address data mismatch error? 6) What is transfer learning, and how is it different from multi-task learning? 7) Should one opt for a traditional or an end-to-end deep learning approach?
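[Editor's note: the avoidable bias / variance / data mismatch breakdown mentioned in question 4 reduces to simple differences between error rates. A minimal sketch, with made-up numbers, using human-level error as a proxy for Bayes error as the course suggests:]

```python
# Made-up error rates for illustration only (not from the course).
human_error     = 0.01  # human-level performance, proxy for Bayes error
train_error     = 0.05
train_dev_error = 0.06  # held-out data from the *training* distribution
dev_error       = 0.10  # data from the target (dev/test) distribution

avoidable_bias = train_error - human_error      # fit the training set better
variance       = train_dev_error - train_error  # generalization gap
data_mismatch  = dev_error - train_dev_error    # train vs. dev distribution shift

print(f"avoidable bias: {avoidable_bias:.2f}")  # 0.04 -> biggest lever, act here
print(f"variance:       {variance:.2f}")        # 0.01
print(f"data mismatch:  {data_mismatch:.2f}")   # 0.04
```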
By Zhenwei Z
•Apr 3, 2020
This course covers machine learning strategy: setting goals, error analysis and data distribution, transfer and multi-task learning, and end-to-end deep neural network training. It greatly reinforces the fundamentals we learned in the first two courses, and a deep understanding of this material is very helpful for studying harder topics, such as convolutional neural networks. The greatest help of this course is that, through two case studies, it teaches us how to solve problems encountered in the actual development process and what the most reasonable solutions are.
By Oleg P
•Nov 30, 2018
In this course, Andrew gives very interesting practical insights into how to proceed in different project settings and how to speed up each iteration. Think of it as a stand-alone optimization algorithm for deep learning projects. What I would further expect from this course are practical assignments, e.g., data acquisition and preprocessing patterns, data (image) augmentation, and transfer and multi-task learning (preferably building upon the introduction to TensorFlow in the previous course). As I already stated in my previous reviews, optional assignments without grading would also do the job of motivating students to do something on their own.
By Joe Z
•Jan 6, 2019
Great insights as usual for these courses. Especially useful are the strategic insights for dealing with data mismatch between train and dev/test data sets; my favorite is the idea of a "train-dev" set to separate variance from the differences in data distributions, which had never occurred to me despite it being obvious in hindsight. The "flight sim" tests were more challenging than I expected, and really helped to cement the concepts into memory. The only criticism is that some coding assignments would have been helpful to put these ideas into practice in a guided manner. Otherwise, great course as I have come to expect from Andrew.
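[Editor's note: the "train-dev" set the reviewer praises is simply a slice of the training-distribution data held out from training, so that the variance gap can be separated from the distribution-mismatch gap. A minimal sketch; the toy data and the 90/10 split ratio are assumptions:]

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a large training set drawn from the *training* distribution
# (e.g., web images); the dev set would come from the target distribution.
X_train = rng.normal(size=(10_000, 20))
y_train = rng.integers(0, 2, size=10_000)

# Carve out a train-dev set: same distribution as training data, never trained on.
perm = rng.permutation(len(X_train))
split = int(0.9 * len(X_train))
X_tr, y_tr = X_train[perm[:split]], y_train[perm[:split]]
X_td, y_td = X_train[perm[split:]], y_train[perm[split:]]

# error(train-dev) - error(train)     -> variance (generalization gap)
# error(dev)       - error(train-dev) -> data mismatch between distributions
print(len(X_tr), len(X_td))  # 9000 1000
```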
By Vignesh R
•Jun 1, 2018
This course is more about explaining how to set up your analysis universe (train/dev sets, etc.) and where to go when you hit a roadblock, i.e., when to concentrate on bias/variance, etc.
Suggestions: unlike the other courses, there are no programming assignments here. Maybe some programming assignments plus a quiz in a case-study format would have been more helpful, e.g., present a case, ask the student to write a piece of code to calculate bias and other metrics, and then ask questions based on the derived metrics instead of directly stating the values for human-level error and the Bayes estimate.
By AEAM
•Jun 16, 2019
This is a great course, something I will keep coming back to even after I'm done, because it covers strategy and rules of thumb for machine learning / deep learning approaches. It introduced me to concepts that were brand new to me, and that was a great outcome. I wish the audio and the notes were better; writing on the small screen really hinders expressiveness. I would rather have Dr. Ng write and draw on a chalkboard than on the small screen, as I feel the small screen really constrains his process. Still, it's a great course!
By Tobie
•Oct 24, 2017
A very quick course (significantly faster to complete than the preceding two courses in the specialization) that is usefully targeted at the practical aspects of how to go about developing a neural network. Prof Ng sets out a clear and logical approach to building models, diagnosing issues, and iteratively improving them. The one critique I have is that a few of the topics repeat material already covered in the earlier courses, and the videos are not edited quite as well. Still, a very worthwhile use of very little time!
By Teodor C
•Jun 14, 2020
Very useful tips and insights on how to approach supervised ML projects. However, a more in-depth case study would be interesting, to try to answer questions like: where does easily available input data come from? (Sure, CCD cameras and other tech, but do not forget all those little labelling hands ^^.) What makes hand-designed features succeed in one domain or another? Can we bootstrap ML back into tools that help with valuable hand design? Can unsupervised learning help with cleaning up input data for ML?
By Julien B
•Sep 21, 2017
The course content is very instructive and will greatly improve your performance on real-world machine learning projects. Basically, this course gives you recipes for improving the performance of your model when something is wrong with your data or when you do not have enough data.
Compared to the first two courses in the deep learning specialization, the videos were of somewhat lower quality and not completely edited, and the course could have featured programming assignments, notably on transfer and multi-task learning.
By Andre G
•Aug 12, 2021
The presenter's pronunciation is increasingly difficult to understand. Some things are endlessly repeated. The videos have big editing problems. Overall, it pretends to instill some wisdom in the learner, which is completely misplaced given the audience of this course; much more real-world experience would be needed here. From the perspective of a beginner, this was very theoretical and will probably be forgotten before it becomes useful in real life. Not a fan of this course at all.
By Dorian P
•Feb 22, 2020
The course is absolutely fantastic and Andrew is a fantastic lecturer; however, I could not give it 5 stars, as there were parts of the videos that had not been edited well. There are parts where Andrew pauses and repeats himself, which seems intentional, to allow the editor to remove the stumbled sections and make the talking seem continuous, but editing these sections has been overlooked. It's a shame, as it's a slight but noticeable issue in an otherwise flawless course.
By Gilles D
•Sep 5, 2017
The course content is very interesting and opened my eyes to different strategies for improving the results of a machine learning project.
The recommendations also helped me greatly to dispel some of the myths and bluffs that run rampant in this developing field.
It makes me a better engineer. On the minus side, the course is not as polished as the two previous ones, and some editing could help cut the content that is repeated.
Other than that, it has some great ideas, and I am glad I took it.
By Anton D
•Oct 26, 2017
I liked how the course gives insights that would help in progressing efficiently with a DL project at hand. This is the kind of thing one needs to know about in addition to all the technical aspects.
I think the course would benefit from even more examples in which a concrete project is examined and the student could see how the team progressed: what the iterations were, what challenges were resolved, what the intermediate results were, and, of course, the final result.
By Joshua H
•May 30, 2020
The course gave an extremely holistic insight into what applying deep learning theory may be like in a commercial context. It felt as if Andrew left no stone unturned, answering every question a student could have either in the videos or in the weekly quizzes. The only adjustment I'd have liked to see is Andrew spending more time elaborating on multi-task learning networks (such as how to set up backpropagation in a network that uses multi-task learning).
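[Editor's note: on the reviewer's multi-task question, here is a minimal NumPy sketch (an editorial illustration, not course code) of a multi-task loss with one sigmoid output per task; the convention of marking missing labels with -1 is an assumption made for the example.]

```python
import numpy as np

def multi_task_loss(logits, labels):
    """logits: (m, T) raw scores, one column per task;
    labels: (m, T) in {0, 1}, with -1 marking a missing label."""
    probs = 1.0 / (1.0 + np.exp(-logits))
    mask = labels != -1                    # ignore unlabeled task entries
    y = np.where(mask, labels, 0).astype(float)
    per_entry = -(y * np.log(probs + 1e-12)
                  + (1 - y) * np.log(1 - probs + 1e-12))
    return (per_entry * mask).sum() / mask.sum()

logits = np.array([[2.0, -1.0,  0.5],
                   [0.1,  1.5, -2.0]])
labels = np.array([[1, 0, -1],
                   [0, -1, 0]])            # -1 marks a missing label
print(multi_task_loss(logits, labels))
```

Backpropagation then needs no special initialization: the loss is a single scalar, masked entries contribute zero gradient, and gradients flow through the shared layers as usual.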