
Learner Reviews & Feedback for Sequence Models by DeepLearning.AI

4.8 stars · 30,364 ratings

About the Course

In the fifth course of the Deep Learning Specialization, you will become familiar with sequence models and their exciting applications such as speech recognition, music synthesis, chatbots, machine translation, natural language processing (NLP), and more. By the end, you will be able to build and train Recurrent Neural Networks (RNNs) and commonly used variants such as GRUs and LSTMs; apply RNNs to character-level language modeling; gain experience with natural language processing and word embeddings; and use HuggingFace tokenizers and transformer models to solve NLP tasks such as NER and question answering. The Deep Learning Specialization is a foundational program that will help you understand the capabilities, challenges, and consequences of deep learning and prepare you to participate in the development of leading-edge AI technology. It provides a pathway for you to take the definitive step into the world of AI by helping you gain the knowledge and skills to level up your career.
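As a point of reference for the HuggingFace workflow the description mentions (tokenizers and transformer models for tasks such as NER), the sketch below shows what a minimal named-entity recognition call looks like with the transformers library's pipeline API. It is an illustration only, not course material; the model checkpoint named here is an assumption, and any publicly available token-classification model would work.

from transformers import pipeline

# Minimal NER sketch, illustrative only: the model checkpoint below is an
# assumption (any token-classification model from the Hub would do).
ner = pipeline("ner", model="dslim/bert-base-NER", aggregation_strategy="simple")

# Each returned entry contains the entity group, the matched span, and a score.
print(ner("Andrew Ng teaches sequence models at DeepLearning.AI."))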

Top reviews

AM

Jun 30, 2019

The course is very good and has taught me all the important concepts required to build a sequence model. The assignments are also very neatly and precisely designed for real-world applications.

MK

Mar 13, 2024

Can't express how thankful I am to Andrew Ng. He literally taught me from start to finish when my school didn't touch on this; I learned a lot and decided to apply my knowledge to real-world projects.


3,426 - 3,450 of 3,689 Reviews for Sequence Models

By Bill F

•

Sep 17, 2019

Toward the end of the specialization, there seemed to be a noticeable drop in both the quality of instruction and the programming assignments. Course 5 on sequence models was much more "hand-wavy" than Course 4 on convolutional models. At the end of Course 5, I'm still not sure I learned anything meaningful other than filling in a few blank lines of code to complete the assignments. There was much less intuition provided about the nature of recurrent nets, and translating that to code was foggy. More attention needs to be paid to what the framework is actually doing and how, not just to giving hints for filling in the blanks.

Finally, the grader, especially in week 3, cost me many, many hours of wasted time and frustration chasing phantom problems in the notebook. Coursera and/or deeplearning.ai pays little if any attention to solving the grader and other systemic problems.

By Slobodan C

•

Feb 20, 2018

The best parts of the course are the "intuitions" presented by Prof. Ng. The worst parts are the technical problems with the Coursera infrastructure and the insufficient number of mentors available to offer suggestions. For example, in the forums there are some doubts about the optional parts of the assignments (bad formulas, etc.), but these quite valid questions are just not addressed by anybody. I would also suggest adding a separate course on Keras as part of the specialization, because the Keras introduction offered in the specialization is way too basic. This makes it quite difficult to go through the assignments for the sequence models. It would also be helpful to extend the last two courses to five weeks or so, to cover the course material in more detail.

By Julien B

•

Jul 15, 2018

The lectures are great, but the assignments are not: apart from the hours wasted restarting notebooks (!), I've found it very frustrating to have to switch between "write `j = 0` on the next line" and "figure out the Keras documentation by yourself, the grader will only tell you `it's wrong`" (Keras having such a horrendous API, with many functions having 20+ arguments, and sometimes the course tells you to specify an argument that's not even in the documentation!). There is no balance between the two (you're mostly told "write this, write that", with no space for thinking as in the first course of the specialisation), and the assignments are primarily a chore you have to go through, even though you won't learn much, if anything, from them.

By Franck B

•

Feb 17, 2018

A really big struggle with the dinos, versions of workbooks, and sometimes no logical way to explain why the grader does not validate a working notebook. Pain, frustration, time taken away from proper learning.

On the course itself, some exercises felt like toy work (e.g. a very simple function to check whether a time_segment already exists) in the middle of a Keras deep learning model, where learning to debug and to set up smaller models would have helped me learn more, I think.

Still not sure I am at ease with creating models; we experimented with various approaches over the specialisation, and the selection of a model architecture, or even tuning after the first running version, is still mostly guesswork to me. I will need to digest this and keep learning.

By Benny P

•

Mar 29, 2018

This course provides a great introduction to RNNs and other sequence models and their application to popular fields such as NLP and audio processing. It does a great job of providing the motivation and intuition behind the creation of such sequence models (e.g. LSTM, GRU, Word2vec, GloVe); however, I feel that the theoretical explanations need more depth. During this course I had to refer to other websites to gain more technical understanding of LSTM and GRU. The programming exercises are nice, and they cover many popular topics such as NLP, speech, and music processing, but I struggled when doing them in Keras. I wish some pointers had been provided on where to learn it before doing the assignments.

By José A M

•

Aug 5, 2018

Too many stability issues on the platform to get the notebooks up and running.

There are a few bugs and errors in the lectures and exercises; if they are found by the community, you should update the material even if it involves recording a video again. Too much time is spent in the notebooks figuring out "side" stuff that is not what I am here to learn.

While the CNN course covered the state of the art of the field, for LSTMs I think there is much more that could have been explained.

I missed examples of other types of problems, like forecasting time series, events, and other more business-like applications.

Still, I learnt a lot and would do it again.

By André L

•

Feb 14, 2023

The content is didactic, as are the explanations; a reasonable course for a real beginner. However, the material is some of the worst I have seen: a lot of errors that are flagged with notes between the classes, and MANY annotations and sketches from Andrew in the slides. It mixes handwritten annotations with digital text, a complete mess. I had to edit the PDF in order to make something useful, even though a lot of information is either missing or floating somewhere in the slide. Besides that, some videos are not edited properly: you can hear many repetitions of the same phrase.

By Felipe M

•

Feb 24, 2018

Although the course content is very useful, the hurry in which the course was put together does show. The video was clearly under-edited (as is apparent from Andrew repeating some statements in the expectation that the previous one would be edited out), and the autograders caused me to waste many more hours than truly needed to get my assignments into a format that would be accepted. Finally, I was very disappointed that the specialization was launched and then the last course was pulled, so I had to pay for two months even though I had budgeted my own time to finish it in one.

By Arjan G

•

Mar 3, 2018

Nice to learn how RNNs work. But too rough around the edges for a 5-star score.

Good points:

I learned RNNs, language models and many other useful techniques

Subject matter is mostly well explained in the lectures

Original authors of a technique are cited

Bad points:

Some things should be explained more elaborately, while other explanations could be shorter, especially in the assignments.

Mistakes in the editing of the audio clips of the lectures

Mistakes in the notebooks; sometimes non-intuitive or bad coding practices are used

By Gautam D

•

Jun 17, 2019

To be completely honest, I loved Dr. Andrew's method of teaching. But the assignments just flew over my head because I didn't have enough hours of Keras practice under my belt. I know Keras is there to make things easy, but it's very difficult when you're just trying to pass the grader. The goal of the assignments was fantastic (I mean, generating music, etc. sounds really amazing), but I feel that if some more time had been given to making us better at Keras and other technicalities, I would've loved this course much more!

By Andrei S

•

Sep 19, 2023

This course presents a number of ideas that are used in mainstream sequential models. Unfortunately, the lectures are not as detailed and precise as one might wish (and are not on par with the previous courses in the specialization). Also, the quizzes and programming exercises are not really useful. In particular, the programming exercises are quite superficial, more of a "guess what the test expects" type.

The bottom line: the videos are worth watching (although not perfect); the exercises are useless for learning.

By Javedali S

•

Mar 29, 2018

Good, but I expected more. The main thing I liked about the first 3 courses was that they were really deep. In the last two courses we skipped backpropagation; that is something you could keep optional. I like the way Andrew Ng teaches, going down to the basics, and that is why I came here and paid 40 euros per month. Also, some topics are missing, like generative models, adversarial networks (GANs), etc. It would be good if Andrew could add more courses related to these, and go deep (as it is deep learning :))

By Hossein K

•

Jan 3, 2024

The course introduces a lot of new concepts about sequence models. It seems introductory, as it only scratches the surface. There are lab assignments to reinforce the learning. I found the lab assignments a bit overwhelming, as I encountered new implementation details and new reading materials and references while working against the clock to finish them within the time limit. Moving those materials out of the lab assignments would give students a chance to review them beforehand.

By Kush S

•

Jul 8, 2020

By far the most difficult of the 5 courses, but I am giving it a lower review since the programming assignments are rushed: finishing 2-3 in one week gets hectic and understanding of key concepts is lost. Also, it would help if more time were spent in the videos explaining the concept/model/algorithm used in the assignments, since I understood close to nothing from the assignments in spite of completing them. Finally, the instructions in the assignments were not clear either.

By Kirsten R

•

Aug 11, 2023

There is too much in this single course. I struggled a lot with understanding the concepts and getting through the programming assignments, because I didn't feel like the videos set me up well enough to apply what I learned. For me, it would've been more beneficial to break this up: RNNs and LSTMs in one course, and then attention models and Transformers in another. I just felt a big shift in the organization and the quality of the content compared to the previous courses.

By John S

•

Feb 3, 2019

Interesting and full of excellent lectures, as always from Andrew Ng. The quality of the programming assignments was not as good as in the other courses in the Deep Learning specialisation, though. They drop straight into Keras with no introduction, use several complex model architectures without explanation, and in week 3, 4 out of the 5 'your code' exercises were about audio sampling, which is not very relevant. Again, excellent lectures, just not great programming examples.

By Wolfgang G

•

Jul 12, 2018

Sorry to say, they dropped the ball on this one. The last course of this specialisation has the most advanced topics thrown at you in just three weeks, and it's even more cookbook-like than the previous courses. The material of this part of the specialisation would require a whole course in itself, perhaps 10+ weeks. As it is, I found it at best a guide for self-study, _if_ you have the time for that. Also, support in the forums was very minimal.

By mike b

•

Feb 16, 2021

There are some challenges with the videos, e.g. repetition, blank audio, and variability in the speaker's volume (difficult to hear). In particular, the 'Bleu score' video perhaps needs to be redone. I did not enjoy the labs, mostly because I don't have much interest in NLP, BUT the 'emoji' and 'trigger word' labs were excellent! The 'trigger word' lab especially should be the standard for all labs; it was very well written: clear, good flow, no mistakes.

By Abhirup C

•

Aug 6, 2024

Overall, the course is designed well to introduce new concepts and make students familiar with the new ideas. The labs help give a better understanding of real-life practical usage. MAJOR DOWNSIDE: the Week 4 explanation of the Transformer architecture was done very poorly. Many students completed the labs and assignment by somehow copying code and formulas from other sources without much proper understanding. Please work on this.

By Bradly M

•

Apr 17, 2019

The scope of this course was highly relevant to me, but unfortunately many of the class materials were broken or otherwise incorrect, making some ungraded portions of the assignments difficult or impossible to achieve. Activity on the discussion boards indicates many people have tripped over this for at least the better part of a year, but no corrections have been made. This was quite frustrating and wasted a good amount of my time.

By Yevgen S

•

Jul 21, 2019

I took this course after a long pause after finishing the first 3 courses. I would NOT recommend doing it that way; as a result, I felt rusty on some of the coding practices.

I think the course gives great introductory information on RNNs and LSTMs. The first two weeks of the course are spot on. However, I think the third week is lacking. I had a hard time making a connection between the lecture material and the assignments.

By Adam J

•

Dec 2, 2019

This course was at a really high level and barely scratches the surface of sequence models. It didn't really go into much detail behind any of the theory, and the programming assignments were mostly done for us, so you don't really end up learning much. You certainly won't be ready for a job solving NLP problems after taking this course. If you want that, you're better off going through actual college courses online.

By MD. B U A

•

Oct 16, 2020

First of all, the programming assignments are really copy-paste exercises. There is nothing that really makes you rack your brain. Second, many of the ideas presented in the video lectures are covered very briefly, skipping the explanations. After taking this course, I now know the names of lots of algorithms and models, but that's all I know, only the names. To get broader knowledge of them, I have to look somewhere else now.

By Eero L

•

Jun 7, 2019

The course content and Andrew Ng are great. The submission process for the assignments is absolutely dreadful. You might get 0 points for correct answers, or not, depending on... well, I have no idea what. Maybe it's Jupyter Notebook, maybe it's Keras, or maybe it's something else. But you must have good search engine skills, since you will most likely spend a lot of time searching the discussion forum for answers.

By Amit G

•

Jul 15, 2021

Maybe this is just my observation, but this is the first course where I was unable to understand most of the explanations by Andrew Ng, and the course exercises are more like Python coding exercises (slicing, dicing, filtering). How come this course has stayed the same for the last 3-4 years, without even objective questions? There have been tremendous breakthroughs in the field in the last 3 years, and the course content is still the same.