By OK
•Jan 28, 2024
Easily a five-star course. You get a combination of an overview of advanced topics and in-depth explanations of all the necessary concepts. One of the best in this domain. Good work. Thank you, teachers!
By C
•Jul 10, 2023
A very good course covering many different areas, from use cases to the mathematical underpinnings and the societal impacts, with labs that let you actually play around with the algorithms.
By Николай Б
•Jul 30, 2023
Great
By Simone L
•Aug 21, 2023
Super
By mehmet o
•Aug 6, 2023
great
By ABEER H M
•Aug 27, 2024
Thank you
By Khaoula E
•Mar 30, 2024
good
By Buri B
•Mar 3, 2024
nice
By Nivrutti R P
•Feb 25, 2024
good
By zed a
•Jan 24, 2024
good
By Padma M
•Dec 11, 2023
good
By Fraz
•Dec 10, 2023
All the instructors were good and the delivery was mostly excellent; however, the course was a bit too short and could be improved in several ways. There were very few quizzes in the video lectures, and the ones that were present were too easy or obvious (they did not require much thinking). There should be good, quality quizzes in most video lessons, similar to the OG ML course by Andrew Ng. The inline quizzes in videos help "reinforce" the learning in humans. This is proven by the research yet to be carried out :D Another aspect that I did not like was the Jupyter notebooks for running exercises: all solutions were already provided, and it does not help in learning the concepts if all we have to do is press Shift+Enter and merely observe the code and results. Actual learning requires some trial and error as part of the exercises; once again, the OG ML course by Andrew Ng did a good job of accomplishing this with the Octave exercises.
By Deleted A
•Nov 2, 2023
A delightful and very up-to-date (most of the references have been published in the last two years) overview of LLMs with hands-on lab sessions in Python. Prompt engineering, zero/one/few-shot inference, instruction fine-tuning (FT), parameter-efficient FT (PEFT), low-rank adaptation (LoRA), RL from human feedback, program-aided language (PAL) models, retrieval-augmented generation (RAG), etc. In short, everything you need to know about the state of the art in LLMs in 2023. There are a couple of things that disappointed me, though. The first is that, unlike other Coursera courses, there isn't any discussion forum to exchange ideas with other students or post questions. The second is that there isn't any clear contact (either the course's instructors or Coursera) to ask questions about problems with the AWS platform when working on the labs.
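To make one of the techniques listed above concrete, here is a minimal sketch of parameter-efficient fine-tuning with LoRA using the HuggingFace transformers and peft libraries; the checkpoint name, target modules, and hyperparameters are illustrative assumptions, not the course's actual lab code.

```python
# Minimal PEFT/LoRA sketch (illustrative; not the course lab code).
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
from peft import LoraConfig, get_peft_model, TaskType

base_model_name = "google/flan-t5-base"  # assumed example checkpoint
tokenizer = AutoTokenizer.from_pretrained(base_model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(base_model_name)

# LoRA freezes the base weights and trains small low-rank adapter matrices.
lora_config = LoraConfig(
    task_type=TaskType.SEQ_2_SEQ_LM,
    r=8,                         # rank of the low-rank update (assumed)
    lora_alpha=32,               # scaling factor for the update (assumed)
    lora_dropout=0.05,
    target_modules=["q", "v"],   # attention projections to adapt (T5 naming)
)
peft_model = get_peft_model(model, lora_config)
peft_model.print_trainable_parameters()  # typically well under 1% of the weights
```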
By Sun X
•Sep 15, 2023
Good entry-level course in general. Thanks to the course team for bringing us one of the few online courses on this timely topic.
I really like the lab sessions, although they could be further improved by adding some exercises, such as writing the code for a whole LLM task.
The Proximal Policy Optimization lecture by Dr. Ehsan Kamalinejad is fantastic. It helped me read the PPO paper with both a quantitative and an intuitive understanding (the paper's clipped objective is written out after this review). In comparison, the sections on some important LLM architectures, such as the Transformer and InstructGPT, lean a bit too heavily on intuition.
The final week is way too packed. Students need to know more than just the names and a short intro of new LLM techniques or architectures. It would be better to have a separate lab for each topic (such as PTQ, RAG, etc.) for learners to REALLY understand what's going on.
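For reference, the PPO lecture praised above centres on the clipped surrogate objective from the PPO paper (Schulman et al., 2017); the notation below follows the paper rather than the course slides.

```latex
% PPO clipped surrogate objective (Schulman et al., 2017);
% notation follows the paper, not the course material.
\[
  r_t(\theta) = \frac{\pi_\theta(a_t \mid s_t)}{\pi_{\theta_{\text{old}}}(a_t \mid s_t)},
  \qquad
  L^{\text{CLIP}}(\theta) =
  \hat{\mathbb{E}}_t\!\left[
    \min\!\Big( r_t(\theta)\,\hat{A}_t,\;
                \operatorname{clip}\big(r_t(\theta),\, 1-\epsilon,\, 1+\epsilon\big)\,\hat{A}_t \Big)
  \right]
\]
```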
By Vinicius N
•Sep 16, 2024
I have to admit, I was a bit unsure at first whether the course content would be able to navigate around all the hype on generative AI and LLMs. I was pleasantly surprised to find that the topics are based on scientific research and offer a broad perspective on the present issues in this kind of AI project. If I had to compare this course to others from DeepLearning.AI, I'd say the only thing missing is the high-quality, problem-based, challenging hands-on practice in Jupyter Notebooks. Most of the practices are ready to go; you just have to run the code, which I don't think is ideal. But overall, it's still a great course to recommend!
By Rohith K
•Dec 31, 2023
Good overview of the different stages of developing an LLM application. I felt it gave me enough knowledge to understand the current research and applications being developed for large language models. The course gives you links to relevant papers that you can read for more in-depth coverage of how some of the latest LLMs are constructed and trained. I would have liked the labs to be more hands-on. You basically run pre-built lab notebooks that use existing model implementations from widely available libraries like HuggingFace; there was no requirement to write any code.
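As a rough illustration of the kind of pre-built lab step described above, loading an existing HuggingFace model implementation and simply running it looks roughly like the sketch below; the checkpoint name and prompt are assumptions, not taken from the actual notebooks.

```python
# Minimal "run the notebook" style inference step (illustrative assumptions).
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_name = "google/flan-t5-base"  # assumed example checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

prompt = ("Summarize the following dialogue:\n"
          "A: Can we move the meeting?\n"
          "B: Sure, how about 3pm?")
inputs = tokenizer(prompt, return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```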
By Mike R
•Aug 14, 2023
This is a good introduction - the lectures are very good and cover many critical aspects of training LLMs, such as PEFT, RLHF, and the challenges of scaling and deployment. The lab notebooks walk through the lecture ideas, though you just need to run the notebook - there are no exercises in the labs for you to do, which is why I have given 4 stars, as usually the labs are where the learning happens.
I hope that this turns into a specialisation with exercises, as the teaching team really know their stuff and there are almost no other options for curated learning in LLMs right now.
By Eliu M M
•Mar 1, 2024
A course with a lot of new information, very well explained even for people with no prior knowledge of the subject. It's not just an introductory theory course; there's a lot of technical explanation of the LLM core. In the end, I can say that I understand much better what LLMs are, how they are structured, their architecture, how they are trained, and how they are aligned to strive for improved capabilities. The labs are very supportive, providing enough guidance to understand the direction of the programming and what needs to be done, but admittedly, I couldn't have programmed any of it on my own.
By Dan S
•Jun 4, 2024
Great course, in-depth and well paced. The presentation and user interface are excellent, the flow is good, and the ability to replay and navigate the videos is excellent, as is the ability to download them and keep them for later review. My one complaint, and the reason for -1 star, is the glaring issues in the transcript text. It is clear that it was auto-generated and _never_ vetted for accuracy. I expected more effort to be put into this, as having a text-searchable take-away is valuable.
By Daniel W
•Aug 19, 2023
An excellent introduction to important, contemporary concepts in LLMs. A lot of detail is packed into each week, supplemented by hands-on exercises to give learners a feel for the topics covered. I felt the hands-on element could have been improved by providing opportunities to solve problems ourselves. Otherwise, this course has been a great use of time, and I have come away with a sense of better understanding and accomplishment.
By Amit I
•Jun 15, 2024
Very good overview of LLM topics like prompt engineering and fine-tuning. The instructors are good and the labs are well thought out. The labs show how the concepts learned in lectures are implemented, but it would have been nice to take a deeper dive into the API. The module on fine-tuning was very good, with a lot of good references. If you are looking for a course to get under the hood of LLMs/GenAI, I would recommend it.
By Robert S
•Dec 3, 2023
I enjoyed the videos, and this course suited me well: no complex maths, but at the same time I feel I have learned a great deal about how LLMs work, how they are trained, and how their performance can be enhanced for the task they're applied to. I would have liked a chance to develop a use-case scenario that could lead to peer-based assessment; in my case, this would be around generating content for language learning.
By Jason O
•Jan 28, 2024
There is lots of good information in this course. The one thing that prevents it from getting 5 stars, in my opinion, is the lack of reasoning exercises in the lab (e.g. generate code for a function that does 'x' and returns 'y' in the context of the generative process). Even if the function requirements were relatively simple, it would help retention of the concepts.
By Ilyas B
•Dec 29, 2024
Nice theoretical concepts, but I was expecting more practice-oriented tasks/labs. I would suggest including lab quizzes as well, or perhaps giving a task to submit, with a sample solution released later to review against; that would give more confidence in terms of practical exposure. So, a nice course to start with, but more practical labs should be added, beyond the current five.
By Soumya B
•Apr 21, 2024
The breadth of the topics covered is excellent. The assignments are not engaging at all, and I hope they can be improved in future iterations (maybe by including advanced, non-graded parts in the notebooks). More curated learning materials (links to blogs and articles not behind a paywall - e.g. Lilian Wang's blog) would be an interesting addition.
By Parth P
•Sep 2, 2024
This is a great course covering every aspect of GenAI with LLMs. I feel it could be even better with clearer explanations of the lab code modules and more detailed comments. I found it difficult to understand what certain parts of the code do and how certain parameter values would impact different function calls.
By Julien G
•Jan 14, 2024
A good introduction to the Generative AI lifecycle. Accessible even if your ML background is limited. I appreciated the labs that allowed me to see code in action. As a Python developer, I only wish that the labs would be more interactive, calling for more modifications to better understand the libraries that were in use.