Local LLMs with llamafile
Completed by Daehyun Kim
July 30, 2024
3 hours (approximately)
Daehyun Kim's account is verified. Coursera certifies their successful completion of Local LLMs with llamafile.
What you will learn
Package open-source AI models into portable llamafile executables
Deploy llamafiles locally across Windows, macOS, and Linux
Monitor system metrics like GPU usage when running models
Skills you will gain
- Machine Learning
- Real Time Data
- Application Programming Interface (API)
- System Monitoring
- Large Language Modeling
- Cross Platform Development
- LLM Application
- Application Deployment
- Model Deployment