# Use Ollama for Your MLE Tasks
## Overview
MLE is designed to work with Ollama as its LLM engine, so you can generate code for your MLE tasks with locally hosted models.
## Prerequisites
Install Ollama and the Ollama CLI. Refer to the Ollama documentation for more details.
After installing Ollama, you can use the following command to quickly test it locally:

```shell
ollama run gemma2:2b
```
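Under the hood, `ollama run` serves the model through a local HTTP server (port 11434 by default). Once the quick test above works, you can also exercise the model programmatically via Ollama's REST `/api/generate` endpoint. The sketch below assumes the default host/port and the `gemma2:2b` model pulled above:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default endpoint


def build_request(model: str, prompt: str) -> dict:
    """Build the JSON payload for Ollama's /api/generate endpoint."""
    return {
        "model": model,
        "prompt": prompt,
        "stream": False,  # return one JSON object instead of a token stream
    }


def generate(model: str, prompt: str, url: str = OLLAMA_URL) -> str:
    """Send a prompt to a locally running Ollama server and return its reply."""
    payload = json.dumps(build_request(model, prompt)).encode("utf-8")
    req = urllib.request.Request(
        url, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]


if __name__ == "__main__":
    # Requires `ollama run gemma2:2b` (or `ollama serve`) to be running locally.
    print(generate("gemma2:2b", "Say hello in one word."))
```

This is only a sanity-check sketch; MLE talks to Ollama for you, so you never need to call the API directly.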
## Using Ollama as the LLM engine
Create a new MLE project:

```shell
mle new ollama-test
```
When prompted, choose `ollama` as your LLM engine.
Start the project with Ollama and the Gemma2 model:

```shell
cd ollama-test
mle start --model gemma2:2b
```
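`mle start` needs a reachable Ollama server; if nothing is listening, generation requests will fail. A quick pre-flight check, assuming Ollama's default port 11434 (the helper name `ollama_is_up` is illustrative, not part of the MLE CLI):

```python
import urllib.error
import urllib.request


def ollama_is_up(base_url: str = "http://localhost:11434") -> bool:
    """Return True if an Ollama server answers on base_url, False otherwise."""
    try:
        # /api/tags lists locally pulled models and is cheap to call
        with urllib.request.urlopen(base_url + "/api/tags", timeout=2) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False


if __name__ == "__main__":
    if ollama_is_up():
        print("Ollama is running; safe to call `mle start --model gemma2:2b`.")
    else:
        print("Ollama is not reachable; start it first, e.g. `ollama run gemma2:2b`.")
```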
## Next steps
You can follow the Train an Edge Model with Your Own CSV Dataset tutorial for the remaining details.