Ollama makes it trivially easy to run LLMs such as Llama 3, Mistral, Gemma, and Phi locally on macOS, Linux, and Windows with a single command. It also exposes a local HTTP API compatible with OpenAI's API format, making it one of the simplest ways for developers to run private, local AI models without cloud dependencies.
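A minimal sketch of both workflows, assuming Ollama is installed and the daemon is running on its default port (11434); the model name `llama3` is just an example and can be any model you have pulled:

```shell
# Pull a model and chat with it interactively from the terminal
ollama pull llama3
ollama run llama3 "Explain what a bloom filter is in one sentence."

# Query the same model through the OpenAI-compatible endpoint.
# Existing OpenAI client libraries work by pointing the base URL
# at http://localhost:11434/v1 (the API key can be any placeholder).
curl http://localhost:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "llama3",
    "messages": [
      {"role": "user", "content": "Explain what a bloom filter is in one sentence."}
    ]
  }'
```

Because the endpoint mirrors OpenAI's chat-completions format, swapping a cloud model for a local one is usually just a base-URL change in existing code.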