
Ollama - Local LLM Runtime


Overview

Ollama makes running large language models locally simple and accessible. With just a few commands, you can run Llama 3, Mistral, CodeLlama, and many other models.

Key Features

  • Simple CLI: One-command model execution
  • Model Library: Extensive model collection
  • Cross-platform: macOS, Linux, Windows
  • API Server: REST API built-in
  • Custom Models: Import your own
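The built-in API server can be exercised with a plain curl call. The sketch below assumes the default port 11434, that `ollama serve` is running, and that the llama3 model has already been pulled:

```shell
# Generate a completion via the local REST API (default: localhost:11434).
# Assumes `ollama serve` is running and llama3 has been pulled.
curl http://localhost:11434/api/generate -d '{
  "model": "llama3",
  "prompt": "Why is the sky blue?",
  "stream": false
}'
```

With `"stream": false` the server returns one complete JSON object; by default it streams the response as a sequence of JSON chunks.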

Installation

# macOS / Linux
curl -fsSL https://ollama.com/install.sh | sh

# Run a model
ollama run llama3
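Beyond `ollama run`, a few companion subcommands cover day-to-day model management:

```shell
# Download a model without starting a chat session
ollama pull mistral

# List models installed locally
ollama list

# Delete a model to reclaim disk space
ollama rm mistral
```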

Popular Models

Model       Parameters   Use Case
llama3      8B           General purpose
mistral     7B           Fast, efficient
codellama   7B           Code generation
phi3        3.8B         Lightweight
gemma       7B           Google open model
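Models can also be addressed by tag to select a specific size. The 70B tag below is an illustrative example and assumes your machine has enough memory to load it:

```shell
# Default tag (8B for llama3)
ollama run llama3

# Explicitly request the larger variant
ollama run llama3:70b
```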

Resources

Ollama on GitHub: https://github.com/ollama/ollama
