
Getting Started with Ollama


Introduction

Ollama is one of the simplest ways to run large language models locally: a single install, a single command to download and chat with a model. This guide will help you get started with running LLMs on your own machine.

Why Ollama?

  • Simple installation
  • No internet required after download
  • Privacy - your data stays local
  • Supports many popular models

Installation

macOS / Linux

curl -fsSL https://ollama.com/install.sh | sh

Windows

Download the installer from ollama.com/download and run it.

Running Your First Model

# Download and run Llama 3
ollama run llama3

# Other popular models
ollama run mistral
ollama run codellama
ollama run gemma
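Running a model with a prompt on the command line (ollama run MODEL "PROMPT") prints a single completion and exits instead of opening the interactive session. If you want to script that from Python, a minimal sketch looks like the following; the helper names (build_run_cmd, run_model) are my own, and run_model assumes the ollama CLI is installed and a model has been pulled.

```python
import subprocess

def build_run_cmd(model: str, prompt: str) -> list[str]:
    """Build the argument list for a one-shot, non-interactive run.

    Passing a prompt after the model name makes `ollama run` print a
    single completion and exit instead of starting the REPL.
    """
    return ["ollama", "run", model, prompt]

def run_model(model: str, prompt: str) -> str:
    """Invoke the local ollama CLI and return its stdout.

    Requires Ollama to be installed and the model downloaded.
    """
    result = subprocess.run(
        build_run_cmd(model, prompt),
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()
```

For anything beyond one-off prompts, the HTTP API shown later in this guide is usually a better fit than shelling out.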

Available Models

Model       Size    Description
llama3      4.7GB   Meta Llama 3
mistral     4.1GB   Mistral AI
codellama   4.8GB   Code generation
gemma       5GB     Google Gemma
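If you are picking a model programmatically (for example, to fit a small disk), the table above can be captured as a small lookup. This is just a sketch built from the sizes listed here; the MODELS dict and smallest_model helper are illustrative names, not part of Ollama.

```python
# Approximate download sizes, taken from the table above.
MODELS = {
    "llama3":    {"size_gb": 4.7, "description": "Meta Llama 3"},
    "mistral":   {"size_gb": 4.1, "description": "Mistral AI"},
    "codellama": {"size_gb": 4.8, "description": "Code generation"},
    "gemma":     {"size_gb": 5.0, "description": "Google Gemma"},
}

def smallest_model() -> str:
    """Return the model with the smallest download size."""
    return min(MODELS, key=lambda name: MODELS[name]["size_gb"])

print(smallest_model())  # -> mistral
```

Actual sizes vary by model tag and quantization, so check `ollama list` after pulling.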

Using the API

import requests

# Ask the local Ollama server for a completion.
# "stream": False returns one JSON object; by default the API
# streams newline-delimited JSON chunks, which would break .json().
response = requests.post('http://localhost:11434/api/generate', json={
    "model": "llama3",
    "prompt": "Hello, how are you?",
    "stream": False
})
response.raise_for_status()
print(response.json()["response"])
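With streaming left on (the default), the server sends one JSON object per line, each carrying a fragment of text in its "response" field, with "done": true on the final chunk. A minimal sketch of reassembling such a stream follows; the sample chunks are illustrative stand-ins in the shape the API emits, not captured server output.

```python
import json

def collect_stream(lines):
    """Reassemble a streamed /api/generate response.

    Each line is a JSON object; partial text lives in "response"
    and the final chunk sets "done": true.
    """
    text = []
    for line in lines:
        chunk = json.loads(line)
        text.append(chunk.get("response", ""))
        if chunk.get("done"):
            break
    return "".join(text)

# Illustrative chunks in the streamed format:
sample = [
    '{"response": "Hello", "done": false}',
    '{"response": ", world!", "done": true}',
]
print(collect_stream(sample))  # -> Hello, world!
```

In a real client you would pass requests.post(..., stream=True) and iterate over response.iter_lines() instead of a hard-coded list.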

Resources

Source: JackAI Hub