How to Run DeepSeek-R1 Locally with Ollama

Get DeepSeek-R1 running on your own machine in minutes using Ollama.

DeepSeek-R1 is an open-source reasoning model that you can run locally with Ollama. Here’s how to set it up.

Step 1: Install Ollama

Download the installer from ollama.com (macOS and Windows), or on Linux run curl -fsSL https://ollama.com/install.sh | sh.
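
Once installation finishes, a quick sanity check confirms the CLI is on your PATH and prints the installed version:

    ollama --version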

Step 2: Pull DeepSeek-R1

Choose the size that fits your hardware (the smaller tags are distilled variants of the full R1 model), then verify the download as shown below:

  • ollama pull deepseek-r1:1.5b (lightweight)
  • ollama pull deepseek-r1:7b (balanced)
  • ollama pull deepseek-r1:14b (powerful)
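
Once a pull finishes, ollama list shows every model stored locally along with its tag and size, so you can confirm the download worked:

    ollama list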

Step 3: Run the Model

Start chatting with ollama run deepseek-r1:7b. For programmatic access, make sure the server is running (ollama serve, though the desktop app starts it automatically) and send requests to http://localhost:11434/api/generate.
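
For example, you can query the generate endpoint with curl. The JSON fields below follow Ollama's documented API; setting "stream" to false returns one complete JSON response instead of a token stream. Note that DeepSeek-R1 includes its reasoning inside <think>...</think> tags in the output.

    curl http://localhost:11434/api/generate -d '{
      "model": "deepseek-r1:7b",
      "prompt": "Why is the sky blue?",
      "stream": false
    }'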

Frequently Asked Questions

How much RAM do I need to run DeepSeek-R1?

The 1.5B model runs on 8GB of RAM; the 7B needs 16GB, and the 14B works best with 32GB or a GPU with 8GB+ of VRAM.

Does DeepSeek-R1 work with a web interface like Open WebUI?

Yes, Open WebUI automatically detects Ollama models and works well with DeepSeek-R1.
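
If you want to try it, one common way to launch Open WebUI is with Docker; the command below is adapted from Open WebUI's documentation and assumes Docker is installed and Ollama is running on the same host:

    docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway \
      -v open-webui:/app/backend/data --name open-webui --restart always \
      ghcr.io/open-webui/open-webui:main

Then open http://localhost:3000 in your browser and pick a deepseek-r1 tag from the model selector.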

Can I use DeepSeek-R1 commercially?

Yes, it’s released under the MIT license, so you can use it commercially.
