Run DeepSeek R1 locally on your device (Beginner-Friendly Guide)

A straightforward guide to running DeepSeek R1 locally regardless of your background.


*Image: DeepSeek R1 running locally in the Jan AI interface, showing the chat interface and model settings*

DeepSeek R1 is one of the best open-source models on the market right now, and you can run it on your own computer!

The full DeepSeek R1 needs data-center-grade hardware to run at its full potential, so we'll use a smaller distilled version that works well on regular computers.

Why use an optimized version?

  • Efficient performance on standard hardware
  • Faster download and initialization
  • Optimized storage requirements
  • Maintains most of the original model’s capabilities

Quick Steps at a Glance

  1. Download Jan
  2. Select a model version
  3. Choose settings
  4. Set up the prompt template & start using DeepSeek R1

Let’s walk through each step with detailed instructions.

Step 1: Download Jan

Jan is an open-source application that lets you run AI models locally. It's available for Windows, Mac, and Linux, and it's the easiest way for beginners to get started.

*Image: Jan AI interface, showing the download button*
  1. Visit jan.ai
  2. Download the appropriate version for your operating system
  3. Install the app

Step 2: Choose Your DeepSeek R1 Version

To run AI models like DeepSeek R1 on your computer, you’ll need something called VRAM (Video Memory). Think of VRAM as your computer’s special memory for handling complex tasks like gaming or, in our case, running AI models. It’s different from regular RAM - VRAM is part of your graphics card (GPU).

Let’s first check how much VRAM your computer has. Don’t worry if it’s not much - DeepSeek R1 has versions for all kinds of computers!

Finding your VRAM is simple:

  • On Windows: Press `Windows + R`, type `dxdiag`, hit Enter, and look under the “Display” tab
  • On Mac: Click the Apple menu, select “About This Mac”, then “More Info”, and check under “Graphics/Displays”
  • On Linux: Open a terminal and run `nvidia-smi` for NVIDIA GPUs, or `lspci -v | grep -i vga` for other graphics cards
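If you have an NVIDIA card and prefer to script the check, here's a minimal Python sketch that parses `nvidia-smi` output. It assumes `nvidia-smi` is on your PATH; AMD, Intel, and Apple GPUs need the manual methods above.

```python
import subprocess

def total_vram_mib() -> int:
    """Return total memory of the first NVIDIA GPU, in MiB."""
    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=memory.total",
         "--format=csv,noheader,nounits"],
        capture_output=True, text=True, check=True,
    )
    # nvidia-smi prints one line per GPU, e.g. "8192"; take the first.
    return int(out.stdout.splitlines()[0].strip())

if __name__ == "__main__":
    vram = total_vram_mib()
    print(f"Total VRAM: {vram} MiB (~{vram / 1024:.1f} GB)")
```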

Once you know your VRAM, here's which version of DeepSeek R1 will work best for you. If you have:

  • 6GB VRAM: Go for the 1.5B version - it's fast and efficient
  • 8GB VRAM: You can run the 7B or 8B versions, which offer great capabilities
  • 16GB or more VRAM: You can run the larger 14B and 32B models (the 70B distill needs 48GB+)

Available versions and basic requirements for DeepSeek R1 distills:

| Version   | Model Link                         | Required VRAM |
|-----------|------------------------------------|---------------|
| Qwen 1.5B | DeepSeek-R1-Distill-Qwen-1.5B-GGUF | 6GB+          |
| Qwen 7B   | DeepSeek-R1-Distill-Qwen-7B-GGUF   | 8GB+          |
| Llama 8B  | DeepSeek-R1-Distill-Llama-8B-GGUF  | 8GB+          |
| Qwen 14B  | DeepSeek-R1-Distill-Qwen-14B-GGUF  | 16GB+         |
| Qwen 32B  | DeepSeek-R1-Distill-Qwen-32B-GGUF  | 16GB+         |
| Llama 70B | DeepSeek-R1-Distill-Llama-70B-GGUF | 48GB+         |
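To tie the table together, here's a tiny, purely illustrative helper that picks the largest distill your card can hold; the thresholds and model names come straight from the table, the function itself is hypothetical:

```python
# (minimum VRAM in GB, model) - ordered from largest to smallest
RECOMMENDATIONS = [
    (48, "DeepSeek-R1-Distill-Llama-70B-GGUF"),
    (16, "DeepSeek-R1-Distill-Qwen-32B-GGUF"),
    (8,  "DeepSeek-R1-Distill-Llama-8B-GGUF"),
    (6,  "DeepSeek-R1-Distill-Qwen-1.5B-GGUF"),
]

def pick_model(vram_gb: float) -> str:
    """Pick the largest distill whose minimum VRAM fits the given card."""
    for min_vram, model in RECOMMENDATIONS:
        if vram_gb >= min_vram:
            return model
    return "Less than 6GB VRAM - no comfortable local fit"

print(pick_model(8))   # DeepSeek-R1-Distill-Llama-8B-GGUF
print(pick_model(24))  # DeepSeek-R1-Distill-Qwen-32B-GGUF
```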

To download your chosen model:

  1. Launch Jan and navigate to Jan Hub using the sidebar
     *Image: Jan AI interface, showing the model library*
  2. Paste the model link into this field:
     *Image: Jan AI interface, showing the model link input field*

Step 3: Configure Model Settings

When configuring your model, you'll encounter quantization options. Quantization shrinks a model by storing its weights at lower precision, trading a little quality for a lot of memory. As a rule of thumb, Q4 variants (such as Q4_K_M) are the balanced default for most machines, while Q8 variants keep more precision but need roughly twice the memory. If you're unsure, start with Q4.

Step 4: Configure Prompt Template

Final configuration step:

  1. Access Model Settings via the sidebar
  2. Locate the Prompt Template configuration
  3. Use this specific format:
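The exact template ships with the model, but for the DeepSeek R1 distills it commonly takes the shape below, where `{prompt}` is Jan's placeholder for your message; double-check it against your model's Hugging Face card if generation looks off:

```
<|User|>{prompt}<|Assistant|>
```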

This template ensures proper communication between you and the model - it marks where your message ends and the model's reply begins.

You’re now ready to interact with DeepSeek R1:

*Image: Jan interface, showing DeepSeek R1 running locally*
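Beyond the chat window, Jan can also expose an OpenAI-compatible local API server (enable it in Jan's settings). The sketch below shows the general idea; the address and model ID are assumptions based on Jan's defaults, so copy the exact model ID from Jan and adjust both to your setup.

```python
import json
import urllib.request

payload = {
    # Hypothetical model ID - copy the real one from Jan's model list.
    "model": "deepseek-r1-distill-qwen-7b",
    "messages": [
        {"role": "user", "content": "Explain VRAM in one sentence."}
    ],
}

# Assumes Jan's local server is enabled and listening on its default
# address; change the host/port if yours differs.
req = urllib.request.Request(
    "http://localhost:1337/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    reply = json.load(resp)
print(reply["choices"][0]["message"]["content"])
```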

Need Assistance?