Curious about running a powerful AI model like DeepSeek R1 on your own computer? Maybe you’ve heard about its impressive abilities, like answering questions, writing code, or solving math problems, and want to try it out yourself. The good news is you don’t need to be a tech wizard to get started. In this guide, I’ll walk you through setting up and running DeepSeek R1 locally, step by step, using simple tools and plain language.

DeepSeek R1 is a free, open-source language model created by DeepSeek, a company working on cutting-edge AI. Running it on your own machine means you can use it without the internet, keep your data private, and avoid paying for cloud services. Whether you’re a student, a hobbyist, or just someone who loves experimenting with tech, this guide is for you. Let’s get started!
Why Should You Run DeepSeek R1 Locally?
Before we jump into the “how,” let’s talk about the “why.” Here are some cool reasons to run DeepSeek R1 on your computer:
- It’s Private: Everything stays on your device—no one else sees your questions or data.
- It’s Free: No monthly fees or subscriptions, just a one-time setup.
- Works Offline: Once it’s ready, you can use it anywhere, even without Wi-Fi.
- You’re in Control: Want to tweak it or use it in your own projects? You can!
Think of DeepSeek R1 as your personal AI assistant, ready to help with homework, coding, or just chatting about random topics—all from the comfort of your own computer.
What You’ll Need to Get Started
Don’t worry, you don’t need a supercomputer to try this out! Here’s what you’ll need:
Your Computer’s Power (Hardware)
DeepSeek R1 comes in different sizes, measured in “parameters” (like 1.5B, 7B, or even 70B); the bigger the model, the more power it needs. The smaller sizes are actually distilled versions: compact models trained to mimic the full 671B one so they fit on ordinary hardware. Here’s a simple guide:
- Small Models (1.5B or 7B):
- Memory (RAM): At least 8GB (16GB is better)
- Storage: Around 5-10GB free space
- Processor: Any modern computer should work (a GPU helps but isn’t a must)
- Medium Models (14B or 32B):
- Memory: 24GB or more
- Storage: 20-40GB free
- GPU: Nice to have (like an NVIDIA card with 6GB+)
- Big Models (70B or 671B):
- Memory: 64GB+ (128GB+ for the huge 671B version)
- Storage: 130GB+ (671B takes a lot of space!)
- GPU: Strongly suggested (24GB+ VRAM, like an NVIDIA RTX 3090)
For beginners, I recommend starting with the 7B model. It’s powerful enough to impress you but won’t overwhelm your computer.
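Not sure what your machine has? A few terminal commands will tell you. Here’s a quick sketch for Linux (macOS users can check About This Mac instead, and Windows users can look in Task Manager):

```shell
# Quick hardware check before picking a model size (Linux).
free -h | head -n 2        # total and available RAM
df -h . | tail -n 1        # free disk space on this drive
# Only matters if you have an NVIDIA card:
if command -v nvidia-smi >/dev/null 2>&1; then
  nvidia-smi --query-gpu=name,memory.total --format=csv,noheader
else
  echo "No NVIDIA GPU detected (CPU-only is fine for the small models)"
fi
```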
Tools You’ll Need (Software)
- Operating System: Works on Windows, macOS, or Linux—whatever you’re using is fine!
- Ollama: A free tool we’ll use to run the model (super easy to install).
- Terminal: A place to type commands (don’t worry, I’ll show you exactly what to type).
- Internet: Just for the setup and download—after that, you’re good to go offline.
Let’s Set It Up: Step-by-Step Instructions
Step 1: Get Ollama on Your Computer
Ollama is like a friendly helper that makes running AI models a breeze. Here’s how to install it:
- Go to the Ollama Website:
- Open your browser and visit ollama.com.
- Look for the “Download” button and pick your system (Windows, macOS, or Linux).
- Install It:
- Windows: Double-click the `.exe` file you downloaded and follow the instructions.
- macOS: Open the `.dmg` file, drag Ollama to your Applications folder, and open it.
- Linux: In your terminal, type this command and hit Enter:

```shell
curl -fsSL https://ollama.com/install.sh | sh
```
- Check It Works:
- Open a terminal (on Windows, search for “PowerShell”; on macOS/Linux, use “Terminal”).
- Type this and press Enter:

```shell
ollama --version
```

- If you see a version number (like “0.1.25”), you’re golden! If not, double-check the installation steps.
Ollama runs quietly in the background, so you’re ready for the next step.
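One more sanity check, if you like: Ollama serves a local API on port 11434, and its root endpoint answers with a short status message when the service is up:

```shell
# Ollama's background service listens on localhost:11434 by default.
if curl -sf http://127.0.0.1:11434/ 2>/dev/null; then
  echo   # add a newline after the status message
else
  echo "Ollama service not reachable - try launching the Ollama app first"
fi
```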
Step 2: Download DeepSeek R1
Now, let’s grab the DeepSeek R1 model and get it onto your computer.
- Open Your Terminal:
- Same place you used before—PowerShell for Windows, Terminal for macOS/Linux.
- Pick a Model and Download It:
- Type one of these commands based on what your computer can handle:
```shell
ollama run deepseek-r1:1.5b   # small and easy
ollama run deepseek-r1:7b     # great all-around pick (my recommendation)
ollama run deepseek-r1:8b     # slightly bigger
ollama run deepseek-r1:14b    # for advanced users with more power
ollama run deepseek-r1:32b    # for advanced users with more power
ollama run deepseek-r1:70b    # heavy-duty hardware only
ollama run deepseek-r1:671b   # the massive full model; only if you have a beast of a machine
```
- The first time you run this, it’ll download the model. The 7B version is about 4.7GB, so it might take a few minutes depending on your internet speed.
- Test It Out:
- Once it’s downloaded, you’ll see a `>>>` prompt in the terminal. Type something simple like: Hi, what’s your name?
- It should reply with something like, “I’m DeepSeek R1, nice to meet you!” (You may also see its reasoning between `<think>` tags before the answer; that’s normal for this model.) If it works, you’re all set!
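A few related commands are worth knowing at this point: `ollama pull` downloads a model without starting a chat, `ollama list` shows what’s already on disk, and `ollama rm` deletes a model to free space. A small sketch (it just prints a hint if Ollama isn’t installed yet):

```shell
# Manage downloaded models without opening a chat session.
if command -v ollama >/dev/null 2>&1; then
  ollama pull deepseek-r1:7b     # download (or update) the model only
  ollama list                    # every model on disk, with sizes
  # ollama rm deepseek-r1:1.5b   # uncomment to delete a model you no longer need
else
  echo "Install Ollama first (see Step 1)"
fi
```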
Step 3: Start Chatting with DeepSeek R1
Now that it’s running, you can ask it anything! Here’s how:
- Try a Question:
```
>>> What’s the weather like on the moon?
```
- It might say, “There’s no weather on the moon—no atmosphere means no rain or wind!”
- Stop the Chat:
- When you’re done, press `Ctrl + D` (or `Ctrl + C`) to exit.
Pretty cool, right? If you want a fancier way to talk to it, check out the next step.
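By the way, you don’t have to use the interactive prompt at all. `ollama run` also accepts a question directly on the command line, prints the answer, and exits, which is handy for scripts. A sketch, assuming the 7B model from Step 2:

```shell
# One-shot question: no >>> prompt, the answer goes straight to the terminal.
if command -v ollama >/dev/null 2>&1; then
  ollama run deepseek-r1:7b "Explain in one sentence why the sky is blue."
else
  echo "Install Ollama first (see Step 1)"
fi
```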
Step 4 (Optional): Add a ChatGPT-Like Interface
Typing in the terminal is fun, but if you’d rather have a nice webpage to chat on, try Open WebUI.
- Install Docker:
- Download Docker Desktop from docker.com and install it.
- Open Docker to make sure it’s running.
- Start Open WebUI:
- In your terminal, type:

```shell
docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui ghcr.io/open-webui/open-webui:main
```

- The `--add-host` flag lets the container reach Ollama running on your computer (needed on Linux).
- This sets up a web interface for you.
- Open It Up:
- Go to `http://localhost:3000` in your browser.
- Sign up with a quick account.
- Link It to Ollama:
- In Open WebUI, click Settings > Connections.
- Set the Ollama API host to `http://host.docker.internal:11434` and save. (Inside Docker, `127.0.0.1` points at the container itself, not your computer; on Linux, `host.docker.internal` only works if the `docker run` command included `--add-host=host.docker.internal:host-gateway`.)
- Pick Your Model:
- Go to Models, add `deepseek-r1:7b` (or whichever you downloaded), and start chatting!
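Once the container is up, a few everyday Docker commands keep it under control; this sketch uses the `open-webui` container name from the run command above:

```shell
# Day-to-day management of the Open WebUI container.
if command -v docker >/dev/null 2>&1; then
  docker logs --tail 20 open-webui 2>/dev/null || true   # recent logs, if the container exists
  docker ps --filter name=open-webui                     # is it running?
  # docker stop open-webui    # uncomment to stop it
  # docker start open-webui   # ...and to bring it back later
else
  echo "Install Docker first (see the start of Step 4)"
fi
```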
Tips to Make It Run Smoothly
- Got a Graphics Card?: If you have an NVIDIA GPU, Ollama will use it automatically and speed things up a lot; just make sure your NVIDIA drivers are up to date.
- Free Up Space: Close other programs to give DeepSeek R1 more memory to work with.
- Start Small: The 7B model is perfect for beginners—don’t jump to 70B unless you’re sure your computer can handle it.
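To see whether the GPU is actually being used, recent versions of Ollama include an `ollama ps` command that lists loaded models and which processor they’re running on:

```shell
# While a model is loaded, check whether it's running on GPU or CPU.
if command -v ollama >/dev/null 2>&1; then
  ollama ps   # the PROCESSOR column shows e.g. "100% GPU" or "100% CPU"
else
  echo "Install Ollama first (see Step 1)"
fi
```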
Fun Things to Try with DeepSeek R1
- Help with Homework:
- Ask: “Solve 3x – 7 = 14.”
- Answer: “Add 7 to both sides: 3x = 21. Divide by 3: x = 7.”
- Write Some Code:
- Ask: “Make a Python program to say ‘Hello, world!’”
- Answer:

```python
print("Hello, world!")
```
- Learn Something New:
- Ask: “What’s AI in simple words?”
- Answer: “AI is when computers learn to think and act a bit like humans, like answering questions or playing games.”
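All of this is scriptable, too: Ollama exposes a local REST API on port 11434, so your own programs can send prompts to DeepSeek R1. A minimal `curl` sketch (the request only succeeds if Ollama is running and the model is downloaded):

```shell
# Send one prompt to the local Ollama API and print the JSON response.
payload='{"model": "deepseek-r1:7b", "prompt": "What is AI in simple words?", "stream": false}'
echo "Request body: $payload"
curl -sf http://127.0.0.1:11434/api/generate -d "$payload" 2>/dev/null \
  || echo "Ollama is not running - start it and try again"
```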
What If Something Goes Wrong?
- It Won’t Start: Check if you have enough memory or storage. Try a smaller model.
- Too Slow: Close other apps or use a smaller version like 1.5B.
- Download Stuck: Make sure your internet is working and you have enough free space.
Wrapping Up
And that’s it, you made it! Running DeepSeek R1 locally is an awesome way to play with a smart AI without needing the cloud. With Ollama, it’s super simple to set up, and you can pick a model that fits your computer. Start with the 7B version, ask it some fun questions, and see what it can do. If you get stuck, don’t worry—the Ollama and DeepSeek communities are full of friendly folks who can help.