Running DeepSeek R1 Locally on a Raspberry Pi

DeepSeek R1 is a model that has shaken the generative-AI world, and many people are interested in trying it out. The author of this tutorial set out to test whether DeepSeek R1 can run on a Raspberry Pi, a small and affordable device, and walks through how to run the DeepSeek R1 models on a Raspberry Pi 5 and evaluate their performance.

To follow along, you need a Raspberry Pi 5 with 8 GB or 16 GB of RAM, a microSD card with Raspberry Pi OS, a stable power supply, and an internet connection. The tutorial consists of five steps: configuring the Raspberry Pi, installing Ollama, running the DeepSeek R1 models, deploying a Dockerized chat application, and experimenting with a Raspberry Pi cluster. (Illustrative sketches of the last three steps follow below.)

The author tested two models. The 1.5B-parameter model ran acceptably on an 8-16 GB Raspberry Pi 5, generating about 6.12 tokens/second while using roughly 3 GB of RAM. The 7B-parameter model was impractically slow at about 1.43 tokens/second while using roughly 6 GB of RAM.

The author notes that while DeepSeek R1 on a Raspberry Pi won't replace cloud-based LLMs, it is a fun way to explore AI on budget hardware. The key takeaways: the 1.5B model is feasible for lightweight tasks, the 7B model is impractical because of its speed, and the best use cases are educational experiments and prototyping edge-AI applications.
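The throughput figures above are straightforward to reproduce. Below is a minimal benchmarking sketch in Python, assuming Ollama is serving on its default port 11434 and the model has already been pulled (for example with `ollama pull deepseek-r1:1.5b`). It reads the eval_count and eval_duration fields that Ollama's /api/generate endpoint reports; your exact numbers will vary with the prompt, quantization, and cooling.

```python
# Rough benchmark sketch: measure generation speed of a local Ollama model.
# Assumes Ollama is running on the Pi (default: http://localhost:11434)
# and that the model tag below has already been pulled.
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"
MODEL = "deepseek-r1:1.5b"  # swap in "deepseek-r1:7b" to compare

payload = json.dumps({
    "model": MODEL,
    "prompt": "Explain in one paragraph why the sky is blue.",
    "stream": False,  # wait for the full response so we get final stats
}).encode()

req = urllib.request.Request(OLLAMA_URL, data=payload,
                             headers={"Content-Type": "application/json"})
with urllib.request.urlopen(req) as resp:
    result = json.load(resp)

# Ollama reports eval_count (generated tokens) and eval_duration (nanoseconds).
tokens = result["eval_count"]
seconds = result["eval_duration"] / 1e9
print(f"{MODEL}: {tokens} tokens in {seconds:.1f}s -> {tokens / seconds:.2f} tokens/s")
```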
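The tutorial's Dockerized chat application is not reproduced here. As a stand-in, the sketch below is a minimal terminal chat loop against Ollama's /api/chat endpoint that keeps conversation history; the endpoint and field names are Ollama's documented API, the rest is illustrative, and you could containerize it with a small Dockerfile to approximate the article's setup.

```python
# Minimal terminal chat loop against a local Ollama server -- an illustrative
# stand-in for the article's Dockerized chat app, not the author's actual code.
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/chat"
MODEL = "deepseek-r1:1.5b"
history = []  # running conversation so the model keeps context

while True:
    user = input("you> ").strip()
    if user in ("quit", "exit"):
        break
    history.append({"role": "user", "content": user})
    payload = json.dumps({"model": MODEL, "messages": history,
                          "stream": False}).encode()
    req = urllib.request.Request(OLLAMA_URL, data=payload,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        reply = json.load(resp)["message"]["content"]
    history.append({"role": "assistant", "content": reply})
    print(f"bot> {reply}")
```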
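The cluster step is the most open-ended. Stock Ollama does not split one model across machines, so one plausible pattern, assumed here rather than taken from the article, is to run an independent Ollama instance on every Pi and fan requests out across them; the hostnames pi1.local and pi2.local are placeholders for your own nodes.

```python
# Illustrative cluster pattern: each Pi runs its own Ollama instance and a
# coordinator fans prompts out to the nodes in parallel. Hostnames below are
# placeholders; the article's actual cluster setup may differ.
import itertools
import json
import urllib.request
from concurrent.futures import ThreadPoolExecutor

NODES = ["http://pi1.local:11434", "http://pi2.local:11434"]  # placeholder hosts
MODEL = "deepseek-r1:1.5b"
node_cycle = itertools.cycle(NODES)

def ask(node: str, prompt: str) -> str:
    payload = json.dumps({"model": MODEL, "prompt": prompt,
                          "stream": False}).encode()
    req = urllib.request.Request(f"{node}/api/generate", data=payload,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["response"]

prompts = ["Summarize what a Raspberry Pi is.",
           "What is a large language model?"]

# Round-robin each prompt to the next node; requests run concurrently.
with ThreadPoolExecutor(max_workers=len(NODES)) as pool:
    futures = [pool.submit(ask, next(node_cycle), p) for p in prompts]
    for p, f in zip(prompts, futures):
        print(f"{p}\n -> {f.result()[:120]}...\n")
```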