Running Deepseek R1 on Your Local Hardware for Free
Deepseek R1 is a powerful, fast, and compact LLM that you can download and run on your own PC, completely free and offline. This article covers everything from hosting the model locally to pairing it with a ChatGPT-like web interface.
Introduction to Deepseek R1
Deepseek R1 is a new model that was trained at a fraction of the cost of comparable models and runs well on much more modest hardware. It's a significant step forward for AI and LLM technology. Like other open LLMs, you can download it and run it on your own PC, completely offline and completely free.
Installing Ollama
In order to run Deepseek R1 on your system, you need a program that can load the model and actually use it. Ollama is a program that lets you load the model and either chat with it directly or connect to it from other programs. You can download Ollama from ollama.com: simply choose Download, then select Windows, Mac, or Linux for your operating system.
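Once the installer finishes, you can confirm Ollama is available from a terminal. The commands below are a quick sanity check, assuming the Ollama CLI was added to your PATH:

```shell
# Check that the Ollama CLI is installed and on your PATH
ollama --version

# List any models you have already downloaded (empty on a fresh install)
ollama list
```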
Downloading and Installing Deepseek R1 Model
Once you have Ollama installed, you need to download the Deepseek R1 model. You have several options to choose from: Deepseek R1 1.5B, 7B, 8B, 14B, 32B, and 70B. These are all distilled versions of the model, which run on systems with less RAM and VRAM.
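As a rough, unofficial rule of thumb, you can pick a tag based on how much memory your machine has. The thresholds below are assumptions rather than official sizing guidance, and the `pick_model` helper is purely hypothetical; check the model pages for real requirements:

```shell
# Hypothetical helper: map available memory (GB) to a deepseek-r1 tag.
# Thresholds are rough assumptions, not official sizing guidance.
pick_model() {
  ram_gb=$1
  if   [ "$ram_gb" -ge 64 ]; then echo "deepseek-r1:70b"
  elif [ "$ram_gb" -ge 32 ]; then echo "deepseek-r1:32b"
  elif [ "$ram_gb" -ge 16 ]; then echo "deepseek-r1:14b"
  elif [ "$ram_gb" -ge 8 ];  then echo "deepseek-r1:8b"
  else                            echo "deepseek-r1:1.5b"
  fi
}

pick_model 16   # prints deepseek-r1:14b
```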
Running Deepseek R1
To run Deepseek R1, paste the command you copied into a terminal, Command Prompt, or PowerShell window. Ollama will then reach out and download the model itself. The model size increases quite drastically the further you go down the list. Once the download finishes and verifies, you can start chatting with the model.
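The copied command is just `ollama run` followed by a model tag. For example, using the tags listed on Ollama's Deepseek R1 model page:

```shell
# Download (if needed) and start an interactive chat with the 8B distill
ollama run deepseek-r1:8b

# Or download a larger model without starting a chat
ollama pull deepseek-r1:32b
```

`ollama run` drops you straight into an interactive prompt; type `/bye` to exit the chat.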
Chatting with Deepseek R1
You can chat with Deepseek R1 directly in the terminal, Command Prompt, or PowerShell window: ask it questions, and it will respond. You can also use a web interface, such as web.chatboxai.app, which is a much prettier front end than the terminal window.
Using a GUI for Ollama
There are many GUI options available for Ollama. One popular choice is Chatbox: you can use it in the browser at web.chatboxai.app, or head to chatboxai.app to download the desktop app.
Enabling Ollama API
To enable the Ollama API for outside programs, exit Ollama, then hit Start and search for "environment variables". You need to set two variables: OLLAMA_HOST and OLLAMA_ORIGINS. The exact values are listed in the Ollama documentation.
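On macOS or Linux, the equivalent is setting the same two variables in your shell before starting the Ollama server. The values below are the ones commonly used to expose the API to other apps, but double-check them against the Ollama documentation for your version; on Windows, enter them through the environment-variables dialog instead:

```shell
# Listen on all interfaces instead of only localhost
export OLLAMA_HOST="0.0.0.0"

# Allow browser-based clients (such as Chatbox's web app) to call the API
export OLLAMA_ORIGINS="*"

# Then restart the server so the variables take effect:
# ollama serve
```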
Selecting the Deepseek R1 Model
Once you have the Ollama API enabled, you can select the Deepseek R1 model that you want to use. You can choose from the list of available models, including 1.5B, 7B, 8B, 14B, 32B, and 70B.
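Under the hood, a GUI builds its model picker by querying Ollama's REST endpoint for installed models. You can run the same check yourself with curl, assuming Ollama's default port of 11434:

```shell
# List locally installed models (what a GUI's model picker reads)
curl http://localhost:11434/api/tags

# Send a one-off prompt to a specific model
curl http://localhost:11434/api/generate \
  -d '{"model": "deepseek-r1:8b", "prompt": "Hello", "stream": false}'
```

If the first command returns a JSON list of models, the API is reachable and any GUI pointed at that address should work.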
Conclusion
Running Deepseek R1 on your local hardware is a great way to experience the power of AI without paying for cloud services or subscription fees. With Ollama and the Deepseek R1 model, you can chat with a powerful AI model and get high-quality answers to your questions. Whether you're a developer, a researcher, or just someone curious about AI, Deepseek R1 is definitely worth checking out.