Running Open LLM Models on Raspberry Pi 5 with Ollama


The world of technology is evolving, and with the advent of open Large Language Models (LLMs), the possibilities have expanded into new realms. The Raspberry Pi 5, known for its affordability and versatility, can now be a host for these powerful models thanks to Ollama, a platform designed to run LLMs locally. In this blog post, we'll guide you through setting up Ollama on your Raspberry Pi 5 and explore how to run open-source models for a variety of applications.

What is Ollama?

Ollama is an innovative framework that simplifies the process of running large language models locally on your machine. It supports a variety of operating systems, including Linux, which is particularly relevant for Raspberry Pi users. With Ollama, you can easily download, manage, and interact with different LLMs, making it an ideal solution for developers, researchers, and hobbyists interested in exploring AI models without relying on cloud services.

Setting Up Ollama on Raspberry Pi 5

Before diving into the installation process, ensure that your Raspberry Pi 5 is running a 64-bit Linux distribution (such as the 64-bit version of Raspberry Pi OS) and has an internet connection. The following steps will guide you through installing Ollama on your device:
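A quick pre-flight check from the terminal can confirm the basics first. This is a minimal sketch using standard Linux tools, not part of the Ollama setup itself; the memory guidance is a rough rule of thumb:

```shell
# Confirm the OS is 64-bit: this should print "aarch64" on a
# 64-bit Raspberry Pi OS (Ollama needs a 64-bit system).
uname -m

# Check total and available RAM; even small models typically
# need several gigabytes to run comfortably.
free -h
```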

  1. Download Ollama: For Linux users, including those on Raspberry Pi, Ollama can be installed by running the following command in the terminal:
curl -fsSL https://ollama.com/install.sh | sh

This script will automatically download and install Ollama on your device.

  2. Verify Installation: After installation, you can verify that Ollama is correctly installed by running:
ollama list

This command lists all available models on your device, indicating that Ollama is ready for use.

Running Open LLM Models

Ollama supports a variety of open-source models, each suitable for different tasks. Here's how to get started with a few popular models:

Llama 2: For general-purpose tasks, Llama 2 is a versatile model. To run it, simply execute:

ollama run llama2

Code Llama: If you're interested in code generation, Code Llama is your go-to model. Start it with:

ollama run codellama

Neural Chat: For creating conversational agents, Neural Chat can be a great choice. Activate it by:

ollama run neural-chat

Customizing Models

Ollama allows for the customization of models through Modelfiles. For instance, if you want to tailor the Llama 2 model for a specific task, you can pull the model and then modify its parameters or system message through a Modelfile. Here's a brief guide:

  1. Pull the Model:
ollama pull llama2

  2. Create a Modelfile: This file specifies the customizations, such as adjusting the temperature setting for creativity or coherence, and setting a specific system message.

  3. Create and Run Your Custom Model: With the Modelfile ready, create your model using:
ollama create mycustommodel -f ./Modelfile

And then run it:
ollama run mycustommodel
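As an illustration, a Modelfile for the steps above might look like the following. The temperature value and system message here are hypothetical examples, not recommendations from this guide:

```
# Base the custom model on the llama2 weights pulled earlier.
FROM llama2

# Lower temperature favors coherence; higher favors creativity.
# (0.7 is an illustrative value, not a recommendation.)
PARAMETER temperature 0.7

# System message steering the model's behavior (example text).
SYSTEM """You are a concise assistant running on a Raspberry Pi."""
```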

Exploring Further

Ollama not only supports running models but also offers advanced features like importing models from other formats, customizing prompts, and even integrating with a REST API for web applications. The platform's versatility opens up a world of possibilities for Raspberry Pi enthusiasts to experiment with AI without significant overhead.
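For example, once the Ollama server is running it listens on port 11434 by default, and its generate endpoint accepts a JSON request. The sketch below only builds and prints the request body; the commented `curl` line shows how it would be sent to a running server (the prompt text is illustrative):

```shell
# Build a JSON request body for Ollama's /api/generate endpoint.
# "stream": false asks for one complete JSON response instead of a token stream.
PAYLOAD='{"model": "llama2", "prompt": "Why is the sky blue?", "stream": false}'
echo "$PAYLOAD"

# With a local Ollama server running, the request would be sent like this:
# curl http://localhost:11434/api/generate -d "$PAYLOAD"
```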


Running open LLM models on Raspberry Pi 5 with Ollama offers a unique opportunity to delve into the world of artificial intelligence from the comfort of your own home or workspace. Whether you're a developer, a student, or just an AI enthusiast, Ollama provides the tools needed to explore and innovate with LLMs. Start your journey today and unlock the potential of large language models on your Raspberry Pi.