How To Run Open WebUI Locally with Ollama
This guide will walk you through the steps to get Open WebUI up and running with the Ollama service using a single Docker container. Follow these instructions to set up the environment, log in, and configure Open WebUI.
Additional information can be found in the Open WebUI Documentation. Review this page for details about installing Open WebUI with bundled Ollama support.
Starting Open WebUI
Download and Install Docker Desktop
- Visit Docker Desktop and follow the installation instructions for your operating system.
- If you already have Docker installed and running, you may skip this step.
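If you are not sure whether Docker is installed and running, you can check from a terminal using the standard Docker CLI:
docker --version
docker info
If docker info reports an error about the daemon, start Docker Desktop before continuing.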
Start Open WebUI Container
- Use the following command to download the Open WebUI Docker image that bundles Ollama.
  docker pull ghcr.io/open-webui/open-webui:ollama
- Once the image is downloaded, issue this command to start the container.
  docker run -d -p 3000:8080 -v ~/Documents/ollama:/root/.ollama -v ~/Documents/open-webui:/app/backend/data --name open-webui-ollama ghcr.io/open-webui/open-webui:ollama
  This maps port 3000 on your computer to the container's port 8080 and stores the Ollama models and Open WebUI data under your Documents folder so they persist if the container is removed.
Auto Restart
If you would like Open WebUI to start automatically when your computer boots, add the --restart always flag to the docker run command.
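For example, the start command from the previous step with the restart policy added would look like this (same ports, paths, and container name as before):
docker run -d --restart always -p 3000:8080 -v ~/Documents/ollama:/root/.ollama -v ~/Documents/open-webui:/app/backend/data --name open-webui-ollama ghcr.io/open-webui/open-webui:ollama
If the container already exists, you can apply the policy without recreating it:
docker update --restart always open-webui-ollama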
Allow the Container to Start
It can take a few minutes for the container to start up, depending on your hardware.
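To check on the container, you can inspect its status and follow its logs (the container name comes from the run command above):
docker ps
docker logs -f open-webui-ollama
Press Ctrl+C to stop following the logs.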
Accessing and Configuring Open WebUI
Open WebUI in Browser
- Open your web browser and navigate to http://localhost:3000 (the host port specified in the docker run command).
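If the page does not load, you can confirm the server is responding from a terminal (this assumes curl is available on your system):
curl -I http://localhost:3000
An HTTP response indicates the server is up; if the connection is refused, give the container a little more time and check its logs.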
Sign Up
- Sign up with your information to create an account. The first account created is granted administrator access.
Access Admin Settings
- After logging in, open the Admin Panel by clicking on your profile and selecting the admin section.
Configure Models
- Go to the Admin Panel.
- Then go to Settings -> Models.
- Pull the llama3.2:1b model by entering the model name and clicking the pull button.
This may take a few minutes depending on your internet speed. Once the download is complete, you may need to refresh the page to see the model in the model selection drop-down on the chat page.
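If you prefer the command line, the bundled Ollama instance inside the container can also pull models directly. A minimal sketch, assuming the container name from the run command above and that the ollama binary is available inside the bundled image:
docker exec -it open-webui-ollama ollama pull llama3.2:1b
The model will then appear in the model selection drop-down after a page refresh.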
Model Selection
The llama3.2:1b model should be able to run on most standard hardware. Feel free to choose whichever model(s) you would like to use. However, keep in mind that larger models require a modern GPU with plenty of VRAM. If you are using a newer Mac with an M-series chip, you should be able to run larger models in the 7B range. You can browse the Ollama model library to see what other models are available: Ollama Models.
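To see which models have already been downloaded, you can list them from inside the container (same assumptions as the previous example):
docker exec open-webui-ollama ollama list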
Configure Audio Model (Optional)
If you would like to enable Speech-To-Text (STT), you will need to download the Whisper model to handle audio transcription.
- Go to the Admin Panel.
- Then go to Settings -> Audio.
- Pull the base Whisper model by clicking the pull button under the STT Model section.
By following these steps, you will have Open WebUI up and running with the Ollama service in a Docker container, and you will be able to log in and configure the necessary models for your use case.