I just heard about Open WebUI from Harish's comment on my Ollama and DeepSeek post and decided to check it out. From their website:
Open WebUI is an extensible, feature-rich, and user-friendly self-hosted AI platform designed to operate entirely offline. It supports various LLM runners like Ollama and OpenAI-compatible APIs, with built-in inference engine for RAG, making it a powerful AI deployment solution.
Recently I've been playing around with {ellmer} and {shinychat}, R packages for chatting with LLMs (including ones served by Ollama) from R and Shiny. Open WebUI seems like another useful tool for interacting with Ollama, so I decided to give it a go.
All I had to do was:
docker pull ghcr.io/open-webui/open-webui:main
docker run --rm -d \
  --network=host \
  -v /data/open-webui:/app/backend/data \
  -e OLLAMA_BASE_URL=http://127.0.0.1:11434 \
  --name open-webui \
  ghcr.io/open-webui/open-webui:main
and I got Open WebUI working! (I already have Docker and Ollama up and running, so it was easy.)
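If you want to confirm the container came up cleanly before opening the browser, a couple of standard Docker commands will do it (the container name here matches the --name flag above):

# Check that the container is running
docker ps --filter name=open-webui

# Follow the startup logs until Open WebUI reports it's ready
docker logs -f open-webui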
Heading over to 192.168.0.43:8080 and creating an account, I get an interface that looks like ChatGPT but is, of course, Open WebUI running locally on my server.
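As a quick sanity check from the server itself, you can confirm both services are answering before opening the browser. The addresses below are just my setup: Ollama on its default port 11434 and Open WebUI on 8080 under --network=host.

# Ollama replies with "Ollama is running"
curl http://127.0.0.1:11434

# Open WebUI should return HTTP headers on its web port
curl -I http://127.0.0.1:8080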
Now to test it out!
Since I'm using a "reasoning" model, there's chain-of-thought output, which is collapsed by default but can be toggled open to read. It "thought" for 5 minutes (very slow) because I'm serving Ollama on a cheap refurbished computer I purchased on Amazon. You can easily change the model near the top left-hand corner; Open WebUI lists all models that are available to Ollama.
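That model list comes straight from whatever Ollama has pulled locally, so adding another option is just a matter of pulling it on the server. The model name below is only an example:

# See which models Ollama already has
ollama list

# Pull another model; it appears in Open WebUI's model picker after a refresh
ollama pull llama3.2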
There's a lot more Open WebUI can do and I will definitely be checking out their various tutorials!
