Connecting to Ollama

How to connect and run local models via the Ollama runner.

One way to run AI interactions on your own hardware is via Ollama.

If you have a technical background, Ollama can be described as “Docker for AI models”. It has an easy install process and lets you get started within seconds, even as a beginner.

It also works on hardware without a GPU (even older laptops with slow Intel processors), though you might have to be patient in those cases.

Once you have Ollama installed, follow the instructions for your operating system to add Novelcrafter to the list of allowed origins (otherwise it won’t be able to connect properly):

Mac/Linux

On Unix-based systems, you need to start Ollama like this:

OLLAMA_ORIGINS=https://app.novelcrafter.com ollama serve
  1. Open the Terminal app

  2. Paste the command in and hit enter
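
To check that the server is reachable, you can open a second terminal window and query Ollama’s default port (11434); it should respond with “Ollama is running”:

curl http://localhost:11434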

You have to do this every time you want to use Ollama with Novelcrafter, unless you set the environment variable globally on your system; a minimal example of that follows below.
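
If you do want to make the setting permanent, one common approach (assuming zsh, the default shell on recent macOS versions) is to add the export to your shell profile and then open a new terminal:

echo 'export OLLAMA_ORIGINS=https://app.novelcrafter.com' >> ~/.zshrc

After that, starting the server with a plain "ollama serve" will pick up the variable.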

Windows

On Windows, the Ollama agent runs in the background, so you need to set a global environment variable with the proper value.

setx OLLAMA_ORIGINS https://app.novelcrafter.com
  1. Open CMD (Command Prompt), either by entering "cmd" in your Windows search or by using Windows Terminal

  2. Paste the command and hit enter
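
Note that setx only affects processes started afterwards, so you will likely need to quit Ollama (e.g. from its tray icon) and start it again for the change to take effect. To confirm the value was stored, open a new Command Prompt window and run:

echo %OLLAMA_ORIGINS%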

Add the Ollama connection in Novelcrafter

In your AI connections, add a new connection for "Ollama". If everything is set up correctly, you should see a list of models (if you have pulled any).

Novelcrafter does not pull or add any AI models on its own, so you need to do that in the terminal yourself, as shown below.
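
For example, to download a model and confirm that it is available (llama3 is just an example here; substitute any model from the Ollama library):

ollama pull llama3
ollama list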
