One way to run AI interactions on your own hardware is via Ollama.
If you have a technical background, Ollama can be simply described as “Docker for AI models”. It has an easy install process and lets you get started within seconds - even as a beginner.
It also works on hardware without a GPU (even older laptops with slow Intel processors), though you might have to be patient in those cases.
Once you have Ollama installed, make sure to follow the instructions for your operating system to add Novelcrafter to the allowed origins list (otherwise it won’t be able to connect properly):
Mac/Linux
On Unix-based systems, you need to start Ollama like this:
OLLAMA_ORIGINS=https://app.novelcrafter.com ollama serve
Open the Terminal app
Paste the command in and hit Enter
You have to do this every time you want to use Ollama with Novelcrafter, unless you set the environment variable globally in your system (which is out of the scope of this tutorial).
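The reason you have to repeat this is that the `OLLAMA_ORIGINS=...` prefix sets the variable only for that single command, not for your whole system. A quick way to see this behavior (using `printenv` here instead of `ollama serve`, so you don't have to start the server) is:

```shell
# The VAR=value prefix applies only to the command on the same line:
OLLAMA_ORIGINS=https://app.novelcrafter.com printenv OLLAMA_ORIGINS
# prints: https://app.novelcrafter.com

# In a fresh command without the prefix, the variable is not set,
# so printenv prints nothing (and exits non-zero):
printenv OLLAMA_ORIGINS || echo "not set"
```

This is why the prefix must accompany `ollama serve` on every launch unless you export the variable globally.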
Windows
On Windows, the Ollama agent runs in the background, so you need to set a global environment variable with the proper value.
setx OLLAMA_ORIGINS https://app.novelcrafter.com
Open CMD (Command Prompt), either by entering "cmd" in your Windows search or by using Windows Terminal
Paste the command and hit Enter. Note that setx only affects newly started programs, so quit and restart Ollama afterwards for the variable to take effect.
Add the Ollama connection in Novelcrafter
In your AI connections, simply add a new connection for "Ollama". If everything is connected correctly, you should see a list of models (if you have pulled any).
Novelcrafter itself does not pull/add any AI models on its own, so you need to do so in the terminal yourself.
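Pulling a model is a single terminal command. As a sketch (the model name below is just an example - pick any model from the Ollama library, and note this requires the `ollama` binary to be installed):

```shell
# Download a model so it shows up in Novelcrafter's model list
# ("llama3.2" is an example name; substitute any model you prefer):
ollama pull llama3.2

# List the models installed locally:
ollama list
```

Once the pull finishes, the model should appear in the model list of your Ollama connection in Novelcrafter.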