Does Novelcrafter provide its own AI?

Learn how to use AI within Novelcrafter


At its core, Novelcrafter is a writing app, but one that supports AI through external integrations.

To keep pricing as low as possible for users, and to allow the greatest flexibility, we do not currently offer an AI model of our own. To use AI, you will need to connect to an external vendor or run a local model.

Because the cost of the software is separate from the cost of AI usage, you can choose which model you want to use and control your expenditure, without worrying about a bundled credit allowance running out.

So what do I do?

Let's walk you through the options for how you can connect to AI, and which might be the best option for you.

I want to access all the big models

Connecting to OpenRouter gives you access to hundreds of models, including all of the OpenAI, Anthropic, and Gemini models. When these companies release new models, OpenRouter usually supports them within the day.

OpenRouter works on a credit-based system (think of a pay-as-you-go phone plan), so you are in control of how much you spend. You can check the activity page to see exactly what each message costs.
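If you're curious what happens behind the scenes, the connection is just a standard API call. Novelcrafter handles this for you once your key is entered, but here is a rough sketch in Python (the API key and model slug are illustrative placeholders; check OpenRouter's model list for current names):

```python
import requests

# Minimal sketch of a chat completion request against OpenRouter's
# OpenAI-compatible API. The key and model slug below are placeholders.
API_KEY = "sk-or-..."  # your OpenRouter API key

response = requests.post(
    "https://openrouter.ai/api/v1/chat/completions",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "model": "anthropic/claude-3.5-sonnet",  # illustrative model slug
        "messages": [
            {"role": "user", "content": "Write an opening line for a mystery novel."}
        ],
    },
    timeout=60,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```

Each request like this draws down your credit balance, which is why the activity page can show a per-message cost.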

I want to pay a fixed monthly price

If you would prefer the security of a fixed monthly price for unlimited usage (more like an unlimited pay-monthly plan), there are vendors like Featherless that give you access to thousands of models for a flat fee.

These are typically community fine-tunes and open-source models rather than the 'big' mainstream models, but many of our users love them for NSFW writing.

I don't want anyone (or any AI company) seeing my writing

If you have powerful enough hardware, Novelcrafter supports local models (i.e. models that you run yourself). You can connect via LM Studio and download thousands of open-source models.

With this option, the only price you pay is your electricity bill. However, you are limited to the models your device can run: prose generation may be incredibly slow if the model demands more power than your computer can handle.
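Once LM Studio's local server is running, it exposes an OpenAI-compatible endpoint on your own machine (port 1234 by default), so a request looks much like the OpenRouter example, just with no API key and nothing leaving your computer. A minimal sketch, assuming the server is started and a model is already loaded:

```python
import requests

# Minimal sketch against LM Studio's local OpenAI-compatible server.
# Assumes the default port (1234) and that a model is loaded in LM Studio;
# no API key or internet connection is required.
response = requests.post(
    "http://localhost:1234/v1/chat/completions",
    json={
        "model": "local-model",  # LM Studio serves whichever model is loaded
        "messages": [
            {"role": "user", "content": "Describe a rainy street in two sentences."}
        ],
    },
    timeout=300,  # local generation can be slow on modest hardware
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```

Because the request never leaves localhost, your writing stays entirely on your own device.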
