What is BYOK?
BYOK stands for "bring your own key". Novelcrafter is designed first and foremost as a writing platform, and we don't want to tie you down to a limited set of LLMs. We do not provide a base AI model, hence BYOK. You can connect local models that run on your own computer (and are therefore 'free', save for your energy bill), or connect via vendors like OpenRouter, OpenAI, and more.
This flexibility means that no matter what your price point is, you can still write.
Please look up the pricing structure of the vendor for the model you are trying to run. Here are the links to the popular vendors we support:
Example Model Costs
Here are example costs for some models.
For a full list of the model costs that OpenRouter provides, see here. All figures are correct as of September 2024.
Please note that a 'token' is not counted the same way for every model. For GPT models, 1 token ≈ 0.75 words, whereas for Gemini models, 1 token = 1 character. As such, the figures below are only a guideline.
Input cost: the cost per 1,000 tokens in your prompt.
Output cost: the cost per 1,000 tokens in the AI's output.
All prices are in USD.
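If you want a quick estimate before committing to a model, the maths is just the per-1k rates multiplied by your token counts. Below is a minimal Python sketch (illustrative only; the function names and the 0.75-words-per-token ratio are our own rough assumptions based on the guideline above, not part of Novelcrafter):

```python
def estimate_cost_usd(input_tokens: int, output_tokens: int,
                      input_cost_per_1k: float, output_cost_per_1k: float) -> float:
    """Rough USD cost of one AI request; both rates are expressed per 1,000 tokens."""
    return (input_tokens / 1000) * input_cost_per_1k \
         + (output_tokens / 1000) * output_cost_per_1k


def words_to_tokens(words: int, words_per_token: float = 0.75) -> int:
    """Very rough token estimate for GPT-style models (1 token ≈ 0.75 words)."""
    return round(words / words_per_token)
```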
Example Prompt
The following assumptions are made for our hypothetical prompt (a beat-to-prose prompt used in the write interface):
Input
We have used the system general-purpose prompt, with 1,982 words of preceding prose being read.
We have called 9 codex entries, with a combined word count of 643 words.
367 words of chapter summaries have been included.
Output
We have an output of 400 words, which is roughly 500 tokens.
Input tokens = 4,176
Output tokens = 500
This is a good approximation, but the values will change depending on how large your codex entries are, how much prose/summary context you send to the AI, and how many words of output you generate.
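The 'Example Prompt Cost' column in the tables below is simply this calculation applied to 4,176 input tokens and 500 output tokens. Using the sketch above (again, purely illustrative) to check the Claude Sonnet row:

```python
# Claude Sonnet rates from the mid-cost table: $0.003 per 1k input, $0.015 per 1k output.
cost = estimate_cost_usd(input_tokens=4_176, output_tokens=500,
                         input_cost_per_1k=0.003, output_cost_per_1k=0.015)
print(f"${cost:.4f}")  # ≈ $0.0200
```

Small differences from the published table values come down to rounding.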
High-cost
| Model | Input Cost | Output Cost | Example Prompt Cost |
| --- | --- | --- | --- |
| Claude Opus | 0.015 | 0.075 | 0.1005 |
| GPT-4 (non-turbo, 16k) | 0.06 | 0.12 | 0.3100 |
| GPT-4 Turbo | 0.01 | 0.03 | 0.0550 |

Input and output cost measured per 1k tokens; all prices in USD.
Mid-cost
| Model | Input Cost | Output Cost | Example Prompt Cost |
| --- | --- | --- | --- |
| GPT-3.5 Turbo | 0.003 | 0.004 | 0.0145 |
| GPT-4o | 0.005 | 0.015 | 0.0275 |
| Claude Sonnet | 0.003 | 0.015 | 0.0200 |
| Mistral Large | 0.003 | 0.009 | 0.0170 |
| Mistral Medium | 0.0027 | 0.008 | 0.0153 |

Input and output cost measured per 1k tokens; all prices in USD.
Low-cost
| Model | Input Cost | Output Cost | Example Prompt Cost |
| --- | --- | --- | --- |
| Claude Haiku | 0.00025 | 0.00125 | 0.0016 |
| Weaver* | 0.003375 | 0.003375 | 0.0157 |
| Gemini Pro 1.5 | 0.0025 | 0.0075 | 0.0142 |
| Airoboros | 0.0005 | 0.0005 | 0.0023 |
| GPT-4o mini | 0.00015 | 0.0006 | 0.00093 |

* September 2024: Weaver is currently 25% off the marked price.
Input and output cost measured per 1k tokens; all prices in USD.
Free (As of September 2024)
These are based on OpenRouter prices. Locally run models are of course all free too (see below).
Google Gemma 2 9B
Meta Llama 3 8B instruct
Nous Capybara
Mythomist 7B
Toppy
Hugging Face Zephyr 7B
Running Local Models
Another alternative is to run a local model using a tool such as LM Studio. This will cost electricity and will be limited by how powerful your computer is; however, it is worth looking into if you want to use one of the open-source models.
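For example (a minimal sketch, not official Novelcrafter or LM Studio documentation; the port, model name, and prompt below are assumptions based on LM Studio's defaults), a locally served model typically exposes an OpenAI-compatible endpoint that you can test like this:

```python
import requests

# LM Studio's local server usually listens on http://localhost:1234 and speaks
# the OpenAI-compatible chat completions API; no paid API key is required.
resp = requests.post(
    "http://localhost:1234/v1/chat/completions",
    json={
        "model": "local-model",  # placeholder; use whichever model you have loaded
        "messages": [{"role": "user", "content": "Write one sentence of prose."}],
        "max_tokens": 100,
    },
    timeout=120,
)
print(resp.json()["choices"][0]["message"]["content"])
```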