Tuning your Model Settings

Here you will learn about model parameters and what they do.


When you add a new model to your prompt, its settings are blank, so you will need to fill in the parameters yourself.

If you want an idea of where to begin, use the settings in the system prompts as a guideline.

Temperature

Value Range: 0-2

How coherent or crazy/creative the model is. A low temperature of 0.1-0.4 keeps most models fairly coherent; some models can go higher, and we have seen temperatures set as high as 1.75.

If you want the output to follow your prose more closely, try lowering the temperature. With fine-tuned models, 0.5 is the sweet spot, but it can be higher for other models.
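If you're curious what this does under the hood, here is a rough sketch in Python (purely illustrative, not Novelcrafter's actual code; the scores are made up). Temperature divides the model's raw scores before they are turned into probabilities, so low values sharpen the distribution and high values flatten it:

```python
import math

def softmax_with_temperature(logits, temperature):
    """Turn raw scores into probabilities, scaled by temperature."""
    scaled = [score / temperature for score in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Made-up raw scores for three candidate next words.
logits = [2.0, 1.0, 0.5]
print(softmax_with_temperature(logits, 0.3))  # low temp: the top word dominates
print(softmax_with_temperature(logits, 1.5))  # high temp: choices even out
```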

Top_P

Value Range: 0-1

A threshold for which words are even in the running to be selected, based on their combined likelihood. A value like 0.9 is commonly used: the model only considers the most likely next words whose probabilities add up to 90%. This ensures the AI doesn't pick extremely unlikely words while still leaving a good pool of words to choose from, allowing for some creativity.
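As a rough illustration (again, not Novelcrafter's code; the words and probabilities below are invented), Top_P keeps the smallest set of likely words whose probabilities add up to the chosen value:

```python
def top_p_filter(probs, top_p):
    """Keep the smallest set of words whose combined probability reaches top_p."""
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)
    kept, cumulative = {}, 0.0
    for word, p in ranked:
        kept[word] = p
        cumulative += p
        if cumulative >= top_p:
            break
    return kept

# Made-up probabilities for candidate next words.
probs = {"castle": 0.55, "forest": 0.25, "village": 0.12, "teapot": 0.05, "spline": 0.03}
print(top_p_filter(probs, 0.9))  # very unlikely words like "teapot" and "spline" are dropped
```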

Max Tokens

Value Range: a value above 1

The maximum number of tokens the model can output in a response. A good rule of thumb is 2048; however, some of the smaller models may not support outputs of this size.
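For illustration only: if you were calling a model directly through a generic OpenAI-style API (Novelcrafter configures this for you in the model settings), the cap typically appears as a max_tokens field in the request. The model name and prompt below are hypothetical:

```python
# A generic, OpenAI-compatible request body (illustrative only).
request = {
    "model": "example-model",  # hypothetical model name
    "messages": [{"role": "user", "content": "Continue the scene."}],
    "max_tokens": 2048,        # cap on the length of the reply
    "temperature": 0.5,
}
```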

Frequency Penalty (Freq. Pen.)

Value Range: -2 to 2

The Frequency Penalty (or Freq. Pen.) setting determines how often the model reuses the same words or phrases in the output. The higher the value, the more you discourage the AI from repeating words or phrases it has already used frequently.
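Roughly speaking (this is an illustrative sketch with made-up scores, not Novelcrafter's code), a frequency penalty lowers a word's score in proportion to how many times it has already appeared:

```python
def apply_frequency_penalty(logits, counts, frequency_penalty):
    """Lower a word's score in proportion to how often it has already appeared."""
    return {word: score - frequency_penalty * counts.get(word, 0)
            for word, score in logits.items()}

# Made-up scores; "suddenly" has already been used 4 times in the output.
logits = {"suddenly": 2.0, "quietly": 1.8, "then": 1.5}
counts = {"suddenly": 4, "then": 1}
print(apply_frequency_penalty(logits, counts, 0.5))
# "suddenly" drops from 2.0 to 0.0, so it is far less likely to be picked again
```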

Presence Penalty (Pres. Pen.)

Value Range: -2 to 2

This works similarly to Freq. Pen. The higher the value, the more the model is discouraged from using words/tokens that already appear in its input, nudging the AI to diversify its vocabulary and incorporate new words or phrases. Higher values will help prevent responses from repeating the beat text; however, they also mean that codex entries are not pulled from as much.
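By contrast with the frequency penalty above, a presence penalty is a flat, one-time reduction for any word that has appeared at all, however often. A rough sketch (illustrative only, with made-up scores):

```python
def apply_presence_penalty(logits, seen_words, presence_penalty):
    """Apply a flat, one-time penalty to any word that has already appeared."""
    return {word: score - (presence_penalty if word in seen_words else 0.0)
            for word, score in logits.items()}

logits = {"castle": 2.0, "forest": 1.6, "river": 1.4}
seen = {"castle", "forest"}  # words already present in the context
print(apply_presence_penalty(logits, seen, 1.0))
# "castle" and "forest" each lose 1.0, nudging the model toward "river"
```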

Please keep in mind that every model has its own ranges for these values, so we cannot give you a universal rule of thumb.

Novelcrafter now also supports the following parameters; however, they are model-dependent. Not all models support them, so check before you use them.

Some models, for example Midnight Rose, require these parameters to be set in order for the model to work.

Min_P

Value Range: 0-1

The counterpart to Top_P, this is the minimum probability for a token to be considered, relative to the probability of the most likely token. If your Min_P is set to 0.1, the model will only consider tokens that are at least 1/10th as probable as the most likely option.
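A rough sketch of that rule (illustrative only; the words and probabilities are made up):

```python
def min_p_filter(probs, min_p):
    """Keep words that are at least min_p times as probable as the most likely word."""
    threshold = min_p * max(probs.values())
    return {word: p for word, p in probs.items() if p >= threshold}

probs = {"castle": 0.50, "forest": 0.30, "village": 0.15, "teapot": 0.04, "spline": 0.01}
print(min_p_filter(probs, 0.1))  # threshold is 0.05, so "teapot" and "spline" are cut
```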

Top_K

Value Range: An integer value

This makes the model choose from a smaller group of tokens. A value of 1 means the model will always pick the most likely next token, which leads to more predictable results. Think of it as another way to limit the pool of possible words.

By default this setting is disabled.
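When it is enabled, the effect looks roughly like this (illustrative sketch with made-up numbers, not Novelcrafter's code):

```python
def top_k_filter(probs, top_k):
    """Keep only the top_k most likely words."""
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)
    return dict(ranked[:top_k])

probs = {"castle": 0.50, "forest": 0.30, "village": 0.15, "teapot": 0.05}
print(top_k_filter(probs, 2))  # only "castle" and "forest" remain in the running
print(top_k_filter(probs, 1))  # the model will always pick "castle"
```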

Top_A

Value Range: 0-1

This is like Top_P, but the cutoff is relative to the probability of the most likely token. Higher values narrow the pool of candidate words, while lower values allow a larger range (though a larger range is not necessarily more creative).
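One common formulation of Top_A (used by KoboldAI-style samplers; implementations can differ) discards words whose probability is below Top_A times the square of the most likely word's probability. A rough sketch with made-up numbers:

```python
def top_a_filter(probs, top_a):
    """Discard words whose probability falls below top_a * (max probability)^2."""
    threshold = top_a * max(probs.values()) ** 2
    return {word: p for word, p in probs.items() if p >= threshold}

probs = {"castle": 0.50, "forest": 0.30, "village": 0.15, "teapot": 0.05}
print(top_a_filter(probs, 0.3))  # threshold 0.075: only "teapot" is cut
print(top_a_filter(probs, 0.8))  # threshold 0.2: only "castle" and "forest" remain
```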

Repetition Penalty (Rep. Pen.)

Value Range: 0-2

This feature helps prevent repetition. Values above 1 discourage it: the higher the value, the less likely the model is to repeat a word it has already used.

Warning: If you go too high, the output might not be coherent any more!
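One widespread formulation of this penalty (exact behaviour varies between backends) divides positive scores by the penalty and multiplies negative ones, for every word that has already appeared. A rough, illustrative sketch:

```python
def apply_repetition_penalty(logits, seen_words, penalty):
    """Push down the scores of words that have already appeared in the text."""
    adjusted = {}
    for word, score in logits.items():
        if word in seen_words:
            score = score / penalty if score > 0 else score * penalty
        adjusted[word] = score
    return adjusted

logits = {"castle": 2.0, "forest": 1.5, "river": -0.5}
seen = {"castle", "river"}
print(apply_repetition_penalty(logits, seen, 1.2))
# "castle" drops to ~1.67 and "river" to -0.6; "forest" is untouched
```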
