Clawthor
Longform
ORPO on 7 fiction datasets

Book Writer

gutenberg-v1

Book-quality literary prose, novel chapters, extended fiction — ORPO-trained on 7 fiction datasets

Technical Specs

Architecture

Mistral Nemo 12B (Gutenberg)

Context Window

16,384 tokens

Max Output

8,192 tokens

Streaming

Yes

Custom System Prompts

Yes

Category

Longform

Why This Model

nbeerbower’s Mistral-Nemo-Gutenberg-Encore is a 12B model fine-tuned from Mistral-Nemo-Instruct using ORPO (Odds Ratio Preference Optimization) on 7 distinct fiction and dialogue datasets: Gutenberg DPO, synthetic fiction, Arkhaios, Purpura, Schule, and more. In a creative-writing evaluation judged by o3, it scored 40/50 versus stock Nemo’s 34/50. It produces distinctly “book-like” prose: writing that reads as if excerpted from a published novel rather than generated by a language model. The 12B parameter count also gives it more nuanced vocabulary and narrative structure than 8B alternatives.

Default System Prompt

Minimal by design — the model's fine-tuning does the heavy lifting. Override with a custom system parameter if needed.

Write book-quality prose. Literary fiction, novel-grade writing, rich descriptive language. Extended narrative.
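Overriding the default prompt comes down to adding one field to the request body. The sketch below builds such a body in Python; the field name "system" follows this card's note about a custom system parameter, and the example prompt text is illustrative, not part of the API.

```python
import json

# Request body that overrides the default system prompt.
# The "system" field name is taken from the card's note about a
# custom system parameter; the prompt text here is just an example.
payload = {
    "model": "gutenberg-v1",
    "system": "Write in sparse, hard-boiled prose. Short sentences. Present tense.",
    "prompt": "Open the novel at the moment the detective finds the ledger.",
    "max_tokens": 500,
}

print(json.dumps(payload, indent=2))
```

Omit the "system" field entirely to fall back to the minimal default prompt shown above.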

API Usage

curl -X POST https://clawthor.xyz/api/v1/generate \
  -H "Authorization: Bearer ck_live_..." \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gutenberg-v1",
    "prompt": "Write the first chapter of a noir novel set in a city where all money is on-chain",
    "max_tokens": 500
  }'
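The same call can be made from Python using only the standard library. This is a minimal sketch mirroring the curl example above: the endpoint, header names, and body fields come from that example, while the response shape is an assumption (the send step is therefore left commented out).

```python
import json
import urllib.request

API_URL = "https://clawthor.xyz/api/v1/generate"

def build_request(api_key: str, prompt: str, max_tokens: int = 500) -> urllib.request.Request:
    """Build the POST request shown in the curl example above."""
    body = json.dumps({
        "model": "gutenberg-v1",
        "prompt": prompt,
        "max_tokens": max_tokens,
    }).encode("utf-8")
    return urllib.request.Request(
        API_URL,
        data=body,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# Sending it requires a live key; the JSON response shape is an assumption:
# with urllib.request.urlopen(build_request("ck_live_...", "Write the first chapter...")) as resp:
#     print(json.load(resp))
```

Since the card lists streaming support, a streaming client would read the response incrementally instead of decoding it in one shot; consult the API reference for the exact wire format before relying on that.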

Resources