# LLM Module
Converts Reddit posts into dramatic text message conversations using an OpenAI-compatible LLM endpoint.
## LlmService

### `rewriteAsConversation(post: RedditPost): Promise<Result<Conversation, LlmError>>`
Sends a Reddit post to the LLM with a system prompt instructing it to rewrite the content as a two-person iMessage conversation. The response is parsed and validated as structured JSON.
Features:

- Automatic retry (2 attempts) with escalating JSON schema validation
- Strips `<think>` tags for thinking models (e.g. Qwen3.5)
- Uses the `/no_think` prompt directive for Qwen3.5 to skip chain-of-thought
- Validates output against a Zod schema
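Stripping the think block amounts to removing `<think>…</think>` spans before JSON parsing. A minimal sketch (the helper name is hypothetical; the module's actual implementation is not shown):

```typescript
// Sketch: remove <think>...</think> blocks emitted by thinking models so the
// remaining text can be parsed as JSON. stripThinkTags is a hypothetical name.
export function stripThinkTags(raw: string): string {
  // Drop every <think>...</think> span (including multiline content),
  // then trim whitespace left around the remaining text.
  return raw.replace(/<think>[\s\S]*?<\/think>/g, '').trim();
}
```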
## Types

### Conversation

```typescript
interface Conversation {
  leftName: string;   // Name for the left speaker
  rightName: string;  // Name for the right speaker
  messages: ConversationMessage[];
}
```

### ConversationMessage
```typescript
interface ConversationMessage {
  sender: 'left' | 'right';
  text: string;
}
```
## Configuration

| Variable | Default | Description |
|---|---|---|
| `LLM_BASE_URL` | `http://localhost:4000/v1` | OpenAI-compatible API endpoint |
| `LLM_MODEL` | `qwen3.5-9b` | Model name |
| `LLM_MAX_TOKENS` | `2048` | Max output tokens |
| `LLM_TEMPERATURE` | `0.8` | Sampling temperature |
| `LLM_TIMEOUT_MS` | `60000` | Request timeout |
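A loader applying the defaults from the table could be sketched as follows (the `LlmConfig` shape and `loadLlmConfig` name are assumptions; variable names and defaults come from the table):

```typescript
// Sketch: read the LLM_* variables with the defaults from the table above.
// The module's actual config loader is not shown; this illustrates the mapping.
export interface LlmConfig {
  baseUrl: string;
  model: string;
  maxTokens: number;
  temperature: number;
  timeoutMs: number;
}

export function loadLlmConfig(env: Record<string, string | undefined>): LlmConfig {
  return {
    baseUrl: env.LLM_BASE_URL ?? 'http://localhost:4000/v1',
    model: env.LLM_MODEL ?? 'qwen3.5-9b',
    maxTokens: Number(env.LLM_MAX_TOKENS ?? 2048),
    temperature: Number(env.LLM_TEMPERATURE ?? 0.8),
    timeoutMs: Number(env.LLM_TIMEOUT_MS ?? 60000),
  };
}
```

Callers would typically pass `process.env` as the argument.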
## LLM Setup

The module expects an OpenAI-compatible endpoint. Recommended setup:

- Run Ollama with Qwen3.5
- Proxy through LiteLLM for OpenAI API compatibility
- Set `LLM_BASE_URL=http://localhost:4000/v1`
> **TIP**
> The module uses prompt-based JSON schema extraction (not `response_format`), since the Ollama/LiteLLM stack doesn't reliably support structured output mode for all models.
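Prompt-based extraction means the schema is described in the prompt and the JSON is pulled out of the reply text. A sketch of what that extraction can look like (the `extractJson` helper is hypothetical, not the module's code):

```typescript
// Sketch: pull a JSON payload out of free-form model output when structured
// output mode is unavailable. extractJson is a hypothetical helper.
export function extractJson(reply: string): unknown {
  // Prefer a fenced json code block if the model emitted one...
  const fenced = reply.match(/```(?:json)?\s*([\s\S]*?)```/);
  if (fenced) return JSON.parse(fenced[1]);
  // ...otherwise fall back to the first {...} span in the reply.
  const start = reply.indexOf('{');
  const end = reply.lastIndexOf('}');
  if (start === -1 || end <= start) throw new Error('no JSON found in reply');
  return JSON.parse(reply.slice(start, end + 1));
}
```

The parsed value would then be run through the Zod schema, with a retry if validation fails.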