# Subprocess
The subprocess backend runs your prompts by spawning a command.
This is useful when you already have an LLM runner script, want to call a local model, or need full control over auth and tooling.
## Config

```yaml
backend: subprocess
subprocess:
  command: python3
  args:
    - ./run_llm.py
  useStdin: false
```

## How the prompt is passed

- `useStdin: false` (default): promptmd appends the prompt as the last command-line argument
- `useStdin: true`: promptmd writes the prompt to stdin
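A minimal runner that handles both modes might look like the sketch below. The filename `run_llm.py` comes from the config above; the `--stdin` flag is a demo-only toggle invented for this sketch (in practice the mode is fixed by `useStdin` in your config), and the echo at the end stands in for a real model call.

```python
#!/usr/bin/env python3
"""Hypothetical run_llm.py: a minimal runner for the subprocess backend."""
import sys


def read_prompt(use_stdin: bool) -> str:
    # useStdin: true  -> promptmd writes the prompt to stdin
    # useStdin: false -> promptmd appends it as the last CLI argument
    if use_stdin:
        return sys.stdin.read()
    return sys.argv[-1]


if __name__ == "__main__":
    # "--stdin" is a toggle for this demo only; promptmd itself does not
    # pass such a flag -- it just honors your useStdin config.
    prompt = read_prompt(use_stdin="--stdin" in sys.argv)
    # A real runner would call a model here; we just echo a summary.
    print(f"received {len(prompt)} characters")
```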
## Streaming
Whatever your command writes to stdout is streamed to the terminal in real time.
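For output to actually appear incrementally, your command must flush stdout as it writes; otherwise the OS buffers everything and promptmd sees it all at once when the process exits. A sketch (the token list and `time.sleep` simulate a model producing chunks):

```python
import sys
import time


def stream_tokens(tokens):
    """Write each chunk to stdout and flush so it appears immediately."""
    for tok in tokens:
        sys.stdout.write(tok)
        sys.stdout.flush()  # without this, output arrives in one burst at exit
        time.sleep(0.05)    # stand-in for model latency
    sys.stdout.write("\n")


if __name__ == "__main__":
    stream_tokens(["Hello", ", ", "world", "!"])
```

Running Python with `python3 -u` (unbuffered mode) is an alternative to flushing manually.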
## Structured output
If your prompt file defines an `output:` schema, promptmd will try to parse your subprocess's stdout as JSON. That means your command should print a single JSON object to stdout:

```json
{"title": "...", "priority": "high"}
```
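A runner for such a prompt can build the object and serialize it with `json.dumps`, which guarantees valid JSON. The field names below mirror the example above; one sensible design choice (an assumption, not documented promptmd behavior) is to send any diagnostics to stderr so stdout stays parseable:

```python
import json
import sys


def emit_result(title: str, priority: str) -> str:
    """Serialize the result as a single JSON object for promptmd to parse."""
    return json.dumps({"title": title, "priority": priority})


if __name__ == "__main__":
    # Keep stdout JSON-only; logs go to stderr so they can't break parsing.
    print("calling model...", file=sys.stderr)
    print(emit_result("Fix login bug", "high"))
```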