# Quick Start

## Installation
Install Moltext globally via npm to use the CLI tool anywhere:
```shell
npm install -g moltext
```
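Assuming Moltext follows the common `--version` convention for npm CLIs (this flag is not listed in the options table below, so treat it as a guess), you can confirm the binary is on your `PATH`:

```shell
# Hypothetical check: most npm-installed CLIs print their version with --version
moltext --version
```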
## 1. Fast Compilation (Raw Mode)
Raw mode is the fastest way to generate agentic context and requires no LLM API key. It crawls the target site, strips UI clutter (navbars, footers, scripts), and converts the documentation into clean, structured Markdown.
```shell
moltext https://docs.example.com --raw
```
- **Output:** A high-density `context.md` file in your current directory.
- **Benefit:** Zero cost, zero latency, perfect for agents that prefer raw ground-truth data.
## 2. AI-Native Compilation (OpenAI)
To optimize documentation for vector embeddings or small context windows, use the AI-native mode. This uses an LLM to remove conversational filler and format the output for maximum agentic density.
Provide your OpenAI API key via the `-k` flag, or set it as an environment variable:
```shell
# Using a flag
moltext https://docs.example.com -k sk-your-key-here

# Using an environment variable
export OPENAI_API_KEY='sk-...'
moltext https://docs.example.com
```
## 3. Local Inference (Ollama / LM Studio)
If you prefer to run the compilation through a local model (e.g., Llama 3) to keep data private or avoid costs, point Moltext to your local inference server:
```shell
moltext https://docs.example.com \
  --base-url http://localhost:11434/v1 \
  --model llama3
```
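If you use Ollama, the model must be available locally before Moltext can call it. Ollama's own CLI (a separate install, not part of Moltext) downloads model weights and serves an OpenAI-compatible API on port 11434:

```shell
# One-time: download the model weights with the Ollama CLI
ollama pull llama3

# Start the server if it isn't already running; it exposes an
# OpenAI-compatible endpoint at http://localhost:11434/v1
ollama serve
```

LM Studio exposes a similar OpenAI-compatible endpoint (by default on port 1234), so the same `--base-url` / `--model` pattern applies with the URL adjusted.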
## Common Configuration Options
| Option | Shorthand | Description | Default |
| :--- | :--- | :--- | :--- |
| `--raw` | `-r` | Skip LLM processing; return clean Markdown. | `false` |
| `--output` | `-o` | Specify the output filename. | `context.md` |
| `--limit` | `-l` | Maximum number of pages to crawl and parse. | `100` |
| `--model` | `-m` | LLM model to use for processing. | `gpt-4o-mini` |
| `--key` | `-k` | OpenAI API key (if not set in the environment). | – |
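The options above compose freely. For example, a zero-cost crawl capped at 50 pages with a custom output name (the URL and filename here are placeholders):

```shell
moltext https://docs.example.com --raw --limit 50 --output my-docs.md
```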
## Next Steps
Once your `context.md` is generated, provide it to your agent (Moltbot, OpenClaw, or AutoGPT).
Pro-tip: Use the `--limit` flag on massive documentation sites to ensure you only capture the most relevant top-level pages first:

```shell
moltext https://huge-docs.com --limit 20 --raw
```