Integrating with OpenClaw
Moltext serves as a Native Skill within the OpenClaw agentic framework. It functions as the "Ingestion Layer," allowing Moltbots and autonomous agents to resolve knowledge gaps by compiling external documentation into deterministic context on demand.
Skill Identity
OpenClaw recognizes Moltext via the SKILL.md manifest included in the package.
- Skill Name: moltext
- Primary Role: Documentation Ingestion & Memory Expansion.
- Input: Documentation URLs.
- Output: High-density, agent-optimized Markdown (context.md).
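As a point of reference, a minimal SKILL.md manifest covering this identity might look like the sketch below. The exact frontmatter fields follow the common Agent Skills convention and are an assumption here, not copied from the Moltext package:

```markdown
---
name: moltext
description: Compiles external documentation URLs into high-density, agent-optimized Markdown context (context.md).
---

# moltext

Invoke with a documentation URL to generate deterministic context
for libraries or APIs the agent does not recognize.
```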
Registration & Setup
To enable Moltext for an OpenClaw agent, the package must be available in the agent's execution environment.
- Install Moltext globally: npm install -g moltext
- Verify Manifest: Ensure the SKILL.md file is present in your project directory or linked in the OpenClaw skill-path.
Invocation Schema
When an agent encounters a library or API it does not recognize, it triggers the moltext skill. The following parameters are exposed to the OpenClaw skill engine:
| Parameter | Type | Required | Description |
| :--- | :--- | :--- | :--- |
| url | string | Yes | The base URL of the human-readable documentation. |
| --raw | boolean | No | Skips LLM normalization and emits purely structural Markdown (deterministic output). |
| --limit | number | No | Maximum number of pages to crawl (Default: 100). |
| --output | string | No | The file path where the context will be stored. |
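The schema above maps directly onto a CLI invocation. As a sketch, an agent-side wrapper might assemble the argument vector like this (the helper name `build_moltext_argv` is illustrative, not part of Moltext or OpenClaw):

```python
def build_moltext_argv(url, raw=False, limit=None, output=None):
    """Assemble a moltext CLI invocation from the skill parameters.

    `url` is required; the remaining arguments mirror the optional
    flags in the invocation schema (--raw, --limit, --output).
    """
    argv = ["moltext", url]
    if raw:
        argv.append("--raw")
    if limit is not None:
        argv += ["--limit", str(limit)]
    if output is not None:
        argv += ["--output", output]
    return argv

# Example: a deterministic crawl capped at 50 pages
build_moltext_argv("https://api.docs.com", raw=True, limit=50, output="ctx.md")
```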
The "Autonomous Learning" Workflow
Integrating Moltext allows OpenClaw agents to follow a self-correcting knowledge loop:
- Trigger: The agent identifies a missing dependency or unknown API.
- Execution: The agent invokes the skill: moltext https://api.docs.com --raw -o temporary_memory.md
- Ingestion: The agent reads the generated temporary_memory.md directly into its context window.
- Action: The agent executes the original task using the grounded technical signatures parsed by Moltext.
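The Execution and Ingestion steps can be sketched as a single helper, assuming the agent shells out to the CLI. The injectable `runner` parameter exists purely so the crawl can be stubbed in tests; it is not a Moltext feature:

```python
import subprocess
from pathlib import Path

def ingest_docs(url, memory_path, runner=subprocess.run):
    """Compile the docs at `url` into `memory_path`, then return the
    text so the agent can load it into its context window."""
    # Execution: deterministic compile (--raw skips LLM normalization)
    runner(["moltext", url, "--raw", "-o", memory_path], check=True)
    # Ingestion: read the generated context back for the agent
    return Path(memory_path).read_text()
```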
Local Skill Configuration
For agents operating in air-gapped or local-first environments, Moltext can be configured to use local inference servers (e.g., Ollama, LM Studio) instead of OpenAI. This ensures that the "understanding" phase of the documentation compilation remains private.
Example: Local Skill Execution
moltext https://docs.internal-tool.com \
--base-url http://localhost:11434/v1 \
--model llama3 \
--output internal_context.md
Output Formatting
When used as an OpenClaw skill, Moltext outputs a file structured for Vector Retrieval and RAG (Retrieval-Augmented Generation). Every section is prefixed with its source URL to allow the agent to cite its sources or request deeper crawls on specific sub-paths.
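Because every section carries its source URL, an agent can split the compiled file into citable chunks for retrieval. The sketch below assumes each section opens with a line of the form `Source: <url>`; the actual marker Moltext emits may differ, so treat this as an adjustable example rather than the canonical format:

```python
def split_by_source(context_md):
    """Split a context file into (source_url, body) chunks.

    Assumes each section starts with a 'Source: <url>' line; adjust
    the marker to match the actual Moltext output format.
    """
    chunks, url, body = [], None, []
    for line in context_md.splitlines():
        if line.startswith("Source: "):
            if url is not None:
                chunks.append((url, "\n".join(body).strip()))
            url, body = line[len("Source: "):].strip(), []
        else:
            body.append(line)
    if url is not None:
        chunks.append((url, "\n".join(body).strip()))
    return chunks
```

Each chunk keeps its originating URL, so the agent can cite a source directly or request a deeper crawl on that sub-path.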