# OpenClaw & Moltbot Integration

Moltext is architected as a Native Skill for the OpenClaw ecosystem and Moltbot agents. In this context, it functions as a dynamic "Memory Expansion" unit, allowing autonomous agents to ingest and normalize external technical knowledge in real time.
## Skill Identity

To register Moltext within an OpenClaw environment, use the following identity metadata defined in the repository's `SKILL.md`:

- **Skill Name:** `moltext`
- **Role:** Documentation Ingestion & Memory Expansion
- **Output Format:** Deterministic Markdown (`context.md`)
## Using Moltext as an Agentic Skill

When an agent encounters a library, API, or tool it does not recognize, it can invoke Moltext to build a local knowledge base. The integration typically follows the Learning Flow:

1. **Ingestion:** The agent executes the `moltext` command against a target URL.
2. **Normalization:** Moltext strips navigation noise and optionally uses an LLM to compress the content into high-density logic.
3. **Context Loading:** The agent reads the resulting `context.md` into its active memory (context window).
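The three steps above can be sketched as a thin Python wrapper around the CLI. This is an illustrative sketch, not an official SDK: the `moltext` invocation and flags come from this document, while the helper names (`build_ingest_command`, `learn`) are assumptions for the example.

```python
import subprocess
from pathlib import Path


def build_ingest_command(url: str, output: str, raw: bool = True) -> list[str]:
    """Assemble the moltext CLI invocation for the Ingestion step."""
    cmd = ["moltext", url, "--output", output]
    if raw:
        # Skip LLM compression; the agent consumes clean Markdown directly.
        cmd.append("--raw")
    return cmd


def learn(url: str, memory_dir: str = "./memories") -> str:
    """Run the Learning Flow: ingest and normalize, then load the context."""
    output = str(Path(memory_dir) / "context.md")
    # Ingestion + Normalization happen inside the moltext process.
    subprocess.run(build_ingest_command(url, output), check=True)
    # Context Loading: return the Markdown for the agent's context window.
    return Path(output).read_text()
```

The command-building step is kept separate from execution so an agent framework can log or dry-run the invocation before spending crawl time.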
### Example: Agent Invocation

An agent configured with OpenClaw skills would execute Moltext using the following interface:

```bash
# High-density normalization for agentic consumption
moltext https://docs.example.com --raw --output ./memories/example_tool.md
```
## Technical Interface for OpenClaw
When configuring the skill manifest or calling the tool via an agentic prompt, the following parameters are critical:
| Parameter | Type | Description | Agentic Use Case |
| :--- | :--- | :--- | :--- |
| `url` | string | The base URL of the documentation. | The entry point for the crawl. |
| `--raw` | boolean | Skips LLM processing; returns clean Markdown. | Recommended for agents with large context windows to prevent double-summarization. |
| `--limit` | number | Max pages to crawl (default: 100). | Safety cap to prevent unbounded crawls on large sites. |
| `--model` | string | The LLM to use for compression. | Used when pre-summarization is required for smaller context windows. |
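When a manifest supplies these parameters as structured values, they must be translated back into CLI arguments. A minimal sketch of that mapping (the flag names come from the table above; the `params_to_argv` helper itself is hypothetical):

```python
def params_to_argv(url: str, params: dict) -> list[str]:
    """Translate manifest-style parameters into moltext CLI arguments.

    Boolean parameters (e.g. raw) become bare flags; valued parameters
    (e.g. limit, model) become flag/value pairs.
    """
    argv = ["moltext", url]
    for name, value in params.items():
        flag = f"--{name}"
        if isinstance(value, bool):
            if value:
                argv.append(flag)  # bare flag, emitted only when True
        else:
            argv.extend([flag, str(value)])  # flag followed by its value
    return argv
```

For example, `params_to_argv("https://docs.example.com", {"raw": True, "limit": 50})` yields `["moltext", "https://docs.example.com", "--raw", "--limit", "50"]`.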
## Memory Expansion Workflow
In a production Moltbot environment, the integration enables the "Shared Brain" flow. This allows the documentation processing to be offloaded to a local inference server, keeping the primary agent focused on task execution:
```bash
# Offloading processing to a local Ollama instance
moltext https://api.docs.com \
  --base-url http://localhost:11434/v1 \
  --model llama3 \
  --output ./shared_context.md
```
## Manifest Reference

The `SKILL.md` file in the root of the `moltext` repository contains the full JSON-LD or YAML manifest required by OpenClaw to map the CLI arguments to agent-executable functions. This allows the agent to understand when to trigger a documentation crawl (e.g., when it encounters a `ModuleNotFoundError` or a 404 on an API reference).
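The authoritative schema lives in the repository's SKILL.md; purely as an illustrative sketch (field names assumed, not taken from the actual manifest), a minimal YAML skill header might look like:

```yaml
---
# Hypothetical SKILL.md frontmatter -- consult the moltext repository
# for the real manifest schema.
name: moltext
description: >
  Crawl a documentation URL and normalize it into a deterministic
  context.md for agent memory expansion. Trigger when the agent hits
  an unrecognized library, a ModuleNotFoundError, or a 404 on an API
  reference.
---
```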