# ClawHub & SKILL.md

## Integrating with OpenClaw
Moltext is designed as a Native Skill for the OpenClaw ecosystem. Within this framework, Moltext functions as a "Memory Expansion" tool, allowing Moltbots and autonomous agents to ingest external documentation dynamically to resolve technical queries or generate code against unfamiliar APIs.
To register Moltext as a skill in your agentic environment, include a `SKILL.md` file in your project root, or point your OpenClaw manifest at the Moltext executable.

## The SKILL.md Specification

The `SKILL.md` file acts as the discovery manifest for ClawHub. It defines the command-line interface in a form that LLMs can parse and execute. Below is the standard manifest for Moltext:
```markdown
# Skill: Moltext

## Description
A high-density documentation compiler that converts web-based HTML documentation into deterministic, agent-readable Markdown.

## Interface
- **Command**: `moltext <url>`
- **Runtime**: Node.js / CLI
- **Primary Goal**: Ingestion of ground-truth API signatures and technical constraints.

## Parameters
| Flag | Name  | Description | Default |
| ---- | ----- | ----------- | ------- |
| `-r` | Raw   | Skips LLM normalization; returns structural Markdown. | `false` |
| `-o` | Out   | Path to the resulting context file. | `context.md` |
| `-l` | Limit | Maximum number of pages to crawl. | `100` |
| `-m` | Model | LLM to use for structural compression. | `gpt-4o-mini` |

## Usage for Agents
1. Identify a tool/library with unknown API signatures.
2. Execute: `moltext <docs_url> --raw -o brain_dump.md`
3. Ingest `brain_dump.md` into the active context window.
```
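An agent harness can discover these flags mechanically by parsing the Parameters table out of `SKILL.md`. The following Python sketch shows one way to do that; the `parse_parameter_table` helper is illustrative, not part of Moltext:

```python
import re

# Abbreviated excerpt of the Parameters table from SKILL.md.
SKILL_MD = """\
| Flag | Name | Description | Default |
| --- | --- | --- | --- |
| `-r` | Raw | Skips LLM normalization; returns structural Markdown. | `false` |
| `-o` | Out | Path to the resulting context file. | `context.md` |
"""

def parse_parameter_table(markdown: str) -> dict:
    """Map each CLI flag (e.g. '-r') to its name, description, and default."""
    params = {}
    for line in markdown.splitlines():
        cells = [c.strip().strip("`") for c in line.strip().strip("|").split("|")]
        # Keep only data rows whose first cell looks like a short flag,
        # skipping the header row and the `---` delimiter row.
        if len(cells) == 4 and re.fullmatch(r"-\w+", cells[0]):
            flag, name, desc, default = cells
            params[flag] = {"name": name, "description": desc, "default": default}
    return params
```

A harness could use the resulting mapping to assemble a `moltext` invocation with defaults filled in.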
## Formatting for ClawHub Distribution

When distributing your compiled `context.md` on ClawHub, Moltext ensures the output is optimized for Vector Retrieval (RAG) and Long-Context Injection.
### Source Attribution

Every processed page is prefixed with a deterministic header to allow agents to cite their sources:

```markdown
## Source: [Page Title](https://docs.example.com/api/endpoint)

[Compiled Content Here]

---
```
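As a sketch of what this header contract implies, the snippet below assembles one page block in that shape. The `render_page` helper is hypothetical, not Moltext's actual code:

```python
def render_page(title: str, url: str, compiled: str) -> str:
    """Assemble one page block: deterministic source header,
    compiled body, then a `---` separator between pages."""
    return f"## Source: [{title}]({url})\n\n{compiled}\n\n---\n"

page = render_page(
    "Page Title",
    "https://docs.example.com/api/endpoint",
    "Compiled content here.",
)
```

Because the header format is deterministic, a downstream agent can recover the citation URL with a simple prefix match rather than an LLM call.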
### Agent-Readable Normalization

When run without the `--raw` flag, Moltext applies a "Structural Compression" pass. This optimizes the documentation by:
- Removing Conversational Noise: Stripping "Welcome to our tutorial" or "We are excited to show you" fluff.
- Preserving Technical Truth: Ensuring function signatures, type definitions, and error codes are untouched.
- Logic Density: Reformatting nested lists and tables into high-density Markdown blocks.
## Deployment via manifest.json

For automated ClawHub deployments, you can reference the Moltext skill in your `manifest.json`:
```json
{
  "name": "moltext",
  "version": "1.x.x",
  "entry": "moltext",
  "type": "skill",
  "capabilities": [
    "documentation_ingestion",
    "context_expansion"
  ],
  "runtime_requirements": {
    "node": ">=18.0.0"
  }
}
```
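Before deploying, a harness might sanity-check the manifest. Here is a minimal Python validator assuming only the keys shown above; no official ClawHub schema is implied, and the required-key set is an assumption:

```python
import json

MANIFEST = """{
  "name": "moltext",
  "version": "1.x.x",
  "entry": "moltext",
  "type": "skill",
  "capabilities": ["documentation_ingestion", "context_expansion"],
  "runtime_requirements": {"node": ">=18.0.0"}
}"""

# Assumed minimum key set for this sketch, not an official schema.
REQUIRED_KEYS = {"name", "entry", "type", "capabilities"}

def validate_manifest(raw: str) -> list[str]:
    """Return a list of problems; an empty list means the manifest looks usable."""
    data = json.loads(raw)
    problems = [f"missing key: {k}" for k in sorted(REQUIRED_KEYS - data.keys())]
    if data.get("type") != "skill":
        problems.append("type must be 'skill'")
    return problems
```

Running the validator in CI before pushing to ClawHub catches malformed manifests before agents ever try to load the skill.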
## Best Practices for Agentic Ingestion
To get the most out of Moltext within the ClawHub ecosystem:
- **Use `--raw` for Speed**: If your agent has a large enough context window (e.g., Gemini 1.5 Pro or GPT-4o), raw Markdown is often sufficient and avoids LLM latency during the compilation phase.
- **Set Safety Limits**: Use the `-l` (limit) flag to prevent the crawler from traversing infinite loops or the massive "edit" histories often found in wikis.
- **Local Inference**: For sensitive internal documentation, point Moltext at a local Ollama instance using the `--base-url` flag to keep your context data within your own perimeter.