# Moltbot Memory Patterns
To maximize the utility of compiled documentation, Moltbots and autonomous agents should implement specific memory patterns. These patterns dictate how the `context.md` file is ingested, indexed, and retrieved during agentic workflows.
## 1. Deterministic Knowledge Ingestion (The "Ground Truth" Pattern)
This is the default recommended pattern for agents facing a new technical domain. By using the LLM-enhanced processing mode, you transform conversational documentation into a high-density "Technical Bible."
**Workflow:**
- **Compile with LLM enhancement:** Use `gpt-4o-mini` or `llama3` to strip filler and normalize signatures.
- **Inject into Long-Term Memory (LTM):** Upload the `context.md` to your RAG (Retrieval-Augmented Generation) system or your Moltbot's memory vector store.
- **Strict Adherence:** Set your system prompt to prioritize the `Source:` headers found in the Moltext output as the primary source of truth.
```shell
moltext https://docs.example.com --model gpt-4o-mini -o example_memory.md
```
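The LTM injection step can be sketched in Python. The splitter below assumes each compiled page begins with the `## Source: [Title](URL)` header described in the Memory Optimization Tips, and keys each chunk by its source URL so a vector store can attribute every retrieval to a documentation page:

```python
import re

SOURCE_HEADER = re.compile(r"^## Source: \[.*?\]\((?P<url>[^)]+)\)")

def split_by_source(markdown: str) -> dict:
    """Split a Moltext context.md into chunks keyed by source URL.

    Assumes each page starts with a '## Source: [Title](URL)' header;
    the chunks are ready to embed into a RAG vector store.
    """
    chunks = {}
    current_url = None
    buf = []
    for line in markdown.splitlines():
        match = SOURCE_HEADER.match(line)
        if match:
            if current_url is not None:
                chunks[current_url] = "\n".join(buf).strip()
            current_url = match.group("url")
            buf = []
        else:
            buf.append(line)
    if current_url is not None:
        chunks[current_url] = "\n".join(buf).strip()
    return chunks

# Illustrative input mimicking the compiled output format.
sample = """## Source: [Quickstart](https://docs.example.com/quickstart)
Install the package.

## Source: [API](https://docs.example.com/api)
Call `client.run()`.
"""
chunks = split_by_source(sample)
```

Each value in `chunks` can then be embedded and upserted with the URL as metadata, whatever vector store you use.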
## 2. Raw Structural Mapping (The "Skeleton" Pattern)
Use this pattern when you need to understand the entire architecture of a library without the overhead of LLM summarization. This is ideal for agents that need to map every available endpoint or class signature without losing a single line of code.
**Best For:**
- API Reference libraries.
- Low-latency local inference where context window tokens are precious but precision is non-negotiable.
- Initial discovery phases where you need to "see" the site map.
```shell
moltext https://docs.example.com --raw -l 200 -o api_skeleton.md
```
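Once the skeleton is compiled, the site map can be recovered by scanning markdown headings. A rough sketch, assuming the `--raw` output preserves the original heading structure (it skips headings inside fenced code blocks so `#` comments are not mistaken for sections):

```python
def outline(raw_markdown):
    """Build a (level, title) site map from raw Moltext output.

    Assumes headings survive as markdown '#' lines; fenced code
    blocks are skipped so shell/Python comments are not counted.
    """
    entries = []
    in_fence = False
    for line in raw_markdown.splitlines():
        stripped = line.strip()
        if stripped.startswith("```"):
            in_fence = not in_fence  # toggle on every fence marker
            continue
        if not in_fence and stripped.startswith("#"):
            level = len(stripped) - len(stripped.lstrip("#"))
            entries.append((level, stripped.lstrip("#").strip()))
    return entries

# Illustrative skeleton; fence markers built without literal backquote runs.
sample_lines = [
    "# boto3 Docs",
    "## S3 Client",
    "`" * 3 + "python",
    "# not a heading, just a comment inside a code fence",
    "`" * 3,
    "## DynamoDB Client",
]
site_map = outline("\n".join(sample_lines))
```

The resulting list is a cheap table of contents an agent can scan before deciding which pages deserve a deeper compile.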
## 3. Dynamic Memory Expansion (Just-in-Time Learning)
Instead of pre-loading all possible tools, an agent can use Moltext as a Native Skill to expand its own brain on demand. When an agent encounters an unknown library in a codebase, it triggers a self-directed Moltext compilation.
**Agent Implementation Logic:**
- **Detect:** "I don't recognize the `boto3` library methods."
- **Act:** Execute `moltext https://boto3.amazonaws.com/v1/documentation/api/latest/index.html --raw --limit 20`.
- **Absorb:** Read the resulting `context.md` into the active context window.
- **Execute:** Complete the task with newly acquired "Temporary Memory."
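The detect/act/absorb loop above can be sketched in Python. This assumes the `moltext` CLI is on `PATH` and accepts the `--raw`, `--limit`, and `-o` flags shown in this document; `KNOWN_LIBS` is an illustrative stand-in for the agent's real memory index:

```python
import shutil
import subprocess

# Illustrative stand-in for the agent's existing long-term memory index.
KNOWN_LIBS = {"os", "json", "pathlib"}

def build_moltext_cmd(docs_url, limit, out_file):
    """Assemble the Moltext invocation used in the Act step."""
    return ["moltext", docs_url, "--raw", "--limit", str(limit), "-o", out_file]

def jit_learn(library, docs_url, limit=20):
    """Detect -> Act -> Absorb: return compiled docs text, or None if already known."""
    if library in KNOWN_LIBS:            # Detect: nothing new to learn
        return None
    if shutil.which("moltext") is None:  # fail loudly rather than mid-task
        raise RuntimeError("moltext CLI not found on PATH")
    out_file = f"{library}_context.md"
    subprocess.run(build_moltext_cmd(docs_url, limit, out_file), check=True)  # Act
    with open(out_file, encoding="utf-8") as f:
        return f.read()                  # Absorb: inject into the active context window
```

The caller (the agent's tool dispatcher) decides what to do with the returned text, typically appending it to the working context before retrying the failed step.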
## 4. Hybrid Memory Tiers
For complex agentic systems, we recommend a tiered approach to documentation memory:
| Tier | Method | Flag | Goal |
| :--- | :--- | :--- | :--- |
| Tier 1 (Core) | LLM Processed | (Default) | High-level architectural understanding and "How-to" logic. |
| Tier 2 (Reference) | Raw Markdown | `--raw` | Precise API signatures, code snippets, and parameter types. |
| Tier 3 (Local) | Local Inference | `--base-url` | Privacy-compliant documentation parsing for internal enterprise tools. |
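As a sketch, the table above maps onto invocations like the following. The flags come from the table; the local inference endpoint URL is a placeholder the table does not specify:

```python
def tier_command(tier, url, out_file):
    """Translate a memory tier from the table above into a moltext invocation."""
    base = ["moltext", url, "-o", out_file]
    if tier == "core":         # Tier 1: LLM-processed is the default, no extra flag
        return base
    if tier == "reference":    # Tier 2: raw markdown for exact signatures
        return base + ["--raw"]
    if tier == "local":        # Tier 3: local inference; endpoint is a placeholder
        return base + ["--base-url", "http://localhost:11434/v1"]
    raise ValueError(f"unknown tier: {tier!r}")
```

A scheduler can run the "core" tier nightly for broad understanding and the "reference" tier on demand when exact signatures are needed.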
## Memory Optimization Tips
- **Token Density:** If your context window is small, avoid `--raw` and use the LLM processor. It is specifically tuned to remove "noise" (like "Welcome to our tutorial!") that wastes agent reasoning tokens.
- **Vector Search:** The `## Source: [Title](URL)` header format is designed for RAG. It ensures that when an agent retrieves a chunk of memory, it knows exactly which documentation page the information originated from.
- **Safety Limits:** Use the `--limit` flag to prevent "Memory Flooding." For most libraries, the first 50–100 pages cover 90% of use cases.