
Documentation Index

Fetch the complete documentation index at: https://docs.atlasmeets.com/llms.txt

Use this file to discover all available pages before exploring further.

Use this page when you want:

  • reusable starting points
  • full JSON examples
  • a cleaner sense of what belongs in speech versus canvas
Keep these as templates, not rigid scripts. The contract examples on this page work for both:
  • the recommended Workspace API push path
  • the older gateway-fed debrief sync path
Where a page has grounding, keep it tight. Use short facts, source labels, and compact chart context rather than hidden report dumps. The quality bar is simple:
  • lead with the most important thing
  • keep the speech concise and actionable
  • make each page earn its place
  • do not turn the debrief into a report readout
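As a sketch, a grounded page that meets that bar might look like this. The figures and the source label are placeholders, not real data:
{
  "title": "Queue Health",
  "html": "<div><h2>Queue health</h2><div><strong>Open tickets:</strong> 42 (source: support dashboard)</div><div><strong>Escalations this week:</strong> 3</div><div><strong>Trend:</strong> flat week over week</div></div>"
}
One fact per line, each with a visible source or context, and nothing the speech has to read back in full.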
If a debrief should only reach some stakeholders, keep the contract the same and add audience targeting on the outer synced debrief item. Example:
{
  "externalId": "support_weekly_123",
  "title": "Support Weekly Brief",
  "summary": "Queue health and escalation risk for support leads.",
  "status": "ready",
  "audience": {
    "type": "groups",
    "groupSlugs": ["support-leads"]
  },
  "contract": {
    "version": "atlas-debrief-v1",
    "title": "Support Weekly Brief"
  }
}
Use userEmails when the debrief is for named people, and groupSlugs when it is for a lightweight stakeholder group inside the org.
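For illustration, a user-targeted audience block might mirror the groups example above. Note that the "users" type value and the addresses here are assumptions, not confirmed field values; check the Workspace API reference for the exact shape:
"audience": {
  "type": "users",
  "userEmails": ["jordan@example.com"]
}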

Monday morning brief

Use this when the gateway has a good sense of:
  • what matters this week
  • what matters today
  • where the schedule or operating risk is tight
Recommended shape:
  1. week-start overview
  2. today priority board
  3. week shape or pacing view
Keep the speech focused on:
  • the one or two priorities that matter
  • the one risk to watch
  • what should happen first today
Keep the canvas focused on:
  • priority cards
  • a short board of now / blocked / needs decision
  • one simple pacing chart if it helps
Avoid:
  • a full task dump
  • every calendar event
  • reading the user’s whole to-do list aloud
Full JSON contract example:
{
  "version": "atlas-debrief-v1",
  "title": "Monday Morning Brief",
  "summary": "A calm Monday kickoff with priorities, schedule pressure, and the first unblocker.",
  "atlas": {
    "openingSpeech": "I have your Monday morning brief ready. Let me know when you want to begin.",
    "instructions": "Keep this calm, practical, and short. Pause between sections.",
    "delivery": "manual"
  },
  "canvas": {
    "title": "Monday Morning Brief",
    "deliveryMode": "replace",
    "pages": [
      {
        "title": "Week Start",
        "html": "<div><h2>This week starts here</h2><div><strong>Primary focus:</strong> ship dashboard polish</div><div><strong>Calendar pressure:</strong> 2 decision meetings</div><div><strong>Watch item:</strong> OpenClaw bridge reliability</div></div>"
      },
      {
        "title": "Priority Board",
        "html": "<div><h2>Today's priority board</h2><table><tr><th>Now</th><th>Blocked</th><th>Needs decision</th></tr><tr><td>Finish room UI polish</td><td>OpenClaw retries</td><td>Debrief resume UX</td></tr></table></div>"
      },
      {
        "title": "Week Shape",
        "html": "<div><h2>Week shape</h2><p>Heavy decision load early. Protect build time on Wednesday and Thursday.</p></div>"
      }
    ]
  },
  "sections": [
    {
      "title": "Week Start",
      "speech": "This week has one clear priority, one operating risk, and two meetings that actually need preparation. The point is to start deliberate, not reactive.",
      "checkpointPrompt": "Let me know when you want today's priority board.",
      "pageIndex": 0,
      "delivery": "wait_for_prompt"
    },
    {
      "title": "Priority Board",
      "speech": "For today, keep the focus narrow: finish the visible dashboard polish, keep an eye on bridge reliability, and clear the one decision that might stall the rest of the week.",
      "checkpointPrompt": "I can show you the week shape next.",
      "pageIndex": 1,
      "delivery": "wait_for_prompt"
    },
    {
      "title": "Week Shape",
      "speech": "The week is front-loaded with meetings, so the main move is to protect execution time once the early decisions are out of the way.",
      "checkpointPrompt": "That is the Monday brief. If you want, I can revisit a page or turn this into an action list.",
      "pageIndex": 2,
      "delivery": "wait_for_prompt"
    }
  ],
  "actions": {
    "speakToRoom": true,
    "openCanvas": true
  }
}

Research summary

Use this when the workflow produced:
  • a real question
  • an answer or recommendation
  • supporting evidence
  • tradeoffs worth debating
Recommended shape:
  1. answer first
  2. evidence or chart
  3. tradeoffs
  4. recommendation / next move
Keep the speech focused on:
  • the answer first
  • why the evidence supports it
  • what is still uncertain
Keep the canvas focused on:
  • one chart
  • one compact comparison table
  • one recommendation page
Avoid:
  • dumping all research notes onto one slide
  • turning the canvas into a long literature review
  • mixing five visuals on one page
Full JSON contract example:
{
  "version": "atlas-debrief-v1",
  "title": "Research Summary Debrief",
  "summary": "A research walk-through with the answer, strongest evidence, and the tradeoffs worth debating.",
  "atlas": {
    "openingSpeech": "I have the research summary ready. I can walk you through the answer, evidence, and recommendation when you want to start.",
    "instructions": "Answer first. Keep this crisp and evidence-led. Pause after each section.",
    "delivery": "manual"
  },
  "canvas": {
    "title": "Research Summary",
    "deliveryMode": "replace",
    "pages": [
      {
        "title": "Answer First",
        "html": "<div><h2>Recommendation</h2><p>The lightweight bridge model is the fastest path to local private tools without exposing localhost to the internet.</p></div>"
      },
      {
        "title": "Evidence",
        "html": "<div><h2>Evidence</h2><table><tr><th>Option</th><th>Setup friction</th><th>Security posture</th></tr><tr><td>Polling bridge</td><td>Low</td><td>Strong</td></tr><tr><td>Public tunnel</td><td>Medium</td><td>Weaker</td></tr><tr><td>Persistent socket</td><td>Medium</td><td>Strong</td></tr></table></div>"
      },
      {
        "title": "Tradeoffs",
        "html": "<div><h2>Tradeoffs</h2><p>Polling is simple and safe, but not instant. A persistent socket is smoother later, but more operationally complex now.</p></div>"
      },
      {
        "title": "Recommendation",
        "html": "<div><h2>Next move</h2><p>Ship the polling bridge first, then upgrade to long-poll or websocket only if latency or chatter becomes a real product issue.</p></div>"
      }
    ]
  },
  "sections": [
    {
      "title": "Answer First",
      "speech": "The short answer is yes: the local bridge model is the right choice. It keeps the setup simple, avoids public exposure, and fits the product well.",
      "checkpointPrompt": "I can show you the evidence next.",
      "pageIndex": 0,
      "delivery": "wait_for_prompt"
    },
    {
      "title": "Evidence",
      "speech": "The strongest evidence is the tradeoff profile. The polling bridge has the lowest setup friction while still keeping the security model clean.",
      "checkpointPrompt": "Next I’ll show you the tradeoffs.",
      "pageIndex": 1,
      "delivery": "wait_for_prompt"
    },
    {
      "title": "Tradeoffs",
      "speech": "The tradeoff is mostly experience versus simplicity. Polling is slightly less elegant than a socket, but it is much easier to ship and support first.",
      "checkpointPrompt": "I can finish with the recommendation.",
      "pageIndex": 2,
      "delivery": "wait_for_prompt"
    },
    {
      "title": "Recommendation",
      "speech": "The recommendation is to keep the current bridge, harden the product feedback, and only move to a more complex transport if the real usage pattern demands it.",
      "checkpointPrompt": "That is the research summary. If you want, I can turn it into a decision memo or next steps.",
      "pageIndex": 3,
      "delivery": "wait_for_prompt"
    }
  ],
  "actions": {
    "speakToRoom": true,
    "openCanvas": true
  }
}

Eval debrief

This one works best when the result is operational and the next fixes are fairly clear. Use it when the workflow produced:
  • pass/fail or scoring data
  • clear failure clusters
  • actionable fixes
The page flow is usually:
  1. overview
  2. score distribution
  3. failure clusters
  4. recommended fixes
Keep the speech tight. The reader mainly needs:
  • the topline result
  • the weak categories
  • the first fix
The canvas can carry the score view and the cluster summary. Avoid:
  • raw logs
  • every failing sample
  • the full report repeated twice
Full JSON contract example:
{
  "version": "atlas-debrief-v1",
  "title": "Eval Debrief",
  "summary": "An eval review with score distribution, failure clusters, and the fixes that matter next.",
  "atlas": {
    "openingSpeech": "I have the eval debrief ready. Let me know when you want the overview.",
    "instructions": "Keep this direct and operational. Pause after each section.",
    "delivery": "manual"
  },
  "canvas": {
    "title": "Eval Debrief",
    "deliveryMode": "replace",
    "pages": [
      {
        "title": "Overview",
        "html": "<div><h2>Latest eval overview</h2><div><strong>Run status:</strong> completed</div><div><strong>Pass rate:</strong> 84%</div><div><strong>Failed cases:</strong> 6</div></div>"
      },
      {
        "title": "Score Distribution",
        "html": "<div><h2>Score distribution</h2><table><tr><th>Band</th><th>Count</th></tr><tr><td>90 to 100</td><td>18</td></tr><tr><td>80 to 89</td><td>16</td></tr><tr><td>Below 80</td><td>6</td></tr></table></div>"
      },
      {
        "title": "Failure Clusters",
        "html": "<div><h2>Failure clusters</h2><table><tr><th>Area</th><th>Why it matters</th></tr><tr><td>Citations</td><td>Weak evidence carry-through</td></tr><tr><td>Accuracy</td><td>Borderline answers still slip through</td></tr><tr><td>Follow-up handling</td><td>Context drops on longer threads</td></tr></table></div>"
      },
      {
        "title": "Recommended Fixes",
        "html": "<div><h2>Recommended fixes</h2><ol><li>Tighten retrieval prompts</li><li>Add more citation-focused eval cases</li><li>Rerun after context-handling fix</li></ol></div>"
      }
    ]
  },
  "sections": [
    {
      "title": "Overview",
      "speech": "The topline is solid but not finished. Pass rate is up, but there are still a small number of recurring misses that are worth fixing before calling this stable.",
      "checkpointPrompt": "Let me know when you want the score distribution.",
      "pageIndex": 0,
      "delivery": "wait_for_prompt"
    },
    {
      "title": "Score Distribution",
      "speech": "The distribution says most cases are healthy, but the tail still matters. The low-score band is small enough to inspect manually and big enough to justify another iteration.",
      "checkpointPrompt": "Next I’ll show you the failure clusters.",
      "pageIndex": 1,
      "delivery": "wait_for_prompt"
    },
    {
      "title": "Failure Clusters",
      "speech": "The misses are not random. They cluster around citations, borderline accuracy, and follow-up handling, which makes the next fixes fairly clear.",
      "checkpointPrompt": "I can finish with the recommended fixes.",
      "pageIndex": 2,
      "delivery": "wait_for_prompt"
    },
    {
      "title": "Recommended Fixes",
      "speech": "The next move is not a broad rewrite. Tighten retrieval, add citation-focused eval coverage, and rerun once the context fix lands.",
      "checkpointPrompt": "That is the eval debrief. If you want, I can revisit any page or turn this into a concrete follow-up plan.",
      "pageIndex": 3,
      "delivery": "wait_for_prompt"
    }
  ],
  "actions": {
    "speakToRoom": true,
    "openCanvas": true
  }
}

Starter shapes

You do not need a giant payload to make these useful.

Monday morning brief

  • opening speech
  • 3 pages
  • 3 short spoken sections
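A minimal sketch of that starter shape, trimmed to the fields shown in the full Monday example above. All content is placeholder, and it assumes summary and checkpointPrompt are optional, which the contract reference should confirm:
{
  "version": "atlas-debrief-v1",
  "title": "Monday Morning Brief",
  "atlas": {
    "openingSpeech": "Your Monday brief is ready.",
    "delivery": "manual"
  },
  "canvas": {
    "title": "Monday Morning Brief",
    "deliveryMode": "replace",
    "pages": [
      { "title": "Week Start", "html": "<div><h2>Week start</h2></div>" },
      { "title": "Priority Board", "html": "<div><h2>Priority board</h2></div>" },
      { "title": "Week Shape", "html": "<div><h2>Week shape</h2></div>" }
    ]
  },
  "sections": [
    { "title": "Week Start", "speech": "One priority, one risk.", "pageIndex": 0, "delivery": "wait_for_prompt" },
    { "title": "Priority Board", "speech": "Today's narrow focus.", "pageIndex": 1, "delivery": "wait_for_prompt" },
    { "title": "Week Shape", "speech": "Protect execution time.", "pageIndex": 2, "delivery": "wait_for_prompt" }
  ],
  "actions": { "speakToRoom": true, "openCanvas": true }
}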

Research summary

  • opening speech
  • 3 to 4 pages
  • 3 to 4 spoken sections
  • at least one chart if a chart truly helps

Eval debrief

  • opening speech
  • 4 pages
  • 4 spoken sections
  • one score chart plus one failure-pattern summary