Documentation Index

Fetch the complete documentation index at: https://docs.atlasmeets.com/llms.txt

Use this file to discover all available pages before exploring further.

The payload

A debrief contract is a structured payload that tells Atlas how to present completed external work back into a meeting. This is useful when work finishes outside the room and Atlas should later:
  • open with a short explanation
  • preload canvas pages
  • walk the room through the result step by step
  • pause naturally between sections
Think of it as a presentation plan, not a rigid script. For new builds, prefer pushing debriefs into Atlas through the Workspace API. That keeps the flow explicit:
  1. your trusted agent or workflow finishes the work
  2. it upserts the debrief into Atlas by externalId
  3. Atlas stores it immediately and can present it later
  4. the same client can verify it with a GET or a full list call
Use the gateway-fed sync model below only when you already have an older integration that exposes debriefs from a gateway manifest action. The contract itself stays the same either way.
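The push flow above can be sketched in a few lines. This is a non-authoritative sketch: the endpoint URL, the use of PUT, and the auth header are assumptions standing in for your actual Workspace API configuration; only the payload fields come from the contract described on this page.

```python
import json
import urllib.request

# Hypothetical endpoint and token handling -- substitute your real
# Workspace API base URL and credentials.
ATLAS_API = "https://api.atlasmeets.com/v1/debriefs"

def build_debrief_upsert(external_id: str, title: str, summary: str) -> dict:
    """Build a minimal upsert body keyed by externalId (step 2 above)."""
    return {
        "externalId": external_id,  # stable across retries and polls
        "title": title,
        "summary": summary,
        "status": "ready",
        "contract": {
            "version": "atlas-debrief-v1",
            "title": title,
        },
    }

def prepare_upsert_request(payload: dict, token: str) -> urllib.request.Request:
    """Prepare the HTTP request; the caller sends it with urlopen."""
    return urllib.request.Request(
        ATLAS_API,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {token}",
        },
        method="PUT",
    )
```

After the upsert, the same client can confirm the stored record with a GET against the same externalId, matching step 4 above.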

Legacy gateway sync model

Atlas does not expect the customer gateway to write directly into Supabase. Instead:
  1. the customer gateway exposes a normal read action
  2. Atlas calls that action from the dashboard
  3. Atlas normalizes the returned debrief payloads
  4. Atlas upserts them into its own control-plane storage
  5. Atlas then shows them in the dashboard and can present them in-room
Recommended action names:
  • list_ready_debriefs
  • list_debriefs
The current Atlas dashboard checks for those names in the gateway manifest and uses the first one it finds. The easiest response shape is:
{
  "status": "completed",
  "data": {
    "debriefs": [
      {
        "externalId": "eval_123",
        "title": "Braintrust Eval Debrief",
        "summary": "Latest eval results with score breakdown and next steps.",
        "status": "ready",
        "audience": {
          "type": "groups",
          "groupSlugs": ["support-leads"]
        },
        "contract": {
          "version": "atlas-debrief-v1",
          "title": "Braintrust Eval Debrief",
          "canvas": {
            "pages": [
              {
                "title": "Overview",
                "html": "<div>...</div>"
              }
            ]
          }
        }
      }
    ]
  }
}
Important field:
  • externalId
Atlas uses that as the stable external key when it syncs debriefs into its own storage.
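On the gateway side, the action that produces the response shape above can be a thin wrapper over whatever run records your system already tracks. A minimal sketch, assuming your runs carry an ID, a title, and a summary (the input field names here are illustrative, not a defined gateway SDK):

```python
def list_ready_debriefs(completed_runs: list) -> dict:
    """Wrap finished runs in the response shape Atlas expects.

    Each run dict here is assumed to have "id", "title", and "summary";
    adapt the lookups to your own run records.
    """
    debriefs = []
    for run in completed_runs:
        debriefs.append({
            "externalId": run["id"],  # stable run ID, not a fresh UUID per poll
            "title": run["title"],
            "summary": run["summary"],
            "status": "ready",
            "contract": {
                "version": "atlas-debrief-v1",
                "title": run["title"],
            },
        })
    return {"status": "completed", "data": {"debriefs": debriefs}}
```

Because the externalId is the run ID, repeated polls of this action upsert the same records instead of creating duplicates.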

Field guide

If you hand this page to an internal coding agent, the practical guidance is:
  • externalId
    • stable ID for the same debrief across refreshes
    • do not generate a new random ID on every poll
    • good examples: eval run ID, workflow run ID, job execution ID
  • title
    • short user-facing label in the Atlas dashboard
    • aim for something a person can skim quickly
  • summary
    • one-line description for the dashboard list
    • this should explain why the debrief matters, not restate the title
  • status
    • usually just ready
    • Atlas currently expects ready-to-present debriefs here
  • audience
    • optional lightweight targeting for who should see this debrief in the org
    • default is all
    • recommended shape:
      • {"type":"all"}
      • {"type":"users","userEmails":["a@company.com"]}
      • {"type":"groups","groupSlugs":["recruiting-leads"]}
  • contract.version
    • use atlas-debrief-v1
  • contract.title
    • usually the same as the top-level title
    • this is the debrief title Atlas sees at presentation time
  • contract.summary
    • optional longer summary for Atlas context
  • contract.atlas.openingSpeech
    • the first sentence Atlas says before it pauses
    • example: I have a debrief ready on the latest eval run. Let me know when you're ready to proceed.
  • contract.atlas.instructions
    • extra delivery guidance for Atlas
    • for example: Keep this concise and pause after each section.
  • contract.atlas.delivery
    • use manual if Atlas should wait for the room to say ready or next
    • omit it if you do not care yet
  • contract.canvas.pages
    • what Atlas should show if there is a canvas
    • each page should have a user-facing title and rendered html
  • contract.sections
    • what Atlas should say section by section
    • use this when you want controlled progression, pauses, or speech-only debriefs
  • contract.sections[].speech
    • the actual content Atlas should present for that section
  • contract.sections[].checkpointPrompt
    • optional prompt Atlas uses before pausing
    • example: Let me know when you want to move to the next section.
  • contract.sections[].pageIndex
    • which canvas page that section belongs to
    • omit it for speech-only debriefs
Good first rules for an internal coding agent:
  • if the workflow produced charts or tables, include canvas.pages
  • if the workflow produced mostly narrative text, use sections
  • if both exist, include both
  • if you are unsure, start with:
    • one openingSpeech
    • one summary
    • one or more sections
    • and only add canvas pages when you have something genuinely visual to show
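Those first rules translate directly into a small builder. This is a sketch of one way to encode them, not a prescribed helper; the function name and signature are made up for illustration:

```python
def shape_debrief(title: str, opening: str, sections: list, pages=None) -> dict:
    """Apply the first rules above: start from speech sections, and
    attach canvas pages only when there is something genuinely visual."""
    contract = {
        "version": "atlas-debrief-v1",
        "title": title,
        "atlas": {"openingSpeech": opening, "delivery": "manual"},
        "sections": [
            {**section, "delivery": "wait_for_prompt"} for section in sections
        ],
    }
    if pages:  # only when the workflow produced charts or tables
        contract["canvas"] = {"pages": pages}
    return contract
```

A narrative-only workflow calls this without pages; a workflow with a chart passes one page and gets both canvas and sections, matching the "if both exist, include both" rule.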

What good debriefs feel like

Good debriefs are easy to follow in a live room. If you are building these with an internal coding agent, give it this quality bar:
  • show the most important thing first
  • keep it concise and actionable
  • make each page earn its place
  • do not read a report aloud
  • optimize for attention, clarity, and the next decision
If a workflow produced a lot of material, the debrief should still stay light. Atlas can always fetch or explain more detail later when the room asks.

Debrief audience

Atlas supports lightweight audience targeting on synced debriefs. Use this when a gateway should only surface a debrief to:
  • everyone in the org
  • specific users
  • specific stakeholder groups
Recommended shape on the top-level debrief item:
{
  "externalId": "recruiting_weekly_123",
  "title": "Recruiting Weekly Brief",
  "summary": "Hiring pipeline update for recruiting leads.",
  "status": "ready",
  "audience": {
    "type": "groups",
    "groupSlugs": ["recruiting-leads"]
  },
  "contract": {
    "version": "atlas-debrief-v1",
    "title": "Recruiting Weekly Brief"
  }
}
For named people, use:
{
  "audience": {
    "type": "users",
    "userEmails": ["a@company.com", "b@company.com"]
  }
}
Notes:
  • if you omit audience, Atlas treats the debrief as all
  • keep this lightweight; Atlas is not meant to be a deep enterprise policy engine
  • group slugs should match the org groups defined in Atlas workspace settings
  • the gateway suggests the intended audience, and Atlas stores and lightly enforces it in the dashboard
Atlas also accepts a few aliases for compatibility, including:
  • targetUsers
  • targetGroups
  • stakeholderEmails
  • stakeholderGroups
  • stakeholders
  • contract.metadata.audience
But if you are building a new gateway, prefer the explicit top-level audience object.
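If you are writing your own sync layer, normalizing those aliases before storage keeps the rest of the code simple. The precedence order below is an illustrative choice, not documented Atlas behavior, and it assumes `stakeholders` carries group slugs:

```python
def normalize_audience(item: dict) -> dict:
    """Map the compatibility aliases above onto the explicit audience object."""
    audience = (
        item.get("audience")
        or item.get("contract", {}).get("metadata", {}).get("audience")
    )
    if audience:
        return audience
    emails = item.get("targetUsers") or item.get("stakeholderEmails")
    if emails:
        return {"type": "users", "userEmails": emails}
    groups = (item.get("targetGroups") or item.get("stakeholderGroups")
              or item.get("stakeholders"))
    if groups:
        return {"type": "groups", "groupSlugs": groups}
    return {"type": "all"}  # omitted audience means everyone in the org
```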

Page grounding

Each canvas.pages[] item can also include optional grounding. Use it for small factual support that helps Atlas understand the current page better without reading all of it aloud. Good grounding fields:
  • summaryText
  • source
  • chartType
  • labels
  • values
Keep grounding concise:
  • short factual summary, not another paragraph dump
  • small lists of labels or values, not the full dataset
  • enough detail to support follow-up questions about the current page
Do not use grounding as a hidden second report. If you already have a large body of source material, keep the debrief itself light and let Atlas fetch fresh detail through gateway tools when the room asks.
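For illustration, a grounded page item might look like the following. The nesting under a grounding key is an assumption here (only the field names are specified above), and the numbers are placeholders:
{
  "title": "Score Distribution",
  "html": "<div>...</div>",
  "grounding": {
    "summaryText": "Pass rate rose from 81% to 88% week over week.",
    "source": "braintrust",
    "chartType": "bar",
    "labels": ["Week 1", "Week 2"],
    "values": [81, 88]
  }
}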

Canvas guidance

Send the smallest canvas that still makes the point clearly. Good working heuristics:
  • aim for 1 to 3 pages for most debriefs
  • use 4 or 5 pages only when each page clearly has a different purpose
  • keep one main idea per page
  • prefer one chart or one table per page, not many
  • prefer short summaries over raw dumps
  • if the result is mostly narrative, use sections and keep the canvas simple or omit it entirely
If the external result is large, Atlas does not need the whole thing rendered on canvas. Put the takeaway in sections[].speech, render only the useful visual or summary, and keep the rest in metadata or behind the gateway. Common mistakes:
  • too many pages
  • overly dense tables with lots of columns and rows
  • giant embedded scripts or repeated chart libraries
  • pasting a long report verbatim instead of summarizing it
  • multiple visuals competing on the same page
  • pages that only restate what Atlas is already going to say aloud

Good page patterns

Good first-page patterns:
  • three KPI cards and one short takeaway sentence
  • one short summary paragraph and one compact table
  • one bar chart and one sentence explaining what matters
Good follow-up page patterns:
  • top 3 issues and recommended next steps
  • one comparison table with only the most important rows
  • one timeline or one checklist
If you are unsure whether something belongs on canvas, ask:
  • would a human benefit from seeing this while Atlas speaks?
If the answer is no, keep it in sections instead.

Few-shot examples

Good: lightweight metrics page

  • title: Overview
  • canvas:
    • three KPI cards
    • one sentence: Pass rate improved, but citations still need attention.
  • speech:
    • short explanation of what changed

Good: narrative debrief with no canvas

  • sections[] only
  • Atlas speaks:
    • what completed
    • what changed
    • what should happen next
  • no canvas because there is nothing genuinely visual to show

Good: one simple evidence page

  • title: Top Issues
  • canvas:
    • compact table with Area, Why it matters, Next step
  • speech:
    • short walkthrough of the top 2 or 3 issues

Bad: raw report dump

  • one huge HTML page
  • many sections pasted verbatim
  • multiple large charts
  • long markdown converted directly to HTML
That usually makes the canvas harder to read and gives Atlas too much low-signal material.

Template library

The template examples now live on Debrief Templates. Use that page when you want:
  • full JSON examples
  • Monday morning brief
  • research summary
  • eval debrief
  • starter-shape guidance

Minimum viable contract

The clean minimum is:
  • title
  • optional summary
  • optional atlas.openingSpeech
  • either:
    • canvas.pages[] with html
    • or sections[] with speech only
Everything else can be added later. Atlas now supports a stable long-term shape:
{
  "version": "atlas-debrief-v1",
  "title": "Braintrust Eval Debrief",
  "summary": "Latest eval results with score breakdown and next steps.",
  "atlas": {
    "openingSpeech": "I have a debrief ready on the latest eval run. Let me know when you're ready to proceed.",
    "instructions": "Keep this conversational and pause between sections.",
    "delivery": "manual"
  },
  "canvas": {
    "title": "Eval Debrief",
    "deliveryMode": "replace",
    "pages": [
      {
        "title": "Overview",
        "html": "<div>...</div>"
      },
      {
        "title": "Score Distribution",
        "html": "<div>...</div>"
      },
      {
        "title": "Follow-ups",
        "html": "<div>...</div>"
      }
    ]
  },
  "sections": [
    {
      "title": "Overview",
      "speech": "Here is the overview...",
      "checkpointPrompt": "Let me know when you want to move to the score distribution.",
      "pageIndex": 0,
      "delivery": "wait_for_prompt"
    },
    {
      "title": "Score Distribution",
      "speech": "This page shows the score spread by suite...",
      "pageIndex": 1,
      "delivery": "wait_for_prompt"
    },
    {
      "title": "Follow-ups",
      "speech": "The main follow-ups are...",
      "pageIndex": 2,
      "delivery": "wait_for_prompt"
    }
  ],
  "actions": {
    "speakToRoom": true,
    "openCanvas": true
  },
  "metadata": {
    "source": "braintrust",
    "runId": "eval_123"
  }
}

Minimal version

This smaller payload is also fine:
{
  "version": "atlas-debrief-v1",
  "title": "Eval Debrief",
  "canvas": {
    "pages": [
      {
        "title": "Overview",
        "html": "<div>...</div>"
      }
    ]
  }
}
Atlas will fill in the missing behavior with defaults.

Speech-only version

If you do not want any canvas at all, a speech-only debrief is also valid:
{
  "version": "atlas-debrief-v1",
  "title": "Workflow Debrief",
  "summary": "The workflow completed successfully.",
  "atlas": {
    "openingSpeech": "I have a short workflow debrief ready. Let me know when you want me to begin.",
    "delivery": "manual"
  },
  "sections": [
    {
      "title": "Summary",
      "speech": "The workflow completed successfully. Three records were updated and no approvals are pending.",
      "checkpointPrompt": "If you want, I can explain the changes in more detail.",
      "delivery": "wait_for_prompt"
    }
  ]
}

What Atlas does with it

When Atlas uses a debrief contract, it can:
  • preload the canvas pages into the room
  • start in active mode
  • open with atlas.openingSpeech if present
  • advance section by section when the room says ready, continue, next, or proceed
  • answer questions in between instead of blindly plowing ahead

Optional fields and defaults

Atlas is intentionally forgiving here. If you omit fields, Atlas falls back to sensible defaults:
  • no summary
    • Atlas still uses the title
  • no atlas.openingSpeech
    • Atlas can still present the debrief
  • no sections
    • Atlas derives sections from the canvas pages in order
  • no pageIndex
    • Atlas maps sections to pages by position
  • no actions
    • Atlas assumes normal room presentation behavior
The main required field for a canvas-backed debrief is:
  • canvas.pages[].html
For a speech-only debrief, the main required field is:
  • sections[].speech
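Those two requirements are easy to check before an upsert. A minimal validation sketch (the function is illustrative, not part of any Atlas SDK):

```python
def validate_contract(contract: dict) -> list:
    """Check the minimum requirements above; returns a list of problems."""
    problems = []
    pages = contract.get("canvas", {}).get("pages", [])
    sections = contract.get("sections", [])
    if not pages and not sections:
        problems.append("need canvas.pages[] or sections[]")
    for i, page in enumerate(pages):
        if not page.get("html"):  # required for canvas-backed debriefs
            problems.append(f"canvas.pages[{i}] is missing html")
    if not pages:
        for i, section in enumerate(sections):
            if not section.get("speech"):  # required for speech-only debriefs
                problems.append(f"sections[{i}] is missing speech")
    return problems
```

Everything the validator does not check (summary, openingSpeech, pageIndex, actions) falls back to the defaults listed above.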

Why sections are separate from pages

Keep sections and canvas.pages separate. That gives you a cleaner long-term contract:
  • canvas.pages
    • what Atlas should show
  • sections
    • what Atlas should say
    • when it should pause
    • which page each section belongs to
This is more flexible than forcing every page to carry all of the speaking logic.

Delivery guidance

Use these simple defaults:
  • atlas.delivery: "manual"
    • Atlas waits for the room to proceed
  • sections[].delivery: "wait_for_prompt"
    • Atlas presents one section, then pauses
This usually feels better than auto-advancing.

Backward compatibility

Atlas still accepts the earlier dated contract shape used in the initial prototype. The recommended shape going forward is:
  • version: "atlas-debrief-v1"

Good first use cases

  • eval completed
  • workflow finished
  • internal review ready
  • candidate packet prepared
  • sales research compiled
  • incident summary ready

What makes a debrief genuinely useful

A good debrief is not just “some data is available.” It should represent a moment where a human would actually benefit from Atlas bringing something back into the room. Good triggers usually look like:
  • an overnight research batch finished and now needs a short walkthrough
  • a weekly or daily report is ready for a team review
  • an eval or workflow completed and the interesting part is the result, not the raw execution log
  • a webhook from an internal system means there is now a human-ready summary worth presenting
  • a scheduled morning startup or weekly wrap-up should begin from a clear briefing
Practical rule:
  • do the scheduling, cron, webhook, and upstream orchestration in your own gateway or workflow system
  • only expose the debrief to Atlas once it is genuinely ready to present
Atlas is best when it receives something already shaped for human review, not an intermediate machine artifact.

Reference shape: clean weekly debrief

If you are building a first production debrief, this is a good target:
  1. Overview
    • three KPI cards
    • one sentence on what changed
  2. Main Drivers
    • one chart or one compact comparison table
    • the top two or three reasons behind the change
  3. Follow-ups
    • the most important next actions
    • who should care and what should happen next
For speech:
  • one short opening line
  • one concise section per page
  • one pause between sections
This is usually much better than:
  • pasting the whole report into page 1
  • speaking the full report aloud
  • making Atlas summarize a giant blob in real time
If you already have a long report, keep the report in your system of record and let the debrief be the presentation layer. Start with one debrief shape that is easy to trust:
  1. one title
  2. one summary
  3. three canvas pages
  4. one short opening line
  5. one section per page
Once that works, you can add richer metadata and follow-up actions later.
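Put together, the weekly shape above can start from a skeleton like this; titles, speech, and html are placeholders to fill in:
{
  "version": "atlas-debrief-v1",
  "title": "Weekly Debrief",
  "summary": "One-line summary of what changed this week.",
  "atlas": {
    "openingSpeech": "I have the weekly debrief ready. Let me know when you're ready to begin.",
    "delivery": "manual"
  },
  "canvas": {
    "pages": [
      { "title": "Overview", "html": "<div>...</div>" },
      { "title": "Main Drivers", "html": "<div>...</div>" },
      { "title": "Follow-ups", "html": "<div>...</div>" }
    ]
  },
  "sections": [
    { "title": "Overview", "speech": "...", "pageIndex": 0, "delivery": "wait_for_prompt" },
    { "title": "Main Drivers", "speech": "...", "pageIndex": 1, "delivery": "wait_for_prompt" },
    { "title": "Follow-ups", "speech": "...", "pageIndex": 2, "delivery": "wait_for_prompt" }
  ]
}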