# Creating a procedure
The fastest way to build a procedure is to promote a conversation that already succeeded. Studio reads the successful path out of the conversation, drafts the metadata, and drops you into the procedure editor for review. You can also start from the Procedures activity. If your library is empty, the sidebar shows Create Procedure; selecting it seeds Copilot with a "Create a procedure..." prompt so you can describe the runbook you want in natural language.
That empty state is deliberate. Studio treats procedure authoring as a conversation first because the best procedure usually needs context: what device family it targets, which checks are safe, what success looks like, and which inputs should become arguments.
1. **Right-click the conversation.** Find the source conversation in the chat sidebar and open its context menu.
2. **Choose Create procedure.** Studio extracts the successful path from the conversation and opens a draft.
3. **Review the draft in the procedure editor.** The editor shows the title, description, arguments, allowed tools, success criteria, and the step body as markdown.
4. **Adjust the metadata and steps.** Tighten the title and description, add or rename arguments, narrow the allowed tools, and state the success criteria in terms you can check. Structure the step body as markdown ## headings, and define the arguments and allowed tools that make sense for the work.
## From prompt to runbook

When you use Create Procedure from the sidebar, describe the operation the way you would brief another engineer:

- The operational goal.
- The target type, such as Cisco IOS-XE edge routers or Linux jump hosts.
- Required arguments, such as `hostname`, `interface`, `neighbor_ip`, `vrf`, `ticket_id`, or `maintenance_window`.
- Tools that are allowed and tools that are out of scope.
- Commands or checks that must stay read-only.
- The evidence that proves the run succeeded.
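For example, a brief that covers those points might read like the sketch below. The platform, argument names, and success condition here are illustrative, not taken from the source:

```text
Create a procedure that validates a BGP neighbor on a Cisco IOS-XE edge
router before and after a maintenance window. Arguments: hostname,
neighbor_ip, vrf, ticket_id. Allowed tools: the read-only CLI runner only.
All checks must stay read-only (show commands). Success: the neighbor is
Established and prefix counts match the pre-check, with evidence captured
in the transcript.
```

The more of the list above a brief covers, the less of the draft you have to fix in the editor.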
## Procedure fields
| Field | Purpose |
|---|---|
| Title | Name shown in the library, sidebar, and run tab. |
| Description | One-paragraph summary of what it does. |
| Arguments | Named values substituted into the procedure at run time ({{hostname}}, {{interface}}, etc.). |
| Allowed tools | `*` for all available tools, or a specific list. Narrow this for production procedures. |
| Default model | Optional override — fast, balanced, or deep. |
| Maximum turns | Upper bound for the run. |
| When to use | Guidance for operators choosing between procedures. |
| Success criteria | Conditions the run should satisfy before stopping. |
| Steps | Markdown ## sections forming the ordered body. |
| Status | Draft, active, or archived. |
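The Arguments row describes simple placeholder substitution at run time. As a hedged sketch of how `{{name}}` templating generally works (an illustration only, not Studio's actual implementation):

```python
import re


def substitute_args(body: str, args: dict) -> str:
    """Replace {{name}} placeholders in a procedure body with run-time values."""
    def repl(match: re.Match) -> str:
        name = match.group(1)
        if name not in args:
            raise KeyError(f"missing argument: {name}")
        return str(args[name])

    # \w+ captures the argument name between the double braces.
    return re.sub(r"\{\{(\w+)\}\}", repl, body)


step = "Run `show ip bgp neighbors {{neighbor_ip}}` on {{hostname}}."
print(substitute_args(step, {"neighbor_ip": "203.0.113.7", "hostname": "edge-rtr-1"}))
```

Note that the sketch fails loudly on a missing argument; that matches the spirit of the table, where arguments are named values the run must supply.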
## A good procedure shape
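As one hedged sketch of such a shape (the commands and argument names here are illustrative assumptions, not from the source), a step body might look like:

```markdown
## Pre-checks (read-only)
Run `show ip interface brief {{interface}}` on {{hostname}} and record the state.

## Change
Apply the approved change for {{ticket_id}} within the maintenance window.

## Post-checks (read-only)
Re-run the pre-checks and compare against the recorded state.

## Evidence
Summarize the command output that shows the success criteria are met.
```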
Use this structure when you are writing by hand: a title, a one-paragraph description, named arguments, the allowed tools, success criteria, and an ordered body of ## step sections.

## Running a procedure

Pick the procedure from the sidebar, supply values for its arguments, and Studio opens a run tab. The bottom panel streams live progress (turn count, token usage, tool calls, and output), and the transcript is preserved for later review. You can stop a run at any point if you have seen enough.

| Status | Meaning |
|---|---|
| Pending | Queued, hasn’t started yet. |
| Running | In progress. |
| Completed | Finished successfully. |
| Failed | Hit an error or exceeded the turn budget. |
| Cancelled | You stopped it. |
## Authoring guidance

- Extract procedures from working conversations whenever possible; replay is more reliable than recall.
- Keep steps linear. Describe decisions inside a step rather than branching.
- Use specific arguments (`hostname`, `interface`, `vrf`, `site`, `change_ticket`) over vague ones.
- Put read-only validation before any command that changes state.
- Write success criteria you can check from evidence, not just intent.
- Don't include exploratory dead ends from the source conversation.
- Avoid `*` allowed tools for production procedures unless the operator will pick at run time.
- Prefer Ask or Planning while drafting production procedures, then run in a controlled scope before marking active.
- Archive procedures you no longer trust rather than leaving stale runbooks in the main library.
## Run history

Every run stores its arguments, messages, tool summaries, token usage, final output, and the full transcript. Past runs are searchable and shareable, so you can compare two executions of the same procedure, link a run to a change ticket, or hand a transcript to a colleague for review.

## Reflection and repair

When a run fails or finishes outside its success criteria, Copilot can reflect on the transcript (what was attempted, what evidence was gathered, where the run diverged) and propose a repair to the procedure body or the allowed tools. Reflection is a separate step from the run itself: it generates a diff against the procedure, you review it, and you decide whether to apply it. The original run transcript stays intact as evidence either way.

Use reflection sparingly. A procedure that often needs reflection is usually a procedure trying to do too much in one runbook; the healthier response is to split it.

## Related
- **AI Copilot**: the conversation surface that produces the best procedures.
- **Memories and search**: keep the facts procedures rely on close to the work.