
Viewing Results

This tutorial walks you through every section of the Marqov execution dashboard. After submitting a job, you will land on the job status page at /jobs/{id}. This page updates in real time and shows everything about your workflow’s execution.

By the end, you will understand:

  • What each section of the results page shows
  • How to read the Gantt chart and task timeline
  • How to use summary cards from _summary
  • How to access the raw JSON results and Temporal UI

Prerequisites

Job Status Page Overview

When you submit a job, you are redirected to /jobs/{id}. The page has these sections, from top to bottom:

  1. Job header — job ID, backend, status, and execution duration
  2. Summary cards — key-value pairs extracted from _summary
  3. Execution overview — task count, level count, and max parallelism
  4. Execution timeline — Gantt chart showing task timing
  5. Task execution table — tabular view of tasks grouped by level
  6. Error details — shown only if tasks failed
  7. Raw JSON results — the full result payload

Section 1: Job Header

The top of the page shows:

  • Status badge — pending, running, completed, or failed
  • Backend — which backend the job ran on (e.g., SV1, LOCAL)
  • Duration — total wall-clock time from submission to completion
  • View in Temporal — button that opens the Temporal UI for this workflow run
  • Create/View Capsule — button for creating reproducible capsules from results

The status updates in real time. While the job is running, you will see it transition from pending to running to completed.
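If you prefer to watch this transition from a script rather than the browser, a minimal polling loop can be sketched as below. The fetch function is injected, so nothing here assumes a particular API endpoint or client library; wire it up to however you query job status in your deployment.

```python
import time

def wait_for_job(fetch_status, poll_interval=2.0, timeout=600.0):
    """Poll until the job reaches a terminal state (completed or failed)."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = fetch_status()
        if status in ("completed", "failed"):
            return status
        time.sleep(poll_interval)
    raise TimeoutError("job did not finish in time")

# Example with a stubbed status sequence (pending -> running -> completed):
states = iter(["pending", "running", "completed"])
print(wait_for_job(lambda: next(states), poll_interval=0.0))  # prints "completed"
```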

Section 2: Summary Cards

If your workflow’s final result contains a _summary key, the dashboard renders each entry as a card. For example, this return value:

```python
return {
    "energy": -1.137,
    "_summary": {
        "Energy": "-1.137 Ha",
        "Method": "VQE",
        "Qubits": "2",
        "Operators": "5 (ZI, IZ, ZZ, XX, YY)",
    },
}
```

produces four cards in a responsive grid:

Energy      Method   Qubits   Operators
-1.137 Ha   VQE      2        5 (ZI, IZ, ZZ, XX, YY)

Each card has:

  • Label — the dictionary key, displayed as a muted header (e.g., “Energy”)
  • Value — the dictionary value, displayed in a large monospace font (e.g., “-1.137 Ha”)

Cards appear in a 2-column grid on mobile and 4-column grid on desktop. If there is no _summary key, this section is hidden.

Tips for effective summary cards:

  • Keep keys short and descriptive (1-2 words)
  • Include units in the value string (e.g., “Ha”, “ms”, “qubits”)
  • Limit to 4-8 cards for readability
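As a sketch of these tips in practice, the helper below (a hypothetical name, not part of Marqov) computes values first and then formats the _summary strings, keeping keys short and units inside the value strings:

```python
def with_summary(energy_ha, method, n_qubits, runtime_ms):
    """Illustrative helper: pack raw results plus a formatted _summary."""
    return {
        "energy": energy_ha,
        "_summary": {
            "Energy": f"{energy_ha:.3f} Ha",     # units in the value string
            "Method": method,
            "Qubits": str(n_qubits),
            "Runtime": f"{runtime_ms:.0f}ms",
        },
    }

print(with_summary(-1.137, "VQE", 2, 340)["_summary"])
```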

Section 3: Execution Overview

Below the summary cards, a single line shows:

5 tasks | 3 execution levels | Max parallelism: 3
  • Tasks — total number of @task calls in the workflow
  • Execution levels — how many sequential steps the workflow has
  • Max parallelism — the largest number of tasks that ran concurrently (shown only when parallelism is greater than 1)

This gives you an instant sense of the workflow’s structure. A workflow with “5 tasks | 3 levels | Max parallelism: 3” has significant concurrency. A workflow with “3 tasks | 3 levels” is purely sequential.
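These three numbers can be derived from the task dependency graph. The sketch below uses a hypothetical VQE-shaped graph: a task's level is the length of the longest chain of upstream dependencies, and max parallelism is the size of the largest level.

```python
from collections import Counter

def execution_levels(deps):
    """deps maps task name -> list of upstream task names."""
    levels = {}
    def level(task):
        if task not in levels:
            ups = deps.get(task, [])
            # A root task is level 0; otherwise one past its deepest upstream.
            levels[task] = 0 if not ups else 1 + max(level(u) for u in ups)
        return levels[task]
    for t in deps:
        level(t)
    return levels

deps = {
    "build_ansatz": [],
    "measure_zz": ["build_ansatz"],
    "measure_zi": ["build_ansatz"],
    "measure_iz": ["build_ansatz"],
    "compute_energy": ["measure_zz", "measure_zi", "measure_iz"],
}
levels = execution_levels(deps)
counts = Counter(levels.values())  # tasks per level
print(len(levels), "tasks |", len(counts), "execution levels |",
      "Max parallelism:", max(counts.values()))
# 5 tasks | 3 execution levels | Max parallelism: 3
```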

Section 4: Execution Timeline (Gantt Chart)

The Gantt chart visualizes when each task started and finished, relative to the overall workflow execution.

Each task gets a horizontal bar:

  • Left edge — when the task started (as a percentage of total duration)
  • Width — how long the task ran
  • Color — green for completed tasks, red for failed tasks
  • Label — the task’s function name (shown to the left of the bar)
  • Duration label — if the bar is wide enough, the duration is printed inside it (e.g., “1.2s” or “340ms”)

A time axis at the bottom shows the elapsed time from workflow start.
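Under the hood, each bar's geometry is just two percentages of the total duration. A minimal sketch, assuming each task record carries start and end timestamps in milliseconds relative to workflow start (the field names here are illustrative):

```python
def gantt_bars(tasks):
    """Compute each bar's left edge and width as a % of total duration."""
    t0 = min(t["start"] for t in tasks)
    total = max(t["end"] for t in tasks) - t0
    return [
        {
            "name": t["name"],
            "left_pct": 100 * (t["start"] - t0) / total,       # left edge
            "width_pct": 100 * (t["end"] - t["start"]) / total,  # bar width
        }
        for t in tasks
    ]

tasks = [
    {"name": "build_ansatz",   "start": 0,    "end": 12},
    {"name": "measure_zz",     "start": 12,   "end": 1212},
    {"name": "compute_energy", "start": 1212, "end": 1215},
]
for b in gantt_bars(tasks):
    print(f"{b['name']:15s} left={b['left_pct']:5.1f}% width={b['width_pct']:5.1f}%")
```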

Reading the Gantt chart for parallel execution

When tasks run in parallel, their bars overlap horizontally. For a VQE workflow with 3 parallel measurements, the chart looks like:

build_ansatz    |████|
measure_zz           |████████|
measure_zi           |████████|
measure_iz           |█████████|
compute_energy                 |██|

The three measurement bars start at the same horizontal position, confirming parallel execution. If they were sequential, they would be staggered.

Tooltips

Hovering over a bar shows a tooltip with the task name, duration in milliseconds, and execution level.

Section 5: Task Execution Table

Below the Gantt chart, a table lists every task grouped by execution level.

The table has four columns:

Level   Task             Status      Duration
0       build_ansatz     completed   12ms
1       measure_zz       completed   1.2s
1       measure_zi       completed   1.1s
1       measure_iz       completed   1.3s
2       compute_energy   completed   3ms

Level headers — Each level gets a shaded header row showing the level number, task count, and whether the tasks ran in parallel:

Level 0 -- 1 task
Level 1 -- 3 tasks (parallel)
Level 2 -- 1 task

Status badges — Each task shows a colored badge:

  • Green “completed” badge for successful tasks
  • Red “failed” badge for tasks that errored
  • Gray “pending” badge for tasks that have not started yet

Duration — Shown in the rightmost column in monospace font. Durations under 1 second display in milliseconds (e.g., “340ms”), longer durations in seconds (e.g., “2.1s”).
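This formatting rule is easy to reproduce if you render durations yourself, e.g. in downstream reports. A one-function sketch matching the convention described above:

```python
def format_duration(ms):
    """Durations under 1s in milliseconds, longer ones in seconds."""
    return f"{ms:.0f}ms" if ms < 1000 else f"{ms / 1000:.1f}s"

print(format_duration(340))   # 340ms
print(format_duration(2100))  # 2.1s
```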

Section 6: Error Details

If any task failed, a red error panel appears below the task table. Each failed task gets its own error entry showing:

  • The task function name in bold
  • The error message

For example:

measure_zz: BraketError: ResourceNotFoundException: Device ARN not found

This helps you quickly identify which task failed and why, without digging through logs.

Section 7: JSON Results

The raw JSON result payload is available at the bottom of the page. You can:

  • Copy the JSON to your clipboard
  • Download it as a .json file

The JSON contains everything your workflow returned, including the _summary dict, component values, and any metadata.
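Once downloaded, the file is ordinary JSON. A minimal sketch of reading it back, using the example payload from the summary-cards section inlined as a string (substitute the path of your downloaded .json file):

```python
import json

# Inlined for the sketch; in practice use json.load(open("result.json")).
result = json.loads("""{
  "energy": -1.137,
  "_summary": {"Energy": "-1.137 Ha", "Method": "VQE"}
}""")

# The _summary entries are exactly what the dashboard renders as cards.
for label, value in result.get("_summary", {}).items():
    print(f"{label}: {value}")
print("raw energy:", result["energy"])
```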

The “View in Temporal” Button

For jobs that run through Temporal (all “Run as Job” submissions), the header includes a View in Temporal button. Clicking it opens the Temporal Web UI for this specific workflow run.

In the Temporal UI, you can see:

  • Event history — every activity start, completion, and retry
  • Activity inputs and outputs — the actual data passed to and returned from each task
  • Retry details — if a task was retried, you can see each attempt
  • Timing — precise timestamps for every event
  • Workflow metadata — the workflow ID, run ID, task queue, and execution status

This is particularly useful for debugging:

  • If a task failed after retries, you can see each retry attempt and its error
  • If execution was slower than expected, you can identify which activity took the longest
  • You can correlate Temporal events with your task timeline on the dashboard

Putting It All Together

Here is the full flow from submission to analysis:

  1. Write your script in the playground at /run
  2. Click “Run as Job” and select a backend
  3. Watch the status page — it updates in real time as tasks execute
  4. Check summary cards for key results at a glance
  5. Read the Gantt chart to verify parallel execution happened as expected
  6. Scan the task table for any failures
  7. View in Temporal if you need detailed activity-level debugging
  8. Download the JSON if you need to process results programmatically

Example: VQE Dashboard

For the VQE workflow from the Building a VQE tutorial, the dashboard shows:

Summary cards:

Energy: 0.456789 Ha | Method: VQE | Qubits: 2 | Operators: 3 (ZZ, ZI, IZ)

Execution overview:

5 tasks | 3 execution levels | Max parallelism: 3

Gantt chart:

build_ansatz    |██|
measure_zz         |████████████|
measure_zi         |██████████|
measure_iz         |████████████|
compute_energy                    |█|

Task table:

Level 0 -- 1 task:
  build_ansatz     completed   8ms
Level 1 -- 3 tasks (parallel):
  measure_zz       completed   1.2s
  measure_zi       completed   1.0s
  measure_iz       completed   1.3s
Level 2 -- 1 task:
  compute_energy   completed   2ms

The Gantt chart confirms that the three measurement tasks ran in parallel (their bars start at the same time), and the total execution time was dominated by the longest measurement rather than the sum of all three.

Next Steps
