Overview
Let's configure a saved search based on these parameters:
- Role: Support Engineer / QA / Launch Coordinator
- Goal: Detect lab failures correlated with a specific instructions set language — especially useful when rolling out a lab in a new language for the first time
- Cadence: Weekly
- Page: Find Lab Instances (/LabInstance)
The Challenge
Your organization has published a lab — or a new language version of an existing lab — and you need to confirm it is launching successfully at scale. You are not watching individual launches in real time. Instead, you need a repeatable, weekly check that answers:
"Did any labs fail last week, and is there a pattern tied to a specific language or instruction set?"
This matters because:
- A lab that works perfectly in English may have a misconfigured instruction set in Spanish, Japanese, or another language — and failures may only appear once real users start launching it.
- Without a targeted search, failed launches are buried across thousands of rows with no easy way to spot a language-specific spike.
- The earlier you catch this, the fewer users are impacted and the easier it is to isolate the root cause.
The Goal
By the end of this weekly check you should be able to answer:
- Did any labs end in an error state in the last 7 days?
- Is the failure rate concentrated on a specific language?
- Is the failure tied to a specific instruction set ID?
- Is this a new pattern (started this week) or an ongoing issue?
One-Time Setup: Building Your Saved Search
Do this once. After that, it is a two-click weekly check.
Step 1 — Open Find Lab Instances
Navigate to Find Lab Instances from the Admin portal. This is the only search page in Studio that includes both the Instructions Set Language and Instructions Set ID filters.
Step 2 — Add Filters
Add the following filters using the Add Filter button:
| Filter | Setting | Notes |
|---|---|---|
| End Time | Within the last 7 days | Use the relative "last N days" option — it rolls forward automatically each week |
| State | Is Error | Targets only instances that ended in a failure state |
| Num Errors | Is greater than 0 | Secondary confirmation — catches instances with logged error events |
| Instructions Set Language | Leave as Any (for now) | You will use this column in the results to spot patterns, not pre-filter it |
If you pre-filter to a single language, you lose the comparison baseline. Seeing English: 2 failures vs. Spanish: 47 failures in the same result set is what tells you there is a language-specific problem. Start broad, then narrow.
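The "start broad" tally can be sketched in a few lines. This is a minimal sketch, assuming you can export the result set (e.g. as CSV) with an "Instructions Set Language" column — the export format and column name are assumptions here, not a documented schema:

```python
import csv
from collections import Counter

def failures_by_language(rows):
    """Tally failed-instance rows per language.

    `rows` is any iterable of dicts keyed by column name, e.g. the
    output of csv.DictReader over an exported result set. The column
    name "Instructions Set Language" is assumed from the UI label.
    """
    counts = Counter(row["Instructions Set Language"] for row in rows)
    return counts.most_common()  # highest counts first, so a spike surfaces at the top

# Hypothetical usage with a CSV export of the unfiltered search results:
# with open("lab_instances.csv", newline="", encoding="utf-8") as f:
#     for lang, n in failures_by_language(csv.DictReader(f)):
#         print(lang, n)
```

Because the tally runs over the unfiltered result set, the English baseline and the Spanish spike land in the same list, which is exactly the comparison the pre-filtered search would hide.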
Step 3 — Configure Output Columns
Click Output Options and enable these columns. The ✓ ones are on by default; the ➕ ones need to be turned on:
| Column | Status | Why It Matters |
|---|---|---|
| Number | ✓ default | Instance identifier for drill-down |
| Lab Profile | ✓ default | Which lab failed |
| Student | ✓ default | Scope of user impact |
| Start | ✓ default | When the lab began |
| End | ✓ default | When it terminated — confirms it's within your window |
| State | ✓ default | Your primary failure signal |
| Status | ✓ default | Sub-state provides more detail (e.g., Cancelled vs. Error) |
| Total Run Time | ✓ default | Short run times on failed instances suggest a launch-time failure rather than a mid-lab failure |
| Series | ✓ default | Helps group related labs |
| Instructions Set Language | ➕ enable | Core to this check — the language the student received |
| Instructions Set ID | ➕ enable | Core to this check — the specific instruction set in use |
| Num Errors | ➕ enable | A count of error events logged against that instance |
| Score | ➕ optional | If labs are scored, a pattern of zero scores alongside errors is a strong signal |
Step 4 — Sort the Results
Click the Instructions Set Language column header to sort. This groups all failures by language, making patterns immediately visible.
Step 5 — Save the Search
- Click Save Search
- Choose Create new saved search
- Name it:
Detect Language Failures – Last 7 Days
Why this name?
- Detect — signals a proactive check, not a reactive lookup
- Language Failures — names the question you are answering
- Last 7 Days — the time scope is visible without opening the search
Your saved search is personal to your account. Other users will not see it, and it will not be accidentally overwritten by someone else.
The Weekly Check (2 Minutes)
Every Monday morning (or your preferred day):
1. Load the Saved Search
Find Lab Instances → Open Saved Search → Detect Language Failures – Last 7 Days
Results load automatically. The End Time filter recalculates from today — no date changes needed.
2. Read the Result Count First
| What you see | What it means |
|---|---|
| 0 results | No lab instances ended in an error state last week. You are done. |
| A small number (1–5) | Likely isolated incidents. Scan the Lab Profile column — are they all the same lab? |
| A larger number (10+) | Investigate further. Sort by Instructions Set Language and look for clustering. |
3. Scan for Language Clustering
With results sorted by Instructions Set Language, look for any language that has significantly more rows than others.
Example reading:
| Instructions Set Language | Approximate Row Count | Interpretation |
|---|---|---|
| English | 3 | Normal background noise |
| Spanish | 1 | Isolated |
| Japanese | 34 | ⚠️ Spike — investigate this language |
| German | 2 | Normal |
A language with a disproportionate count is your signal. It does not prove a bug yet — but it tells you where to look next.
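"Disproportionate" can be given a working definition if you tally counts per language (as above). This sketch flags any language whose count exceeds a multiple of the median count — the 3x multiplier is an assumption chosen for illustration, not a platform-defined threshold:

```python
from statistics import median

def flag_spikes(counts_by_language, multiplier=3):
    """Flag languages with a disproportionate failure count.

    Heuristic only: a language is flagged when its count exceeds
    `multiplier` times the median count across all languages (with a
    floor of 1 so an all-zero baseline does not flag everything).
    """
    baseline = max(median(counts_by_language.values()), 1)
    return [lang for lang, n in counts_by_language.items()
            if n > multiplier * baseline]
```

Applied to the example table, the median of (3, 1, 34, 2) is 2.5, so only Japanese clears the 3x bar. Tune the multiplier against your own baseline once you have a few weeks of data.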
4. Check the Total Run Time Column
For the suspect language rows, look at Total Run Time:
| Run Time Pattern | Likely Meaning |
|---|---|
| Under 2 minutes | Lab failed at or near launch — likely a configuration or instruction set load issue |
| 5–20 minutes | Student reached the lab but hit a failure mid-way — may be content or environment related |
| Near the full duration | Student nearly completed it — may be a scoring or teardown issue, not a launch issue |
Short run times concentrated in one language on a recently published lab are a strong indicator of a launch-time misconfiguration.
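The run-time triage in the table above can be expressed as a small classifier. The thresholds mirror the table; treat them as rules of thumb rather than platform-defined boundaries, and note that `lab_duration_minutes` (the lab's expected full duration) is an assumed input:

```python
def classify_failure(total_run_minutes, lab_duration_minutes):
    """Map a failed instance's Total Run Time to a likely failure phase.

    Thresholds follow the triage table: under 2 minutes suggests a
    launch-time failure, 5-20 minutes a mid-lab failure, and a run
    near the full lab duration a scoring or teardown issue.
    """
    if total_run_minutes < 2:
        return "launch-time failure (configuration or instruction set load)"
    if 5 <= total_run_minutes <= 20:
        return "mid-lab failure (content or environment)"
    if total_run_minutes >= 0.9 * lab_duration_minutes:
        return "late failure (scoring or teardown)"
    return "indeterminate"
```

Running this over the suspect-language rows turns a column of raw minutes into a quick phase breakdown, e.g. "9 of 11 failures were launch-time".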
5. Check the Instructions Set ID Column
If a language spike is present, look at the Instructions Set ID column for those rows:
- All the same ID? The problem is almost certainly in that one instruction set configuration. This is your root cause target.
- Multiple different IDs? The problem is broader — it may be a platform-level issue for that language, not one specific set.
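The same/different-ID question reduces to collecting the distinct IDs among the suspect rows. As in the earlier sketch, the column-keyed row shape is an assumed export format, not a documented schema:

```python
def ids_for_language(rows, language):
    """Distinct Instructions Set IDs among failed rows for one language.

    `rows` is an iterable of dicts keyed by column name (hypothetical
    export shape). One ID back points at that set's configuration;
    several IDs suggest a broader, platform-level issue.
    """
    ids = {row["Instructions Set ID"] for row in rows
           if row["Instructions Set Language"] == language}
    return sorted(ids)
```

A result of length 1 gives you the root cause target directly; anything longer says to widen the investigation beyond a single instruction set.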
When You Find a Problem — Drill Down Workflow
- In the current results, identify an affected instance. Note its Number.
- Click the Lab Profile link → confirms the lab and its settings.
- Navigate to Find Lab Instance Errors, filter by that instance's Lab Instance ID.
- Review the Message, Source, and Type columns — this is the raw error log that tells you what actually went wrong.
Find Lab Instance Errors does not have language or instruction set filters — that is why you use Find Lab Instances for detection and Lab Instance Errors for root cause analysis. They are designed to work together.
What a Healthy Week Looks Like
After running this check for several weeks, you will build a baseline sense of normal. A healthy week typically looks like:
- Zero or very low error counts (1–3 isolated instances across all languages)
- No single language accounting for more than ~50% of errors
- Short run-time failures are rare or absent
- Errors are distributed across different lab profiles rather than concentrated on one
Any week that deviates from your baseline pattern — especially in a language that was recently launched — warrants a drill-down before the issue affects more users.
Adapting This Search for a Specific Language Launch
When you have just published a lab in a new language (e.g., Arabic), run a tighter version of this check during the first two weeks post-launch:
- Load Detect Language Failures – Last 7 Days
- Add an Instructions Set Language = Arabic filter on top
- Temporarily remove the State = Error filter (leave State unfiltered) so you see all instances for that language, successful and failed, and can gauge the launch volume
- Compare the failed count against the total count for that language to calculate a failure rate
Do not overwrite your saved search when doing this. Run it as a one-off, then close without saving so your weekly baseline search stays intact.
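The failure-rate comparison from the steps above is simple arithmetic; a sketch, with the two counts coming from the filtered (State = Error) and unfiltered searches:

```python
def failure_rate(failed_count, total_count):
    """Failure rate for one language's launches over the window.

    failed_count: result count with the State = Error filter applied.
    total_count:  result count with no State filter.
    Returns 0.0 when there were no launches at all.
    """
    if total_count == 0:
        return 0.0
    return failed_count / total_count

# Example: 12 failed out of 150 launches in the new language
rate = failure_rate(12, 150)  # 0.08, i.e. an 8% failure rate
```

Tracking this rate (rather than the raw failure count) keeps the first high-volume launch week comparable to the quieter weeks that follow.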
Summary
| Step | Action |
|---|---|
| Setup (once) | Configure filters + output columns on Find Lab Instances, save as Detect Language Failures – Last 7 Days |
| Weekly (Monday) | Load the saved search, read result count, sort by language, look for clustering |
| On a spike | Check run times, check instruction set IDs, drill into Lab Instance Errors for root cause |
| New language launch | Run a temporary modified version of the search without overwriting the baseline |
Related Resources
| Document | Purpose |
|---|---|
| Reference: Available Filters in Studio Search | Which filters are available on each search page |
| Reference: Output Columns in Studio Search | Full output column reference |
| Searching and Filtering in Studio | How to use filters, comparison operators, and search behavior |
| Using Saved Searches in Studio | How to save, load, and manage named search configurations |