What is AI Vision?
AI Vision, an activity type in Skillable Studio, supports lab builders who want to create high-quality performance-based activities to validate skills, provide real-time feedback, or create adaptive learning labs. Leveraging AI-powered computer vision technology, lab builders can write natural-language prompts that check, based on what is visible on the user's screen, whether the user has successfully completed an activity within the lab.
Purpose
- Reduces technical skills required to create a scored activity. (No need to write scripts.)
- Enables builders to check for the outcomes that matter. (No more false negatives due to misspelled cell headers.)
- Supports more environments and software types. (Measure what's on a user's screen, without worrying about whether the software has an API to call.)
Scenarios
- Lab builders who want to evaluate activities in practice, learning, or validation labs with visual components.
- Lab building teams who want to reduce the time required to build quality labs.
- Lab building teams without the technical skills required to write script-based activities.
- Lab builders who want to evaluate practice, learning, or validation labs on software products with limited API endpoints.
Key Elements
- Prompt: Allows the lab author to create a prompt using natural language. This prompt determines what the computer vision technology will check on the user's screen.
- Test prompt: Allows the lab author to test the prompt, providing a 1-5 star rating and AI-generated feedback to help the lab author improve the prompt strength and clarity.
- Reference Image (Optional): A reference image of the correct answer further guides the computer vision technology to check the user's screen.
- Evaluation/Scoring: When a user in the lab clicks the Check/Score button, computer vision checks the user's screen, determining whether it aligns with the screen described in the prompt and reference image.
Configuration & Settings
- Name: The activity name is not visible to users, but is visible to lab builders in the Activities tab and in the Activities left pane.
- Replacement Token Alias: This allows you to change the token to something more identifiable when reviewing the markdown code of the lab. A default is generated automatically.
- Skills (Optional): Tag this activity with relevant skills. If your organization is not configured to use any Skills Framework, then this setting will not be visible.
- Instructions (Optional): Instructions show above the Check/Score button in the Instructions panel.
- On-demand evaluation (required): All AI Vision Activities currently support only on-demand evaluation. Users click an evaluation button and receive feedback in real time.
- Custom evaluation button text (optional): Specify the text of the evaluation button. If left blank, the text will read "Check" for a non-scored activity and "Score" for a scored activity.
- Allow retries: Determines whether a user can retry and rescore the activity after an Incorrect result.
- Maximum attempts: Limits the number of times the activity can be attempted.
- Scored: Determines whether the outcome of this activity contributes to the learner's lab score.
Note: Because of the dynamic nature of AI-based features, we recommend that you consult with your psychometrician before including AI Vision Scored Activities in high-stakes scenarios.
- Score value: If scored, this field determines the number of points the activity generates with a correct response.
- Show results in reports: If checked, the results of this activity are included in the end-of-lab detailed report.
- Required for submission: If checked, a user cannot end the lab without evaluating this activity.
- Blocks page navigation until answered: If checked, the Next button is disabled until the activity is answered.
- Correct answer feedback: Specify the text a learner sees when a correct response is returned. If left blank, the text will read "Correct".
- Incorrect answer feedback: Specify the text a learner sees when an incorrect response is generated. If left blank, the text will read "Incorrect".
- Show AI Feedback for Incorrect Answers: By default, incorrect responses display the AI feedback in the response sent back to the learner. Uncheck this option if no AI feedback is desired.
- Outcomes: Configure adaptive learning experiences based on the result of the activity. See more on outcomes here.
How do I create AI Vision Activities?
Note: AI Vision based activities are only available in labs that contain a virtualized environment.
- Access AI Vision Activities in Lab Profile > Edit Instructions > Activities. If the lab profile contains a virtualized environment, a + New AI Vision Activity button will display.
- After clicking the + New AI Vision Activity button, begin configuring the prompt.
- Choose Environment.
Note: If you're editing instructions without the lab running, the dropdown will default to the first available environment. If the lab is running, the VM the lab author is viewing will be selected as the environment in the dropdown list.
- Provide a reference image. Although optional, providing a screenshot of a screen with a correct response from the relevant environment will further instruct the model.
- Write the prompt. Write a clear prompt using natural language that describes what the computer vision should look for on the user's screen. Include specific outcomes that indicate success in the activity.
Remember: A reference image, while helpful, is not a replacement for a clear, specific prompt. Adding a reference image will add time to the evaluation process.
- Test the Prompt. If AI-generated feedback shows that your prompt strength is weak (three stars or below), update the prompt based on the feedback provided, then test the prompt again.
- Click Next.
- Configure the Settings. See settings details above.
- Save the Activity. Select either Save or Save and Insert to finalize the creation of the activity.
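To illustrate the prompt-writing step above, here is what a clear, outcome-focused prompt might look like. The scenario (a hypothetical Windows Firewall task) and the wording are examples only, not output from Skillable Studio:

```text
The user's screen should show the Windows Defender Firewall with Advanced
Security window. Check that:
1. The inbound rules list is visible.
2. An inbound rule named "Allow HTTPS" appears in the list.
3. The rule is enabled (shown with a green check mark).
Mark the activity correct only if all three conditions are visible.
```

Note how the prompt lists specific, visible outcomes rather than asking generally whether the task was completed.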
How do I manage or edit AI Vision Activities?
Just as you'd do with all other activity types, navigate to the activity via Lab Profile > Edit Instructions > Activities.
Find the activity you'd like to edit and click Edit. (Always remember to save!)
Activities can also be deleted, moved, or cloned from this view.
Additional Details
Safeguards
- Ensure that the AI prompt is clear and precise to avoid any ambiguity in evaluation.
- Validate the prompt using the Test Prompt button to ensure it provides accurate results.
- Use the feedback provided to refine the prompt and improve its effectiveness.
Note: Be cautious when relying on reference images for evaluation, as they may not always be optimal. Clear prompts are often better for accurate scoring.
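To show how the safeguards above apply in practice, here is a hypothetical weak prompt alongside a refined version (both invented for this example):

```text
Weak prompt:
  Check if the user finished the spreadsheet task.

Refined prompt:
  Check that the Excel worksheet on screen contains a column chart titled
  "Quarterly Sales" and that cell B10 displays a bold total value. Mark the
  activity correct only if both the chart and the bold total are visible.
```

The weak prompt is ambiguous about what "finished" means; the refined prompt names the specific on-screen outcomes that indicate success.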
How do I use the Prompt Feedback to improve my prompts?
To improve your prompts using the feedback provided by the system, follow these steps:
- Review Feedback: Carefully read the feedback and check for conciseness, specificity, clarity, and whether the prompt matches the image. This will give you a clear understanding of areas that need improvement.
- Analyze Ratings: Look at the prompt strength rating out of 5 and the summary of possible corrections. This will help you identify the specific aspects of your prompt that need enhancement.
- Implement Corrections: Make the necessary adjustments to your prompt based on the feedback. Focus on making your prompt more concise, specific, clear, and well-matched with the image.
- Error Handling: Address any error messages related to primary failures such as timeouts and content errors. Ensure that your prompt is correctly parsed by the AI.
- Structured Output: If applicable, specify a structured output format, such as JSON schema, to ensure the response is returned in a consistent format.
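Where structured output applies, a JSON schema request might look like the following sketch. The field names and structure here are illustrative assumptions for this example, not a documented Skillable Studio format:

```json
{
  "type": "object",
  "properties": {
    "result": { "type": "string", "enum": ["correct", "incorrect"] },
    "feedback": { "type": "string" }
  },
  "required": ["result", "feedback"]
}
```

Constraining the response to a fixed set of fields keeps evaluation results consistent from one attempt to the next.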