This feature is currently in a closed beta. To request access for your organization, please reach out to your Account Executive. If you are unsure who your Account Executive is, please contact Skillable Support.
What is AI Vision Activity?
AI Vision Activity is a feature that automates the scoring of complex activities by using Large Language Models (LLMs) to analyze screenshots of the VM environment. It lets lab authors score activities based on the visual elements on the current screen of a virtualized lab, without requiring scripting expertise, and it can target a specific VM in the lab for assessment.
Purpose
The purpose of AI Vision Activity is to provide a way to automate the scoring of activities without requiring scripting expertise. It is designed for lab authors who want to evaluate and score activities based on visual elements in the VM environment.
Scenarios
- When a lab author wants to score an activity based on the visual state of the VM environment.
- When the lab author provides a screenshot of the expected outcome for comparison.
- In environments where automated scoring is needed to provide immediate feedback to users.
Key Elements
- AI Prompt:
- Allows the lab author to create a prompt for what they are looking to score on screen.
- Options: Provide a screenshot of the expected outcome, enable on-demand scoring.
- Screenshot:
- Captures the current state of the VM environment for analysis.
- Options: Take a live screenshot, upload as an optional reference image.
- Test prompt:
- Allows the lab author to test the prompt against the AI and receive feedback with guidance on improving the prompt's strength and clarity.
- Scoring:
- Uses LLMs to evaluate the screenshot and provide a score based on the prompt.
- Elements: Immediate feedback, scoring results.
Simplified UI Design - Two-Step Flow
1. AI Vision Prompt Parameters
- All AI Vision parameters are set on one screen, including the AI prompt and screenshot.
- This screen allows the lab author to define what they are looking to score and provide any necessary visual references.
2. Settings - Activity Naming and Scoring Metadata
- The second screen is for setting the activity name and scoring metadata.
- This screen allows the lab author to define the scoring criteria and any additional settings related to the activity.
By following these steps, you can ensure that the AI Vision Activity is set up effectively and provides accurate scoring results.
User Information
Who would benefit from using AI Vision?
This feature is designed for lab authors who want to automate the scoring of activities based on visual elements, without needing to rely on scripting expertise.
What is AI Vision?
AI Vision Activity allows you to score activities by analyzing screenshots of the VM environment.
When could I use AI Vision?
Use this feature when automated scoring is needed for activities and script-based scoring would be too difficult to write or validate in the VM.
Why would I use AI Vision Activities?
It provides immediate feedback and lowers the level of expertise required to score activities.
How do I create AI Vision Activities?
AI Vision activities are only available in virtualized environments. AI Vision Activity will only appear in the Lab Profile if the profile contains a virtualized environment.
1. Edit Instructions for a Virtualized Lab Profile
- Access the Instructions Editor and select the relevant lab profile.
2. Navigate to the Activities Tab
- In the Instructions Editor, go to the Activities tab.
3. Create AI Vision Activity
- Click on the button to create a new AI Vision Activity.
4. Verify the Proper Environment
- Select the environment that will be the target of the current AI Vision activity.
Environment Defaults: If you are editing instructions without the Lab Profile running, the Environment dropdown will default to the first available environment. If the lab is launched and you are editing instructions from within the lab client, the VM the user is currently viewing will be selected as the Environment in the dropdown list.
5. Optionally Provide a Reference Image
- Reference images are used alongside the prompt to give the AI additional guidance on what to validate against.
- Best practice is to write a clear prompt that describes what to look for and the specific details that define success for the activity. Use reference images to supplement the prompt, not as a replacement for a poorly written one.
Reference Images: Using reference images is not required and may add extra time to the evaluation of your activity.
6. Configure the Prompt
- Enter a prompt that is clear and concise; consult the examples and reference documentation if you need more assistance.
- Best practice is to always test your prompt; the test evaluates your prompt's strength for clarity and conciseness.
- If the prompt feedback shows that your prompt strength is weak, update the prompt using the information provided, then select Test prompt again to improve your prompt strength before continuing.
7. Configure the Settings
- Select Next.
- Update the settings with an activity name and any other options as required.
8. Save the Activity
- Select either Save or Save and Insert to finalize the creation of the activity.
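As an illustration of the prompt-writing guidance above, the contrast below shows an invented example of a weak prompt versus a stronger, more specific one (the scenario and wording are assumptions, not product samples):

```
Weak prompt:
  Check that the user finished the task.

Stronger prompt:
  Verify that the Windows Services console is open and the service
  named "Print Spooler" shows a Status of "Running". Score as passed
  only if both the service name and the Running status are visible
  on screen.
```

The stronger version names the exact window, label, and value the AI should look for, which supports an unambiguous true/false outcome.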
Managing AI Vision Activities
1. Edit Instructions for a Virtualized Lab Profile
- Access the Instructions Editor and select the relevant Lab Profile.
2. Navigate to the Activities Tab
- In the Instructions Editor, go to the Activities tab.
3. Edit AI Vision Activity
- Select the specific AI Vision Activity you want to edit and select the Edit button.
4. Modify the Necessary Details
- Update any information on the Configure Prompt or Settings steps.
5. Save the Activity
- Select either Save or Save and Insert to finalize your changes.
Additional Details
Using the Test Prompt Button
- After entering the AI prompt, select the "Test Prompt" button to validate the prompt.
- The button will go into a loading state while the prompt is being tested.
- Once the test is complete, the button will display "Test prompt again" and the prompt strength and feedback will be shown.
- Use the feedback to improve the clarity and precision of the prompt. This helps in achieving a true/false outcome for validating the user's VM screen based on the prompt.
Prompt Strength and Feedback
- The prompt strength is displayed as stars, indicating the effectiveness of the prompt.
- Feedback is provided to help the lab author refine the prompt for better accuracy.
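To make the idea of prompt strength concrete, here is a toy heuristic in Python that flags some of the same weaknesses the feedback points out, such as vagueness and missing specifics. It is purely an illustrative sketch; the actual evaluation is performed by an LLM, not by rules like these.

```python
# Words that usually signal a vague, hard-to-verify prompt.
VAGUE_WORDS = {"something", "stuff", "correct", "properly", "finished", "done"}

def rate_prompt(prompt: str) -> tuple[int, list[str]]:
    """Return a rough 1-5 strength rating and a list of suggestions.
    Illustrative only; real feedback comes from the AI."""
    tokens = prompt.split()
    lowered = [t.strip('.,"').lower() for t in tokens]
    score = 5
    feedback = []
    if len(tokens) < 8:
        score -= 2
        feedback.append("Add specific details about what success looks like.")
    if any(t in VAGUE_WORDS for t in lowered):
        score -= 1
        feedback.append("Replace vague words with concrete, visible criteria.")
    return max(score, 1), feedback
```

For example, a short, vague prompt like "Check the task is done" scores low, while a longer prompt naming an exact window and value scores higher, mirroring the kind of guidance the prompt feedback provides.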
A reference image may not always be reliable to lean on by itself; clear, precise prompts are often better for accurate scoring. Ensure that prompts are well defined and provide specific guidance for the expected outcome.
- Ensure that the AI prompt is clear and precise to avoid any ambiguity in scoring.
- Validate the prompt using the Test Prompt button to ensure it provides accurate results.
- Use the feedback provided to refine the prompt and improve its effectiveness.
How do I use the Prompt Feedback to improve my prompts?
To improve your prompts using the feedback provided by the system, follow these steps:
1. Review Feedback: Carefully read the feedback and check for conciseness, specificity, clarity, and whether the prompt matches the image. This will give you a clear understanding of the areas that need improvement.
2. Analyze Ratings: Look at the prompt strength rating out of 5 and the summary of possible corrections. This will help you identify the specific aspects of your prompt that need enhancement.
3. Implement Corrections: Make the necessary adjustments to your prompt based on the feedback. Focus on making your prompt more concise, specific, clear, and well matched with the image.
4. Error Handling: Address any error messages related to primary failures, such as timeouts and content errors. Ensure that your prompt is correctly parsed by the AI.
5. Structured Output: If applicable, specify a structured output format, such as a JSON schema, to ensure the response is returned in a consistent format.
By following these steps, you can effectively use the prompt feedback to enhance the quality of your prompts and achieve better results.
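The "Structured Output" step above mentions requesting a consistent format such as a JSON schema. The sketch below shows, using only the Python standard library, how a scoring response constrained to a small JSON shape could be validated before it is trusted; the field names are assumptions for illustration, not a documented Skillable format.

```python
import json

# Hypothetical shape the prompt asks the AI to return.
EXPECTED_FIELDS = {"passed": bool, "reason": str}

def validate_response(raw: str) -> dict:
    """Parse the AI reply and confirm it matches the expected shape.
    Raises ValueError when the response cannot be used for scoring."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError as exc:
        raise ValueError(f"Response is not valid JSON: {exc}") from exc
    for field, field_type in EXPECTED_FIELDS.items():
        if field not in data:
            raise ValueError(f"Missing required field: {field}")
        if not isinstance(data[field], field_type):
            raise ValueError(f"Field {field!r} has the wrong type")
    return data

result = validate_response('{"passed": true, "reason": "Dialog visible."}')
```

Constraining the output this way addresses the parsing and content errors mentioned in the error-handling step: a malformed reply is rejected outright instead of producing an ambiguous score.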