Overview of Performance Based Testing
The Skillable Studio platform can score users either in an exam/quiz format or within a hands-on lab. The options for what can be scored are plentiful and can be mixed and matched within a single scenario.
Scorable items include:
- Windows, Windows Server, and Linux Virtual Machine configurations
- Cloud Subscription configurations
- Activities
- Multiple choice questions (Select 1 or Select Multiple)
- Short answer questions (with either precise or flexible answers)
Getting Started
Scoring Formats
There are a few distinct scoring methods that are implemented on the platform in very similar ways, yet they deliver vastly different user experiences.
Before doing anything else, determine which of these formats best fits your desired end experience.
Traditional Exams/Quizzes
This format presents the user with a series of multiple choice or short answer questions that they are required to answer.
In this format, users complete the exam/quiz, submit it at the end for scoring, and are presented with a grade.
Activity Based Assessments
This format engages the user directly and allows them to validate skills they already have or have gained throughout a course. These assessments may take a traditional question/answer format, but they excel in hands-on learning scenarios, where they evaluate the user's environment in an automated fashion and encourage a try, fail, learn, repeat mentality: users attempt a goal, quickly determine success or failure, and try again if necessary. That said, Activity Based Assessments may also permit only a single attempt, without retries, and be scored on that outcome.
In this format, as the user progresses through a course or hands-on lab, they are presented with a challenge or question, given an opportunity to complete or answer it, and then manually trigger scoring for that specific item, receiving immediate feedback.
Performance Based Testing
This format, similar to Activity Based Assessments, engages the user directly and allows for skills validation. It can be used in small scenarios, such as confirming that users learned key topics within a course, or in larger scenarios, such as technical certification exams. While Activity Based Assessments are best used to engage users in learning by providing valuable feedback, Performance Based Testing excels at validating retention after learning.
In this format, a user is expected to complete a series of tasks and then, similar to Traditional Exams/Quizzes, submit everything at the end for scoring and be presented with a grade. Unlike Activity Based Assessments, users do not receive verbose feedback on every item as they progress.
Configuring Items
Once you have determined which format best fits your desired scoring experience, the next step is to design scorable items for users to complete. No matter which scoring format you use, all scored items on the Skillable Studio platform are configured through the Activity Manager within the lab instruction editor. If you are not already familiar with this tool, please review the Activities documentation before moving forward.
Traditional items have pre-defined structures, while hands-on items that are scored in an automated fashion are designed by the author using scripting languages, such as PowerShell and Bash, to evaluate the user's environment. If scoring against a Virtual Machine, be sure to review the Virtual Machine Requirements section of the Activities documentation.
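For instance, an automated item might verify that the user produced a required artifact inside the lab Virtual Machine. The sketch below assumes the convention, described in the Activities documentation, that a scoring script's final true/false output marks the item correct or incorrect; the file path is purely illustrative.

```powershell
# Hypothetical item: "Export the report to C:\Reports\summary.csv."
# Assumes the script's final $true/$false output marks the item
# correct or incorrect (see the Activities documentation).
if (Test-Path -Path 'C:\Reports\summary.csv') {
    Write-Output 'The report file was found.'
    return $true
}

Write-Output 'The report file was not found at C:\Reports\summary.csv.'
return $false
```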
Automated Scoring Best Practices
Score absolute values when possible.
If a value or setting is susceptible to significant variation, or cannot easily be predicted, it becomes harder to score reliably and may compromise the integrity of the item.
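As a sketch of what an absolute-value check might look like (the interface alias and address below are illustrative assumptions, not platform requirements):

```powershell
# Hypothetical item: the lab instructions tell the user to set a specific
# DNS server address, so the script checks for that exact value rather
# than accepting "any plausible address".
$expected = '10.0.0.4'
$actual = (Get-DnsClientServerAddress -InterfaceAlias 'Ethernet' -AddressFamily IPv4).ServerAddresses

if ($actual -contains $expected) {
    Write-Output "DNS server is set to the expected value ($expected)."
    return $true
}

Write-Output "Expected DNS server $expected; found: $($actual -join ', ')"
return $false
```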
Don't be afraid to mix and match item types!
Even within hands-on Activity Based Assessments and Performance Based Testing, traditional question-style items may still be useful in addition to automated items.
For example, regex short-answer questions can be used to validate something such as a URL that a user might need to generate.
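For instance, a short-answer item might accept any URL matching a pattern like the one below, where the hostname varies per user but the scheme and domain suffix are fixed; the domain is a hypothetical example.

```powershell
# Hypothetical regex for a short-answer item: the hostname portion varies
# per user, but the scheme and domain suffix are fixed.
$pattern = '^https://[a-z0-9-]+\.azurewebsites\.net/?$'

'https://contoso-lab-12345.azurewebsites.net' -match $pattern   # True
'http://example.com' -match $pattern                            # False
```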
Maintain a development environment separate from your production environment.
This lets you perform ongoing maintenance and updates on scoring items in development, then promote them to production when they are ready, without the risk of introducing breaking changes to the production items.
Design scripts to be only as complex as they need to be.
Simpler designs make it easier to replicate the format across multiple items, easier to modify items if/when necessary, and easier for your script authors to read when moving from item to item.
Design output/feedback that's useful for the format being implemented.
For example, an Activity Based Assessment with retries permitted may provide feedback that aids the user in discovering the proper solution, while Performance Based Testing may simply report whether an item was correct.
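One way to structure format-appropriate feedback is sketched below; the firewall rule name is illustrative.

```powershell
# Hypothetical item: verify a firewall rule exists and is enabled.
$rule = Get-NetFirewallRule -DisplayName 'Allow Web Traffic' -ErrorAction SilentlyContinue
$correct = ($null -ne $rule -and $rule.Enabled -eq 'True')

if ($correct) {
    Write-Output 'Correct: the firewall rule exists and is enabled.'
} else {
    # Activity Based Assessment with retries: coach the user toward the fix.
    Write-Output 'Incorrect: check that a rule named "Allow Web Traffic" exists and is enabled.'
    # For Performance Based Testing, a terse 'Incorrect' may be all that is surfaced.
}
return $correct
```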
For an overview of how Skillable runs automation scripts for Activities and PBTs, see Overview of How Lab Automation Works.
Examples
Scoring Script Samples
The samples below are designed to aid in the design of your scoring scripts. While they can be copied, pasted, and modified, their formats may not exactly fit the items you need to score. They are intended as guides and suggestions of structure and should not be considered the only valid methods.
Arguably more important than what your script looks like, or how easy it is to read, is what your output looks like. Verbose output that can be referenced later to identify why someone got an item correct or incorrect may be important for formats such as Performance Based Testing, in the event a user disputes an item, while more user-friendly messages may be better for formats such as Activity Based Assessments.
A best practice is to keep the format as consistent as possible across items of the same type, so that most of a script can simply be copied from item to item with minimal lines requiring modification. Maintaining the same format from item to item also aids readability.
Object Matching Method
While not mandatory in your own designs, the output from each sample script of this format begins with a simple statement that includes the Item ID and whether the user was correct or incorrect.
The Object Matching Method and its associated output are a bit more complex than the If Statement Method, but they also tend to be more verbose and report richer information for logging purposes.
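A minimal sketch of this approach in PowerShell, assuming a Windows service as the configuration being scored; the item ID, service name, and values are hypothetical.

```powershell
# Object Matching Method sketch: build an "expected" object and an "actual"
# object, compare them property by property, and log both for later review.
$itemId = 'Item01'

$expected = [PSCustomObject]@{
    Status    = 'Running'
    StartType = 'Automatic'
}

$service = Get-Service -Name 'W3SVC' -ErrorAction SilentlyContinue
$actual = [PSCustomObject]@{
    Status    = $(if ($service) { $service.Status.ToString() } else { 'NotFound' })
    StartType = $(if ($service) { $service.StartType.ToString() } else { 'NotFound' })
}

# Compare-Object returns nothing when the objects match on these properties.
$diff = Compare-Object $expected $actual -Property Status, StartType
$correct = ($null -eq $diff)

# Lead with the Item ID and result, then verbose detail for logging.
Write-Output "$itemId - $(if ($correct) { 'Correct' } else { 'Incorrect' })"
Write-Output "Expected: $($expected | ConvertTo-Json -Compress)"
Write-Output "Actual:   $($actual | ConvertTo-Json -Compress)"
return $correct
```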
If Statement Method
The If Statement Method is the most common scoring script method and is typically easier and more flexible to implement. However, compared to the Object Matching Method, its output is typically more basic and must be defined by the author.
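A comparable sketch using the If Statement Method, again with a hypothetical item ID and service name:

```powershell
# If Statement Method sketch: a chain of simple checks, each with an
# author-defined message at its exit point.
$itemId = 'Item02'

$service = Get-Service -Name 'W3SVC' -ErrorAction SilentlyContinue

if ($null -eq $service) {
    Write-Output "$itemId - Incorrect: the service is not installed."
    return $false
}

if ($service.Status -ne 'Running') {
    Write-Output "$itemId - Incorrect: the service is installed but not running."
    return $false
}

Write-Output "$itemId - Correct"
return $true
```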