Validating Skills with Skillable Studio Scoring



    Article summary

    This article is intended for organizations that need to validate the skills of a learner within a lab. These skills can be measured either with questions in the Lab Instructions panel or with automations that inspect the environment to validate whether the learner has completed the task or skill correctly.

    Overview

    The Skillable Studio platform supports scoring users either in an exam/quiz format or within a hands-on lab. A wide range of items can be scored, and item types can be mixed and matched within a single scenario.

    Scorable items include:

    • Multiple choice questions (Select 1 or Select Multiple)

    • Short answer questions (with either exact or flexible answers)

    • Windows, Windows Server and Linux virtual machine configurations

    • Cloud Subscription configurations

    Considerations Before You Start

    Skills validation within the Skillable Studio platform can be presented in many ways. To achieve the best outcomes with this feature, it is important to understand the actual outcome of the skills measurement that the organization requires. To produce high-quality skills validation data, validate only the specifics of the skills in question.

    Getting Started

    Scoring Formats

    There are a few distinct methods of scoring that are implemented on the platform in very similar ways yet have vastly different user experiences.

    Before doing anything else, you must determine which of these general formats best meets your desired end experience.

    Traditional Exams/Quizzes

    This format presents a user with a series of multiple choice or short answer questions that they are required to answer.

    In this format, users complete the exam/quiz then submit it at the end for scoring and are presented with a grade.

    Activity Based Assessments

    This format engages the user directly and allows them to validate skills they either already have or have obtained throughout a course. These assessments may take a traditional question/answer format, but they excel in hands-on learning scenarios where they evaluate the user's environment in an automated fashion and encourage a try, fail, learn, repeat mentality: users are given an opportunity to attempt a goal, quickly determine success or failure, and try again if necessary. While a try, fail, learn model may be used with Activity Based Assessments, they may also permit only a single attempt without retries and be scored on that outcome.

    In this format, as a user progresses through a course or hands-on lab they are presented with a challenge or question, given an opportunity to complete or answer it, and then they manually trigger scoring for that specific item and are provided feedback within the lab.

    Performance Based Testing

    This format, similar to Activity Based Assessments, engages the user directly and allows for skills validation. This can be used in small scenarios, such as ensuring users learned some of the key topics within a course, or in larger scenarios, such as technical certification exams. While Activity Based Assessments are best used for user engagement in learning by providing valuable feedback, Performance Based Testing excels in validating retention after learning.

    In this format, a user is expected to complete a series of tasks and then, similar to Traditional Exams/Quizzes, submit everything at the end for scoring to be presented with a grade. Unlike Activity Based Assessments, users do not receive verbose feedback for every item as they progress.

    Configuring Items

    Once you have determined which format best fits your desired scoring experience, you need to design scorable items for users to complete. No matter which scoring format you will be utilizing, all scored items on the Skillable Studio platform are configured using the Activity Manager within the lab instruction editor. If you are not already familiar with this tool, please review the Lab Activities documentation before moving forward.

    Traditional items will have pre-defined structures, while hands-on items that are scored in an automated fashion are designed by the author using scripting languages - such as PowerShell and BASH - to evaluate the user's environment. If scoring against a Virtual Machine, be sure to review the Virtual Machine Requirements section of the Activities documentation.

    Automated Scoring Best Practices

    • Score absolute values when possible.

      • If a value or setting is susceptible to a lot of variation or cannot be easily predicted, it is harder to score reliably and may compromise the integrity of the item.

    • Don't be afraid to mix and match item types!

      • Even within hands-on Activity Based Assessments and Performance Based Testing, traditional question-style items may still be useful in addition to automated items.

      • For example, regex short-answer questions can be used to validate something such as a URL that a user might need to generate.

    • Maintain a development environment separate from your production environment.

      • This permits you to perform ongoing maintenance and updates to scoring items and then update the production items when they are ready, without the risk of introducing breaking changes to production.

    • Design scripts to be only as complex as they need to be.

      • Simpler designs make it easier to replicate the format across multiple items, easier to modify if/when necessary, and easier for your script authors to read when moving from item to item.

    • Make scripts flexible and reusable.

      • Where possible, use parameters to define the values and settings a script will use. This allows the script to be used in different situations without editing the script body. For example, for a script that checks for a file, make the full file name a parameter; anybody needing to perform the same check against a different file can then use the same script and simply edit the parameter (see the sketch after this list).

    • Make scripts shareable within your organization.

      • Once your scripts have been developed and tested, comment the script fully and consider publishing it to your organization's Script Library for easy storage, retrieval, and modification by yourself and others.

    • Design output/feedback that's useful for the format being implemented.

      • For example, Activity Based Assessments with retries permitted may provide some feedback to aid a user in discovering the proper solution while Performance Based Testing may simply provide whether or not an item was correct.
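
    As a rough illustration of the flexibility and reusability point above, the following PowerShell sketch checks for a file whose full path is supplied as a parameter. The parameter name, the example wording, and the convention of returning $true or $false as the scoring outcome are illustrative assumptions; confirm how your scoring items consume script results against the Lab Activities documentation.

        # Hypothetical reusable file-check scoring script.
        # The -FilePath parameter lets the same script body be reused for any file
        # without editing the script itself.
        param(
            [Parameter(Mandatory = $true)]
            [string]$FilePath
        )

        if (Test-Path -Path $FilePath -PathType Leaf) {
            Write-Output "Correct: file '$FilePath' was found."
            return $true
        }
        else {
            Write-Output "Incorrect: file '$FilePath' was not found."
            return $false
        }

    To score a different file, only the parameter value changes; the script body stays untouched, which keeps the format consistent across items.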

    Examples

    Scoring Script Samples

    The samples below are designed to aid in the design of your scoring scripts. While they can be copied, pasted, and modified, their formats may not exactly fit the items you need to score. They are intended as guides and suggestions of structure and should not be considered the only valid methods.

    Arguably more important than what your script looks like or how easy it is to read is what its output looks like. Verbose output that can be referenced later and easily used to identify why someone got an item correct or incorrect may be important for formats such as Performance Based Testing, in the event a user disputes an item, while more user-friendly messages may be better for formats such as Activity Based Assessments.

    A best practice is to keep the structure as consistent as possible across items of the same format, so that most of the script can simply be copied from item to item with minimal lines requiring modification. Maintaining the same structure from item to item also aids readability across items.

    Object Matching Method

    While not mandatory in your own designs, all of the output from the sample scripts of this format provided here begins with a simple statement that includes the Item Id and whether the user was correct or incorrect.

    The Object Matching Method and the output associated with it are a bit more complex than the If Statement Method, but they also tend to be more verbose and report better information for logging purposes.
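
    The following PowerShell sketch illustrates one possible Object Matching structure. The Item Id, the target service, and the expected values are placeholders, and the convention of returning $true or $false as the scoring outcome is an assumption; adapt these to your own items and confirm result handling against the Lab Activities documentation.

        # Hypothetical Object Matching scoring script that validates the
        # configuration of the 'Spooler' service (placeholder target).
        $itemId = 'Item01'    # placeholder Item Id used in the output

        # Define the expected configuration as an object.
        $expected = [PSCustomObject]@{
            Name      = 'Spooler'
            Status    = 'Running'
            StartType = 'Automatic'
        }

        # Gather the actual configuration from the environment as a matching object.
        $service = Get-Service -Name $expected.Name -ErrorAction SilentlyContinue
        $actual  = [PSCustomObject]@{
            Name      = $service.Name
            Status    = [string]$service.Status
            StartType = [string]$service.StartType
        }

        # Compare the two objects property by property.
        $differences = Compare-Object -ReferenceObject $expected -DifferenceObject $actual -Property Name, Status, StartType

        if (-not $differences) {
            Write-Output "$itemId correct."
            Write-Output ($expected | Format-List | Out-String)
            return $true
        }
        else {
            Write-Output "$itemId incorrect."
            Write-Output ($differences | Format-Table -AutoSize | Out-String)
            return $false
        }

    Because the comparison output lists each property that differs, the same pattern produces useful logging detail with little extra author effort.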

    If Statement Method

    The If Statement Method is the most common scoring script method and is typically easier and more flexible to implement. However, in comparison to the Object Matching Method, its output is typically more basic and must be defined by the author.
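
    The following PowerShell sketch shows a simple If Statement structure. The Item Id, folder path, and feedback messages are placeholders, and returning $true or $false as the scoring outcome is an assumption; adjust these to your own items and scoring format.

        # Hypothetical If Statement scoring script that validates that a folder
        # exists and contains at least one .txt file (placeholder target).
        $itemId = 'Item02'                  # placeholder Item Id used in the output
        $target = 'C:\LabFiles\Reports'     # placeholder folder path

        if ((Test-Path -Path $target) -and (Get-ChildItem -Path $target -Filter '*.txt')) {
            # Author-defined success message; keep the wording consistent across items.
            Write-Output "$itemId correct: '$target' exists and contains at least one .txt file."
            return $true
        }
        else {
            # Author-defined feedback; adjust the verbosity to suit the scoring format.
            Write-Output "$itemId incorrect: '$target' is missing or contains no .txt files."
            return $false
        }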

