Unit tests for scripts

Test scenarios

Creating a unit test

For each script (services and validators) you can define test scenarios. After clicking the Unit tests button in the left bar, the test panel opens, displaying the test cases currently defined for the given script. A new test can be added with the Add test button at the bottom of the panel.

Illustration 1. Unit tests panel

Each scenario has to define:

  • Test name,

  • Test description,

  • Activity - whether the test is run when Run all is clicked,

  • Input parameters - the values we want to test for the given script,

  • Application data,

  • Output verification - the values we expect for the given input values:

    • Output parameters for services,

    • Message keys for validators.
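
Conceptually, a scenario bundles these elements together. A rough sketch of that structure in JavaScript (the field names below are illustrative, not the platform's actual schema):

```javascript
// Hypothetical shape of a test scenario; field names are illustrative only.
const scenario = {
  name: "calculates gross price",
  description: "Net price 100 with 23% VAT should give 123",
  active: true, // included when "Run all" is clicked
  inputParameters: { net: 100, vatPercent: 23 },
  applicationData: { Currency: "EUR" }, // form values served via getValue()
  expectedOutput: {
    outputParameters: { gross: 123 } // for services; validators list message keys instead
  }
};
```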

Illustration 2. Test input parameters

Illustration 3. Test result verification parameters

Unit test results

We can run a selected test by clicking the icon on the tile with the test scenario's details. Clicking the Run all button runs all active tests.

A test result is marked green when the test finishes with the expected result, or red when the result is incorrect.

After running a test, a drawer with logs appears, containing:

  • The name of the test that was run,

  • Logs from the test,

  • Output value.

When running all tests, the runtime is the sum of the execution times of all tests. The log drawer keeps the output of each previous run, separated by an empty line. The drawer can be cleared with the Clear logs button.

Illustration 4. Test with a positive result

Illustration 5. Test with a negative result

Application data in unit tests

When creating unit tests, the application allows simulating the data present on the application form. In the Application data tab we can add fields that should return a specific value. We refer to the provided values using the getValue() function available on the context object.
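
To see how such a simulated form value flows into a script, a minimal stub might behave as follows (the context object and getValue() come from the source; the stub implementation and the field names are assumptions):

```javascript
// Hypothetical stub: application data defined in the test is served
// through getValue(), mimicking fields present on the application form.
function makeContext(applicationData) {
  return {
    getValue(fieldName) {
      return applicationData[fieldName];
    }
  };
}

// A script under test reads form fields through the context object.
const context = makeContext({ NetPrice: 100, VatPercent: 23 });
const gross =
  context.getValue("NetPrice") +
  (context.getValue("NetPrice") * context.getValue("VatPercent")) / 100;
// gross → 123
```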

Illustration 6. Defining application data in unit tests

Illustration 7. Result of running a test with application data

Errors during tests

If the script contains errors, the test will fail and the error will be logged in the console under the editor. The message will include the cause and location of the error.

Illustration 8. Error after running the test

Ways to verify output

Script services

We can verify the output of script services in several ways. They are divided into number comparisons and text comparisons:

Number comparison

Verification    Description
>               Greater
>=              Greater or equal
<               Less
<=              Less or equal
==              Numbers are equal
≠               Numbers are not equal

When choosing one of the comparison operators above, keep in mind that the script outputs and the values entered for comparison are converted to numbers. If conversion of any value fails, the test will end with a negative result.

For example, for service output 6.0, value to compare 6, and comparison ==, the result is correct: the value 6.0 returned from the script is equal to the value 6 entered in the expected field.
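
The conversion rule above can be sketched as follows (this mirrors the described semantics, not the platform's internal implementation; "!=" stands in for the ≠ operator):

```javascript
// Sketch of the described number-comparison semantics: both sides are
// converted to numbers first; a failed conversion means a failed test.
function compareNumbers(actual, expected, operator) {
  const a = Number(actual);
  const b = Number(expected);
  if (Number.isNaN(a) || Number.isNaN(b)) return false; // conversion failed
  switch (operator) {
    case ">":  return a > b;
    case ">=": return a >= b;
    case "<":  return a < b;
    case "<=": return a <= b;
    case "==": return a === b;
    case "!=": return a !== b;
    default:   throw new Error("Unknown operator: " + operator);
  }
}

compareNumbers("6.0", "6", "==");  // true: 6.0 equals 6 as numbers
compareNumbers("abc", "6", "==");  // false: "abc" cannot be converted
```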

Illustration 9. Test input parameters

Illustration 10. Test result verification parameters

Illustration 11. Number comparison using the == operator ended with a positive result

Text comparison

Verification    Description
EQ              Strings are equal
NOT_EQ          Strings are not equal
~               Matches regular expression
!∅              Not empty

Data are compared as text. Numbers are also treated as text.

For example, for service output 6.0, value to compare 6, and comparison EQ, the result is incorrect: the text value 6.0 returned from the script is not equal to the text value 6 entered in the expected field.
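
A sketch of the described text-comparison semantics (again an illustration, not the platform's code; "NOT_EMPTY" stands in for the !∅ verification):

```javascript
// Sketch of the described text-comparison semantics: values are compared
// as strings, so numbers are stringified first.
function compareText(actual, expected, verification) {
  const a = String(actual);
  switch (verification) {
    case "EQ":        return a === String(expected);
    case "NOT_EQ":    return a !== String(expected);
    case "~":         return new RegExp(String(expected)).test(a);
    case "NOT_EMPTY": return a.length > 0;
    default: throw new Error("Unknown verification: " + verification);
  }
}

compareText("6.0", "6", "EQ");  // false: "6.0" and "6" differ as text
compareText("6.0", "6", "~");   // true: the regex "6" matches within "6.0"
```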

Illustration 12. Number comparison using the EQ operator ended with a negative result

Script validators

The output of script validators is verified by selecting the message keys that will be returned in a given test case.

For example, a script validator returns two error keys with their parameters:

  • pl.error1

  • pl.error2

For the test assertion to succeed, both error keys must be present in the list.

If extra keys are added or one of the error keys is missing, the test will fail.
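
In other words, the assertion behaves like exact set equality between the expected and returned message keys. A sketch under that assumption:

```javascript
// Sketch: a validator test passes only when the returned message keys
// match the expected list exactly - no missing keys, no extra keys.
function validatorTestPasses(expectedKeys, returnedKeys) {
  const expected = new Set(expectedKeys);
  const returned = new Set(returnedKeys);
  if (expected.size !== returned.size) return false;
  for (const key of expected) {
    if (!returned.has(key)) return false;
  }
  return true;
}

validatorTestPasses(["pl.error1", "pl.error2"], ["pl.error2", "pl.error1"]); // true
validatorTestPasses(["pl.error1", "pl.error2"], ["pl.error1"]);              // false: missing key
validatorTestPasses(["pl.error1"], ["pl.error1", "pl.extra"]);               // false: extra key
```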

Illustration 13. Script validators test with a positive result

Illustration 14. Script validators test with a negative result
