Unit tests for scripts

Test scenarios

Creating a unit test

For each script (services and validators), test scenarios can be defined. After clicking the Unit tests button in the left bar, the test panel opens, displaying the test cases currently defined for the given script. A new test can be added with the Add test button at the bottom of the panel.

Figure 1. Unit tests panel

Each scenario has the following to define:

  • Test name,

  • Test description,

  • Activity - whether the test is run when the Run all button is clicked,

  • Input parameters - the values we want to test for the given script,

  • Application data,

  • Output verification - the values we expect for the given input values:

    • Output parameters for services,

    • Message keys for validators.

Figure 2. Test input parameters

Figure 3. Test result verification parameters

Unit test results

We can run a selected test by clicking the run icon located on the tile with information about the test scenario. Clicking the Run all button will run all active tests.

A test result may be marked green - the test finished with the expected result - or red - an incorrect result.

After running a test, a drawer with logs will appear, containing the following information:

  • Name of the test that was executed,

  • Test logs,

  • Output value.

When running all tests, the runtime will be the sum of the execution times of all tests. In the logs drawer there is information about each previous run, separated by an empty line. The logs drawer can be cleared of logs using the Clear logs button.
Figure 4. Test with a positive result

Figure 5. Test with a negative result

Application data in unit tests

When creating unit tests, the application allows simulating the data present on the application form. In the Application data tab we can add fields that should return a specific value. We refer to the given values using the getValue() function available on the context object.

Figure 6. Defining application data for unit tests
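The way getValue() resolves simulated form data can be imitated outside the platform. Below is a minimal, illustrative sketch - not the platform's actual API: the applicationData map stands in for the Application data tab, and the stubbed context object only sketches the getValue() member mentioned above; the field names and the greetingService function are hypothetical.

```javascript
// Hypothetical stand-in for the Application data tab: maps form field ids
// to the values a unit test should return for them.
const applicationData = {
  firstName: "Jan",
  age: 42,
};

// Minimal stub of the context object passed to a script. Only getValue()
// is sketched here; the real platform object has more members.
const context = {
  getValue: (fieldId) => applicationData[fieldId],
};

// Example script body: reads simulated form data via context.getValue().
function greetingService(ctx) {
  return "Hello, " + ctx.getValue("firstName") + "!";
}

greetingService(context); // "Hello, Jan!"
```

In a real test the platform builds the context from the Application data tab itself; the stub merely illustrates how a field defined there resolves to its simulated value inside the script.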

Figure 7. Result of running a test with application data

Errors during tests

If the script contains errors, the test will fail and the error will be logged in the console under the editor. The message will contain the reason and the location of the error.

Figure 8. Error after running the test

Methods of output verification

Script services
We can verify the output of script services in several ways. They are divided into numeric comparison and text comparison:

Numeric comparison

  Verification   Description
  >              Greater
  >=             Greater or equal
  <              Less
  <=             Less or equal
  ==             Numbers are equal
  !=             Numbers are not equal

When choosing one of the above comparison operators, keep in mind that the script outputs and the values entered for comparison are converted to numbers. In case of a conversion error of any value, the test will end with a negative result.

For example, for the data: service output: 6.0, comparison value: 6, comparison: ==, the result is correct. The value returned from the script is equal to the value entered in the expected field.

Figure 9.

Figure 10.
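The conversion rule above can be sketched in code. This is an illustrative re-implementation, not the platform's own verifier; the operator symbols mirror the table, and the != case is an assumption paired with == the same way NOT_EQ pairs with EQ in text comparison.

```javascript
// Illustrative sketch of numeric output verification: both the script
// output and the expected value are converted to numbers first; a failed
// conversion of either value makes the test fail regardless of operator.
function verifyNumeric(output, expected, operator) {
  const a = Number(output);
  const b = Number(expected);
  // Conversion error of any value -> negative result.
  if (Number.isNaN(a) || Number.isNaN(b)) return false;
  switch (operator) {
    case ">":  return a > b;
    case ">=": return a >= b;
    case "<":  return a < b;
    case "<=": return a <= b;
    case "==": return a === b;
    case "!=": return a !== b; // assumed not-equal counterpart of ==
    default:   throw new Error("Unknown operator: " + operator);
  }
}

// The example from the text: output 6.0 compared with 6 using == passes,
// because both sides convert to the number 6.
verifyNumeric("6.0", "6", "=="); // true
```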


Figure 11. Number comparison using the EQ operator completed with a positive result

Text comparison

  Verification   Description
  EQ             Strings are equal
  NOT_EQ         Strings are not equal
  ~              Matches regular expression
  !∅             Not empty

Data is compared as text. Numbers are also treated as text.

For example, for the data: service output: 6.0, comparison value: 6, comparison: EQ, the result is incorrect. The text value returned from the script is not equal to the text value entered.

Figure 12. Number comparison using the EQ operator completed with a negative result
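Text comparison can be sketched the same way. Again an illustrative re-implementation rather than the platform's code; the interpretation of the ~ and !∅ operators follows their table descriptions, and treating !∅ as ignoring the expected value is an assumption.

```javascript
// Illustrative sketch of text output verification: data is compared as
// text, and numbers are also treated as text.
function verifyText(output, expected, operator) {
  const a = String(output);
  const b = String(expected);
  switch (operator) {
    case "EQ":     return a === b;
    case "NOT_EQ": return a !== b;
    case "~":      return new RegExp(b).test(a); // matches regular expression
    case "!∅":     return a.length > 0;          // not empty (expected value assumed unused)
    default:       throw new Error("Unknown operator: " + operator);
  }
}

// The example from the text: "6.0" EQ "6" fails, because as text the
// two values differ even though they are numerically equal.
verifyText("6.0", "6", "EQ"); // false
```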

Script validators

The output of script validators is verified by selecting the message keys that will be returned in a given test case.

For example, a script validator returns two error keys with their parameters:

pl.error1
pl.error2
For the test assertion to pass, both error keys should be present in the list. If we add redundant keys or one of the error keys is missing, the test will fail.
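The pass condition described above - the returned keys must match the expected list exactly, with nothing missing and nothing redundant - can be sketched as an unordered set comparison. This is an illustrative model, not the platform's implementation:

```javascript
// Illustrative sketch: a validator test passes only if the returned
// message keys and the expected keys match exactly - no missing keys,
// no redundant ones. Order does not matter.
function validatorTestPasses(returnedKeys, expectedKeys) {
  const returned = new Set(returnedKeys);
  const expected = new Set(expectedKeys);
  if (returned.size !== expected.size) return false;
  for (const key of expected) {
    if (!returned.has(key)) return false;
  }
  return true;
}

// Both keys present -> pass; a missing or redundant key -> fail.
validatorTestPasses(["pl.error1", "pl.error2"], ["pl.error2", "pl.error1"]); // true
validatorTestPasses(["pl.error1"], ["pl.error1", "pl.error2"]);              // false
```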
