Writing Acceptance Tests

From wiki.gpii


The GPII core architecture includes an automatic acceptance testing framework that makes it fairly straightforward to write or add new acceptance tests.

In this context, an acceptance test is a test that exercises the entire (real-time framework) system end to end. Basically, it starts up the server, logs a user in, and checks that the system is modified accordingly. Then the user is logged out and a check is made again to ensure that the system is returned to its proper state. This means that the actual system will be configured (and restored) when running the acceptance tests.

Note: This tutorial assumes that the solution for which you write acceptance tests already has an entry in the solutions registry. If this is not the case, you need to add this entry to your local version of linux.json, win32.json, android.json or web.json (depending on the platform where your solution will be available). You can do this after creating a branch (see below); at the end you can then create a single pull request for both the solutions registry entry and the acceptance tests. (You should not submit an entry for the solutions registry without acceptance tests.)

Writing New Acceptance Tests

Below are instructions for adding new integration tests. Each SP3 application should have its own acceptance tests, to allow the tester to run the tests individually for each application.

Depending on whether you use the Cloud-based flow manager with your solution, or it's an installed solution, the acceptance test will vary slightly. But before we get that far, there are some common steps:

Common Steps

Create a JIRA Ticket

First, create a ticket for your solution's acceptance tests in the GPII project on JIRA. You can later reference this ticket in the name of the Git branch that you create for the acceptance tests, in commit messages and in the pull request on GitHub.

Bring Master up to Date

Ensure that your master branch is up to date: go to gpii/node_modules/universal and run git fetch upstream - where the 'upstream' reference can be added by running git remote add upstream git://github.com/GPII/universal.

Create a Branch for the Acceptance Tests

In your local version of the GPII code, create a branch using the command git checkout -b GPII-XXX, where GPII-XXX represents the JIRA ticket. (You run this command under /gpii/node_modules/universal/.)

You can then type git status to verify that there are no files to commit yet.

Writing Acceptance Tests for Applications Using the Local Flow Manager

Most applications (desktop applications, apps, etc.) depend on the GPII being installed on the system. The instructions for writing acceptance tests for these kinds of applications can be found here. If your application uses the online flow manager (e.g. web pages), see the section further down.

Creating the NP Sets and Device Reporter Payload

The first step in writing your acceptance test is ensuring that you have the NP set you want to write the test for, and that you have the appropriate device reporter payload. For acceptance testing, both of these live in special folders in the universal repository: NP sets go in universal/testData/preferences/acceptanceTests/ and device reporter payloads go in universal/testData/deviceReporter/acceptanceTests/.
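The device reporter payload describes the device (OS and installed solutions) that the test should simulate. The sketch below is hypothetical; it mirrors the device data format used in the appinfo examples later in this tutorial, and the platform and solution IDs shown are only placeholders - check an existing payload in testData/deviceReporter/acceptanceTests for the exact structure expected on your platform:

```javascript
// Hypothetical device reporter payload (e.g. nvda.json). This is a sketch
// based on the device data format used elsewhere in this tutorial; the
// solution ID must match your entry in the solutions registry.
var payload = {
    "OS": { "id": "win32" },            // assumed platform ID
    "solutions": [
        { "id": "org.nvda-project" }    // placeholder solution ID
    ]
};

console.log(JSON.stringify(payload, null, 4));
```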

Creating the Config File

The next step is to create a configuration file to use when launching the server for the acceptance tests. The purpose is to point the server to the correct device reporter file.

The location of this file should be inside the universal folder universal/tests/platform/<your-platform>/configs/, where <your-platform> should be replaced with the platform your application runs on. The file name should describe the acceptance test in question; for example, for NVDA the name could be something like "windows-nvda-config.json".

The content of the file should be:

    {
        "typeName": "acceptanceTests.nvda",
        "options": {
            "components": {
                "server": {
                    "options": {
                        "components": {
                            "deviceReporter": {
                                "options": {
                                    "installedSolutionsUrl": "file://%root/../../../testData/deviceReporter/acceptanceTests/nvda.json"
                                }
                            }
                        }
                    }
                }
            }
        },
        "includes": [ ... ]
    }

The important part is the "installedSolutionsUrl" key-value pair. Its value should point to the device reporter file created in the previous step (in the above example, "nvda.json"). You should also make sure that the typeName on the first line is descriptive of your acceptance test. More examples of configuration files can be found here: https://github.com/GPII/universal/tree/master/tests/platform/windows/configs

Writing the Acceptance Test

The final step is to write the actual acceptance test. For this you should create a new file with a descriptive name under the universal folder: universal/tests/platform/<your-platform>/. For example "windows-nvda-testSpec.js".

As a template, you can use the following:

    /*
    GPII Integration and Acceptance Testing

    Copyright 2014 Raising the Floor International

    Licensed under the New BSD license. You may not use this file except in
    compliance with this License.

    You may obtain a copy of the License at
    */
    "use strict";
    var fluid = require("universal"),
        gpii = fluid.registerNamespace("gpii");

    fluid.registerNamespace("gpii.tests.windows");

    gpii.tests.windows.nvda = [
        // (... test descriptions go here ...)
    ];

    module.exports = gpii.test.bootstrap({
        testDefs:  "gpii.tests.windows.nvda",
        configName: "windows-nvda-config",
        configPath: "configs"
    }, ["gpii.test.integration.testCaseHolder.windows"],
        module, require, __dirname);

where "windows-nvda-config" should be replaced by the name of the config file you created above and "gpii.tests.windows.nvda" should be replaced by a name appropriate to your application.

We're now ready to add the actual test descriptions (in the (... test descriptions go here ...) area of the above template).


  • Add a new test fixture to your test file (e.g. windows-nvda-testSpec.js) using the below template:
        {
            name: "Testing NVDA with token screenreader_common using Flat matchmaker", // REPLACE with a title describing your test
            userToken: "screenreader_common", // REPLACE with the token you want to log in with
            settingsHandlers: {
                // insert the expected payload from the settings handler (see below)
            },
            processes: [
                // INSERT the commands to check for the running application process (see below)
            ]
        }
  • Modify the test fixture to have the NP set as the value for the userToken key (e.g. userToken: "screenreader_common").
  • Add an array of key-value pairs for the processes key. This is explained further under "Processes".
  • Temporarily copy the NP set you want to use into the regular folder holding preferences (universal/testData/preferences)
  • Start node-inspector from the command line. (Simply enter node-inspector in a different command line window than the next step. If you haven't yet installed it, just enter npm install -g node-inspector on the command line.)
  • Run the GPII framework in the same mode you want the acceptance test to run. For locally installed solutions, this would be done by running, from the windows folder: node --debug gpii.js
  • Open the URL for Node Inspector in a webkit-based browser (typically Google Chrome, Safari or a recent version of Opera).
    • It can be helpful to set a breakpoint at line 103 of timers.js, at the line if (!process.listeners('uncaughtException').length) throw e;. The uncaught exception e can tell you what went wrong in case of an error. [As of 22-Jul-2015, line 103 of timers.js is a comment; I cannot find the line specified here where an exception is thrown.]
    • Set a breakpoint near line 229 of gpii/node_modules/universal/gpii/node_modules/lifecycleManager/src/LifecycleManager.js, towards the bottom of the function gpii.lifecycleManager.invokeSettingsHandlers(). This is the point just after the settings handlers have been called.
  • Log in with the NP set you want to use for testing. In our example, this would use the URL http://localhost:8081/user/screenreader_common/login (where screenreader_common is the token for the preference set in the test - see the test fixture above).
  • In Node Inspector, you should now see that the process stops at the breakpoint in LifecycleManager.js. Open Node Inspector's console using the button in the lower left corner. You can now highlight snapshots to inspect the payload.
    • Typing snapshots in the Node Inspector console allows you to inspect that object. You can get this data in a more readable format by typing JSON.stringify(snapshots, null, 4). (If Node Inspector does not print the whole payload, you can type console.log(JSON.stringify(snapshots, null, 4)) - the ", null, 4" part is optional but adds indentation - and get the payload from the console where you started GPII.) The output should have the following format:
        {
            "settings": { ... },
            "type": "gpii.settingsHandlers.settingsTypeHere",
            "options": { ... }
        }
  • You can validate the output in JSONLint. After this, you can update your test fixture's settingsHandlers block in the following way: the outer key should be the settings handler type (i.e. the 'type' from the output), and its value should be an object with a key "data" that contains an array of objects. Each of these objects should contain a "settings" block (with the content of "settings" from the output) and an "options" block. In the end you will get something like:
        "<settings-handler-type-here>": {
            "data": [
                {
                    "settings": { (.. copy of the 'settings' content from the snapshot (debug) data ..) },
                    "options": { (.. copy of the options block from your solutions registry entry ..) }
                }
            ]
        }
  • Repeat the process for each sample preference set / test fixture.
  • You can find several examples of acceptance tests here: https://github.com/GPII/universal/tree/master/tests/platform/windows
  • MANUALLY check that the settings you've registered make sense by going through the settings and solutions registry transformations.
  • The final step is to register your new test with the system. This is done by adding the name of your acceptance test fixture file to the list in universal/tests/platform/index-windows.js (where "windows" should be replaced with the OS of your application)
  • Once you're done writing the acceptance test, you should add a text file describing the test and its dependencies. It should be located in universal under tests/platform/<YOUR OS>/<TEST NAME>.txt, where <YOUR OS> should be replaced with the name of the OS your application runs on, and <TEST NAME> should be the same name as your test fixture.
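Putting the pieces above together, a complete test fixture could look like the sketch below. The settings handler type, the settings values and the options block are hypothetical placeholders; your real values come from the snapshot (debug) data and from your solutions registry entry:

```javascript
// Stub namespace so this sketch is self-contained; in a real testSpec file
// this comes from fluid.registerNamespace (see the template above).
var gpii = { tests: { windows: {} } };

gpii.tests.windows.nvda = [
    {
        name: "Testing NVDA with token screenreader_common using Flat matchmaker",
        userToken: "screenreader_common",
        settingsHandlers: {
            // Placeholder handler type and settings: replace with the 'type'
            // and 'settings' from your snapshot data.
            "gpii.settingsHandlers.INISettingsHandler": {
                "data": [
                    {
                        "settings": {
                            "speech.espeak.rate": 17    // assumed setting
                        },
                        "options": {
                            // Placeholder; copy this from the solutions
                            // registry entry for your solution.
                            "filename": "${{environment}.APPDATA}\\nvda\\nvda.ini"
                        }
                    }
                ]
            }
        },
        processes: [
            {
                "command": "tasklist /fi \"STATUS eq RUNNING\" /FI \"IMAGENAME eq nvda.exe\" | find /I \"nvda.exe\" /C",
                "expectConfigured": "1",
                "expectRestored": "0"
            }
        ]
    }
];

console.log(gpii.tests.windows.nvda[0].name);
```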

Now everything should be prepared, so you should be able to run the acceptance tests. They can (and should) be run in two different ways:

  • Running it without OS dependencies:
    • This is done by going to the universal folder and running node tests/IntegrationsTests.js. This will run all the integration tests. If you only want to run your own tests, you can add a substring of their name; in the case of NVDA, for example, you can run node tests/IntegrationsTests.js nvda.
  • Running it with OS dependencies:
    • Go to the windows repository and run the following command: node tests\AcceptanceTests.js. This will run all the acceptance tests in windows (that is, tests with platform bindings). As with running without platform bindings, you can run your test only. For example for nvda, you would run: node tests\AcceptanceTests.js nvda


Processes

The processes key allows for the specification of an array of running applications to check. Sometimes no processes are relevant; in that case, the array can be empty.

Each member of the array is an object with the following key/value pairs:

  • command: the command to execute that checks for the running application.
  • expectConfigured: the expected output of the command while the user is logged in (i.e. the system is configured).
  • expectRestored: the expected output of the command after logout (i.e. the system is restored and the application is not running).

Here is an example for JAWS running on Windows:

processes: [
    {
        "command": "tasklist /fi \"STATUS eq RUNNING\" /FI \"IMAGENAME eq jfw.exe\" | find /I \"jfw.exe\" /C",
        "expectConfigured": "1",
        "expectRestored": "0"
    }
]

Note: in point of fact, in some instances on GNOME/Linux, the application to check is not actually a process. The system executes the gsettings command to change a session setting; the effect is to enable or disable, say, the screen reader or screen magnifier. In this case, the command is testing the relevant session setting's value. Here is an example to check for a screen reader (likely Orca) running on GNOME/Linux:

processes: [
    {
        "command": "gsettings get org.gnome.desktop.a11y.applications screen-reader-enabled",
        "expectConfigured": "true",
        "expectRestored": "false"
    }
]

Applications using the Cloud-Based Flow Manager (Walkthrough)

Editorial note: This section is a rewrite of a previous walkthrough that was based on a desktop application. See this e-mail from 15 August 2014 and the response.

The following walkthrough is for a solution that needs to connect to the Cloud-based or Online Flow Manager.

The walkthrough is based on the following assumptions:

  • Our solution is a web-based screen reader called Sharky. The solution's ID in the solutions registry is: org.example.sharky.
  • The solution is listed in the web registry. (We could call web the "platform ID".)
  • We don't need a device reporter file or a config file.
  • The (first) preference set/token we use for the acceptance test(s) is sharky_carla. (More preference sets can be added, with one preference set per acceptance test.)
  • The JIRA ticket for the acceptance test for this solution: GPII-586.
    • Note: each solution should have a JIRA ticket for acceptance tests.

The steps of the walkthrough:

  1. Ensure that your master branch is up to date:
    • go to gpii/node_modules/universal and run git fetch upstream - where the 'upstream' reference can be added by writing git remote add upstream git://github.com/GPII/universal.
    • If you have local changes that need to be merged first, run git merge upstream/master.
  2. Create a new Git branch for the acceptance tests. You can name this branch to reflect the JIRA ticket (GPII-586 for this example): git checkout -b GPII-586.
  3. Create the preference set "sharky_carla" in the acceptance tests folder (gpii/node_modules/universal/testData/preferences/acceptanceTests; see them on GitHub). You can create a few variations of the same preference set to test out different aspects (i.e. different preferences and different transformations). In order to avoid name conflicts with other solutions, it makes sense to use file names that refer to your solution.
  4. Create a file for the acceptance tests in node_modules/universal/tests/platform/cloud/ (see also GitHub). In our example, this file is called AcceptanceTests_sharky.js.
    • The easiest approach is to copy and adapt an existing acceptance tests file.
  5. In each acceptance test file, testDefs represents an array of acceptance tests that you want to run for your solution. It can contain as many tests as you want. For each token:
    • Give the test a name (a short description that is meaningful when it shows up in logs etc.).
    • Add the token ("sharky_carla" for our first test).
    • Add the "appinfo" (a URI-encoded string with the following format:
      "{\"OS\":{\"id\":\"<PLATFORM_ID>\"},\"solutions\":[{\"id\":\"<SOLUTION_ID>\"}]}" );
      in our case: appinfo: encodeURIComponent("{\"OS\":{\"id\":\"web\"},\"solutions\":[{\"id\":\"org.example.sharky\"}]}").
    • Find the expected return payload through the following steps:
      • Set the following environment variable from the command line: set NODE_ENV=cloudBased.development.all.local.
      • Start the server by running node gpii.js (from the folder <GPII-install>/node_modules/universal; see documentation on GitHub). Node should now be running on port 8081.
      • Construct the URL that will run the example preference set through the transformations for your solution:
        • As described in the Flow Manager documentation, the general format is
<TOKEN>/settings/{"OS":{"id":"<PLATFORM_ID>"},"solutions":[{"id":"<SOLUTION_ID>"}]} .
          In our case, the URL would be http://localhost:8081/sharky_carla/settings/{"OS":{"id":"web"},"solutions":[{"id":"org.example.sharky"}]}. However, this URL needs to be URI-encoded; you can do this using an online URL Decoder/Encoder.
      • Open a browser and enter the URI-encoded URL you have created.
  6. Add the payload from the Flow Manager to the acceptance test:
    • Copy the returned payload from the browser window, pretty-print it (e.g. using JSONLint ) and paste the result in the variable "expected".
  7. Run the new test(s) using node tests/platform/cloud/AcceptanceTests_sharky.js to check whether they pass.
    • If all goes well, you should get a message like jq:   All tests concluded: 2/2 total tests passed in 1051 ms - PASS at the end of the terminal output.
    • If the tests end with an error message like warn - error raised: Error: listen EACCES, you should check that you have no other services listening on port 8080, e.g. Apache Tomcat.
  8. Repeat the last 3 steps for each preference set that you want to create a test for.
  9. In the folder node_modules/universal/tests/platform/cloud/ create a text file that describes the purpose of the acceptance test. In our case, the name of the test file would be AcceptanceTests_sharky.txt.
  10. Add your acceptance test to .../universal/tests/all-tests.js.
    • You can check all tests by running node tests\all-tests.js: this runs all the acceptance tests. You should again see a message saying that all tests have passed.
  11. When everything looks correct, you are ready to commit the test code and create a pull request:
    • Run git status to make sure you don't overlook any files that need to be added to the commit.
    • Run git add tests/platform/cloud/AcceptanceTests_sharky.js , git add testData/preferences/acceptanceTests/sharky_carla.json, etc. to add the new files.
    • Commit the changes and make sure that you mention the JIRA ticket in the commit message, e.g. git commit -m "GPII-586: added acceptance tests for Sharky screen reader".
    • Run git push origin <branch> (in our example: git push origin GPII-586).
    • Create a pull request. When you go to GitHub, you should be able to find the button "Compare & pull request". You can also create a pull request from your own branch.
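The appinfo string and the settings URL from steps 5 and 5d can also be constructed programmatically. A sketch for the Sharky example, using the local development host and port described above (all IDs follow the walkthrough's assumptions):

```javascript
// Device data for the hypothetical Sharky solution (see the assumptions above).
var deviceData = {
    "OS": { "id": "web" },
    "solutions": [{ "id": "org.example.sharky" }]
};

// URI-encoded appinfo value for the acceptance test definition.
var appinfo = encodeURIComponent(JSON.stringify(deviceData));

// Settings URL for a given token, with the server running locally on port 8081.
var token = "sharky_carla";
var url = "http://localhost:8081/" + token + "/settings/" +
    encodeURIComponent(JSON.stringify(deviceData));

console.log(appinfo);
console.log(url);
```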

Note: after the tests, you can remove the NODE_ENV variable again with set NODE_ENV=.
