1st Phase Testing Experimental Plans templates

From wiki.gpii

Rule-based and statistical MM

Name of the solution to be tested:

Rule-based & statistical MM

Related Cloud4all activities:

A204.3: Rules/heuristics based algorithms for profile matching
A204.4: Statistical methods for dynamic profile matching

Developer(s):

Rule-based matchmaker: TUD, CERTH-ITI
Statistical matchmaker: HdM

Hardware requirements/AT:

Hardware:

  • 1 gigahertz (GHz) or faster 32-bit (x86) or 64-bit (x64) processor.
  • 1 gigabyte (GB) RAM (32-bit) or 2 GB RAM (64-bit).
  • 660 MB available hard disk space (150 MB for the Architecture + 126 MB for the Java SE Runtime Environment + 100 MB for the rule-based MM + 2 MB for the statistical MM + 50 MB for NVDA + 200 MB for Firefox + some space for the installers, which can be removed after the installation process)
  • Internet access (because the Preferences Server will not be running locally).
  • speakers (for speech output by the screen readers).
  • Braille display, if available at the test site; check in advance whether it can be used with both NVDA and Orca.

Note: The processor and memory requirements are based on Windows 7. The requirements for Fedora 18 are lower:

  • 400MHz or faster processor.
  • at least 768 MB memory (RAM), 1 GB recommended.
  • at least 10 GB hard drive space.

AT:

  • On Linux/GNOME, the GNOMEShell Magnifier will be used by users who need magnification, and the screen reader Orca will be used by blind users. This should not require an additional installation step.
  • On Windows, the built-in features (e.g. magnification and contrast) will be used by low-vision users, and the screen reader NVDA will be used by blind users. NVDA is not part of Windows and requires a separate installation step.

Software requirements:

Java: Java SE Runtime Environment (JRE) 6 or higher (for the rule-based matchmaker).

Linux/GNOME:

  • Fedora 18 (with GNOME 3.6; released 15 January 2013)
  • No GNOME Shell extensions or third-party modifications to the original GNOME UX experience will be installed, with the exception of GnomeTweakTool (because the font size cannot be set through System Settings in Fedora 18).

Windows OSs (Windows 7, 32-bit) and built-in features: Magnifier, high-contrast settings, etc.

Note: We will not use 64-bit versions of Windows.

Downloadable link to the solution to be tested:

Please provide an executable file or similar to install the demo/test components/solution.

Available languages: Please check and update the following information if necessary:

The matchmakers do not provide an interface to end users, so the languages in which these components can be tested are determined by the available localisations of the operating systems and assistive technologies mentioned above.

Windows 7 and its accessibility features are available in German, Spanish and Greek.

NVDA's user interface can be changed to German, Spanish and Greek.

Fedora 18 is available in German, Spanish and Greek. The user interface language needs to be selected during installation. Orca is also available in German, Spanish and Greek.

Target audience:
  • Low-vision users
  • Blind users

Note: Users without disabilities will not be included in the first phase, because the selected needs & preferences terms are tailored to users with visual impairments.

Step-by-step description of proposed procedure for running the demo/test (tasks):

(1) The user and a test facilitator define the user's N&P set for a specific platform A; this N&P set is stored in the Preferences Server (with the user's token). For Windows users, platform A would be Windows; for Linux users, platform A would be Linux.

(2) The user identifies himself/herself by means of the token on platform B, and the system is adapted by means of settings that a matchmaker (i.e. the statistical or the rule-based matchmaker) has inferred from the user's N&P set. For Windows users, platform B would be Linux; for Linux users, platform B would be Windows.
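For illustration, the N&P set stored with the user's token in step (1) might be represented as a simple key-value structure. The actual payload format is defined by the Preferences Server and the GPII framework; the field names below are assumptions for this sketch only:

```python
# Illustrative only: a hypothetical N&P set keyed by a user token.
# The real payload structure and term names are defined by the
# Preferences Server / GPII framework, not by this sketch.
np_set = {
    "token": "test-user-001",
    "preferences": {
        "fontSize": 24,           # large font for a low-vision user
        "magnifierEnabled": True,
        "magnification": 2.0,     # 200% magnification level
        "speechRate": 180,        # screen reader speech rate (wpm)
    },
}

print(np_set["preferences"]["magnification"])  # -> 2.0
```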

(3) The user is asked to perform the following tasks:

  • READING: Reading an email or a longer text on a static webpage.
  • BROWSING: Searching information on a static webpage.
  • FORM FILLING: Filling in a web form.

So the tasks are done using the preferences inferred by the matchmakers.

(4) The user, with the assistance of a test facilitator, logs in on platform B using the user's token and configures his/her settings manually until the user is satisfied with the settings. (Since the user will probably not be familiar with this OS, the test facilitator needs to help the user here, and therefore needs to know where all the settings can be found.) These settings are saved to the Preferences Server, but with a different token than the one used on platform A. (These settings can later be compared with the settings inferred by the matchmaker.)

(5) The user identifies himself/herself by means of the new token on platform A and the system is adapted by means of settings that a matchmaker (i.e. the statistical or the rule-based matchmaker) has inferred from the user's N&P set for platform B. (The settings inferred for platform A will later be compared to the N&P set defined in step 1.)

(6) The user is asked to perform the same tasks as on platform B. So these tasks are again done using the preferences inferred by the matchmaker.

Note: The matchmaker used between steps 1 and 2 would be different from the matchmaker used between steps 4 and 5, and the order would alternate between users. For example, one user may first work (on platform B) with settings inferred by the rule-based matchmaker, and then (when returning to platform A) with settings inferred by the statistical matchmaker. The next user would work on platform B with settings inferred by the statistical matchmaker and on platform A with settings inferred by the rule-based matchmaker.
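The alternation described above can be sketched as a simple counterbalancing scheme, assuming participants are numbered consecutively (the function and the numbering are illustrative, not part of the test plan):

```python
# Sketch: counterbalanced assignment of matchmakers to participants.
# Even-numbered participants use the rule-based matchmaker on platform B
# and the statistical one on platform A; odd-numbered participants get
# the reverse order.
MATCHMAKERS = ("rule-based", "statistical")

def assign_matchmakers(participant_index):
    """Return (matchmaker used on platform B, matchmaker used on platform A)."""
    first = MATCHMAKERS[participant_index % 2]
    second = MATCHMAKERS[(participant_index + 1) % 2]
    return first, second

for i in range(4):
    platform_b, platform_a = assign_matchmakers(i)
    print(f"participant {i}: B uses {platform_b}, A uses {platform_a}")
```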

Research objectives of the test:

The rule-based matchmaker succeeds if it scores at least 50% compared to the settings created by the user and the expert in the last step of the test procedure. (Success criterion for M18)

The statistical matchmaker succeeds if it scores at least 50% compared to the settings created by the user and the expert in the last step of the test procedure. (Success criterion for M18)

Note: the score is calculated by multiplying a distance metric by 100. The distance metric is a value between 0 and 1 that combines the distance metrics for each of the preferences or settings used in this test phase. These preferences and their distance calculation are described in the wiki page http://wiki.gpii.net/index.php/Cloud4all_Testing:_Essential_Registry_Terms.
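As a minimal sketch of that calculation, assuming each per-preference distance has already been normalised to [0, 1] with 1 meaning a perfect match (which the 50% success criterion suggests), and assuming a plain average as the combination rule (the authoritative per-preference definitions are on the wiki page cited above):

```python
# Sketch: overall matchmaker score from per-preference distance values.
# Assumption: each value is in [0, 1], with 1 = perfect match, and the
# combined metric is a plain (unweighted) average; the real metric may
# weight preferences differently.
def overall_score(per_preference_distances):
    metric = sum(per_preference_distances) / len(per_preference_distances)
    return metric * 100  # success criterion for M18: score >= 50

print(overall_score([1.0, 0.5, 0.75, 0.25]))  # -> 62.5
```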

Pre-Test questionnaire. Is there any specific information (related to your demo/test) you would like to gather from the users before the tests? If yes, please, write each one as a question for including in the Pre-Test questionnaire

Apart from the standard questions we will have (common for all prototypes), such as demographics, time (years) using computers, etc., please check whether there is any relevant information you would like to know in advance from the users before they do the test, e.g. the screen reader they usually use (giving them a few options – the most relevant), the changes they usually make to a computer before using it, the problems they face when trying to use a computer other than their own. Please provide these as questions, in order to avoid misunderstandings between what you want to gather and the wording produced by the person who turns a statement into a question. An example:

1. When you use a PC, which operating system do you use most frequently?

  • Windows
  • Linux
  • Mac OS X
  • Which version of this operating system do you use?

2. When you use a mobile device, which operating system do you use most frequently?

  • iOS
  • Android
  • Symbian
  • Windows
  • Others

3. Which AT are you using?

For visually impaired users:

  • [ ] magnifier (Windows Magnifier, ORCA Magnifier, ZoomText, Virtual Magnifying Glass, other: _____________________________ )
  • [ ] reader applications (Daisy-player, other: ________________________________________________________)
  • [ ] screen reader (JAWS, NVDA, SuperNova, Window-Eyes, VoiceOver, Orca, other: ______________________)
  • other ______________________________________________________________

For blind users:

  • [ ] screen reader (JAWS, NVDA, SuperNova, Window-Eyes, VoiceOver, Orca, other: ______________________)
  • [ ] braille display
  • other ______________________________________________________________

For hard-of-hearing users:

  • [ ] hearing aids
  • other ______________________________________________________________

For dyslexic users:

  • [ ] reader applications (Daisy-player, other: ________________________________________________________)
  • [ ] screen reader (JAWS, NVDA, SuperNova, Window-Eyes, VoiceOver, Orca, other: ______________________)
  • other ______________________________________________________________

For deaf users:

  •  ______________________________________________________________

For elderly users:

  • _______________________________________________________________

4. Do you sometimes switch between devices/platforms?

Yes [ ] No [ ]

If yes, between which devices/platforms?

___________________________________________

4.1. Do you experience problems when moving from your system to another?

Yes [ ] No [ ]

If yes, which ones?

___________________________________________

5. What are your needs? Needs are settings without which you might not be able to work with the system, e.g. speech output, magnification, high contrast and large font, etc.

____________________________________________

6. What are your preferences? Preferences are settings that improve your experience of working with the system, e.g. speech output when reading long texts, specific colours for application backgrounds, a specific speech rate when reading specific literature, etc.

____________________________________________

7. I have a good knowledge of my system:

[ ] Strongly disagree [ ] Disagree [ ] Neither agree nor disagree [ ] Agree [ ] Strongly agree

7.1. Can you change all the settings of your system and your AT that you need to change?

[ ] Yes [ ] No

7.2. Do you often need help to change a setting?

8. Who has set up/configured your system?

[ ] myself [ ] accessibility assistance [ ] friend [ ] family

____________________________________________


Please check and update the following information:

Add which parameters are going to change when running the tests; e.g. if the screen reader is configured automatically, specify which parameters will be changed: speed, pitch, male or female voice…
OS and magnification settings for which changes will be monitored:

  • ForegroundColour (depends on theme)
  • BackgroundColour
  • FontSize
  • CursorSize
  • MagnifierEnabled
  • Magnification (i.e. magnification level)
  • MagnificationPosition
  • Tracking

Screen reader settings for which changes will be monitored:

  • ScreenReaderTTSEnabled
  • AuditoryOutLanguage
  • SpeechRate
  • SpeakTutorialMessages
  • KeyEcho
  • WordEcho
  • AnnounceCapitals
  • ScreenReaderBrailleOutput
  • PunctuationVerbosity
  • ReadingUnit

For each of these settings, a distance measure will be calculated between the values inferred by the matchmakers and the values from the final set-up step.
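As an illustration of such per-setting distance measures (these helper functions and value ranges are assumptions for this sketch; the authoritative definitions are on the Essential Registry Terms wiki page cited earlier):

```python
# Sketch: per-setting distance measures between an inferred value and
# the value from the final set-up step. The normalisation range used
# below is an illustrative assumption.

def numeric_distance(inferred, final, lo, hi):
    """Similarity in [0, 1] for a numeric setting with value range [lo, hi]."""
    return 1.0 - abs(inferred - final) / (hi - lo)

def boolean_distance(inferred, final):
    """1.0 on an exact match (e.g. MagnifierEnabled), else 0.0."""
    return 1.0 if inferred == final else 0.0

# e.g. SpeechRate inferred as 180 wpm vs. a user-chosen 200 wpm,
# on an assumed 0-400 wpm scale
print(numeric_distance(180, 200, 0, 400))  # -> 0.95
print(boolean_distance(True, True))        # -> 1.0
```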

Other parameters:

  • Completion per task
  • Number of UI settings changed by user per task
  • Number of user errors per task
  • Post-task: User-perceived difficulty level per task
  • Post-task: User experience.

Post-test questionnaire. Is there any specific information (related to your demo/test) you would like to gather from the users after the tests? If yes, please, write each one as a question for including in the Post-Test questionnaire:

Apart from the standard questions we will have (common for all prototypes), such as whether they would use Cloud4all, improvement recommendations, further comments, etc., please check whether there is any relevant information you would like to know from the users after they do the test, e.g. whether they found any specific difficulty when performing the test, whether the changes provided are enough, etc. Please provide these as questions, in order to avoid misunderstandings between what you want to gather and the wording produced by the person who turns a statement into a question. An example:

  1. The system works according to your needs. Needs are settings without which you might not be able to work with the system, e.g. speech output, magnification, high contrast and large font, etc.

[ ] Strongly disagree [ ] Disagree [ ] Neither agree nor disagree [ ] Agree [ ] Strongly agree

Comments (why not; what is missing):  _______________________________________

  2. The system works according to your preferences. Preferences are settings that improve your experience of working with the system, e.g. speech output when reading long texts, specific colours for application backgrounds, a specific speech rate when reading specific literature, etc.

[ ] Strongly disagree [ ] Disagree [ ] Neither agree nor disagree [ ] Agree [ ] Strongly agree

Comments (why not; what is missing): ________________________________________

  3. Did you experience difficulties when performing the tasks on platform XXX?

Yes [ ] No [ ]

If yes, which difficulties? _____________________________________________

Contingency plans. Identify possible errors and bugs that might appear during the test and how to mitigate them:

Please add any error prompt that the system could present, and how to avoid or mitigate it. Please be as detailed as possible in order to avoid serious problems during the test.

Contact for technical support (name and e-mail):

Rule-based matchmaker:

  • Claudia Loitsch: claudia.loitsch(at)tu-dresden.de
  • Kostas Votis: kvotis(at)iti.gr

Statistical matchmaker:

  • Andy Stiegler: stiegler(at)hdm-stuttgart.de

MM User Manual

Name of the solution: Rule-based and statistical MM
Long description of the functionality of the Demo/Test:  User manual, including images and link to videos if needed:

Link to other relevant training information:

Add links to tutorials, documents or training courses, related with the solution.

GnomeShell Cheat Sheet: https://live.gnome.org/GnomeShell/CheatSheet.

Gnome Universal Access (Gnome Desktop Accessibility Guide): http://help.gnome.org/users/gnome-help/stable/a11y.html.

Orca user guide: http://help.gnome.org/users/orca/stable/.

Windows 7:

NVDA User Guide: http://www.nvda-project.org/documentation/userGuide.html.

How to save the user's needs and preferences so they can be used by the matchmakers: @@.

How to start and stop matchmakers, or how to switch between matchmakers during the tests: @@.

Detailed installation instructions for running the Test/Demo:

Please provide step by step instructions to install the solutions, and clarify if there any plugin or similar that needs to be installed:

Fedora 18 Installation Guide: https://docs.fedoraproject.org/en-US/Fedora/18/html/Installation_Guide/index.html. Make sure you remember the root password that you set during installation.

GnomeTweakTool installation instructions: http://wiki.gpii.net/index.php/GNOME_Tools.

NVDA User Guide: Getting and Setting Up NVDA: http://www.nvda-project.org/documentation/userGuide.html#toc9.

Setting up the GPII/Cloud4all runtime framework: http://wiki.gpii.net/index.php/Core_%28real-time%29_Framework_v0.1_-_Installation_Instructions.

Installation instructions for the statistical matchmaker: @@.

Java Runtime Environment (JRE):

Installation instructions for the rule-based matchmaker: @@.

File:GuideToSettingsWindowsNVDAFedoraOrca 2013-03-26.pdf.

Needs and preferences management tool with basic functions

Name of the solution to be tested:

Needs and preferences management tool with basic functions

Cloud4all Activity related:

A102.2 User interface parameters for viewing and editing profiles

A102.3 Tool for users and care-givers to work with their profiles

Developer(s):

CERTH/ HIT

Fraunhofer

Hardware requirements/AT:

Laptop or standard PC with Internet connection.

Primary test users are elderly people with rather mild forms of visual, hearing or cognitive disabilities, since the test is meant to provide usability information rather than accessibility information.
AT has to be provided according to the needs of the invited users: if a test participant already uses an AT at home or at work, then this AT should be provided in the test, too. In particular, for visually impaired users, magnifying AT and screen readers need to be kept available on the test PC.

Software requirements:


A102.4 S/W prototype available
Web browsers supported: IE, Google Chrome, Mozilla Firefox

Optional: connection to the statistical matchmaker (needed for the preview). General usability of the tool can also be tested without data transmission from and to the MM. In this case a pre-defined setting change has to be part of the test task, and the preview will then be a simulation of setting changes.

Downloadable link to the solution to be tested:

Please provide an executable file or similar to install the demo/test components/solution. Will be provided by CERTH.

Available languages: Please check and update the following information if necessary:
  • English
  • Spanish (Translated by Fraunhofer)
  • German (translated by Fraunhofer)
  • Greek (translated by CERTH)

Target audience:


  • Visually impaired users (only mild forms of vision impairments)
  • Elderly users with mixed vision, hearing, and cognitive disabilities
  • Users with intellectual disabilities
  • Users with dyslexia
  • Low-literacy users

Description of Step by Step proposed procedure for running the Demo/Test (Tasks):

Test procedure:

1. Welcome: First the experimenter should welcome the test users and thank the user for participating in the experiment. After explaining the test purpose and procedure, the experimenter collects information relevant to the test. This information includes the demographic characteristics of the test user (e.g. age and gender), information about his life situation (e.g. is the user still working or in retirement, assisted living, a home for the aged or an independent living situation), information about impairments, and information about his computer literacy (CLS). The test and the user should be recorded with a video camera, so a video release form has to be signed first.

2. Working on the scenario tasks: While the user works on the scenario tasks, the experimenter takes notes of the user's actions, errors, comments and completion of the task. For processing the task scenarios, the think-aloud technique should be used:

Think aloud

A thinking-aloud test involves having a test subject use the system while continuously thinking out loud (Lewis, 1981). By verbalizing his thoughts, the test user enables the developer to understand how he views the computer system. One gets a very direct understanding of what parts of the dialogue cause the most problems because the think-aloud method shows how users interpret each individual interface item (Nielsen, 1993). A problem of the thinking-aloud method is that it can’t be used together with time measurements, because the need to verbalize can slow users down. When using the think-aloud method the experimenter should often prompt the user to think out loud by asking questions like “What are you thinking now?” and “What do you think this message means?”

An important aspect that has to be considered when using the thinking-aloud method is that not every negative statement by the user indicates a serious usability problem. The experimenter should evaluate each statement together with his observed findings (What did the test user do? Does the user action correspond to the user's statement?). Another important point is that users tend to ask questions about the system and the task while working with the system and thinking out loud. The experimenter should not answer those questions if the answer provides information that influences the test results, but instead keep the user talking with counter-questions. When using the think-aloud method, a video recording of the test is very helpful for reviewing the user's behaviour and comments afterwards.

3. Post-questionnaire and interview: After filling out the post-questionnaires, the user is debriefed and is asked in an interview for further comments about problems or events during the test that were hard for the experimenter to understand. Additionally, the user can give any suggestions for improvements. Fraunhofer will provide task scenarios for the following steps:

1. Login/Registration (first-time user):

Start: Login/ Logout screen

Action: The user selects "register". The registration process is not yet defined, so this link is non-functional; the examiner has to explain that there will be a registration process. After clicking on the "register" link, the user will be led to the N&P set initialisation screen.

End: N&P Initialisation screen

Comprehensibility: The user should be asked what he/she expects from the three options for preference set initialisation. For example: "What do you think will happen if you select the option 'take over device settings'?"


2. Create an N&P set using the Preferences Management Editor/ by taking over the actual device settings  („N&P set initialisation“).

Start: N&P Initialisation screen

Action: The user selects 'take over device settings' and will be led to the preference editor (in case mirroring the device settings is not feasible for the prototype, standard desktop PC settings will be shown in the editor).

End: Preference Editor screen

WORDING TEST:

A wording test can be used to check the comprehensibility of the applied labels/buttons, the distinction between interactive and non-interactive areas, etc. The examiner asks the user to have a look at the screen and to explain what the user assumes a UI element/area is for (what functionality is behind this element/area; is it interactive or just displayed information; what will happen when selecting this element). IMPORTANT: The user should not click on the screen; he/she should just explain.

3. Logout:

Start: Preference Editor screen

Action: The user logs out by clicking on the log-out button.

End: The Login/ logout screen will be shown.

4. Login (already registered user):

Start: Login/ Logout screen

Action: The user logs in by typing in user name and password (a dummy test account has been set up before).

End: After login the PMT main screen is shown (with the two options 'Preference Editor' and 'GPII Management')


5. Edit defined aspects of the N&P set, using the Preferences Management Editor („N&P set view and edit“):

Start: PMT main screen

Action: The user selects 'Preference Editor' and selects a defined UI option (pre-defined by the examiner in the task). The UI option selection can be done by using the search functionality, recently used UI options, 'all categories' or 'common UI options'. When the user has made a selection, the examiner asks whether the user knows/imagines an alternative way of selecting a UI option. After the UI option selection, the user adjusts the setting and saves the setting change.

To be investigated whether this test will take place.

End: Editor screen with saved setting changes

 6. Preview mode
Start: Preference Editor screen

Action: The user clicks the mock-up preview button and a preview of the settings is presented in a pop-up window (on the fly).

End: Preference editor screen


Research Objectives of the test:

It will be evaluated whether the PM Tool is easy to use and provides high usability. Labels should be clear to the users, and the user should be able to navigate easily through the menu. Usability problems that reduce user satisfaction have to be identified.

Which changes have to be applied to the prototype in order to improve / maximise its accessibility and usability?

Pre-Test questionnaire. Is there any specific information (related to your demo/test) you would like to gather from the users before the tests? If yes, please, write each one as a question for including in the Pre-Test questionnaire:

Computer Literacy Scale: will be provided in English, German, Spanish (has to be translated to Greek - Fraunhofer will provide English version for translation)

Indicators/parameters. Which are the parameters or indicators of success in the Demo/Test (including metrics and success thresholds)?

Please check and update the following information:

Add which parameters are going to change when running the tests; e.g. if the screen reader is configured automatically, specify which parameters will be changed: speed, pitch, male or female voice…

  • The user understands the planned functions.
  • The users regard the functions as sufficient for their purposes.
  • The functions are presented in a usable manner.
  • High score on the satisfaction scale (AttrakDiff).
  • High score on the usability scale (SUS or ISOMETRICS-S).

Post-test questionnaire. Is there any specific information (related to your demo/test) you would like to gather from the users after the tests? If yes, please, write each one as a question for including in the Post-Test questionnaire:

Apart from the standard questions we will have (common for all prototypes), such as whether they would use Cloud4all, improvement recommendations, further comments, etc., please check whether there is any relevant information you would like to know from the users after they do the test, e.g. whether they found any specific difficulty when performing the test, whether the changes provided are enough, etc. Please provide these as questions, in order to avoid misunderstandings between what you want to gather and the wording produced by the person who turns a statement into a question. An example:

1. Did you find difficulties when using the solution XXX? [Yes/No + further justification]

Please try to be as specific as possible.

What did you like the most?

What did you like least?

AttrakDiff (http://www.attrakdiff.de/en/Home/) (will be provided in German, English, Greek and Spanish)

System Usability Scale (will be provided in German, English, Greek and Spanish)

Contingency plans. Identify possible errors and bugs that might appear during the test and how to mitigate them:

Please add any error prompt that the system could present, and how to avoid or mitigate it. Please be as detailed as possible in order to avoid serious problems during the test. To be edited by CERTH.

Contact for technical support (name and e-mail):

Kostas Kalogirou: kalogir(at)certh.gr; Vivien Melcher: vivien.melcher(at)iao.fraunhofer.de


N&P Tool basic functions User manual

Name of the solution: Needs and preferences management tool with basic functions
Long description of the functionality of the Demo/Test: User manual, including images and link to videos if needed:

Link to other relevant training information:

Add links to tutorials, documents or training courses, related with the solution.

Detailed installation instructions for running the Test/Demo:

Please provide step by step instructions to install the solutions, and clarify if there any plugin or similar that needs to be installed:


Needs and preferences management tool with extended functions

Name of the solution to be tested:

Needs and preferences management tool with extended functions

Cloud4all Activity related:

A102.4 Research and development of user interface enhanced functionalities

Developer(s):

Fraunhofer

Hardware requirements/AT:

Please check and update the following information if necessary:

None; mock-up test only.

Software requirements:

Please check and update the following information if necessary:

None; mock-up test only.

Downloadable link to the solution to be tested:

None; mock-up test only.

Available languages: Please check and update the following information if necessary:
  • English
  • Spanish (Translated by Fraunhofer)
  • German (translated by Fraunhofer).
  • Greek (translated by CERTH)
Target audience:

  • Elderly users with mixed vision, hearing, and cognitive disabilities
  • Low-literacy users
 

Description of Step by Step proposed procedure for running the Demo/Test (Tasks):

Please check and update the following information with detailed step by step instructions for running the demo/test:

The evaluation of the PMT with advanced functionalities is based on the testing of the PMT with basic functionalities. Both tests will be conducted during one session. This session is divided into the following steps:

1. The PMT with basic functionalities will be tested first [described above]. The goal of this test is that the user understands e.g. the login process and the setting up of a single N&P set. The understanding of these basic functionalities is required to conduct step 2 of the testing procedure and will not be the focus of research there [PMT with advanced functionalities].

2. While the testing of the PMT with basic functionalities is conducted with a functional prototype, the PMT with advanced functionalities is tested with just a mock-up or Wizard-of-Oz prototype. Therefore the prototype needs to be exchanged before step 2. As mentioned above, the user should have an understanding of the basic functionalities.


Furthermore, the focus of the testing [PMT with advanced functionalities] lies on the N&P editor, which leads to the following tasks for the 1st pilots:

1. Adjust a preference for a specific device

START: PMT main screen

ACTION: The user opens the device menu and selects a specific device. Now he adjusts the preference for the specific device and confirms his action.

END: PMT main screen

2. Add a specific device + adjust a preference for two specific devices at the same time

START: PMT main screen

ACTION: The user opens the device menu, chooses to add a specific device and selects a device. Then he also marks the other device. Now he adjusts the preference for both devices and confirms his action.

END: PMT main screen

3. Adjust a preference for a specific device + a specific context

START: PMT main screen

ACTION: The user opens the device menu and selects a specific device. Then he opens the context menu and selects a specific context. Now he adjusts the preference for device + context.

END: PMT main screen


Detailed questions about wording etc. have to be added.



Mention all the details from the detailed test instructor manual.

Research Objectives of the test:

Please check and update the following information. Try to summarize the research objective/s as statements instead of questions (e.g.

“The solution XXX will succeed if users find it more appropriate to use than the standard solution provided by the OS itself.”

“It will be evaluated whether the rule-based matchmaker provides better results than the statistical one, or the opposite.”)

It will be evaluated whether the advanced PMT functionalities are easy to use and provide high usability. Labels should be clear to the users, and the user should be able to navigate easily through the menu. Usability problems that reduce user satisfaction have to be identified.

Which changes have to be applied to the prototype in order to improve / maximise its accessibility and usability?

Pre-Test questionnaire. Is there any specific information (related to your demo/test) you would like to gather from the users before the tests? If yes, please, write each one as a question for including in the Pre-Test questionnaire:

see basic PMT

Indicators/parameters. Which are the parameters or indicators of success in the Demo/Test (including metrics and success thresholds)?

Please check and update the following information:

Add which parameters are going to change when running the tests; e.g. if the screen reader is configured automatically, specify which parameters will be changed: speed, pitch, male or female voice, etc.

•  The user understands the planned functions.

•  The users regard the functions as sufficient for their purposes.

•  The functions are presented in a usable manner.

•  High score on satisfaction scale (AttrakDiff)

•  High score on usability scale (SUS or ISOMETRICS-S)

Post-test questionnaire. Is there any specific information (related to your demo/test) you would like to gather from the users after the tests? If yes, please, write each one as a question for including in the Post-Test questionnaire:

Apart from the standard questions we will have (common for all prototypes), such as whether they would use Cloud4all, improvement recommendations, further comments, etc., please check if there is any relevant information you would like to know from the users after they do the test, e.g. whether they found any specific difficulty when performing the test, whether the changes provided are enough, etc. Please try to provide these as questions in order to avoid misunderstandings between what you want to gather and how the person who turns it into a question interprets it (this could happen if you send them as sentences). An example:

1. Did you find difficulties when using the solution XXX? [Yes/No + further justification]

Please try to be as specific as possible.

After each task, the test instructor asks open, non-suggestive questions in order to understand possible problems the user has encountered while solving the task.

After the test, the instructor categorizes the encountered usability problems and reports them in a pre-defined matrix format.

SUS


Contingency plans. Identify possible errors and bugs that might appear during the test and how to mitigate them:

Because the prototype is paper based, only some functionality will be provided. In case the user selects an option without functionality behind it, the examiner has to explain that there will be functionality at a later point in the development process.

Contact for technical support (name and e-mail):

 Anne Krüger anne-elisabeth.krueger@iao.fraunhofer.de, Vivien Melcher vivien.melcher@iao.fraunhofer.de


N&P Tool extended functions User manual

Name of the solution: Needs and preferences management tool with extended functions
Long description of the functionality of the Demo/Test:  User manual, including images anfd link to videos if needed:

Link to other relevant training information:

Add links to tutorials, documents or training courses, related with the solution.

Detailed installation instructions for running the Test/Demo:

Please provide step-by-step instructions to install the solution, and clarify if there is any plugin or similar that needs to be installed:



Semantic alignment tool

Name of the solution to be tested:

semi-automatic Semantic alignment tool

Cloud4all Activity related:

A202.1: Automatic generation of metadata for accessible solutions

Developer(s):

CERTH/ITI

Hardware requirements/AT:

Hardware:

1 gigahertz (GHz) or faster 32-bit (x86) or 64-bit (x64) processor

1 gigabyte (GB) RAM (32-bit) or 2 GB RAM (64-bit)

    50 MB available hard disk space

Software requirements:

The semantic alignment tool is connected with the semantic framework (solutions ontology) to retrieve the solutions content.

Downloadable link to the solution to be tested:

http://160.40.50.57/cloud4all/files/OntologyAlignmentTool_v1.1.rar

Target audience:

Vendors who would like to include their applications and their settings in Cloud4all; developers who want to see the settings of the stored applications

Description of Step by Step proposed procedure for running the Demo/Test (Tasks):

Adding a new solution/setting to the solutions ontology (without previous experience in cloud4all) /Proposing a new term in the common terms registry

1) The user uses the alignment tool to access the ontological data of the semantic framework.

2) The user selects a category for the new solution to be added to the ontology (e.g. screen reader) from a categorisation list.

3) The user provides a description of the tool (e.g. text description, functionalities, etc.).

4) The user provides the system with the customisable settings for the specific solution.

5) A semi-automatic alignment mechanism proposes to the user (by matching ontological concepts) the appropriate settings to be aligned with the existing settings of the ontology (if they don’t exist, new settings will be created).

6) The solutions ontology is updated through the tool, while the tool proposes to the administrator of the common terms registry new settings/terms that should be added.

Searching (by free text and/or by categorisation lists) for solutions, settings, etc. that are stored in the solutions ontology

1) The user browses the ontological data through the tool’s basic search functionality (free text, list of categories)

Research Objectives of the test:

 

1) How can the tool support users in performing syntactic and semantic analysis of solutions/settings and automatic categorisation of solutions/settings stored in the semantic framework of content and solutions (i.e. information about solutions, platforms, devices and settings stored in the ontology)?

2) How can the tool assist users/vendors/service providers in visualising annotated content presented in the ontology, as well as in improving existing annotation structures and instances (e.g. a service provider intends to semantically align its service so that it can be provided through the Cloud infrastructure)?

 

Pre-Test questionnaire. Is there any specific information (related to your demo/test) you would like to gather from the users before the tests? If yes, please, write each one as a question for including in the Pre-Test questionnaire:

1) What kind of services/solutions do you offer (e.g. screen reader) to your customers?

2) What kind of users do you usually support for your products?

3) Which are the customisable settings of your offered solutions?

 

Indicators/parameters. Which are the parameters or indicators of success in the Demo/Test (including metrics and success thresholds)?

- time to complete the alignment task

- time to store data to the ontology

- time to complete the search task

- success in completing the task

 

Post-test questionnaire. Is there any specific information (related to your demo/test) you would like to gather from the users after the tests? If yes, please, write each one as a question for including in the Post-Test questionnaire:

  1. Did you find difficulties when using the semantic alignment solution? [Yes/No + further justification]
  2.  Did you find any difficulties regarding the semi-automatic alignment process of your supported solutions/settings with the provided solutions/settings data?

 

Contingency plans. Identify possible errors and bugs that might appear during the test and how to mitigate them:

Not applicable

Contact for technical support (name and e-mail):

kvotis@iti.gr,

kgiannou@iti.gr


Semantic alignment tool User manual

Name of the solution: Semantic alignment tool
Long description of the functionality of the Demo/Test:  User manual, including images and links to videos if needed:

Link to other relevant training information:

Add links to tutorials, documents or training courses, related with the solution.

Detailed installation instructions for running the Test/Demo:

Please provide step-by-step instructions to install the solution, and clarify if there is any plugin or similar that needs to be installed:


SP3 Demonstrations

Maavis

Name of the solution to be tested:

SP3 demo of Maavis

Cloud4all Activity related:

A304.1 Auto-configuration of assistive solutions in desktop, Screen Reading and Magnification software.

Developer(s):

OpenDirective

Hardware requirements/AT:

  • PC or laptop etc with entry level spec and spare resources
  • x86 or x64 architecture
  • Sound output
  • Internet connectivity
  • [Optional] Touch screen - mouse or other pointer is enough to test with.
  • [Optional] If switch control is to be tested, a USB joystick, games controller, or switch (via a joycable etc.) is needed. However, keyboard control is possible as a simulation

Software requirements:

  • Windows 7 or 8 (not RT); 32 or 64 bit
  • Internet connection for certain features

Downloadable link to the solution to be tested:

http://maavis.fullmeasure.co.uk/get-maavis

Available languages:
  • English
Target audience:

Elderly users with dementia or mixed vision, hearing, and cognitive disabilities. Those with very low digital literacy.

Detailed installation instructions for running the Demo:

Demonstration Needs and Preferences set/s:

Please, provide with the different Needs and Preferences sets to be used during the demonstration:

User1 Bert preferences: <syntaxhighlight lang="javascript"> {

   "http://registry.gpii.org/applications/net.opendirective.maavis": [{
       "value": {
           "theme": "bw",
           "playStartSound":"yes",
           "speakOnActivate":"yes",
           "speakTitles":"yes",
           "speakLabels":"yes",
           "showLabels":"yes",
           "showImages":"no",
           "useSkype":"no",
           "userType":"touch",
           "splashTime":2,
           "scanRate":"1000",
           "scanMode":"AUTO1SWITCHAUTOSTART",
           "selectionsSetSize":"2x3",
           
           "passwordSetSize":"4x3"
       }

}] } </syntaxhighlight>

User2 Ethel preferences: <syntaxhighlight lang="javascript"> {

   "http://registry.gpii.org/applications/net.opendirective.maavis": [{
       "value": {
           "name":"Ethel",
           "theme":"colour",
           "playStartSound":"no",
           "speakTitles":"no",
           "speakLabels":"no",
           "showLabels":"no",
           "showImages":"yes",
           "useSkype":"no",
           "userType":"scan",
           "splashTime":0,
           "scanRate":"600",
           "scanMode":"AUTO1SWITCHAUTOSTART",
           "selectionsSetSize":"3x3",
           
           "passwordItems":"complete"
       }

}] }</syntaxhighlight>
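Both preference sets above share the same shape: a GPII application URI key whose value is an array of objects, each carrying the actual settings under `value`. The following Node.js sketch shows how such a block could be extracted; `getAppSettings` is our own illustrative helper, not part of the GPII framework.

```javascript
// Minimal sketch: pull the application settings out of a GPII
// preference set shaped like the examples above. getAppSettings
// is a hypothetical helper, not a GPII API.
const MAAVIS_URI =
  "http://registry.gpii.org/applications/net.opendirective.maavis";

function getAppSettings(prefSet, appUri) {
  const entries = prefSet[appUri];
  if (!Array.isArray(entries) || entries.length === 0) {
    return null; // no settings stored for this application
  }
  return entries[0].value;
}

// Abbreviated version of Bert's preference set from above.
const bertPrefs = {
  [MAAVIS_URI]: [{
    value: { theme: "bw", userType: "touch", scanRate: "1000" }
  }]
};

console.log(getAppSettings(bertPrefs, MAAVIS_URI).theme); // prints "bw"
```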

Description of Step by Step proposed procedure for running the Demo (Tasks):

Precondition: GPII environment is active with USB and NFC listeners installed

  USB key:
  1. The user plugs in the USB memory device
  2. Maavis then runs, properly configured for user1
  3. The user unplugs the USB key in order to safely exit Maavis

  NFC tag:
  1. The user touches the NFC tag to the reader
  2. Maavis then runs, properly configured for user2
  3. The user touches the NFC tag to the reader again to exit Maavis
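Both flows above reduce to the same pattern: a listener reads a user token (from the USB key or the NFC tag) and asks the Preferences Server for that user's settings. A hedged sketch of the token-to-URL step, using the public preferences server named later in this document; `buildPreferencesUrl` is our own name, not a GPII API.

```javascript
// Illustrative only: turn a raw user token, as read from a USB key
// file or NFC tag, into a Preferences Server request URL.
// buildPreferencesUrl is a hypothetical helper, not part of GPII.
function buildPreferencesUrl(userToken) {
  const server = "http://preferences.gpii.net";
  return server + "/user/" + encodeURIComponent(userToken.trim());
}

// A token file often ends with a newline; trim() handles that.
console.log(buildPreferencesUrl("bert\n")); // prints "http://preferences.gpii.net/user/bert"
```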

Contact for technical support (name and e-mail):

Steve Lee steve@opendirective.com

Read&Write GOLD

Name of the solution to be tested:

SP3 demo of Read&Write GOLD

Cloud4all Activity related:

A304.3: Auto-configuration of assistive solutions with proprietary cloud/Server Activation

Developer(s):

Texthelp Ltd.

Hardware requirements/AT:

  • 2 GB Free Disk Space
  • Speakers, Sound Card, Microphone (for speech input),
  • Pentium IV 1.8GHz processor (2.4GHz recommended)
  • Network/Wireless card (for Internet connectivity)


Software requirements:

  • Windows XP SP3 or above
  • 512 MB RAM (1 GB recommended)
  • Internet connection for certain features
  • PDF Aloud requires Adobe Reader 9 or above or Acrobat Version 8 or above (This provides text-to-speech in Adobe Reader. For this to work Adobe Reader MUST be installed before Read&Write Gold)
  • Web Reading Supported in IE and Firefox Browsers only. IE8 and above and Firefox 4 and above
  • MS Word Support requires Office 2003 or above.


Downloadable link to the solution to be tested:

http://fastdownloads.texthelp.com/RW10/MSI_Only/Setup.zip


Available languages:

  • English
Target audience:
  • Elderly users mixed vision, hearing, and cognitive disabilities

Detailed installation instructions for running the Demo:

NOTE: Adobe Reader must be installed prior to the installation of Read&Write.

Installation is a simple guided wizard-type interface that will set the initial user up with some default settings for the speech engine, background colours, etc. For the purposes of the demo/pilot installations, these wizard screens can simply be clicked through quickly, as there is no need to have specific defaults set up for the demos/pilots. The GPII software will distribute the necessary settings.

  1. Download RW Gold (in .zip format) to the desktop from the link given above
  2. Once downloaded, distribute the zip file to the target machines.
  3. Extract the files from the zip file to the desktop of the target machine
  4. Double-click the Read&Write 10 executable file
  5. Follow the on-screen prompts
  6. It is VERY important that Read&Write Gold is launched after installation and before the GPII software is run, so that Read&Write can write out a set of default settings. If this is not done and the GPII software attempts to launch Read&Write, the GPII software will crash because there will be no settings file available for it to write to. This is logged as a bug and will be fixed in time, but it has not been fixed for this pilot.
  7. During the install you will be prompted to input the serial number. It is important to keep the serial number to hand, as you will be asked for it again on first launch.


Demonstration Needs and Preferences set/s:

The JSON settings files cannot be uploaded here; the JSON settings for USER1 and USER2 will be sent via email, and the settings are listed below for each user.

USER1(12-year old dyslexic female)

  1. ‘Fun’ iconset
  2. ‘Large icons with text’
  3. ‘Writing features’ featureset
  4. ‘Spell As I Type’ is switched on.
  5. ‘Scansoft UK English Daniel’ speech engine
  6. Voice Pitch = 36%
  7. Voice Speed = 38%
  8. Voice Volume = 72%
  9. Voice Word Pause = 0%
  10. ’Speak using one-word display’
  11. Scanning settings are ‘Scan to PDF’ and ‘Scan from TWAIN’



 USER2 (Middle-aged Asian man who has trouble reading English)

  1. ‘Professional’ iconset
  2. ‘Large icons with text’
  3. ‘Reading features’ featureset
  4. ‘Spell As I Type’ is switched off.
  5. ‘Web highlighting’ is switched on
  6. ‘Scansoft UK Indian Sangeeta’ speech engine
  7. Voice Pitch = 33%
  8. Voice Speed = 31%
  9. Voice Volume = 90%
  10. Voice Word Pause = 115%
  11. 'Highlighting in document’ is chosen as the speech highlighting method.
  12. Scanning settings are ‘Scan to MS Word’ and ‘Scan from file’
  13. Paragraph translation options is set to ‘Hindi’

Description of Step by Step proposed procedure for running the Demo (Tasks):

USER1(12-year old dyslexic female)

  1. Verify Read&Write Gold v10 launches after inserting USB key
  2. Verify Read&Write Gold v10 launches with the ‘Fun’ iconset and ‘Large icons with text’ (this can be determined by going to the drop-down arrow next to the ‘Texthelp’ button and selecting ‘General Options’)
  3. Verify that the ‘Writing features’ featureset is enabled (hover the mouse over the ‘Texthelp’ button to see the tooltip description).
  4. Verify that ‘Spell As I Type’ is switched on. This can be verified on the ‘Spell check’ button drop-down list.
  5. Verify that ‘Scansoft UK English Daniel’ is the default speech engine (‘Play’>>Speech Options>>)
  6. Verify in the Speech Options panel (from 5 above) that:
  7. Pitch = 36%
  8. Speed = 38%
  9. Volume = 72%
  10. Word Pause = 0%
  11. Verify ‘Speech Options>>Highlight Tab>>’Speak using one-word display’ is chosen.
  12. Verify Scanning settings are ‘Scan to PDF’ and ‘Scan from TWAIN’ (Drop-down arrow next to scanning button)



 USER2 (Middle-aged Asian man who has trouble reading English)

  1. Verify Read&Write Gold v10 launches after inserting USB key
  2. Verify Read&Write Gold v10 launches with the ‘Professional’ iconset and ‘Large icons with text’ (this can be determined by going to the drop-down arrow next to the ‘Texthelp’ button and selecting ‘General Options’)
  3. Verify that the ‘Reading features’ featureset is enabled (hover the mouse over the ‘Texthelp’ button to see the tooltip description).
  4. Verify that ‘Spell As I Type’ is switched off. This can be verified if the left-most spelling icon is NOT the spelling icon.
  5. Verify that ‘Web highlighting’ is switched on (Verify on the Speech drop-down menu)
  6. Verify that ‘Scansoft UK Indian Sangeeta’ is the default speech engine (‘Play’>>Speech Options>>)
  7. Verify in the Speech Options panel (from 6 above) that:
  8. Pitch = 33%
  9. Speed = 31%
  10. Volume = 90%
  11. Word Pause = 115%
  12. Verify ‘Speech Options>>Highlight Tab>>’Highlighting in document’ is chosen.
  13. Verify Scanning settings are ‘Scan to MS Word’ and ‘Scan from file’ (Drop-down arrow next to scanning button)
  14. Verify that ‘Translation>>Paragraph translation options>>To Language’ is set to ‘Hindi’

Contact for technical support (name and e-mail):

David Hankin (d.hankin@texthelp.com)

EASIT

Name of the solution to be tested:

SP3 demo of EASIT (Extensible Adapted Social Interaction Tool)

Cloud4all Activity related:

A303.4 Auto-configuration of accessibility features of social networks

Developer(s):

BDigital (leader) IDRC FhG UPM TPV

Hardware requirements/AT:

•          PC

•          X64

•          4GB ram

•          Disk minimum 100GB

•          Screen size: min. 15″

Software requirements:

•    Windows XP, 7 or any recent Linux distribution

•    Recently updated web browser: Chrome or Firefox

Downloadable link to the solution to be tested:

The web application is already up and running in a public domain name server: http://www.easit4all.com

Available languages: Please check and update the following information if necessary:
  • English
Target audience:
•          Elderly users mixed vision, hearing, and cognitive disabilities

Detailed installation instructions for running the Demo:

No plugin needs to be installed. The application is reachable simply by entering the aforementioned URL.

Demonstration Needs and Preferences set/s:

User1 preferences:
1. Text size: 1.1
2. Text style: arial
3. Line spacing: 1.2
4. Colour & contrast: yellow on black
5. emphasize links: true
6. make inputs larger: true

User2 preferences:
1. Text size: 1
2. Text style: comic sans
3. Line spacing: 1
4. Colour & contrast: colourful
5. emphasize links: false
6. make inputs larger: false
7. show table of contents: false
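For reference, a preference list like User1's above could be written as a single settings object. The sketch below uses illustrative property names of our own; EASIT's real internal schema is not specified in this document.

```javascript
// Hypothetical JSON encoding of User1's preference list above.
// The property names are our own illustration, not EASIT's schema.
const user1Prefs = {
  textSize: 1.1,
  textStyle: "arial",
  lineSpacing: 1.2,
  colourContrast: "yellow on black",
  emphasizeLinks: true,
  makeInputsLarger: true
};

console.log(JSON.stringify(user1Prefs, null, 2));
```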

Description of Step by Step proposed procedure for running the Demo (Tasks):

The proposed validation activities will consist of:

Preferences validation
1. User signs in to the application as test1
2. User changes the default preferences in the application
3. Save preferences
4. Log out
5. User logs in to the application again
6. Current visual application features are the ones specified before
7. Log out

Validation of social services operations
1. User logs in
2. Access the connection section
3. Create a connection to Facebook
4. Access any Facebook functionality
5. Disconnect from Facebook
6. Access the connection section
7. Create a connection to Twitter
8. Access any Twitter functionality
9. Disconnect from Twitter
10. Log out of the application

Check other test users
1. User logs in as test user test2
2. User logs out


 

Contact for technical support (name and e-mail):

Xavier Rafael Palou (xrafael@bdigital.org)

Microsoft PixelSense/Sociable

Name of the solution to be tested:

SP3 demo of Microsoft PixelSense/Sociable (Tablet PC – all-in-one, standing up)

Cloud4all Activity related:

Please provide the activity/ies related number/s and title/s:

Developer(s):

SILO

Hardware requirements/AT:

•          Processor: x64

•          Memory: 4GB Ram

•          Disk space: 250GB

•          Screen: 12,1” min – preferably 21”

Software requirements:


•          Windows 7

•          Net framework 4

•          Surface toolkit runtime for Windows

•          XNA Redistributable

•          SQL Express 2008 and Management Tools

•          “Sociable” software

•          Microsoft Speech Platform (v11 or higher) & corresponding languages. For the Greek language: eSpeak v1.46.02 or above

Downloadable link to the solution to be tested:

Not Applicable

Available languages:
  • English
  • Greek
  • Spanish
  • Italian
  • Norwegian

Target audience:

•          Elderly users mixed vision, hearing, and cognitive disabilities

Detailed installation instructions for running the Demo:


The installation procedure of Sociable is personalised and needs special configuration throughout the different layers of the application itself.

Therefore, SILO will undertake the responsibility of providing remote support for every installation procedure required. This will be performed through the use of the TeamViewer software.


Demonstration Needs and Preferences set/s:


User1 (Phaidra) preferences:

1. Text to Speech: true

2. High Contrast: false

3. Font Size: Medium

 

User2 (Theseus) preferences: 

1. Text to Speech: true

2. High Contrast: true

3. Font Size: Medium


Description of Step by Step proposed procedure for running the Demo (Tasks):


After a successful installation of Sociable, the procedure is as follows:

1) The user runs the GPII environment

2) The user creates a text file named ".gpii-user-token.txt"

3) The content of the text file is either the name "phaedra" or "theseus", corresponding to the User1 or User2 needs & preferences

4) The user copies the file onto a USB drive

5) The user plugs the USB drive into the computer

6) Sociable runs properly configured. Using the touch screen, the user can easily join the "Synonyms" game at three different levels of difficulty.

7) The user unplugs the USB in order to safely exit Sociable

Contact for technical support (name and e-mail):

Thanos Karantjias (tkarantjias@singularlogic.eu)


Mobile Accessibility

Name of the solution to be tested:

SP3 demo of Mobile Accessibility

Cloud4all Activity related:


A304.2: Auto-configuration of assistive solutions in mobile phones


Developer(s):

CF

Hardware requirements/AT:


  • 1 x Android phone with NFC functionality (suggested device: Google Galaxy Nexus). Demos at this point can be done with only one device.
  • 2 NFC tags with pre-defined user IDs stored on them: just NFC tags with a simple text on each one, corresponding to the names of the profiles used for testing, "ana" and "daniel".

 

Software requirements:

  • Mobile Accessibility Cloud4All prototype installed on the device.
  • Internet connection on the device. Profiles are stored in a "real" GPII preferences server and are retrieved from it in real time.


Downloadable link to the solution to be tested:

http://www.codefactory.es/MA_Cloud4All/MobileAccessibilityC4A_vocalizer_enu-release.apk

Available languages:
  • English
  • Spanish
  • French
  • German
  • Italian
  • Portuguese
  • Arabic
  • Dutch
  • Norwegian
  • Czech
  • Polish
  • Russian

Target audience:


 •          Blind users

Detailed installation instructions for running the Demo:

  • Get the .apk installer from the link provided and side-load it onto the phone using standard procedures. The easiest way is to download the installer directly from the phone's web browser and execute it from the phone itself. Make sure you have activated the developer settings on the device. On devices running Android 4.2 (your device will probably be updated to this version automatically) the developer options are hidden; to make them appear, follow the instructions at this link: http://www.androidcentral.com/how-enable-developer-settings-android-42 , which basically consist of tapping the Build number under Settings>About seven times.

Demonstration Needs and Preferences set/s:

  • Ana. Blind user with good technology skills and experience using mobile phones. She likes to use the native Android TTS because she uses other native accessibility tools and doesn't want to switch from one TTS to another when going in and out of Mobile Accessibility. She's used to high speech rates. The profile is already uploaded to the GPII preferences server (http://preferences.gpii.net). It can be downloaded from: http://preferences.gpii.net/user/ana
  • Daniel. Recently blind, he is not used to ATs and needs low speech rates and high-quality voices. He uses the Nuance voice provided by Mobile Accessibility. The profile can be downloaded from: http://preferences.gpii.net/user/daniel

Description of Step by Step proposed procedure for running the Demo (Tasks):

  • You need the phone with the Mobile Accessibility Cloud4All prototype installed on it, and the application set as the Home Screen. Basic usage of the tool is described on our website (www.codefactory.es), including some video demos. You can contact us for any assistance you need; this is a blind-oriented tool and its interface can be tricky for sighted users.
  • 2 NFC tags with the texts "ana" and "daniel" stored on them should also be ready for the demo.
  • You can show how Mobile Accessibility works at a given moment (voice & speech rate).
  • Then touch the back of the phone with one of the NFC tags; the settings for the selected profile will be automatically downloaded from the cloud and applied to MA. Mobile Accessibility will read the app name once the process is finished, just to let you know that the new settings have been applied. Make sure that you have an internet connection when doing this; settings are retrieved from the cloud in real time.
  • You can then touch the device with the other NFC tag, and Mobile Accessibility will apply the settings for the new profile.
  • This experience can be repeated as many times as you want.
  • Note that not only the voice type and speech rate settings are retrieved; the whole set of Mobile Accessibility settings is. For demo purposes, however, the most evident settings for blind and sighted users to see changing on the fly are the TTS used and the speech rate.

 

Contact for technical support (name and e-mail):

  • Ferran Gállego (ferran.gallego@codefactory.es).

UI Options

Name of the solution to be tested:

User Interface Options (integrated into the EASIT tool)

Cloud4all Activity related:

303.2

Developer(s):

IDRC

Hardware requirements/AT:

Any reasonably modern computer capable of running the latest version of Firefox or Chrome.

Software requirements:

The latest version of Firefox or Chrome.

Downloadable link to the solution to be tested:

User Interface Options is integrated into the EASIT tool.

Available languages:
  • English

Target audience:

Everyone.

Detailed installation instructions for running the Demo:

No installation is required.

Demonstration Needs and Preferences set/s:

The preferences sets defined for the EASIT tool are appropriate for testing UI Options.

Description of Step by Step proposed procedure for running the Demo (Tasks):

Demonstration instructions are listed above for the EASIT tool.

Contact for technical support (name and e-mail):

Colin Clark


Getting the GPII Framework and SP3 Apps

Pilots Branch of the GPII Framework

Windows

Linux

NPGatheringTool

The NPGatheringTool is available at http://npgatheringtool.gpii.net/; no installation is needed.

Links for downloading SP3 apps for the pilots

Semantic alignment tool: http://160.40.50.57/cloud4all/

Maavis: http://maavis.fullmeasure.co.uk

Read&Write GOLD: http://fastdownloads.texthelp.com/RW10/MSI_Only/Setup.zip

EASIT: http://www.easit4all.com

Microsoft PixelSense/Sociable: Not Applicable (Remote desktop is needed. The developers have to contact SILO to install this).

Mobile Accessibility: http://www.codefactory.es/MA_Cloud4All/MobileAccessibilityC4A_vocalizer_enu-release.apk