Design Outline - Recommendations and Assessment of Accessibility Settings


Key data

Outline
Cloud4all personalisation implies a recommendation system that provides users with suggestions for new preferences or for further customising settings. This work describes a possible design outline for recommendations and assessment of accessibility settings. A user experiment was conducted to evaluate early concepts and ideas for recommending accessibility settings. We developed a recommendation prototype to demonstrate various designs and assessment methods. The results of the experiment show a wide variety of preferences and opinions regarding both design aspects and recommendation usage. However, some aspects showed a strong tendency towards approval.
Participants
23 (6 elderly, 3 visually impaired, 6 blind and 8 others); Sex: 12 male, 11 female.
Project
Cloud4all
Realization
TUD; Study thesis by Diana Hille, Advisor: Claudia Loitsch
Link
File:UserStudyOnRecommendingAccessibilitySettings opt.pdf
Contact
claudia.loitsch@tu-dresden.de

Background information

APfP, developed in Cloud4all, takes user preferences to deliver tailored auto-configurations on arbitrary devices. This process includes inferring new preferences when the context of use changes (a sketch after the following list illustrates this trigger model). Context changes can be provoked by:

  1. the technical environment, e.g. if a user has bought a new mobile phone or if a device at work shall be configured according to the needs and preferences of the user
  2. surrounding factors, e.g. light or noise may affect the current usage
  3. the user, e.g. direct changes of settings imply that a user may wish to adjust a configuration.
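
To make this trigger model concrete, here is a minimal TypeScript sketch (hypothetical names, not the actual Cloud4all/GPII API) of how these three kinds of context change could be represented as events feeding the inference step:

 // Hypothetical sketch, not the actual Cloud4all/GPII API: the three kinds
 // of context change that may provoke the inference of new preferences.
 type ContextChange =
   | { kind: "technicalEnvironment"; deviceId: string }                // e.g. a new phone or a device at work
   | { kind: "surroundings"; factor: "light" | "noise"; level: number }
   | { kind: "userAdjustment"; settingId: string; newValue: unknown };

 interface InferredPreference {
   settingId: string;
   suggestedValue: unknown;
   reason: ContextChange; // the context change that provoked the suggestion
 }

 // A recommender maps context changes to candidate preferences; the rule
 // below is purely illustrative.
 function inferPreferences(change: ContextChange): InferredPreference[] {
   switch (change.kind) {
     case "surroundings":
       return change.factor === "light" && change.level > 0.8
         ? [{ settingId: "highContrast", suggestedValue: true, reason: change }]
         : [];
     default:
       return []; // further rules omitted in this sketch
   }
 }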

Inferred preferences can be seen as recommendations of accessibility settings for a certain context. Inferred preferences become real preferences when confirmed (explicitly or implicitly) by users. If confirmed, an inferred preference should be stored permanently in the needs and preferences set of the user, so that it can be applied the next time the user logs in. Therefore, in addition to the auto-configuration capabilities developed in Cloud4all/GPII, a feedback mechanism is needed that allows users to retain control by means of the following operations (sketched after the list):

  1. viewing, exploring and adjusting,
  2. declining or applying,
  3. as well as assessing recommended accessibility features.
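
A minimal sketch of such a feedback mechanism, assuming hypothetical operation names (the actual Cloud4all/GPII interfaces may differ):

 // Hypothetical sketch of the feedback loop described above; all names are
 // illustrative and not part of the GPII codebase.
 interface Recommendation {
   settingId: string;
   suggestedValue: unknown;
 }

 interface Assessment {
   rating?: number;  // e.g. 1..5 stars
   comment?: string; // free-text problem description
 }

 interface RecommendationFeedback {
   preview(rec: Recommendation): void;                          // view and explore the change
   adjust(rec: Recommendation, value: unknown): Recommendation; // tweak before deciding
   apply(rec: Recommendation): void;   // confirm: store permanently in the preference set
   decline(rec: Recommendation): void; // reject: do not store
   assess(rec: Recommendation, a: Assessment): void; // explicit rating or comment
 }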

Summary of the main research questions and results

How should recommendations be presented to users?

  • 3 overall designs have been investigated:
    • Dialog (wizard) guiding users through the recommendation process in the following steps: (1) information about new recommendations, (2) adjustments and preview, and (3) assessment.
    • Single view (overview method) presenting all necessary information, adjustment, and assessment options in just one window.
    • Tab view (hybrid method) splitting the recommendation process into three parts, each accessible via a tab, as shown in the following image:
[Image: tab view of the recommendation process]
  • Results
    • Overall: The tab method was favoured by the participants (65%), followed by the single view and the dialog view. Rationale: it depicts an overview of new recommendations and offers the possibility to go back.
    • Group related: The single view was preferred by blind participants due to the reduced navigation effort between windows/tabs: participants did not have to switch back every time something changed in the panel. A tab-based representation would be useful on mobile phones.

How should the preview of recommendations be represented to users?

  • 3 preview designs have been investigated:
    • Screen preview: provides a static overview of the changes, similar to a screenshot of the adjusted system.
    • Change preview: similar to the preview mode of the Fluid project. Users have some options to adjust settings and see the effect on the fly.
    • Run-time preview: applies recommended changes directly to the system but provides an option to change them back to the previous state. Goal: enable users to test the new settings in a runtime environment (see the sketch after this list).
  • Results
    • Overall: The change preview was favoured by the participants (53%), allowing them to adjust additional options.
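
The run-time preview in particular implies an apply-then-revert mechanism. Below is a minimal TypeScript sketch of that idea, assuming a hypothetical applyToSystem callback; the timeout-based auto-revert is one possible safety net, not something the study prescribes:

 // Hypothetical sketch of a run-time preview: apply the recommended value
 // directly, but remember the previous state so the user can revert. If the
 // user neither confirms nor reverts (e.g. the new setting makes the system
 // unusable for them), a timer restores the previous state automatically.
 type Setting = { id: string; value: unknown };

 function runtimePreview(
   current: Setting,
   recommendedValue: unknown,
   applyToSystem: (s: Setting) => void,
   confirmTimeoutMs = 15000
 ): { confirm: () => void; revert: () => void } {
   const previous = { ...current };
   applyToSystem({ id: current.id, value: recommendedValue });

   const timer = setTimeout(() => applyToSystem(previous), confirmTimeoutMs);

   return {
     confirm: () => clearTimeout(timer), // keep the new value
     revert: () => {
       clearTimeout(timer);
       applyToSystem(previous);          // restore the old value
     },
   };
 }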

How should the assessment of recommendations be represented to users?

  • 4 (explicit) assessment methods have been investigated:
    • Yes/no question
    • Star rating
    • Questionnaire
    • Comment function
  • Note: explicit feedback requires considerable mental effort, but it also has a positive impact on the accuracy of the given feedback. However, the feedback mechanism developed in Cloud4all should also consider implicit feedback to reduce the assessment effort. Positive feedback should be easy to give (implicitly); e.g. it could be sufficient to treat the acceptance of a recommendation as a positive reply. If users are content with the given suggestion, explicit feedback could be gathered with a simple yes/no question, or a time-saving rating method could be used. In contrast, if the user is not pleased with the recommendation, more options should be offered to describe opinions and problems (e.g. through a comment function). The collected feedback must have an effect on future recommendations: if a user explicitly rates a recommendation positively, the reputation of this specific suggestion should improve; if rated negatively, the reputation should decrease (see the sketch after this list).
  • Results
    • Overall: No clear tendency as to which assessment method is preferred: star rating (47%); questionnaire (43%); yes/no question (30%); comment (26%).
    • Group related:
      • Male participants (58%) and power users (69%) showed a tendency towards the star rating method.
      • Female participants (54%) and non-power users (60%) preferred the questionnaire.
    • Additional remarks given by participants
      • A question or rating alone is not expressive enough to convey an opinion, especially if users are not content with the recommendation. Hence, it would be useful to offer an additional but optional comment function.
      • Star ratings are often not used well, as rankings (one, three, or five stars) offer no insight into why the user chose a particular rating. Even though widely used nowadays, not every user knows how a star rating works. This is why the meaning of the ratings should be defined beforehand.
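
One way to realise the reputation effect described in the note above is a simple score update per suggestion. The following sketch uses illustrative weights; the actual Cloud4all feedback integration is not specified here:

 // Hypothetical sketch: positive feedback (explicit or implicit) raises a
 // suggestion's reputation, negative feedback lowers it. Weights are
 // illustrative only.
 interface Suggestion {
   id: string;
   reputation: number; // e.g. starts at 0
 }

 type Feedback =
   | { kind: "implicitAccept" }            // the user simply kept the recommendation
   | { kind: "yesNo"; positive: boolean }
   | { kind: "stars"; stars: 1 | 2 | 3 | 4 | 5 };

 function updateReputation(s: Suggestion, f: Feedback): Suggestion {
   let delta = 0;
   switch (f.kind) {
     case "implicitAccept": delta = 1; break;             // cheap positive signal
     case "yesNo":          delta = f.positive ? 1 : -1; break;
     case "stars":          delta = f.stars - 3; break;   // centre the 1..5 scale at 3
   }
   return { ...s, reputation: s.reputation + delta };
 }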

Additional remarks

  • Settings that are frequently adjusted by users (e.g. volume) should not trigger recommendations every time a user changes them. A possible solution is to allow users to exclude settings for which recommendations are unwanted. A more general solution is to allow users to specify the contexts for which recommendations are desired (e.g. application level, device level, preference category level, etc.); a sketch of such a scope follows this list.
  • Another issue mentioned by the participants of the experiment was the wish to delay a recommendation. Users want to return to a recommendation when they have time for it. Some email providers handle this situation by showing the user only a reminder of a new email and letting them decide when to check it.
  • The evaluated design included an "Additional Settings" option. Many participants declined to use this option even though they liked working with additional settings when they were offered. One participant argued that it is not obvious what is meant by additional configuration. Hence, even though the limitation of setting options was accepted, it could be better to present all configurations immediately.
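
As a sketch of the first remark above, a per-user recommendation scope could combine an exclusion list with the desired context levels (field names are hypothetical):

 // Hypothetical sketch: let users exclude settings from triggering
 // recommendations and restrict recommendations to chosen context levels.
 interface RecommendationScope {
   excludedSettings: string[]; // never recommend changes to these, e.g. "volume"
   levels: Array<"application" | "device" | "preferenceCategory">;
 }

 const exampleScope: RecommendationScope = {
   excludedSettings: ["volume"],
   levels: ["device", "preferenceCategory"],
 };

 function shouldRecommend(settingId: string, scope: RecommendationScope): boolean {
   return !scope.excludedSettings.includes(settingId);
 }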