Matchmaker Feedback Loop

Use Cases

Use cases for the Matchmaker Communication Panel

  • The Matchmaker proposes settings for a device or application that you have not used before. These inferred settings don't suit your needs. How should this be communicated to the Matchmaker?
    • A simple mechanism (e.g. a button) that restores the default settings. (You would then need another tool or mechanism to define the settings you need. In some contexts, the PMT might work for this, but not in other contexts, e.g. where the screen is too small, or where you don't have the time or patience to define settings for the device or application you're using.)
    • A simple mechanism to tell the Matchmaker that it should try to define a better set of settings. (We could say that this implies a bad rating for the inferred settings.) Behind the scenes, the Matchmaker uses a different mechanism (e.g. rule-based instead of statistical, or vice versa) to infer settings.
    • Other ideas?
  • The Matchmaker proposes a change to a setting based on a change in the context. Examples: a change in ambient light (i.e. a change in the environment) triggers a change in the screen's brightness; a change in font size by the user triggers a proposal to also change the screen's contrast. How should the user communicate feedback to the Matchmaker?
    • A rating. (This would probably be an opt-in mechanism, to avoid the Clippy problem.)
    • A "put it back the way it was" button or mechanism and a "don't change this again" button. (Using these buttons could be considered equivalent to giving a bad rating.)

Note: In the above descriptions, the word "button" should be read as a stand-in for any type of input that can achieve the same goal, e.g. a speech command or a kind of touch interaction. A sketch of how such actions could be reported to the Matchmaker follows below.
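
The following TypeScript sketch models the actions discussed above as a single feedback message. All type names, fields and values are assumptions for illustration only; they are not part of the GPII architecture.

  // Hypothetical sketch: the user actions described above, expressed as one
  // feedback message. All names are assumptions, not defined by GPII.

  type FeedbackAction =
    | "restore-defaults"      // restore the default settings
    | "retry-inference"       // "try to find a better set of settings"
    | "do-not-change-again"   // reject future context-triggered changes
    | "rate";                 // explicit rating, e.g. after a context change

  interface FeedbackMessage {
    userToken: string;              // token identifying the user's preference set
    action: FeedbackAction;
    settingId?: string;             // e.g. "screenBrightness", if the feedback targets one setting
    rating?: number;                // only present when action === "rate"
    timestamp: string;
  }

  // Example: the user rejects an automatic brightness change.
  const example: FeedbackMessage = {
    userToken: "user-123",
    action: "do-not-change-again",
    settingId: "screenBrightness",
    timestamp: new Date().toISOString()
  };

  console.log(JSON.stringify(example, null, 2));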

Feedback Loop

The Feedback Loop is an important part of the architecture, offering the user the option to give feedback on the adaptations a matchmaker has made. It obviously consists of user interface elements and interactions, yet those are not part of WP205. Instead, we deal with the "algorithm side" of the feedback loop, for example in the following scenarios:

API to the Feedback loop

Which sections of the architecture contact the matchmaker, and what information is transferred on a feedback event?
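
As a starting point for discussion, the sketch below shows one possible shape of such a feedback event and a minimal entry point through which other components could pass it to the matchmaker. All names, fields and values are assumptions for illustration.

  // Hypothetical sketch of the information a feedback event could carry and of
  // a minimal entry point on the matchmaker side. Names are assumptions only.

  interface FeedbackEvent {
    userToken: string;                           // whose preference set was adapted
    source: "pmt" | "built-in-ui" | "inferred";  // which part of the architecture reports it
    solutionId: string;                          // application or device that was configured
    settings: Record<string, unknown>;           // the settings the matchmaker had proposed
    verdict: "accepted" | "rejected" | "rated";
    rating?: number;                             // optional explicit rating
    context?: Record<string, unknown>;           // environment snapshot, e.g. ambient light
  }

  // The matchmaker exposes a single entry point; how it reacts internally is
  // matchmaker-specific (see the questions below).
  function reportFeedback(event: FeedbackEvent): void {
    console.log(`Feedback from ${event.source}: ${event.verdict} for ${event.solutionId}`);
  }

  reportFeedback({
    userToken: "user-123",
    source: "built-in-ui",
    solutionId: "example.screen.magnifier",
    settings: { magnification: 2.0 },
    verdict: "rejected",
    context: { ambientLightLux: 30 }
  });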

How does user feedback arrive?

There may be different ways in which a user changes his or her profile, for example through a profile management tool or through a built-in interface element. Feedback may therefore arrive through different APIs or events and may have to be handled differently.
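
One way to keep the matchmaker independent of where feedback originates is to normalise the different inputs into a single internal event before they reach the matchmaker. The sketch below shows one possible shape of such an adapter layer; all names and shapes are assumptions.

  // Hypothetical adapters that turn different feedback sources into one common
  // internal event. All names and shapes are assumptions for illustration.

  interface NormalisedFeedback {
    settingId: string;
    oldValue: unknown;
    newValue: unknown;
    explicit: boolean;   // true if the user deliberately gave feedback
  }

  // Source 1: the user edits a value in a profile management tool (PMT).
  function fromPmtEdit(settingId: string, oldValue: unknown, newValue: unknown): NormalisedFeedback {
    return { settingId, oldValue, newValue, explicit: true };
  }

  // Source 2: the user presses a built-in "put it back the way it was" control.
  function fromRevertButton(settingId: string, proposedValue: unknown, previousValue: unknown): NormalisedFeedback {
    return { settingId, oldValue: proposedValue, newValue: previousValue, explicit: true };
  }

  const events = [
    fromPmtEdit("fontSize", 14, 18),
    fromRevertButton("screenBrightness", 0.9, 0.6)
  ];
  console.log(events);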

Inferred Feedback

Are there efficient ways to infer feedback from user actions without users giving it explicitly? How reliable is such feedback?
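
A simple heuristic used in some recommender systems is to treat a quick manual revert of an adapted setting as implicit negative feedback. The sketch below only illustrates the idea; the 60-second threshold and all names are assumptions, and the reliability of such signals is exactly the open question above.

  // Hypothetical heuristic: if the user manually changes a setting back shortly
  // after the matchmaker adapted it, treat that as implicit negative feedback.

  interface SettingChange {
    settingId: string;
    changedBy: "matchmaker" | "user";
    timestampMs: number;
  }

  const REVERT_WINDOW_MS = 60_000;   // assumed threshold for a "quick" override

  function inferNegativeFeedback(history: SettingChange[]): string[] {
    const negative: string[] = [];
    for (let i = 1; i < history.length; i++) {
      const prev = history[i - 1];
      const curr = history[i];
      const quickUserOverride =
        prev.changedBy === "matchmaker" &&
        curr.changedBy === "user" &&
        prev.settingId === curr.settingId &&
        curr.timestampMs - prev.timestampMs < REVERT_WINDOW_MS;
      if (quickUserOverride) {
        negative.push(curr.settingId);
      }
    }
    return negative;
  }

  console.log(inferNegativeFeedback([
    { settingId: "screenBrightness", changedBy: "matchmaker", timestampMs: 0 },
    { settingId: "screenBrightness", changedBy: "user", timestampMs: 15_000 }
  ])); // -> ["screenBrightness"]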

Matchmaker Specific Questions

How do rule-based, ontology-based or statistical matchmakers react to such run-time feedback?
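
To make the question concrete: a statistical matchmaker might, for instance, down-weight the candidates or model components that produced a rejected proposal, while a rule-based matchmaker might lower the priority of (or disable) the rule that fired. The toy sketch below only illustrates the statistical case; it is not how any of the actual matchmakers work, and all names are assumptions.

  // Toy illustration of a statistical matchmaker reacting to run-time feedback
  // by reducing the weight of the candidate that produced a rejected proposal.

  interface Candidate {
    id: string;       // e.g. a cluster of similar user profiles
    weight: number;   // confidence that this candidate fits the current user
  }

  function applyFeedback(candidates: Candidate[], rejectedId: string, learningRate = 0.5): Candidate[] {
    return candidates.map(c =>
      c.id === rejectedId ? { ...c, weight: c.weight * (1 - learningRate) } : c
    );
  }

  let candidates: Candidate[] = [
    { id: "low-vision-cluster", weight: 0.7 },
    { id: "motor-impairment-cluster", weight: 0.3 }
  ];

  // The proposal derived from "low-vision-cluster" was rejected by the user.
  candidates = applyFeedback(candidates, "low-vision-cluster");
  console.log(candidates);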

No or Incomplete feedback

This is the typical recommender-system problem: how do we deal with incomplete feedback, or with users who do not want to give any feedback at all?
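
One standard recommender-system answer is to fall back on population-level data when individual feedback is sparse, e.g. by blending the few ratings a setting has with a prior taken from all users. The sketch below shows such a damped mean; the prior value and prior weight are arbitrary assumptions.

  // Toy illustration of handling sparse feedback: blend the (few) ratings we
  // have with a population-level prior, so that one or two ratings do not
  // dominate. Prior value and prior weight are arbitrary assumptions.

  function dampedMean(ratings: number[], priorMean = 3, priorWeight = 5): number {
    const sum = ratings.reduce((a, b) => a + b, 0);
    return (priorWeight * priorMean + sum) / (priorWeight + ratings.length);
  }

  console.log(dampedMean([]));                        // no feedback at all -> falls back to the prior (3)
  console.log(dampedMean([5]));                       // a single 5-star rating moves the estimate only slightly
  console.log(dampedMean([5, 5, 4, 5, 5, 4, 5, 5]));  // more feedback -> estimate approaches the data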

Metaphors for feedback

How is feedback expressed by the user? A 0-5 star rating, just an "I like" button, or something else? This question also concerns the teams working on the front end, as we have to agree on the same metaphors in order to be consistent.
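
Whatever metaphor the front-end teams settle on, the matchmakers probably only need a common internal scale. The sketch below normalises a few candidate metaphors to a value between -1 and +1; the chosen metaphors and the mapping are assumptions to be agreed on with the front-end teams.

  // Hypothetical normalisation of different front-end feedback metaphors into a
  // single internal scale from -1 (bad fit) to +1 (good fit).

  type FrontEndFeedback =
    | { kind: "stars"; value: number }   // 0..5 stars
    | { kind: "like" }                   // a single "I like this" button
    | { kind: "revert" };                // "put it back the way it was"

  function normalise(fb: FrontEndFeedback): number {
    switch (fb.kind) {
      case "stars":
        return (fb.value / 5) * 2 - 1;   // 0 stars -> -1, 5 stars -> +1
      case "like":
        return 1;
      case "revert":
        return -1;
    }
  }

  console.log(normalise({ kind: "stars", value: 4 })); // 0.6
  console.log(normalise({ kind: "like" }));            // 1
  console.log(normalise({ kind: "revert" }));          // -1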

Usefulness of feedback

How useful is the feedback for the different matchmakers in the end? How can they utilize it?
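
One way to approach this question empirically is to track, per matchmaker, how often adaptations are accepted before and after feedback is taken into account. The sketch below computes such an acceptance rate; it is only one possible measure, and all names are assumptions.

  // Toy measure of how useful feedback is to a matchmaker: compare the share of
  // accepted adaptations with and without feedback incorporated.

  interface AdaptationOutcome {
    matchmaker: "rule-based" | "statistical";
    usedFeedback: boolean;   // did this adaptation already take user feedback into account?
    accepted: boolean;       // did the user keep the adaptation?
  }

  function acceptanceRate(outcomes: AdaptationOutcome[], usedFeedback: boolean): number {
    const relevant = outcomes.filter(o => o.usedFeedback === usedFeedback);
    if (relevant.length === 0) return NaN;
    return relevant.filter(o => o.accepted).length / relevant.length;
  }

  const log: AdaptationOutcome[] = [
    { matchmaker: "statistical", usedFeedback: false, accepted: false },
    { matchmaker: "statistical", usedFeedback: false, accepted: true },
    { matchmaker: "statistical", usedFeedback: true, accepted: true },
    { matchmaker: "statistical", usedFeedback: true, accepted: true }
  ];

  console.log(acceptanceRate(log, false)); // 0.5 without feedback
  console.log(acceptanceRate(log, true));  // 1.0 with feedback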


Tooling

WP205.2 is about developing the tooling for WP204. One of these tools should be a converter that imports foreign profiles so that we can use them as virtual user profiles. Another tool might be a generator for virtual user profiles. These are, of course, back-end tools and can be implemented as simple scripts.
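
As a minimal example of what such a converter script could look like, the sketch below maps a hypothetical foreign profile format onto a virtual user profile keyed by common terms. The foreign format, the term URIs and the mapping are all assumptions for illustration.

  // Minimal sketch of a converter script: map a hypothetical foreign profile
  // format onto a virtual user profile. The foreign format, the term URIs and
  // the mapping are assumptions for illustration only.

  interface ForeignProfile {
    name: string;
    preferredFontSize: number;   // in points
    highContrast: boolean;
  }

  interface VirtualUserProfile {
    userToken: string;
    preferences: Record<string, unknown>;
  }

  function convert(foreign: ForeignProfile): VirtualUserProfile {
    return {
      userToken: `virtual-${foreign.name.toLowerCase().replace(/\s+/g, "-")}`,
      preferences: {
        "http://registry.gpii.net/common/fontSize": foreign.preferredFontSize,
        "http://registry.gpii.net/common/highContrastEnabled": foreign.highContrast
      }
    };
  }

  console.log(JSON.stringify(convert({
    name: "Test User",
    preferredFontSize: 18,
    highContrast: true
  }), null, 2));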

Related Pages