Matchmaker Meeting, October 2012


2-day face-to-face meeting followed by a hackathon, in Stuttgart, hosted by Media University Stuttgart (HdM).

Location: Room 304, Nobelstraße 10, 70569 Stuttgart (Vaihingen).

Date (see the Doodle poll - now closed): Oct. 9-12, 2012

Attendees

Attendees Tuesday & Wednesday:

  • ASTEA
    • Boyan Sheytanov
    • Evgeni Tsakov
  • CERTH-HIT
    • Kostas Kalogirou
  • CERTH-IBBR
    • Vassilis Koutkias
    • Nicos Maglaveras
  • CERTH-ITI
    • Kostas Votis
    • Nick Kaklanis
  • FhG-IAO
    • Anne Krüger
    • Vivien Melcher
    • Matthias Peissner (only Tuesday afternoon)
  • HdM
    • Gottfried Zimmermann
    • Andreas Stiegler
    • Christophe Strobbe
  • IDRC
    • Colin Clark
  • RtF-I
    • Kasper Markus
  • Technosite
    • Andres Iglesias
    • Ignacio Peinado
  • TUD
    • Claudia Loitsch
    • Gerhard Weber

Draft Minutes

Tue 9 October

Final tuning discussion on deliverables

9:00am D204.1 (2h)

Workplan available on Cloud4all Manage site.

  • Section 4: Scenarios
  • Section 3: Requirements: (not much user feedback from workshops; see D101.2 instead)
  • ...

Define what the Matchmaker does; avoid exposing the internals of the Matchmaker (see Andy's presentation).

  • The user is always in a given context, so everything in the preference set is always linked to a context. Even when a preference set is initialised, this happens in a certain context. (UI for initialising preferences would need to present samples/previews rather than e.g. numbers for font size. See e.g. Windows 7's UI for changing font size.)
  • Example scenario: user has preferences for a specific device (e.g. an Android smartphone) and starts using another specific device (e.g. a new smartphone). Matchmaker infers preferences for the new device from preferences for the old device. How this works is specific to the concrete matchmaker (e.g. by going through generic preferences, or by statistical means).
  • Context is very complex: not just the device, but also many other aspects, such as the application, the time of day, and any information that is available from sensors.
  • Make sure that the Matchmaker does not present settings that are out of the range of what the user can use. For example, if the user needs a font size of at least 18 pt, presenting a 24 pt font still allows him to reduce the font size, but presenting a 14 pt font size is inaccessible.
  • Notion of common preferences - user has defined e.g. I can't see text smaller than an inch (...).
  • Terms "needs" vs "preferences": possibly define "needs" as thresholds? (E.g. because someone can't see text smaller than 18 pt.) But we may not be able to make this distinction technically, except by means of "priorities". Possibly marker context for thresholds.
  • Matchmaker should also be notified of preference changes made by a user so the Matchmaker can suggest related changes, e.g. suggest higher contrast when the user increases the font. User feedback mechanism / floating control panel should fit into this process somehow. This also has an impact on the architecture (to be discussed again later).
  • Suggesting a solution (finding AT; finding Web Services) is similar to the scenario of inferring new settings from known settings.
  • Suggested names for these use cases: recommendation and proposal; inferring preferences for a target context. (Context can include whether solutions are locally available or not.)
  • In revised use cases document, some use cases are grouped under "Use cases for specific types of matchmakers" (first one covers the transformer):
    • RESOLUTION: move the content of these use cases to D204.1.
    • The Transformer (see Linz meeting): declarative means to translate preferences, e.g. from generic to application-specific, or from one application-specific format to another. Ontologies may be a source for these declarative rules. The transformer is a tool that matchmakers may call; it is not a required component for each matchmaker. Proposal: Transformer could move to "scenarios of matchmaking" in D204.1.
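The threshold idea above ("never present settings below what the user can use") can be sketched as follows (a minimal illustration; the function name and representing candidates as plain numbers are assumptions, not part of the architecture):

```javascript
// Hypothetical sketch: filter candidate settings against a user's
// lower-bound threshold, so the matchmaker never proposes a value
// the user cannot perceive (e.g. a font below the minimum size).
function filterByThreshold(candidates, minValue) {
    // Keep only candidates at or above the threshold; a larger value
    // is acceptable because the user can still reduce it.
    return candidates.filter(function (value) {
        return value >= minValue;
    });
}

// A user who needs at least 18 pt: 24 pt stays, 14 pt is dropped.
var usable = filterByThreshold([14, 18, 24], 18); // [18, 24]
```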
Functional Requirements

See Functional Requirements (updated last week).

  • Requirements for the whole framework
  • Requirements for specific matchmakers
  • Requirements for input and output

Matchmaker gets the whole preference set, device profile, context information and possibly the locally available solutions.

C.3: The Matchmaker can access the Preference Server for statistical analysis. Should the Matchmaker go through the Flow Manager for this? This would be good for decoupling the components. If the Matchmaker has to go to the Preference Server, how do we make the Matchmaker stateless? Matchmakers need to be trusted components (so need to go through a review process). Or Matchmaker as a broker; go through Flow Manager and get back a URL that is valid only for a specific time window etc. Possibly also define "views" of the Preference Server, so there is no access to all preferences and their full history.

A.2: Output that will be used by a Settings Handler.

A.3: There is a Transformer in the Architecture team's repository. (See presentation in Linz.) It needs to be expanded.

A.4: API needs to be expanded.

A.8: Needs discussion.

A.9: Process Strategy Units: we have different matching problems (see categorisation in scenarios), and different matchmakers can solve the different problems. This strategy still needs to be defined.

Not in these APIs: connection to the user; ideally want the Matchmaker to be stateless. For feedback: changes by user first go back to the Preference Server.

We may also need a Service Composition API, e.g. if a user requires a certain font size, but this cannot be met by the application, a magnifier may be suggested, or a composed Web Service. Discussion: However, the Matchmaker would not do service composition by itself, only suggest what is needed - the Matchmaker just needs to know what is available; the service composition would not be part of the Matchmaker itself but would be at the level of the Lifecycle Manager. This type of output from the Matchmaker should be presented to the user; possibly work through the Solutions Registry API.

B.2: comes from the DoW; this is an implementation detail instead of a requirement. The Matchmaker can be cloud-based.

B.4: Matchmaker needs to be a trusted component (cf supra).

B.5: comes from the DoW. Cf older idea of private matchmakers, which would not be a trusted component. If at one point, a user needs to trust the GPII system to access certain data, trust is essential. API for users to define who to trust still needs to be defined: file a bug at http://issues.gpii.net/.

Distinction between trusted by GPII/Cloud4all versus trusted by users (e.g. for developing a matchmaker internally)? (AS)

D.2: see wording of generic use case.

APIs (Architecture Team)

Topics:

  • solutions registry: what is it & its API?
    • relationship to Unified Listing
  • Transformer
  • Matchmaker framework:
    • How to configure a matchmaker
    • General flow/API

Cloud-scale: there will be multiple instances of many components in the system. -> optimise for horizontal scaling.

Flow Manager - invoke Matchmaker; Matchmaker gets user's preferences, device info, URLs or APIs to Services. (Any state in the system belongs in the user's preferences, e.g. what services to use.)

The Matchmaker uses the "Matching Strategy". An ontology can give a ranking or weight to a proposed solution: f(need, solution) -> weight.
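A minimal sketch of the f(need, solution) -> weight idea (the payload shapes and the one-point-per-met-need scoring are invented for illustration; a real strategy could use ontology-derived weights):

```javascript
// Hypothetical sketch of a matching strategy: score each solution
// against the user's needs and rank by total weight. The property
// names ("fontSize", "highContrast") are illustrative, not Registry terms.
function score(needs, solution) {
    var weight = 0;
    Object.keys(needs).forEach(function (term) {
        if (solution.capabilities.indexOf(term) !== -1) {
            weight += 1; // f(need, solution) -> weight; here simply 1 per met need
        }
    });
    return weight;
}

// Return solutions ordered from best to worst match.
function rank(needs, solutions) {
    return solutions.slice().sort(function (a, b) {
        return score(needs, b) - score(needs, a);
    });
}
```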

Treat solutions as different from preferences? (Cf use cases.)

Solutions Registry is the unified listing; it can be queried based on the context (e.g. all solutions for Windows 7); it is the single optimised source for solutions and will be interfaced by Matchmakers, Denis Anson's tool, ... The Solutions Registry is optimised for real-time access, is ontology-agnostic, and will be seeded/populated by an ontology. (The ontology that is being developed is actually a hyper-ontology that can work with existing instances of ontologies, e.g. import other ontologies.) The Semantic Framework will feed the Solutions Registry but not be the Solutions Registry. The two components contain the same content, but the Solutions Registry is optimised for real-time access, while the Semantic Framework is optimised for adding new data.

Semantic Framework

"Progress on rule-based matchmaker + alignment tool (part of A202.1: Automatic generation of metadata for accessible solutions)" (CERTH/ITI)

A first version of the ontology (semantic framework for content and solutions) has been published for online browsing: http://160.40.50.57/cloud4all/ It contains content about solutions and their settings, devices, vendors (providers), etc. Mappings to the Registry are also provided. The ontology will be connected with the matchmakers as well as with the Solutions Registry API (it will feed the Solutions Registry with the relevant data about solutions and settings). It also has to be connected with the federated repositories for the storage of AT settings.

Ontology Alignment tool: allows a vendor to add their solution to a specific category of solutions. The process is semi-automatic; lexicographic algorithms have been incorporated. The tool is connected with the semantic framework ontology, and products can be included in more than one category (for example, ZoomText would be included in both the screen reader and the magnifier category (multiple inheritance)). The tool also assists vendors in describing the settings of their AT solutions: provide the name of the setting and map it to the corresponding term in the Registry, describe the value range (ideally, we would have a description of how the solution's values map to those in the Registry - this would be useful for the Transformer), ... (This is part of WP202.)

For information that is available externally (EASTIN), the information is not duplicated, just referenced. The ontology will also keep settings that, for example, EASTIN does not store.

Discussion: two types of people who would be interested in providing mappings/transformation rules: vendors (e.g. screen reader vendors) and users (e.g. screen reader users).


Matchmaker-Ontology API.

First implementation with OpenRules (OpenRules allows you to define rules in Excel files - decision table). Uses data stored in the Solutions Ontology (semantic framework).

OpenRules is a framework that can use Excel spreadsheets and can convert them into Java classes (cf. query by example). It is open source: GPL. Alternatives:

(W3C list of OWL implementations)
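The decision-table idea behind OpenRules can be illustrated in plain code (a sketch only; in OpenRules the rows live in an Excel sheet, and the condition/conclusion names here are made up):

```javascript
// Hypothetical decision table: each row pairs a condition on the input
// with a conclusion, and the first matching row wins. The user attribute
// ("visualAcuity") and the conclusions are invented for illustration.
var table = [
    { when: function (u) { return u.visualAcuity < 0.3; }, then: { magnification: 2.0 } },
    { when: function (u) { return u.visualAcuity < 0.8; }, then: { fontSize: 18 } },
    { when: function () { return true; },                  then: {} } // default row
];

function decide(rules, user) {
    for (var i = 0; i < rules.length; i++) {
        if (rules[i].when(user)) {
            return rules[i].then;
        }
    }
}
```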


Alignment tool: discussion:

  • What about two very different representations of font size, e.g. point size vs strings (small, medium, large, ...)? Or speech rate expressed as words per minute vs slow, normal, fast? This can be included in the ontology. The content for the rules in the Transformer needs to be somewhere. The transformations may be application-specific/solution-specific. Where will the mappings come from? From either a vendor or the user community.
  • What do we do if we have two preferences that are not semantically equal but where one is a subset of the other or one overlaps with the other? This type of transformation may be application specific and lossy (e.g. in JAWS, it transforms to xyz). Or if you have a set and a subset, you can still apply the rest of the set that is outside the subset.
  • Vendors in SP3 could help with the transformation rules. Cf. Linz Hackathon, where some of these rules were written for Maavis and TextHelp.
  • Solution Registry: it would be nice if it also contained how to start a solution etc.
  • Cloud4all and GPII Architecture - GPII can be seen as a superset of Cloud4all deliverables, but they are not separate. Architecture team operates transparently.
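The kind of declarative mapping discussed in the first bullet could look like this (a sketch; the thresholds and rule format are invented, and such a mapping is inherently lossy):

```javascript
// Hypothetical declarative transformation rule of the kind the
// Transformer could apply: mapping a numeric font size (in points)
// onto an application-specific enumeration ("small"/"medium"/"large").
// The thresholds are invented for illustration.
var fontSizeRule = [
    { max: 12, value: "small" },
    { max: 18, value: "medium" },
    { max: Infinity, value: "large" }
];

function transform(rule, points) {
    // Return the value of the first band the point size falls into.
    for (var i = 0; i < rule.length; i++) {
        if (points <= rule[i].max) {
            return rule[i].value;
        }
    }
}
```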

11:00am ID204.1 (1h)

See Matchmaking Algorithms. Discussed during other presentation about the statistical matchmaker.

12:00pm ID205.1 (1h)

Discussed during the Hackathon.

1:00pm Lunch break

2:00pm User tests (test application, benchmarking metrics, use cases) - telecon

See Matchmaker Benchmarking Metrics.

Need to define a metric for comparison of the matchmakers, and for comparing the matchmakers with expert adaptations.

Some atomic metrics can be combined in different ways into a composite metric that would be used for comparison.
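One possible way to combine atomic metrics into a composite metric (the metric names and weights below are placeholders; the real metrics were still to be defined):

```javascript
// Hypothetical sketch: combine atomic benchmarking metrics into one
// composite score via a weighted sum. Metric names and weights are
// invented for illustration.
function compositeMetric(atomics, weights) {
    return Object.keys(weights).reduce(function (sum, name) {
        return sum + weights[name] * atomics[name];
    }, 0);
}

// Example: task success counts more than speed.
var compositeScore = compositeMetric(
    { taskSuccess: 0.9, speed: 0.5, settingsChanged: 0.8 },
    { taskSuccess: 0.5, speed: 0.2, settingsChanged: 0.3 }
); // 0.5*0.9 + 0.2*0.5 + 0.3*0.8 = 0.79
```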

Success of matchmakers can only be measured in a specific context, hence the need for tasks to be executed in certain specific applications.

See previous discussion: WP204/WP205 Meeting on 2012-09-28 12:00 UTC.

Issues/ discussion:

  • Where will we get each user's preference set? (E.g. input them manually?) In the first phase, the preferences initialisation tool will not be ready.
  • Also decide which application to use (see Testing).
  • (...)
  • Matchmaker comes into play when we switch between platforms, so good place/scenario to measure satisfaction.
  • The tasks we design should focus on the different conditional cases, e.g. one on the platform where initialisation happens, next on another platform, and another one ...
  • The rule-based matchmaker should be ready by the first evaluation phase. The statistical matchmaker will have algorithms running, but there will be no data from which to learn (i.e. to do satisfactory matchmaking); HdM is still working on getting more data. One option is creating some stereotypical/persona data (make up what they need, and mark the records so they can be removed later).
  • Another important scenario is when different people come to the same platform, esp. when the platform is set up the way the previous user left it behind; see if the platform changes to the next user's settings (important for public computers). (Note that the Matchmaker is not involved in identification.)
  • Also compare Matchmaker vs no matchmaking? Just rule-based and statistics-based matchmaker; compare both with the "perfect" matchmaker, i.e. an expert who adapts the system. The expert would adapt the system for each user individually, but only on the second platform/context. The issue is how to ensure that the expert sets up the system in the best possible way for the user. An expert would be better than algorithms. If we have technically very savvy users, they could also set up the system, for example experts with disabilities. However, every setup will be biased; people are used to their own setup, but that does not mean that another setup cannot be good.
  • Gathering data from users during the evaluation phase is important; this needs to be covered in the consent forms.
  • Measuring time spent on a task: the application is not the test object, so its importance is not obvious. It is one indicator of how good a UI is. The adaptations proposed by the matchmakers may lead to differences that affect the time a person needs. Time is only used to compare between matchmakers, not between users!

Other ideas and thoughts:

  • User testing in isolation/whole system: computer in library would be excellent; test that involves the whole context ...
  • Different concepts of disability in this project: be careful w.r.t. creating categories; "disabilities" are contextual. Need control group that is like the typical designer and a control group that typically lies outside the focus of designers.
  • 3 scenarios: Matchmaker works perfectly; Matchmaker output is suboptimal; user can't do anything at all. (...) User may not be able to change settings; we want to measure the number of settings changed by the user but we don't know yet how this fits in the formula for the final metric.
  • Reduce number of variables? Risk that some variables will get merged/confused with other ones. (Independent variables.)
  • Can't focus on specific settings since each user will have different settings.
  • Three user groups proposed by TUD have to do with usability of the application; not to be confused with expert adaptations that are compared with Matchmaker adaptations.
  • From chat: the tasks to be performed have to be the same; then you can also compare performance, and other measures will cancel out overestimated self-perception. (...)
  • For this phase, we should test applications, Matchmaker, security gateway etc separately. In the second and third evaluation phases, we should test them together.
  • See also Cloud4All Testing

4:00pm Introduction to the internal concepts of the rule-based matchmaker + discussion (1h)

Discussion points:

  • structure of preferences, including conditions: conditions expressed as "name": "<url>", "value": ">= ...". This is simpler to parse than the current proposal. Architecture team will present a proposal during the Hackathon (it is work in progress). CERTH-ITI used OpenRules format for conditions. See also Simplifying Preferences Documents. Note: We need a better mechanism for updating and communicating updates.
  • exercise to express requirements in the use cases as XML/atomic settings, e.g. "I need simple labels and terms" (cf CERTH-IBBR: annex to D101.2).
  • The Registry is still a spreadsheet; which terms we can use is still not fixed. Open issue This also affects the stability of the preferences format (revised ISO/IEC 24751).
    • Spreadsheet contains the terms from ISO/IEC 24751; based on list of settings from SP3, a list of potential registry terms was added (WP101).
    • Need to look at these additional terms; if we still find that terms are lacking, request additions. Open issue

5:00pm Introduction to the internal concepts of the statistical matchmaker + discussion (1h)

8:00pm Dinner

Dinner at Römerhof. Please note that every participant is kindly asked to pay for themselves.

Wed 10 October

9:00am User tests (continued)

See Cloud4All Testing.

Examples for Applications/Tasks:

  • Writing an email
  • Browse the Web with Firefox (incl. FLOE)
  • Write a document in a Word processor
  • Find the phone number of the secretariat of IDRC on the Web
  • Look up a recipe on the Web
  • Kostas will send a list of tools implemented by CERTH/ITI and maybe could be used as examples by creating also a settings handler mechanism

Note that all applications/tasks need to be available in the native language of the users.

"Expert" matchmaker options:

  1. Let the user set the preferences on the second context.
  2. Let an expert set the preferences on the second context, after having observed the user on the first context.
  3. Let the moderator walk the user through a settings dialog on the second context, with the user determining the settings.
  4. Let the user adjust settings via "floating panel" (might be Wizard-of-Oz through verbal communication) on the second context.

Also: Interview at the beginning to find out about how much the user knows about the first and second context.

Platforms:

  • See Cloud4All Testing.
  • APfP ready for some platforms. But this only covers making the preferences effective on the platform, not storing the user-set preferences on the Preference Server.
  • Possible: Switch from desktop platform to Android or iPad. iPad would require manual configuration. But wait for results of the Dresden focus group.

Options for initialization:

  • Snap-shotting applications (after 0.1 release, maybe in Dec.)
  • Manually
  • Wizard

Preferences:

Rest of this work delegated to testing group/liaisons.

Others:

  • SP1 is currently planning on a user workshop in preparation of the user profile management and editing tool.
  • The floating panel will be developed later, and user research will be done beforehand.

Timeline:

  • In short time, TUD will conduct a focus group with blind and visually impaired people, to prepare the test scenarios. Expected result: selection of preferences, and artificial context terms.
  • Snap-shotting tool for preferences in Dec. 2012
  • Probably delay the first user tests, e.g. in March 2013.
  • TUD can do additional user tests at Summer University in Karlsruhe.

Test instrumentation:

  • Snap-shotting tool for preferences
  • Record context
  • Who is going to implement sensor delivering context data?

11:00am Conditions (2h)

Goals

  • declarative
  • semantic
  • extensible

Declarative: it is easier to get started if you start with the data structure. Downside is that it may be a bit more verbose.

Semantic: not just capture our implementations but the higher-level concepts. Also support UI that speaks the user's language. (See example wireframes for FLOE UI Options.)

Extensible: Registry-based approach is very powerful because it is extensible; it can grow over time. Also runtime binding to functions implementing the operators.


Discussion:

  • Nobody expected users to "write" anything like conditions; tools would translate them into a usable UI.
  • Idea: the user would never see the conditions; these would be inferred by the Matchmaker based on context etc. Counter-idea: users would want to be able to access conditions somehow.

Declarative Conditions

JSON example: value + condition; condition has operator block. Operator block contains format property; also has type property (e.g. allows range to be presented as slider etc.).

Bind operators to implementations (e.g. methods in JavaScript).

See Proposal for Declarative Preference Conditions for examples.
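A rough illustration of binding operator names to implementations at runtime (the condition shape and operator names below are assumptions for illustration, not the actual proposal):

```javascript
// Hypothetical sketch of the "value + condition" idea: a condition
// carries an operator name that is bound at runtime to a JavaScript
// implementation. Operator names and the condition object's shape
// are invented; see the actual proposal for the real syntax.
var operators = {
    ">=": function (actual, expected) { return actual >= expected; },
    "<=": function (actual, expected) { return actual <= expected; },
    // An "and" over already-evaluated sub-results.
    "and": function () {
        return Array.prototype.every.call(arguments, Boolean);
    }
};

function evaluate(condition, context) {
    // e.g. condition = { type: ">=", term: "ambientLight", value: 300 }
    var impl = operators[condition.type];
    return impl(context[condition.term], condition.value);
}
```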


Discussion and ideas:

  • Why not RDF? JSON is lowest common denominator; can be easily bound to operators etc.
  • Registry-based operators; syntax could probably easily be translated into boolean expressions. Closer to abstract syntax tree that a parser would create.
  • Simply using JavaScript syntax would create a security vector.
  • Is "operator" the best term? Possibly create an RDF structure than can be transformed to the proposed syntax, though not make RDF make a requirement.
  • Cf UML's graphical notation & code in different ways.
  • Will need to restrict the set of operators, since each new operator would make the system more complex.
  • Will need complex conditions that take all aspects of the environment into account.
  • Syntax for describing the conditions could be used for describing the context.
  • Possibly have "condition array" instead of "and" operator.
  • Create a small group to combine the two approaches into one and find out how to represent this in text: Colin Clark, Andy Stiegler, Kostas Votis, Nickos Kaklanis, Andres Iglesias, Vassilis Koutkias, Claudia Loitsch; possibly also Antranig Basman.
  • See new operator column in the table at REGISTRY.

1:00pm Lunch break


2:00pm Integration of the rule-based matchmaker into the GPII architecture (Claudia + Kostas) (1h)

Demo by CERTH/ITI: Application uses Jena to query the ontology (small rule engine in Java). SPARQL could also be used.

Rules in XLS (see demo from yesterday).

Using GWT.

Matchmaker can be a piece of JavaScript, or it can be a server that exposes a RESTful API (URLs for POST requests). The Flow Manager has a configuration file - for features that matchmakers have in common. See https://github.com/GPII/universal/tree/master/gpii/node_modules/flowManager/configs
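A stateless matchmaker can be reduced to a pure function of its request payload, which makes it easy to put behind a POST endpoint (a sketch; the payload shape is assumed, and the "strategy" is deliberately naive):

```javascript
// Hypothetical stateless matchmaker core: everything it needs arrives
// in the request body, nothing is kept between calls. The payload
// shape (preferences + device report) is an assumption.
function match(payload) {
    var installed = (payload.device && payload.device.installedSolutions) || [];
    // Naive strategy: pair the user's preferences with each locally
    // available solution; a real matchmaker would filter and transform.
    return installed.map(function (solution) {
        return { solution: solution, settings: payload.preferences };
    });
}
```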

Discuss on the Architecture mailing list.

Discussion:

  • OpenRules or another rules system? (A possible conflict was mentioned regarding OpenRules' GPL licence.) OpenRules comes from the business world. What level of complexity does it support? Examples of complex rules?

3:00pm Presentation and discussion of context-related rules (WP103, CERTH & TECH) (1h)

Context: definitions by Dey, by Andreas Zimmermann, and by Biegel.

Zimmermann's figure vs context in Cloud4all: Zimmermann's has a fairly small "centre" and big "petals", while Cloud4all has a big centre and small petals.

Steps to get to a full context: iterative & incremental. Steps:

  1. Sensors attached to a mobile device (noise, luminosity, ...)
  2. User as a sensor, for example, things that can be inferred from activities (Whatsup, Google Talk, ...), but also ask things directly.
  3. Incorporate sensors that are separate from the device. (Wireless Sensor Network; ...)
  4. Full inference of user state. (Can be studied in Cloud4all, but not implemented.)

Discussion and comments:

  • Sensor fusion is a hot topic in robotics.
  • Domotics also relevant.
  • Need to reach out to other projects that work in this area, for example UniversAAL in the area of smart houses. This may enable us to have a higher impact.


"Manually specified context features": probably refers to user explicitly specifying aspects of his context.

Discussion:

  • Context & conditions: conditions are the "context" part of the preferences.
  • Triggers to start evaluation of context: we need triggering (cf later).

Context Server: kind of aggregates all the potential changes of context. Thresholds to determine whether a change is really needed or whether the system should continue with the current settings (e.g. x% difference with previous level).
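The thresholding idea could be sketched as follows (the percentage-based test is one possible interpretation of "x% difference with previous level"):

```javascript
// Hypothetical sketch: only report a context change when the new sensor
// reading differs from the previous one by more than a given percentage,
// so the system is not reconfigured for every minor fluctuation.
function significantChange(previous, current, percent) {
    if (previous === 0) {
        return current !== 0; // any change from zero counts
    }
    var diff = Math.abs(current - previous) / Math.abs(previous) * 100;
    return diff > percent;
}
```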

User's permission would be required for using the microphone, webcam etc as sensors. So context-based adaptations are optional, i.e. only when the user wants them.

Use PhoneGap? (Support for sensors?)

Use cases will be adapted.

Rules: number of situations you can have is number of sensors^2.

Discussion:

  • The rules for categorisation look very much like matchmaker rules. Work on a community-driven platform for rules? Will need a rules editor. (Choose other rules engine first? See licence issue discussed on first day. Note: some rules engines are forward chaining, e.g. Drools, others are backward chaining, e.g. Jess.) (See e.g. forward vs backward chaining.)
  • The rules should eventually be in the Matchmaker.
  • (...)
  • N&P ontology can serve several roles, including groupings for the floating control panel and support for checking for related and/or overlapping new terms for the Registry.

Discussing Ontology & Matchmaker integration. Demo from Certh

  • Goal of an ontology is not to deal with specific instances but with more abstract situations
  • Possible intersection of rule-based and statistical matchmaking (use statistical results as a further input variable)
  • Usage of Screen Readers -> What are the rules?
    • Information that exists in the transformer (value range mapping)
    • Rules use the info of the ontology, they don't query it
  • Rules to change the ontology? -> forward chaining?
  • Prolog might be a good choice to write the ontologies in
  • Rules could be inferred from results from the statistical analysis
  • LISP! (nice to do statistical inference. and it was NOT mentioned by andy. Thanks Andres!)
  • Given the statistical matchmaker is not writing rules, who is?
    • The user?
      • What would a user write?
      • Dynamic preference
      • How would the rules work with the preference set parsing algorithm?
      • It won't be needed: if there is a perfect match, there is no matchmaker involved
      • Rules could resolve ambiguity by ignoring at least parts of the conditions (context) of preferences
      • A "normal" user will just edit conditions, not rules
    • Someone who builds (or administers) the matchmaker or an Expert
      • Expert knowledge is probably required to write generic rules
    • Vendor
      • Vendors won't write rules
    • Statistical Matchmaker
      • Could be an output from the matchmaker
    • Communities
      • Communities may maintain the ontological databases
      • Create rules from ontologies
  • How will rule maintenance work after the release?
    • Not yet considered


  • We should get something running in 3 years in libraries! :)

4:00pm Report on A204.5: Automatic and seamless synthesis of complex accessibility services and applications (Kostas) (1h)

  • The tool aims to solve problems of the following kind: if it is not possible to fulfil the user's (inferred) preferences with the available solutions (example: a user prefers a font size of 18 pt, but the matchmaking process cannot customise the font size setting of the available solution), the Service Synthesizer Tool could propose an alternative solution and/or a set of combined solutions (e.g. a magnifier application running locally, on the cloud, or on the network) to be invoked in order to at least partially fulfil the user's needs and preferences.
  • What's the difference between a solution and a service?
    • solutions can be services
    • services are web-based and/or simply invoked services
  • A service composition mechanism will be provided; the tool has a strong connection with the Lifecycle Manager.
    • combine services if no sufficient single service/solution can be found
  • Will invoke and combine services, not decide between them (that's the matchmaker)
    • Service composition
    • Architecture hint: Synthesis of life-cycle manager payloads

5:00pm Integration of the statistical matchmaker into the GPII architecture (Andy) (1h)

(...) See also presentation made during Thessaloniki meeting: what has changed since then?

  • Terms: profile -> preference set
  • slide "iPhone to laptop": green items on the left are conditions. Then you analyse "similar" preference sets. When similar sets are found (grey preference sets) and that contain preferences that are similar to what we need for the new context -> infer preferences for the new context. The result will not be perfect but similar. With a low number of similar preference sets, the result will be less reliable. Once the user has confirmed the proposed preference, it will be saved in the preference server. There are several algorithms to determine "similarity". The Transformer may be useful for "normalisation". Clustering (in the data analysis phase) may make the runtime calculations faster. Solution Ontology can make it easier to determine that one device/application is similar to another.
  • slide "Clustering": clustering is expensive, so happens "off-line". E.g. k-means (needs to be told how many clusters to "find"). (...)
  • slide "Options":
  • slide "Hybrid Approach - Rule-Based Fallback"
  • slide "Hybrid Approach - Statistical Fallback" - other idea: statistical matchmaker provides values for the prefs returned by the rule-based matchmaker.
  • slide "Hybrid Approach - Statistical Seed"

Slide Resources: [1] [2]

Discussion:

  • Both types of matchmakers can have fallbacks without referring to the other type of matchmaker.
  • Integration of the matchmaker with the semantic framework ontology will enhance the matchmaking process.

Hackathon (Thu-Fri)

A hackathon focused on matchmaking.

Location: Usability Lab of HdM Adaptive User Interface Research Group, Nobelstraße 5, Stuttgart (Vaihingen).

Attendees

Both days: Boyan Sheytanov (ASTEA), Evgeni Tsakov (ASTEA), Andreas Stiegler (HdM), Christophe Strobbe (HdM), Kasper Markus (RtF-I).

Only Thursday: Kostas Votis (CERTH-ITI), Nick Kaklanis (CERTH-ITI), Colin Clark (IDRC), Claudia Loitsch (TUD).

Agenda

  1. Overview of real-time framework
  2. Demo of Cloud4all in action
  3. Integration of Matchmakers
  4. The rule-based matchmaking process
  5. Lifecycle Manager and the service synthesizer tool
  6. GitHub Tutorial
  7. Transformer
  8. Overlaps and next steps

Overview of Real-Time Framework

Most of the code in GPII's GitHub repository is cross-platform (see "universal"); some parts use native code. For example:

  • build.cmd for the Windows build
  • The USB Listener for Windows is linked to the MinGW library in order to reduce dependence on Visual Studio.
  • The Windows Registry Handler is linked to libffi (see also node-ffi).

After getting the code on your local system:

  • Run build.cmd to build the native code for Windows.
  • After running the build script, you can run the CMD that opens the Node.js server in a separate CLI window and start the USB listener.

What happens when you plug in a USB stick in a Windows machine:

  1. The USB stick contains a file called "gpii.usertoken" (for example, for sammy, who has a preference set in the cloud). The USB Listener finds the token and makes a RESTful call (with the user's token) to the Flow Manager. (The Flow Manager runs on the local machine.)
  2. The Flow Manager gets the preference set from the Preference Server. (Note: Later, we will also need to support a scenario with a preference set that is stored on a USB stick, e.g. because the user cannot or does not want to store his preference set in the Cloud.)
  3. The Flow Manager then gets information about the OS and the locally installed solutions from the Device Reporter. An example payload from the Device Reporter is linux_installedSolutions.json. This contains information about the OS and version; the solutions are listed by means of IDs. (Note: The Solutions Ontology would need to use the same IDs and would contain info of each of the solutions. The IDs would need to be provided by the vendors.)
  4. The preference set and the device info are sent as payloads to the Matchmaker. Note: Sammy's preference set uses only common terms (and the "old" preferences/ISO 24751 format), so the Transformer needs to be called. (Note: The question of which component should call the Transformer, and when, was also part of later discussions on Matchmaker internals.)
    1. The Matchmaker can query the Solutions Registry for solutions that are appropriate for the environment. An example of a solutions payload is win32.json: "name" is a human-readable name; "id" is a unique ID for the product; "version" is a version number using Semantic Versioning.
    2. The Matchmaker calls the Matchmaker Strategy component. (See also Matchmaker Manager in the other whiteboard sketch.)
  5. The LifeCycle Manager, located on the user's device, receives the info on appropriate solutions and settings. It takes a snapshot of the system's current settings, so they can be restored after the user logs out. The LifeCycle Manager calls the appropriate Settings Handler for each solution.
  6. The Settings Handler - input is a JSON file. Values like hKey would need to be provided by vendors, through the Solutions Ontology. The rough idea of "capabilitiesTransformations" (see win32.json) is to provide a quick list of categories that the solution meets.
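The login flow described in the steps above can be sketched end to end. Everything here is a stand-in: in the real framework, the Flow Manager, Preference Server, Device Reporter, Matchmaker, and Lifecycle Manager are separate components reached via REST calls, not local functions, and all data values below are invented for illustration.

```javascript
// Self-contained sketch of the USB-login flow (steps 1-6 above).
// Components and data are hypothetical stand-ins for the real services.

const preferenceServer = {
  // Step 2: preference sets keyed by user token.
  sammy: { fontSize: 24, highContrast: true }
};

function deviceReporter() {
  // Step 3: OS info plus locally installed solutions, listed by ID.
  return {
    os: { id: "win32", version: "6.1" },
    solutions: [{ id: "org.example.magnifier" }]
  };
}

function matchmaker(preferences, device) {
  // Step 4: pair each appropriate solution with the settings to apply.
  return device.solutions.map(solution => ({
    id: solution.id,
    settings: preferences
  }));
}

function lifecycleManager(matches) {
  // Step 5: snapshot the current settings (so they can be restored at
  // logout), then hand each match to its Settings Handler (step 6).
  const snapshot = matches.map(m => ({ id: m.id, settings: {} }));
  return { snapshot, applied: matches };
}

function flowManager(token) {
  // Step 1: the USB Listener has found gpii.usertoken and calls us
  // with the user's token.
  const preferences = preferenceServer[token];
  const device = deviceReporter();
  const matches = matchmaker(preferences, device);
  return lifecycleManager(matches);
}
```

For example, `flowManager("sammy")` would return the snapshot plus one applied match carrying Sammy's preference set for the installed magnifier.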

Additional notes:

  • The Solutions Ontology would be the source for the Solutions Registry. The Solutions Ontology would contain information that is up to date. The Solutions Registry would be static and optimised for fast response to a huge number of queries. It can be thought of as the memcached for the Semantic Framework.
  • Settings: gsettings, XML-based settings, properties files, ...: these should cover a wide range of applications. JAWS uses different types of files spread over different locations in the operating system. The integration of such cases should be covered in SP3.
  • JAWS users sometimes use JAWS scripts (written by themselves or other people in the user community). It would be nice if users had access to their JAWS scripts in addition to their settings. Could such scripts be part of the preferences, e.g. by creating a Registry term for this setting and providing the JAWS script itself as a value? (See discussion thread from November 2012.)
  • How do the Solutions Registry / Solutions Ontology relate to the GPII Marketplace/Unified Listing? The former provides data used for adaptation, matching, etc., while the latter needs a UI that enables users to find appropriate AT. The Solutions Registry contains some data that overlap with the Unified Listing, while the Unified Listing contains data that are irrelevant to the Solutions Registry (e.g. anything related to settings). However, it would be nice if information added by a vendor to the Unified Listing (or one of its sources, e.g. EASTIN, ETNA) also became available in the Solutions Ontology, so these data don't need to be entered twice. The vendor could then also move to the alignment tool to add the settings of its solution. It would likewise be nice if data entered by a vendor into the Solutions Ontology that are relevant to the Unified Listing were automatically available in the latter data source.
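The first note above describes the Solutions Registry as a static, query-optimised snapshot of the Solutions Ontology. A minimal sketch of a registry query, using the entry fields (name, id, version) from the win32.json payload described earlier, might look as follows; the registry contents, the `platforms` field, and the query function are invented for illustration.

```javascript
// Hypothetical static Solutions Registry, queried by the Matchmaker
// (step 4.1 above) for solutions appropriate to the reported environment.
// Entries and the platforms field are illustrative, not real registry data.

const solutionsRegistry = [
  { name: "Example Magnifier", id: "org.example.magnifier",
    version: "1.2.0", platforms: ["win32"] },
  { name: "Example Reader", id: "org.example.reader",
    version: "0.9.1", platforms: ["linux"] }
];

function querySolutions(registry, platform) {
  // Return only the solutions available on the reported platform.
  return registry.filter(s => s.platforms.includes(platform));
}
```

The "memcached" analogy fits this shape: the registry is a flat, denormalised list that answers many cheap queries, regenerated from the authoritative Solutions Ontology rather than edited directly.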


MatchmakerHackathon Stuttgart GPIIOnWindows 2012-10-11.png

(This discussion also covered other topics, including the Lifecycle Manager and the service synthesizer tool (A204.5).)

Discussion of Matchmaking Process

Matchmaking process as discussed on 11 October 2012.

MatchmakerHackathon Stuttgart MMProcess 2012-10-11 small.png

Presentations-Demo Videos

Demo videos and presentations for the Ontology, the alignment tool, and the integration of the rule-based matchmaker into the GPII framework:

CLOUD4ALL_CERTH_ITI_VideoPresentations [3]

CERTH/ITI ppt presentations:

http://wiki.gpii.net/images/f/f7/CERTH_ITI_presentation.ppt

http://wiki.gpii.net/images/9/98/Service_composition_presentation.ppt

Future Events

  • Set up a new event similar to the one in Linz to work through the code and integrate stuff
  • Perhaps January-February as soon as the review is settled
  • Perhaps do it at the same location where we also do the matchmaker tests

Accommodation

For hotels elsewhere in Stuttgart, check if they are near the S-Bahn lines S1, S2 or S3.

See Also