This page contains some of the work of the Needs and Preferences Working Group.
Conflicts from multiple devices
One source of conflicts originates from abstraction: once we want to infer global settings from device-specific settings, we might encounter conflicts. Consider Ashley, who has several devices: an office PC with Windows XP, a private PC with Windows 7, an Android smartphone, a TV and a PlayStation. She increased the font size to "extra large" on her office PC, private PC and smartphone, from which we inferred that she wants "extra large" font size on all devices. Here is an example of how this could look in a profile, given that we use a monolithic single profile to store all settings (in contrast to user- and device-specific profiles, which lead to a different structure, but the problem remains the same).
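A sketch of what such a monolithic profile might contain. The key URIs below are illustrative, not actual GPII terms; the first entry is the general "fontsize" inferred from the device-specific settings:

```json
{
  "http://gpii.org/ns/up/fontsize": "extra-large",
  "http://gpii.org/ns/up/office-pc/fontsize": "extra-large",
  "http://gpii.org/ns/up/private-pc/fontsize": "extra-large",
  "http://gpii.org/ns/up/smartphone/fontsize": "extra-large",
  "http://gpii.org/ns/up/tv/fontsize": "small"
}
```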
Once we have inferred the most general "fontsize" and Ashley returns home and switches on her TV, we have a conflict. There is both a general and a device-specific font setting, which are not equal.
- Most specific setting wins: Similar to CSS. In this case, the TV would get small font size. This works well for accommodating device-specific user needs (perhaps because the device is shared with other people), but limits the possibility of inferring global settings automatically.
- Most general setting wins: In this case, the very general and abstract "fontsize" wins and the TV gets extra-large font size. This allows efficient automatic adjustment and inference, yet manually entered user settings might be overridden.
- Probability based inference: Keys are enriched with an additional field to document how sure an inference algorithm is that this setting is true. In this case, the algorithm could say "it is true for 4 of 5 known devices" leading to a probability of 0.8. A setting a user entered manually always gets a probability of 1. In this case, the TV would get small font size (1 > 0.8). This probably accounts best for the way rule-based and stochastic algorithms work, as it includes their "belief" of a setting. (See also discussion point 37 on probability.)
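The probability-based strategy could be sketched as follows. This is a minimal illustration, not part of any GPII API; the field names and the specificity tie-breaker are assumptions:

```python
# Hypothetical sketch of probability-based conflict resolution.
# Each candidate setting carries the confidence of whatever produced it:
# manual user input gets 1.0, inferred settings get the algorithm's belief.

def resolve(candidates):
    """Pick the setting with the highest probability; ties favour the more specific one."""
    return max(candidates, key=lambda c: (c["probability"], c["specificity"]))

candidates = [
    # General "fontsize", inferred from 4 of 5 known devices.
    {"key": "fontsize", "value": "extra-large", "probability": 0.8, "specificity": 0},
    # Device-specific setting the user entered manually on the TV.
    {"key": "tv/fontsize", "value": "small", "probability": 1.0, "specificity": 1},
]

print(resolve(candidates)["value"])  # -> small (1.0 > 0.8)
```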
Conflicts from context based conditions
Once context (the environment, for example the time of day) comes into play, we are confronted with a whole new dimension of conflicts. Let's say Ashley has configured her profile to increase screen brightness to 130% after 5 pm to help her work at the office. Yet, once she is at home, she wants default brightness so as not to ruin the thriller she wants to watch. Consider the following profile.
|http://gpii.org/ns/up/brightness||130%||value("http://gpii.org/ns/time/local-time") >= 17:00|
|http://gpii.org/ns/up/brightness||100%||value("http://gpii.org/ns/location") == "home"|
We now have two conditional settings for brightness and, if she wants to watch a thriller at home at 8 pm, both conditions are met at once, so the settings conflict.
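The conflict can be made concrete with a small sketch. The context keys follow the URIs above, but the predicate representation is a simplification invented here for illustration:

```python
# Hypothetical sketch: evaluating context conditions for the brightness settings.
# Real conditions would be stored expressions; plain lambdas stand in for them here.

context = {
    "http://gpii.org/ns/location": "home",
    "http://gpii.org/ns/time/local-time": 20,  # 8 pm, expressed in hours
}

settings = [
    {"value": "130%", "condition": lambda ctx: ctx["http://gpii.org/ns/time/local-time"] >= 17},
    {"value": "100%", "condition": lambda ctx: ctx["http://gpii.org/ns/location"] == "home"},
]

applicable = [s["value"] for s in settings if s["condition"](context)]
print(applicable)  # both conditions hold -> ['130%', '100%']: a conflict
```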
- No conflicting conditions are allowed: this would require rewriting all conditions to make them mutually exclusive. While this is possible (in most cases even automatically), it makes them completely unreadable by humans, and editing them once they are set may also become difficult.
- Condition based priorities: Conditions could be prioritized by the values they query in their checks. http://gpii.org/ns/time, for example, could always win over http://gpii.org/ns/position. Yet, there could be conditions containing both, or there could be situations where this preset order is not what the user desires.
- Condition order: Conditions could be extended by an order field, allowing users to define an order of conditions to resolve conflicts. In this scenario, the "brightness at home" condition could get the highest order, so it is checked and applied first.
- Key specific: Conflicts could also be resolved specific to the semantics of the actual preference. For screen brightness, for example, it might be a good idea to multiply the values of all preferences which apply based on their condition. In other cases multiplication might not be meaningful, or there might not even be a single meaningful method to combine values.
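The key-specific strategy for brightness could be sketched like this. The helper name and the percentage representation are assumptions made for illustration:

```python
# Hypothetical sketch: key-specific combination of conflicting brightness values.
# All applicable percentage values are multiplied; other preference keys would
# need their own combination rule, and some may have none.

def combine_brightness(values):
    """Multiply percentage strings, e.g. "130%" and "80%" -> "104%"."""
    result = 1.0
    for v in values:
        result *= float(v.rstrip("%")) / 100
    return f"{round(result * 100)}%"

print(combine_brightness(["130%", "100%"]))  # -> 130%
```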