Recommender systems and computational social choice are two fields that assist users in reaching a joint decision. Since the decision is based on the individual users' preferences, managing these preferences is essential in both fields. However, each field treats the management of preferences differently.
In both domains, when the users' preferences are incomplete, the missing preferences must be either elicited or estimated. Computational social choice does not accommodate uncertainty in preferences. It is usually implicitly assumed that all the individual voter preferences are available or can be elicited, and the main focus is on the voting rule used for combining the preferences in order to reach a winning candidate. When not enough information is available to reach a verdict (i.e., a winning item according to some voting protocol), preference elicitation is essential. In recommender systems, by contrast, preference elicitation is viewed as a means of improving prediction accuracy. It is assumed that some preferences might not be available, so the focus is on predicting a winning candidate using the preferences that are available and those that can be elicited.
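As a toy illustration of the recommender-system perspective (not part of the framework itself), the sketch below estimates the missing ratings from the available ones and predicts a winning item. The item-mean estimator and the example data are assumptions made purely for illustration.

```python
# Toy sketch: predicting a winner from incomplete ratings (recommender view).
# The item-mean estimator and the example data are illustrative assumptions.

ratings = {                      # user -> {item: rating}, missing items omitted
    "u1": {"a": 5, "b": 2},
    "u2": {"a": 4, "c": 3},
    "u3": {"b": 1, "c": 4},
}
items = {"a", "b", "c"}

def item_mean(item):
    """Mean of the observed ratings for an item (a simple estimator)."""
    observed = [r[item] for r in ratings.values() if item in r]
    return sum(observed) / len(observed)

def predicted_winner():
    """Fill each missing rating with the item mean, then pick the item
    whose observed and estimated ratings sum to the highest total."""
    totals = {}
    for item in items:
        est = item_mean(item)
        totals[item] = sum(r.get(item, est) for r in ratings.values())
    return max(totals, key=totals.get)

print(predicted_winner())        # "a" for this toy data
```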
After the preferences are elicited or estimated, some combination scheme must be applied. Here again, the two fields differ: recommender systems employ a wide variety of aggregation strategies, while social choice relies on voting rules. We reflect on the difference between these two concepts.
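To make the distinction concrete, the following sketch applies a rating-based aggregation strategy (average rating, common in group recommenders) and a voting rule (Borda count) to the same complete preference profile. The example profile and the choice of these two particular schemes are illustrative assumptions.

```python
# Toy sketch: an aggregation strategy (average rating) versus a voting rule
# (Borda count) applied to the same fully specified preferences.
# The example profile is an illustrative assumption.

ratings = {                       # user -> {item: rating on a 1-5 scale}
    "u1": {"a": 5, "b": 3, "c": 4},
    "u2": {"a": 5, "b": 3, "c": 4},
    "u3": {"a": 1, "b": 5, "c": 4},
}
items = ["a", "b", "c"]

def average_rating_winner():
    """Aggregation strategy: pick the item with the highest mean rating."""
    return max(items, key=lambda i: sum(r[i] for r in ratings.values()))

def borda_winner():
    """Voting rule: each user's ranking (derived from her ratings) awards
    len(items)-1 points to her top item, down to 0 for her last item."""
    scores = {i: 0 for i in items}
    for r in ratings.values():
        ranking = sorted(items, key=lambda i: r[i], reverse=True)
        for position, item in enumerate(ranking):
            scores[item] += len(items) - 1 - position
    return max(scores, key=scores.get)

print(average_rating_winner(), borda_winner())   # "c" versus "a"
```

On this profile the two schemes disagree: the average-rating strategy selects item c, while the Borda count selects item a. This illustrates that the choice of combination scheme can by itself change the joint decision.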
In this paper, we present a preference management framework that draws on both fields. The framework is intended for use when designing a decision support system, and it is constructed from two independent components: a preference elicitation component and a preference aggregation component.
The framework operates under three assumptions. First, we assume that users' preferences are unknown in advance but can be acquired during the process (i.e., a user who is asked about her preference on an item responds to the request). Note that the user is not required to decide on all of her preferences beforehand. Second, we assume that a user submits her true preferences; therefore, in this research we do not consider manipulation. Finally, we assume that the cost of asking a user for her preferences is equal for all users and for all items.
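Under these assumptions, elicitation can be sketched as an incremental query loop: since every query has the same cost, the framework may pick any (user, item) pair whose rating is still unknown, ask for it, trust the answer, and stop once the aggregation scheme can name a winner. The random query policy and the naive stopping test below are illustrative placeholders, not the policies the framework itself prescribes.

```python
# Minimal sketch of an elicitation loop under the three assumptions:
# preferences are acquired on demand, answers are truthful, and every
# (user, item) query has the same unit cost. The random query order and
# the "all ratings known" stopping test are illustrative placeholders.
import random

users = ["u1", "u2", "u3"]
items = ["a", "b", "c"]

true_ratings = {                 # hidden ground truth; revealed only on request
    "u1": {"a": 5, "b": 3, "c": 4},
    "u2": {"a": 5, "b": 3, "c": 4},
    "u3": {"a": 1, "b": 5, "c": 4},
}

known = {u: {} for u in users}   # ratings elicited so far

def ask(user, item):
    """Query one user about one item (assumption: she answers truthfully)."""
    known[user][item] = true_ratings[user][item]
    return 1                     # assumption: uniform cost per query

def winner_if_determined():
    """Naive stopping test: wait until every rating is known, then return
    the item with the highest total rating. A real system would stop as
    soon as the winner is already implied by the partial information."""
    if all(len(known[u]) == len(items) for u in users):
        return max(items, key=lambda i: sum(known[u][i] for u in users))
    return None

cost = 0
pending = [(u, i) for u in users for i in items]
random.shuffle(pending)          # naive policy: query in random order
while (w := winner_if_determined()) is None:
    user, item = pending.pop()
    cost += ask(user, item)
print(f"winner={w}, queries={cost}")
```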