Abstract: |
The old challenge of Operational Research was how to make better decisions based on optimization techniques. In recent years, the abundance of data about human choices has shifted the paradigm of Operational Research from ‘optimization’ to ‘analytics’. Furthermore, users of operational research have become increasingly aware that realistic decision support requires considering multiple conflicting criteria. ‘Optimum’ has thus been replaced by ‘best compromise’ determined by the preferences of Decision Makers (DMs). With the development of personalized computing, the concept of preference has also become relevant for Machine Learning and Artificial Intelligence, which analyze vast amounts of user data to make predictions and recommendations. Preferences provide a means for specifying desires in a declarative and intelligible way, a key element for the effective representation of knowledge and for reasoning that respects the value systems of DMs.
We present a constructive preference learning methodology, called robust ordinal regression (ROR), for multiple criteria decision aiding. This methodology links Operational Research with Artificial Intelligence and, as such, reflects the current trend of growing interplay between these disciplines.
In order to provide a ‘best compromise’ solution to a multiple criteria decision problem (ordinal classification, ranking, or choice – with multiobjective optimization being a particular case), decision aiding methods require some preference information exhibiting the value system of a single DM or of multiple DMs. In ROR, the preference information has the form of decision examples. They may either be provided by the DM on a set of real or hypothetical alternatives, or may come from observation of the DM’s past decisions. This information is used to build a preference model, which is then applied to the non-dominated set of alternatives to arrive at a recommendation presented to the DM(s). In practical decision aiding, the process composed of preference elicitation, preference modeling, and the DM’s analysis of a recommendation loops until the DM (or a group of DMs) accepts the recommendation or decides to change the problem setting. Such an interactive process is called constructive preference learning. We describe this process for three types of preference models: (i) utility functions, (ii) outranking relations, and (iii) sets of monotonic decision rules. We also discuss the case of a hierarchically structured set of criteria, and illustrate the transparency and explainability features required from preference learning on the example of interactive multiobjective optimization.
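As a minimal illustration of the idea of learning a preference model from decision examples (all alternative names, criterion scores, and the weighted-sum model below are hypothetical simplifications, not the ROR method itself):

```python
# Illustrative sketch only: fit a weighted-sum value function to decision
# examples (pairwise comparisons) by a brute-force search for the criterion
# weight that reproduces the DM's preferences with maximal margin.
# All alternatives, scores, and preferences here are hypothetical, and the
# weighted sum is a deliberate simplification of the models used in ROR.

alternatives = {          # two criteria per alternative, both to be maximized
    "A": (0.9, 0.3),
    "B": (0.4, 0.8),
    "C": (0.7, 0.7),
}
# Decision examples provided by the DM: C is preferred to A, and C to B.
preferences = [("C", "A"), ("C", "B")]

def value(alt, w):
    """Additive value with weight w on the first criterion, 1 - w on the second."""
    g1, g2 = alternatives[alt]
    return w * g1 + (1 - w) * g2

def best_weight(steps=100):
    """Scan w over a grid on [0, 1]; keep the weight maximizing the smallest
    value difference over all pairwise decision examples."""
    best_w, best_margin = 0.0, float("-inf")
    for i in range(steps + 1):
        w = i / steps
        margin = min(value(a, w) - value(b, w) for a, b in preferences)
        if margin > best_margin:
            best_w, best_margin = w, margin
    return best_w, best_margin

w, margin = best_weight()
ranking = sorted(alternatives, key=lambda a: value(a, w), reverse=True)
# With these data, the compromise weight w = 0.5 reproduces both decision
# examples, and C tops the resulting ranking.
```

In ROR proper, instead of committing to a single ‘best’ compatible model, one reasons over the whole set of value functions compatible with the decision examples, yielding necessary and possible preference relations.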
|
Biography:
|
Roman Słowiński is a Professor and Founding Chair of the Laboratory of Intelligent Decision Support Systems at Poznań University of Technology, and a Professor in the Systems Research Institute of the Polish Academy of Sciences. A full member of the Polish Academy of Sciences, he served as its Vice-President in 2019–2022. In his research, he combines Operational Research and Artificial Intelligence for Decision Aiding. He is a recipient of the EURO Gold Medal from the European Association of Operational Research Societies and an Officer of the Academic Palms of France. He has been awarded the title Doctor Honoris Causa by six universities worldwide, and is a laureate of the 2005 Prize of the Foundation for Polish Science and of the 2023 Humboldt Research Award (Germany). He is also a member of Academia Europaea and a Fellow of IEEE, INFORMS, IFIP, IFORS, AAIA, IRSS, IAITQM, and AIIA. Since 1999, he has been Coordinating Editor-in-Chief of the European Journal of Operational Research (Elsevier).
Google Scholar: https://scholar.google.com/citations?hl=en&user=yCX-JrQAAAAJ
Personal www site: https://fcds.cs.put.poznan.pl/IDSS/rslowinski/cv_en.htm
|