

The 12th International Conference on
Modeling Decisions for Artificial Intelligence
Skövde, Sweden, September 21 - 23, 2015
http://www.mdai.cat/mdai2015
Submission deadline:
DEADLINE EXTENDED: April 10th, 2015

INVITED TALKS

Plenary talks will be given by Profs. Thierry Denoeux, Eyke Hüllermeier, Bradley A Malin, Weiru Liu, Simone Fischer-Huebner, and Devdatt Dubhashi.

ABSTRACTS


Prof. Thierry Denoeux
Université de Technologie de Compiègne (France)
Statistical forecasting using belief functions
Talk slides downloadable from here

Abstract: Forecasting may be defined as the task of making statements about events that have not yet been observed. When the events can be considered as generated by some random process, we need a statistical model that has to be fitted to the data and used to predict the occurrence of future events. In this process, the quantification of uncertainty is of the utmost importance. In particular, to be useful for making decisions, forecasts need to be accompanied by some confidence measure. In this talk, the limits of classical approaches to the quantification of uncertainty in statistical forecasting are discussed, and we advocate a new approach based on the Dempster-Shafer theory of belief functions. A belief function can be viewed both as a non-additive measure and as a random set; Dempster-Shafer reasoning thus extends both Bayesian reasoning and set-membership computation. More specifically, the method presented in this talk consists in modeling estimation uncertainty using a consonant belief function constructed from the likelihood, and combining it with random uncertainty represented by an error probability distribution. A predictive belief function can be approximated to any desired accuracy using a combination of Monte Carlo simulation and interval computation. When prior probabilistic information is available, the method boils down to Bayesian prediction. It is, however, more widely applicable, as it does not require such precise prior information. One of the advantages of the proposed methodology is that it allows us to combine statistical data and expert judgements within a single general framework. We illustrate this process using several examples, including prediction using a regression model with partially known cofactors, and the combination of expert opinions and statistical data for extreme sea level prediction, taking into account climate change.
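As a minimal illustration of the likelihood-based construction mentioned in the abstract (a sketch, not Denoeux's actual implementation), the contour function of the consonant belief function induced by a normal likelihood is just the relative likelihood; the data and sigma below are hypothetical:

```python
import math

def contour(theta, data, sigma=1.0):
    """Relative likelihood pl(theta) = L(theta) / L(theta_hat) for a normal
    mean with known sigma: the contour function of the consonant belief
    function induced by the likelihood (plausibility is 1 at the MLE)."""
    n = len(data)
    mle = sum(data) / n
    # log-likelihood difference; only the quadratic terms survive
    loglik_diff = -sum((x - theta) ** 2 - (x - mle) ** 2 for x in data) / (2 * sigma ** 2)
    return math.exp(loglik_diff)

data = [1.2, 0.8, 1.1, 0.9]  # hypothetical observations; MLE is 1.0
print(round(contour(1.0, data), 3))  # → 1.0
```

Plausibility decays as theta moves away from the maximum-likelihood estimate, which is what makes the belief function consonant (nested focal sets given by likelihood level cuts).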


Prof. Eyke Hüllermeier
University of Paderborn (Germany)
Preference Learning: Machine Learning meets Preference Modeling
Talk slides downloadable from here

Abstract: The topic of "preferences" has recently attracted considerable attention in artificial intelligence in general and machine learning in particular, where preference learning has emerged as a new, interdisciplinary research field with close connections to related areas such as operations research, social choice and decision theory. Roughly speaking, preference learning is about methods for learning preference models from explicit or implicit preference information, which are typically used for predicting the preferences of an individual or a group of individuals. Approaches relevant to this area range from learning special types of preference models, such as lexicographic orders, through "learning to rank" for information retrieval, to collaborative filtering techniques for recommender systems. The goal of this talk is to provide a brief introduction to the field of preference learning and, moreover, to elaborate on its connection to preference modeling. As a concrete illustration of this connection, recent work on preference learning based on the Choquet integral will be discussed in more detail.
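To make the Choquet-integral connection concrete, the discrete Choquet integral with respect to a capacity (fuzzy measure) can be computed in a few lines; the two-criteria capacity below is a made-up example for illustration, not one learned from preference data:

```python
def choquet(values, mu):
    """Discrete Choquet integral of a criteria vector w.r.t. a capacity mu,
    given as a dict mapping frozensets of criterion indices to [0, 1]."""
    idx = sorted(range(len(values)), key=lambda i: values[i])
    total, prev = 0.0, 0.0
    for k, i in enumerate(idx):
        coalition = frozenset(idx[k:])  # criteria scoring at least values[i]
        total += (values[i] - prev) * mu[coalition]
        prev = values[i]
    return total

# hypothetical capacity on two criteria: having both high is rewarded
# more than the sum of the parts (positive interaction)
mu = {frozenset(): 0.0, frozenset({0}): 0.3,
      frozenset({1}): 0.3, frozenset({0, 1}): 1.0}
print(choquet([0.6, 0.4], mu))  # → 0.46
```

When the capacity is additive, the integral reduces to a plain weighted mean; the non-additive case is what lets Choquet-based preference models capture interaction between criteria.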


Prof. Bradley A Malin
Vanderbilt University (USA)
Modeling the Complex Search Space of Data Privacy Problems
Talk slides downloadable from here

Abstract: Data privacy protection models often dictate worst-case adversarial assumptions, assuming that a recipient of data will always attack and exploit it. Yet adversaries are agents who must make a wide range of decisions before running an attack, and accounting for the outcomes of such decisions indicates that data sharing frameworks can, at times, be safer than expected under traditional beliefs. In this presentation, various models of data protection and attacks are reviewed, followed by illustrations of how decision making can be integrated as two-party games, leading to unexpected 'no attack' scenarios. The presentation concludes with a review of challenges and opportunities for the AI community in data privacy, including modeling the search over large data protection spaces for defenders and decision making under uncertainty for attackers.


Prof. Weiru Liu
Queen's University Belfast (UK)
Game-Theoretic Approaches to Decision Making in Cyber-Physical Systems Security

Abstract: Recent years have seen a significant increase in research activities in game-theoretic approaches to security, covering a variety of areas, such as cyber-security (e.g., network intrusion detection), information security (e.g., fraudulent transactions), physical security (the protection of citizens and critical infrastructure/assets, e.g., smart grids, airports). One of the main focuses in game-theoretic approaches to physical security is to strategically allocate security resources to protect assets. Most of the existing work so far tackles this issue with the Bayesian Stackelberg game framework (or security games), where Strong Stackelberg Equilibrium (SSE) is typically applied to determine an optimal mixed strategy for a defender. In this talk, we will first discuss the limitations of the SSE solution concept, especially in handling ambiguous and missing information, and then introduce alternative frameworks overcoming some of these limitations. Empirical evaluations and comparisons of several well-known decision rules for handling ambiguity in security games will be presented. In addition, challenges of utilizing real-time surveillance information from sensor networks for reasoning under uncertainty in game-theoretic approaches to cyber-physical systems security will be explored.
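As a toy illustration of the Strong Stackelberg Equilibrium concept discussed above (real security games are solved with LP or mixed-integer formulations, not grid search; the payoffs here are hypothetical), a two-target game with one defender resource can be sketched as:

```python
def sse_two_targets(U_d_cov, U_d_unc, U_a_cov, U_a_unc, grid=1000):
    """Grid search for a Strong Stackelberg Equilibrium coverage vector in
    a 2-target security game with one defender resource.
    U_*_cov[t] / U_*_unc[t]: payoff at target t when covered / uncovered."""
    best = None
    for k in range(grid + 1):
        c = [k / grid, 1 - k / grid]  # defender's mixed coverage strategy
        # attacker's expected utility at each target under coverage c
        ua = [c[t] * U_a_cov[t] + (1 - c[t]) * U_a_unc[t] for t in (0, 1)]
        # the attacker best-responds; SSE breaks ties in the defender's favour
        candidates = [t for t in (0, 1) if abs(ua[t] - max(ua)) < 1e-9]
        ud = max(c[t] * U_d_cov[t] + (1 - c[t]) * U_d_unc[t] for t in candidates)
        if best is None or ud > best[0]:
            best = (ud, c)
    return best

# symmetric example: with identical targets the optimum splits coverage evenly
ud, cov = sse_two_targets(U_d_cov=[1, 1], U_d_unc=[-1, -1],
                          U_a_cov=[-1, -1], U_a_unc=[1, 1])
print(ud, cov)
```

The tie-breaking rule in favour of the defender is exactly what makes the equilibrium "strong"; the limitations the talk discusses arise when the payoff entries themselves are ambiguous or missing.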


Prof. Simone Fischer-Huebner
Karlstad University (Sweden)
Privacy and Transparency for Decision Making for Social Collective Intelligence
Talk slides downloadable from here

Abstract: Social Collective Intelligence Systems, as currently developed by the Smart Society EU FP7 FET project, build on basic concepts such as profiling, provenance, evolution, reputation and incentives. This presentation discusses the impact of such concepts on the individual's right to privacy. It also shows that while Social Collective Intelligence Systems have some inherent characteristics that can be utilized to promote privacy, several technical challenges remain. The presentation argues that both privacy laws and privacy-enhancing technologies (PETs) are needed to effectively enforce privacy, and gives examples of PETs, in particular user-controlled transparency-enhancing tools for increasing the data subject's transparency and control.


Prof. Devdatt Dubhashi
Chalmers University of Technology (Sweden)
Classifying large graphs with differential privacy
Talk slides downloadable from here

Abstract: We consider classification of graphs using graph kernels under differential privacy. We develop differentially private mechanisms for two well-known graph kernels, the random walk kernel and the graphlet kernel. We use the Laplace mechanism with restricted sensitivity to release private versions of the feature vector representations of these kernels. Further, we develop a new sampling algorithm for approximate computation of the graphlet kernel on large graphs with guarantees on sample complexity, and show that the method improves both privacy and computation speed. We also observe that the number of samples needed to obtain good accuracy in practice is much lower than the bound. Finally, we perform an extensive empirical evaluation examining the tradeoff between privacy and accuracy and show that our private method is able to retain good accuracy in several classification tasks.
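The Laplace mechanism underlying this approach can be sketched as follows; this toy version adds noise to an arbitrary feature vector and does not implement the restricted-sensitivity analysis or the graph kernels from the talk:

```python
import math
import random

def laplace_mechanism(feature_vec, sensitivity, epsilon, rng=random):
    """Release an epsilon-differentially-private version of a feature vector
    by adding i.i.d. Laplace(sensitivity / epsilon) noise per coordinate
    (sensitivity = max L1 change in the vector from altering one record)."""
    scale = sensitivity / epsilon
    def lap():
        # the difference of two Exp(1) draws is Laplace(0, 1)-distributed
        return scale * (math.log(1 - rng.random()) - math.log(1 - rng.random()))
    return [x + lap() for x in feature_vec]

# smaller epsilon -> stronger privacy -> heavier noise on the released vector
rng = random.Random(0)
print(laplace_mechanism([1.0, 2.0, 3.0], sensitivity=1.0, epsilon=0.5, rng=rng))
```

The point of the restricted-sensitivity technique in the talk is precisely to shrink the `sensitivity` term for graph-structured data, so that less noise is needed for the same privacy budget.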



 
MDAI 2015

University of Skövde

MDAI - Modeling Decisions

Vicenç Torra. Last modified: 20:48, November 07, 2015.