The 20th International Conference on Modeling Decisions for Artificial Intelligence, Umeå, Sweden, June 19-22, 2023. http://www.mdai.cat/mdai2023
Submission deadline: April 30th, 2023
Abstract: Graded Logic (GL) is an infinite-valued propositional logic, based on observing, measuring, and modeling natural human reasoning with graded percepts. More precisely, GL is a seamless generalization of classical Boolean logic, used primarily in human-centric models of decision making. All GL functions are superpositions of three basic operations that model simultaneity, substitutability, and complementing (negation). Analytic models of simultaneity and substitutability (the graded conjunction and the graded disjunction) are logic aggregators. To adjust the strength of graded conjunction/disjunction, GL introduced (in 1973) two complementary parameters: an adjustable conjunction degree (andness), and an adjustable disjunction degree (orness). Thus, the logic aggregators are andness-directed: they provide a parameterized continuous transition from the drastic conjunction (the model of ultimate simultaneity) to the drastic disjunction (the model of ultimate substitutability).
In this presentation we describe the necessary properties of logic aggregators and compare their major implementations. We assume that decision making includes the identification of a set of alternatives, followed by the evaluation of the alternatives and the choice of the best one. In this context, the evaluation of individual alternatives must be based on graded logic aggregation. The resulting analytic framework includes analytic models of graded simultaneity (various forms of conjunction) and graded substitutability (various forms of disjunction). These models can be implemented in several ways, including means, interpolative aggregators, t-norms/conorms, OWA, and fuzzy integrals. Such mathematical models must be applicable in all regions of the unit hypercube. To be applicable in decision-support systems, logic aggregators must be consistent with observable patterns of natural human reasoning, supporting both the formal-logic and the semantic aspects of human reasoning. This creates a comprehensive set of logic requirements that aggregators must satisfy. We investigate the extent to which various popular aggregators satisfy these requirements; the results clearly show the limits of applicability of the analyzed aggregators in decision-support systems. This presentation also marks the fiftieth anniversary of GL.
This talk is based on the paper: Jozo Dujmović and Vicenç Torra, Logic Aggregators and Their Implementations, Proceedings of MDAI 2023.
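One classical family of andness-directed aggregators (used here only as an illustration, not as the specific construction of the paper) is the power mean, which moves continuously from pure conjunction (min) to pure disjunction (max) as its exponent grows; a minimal sketch:

```python
import math

def power_mean(x, r):
    """Unweighted power mean M_r(x) over values in [0, 1].

    r -> -inf gives min (the model of full simultaneity),
    r -> +inf gives max (the model of full substitutability);
    intermediate r values interpolate between them.
    """
    if math.isinf(r):
        return max(x) if r > 0 else min(x)
    if r == 0:  # limit case: geometric mean
        return math.prod(x) ** (1 / len(x))
    return (sum(v ** r for v in x) / len(x)) ** (1 / r)

# Lower r -> more conjunctive (output dragged toward the worst input).
scores = [0.9, 0.4, 0.7]
for r in (-math.inf, -2, 1, 2, math.inf):
    print(r, round(power_mean(scores, r), 3))
```

Varying the exponent plays the role of an andness/orness dial: the aggregated score of the same inputs shifts from the minimum toward the maximum as r increases.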
Abstract:
Suppose that edges in an underlying graph G appear independently with some probability in our observed graph G', or, alternatively, that we can query uniformly random edges. We describe how high a sampling probability we need in order to infer the modularity of the underlying graph.
Modularity is a function on graphs which is ubiquitous in algorithms for community detection. For a given graph G, each partition of the vertices has a modularity score, with higher values indicating that the partition better captures community structure in G. The (max) modularity q*(G) of the graph G is defined to be the maximum over all vertex partitions of the modularity score, and satisfies 0 ≤ q*(G) ≤ 1.
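For concreteness, the modularity score of a single fixed partition is the fraction of edges inside communities minus the fraction expected under a degree-preserving null model; a small self-contained sketch (graph and names are illustrative):

```python
from collections import defaultdict

def modularity(edges, part):
    """Modularity score of the partition `part` (vertex -> community)
    for an undirected graph given as a list of edges."""
    m = len(edges)
    internal = defaultdict(int)    # edges with both endpoints in a community
    degree_sum = defaultdict(int)  # total degree of each community
    for u, v in edges:
        degree_sum[part[u]] += 1
        degree_sum[part[v]] += 1
        if part[u] == part[v]:
            internal[part[u]] += 1
    return sum(internal[c] / m - (degree_sum[c] / (2 * m)) ** 2
               for c in degree_sum)

# Two triangles joined by a single bridge edge: splitting them apart
# captures the community structure and scores well above zero.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
part = {0: 'A', 1: 'A', 2: 'A', 3: 'B', 4: 'B', 5: 'B'}
print(modularity(edges, part))  # 6/7 - 1/2, about 0.357
```

The max modularity q*(G) is then the maximum of this score over all partitions; the trivial one-community partition always scores 0, which gives the lower end of the 0 ≤ q*(G) ≤ 1 range.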
It was noted when analysing ecological networks that under-sampled networks tended to over-estimate modularity. We indicate how the asymmetry of our results gives some theoretical backing for this phenomenon, though questions remain.
In the seminar I will spend time on intuition for the behaviour of modularity, on how it can be approximated, and on links to other graph parameters, and I will present some open problems.
Joint work with Colin McDiarmid.
Abstract: We are seeing increasing interest in AI in all parts of society, including the public sector. Although there are still few examples of AI applications implemented in the public sector, there is no doubt that they will come, and that we are facing a radical change in how work is organized. In the Swedish context, AI is mentioned as a potentially beneficial technology in governmental policies. Networks have also been set up connecting actors in the public sector with the aim of strengthening knowledge of AI and exchanging experiences. Given these ambitions to implement AI, paired with rapid technical development, we will probably see a fast increase in AI-enhanced work processes in the public sector. The disruptive potential of AI is, however, also raising many questions, especially about how it might affect workplaces and the people using these systems. It is therefore important to discuss the various ethical implications in order to foresee and react to the transformational power of AI, to become aware of possible outcomes, and to decide whether they are wanted.
In this presentation, we will look into the status of AI in the public sector, with an emphasis on the Swedish context. The main focus will be on ethics, in a broad sense, and on reflecting on how AI will impact the way we do work, make decisions, and organize work. Issues such as trust and accountability will be discussed, especially in relation to AI-based decision-making. Implications for workers and citizens will also be brought up, highlighting both potential advantages and challenges. At the end of the presentation, we will attempt to sketch out a number of principles that could serve as guidance for AI for the public good.
Abstract: Automation has been used in the handling of social assistance (monetary benefits in the Swedish social welfare system) since 2016. The driving forces for using digital technology solutions, such as automation in the form of RPA, were primarily efficiency and objective decisions. From the beginning, the handling of social assistance seemed like an easy part of the municipal administration of social services to digitalize. By 2023, most municipalities had an e-service, but not automation that works accurately. The use of automation in the distribution of social welfare has made visible several problematic aspects of the goals and construction of the welfare system. What lessons can research on the implementation of this automation process contribute? What opportunities and challenges do public administrations face? This presentation takes its point of departure in recent research and the concepts of public values, digital discretion, and administrative burden.
Abstract: Formal argumentation has emerged as a powerful conceptual tool for exploring the theoretical foundations of reasoning and interaction in autonomous and multiagent systems. Formal argumentation is usually modeled with argument graphs, composed of a set of arguments and a binary relation encoding attacks between arguments. Some recent approaches assign uncertainty values to the elements of the argument graph to represent the degree of belief in arguments or attacks: some assign the uncertainty values to the arguments, others to the attacks, and others to both. These works use precise probability approaches to model the uncertainty values. However, precise probability approaches have limitations in quantifying epistemic uncertainty, for example in representing disagreeing group opinions. Such opinions can be better represented by means of imprecise probabilities, which use lower and upper bounds instead of exact values. In this talk, we will present some recent results on modeling the degree of belief in arguments with imprecise probability values by means of credal sets. We will show how to use credal network theory to model causality relations between arguments. Some applications of imprecise probability in formal argumentation will also be discussed.
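As a toy illustration of the interval idea (not the credal-network machinery of the talk), a credal set can be represented by its extreme points, and the lower/upper probability of an argument's acceptance is then the min/max over those points; all names and numbers below are made up for the example:

```python
def prob(dist, event):
    """Probability of an event (a set of worlds) under one distribution."""
    return sum(p for world, p in dist.items() if world in event)

def credal_bounds(credal_set, event):
    """Lower and upper probability of an event over the extreme points
    of a credal set; linear bounds over a convex set are attained at
    its vertices, so checking the extreme points suffices."""
    probs = [prob(d, event) for d in credal_set]
    return min(probs), max(probs)

# Two experts disagree on the worlds in which argument `a` is acceptable.
expert1 = {'w1': 0.8, 'w2': 0.2}
expert2 = {'w1': 0.3, 'w2': 0.7}
accepts_a = {'w1'}  # worlds where argument `a` holds
print(credal_bounds([expert1, expert2], accepts_a))  # (0.3, 0.8)
```

Instead of forcing a single number onto the disagreeing opinions, the degree of belief in the argument is the interval [0.3, 0.8], which is exactly the kind of epistemic uncertainty a precise probability cannot express.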
Abstract: Semidefinite programming (SDP) is a powerful framework from convex optimization with striking potential for data science applications. In this work, we develop a provably correct randomized algorithm for solving large, weakly constrained SDP problems by economizing on storage and arithmetic costs. The key insight is to maintain only a small sketch of the decision variable. Combining this idea with conditional gradient methods, we obtain an algorithm that can solve very large SDPs that are not accessible to other convex optimization methods. Numerical evidence shows that the method is effective for a range of applications, including relaxations of MaxCut, abstract phase retrieval, and quadratic assignment.
Joint work with Joel Tropp, Olivier Fercoq, Madeleine Udell and Volkan Cevher.
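The storage saving can be illustrated with a minimal sketch (illustrative dimensions and step size, not the paper's full algorithm): conditional-gradient iterations update the matrix variable by rank-one steps, so instead of storing the n x n variable X one can maintain only S = X @ Omega for a thin random test matrix Omega, at O(nk) cost per step:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 200, 10                       # ambient dimension, sketch size (k << n)
Omega = rng.standard_normal((n, k))  # fixed random test matrix
S = np.zeros((n, k))                 # the sketch S = X @ Omega

X = np.zeros((n, n))                 # kept here only to verify the invariant
for _ in range(50):
    v = rng.standard_normal(n)       # rank-one direction from the subproblem
    eta = 0.1                        # step size (illustrative, fixed)
    X = (1 - eta) * X + eta * np.outer(v, v)        # O(n^2): what we avoid
    S = (1 - eta) * S + eta * np.outer(v, v @ Omega)  # O(nk): what we keep

# The O(nk) updates track X @ Omega without ever needing X itself;
# a low-rank (Nystrom-type) approximation of X is recoverable from (S, Omega).
print(np.allclose(S, X @ Omega))  # True up to floating-point roundoff
```

Because every update is linear, the sketch reproduces X @ Omega exactly by induction; the full matrix appears above only to check that invariant, and in the actual algorithm it is never formed.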