Algorithmic Game Theory and Computational Social Choice (7 Jan - 8 Mar 2013)
## ~ Abstracts ~
### Haris Aziz, NICTA, Australia

Two fundamental notions in microeconomic theory are efficiency---no agent can be made better off without making another one worse off---and strategyproofness---no agent can obtain a more preferred outcome by misrepresenting his preferences. When social outcomes are probability distributions (or lotteries) over alternatives, there are varying degrees of these notions depending on how preferences over alternatives are extended to preferences over lotteries. We show that efficiency and strategyproofness are incompatible to some extent when preferences are defined using stochastic dominance (SD) and therefore introduce a natural weakening of SD based on Savage's sure-thing principle (ST). While *random serial dictatorship* is SD-strategyproof, it only satisfies ST-efficiency. Our main result is that *strict maximal lotteries*---an appealing class of social decision schemes due to Kreweras and Fishburn---satisfy SD-efficiency and ST-strategyproofness.
### Xiaohui Bei, Nanyang Technological University

Fairness and welfare (a.k.a. efficiency) are among the most critical criteria in resource allocation and have been studied extensively in economics, social science and computer science. In most settings one cannot achieve optimal fairness and welfare simultaneously: A fair allocation may result in a loss in welfare efficiency, and an efficient allocation may incur unfairness. We take the standard approach of approximation, which quantifies the loss of fairness and efficiency, to study their interconnection and tradeoff. The basic question we would like to address is the following: For any given r_f and r_w, does there exist an allocation that achieves fairness approximation ratio r_f and welfare approximation ratio r_w simultaneously? We consider two classic resource allocation models: indivisible item allocation with subadditive functions and multiple fractional knapsack allocation with capacities and demands. For both models, we give (nearly) optimal constant bounds on fairness and welfare bifactor approximations regardless of computational constraints. We also give polynomial time algorithms to compute allocations with constant bifactor approximations.
### Ioannis Caragiannis, University of Patras, Greece

A well-studied approach to the design of voting rules views them as maximum likelihood estimators; given votes that are seen as noisy estimates of a true ranking of the alternatives, the rule must reconstruct the most likely true ranking. We argue that this is too stringent a requirement, and instead ask: How many votes does a voting rule need to reconstruct the true ranking? We define the family of pairwise-majority consistent rules, and show that for all rules in this family the number of samples required from the Mallows noise model is logarithmic in the number of alternatives, and that no rule can do asymptotically better (while some rules like plurality do much worse). Taking a more normative point of view, we consider voting rules that surely return the true ranking as the number of samples tends to infinity (we call this property accuracy in the limit); this allows us to move to a higher level of abstraction. We study families of noise models that are parametrized by distance functions, and find voting rules that are accurate in the limit for all noise models in such general families. We characterize the distance functions that induce noise models for which pairwise-majority consistent rules are accurate in the limit, and provide a similar result for another novel family of position-dominance consistent rules. These characterizations capture three well-known distance functions. Joint work with Ariel D. Procaccia and Nisarg Shah (Carnegie Mellon University).
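As a concrete illustration of the setting (not code from the paper), the sketch below samples votes from the Mallows model via the standard repeated-insertion method and reconstructs the true ranking by sorting candidates by pairwise-majority wins, a pairwise-majority consistent rule. All parameter choices (6 alternatives, 200 votes, dispersion 0.5) are illustrative:

```python
import random
from itertools import combinations

def sample_mallows(true_ranking, phi, rng):
    """Draw one vote from the Mallows model with dispersion phi via the
    repeated-insertion method: each candidate is inserted k positions
    above the bottom of the partial vote with probability ~ phi**k."""
    vote = []
    for i, cand in enumerate(true_ranking):
        weights = [phi ** k for k in range(i + 1)]  # k = 0 agrees with truth
        r = rng.random() * sum(weights)
        k = 0
        while r > weights[k]:
            r -= weights[k]
            k += 1
        vote.insert(len(vote) - k, cand)
    return vote

def pairwise_majority_ranking(votes, candidates):
    """Rank candidates by how many pairwise majority contests they win."""
    wins = {c: 0 for c in candidates}
    for a, b in combinations(candidates, 2):
        a_over_b = sum(1 for v in votes if v.index(a) < v.index(b))
        wins[a if 2 * a_over_b > len(votes) else b] += 1
    return sorted(candidates, key=lambda c: -wins[c])

rng = random.Random(0)
truth = list(range(6))
votes = [sample_mallows(truth, phi=0.5, rng=rng) for _ in range(200)]
recovered = pairwise_majority_ranking(votes, truth)
```

With 200 samples and a moderate dispersion, all 15 pairwise majorities almost surely agree with the true ranking, so the sorted wins recover it exactly.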
### Xiaotie Deng, University of Liverpool, UK

Games of strategic actions are influenced by players' payoffs which are often only partially known. The outcome, and associated data, on the other hand, reveal the incorrectly reported private data of the players to a certain extent. This relationship is analysed in the context of games and other subjects.
### Shahar Dobzinski, Weizmann Institute of Science, Israel

We generalize sealed bid auctions to accommodate combinatorial auctions. In a sealed bid combinatorial auction each bidder sends to the auctioneer, simultaneously with the others, a message that depends only on his own valuation. The auctioneer decides on the allocation based on these messages alone. The goal is to find an allocation of the items which maximizes the social welfare. In this simultaneous communication complexity model we ask: how much information must each bidder provide so that an allocation that well approximates the optimal one can be found?
### Edith Elkind, Nanyang Technological University

We present a general overview of multi-winner elections, focusing on fully proportional representation schemes, such as Monroe and Chamberlin-Courant. We discuss recent algorithmic results for winner determination under these rules, covering both hardness results for the general case and easiness results for restricted domains (in particular, single-peaked domains) and parameter values.
### Ulle Endriss, University of Amsterdam, The Netherlands

An important question in fair division is the extent to which an allocation of resources promotes economic equality. I will start by introducing several ways of measuring inequality, such as the Pigou-Dalton principle and the Lorenz curve, and apply these definitions to a standard framework for multiagent resource allocation with indivisible goods. I will then discuss the computational complexity of a number of problems that naturally arise in this context.
### Ulle Endriss, University of Amsterdam, The Netherlands

Judgment aggregation is an area of social choice theory concerned with the study of methods for amalgamating the views of different agents regarding the truth of a number of mutually dependent propositions expressed in a logical language. Originating in Philosophy and Economics, judgment aggregation has recently caught the attention of researchers in Computer Science and Artificial Intelligence. This is due not only to its appeal as a rich modelling framework for collective decision making, but also to the exciting algorithmic questions raised by the framework. This tutorial will provide an introduction to the field. Further information is available at http://www.illc.uva.nl/~ulle/teaching/singapore-2013/.
### Piotr Faliszewski, AGH University of Science and Technology, Poland

In this tutorial we will focus on algorithms for choosing committees of representatives. So far, researchers in computational social choice mostly focused on single-winner elections, where the goal is to elect a single best candidate (for example, as in presidential elections). The goal of multiwinner elections is to choose a committee of representatives (for example, as in parliamentary elections). There are many ways in which committees can be selected: For example, the voters might be divided into districts or variants of single-winner voting rules can be used (if a voting rule provides a ranking of the candidates and our goal is to elect a committee of k people, we may simply pick the top k candidates from this ranking). However, there are also voting rules, such as the ones of Monroe and of Chamberlin and Courant, that take into account that the committee should represent the society proportionally and that it should somehow ensure the satisfaction of the voters. The tutorial will focus on algorithmic results regarding Monroe and Chamberlin-Courant rules. We will first see that it is computationally hard to find winners according to these rules, and then we will consider a number of ways to (possibly) circumvent these hardness results (in particular, we will look at approximation algorithms, fixed-parameter tractability, and domain restrictions).
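One of the approximation approaches alluded to, the greedy (1 - 1/e)-approximation for the utilitarian Chamberlin-Courant rule with Borda misrepresentation (due to Lu and Boutilier), is simple enough to sketch. The profile below is an illustrative toy example, not data from the tutorial:

```python
def greedy_chamberlin_courant(profile, k):
    """Greedy (1 - 1/e)-approximation for the utilitarian Chamberlin-Courant
    rule with Borda scores: repeatedly add the candidate that most increases
    total satisfaction, where each voter's satisfaction is the Borda score
    of her favourite committee member."""
    m = len(profile[0])
    borda = [{c: m - 1 - pos for pos, c in enumerate(vote)} for vote in profile]
    committee, best = set(), [0] * len(profile)
    for _ in range(k):
        gain = lambda c: sum(max(sc[c] - s, 0) for sc, s in zip(borda, best))
        c_star = max((c for c in profile[0] if c not in committee), key=gain)
        committee.add(c_star)
        best = [max(sc[c_star], s) for sc, s in zip(borda, best)]
    return committee

profile = [
    ["a", "b", "c", "d"],  # each voter's ranking, best first
    ["a", "c", "b", "d"],
    ["d", "c", "b", "a"],
    ["d", "b", "c", "a"],
]
committee = greedy_chamberlin_courant(profile, 2)
```

On this profile the greedy rule picks {a, d}, giving every voter her top choice as representative, which here is also the optimal committee.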
### Piotr Faliszewski, AGH University of Science and Technology, Poland

We consider the problem of predicting an election winner given certain partial knowledge regarding the structure of the election. However, instead of a yes/no answer whether a particular candidate can win, we are interested in computing the probability of the candidate winning. We consider several types of partial knowledge we might have. For example, we might have perfect knowledge regarding which agents cast their votes, but only partial knowledge regarding their preference orders, or we might know their preference orders perfectly, but we may be uncertain regarding their willingness to vote. From the technical perspective, we study the complexity of counting variants of the possible winner problem, the manipulation problem, and the control problems.
### Amos Fiat, Tel Aviv University, Israel

We introduce and study strongly truthful mechanisms and their applications. We use strongly truthful mechanisms as a tool for implementation in undominated strategies for several new problems; these include externality-resistant auctions and some infamous multi-dimensional problems. Joint work with Anna R. Karlin, Elias Koutsoupias, and Angelina Vidali.
### Andrei Gomberg, Instituto Tecnológico Autónomo de México, Mexico

Economists have historically used rational preferences, satisfying certain regularity conditions (such as the Weak Axiom of Revealed Preference, WARP), to explain individual choices. More recently, attention has turned to explaining choices that violate WARP. Among the explanatory models that have appeared in recent years one should note models of limited attention, as well as the multi-self/multi-rational models. Many of these have drawn on old results from the social choice literature. Could social choice, in turn, benefit from the ideas of the new decision theory? Reference: http://ciep.itam.mx/~gomberg/papers/version6.pdf
### Umberto Grandi, University of Padova, Italy

In collective decision making, where a voting rule is used to take a collective decision among a group of agents, manipulation by one or more agents is usually considered a negative behavior to be avoided, or at least to be made computationally difficult for the agents to perform. However, there are scenarios in which a restricted form of manipulation can instead be beneficial. In this paper we consider the iterative version of several voting rules, where at each step one agent is allowed to manipulate by modifying his ballot according to a set of restricted manipulation moves which are computationally easy and require little information to be performed. We prove convergence of iterative voting rules when restricted manipulation is allowed, and we present experiments showing that most iterative voting rules have a higher Condorcet efficiency and a higher average position of the winner than their non-iterative version. Joint work with Andrea Loreggia, Francesca Rossi, Kristen Brent Venable and Toby Walsh.
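A minimal sketch of an iterative-voting dynamic for plurality with best-response moves and lexicographic tie-breaking (a standard variant, not necessarily the exact restricted moves studied in the paper):

```python
def plurality_winner(ballots, tie_break):
    """Plurality winner, ties broken by the order of `tie_break`."""
    scores = {c: 0 for c in tie_break}
    for b in ballots:
        scores[b] += 1
    return max(tie_break, key=lambda c: (scores[c], -tie_break.index(c)))

def iterative_plurality(preferences, tie_break, max_rounds=100):
    """Starting from sincere ballots, let voters take turns making
    best-response moves (switching to a candidate they prefer to the
    current winner, if the switch makes it win) until no one moves."""
    ballots = [pref[0] for pref in preferences]
    for _ in range(max_rounds):
        changed = False
        for i, pref in enumerate(preferences):
            current = plurality_winner(ballots, tie_break)
            for c in pref:
                if c == current:
                    break  # nothing better is reachable for voter i
                trial = ballots[:i] + [c] + ballots[i + 1:]
                if plurality_winner(trial, tie_break) == c:
                    ballots[i] = c
                    changed = True
                    break
        if not changed:
            break
    return plurality_winner(ballots, tie_break)

prefs = [
    ["a", "b", "c"],
    ["a", "b", "c"],
    ["b", "c", "a"],
    ["c", "b", "a"],
    ["c", "b", "a"],
]
sincere = plurality_winner([p[0] for p in prefs], ["a", "b", "c"])
final = iterative_plurality(prefs, ["a", "b", "c"])
```

In this example the sincere winner is a (by tie-breaking), but after one voter abandons her top choice in favour of her second, the dynamic converges to c, which a majority of the voters prefer to a.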
### Nick Gravin, Nanyang Technological University

In algorithmic mechanism design, every bidder in the auction is granted unlimited computational power for achieving a better outcome, while the auctioneer has only limited computational abilities. Such asymmetry between the auctioneer and the bidders may lead to contradictory impossibility results in auctions with a single bidder, where the auctioneer's objective is to maximize the social welfare of just one agent. Indeed, the auctioneer can simply ask the agent for advice besides conducting his own computations. This begs the question: can the auctioneer ask for help in his computations from many bidders in an "incentive compatible protocol"? We discuss an interesting example of such a protocol in the setting of multi-parameter budget feasible mechanism design, which studies procurement combinatorial auctions where the sellers have private costs to produce items, and the buyer (auctioneer) aims to maximize his valuation function on subsets of items, under a budget constraint on the total payment.
### Gianluigi Greco, University of Calabria, Italy

Mechanism design is addressed in the context of fair allocations of indivisible goods with monetary compensation. Motivated by a real-world social choice problem, mechanisms with verification are considered in a setting where (i) agents' declarations on allocated goods can be fully verified before payments are performed, and where (ii) verification is not used to punish agents whose declarations turn out to be incorrect. Within this setting, a mechanism is designed that is shown to be truthful, efficient, and budget-balanced, and in which agents' utilities are fairly determined by the Shapley value of suitable coalitional games. Computing the proposed mechanism is, however, #P-complete. Thus, to deal with applications involving many agents, two polynomial-time randomized variants are also proposed: one that is still truthful and efficient, and which is approximately budget-balanced with high probability, and another one that is truthful in expectation, while still budget-balanced and efficient.
### Gianluigi Greco, University of Calabria, Italy

Many NP-hard problems are known to be efficiently solvable when restricted to instances whose underlying structures can be modeled via acyclic graphs or acyclic hypergraphs. However, structures arising from real applications are rarely precisely acyclic. Yet, they are often not very intricate and, in fact, tend to exhibit some limited degree of cyclicity, which suffices to retain most of the nice properties of acyclic ones. Therefore, several efforts have been spent to investigate invariants that are best suited to identify nearly-acyclic graphs/hypergraphs, leading to the definition of a number of so-called structural decomposition methods. The tutorial will illustrate the main structural decomposition methods proposed in the literature within a unifying framework, and it will present their application to efficiently solve nearly acyclic instances of well-known problems arising in algorithmic game theory.
### Jason Hartline, Northwestern University, USA

In the first part, I will review classical auction theory. Topics covered include: Bayes-Nash equilibrium (BNE), characterization of BNE, revenue equivalence, solving for BNE, and revenue maximization (virtual values and marginal revenue). This theory provides a basis for all of auction theory. In the second part, I will survey the role approximation plays in addressing several of the short-comings of the classical theory. First, when the optimal auction is complex, I will describe simple mechanisms that are approximately optimal. These results are especially important when agents' preferences are multi-dimensional. Second, when the optimal auction is highly dependent on distributional assumptions, I will describe mechanisms with little or no dependence on the distribution that are approximately optimal. Reference Manuscript: http://www.eecs.northwestern.edu/~hartline/amd.pdf
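The virtual-value machinery from the first part can be illustrated in a few lines; the uniform-distribution example below is a standard textbook case rather than material from the tutorial itself:

```python
def virtual_value(v, F, f):
    """Myerson's virtual value: phi(v) = v - (1 - F(v)) / f(v),
    where F is the value distribution's CDF and f its density."""
    return v - (1.0 - F(v)) / f(v)

# For values drawn uniformly from [0, 1]: F(v) = v and f(v) = 1,
# so phi(v) = 2v - 1, and the revenue-optimal reserve price is the
# value at which the virtual value crosses zero.
phi = lambda v: virtual_value(v, lambda x: x, lambda x: 1.0)
reserve = 0.5  # phi(0.5) = 0
```

Selling to the bidders with the highest nonnegative virtual value (rather than the highest value) is exactly what maximizes expected revenue in Myerson's single-dimensional setting.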
### Jason Hartline, Northwestern University, USA

The intuition that profit is optimized by maximizing marginal revenue is a guiding principle in microeconomics. In the classical auction theory for agents with quasi-linear utility and single dimensional preferences, Bulow and Roberts (1989) show that the optimal auction of Myerson (1981) is in fact optimizing marginal revenue. In particular Myerson's virtual values are exactly the derivative of an appropriate revenue curve. We consider mechanism design in environments where the agents have multi-dimensional and non-linear preferences. Understanding good auctions for these environments is considered to be the main challenge in Bayesian optimal mechanism design. In these environments maximizing marginal revenue may not be optimal, and furthermore, there is sometimes no direct way to implement the marginal revenue maximization mechanism. Our contributions are twofold: we give procedures for implementing marginal revenue maximization in general, and we show that marginal revenue maximization is approximately optimal. Our approximation factor smoothly degrades in a term that quantifies how far the environment is from an ideal one (i.e., where marginal revenue maximization is optimal). Joint work with Saeed Alaei, Hu Fu, and Nima Haghpanah.
### Martin Hoefer, RWTH Aachen University, Germany

Stable matching problems are one of the most fundamental domains on the border of economics and computer science with many applications. In this talk I will survey some recent advances in the analysis of stable matching problems with locality constraints in networks. In particular, for matching problems based on static and dynamic local neighborhoods, we will study convergence times of natural dynamics and bound the impact of reward sharing and other-regarding preferences on the existence and social welfare of stable matchings.
### Martin Hoefer, RWTH Aachen University, Germany

We present a new stochastic Fubini theorem in a setting where we integrate measure-valued stochastic processes with respect to a d-dimensional martingale. To that end, we develop a notion of measure-valued stochastic integrals. As an application, we show how one can handle a class of quite general stochastic Volterra semimartingales. The talk is based on joint work with Tahir Choulli (University of Alberta, Edmonton, Canada).
### Nicole Immorlica, Northwestern University, USA

Markets are the basic mechanism by which resources are allocated to agents. These resources include simple objects, courses in an MBA program for example, or complex entities, like firms, which themselves have preferences over various sets of agents. Many of these markets are engineered by a central organization with a particular eye towards stability and efficiency. Examples are prolific and include diverse settings with their own peculiarities, such as medical labor markets, organ donation, school choice programs, and the matching of cadets to army positions. A rich theory has evolved around the design of such markets and has been successfully applied to a wide variety of applications, including those listed above. This advanced tutorial gives students a background in the theory of market design, as well as its numerous successful applications and corresponding engineering concerns. We begin with the standard theory of stable matching introduced by Gale and Shapley (1962) and Shapley and Scarf (1974), and study its application to the National Residency Matching Program. We then discuss extensions of the theory to contracts, housing allocation, and allocation with priorities as time permits.
### Nicole Immorlica, Northwestern University, USA

Social networks form the basic medium of social interaction. The structure of these networks significantly impacts and co-evolves with the behavioral patterns of society. Important societal outcomes - the global reach of an epidemic, the degree of cooperation in an online network, the adoption of new technologies - are dictated by social networks. In this talk, we explore the impact of networks on segregation. In 1969, economist Thomas Schelling introduced a landmark model of racial segregation in which individuals move out of neighborhoods where their ethnicity constitutes a minority. Simple simulations of Schelling's model suggest that this local behavior can cause global segregation effects. In this talk, we provide a rigorous analysis of Schelling's model on ring networks. Our results show that, in contrast to prior interpretations, the outcome is nearly integrated: the average size of an ethnically-homogenous region is independent of the size of the society and only polynomial in the size of a neighborhood. Joint work with Christina Brandt, Gautam Kamath, and Robert D. Kleinberg.
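A toy simulation of a Schelling-style dynamic on a ring makes it easy to observe homogeneous runs forming. This is a simplified variant with pairwise swaps of unhappy agents, not the exact process analysed in the paper; all parameters are illustrative:

```python
import random

def schelling_ring(n, w, steps, seed=0):
    """Toy Schelling dynamic on a ring of n agents of two types: an agent
    is unhappy if fewer than half of the 2*w agents in its neighbourhood
    share its type; each step, two unhappy agents of opposite types swap."""
    rng = random.Random(seed)
    ring = [i % 2 for i in range(n)]
    rng.shuffle(ring)

    def unhappy(i):
        same = sum(ring[(i + d) % n] == ring[i]
                   for d in range(-w, w + 1) if d != 0)
        return same < w  # fewer than half of the 2*w neighbours

    for _ in range(steps):
        movers = [i for i in range(n) if unhappy(i)]
        zeros = [i for i in movers if ring[i] == 0]
        ones = [i for i in movers if ring[i] == 1]
        if not zeros or not ones:
            break  # no improving swap remains
        i, j = rng.choice(zeros), rng.choice(ones)
        ring[i], ring[j] = ring[j], ring[i]
    return ring

def mean_run_length(ring):
    """Average length of a maximal homogeneous block around the ring."""
    n = len(ring)
    runs = sum(ring[i] != ring[i - 1] for i in range(n))
    return n / max(runs, 1)

ring = schelling_ring(100, 2, 1000)
segregation = mean_run_length(ring)
```

Tracking `mean_run_length` over time is a crude proxy for the "average size of an ethnically-homogeneous region" that the paper bounds.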
### Jerome Lang, Centre National de la Recherche Scientifique and Université Paris Dauphine, France

The specification of a decision making problem includes the agents' preferences over the available alternatives. The choice of a model of preferences (e.g., utility functions or binary relations) does not say how preferences should be represented (or specified). This tutorial will give a survey of graphical languages for compact preference representation. We will first survey languages for ordinal preferences, with a special focus on CP-nets; then we will survey languages for numerical preferences. The last part of the tutorial will be devoted to applications of these languages to voting and resource allocation.
### Jerome Lang, Centre National de la Recherche Scientifique and Université Paris Dauphine, France

This talk gives a survey (with emphasis on recent results) of the various ways of managing elections on multi-issue domains. It will start by introducing multiple election paradoxes, and then review several classes of methods, among them sequential voting, distance-based voting, and methods using compact representation languages.
### Jean-Francois Laslier, Centre National de la Recherche Scientifique and École Polytechnique, France

A tournament is a complete and asymmetric binary relation. A tournament solution is a choice correspondence defined for any finite tournament. Many such solutions have been defined in Graph Theory, Statistics, Game Theory and Social Choice Theory. The tutorial is a unified presentation of these solutions and of their properties. It also mentions the possible extensions from the pure tournament case to more general binary comparison structures.
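As a small illustration of the kind of solution such a tutorial covers (not an example taken from it), the top cycle of a tournament can be computed by closing a Copeland winner under the relation "beats some member of the set":

```python
def top_cycle(candidates, beats):
    """The top cycle of a tournament: the unique smallest nonempty set
    whose members beat every alternative outside it. It equals the
    closure of a Copeland winner under 'beats some member of the set'."""
    scores = {c: sum(beats(c, d) for d in candidates if d != c)
              for c in candidates}
    tc = {max(candidates, key=lambda c: scores[c])}  # a Copeland winner
    changed = True
    while changed:
        changed = False
        for d in candidates:
            if d not in tc and any(beats(d, c) for c in tc):
                tc.add(d)
                changed = True
    return tc

# a -> b -> c -> a form a cycle; d loses to everyone.
edges = {("a", "b"), ("b", "c"), ("c", "a"),
         ("a", "d"), ("b", "d"), ("c", "d")}
tc = top_cycle(["a", "b", "c", "d"], lambda x, y: (x, y) in edges)
```

The closure is correct because the top cycle contains every Copeland winner and is itself closed under this relation, while the closure of any single alternative under it is a dominant set.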
### Jean-Francois Laslier, Centre National de la Recherche Scientifique and École Polytechnique, France

This paper provides a theoretical foundation which supports the degressive proportionality principle in apportionment problems, such as the allocation of seats in a Parliament. The core of the argument is that the utility assigned to a constitutional rule is a non-linear function of the frequency with which each collective decision matches the individual's own will. If the function is concave, then classical utilitarianism at the social level recommends decision rules which exhibit degressive proportionality with respect to the population size.
### Stefano Leonardi, Sapienza University of Rome, Italy

We consider multi-unit auctions where each bidder has a private value on each item and an overall budget constraint. Such auctions capture adword auctions, where advertisers offer a bid for those adwords that (hopefully) target their intended audience, and advertisers also have budgets. It is known that even if all items are identical and all budgets are public it is not possible to design an efficient incentive compatible auction. Dobzinski, Lavi and Nisan (2008) presented an incentive compatible pareto optimal auction for multiple identical items with budgets. Several recent results related to the design of auctions with budgets will be presented in this talk. We will discuss: (i) the combinatorial case in which each bidder is only interested in a subset of the items; (ii) the case of multiple slots with different CTRs available for each item/keyword, and general matroid constraints; (iii) the problem of maximizing the revenue of the auctioneer in envy-free allocations.
### Stefano Leonardi, Sapienza University of Rome, Italy

We present prior-free auctions with constant-factor approximation guarantees with ordered bidders, in both unlimited and limited supply settings. We compare the expected revenue of our auctions on a bid vector to the monotone price benchmark, the maximum revenue that can be obtained from a bid vector using supply-respecting prices that are nonincreasing in the bidder ordering and bounded above by the second-highest bid. As a consequence, our auctions are simultaneously near-optimal in a wide range of Bayesian multi-unit environments. Joint work with Elias Koutsoupias and Tim Roughgarden.
### Kevin Leyton-Brown, University of British Columbia, Canada

The traditional model of two-sided matching assumes that all agents fully know their own preferences. As markets grow large, however, it becomes impractical for agents to precisely assess their rankings over all agents on the other side of the market. We propose a novel model of two-sided matching in which agents are endowed with known partially ordered preferences and unknown true preferences drawn from known distributions consistent with the partial order. The true preferences are learned through interviews, which reveal the pairwise rankings among all interviewed agents. Our goal is to identify a centralized interview policy, i.e., an algorithm which adaptively schedules interviews. We would like the policy to produce a matching that is guaranteed to be both stable and optimal for a given side of the market, with respect to the underlying true preferences of the agents. As interviews are costly, we seek a policy that minimizes the number of interviews. We formalize this minimization problem and characterize its hardness, both in the general setting and under restrictions on agents' preferences.
### Pinyan Lu, Microsoft Research Asia, China

In budget feasible mechanism design, one studies procurement combinatorial auctions in which the sellers have private costs to produce items, and the buyer (auctioneer) aims to maximize a social valuation function on subsets of items, under the budget constraint on the total payment. In this talk, I will survey two main design techniques and some recent development on budget feasible mechanisms.
### Yishay Mansour, Tel Aviv University, Israel

In this paper we study a novel model in which agents arrive sequentially one after the other and each in turn chooses one action from a fixed set of actions to maximize his expected rewards given the information he possesses at the time of arrival. The information that becomes available affects the incentives of an agent to explore and generate new information. We characterize the optimal disclosure policy of a planner whose goal is to maximize social welfare; this policy is shown to be intuitive and very simple to implement. As the number of agents increases, the social welfare converges to the optimal welfare of the unconstrained mechanism. One interpretation of our result is the implementation of what is known as the 'wisdom of the crowds'. This topic has become more relevant during the last decade with the rapid adoption of the Internet.
### Nicholas Mattei, NICTA, Australia

The study of voting systems often takes place in the theoretical domain due to a lack of large samples of sincere, strictly ordered voting data. We derive several million elections from the Netflix Prize dataset, a large dataset of preferences expressed over movies. We examine the results of simulated elections with strict preferences on these data and show that many social choice paradoxes rarely occur within this dataset. Additionally we discuss what behavioral social choice can contribute to computational social choice. Behavioral social choice focuses on questions of inference and model dependence: based on limited, imperfect, and highly incomplete observed data, what inference can we make about social choice outcomes at the level of a population? How much do theoretical predictions depend on modeling assumptions about the nature of human preferences and how these preferences are expressed? Using a small subcollection from the Netflix Prize dataset, we highlight the key roles that inference and behavioral modeling play in the analysis of such data, particularly for sparse data like the Netflix ratings.
### Svetlana Obraztsova, National Technical University of Athens, Greece

Multi-winner elections model scenarios where voters must select a fixed-size group of candidates based on their individual preferences. In such scenarios, it is often the case that voters are incentivized to manipulate, i.e., to misreport their preferences in order to obtain a better outcome. In this paper, we study the complexity of manipulating multi-winner elections under scoring rules, with a particular focus on the role of tie-breaking rules. We consider both lexicographic tie-breaking rules, which break ties according to a fixed ordering of the candidates, and a natural randomized tie-breaking rule. We describe polynomial-time manipulation algorithms for several special cases of our problem. Specifically, we show that finding a successful manipulation is easy if the underlying voting rule is k-Approval or the number of candidates to be elected is bounded by a constant (for both types of tie-breaking rules), as well as if the manipulator's utility function only takes values in {0, 1} and ties are broken in the manipulator's favor.
### Mallesh Pai, University of Pennsylvania, USA

Real world auctions are often restricted to being anonymous and nondiscriminatory (symmetric) due to practical or legal constraints. We examine the extent to which these constraints restrict the seller. We show that only one feasible (interim) allocation rule cannot be implemented via a symmetric auction. An important implication is that the optimal (revenue-maximizing) auction can always be implemented symmetrically. We also characterize the set of feasible outcomes implementable by a symmetric, ex-post individually rational mechanism. The optimal auction is in this set under additional assumptions on the distributions of buyers' values. Joint work with Rahul Deb, University of Toronto.
### Clemens Puppe, Karlsruhe Institute of Technology, Germany

In this talk, I review the basic results of the theory of judgement aggregation under independence ('issue-wise' aggregation) and monotonicity, and relate these to the theory of strategy-proof social choice. The central analytical tool is the 'intersection property' which provides a universal characterization of all monotonically independent aggregation rules on any agenda.
### Clemens Puppe, Karlsruhe Institute of Technology, Germany

The Condorcet set is defined as the set of logically consistent views which agree with the majority on a maximal set of issues. The elements of this set are exactly those that can be obtained through sequential majority voting, according to which issues are sequentially decided by simple majority unless earlier choices logically force the opposite decision. We investigate the size and structure of the Condorcet set - and hence the properties of sequential majority voting - for several important classes of judgement aggregation problems. While sequential majority voting verifies McKelvey's (1979) celebrated 'chaos theorem' in a number of contexts, in others it is shown to be very regular and well-behaved.
### Tim Roughgarden, Stanford University, USA

We discuss prior-independent auctions, whose descriptions do not reference a prior distribution, yet which, for every prior distribution satisfying standard assumptions, have expected revenue close to that of the optimal auction tailored to the prior. Such robust auctions are relevant if, for example, good prior information is expensive or impossible to acquire, or if a single auction format is to be re-used several times, in settings with different or not-yet-known bidder valuations. We give two constructions of prior-independent auctions, one based on sampling and one based on artificially limiting supply.
### Abdallah Saffidine, Université Paris-Dauphine, France

We propose a general framework for strategic voting when a voter may lack knowledge about other votes or about other voters' knowledge about her own vote. In this setting we define notions of manipulation and equilibrium. We also model actions that change knowledge about votes, such as a voter revealing her preference or a central authority performing a voting poll. Some forms of manipulation are preserved under such updates and others are not. Another form of knowledge dynamics is the effect of a voter declaring her vote. We envisage Stackelberg games for uncertain profiles. The purpose of this investigation is to provide the epistemic background for the analysis and design of voting rules that incorporate uncertainty.
### Piotr Skowron, University of Warsaw, Poland

We study the complexity of (approximate) winner determination under Monroe's and Chamberlin-Courant's multiwinner voting rules, where we focus on either the total (dis)satisfaction of the voters (the utilitarian case) or the (dis)satisfaction of the worst-off voter (the egalitarian case). We give good approximation algorithms for the satisfaction-based utilitarian cases, and inapproximability results for the remaining settings. We also provide an empirical evaluation of our algorithms on real-life data.
Arkadii Slinko, University of Auckland, New Zealand
This tutorial gives an introduction to secret sharing schemes, emphasizing methods developed in the theory of simple games. In secret sharing, particular attention is paid to the most informationally efficient and secure secret sharing schemes, which are called ideal. Not every access structure admits such a scheme. In this tutorial we concentrate on weighted threshold access structures, suggested by Shamir, and characterise the ideal ones among them.
Lecture-by-lecture plan:
Lecture 1. Basics of simple games: examples; weighted games; characterisations of weightedness; trading transforms; certificates of non-weightedness; complete games; composition of games.
Lecture 2. Basics of secret sharing: Shamir's secret sharing scheme; linear secret sharing; information rate of a scheme; perfect (secure) schemes; ideal schemes; linear schemes are ideal; non-linear schemes exist.
Lecture 3. Ideal weighted access structures: hierarchical and tripartite schemes are ideal; indecomposable weighted ideal schemes; any ideal weighted access structure is a composition of indecomposable ones.
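As background for Lecture 2, here is a minimal sketch of Shamir's threshold scheme over a prime field (function names are hypothetical; the scheme itself is standard): the dealer evaluates a random polynomial of degree t-1 with constant term equal to the secret, and any t shares recover the secret by Lagrange interpolation at zero.

```python
import random

P = 2**31 - 1  # a Mersenne prime; all arithmetic is over GF(P)

def make_shares(secret, t, n, rng=random):
    """Split `secret` (< P) into n shares, any t of which reconstruct
    it: evaluate a random degree-(t-1) polynomial with constant term
    `secret` at the points x = 1, ..., n."""
    coeffs = [secret] + [rng.randrange(P) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, j, P) for j, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0; modular inverses via Fermat's
    little theorem (pow(d, P-2, P))."""
    secret = 0
    for xi, yi in shares:
        num, den = 1, 1
        for xj, _ in shares:
            if xj != xi:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret
```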
Arkadii Slinko, University of Auckland, New Zealand
We use Hotelling's spatial model of competition to investigate the position-taking behaviour of political candidates under a class of electoral systems known as scoring rules. In a scoring rule election, voters rank all the candidates running for office, following which the candidates are assigned points according to a vector of nonincreasing scores. Convergent Nash equilibria in which all candidates adopt the same policy were characterised by Cox. Here, we investigate nonconvergent equilibria, where candidates adopt divergent policies. We identify a number of classes of scoring rules exhibiting a range of different equilibrium properties. For some of these, nonconvergent equilibria do not exist. For others, nonconvergent equilibria in which candidates cluster at positions spread across the issue space are observed. Many well-known scoring rules such as plurality, Borda, antiplurality and k-approval fall into the classes considered. Finally, we provide a complete characterisation of nonconvergent equilibria for the special cases of a four-, five- and six-candidate election. This is joint work with Dodge Cahan and John McCabe-Dansted.
Bo Tang, University of Liverpool, UK
We study matroid secretary problems with submodular valuation functions. In these problems, the elements arrive in random order. When an element arrives, we have to make an immediate and irrevocable decision on whether to accept it or not. The set of accepted elements must form an independent set in a predefined matroid. Our objective is to maximize the value of the accepted elements. In this paper, we focus on the case where the valuation function is a non-negative and monotonically non-decreasing submodular function. We introduce a general algorithm for such submodular matroid secretary problems. In particular, we obtain constant competitive algorithms for the cases of laminar matroids and transversal matroids. Our algorithms can be further applied to any independent set system defined by the intersection of a constant number of laminar matroids, while still achieving constant competitive ratios. Notice that laminar matroids generalize uniform matroids and partition matroids. On the other hand, when the underlying valuation function is linear, our algorithm achieves a competitive ratio of 9.6 for laminar matroids, which significantly improves the previous result.
Biaoshuai Tao, Nanyang Technological University
We consider the classic cake cutting problem where one allocates a divisible cake to n participating agents. Among all valid divisions, fairness and efficiency (a.k.a. social welfare) are the most critical criteria to satisfy and optimize, respectively. We study the computational complexity of computing an efficiency-optimal division subject to the conditions that the allocation satisfies proportional fairness and assigns each agent a connected piece. For linear valuation functions, we give a polynomial time approximation scheme to compute an efficiency-optimal allocation. On the other hand, we show that the problem is NP-hard to approximate within a factor of Omega(1/sqrt(n)) for general piecewise constant functions, and is NP-hard to compute for normalized functions.
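For intuition about proportional connected divisions, here is a sketch of the classical Dubins-Spanier moving-knife procedure for piecewise-constant valuations (this is the standard proportionality procedure, not the efficiency-optimization algorithm of the abstract, and the function names are hypothetical): a knife sweeps from left to right, and the first remaining agent whose value for the swept piece reaches 1/n of her whole-cake value takes it and leaves.

```python
def value(density, l, r):
    """Integral of a piecewise-constant density over [l, r]; the cake
    is [0, 1], split into len(density) equal-width cells."""
    m = len(density)
    return sum(d * max(0.0, min(r, (i + 1) / m) - max(l, i / m))
               for i, d in enumerate(density))

def mark(density, l, target):
    """Leftmost point r >= l with value(density, l, r) = target
    (returns 1.0 if the target is never reached)."""
    m = len(density)
    acc = 0.0
    for i in range(int(l * m), m):
        lo, hi = max(l, i / m), (i + 1) / m
        if lo >= hi:
            continue
        seg = density[i] * (hi - lo)
        if density[i] > 0 and acc + seg >= target:
            return lo + (target - acc) / density[i]
        acc += seg
    return 1.0

def dubins_spanier(densities):
    """Moving-knife proportional division with connected pieces:
    repeatedly cut at the earliest point where some remaining agent's
    value for the swept piece reaches 1/n of her whole-cake value."""
    n = len(densities)
    remaining = list(range(n))
    l, alloc = 0.0, {}
    while len(remaining) > 1:
        cuts = {a: mark(densities[a], l, value(densities[a], 0.0, 1.0) / n)
                for a in remaining}
        winner = min(cuts, key=cuts.get)
        alloc[winner] = (l, cuts[winner])
        l = cuts[winner]
        remaining.remove(winner)
    alloc[remaining[0]] = (l, 1.0)  # last agent takes the rest
    return alloc
```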
Chung-Piaw Teo, National University of Singapore
We propose a new class of semi-parametric random utility choice models, using only the marginal distributional structure of the random utilities (called MDM). We provide new connections between classical discrete choice models (in particular, the GEV families) and the MDM approach. Many classical choice models can thus be estimated/calibrated using the MDM approach. We study the theoretical properties of MDM and use a conjoint choice data set on technological features for automobiles to test the performance of the models. Our computational results suggest that MDM-based semi-parametric models compare favorably with mixed-logit models and take significantly less time (less than 1/700th of the time) to calibrate. This is joint work with Karthik Natarajan, Dhanesh Padmanabhan and Vinit Kumar Mishra.
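As background, the multinomial logit (the simplest member of the GEV family referenced above) has closed-form choice probabilities: the probability of choosing alternative i is proportional to exp(u_i). A minimal, numerically stable sketch:

```python
import math

def mnl_probabilities(utilities):
    """Multinomial logit choice probabilities: P(i) proportional to
    exp(u_i).  Subtracting the max utility before exponentiating
    keeps the computation numerically stable for large utilities."""
    m = max(utilities)
    weights = [math.exp(u - m) for u in utilities]
    total = sum(weights)
    return [w / total for w in weights]
```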
Toby Walsh, NICTA and University of New South Wales, Australia
One of the most influential ideas in computational social choice is that of Bartholdi et al., viz. despite impossibility results, we might be able to defend against strategic voting through computational complexity. I survey this issue and discuss a number of important new insights. Topics covered will include complexity in practice, the important role played by tie-breaking, and the impact of such issues as partial and weighted votes.
Toby Walsh, NICTA and University of New South Wales, Australia
I discuss recent results about properties of simple decentralized protocols for the division of indivisible goods. These include results on the protocols that provably maximize social welfare, supposing truthful or strategic behaviour, and results about subgame perfect Nash equilibria. Joint work with Thomas Kalinowski, Nina Narodytska, and Lirong Xia.
Lirong Xia, Harvard University, USA
Social choice studies ordinal preference aggregation with wide applications ranging from high-stakes political elections to low-stakes movie rating websites. In many cases, we want to design a social choice mechanism that reveals the ground truth via MLE inference. Despite its importance, progress toward this objective has been largely hindered by the lack of natural probabilistic models and efficient inference algorithms. In this talk, I will focus on a wide class of probabilistic models called Random Utility Models (RUMs), whose MLE inference was previously believed intractable in general. I will introduce a fast MC-EM algorithm for a very general and natural subclass of RUMs, and discuss its applications and impact on designing better social choice mechanisms. Extension of this algorithm also provides a computational basis for improving models in many applications in economics as well as machine learning, especially learning to rank.
Lirong Xia, Harvard University, USA
In many modern preference aggregation applications we need to design social choice mechanisms to provide good estimates of the ground truth. Such applications include multi-agent systems, recommender systems, and crowdsourcing. The study of this design problem has become popular recently in computational social choice and machine learning. Designing social choice mechanisms to reveal the ground truth amounts to choosing an appropriate probabilistic model for generating agents' preferences, and then computing the maximum likelihood estimator (MLE) based on the agents' reported preferences. The main challenges are (1) how can we choose a good probabilistic model, and (2) how can we compute the MLE efficiently. I will start by discussing principles for designing such mechanisms, especially the role of the traditional social choice axioms. Then I will present selected recent approaches, including algorithmic and complexity results of MLE inference for the Condorcet (Mallows) model and random utility models, and criteria for model selection.
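To make the Mallows MLE problem concrete, here is a brute-force sketch (function names are hypothetical): under the Mallows model, the probability of an observed ranking decays geometrically in its Kendall tau distance from a central ranking, so the MLE of the central ranking minimizes the total Kendall tau distance to the profile. The brute force is exponential in the number of alternatives, which is exactly the computational challenge the talk addresses.

```python
from itertools import permutations
import math

def kendall_tau(r1, r2):
    """Number of discordant pairs between two rankings of the same
    alternatives (most-preferred first)."""
    pos = {a: i for i, a in enumerate(r2)}
    n = len(r1)
    return sum(1 for i in range(n) for j in range(i + 1, n)
               if pos[r1[i]] > pos[r1[j]])

def mallows_log_likelihood(profile, sigma, phi):
    """Log-likelihood under the Mallows model with central ranking
    sigma and dispersion 0 < phi < 1: P(r) ∝ phi ** kendall_tau(r, sigma).
    The normalizer is Z = prod_{i=1}^{n} (1 + phi + ... + phi^{i-1})."""
    n = len(sigma)
    log_z = sum(math.log(sum(phi ** k for k in range(i)))
                for i in range(1, n + 1))
    return sum(kendall_tau(r, sigma) * math.log(phi) - log_z
               for r in profile)

def mallows_mle_ranking(profile, phi=0.5):
    """Brute-force MLE of the central ranking (equivalently, a Kemeny
    ranking); exponential in the number of alternatives."""
    alts = profile[0]
    return max(permutations(alts),
               key=lambda s: mallows_log_likelihood(profile, list(s), phi))
```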
Lan Yu, Nanyang Technological University
We study the complexity of electing a committee under several variants of the Chamberlin-Courant rule when the voters' preferences are single-peaked on a tree. We first show that the egalitarian, or "minimax", version of this problem is easy for arbitrary trees and misrepresentation functions. For the standard (utilitarian) version of this problem we provide an algorithm for an arbitrary misrepresentation function whose running time is polynomial in the input size as long as the number of leaves of the underlying tree is bounded by a constant. We argue that the constraint on the number of leaves cannot be relaxed, by proving that our problem remains computationally hard on trees that have bounded degree, diameter or pathwidth. Finally, we show how to modify Trick's [1989] algorithm to check whether an election is single-peaked on a tree whose number of leaves does not exceed a given parameter B. We also discuss the case of elections with single-crossing preferences.
Lan Yu, Nanyang Technological University
Algorithmic mechanism design is concerned with designing algorithms for settings where inputs are controlled by selfish agents, and the center needs to motivate the agents to report their true values. We study scenarios where the center may be able to verify whether the agents report their preferences (types) truthfully. In this talk, after introducing fundamentals of the characterization of truthfulness, we first consider the standard model of mechanism design with partial verification, where the set of types that an agent can report is a function of his true type. We show inherent limitations as well as advantages of this model. Motivated by the negative results, we then introduce a richer model of verification, which we term mechanism design with probabilistic verification. In our model, an agent may report any type, but will be caught with some probability that may depend on his true type, the reported type, or both; if an agent is caught lying, he will not get his payment and may be fined. We characterize the class of social choice functions that can be truthfully implemented in this model, and study the complexity of finding an optimal implementation, i.e., one that minimizes the center's expected payment while guaranteeing non-negative utility to the agent.
Hongyang Zhang, Nanyang Technological University
We study strategic behaviors of buyers in market equilibrium, an old topic in general equilibrium theory. We introduce the notion of incentive ratio to measure a buyer's incentive for strategic behaviors. First, we exhibit constant bounds on a buyer's incentive ratio for linear and Leontief utility functions. Next, we relate a buyer's incentive ratio to his endowment of money and market share.
Yair Zick, Nanyang Technological University
Cooperative games are a useful framework for modeling multiagent behavior in environments where agents must collaborate in order to complete tasks. Having jointly completed a task and generated revenue, agents need to agree on some reasonable method of dividing profits among themselves. One family of payoff divisions that is seen as particularly appealing is the core, which consists of all coalitionally rational (or stable) payoff divisions. Unfortunately, it is often the case that the core of a game is empty, i.e. there is no payoff scheme guaranteeing each group of agents a total payoff higher than what they can get on their own. As stability is a highly attractive property, various methods of achieving it have been proposed in the literature. In particular, a natural way of stabilizing a game is by taxation, i.e. reducing the value of some coalitions in order to decrease their bargaining power. Existing taxation methods include the concepts of the epsilon-core, the least-core and several others. However, taxing coalitions is in general undesirable: one would not wish to overly tamper with a given coalitional game, or overly tax the agents. Thus, in this work we study minimal taxation policies, i.e. those minimizing the amount of tax required in order to stabilize a given game. We show that games that minimize the total tax are to some extent a linear approximation of the original games, and explore their properties. We demonstrate connections between the minimal tax and the cost of stability, and characterize the types of games for which it is possible to obtain a tax-minimizing policy using variants of the notion of the epsilon-core, as well as those for which it is possible to do so using reliability extensions. To appear in the Twelfth International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2013). Joint work with Maria Polukarov and Nick Jennings (University of Southampton).