Social choice theory is the study of mechanisms for collective decision making, such as voting rules or protocols for fair division, and computational social choice addresses problems at the interface of social choice theory with computer science. This course will introduce some of the fundamental concepts in social choice theory and related disciplines, and it will expose students to current research at the interface of social choice theory with logic and computation. This is an advanced course in the Master of Logic programme. Students from AI, Computer Science, Mathematics, Economics, Philosophy, etc. are also very welcome to attend (contact me if you are unsure about your qualifications).
Prerequisites: For a few lectures I will assume some basic background in complexity theory (up to the notion of NP-completeness). For those who do not yet have this background, I will offer an optional tutorial at the beginning of the course.
Student Presentations: Each student will be asked to present a (typically recent) paper from the literature that relates to the material covered in the lectures. You can reserve a paper by sending me an email (first come, first served). You are expected to attend the talks of your fellow students and to ask questions. [see also: how to give a talk]
See also: previous editions of the course and the Computational Social Choice Seminar at the ILLC
Date  Content  Homework 

Monday 28 October 2013 
Introduction (4up). The first part of this introductory lecture has been about illustrating some of the problems we will address in the course, mostly by way of example. Have a look at the following survey papers to get a better idea of what the field is about:
In the second part of the lecture we have introduced the framework of social welfare functions to formally model the problem of preference aggregation and we have seen a proof of Arrow's famous impossibility theorem. You can look up the details in this paper, which will also serve as the main reference for the results in voting and preference aggregation using the axiomatic method that we will discuss in the next couple of weeks:
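The difficulty that Arrow's Theorem formalises already shows up in the pairwise majority relation, which can be cyclic. A minimal Python sketch of this (Condorcet's paradox), using a made-up three-voter profile:

```python
# A profile of strict preference orders, best alternative first
# (a hypothetical three-voter example).
profile = [
    ("a", "b", "c"),  # voter 1: a > b > c
    ("b", "c", "a"),  # voter 2: b > c > a
    ("c", "a", "b"),  # voter 3: c > a > b
]

def majority_prefers(profile, x, y):
    """True if a strict majority of voters rank x above y."""
    return sum(b.index(x) < b.index(y) for b in profile) * 2 > len(profile)

# The pairwise majority relation is cyclic: a beats b, b beats c, and
# c beats a, so it cannot serve as a collective preference order.
cycle = (majority_prefers(profile, "a", "b") and
         majority_prefers(profile, "b", "c") and
         majority_prefers(profile, "c", "a"))
```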


Tuesday 29 October 2013 
CLASS CANCELLED 

Wednesday 30 October 2013 
Tutorial on Complexity Theory (4up) at 15–17 (Room G2.04). 

Thursday 31 October 2013 
Impossibility Theorems (4up). In the first part of this lecture, we have briefly reviewed Arrow's Theorem and mentioned a few alternative proofs. I specifically recommend the first of the three short proofs by Geanakoplos:
We have also talked very briefly about logics for modelling social choice problems and about using tools from automated reasoning to automatically derive results such as Arrow's Theorem. If you are interested in pursuing this direction further, here are a few papers to get you started:
In the main part of the lecture we have then introduced social choice functions and proved two further classical impossibility theorems, namely Sen's Theorem on the Impossibility of a Paretian Liberal and the Muller-Satterthwaite Theorem, which shows that we cannot design a good resolute voting rule that is monotonic in the strong sense of that term. You can find these proofs in my review paper linked below. Sen's original paper is still highly readable today and I recommend that you have a look at it.
Related to the topic of this lecture, Christian Geist's 2010 MSc Logic thesis deals with impossibility theorems in a different area of social choice theory (known as ranking sets of objects) and, in particular, with the automated derivation of such theorems. 
Homework #1 (Due: 5 November 2013) 
Monday 4 November 2013 
Student Presentations on Voting Rules. We have seen short presentations on the following rules: Antiplurality/Veto (a positional scoring rule, like Borda and Plurality); Single Transferable Vote (a staged procedure, like Plurality with Runoff); Approval Voting and Range Voting (two procedures that are not social choice functions); as well as several voting rules that are based on the notion of pairwise majority contests, namely Sequential Majority/Cup, Tideman's Ranked Pairs rule, Schulze's rule, Kemeny's rule, and Dodgson's rule. 
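The positional scoring rules among these (Plurality, Borda, Antiplurality/Veto) can all be captured by a single score vector. A small sketch, on a made-up four-voter profile:

```python
def scoring_winners(profile, score_vector):
    """Winners under a positional scoring rule: each alternative receives
    the score attached to the position it occupies in each ballot."""
    scores = {}
    for ballot in profile:
        for position, alternative in enumerate(ballot):
            scores[alternative] = scores.get(alternative, 0) + score_vector[position]
    top = max(scores.values())
    return {x for x, s in scores.items() if s == top}

# Hypothetical profile with four voters and three alternatives.
profile = [("a", "b", "c"), ("a", "b", "c"), ("b", "c", "a"), ("c", "b", "a")]

plurality     = scoring_winners(profile, (1, 0, 0))  # only the top position counts
borda         = scoring_winners(profile, (2, 1, 0))  # scores m-1, m-2, ..., 0
antiplurality = scoring_winners(profile, (1, 1, 0))  # veto: a point for all but the last
```

Note how the three rules can disagree even on this tiny profile: Plurality elects a, while Borda and Antiplurality both elect b.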

Tuesday 5 November 2013 
Voting Rules (4up). In this lecture we have defined a number of voting rules, including staged procedures (notably STV), the positional scoring rules, and the class of Condorcet extensions (which are defined as those rules that satisfy the Condorcet principle). We have seen that the positional scoring rules systematically violate the Condorcet principle. As for the Condorcet extensions, we have reviewed Fishburn's classification of rules that can be defined in terms of the majority graph, those that can be defined in terms of the weighted majority graph, and those that require even more information than that. We have then briefly introduced one of the historically first topics studied in computational social choice, namely the computational complexity of the problem of computing the election winners for a given voting rule. Below you can find a survey paper defining many of the voting rules mentioned in class and the two short papers on the complexity of the winner determination problem for the Banks rule discussed at the end.
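The clash between scoring rules and the Condorcet principle can be verified directly on a small profile. A sketch, using a hypothetical five-voter example on which the Borda rule fails to elect the Condorcet winner:

```python
def condorcet_winner(profile, alternatives):
    """Return the Condorcet winner (the alternative beating every other
    alternative in a pairwise majority contest), or None if there is none."""
    def beats(x, y):
        return sum(b.index(x) < b.index(y) for b in profile) * 2 > len(profile)
    for x in alternatives:
        if all(beats(x, y) for y in alternatives if y != x):
            return x
    return None

def borda_winners(profile, alternatives):
    """Winners under the Borda rule (scores m-1, m-2, ..., 0)."""
    m = len(alternatives)
    scores = {x: 0 for x in alternatives}
    for ballot in profile:
        for pos, x in enumerate(ballot):
            scores[x] += m - 1 - pos
    top = max(scores.values())
    return {x for x, s in scores.items() if s == top}

# Five voters: a is the Condorcet winner (beats b and c 3-2), yet Borda
# elects b.
profile = [("a", "b", "c")] * 3 + [("b", "c", "a")] * 2
alts = ["a", "b", "c"]
```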


Wednesday 6 November 2013 (11–13 in D1.111) 
Characterisation Results (4up). In this lecture we have seen three different approaches to the characterisation of voting rules. The first approach is the axiomatic method, which we have already used to derive impossibility theorems. Here the aim is to identify a set of axioms (i.e., properties) that uniquely identify a voting rule (or possibly a family of voting rules). As examples, we have seen May's characterisation of the plurality rule and Young's characterisation of the positional scoring rules. The second approach is based on, first, a notion of consensus (specifying a set of profiles in which there is a clear winner) and, second, a notion of distance between profiles. The winners of an election are then the winners in the consensus profiles that are closest to the actual profile. We have seen two voting rules that are most conveniently defined in this framework (Dodgson and Kemeny) and we have seen two others for which it is possible to "rationalise" them in terms of consensus and distance (Borda and plurality). The third approach, finally, is based on the idea that voting can be a mechanism for tracking the truth: we assume that there is an objectively correct choice to be made and each voter receives a distorted signal regarding this choice; the voting rule then has to estimate the most likely correct choice given the observed ballots provided by the voters. The classical Condorcet Jury Theorem fits into this framework, and we have also seen how to characterise the Borda rule by choosing an appropriate "noise model". Here are two representative papers for each of the three approaches:
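The distance-based view of the Kemeny rule can be sketched directly: winners are the top alternatives of the rankings minimising total swap (Kendall tau) distance to the ballots in the profile. A brute-force sketch over all rankings, for illustration only (the profile is a made-up example):

```python
from itertools import permutations

def kendall_tau(r1, r2):
    """Number of pairs of alternatives on which two strict rankings disagree."""
    pairs = [(x, y) for i, x in enumerate(r1) for y in r1[i + 1:]]
    return sum(r2.index(x) > r2.index(y) for (x, y) in pairs)

def kemeny_winners(profile, alternatives):
    """Top alternatives of the rankings closest (in total Kendall tau
    distance) to the profile. Exponential in the number of alternatives."""
    best_cost, winners = None, set()
    for ranking in permutations(alternatives):
        cost = sum(kendall_tau(ranking, ballot) for ballot in profile)
        if best_cost is None or cost < best_cost:
            best_cost, winners = cost, {ranking[0]}
        elif cost == best_cost:
            winners.add(ranking[0])
    return winners
```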

Homework #2 (Due: 11 November 2013) 
Thursday 7 November 2013 
NO CLASS 

Monday 11 November 2013 
Strategic Manipulation (4up). We have introduced the problem of strategic manipulation: sometimes a voter can improve the outcome of an election for herself by misrepresenting her preferences. While in earlier lectures we had treated voting rules merely as mechanisms that map a profile of ballots supplied by the voters to election winners, the notions of truthful voting and strategic manipulation provide the missing link between actual preferences and ballots cast in an election. A voting rule is called strategyproof if it never gives any voter an incentive to not vote truthfully. The main theorem in this area of social choice theory is the Gibbard-Satterthwaite Theorem, which shows that no resolute voting rule (that does not exclude some alternatives from winning to begin with) can be strategyproof without also being dictatorial. We have proved the theorem to be a simple corollary to the Muller-Satterthwaite Theorem, by showing that strategyproofness entails strong monotonicity. You can read up on the proof here:
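A concrete manipulation instance is easy to construct. The sketch below uses the Borda rule made resolute by lexicographic tie-breaking (the tie-breaking is an assumption made here purely to obtain a resolute rule); the profile is a made-up example in which voter 1 profits from "burying" her second choice:

```python
def borda_winner(profile):
    """Borda scores with lexicographic tie-breaking (an assumption for
    illustration, to make the rule resolute)."""
    alts = sorted(profile[0])
    scores = {x: 0 for x in alts}
    for ballot in profile:
        for pos, x in enumerate(ballot):
            scores[x] += len(alts) - 1 - pos
    return max(alts, key=lambda x: scores[x])  # ties go to the first in alphabet

# Voter 1's true preference is a > b > c. Voting truthfully elects b, but
# misreporting a > c > b (burying b) makes her favourite alternative a win.
truthful    = [("a", "b", "c"), ("b", "a", "c"), ("c", "b", "a")]
manipulated = [("a", "c", "b"), ("b", "a", "c"), ("c", "b", "a")]
```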
Related to the topic of this lecture, Annemieke Reijngoud's 2011 MSc Logic thesis (amongst other things) deals with a variant of the classical manipulation problem when voters can obtain partial information about the intentions of other voters through election polls. 

Tuesday 12 November 2013 
Coping with Strategic Manipulation (4up). Today we have looked into ways of coping with the problem raised by the Gibbard-Satterthwaite Theorem and discussed two approaches in detail. The first approach is to limit the range of profiles on which we expect our voting rule to perform well. The best known such domain restriction is single-peakedness, which is a reasonable assumption in some cases and which constitutes a sufficient condition for the existence of a strategyproof voting rule. The second approach is one of the, by now, classical ideas in computational social choice: look for voting rules that, while not immune to manipulation, are resistant to manipulation in the sense that manipulation is computationally intractable for the manipulators to perform. We have seen that several standard voting rules do not enjoy this property, but STV does. We have also discussed the coalitional manipulation problem, in which computational intractability can often be achieved even for elections with small numbers of candidates (we have seen this for the Borda rule). Here are a few of the papers that were cited in class:
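The strategyproof rule that exists on single-peaked domains is the median voter rule. A minimal sketch (the peaks are made-up positions on a left-to-right axis; an odd number of voters is assumed):

```python
def median_peak(reported_peaks):
    """Elect the median of the reported peaks. On a single-peaked domain
    (with an odd number of voters) this is strategyproof: a misreport
    either leaves the outcome unchanged or pushes the median further away
    from the misreporting voter's true peak."""
    ordered = sorted(reported_peaks)
    return ordered[len(ordered) // 2]

# Three voters with peaks 2, 5 and 9: the outcome is the median peak, 5.
truthful_outcome = median_peak([2, 5, 9])
# The voter with peak 2 gains nothing by lying: reporting 7 instead only
# moves the outcome to 7, even further from her true peak.
misreport_outcome = median_peak([7, 5, 9])
```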

Homework #3 (Due: 18 November 2013) 
Thursday 14 November 2013 
NO CLASS 

Monday 18 November 2013 
Information and Communication in Voting (4up). In this lecture we have discussed a number of problems arising in situations in which only partial information regarding the ballots of the voters is available. We have modelled this partial information as a profile of partial orders over the set of alternatives. The main problem we discussed is the possible winner problem, in which we ask, for a given alternative x, whether it is possible to refine the given partial ballots to yield complete linear ballots such that x is a winner for the election. We have also seen how this problem, as well as the necessary winner problem, relate to questions in preference elicitation and election manipulation. On the technical side, we have discussed various complexity results for deciding whether a given alternative is a possible winner, for different voting rules and different assumptions regarding the nature of the missing information. Next, we have discussed the problem of compiling the information inherent in a partial ballot profile so as to still be able to compute the election winners once the remaining ballot information arrives. The minimum number of bits required to do this, for a given voting rule, is called the compilation complexity of that rule.
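The possible winner problem can be stated very compactly as a brute-force search over all refinements of the partial ballots. The exponential-time sketch below (for illustration only; partial ballots are made-up sets of pairs read as "x above y") checks possible winners under the plurality rule:

```python
from itertools import permutations

def extends(ranking, partial_ballot):
    """Does a linear ranking respect every pair (x, y), read as "x above y",
    of the given partial ballot?"""
    return all(ranking.index(x) < ranking.index(y) for (x, y) in partial_ballot)

def plurality_winners(profile):
    """Alternatives ranked first by a maximal number of voters."""
    scores = {}
    for ballot in profile:
        scores[ballot[0]] = scores.get(ballot[0], 0) + 1
    top = max(scores.values())
    return {x for x, s in scores.items() if s == top}

def possible_winner(x, partial_profile, alternatives, winners):
    """Is there some refinement of the partial ballots into complete linear
    ballots such that x is among the winners? Brute force over all
    refinements, so exponential in the size of the election."""
    def complete(i, linear_so_far):
        if i == len(partial_profile):
            return x in winners(linear_so_far)
        return any(complete(i + 1, linear_so_far + [r])
                   for r in permutations(alternatives)
                   if extends(r, partial_profile[i]))
    return complete(0, [])
```

For instance, with voter 1 known to rank a above b and voter 2's ballot fully unknown, c is a possible plurality winner; but if both voters are known to rank a above both other alternatives, then b is not.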
Related to the topic of this lecture, Ilan Frank's 2011 MSc Logic thesis deals with different approaches to measuring the amount of information required to compute the winner of an election. 

Tuesday 19 November 2013 
Voting in Combinatorial Domains (4up). When the decision to be made by a group of voters can be described in terms of a range of possible assignments to each of a set of variables, then we are facing a problem of voting in combinatorial domains. In fact, most collective decision making problems faced in practice will have this kind of combinatorial structure. In this lecture we have reviewed several possible approaches to dealing with such problems and we have remarked that none of them currently provides a fully satisfactory solution. To date, the most promising approaches are distance-based procedures, sequential voting, and voting with compactly expressed preferences. For an overview of the field, consult these reviews:
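Why the naive approach of voting on each variable separately is problematic can be shown in a few lines. The sketch below (a made-up instance of the multiple election paradox) decides three binary issues by issue-by-issue majority and produces an outcome that no voter actually voted for:

```python
def issue_by_issue_majority(ballots):
    """Decide each binary issue separately by majority voting
    (an odd number of voters is assumed)."""
    n = len(ballots)
    return tuple(int(sum(b[i] for b in ballots) * 2 > n)
                 for i in range(len(ballots[0])))

# Three voters, three binary issues: every issue is supported by a 2-1
# majority, so the outcome is (1, 1, 1) -- a combination of values that
# not a single voter submitted.
ballots = [(1, 1, 0), (1, 0, 1), (0, 1, 1)]
outcome = issue_by_issue_majority(ballots)
```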

Homework #4 (Due: 25 November 2013) 
Thursday 21 November 2013 
Student Presentations (25 minutes each, plus questions):


Monday 25 November 2013 
Judgment Aggregation (4up). This has been an introduction to the theory of judgment aggregation, which studies the problem of aggregating a set of individual judgments on the acceptability of a range of logically related propositions into a single collective set of judgments. We have seen a basic impossibility result, showing that no judgment aggregation procedure can meet a small number of seemingly reasonable requirements. We have then discussed several concrete methods for aggregation, such as the quota rules, the premise-based procedure, and distance-based aggregators. We have also seen how the standard model of preference aggregation can be embedded into the framework of judgment aggregation. You can read up on the material covered in these expository reviews:
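The basic difficulty, known as the doctrinal paradox, and the way the premise-based procedure escapes it, both fit in a short sketch (the three judgment sets are the standard made-up example with premises p, q and conclusion p AND q):

```python
def propositionwise_majority(judgment_sets):
    """Propositionwise majority voting on judgment sets, represented as
    dicts from propositions to True/False (odd number of judges assumed)."""
    propositions = judgment_sets[0].keys()
    n = len(judgment_sets)
    return {p: sum(j[p] for j in judgment_sets) * 2 > n for p in propositions}

# Each individual judgment set is logically consistent, yet the majority
# outcome accepts both premises while rejecting their conjunction.
judges = [
    {"p": True,  "q": True,  "p&q": True},
    {"p": True,  "q": False, "p&q": False},
    {"p": False, "q": True,  "p&q": False},
]
collective = propositionwise_majority(judges)

# The premise-based procedure restores consistency: vote only on the
# premises and let logic dictate the verdict on the conclusion.
premise_based_conclusion = collective["p"] and collective["q"]
```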
Related to the topic of this lecture, Kian Mintz-Woo's 2010 MSc Logic thesis discusses possible approaches to weakening the classical independence axiom in judgment aggregation to circumvent known impossibility results. 

Tuesday 26 November 2013 
Advanced Judgment Aggregation (4up). This has been a second lecture on judgment aggregation in which we have focussed on agenda characterisation theorems, which concern either the possibility of devising an aggregation procedure that is consistent for a given class of agendas and that meets certain axiomatic requirements, or the safety of the agenda problem, which asks whether all of the procedures meeting a given set of axioms will be consistent for a given class of agendas. We have also briefly talked about the computational complexity of the safety of the agenda problem, about strategic manipulation in judgment aggregation, and about applications. Some of this material can be found in the following two papers:

Homework #5 (Due: 2 December 2013) 
Thursday 28 November 2013 
Student Presentations (25 minutes each, plus questions):


Monday 2 December 2013 
Fairness and Efficiency Criteria (4up). Fair division deals with the question of how best to allocate a number of goods to the members of a group of agents. This first lecture on the topic has been an introduction to the range of fairness and efficiency criteria used in the literature on fair division to assess the quality of an allocation of goods to agents. While we have previously focused on problems of collective choice in which the preferences are modelled as ordinal preference relations, we have now switched to a model where preferences are represented in terms of utility functions. We have largely focussed on the concepts of social welfare ordering (SWO) and collective utility function (CUF) and discussed some of their axiomatic properties in detail. We have also briefly mentioned the fairness criteria of proportionality and envy-freeness. This material is covered in my lecture notes on fair division, which will also serve as the main reference for the remainder of the course.
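Three of the standard collective utility functions can be written down in a couple of lines each; the sketch below compares them on two made-up utility vectors to show how they rank allocations differently:

```python
import math

def utilitarian(utilities):
    """Utilitarian CUF: the sum of the individual utilities."""
    return sum(utilities)

def egalitarian(utilities):
    """Egalitarian CUF: the utility of the worst-off agent."""
    return min(utilities)

def nash(utilities):
    """Nash CUF: the product of the individual utilities."""
    return math.prod(utilities)

# Two hypothetical allocations, as utility vectors for three agents:
u1 = (7, 1, 1)   # higher total welfare, but very unequal
u2 = (3, 3, 2)   # lower total welfare, but more equal
```

The utilitarian CUF prefers u1, while both the egalitarian and the Nash CUF prefer u2, reflecting their greater sensitivity to inequality.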


Tuesday 3 December 2013 
Fair Allocation of Indivisible Goods (4up). We have discussed a number of issues pertaining to the problem of fairly allocating a set of indivisible goods to the members of a group. Our first topic has been the complexity of computing a socially optimal allocation. We have seen that when our objective is to maximise utilitarian social welfare, then this problem is NP-hard. We have also seen that we can get similar intractability results for other criteria, although positive results are possible under certain restrictive assumptions (e.g., under the assumption that all utility functions are additive, computing an allocation with maximal utilitarian social welfare is easy). Then we have noted that the choice of representation language for modelling the utility functions is a central aspect of any fair division problem of this (combinatorial) type. We have briefly mentioned some of the most important languages and then discussed languages based on weighted propositional formulas (or goals, for short) in detail. The analysis of such languages is an interesting research area in its own right. We have seen examples for technical results on the expressive power of such languages, the relative succinctness of different languages, and the computational complexity of reasoning about utility functions expressed in such languages. Most of the material of this lecture is covered in my lecture notes and the MARA Survey. A possible starting point for delving deeper into the world of compact preference representation is the paper cited below (which includes the expressivity, succinctness and complexity results for weighted goals shown in class).
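The easy additive case mentioned above comes down to a one-line greedy rule: to maximise utilitarian social welfare, simply give each good to (one of) the agents who value it most. A sketch with made-up valuations:

```python
def max_utilitarian_additive(valuations):
    """For additive utilities, maximise utilitarian social welfare by
    assigning each good to an agent who values it most (the general,
    non-additive problem is NP-hard). valuations[i][g] is agent i's
    value for good g."""
    n = len(valuations)
    goods = range(len(valuations[0]))
    allocation = {g: max(range(n), key=lambda i: valuations[i][g]) for g in goods}
    welfare = sum(valuations[allocation[g]][g] for g in goods)
    return allocation, welfare

# Hypothetical valuations for two agents and three goods:
vals = [
    [5, 1, 3],   # agent 0
    [2, 4, 3],   # agent 1
]
alloc, sw = max_utilitarian_additive(vals)
```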
Related to the topic of this lecture, Sara Ramezani's 2008 MSc Logic thesis deals with the fair allocation of indivisible goods for a specific fairness criterion based on Nash's solution to the bargaining problem. 
Homework #6 (Due: 9 December 2013) 
Thursday 5 December 2013 
NO CLASS 

Monday 9 December 2013 
Distributed Fair Division (4up). In this lecture we have introduced a distributed negotiation framework for the allocation of indivisible goods. In this framework, agents can agree on deals regarding the exchange of certain goods in a distributed and uncoordinated manner. The question then arises under which circumstances a sequence of deals, each of which satisfies certain rationality criteria as well as certain structural properties, can be expected to converge to an allocation that is socially optimal, according to a suitable fairness or efficiency criterion. We have studied this question in detail for the case of myopic and individually rational agents and in view of both utilitarian social welfare and envy-freeness. Here are the two main papers mentioned in the lecture:


Tuesday 10 December 2013 
Cake-Cutting Algorithms (4up). This lecture has been an introduction to the literature on the fair division of a single divisible good. Such a good may be thought of as a "cake" to be divided amongst several agents, and the algorithms proposed to achieve this are commonly known as cake-cutting procedures. We have seen several different procedures for achieving either proportional or envy-free divisions. We have also talked about the communication requirements of such procedures, by measuring the number of basic queries (particularly cuts) required to execute a given procedure in the worst case. While the problem of proportional cake division is very well understood, devising envy-free procedures is much harder and there are challenging open problems regarding the design of practical procedures that would guarantee envy-freeness. The paper cited below is a good introduction to the literature. Besides putting forward a (then) novel procedure for achieving an envy-free division for four (or more) players, it also presents many of the classical procedures in a systematic manner. Finally, the material discussed in class is also included in my lecture notes.
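The simplest such procedure, cut-and-choose for two agents, can be sketched directly. The cut point is found by bisection here, which is an implementation choice for this sketch, and the two valuation functions are made-up examples (each normalised to assign value 1 to the whole cake [0, 1]):

```python
def cut_and_choose(value_1, value_2, precision=1e-6):
    """Cut-and-choose on the cake [0, 1]: agent 1 cuts at a point she values
    as exactly 1/2 (located by bisection), then agent 2 picks the piece he
    prefers. Each agent ends up with a piece worth at least 1/2 to her, so
    the division is proportional (and, for two agents, envy-free).
    value_i(a, b) is agent i's value for the interval [a, b]."""
    lo, hi = 0.0, 1.0
    while hi - lo > precision:               # agent 1 locates her half-way cut
        mid = (lo + hi) / 2
        if value_1(0.0, mid) < 0.5:
            lo = mid
        else:
            hi = mid
    cut = (lo + hi) / 2
    if value_2(0.0, cut) >= value_2(cut, 1.0):   # agent 2 chooses his piece
        return {"agent_2": (0.0, cut), "agent_1": (cut, 1.0)}
    return {"agent_1": (0.0, cut), "agent_2": (cut, 1.0)}

def uniform(a, b):
    """Agent 1: values the cake uniformly."""
    return b - a

def left_heavy(a, b):
    """Agent 2: values the left half of the cake much more than the right."""
    return (min(b, 0.5) - min(a, 0.5)) * 1.8 + (max(b, 0.5) - max(a, 0.5)) * 0.2
```

On these valuations agent 1 cuts at 1/2 and agent 2 takes the left piece, which he values at 0.9; both receive at least half of the cake by their own lights.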
At the end of the lecture we have briefly reviewed the main issues covered in the course: The course was about three different models of collective decision making (namely preference aggregation and voting, fair division, and judgment aggregation) and we have analysed these models using a range of methodologies: the axiomatic method, complexity theory, the analysis of communication and information requirements, and questions relating to the compact representation of preferences. The specific problems addressed were motivated both by scientific curiosity and by a broad array of applications, ranging from political elections to multiagent systems. 
Homework #7 (Due: 16 December 2013) 
Thursday 12 December 2013 (9–13 in G3.13) 
Student Presentations (25 minutes each, plus questions):
