Social choice theory is the study of mechanisms for collective decision making, such as voting rules or protocols for fair division, and computational social choice addresses problems at the interface of social choice theory with computer science. This course will introduce some of the fundamental concepts in social choice theory and related disciplines, and it will expose students to current research at the interface of social choice theory with logic and computation. This is an advanced course in the Master of Logic programme. Students from AI, Computer Science, Mathematics, Economics, Philosophy, etc. are also very welcome to attend (contact me if you are unsure about your qualifications).
Prerequisites: For a few lectures I will assume some basic background in complexity theory (up to the notion of NP-completeness). For those who do not yet have this background, I will offer an optional tutorial at the end of the first week.
Projects: For those who successfully complete the course, I will offer the possibility of doing a small research project in November and December (worth 3 EC). Such a project involves writing a paper about a recent result from the literature and presenting your findings in a talk.
See also: previous editions of the course and the Computational Social Choice Seminar at the ILLC
No.  Date  Description of Lecture  Homework 

1  Monday 3 September 2012 
Introduction (4up). The first part of this introductory lecture has been about illustrating some of the problems we will address in the course, mostly by way of example. Have a look at the following survey papers to get a better idea of what the field is about:
In the second part of the lecture we have introduced the framework of social welfare functions to formally model the problem of preference aggregation and we have seen a proof of Arrow's famous impossibility theorem. You can look up the details in this paper, which will also serve as the main reference for the results in voting and preference aggregation using the axiomatic method that we will discuss in the next couple of weeks:
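Before Arrow's framework, it is worth seeing why preference aggregation is hard at all. Condorcet's paradox shows that pairwise majority voting, applied to perfectly rational individual ballots, can produce a cyclic (and hence non-transitive) social preference. A minimal sketch, with an illustrative three-voter profile of my own choosing:

```python
# Condorcet's paradox: three rational ballots whose pairwise majority
# aggregation is cyclic (alternative names a, b, c are illustrative).
profile = [['a', 'b', 'c'],   # voter 1: a > b > c
           ['b', 'c', 'a'],   # voter 2: b > c > a
           ['c', 'a', 'b']]   # voter 3: c > a > b

def majority_prefers(profile, x, y):
    """True iff a strict majority of voters rank x above y."""
    wins = sum(1 for ballot in profile if ballot.index(x) < ballot.index(y))
    return wins > len(profile) / 2

# a beats b, b beats c, and yet c beats a: the majority preference is
# cyclic, so pairwise majority voting fails to be a social welfare function.
assert majority_prefers(profile, 'a', 'b')
assert majority_prefers(profile, 'b', 'c')
assert majority_prefers(profile, 'c', 'a')
```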


2  Tuesday 4 September 2012 
Impossibility Theorems (4up). In the first part of this lecture, we have briefly reviewed Arrow's Theorem and mentioned a few alternative proofs. I specifically recommend the first of the three short proofs by Geanakoplos:
We have also talked very briefly about logics for modelling social choice problems and about using tools from automated reasoning to automatically derive results such as Arrow's Theorem. If you are interested in pursuing this direction further, here are a few papers to get you started:
In the main part of the lecture we have then introduced social choice functions and proven two further classical impossibility theorems, namely Sen's Theorem on the Impossibility of a Paretian Liberal and the Muller-Satterthwaite Theorem, which shows that we cannot design a good resolute voting rule that is monotonic in the strong sense of that term. You can find these proofs in my review paper linked below. Sen's original paper is still highly readable today and I recommend that you have a look at it.
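Sen's result can be checked concretely on his own "Lady Chatterley's Lover" example. In the sketch below (the encoding is mine), a = the prude reads the book, b = the lewd reads it, c = nobody reads it; the prude is decisive on the pair {a, c}, the lewd on {b, c}, and the Pareto principle settles the remaining pair, which together force a cyclic social preference:

```python
# Sen's liberal paradox on his classic example.
prude = ['c', 'a', 'b']   # prefers nobody reads it; himself over the lewd
lewd  = ['a', 'b', 'c']   # most enjoys the prude having to read it

def prefers(ballot, x, y):
    """True iff the ballot ranks x above y."""
    return ballot.index(x) < ballot.index(y)

social = set()
social.add(('c', 'a'))        # prude is decisive on {a, c} and prefers c
assert prefers(prude, 'c', 'a')
social.add(('b', 'c'))        # lewd is decisive on {b, c} and prefers b
assert prefers(lewd, 'b', 'c')
if prefers(prude, 'a', 'b') and prefers(lewd, 'a', 'b'):
    social.add(('a', 'b'))    # Pareto: both prefer a to b

# The resulting social preference is cyclic: a > b > c > a.
assert social == {('a', 'b'), ('b', 'c'), ('c', 'a')}
```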
Related to the topic of this lecture, Christian Geist's 2010 MSc Logic thesis deals with impossibility theorems in a different area of social choice theory (known as ranking sets of objects) and, in particular, with the automated derivation of such theorems. The tutorial on complexity theory (4up) took place on Friday 7 September 2012 at 9-11 in room A1.08. 
Homework #1 (Deadline: 10 September 2012) 
3  Monday 10 September 2012 
Voting Rules (4up). In the first part of today's session we have seen short presentations of some of the most important voting rules: the anti-plurality rule (by Callum), k-approval (by Zhiguang), STV (by Frank), Coombs (by Chris), range voting (by Sanne K.), majority judgment (by Justin and Wouter), Bucklin (by George), Copeland (by Jetze), Banks (by Babette and Johannes), Slater (by Katharina and Gijs), Dodgson (by Anthony), Young (by Femke), ranked pairs (by Francesco), Schulze (by Sanne B. and Will), and the cup rule (by Alexander). In the second part we have introduced two important families of voting rules, namely the positional scoring rules (which systematically violate the Condorcet principle) and the Condorcet extensions (which are defined as those rules that satisfy the Condorcet principle). We have then briefly introduced one of the historically first topics studied in computational social choice, namely the computational complexity of the problem of computing the election winners for a given voting rule. Below you can find a survey paper defining many of the voting rules mentioned in class and the two short papers on the complexity of the winner determination problem for the Banks rule discussed at the end.
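A positional scoring rule is determined by a vector of weights, one per ballot position. The toy profile in this sketch (3 voters with a > b > c, 2 with b > c > a, chosen by me for illustration) has a as its Condorcet winner, yet the Borda rule elects b; this is one concrete instance of a scoring rule violating the Condorcet principle:

```python
# 3 voters report a > b > c and 2 voters report b > c > a.
profile = [['a', 'b', 'c']] * 3 + [['b', 'c', 'a']] * 2

def scoring_winners(profile, weights):
    """Co-winners under the positional scoring rule given by weights:
    position k on a ballot earns the alternative weights[k] points."""
    scores = {}
    for ballot in profile:
        for pos, alt in enumerate(ballot):
            scores[alt] = scores.get(alt, 0) + weights[pos]
    top = max(scores.values())
    return sorted(alt for alt, s in scores.items() if s == top)

# a beats both b and c in pairwise majority contests (3 voters out of 5),
# so a is the Condorcet winner. Plurality happens to agree; Borda does not.
assert scoring_winners(profile, (1, 0, 0)) == ['a']   # plurality
assert scoring_winners(profile, (2, 1, 0)) == ['b']   # Borda
```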

Homework #2 (Deadline: 17 September 2012) 
4  Friday 14 September 2012 
The Tuesday lecture got moved to Friday 14 September 2012 at 15-17 in room A1.08 (because of COMSOC-2012). Characterisation Results (4up). In this lecture we have seen three different approaches to the characterisation of voting rules. The first approach is the axiomatic method, which we have already used to derive impossibility theorems. Here the aim is to identify a set of axioms (i.e., properties) that uniquely identify a voting rule (or possibly a family of voting rules). As examples, we have seen May's characterisation of the plurality rule and Young's characterisation of the positional scoring rules. The second approach is based on, first, a notion of consensus (specifying a set of ballot profiles in which there is a clear winner) and, second, a notion of distance between profiles. The winners of an election are then the winners in the consensus profiles that are closest to the actual profile. We have seen two voting rules that are most conveniently defined in this framework (Dodgson and Kemeny) and two others that it is possible to "rationalise" in terms of consensus and distance (Borda and plurality). The third approach, finally, is based on the idea that voting can be a mechanism for tracking the truth: we assume that there is an objectively correct choice to be made and each voter receives a distorted signal regarding this choice; the voting rule then has to estimate the most likely correct choice given the observed ballots provided by the voters. The classical Condorcet Jury Theorem fits into this framework, and we have also seen how to characterise the Borda rule by choosing an appropriate "noise model". Here are two representative papers for each of the three approaches:
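The Kemeny rule is a natural example of a distance-based definition: it selects a ranking minimising the total Kendall tau distance (number of pairwise disagreements) to the ballots. A brute-force sketch, feasible only for a handful of alternatives, on a small profile of my own:

```python
from itertools import permutations

def kendall_tau(r1, r2):
    """Number of pairs of alternatives on which two rankings disagree."""
    pairs = [(x, y) for i, x in enumerate(r1) for y in r1[i + 1:]]
    return sum(1 for x, y in pairs if r2.index(x) > r2.index(y))

def kemeny_ranking(profile):
    """A ranking minimising the sum of Kendall tau distances to all ballots
    (brute force over all m! rankings, so only feasible for tiny m)."""
    return min(permutations(profile[0]),
               key=lambda r: sum(kendall_tau(list(r), b) for b in profile))

# Three voters with a > b > c and two with b > c > a:
profile = [['a', 'b', 'c']] * 3 + [['b', 'c', 'a']] * 2
assert kemeny_ranking(profile) == ('a', 'b', 'c')
```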


5  Monday 17 September 2012 
Strategic Manipulation (4up). We have introduced the problem of strategic manipulation: sometimes a voter can improve the outcome of an election for herself by misrepresenting her preferences. While in earlier lectures we had treated voting rules merely as mechanisms that map a profile of ballots supplied by the voters to election winners, the notions of truthful voting and strategic manipulation provide the missing connection between actual preferences and ballots cast in an election. A voting rule is called strategyproof if it never gives any voter an incentive to not vote truthfully. The main theorem in this area of social choice theory is the Gibbard-Satterthwaite Theorem, which shows that no resolute voting rule (that does not exclude some alternatives from winning to begin with) can be strategyproof without also being dictatorial. We have proved the theorem to be a simple corollary to the Muller-Satterthwaite Theorem, by showing that strategyproofness entails strong monotonicity. You can read up on the proof here:
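A tiny concrete instance of manipulation can make the setting vivid. In the sketch below (profile and tie-breaking convention are my own illustrative choices, not taken from the lecture), the third voter sincerely prefers b, loses under truthful voting, and then wins by insincerely demoting a to the bottom of her ballot:

```python
def borda_winner(profile):
    """Borda rule, made resolute by breaking ties in favour of the
    alphabetically first alternative (an arbitrary convention)."""
    m = len(profile[0])
    scores = {}
    for ballot in profile:
        for pos, alt in enumerate(ballot):
            scores[alt] = scores.get(alt, 0) + (m - 1 - pos)
    # min over alphabetically sorted keys returns the first maximiser:
    return min(sorted(scores), key=lambda a: -scores[a])

# Voter 3's sincere preference is b > a > c; truthfully, a wins (on a tie):
truthful = [['a', 'c', 'b'], ['b', 'a', 'c'], ['b', 'a', 'c']]
assert borda_winner(truthful) == 'a'

# By demoting a to the bottom, voter 3 makes b win outright instead:
manipulated = [['a', 'c', 'b'], ['b', 'a', 'c'], ['b', 'c', 'a']]
assert borda_winner(manipulated) == 'b'
```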
Related to the topic of this lecture, Annemieke Reijngoud's 2011 MSc Logic thesis (amongst other things) deals with a variant of the classical manipulation problem when voters can obtain partial information about the intentions of other voters through election polls. 

6  Tuesday 18 September 2012 
Coping with Strategic Manipulation (4up). Today we have looked into ways of coping with the problem raised by the Gibbard-Satterthwaite Theorem and discussed two approaches in detail. The first approach is to limit the range of profiles on which we expect our voting rule to perform well. The best known such domain restriction is single-peakedness, which is a reasonable assumption in some cases and which constitutes a sufficient condition for the existence of a strategyproof voting rule. The second approach is one of the by now classical ideas in computational social choice: look for voting rules that, while not immune to manipulation, are resistant to manipulation in the sense that manipulation is computationally intractable for the manipulators to perform. We have seen that several standard voting rules do not enjoy this property, but STV does. We have also discussed the coalitional manipulation problem, in which computational intractability can often be achieved even for elections with small numbers of candidates (we have seen this for the Borda rule). Here are a few of the papers that were cited in class:
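The standard illustration of the single-peaked escape route is Black's median voter rule: with an odd number of voters whose preferences are single-peaked along a fixed left-to-right axis, electing the median of the reported peaks is strategyproof. A sketch with toy peak positions of my own:

```python
def median_peak(peaks):
    """Elect the median of the reported peak positions (odd #voters)."""
    return sorted(peaks)[len(peaks) // 2]

# Three voters with single-peaked preferences along the axis 0..7:
assert median_peak([1, 3, 7]) == 3

# The voter with peak 7 cannot profit from lying: no misreport moves the
# outcome any closer to 7 than the truthful outcome 3 already is.
assert all(abs(median_peak([1, 3, lie]) - 7) >= abs(3 - 7)
           for lie in range(0, 8))
```

The intuition: to change the median at all, an extreme voter would have to report a peak on the far side of the current median, which only pushes the outcome further from her true peak.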

Homework #3 (Deadline: 24 September 2012) 
7  Monday 24 September 2012 
Information and Communication in Voting (4up) In this lecture we have discussed a number of problems arising in situations in which only partial information regarding the ballots of the voters is available. We have modelled this partial information as a profile of partial orders over the set of alternatives. The main problem we discussed is the possible winner problem, in which we ask, for a given alternative x, whether it is possible to refine the given partial ballots to yield complete linear ballots such that x is a winner for the election. We have also seen how this problem, as well as the necessary winner problem, relate to questions in preference elicitation and election manipulation. On the technical side, we have discussed various complexity results for deciding whether a given alternative is a possible winner, for different voting rules and different assumptions regarding the nature of the missing information. Next, we have discussed the problem of compiling the information inherent in a partial ballot profile so as to still be able to compute the election winners once the remaining ballot information arrives. The minimum number of bits required to do this, for a given voting rule, is called the compilation complexity of that rule.
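In the worst case, the possible winner problem can always be solved by brute force: enumerate all completions of the partial ballots and collect the winners, which is exactly the exponential baseline against which the complexity results discussed in class should be read. A small sketch under illustrative assumptions (plurality with co-winners, and partial orders given as sets of "x above y" pairs):

```python
from itertools import permutations

def plurality_winners(profile):
    """Plurality co-winners: alternatives topping the most ballots."""
    scores = {}
    for ballot in profile:
        scores[ballot[0]] = scores.get(ballot[0], 0) + 1
    top = max(scores.values())
    return [a for a, s in scores.items() if s == top]

def linear_extensions(alts, partial):
    """All rankings of alts respecting a set of (x, y) pairs, x above y."""
    for r in permutations(alts):
        if all(r.index(x) < r.index(y) for x, y in partial):
            yield list(r)

def possible_winners(alts, partial_profile, winners=plurality_winners):
    """x is a possible winner iff some completion of the ballots elects x."""
    found = set()
    def complete(i, profile):
        if i == len(partial_profile):
            found.update(winners(profile))
        else:
            for ballot in linear_extensions(alts, partial_profile[i]):
                complete(i + 1, profile + [ballot])
    complete(0, [])
    return sorted(found)

# Voter 1 ranks a on top, voter 2 ranks b above c, voter 3 ranks a above c:
partial_profile = [{('a', 'b'), ('a', 'c')}, {('b', 'c')}, {('a', 'c')}]
assert possible_winners(['a', 'b', 'c'], partial_profile) == ['a', 'b']
```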
Related to the topic of this lecture, Ilan Frank's 2011 MSc Logic thesis deals with different approaches to measuring the amount of information required to compute the winner of an election. 
Homework #4 (Deadline: 1 October 2012) 
8  Friday 28 September 2012 
The Tuesday lecture got moved to Friday 28 September 2012 at 11-13 in room A1.14 (to make space for the PhD defense of Umberto Grandi and a small workshop on computational social choice). Voting in Combinatorial Domains (4up) When the decision to be made by a group of voters can be described in terms of a range of possible assignments to each of a set of variables, then we are facing a problem of voting in combinatorial domains. In fact, most collective decision making problems faced in practice will have this kind of combinatorial structure. In this lecture we have reviewed several possible approaches to dealing with such problems and we have remarked that none of them currently provides a fully satisfactory solution. To date, the most promising approaches are distance-based procedures, sequential voting, and voting with compactly expressed preferences. For an overview of the field, consult these reviews:
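One standard warning sign in this setting is the paradox of multiple elections: deciding each binary issue separately by majority can produce a combination of values that not a single voter actually voted for. A minimal sketch with a hypothetical three-voter, three-issue profile:

```python
# Each ballot fixes a yes/no value (1/0) for each of three binary issues.
ballots = [(1, 1, 0), (1, 0, 1), (0, 1, 1)]

# Decide each issue independently by majority:
outcome = tuple(int(sum(b[i] for b in ballots) > len(ballots) / 2)
                for i in range(3))

# Every issue passes 2-to-1, yet nobody voted for the resulting combination:
assert outcome == (1, 1, 1)
assert outcome not in ballots
```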


9  Monday 1 October 2012 
Judgment Aggregation (4up) This has been an introduction to the theory of judgment aggregation, which studies the problem of aggregating a set of individual judgments on the acceptability of a range of logically related propositions into a single collective set of judgments. We have seen a basic impossibility result, showing that no judgment aggregation procedure can meet a small number of seemingly reasonable requirements, and then explored ways of circumventing this impossibility by weakening some of these requirements. This perspective has naturally led to the discussion of several concrete aggregation procedures. Finally, we have seen how the standard model of preference aggregation can be embedded into the framework of judgment aggregation. You can read up on the material covered in these expository reviews:
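The canonical motivating example is the discursive dilemma (or doctrinal paradox): proposition-wise majority voting on logically connected propositions can yield an inconsistent collective judgment set even when every individual set is consistent. A sketch on the standard three-judge profile over p, q, and their conjunction:

```python
# Three judges each hold a logically consistent judgment set:
judges = [
    {'p': True,  'q': True,  'p&q': True},
    {'p': True,  'q': False, 'p&q': False},
    {'p': False, 'q': True,  'p&q': False},
]

def majority(judges, prop):
    """True iff a strict majority of judges accepts the proposition."""
    return sum(j[prop] for j in judges) > len(judges) / 2

collective = {prop: majority(judges, prop) for prop in ('p', 'q', 'p&q')}

# The collective accepts p and q but rejects their conjunction, which is
# logically inconsistent:
assert collective == {'p': True, 'q': True, 'p&q': False}
```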
Related to the topic of this lecture, Kian Mintz-Woo's 2010 MSc Logic thesis discusses possible approaches to weakening the classical independence axiom in judgment aggregation to circumvent known impossibility results. 

10  Tuesday 2 October 2012 
Advanced Judgment Aggregation (4up) This has been a second lecture on judgment aggregation, in which we have discussed: axiomatic characterisations of the quota rules and the majority rule; agenda characterisation theorems, which concern either the possibility of devising an aggregation procedure that is consistent for a given class of agendas and meets certain axiomatic requirements, or the safety of the agenda problem, which asks whether all of the procedures meeting a given set of axioms will be consistent for a given class of agendas; and the computational complexity of the safety of the agenda problem. Some of this material can be found in the following two papers:
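Quota rules generalise proposition-wise majority voting: a proposition is collectively accepted if and only if the number of individuals accepting it meets its quota, with the majority rule arising as the special case where every quota is floor(n/2)+1. A sketch on the discursive-dilemma profile (the particular quotas chosen are mine, for illustration only):

```python
judges = [
    {'p': True,  'q': True,  'p&q': True},
    {'p': True,  'q': False, 'p&q': False},
    {'p': False, 'q': True,  'p&q': False},
]

def quota_rule(judges, quotas):
    """Accept each proposition iff at least quotas[prop] judges accept it."""
    return {prop: sum(j[prop] for j in judges) >= quota
            for prop, quota in quotas.items()}

# With n = 3 judges, the majority rule uses quota 2 everywhere and is
# inconsistent on this profile ...
assert quota_rule(judges, {'p': 2, 'q': 2, 'p&q': 2}) == \
    {'p': True, 'q': True, 'p&q': False}
# ... whereas raising the quotas for p and q to 3 (unanimity) happens to
# restore consistency on this particular profile:
assert quota_rule(judges, {'p': 3, 'q': 3, 'p&q': 2}) == \
    {'p': False, 'q': False, 'p&q': False}
```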

Homework #5 (Deadline: 8 October 2012) 
11  Monday 8 October 2012 
Fairness and Efficiency Criteria (4up) Fair division deals with the question of how best to allocate a number of goods to the members of a group of agents. This first lecture on the topic has been an introduction to the range of fairness and efficiency criteria used in the literature on fair division to assess the quality of an allocation of goods to agents. While we have previously focused on problems of collective choice in which preferences are modelled as ordinal preference relations, we have now switched to a model where preferences are represented in terms of utility functions. We have largely focused on the concept of a social welfare ordering (SWO) and that of a collective utility function (CUF) and discussed some of their axiomatic properties in detail. We have also briefly mentioned the fairness criteria of proportionality and envy-freeness. This material is covered in my lecture notes on fair division, which will also serve as the main reference for the remainder of the course.
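To see how different CUFs pull in different directions, here is a sketch comparing three standard ones on two hypothetical utility vectors (the numbers are toy data of my own): the utilitarian CUF sums individual utilities, the egalitarian CUF looks only at the worst-off agent, and the Nash CUF takes the product.

```python
import math

def utilitarian(u):
    """Sum of individual utilities."""
    return sum(u)

def egalitarian(u):
    """Utility of the worst-off agent."""
    return min(u)

def nash(u):
    """Product of individual utilities (requires Python 3.8+ for math.prod)."""
    return math.prod(u)

alloc_A = [7, 1, 1]   # high total utility, but very unequal
alloc_B = [3, 3, 2]   # lower total utility, more balanced

assert utilitarian(alloc_A) > utilitarian(alloc_B)   # 9 vs 8
assert egalitarian(alloc_B) > egalitarian(alloc_A)   # 2 vs 1
assert nash(alloc_B) > nash(alloc_A)                 # 18 vs 7
```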


12  Tuesday 9 October 2012 
Fair Allocation of Indivisible Goods (4up) We have discussed a number of issues pertaining to the problem of fairly allocating a set of indivisible goods to the members of a group. Our first topic has been the complexity of computing a socially optimal allocation. We have seen that when our objective is to maximise utilitarian social welfare, then this problem is NP-hard. We have also seen that we can get similar intractability results for other criteria, although positive results are possible under certain restrictive assumptions (e.g., under the assumption that all utility functions are additive, computing an allocation with maximal utilitarian social welfare is easy). Then we have noted that the choice of representation language for modelling the utility functions is a central aspect of any fair division problem of this (combinatorial) type. We have briefly mentioned some of the most important languages and then discussed languages based on weighted propositional formulas (or goals, for short) in detail. The analysis of such languages is an interesting research area in its own right. We have seen examples of technical results on the expressive power of such languages, the relative succinctness of different languages, and the computational complexity of reasoning about utility functions expressed in such languages. Finally, we have briefly talked about practical algorithms for computing socially optimal allocations and seen that integer programming is a powerful tool for designing them. We have exemplified the technique with an algorithm for maximising utilitarian social welfare for utility functions encoded as positively weighted conjunctions of literals and with an algorithm for maximising egalitarian social welfare for utility functions encoded using the so-called XOR-language. Most of the material of this lecture is covered in my lecture notes and the MARA Survey. 
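The easy case mentioned above can be seen in a few lines: under additive utilities, an allocation maximising utilitarian social welfare is obtained by simply giving every good to an agent who values it most. A sketch with hypothetical agents and utilities of my own:

```python
# Additive utilities: each agent's value for a bundle is the sum of her
# values for the individual goods in it.
utilities = {
    'ann': {'g1': 5, 'g2': 1, 'g3': 3},
    'bob': {'g1': 2, 'g2': 4, 'g3': 3},
}

def max_utilitarian_allocation(utilities):
    """Map each good to (one of) the agents valuing it most."""
    goods = next(iter(utilities.values())).keys()
    return {g: max(utilities, key=lambda a: utilities[a][g]) for g in goods}

allocation = max_utilitarian_allocation(utilities)
assert allocation == {'g1': 'ann', 'g2': 'bob', 'g3': 'ann'}

# The resulting utilitarian social welfare is the best achievable here:
welfare = sum(utilities[a][g] for g, a in allocation.items())
assert welfare == 5 + 4 + 3
```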
A possible starting point for delving deeper into the world of compact preference representation is the paper cited below (which includes the expressivity, succinctness and complexity results for weighted goals shown in class).
Related to the topic of this lecture, Sara Ramezani's 2008 MSc Logic thesis deals with the fair allocation of indivisible goods for a specific fairness criterion based on Nash's solution to the bargaining problem. 
Homework #6 (Deadline: 15 October 2012) 
13  Monday 15 October 2012 
Distributed Fair Division (4up) In this lecture we have introduced a distributed negotiation framework for the allocation of indivisible goods. In this framework, agents can agree on deals regarding the exchange of certain goods in a distributed and uncoordinated manner. The question then arises under which circumstances a sequence of deals, each of which satisfies certain rationality criteria as well as certain structural properties, can be expected to converge to an allocation that is socially optimal, according to a suitable fairness or efficiency criterion. We have studied this question in detail for the case of myopic and individually rational agents and in view of both utilitarian social welfare and envy-freeness. Here are the two main papers mentioned in the lecture:
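A toy run can illustrate the flavour of the convergence results: for additive utilities, a one-item deal with a suitable side payment is individually rational exactly when it passes the item to an agent who values it more, and any sequence of such deals terminates in an allocation maximising utilitarian social welfare. The sketch below (agents, goods, and numbers are hypothetical; side payments are left implicit) simulates such a sequence:

```python
utilities = {'ann': {'g1': 5, 'g2': 1}, 'bob': {'g1': 2, 'g2': 4}}
initial = {'g1': 'bob', 'g2': 'ann'}   # each good starts with the "wrong" agent

def negotiate(utilities, holder):
    """Perform individually rational one-item deals until none remains."""
    holder = dict(holder)
    improved = True
    while improved:
        improved = False
        for good, owner in holder.items():
            best = max(utilities, key=lambda a: utilities[a][good])
            if utilities[best][good] > utilities[owner][good]:
                holder[good] = best    # transfer the good (plus a payment)
                improved = True
    return holder

# Negotiation converges to the utilitarian-optimal allocation:
assert negotiate(utilities, initial) == {'g1': 'ann', 'g2': 'bob'}
```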


14  Tuesday 16 October 2012 
Cake-Cutting Algorithms (4up) This lecture has been an introduction to the literature on the fair division of a single divisible good. Such a good may be thought of as a "cake" to be divided amongst several agents, and the algorithms proposed to achieve this are commonly known as cake-cutting procedures. We have seen several different procedures for achieving either proportional or envy-free divisions. We have also talked about the communication requirements of such procedures, by measuring the number of basic queries (particularly cuts) required to execute a given procedure in the worst case. While the problem of proportional cake division is very well understood, devising envy-free procedures is much harder and there are challenging open problems regarding the design of practical procedures that would guarantee envy-freeness. The paper cited below is a good introduction to the literature. Besides putting forward a (then) novel procedure for achieving an envy-free division for four (or more) players, it also presents many of the classical procedures in a systematic manner. Finally, the material discussed in class is also included in my lecture notes.
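The simplest cake-cutting procedure, cut-and-choose for two agents, can be simulated numerically: agent 1 cuts the cake [0, 1] at a point she considers a 50/50 split and agent 2 picks the piece he values more, so both end up with a share worth at least 1/2 by their own measure (which makes the division proportional and, for two agents, envy-free). The valuation densities below are toy choices of mine:

```python
def value(density, a, b, step=0.001):
    """Riemann-sum value of the interval [a, b] under a valuation density."""
    n = round((b - a) / step)
    return sum(density(a + (i + 0.5) * step) * step for i in range(n))

def cut_and_choose(d1, d2):
    # Agent 1 locates her half-way cut point by bisection.
    lo, hi = 0.0, 1.0
    for _ in range(50):
        mid = (lo + hi) / 2
        if value(d1, 0, mid) < value(d1, 0, 1) / 2:
            lo = mid
        else:
            hi = mid
    cut = (lo + hi) / 2
    # Agent 2 chooses the piece he values more; agent 1 gets the other one.
    if value(d2, 0, cut) >= value(d2, cut, 1):
        return 'right', 'left', cut    # agent 1 receives the right piece
    return 'left', 'right', cut

d1 = lambda x: 1.0                      # agent 1 values the cake uniformly
d2 = lambda x: 2.0 if x > 0.5 else 0.0  # agent 2 only values the right half

piece1, piece2, cut = cut_and_choose(d1, d2)
assert abs(cut - 0.5) < 0.01            # agent 1 cuts in the middle
assert (piece1, piece2) == ('left', 'right')
```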
At the end of the lecture we have briefly reviewed the main issues covered in the course: The course was about three different models of collective decision making (namely preference aggregation and voting, fair division, and judgment aggregation) and we have analysed these models using a range of methodologies: the axiomatic method, complexity theory, the analysis of communication and information requirements, and questions relating to the compact representation of preferences. The specific problems addressed were motivated both by scientific curiosity and by a broad array of applications, ranging from political elections to multiagent systems. 
Homework #7 (Deadline: 26 October 2012) 