In the New York Times, Sam Wang has an essay under the headline, “The Great Gerrymander of 2012”. In it, he outlines the results of a method aimed at estimating the partisan seat allocation of the US House if there were no gerrymandering.
His method proceeds “by randomly picking combinations of districts from around the United States that add up to the same statewide vote total” to simulate an “unbiased” allocation. He concludes:
Democrats would have had to win the popular vote by 7 percentage points to take control of the House the way that districts are now (assuming that votes shifted by a similar percentage across all districts). That’s an 8-point increase over what they would have had to do in 2010, and a margin that happens in only about one-third of Congressional elections.
Then, rather buried within the middle of the piece is this note about 2012:
if we replace the eight partisan gerrymanders with the mock delegations from my simulations, this would lead to a seat count of 215 Democrats, 220 Republicans, give or take a few.
In other words, even without gerrymandering, the House would have experienced a plurality reversal, just a less severe one. The actual seat breakdown is currently 201D, 234R. Put differently, by Wang’s calculations, gerrymandering cost the Democrats seats equivalent to about 3.2% of the House. Yes, that is a lot, but it is just short of the 3.9% that separates the party’s actual 201 seats from the barest of majorities (218). But the core problem derives from the electoral system itself. Or, more precisely, from an electoral system designed to represent geography having to allocate a balance of power among organizations that transcend geography–national political parties.
Normally, with 435 seats and the 49.2%-48.0% breakdown of votes that we had in 2012, we should expect the largest party to have about 230 seats.1 Instead it won 201. That deficit between expectation and reality is equivalent to 6.7% of the House, suggesting that gerrymandering cost the Democrats just over half the seats that a “normally functioning” plurality system would have netted it.
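One rough way to see where such an expectation comes from is the classic “cube law” approximation, under which the ratio of the two leading parties’ seats tends to equal the cube of the ratio of their votes. This is a hypothetical illustration, not necessarily the method behind the ~230 figure, which may rest on a somewhat different seat–vote exponent:

```python
# Cube-law sketch: expected seats for the vote-plurality party under
# single-seat-district plurality, given the two leading parties' vote shares.
def expected_seats(v1: float, v2: float, seats: int = 435, exponent: float = 3.0) -> float:
    """Seat share implied by s1/s2 = (v1/v2)**exponent ('cube law' when exponent=3)."""
    ratio = (v1 / v2) ** exponent
    return seats * ratio / (1 + ratio)

# 2012 House vote shares from the text: 49.2% D, 48.0% R.
print(round(expected_seats(49.2, 48.0)))  # about 226 seats for the largest party
```

With the exponent fixed at 3 this yields about 226 seats, in the ballpark of the ~230 cited; a somewhat larger exponent reproduces the higher figure. Either way, the expectation sits well above the 201 seats the Democrats actually won.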
However, the “norm” here refers to two (or more) national parties without too much geographic bias in where those parties’ voters reside. Only if the geographic distribution is relatively unbiased does the plurality system deliver its supposed advantage in partisan systems: giving the largest party a clear edge in political power (here, the majority of the House). Add in a little bit of one big party being over-concentrated, and you can get situations in which the largest party in votes is under-represented, and sometimes not even the largest party in seats.
As I have noted before, plurality reversals are inherent to the single-seat district, plurality, electoral system, and derive from inefficient geographic vote distributions of the plurality party, among other non-gerrymandering (as well as non-malapportionment) factors. Moreover, they seem to have happened more frequently in the USA than we should expect. While gerrymandering may be part of the reason for bias in US House outcomes, reversals such as occurred in 2012 can happen even with “fair” districting. Wang’s simulations show as much.
The underlying problem is, again, because all the system really does is represent geography: which party’s candidate gets the most votes here, there, and in each district? And herein lies the big transformation in the US electoral and party systems over recent decades, compared to the party system that was in place in the “classic” post-war system: it is no longer as much about local representation as it once was, and is much more about national parties with distinct and polarized positions on issues.
Looking at the relationship between districts and partisanship, John Sides, in the Washington Post’s Wonk Blog, says “Gerrymandering is not what’s wrong with American politics.” Sides turns the focus directly on partisan polarization, showing that almost without regard to district partisanship, members of one party tend to vote alike in recent congresses. The result is that when a district (or, in the Senate, a state) swings from one party to the other, its member’s voting jumps clear past the median voter, from one relatively polarized position to the other.
Of course, this is precisely the point Henry Droop made in 1869, and that I am fond of quoting:
As every representative is elected to represent one of these two parties, the nation, as represented in the assembly, appears to consist only of these two parties, each bent on carrying out its own programme. But, in fact, a large proportion of the electors who vote for the candidates of the one party or the other really care much more about the country being honestly and wisely governed than about the particular points at issue between the two parties; and if this moderate non-partisan section of the electors had their separate representatives in the assembly, they would be able to mediate between the opposing parties and prevent the one party from pushing their advantage too far, and the other from prolonging a factious opposition. With majority voting they can only intervene at general elections, and even then cannot punish one party for excessive partisanship, without giving a lease of uncontrolled power to their rivals.
Both the essays by Wang and by Sides, taken together, show ways in which the single-seat district, plurality, electoral system simply does not work for the USA anymore. It is one thing if we really are representing district interests, as the electoral system is designed to do. But the more partisan a political process is, the more the functioning of democracy would be improved by an electoral system that represents how people actually divide in their partisan preferences. The system does not do that. It does even less well the more one of the major parties finds its votes concentrated in some districts (e.g. Democrats in urban areas). Gerrymandering makes the problem worse still, but the problem is deeper: the uneasy combination of a geography-based electoral system and increasingly distinct national party identities.
In the week since the US elections, several sources have suggested that there was a spurious majority in the House, with the Democratic Party winning a majority–or more likely, a plurality–of the votes, despite the Republican Party having held its majority of the seats.
It is not the first time there has been a spurious majority in the US House, but it is quite likely that this one is getting more attention1 than those in the past, presumably because of the greater salience now of national partisan identities.
Ballot Access News lists three other cases over the past 100 years: 1914, 1942, and 1952. Sources disagree, but there may have been one other between 1952 and 2012. Data I compiled some years ago showed a spurious majority in 1996, if we go by The Clerk of the House. However, if we go by the Federal Election Commission, we had one in 2000, but not in 1996. And I understand that Vital Statistics on Congress shows no such event in either 1996 or 2000. A post at The Monkey Cage cites political scientist Matthew Green as including 1996 (but not 2000) among the cases.
Normally, in democracies, we more or less know how many votes each party gets. In fact, it’s all over the news media on election night and thereafter. But the USA is different. “Exceptional,” some say. In any case, I am going to go with the figure of five spurious majorities in the past century: 1914, 1942, 1952, 2012, plus 1996 (and we will assume 2000 was not one).
How does the rate of five (or, if you like, four) spurious majorities in 50 elections compare with the wider world of plurality elections? I certainly do not claim to have the universe of plurality elections at my fingertips. However, I did collect a dataset of 210 plurality elections–not including the USA–for a book chapter some years ago,2 so we have a good basis of comparison.
Out of 210 elections, there are 10 cases of the second party in votes winning a majority of seats. There are another 9 cases of reversals of the leading parties in which no party won over 50% of seats. So reversals leading to a spurious majority occur in 4.8% of these elections; including the minority situations, reversals occur in 9%. The US rate would be 10%, apparently.
But in theory, a reversal should be much less common with only two parties of any significance. Sure enough: the mean effective number (N) of seat-winning parties in the spurious majorities in my data is just under 2.5, with only one under 2.2 (Belize, 1993, N=2.003, in case you were wondering). So the incidence in the US is indeed high–given that N by seats has never been higher than 2.08 in US elections since 1914,3 and that even without this N restriction, the rate of spurious majorities in the US is still higher than in my dataset overall.
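For readers unfamiliar with the index: the effective number of parties (the Laakso-Taagepera index) is the inverse of the sum of squared party shares. A minimal sketch of the calculation, using seat counts:

```python
# Effective number of parties: N = 1 / sum(p_i**2), where p_i are party shares.
def effective_N(seats: list[float]) -> float:
    total = sum(seats)
    return 1.0 / sum((s / total) ** 2 for s in seats)

print(effective_N([218, 217]))  # two near-equal parties: N very close to 2.0
print(effective_N([234, 201]))  # the 2012 House seat split: N just under 2.0
```

Two equal parties give N exactly 2; the index rises toward the raw party count as seats spread out, which is why a mean N near 2.5 among the spurious-majority cases signals meaningfully more fragmentation than the US House ever exhibits.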
I might also note that a spurious majority should be rare with a large assembly size (S). While the US assembly is small for the country’s population, it is still large in an absolute sense. Indeed, no spurious majority in my dataset of national and subnational elections from parliamentary systems has happened with S>125!
So, put in comparative context, the US House exhibits an unusually high rate of spurious majorities! Yes, evidently the USA is exceptional.4
As to why this would happen, some of the popular commentary is focusing on gerrymandering (the politically biased delimitation of districts). This is quite likely part of the story, particularly in some states.5
However, one does not need gerrymandering to get a spurious majority. As political scientists Jowei Chen and Jonathan Rodden have pointed out (PDF), there can be an “unintentional gerrymander,” too, which results when one party has its votes less optimally distributed than the other. The plurality system, in single-seat districts, does not tote up party votes and then allocate seats in the aggregate. All that matters is in how many of those districts you had the lead–even if only by a single vote. Thus a party that runs up big margins in some of its districts will tend to augment its total in the “votes” column at a faster rate than it augments its total in the “seats” column. This is quite likely the problem Democrats face, and it would have contributed to their winning only a minority of seats despite an (apparent) plurality of the votes.
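A toy example (hypothetical districts, not real data) makes the mechanics plain: a party can pile up huge surpluses in the districts it wins while narrowly losing everywhere else, and thereby lead in votes while trailing in seats:

```python
# Hypothetical five-district plurality election, 100 votes cast per district.
# Party A's vote counts; Party B gets the remainder in each district.
a_votes = [80, 80, 48, 48, 48]           # A wins two blowouts, loses three squeakers
b_votes = [100 - a for a in a_votes]

a_seats = sum(a > b for a, b in zip(a_votes, b_votes))
print(f"A: {sum(a_votes)} votes, {a_seats} seats")      # A: 304 votes, 2 seats
print(f"B: {sum(b_votes)} votes, {5 - a_seats} seats")  # B: 196 votes, 3 seats
```

Party A “wastes” roughly 30 votes beyond what it needs in each blowout district, while Party B spends its votes efficiently, leading narrowly in three districts. No district line was drawn with intent; the distribution alone produces the reversal.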
Consider the following graph, which shows the distribution (via kernel densities) of vote percentages for the winning candidates of each major party in 2008 and 2010.
We see that in the 2008 concurrent election, the Democrats (solid blue curve) have a much longer and higher tail of the distribution in the 70%-100% range. In other words, compared to Republicans the same year, they had more districts in which they “wasted” votes by accumulating many more than needed to win. Republicans, by contrast, tended that year to win more of their races by relatively tight margins–though their peak is still around 60%, not 50%. I want to stress that the point here is not to suggest that 2008 saw a spurious majority. It did not. Rather, the point is that even in a year when Democrats won both the vote plurality and the seat majority, they had a less-than-optimal distribution, in the sense of being more likely than Republicans to win by big margins.
Now, compare the 2010 midterm election, in which Republicans won a majority of seats (and at least a plurality of votes). Note how the Republican (dashed red) distribution becomes relatively bimodal. Their main peak shifts right (in more ways than one!) as they accumulate more votes in already safe seats, but they develop a secondary peak right around 50%, allowing them to pick up many seats narrowly. That the peak for winning Democrats’ votes moved so much closer to 50% suggests how much worse the “shellacking” could have been! Yet even in the 2010 election, the tail on the safe-seats side of the distribution still shows more Democratic votes wasted in ultra-safe seats than is the case for Republicans.6
I look forward to producing a similar graph for the 2012 winners’ distribution, but will await more complete results. A lot of ballots remain to be counted and certified. The completed count is not likely to reverse the Democrats’ plurality of the vote, however.
Given higher Democratic turnout in the concurrent election of 2012 than in the 2010 midterm election, it is likely that the distributions will look more like 2008 than like 2010, except with the Republicans retaining enough of those relatively close wins to have held on to their seat majority.
Finally, a pet peeve, and a plea to my fellow political scientists: Let’s not pretend there are only two parties in America. Since 1990, it has become uncommon, actually, for one party to win more than half the House votes. Yet my colleagues who study US elections and Congress continue to speak of “majority”, by which they mean more than half the mythical “two-party vote”. In fact, in 1992 and every election from 1996 through at least 2004, neither major party won 50% of the House votes. I have not ever aggregated the 2006 vote. In 2008, Democrats won 54.2% of the House vote, Republicans 43.1%, and “others” 2.7%. I am not sure about 2010 or 2012. It is striking, however, that the last election of the Democratic House majority and all the 1995-2007 period of Republican majorities, except for the first election in that sequence (1994), saw third-party or independent votes high enough that neither party was winning half the votes.
Assuming spurious majorities are not a “good” thing, what could we do about it? Democrats, if they are developing a systematic tendency to be victims of the “unintentional gerrymander”, would have an objective interest in some sort of proportional representation system–perhaps even as much as that unrepresented “other” vote would have.
Matthew Soberg Shugart, “Inherent and Contingent Factors in Reform Initiation in Plurality Systems,” in To Keep or Change First Past the Post, ed. André Blais. Oxford: Oxford University Press, 2008. [↩]
The original version of this statement, that “N is almost never more than 2.2 here” rather exaggerated House fragmentation! [↩]
Spurious majorities are even more common in the Senate, where no Republican seat majority since at least 1952 has been based on a plurality of votes cast. But that is another story. [↩]
For instance, see the map of Pennsylvania at the Think Progress link in the first footnote. [↩]
It is interesting to note that 2010 was very rare in not having any districts uncontested by either major party. [↩]
Just poking around a bit further in the Electoral Separation of Purpose data, as pictured and explained previously.
I wondered who the “ESP Champs” were of these cycles.
For 2008, I hereby crown Gene Taylor of Mississippi, who won 74.5% in his district on the same day that Obama managed 31.7%. Now that’s separation of purpose!
He still managed 47% even in 2010. Not bad, but not good enough.
In fact, that 2010 result makes Taylor one of only four Democrats to have won, at the midterm, more than 45% of the vote in a district in which Obama had won under 35%. But to be crowned champion for 2010, you should actually have won your race. So the 2010 title belongs to…
Dan Boren of Oklahoma, who won 56.5% in a district in which Obama had won 34.5%. This result still represented a massive adverse swing against Boren, who had 70.5% in 2008. But he held on.
With ESP numbers like these, we can see why some “blue” congressmen in deeply “red” districts were less than keen these past two years in coming to the support of Obama’s policy priorities. (This was a topic that generated considerable discussion in another thread earlier this month.)
Adam Bonica has posted some must-see graphs at Ideological Cartography. The graphs really drive home just how polarized the new US House of Representatives will be. The mean Democrat and mean Republican (and I suppose “mean” has both meanings here!) will be farther apart than any recent House, and the median of the entire House will be much more to the right than any in the past–notably more than the one elected in 1994. This follows the House elected in 2006, which was by far the most left-leaning House we have seen.
Another of Bonica’s graphs shows the extent to which entering Republicans are heavily skewed right. Exiting Democrats were less concentrated at any ideological position within their party, but the ranks of the moderates are going to be notably thinner.
Bonica concludes that “The polarization resulting from the 2010 Midterms is fundamentally different and more worrisome than what had preceded it.” Worrisome indeed.
When Barack Obama was elected President in 2008, the election produced the second lowest value of “Electoral Separation of Purpose” of the preceding five decades.
Electoral Separation of Purpose (ESP) is a concept developed in David J. Samuels and Matthew S. Shugart, Presidents, Parties, and Prime Ministers (Cambridge, 2010). It starts with the difference between presidential and legislative votes, at the district level, for a given party. It then can be expressed in a summary indicator by the average of the absolute values of all these differences.
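As a minimal sketch of that calculation (with made-up district percentages, not the book’s data):

```python
# ESP for one party: the mean absolute difference between its presidential
# and legislative vote percentages, taken across districts.
def esp(pres: list[float], leg: list[float]) -> float:
    assert len(pres) == len(leg)
    return sum(abs(p - l) for p, l in zip(pres, leg)) / len(pres)

# Hypothetical district-level percentages for a single party:
pres_pct = [60.0, 45.0, 52.0, 38.0]
leg_pct  = [66.0, 35.0, 51.0, 44.0]
print(esp(pres_pct, leg_pct))  # mean of |-6|, |10|, |1|, |-6| = 5.75
```

An ESP of 0 would mean the party’s presidential and House candidates ran identically in every district (perfectly fused fates); larger values mean more ticket-splitting or divergent district-level appeal.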
For Obama and the Democrats in 2008, ESP=10.45. In the book, we considered 42 observations for the USA (both parties in 21 elections through 2004); the only one lower than what we would see in 2008 was 8.79 for Democrats in 1996, when Bill Clinton won reelection.
That ESP would be relatively low in the Obama era is yet another window on the much talked-about “polarization” of US politics: votes for Congress now tend to be more similar to presidential votes at the (House) district level. In other words, the fates of members of the House are more tied to that of their co-partisan president (or presidential candidate) than used to be the case. Voters apparently do not “want different things” from congress and president as much as they once did (for instance, 1972 and 1974, ESPs of 20.4 and 25.8, respectively).
It is worth putting the 2008 election in comparative perspective, comparing both to other countries and to past US elections. When compared to other countries, a value of 10.45 is not especially low. Even when we eliminate all cases where presidential and legislative votes are “fused” (meaning ticket-splitting is impossible, so ESP=0), we still find that the 2008 Democratic ESP is at about the 60th percentile among 383 party-year observations from around the world. Even with polarization and tied fates, there is still a lot of room for divergence between presidential and congressional vote shares in the USA.
What is interesting is the pattern of this divergence. Below is the graph, where each data point is one of the House districts in 2008. Ignore the distinction between triangles and circles for now; we’ll get to that.
It is striking that in districts where the Democrat has over 50% of the legislative vote, Obama tends to run behind his co-partisan House candidate. That is, there are notably more points above the equality line for winning House Democratic districts than there are below the diagonal. Districts where he runs ahead of the Democratic House candidate tend to be where the party loses the congressional race. For instance, if Obama won about 60% of the vote in a given district, the Democrat tended to win around two thirds of the House vote. But if Obama won around 45% of the vote, the Democratic House candidate tended to get closer to 35% of the vote.
This pattern, which would be reflected by some sort of S-curve, had I bothered to try to plot it, seems to be a common feature of US elections. The graph for Republicans in 2004 (ESP=10.98) looks very similar (see p. 135 of the book). It is not a prevalent pattern in other countries. I suspect it has something to do with the “personal vote” of Representatives; incumbents run ahead of their party’s presidential candidate because some voters who vote for the presidential candidate of the other party nonetheless support the incumbent. However, I have not yet broken the data down by incumbency. In the losing districts, of course, much of it has to do with the Democrats’ not recruiting high-quality candidates in districts they were not likely to win anyway (but having a “high-quality” presidential candidate). Of course, this is a companion to the personal-vote story, whereby the Republican candidate was stronger and able to keep for his party some voters who had voted for Obama.
Does the graph shed any light on the electoral debacle suffered by Democrats this week? Not directly, although one can see at a glance the numerous districts in which the Democrat won despite the district having voted for McCain. Now here is where those triangles come in: they represent the districts that the Democrats lost in the 2010 midterm election. Not surprisingly, there are a lot of those in the part of the graph where Obama’s vote is less than 50%. In fact, over half the Democratic losses came in McCain 2008 districts. If that’s not a (mini-)realignment, it certainly is a readjustment.
However, the Democrats lost 29 districts in which Obama had won a majority in 2008. And here is where the pattern of 2008 Democratic House winners frequently having run ahead of Obama becomes so important. They had a “cushion” against an adverse swing against them, stemming from Obama’s unpopularity at the midterm, and they most certainly needed it!
In this second graph we see that ESP actually declined further in 2010. At first, it may seem odd that one could go from unified to divided government, yet electoral separation of purpose decreased. But that is what happened. In 2010, ESP for Democrats dropped to 10.00. Note the near disappearance of winning Democrats who are more than about ten percentage points above where Obama was in their district in 2008. In fact, what really stands out here is the extent to which Democrats who won over 50% of their own district vote are concentrated very near, or slightly below, the equality line. That’s a good case of tied fates!
The S-curve pattern is gone, other than a continued bow in losing Democratic districts, where Obama’s 2008 vote is still higher (and often by a bigger margin) than the Democratic House candidate in 2010.
There are still some survivors in McCain districts, and they are about the only ones to still be running well ahead of Obama. If they could survive the great Democratic fall of 2010, they just might survive anything.
Now for the cross-time comparison. The following graph shows the ESP values for the president’s party for every US election since 1956, except for years following reapportionment and redistricting (and 1966, for mysterious reasons).
There is a clear trend in recent elections of declining ESP. No election for which we have data had ESP for the president’s party below 12.0 until 1996. The 1970s, and to a lesser extent the 1980s, were the days of high ESP, with Republicans often winning the presidency but Democrats keeping the House. Even in 1976, when Carter won, ESP was 14.55. Maybe this explains why Carter had so much trouble with his own party: they knew the president was less popular than they were. The graph from that election (not posted; I can’t post everything!) shows a huge bow of the S-curve above the equality line where practically all the Democratic House winners are found.
But note the almost steady downward trend after 1984, when Reagan was reelected. The 1994 midterm, when Democrats lost their House majority under Clinton, continued that downward trend. So 2010 is not unique in being an election that produces a transition to divided government yet sees ESP drop. However, in spite of the decline in ESP, it was still the case that most Democratic winners in 1994 were running ahead of where Clinton had been in 1992. Part of this is owed to the three-way presidential race in 1992. (All these graphs show actual vote percentages, not percentages of the “two-party vote.”) But then Clinton and the Democrats had tightly shared fates in 1996.
After a big upward blip in ESP in 1998, when Democrats had a rare seat gain in a midterm election, we enter the 2000s with ESP hovering in the 10-12 range.^
We really are in uncharted territory by US standards. We have not seen such closely tied presidential and legislative electoral fates at any other point in the last five decades or more.
What this might mean going forward is hard to say. I don’t have that kind of ESP! Or maybe it is not so hard. If Obama is reelected in 2012, it is unlikely to be with a broad personal victory like Nixon in 1972 and Reagan in 1984, which represent two of the three highest ESP concurrent elections. (The other is 1988, when the senior Bush effectively won Reagan’s “third term.”) But therein lies a ray of good news for Democrats–who are surely looking for such rays about now. Normally, if a President is reelected, he does so without much of a “pull” on the House races. However, we have already seen two incumbent presidents win a second term with a drop in ESP. In addition to Clinton, already mentioned as the lowest US ESP so far, the same happened with G.W. Bush (ESP=12.27 when he, uh, became president in 2000,* and a drop to 10.98 in 2004).
In such a low-ESP environment, with partisan fates so tied, it is entirely plausible that a reelected Obama would carry enough of that cluster of districts near 50% to regain a House majority. If he loses, of course, then so might several more Democratic House members. Such are the perils of governing and campaigning when electoral separation of purpose is tending to run so low, by historic US standards.
^ The 1998 plot shows a large number of Democratic winners well above where they had been in 1996, and thus also well above where Clinton ran in their districts in his low-ESP reelection in 1996. (This footnote was added a couple of days after initial planting.)
* ESP for Democrats in 2000 was a little higher (13.07), presumably because Gore ran well behind many Democratic incumbents. That the value would be so much higher than it had been for the Clinton-Gore team in 1996 really drives home how much Gore failed to cement the Democratic coalition that swung so tightly behind Clinton in 1996.
Given that, some time around the mid-1990s, the US entered the brave new world of relatively unified partisan voting–relative to its own past, not to most democracies–it is hardly surprising that recent Houses of Representatives have used things like “self-executing” rules to pass bills.
I scarcely pay attention to the various noise machines that constitute “debate” in the US media, but some of it penetrates anyway, and I have been befuddled over all the flap over the use of such procedures.
Some on the right (which, to be clear, used the tactic when it was in power) even claim that what the Democrats are prepared to do today to pass their bill is unconstitutional. Last time I checked, the constitution was pretty clear that each chamber of the legislature has blanket authority to do what it wants with regard to internal procedure. (In fact, it is that blanket authority upon which rests the right’s cherished–at least for now–Senate filibuster rule.)
I will count myself as among those who would like to see more, not less, use of self-executing rules. Along with similar (and similarly derided) rules like “fast track,” such rules are among the few devices that exist in the fragmented US political system for promoting collective accountability. By limiting amendments and debate, and likewise limiting individual accountability of members for difficult votes, self-executing rules and fast track enhance the capacity of parties to act–and to be held accountable at the next election. In other words, they are fundamental devices of democracy.
See also the good insights at PoliBlog (on the procedures and on the bill itself).
One way that the Democratic Party can prevent a loss of the Massachusetts Senate seat from stopping their healthcare program from becoming law, without either reopening negotiations (e.g. trying to get one of the Maine Republicans to vote with them) or using hardball tactics (e.g. finding a procedure to pass the bill without needing 60 votes), is for the House simply to adopt the Senate bill. Then it would not require another vote in the Senate.
Not counting a two-election increase when Alaska and Hawaii were added,** the House size has not changed since the 1912 election. Back then the US had about 95 million people, or around a third of what it has today!
The House used to be expanded periodically to track population size (see graph at the second-linked item). Why not now? As the NYT notes, the US judicial system is about to be asked that question.
Some advocates of increased House size have suggested a House of over 1,000 Representatives. That’s ridiculous–and hardly helpful to the cause. The cube-root law (again, see second link) would suggest 620-660. But, really, even 600, or 550, would help restore Representativeness considerably.
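For the arithmetic behind those numbers: the cube-root law says assembly size tends to track the cube root of population. Whether one takes total population or only the adult (active) population is a judgment call, which is presumably where a range like 620-660 comes from; the population figures below are round numbers for illustration, not precise census counts:

```python
# Cube-root law of assembly size: S is approximately P ** (1/3).
def cube_root_seats(population: float) -> int:
    return round(population ** (1 / 3))

print(cube_root_seats(309e6))  # total US population, circa 2010: about 676
print(cube_root_seats(235e6))  # adult population only: about 617
print(cube_root_seats(95e6))   # the circa-1912 population: about 456
```

Note the last line: even the population of 1912 implies an assembly somewhat larger than 435, which underscores how far below its cube-root benchmark the House has drifted a century later.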
* To one of the arms of the federal government, anyway.
** That is, those states came into the union between censuses, and a seat was added for each. With the subsequent reapportionment, those states’ Representatives came at the expense of voters in other states, in order to return the House to 435.
Handing California and western environmental policy advocates a big win, U.S. Rep. Henry Waxman wrested control of the House Energy and Commerce Committee from Michigan Congressman John Dingell Thursday morning. Waxman won the gavel fight for control of the committee over the more senior Dingell by a vote of 137 to 122.
Simon Jackman quoting a press item on a former student, Sean Theriault, notes that the US “Congress is the most polarized it has been in a century.”
A quote from Theriault states that “The electoral campaign has infiltrated the legislative process.”
Interesting choice of words, there–infiltrating. As Larry Bartels suggests in a comment to Jackman’s blog post:
Incidentally, here is the question on political parties from this year’s American Politics qualifying exam at Princeton: “In 1950, American political scientists wanted a more responsible two-party system. Now they have it. How have they reacted? What light does recent scholarship shed on the empirical assertions and normative commitments animating earlier scholarly writing on political parties and the American party system?”
The rest of Bartels’ comment suggests that if one were to answer the question, one would focus on, among other things, changes (if any) on the ability of presidents to get their way, as well as the tendency for party-line voting in congress.
I agree that these are among the best indicators of whether the US has reached something like ‘responsible party government’ (and whether, if so, it might be here to stay). However, I do not accept the premise of the question: that the US indeed has now what (some) political scientists in 1950 wanted, a responsible 2-party system.
If the “bailout” vote didn’t reveal what 6 years of single-party control already should have made clear, let me give it a try: The US, even at its peak of party polarization and executive-legislative constituency overlap,1 does not have a ‘responsible’ party system. Under the imaginary import of the idealized UK system, it would make no sense that ‘earmarks’ would go up precisely under partisan polarization and unified government. Nor would it make sense that the leaders of the parties (who in any case would not need to bargain with one another under a UK-style ‘responsible’ 2-party system) could not deliver sufficient support on a critical piece of ‘emergency’ legislation until they spread around copious amounts of pork.
The party system that has emerged in the last decade or so is the worst of both worlds: More frequent party-line voting (but not–refreshingly!–on the bailout), yet rampant ducking for cover through district- and interest-group-focused amendments for which a single ruling party as a whole can’t be held responsible. I am pretty sure that is not what the 1950 APSA committee had in mind. And I am just as sure that what they had in mind is out of step with the institutional structure of the system they were attempting to graft it on to.
[The last two paragraphs are from my comment to Jackman's blog post.]
Here I am referring to the concept of Electoral Fusion of Purpose, which is an indicator of the extent to which executive and legislative candidates (or lists) get their electoral support from the same geographical constituencies. The index maxes out at 1.00, with total overlap. If it were 0 (and empirically it never is, only rarely falling below .5) it would mean that all of the president’s votes came from places where the legislators of the party got no votes, and vice versa. For US Republicans in 2004, the Fusion index reached .915 (and Democrats .885). By contrast, in 1964, both parties were around .6, and even as recently as 1980, they were under .8. (Electoral Fusion of Purpose is a theme of one chapter of Presidents, Parties, and Prime Ministers, by David J. Samuels and Matthew S. Shugart (Cambridge, forthcoming).) [↩]
If by my laws you walk, and my commands you keep, and observe them,
then I will give-forth your rains in their set-time,
so that the earth gives-forth its yield
and the trees of the field give-forth their fruit.
--Vayikra 26: 3-4