Monday, December 31, 2007

Survey Experiments: Dictator Game in a Mass Survey

Another clever way of embedding a behavioral experiment into a survey:
Rene Bekkers. 2007. "Measuring Altruistic Behavior in Surveys: The All-or-Nothing Dictator Game." Survey Research Methods, Vol 1, No 3 (open access link here). In surveys, respondents are often given some kind of compensation for their participation. This survey did so as well, but Bekkers also gave respondents the opportunity to donate their payment to a charity rather than keep it for themselves. Here is the abstract:
A field study of altruistic behaviour is presented using a modification of the dictator game in a large random sample survey in the Netherlands (n=1,964). In line with laboratory experiments, only 5.7% donated money. In line with other survey research on giving, generosity increased with age, education, income, trust, and prosocial value orientation.

Bekkers's method here also differs from common dictator game approaches in that the subjects were allocating "earned" endowments rather than windfall endowments. Past research suggests that such determinants of "asset legitimacy" affect what subjects do with their endowments. Note again that it is the survey process itself that allows for this approach to be taken. Of course, since there is no variation in this treatment, we are limited in how much we can learn about such phenomena. Maybe that was a missed opportunity?

In a complementary analysis of self-reported contributions to charities, Bekkers follows Smith, Kehoe, and Cremer (1995) ("The private provision of public goods") in using 2SLS to estimate correlates of giving conditional on having given at all. He claims that a randomized treatment assigned in the survey serves as the excluded instrument, satisfying the exclusion restriction for the 2SLS estimates. There seemed to be a bit of hand-waving on this, and results from the first stage are not reported in Table 2. I found the presentation of the 2SLS results too vague.
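For readers who haven't worked with 2SLS in this setting, here is a minimal sketch (in Python, with made-up variable names and coefficients, not Bekkers's data) of what reporting the first stage looks like; the point is simply that the first-stage fit is part of the evidence and should be shown alongside the second-stage estimates:

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 1000
    z = rng.binomial(1, 0.5, n)            # hypothetical randomized survey treatment (instrument)
    u = rng.normal(size=n)                 # unobserved taste for giving
    gave = (0.5 * z + u + rng.normal(size=n) > 0).astype(float)   # endogenous "gave at all" indicator
    amount = 2.0 * gave + u + rng.normal(size=n)                  # self-reported contribution

    # First stage: instrument -> endogenous regressor. This is the regression
    # whose results I would want to see reported alongside Table 2.
    first = sm.OLS(gave, sm.add_constant(z)).fit()
    print(first.summary())

    # Second stage: replace the endogenous regressor with its first-stage fitted
    # values. (The manual two-step gives the right point estimate but incorrect
    # standard errors; a canned 2SLS routine corrects them.)
    second = sm.OLS(amount, sm.add_constant(first.fittedvalues)).fit()
    print(second.params)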

Costs of Conflict

An important issue in the study of violent conflict is the nature of the economic costs that conflict imposes. On the one hand, instability drives away investment, and both the resources committed to fighting and the destruction wrought by conflict steal from productive capacity. On the other hand, it is conceivable that political change ushered in via conflict can produce a redistribution of assets and opportunities that increases long-run efficiency. The economic costs of conflict are conventionally understood as fundamental in determining whether protagonists decide to fight.

A major difficulty in addressing this issue is measurement. A new paper by Zussman, Zussman, and Nielsen in the current Economica (gated link here) shows how asset market data can be used to measure the economic effects of conflict. Their key methodological contribution is a way of identifying "turning points" in financial time series. They apply this method to Israeli and Palestinian asset market series and find that it does a good job of identifying turning points that correspond to key events in the Israeli-Palestinian conflict. Aggregates over the periods between turning points can then be used to summarize the economic costs of conflict. Doing so, they find that "rough calculation based on the results of our analysis yields a drop of 22% in the value of the Tel Aviv Stock Exchange resulting from the outbreak of the Intifada and an increase of 25% in market value arising from the adoption of the Road Map peace plan."
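I don't reproduce their turning-point procedure here, but as a crude stand-in for the general idea, here is a sketch (Python, simulated data, not their method) that flags local peaks and troughs of a price index within a rolling window; the spans between flagged dates are the kind of segments over which one would aggregate:

    import numpy as np

    def turning_points(series, k=20):
        """Flag indices that are local maxima or minima within a +/- k window."""
        series = np.asarray(series)
        points = []
        for t in range(k, len(series) - k):
            window = series[t - k:t + k + 1]
            if series[t] == window.max() or series[t] == window.min():
                points.append(t)
        return points

    rng = np.random.default_rng(1)
    index = np.cumsum(rng.normal(0, 1, 500))    # simulated random-walk price index
    print(turning_points(index)[:10])           # candidate turning points to inspect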

Returning to the headline numbers: can we interpret them as saying that about a quarter of Israeli asset market value was sensitive to the conflict? If so, one wonders what factors determine how sensitive markets are to conflict. And to what extent does this number characterize general economic sensitivity to conflict in Israel? To the extent that markets are made more resistant to conflict, are incentives to fight altered?

Volunteering, Happiness, and a Clever Natural Experimental Design

Here's the abstract from an interesting paper by Stephan Meier and Alois Stutzer in the current issue of Economica (gated link here):


Stephan Meier and Alois Stutzer. "Is Volunteering Rewarding in Itself?" Economica 75(297): 39–59.

Abstract

Volunteering constitutes one of the most important pro-social activities. Following Aristotle, helping others is the way to higher individual wellbeing. This view contrasts with the selfish utility maximizer, who avoids helping others. The two rival views are studied empirically. We find robust evidence that volunteers are more satisfied with their life than non-volunteers. The issue of causality is studied from the basis of the collapse of East Germany and its infrastructure of volunteering. People who lost their opportunities for volunteering are compared with people who experienced no change in their volunteer status.



That's a clever research design.

Monday, December 17, 2007

Error Correction Models for Elections

Virginia and I were looking at how to implement error correction models (ECMs) to study the stability of vote shares for parties and incumbents. Thought I would share some of the material in case others are working on these topics too. The main paper that we looked at was the McDonald and Best (2006) Political Analysis paper (gated link here). They use ECMs to study variation in the stability of incumbent vote shares under different electoral systems. It's an interesting approach, although Virginia and I agreed that they should have used seemingly-unrelated regressions to estimate the trends for the parties in the different countries (you have to look at the paper for that to be meaningful).

Also, here's a link to a little tutorial that I wrote up that works through an ECM example from Wooldridge's Introductory Econometrics (a.k.a. "baby Wooldridge"). It shows a full implementation of the Engle-Granger two-step method as well as the Banerjee et al. one-step method. The example in the tutorial is a standard ECM for two cointegrated series. It is a bit more sophisticated than what is going on in the McDonald and Best paper, which only evaluates error correction in a single series.
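For those who want the flavor without opening the tutorial, here is a minimal sketch of the Engle-Granger two-step procedure in Python on simulated cointegrated series (not the Wooldridge data):

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(2)
    T = 300
    x = np.cumsum(rng.normal(size=T))        # an I(1) series
    y = 0.8 * x + rng.normal(size=T)         # cointegrated with x

    # Step 1: estimate the long-run (levels) relationship; the residuals are the
    # error-correction term. (Test them for stationarity, e.g. with adfuller.)
    step1 = sm.OLS(y, sm.add_constant(x)).fit()
    ec = step1.resid

    # Step 2: regress the first difference of y on the first difference of x and
    # the lagged error-correction term. A negative, significant coefficient on the
    # lagged EC term is the "pull back toward equilibrium" effect.
    dy, dx = np.diff(y), np.diff(x)
    step2 = sm.OLS(dy, sm.add_constant(np.column_stack([dx, ec[:-1]]))).fit()
    print(step2.params)                      # constant, dx, lagged EC term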

Wednesday, December 12, 2007

Does "war in mixed strategies" make sense?

At yesterday's political economy workshop, our own Massimo Morelli presented a working paper that he co-authored with Matthew Jackson entitled "Strategic Militarization, Deterrence, and Wars" (paper is here). The centerpiece of the paper is an analysis of a mixed strategy equilibrium involving states playing "hawkish," "dovish", and "deterrence" strategies, with wars occurring when "hawkish" and "dovish" strategies interact.

This is the second paper that we've seen at the political economy workshop this Fall where fighting has occurred only in mixed strategies. The other paper was a working paper by Ernesto Dal Bo and Robert Powell on civil conflict resulting from a situation where governments hold private information on the size of a centrally controlled endowment from which contenders seek a share (paper is here).

This raises the usual questions about whether such mixed strategy "fighting" equilibria are plausible characterizations of behavior. In the Dal Bo and Powell paper, contenders mix over fighting and not fighting to "keep the government honest". However, the contender's decision to fight comes after she has already received her share of the endowment from the government. For the equilibrium to be believable, we have to accept that even after the contender's share has been paid out, she would still, with positive probability, initiate an insurgency that offers no additional gain. This strikes me as implausible.

In the Morelli and Jackson paper, things are a little better insofar as the timing of the game does not conflict so much with the logic behind the mixed strategy. But it's still a case where wars are initiated as the result of joint randomization of strategies.

However, Massimo argued for another interpretation. Rather than thinking in terms of randomizing strategies in the context of a bilateral interaction, think of one central actor facing many adversaries. Then we can imagine a mixed strategy as one in which the central actor plays a different strategy with each adversary such that the distribution of strategies conforms to the equilibrium "mix". My thought was that this interpretation stretches things a bit. It would seem to require that the adversaries all act as if they are playing a bilateral game with the central actor; but clearly in any real world situation, the adversaries would condition their behavior on what the central actor is doing vis-a-vis the other adversaries. So, to be convinced, I would have to see that the same equilibrium "mix" holds in a compelling, respecified game with one central actor and multiple adversaries, in which the central actor chooses war with a subset of the adversaries in pure strategies. I just don't think that something like war makes any sense as an element of a mixed strategy.
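To see why the indifference logic bothers me, here is a toy illustration (my own stylized payoffs, not the Dal Bo-Powell or Jackson-Morelli games): in a 2x2 mixed equilibrium, the contender's probability of fighting is pinned down by the government's payoffs rather than her own, so the "decision" to go to war carries no payoff consequence for the party nominally making it.

    import numpy as np

    # Rows: government plays Generous / Stingy; columns: contender plays Accept / Fight.
    gov = np.array([[2.0, -1.0],
                    [3.0, -2.0]])
    con = np.array([[2.0, 1.0],
                    [0.0, 1.0]])

    # The contender's indifference condition pins down p = Pr(government plays Generous):
    p = (con[1, 1] - con[1, 0]) / ((con[0, 0] - con[1, 0]) - (con[0, 1] - con[1, 1]))
    # The government's indifference condition pins down q = Pr(contender Fights):
    q = (gov[1, 0] - gov[0, 0]) / ((gov[0, 1] - gov[1, 1]) - (gov[0, 0] - gov[1, 0]))
    print(p, q)    # 0.5 and 0.5 with these payoffs; "war" occurs with probability q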

Tuesday, December 11, 2007

Caillaud and Tirole on Group Persuasion

A paper by Caillaud and Tirole in the current AER (gated link) extends the analysis of sender-receiver games to study group persuasion. They motivate the paper with the example of the sponsor of a policy proposal attempting to persuade a committee. Here's the abstract:

The paper explores strategies that the sponsor of a proposal may employ to convince a qualified majority of members in a group to approve the proposal. Adopting a mechanism design approach to communication, it emphasizes the need to distill information selectively to key group members and to engineer persuasion cascades in which members who are brought on board sway the opinion of others. The paper shows that higher congruence among group members benefits the sponsor. The extent of congruence between the group and the sponsor, and the size and the governance of the group, are also shown to condition the sponsor’s ability to get his project approved.


And here are some of their counterintuitive results:

We showed that adding veto powers may actually help the sponsor, while an increase in external congruence [of preferences among the sponsor and committee members] may hurt him; that a [committee member with more congruent preferences to the sponsor] may be worse off than an a priori more dissonant member; and that, provided that he can control channels of communication, the sponsor may gain from creating ambiguity as to whether other members really are on board. Finally, an increase in internal congruence [of preferences among committee members] always benefits the sponsor.


On a technical note, they do not specify a game form but rather study the communication mechanisms that are optimal for the sponsor in his effort to win approval from the committee. The empirical content of the model includes some comparative statics on how size, preference congruence, and voting rules affect the likelihood of proposal acceptance and thus the stability of the policy status quo.

Monday, December 10, 2007

Non-response and False Response in Corruption Surveys of Firms

Abstract from a new working paper by Rahman, Li, and Jensen:

Heard melodies are sweet, but those unheard are sweeter: understanding corruption using cross-national firm-level surveys

2007-11-01

By: Rahman, Aminur; Li, Quan; Jensen, Nathan M.

http://d.repec.org/n?u=RePEc:wbk:wbrwps:4413&r=dev

Since the early 1990s, a large number of studies have been undertaken to understand the causes and consequences of corruption. Many of these studies have employed firm-level survey data from various countries. While insightful, these analyses based on firm-level surveys have largely ignored two important potential problems: nonresponse and false response by the firms. Treating firms' responses on a sensitive issue like corruption at their face value could produce incorrect inferences and erroneous policy recommendations. We argue that the data generation of nonresponse and false response is a function of the political environment in which the firms operate. In a politically repressive environment, firms use nonresponse and false response as self-protection mechanisms. Corruption is understated as a result. We test our arguments using the World Bank enterprise survey data of more than 44,000 firms in 72 countries for the period 2000-2005 and find that firms in countries with less press freedom are more likely to provide nonresponse or false response on the issue of corruption. Therefore, ignoring this systematic bias in firms' responses could result in underestimation of the severity of corruption in politically repressive countries. More important, this bias is a rich and underutilized source of information on the political constraints faced by the firms. Nonresponse and false response, like unheard melodies, could be more informative than the heard melodies in the available truthful responses in firm surveys.



This is an important type of analysis. By construction, the survey serves as an experiment to test a behavioral model of firms' willingness to report corruption. This type of "incidental survey experiment" is a nice way to perform secondary data analysis. Not only do we learn something about behavior, but we also learn something about the data itself.
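To see the mechanics, here is a toy simulation (my own stylized numbers, not the enterprise survey data) of how politically driven hiding of corruption makes the repressive-country corruption rate look artificially low:

    import numpy as np

    rng = np.random.default_rng(3)
    n = 10000
    repressive = rng.binomial(1, 0.5, n)        # firm operates in a repressive country
    corrupt = rng.binomial(1, 0.4, n)           # "true" exposure to corruption, same everywhere
    # Firms in repressive environments hide their exposure more often:
    hide = corrupt * rng.binomial(1, np.where(repressive == 1, 0.6, 0.1))
    reported = np.where(hide == 1, 0, corrupt)  # false "no corruption" answers

    for r in (0, 1):
        true_rate = corrupt[repressive == r].mean()
        observed = reported[repressive == r].mean()
        print(f"repressive={r}: true {true_rate:.2f} vs reported {observed:.2f}")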

When is something a "0" versus an "NA"?

In a recent talk in the department, a presenter wanted to code leadership transition outcomes according to a binary rule, where "transition with punishment" outcomes were coded as 1's and "transition with no punishment" outcomes were coded as 0's. The "transition with no punishment" cases included natural death or assassination while in office.

It seemed to me that this coding didn't make sense. When you have death in office, the case should be considered as censored, no? In these cases, you are not able to observe whether the outcome is truly "punishment" or not. Think about it. So the "death in office cases" are not really cases of "no punishment" but rather "not observed." Then, one needs to decide whether these cases should just be dropped from the analysis or whether a selection correction should be included.

I claim that a terrible thing to do is to lump censored observations into the "0" category. Here's my reasoning. We know that simply dropping cases with censored data (listwise deletion) leads to bias when the likelihood of censoring depends on the outcome (Y). But when the likelihood of censoring depends only on the explanatory factors (the X's) of interest, listwise deletion is not biased. Labeling censored cases as 0's, however, can lead to bias if censoring is associated with either Y or the X's.

To see an example of how this works, consider the following contingency table:

Table 1: Table to Estimate Pr(Y=1|X=0) and Pr(Y=1|X=1) When No Censoring Is Present

    |X=0|X=1|
----+---+---+
Y=1 | 2 | 2 |
----+---+---+
Y=0 | 2 | 2 |
----+---+---+

The relationship shown in Table 1, which is the "true" relationship since all data is observed, is that Pr(Y=1|X=0) equals Pr(Y=1|X=1). Now suppose that there is a 1/2 chance that data will be missing when X=0 and a 0 chance that it will be missing otherwise. Listwise deletion would produce the following table:

Table 2: Table to Estimate Pr(Y=1|X=0) and Pr(Y=1|X=1) Given Censoring Dependent on X and Listwise Deletion

    |X=0|X=1|
----+---+---+
Y=1 | 1 | 2 |
----+---+---+
Y=0 | 1 | 2 |
----+---+---+

Table 2 leads us to infer correctly that Pr(Y=1|X=0) equals Pr(Y=1|X=1). However, if we assume the same censoring mechanism, then labeling censored observations as "Y=0" gives the following:

Table 3: Table to Estimate Pr(Y=1|X=0) and Pr(Y=1|X=1) Given Censoring Dependent on X and Labeling Censored Obs as Y=0

    |X=0|X=1|
----+---+---+
Y=1 | 1 | 2 |
----+---+---+
Y=0 | 3 | 2 |
----+---+---+

Table 3 leads us to infer incorrectly that Pr(Y=1|X=0) does not equal Pr(Y=1|X=1). If the "true" relationship is one in which Pr(Y=1|X=0) does not equal Pr(Y=1|X=1), things can be similarly messed up (try it with simple contingency table examples). What a mess...
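For those who prefer simulation to contingency tables, here is a quick sketch (Python, made-up numbers mirroring the tables above) showing that with censoring that depends only on X, listwise deletion recovers Pr(Y=1|X) while coding censored cases as Y=0 does not:

    import numpy as np

    rng = np.random.default_rng(4)
    n = 100000
    x = rng.binomial(1, 0.5, n)
    y = rng.binomial(1, 0.5, n)                   # true Pr(Y=1|X=0) = Pr(Y=1|X=1) = 0.5
    censored = (x == 0) & (rng.random(n) < 0.5)   # half the X=0 cases are unobserved

    # Listwise deletion: drop the censored cases.
    keep = ~censored
    for v in (0, 1):
        print("listwise deletion, X=%d:" % v, round(y[keep & (x == v)].mean(), 2))   # ~0.5 for both

    # Coding censored cases as Y=0:
    y0 = np.where(censored, 0, y)
    for v in (0, 1):
        print("censored as zero,  X=%d:" % v, round(y0[x == v].mean(), 2))           # ~0.25 vs ~0.5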

Friday, December 7, 2007

Presentation on "Globalization's Losers"

In the department today, Yotam Margalit presented some results from his study on "Globalization's Losers: Trade, Culture and the Politics of Discontent." (Abstract of his research is here.) He was looking at the chain linking socio-economic attributes, negative attitudes toward globalization and integrationist policies, and voting for right or left parties. The data that he presented suggested that predictions from the ol' Heckscher-Ohlin model do seem to conform to reality if we relate the model to income levels, assuming that owners of relatively scarce factors (conditional on economic openness) correspond to the poor in developed countries and the middle class or rich in developing countries. But the predictions do not fare well when we relate the model to the ideological leanings of voters and parties, assuming that protectionism should be the cause of the left in developed countries and the right in developing countries. The latter prediction---specifically, that antiglobalization forces should be decidedly on the left in developed countries---is not borne out by the data. Why? Margalit claims that there is a second dimension, one that he calls "perception of cultural threat," which intervenes. Antiglobalization forces, it seems, are animated by leftist economic concerns but rightist "cultural concerns." When you put the two together, you get antiglobalization voters and parties that are all over the single-dimensional ideological spectrum. When you permit parties and voters to locate themselves freely in a two-dimensional ideological space, something that proportional representation comes close to allowing, you get clustering in the "cultural right, economic left" region of the two-dimensional space.

Margalit's analysis leads us to conclude that what he labels as "cultural concerns" disrupt what otherwise would be a clean mapping between economic interests and political expression of dislike for globalization. But we are left to ponder, what exactly are these "cultural concerns"? Are they really just idiosyncratic, country-specific factors---some kind of error term---that must be studied on a case by case basis? Such was the way that Margalit responded to questions about what the cultural factors were. Does that make sense? Or are there systematic forces---racism? religion? generic fear of change?---at work here?

"Cellphones Challenge Poll Sampling" (NYT)

Interesting article on how survey sampling in the US is being affected by the fact that many people no longer keep land-line telephones and instead have only cell phones. The article states that "the issue came up [during polling for the US elections] in 2004, but cellphone-only households in 2003 were 3 percent of the total. They now run 16 percent, according to Mediamark Research." More from the article:

According to data from the Centers for Disease Control and Prevention’s National Health Interview Survey, adults with cellphones and no land lines are more likely to be young — half of exclusively wireless users are younger than 30 — male, Hispanic, living in poverty, renting a residence and living in metropolitan regions.

The Pew Research Center conducted four studies last year on the differences between cellphone and land line respondents. The studies said the differences were not significant enough to influence surveys properly weighted to census data. With the increase in cellphone-only households, that may not be the case next year. Researchers, including the New York Times/CBS News poll will test that by incorporating cellphones in samples.

The estimates in the Health Interview Study suggest that cellphone-only households are steadily increasing.

“If the percentage of adults living in cell-only households continues to grow at the rate it has been growing for the past four years, I have projected that it will exceed 25 percent by the end of 2008,” Stephen J. Blumberg, a senior scientist at the National Center for Health Statistics, wrote in an e-mail message.

The American Association for Public Opinion Research has been examining the question and formed a group to study it. The association says it will issue its report early next year.



Clearly, the limited impact on election polling need not carry over to surveys that aim to measure other population parameters. The weighting fix proposed by Pew is adequate when estimating simple population parameters (e.g., proportions), but things get much dicier when we move to a regression framework. (See this article by Andrew Gelman for a recent take.)
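Here is a minimal post-stratification sketch (Python, with made-up strata shares) of the kind of weighting fix at issue: reweight the under-covered young, cellphone-only respondents up to their census share when estimating a simple proportion.

    import numpy as np

    # Suppose the census says 30% of adults are under 30, but only 10% of a
    # landline sample is. Post-stratification weight = population share / sample share.
    sample_share = {"under30": 0.10, "30plus": 0.90}
    pop_share    = {"under30": 0.30, "30plus": 0.70}
    weights = {g: pop_share[g] / sample_share[g] for g in sample_share}

    rng = np.random.default_rng(5)
    n = 2000
    young = rng.random(n) < sample_share["under30"]
    outcome = np.where(young, rng.random(n) < 0.6, rng.random(n) < 0.4)   # attitude differs by age

    w = np.where(young, weights["under30"], weights["30plus"])
    print("unweighted:", outcome.mean())                   # ~0.42, reflects the skewed sample
    print("weighted:  ", np.average(outcome, weights=w))   # ~0.46, closer to the population value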

Climate Change Polls Summary

WorldPublicOpinion.org has a summary of recent polls on attitudes around the world toward climate change:

A new analysis by WorldPublicOpinion.org of 11 recent international polls conducted around the world shows widespread and growing concern about climate change. Large majorities believe that human activity causes climate change and favor policies designed to reduce emissions.

In most countries, majorities see an urgent need for significant action. For example, a recent poll for the BBC by GlobeScan and the Program for International Policy Attitudes (PIPA) found that majorities in 15 out of 21 countries felt that it was necessary to take “major steps, starting very soon” to address climate change. In the other six countries polled, opinion was divided over whether “major” or “modest steps” were needed. Only small minorities thought no steps were necessary.

The analysis included polls from the BBC/GlobeScan/PIPA, the Pew Research Center, GlobeScan, WorldPublicOpinion.org/Chicago Council on Global Affairs, the German Marshall Fund, and Eurobarometer. (Link to report.)


Seems like this data would provide a nice starting point for a global public goods provision analysis. Does it make sense to study how regime structure mediates the way public interest is channeled into action? How would one structure the analysis? Or does it only make sense to study dynamics associated with responses to climate change in terms of global-level bargaining? Or maybe a two-level game framework would make sense---could we use these data, interacted with domestic regime, to estimate the size of the domestic "win-set"?...

Thursday, December 6, 2007

A Take on the "Plausible Instrumental Variables" Debate

In our "quantitative methods in poli sci" seminar today, Andy Gelman and Piero Stanig debated the importance of random assignment of instrumental variables for valid causal inference. The claim being debated was whether it is true that random assignment (literally random---i.e. picking balls from urns, coin flips, etc.) of the instrument in addition to the exclusion restriction on instrumental variables (i.e. no direct effects of the instrument on outcome) and significant first stage are all required to draw valid causal inference from IV regressions. Here's my take on the discussions. If any of you had other interpretations, please comment!

First, it was plain for everyone to see that random assignment of the instrument is not sufficient to ensure that the exclusion restriction is satisfied, and therefore is not sufficient to provide leverage for causal inference. As an example, the Vietnam draft lottery numbers used as instruments for military service by Angrist, Imbens and Rubin (1996) (gated) were randomly assigned, but the exclusion restriction may have been violated if one's draft lottery number had consequential effects other than through military service (e.g., other lifestyle changes that may have resulted from receiving a particular lottery number). Also, it hardly needs to be said that random assignment is not sufficient to ensure that the "significant first stage" assumption holds.
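A small simulation makes this first point concrete (Python, hypothetical coefficients, not the draft lottery data): the instrument below is literally randomized, yet because it also affects the outcome directly, the IV estimate is biased.

    import numpy as np

    rng = np.random.default_rng(6)
    n = 50000
    z = rng.binomial(1, 0.5, n)                   # randomly assigned instrument
    u = rng.normal(size=n)                        # unobserved confounder
    d = (0.8 * z + u + rng.normal(size=n) > 0).astype(float)       # treatment (e.g., service)
    y = 1.0 * d + 0.5 * z + u + rng.normal(size=n)  # the 0.5*z term violates the exclusion restriction

    # Wald / IV estimate: cov(y, z) / cov(d, z)
    iv_estimate = np.cov(y, z)[0, 1] / np.cov(d, z)[0, 1]
    print(iv_estimate)    # well above the true treatment effect of 1.0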

Second, it was understood that random assignment helps to ensure that values of the instrument are ignorable relative to values of the outcome---i.e., values of a randomly assigned instrument are surely not determined by potential outcomes---i.e., values of the instrument are exogenous relative to the outcome (three ways of saying the same thing).

Third, random assignment helps to ensure that the variation in the values of the instrument is not the result of some unobserved factor that determines the values of the instrument, the explanatory variable being instrumented, and the outcome. It is in this broader web of relations that random assignment helps to ensure that the exclusion restriction is, at least in part, satisfied. This was probably the most important point that came up during the discussion. It is associated with the "it's culture" argument that is often used to challenge results in comparative politics, with culture being an unobserved factor that simply determines everything. The idea here is that without random assignment, one needs to think hard about the mechanisms through which changes in the value of the instrumental variable are brought about. Is it reasonable to believe that those mechanisms are not sneaking exclusion restriction violations in through the back door? And are we confident that a change in the instrument resulting from that mechanism will then produce a change in the explanatory variable in the manner estimated in the first stage?

But neither the second nor the third point above would have us conclude that assignment by way of genuine randomization (picking balls from urns, coin flips, unpredictable weather patterns, etc.) is necessary for the "ignorability," "exclusion restriction," and "significant first stage" assumptions to hold. Nonrandom assignment simply means that you will have to think harder about whether these requirements are met. I think in the end everyone was willing to accept that.

Aside from the debate, Piero presented an interesting application of some sensitivity analysis methods to test for the consequences of "slight" violations of the exclusion restriction. He pointed us to this working paper by Conley, Hansen and Rossi for more details.

Also, Bob Erikson and Andy proposed using some "common sense" in thinking through whether your IV methods are valid: Suppose you are using rainfall as an instrument for economic growth as a predictor of civil war. Say to yourself, "I just showed that rainfall is associated with civil war." Now think a bit: are you then led to say, "Ah yes, it must be because rainfall determines growth, which we have reason to believe is related to civil war"? Do we believe that "must"?

UPDATE: Andy Gelman has his own thoughts on the debate here.

Monday, December 3, 2007

Applied Bargaining Models and Mechanism Design

Abstract from an interesting working paper, "Uncertainty and Incentives in Crisis Bargaining: Game-Free Analysis of International Conflict," by Mark Fey and Kristopher Ramsay (available here):

The formal literature on international conflict has identified the combination of uncertainty and the incentive to misrepresent private information as a central cause of war. But there is a fundamental problem with using game-theoretic models to formulate general claims such as these---whether and to what extent a result that holds in a particular choice of game form continues to hold when different modeling choices are made is typically unknown. To address this concern, we present techniques from Bayesian mechanism design that allow us to establish general "game-free" results that must hold in any equilibrium of any game form in a broad class of crisis bargaining games. We focus on three different varieties of uncertainty that countries can face and establish general results about the relationship between these sources of uncertainty and the possibility of peaceful resolution of conflict. We find that in the most general setting of uncertainty about the value of war, there is no equilibrium of any possible crisis bargaining game form that allows the unilateral use of force that completely avoids the chance of costly war.


This is part of a series of papers that Fey and Ramsay have been writing on so-called "game-free" analysis of crisis bargaining. The approach is appealing because it lends itself to consideration of classes of conflict resolution mechanisms that ought to be robust to changes in the bargaining procedure. As Fey and Ramsay describe, this type of analysis is especially relevant in crisis bargaining contexts. It is often the fluidity of the bargaining context that makes crisis bargaining situations distinct from institutionalized bargaining in, say, legislatures. (Although one may argue that distributions of power lend some structure to the bargaining situation. For example, the dynamic "ultimatum game" structure of Acemoglu and Robinson's transitions game, Boix's transitions game, and Fearon's civil war settlement game derives from assumptions about which actors have the capacity to make proposals.)

Interestingly, I came to this article after having assisted Macartan in preparing a draft of a review article on coalitional analysis. That article gives considerable attention to cooperative game theory and the Nash program. I see some similarity between what Fey and Ramsay are doing and the Nash program---basically, defining classes of games that implement axiomatic solutions.

Friday, November 30, 2007

What this blog is about

This blog has been created to provide a forum for members of the Columbia poli sci community working in comparative political economy to discuss issues related to their research.

The general intention is to create a place for PhD students to discuss, but faculty are also very welcome to contribute as a way to communicate to grad students about issues in CPE and to initiate discussion associated with their own research.

By focusing on "comparative political economy" the blog privileges discussion of a certain subset of topics in the broader comparative politics research domain:

  • Perhaps the least restrictive are the substantive parameters: pretty much any subject within comparative politics is in, including social policy, voting, political violence, regime transitions and regime stability, economic development, political development, electoral systems, social movements, etc.
  • But more restrictive are the methodological commitments, which include discussion of statistical methods, writing and interpreting formal theories, causality and inference, research design, social science measurement/data, and concept development for social-scientific research.
So let's make this a place to post and discuss ideas about research projects (both those in the works and ones that we wish we could do if we had time), musings on things that you have read or heard, links to interesting papers or webpostings, thoughts carried over from conversations, questions about methods, links to information on new data, etc. There are a couple of posts below that give some flavor.

Cheers,
Cyrus

"Elective Feudalism"

I was trying to get a little context on things in Pakistan these days, and I found an interesting post by William Dalrymple here. He uses the term "elective feudalism" to describe what he calls "Pakistan's strange variety of democracy." It's an interesting term to add to the repertoire. This is actually a common way that those who follow Pakistan's history and politics tend to interpret the political style and support base of Pakistan's civilian leaders. It helps to clarify the unsavoriness of the choices that citizens of Pakistan face in helping to decide the political trajectory of their country---choices that may be just as much about issues of exclusion and social mobility as about debates over Islamism.

Thursday, November 29, 2007

Instrumental Variables in Reverse

Here's a proposition about using instrumental variables for causal effects that Kelly makes based on 1998 and 2000 papers by Robert Erikson and Thomas Palfrey on campaign spending and electoral success (find the papers in Google Scholar here):

Suppose that A causes B and B causes A. We want to estimate the effect of A on B. Typically, we are told that we need an instrument for A to isolate this effect. The challenge of finding an instrument for A is often insurmountable. But the Erikson/Palfrey papers propose that if certain restrictions are met, we can identify the effect of A on B by first finding an instrument for B, then identifying B's effect on A, and then using that to identify the effect of A on B. If it is possible to find an instrument for B, then we have solved our problem.


So this will be a running post. I haven't read the papers yet, but it would be good to know more about the restrictions of such an approach.

Development of Domestic Trade Infrastructure

The World Bank recently released a Trade Logistics Performance Index. In the index, countries are ranked according to their trade logistics "friendliness." The rankings, which are online here, are interesting, with Singapore and the Netherlands ranking at the top and Timor-Leste and Afghanistan at the bottom. (Given my own research, I noted that Burundi was somehow ranked considerably higher than Rwanda, which struck me as a bit odd. But weird things always happen in composite rankings.)

Perhaps more interesting are the considerations that the index inspires for comparative political economy folks. To the extent that developing countries' best shot at improving their lots is via trade, one is led to ask a series of questions:
  • To what extent are choices---rather than fixed conditions like land-lockedness, terrain, or natural resource availability---responsible for such variation in domestic friendliness to trade?
  • What kinds of choices matter most---private choices in the market or choices of governments? How are market and government choices interrelated in determining trade friendliness?
The index provides one outcome measure for a study of variation in countries' trade infrastructure. And it seems to me that whether market forces or governments are largely responsible for such differences is the key question.

Thus, an interesting research program would be to explain what combinations of market and government forces result in more or less "trade friendly" environments. This fits in neatly with studies of public goods provision, but with a slightly different emphasis than many existing studies.

One way to start on such a study would be to choose a set of countries from different strata on the list and examine their domestic trade infrastructure. Looking within a country, one could randomly select elements from different strata of the trade infrastructure---e.g., elements of the transport infrastructure. From there, one could study whether such elements were the result of private provision, public provision, or some combination. Thinking about some instances in the U.S., for example: early railroads were the result of private provision, but the U.S. mail system was established by the government at the time of the republic's founding as a way to boost interstate trade. My home town, Chadds Ford, is named after a private ferry service that operated across the Brandywine River in the pre-Revolutionary period. These are all contributions to the trade logistics environment of the country. A mapping exercise of this sort in a few countries would illuminate ways that new trading opportunities are created or seized, with important implications for the study of economic development. For us political scientists, there can be no doubt that distributional concerns and collective action problems have played their part in determining levels of provision.