Powering Africa: Observations from Kenya

By Catherine Wolfram (UC Berkeley)

During his trip to Africa at the end of June, President Obama announced the Power Africa initiative. The press release highlighted several goals, including adding generation capacity in the six target countries, which include Kenya, and increasing the number of households and businesses with access to electricity by at least 20 million.

I was recently in Kenya meeting with potential partners for a research project that will measure rural households’ demand for grid connections, as well as the social and economic benefits of bringing people electricity. (The project is joint with Professors Ted Miguel and Eric Brewer and funded, in part, by USAID’s new Higher Education Solutions Network (HESN).) I gained several insights into the opportunities for growth in the local power sector, as well as the challenges of bringing power to more Kenyans.

Let me start with a couple of facts. The total electric generating capacity in Kenya is about 1,700 MW. By comparison, the generating capacity in California, whose population is 40 million compared to Kenya’s 45 million, is 70,000 MW. Kenya has plans to add substantial capacity in the near future, including several large geothermal projects.
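To put those figures on a per-person basis (a back-of-the-envelope calculation using the numbers above, not one from the original post):

$$\frac{70{,}000\ \text{MW}}{40\ \text{million people}} \approx 1{,}750\ \text{W per person in California}, \qquad \frac{1{,}700\ \text{MW}}{45\ \text{million people}} \approx 38\ \text{W per person in Kenya},$$

a gap of roughly 45 to 1 in installed capacity per capita.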

On the distribution side, the Rural Electrification Authority in Kenya has made tremendous strides over the past six years building out the low-voltage distribution network. Nationwide, more than three-quarters of the Kenyan people now live within 1.2 km of the grid. We visited a regional office for the agency and saw rows and rows of transformers, waiting to be installed, so this share will likely grow even higher in the future.

New transformers awaiting installation

Kenya Power Company, which operates the distribution system nationwide, will connect a household to the grid as long as it’s within 600 meters of a transformer, so many households are within striking distance. Here’s the catch, though. The household has to pay about $400 to KPC for the connection, and there is talk that the company plans to increase the connection charge to almost $900 this summer. In a country where the per capita income is around $800, most households are priced out of a connection.

As a result, roughly 20 percent of the population actually has electricity in their homes. More than half of the people in the country are living under the grid without access to it.

Many households in Kenya are near the grid, but not yet connected

I met with a grandfatherly gentleman I’ll call Mr. X in rural Kisumu, close to Lake Victoria. His house, on a steep hill overlooking a picturesque valley, is about 100 meters downhill from a secondary school that began receiving electricity three years ago. He quietly answered questions about his living situation and smiled patiently at my attempts to thank him in Swahili (“asante sana”).

Mr. X became animated when the conversation turned to “stima” or electricity. He was indignant that the nearby school had electricity but he did not. When probed, he told us that the only reason he did not have power was the large connection charge – he could pay for the wiring in his home and afford the monthly payments.

Without electricity, Mr. X spends about $7 per week on kerosene, which he uses to cook and to power a large, pressurized kerosene lamp that lights his whole house. On top of that, buying kerosene each week means paying about $1.25 for a motor scooter ride to the nearest village, about 5 km away.
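Annualizing those figures (my arithmetic, using the amounts Mr. X reported):

$$(\$7.00 + \$1.25)\ \text{per week} \times 52\ \text{weeks} \approx \$429\ \text{per year},$$

which is on the order of the $400 connection charge itself.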

When asked what he would most like to do if he got electricity, he mentioned cooking and lighting his home, so his kerosene costs would likely decline significantly with a connection. He also wanted to iron his clothes and operate a welder; the latter could potentially bring him more income.

Access to electricity has the potential to transform many lives – creating income-generating opportunities, allowing children to study later at night and replacing expensive, time-consuming and polluting alternatives such as kerosene. As energy economists, we have many opportunities to learn about the benefits of electricity as well as the best business and policy models to use to increase access. Programs like Power Africa can be hugely impactful, so we need to make sure we do them right.


About the author:
Catherine Wolfram is an Associate Professor of Business Administration at the UC Berkeley Haas School of Business and co-director of the Energy Institute at Haas. Wolfram has published extensively on the economics of energy markets. She has studied the electricity industry around the world and has analyzed the effects of environmental regulation, including climate change mitigation policies, on the energy sector. She is currently implementing several randomized controlled trials to evaluate energy efficiency programs.

This post was also published on the Energy Collective and the Energy Economics Exchange of the Energy Institute at Haas.

Jelly Beans and Research Transparency

For those following our Berkeley Initiative for Transparency in the Social Sciences (BITSS), and others who appreciate good scientific humor, the following XKCD comic cleverly illustrates the problem that CEGA researchers and our concerned colleagues seek to address:

"Significant," courtesy of XKCD.com

“Significant,” courtesy of XKCD

And for those who love green jelly beans and are prone to acne: it’s probably safe to keep eating them in large quantities.
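For readers who want to see the comic’s logic in numbers, here is a small simulation (the sample sizes and the normal approximation are my own choices, not anything from the strip): no jelly bean color has any effect on acne, yet testing twenty colors at the conventional 0.05 threshold turns up at least one “significant” color in roughly two out of three studies.

```python
# A minimal simulation of the multiple-comparisons problem the comic lampoons.
# Sample sizes and the 20 "colors" are illustrative assumptions.
import math
import random
import statistics

random.seed(0)

def two_sample_p(x, y):
    """Approximate two-sided p-value for a difference in means (normal approximation)."""
    se = math.sqrt(statistics.variance(x) / len(x) + statistics.variance(y) / len(y))
    z = (statistics.mean(x) - statistics.mean(y)) / se
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

n_simulations, n_colors, alpha = 1000, 20, 0.05
false_alarms = 0

for _ in range(n_simulations):
    any_significant = False
    for _ in range(n_colors):
        # Acne scores for eaters and non-eaters of this color; no true effect anywhere.
        eaters = [random.gauss(0, 1) for _ in range(50)]
        non_eaters = [random.gauss(0, 1) for _ in range(50)]
        if two_sample_p(eaters, non_eaters) < alpha:
            any_significant = True
    false_alarms += any_significant

print(f"Share of studies with at least one 'significant' color: {false_alarms / n_simulations:.2f}")
# With 20 independent tests at alpha = 0.05, expect roughly 1 - 0.95**20, or about 0.64.
```

That is the punchline: with twenty tries, a “green jelly beans linked to acne” headline is more likely than not, even when nothing is going on.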

April 25th Symposium on Climate Change and Development

On Thursday, April 25th, CEGA will host its fourth annual research symposium, Evidence to Action: Promoting Global Development in a Changing Climate, together with the Abdul Latif Jameel Poverty Action Lab (J-PAL) and the Energy Institute at Haas.

This year’s program will explore the critical nexus of economic development and climate change. The challenges are complex: global temperature changes can exacerbate food insecurity and human conflict, undermining economic growth in less developed countries. At the same time, poverty reduction is rapidly increasing the demand for energy, which threatens to accelerate carbon emissions and environmental pollution. How do we address the tensions between climate, environment, and global development?

There is too little evidence regarding the various policy and technological solutions to climate change. Our affiliates are working to change that. Presentations will highlight a series of experiments revealing what works in terms of climate change adaptation and mitigation, what doesn’t work, and why. Rigorous impact evaluations like these are bolstering the knowledge base and – as we will see on April 25th – generating evidence that is transforming public policy.

Tackling climate change in low-income countries is not a job for academics, policymakers, or the private sector alone. Rather, a synergistic approach is needed to understand the issues and address the challenges facing individuals, communities, industries, and governments. This year’s symposium will build a foundation for future research and policy action in this area.

The symposium is free and open to the public. Please share your thoughts on the event by commenting on this post. Register here!

Evidence to Action (E2A) is CEGA’s annual research symposium. Every year, E2A highlights a pressing issue related to global poverty and showcases the potential of rigorous research to inform better policy-making in developing countries. Past E2A symposia include The Road from Conflict to Recovery (2012), The Returns to Investment in Girls (2011), and Global Health and Education (2010).

The Role of Failure in Promoting Transparency

By Carson Christiano (CEGA)

You may wonder why a network of development researchers is taking the lead on a transparency initiative. The answer lies in the profound and omnipresent power of failure.

Most would agree that risk-taking is essential to innovation, whether we’re talking about creating a simple hand-washing station or a state-of-the-art suspension bridge. At the same time, we tend to highlight our successes while downplaying ambiguous research results, misguided technologies, and projects that fail to achieve their desired impact. We fear humiliation and the curtailment of donor interest. Yet open discussion about what doesn’t work, in addition to what works, is critical to our eventual success as innovators. We at the Center for Effective Global Action (CEGA), like so many others working towards social change, believe strongly that there should be “no silent failures” in development.

In Silicon Valley, failure is regulated by the market. Venture capitalists don’t invest in technologies that consumers won’t buy. In the social impact space, particularly in developing countries where consumer demand is difficult to quantify, donors and governments rely on loose, assumption-laden predictions of return on investment. Because millions of people in poor countries around the world stand to benefit from (and potentially be harmed by) large-scale social and economic development programs, CEGA and our research partners maintain a steadfast commitment to research transparency as a moral imperative.

That being said, it is infinitely easier to commit to the concept of research transparency than to actually engage in it. We all know that writing a pre-analysis plan takes precious time and resources; study registration holds us accountable for the results of our research, which may not turn out as we expect. How, then, can we change behavior around research transparency, and encourage researchers to accept (and admit) failure?

More…

An Open Discussion on Promoting Transparency in Social Science Research

By Edward Miguel (Economics, UC Berkeley)

This CEGA Blog Forum builds on a seminal research meeting held at the University of California, Berkeley on December 7, 2012. The goal was to bring together a select interdisciplinary group of scholars – from biostatistics, economics, political science and psychology – with a shared interest in promoting transparency in empirical social science research.

There has been a flurry of activity regarding research transparency in recent years, within the academy and among research funders, driven by a recognition that too many influential research findings are fragile at best, if not entirely spurious or even fraudulent.  But the increasingly heated debates on these critical issues have until now been “siloed” within individual academic disciplines, limiting their synergy and broader impacts. The December meeting (see presentations and discussions) drove home the point that there is a remarkable degree of commonality in the interests, goals and challenges facing scholars across the social science disciplines.

This inaugural CEGA Blog Forum aims to bring the fascinating conversations that took place at the Berkeley meeting to a wider audience, and to spark a public dialogue on these critical issues with the goal of clarifying the most productive ways forward.   This is an especially timely debate, given: the American Economic Association’s formal decision in 2012 to establish an online registry for experimental studies; the new “design registry” established by the Experiments in Governance and Politics, or EGAP, group; serious discussion about a similar registry in the American Political Science Association’s Experimental Research section; and the emergence of the Open Science Framework, developed by psychologists, as a plausible platform for registering pre-analysis plans and documenting other aspects of the research process. Yet there remains limited consensus regarding how exactly study registration will work in practice, and about the norms that could or should emerge around it. For example, is it possible – or even desirable – for all empirical social science studies to be registered? When and how should study registration be considered by funders and journals?

More…

Bayes’ Rule and the Paradox of Pre-Registration of RCTs

By Donald P. Green (Political Science, Columbia)

Not long ago, I attended a talk at which the presenter described the results of a large, well-crafted experiment.  His results indicated that the average treatment effect was close to zero, with a small standard error.  Later in the talk, however, the speaker revealed that when he partitioned the data into subgroups (men and women), the findings became “more interesting.”  Evidently, the treatment interacts significantly with gender.  The treatment has positive effects on men and negative effects on women.

A bit skeptical, I raised my hand to ask whether this treatment-by-covariate interaction had been anticipated by a planning document prior to the launch of the experiment.  The author said that it had.  The reported interaction now seemed quite convincing.  Impressed both by the results and the prescient planning document, I exclaimed “Really?”  The author replied, “No, not really.”  The audience chuckled, and the speaker moved on.  The reported interaction again struck me as rather unconvincing.

Why did the credibility of this experimental finding hinge on pre-registration?  Let’s take a step back and use Bayes’ Rule to analyze the process by which prior beliefs were updated in light of new evidence.  In order to keep the algebra to a bare minimum, consider a stylized example that makes use of Bayes’ Rule in its simplest form.
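Green’s own stylized example sits behind the “More…” link below; as a placeholder, here is a sketch of the same logic with invented numbers (every quantity below is my assumption, not his). Let H be the hypothesis that the treatment-by-gender interaction is real, with prior P(H) = 0.1, and let S be the event that a significant interaction is reported, with power P(S | H) = 0.8. If the interaction was the single pre-specified subgroup test, P(S | not H) = 0.05; if it was instead the best of, say, ten subgroups searched after the fact, P(S | not H) ≈ 1 - 0.95^10 ≈ 0.40. Bayes’ Rule then gives

$$P(H \mid S) = \frac{P(S \mid H)\,P(H)}{P(S \mid H)\,P(H) + P(S \mid \neg H)\,P(\neg H)} \approx
\begin{cases}
\dfrac{0.8 \times 0.1}{0.8 \times 0.1 + 0.05 \times 0.9} \approx 0.64 & \text{pre-registered,}\\[1.5ex]
\dfrac{0.8 \times 0.1}{0.8 \times 0.1 + 0.40 \times 0.9} \approx 0.18 & \text{post hoc,}
\end{cases}$$

so the same reported interaction warrants very different degrees of belief depending on whether it was specified in advance.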

More…

Monkey Business

By Macartan Humphreys (Political Science, Columbia & EGAP)

I am sold on the idea of research registration. Two things convinced me.

First, I have been teaching courses in which each week we try to replicate prominent results produced by political scientists and economists working on the political economy of development. I advise against doing this because it is very depressing. In many cases the data are not available, or the results cannot be replicated even when they are. But even when results can be replicated, they often turn out to be extremely fragile. Look at them sideways and they fall over. The canon is a lot more delicate than it lets on.

Second, I have tried out registration for myself. That was also depressing, this time because of what I learned about how I usually work. Before doing the real analysis on data from a big field experiment on development aid in Congo, we (Raul Sanchez de la Sierra, Peter van der Windt and I) wrote up a “mock report” using fake data on our outcome variables. Doing this forced us to make myriad decisions about how to do our analysis without the benefit of seeing how the analyses would play out. We did this partly for political reasons: a lot of people had a lot invested in this study, and if they had different ideas about what constituted evidence, we wanted to know that up front, not after the results came in. But what really surprised us was how hard it was to do. I found that not having access to the results made it all the more obvious how much I am used to drawing on them when crafting analyses and writing: for simple decisions such as which exact measure to use for a given concept, which analyses to deepen, and which results to emphasize. More broadly, that’s how our discipline works: the most important peer feedback we receive, from reviewers or in talks, generally comes after our main analyses are complete and after our peers have been exposed to the patterns in the data. For some purposes that’s fine, but it is not hard to see how it could produce just the kind of fragility I was seeing in published work.
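The mock-report idea generalizes beyond their study. Here is a minimal sketch of the workflow in code (the variable names, the data-generating choices, and the commented-out loader are all hypothetical, not materials from the Congo experiment): write the full analysis against placeholder outcomes, freeze it, and only then swap in the real data.

```python
# Sketch of a "mock report" workflow: pre-specify the analysis on fake outcomes,
# then rerun the identical code once real outcomes arrive.
# All names and data-generating choices here are illustrative assumptions.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n_villages = 200

# In practice the design (assignment, baseline covariates) comes from the study;
# here it is simulated.
design = pd.DataFrame({
    "treated": rng.integers(0, 2, n_villages),
    "baseline_income": rng.normal(100, 20, n_villages),
})

def make_fake_outcomes(design, rng):
    """Placeholder outcomes with no relationship to treatment, used only to draft the report."""
    return rng.normal(0, 1, len(design))

def prespecified_analysis(df):
    """The frozen primary specification, decided before any real outcomes are seen."""
    model = smf.ols("outcome ~ treated + baseline_income", data=df).fit(cov_type="HC1")
    return model.params["treated"], model.bse["treated"]

# Step 1: draft and circulate the mock report on fake outcomes.
mock = design.assign(outcome=make_fake_outcomes(design, rng))
print("Mock-report estimate (pure noise by construction):", prespecified_analysis(mock))

# Step 2: once real outcomes arrive, the only change is the data source.
# real = design.assign(outcome=load_real_outcomes())   # hypothetical loader
# print("Registered estimate:", prespecified_analysis(real))
```

The payoff is the discipline Humphreys describes: every analytic decision has to be made before the real outcomes can influence it.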

These experiences convinced me that our current system is flawed. Registration offers one possible solution.

More…

Targeted Learning from Data: Valid Statistical Inference Using Data Adaptive Methods

By Maya Petersen, Alan Hubbard, and Mark van der Laan (Public Health, UC Berkeley)

Statistics provide a powerful tool for learning about the world, in part because they allow us to quantify uncertainty and control how often we falsely reject null hypotheses. Pre-specified study designs, including analysis plans, ensure that we understand the full process, or “experiment”, that resulted in a study’s findings. Such understanding is essential for valid statistical inference.

The theoretical arguments in favor of pre-specified plans are clear. However, the practical challenges to implementing such plans can be formidable. It is often difficult, if not impossible, to generate a priori the full universe of interesting questions that a given study could be used to investigate. New research, external events, or data generated by the study itself may all suggest new hypotheses. Further, huge amounts of data are increasingly being generated outside the context of formal studies. Such data provide both a tremendous opportunity and a challenge to statistical inference.

Even when a hypothesis is pre-specified, pre-specifying an analysis plan to test it is often challenging. For example, investigating the effect of compliance with a randomly assigned intervention forces us to specify how we will contend with confounding. What identification strategy should we use? Which covariates should we adjust for? How should we adjust for them? The number of analytic decisions, and the impact of those decisions on conclusions, is multiplied further when losses to follow-up, biased sampling, and missing data are considered.
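To see why these choices bite, here is a small simulation of my own (not from the authors): compliance is partly driven by a baseline health variable, so the unadjusted and covariate-adjusted estimates of the effect of complying differ noticeably, which is exactly the kind of fork an analysis plan must resolve in advance.

```python
# Simulation: the effect of compliance is confounded, so analytic choices matter.
# All numbers are illustrative assumptions.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 5000

health = rng.normal(0, 1, n)                  # baseline covariate
assigned = rng.integers(0, 2, n)              # randomized assignment
# Healthier people are more likely to comply when assigned.
complied = assigned * (rng.random(n) < 0.3 + 0.4 * (health > 0)).astype(int)
# True effect of receiving the treatment is 1.0; health also raises the outcome.
outcome = 1.0 * complied + 0.8 * health + rng.normal(0, 1, n)

def ols_coef(y, regressors):
    """Coefficient on the first regressor (after the constant)."""
    X = sm.add_constant(np.column_stack(regressors))
    return sm.OLS(y, X).fit().params[1]

print("Unadjusted compliers-vs-others estimate:", round(ols_coef(outcome, [complied]), 2))
print("Adjusted for baseline health:           ", round(ols_coef(outcome, [complied, health]), 2))
# The two specifications disagree because compliance is not random;
# a pre-analysis plan has to commit to one identification strategy up front.
```

Under this data-generating process the unadjusted comparison overstates the true effect of 1.0 by roughly a third, while adjusting for the baseline covariate recovers it; which answer to trust depends on assumptions that should be written down before the data are seen.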

More…

Transparency and Pre-Analysis Plans: Lessons from Public Health

By David Laitin (Political Science, Stanford)

My claim in this blog entry is that political science will remain principally an observation-based discipline and that our core principles for establishing findings as significant should consequently be based upon best practices in observational research. This is not to deny that there is an expanding branch of experimental studies which may demand a different set of principles; but those principles add little to confidence in observational work. As I have argued elsewhere (“Fisheries Management” in Political Analysis 2012), our model for best practices is closer to the standards of epidemiology than to those of drug trials. Here, through a review of the research program of Michael Marmot (The Status Syndrome, New York: Owl Books, 2004), I evoke the methodological affinity of political science and epidemiology, and suggest the implications of this affinity for evolving principles of transparency in the social sciences.

Two factors drive political science into the observational mode. First, much as the Centers for Disease Control gets an emergency call describing an outbreak of some hideous virus in a remote corner of the world, political scientists see it as core to their domain to account for anomalous outbreaks (e.g., that of democracy in the early 1990s) wherever they occur. Not unlike epidemiologists seeking to model the hazard of SARS or AIDS, political scientists cannot randomly assign secular authoritarian governments to some countries and orthodox authoritarian governments to others to get an estimate of the hazard rate into democracy. Rather, they merge datasets looking for patterns, theorize about them, and then put the implications of the theory to the test with other observational data. Accounting for outcomes in the real world drives political scientists into the observational mode.

More…

Freedom! Pre-Analysis Plans and Complex Analysis

By Gabriel Lenz (UC Berkeley)

Like many researchers, I worry constantly about whether findings are true or merely the result of a process variously called data mining, fishing, capitalizing on chance, or p-hacking. Since academics face extraordinary incentives to produce novel results, many suspect that “torturing the data until it speaks” is a common practice, a suspicion reinforced by worrisome replication results (1,2).

Data torturing likely slows down the accumulation of knowledge, filling journals with false positives. Pre-analysis plans can help solve this problem. They may also help with another perverse consequence that has received less attention: a preference among many researchers for very simple approaches to analysis.

This preference has developed, I think, as a defense against data mining. For example, one of the many ways researchers can torture their data is with control variables. They can try different sets of control variables, they can recode them in various ways, and they can interact them with each other until the analysis produces the desired result. Since we almost never know exactly which control variables really do influence the outcome, researchers can usually tell themselves a story about why they chose the set or sets they publish. Since control variables could be “instruments of torture,” I’ve learned to secure my wallet whenever I see results presented with controls. Even though the goal of control variables is to rule out alternative explanations, I often find bivariate results more convincing. My sense is that many of my colleagues share these views, preferring approaches that avoid control variables, such as difference-in-differences estimators. In a sense, avoiding controls partially disarms the torturer.
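To put a number on that instinct, here is a simulation of my own (the sample size, the number of candidate controls, and their relation to the outcome are invented, not taken from the post): with no true treatment effect, a researcher who fits every subset of five controls and keeps the most favorable specification rejects noticeably more often than the nominal five percent, while the single pre-specified bivariate regression stays honest.

```python
# Specification search over control variables when the true treatment effect is zero.
# Sample size, number of controls, and their link to the outcome are illustrative assumptions.
from itertools import combinations
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n, n_controls, n_sims, alpha = 100, 5, 500, 0.05

def treatment_p(y, treatment, control_matrix, subset):
    """p-value on the treatment coefficient for one choice of control set."""
    cols = [treatment] + [control_matrix[:, j] for j in subset]
    X = sm.add_constant(np.column_stack(cols))
    return sm.OLS(y, X).fit().pvalues[1]      # index 1 = treatment (after the constant)

all_subsets = [s for k in range(n_controls + 1)
               for s in combinations(range(n_controls), k)]

prespecified, fished = 0, 0
for _ in range(n_sims):
    treatment = rng.integers(0, 2, n).astype(float)   # randomized, no true effect
    controls = rng.normal(size=(n, n_controls))       # genuinely predict the outcome...
    y = controls.sum(axis=1) + rng.normal(size=n)     # ...but are unrelated to treatment

    prespecified += treatment_p(y, treatment, controls, ()) < alpha
    fished += min(treatment_p(y, treatment, controls, s) for s in all_subsets) < alpha

print(f"Rejection rate, one pre-specified model: {prespecified / n_sims:.2f}")   # about 0.05
print(f"Rejection rate, best of {len(all_subsets)} models:    {fished / n_sims:.2f}")   # noticeably higher
```

The point is not that controls are bad; it is that the freedom to choose among 2^5 = 32 specifications after seeing the results is itself a source of false positives, and a pre-analysis plan removes that freedom.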

More…