Basic Income Experiments—The Devil’s in the Caveats
“The devil’s in the details” is a common saying about policy proposals. Perhaps we need a similar saying for policy research: the devil’s in the caveats. By this, I mean that the evidence any particular piece of research can provide is only a small part of the evidence people need to fully evaluate policy proposals. Nonspecialists involved in the debate over that policy are often unable to translate caveats about the limits of research into a firm grasp of what that research does and does not imply about the policies they want evaluated. Therefore, even the best scientific policy research can leave nonspecialists with an oversimplified, or simply wrong, impression of its implications for policy.
For example, popular media reports about medical research often leave people in the United States today with the impression that medical professionals make wildly swinging recommendations about the prevention and treatment of diseases, when medical consensus is actually slow to change and even slower to reverse a change once made. It is possible that this misperception of an erratic medical consensus exists because nonspecialists don’t have the background to understand the difference between a medical consensus and an oversimplified or sensationalized report of a single study.
Whatever the problems of this type are with medical research, they are likely to be much greater with social science research in general and Universal Basic Income (UBI) experiments in particular. At least some medical research is fairly straightforward. Many medicines affect people only on an individual basis, and all we might want to know about a medicine is whether it is safe and effective. In many cases, medical research can address that question directly in a controlled experiment, and hopefully, it’s not too difficult to communicate the results to nonspecialists.
Although medical experiments might not always be this straightforward, UBI experiments can never be straightforward. I believe this problem is so big that I’m working on a book, provisionally titled Basic Income Experiments—The Devil’s in the Details, to discuss the enormous difficulty of conducting a UBI experiment that successfully raises the level of political debate over UBI.
UBI has complex economic, political, social, and cultural effects that cannot be observed in a controlled experiment. Researchers conducting experiments know that experimental evidence alone cannot fully answer the big questions about UBI: Does it work? Is it cost-effective? Should we introduce it on a national level? They have to be content with making a small contribution to a large body of knowledge about UBI. When research is conducted of, by, and for specialists, mutual understanding of the limits of research usually requires no more than a simple list of caveats, many of which can go without mention in a group with a great deal of shared, specialized knowledge.
The same is not true when policymakers and citizens make up part of the audience of research—as they do for research on major policy issues such as UBI. Citizens and policymakers want answers to the big questions mentioned above; they understandably try to interpret experimental results in light of those questions. But as I will argue throughout the book, they have great difficulty understanding what UBI experiments do and do not imply about those big questions. The devil is in the caveats.
Most academic specialists are professionals at writing for other academics within the same specialty but amateurs at communicating with nonspecialists. The book argues that these communications barriers affect not only how specialists report their research to nonspecialists but also how they design and conduct it.
It is no coincidence that UBI experiments are getting underway just after an enormous growth in the discussion of UBI in many countries around the world. In that environment, one of the goals of UBI experiments is—or ought to be—to raise the level of debate over UBI. The book will argue that past experiments have a mixed record in this respect: although all of them have provided valuable evidence, some have succeeded in raising the level of debate, and some have been so misunderstood that they might well have had an overall negative effect on it. This effort to raise the level of a political debate (such as the debate over UBI) requires knowledge and skills that researchers have no special training in, and it creates risks that research aimed purely at other researchers does not face, including vulnerability to spin, misuse, sensationalism, and oversimplification.
The goal of the book is to help researchers, policymakers, citizens, journalists, and anyone else interested in UBI experiments bridge the gaps in understanding between them so that the experiments succeed in the goal of raising the level of debate. I hope that this effort will be valuable to researchers designing, conducting, and writing about UBI experiments, to policymakers commissioning and reacting to experiments, to journalists reporting on experiments, and to citizens involved in the debate or simply interested in the topic of UBI.
To help people bridge these gaps, the book has to explain how many significant barriers there are to conducting experiments that successfully raise the level of debate. So, I will have a lot of negative things to say, but that should not distract readers from my overall enthusiasm for UBI experiments. They are worth doing, and worth doing well in all relevant ways. And to readers who are unenthusiastic about UBI experiments, I say: they are coming; it’s important to make the best of them.