Solomon Hsiang at G-FEED (some background here):
Nitin Sekar and I recently released a paper examining whether a large legal sale of ivory affected poaching rates of elephants around the world. Our analysis indicated that the sale very likely had the opposite effect from its original intent, with poaching rates globally increasing abruptly instead of declining. Understandably, the community of policy-engaged researchers and conservationists has received these findings with healthy skepticism, particularly since prior studies had failed to detect a signal from the one-time sale. While we have mostly received clarifying questions, the critique by Dr. Fiona Underwood and Dr. Robert Burn fundamentally questions our approach and analysis, in part because their own analysis of PIKE data yielded such different results.

Here, we address their main concerns. ...
In spite of the "longest post ever" warning, go ahead and read (or skim) the post. You may agree or disagree, but the part that I found most interesting is this:
In the context of a debate like the present one, it becomes particularly important that the data and analytical methods of all parties are accessible so they can be poked, prodded, and interrogated. I am an affiliate of the Berkeley Institute for Transparency in Social Science where I have taught classes on how to carefully compare findings across a literature (e.g. see here), and I personally have replicated a large number of studies by other authors, sometimes verifying their original findings and sometimes uncovering inconsistencies and errors (e.g. see here, here, here, here, here). While we should obviously do our best to prevent making errors in research, mistakes happen, and I believe innocent mistakes should be destigmatized—being transparent about our research can accelerate the pace of scientific progress.

In keeping with this belief, Nitin and I have done what we can to be transparent. We have posted our replication code here so that anyone can examine exactly what we did and verify our findings. In our appendix, we spell out the various checks we did so observers can check our work. We show the statistics for all our covariates and explain why, despite the occasional low p-value, we do not think they undermine our main finding. We also detail a large number of checks on the data in our appendix.

The fact that Burn et al. have not posted their replication code publicly—and that Dr. Underwood did not share it with us when we requested it—has made it very difficult for us (or anyone else) to adjudicate our disagreement about how best to analyze the PIKE data to look for a signal from the one-time sale.

Overall, we believe we have exerted substantial effort to be transparent with our analysis and handling of data to ensure that our results can be replicated by any other member of the community. We hope that the conservation community will make use of our transparency to check our results and verify the validity of what we have done (and, of course, graciously point out any errors we might have made). We also hope that CITES and other members of the community will move towards greater data transparency wherever it is feasible.
Agreed. You can't complain about others' research unless you are transparent about your own. I'm a major complainer, so if you'd like to see any of my data, just ask.
There is also an explanation, for those outside economics, of why economists post working papers and present unpublished work at seminars:
Multiple colleagues from ecology have inquired why we released the paper as an NBER working paper prior to peer review. In economics, this is standard procedure. I personally have multiple working papers in circulation, such as this one on the effects of tropical cyclones on economic growth and this one on the effect of temperature on US incomes. Circulating and presenting working papers allows authors to obtain feedback and ideas from colleagues for projects that are close enough to completion that they are coherent and can be understood by a broad audience, while allowing researchers to make adjustments and extensions to their work prior to its final form. In a normal departmental economics seminar, researchers only present working papers and would never present a published paper, since the purpose of the seminar is for the author to obtain feedback from the audience (while of course informing the audience of what work the author is doing). Working papers are regularly cited, both in other working papers and in publications, inside and outside of economics (e.g. interdisciplinary outlets such as Nature and Science recognize this different norm and allow working papers to be cited), as well as by policy-makers. Many of the norms around working papers in economics are similar to how authors in physics use arxiv.org, as I understand it.
This is such an obvious way to move research forward that I'm not quite sure why non-economists object.