This page includes links to research resources used by EPAR, information about potentially helpful software tools, and general report writing tips.

Research Resources

EPAR's research draws on a wide range of data and literature. Here are links to some of our favorite sources of information:

  • Living Standards Measurement Study – Integrated Surveys on Agriculture (LSMS-ISA): The LSMS-ISA is a comprehensive household survey supported by the World Bank and administered in seven countries in Africa in partnership with national government statistical offices. 
  • Food and Agriculture Organization of the United Nations (FAO): The FAO provides research and resources on specific crops and countries, and also hosts FAOSTAT, a data platform housing global agricultural statistics.
  • The resource center for the FAO's Global Strategy to improve agricultural and rural statistics (GSARS) includes a range of guidelines, handbooks, technical reports, and working papers on a variety of topics related to agricultural and rural statistics.
  • The World Bank, Data & Research: The World Bank's website is a source of data and research on a range of development topics.
  • Consultative Group on International Agricultural Research (CGIAR): The CGIAR consortium comprises 15 centers working on agriculture and food security.
  • International Food Policy Research Institute (IFPRI): IFPRI’s research is a valuable source of information on food policy around the world. Their website includes a database of research, including publications, briefs, reports, and datasets.
  • HarvestChoice: HarvestChoice provides knowledge products, including maps, datasets, and economic models to support agricultural development in Sub-Saharan Africa.
  • Center for Global Development (CGD): CGD provides research on a variety of development topics, including aid effectiveness, global health, poverty and growth, governance, and food and agriculture.
  • The Consultative Group to Assist the Poor (CGAP): CGAP is a global partnership of more than 30 leading organizations housed at the World Bank conducting research, analysis, and evidence-based advocacy around financial inclusion for the poor. Their website includes both publications and links to relevant data.
  • Abdul Latif Jameel Poverty Action Lab (J-PAL): J-PAL is a research, policy outreach, and training organization composed of professors from over forty universities, with a database of over 750 randomized evaluations covering a wide variety of sectors, countries, and topics.
  • AgEcon: A free, open access repository of full-text scholarly literature in agricultural and applied economics.
  • Cornell University: The University hosts The Essential Electronic Agriculture Library (TEEAL), a downloadable collection of research journals for agriculture and related sciences with thousands of full-text PDF articles. It also hosts AgriKnowledge, a database of unpublished resources from a number of different organizations.
  • The University of California at Berkeley Center for Effective Global Action (CEGA): CEGA produces evaluations and working papers on a wide variety of development topics.

Software Tools

EPAR uses several software tools to support our research.

  • Microsoft Excel: EPAR primarily uses Excel for our literature review coding spreadsheets, and also frequently uses Excel for certain data analysis projects. The basic “Sort & Filter” option in Excel is very useful for looking at your data in different ways. PivotTables and PivotCharts (under the “Insert” option of the Excel main menu) allow you to do more, and in particular they facilitate grouping and summarizing the raw data from the coding review sheet. To generate a pivot table, select all of your coded data (including headings) and choose “PivotTable” from the Insert menu. Click inside the pivot table that is generated (usually on a new sheet) and the “PivotTable Fields” pane will appear, allowing you to choose which column labels, row labels, cell values, and filters appear in the pivot table. You can create multiple pivot tables and charts depending on how you want the data summarized and displayed. Once you have created a pivot table, you can click on a cell in the table, select the PivotChart option (also under the “Insert” top menu option in Excel), and choose from bar or line graphs, pie charts, etc. Once you select your graphic and hit “OK”, it will appear on the same spreadsheet tab as the pivot table and can be copied and moved to your report. Microsoft Office provides a guide to creating pivot tables from Excel spreadsheets. In addition to supporting summaries and visualizations of coded data, Excel spreadsheets can be read into more specialized statistical packages like Stata, visualization packages like Tableau or Power BI, or multi-purpose packages like R (importing these data into other programs sometimes requires saving your spreadsheet as a .csv or “comma-separated values” file; a brief R sketch of this kind of grouping and summarizing appears after this list). Free alternatives to Microsoft Excel with some of the same capabilities include Google Sheets.
  • Google Drive: EPAR frequently uses Google Drive to share spreadsheets and documents and allow multiple users to edit them simultaneously. In particular, we use shared spreadsheets on Google Drive to track RA hours across projects and to track progress on projects. Alternatives to Google Drive for sharing documents and collaborating with groups include Dropbox, Amazon Cloud Drive, and Microsoft OneDrive.
  • Stata: EPAR uses Stata for the majority of our statistical analyses. The Stata software package is commonly used by economists for data analysis, and allows users to enter commands either by selecting options from the top menu or by typing code into the command window. Users can also prepare code in “.do” files, which can be run in part or as a whole. These .do files can include comments and other notes in addition to the statistical commands, and are useful for saving code for future reference, for collaborative data analysis work, and for sharing with other researchers. Users must purchase a license to use the software, but students at the Evans School can access Stata through desktops on campus, by accessing a UW server remotely, or by requesting a license from Evans School Tech Support. The UCLA Institute for Digital Research and Education (IDRE) provides useful guides on learning and using Stata. The Stata website provides additional guides, and many other resources are available on the web.
  • R: R is an open source software environment for statistical computing and graphics that can be downloaded for free from the R website. Numerous contributors have developed and continue to release free R packages to support a wide variety of analytical needs, which can be loaded into your R session. For example, the Shiny package supports building interactive web visualizations, and ggplot2 includes commands for a broad set of data visualizations. RStudio, a free integrated development environment for R, makes R easier to use and includes a code editor, debugging, and visualization tools. Users can use RStudio to save and edit code scripts in much the same way as .do files in Stata. EPAR primarily uses R to prepare datasets for analysis and to produce graphics and data visualizations through its rich graphics packages (see the ggplot2 sketch after this list).
  • GitHub: GitHub is a web-based tool for hosting and sharing repositories of code, and helps users exercise version control and track changes to code over time and across users. EPAR uses GitHub primarily for our data analyses, as the repositories allow us to work more effectively as a group on shared coding projects by tracking changes in code. We use GitHub Desktop (free to download) to manage the process of syncing the online repository for a project with “clone” repositories on local desktops and to ensure that code files are updated for all users following changes, which are labeled to indicate the nature of the change and who made it. We also publish repositories after our analyses are complete so that others can view our code, replicate our analyses, and adapt the code to related analyses. We use the built-in process through Zenodo to assign Digital Object Identifiers (DOIs) to our repositories, archiving them and making them citable. The GitHub website includes numerous guides and videos to support users.
  • GIS: A variety of GIS software tools exist to store, retrieve, manage, display, and analyze geographic and spatial data. EPAR uses GIS software to produce maps and other geographic displays to support our analyses, and often combines spatial data with other datasets to support spatial analysis. ArcGIS (produced by Esri) is provided on most desktops at the Evans School and is a commonly used GIS program. EPAR uses open-source desktop GIS software programs including QGIS and GRASS GIS, both of which can be downloaded for free. The websites for these programs include a variety of guides and resources, and more are available on the web.
  • Data Visualization: Several software packages allow you to import data to create dynamic and interactive visualizations. This is particularly useful for data analysis projects, but EPAR also creates visualizations for literature reviews using our coding spreadsheets. Tableau Desktop is a popular visualization tool and offers a free one-year subscription for educational purposes, though you need to register and request a personal product key for the software. Tableau offers a variety of useful training videos on its website. After creating a Tableau visualization using the Tableau Desktop software, you can upload your visualization to Tableau Public (free to use after registering) to share with others or embed in a web page. Power BI is an interactive data visualization software program developed by Microsoft. Like Tableau, the software allows users to draw on multiple data sources for visualization and analysis, and then supports publishing reports and visualization dashboards to the web. You can download the Power BI Desktop software for free. The Power BI website provides a variety of videos, samples, and in-depth documentation to support users in learning about the software.
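
As referenced in the Microsoft Excel entry above, a coding spreadsheet saved as a .csv file can also be summarized outside of Excel. The following is a minimal sketch of a pivot-table-style summary in R; the file name (coding_sheet.csv) and column names (country, intervention) are hypothetical placeholders that you would replace with the fields from your own coding spreadsheet:

    # Minimal sketch: a pivot-table-style summary of coded literature review data in R.
    # The file name and column names below are hypothetical placeholders.
    library(readr)
    library(dplyr)

    # Read a coding spreadsheet that was saved from Excel as a .csv file
    coded <- read_csv("coding_sheet.csv")

    # Group and summarize, much like dragging fields into an Excel PivotTable
    summary_table <- coded %>%
      group_by(country, intervention) %>%
      summarise(n_studies = n(), .groups = "drop")

    print(summary_table)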
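
Building on the summary table from the sketch above, the ggplot2 package mentioned in the R entry can turn that table into a chart comparable to an Excel PivotChart. This is only an illustrative sketch using the same hypothetical column names:

    # Minimal sketch: a grouped bar chart of the summary table created above
    library(ggplot2)

    ggplot(summary_table, aes(x = country, y = n_studies, fill = intervention)) +
      geom_col(position = "dodge") +
      labs(title = "Coded studies by country and intervention",
           x = "Country", y = "Number of studies") +
      theme_minimal()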

General Report Writing Tips

The guidelines below can apply to a wide variety of research projects, and provide some general best practices to consider when compiling either qualitative or quantitative academic reports. Many of the writing tips are adapted from or informed by Evans School Professor Marieka Klawitter’s Writing Tips for Quantoids, written for students preparing research reports and briefs in statistics and policy analysis courses.

  • Remember that your goal is to help the reader understand how you have answered the research question. Use short, clear sentences. Avoid jargon. The more clearly and simply you write, the more easily the reader will be able to digest the information you have provided.
  • An abstract or executive summary may be the only part of the report your audience reads. Provide your reader with an abstract or executive summary so they can quickly extract the main points of your study.
  • Provide critical assumptions up-front. Every analysis is based on certain critical assumptions. Be sure your reader knows which assumptions you have made early on in the report.
    • For example, clearly outline the scope and screening criteria for a literature review, so it is clear what you included and why. Describe why certain variables are included in a model for data analysis, and any assumptions you made in creating them.  
  • Acknowledge the shortcomings of data and methods. No data or study is perfect for the purpose. Part of explaining research findings is to explain the uncertainties and caveats associated with the study. What can't we know from this study that we care about? What data, samples, or methodologies might provide better information? Describe the limitations of the study at the end of the report, and clarify what could be done upon further investigation.
  • Provide enough information so someone from an academic background can evaluate the work. Use appendices and footnotes for the technical details of the survey or analysis. This gives future researchers clues about how they can study the issue and how to interpret the work.
  • Do not use the text to describe all the details of your analytic methods. Many readers will not care about the process—only the outcome. Unless you are specifically writing an academic-style paper, data/literature collection and analysis methods can be described in more detail in an appendix, with only a summary of important methodological information in the text. The exception is in cases where collection or analysis methods may have a direct impact on the reader’s interpretation of results—for example, noting that your literature review only includes articles published since 2010, or that your data analysis is based on cross-sectional but not panel data.
  • Use numbers to support your argument, not to make it. Do not write about the numbers; write about ideas and hypotheses. Empirical information can be compelling and interesting when combined with a model and understanding of the world. However, it is the report writer’s job to weave the statistics into a narrative. Data, statistics, and graphics will help "tell the story"; they and their production are not the story.
  • Mention every statistic or graphic you use in the text. If there is not room in the text for the explanation, consider whether the numbers belong there. Do not repeat what is in a table/figure, but help the reader to interpret it and provide key takeaways.
    • Example from EPAR Technical Report #317: “The majority of respondents in all countries had an official form of ID (Figure 2), ranging in Africa from 82.6% in Uganda to a high of 94.3% in Tanzania. The percentage of respondents with official ID was higher in Asian countries than in African countries, ranging from 98.4% in India to 99.8% in Indonesia, indicating that access to an official form of ID may not be a primary barrier to MM access in these Asian countries.”
  • Do not try to teach statistics. Busy policy-makers cannot wade through a treatise on statistics. Provide a "translation," not a lecture on the definition of a confidence interval or p-value.
    • Still, all relevant statistics should be included in tables, appendices, or footnotes for those readers who are able and interested in interpreting them.
  • Use graphics to help convey information and keep your readers’ attention. Graphics (bar charts, line graphs, histograms, etc.) can help tell the story visually. 
    • Be judicious with figure selection—their function should be to enhance, rather than distract from, the accompanying text.
    • Figures should be understandable without referencing the text. Axes and titles should be labeled appropriately, and legends should be included as necessary to aid the reader’s understanding. If necessary, include additional notes to support interpretation below the figure.
  • Do not use a cookie-cutter writing style to describe outcomes. Vary sentence construction and style to create interest. Concentrate on "telling the story" to facilitate this process. Dispense with phrases like “the fact is,” and seek whenever possible to use active language over passive language.
  • Be careful in your word choices: Do not use “impact” or “effect” to imply causality when you mean “association” or “relationship.” Do not use “demonstrate” or “show” to imply proof when you mean “contend” or “argue.” Never use “should” unless you are explicitly aiming to give recommendations, and are comfortable doing so.
    • In most cases, a literature review or data analysis is an input into the decision-making process, and is presented as an objective set of results with some takeaways and conclusions but without normative judgements.
  • Be precise in your word choice. Avoid vague or subjective words like “few” or “sometimes”. If you have a number—for example, three—put it in the text.
  • Provide a clear summary of the conclusions drawn from empirical information. It is the writer’s job to synthesize all of the data that emerged during the review or analysis, not to include their own viewpoints. Any viewpoints that appear in the literature, as well as all relevant findings, should be cited.
