CAFE Dataverse and Coding Resources

Harvard Dataverse is an open-source generalist data repository where we are amassing a collection of commonly used climate and health data and linkages, including spatial data.

Following FAIR (Findable, Accessible, Interoperable, Reusable) principles, we strongly encourage the Community of Practice to help expand the Climate-Health CAFE Dataverse collection by contributing data for sharing and reuse.
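For programmatic access, datasets in the collection can be discovered through the Dataverse Search API. The sketch below is a minimal illustration only, not an official client: the collection alias ("cafe") and the search term are assumptions and should be replaced with the actual alias of the Climate-Health CAFE collection.

    # Minimal sketch: search Harvard Dataverse for datasets in a collection.
    # The "cafe" alias and the query string are assumptions, not published values.
    import requests

    DATAVERSE_URL = "https://dataverse.harvard.edu"

    def search_datasets(query, collection_alias=None, per_page=10):
        """Return dataset results from the Dataverse Search API."""
        params = {"q": query, "type": "dataset", "per_page": per_page}
        if collection_alias:
            # Restrict results to a single Dataverse collection.
            params["subtree"] = collection_alias
        resp = requests.get(f"{DATAVERSE_URL}/api/search", params=params, timeout=30)
        resp.raise_for_status()
        return resp.json()["data"]["items"]

    if __name__ == "__main__":
        for item in search_datasets("heat exposure", collection_alias="cafe"):
            print(item["global_id"], "-", item["name"])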


Parameters for Submission

Emphasizing open access and collaborative research, the CAFE Collection invites contributions from a diverse array of stakeholders, including government agencies, NGOs, community-based organizations, industry partners, and academics. The parameters below describe which datasets are and are not appropriate for the CAFE Collection:

  • Contributions should be relevant to climate and health research.
  • Contributions should not be identical to data stored in other repositories. The submission of processed derivatives or expansions of data accessible through existing sharing resources (e.g., SEDAC, Google Earth Engine) is encouraged.
  • Contributions should respect the licensing of their raw source data, with public source datasets appropriately credited and cited using our metadata standards.
  • No restricted-access data (e.g., data including personally identifiable information) should be shared through the CAFE Collection. Contributions will be widely accessible to Harvard Dataverse users.

Data and Coding Resource Hub (GitHub)

The hub is a collection of code, software, and tutorials on GitHub that allows researchers to contribute, share, and reuse existing code and software for data processing and analysis, facilitating reproducibility and reusability. It will cover commonly used tasks such as spatial aggregation, data harmonization, and statistical analysis.
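To illustrate the kind of code the hub is intended to host, the sketch below shows one common spatial aggregation task in Python: averaging point-level observations within polygons (for example, station temperatures within counties) using geopandas. The file paths and column names are placeholders, and this is an illustrative example rather than code from the hub itself.

    # Minimal sketch: average point observations within polygons (spatial aggregation).
    # Paths and the "temperature_c" column name are placeholders.
    import geopandas as gpd

    def aggregate_points_to_polygons(points_path, polygons_path, value_col="temperature_c"):
        """Spatially join point observations to polygons and average the value column."""
        points = gpd.read_file(points_path)
        polygons = gpd.read_file(polygons_path)
        # Reproject points to match the polygon CRS before the spatial join.
        points = points.to_crs(polygons.crs)
        joined = gpd.sjoin(points, polygons, how="inner", predicate="within")
        # Mean value per polygon; "index_right" identifies the containing polygon.
        means = joined.groupby("index_right")[value_col].mean()
        return polygons.join(means.rename(f"mean_{value_col}"))

    # Example usage (file names are placeholders):
    # counties = aggregate_points_to_polygons("stations.geojson", "counties.geojson")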