Data Visualization: Statistics as Storytelling

Project Summary

A large and growing trove of patient, clinical, and organizational data is collected as a part of the “Help Desk” program at Durham’s Lincoln Community Health Center. Help Desk is a group of student volunteers who connect with patients over the phone and help them navigate to community resources (like food assistance programs, legal aid, or employment centers). Data-driven approaches to identifying service gaps, understanding the patient population, and uncovering unseen trends are important for improving patient health and advocating for the necessity of these resources. Disparities in food security, economic stability, education, neighborhood and physical environment, community and social context, and access to the healthcare system are crucial social determinants of health, which studies indicate account for nearly 70% of all health outcomes.

Year
2020

This project introduced techniques and best practices of data visualization, with a focus on clear, thoughtful, and impactful presentation, a crucial part of any project that works with data. Discussions and activities advanced a perspective on data visualization as a form of visual storytelling and meaning-making, rather than just “making numbers pretty.” Participants were then asked to comment on the effectiveness of several example visualizations prepared ahead of time. In the second part of the Expedition, students were given data from Help Desk and walked through the visualization process, from conception to design, to answer a research question of their own about Help Desk and the social determinants of health. Visualizations were created in Tableau, a software package widely used in health and public policy domains.

Guiding Questions

  • Why visualize? What is the point of visualization?
  • What are the implicit and explicit narratives of real-world data visualizations?
  • What contexts are important for understanding our Help Desk data?
  • What are necessary, appropriate, and responsible questions to ask given our data?
  • What type of visualization would best convey the information we have (e.g., bar graph, histogram, scatterplot)?

The Dataset

Data were obtained from PRAPARE screenings conducted by case managers at Lincoln, as well as from Help Desk volunteers speaking with patients over the phone. The data support a variety of questions about patient needs and demographics, call outreach and outcomes, and community-based organization services and their utility. Participants were given a dataset containing 700 de-identified patient records with 1,122 variables, with the number of variables slightly pared down from the original dataset. The full codebook was included for reference.
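
As a rough illustration of what loading and paring down this dataset looks like in code, the R sketch below reads the CSV listed under Downloads and keeps a handful of columns; the variable names in food_vars are hypothetical stand-ins for the fields documented in the annotated codebook, not the actual field names.

    # Minimal sketch, assuming the CSV from the Downloads section is in the
    # working directory; column names below are hypothetical placeholders.
    helpdesk <- read.csv("HelpDeskProject_DATA_2020-01-19_DataExp_EDIT.csv",
                         stringsAsFactors = FALSE)

    dim(helpdesk)   # roughly 700 rows (patients) and many hundreds of columns

    # Keep only the columns relevant to a given question
    food_vars   <- c("food_need", "age", "race", "insurance_status")
    food_subset <- helpdesk[, intersect(food_vars, names(helpdesk))]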

Lab Sessions

In the first session, participants were introduced to the fundamentals of data visualization as storytelling and sense-making. They watched Hans Rosling’s “200 Countries, 200 Years, 4 Minutes” video and discussed the implicit and explicit narratives told through visualization. Then, after an overview of HIPAA and data privacy from Connor, Tyler gave a live demonstration of Tableau, creating a basic bar graph, and led the participants through a critique of five visualizations created ahead of time from Help Desk data. As homework, participants were asked to download Tableau, load the dataset, and familiarize themselves with the interface and codebook.

In the second session, participants were split into two groups, with Connor and Tyler each guiding one group through the process of data visualization from start to finish.

Each group focused on a different question:

  • Which food-related CBOs were the most referred to? Which food-related CBOs were the most “successful”?
  • What demographic factors are most associated with self-reported food need?

From here, participants were asked to discuss which variables would be most pertinent to answering their group’s question, then which type of visualization would best convey that information, and finally whether other techniques (color, ordering, text) could be useful.
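
The Expedition itself built these charts in Tableau, but the same kind of referral bar graph can be sketched in R with ggplot2. The data frame and column names below are invented for illustration only; the real field names live in the Help Desk codebook.

    library(ggplot2)

    # Invented example data: cbo_name is the organization referred to and
    # successful marks whether the referral worked out.
    referrals <- data.frame(
      cbo_name   = c("Food Bank A", "Food Bank A", "Pantry B", "Meals C", "Pantry B"),
      successful = c(TRUE, FALSE, TRUE, TRUE, FALSE)
    )

    # Stacked bar chart: referral counts per CBO, colored by outcome
    ggplot(referrals, aes(x = cbo_name, fill = successful)) +
      geom_bar() +
      labs(x = "Community-based organization", y = "Number of referrals",
           fill = "Successful?", title = "Food-related CBO referrals")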

The groups reconvened to share their preliminary visualizations and give each other feedback. Due to time constraints, the visualizations are presented here without some of the context, labels, and explanations needed to make them fully interpretable.

Figure: Food graph

Downloads

Data Expedition Lesson Plan.docx

Data Expedition Slides.pptx

HelpDeskDataExp.twb

HelpDeskProject_DATA_2020-01-19_DataExp_EDIT.csv

HelpDeskProjectAnnotated_DATAEXP.pdf

Related Projects

We led a 75-minute class session for the Marine Mammals course at the Duke University Marine Lab that introduced students to the strengths and challenges of using aerial imagery to survey wildlife populations, and to the growing use of machine learning to address these "big data" tasks.

Most phenomena that data scientists seek to analyze are spatially or temporally correlated, or both. Examples include political elections, contaminant transfer, disease spread, housing markets, and the weather. A question of interest is how to incorporate this spatial correlation when modeling such phenomena.


In this project, we focus on the impact of environmental attributes (such as greenness, tree cover, and temperature) along with other socio-demographic and home characteristics on housing prices, by developing a model that takes into account the spatial autocorrelation of the response variable. To this end, we introduce a test to diagnose spatial autocorrelation and explain how to integrate spatial autocorrelation into a regression model.
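
The specific diagnostic is not named here; Moran's I is one common choice, and the R sketch below computes it from scratch using inverse-distance weights on toy data. All names and values are illustrative assumptions, not the project's actual data or method.

    # Moran's I: y is the response (e.g., log sale price), coords the home
    # locations; weights are inverse-distance and row-standardized.
    morans_i <- function(y, coords) {
      n <- length(y)
      d <- as.matrix(dist(coords))
      w <- 1 / d
      diag(w) <- 0
      w <- w / rowSums(w)
      z <- y - mean(y)
      (n / sum(w)) * sum(w * outer(z, z)) / sum(z^2)
    }

    # Toy data with a clear spatial pattern: prices rise from west to east
    set.seed(1)
    coords <- cbind(runif(100), runif(100))
    price  <- 2 * coords[, 1] + rnorm(100, sd = 0.3)
    morans_i(price, coords)   # values well above 0 suggest positive autocorrelation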


In this data exploration, students are provided with data collected from remote-sensing, census, and Zillow sources. Students are tasked with conducting a regression analysis of real-estate price estimates against environmental amenities and other control variables, in models that may or may not incorporate spatial autocorrelation information.
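
A hedged sketch of that regression task in R is shown below: a baseline model that ignores location, followed by one crude way of folding in spatial information by adding a neighborhood-average price covariate. The variable names are stand-ins for the real Zillow, remote-sensing, and census fields, and a proper spatial-lag model would use dedicated estimation rather than plain least squares.

    # Invented data standing in for the Zillow / remote-sensing / census fields
    set.seed(2)
    n   <- 200
    dat <- data.frame(
      x = runif(n), y = runif(n),
      greenness     = runif(n),
      tree_cover    = runif(n),
      median_income = rnorm(n, 60, 10)
    )
    dat$price <- 150 + 40 * dat$greenness + 20 * dat$tree_cover +
      2 * dat$median_income + 30 * dat$x + rnorm(n, sd = 10)

    # Baseline regression that ignores location entirely
    ols <- lm(price ~ greenness + tree_cover + median_income, data = dat)

    # Crude spatial adjustment: add each home's neighborhood-average price
    d <- as.matrix(dist(dat[, c("x", "y")]))
    w <- 1 / d
    diag(w) <- 0
    w <- w / rowSums(w)
    dat$neighbor_price <- as.vector(w %*% dat$price)
    spatial_ols <- lm(price ~ greenness + tree_cover + median_income + neighbor_price,
                      data = dat)

    summary(ols)
    summary(spatial_ols)

Comparing the two fits gives a quick sense of how much of the price variation is tied to location rather than to the measured amenities.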


Over the course of two 90-minute sessions, we led students in the Duke Marine Lab Marine Ecology class (Biology 273LA) on a data expedition using the statistical programming environment R. We gave an introduction to big data, the role of big data in ecology, important things to consider when working with data (quality control, metadata, etc.), dealing with big data in R, what the Tidyverse is, and how to organize tidy data (see class PowerPoint). We then led a hands-on coding workshop exploring an open-access citizen-science dataset of aquatic plants along the U.S. east coast (see dataset details below).
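
As a small illustration of the tidy-data idea covered in the session, the R sketch below reshapes an invented wide table into one row per site-by-species observation with tidyr; the site codes and species columns are hypothetical, not fields from the actual aquatic-plant dataset.

    library(dplyr)
    library(tidyr)

    # Invented wide-format counts: one column per species
    wide <- tibble(
      site          = c("NC-01", "NC-02"),
      eelgrass      = c(12, 4),
      widgeon_grass = c(3, 9)
    )

    # Tidy form: one row per site-by-species observation
    tidy <- pivot_longer(wide, cols = -site,
                         names_to = "species", values_to = "count")

    # Tidy data makes grouped summaries straightforward
    tidy %>%
      group_by(species) %>%
      summarise(total_count = sum(count))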