Research

Research projects at Rhodes iiD focus on building connections. We encourage cross-pollination of ideas across disciplines and the development of new forms of collaboration that will advance research and education across the full spectrum of disciplines at Duke. The topics below show areas of research focus at Rhodes iiD. See all of our research.

A team of students, led by Electrical and Computer Engineering professor Vahid Tarokh, will develop methods to improve the efficiency of information processing with adaptive decisions according to the structure of new incoming data. Students will have the opportunity to explore data-driven adaptive strategies based on neural networks and statistical learning models, investigate trade-offs between error threshold and computational complexity for various fundamental operations, and implement software prototypes. The outcome of this project can potentially speed up many systems and networks involving data sensing, acquisition, and computation.

Project Leads: Yi Feng, Vahid Tarokh

A team of students will explore new ways of reading pre-modern maps and perspectival views through image tagging, annotation and 3D modeling. Each student will build a typology of icons found in these early maps (for example, houses, churches, roads, rivers, etc.). By extracting, modeling, and cataloging these features, the team will create a library of 2D and 3D objects that will be used to (a) identify patterns in how space and power are represented across these maps, and (b) to create a model for “experiencing” these maps in 3D, using the Unity game engine platform. This is a combined Data+ / Bass Connections project that will instruct students in qualitative and quantitative mapping techniques, basic 3D modeling and the history of cartography.

Project Lead: Philip Stern, Ed Triplett

Project Manager: Sam Horewood

A team of students will explore ways in which data science can help support the mission of Rewriting the Code, a national non-profit organization dedicated to empowering a community of college women with a passion for technology.

In particular, students will perform statistical analyses of past survey data, build out interactive dashboards that help visualize trends in student experience, and help design future survey questions.

Project Lead: Sue Harnett

Faculty Lead: Alexandra Cooper

A team of students will explore how artificial intelligence tools can be used to support the investment office at the Duke University Management Company (DUMAC).

In particular, the team will investigate natural language processing and other AI methods for supporting the legal review process, investment analysis, and financial reporting.

Project Lead: Robert McGrail, DUMAC

Project Manager: Yi Wang

Over the past several months, Duke's Information Technology Security Office (ITSO) has begun applying the MITRE ATT&CK framework as a basis for how the team collects, assesses, identifies, and responds to attacker tactics, techniques, and procedures (TTPs). As the team rolls out new processes to "hunt" for attackers, shifting its primary functions from defensive/reactive to offensive/proactive, it will need to incorporate real-time and longitudinal data analytics as well as automated responses based on these analyses. This orchestration of the various tools and analysis of data will facilitate the automation of responses to attacker incursions. Given the amount of data and the speed needed to respond, machine learning techniques will be a necessary component.

Project Lead: Jen Vizas

Duke season ticket holders are both strategically and financially important to Duke Athletics. One of the major challenges in retaining season ticket holders is understanding which are most likely to churn, i.e., not renew their tickets. A team of students, in conjunction with Duke’s Office of Information Technology and Duke Athletics, will use data from Duke’s ticketing system to build a set of models that seek to predict the profiles and timing of non-renewal among season ticket holders and annual donors.
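
As a rough, hypothetical illustration of the kind of churn model such a team might build, the sketch below trains a gradient-boosted classifier on invented season-ticket-holder features (tenure, attendance, donations); none of these fields come from Duke's actual ticketing system.

```python
# Hypothetical sketch of a season-ticket churn model; feature names are
# illustrative assumptions, not fields from Duke's actual ticketing system.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "years_as_holder": rng.integers(1, 30, n),
    "games_attended": rng.integers(0, 20, n),
    "annual_donation": rng.gamma(2.0, 250.0, n),
})
# Synthetic label: holders who attend few games are more likely to churn.
p_churn = 1 / (1 + np.exp(0.3 * df["games_attended"] - 2))
df["churned"] = rng.random(n) < p_churn

X_train, X_test, y_train, y_test = train_test_split(
    df.drop(columns="churned"), df["churned"], test_size=0.25, random_state=0)

model = GradientBoostingClassifier().fit(X_train, y_train)
print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```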

Project Leads: John Haws, Larry Cleaver

Project Manager: Andrew Carr

The natural and built environment can both promote and harm the public’s health. Some states have created interactive web portals to help visualize how health and environmental measures relate, and North Carolina wants to be next! The Data+ student team, led by epidemiologist Mike Dolan Fliss and colleagues from the NC Division of Public Health (DPH), will build a pilot Environmental Public Health Tracking (EPHT) tool for NC. Students will analyze and combine spatial health, environmental, and point-source data from NC DPH and other partners, then co-design and prototype visual dashboards for public use.

Project Leads: Mike Dolan Fliss, Kim Gaetz

Project Manager: Melyssa Minto

A team of students, led by researchers in the Global Financial Markets Center at Duke Law, will carry forward the work of a 2019-20 Bass Connections team to better understand the state of the home mortgage market leading up to the financial crisis. The Data+ team will expand the scope of the analysis beyond North Carolina and begin developing a complete quantitative portrait of the mortgage market in Sun Belt states. Building on the work done this year, the Data+ team will be largely responsible for creating tools to visualize different mortgage market statistics at the census-tract level for the entire US, based on the NC version created this year. Additionally, a model will be created to identify whether a loan is predatory. The output of this project will be displayed on a comprehensive website that is currently being constructed by the Bass Connections team.

Project Lead:  Lee Reiners

A team of students led by researchers at the Duke River Center will develop tools to link water quality and aquatic ecosystem condition to urban and other land uses by combining existing geospatial data including land cover maps, LiDAR, and remotely-sensed images with time series of estimates of ecosystem metabolism found within the StreamPULSE data portal.  Students will develop clustering tools for rapid identification of land use and other gradients that minimize confounding factors, and then will compare metabolic time series along these gradients to identify connections between catchment attributes and the seasonal and stochastic components of ecosystem function.  This work will help Duke researchers determine thresholds of land use (or other catchment characteristics) that protect aquatic ecosystem condition and will also generate generalizable workflows and data infrastructure that supports the scientific community’s use of our open science data portal.

Project Leads: Jim Heffernan, Phil Savoy

A team of students will analyze sensor data from a shipping fleet to develop predictive models that prevent mechanical failures at sea and identify the optimal time for component replacement. They will have the opportunity to collaborate closely with analytics professionals from Fleet Management Limited, the world’s third-largest ship management company, which looks after 520+ vessels on behalf of owners.

Faculty Sponsor: Paul Bendich

Client Lead: Shah Irani, Fleet Management Limited

A team of students led by researchers in the Energy Data Analytics Lab, Electrical & Computer Engineering, and with participation from the Energy Access Project will investigate how to use synthetically generated satellite imagery to improve the identification of energy infrastructure in satellite imagery. The detected energy infrastructure will fill outstanding data gaps in the ability to identify pathways for electrification in low-income countries. The team will build the foundation for research that can identify objects that appear relatively rarely in satellite imagery, using very limited training examples, by creating realistic synthetic 3D models of those rare objects. This would greatly scale up the applicability of computer vision techniques for energy object identification in overhead imagery.

Project Lead:  Kyle Bradbury

A team of students led by Physics professors Dan Scolnic, Michael Troxel and Chris Walter will build their own algorithms to use images taken as part of The Dark Energy Survey, one of the largest cosmological surveys, to learn more about all the things we find in space that we aren’t looking for. These can be anything from image artifacts, to cosmic ray hits, to satellite trails, to Elon Musk's car (see picture). Each of these different things has its own signature on the images, and automatic detection and identification algorithms would enable improved image processing. As surveys attempt to measure increasingly difficult and subtle features of the universe, like the imprint of dark energy and dark matter, identification of any kind of artifact will be critical.

Project Leads: Dan Scolnic, Michael Troxel, Chris Walter

A team of students led by the Data and Analytics Practice at OIT will develop a robust forecasting model for predicting energy usage for different facilities on campus. Students will explore a wide range of real-world time-series data challenges, from anomaly detection and handling to benchmarking traditional statistical and modern machine learning models for forecasting. Students will also gain valuable experience developing an interactive application with the latest open-source libraries for converting Jupyter notebooks into web applications, to facilitate effective stakeholder collaboration. This work will enable several critical analyses for Duke Facilities Management to optimize their operations and significantly reduce costs.
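
As a hedged sketch of the benchmarking step described above, the example below compares a seasonal-naive baseline against a random-forest forecaster on a synthetic daily usage series; the data, lag features, and evaluation window are assumptions for illustration only.

```python
# Hedged sketch: benchmarking a seasonal-naive baseline against a
# machine-learning forecaster on synthetic daily energy-usage data.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(1)
days = pd.date_range("2018-01-01", periods=730, freq="D")
usage = 100 + 20 * np.sin(2 * np.pi * days.dayofyear / 365) + rng.normal(0, 5, len(days))
series = pd.Series(usage, index=days, name="kwh")

# Lag features for the ML model: usage 1, 7, and 365 days earlier.
df = pd.DataFrame({f"lag_{k}": series.shift(k) for k in (1, 7, 365)})
df["y"] = series
df = df.dropna()

train, test = df.iloc[:-90], df.iloc[-90:]

# Baseline: predict the value observed 7 days before.
baseline_pred = test["lag_7"]

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(train.drop(columns="y"), train["y"])
ml_pred = model.predict(test.drop(columns="y"))

print("seasonal-naive MAE:", mean_absolute_error(test["y"], baseline_pred))
print("random-forest MAE:", mean_absolute_error(test["y"], ml_pred))
```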

Project Leads: John Haws, Gagandeep Kaur

Project Manager: Billy Carson

A team of students led by professor of Public Policy William Darity Jr. will chart the evolution of racial inequality in housing in a subset of Durham’s neighborhoods over the course of the 20th century, using census data and Durham County housing records. Students will select a sample of homes from those that appear in de-anonymized decennial censuses between 1920 and 1940, noting homeowner race and reported home value. Tenure (time since last sale), assessed home values and occupancy will be collected from county records for the period between 1940 and 2018. The set of homes will be selected to include a range of neighborhoods that vary in racial composition, zoning designation, and credit riskiness as determined by HOLC’s residential security (redlining) maps. The proposed approach allows the Data+ team to document racial differences in the evolution of home values, tenure and occupancy across neighborhoods.

Project Lead: William Darity Jr.

Project Manager: Omer Ali

A team of students led by researchers in the Duke Eye Center and Department of Statistical Science will develop statistical models to assess the risk of legal blindness in glaucoma patients using electronic health records (EHR) from Duke Health. Students will focus on identifying risk factors relevant locally to the Durham county patient population and will enrich the available EHR data with detailed social and environmental data using the Durham Neighborhood Compass. A priority of the research will be to develop an app to make the prediction model accessible, so that real-time decisions about medical care related to blindness can be made. For the greatest impact, the app will be created in close collaboration with clinicians and decision makers at Duke Health.

Project Leads: Samuel Berchuck, Sayan Mukherjee, Felipe Medeiros

Project Manager: Kimberly Roche

A team of students led by data scientists and engineers from the Office of Information Technology will work to visualize foot traffic patterns in the Bryan Center. Students will be given a large dataset consisting of wifi data, which they will analyze to gain insight into usage patterns of the Bryan Center over various time periods. The work will help to identify areas of the center that experience high wear and tear, particularly during high-volume events such as basketball games.

Project Leads: John Haws, Mary Thompson, Eric Hope, Sean Dilda

Project Manager: Hunter Klein

Mental illness is over-represented in the incarcerated population and is correlated with higher rates of re-arrest. In recent years, Durham County has taken many steps to break this unfortunate cycle, including helping incarcerated people to engage with mental health treatment resources. This team will work with collaborators at the Durham County Detention Facility, the Criminal Justice Resource Center, and the Duke Health System to determine if recently incarcerated people in Durham are using the resources available to them, and if outcomes are improving. The team will use descriptive statistics and construct statistical models, and welcomes students from all majors, especially those interested in mental health and policy. This team is a combined Data+/Bass Connections project, so students will be expected to commit to the project for Summer 2020 as well as academic year 2020-2021.

Project Leads: Nicole Schramm-Sapyta, Maria Tackett

Project Manager: Ruth Wygle

A team of students led by History Professor Cecilia Márquez will use census data to understand the long history of Latinxs in the U.S. South. Despite a growing focus of historians and social scientists on the historical and contemporary Latinx South, there has not yet been a thorough data analysis of the historical presence of Latinxs in the South. The Data+ team will search the U.S. Federal Census, immigration records, and marriage records to determine the location of Latinxs in the U.S. South over the course of the late nineteenth and early twentieth centuries. This work will provide an invaluable data set to help us understand the long southern history of Latinxs. 

Project Lead: Cecilia Márquez

A team of students led by researchers from the Duke Human Performance Optimization Lab (OptiLab) and the Michael W. Krzyzewski Human Performance Laboratory (K-Lab) will develop an analytic and report-generating application to test if baseline vision and movement screening measures are able to predict on-field baseball performance in a cohort of nearly 300 athletes who participated in the USA Baseball Prospect Development Pipeline (PDP). Using machine learning and Bayesian hierarchical modeling, students will test data provided by USA Baseball to identify relationships between baseline characteristics and performance in NCAA-sanctioned and collegiate summer league games during the 2018 and 2019 seasons. The final deliverable will be both a report of the findings and an analytic toolset that can be used within the PDP to provide direct feedback to the athletes about their future performance potential immediately following testing. As such, this program will provide valuable new information about the characteristics that predict successful athletic performance in demanding situations, and could be used to develop new approaches for talent identification within and beyond baseball.

Project Leads: Greg Appelbaum, Marc Richard

 

Are the concepts of a “consumer” and of a “consumer society” modern ideas? Is greed good, as Michael Douglas’s Gordon Gekko in the 1987 movie Wall Street claimed, or is it a destructive sin?

A team of students led by Dr. Astrid Giugni (Duke, English and ISS) and Dr. Jessica Hines (Birmingham-Southern College, English) will address the question of how to trace concepts that slowly developed alongside changing economic and social realities.

 

We will track a set of related terms (such as consumer, greed, speculation, profit) in order to begin assessing how the ethical, political, and economic language of goods-consumption changed around the Protestant Reformation and the rise of the market economy. Using large databases that contain scans and machine-readable medieval and early modern texts, including EEBO (ProQuest), ECCO (Gale), HathiTrust, and TEAMS (University of Rochester), the group will track and analyze pamphlets, sermons, satires, and images to understand how the ethical discourse of consumerism changed over time.

Project Leads: Astrid Giugni, Jessica Hines

Project Manager: Chris Huebner

The promoters of modern American capitalism have long encouraged individuals, including those of modest means, to build their wealth through investments. But how have ordinary investors learned about the opportunities and risks of putting their savings to work on Wall Street? A team of students working with History professor Ed Balleisen will delve into the evolving nature of investment advice from the early twentieth century up to the start of the internet age. Creating datasets from financial advice columns in large-circulation American newspapers and magazines, they will use text mining techniques and sentiment analysis to see how advice changed in response to the business cycle, the emergence of new types of investments, financial products, and investors, and the evolution of financial regulation. This is a chance to link data science to historical analysis of a key facet of finance capitalism.

Project Lead: Ed Balleisen

 

A team of students, led by University Archivist Valerie Gillispie and Professor Don Taylor, will take a closer look at how the student body at Duke has transformed into a coeducational community drawn from around the world and enrolled in ten different schools. Students will seek to transform digital and historical data into a dynamic visual display that allows viewers to examine changes in the student body over time along three dimensions: geographic origin, gender, and school. The students will use born-digital data along with historical, paper-based data to assemble a data corpus. The goal is to demonstrate trends and changes over time in terms of where Duke students have come from, identifying statistically significant shifts and patterns that warrant further study.

Project Leads: Don Taylor, Valerie Gillispie

A team of students led by researchers in the Energy Initiative and the Energy Access Project will explore historical data on the U.S. Electric Farm Equipment (EFE) demonstration show that ran between 1939 and 1941, which aimed to increase usage of electricity in rural areas. Students will compile data collected by the Rural Electrification Agency into a machine-readable form, and then use that data to explore and visualize the EFE’s impact. If time allows, they will then compare data from the EFE and a related, smaller-scale project from 1923 (“Red Wing Project”) to current data on appliance promotion programs in villages in East Africa that have recently gained access to electricity. The outcomes of this analysis would offer evidence on the successes and limitations of these types of programs, and the relevance of the historical U.S. case to countries that are currently facing similar challenges.

Project Leads: Victoria Plutshack, Jonathon Free, Robert Fetter

A team of students led by the Nunn lab and its collaborators will investigate the ecological and behavioral factors that determine parasitism in different species of primates. Based on publicly available data and evolutionary trees, students will investigate parasitism by developing a network of primate-parasite relationships. This network will then be used to infer the ecological and behavioral characteristics that best predict parasitism. The findings are relevant to identifying emerging infectious diseases in humans, and also for conservation efforts globally.

Project Leads: Jim Moody, Charles Nunn

Project Manager: Marie Claire Chelini

A team of students led by researchers from the Internet of Water project at the Nicholas Institute will develop an online tool that allows local water systems to update and verify their service boundaries while maintaining data security and functionality for state regulators. States oversee hundreds of water systems with system service areas and boundaries that change over time. An online tool enabling water system managers to update their service areas would enable an improved, time-saving process for creating and maintaining up-to-date water system boundaries. Students will have the opportunity to interact with state regulators and water system managers in North Carolina and California who will provide feedback on design and usability. This tool will improve system boundary data that are used for planning and decision-making purposes. Additionally, the tool may include functionality for basic spatial analyses such as overlaying boundaries on sociodemographic, economic, and environmental data. This would enable impact analyses, the identification of utilities and vulnerable populations affected by environmental hazards to water systems, and multi-system regional water supply projections.

Project Leads: Megan Mullin, Lauren Patterson

Project Manager: Kyle Onda

A team of students led by eating disorders expert Nancy Zucker and engineering professor Guillermo Sapiro will develop multimodal computational tools to help improve the nutritional status and food enjoyment of young children with Avoidant/Restrictive Food Intake Disorder (ARFID), children who are not eating enough food or are eating an inadequate variety of food to the degree that it impairs functioning. Students will analyze facial affect and behavior from videos of children trying new foods and will derive sensory profiles based on children’s patterns of food acceptance. These analyses will serve as the basis for personalized recommendations for parents that will suggest actionable next steps to increase their child’s food acceptance.

Project Leads: Guillermo Sapiro, Nancy Zucker

Project Manager: Julia Nichols

A team of students led by Humanities Unbounded Fellow Eva Michelle Wheeler will explore how culturally-bound language in African-American literature and film is rendered for international audiences and will map where and into which languages these translations are occurring. Students will use a reference dataset to build and annotate a translation corpus, explore the lexical choices and translation strategies employed by translators, and conduct a macro-level analysis of the geographic and linguistic spread of these types of translations. The results of this project will bring a quantitative dimension to what has largely been a qualitative analysis and will contribute to ongoing academic conversations about language, race, and globalization.  

Project Lead: Eva Wheeler

Human activity recognition (HAR) is a rapidly expanding field with a variety of applications from biometric authentication to developing home-based rehabilitation for people suffering from traumatic brain injuries. While HAR is traditionally performed using accelerometry data, a team of students led by researchers in the BIG IDEAS Lab will explore HAR with physiological data from wrist wearables. Using deep learning methods, students will extract features from wearable sensor data to classify human activity. The student team will develop a reproducible machine learning model that will be integrated into the Big Ideas Lab Digital Biomarker Discovery Pipeline (DBDP), which is a source of code for researchers and clinicians developing digital biomarkers from wearable sensors and mobile health technologies.
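
The sketch below illustrates the general shape of such a pipeline under simplified assumptions: synthetic wrist-signal windows, hand-crafted summary features, and a small neural network classifier standing in for the lab's actual deep learning models and DBDP code.

```python
# Hedged sketch of activity classification from windowed wearable signals.
# Synthetic heart-rate-like data stands in for real physiological streams.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(2)

def make_window(activity, length=60):
    """Simulate one 60-sample window of a wrist signal for a given activity."""
    base = {"rest": 65, "walk": 95, "run": 140}[activity]
    return base + rng.normal(0, 5, length)

activities = ["rest", "walk", "run"]
windows = [(make_window(a), a) for a in activities for _ in range(200)]

# Summary features per window; a deep model would learn these instead.
X = np.array([[w.mean(), w.std(), w.min(), w.max()] for w, _ in windows])
y = np.array([label for _, label in windows])

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0)
clf.fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```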

Project Lead: Jessilyn Dunn

Project Manager: Brinnae Bent

Disciplines involved: Health, Biology, Biomedical Engineering

Social and environmental contexts are increasingly recognized as factors that impact health outcomes of patients. This team will have the opportunity to collaborate directly with clinicians and medical data in a real-world setting. They will examine the association between social determinants and risk prediction for hospital admissions, and assess whether social determinants bias that risk in a systematic way. Applied methods will include machine learning, risk prediction, and assessment of bias. This Data+ project is sponsored by the Forge, Duke's center for actionable data science.

Project Leads: Shelly Rusincovitch, Ricardo Henao, Azalea Kim

Project Manager: Austin Talbot

Aaron Chai (Computer Science, Math) and Victoria Worsham (Economics, Math) spent ten weeks building tools to understand characteristics of successful oil and gas licenses in the North Sea. The team used data-scraping, merging, and OCR methods to create a dataset containing license information and work obligations, and they also produced ArcGIS visualizations of license and well locations. They had the chance to consult frequently with analytics professionals at ExxonMobil.

Click here to read the Executive Summary

 

Project Lead: Kyle Bradbury

Project Manager: Artem Streltsov

Yueru Li (Math) and Jiacheng Fan (Economics, Finance) spent ten weeks investigating abnormal behavior by companies bidding for oil and gas rights in the Gulf of Mexico. Working with data provided by the Bureau of Ocean Energy Management and ExxonMobil, the team used outlier detection methods to automate the flagging of abnormal behavior, and then used statistical methods to examine various factors that might predict such behavior. They had the chance to consult frequently with analytics professionals at ExxonMobil.

 

Click here to read the Executive Summary

 

Project Lead: Kyle Bradbury

Project Manager: Hyeongyul Roh

Team A: Video data extraction

Alexander Bendeck (Computer Science, Statistics) and Niyaz Nurbhasha (Economics) spent ten weeks building tools to extract player and ball movement in basketball games. Using freely available broadcast-angle video footage, which required extensive cleaning and pre-processing, the team used OpenPose software and employed neural network methodologies. Their pipeline fed into the predictive models of Team C.

Click here to read the Executive Summary

 

Team B: Modeling basketball data: offense

Anshul Shah (Computer Science, Statistics), Jack Lichtenstein (Statistics), and Will Schmidt (Mechanical Engineering) spent ten weeks building tools to analyze offensive play in basketball. Using 2014-15 Duke Men’s Basketball player-tracking data provided by SportVU, the team constructed statistical models that explored the relationship between different metrics of offensive productivity, and also used computational geometry methods to analyze the off-ball “gravity” of an offensive player.

Click here to read the Executive Summary

 

Team C: Modeling basketball data: defense

Lukengu Tshiteya (Statistics), Wenge Xie (ECE), and Joe Zuo (Computer Science, Statistics) spent ten weeks building tools to predict player movement in basketball games. Using SportVU data, including some pre-processed by Team A, the team built predictive RNN models that distinguish between 6 typical movement types, and created interactive visualizations of their findings in R Shiny.

Click here to read the Executive Summary

 

Team D: Visualizing basketball data

Shixing Cao (ECE) and Jackson Hubbard (Computer Science, Statistics) spent ten weeks building visualizations to help analyze basketball games. Using player tracking data from Duke basketball games, the team created visualizations of gameflow, networks of points and assists, and integrated all of their tools into an R Shiny app.

Click here to read the Executive Summary

 

Faculty Leads: Alexander Volfovsky, James Moody, Katherine Heller

Project Managers: Fan Bu, Heather Matthews, Harsh Parikh, Joe Zuo

Yanchen Ou (Computer Science) and Jiwoo Song (Chemistry, Mechanical Engineering) spent ten weeks building tools to assist in the analysis of smart meter data. Working with a large dataset of transformer and household data from the Kyrgyz Republic, the team built a data preprocessing pipeline and then used unsupervised machine-learning techniques to assess energy quality and construct typical user profiles.

 

Click here to read the Executive Summary

 

Faculty Lead: Robyn Meeks

Project Manager: Bernard Coles

Bernice Meja (Philosophy, Physics), Jessica Yang (Computer Science, ECE), and Tracey Chen (Computer Science, Mechanical Engineering) spent ten weeks building methods for Duke’s Office of Information Technology (OIT) to better understand information arising from “smart” (IoT) devices on campus. Working with data provided by an IoT testbed set up by OIT professionals, the team used a mixture of supervised and unsupervised machine-learning techniques and built a prototype device classifier.

 

Click here to read the Executive Summary

 

Project Lead: Will Brockselsby

Interested in understanding the types of attacks targeting Duke and other universities? Led by OIT and the IT Security Office, students will learn to analyze threat intelligence data to identify trends and patterns of attacks. Duke blocks an average of 1.5 billion malicious connection attempts per day and is working with other universities to share the attack data. One untapped area is research into the types of attacks and how universities are targeted. Students will collaborate with security and IT professionals to analyze the data and discern patterns.

Project Lead: Jesse Bowling

Project Manager: Susan Jacobs

Katelyn Chang (Computer Science, Math) and Haynes Lynch (Environmental Science, Policy) spent ten weeks building tools to analyze and visualize geospatial and remote sensing data arising from the Alligator River National Wildlife Refuge (ARNWR). The team produced interactive maps of physical characteristics that were tailored to specific refuge management professionals, and also built classifiers for vegetation detection in LandSat imagery.

 

Click here to read the Executive Summary

 

Faculty Leads: Justin Wright, Emily Bernhardt

Project Manager: Emily Ury

Dennis Harrsch, Jr. (Computer Science), Elizabeth Loschiavo (Sociology), and Zhixue (Mary) Wang (Computer Science, Statistics) spent ten weeks improving the team’s web platform that allows users to examine contraceptive use in low- and middle-income (LMIC) countries, as collected by the Demographic and Health Survey (DHS) contraceptive calendar. The team improved load times, reduced data-visualization latency, and increased the number of country surveys available in the platform from 3 to 55. The team also created a new app that allows users to explore the results of machine learning using this big data set.

This project will continue into the academic year via Bass Connections where student teams will refine the machine learning model results and explore the question of whether and how policymakers can use these tools to improve family planning in LMIC settings.

 

Click here to view the Executive Summary

 

Faculty Lead: Megan Huchko

Project Manager: Amy Finnegan

Nathaniel Choe (ECE) and Mashal Ali (Neuroscience) spent ten weeks developing machine-learning tools to analyze urodynamic detrusor pressure data of pediatric spina bifida patients from the Duke University Hospital. The team built a pipeline that went from raw time series data to signal analysis to dimension reduction to classification, and has the potential to assist in clinician diagnosis.

 

Click here to read the Executive Summary

 

Faculty Leads: Wilkins Aquino, Jonathan Routh

Project Manager: Zekun Cao

Varun Nair (Economics, Physics), Paul Rhee (Computer Science), Jichen Yang (Computer Science, ECE), and Fanjie Kong (Computer Vision) spent ten weeks helping to adapt deep learning techniques to inform energy access decisions.

 

Click here to read the Executive Summary

 

Faculty Lead: Kyle Bradbury

Project Manager: Fanjie Kong

Yoav Kargon (Mechanical Engineering) and Tommy Lin (Chemistry, Computer Science) spent ten weeks working with data from the Water Quality Portal (WQP), a large national dataset of water quality measurements aggregated by the USGS and EPA. The team went all the way from raw data to the production of Pondr, an interactive and comprehensive tool built with R Shiny that permits users to investigate and visualize data coverage, values, and trends from the WQP.

 

Click here to read the Executive Summary

 

Faculty Lead: Jim Heffernan

Project Manager: Nick Bruns

Marco Gonzales Blancas (Civil Engineering) and Mengjie Xiu (Masters, Biostatistics) spent ten weeks building tools to help Duke reduce its energy footprint and achieve carbon neutrality by 2024. The team processed and analyzed troves of utility consumption data and then created practical monthly energy use reports for each school at Duke. These reports show historical usage trends, provide energy benchmarks for comparison, and make practical suggestions for energy savings.

Click here to read the Executive Summary

 

Faculty Lead: Billy Pizer

Project Manager: Sophia Ziwei Zhu

Cathy Lee (Statistics) and Jennifer Zheng (Math, Emory University) spent ten weeks building tools to help Duke University Libraries better understand its journal purchasing practice. Using a combination of web-scraping and data-merging algorithms, the team created a dashboard to help library strategists visualize and optimize journal selection.

 

Click here to read the Executive Summary

 

Faculty Leads: Angela Zoss, Jeff Kosokoff

Project Manager: Chi Liu

Micalyn Struble (Computer Science, Public Policy), Xiaoqiao Xing (Economics), and Eric Zhang (Math) spent ten weeks exploring the use of neuroscience as evidence in criminal trials. Working with a large set of case files downloaded from WestLaw, the team used natural language processing to build a predictive model that has the potential to automate the process of locating neuroscience-relevant cases in such databases.
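
A stripped-down illustration of this kind of case-screening model appears below: TF-IDF features feeding a logistic-regression classifier, with invented case snippets standing in for the WestLaw corpus and the team's actual model.

```python
# Hedged sketch of flagging neuroscience-relevant case text.
# The toy snippets below are invented; real input would be WestLaw case files.
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

docs = [
    "defendant's counsel introduced an fMRI scan as mitigating evidence",
    "expert testified about frontal lobe damage and impulse control",
    "the parties dispute the terms of the commercial lease agreement",
    "plaintiff alleges breach of contract over delayed shipment",
]
labels = [1, 1, 0, 0]  # 1 = neuroscience-relevant, 0 = not

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(docs, labels)

print(clf.predict(["neuroimaging evidence of brain injury was presented at sentencing"]))
```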

 

Click here to read the Executive Summary

 

Faculty Lead: Nita Farahany

Project Manager: William Krenzer

The Middle Passage, the route by which most enslaved persons were brought across the Atlantic to North America, is a critical locus of modern history, yet it has been notoriously difficult to document or memorialize. The ultimate aim of this project is to employ the resources of digital mapping technologies as well as the humanistic methods of history, literature, philosophy, and other disciplines to envision how best to memorialize the enslaved persons who lost their lives between their homelands and North America. To do this, the students combined previously disparate data and archival sources to discover where on their journeys enslaved persons died. Because of the nature of the data itself and the history it represents, the team engaged in ongoing conversations about various ways of visualizing its findings, and continuously evaluated the ethics of the data’s provenance and their own methodologies and conclusions. A central goal for the students was to discover what contribution digital data analysis methods could make to the project of remembering itself.

 

The group worked with two datasets: the Trans-Atlantic Slave Trade Database (www.slavevoyages.org), an SPSS-formatted database currently run out of Emory University, containing data on 36,002 individual slaving expeditions between 1514 and 1866; and the Climatological Database for the World’s Oceans 1750-1850 (CLIWOC) (www.kaggle.com/cwiloc/climate-data-from-ocean-ships), a dataset composed of digitized records from the daily logbooks of ocean vessels, originally funded by the European Union in 2001 for purposes of tracking historical climate change. This second dataset includes 280,280 observational records of daily ship locations, climate data, and other associated information. The team employed archival materials to confirm (and disconfirm) overlaps between the two datasets: the students identified 316 ships bearing the same name across the datasets, of which they confirmed 35 matching slaving voyages.
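
The sketch below illustrates the general idea of that cross-dataset matching step, joining toy voyage and logbook tables on a normalized ship name; the column names and records are invented, and, as the team found, name matches still require archival confirmation.

```python
# Hedged sketch of the cross-dataset matching step: join voyage records to
# ship logbooks on a normalized ship name. Column names are assumptions.
import pandas as pd

voyages = pd.DataFrame({
    "ship_name": ["Brookes", "The Hannibal", "Zong "],
    "year": [1781, 1693, 1781],
})
logbooks = pd.DataFrame({
    "vessel": ["brookes", "hannibal", "mary"],
    "lat": [6.1, 4.9, 50.2],
    "lon": [-11.3, -2.7, -4.1],
})

def normalize(name: str) -> str:
    """Lower-case, strip whitespace, and drop a leading article."""
    name = name.strip().lower()
    return name[4:] if name.startswith("the ") else name

voyages["key"] = voyages["ship_name"].map(normalize)
logbooks["key"] = logbooks["vessel"].map(normalize)

candidates = voyages.merge(logbooks, on="key", how="inner")
print(candidates)  # candidate matches still require archival confirmation
```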

 

The students had two central objectives: first, to locate where and why enslaved Africans died along the Middle Passage, and, second, to analyze patterns in the mortality rates. The group found significant patterns in the mortality data in both spatial and temporal terms (full results can be found here). At the same time, the team also examined the ethics of creating visualizations based on data that were recorded by the perpetrators of the slave trade—opening up space for further developments of this project that would include more detailed archival and theoretical work.

 

Click here to read the Executive Summary

 

Image credit:

J.M.W. Turner, Slave Ship, 1840, Museum of Fine Arts, Boston (public domain)

Faculty Lead: Charlotte Sussman

Project Manager: Emma Davenport

Ellis Ackerman (Math, NCSU), Rodrigo Araujo (Computer Science), and Samantha Miezio (Public Policy) spent ten weeks building tools to help understand the scope, cause, and effects of evictions in Durham County. Using evictions data recorded by the Durham County Sheriff’s Department and demographic data from the American Community Survey, the team investigated relationships between rent and evictions, created cost-benefit models for eviction diversion efforts, and built interactive visualizations of eviction trends. They had the opportunity to consult with analytics professionals from DataWorks NC.

Project Leads: Tim Stallmann, John Killeen, Peter Gilbert

Project Manager: Libby McClure

 

The aim of this project was to explore how U.S. mass media, particularly newspapers, enlists text and imagery to portray human rights, genocide, and crimes against humanity from World War II until the present. From the Holocaust to Cambodia, from Rwanda to Myanmar, such representation has political consequences. Coined by Raphael Lemkin, a Polish lawyer who fled Hitler’s antisemitism, the term “genocide” was first introduced to the American public in a Washington Post op-ed in 1944. Since its legal codification by the United Nations Convention for the Prevention of Genocide in 1948, the term has circulated, been debated, been used to describe events that pre-date it (such as the displacement and genocide of Native People in the Americas), and been shaped by numerous forces, especially the words and images published in newspapers. Alongside the definition of “genocide,” other key concepts, specifically “crimes against humanity,” have attempted to label, and thus name the story of, targeted mass violence. Conversely, the concept of “human rights,” enshrined in the 1948 UN Declaration, seeks to name a presence of rights instead of their absence.

 

During the summer, the team focused their work on evaluating the language used in Western media to represent instances of genocide and how such language varied based on the location and time period of the conflict. In particular, the team’s efforts centered on Rwanda and Bosnia as important case studies, affording them the chance to compare nearly simultaneous reporting on two well-known genocides. The language used by reporters in these two cases showed distinct polarizations of terminology (for instance, while “slaughter” was much more common than “murder” in discussions of the Rwanda genocide, the inverse was true for Bosnia).

 

Click here to read the Executive Summary

 

Faculty Leads: Nora Nunn, Astrid Giugni

How Much Profit is Too Much Profit?

Chris Esposito (Economics), Ruoyu Wu (Computer Science), and Sean Yoon (Masters, Decision Sciences) spent ten weeks building tools to investigate the historical trends of price gouging and excess profits taxes in the United States of America from 1900 to the present. The team used a variety of text-mining methods to create a large database of historical documents, analyzed historical patterns of word use, and created an interactive R Shiny app to display their data and analyses.

Click here to read the Executive Summary

 

(cartoon from The Masses July 1916)

Faculty Lead: Sarah Deutsch

Project Manager: Evan Donahue

Maria Henriquez (Computer Science, Statistics) and Jacob Sumner (Biology) spent ten weeks building tools to help the Michael W. Krzyzewski Human Performance Lab best utilize its data from Duke University student athletes. The team worked with a large collection of athlete strength, balance, and flexibility measurements collected by the lab. They improved the K Lab’s data pipeline, created a predictive model for injury risk, and developed interactive web-based individualized injury risk reports.

Click here to read the Executive Summary

Faculty Lead: Dr. Tim Sell
Project Manager: Brinnae Bent

 

 

Vincent Wang (Computer Science, CE), Karen Jin (Bio/Stats), and Katherine Cottrell (Computer Science) spent ten weeks building tools to educate the public about lake dynamics and ecosystem health. Using data collected over a period of 50 years at the Experimental Lakes Area (ELA) in Ontario, the team preprocessed and merged datasets, made a series of data visualizations, and produced an interactive website using R Shiny.

Click here to read the Executive Summary

 

Faculty Lead: Kateri Salk

Project Manager: Kim Bourne

Vivek Sahukar (Masters, Data Science), Yuval Medina (Computer Science), and Jin Cho (Computer Science/Electrical & Computer Engineering) spent ten weeks creating tools to help augment the experience of users in the StreamPULSE community. The team created an interactive guide and used data sonification methods to help users navigate and understand the data, and they used a mixture of statistical and machine-learning methods to build out an outlier detection and data cleaning pipeline.

Click here to read the Executive Summary

Faculty Leads: Emily Bernhardt, Jim Heffernan

Project Managers: Alice Carter, Michael Vlah

Aidan Fitzsimmons (Public Policy, Mathematics, Electrical & Computer Engineering), Joe Choo (Mathematics, Economics) and Brooke Scheinberg (Mathematics) spent ten weeks partnering with the Durham Crisis Intervention Team, the Criminal Justice Resource Center, and the Stepping Up Initiative. Utilizing booking data of 57,346 individuals provided by the Durham County Jail, this team was able to create visualizations and predictive models that illustrate patterns of recidivism, with a focus on the subset of the population with serious mental illness (SMI). These results could assist current efforts in diverting people with SMI from the criminal justice system and into care.

Click here to read the Executive Summary

Faculty Leads: Nicole Schramm-Sapyta, Michele Easter

Project Manager: Ruth Wygle

The students in this project worked on a pervasive question in literary, film, and copyright studies: how do we know when a new work of fiction borrows from an older one? Many times, works are appropriated, rather than straightforwardly adapted, which makes it difficult for human readers to trace. As we continue to remake and repurpose previous texts into new forms that combine hundreds of references to other works (such as Ready Player One), it becomes increasingly laborious to track all the intertextual elements of a single text. While some borrowings are easy to spot, as in the case of Marvel films that are straightforward adaptations of comic book storylines and aesthetics, others are more subtle, as when Disney reinterpreted Hamlet and African oral traditions to create The Lion King. Thousands of new stories are created each day, but how do we know if we are borrowing or appropriating a previous text? Are there works that have adapted previous ones that we have yet to identify?

 

The students worked with data from over 16.7 million books from HathiTrust, with critical analysis in scholarly articles accessible through JSTOR, and with the topic categories in Wikipedia. The group used Latent Dirichlet Allocation (LDA), a generative model that assumes that all documents are a mixture of topics, to represent key themes and topics as a distribution over words. The students developed a flexible and graduated heuristic for identifying a work as an adaptation; the more pre-selected categories a work fit under, the more likely it was to be marked as an adaptation by their model. Over the summer, the students came to appreciate that all digital humanistic methodologies are contestable and dependent on traditional critical work.
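
The sketch below shows the core LDA step on a few invented book summaries: documents are represented as mixtures of topics, and each topic as a distribution over words. It is an illustration of the technique, not the students' actual pipeline.

```python
# Hedged sketch of the LDA step: represent documents as mixtures of topics,
# each topic a distribution over words. Toy texts stand in for HathiTrust volumes.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

texts = [
    "the prince of denmark mourns his father and plots revenge",
    "the lion cub flees the kingdom after his father the king dies",
    "the castaway builds a shelter and farms the island alone",
    "a sailor is shipwrecked and survives for years on a desert island",
]

vectorizer = CountVectorizer(stop_words="english")
counts = vectorizer.fit_transform(texts)

lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(counts)  # each row: topic mixture for one document

words = vectorizer.get_feature_names_out()
for k, component in enumerate(lda.components_):
    top = [words[i] for i in component.argsort()[-5:][::-1]]
    print(f"topic {k}:", top)
print(doc_topics.round(2))
```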

 

Click here to read the Executive Summary

Faculty Lead: Grant Glass

Jett Hollister (Mechanical Engineering) and Lexx Pino (Computer Science, Math) joined Economics majors Shengxi Hao and Cameron Polo in a ten-week study of the late 2000s housing bubble. The team scraped, merged, and analyzed a variety of datasets to investigate different proposed causes of the bubble. They also created interactive visualizations of their data which will eventually appear on a website for public consumption.

Click here to read the Executive Summary

 

Faculty Lead: Lee Reiners

Project Manager: Kate Coulter

Cassandra Turk (Economics) and Alec Ashforth (Economics, Math) spent ten weeks building tools to help minimize the risk of trading electricity on the wholesale energy market. The team combined data from many sources and employed a variety of outlier-detection methods and other statistical tools in order to create a large dataset of extreme energy events and their causes. They had the opportunity to consult with analytics professionals from Tether Energy.
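
One way to sketch the outlier-detection idea is with an Isolation Forest over a synthetic hourly price series, as below; the prices, the injected spike, and the contamination rate are all assumptions for illustration, not the team's data or methods.

```python
# Hedged sketch of flagging extreme events in a wholesale-price series with
# an Isolation Forest; the synthetic prices below are not real market data.
import numpy as np
import pandas as pd
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(3)
hours = pd.date_range("2019-07-01", periods=24 * 30, freq="h")
price = rng.normal(35, 5, len(hours))
price[500:504] += 300  # inject a short price spike

features = pd.DataFrame({"price": price, "hour": hours.hour}, index=hours)
detector = IsolationForest(contamination=0.01, random_state=0)
flags = detector.fit_predict(features)  # -1 marks suspected extreme events

print(features[flags == -1].head())
```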

Click here to read the Executive Summary

 

Project Lead: Eric Butter, Tether

 

Andre Wang (Math, Statistics), Michael Xue (Computer Science, ECE), and Ryan Culhane (Computer Science) spent ten weeks exploring the role played by emotion in speech-focused machine-learning. The team used a variety of techniques to build emotion recognition pipelines, and incorporated emotion into generated speech during text-to-speech synthesis.

Click here to read the Executive Summary

 

Faculty Leads: Vahid Tarokh, Jie Ding

Project Manager: Enmao Diao

Brooke Erikson (Economics/Computer Science), Alejandro Ortega (Math), and Jade Wu (Computer Science) spent ten weeks developing open-source tools for automatic document categorization, PDF table extraction, and data identification. Their motivating application was provided by Power for All’s Platform for Energy Access Knowledge, and they frequently collaborated with professionals from that organization.

Click here to read the Executive Summary

 

Jake Epstein (Statistics/Economics), Emre Kiziltug (Economics), and Alexander Rubin (Math/Computer Science) spent ten weeks investigating the existence of relative value opportunities in global corporate bond markets. They worked closely with a dataset provided by a leading asset management firm.

Click here for the Executive Summary

Maksym Kosachevskyy (Economics) and Jaehyun Yoo (Statistics/Economics) spent ten weeks understanding temporal patterns in the used construction machinery market and investigating the relationship between these patterns and macroeconomic trends.

They worked closely with a large dataset provided by MachineryTrader.com, and discussed their findings with analytics professionals from a leading asset management firm.

Click here to read the Executive Summary

Alec Ashforth (Economics/Math), Brooke Keene (Electrical & Computer Engineering), Vincent Liu (Electrical & Computer Engineering), and Dezmanique Martin (Computer Science) spent ten weeks helping Duke’s Office of Information Technology explore the development of an “e-advisor” app that recommends co-curricular opportunities to students based on a variety of factors. The team used collaborative and content-based filtering to create a recommender-system prototype in R Shiny.
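
The team's prototype was built in R Shiny; the sketch below is a Python analogue of the content-based half of such a recommender, ranking invented opportunity descriptions by cosine similarity to a student's interest text. It is illustrative only, not the team's implementation.

```python
# Hedged sketch of content-based filtering for an "e-advisor"-style recommender.
# Opportunity descriptions and the student profile are invented examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

opportunities = {
    "Data+ summer program": "ten week data science research projects",
    "Robotics club": "build and program autonomous robots",
    "Chamber orchestra": "classical music rehearsals and performances",
}
student_profile = "interested in statistics, machine learning, and research"

vectorizer = TfidfVectorizer()
matrix = vectorizer.fit_transform(list(opportunities.values()) + [student_profile])
scores = cosine_similarity(matrix[-1], matrix[:-1]).ravel()

ranked = sorted(zip(opportunities, scores), key=lambda t: t[1], reverse=True)
for name, score in ranked:
    print(f"{score:.2f}  {name}")
```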

Click here to read the Executive Summary

Statistical Science majors Eidan Jacob and Justina Zou joined forces with math major Mason Simon to build interactive tools that analyze and visualize the trajectories taken by wireless devices as they move across Duke’s campus and connect to its wireless network. They used de-identified data provided by Duke’s Office of Information Technology, and worked closely with professionals from that office.

Click here for the Executive Summary

Cecily Chase (Applied Math), Brian Nieves (Computer Science), and Harry Xie (Computer Science/Statistics) spent ten weeks understanding how algorithmic approaches can shed light on which data center tasks (“stragglers”) are typically slowed down by unbalanced or limited resources. Working with a real dataset provided by project client Lenovo, the team created a monitoring framework that flags stragglers in real time.
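
A minimal illustration of the flagging idea is below: mark any task whose duration far exceeds a robust estimate (median plus a multiple of the median absolute deviation) of its peers' durations. The numbers and threshold are invented, not Lenovo's data or the team's monitoring logic.

```python
# Hedged sketch of a straggler flag: mark tasks whose elapsed time far
# exceeds a robust estimate of their peers' durations. Numbers are invented.
import numpy as np

def flag_stragglers(durations_sec, k=3.0):
    """Return indices of tasks slower than median + k * MAD."""
    d = np.asarray(durations_sec, dtype=float)
    median = np.median(d)
    mad = np.median(np.abs(d - median)) or 1.0  # avoid a zero threshold
    return np.flatnonzero(d > median + k * mad)

batch = [12.1, 11.8, 12.5, 13.0, 12.2, 48.7, 12.4]  # one slow task
print(flag_stragglers(batch))  # -> [5]
```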

Click here to read the Executive Summary

David Liu (Electrical & Computer Engineering) and Connie Wu (Computer Science/Statistics) spent ten weeks analyzing data about walking speed from the 6th Vital Sign Study.

Integrating study data with public data from the American Community Survey, they built interactive visualization tools that will help researchers understand the study results and the representativeness of study participants.

Click here to read the Executive Summary

Lucas Fagan (Computer Science/Public Policy), Caroline Wang (Computer Science/Math), and Ethan Holland (Statistics/Computer Science) spent ten weeks understanding how data science can contribute to fact-checking methodology. Training on audio data from major news stations, they adapted OpenAI methods to develop a pipeline that moves from audio data to an interface that enables users to search for claims related to other claims that had been previously investigated by fact-checking websites.

This project will continue into the academic year via Bass Connections.

Click here to read the Executive Summary.

A team of students led by Professors Jonathan Mattingly and Gregory Herschlag will investigate gerrymandering in political districting plans. Students will improve on and employ an algorithm to sample the space of compliant redistricting plans for both state and federal districts. The output of the algorithm will be used to detect gerrymandering for a given district plan; this data will be used to analyze and study the efficacy of the idea of partisan symmetry. This work will continue the Quantifying Gerrymandering project, seeking to understand the space of redistricting plans and to find justiciable methods to detect gerrymandering. The ideal team has a mixture of members with programming backgrounds (C, Java, Python), statistical experience including possibly R, mathematical and algorithmic experience, and exposure to political science or other social science fields.

Read the latest updates about this ongoing project by visiting Dr. Mattingly's Gerrymandering blog.

Varun Nair (Mechanical Engineering), Tamasha Pathirathna (Computer Science), Xiaolan You (Computer Science/Statistics), and Qiwei Han (Chemistry) spent ten weeks creating a ground-truthed dataset of electricity infrastructure that can be used to automatically map the transmission and distribution components of the electric power grid. This is the first publicly available dataset of its kind, and will be analyzed during the academic year as part of a Bass Connections team.

Click here to read the Executive Summary

Kimberly Calero (Public Policy/Biology/Chemistry), Alexandra Diaz (Biology/Linguistics), and Cary Shindell (Environmental Engineering) spent ten weeks analyzing and visualizing data about disparities in Social Determinants of Health. Working with data provided by the MURDOCK Study, the American Community Survey, and the Google Places API, the team built a dataset and visualization tool that will assist the MURDOCK research team in exploring health outcomes in Cabarrus County, NC.

Click here to read the Executive Summary

Alexandra Putka (Biology/Neuroscience), John Madden (Economics), and Lucy St. Charles (Global Health/Spanish) spent ten weeks understanding the coverage and timeliness of maternal and pediatric vaccines in Durham. They used data from DEDUCE, the American Community Survey, and the CDC.

This project will continue into the academic year via Bass Connections.

Click here to read the Executive Summary

Dima Fayyad (Electrical & Computer Engineering), Sean Holt (Math), and David Rein (Computer Science/Math) spent ten weeks exploring tools to operationalize the application of distributed computing methodologies in the analysis of electronic medical records (EMR) at Duke.

As a case study, they applied these systems to a Natural Language Processing project on clinical narratives about growth failure in premature babies.

Click here to read the Executive Summary

Zhong Huang (Sociology) and Nishant Iyengar (Biomedical Engineering) spent ten weeks investigating the clinical profiles of rare metabolic diseases. Working with a large dataset provided by the Duke University Health System, the team used natural language processing techniques and produced an R Shiny visualization that enables clinicians to interactively explore diagnosis clusters.

Click here to read the Executive Summary

Samantha Garland (Computer Science), Grant Kim (Computer Science, Electrical & Computer Engineering), and Preethi Seshadri (Data Science) spent ten weeks exploring factors that influence patient choices when faced with intermediate-stage prostate cancer diagnoses. They used topic modeling in an analysis of a large collection of clinical appointment transcripts.

Click here for the Executive Summary

Nathan Liang (Psychology, Statistics), Sandra Luksic (Philosophy, Political Science), and Alexis Malone (Statistics) began their 10-week project as an open-ended exploration of how women are depicted both physically and figuratively in women's magazines, seeking to consider what role magazines play in the imagined and real lives of women.

Click here to read the Executive Summary

Jennie Wang (Economics/Computer Science) and Blen Biru (Biology/French) spent ten weeks building visualizations of various aspects of the lives of orphaned and separated children at six separate sites in Africa and Asia. The team created R Shiny interactive visualizations of data provided by the Positive Outcomes for Orphans study (POFO).

Click here to read the Executive Summary

Aaron Crouse (Divinity), Mariah Jones (Sociology), Peyton Schafer (Statistics), and Nicholas Simmons (English/Education) spent ten weeks consulting with leadership from the Parents Teacher Association at Glenn Elementary School in Durham. The team set up infrastructure for data collection and visualization that will aid the PTA in forming future strategy.

Click here to read the Executive Summary

In tracing the publication history, geographical spread, and content of “pirated” copies of Daniel Defoe’s Robinson Crusoe, Gabriel Guedes (Math, Global Cultural Studies), Lucian Li (Computer Science, History), and Orgil Batzaya (Math, Computer Science) explored the complications of looking at a data set that saw drastic changes over the last three centuries in terms of spelling and grammar, which offered new challenges to data cleanup. By asking questions of the effectiveness of “distant reading” techniques for comparing thousands of different editions of Robinson Crusoe, the students learned how to think about the appropriateness of myriad computational methods like doc2vec and topic modeling. Through these methods, the students started to ask, at what point does one start seeing patterns that were invisible at a human scale of reading (reading one book at a time)? While the project did not definitively answer these questions, it did provide paths for further inquiry.

The team published their results at: https://orgilbatzaya.github.io/pirating-texts-site/

Click here for the Executive Summary

Melanie Lai Wai (Statistics) and Saumya Sao (Global Health, Gender Studies) spent ten weeks developing a platform which enables users to understand factors that influence contraceptive use and discontinuation. Their work combined data from the Demographic and Health Surveys contraceptive calendar with open data about reproductive health and social indicators from the World Bank, World Health Organization, and World Population Prospects. This project will continue into the academic year via Bass Connections.

Click here to read the Executive Summary

Bob Ziyang Ding (Math/Stats) and Daniel Chaofan Tao (ECE) spent ten weeks understanding how deep learning techniques can shed light on single cell analysis. Working with a large set of single-cell sequencing data, the team built an autoencoder pipeline and a device that will allow biologists to interactively visualize their own data.
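
The sketch below shows a minimal autoencoder of the kind described, compressing per-cell expression vectors into a two-dimensional embedding for visualization; the random data, layer sizes, and training loop are placeholders, not the team's actual model.

```python
# Hedged sketch of an autoencoder that compresses per-cell expression vectors
# into a 2-D embedding for visualization. Random data stands in for real
# single-cell counts; architecture sizes are illustrative.
import torch
from torch import nn

n_cells, n_genes = 500, 100
expression = torch.rand(n_cells, n_genes)

model = nn.Sequential(              # encoder + decoder in one module
    nn.Linear(n_genes, 32), nn.ReLU(),
    nn.Linear(32, 2),               # 2-D bottleneck used for plotting
    nn.Linear(2, 32), nn.ReLU(),
    nn.Linear(32, n_genes),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(200):
    optimizer.zero_grad()
    loss = loss_fn(model(expression), expression)  # reconstruction loss
    loss.backward()
    optimizer.step()

encoder = model[:3]                 # slice out the encoder half
embedding = encoder(expression).detach()
print(embedding.shape)              # torch.Size([500, 2])
```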

Click here to read the Executive Summary

Ashley Murray (Chemistry/Math), Brian Glucksman (Global Cultural Studies), and Michelle Gao (Statistics/Economics) spent 10 weeks analyzing how the meaning and use of the word “poverty” changed in presidential documents from the 1930s to the present. The students found that American presidential rhetoric about poverty has shifted in measurable ways over time. Presidential rhetoric, however, doesn’t necessarily translate into policy change. As Michelle Gao explained, “The statistical methods we used provided another more quantitative way of analyzing the text. The database had around 130,000 documents, which is pretty impossible to read one by one and get all the poverty related documents by brute force. As a result, web-scraping and word filtering provided a more efficient and systematic way of extracting all the valuable information while minimizing human errors.” Through techniques such as linear regression, machine learning, and image analysis, the team effectively analyzed large swaths of textual and visual data. This approach allowed them to zero in on significant documents for closer and more in-depth analysis, paying particular attention to documents by presidents such as Franklin Delano Roosevelt or Lyndon B. Johnson, both leaders in what LBJ famously called “The War on Poverty.”
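
The sketch below illustrates the word-filtering step on a tiny invented table of presidential documents, counting poverty-related mentions by decade; it is a toy analogue of the team's workflow, not their code or data.

```python
# Hedged sketch of the filtering step: count presidential documents that
# mention poverty-related terms, grouped by decade. The tiny dataset
# below is invented purely for illustration.
import pandas as pd

docs = pd.DataFrame({
    "year": [1937, 1964, 1996, 2008],
    "text": [
        "one third of a nation ill-housed, ill-clad, ill-nourished",
        "this administration today declares unconditional war on poverty",
        "welfare reform and personal responsibility",
        "remarks on the economy and job creation",
    ],
})

terms = ["poverty", "poor", "welfare", "ill-housed"]
pattern = "|".join(terms)
docs["mentions_poverty"] = docs["text"].str.contains(pattern, case=False)

docs["decade"] = (docs["year"] // 10) * 10
print(docs.groupby("decade")["mentions_poverty"].sum())
```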

Click here for the Executive Summary

Natalie Bui (Math/Economics), David Cheng (Electrical & Computer Engineering), and Cathy Lee (Statistics) spent ten weeks helping the Prospect Management and Analytics office of Duke Development understand how a variety of analytic techniques might enhance their workflow. The team used topic modeling and named entity recognition to develop a pipeline that clusters potential prospects into useful categories.
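
A minimal sketch of such a clustering pipeline, assuming scikit-learn and a handful of invented prospect notes: topic mixtures are estimated with LDA and then grouped with k-means. This illustrates the general approach, not the team’s actual code or data.

```python
# Hypothetical sketch: clustering short prospect descriptions by topic mixture.
# The example texts and cluster count are assumptions for illustration only.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.cluster import KMeans

notes = [
    "alumna, founded a medical device startup, interested in engineering",
    "season ticket holder, gives annually to athletics",
    "endowed a scholarship, serves on a school advisory board",
    "venture partner, hosts regional alumni events",
]

counts = CountVectorizer(stop_words="english").fit_transform(notes)
topics = LatentDirichletAllocation(n_components=2, random_state=0).fit_transform(counts)

# Group prospects whose topic mixtures look alike.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(topics)
print(labels)
```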

Click here to read the Executive Summary

Tatanya Bidopia (Psychology, Global Health), Matthew Rose (Computer Science), and Joyce Yoo (Public Policy/Psychology) spent ten weeks doing a data-driven investigation of the relationship between mental health training of law enforcement officers and key outcomes such as incarceration, recidivism, and referrals for treatment. They worked closely with the Crisis Intervention Team, and they used jail data provided by the Sheriff’s Office of Durham County.

Click here to read the Executive Summary

Sophie Guo, Math/PoliSci major, Bridget Dou, ECE/CompSci major, Sachet Bangia, Econ/CompSci major, and Christy Vaughn spent ten weeks studying different procedures for drawing congressional boundaries, and quantifying the effects of these procedures on the fairness of actual election results.
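
One widely used summary of partisan fairness, the efficiency gap, gives a sense of the kind of metric such a study can compute from district-level results; this sketch is purely illustrative and is not necessarily the measure this team used.

```python
# Hypothetical sketch: the efficiency gap, one common summary of partisan fairness.
# The example vote counts are invented.

def efficiency_gap(district_results):
    """district_results: list of (votes_a, votes_b) tuples, one per district."""
    wasted_a = wasted_b = total = 0
    for votes_a, votes_b in district_results:
        district_total = votes_a + votes_b
        threshold = district_total // 2 + 1          # votes needed to win
        if votes_a > votes_b:
            wasted_a += votes_a - threshold          # winner's surplus votes
            wasted_b += votes_b                      # all losing votes are wasted
        else:
            wasted_b += votes_b - threshold
            wasted_a += votes_a
        total += district_total
    return (wasted_a - wasted_b) / total             # signed gap as a share of all votes

print(efficiency_gap([(55, 45), (60, 40), (30, 70)]))
```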

Anna Vivian (Physics, Art History) and Vinai Oddiraju (Stats) spent ten weeks working closely with the director of the Durham Neighborhood Compass. Their goal was to produce metrics for things like ambient stress and neighborhood change, to visualize these metrics within the Compass system, and to interface with a variety of community stakeholders in their work.

Maddie Katz (Global Health and Evolutionary Anthropology Major), Parker Foe (Math/Spanish, Smith College), and Tony Li (Math, Cornell) spent ten weeks analyzing data from the National Transgender Discrimination Survey. Their goal was to understand how the discrimination faced by the trans community is realized on a state, regional, and national level, and to partner with advocacy organizations around their analysis.

Sharrin Manor, Arjun Devarajan, Wuming Zhang, and Jeffrey Perkins explored a large collection of imagery data provided by the U.S. Geological Survey, with the goal of identifying solar panels using image recognition. They worked closely with the Energy Data Analytics Lab, part of the Energy Initiative at Duke.

ECE majors Mitchell Parekh and Yehan (Morton) Mo, along with IIT student Nikhil Tank, spent ten weeks understanding parking behavior at Duke. They worked closely with the Parking and Transportation Office, as well as with Vice President for Administration Kyle Cavanaugh.

Yanmin (Mike) Ma, mathematics/economics major, and Manchen (Mercy) Fang, electrical and computer engineering/computer science major, spent ten weeks studying historical archives and building a model to predict the price of pigs, relative to a number of interesting factors.

David Clancy, a Stats/Math/EnvSci major, and Tianyi Mu, an ECE/CompSci major, spent ten weeks studying the effects of weather, surroundings, and climate on the operational behavior of water reservoirs across the United States. They used a large dataset compiled by the U.S. Army Corps of Engineers, and they worked closely with Lauren Patterson from the Water Policy Program at Duke's Nicholas Institute for Environmental Policy Solutions. Project mentorship was provided by Alireza Vahid, a postdoctoral researcher in Electrical Engineering.

Luke Raskopf, PoliSci major and Xinyi (Lucy) Lu, Stats/CompSci major, spent ten weeks investigating the effectiveness of policies to combat unemployment and wage stagnation faced by working and middle-class families in the State of North Carolina. They worked closely with Allan Freyer at the North Carolina Justice Center.

Computer Science major Yumin Zhang and IIT student Akhil Kumar Pabbathi spent ten weeks working closely with Dr. Joe McClernon from Psychiatry and Behavioral Sciences to understand smoking and tobacco purchase behavior through activity space analysis.

Biomedical Engineering and Electrical and Computer Engineering major David Brenes, and Electrical and Computer Engineering/Computer Science majors Xingyu Chen and David Yang spent ten weeks working with mobile eye tracker data to optimize data processing and feature extraction. They generated their own video data with SMI Eye Tracking Glasses, and created computer vision algorithms to categorize subject gazing behavior in a grocery purchase decision-making environment.

Biomedical Engineering major Chi Kim Trinh and Biostatistics MS student Can Cui spent ten weeks constructing a computational and statistical framework to evaluate the effects of health coaching on Type II Diabetes patients’ quality metrics, including Hemoglobin A1c, blood pressure, eye exam consistency, tobacco use, and prescription adherence to statins, aspirin, and angiotensin-converting enzyme (ACE) inhibitors/angiotensin receptor blockers (ARBs).

Xinyu (Cindy) Li (Biology and Chemistry) and Emilie Song (Biology) spent ten weeks exploring the Black Queen Hypothesis, which predicts that co-operation in animal societies could be a result of genetic/functional trait losses, as well as polymorphism of workers in eusocial animals such as ants and termites. The goal was to investigate this idea in four different eusocial insect species.

BME major Neel Prabhu, along with CompSci and ECE majors Virginia Cheng and Cheng Lu, spent ten weeks studying how cells from embryos of the common fruit fly move and change in shape during development. They worked with Cell-Sheet-Tracker (CST), an algorithm developed by former Data+ student Roger Zou and faculty lead Carlo Tomasi. This algorithm uses computer vision to model and track a dynamic network of cells using a deformable graph.

Weiyao Wang (Math) and Jennifer Du, along with NCCU Physics majors Jarrett Weathersby and Samuel Watson, spent ten weeks learning about how search engines often provide results which are not representative in terms of race and/or gender. Working closely with entrepreneur Winston Henderson, their goal was to understand how to frame this problem via statistical and machine-learning methodology, as well as to explore potential solutions.

Matthew Newman (Sociology), Sonia Xu (Statistics), and Alexandra Zrenner (Economics) spent ten weeks exploring giving patterns and demographic characteristics of anonymized Duke donors. They worked closely with the Duke Alumni Affairs and Development Office, with the goal of understanding the data and constructing tools to generate data-driven insight about donor behavior.

Artem Streltsov (Masters Economics) and IIT Mechanical Engineering major Vinod Ramakrishnan spent ten weeks exploring North Carolina state budget documents. Working closely with the Budget and Tax Center, part of the North Carolina Justice Center, their goal was to help build a keystone tool that can be used for analysis of the state budget as well as future budget proposals.

Yuangling (Annie) Wang, a Math/Stats major, and Jason Law, a Math/Econ major, spent ten weeks analyzing message-testing data about the 2015 Marijuana Legalization Initiative in Ohio; the data were provided by Public Opinion Strategies, one of the nation's leading public opinion research firms.

The goal was to understand how statistics and machine learning might help develop microtargeting strategies for use in future campaigns.

Devri Adams (Environmental Science), Annie Lott (Statistics), and Camila Vargas Restrepo (Visual Media Studies, Psychology) spent ten weeks creating interactive and exploratory visualizations of ecological data. They worked with over sixty years of data collected at the Hubbard Brook Experimental Forest (HBEF) in New Hampshire.

Runliang Li (Math), Qiyuan Pan (Computer Science), and Lei Qian (Masters in Statistics and Economic Modelling) spent ten weeks investigating discrepancies between posted wait times and actual wait times for rides at Disney World. They worked with data provided by TouringPlans.

Robbie Ha (Computer Science, Statistics), Peilin Lai (Computer Science, Mathematics), and Alejandro Ortega (Mathematics) spent ten weeks analyzing the content and dissemination of images of the Syrian refugee crisis, as part of a general data-driven investigation of Western photojournalism and how it has contributed to our understanding of this crisis.

Ana Galvez (Cultural and Evolutionary Anthropology), Xinyu Li (Biology), and Jonathan Rub (Math, Computer Science) spent ten weeks studying the impact of diet on organ and bone growth in developing laboratory rats. The goal was to provide insight into the growth dynamics of these model organisms that could eventually be generalized to inform research on human development.

Building off the work of a 2016 Data+ team, Yu Chen (Economics), Peter Hase (Statistics), and Ziwei Zhao (Mathematics) spent ten weeks working closely with analytical leadership at Duke's Office of University Development. The project goal was to identify distinguishing characteristics of major alumni donors and to model their lifetime giving behavior.

Over ten weeks, Computer Science majors Daniel Bass-Blue and Susie Choi joined forces with Biomedical Engineering major Ellie Wood to prototype interactive interfaces from Type II diabetics' mobile health data. Their specific goals were to encourage patient self-management and to effectively inform clinicians about patient behavior between visits.

Over ten weeks, Computer Science majors Amber Strange and Jackson Dellinger joined forces with Psychology major Rachel Buchanan to perform a data-driven analysis of mental health intervention practices by the Durham Police Department. They worked closely with leadership from the Durham Crisis Intervention Team (CIT) Collaborative, made up of officers who have completed 40 hours of specialized training in mental illness and crisis intervention techniques.

A team of students led by Duke mathematician Marc Ryser and University of Southern California Pathology professor Darryl Shibata will characterize phenotypic evolution during the growth of human colorectal tumors. 

A team of students led by Dr. Shanna Sprinkle of Duke Surgery will combine success metrics of Duke Surgery residents from a set of databases and create a user interface for residency program directors and possibly residents themselves to view and better understand residency program performance.

Boning Li (Masters Electrical and Computer Engineering), Ben Brigman (Electrical and Computer Engineering), Gouttham Chandrasekar (Electrical and Computer Engineering), Shamikh Hossain (Computer Science, Economics), and Trishul Nagenalli (Electrical and Computer Engineering, Computer Science) spent ten weeks creating datasets of electricity access indicators that can be used to train a classifier to detect electrified villages. This coming academic year, a Bass Connections Team will use these datasets to automatically find power plants and map electricity infrastructure.

Lauren Fox (Cultural Anthropology) and Elizabeth Ratliff (Statistics, Global Health) spent ten weeks analyzing and mapping pedestrian, bicycle, and motor vehicle data provided by Durham's Department of Transportation. This project was a continuation of a seminar on "ghost bikes" taught by Prof. Harris Solomon.

Felicia Chen (Computer Science, Statistics), Nikkhil Pulimood (Computer Science, Mathematics), and James Wang (Statistics, Public Policy) spent ten weeks working with Counter Tools, a local nonprofit that provides support to over a dozen state health departments. The project goal was to understand how open source data can lead to the creation of a national database of tobacco retailers.

Selen Berkman (ECE, CompSci), Sammy Garland (Math), and Aaron VanSteinberg (CompSci, English) spent ten weeks undertaking a data-driven analysis of the representation of women in film and in the film industry, with special attention to a metric called the Bechdel Test. They worked with data from a number of sources, including fivethirtyeight.com and the-numbers.com.

Over ten weeks, BME and ECE majors Serge Assaad and Mark Chen joined forces with Mechanical Engineering Masters student Guangshen Ma to automate the diagnosis of vascular anomalies from Doppler Ultrasound data, with goals of improving diagnostic accuracy and reducing physician time spent on simple diagnoses. They worked closely with Duke Surgeon Dr. Leila Mureebe and Civil and Environmental Engineering Professor Wilkins Aquino.

Liuyi Zhu (Computer Science, Math), Gilad Amitai (Masters, Statistics), Raphael Kim (Computer Science, Mechanical Engineering), and Andreas Badea (East Chapel Hill High School) spent ten weeks streamlining and automating the process of electronically rejuvenating medieval artwork. They used a 14th-century altarpiece by Francescuccio Ghissi as a working example.

Over ten weeks, Math/CompSci majors Benjamin Chesnut and Frederick Xu joined forces with International Comparative Studies major Katharyn Loweth to understand the myriad academic pathways traveled by undergraduate students at Duke. They focused on data from Mathematics and the Duke Global Health Institute, and worked closely with departmental leadership from both areas.

Angelo Bonomi (Chemistry), Remy Kassem (ECE, Math), and Han (Alessandra) Zhang (Biology, CompSci) spent ten weeks analyzing data from social networks for communities of people facing chronic conditions. The social network data, provided by MyHealth Teams, contained information shared by community members about their diagnoses, symptoms, co-morbidities, treatments, and details about each treatment.

Over ten weeks, Mathematics/Economics majors Khuong (Lucas) Do and Jason Law joined forces with Analytical Political Economy Masters student Feixiao Chen to analyze the spatio-temporal distribution of birth addresses in North Carolina. The goal of the project was to understand how and whether the distributions of different demographic categories (white/black, married/unmarried, etc.) differed, and how these differences connected to a variety of socioeconomic indicators.

Furthering the work of a 2016 Data+ team on predictive modeling of pancreatic cancer from electronic medical record (EMR) data, Siwei Zhang (Masters Biostatistics) and Jake Ukleja (Computer Science) spent ten weeks building a model to predict pancreatic cancer from that data. They worked with nine years’ worth of EMR records, including ICD9 diagnostic codes, covering more than 200,000 patients.
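
As a toy illustration of how ICD9 histories can feed a predictive model, the sketch below encodes per-patient code strings as count features and fits a simple baseline classifier; the codes, outcome labels, and model choice are placeholders, not the team’s method or data.

```python
# Hypothetical sketch: turning per-patient ICD9 code histories into a feature
# matrix and fitting a simple baseline classifier. Codes and labels are invented.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Each patient's diagnostic history as a space-separated string of ICD9 codes.
histories = ["250.00 401.9 577.0", "401.9 272.4", "577.1 250.00 786.50", "272.4 786.50"]
has_cancer = [1, 0, 1, 0]   # placeholder outcome labels

X = CountVectorizer(token_pattern=r"\S+").fit_transform(histories)
clf = LogisticRegression().fit(X, has_cancer)
print(clf.predict(X))       # in practice: evaluate on held-out patients
```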

John Benhart (CompSci, Math) and Esko Brummel (Masters in Bioethics and Science Policy) spent ten weeks analyzing current and potential scholarly collaborations within the community of Duke faculty. They worked closely with the leadership of the Scholars@Duke database.

Zijing Huang (Statistics, Finance), Artem Streltsov (Masters Economics), and Frank Yin (ECE, CompSci, Math) spent ten weeks exploring how Internet of Things (IoT) data could be used to understand potential online financial behavior. They worked closely with analytical and strategic personnel from TD Bank, who provided them with a massive dataset compiled by Epsilon, a global company that specializes in data-driven marketing.

William Willis (Mechanical Engineering, Physics) and Qitong Gao (Masters Mechanical Engineering) spent ten weeks with the goal of mapping the ocean floor autonomously with high resolution and high efficiency. Their efforts were part of a team taking part in the Shell Ocean Discovery XPRIZE, and they made extensive use of simulation software built from Bellhop, an open-source program distributed by HLS Research.

Over ten weeks, Biology major Jacob Sumner and Neuroscience major Julianna Zhang joined forces with Biostatistics Masters student Jing Lyu to analyze potential drug diversion in the Duke Medical Center. Early detection of drug diversion assists health care providers in helping patients recover from their condition, as well as in mitigating the effects on any patients under their care.

Gary Koplik (Masters in Economics and Computation) and Matt Tribby (CompSci, Statistics) spent ten weeks investigating the burden of rare diseases on the Duke University Health System (DUHS). They worked with a massive set of ICD diagnosis codes and visit data provided by DUHS.

Linda Adams (CompSci), Amanda Jankowski (Sociology, Global Health), and Jessica Needleman (Statistics/Economics) spent ten weeks prototyping small-area mapping of public-health information within the Durham Neighborhood Compass, with a focus on mortality data. They worked closely with the director of DataWorks NC, an independent data intermediary dedicated to democratizing the use of quantitative information.

Over ten weeks, Public Policy major Amy Jiang and Mathematics and Computer Science major Kelly Zhang joined forces with Economics Masters student Amirhossein Khoshro to investigate academic hiring patterns across American universities, as well as analyzing the educational background of faculty. They worked closely with Academic Analytics, a provider of data and solutions for universities in the U.S. and the U.K.

Albert Antar (Biology) and Zidi Xiu (Biostatistics) spent ten weeks leveraging Duke Electronic Medical Record (EMR) data to build predictive models of pancreatic ductal adenocarcinoma (PDAC). PDAC is the fourth leading cause of cancer deaths in the US and is most often diagnosed at stage IV, with a survival rate of only 1% and a life expectancy measured in months. Diagnosing PDAC is very challenging because of the tumor's deep anatomical placement and the significant risk posed by traditional biopsy. The goal of the project was to use EMR data to identify potential avenues for diagnosing PDAC in the early, treatable stages of the disease.

Joy Patel (Math and CompSci) and Hans Riess (Math) spent ten weeks analyzing massive amounts of simulated weather data supplied by Spectral Sciences Inc. Their goal was to investigate ways in which advanced mathematical techniques could assist in quantifying storm intensity, helping to augment today's more qualitative methods.

Priya Sarkar (Computer Science), Lily Zerihun (Biology and Global Health), and Anqi Zhang (Biostatistics) spent ten weeks utilizing Duke Electronic Medical Record (EMR) data to identify subgroups of diabetic patients, and predict future complications associated with Type II Diabetes.

Vivek Sriram (Computer Science and Math), Lina Yang (Biostatistics), and Pablo Ortiz (BME) spent ten weeks working in close collaboration with the Department of Biostatistics and Bioinformatics to implement an image analysis pipeline for immunofluorescence microscopy images of developing mouse lungs.

Emily Horn (Public Policy, Global Health), Aasha Reddy (Economics), and Shanchao Wang (Masters Economics) spent ten weeks working with data from the National Asset Scorecard for Communities of Color (NASCC), an ongoing survey project that gathers information about household assets and debts at a detailed level of race and national origin. They worked closely with faculty and researchers from the Samuel DuBois Cook Center on Social Equity.

Computer Science and Psychology major Molly Chen and Neuroscience major Emily Wu spent ten weeks working with patient diagnosis co-occurrence data derived from Duke Electronic Medical Records to develop network visualizations of co-occurring disorders within demographic groups. Their goal was to make healthcare more holistic and to reduce healthcare disparities by improving patient and provider awareness of co-occurring disorders for patients within similar demographic groups.
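
The sketch below shows one way such a co-occurrence network could be assembled with networkx before visualization; the patient records are invented and the edge-weighting scheme is an assumption, not the team’s pipeline.

```python
# Hypothetical sketch: building a diagnosis co-occurrence network with networkx.
# Patient records here are invented; edge weights count shared appearances.
from itertools import combinations
import networkx as nx

patient_diagnoses = [
    {"hypertension", "type 2 diabetes", "hyperlipidemia"},
    {"type 2 diabetes", "depression"},
    {"hypertension", "hyperlipidemia"},
]

G = nx.Graph()
for diagnoses in patient_diagnoses:
    for a, b in combinations(sorted(diagnoses), 2):
        # Increment the weight each time two disorders appear in the same record.
        w = G.get_edge_data(a, b, default={"weight": 0})["weight"]
        G.add_edge(a, b, weight=w + 1)

print(sorted(G.edges(data="weight")))
```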

The team built a ground truth dataset comprising satellite images, building footprints, and building heights (LIDAR) for more than 40,000 buildings, along with road annotations. This dataset can be used to train computer vision algorithms to determine a building’s volume from an image, and it is a significant contribution to the broader research community, with applications in urban planning, civil emergency mitigation, and human population estimation.

Lindsay Hirschhorn (Mechanical Engineering) and Kelsey Sumner (Global Health and Evolutionary Anthropology) spent ten weeks determining optimal vaccination clinic locations in Durham County for a simulated Zika virus outbreak. They worked closely with researchers at RTI International to construct models of disease spread and health impact, and developed an interactive visualization tool.

Joel Tewksbury (BME) and Miriam Goldman (Math and Statistics, Arizona State University) spent ten weeks analyzing time-series scores of visual adaptation to darkness from over 1,200 study participants to identify trends in night vision and, ultimately, genetic markers that might confer a visual advantage.

Anne Driscoll (Economics, Statistical Science), and Austin Ferguson (Math, Physics) spent ten weeks examining metrics for inter-departmental cooperativity and productivity, and developing a collaboration network of Duke faculty. This project was sponsored by the Duke Clinical and Translational Science Award, with the larger goal of promoting collaborative success in the School of Medicine and School of Nursing.

Statistical Science majors Nathaniel Brown and Corey Vernot, and Economics student Guan-Wun Hao spent ten weeks exploring changes in food purchase behavior and nutritional intake following a new Metformin prescription for Type II Diabetes. They worked closely with Matthew Harding and researchers in the BECR Center, as well as Dr. Susan Spratt, an endocrinologist in Duke Medicine.

Computer Science majors Erin Taylor and Ian Frankenburg, along with Math major Eric Peshkin, spent ten weeks understanding how geometry and topology, in tandem with statistics and machine learning, can aid in quantifying anomalous behavior in cyber-networks. The team was sponsored by Geometric Data Analytics, Inc., and used real anonymized Netflow data provided by Duke's Information Technology Security Office.

Molly Rosenstein, an Earth and Ocean Sciences major, and Tess Harper, an Environmental Science and Spanish major, spent ten weeks developing interactive data applications for use in Environmental Science 101, taught by Rebecca Vidra.

Undergraduate students Ellie Burton (BioPhysics/Math, Johns Hopkins University), Kevin Kuo (Electrical and Computer Engineering), and GiSeok Choi (Electrical and Computer Engineering/Math) joined a research group led by Douglas Boyer and Professor Ingrid Daubechies, testing and developing mathematical and statistical methodology for measuring similarities between bones and teeth.

The goal of this project is to take a large amount of data from the Massive Open Online Courses offered by Duke professors and produce from it a coherent and compelling data analysis challenge that might then be used for a Duke or nationwide data analysis competition.

Kelsey Sumner, EvAnth and Global Health major, and Christopher Hong, CompSci/ECE major, spent ten weeks analyzing high-dimensional microRNA data taken from patients with viral and/or bacterial conditions. They worked closely with the medical faculty and practitioners who generated the data.

Kang Ni, Math/Econ major, Kehan Zhang, Econ/Stats major, and Alex Hong spent ten weeks investigating a large collection of grocery store transaction data. They worked closely with Matt Harding and the Behavioral Economics and Healthy Food Choice Research Center (BECR Center).

Ethan Levine, Annie Tang, and Brandon Ho spent ten weeks investigating whether personality traits can be used to predict how people make risky decisions. They used a large dataset collected by the lab of Prof. Scott Huettel, and were mentored by graduate students Emma Wu Dowd and Jonathan Winkle.

Spenser Easterbrook, a Philosophy and Math double major, joined Biology majors Aharon Walker and Nicholas Branson in a ten-week exploration of the connections between journal publications from the humanities and the sciences. They were guided by Rick Gawne and Jameson Clarke, graduate students from Philosophy and Biology.