Research

Research projects at Rhodes iiD focus on building connections. We encourage the cross-pollination of ideas across disciplines and the development of new forms of collaboration that will advance research and education across the full spectrum of disciplines at Duke. The topics below show areas of research focus at Rhodes iiD. See all of our research.

KC and Patrick led two hands-on data workshops for ENVIRON 335: Drones in Marine Biology, Ecology, and Conservation. These labs introduced students to examples of how drones are currently being used as a remote sensing tool to monitor marine megafauna and their environments, and how machine learning can be used to efficiently analyze remote sensing datasets. The first lab focused specifically on how drones are being used to collect aerial images of whales in order to measure changes in body condition and help monitor populations. Students were introduced to the methods for making accurate measurements and then had the opportunity to measure whales themselves. The second lab introduced analysis methods that use computer vision and deep neural networks to detect, count, and measure objects of interest in remote sensing data. This work provided students in the environmental sciences with an introduction to new techniques in machine learning and remote sensing that can be powerful multipliers of effort when analyzing large environmental datasets.

This two-week teaching module in an introductory-level undergraduate course invites students to explore the power of Twitter in shaping public discourse. The project supplements the close-reading methods that are central to the humanities with large-scale social media analysis. This exercise challenges students to consider how applying visualization techniques to a dataset too vast for manual apprehension might enable them to identify smaller subsets of data and individual tweets for granular inspection—as well as to determine which factors do not lend themselves to close reading at all. Employing an original dataset of almost one million tweets focused on the contested 2018 Florida midterm elections, students develop skills in using visualization software, generating research questions, and creating novel visualizations to answer those questions. They then evaluate and compare the affordances of large-scale data analytics with investigation of individual tweets, and draw on their findings to debate the role of social media in shaping public conversations surrounding major national events. This project was developed as a collaboration among the English Department (Emma Davenport and Astrid Giugni), Math Department (Hubert Bray), Duke University Library (Eric Monson), and Trinity Technology Services (Brian Norberg).

Understanding how to generate, analyze, and work with datasets in the humanities is often a difficult task without learning how to code or program. In humanities-centered courses, we often privilege close reading or qualitative analysis over other ways of knowing, but by learning new quantitative techniques we better prepare students to tackle new forms of reading. This class will work with data from the HathiTrust to develop ideas for thinking about how large groups and different discourse communities thought of queens of antiquity like Cleopatra and Dido.

Please refer to https://sites.duke.edu/queensofantiquity/ for more information.

We introduced students to spatial analysis in QGIS and R using location data from two whale species tagged with satellite transmitters. Students were given satellite tracks from five Cuvier’s beaked whales (Ziphius cavirostris) and five short-finned pilot whales (Globicephala macrorhynchus) tagged off the North Carolina coast. Students then used RStudio to calculate two metrics of these species' spatial ranges: home range (where a species spends 95% of its time) and core range (where a species spends 50% of its time). Next, students used QGIS to visualize the data, producing maps that displayed the whales' tracks and their ranges.
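
The students' analysis was done in RStudio and QGIS; purely as an illustration of the underlying idea, the Python sketch below estimates a utilization distribution from synthetic track points with a kernel density estimate, then extracts the smallest areas containing 95% and 50% of the estimated time (all coordinates here are invented, not the actual tag data).

```python
import numpy as np
from scipy.stats import gaussian_kde

# Hypothetical tag locations (lon, lat) for one whale; the real tracks
# came from satellite transmitters off the North Carolina coast.
rng = np.random.default_rng(0)
lons = -74.5 + 0.2 * rng.standard_normal(500)
lats = 35.5 + 0.2 * rng.standard_normal(500)

# Kernel density estimate of the utilization distribution.
kde = gaussian_kde(np.vstack([lons, lats]))

# Evaluate the density on a regular grid of equal-area cells.
gx, gy = np.mgrid[lons.min():lons.max():200j, lats.min():lats.max():200j]
density = kde(np.vstack([gx.ravel(), gy.ravel()]))

# A p% range is the smallest set of cells holding p% of the total mass:
# sort cells by density, then accumulate until the target mass is reached.
order = np.argsort(density)[::-1]
cum_mass = np.cumsum(density[order]) / density.sum()

def range_mask(p):
    """Boolean grid mask of cells inside the p-fraction isopleth."""
    keep = np.zeros(density.shape, dtype=bool)
    keep[order[: np.searchsorted(cum_mass, p) + 1]] = True
    return keep.reshape(gx.shape)

home_range = range_mask(0.95)  # where the whale spends 95% of its time
core_range = range_mask(0.50)  # where it spends 50% of its time
print(home_range.sum(), core_range.sum())  # cell counts; core << home
```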

Social and environmental contexts are increasingly recognized as factors that impact patients' health outcomes. This team will have the opportunity to collaborate directly with clinicians and medical data in a real-world setting. They will examine the association between social determinants and risk prediction for hospital admissions, and assess whether social determinants bias that risk in a systematic way. Applied methods will include machine learning, risk prediction, and assessment of bias. This Data+ project is sponsored by the Forge, Duke's center for actionable data science.

Project Leads: Shelly Rusincovitch, Ricardo Henao, Azalea Kim

Project Manager: Austin Talbot

Aaron Chai (Computer Science, Math) and Victoria Worsham (Economics, Math) spent ten weeks building tools to understand characteristics of successful oil and gas licenses in the North Sea. The team used data-scraping, merging, and OCR methods to create a dataset containing license information and work obligations, and they also produced ArcGIS visualizations of license and well locations. They had the chance to consult frequently with analytics professionals at ExxonMobil.

Click here to read the Executive Summary

 

Project Lead: Kyle Bradbury

Project Manager: Artem Streltsov

Yueru Li (Math) and Jiacheng Fan (Economics, Finance) spent ten weeks investigating abnormal behavior by companies bidding for oil and gas rights in the Gulf of Mexico. Working with data provided by the Bureau of Ocean Energy Management and ExxonMobil, the team used outlier detection methods to automate the flagging of abnormal behavior, and then used statistical methods to examine various factors that might predict such behavior. They had the chance to consult frequently with analytics professionals at ExxonMobil.

 

Click here to read the Executive Summary

 

Project Lead: Kyle Bradbury

Project Manager: Hyeongyul Roh

Team A: Video data extraction

Alexander Bendeck (Computer Science, Statistics) and Niyaz Nurbhasha (Economics) spent ten weeks building tools to extract player and ball movement from basketball games. Working with freely available broadcast-angle video footage, which required extensive cleaning and pre-processing, the team used OpenPose software and neural-network methods. Their pipeline fed into the predictive models of Team C.

Click here to read the Executive Summary

 

Team B: Modeling basketball data: offense

Anshul Shah (Computer Science, Statistics), Jack Lichtenstein (Statistics), and Will Schmidt (Mechanical Engineering) spent ten weeks building tools to analyze offensive play in basketball. Using 2014–15 Duke Men’s Basketball player-tracking data provided by SportVU, the team constructed statistical models that explored the relationship between different metrics of offensive productivity, and also used computational geometry methods to analyze the off-ball “gravity” of an offensive player.

Click here to read the Executive Summary

 

Team C: Modeling basketball data: defense

Lukengu Tshiteya (Statistics), Wenge Xie (ECE), and Joe Zuo (Computer Science, Statistics) spent ten weeks building tools to predict player movement in basketball games. Using SportVU data, including some pre-processed by Team A, the team built predictive RNN models that distinguish between 6 typical movement types, and created interactive visualizations of their findings in R Shiny.
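
The team's exact architecture isn't described here; as a minimal sketch of the general technique (not their actual code), a recurrent classifier that maps a short track of (x, y) court positions to one of six movement types might look like this in PyTorch:

```python
import torch
import torch.nn as nn

# Minimal sketch, assuming (x, y) positions per frame as input and six
# movement classes as output; hidden size and depth are placeholders.
class MovementRNN(nn.Module):
    def __init__(self, n_classes=6):
        super().__init__()
        self.rnn = nn.LSTM(input_size=2, hidden_size=32, batch_first=True)
        self.head = nn.Linear(32, n_classes)

    def forward(self, tracks):            # tracks: (batch, time, 2)
        _, (h, _) = self.rnn(tracks)      # h: (num_layers, batch, 32)
        return self.head(h[-1])           # logits: (batch, n_classes)

model = MovementRNN()
fake_tracks = torch.randn(8, 25, 2)       # 8 synthetic clips, 25 frames each
print(model(fake_tracks).shape)           # torch.Size([8, 6])
```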

Click here to read the Executive Summary

 

Team D: Visualizing basketball data

Shixing Cao (ECE) and Jackson Hubbard (Computer Science, Statistics) spent ten weeks building visualizations to help analyze basketball games. Using player tracking data from Duke basketball games, the team created visualizations of gameflow, networks of points and assists, and integrated all of their tools into an R Shiny app.

Click here to read the Executive Summary

 

Faculty Leads: Alexander Volfovsky, James Moody, Katherine Heller

Project Managers: Fan Bu, Heather Matthews, Harsh Parikh, Joe Zuo

Yanchen Ou (Computer Science) and Jiwoo Song (Chemistry, Mechanical Engineering) spent ten weeks building tools to assist in the analysis of smart meter data. Working with a large dataset of transformer and household data from the Kyrgyz Republic, the team built a data preprocessing pipeline and then used unsupervised machine-learning techniques to assess energy quality and construct typical user profiles.

 

Click here to read the Executive Summary

 

Faculty Lead: Robyn Meeks

Project Manager: Bernard Coles

Bernice Meja (Philosophy, Physics), Jessica Yang (Computer Science, ECE), and Tracey Chen (Computer Science, Mechanical Engineering) spent ten weeks building methods for Duke’s Office of Information Technology (OIT) to better understand information arising from “smart” (IoT) devices on campus. Working with data provided by an IoT testbed set up by OIT professionals, the team used a mixture of supervised and unsupervised machine-learning techniques and built a prototype device classifier.

 

Click here to read the Executive Summary

 

Project Lead: Will Brockselsby

Interested in understanding the types of attacks targeting Duke and other universities? Led by OIT and the IT Security Office, students will learn to analyze threat intelligence data to identify trends and patterns of attacks. Duke blocks an average of 1.5 billion malicious connection attempts per day and is working with other universities to share the attack data. One untapped area is research into the types of attacks and how universities are targeted. Students will collaborate with security and IT professionals to analyze the data and discern patterns.

Project Lead: Jesse Bowling

Project Manager: Susan Jacobs

Katelyn Chang (Computer Science, Math) and Haynes Lynch (Environmental Science, Policy) spent ten weeks building tools to analyze and visualize geospatial and remote sensing data arising from the Alligator River National Wildlife Refuge (ARNWR). The team produced interactive maps of physical characteristics that were tailored to specific refuge management professionals, and also built classifiers for vegetation detection in LandSat imagery.

 

Click here to read the Executive Summary

 

Faculty Leads: Justin Wright, Emily Bernhardt

Project Manager: Emily Ury

Dennis Harrsch, Jr. (Computer Science), Elizabeth Loschiavo (Sociology), and Zhixue (Mary) Wang (Computer Science, Statistics) spent ten weeks improving the team’s web platform, which allows users to examine contraceptive use in low- and middle-income countries (LMICs) as recorded by the Demographic and Health Survey (DHS) contraceptive calendar. The team reduced load times and data-visualization latency, and increased the number of country surveys available on the platform from 3 to 55. They also created a new app that allows users to explore machine-learning results derived from this large dataset.

This project will continue into the academic year via Bass Connections where student teams will refine the machine learning model results and explore the question of whether and how policymakers can use these tools to improve family planning in LMIC settings.

 

Click here to view the Executive Summary

 

Faculty Lead: Megan Huchko

Project Manager: Amy Finnegan

Nathaniel Choe (ECE) and Mashal Ali (Neuroscience) spent ten weeks developing machine-learning tools to analyze urodynamic detrusor pressure data of pediatric spina bifida patients from the Duke University Hospital. The team built a pipeline that went from raw time series data to signal analysis to dimension reduction to classification, and has the potential to assist in clinician diagnosis.
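
As a schematic of that kind of pipeline, the sketch below chains standardization, dimension reduction, and classification with scikit-learn; the features and labels are random stand-ins rather than patient data, and the signal-analysis feature extraction is omitted.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression

# Hypothetical stand-in: one row of summary features per pressure recording.
rng = np.random.default_rng(3)
X = rng.standard_normal((120, 40))   # e.g., spectral/statistical features
y = rng.integers(0, 2, 120)          # clinician-assigned label

# Mirrors the described flow: features -> dimension reduction -> classifier.
clf = make_pipeline(StandardScaler(), PCA(n_components=10), LogisticRegression())
clf.fit(X, y)
print(clf.score(X, y))               # training accuracy on the toy data
```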

 

Click here to read the Executive Summary

 

Faculty Leads: Wilkins Aquino, Jonathan Routh

Project Manager: Zekun Cao

Varun Nair (Economics, Physics), Paul Rhee (Computer Science), Jichen Yang (Computer Science, ECE), and Fanjie Kong (Computer Vision) spent ten weeks helping to adapt deep learning techniques to inform energy access decisions.

 

Click here to read the Executive Summary

 

Faculty Lead: Kyle Bradbury

Project Manager: Fanjie Kong

Yoav Kargon (Mechanical Engineering) and Tommy Lin (Chemistry, Computer Science) spent ten weeks working with data from the Water Quality Portal (WQP), a large national dataset of water quality measurements aggregated by the USGS and EPA. The team went all the way from raw data to the production of Pondr, an interactive and comprehensive tool built with R Shiny that permits users to investigate and visualize data coverage, values, and trends from the WQP.

 

Click here to read the Executive Summary

 

Faculty Lead: Jim Heffernan

Project Manager: Nick Bruns

Marco Gonazales Blancas (Civil Engineering) and Mengjie Xiu (Masters, BioStatistics) spent ten weeks building tools to help Duke reduce its energy footprint and achieve carbon neutrality by 2024. The team processed and analyzed troves of utility consumption data and then created practical monthly energy use reports for each school at Duke. These reports show historical usage trends, provide energy benchmarks for comparison, and make practical suggestions for energy savings.

Click here to read the Executive Summary

 

Faculty Lead: Billy Pizer

Project Manager: Sophia Ziwei Zhu

Cathy Lee (Statistics) and Jennifer Zheng (Math, Emory University) spent ten weeks building tools to help Duke University Libraries better understand its journal purchasing practices. Using a combination of web-scraping and data-merging algorithms, the team created a dashboard to help library strategists visualize and optimize journal selection.

 

Click here to read the Executive Summary

 
Faculty Leads: Angela Zoss, Jeff Kosokoff

Project Manager: Chi Liu

Micalyn Struble (Computer Science, Public Policy), Xiaoqiao Xing (Economics), and Eric Zhang (Math) spent ten weeks exploring the use of neuroscience as evidence in criminal trials. Working with a large set of case files downloaded from WestLaw, the team used natural language processing to build a predictive model that has the potential to automate the process of locating neuroscience-relevant cases in legal databases.

 

Click here to read the Executive Summary

 

Faculty Lead: Nita Farahany

Project Manager: William Krenzer

The Middle Passage, the route by which most enslaved persons were brought across the Atlantic to North America, is a critical locus of modern history—yet it has been notoriously difficult to document or memorialize. The ultimate aim of this project is to employ the resources of digital mapping technologies as well as the humanistic methods of history, literature, philosophy, and other disciplines to envision how best to memorialize the enslaved persons who lost their lives between their homelands and North America. To do this, the students combined previously-disparate data and archival sources to discover where on their journeys enslaved persons died. Because of the nature of the data itself and the history it represents, the team engaged in ongoing conversations about various ways of visualizing its findings, and continuously evaluated the ethics of the data’s provenance and of their own methodologies and conclusions. A central goal for the students was to discover what contribution digital data analysis methods could make to the project of remembering itself.

 

The group worked with two datasets: the Trans-Atlantic Slave Trade Database (www.slavevoyages.org), an SPSS-formatted database currently run out of Emory University, containing data on 36,002 individual slaving expeditions between 1514 and 1866; and the Climatological Database for the World’s Oceans 1750-1850 (CLIWOC) (www.kaggle.com/cwiloc/climate-data-from-ocean-ships), a dataset composed of digitized records from the daily logbooks of ocean vessels, originally funded by the European Union in 2001 for purposes of tracking historical climate change. This second dataset includes 280,280 observational records of daily ship locations, climate data, and other associated information. The team employed archival materials to confirm (and disconfirm) overlaps between the two datasets: the students identified 316 ships bearing the same name across the datasets, of which they confirmed 35 matching slaving voyages.
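
A name match across two tables is only a candidate link, which is why the archival confirmation step mattered. As a hypothetical illustration (with invented ship names and years, not rows from either archive), candidate matches can be generated with a simple join:

```python
import pandas as pd

# Invented stand-ins for the two archives.
voyages = pd.DataFrame({"ship": ["Mary", "Hope", "Venus"],
                        "voyage_year": [1788, 1792, 1801]})
logs = pd.DataFrame({"ship": ["Mary", "Venus", "Diana"],
                     "log_year": [1788, 1805, 1799]})

# Same-name pairs are candidates only; archival work must confirm
# that the rows really describe the same vessel and voyage.
candidates = voyages.merge(logs, on="ship")
print(candidates)
```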

 

The students had two central objectives: first, to locate where and why enslaved Africans died along the Middle Passage, and, second, to analyze patterns in the mortality rates. The group found significant patterns in the mortality data in both spatial and temporal terms (full results can be found here). At the same time, the team also examined the ethics of creating visualizations based on data that were recorded by the perpetrators of the slave trade—opening up space for further developments of this project that would include more detailed archival and theoretical work.

 

Click here to read the Executive Summary

 

Image credit:

J.M.W. Turner, Slave Ship, 1840, Museum of Fine Arts, Boston (public domain)

Faculty Lead: Charlotte Sussman

Project Manager: Emma Davenport

Ellis Ackerman (Math, NCSU), Rodrigo Araujo (Computer Science), and Samantha Miezio (Public Policy) spent ten weeks building tools to help understand the scope, cause, and effects of evictions in Durham County. Using evictions data recorded by the Durham County Sheriff’s Department and demographic data from the American Community Survey, the team investigated relationships between rent and evictions, created cost-benefit models for eviction diversion efforts, and built interactive visualizations of eviction trends. They had the opportunity to consult with analytics professionals from DataWorks NC.

Project Leads: Tim Stallmann, John Killeen, Peter Gilbert

Project Manager: Libby McClure

 

The aim of this project was to explore how U.S. mass media—particularly newspapers—enlists text and imagery to portray human rights, genocide, and crimes against humanity from World War II until the present. From the Holocaust to Cambodia, from Rwanda to Myanmar, such representation has political consequences. Coined by Raphael Lemkin, a Polish lawyer who fled Hitler’s antisemitism, the term “genocide” was first introduced to the American public in a Washington Post op-ed in 1944. Since its legal codification by the United Nations Convention for the Prevention of Genocide in 1948, the term has circulated, been debated, been used to describe events that pre-date it (such as the displacement and genocide of Native People in the Americas), and been shaped by numerous forces—especially the words and images published in newspapers. Alongside the definition of “genocide,” other key concepts, specifically “crimes against humanity,” have attempted to label, and thus name the story of, targeted mass violence. Conversely, the concept of “human rights,” enshrined in the 1948 UN Declaration, seeks to name a presence of rights instead of their absence.

 

During the summer, the team focused their work on evaluating the language used in Western media to represent instances of genocide and how such language varied based on the location and time period of the conflict. In particular, the team’s efforts centered on Rwanda and Bosnia as important case studies, affording them the chance to compare nearly simultaneous reporting on two well-known genocides. The language used by reporters in these two cases showed distinct polarizations of terminology (for instance, while “slaughter” was much more common than “murder” in discussions of the Rwanda genocide, the inverse was true for Bosnia).

 

Click here to read the Executive Summary

 

Faculty Leads: Nora Nunn, Astrid Giugni

How Much Profit is Too Much Profit?

Chris Esposito (Economics), Ruoyu Wu (Computer Science), and Sean Yoon (Masters, Decision Sciences) spent ten weeks building tools to investigate the historical trends of price gouging and excess profits taxes in the United States of America from 1900 to the present. The team used a variety of text-mining methods to create a large database of historical documents, analyzed historical patterns of word use, and created an interactive R Shiny app to display their data and analyses.

Click here to read the Executive Summary

 

(cartoon from The Masses, July 1916)

Faculty Lead: Sarah Deutsch

Project Manager: Evan Donahue

Maria Henriquez (Computer Science, Statistics) and Jacob Sumner (Biology) spent ten weeks building tools to help the Michael W. Krzyzewski Human Performance Lab best utilize its data from Duke University student athletes. The team worked with a large collection of athlete strength, balance, and flexibility measurements collected by the lab. They improved the K Lab’s data pipeline, created a predictive model for injury risk, and developed interactive web-based individualized injury risk reports.

Click here to read the Executive Summary

Faculty Lead: Dr. Tim Sell

Project Manager: Brinnae Bent

Vincent Wang (Computer Science, CE), Karen Jin (Bio/Stats), and Katherine Cottrell (Computer Science) spent ten weeks building tools to educate the public about lake dynamics and ecosystem health. Using data collected over a period of 50 years at the Experimental Lakes Area (ELA) in Ontario, the team preprocessed and merged datasets, made a series of data visualizations, and produced an interactive website using R Shiny.

Click here to read the Executive Summary

 

Faculty Lead: Kateri Salk

Project Manager: Kim Bourne

Vivek Sahukar (Masters, Data Science), Yuval Medina (Computer Science), and Jin Cho (Computer Science/Electrical & Computer Engineering) spent ten weeks creating tools to help augment the experience of users in the StreamPULSE community. The team created an interactive guide and used data sonification methods to help users navigate and understand the data, and they used a mixture of statistical and machine-learning methods to build out an outlier-detection and data-cleaning pipeline.

Click here to read the Executive Summary

Faculty Leads: Emily Bernhardt, Jim Heffernan

Project Managers: Alice Carter, Michael Vlah

Aidan Fitzsimmons (Public Policy, Mathematics, Electrical & Computer Engineering), Joe Choo (Mathematics, Economics) and Brooke Scheinberg (Mathematics) spent ten weeks partnering with the Durham Crisis Intervention Team, the Criminal Justice Resource Center, and the Stepping Up Initiative. Utilizing booking data of 57,346 individuals provided by the Durham County Jail, this team was able to create visualizations and predictive models that illustrate patterns of recidivism, with a focus on the subset of the population with serious mental illness (SMI). These results could assist current efforts in diverting people with SMI from the criminal justice system and into care.

Click here to read the Executive Summary

Faculty Leads: Nicole Schramm-Sapyta, Michele Easter

Project Manager: Ruth Wygle

The students in this project worked on a pervasive question in literary, film, and copyright studies: how do we know when a new work of fiction borrows from an older one? Many times, works are appropriated, rather than straightforwardly adapted, which makes them difficult for human readers to trace. As we continue to remake and repurpose previous texts into new forms that combine hundreds of references to other works (such as Ready Player One), it becomes increasingly laborious to track all the intertextual elements of a single text. While some borrowings are easy to spot, as in the case of Marvel films that are straightforward adaptations of comic book storylines and aesthetics, others are more subtle, as when Disney reinterpreted Hamlet and African oral traditions to create The Lion King. Thousands of new stories are created each day, but how do we know if we are borrowing or appropriating a previous text? Are there works that have adapted previous ones that we have yet to identify?

 

The students worked with data from over 16.7 million books from HathiTrust, with critical analysis in scholarly articles accessible through JSTOR, and with the topic categories in Wikipedia. The group used Latent Dirichlet Allocation (LDA), a generative model that assumes that all documents are a mixture of topics, to represent key themes and topics as a distribution over words. The students developed a flexible and graduated heuristic for identifying a work as an adaptation; the more pre-selected categories a work fit under, the more likely it was to be marked as an adaptation by their model. Over the summer, the students came to appreciate that all digital humanistic methodologies are contestable and dependent on traditional critical work.
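
As a toy illustration of LDA itself (using scikit-learn, which may differ from the students' actual tooling), the sketch below fits two topics to three invented "volumes" and prints each document's topic mixture:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Invented stand-ins for HathiTrust volumes; the real corpus held millions.
docs = [
    "the castaway island shipwreck survival adventure",
    "love marriage courtship society manners",
    "island voyage sea storm rescue adventure",
]

# LDA works on raw term counts, not tf-idf weights.
counts = CountVectorizer(stop_words="english").fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(counts)

# Each document is a mixture of topics; each topic a distribution over words.
doc_topics = lda.transform(counts)   # shape: (n_docs, n_topics)
print(doc_topics.round(2))
```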

 

Click here to read the Executive Summary

Faculty Lead: Grant Glass

Jett Hollister (Mechanical Engineering) and Lexx Pino (Computer Science, Math) joined Economics majors Shengxi Hao and Cameron Polo in a ten-week study of the late 2000s housing bubble. The team scraped, merged, and analyzed a variety of datasets to investigate different proposed causes of the bubble. They also created interactive visualizations of their data which will eventually appear on a website for public consumption.

Click here to read the Executive Summary

 

Faculty Lead: Lee Reiners

Project Manager: Kate Coulter

Cassandra Turk (Economics) and Alec Ashforth (Economics, Math) spent ten weeks building tools to help minimize the risk of trading electricity on the wholesale energy market. The team combined data from many sources and employed a variety of outlier-detection methods and other statistical tools in order to create a large dataset of extreme energy events and their causes. They had the opportunity to consult with analytics professionals from Tether Energy.
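
As one simple member of that family of methods (shown here on synthetic prices, not the team's detectors or data), an interquartile-range rule flags points far outside the typical spread:

```python
import numpy as np
import pandas as pd

# Hypothetical hourly wholesale prices; the team combined many real sources.
rng = np.random.default_rng(2)
prices = pd.Series(40 + 5 * rng.standard_normal(1000))
prices.iloc[[100, 500]] = [400, -50]   # injected extreme events

# Flag points far outside the interquartile range.
q1, q3 = prices.quantile([0.25, 0.75])
iqr = q3 - q1
extreme = prices[(prices < q1 - 3 * iqr) | (prices > q3 + 3 * iqr)]
print(extreme)                          # the two injected spikes
```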

Click here to read the Executive Summary

 

Project Lead: Eric Butter, Tether

Andre Wang (Math, Statistics), Michael Xue (Computer Science, ECE), and Ryan Culhane (Computer Science) spent ten weeks exploring the role played by emotion in speech-focused machine-learning. The team used a variety of techniques to build emotion recognition pipelines, and incorporated emotion into generated speech during text-to-speech synthesis.

Click here to read the Executive Summary

 

Faculty Leads: Vahid Tarokh, Jie Ding

Project Manager: Enmao Diao

This Data Expedition introduces students to network tools and approaches and invites students to consider the relationship(s) between social networks and social imaginaries. Using foundation-funding data collected from The Foundation Directory Online, the Data Expedition enables students to visualize and explore the relationship between networks, social imaginaries, and funding for higher education. The Data Expedition is based on two sets of data. The first set lists the grants received by Duke University in 2016 from five foundations: The Bill and Melinda Gates Foundation, Fidelity Charitable Gift Fund, Silicon Valley Community Foundation, The Community Foundation of Western North Carolina, and The Robert Wood Johnson Foundation. The second set lists the names of board members from Duke University and each of these five foundations, along with the degree-granting institution for their undergraduate education. For the sake of this exercise, the degree-granting institution data was fabricated from a randomized list of the top twenty-five undergraduate institutions.
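
As a hypothetical sketch of the kind of network construction involved (with invented memberships), one can build a bipartite person–organization graph and project it onto organizations, linking two organizations whenever they share a board member:

```python
import networkx as nx

# Invented board-membership edges (person, organization); recall that the
# exercise's institution list was itself fabricated.
edges = [
    ("Board member A", "Duke"), ("Board member A", "Gates Foundation"),
    ("Board member B", "Princeton"), ("Board member B", "Gates Foundation"),
    ("Board member C", "Duke"), ("Board member C", "RWJ Foundation"),
]
G = nx.Graph(edges)

# Project the bipartite graph onto organizations: two organizations are
# connected when at least one person sits on both boards.
people = [n for n in G if n.startswith("Board member")]
orgs = [n for n in G if n not in people]
projected = nx.bipartite.weighted_projected_graph(G, orgs)
print(list(projected.edges(data=True)))  # edge weight = shared members
```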

This Data Expedition seeks to introduce students to statistical analysis in the field of international development. Students construct an index of wealth/poverty based on asset holdings, using four datasets collected under the umbrella of the Living Standards Measurement Survey project at the World Bank. We selected countries to represent different continents with comparable and recent survey data: Bulgaria (2007), Tajikistan (2009), Tanzania (2010-2011), and Panama (2008).

First, we construct an index of wealth based on household assets in the different countries using Principal Components Analysis. Once a poverty index is constructed, students seek to understand what the main drivers of wealth/poverty are in different countries. We include variables for health, education, age, relationship to the household head, and sex. Students then use regression analysis to identify the main drivers of poverty in different countries.
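
A minimal sketch of the two steps, using invented asset and covariate data in place of the real LSMS survey variables:

```python
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

# Hypothetical household asset indicators (1 = owns, 0 = does not).
rng = np.random.default_rng(1)
assets = pd.DataFrame(rng.integers(0, 2, size=(200, 4)),
                      columns=["radio", "tv", "fridge", "bicycle"])

# Step 1: the first principal component of the centered asset matrix
# serves as the wealth index.
index = PCA(n_components=1).fit_transform(assets - assets.mean())

# Step 2: regress the index on household covariates to find drivers.
covars = pd.DataFrame({
    "education_years": rng.integers(0, 16, 200),
    "head_age": rng.integers(18, 80, 200),
})
model = LinearRegression().fit(covars, index.ravel())
print(dict(zip(covars.columns, model.coef_.round(3))))
```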

This data expedition explores the local (ego) patent citation networks of three hybrid vehicle-related patents. The concept of patent citations and technological development is a core theme in innovation and entrepreneurship, and the purpose of these network explorations is to both quantitatively and visually assess how innovations are connected and what these connections mean for the focal innovations and the technologies that draw on those patents in the future. The expedition was incorporated as part of the Sociology of Entrepreneurship class, where students are thinking about the emergence and diffusion of innovations.
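
As a small illustration with invented patent identifiers, NetworkX can extract the one-hop ego network around a focal patent, pulling in both the patents it cites and the later patents that cite it:

```python
import networkx as nx

# Hypothetical citation edges: citing patent -> cited patent.
citations = nx.DiGraph([
    ("P4", "P1"), ("P4", "P2"),
    ("P5", "P4"), ("P6", "P4"), ("P6", "P3"),
])

# Local (ego) network of focal patent P4: the patent itself, what it
# cites, and what cites it, one hop out in either direction.
ego = nx.ego_graph(citations, "P4", radius=1, undirected=True)
print(sorted(ego.nodes))  # ['P1', 'P2', 'P4', 'P5', 'P6']
```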

Large publicly available environmental databases are a tremendous resource for both scientists and the general public interested in climate trends and properties. However, without the programming skills to parse and interpret these massive datasets, significant trends may remain hidden from both scientists and the public. In this data exploration, students, over the course of three hours, accessed two large, publicly available datasets, each with greater than 4 million observations. They learned how to use R and RStudio to effectively organize, visualize and statistically explore trends in deep sea physical oceanography.  

Our aim was to introduce students to the wealth of possibilities that human genotyping and sequencing hold by illustrating firsthand the power of these datasets to identify genetic relatives, using the story of the Golden State Killer’s capture with public genetic databases.

This Data Expedition introduced hypothesis-driven data analysis in R and the concept of circular data, while providing some tools for importing it and analyzing it in R.

Brooke Erikson (Economics/Computer Science), Alejandro Ortega (Math), and Jade Wu (Computer Science) spent ten weeks developing open-source tools for automatic document categorization, PDF table extraction, and data identification. Their motivating application was provided by Power for All’s Platform for Energy Access Knowledge, and they frequently collaborated with professionals from that organization.

Click here to read the Executive Summary

 

Jake Epstein (Statistics/Economics), Emre Kiziltug (Economics), and Alexander Rubin (Math/Computer Science) spent ten weeks investigating the existence of relative value opportunities in global corporate bond markets. They worked closely with a dataset provided by a leading asset management firm.

Click here for the Executive Summary

Maksym Kosachevskyy (Economics) and Jaehyun Yoo (Statistics/Economics) spent ten weeks understanding temporal patterns in the used construction machinery market and investigating the relationship between these patterns and macroeconomic trends.

They worked closely with a large dataset provided by MachineryTrader.com, and discussed their findings with analytics professionals from a leading asset management firm.

Click here to read the Executive Summary

Alec Ashforth (Economics/Math), Brooke Keene (Electrical & Computer Engineering), Vincent Liu (Electrical & Computer Engineering), and Dezmanique Martin (Computer Science) spent ten weeks helping Duke’s Office of Information Technology explore the development of an “e-advisor” app that recommends co-curricular opportunities to students based on a variety of factors. The team used collaborative and content-based filtering to create a recommender-system prototype in R Shiny.
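
As a stripped-down sketch of the collaborative-filtering half (with an invented participation matrix, not the real student data), user-based filtering scores unseen opportunities by the similarity-weighted votes of comparable students:

```python
import numpy as np

# Hypothetical student-by-opportunity matrix (1 = participated).
R = np.array([
    [1, 0, 1, 0],
    [1, 1, 0, 0],
    [0, 0, 1, 1],
], dtype=float)

# User-user cosine similarity.
unit = R / np.linalg.norm(R, axis=1, keepdims=True)
sim = unit @ unit.T

# Score items for student 0 by similar students' participation,
# masking opportunities they have already taken.
scores = sim[0] @ R
scores[R[0] > 0] = -np.inf
print("recommend opportunity:", int(np.argmax(scores)))
```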

Click here to read the Executive Summary

Statistical Science majors Eidan Jacob and Justina Zou joined forces with math major Mason Simon to build interactive tools that analyze and visualize the trajectories taken by wireless devices as they move across Duke’s campus and connect to its wireless network. They used de-identified data provided by Duke’s Office of Information Technology, and worked closely with professionals from that office.

Click here for the Executive Summary

The aim of this data expedition was to give students an introduction to stable isotopes and how the data can be used to understand trophic dynamics. 

Cecily Chase (Applied Math), Brian Nieves (Computer Science), and Harry Xie (Computer Science/Statistics) spent ten weeks understanding how algorithmic approaches can shed light on which data center tasks (“stragglers”) are typically slowed down by unbalanced or limited resources. Working with a real dataset provided by project client Lenovo, the team created a monitoring framework that flags stragglers in real time.

Click here to read the Executive Summary

David Liu (Electrical Computer Engineering) and Connie Wu (Computer Science/Statistics) spent ten weeks analyzing data about walking speed from the 6th Vital Sign Study.

Integrating study data with public data from the American Community Survey, they built interactive visualization tools that will help researchers understand the study results and the representativeness of study participants.

Click here to read the Executive Summary

Lucas Fagan (Computer Science/Public Policy), Caroline Wang (Computer Science/Math), and Ethan Holland (Statistics/Computer Science) spent ten weeks understanding how data science can contribute to fact-checking methodology. Training on audio data from major news stations, they adapted OpenAI methods to develop a pipeline that moves from audio data to an interface that enables users to search for claims related to other claims that had been previously investigated by fact-checking websites.

This project will continue into the academic year via Bass Connections.

Click here to read the Executive Summary.

A team of students led by Professors Jonathan Mattingly and Gregory Herschlag will investigate gerrymandering in political districting plans. Students will improve on and employ an algorithm to sample the space of compliant redistricting plans for both state and federal districts. The output of the algorithm will be used to detect gerrymandering for a given district plan; this data will be used to analyze and study the efficacy of the idea of partisan symmetry. This work will continue the Quantifying Gerrymandering project, seeking to understand the space of redistricting plans and to find justiciable methods to detect gerrymandering. The ideal team has a mixture of members with programming backgrounds (C, Java, Python), statistical experience including possibly R, mathematical and algorithmic experience, and exposure to political science or other social science fields.

Read the latest updates about this ongoing project by visiting Dr. Mattingly's Gerrymandering blog.

Varun Nair (Mechanical Engineering), Tamasha Pathirathna (Computer Science), Xiaolan You (Computer Science/Statistics), and Qiwei Han (Chemistry) spent ten weeks creating a ground-truthed dataset of electricity infrastructure that can be used to automatically map the transmission and distribution components of the electric power grid. This is the first publicly available dataset of its kind, and will be analyzed during the academic year as part of a Bass Connections team.

Click here to read the Executive Summary

Kimberly Calero (Public Policy/Biology/Chemistry), Alexandra Diaz (Biology/Linguistics), and Cary Shindell (Environmental Engineering) spent ten weeks analyzing and visualizing data about disparities in Social Determinants of Health. Working with data provided by the MURDOCK Study, the American Community Survey, and the Google Places API, the team built a dataset and visualization tool that will assist the MURDOCK research team in exploring health outcomes in Cabarrus County, NC.

Click here to read the Executive Summary

Alexandra Putka (Biology/Neuroscience), John Madden (Economics), and Lucy St. Charles (Global Health/Spanish) spent ten weeks understanding the coverage and timeliness of maternal and pediatric vaccines in Durham. They used data from DEDUCE, the American Community Survey, and the CDC.

This project will continue into the academic year via Bass Connections.

Click here to read the Executive Summary

Dima Fayyad (Electrical & Computer Engineering), Sean Holt (Math), and David Rein (Computer Science/Math) spent ten weeks exploring tools that will operationalize the application of distributed computing methodologies in the analysis of electronic medical records (EMR) at Duke.

As a case study, they applied these systems to a natural language processing project on clinical narratives about growth failure in premature babies.

Click here to read the Executive Summary

Zhong Huang (Sociology) and Nishant Iyengar (Biomedical Engineering) spent ten weeks investigating the clinical profiles of rare metabolic diseases. Working with a large dataset provided by the Duke University Health System, the team used natural language processing techniques and produced an R Shiny visualization that enables clinicians to interactively explore diagnosis clusters.

Click here to read the Executive Summary

Samantha Garland (Computer Science), Grant Kim (Computer Science, Electrical & Computer Engineering), and Preethi Seshadri (Data Science) spent ten weeks exploring factors that influence patient choices when faced with intermediate-stage prostate cancer diagnoses. They used topic modeling in an analysis of a large collection of clinical appointment transcripts.

Click here for the Executive Summary

Nathan Liang (Psychology, Statistics), Sandra Luksic (Philosophy, Political Science), and Alexis Malone (Statistics) began their ten-week project as an open-ended exploration of how women are depicted, both physically and figuratively, in women's magazines, seeking to consider what role magazines play in the imagined and real lives of women.

Click here to read the Executive Summary

Jennie Wang (Economics/Computer Science) and Blen Biru (Biology/French) spent ten weeks building visualizations of various aspects of the lives of orphaned and separated children at six separate sites in Africa and Asia. The team created R Shiny interactive visualizations of data provided by the Positive Outcomes for Orphans study (POFO).

Click here to read the Executive Summary

Aaron Crouse (Divinity), Mariah Jones (Sociology), Peyton Schafer (Statistics), and Nicholas Simmons (English/Education) spent ten weeks consulting with leadership from the Parent Teacher Association at Glenn Elementary School in Durham. The team set up infrastructure for data collection and visualization that will aid the PTA in forming future strategy.

Click here to read the Executive Summary

In tracing the publication history, geographical spread, and content of “pirated” copies of Daniel Defoe’s Robinson Crusoe, Gabriel Guedes (Math, Global Cultural Studies), Lucian Li (Computer Science, History), and Orgil Batzaya (Math, Computer Science) explored the complications of looking at a data set that saw drastic changes over the last three centuries in terms of spelling and grammar, which offered new challenges to data cleanup. By asking questions of the effectiveness of “distant reading” techniques for comparing thousands of different editions of Robinson Crusoe, the students learned how to think about the appropriateness of myriad computational methods like doc2vec and topic modeling. Through these methods, the students started to ask, at what point does one start seeing patterns that were invisible at a human scale of reading (reading one book at a time)? While the project did not definitively answer these questions, it did provide paths for further inquiry.

The team published their results at: https://orgilbatzaya.github.io/pirating-texts-site/

Click here for the Executive Summary

Melanie Lai Wai (Statistics) and Saumya Sao (Global Health, Gender Studies) spent ten weeks developing a platform which enables users to understand factors that influence contraceptive use and discontinuation. Their work combined data from the Demographic and Health Surveys contraceptive calendar with open data about reproductive health and social indicators from the World Bank, World Health Organization, and World Population Prospects. This project will continue into the academic year via Bass Connections.

Click here to read the Executive Summary

Bob Ziyang Ding (Math/Stats) and Daniel Chaofan Tao (ECE) spent ten weeks understanding how deep learning techniques can shed light on single-cell analysis. Working with a large set of single-cell sequencing data, the team built an autoencoder pipeline and a tool that will allow biologists to interactively visualize their own data.
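
A minimal sketch of the autoencoder idea, with an invented expression matrix and an assumed architecture (the team's actual model is not described here):

```python
import torch
import torch.nn as nn

# Compress each cell's expression profile to a 2-D latent space that
# can be plotted interactively; dimensions here are placeholders.
n_genes = 2000
encoder = nn.Sequential(nn.Linear(n_genes, 128), nn.ReLU(), nn.Linear(128, 2))
decoder = nn.Sequential(nn.Linear(2, 128), nn.ReLU(), nn.Linear(128, n_genes))

cells = torch.randn(64, n_genes)     # hypothetical expression matrix
latent = encoder(cells)              # 2-D embedding, one point per cell
recon = decoder(latent)
loss = nn.functional.mse_loss(recon, cells)
loss.backward()                      # gradients for one training step
print(latent.shape, float(loss))
```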

Click here to read the Executive Summary

Ashley Murray (Chemistry/Math), Brian Glucksman (Global Cultural Studies), and Michelle Gao (Statistics/Economics) spent ten weeks analyzing how the meaning and use of the word “poverty” changed in presidential documents from the 1930s to the present. The students found that American presidential rhetoric about poverty has shifted in measurable ways over time. Presidential rhetoric, however, doesn’t necessarily affect policy change. As Michelle Gao explained, “The statistical methods we used provided another more quantitative way of analyzing the text. The database had around 130,000 documents, which is pretty impossible to read one by one and get all the poverty related documents by brute force. As a result, web-scraping and word filtering provided a more efficient and systematic way of extracting all the valuable information while minimizing human errors.” Through techniques such as linear regression, machine learning, and image analysis, the team effectively analyzed large swaths of textual and visual data. This approach allowed them to zero in on significant documents for closer and more in-depth analysis, paying particular attention to documents by presidents such as Franklin Delano Roosevelt and Lyndon B. Johnson, both leaders in what LBJ famously called “The War on Poverty.”

Click Here for the Executive Summary

Natalie Bui (Math/Economics), David Cheng (Electrical & Computer Engineering), and Cathy Lee (Statistics) spent ten weeks helping the Prospect Management and Analytics office of Duke Development understand how a variety of analytic techniques might enhance their workflow. The team used topic modeling and named entity recognition to develop a pipeline that clusters potential prospects into useful categories.

Click here to read the Executive Summary

Tatanya Bidopia (Psychology, Global Health), Matthew Rose (Computer Science), and Joyce Yoo (Public Policy/Psychology) spent ten weeks conducting a data-driven investigation of the relationship between mental health training of law enforcement officers and key outcomes such as incarceration, recidivism, and referrals for treatment. They worked closely with the Crisis Intervention Team, and they used jail data provided by the Sheriff’s Office of Durham County.

Click here to read the Executive Summary

Marine mammals exhibit extreme physiological and behavioral adaptations that allow them to dive hundreds to thousands of meters underwater despite their need to breathe air at the surface. Through the development of new remote monitoring technologies, we are just beginning to understand the mechanisms by which they are able to execute these extreme behaviors. Long-term animal-borne tags can now record location, dive depth, and dive duration and then transmit these data to satellite receivers, enabling remote access to behavior occurring both many kilometers out to sea and several kilometers below the ocean surface.

The aim of this Data Expedition was for students to learn hands-on data visualization techniques using a variety of data types. Students first discussed how data visualization is useful, and tips to make graphs both visually appealing and easy to understand. 

Understanding of how to manipulate, analyze, and display large datasets is an essential skill in the life sciences. Introducing students to the concepts of coding languages and showing them the diversity of tasks that can be accomplished using a flexible coding scheme like R is an important step in the training of any life sciences professional. For students taking lab-based courses, who are often required to analyze the datasets they produce in class, learning these techniques can be helpful both in the short-term (i.e., during the semester) and for their future careers.

Sophie Guo, Math/PoliSci major, Bridget Dou, ECE/CompSci major, Sachet Bangia, Econ/CompSci major, and Christy Vaughn spent ten weeks studying different procedures for drawing congressional boundaries, and quantifying the effects of these procedures on the fairness of actual election results.

Anna Vivian (Physics, Art History) and Vinai Oddiraju (Stats) spent ten weeks working closely with the director of the Durham Neighborhood Compass. Their goal was to produce metrics for things like ambient stress and neighborhood change, to visualize these metrics within the Compass system, and to interface with a variety of community stakeholders in their work.

ECE majors Mitchell Parekh and Yehan (Morton) Mo, along with IIT student Nikhil Tank, spent ten weeks understanding parking behavior at Duke. They worked closely with the Parking and Transportation Office, as well as with Vice President for Administration Kyle Cavanaugh.

Maddie Katz (Global Health and Evolutionary Anthropology Major), Parker Foe (Math/Spanish, Smith College), and Tony Li (Math, Cornell) spent ten weeks analyzing data from the National Transgender Discrimination Survey. Their goal was to understand how the discrimination faced by the trans community is realized on a state, regional, and national level, and to partner with advocacy organizations around their analysis.

Sharrin Manor, Arjun Devarajan, Wuming Zhang, and Jeffrey Perkins explored a large collection of imagery data provided by the U.S. Geological Survey, with the goal of identifying solar panels using image recognition. They worked closely with the Energy Data Analytics Lab, part of the Energy Initiative at Duke.

Matt and Ken led two labs for the engineering section of STA 111/130, an introductory course in statistics and probability. The lab assignments were written by Matt and Ken in order to bridge the gap between introductory linear regression, which is often explained in terms of a static, complete dataset, and time series analysis, which is not a common topic in introductory courses. 

Yanmin (Mike) Ma, a mathematics/economics major, and Manchen (Mercy) Fang, an electrical and computer engineering/computer science major, spent ten weeks studying historical archives and building a model to predict the price of pigs relative to a number of factors.

David Clancy, a Stats/Math/EnvSci major, and Tianyi Mu, an ECE/CompSci major, spent ten weeks studying the effects of weather, surroundings, and climate on the operational behavior of water reservoirs across the United States. They used a large dataset compiled by the U.S. Army Corps of Engineers, and they worked closely with Lauren Patterson from the Water Policy Program at Duke's Nicholas Institute for Environmental Policy Solutions. Project mentorship was provided by Alireza Vahid, a postdoctoral candidate in Electrical Engineering.

Luke Raskopf, PoliSci major and Xinyi (Lucy) Lu, Stats/CompSci major, spent ten weeks investigating the effectiveness of policies to combat unemployment and wage stagnation faced by working and middle-class families in the State of North Carolina. They worked closely with Allan Freyer at the North Carolina Justice Center.

This paper addresses analysis of heterogeneous data, such as ordered, categorical, real and count data. Such data are of interest in our motivating application, cognitive and brain science, in which subjects may answer questionnaires, and also (separately) undergo fMRI interrogation. A contribution of this paper concerns the joint analysis of how people answer questionnaires and how their brain responds to external stimuli (here visual), the latter measured via fMRI.

Computer Science major Yumin Zhang and IIT student Akhil Kumar Pabbathi spent ten weeks working closely with Dr. Joe McClernon from Psychiatry and Behavioral Sciences to understand smoking and tobacco purchase behavior through activity space analysis.

Biomedical Engineering major Chi Kim Trinh, and Biostatistics MS student Can Cui spent ten weeks constructing a computational and statistical framework to evaluate the effects of health coaching on Type II Diabetes patients’ quality metrics, including Hemoglobin A1c, blood pressure, eye exam consistency, tobacco use, and prescription adherence to statins, aspirin, and angiotensin converter enzyme (ACE)/ angiotensin receptor blocker (ARB).

Biomedical Engineering and Electrical and Computer Engineering major David Brenes, and Electrical and Computer Engineering/Computer Science majors Xingyu Chen and David Yang spent ten weeks working with mobile eye tracker data to optimize data processing and feature extraction. They generated their own video data with SMI Eye Tracking Glasses, and created computer vision algorithms to categorize subject gazing behavior in a grocery purchase decision-making environment.

Xinyu (Cindy) Li (Biology and Chemistry) and Emilie Song (Biology) spent ten weeks exploring the Black Queen Hypothesis, which predicts that co-operation in animal societies could be a result of genetic/functional trait losses, as well as polymorphism of workers in eusocial animals such as ants and termites. The goal was to investigate this idea in four different eusocial insect species.

BME major Neel Prabhu, along with CompSci and ECE majors Virginia Cheng and Cheng Lu, spent ten weeks studying how cells from embryos of the common fruit fly move and change in shape during development. They worked with Cell-Sheet-Tracker (CST), an algorithm developed by former Data+ student Roger Zou and faculty lead Carlo Tomasi. This algorithm uses computer vision to model and track a dynamic network of cells using a deformable graph.

Matthew Newman (Sociology), Sonia Xu (Statistics), and Alexandra Zrenner (Economics) spent ten weeks exploring giving patterns and demographic characteristics of anonymized Duke donors. They worked closely with the Duke Alumni Affairs and Development Office, with the goal of understanding the data and constructing tools to generate data-driven insight about donor behavior.

Weiyao Wang (Math) and Jennifer Du, along with NCCU Physics majors Jarrett Weathersby and Samuel Watson, spent ten weeks learning about how search engines often provide results that are not representative in terms of race and/or gender. Working closely with entrepreneur Winston Henderson, their goal was to understand how to frame this problem via statistical and machine-learning methodology, as well as to explore potential solutions.

Yuangling (Annie) Wang, a Math/Stats major, and Jason Law, a Math/Econ major, spent ten weeks analyzing message-testing data about the 2015 Marijuana Legalization Initiative in Ohio; the data were provided by Public Opinion Strategies, one of the nation's leading public opinion research firms.

The goal was to understand how statistics and machine learning might help develop microtargeting strategies for use in future campaigns.

Artem Streltsov (Masters Economics) and IIT Mechanical Engineering major Vinod Ramakrishnan spent ten weeks exploring North Carolina state budget documents. Working closely with the Budget and Tax Center, part of the North Carolina Justice Center, their goal was to help build a keystone tool that can be used for analysis of the state budget as well as future budget proposals.

Runliang Li (Math), Qiyuan Pan (Computer Science), and Lei Qian (Masters in Statistics and Economic Modelling) spent ten weeks investigating discrepancies between posted wait times and actual wait times for rides at Disney World. They worked with data provided by TouringPlans.

Robbie Ha (Computer Science, Statistics), Peilin Lai (Computer Science, Mathematics), and Alejandro Ortega (Mathematics) spent ten weeks analyzing the content and dissemination of images of the Syrian refugee crisis, as part of a general data-driven investigation of Western photojournalism and how it has contributed to our understanding of this crisis.

Ana Galvez (Cultural and Evolutionary Anthropology), Xinyu Li (Biology), and Jonathan Rub (Math, Computer Science) spent ten weeks studying the impact of diet on organ and bone growth in developing laboratory rats. The goal was to provide insight into the growth dynamics of these model organisms that could eventually be generalized to inform research on human development.

Devri Adams (Environmental Science), Annie Lott (Statistics), and Camila Vargas Restrepo (Visual Media Studies, Psychology) spent ten weeks creating interactive and exploratory visualizations of ecological data. They worked with over sixty years of data collected at the Hubbard Brook Experimental Forest (HBEF) in New Hampshire.

A team of students led by Duke mathematician Marc Ryser and University of Southern California Pathology professor Darryl Shibata will characterize phenotypic evolution during the growth of human colorectal tumors. 

Over ten weeks, Computer Science Majors Amber Strange and Jackson Dellinger joined forces with Psychology major Rachel Buchanan to perform a data-driven analysis of mental health intervention practices by Durham Police Department. They worked closely with leadership from the Durham Crisis Intervention Team (CIT) Collaborative, made up of officers who have completed 40 hours of specialized training in mental illness and crisis intervention techniques.

Over ten weeks, Computer Science majors Daniel Bass-Blue and Susie Choi joined forces with Biomedical Engineering major Ellie Wood to prototype interactive interfaces from Type II diabetics' mobile health data. Their specific goals were to encourage patient self-management and to effectively inform clinicians about patient behavior between visits.

Building off the work of a 2016 Data+ team, Yu Chen (Economics), Peter Hase (Statistics), and Ziwei Zhao (Mathematics) spent ten weeks working closely with analytical leadership at Duke's Office of University Development. The project goal was to identify distinguishing characteristics of major alumni donors and to model their lifetime giving behavior.

Graduate Students: Kendra Kaiser and John Mallard

Faculty: Michael O’Driscoll

Course: Landscape Hydrology, EOS 323/723

A team of students led by Dr. Shanna Sprinkle of Duke Surgery will combine success metrics of Duke Surgery residents from a set of databases and create a user interface for residency program directors and possibly residents themselves to view and better understand residency program performance.

Lauren Fox (Cultural Anthropology) and Elizabeth Ratliff (Statistics, Global Health) spent ten weeks analyzing and mapping pedestrian, bicycle, and motor vehicle data provided by Durham's Department of Transportation. This project was a continuation of a seminar on "ghost bikes" taught by Prof. Harris Solomon.

Boning Li (Masters Electrical and Computer Engineering), Ben Brigman (Electrical and Computer Engineering), Gouttham Chandrasekar (Electrical and Computer Engineering), Shamikh Hossain (Computer Science, Economics), and Trishul Nagenalli (Electrical and Computer Engineering, Computer Science) spent ten weeks creating datasets of electricity access indicators that can be used to train a classifier to detect electrified villages. This coming academic year, a Bass Connections Team will use these datasets to automatically find power plants and map electricity infrastructure.

Liuyi Zhu (Computer Science, Math), Gilad Amitai (Masters, Statistics), Raphael Kim (Computer Science, Mechanical Engineering), and Andreas Badea (East Chapel Hill High School) spent ten weeks streamlining and automating the process of electronically rejuvenating medieval artwork. They used a 14th-century altarpiece by Francescuccio Ghissi as a working example.

Over ten weeks, Math/CompSci majors Benjamin Chesnut and Frederick Xu joined forces with International Comparative Studies major Katharyn Loweth to understand the myriad academic pathways traveled by undergraduate students at Duke. They focused on data from Mathematics and the Duke Global Health Institute, and worked closely with departmental leadership from both areas.

Over ten weeks, BME and ECE majors Serge Assaad and Mark Chen joined forces with Mechanical Engineering Masters student Guangshen Ma to automate the diagnosis of vascular anomalies from Doppler Ultrasound data, with goals of improving diagnostic accuracy and reducing physician time spent on simple diagnoses. They worked closely with Duke Surgeon Dr. Leila Mureebe and Civil and Environmental Engineering Professor Wilkins Aquino.

Selen Berkman (ECE, CompSci), Sammy Garland (Math), and Aaron VanSteinberg (CompSci, English) spent ten weeks undertaking a data-driven analysis of the representation of women in film and in the film industry, with special attention to a metric called the Bechdel Test. They worked with data from a number of sources, including fivethirtyeight.com and the-numbers.com.

Felicia Chen (Computer Science, Statistics), Nikkhil Pulimood (Computer Science, Mathematics), and James Wang (Statistics, Public Policy) spent ten weeks working with Counter Tools, a local nonprofit that provides support to over a dozen state health departments. The project goal was to understand how open source data can lead to the creation of a national database of tobacco retailers.

John Benhart (CompSci, Math) and Esko Brummel (Masters in Bioethics and Science Policy) spent ten weeks analyzing current and potential scholarly collaborations within the community of Duke faculty. They worked closely with the leadership of the Scholars@Duke database.

Zijing Huang (Statistics, Finance), Artem Streltsov (Masters Economics), and Frank Yin (ECE, CompSci, Math) spent ten weeks exploring how Internet of Things (IoT) data could be used to understand potential online financial behavior. They worked closely with analytical and strategic personnel from TD Bank, who provided them with a massive dataset compiled by Epsilon, a global company that specializes in data-driven marketing.

Over ten weeks, Mathematics/Economics majors Khuong (Lucas) Do and Jason Law joined forces with Analytical Political Economy Masters student Feixiao Chen to analyze the spatio-temporal distribution of birth addresses in North Carolina. The goal of the project was to understand whether and how the distributions of different demographic categories (white/black, married/unmarried, etc.) differed, and how these differences connected to a variety of socioeconomic indicators.

Furthering the work of a 2016 Data+ team on predictive modeling of pancreatic cancer from electronic medical record (EMR) data, Siwei Zhang (Masters Biostatistics) and Jake Ukleja (Computer Science) spent ten weeks building a model to predict pancreatic cancer from EMR data. They worked with nine years' worth of EMR data, including ICD-9 diagnostic codes, covering records from over 200,000 patients.
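
As a rough illustration of the kind of pipeline such a model involves, the sketch below encodes each patient's ICD-9 codes as a binary bag-of-codes matrix and fits a logistic regression. The codes and labels are hypothetical, and this is a generic baseline rather than the team's model.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Each patient record reduced to its set of ICD-9 codes (hypothetical
# examples); labels indicate whether the patient later developed cancer.
records = ["250.00 401.9 577.0", "401.9 272.4", "577.0 250.00 401.9", "272.4 250.00"]
labels = np.array([1, 0, 1, 0])

# Binary indicator matrix: one row per patient, one column per code.
vectorizer = CountVectorizer(token_pattern=r"\S+", binary=True)
X = vectorizer.fit_transform(records)
model = LogisticRegression().fit(X, labels)
```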

Angelo Bonomi (Chemistry), Remy Kassem (ECE, Math), and Han (Alessandra) Zhang (Biology, CompSci) spent ten weeks analyzing data from social networks for communities of people facing chronic conditions. The social network data, provided by MyHealthTeams, contained information shared by community members about their diagnoses, symptoms, co-morbidities, treatments, and details about each treatment.

Over ten weeks, Public Policy major Amy Jiang and Mathematics and Computer Science major Kelly Zhang joined forces with Economics Masters student Amirhossein Khoshro to investigate academic hiring patterns across American universities and to analyze the educational backgrounds of faculty. They worked closely with Academic Analytics, a provider of data and solutions for universities in the U.S. and the U.K.

Linda Adams (CompSci), Amanda Jankowski (Sociology, Global Health), and Jessica Needleman (Statistics/Economics) spent ten weeks prototyping small-area mapping of public-health information within the Durham Neighborhood Compass, with a focus on mortality data. They worked closely with the director of DataWorks NC, an independent data intermediary dedicated to democratizing the use of quantitative information.

Gary Koplik (Masters in Economics and Computation) and Matt Tribby (CompSci, Statistics) spent ten weeks investigating the burden of rare diseases on the Duke University Health System (DUHS). They worked with a massive set of ICD diagnosis codes and visit data provided by DUHS.

Over ten weeks, Biology major Jacob Sumner and Neuroscience major Julianna Zhang joined forces with Biostatistics Masters student Jing Lyu to analyze potential drug diversion in the Duke Medical Center. Early detection of drug diversion assists health care providers in helping those affected recover from their condition, as well as in mitigating the effects on any patients under their care.

William Willis (Mechanical Engineering, Physics) and Qitong Gao (Masters Mechanical Engineering) spent ten weeks with the goal of mapping the ocean floor autonomously with high resolution and high efficiency. Their efforts were part of a team taking part in the Shell Ocean Discovery XPRIZE, and they made extensive use of simulation software built from Bellhop, an open-source program distributed by HLS Research.

Graduate Student: Jacob Coleman, 3rd year Ph.D. student in Statistical Science

Faculty Instructor: Colin Rundel

Class: STA 112, Data Science

Joy Patel (Math and CompSci) and Hans Riess (Math) spent ten weeks analyzing massive amounts of simulated weather data supplied by Spectral Sciences Inc. Their goal was to investigate ways in which advanced mathematical techniques could assist in quantifying storm intensity, helping to augment today's more qualitative methods.

Albert Antar (Biology) and Zidi Xiu (Biostatistics) spent ten weeks leveraging Duke Electronic Medical Record (EMR) data to build predictive models of pancreatic ductal adenocarcinoma (PDAC). PDAC is the fourth leading cause of cancer deaths in the US and is most often diagnosed at stage IV, when the survival rate is only 1% and life expectancy is measured in months. Diagnosis of PDAC is very challenging because of the pancreas's deep anatomical placement and the significant risk imposed by traditional biopsy. The goal of this project was to use EMR data to identify potential avenues for diagnosing PDAC in the early, treatable stages of disease.

Priya Sarkar (Computer Science), Lily Zerihun (Biology and Global Health), and Anqi Zhang (Biostatistics) spent ten weeks utilizing Duke Electronic Medical Record (EMR) data to identify subgroups of diabetic patients, and predict future complications associated with Type II Diabetes.

Vivek Sriram (Computer Science and Math), Lina Yang (Biostatistics), and Pablo Ortiz (BME) spent ten weeks working in close collaboration with the Department of Biostatistics and Bioinformatics to implement an image analysis pipeline for immunofluorescence microscopy images of developing mouse lungs.

Computer Science and Psychology major Molly Chen and Neuroscience major Emily Wu spent ten weeks working with patient diagnosis co-occurrence data derived from Duke Electronic Medical Records to develop network visualizations of co-occurring disorders within demographic groups. Their goal was to make healthcare more holistic and to reduce healthcare disparities by improving patient and provider awareness of co-occurring disorders for patients within similar demographic groups.
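
One simple way to build such a network, sketched below under the assumption that each patient contributes a list of diagnoses, is to add an edge between every pair of co-occurring diagnoses and let the edge weight count how many patients share that pair. The diagnosis names are hypothetical, and this is not the team's pipeline.

```python
import networkx as nx
from itertools import combinations

# Hypothetical per-patient diagnosis lists for one demographic group.
patients = [
    ["depression", "anxiety", "insomnia"],
    ["depression", "anxiety"],
    ["hypertension", "diabetes"],
    ["depression", "insomnia"],
]

G = nx.Graph()
for dx_list in patients:
    for a, b in combinations(sorted(set(dx_list)), 2):
        if G.has_edge(a, b):
            G[a][b]["weight"] += 1      # edge weight = co-occurrence count
        else:
            G.add_edge(a, b, weight=1)

# Keep only pairs that co-occur in at least two patients for display.
strong = [(a, b) for a, b, d in G.edges(data=True) if d["weight"] >= 2]
```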

Emily Horn (Public Policy, Global Health), Aasha Reddy (Economics), and Shanchao Wang (Masters Economics) spent ten weeks working with data from the National Asset Scorecard for Communities of Color (NASCC), an ongoing survey project that gathers information about the assets and debts of households at a detailed racial and national-origin level. They worked closely with faculty and researchers from the Samuel DuBois Cook Center on Social Equity.

The team built a ground truth dataset comprising satellite images, building footprints, and building heights (from LIDAR) for more than 40,000 buildings, along with road annotations. This dataset can be used to train computer vision algorithms to determine a building's volume from an image, and is a significant contribution to the broader research community, with applications in urban planning, civil emergency mitigation, and human population estimation.
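
As a minimal sketch of how a ground-truth volume label might be derived from these ingredients, the snippet below combines a footprint polygon's area (shoelace formula) with a LIDAR-derived height, assuming a flat roof; the actual labeling procedure may differ.

```python
import numpy as np

def footprint_area(xy):
    """Planar polygon area via the shoelace formula.

    xy: (n, 2) array of footprint vertices in metres, in boundary order.
    """
    x, y = xy[:, 0], xy[:, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, 1)) - np.dot(y, np.roll(x, 1)))

# Ground-truth volume for one (hypothetical) building: footprint area
# times LIDAR-derived height; a flat-roof simplification.
footprint = np.array([[0, 0], [20, 0], [20, 12], [0, 12]], dtype=float)
volume_m3 = footprint_area(footprint) * 9.5   # height in metres from LIDAR
```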

Lindsay Hirschhorn (Mechanical Engineering) and Kelsey Sumner (Global Health and Evolutionary Anthropology) spent ten weeks determining optimal vaccination clinic locations in Durham County for a simulated Zika virus outbreak. They worked closely with researchers at RTI International to construct models of disease spread and health impact, and developed an interactive visualization tool.
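
The clinic-siting step is a facility-location problem. The sketch below shows one generic approach, a greedy p-median heuristic on a resident-to-site distance matrix with synthetic coordinates; it is a stand-in for illustration, not the team's optimization model.

```python
import numpy as np

def greedy_p_median(dist, p):
    """Greedily pick p clinic sites minimizing total resident-to-clinic distance.

    dist: (n_residents, n_candidate_sites) distance matrix.
    """
    chosen = []
    best = np.full(dist.shape[0], np.inf)   # distance to nearest chosen site so far
    for _ in range(p):
        # pick the site giving the largest drop in total travel distance
        totals = [np.minimum(best, dist[:, j]).sum() for j in range(dist.shape[1])]
        j = int(np.argmin(totals))
        chosen.append(j)
        best = np.minimum(best, dist[:, j])
    return chosen

rng = np.random.default_rng(1)
residents = rng.random((1000, 2))           # hypothetical resident locations
sites = rng.random((25, 2))                 # hypothetical candidate clinic sites
D = np.linalg.norm(residents[:, None] - sites[None], axis=2)
print(greedy_p_median(D, p=3))
```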

Joel Tewksbury (BME) and Miriam Goldman (Math and Statistics, Arizona State University) spent ten weeks analyzing time-series darkness visual adaptation scores from over 1200 study participants to identify trends in night vision, and ultimately genetic markers that might confer a visual advantage.

Anne Driscoll (Economics, Statistical Science) and Austin Ferguson (Math, Physics) spent ten weeks examining metrics for inter-departmental cooperation and productivity, and developing a collaboration network of Duke faculty. This project was sponsored by the Duke Clinical and Translational Science Award, with the larger goal of promoting collaborative success in the School of Medicine and School of Nursing.

Statistical Science majors Nathaniel Brown and Corey Vernot, and Economics student Guan-Wun Hao spent ten weeks exploring changes in food purchase behavior and nutritional intake following the event of a new Metformin prescription for Type II Diabetes. They worked closely with Matthew Harding and researchers in the BECR Center, as well as Dr. Susan Spratt, an endocrinologist in Duke Medicine.

Graduate student: Hamza Ghadyali          

Faculty instructor: Dr. Paul Bendich

Computer Science majors Erin Taylor and Ian Frankenburg, along with Math major Eric Peshkin, spent ten weeks understanding how geometry and topology, in tandem with statistics and machine learning, can aid in quantifying anomalous behavior in cyber-networks. The team was sponsored by Geometric Data Analytics, Inc., and used real anonymized Netflow data provided by Duke's Information Technology Security Office.

Students in the Performance and Technology Class create a series of performances that explore the interface between society and our machines. With the theme of the cloud to guide them, they have created increasingly complex art using digital media, microcontrollers, and motion tracking. Their work will be on display at the Duke Choreolab 2016.

With the significant international consequences of recent outbreaks, the ITP Lab conducted extensive stakeholder interviews and macro-level health policy analysis to expose gaps in pandemic preparedness and develop legal frameworks for future threats. 

This project summarizes the existing sample agreements from different institutions, analyzes the key contractual issues in the formation of alliances, and develops master charts of legal provisions to compare different approaches, to provide a reference for the formation of new alliances in the era of epidemic disease outbreaks. 

A virtual reality system to recreate the archaeological experience using data and 3D models from the Neolithic site of Çatalhöyük, in Anatolia, Turkey.

How well and in what ways do governments communicate with their citizens? How do governments analyze data and create visualizations to promote public access to government information? 

Paclitaxel (Taxol) is a small molecule drug belonging to the taxane family. It is one of the most commonly used chemotherapeutics, administered as a monotherapy or in combination with other drugs to treat breast, lung, and ovarian cancer as well as Kaposi's sarcoma. Taxol is on the World Health Organization's (WHO) List of Essential Medicines, a list of the most important medications for basic health. Worldwide demand for paclitaxel exceeds the current supply.

This project transforms an inaccessible audio archive of historic North Carolina folk music collected by Frank Clyde Brown in the 1920s-40s into a vital, publicly accessible digital archive and museum exhibition.

Imagine a world where we understand how to detect mental health and developmental problems in early childhood so that we can intervene early in life and prevent future suffering and impairment. This is a challenge that can only be addressed by an interdisciplinary team that brings computational researchers together with child psychiatrists and neuroscientists who can integrate and mine knowledge from cross-cultural and global data.

Molly Rosenstein, an Earth and Ocean Sciences major, and Tess Harper, an Environmental Science and Spanish major, spent ten weeks developing interactive data applications for use in Environmental Science 101, taught by Rebecca Vidra.

Undergraduate students Ellie Burton (BioPhysics/Math, Johns Hopkins University), Kevin Kuo (Electrical and Computer Engineering), and GiSeok Choi (Electrical and Computer Engineering/Math) joined a research group led by Douglas Boyer and Professor Ingrid Daubechies, testing and developing mathematical and statistical methodology for measuring similarities between bones and teeth.

Nonnegative matrix factorization (NMF) has an established reputation as a useful data analysis technique in numerous applications. However, its use in practical situations has been challenged in recent years by the rapidly growing size of the datasets available and needed in the information sciences. To address this, we propose structured random compression, that is, random projections that exploit the data structure, for two NMF variants: classical and separable. In separable NMF (SNMF), the left factors are a subset of the columns of the input matrix. We present suitable formulations for each problem and treat representative algorithms within each one.
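
To make the compression idea concrete, here is a minimal sketch for the classical variant, assuming a Halko-style randomized range finder (power iterations make the projection adapt to the data, hence "structured") and alternating projected-gradient updates run against compressed copies of the input; the paper's actual formulations and algorithms differ.

```python
import numpy as np

def range_basis(X, r, n_iter=2, oversample=10):
    """Structured random projection: randomized range finder whose power
    iterations adapt the projection to the data, so that X ~= Q @ (Q.T @ X)."""
    Omega = np.random.default_rng(0).standard_normal((X.shape[1], r + oversample))
    Y = X @ Omega
    for _ in range(n_iter):
        Y = X @ (X.T @ Y)               # power iterations sharpen the range estimate
    Q, _ = np.linalg.qr(Y)
    return Q

def compressed_nmf(X, r, n_outer=200):
    """Alternating projected-gradient NMF where the small sketches XL and XR
    replace X in every update; a sketch of the idea, not the paper's method."""
    rng = np.random.default_rng(0)
    QL = range_basis(X, r)              # compresses the rows of X
    QR = range_basis(X.T, r)            # compresses the columns of X
    XL, XR = QL.T @ X, X @ QR
    W = np.abs(rng.standard_normal((X.shape[0], r)))
    H = np.abs(rng.standard_normal((r, X.shape[1])))
    for _ in range(n_outer):
        WL = QL.T @ W                   # W seen through the row compression
        H = np.maximum(H - (WL.T @ (WL @ H - XL)) / np.linalg.norm(WL.T @ WL, 2), 0)
        HR = H @ QR                     # H seen through the column compression
        W = np.maximum(W - ((W @ HR - XR) @ HR.T) / np.linalg.norm(HR @ HR.T, 2), 0)
    return W, H
```

The point of the compression is that each update touches an (r + oversample)-row sketch rather than the full data matrix, so the per-iteration cost no longer scales with the large dimension.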

In this work, we turn musical audio time series data into shapes for various tasks in music matching and musical structure understanding. 

The goal of this project is to take a large amount of data from the Massive Open Online Courses offered by Duke professors and produce from it a coherent and compelling data analysis challenge that might then be used for a Duke or nationwide data analysis competition.

Kelsey Sumner, an EvAnth and Global Health major, and Christopher Hong, a CompSci/ECE major, spent ten weeks analyzing high-dimensional microRNA data taken from patients with viral and/or bacterial conditions. They worked closely with the medical faculty and practitioners who generated the data.

Kang Ni, a Math/Econ major, Kehan Zhang, an Econ/Stats major, and Alex Hong spent ten weeks investigating a large collection of grocery store transaction data. They worked closely with Matt Harding and the Behavioral Economics and Healthy Food Choice Research Center (BECR Center).

Ethan Levine, Annie Tang, and Brandon Ho spent ten weeks investigating whether personality traits can be used to predict how people make risky decisions. They used a large dataset collected by the lab of Prof. Scott Huettel, and were mentored by graduate students Emma Wu Dowd and Jonathan Winkle.

Spenser Easterbrook, a Philosophy and Math double major, joined Biology majors Aharon Walker and Nicholas Branson in a ten-week exploration of the connections between journal publications from the humanities and the sciences. They were guided by Rick Gawne and Jameson Clarke, graduate students from Philosophy and Biology.

In this Data Expedition, Duke undergraduates were introduced to a real world traffic citation data set. Provided by Dr. Frank R. Baumgartner, a political scientist at UNC, the data consist of 15 years of traffic stops, with over 18 million observations of 53 variables.

Dr. Guillermo Sapiro, professor in Pratt School of Engineering at Duke University, conducts ongoing autism research. Using image processing, he attempts to program a computer to detect whether babies (around eight to 14 months of age) display a sign of autism. This very early detection enables doctors to train these babies (when their brain plasticity is high) to behave in ways to counter the behavioral limitations autism imposes, thus allowing these babies to act more normally as they grow up. 

We present a framework for high-dimensional regression using the GMRA data structure. In analogy to a classical wavelet decomposition of function spaces, a GMRA is a tree-based decomposition of a data set into local linear projections.
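
The snippet below is a minimal sketch of that tree-of-local-linear-projections idea, assuming PCA-direction splits and an ordinary least-squares plane in each leaf; it illustrates the structure rather than reproducing the authors' GMRA estimator.

```python
import numpy as np

class LocalLinearTree:
    """Recursively split along the top principal direction of the local
    point cloud, then fit a least-squares plane in each leaf."""

    def __init__(self, min_leaf=20):
        self.min_leaf = min_leaf

    def fit(self, X, y):
        self.tree = self._build(X, y)
        return self

    @staticmethod
    def _leaf(X, y):
        A = np.hstack([X, np.ones((len(y), 1))])        # affine design matrix
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        return ("leaf", coef)

    def _build(self, X, y):
        if len(y) <= 2 * self.min_leaf:
            return self._leaf(X, y)
        center = X.mean(axis=0)
        _, _, Vt = np.linalg.svd(X - center, full_matrices=False)
        proj = (X - center) @ Vt[0]                     # top principal coordinate
        thresh = np.median(proj)
        mask = proj <= thresh
        if mask.all() or not mask.any():                # degenerate split: stop
            return self._leaf(X, y)
        return ("split", center, Vt[0], thresh,
                self._build(X[mask], y[mask]), self._build(X[~mask], y[~mask]))

    def _predict_one(self, node, x):
        if node[0] == "leaf":
            return np.append(x, 1.0) @ node[1]
        _, center, d, thresh, left, right = node
        return self._predict_one(left if (x - center) @ d <= thresh else right, x)

    def predict(self, X):
        return np.array([self._predict_one(self.tree, x) for x in X])

# Usage: a nonlinear response approximated by piecewise-linear leaves.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (2000, 2))
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2
model = LocalLinearTree(min_leaf=50).fit(X, y)
```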

The Triangle Census Research Network (TCRN) is an interdisciplinary team of researchers from Duke University and the National Institute of Statistical Sciences dedicated to improving the way that federal statistical agencies collect, analyze, and disseminate data to the public.

Large-scale databases from the social, behavioral, and economic sciences offer enormous potential benefits to society. However, as most stewards of social science data are acutely aware, wide-scale dissemination of such data can result in unintended disclosures of data subjects' identities and sensitive attributes, thereby violating promises, and in some instances laws, intended to protect data subjects' privacy and confidentiality.

In this project, we aim to solve the compressive sensing (CS) hyperspectral/video image reconstruction problem. The proposed algorithm is robust to different initializations, which is useful for CS reconstruction problems where suitable training datasets are not available.
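
For context, here is a generic CS reconstruction baseline, iterative soft-thresholding (ISTA) for the convex l1-regularized least-squares surrogate, which illustrates why initialization can be a non-issue for convex formulations; it is a standard textbook method, not the algorithm proposed in this project.

```python
import numpy as np

def ista(A, b, lam=0.1, n_iter=500):
    """Minimize 0.5 * ||A x - b||^2 + lam * ||x||_1 by proximal gradient steps."""
    L = np.linalg.norm(A, 2) ** 2       # Lipschitz constant of the smooth gradient
    x = np.zeros(A.shape[1])            # any start converges: the problem is convex
    for _ in range(n_iter):
        x = x - A.T @ (A @ x - b) / L   # gradient step on the data-fit term
        x = np.sign(x) * np.maximum(np.abs(x) - lam / L, 0)   # soft threshold
    return x

# Example: recover a 5-sparse signal from 4x-undersampled random measurements.
rng = np.random.default_rng(0)
x_true = np.zeros(200)
x_true[rng.choice(200, 5, replace=False)] = 1.0
A = rng.standard_normal((50, 200)) / np.sqrt(50)
x_hat = ista(A, A @ x_true, lam=0.05)
```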

This data expedition introduced students to "sliding windows and persistence" on time series data, an algorithm that turns a one-dimensional time series into a geometric curve in high dimensions and quantitatively analyzes hybrid geometric/topological properties of the resulting curve, such as "loopiness" and "wiggliness."
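
The embedding step is simple to state: point i of the curve is the window (x[i], x[i+tau], ..., x[i+(dim-1)*tau]). Below is a minimal sketch of just that step; the persistence computation that then quantifies "loopiness" (e.g., with a persistent homology library such as ripser) is omitted.

```python
import numpy as np

def sliding_window_embedding(x, dim, tau):
    """Turn a 1-D time series into a curve in R^dim.

    Point i of the curve is (x[i], x[i+tau], ..., x[i+(dim-1)*tau]).
    Periodicity in x becomes a loop in the embedded point cloud, which
    persistent homology can then detect and quantify.
    """
    n = len(x) - (dim - 1) * tau
    return np.stack([x[i : i + n] for i in range(0, dim * tau, tau)], axis=1)

# Example: a periodic signal embeds as a closed loop in the plane.
t = np.linspace(0, 8 * np.pi, 400)
cloud = sliding_window_embedding(np.cos(t), dim=2, tau=25)
```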

Students learned to visualize high-dimensional gene expression data; understand genetic differences in the context of gene networks; connect genetic differences to physiological outcomes; and perform simple analyses using the R programming language.

Graduate students: Aaron Berdanier and Matt Kwit, University Program in Ecology & Nicholas School of the Environment

Using social network analysis to predict survival in large-brained mammals.

Questions asked: Do males and females scent mark equally? Do lemurs scent mark equally in breeding and non-breeding seasons?

Introduce NBA and MLB datasets to undergraduates to help them gain expertise in exploratory data analysis, data visualization, statistical inference, and predictive modeling.

STEM education often presents a very sanitized version of the scientific enterprise. To some extent, this is necessary, but overemphasizing neat-and-tidy results and scripted protocol assignments poses the risk of failing to adequately prepare students for the real-world mess of transforming experimental data into meaningful results. The fundamental aim of this project was to guide students in processing large real-world datasets far beyond their academic comfort zone so as to give them a more realistic understanding of how science works.

What drove the prices for paintings in 18th Century Paris?

A new model is developed for joint analysis of ordered, categorical, real and count data. In the motivating application, the ordered and categorical data are answers to questionnaires, the (word) count data correspond to the text questions from the questionnaires, and the real data correspond to fMRI responses for each subject. We also combine the analysis of these data with single-nucleotide polymorphism (SNP) data from each individual. 

The sub-thalamic nucleus (STN) within the sub-cortical region of the basal ganglia is a crucial targeting structure for deep brain stimulation (DBS) surgery, in particular for alleviating Parkinson's disease (PD) symptoms. Volumetric segmentation of such a small and complex structure, which is elusive in clinical MRI protocols, is therefore a prerequisite for reliable DBS targeting. While direct visualization and localization of the STN is facilitated by advanced high-field 7T MR imaging, such high fields are not always clinically available.

Volumetric segmentation of sub-cortical structures such as the basal ganglia and thalamus is necessary for non-invasive diagnosis and neurosurgery planning. This is a challenging problem due in part to limited boundary information between structures, similar intensity profiles across the different structures, and low contrast data.

Intelligent mobile sensor agents can adapt to heterogeneous environmental conditions to achieve optimal performance in tasks such as demining and maneuvering-target tracking.

Successful high-resolution signal reconstruction -- in problems ranging from astronomy to biology to medical imaging -- depends crucially on our ability to make the most out of indirect, incomplete, a