Research

Research projects at Rhodes iiD focus on building connections. We encourage cross-pollination of ideas across disciplines and the development of new forms of collaboration that will advance research and education across the full spectrum of disciplines at Duke. The topics below show areas of research focus at Rhodes iiD. See all of our research.

Producing oil and gas in the North Sea, off the coast of the United Kingdom, requires a lease to extract resources from beneath the ocean floor, and companies bid for those rights. This team will work with ExxonMobil to understand why these leases are acquired and who benefits. This requires historical data on bid history to investigate what leads to an increase in the number of (a) leases acquired and (b) companies participating in auctions. The goal of this team is to create a well-structured dataset based on company bid history from the U.K. Oil and Gas Authority; these data come in many different file structures and formats (tabular, PDF, etc.). The team will curate these data to create a single, tabular database of U.K. bid history and work programs.

Producing oil and gas in the Gulf of Mexico requires rights to extract these resources from beneath the ocean floor, and companies bid into the market for those rights. The top bids are sometimes significantly larger than the next highest bids, but it’s not always clear why this differential exists, and some companies seemingly overbid by large margins. This team will work with ExxonMobil to curate and analyze historical bid data from the Bureau of Ocean Energy Management that contains information on company bid history, infrastructure, wells, and seismic survey data, as well as data from the companies themselves and geopolitical events. The stretch goal of the team will be to see if they can uncover the rationale behind historic bidding patterns. What do the highest bidders know that other bidders do not (if anything)? What characteristics might incentivize overbidding to minimize the risk of losing the right to produce (i.e., ambiguity aversion)?

In this project, we are interested in creating a cohesive data pipeline for generating, modeling and visualizing basketball data. In particular, we are interested in understanding how to extract data from freely available video, how to model such data to capture player efficiency, strength and leadership, and how to visualize such data outcomes. We will have four separate teams as part of this project working on interrelated but separate goals:

Team 1: Video data extraction

This team will explore different video data extraction techniques with the goal of identifying player locations, ball location and events at any given time during a basketball game. The software developed as part of this project will be able to generate a usable dataset of time-stamped basketball plays that can be used to model the game of basketball.

Teams 2 & 3: Modeling basketball data: offense and defense

The two teams will explore different models for the game of basketball. The first team will concentrate on modeling offensive plays and try to answer questions such as: How does the ball advance? What leads to successful plays? The second team will concentrate on defensive plays: What is an optimal strategy for minimizing opponent scoring opportunities? How should we evaluate defensive plays?

Team 4: Visualizing basketball data

This team will work on dynamic and static visualization of elements of a basketball game. The goal of the visualization is to capture information about how players and the ball move around the court. The team will develop tools to represent average trajectories in these settings that also capture uncertainty about this information.

Faculty Leads: Alexander Volfovsky, James Moody, Katherine Heller

Project Managers: Fan Bu, Greg Spell, 2 more TBD

A team of students led by researchers in the Energy Access Project will develop means to evaluate non-technical electricity losses (theft) in developing countries through machine learning techniques applied to smart meter electricity consumption data. Students will use data from smart meters installed at transformers and households through a randomized controlled trial. Students will develop algorithms to detect anomalies in the electricity consumption data and create a dataset of such indicators. This project will provide researchers with new ways of incorporating electricity consumption data and applications for electricity utilities in developing-country settings.
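As a rough illustration of the kind of anomaly detection the team might start from, the sketch below flags household consumption readings that deviate sharply from their recent history using a rolling z-score. It is a minimal baseline, not the team's actual method, and the file and column names are hypothetical.

```python
import pandas as pd

def flag_anomalies(readings: pd.Series, window: int = 24 * 7, z_thresh: float = 3.0) -> pd.Series:
    """Flag readings more than z_thresh rolling standard deviations from the rolling mean."""
    rolling = readings.rolling(window, min_periods=window // 2)
    z = (readings - rolling.mean()) / rolling.std()
    return z.abs() > z_thresh

# Hypothetical input: hourly kWh readings for one household, indexed by timestamp.
df = pd.read_csv("household_meter.csv", parse_dates=["timestamp"], index_col="timestamp")
df["anomaly"] = flag_anomalies(df["kwh"])
print(df[df["anomaly"]].head())
```

A persistent drop in metered consumption, for example, would surface as a run of flagged readings; real theft indicators would combine several such signals across transformer and household meters.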

A team of students, in conjunction with Duke’s Office of Information Technology, will use Duke’s network traffic data to perform IoT device behavioral fingerprinting that can be employed to identify device types. The data will be used to analyze trends and risks, develop security best practices, and build machine learning models that can detect similar device types. Students will work directly with the network data, have access to the analytics tools used in OIT, and explore the data in consultation with OIT network, security, and data analytics professionals.
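As a sketch of what behavioral fingerprinting can look like in practice, the example below trains a random forest to predict device type from per-device traffic features. The feature names and input file are hypothetical placeholders for statistics one would aggregate from network flow records.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# Hypothetical per-device features aggregated from flow records.
flows = pd.read_csv("device_flows.csv")  # columns: pkt_rate, mean_pkt_size, n_dest_ports, tls_ratio, device_type
X = flows[["pkt_rate", "mean_pkt_size", "n_dest_ports", "tls_ratio"]]
y = flows["device_type"]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, stratify=y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))
```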

Project Lead: Jen Vizas

Project Manager: Will Brockselsby

Interested in understanding the types of attacks targeting Duke and other universities?  Led by OIT and the IT Security Office, students will learn to analyze threat intelligence data to identify trends and patterns of attacks.  Duke blocks an average of 1.5 billion malicious connection attempts per day and is working with other universities to share the attack data.  One untapped area is research into the types of attacks and how universities are targeted.  Students will collaborate with security and IT professionals to analyze the data and discern patterns.

Project Lead: Jen Vizas

Project Manager: Jesse Bowling

Saltwater intrusion and sea level rise are issues of serious concern for people throughout the coastal plain. Our Data+ team will collaborate with researchers to create an interactive data visualization platform that compiles remotely sensed estimates of vegetation change throughout the coastal plain and links these data with field salinity estimates. The team will have the opportunity to build educational website content that a) explains how saltwater incursion occurs; b) describes the consequences for coastal forests; c) links this understanding with likely scenarios of coastal climate for the next decade. In each case, we would like to illustrate this content with interactive data graphics.

Faculty Leads: Justin Wright, Emily Bernhardt

Project Manager: Emily Ury

A team of students led by faculty and researchers from the School of Medicine, the Center for Global Reproductive Health at the Duke Global Health Institute, and the Duke Evidence Lab will collaborate on the user interface for a tool developed to help advocates and policymakers target family planning resources to key populations in low-resource settings. Team members will traverse the app development lifecycle while contributing to a tool that can improve global reproductive health.

Faculty Lead: Megan Huchko

Project Manager: Amy Finnegan

A team of students led by Drs. Aquino (Engineering) and Routh (Urologic Surgery) will develop objective algorithms to guide interpretation of data from a urology test known as urodynamics, which is used in children with spina bifida to define a patient’s risk of debilitating bladder and kidney complications.  Urodynamics involves dynamic pressure monitoring as the bladder is filled with fluid.  This project is part of a 21-institution collaboration coordinated and funded by the U.S. Centers for Disease Control and Prevention (CDC), with the long-term goal of defining optimal management strategies for children with spina bifida. The short-term goal of this Data+ project is to define initial features of urodynamics that can be fed into increasingly complex future algorithms to guide clinical interpretations that determine whether, for example, children need reconstructive surgery to avoid complications of their disease.

Faculty Leads: Wilkins Aquino, Jonathan Routh

This team will explore how to develop machine learning techniques that can be trained once and then applied almost anywhere in the world to identify energy infrastructure in satellite imagery. Led by researchers from the Energy Data Analytics Lab and the Sustainable Energy Transitions Initiative, the team will design two datasets: the first containing satellite imagery from diverse geographies with all energy infrastructure labeled, and the second a synthetic version of the same imagery. These data will enable research into whether synthetic imagery may be used to adapt algorithms to new domains. The better these techniques adapt to new geographies, the more information can be provided to researchers and policymakers seeking to design sustainable energy systems and understand the impact of electrification on the welfare of communities.

Faculty Lead: Kyle Bradbury

Project Manager: TBD

A team of students led by Jim Heffernan, Nick Bruns, and partners at UNC and EPA will create interactive data visualizations of water quality data in rivers and lakes of the United States. These tools will aid environmental scientists, managers, policy-makers, and students who want to investigate patterns of water pollution across broad scales of space and time. Students will gain experience with manipulation of large data sets, geospatial analysis, and remote sensing of water quality parameters. Opportunities include developing visualization tools to represent spatial and temporal coverage of water quality parameters, georeferencing remote-sensing satellite overpasses with field observations, and assessing spatial and temporal gaps in observations for a variety of water quality parameters.

Faculty Lead: Jim Heffernan

Project Manager: Nick Bruns

Duke must reduce its energy footprint as it strives for carbon neutrality by 2024. To help this cause, a team of students will review troves of utility usage data and attempt to build an attractive and practical monthly energy-use report for every building and school at Duke. The report will not only show historical usage but also provide an energy benchmark for comparison and conservation tips that local administrators can act on. Duke Energy has used a similar report for years to encourage conservation at the residential level. It is time to bring energy-use transparency to the broader Duke community and inspire action.

Faculty Lead: Billy Pizer

Project Manager: TBD

A team of students partnering with Duke University Libraries will explore the complicated decision space of electronic journal licensing. Electronic resources like journal articles are a major service provided by academic libraries, but the choice of what journal subscriptions to purchase can be costly and time-consuming, and journal distribution companies like Elsevier manipulate their journal bundles to maximize their own profits. This team will build a model for journal purchasing by combining several years of journal usage data (including views, downloads, authorship, citations, and impact) with journal cost data. The team will work on software to improve the data cleaning and analysis process and will create visualizations and dashboards to assist the library in its decision-making efforts. Because many libraries have the same concerns about journal bundles and use the same kinds of data to make these decisions, this project may have far-reaching impacts among academic libraries.
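One simple starting point for such a model, sketched below under assumed file and column names, is a cost-per-use ranking that joins usage and cost data to surface expensive, little-used titles; the team's actual model would weigh many more signals (authorship, citations, impact).

```python
import pandas as pd

# Hypothetical annual journal data combining usage and cost.
journals = pd.read_csv("journal_stats.csv")  # columns: title, downloads, views, cost_usd

journals["uses"] = journals["downloads"] + journals["views"]
journals["cost_per_use"] = journals["cost_usd"] / journals["uses"].clip(lower=1)

# Expensive journals with little use are candidates for cancellation review.
print(journals.sort_values("cost_per_use", ascending=False).head(10)[["title", "cost_per_use"]])
```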

Faculty Leads: Angela Zoss, Jeff Kosokoff

Neuroscience evidence (e.g., brain scans, mental-illness diagnoses) is increasingly being used in criminal cases to explain criminal behavior and lessen responsibility. A team of students led by researchers within the Science, Law, and Policy Lab will explore a national set of criminal cases in which neuroscience evidence is used, to see what aspects of a criminal trial (e.g., offense, age of offender) may predict the outcome of future cases. Additionally, using our comprehensive ten-year judicial opinion dataset (2005-2015), the team will collaborate on creating a computer algorithm to assist in locating and coding online judicial opinions to build upon our comprehensive list of opinions. This tool will provide a strong foundation for the work of understanding neuroscience’s role within a criminal court setting.

Faculty Lead: Nita Farahany

Project Manager: William Krenzer

A team of students will use a variety of data sets and mapping technologies to determine a feasible location for a deep-sea memorial to the transatlantic slave trade. While scholars have studied the overall mortality of the slave trade, little is known about where these deaths occurred. New mapping technologies can begin to supply this data. Led by English professor Charlotte Sussman, in association with the Representing Migrations Humanities Lab, this team will create a new database that combines previously disparate data and archival sources to discover where on their journeys enslaved persons died, and then to visualize these journeys. This project will employ the resources of digital technologies as well as the humanistic methods of history, literature, philosophy, and other disciplines. The project welcomes students from a broad range of disciplines: computer science; mathematics; English and literature; history; African and African American studies; philosophy; art history; visual and media studies; geography; climatology; and ocean science.

 

Image credit: J.M.W. Turner, Slave Ship, 1840, Museum of Fine Arts, Boston (public domain)

Faculty Lead: Charlotte Sussman

Project Manager: Emma Davenport

Students will collaborate with staff at DataWorks NC and the Eviction Diversion Program to explore and develop means of using evictions data to drive meaningful policy change that helps Durham residents stay in their homes. Students will clean and assess the quality of evictions data; look for seasonal and geographic variation in eviction rates; analyze the relationship between evictions, rents, wages, and other economic indicators; develop metrics for the real financial cost of evictions; and build static visualizations or a data dashboard to communicate their results. This project will help housing advocates in Durham assess the impact of their current work and understand which future interventions will be most impactful.

Project Leads: Tim Stallmann, John Killeen, Peter Gilbert

Project Manager: TBD

The American public first encountered the term “genocide” in a Washington Post op-ed published in 1944; since then, the word’s meaning has been circulated, debated, and shaped by numerous forces, especially by words and images in newspapers. With the support of Dr. Priscilla Wald (English), a team of students led by Nora Nunn (English graduate student) and Astrid Giugni (English and ISS) will analyze how U.S. mass media—particularly newspapers—enlist text and imagery such as press photographs to portray genocide, human rights, and crimes against humanity from World War II to the present. From the Holocaust to Cambodia, from Rwanda to Myanmar, such representation has political consequences. If time allows, students will also study the representation of collective violence in Hollywood film, querying the relationship between human rights and genre. The implications of these findings could inform future coverage of human rights-related issues at home and abroad.

Faculty Leads: Nora Nunn, Astrid Giugni

How Much Profit is Too Much Profit?

A team of students led by history professor Sarah Deutsch will mine newspaper and Congressional databases to investigate the dynamics behind the excess-profits tax laws Congress passed between 1918 and 1948 and the concept of price gouging, which continues to shape legislation today. As of 2018, numerous states have price-gouging laws. Why? How did they define what was excessive? How did this critique of profit-making become mainstream without endangering capitalism? By searching extant newspaper and Congressional databases for the frequency and context of particular words and phrases, the project will begin to uncover the logic, language, and partisanship (or lack of it) used to critique profits at three moments in U.S. history that resulted in government action to limit profit-making.

(Cartoon from The Masses, July 1916)

Faculty Lead: Sarah Deutsch

Project Manager: TBD

A team of students led by researchers from the Michael W. Krzyzewski Human Performance Laboratory (K-Lab) will develop an analytic and report-generating web-based application to help the K-Lab reduce musculoskeletal injuries in student-athletes at Duke University. This tool will produce actionable, student-athlete-specific reports that incorporate the analysis of previous injury history and current capabilities (K-Lab assessments) in order to identify injury risk and develop individualized recommendations for injury prevention. Students will develop analytic tools and scoring criteria to assess injury risk through profiling of data based on minimally clinically important differences, injury profiles, peer group analysis, and injury risk scoring strategies based on a comprehensive set of performance metrics. Injury risk identification will be further enhanced by clustering data analysis around joint- or tissue-specific injury risk, previous injury history, and athlete capabilities (strength, flexibility, and postural stability). The final deliverable will enhance injury prevention strategies for student-athletes and other populations by bridging the analytic gap between injury risk screening and actionable injury prevention strategies.

Faculty Lead: Dr. Tim Sell

Project Manager: TBD

Data-enabled approaches present new opportunities to analyze responses of aquatic ecosystems to stressors and to illustrate scientific findings in new formats that are more widely accessible. Our goal is to create a web-based storytelling platform that illustrates the results of freshwater ecosystem studies conducted at the IISD-Experimental Lakes Area in Canada (https://www.iisd.org/ela/). Students on our team will process historical datasets and develop interactive data visualization tools for public outreach on freshwater ecology and conservation. This project is led by water resources professor Kateri Salk (Nicholas School of the Environment) and staff at the IISD-Experimental Lakes Area.

Faculty Lead: Kateri Salk

Project Manager: TBD

A team of students led by faculty and students in Duke's River Center will manipulate, model and visualize time series data derived from hundreds of rivers throughout the world. Students will gain experience working with large datasets derived from environmental sensors and will be able to direct the data project based on their learning interests. Opportunities include developing machine learning tools for data processing and pattern recognition, building software and web interfaces to enable cloud computing, and creating interactive graphics aimed at explaining scientific concepts using Big Data. Tools developed through this project will be hosted on the StreamPulse web platform (streampulse.org).

Faculty Leads: Emily Bernhardt, Jim Heffernan

Project Manager: TBD

This team will collaborate with Durham’s Crisis Intervention Team, a group of law enforcement, fire, and EMS personnel who are specially trained to interact with citizens in mental health crisis.  We will analyze data from the Durham County Jail to track repeat arrests by persons with or without mental illness, along with their use of mental health and other services in the Duke Health System.  By the end of the summer, we will report findings and recommendations to the Crisis Intervention Team and Durham’s Stepping Up Initiative. 

Faculty Lead: Nicole Schramm-Sapyta

Project Manager: TBD

Have you ever read a book or watched a movie and realized that you have seen the same story before?  How do you know if you are watching an adaptation? A team of students led by UNC-Chapel Hill graduate student Grant Glass will develop means to track the movement of adaptations within contemporary culture through machine learning techniques. Drawing upon a variety of textual information from historical and digital sources, the project team will have the opportunity to work with many different types of data. Students will identify features of different master narratives, which will be used to demonstrate how certain stories are modified and retold over and over again. Using this training dataset, the team will apply algorithms to identify adaptations in previously unidentified works. This will allow scholars to better understand at scale how certain narratives are adapted into new stories and forms.

Faculty Lead: Grant Glass

Project Manager: TBD

A team of students led by researchers in the Global Financial Markets Center at Duke Law will collect and analyze home mortgage market data that was publicly available during the run-up to the Financial Crisis (1997 – 2007), including (i) size of the market, (ii) composition of the market (conforming v. non-conforming), (iii) home ownership rates, (iv) originators (depository institutions v. non-depositories), (v) default and foreclosure rates, (vi) assessments of the market by supervisory and regulatory agencies, (vii) press coverage of the mortgage market, and (viii) public statements by governmental leaders about home mortgages. Analyzing and presenting this data will allow the team to understand what information was publicly available to policymakers preceding the Crisis. The data will also be used to inform the oral histories of key policymakers that will be collected during a Bass Connections project that will begin in the fall of 2019.

Faculty Lead: Lee Reiners

Project Manager: TBD

Modern Energy Group (MEG) finances and operates various distributed energy resources operating in wholesale energy markets, ranging from solar panels to residential smart thermostats. MEG also does financial trading when it identifies arbitrage opportunities in these markets. One of MEG's main operational risks is the very high volatility in wholesale real-time (or spot) energy prices. Where stock markets consider a 30% change in price large, energy markets routinely face changes in price on the order of 300%. This high volatility comes from three main "shocks":

1. power demand changes, due to unpredictable weather, industrial patterns, or human consumption;
2. fuel shortages, driven by trade, extraction/exploration, and gathering/transportation economics;
3. electrical transmission outages, driven by operational failure, extreme weather events, and human behavior.

First, this project team will identify what should be considered an "extreme" price shock from 5-10 years of historical data in PJM. Second, the team will work to automatically identify potential causes for the rare events from news articles, public filings, and MEG's own structured data. Third, the team will build reasonable priors for the occurrences of these rare events, and incorporate potential covariance between the events using copulas or similar methods. Finally, the team will create a simple classifier such as logistic regression to predict the likelihood of a price shock on a given day. The model needs to be evaluated with a walk-forward backtest, training on about 3 years of data at a time, and shifting forward the training window in approximately one-month increments, to smooth out potential bias and overfitting in the model. 
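The walk-forward evaluation described above might look like the following sketch, which trains a logistic regression on a rolling three-year window and tests on the following month. The file, feature, and label names are hypothetical stand-ins for whatever shock indicators the team engineers.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# Hypothetical daily features with a binary "shock" label.
data = pd.read_csv("pjm_daily.csv", parse_dates=["date"]).set_index("date").sort_index()
features = ["load_forecast_error", "gas_price_change", "outage_mw"]

train_span, step = pd.DateOffset(years=3), pd.DateOffset(months=1)
start, scores = data.index.min(), []
while start + train_span + step <= data.index.max():
    train = data.loc[start : start + train_span]
    test = data.loc[start + train_span : start + train_span + step]
    model = LogisticRegression(max_iter=1000).fit(train[features], train["shock"])
    if test["shock"].nunique() == 2:  # AUC is undefined unless both classes appear
        scores.append(roc_auc_score(test["shock"], model.predict_proba(test[features])[:, 1]))
    start += step  # shift the training window forward one month

print(f"mean walk-forward AUC: {sum(scores) / len(scores):.3f}")
```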

Project Lead: Eric Butter, Modern Energy Group

Project Manager: TBD

A team of students led by statistics professor Jie Ding of the University of Minnesota will develop algorithms to recognize human emotions (e.g., calm, happy, angry) from audio speech data, and to incorporate new emotions into existing speech. By applying machine learning techniques to various speech datasets, students will identify features of human speech that can represent emotions, develop software to perform emotion recognition, and synthesize emotional speech data. Students will also have the opportunity to create their own dataset and apply their methods to it for training and testing. This work will enable further research in speech emotion analysis and may result in new designs of human-computer interfaces.
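A common baseline for this task, shown below as a sketch rather than the team's actual pipeline, is to summarize each clip with MFCC statistics and train a standard classifier on them. The data layout (WAV files named by emotion label) is an assumption for illustration.

```python
import glob
import os

import librosa
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def mfcc_features(path: str) -> np.ndarray:
    """Summarize a clip as the per-coefficient mean and std of its MFCCs."""
    y, sr = librosa.load(path, sr=16000)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

# Hypothetical layout: data/angry_001.wav, data/calm_007.wav, ...
paths = glob.glob("data/*.wav")
X = np.stack([mfcc_features(p) for p in paths])
y = [os.path.basename(p).split("_")[0] for p in paths]  # label taken from filename
print(cross_val_score(SVC(kernel="rbf"), X, y, cv=5).mean())
```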

Faculty Leads: Vahid Tarokh, Jie Ding

Project Manager: Enmao Diao

This Data Expedition introduces students to network tools and approaches and invites students to consider the relationship(s) between social networks and social imaginaries. Using foundation-funding data collected from The Foundation Directory Online, the Data Expedition enables students to visualize and explore the relationship between networks, social imaginaries, and funding for higher education. The Data Expedition is based on two sets of data. The first set lists the grants received by Duke University in 2016 from five foundations: The Bill and Melinda Gates Foundation, Fidelity Charitable Gift Fund, Silicon Valley Community Foundation, The Community Foundation of Western North Carolina, and The Robert Wood Johnson Foundation. The second set lists the names of board members from Duke University and each of these five foundations, along with the degree-granting institution for their undergraduate education. For the sake of this exercise, the degree-granting institution data were fabricated from a randomized list of the top twenty-five undergraduate institutions.
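The sketch below shows the shape of such an exploration with networkx, using a toy graph of funders and recipients; the amounts are invented, and the real expedition data of course include many more grants and the board-membership ties.

```python
import networkx as nx

# Toy edge list: (funder, recipient, grant amount in USD).
grants = [
    ("Bill and Melinda Gates Foundation", "Duke University", 1_500_000),
    ("Robert Wood Johnson Foundation", "Duke University", 750_000),
    ("Fidelity Charitable Gift Fund", "Duke University", 250_000),
]

G = nx.Graph()
for funder, recipient, amount in grants:
    G.add_node(funder, kind="foundation")
    G.add_node(recipient, kind="university")
    G.add_edge(funder, recipient, weight=amount)

# Weighted degree hints at which funders anchor the funding network.
for node, wdeg in G.degree(weight="weight"):
    print(node, wdeg)
```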

This Data Expedition seeks to introduce students to statistical analysis in the field of international development. Students construct an index of wealth/poverty based on asset holdings, using four datasets collected under the umbrella of the Living Standards Measurement Survey project at the World Bank. We selected countries to represent different continents with comparable and recent survey data: Bulgaria (2007), Tajikistan (2009), Tanzania (2010-2011), and Panama (2008).

First, we construct an index of wealth based on household assets in the different countries using Principal Components Analysis. Once a poverty index is constructed, students seek to understand what the main drivers of wealth/poverty are in different countries. We include variables for health, education, age, relationship to the household head, and sex. Students then use regression analysis to identify the main drivers of poverty in different countries.
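A minimal sketch of that two-step analysis, assuming hypothetical asset and covariate column names, is below: the first principal component of the standardized asset indicators serves as the wealth index, which is then regressed on candidate drivers.

```python
import pandas as pd
import statsmodels.api as sm
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Hypothetical household records with 0/1 asset-ownership flags.
households = pd.read_csv("lsms_households.csv")
assets = households[["owns_radio", "owns_fridge", "owns_bicycle", "has_electricity"]]

# Wealth index = first principal component of the standardized assets.
households["wealth_index"] = PCA(n_components=1).fit_transform(
    StandardScaler().fit_transform(assets)
).ravel()

# Regress the index on candidate drivers of wealth/poverty.
X = sm.add_constant(households[["education_years", "age_head", "female_head"]])
print(sm.OLS(households["wealth_index"], X).fit().summary())
```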

This data expedition explores the local (ego) patent citation networks of three hybrid vehicle-related patents. The concept of patent citations and technological development is a core theme in innovation and entrepreneurship, and the purpose of these network explorations is to both quantitatively and visually assess how innovations are connected and what these connections mean for the focal innovations and the technologies that draw on those patents in the future. The expedition was incorporated as part of the Sociology of Entrepreneurship class, where students are thinking about the emergence and diffusion of innovations.

Large publicly available environmental databases are a tremendous resource for both scientists and the general public interested in climate trends and properties. However, without the programming skills to parse and interpret these massive datasets, significant trends may remain hidden from both scientists and the public. In this data exploration, students, over the course of three hours, accessed two large, publicly available datasets, each with greater than 4 million observations. They learned how to use R and RStudio to effectively organize, visualize and statistically explore trends in deep sea physical oceanography.  

Our aim was to introduce students to the wealth of possibilities that human genotyping and sequencing hold by illustrating firsthand the power of these datasets to identify genetic relatives, using the story of the Golden State Killer’s capture with public genetic databases.

This Data Expedition introduced hypothesis-driven data analysis in R and the concept of circular data, while providing some tools for importing it and analyzing it in R.

Brooke Erikson (Economics/Computer Science), Alejandro Ortega (Math), and Jade Wu (Computer Science) spent ten weeks developing open-source tools for automatic document categorization, PDF table extraction, and data identification. Their motivating application was provided by Power for All’s Platform for Energy Access Knowledge, and they frequently collaborated with professionals from that organization.

Click here to read the Executive Summary

 

Jake Epstein (Statistics/Economics), Emre Kiziltug (Economics), and Alexander Rubin (Math/Computer Science) spent ten weeks investigating the existence of relative value opportunities in global corporate bond markets. They worked closely with a dataset provided by a leading asset management firm.

Click here for the Executive Summary

Maksym Kosachevskyy (Economics) and Jaehyun Yoo (Statistics/Economics) spent ten weeks understanding temporal patterns in the used construction machinery market and investigating the relationship between these patterns and macroeconomic trends.

They worked closely with a large dataset provided by MachineryTrader.com, and discussed their findings with analytics professionals from a leading asset management firm.

Click here to read the Executive Summary

Alec Ashforth (Economics/Math), Brooke Keene (Electrical & Computer Engineering), Vincent Liu (Electrical & Computer Engineering), and Dezmanique Martin (Computer Science) spent ten weeks helping Duke’s Office of Information Technology explore the development of an “e-advisor” app that recommends co-curricular opportunities to students based on a variety of factors. The team used collaborative and content-based filtering to create a recommender-system prototype in R Shiny.

Click here to read the Executive Summary

Statistical Science majors Eidan Jacob and Justina Zou joined forces with math major Mason Simon to build interactive tools that analyze and visualize the trajectories taken by wireless devices as they move across Duke’s campus and connect to its wireless network. They used de-identified data provided by Duke’s Office of Information Technology, and worked closely with professionals from that office.

Click here for the Executive Summary

The aim of this data expedition was to give students an introduction to stable isotopes and how the data can be used to understand trophic dynamics. 

Cecily Chase (Applied Math), Brian Nieves (Computer Science), and Harry Xie (Computer Science/Statistics) spent ten weeks understanding how algorithmic approaches can shed light on which data center tasks (“stragglers”) are typically slowed down by unbalanced or limited resources. Working with a real dataset provided by project client Lenovo, the team created a monitoring framework that flags stragglers in real time.

Click here to read the Executive Summary

David Liu (Electrical & Computer Engineering) and Connie Wu (Computer Science/Statistics) spent ten weeks analyzing data about walking speed from the 6th Vital Sign Study.

Integrating study data with public data from the American Community Survey, they built interactive visualization tools that will help researchers understand the study results and the representativeness of study participants.

Click here to read the Executive Summary

Lucas Fagan (Computer Science/Public Policy), Caroline Wang (Computer Science/Math), and Ethan Holland (Statistics/Computer Science) spent ten weeks understanding how data science can contribute to fact-checking methodology. Training on audio data from major news stations, they adapted OpenAI methods to develop a pipeline that moves from audio data to an interface that enables users to search for claims related to other claims that had been previously investigated by fact-checking websites.

This project will continue into the academic year via Bass Connections.

Click here to read the Executive Summary.

A team of students led by Professors Jonathan Mattingly and Gregory Herschlag will investigate gerrymandering in political districting plans.  Students will improve on and employ an algorithm to sample the space of compliant redistricting plans for both state and federal districts.  The output of the algorithm will be used to detect gerrymandering for a given district plan; this data will be used to analyze and study the efficacy of the idea of partisan symmetry.  This work will continue the Quantifying Gerrymandering project, seeking to understand the space of redistricting plans and to find justiciable methods to detect gerrymandering. The ideal team has a mixture of members with programming backgrounds (C, Java, Python), statistical experience (including possibly R), mathematical and algorithmic experience, and exposure to political science or other social science fields.
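One standard ensemble diagnostic, sketched below with simulated numbers rather than the project's actual sampler, compares a given plan's seat count against the distribution of seat counts across sampled compliant plans; a plan far out in the tail of that distribution is statistical evidence of gerrymandering.

```python
import numpy as np

def seats_won(vote_shares: np.ndarray) -> int:
    """Districts where party A's two-party vote share exceeds 0.5."""
    return int((vote_shares > 0.5).sum())

# Stand-in ensemble: district vote shares for 10,000 sampled plans of 13 districts.
rng = np.random.default_rng(0)
ensemble = rng.normal(loc=0.5, scale=0.08, size=(10_000, 13))
ensemble_seats = np.array([seats_won(plan) for plan in ensemble])

# Hypothetical enacted plan with packed-and-cracked vote shares.
enacted = np.array([0.62, 0.58, 0.41, 0.44, 0.43, 0.61, 0.45,
                    0.42, 0.59, 0.44, 0.43, 0.60, 0.41])
pct = (ensemble_seats <= seats_won(enacted)).mean()
print(f"enacted plan sits at the {pct:.1%} point of the ensemble distribution")
```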

Read the latest updates about this ongoing project by visiting Dr. Mattingly's Gerrymandering blog.

Varun Nair (Mechanical Engineering), Tamasha Pathirathna (Computer Science), Xiaolan You (Computer Science/Statistics), and Qiwei Han (Chemistry) spent ten weeks creating a ground-truthed dataset of electricity infrastructure that can be used to automatically map the transmission and distribution components of the electric power grid. This is the first publicly available dataset of its kind, and will be analyzed during the academic year as part of a Bass Connections team.

Click here to read the Executive Summary

Kimberly Calero (Public Policy/Biology/Chemistry), Alexandra Diaz (Biology/Linguistics), and Cary Shindell (Environmental Engineering) spent ten weeks analyzing and visualizing data about disparities in Social Determinants of Health. Working with data provided by the MURDOCK Study, the American Community Survey, and the Google Places API, the team built a dataset and visualization tool that will assist the MURDOCK research team in exploring health outcomes in Cabarrus County, NC.

Click here to read the Executive Summary

Alexandra Putka (Biology/Neuroscience), John Madden (Economics), and Lucy St. Charles (Global Health/Spanish) spent ten weeks understanding the coverage and timeliness of maternal and pediatric vaccines in Durham. They used data from DEDUCE, the American Community Survey, and the CDC.

This project will continue into the academic year via Bass Connections.

Click here to read the Executive Summary

Dima Fayyad (Electrical & Computer Engineering), Sean Holt (Math), and David Rein (Computer Science/Math) spent ten weeks exploring tools to operationalize the application of distributed computing methodologies in the analysis of electronic medical records (EMR) at Duke.

As a case study, they applied these systems to a Natural Language Processing project on clinical narratives about growth failure in premature babies.

Click here to read the Executive Summary

Zhong Huang (Sociology) and Nishant Iyengar (Biomedical Engineering) spent ten weeks investigating the clinical profiles of rare metabolic diseases. Working with a large dataset provided by the Duke University Health System, the team used natural language processing techniques and produced an R Shiny visualization that enables clinicians to interactively explore diagnosis clusters.

Click here to read the Executive Summary

Samantha Garland (Computer Science), Grant Kim (Computer Science, Electrical & Computer Engineering), and Preethi Seshadri (Data Science) spent ten weeks exploring factors that influence patient choices when faced with intermediate-stage prostate cancer diagnoses. They used topic modeling in an analysis of a large collection of clinical appointment transcripts.

Click here for the Executive Summary

Nathan Liang (Psychology, Statistics), Sandra Luksic (Philosophy, Political Science), and Alexis Malone (Statistics) began their ten-week project as an open-ended exploration of how women are depicted both physically and figuratively in women's magazines, seeking to consider what role magazines play in the imagined and real lives of women.

Click here to read the Executive Summary

Jennie Wang (Economics/Computer Science) and Blen Biru (Biology/French) spent ten weeks building visualizations of various aspects of the lives of orphaned and separated children at six separate sites in Africa and Asia. The team created R Shiny interactive visualizations of data provided by the Positive Outcomes for Orphans study (POFO).

Click here to read the Executive Summary

Aaron Crouse (Divinity), Mariah Jones (Sociology), Peyton Schafer (Statistics), and Nicholas Simmons (English/Education) spent ten weeks consulting with leadership from the Parent Teacher Association (PTA) at Glenn Elementary School in Durham. The team set up infrastructure for data collection and visualization that will aid the PTA in forming future strategy.

Click here to read the Executive Summary

In tracing the publication history, geographical spread, and content of “pirated” copies of Daniel Defoe’s Robinson Crusoe, Gabriel Guedes (Math, Global Cultural Studies), Lucian Li (Computer Science, History), and Orgil Batzaya (Math, Computer Science) explored the complications of looking at a data set that saw drastic changes over the last three centuries in terms of spelling and grammar, which offered new challenges to data cleanup. By asking questions about the effectiveness of “distant reading” techniques for comparing thousands of different editions of Robinson Crusoe, the students learned how to think about the appropriateness of myriad computational methods like doc2vec and topic modeling. Through these methods, the students started to ask: at what point does one start seeing patterns that were invisible at a human scale of reading (reading one book at a time)? While the project did not definitively answer these questions, it did provide paths for further inquiry.
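For readers unfamiliar with doc2vec, the sketch below shows the general pattern with gensim: embed each edition as a document vector, then look for unusually similar pairs. The two-edition corpus is a placeholder, not the project's data.

```python
from gensim.models.doc2vec import Doc2Vec, TaggedDocument
from gensim.utils import simple_preprocess

# Placeholder corpus: one (possibly modernized) text per edition.
editions = {
    "london_1719": "I was born in the year 1632, in the city of York ...",
    "dublin_1774": "I was born in ye Year 1632, in the City of York ...",
}

docs = [TaggedDocument(simple_preprocess(text), [name]) for name, text in editions.items()]
model = Doc2Vec(docs, vector_size=100, min_count=1, epochs=40)

# Editions whose vectors sit close together may share a pirated lineage.
print(model.dv.most_similar("london_1719"))
```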

The team published their results at: https://orgilbatzaya.github.io/pirating-texts-site/

Click here for the Executive Summary

Melanie Lai Wai (Statistics) and Saumya Sao (Global Health, Gender Studies) spent ten weeks developing a platform which enables users to understand factors that influence contraceptive use and discontinuation. Their work combined data from the Demographic and Health Surveys contraceptive calendar with open data about reproductive health and social indicators from the World Bank, World Health Organization, and World Population Prospects. This project will continue into the academic year via Bass Connections.

Click here to read the Executive Summary

Bob Ziyang Ding (Math/Stats) and Daniel Chaofan Tao (ECE) spent ten weeks understanding how deep learning techniques can shed light on single cell analysis. Working with a large set of single-cell sequencing data, the team built an autoencoder pipeline and a tool that will allow biologists to interactively visualize their own data.

Click here to read the Executive Summary

Ashley Murray (Chemistry/Math), Brian Glucksman (Global Cultural Studies), and Michelle Gao (Statistics/Economics) spent ten weeks analyzing how the meaning and use of the word “poverty” changed in presidential documents from the 1930s to the present. The students found that American presidential rhetoric about poverty has shifted in measurable ways over time. Presidential rhetoric, however, does not necessarily drive policy change. As Michelle Gao explained, “The statistical methods we used provided another more quantitative way of analyzing the text. The database had around 130,000 documents, which is pretty impossible to read one by one and get all the poverty related documents by brute force. As a result, web-scraping and word filtering provided a more efficient and systematic way of extracting all the valuable information while minimizing human errors.” Through techniques such as linear regression, machine learning, and image analysis, the team effectively analyzed large swaths of textual and visual data. This approach allowed them to zero in on significant documents for closer and more in-depth analysis, paying particular attention to documents by presidents such as Franklin Delano Roosevelt and Lyndon B. Johnson, both leaders in what LBJ famously called “The War on Poverty.”
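The flavor of that filtering workflow can be seen in the sketch below, which keeps documents mentioning poverty-related terms and fits a linear trend to their share per year; the file and column names are invented for illustration.

```python
import re

import numpy as np
import pandas as pd

# Hypothetical scrape of presidential documents: one row per document.
docs = pd.read_csv("presidential_documents.csv")  # columns: year, text

# Word filtering: keep documents that mention poverty-related terms.
pattern = re.compile(r"\b(poverty|the poor|needy|underprivileged)\b", re.IGNORECASE)
docs["mentions_poverty"] = docs["text"].str.contains(pattern)

# Share of poverty-related documents per year, with a fitted linear trend.
yearly = docs.groupby("year")["mentions_poverty"].mean()
slope, intercept = np.polyfit(yearly.index, yearly.values, deg=1)
print(f"trend in poverty mentions: {slope:+.5f} share per year")
```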

Click here for the Executive Summary

Natalie Bui (Math/Economics), David Cheng (Electrical & Computer Engineering), and Cathy Lee (Statistics) spent ten weeks helping the Prospect Management and Analytics office of Duke Development understand how a variety of analytic techniques might enhance their workflow. The team used topic modeling and named entity recognition to develop a pipeline that clusters potential prospects into useful categories.

Click here to read the Executive Summary

Tatanya Bidopia (Psychology, Global Health), Matthew Rose (Computer Science), and Joyce Yoo (Public Policy/Psychology) spent ten weeks conducting a data-driven investigation of the relationship between mental health training of law enforcement officers and key outcomes such as incarceration, recidivism, and referrals for treatment. They worked closely with the Crisis Intervention Team and used jail data provided by the Sheriff’s Office of Durham County.

Click here to read the Executive Summary

Marine mammals exhibit extreme physiological and behavioral adaptations that allow them to dive hundreds to thousands of meters underwater despite their need to breathe air at the surface. Through the development of new remote monitoring technologies, we are just beginning to understand the mechanisms by which they are able to execute these extreme behaviors. Long-term animal-borne tags can now record location, dive depth, and dive duration and then transmit these data to satellite receivers, enabling remote access to behavior occurring both many kilometers out to sea and several kilometers below the ocean surface.

The aim of this Data Expedition was for students to learn hands-on data visualization techniques using a variety of data types. Students first discussed why data visualization is useful and reviewed tips for making graphs both visually appealing and easy to understand.

Understanding how to manipulate, analyze, and display large datasets is an essential skill in the life sciences. Introducing students to the concepts of coding languages and showing them the diversity of tasks that can be accomplished using a flexible coding scheme like R is an important step in the training of any life sciences professional. For students taking lab-based courses, who are often required to analyze the datasets they produce in class, learning these techniques can be helpful both in the short term (i.e., during the semester) and for their future careers.

Sophie Guo, Math/PoliSci major, Bridget Dou, ECE/CompSci major, Sachet Bangia, Econ/CompSci major, and Christy Vaughn spent ten weeks studying different procedures for drawing congressional boundaries, and quantifying the effects of these procedures on the fairness of actual election results.

Anna Vivian (Physics, Art History) and Vinai Oddiraju (Stats) spent ten weeks working closely with the director of the Durham Neighborhood Compass. Their goal was to produce metrics for things like ambient stress and neighborhood change, to visualize these metrics within the Compass system, and to interface with a variety of community stakeholders in their work.

Maddie Katz (Global Health and Evolutionary Anthropology Major), Parker Foe (Math/Spanish, Smith College), and Tony Li (Math, Cornell) spent ten weeks analyzing data from the National Transgender Discrimination Survey. Their goal was to understand how the discrimination faced by the trans community is realized on a state, regional, and national level, and to partner with advocacy organizations around their analysis.

Sharrin Manor, Arjun Devarajan, Wuming Zhang, and Jeffrey Perkins explored a large collection of imagery data provided by the U.S. Geological Survey, with the goal of identifying solar panels using image recognition. They worked closely with the Energy Data Analytics Lab, part of the Energy Initiative at Duke.

ECE majors Mitchell Parekh and Yehan (Morton) Mo, along with IIT student Nikhil Tank, spent ten weeks understanding parking behavior at Duke. They worked closely with the Parking and Transportation Office, as well as with Vice President for Administration Kyle Cavanaugh.

Matt and Ken led two labs for the engineering section of STA 111/130, an introductory course in statistics and probability. The lab assignments were written by Matt and Ken in order to bridge the gap between introductory linear regression, which is often explained in terms of a static, complete dataset, and time series analysis, which is not a common topic in introductory courses. 

Yanmin (Mike) Ma, mathematics/economics major, and Manchen (Mercy) Fang, electrical and computer engineering/computer science major, spent ten weeks studying historical archives and building a model to predict the price of pigs, relative to a number of interesting factors.

David Clancy, a Stats/Math/EnvSci major, and Tianyi Mu, an ECE/CompSci major, spent ten weeks studying the effects of weather, surroundings, and climate on the operational behavior of water reservoirs across the United States. They used a large dataset compiled by the U.S. Army Corps of Engineers, and they worked closely with Lauren Patterson from the Water Policy Program at Duke's Nicholas Institute for Environmental Policy Solutions. Project mentorship was provided by Alireza Vahid, a postdoctoral researcher in Electrical Engineering.

Luke Raskopf, PoliSci major, and Xinyi (Lucy) Lu, Stats/CompSci major, spent ten weeks investigating the effectiveness of policies to combat unemployment and wage stagnation faced by working and middle-class families in the State of North Carolina. They worked closely with Allan Freyer at the North Carolina Justice Center.

This paper addresses analysis of heterogeneous data, such as ordered, categorical, real and count data. Such data are of interest in our motivating application, cognitive and brain science, in which subjects may answer questionnaires, and also (separately) undergo fMRI interrogation. A contribution of this paper concerns the joint analysis of how people answer questionnaires and how their brain responds to external stimuli (here visual), the latter measured via fMRI.

Computer Science major Yumin Zhang and IIT student Akhil Kumar Pabbathi spent ten weeks working closely with Dr. Joe McClernon from Psychiatry and Behavioral Sciences to understand smoking and tobacco purchase behavior through activity space analysis.

Biomedical Engineering major Chi Kim Trinh and Biostatistics MS student Can Cui spent ten weeks constructing a computational and statistical framework to evaluate the effects of health coaching on Type II Diabetes patients’ quality metrics, including Hemoglobin A1c, blood pressure, eye exam consistency, tobacco use, and prescription adherence to statins, aspirin, and angiotensin-converting enzyme (ACE)/angiotensin receptor blocker (ARB) drugs.

Biomedical Engineering and Electrical and Computer Engineering major David Brenes, and Electrical and Computer Engineering/Computer Science majors Xingyu Chen and David Yang spent ten weeks working with mobile eye tracker data to optimize data processing and feature extraction. They generated their own video data with SMI Eye Tracking Glasses, and created computer vision algorithms to categorize subject gazing behavior in a grocery purchase decision-making environment.

BME major Neel Prabhu, along with CompSci and ECE majors Virginia Cheng and Cheng Lu, spent ten weeks studying how cells from embryos of the common fruit fly move and change in shape during development. They worked with Cell-Sheet-Tracker (CST), an algorithm developed by former Data+ student Roger Zou and faculty lead Carlo Tomasi. This algorithm uses computer vision to model and track a dynamic network of cells using a deformable graph.

Xinyu (Cindy) Li (Biology and Chemistry) and Emilie Song (Biology) spent ten weeks exploring the Black Queen Hypothesis, which predicts that co-operation in animal societies could be a result of genetic/functional trait losses, as well as polymorphism of workers in eusocial animals such as ants and termites. The goal was to investigate this idea in four different eusocial insect species.

Matthew Newman (Sociology), Sonia Xu (Statistics), and Alexandra Zrenner (Economics) spent ten weeks exploring giving patterns and demographic characteristics of anonymized Duke donors. They worked closely with the Duke Alumni Affairs and Development Office, with the goal of understanding the data and constructing tools to generate data-driven insight about donor behavior.

Weiyao Wang (Math) and Jennifer Du, along with NCCU Physics majors Jarrett Weathersby and Samuel Watson, spent ten weeks learning about how search engines often provide results that are not representative in terms of race and/or gender. Working closely with entrepreneur Winston Henderson, their goal was to understand how to frame this problem via statistical and machine-learning methodology, as well as to explore potential solutions.

Yuangling (Annie) Wang, a Math/Stats major, and Jason Law, a Math/Econ major, spent ten weeks analyzing message-testing data about the 2015 Marijuana Legalization Initiative in Ohio; the data were provided by Public Opinion Strategies, one of the nation's leading public opinion research firms.

The goal was to understand how statistics and machine learning might help develop microtargeting strategies for use in future campaigns.

Artem Streltsov (Masters Economics) and IIT Mechanical Engineering major Vinod Ramakrishnan spent ten weeks exploring North Carolina state budget documents. Working closely with the Budget and Tax Center, part of the North Carolina Justice Center, their goal was to help build a keystone tool that can be used for analysis of the state budget as well as future budget proposals.

Runliang Li (Math), Qiyuan Pan (Computer Science), and Lei Qian (Masters in Statistics and Economic Modelling) spent ten weeks investigating discrepancies between posted wait times and actual wait times for rides at Disney World. They worked with data provided by TouringPlans.

Robbie Ha (Computer Science, Statistics), Peilin Lai (Computer Science, Mathematics), and Alejandro Ortega (Mathematics) spent ten weeks analyzing the content and dissemination of images of the Syrian refugee crisis, as part of a general data-driven investigation of Western photojournalism and how it has contributed to our understanding of this crisis.

Ana Galvez (Cultural and Evolutionary Anthropology), Xinyu Li (Biology), and Jonathan Rub (Math, Computer Science) spent ten weeks studying the impact of diet on organ and bone growth in developing laboratory rats. The goal was to provide insight into the growth dynamics of these model organisms that could eventually be generalized to inform research on human development.

Devri Adams (Environmental Science), Annie Lott (Statistics), and Camila Vargas Restrepo (Visual Media Studies, Psychology) spent ten weeks creating interactive and exploratory visualizations of ecological data. They worked with over sixty years of data collected at the Hubbard Brook Experimental Forest (HBEF) in New Hampshire.

Building off the work of a 2016 Data+ team, Yu Chen (Economics), Peter Hase (Statistics), and Ziwei Zhao (Mathematics) spent ten weeks working closely with analytical leadership at Duke's Office of University Development. The project goal was to identify distinguishing characteristics of major alumni donors and to model their lifetime giving behavior.

Over ten weeks, Computer Science majors Daniel Bass-Blue and Susie Choi joined forces with Biomedical Engineering major Ellie Wood to prototype interactive interfaces from Type II diabetics' mobile health data. Their specific goals were to encourage patient self-management and to effectively inform clinicians about patient behavior between visits.

Over ten weeks, Computer Science majors Amber Strange and Jackson Dellinger joined forces with Psychology major Rachel Buchanan to perform a data-driven analysis of mental health intervention practices by the Durham Police Department. They worked closely with leadership from the Durham Crisis Intervention Team (CIT) Collaborative, made up of officers who have completed 40 hours of specialized training in mental illness and crisis intervention techniques.

A team of students led by Duke mathematician Marc Ryser and University of Southern California Pathology professor Darryl Shibata will characterize phenotypic evolution during the growth of human colorectal tumors. 

Graduate Students: Kendra Kaiser and John Mallard

Faculty: Michael O’Driscoll

Course: Landscape Hydrology, EOS 323/723

A team of students led by Dr. Shanna Sprinkle of Duke Surgery will combine success metrics of Duke Surgery residents from a set of databases and create a user interface for residency program directors and possibly residents themselves to view and better understand residency program performance.

Lauren Fox (Cultural Anthropology) and Elizabeth Ratliff (Statistics, Global Health) spent ten weeks analyzing and mapping pedestrian, bicycle, and motor vehicle data provided by Durham's Department of Transportation. This project was a continuation of a seminar on "ghost bikes" taught by Prof. Harris Solomon.

Boning Li (Masters Electrical and Computer Engineering), Ben Brigman (Electrical and Computer Engineering), Gouttham Chandrasekar (Electrical and Computer Engineering), Shamikh Hossain (Computer Science, Economics), and Trishul Nagenalli (Electrical and Computer Engineering, Computer Science) spent ten weeks creating datasets of electricity access indicators that can be used to train a classifier to detect electrified villages. This coming academic year, a Bass Connections Team will use these datasets to automatically find power plants and map electricity infrastructure.

Felicia Chen (Computer Science, Statistics), Nikkhil Pulimood (Computer Science, Mathematics), and James Wang (Statistics, Public Policy) spent ten weeks working with Counter Tools, a local nonprofit that provides support to over a dozen state health departments. The project goal was to understand how open source data can lead to the creation of a national database of tobacco retailers.

Selen Berkman (ECE, CompSci), Sammy Garland (Math), and Aaron VanSteinberg (CompSci, English) spent ten weeks undertaking a data-driven analysis of the representation of women in film and in the film industry, with special attention to a metric called the Bechdel Test. They worked with data from a number of sources, including fivethirtyeight.com and the-numbers.com.

Over ten weeks, BME and ECE majors Serge Assaad and Mark Chen joined forces with Mechanical Engineering Masters student Guangshen Ma to automate the diagnosis of vascular anomalies from Doppler Ultrasound data, with goals of improving diagnostic accuracy and reducing physician time spent on simple diagnoses. They worked closely with Duke Surgeon Dr. Leila Mureebe and Civil and Environmental Engineering Professor Wilkins Aquino.

Over ten weeks, Math/CompSci majors Benjamin Chesnut and Frederick Xu joined forces with International Comparative Studies major Katharyn Loweth to understand the myriad academic pathways traveled by undergraduate students at Duke. They focused on data from Mathematics and the Duke Global Health Institute, and worked closely with departmental leadership from both areas.

Liuyi Zhu (Computer Science, Math), Gilad Amitai (Masters, Statistics), Raphael Kim (Computer Science, Mechanical Engineering), and Andreas Badea (East Chapel Hill High School) spent ten weeks streamlining and automating the process of electronically rejuvenating medieval artwork. They used a 14th-century altarpiece by Francescuccio Ghissi as a working example.

Angelo Bonomi (Chemistry), Remy Kassem (ECE, Math), and Han (Alessandra) Zhang (Biology, CompSci) spent ten weeks analyzing data from social networks for communities of people facing chronic conditions. The social network data, provided by MyHealth Teams, contained information shared by community members about their diagnoses, symptoms, co-morbidities, treatments, and details about each treatment.

John Benhart (CompSci, Math) and Esko Brummel (Masters in Bioethics and Science Policy) spent ten weeks analyzing current and potential scholarly collaborations within the community of Duke faculty. They worked closely with the leadership of the Scholars@Duke database.

Zijing Huang (Statistics, Finance), Artem Streltsov (Masters Economics), and Frank Yin (ECE, CompSci, Math) spent ten weeks exploring how Internet of Things (IoT) data could be used to understand potential online financial behavior. They worked closely with analytical and strategic personnel from TD Bank, who provided them with a massive dataset compiled by Epsilon, a global company that specializes in data-driven marketing.

Over ten weeks, Mathematics/Economics majors Khuong (Lucas) Do and Jason Law joined forces with Analytical Political Economy Masters student Feixiao Chen to analyze the spatio-temporal distribution of birth addresses in North Carolina. The goal of the project was to understand how, and whether, the distributions of different demographic categories (white/black, married/unmarried, etc.) differed, and how these differences connected to a variety of socioeconomic indicators.

Furthering the work of a 2016 Data+ team on predictive modeling of pancreatic cancer, Siwei Zhang (Masters Biostatistics) and Jake Ukleja (Computer Science) spent ten weeks building a model to predict pancreatic cancer from electronic medical record (EMR) data. They worked with nine years' worth of EMR data, including ICD9 diagnostic codes, covering over 200,000 patients.
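
To make the modeling setup concrete, here is a minimal sketch of one standard approach: treat each patient's ICD9 history as a "bag of codes" and fit a regularized classifier. The codes, labels, and feature choices below are invented for illustration and are not the team's actual data or model.

    import numpy as np
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.linear_model import LogisticRegression

    # Toy records: each patient is a space-separated string of ICD9 codes.
    patients = ["250.00 577.0 789.00", "401.9 272.4", "577.1 789.00 157.9", "401.9"]
    has_cancer = np.array([0, 0, 1, 0])  # made-up outcome labels

    # Bag-of-codes features: one column per distinct ICD9 code.
    vectorizer = CountVectorizer(token_pattern=r"\S+")
    X = vectorizer.fit_transform(patients)

    model = LogisticRegression().fit(X, has_cancer)
    print(dict(zip(vectorizer.get_feature_names_out(), model.coef_[0].round(2))))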

William Willis (Mechanical Engineering, Physics) and Qitong Gao (Masters Mechanical Engineering) spent ten weeks developing methods to map the ocean floor autonomously with high resolution and high efficiency. Their efforts were part of a team taking part in the Shell Ocean Discovery XPRIZE, and they made extensive use of simulation software built from Bellhop, an open-source program distributed by HLS Research.

Over ten weeks, Public Policy major Amy Jiang and Mathematics and Computer Science major Kelly Zhang joined forces with Economics Masters student Amirhossein Khoshro to investigate academic hiring patterns across American universities and to analyze the educational backgrounds of faculty. They worked closely with Academic Analytics, a provider of data and solutions for universities in the U.S. and the U.K.

Linda Adams (CompSci), Amanda Jankowski (Sociology, Global Health), and Jessica Needleman (Statistics/Economics) spent ten weeks prototyping small-area mapping of public-health information within the Durham Neighborhood Compass, with a focus on mortality data. They worked closely with the director of DataWorks NC, an independent data intermediary dedicated to democratizing the use of quantitative information.

Gary Koplik (Masters in Economics and Computation) and Matt Tribby (CompSci, Statistics) spent ten weeks investigating the burden of rare diseases on the Duke University Health System (DUHS). They worked with a massive set of ICD diagnosis codes and visit data provided by DUHS.

Over ten weeks, Biology major Jacob Sumner and Neuroscience major Julianna Zhang joined forces with Biostatistics Masters student Jing Lyu to analyze potential drug diversion in the Duke Medical Center. Early detection of drug diversion helps affected providers recover from their condition, and mitigates the effects on any patients under their care.

Graduate Student: Jacob Coleman, 3rd year Ph.D. student in Statistical Science

Faculty Instructor: Colin Rundel

Class: STA 112, Data Science

Joy Patel (Math and CompSci) and Hans Riess (Math) spent ten weeks analyzing massive amounts of simulated weather data supplied by Spectral Sciences Inc. Their goal was to investigate ways in which advanced mathematical techniques could assist in quantifying storm intensity, helping to augment today's more qualitative methods.

Albert Antar (Biology) and Zidi Xiu (Biostatistics) spent ten weeks leveraging Duke electronic medical record (EMR) data to build predictive models of pancreatic ductal adenocarcinoma (PDAC). PDAC is the fourth leading cause of cancer deaths in the US and is most often diagnosed at stage IV, when the survival rate is only 1% and life expectancy is measured in months. Diagnosis of PDAC is very challenging because of the pancreas's deep anatomical placement and the significant risk posed by traditional biopsy. The goal of this project was to use EMR data to identify potential avenues for diagnosing PDAC in the early, treatable stages of the disease.

Priya Sarkar (Computer Science), Lily Zerihun (Biology and Global Health), and Anqi Zhang (Biostatistics) spent ten weeks utilizing Duke Electronic Medical Record (EMR) data to identify subgroups of diabetic patients, and predict future complications associated with Type II Diabetes.

Vivek Sriram (Computer Science and Math), Lina Yang (Biostatistics), and Pablo Ortiz (BME) spent ten weeks working in close collaboration with the Department of Biostatistics and Bioinformatics to implement an image analysis pipeline for immunofluorescence microscopy images of developing mouse lungs.

Computer Science and Psychology major Molly Chen and Neuroscience major Emily Wu spent ten weeks working with patient diagnosis co-occurrence data derived from Duke Electronic Medical Records to develop network visualizations of co-occurring disorders within demographic groups. Their goal was to make healthcare more holistic and reduce healthcare disparities by improving patient and provider awareness of co-occurring disorders for patients within similar demographic groups.
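
A minimal sketch of how such a network could be assembled (Python with networkx; the diagnosis lists are invented, and the team's actual derivation from EMR data and their demographic stratification were richer than this):

    import networkx as nx
    from itertools import combinations

    # Made-up diagnosis sets, one per patient, for illustration only.
    patient_diagnoses = [
        {"anxiety", "depression"},
        {"depression", "insomnia"},
        {"anxiety", "depression", "insomnia"},
    ]

    G = nx.Graph()
    for dx in patient_diagnoses:
        for a, b in combinations(sorted(dx), 2):
            # Edge weight counts how many patients carry both diagnoses.
            w = G.get_edge_data(a, b, {"weight": 0})["weight"]
            G.add_edge(a, b, weight=w + 1)

    for a, b, d in G.edges(data=True):
        print(a, "--", b, "co-occurs in", d["weight"], "patients")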

Emily Horn (Public Policy, Global Health), Aasha Reddy (Economics), and Shanchao Wang (Masters Economics) spent ten weeks working with data from the National Asset Scorecard for Communities of Color (NASCC), an ongoing survey project that gathers information about household assets and debts at a detailed racial and national-origin level. They worked closely with faculty and researchers from the Samuel DuBois Cook Center on Social Equity.

The team built a ground truth dataset comprising satellite images, building footprints, and building heights (LIDAR) of 40,000+ buildings, along with road annotations. This dataset can be used to train computer vision algorithms to determine a building's volume from an image, and is a significant contribution to the broader research community, with applications in urban planning, civil emergency mitigation, and human population estimation.
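
As a toy illustration of how footprints and LIDAR heights combine, the sketch below estimates a building's volume as footprint area times a robust height summary; the function and numbers are hypothetical and stand in for whatever ground-truthing procedure the team actually used.

    import numpy as np

    def building_volume(footprint_area_m2, lidar_heights_m):
        """Estimate volume as footprint area times a robust height summary."""
        height = np.median(lidar_heights_m)  # median resists sensor outliers
        return footprint_area_m2 * height

    # Hypothetical building: 250 m^2 footprint, LIDAR height returns in meters.
    print(building_volume(250.0, np.array([9.8, 10.1, 10.0, 23.0, 9.9])))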

Lindsay Hirschhorn (Mechanical Engineering) and Kelsey Sumner (Global Health and Evolutionary Anthropology) spent ten weeks determining optimal vaccination clinic locations in Durham County for a simulated Zika virus outbreak. They worked closely with researchers at RTI International to construct models of disease spread and health impact, and developed an interactive visualization tool.
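
One simple way to pose clinic placement is as a k-median facility-location problem; the greedy heuristic sketched below, run on simulated coordinates, illustrates the idea. It is only a sketch: the team's actual work coupled placement with models of disease spread and health impact.

    import numpy as np

    def greedy_clinic_placement(homes, candidates, k):
        """Greedily pick k sites minimizing total home-to-clinic distance."""
        dists = np.linalg.norm(homes[:, None, :] - candidates[None, :, :], axis=2)
        chosen = []
        best = np.full(len(homes), np.inf)  # distance to nearest chosen site
        for _ in range(k):
            scores = np.minimum(dists, best[:, None]).sum(axis=0)
            scores[chosen] = np.inf         # never re-pick a chosen site
            j = int(np.argmin(scores))
            chosen.append(j)
            best = np.minimum(best, dists[:, j])
        return chosen

    rng = np.random.default_rng(0)
    homes = rng.uniform(0, 10, size=(200, 2))      # simulated households
    candidates = rng.uniform(0, 10, size=(15, 2))  # candidate clinic sites
    print(greedy_clinic_placement(homes, candidates, k=3))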

Joel Tewksbury (BME) and Miriam Goldman (Math and Statistics, Arizona State University) spent ten weeks analyzing time-series dark adaptation scores from over 1,200 study participants to identify trends in night vision and, ultimately, genetic markers that might confer a visual advantage.

Anne Driscoll (Economics, Statistical Science), and Austin Ferguson (Math, Physics) spent ten weeks examining metrics for inter-departmental cooperativity and productivity, and developing a collaboration network of Duke faculty. This project was sponsored by the Duke Clinical and Translational Science Award, with the larger goal of promoting collaborative success in the School of Medicine and School of Nursing.

Statistical Science majors Nathaniel Brown and Corey Vernot, and Economics student Guan-Wun Hao spent ten weeks exploring changes in food purchase behavior and nutritional intake following the event of a new Metformin prescription for Type II Diabetes. They worked closely with Matthew Harding and researchers in the BECR Center, as well as Dr. Susan Spratt, an endocrinologist in Duke Medicine.

Computer Science majors Erin Taylor and Ian Frankenburg, along with Math major Eric Peshkin, spent ten weeks understanding how geometry and topology, in tandem with statistics and machine learning, can aid in quantifying anomalous behavior in cyber-networks. The team was sponsored by Geometric Data Analytics, Inc., and used real anonymized NetFlow data provided by Duke's Information Technology Security Office.

Students in the Performance and Technology Class create a series of performances that explore the interface between society and our machines. With the theme of the cloud to guide them, they have created increasingly complex art using digital media, microcontrollers, and motion tracking. Their work will be on display at the Duke Choreolab 2016.

Graduate student: Hamza Ghadyali          

Faculty instructor: Dr. Paul Bendich

With the significant international consequences of recent outbreaks, the ITP Lab conducted extensive stakeholder interviews and macro-level health policy analysis to expose gaps in pandemic preparedness and develop legal frameworks for future threats. 

This project summarizes existing sample agreements from different institutions, analyzes the key contractual issues in the formation of alliances, and develops master charts of legal provisions that compare different approaches, providing a reference for the formation of new alliances in the era of epidemic disease outbreaks.

A virtual reality system to recreate the archaeological experience using data and 3D models from the neolithic site of Çatalhöyük, in Anatolia, Turkey. 

How well and in what ways do governments communicate with their citizens? How do governments analyze data and create visualizations to promote public access to government information? 

Paclitaxel (Taxol) is a small molecule drug belonging to the taxane family. It is one of the most commonly used chemotherapeutics, administered as a monotherapy or in combination with other drugs to treat breast, lung, and ovarian cancer as well as Kaposi's sarcoma. Taxol is on the World Health Organization's (WHO) List of Essential Medicines, a list of the medications considered most important for basic health care. Worldwide demand for paclitaxel currently exceeds supply.

This project transforms an inaccessible audio archive of historic North Carolina folk music collected by Frank Clyde Brown in the 1920s-40s into a vital, publicly accessible digital archive and museum exhibition.

Imagine a world where we understand how to detect mental health and developmental problems in early childhood so that we can intervene early in life and prevent future suffering and impairment. This is a challenge that can only be addressed by an interdisciplinary team of computational people with child psychiatrists and neuroscientists who can integrate and mine knowledge from cross-cultural and global data.

Molly Rosenstein, an Earth and Ocean Sciences major, and Tess Harper, an Environmental Science and Spanish major, spent ten weeks developing interactive data applications for use in Environmental Science 101, taught by Rebecca Vidra.

Two to three undergraduates joined a research group led by Douglas Boyer and Ingrid Daubechies, with the goal of testing and developing mathematical and statistical methodology for measuring similarities between bones and teeth.

Nonnegative matrix factorization (NMF) has an established reputation as a useful data analysis technique in numerous applications. In practice, however, its use has been increasingly challenged by the growing size of the datasets arising in the information sciences. To address this, we propose structured random compression, that is, random projections that exploit the data structure, for two NMF variants: classical and separable. In separable NMF (SNMF), the left factors are a subset of the columns of the input matrix. We present suitable formulations for each problem and treat representative algorithms within each.
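
To make the compression idea concrete, here is a minimal sketch of compressed separable NMF: a plain Gaussian projection (a simple stand-in for the structured projections proposed in the work) shrinks the row dimension, and the successive projection algorithm (SPA) then selects the extreme columns on the small matrix.

    import numpy as np

    def spa(M, r):
        """Successive Projection Algorithm: greedily pick r extreme columns."""
        R = M.astype(float).copy()
        cols = []
        for _ in range(r):
            j = int(np.argmax((R * R).sum(axis=0)))  # largest residual norm
            cols.append(j)
            u = R[:, j] / np.linalg.norm(R[:, j])
            R -= np.outer(u, u @ R)                  # project direction out
        return cols

    rng = np.random.default_rng(0)
    # Synthetic separable data: the first 5 columns of X generate the rest.
    W = np.abs(rng.normal(size=(500, 5)))
    Hp = np.abs(rng.normal(size=(5, 95)))
    Hp /= Hp.sum(axis=0) * 1.1            # keep mixtures inside the hull
    X = W @ np.hstack([np.eye(5), Hp])

    # Compress 500 rows down to 20; column selection transfers back to X.
    P = rng.normal(size=(20, 500)) / np.sqrt(20)
    print(sorted(spa(P @ X, 5)))          # ideally recovers [0, 1, 2, 3, 4]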

In this work, we turn musical audio time series data into shapes for various tasks in music matching and musical structure understanding. 

The goal of this project is to take a large amount of data from the Massive Open Online Courses offered by Duke professors and produce from it a coherent and compelling data analysis challenge that might then be used for a Duke or nationwide data analysis competition.

Kelsey Sumner, an EvAnth and Global Health major, and Christopher Hong, a CompSci/ECE major, spent ten weeks analyzing high-dimensional microRNA data taken from patients with viral and/or bacterial conditions. They worked closely with the medical faculty and practitioners who generated the data.

Kang Ni, Math/Econ major, Kehan Zhang, Econ/Stats major, and Alex Hong spent ten weeks investigating a large collection of grocery store transaction data. They worked closely with Matt Harding and the Behavioral Economics and Healthy Food Choice Research (BECR) Center.

Ethan Levine, Annie Tang, and Brandon Ho spent ten weeks investigating whether personality traits can be used to predict how people make risky decisions. They used a large dataset collected by the lab of Prof. Scott Huettel, and were mentored by graduate students Emma Wu Dowd and Jonathan Winkle.

Spenser Easterbrook, a Philosophy and Math double major, joined Biology majors Aharon Walker and Nicholas Branson in a ten-week exploration of the connections between journal publications from the humanities and the sciences. They were guided by Rick Gawne and Jameson Clarke, graduate students from Philosophy and Biology.

Large-scale databases from the social, behavioral, and economic sciences offer enormous potential benefits to society. However, as most stewards of social science data are acutely aware, wide-scale dissemination of such data can result in unintended disclosures of data subjects' identities and sensitive attributes, thereby violating promises, and in some instances laws, that protect data subjects' privacy and confidentiality.

The Triangle Census Research Network (TCRN) is an interdisciplinary team of researchers from Duke University and the National Institute of Statistical Sciences dedicated to improving the way that federal statistical agencies collect, analyze, and disseminate data to the public.

We present a framework for high-dimensional regression using the GMRA data structure. In analogy to a classical wavelet decomposition of function spaces, a GMRA is a tree-based decomposition of a data set into local linear projections.
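
The sketch below is a crude stand-in for this idea, not the authors' algorithm: it recursively partitions the data along the leading principal direction and fits a linear model in each cell, then predicts by routing a query point down the tree.

    import numpy as np

    def local_linear_tree(X, y, depth=0, max_depth=3, min_leaf=20):
        """Tree of local linear fits over a recursive PCA partition."""
        if depth == max_depth or len(y) < 2 * min_leaf:
            A = np.hstack([X, np.ones((len(y), 1))])  # affine design matrix
            coef, *_ = np.linalg.lstsq(A, y, rcond=None)
            return ("leaf", coef)
        center = X.mean(axis=0)
        _, _, Vt = np.linalg.svd(X - center, full_matrices=False)
        side = (X - center) @ Vt[0] > 0               # split on top direction
        return ("node", center, Vt[0],
                local_linear_tree(X[~side], y[~side], depth + 1, max_depth, min_leaf),
                local_linear_tree(X[side], y[side], depth + 1, max_depth, min_leaf))

    def predict(tree, x):
        while tree[0] == "node":
            _, center, v, left, right = tree
            tree = right if (x - center) @ v > 0 else left
        return np.append(x, 1.0) @ tree[1]

    rng = np.random.default_rng(0)
    X = rng.uniform(-2, 2, size=(1000, 2))
    y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=1000)
    print(predict(local_linear_tree(X, y), np.array([1.0, 0.5])))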

Dr. Guillermo Sapiro, professor in the Pratt School of Engineering at Duke University, conducts ongoing autism research. Using image processing, he aims to program a computer to detect whether babies (around eight to 14 months of age) display signs of autism. Such early detection enables doctors to train these babies (while their brain plasticity is high) to behave in ways that counter the behavioral limitations autism imposes, allowing them to develop more typically as they grow up.

In this Data Expedition, Duke undergraduates were introduced to a real world traffic citation data set. Provided by Dr. Frank R. Baumgartner, a political scientist at UNC, the data consist of 15 years of traffic stops, with over 18 million observations of 53 variables.

In this project, we aim to solve the compressive sensing (CS) hyperspectral/video image reconstruction problem. The proposed algorithm is robust to different initializations, which is useful for CS reconstruction problems where suitable training datasets are not available.
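
For readers unfamiliar with the CS setting, the toy sketch below recovers a sparse signal from underdetermined linear measurements using iterative soft thresholding (ISTA). It illustrates the inverse problem only; it is not the algorithm proposed in this project.

    import numpy as np

    rng = np.random.default_rng(0)
    n, m, k = 200, 60, 5                      # signal length, measurements, sparsity
    A = rng.normal(size=(m, n)) / np.sqrt(m)  # random sensing matrix
    x_true = np.zeros(n)
    x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)
    y = A @ x_true                            # underdetermined measurements

    lam = 0.01
    step = 1.0 / np.linalg.norm(A, 2) ** 2    # step size from spectral norm
    x = np.zeros(n)
    for _ in range(500):
        z = x - step * (A.T @ (A @ x - y))    # gradient step on the data fit
        x = np.sign(z) * np.maximum(np.abs(z) - lam * step, 0.0)  # soft threshold
    print(np.linalg.norm(x - x_true) / np.linalg.norm(x_true))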

This data expedition introduced students to “sliding windows and persistence” on time series data, which is an algorithm to turn one dimensional time series into a geometric curve in high dimensions, and to quantitatively analyze hybrid geometric/topological properties of the resulting curve such as “loopiness” and “wiggliness.”
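
A minimal sketch of the embedding step (the persistence computation itself, e.g. via a package such as ripser, is omitted): periodicity in the series becomes a loop in the embedded point cloud, which topological summaries can then score.

    import numpy as np

    def sliding_window(x, dim, tau):
        """Embed a 1-D series as points [x(t), x(t+tau), ..., x(t+(dim-1)tau)]."""
        n = len(x) - (dim - 1) * tau
        return np.stack([x[i : i + n] for i in range(0, dim * tau, tau)], axis=1)

    t = np.linspace(0, 8 * np.pi, 400)
    cloud = sliding_window(np.sin(t), dim=10, tau=4)
    print(cloud.shape)   # (364, 10): one 10-dimensional point per window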

Students learned to visualize high-dimensional gene expression data; understand genetic differences in the context of gene networks; connect genetic differences to physiological outcomes; and perform simple analyses using the R programming language.

Graduate students: Aaron Berdanier and Matt Kwit, University Program in Ecology & Nicholas School of the Environment

Using social network analysis to predict survival in large-brained mammals.

Questions asked: Do males and females scent mark equally? Do lemurs scent mark equally in breeding and non-breeding seasons?

Introduce NBA and MLB datasets to undergraduates to help them gain expertise in exploratory data analysis, data visualization, statistical inference, and predictive modeling.

STEM education often presents a very sanitized version of the scientific enterprise. To some extent, this is necessary, but overemphasizing neat-and-tidy results and scripted protocol assignments poses the risk of failing to adequately prepare students for the real-world mess of transforming experimental data into meaningful results. The fundamental aim of this project was to guide students in processing large real-world datasets far beyond their academic comfort zone so as to give them a more realistic understanding of how science works.

What drove the prices for paintings in 18th Century Paris?

Successful high-resolution signal reconstruction -- in problems ranging from astronomy to biology to medical imaging -- depends crucially on our ability to make the most of indirect, incomplete, and noisy measurements.

A new model is developed for joint analysis of ordered, categorical, real, and count data. In the motivating application, the ordered and categorical data are answers to questionnaires, the (word) count data come from the free-text questions on the questionnaires, and the real data correspond to fMRI responses for each subject. We also combine the analysis of these data with single-nucleotide polymorphism (SNP) data from each individual.

The subthalamic nucleus (STN), within the sub-cortical region of the basal ganglia, is a crucial targeting structure for deep brain stimulation (DBS) surgery, in particular for alleviating Parkinson's disease (PD) symptoms. Volumetric segmentation of such a small and complex structure, which is elusive in clinical MRI protocols, is therefore a prerequisite for reliable DBS targeting. While direct visualization and localization of the STN is facilitated by advanced high-field 7T MR imaging, such high fields are not always clinically available.

Volumetric segmentation of sub-cortical structures such as the basal ganglia and thalamus is necessary for non-invasive diagnosis and neurosurgery planning. This is a challenging problem due in part to limited boundary information between structures, similar intensity profiles across the different structures, and low contrast data.

Intelligent mobile sensor agents can adapt to heterogeneous environmental conditions to achieve optimal performance in tasks such as demining and maneuvering-target tracking.