Predicting the Distributions of Particle Cluster Sizes in Turbulent Flow

Project Summary

Fluid mechanics is the study of how fluids (e.g., air, water) move and the forces acting on them. Scientists and engineers have developed mathematical equations to model the motion of fluids and of inertial particles suspended in them. However, these equations are often computationally expensive, meaning they take a long time for a computer to solve.

To reduce the computation time, we can use machine learning techniques to develop statistical models of fluid behavior. Statistical models do not actually represent the physics of fluids; rather, they learn trends and relationships from the results of previous simulations. Statistical models allow us to leverage the findings of long, expensive simulations to obtain results in a fraction of the time.

In this project, we provide students with the results of direct numerical simulations (DNS), which took many weeks of computation to complete. We ask students to use machine learning techniques to develop statistical models of the DNS results.


Graduate Students: Reza Momenifar and Jonathan Holt, Department of Civil & Environmental Engineering

Faculty: Simon Mak, Department of Statistical Science

Course: STA 325: Machine Learning and Data Mining


Statisticians and machine learning specialists are often asked to analyze data from unfamiliar sources. No matter the source of the data, analysts must be comfortable applying their skills to solve the client's problem. This Data Expeditions course prepares students for the real world by asking them to analyze data from a field with which they have little experience: turbulent flow in fluids. Furthermore, the course challenges students to interpret their results in order to gain an understanding of the behavior of fluids in turbulent flow.

Students are first given a 1-hour lecture introducing the data. We explain that fluid dynamics is a classic field of physics pertaining to the motion of fluid particles. We discuss the concept of turbulence, a phenomenon most people are familiar with in the context of airplane travel. Next, we explain why it is computationally expensive to model turbulence using direct numerical simulation (DNS). It would be much faster, we tell the students, if we had a statistical model of our data. We illustrate the concept of particle clustering in turbulence and explain how we employed Voronoi tessellation analysis to identify clusters. We provide students with our dataset and ask them to return in three weeks with a proposed statistical model. Specifically, we ask students to model the first four moments of particle cluster size given three parameters: the Reynolds number, Stokes number, and Froude number.
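To make the Voronoi-based cluster identification concrete, here is a minimal 2D sketch using SciPy. It is an illustration only: the study itself works in 3D with its own thresholding, and the particle positions, cluster shape, and threshold of half the mean cell area below are all illustrative assumptions.

```python
import numpy as np
from scipy.spatial import Voronoi, ConvexHull

rng = np.random.default_rng(0)
# Hypothetical 2D particle field: a tight "cluster" plus uniform background.
cluster = rng.normal(loc=0.5, scale=0.02, size=(50, 2))
background = rng.uniform(0.0, 1.0, size=(200, 2))
points = np.vstack([cluster, background])

vor = Voronoi(points)

# Area of each bounded Voronoi cell (unbounded boundary cells get NaN).
areas = np.full(len(points), np.nan)
for i, region_idx in enumerate(vor.point_region):
    region = vor.regions[region_idx]
    if -1 not in region and len(region) > 0:
        # ConvexHull.volume is the area when the input is 2D.
        areas[i] = ConvexHull(vor.vertices[region]).volume

# Particles whose cell is much smaller than the mean are flagged as clustered.
mean_area = np.nanmean(areas)
clustered = areas < 0.5 * mean_area
print(f"{clustered.sum()} of {len(points)} particles flagged as clustered")
```

The key idea carries over to the course data: densely packed particles have small Voronoi cells, so small-cell regions mark clusters, whose sizes can then be summarized by distribution moments.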

During a two-hour follow-up session, students present their solutions to the graduate students, TAs, and the course instructor. In addition to the presentation, students write a report on their findings.

Guiding Questions

  1. What is a fluid? How does a fluid behave?
  2. What is turbulence? Where do we see turbulence in our everyday lives? How do particles move in turbulent flow, and why do they form clusters?
  3. What are the important properties of the fluid and particles in particle-laden flows?
  4. Why is direct numerical simulation important for the study of turbulence and particle dispersion in turbulent flow?
  5. Why is direct numerical simulation expensive?
  6. How can machine learning reduce computation time?
  7. How can machine learning provide insight into the behavior of particle motion?

The Dataset

The data were collected by Reza Momenifar as part of his doctoral thesis to investigate the properties of particle clusters in turbulent flow. The dataset is extracted from many numerical simulations in 3D space, performed in Reza's Theoretical and Computational Fluid Dynamics Group. The simulations model the distributions of particles under idealized turbulence in a cubic box. In these simulations, three independent control parameters representing the properties of the turbulent flow and the particles are varied. The particles' positions and other dynamic properties of the flow field (e.g., velocity) are stored. Next, Voronoi tessellation analysis was performed and particle clusters were identified. The particle clusters are summarized by the first four moments of the cluster size distributions.

In this analysis, the predictor variables are the fluid and particle properties (Reynolds number, Stokes number, Froude number). The response variables are the first four moments of the cluster size distribution. The students receive a dataset with 120 observations (rows), which they use to develop their models.
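The structure of the modeling task can be sketched as below. The real dataset is not reproduced here, so the predictor ranges and the response are synthetic stand-ins, and the log transform and linear form are illustrative choices, not the course solution; the students worked in R, but an equivalent least-squares fit in Python looks like this:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 120  # same number of observations as the course dataset

# Hypothetical stand-ins for the three predictors.
Re = rng.uniform(90, 400, n)     # Reynolds number
St = rng.uniform(0.05, 3.0, n)   # Stokes number
Fr = rng.uniform(0.05, 10.0, n)  # Froude number

# Log-scale the predictors (their ranges span orders of magnitude).
X = np.column_stack([np.ones(n), np.log(Re), np.log(St), np.log(Fr)])

# Synthetic response standing in for one moment of the cluster-size distribution.
y = 1.0 + 0.8 * np.log(Re) - 0.3 * np.log(St) + rng.normal(0, 0.1, n)

# Ordinary least squares; students would fit one such model per moment.
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
pred = X @ beta
r2 = 1 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)
print(f"fitted coefficients: {beta.round(2)}, R^2 = {r2:.3f}")
```

In the course, students went beyond this baseline to generalized additive and tree-based models, fitting four models in total, one per moment.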

In-Class Exercises

Reza and Jon first presented a lecture introducing the concept of turbulence and how it manifests in everyday phenomena. The lecture began with background on the study of fluid dynamics. Then they introduced direct numerical simulation (DNS) and explained why DNS is computationally expensive. Afterwards, they explained Voronoi tessellation analysis and its applications, particularly in particle clustering. Finally, Reza described how he generated the dataset that students would use for their assignment.

After the lecture, Reza and Jon gave students their assignment. Students were told that they had three weeks to develop four statistical models, one for each of the four moments, given the sample dataset.

Over the next three weeks, Reza and Jon assisted students with their projects. The course (STA 325) had weekly lab sessions, which provided a natural venue for students to ask the graduate students, TAs, and instructor questions. The main challenge students faced was scaling the variables: the ranges of some predictor and response variables were quite large, so students had to think critically about how to scale them appropriately. The students were comfortable using the R programming language to develop different types of models, including linear, generalized additive, and tree-based models. They were particularly well-prepared for the assignment because they had just learned about these types of models in their regular course instruction.

Students presented their results during a two-hour presentation session. The graduate students, TAs, and instructor used a rubric to assess the oral presentations and written reports. Overall, student projects were impressively thorough and insightful, often going above and beyond the assignment prompts.

In addition to presenting their findings, students were asked to use their models to make predictions on a test set (data that include the predictor variables only). Students submitted their predictions to the TAs, who determined which group had the most accurate models. Model performance was taken into account when assigning student grades on the project.

After the final presentations, students reflected on their experiences with the Data Expeditions project. Several students noted that the Data Expedition felt like a real-life client project, similar to what they might experience at a consulting firm. Other students noted that they were able to directly apply the material learned in class to a novel dataset. 


Source of the Data

Momenifar, M., & Bragg, A. D. (2019). Local analysis of the clustering, velocities and accelerations of particles settling in turbulence. arXiv:1908.00341.
