A Shallow Dive into Deep Sea Data

Project Summary

Large, publicly available environmental databases are a tremendous resource for both scientists and members of the public interested in climate trends and properties. However, without the programming skills to parse and interpret these massive datasets, significant trends may remain hidden. In this data exploration, students spent three hours working with two large, publicly available datasets, each containing more than 4 million observations. They learned how to use R and RStudio to effectively organize, visualize, and statistically explore trends in deep-sea physical oceanography.

Themes and Categories
Year
2018

Graduate Students: Sarah Solie (Biology) and Arielle Fogel (University Program in Genetics and Genomics, Evolutionary Anthropology)

Faculty Member: Dr. Kate Thomas

Course: Biology 190: Life in the Deep Sea

Students gained experience exploring patterns in multivariate oceanographic data, relevant to their coursework, to answer the following four questions:

  1. How do average temperature and salinity at the surface of the ocean compare to temperature and salinity at 3000 meters below the surface?
  2. Do the trends observed in question 1 differ across tropical, temperate, and polar climates?
  3. What is the relationship between ocean temperature and salinity across depths ranging continuously from the surface to 5500 meters below sea level?
  4. Do the trends observed in question 3 differ across tropical, temperate, and polar climates?

As students pursued these questions, they were introduced to R, a free software environment that provides powerful tools for statistical computing and graphics, and RStudio, an integrated development environment frequently used to make programming in R easier. They learned valuable skills for future data analysis, including:

  1. Accessing and downloading two physical oceanography databases (salinity and temperature) from the National Oceanic and Atmospheric Administration (NOAA) and National Oceanographic Data Center (NODC) World Ocean Atlas 2013 - https://www.nodc.noaa.gov/OC5/woa13/woa13data.html
  2. Importing and inspecting a dataset in .csv format in RStudio
  3. Installing and using R packages
  4. Tidying data such that it was interpretable for R analysis
  5. Manipulating data, including subsetting, filtering, transforming, and summarizing
  6. Creating a new categorical variable and assigning values to it based on existing data
  7. Using graphical visualization (see Graphics Created) including:
    1. Boxplots (Figures 1-2)
    2. Scatterplots (Figures 3-6)
  8. Performing statistical tests including:
    1. A two sample t-test
  9. Best practices for data wrangling and analysis (e.g., inspecting data after each manipulation, annotating code)
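
The manipulation, visualization, and testing steps above can be sketched in base R. The data frame and variable names below are illustrative stand-ins for the WOA13 data, not the actual class materials:

```r
# Small simulated dataset standing in for the 4-million-row WOA13 data
# (column names and values are illustrative assumptions).
set.seed(1)
ocean <- data.frame(
  latitude = runif(300, -90, 90),
  depth    = sample(c(0, 3000), 300, replace = TRUE),
  temp     = rnorm(300, mean = 10, sd = 5)
)

# Step 6: create a categorical climate variable from latitude
ocean$climate <- cut(abs(ocean$latitude),
                     breaks = c(0, 23.5, 66.5, 90),
                     labels = c("tropical", "temperate", "polar"),
                     include.lowest = TRUE)

# Step 5: subset/filter surface vs. 3000-meter observations
surface <- subset(ocean, depth == 0)
deep    <- subset(ocean, depth == 3000)

# Step 7: visualize temperature by depth with a boxplot
boxplot(temp ~ depth, data = ocean,
        xlab = "Depth (m)", ylab = "Temperature (°C)")

# Step 8: two-sample t-test comparing mean temperature between depths
t.test(surface$temp, deep$temp)
```

With the real data, the same pattern applies; only the import step and the column names change.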

At the end of the exercise, students were provided with additional online resources to continue exploring data with R and RStudio.

The Datasets

Students accessed and explored two massive datasets from the National Oceanic and Atmospheric Administration (NOAA) and National Oceanographic Data Center (NODC) World Ocean Atlas 2013. Specifically, they used the annual temperature statistical mean and the annual salinity statistical mean datasets, which contained temperature and salinity observations, respectively, across depth (up to 5500 meters), location (at 1° spatial resolution), and time (1955-2012).
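The import-and-inspect step can be sketched as follows; a tiny generated file stands in for the real WOA13 download, and the file and column names are assumptions for illustration:

```r
# Build a tiny illustrative CSV mimicking the gridded structure
# (latitude, longitude, and temperature at two depths), then import
# and inspect it the way students did with the full dataset.
toy <- data.frame(
  latitude   = c(0.5, 45.5, 75.5),
  longitude  = c(-30.5, -30.5, -30.5),
  temp_0m    = c(27.1, 14.2, 1.3),
  temp_3000m = c(2.4, 2.9, NA)   # NA where no observation exists
)
write.csv(toy, "toy_woa.csv", row.names = FALSE)

ocean <- read.csv("toy_woa.csv")  # import the .csv into a data frame
str(ocean)                        # structure: column types, dimensions
head(ocean)                       # preview the first rows
summary(ocean)                    # per-column summaries, including NAs
```

Inspecting the data immediately after import (and after every manipulation) was one of the best practices emphasized in the exercise.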

Graphics Created

Temperature by Depth and Climate
Figure 1. Temperature at the surface versus 3000 meters below sea level and its relation to climate.
Salinity by Depth and Climate
Figure 2. Salinity at the surface versus 3000 meters below sea level and its relation to climate.
Temperature by depth and climate
Figure 3. Temperature by depth and climate.
Salinity by depth and climate
Figure 4. Salinity by depth and climate.
Average temperature by depth and climate
Figure 5. Average temperature by depth and climate.
Average salinity by depth and climate
Figure 6. Average salinity by depth and climate.

Course Materials

Please see the R Markdown file titled “deep_sea_data.Rmd” as well as the PDF version, which includes figures, titled “deep_sea_data.pdf”.

Student Feedback

“I learned that programming is probably 10% writing out the code and 90% figuring out what went wrong. It is a ton of troubleshooting, and through that troubleshooting is a lot of frustration. However, it was also a lot of fun doing it. Problem solving has always been enjoyable for me, so I had a good time figuring out what I did wrong.”

“It was ... cool learning all of the different manners in which you can analyze data using the program and also compile all of the information—over 4 million data points—into very easy to read graphs that made interpreting the data very simple.”

“I think it was an amazing experience to make 4 million data [points] into [a] few intuitive graphs.”

“Using the skills I learned in these lessons, I can convey a huge group of data that seems chaotic into a series of tables that [are] both easy to see and easy to analyze.”

“I can understand why and how to use the codes with the instruction of the teachers.”

“Coding [in R] made it easier to graph complicated scientific results with many variables that programs like Excel would struggle with.”

Data Sources

  1. Locarnini, R. A., A. V. Mishonov, J. I. Antonov, T. P. Boyer, H. E. Garcia, O. K. Baranova, M. M. Zweng, C. R. Paver, J. R. Reagan, D. R. Johnson, M. Hamilton, and D. Seidov, 2013. World Ocean Atlas 2013, Volume 1: Temperature. S. Levitus, Ed., A. Mishonov Technical Ed.; NOAA Atlas NESDIS 73, 40 pp.
  2. Zweng, M. M., J. R. Reagan, J. I. Antonov, R. A. Locarnini, A. V. Mishonov, T. P. Boyer, H. E. Garcia, O. K. Baranova, D. R. Johnson, D. Seidov, and M. M. Biddle, 2013. World Ocean Atlas 2013, Volume 2: Salinity. S. Levitus, Ed., A. Mishonov Technical Ed.; NOAA Atlas NESDIS 74, 39 pp.

Related Projects

KC and Patrick led two hands-on data workshops for ENVIRON 335: Drones in Marine Biology, Ecology, and Conservation. These labs were intended to introduce students to examples of how drones are currently being used as a remote sensing tool to monitor marine megafauna and their environments, and how machine learning can be used to efficiently analyze remote sensing datasets. The first lab specifically focused on how drones are being used to collect aerial images of whales to measure changes in body condition to help monitor populations. Students were introduced to the methods for making accurate measurements and then received an opportunity to measure whales themselves. The second lab then introduced analysis methods using computer vision and deep neural networks to detect, count, and measure objects of interest in remote sensing data. This work provided students in the environmental sciences an introduction to new techniques in machine learning and remote sensing that can be powerful multipliers of effort when analyzing large environmental datasets.

This two-week teaching module in an introductory-level undergraduate course invites students to explore the power of Twitter in shaping public discourse. The project supplements the close-reading methods that are central to the humanities with large-scale social media analysis. This exercise challenges students to consider how applying visualization techniques to a dataset too vast for manual apprehension might enable them to identify for granular inspection smaller subsets of data and individual tweets—as well as to determine what factors do not lend themselves to close-reading at all. Employing an original dataset of almost one million tweets focused on the contested 2018 Florida midterm elections, students develop skills in using visualization software, generating research questions, and creating novel visualizations to answer those questions. They then evaluate and compare the affordances of large-scale data analytics with investigation of individual tweets, and draw on their findings to debate the role of social media in shaping public conversations surrounding major national events. This project was developed as a collaboration among the English Department (Emma Davenport and Astrid Giugni), Math Department (Hubert Bray), Duke University Library (Eric Monson), and Trinity Technology Services (Brian Norberg).

Understanding how to generate, analyze, and work with datasets in the humanities is often a difficult task without learning how to code or program. In humanities-centered courses, we often privilege close reading and qualitative analysis over other ways of knowing, but by learning new quantitative techniques we better prepare students to tackle new forms of reading. This class will work with data from the HathiTrust to develop ideas for thinking about how large groups and different discourse communities thought of queens of antiquity like Cleopatra and Dido.

Please refer to https://sites.duke.edu/queensofantiquity/ for more information.