1.1. ADCP Processing Overview

There are at least four necessary processing steps for ADCP data, which are performed by (or made possible by) the CODAS routines.

First: An ocean reference layer is used to remove the ship’s speed from the measured velocities. By assuming the ocean reference layer is relatively smooth, positions can be nudged to smooth the ship’s velocity, which directly results in a smooth reference-layer velocity. This was more important when fixes were rare or jumpy (such as with LORAN) or dithered (such as GPS signals under Selective Availability, prior to 2001).
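
The effect of smoothing the ship’s velocity on the reference layer can be sketched in a few lines of Python. This is a toy illustration only, not the CODAS implementation; the noise level, smoothing width, and all variable names are invented for the example:

```python
import numpy as np

def smooth(series, width=5):
    # simple running mean, standing in for the smoothing applied to the
    # ship velocity derived from jumpy navigation fixes
    kernel = np.ones(width) / width
    return np.convolve(series, kernel, mode="same")

rng = np.random.default_rng(0)
ship_true = np.full(200, 5.0)            # steady 5 m/s ship speed
ocean_true = 0.2                         # true reference-layer velocity
nav_noise = rng.normal(0.0, 0.5, 200)    # jumpy fixes (LORAN-era style)

v_ship_nav = ship_true + nav_noise       # ship velocity from raw fixes
v_measured = ocean_true - ship_true      # ADCP measures ocean relative to ship
v_ref_raw = v_measured + v_ship_nav      # reference layer using raw fixes
v_ref_smooth = v_measured + smooth(v_ship_nav)  # using smoothed ship velocity
```

The noise in the raw fixes passes straight into the reference-layer velocity; smoothing the ship velocity reduces it (away from the edges of the toy series, where the simple running mean is biased).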

Second: An accurate heading is required. A GPS-derived heading source (such as Ashtech, POSMV, or Seapath) may provide a more accurate (though often less reliable) heading than a gyro. Routines are in place for pingdata and UHDAS data to correct the gyro heading with the GPS-derived heading, using a quality-controlled difference of the two headings. An example is available for VmDAS data. Gyro headings may be reliable, but they can oscillate by several degrees over several hours, creating spurious fluctuations in the ocean velocity that resemble “eddies” but are solely the result of cross-track velocity errors (from the associated gyro heading errors).
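
The core of such a correction is a quality-controlled difference between the two heading series. The sketch below shows the idea only; the wrap convention, the max_jump threshold, and the median-based screening are illustrative assumptions, not the CODAS algorithm:

```python
import numpy as np

def heading_difference(gps_heading, gyro_heading):
    # signed difference in degrees, wrapped into [-180, 180)
    dh = np.asarray(gps_heading) - np.asarray(gyro_heading)
    return (dh + 180.0) % 360.0 - 180.0

def corrected_heading(gyro_heading, gps_heading, max_jump=5.0):
    # hypothetical quality control: differences far from the median are
    # treated as bad GPS heading fixes and replaced by the median difference
    dh = heading_difference(gps_heading, gyro_heading)
    med = np.median(dh)
    dh_qc = np.where(np.abs(dh - med) < max_jump, dh, med)
    return (np.asarray(gyro_heading) + dh_qc) % 360.0
```

Note the wrap in heading_difference: a gyro at 359 degrees and a GPS heading at 1 degree differ by 2 degrees, not 358.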

Third: Calibration routines are available to estimate the heading misalignment from either “bottom track” or “water track” data. Watertrack calibration routines use sudden accelerations (such as stopping and starting of the ship during station-work) to derive an estimate of the heading misalignment. For a ship travelling at 10 kts, a 1-degree heading error results in a 10 cm/s cross-track velocity error. It is critical that the misalignment be accounted for if one is to avoid cross-track biases in the velocities. Additional calibration routines estimate the horizontal offset between the ADCP and the GPS used to determine the ship’s speed. An offset of more than a few meters can cause artifacts when the ship turns.
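
The quoted number is easy to check: the cross-track error is approximately the ship speed times the sine of the heading error. At 10 kts (about 5.14 m/s), a 1-degree error gives roughly 9 cm/s, i.e. the ~10 cm/s rule of thumb above. A small check (function name is ours):

```python
import numpy as np

KT_TO_MS = 0.514444  # one knot in m/s

def crosstrack_error(ship_speed_kts, heading_error_deg):
    # cross-track velocity error ~ ship speed * sin(heading error)
    return ship_speed_kts * KT_TO_MS * np.sin(np.radians(heading_error_deg))

err = crosstrack_error(10.0, 1.0)   # about 0.09 m/s
```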

Fourth: Bad data must be edited out prior to use. It is best if the single-ping data can be edited prior to averaging (to screen out interference from other instruments, bubbles, and some kinds of underway bias). Once the data are averaged and the above steps are applied, it is often still necessary to further edit the data (eg. remove in-port data or velocities below the bottom). To some extent this can be automated, but for final processing a person must visually inspect all the averages from a dataset.
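
As a concrete example of a screen that can be automated, bins at or below the depth where sidelobe reflection off the bottom contaminates the data can be masked. The function and the 0.85 threshold below are illustrative (roughly cos(30 deg) for 30-degree beams), not CODAS code:

```python
import numpy as np

def flag_below_bottom(bin_depths, velocities, water_depth, frac=0.85):
    # mask bins at or below ~85% of the water depth, where sidelobe
    # reflection off the bottom contaminates the velocities
    v = np.array(velocities, dtype=float)
    v[np.asarray(bin_depths) >= frac * water_depth] = np.nan
    return v
```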


CODAS Processing

The term CODAS processing refers to a suite of open-source programs for processing ADCP data. The CODAS processing suite consists of C and Python programs that run on Windows, Linux, or Mac OSX, and can process data collected from a Broadband or Ocean Surveyor instrument by VmDAS, or data collected from any of those instruments by UHDAS (open-source acquisition software that runs RDI ADCPs).

CODAS processing can be used for data that have already been averaged (eg. LTA files) or for single-ping data. In the latter case, routines are employed that extensively screen single-ping data prior to averaging. Under certain conditions, this may be necessary to avoid underway biases caused by bubbles or ice near the transducer, or acoustic interference from other instruments.

The CODAS database (Common Ocean Data Access System) is not a hierarchical database; it is a portable, self-descriptive file format (in the spirit of netCDF) that was designed specifically for ADCP (and other oceanographic) data. For historical reasons it is stored as a collection of files. Because it is an organized body of information, it is referred to as a database.

For many years the processing engine was Matlab, not Python, but we have now switched completely to Python. We will not be maintaining the Matlab code that used to do the processing, but we will maintain the underlying Matlab programs that read raw data files and the Matlab output of ocean velocities.


Processing Stages

CODAS processing operates on data residing on disk, i.e. after the acquisition program has done its job. CODAS processing of ADCP data consists of two stages:

  1. getting the data into the CODAS averages

  2. editing and calibration of data already in the CODAS database

Note

The term “processing” refers to both steps, whereas “post-processing” generally refers to step 2 only, i.e. the steps that can be run more than once.

This diagram illustrates the workflow behind CODAS processing.

This diagram shows the split between what happens before the data are in the CODAS database (acquisition + averaging + loading the database) and the steps that operate on the CODAS database itself.

This figure is a cartoon showing where CODAS processing fits with LTA and UHDAS datasets.

1.1.1. Preliminary Processing

This could refer to re-running the single-ping processing on UHDAS single-ping data with newer algorithms or different settings, or to any CODAS processing of VmDAS data (ENR, LTA, or STA).

In this stage, quick_adcp.py is typically called with a control file containing the parameters it needs for processing; typically this stage can be run only once.

Pre-averaged data

For PINGDATA and VmDAS LTA or STA data, very little single-ping editing is done prior to averaging. We simply translate these files, whatever their source, and load them into the CODAS database.

Once the first pass is complete, one follows up with the same combination of editing and calibration referred to as post-processing.

Single-ping data

The strategy for VmDAS single-ping processing is to use the VmDAS single-ping data (i.e. the ENR data and ancillary inputs such as attitude and position from N1R, N2R, N3R) and:

  • convert these components into UHDAS-style directories and files

  • process using the UHDAS+CODAS tools

Single-ping processing of UHDAS (or UHDAS-style) data means:

  • read the ADCP and ancillary serial data

  • find UTC time, add position and heading

  • edit out bad single-ping velocities

  • average the single-ping data; write to disk.
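
The last two steps (edit, then average) amount to grouping the edited pings into fixed time intervals and averaging each group. A toy sketch of that idea (the 300 s interval and all names are invented for the example; this is not the UHDAS averaging code):

```python
import numpy as np

def average_pings(ping_times, ping_velocities, interval=300.0):
    # group edited single-ping velocities into fixed time intervals and
    # average each group; pings already edited out are NaN and are
    # skipped by nanmean
    t = np.asarray(ping_times, dtype=float)
    v = np.asarray(ping_velocities, dtype=float)
    starts = np.arange(t.min(), t.max() + interval, interval)
    averages = []
    for t0 in starts[:-1]:
        sel = (t >= t0) & (t < t0 + interval)
        if sel.any():
            averages.append((t0 + interval / 2.0, np.nanmean(v[sel])))
    return averages
```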

Reasons for redoing Single-ping Processing of ADCP data:

  • take advantage of newer tools or algorithms

  • the cruise was broken into several legs (rejoin the segments)

  • something broke during the cruise (the processing failed or a critical ancillary data feed was missing) – see this link for more details about regenerating UHDAS data components.

  • better final product for VmDAS data (compared to LTA or STA)

  • bug fixed

Algorithmic note: Heading Correction

  • obtain a heading correction for the gyro headings, using an accurate (preferably GPS-based) attitude device. Examples of accurate GPS-based heading devices include POSMV and Seapath (which also leverage inertial calculations), and Ashtech devices (such as the older ADU5, with four antennas, or the ABXTWO, with three antennas)

  • check the health of the accurate heading device

If there are two heading devices, we use one as a reliable heading (usually a gyro) and the more accurate one as a correction to the reliable one. This is typically applied once, but if there are gaps (eg. during at-sea processing) one may need to “patch” (interpolate) the heading time series.

1.1.2. Post-Processing

Post-processing describes the steps performed after the ADCP data have been loaded into a CODAS database. That database resides in an ADCP (sonar) processing directory.

This “existing CODAS database” could have come from an at-sea UHDAS preliminary processing directory, or from putting VmDAS LTA (or STA) data into a CODAS framework. Note that loading VmDAS LTA data into a CODAS database is only staging it for post-processing; LTA data are already averaged.

In this stage, the “navigation”, “calibration”, and “export” steps described below all call quick_adcp.py with the argument --steps2rerun. These steps can be run multiple times.

Patching

  • if there are gaps in the time series of the heading correction, one must fill them using interpolation, filtering, smoothing, or other data manipulation.

  • patch_hcorr.py has been designed for that purpose.
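
A minimal stand-in for the gap-filling is linear interpolation between the surrounding good points; patch_hcorr.py does the job with more care, so the sketch below is only to show the idea:

```python
import numpy as np

def patch_gaps(times, hcorr):
    # fill NaN gaps in a heading-correction time series by linear
    # interpolation between the surrounding good points
    t = np.asarray(times, dtype=float)
    h = np.asarray(hcorr, dtype=float)
    good = np.isfinite(h)
    return np.interp(t, t[good], h[good])
```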

Navigation

  • find and smooth the reference layer

Calibration

  • determine preliminary angle and amplitude calibrations from watertrack and/or bottom track data (using corrected headings)

  • if large corrections are required, do that before editing

Editing

  • editing (bottom interference, wire interference, bubbles, ringing, identifying problems with heading and underway bias)

Calibration (check)

  • final calibration based on edited data

    • watertrack and bottomtrack calibrations give phase and scale factor

    • transducer-gps horizontal offset

Documentation

  • leave notes so someone can see what was done or reproduce the processing steps if necessary

Export

  • data can be exported in Matlab or NetCDF format

Plot

  • A simple web-figure generator exists and is useful for distribution and a basic quick look at the data

This detailed ascii chart shows (most of) the steps quick_adcp.py performs, the directory in which it does the work, the input files it generates, the programs it runs, and the output files it generates.