Quick_adcp.py Documentation

Quick_adcp.py: Overview

Processing of ADCP data with quick_adcp.py

Quick_adcp.py is a Python script that runs the usual CODAS processing steps in a predictable and configurable manner. For a clean dataset, it provides a relatively quick and painless way of looking at the data, addressing configuration issues, and editing. If your dataset has problems, you can always run the appropriate steps manually.

This description of CODAS processing history may be relevant.

Setup of a processing directory should use adcptree.py. Although there are various versions of adcptree in existence, they reflect older generations of processing. Quick_adcp.py should be able to deal with any of the following combinations of data acquisition, averaging (or not), and instrument, as long as the processing directory was set up with adcptree.py:

Acquisition program   Instrument       Ping type   Averaged?          File type    Incremental?
-------------------   ----------       ---------   ---------          ---------    ------------
DAS2.48, DAS2.49      NB150            nb          yes                pingdata     no
VmDAS                 Broadband or     bb          yes                LTA or STA   no
                      Workhorse        bb          no                 ENS or ENX   no
VmDAS                 Ocean Surveyor   bb          yes                LTA or STA   no
                                       bb          no                 ENS or ENX   no
                                       nb          yes                LTA or STA   no
                                       nb          no                 ENS or ENX   no
                                       bb+nb       yes (first ping)   ENS or ENX   no
                                       bb+nb       no (first ping)    ENS or ENX   no
UHDAS                 NB150            nb          no                 raw          yes
UHDAS                 OS               bb          no                 raw          yes
                                       nb          no                 raw          yes
                                       bb+nb       no                 raw          yes
UHDAS                 WH300            bb          no                 raw          yes

At its operational level, CODAS processing consists of a series of C programs and Matlab programs that interact with the CODAS database or with files on the disk. C programs usually deal with the database directly, by loading data (e.g. loadping.exe), extracting data (e.g. adcpsect.exe), or by manipulating the database (e.g. rotate.exe, putnav.exe, dbupdate.exe). Matlab programs are used to manipulate files on the disk so C programs can use them; in the case of VmDAS or UHDAS data, Matlab is used to read the original data files and create translated versions (on the disk) that the C programs can read.

Manual Processing

All steps can be run from the shell command line (or from the Matlab command line). Adcptree.py creates a processing directory tree and copies templates or documented, editable files to the various subdirectories, setting up the tree for processing. To process a dataset manually, one works through the directories, repeating (in the proper order) the following steps:

  1. edit the appropriate file
  2. run the related program

C programs are almost always called with a control file to specify parameters that the user may wish to configure or change. These include predictable values, such as the database name or yearbase, and configurable values, such as a reference layer depth range. C programs are called on a command line from the relevant working directory as (for example)

adcpsect adcpsect.cnt

The original “.cnt” files are self-documented, showing the various options that can be chosen. The user is advised to leave these files as is and name their copies something else, such as “adcpsect.tmp”, and then run it as

adcpsect adcpsect.tmp
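For illustration, such a control-file copy might begin with entries like the following. This is a hypothetical sketch: the values shown are placeholders, and the authoritative list of options is in the self-documented original “.cnt” files.

```
dbname:  ../adcpdb/aship
output:  aship
```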

Matlab programs are copied by adcptree.py to the appropriate directory and exist as a script (or a stub that calls a script). The matlab program can be edited and then run in the appropriate directory.

Scripted Processing

Quick_adcp.py is designed to work through the standard processing steps, writing control files and running the C programs, or writing matlab files to disk, and running them. Control files for C programs are named with the same base name (such as “rotate”) with a “.tmp” suffix. Matlab files have the same base name as the original matlab file but have “_tmp” in the name (eg. “ashrot.m” becomes “ashrot_tmp.m” when written by quick_adcp.py).
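The naming convention just described can be sketched in a few lines of Python; the function below is an illustration of the rule, not code from quick_adcp.py itself:

```python
def tmp_name(filename):
    """Name quick_adcp.py would give its generated copy of a control
    or Matlab file: C-program control files gain a ".tmp" suffix, and
    Matlab scripts gain "_tmp" before the ".m" extension."""
    if filename.endswith(".m"):
        return filename[:-2] + "_tmp.m"
    return filename + ".tmp"

# "rotate" -> "rotate.tmp"; "ashrot.m" -> "ashrot_tmp.m"
```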

Once your paths are set up (matlab, executable, and Python), you

  1. pick a working area (not in the PROGRAMS directory; that is reserved for UH code)

  2. run adcptree.py with the appropriate options

  3. locate your data files, determine the appropriate switches for quick_adcp.py

  4. run quick_adcp.py. Arguments can be typed on the command line or stored in a control file (accessed with --cntfile). Command-line options override control file options.
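The precedence rule in step 4 (command-line options override control-file options) can be sketched as follows; the file format and option names here are assumptions for illustration, not the actual quick_adcp.py implementation:

```python
def read_cntfile(text):
    """Parse "--key value" lines into a dict, skipping blanks and '#' comments.
    (Assumed format; see the quick_adcp.py documentation for the real one.)"""
    opts = {}
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()
        if not line:
            continue
        key, _, value = line.partition(" ")
        opts[key.lstrip("-")] = value.strip()
    return opts

def merge_options(cnt_opts, cli_opts):
    """Start from the control file; let command-line values win."""
    merged = dict(cnt_opts)
    merged.update(cli_opts)
    return merged

cnt = read_cntfile("--yearbase 1993\n--dbname aship\n")
final = merge_options(cnt, {"dbname": "ademo"})  # command line overrides dbname
```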

Overview of ADCP processing stages

CODAS processing of ADCP data consists of three stages.

  1. If dealing with single-ping data:

    • read the ADCP and ancillary serial data
    • “navigate” the data (find UTC time, add position and attitude)
    • edit out bad single-ping velocities
    • average the single-ping data; write to disk.

    These steps are already done in PINGDATA and VmDAS LTA or STA data.

    This flow chart shows the split between this stage (acquisition + averaging + loading the database) and the later stages (CODAS processing; manipulating the database).

  2. Load the averages into CODAS,

    • find and smooth the reference layer,
    • obtain a gps-based heading correction for the gyro headings,
    • determine preliminary angle and amplitude calibrations from watertrack and/or bottom track data (using corrected headings)
  3. Editing and calibration:

    • editing (bottom interference, wire interference, bubbles, ringing, identifying problems with heading and underway bias),
    • final calibration based on edited data

More notes

  • If you start with LTA, STA, or pingdata, you are starting at (2).
  • What used to be known as “CODAS processing” is really steps (2) and (3).
  • With access to VmDAS single-ping data (ENS or ENX), we have the opportunity to do a better job of editing at the single-ping stage.
  • With UHDAS and HDSS data, we are required to start with single-ping data.
  • If you start with single-ping data (stage 1), you should already be familiar with stages (2) and (3).
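As a summary of the notes above, the mapping from file type to starting stage can be sketched like this (a restatement of this document in code, not anything from the CODAS distribution):

```python
SINGLE_PING = {"ENS", "ENX", "raw"}        # must begin at stage 1
PRE_AVERAGED = {"pingdata", "LTA", "STA"}  # averaging already done; begin at stage 2

def starting_stage(file_type):
    """Which processing stage a dataset of the given file type enters at."""
    if file_type in SINGLE_PING:
        return 1
    if file_type in PRE_AVERAGED:
        return 2
    raise ValueError("unknown file type: %s" % file_type)
```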

Practice Datasets

Recommended strategy:

FIRST (steps 2,3 above)

  • Become familiar with CODAS processing by using the LTA demo (below)

  • specifically
    • read this short introduction about CODAS LTA processing
    • download the LTA files (below)
    • open the detailed quick_adcp.py guide
    • look at the LTA notes describing the processing steps for the demo data
    • pick a processing location (not in the PROGRAMS directory) and work your way through the processing.

THEN

  • If it is necessary to learn about single-ping data, try ENX or ENS, below. Often, processing single-ping VmDAS data is not necessary. (UHDAS processing requires starting with single-ping data).

This figure is a cartoon showing where CODAS processing fits with LTA and UHDAS datasets. Hopefully it will be illuminating and not confusing.

Available quick_adcp.py examples

Note: the data files are binary; to download, you may need to hold the “shift” key when you click on the link.

  • NB150 pingdata (DAS2.48, no ue4; well-documented, old data)
    • documentation: tutorial (highly detailed); quick_adcp.py pingdata howto
    • quick_adcp.py commands: pingdata commands
    • data files: get pingdata* (old demo)
    • example processing directory: old pingdata notes; old pingdata dir
  • NB150 pingdata (DAS2.48 with ue4; newer data)
    • documentation: quick_adcp.py pingdata howto
    • quick_adcp.py commands: pingdata commands
    • data files: get pingdata* (new demo)
    • example processing directory: new pingdata notes; new pingdata dir
  • Ocean Surveyor (VmDAS) LTA (5-min)
    • documentation: quick_adcp.py LTA howto
    • quick_adcp.py commands: LTA commands
    • data files: get *LTA (LTA data)
    • example processing directory: LTA notes; LTA dir
  • Ocean Surveyor (VmDAS) ENX (single-ping)
    • quick_adcp.py commands: ENX commands
    • data files: get *ENX (ENX data)
    • example processing directory: ENX notes; ENX dir
  • Ocean Surveyor (VmDAS) ENS (single-ping)
    • quick_adcp.py commands: ENS commands
  • NB150, WH300, Ocean Surveyor (UHDAS) “raw” (single-ping)
    • quick_adcp.py commands: UHDAS commands
    • data files: get ‘uhdas_data.zip’ (UHDAS data; zip archive)
    • example processing directory: UHDAS notes; UHDAS dir

Documentation:

  • adcptree.py shows the usage when typed on the command line.
  • quick_adcp.py has a lot of documentation accessible by command-line switches. Typing “python quick_adcp.py” will list those options. These are also provided as links below:

adcptree.py:

quick_adcp.py: