Using quick_adcp.py to process ADCP data
Shipboard ADCP data processing requires several steps. The reality of data (and all the things that can go wrong) makes the steps more complicated than one might expect at first. Basic processing of a clean dataset is easy; any problem with the data increases the complexity of the processing.
CODAS (Common Oceanographic Data Access System) is a database designed to store ADCP data and associated information (eg. heading, time, position). The CODAS database is simply a vehicle for storage and organization of the ADCP data while various processing steps are run. “CODAS Processing” refers to the University of Hawaii collection of programs that use the CODAS database to process ADCP data.
CODAS processing steps are designed to be flexible (to cope with different data sources and problems encountered during processing) and automatable (so the basic steps can be run easily with minimal overhead per dataset). Many datasets do not need all of the flexibility available, but since some data streams sporadically fail and some improve over time, there are necessarily many options in CODAS.
Quick_adcp.py is a tool to streamline the basic processing steps and provide a uniform naming convention for the various files used in processing. Some of its switches are required and others specify the kind of data being processed. This link is a description of quick_adcp.py and contains a table with the acquisition programs and data types supported by quick_adcp.py.
This document is designed to introduce the CODAS processing steps run by quick_adcp.py, point to other tools and resources, and help the user understand how to use quick_adcp.py for their particular dataset.
NOTE: Prior to using Univ. Hawaii CODAS processing software, a computer must be set up with the appropriate software. The software suite runs on Windows (W98 and later), linux, mac OSX (intel and ppc). Older binaries for Solaris and SGI exist but have not been compiled recently. The processing computer must have Matlab, various Univ. Hawaii programs, and Python. Details of the computer setup are available here.
Here are the basic steps in CODAS processing of a shipboard ADCP dataset. You may have other steps that need to be addressed.
ADCP processing is done in a directory that is created by running adcptree.py. The processing directory is initialized with a particular collection of subdirectories and files. Some of those files are present for all CODAS processing directories, and some are specific to particular kinds of datasets. The processing directory should be in a working area of your disk, NOT in the UH programs directory tree.
Type “adcptree.py” for usage, specifically if processing averaged data (LTA, STA, pingdata). Type the following to get more usage information for single-ping processing:
adcptree.py --help
NOTE: Examples for this document were run on a linux machine, with UH programs rooted at /home/ulili/programs.
It is important to keep the processing directory relatively free of clutter. If the data are worth using, it is likely that someone will look around in the processing directory for information about how the data were processed and what anomalies were present. Preserving a relatively linear path from the start of processing to the end of processing helps the person later figure out what was done. Sometimes the best thing to do is simply delete the whole processing directory and start over.
Bearing that in mind, here is one approach to a processing strategy:
make a directory for the cruise that will hold:
- the processing directory
- summary notes (metadata file; instrument configuration, dates...)
- detailed notes (eg. comments about the data, such as gaps, biases)
- instructions (suitable for cut-and-paste for your next attempt)
- quality directory for exploration of the data
run “adcptree.py” to make your processing directory
write down the adcptree command you used in a file
type the following to see the right prompts for the datatype you have (eg. LTA)
quick_adcp.py --help
quick_adcp.py --commands lta
If you have to delete the database (rotated too far, rotated back, made a mistake, etc.), you will have good notes about what you did, so it won’t be that hard the next time, and the new processing directory will be nice and clean. In addition, if you keep your notes and exploration OUT of the processing directory, you don’t have to worry about them getting deleted when you delete the processing directory.
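For example, a minimal sketch of this layout (all names here are illustrative, not required by any program):

% mkdir vg0304 && cd vg0304
% mkdir quality                           # exploration area, outside the processing directory
% touch cruise_metadata.txt notes.txt     # summary notes and detailed notes
% echo "adcptree.py vg0304_lta --datatype LTA" >> instructions.txt
% adcptree.py vg0304_lta --datatype LTA   # the processing directory itself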
The CODAS database is actually a collection of binary files whose names are composed of a prefix, a 3-digit number, and the suffix “blk”. There is one database directory file (also binary), whose name has the same prefix and ends in “dir.blk”. Here is an example:
ademo001.blk
ademo002.blk
ademo003.blk
ademo004.blk
ademo005.blk
ademodir.blk
The prefix here, “ademo”, is called the database name. In the control files used by quick_adcp.py, you will see examples of a relative path to the database, such as
DBNAME ../adcpdb/ademo
This name is specified when quick_adcp.py is run, and is the prefix for many files. Those files are distinguishable by the directory they are in and the suffix they have.
NOTE: Anytime a database name is referred to in this document, the example will use ademo. Anytime the string ademo is used, it is referring to the database name.
The following example creates a processing tree for the fourth cruise that took place on the ship R/V Voyager, in 2003. The name of the processing directory is “vg0304_ping”. If no other options are specified, the processing directory is set up for pingdata files (i.e. NB150 data acquired by DAS 2.48).
The following command creates the processing directory as a subdirectory “vg0304_ping”, and populates it with the basic collection of subdirectories and files (the preceding “%” sign is the commandline prompt, and is followed by the command to type):
% adcptree.py vg0304_ping
uh_progs = /home/ulili/programs
making directory vg0304_ping/adcpdb
making directory vg0304_ping/cal
making directory vg0304_ping/cal/watertrk
making directory vg0304_ping/cal/botmtrk
making directory vg0304_ping/cal/heading
making directory vg0304_ping/cal/rotate
making directory vg0304_ping/contour
making directory vg0304_ping/edit
making directory vg0304_ping/grid
making directory vg0304_ping/load
making directory vg0304_ping/nav
making directory vg0304_ping/quality
making directory vg0304_ping/ping
making directory vg0304_ping/scan
making directory vg0304_ping/stick
making directory vg0304_ping/vector
- data type is ping
- demo ping data are in /home/ulili/programs/adcp_templates/demo/ping
(1) writing template text file for processing notes:
vg0304_ping/template_rpt.txt
(2) creating local web page for documentation
open either of these files in your web browser:
/home/jules/working/adcp_proc/codas-doc/vg0304_ping/adcp_processing.html
/home/ulili/programs/index.html
done.
Things to note:
- an example file template_rpt.txt is created that you can edit for use as a record of the instrument setup and processing steps. This file is an example from another ship and another instrument. Your details will differ. After you run quick_adcp.py, a version of this file will be generated with as much information as can be gleaned from the data. Look for cruise_info.txt.
- The html file “adcp_processing.html” will take you to the CODAS processing documentation that exists on the processing computer.
The directories created have the following functions:
The ping/ directory is the default repository of pingdata files (usually called PINGDATA.*). The data location can be overridden in quick_adcp.py.
The scan/ directory will be used to hold a list of time ranges and time info for each data file. The full time range of the dataset is also stored here.
The load/ directory will be used for loading the data into the database. For pingdata, that is just a program that gets run. For all other data types, a two-step process exists:
- a set of files (suffix bin and cmd) is generated
- those files are loaded into the database
In addition, a set of files (suffix gps1 and gps2) is generated containing the start and end time (and position) for each averaging period (these form the gps fixes for the “navigation” steps).
The adcpdb/ directory will contain the database and the configurations used during acquisition
The edit/ directory will be used by gautoedit (graphical editing).
The cal/ directory will be used for calibration calculations.
- cal/rotate (time-dependent heading correction stored here)
- cal/watertrk (watertrack calibration)
- cal/botmtrk (bottom track calibration)
- cal/heading (not used by quick_adcp.py)
The nav/ directory will be used for navigation calculations, including smoothed reference layer and plots of same.
The grid/ directory will be used to grid the data for plotting.
The quality/ directory contains Matlab scripts for plotting on-station and underway profile statistics; this is a good place to stage your own QC investigations.
The contour/ directory is used to store data suitable for making contour plots (eg. 15 minute averages of 10-20m vertical bins)
The vector/ directory is used to store coarser averages suitable for vector plots (eg hourly averages of 50m vertical bins)
The stick/ directory contains programs to make summary plots of some specific spectral information, treating the data as a time-series (not used by quick_adcp.py)
Adcptree.py has other switches to use for other datasets:
adcptree.py options
+---------------------+--------------+---------------------------------+
| Data Acquisition | File type | switches |
+=====================+==============+=================================+
| VmDAS | LTA, STA | --datatype LTA |
+---------------------+--------------+---------------------------------+
| VmDAS | ENX | --datatype ENX |
+---------------------+--------------+---------------------------------+
| VmDAS | ENS | --datatype ENS |
+---------------------+--------------+---------------------------------+
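For example, to stage a processing directory for VmDAS LTA files (the directory name is illustrative):

% adcptree.py vg0304_lta --datatype LTA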
Only pingdata have a dedicated executable program (loadping) to put the data in the database. All other data use a two-part system, where
- Matlab is used to create *bin (data) and *cmd (instructions) files with the averaged data. Each bin and cmd pair is functionally equivalent to a pingdata or LTA file.
- an executable “ldcodas” is used to load the bin and cmd files into the CODAS database
The difference between a pingdata processing directory and any other processing directory (before any processing) is the addition of some files to the “load” subdirectory.
All non-pingdata processing directories have the following additional files:
- load/ldcodas.cnt (documented control file for ldcodas)
- load/ldcodas_path.cnt (documented control file for ldcodas, different usage)
- load/vmadcp.def (data definition file)
Additional files in processing directory:

+-----------+------------------------------------+--------------------------------------------+
| File type | extra file                         | description                                |
+===========+====================================+============================================+
| pingdata  | [none]                             | no extra files                             |
+-----------+------------------------------------+--------------------------------------------+
| LTA, STA  | load/load_lta.m                    | example file to make *bin, *cmd            |
|           | load/load_lta_manual.m             | example file to make *bin, *cmd            |
+-----------+------------------------------------+--------------------------------------------+
| ENX, ENS  | load/load_ens.m                    | example file to make *bin, *cmd            |
+-----------+------------------------------------+--------------------------------------------+
| uhdas     | config/cruise_cfg.m                | serial setup info                          |
|           | config/cruise_proc.m               | transducer orientation, scale factor       |
|           | load/load_uh.m                     | template for file to make *bin, *cmd       |
|           | load/ldcodas.cnt                   | template for uhdas ldcodas                 |
+-----------+------------------------------------+--------------------------------------------+
| uhdas     | cal/rotate/plot_hcorrstats_all.m   | example for heading correction plot        |
|           | cal/rotate/print_hcorrstats_all.m  | example for heading correction statistics  |
+-----------+------------------------------------+--------------------------------------------+
NOTE: This document does not address the setup of UHDAS processing from scratch. Under most circumstances, a UHDAS dataset brought back from sea is already processed, and all that remains is manual post-processing.
If you have a UHDAS dataset from sea, you should familiarize yourself with CODAS processing by running through the LTA demo. Start by reading overview of CODAS processing using quick_adcp.py. This document includes links to ‘help’ and demo data files.
A UHDAS demo for post-processing (directory and instructions) is located here. Do not attempt to post-process UHDAS data unless you are familiar with CODAS processing steps (heading correction, calibration, editing).
If you feel you need to reprocess a UHDAS dataset from scratch, email (hummon@hawaii.edu) and we can take it from there.
For completeness, this table shows the additional files staged by adcptree.py:
Additional files in processing directory:

+-----------+------------------------------------+--------------------------------------------+
| File type | extra file                         | description                                |
+===========+====================================+============================================+
| uhdas     | config/cruise_cfg.m                | serial setup info                          |
|           | config/cruise_proc.m               | transducer orientation, scale factor       |
|           | load/load_uh.m                     | template for file to make *bin, *cmd       |
|           | load/ldcodas.cnt                   | template for uhdas ldcodas                 |
+-----------+------------------------------------+--------------------------------------------+
| uhdas     | cal/rotate/plot_hcorrstats_all.m   | example for heading correction plot        |
|           | cal/rotate/print_hcorrstats_all.m  | example for heading correction statistics  |
+-----------+------------------------------------+--------------------------------------------+
Quick_adcp.py contains quite a bit of documentation about itself.
Running “quick_adcp.py --help” yields more information about its own documentation:
run this to see this
-------------- ------------------
quick_adcp.py --help : (this page)
:
quick_adcp.py --overview : introduction to quick_adcp.py
: processing steps
:
:
:
quick_adcp.py --howto_ping : HOWTO for pingdata
quick_adcp.py --howto_lta : HOWTO for LTA data
:
quick_adcp.py --tips : tips for new users
:
quick_adcp.py --varvals : print variables and current values
quick_adcp.py --vardoc : print documentation for variables
:
: simple example for:
: ------------------
quick_adcp.py --commands pingdata : pingdata files
quick_adcp.py --commands LTA : LTA or STA files (averaged)
:
quick_adcp.py --commands singleping : single ping ADCP data overview
quick_adcp.py --commands ENX : ENX files (earth coords)
quick_adcp.py --commands ENS : ENS files (beam coords)
:
quick_adcp.py --commands UHDAS : UHDAS processing
quick_adcp.py --commands template.txt : template for record-keeping
unsupported/experimental:
quick_adcp.py --commands HDSS : HDSS files (Pinkel's Revelle sonars)
NOTE to Windows users: Make sure the executable path to the CODAS binaries
comes before the executable path to quick_adcp.py
NOTES:
- All these switches use TWO dashes attached to a word. If you use one
dash or leave a space afterwards, it will fail.
- Wild cards:
- use quotes on the command line: "pingdata.???" or "*.LTA"
- do not quote in the control file
- There is a useful little perl script called "linkping.prl" which might
help in getting pingdata files named sequentially if your cruise has
multiple legs.
- APOLOGIES:
(1) Matlab routines do not display feedback to the screen. The only
exception is in the 'load' stage, if you have Tk configured,
you can use "--tktail" and a Tk window will pop up with feedback.
In general however, the only way to see what is going on is to
read the messages on the screen (from quick_adcp.py) to see where
to look for more information.
(2) Only pingdata and UHDAS have automatic generation of a heading
correction (from a gps-aided device such as Ashtech). There is
a text file describing how to approach the heading correction
if you look in the documentation (there is a link in the file
created by adcptree.py)
Links to the output are here:
- quick_adcp.py --help (same as above)
- quick_adcp.py --overview (same as below)
- quick_adcp.py --howto_ping
- quick_adcp.py --howto_lta
- quick_adcp.py --tips
- quick_adcp.py --vardoc
- quick_adcp.py --commands pingdata
- quick_adcp.py --commands LTA
- quick_adcp.py --commands singleping
- quick_adcp.py --commands ENX
- quick_adcp.py --commands ENS
- quick_adcp.py --commands UHDAS
- quick_adcp.py --commands HDSS
- quick_adcp.py --commands template.txt
Quick_adcp.py is a python script that runs all the usual CODAS
processing steps, providing a good place to start processing ADCP
data. For data sets with no glitches (i.e. all the navigation is
present, no repeated timestamps, data files sorted in ascii order are
also in time order, etc), this will allow fast, documented, and
repeatable processing for a first look at the data or if you're
lucky, for final processing. This includes setting up the editing
directory so 'gautoedit' will run. The entire dataset can usually be
processed completely with just the following:
- adcptree.py (to set up the processing directory)
- quick_adcp.py (to do the processing, and redo steps)
- gautoedit (to do the editing)
Quick_adcp.py uses commandline arguments to specify input information
(file names, database name, proc_yearbase, etc). You can skip a step
by typing 'n' (for 'no', do not do it) or you can stop at any stage
by typing 'q' (for 'quit'). A log is kept as steps are run; it is in
the processing directory and has the suffix ".runlog"
assumes
- adcptree.py was already run (see adcptree.py for help)
Be sure to specify "datatype" and "instclass" if necessary
- quick_adcp.py is run from the adcp processing directory (PROCDIR)
* it writes a log of steps run and values used (see
PROCDIR/CRUISENAME.runlog)
- data location depends on data type, as follows:
* default is PROCDIR/ping
pingfile note: "linkping.py" is useful to link pingdata from
other locations to ./ping as ascii-sortable pingdata.* (unix only)
- data files are found by using the default directory or specified
directory ('datadir') and wildcard expansion 'datafile_glob'
(wildcard designation should be quoted if called on the command line)
(wildcard designation is not quoted if using a control file)
- sets up new "gautoedit" files:
* makes asetup.m aflagit_setup.m in edit/
* (optionally) makes setup.m in edit/ with all thresholds (except
bottom) disabled. This is useful if you are going to use the
old waterfall editing strictly as a tool to _manually_ flag
bins or profiles but don't want it to guess what to flag.
- makes and runs setflags.tmp with PG cutoff set
- allows frequently-run steps to be rerun without querying (specifically,
applying editing flags, rerunning nav steps, rerunning calibration,
making new matlab files for plotting)
CODAS shipboard ADCP processing steps: (see CODAS_pingdemo.html for details)
- scan: get the time range of the scanned data
- load: put the data in the database, get the time range of the database
- (set up gautoedit files, run setflags to flag bad PG at the outset)
- ubprint (for pingdata) (or cat navigation if VMDAS or UHDAS)
- ashrot (for pingdata)
- rotate
- nav steps: (choose to use either refsm or smoothr for navigation.) ::
adcpsect # these three must be run anyway for
refabs # reflayer diagnostic plots and
smoothr # watertracking or recip to work)
(refsm, if specified)
- plot reference layer (default:bins 4-12, or specified on command line)
- putnav; (uses whatever was specified: refsm or smoothr)
- watertrack
- bottom track
- lists and plots temperature
- timegrid (for standardized matlab files)
- standardized matlab vector and contour files
apply calibrations to ADCP data:
(you have to look at the watertrack and bottom track calibration
to see what rotation or amplitude factors might be necessary)
(you have to go into edit/ and run gautoedit to remove the bad data)
(then reprocessing uses "steps2rerun" to run these steps)
"rotate" - rotate
"navsteps" (choose either refsm or smoothr for navigation.)
- adcpsect (1) # these three must be run anyway for
- refabs (2) # reflayer diagnostic plots and
- smoothr (3) # watertracking or recip to work)
- refsm # for navigation, if specified
- plot reference layer
- putnav; # from specified (refsm, smoothr)
"calib" - watertrack
- bottom track
"matfiles"
- timegrid (for standardized matlab files)
- standardized matlab vector and contour files
after editing, apply editing to database:
- "apply_edit" - applies ascii files edit/*.asc to database
- "navsteps", calib, matfiles (as above)
Notes:
1) Run quick_adcp.py in one window, with another for investigating
problems. Be sure to check the output of scan before trying to load,
in case there are problems with timestamps.
2) For data other than pingdata, quick_adcp.py will run matlab to
create intermediate files that are then loaded into the database
(*.cmd and *.bin). The navigation file is created by catting the
".gps1" files from the load/ subdirectory (after ldcodas has been
run). In this case the nav file has a .gps suffix. Otherwise,
the nav file comes from ubprint and has a ".ags" suffix.
3) If processing single-ping data (ENS, ENX, or UHDAS):
---> DO NOT DELETE the BLKINFO.txt file or blkinfo.mat <---
in the load/ directory.
4) running quick_adcp.py more than once:
After a single-pass load of the data, you may want to apply editing,
run the navigation steps, apply a rotation, rerun the calibration
steps, or make new matlab files. use "steps2rerun"
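For example (mirroring the command used later in this document; fill in your own yearbase), a typical rerun after editing looks like:

% quick_adcp.py --use_refsm --yearbase xxxx --steps2rerun apply_edit:navsteps:calib --auto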
Prior to loading the database, we scan the data files in order to determine whether there are issues with timestamps that need to be addressed. The “Scan” step performs two operations:
- list time ranges and perhaps other information about the data files
- create a file with the time range of the data
Pingdata are scanned using the executable “scanping”. Go to the original CODAS processing document and look at the “scan” section for a complete description of the output of “scanping”.
All other data (VmDAS and UHDAS) are scanned by a matlab program, scan.m. A stub (a little program that contains configuration info and then calls the real one) is written into the scan/ directory by quick_adcp.py and is then run. The name of the program is “scan_tmp.m”. The main point of this step is to get the time range.
If the database name of this example is “ademo”, scanping and “scan.m” both write the output to
- ademo.scn (contains timestamp information about the data files)
- ademo.tr (a human-readable time range, extracted from ademo.scn)
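A quick sanity check of the scan results (paths assume the directory layout created by adcptree.py):

% cat scan/ademo.tr    # human-readable time range of the whole dataset
% less scan/ademo.scn  # per-file timestamp information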
NOTE: In CODAS, times come in two flavors.
CODAS time stamps:
- year/month/day hour:minute:second (such as 2007/04/17 14:02:32)
- zero-based decimal day (i.e. January 1 noon UTC is 0.5, not 1.5)
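For example, the timestamp above, 2007/04/17 14:02:32, corresponds to zero-based decimal day 106.5851: 106 whole days elapse between 2007/01/01 00:00 and 2007/04/17 00:00, and 14:02:32 is (14*3600 + 2*60 + 32)/86400 = 0.5851 of a day.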
The “load” step creates the database (i.e. “loads” the data into the database). For pingdata, a single executable program (“loadping”) reads the pingdata bytes and stores them in the proper locations in the database (creating the blk files in the adcpdb/ directory). For all other data types, we have one universal load-the-database program, “ldcodas”. The “ldcodas” program reads data from the load directory and creates the database from those files. The files read by ldcodas are stored in the load directory as pairs of files (*cmd and *bin), with the cmd files containing instructions to “ldcodas” and the bin files containing the data.
For LTA data (the only other pre-averaged data), a matlab program translates the LTA bytes into the *bin and *cmd and “ldcodas” loads the data into the database.
For ENX and ENS data (VmDAS single-ping data), another matlab program gathers groups of pings (typically 5 minutes long), edits the single-ping data, averages it, and writes information to the *bin and *cmd files. Then (part 2) ldcodas loads the database. ENX and ENS data already contain navigation and have corrected timestamps; ENS may or may not have heading.
HDSS data are closest to ENX data, and yet another matlab program can read these files, edit (to some extent) and average the data, and write the bin and cmd files. Again, (part 2) ldcodas creates the database. HDSS data are more problematic, and a file in the load directory must be edited prior to processing.
For UHDAS data, even more configuration information is required, since the raw data do not yet have corrected timestamps or any ancillary data. Processing UHDAS data from scratch requires a good grounding in CODAS processing, which can be obtained by working through examples with LTA data, then ENX data. Fundamentally, however, yet another matlab program reads the single-ping adcp data (corrects time and adds navigation and attitude), edits the single pings, averages the data, and writes the bin and cmd files. Again, (part 2) ldcodas creates the database.
Both programs that create a database (loadping and ldcodas) need to know how to read the data. That information comes from a “definition” file, containing data and structure definitions. For pingdata, the definition file depends on the “user buffer” used during acquisition; the various definition files are already in the adcpdb/ directory. See section 5.2 in the postscript CODAS manual for details about pingdata and user buffers, and this link for the description in the original pingdata demo. The “ldcodas” program uses one definition file, called “vmadcp.def”, which is located in the load directory.
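quick_adcp.py normally runs these steps for you, but as a hedged sketch of the two-part load for LTA data (assuming the usual CODAS convention that an executable takes its control file as a command-line argument):

% cd load
% matlab -nodesktop      # part 1: edit and run load_lta.m to write the *.bin and *.cmd pairs
% ldcodas ldcodas.cnt    # part 2: load the bin/cmd pairs into the CODAS database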
Pingdata may have useful information, such as better navigation, secondary navigation, or heading correction, embedded in a specific portion of memory called the “user buffer”. The contents of the “user buffer” depend on the “user exit” program run during acquisition. If ue4 was used, “ubprint” can be used to extract the improved navigation and ashtech heading correction, if they exist.
If you are processing pingdata for the first time, you are advised to consult the original pingdata demo processing documentation frequently.
After any quick_adcp.py step, you may want to check the database to see what changes you have made and to ensure that everything is working as expected. This is not a step run by quick_adcp.py. You can go to another commandline window and run this command as you work your way through the quick_adcp.py steps.
An ascii menu-driven commandline utility exists to probe the database and determine what is stored in it. There is no substitute for this important but old-fashioned program. On the command line, you must specify the database name, including the path, such as:
showdb ../adcpdb/ademo
The original pingdata documentation explains showdb in detail, using the original pingdata demo, in which a database was created using two pingdata files. There are various examples of showdb throughout the original pingdata demo instructions.
You can use showdb to check various aspects of the database. For instance, immediately after loading, the database will contain measured velocities, depths, heading, and various configuration information, but the positions will show MAX, i.e. bad values: positions are loaded in a later step.
Accurate heading is essential for high-quality ADCP data. An error in heading of theta degrees causes an error in the cross-track direction that scales as
error = shipspeed * sin(theta).
For a ship travelling at typical cruising speeds, i.e. 5m/s (10kts), a one degree error in heading causes a cross-track error of 10cm/s. Gyros, especially older gyros and especially in low latitudes, can wander significantly, causing completely spurious cross-track errors that manifest as “eddies” in the data.
An example of a few degrees offset is shown deep in the documentation for the gui editing utility, “gautoedit”.
This page graphically illustrates the difference an error of 2 degrees makes on a dataset.
Headings can come from a variety of sources, some more accurate than others. Your access to headings depends on a variety of factors:
- the acquisition system (DAS2.48, DAS2.48+ue4, VmDAS, UHDAS)
- the heading devices installed (eg. Ashtech, Seapath, POSMV, various optical gyros)
- what happened during the cruise (instrument failed? data recorded elsewhere?)
You need to know (or find out) the sources of heading for your dataset, what is available where (which files contain which information), and what heading source was used for processing. If there is only one heading device, your options are limited. If there are both gyro and some other heading source, you can compare them to see what the differences are in quality and behavior, and either correct the data (if acquired with gyro) to the other source, or (if processed with the other source) possibly make a statement about that instrument’s data quality.
If you have pingdata, and if ue4 was used, and if there was an ashtech on board, the likely source of heading correction data is extracted from ubprint. After loading the database, the time-dependent heading correction can be examined, (corrected if necessary), and applied to the database.
If you have VmDAS data, you can generally only have two sources of heading if one is a synchro gyro input. In either case, the “other” heading source data are probably written into the “N2R” files. If gyro was used as the primary heading device, a time-dependent heading correction can be applied to the database.
If you have UHDAS data, the strategy is to use gyro data for the initial conversion from beam to earth coordinates, and correct that with the 5-minute average of the difference between the gyro and the other heading device. In older processing, this was done in a batch mode, rotating the database after the database was created. In newer UHDAS processing, the heading correction is built into the averages (bin and cmd files) before they are loaded into the database, and the values used are recorded to disk (cal/rotate/ens_hcorr.ang).
Running quick_adcp.py, the “rotation” stage of preliminary processing is as follows:
Initial Rotation in quick_adcp.py

+-------------+-------------------------+------------+--------------------------+
| acquisition | heading correction file | processing | applied                  |
+=============+=========================+============+==========================+
| das2.48+ue4 | cal/rotate/ademo.ang    | batch      | using “rotate”           |
+-------------+-------------------------+------------+--------------------------+
| ENX, ENS    | (none)                  | batch      | (none)                   |
+-------------+-------------------------+------------+--------------------------+
| LTA, STA    | (none)                  | batch      | (none)                   |
+-------------+-------------------------+------------+--------------------------+
| UHDAS       | hcorr.ang               | batch      | using “rotate”           |
+-------------+-------------------------+------------+--------------------------+
| UHDAS       | ens_hcorr.ang           | at-sea     | embedded in the averages |
+-------------+-------------------------+------------+--------------------------+
Notes:
- A time-dependent heading correction file can be generated and applied after the first-pass processing is complete.
- A constant angle rotation is usually necessary in the second-pass steps.
- You can use showdb to examine the correction value (ANCILLARY_2).
- You can extract the original heading and the total correction presently in the database using lst_hdg.
- You can return the heading correction to zero by one of two methods:
  - modify “rotate.cnt” to use the key word “unrotate”
  - take the corrections listed by lst_hdg (decimal day in the first column), reverse the sign of the heading corrections (i.e. -1*(hcorr)), and apply the result as a time-dependent angle file (see the sketch below)
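A minimal sketch of the second method, assuming the lst_hdg listing has decimal day in the first column and the total heading correction in the second (the file names here are hypothetical):

% awk '{print $1, -1*$2}' lsthdg_output.txt > undo_hcorr.ang

Then apply undo_hcorr.ang to the database as a time-dependent angle file.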
The rotation step takes place in the cal/rotate directory. If there is a time-dependent heading correction file, plot the corrections and make sure you are applying something reasonable to the database. For pingdata, the program “ashrot.m” will write out ashtech statistics and make a plot of the heading correction. For UHDAS, you probably have to modify the two programs plot_hcorrstats_all.m and print_hcorrstats_all.m (in cal/rotate) and run them to get the statistics (such as they are) and the plots.
Bottom track calibration uses bottom track data (if there is any) to determine the remaining transducer offset to make the ship track over ground match the track measured by the ADCP.
Watertrack calibration uses the idea that the water velocities should not look any different whether the ship is stopped or moving, turning or going straight. It uses parameters to find times when a significant acceleration was detected (turn and/or speed change) and calculates what rotation and scale factor would be necessary to make the ocean velocity look the same before and after the acceleration. The calculation is necessarily noisy, and depends on ship behavior. For instance, there are probably no watertrack calibration points on a transit, and many on a CTD hydrography cruise or a bathymetric mapping cruise.
The parameters used to detect watertrack calibration points can be tuned, but quick_adcp.py uses a particular set and runs the calibration steps during the first pass. These are diagnostics, and give the user some information about further rotation or scaling necessary. More detail about bottom track and watertrack calibrations is contained in the original pingdata demo document.
After you’ve run quick_adcp.py for the first pass, you must edit the data (to remove things like wire interference, data below the bottom, bad profiles), investigate the necessity for further rotation, and decide whether a scale factor is required. This is an iterative process.
If there is a large constant heading error, or a time-dependent heading correction is necessary, it is much harder to edit the data, because changes in speed will cause changes in the ocean velocity which may look like errors.
To converge on your final dataset,
Ensure that the time-dependent heading correction (if it exists) is as good as possible. If you change the heading correction, rerun quick_adcp.py with --steps2rerun navsteps:calib
Look at bottom track and watertrack calculations, and apply any gross (larger than 1/2 degree) phase correction; a hedged rotation sketch follows this list. If you rotate the database, rerun quick_adcp.py with --steps2rerun navsteps:calib
Apply any scale factor if necessary (see below). This “should” be unnecessary for ocean surveyors, but is not unexpected for fixed transducer heads, such as NB150 or WH300.
Go through the data with gautoedit deleting obvious additional bad data. Apply the editing by running quick_adcp.py with these options (fill in your yearbase)
quick_adcp.py --use_refsm --yearbase xxxx --steps2rerun apply_edit:navsteps:calib --auto
In the last editing pass, you should click “do not show autoedit editing” so you can see what is actually in the database, not the effect of the gautoedit defaults
Repeat until the results do not change: check editing; apply calibrations. (Normally there will be up to 3 passes through the editing, with the last requiring no additional flagging, and up to two applications of phase and scale factor calibration values.)
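If a constant rotation is called for, a hedged sketch of the rerun follows; the --rotate_angle switch is an assumption here, so check "quick_adcp.py --vardoc" for the exact variable controlling the rotation angle:

% quick_adcp.py --yearbase xxxx --rotate_angle 0.5 --steps2rerun rotate:navsteps:calib --auto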
More discussion follows:
Heading correction has two components: a time-dependent correction of gyro to Ashtech (POSMV, or Seapath), and a remaining constant offset. See the Heading Correction section for more detail.
To inspect the heading correction used in an at-sea UHDAS processing directory, go to the cal/rotate directory and edit (and run in matlab) the file called plot_hcorrstats_all.m, then look at hcorr.ps (the output file).
If you need to fix the heading correction (eg. there are gaps where no heading correction was applied), you must remove the already-existing heading correction by “unrotating” the database. Then fix the heading correction file, and rotate using the new file. “Unrotating” can be done by using the “unrotate” option in rotate.cnt, or one can rotate by the negative of the values used (i.e. rotate by -1 times the values in ens_hcorr.ang).
After the time-dependent correction is made, there may still be a constant offset. Estimates of that value are in the “phase” in watertrack and bottom track calibration files, or from “recip.m” (if there is a reciprocal track available). More detail about bottom track and watertrack calibrations are contained in the original pingdata demo document.
For a fixed transducer instrument (NB, BB, or WH) a scale factor may be necessary. Check the thermistor temperature to make sure the thermistor is not broken. You may need to fix the speed of sound. It is possible that application of constant scale factor is all that is necessary. See the discussion about thermistor checking in the original pingdata demo document.
If, after editing, the scale factor for an ocean surveyor is still greater than 1%, either there is still a problem with the data (eg. underway bias not edited out) or there is a problem with the instrument. You may need to look at additional datasets or talk to others who have used data from this instrument to try and determine whether you have a problem.
Quick_adcp.py sets up the edit directory for the old-style editing (deprecated) and for the gui editing tool, gautoedit. This tool was designed to screen data for things like ringing, on-station wire interference, jittery navigation or velocities, and bottom interference when bottom tracking was not on. Installing m_map will help with the visualization.
A UHDAS at-sea processing directory already has the defaults applied. You can use showdb to see that this is true.
NOTE: “Flagging data as bad” is mostly a one-way trip: you can add flags to the collection, but if you want to remove them, things get complicated. This page discusses various scenarios.
Use gautoedit to look through the database and decide whether any of the missing data should be unflagged, or whether you just need to flag some additional data. Click on the button “do not show autoedit editing” to see what is in the database. Click on “do not show profile flags” to see the original data.
When you are finished, apply the editing as
quick_adcp.py --use_refsm --yearbase xxxx --steps2rerun apply_edit:navsteps:calib --auto
(fill in your yearbase)
The last time you run gautoedit, you should click “do not show autoedit editing” so you can see what is actually in the database. You should not see any of these signatures in the ocean velocities
- transitions between on/off station
- ship turn
- bias in the direction of ship motion (usually with low PG)
- big stripes of missing data at turns or accelerations
Your watertrack and bottom track calibrations should have phase within a few tenths of a degree of zero, with all estimates (mean, median, watertrack and bottom track) also agreeing to within a few tenths of a degree (if there are enough points). Scale factor should be within a fraction of a percent of 1.00 (0.997-1.003) and different estimates should agree (mean, median, watertrack and bottom track).
The executable program written to extract data from the database is called “adcpsect”. There is a python wrapper for it, adcpsect.py, which may be useful. Adcpsect outputs velocity data averaged in time (or longitude or latitude) and depth. You can extract every bin and every profile as well. Adcpsect writes out matlab files containing velocity (editing applied). An adcpsect control file example is documented here, and the output format is documented here.
For the moment, “getmat” is the only tool to extract other variables from the database. This program writes data to the disk as a collection of simple matlab files. Use the run_agetmat wrapper to extract data from the database from within matlab. Matlab access to CODAS (or raw data files) is documented here.
Command line tools:

+-------------+------------------------------------------------+
| function    | explanation                                    |
+=============+================================================+
| run_agetmat | extract decimal day range from database        |
+-------------+------------------------------------------------+
| gxytzoom    | select a subset from lon, lat, dday            |
+-------------+------------------------------------------------+
| aplotit     | configurable pcolor plots with colorbar        |
+-------------+------------------------------------------------+
| autocont    | contour u,v                                    |
+-------------+------------------------------------------------+
| autovect    | vector plot of u,v over topography (if exists) |
+-------------+------------------------------------------------+
Use matlab ‘help’ for these files.
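For example, from within matlab:

% matlab -nodesktop
>> help run_agetmat
>> help autocont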