2.7.3.1. Compatibility: Older cruises¶
2.7.3.1.1. Post-processing an older cruise¶
Every effort is made for quick_adcp.py:

- to be internally consistent
- to prompt the user if something is missing
- to have decent defaults for most common circumstances
If you run the command quick_adcp.py --help, the processing options should look like this:
To see commands for various data types, use --commands as follows.

quick_adcp.py --commands postproc  : UHDAS post-processing
quick_adcp.py --commands uhdaspy   : UHDAS processing
quick_adcp.py --commands ltapy     : LTA or STA files (averaged) (**)
quick_adcp.py --commands enrpy     : ENR files (beam coords)
quick_adcp.py --commands pingdata  : original pingdata demo
(**) LTA and STA files can now be processed more quickly using vmdas_quick_ltaproc.py. Try typing vmdas_quick_ltaproc.py --help for more information.
If the command quick_adcp.py --help produces any references to Matlab processing, then your programs are old and this version of the documentation will have limited use. You should have your own copy of the documentation in the directory where you put the CODAS software. That is the appropriate resource, because it should be consistent with your own code. This would be an excellent time to install a new Virtual Computer so you are running up-to-date code.
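A quick way to check, from a Unix-like shell (this one-liner is just a convenience, not part of CODAS):

quick_adcp.py --help | grep -i matlab

If it prints nothing, your installation is Python-only and this documentation applies.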
Storing metadata

When quick_adcp.py runs, it caches (stores) some command-line options in a file called dbinfo.txt. This file is used to look up values that have no defaults and were specified during the original run, so you do not have to type them again each time. In some cases these values may need to change; quick_adcp.py expects a specific format and specific values in the file, so do not edit this file by hand.
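It is fine to look at what has been cached (just do not edit it); the file normally sits at the top level of the processing directory:

cat dbinfo.txt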
Example

Information about the dataset, such as year and ADCP model, is cached in a file called "dbinfo.txt" during processing. This file was only introduced with Python processing, so if you are post-processing a UHDAS dataset that was processed at sea by an earlier version of the software (e.g. using Matlab), then there is no dbinfo.txt file. If "dbinfo.txt" does not exist, you must supply the information so quick_adcp.py can generate the file properly.
The best commands to run are the ones that reset the navigation (improving on an older approach to ship speeds) and recalculate the calibrations. Each time you run it, let quick_adcp.py prompt you (complain) as another variable is discovered missing: go ahead and specify the missing information as needed. Once quick_adcp.py has enough information, it will write the values out to dbinfo.txt and proceed. You will probably have to provide values for options such as --yearbase, --cruisename, --sonar, and --beamangle. This step is a small but unfortunate requirement for compatibility of newer software with older processing directories.
Run quick_adcp.py --steps2rerun navsteps:calib --auto. At each error, add the missing information to the quick_adcp.py command line and run it again, until it completes:
| ERROR | solution: add to command |
|---|---|
| ERROR -- must select datatype | --datatype uhdas |
| ERROR -- must set "sonar" | --sonar os38nb |
| ERROR -- must set "beamangle" | --beamangle 30 (see NOTE #2) |
| ERROR -- must set "yearbase" | --yearbase 2010 |
| ERROR -- must set "ens_len" | --ens_len 300 (see NOTE #1) |
| ERROR -- must set "cruisename" | --cruisename km1001c |
NOTE #1: The variable ens_len (seconds per averaging ensemble) must match the averaging length in seconds that was used when the data were originally processed.
NOTE #2: Beam angles are usually (but not always) as follows:

| instrument | beam angle (degrees) |
|---|---|
| os38, os75, os150 | 30 |
| wh300, wh600, wh1200 | 20 |
| bb75, bb150, bb300, bb600 | usually 30 |
| nb150, nb300 | usually 30 |
Example:
quick_adcp.py --steps2rerun calib --datatype uhdas --cruisename km1001c --yearbase 2010 --sonar os38nb --beamangle 30 --auto
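Putting this together, an illustrative session might look like the following. The error messages are the ones listed in the table above (the order in which they appear may differ), and os38nb, 30, 2010, 300, and km1001c are just the example values used in this section:

quick_adcp.py --steps2rerun navsteps:calib --auto
(stops with "ERROR -- must select datatype"; add the flag and rerun)
quick_adcp.py --steps2rerun navsteps:calib --datatype uhdas --auto
(stops with "ERROR -- must set sonar"; add the flag and rerun)
quick_adcp.py --steps2rerun navsteps:calib --datatype uhdas --sonar os38nb --auto
(continue until all required values have been supplied)
quick_adcp.py --steps2rerun navsteps:calib --datatype uhdas --sonar os38nb --beamangle 30 --yearbase 2010 --ens_len 300 --cruisename km1001c --auto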
2.7.3.1.2. Single-ping (reprocessing)¶
The demo dataset and tutorial for UHDAS Single-ping Commandline processing have many of the details needed to reprocess UHDAS data from scratch. Additional considerations might be:
Do you need to:

- link (join) cruise segments? (See link_uhdaslegs.py)
- change any of the settings or configuration values?
  - heading alignment (transducer angle):
    - look at calibrations from original processing
  - ADCP-GPS horizontal offset:
    - rerun calibrations; look at surrounding cruises
    - this number can (often) be estimated by the processing
- look at the directory names and "message" types in the original cruise
- look at the config/CRUISE_proc.py file and compare names; fill in "instrument" and "message" for position, heading, and heading correction (see the sketch at the end of this section)
These are cruise-dependent.
Contact uhdas@hawaii.edu for help.
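For the last two items above, a rough sketch of the checks (run from the top of the original UHDAS cruise directory; the raw/ layout shown is the usual UHDAS convention, so adjust if your directory differs):

ls raw/
grep -n -E "instrument|message" config/*_proc.py

The first command lists the serial-instrument directories that were actually logged; the second shows which instrument and message names the at-sea processing used, so you can fill in the matching values for position, heading, and heading correction.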