UH LADCP Documentation


Processing Computer: Data, processing, plotting and sharing


There are several important steps that have to be performed after each download of newly acquired LADCP data:

Because paths may change from one cruise to another, we advise that you write your own file containing all the commands specific to your computer environment at the beginning of the cruise. Examples of such a file used in previous cruises are given in Previous Cruises. Put yours in /home/science/current_cruise/scripts for future reference.

(1) LADCP data downloaded to acquisition computer

The software on the acquisition computer automatically copies the data to a backup location if a directory has been specified. We will usually have a backup disk /media/AC_backup (Acquisition Computer backup) with a directory on it for backup. See these instructions for details about formatting a new disk and setting up such a directory.

cartoon: acquisition computer roles

(2) Copying data (general)

We use the unix command rsync to copy directories for archival or backup purposes. We have written a python script to make the use of rsync less dangerous for simple directory updates. The script will make the contents of the destination directory the same as the contents of the source directory. Use it as:

usage: copydir.py [options] source dest
   copydir.py   source dest
   copydir.py   --test source dest
   copydir.py   --ssh username source computer:/path
   copydir.py   --ssh username computer:/path dest

The program copydir.py is designed to

  • copy the contents of source_dir into the directory dest_dir
  • only copy files that are new (or changed)
  • delete files in dest_dir that no longer exist in source_dir

Type man rsync and rsync --help for more information about rsync. It is useful to know about rsync as a tool for your own use.
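The mirroring behavior described above can be sketched in Python. This is a minimal illustration of what "make the destination the same as the source" means, not the actual copydir.py source (the real script delegates to rsync):

```python
import os
import shutil

def mirror(source_dir, dest_dir):
    """Make dest_dir's contents identical to source_dir's contents.

    Illustration only: the real copydir.py wraps rsync, which uses
    the same new-or-changed and delete-extraneous logic.
    """
    os.makedirs(dest_dir, exist_ok=True)
    src_names = set(os.listdir(source_dir))
    # delete files in dest_dir that no longer exist in source_dir
    for name in os.listdir(dest_dir):
        if name not in src_names:
            path = os.path.join(dest_dir, name)
            shutil.rmtree(path) if os.path.isdir(path) else os.remove(path)
    # copy only files that are new or changed (by mtime and size)
    for name in src_names:
        src = os.path.join(source_dir, name)
        dst = os.path.join(dest_dir, name)
        if os.path.isdir(src):
            mirror(src, dst)
        elif (not os.path.exists(dst)
              or os.path.getmtime(src) > os.path.getmtime(dst)
              or os.path.getsize(src) != os.path.getsize(dst)):
            shutil.copy2(src, dst)
```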


The presence or absence of a trailing slash (“/”) after the source directory affects the behavior of the command. Test before you use it, either way!!
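With rsync itself, `source/` copies the contents of `source` into the destination, while `source` (no slash) creates a `source` subdirectory inside the destination. If you write your own wrapper, a small helper can force the "copy contents" behavior regardless of how the path was typed (a hypothetical helper, not part of copydir.py):

```python
def as_contents(src_path):
    """Return src_path with exactly one trailing slash, so that rsync
    copies the *contents* of the directory rather than creating the
    directory itself inside the destination.
    Hypothetical helper for a home-grown wrapper, not part of copydir.py.
    """
    return src_path.rstrip("/") + "/"
```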

(3) Copy LADCP data from the acquisition computer

At this stage the LADCP data have been downloaded from the LADCP to the acquisition computer and the LADCP operator has moved from that computer to the processing computer to do the rest of the work.

The first thing to do before processing the LADCP data is to copy the data files from the acquisition computer (probably located in the wet lab) to the processing computer (where you will sit, probably located in the computer lab).

Proceeding from the processing computer, execute:

copydir.py  path_to_ladcp_data /home/science/current_cruise/data/ladcp

where path_to_ladcp_data is the full path of the directory containing the raw LADCP data on the acquisition computer. The path path_to_ladcp_data may vary from cruise to cruise. For example, during the cruise I5S_2009:

    copydir.py /net/nini.local/home/science/current_cruise_logging  /home/science/current_cruise/data/ladcp

# where
#  "nini" was the acquisition computer
#  (known on the network as "nini.local")
# "/net/nini.local" is the path to exported directories on
# "nini" using the "autofs" protocol

cartoon: back up LADCP data

(4) Download ancillary data

The next step is to download the ancillary data necessary to process the LADCP data. In principle, you could process the data with the LDEO software without any ancillary data (see Section 2.2 in LDEO How-To [PDF]) but that produces processed data of poor quality. At least three ancillary datasets are necessary: CTD time series, CTD vertical profile (2dbar data) and GPS [see more details on ancillary data in LDEO How-To [PDF] and Section Which ancillary data do we need?].

For some cruises, the GPS data are included in the CTD data. In this case, the LDEO processing needs only the CTD data, but we ask that you download the GPS data separately in case there is a problem with the CTD data.

cartoon: processing computer roles

CTD data

The SCRIPPS/ODF group provides two kinds of CTD data that can be used to improve the LADCP processing:

  1. 2dbar “final” profiles – this is the usual CTD product
  2. 1/2 second (or 1-second) time series of CTD data.

Identify the source locations of these datasets and back them up to the following locations:

copydir.py ctd_2dbar_source  /home/science/current_cruise/data/ctd_2db
copydir.py ctd_timeseries_source  /home/science/current_cruise/data/ctd_timeseries

Here is an example of the strategy on I5S_2009.

It is possible that these products don’t have the right format to be used directly for the LADCP processing. In this case, they need to be re-formatted. For instance, during A13.5 (2010), we were given the full time series of CTD data, not a 1-s averaged one. The 1-s averaged time series were then produced independently. Look in previous cruises for more information.
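As an illustration of this re-formatting step, bin-averaging a full-rate CTD time series into 1-s averages might look like the sketch below. The variable names and the assumption that the channels have already been read into arrays are illustrative; the actual file format varies from cruise to cruise:

```python
import numpy as np

def average_to_1s(time_s, pressure, temperature, salinity):
    """Bin-average a full-rate CTD time series (e.g. 24 Hz) into 1-s bins.

    time_s: seconds since the start of the cast; the other arrays are
    the raw CTD channels sampled at the same instants.
    Returns (bin_centers, p_avg, t_avg, s_avg).  A sketch only: the
    real re-formatting depends on the format delivered on each cruise.
    """
    bins = np.floor(time_s).astype(int)      # 1-s bin index per sample
    counts = np.bincount(bins)
    valid = counts > 0
    def binmean(x):
        return np.bincount(bins, weights=x)[valid] / counts[valid]
    bin_centers = np.arange(len(counts))[valid] + 0.5
    return (bin_centers, binmean(pressure),
            binmean(temperature), binmean(salinity))
```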

GPS data

On most CLIVAR cruises there will be a UHDAS shipboard ADCP system installed (see below). UHDAS records GPS data, so by copying the UHDAS shipboard ADCP data you do not need to copy GPS data separately.

On some (Scripps) CLIVAR cruises, longitude and latitude are included in the CTD time series. In that case, you do not need to use separate GPS data at all.

On some cruises (NOAA), longitude and latitude are not included in the CTD time series, so you need to transform the UHDAS shipboard GPS data into an ASCII format that can then be used for LADCP processing. To do that, use the Matlab routine gps_rbin_to_asc.m located in current_cruise_data/data/gps/. Four changes need to be made in this default file; they are detailed in the routine itself.
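The ASCII output step itself is simple. Here is a Python sketch of the same idea, assuming the GPS record has already been read into decimal-day, longitude and latitude arrays; the exact column order and precision expected by the LADCP processing should be checked against gps_rbin_to_asc.m:

```python
def write_gps_ascii(filename, decimal_day, lon, lat):
    """Write GPS fixes as plain ASCII columns: decimal day, lon, lat.

    Sketch only: the column layout expected by the LADCP processing
    should be verified against gps_rbin_to_asc.m for the cruise.
    """
    with open(filename, "w") as f:
        for d, x, y in zip(decimal_day, lon, lat):
            f.write("%12.7f %11.6f %10.6f\n" % (d, x, y))
```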

If a CLIVAR cruise does not have a UHDAS installation, instructions will be provided in a timely manner based on the specifics of the cruise.

Shipboard ADCP data

The University of Hawaii “currents” group (and hence the LADCP operator) is also responsible for the Shipboard Acoustic Doppler Current Profiler data. The shipboard ADCP data may be acquired by one of two packages:

  1. UHDAS: This is written and maintained by the “currents” group at the University of Hawaii. The acquisition computer is a linux computer with the data disk exported using NFS. All shipboard ADCPs of interest are logged with this computer, so there is one source for shipboard ADCP data. Update the Shipboard ADCP data directory by typing:
copydir.py /net/currents/home/data/current_cruise  /home/science/current_cruise/data/shipboard_adcp
  2. VmDAS: This is the software that came with the instrument when it was purchased. It runs on a Windows computer. Each shipboard ADCP will have its own computer. There may be a network location where you can get the data, or you may need to bring a DVD back at the end of the cruise. Copy all the files from the appropriate directory, including all suffixes (*LOG, *VMO, *ENR, *ENS, *ENX, *STA, *LTA), to /home/science/current_cruise/data/shipboard_adcp. If there are multiple ADCPs, make a separate directory for each instrument:
 # if there are two instruments, WH300 and OS75, put the data here:
 #   /home/science/current_cruise/data/shipboard_adcp/wh300
 #   /home/science/current_cruise/data/shipboard_adcp/os75
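The VmDAS copy step can be sketched as follows; the suffix list comes from the text above, while the paths and per-instrument subdirectory names are illustrative assumptions:

```python
import glob
import os
import shutil

# VmDAS output file suffixes listed above
VMDAS_SUFFIXES = ("LOG", "VMO", "ENR", "ENS", "ENX", "STA", "LTA")

def copy_vmdas_files(src_dir, dest_dir):
    """Copy all VmDAS output files from src_dir into dest_dir.

    For multiple instruments, call once per instrument with its own
    destination subdirectory (e.g. .../shipboard_adcp/wh300).
    Returns the sorted list of copied file names.
    """
    os.makedirs(dest_dir, exist_ok=True)
    copied = []
    for suffix in VMDAS_SUFFIXES:
        for path in glob.glob(os.path.join(src_dir, "*." + suffix)):
            shutil.copy2(path, dest_dir)
            copied.append(os.path.basename(path))
    return sorted(copied)
```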

(5) Process LADCP data

Which ancillary data do we need?

Although the LADCP processing can in principle be performed using only the raw LADCP data, the result will be of poor quality, especially with respect to the barotropic component of the flow and the estimated depth. For these reasons, ancillary data are used to constrain the processing.

Two types of CTD data are used: the CTD time series and the CTD vertical profile. The CTD time series helps to correct the depth estimate. The CTD time series alone is sufficient for the LDEO or UH processing to produce its own CTD vertical profile; if 2dbar data are available, however, the LDEO software will replace the T,S profiles it produced with the 2dbar data, since they are usually of higher quality. GPS data are very important: together with the bottom-tracking technique, they are used principally to constrain the barotropic component of the flow. They are also used to estimate the magnetic declination.
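For instance, the declination correction amounts to rotating each measured velocity from magnetic to true coordinates. The sketch below illustrates the geometry; the LDEO software performs this correction internally, so this is not a step you run yourself:

```python
import math

def apply_declination(u_mag, v_mag, declination_deg):
    """Rotate a (u, v) velocity from magnetic to true east/north.

    declination_deg: magnetic declination, positive east (magnetic
    north lies east of true north).  A sketch of the correction the
    LDEO software applies internally using the GPS position.
    """
    a = math.radians(declination_deg)
    u_true = u_mag * math.cos(a) + v_mag * math.sin(a)
    v_true = -u_mag * math.sin(a) + v_mag * math.cos(a)
    return u_true, v_true
```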

On CLIVAR cruises, Shipboard Acoustic Doppler Current Profilers (SADCP) measure the speed of the currents in the upper 800 m, and the LDEO software can use their data to further constrain the estimate of the barotropic flow. However, SADCP data have not yet been used systematically for processing, and at this stage we do not require that they be used, although we do need the SADCP data to be saved (see Data to bring back from the cruise). The E. Firing lab is responsible for processing the SADCP data.

Processing Options

  • Instructions for processing the LADCP data using LDEO Matlab code are provided here, but our computers no longer have Matlab licenses...
  • Limited instructions for a first-cut processing with UH Python shear method are here

Python routines available for diagnostics

The primary diagnostic program is:

  • ladcp_rawplot.py: It produces diagnostics such as the time series of raw velocity amplitude (dominated by the vertical motion of the CTD package) and a histogram for each beam. It is described here.
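A minimal version of such a per-beam diagnostic might look like the sketch below. This illustrates the idea of the beam histograms, not the actual ladcp_rawplot.py code; the array layout is an assumption:

```python
import numpy as np

def beam_amplitude_histograms(amplitude, nbins=50):
    """Histogram echo amplitude separately for each beam.

    amplitude: array of shape (n_ensembles, n_depth_bins, n_beams)
    of raw echo amplitude counts.  Returns a dict mapping beam number
    (1-based) to (counts, bin_edges).  A sketch of the kind of
    diagnostic ladcp_rawplot.py produces, not its actual code.
    """
    hists = {}
    for b in range(amplitude.shape[-1]):
        data = amplitude[..., b].ravel()
        counts, edges = np.histogram(data[np.isfinite(data)], bins=nbins)
        hists[b + 1] = (counts, edges)
    return hists
```

Comparing the four histograms side by side is a quick way to spot a weak or failed beam.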

(6) Plot and share the processed LADCP data

Some plots from the LDEO processing may be worth showing on the web (these plots are .ps files saved in the following directory):


but other types of plots are useful to show and need to be produced.

Types of figures and where to put them

Four types of plots are in general produced to share on the web during the cruise:

  • Single vertical profile of U (zonal) and V (meridional) velocity for each station (example: single profiles; web subdirectory: profiles)
  • A section plot of U and V: in general depth versus latitude, longitude and/or time (example: sections; web subdirectory: sections)
  • A top view of a vector plot of U and V for different depth bins (example: vector plots; web subdirectory: vector)
  • Plots concerning the cable, useful for the captain and the resident technician (example: quality control plots; web subdirectory: qc_plots)

How to make the figures

Some individual profile plots from the LDEO processing are useful ‘as is’. For example, for type 4 (quality control plots), you can simply use Fig. 2 from the LDEO processing, whose third and fourth panels contain information on the tilt (rotation around a horizontal axis) and heading (rotation around the vertical axis) of the rosette.

To convert a .ps or .eps file into a .png file, use one of the following (comments preceded by “#”):

convertps.py file.ps          # written at UH (better graphics)
convert file.ps file.png      # part of the system

How can people on the ship visualize the figures?

There is a simple website which has already been generated. On the LADCP processing computer, the website should be accessible via http://localhost which points using symbolic links to the content of /home/science/current_cruise/web.

Examples of MATLAB routines to produce these plots (from A20, a recent CLIVAR cruise) are in this ‘web’ directory for you to modify.

For others to access the LADCP web site, they need the IP number of the LADCP processing computer or the name of the computer (see this link about Networking).

If the name of the LADCP computer is artoo, then other people can access the website using http://artoo.local (or the computer’s IP number) in a web browser.

(7) Download other important data

There may be other datasets on the cruise that are not required for LADCP processing and that we are not responsible for. These data can nonetheless provide useful context for scientific interpretation. Get them if you can; if you cannot, they will be available on the CLIVAR web site later.

Meteorological data

The ship’s logging system will have underway data (Meteorological sensors: temperature, wind speed and direction) logged every second or so. These can be useful data to have when interpreting upper ocean data. Find the location of these data (look on the ship’s web site or ask the computer tech on the ship) and make sure you bring these data home too.

(8) Back up the entire cruise directory

Once all necessary data have been downloaded, the raw LADCP data processed, and the figures plotted and saved, the entire logging directory /home/science/current_cruise_logging on the acquisition computer and the processing directory /home/science/current_cruise on the processing computer need to be saved on every other computer and on external disks.

cartoon: back up both computers

For instance, proceeding from the processing computer:

# copy the processing directory on to a destination (e.g. an external disk)
copydir.py /home/science/current_cruise destination

This would copy the contents of the entire LADCP CLIVAR cruise directory to the destination directory. That directory could reside on an external disk on either (acquisition or processing) computer, as long as there is permission to write.

To back up the entire LADCP CLIVAR cruise directory from the processing computer to the acquisition computer nini (assuming permission to write), use the command below. This makes the entire LADCP CLIVAR cruise directory identical on the two computers, using a combination of ssh and rsync.

copydir.py --ssh science  /home/science/current_cruise nini:/home/science/current_cruise

# You will be asked for the password of "science" on nini

Depending on the actual context, the exact paths to the other computer and external disks may vary. See examples in Previous Cruises.

(9) Keep acquisition/processing journal up to date

It is important to keep a detailed journal of what the operator has done on the cruise. By keeping track of problems and their solutions, instrument configuration, and anomalous behavior, an operator on a future cruise may avoid a similar mistake or solve a recurring problem.

Important information includes:

  • for each cast
    • serial number of instruments used
    • processing steps done
    • configuration used
    • anomalies in the cast
  • in general
    • interesting observations in the data
    • changes in software configuration
    • changes in instrumentation or cabling
    • any odd behavior
    • problems and solutions

Examples can be found in previous_cruises on the LADCP processing computer.