Quick_adcp.py is a python script that runs all the usual CODAS
processing steps, providing a good place to start processing ADCP data.
For data sets with no glitches (i.e. all the navigation is present, no
repeated timestamps, data files sorted in ascii order are also in time
order, etc.), this allows fast, documented, and repeatable processing
for a first look at the data or, if you're lucky, for final processing.
This includes setting up the editing directory so 'gautoedit' will run.
The entire dataset can usually be processed completely with just the
following:

 - adcptree.py   (to set up the processing directory)
 - quick_adcp.py (to do the processing, and redo steps)
 - gautoedit     (to do the editing)

NOTE to Windows users:
   make sure your PATH includes the location of quick_adcp.py
   make sure your PYTHONPATH is set correctly
   make sure you associate *.py extensions with the python program

Quick_adcp.py uses commandline arguments to specify input information
(file names, database name, proc_yearbase, etc).  An example invocation
is shown after the list below.  You can skip a step by typing 'n' (for
'no', do not do it) or you can stop at any stage by typing 'q' (for
'quit').  A log is kept as steps are run; it is in the processing
directory and has the suffix ".runlog".

assumes:
 - adcptree.py was already run (see adcptree.py for help)
      Be sure to specify datatype and instclass if necessary.
 - quick_adcp.py is run from the adcp processing directory (PROCDIR)
 - it writes a log of steps run and values used
      (see PROCDIR/CRUISENAME.runlog)
 - data location depends on data type, as follows:
      - default is PROCDIR/ping pingfile
        note: "linkping.py" is useful to link pingdata from other
              locations to ./ping as ascii-sortable pingdata.*
              (unix only)
      - data files are found by using the default directory or the
        specified directory ('datadir') and wildcard expansion
        ('datafile_glob')
        MAKE SURE TO PUT QUOTES AROUND THE WILDCARD STRING USED
 - sets up new "gautoedit" files:
      - makes asetup.m and aflagit_setup.m in edit/
      - (optionally) makes setup.m in edit/ with all thresholds
        (except bottom) disabled.  This is useful if you are going to
        use the old waterfall editing strictly as a tool to _manually_
        flag bins or profiles but don't want it to guess what to flag.
      - makes and runs setflags.tmp with the PG cutoff set
 - allows frequently-run steps to be rerun without querying
   (specifically, applying editing flags, rerunning nav steps,
   rerunning calibration, and making new matlab files for plotting)
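For example, a first-pass run on pingdata might look like the command
below.  This is a sketch only: the database name, yearbase, and data
directory are placeholders, and option names should be checked against
"quick_adcp.py --help" for your CODAS version.

   quick_adcp.py --yearbase 2011 --dbname a1101 \
                 --datatype pingdata \
                 --datadir /home/data/a1101/ping \
                 --datafile_glob "pingdata.*"

Note the quotes around the wildcard string, as required above.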
CODAS shipboard ADCP processing steps: (see process.txt for details)

  - scan: get the time range of the scanned data
  - load: put the data in the database, get the time range of the
    database
  - (set up gautoedit files, run setflags to flag bad PG at the outset)
  - ubprint (for pingdata) (or cat navigation if VMDAS or UHDAS)
  - ashrot (for pingdata)
  - rotate
  - nav steps: (choose to use either refsm or smoothr for navigation)
       adcpsect     # these three must be run anyway for
       refabs       # reflayer diagnostic plots and
       smoothr      # watertracking or recip to work
       (refsm, if specified)
     - plot reference layer (default: bins 4-12, or specified on the
       command line)
     - putnav  (uses whatever was specified: refsm or smoothr)
  - watertrack
  - bottom track
  - lists and plots temperature
  - timegrid (for standardized matlab files)
  - standardized matlab vector and contour files

apply calibrations to ADCP data:
  (you have to look at the watertrack and bottom track calibrations to
   see what rotation or amplitude factors might be necessary)
  (you have to go into edit/ and run gautoedit to remove the bad data)
  (then reprocessing uses "steps2rerun" to run these steps)

  "rotate"
     - rotate
  "navsteps"  (choose either refsm or smoothr for navigation)
     - adcpsect (1)   # these three must be run anyway for
     - refabs   (2)   # reflayer diagnostic plots and
     - smoothr  (3)   # watertracking or recip to work
     - refsm          # for navigation, if specified
     - plot reference layer
     - putnav         # from whichever was specified (refsm or smoothr)
  "calib"
     - watertrack
     - bottom track
  "matfiles"
     - timegrid (for standardized matlab files)
     - standardized matlab vector and contour files

after editing, apply editing to database:
  "apply_edit"
     - applies ascii files edit/*.asc to the database
  "navsteps", "calib", "matfiles"  (as above)

Notes:

1) Run quick_adcp.py in one window, with another window for
   investigating problems.  Be sure to check the output of scan before
   trying to load, in case there are problems with timestamps.

2) For data other than pingdata, quick_adcp.py will run matlab to
   create intermediate files that are then loaded into the database
   (*.cmd and *.bin).  The navigation file is created by catting the
   ".gps1" files from the load/ subdirectory (after ldcodas has been
   run).  In this case the nav file has a ".gps" suffix.  Otherwise,
   the nav file comes from ubprint and has a ".ags" suffix.

3) If processing single-ping data (ENS, ENX, or UHDAS):
   ---> DO NOT DELETE the BLKINFO.txt file or blkinfo.mat <---
   in the load/ directory.

4) running quick_adcp.py more than once:

   a) after a single-pass load of the data, you may want to apply
      editing, run the navigation steps, apply a rotation, rerun the
      calibration steps, or make new matlab files.  Use "steps2rerun",
      as in the example below.
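For example, after editing in gautoedit, the edits can be applied and
the downstream steps rerun with a command along these lines.  This is
a sketch only: verify the step-list syntax and the "--auto" option
against "quick_adcp.py --help" for your CODAS version.

   quick_adcp.py --steps2rerun apply_edit:navsteps:calib:matfiles --auto

Here "--auto" answers the interactive queries automatically, so the
frequently-rerun steps proceed without prompting.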