06.29.2010 22:41
Another form of Deepwater Horizon news
Another way to see what is going on with the spill response beyond Oil Spill 2010 (twitter) and ERMA... put this code in your browser:
Update 2010-12: I should have known better and included a screen shot. This no longer works, which is no surprise given that the URL was "http://www.deepwaterhorizonresponse.com/go/doc/2931/676103/". It now forwards to http://www.restorethegulf.gov/. Info about the widget now appears here: Gulf Coast Oil Spill Response - Here are a few sample widget placements with their accompanying HTML code at DHS.gov. But that doesn't work either.
<iframe src="http://www.deepwaterhorizonresponse.com/go/doc/2931/676519/" style="border: medium none; overflow: hidden; width: 230px; height: 500px;" allowtransparency="true" frameborder="0" scrolling="no"/>

You will see this:

From: Get the Deepwater Horizon Response Widget [deepwaterhorizonresponse.com]
06.29.2010 18:39
BAGs in the EarthNC ChartViewer
Last week, Virgil released the next version of his EarthNC
ChartViewer. This new version includes NOAA survey BAGs if you turn
on the layer. I haven't set up a way to get more than a few geotiffs
out, so the coverage is a bit spotty - that's my fault, not Virgil's.
To show off one area where Monica and I were just yesterday for a wedding, here is the NY/NJ area with the Statue of Liberty.
This version has both a share link and an embedding capability built right in:
This kind of reuse is only going to get better as we work through the issues of how best to get these data products out to the world.
06.29.2010 17:41
Simile Timeline of the Deepwater Horizon incident
A good number of people have been wondering what they can do to help out
with the response to the Deepwater Horizon incident. If you are a coder,
there is a lot you can do!
For example, last month I posted an idea that nobody on the team has time for: Open request for visualization. I was hoping someone would be able to put together a Simile Timeline of the incident. Jim Meyer stepped up and created this site! Thanks Jim!!
http://oilspillstats.com/
Anyone working on mapserver, openlayers, gdal, postgis, jquery and many other open source software packages is indirectly helping out with the response and I greatly appreciate your contributions!
06.29.2010 10:13
Larry Mayer on oil in the water column from the Deepwater Horizon
The cruise report for Larry Mayer's work looking for midwater oil in the Gulf of Mexico is out:
tj_deepwaterhorizon_responsemissionreport_june3_11_2010final.pdf
Larry was interviewed at CCOM by CNN: CNN video
06.21.2010 10:09
Using hdf5 built in commands to examine a BAG
BAGs are still a fairly new concept to me and in general to the
community, so I am in the mode where I am trying to figure out how to
understand the files. I haven't used HDF5 much before and am exploring
to understand and decode as many aspects of a BAG as possible. HDF5 comes
with a bunch of command line tools that can show you what is inside of a BAG.
The first tool is h5ls
h5ls some_bathy.bag
BAG_root                 Group

But it's better to list a BAG out recursively ('-r') to see the entire structure:
h5ls --recursive some_bathy.bag
/                        Group
/BAG_root                Group
/BAG_root/elevation      Dataset {6077, 5692}
/BAG_root/metadata       Dataset {12484/Inf}
/BAG_root/tracking_list  Dataset {0/Inf}
/BAG_root/uncertainty    Dataset {6077, 5692}

I've tried to figure out a way to nicely print out the metadata, but so far, I have not found a clean way to do it. This command gets you part of the way, but then you will have to reverse the encoding of the XML:
h5ls --string --simple -d some_bathy.bag/BAG_root/metadata | cut -f2 -d\" | tr -d '\n' > yuck.xml
cat yuck.xml
metadata Dataset {12484/Inf}
Data:
<?xml%20version=1.0%20encoding=UTF-8?>%0a<!--%20BAG%20Metadata%20-%20Created%20with:%20-vSABER_...
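Since the metadata dataset is just an array of one-character strings, a cleaner route is to glue the characters back together in Python. This is a minimal sketch assuming the h5py package is installed (it is outside the HDF5 command line tools this post is about):

#!/usr/bin/env python
# Minimal sketch: pull the metadata XML out of a BAG with h5py.
import h5py

bag = h5py.File('some_bathy.bag', 'r')
meta = bag['/BAG_root/metadata'][:]  # array of 1-character strings
print(b''.join(meta).decode('ascii', 'replace'))
bag.close()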
Then it is time to try h5dump to see the structure of the data.

h5dump -H some_bathy.bag
HDF5 "some_bathy.bag" {
GROUP "/" {
   GROUP "BAG_root" {
      ATTRIBUTE "Bag Version" {
         DATATYPE H5T_STRING {
            STRSIZE 32;
            STRPAD H5T_STR_NULLTERM;
            CSET H5T_CSET_ASCII;
            CTYPE H5T_C_S1;
         }
         DATASPACE SCALAR
      }
      DATASET "elevation" {
         DATATYPE H5T_IEEE_F32LE
         DATASPACE SIMPLE { ( 6077, 5692 ) / ( 6077, 5692 ) }
         ATTRIBUTE "Maximum Elevation Value" {
            DATATYPE H5T_IEEE_F32LE
            DATASPACE SCALAR
         }
         ATTRIBUTE "Minimum Elevation Value" {
            DATATYPE H5T_IEEE_F32LE
            DATASPACE SCALAR
         }
      }
      DATASET "metadata" {
         DATATYPE H5T_STRING {
            STRSIZE 1;
            STRPAD H5T_STR_NULLTERM;
            CSET H5T_CSET_ASCII;
            CTYPE H5T_C_S1;
         }
         DATASPACE SIMPLE { ( 12484 ) / ( H5S_UNLIMITED ) }
      }
      DATASET "tracking_list" {
         DATATYPE H5T_COMPOUND {
            H5T_STD_U32LE "row";
            H5T_STD_U32LE "col";
            H5T_IEEE_F32LE "depth";
            H5T_IEEE_F32LE "uncertainty";
            H5T_STD_U8LE "track_code";
            H5T_STD_I16LE "list_series";
         }
         DATASPACE SIMPLE { ( 0 ) / ( H5S_UNLIMITED ) }
         ATTRIBUTE "Tracking List Length" {
            DATATYPE H5T_STD_U32LE
            DATASPACE SCALAR
         }
      }
      DATASET "uncertainty" {
         DATATYPE H5T_IEEE_F32LE
         DATASPACE SIMPLE { ( 6077, 5692 ) / ( 6077, 5692 ) }
         ATTRIBUTE "Maximum Uncertainty Value" {
            DATATYPE H5T_IEEE_F32LE
            DATASPACE SCALAR
         }
         ATTRIBUTE "Minimum Uncertainty Value" {
            DATATYPE H5T_IEEE_F32LE
            DATASPACE SCALAR
         }
      }
   }
}
}
06.17.2010 10:35
SNMP monitoring on a Mac
This blog post mainly comes from help given by Jordan. I am a
beginner at using the Simple
Network Management Protocol (SNMP). We are looking to use SNMP to
drive Nagios monitoring of our
AIS infrastructure. The idea is to query snmp and have it give you
the status of the AIS feed by returning the number of ships heard on
the feed in the last 10 minutes. Because this feed covers a large
region, this should work pretty well. If this were for Great and
Little Bays in NH, where we sometimes have no ships with AIS, then Nagios
would think that the AIS system was always down. This example is done on
a Mac running 10.6 (also tested on 10.5 Server). Linux systems may be
slightly different. This is using the less secure snmp version 2 protocol.
cd /etc/snmp

Edit snmpd.conf, create the users in the com2sec section and comment out the original entries. Forgetting to comment out the default entries will cause the daemon to start but be non-functional.
#com2sec local     localhost            COMMUNITY
#com2sec mynetwork NETWORK/24           COMMUNITY
com2sec  ais       localhost            aiscom
com2sec  ais2      nagiosserver.unh.edu ais2com

Now that you have two user/communities defined, we need to create the groups and tell snmpd which version of the snmp protocol to listen to.
#                sec.model sec.name
#group MyRWGroup v1        local
#group MyRWGroup v2c       local
#group MyRWGroup usm       local
#group MyROGroup v1        mynetwork
#group MyROGroup v2c       mynetwork
#group MyROGroup usm       mynetwork
group  AisGroup  v2c       ais
group  AisGroup  v2c       ais2

I left the views alone. Then I created the access for the group.
#                 context sec.model sec.level match read write notif
#access MyROGroup ""      any       noauth    exact all  none  none
#access MyRWGroup ""      any       noauth    exact all  all   none
access  AisGroup  ""      v2c       noauth    exact all  none  none

I then went through several iterations of starting snmpd and discovered complaints in /var/log/snmpd.log. These probably could have been left in, but I always think it is better to get rid of all warnings. Here are the things I had to comment out:
# proc httpd
# exec echotest /bin/echo hello world
# exec web_status /usr/sbin/serveradmin status web
# exec wo_status /usr/sbin/serveradmin status webobjects

Then I started snmpd. The -A prevents it from overwriting the snmpd.log and the -c explicitly says which config file to use.
sudo /usr/sbin/snmpd -A -c /etc/snmp/snmpd.conf

Once snmpd is running, you can send it a SIGHUP (kill -1) to cause it to reload its config files or you can kill it with a SIGQUIT (kill -3).
Now that the snmpd daemon is running, we need to test it:
snmpwalk -c aiscom -v 2c localhost
SNMPv2-MIB::sysDescr.0 = STRING: Darwin snipe.ccom.nh 10.4.0 Darwin Kernel Version 10.4.0: Fri Apr 23 18:28:53 PDT 2010; root:xnu-1504.7.4~1/RELEASE_I386 i386
SNMPv2-MIB::sysObjectID.0 = OID: NET-SNMP-MIB::netSnmpAgentOIDs.255
DISMAN-EVENT-MIB::sysUpTimeInstance = Timeticks: (6675) 0:01:06.75
SNMPv2-MIB::sysContact.0 = STRING: Administrator
SNMPv2-MIB::sysName.0 = STRING: snipe.ccom.nh
SNMPv2-MIB::sysLocation.0 = STRING: Right here, right now.
SNMPv2-MIB::sysServices.0 = INTEGER: 76
SNMPv2-MIB::sysORLastChange.0 = Timeticks: (7) 0:00:00.07
SNMPv2-MIB::sysORID.1 = OID: SNMP-FRAMEWORK-MIB::snmpFrameworkMIBCompliance
SNMPv2-MIB::sysORID.2 = OID: SNMP-MPD-MIB::snmpMPDCompliance
...
HOST-RESOURCES-MIB::hrSWRunPerfMem.39056 = INTEGER: 22028 KBytes
HOST-RESOURCES-MIB::hrSWRunPerfMem.39059 = INTEGER: 4492 KBytes
HOST-RESOURCES-MIB::hrSWRunPerfMem.39070 = INTEGER: 1408 KBytes
HOST-RESOURCES-MIB::hrSWRunPerfMem.39071 = INTEGER: 1084 KBytes
Timeout: No Response from localhost

snmpwalk will only wait for so long to get data. That is what is causing the timeout at the end. To get a single value, try this:
snmpget -c aiscom -v 2c localhost hrSystemUptime.0
HOST-RESOURCES-MIB::hrSystemUptime.0 = Timeticks: (8520654) 23:40:06.54

On Debian Linux systems, you can get the snmp tools by doing sudo apt-get install snmp, while on RedHat/CentOS systems, do sudo yum install net-snmp-utils.
At this point, we needed to call a script to inspect the database. We couldn't figure out how to call python directly, so we first have this entry in /etc/snmp/snmpd.conf:
exec ais_pg_status /bin/bash /usr/local/bin/nagios_pg_ais.sh

That script is pretty simple:
#!/bin/bash
/sw/bin/python /usr/local/bin/nagios_pg_ais.py -u aisuser

The python code is not as flexible as we would like, but it basically prints the number of ships that have been seen in the last ten minutes. I have a little issue with my timezones, so I have to query out into the future, but hopefully you get the idea.
#!/usr/bin/env python
from __future__ import print_function
import psycopg2
import sys, os

__version__ = '0.1'  # defined so the --version option does not crash

def get_parser():
    from optparse import OptionParser
    parser = OptionParser(usage="%prog [options]", version="%prog " + __version__)
    parser.add_option('-d', '--database-name', default='realtime_ais',
                      help='Name of database within the postgres server [default: %default]')
    parser.add_option('-D', '--database-host', default='localhost',
                      help='Host name of the computer serving the db [default: %default]')
    defaultUser = os.getlogin()
    parser.add_option('-u', '--database-user', default=defaultUser,
                      help='User name to access the database with [default: %default]')
    parser.add_option('-p', '--database-passwd', default=None,
                      help='Password to access the database with [default: None]')
    parser.add_option('-v', '--verbose', default=False, action='store_true',
                      help='Make program output more verbose info as it runs')
    return parser

def main():
    (options, args) = get_parser().parse_args()
    v = options.verbose
    options_dict = vars(options)  # Turn options into a real dictionary
    cx_str = "dbname='{database_name}' user='{database_user}' host='{database_host}'".format(**options_dict)
    if v:
        print('cx_str:', cx_str)
    try:
        cx = psycopg2.connect(cx_str)
        cu = cx.cursor()
        # The "future" interval is a workaround for my timezone trouble;
        # the intent is vessel positions from the last 10 minutes.
        cu.execute("SELECT count(*) FROM vessel_pos WHERE time_stamp > (now() + interval '3 hours 50 minutes');")
        row = cu.fetchone()
        count = row[0]
    except:
        print('-1')
        sys.exit(3)
    print('{count}'.format(count=count))
    # TODO: return different exit codes for warning/critical thresholds.
    if count < 100:
        sys.exit(0)
    if count < 1000:
        sys.exit(0)
    sys.exit(0)

if __name__ == '__main__':
    main()

We used a snmpwalk to find the location of the end point. We can now query via snmp:
snmpget -c ais2com -v 2c aisserver.unh.edu .1.3.6.1.4.1.2021.8.1.101.1
UCD-SNMP-MIB::extOutput.1 = STRING: 2961

The nagios entry in /etc/nagios/GROUP/services.cfg:
define service{
    host_name             server.unh.edu
    service_description   AIS_FEED
    check_command         get_snmp_oid!.1.3.6.1.4.1.2021.8.1.101.1!ais2com!1000:!100:!'new records in last 10 minutes'
    max_check_attempts    5
    normal_check_interval 10
    retry_check_interval  1
    check_period          24x7
    notification_interval 120
    notification_period   24x7
    notification_options  w,u,c,r,f,s
    contact_groups        ais
}

The 1000: and 100: arguments in the check command are Nagios-style thresholds, here warning when the count of new records drops below 1000 and going critical below 100. The check on the server now looks like this (with a ping and ssh check too):
Jordan has nagiosgrapher setup so now we have a nice view of vessels seen over time:
Above, we started snmpd by hand. Apple's methods for starting daemons has been a moving target. You can use fink's net-snmp-unified package and daemonic:
daemonic dump | grep -i snmp
daemonic install net-snmp-unified

The current Apple way is:
sudo launchctl load -w /System/Library/LaunchDaemons/org.net-snmp.snmpd.plist
06.14.2010 16:42
AUV Bootcamp - Swains Lake
This year, UDel brought the gavia and their team up to Swains Lake,
which is about 15 minutes from UNH. This was the operations center:
There was some "classroom time"...
Art and Val going over the upcoming mission concept:
This stuff makes planning difficult:
The last night, the team went out for night time operations. There was planning in the bug tent.
And then the team headed out to the other end of the lake:
There was some "classroom time"...
Art and Val going over the upcoming mission concept:
This stuff makes planning difficult:
The last night, the team went out for night time operations. There was planning in the bug tent.
And then the team headed out to the other end of the lake:
06.14.2010 15:03
AIS and NCOM in the public version of ERMA
Try2... now it is officially announced. ERMA is publicly available, built on Mapserver, OpenLayers, PostGIS, libais, gdal, OpenStreetMap, lots of software I have forgotten, and hard work by a dedicated team of engineers and GIS people distributed across the country. We are still working on the AIS vessel categories.
Federal Agencies Introduce Online Mapping Tool to Track Gulf Response [NOAANEWS]
ERMA with NAIS and NCOM layers turned on
Trackbacks:
Track Oil Spill Response Vessels in Real-Time [Moni-Blog]
Real Time Mapping of BP Oil Spill Response Vessels [gCaptain.com]
06.11.2010 15:45
AIS snapshot of the Gulf Response
Update 7:35PM EST: The site is now updating every 10 minutes.
I can finally show what I've been working on for the Gulf Response. The USCG and NOAA have made public a snapshot of the response vessels as captured from the USCG AIS feeds. This is a small subset of the vessels that are involved. The URL encodes the view and what layers are turned on. Here is the well head and the vessels in the area.
An example URL: http://gomex.erma.noaa.gov/erma.html#x=-88.36690&y=28.73056&z=11&layers=1613+1759+3023+3393+3327+3392+497
I am processing the AIS NMEA strings with my libais software to track where ships are.
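As a flavor of that processing, here is a minimal sketch using libais's Python module. It assumes a recent libais where ais.decode takes the armored payload and the pad bits; the real pipeline is more involved, and the input file name is a placeholder:

#!/usr/bin/env python
# Minimal sketch: decode Class A position reports from a file of AIS
# NMEA sentences with libais. Assumes ais.decode(payload, pad_bits).
import ais

for line in open('nmea.txt'):
    if not line.startswith('!AIVDM'):
        continue
    fields = line.split(',')
    if fields[1] != '1':
        continue  # skip multi-sentence messages in this sketch
    payload = fields[5]
    pad = int(fields[6].split('*')[0])
    try:
        msg = ais.decode(payload, pad)
    except ais.DecodeError:
        continue
    if msg['id'] in (1, 2, 3):  # Class A position reports
        print(msg['mmsi'], msg['x'], msg['y'])  # x = longitude, y = latitude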
06.10.2010 17:51
AUV Bootcamp - wish I was there
There is just too much going on. I wish I was out with the crew at
AUV bootcamp, which is occurring this week not too far from UNH.
Thanks to Monica for this image:
06.08.2010 07:50
Norfolk, VA
Today and tomorrow, I am in Norfolk for the MTS TechSurge meeting:
Ocean Observing: Thinking Outside the Basin, where I will be talking
about how work at NASA relates to ocean observing. The view coming in
on the plane last night was really nice. Working with AIS,
bathymetry, and lidar data from this area for the last four years
makes flying in to this area especially interesting.
06.05.2010 13:43
Movie of the BAG visualization in Google Earth
Just yesterday, I put all the pieces together for my current version
of my BAG visualization.
This is my 2nd iteration of displaying NOAA Hydrographic Survey Grids
in the BAG format in Google Earth. I first scraped NGDC for all
released BAGs. Then I ran a python program that scraped the metadata
from all 2000 BAGs and produced an SQLite database. From there, I
produce bounding boxes and summary placemarks in a python program. I
then use a bash script to create depth histograms, XML metadata
extracted from the BAG, and KML tiled display. The script makes heavy
use of gdal and imagemagick. The code is definitely ugly, but I really like the
resulting visualization.
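For a flavor of the bounding-box stage, here is a minimal hypothetical sketch that writes KML placemarks from an SQLite table of survey bounds. The table and column names are invented; the real code is longer and, as admitted, uglier:

#!/usr/bin/env python
# Hypothetical sketch of one pipeline stage: emit a KML bounding box for
# each survey in the SQLite database. Table/column names are made up.
import sqlite3

KML_BOX = '''<Placemark><name>{name}</name><LineString><coordinates>
{w},{s} {e},{s} {e},{n} {w},{n} {w},{s}
</coordinates></LineString></Placemark>'''

cx = sqlite3.connect('bags.sqlite3')
for name, w, s, e, n in cx.execute(
        'SELECT survey, west, south, east, north FROM bounds'):
    print(KML_BOX.format(name=name, w=w, s=s, e=e, n=n))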
06.05.2010 12:21
Shep Smith on NPR
Shep Smith, a CCOM alumnus, was just on NPR: Slick Technology. Shep talks about using a Moving Vessel Profiler (MVP).
BTW, I found AudioPlayer from Mike Shapiro's samples page.
06.04.2010 14:10
Which C++ logging and unit test framework(s)?
Please comment here!
Back a few years ago, I wrote slogcxx as an experiment in creating an easy-to-use logging system. It spent a little under a year as the logging system for GeoZui. I haven't touched the code in a long time and I am sure it can be done better. So, I have two questions that were triggered by a discussion with BRC: what packages would you suggest for C++ logging and for unit testing? I did not include anything real in libais and would really like to work towards the best possible infrastructure to make this a long-lived package. Some basic thoughts on requirements:
- Must not be GPL, but LGPL is okay. It has to play nice with non-GPL code
- Be open source
- Must be thread safe - a unit test may fire off a bunch of threads for a test
- The logger must be able to handle multiple producers and consumers that are both in the same process and external
- Likely to stick around for a few years
- Relatively easy to use
- Cross platform to Windows, Mac, and Linux
- If not already available in fink, rpm, and deb form, then easily packaged
- Have a safe for work project name... ahem
There are just so many options. Building a large system on the wrong one will lead to serious pain, but using a good one will encourage better software.
Dreaming of being able to subscribe to an AMQP feed for a cluster and user interface tearing through massive data crunching and knowing that the system passed a slew of unit tests that give us confidence that new code is less likely to pull the system down...
Another source of info: Wikipedia List of unit testing frameworks. There doesn't seem to be an equivalent page for loggers, but there is a little in the Java log4j ports section.
P.S. I'm really starting to think that GNU autoconf sucks. It may be the standard, but a configure system is supposed to help, not break my brain. It shouldn't require massive training and experimentation to get it into place. I really should go look at cmake and scons again.
06.04.2010 11:13
Time Machine failures
I have had this happen a couple times in the last week. Backups
shouldn't fail. Skipping a file you can't handle and issuing warnings
is okay, but crashing is not okay. It always seems to work if I restart
the backup right after the crash. Here is the report in /var/log/system.log
Jun 4 10:50:58 laptop com.apple.backupd[44540]: Starting standard backup
Jun 4 10:50:59 laptop com.apple.backupd[44540]: Backing up to: /Volumes/Time Machine Backups/Backups.backupdb
Jun 4 10:52:46 laptop com.apple.backupd[44540]: No pre-backup thinning needed: 3.34 GB requested (including padding), 119.30 GB available
Jun 4 11:05:21 laptop com.apple.backupd[44540]: Indexing a file failed. Returned 1 for: /Users/schwehr/projects/src/bag-py/processed/H11288/H11288_2m_2/16/16102/38310.kml, /Volumes/Time Machine Backups/Backups.backupdb/laptop/2010-06-04-105102.inProgress/A806AA65-8347-4607-8D64-8F4FCD263850/laptop/Users/schwehr/projects/src/bag-py/processed/H11288/H11288_2m_2/16/16102/38310.kml
Jun 4 11:05:21 laptop com.apple.backupd[44540]: Aborting backup because indexing a file failed.
Jun 4 11:05:21 laptop com.apple.backupd[44540]: Stopping backup.
Jun 4 11:05:22 laptop com.apple.launchd[1] (com.apple.metadata.mds[43819]): Job appears to have crashed: Segmentation fault
Jun 4 11:05:22 laptop com.apple.ReportCrash.Root[44601]: 2010-06-04 11:05:22.472 ReportCrash[44601:2803] Saved crash report for mds[43819] version ??? (???) to /Library/Logs/DiagnosticReports/mds_2010-06-04-110522_localhost.crash
Jun 4 11:05:23 laptop com.apple.backupd[44540]: Copied 63578 files (824.8 MB) from volume laptop.
Jun 4 11:05:23 laptop com.apple.backupd[44540]: Copy stage failed with error:11
Jun 4 11:05:31 laptop com.apple.backupd[44540]: Backup failed with error: 11

And the file it could not index was not being modified and is still sitting there on the disk doing nothing.
06.03.2010 14:48
Explore the bathymetry of the Gulf of Mexico
We've put online a copy of the Gulf of Mexico section of the Coastal
Relief model in SD format. You can fly around the area where the
Deepwater Horizon spill is occurring. If you don't have Fledermaus,
you can use the free browser: iView4D. I
leave it as an exercise for those with Fledermaus to create a color
map and shading that shows the area better and to add annotations of
the site. For other ideas, take a look at the La Jolla demo
that I did for the Scripps centennial celebration.
GulfOfMexico-CoastalRelief.sd.zip [45MB]
My old La Jolla / SIO demo video. Pardon the poor quality of this old video.
06.03.2010 09:03
Example BAG in Google Earth with Transparency
My new desktop is working hard to push through all 2000 BAGs with the
new python code using gdal's interface to produce transparent
hillshades. Here is an example that shows off the transparency
nicely.
Here is what it looked like before with the semi-transparent gray bounding box. Note that I've fixed the color map to match the more common blue as deep color scheme.
06.03.2010 07:26
Deepwater Horizon LMRP cap
These videos are a chance to see deep water operations done with ROVs.
I remember trying to pick up rocks from the bottom of Mono Lake with
the NASA TROV back in the mid-1990s. After more than a half hour of
frustrating mis-steps and no samples, I switched strategies and just
pushed the whole sample basket into the sediment and came back with a
basket full of rocks.
06.02.2010 12:01
NOAA needs a public software interface specification for multibeam data
Comments can go here: BAG file naming convention
Next week, I will be talking in Norfolk about technology transfer between NASA and NOAA. One thing that is clear: we need a file naming convention for published data files that come out of NOAA. These things are not fun, but they make or break the long term usability of data. When I join a NASA mission, the first few times I try to use these filenames, it is a bit confusing, but once you look up the codes and start to use them, you realize the power of well controlled file names. The naming convention for NOAA BAGs is great in that it has a survey name, but the rest of the filename is basically freeform information; it is done inconsistently and there is no reference in the metadata to the standard applied. To kick off this idea, let us first think about just BAGs. What should be in the filename and how can we code it?
Take a look through the Mars Exploration Rover (MER) Software Interface Specification (SIS): MER_camsis_v4_9-25-07.pdf [JPL]
First, take a look at the Data Processing Level. With multibeam and lidar data, we have similar kinds of concepts, but the data usually starts off collected with an IMU providing GPS derived positioning (not something we have on Mars at the moment).
Now for the part that I really want you to take a look at: Section 4.4, File Naming, P22. The file names are broken into a whole bunch of fields:
<scid><inst><sclk><prod><site><pos><seq><eye><filt><who><ver><ext>

Each of these fields is defined in the document. For a BAG instrument type, I could see codes something like this:

- S = Single beam sonar
- M = Multibeam
- l = topo lidar
- L = bathy Lidar
- G = predicted from Gravity
- c = combined sensors
- C = lead line or similar from a Cable
- etc.

Then when I had a BAG that was from a bathy lidar, I would not be surprised when I opened it and it did not look like multibeam sonar data. There would be no need to look at the metadata (which would not have told me) or the non-machine-readable Descriptive Reports.
The document has some examples of decoding image file names:
These kinds of conventions are really powerful, but they do take a while to create and debug. I've spent hundreds of hours working on spacecraft imaging teams making sure that everything in the SIS document is correct.
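To make the instrument-code idea concrete, here is a toy sketch of decoding a structured BAG filename in Python. The pattern and field layout below are my invention for illustration only, not an existing NOAA convention:

#!/usr/bin/env python
# Toy sketch: decode a hypothetical structured BAG filename like
# H11296_L_2005_05m_v1.bag -> survey, instrument, year, resolution, version.
# The naming scheme is invented for illustration; it is not a NOAA standard.
import re

PATTERN = re.compile(r'''(?P<survey>[A-Z]\d{5})_
                         (?P<inst>[SMlLGcC])_
                         (?P<year>\d{4})_
                         (?P<res>\d+)m_
                         v(?P<ver>\d+)\.bag$''', re.VERBOSE)

INSTRUMENTS = {
    'S': 'single beam sonar',
    'M': 'multibeam',
    'l': 'topo lidar',
    'L': 'bathy lidar',
    'G': 'predicted from gravity',
    'c': 'combined sensors',
    'C': 'lead line or similar from a cable',
}

match = PATTERN.match('H11296_L_2005_05m_v1.bag')
if match:
    # Prints: H11296 bathy lidar 2005 5 m
    print(match.group('survey'), INSTRUMENTS[match.group('inst')],
          match.group('year'), int(match.group('res')), 'm')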
06.02.2010 11:25
ERMA public as geoplatform.gov
Now that it is public, I can post about this...
Seattle NOAA staff taking the spill to the Web [Seattle Times]
... A version of GeoPlatform.gov is being tested. The official version will be unveiled next week, said Amy Merten, chief of the spatial data team at NOAA's Seattle-based Emergency Response Division. The site isn't ready for prime time, Merten cautioned. When it is, it will show the path the oil slick is likely to follow, share zoomable satellite images and provide data on biologically sensitive areas at risk, she told U.S. Sen. Patty Murray during a briefing Tuesday on NOAA's local oil-spill efforts. ...
NOAA ship to study underwater oil near site of leak - The Thomas Jefferson:
... "It's totally new. We're really testing the feasibility of the approach, we don't know whether it will work or not, but it's certainly worth trying," said one of those researchers, Larry Mayer, director of the Center for Coastal and Ocean Mapping at the University of New Hampshire. "What is the nature of submerged oil, if there is oil? We just don't understand its properties yet." ...
06.02.2010 09:16
Bash being used for Mars landers
This is where I kick myself for not writing a journal paper describing
the Mars Polar Lander (MPL) integration system that I wrote in the
fall of 1999 so that Doug could cite me in his JGR paper.
Processing of Mars Exploration Rover imagery for science and operations planning, JGR 2006, by Alexander et al.
... The pipeline's glueware was developed as a script roughly 10,000 lines of code in length using the Unix Bourne shell [Kochan and Wood, 1998]. Development was driven by a few factors: (1) scripts are generally more flexible and easier to maintain than low level C++ code, (2) the MER GDS provided a Unix-based environment that supported the Sun-Solaris and Linux Red Hat operating systems, making the glueware script readily deployable on multiple systems, (3) establishing Unix as the product generation system baseline, combined with familiarity of Unix amongst operations team members, made the design amenable to future integration of ancillary tools developed as scripts during mission operations, and (4) heritage brought from MPL, where the product generation system was developed under the Bourne-again shell (Bash), provided precedents in a few areas of the pipeline design (such as driving processing events based on file detection). ...

Emphasis is mine.
I have james.txt and the scripts are in MarsPolarLander/bin, but that is not the same as having a JGR paper. Sigh.
06.01.2010 13:59
Making a gdaldem hillshade have proper transparency
Note: If anyone can show me how to make a transparent hillshade using
gdal without having to resort to the python code below, I would really
appreciate hearing from you!
Back in November/December of last year, I had trouble with transparency in the output of gdaldem hillshade. Thanks to help from the IRC #gdal channel (and especially EvenR), I managed to get through the whole process and I will try to document it here. I definitely do not understand why gdaldem color-relief works as expected (using some extra flags) for alpha/transparency and gdaldem hillshade actually writes 50% gray into areas with nodata. EvenR suggested I add a command like this to set the nodata value, but the results were either a 50% gray (for -1 or 1e6) or the gray that matches the value. I have much to learn about gdal.
gdal_translate -a_nodata 0 H11296_5m-depths-f32.tif H11296_5m-depths-f32-with-nodata.tif

Here is the hillshade with the 50% gray that I do not want:
To help understand the problem, I will show gdalinfo commands (but truncated to save space) as things progress. Here is the original bag:
gdalinfo H11296_5m.bag
Driver: BAG/Bathymetry Attributed Grid
...
PROJCS["UTM Zone 19, Northern Hemisphere",
...
Pixel Size = (5.001271523178817,-5.002456140350830)
...
Band 1 Block=3021x1 Type=Float32, ColorInterp=Undefined
  Description = elevation
  Min=-25.697 Max=2.399
  NoData Value=1000000
Band 2 Block=3021x1 Type=Float32, ColorInterp=Undefined
  Description = uncertainty
  Min=0.500 Max=0.601
  NoData Value=0

So we are starting with a BAG. The basic NOAA published bag has two layers, which surprised a few people on IRC. A bag could have just one layer or it could have 3 or more. The nice thing about HDF5 is that you can put other information into a BAG when necessary. (Which is better: HDF5 or the latest NetCDF?)
The first real step is to convert the BAG to a GeoTiff. I am using a 32 bit floating point tif so I don't have to worry about the precision of the data in the BAG. Now that I realize GeoTiffs are not just for images, things are better.

gdalwarp -ot Float32 -t_srs EPSG:4326 $patch.bag ${patch}-depths-f32.tif

The geotiff:
gdalinfo H11296_5m-depths-f32.tif
Driver: GTiff/GeoTIFF
Files: H11296_5m-depths-f32.tif
Size is 3205, 1278
Coordinate System is:
GEOGCS["WGS 84",
...
Band 1 Block=3205x1 Type=Float32, ColorInterp=Gray
Band 2 Block=3205x1 Type=Float32, ColorInterp=Undefined

Note that the nodata value that was in the BAG has been lost. I generate the hillshade from the GeoTiff of depths:
gdaldem hillshade ${patch}-depths-f32.tif $patch-hillshade.tif

This resulting image has a 1 pixel border of transparency, but the internal cells without data are all 50% gray as shown above.
gdalinfo H11296_5m-hillshade.tif
Driver: GTiff/GeoTIFF
...
Band 1 Block=3205x2 Type=Byte, ColorInterp=Gray
  NoData Value=0

Thankfully, the color-relief image has proper transparency, so the next idea I had was to copy the transparency.
gdaldem color-relief ${patch}-depths-f32.tif ../../color_ramp_fixed.dat ${patch}-color-relief.tif -alpha -co ALPHA=YES

gdalinfo H11296_5m-color-relief.tif
Driver: GTiff/GeoTIFF
...
Band 1 Block=3205x1 Type=Byte, ColorInterp=Red
  Mask Flags: PER_DATASET ALPHA
Band 2 Block=3205x1 Type=Byte, ColorInterp=Green
  Mask Flags: PER_DATASET ALPHA
Band 3 Block=3205x1 Type=Byte, ColorInterp=Blue
  Mask Flags: PER_DATASET ALPHA
Band 4 Block=3205x1 Type=Byte, ColorInterp=Alpha

The code below takes the gray values from the hillshade and the alpha channel from the color-relief and combines them into a new RGB geotiff based off the color-relief geotiff. I copy the gray channel into all three of the R, G, and B channels to make an overly large gray image with the alpha/transparency the way I need it.
I then combine the color-relief and the new transparent hillshade into what I want: an image with hillshading and color:
composite -blend 50 $patch-hillshade-transparent.tif ${patch}-color-relief.tif ${patch}-blend.tif
That last step loses the georeferencing, so I have to go back in and add it from an original that knows where it is:
listgeo -tfw ${patch}-depths-f32.tif
geotifcp -e ${patch}-depths-f32.tfw ${patch}-blend.tif ${patch}.tif

And the results in Google Earth:
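As an alternative to the listgeo/geotifcp step above, the georeferencing could presumably also be copied with GDAL's Python bindings. I have not verified that this carries over everything geotifcp does, so treat it as a sketch under that assumption:

#!/usr/bin/env python
# Sketch: copy georeferencing from the depth GeoTIFF onto the blended image.
import osgeo.gdal as gdal

src = gdal.Open('H11296_5m-depths-f32.tif', gdal.GA_ReadOnly)
dst = gdal.Open('H11296_5m-blend.tif', gdal.GA_Update)
dst.SetGeoTransform(src.GetGeoTransform())
dst.SetProjection(src.GetProjection())
dst = None  # close the dataset so the tags are flushed to disk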
My not yet polished python code that uses gdal to copy in transparency:
#!/usr/bin/env python
from __future__ import print_function
import sys
import osgeo.gdal as gdal

gdal.AllRegister()

patch_name = 'H11296_5m'
hillshade = gdal.Open(patch_name + '-hillshade.tif', gdal.GA_ReadOnly)
color = gdal.Open(patch_name + '-color-relief.tif', gdal.GA_ReadOnly)

w, h = color.RasterXSize, color.RasterYSize
print('dim:', w, h, '\t\t', hillshade.RasterXSize, hillshade.RasterYSize)

driver = gdal.GetDriverByName('GTiff')
if driver is None:
    print('GTiff format driver not found, pick a supported driver.')
    sys.exit(1)

dst = driver.CreateCopy('dst.tif', color)  # , 0) # what was the zero for?

print('gdal.GCI_AlphaBand:', gdal.GCI_AlphaBand)
for band_num in range(1, color.RasterCount + 1):
    band = color.GetRasterBand(band_num)
    if band is None:
        print('band is None')
        continue
    bandmin, bandmax = band.ComputeRasterMinMax()
    clr_interp = band.GetColorInterpretation()
    print(band_num, bandmin, bandmax, clr_interp,
          gdal.GetColorInterpretationName(clr_interp))

alpha = color.GetRasterBand(4)
gray_band = hillshade.GetRasterBand(1)

# Shove all the same gray values into all the color channels
for band_num in range(1, 4):
    dst.GetRasterBand(band_num).WriteArray(gray_band.ReadAsArray())

# Set the alpha
dst.GetRasterBand(4).WriteArray(alpha.ReadAsArray())

dst = None  # close the output so everything is flushed to disk

As an aside, this data set is actually an airborne bathymetric LIDAR survey. Unfortunately, nothing in the XML metadata says that this is a lidar. The only clue is in a textual description that hints that this came from a plane. You would have to go read the H11296 Descriptive Report to know that it came from a "SHOALS-1000T Bathymetric and Topographic LiDAR."
<abstract> Project: OPR-A321-KRL-05; Survey: H11296; State: New Hampshire; General Locality: Approaches to Portsmouth; Sublocality: Rye Harbor and Isle of Shoals; Date of Survey: 09/09/05-09/12/05; Surveyed by: Beechcraft King Air 90; Verification by: Atlantic Hydrographic Branch
 </abstract>
06.01.2010 09:46
Some people's desire to shut down internet AIS
I have discussed this in the past and even submitted comments to the USCG request for input. ESR has also posted a reply, AIS "security" considered harmful, since we are having trouble leaving comments on the article's web page.
Our Capt. Ben just sent me a copy of a Professional Mariner article: Shut down unrestricted public Internet access to AIS data by Kelly Sweeney.
Piracy off Somalia is at a six-year high and has increased in Asia in the past year, with reports that hundreds of my fellow merchant mariners are being held hostage for ransom. In these dangerous times, every effort needs to be made to protect merchant ships, crews and the billions of people living in port cities or coastal areas. The authorities have mandated the use of AIS on thousands of commercial vessels; they now need to ensure that this sensitive information does not get into the wrong hands.

I have to say that this is closing the barn door after the horses have left (and the barn has already burned down). AIS is out there... period. If you block internet AIS, you will be damaging a good resource for many people.
If you are a bad guy, you will probably buy your own AIS receiver to remain anonymous. Blocking internet feeds only blocks the good guys from getting AIS. The only way to secure your AIS is to turn off your AIS transmitter, which I might add is illegal for many ships. Mariners designed this system. They did not involve anyone with a security mindset. So, we have a system that is not secure. It is interesting to see the new dangerous cargo AIS binary message that is broadcast in the clear. But then the document defining the message says that you have to protect the information once you receive it. The bad guys already have it, so what is the point? The military uses AIS, but they encrypt some of their position messages.
If you are transiting an area with pirates, a whole slew of issues come up. If you are interested in this kind of topic, I recommend reading a lot of what Bruce Schneier has written. Think very carefully about what you believe. Security is a very tricky thing. It is easy to block the "good guys" while failing to deter "bad behavior".
My biggest issue with AIS is the large number of bad AIS configurations. Ask your buddy or port authority to verify that you are showing up with your MMSI and all your ship static data set right.