10.31.2005 08:39
layback update for cracks cruise
I went through the cracks data a little more this morning and verified
that all the layback points I used have their speed from within a few
shots. And thankfully that is the case. Here is the results table
with all 52 entries.
# 1      2        3        4            5       6      7
# shot   wireout  vert     layback      angle1  speed  shot_offset_for_speed
525106 264.6 79.8168 252.2745299 17.56 3.8 2
528154 240.8 85.0668 225.273788 20.69 3.8 17
595405 218.5 62.5668 209.3505327 16.64 5.0 0
597545 254.7 82.8168 240.8598506 18.97 3.8 0
598722 254.7 79.0668 242.1167717 18.09 4.2 0
605053 254.7 76.0668 243.0759798 17.38 3.7 17
633320 185.6 73.0668 170.6124343 23.18 4.0 0
670983 258.6 76.0668 247.1594666 17.11 3.9 0
683526 278.3 89.5668 263.4932226 18.77 4.0 0
689024 256.6 73.8168 245.7532096 16.72 4.1 0
711636 191.8 82.3668 173.2135972 25.43 4.2 0
713188 244.8 82.3668 230.5271139 19.66 4.3 0
741727 212.4 62.5668 202.9757511 17.13 4.8 0
745538 272.5 62.5668 265.2199946 13.27 4.5 0
760000 272.5 73.8168 262.3115134 15.72 3.8 0
815797 185.6 73.0668 170.6124343 23.18 3.8 0
821962 240.8 93.3168 221.9833661 22.80 3.6 0
844793 216.5 73.0668 203.797676 19.72 3.7 0
850749 270.5 76.0668 259.5844601 16.33 4.2 0
856363 305.2 76.0668 295.5687432 14.43 4.0 0
913976 214.4 64.0668 204.6040203 17.39 4.2 0
918322 258.6 79.0668 246.2161675 17.80 4.1 0
921739 295.6 91.0668 281.2226839 17.94 4.3 0
946083 185.6 72.3168 170.9316835 22.93 4.1 0
952200 314.8 84.3168 303.2980667 15.54 4.2 0
1289000 185.6 73.8168 170.2892834 23.44 3.9 0
1306641 242.8 79.0668 229.565418 19.00 4.0 0
1355961 210.3 65.5668 199.8176287 18.17 4.2 0
1360480 242.8 85.8168 227.1284149 20.70 4.0 0
1386006 191.8 76.8168 175.7453249 23.61 3.8 0
1387816 248.7 85.8168 233.4248634 20.19 4.1 0
1396015 248.7 72.0168 238.0446818 16.83 3.8 0
1405500 204.1 63.0168 194.1280323 17.98 4.0 0
1433000 195.6 64.0668 184.8101868 19.12 4.9 0
1433557 262.6 76.8168 251.1133992 17.01 4.5 0
28067 191.8 72.3168 177.6443651 22.15 3.6 0
29794 278.3 92.5668 262.4543342 19.43 3.8 0
37030 278.3 96.3168 261.101444 20.25 3.6 0
39700 305.2 91.8168 291.0613599 17.51 3.7 0
44800 272.5 72.3168 262.7290057 15.39 3.9 0
60575 128.4 56.5668 115.2681966 26.14 3.7 0
61542 310.9 84.3168 299.2482034 15.74 3.6 0
223345 177.2 61.0668 166.3450809 20.16 3.6 0
229052 195.6 79.8168 178.5739019 24.08 3.9 0
263470 170.9 67.0668 157.1905033 23.11 3.7 0
267038 266.5 82.0668 253.5493844 17.94 3.6 0
309101 305.2 85.8168 292.8865255 16.33 3.3 0
313592 258.6 83.5668 244.7254583 18.85 4.8 0
316027 208.2 74.5668 194.3888689 20.99 3.7 0
383600 258.6 76.0668 247.1594666 17.11 3.6 0
394351 220.6 61.0668 211.9792583 16.07 3.7 0
418500 256.6 80.4918 243.6485792 18.28 4.2 0

After verifying the data, I created a little least square fit program to use the results table.
#!/usr/bin/env python
########################################
# FIT a line to the data
dataFile = open('results.dat','r')
dataFile.readline()
dataFile.readline()
dataFile.readline()

wireoutAngle=[]
speedAngle=[]
for line in dataFile.readlines():
    fields = line.split()
    wireoutAngle.append((float(fields[2-1]),float(fields[5-1])))
    speedAngle.append ((float(fields[6-1]),float(fields[5-1])))

#
from Scientific.Functions.LeastSquares import leastSquaresFit

def linear(parameters,values):
    a,b = parameters
    x = values
    return (a*x + b)

#
initial = (-.1,32)
param,error = leastSquaresFit(linear,initial,wireoutAngle)
print " ----- WIREOUT TO ANGLE ----- "
print param,error
print 130, 130 * param[0] + param[1]
print 300, 300 * param[0] + param[1]

#
initial = (0,22)
param,error = leastSquaresFit(linear,initial,speedAngle)
print " ----- SPEED TO ANGLE ----- "
print param,error
print 3.4, 3.4 * param[0] + param[1]
print 4.8, 4.8 * param[0] + param[1]

That made a simple linear fit to the data. The results of a run look like this:
./cracks_cruise_layback_fit.py
 ----- WIREOUT TO ANGLE -----
[-0.051842754250418684, 31.465098411287943] 194.768779108
130 24.7255403587
300 15.9122721362
 ----- SPEED TO ANGLE -----
[-1.703222454552509, 25.827653874339632] 415.848028016
3.4 20.0366975289
4.8 17.6521860925

After creating two data files for gnuplot, I end up with plots that have the best fit line.
The code for the processing of the excel data from Jenna is in cracks_cruise_layback.py.html (raw python: cracks_cruise_layback.py)
Finally a quick calculation from the Santa Barbara Cruise...
ipython

In [1]: from math import *
In [2]: from units import rad2deg
In [3]: rad2deg(asin(30.75/50))
Out[3]: 37.951920289246438    # Wire angle seen
In [4]: 50 * (-0.051842754250418684) + 31.465098411287943
Out[4]: 28.87296069876701     # Wire angle predicted
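The angle1 and layback columns in the results table look like plain straight-wire geometry: angle = asin(vert/wireout) and layback = sqrt(wireout^2 - vert^2). A quick sanity check in Python against the first row (my own sketch, not part of cracks_cruise_layback.py):

from math import asin, degrees, sqrt

def wire_angle_deg(wireout, vert):
    # straight-wire assumption: vert is the vertical drop, wireout the wire length
    return degrees(asin(vert / wireout))

def layback(wireout, vert):
    # horizontal distance behind the ship under the same straight-wire assumption
    return sqrt(wireout**2 - vert**2)

# first row of the results table (shot 525106)
print wire_angle_deg(264.6, 79.8168)   # ~17.56, matches the angle1 column
print layback(264.6, 79.8168)          # ~252.27, matches the layback column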
10.30.2005 09:44
layback analysis - preliminary
Jenna has done just over 50 layback calculations for one cracks cruise
off of the eastern seaboard. I just did a short analysis of the
relationship between wireout, layback and speed. Here are the
resulting graphs. These results may still include a couple of bad points that I need to remove.
The impression I get is that the wire angle is more dependent on the amount of wire in the water than the speed of the ship.
The data file that results from my processing looks like this.
# 1      2        3        4            5       6
# shot   wireout  vert     layback      angle1  speed
525106 264.6 79.8168 252.2745299 17.56 3.8
528154 240.8 85.0668 225.273788 20.69 3.8
595405 218.5 62.5668 209.3505327 16.64 5.0
597545 254.7 82.8168 240.8598506 18.97 3.8
598722 254.7 79.0668 242.1167717 18.09 4.2
605053 254.7 76.0668 243.0759798 17.38 3.7
633320 185.6 73.0668 170.6124343 23.18 4.0
670983 258.6 76.0668 247.1594666 17.11 3.9
683526 278.3 89.5668 263.4932226 18.77 4.0
689024 256.6 73.8168 245.7532096 16.72 4.1
711636 191.8 82.3668 173.2135972 25.43 4.2
713188 244.8 82.3668 230.5271139 19.66 4.3
741727 212.4 62.5668 202.9757511 17.13 4.8
745538 272.5 62.5668 265.2199946 13.27 4.5
760000 272.5 73.8168 262.3115134 15.72 3.8

I need to clean up the code some before I share it, but it is getting there. Right now, my worry is that I had to remove any point from the shot-speed table where I had two shotpoint numbers that were the same. The current version of the code will keep scanning until it finds the next shot with speed and use that speed to match up with the wireout and layback.
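That scan-forward matching could look roughly like this (a sketch of the idea only, not the actual code; the dict maps shotpoint number to speed in knots):

def speed_for_shot(shot, speed_by_shot, max_offset=20):
    """Walk forward from a shotpoint until one with a recorded speed is found.
    Returns (speed, offset) or (None, None) if nothing turns up within max_offset."""
    for offset in range(max_offset + 1):
        if (shot + offset) in speed_by_shot:
            return speed_by_shot[shot + offset], offset
    return None, None

# toy example: no speed at shot 605053 itself, next recorded speed is 17 shots later
print speed_for_shot(605053, {605070: 3.7})   # -> (3.7, 17)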
10.29.2005 14:54
Using python to read /dev/random
I was thinking about using the python random class to make some simple
figures that need random particles dispersed in a uniform manner.
Since random needs a seed, I figured a good way to give it one is to
use /dev/random. It is easy to use the python struct class to turn
the random data that /dev/random generates into values of a certain
type. Here the 'i' tells struct that the random data is a 4 byte
integer.
#!/usr/bin/env python
import struct

class DevRandom:
    """http://www.python.org/doc/current/lib/module-struct.html"""
    def __init__(self):
        self.devrand = open ('/dev/random','r')
        return
    def getInt4(self):
        string = self.devrand.read(4)
        intval = struct.unpack('i',string)
        return intval[0]

r = DevRandom()
for i in range(100000):
    print r.getInt4()
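Since the whole point is seeding, here is a minimal sketch (mine, not part of the snippet above) of handing one of these values to the random module before scattering the particles:

import random

r = DevRandom()
random.seed(r.getInt4())   # seed the PRNG from /dev/random

# e.g. 1000 uniformly dispersed particle positions in the unit square
particles = [(random.uniform(0, 1), random.uniform(0, 1)) for i in range(1000)]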
10.29.2005 11:42
pmag-kds-py 0.7 released
Just released pmag-kds-py-0.7. This adds the xrd.py from my previous
post.
- xrd.py.1.html - documents the command line program
- xrd.html - documents the code
10.29.2005 11:28
xrd.py
xrd.py now lives in pmag-kds-py. It reads the data from the XRD
machine in the SIO Analytical Facilities.
xrd.py --cps-file=4-63b-cps.txt --show --no-cps --esd
10.29.2005 09:29
pmag-kds-py release 0.6
I have just released pmag-kds-py 0.6. There will be no more pmag-py
releases from me. pmag-kds-py contains only my python code. It is up
to Lisa as to if, when, and how any of pmag-kds-py gets merged into
pmag-py and when pmag-py will be released.
Just to reiterate, pmag-kds-py is a use-at-your-own-risk piece of software without support of any kind. If you are going to use it in your research, I expect you to verify that the calculations in here are correct. I am sure there are errors in the code.
http://schwehr.org/software/pmag-kds-py/
schwehr.org/software/pmag is no more.
10.29.2005 07:56
Oracle Express
This is pretty exciting (other than not being available for Mac OS X and
not being an open source DB)...
Oracle to offer free database
The database heavyweight on Tuesday is expected to announce the beta release of Oracle 10g Express Edition (Oracle Database XE), which will be generally available by the end of the year. It is targeted at students, small organizations and software vendors that could embed the Oracle database with an application.
The latest edition is the same as other databases in Oracle's lineup but is limited in usage. It can only run servers with one processor, with 4GB of disk memory and 1GB of memory.
I still don't know why Oracle would be any better than Postgres, but it is still great.
10.28.2005 17:01
XRD measurements
Question: "What do you do for your job?"
Answer: "I shoot laser beams at mud."
Today Jason and I did our first X-ray Diffraction (XRD) measurements. I know this is not a huge deal for all of you who do XRD all the time, but it is new for us.
SIO's Analytical Facility has a Scintag XDS 2000, which looks like this:
Bruce Deck set up the instrument for us and ran the software. Here are the settings that we used on the front.
I created the quick look sample by scraping the dried cube with a razor blade to fill up a 1 cm square patch around the dimple in the glass slide. The slides are too small for the sample holder, so they have to go in diagonally.
Once the system is closed up with the lead glass shield down and the interlock disabled, it is time to run the sample. The measurements are reported in degrees, which correspond to the emitter and receiver moving up in angle above the sample. Here is the emitter:
The software to control the system is DMSNT on windows... it unfortunately requires a floppy disk to copy the data off. Here is the software running.
Once the software is done running the system through your range (for me 2 to 50°) it can help you try to identify the materials present. Here is the original and baseline corrected data with an illite standard below. The middle graph shows the peaks that the machine has found. The machine currently has a 0.2° offset.
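Given that 0.2° offset, shifting the exported two-theta values before comparing to standards is just a subtraction. A rough sketch (the two-column angle/counts file format is my assumption, and the sign of the shift depends on which way the calibration is off):

offset = 0.2   # degrees two-theta, per the machine's current calibration

corrected = []
for line in open('4-63b-cps.txt'):
    fields = line.split()
    if len(fields) < 2 or fields[0].startswith('#'):
        continue   # skip comments and short lines
    two_theta, counts = float(fields[0]), float(fields[1])
    corrected.append((two_theta - offset, counts))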
Answer: "I shoot laser beams at mud."
Today Jason and I did our first X-ray Diffraction (XRD) measurements today. I know this is not a huge deal for all of you who do XRD all the time, but it is new for us.
SIO's Analytical Facility has a Scintag XDS 2000, which looks like this:
Bruce Deck setup the instrument for us and ran the software. Here are the settings that we used on the front.
I created the quick look sample by scraping the dried cube with a razor blade to fill up a 1 cm square patch around the dimple in the glass slide. The slides are to small for the sample holder, so it has to go in diagonally.
Once the system is closed up with the lead glass shield down and the interlock disabled, it is time to run the sample. The measurements are done in degrees which are actually the emitter and receiver moving up in angle above the sample. Here is the emitter:
The software to control the system is DMSNT on windows... it unfortunately requires a floppy disk to copy the data off. Here is the software running.
Once the software is done running the system through your range (for me 2 to 50°) it can help you try to identify the materials present. Here is the original and baseline corrected data with an illite standard below. The middle graph are the peaks that the machine has found. The machine currently has a 0.2° offset.
10.27.2005 19:24
PDF class to wrap pdflib_py
Here is pretty much the simplest case of a pdflib_py wrapper that I am
making. The API that comes with pdflib for python feels like it came
straight out of a C programmer's mouth. This is still pretty basic.
It only writes some text. It takes an awful lot of work just to get
that done in the original API. I am a little more able to cope with a
simple object oriented API.
#!/usr/bin/env python
import pdflib_py
import sys

class PDF:
    """OO Wrapper for pdflib_py"""
    def __init__(self,filename): #,size='letter'):
        self.pdf = pdflib_py.PDF_new()
        if -1 == pdflib_py.PDF_open_file(self.pdf,filename):
            print "Error: " + pdflib_py.PDF_get_errmsg(self.pdf) + "\n"
            sys.exit(1)  # was sys.exit(ams.EXIT_FAILURE), but ams is not defined here
        pdflib_py.PDF_begin_page(self.pdf,595, 842) # FIX: make page setup sane
        # Make a default font
        self.font = pdflib_py.PDF_load_font(self.pdf,"Helvetica-Bold","iso8859-1","")
        pdflib_py.PDF_setfont(self.pdf,self.font, 12)

    def set_info(self,creator,author=None,title=None):
        if None!=creator: pdflib_py.PDF_set_info(self.pdf, 'Creator', creator)
        if None!=author:  pdflib_py.PDF_set_info(self.pdf, 'Author', author)
        if None!=title:   pdflib_py.PDF_set_info(self.pdf, 'Title', title)

    def show(self,string,pos=None):
        if None!=pos:
            pdflib_py.PDF_set_text_pos(self.pdf,pos[0],pos[1])
        pdflib_py.PDF_show(self.pdf,str(string))

    def __del__(self):
        import pdflib_py # bad ref counting in pdflib_py?
        pdflib_py.PDF_end_page(self.pdf)
        pdflib_py.PDF_close(self.pdf)
        pdflib_py.PDF_delete(self.pdf)

import sys, datetime

if __name__ == '__main__':
    pdf = PDF("foo.pdf")
    pdf.set_info(sys.argv[0],'Kurt Schwehr','clay flocs')
    pdf.show(datetime.date.today(),(25,25))

Wait! Why am I trying to mess with pdflib? If I use matplotlib or gnuplot, I can still write in python and I do not need to deal with the pdflib_py API. Feel free to take this idea and run with it, but I'm not going to right now. I was just thinking about making a pdf figure and my brain went back to Feb of this year.
10.27.2005 18:45
Python destructors
How does python's destruction system work? I am cookin up a little
pdflib wrapper and am having some strange behavior on cleanup. Yes, I
do know that pdflib is up to 6.0.2 and fink only has 5.0.3. Soon, I
promise.
It seems like the pdf object was already garbage collected when __del__ was being reached, so the pdf cleanup is freaking out. This little python snippet proves that objects are not eaten by the great GC monster before my __del__ code gets run.
#!/usr/bin/env python

class destroy:
    def __init__(self):
        self.a = 1
    def __del__(self):
        print self.a
        del(self.a)
        print self.a

# Try it...
d = destroy()
del(d)

I expect the 2nd print of self.a to throw an exception. And that is what happens.
./destroy.py
1
Exception exceptions.AttributeError: "destroy instance has no attribute 'a'" in <bound method destroy.__del__ of <__main__.destroy instance at 0x38ce90>> ignored

Aha!!! I figured the bugger out. It was not my pdf object that was gone when my code got to __del__, rather pdflib_py was already gone. Doing another import did the trick, but is that okay? It feels REALLY WRONG to do this sort of thing. Shouldn't the pdf object be holding up the reference count on the import pdflib_py instance? Perhaps the pdflib python interface is missing a reference count? Or is this how python is supposed to work?
    def __del__(self):
        print 'running __del__', type(self.pdf), self.pdf
        import pdflib_py
        pdflib_py.PDF_end_page(self.pdf)
        pdflib_py.PDF_close(self.pdf)
        pdflib_py.PDF_delete(self.pdf)

I am leaning towards a missing refcount in the pdflib_py library. I tried this little snippet and it prints out 3.1415... just fine.
#!/usr/bin/env python
import math

class destroy2:
    def __init__(self):
        return
    def __del__(self):
        print math.pi

d = destroy2()

BTW, No luck trying to figure out what is wrong with mapserver. It seems to get confused and think that my mac needs -arch i386 and -arch ppc. Not gonna work like that. How to make the linker confused.
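Mapserver aside, a less surprising fix for the __del__ problem (my own sketch, not the wrapper above) is to stash the module reference on the instance when it is built, so teardown never depends on module-level names still being alive:

import pdflib_py

class PDF:
    def __init__(self, filename):
        self._pdflib = pdflib_py              # hold our own reference for teardown
        self.pdf = pdflib_py.PDF_new()
        pdflib_py.PDF_open_file(self.pdf, filename)
        pdflib_py.PDF_begin_page(self.pdf, 595, 842)

    def __del__(self):
        # use the saved reference; no re-import needed even during interpreter shutdown
        self._pdflib.PDF_end_page(self.pdf)
        self._pdflib.PDF_close(self.pdf)
        self._pdflib.PDF_delete(self.pdf)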
10.27.2005 08:24
GlobeXplorer
From this month's OGC News
... GlobeXplorer has over 600Tb of imagery content and they add as much as 1Tb more each week. This is mostly commercial imagery data (including over 2M square kilometers of Digital Globe content). It also includes other remotely sensed data sources and ancillary map data overlays that can be overlaid onto images (e.g. flood zones and data parcel data.) Most of the imagery consists of 6-inch and sub-meter resolution data from around the world, and includes some data at 3-inch resolution. GlobeXplorer has a very popular portal for individual consumers who might want to buy, for example, a high resolution image of their neighborhood, but most of their images are served to business customers via Web services. They serve over 2 million maps/day. ...
I just noticed that mapserver is in fink. When did that happen? Oh, June of this year. Wahoo! Now we need gdal python support and OpenEV. Except mapserver is not quite building on 10.3.9. Whoops.
OpenIOOS - Oceanographic data interchange.
10.26.2005 10:43
XRD info
I am hoping to do some X-Ray Diffraction work soon and have two
sources of info that look to be good starting points. Thanks to Jill
and Mike.
X-Ray Diffraction and the Identification and Analysis of Clay Minerals by Duane M. Moore and Robert C. Reynolds, Jr., 1989
and
Data Report: Normalization Factors for Semiquantitative X-Ray Diffraction Analysis, with Application to DSDP Site 297, Shikoku Basin by Michael B. Underwood, Nandini Basu, Joan Steurer, and Swati Udas
10.25.2005 18:32
Suggestions for anyone writing SEM acquisition software
I am sitting here typing in the data from last week's SEM run.
QUESTION: How can a sample have -1.24% carbon? Ahem.
This is a note to anyone designing SEM software. Please, oh please put these features into your software. The more info the data files, the better. This stuff could go in the tiff tags, in an XML file, or flat file even. Then I can write a little pdflib program to assemble a composite image. With some EDX machine readable data with position and time, I can even lay in the spot on the pdf.
- Stage position including height
- The scale of the image or width
- Focusing info
- Any other tunable beam parameters
- SEM mode (environmental, low vac, or high vac)
- time and date of the acquisition system
- Oh, and enable lossless tiff compression!
- Don't even let people write jpeg's. What is the point of saving bad data?
- Build a secure system that can publish the data to just me over the net. The SIO WinXP boxes have been forced completely off the network. What a waste of CD-R's.
- Why do I have to explicitly store images? I would like a mode that grabs frames at a slow frequency, then lets me explicitly mark the ones I need to pull out... Then we could do a pan/scan of most of a sample and build a large Fledermaus or iv model of the surface.
- A simple ASCII table of the spectra would rock
- Put in the time and stage position for linking to the SEM images
- Include the peak locations for the elements
- Include both the found and missing elements that the detector can sense (knowing that there is no detectable S is important too)
- Why not make EDX acquisition automatic? The moment the SEM stops moving, start sampling. Then let me know how many samples have been taken at the current spot.
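To make the wishlist concrete, here is a rough sketch of a flat-file sidecar an acquisition system could drop next to each image (the field names and values are my own invention, not anything the current software writes):

def write_sidecar(image_name, metadata):
    """Write a simple 'key: value' sidecar file next to an SEM image."""
    out = open(image_name + '.meta', 'w')
    for key in sorted(metadata.keys()):
        out.write('%s: %s\n' % (key, metadata[key]))
    out.close()

write_sidecar('54_BSE_100x_5.tif', {
    'stage_x_mm': 12.35, 'stage_y_mm': 4.70, 'stage_z_mm': 8.10,  # stage position incl. height
    'image_width_um': 1270.0,                                     # scale/width of the image
    'mode': 'low-vac BSE',                                        # SEM mode
    'acquired': '2005-10-21T14:30:00',                            # acquisition time and date
})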
We also need a free and open source system for best guess mineral identification. I bet a little WEKA setup could do a good matching job.
Back when I was working on MER, I had the IVS (Fledermaus folks) make a custom setup for microscope work. Both sides need to converge before the average user can just make a 3D model from an SEM and do analysis right in that tool. This is the kind of stuff that I wanted (and still want) to do with a Viz type architecture. Let me build a Rose diagram and/or 3D probability density map in the 3D environment quickly. Tag locations, annotate them, and then drop in additional data. Finally, let me make a publication quality figure and HD quality movie. All without killing myself in the process.
10.25.2005 13:27
pmag-0.5
Oh the joy of magnetic units... pmag 0.5 is out:
- Makefile: build docs correctly
- agfm.py: Add getSlope(data), --save-cgs
- agfm.py: High field susceptibility should be correct now
- ams.py: Hext now can normalize on sample volume and is in SI
- units.py: AGFM slope to SI vol normalize susceptibility. Ouch
- units.py: Oe2T() and T2Oe()
10.25.2005 10:47
Change in semantics with Hext class in ams.py
Up until now, the Hext class in ams.py has just returned the values
from Lisa's k15_hext. That means KLY-2 raw data which is in microSI
will end up generating a bulk susceptibility also in microSI. I am
switching the sense of the Hext class to convert to SI unless you
tell it not to. I have only used the KLY-2 at SIO, so I have no idea
what other Kappabridges return. Here is a quick example of the new
values in my Santa Barbara database. The value for this sample used
to be 145.95. The database desperately needs a table that tracks
units, but I won't get to do that anytime soon.
sqlite bpsio04.db \
    'select ams.bulksusc,ams_geo.bulksusc \
    from ams,ams_geo \
    where ams.samplename="bp04-1gw-s2-026" and ams.samplename=ams_geo.samplename' | tr '|' ' '

0.000145950012 0.000145950012

This change will be in the pmag-py 0.5 release.
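The unit change is just a factor of 1e-6, so a quick check of the new value against the old one:

old_microSI = 145.95
print old_microSI * 1e-6   # 0.00014595 -- matches the bulksusc values now stored in SI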
10.25.2005 08:42
Doxygen does python
This is fantastic news!!!! Doxygen now supports python. Thanks to
Dennis Benzinger for pointing this out to me.
Doxygen features now lists python as the second bullet.
Dennis pointed me to the doxygen formulas page. The best part is that it uses LaTeX (well that is great for Lisa and me), so I don't have to learn MathML right now.
Directions on doxygen for python
There is also pyfilt that mushes python code with doxygen tags into a C file that doxygen can use for documentation.
10.25.2005 08:07
Layback improvements
Jenna is working up some more layback info for the chirp. The hope is
to have a lookup table of ship speed to a range of layback angles.
Here is a little bit on the commands I am using to generate the
velocity input table for the work. Hopefully, the results of this
will end up in layback.py
The first task is to remove any bad xstar segy files. Neal's chirp always writes an ASCII header at the front that starts with a company entry like this:
C1 COMPANY:

A quick grep will list those files without the company entry and move them out of the way. They will confuse segysql.py and cause troubles.
mkdir Bad
for file in `grep --files-without-match 'C1 COMPANY' *.xstar`; do
    mv $file Bad
done

This gives me all the munch files:
ls Bad/
line183.sgy line350.sgy line396.sgy line400.sgy line406.sgy
line336.sgy line351.sgy line397.sgy line403.sgy line407.sgy
line347.sgy line352.sgy line398.sgy line404.sgy line408.sgy
line349.sgy line353.sgy line399.sgy line405.sgy line409.sgy

The most likely cause of the errors is bad tape reads or attempting to read past the end of data on a tape. But don't quote me on that since I am not an expert with tape drives!
Next, we need a short script to generate databases from the xstar files. Since sqlite 2 gets really slow with monster databases, I am going to break them into 5 groups of 100 files. Here is make-db.bash:
#!/bin/bash
segysql.py --noisy -d xstar -f lines001-099.db line[0-9].sgy line[0-9][0-9].sgy
segysql.py --noisy -d xstar -f lines100-199.db line1[0-9][0-9].sgy
segysql.py --noisy -d xstar -f lines200-299.db line2[0-9][0-9].sgy
segysql.py --noisy -d xstar -f lines300-399.db line3[0-9][0-9].sgy
segysql.py --noisy -d xstar -f lines400-499.db line4[0-9][0-9].sgy
Once the database is created, here is a sample line used to make the lookup table.
sqlite lines001-099.db 'select shotpoint,speed/10.0,x/3600.0,y/3600.0 \
    from segytrace' \
    | tr '|' ' ' > 001-099.shot-speed

Which generates stuff that looks like this with speed in knots.
bzcat 001-099.shot-speed.bz2 | head
4729 3.6 -74.6377777777778 36.8233333333333
4729 3.6 -74.6380555555556 36.8233333333333
4730 3.6 -74.6380555555556 36.8233333333333
4730 3.6 -74.6380555555556 36.8233333333333
4731 3.6 -74.6380555555556 36.8233333333333
4731 3.6 -74.6380555555556 36.8233333333333
4732 3.6 -74.6380555555556 36.8233333333333
4732 3.6 -74.6380555555556 36.8233333333333
4733 3.6 -74.6380555555556 36.8233333333333
4733 3.6 -74.6380555555556 36.8233333333333

Which looks like it is off the eastern seaboard of the US like it is supposed to be.
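For the eventual speed lookup, a minimal sketch (mine, not part of segysql.py) that loads one of these shot-speed files into a dict keyed on shotpoint:

speed_by_shot = {}
for line in open('001-099.shot-speed'):
    fields = line.split()
    shot, speed = int(fields[0]), float(fields[1])
    speed_by_shot[shot] = speed    # repeated shotpoints just overwrite with the same speed

print speed_by_shot[4729]          # -> 3.6 knots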
10.24.2005 22:52
Missing Firefox feature
I really wish that I had a button to enable and disable Javascript and
animated gifs on a per page basis. Those annoying animated ads
really crank up the CPU usage and eat through the battery on my
laptop.
10.22.2005 17:38
compositing EDX composition
A little more work with Illustrator provided an easy solution to the
maximum zoom problem with these super detailed images - make the lowest
resolution image much larger! All I have to do is scale up the first
image by 400% and start from there. Now I have no problem with 10000x
images. I also overlaid one of the comparison graphs from the EDX
and highlighted where the measurements were taken and the spot size.
Wacky Towns: Paducah, Possum Trot and Monkey's Eyebrow. I've seen the exit for Boring, OR. which is not too far from Drain, OR.
10.22.2005 16:05
SoCal coastal bluffs provide beach sand
Coastal Bluffs Provide More Sand To California Beaches Than Previously Believed
... Ashford said decades-old photographs of the Southern California coast taken from the ground and the air also have documented the steady pace of erosion. However, he said the photographs lack the precision and accuracy of the laser scanning technique called LIDAR, an acronym for light detection and ranging. Ashford said the 3-D maps generated by LIDAR permitted Young to calculate the unexpectedly high volume of bluff material that has fallen onto beaches during the study period. ... At the wave washed western edge of the campus, Neal Driscoll, a geology professor at UCSD's Scripps Institution of Oceanography, and graduate student Jennifer Haas have studied the same 50-mile stretch of beach north, but with a completely different technique. The Scripps team used a mineralogical fingerprinting technique. They compared sand grains collected from beaches in the study area to grains taken from coastal bluffs, rivers, and from dredged material that the San Diego Regional Beach Sand Project used to replenish the region's disappearing beaches. ...
http://engrish.com/
10.22.2005 14:42
SEM drilldown in resolution using Illustrator
I did a quick test of trying to do increasing resolution insets so
that I know where each successive SEM image is located. This works,
but I did hit a limitation with Illustrator where it would not let me
zoom in beyond 6400%. This is definitely a problem. It was a lot
easier to place images when each successive image doubled the
resolution. Then I could just scale by 50% repeatedly until I had
the correct size. When they were not powers of 2, I had to do the
scaling by eye. Not as fast or as accurate.
10.22.2005 14:02
pmag-py 0.4 released
Not that much new in pmag-py with revision 0.4, but here it is:
- Makefile: Support for doing diffs between versions
- agfm.py: More entries from AGFM header - step and range
- agfm.py: Added --save-si, --mass (for high field chi(X)), --density
10.22.2005 11:57
scanning electron microscope
Thanks to Evelyn York for helping me through the whole process.
Yesterday was my first real time using the SIO Analytical Facilities SEM. I have sat in before while Kevin and Rodrigo ran some of their samples through, but that is not the same as going through it myself. I ran three samples through yesterday of mud that has been dried. The first task was to slice my cubes into slabs that were as flat as possible and then mount them on pedestals that have a special double stick tape.
We used the low vacuum mode on the FEI Quanta 600 E-SEM. Images were taken in both BSE and SE mode. Here is an image from the BSE mode of pyrite framboids (typical in somewhat reducing sediments).
The instrument also has an Energy Dispersive X-ray spectroscopy mode (EDX) that is able to do composition work. This mode gives an estimate of abundance in a 5 micron spot. You can also get quick look comparisons between two spots like this:
I did not have time to do a compositional map, but that looks like a very useful capability.
I was hoping that the tif images from the SEM would have some of the instrument's status in the tag headers. Things like knowing the stage position would be really great (especially for automated visualizations), but no such luck. Here is what I get from tiffdump. tiffdump is a part of the libtiff package.
tiffdump 54_BSE_100x_5.tif
54_BSE_100x_5.tif:
Magic: 0x4949 Version: 0x2a
Directory 0: offset 974609 (0xedf11) next 0 (0)
SubFileType (254) LONG (4) 1<0>
ImageWidth (256) SHORT (3) 1<1024>
ImageLength (257) SHORT (3) 1<948>
BitsPerSample (258) SHORT (3) 1<8>
Compression (259) SHORT (3) 1<1>
Photometric (262) SHORT (3) 1<1>
Make (271) ASCII (2) 22
Artist (315) ASCII (2) 1<\036>
StripOffsets (273) LONG (4) 1<32>
StripByteCounts (279) LONG (4) 1<970752>
SamplesPerPixel (277) SHORT (3) 1<1>
Resolution (282) RATIONAL (5) 1<200>
YResolution (283) RATIONAL (5) 1<200>
ResolutionUnit (296) SHORT (3) 1<1>
Software (305) ASCII (2) 13
33560 (0x8318) LONG (4) 1<974493>

And here is a little bit nicer display of what is in the file:

tiffinfo 54_BSE_100x_5.tif
TIFFReadDirectory: Warning, 54_BSE_100x_5.tif: invalid TIFF directory; tags are not sorted in ascending order.
TIFFReadDirectory: Warning, 54_BSE_100x_5.tif: unknown field with tag 33560 (0x8318) encountered.
TIFF Directory at offset 0xedf11
  Subfile Type: (0 = 0x0)
  Image Width: 1024 Image Length: 948
  Resolution: 200, 200 (unitless)
  Bits/Sample: 8
  Compression Scheme: None
  Photometric Interpretation: min-is-black
  Artist: "\036"
  Make: "SIS, D-48153 Muenster"
  Samples/Pixel: 1
  Planar Configuration: single image plane
  Software: analySIS 3.2
  Tag 33560: 974493
10.20.2005 19:11
Plotting both the raw and flattened hysteresis
I wanted to be able to plot both the raw AGFM and slope corrected
plots on the same graph. The slope of the raw AGFM is related to the
high field susceptibility and the flattened is what people are used to
seeing. In a future post, I will hopefully explain how to get the
normalized high field susceptibility (k(hf) or Chi_hf... need
MathML!).
agfm.py is becoming a beast/garbage can. It now has --save-si in addition to --save-flattened-si. The --save-si saves the raw data in a gnuplot style ASCII format. There are some '#' comments at the top with AGFM header values and others that give the units. Then the data is space delimited columns. Hopefully that should be easy for others to parse. Here is how I generated the files to plot:
agfm.py -d .2 --save-si --save-flattened-si bp04-1gw-s2-026c.raw

Then I use gnuplot to see the results:
set xlabel 'mT'
set ylabel 'Am^2'
plot 'bp04-1gw-s2-026c.raw.flat.si' with l, \
     'bp04-1gw-s2-026c.raw.si' with l

Which results in a plot like this.
Finally, here is a little bash script that does the whole deal. It uses Lisa's hystcrunch that is in her pmag package. You should verify for yourself that the units and results are the same (or at least within fit error). Then it uses agfm.py to produce gnuplot files. Finally, I just noticed that gnuplot has svg output. Woo hoo! It is about time. I tried it and it imports into Adobe Illustrator without the problems that come with the pdf terminal. Yes, I am very excited.
#!/bin/bash
for file in *.raw; do
    echo
    echo "======== $file ======="
    agfm.py --save-flattened --save-si --save-flattened-si --agfm-format -d .2 $file
    ls $file $file.flat
    hystcrunch -mal $file < $file | plotxy; mv mypost $file.ps ; ps2pdf $file.ps
    hystcrunch -mal $file.flat < $file.flat | plotxy; mv mypost $file.flat.ps; ps2pdf $file.flat.ps
    cat << EOF | gnuplot
set xlabel 'mT'
set ylabel 'Am^2'
set terminal gif
set output "$file.si.gif"
plot 'bp04-1gw-s2-026c.raw.flat.si' with l, 'bp04-1gw-s2-026c.raw.si' with l
set terminal pdf
set output "$file.si.pdf"
replot
set terminal svg
set output "$file.si.svg"
replot
EOF
    echo $file ... done.
done
10.19.2005 14:04
Hysteresis plots - Day and Tauxe
(This is my 2nd try at this since the first draft went up in smoke.)
I wanted to create the Day Plot and Lisa's squareness plot for some of my hysteresis data (from the AGFM). I reverted back to gnuplot for these. From a previous post, I have created a small text file that has the hysteresis parameters generated with hystcrunch in Lisa's pmag-1.9. flattened.hyst.dat:
# flattened.hyst.dat from --dist-percent=0.2
# filename sid s Ms Mr*Ms Bc Bcr Bcrn*1e-3
bp04-1gw-s2-026.raw.flat  -.602E-37 0.599E-07 0.130E-07 0.014 0.039 0.107
bp04-1gw-s2-032.raw.flat  -.602E-37 0.106E-06 0.217E-07 0.013 0.038 0.109
bp04-1gw-s2-053.raw.flat  -.602E-37 0.225E-07 0.314E-08 0.007 0.035 0.106
...

I am trying to match Figure 13 in Tauxe et al. 2002 on micromagnetic modeling. Here is a bit of my Makefile turned into a shell script...
#!/bin/bash
# Columns in flattened.hyst.dat for awk
# Ms  = 3
# Mr  = 4
# Bc  = 5
# Bcr = 6
# (awk fields use single $ here; the Makefile version needs $$)

# dayplot:
egrep 'bp04-1gw' flattened.hyst.dat > hyst-1gw.dat
echo '# Bcr/Br Mr/Ms (aka squareness)' > hyst-1gw-day.dat
awk '{print $6/$5,$4/$3}' hyst-1gw.dat >> hyst-1gw-day.dat
./dayplot.gp

# square-coerce:
egrep 'bp04-1gw' flattened.hyst.dat > hyst-1gw.dat
echo '# Bc(mT) Mr/Ms (aka squareness)' > hyst-1gw-squarec.dat
awk '{print $5*1000.,$4/$3}' hyst-1gw.dat >> hyst-1gw-squarec.dat
./squarec.gp

A little awk to make Bcr/Br versus Mr/Ms for the Day plot. Then the 1000 multiplier in the second awk makes Bc be in mT. I then have two gnuplot files to make the plots. Here is the dayplot.gp:
#!/usr/bin/env gnuplot
set xlabel 'Bcr/Bc'
set ylabel 'Squareness (Mr/Ms)'
set xrange [0:9.5]
set yrange [0.0:0.7]
set label 1 'SD' at 1,.55
set label 2 'PSD' at 2,0.4
set label 3 'MD' at 6,.05

It first sets the axis labels and the range of the plot. The set label puts three text labels into the plot. The coordinates are the same as the data. I don't know how to draw vertical lines with gnuplot, so I do it with the end points in a file like day.l1:
set terminal pdf
set output 'bpsio2004-dayplot.pdf'
plot 'hyst-1gw-day.dat','hyst-2gw-day.dat','hyst-4gw-day.dat', \
     0.5, 0.15, 'day.l1' with l, 'day.l2' with l
1.5 0
1.5 0.7

Then for squarec.gp:
#!/usr/bin/env gnuplot
set xlabel 'Bc (mT)'
set ylabel 'Squareness (Mr/Ms)'
set xrange [0:110]
set yrange [0.0:1.0]
set label 30 'MD' at 1,.02
set label 40 'MD+stress' at 40,.25
set label 50 'USD/SP' at 32,.35
set label 60 'vortex' at 6,.2
set label 70 'CSD' at 5,.8    # Cubic
set label 80 'flower' at 5,.70
set label 81 'flower' at 21,.4
set label 90 'complicated shapes' at 45,.65
set label 100 'uniform' at 56,.52
#
set terminal gif
set output 'bpsio2004-squarecplot.gif'
plot 'hyst-1gw-squarec.dat','hyst-2gw-squarec.dat','hyst-4gw-squarec.dat',\
     'tauxe2002-fig13.dat' with l
#
set terminal pdf
set output 'bpsio2004-squarecplot.pdf'
replot

The same basic idea, but there is a trick: tauxe2002-fig13.dat. This file contains a series of lines separated by a blank line that were digitized from Lisa's paper. I converted the pdf to a png and then used g3data (much like the old DataThief) to grab the lines.
Looks like I made it without any accidents. I hope that made some sense. matplotlib seems to be much more powerful than gnuplot, but I know gnuplot really well and am still struggling with matplotlib.
10.19.2005 13:42
plotdi colored plotxy
I just figured out how to quickly color a stereonet (equal area plot)
that comes from Lisa's plotxy program. This is adapted from my
Makefile so that it is easier to understand. The $$ and ";\" symbols
required with GNU make to have embedded shell scripts get a little confusing.
#!/bin/bash
ams-eigvector-plots.py
CORES="1 2 4"
for core in ${CORES} ; do
    echo $core
    plotdi < $core-ams-max.di > $core-ams-max.di.plotxy
    perl -p -e 's/symb 19 \.2/symb 0 \.075\ncolor 3\n/' $core-ams-max.di.plotxy | plotxy
    mv mypost $core-ams-max.ps
    ps2pdf $core-ams-max.ps
done

The .py python program just creates ASCII dec inc table files that can be passed into plotdi. The big trick here is to use a perl search and replace on the commands that plotdi outputs for plotxy; those commands are ASCII. The symbol command controls the shape of the points of the stereonet. I need to replace the black dots with squares and circles. Additionally, I need to change the color to match those used for AMS eigenvector plots. Here is a little table:
AMSEig   Eig   Color    ColorNum   Symbol     Plotxy symbol number
Min      V3    Red      2          Circle     17
Int      V2    Yellow   7          Triangle   1
Max      V1    Blue     3          Square     0

For the Maximum eigenvector, the perl line replaces the symb 19 .2 with symb 0 .075. The 0 makes the symbols be squares and the .075 makes them much smaller. Then the color 3 sets the color for the rest of the graph. This messes up the color for the error ellipse on the right hand part of the graph, but I crop that part anyway.
10.19.2005 10:23
Quad G5 mac
From the apple store...
The Power Mac G5 Quad, with a suggested retail price of $3,299 (US), includes:
- two dual-core 2.5GHz PowerPC G5 processors;
- 512MB of 533 MHz DDR2 SDRAM expandable up to 16GB;
- 250GB Serial ATA hard drive running at 7200 rpm;
- NVIDIA GeForce 6600 with 256MB of GDDR SDRAM
- three open PCI Express expansion slots: two 4-lane slots and one 8-lane slot;
- dual Gigabit Ethernet ports;
- 16x SuperDrive with double-layer support (DVD+R DL/DVD±RW/CD-RW); and ships with Mighty Mouse and Apple Keyboard.
- make -j5
- 4 node rendering cluster (blender, bluemoon, maya, renderman, lightwave, povray)
- seismic data crunching
- monster volumetric rendering
- SGI, how about porting performer?
- SETI@Home
- and...
I remember when I got a dual 200 MHz R4K Octane MXI in 1997 for something like $50K!!!! (yikes) It really changed how I thought about computing and 3D graphics. For an order of magnitude less money, I bet this will do the same to many folks.
10.19.2005 08:00
Journaling while you work
Hmmm... Journaling by
Steven Douglas Olson on the O'Reilly Blogs. He talks about keeping a
paper journal while you work. I keep a couple different ones at this
point and go back and forth all the time.
- paper based - I have a hard bound, no lines drawing book as my paper based note book. It is easy to sketch in. I add a table of contents to the front and it is really easy to flip through. Sometimes I scan in the log book when it is really important like being used as a cruise log (Warning... that was my first attempt at a photoshop built web page and it sucks)
- blog based - This is where I try to write in a style where anyone (well, almost) can see what I am doing, hopefully using half-way passable English. Indexed by Google, grepped by me. This is even more valuable (to me) now that I include images when relevant.
- text file - My true garbage can of notes. I have one file per year and paste in whatever junk I want. This is not released to the world so it gets little if any actual writing, but lots of command lines and web links. Grep, grep, grep...
- readme's... - Many of my sub-projects have readme type files where very specific information goes. Sometimes this is good, other times it feels like it is diluting the power of my "one big textfile". Too many places to grep.
- revision control logs - I try to always write good logs, but it is rare that I go back to them. I need to start using some better tools! This is mostly to keep myself ready to work in groups again where reading the cvs/svn logs is a regular occurrence. Maybe I need a bridge to a news feed of log entries. That would be great! (or could really be annoying)
- wiki's - (Aurelio's fav) I think I have made 3 total entries in wiki's in my entire life. They sound good, but I think I like the other methods more, at least for now...
GMT tip from Paul Wessel:
Use pscoast -M to extract ASCII lon,lat files from the GMT cdf files. Alternatively, get the gshhs*.b files and use the gshhs programs in the gshhs supplement. -pw

http://www.seacoastonline.com/news/10192005/news/68684.htm
... The 14-ton vessel includes a radio, a hand-held global positioning system, extra batteries for the GPS, two sets of nautical maps and personal cell phones.
A sighting of flares was reported approximately 30 miles off Portsmouth Harbor Monday night, but Newlin said the Coast Guard searched the area with no results. There have been no reports of debris washed up on shore or floating on the water. ...
10.19.2005 07:25
Understanding how people work
In addition to wanting to know what apps are running on people's desktops
for how long and which is in focus, I would also like this:
http://www.oreillynet.com/pub/wlg/8137
Screen Use Monitor: leave it running for a few days, then take a look at the report it generates. The app would show you with diagrams and animated screenshots how you use your computer's display; where you tend to be working most, and which bits of the display tend to get neglected.
How do we actually use our computers? Once we get a handle on that, how do we improve the experience for scientists?
On a separate note... USGS to Contract Out Topo Maps
10.18.2005 19:48
Wireless at sea
Tackling the last, great
untapped wireless opportunity [gizmag.com]
... with all of the growth, ubiquity and connectivity, one wireless marketplace remains virtually untapped: the open seas. SeaMobile is a new venture launched in 2005 to focus on enabling wireless phones and computers to work at sea. SeaMobile uses scalable, flexible and IP/software based technology to create a user experience that is optimal for private yacht owners, cruise ships, ferries, offshore platforms, container ships and any other sea-going craft. Coverage can include the entire ship or select areas to enable "quiet zones". The technology enables people to use their own cellular phones and Wi-Fi enabled computers transparently while anywhere at sea with charges for use of the system appearing on customers' regular monthly bill.
Errr... why is "turned off" in quotes?
SeaMobile has established more than 200 separate roaming agreements with wireless carriers and systems around the world to establish transparent connectivity for GSM, CDMA and GPRS services. The SeaMobile system is "turned off" within 12 nautical miles of land to avoid interference with land-based wireless carrier networks.
10.18.2005 19:17
Woz and History Channel
Just saw Woz on TV...
http://www.historychannel.com/invent/ - Win $25K.
Has the next BIG IDEA - been your idea - but someone else got it to market first? Have you spent your time pondering, tinkering, building, or crafting the next Modern Marvel of the 21st century - but weren't sure what to do with it? Now you have a chance to be recognized and win a $25,000 grant for sharing your BIG IDEA and expressing your inventive spirit.
10.18.2005 17:58
Nature has RSS feeds
Nature
RSS Feeds... it is not the greatest, but it is definitely a
start. Where is the feed for earth and planetary sciences?
10.18.2005 17:48
Sat imagery to help people
Quake
aid hampered by ban on web shots [nature.com]
Open-access satellite images are revolutionizing responses to disasters. Yet the government of Pakistan has forced aid agencies to remove pictures of earthquake devastation from the Internet. ... But Anne Wright, a computer scientist at Ames Research Center who was involved in mapping the damage after Katrina, says the lack of images of Pakistan is hampering the Global Connection's efforts. "We haven't played a similar role with the Pakistan earthquake because we've had no data sources to work with. After Hurricane Katrina, the National Oceanic and Atmospheric Administration made their post-hurricane imagery publicly available. For Pakistan, the only available imagery we're aware of is commercial satellite photography." ...
http://www.kathryncramer.com/
10.18.2005 10:28
Here comes the rain
Wow. It is actually raining reasonably hard here in La Jolla. I hope
the hills turn green soon and that there is little flooding or
mudsliding.
10.18.2005 10:17
mix and match agfm
I don't really trust my agfm program to correctly calculate hysteresis
parameters, so I am using Lisa's hystcrunch, but using agfm.py to
flatten the curves. I have converted my Makefile into a bash script here.
 1  #!/bin/bash
 2
 3  METHOD="--dist-percent=0.2"
 4  # flattened:
 5  for file in bp*.raw; do echo $file
 6      agfm.py $file ${METHOD} -s ; mv $file.flat $file.flat.gnuplot
 7      agfm.py $file ${METHOD} -s --agfm-format
 8  done
 9
10  # hystcrunch-flattened:
11  for file in bp*.raw.flat; do
12      echo $file
13      hystcrunch -mal $file < $file | plotxy
14      mv mypost $file.ps
15      convert $file.ps $file.png
16  done
17
18  # hystcrunch-flattened-data:
19  rm -f flattened.hyst.dat
20  touch flattened.hyst.dat
21  echo "# flattened.hyst.dat from ${METHOD} " >> flattened.hyst.dat
22  echo "# filename sid s Ms Mr*Ms Bc Bcr Bcrn*1e-3" >> flattened.hyst.dat
23  for file in bp*.raw.flat; do
24      echo $file
25      echo -n "$file " >> flattened.hyst.dat
26      hystcrunch -map < $file >> flattened.hyst.dat
27  done
28
29  # multipage:
30  rm -f multipage.{ps,pdf,plotxy}
31  touch multipage.ps
32  for file in bp*flat.ps; do
33      echo $file
34      cat $file >> multipage.ps
35  done
36  ps2pdf multipage.ps
37  open multipage.pdf

The end result of this script is a data table like this:
# flattened.hyst.dat from --dist-percent=0.2
# filename sid s Ms Mr*Ms Bc Bcr Bcrn*1e-3
bp04-1gw-s2-026.raw.flat  -.602E-37 0.599E-07 0.130E-07 0.014 0.039 0.107
bp04-1gw-s2-032.raw.flat  -.602E-37 0.106E-06 0.217E-07 0.013 0.038 0.109
bp04-1gw-s2-053.raw.flat  -.602E-37 0.225E-07 0.314E-08 0.007 0.035 0.106
bp04-1gw-s2-062.raw.flat  -.602E-37 0.362E-07 0.395E-08 0.006 0.034 0.111
...

For a change, I used code2html with '-n' to give line numbers. At line 3, I say that I want to use 20% of each line segment for flattening the curve. The default of 5% was very susceptible to noise.
Then in line 10 I make a png of each hystcrunch image.
In the data section (line 18) I call hystcrunch in the no plot mode (-p) so that it will pass back the parameters. I don't give anything for -l so the sid field is empty.
Finally in the section starting at line 29, I use plotxy to make a multipage pdf. plotxy has already made a postscript (ps) file for each hysteresis loop. Then I just cat these together into a large postscript file and ps2pdf magically makes a 22 page pdf. To make 8 samples per piece of paper, I print them in acrobat with 4 per page and double sided.
10.18.2005 08:45
DoD wireless encryption requirement
I am noticing that the RSS feed from GISuser.com is more like a stream
of fliers for commercial packages. Not what I was hoping for. But
there are still some interesting bits of information hiding in there.
http://www.gisuser.com/content/view/7160/ - Thales and LightPointe Form Alliance to Deliver Secure Outdoor Wireless Solutions
... This DoD mandate requires that all unclassified wireless transmission must be encrypted with Federal Information Processing Standards (FIPS) 140-2 Level 2 certified hardware utilizing the AES encryption algorithm. This alliance will facilitate an accelerated deployment of secure outdoor wireless solutions within the DoD agencies by enabling the use of low-cost commercial off-the-shelf (COTS) communication products coupled with high-bandwidth LightPointe optical wireless solutions for seamless, cost-effective connectivity. LightPointe products operate at OSI Layer 1 and are easily integrated with Thales' Datacryptor(R) family of encryption devices to meet FIPS compliance. ...
10.17.2005 10:41
pmag-0.3 released
pmag-py now has pdfposter for taking a bunch of images and laying them
into a large poster. It needs lots of work, but the basics are there.
- Makefile: man pages and html versions too
- agfm.py: --{x,y}{min,max}, --ms-norm for M/Ms plots
- pdfposter.py: New program for large poster of figures
10.17.2005 00:29
pdflib_py png pasting
Here is a slightly more complicated program (actually an ugly hackish
one). This illustrates how to scale and write a set of png images
into a large poster. It's really not that hard once you have a
template. There is some trouble with worrying about image size and
column wrapping that I have done a terrible job handling. Read
through and try this one after looking at my previous post. When you
run it, just pass in a list of png files.
It would be much smarter to watch the maximum image width for each column and jump right just enough. Also, I need to label each image with its file name so that I don't get totally turned around. How can I denote groups/collections of input images in optparse?
#!/usr/bin/env python
__version__ = '$Revision: 1.2 $'.split()[1]
#
import math,sys
from pmag_verbosity import BOMBASTIC,VERBOSE,TRACE,TERSE,ALWAYS
from pmag_version import VERSION
import datetime
from pdflib_py import *
#
points36Inches = 2592
points60Inches = 4320
#
pointsWide = points36Inches
pointsHigh = points60Inches
#
if __name__ == "__main__":
    from optparse import OptionParser
    myparser = OptionParser(usage="%prog [options] hyst1.raw hyst2.raw hyst3.raw ...",
                            version='%prog '+__version__)
    (options,args) = myparser.parse_args()
    #
    filename = 'agfm-poster.pdf'
    pdf = PDF_new()
    if -1 == PDF_open_file(pdf,filename):
        sys.exit('Unable to open new pdf file '+filename)
    PDF_set_info(pdf, 'Creator', 'agfm-poster.py')
    PDF_set_info(pdf, 'Author', 'Kurt Schwehr')
    PDF_set_info(pdf, 'Title', 'AGFM of BPSIO-2004 cubes')
    #
    PDF_begin_page(pdf,pointsWide,pointsHigh)
    #
    font = PDF_load_font(pdf,"Helvetica-Bold","iso8859-1","")
    PDF_setfont(pdf,font, 42)
    PDF_set_text_pos(pdf,750,pointsHigh - 50)
    PDF_show(pdf,"BPSIO 2004 -- AGFM -- "+ str(datetime.date.today()))
    #
    sampleHeight = 210
    sampleWidth = sampleHeight
    pointsTop = pointsHigh - 150
    y = pointsTop
    y-=36
    columnTop = y
    x = sampleHeight
    #
    for file in args:
        #
        print 'file = ',file
        if y < 0:
            print "Ran off the bot of the page!  Wrapping to the next column"
            y = columnTop
            x += sampleWidth
            print '  now at ',x,y
        pdfImage = PDF_load_image(pdf,"auto", file, "")
        #
        if -1 != pdfImage:
            print "Putting image at ",x,y
            PDF_fit_image(pdf,pdfImage,x,y,"scale 0.2")
            PDF_close_image(pdf,pdfImage)
            y -= sampleHeight
        else:
            sys.exit("ERROR: unable to open " + file)
    #
    PDF_end_page(pdf)
    PDF_close(pdf)
    PDF_delete(pdf)
10.16.2005 23:50
pdflib_py - creating pdf's with python
Here is a pretty basic example for how to use pdflib to put some text
in a new pdf document. Once I get the basic pdf generated, it takes
some work to get everything I want in there, but it is not too hard.
#!/usr/bin/env python
import sys
import datetime
from pdflib_py import *

points32Inches = 2304
points36Inches = 2592
points60Inches = 4320
points72Inches = 5184

pointsWide = points36Inches
pointsHigh = points60Inches

if __name__ == "__main__":
    filename = 'agfm-poster.pdf'
    pdf = PDF_new()
    if -1 == PDF_open_file(pdf,filename):
        sys.exit('Unable to open new pdf file '+filename)
    PDF_set_info(pdf, 'Creator', 'agfm-poster.py')
    PDF_set_info(pdf, 'Author', 'Kurt Schwehr')
    PDF_set_info(pdf, 'Title', 'AGFM of BPSIO-2004 cubes')

    PDF_begin_page(pdf,pointsWide,pointsHigh)

    font = PDF_load_font(pdf,"Helvetica-Bold","iso8859-1","")
    PDF_setfont(pdf,font, 42)
    PDF_set_text_pos(pdf,750,pointsHigh - 50)
    PDF_show(pdf,"Today's date: "+ str(datetime.date.today()))

    PDF_end_page(pdf)
    PDF_close(pdf)
    PDF_delete(pdf)
10.16.2005 10:28
pmag-py 0.2 - agfm.py gets much better
I now have agfm.py doing a lot more than before, so it is time to release
pmag-py 0.2. BTW, how do I keep the unittest junk out of the html
pydoc file? I am only doing an import unittest! (One possible dodge is
sketched after the change list below.)
- agfm.py: lots of cleanup, added leastsquares flattening
- agfm.py: Added lineFrom2Pts, findHystParams for Ms, Mr, Bc
- agfm.py: --show-flat, --plot-flat-file, --save-flattened
- agfm.py: --agfm-format, --dist-counts, --dist-percent
- agfm.py: Labeling of plots
- units.py: Magnetic unit conversions
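Here is the kind of dodge I have in mind for the pydoc/unittest clutter; this is just a sketch and I have not confirmed that pydoc actually stops picking it up:

#!/usr/bin/env python
# Sketch: keep unittest out of the module's global namespace so pydoc has
# less to chew on.  Only import it when actually running the tests.

def _runtests():
    import unittest

    class SanityTest(unittest.TestCase):  # hypothetical stand-in test
        def test_trivial(self):
            self.assertEqual(2 + 2, 4)

    suite = unittest.makeSuite(SanityTest)
    unittest.TextTestRunner(verbosity=2).run(suite)

if __name__ == '__main__':
    _runtests()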
10.16.2005 08:24
No RSS feed for Marine Geology
I got a response back from Marine Geology saying that there is no way
for them to provide an RSS feed of their title and abstracts. So to
all who are reading, if you have any programming interns or such,
please consider having them spend a little time writing a web scraper
and then spit out an RSS feed somewhere. I'd do it myself, but I
can't justify the time until I finish.
10.14.2005 23:38
Lisa's new book on the web!
Check out Lisa's 2005 paleomagnetic book here: http://earthref.org/MAGIC/books/Tauxe/2005/
This book is a series of 16 lectures presented as individual chapters. Each chapter assumes knowledge of the previous chapters so the chapters should be read in order. Chapter 1 begins with a review of the physics of magnetic fields. Maxwell's equations are introduced where appropriate and the magnetic units are derived from first principles. The conversion of units between cgs and SI conventions is also discussed and summarized in a handy table. Chapter 2 turns to the Earth's magnetic field, discussing the geomagnetic potential, geomagnetic elements, and the geomagnetic reference fields. The various magnetic poles of the Earth are also introduced. Chapters 3 to 7 presents the rudiments of rock magnetism necessary to do paleomagnetism. The most important aspect of rock magnetism to the working paleomagnetist is how rocks can become magnetized and how they can stay that way. In order to understand this, we begin with a discussion of what the origin of magnetism is in rocks, including induced and remanent magnetism, anisotropy energy, and mechanisms of magnetization. The magnetic properties of geologically important minerals are described and tabulated, as well as tools for their recognition. Chapter 8 provides a brief tour of environmental magnetism and what I have called "applied rock magnetism''. The nuts and bolts of how to obtain paleomagnetic samples and how to treat them in the laboratory is the topic of Chapters 9 and 10. They cover sampling strategy and routine laboratory procedures, including measurement and demagnetization. Techniques for obtaining both directional and paleointensity information are outlined. Chapters 11 and 12 describe what to do with directional data. They begin with a thorough discussion of Fisher [1953] statistics, describing many useful tests. They then illustrate how to decide whether a particular data set is likely to be Fisherian and Chapter 13 introduces a bootstrap procedure for treating data that are not. Included are many bootstrap tests that perform similar functions to the Fisher based tests that have proved so useful over the years. Magnetic tensor data, primarily anisotropy of magnetic susceptibility are useful in a number of geological applications. The acquisition and treatment of such data is described in Chapter 13. Traditional (Hext, 1963) and more modern (bootstrap) approaches are presented in some detail. Tests for discrimination of eigen-parameters are developed and illustrated with many examples. The final chapters, Chapter 14-16, provide a whistle-stop tour of various paleomagnetic applications. They include a discussion of paleo-geomagnetism, magnetostratigraphy, and apparent polar wander.

This is mostly what I use. I sometimes also use Butler's book, PALEOMAGNETISM: Magnetic Domains to Geologic Terranes.
10.13.2005 19:35
Detrending an agfm hysteresis loop
I have been trying to deal with some hysteresis data from our Model
2900 AGFM. For some unknown reason, hystcrunch freaks out on my data,
so I wrote some code to detrend a loop. I did this by finding the
slope of both the left and right tails and then removing that from the
loop. Here is a first pass at it without the bells and whistles (like
being able to write back out a valid agfm file to give back to
hystcrunch).
The red loop (tallest) is the original, unflattened data from the AGFM. The green line is the average slope for the two tails. I used 20 points on each end, which makes 40 total for each end with both directions. The blue '+' symbols mark the flattened curve that is created by subtracting ax+b (b=0) from the data.
Fitting lines to these groups of points is accomplished with the LeastSquares module from Scientific Python. It has a search algorithm that takes a function and searches for the best fit. It is a little confusing until you have an example to copy from. My fit function is called "linear." The next trick is giving a reasonable initial condition for the search. Strange results (or long search times) can happen if you pass in really bad initial conditions on tough equations.
#!/usr/bin/env python
"""Detrending hysteresis data"""

import pylab
import agfm
from agfm import rows2cols
from Scientific.Functions.LeastSquares import leastSquaresFit

def linear(parameters,values):
    a,b = parameters
    x = values
    return (a*x + b)

distance = 20 # How many points to use on each end of the loop

sample = agfm.agfm("bp04-4gw-s1-057.raw")

data = sample.data
params=[]
initial = (1,0) # ax+b, start with a=1 and b=0

########################################
# Right hand side start
d = data[0:distance]
param,error = leastSquaresFit(linear,initial,d)
params.append(param)

########################################
# Right hand side end
d = data[-distance:-1]
param,error = leastSquaresFit(linear,initial,d)
params.append(param)

########################################
# CENTER is the left hand side of the graph
center = len(data)/2
leftData = data[center-distance:center+distance]
param,error = leastSquaresFit(linear,initial,leftData)
params.append(param)

# Average the 3 params that we found since the slopes are all supposed
# to be the same
a = 0
for i in range(len(params)):
    a += params[i][0]
a = a/len(params)
print 'a = ',a

# Detrend the data by removing the slope
data2 = [ (pt[0], pt[1] - pt[0]*a) for pt in data ]

d2cols = rows2cols(data2)
pylab.plot (d2cols[0], d2cols[1])
pylab.show()
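As an aside (not what I actually ran), each tail fit could probably also be done with a one-line polyfit, if polyfit is available in the pylab/scipy that I have installed:

# Untested alternative: fit slope and intercept of one tail with polyfit
xs = [ pt[0] for pt in data[0:distance] ]
ys = [ pt[1] for pt in data[0:distance] ]
a_right, b_right = pylab.polyfit(xs, ys, 1)  # degree 1; assumes pylab exports polyfit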
A note to people documenting programs: Please name your manual something other than manual.pdf, tutorial.pdf, or users_guide_0.83.1.1.3.pdf. Ack, I have like 5 of these sitting on my desktop.
And a note to Apple: I would like to know the power status in my cron jobs. How do I do this? I would like to make sure that my laptop is on wall power before I run findutil's updatedb.
10.13.2005 12:48
pmag-py 0.1
I just did a quick crank on getting pmag-py in better shape. I am
working on agfm stuff right now and successfully building local sqlite
databases of agfm data. The same warning for 0.0 applies. This
source tree is a mess. I am NOT up-to-date with Lisa's pmag python
work. This code is here for reference only. It probably will not
work for you or give you weird results. Sorry.
Version 0.1 - 13-Oct-2005
- Makefile: sdist rules for packaging
- manyfiles: Started updating __version__ and __date__
- agfm.py: Added samplename to create, added hystcrunch stuff
- agfm.py: Cleanup header in agfm class, lots of other improvements
- eqarea_i.py: Fixed to survive pydoc
- ChangeLog: New file
http://schwehr.org/software/pmag/ and Lisa's http://sorcerer.ucsd.edu/software/
10.13.2005 08:46
python __version__ and __date__
I finally realized that setting the __date__ and __version__ fields
from CVS in a python file is really easy! Check this out:
__version__ = '$Revision: 1.3 $'.split()[1] # See man ident
__date__ = '$Date: 2005/10/13 15:43:17 $'.split()[1]
__author__ = 'Kurt Schwehr'
__credits__ = 'DAC and DSW via ARC and CMU'

Now that was not so bad!
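Just to spell out what the split is doing (a quick check in the interpreter):

>>> '$Revision: 1.3 $'.split()
['$Revision:', '1.3', '$']
>>> '$Revision: 1.3 $'.split()[1]
'1.3'
>>> '$Date: 2005/10/13 15:43:17 $'.split()[1]
'2005/10/13'

Note that __date__ ends up with just the date portion and not the time.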
10.13.2005 07:59
IMAPS - marineplanner.com
SAIC Acquires IMAPS, LLC
IMAPS is a leading provider of Geographic Information Systems and marine navigation, aviation flight planning, and navigation software products to government and commercial customers.
The acquisition enhances SAIC's capability to support key customers in the national security and intelligence community. SAIC also gains the ability to provide navigational aid to both the aeronautical and marine communities by acquiring IMAPS' Web sites, www.AeroPlanner.com, and www.MarinePlanner.com.
10.12.2005 17:09
AGFM gets database support
I just added a little database support to the SIO AGFM system. I am
sure this will NOT work for other models as I have never seen their
dataformats. No clue what a lot of the fields are, but I have
integrated calling hystcrunch to get Mr and Ms.
How does this relate to the GRAND UNIFIED PMAG DATABASE? Ummm... not at all. Sorry. B.T.W. that would be MacIG. But the python code that I am putting in sqlhelp.py is reasonably helpful, hence the 'help' in the name.
A real quick demo and then some code.
agfm.py f.raw --sql-create
agfm.py f.raw --sql-insert

Then I paste the results of those two calls into an sqlite session.
rm -f foo.db
sqlite foo.db

Bam! There are the beginnings of a database. Errr... it will need some joins and such to get sample location and depth in core, but that is the basic idea.
SQLite version 2.8.5
sqlite> CREATE TABLE hyst (
   key INTEGER PRIMARY KEY,
   filename VARCHAR(255), instrument VARCHAR(50), comment VARCHAR(80),
   date VARCHAR(100),
   s REAL, Ms REAL, Mr REAL, Bc REAL, Bcr REAL, Bcrn REAL,
   sid VARCHAR(30) );

sqlite> INSERT INTO hyst (comment,Bc,Mr,filename,instrument,s,Ms,sid,date,Bcrn,Bcr)
   VALUES ("bp04-4gw-s1-063",0.003,6.95e-09,"f.raw",
           "Model 2900 ASCII Data File",-2.0,2.7e-07," ","10/11/2005 11:02",
           0.109,0.033);
I really like the idea of having classes that have the ability to emit SQL INSERT statements. Maybe there is a better way to do this. I should look at the marshal and pickle modules, but I won't. Not enough time.
And I am skipping any code. You will have to grab agfm.py from somewhere. Well, it's nowhere right now!
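Since agfm.py isn't posted, here is a bare-bones sketch of what I mean by a class that can emit its own SQL. The class and field names are invented for illustration; the real agfm.py differs:

# Hypothetical sketch: a record class that knows how to emit its own SQL.
class HystRecord:
    # fixed column order keeps CREATE and INSERT consistent
    columns = ('filename', 'comment', 'Ms', 'Mr', 'Bc')

    def __init__(self, filename, comment, Ms, Mr, Bc):
        self.values = dict(filename=filename, comment=comment, Ms=Ms, Mr=Mr, Bc=Bc)

    def sql_create(self):
        "Return a CREATE TABLE statement for this record type"
        return ('CREATE TABLE hyst ( key INTEGER PRIMARY KEY, '
                'filename VARCHAR(255), comment VARCHAR(80), '
                'Ms REAL, Mr REAL, Bc REAL );')

    def sql_insert(self):
        "Return an INSERT statement carrying this record's values"
        vals = []
        for c in self.columns:
            v = self.values[c]
            if isinstance(v, str):
                vals.append('"%s"' % v)
            else:
                vals.append(str(v))
        return ('INSERT INTO hyst (%s) VALUES (%s);'
                % (','.join(self.columns), ','.join(vals)))

r = HystRecord('f.raw', 'bp04-4gw-s1-063', 2.7e-07, 6.95e-09, 0.003)
# r.sql_create() and r.sql_insert() are then ready to paste into sqlite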
10.12.2005 00:00
Parsing AGFM data
Here is a quick program to parse and display AGFM data from the SIO
MicroMag 2900 running a DOS program on a windows 95 box (always a
shock). I did not look up a lot of the fields, so they are just set to
their position number in the dictionary.
#!/usr/bin/env python

class agfm:
    "MicroMag 2900 running NewMag Windows 1995"
    def __init__(self, filename):
        ''' Parse the header and the data.

        One big line of fields followed by field and value pairs
        '''
        file = open(filename,'r')
        line = file.readline()
        fields = line.split(',')
        print fields[0], len(fields)
        assert('"Model 2900 ASCII Data File"'==fields[0])
        self.header={}
        self.header['instrument'] = fields[0]
        self.header['7'] = fields[1]
        i = 2
        self.header[i] = fields[i]; i+=1
        self.header[i] = fields[i]; i+=1
        self.header[i] = fields[i]; i+=1
        self.header[i] = fields[i]; i+=1
        self.header[i] = fields[i]; i+=1
        self.header[i] = fields[i]; i+=1
        self.header[i] = fields[i]; i+=1
        self.header[i] = fields[i]; i+=1
        self.header[i] = fields[i]; i+=1
        self.header[i] = fields[i]; i+=1
        self.header[i] = fields[i]; i+=1
        self.header[i] = fields[i]; i+=1
        self.header[i] = fields[i]; i+=1
        self.header[i] = fields[i]; i+=1
        self.header[i] = fields[i]; i+=1
        self.header[i] = fields[i]; i+=1
        self.header[i] = fields[i]; i+=1
        self.header[i] = fields[i]; i+=1
        self.header[i] = fields[i]; i+=1
        self.header[i] = fields[i]; i+=1
        self.header['comment'] = fields[i]; i+=1
        self.header[i] = fields[i]; i+=1
        self.header[i] = fields[i]; i+=1
        self.header[i] = fields[i]; i+=1
        self.header[i] = fields[i]; i+=1
        self.header['date'] = fields[i]; i+=1
        self.header[i] = fields[i]; i+=1
        self.header[i] = fields[i]; i+=1
        self.header[i] = fields[i]; i+=1
        self.header[i] = str(fields[i]).rstrip() # remove \r\n

        self.data=[]
        for line in file.readlines():
            #print len(line)
            line = line.rstrip()
            if len(line)<3: continue
            if '"Model 2900 Data File ends"' == line: break # we is done
            fields = line.split(',')
            self.data.append((float(fields[0]),float(fields[1])))

def rows2cols(list):
    """ convert a big list of rows to list of columns.

    [ [1,2], [3,4], [5,6] ]  to  [ [1,3,5], [2,4,6] ]
    """
    newlist = []
    for i in range(len(list[0])):
        newlist.append([])
    for row in list:
        for i in range(len(list[0])):
            newlist[i].append(row[i])
    return newlist

if __name__ == "__main__":
    sample = agfm('f.raw')

    import pylab
    cols = rows2cols(sample.data)
    pylab.plot (cols[0],cols[1])
    pylab.show()
10.10.2005 22:00
first full polar plot with colors too
Polar plots are starting to look much better! Note the two red points
(one in the top center) that show when points are in the upper
hemisphere.
10.10.2005 17:15
python logging
It would be really nice to have a solid logging system. I'd like to
be able to send messages to stderr and a file. The more features the
better, but it needs to be something that people can pick up quickly.
I tried the scipy logger, but no luck. Very complicated and no good examples are on my machine (no network at the moment). It follows a PEP, but if I can't figure it out, then I'm not going to use it.
It looks like twisted has a powerful logging system. I got a simple example working to a file, but could not figure out how to get both file logging and stdout_or_stderr. Here is my trivial example.
#!/usr/bin/env python

from twisted.python import log
from twisted.python import logfile

f = logfile.LogFile("test.log", ".", rotateLength=None)

# setup logging to use our new logfile
log.startLogging(f)

# print a few messages
for i in range(10):
    log.msg("this is a test of the logfile: %s" % i)

f.close()

I can tail -f test.log in an xterm to watch the progress. I think this is going to be more work for not much gain. That's all I'll do for now.
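For the record, here is roughly what I was after with the PEP-style logging module; a minimal sketch (untested against my setup) that sends each message to both stderr and a file:

#!/usr/bin/env python
# Minimal sketch using the standard library logging module (the PEP 282 one),
# not the twisted approach above.
import logging

log = logging.getLogger('pmag')
log.setLevel(logging.DEBUG)

console = logging.StreamHandler()          # defaults to stderr
filelog = logging.FileHandler('test.log')  # and a file on disk
for handler in (console, filelog):
    handler.setFormatter(logging.Formatter('%(asctime)s %(levelname)s %(message)s'))
    log.addHandler(handler)

for i in range(10):
    log.info('this is a test of the logfile: %s', i)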
10.10.2005 14:06
projecting dec and dip onto the polar
This does not handle upper and lower hemispheres, but it does the
basics. decdip2pylabpolar transforms the points into the radians and
directions needed by pylab's polar plot. Here is the snippet that I
added to handle that.
from projections import deg2rad

def decdip2pylabpolar(dec, dip):
    """ Convert two lists of dec and dip to the coordinates needed to
    pass to pylab's polar plots.  Pass in degrees, get back radians
    """
    decnew=[]
    dipnew=[]
    assert (len(dec)==len(dip))
    for i in range(len(dec)):
        decdeg = 360 - (dec[i]-90)
        dipdeg = 90 - dip[i]
        decnew.append(deg2rad(decdeg))
        dipnew.append(deg2rad(dipdeg))
    return decnew, dipnew

dec = [ 0, 10, 20 ]; dip = [ 0, 5, 10 ]
dec.append(90);  dip.append(15)
dec.append(100); dip.append(20)
dec.append(110); dip.append(25)
dec.append(180); dip.append(30)
dec.append(190); dip.append(35)
dec.append(200); dip.append(40)
dec.append(270); dip.append(45)
dec.append(280); dip.append(50)
dec.append(290); dip.append(55)

theta,r = decdip2pylabpolar(dec,dip)
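One way I might handle the hemispheres later (my own sketch, not tested): split the points on the sign of the dip before projecting, assuming a negative dip marks an upper-hemisphere point, and plot the two groups in different colors:

# Sketch only: assumes negative dip means upper hemisphere
lowerDec = [ d for d, i in zip(dec, dip) if i >= 0 ]
lowerDip = [ i for d, i in zip(dec, dip) if i >= 0 ]
upperDec = [ d for d, i in zip(dec, dip) if i < 0 ]
upperDip = [ abs(i) for d, i in zip(dec, dip) if i < 0 ]  # project on |dip|

thetaL, rL = decdip2pylabpolar(lowerDec, lowerDip)
thetaU, rU = decdip2pylabpolar(upperDec, upperDip)
# then something like: scatter(thetaL, rL, c='b'); scatter(thetaU, rU, c='r')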
10.10.2005 12:53
matplotlib polar rgrids
I am a little baffled by the matplotlib documentation for polar plots, so
I am going to go by examples with trial and error. My first task is
to create direction angles. Note that the grids want to go counter
clockwise with 0 degrees to the East!!! This is the normal
mathematical definition, but not what we want for a stereo net. But
now just the labeling. The projections of points will come later.
thetagrids( range(0,360,45), ('E', 'NE', 'N','NW', 'W', 'SW', 'S', 'SE') )

Next comes the dip angle rings on the stereo net. This is done with rgrids, but zero is in the center and moves out. The list.reverse() call flips the order in place. Gareth would appreciate this code, as it is my first use of list comprehensions.

ring_angles = [ x * (pi/180.) for x in range(0,100,10)]
ring_labels = [ str(x) for x in range(0,100,10)]
ring_labels.reverse()
rgrids (ring_angles,ring_labels)

That draws from 90..0° on the plot. The only question is how to make the dip of 0° be a solid line. Putting it all together:

#!/usr/bin/env python
from pylab import *
from math import pi # seems to get pulled in by pylab too

N = 150
r = 2*rand(N)
theta = 2*pi*rand(N)
print theta,r
area = 200*r**2*rand(N)
colors = theta
figure(figsize=[4,4])
ax = subplot(111, polar=True)
c = scatter(theta, r, c=colors, s=area)
c.set_alpha(0.75)
#
ring_angles = [ x * (pi/180.) for x in range(0,100,10)]
ring_labels = [ str(x) for x in range(0,100,10)]
ring_labels.reverse()
rgrids (ring_angles,ring_labels)
#
thetagrids( range(0,360,45), ('E', 'NE', 'N','NW', 'W', 'SW', 'S', 'SE') )
#
show()
Maybe I should use Transformations, but I don't totally understand. Will they get me from 0° is East and counter clockwise to 0° is North and clockwise? Oh, and how about 90° is down too?
10.10.2005 11:46
matplotlib stereonets
Yet again, I go to google and end up back at my blog with no information gained. I am trying to figure out how to do stereonets/polar plots.
http://matplotlib.sourceforge.net/matplotlib.pylab.html#-polar
polar(*args, **kwargs)

    POLAR(theta, r)
    Make a polar plot.  Multiple theta, r arguments are supported,
    with format strings, as in plot.

There are two polar examples in the zip... first polar_scatter.py:
#!/usr/bin/env python
# a polar scatter plot; size increases radially in this example and
# color increases with angle (just to verify the symbols are being
# scattered correctlu).  In a real example, this would be wasting
# dimensionlaity of the plot
from pylab import *

N = 150
r = 2*rand(N)
theta = 2*pi*rand(N)
area = 200*r**2*rand(N)
colors = theta
ax = subplot(111, polar=True)
c = scatter(theta, r, c=colors, s=area)
c.set_alpha(0.75)

#savefig('polar_test2')
show()

Which makes this:
There is also polar_demo.py which makes a spiral
#!/usr/bin/env python
#
# matplotlib now has a PolarAxes class and a polar function in the
# matplotlib interface.  This is considered alpha and the interface
# may change as we work out how polar axes should best be integrated
#
# The only function that has been tested on polar axes is "plot" (the
# pylab interface function "polar" calls ax.plot where ax is a
# PolarAxes) -- other axes plotting functions may work on PolarAxes
# but haven't been tested and may need tweaking.
#
# you can get get a PolarSubplot instance by doing, for example
#
#   subplot(211, polar=True)
#
# or a PolarAxes instance by doing
#   axes([left, bottom, width, height], polar=True)
#
# The view limits (eg xlim and ylim) apply to the lower left and upper
# right of the rectangular box that surrounds to polar axes.  Eg if
# you have
#
#   r = arange(0,1,0.01)
#   theta = 2*pi*r
#
# the lower left corner is 5/4pi, sqrt(2) and the
# upper right corner is 1/4pi, sqrt(2)
#
# you could change the radial bounding box (zoom out) by setting the
# ylim (radial coordinate is the second argument to the plot command,
# as in matlab(TM), though this is not advised currently because it is not
# clear to me how the axes should behave in the change of view limits.
# Please advise me if you have opinions.  Likewise, the pan/zoom
# controls probably do not do what you think they do and are better
# left alone on polar axes.  Perhaps I will disable them for polar
# axes unless we come up with a meaningful, useful and functional
# implementation for them.
#
# See the pylab rgrids and thetagrids functions for
# information on how to customize the grid locations and labels

from pylab import *

# radar green, solid grid lines
rc('grid', color='#316931', linewidth=1, linestyle='-')
rc('tick', labelsize=15)
# force square figure and square axes looks better for polar, IMO
figure(figsize=(8,8))
ax = axes([0.1, 0.1, 0.8, 0.8], polar=True, axisbg='#d5de9c')

r = arange(0,1,0.001)
theta = 2*2*pi*r
polar(theta, r, color='#ee8d18', lw=3)

title("And there was much rejoicing!", fontsize=20)
savefig('polar_demo')
show()
10.10.2005 11:29
dog gps
I think this needs to get a little bit smaller for our cat.
This is from: http://www.flickr.com/photos/pmtorrone/5711475/
Sad news... I hope they have great insurance and this is a leg up to lots of new great stuff. I can't wait to see the movie!!! Fire destroys 'Wallace and Gromit' warehouse
BRISTOL, England -- The company behind the new "Wallace and Gromit" film said Monday its "entire history" has been destroyed in a fire at a warehouse containing props and sets.
STAR INFORMATIC software to manage Moroccan harbours. I am not really sure what their system does... http://www.star.be/
10.10.2005 10:00
sqlite UNIQUE and DISTINCT
I wanted to get a list of samples for a particular core so I can do a
PCA to give the best direction. Here is what I ended up doing:
SELECT DISTINCT(samplename) FROM ams WHERE corenum=1;

These did NOT work:

sqlite> SELECT UNIQUE(samplename) FROM ams WHERE corenumber=1;
sqlite> SELECT samplename FROM ams WHERE UNIQUE (SELECT samplename FROM ams);

I am not totally sure that I am using UNIQUE correctly. It is also possible that UNIQUE is not something that sqlite2 understands. I saw this from oracle that makes me think that UNIQUE just is not there.
If I say: SELECT UNIQUE(COLUMN) FROM TABLE I get the same results as saying SELECT DISTINCT(COLUMN) FROM TABLE
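For my own notes, pulling that DISTINCT list back into python with the same sqlite module I use elsewhere would go something like this (a sketch; table and field names as in the query above):

import sqlite  # pysqlite, same module as the other scripts here

cx = sqlite.connect('ttn136b.db')
cu = cx.cursor()
cu.execute('SELECT DISTINCT(samplename) FROM ams WHERE corenum=1;')
# each row should behave like a tuple, so grab the first column
samples = [ row[0] for row in cu.fetchall() ]
print samples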
10.09.2005 22:48
pca in python
Lisa has a Principal Component Analysis (PCA) function in her pmag-py,
but I wanted to take a quick look at what is available elsewhere. I
need to start adding -schwehr to my google searches. I often mention
the sort of things that I am interested and then I use those same
words in a search. Funny how I think like myself. Not useful.
A package written for neurological analysis came up that looks generally promising: MDP
Modular toolkit for Data Processing (MDP) is a Python library to perform data processing. Already implemented algorithms include: Principal Component Analysis (PCA), Independent Component Analysis (ICA), Slow Feature Analysis (SFA), and Growing Neural Gas (GNG).

FINK-TODO: mdp python library
MDP supports the most common numerical extensions to Python and the symeig package (a Python wrapper for the LAPACK functions to solve the standard and generalized eigenvalue problems for symmetric (hermitian) positive definite matrices). MDP also includes graph (a lightweight package to handle graphs).
But not tonight. I am going to adapt Lisa's PCA to work from data passed in and not from a file.
Update: Um... that would be dopca in pmag.py. Here is an example pulling values out of the database.
cx = sqlite.connect("ttn136b.db")
cu = cx.cursor()

s = 'select treatment,dec,inc,intensity from mag '
s += 'where samplename="ttn136b-01pw-010" and type="afdemag" order by treatment ;'
cu.execute(s)
rows = []
for row in cu.fetchall():
    print row
    rows.append(row)

import pmag
pcaData = pmag.dopca(rows,2,100,'p')
print pcaData

Which when run gives something like this:

./magplots.py --verbosity=1
(0.0, 257.5, 70.099999999999994, 3.7360000000000001e-05)
(2.5, 250.40000000000001, 71.799999999999997, 3.4480000000000002e-05)
(5.0, 241.80000000000001, 71.099999999999994, 3.1040000000000001e-05)
(7.5, 241.0, 71.0, 2.851e-05)
(10.0, 237.0, 71.0, 2.6290000000000001e-05)
(12.5, 228.80000000000001, 71.799999999999997, 2.4539999999999999e-05)
(15.0, 233.90000000000001, 71.700000000000003, 2.141e-05)
(15.0, 232.90000000000001, 70.799999999999997, 2.2019999999999999e-05)
(20.0, 231.5, 70.700000000000003, 1.8170000000000001e-05)
(20.0, 223.69999999999999, 68.900000000000006, 1.9369999999999999e-05)
(30.0, 242.5, 71.099999999999994, 1.344e-05)
(30.0, 233.69999999999999, 69.900000000000006, 1.434e-05)
(40.0, 245.5, 66.200000000000003, 9.3700000000000001e-06)
(40.0, 234.5, 67.799999999999997, 1.0020000000000001e-05)
(60.0, 191.30000000000001, 65.400000000000006, 4.9819999999999999e-06)
(60.0, 221.59999999999999, 73.900000000000006, 5.6799999999999998e-06)
(80.0, 214.69999999999999, 51.0, 2.2129999999999998e-06)
(80.0, 155.59999999999999, 55.600000000000001, 2.6960000000000001e-06)
[[242.94491290687807, 71.338744095815557, 1.0], 4.7963548164164731, 16, 3.3448784483489629]
10.09.2005 16:32
More darpa stuff... Dan Christian and Eric Zbinden
Looks like Dan and Eric are teaming up for coverage for the Darpa Grand Challenge.
http://www.gizmag.com/go/4720/
10.09.2005 16:01
database to matplotlib
I created a little function that creates a downcore plot from a
database. It seems to do the job I need. The default values are all
designed for plotting all three AMS eigenvalues down core, but it
should be easy enough to plot just about anything down core... at
least anything that has a database table with a depth and field pair.
The one key thing is the order of the parameters that are passed to pylab. When I put things like the title, axis, and ticks before the plot, they did not show up. That probably has to do with the hold on the first plot. I thought matlab had a clear function, but I don't see a clear in pylab. That forces me to do a first hold=False, followed by plots with hold=True.
#!/usr/bin/env python
import sqlite
import pylab # matplotlib
from sqlhelp import sqlselect
from verbosity import TRACE

def plotDownCoreSQL(db_cx, corenumber, yfield='depth',
                    xfields=('tau1','tau2','tau3'),
                    maxdepth=None #800
                    ,coretype='p', table='ams', verbosity=0
                    ,linecolors=('r','g','b','r','g','b')
                    ,xrange=(.310,.35)
                    ,yrange=(0,800)
                    ):
    s = sqlselect()
    s.addfield(yfield)
    for field in xfields:
        s.addfield(field)
    s.addfrom(table)
    if None!=coretype: s.addwhere('coretype="'+coretype+'"')
    if None!=corenumber: s.addwhere('corenum='+str(corenumber))
    if None!=maxdepth: s.addwhere('depth<'+str(maxdepth))
    s.setorderby(yfield)
    if verbosity>=TRACE: print 'query for plot:',str(s)
    cu = db_cx.cursor()

    yfieldData = []
    xfieldsData = []
    for f in xfields:
        xfieldsData.append([])

    cu.execute(str(s))
    for row in cu.fetchall():
        yfieldData.append(row[yfield])
        for i in range(len(xfields)):
            xfieldsData[i].append(row[xfields[i]])

    pylab.figure(figsize=(3,8))

    pylab.plot(xfieldsData[0],yfieldData,linecolors[0],hold=False)
    for i in range(1,len(xfields)):
        pylab.plot(xfieldsData[i],yfieldData,linecolors[i],hold=True)

    basename = ''
    for f in xfields:
        basename += f + '-'
    basename += yfield
    pylab.title(basename+' for core '+str(corenumber),fontsize=8)
    basename += '-'
    basename += str(corenumber)

    # Reverse plot for distance down core
    pylab.axis([xrange[0],xrange[1],yrange[1],yrange[0]])
    pylab.yticks(fontsize=8)
    pylab.xticks(fontsize=8)

    pylab.savefig(basename+'.svg')
    pylab.savefig(basename+'.png')
    return

So here is what I do to call it. This includes my new verbosity module so I stop duplicating the verbosity levels and command line options in every program that I write.
#!/usr/bin/env python
import sqlite
import pylab # matplotlib
import sys
# segy-py stuff...
from sqlhelp import sqlselect
import verbosity
from verbosity import ALWAYS,TERSE,TRACE,VERBOSE,BOMBASTIC

from SomeWhere import plotDownCoreSQL

if __name__ == "__main__":
    from optparse import OptionParser
    myparser = OptionParser(usage="%prog [options] ",
                            version="%prog $Id: magplots.py,v 1.6 2005/10/09 22:48:21 schwehr Exp $")
    verbosity.addVerbosityOptions(myparser)

    (options,args) = myparser.parse_args()
    cx = sqlite.connect("ttn136b.db")

    for core in [ 1, 2, 7 ]:
        if core==1:
            plotDownCoreSQL(cx,corenumber=core,verbosity=BOMBASTIC,maxdepth=600)
        else:
            plotDownCoreSQL(cx,corenumber=core)
10.09.2005 13:37
More on robot grand challenge
Gizmag has more on the
desert race. I didn't think anyone would finish this year and 4 teams
did it. Now we have the MER rovers on Mars for more than a year each.
Damn!
A funny note: When I joined the Intelligent Mechanisms Group at Ames in 1996, there was a robot named Mel after Mike M.'s dad. Now Mike wins the grand challenge. How ironic, considering Mel was a pretty sorry robot platform. Hey Mike, what next?
10.09.2005 13:17
Whew - plot size and font sizes decrypted
I think I was being tripped up by the vocabulary difference between
gnuplot and matplotlib, but I finally know how to control the size of
what gnuplot calls the terminal. I could set the number of pixels of
the png output while setting the gnuplot terminal. With matplotlib,
this is accomplished with pylab.figure(figsize=(width,height)). The
units are not as clear. I think I have to set the dpi to 100 or
something to make it a little more obvious. Or maybe 1 dpi, so that my
width and height map directly to pixel size? So here is my tall, skinny plot.
#!/usr/bin/env python
import sqlite
import pylab # matplotlib
import sys
from sqlhelp import sqlselect

if __name__ == "__main__":
    cx = sqlite.connect("ttn136b.db")
    cu = cx.cursor()
    for core in [ 1 ]:
        s = sqlselect()
        s.addfield('depth'); s.addfield('tau1'); s.addfield('tau2'); s.addfield('tau3');
        s.addfrom('ams')
        s.addwhere('coretype="p"')
        s.addwhere('corenum='+str(core))
        s.setorderby('depth')
        print str(s)
        cu.execute(str(s))
        depth = []
        tau1 = []
        tau2 = []
        tau3 = []
        for row in cu.fetchall():
            depth.append(row.depth), tau1.append(row.tau1)
            tau2.append(row.tau2), tau3.append(row.tau3)
        #
        pylab.figure(figsize=(3,8))
        pylab.yticks(fontsize=10)
        pylab.xticks(fontsize=8)
        #
        pylab.plot (tau1, depth, 'r', tau2, depth, 'g', tau3, depth, 'b', hold=False)
        pylab.axis([.315,.345,1000,0])
        pylab.title(r'$\tau_{1,2,3}$')
        pylab.savefig(str(core)+'.png')

Which generates this figure, complete with an empty liner section and flow-in.
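A note to myself on the units, since I was guessing above: figsize is in inches and savefig takes a dpi argument, so the pixel size of the png should just be inches times dpi. A quick sanity check (my sketch, not from the docs):

import pylab

pylab.figure(figsize=(3,8))              # 3 x 8 inches
pylab.plot(range(10), range(10))
pylab.savefig('size-test.png', dpi=100)  # should come out around 300 x 800 pixels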
B.T.W.: I hate it that a GUI based emacs does not detach from the xterm that started it. Then when I nuke the xterm, poof! There goes my emacs session.
10.09.2005 08:36
explorations in matplotlib
It is time to get down to using matplotlib for paleomagnetic analysis
of my cores. Earlier in the year, I used gnuplot with pdflib_py in
python to produce a couple posters with all of the main plots for my
cores in the Santa Barbara Basin. Now it is time to revisit that idea
for the Eel River Basin in norcal.
Despite the initially polished look of the matplotlib figures on the web, it feels like there are a lot of rough edges for the newbie. For starters, I am not sure how I am supposed to close the plot windows. Do I control-C them in the python window or kill the window by closing it (red X button in OSX)? Both seem a little grumpy.
My next frustration is the documentation. What has been written in the manual is fantastic. It has lots of examples and they are pretty clear. I just want more. There are a few teaser section titles that have no text with them.
I downloaded the zip full of examples and was both rewarded and teased. Not all of the demos run right out of the box. They need a few more comments. This is a little thing, but it would be great if they were made executable from the start. It does not take long for me to do a 'chmod +x foo.py', but still.
It's been a while since matlab for me (their Mac OSX port still sucks... we've paid for it, but it is not worth it to use it), but my memory of the plot hold function was that hold defaulted to off. That meant that each new plot command started over. Then I had to say hold on to do multiple plots on one graph. Seems that matplotlib defaults to the other way around. I had to do this for each plot in a loop so that I did not just keep ending up with more and more lines in the graph.
pylab.plot (tau1, depth, 'r', tau2, depth, 'g', tau3, depth, 'b', hold=False)

With all that, I still think that matplotlib has more potential than gnuplot. gnuplot has been around longer which is a double edged sword. I foresee using both a lot.
Emacs Note:
M-# e

This seems like a cool idea... an in place embedded calculator in emacs. Just execute that over a line with '2+2' and it will give you 4. But then it left my buffer in a funny state. I keep thinking that it would be nice to do math in my thesis notes so I can have a record of exactly how I calculated a particular value.... kind of like using Mathematica and hitting print (I did that a lot for grad level computer graphics classes for monster polynomials). But, this will take some learning.
10.09.2005 00:00
matplotlib svg is almost there
I stumbled on a post that mentioned that matplotlib supports SVG
(Scalable Vector Graphics), so I gave that a shot since Illustrator is
supposed to read SVG. It worked! Well, not totally. The TeX tau
symbol ended up as an upsidedown question mark (Que?).
pylab.title(r'$\tau_{1,2,3}$')
pylab.savefig('ams-tau123-core'+str(core)+'.svg')

Not having the symbols is not the end of the world, since I didn't have them with gnuplot either (at least that I know of).
10.08.2005 23:45
matplotlib
I thought maybe it was a font problem, but that just changes the gv
error message.
Error: /invalidfont in -dict-
Operand stack:
   LucidaGrande --dict:11/14(L)-- Font LucidaGrande --dict:11/14(L)-- LucidaGrande

grep -i lucida ams-tau123-core1.ps
%%BeginFont: LucidaGrande
/FontName /LucidaGrande def
/FamilyName (Lucida Grande) def
/FullName (Lucida Grande) def
/LucidaGrande findfont
/LucidaGrande findfont

So, I tried a different font...

pylab.title('Tau{1,2,3}', fontname='Times Roman')
pylab.yticks(fontname='Times Roman')
pylab.xticks(fontname='Times Roman')

Which changes the error to be this:

%interp_exit .runexec2 --nostringval-- --nostringval-- --nostringval-- 2 %stopped_push
--nostringval-- --nostringval-- --nostringval-- false 1 %stopped_push 1 3 %oparray_pop
1 3 %oparray_pop 1 3 %oparray_pop 1 3 %oparray_pop .runexec2 --nostringval--
--nostringval-- --nostringval-- 2 %stopped_push --nostringval-- 2 4 %oparray_pop
3 4 %oparray_pop --nostringval-- --nostringval-- --nostringval-- 7 5 %oparray_pop
--nostringval-- 7 5 %oparray_pop
Dictionary stack:
   --dict:1121/1686(ro)(G)-- --dict:0/20(G)-- --dict:71/200(L)-- --dict:6/7(L)-- --dict:17/17(ro)(G)--
Current allocation mode is local
Last OS error: 2
AFPL Ghostscript 8.51: Unrecoverable error, exit code 1

What a lovely error message. What in the world does that mean?
If I can't figure this out quickly tomorrow, it is goodbye to matplotlib. I really need to have pdf or at least ps output so that I can work with my figures in Illustrator.
I tried playing with AFM thinking that would be the solution, but I don't know what I am doing with that.
from matplotlib.afm import AFM
fh = file ('/sw/share/fonts/afms/adobe/ptmr8a.afm')
afm = AFM(fh)
afm.get_fullname()

Out[12]: 'Times Roman'

Now what?
10.08.2005 21:57
matplotlib, the first little bit of experience
So far I am not sure that matplotlib is really better for simple
plots. I was able to load a jpg into a picture like this:
from pylab import *
import Image # pil - python imaging library
im = Image.open('a_picture.jpg')
s = im.tostring()
rgb = fromstring(s,UInt8).astype(Float)/255.0
rgb2 = resize(rgb, (im.size[1],im.size[0],3))
imshow(rgb2,interpolation='nearest')
show()

Now I can say that I don't totally understand what those do, but the image came up nicely. Then I started playing with the TeX symbols in titles and other plotted text. I immediately hit a problem with this feature. This works:

pylab.title(r'$\tau_{1,2,3}$')

But I could not figure out how to dynamically add to that string. I could not do things like this:

pylab.title(r'$\tau_{1,2,3}$'+str(corenumber))

That caused it to lose the TeX interpretation of the tau symbols and the subscripting of the 1,2,3. Someone please set me straight!
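A guess at a workaround that I have not tried yet: do the string formatting first, so the whole label (core number included) is handed to title() as one string with the math part still wrapped in $...$:

import pylab

corenumber = 7
# untested guess: format the number into the string before calling title()
pylab.title(r'$\tau_{1,2,3}$ core %d' % corenumber)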
Here is my first program that writes plots from core data in a database to a png and postscript file. There is something wrong with the postscript, as neither gv nor illustrator will open them.
#!/usr/bin/env python
import sqlite
import pylab # matplotlib
import sys
from sqlhelp import sqlselect

if __name__ == "__main__":
    cx = sqlite.connect("ttn136b.db")
    cu = cx.cursor()
    for core in [ 1, 2, 7 ]:
        print core
        s = sqlselect()
        s.addfield('depth'); s.addfield('tau1'); s.addfield('tau2'); s.addfield('tau3');
        s.addfrom('ams')
        s.addwhere('coretype="p"')
        s.addwhere('corenum='+str(core))
        s.setorderby('depth')
        print str(s)
        cu.execute(str(s))
        depth = []; tau1 = []; tau2 = []; tau3 = []
        for row in cu.fetchall():
            depth.append(row.depth)
            tau1.append(row.tau1)
            tau2.append(row.tau2)
            tau3.append(row.tau3)

        pylab.plot (tau1, depth, 'r', tau2, depth, 'g', tau3, depth, 'b', hold=False)
        pylab.axis([.3,.4,0,1000])
        pylab.title(r'$\tau_{1,2,3}$')
        pylab.savefig('ams-tau123-core'+str(core)+'.png')
        pylab.savefig('ams-tau123-core'+str(core)+'.ps')

A little tweak to the axis command (which sets the range to plot in the graph) and I have the figure reversed.

pylab.axis([.315,.345,1000,0])

Here is a cut down example image with the y-axis decreasing as it goes down.
10.08.2005 15:45
segy-py 0.30 released
I just put segy-py 0.30 on the web server...
- projections.py: Added meters2twtt(), feet2meters(), metersPerShot()
- projections.py: getDistance now works given a utm zone
- segywaterbottom.py: Fixed minor bugs. Still needs projection help
- sqlhelp.py: New file to help make select queries easier to build
10.08.2005 15:25
DARPA Grand Challenge success!!! Three vehicles make it
DARPA Grand Challenge US$2 million prize claimed - three autonomous vehicles finish 132 mile course
Stanford and CMU completed the track. Wow. Two places where I have worked and gone to school.
Okay Dan, let's hear all the details!
CMU FRC autonomous racing page: http://www.redteamracing.org/. Yikes, I only see two people on the team that I know... Chris and Red.
stanfordracing.org I see that Michael Montemerlo is on the Stanford Team. He was at CMU doing his PhD while I was working there.
10.08.2005 14:48
sql query builder
I just cooked up a little bit of python to make SELECT sql calls much
easier. I am sure that SQLObject and things like that have support
for this kind of idea, but it is more trouble to deal with more large
packages right now. The idea is to add fields, where conditions,
froms, etc and have the sqlselect class take care of stringing
together the fields with commas and AND's as appropriate. I have been
writing too much of this again and again sporadically in my code as it
is needed. I think I may throw this into segy-py.
I know, I know... I should really check out SQLObject by Ian B. For now, this is what I am going to use.
#!/usr/bin/env python

class sqlselect:
    """ Construct an sql select query

    Sometimes it just gets ugly having all that comma and WHERE AND
    logic in there.  This code takes care of that
    """
    def __init__(self):
        self.fields = []
        self.where = []
        self.limit = None
        self.from_tables = []
        self.orderby = None
        self.desc = False # descending sort if true
        return

    def setorderby(self,field,desc=False):
        "Make the returned rows come in some order"
        if str != type(field): print "ERROR: fix throw type exception"
        self.orderby = field
        self.desc = desc
        return

    def addfield(self,fieldname):
        "Add a field name to return"
        if str != type(fieldname): print "ERROR: fix throw type exception"
        self.fields.append(fieldname)
        return

    def addwhere(self,boolTest):
        "Add expressions to chain together with ANDs"
        if str != type(boolTest): print "ERROR: fix throw type exception"
        self.where.append(boolTest)
        return

    def addfrom(self,tableName):
        "Which tables the query will pull from"
        if str != type(tableName): print "ERROR: fix throw type exception"
        self.from_tables.append(tableName)
        return

    def setlimit(self,numOfItems):
        "Set the maximum number of items to return"
        if int != type(numOfItems): print "ERROR: fix throw type exception"
        self.limit = numOfItems
        return

    def __str__(self):
        "Return the query as a string"
        if len(self.fields) < 1: print "ERROR: fix throw some exception?"
        s = 'SELECT '
        for i in range (len(self.fields)-1):
            s += self.fields[i]+','
        s += self.fields[-1] + ' '

        if len(self.from_tables)<1: print "ERROR: fix throw some exception"
        s += 'FROM '
        for i in range (len(self.from_tables)-1):
            s += self.from_tables[i]+','
        s += self.from_tables[-1]

        if (len(self.where)>0):
            s += ' WHERE '
            for i in range (len(self.where)-1):
                s += self.where[i]+' AND '
        if (len(self.where)>0):
            s += self.where[-1]

        if (None != self.orderby):
            s += ' ORDER BY ' + self.orderby
            if self.desc: s += ' DESC'

        if (None != self.limit):
            s += ' LIMIT ' + str(self.limit)

        s += ';'
        return s

q = sqlselect()
q.addfield('tau1')
q.addfield('tau2')
q.addfield('depth')
q.addfrom('ams')
q.addwhere('corenum=1')
q.addwhere('depth<50')
q.setlimit(15)
q.setorderby('depth', desc=True);

print str(q)
The example at the end returns this SQL string:

SELECT tau1,tau2,depth FROM ams WHERE corenum=1 AND depth<50 ORDER BY depth DESC LIMIT 15;

Not as cool as the built in C# query mechanisms, but it works for me.
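Using it with the sqlite module is then just a matter of handing the string to a cursor, the same as the other scripts in these notes (a quick sketch):

import sqlite

cx = sqlite.connect('ttn136b.db')
cu = cx.cursor()
cu.execute(str(q))   # q is the sqlselect instance built above
for row in cu.fetchall():
    print row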
10.08.2005 08:25
Customer service
I just sent a request to Elsevier asking for RSS or Atom feeds of
their journals with title and abstracts. That would get me much more
in tune with the journals that I should be reading. However, they
only allow 250 characters in their response field and when I finally
cut it down to that, this is the response that the web server gave me:
Thank you. We will deal with your request soon.

I guess I will be dealt with.
Attention.XML - What other tasks could this be applied to beyond news readers? Ship's crew?
10.07.2005 23:19
matplotlib
That does it. I am switching to matplotlib for my graphing. There
are just so many things that it has which are not really possible with
the python gnuplot.
- Loading images into the plot. matplotlib.image and the Hubble Telescope image demo.
- Pie charts
- blending
- Better multiplot support (subplots)
- Mapping support
- TeX style symbols and equations
- Detecting mouse clicks and key hits
- The demo images are prettier than those in the gnuplot demo directory
10.07.2005 22:47
gdal strangeness
I just found a gdal python install on my machine that maybe I did myself??!!??
ls -l /System/Library/Frameworks/Python.framework/Versions/2.3/lib/python2.3/site-packages/
total 792
-rw-rw-r--  1 root     admin     119 Sep 13  2003 README
-rw-r--r--  1 root     admin  357076 Jul 18  2004 _gdalmodule.a
-rwxr-xr-x  1 root     admin    1026 Jul 18  2004 _gdalmodule.la
-rwxr-xr-x  1 root     admin  293208 Jul 18  2004 _gdalmodule.so
-rw-r--r--  1 root     admin   23718 Jul 18  2004 gdal.py
-rw-r--r--  1 root     admin    3639 Jul 18  2004 gdalconst.py
-rw-r--r--  1 root     admin    7882 Jul 18  2004 gdalnumeric.py
-rw-r--r--  1 root     admin   26761 Jul 18  2004 ogr.py
-rw-r--r--  1 schwehr  admin   53450 Oct  7 21:28 ogr.pyc
-rw-r--r--  1 root     admin   21359 Jul 18  2004 osr.py

But when I tried it, it was very unhappy. The date is from before I got this machine. Maybe the initial setup migrated this from the last machine? I am not sure, but it did not work.
/usr/bin/python
Python 2.3 (#1, Sep 13 2003, 00:49:11)
[GCC 3.3 20030304 (Apple Computer, Inc. build 1495)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import ogr
/System/Library/Frameworks/Python.framework/Versions/2.3/lib/python2.3/site-packages/ogr.py:117:
  FutureWarning: hex/oct constants > sys.maxint will return positive values in Python 2.4 and up
  wkbPoint25D = 0x80000001
...
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
  File "/System/Library/Frameworks/Python.framework/Versions/2.3/lib/python2.3/site-packages/ogr.py", line 99, in ?
    import _gdal
ImportError: Failure linking new module: /usr/local/lib/libogdi31.dylib: dyld: /usr/bin/python
  can't open library: /usr/local/lib/libogdi31.dylib  (No such file or directory, errno = 2)

That's one of the troubles with the Mac OSX setup stuff. I can't do a dpkg -S to ask where it came from. So at least it is possible to build gdal with python support on mac osx.
This evening I took a look at matplotlib. It looks pretty nice. I am thinking of using it for my Eel figures since it looks pretty powerful. With the Santa Barbara / Gaviota Slide paper I pushed gnuplot to its very limits and beyond.
Also, I (unintentionally) found stuff that says that growl, which is used by Adium and Fish, has a python interface. It was not clear where to get it. Then I found that fink has a perl growl interface. It would be cool to start to understand a little bit more how to interface with the mac. I am totally torn between learning all the mac ins and outs, or sticking to stuff that I know can be cross platform (perhaps with some pain).
Looks like the python growl code is in the developer tar. FINK-TODO! http://growl.info/downloads_developers.php
10.07.2005 14:12
projecting cores into the time domain
I want to be able to show cores in the time domain of my seismic
lines. Here is how I went about it. First I measured the length of
the core from the photographs with xcore (I did this a couple years
ago).
core  length (cm)
7     757
2     782
1     583

Now use segy-py in python to project from length to time.
ipython   # Igor says, "ipython good!"
In [1]: import projections
In [4]: projections.meters2twtt(5.83)
Out[4]: 0.0077733333333333335

In [5]: projections.meters2twtt(7.82)
Out[5]: 0.010426666666666667

In [6]: projections.meters2twtt(7.57)
Out[6]: 0.010093333333333334
segywaterbottom.py -t 5000 -d xstar -k Shotpoint --time -O 143 -f EEL5kL03.xstar --noisy -nGrrr. It seems like -n does not do what I think it does. Bugglet. At least the 2nd column is shot number and the 3rd is the two way travel time from the fish to water bottom and back. I then layed in my 177 shot offset values from yesterday and plotted them. I tweaked the gnuplot file until the top (smaller) time matched the water bottom. Then I added the time size of the core to the start time and came up with this nice file for use in gnuplot:
head wbt.dat
# FILE: EEL5kL03.xstar
# traceNumber Shotpoint time(s)
1 83542 0.155436
2 83543 0.155313
3 83543 0.155313
4 83544 0.155395
5 83544 0.155518
6 83545 0.155395
7 83545 0.1556
# With length from gps to block
# core 7 - 757 cm - .0101 sec
88942 .4205
88942 .4306
# core 2 - 782 cm - .0104 sec
88979 .421
88979 .4314
# core 1 - 583 cm - .00777 sec
89027 .424
89027 .43177
Now some gnuplot magic:
gnuplot
gnuplot> set label 1 "1" at 89027,.421 center
gnuplot> set label 2 "2" at 88979,.419 center
gnuplot> set label 7 "7" at 88942,.419 center
gnuplot> set xlabel 'Shotpoint'
gnuplot> set ylabel 'TWTT (seconds)'
gnuplot> set yrange [*:*] reverse
gnuplot> set xrange [90000:88500]
gnuplot> plot 'wbt.dat' using 2:3 with l, 'cores-shots-177-time.dat' with l
gnuplot> set terminal gif
gnuplot> set output 'core-location-time.gif'
gnuplot> replot
Now I'll use the illustrator overlay technique to give myself a guide to annotating the nice figure for the paper.
Warning: gnuplot has a little trouble flushing pdf plots to disk if you finish a plot, hit Ctrl-Z and do an
open -a /Applications/Adobe\ Illustrator\ CS/Illustrator\ CS.app foo.pdf
You can either quit gnuplot first or set the output to something else and replot. That seems to flush the pdf to disk.
We do not know the exact wireout, so the 40° is an estimate based on times where the wireout and speed were known. It is important to give an estimate of the range of laybacks that are possible. A reasonable range is 30° to 45°. Here is the python to get the range of shot offsets. Remember that shot spacing is only approximately constant, and only then when the ship is driving at a constant speed in a straight line (and has been doing so for a while).
ipython
import projections
import layback

mps = projections.metersPerShot(87542,(-124.4866666,40.835555555),
                                89042,(-124.5052777,40.839166666), utmZone=10)
# 1.07985
#
laybackMeters = ( layback.angledepthLayback(143,gpsoffset=projections.feet2meters(69), cableangle=45),
                  layback.angledepthLayback(143,gpsoffset=projections.feet2meters(69), cableangle=30) )
#
# (164.03120000000004, 268.71446548234945)
#
laybackShots = (laybackMeters[0] / mps, laybackMeters[1]/mps)
#
# (151.90179979198766, 248.84418901349753)   <--- this is it!
Then I plot it as a line over the cores.
10.07.2005 12:52
google's news aggregator
Looks like google has their news aggregator up... notice the navigation shortcuts. Has a vi feel.
Back to segywaterbottom.py and placing my cores in a seismic line...
10.07.2005 11:40
Bar code system for macs
Wine Collector 150 announced with Bluetooth support - If this came with
generic drivers, it would be a super way to add a cheap barcode system
to core lockers around the country. Would they just sell the
Intelliscanner barcode reader with just drivers?
ODP already uses barcode readers, so it would be great to get that to all the SIO geologic collections. Just need a little database app in front.
Intelliscanner
10.07.2005 10:19
Magnus adds more to PyGMT
For PyGMT, Magnus did a merge of my libgmt reading code with his
original code and we now have the best of both worlds! I need to add
the libgmt writing code soon so that the whole thing is wrapped up.
Collaboration rocks! Also, looking at his setup.py, I learned how to
check python's version. I use sets in some of my python code, and the built-in set type is only in python >= 2.4.
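Since I will forget, here is a minimal sketch of the kind of version check a setup.py can do. This is my own example, not necessarily what Magnus has in his setup.py, and the 2.4 cutoff is just for illustration:

import sys

# Refuse to install on a python that is too old (the built-in set type
# only showed up in 2.4, hence the illustrative cutoff).
if sys.version_info < (2, 4):
    print "Sorry, need python >= 2.4, found %s" % sys.version.split()[0]
    sys.exit(1)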
void example() {
    if (PyFile_Check(GMT_file)) {  /* checking if GMT_file is a file object */
        input = PyFile_AsFile(GMT_file);
        if (!(fread(&gmtheader,sizeof(struct GRD_HEADER),1,input))) {
            PyErr_SetString(PyExc_IOError, "Error, reading GMT header from file");
            return NULL;
        }
    } else if (PyString_Check(GMT_file)) {  /* check if GMT_file is a string */
        int argc=1;
        char *argv[]={"gmtiomodule",0};
        const int false=(1==0);

        filename = strdup(PyString_AsString(GMT_file));  // DO NOT FREE or modify PyString_AsString, so copy it right away
        argc=GMT_begin(argc,argv);                       // Sets GMT globals
        GMT_grd_init (&gmtheader, argc, argv, false);    // Initialize grd header structure
        GMT_read_grd_info (filename,&gmtheader);
    } else {
        PyErr_SetString(PyExc_TypeError, "Error, need a File object or a file name (string)");
        return NULL;
    }
}
Here is my build/install line for Mac OSX with fink (netcdf and gmt-dev 4.x installed):
NETCDF_PREFIX=/sw GMT_PREFIX=/sw python setup.py install \
    --root=/Users/schwehr/Desktop/pygmt-cvs
extremetech on military remote sensor stations. These could make great sensor nodes for science (especially if they cost a lot less).
10.07.2005 07:25
Fish - OXS aquarium
Not as photorealistic as the windows screen save that everyone has,
but this program is amusing.
http://oriol.mine.nu/software/Fish/
10.07.2005 07:00
Firefox 1.5b2
I just downloaded Firefox 1.5 beta 2. Their new install schematic in
the dmg is going to confuse people. I thought at first that it somehow
had an alias to my applications folder inside the dmg. That would
have been cool, but it is just a graphic. about: gives:
Firefox 1.4.1
Mozilla/5.0 (Macintosh; U; PPC Mac OS X Mach-O; en-US; rv:1.8b5) Gecko/20051006 Firefox/1.4.1
We shall see.
Why can't these browsers save ssl certs? I am tired of that for my secure sites like yahoo and my pop servers. There is a preferences security feature that lets me import the cert, but why not just grab it while at the https site? Or is that not possible?
10.06.2005 23:09
wxPython not that much fun
There is a minor bug with wxPython on Mac OSX with fink. I pasted in the sample program from the wiki:
import wx

app = wx.PySimpleApp()
frame = wx.Frame(None, -1, "Hello World")
frame.Show(1)
app.MainLoop()
Then when I hit enter after the MainLoop line, I got a constant stream of error messages about poll.
(.:13932): GLib-WARNING **: poll(2) failed due to: Invalid argument.
(.:13932): GLib-WARNING **: poll(2) failed due to: Invalid argument.
(.:13932): GLib-WARNING **: poll(2) failed due to: Invalid argument.
(.:13932): GLib-WARNING **: poll(2) failed due to: Invalid argument.
Lovely. I am using wxpython-py 2.5.2.8-2 on OSX 10.3.9. Then with the more complicated menu example, I don't get any output at all in terms of windows.
10.06.2005 11:28
Illustrator overlay
Here is an example of what it looks like to do an Adobe Illustrator
overlay of the gnuplot pdf of a plot with lines of the water bottom
picks. The plot with points obscures what I was trying to match. I
first open the gnuplot pdf and have to select all the non-text
geometry. This has to be set to a stroke of .25 or something similarly
small. I don't know why illustrator and gnuplot hate each other.
Make a new layer at the top level in the figure for the overlay. Lock all the other layers to prevent trouble, then paste into the new layer. Finally grab the corners and start stretching until it matches up.
Here is an overlay with points. This works better with points than lines when zoomed in closer. The plus signs from gnuplot show and the whole plot does not get clobbered.
gnuplot> set xrange [5625:6175]
gnuplot> replot
gnuplot> set yrange [.66:.625]
gnuplot> plot 'wbt2.dat' using 2:3
10.06.2005 11:08
blog archives back in a few minutes, PyGMT update
Arg. My Mac overheated last night while doing seismic stuff and had a kernel panic (it only seems to happen when the machine gets super hot) while I was remotely shelled in and doing an "nb -u all" on my server. Doh!
By the time you see this, it will be fixed.
On the PyGMT front, my patch to Magnus broke all of the gmt program functionality. I overloaded the PyFile code by grabbing the filename out of the object and passing the filename string into gmt. This breaks pipes. Whoops! Thinking about it more, raw grids are important for both speed and pipes.
10.05.2005 18:14
Waterbottom overlay for Illustrator
I am redoing my Huntec figure so I need a new overlay of the water
bottom. I have a sonarweb figure of line43, but it does not have
FieldRecNo values on it nor TWTT (two-way travel time). I make a pdf
overlay with segywaterbottom and gnuplot. I stretch the water bottom
to fit over the figure and presto! I have horizontal and vertical
scales. Here is how.
segywaterbottom.py -t 27000 --time -n --file=line43.sgy -k FieldRecNo
grep -vi none wbt.dat > wbt2.dat
Now I have an ascii table that looks something like this:
head wbt2.dat
# FILE: line43.sgy
# traceNumber FieldRecNo time(s)
1 1 0.738125
3 2 0.736375
4 2 0.812625
5 3 0.737875
7 4 0.737375
9 5 0.737125
11 6 0.7375
Now for a little gnuplot and I will have the overlay.
gnuplot
gnuplot> set xrange [1000:8000]
gnuplot> replot
gnuplot> set yrange [.8:.5]
gnuplot> set terminal pdf
gnuplot> set output 'line43-wbt-pts-1000-8000.pdf'
gnuplot> plot 'wbt2.dat' using 2:3 with p
Presto change-o!
10.05.2005 13:51
Layback.py
I finally got around to writing up functions for layback. Right now,
they are extremely basic.
from projections import feet2meters
import layback
I gave it 69 feet from the gps to the block on the fan tail. Then there is a 40 degree cable angle, calculated from other measurements at the same speed. Next, we need meters per shot. That can be done with two points. I don't have a working utmZone finder, so I still have to pass that in.
layback.angledepthLayback(fishdepth=143,gpsoffset=feet2meters(69),cableangle=40)
191.45196374097205
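For the record, here is a minimal sketch of what I believe angledepthLayback is doing, based on the numbers above and yesterday's 143/tan 40° calculation (further down the page): horizontal distance from the cable angle plus the gps-to-block offset. This is my reading of the numbers, not the actual layback.py source, and it assumes the cable angle is measured down from horizontal.

import math

def angledepthLayback(fishdepth, gpsoffset=0.0, cableangle=45.0):
    # Sketch only: horizontal layback in meters from the gps antenna to the fish.
    # fishdepth and gpsoffset in meters, cableangle in degrees from horizontal.
    return fishdepth / math.tan(math.radians(cableangle)) + gpsoffset

print angledepthLayback(143, gpsoffset=21.0312, cableangle=40)   # ~191.45, 69 ft = 21.0312 m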
from projections import metersPerShot
metersPerShot(87542,(-124.4866666,40.835555555),
              89042,(-124.5052777,40.839166666), utmZone=10)
Out[4]: 1.0798502731674162
The shot offset is then pretty easy to calculate:
>>> 191.45/1.07985
177.29314256609715
177 shots back lets us recompute the best-fit shot number.
offset = 177
c=1;s=88850; print s+offset,'-460\n',s+offset,'-470\n'
c=2;s=88802; print s+offset,'-460\n',s+offset,'-470\n'
c=7;s=88765; print s+offset,'-460\n',s+offset,'-470\n'
Toss that into a gnuplot file called cores-shots-177.dat and give it something close to the bottom:
88850 -462
88850 -470
88802 -460
88802 -467
88765 -458
88765 -465
Now gnuplot it:
plot 'wbt.dat' using 4:5 with l, \
     'cores-shots-177.dat' with l, \
     'cores-shots-0.dat' with l
Here is the layback that looks like the best estimate. The core lengths are not right, just rough guesses without looking back at my notes.
10.05.2005 10:27
segy-py 0.29
This one is all about getting smoothed navigation from
segysqlshiptrack to segysqldist.
- README: more info
- TODO: A few more items for a rainy day
- projections.py: Added distPoints()
- segysqldist.py: More typing and error checking
- segysqldist.py: --in-position-file to bring in smoothed navigation
- segysqlshiptrack: --dump-keys so Shotpoint can be in column 1
- segywaterbottom.py: Database commits only happen when there is a db
- zone.py: Start of utm zone calculator. Temp file. Not working
10.05.2005 09:07
Wow! Another segypy package!
I was pleasantly surprised to see another segy python package come
across the wire today. I had seen the matlab version, but had no idea
that Thomas Hansen had put together segypy [sf.net]. segypy 0.3 [freshmeat]
Pythonware daily link:
SegyPY 0.3 [SegyPY includes M-files to read and write SEG-Y files from Python, implemented using the syntax of the SEG-Y format, commonly used for storing seismic data. ... Changes: This release adds supprt for writing all variants of SEGY (revision 0 and 1), except for IBM floats, in data sample format. ...]
I can see lots of similarities in the way our two projects work. He does writing of files, which I have totally avoided, and he has made some plots using matplotlib which is so cool!
This is very exciting!!! There is huge potential here for collaboration.
For a totally different topic... there are some great color photos coming back from the MER rovers this week:
http://www.lyle.org/mars/updates/color-1128523800.html
10.05.2005 08:02
Lots of links
I'm getting too many bookmarks in Firefox, so it is time to blast
through them.
SGI Darpa Grand Challenge - to go with Dan Christian's Article... I know, I already posted this one.
Protecting your sites folder - Configuring Apache
How to: Share your music via iTunes on the Net
zenity - GTK dialogs from a shell script. Wait, didn't I decide to stop writing so many bash scripts? Looks better than xdialog.
Control stuff with your Mac - Velleman K8055 USB I/O card. I NEED ONE OF THESE!. SourceForge K8055mac
Ossim ("awesome") GIS. Need to check this out. Uses open scene graph. Also osgPlanet - a 3D Geospatial Global Viewer
OpenOffice end notes - OOo Off-the-Wall: Back to School with Bibliographies - I don't use open office, but has some good concepts.
Short PHP tutorial - What I really need is a tutorial about how to setup PHP5 in fink with apache 2. I gave it 10 minutes and then said, bugger... So many great tools and so little time to play with them.
Python anti-pitfalls - lots of great python tips... e.g. I would not think of this as a programmer coming from C/C++: "a,b = b,a" just works. Where is the tmp variable?
clusty the search engine so google doesn't own me completely.
My fink stats... Weird. Shows how little I actually do for fink. Should do more
lowtech sensors and actuators
http://jrhicks.net/ does some mapping stuff.
Doing things with flickr
x10 wiki - home automation that I have never gotten to play with.
Keeping Your Life in Subversion - I do a lot of this for my thesis with CVS. Must learn SVN soon. It was pretty easy to play with in just a few hours.
Google Total - lots of categories.
vnc2swf - Record your screen to a flash movie.
How trackbacks work
CocoThumbX - like Pic2Icon, but I have not tried it and am sticking with Pic2Icon.
DeInstaller for mac apps that use apple's lame package manager with Receipts. Why did someone else have to create the de-installer?
GeekTool for cool ways to change the screen background for OSX. Want to try this too.
GOSCON' CONFERENCE TO HIGHLIGHT GOVERNMENT OPEN SOURCE SOFTWARE ADOPTION AND BEST PRACTICES... I hate all caps titles. Yada yada... and not that much real content.
NASA Develops the Blue Marble - true global color imagery at 1km resolution
book review of Web-Mapping Hacks
That's it for now...
10.04.2005 23:10
shotpoint offset for the cores
Now that I have the closest point to the shiptracks, I need to apply
the layback offset to see where these cores land.
Using a 40 degree wire angle and a water depth of 143 meters layback is:
layback = 143/tan 40° = 143/.83909963 = 170 meters
170 m / (1.08 m/shot) = 158 shots
A little python can give us the 158 shot layback file for gnuplot:
#!/usr/bin/env python
offset = 158
c=1;s=88850; print s+offset,'-460\n',s+offset,'-470\n'
c=2;s=88802; print s+offset,'-460\n',s+offset,'-470\n'
c=7;s=88765; print s+offset,'-460\n',s+offset,'-470\n'
So we now have a gnuplot file with the cores, but need the water bottom to go with cores-shots-158.dat:
89008 -460
89008 -470
88960 -460
88960 -470
88923 -460
88923 -470
We can quickly generate the water bottom like this:
segywaterbottom.py -t 5000 -d xstar -k Shotpoint -D -O 143 \
    -f EEL5kL03.xstar --noisy -n
Finally, it is time to see what we have!
set ylabel 'meters below sea level'
set xlabel 'Shotpoint'
set xrange [88500:90000]
set terminal gif
set output 'cores-layback-375shots.gif'
plot 'wbt.dat' using 4:5 with l, \
     'cores-shots-375.dat' with l, \
     'cores-shots-158.dat' with l
I also plotted no layback and 375 shots to show where those fall.
10.04.2005 22:43
optparse multiple instances of the same flag
I always forget how to do this, so I had better write it up! If I
want to be able to specify a flag 0 or more times and build a list of
what the user gave you, then you must use the append option.
#!/usr/bin/env python
from optparse import OptionParser

myparser = OptionParser(usage="%prog [options]", version="%prog foo")
myparser.add_option("-f", "--file", action="append", type="string", dest="filename")
(options,args) = myparser.parse_args()

print options.filename
print options
print args
Here is what happens:
./foo.py -f 1 -f 2
['1', '2']
{'filename': ['1', '2']}
[]
10.04.2005 21:52
Closest point to cores on the seismic line
Here is how I calculated the closest location of my three cores to the
track line. This does NOT account for layback. The shot numbers here
need to have an offset applied to get the shot in the seismic plot
shown. This assumes the fish follows exactly the path of the ship
which is not exactly true!
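The real work happens in segysqldist.py below, but the idea is just a brute-force nearest-neighbor search along the track. A rough sketch of that idea (my own illustration, with the distance left in degrees like the BEST output further down, not the actual segysqldist code):

import math

def closest_shot(core_lon, core_lat, track):
    # track is a list of (shotpoint, lon, lat) tuples.
    # Returns (distance, shotpoint, lon, lat) for the nearest track point.
    # Distance is a plain Euclidean value in degrees, fine for ranking points
    # this close together, not a real map distance.
    best = None
    for shot, lon, lat in track:
        d = math.hypot(lon - core_lon, lat - core_lat)
        if best is None or d < best[0]:
            best = (d, shot, lon, lat)
    return best

# Made-up two point track just to show the call:
print closest_shot(-124.50281, 40.83877,
                   [(88848, -124.50255, 40.83871), (88850, -124.50279, 40.83875)])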
The navigation in the xstar files is really bad on this scale, so I need to get the lon,lats from somewhere other than the X and Y segy fields in EEL5kL03.xstar.db. I just modified segysqldist so that it can import ASCII navigation files. segysqlshiptrack.py has code that creates a better shiptrack, so if I tell it to prepend the shotpoint, segysqldist.py can use that.
segysqlshiptrack.py --dump-key='Shotpoint' -d EEL5kL03.xstar.db \
    -g ~/Desktop/pygmt-test/Eureka.grd -Z -i
The key is the -i to cause it to do interpolation. This is not the best solution as it requires a gmt grid file, but that is all I have time for!
Now pass that in to segysqldist for each of the three core locations. I am now using named files with the lon/lat pairs. Much less confusing. I can never remember the full coordinates.
segysqldist.py --in-position-file=1.dat --verbosity=6 \
    -f EEL5kL03.xstar.db -k Shotpoint --navigation-file=track.xyz -n 1
segysqldist.py --in-position-file=2.dat --verbosity=6 \
    -f EEL5kL03.xstar.db -k Shotpoint --navigation-file=track.xyz -n 1
segysqldist.py --in-position-file=7.dat --verbosity=6 \
    -f EEL5kL03.xstar.db -k Shotpoint --navigation-file=track.xyz -n 1
Which gives me:
BEST: 1 88850 -124.502794647 40.8387528115 0.000182643946413 88850
BEST: 1 88802 -124.502146874 40.8386448493 0.000207362222513 88802
BEST: 1 88765 -124.501704355 40.8385510888 0.00031133927505 88765
I can then make this simpler as
-124.502794647 40.8387528115 0.000182643946413 88850
-124.502146874 40.8386448493 0.000207362222513 88802
-124.501704355 40.8385510888 0.00031133927505 88765
Here are the gnuplot commands that made the graph below.
gnuplot
set xrange [-124.504:-124.5]
set yrange [40.837:40.84]
plot 'track.xyz' using 2:3 with l, '1.dat','2.dat','7.dat', 'picks.127.xy'
10.04.2005 08:47
More legal torrent usage
I am starting to find more legal bit torrents that I have wanted to
download. P2P has seemed like the way to distribute big chunks of
data, but I haven't seen much to date. Today I am trying to grab
Knoppix that way. Yesterday, I grabbed the new (and free) Harvey Danger
album from the band's web site. Not really into the music, but wanted
to check it out. The main problem is speed. If I do a wget of
knoppix, it will be here today. However, if I do a torrent, it's
going to be like 5 days. Torrents are great for super popular stuff
with lots of peers, but they are painful for the less popular stuff.
10.03.2005 19:24
Dan Christian covers the Darpa Grand Challenge
CHECK IT OUT!
Looks like Dan Christian will be covering the Darpa Grand Challenge robotic race this weekend. The prize is $2 MILLION. I don't think anyone will claim it, but it is always possible.
http://www.gizmag.com/go/4700/
Note that I used to work for CMU in Red Whittaker's Field Robotic Center and I used to work with Dan at NASA Ames. I couldn't pick a better person to report on the happenings there!
10.03.2005 19:03
segy-py 0.28 - the layback kickback beer finder
segy-0.28 is for finding how far behind the ship the fish is hanging out.
- Matching the water bottom picks to the GMT grid file
- offset.py: Allow offsetting of column 2 from column 1 in a text file
- decimate.py: -B for blank lines between skip groups - gnuplot lines
- segysqlshiptrack.py: --skip-zeros, --wbt, --trace-min/max
- segywaterbottom.py: Very slow database wbt add. Needs help! Join?
- segywaterbottom.py: Time, depth, offset, negative, position options
10.03.2005 18:24
offsetting a line until it matches the bathymetry
I am now able to shift the water bottom pick (wbt) to match fairly
closely to the EM1000 multibeam/swath sonar geometry. The EM1000 is
hull mounted so there is no issue of wire out.
In the figure, the red saw toothed (jaggly) line is from the EM1000 data. The green dashed line is the water bottom pick with a 143 meter fish depth. Finally the blue is my best match of the chirp's water bottom to the EM1000 by shifting the data back to the left and up.
offset.py -i wbt.db.raw -p 750 -V '12.' > test.dat
On the right, the green and blue lines start really diverging from the sea floor on the turn as the fish goes up and down in the water column. Here it looks like it is up in the water. Perhaps they sped up for the turn or winched in. I haven't looked at the header info and plots to see which. This starts about traceNumber 12000. Note that traceNumber is not related to Shotnumber. traceNumber is just the offset for a trace from the beginning of the current SEG-Y file.
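Since offset.py is doing the interesting part, here is a rough sketch of the kind of shift I mean by "back to the left and up": pair each trace number with the pick from some number of rows later and raise it. This is only my guess at the behavior from the 0.28 release note about offsetting column 2 from column 1, not the real script:

def shift_picks(rows, back=750, up=12.0):
    # rows is a list of (traceNumber, depth) pairs.
    # Pair each trace with the depth seen 'back' rows later and raise it
    # by 'up' meters, which slides the curve left and up on the plot.
    shifted = []
    for i in range(len(rows) - back):
        shifted.append((rows[i][0], rows[i + back][1] + up))
    return shifted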
# Make a dashed line: 60 shots on, 60 off - blank line in between
decimate.py -B -l 60 -L 60 wbt.db.raw > orig.skip
gnuplot> set xlabel 'traceNumber'
gnuplot> set ylabel 'altitude(m)'
gnuplot> plot 'track.xyz' using 3 with l title 'track.xyz EM1000', \
    'orig.skip' using 1:2 with l title 'orig.skip from wbt.db.raw', \
    'test.dat' with l title 'test.dat w/ shift'
This graph shows that the fish is approximately 750 traces behind the GPS. In an xstar file, there are two traces for every shot, so the fish lags by 375 shots.
Update: The winch only has something like 400 meters of wire, so there is no way that I have the right offset in this example, but the idea still seems right. I must have mismatched the bumps.
10.03.2005 08:36
water bottom getting close to matching gmt grid
It's not there yet, but I am getting closer to comparing the water
bottom picks to the bathymetry. I may have a rounding or casting
error. Or it could be something totally different. Oh, but I am so
close! Here is what I did at this point. The command lines are
getting crazy long.
segywaterbottom.py -d xstar -f EEL5kL03.xstar -t 5000 -D -p -n \
    --database=foo.db --verbosity=30 --offset=143 \
    --trace-key=Shotpoint --trace-min=88500 --trace-max=90000
segysqlshiptrack.py -g Eureka.grd -d foo.db -i --wbt=wbt --verbosity=10
gnuplot
splot 'track.xyz' with l, 'track.xyz' using 1:2:4 with l
set xrange [8000:13000]
plot 'track.xyz' using 3 with l title 'EM1000 gmt grid', \
    'track.xyz' using 4 with lp title 'wbt pick'
Update: Looks like the original water bottom picks are great! There is one miss and a couple high picks (telephone poles), but check out that sea floor! Clearly I have a bug somewhere with putting the water bottom picks into the wbt database field or getting it back out. Once I find that bug, I need a little program that lets me slide the water bottom data backward in time (to the left) and up and down. Looks like the fish depth estimate from Friday or 143-148 meters is pretty close. It is also cool to see the fish drop on turns... it pulls away from the gmt grid file and then comes back up.
gnuplot
plot 'wbt.dat' using 1:4 with l title 'wbt original', \
    'track.xyz' using 3 with l title 'EM1000 gmt grid', \
    'track.xyz' using 4 with lp title 'wbt pick'
10.02.2005 22:04
sqlite2 - grr
I just got totally bit by SQLITE 2 not having an "ALTER TABLE"
command. I just wanted to add a field for the water bottom to the
segyTrace table, but no dice. I am going to shove one into the CREATE
command in segy.py, but that is a disgusting hack. Once this thesis
is done, I really have to switch to SQLITE 3 and the latest PySqlite.
I also have to start working with other more powerful databases.
SQLITE 2 and PySqlite 1.x have been a great way to learn the basics,
but I am outgrowing them quickly.
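For when I do get around to it, the standard SQLITE 2 workaround is to rebuild the table with the extra column and copy the rows across. A rough sketch with the pysqlite 1.x API I am already using; the column names here are placeholders, not the real segyTrace schema:

import sqlite

cx = sqlite.connect('EEL5kL03.xstar.db')
cu = cx.cursor()
# No ALTER TABLE in SQLITE 2, so: make a copy with the new wbt column,
# drop the original, recreate it with the new layout, and copy back.
cu.execute('CREATE TABLE segyTrace_tmp (traceNumber INTEGER, x REAL, y REAL, wbt REAL);')
cu.execute('INSERT INTO segyTrace_tmp (traceNumber, x, y) SELECT traceNumber, x, y FROM segyTrace;')
cu.execute('DROP TABLE segyTrace;')
cu.execute('CREATE TABLE segyTrace (traceNumber INTEGER, x REAL, y REAL, wbt REAL);')
cu.execute('INSERT INTO segyTrace SELECT * FROM segyTrace_tmp;')
cu.execute('DROP TABLE segyTrace_tmp;')
cx.commit()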
10.02.2005 18:35
segy-py 0.27 - adds segysqlshiptrack.py
I just released segy-0.27 which adds segysqlshiptrack.py for finding
the bathymetry from GMT grid files along a shiptrack. This means that
segy-py now depends on PyGMT, but that is currently only for
segysqlshiptrack.py. This program is still rough as it is hard coded
for the xstar chirp. I will fix that in the next release.
segy-py-0.27.tar.bz2 [52K]
segy-py
Here is a quick sample of running the new program:
segysqlshiptrack.py -d EEL5kL03.xstar.db -i -g Eureka.grd --verbosity=10
gnuplot
gnuplot> splot 'track.xyz' with l
10.02.2005 16:16
interpolating the ship track
Since the xstar has gps positions coming in infrequently and with a
slower update rate on the latitude, I set up a little python program to
interpolate the position. The jagged line is the original position
data and the smooth is my new navigation data. At the same time I am
picking the bottom from the EM1000 multibeam data from the STRATAFORM
project so I end up with the seafloor along with a ship track.
Note: I used p[0],p[1] to get the z value, when I should have used pInterp[0],pInterp[1], so the z values from this code will be not so great. See segysqlshiptrack.py for more developed code. That program requires PyGMT > 0.3.

#!/usr/bin/env python
import sys
import PyGMT
def lonlat2xy(grd,lon,lat):
    minx = grd.x_minmax[0]
    maxx = grd.x_minmax[1]
    miny = grd.y_minmax[0]
    maxy = grd.y_minmax[1]
    nx = grd.data.shape[0]
    ny = grd.data.shape[1]
    dx = (maxx-minx)/float(nx)
    dy = (maxy-miny)/float(ny)
    dLon = lon - minx
    dLat = lat - miny
    x = dLon / dx
    y = dLat / dy
    return x,y
def interpolateXY(points):
    newPoints = []
    yStart = points[0][1]
    yEnd = points[-1][1]
    xStart = points[0][0]
    xEnd = points[-1][0]
    dx = (xEnd - xStart) / float(len(points)-1)
    dy = (yEnd - yStart) / float(len(points)-1)
    count = 0
    for p in points:
        x = xStart + count * dx
        y = yStart + count * dy
        newPoints.append([x,y])
        count += 1
    return newPoints
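A quick sanity check of interpolateXY: it just respaces the points evenly between the first and last fix, e.g.

# Three fixes where the middle one lags behind:
print interpolateXY([[0.0, 0.0], [1.0, 0.0], [4.0, 2.0]])
# -> [[0.0, 0.0], [2.0, 1.0], [4.0, 2.0]]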
#
# Get the GMT grid
#
f=open('Eureka.grd','r')
print "reading grid..."
g=PyGMT.read_grid(f)
shape = g.data.shape
o = open ('out.dat','w')
import sqlite
cx = sqlite.connect('EEL5kL03.xstar.db')
cu = cx.cursor()
#
# Cope with the y values not getting updated as often, so do a linear interpolation
#
cu.execute('Select x,y FROM segyTrace;')
start=True
points=[]
batches=0
for row in cu.fetchall():
    lon,lat = row.x/3600.,row.y/3600.
    if start:
        start = False
        lastlat = lat
    points.append([lon,lat])
    if lat != lastlat:
        # Finally y got incremented
        print " BATCH", len(points)
        pointsInterp = interpolateXY(points)
        for i in range(len(points)):
            pInterp = pointsInterp[i]
            p = points[i]
            grdx,grdy = lonlat2xy(g,p[0],p[1])
            z = g.data[int(grdx),int(grdy)]
            o.write (str(pInterp[0])+' '+str(pInterp[1])+' '+str(z)+' '+str(p[0])+' '+str(p[1])+' '+str(grdx)+' '+str(grdy)+'\n')
        points=[]
        start=True
        batches += 1
# FIX: handle last batch
10.02.2005 08:56
PyGMT data loading
I took a little bit of time to port PyGMT to Mac OSX and it looks like
reading is in great shape. I am not going to worry about writing
grids right now because I do not need that feature for my seismic data
processing. I accomplished the port by switching the grid file
reading from an fread with cast from float method to using libgmt's
GMT_read_grd_info and GMT_read_grd. This allows the reader to be
cross platform and read any of the 6 or 7 grid formats supported
by GMT. I will be sending the code to Magnus Hagdorn who wrote PyGMT.
Here is a gnuplot splot of a decimated grid. The code to
dump the longitude, latitude and depth follows.
#!/usr/bin/env python
import PyGMT

def lonlat(grd,x,y):
    """Calculate the lonlat for a point"""
    #import pdb
    #pdb.set_trace()
    minx = grd.x_minmax[0]
    maxx = grd.x_minmax[1]
    miny = grd.y_minmax[0]
    maxy = grd.y_minmax[1]
    nx = grd.data.shape[0]
    ny = grd.data.shape[1]
    dx = (maxx-minx)/float(nx)
    dy = (maxy-miny)/float(ny)
    lon = minx + dx * x
    lat = miny + dy * y
    return lon,lat

f=open('Eureka.grd','r')
print "reading grid..."
g=PyGMT.read_grid(f)
shape = g.data.shape
o = open ('out.dat','w')

print "Writing grid to an ascii file"
decimationFactor = 15
for y in range(0,shape[1]):
    if y%100==0: print ' y = ',y
    if y%decimationFactor!=0: continue # FIX: decimating
    for x in range(0,shape[0]):
        if x%decimationFactor!=0: continue # FIX: decimating
        lon,lat = lonlat(g,x,y)
        o.write (str(lon)+' '+str(lat)+' ' +str(g.data[x,y]) + '\n')
And a quick hint about installing packages outside of the normal package location. First I had to build and install the package like this into a temporary location on my Desktop using the distutils install command.
GMTINC=/sw/include python setup.py install --root=/Users/schwehr/Desktop/pygmt
Then I have to add this location to the python path. You might expect to point to the PyGMT under the site-packages directory, but that will not work. Just point to the site-packages directory.
export PYTHONPATH=$PYTHONPATH:~/Desktop/pygmt/sw/lib/python2.4/site-packages
Now import PyGMT should work.
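A quick way to double check that the path is right is to ask python where it found the module:

# Should print a path under ~/Desktop/pygmt/sw/lib/python2.4/site-packages
import PyGMT
print PyGMT.__file__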
Note: The y axis labels are wrong. I had a bug which is now fixed in the above code. I did not feel like rebuilding the graph
10.01.2005 21:23
Strange going back to C
I'm doing just a little bit of C code this evening trying to glue some
stuff together. After doing a lot of C++ and then really getting into
python, C feels so clunky. I feel like I have shackles on.
Everything is just so much effort. With ipython, I can introspect
live objects and do completions. I can just ask an object, "What are
you?" Documentation is just so much better than with straight C.
Sending text to stdout with printf just seems silly. Molasses.