Monday, July 1, 2019

Incomplete data I

We geoscientists are glued to maps, are we not? Whatever analysis we undertake always goes back to the map where the original sample was collected. While hours in the lab may significantly exceed time in the field, the data we collect from all of that analysis ultimately gets plotted back on the map, or at least referenced to the map that shows the sample locations.
The map then shows up in our publication of the study, with our interpretation of the data at the sample points plotted on it. Since in all probability the location was either plotted on a field map (do we use those anymore?) or extracted from GPS, and then entered into the mapping program, why don't we include the mapped location of the sample in our publication? Why not add a table to the manuscript providing sample ID and latitude-longitude or UTM coordinates? Is that too much to ask (even if the publisher insists on putting it in a supplementary file)? And what about all those "representative" geochemical analyses with unknown locations?
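Such a table need not be elaborate. Something like the following would do (the sample IDs and coordinates here are invented purely for illustration):

```
Sample ID, Latitude (WGS84), Longitude (WGS84)
XX-001,    44.4605,          -110.8281
XX-002,    44.4172,          -110.5724
XX-003,    44.3901,          -110.6347
```

One datum declaration at the top of the table, and every analysis in the paper becomes relocatable.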
The results of our study, to be of any value, need to be reproducible, do they not? Isn't that SCIENCE 101? Locations on a minuscule map graphic are not enough, requiring the reader to measure out the locations with a digitizer or a mapping program (screen-capture the map, add it to Google Earth, position it, add data points, save to a KML file, open the KML file in a text editor, extract latitude and longitude).

Wednesday, January 24, 2018

McQuarrie and Wernicke (2005) - Calculated rotation parameters

A file link in a previous post is no longer working.
The new link is: here

The original content of the post, with correct link is:

McQuarrie and Wernicke (2005) - Calculated rotation parameters

Here are the rotation parameters calculated from McQuarrie and Wernicke's (2005) reconstructions: Rotation Parameters: Western US Terranes to North America.

Original post that discusses their article (with link) and derivations: Google Earth: Restoration Parameters and Restoring Data Points.

Monday, November 27, 2017

I missed this one...

While doing a simple search, I ran across an abstract from 1977, authored by M. Kumazawa and Y. Fukao, in a conference proceedings volume (emphasis mine):

The physical properties of materials in the vicinity of the 650-km seismic discontinuity are studied based on (1) phase relations (solid-solid, and solid-liquid) inferred from recent high-pressure experiments; (2) possible distribution of mineralogical and chemical compositions; and (3) shear softening in γ-(Mg, Fe)2SiO4 and its effect on melting temperatures and flow properties. It is predicted that the horizon just above the 650-km discontinuity is a low-velocity zone, in particular in shear wave velocity, a layer of local low melting temperature, and a layer of the active chemical migration and fractionation. At the same time this horizon is also identified as a low-viscosity layer (lower LVZ).
These features lead to a concept of dual plate tectonics models. The layer between 200- and 550-km depths is sandwiched between two relatively soft layers (upper and lower LVZs) and is expected to behave as a rigid plate (mesoplate). The interaction of two groups of plates (lithoplate and mesoplate) is discussed as a particular type of convective motion in the mantle. The upper LVZ (100 − 200-km depth) is not a zone of counter flow of lithoplate motion, but is a shear zone decoupling the lithoplate from the mesoplate. The mesoplates are spreading away from the trench, where the lithospheric materials diffuse into the lower LVZ and the mesoplates. The diapirism for generating hot spots is supposed to originate from the lower LVZ just above the 650-km depth.

I so wish I had found this twenty years ago, while I was preparing my monograph, Geokinematics: Prelude to Geodynamics, for publication, or even forty years ago, one year into my Assistant Professorship at LSU. Kumazawa and Fukao's definition and usage are so very close to mine, even though they arrive at the concept from directions entirely different from mine. Compare the above with this.

Sunday, January 12, 2014

Derivation of Fractals and Self-Similarity from Information Theory

In a paper published a couple of years ago (Pilger, 2012), I describe the application of a simple principle, transformed into a distinctive abstract object, to an optimization problem (within the plate tectonics paradigm): simultaneous reconstruction of lithospheric plates for a range of ages from marine geophysical data. The relation of the principle, maximum entropy, to a particular transformation, power-law fractals, has rarely been recognized since Pastor-Satorras and Wagensberg derived it. I'm unaware of any other application of fractal forms to optimization problems analogous to that paper's. The following derivation is taken from the 2012 paper, with slight modification, in hopes that it might prove useful in fields beyond the earth sciences. I'm investigating applications in a variety of other areas, from plate tectonics to petroleum geology and, oddly enough, the arts.
Pastor-Satorras and Wagensberg (1998) showed that fractal self-similarity could be derived utilizing Shannon's (1948) information theory and Jaynes' (1957) maximum entropy formalism. For convenience, straightforward derivations of both maximum entropy and fractal distributions are provided here.
Mandelbrot (1982) provided a basic definition of the mathematical formulation of fractals:
p = k d^(-D)
For statistical fractals the formula is
p ≈ k d^(-D).
In either case, p is a measure of a self-similar structure, such as length, width, mass, or even probability, over a range of scales, d; k is a constant, in part dependent on the units of the measure, if any; and D is the fractal dimension.
Given a measure space – for the purposes of this study, a two-dimensional map – the map area can be divided into successive subareas. If the dimensions of the map are 1 by 1 unit, divide the map into square cells of dimension 1/k_j by 1/k_j, in which, for convenience, k_j = 1, 2, 4, 8, ..., N (k_1 = 1; k_j = 2k_(j-1) for j > 1).
1. The amount of information, I, supplied for each successive division j, is equal to ln(1/k_j^2) = ln(d_j) (see Shannon, 1948; or Kanasewich, 1974). The average (expected) information is
<I> = Σ_(j=1,N) p_j ln(1/k_j^2) = Σ_(j=1,N) p_j ln(d_j)     (1)
in which p_j is the probability of each division. (Note that d_j could be set equal to either the cell width, 1/k_j, or the cell area, 1/k_j^2, without affecting the derivation.)
2. In the presence of inadequate information, Jaynes (1957) proposed (using Grandy’s, 1992, paraphrase here): “The optimal probability assignment describing that situation is the one which maximizes the information-theoretic entropy subject to constraints imposed by the information that is available.” What is the information-theoretic entropy? From Grandy’s derivation:
An experiment is repeated n times with potentially m results. Thus there are potentially m^n outcomes of the experiments. Each outcome produces a set of sample numbers n_j and frequencies of occurrence of those sample numbers, f_j = n_j/n, 1 ≤ j ≤ m.
As a result, the number of outcomes that yield a particular set of frequencies f_j is given by the multiplicity factor:
W = n!/[(nf_1)! ... (nf_m)!]     (2)
The set of frequencies (f_j) that can be realized in the most ways is that which maximizes the multiplicity, W.
It is convenient to note that maximizing W is equivalent to maximizing ln(W). Thus
ln(W) = ln{n!/[(nf_1)! ... (nf_m)!]} = ln(n!) - Σ_(j=1,m) ln[(nf_j)!]     (3)
Stirling's approximation, ln(q!) ≈ q ln(q) - q, is applicable for large n:
ln(W) ≈ n ln(n) - n - Σ_(j=1,m) [nf_j ln(f_j) + nf_j ln(n) - nf_j]     (4)
and, then
ln(W)/n ≈ ln(n) - 1 - Σ_(j=1,m) f_j ln(f_j) - ln(n) Σ_(j=1,m) f_j + Σ_(j=1,m) f_j     (5)
As Σ_(j=1,m) f_j = 1, therefore,
ln(W)/n ≈ -Σ_(j=1,m) f_j ln(f_j)     (6)
If the set of frequencies accurately represents the probabilities (p_j, j = 1,m) of the phenomenon being investigated, then
S = ln(W)/n ≈ -Σ_(j=1,m) p_j ln(p_j)     (7)
which is Shannon information entropy.
Shannon information entropy has the following properties (Jaynes, 1957): (1) It is a continuous function of the probabilities, (2) if all probabilities are equal, it is a monotonic function of n (the number of probabilities), (3) grouping of events produces the same value as separate events, and (4) S(m) + S(n) = S(mn), which can only be satisfied by a form such as S(n) = k ln (n).
3. In the Pastor-Satorras and Wagensberg formalism, using the Lagrangian variational principle, information entropy (eq. 7) is maximized subject to the constraining information (eq. 1) and normalization of the probabilities (Σ_(j=1,N) p_j = 1) with Lagrange multipliers α and β:
F = -Σ_(j=1,N) p_j ln(p_j) + α [<I> - Σ_(j=1,N) p_j ln(d_j)] + β [1 - Σ_(j=1,N) p_j]     (8)
Maximizing F for each p_i:
∂F/∂p_i = 0 = -ln(p_i) - 1 - α ln(d_i) - β     (9)
ln(p_i) = -1 - α ln(d_i) - β     (10)
exp[ln(p_i)] = p_i = exp(-β - 1) d_i^(-α)     (11)
p_i = γ d_i^(-α)     (12)
in which α and γ = exp(-β - 1) are constants.
For such a power law, eq. 12, the probabilities exhibit self-similarity across scales, and α, then, is equivalent to Mandelbrot's (1975, 1982) fractal dimension, D, above.
The novelty of the Pastor-Satorras and Wagensberg (1998) derivation is the incorporation of the constraint of average information, <I> = Σ_(j=1,N) p_j ln(d_j), into the maximum entropy formalism (eq. 8). Conventional applications of the formalism utilize statistical measures, such as the mean or variance from experimental data, e.g., <f> = Σ_(j=1,N) p_j f(x_j), rather than information.
Fractal structure can be interpreted as the manifestation of iterative processes which propagate information kernels across a range of scales, in such a manner that they achieve the most probable outcome. And, the fractal structure, the information, is thereby apparent across every scale over which the process operates.
(Links as of January 13, 2014; if a link is broken, it may be necessary to search for it.)

Sunday, August 18, 2013

Comparison - 32 yrs ago and now

Déjà vu time:

I'm in the process of revising and enhancing reconstructions of the Nazca oceanic and South American continental plates. In the process, I decided to compare the latest parameters with those I published 32 years ago. Long = Longitude, Lat = Latitude, Ang = Angle of rotation. 81 = 1981. 13 = 2013. Other than a slight offset of the longitudes and the greater resolution now, the comparisons are not too bad -- especially the rotation angle. The x axis is age in millions of years. The y axis is degrees.

Monday, June 25, 2012

Part-time research

I look at my output over the last decade and am a bit frustrated. There's my memoir (2003), the Hawaiian-Emperor paper (2007) and assorted comments on other workers' contributions (here, here, and here).
Now, finally, there is the 2012 fractal plate reconstruction paper (paper). I've had it in the back of my mind for some time. And next to come, I hope, is a similar paper dealing with plates and mantle anomalies. It's actually coming along quite nicely (the research itself), as is the writing. But as a part-time researcher, it moves too, too slowly.

Friday, May 18, 2012

Fractal plate reconstructions: at Marine Geophysical Research, now online.

Tuesday, February 15, 2011

What's missing in these hotspot volcanic models? Answer: geokinematics

Three recent "hotspot" papers (Ballmer et al., 2009, 2010, and Presnall & Gudfinnsson, 2011) share a common feature. They are missing something -- a piece of evidence which none of the three confront: geokinematics. In fairness, however, first, these papers are not unique; and, second, the authors could with equal validity claim that geokinematics papers neglect geochemical and geodynamic models, too.

Sunday, January 30, 2011

Refocus: The Hotspot-Plume Debate

[Incomplete draft, 2003]
In recent years the ongoing hotspot-plume debate has seemingly increased in intensity, extending beyond refereed publications to sometimes passionate exchanges in opinion and letters sections of organization newsletters and even the popular scientific press. Some of the debate is reminiscent of a political exchange. Assertions are sometimes imprecise or misleading, while counter assertions ignore the initial assertion, providing a response that raises entirely new and different issues. Like ships passing in the night, the debate seems to involve much more miss than hit. Rather than continue this sequence of mixed metaphors, it is preferable to try to refocus the debate. What are the fixed hotspot [1,2] and plume [2,3] hypotheses (while coupled, they can be viewed as separate proposals – either complementary or even competing)? What evidence have we to consider in elaborating and testing the hypotheses? Can we perhaps come to a minimal consensus on these basics, as a basis for progress in future research? This note is an attempt to bring some clarity to the debate while introducing some pertinent observations from both recently published research and earlier, yellowing publications.

Plate and Subplate Interactions, U.S. Cordillera (2005)

Plate and Subplate Interactions: Understanding U.S. Cordilleran Tectonics in the Late Mesozoic and Cenozoic

Thursday, January 20, 2011

Jack Oliver, RIP

Jack Oliver
Another of the giants is gone. As inspiring as his ground-breaking (pun intended) contributions were, Jack Oliver was also a very generous man. I, as a young professor, enjoyed and valued my brief interactions with him during visits to Cornell, in seminars, and in various research-planning workshops. RIP

Sunday, January 2, 2011

Further confirmation of hotspot trace overprinting: the end of the "super swell"?

Jackson et al. (2010, p. 17) write 
When backtracked through time using the plate motion model of Wessel and Kroenke [2008], the Rurutu hot spot passed through the WESAM province in the region of Bayonnaise seamount, then its trajectory bent to the northwest with the production of the Gilbert chain (Figure 7). The Macdonald hot spot [Hémond et al., 1994] back-tracks through the ESAM, and the hot spot reconstruction model has the chain turning northeast through the Tokelau chain [see also Koppers and Staudigel, 2005]. The reconstructed path of the Rarotonga hot spot passes along the southern fringes of the Samoan hot spot and trends through the Enriched Mantle 1 (EM1) seamounts in the Western Pacific Seamount Province (WSPC [Koppers et al., 2003]). Lending credence to the plate reconstructions, each lineament exhibits isotopic affinities with its respective active hot spot [Konter et al., 2008]. In summary, evidence from plate motion models supports the hypothesis of a “hot spot highway”: Older volcanism left over from three earlier hot spots could be present in the Samoan region.
Their interpretation is in accord, almost fully, with my previous observations:

Hotspot Frames and Shear-wave Tomography

Wagner, Forsyth, Fouch, and James (2010) have produced an intriguing model of the shear wave velocity structure beneath the northwestern US from Rayleigh wave tomography. I've calculated the predicted locus of the Yellowstone hotspot relative to stable North America and plotted it (Fig. 1) on top of their Fig. 8 (-3 percent velocity variation).

Figure 1. Wagner et al.'s (2010) Fig. 8, showing the -3 percent shear velocity anomaly, overlain by calculated locus of Yellowstone hotspot, present to 50 Ma, relative to stable North America (red line and solid circles every five m.y., calculated using Africa-hotspot parameters of Müller et al., 1993, time scale of Gradstein et al., 2005, North America-Africa parameters of Müller et al., and spline interpolation method of Pilger, 2003).

Sunday, December 26, 2010

Plate Reconstruction Interpolation - Appendix II - C Code

Appendix II: C Source Code Fragments (Plate Reconstruction Interpolation):
(Based on Pilger, R. H., Jr., 2003, Geokinematics: Prelude to Geodynamics, Springer-Verlag, Berlin.)
// Code follows jump
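// (The actual fragments from the book follow the jump. As a stand-alone
// illustration only -- not Pilger's (2003) code, which splines the full
// rotation, pole position included -- here is a hypothetical sketch of the
// simplest case: linearly interpolating the rotation angle with age between
// two total reconstruction rotations that happen to share an Euler pole.
// All parameters below are invented.)

#include <assert.h>
#include <math.h>
#include <stdio.h>

/* A finite rotation: Euler pole and angle, tagged with an age. */
typedef struct {
    double lat, lon; /* pole, degrees */
    double angle;    /* rotation angle, degrees */
    double age;      /* Ma */
} Rotation;

/* Linear interpolation of the angle with age (poles assumed identical). */
static double interp_angle(const Rotation *a, const Rotation *b, double age)
{
    double t = (age - a->age) / (b->age - a->age);
    return a->angle + t * (b->angle - a->angle);
}

int main(void)
{
    /* hypothetical parameters, for illustration only */
    Rotation r10 = { 62.0, -85.0,  5.0, 10.0 };
    Rotation r20 = { 62.0, -85.0, 11.0, 20.0 };

    double ang15 = interp_angle(&r10, &r20, 15.0);
    assert(fabs(ang15 - 8.0) < 1e-12);

    printf("interpolated angle at 15 Ma: %.2f deg\n", ang15);
    return 0;
}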