In a paper published a couple of years ago (Pilger, 2012), I described the application of a simple principle, transformed into a distinctive abstract object, to an optimization problem **(within the plate tectonics paradigm)**: simultaneous reconstruction of lithospheric plates over a range of ages from marine geophysical data. The relation of that principle, **maximum entropy**, to a particular transformation, power-series **fractals**, has rarely been recognized since Pastor-Satorras and Wagensberg derived it. I'm unaware of any other application of fractal forms to optimization problems analogous to that paper. The following derivation is taken from the 2012 paper, with slight modification, in hopes that it might prove useful in fields beyond the earth sciences. I'm investigating applications in a variety of other areas, from plate tectonics to petroleum geology and, oddly enough, the arts.

Pastor-Satorras
and Wagensberg (1998) showed that fractal self-similarity could be derived
utilizing Shannon’s (1948) information theory and Jaynes’ (1957) maximum entropy
formalism. For convenience, straightforward derivations of both maximum
entropy and fractal distributions are provided here.

Mandelbrot (1982) provided a basic definition of the mathematical formulation of fractals:
p = k d^{-D}

For statistical fractals the formula is

p ≈ k d^{-D}.

In either case, p is a measure of a self-similar structure, such as length, width, mass, or even probability, over a range of scales, d; k is a constant, in part dependent on the units of the measure, if any; and D is the fractal dimension.
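
As a quick illustrative sketch (with hypothetical numbers, not data from the 2012 paper), the dimension D of the statistical form can be recovered by linear regression in log-log space, since ln(p) ≈ ln(k) - D ln(d):

```python
import numpy as np

# Hypothetical sketch: recover a fractal dimension D from (scale, measure) pairs,
# assuming the statistical form p ≈ k * d**(-D).  Taking logarithms gives
# ln(p) ≈ ln(k) - D*ln(d), a straight line in log-log space.

rng = np.random.default_rng(0)
d = np.array([1.0, 0.5, 0.25, 0.125, 0.0625])                  # scales
true_D, true_k = 1.6, 2.0
p = true_k * d**(-true_D) * rng.lognormal(0.0, 0.05, d.size)   # noisy measure

slope, intercept = np.polyfit(np.log(d), np.log(p), 1)
print(f"estimated D = {-slope:.3f}, k = {np.exp(intercept):.3f}")
```
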
Given a measure space – for the purposes of this study, a two-dimensional map – the map area can be divided into successive subareas. If the dimensions of the map are 1 by 1 unit, divide the map into square cells of dimension 1/k_{j} by 1/k_{j}, in which, for convenience, k_{j} = 1, 2, 4, 8, … N {k_{j} = 1 for j=1; k_{j} = 2k_{j-1} for j>1}.

1. The amount of information, I, supplied for each successive division j, is equal to ln(1/k_{j}^{2}) = ln(d_{j}) (see Shannon, 1948; or Kanasewich, 1974). The average (expected) information

<I> = Σ_{j=1,N} p_{j} ln(1/k_{j}^{2}) = Σ_{j=1,N} p_{j} ln(d_{j})   *(eq. 1)*

in which p_{j} is the probability of each division. (Note that d_{j} could be equal to either the inverse cell width, k_{j}, or the inverse cell area, k_{j}^{2}, without affecting the form of the derivation.)
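
A minimal sketch (hypothetical numbers) of the expected information of eq. 1 for an assumed probability assignment over the successive divisions:

```python
import numpy as np

# Minimal sketch: the expected information of eq. 1, <I> = sum_j p_j * ln(d_j),
# for a unit map divided into 1/k_j x 1/k_j cells at each successive division.

k = np.array([1, 2, 4, 8, 16], dtype=float)   # successive subdivisions, k_j = 2*k_{j-1}
d = 1.0 / k**2                                # cell area at each division
p = np.full(k.size, 1.0 / k.size)             # an assumed probability per division

I_avg = np.sum(p * np.log(d))
print(f"<I> = {I_avg:.4f} nats")
```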

2. In the presence of inadequate information, Jaynes (1957) proposed (using Grandy's, 1992, paraphrase here): “The optimal probability assignment describing that situation is the one which maximizes the information-theoretic entropy subject to constraints imposed by the information that *is* available.” What is the information-theoretic entropy? From Grandy’s derivation:

An experiment is repeated n times
with potentially m results. Thus there are potentially m^{n} outcomes
of the experiments. Each outcome produces a set of sample numbers n_{j}
and frequencies of occurrence of those sample numbers, f_{j} = n_{j}/n,
1 ≤ j ≤ m.

As a result, the number of outcomes that yield a particular set of frequencies f_{j} is given by the multiplicity factor:

W = n!/[(nf_{1})! … (nf_{m})!]   *(eq. 2)*

The
set of frequencies (f_{j}) that can be realized in the most ways is
that which maximizes the multiplicity, W.
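
A small numerical check of eq. 2 (hypothetical frequencies): for a fixed n, the multiplicity W is largest for uniform frequencies and falls off as the frequencies become more skewed:

```python
from math import factorial, prod

# Check of eq. 2: the multiplicity W = n! / [(n*f_1)! ... (n*f_m)!] counts how many
# outcome sequences realize a given set of frequencies f_j.  The frequency set
# realizable in the most ways maximizes W.

def multiplicity(n, freqs):
    counts = [round(n * f) for f in freqs]        # sample numbers n_j = n * f_j
    assert sum(counts) == n, "frequencies must yield integer counts summing to n"
    return factorial(n) // prod(factorial(c) for c in counts)

n = 12
print(multiplicity(n, [1/3, 1/3, 1/3]))    # uniform frequencies: largest W
print(multiplicity(n, [1/2, 1/3, 1/6]))    # skewed frequencies: smaller W
print(multiplicity(n, [5/6, 1/12, 1/12]))  # strongly skewed: smaller still
```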

It is convenient to note that maximizing W is equivalent to maximizing ln(W). Thus

ln(W) = ln {n!/[(nf_{1})! … (nf_{m})!]} = ln(n!) – Σ_{j=1,m} ln[(nf_{j})!]   *(eq. 3)*

Stirling’s approximation, ln(q!) ≈ q ln(q) – q, is applicable for large n:

ln(W) ≈ n ln(n) – n – Σ_{j=1,m} [nf_{j} ln(f_{j}) + nf_{j} ln(n) – nf_{j}]   *(eq. 4)*

and, then

ln(W)/n ≈ ln(n) – 1 – Σ_{j=1,m} f_{j} ln(f_{j}) – ln(n) Σ_{j=1,m} f_{j} + Σ_{j=1,m} f_{j}   *(eq. 5)*

As Σ_{j=1,m} f_{j} = 1, therefore,

ln(W)/n ≈ – Σ_{j=1,m} f_{j} ln(f_{j})   *(eq. 6)*

If the set of frequencies accurately represents the probabilities (p_{j}, j = 1,m) of the phenomenon being investigated, then

S = ln(W)/n ≈ – Σ_{j=1,m} p_{j} ln(p_{j})   *(eq. 7)*

which is Shannon information
entropy.
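
The limit can be checked numerically; the sketch below (assumed frequencies, log-gamma used to evaluate the factorials of eq. 2 without overflow) shows ln(W)/n converging to the Shannon entropy of eqs. 6-7 as n grows:

```python
import numpy as np
from math import lgamma

# Numerical check of eqs. 6-7: for large n, ln(W)/n approaches the Shannon
# entropy -sum_j f_j * ln(f_j) of the frequency set.

def ln_multiplicity_per_trial(n, freqs):
    counts = np.round(n * np.asarray(freqs)).astype(int)
    ln_W = lgamma(n + 1) - sum(lgamma(c + 1) for c in counts)   # ln of eq. 2
    return ln_W / n

f = [0.5, 0.3, 0.2]
for n in (10, 100, 10_000):
    print(n, ln_multiplicity_per_trial(n, f))
print("Shannon entropy:", -sum(p * np.log(p) for p in f))
```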

Shannon information entropy has the
following properties (Jaynes, 1957): (1) It is a continuous function of the
probabilities, (2) if all probabilities are equal, it is a monotonic function
of n (the number of probabilities), (3) grouping of events produces the same
value as separate events, and (4) S(m) + S(n) = S(mn), which can only be
satisfied by a form such as S(n) = k ln (n).

3. In the Pastor-Satorras and Wagensberg formalism, using the Lagrangian variational principle, information entropy (eq. 7) is maximized subject to the constraining information (eq. 1) and normalization of the probabilities (Σ_{j=1,N} p_{j} = 1) with Lagrange multipliers α and β:

F = – Σ_{j=1,N} p_{j} ln(p_{j}) + α [– Σ_{j=1,N} p_{j} ln(d_{j})] + β [1 – Σ_{j=1,N} p_{j}]   *(eq. 8)*

Maximizing F for each p_{i}:

∂F/∂p_{i} = 0 = – ln(p_{i}) – 1 – α ln(d_{i}) – β   *(eq. 9)*

Reorganizing:

ln(p_{i}) = – 1 – α ln(d_{i}) – β   *(eq. 10)*

exp[ln(p_{i})] = p_{i} = exp(-β - 1) d_{i}^{-α}   *(eq. 11)*

or

p_{i} = γ d_{i}^{-α}   *(eq. 12)*

in which α and γ = exp(-β - 1) are constants.

For such a power series, eq. 12, the probabilities exhibit self-similarity across scales, and α, then, is equivalent to Mandelbrot’s (1975, 1982) fractal dimension, D, above.
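
The derivation can also be checked numerically. The sketch below (an illustration with assumed scales and an assumed constraint value, not code from the 2012 paper) maximizes Shannon entropy subject to normalization and a fixed average information, then verifies that the resulting probabilities fall on a power law p_{j} = γ d_{j}^{-α} in log-log space:

```python
import numpy as np
from scipy.optimize import minimize

# Numerical check of eqs. 8-12: maximize Shannon entropy subject to normalization
# and a fixed average information sum_j p_j*ln(d_j), then verify the result is a
# power law p_j = gamma * d_j**(-alpha).

k = np.array([1, 2, 4, 8, 16, 32], dtype=float)
d = 1.0 / k**2                       # cell areas d_j
target_I = -4.0                      # assumed value of the information constraint

def neg_entropy(p):
    p = np.clip(p, 1e-12, None)      # guard against log(0) during the search
    return np.sum(p * np.log(p))

constraints = (
    {"type": "eq", "fun": lambda p: np.sum(p) - 1.0},
    {"type": "eq", "fun": lambda p: np.sum(p * np.log(d)) - target_I},
)
p0 = np.full(d.size, 1.0 / d.size)
res = minimize(neg_entropy, p0, bounds=[(1e-9, 1.0)] * d.size, constraints=constraints)

# In log-log space a power law is a straight line: ln(p_j) = ln(gamma) - alpha*ln(d_j)
slope, intercept = np.polyfit(np.log(d), np.log(res.x), 1)
print(f"alpha ≈ {-slope:.3f}, gamma ≈ {np.exp(intercept):.3f}")
```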

The novelty of the Pastor-Satorras and Wagensberg (1998) derivation is the incorporation of the constraint of average information, <I> = Σ_{j=1,N} p_{j} ln(d_{j}), into the maximum entropy formalism (eq. 8). Conventional applications of the formalism utilize statistical measures, such as the mean or variance from experimental data, e.g., <f> = Σ_{j=1,N} p_{j} f(x_{j}), rather than information.

Fractal structure can be interpreted as the manifestation of iterative processes that propagate information kernels across a range of scales in such a manner that they achieve the most probable outcome. The fractal structure, the information, is thereby apparent across every scale over which the process operates.

**References**

(Links as of January 13, 2014; if link is broken, it may be necessary to search for it.)