An essential aspect of deterministic chaos is that small differences in a system's initial values can grow into very large differences over the course of its development. The reason is the exponential growth of these differences, which is in essence another feedback effect: differences grow in proportion to the size they reached in the previous moment of time, so the larger they are, the faster they grow.

This growth of differences is measured by the so-called Lyapunov exponent, named after the Russian mathematician Aleksandr Mikhailovich Lyapunov. The exponent indicates the speed with which two initially close dynamics diverge in phase space (if the Lyapunov exponent is positive) or converge (if it is negative). If the difference of the initial values is \(u_0\), the difference at time \(t\) can be estimated as \(|u_t| = e^{\lambda t} \cdot |u_0|\), with \(\lambda\) denoting the Lyapunov exponent, which can be calculated as \(\lambda = \lim_{t \to \infty} \frac{1}{t} \ln \left| \frac{u_t}{u_0} \right|\).
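As a minimal numerical sketch of this estimate (my own illustration, not from the text), consider the tent map \(x_{t+1} = 2x_t\) for \(x_t < 1/2\), else \(2 - 2x_t\): since \(|f'(x)| = 2\) everywhere, its Lyapunov exponent is known to be exactly \(\ln 2\), and two trajectories started a tiny distance apart recover it:

```python
import numpy as np

# Two trajectories of the tent map, started a tiny distance apart.
# |f'(x)| = 2 everywhere, so the separation doubles each step and the
# Lyapunov exponent is exactly ln 2.
def tent(x):
    return 2 * x if x < 0.5 else 2 - 2 * x

x, y = 0.3, 0.3 + 1e-12
u0 = y - x
for t in range(30):
    x, y = tent(x), tent(y)

# lambda = (1/t) * ln|u_t / u_0|, while the separation is still small
lam = np.log(abs((y - x) / u0)) / 30
print(lam, np.log(2))   # both print ln 2, about 0.6931
```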

The Lyapunov exponent hence indicates how rapidly a complex system of several interdependent dynamics tends to run into deterministic chaos. The inverse of the exponent is the so-called Lyapunov time, the time an initial difference needs to grow by a factor of \(e\), thus allowing certain conclusions about the predictability of a system. Knowing the Lyapunov time allows one to estimate for how long a system can be expected to remain predictable.
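For illustration (with example numbers assumed here, not taken from the text): a system that doubles errors at each time step has \(\lambda = \ln 2\), and an initial error \(u_0\) reaches a tolerance \(u_{max}\) after roughly \(t = \ln(u_{max}/u_0)/\lambda\):

```python
import math

# Sketch of the predictability estimate (example numbers assumed here):
# an initial error u0 grows to a tolerance u_max after roughly
# t = ln(u_max / u0) / lambda; the Lyapunov time is 1 / lambda.
lam = math.log(2)           # a system that doubles errors each step
lyapunov_time = 1 / lam     # time for an error to grow by a factor of e
u0, u_max = 1e-10, 1.0
horizon = math.log(u_max / u0) / lam
print(lyapunov_time, horizon)   # about 1.44 and 33.2 time steps
```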

In higher-dimensional systems there may be a whole spectrum of Lyapunov exponents. Usually only the largest of them is called the Lyapunov exponent, or more accurately the Maximal Lyapunov Exponent (MLE). For a time-discrete system \(x_{t+1} = f(x_t)\) it is defined as:

\[\lambda (x_0) = \lim_{n \to \infty} \frac{1}{n} \sum_{i=0}^{n-1} \ln | f'(x_i)|\]

With respect to the Lorenz equations for weather forecasting, for example, reliable predictions for about four days would be possible with an initial accuracy as provided by 1000 weather stations evenly distributed over the earth's surface. For an equally reliable forecast over 11 days, the number of required stations increases to 100 million. And for predictions over a whole month, about \(10^{20}\) stations would be needed, i.e. one per 5 square millimeters of the earth's surface.
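As a back-of-the-envelope plausibility check (my own sketch, not the original Lorenz calculation): if the initial error scales like the station spacing, i.e. \(u_0 \propto N^{-1/2}\) for \(N\) evenly distributed stations, then each additional stretch of forecast time \(\Delta t\) requires \(N\) to grow by \(e^{2 \lambda \Delta t}\). The quoted jump from 1000 to 100 million stations between day 4 and day 11 then implies a value of \(\lambda\), and extrapolating with it to a 30-day forecast lands in the same region as the quoted \(10^{20}\):

```python
import math

# Back-of-the-envelope check of the station counts (a sketch under the
# assumption that the initial error scales like the station spacing,
# u0 ~ N**-0.5, so dt extra days require N to grow by exp(2*lam*dt)).
lam = math.log(1e8 / 1e3) / (2 * (11 - 4))      # implied exponent per day
stations_30_days = 1e3 * math.exp(2 * lam * (30 - 4))
print(lam)                       # about 0.82 per day
print(f"{stations_30_days:.1e}") # roughly 4e21, same region as 10**20
```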

(The following examples are taken, slightly varied, from the excellent textbook *Introduction to the Modeling and Analysis of Complex Systems* by Hiroki Sayama.)

Basically, the Lyapunov exponent is just the time average of $\ln |f'(x_i)|$ over every state the system visits in the course of the simulation. Here is an example of computing the Lyapunov exponent for the time-discrete system $x_{t+1} = x_t + r - x_t^2$ over varying $r$:


```
import numpy as np
import matplotlib.pyplot as plt
# show plots in the notebook
%matplotlib inline

lambdas = []
maps = []
# define the range of r
rvalues = np.arange(0, 2, 0.01)
# loop through r
for r in rvalues:
    x = 0.1
    result = []
    # iterate the system 100 times
    for t in range(100):
        x = x + r - x**2
        # log of the absolute value of the derivative f'(x) = 1 - 2x
        result.append(np.log(abs(1 - 2 * x)))
    # the Lyapunov exponent is the average
    lambdas.append(np.mean(result))
    # for the map, treat the first 100 iterations as transient time
    # and record the next 20 states
    for t in range(20):
        x = x + r - x**2
        maps.append(x)

fig = plt.figure(figsize=(10, 7))
ax1 = fig.add_subplot(1, 1, 1)
xticks = np.linspace(0, 2, 4000)
# zero line
ax1.plot(xticks, [0] * 4000, 'g-')
# plot the map
ax1.plot(xticks, maps, 'r.', alpha=0.3, label='Map')
# plot the Lyapunov exponent
ax1.plot(rvalues, lambdas, 'b-', linewidth=3, label='Lyapunov exponent')
ax1.grid(True)
ax1.set_xlabel('r')
ax1.legend(loc='best')
ax1.set_title('Map of x(t+1) = x(t) + r - x(t)^2 versus Lyapunov exponent')
```

*(Output: plot of the map $x_{t+1} = x_t + r - x_t^2$ together with its Lyapunov exponent.)*

As can be seen in the above plot, a bifurcation (in the red map) occurs wherever the Lyapunov exponent (blue) approaches zero (green line). The onset of deterministic chaos, i.e. the danger of rapid divergence of two initially close dynamics, is indicated by the Lyapunov exponent becoming positive (crossing the zero line). Those locations where the Lyapunov exponent plunges to deeply negative (at times even infinite) values indicate extremely stable equilibrium points of the system, with $f'(x_t) \approx 0$ for certain $t$.
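These deep dips can be verified directly. For the map $x_{t+1} = x_t + r - x_t^2$ the fixed point satisfies $r = x^2$, so $x^* = \sqrt{r}$, and $f'(x) = 1 - 2x$ vanishes at $x^* = 0.5$, i.e. at $r = 0.25$. A minimal sketch (my own aside, not part of the original notebook):

```python
import numpy as np

# Superstable point of x_{t+1} = x + r - x**2: at r = 0.25 the fixed
# point is x* = 0.5, where f'(x) = 1 - 2x vanishes, so the time average
# of log|f'| collapses to large negative values.
r = 0.25
x = 0.1
logs = []
for t in range(5):   # a few steps suffice: convergence here is quadratic
    x = x + r - x**2
    logs.append(np.log(abs(1 - 2 * x)))
print(np.mean(logs))   # strongly negative, about -10.7
```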

The following example runs the same simulation for the logistic difference equation $x_{t+1} = r x_t (1 - x_t)$:


```
lambdas = []
maps = []
xmin = 2
xmax = 4
mult = (xmax - xmin) * 2000
rvalues = np.arange(xmin, xmax, 0.01)
for r in rvalues:
    x = 0.1
    result = []
    for t in range(100):
        x = r * x * (1 - x)
        # derivative of the logistic map: f'(x) = r - 2rx
        result.append(np.log(abs(r - 2 * r * x)))
    lambdas.append(np.mean(result))
    # treat the first 100 iterations as transient time,
    # then record the next 20 states
    for t in range(20):
        x = r * x * (1 - x)
        maps.append(x)

fig = plt.figure(figsize=(10, 7))
ax1 = fig.add_subplot(1, 1, 1)
xticks = np.linspace(xmin, xmax, mult)
# zero line
ax1.plot(xticks, [0] * mult, 'g-')
ax1.plot(xticks, maps, 'r.', alpha=0.3, label='Logistic map')
ax1.plot(rvalues, lambdas, 'b-', linewidth=3, label='Lyapunov exponent')
ax1.grid(True)
ax1.set_ylim(-1, 1)
ax1.set_xlabel('r')
ax1.legend(loc='best')
ax1.set_title('Logistic map versus Lyapunov exponent')
```

*(Output: plot of the logistic map together with its Lyapunov exponent.)*

The same simulation for the cubic map $x_{t+1} = x_t^3 - r x_t$:


```
lambdas = []
maps = []
xmin = 0
xmax = 3
mult = (xmax - xmin) * 2000
rvalues = np.arange(xmin, xmax, 0.01)
for r in rvalues:
    x = 0.1
    result = []
    for t in range(100):
        x = x**3 - r * x
        # derivative of the cubic map: f'(x) = 3x^2 - r
        result.append(np.log(abs(3 * x**2 - r)))
    lambdas.append(np.mean(result))
    # treat the first 100 iterations as transient time,
    # then record the next 20 states
    for t in range(20):
        x = x**3 - r * x
        maps.append(x)

fig = plt.figure(figsize=(10, 7))
ax1 = fig.add_subplot(1, 1, 1)
xticks = np.linspace(xmin, xmax, mult)
# zero line
ax1.plot(xticks, [0] * mult, 'g-')
ax1.plot(xticks, maps, 'r.', alpha=0.5, label='Cubic map')
ax1.plot(rvalues, lambdas, 'b-', linewidth=3, label='Lyapunov exponent')
ax1.set_ylim(-4, 2)
ax1.grid(True)
ax1.set_xlabel('r')
ax1.legend(loc='best')
ax1.set_title('Cubic map versus Lyapunov exponent')
```

*(Output: plot of the cubic map together with its Lyapunov exponent.)*

And finally for the sinusoid map $x_{t+1} = r \sin x_t$:


```
lambdas = []
maps = []
xmin = 2
xmax = 3
mult = (xmax - xmin) * 2000
rvalues = np.arange(xmin, xmax, 0.01)
for r in rvalues:
    x = 0.1
    result = []
    for t in range(100):
        x = r * np.sin(x)
        # derivative of the sinusoid map: f'(x) = r cos(x)
        result.append(np.log(abs(r * np.cos(x))))
    lambdas.append(np.mean(result))
    # treat the first 100 iterations as transient time,
    # then record the next 20 states
    for t in range(20):
        x = r * np.sin(x)
        maps.append(x)

fig = plt.figure(figsize=(10, 7))
ax1 = fig.add_subplot(1, 1, 1)
xticks = np.linspace(xmin, xmax, mult)
# zero line
ax1.plot(xticks, [0] * mult, 'g-')
ax1.plot(xticks, maps, 'r.', alpha=0.5, label='Sinusoid map')
ax1.plot(rvalues, lambdas, 'b-', linewidth=3, label='Lyapunov exponent')
ax1.set_xlim(2, 3)
ax1.set_ylim(-1, 3)
ax1.grid(True)
ax1.set_xlabel('r')
ax1.legend(loc='best')
ax1.set_title('Sinusoid map versus Lyapunov exponent')
```

*(Output: plot of the sinusoid map together with its Lyapunov exponent.)*
