# Iterated function

In mathematics, an iterated function is a function X → X (that is, a function from some set X to itself) which is obtained by composing another function f : X → X with itself a certain number of times. The process of repeatedly applying the same function is called iteration. In this process, starting from some initial number, the result of applying a given function is fed again in the function as input, and this process is repeated.


Iterated functions are objects of study in computer science, fractals, dynamical systems, mathematics and renormalization group physics.


## Definition

The formal definition of an iterated function on a set X follows.


Let X be a set and f : X → X be a function.


Define f n as the n-th iterate of f, where n is a non-negative integer, by:


$\displaystyle{ f^0 ~ \stackrel{\mathrm{def}}{=} ~ \operatorname{id}_X }$

and


$\displaystyle{ f^{n+1} ~ \stackrel{\mathrm{def}}{=} ~ f \circ f^{n}, }$

where idX is the identity function on X and f ∘ g denotes function composition. That is,


(f ∘ g)(x) = f (g(x))

Because the notation f n may refer to both iteration (composition) of the function f or exponentiation of the function f (the latter is commonly used in trigonometry), some mathematicians choose to write f °n for the n-th iterate of the function f.
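The inductive definition above translates directly into code. A minimal Python sketch (the helper name `iterate` is ours, chosen for illustration):

```python
# A minimal sketch of functional iteration: iterate(f, n) builds the n-th
# iterate f^n by repeated composition, with f^0 defined as the identity map.
def iterate(f, n):
    """Return the n-th iterate of f, for a non-negative integer n."""
    def f_n(x):
        for _ in range(n):
            x = f(x)
        return x
    return f_n

double = lambda x: 2 * x
assert iterate(double, 0)(7) == 7    # f^0 = id
assert iterate(double, 3)(7) == 56   # 2*(2*(2*7))
```

The semigroup property of iterates discussed in the next section follows immediately from this construction.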


## Abelian property and iteration sequences

In general, the following identity holds for all non-negative integers m and n,


$\displaystyle{ f^m \circ f^n = f^n \circ f^m = f^{m+n}~. }$

This is structurally identical to the property of exponentiation that $\displaystyle{ a^m a^n = a^{m+n} }$.


In general, for arbitrary (negative, non-integer, etc.) indices m and n, this relation is called the translation functional equation, cf. Schröder's equation and Abel equation. On a logarithmic scale, this reduces to the nesting property of Chebyshev polynomials, Tm(Tn(x)) = Tmn(x), since Tn(x) = cos(n arccos(x)).


The relation (f m )n(x) = (f n )m(x) = f mn(x) also holds, analogous to the property of exponentiation that (am )n = (an )m = amn.


The sequence of functions f n is called a Picard sequence, named after Charles Émile Picard.


For a given x in X, the sequence of values f n(x) is called the orbit of x.


If f n (x) = f n+m (x) for some integer m, the orbit is called a periodic orbit. The smallest such value of m for a given x is called the period of the orbit. The point x itself is called a periodic point. The cycle detection problem in computer science is the algorithmic problem of finding the first periodic point in an orbit, and the period of the orbit.
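The cycle-detection problem mentioned above can be sketched with Floyd's tortoise-and-hare algorithm; the example map on a finite set is an arbitrary choice of ours:

```python
# A sketch of Floyd's tortoise-and-hare cycle detection: given f and a
# starting point x0, return (mu, lam) -- the index of the first periodic
# point in the orbit, and the period of the orbit.
def floyd(f, x0):
    # Phase 1: find a meeting point inside the cycle
    tortoise, hare = f(x0), f(f(x0))
    while tortoise != hare:
        tortoise, hare = f(tortoise), f(f(hare))
    # Phase 2: locate the first periodic point (index mu)
    mu, tortoise = 0, x0
    while tortoise != hare:
        tortoise, hare = f(tortoise), f(hare)
        mu += 1
    # Phase 3: measure the period lam
    lam, hare = 1, f(tortoise)
    while tortoise != hare:
        hare, lam = f(hare), lam + 1
    return mu, lam

f = lambda x: (x * x + 1) % 50   # an arbitrary self-map of {0, ..., 49}
assert floyd(f, 0) == (1, 6)     # orbit: 0, 1, 2, 5, 26, 27, 30, 1, ...
```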


## Fixed points

If f(x) = x for some x in X (that is, the period of the orbit of x is 1), then x is called a fixed point of the iterated sequence. The set of fixed points is often denoted as Fix(f ). There exist a number of fixed-point theorems that guarantee the existence of fixed points in various situations, including the Banach fixed point theorem and the Brouwer fixed point theorem.


There are several techniques for convergence acceleration of the sequences produced by fixed point iteration. For example, the Aitken method applied to an iterated fixed point is known as Steffensen's method, and produces quadratic convergence.
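As an illustration of such acceleration, here is a sketch of Steffensen's method (Aitken's Δ² process applied to the fixed-point map); the test function cos and the starting point are our choices:

```python
import math

# Sketch of Steffensen's method: Aitken extrapolation applied to the
# fixed-point iteration x -> g(x); converges quadratically near a fixed point.
def steffensen(g, x, tol=1e-12, max_iter=100):
    for _ in range(max_iter):
        gx = g(x)
        ggx = g(gx)
        denom = ggx - 2 * gx + x
        if denom == 0:           # already (numerically) at the fixed point
            return x
        x_new = x - (gx - x) ** 2 / denom
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Classic example: the unique fixed point of cos, x = cos(x)
root = steffensen(math.cos, 1.0)
assert abs(root - math.cos(root)) < 1e-10
```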


## Limiting behaviour

Upon iteration, one may find that there are sets that shrink and converge towards a single point. In such a case, the point that is converged to is known as an attractive fixed point. Conversely, iteration may give the appearance of points diverging away from a single point; this would be the case for an unstable fixed point.


When the points of the orbit converge to one or more limits, the set of accumulation points of the orbit is known as the limit set or the ω-limit set.


The ideas of attraction and repulsion generalize similarly; one may categorize iterates into stable sets and unstable sets, according to the behaviour of small neighborhoods under iteration. (Also see Infinite compositions of analytic functions.)


Other limiting behaviours are possible; for example, wandering points are points that move away, and never come back even close to where they started.


## Invariant measure

If one considers the evolution of a density distribution, rather than that of individual point dynamics, then the limiting behavior is given by the invariant measure. It can be visualized as the behavior of a point-cloud or dust-cloud under repeated iteration. The invariant measure is an eigenstate of the Ruelle-Frobenius-Perron operator or transfer operator, corresponding to an eigenvalue of 1. Smaller eigenvalues correspond to unstable, decaying states.


In general, because repeated iteration corresponds to a shift, the transfer operator and its adjoint, the Koopman operator, can both be interpreted as shift operators acting on a shift space. The theory of subshifts of finite type provides general insight into many iterated functions, especially those leading to chaos.


## Fractional iterates and flows, and negative iterates

The notion f 1/n must be used with care when the equation g n(x) = f(x) has multiple solutions, which is normally the case, as in Babbage's equation of the functional roots of the identity map. For example, for n = 2 and f(x) = 4x − 6, both g(x) = 6 − 2x and g(x) = 2x − 2 are solutions; so the expression f ½(x) doesn't denote a unique function, just as numbers have multiple algebraic roots. The issue is quite similar to division by zero. The roots chosen are normally the ones belonging to the orbit under study.

The notion must be used with care when the equation f(x)}} has multiple solutions, which is normally the case, as in Babbage's equation of the functional roots of the identity map. For example, for 2}} and 4x − 6}}, both  6 − 2x}} and 2x − 2}} are solutions; so the expression doesn't denote a unique function, just as numbers have multiple algebraic roots. The issue is quite similar to division by zero. The roots chosen are normally the ones belonging to the orbit under study.
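The two functional square roots named above can be checked directly; a one-line verification in Python:

```python
# Both candidate half-iterates of f(x) = 4x - 6 from the text:
# composing either with itself reproduces f.
f  = lambda x: 4 * x - 6
g1 = lambda x: 6 - 2 * x
g2 = lambda x: 2 * x - 2

for x in [-3.0, 0.0, 2.5, 10.0]:
    assert g1(g1(x)) == f(x)   # 6 - 2(6 - 2x) = 4x - 6
    assert g2(g2(x)) == f(x)   # 2(2x - 2) - 2 = 4x - 6
```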

Fractional iteration of a function can be defined: for instance, a half iterate of a function f is a function g such that g(g(x)) = f(x). This function g(x) can be written using the index notation as f ½(x). Similarly, f 1/3(x) is the function defined such that f 1/3(f 1/3(f 1/3(x))) = f(x), while f 2/3(x) may be defined as equal to f 1/3(f 1/3(x)), and so forth, all based on the principle, mentioned earlier, that f m ∘ f n = f m+n. This idea can be generalized so that the iteration count n becomes a continuous parameter, a sort of continuous "time" of a continuous orbit.


In such cases, one refers to the system as a flow. (cf. Section on conjugacy below.)


Negative iterates correspond to function inverses and their compositions. For example, f −1(x) is the normal inverse of f, while f −2(x) is the inverse composed with itself, i.e. f −2(x) = f −1(f −1(x)). Fractional negative iterates are defined analogously to fractional positive ones; for example, f −½(x) is defined such that f − ½(f −½(x)) = f −1(x), or, equivalently, such that f −½(f ½(x)) = f 0(x) = x.


### Some formulas for fractional iteration

One of several methods of finding a series formula for fractional iteration, making use of a fixed point, is as follows.


1. First determine a fixed point a for the function such that f(a) = a.

2. Define f n(a) = a for all n belonging to the reals. This, in some ways, is the most natural extra condition to place upon the fractional iterates.

3. Expand f n(x) around the fixed point a as a Taylor series,

   $\displaystyle{ f^n(x) = f^n(a) + (x-a)\left.\frac{d}{dx}f^n(x)\right|_{x=a} + \frac{(x-a)^2}{2}\left.\frac{d^2}{dx^2}f^n(x)\right|_{x=a} + \cdots }$

4. Expand out,

   $\displaystyle{ f^n(x) = f^n(a) + (x-a)\, f'(a)f'(f(a))f'(f^2(a))\cdots f'(f^{n-1}(a)) + \cdots }$

5. Substitute in f k(a) = a, for any k,

   $\displaystyle{ f^n(x) = a + (x-a)\, f'(a)^n + \frac{(x-a)^2}{2}\left(f''(a)f'(a)^{n-1}\right)\left(1+f'(a)+\cdots+f'(a)^{n-1} \right)+\cdots }$

6. Make use of the geometric progression to simplify terms,

   $\displaystyle{ f^n(x) = a + (x-a)\, f'(a)^n + \frac{(x-a)^2}{2}\left(f''(a)f'(a)^{n-1}\right)\frac{f'(a)^n-1}{f'(a)-1}+\cdots }$

There is a special case when f '(a) = 1,

$\displaystyle{ f^n(x) = x + \frac{(x-a)^2}{2}\, n f''(a) + \frac{(x-a)^3}{6}\left(\frac{3}{2}n(n-1)\, f''(a)^2 + n f'''(a)\right)+\cdots }$

This can be carried on indefinitely, although inefficiently, as the latter terms become increasingly complicated. A more systematic procedure is outlined in the following section on Conjugacy.


#### Example 1

For example, setting f(x) = Cx + D gives the fixed point a = D/(1 − C), so the above formula terminates to just


$\displaystyle{ f^n(x)=\frac{D}{1-C} + \left(x-\frac{D}{1-C}\right)C^n=C^nx+\frac{1-C^n}{1-C}D ~, }$

which is trivial to check.
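Indeed, a quick numerical check (the sample constants C = 3, D = 2 are our choice), including at a fractional index:

```python
# Closed form f^n(x) = C^n x + (1 - C^n)/(1 - C) * D for f(x) = C x + D,
# checked against direct iteration for sample constants C = 3, D = 2.
C, D = 3.0, 2.0
f = lambda x: C * x + D

def f_n(x, n):
    return C**n * x + (1 - C**n) / (1 - C) * D

x0 = 0.5
y = x0
for n in range(7):
    assert abs(y - f_n(x0, n)) < 1e-9
    y = f(y)

# The same formula interpolates fractional iterates: applying the
# half iterate twice reproduces one full step of f.
assert abs(f_n(f_n(x0, 0.5), 0.5) - f(x0)) < 1e-9
```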


#### Example 2

Find the value of $\displaystyle{ \sqrt{2}^{ \sqrt{2}^{\sqrt{2}^{\cdots}} } }$ where this is done n times (and possibly the interpolated values when n is not an integer). We have f(x) = $\displaystyle{ \sqrt{2}^x }$. A fixed point is a = f(2) = 2.


So, setting x = 1, f n(1) expanded around the fixed point value of 2 is an infinite series,


$\displaystyle{ \sqrt{2}^{ \sqrt{2}^{\sqrt{2}^{\cdots}} } = f^n(1) = 2 - (\ln 2)^n + \frac{(\ln 2)^{n+1}((\ln 2)^n-1)}{4(\ln 2-1)} - \cdots }$

which, taking just the first three terms, is correct to the first decimal place when n is positive—cf. Tetration: f n(1) = ⁿ√2, the n-th tetration of √2. (Using the other fixed point a = f(4) = 4 causes the series to diverge.)
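The claim is easy to probe numerically: direct iteration of the tower versus the three-term series around the fixed point 2.

```python
import math

# The tower sqrt(2)^(sqrt(2)^(...)) of height n is f^n(1) for f(x) = sqrt(2)^x.
f = lambda x: math.sqrt(2) ** x

def series(n):
    # first three terms of the expansion around the fixed point a = 2
    L = math.log(2)
    return 2 - L**n + L**(n + 1) * (L**n - 1) / (4 * (L - 1))

x = 1.0
for n in range(1, 8):
    x = f(x)
    # "correct to the first decimal place" for positive n
    assert abs(x - series(n)) < 0.1
```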


For n = −1, the series computes the inverse function 2 ln x / ln 2.


#### Example 3

With the function f(x) = $\displaystyle{ x^b }$, expand around the fixed point 1 to get the series


$\displaystyle{ f^n(x) = 1 + b^n(x-1) + \frac{1}{2}b^{n}(b^n-1)(x-1)^2 + \frac{1}{3!}b^n (b^n-1)(b^n-2)(x-1)^3 + \cdots ~, }$

which is simply the Taylor series of $\displaystyle{ x^{b^n} }$ expanded around 1.


## Conjugacy

If f and g are two iterated functions, and there exists a homeomorphism h such that g = h−1 ∘ f ∘ h, then f and g are said to be topologically conjugate.


Clearly, topological conjugacy is preserved under iteration, as g n = h−1 ∘ f n ∘ h. Thus, if one can solve for one iterated function system, one also has solutions for all topologically conjugate systems. For example, the tent map is topologically conjugate to the logistic map. As a special case, taking f(x) = x + 1, one has the iteration of g(x) = h−1(h(x) + 1) as


gn(x) = h−1(h(x) + n),   for any function h.
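The conjugate-iteration formula g n(x) = h−1(h(x) + n) is easy to verify for a concrete choice; taking h = log (our hypothetical example, so h−1 = exp) gives g(x) = e·x:

```python
import math

# Hypothetical choice h = log, h^{-1} = exp, so g(x) = h^{-1}(h(x) + 1) = e*x,
# and g^n(x) = h^{-1}(h(x) + n) without composing g at all.
h, h_inv = math.log, math.exp

def g(x):
    return h_inv(h(x) + 1)

def g_n(x, n):
    return h_inv(h(x) + n)

x0 = 2.0
y = x0
for n in range(1, 6):
    y = g(y)
    assert abs(y - g_n(x0, n)) < 1e-8
```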

Making the substitution x = h−1(y) = ϕ(y) yields


g(ϕ(y)) = ϕ(y+1),   a form known as the Abel equation.

Even in the absence of a strict homeomorphism, near a fixed point, here taken to be at x = 0, f(0) = 0, one may often solve Schröder's equation for a function Ψ, which makes f(x) locally conjugate to a mere dilation, g(x) = f '(0) x, that is


f(x) = Ψ−1(f '(0) Ψ(x)).
Thus, its iteration orbit, or flow, under suitable provisions (e.g., f '(0) ≠ 1), amounts to the conjugate of the orbit of the monomial,


f n(x) = Ψ−1(f '(0)n Ψ(x)),


where n in this expression serves as a plain exponent: functional iteration has been reduced to multiplication! Here, however, the exponent n no longer needs be integer or positive, and is a continuous "time" of evolution for the full orbit: the monoid of the Picard sequence (cf. transformation semigroup) has generalized to a full continuous group.


Iterates of the sine function (blue), in the first half-period. Half-iterate (orange), i.e., the sine's functional square root; the functional square root of that, the quarter-iterate (black) above it; and further fractional iterates up to the 1/64th. The functions below the (blue) sine are six integral iterates below it, starting with the second iterate (red) and ending with the 64th iterate. The green envelope triangle represents the limiting null iterate, the sawtooth function serving as the starting point leading to the sine function. The dashed line is the negative first iterate, i.e. the inverse of sine (arcsin). (For the notation, see http://www.physics.miami.edu/~curtright/therootsofsin.pdf.)

This method (perturbative determination of the principal eigenfunction Ψ, cf. Carleman matrix) is equivalent to the algorithm of the preceding section, albeit, in practice, more powerful and systematic.


## Markov chains

If the function is linear and can be described by a stochastic matrix, that is, a matrix whose rows or columns sum to one, then the iterated system is known as a Markov chain.
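A small sketch with a hypothetical two-state stochastic matrix of our choosing: iterating the linear map drives any initial distribution to the stationary one, the eigenvector of eigenvalue 1 echoing the invariant-measure discussion above.

```python
# A 2-state stochastic matrix (rows sum to one); iterating x -> xP
# converges to the stationary distribution pi with pi = pi P.
P = [[0.9, 0.1],
     [0.5, 0.5]]

def step(x, P):
    n = len(P)
    return [sum(x[i] * P[i][j] for i in range(n)) for j in range(n)]

x = [1.0, 0.0]
for _ in range(100):
    x = step(x, P)

# For this P the stationary distribution is (5/6, 1/6).
assert abs(x[0] - 5 / 6) < 1e-9
assert abs(x[1] - 1 / 6) < 1e-9
```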


## Examples

There are many chaotic maps.


Well-known iterated functions include the Mandelbrot set and iterated function systems.


Ernst Schröder, in 1870, worked out special cases of the logistic map, such as the chaotic case f(x) = 4x(1 − x), so that $\displaystyle{ \Psi(x) = \arcsin^2(\sqrt{x}) }$, hence $\displaystyle{ f^n(x) = \sin^2(2^n \arcsin(\sqrt{x})) }$.
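Schröder's closed form for the chaotic case can be compared against direct iteration; a short sketch:

```python
import math

# Chaotic logistic map and Schröder's closed form for its n-th iterate:
# f^n(x) = sin^2(2^n * arcsin(sqrt(x)))
f = lambda x: 4 * x * (1 - x)

def f_n(x, n):
    return math.sin(2**n * math.asin(math.sqrt(x))) ** 2

x0 = 0.3
y = x0
for n in range(1, 10):
    y = f(y)
    assert abs(y - f_n(x0, n)) < 1e-6
```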


A nonchaotic case Schröder also illustrated with his method, f(x) = 2x(1 − x), yielded $\displaystyle{ \Psi(x) = -\tfrac{1}{2}\ln(1-2x) }$, and hence $\displaystyle{ f^n(x) = -\tfrac{1}{2}\left((1-2x)^{2^n} - 1\right) }$.


If f is the action of a group element on a set, then the iterated function corresponds to a free group.


Most functions do not have explicit general closed-form expressions for the n-th iterate. The table below lists some that do. Note that all these expressions are valid even for non-integer and negative n, as well as non-negative integer n.


| $\displaystyle{ f(x) }$ | $\displaystyle{ f^n(x) }$ |
|---|---|
| $\displaystyle{ x+b }$ | $\displaystyle{ x+nb }$ |
| $\displaystyle{ ax+b \ (a\ne 1) }$ | $\displaystyle{ a^nx+\frac{a^n-1}{a-1}b }$ |
| $\displaystyle{ ax^b \ (b\ne 1) }$ | $\displaystyle{ a^{\frac{b^n-1}{b-1}}x^{b^n} }$ |
| $\displaystyle{ ax^2 + bx + \frac{b^2 - 2b}{4a} }$ (see note) | $\displaystyle{ \frac{2\alpha^{2^n} - b}{2a} }$, where $\displaystyle{ \alpha = \frac{2ax + b}{2} }$ |
| $\displaystyle{ ax^2 + bx + \frac{b^2 - 2b - 8}{4a} }$ (see note) | $\displaystyle{ \frac{2\alpha^{2^n} + 2\alpha^{-2^n} - b}{2a} }$, where $\displaystyle{ \alpha = \frac{2ax + b \pm \sqrt{(2ax + b)^2 - 16}}{4} }$ |
| $\displaystyle{ \frac{ax + b}{cx + d} }$ (rational difference equation) | $\displaystyle{ \frac{a}{c} + \frac{bc - ad}{c} \left[ \frac{(cx - a + \alpha)\alpha^{n - 1} - (cx - a + \beta)\beta^{n - 1}}{(cx - a + \alpha)\alpha^{n} - (cx - a + \beta)\beta^{n}} \right] }$, where $\displaystyle{ \alpha = \frac{a + d + \sqrt{(a - d)^2 + 4bc}}{2} }$ and $\displaystyle{ \beta = \frac{a + d - \sqrt{(a - d)^2 + 4bc}}{2} }$ |
| $\displaystyle{ \sqrt{x^2 + b} }$ | $\displaystyle{ \sqrt{x^2 + bn} }$ |
| $\displaystyle{ \sqrt{ax^2 + b} \ (a \ne 1) }$ | $\displaystyle{ \sqrt{a^nx^2 + \frac{a^n - 1}{a - 1}b} }$ |
| $\displaystyle{ g^{-1}\Big(f\bigl(g(x)\bigr)\Big) }$ | $\displaystyle{ g^{-1}\Bigl(f^n\bigl(g(x)\bigr)\Bigr) }$ |
| $\displaystyle{ g^{-1}\bigl(g(x)+b\bigr) }$ (generic Abel equation) | $\displaystyle{ g^{-1}\bigl(g(x)+nb\bigr) }$ |
| $\displaystyle{ g^{-1}\Bigl(a\ g(x)+b\Bigr) \ (a\ne 1) }$ | $\displaystyle{ g^{-1}\Bigl(a^ng(x)+\frac{a^n-1}{a-1}b\Bigr) }$ |
| $\displaystyle{ T_m (x)=\cos (m \arccos x) }$ (Chebyshev polynomial for integer m) | $\displaystyle{ T_{mn}(x)=\cos(mn \arccos x) }$ |

Note: these two special cases of $\displaystyle{ ax^2 + bx + c }$ are the only cases that have a closed-form solution. Choosing b = 2 = –a and b = 4 = –a, respectively, further reduces them to the nonchaotic and chaotic logistic cases discussed prior to the table.
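Two of the table's rows can be spot-checked against direct iteration (the sample constants are our choice):

```python
import math

# Spot-check two closed forms from the table for integer n.
def iterate(f, n, x):
    for _ in range(n):
        x = f(x)
    return x

a, b = 2.0, 3.0
lin = lambda x: a * x + b              # row ax + b
sqr = lambda x: math.sqrt(x * x + b)   # row sqrt(x^2 + b)

x, n = 1.5, 6
assert abs(iterate(lin, n, x) - (a**n * x + (a**n - 1) / (a - 1) * b)) < 1e-9
assert abs(iterate(sqr, n, x) - math.sqrt(x * x + b * n)) < 1e-9
```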


Some of these examples are related among themselves by simple conjugacies. A few further examples, essentially amounting to simple conjugacies of Schröder's examples, can be found in ref.


## Means of study

Iterated functions can be studied with the Artin–Mazur zeta function and with transfer operators.

## In computer science

In computer science, iterated functions occur as a special case of recursive functions, which in turn anchor the study of such broad topics as lambda calculus, or narrower ones, such as the denotational semantics of computer programs.

## Definitions in terms of iterated functions

Two important functionals can be defined in terms of iterated functions. These are summation:

$\displaystyle{ \left\{b+1,\sum_{i=a}^b g(i)\right\} \equiv \left( \{i,x\} \rightarrow \{ i+1 ,x+g(i) \}\right)^{b-a+1} \{a,0\} }$

and the equivalent product:

$\displaystyle{ \left\{b+1,\prod_{i=a}^b g(i)\right\} \equiv \left( \{i,x\} \rightarrow \{ i+1 ,x\,g(i) \}\right)^{b-a+1} \{a,1\} }$
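A minimal Python sketch of this idea (the helper names `iterate`, `summation`, and `product` are mine, not standard): both functionals arise as the (b − a + 1)-th iterate of a single step map acting on a state pair {i, x}.

```python
def iterate(f, n, state):
    """Apply f to state n times, i.e. compute f^n(state)."""
    for _ in range(n):
        state = f(state)
    return state

def summation(g, a, b):
    # (b - a + 1) iterations of (i, x) -> (i + 1, x + g(i)), starting from (a, 0)
    step = lambda s: (s[0] + 1, s[1] + g(s[0]))
    return iterate(step, b - a + 1, (a, 0))[1]

def product(g, a, b):
    # same step map with multiplication, starting from (a, 1)
    step = lambda s: (s[0] + 1, s[1] * g(s[0]))
    return iterate(step, b - a + 1, (a, 1))[1]

print(summation(lambda i: i, 1, 10))  # 55
print(product(lambda i: i, 1, 5))     # 120
```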

## Functional derivative

The functional derivative of an iterated function is given by the recursive formula:

$\displaystyle{ \frac{ \delta f^N(x)}{\delta f(y)} = f'( f^{N-1}(x) ) \frac{ \delta f^{N-1}(x)}{\delta f(y)} + \delta( f^{N-1}(x) - y ) }$

## Lie's data transport equation

Iterated functions crop up in the series expansion of combined functions, such as g(f(x)).

Given the iteration velocity, or beta function (physics),

$\displaystyle{ v(x) = \left. \frac{\partial f^n(x)}{\partial n} \right|_{n=0} }$

for the nth iterate of the function f, we have

$\displaystyle{ g(f(x)) = \exp\left[ v(x) \frac{\partial}{\partial x} \right] g(x). }$

For example, for rigid advection, if f(x) = x + t, then v(x) = t. Consequently, g(x + t) = exp(t ∂/∂x) g(x), the action of a plain shift operator.

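This shift-operator identity can be checked numerically. The sketch below (the names `shift_by_operator` and `derivs` are mine) uses g(x) = x³, for which the exponential series exp(t ∂/∂x) = Σ tᵏ/k! ∂ᵏ/∂xᵏ terminates after the third derivative, so the operator sum reproduces g(x + t) exactly:

```python
import math

def shift_by_operator(derivs, x, t):
    """Evaluate sum_k t^k/k! * g^(k)(x), given a list of derivative functions."""
    return sum(t**k / math.factorial(k) * d(x) for k, d in enumerate(derivs))

# g and its successive derivatives: x^3, 3x^2, 6x, 6 (all higher ones vanish)
derivs = [lambda x: x**3, lambda x: 3 * x**2, lambda x: 6 * x, lambda x: 6.0]

x, t = 1.5, 0.7
lhs = (x + t)**3                       # g(x + t), the rigidly advected value
rhs = shift_by_operator(derivs, x, t)  # exp(t d/dx) g(x)
print(abs(lhs - rhs) < 1e-9)  # True
```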
Conversely, one may specify f(x) given an arbitrary v(x), through the generic Abel equation discussed above,

$\displaystyle{ f(x) = h^{-1}(h(x)+1) , }$

where

$\displaystyle{ h(x) = \int \frac{1}{v(x)} \, dx . }$

This is evident by noting that

$\displaystyle{ f^n(x)=h^{-1}(h(x)+n)~. }$

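A concrete instance, assuming v(x) = x: then h(x) = ∫ dx/x = ln x, h⁻¹(y) = eʸ, so f(x) = h⁻¹(h(x) + 1) = e·x, and the Abel form predicts f^n(x) = eⁿ·x. The Python sketch below (function names mine) compares direct iteration against this closed form:

```python
import math

h = math.log       # h(x) = ln x, from v(x) = x
h_inv = math.exp   # its inverse

def f(x):
    """One step of the generic Abel equation: f = h^{-1}(h(x) + 1)."""
    return h_inv(h(x) + 1)

def f_n_direct(x, n):
    """Compute f^n(x) by actually iterating f."""
    for _ in range(n):
        x = f(x)
    return x

def f_n_closed(x, n):
    """Closed form f^n(x) = h^{-1}(h(x) + n)."""
    return h_inv(h(x) + n)

x = 2.0
print(abs(f_n_direct(x, 5) - f_n_closed(x, 5)) < 1e-9)  # True
```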

For continuous iteration index t, then, now written as a subscript, this amounts to Lie's celebrated exponential realization of a continuous group,

$\displaystyle{ e^{t~\frac{\partial ~~}{\partial h(x)}} g(x)= g(h^{-1}(h(x )+t))= g(f_t(x)). }$

The initial flow velocity v suffices to determine the entire flow, given this exponential realization, which automatically provides the general solution to the translation functional equation (… Equations and Their Applications, Dover Books on Mathematics, 2006, Ch. 6),

$\displaystyle{ f_t(f_\tau (x))=f_{t+\tau} (x) ~. }$

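The translation equation is easy to check for the same example flow: with h(x) = ln x, the continuous iterate is f_t(x) = h⁻¹(h(x) + t) = eᵗ·x, and composing iterates adds the continuous indices. A small Python sketch (names mine):

```python
import math

def f_t(t, x):
    """Continuous iterate f_t(x) = h^{-1}(h(x) + t) with h = ln."""
    return math.exp(math.log(x) + t)

t, tau, x = 0.3, 1.1, 2.5
lhs = f_t(t, f_t(tau, x))  # f_t composed with f_tau
rhs = f_t(t + tau, x)      # single iterate with index t + tau
print(abs(lhs - rhs) < 1e-9)  # True
```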