A dynamical problem

elliott271828

Does anyone know how to prove that for any invertible function f(x), mapping R→R, there is at least one function g(x) such that g(g(x))=f(x), or find a counterexample?

FifthDimension

Oh geez I'm still on Algebra ...=p

strangequark

Hi e-dog,

I would think that there are an infinite number of counterexamples if one wanted to be sneaky and take f(x)=g(x), because there are certainly many functions where g(x) is not equal to g(g(x)). However, it is also very easy to prove on a limited case-by-case basis that this is true for certain types of functions... for example, you should be able to make a general proof that this is true for f(x)=ax^b very quickly.

elliott271828

I am afraid I don't understand. If f(x)=g(x), then f(x)=g(g(x))=f(f(x)), so f(x)=f(f(x)); and since f(x) is invertible, x=f(x). In that case g must satisfy g(g(x))=x, and 1/x, x, and 1-x are a few examples of functions satisfying that restriction.

Elroch

Excellent question, elliott271828. I think this would be a tough problem for Analysis 1 at Cambridge, which is where I read mathematics.

Firstly, it may be best to ignore strangequark's comment, as he appears to have misunderstood what you were asking.

Secondly, I assume you are intuitively assuming f is continuous and/or monotone (the conditions are equivalent for a bijection from R to R). Without such a restriction on f, the problem could involve extremely pathological examples that are unlikely to be what you are thinking of.

If f is monotone/continuous, I believe the answer is yes. In fact, I believe there are an uncountably infinite number of solutions. I have sketched an existence proof, but it needs to be made rigorous (this is not exactly trivial).

We can define g in an iterative way. Suppose f(0)=x. Then choose any number a and define

g(0) = a 

This choice is arbitrary - we could have chosen any value at all. But having chosen it, an infinite number of other values of g are determined.

Firstly,

g(a)=x.

Further

g(x) = g(g(a)) = f(a)

g(f(a)) = f(g(a)) = f(x)

and so on.

Other values of g are determined in the opposite direction as well. Define b so that f(b)=g(0). Then g(b) must be 0, so that g(g(b))=g(0)=f(b), as required. We can similarly get the value of g at c such that f(c)=0, and so on backwards.

The discrete set of values on which g is now defined may stretch from -infinity to +infinity, but alternatively, it could also have a minimum or a maximum. In this case g will also be defined at the inf or the sup of the range (simple argument using the monotone property and continuity). To be rigorous, we need to check this is possible without producing any contradictions. I believe the monotone continuous nature of f ensures this.

In either of these cases we need to pick a value outside of this range and define g arbitrarily there (but now constrained by the values of g we know). This choice will determine g on a new infinite set of values, similarly to above.

Anyhow, in this way we can define g on a discrete set of values which stretches from -infinity to +infinity.
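A minimal sketch of this orbit-bootstrapping in Python, for the concrete monotone bijection f(x) = x + 1 (my own choice of example, not from the thread), with the arbitrary seed g(0) = 0.5:

```python
# Sketch of the orbit construction described above, for f(x) = x + 1.
def f(x):
    return x + 1.0

def f_inv(y):
    return y - 1.0

g = {0.0: 0.5}          # arbitrary seed: g(0) = a = 0.5

# Propagate forward: since g(g(t)) = f(t), knowing g(t) pins down g there.
t = 0.0
for _ in range(20):
    a = g[t]            # g(t) is known
    g[a] = f(t)         # g(g(t)) = f(t) forces g(a) = f(t)
    t = a

# Propagate backward: if f(b) = g(t), then g(b) must be t.
t = 0.0
for _ in range(20):
    b = f_inv(g[t])
    g[b] = t
    t = b

# Check the defining relation wherever both applications are defined.
for x, gx in g.items():
    if gx in g:
        assert abs(g[gx] - f(x)) < 1e-12
```

For this particular seed the propagated values all happen to lie on g(x) = x + 1/2, one of the uncountably many square roots of f; a different seed traces out a different root.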

Next we pick one value on which g is not defined (say the midpoint of any of the intervals given by points on which it is defined), and show it is possible to define it there (proof needed - just need to be consistent with the values at the two ends of the interval). Make all the inferences possible about other values of g, checking there are no awkward contradictions.

Repeating this process indefinitely produces a dense set on which g is defined. g is automatically defined on every real number by continuity. QED

As far as I can see, the monotonicity and continuity conditions protect this construction from any problems. The key thing is that both f and g must send intervals to intervals, so if c is between a and b, we know that f(c) is between f(a) and f(b), and the converse is equally true. But I will be pleased to be enlightened if anyone sees a problem that has escaped me. [Though this was my strongest subject a long time ago, I am certainly a little rusty on it now.]

In the unlikely event that anyone has read this far, the monotone bijections from R to R are the homeomorphisms of the real line that preserve direction. This is a subgroup of index 2 in the whole group of homeomorphisms. It is not unusual for an element of a large group to have many square roots. Even much smaller matrix groups can allow multiple square roots.
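To illustrate that last point with a quick check (my own example): even the 2x2 identity matrix has many square roots in the group of invertible matrices, since every involution squares to it.

```python
import numpy as np

# Three distinct involutions: each squares to the 2x2 identity, so the
# identity alone already has many square roots in GL(2, R).
I = np.eye(2)
roots = [
    np.array([[1.0, 0.0], [0.0, -1.0]]),   # reflection in the x-axis
    np.array([[0.0, 1.0], [1.0, 0.0]]),    # swap of coordinates
    np.array([[1.0, 1.0], [0.0, -1.0]]),   # a shear-reflection
]
for R in roots:
    assert np.allclose(R @ R, I)
```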

pawn_slayer666

Taylor series it:

Since any function (well, most of them) can be expressed as an infinite polynomial...

f(x)=a[0]+a[1]x+a[2]x^2+a[3]x^3+...

g(x)=b[0]+b[1]x+b[2]x^2+b[3]x^3+...

 

Sort terms of g(g(x)) by powers of x:

b[0]+b[1]*b[0]+b[2]*b[0]^2+b[3]*b[0]^3+...=a[0]

x*(b[1]^2+b[2]*(2b[0]b[1])+b[3]*(3b[0]^2b[1])+...)=x*a[1]

 

Okay... so it does get complicated quickly.  Point is... let's say there are n a-terms, making n equations.  Since there are going to be more b-terms to solve for than just n (roughly 2n terms), it's a system of equations with twice as many variables as equations.  So intuitively, it's theoretically solvable.  Meaning g(x) does exist.
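A rough numerical sketch of this coefficient-matching, for the concrete choice f(x) = 2x + x^2 (my own example, and assuming f(0) = 0 so that we may take b[0] = 0, which makes the system triangular rather than fully coupled):

```python
import numpy as np

N = 6  # solve for the series of g modulo x^(N+1)

def compose(p, q):
    """Coefficients (ascending order) of p(q(x)), truncated to degree N."""
    r = np.zeros(N + 1)
    qpow = np.zeros(N + 1)
    qpow[0] = 1.0                      # q(x)^0 = 1
    for c in p:
        r += c * qpow
        qpow = np.polynomial.polynomial.polymul(qpow, q)[:N + 1]
        qpow = np.pad(qpow, (0, N + 1 - len(qpow)))
    return r

a = np.zeros(N + 1)
a[1], a[2] = 2.0, 1.0                  # f(x) = 2x + x^2
b = np.zeros(N + 1)
b[1] = np.sqrt(a[1])                   # x-coefficient equation: b[1]^2 = a[1]

# With b[0] = 0, the x^k coefficient of g(g(x)) depends linearly on b[k]
# (with factor b[1] + b[1]^k) once b[1], ..., b[k-1] are fixed, so we can
# solve for one coefficient at a time.
for k in range(2, N + 1):
    resid = compose(b, b)[k] - a[k]    # mismatch while b[k] is still 0
    b[k] = -resid / (b[1] + b[1] ** k)

assert np.allclose(compose(b, b), a)   # g(g(x)) = f(x) up to degree N
```

Note that b[1] is only fixed up to sign by b[1]^2 = a[1], so even in this restricted power-series setting the square root is not unique.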

 

Probably not the best way to solve the problem... Elroch's way is definitely better... but this still works... at least I think it should.

Elroch

Actually, you're attacking a slightly different problem, pawn_slayer666, as most monotone bijections R -> R are not analytic (i.e. they can't be expressed as power series). The analytic ones are infinitely differentiable everywhere, which most functions aren't.

But for this restricted problem of finding roots of monotone functions that are power series, it makes sense. You can find possible roots by expanding the composition of a power series with itself and equating coefficients.

Try doing it for the function f(x) = x and see what you get.

An interesting question is how big is the set of roots of an analytic bijection R->R.