Maximize Sum Of Squares

pawn_slayer666

Problem:

Given three numbers x, y, z, we know that they sum to a certain number, say 12.  Is there a way to prove that to minimize x^2+y^2+z^2, you have to set x=y=z=4?  It's simple with 2 numbers, that's just substitution, but with 3 or more variables it seems trickier.  Any ideas?

 

As a generalization, given n variables, a(1), a(2), a(3), ..., a(n), that sum to S, show that to minimize [a(1)]^2+[a(2)]^2+[a(3)]^2+...+[a(n)]^2, all a(1), a(2), a(3)..., a(n) must equal S/n.

 

Thanxx!

tseta

Perhaps by using Lagrange multipliers. It is often one of the simplest ways to do it, but not always. In the case of a sum of squares, I think, it works pretty well.

The three-dimensional case can also be proven using the geometry of a sphere, or more generally, an ellipsoid.
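The Lagrange route tseta suggests can be sketched in a few lines of Python. This is just my own illustration (function and variable names are mine, not from the thread): stationarity of sum(a_i^2) under the constraint sum(a_i) = S forces 2*a_i = lambda for every i, so all a_i are equal.

```python
import random

# Lagrange sketch: minimize f(a) = sum(a_i^2) subject to g(a) = sum(a_i) - S = 0.
# Stationarity grad f = lam * grad g gives 2*a_i = lam for every i,
# so all a_i are equal; the constraint then forces a_i = S / n.
def lagrange_minimizer(S, n):
    lam = 2 * S / n          # from n * (lam / 2) = S
    return [lam / 2] * n     # each a_i = lam / 2 = S / n

S, n = 12, 3
best = lagrange_minimizer(S, n)
assert best == [4.0, 4.0, 4.0]

# sanity check: random feasible points never beat the equal split
sq = lambda v: sum(x * x for x in v)
for _ in range(1000):
    a, b = random.uniform(-10, 10), random.uniform(-10, 10)
    point = [a, b, S - a - b]     # always sums to S
    assert sq(point) >= sq(best) - 1e-12
```

The nice part is that the same one-line algebra works for any n, which is exactly why the method scales past three variables.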

Elroch

An easy way is to convert it to a two-variable problem by substituting z = 12 - x - y. Then look for a minimum using calculus. Similarly, an n-variable problem converts to an (n-1)-variable problem.
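Elroch's substitution can be worked through concretely for S = 12 (my own notation below, not from the thread): eliminate z, set both partial derivatives to zero, and the critical point falls out.

```python
# Substitute z = S - x - y, so f(x, y) = x^2 + y^2 + (S - x - y)^2.
# Setting df/dx = df/dy = 0:
#   2x - 2(S - x - y) = 0  and  2y - 2(S - x - y) = 0
# => x = y = S - x - y, i.e. x = y = z = S / 3.
S = 12
x = y = S / 3          # solves the two-equation system above
z = S - x - y
assert (x, y, z) == (4.0, 4.0, 4.0)

# crude numeric check: the critical point beats nearby points
f = lambda x, y: x**2 + y**2 + (S - x - y)**2
for dx in (-0.5, 0.5):
    for dy in (-0.5, 0.5):
        assert f(x + dx, y + dy) > f(x, y)
```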

Thijs

A bit of an informal proof:

Suppose there exists some optimal solution (a1, a2, ..., an) with a1 + a2 + ... + an = S and a1^2 + a2^2 + ... + an^2 = m minimal, and suppose that in this solution there exist ai and aj with ai different from aj. Without loss of generality assume ai < aj, say ai = x and aj = x + 2eps with eps > 0, so that ai^2 + aj^2 = x^2 + (x + 2eps)^2 = 2x^2 + 4x eps + 4eps^2.

Now substitute ai + eps for ai and aj - eps for aj, so that ai = aj = x + eps. Then still a1 + a2 + ... + an = S, but ai^2 + aj^2 = (x + eps)^2 + (x + eps)^2 = 2x^2 + 4x eps + 2eps^2 < 2x^2 + 4x eps + 4eps^2. So the substitution gives another allowed solution with a lower sum of squares, contradicting the assumption that (a1, a2, ..., an) was optimal. Hence all ai must in fact be equal.

This is a bit informal though. There's of course a lot of known theory on optimization problems like this, where you can for example just plug in your constraint and your function to minimize, and get the optimal solution out. But I guess that if you don't want to delve into that, the above proof could be sufficient.
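Thijs' exchange step can also be run as a toy procedure (my own implementation, purely illustrative): whenever two entries differ, replace both with their average. The sum is preserved and the sum of squares strictly drops, so iterating drives every entry toward S/n.

```python
# Repeatedly average unequal pairs: keeps the sum, lowers the sum of squares.
def equalize(a, rounds=200):
    a = list(a)
    n = len(a)
    for _ in range(rounds):
        for i in range(n):
            for j in range(i + 1, n):
                if a[i] != a[j]:
                    mid = (a[i] + a[j]) / 2
                    a[i] = a[j] = mid
    return a

start = [1, 5, 6]               # sums to 12
end = equalize(start)
assert abs(sum(end) - 12) < 1e-9                  # sum unchanged
assert all(abs(v - 4) < 1e-6 for v in end)        # converges to S / n
assert sum(v * v for v in end) < sum(v * v for v in start)
```

This mirrors the proof exactly: each averaging step is the "substitute ai + eps and aj - eps" move with eps chosen to make the pair equal.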

Thijs

By the way, there seems to be a bit of confusion. The title of the topic is "maximize sum of squares" while you are looking to minimize the sum of the squares.

pawn_slayer666

@ tseta :  Lagrange multipliers... hadn't thought of that, and it works the same way for any number of variables!

@ Elroch : That works for 3 variables, but it's harder to do when there are more variables to contend with.

@ Phobetor : That looks like a formal proof to me.  It's like simplifying the problem into lots of 2 variable pairs.  Creative!  Yeah, I wasn't thinking when I titled the thread...

@ elianto : Good geometric interpretation, it's like finding where the smallest possible sphere touches the plane.  And by the Pythagorean theorem, we know all the variables are equal!

 

All very good solutions, thanks for the help!

Elroch

pawn_slayer666 - calculus works with any finite number of variables. Infinite numbers of them might take a bit more work.

Thijs
elianto84 wrote:

Consider (x,y,z) in R^3. You are asking which point of the plane P

  x+y+z = constant

has the minimum distance from the origin. Since P is convex, the solution is given by the projection of the origin onto the plane:

 x=y=z= constant/3.


Yes, that's the "existing theory" I was referring to :)

And indeed, in this case, x^2 + y^2 + z^2 can just be interpreted as the square of the 2-norm of the vector (x, y, z) (the ordinary "length" of a vector), so that minimizing it is the same as minimizing the 2-norm of the vector.

It's nice to see different areas of mathematics come together to answer the same question, using different approaches.
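elianto84's projection formula can be checked in a couple of lines (a sketch of mine, with the normal vector hard-coded as n = (1, 1, 1) for the plane x + y + z = c): the closest point to the origin is p = (c / ||n||^2) n = (c/3, c/3, c/3).

```python
# Project the origin onto the plane n . p = c along the normal n.
def closest_point_on_plane(c, n=(1.0, 1.0, 1.0)):
    scale = c / sum(v * v for v in n)   # c / ||n||^2
    return tuple(scale * v for v in n)

p = closest_point_on_plane(12)
assert p == (4.0, 4.0, 4.0)

# its squared 2-norm, c^2 / 3, is the minimal sum of squares
assert sum(v * v for v in p) == 48.0
```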

pawn_slayer666

Elroch : Induction could always work...

Phobetor : As they say, there's more than one way to skin a rat!