Unfortunately, chess ratings ARE determined by complicated mathematical expressions. They aren't actually that hard if you break them down, however.
Rating explanation wanted

Full disclosure: math isn't my field. But maybe this would work: use estimates of the frequencies with which higher-rated players beat lower-rated players.
e.g. 100-point rating difference: higher-rated player wins 2/3rds of the time
200-point difference: higher-rated player wins 3/4ths
300-point difference: 4/5ths, etc.
Assume a 100-point "game value." This doesn't mean anyone will gain or lose 100 points, but it is a starting point for determining how much they will gain or lose.
(a) Suppose a 1400 player beats a 1600 player. By hypothesis, this should happen 1/4th (25%) of the time. Therefore, the upset gets the lower-rated player 3/4ths (75%) of the game value, and the 1400 becomes 1475. The upset costs the higher-rated player the same number of points, and the 1600 becomes 1525. Notice that if the higher-rated player wins the next 3 games in a row, he/she goes back to 1600 (i.e. 1600 - 75 + 25 + 25 + 25), while the lower-rated player goes back to 1400 (i.e. 1400 + 75 - 25 - 25 - 25).
(b) Suppose an 1800 beats a 1000. By hypothesis, this should happen 9/10ths (90%) of the time. Therefore, this predictable win only gets the higher-rated player 1/10th (10%) of the game points, and the 1800 becomes 1810. Notice that if the higher-rated player wins 9 of 10, as hypothesis would predict, he/she returns to 1800 [1800 + (9 x 10) - 90], while the lower-rated would return to 1000 if he/she could only pull the upset once [1000 + 90 - (9 x 10)].
(c) Suppose a 1561 beats a 1328. By hypothesis ... how often should this happen? Well, somewhere between the 200-point difference frequency (75%) and the 300-point difference frequency (80%). Since the point difference is exactly 233, and we can account for a 200-point difference already, just take 33/200ths of what would have happened if they were only 200 points apart, and add it to the points earned from a 200-point difference. Does that work?
In reality, among other faults, I think my proposed simplified system would over-predict big upsets. For example, I think an 1800 would beat a 1000 more often than 90% of the time. But if that difference between theory and reality bothers you, you can either adjust the numbers (making your math life a bit more complicated) or just use the real math-geek formulae (making your math life a lot more complicated).

Hmm, seems I just suggested adding to the higher-rated player's rating change in my "1561 v 1328" example. I think the fact that the 1561 is more than 200 points higher than the opponent (1328) means that amount ought to be subtracted from the 200-point rating change.
One further clarification: by "over-predict big upsets," I meant that the system under-values them (awards too few points for them), since the lower-rated player would be expected (by hypothesis) to produce such upsets more often (10%) than is realistic (5%?). But I think the numbers for smaller differences are not so far off, e.g. 66.7% compared to about 64%, and 75% compared to about 72%.
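To make the arithmetic concrete, here is a rough sketch in Python of the scheme above, folding in the correction about the 1561 v 1328 case. The function names and the linear interpolation between 100-point steps are my own reading of the proposal, so treat this as an illustration rather than a finished system:

```python
# Sketch of the simplified "game value" scheme described above.
# Assumptions (mine, not the poster's): win chances follow the pattern
# 100 -> 2/3, 200 -> 3/4, 300 -> 4/5, ... and in-between gaps (e.g. 233)
# are interpolated linearly between the neighbouring 100-point steps.

GAME_VALUE = 100  # the assumed 100-point "game value"

def higher_win_chance(diff):
    """Estimated chance that the higher-rated player wins, for diff >= 0."""
    hundreds, remainder = divmod(diff, 100)
    at_step = (hundreds + 1) / (hundreds + 2)       # e.g. 200 -> 3/4
    at_next_step = (hundreds + 2) / (hundreds + 3)  # e.g. 300 -> 4/5
    return at_step + (remainder / 100) * (at_next_step - at_step)

def update(winner, loser):
    """Return (new winner rating, new loser rating) after a decisive game."""
    if winner >= loser:
        # Predictable win: the favourite collects only the small remainder.
        p_winner = higher_win_chance(winner - loser)
    else:
        # Upset: the underdog collects most of the game value.
        p_winner = 1 - higher_win_chance(loser - winner)
    change = GAME_VALUE * (1 - p_winner)
    return winner + change, loser - change

# The three examples above:
for a, b in [(1400, 1600), (1800, 1000), (1328, 1561)]:
    new_a, new_b = update(a, b)
    print(f"{a} beats {b}: {new_a:.2f} / {new_b:.2f}")
```

Running it reproduces the 1475/1525 and 1810/990 numbers, and the 233-point upset comes out at about 76.65 points either way, a little more than the 75 for a plain 200-point gap, as the correction suggests.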

Here's a nice, simple version for you (that I didn't make):
Rn = Ro + K × (W - We)
Rn is the new rating, Ro is the old (pre-game) rating.
K is something you would have to determine a formula for yourself; it is based on how long it has been since the player last played a rated game.
W is the result of the game (1 for a win, 0.5 for a draw, and 0 for a loss).
We = 1 / (10^(-dr/400) + 1), where dr is the rating difference (your old rating minus your opponent's).
---
As I said, I didn't make that formula, but it definitely works. The only thing you need to determine is the K-value.
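For anyone who wants to try it, here is that formula as a small Python sketch. The post leaves K open, so a fixed K = 32 is assumed here purely for illustration; substitute whatever K rule you settle on.

```python
# Sketch of the posted formula Rn = Ro + K * (W - We).
# K is fixed at 32 here only for illustration; the post says you must
# choose your own rule for K.

def expected_score(rating, opponent_rating):
    """We = 1 / (10^(-dr/400) + 1), where dr = rating - opponent_rating."""
    dr = rating - opponent_rating
    return 1 / (10 ** (-dr / 400) + 1)

def new_rating(rating, opponent_rating, score, k=32):
    """Rn for one game; score W is 1 for a win, 0.5 for a draw, 0 for a loss."""
    return rating + k * (score - expected_score(rating, opponent_rating))

# The 1400-beats-1600 upset again:
print(f"{expected_score(1400, 1600):.3f}")   # about 0.240
print(f"{new_rating(1400, 1600, 1):.1f}")    # about 1424.3
print(f"{new_rating(1600, 1400, 0):.1f}")    # about 1575.7
```

Notice the winner gains about 24 of the 32 available points, roughly the same three-quarters share that the simpler 100-point scheme above hands out for a 200-point upset.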

1. first you start at 1200 points.
2. you win & lose, then your points go up & down.
3. you start to become obsessed with not losing so your rating won't go down.
4. you start avoiding good players you know you can't beat, just so your rating won't go down 2 points.
5. you gradually become insane.
6. when you lose too many points you start posting annoying messages on the forums asking "why did my rating go down?"
7. you eventually realize the futility of the ratings system & start to only play games with a certain surfer girl who only plays unrated games just for the fun of playing chess.
Could someone direct me to an explanation of how chess rating systems work? The searches I have done turn up complicated explanations that can only be understood by maths experts. I would like to program a rating system for personal use. Thanks.