Well, if the rating is supposed to make sure adaptive mode selects lessons matching your capabilities, it makes sense to "keep score".
But yes, it might make sense to differentiate between topics (e.g., "strong tactician, weak in opening theory"). And I'd propose not publishing the user's rating. Instead, similar to your suggestion, at the end of a lesson just show a performance rating for that lesson, the difficulty of the lesson, and whether your performance was in line with expectations, better, or worse (equivalent to the rating change one would see currently).
The rating of lessons would have to be given in relative terms, I guess.
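To make the idea concrete, here's a minimal sketch of how "in line with expectations, better, or worse" could be computed, assuming a standard Elo-style expected score. The function names, the lesson rating, the K-factor, and the threshold are all hypothetical, not anything Chess Mentor actually does:

```python
# Sketch of per-lesson feedback using the standard Elo expected-score
# formula. All names and numbers here are hypothetical illustrations.

def expected_score(player_rating, lesson_rating):
    """Elo expectation: a value in (0, 1) predicting the player's score."""
    return 1.0 / (1.0 + 10 ** ((lesson_rating - player_rating) / 400))

def lesson_feedback(player_rating, lesson_rating, actual_score, k=32):
    """Return a verdict relative to expectation instead of a public rating."""
    expected = expected_score(player_rating, lesson_rating)
    delta = k * (actual_score - expected)  # the hidden rating change
    if delta > 1:
        verdict = "better than expected"
    elif delta < -1:
        verdict = "worse than expected"
    else:
        verdict = "in line with expectations"
    return verdict, delta

# A 1800-rated user scoring 50% on a lesson pitched at 2000:
verdict, delta = lesson_feedback(1800, 2000, actual_score=0.5)
```

The user would only ever see the verdict and perhaps the lesson difficulty; the numeric delta stays internal, which is exactly the "don't publish the rating" idea.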
It seems to me that the rating is totally useless in Chess Mentor. Maybe it would make more sense to give a percentage of success for a course, like letter grades in school (A, B, C, D, F).
A rating only makes sense if it always measures the same activity (notice that there are already separate ratings for bullet, blitz, standard, and correspondence games).
But in Chess Mentor, suppose a teacher creates a course on an opening I know nothing about, and I've just finished an endgame course that gave me 2000 points. Then I try the opening course and hypothetically lose 500 points.
Now my new rating is 1500, but what is it actually measuring? Not my knowledge of endgames, and not my lack of knowledge of an opening I knew nothing about to begin with.
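The problem above can be shown with a toy sketch: keeping separate per-topic ratings (the numbers below are the hypothetical ones from my example) versus blending them into one number that describes neither topic:

```python
# Hypothetical per-topic ratings from the example above: strong in
# endgames, untested-and-weak in the opening course.
topic_ratings = {"endgames": 2000, "openings": 1500}

def blended(ratings):
    """A single overall number, here a plain average, hides which
    topic the score actually reflects."""
    return sum(ratings.values()) / len(ratings)

overall = blended(topic_ratings)  # 1750.0 -- describes neither topic
```

A per-topic breakdown would at least tell me *where* I stand, whereas the single blended figure moves whenever I try something new, regardless of what I already know.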