Why isn't there any thread to report abusers? He just resigned in the game; why the hell was I even matched with him... I didn't allow anyone rated lower than 1600 to be paired with me! -28 for this game. https://www.chess.com/member/signuptowin
I played my first game in a month and again found nothing but crazy teaming. Luckily these guys were helpless chess-wise and I won, but such attitudes make the 4-way game 100% disgusting.
Skeftomilos Jan 11, 2018
The new system which stops idle people from appearing on the leaderboard is pointless and doesn't encourage any competition. All the original 2k players do is play one game every now and then and keep their extremely inflated ratings (and many of them are cheaters). Look at the top 10:

#1 gokul009 (2053), #2 a1t19 (2037), #3 randomblitzmonster (2032), #4 4thplayer (2030), #5 BabYagun (2013), #6 B_d_E (1990), #7 Martin0 (1982), #8 Fukadzume (1968), #9 RamanChodzka (1964), #10 mitsuyamelon (1919)

These players don't play at all, and the ten below them don't even play regularly (if they did, you can bet they would have 1800 at most). Having these inflated players idle on the leaderboard removes the competitiveness and fun of 4-player chess. None of the new regular players under the new rating system can get past 1900, so competing with them is pointless; there's no competition. At the moment the game is treated like a fun side game because of them, but to further the game it needs to be taken more seriously, with more competitiveness. PLEASE RESET THE LEADER BOARD
I was just playing a game. I set my matchmaking range to 1500-3000, yet I was put in a game with a 1200, another 1200, and a 1390. I then got teamed up on by yellow and red, who killed me first, helping each other. How is it fair that my rating goes down more for playing against people I did not want to play against?
Dragadiddle Jan 9, 2018
1. When we click the "+ New Game" button, a dialog window appears with a drop-down list containing two items: "Free For All" and "Teams". A drop-down list is the wrong control for just two items; radio buttons should be used instead.

2. That window should remember the choice. Currently it always has FFA selected by default, but it should show Teams as the default choice if the player selected Teams before.

3. When we invite someone to play a Teams game, the window remembers the invited player's nickname. That is good. But if you delete that nickname and leave the field blank, it still remembers the old nickname and suggests it next time. It should remember the empty field too.
Jake_Paul7 Jan 5, 2018
Spectators can see the arrows. It is a way of cheating. People just open another browser window and spectate their own game to see the opponents' arrows. Please hide the arrows from spectators. Thanks.
The-Lone-Wolf Jan 4, 2018
In the (pinned) thread on 4PC ratings, VAOhlman wrote: "All in all I think a good discussion is in order while the current system continues. However, would it be possible to code in another system as well and compare how well it predicts the games? Not use it for matching, but just for seeing if it is a better predictor. Because, in the long run, that is what we are looking for, no? A system that can fairly accurately predict how player one will do against player two."

This got me wondering: could we evaluate different ideas for rating functions more empirically? I believe so... but doing so would require a lot of elbow grease. To that end, I wondered if the chess.com team might be willing to tackle this in a crowd-sourced fashion: chess.com could do nothing more than publish[*] a large sample of historic game results (HUGE sticking point here: this entire idea hinges on the hope that chess.com actually retains records of historic game results), members of the community could use that data to develop/optimize a new 4PC algorithm, and chess.com could make a decision on how (or whether) it made sense to use it.

[*] The data dump could be as simple as a downloadable CSV (comma-separated values) file, limited to:

GAME_ID,RED_USERNAME,RED_SCORE,GREEN_USERNAME,GREEN_SCORE,BLUE_USERNAME,BLUE_SCORE,YELLOW_USERNAME,YELLOW_SCORE

(and if there are privacy concerns, perhaps pseudo-anonymous PLAYER_ID numbers could be substituted for USERNAME)

The goal would be to encode different algorithms into functions that:

- take 12 input parameters describing each completed game (red's starting rating, a running count of red's rated games, red's final score; same for blue, yellow, and green), and
- return 4 output parameters: red's new rating, blue's new rating, yellow's new rating, and green's new rating,

... and assess the various algorithms for accuracy. Assessment might be along the lines of:

1) Run the function against a training data set of games (i.e. the first 90% of games) to compute each player's "training-data" rating.

2) Use the training-data ratings to predict the results (player finishing order) of the last 10% of games.

3) Score those predictions (per game) by evaluating each game as six 2-player match-ups:
- If there's a big gap between the two players' ratings (e.g. over 200 points), then +2 for a correct prediction (1400 player finishes better than 1100 player) and -2 for an incorrect prediction (1500 player finishes worse than 1200 player).
- If it's a medium-sized gap (between 50 and 200), then +1 for a correct prediction and -1 for a failed prediction.
- Ignore match-ups with small rating gaps (less than 50); award +0 regardless of outcome.

Whichever function yields the highest average accuracy score per game (over all of the 10% testing data) would be considered the "more accurate" function. Some "sanity" guidelines should also be in play:

1) The logic should be sensible and (relatively) easily explained. While the function may entail a lot of nuanced adjustments (perhaps accounting for things like relative ratings, score differentials, and positional aspects of the game), players should still have a basic sense of how their ratings should change based on the outcome of each game. In particular, the player who finishes first should always gain rating, and the 4th-place player should always lose rating.

2) Each game should be zero-sum: no net gain (or loss) of rating points among the 4 players of a game.

3) The function should be expressible using standard arithmetic and simple if-then-else constructs (i.e. no fancy libraries; easily portable to any language).

4) The function should be deterministic (same inputs => same outputs) and exhibit "smooth" behavior (small changes in inputs should result in minor deviations in output).

5) The function should be self-correcting. If a single player has a rating of 1500 after starting at 1200 and playing 50 games, it should be possible to start that same player at 2200 and still wind up with a rating of ~1500 after the same 50 games.

Otherwise, there should be no restrictions on the logic, and, IMO, there are a lot of ideas worth exploring (and vetting these possibilities is where the crowd-sourcing idea really seems to "fit"):

- It may be that the rating of the opposite player is vitally important in deciding how strong a player is. Perhaps a function should give more "street cred" to a player who wins against two adjacent 1400-rated opponents while opposite a 1200 player than to a player who wins against two adjacent 1400 players while opposite a 1600 player.
- It may be that "skewed ranking" games are (empirically) very poor predictors of player strength. For instance, in a 1200 vs 1250 vs 1275 vs 1850 game, perhaps the function is "smart" enough to realize that no matter what happens to the 1850 player, it says very little about that player's skill level.
- It may be wise to use "provisional status" logic (looking at the running total of rated games played) to improve accuracy. For example, a function might decide it's "less bad" to lose to a new 1200-rated player than to a well-established 1200-rated player, or that a new player who soundly defeats three 1600+ players is probably a *lot* better than their initial 1200 rating reflects.
- While not as important as final standing, I suspect score differentials could shed some light on the players' skill levels. It'd be interesting to see if a function could tease that out without overtly rewarding "running up the scoreboard" play over "quick claim win" play (which says more about players' patience levels than skill levels).
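To make the idea concrete, here's a minimal sketch of what one candidate function and the accuracy scorer might look like. Everything here is an assumption for illustration: `update_ratings` is a toy Elo-style pairwise update (not chess.com's actual algorithm, just one that happens to satisfy the zero-sum guideline), and `score_prediction` implements the +2/+1/0 rubric described above.

```python
# Hypothetical sketch of the crowd-sourced evaluation harness.
# Names and formulas are illustrative assumptions, not chess.com's system.
from itertools import combinations

K = 16  # toy K-factor for the example update rule

def update_ratings(ratings, scores):
    """Toy zero-sum rating update: treat each game as six Elo-style
    2-player match-ups. `ratings` and `scores` are dicts keyed by seat
    ('red', 'blue', 'yellow', 'green'). Returns a dict of new ratings."""
    delta = {seat: 0.0 for seat in ratings}
    for a, b in combinations(ratings, 2):
        expected = 1 / (1 + 10 ** ((ratings[b] - ratings[a]) / 400))
        if scores[a] > scores[b]:
            actual = 1.0
        elif scores[a] < scores[b]:
            actual = 0.0
        else:
            actual = 0.5
        change = K * (actual - expected)
        delta[a] += change
        delta[b] -= change  # zero-sum by construction (guideline 2)
    return {seat: ratings[seat] + delta[seat] for seat in ratings}

def score_prediction(ratings, scores):
    """Accuracy score for one test game, per the +2/+1/0 rubric:
    big gap (>200) pairs count double, medium gaps (50-200) count once,
    small gaps (<50) are ignored."""
    total = 0
    for a, b in combinations(ratings, 2):
        gap = abs(ratings[a] - ratings[b])
        if gap < 50:
            continue  # small gap: no reward either way
        weight = 2 if gap > 200 else 1
        # the higher-rated player is predicted to finish better
        hi, lo = (a, b) if ratings[a] > ratings[b] else (b, a)
        if scores[hi] > scores[lo]:
            total += weight
        elif scores[hi] < scores[lo]:
            total -= weight
    return total
```

The harness would then replay the training games through `update_ratings` (or any competing function with the same signature) and average `score_prediction` over the held-out games; the "provisional status" and score-differential ideas would just be extra inputs to the update function.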
Skeftomilos Jan 4, 2018
No response from the "Resign" button. Time ran out on my turn, but nothing could be done.
kevinkirkpat Jan 4, 2018
Just played a team game where both me and my team mate checkmated our opponents at the same time!
I would like to be able to draw arrows after the game has finished. Since I like to record my games, it would help when going over them. I guess it could be a general feature where arrows can be drawn for yourself at any time, like in live chess.
Just a weird thought... I'm sure I'm not the only one to suffer from a partner who resigns early in team chess, or gets cut off by the internet. So, just wondering: what if the rule were that when one player resigns, their partner gets to keep playing, just like in free-for-all? The resigning player gets a loss, but the player who continues on could, if he did amazingly well, pull out a win. Or at least get to try!! So the rule would be that a player only loses if they resign, or if they or their partner are mated (or have their king taken). The pieces of the player who resigns or times out would go grey, like in free-for-all.
NelsonMoore Dec 29, 2017
Having promotion on the 11th rank SUUUUUCKS. Oftentimes my ally will have pushed his pawns, blocking me from queening. Queening itself is very difficult in the late game, as all the opponents do is block with their knights. It's not even that appealing considering that you have to waste 8-9 MOVES to queen when the other dude will place a knight on the promotion square just to kill it, AND in team games each move is FAR MORE VALUABLE than in a normal FFA game. Also, in some games, once people have established a decent defense, have their diagonals secure, and there are no real tricks, and they are avoiding castling in case of getting attacked, all they do is shuffle pieces. It would be FAR MORE INTERESTING if queening were a real incentive. I mean, in FFA games it's a similar situation: everyone would be shuffling pieces, saving their pieces, if the temptation of queening weren't possible. And easier queening in teams is far more exciting. A few coordinated queens can destroy the other side and open up so many more tactics.
Just finished my first game back in a couple of weeks. Just remembered the kind of bull that goes on here. Blue has an insurmountable lead, I am in clear 2nd, red has no pieces left. Blue deliberately gives his teammate red a free rook so he can overtake me, then resigns instead of mating him. So over this kind of crap. I never give anyone undeserved points and don't expect that of others that are playing either.
CHESS.COM must do something finally! Many players start a game and resign without making any moves. The last straw for me was this game: I understand that people want to play with their invited friends. But why the heck should I lose rating points because of that? It happens multiple times a day, every day. CHESS.COM, can you fix it finally? Your programmers do not deserve their salary. This bug was reported months ago! I understand that it is more fun for you to make a video about that "sexy" World Championship logo than to fix the bugs. But you've got money from players wishing to invite friends, and you do not fix this error. Is that fair at all? Either make refunds to all those players and do not allow anyone to invite friends, or fix the bug finally.
Onepiecemania Dec 27, 2017
It would be a simple variant, with almost the same rules as those of standard antichess. Let's review these rules (copy-pasted from Wikipedia). The rules of antichess are the same as those of standard chess, except for the following special rules:

• Capturing is compulsory.
• When more than one capture is available, the player may choose.
• The king has no royal power and accordingly:
  ◦ it may be captured like any other piece;
  ◦ there is no check or checkmate;
  ◦ there is no castling.
• Stalemate is a win for the stalemated player.
• A player wins by losing all his pieces, or by being stalemated.

A point system like the one used for FFA 4-player chess will not be needed here. The final rankings will be determined by the order in which the players achieve the goal of losing all their pieces or becoming stalemated. In other words, the first player to lose all his pieces, or become stalemated, takes first place. The second player takes second place. The third player takes third place. The last surviving player takes fourth place.

Resigning, timing out, or disconnecting will award the player fourth place. If later another player does any of these, he will get third place. The next one will take second place, and the game will end with the last surviving player as the winner. The pieces of any removed players will remain on the board as dead (gray) pieces, but their capture will still be compulsory for the remaining players.

Note: In standard antichess it is allowed to promote pawns to kings. To keep things simple and familiar, we could discard this rule and adopt the 4-player FFA rule of forced pawn promotion to queens on each player's 8th rank (at the center of the board).

What do you think? Opinions are welcome!
Skeftomilos Dec 27, 2017
I was playing teams and had to take a 30-minute break to soak in the beauty of the following checkmate! After blundering into a nice countersacrifice in the opening, we had been down a queen the whole game, but it was not so clear since we had a bind on the blue king. My teammate was mated in two moves, but I had a counterattack against both players (back rank and mate threat). In fact, in the final position before they blundered into mate, my team would be up two pieces. The final move was Na7#:
MarshmallowQueen2 Dec 26, 2017
The settings bar used to appear in the upper left, but it appears to simply be gone now (playing on mobile, using Google Chrome in "desktop mode" to stretch the window). My recommendation would be to make it the site's default setting to only play FFA against players within a similar range, +/- 100 or 150 or whatever. 1200s should play other 1200s, 1500s should play one another, etc. Also, please reconsider the "dividing by three" adjustment. If everyone is playing against similarly rated players, the reward for winning should be greater. Nobody will ever climb the ratings ladder if you're earning or losing 5 or 10 points per game. A bad ratings system is bad for the player pool and bad for the site.
pjfoster1313 Dec 26, 2017
I'm a noob at teams mode, having just played 10-15 games, but I think that since the calculations are much deeper than in FFA mode, you need more time than 1 minute with a 15-second delay. What do you think?
kevinkirkpat Dec 22, 2017
There's been a lot of buzz over Google's Alpha Zero soundly defeating the Stockfish 8.0 chess engine. A brief recap: Alpha Zero, using a novel form of machine learning, "studied" the game of chess for 4 hours (it was told nothing but the basic rules, and spent 4 hours playing games against itself). After those 4 hours, it played 100 games against the Stockfish 8.0 engine (the winner of the 2016 chess-engine championship). And Alpha Zero CRUSHED Stockfish. It won 28 of the games (25 as white, 3 as black) and drew the rest. While there's a lot of debate raging over some of the nuances of this match-up (Stockfish didn't have its opening book, it wasn't the latest version, it wasn't run on optimal hardware, etc.), the key takeaway is that in a very short period of time, Alpha Zero was able to teach itself chess at a level at least on par with the best chess engines ever built. The waking thought I had this morning, and that's been bouncing around in my head since: what if Alpha Zero were trained to play 4PC? The basic development principle should be the same: just give it the rules of 4pc and 4 hours to play itself (technically, play 3 different versions of itself). How would it stack up against human players? In a game between two solid "Leaderboard" human players and two Alpha-Zero opponents (with game anonymized to avoid any transparent "Fight the machines!" ruses), would the humans have any chance at finishing first? What about with 3 humans vs 1 machine? What would its game play look like? Obviously the engine would employ superhuman tactical capabilities (if final two players were human vs Alpha Zero, with roughly equivalent material and points... yeah, game over). But what about meta-strategy? Would the highly skilled Alpha Zero engine use game-theory to its advantage? Would it learn to cooperate with opposite player (e.g. to coordinate attacks against adjacent players, or perhaps to rescue the opposite player from danger for no immediate gain)? 
It's utterly pointless speculation, of course. I expect Google tackled Chess *not* to create a masterful chess engine but simply as a marketing stunt that made a point about its AI's current capabilities. Even that point merited just 4 hours or so of computational time. I would not be surprised if - to the utter frustration of the chess world - Google never again (officially) entered the foray of Chess. And I can think of no motivation for Google to dedicate any additional resources (at least, not any time soon) to teach Alpha Zero to play more obscure games (especial games without clearly-established standardize rules like 4PC). Nevertheless, I thought it was a fun thing to contemplate, and figured it might be interesting to hear the thoughts of others on this forum.
Skeftomilos Dec 21, 2017