**Fangraphs WAR**

| Role | Value |
| --- | --- |
| Scrub | 0-1 WAR |
| Role Player | 1-2 WAR |
| Solid Starter | 2-3 WAR |
| Good Player | 3-4 WAR |
| All-Star | 4-5 WAR |
| Superstar | 5-6 WAR |
| MVP | 6+ WAR |

**Baseball Reference WAR**

| Role | Value |
| --- | --- |
| Replacement | <0 WAR |
| Substitute | 0-2 WAR |
| Starter | 2+ WAR |
| All-Star | 5+ WAR |
| MVP | 8+ WAR |
Fangraphs and Baseball Reference each publish these recommendations for how to rate players, and with their rules of thumb most people can judge that a 2+ WAR player is a good player. The main question I would ask is how these designations of player performance are constructed. Based on the Fangraphs glossary page, the tiers are built around playing time, with an average full-time position player being worth 2 WAR. However, is playing time the best way to understand talent level and put the best set of players on the field? I would probably say it's not.
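To make the rule of thumb concrete, here is a minimal sketch in Python that maps a full-season WAR value onto the tiers in the two tables above. The function names and the treatment of boundary values are my own framing of the published guidelines, not an official API from either site.

```python
def fangraphs_role(war: float) -> str:
    """Map a full-season WAR value to the Fangraphs rule-of-thumb role."""
    tiers = [
        (1, "Scrub"),          # 0-1 WAR
        (2, "Role Player"),    # 1-2 WAR
        (3, "Solid Starter"),  # 2-3 WAR
        (4, "Good Player"),    # 3-4 WAR
        (5, "All-Star"),       # 4-5 WAR
        (6, "Superstar"),      # 5-6 WAR
    ]
    for upper, role in tiers:
        if war < upper:
            return role
    return "MVP"               # 6+ WAR


def bbref_role(war: float) -> str:
    """Map a full-season WAR value to the Baseball Reference rule of thumb."""
    if war < 0:
        return "Replacement"
    if war < 2:
        return "Substitute"
    if war < 5:
        return "Starter"
    if war < 8:
        return "All-Star"
    return "MVP"


print(fangraphs_role(2.4), bbref_role(2.4))  # Solid Starter Starter
```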
Above, I set up a distribution of position-player WAR for 2019, scaled to a full season. Taking the replacement level into account, the highest density of WAR (approximately the average) sits around 1.20 WAR, with roughly 24% of players performing at that level. One full standard deviation above the mean is about 3 WAR, which signifies clearly above-average talent; two deviations is about 5.0 WAR, and three deviations, about 8.0 WAR, designates elite talent. The question is whether we should think of a solid player as a 2-win player even though that is not even a standard deviation better than the average. I am willing to be flexible on this position because only a small sample of players clears 3 WAR in a single season. However, being aware of the distribution, I am personally going to be stricter about designating quality performance, using 3+ WAR for a full-scaled season on a standard-deviation scale rather than a playing-time one.
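As a rough illustration of that standard-deviation framing, the sketch below assumes a hypothetical `scaled_war` array of 2019 position-player WAR values scaled to a full season. The landmark values quoted in the paragraph above (mean around 1.2, +1 SD around 3, +2 SD around 5, +3 SD around 8) come from the actual 2019 distribution; this snippet only shows how such a summary and the stricter 3+ WAR cutoff would be computed, not those exact numbers.

```python
import numpy as np


def sd_tiers(scaled_war: np.ndarray) -> dict:
    """Summarize a full-season WAR distribution by standard deviations."""
    mean = scaled_war.mean()
    sd = scaled_war.std()
    return {
        "mean": mean,                       # roughly where the density peaks
        "+1 SD (clearly above average)": mean + 1 * sd,
        "+2 SD": mean + 2 * sd,
        "+3 SD (elite)": mean + 3 * sd,
    }


def is_quality_season(war: float, mean: float, sd: float) -> bool:
    """Stricter cutoff proposed here: at least one SD above the mean,
    which for 2019 position players works out to roughly 3+ WAR."""
    return war >= mean + sd


# Example usage (assuming `scaled_war` has been built from the 2019 data):
# cutoffs = sd_tiers(scaled_war)
# is_quality_season(3.1, cutoffs["mean"], scaled_war.std())
```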