Quote
If you want to reward the person who sails the best in a fleet full of poor sailors, then you use PY-type systems, as they rate the boat and the sailor; if you want to reward the best sailor, you use a formula-based rating system such as SCHRS or Texel.


Scooby
I think you should lay out the fundamental assumptions for the two systems.

A measurement rating system... ASSUMES that the significant performance factors are measured and that your formula captures the real performance of the boat relative to all of the others. But... the Hobie 16 sailors complain... the uni sailors complain. So, this measurement rule is not perfect.

A performance-based system ASSUMES... that the boats being rated are in good racing shape. Using race data from dead classes is a problem... it is unlikely that the boat has new sails and good foils; therefore, when you update the current rating with this bad data it blows up the rating... e.g. the Supercat ratings.

This is easily solved by including race data ONLY from classes which have hosted a major one-design championship in the last two years. The rating gets frozen when the class stops hosting a major championship.
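To make the "freeze dead classes" idea concrete, here's a rough sketch of the filter. All the class names, dates, and the two-year window are illustrative, not from any published rating rule:

```python
from datetime import date

# Hypothetical data: date of the last major one-design championship
# hosted by each class (made-up dates, for illustration only).
last_championship = {
    "F18": date(2011, 7, 10),
    "Supercat 20": date(1998, 6, 1),  # dead class: stale race data
}

def rating_is_live(boat_class: str, today: date, window_years: int = 2) -> bool:
    """A class's rating only accepts new race data if the class has
    hosted a major championship within the window; otherwise the
    rating is frozen at its last value."""
    last = last_championship.get(boat_class)
    if last is None:
        return False
    return 0 <= (today - last).days <= window_years * 365
```

Race results from a class where `rating_is_live(...)` is false would simply be excluded from the next rating update.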

You are correct about the effect of poor sailors. The major problem for a performance system is getting data from races that have enough well-sailed boats, with the top sailors in each class competing across the full range of conditions. If the top three F18 teams are competing against the top three F16 teams, you can ASSUME that the skill sets of all six are about the same... therefore, the difference would be the boat class. Throwing the poor sailors of a class into the mix will blow up the rating system. Averaging the top performances of poor sailors over time cannot solve the problem of getting the boat's rating correct, because the assumption that the sailor skill levels are comparable is incorrect.

E.g., you can't take an F16 class rating from data in the USA, because the grass-roots nature of the class means it is not populated with accomplished racers. Using this data to rate the F16 against the Nacra 20, F18, or A-class fleets, which have many more experienced and highly accomplished racers, leads to an unfair rating.
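The top-crews comparison above can be sketched with a few lines of arithmetic. In a time-on-time system like SCHRS, corrected time is elapsed time divided by the rating, so if the top crews of two classes are assumed equally skilled, the ratio of their elapsed times around the same course implies the ratio of ratings. The elapsed times and the reference rating below are made-up numbers:

```python
# Hedged sketch: infer a relative rating from races where the top crews
# of two classes (assumed equally skilled) sailed the same course.
from statistics import median

# Illustrative elapsed times (seconds) for the top three crews in each class.
f18_elapsed = [3600, 3650, 3700]
f16_elapsed = [3780, 3840, 3900]

known_f18_rating = 1.000  # hypothetical reference value, not a real SCHRS number

# Equal skill implies elapsed/rating is equal across classes, so:
implied_f16_rating = known_f18_rating * median(f16_elapsed) / median(f18_elapsed)
print(round(implied_f16_rating, 3))  # → 1.052
```

The whole point of the paragraph above is that this only works when the "equal skill" assumption holds; feed it mid-fleet times from a grass-roots class and the implied rating is garbage.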

The solution, IMO, for performance ratings... is to use the measurement-predicted rating until the class hits the benchmark of two national championships... then weight open championships like the Alter qualifiers or any other major championship (Catfight and Spring Fever) heavily when you compute the rating. Hell... if the F16s had a championship start 5 minutes after the Nacra F17 championship, the correct rating would be locked in pretty quickly. Likewise with events like the Tradewinds... times on the first-place finishers around the track would work toward getting the proper ratings.
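One way to picture that policy as arithmetic: start from the measurement prediction and blend in race-derived numbers, with championships carrying most of the weight. The ratings and weights here are invented for illustration; no real system publishes these values:

```python
# Hedged sketch of a weighted blend: measurement prediction plus race
# observations, with major championships weighted heavily.
measurement_prediction = 1.050  # hypothetical measurement-rule rating

# (implied_rating, weight) pairs from race data; weights are made up.
observations = [
    (1.060, 5.0),  # national championship
    (1.055, 5.0),  # open championship (e.g. an Alter qualifier)
    (1.040, 1.0),  # ordinary club race
]

total_w = 1.0 + sum(w for _, w in observations)  # prediction gets weight 1.0
blended = (measurement_prediction * 1.0 +
           sum(r * w for r, w in observations)) / total_w
print(round(blended, 4))  # → 1.0554
```

Note how the club-race outlier barely moves the result, which is exactly the behavior being argued for: the championships dominate, and bad data from weak fleets can't blow up the rating.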

IMO, handicap rating systems need to adjust their policies for the 21st century. Otherwise, their flaws significantly outweigh the flaws in the measurement systems.


crac.sailregattas.com