Scooby
I think you should lay out the fundamental assumptions for the two systems.
A measurement rating system... ASSUMES that the significant performance factors are rated and that your formula captures the real performance of the boat relative to all of the others. But... the Hobie 16 sailors complain... the uni sailors complain. So, this measurement rule is not perfect.
SCHRS uses a formula that predicts boat performance from the criteria the rule rates, such as sail area, beam, length, etc., to produce a theoretical speed for the boat and hence a rating.
Each boat is different and so gets a rating based on these data points. The way to think of this rating is that it is the rating for a boat that is sailed perfectly. Now, we all know this cannot happen, but if it did, the boats would finish together. This is how measurement systems work, and measurement systems can only measure a certain number of items.
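To make that concrete, here is a toy sketch of how a measurement-based rating works. The formula below is NOT the real SCHRS formula (the actual rule is far more detailed); it is an invented placeholder with the same general shape: a few measured data points in, a theoretical speed out, and the rating expressed as a time-correction factor.

```python
import math

def theoretical_speed(length_m, sail_area_m2, weight_kg):
    """Invented placeholder formula -- NOT the real SCHRS maths.

    Longer, lighter boats with more sail get a higher theoretical
    speed; the exponents here are purely illustrative.
    """
    return math.sqrt(length_m) * (sail_area_m2 / weight_kg) ** (1.0 / 3.0)

def rating(length_m, sail_area_m2, weight_kg, ref_speed=1.0):
    """Express the rating as a time divisor: faster boats carry a
    smaller number, so a perfectly sailed mixed fleet would all
    'finish together' on corrected time."""
    return ref_speed / theoretical_speed(length_m, sail_area_m2, weight_kg)

def corrected_time(elapsed_s, boat_rating):
    # The rating never changes with who sails the boat, so the crew
    # that sails closest to the theoretical speed wins on handicap.
    return elapsed_s / boat_rating
```

The corrected-time convention (elapsed divided by the rating, faster boats carrying the smaller number) is, I believe, how SCHRS is applied; everything else above is a made-up illustration of the principle, not the rule itself.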
-I’ll accept that a 3:1 downhaul is not as efficient as an 8:1 and so you cannot flatten the mainsail as easily, but should SCHRS (or Texel) have a rating point for downhaul efficiency? I don’t think so, as we’d end up with 1000 rating points and the system would be impossible to manage.
-I’ll also accept that not all hull forms are as good as each other, but should Texel or SCHRS insist that each hull is measured so that a coefficient of drag is defined for it? It would be impractical.
-SCHRS and Texel also assume that the boat is in good working order.
-SCHRS and Texel also only measure a subset of all the available data – the rules need to be manageable – more on this later.
The Hobie 16 should be fast: it’s fairly light (145 kg) and has fairly large sails, but it does not have daggerboards, so it gets an allowance for this.
A performance-based system ASSUMES... that the boats being rated are in good racing shape. Using race data from dead classes is a problem... it is unlikely that the boat has new sails and good foils, therefore when you update the current rating with this bad data it blows up the rating... e.g. the Supercat ratings.
This is easily solved by including race data ONLY from classes that have hosted a major one-design championship in the last 2 years. The rating gets frozen when the class stops hosting a major championship.
Yep, dunno exactly what PY (or US PY) does, but I believe they factorise the results to try to remove anomalies.
PY systems require results to produce a handicap.
It is known that PY systems do get bent by the “quality” of fleets and also by how hard the boat is to sail. The classic example is how the Musto Skiff handicap is going down. A few years ago there were very few people who could sail these boats, so the handicap was kept slower by the “average skill factor” in the fleet; as more people have learned to sail the boat, the PY has dropped. Measurement systems provide a rating based on the performance of the boat alone, not of the sailors and the boat.
As an aside, I would expect the F16 PY handicap to drop as we learn to sail these boats, because the “crew skill factor” is coming into play.
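For anyone unfamiliar with how a returns-based number moves, here is a minimal sketch. The corrected-time formula is the standard Portsmouth Yardstick convention (corrected = elapsed × 1000 / PN); the averaging/nudging scheme below is purely my own illustration, not the actual RYA or US PY algorithm.

```python
def corrected_time(elapsed_s, pn):
    # Standard Portsmouth Yardstick convention: lower PN = faster boat.
    return elapsed_s * 1000.0 / pn

def achieved_number(elapsed_s, reference_corrected_s):
    # The number this boat "sailed to" in one race: the PN that would
    # make its corrected time equal a reference (e.g. the race winner).
    return elapsed_s * 1000.0 / reference_corrected_s

def update_pn(current_pn, achieved_numbers, weight=0.1):
    # Illustrative only: drift the published number a small step toward
    # the race evidence. As Musto Skiff crews improved, their achieved
    # numbers fell, and the published PN drifted down with them.
    avg = sum(achieved_numbers) / len(achieved_numbers)
    return current_pn + weight * (avg - current_pn)
```

The point of the sketch: the published number chases the fleet's achieved performance, so a fleet of improving sailors drags the handicap down even though the boat itself has not changed.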
The problem with only using data from classes of a certain size is that smaller classes never get a rating; with a few hours measuring a boat and about an hour on a PC you have an SCHRS (or Texel) rating.
You are correct about the effect of poor sailors. The major problem for a performance system is getting data from races that have enough well-sailed boats, with the top sailors in each class competing in the full range of conditions. If the top three F18 teams are competing against the top three F16 teams, you can ASSUME that the skill set of all 6 is about the same... Therefore, the difference would be the boat class. Throwing poor sailors of a class into the mix will blow up the rating system. Averaging the top performance of poor sailors over time cannot solve the problem of getting the boat's rating correct. The assumption that the sailor skill levels are comparable is incorrect.
I agree that you cannot assume that the skill levels in fleets are the same; this is an area where measurement-based systems win out. Because the rule provides a rating based on the theoretical speed of the BOAT, when sailing under a measurement-based system the BEST sailor should win, as they have sailed their boat closest to its maximum potential.
Consider the Tornado results from Sail Melbourne (http://www.sailmelbourne.com.au/race-results/2008/tornado/series.htm): the finishing times are spread over around 15 minutes from front to back of the fleet, but more significantly the top 10 is still spread by at least 2 or 3 minutes. That’s going to be a few %. Now, as Bundy and Ashby won, we can assume they sailed the boat best over the regatta. Would we expect Bundy and Ashby to win in the F16 if they turned up to Mumbles this year? I would! Skill factor makes a difference in a PY system, because the handicap is built from the observed performance of boat and crew together; it does not in a measurement system.
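Putting rough numbers on that spread (the results page gives gaps, not absolute race lengths, so the 75-minute race duration below is my assumption, purely for scale):

```python
# Assumed typical race length; the Sail Melbourne page does not state it.
race_minutes = 75.0
top_ten_spread = 2.5   # roughly 2-3 minutes between 1st and 10th
fleet_spread = 15.0    # front to back of the fleet

# Spread as a percentage of elapsed time: a few percent even among
# the top 10 of an Olympic-class fleet, and far more across the fleet.
print(round(100 * top_ten_spread / race_minutes, 1))  # -> 3.3
print(round(100 * fleet_spread / race_minutes, 1))    # -> 20.0
```

Even among near-equal crews, several percent of "skill factor" is in play, which is exactly the noise a returns-based handicap absorbs and a measurement-based one ignores.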
The solution... IMO, for performance ratings... is to use the measurement prediction rating until the class hits the benchmark of two national championships... then weight open championships like the Alter qualifiers or any other major championship (Catfight and Spring Fever) heavily when you compute the rating. Hell… if the F16s had a championship start 5 minutes after the F17 Nacra championship, the correct rating would be locked in pretty quickly. Likewise, events like the Tradewinds (times of the first-place finishers around the track) would work towards getting the proper ratings.
IMO, handicap rating systems need to adjust their policy for the 21st century. Otherwise, their flaws significantly outweigh the flaws in the measurement systems.
SCHRS came about because people wanted to race new boats with a rating at once, rather than wait for the returns to come in before they could have a defined, stable and justifiable rating.
In an ideal world we would have 100 of each class of boat sailing against every other class of boat every weekend; then we WOULD be able to use PY, as the returns would be statistically significant. We do not have this situation, and so measurement-based systems are the way to go.
Returns-based systems work when you have ENOUGH RETURNS, as this factors out the skill factor: with 100 boats (or whatever everyone decides is enough) you have enough good people at the front of the fleet.
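The "enough returns" point can be shown with a toy simulation (entirely invented numbers): if each crew sails at some random fraction of the boat's potential, the best observed performance gets closer to the boat's true potential as the number of returns grows, i.e. a big enough fleet always has good people at the front.

```python
import random

def best_observed_speed(true_potential, n_returns, seed=42):
    """Toy model: each return is a crew sailing at 80-100% of the
    boat's potential. More returns -> the front of the fleet sits
    closer to the boat's true speed, so the handicap stabilises.
    The 80-100% skill band is an arbitrary assumption."""
    rng = random.Random(seed)
    return max(true_potential * rng.uniform(0.80, 1.0)
               for _ in range(n_returns))
```

With a fixed seed, the first 5 returns are a subset of the first 100, so the best-observed estimate can only improve (never worsen) as returns accumulate; with too few returns the estimate is at the mercy of whoever happened to show up.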
To quote www.schrs.com:
Whilst it is accepted that the ideal Rating system is one which uses historical results, a Portsmouth Yardstick type system, it has proved difficult to obtain sufficient data to validate such a system around the World. The SCHRS enables new designs to be rated quickly, and allows International regattas to take place with a common handicapping system for many types of Catamaran.
SCHRS is evolving; the management group meets electronically on an ad-hoc basis and exchanges thousands of emails each year.
The problem with trying to compare SCHRS / Texel with PY systems is that they do not measure (and reward) quite the same thing.
SCHRS / Texel rewards the person (or persons) who sail their boat and make the fewest mistakes, i.e. sail the boat to its greatest potential.
Returns-based systems provide a handicap based on the history of the boat's results. Thus they provide a rating based on the speed of the boat AND the skill of the sailors who sail it. YES, you can use data analysis techniques to remove anomalies within this data, but it will not be perfect. PY rewards (to an extent) the person sailing the boat with the most favourable handicap.
[color:"blue"] If people want to propose amendments that they feel should be included (as stated before), and can propose amendments that are manageable, understandable and will not overly complicate the rule, I will read the emails and make all the suggestions available to the other members of the management team (the email address is on the www.schrs.com website), and we will consider them all for inclusion. I cannot promise that they will be included; as with ALL measurement systems, the rule needs to be simple enough to be used. Simplicity is important; otherwise systems die because they are unusable.
I must stress that these changes must be manageable, verifiable, and simple enough to be used on a day-to-day basis by all.
[/color]