A possible alternative to so much tweaking

Post Reply
User avatar
lorduintah
Posts: 642
Joined: Fri Jan 11, 2008 9:37 am
Location: Plymouth, MN

A possible alternative to so much tweaking

Post by lorduintah »

I see a lot of "check calibration" messages, not to mention calibration questions.

Many of these are based on a difference in values from a previous ride, or a calibration that has suddenly gone berserk. Some are due to wind or tilt.

One of the biggest contributors, I think, to all of these issues is the assumption that every ride is the same, every set of conditions is the same, and every result should be the same - for the same route. If it is not, then the wind scaling is off, or the tilt was goofy, or the temperature changed, or ... (you fill in the reasons).

I would suggest that Velocomp consider allowing a selection of rides - even 10-20 might be selected - and have the software use standard statistical techniques to crank out the best fit to them all in one grand fit, preferably using a numerical method such as MLE (maximum likelihood estimation) rather than linear regression (or, for that matter, multiple linear regression). For those who have some exposure to statistical models - engineers, physicists, and other scientists especially - a regression does not mean all the experiments gave exactly the predicted results; rather, fitting the data from all of them leads to a model that "best" fits what was included. I suggest MLE because it is robust and works well with limited information - linear regression can be "fooled" by a single outlier data set. MLE is generally a non-linear method, but there are many routines around that do the job well.
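To make the robustness point concrete, here is a minimal sketch (with made-up numbers, not real ride data). Under Gaussian noise the MLE of a single parameter is the mean - the same answer least squares gives - while under a heavy-tailed Laplace noise model the MLE is the median, which one bogus ride cannot drag far off:

```python
# Sketch: robust vs. least-squares estimation of one calibration parameter
# (a stand-in "CdA" drag value) from many rides. All numbers are invented
# for illustration only.
import statistics

# Per-ride CdA estimates from 10 rides; ride index 6 is an outlier.
rides = [0.32, 0.31, 0.33, 0.30, 0.32, 0.31, 0.90, 0.33, 0.32, 0.31]

mean_fit = statistics.mean(rides)      # Gaussian MLE (= least squares)
median_fit = statistics.median(rides)  # Laplace MLE, robust to outliers

print(f"least-squares fit: {mean_fit:.3f}")   # dragged up by the outlier
print(f"robust (MLE) fit:  {median_fit:.3f}")  # stays near the typical ride
```

The single 0.90 ride pulls the mean well above every ordinary ride, while the median ignores it - which is exactly the behavior you want when one ride in the batch is garbage.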

In this way, the parameters are allowed to vary over every file to get the best overall fit - which means - yes, folks - not every result is identical, but over a range the values used are optimized to agree with every ride equally. This takes the noise from everyday rides and says: on average, here are the parameters that best agree with all the rides you have given me.
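As a sketch of the "one grand fit" idea: pool the samples from several rides and solve for two parameters at once. The ride model below (P = 0.5·rho·CdA·v³ + Crr·m·g·v, with CdA the drag area and Crr the rolling resistance coefficient) is a simplified stand-in, not Velocomp's actual model, and the data are synthetic:

```python
# One grand fit across pooled rides: ordinary least squares for two
# parameters via the 2x2 normal equations. Model and data are a
# simplified illustration, not the real firmware model.
RHO, MASS, G = 1.2, 85.0, 9.81       # air density, rider+bike mass, gravity
TRUE_CDA, TRUE_CRR = 0.32, 0.005     # parameters we hope to recover

# Three "rides", each a list of speeds in m/s; power generated from the model.
rides = [[5.0, 6.0, 7.0], [8.0, 9.0, 10.0], [11.0, 12.0, 6.5]]
samples = [(v, 0.5 * RHO * TRUE_CDA * v**3 + TRUE_CRR * MASS * G * v)
           for ride in rides for v in ride]

# The model is linear in (CdA, Crr): P = CdA*x1 + Crr*x2,
# with x1 = 0.5*rho*v^3 and x2 = m*g*v. Accumulate the normal equations.
S11 = S12 = S22 = b1 = b2 = 0.0
for v, p in samples:
    x1, x2 = 0.5 * RHO * v**3, MASS * G * v
    S11 += x1 * x1; S12 += x1 * x2; S22 += x2 * x2
    b1 += x1 * p;  b2 += x2 * p

det = S11 * S22 - S12 * S12          # solve the 2x2 system by Cramer's rule
cda = (b1 * S22 - b2 * S12) / det
crr = (b2 * S11 - b1 * S12) / det
print(f"fitted CdA={cda:.3f}, Crr={crr:.4f}")
```

With noise-free synthetic data the fit recovers the true parameters exactly; with real rides it would return the single parameter set that best agrees with all of them at once, which is the whole point.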

The issue then becomes - once again - one of how do I know I used a good set of rides to zero in on my parameters? Most likely a figure of merit can be found to flag bogus rides. And the error estimates for each parameter could serve as additional information on how well the group of rides fits the same model.
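One standard candidate for such a figure of merit is the modified z-score, which compares each ride's fitted value to the median using the median absolute deviation (MAD); values scoring above roughly 3.5 are commonly flagged as outliers. A sketch, again with invented per-ride values:

```python
# Figure-of-merit sketch: flag bogus rides by modified z-score (MAD-based).
# The per-ride "CdA" values below are made up for illustration.
import statistics

def flag_bogus(values, cutoff=3.5):
    """Return indices of values whose modified z-score exceeds cutoff."""
    med = statistics.median(values)
    mad = statistics.median(abs(v - med) for v in values)
    if mad == 0:                      # all rides identical: nothing to flag
        return []
    return [i for i, v in enumerate(values)
            if 0.6745 * abs(v - med) / mad > cutoff]

rides = [0.32, 0.31, 0.33, 0.30, 0.32, 0.31, 0.90, 0.33, 0.32, 0.31]
print("flagged ride indices:", flag_bogus(rides))  # only the 0.90 ride
```

A flag like this could be reported to the user before the grand fit runs, answering the "did I feed it a good set of rides?" question automatically.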

I bet a few short discussions with one of the local university statistics departments - or engineering/physics departments - around Boca Raton would point them in the right direction to code this up and quiet the multitudes. A grad student would love this kind of problem.

Just my 2c.

Tom
rruff
Posts: 445
Joined: Wed Apr 23, 2008 10:48 am

Re: A possible alternative to so much tweaking

Post by rruff »

I haven't seen anyone complaining about *minor* discrepancies.
Velocomp
Velocomp CEO
Posts: 7804
Joined: Fri Jan 11, 2008 8:43 am

Re: A possible alternative to so much tweaking

Post by Velocomp »

lorduintah wrote:I would suggest that Velocomp consider allowing a selection of rides - even 10-20 might be selected - and the software use standard statistical techniques to crank out the best fit to them all in one grand fit ...
Agreed
John Hamann
Post Reply