It's sort of a philosophical question, really. I came from a biomedical engineering / signal processing background where regular sampling was essential. Here I can see how it can matter a bit less. However, if you are calculating any derived quantities that use your reported time intervals (e.g. acceleration, horsepower), you want the sampling intervals to be accurate.
So: if you require periodic sampling, you are resampling your data to a single output logging rate (so you appear to have a value for each parameter at each time interval), and the ECU is not instantaneous in its responses, then you have a fixed number of "slots" each interval in which to request and receive parameters, and you might as well use all of those slots every time. So yes, the fuel trim will be logged more frequently if it is the only Pri2 parameter, but only because that still gives the best-case periodic sampling rate; there is no harm in sampling something too often if you have the time for it. The sampling rate for that parameter will slow down as you add more long-term items, but they will all continue to be sampled at regular intervals and will never affect the sampling intervals of the Pri1 parameters.
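The slot scheme above can be sketched in a few lines. This is a minimal illustration, not anyone's actual implementation: I am assuming exactly one spare slot per loop is reserved for Pri2 parameters, and all the names (schedule, pri1, pri2) are made up for the example.

```python
def schedule(pri1, pri2, loops):
    """Return, for each logging loop, the list of parameters requested.

    Pri1 parameters get a slot every loop; the single spare slot
    (an assumption for this sketch) rotates through the Pri2 list.
    """
    out = []
    p2 = 0  # round-robin index into the Pri2 list
    for _ in range(loops):
        this_loop = list(pri1)          # Pri1 params sampled every loop
        if pri2:
            this_loop.append(pri2[p2 % len(pri2)])
            p2 += 1
        out.append(this_loop)
    return out

log = schedule(["rpm", "load", "knocksum"], ["ltft"], loops=4)
# With a single Pri2 parameter it lands in the spare slot every loop,
# so it is sampled just as regularly as the Pri1 params (rate S/4).
```

With two Pri2 parameters they simply alternate in the spare slot, halving each one's rate while leaving every loop's length, and therefore the Pri1 intervals, unchanged.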
Take S as the single-parameter sampling rate and first compare the two methods using your example of three fast parameters (Pri1) and one slower one (Pri2):
Method    Pri1 rate   Pri2 rate   Interval error rate   Interval error magnitude
tephra    S/3         S/200       S/200                 S/3
cboles    S/4         S/4         0                     0
Also look at a case where you have 15 Pri1 params and 10 Pri2 ones (Priority = 200, for example):
Method      Pri1 rate   Pri2 rate   Interval error rate   Interval error magnitude
tephra (1)  S/15        S/200       S/200                 S*10/15
tephra (2)  S/16        S/200       S/20                  S/16
cboles      S/16        S/160       0                     0
In case (1) I am assuming all of your Priority=200 samples are in phase and happen at once. This has a low error rate, but high magnitude.
In case (2) I am assuming all of your Priority=200 samples are dephased using some software smarts. This has a higher error rate, but lower magnitude each time.
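The trade-off between the two tephra cases can be shown with a rough simulation. Everything here is illustrative: I am counting how many extra request slots land on each loop, since each extra slot stretches that loop's Pri1 interval by one slot time.

```python
def extra_slots(loop, dephased, n_pri2=10, priority=200):
    """Extra Pri2 request slots landing on a given loop (illustrative model)."""
    if dephased:
        # case (2): one Pri2 param fires every priority/n_pri2 = 20 loops
        return 1 if loop % (priority // n_pri2) == 0 else 0
    # case (1): all 10 Pri2 params fire together every 200th loop
    return n_pri2 if loop % priority == 0 else 0

in_phase  = [extra_slots(i, dephased=False) for i in range(1, 401)]
staggered = [extra_slots(i, dephased=True)  for i in range(1, 401)]
# in_phase: 2 disturbed loops out of 400, each 10 slots long
# staggered: 20 disturbed loops out of 400, each only 1 slot long
```

Same total disturbance either way; dephasing just spreads it out, trading error magnitude for error rate, exactly as the table shows.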
So in the first example you get a ~25-33% improvement in (average) logging rate, at the expense of sampling interval errors that happen every 200 samples.
In the second example you get only a ~7% improvement in (average) logging rate, at the expense of sampling interval errors that either happen very often or are individually very significant.
Depending on what your goals are, each method has its merits. I am pushing for accuracy over slightly faster but occasionally irregular sampling. Yours also has the benefit of letting you specify the low-priority sampling rates.
Tephra wrote: ok.
In the most basic example:
logging rpm, load and knocksum every loop (because they are super important)
logging long term fuel trim every 200 loops (because it's not important)
In your setup I can't see how you would tell the unimportant item to log MUCH less often?
Don't worry about explaining it if you think it's good enough
I know my priority system can cause a staggered output, i.e. when all the non-important items log they will cause a longer gap in the important items - but that's not too big a problem...