I just wonder whether the following implementation would make sense: while guiding with an OAG, the HFR of the guide star(s) is constantly being measured. To get reliable results, the average of every x exposures is recorded, and from these values a moving average is calculated. When the system moves out of focus, the HFR gets worse, so at some point the most recent HFR value will surpass the moving average (of the last n, e.g. 20, measurements). One could set a threshold (e.g. HFR +10%); once it is surpassed, the imaging run is paused at the end of the current exposure, the autofocus routine is initiated, and the imaging session then continues.
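To make the idea concrete, here is a minimal sketch of the trigger logic described above. The class name, window size (n = 20), and +10% threshold are just illustrative values taken from the example numbers in the proposal, not an actual Ekos implementation:

```python
from collections import deque

class HFRFocusTrigger:
    """Hypothetical sketch: track averaged guide-star HFR values and
    flag when the latest value exceeds the moving average of the last
    `window` samples by more than `threshold` (e.g. +10%)."""

    def __init__(self, window=20, threshold=0.10):
        self.window = window
        self.threshold = threshold
        self.samples = deque(maxlen=window)

    def add_sample(self, hfr):
        """Record a new averaged HFR value; return True if autofocus
        should be triggered at the end of the current exposure."""
        if len(self.samples) == self.window:
            moving_avg = sum(self.samples) / len(self.samples)
            if hfr > moving_avg * (1 + self.threshold):
                # Focus drift suspected: caller pauses imaging and
                # runs the autofocus routine, then resets the window.
                return True
        self.samples.append(hfr)
        return False

    def reset(self):
        """Clear the history, e.g. after a successful autofocus run."""
        self.samples.clear()
```

In use, the guiding loop would feed one averaged HFR value per block of x guide exposures into `add_sample()` and pause for autofocus whenever it returns True.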
This idea is derived from the ONAG real-time autofocus. While the ONAG seems to correct the focus while imaging, the idea here is simply to get an indication of if and when the imaging train has moved out of focus (mirror flop, temperature change, other causes) by a pre-defined threshold.
All the critical algorithms and data are already there: the autoguider takes an exposure every n seconds anyway, and the HFR could be measured by the same algorithms used for autofocus. It is simply a matter of calculating the HFR value and its average and comparing the current values to the moving average; the additional system load should be minimal. The only flip side I can think of is that this approach cannot differentiate between HFR changes due to "out of focus" causes and variable seeing conditions. But even this could be built into an additional feature: if the focus cannot be improved (e.g. because of deteriorated seeing conditions), the imaging run could be paused and restarted once HFR values get better again.
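The seeing-vs-focus distinction suggested above could be sketched as a simple decision after the trigger fires: run autofocus, and if the HFR does not return to near its baseline, attribute the degradation to seeing and pause instead of continuing. Everything here is hypothetical, including the `run_autofocus` callable, which is assumed to perform the focus routine and return the post-focus HFR:

```python
def handle_focus_drift(baseline_hfr, run_autofocus, tolerance=0.10):
    """Hypothetical handler invoked when the HFR trigger fires.

    If autofocus restores the HFR to within `tolerance` of the
    baseline, imaging continues; otherwise the degradation is
    attributed to seeing, and the caller should pause imaging
    until HFR values recover.  Returns "continue" or "pause".
    """
    new_hfr = run_autofocus()
    if new_hfr <= baseline_hfr * (1 + tolerance):
        return "continue"   # focus restored; resume imaging
    return "pause"          # likely seeing; wait for HFR to recover
```

A real implementation would also need a recovery check (e.g. periodically re-measuring the guide-star HFR while paused), but the branch above captures the core idea.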
Anybody thought about that? Does this make sense?
Similar functionality already exists in Ekos. It calculates the HFR from the most recently taken exposure, and if it exceeds a threshold, the autofocus routine is executed. But it uses an absolute value, not a percentage, and the calculation is done on exposures taken by the main telescope, not the OAG or guider.
I do think this is an interesting idea, and, in fact, the HFR (or at least something close to that) is already being computed on the guiding image every iteration, if you're using SEP MultiStar (in the SEP star detection code).
There are some challenges I can think of, though:
When folks are imaging with a refractor and a monochrome camera + a filter wheel, the system will intentionally change its focus on filter changes to compensate for the focus offsets of the imaging filters. The guide camera is almost always placed before the filters, and thus will observe those intentional changes in focus position. That is, in that environment one cannot perfectly focus the guide camera: if the guide camera is in focus during blue imaging, it will be a bit out of focus during red imaging. The good news is that I don't believe this slightly out-of-focus guiding really affects guiding performance much.
As was pointed out by @hades, the system already measures the HFR of the captured sub, and could decide to focus based on that. One of the advantages of the ONAG scheme is that the system can actually update focus while imaging. However, it can do this because the ONAG (a) knows which direction to move the focus, in or out, based on the astigmatism in the defocused image, and (b) perhaps has a more accurate in-focus/out-of-focus indicator, using this astigmatism measure rather than the noisy HFR measurements.