I spent my Christmas holidays well: I have been developing a Bahtinov mask assistant tool for KStars/Ekos. It is still under development, but the sources are already available on GitHub. I have forked the kstars project and created a branch where I am developing the Bahtinov mask assistant tool (as part of the Focus tool in Ekos), see: github.com/prmolenaar/kstars/tree/bahtinov-mask-focus
For this project I first did some research on how to detect the Bahtinov star pattern in an image, and then integrated it into Ekos step by step. As there is no simulator yet that generates the Bahtinov star pattern, I depend on a clear sky to test whether it really works. So far the skies have been very cloudy, so I haven't been able to test the software yet. I will try as soon as possible. When the implementation is working properly I will create a request to merge the changes back into the original kstars project.
I hope this gives other people some hope that the Bahtinov mask focus helper is on its way. If you like, you can give it a try (download the sources, build it and run it) and post the result on this forum.
A short tutorial:
- Start kstars / Ekos
- Connect your CCD
- Select Focus tab
- Select algorithm "Bahtinov mask"
- Select a star
* if you have a Bahtinov mask on your telescope and a bright star in view, a Bahtinov star pattern should be visible.
* if the software recognizes the Bahtinov star pattern, it will draw lines over the star pattern and circles on the center and at an offset to indicate the focus
* also, the HFR value is set to the calculated offset, which also updates the graph
Good luck experimenting with it, and please be aware that the drawing of the star pattern on the image has not been tested yet, so it might not work properly.
The other focus algorithms haven't been touched, so they still work as before.
That is correct, this is indeed a complete KStars fork that will replace KStars on your system.
I am currently doing research into the generation of Bahtinov diffraction patterns, which could perhaps be integrated into the INDI CCD simulator to support testing the Bahtinov focus assistant in KStars/Ekos. But that is quite a difficult matter, so for now testing is only possible on real stars, not with the CCD simulator.
The Bahtinov tool in APT is a licensed version of Bahtinov Grabber. The original allows the bounding box to be freely sized; the tool in APT has a rather small fixed capture area. Whether this is imposed by ROI limitations is not known.
Having the bounding box scalable is useful with zoomed-in images.
The FITS viewer in KStars contains controls to zoom in and out without resizing the capture area. To the best of my knowledge it is possible to zoom in and out in the DSLR viewer as well using a mouse, but I never got that to work over VNC using a MacBook trackpad.
Wouter van Reeven
ASI6200MM and 7 slot 2" filter wheel with a SkyWatcher Esprit 80 ED on a SkyWatcher HEQ5-Pro
ASI1600MM-Pro Cooled and 5 slot 1.25" filter wheel with an 8" TS Ritchey-Chrétien on a SkyWatcher EQ6-R
INDI/KStars on Raspberry Pi 4, 4gb
Raspbian Buster with AstroPi3 script configuration
Skywatcher HEQ5 Pro Mount
Canon 600D Camera
Orion SSAG/ASI120mm @280mm Guide Scope
Waveshare Stepper Motor Board - DIY Focuser
Adafruit GPS Module
Generic Bluetooth Joystick.
Startech 7 port powered USB Hub.
I have looked at the Muskulator app; that was the inspiration for my research. Unfortunately only the executable is available, and I could not find the source code or an explanation of the mathematics used in the app. Looking at the files in the app shows that an FFT (Fast Fourier Transform) is used, and probably some Fresnel calculations, as the name Fresnel appears in the app. But that's about all I could get from it.
If someone knows a site where diffraction pattern calculations for Bahtinov masks are explained, then I would be really interested.
A small update on the progress of the Bahtinov assistant in KStars/Ekos. I had a clear night yesterday and found out that the detection of the diffraction pattern is not working yet. So I will continue with it to make it work.
Not too much knowledge. What they use is so-called light propagation using FFT methods. Contrary to ray tracing, that allows for diffraction effects. There used to be a (free) software package called LightPipes (IIRC) doing this. I just searched for it and found that there is a Python implementation in the LightPipes package. Maybe that gives you a start.
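If it helps as a starting point: the core of the FFT light-propagation idea, in its simplest (Fraunhofer, far-field) form, is that the diffraction pattern of an aperture is the squared modulus of its Fourier transform. Below is a minimal pure-Python sketch of that relation (my own illustration, not LightPipes code; a real simulator would use a fast FFT and also handle Fresnel propagation, sampling and physical scaling):

```python
import cmath

def dft2(a):
    """Naive 2-D discrete Fourier transform; fine for tiny demo grids."""
    n = len(a)

    def dft1(v):
        N = len(v)
        return [sum(v[k] * cmath.exp(-2j * cmath.pi * j * k / N)
                    for k in range(N)) for j in range(N)]

    rows = [dft1(r) for r in a]                   # transform every row
    cols = [dft1([rows[i][j] for i in range(n)])  # then every column
            for j in range(n)]
    return [[cols[j][i] for j in range(n)] for i in range(n)]

def far_field(aperture):
    """Fraunhofer approximation: far-field intensity is |FT(aperture)|^2.
    'aperture' is a 2-D grid of transmission values (1 = open, 0 = blocked),
    e.g. a rasterized Bahtinov mask."""
    return [[abs(z) ** 2 for z in row] for row in dft2(aperture)]
```

For a fully open aperture all the energy lands in the zero-frequency bin; rasterizing a Bahtinov mask's two slanted slit regions instead spreads it into the familiar three diffraction spikes.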
openSUSE Tumbleweed KStars git INDI git
GPDX+EQMOD, CEM60EC, ASI2600/1600/290mini+EFW+EAF
You probably don't see my changes because I have not yet merged them into the main branch. I have created a branch called 'bahtinov-mask-focus' in my kstars repo. If you check out this branch, you will have my changes.
$ git fetch origin
$ git checkout bahtinov-mask-focus
Broadly, these are the changes I made:
In kstars/kstars/ekos/focus/focus.ui there is a combobox called focusDetectionCombo. This gets an extra value named "Bahtinov Mask". kstars/kstars/fitsviewer/fitsdata.cpp contains the implementation of the Bahtinov algorithm, and kstars/kstars/fitsviewer/fitsview.cpp does the drawing of the lines on the image. Some extra parameters have been added to the Options class.
I am a little short on time this week to develop, but hope to get some time for it next week.
There is not much logging in the code; I usually test it visually by aiming at a star, with the Bahtinov mask in place, and then getting the image in KStars. Then I check whether the focus module draws the right lines on top of the image.
I found out last week that it does not. Some lines are drawn, but they don't make sense. Probably the lines are not detected correctly; it could also be that the coordinates for drawing the lines are wrong, but I have to look into that some more. Printing the variables of the detected lines will usually help to see whether the detection worked.
I have saved the images I captured and am feeding them into my example application (which I haven't shared on GitHub yet), which uses the exact same algorithm. I noticed that the example application had a really hard time recognizing the lines because the images were too noisy. I have been tweaking the few parameters I have, but they seem to be quite different for each image I feed it.
So currently I am trying two things:
- add some sort of algorithm to analyse the data and determine the best values for the parameters
- implement a completely different algorithm to detect the lines in the image
The current line detection is as follows:
1) start processing the image the same way as in the already-implemented Canny algorithm: get the image data and apply a MEDIAN and HIGH_CONTRAST mask
2) apply the Sobel algorithm (determine horizontal, vertical and diagonal edges)
3) apply a thinning algorithm (sharpen the edges)
4) apply a threshold (making it a 3-color image: black, 50% grey and white)
5) apply hysteresis (making it a black-and-white image)
6) apply a Hough transform, which returns an array of lines detected in the image
7) take the 3 brightest lines and use them to determine the focus offset:
7.1) sort the lines in order of angle
7.2) determine the intersection between the two outer lines
7.3) determine the distance between that intersection and the middle line; that distance is the offset
8) place the data for drawing the lines in the BahtinovEdge class (a derivative of the Edge class used for drawing focused stars)
9) use the BahtinovEdge class to draw the lines on the image (in fitsview.cpp)
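The offset sub-steps (sort by angle, intersect the two outer lines, distance to the middle line) can be sketched in a few lines. This is my own illustration, not the actual fitsdata.cpp code, and it assumes each detected line comes out of the Hough transform in normal form (theta, rho), i.e. x·cos(theta) + y·sin(theta) = rho:

```python
import math

def intersect(l1, l2):
    """Intersection of two lines in Hough normal form (theta, rho):
    x*cos(theta) + y*sin(theta) = rho."""
    (t1, r1), (t2, r2) = l1, l2
    det = math.cos(t1) * math.sin(t2) - math.sin(t1) * math.cos(t2)
    x = (r1 * math.sin(t2) - r2 * math.sin(t1)) / det
    y = (r2 * math.cos(t1) - r1 * math.cos(t2)) / det
    return x, y

def bahtinov_offset(lines):
    """Focus offset from the three brightest lines: the distance from the
    intersection of the two outer lines to the middle line."""
    lines = sorted(lines)                    # sort by angle
    x, y = intersect(lines[0], lines[2])     # outer-line intersection
    t, r = lines[1]                          # middle line
    # perpendicular distance from (x, y) to the middle line
    return abs(x * math.cos(t) + y * math.sin(t) - r)
```

At perfect focus the central spike passes through the intersection of the outer spikes, so the offset goes to zero.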
The alternative algorithm I was thinking of is the one used in the Bahtinov Grabber application: take the image and rotate it through 180 degrees in steps of 1 degree, and for each step calculate the average brightness of each horizontal line. When a diffraction line in the image is positioned horizontally, the average brightness of that horizontal line will be higher. Store the highest value in an array.
After all rotation steps are done, determine the three highest values in the array; these should correspond to the three diffraction spikes of your Bahtinov mask.
Then the processing continues from step 7 as described before.
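A rough pure-Python sketch of that rotation approach (my own illustration, not the actual Bahtinov Grabber code; it uses nearest-neighbour sampling, and a real implementation would also enforce some angular separation so the top three peaks are not adjacent angles of the same spike):

```python
import math

def max_row_brightness(img, angle_deg):
    """Sample the image along rows rotated by angle_deg about the centre
    and return the highest average row brightness. img is a 2-D list."""
    h, w = len(img), len(img[0])
    cy, cx = (h - 1) / 2, (w - 1) / 2
    a = math.radians(angle_deg)
    best = 0.0
    for y in range(h):
        total, n = 0.0, 0
        for x in range(w):
            # rotate (x, y) around the image centre, nearest-neighbour lookup
            sx = cx + (x - cx) * math.cos(a) - (y - cy) * math.sin(a)
            sy = cy + (x - cx) * math.sin(a) + (y - cy) * math.cos(a)
            ix, iy = round(sx), round(sy)
            if 0 <= ix < w and 0 <= iy < h:
                total += img[iy][ix]
                n += 1
        if n:
            best = max(best, total / n)
    return best

def spike_angles(img, step=1):
    """Scan 0..179 degrees; a diffraction spike lying horizontal at some
    angle produces a bright row, so the top-scoring angles should belong
    to the three spikes (near-duplicate angles still need merging)."""
    scores = [(max_row_brightness(img, a), a) for a in range(0, 180, step)]
    scores.sort(reverse=True)
    return [a for _, a in scores[:3]]
```

The three peak angles would then feed into the offset computation of step 7 of the earlier pipeline.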
I am also trying to apply a Gaussian blur filter instead of the MEDIAN and HIGH_CONTRAST mask to see if that makes a difference.
I hope this information will help you on your way.
"I am a little short on time this week to develop, but hope to get some time for it next week.
There is not much logging in the code, I usually test it visually by aiming at a star"
You might consider making a simple artificial star: as simple as a light source in a box with a pinhole in some aluminium cooking foil. Along with a short telescope, like a spare finder/guide scope or even a camera, you then have a controlled source without the vagaries of the weather or indeed the time of day. It is important to keep the focal length short if you need a short subject-to-camera distance, for use indoors for example.