I was wrong regarding my finding about querying the exposure control. I switched my code to run exposures asynchronously and I am still receiving exposures in twice the expected time.
I will work on generating the debug logs from the indiserver.
I might have figured out what is causing the doubled exposure time. I made a new script that is a "minimum viable product" for taking exposures, but it performs all of the configuration the way it is normally done in my program. Doing this, I could quickly change various settings to see how the program would react. I found that I could not consistently get the doubled exposure until I was taking 9-second exposures. At 8 seconds and below it might happen rarely, but at 9 seconds it happened fairly often.
This is my MVP test script:
My INDI code (which is based on Marco Gulino's indi-lite-tools) has a synchronous mode for taking exposures. Basically, it sets the exposure value and then queries the status of the exposure control until it is idle. The code I was using queried the status of the control every 0.05 seconds, or 20 times a second. Backing off to 0.15 seconds between status calls seems to do better. It appears the cause of the doubled exposure is aggressive querying of the exposure control's status. I am not sure whether it is the total number of status calls or the calls-per-second causing the behavior.
It does appear to be specific to the SV305. I have tested several other cameras (ASI, QHY) and they do not exhibit the same behavior.
This is the code where my script queries the status:
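The actual script is not reproduced here, but the polling pattern described above can be sketched roughly as follows. The `get_state` callable and the "BUSY"/"IDLE" strings are placeholders for whatever your INDI client library provides (e.g. reading the `CCD_EXPOSURE` property state, where PyIndi-style clients return constants like `IPS_BUSY`):

```python
# Hedged sketch of the synchronous wait loop described above, not the
# author's actual code. get_state() stands in for however your client
# reads the CCD_EXPOSURE control's state.
import time

def wait_for_exposure(get_state, poll_interval=0.15, timeout=60.0):
    """Block until the exposure control leaves the BUSY state.

    poll_interval=0.15 is the backed-off value that avoided the doubled
    exposures on the SV305; 0.05 (20 calls/sec) triggered the problem.
    """
    deadline = time.monotonic() + timeout
    while get_state() == "BUSY":
        if time.monotonic() > deadline:
            raise TimeoutError("exposure did not complete in time")
        time.sleep(poll_interval)
```

The only substantive change from the problematic version is the default `poll_interval`.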
I also tried to use the Python sep module, but ultimately it would not accept any data from INDI. INDI [mostly] produces 16-bit unsigned integer FITS data, whereas sep seemed to want 32-bit floating point data. I may have the types slightly wrong, but I never could get sep to accept FITS data from INDI.
Image: Full clouds (no stars)
INFO:root:SEP processing in 0.3782 s
INFO:root:Deduplication in 0.0001 s
INFO:root:Total in 0.3783 s
INFO:root:Found 0 objects
Image: Transparent clouds
INFO:root:SEP processing in 0.3807 s
INFO:root:Deduplication in 0.0089 s
INFO:root:Total in 0.3896 s
INFO:root:Found 27 objects
Image: Clear, no clouds
INFO:root:SEP processing in 0.3813 s
INFO:root:Deduplication in 1.0220 s
INFO:root:Total in 1.4033 s
INFO:root:Found 297 objects
I have a test script for each method and some example images to torture them.
I will post each set of test images for cv2 matchTemplate() and scikit blob_dog(). There is quite a bit of tuning that can be done to each method that can affect the results. You can increase sensitivity in one case but increase the false positive rate in another. The settings I have found seem to work well across each type of image I have.
result = cv2.matchTemplate(image, star_template, cv2.TM_CCOEFF_NORMED)
stars_filtered = numpy.where(result >= 0.55)
stars = blob_dog(image, max_sigma=5, min_sigma=1, threshold=.1, overlap=0.1)
I am just using a ZWO 2.1 or 2.5mm CS mount lens that came with one of my other cameras I use for autoguiding.
At the moment, I am just counting the instances of an image of a single star in the picture. I am not using it for SEP or plate solving, but it would not be too difficult to do that.
From reading the documentation, it seems to be best described as finding instances of a picture within another picture. You can use this for complex pattern matching, but stars are very simple objects. Most of the examples I looked at included an example image of a star. In my case, instead of using a real star image, I just generate a fake star image using cv2 by drawing a white circle and applying a blur() function to it: a perfect star every time.
As for the speed, it is one of the faster methods I have used. It can find all of the stars in a 1920x1080 image covering 120 degrees of the sky in about a second. The only problem is that it generates a lot of duplicate findings, detecting the same star multiple times. Eliminating the duplicates slows things down a lot, but it is still *at least* 2-3 times faster than scikit-image blob_dog() for similar results. It is not perfect, but it is good enough for my purposes.
Here is an example. I only look for patterns in the central box of the image. This is to stay away from the trees. The gaps in the leaves look like stars to the pattern matching.
1035 "stars" were detected in 0.5 seconds on a Raspberry Pi 3. The brighter the red circle, the more times it was duplicated.
Eliminating the duplicates brings the star count to 198 which adds another 2 seconds to processing.
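One simple way to do this kind of deduplication (a hedged sketch, not necessarily the approach used in my script) is to treat any two hits closer together than roughly the template size as the same star, keeping a count of how many raw hits each star absorbed, which is what drives the circle brightness above:

```python
# Hedged sketch: collapse duplicate matchTemplate() hits into one entry
# per star. ys/xs are the row/column arrays from numpy.where(); min_dist
# is an assumed grouping radius in pixels.
def deduplicate(ys, xs, min_dist=10):
    stars = []  # each entry is [y, x, hit_count]
    for y, x in zip(ys, xs):
        for star in stars:
            if abs(star[0] - y) < min_dist and abs(star[1] - x) < min_dist:
                star[2] += 1    # close enough: count it as a duplicate
                break
        else:
            stars.append([y, x, 1])
    return stars
```

Note that this naive loop is O(hits x stars), which is consistent with deduplication dominating the runtime once the raw hit count climbs into the thousands.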
I have integrated some star detection code. I initially used your example, and it worked very well, but it was slower and more processor-intensive than I really wanted.
I found another method using OpenCV template matching that is much faster and more sensitive.