

Ekos Optical Trains

Hello folks. When I previously talked about Ekos supporting multiple cameras, I mentioned the idea of an Optical Train (OT). Right now, I'm trying to gather opinions on this matter from both developers and users so we can arrive at a good architecture to make this happen.

Just like the Port Selector tool, this would pop up the first time a profile is run. After all devices are connected, you are asked to configure the Optical Train(s). You assign a label to each train; the optical trains are tied to the profile. You then configure each train, dragging and dropping devices into their respective places (slots) in the train. From then on, you no longer select camera/filter/focuser/guider, etc. individually; you simply select which train to use.

Each optical train is tied to an equipment profile, so if you create a new equipment profile, you need to configure the OT(s) for that particular setup. This paves the way for multi-camera setups: not just 2, but an N-camera setup would be possible. Of course, this could get very complex and interesting very quickly.

Suppose for example you create a setup like this:

OT1: Scope1 -> Reducer -> Focuser1 -> OAG1 -> FW1 -> Camera1
OT2: Scope2 -> FW2 -> Camera2
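The two trains above could be modelled as a simple ordered chain of devices. This is only an illustrative sketch of the data model, not the Ekos API; the names `OpticalTrain`, `label`, and `devices` are assumptions:

```python
from dataclasses import dataclass

# Hypothetical data model: an optical train is an ordered chain of
# devices, keyed by its label and tied to an equipment profile.
@dataclass
class OpticalTrain:
    label: str
    devices: list[str]  # ordered from telescope to camera

trains = {
    "OT1": OpticalTrain("OT1", ["Scope1", "Reducer", "Focuser1",
                                "OAG1", "FW1", "Camera1"]),
    "OT2": OpticalTrain("OT2", ["Scope2", "FW2", "Camera2"]),
}
```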

Then you create a sequence to capture 5x frames for OT1, and 10x frames for OT2. The Guide module is set to use OT1 (since OAG1 is defined there). You slew the mount to a target and begin capturing. Ekos should now trigger both the OT1 and OT2 cameras to capture at the same time. However, in order to sync everything up, we need to be careful about:

1. Suppose we need to change the FW1 filter. We might need to adjust the focuser by some offset, which would cause guiding on OAG1 to be suspended. This would ruin the image being captured on OT2, so we either wait until the OT2 camera finishes the current frame, or we abort the OT2 capture and finish the focus + filter wheel change.
2. A meridian flip would have to wait for both cameras to finish.
3. Dithering cannot start until both cameras complete their current frame. Basically, anything affecting the mount (slew/guide/flip) would need all N cameras to complete their current capture.
4. ...etc
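The common rule behind these points can be sketched as a single gate: a mount-affecting action may only start once the slowest train has finished its in-flight exposure. This is a hypothetical sketch, not Ekos code:

```python
# Hypothetical sketch: any mount-affecting action (dither, meridian
# flip, slew) waits for the slowest train's current exposure to end.
def mount_action_start(exposure_end_times):
    """exposure_end_times: train label -> absolute end time in seconds.

    Returns the earliest moment a mount-affecting action may begin.
    """
    return max(exposure_end_times.values())

# OT1 finishes its frame at t=120 s, OT2 at t=300 s:
print(mount_action_start({"OT1": 120.0, "OT2": 300.0}))  # 300.0
```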

Please share your thoughts and concerns on this idea so we can come up with a solid architecture.
Jasem Mutlaq
Support INDI & Ekos; Get StellarMate Astrophotography Gadget.
How to Submit Logs when you have problems?
Add your observatory info
1 month 3 weeks ago #78060


I really like the idea, and I would really appreciate it if we separated INDI device profiles from optical trains. My problem is that I have a limited set of INDI devices (mount, CCD, guiding camera, ...) which I use on several scopes, and these scopes typically have more than one configuration (with and without reducer, OAG guiding vs. guide-scope guiding).

Currently, I need to stop the profile (and subsequently the INDI devices) to change the values - which is technically unnecessary.

An optical train that I could configure and select AFTER having started the INDI devices would be the main advantage, making Ekos even more attractive for me. Running more than one optical train on the same rig is currently too complex for me.

Cheers
Wolfgang
TSA-120 + epsilon-160 + FSQ-85 + GSO 150/750 | Avalon Linear + M-zero | ASI 1600mm pro + 6200mm pro | KStars/INDI on Raspberry Pi 4/Intel NUC
1 month 3 weeks ago #78077



Replied by Eric on topic Re:Ekos Optical Trains

In this description, the mount device is excluded from the Optical Train denomination. It is indeed a good point that one Ekos instance should control at most one mount device, although I'd be interested in hearing about specific use cases. If there is ever a need to control multiple mount devices, the incentive would be to have multiple Ekos instances, possibly within a single KStars host process (out of scope here).

With this approach, some Ekos modules may modify device settings, while others are restricted to listening to events. The config would be responsible for managing these access restrictions or, more simply, for informing each instantiated Ekos module whether it may write to, or only read, the device settings through the INDI drivers. This information would be defined when creating the config.
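Such per-module access restrictions could be recorded as a simple lookup keyed by module instance and train. This is an illustrative sketch only; `Access`, `module_access`, and `may_write` are made-up names, not Ekos API:

```python
from enum import Enum, auto

# Hypothetical sketch: the profile records, per module instance,
# whether it may write device settings or only listen to events.
class Access(Enum):
    READ_ONLY = auto()
    READ_WRITE = auto()

module_access = {
    ("Capture", "OT1"): Access.READ_WRITE,
    ("Capture", "OT2"): Access.READ_WRITE,
    ("Focus",   "OT2"): Access.READ_ONLY,
}

def may_write(module, train):
    # Unknown module instances default to no write access.
    return module_access.get((module, train)) == Access.READ_WRITE
```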

Because we may now have multiple instances of Ekos modules, we need separable settings sets. Today, Ekos settings are stored in the KStars database, with only one instance of each setting item. We need to move to individual settings sheets for each Ekos module. We could continue to store them in the KStars database, but it would be more interesting to serialise them into files (and make sure we can export the current settings properly). Users should be able to save and load settings for each target observation. Further on, we could offer a library in which to store those settings, and embed them inside Scheduler plans, with the necessary UI helpers to ease manipulation.
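One settings sheet per (module, train) pair, serialised to a file, might look like the following sketch. The file-naming scheme and function names are assumptions for illustration, not a proposal for the actual format:

```python
import json
from pathlib import Path

# Hypothetical sketch: each Ekos module instance serialises its own
# settings sheet so multiple instances don't collide in one shared
# database row, and sheets can be exported per target observation.
def save_settings(root, module, train, settings):
    path = Path(root) / f"{module}.{train}.json"
    path.write_text(json.dumps(settings, indent=2))
    return path

def load_settings(root, module, train):
    path = Path(root) / f"{module}.{train}.json"
    return json.loads(path.read_text())
```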

Then comes the question of state, so that alignment doesn't take place during a meridian flip, with guiding active, all while trying to finish focusing. Currently, state is shared through notifications between Ekos modules, which then proceed, suspend, or abort. With multiple module instances, the question becomes more complex. Where to keep that state so other modules can reference it is something we need to think through carefully.

-Eric
HEQ5-Pro - Atik 314E - Orion ED80T - DMK21 on Orion 50mm
DIY 3D-printed Moonlite and FWheel RGB/LPR
KStars and indiserver on two Atom 1.6GHz 1GB RAM Linux, VPN remote access
1 month 3 weeks ago #78078


Another point we need to consider is where we hold the state of each task, like capturing, focusing, guiding, etc. Currently, the UI class holds most of the state information; in fact, most of each tab is dedicated to setting the dozens of parameters.

Do we want to stick with this, with the consequence that, in the presence of two imaging cameras for example, we need two Capture tabs? Or shall we separate the UI from the task and introduce separate task models, maybe with their own persistence layer, as Eric described above?
TSA-120 + epsilon-160 + FSQ-85 + GSO 150/750 | Avalon Linear + M-zero | ASI 1600mm pro + 6200mm pro | KStars/INDI on Raspberry Pi 4/Intel NUC
1 month 3 weeks ago #78090


Next point: since we need to do a major refactoring anyway, I would like to extract the highly elaborate - but hard-wired - logic into proper state machines. A good example is the filter manager, which reacts to each event by creating queues of actions that need to be executed.
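The filter-manager pattern described here could be made explicit along these lines. This is a deliberately simplified sketch, not the actual KStars code; the state names and action strings are invented for illustration:

```python
from collections import deque

# Hypothetical sketch: a filter change as an explicit state machine
# that drains a queue of actions, instead of nested if/then/else
# scattered across event handlers.
class FilterChange:
    def __init__(self):
        self.state = "IDLE"
        self.queue = deque()

    def request(self, new_filter, focus_offset):
        self.queue.extend(["SUSPEND_GUIDING",
                           f"SET_FILTER:{new_filter}",
                           f"MOVE_FOCUSER:{focus_offset}",
                           "RESUME_GUIDING"])
        self.state = "RUNNING"
        return self.step()

    def step(self):
        # In a real driver each action completes asynchronously and
        # its completion event triggers the next step; here we just
        # pop actions to show the sequencing.
        action = self.queue.popleft()
        if not self.queue:
            self.state = "IDLE"
        return action
```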
TSA-120 + epsilon-160 + FSQ-85 + GSO 150/750 | Avalon Linear + M-zero | ASI 1600mm pro + 6200mm pro | KStars/INDI on Raspberry Pi 4/Intel NUC
1 month 3 weeks ago #78091



Replied by Thomas Mason on topic Re:Ekos Optical Trains

Seems like one way to keep things reasonably in sync is to define a base exposure time and then require exposures to be integer multiples of it. E.g., you might choose a base of 120 seconds; one train could be taking 1x (120 s) subs while the second does 3x (360 s) subs. Given some slop for file transfer, things would stay more or less in sync, so dithering, for example, could take place when you have a break in both; by definition it would dither on the longer subs' timebase. This would allow some flexibility to account for differences in filter/aperture/saturation. The same would apply to refocusing, with its attendant pause in guiding for OAG: you wait for all trains to complete, then focus everyone. Because things would end up a little offset by the breaks between shorter subs, you would lose a little efficiency compared to each train running its own sequence totally independently. But it would be more flexible than requiring all trains to image at the same exposure time, and there is less chance of error than relying on the operator to construct sequences where the subs are integer multiples of one another.
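The arithmetic behind this proposal reduces to a least common multiple: with a base exposure B and per-train multiples k_i, every train is between subs at common multiples of the sub lengths. A small sketch (function name is illustrative):

```python
import math

# Sketch of the base-exposure idea: with base B seconds and trains
# shooting k_i * B second subs, mount-affecting events (dither,
# refocus) can land where all trains are between subs.
def sync_interval(base, multiples):
    """Seconds between moments when every train is between subs."""
    subs = [base * k for k in multiples]
    return math.lcm(*subs)

# 120 s base, one train at 1x and one at 3x: dither every 360 s.
print(sync_interval(120, [1, 3]))  # 360
```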
Borg 107FL, Astro-Tech AT130EDT; Rainbow Astro RST-135 SkyShed Pier; QHY600PH Chroma LRGBHa; QHY5-III-462C; IR Guiding WO Uniguide 50 & ASI290mm mini; ASUS PN51 ubuntu, kstars/ekos, & firecapture; Pegasus PPBA; Stellarvue Optimus + WO Redcat, Skyguider Pro RT90C, rPi4/stellarmate
1 month 3 weeks ago #78099



Replied by Hans on topic Ekos Optical Trains

Interesting idea. Some thoughts :
I would not want to assume that having OAG1 in OT1 implies that it is the guider to use, or even that the OAG port has a guide camera attached. What if OT2 also has a guide camera somewhere? Which one should be used then? I'd like to see the guide camera explicitly listed in the optical train as well. The choice of which camera to use should remain with the user, as we cannot deduce it. Maybe the user wants to guide with the main camera of OT2. Also, what if the guide camera is physically there but not to be used by INDI because PHD2 accesses it natively? (This is what I actually use.) The same goes for my SX-AO unit: it's there in the optical train but not accessed by INDI, as PHD2 controls it natively.
Then on the idea of two imaging cameras: I like it :) And I see challenges, like when to dither; both cameras need to wait for that to happen, and if one camera is waiting for the other to complete its sub, it might have enough time to take another complete sub itself. Extrapolating to N cameras is cool; I agree we should design for N >= 1 immediately once we leave N == 1, where we are today.
I wonder what the purpose of something like a reducer in the optical train is to INDI/Ekos. It could be used to calculate the new focal length, of course, but then all spacing rings etc. would need to be added too! That would be awesome to have, of course.
OT-N support is very interesting, and it will be difficult to implement right without impacting N == 1 stability, which is already quite a challenge today :P In the end I think it will improve stability, so I'm in :)
-- Hans
1 month 3 weeks ago #78100


Replied by Jasem Mutlaq on topic Ekos Optical Trains

Great feedback folks! How do we break this into milestones? Perhaps:

1. Better equipment manager that includes DSLR lens and focal reducers (I'm working on this).
2. Optical Train Editor + backend database.
3. Module settings manager?
4. Decouple state from GUI for each module?
Jasem Mutlaq
Support INDI & Ekos; Get StellarMate Astrophotography Gadget.
How to Submit Logs when you have problems?
Add your observatory info
1 month 3 weeks ago #78113


For me, an optical train editor would create a lot of value even at the beginning, even if it can handle only a train with a single imaging camera. I would work on a proposal for state machines.
TSA-120 + epsilon-160 + FSQ-85 + GSO 150/750 | Avalon Linear + M-zero | ASI 1600mm pro + 6200mm pro | KStars/INDI on Raspberry Pi 4/Intel NUC
1 month 3 weeks ago #78116



Replied by Grimaldi on topic Ekos Optical Trains

Hi Jasem,

good idea, but similar to what Eric and Wolfgang pointed out, I believe it can only be a first step. Take, e.g., FlipFlats/FlipDark, a flat position (light panel at a fixed Alt/Az position), a roll-on-roll-off roof, or a dome. Then add constraints like certain positions that need to be avoided, a position the scope must be in to safely close the observation hut, an observatory horizon with a few trees, or a weather station, and imagine scheduling an observation list over a few nights while reacting to changing weather conditions.

It seems that hardcoding the collaboration between modules, as it is done now, will not be a good solution in the long run, as we can neither predict nor code every automation or interaction between modules that our users will want to have.

The first question therefore is: how far does Ekos/INDI want to go? Does it want to be a solution capable (in the long run) of running a remote observatory, or not? If not, what exactly will be out of scope, and how will it interface with software capable of doing the out-of-scope work?

Let me assume in the following that Ekos/INDI wants to go quite far on this journey. How could we achieve this?

The first step, as Wolfgang said, will be refactoring the existing code to separate UI and state more clearly. I can only speak for the plate-solving module right now, but it really orchestrates many of the other modules (take picture, solve, slew to position), all in order to execute a polar align. All of this is done through code that reacts to events and tries to keep track of the current state in variables. Many event-processing routines consist of lots and lots of if/then/else on how to react to an event given the current state. For someone new to the code base, it is hard to reason about and hard to change. It is also completely unclear which events to expect, when, and in what order, without reading the other module's code. Here a Strategy and/or Visitor pattern should be applied (different ones for Load&Slew, Take&Sync, PolarAlign, AimAfterMeridianFlip, etc.) that makes the state, and how to react to events, explicit. This yields a set of orchestrator classes for each module. Hopefully this will also make it easier to attract contributors to the code base.
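The Strategy idea suggested here can be sketched minimally: each workflow becomes its own class with an explicit handler per event, replacing one big if/then/else cascade keyed on state variables. All class, event, and action names below are invented for illustration, not the actual KStars code:

```python
# Hypothetical Strategy sketch for the align module: the active
# strategy, not a shared state variable, decides how to react.
class LoadAndSlew:
    def on_solve_success(self, coords):
        return ("SLEW_TO", coords)

class PolarAlign:
    def on_solve_success(self, coords):
        return ("MEASURE_POLAR_ERROR", coords)

def dispatch(strategy, event, payload):
    # Route the event to the matching handler on the active strategy.
    return getattr(strategy, f"on_{event}")(payload)
```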

As a parallel step, I believe a different way of collaboration between modules will be necessary. While currently this is a form of choreography (looking at others, then doing the right thing), I'd introduce an Orchestrator class (or set of classes) for the whole system (including the optical train) that is responsible for executing a single job. This should allow us to reason about the whole system and avoid regressions during the refactoring of step one. Then, as more optical trains come in, relax this and have independent but collaborating orchestrator classes. This will hopefully also get rid of some of the more recent bugs where this collaboration fails.

If we want to keep the choreography, a Blackboard could be used as an alternative, where all modules publish state so that other modules can be aware of what's happening and can veto, vote, or otherwise collaborate using some global state. (I don't have experience with such an architecture, so I don't know if it really makes things easier.) I believe it would need to be thread-safe.
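A blackboard of this kind could be as simple as the following sketch: modules publish state and may veto mount motion, with a lock providing the thread safety mentioned. All names are illustrative assumptions:

```python
import threading

# Hypothetical blackboard: a shared, lock-protected store where
# modules publish state and can block mount-affecting actions.
class Blackboard:
    def __init__(self):
        self._lock = threading.Lock()
        self.state = {}      # (module, key) -> value
        self.vetoes = set()  # modules currently blocking mount motion

    def publish(self, module, key, value):
        with self._lock:
            self.state[(module, key)] = value

    def veto_mount(self, module, active=True):
        with self._lock:
            if active:
                self.vetoes.add(module)
            else:
                self.vetoes.discard(module)

    def mount_may_move(self):
        with self._lock:
            return not self.vetoes
```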

The next step could then be to move that collaboration, sooner or later, into some DSL (domain-specific language) or a rule engine, so that it becomes configuration. The task would then be to increase its coverage, avoid hardcoding, and open things up for more customization, supporting less common combinations of products more easily.

One thing that I'd not change is using INDI as an abstraction layer for instruments and as a means of separating compute.

Hope this helps,

Jens
 
1 month 3 weeks ago #78140

