
  1. #1
    Join Date
    May 2012
    Location
    Tõravere, Estonia
    Posts
    71

    Default CMOS Cameras - Stacking many very short exposures

    Hello!

    I have a few questions about methods for controlling external programs from ACP. My goal is the following: I want to use a CMOS camera with very short exposure times to do photometric measurements of bright stars. While this is definitely possible from the camera side, one cannot beat Nature: a 100-millisecond exposure through an 8 cm telescope gives awful scintillation noise. The solution is "easy" - one just has to stack many frames. An effective exposure time of ~50 seconds is already getting there. But for that, 500 0.1-second frames must be stacked. When each one of them is 40 MB in size (20 Mpix cameras...)... well, there is probably no need to explain more.
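    A minimal sketch of the stacking being described, assuming the subframes are already in memory as NumPy arrays (averaging N statistically independent frames cuts the scintillation scatter by roughly sqrt(N)):

```python
import numpy as np

def stack_frames(frames):
    """Average equal-sized frames into one effective long exposure.

    Scintillation is roughly uncorrelated between short exposures,
    so averaging N frames reduces its scatter by about sqrt(N).
    """
    cube = np.stack(frames).astype(np.float64)
    return cube.mean(axis=0)

# 500 simulated 0.1-second frames: constant level 1000 counts plus
# scintillation-like noise of 50 counts RMS
rng = np.random.default_rng(0)
frames = [1000 + 50 * rng.standard_normal((32, 32)) for _ in range(500)]
stacked = stack_frames(frames)
# per-pixel scatter drops to roughly 50/sqrt(500), i.e. ~2 counts
```

    In practice the stacked result would be written out as a floating-point FITS file to preserve the dynamic range gained by averaging.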

    Unfortunately, MaxIm DL is not helping here at the moment. However, there are other programs for this. I'm currently thinking about SharpCap Pro (https://www.sharpcap.co.uk/sharpcap/...ro/sharpcappro), which allows live stacking (buffering high-framerate images into RAM, preprocessing on the fly, stacking on the fly), is remotely controllable/scriptable, and can save the stacked output as a FITS file.

    Are there any good methods for using an external capture program from ACP, e.g. via scripts? Does ACP directly support Python scripts? Can such scripts be used from, e.g., Scheduler?

    With best wishes,
    Tõnis
    Last edited by Tõnis Eenmäe; Feb 13, 2019 at 14:37. Reason: correcting typos and adjusting sentences

  2. #2

    Default Fast Downloads and OH in Scheduler/ACP

    Hello

    I was checking some short exposures of 10 seconds or so, and found that in a time series with no filter change I could only get an image every 20 seconds, implying there was 10 seconds of overhead for each image. As we use shorter exposures with these CMOS cameras, this overhead will limit the throughput. Are there options to turn off functions in order to reduce this overhead? I am using a PlaneWave L-500 mount, so, for instance, it is not necessary to plate-solve each image. Can you think of other things? When I use Continuous mode in MaxIm, it takes about 1 second between images, and it's displaying them. I am using the Kepler 400 camera.

    Thanks in advance for any advice that you can give

    Gary

  3. #3
    Join Date
    Oct 2005
    Location
    Mesa, AZ
    Posts
    33,158

    Default

    Gary - from our phone conversation, your requirement is also photometry of bright stars: 0.1 sec exposures, 20 sec total for 200 of them, and an 8 MP detector. I wanted to post your requirements here for the record.

    For both of you: In the immediate term, you can maximize the cadence by

    Putting #nosolve into the Description field of your Observations
    Putting #nopreview into the Description of your ImageSets.

    For the full speed-up effect of #nopreview you need to be on the latest version of ACP Expert (ACP and Scheduler) 8.2. Unfortunately I don't have a way to do "bulk" changes to Observations and ImageSets to add this info to existing Plans.
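    In an ACP live plan (as opposed to the Scheduler Description fields above), the equivalent directives would go in the plan file itself; an illustrative sketch with a placeholder target (check the plan directive reference for exact usage):

```
#nopreview           ; skip on-the-fly preview processing
#nosolve             ; skip plate solving each image
#count 500           ; number of images
#interval 0.1        ; exposure length, seconds
#filter V
BrightStar    06:45:08.9    -16:42:58
```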
    -- Bob

  4. #4
    Join Date
    Oct 2005
    Location
    Mesa, AZ
    Posts
    33,158

    Default

    Tõnis -- Coincidentally, I just got off the phone with Gary Walker of the AAVSO, who has the same issue. I called him back to ask him to post his CMOS requirement... he'd already done it!! I will now combine the two threads and do a little research on how to go forward. ACP does have a feature for adding special logic, and calling out to a Python script in lieu of taking the image with MaxIm is quite possible. I would want to look for a "Python-free" solution because of the many ways a Python environment can be constructed :-) I'll get back to both of you when I have looked at possible short-term solutions.

    The long-term solution is for the camera subsystem to do the stacking and deliver the final image. Ideally the stacking would be done in hardware, eliminating the need to transfer (in your case) 20 gigabytes of data across the wire and into memory, or worse, onto disk, and then stack with the CPU. Maybe these stackers use the GPU? I know nothing about them. I will learn.
    -- Bob

  5. #5
    Join Date
    May 2012
    Location
    Tõravere, Estonia
    Posts
    71

    Default

    Bob,
    thank you for quick reply!

    Yes, I agree that HW stacking would be ideal. However, pre-processing in camera hardware can be a rather challenging task - while some cameras have buffers, I'm not sure whether it is possible to access them in any normal way.

    Unlike with very fast planetary cameras, a framerate of ~10 Hz over proper USB 3 should not be that bad for modern computers (even when one frame is 40 MB in size), and RAM doesn't cost much either. I'm not very experienced in that field of astrophotography - I have just seen what my friends are doing, plus bits here and there from the Internet - and I haven't stumbled on GPU-based processing for astronomical imaging/image capture so far. However, someone on Stack Overflow commented (https://stackoverflow.com/questions/...ages-using-gpu) that the GPU approach may even slow stacking down.

    I tested stacking with some of my images in IRAF: 9 random 4 Mpix x 32-bit images (16 MB each) took ~3 seconds to read from an HDD, compute, and write back to the HDD on a 2.6 GHz Core 2 Duo computer. IMHO that's very fast. After the first read, the files were buffered in RAM, and then a median of that stack of 9 took less than 2 seconds.

    Best wishes,
    Tõnis

  6. #6
    Join Date
    Oct 2005
    Location
    Mesa, AZ
    Posts
    33,158

    Default

    OK, on the surface, it looks like SharpCap Pro might be the ticket! And, well, it uses Python, so it will be up to you guys to get it installed and create the Python environment needed. The conceptual approach will be to use the ACP UserActions "AcquireImage" hook. This allows bypassing ACP's normal use of MaxIm to acquire the image, and instead (apparently) ShellExec() out to something that uses the SharpCap Pro "Scripting" facility to acquire the subframes, stack them into the final image, and store it on disk.

    Their website says SharpCap Pro has built-in Python scripting, but then they say you must use Pyro (Python Remote Objects) to talk to their scripting environment from another program? This seems strange, but we can take it at face value for now. I looked at their scripting docs, and it appears that SharpCap hosts scripts within the app in a private IDE, running internally under IronPython (which I haven't used; I run standard Python 2 and 3 on my W7/W10 systems). What I gather is that one can use Pyro to get to the SharpCap .NET class/object and call its properties and methods from the outside. Pyro runs under Python, so the outside logic will need to be in Python. From this, it may ultimately be possible to make a command-line program, runnable from the Windows CMD shell, that takes two parameters: the subexposure duration and the total exposure duration. If that is true, then it's possible to make this available to ACP or ACP Expert.

    Tõnis - are you in a position to try this? I can help you with the documentation and guidance on the ACP side. And are you running under Expert (Scheduler)?

    Gary I know you are running Scheduler, and I know you aren't in a position to do programming (and that's OK!!).

    The hook logic will be somewhat different between live ACP and Scheduler operation.

    This is what I can do today.
    Tõnis what do you think?
    -- Bob

  7. #7
    Join Date
    May 2012
    Location
    Tõravere, Estonia
    Posts
    71

    Default

    Bob, I have to admit that I cheated you a bit. I don't have that camera at hand yet; I'm preparing to buy a ZWO ASI183MM in the coming days. However, I might be able to find a fast camera that can be controlled via SharpCap Pro. For tests, probably even our current Apogee Alta U42 (which is "slow") could be used.
    I guess I'm able to install all those Python things.

    I failed to find remote control in the SharpCap User Manual; scripting seems to be as you said - executing ready-made scripts or typing them in manually. And there is just one place where that Pyro is mentioned. It seems to me that this small wrapper program/script could do well, especially if it is easy to pass some usable information from ACP. It sounds like a plan if it is possible to execute something like 'python.exe myscript 0.1 100 V myfile-V-100.fits' using the ACP engine. Then, in an ACP/Scheduler plan, it could be called as a specific user action on request?
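    A minimal sketch of that wrapper's command-line side (the script name is hypothetical, and the SharpCap hand-off itself is omitted; only the argument handling is shown):

```python
import argparse

def parse_args(argv=None):
    """Parse the command line sketched above:
    myscript <subexposure_s> <count> <filter> <output_fits>
    """
    p = argparse.ArgumentParser(
        description="Hypothetical ACP-to-SharpCap stacking wrapper")
    p.add_argument("subexposure", type=float, help="subexposure length, seconds")
    p.add_argument("count", type=int, help="number of subexposures to stack")
    p.add_argument("filter", help="filter name, e.g. V")
    p.add_argument("output", help="output FITS file for the stacked image")
    return p.parse_args(argv)

if __name__ == "__main__":
    args = parse_args()
    # Here the script would hand off to SharpCap (e.g. via Pyro) to grab
    # args.count frames of args.subexposure seconds each, stack them, and
    # save the result to args.output. That SharpCap-specific part is
    # deliberately left out of this sketch.
    print(f"would acquire {args.count} x {args.subexposure}s "
          f"in {args.filter} -> {args.output}")
```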

    I know that at least one of the AAVSOnet telescopes has the same camera - maybe Gary can test it in a real environment even faster?

    Tõnis

  8. #8

    Default

    Hello Bob and Tonis

    I happened to be reading the help file regarding #nopreview and #nosolve and noticed the following directives:

    #STACK - Combines repeated images within one filter group, without aligning, into a single image. Individual images used in the stack are preserved. File names will have -STACK in place of the repeat number. This is most useful when doing orbital tracking. See #TRACKON. The stacked image is saved in IEEE floating-point FITS format to preserve the dynamic range. For example: #STACK
    #STACKALIGN - Combines repeated images within one filter group, and aligns them, into a single image. Individual images used in the stack are preserved. File names will have -STACK in place of the repeat number. Use this for all stare-mode image sets. The stacked image is saved in IEEE floating-point FITS format to preserve the dynamic range. For example: #STACKALIGN
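    For illustration, a plan fragment using one of these might look like the following (the target line is a placeholder, and the exact interplay of #count with the stacking directives should be checked against the ACP plan reference):

```
#stackalign          ; align and combine the repeats into one image
#count 6             ; six images per filter
#interval 10         ; 10-second exposures
#filter V
MyTarget    12:34:56.7    +12:34:56
```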


    Would this not do the stacking that we are looking for? It's probably not fast, but who knows. Do I misunderstand what these commands do? And what does "filter group" mean here?

    Gary

  9. #9
    Join Date
    May 2012
    Location
    Tõravere, Estonia
    Posts
    71

    Default

    Gary,

    IMHO that works only at "normal" cadence. If you want to take several/many 10-sec exposures, #STACKALIGN or #STACK would be useful. #STACKALIGN would need plate solving (so you can't use #nosolve); #nopreview just saves you a few seconds. I was able to get down to a 7-second cadence with the Apogee Alta U42, which has a ~4 sec read-out time.

    My use case (which I flagged as useless after thorough testing) was/is photometry of bright stars - about 5-6 magnitude - with a PlaneWave 12.5" CDK and a back-illuminated Alta U42. That required <1 sec exposures even in the Johnson B filter. So I decided to take 100 x 1 s per filter... That led to 700+ seconds of effective time per filter, plus a remarkable number of large files (many thousands by the end of the night) on the disk.

    So if it really were possible to tame modern CMOS cameras in live-stacking mode through ACP, I'd say it would open the future for (really) bright star photometry (AAVSO's BSM too).

    Best wishes,
    Tõnis

  10. #10

    Default

    Hello Tonis

    Thanks for your reply. I agree with you. There are really two cases here. First, the normal-exposure case, i.e. stacking 10-60 sec exposures. I gave a paper at NEAIC and SAS last year showing the analysis and data advantage of stacking a number of exposures. That case was for a time series of, say, one data point every 2 minutes. What would be the optimum - 1 exposure, or many? It turned out that the best result was 6 exposures in most cases, for a particular set of typical CMOS and CCD camera dark currents, read noises, and sky conditions. I had been in discussions with Bob, Doug, and FLI about the need to do this. I have the Kepler 400 camera, and the native software, Pilot, does this nicely. I want to do the same on my remote telescope at SRO. It seems these #STACK and #STACKALIGN features would do it. I am just not sure where I tell it how many exposures to stack.
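    The trade-off behind that result can be sketched numerically: for a fixed total exposure time, splitting into N subexposures only adds read noise (paid once per sub), so the pure SNR penalty stays small for moderate N - which is why stacking a handful of subs can win once systematics are considered. The numbers below are made-up illustrative values, not those from the paper.

```python
import math

def snr(total_signal_e, sky_e_per_s, dark_e_per_s, read_noise_e,
        total_time_s, n_subs):
    """SNR of N stacked subexposures totalling total_time_s seconds.

    Star signal and sky/dark noise depend only on the total time;
    read noise is paid once per subexposure, so it enters N times
    in quadrature.
    """
    sky = sky_e_per_s * total_time_s
    dark = dark_e_per_s * total_time_s
    noise = math.sqrt(total_signal_e + sky + dark
                      + n_subs * read_noise_e ** 2)
    return total_signal_e / noise

# Illustrative: 100,000 e- from the star over 120 s total
for n in (1, 6, 60, 1200):
    print(n, round(snr(100_000, 5, 0.1, 3.0, 120.0, n), 1))
```

    With these example numbers, going from 1 sub to 6 costs almost nothing, and even 1200 subs (0.1 s each) only lose a few percent of SNR - the real penalty at very short subexposures is the per-frame overhead discussed above.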

    The bright-star case, with short exposures of 0.1 sec for instance, is a further extension of the same problem, except that now the overhead becomes very important. I have been working with Arne on this also; he highlighted the problem recently - hence this post. It does seem there is hope, however: when we take continuous images via ACP/MaxIm, it displays images more like every second - even with the ZWO cameras. There is no storage of the image in that case, but there is a read and a display. I agree with you that reducing this overhead, and then using CMOS with stacking, would open the future for bright-star photometry.

    As a side result from my tests, which I highlighted in the two presentations at NEAIC and SAS, the systematic errors were also reduced, by a factor of 2x-3x. As I am sure you know, the photon error is only part of the battle. I surmise that the lower-power operation of CMOS relative to CCD might have something to do with this. I am not sure, but I saw it on the two occasions where I compared data from a stack of 6 or 10 images versus a single exposure of the same total exposure time: the systematics in the resulting light curves improved by 2x. Another bonus, and the reason for my inquiry about how to automate this. My mount is the fabulous PlaneWave L-500, so for a time-series cadence of 1-2 minutes, from the results I have seen so far, I don't think I will need to align the images, only stack them.

    Gary

 

 
