-
Tõnis - Would you be able to produce a command-line Python program that would take as parameters the filter, total exposure time, sub-exposure time, and a file path for the resulting stacked image, then tell SharpCap to do it and wait until it is complete? If so I could make a UserAction that detects a special tag in an observing plan and substitutes the external image acquisition for ACP's normal MaxIm-based imaging.
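Roughly what I have in mind is the sketch below - only the command-line side, since I don't yet know what SharpCap's scripting or remoting interface will let an external program do. The acquire_stack call is a placeholder and the parameter names are just illustrative:

#!/usr/bin/env python3
"""Sketch of the proposed command-line wrapper. The SharpCap control
call is a placeholder; parameter names are illustrative only."""
import argparse
import sys


def acquire_stack(filter_name, total_exposure, sub_exposure, output_path):
    # Placeholder: however SharpCap ends up being driven (its built-in
    # scripting, a remoting interface, etc.), this is where the live-stack
    # acquisition would be started and waited on until the stacked image
    # has been written to output_path.
    raise NotImplementedError("SharpCap control not wired up yet")


def main():
    p = argparse.ArgumentParser(
        description="Acquire a live-stacked image via SharpCap (sketch)")
    p.add_argument("--filter", required=True, help="filter name, e.g. V")
    p.add_argument("--total-exposure", type=float, required=True,
                   help="total integration time in seconds")
    p.add_argument("--sub-exposure", type=float, required=True,
                   help="individual sub-exposure length in seconds")
    p.add_argument("--output", required=True,
                   help="file path for the resulting stacked image")
    args = p.parse_args()
    acquire_stack(args.filter, args.total_exposure, args.sub_exposure, args.output)
    return 0


if __name__ == "__main__":
    sys.exit(main())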
-
Bob, I'm sorry to say that I can't do that testing for now, actually not before mid-June when I come back from a long trip. However, I installed SharpCap and it seems to be running, and the scripting features also run in demo mode (i.e. files are not saved). Now I have to figure out how to execute a program from outside SharpCap (it seems it should be easy from inside SharpCap) or how to control SharpCap from Python 3. I'm looking into their forums etc.
But otherwise, if there are no other people interested in that functionality, I guess we should put such testing on hold until then - I cannot promise that I will have time before June :-/
Best wishes,
Tõnis
-
Tõnis -- Thank you for looking at it, and please enjoy your trip. There is other activity on this within the ASCOM community, as well as several people within the AAVSO Net group who are currently doing extremely precise photometry with sCMOS. I'm talking to the camera manufacturers and looking at options.
-
I recently experienced what SharpCap Pro can do with a camera capable of high speed, and it's really neat.
It does all the aligning and stacking on the fly in real time. You can see the image get deeper and deeper very quickly right on the screen.
-
Thank you, Geoff, for that information. It sounds really cool. I assume that you do not have to save the interim images if you do not want to. Have you checked to see what the minimum overhead is? In other words, if you are taking, say, 3-second exposures, how long does it take to get a stack of 10 of them? The problem with the Kepler ASCOM driver was that it was taking 10 seconds of overhead for each exposure, not including the stacking. That is a 50% hit for exposures of 10 seconds. When taking, say, 0.1-second exposures, it's a much bigger hit.
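Just to put numbers on it (using that 10-second overhead figure purely as an illustration):

# Overhead fraction for a stack of 10 subs, assuming a fixed per-frame overhead
def stack_time(sub_exp_s, n_subs, overhead_s):
    return n_subs * (sub_exp_s + overhead_s)

for sub_exp in (10.0, 3.0, 0.1):
    total = stack_time(sub_exp, 10, 10.0)
    lost = 10 * 10.0 / total
    print(f"{sub_exp:>4}s subs: {total:.0f}s for 10 frames, {lost:.0%} of it overhead")

With 10-second subs that is the 50% hit; with 0.1-second subs nearly all of the time is overhead.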
I love the fact that it does the aligning and stacking on the fly for you.
Gary
-
Apart from the coolness of seeing the image appear before your eyes, in an automated scenario it seems to me that it would be "good enough" to have the images stacked right in the camera, resulting in zero overhead. If one really wanted the thrill of seeing the image appear, the camera could include a low-to-modest-bandwidth HTML5 video stream. Or of course one could use something like SharpCap and be logged in via remote desktop (which has its limitations in some usage scenarios).
I'm still interested in rigging up a UserAction that would use SharpCap to replace MaxIm's image acquisition via #tags in an ACP live observing plan, or for Scheduler, keywords in an ImageSet Description field. Geoff, would that be a project you'd be interested in teaming up on?
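For illustration only (the script name, filter, and values below are made up), the UserAction would essentially detect the special tag and shell out to the external acquisition program in place of the MaxIm call, something along these lines:

import subprocess

# Hypothetical call made in place of MaxIm acquisition; "sharpcap_acquire.py"
# and its arguments follow the command-line sketch earlier in this thread.
subprocess.run(
    ["python", "sharpcap_acquire.py",
     "--filter", "V",
     "--total-exposure", "300",
     "--sub-exposure", "5",
     "--output", r"C:\Images\target_V_stack.fits"],
    check=True,
)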
-
Yes, I absolutely agree that stacking in the camera would be the way to go for most situations; it would be great if cameras could do it.
This was a demonstration of replacing a Mallincam and some external timing hardware with a fast CMOS camera with internal GPS timing. The intended application is occultation observations, which wouldn't use the stacking capability.
I don't know much about the details of SharpCap as the whole setup belongs to someone else - I was just present for the demonstration. From memory, the exposures were far less than one second (bright objects) and I believe it was doing several frames per second. The camera was a QHY174/GPS connected via USB3. The thing that really impressed me was that the software, even on a fairly old laptop, was able to keep up. The mount was alt/az and it appeared to handle field rotation just fine.
If I had access to the gear I'd be happy to work on the project... :rolleyes:
-
Bob, I absolutely agree that the future of CMOS stacking has to be on-camera, at least for photometry. [EDIT: ha, Geoff, great minds think alike.]
It's not even bandwidth: with a QHY 174M GPS (~2.5 Megapixels, since sold) through USB 3.1 into SharpCap (i5 laptop w/*internal* SSD), I was able to download and write to disk > 30 full frames per second. But: there was smoke coming out of the laptop CPU. That is, it's the stacking computation that worries me, not to mention plate solving etc.
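Rough numbers to back that up (assuming 16-bit frames from a ~2.5 megapixel sensor):

# Back-of-the-envelope data rate; comfortably within what USB 3.x can sustain
pixels = 2.5e6
bytes_per_pixel = 2      # 16-bit frames
fps = 30
print(f"~{pixels * bytes_per_pixel * fps / 1e6:.0f} MB/s")   # ~150 MB/s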
CMOS photometry has serious problems. (1) Meh, so did CCDs at first. (2) On-camera stacking, and ASCOM drivers to control it, will go a very long way toward convincing me to give CMOS photometry a go.
-
Eric, you and Gary Walker should chat in depth. He's getting rather spectacular results, albeit with some pain and low cadence. My interest in pushing this came from a discussion I had with him at last summer's AAVSO meeting in Flagstaff. I'm not letting up. And you would not believe the pushback from camera makers. Reminds me of the days when CCD cameras were unbuffered and Windows context switches resulted in horizontal banding in images. Awful.
-
Yes, I keep up somewhat with Gary's CMOS doings. But with a new mount in the offing, this is not the summer for me to try CMOS too. When the camera manufacturers get their respective acts together I'll consider it. Who knows, with the FLI acquisition CMOS camera dev may accelerate, but I'm not optimistic that medical applications will drive on-camera stacking, and certainly not ASCOM dev. It leaves an enormous opportunity/gap to be filled. (Looking at you, SBIG, QHY, and peers. :rolleyes: )