What raw data can I possibly acquire from an 8" Classical Dobsonian Telescope, and a DSLR? Could anything eye-opening to amateur astronomers be computed or calculated first-hand with such equipment? I'm sure scientists must've considered this equipment "advanced technology" at some point in history not too far back… Could I rediscover or calculate some Laws (like Kepler's laws) or some other things amateur astronomers would be amazed to calculate themselves (like the distance to a planet) using this equipment?
First off, pairing a classic dob with a DSLR is a bit like a shotgun marriage. A dobsonian is fundamentally a visual telescope. Most manufacturers don't even consider the possibility that these instruments could be used for data collection via a sensor. There are 2 issues here:
1. The dobsonian is not tracking
The sky is moving, the dob stays still. You have to push the dob to keep up with the sky. Any long-exposure photo would be smeared. To remedy this, you'll need an equatorial platform, which will move the dob in sync with the sky.
Please note that only the best platforms allow reasonably long exposure times. Then the results can be fairly good.
2. There isn't enough back-focus
The best photos are taken when you remove the lens from the camera, plug it into the telescope directly, and allow the primary mirror to focus the image directly on the sensor. This is called prime focus photography. But most dobs can't reach the sensor within the camera, because their prime focus doesn't stick out far enough. There are several remedies for this, like using a barlow, moving the primary mirror up in its cell, etc.
The bottom line is that it takes some effort to make a dob and a DSLR play nice together. Is it doable? Yes. Is it simple and immediate? No. So the literal answer to your question is that there isn't much you can do with just a dob and a DSLR.
You can take photos of the Moon and the Sun, because the short exposure there does not require tracking, but that's pretty much it. Here is an image of the Moon I took with a home-made 6" dob (with home-made optics) and a mirrorless camera (prime focus, about 1/320 sec exposure):
Makes a cute little desktop background, I guess, but it's definitely not research-grade.
Now add a tracking platform and things become more interesting, and the possibilities open up quite a lot.
In a more general sense:
There are telescopes that are specifically made for astrophotography. They have lots of back-focus, they are short and lightweight and therefore can easily be installed on tracking mounts. More importantly, there are tracking mounts made specifically for imaging - very precise, delicate mechanisms that follow the sky motion with great accuracy. In fact, the mount is more important than the scope.
A typical example would be a C8 telescope installed on a CGEM mount, or anything equivalent. Barring that, a dob with lots of back-focus sitting on a very smooth tracking platform (probably not as accurate as a GEM, but good enough for many purposes).
Make sure you don't exceed the load capacity of the mount. If the mount claims it can carry X amount of weight, it's best if the telescope weight doesn't exceed 1/2 of that amount. Close to the weight load limit, all mounts become imprecise. The exceptions are high end (the most expensive) mounts which cost many thousands of dollars and usually honor their promises in terms of load capacity 100%.
Once you have: a tracking mount, a good camera, and a telescope (listed here from most important to least important), you can start imaging various portions of the sky for research. There are 2 main classes of objects that you could image:
1. Solar system objects
They're called "solar system objects," but the class really includes anything that's fairly bright, not very big, and imaged at high resolution. Tracking is important but not crucial here.
You need a sensitive, high speed camera that can take thousands of images quickly (a movie, basically). These are called planetary cameras. They generally have small sensors, are high sensitivity, and can operate at high frame rates (hundreds of frames per second).
As a cheap alternative in the beginning you could use a webcam, there are tutorials on the Internet about that. A DSLR in video mode in prime focus might work, but it's going to do a lot of pixel binning, so resolution would be greatly reduced unless you use a very powerful barlow (or a stack of barlows).
You'll load all those images in a software that will perform "stacking" to reduce them all to one single, much clearer image.
The scope needs to operate at a long focal length, f/20 being typical, so a barlow is usually required. The bigger the aperture, the better.
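The select-and-stack idea behind planetary stacking software can be sketched in a few lines. This is a minimal, hypothetical version: the gradient-energy sharpness score is my own simplification, and real tools also align each frame before averaging.

```python
import numpy as np

def stack_best_frames(frames, keep_fraction=0.1):
    """Rank frames by a simple sharpness score (gradient energy)
    and average only the sharpest ones, as 'lucky imaging' does."""
    scores = [np.sum(np.diff(f.astype(float), axis=0) ** 2) +
              np.sum(np.diff(f.astype(float), axis=1) ** 2) for f in frames]
    n_keep = max(1, int(len(frames) * keep_fraction))
    best = np.argsort(scores)[-n_keep:]  # highest scores = sharpest frames
    return np.mean([frames[i] for i in best], axis=0)

# Toy demo: 100 noisy copies of a synthetic "planet" disk
rng = np.random.default_rng(0)
y, x = np.mgrid[:64, :64]
planet = ((x - 32) ** 2 + (y - 32) ** 2 < 200).astype(float)
frames = [planet + rng.normal(0, 0.5, planet.shape) for _ in range(100)]
stacked = stack_best_frames(frames, keep_fraction=0.2)
print(np.std(stacked - planet) < np.std(frames[0] - planet))  # True: less noise
```

Averaging the kept frames is what buys the noise reduction; the frame selection just avoids averaging in the moments of worst atmospheric blur.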
2. Deep space objects (DSO)
These are anything that's pretty faint and fuzzy, like galaxies, but some comets are also DSO-like in their appearance. You need to take extremely long exposures; usually a dozen or a few dozen images, each one between 30 sec and 20 min of exposure, sometimes even longer. Extremely precise tracking is paramount, so you need the best tracking mount you could buy. Autoguiding is also needed to correct tracking errors.
The scope needs to operate at short focal ratios, f/4 is pretty good, but as low as f/2 is also used; focal reducers (opposite of barlows) are used with some telescopes, like this or like this. Aperture doesn't mean much; small refractors are used with good results.
The camera needs to be very low noise; DSO cameras use active cooling that lowers their temperature 20 to 40 °C below ambient. Typically they have large sensors.
DSLRs can also provide decent results, but their noise is typically higher than dedicated cameras, so you need to work harder for the same results.
Specific software is used for processing, stacking, noise reduction, etc.
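The reason for taking dozens of subexposures is that averaging N frames reduces random noise by roughly the square root of N. A toy simulation (arbitrary signal and noise levels, chosen only for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
signal = 100.0      # true pixel value, arbitrary units
read_noise = 10.0   # per-frame random noise, arbitrary units

def snr_of_stack(n_frames):
    """Simulate stacking n_frames noisy exposures of the same pixel."""
    frames = signal + rng.normal(0, read_noise, size=(n_frames, 10000))
    stacked = frames.mean(axis=0)
    return signal / stacked.std()

for n in (1, 4, 16, 64):
    print(n, round(snr_of_stack(n), 1))  # SNR grows roughly as sqrt(n)
```

This is why precise tracking matters so much: it is what makes those many long subexposures usable in the first place.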
So what can you do with such a setup?
Comet- or asteroid-hunting works pretty well. Terry Lovejoy has discovered several comets recently using equipment and techniques as described above. Here's Terry talking about his work.
Tracking variable stars is also open to amateurs. This could also be done visually, without any camera, just a dob, meticulous note-taking, and lots of patience.
With a bit of luck, you could also be the person who discovers a new supernova in a nearby galaxy. You don't need professional instruments, you just need to happen to point the scope in the right direction at the right time and be the first to report it. This also could be done purely visually, no camera, just a dob.
You are absolutely right: amateurs can do a lot of science with the apparatus you own.
The book "Astronomical Discoveries You Can Make, Too!", by Robert Buchheim, lists famous historical observations that can be replicated by amateurs.
My First Trip with Remote Imaging
I got into astrophotography at the same time I got into observational astronomy: July 2015, when I was given my first telescope while I was in graduate school. That telescope was an 8-inch Celestron Schmidt-Cassegrain on a Celestron NexStar SE alt-az mount. It was a fairly difficult place to start astrophotography, but I picked it up quickly, and have since acquired even better equipment and cameras, given to me or sold at a good discount. It has been a fun adventure learning the hobby over the last three years! Recently, an awesome new opportunity came my way: the chance to image on a high-caliber rig under the very dark New Mexico skies, from afar.
When I was at the 2018 Texas Star Party, I met some folks from The Astro Imaging Channel, an online video series about astrophotography, who were going around interviewing people with astrophotography rigs and asking about their setups for a video they were putting together. They asked if I would do a presentation on their weekly show, and I had a great time presenting "Astrophotography Joyride: A Newbie's Perspective" which can be found on YouTube.
I stayed on as a panel member for the channel and have gotten to know the other members. For example, another presenter, Cary Chleborad, president of Optical Structures (which owns JMI, Farpoint, Astrodon, and Lumicon), asked if I would test a new design of a Lumicon off-axis guider. In late October, Cary and Tolga Gumusayak collaborated to give me five hours of telescope time on a Deep Sky West scope owned by Tolga of TolgaAstro, with a new FLI camera on loan and some sweet Astrodon filters, and asked me to write about the experience! Deep Sky West is located in Rowe, New Mexico, under some really dark Bortle 2 skies.
The telescope rig in question is the following:
- Mount: Software Bisque Paramount Taurus 400 fork mount with absolute encoders
- Telescope: Planewave CDK14 (14-inch corrected Dall-Kirkham astrograph)
- Camera: Finger Lakes Instrumentation (FLI) Kepler KL4040
- Filters: Astrodon, suite of wideband and narrowband
- Focuser: MoonLite NiteCrawler WR35
The whole thing is worth about $70k!
And you will notice the lack of autoguiding gear: you do not need to autoguide this mount. It is just that good already, once you are perfectly polar aligned.
Screenshot from the live camera feed from inside the observatory
After getting the camera specs, I needed to select a target. With a sensor size of 36x36 mm and a focal length of 2563 mm, my field of view was going to be 48x48 arcmin (or 0.8x0.8 degrees). It sounded like I was going to get the time soon, so I needed a target that was well placed this time of year. While I was tempted to do a nebula with narrowband filters, I had not processed narrowband images before, so I wanted to stick with LRGB or LRGB + Ha (hydrogen-alpha). I decided that I should do a galaxy. Some ideas that came to mind were M81, the Fireworks Galaxy, the Silver Dollar Galaxy, and M33. M74 was also recommended by a colleague.
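That field-of-view figure follows directly from the small-angle formula (sensor size over focal length); a quick check with the numbers above:

```python
import math

def fov_arcmin(sensor_mm, focal_length_mm):
    """Field of view across the sensor, small-angle approximation."""
    return math.degrees(sensor_mm / focal_length_mm) * 60

fov = fov_arcmin(36.0, 2563.0)  # 36 mm sensor on the 2563 mm CDK14
print(f"{fov:.1f} x {fov:.1f} arcmin")  # 48.3 x 48.3 arcmin
```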
I finally settled on M33, which is difficult for me to image well from my light-polluted home location because of its large angular size on the sky, and which has some nice H II nebula regions that I have not been able to capture satisfactorily. Messier 33 is also known as the Triangulum Galaxy for its location in the small constellation Triangulum, between Aries and Andromeda. It is about 2.7 million light-years from Earth, and while it is the third-largest galaxy in our Local Group at 40% of the Milky Way's size, it is the smallest spiral galaxy in the group.
As far as how to use the five hours went, I originally proposed 30x300s L and 10x300s each of RGB. But then Tolga told me that this camera (like my ZWO ASI1600MM Pro) has very low read noise but somewhat high dark current, and it is also very sensitive, so shorter exposures would be better. He also told me that the dynamic range on this camera was so good that he shot 5-minute exposures of the Orion Nebula with it, and the core was not blown out! Even on my ZWO, the core was blown out after only a minute.
So I revised my plan to be 33x180s L, 16x180s RGB each, and I also wanted some Ha, so I asked for 10x300s of that.
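A quick sanity check that this revised plan fits the five-hour budget:

```python
# Planned subexposures: (count, seconds per frame) for each filter
plan = {
    "L":  (33, 180),
    "R":  (16, 180),
    "G":  (16, 180),
    "B":  (16, 180),
    "Ha": (10, 300),
}
total_s = sum(n * t for n, t in plan.values())
print(total_s, "s =", round(total_s / 3600, 2), "h")  # 17580 s = 4.88 h
```

Just under five hours, leaving a small margin for filter changes and dithering moves.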
This screenshot shows a single raw luminance frame. To the untrained eye it looks blown out and noisy, but having looked at a lot of raw files now, I know better - it looks great to me!
The very next night after I decided on my target, November 7, Tolga messaged me saying he was getting the M33 data and asked if I wanted to join him on the VPN! He had me install TeamViewer, which is free for non-commercial users, and then he sent me the login information for the telescope control computer at the remote site. It was a little laggy, but workable.
This was really cool! We could control the computer as if we were sitting in front of it. The software, TheSkyX with CCDCommander, lets you automate everything. The list shown on the screen contains the actions for the scope to follow, which are event-based rather than time-based.
The first instruction is "Wait for Sun to set below -10d altitude." This way, you do not have to figure out all the times yourself every night; it just consults the sky model for that night at that location. It turns the camera on, lets it cool, and then goes to the next action, which is to run a sublist for imaging M33 in LRGB until M33 sets below 30 degrees altitude.
It has the exposure times and filter changes and everything else in there.
It also specifies how often to dither. Dithering is when you move the scope just a few pixels every frame or couple of frames so that hot pixels do not land in the same place in every frame. I have not had to do this yet, since I have never been polar-aligned well enough, or had a scope with good enough gears, for the frames not to already be drifting around a little bit on their own.
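A dither schedule is really nothing more than a sequence of small random offsets. A hypothetical generator (the function name and parameters are my own, for illustration):

```python
import random

def dither_offsets(n_frames, max_px=3, seed=42):
    """Random telescope nudges (in pixels) applied between frames so
    that hot pixels fall on different sky positions in each exposure."""
    rng = random.Random(seed)
    return [(rng.randint(-max_px, max_px), rng.randint(-max_px, max_px))
            for _ in range(n_frames)]

offsets = dither_offsets(5)
print(offsets)  # five (dx, dy) nudges, each a few pixels at most
```

When the frames are later aligned on the stars and stacked, the hot pixels end up in different places in each frame and get rejected by the stacking statistics.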
He took only some of the luminance and red frames (the rest he would get on another night soon) and then switched to green.
On the second green frame, the stars had jumped! Tolga thought at first a cable might be getting caught, so he switched to the live camera feed and moved the scope around a bit, but everything looked fine. He mentioned that it had been hitching in this same spot about a month ago.
It eventually turned out to be a snagged cable, which was fixed. Anyway, the mount moved past the trouble spot, and the rest of the frames came out fine. I logged off because it was getting late.
He collected the rest of the frames, and then on November 11, sent me the stacked L, R, G, and B images.
Then it was time to process!
Preparing for Combination
I have recently graduated from processing in DeepSkyStacker and Photoshop to processing in PixInsight, which has been a huge but amazing step up. Since I am still learning PixInsight, I will be following the Light Vortex Astronomy tutorials, starting with "Preparing Monochrome Images for Colour-Combination and Further Post-Processing."
These tutorials provide excellent step by step instructions, complete with screenshots, and are organized very well.
First, I opened up the stacked frames in PixInsight and applied a screen stretch so I could see them.
The first processing step I did was DynamicBackgroundExtraction, to remove the background from each of the four stacked images. It may be very dark out in Rowe, NM, but there is likely still some background light. Since the images were aligned, I could use the same point model for each one, so I started with the luminance frame and then applied the process to each of the others. This step can also be done on the combined RGB image.
Following the tutorial's advice, I set the "default sample radius" to 15 and "samples per row" to 15 in the Sample Generation tab. I hit Generate, but there were still a lot of points missing from the corners, so I increased the tolerance (in the Model Parameters (1) tab) to 1.000.
Even after increasing it all the way to 1.5, there were still points missing from the corners, so I decided just to add some by hand. I also decided there were too many points per row and reduced that from 15 to 12. Then I went through and checked every point, moving any that overlapped a star and deleting those that fell on the galaxy: you want the samples to cover only background, never any part of the galaxy or nebulosity in your image.
Next, I lowered the tolerance until I started getting red points (ones that DBE is rejecting), making sure to hit "Resize All" and not "Generate" so I would not lose all my work! I stopped at 0.500, and all my points were still valid. I opened the "Target Image Correction" tab, selected "Subtraction" in the "Correction" dropdown, and hit Execute. After I autostretched the result, this is what I had:
Hmm, maybe a little too aggressive: there are some dark areas that I do not think are real. I backed the tolerance off to 1.000 and tried again.
The result looked pretty much the same, so I decided to run with it and see what happens. I saved the process to my workspace so I could adjust it later if needed (I also needed to apply it to my RGB frames). This is what the extracted background looked like:
I put a New Instance icon for the DBE process in the workspace (by dragging the New Instance triangle icon at the bottom of the DBE window into the workspace), then closed the DBE process. I minimized the DBE'd luminance image, opened the red image, and double-clicked the process I had just saved, which applied the same sample points to the red image. None were flagged red as invalid, so I executed the process, and the resulting image looked good. I did the same for green and blue, and saved out all of the DBE'd images for later reference, along with the process itself.
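Conceptually, DBE fits a smooth surface through the background sample points and subtracts it from the frame. Here is a simplified stand-in using a low-order polynomial fit (the real process uses a more sophisticated surface model, and the sample coordinates below are made up):

```python
import numpy as np

def fit_background(xs, ys, samples, shape):
    """Least-squares fit of a 2nd-order 2-D polynomial through the
    background sample points, evaluated over the whole frame."""
    def design(x, y):
        return np.column_stack([np.ones_like(x), x, y, x * y, x**2, y**2])
    A = design(np.asarray(xs, float), np.asarray(ys, float))
    coeffs, *_ = np.linalg.lstsq(A, np.asarray(samples, float), rcond=None)
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
    model = design(xx.ravel().astype(float), yy.ravel().astype(float)) @ coeffs
    return model.reshape(shape)

# Toy frame with a linear light-pollution gradient and no objects
yy, xx = np.mgrid[0:64, 0:64]
frame = 100 + 0.5 * xx + 0.2 * yy
xs = [5, 30, 60, 10, 50, 32, 8, 55, 40]   # hand-placed background samples
ys = [5, 10, 8, 40, 35, 60, 58, 55, 30]
bg = fit_background(xs, ys, [frame[y, x] for x, y in zip(xs, ys)], frame.shape)
flat = frame - bg
print(round(float(np.abs(flat).max()), 6))  # ~0: the gradient is removed
```

This also shows why sample placement matters so much: any sample sitting on the galaxy would pull the fitted surface up and subtract real signal.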
Next, I opened up the LinearFit process, which levels the LRGB frames with each other to account for differences in background that are a result of imaging on different nights, different times of the night, the different background levels you can get from the different filters, etc. For this process, you want to select the brightest image as your reference image. It is probably luminance, but you can check with HistogramTransformation.
I selected the L, R, G, and B images (the ones I applied DBE to) and zoomed in on the peak in the lower histogram. It is so dark at the Deep Sky West observatory that, especially after background extraction, there is essentially no background, and all the peaks sit in pretty much the same place. Even the non-DBE'd original images have basically no background (which would show up as space between the left edge of the peak and the left side of the histogram window). So I selected the luminance image as the reference, then applied the LinearFit process to each of the R, G, and B frames by opening them back up and hitting the Apply button. I needed to re-auto-stretch the images afterwards.
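What LinearFit computes can be sketched in a few lines: find the scale and offset that best map one channel onto the reference. This is a simplification of the actual implementation, shown only to make the idea concrete:

```python
import numpy as np

def linear_fit(image, reference):
    """Match one channel to a reference, LinearFit-style: find a, b
    minimizing |a*image + b - reference|^2, then apply them."""
    a, b = np.polyfit(image.ravel(), reference.ravel(), 1)
    return a * image + b

rng = np.random.default_rng(3)
ref = rng.uniform(0.1, 0.9, (32, 32))
red = 0.5 * ref + 0.05            # a dimmer channel with a background offset
matched = linear_fit(red, ref)
print(round(float(np.abs(matched - ref).max()), 6))  # ~0 after fitting
```

After this leveling step, differences between the channels reflect real color rather than differences in filter throughput or sky conditions.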
Combining the RGB Images
Once their average background and brightness levels were leveled, it was time to combine the LRGB images. For that, I went to the "Combining Monochrome R, G and B Images into a Colour RGB Image and applying Luminance" tutorial. First, I opened the ChannelCombination process and made sure that "RGB" was selected as the color space. Then I assigned the R, G, and B images that I had background-extracted and linearly fitted to each of those channels, and hit the Apply Global button, which is the circular icon at the bottom of the process window.
It was showing some noise at that point, but that will be fixed soon. Remember, this is just a screen stretch, which tends to be less refined than when I will actually stretch the image. I will come back to this tutorial later to combine the luminance image with the RGB, since it is a good idea to process them separately, then bring them together, since they bring different features to the table.
To properly white balance the color image, I turned to the PhotometricColorCalibration process, which I have absolutely fallen in love with. This process plate-solves the image, finds Sun-like stars in it, and uses them as a white reference from which to re-balance the colors. In order to plate-solve, you need to tell it where the camera is looking and what your pixel resolution is.
To tell it where the image is looking, I simply clicked "Search Coordinates," entered "M33," and it grabbed the celestial coordinates for that object. After hitting "Get," I entered the focal length and pixel size. The focal length of the Planewave CDK14 is 2563 mm, and the pixel size on the FLI Kepler KL4040 is a whopping 9 microns! I entered these values, hit the Apply button, and waited.
A few minutes later, the result appeared.
The change is small this time, but other times I have used this process it has made a huge difference - especially on my DSLR images. It looks like these Astrodon filters are already well color-balanced. My own Astronomik filters are too, but sometimes they still require a bit of tweaking.
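As an aside, the image scale the plate solver works from follows directly from the focal length and pixel size entered above:

```python
def plate_scale_arcsec_per_px(pixel_um, focal_length_mm):
    """Image scale: 206.265 * pixel size (microns) / focal length (mm)."""
    return 206.265 * pixel_um / focal_length_mm

scale = plate_scale_arcsec_per_px(9.0, 2563.0)  # KL4040 on the CDK14
print(round(scale, 2), "arcsec/pixel")  # 0.72 arcsec/pixel
```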
Time to deal with the background noise! I followed the "Noise Reduction" and "Producing Masks" tutorials. First, since I wanted to reduce noise without blurring fine details in the brighter parts of the image, I used a mask that protects the brighter areas, where the signal-to-noise ratio is already high, so that I could attack the dark areas more heavily. Since I have a luminance image that matches the color data, I used that as my mask. (You can also create a luminance frame from your RGB image, which the "Producing Masks" tutorial explains how to do.)

Masks work better when they are bright and non-linear, so I first duplicated my luminance image by dragging the tab with the image's name (after re-maximizing it so I could see it) into the workspace. I turned off the auto screen stretch, opened the ScreenTransferFunction process, and hit the little radioactive icon to apply an auto stretch again. Then I opened the HistogramTransformation process and dragged the "New Instance" triangle icon from the ScreenTransferFunction window onto the bottom bar of the HistogramTransformation window. This transfers the parameters the auto stretch calculated into the actual histogram transformation of the image - a quick and dirty way to stretch an image. Then I hit Reset on the ScreenTransferFunction window, closed it, and hit Apply in HistogramTransformation to make the stretch permanent.
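The stretch behind that STF-to-HistogramTransformation trick is a midtone transfer function. The commonly published form of the function (which, as I understand it, is what PixInsight uses) maps 0 to 0, 1 to 1, and the chosen midtone balance m to 0.5:

```python
def mtf(x, m):
    """Midtone transfer function for histogram stretches:
    maps 0 -> 0, 1 -> 1, and the midtone balance m -> 0.5."""
    if x in (0.0, 1.0):
        return x
    return ((m - 1.0) * x) / (((2.0 * m - 1.0) * x) - m)

# A small midtone balance lifts faint linear data strongly:
print(round(mtf(0.01, 0.05), 3))  # 0.161: a faint pixel is boosted
print(round(mtf(0.05, 0.05), 6))  # 0.5: the midtone balance maps to mid-gray
```

This is why an auto-stretch can make nearly black linear data viewable: pixel values just above zero get mapped far up the brightness range while pure black and pure white stay fixed.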
To apply the mask to my color image, I selected the color image to make the window active again, I went up to Mask > Select Mask, and I selected my cloned, stretched luminance image.
Now, the red areas are the areas the mask is protecting, so since I wanted to apply the noise reduction to the dark areas, I inverted the mask by going to Mask > Invert Mask.
I opened up MultiscaleLinearTransform for the noise reduction. Since I did not need to see the mask anymore, I went up to Mask > Show Mask to hide it. And do not forget that the mask is still applied! A few times I have tried to do a stretch or other processing and it looked really weird or did not work, and it was because I had left a mask on. Following the tutorial's recommendation, I set the settings for the four layers and hit Apply.
If you want to see the effect of parameter changes without running the full process many times, you can create a small preview window by clicking the "New Preview Mode" button at the top of PixInsight, selecting a portion of the image (I would pick one with both bright and dark areas), and then hitting the "Real Time Preview" (open circle) icon at the bottom of the MultiscaleLinearTransform window. It still takes a moment to apply, but less time, and once you are happy, you can go back and apply it to the whole image. I think it worked pretty well here. I removed the mask before I forgot it was still applied.
While I had the window open, I applied the same mask I created to the luminance channel as well, and ran the same MultiscaleLinearTransform on it.
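What a protective mask does can be expressed as a per-pixel blend between the processed and original images. A minimal sketch (the uniform "smooth" image below is a stand-in for real noise reduction, not an actual denoising algorithm):

```python
import numpy as np

def masked_process(image, mask, processed):
    """Blend a processed version with the original, weighted by a
    0..1 mask: mask=1 means 'apply fully', mask=0 means 'protect'."""
    return mask * processed + (1.0 - mask) * image

rng = np.random.default_rng(7)
img = rng.normal(0.5, 0.1, (16, 16))
smooth = np.full_like(img, img.mean())   # stand-in for a denoised image
mask = np.zeros_like(img)
mask[:, :8] = 1.0                        # left half unprotected
out = masked_process(img, mask, smooth)
print(bool(np.allclose(out[:, 8:], img[:, 8:])))  # True: protected half untouched
```

With an inverted luminance mask, the bright, high-SNR parts of the galaxy get a weight near zero and stay sharp, while the noisy background gets the full noise reduction.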
Sharpening Fine Details

I decided to try a new process here, one I had not tried before, for bringing out finer details: deconvolution with DynamicPSF. I followed that section of the "Sharpening Fine Details" tutorial on the luminance image.
Deconvolution is awesome because it helps mitigate the blurring effects of the atmosphere, as is easily seen when processing planetary images. It is magical! I opened up the DynamicPSF process and hand selected about 85 stars "not too big, not too little" according to the tutorial.
I then made the window bigger and sorted the list by MAD (mean absolute difference), scrolling through to see where most of the stars clustered: 1.5e-03 to 2.5e-03 seemed to be about the range, so I deleted the ones outside it. Next, I re-sorted the list by A (amplitude). The tutorial recommends excluding stars outside the range 0.25-0.75, but the brightest star I had left was 0.285 in amplitude, so I just cut the ones below 0.1. Next I sorted by r (aspect ratio). The tutorial recommends keeping stars between 0.6 and 0.8, and all of mine were already tightly in that range, between 0.649 and 0.746, so I kept all 20 of them. Then I hit the "Export" icon (the camera below the list of star data), and a tiny model star appeared underneath the window.
I had noticed that the stars, even in the center of the image, looked ever-so-slightly stretched; you can see that here with this model star. I closed the DynamicPSF process but kept the star image open. First, I needed to make another kind of mask, involving RangeSelection. In all honesty, I am a little out of my depth when it comes to masks, but I am sure that if I use them more, I will start to get a better feel for them. For this, I just relied on what the tutorial recommends. I re-opened the stretched luminance image I used earlier as a mask, then opened the RangeSelection process and tweaked the settings as suggested in the "Producing Masks" tutorial until the galaxy was selected.
Next, I needed to include a star mask as well, so I minimized the range mask for the moment and opened the StarMask process, as described in part 5 of that same tutorial. I stretched it a bit with HistogramTransformation to reveal some dimmer stars. According to the tutorial, it helps to make the stars a little bigger before convolving this with the range mask, so I opened MorphologicalTransformation and copied the tutorial's instructions.
I skipped the part of the tutorial that makes the super-bright stars all black, because none of mine are over the nebulous region of the galaxy, and went ahead to giving the more pronounced stars over nebulosity extra protection.
Next came smoothing the mask using the A Trous Wavelet Transform; I applied it twice with the recommended settings to blur the mask.
Finally, I could apply the mask.
After all of this, I lost track of what I was doing! I had to scroll back up as I was writing this to remember: deconvolution! I opened up the Deconvolution process, clicked on the External PSF tab, and gave it the star model I made earlier with DynamicPSF. I set the other settings recommended by the tutorial and created a preview so I could play with the number of iterations without waiting forever for each run to complete. Even at 50 iterations it had not converged yet, so I went ahead and ran 50 iterations on the whole image.
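Under the hood, this kind of sharpening is iterative deconvolution. Here is a toy Richardson-Lucy implementation, the classic unregularized algorithm (PixInsight's Deconvolution process adds regularization and other refinements on top; this sketch uses scipy for the convolutions):

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(observed, psf, iterations=50):
    """Classic Richardson-Lucy deconvolution: iteratively refine an
    estimate so that, blurred by the PSF, it reproduces the data."""
    psf = psf / psf.sum()
    psf_mirror = psf[::-1, ::-1]
    estimate = np.full_like(observed, observed.mean())
    for _ in range(iterations):
        blurred = fftconvolve(estimate, psf, mode="same")
        ratio = observed / np.maximum(blurred, 1e-12)
        estimate = estimate * fftconvolve(ratio, psf_mirror, mode="same")
    return estimate

# Toy demo: blur a point source with a Gaussian PSF, then recover it
g = np.exp(-0.5 * ((np.arange(9) - 4) / 1.5) ** 2)
psf = np.outer(g, g)
truth = np.zeros((33, 33))
truth[16, 16] = 1.0
observed = fftconvolve(truth, psf / psf.sum(), mode="same")
restored = richardson_lucy(observed, psf)
print(restored.max() > observed.max())  # True: the flux is re-concentrated
```

This is also why a good PSF model from DynamicPSF matters: the algorithm can only undo the blur you tell it about.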
The difference is not enormous for all the work it took to get there, but you can definitely tell that the image is sharper. Pretty cool! All right, time to stretch! Do not forget to remove the mask! I almost did.

Stretching

When I use Photoshop to process, the very first thing I do is stretch. But in PixInsight, there are a lot of processes that work better on unstretched (linear) data. When your image comes out of stacking, all the brightness data are compressed into a very small region of the histogram.
Stretching makes the data in that peak fill up more of the brightness range so that you can actually see it. All the data is there; it just appears very dark in its linear state.
I opened up HistogramTransformation, turned off the screen stretch, and reset the histogram window. The image was quite dark, so I opened a real-time preview so I could see what I was doing. I moved the gray-point slider (the middle one) toward the left and zoomed in on the lower histogram. The upper one shows what the histogram will look like after the stretch is applied, and the preview window shows what the image will look like.
Stretching is a multi-step process: I hit Apply, then Reset, then moved the gray-point slider some more. The histogram changes each time as the data fill up more of the brightness range. As you stretch, the peak will move off the far-left edge, and you can kill some extra background there if needed by moving the black point up to the base of the peak. Do not go too crazy, though: astro images with perfectly black backgrounds tend to look "fake." After a few iterations, my image was non-linear, and a screen stretch was no longer required.
Then I did the same with the RGB image. With those two killer images in hand, it was time to combine the luminance with the RGB!
Since the luminance filter passes all visible wavelengths, that image tends to have a higher SNR (signal-to-noise ratio), and thus finer detail, because the detail is not lost in the noise.
Acquirable Raw Data in Amateur Astrophotography
Listed below are links to FITS files for several of the data sets I have taken with the Optical Guidance Systems 32" Ritchey-Chretien Telescope and SBIG STL-11000m CCD camera.
These files will only be of interest to astronomical imagers who have the proper software to complete the processing of the images.
The files have been calibrated (dark subtracted, flat fielded, stacked, and aligned). They are ready for RGB combine and/or LRGB or HaLRGB combine in the program of your choice. The R/G/B filters used for all data were the SBIG set. I've experimented with various RGB color combine ratios, and my current starting ratios for the SBIG filters are R:G:B = 1.3 : 1.0 : 1.6. The H-Alpha images were taken with the Astrodon 6nm filter.
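Applying those color-combine ratios amounts to a per-channel weighting before the channels are stacked into a color image. A hypothetical sketch (function name and toy data are my own, for illustration):

```python
import numpy as np

def combine_rgb(r, g, b, ratios=(1.3, 1.0, 1.6)):
    """Weight each calibrated channel by its color-combine ratio and
    stack into one H x W x 3 color image, clipped to [0, 1]."""
    wr, wg, wb = ratios
    rgb = np.stack([wr * r, wg * g, wb * b], axis=-1)
    return np.clip(rgb, 0.0, 1.0)

rng = np.random.default_rng(5)
r = rng.uniform(0, 0.5, (8, 8))
g = rng.uniform(0, 0.5, (8, 8))
b = rng.uniform(0, 0.5, (8, 8))
color = combine_rgb(r, g, b)  # uses the suggested 1.3 : 1.0 : 1.6 starting ratios
print(color.shape)  # (8, 8, 3)
```

The ratios compensate for the different throughput of each filter and the sensor's different sensitivity in each band; they are a starting point to refine by eye.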
Feel free to download these files and try your hand at processing images taken with a large scope under dark skies.
Should you decide to publish any resulting processed image (on a web site or in print), the only thing I ask is that the credit line include the phrase "Image Acquisition by Jim Misti".
You should be able to download each file by right-clicking on the file name, selecting the download option in your browser, and specifying a location on your computer to save the file. Each file is just over 20 MB in size.
- Free sequence capture control for Mac/PC + lots of DSLRs.
- For calculating all things related to CCD imaging (FOV, pixel size, etc.).
- AstroImager is a powerful but easy-to-use image capture application for astrophotography. (OSX)
- Performs Lucy-Richardson deconvolution, unsharp masking, brightness normalization, and tone curve adjustment.
- AvisFV is a free FITS viewer, editor, and converter.
- Free software that lets you input your coordinates and shows you when objects will be up and at what time.
- Planetary video processing/stacking software.
- Planetary imaging pre-processor, good for shaving down file sizes with cropped, stabilized planetary shots.
Astrophotography: Tips & Techniques
Once you’ve learned your way around the night sky and glimpsed distant nebulae through a pair of binoculars or a telescope, you might find yourself wanting to capture the magic that keeps you returning to your telescope every night. But if you’re used to taking point-and-shoot photos, astrophotography can be pretty daunting. We provide useful astrophotography tips and tricks to get you started photographing the night sky. We’ll help anyone with modest stargazing equipment and access to dark skies capture panoramic vistas of planets, star clusters and more — from learning how to guide a telescope for imaging to creating composite images on your laptop.
Interested in learning more? Download our FREE astrophotography primer ebook and start shooting nightscapes, the bright planets, and deep-sky fuzzies tonight!
AstroBin is an image hosting website specifically targeted at astrophotographers: it's the first and last place you need to upload your astrophotography images. Made by an astrophotographer, for astrophotographers.
If you are new to processing astrophotography images, you are not alone. If you would like a video reference to follow along with, have a look at the following image processing tutorial in Adobe Photoshop:
For more astrophotography tutorials from image acquisition to image processing, have a look at the tutorials section of this website. I appreciate those that take the time to process my data, and are brave enough to share their results with the world!
Testing the Venus Optics Laowa 15mm f/2 Lens
I test out a fast and very wide lens designed specifically for Sony mirrorless cameras.
In a test on my blog at www.amazingsky.net published May 31, 2018 I presented results on how well the Sony a7III mirrorless camera performs for nightscape and deep-sky photography. It works very well indeed.
But what about lenses for the Sony? Here’s one ideal for astrophotography.
Made for Sony e-mount cameras, the Venus Optics 15mm f/2 Laowa provides excellent on- and off-axis performance in a fast and compact lens ideal for nightscape, time-lapse, and wide-field tracked astrophotography with Sony mirrorless cameras. (Sorry, Canon and Nikon users, it is not available for other lens mounts.)
I use it a lot and highly recommend it.
Size and Weight
While I often use the a7III with my Canon lenses by way of a Metabones adapter, the Sony really comes into its own when matched to a “native” lens made for the Sony e-mount. The selection of fast, wide lenses from Sony itself is limited, with the new Sony 24mm G-Master a popular favourite (I have yet to try it).
However, for much of my nightscape shooting, and certainly for auroras, I prefer lenses even wider than 24mm, and the faster the better.
The Laowa 15mm f/2 from Venus Optics fills the bill very nicely, providing excellent speed in a compact lens. While wide, the Laowa is a rectilinear lens providing straight horizons even when aimed up, as shown above. This is not a fish-eye lens.
Though a very wide lens, the 15mm Laowa accepts standard 72mm filters. The metal lens hood is removable. © 2019 Alan Dyer
The Venus Optics 15mm realizes the potential of mirrorless cameras and their short flange distance that allows the design of fast, wide lenses without massive bulk.
Sigma 14mm f/1.8 Art lens (for Nikon mount) vs. Venus Optics 15mm f/2 lens (for Sony mount). © 2019 Alan Dyer
For me, the Sony-Laowa combination is my first choice for a lightweight travel camera for overseas aurora trips.
The lens mount showing no electrical contacts to transfer lens metadata to the camera. © 2019 Alan Dyer
However, this is a no-frills manual-focus lens. It does not even transfer aperture data to the camera, which is a pity; there are no electrical connections between the lens and camera at all.
For nightscape work, though, where all settings are adjusted manually, the Venus Optics 15mm works just fine. The key factor is how good the optics are, and I'm happy to report that they are very good indeed.
To test the Venus Optics lens I shot “same night” images, all tracked, with the Sigma 14mm f/1.8 Art lens, at left, and the Rokinon 14mm SP (labeled as being f/2.4, at right). Both are much larger lenses, made for DSLRs, with bulbous front elements not able to accept filters. But they are both superb lenses. See my test report on these lenses published in 2018.
The Sigma 14mm f/1.8 Art lens (left) vs. the Rokinon SP 14mm f/2.4. © 2019 Alan Dyer
The next images show blow-ups of the same scene (the nightscape shown in full below, taken at Dinosaur Provincial Park, Alberta), and all taken on a tracker.
I used the Rokinon on the Sony a7III using the Metabones adapter which, unlike some brands of lens adapters, does not compromise the optical quality of the lens by shifting its focal position. But lacking a lens adapter for Nikon-to-Sony at the time of testing, I used the Nikon-mount Sigma lens on a Nikon D750, a DSLR camera with nearly identical sensor specs to the Sony.
A tracked image with the Venus Optics Laowa 15mm at f/2.
Above is a tracked image (so the stars are not trailed, which would make it hard to tell aberrations from trails), taken wide open at f/2. No lens correction has been applied so the vignetting (the darkening of the frame corners) is as the lens provides.
As shown bottom right, when used wide open at f/2 vignetting is significant, but not much more so than in competitive lenses with much larger front elements, as I compare below.
And the vignetting is correctable in processing. Adobe Camera Raw and Lightroom have this lens in their lens profile database. That’s not the case with current versions (as of April 2019) of other raw developers such as DxO PhotoLab, ON1 Photo RAW, and Raw Therapee where vignetting corrections have to be dialled in manually by eye.
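When no lens profile is available, vignetting can also be divided out the way astrophotographers flat-field their telescope images: divide the light frame by the flat frame normalized to its mean. A minimal numpy sketch on synthetic data — the radial fall-off model here is an illustrative assumption, not a measurement of the Laowa:

```python
import numpy as np

def flat_field_correct(light: np.ndarray, flat: np.ndarray) -> np.ndarray:
    """Divide out uneven illumination: corrected = light / (flat / mean(flat))."""
    return light / (flat / flat.mean())

# Synthetic example: a uniformly lit scene dimmed up to ~40% toward the corners.
yy, xx = np.mgrid[-1:1:100j, -1:1:100j]
vignette = 1.0 - 0.4 * (xx**2 + yy**2) / 2.0   # assumed radial fall-off
light = 1000.0 * vignette                       # what the sensor records
flat = 20000.0 * vignette                       # flat frame shows the same fall-off
corrected = flat_field_correct(light, flat)     # now uniform across the frame
```

Because the fall-off cancels exactly in this synthetic case, the corrected frame is flat; with real flats the correction is only as good as the match between the flat and the light frame's optical path.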
A tracked image with the Venus Optics Laowa 15mm stopped down 1 stop to f/2.8.
When stopped down to f/2.8 the Laowa “flattens” out a lot for vignetting and uniformity of frame illumination. Corner aberrations also improve but are still present. I show those in close-up detail below.
15mm Laowa vs. Rokinon 14mm SP vs. Sigma Art 14mm – Comparing the left side of the image for vignetting (light fall-off), wide open and stopped down. ©2018 Alan Dyer
I compare the vignetting of the three lenses, both wide open and when stopped down. Wide open, all the lenses, even the Sigma and Rokinon despite their large front elements, show quite a bit of drop off in illumination at the corners.
The Rokinon SP actually seems to be the worst of the trio, showing some residual vignetting even at f/2.8, while it is reduced significantly in the Laowa and Sigma lenses. Oddly, the Rokinon SP, even though it is labeled as f/2.4, seemed to open to f/2.2, at least as indicated by the aperture metadata.
15mm Laowa vs. Rokinon 14mm SP vs. Sigma Art 14mm – Comparing the centre of the image for sharpness, wide open and stopped down. Click or tap on an image to download a full-resolution JPG for closer inspection © 2018 Alan Dyer
Above I show lens sharpness on-axis, both wide open and stopped down, to check for spherical and chromatic aberrations with the bright blue star Vega centered. The red box in the Navigator window at top right indicates what portion of the frame I am showing, at 200% magnification in Photoshop.
On-axis, the Venus Optics 15mm shows stars just as sharply as the premium Sigma and Rokinon lenses, with no sign of blurring spherical aberration nor coloured haloes from chromatic aberration.
This is where this lens reaches sharpest focus on stars, just shy of the Infinity mark. © 2019 Alan Dyer
Focusing is precise and easy to achieve with the Sony on Live View. My unit reaches sharpest focus on stars with the lens set just shy of the middle of the infinity symbol. This is consistent and allows me to preset focus just by dialing the focus ring, handy for shooting auroras at -35° C, when I prefer to minimize fussing with camera settings, thank you very much!
15mm Laowa vs. Rokinon 14mm SP vs. Sigma Art 14mm – Comparing
The Laowa and Sigma lenses show similar levels of off-axis coma and astigmatism, with the Laowa exhibiting slightly more lateral chromatic aberration than the Sigma. Both improve a lot when stopped down one stop, but aberrations are still present though to a lesser degree.
However, I find that the Laowa 15mm performs as well as the Sigma 14mm Art for star quality on- and off-axis. And that’s a high standard to match.
The Rokinon SP is the worst of the trio, showing significant elongation of off-axis star images (they look like lines aimed at the frame centre), likely due to astigmatism. With the 14mm SP, this aberration was still present at f/2.8, and was worse at the upper right corner than at the upper left corner, an indication to me that even the premium Rokinon SP lens exhibits slight lens de-centering, an issue users have often found with other Rokinon lenses.
Real-World Examples – The Milky Way
The fast speed of the Laowa 15mm is ideal for shooting tracked wide-field images of the Milky Way, and untracked camera-on-tripod nightscapes and time-lapses of the Milky Way.
Image aberrations are very acceptable at f/2, a speed that allows shutter speed and ISO to be kept lower for minimal star trailing and noise while ensuring a well-exposed frame.
This is a stack of 8 x 2-minute exposures with the Venus Optics Laowa 15mm lens at f/2 and Sony a7III at ISO 800, on the Sky-Watcher Star Adventurer tracker. A single exposure taken through the Kenko Softon A filter layered in with Lighten mode adds the star glows, though exaggerates the lens distortion on the bright stars.
This is a stack of 12 exposures for the ground, mean combined to smooth noise, and one exposure for the sky, all 30 seconds at f/2 with the Laowa 15mm lens on the Sony a7III camera at ISO 6400. These were the last frames in a 340-frame time-lapse sequence.
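For untracked camera-on-tripod shots like the time-lapse frames above, a common rule of thumb (the "500 rule") estimates the longest exposure before stars visibly trail: 500 divided by the effective focal length in millimetres. A sketch — the divisor of 500 is the traditional assumption, and many shooters tighten it to 300 or 400 for high-resolution sensors:

```python
def max_untracked_exposure_s(focal_length_mm: float, crop_factor: float = 1.0) -> float:
    """Rule-of-thumb longest untracked exposure (seconds) before stars trail.

    Classic '500 rule': 500 / (focal length * crop factor). Treat the result
    as a starting point, not a guarantee; pixel pitch and declination matter.
    """
    return 500.0 / (focal_length_mm * crop_factor)

# A 15 mm lens on a full-frame body allows roughly half a minute untracked,
# consistent with the 30-second frames described above.
print(round(max_untracked_exposure_s(15), 1))
print(round(max_untracked_exposure_s(24, 1.5), 1))  # same lens idea on APS-C
```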
Astrophotography Data Storage
I have been taking photos for about a year now, and the amount of data I have stored is growing increasingly large. I was hoping I could get some thoughts from the community about how you store your data.
1. What are you using? Just a large hard drive, writing to discs, NAS? Looking for ideas that work for you.
2. How do you keep your data? For example I have a folder for each target, but I have it saved in many varieties, for example the raw data (fits files), some folders with editing and stacking etc., and of course the final product in the completed photo itself. I think I am going to begin just keeping the raw data and the final photo, then if I want to begin a new reprocess I will start from scratch with raw data. What do you do?
3. If you have photos or recommended equipment that works well, please throw it in here.
Edited by cargostick, 18 November 2020 - 07:51 PM.
Roger, I have 2 external 1TB hard drives for storage and keep the present year's data on my computer HD. I do as you mentioned: save the data and finished processed photos in folders by target and year. Even with all this I have to go back and clean things up from time to time. I find myself questioning why in the heck I saved some of these data files and photos. As I get better with AP some of the old stuff is really bad by comparison. There should be a reality show called AP hoarders, since it is so easy to want to save everything for some perceived future need.
I was thinking of one of the 8TB backup drives myself.
There should be a reality show called AP hoarders since it is so easy to want to save everything for some perceived future need.
You hit the nail on the head.
Yes, it is a good idea to hang on to good raw data.
Depends on how long you want to keep the data.
If you want to keep them for many years you will probably run into a problem that most file systems in use have, called 'bit rot'. Without getting into too much detail, the hard drive stores each bit as a very small patch of magnetic particles aligned to represent either a 1 or a 0. The operating system (Windows, macOS, Linux, etc.) reads the bits on the drive and looks for errors, which it then tries to correct. The key word is 'tries'! It doesn't always succeed if there are too many errors in a sequence or too many on the disc. This is compounded when you use compressed files or compressed storage systems: compression shrinks a file by removing sequences of data that are duplicated or very similar, keeping a single copy of each sequence and a list of where in the file it goes each time, so a single bad bit can damage many places at once. (This is an EXTREMELY simplified explanation of compression.) The only filesystem I'm aware of that guards against this and is available for public use without buying commercially rated equipment is ZFS, used in BSD and some versions of Linux.
I've been using a NAS called freenas https://www.freenas.org/ for several years now and am very satisfied with it. You can buy a pre-assembled box with everything ready to go or build your own. I started with 1 of these (https://www.amazon.c. DDR31JW5609MX1F) and have upgraded to a very large system because of the amount of data I have saved. (Somewhere around 50TB and climbing. Video & Audio files take a LOT of space!)
I started looking at storage when I lost some important files that got corrupted on a drive that tested perfectly fine but had lost enough bits to make parts of them unreadable. Fortunately the duplicate backup disc was good.
Freenas may be overkill for a lot of people and it has a learning curve, but if you want to view those pictures 10 years from now you may want to look into it. And if you think 10 years is too long: someone I know found some of his Mars pictures from the '80s and '90s still on the web on an active page. I still look at and enjoy some of my photos from the '70s.
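Short of running ZFS, the silent corruption described above can be caught at the file level by recording a checksum for each archived file and re-verifying later. A minimal sketch using only Python's standard library; the folder layout is an assumption, and any structure works:

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Hash a file in chunks so large FITS files don't need to fit in RAM."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def build_manifest(folder: Path) -> dict:
    """Record a checksum for every file under the archive folder."""
    return {str(p.relative_to(folder)): sha256_of(p)
            for p in folder.rglob("*") if p.is_file()}

def verify(folder: Path, manifest: dict) -> list:
    """Return the files whose current checksum no longer matches the manifest."""
    return [name for name, digest in manifest.items()
            if sha256_of(folder / name) != digest]
```

Run `build_manifest` when you archive a season's data, save the result (as JSON, say), and re-run `verify` against it every year or so; any file it reports has silently changed on disk and should be restored from the duplicate backup.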
I have a NAS array with 4-8tb drives in a RAID configuration that is used for backup and archival. I also have a more standard backup drive.
I have an 11TB NAS storage where I store all the original images I took. While my processing hasn't gotten that much better I hope it will and I can get better images from the raw data. I use a Synology 4 bay system, and it works flawlessly. As an IT guy of 30+ years, I am impressed that it was so inexpensive (compared to what my company pays for NetApp appliances) and yet it works so well.
I'm running a Synology DS1618+ NAS with 6x10TB HDs as a RAID10, giving me 26+TB of storage space for all of my photography/astrophotography pictures/data. While a bit pricey upfront (the majority of the cost was for the HDs), it has been a worthwhile investment and seems to be extremely stable: I haven't had a single problem or any downtime in the 14 months it's been running. The only issue I had was when I originally set it up: I only bought 4x10TB HDs (all I could afford at the time) and set up the RAID10, and then when I wanted to add the final 2 HDs, there was no provision that I could find to simply add 2 additional HDs into the existing RAID10. I could have run the 2 new HDs as their own RAID system, but I didn't want to do that. So I had to transfer all of my existing data from the NAS onto my other computers, wipe out the existing RAID10 and rebuild it with the new drives as a completely new RAID10. Lesson learned: buy as many HDs as needed to fill the NAS when originally building it, because upgrading isn't as simple as just adding additional HDs to an existing RAID10.
Edited by Rextiles, 26 November 2020 - 07:27 AM.
Good God, Rextiles, I hope you are young. I just did a quick calculation, and at my current pace it would take me at least 40 years to get close to tapping out that storage system. I am 70, so I think I'll stick with a smaller capacity. I think you are probably set for life.
I agonize about storage daily. What if my PC crashes irremediably? Should I back up every day? Every week? Every year?
In the past I tip-toed into external hard drives but the first I bought died after a year and a lot of data was lost. Even had flash sticks die on me.
I lost the confidence to go that route again.
Now I take the old steam method of burning it all on to DVD's. Always good Karma on long spells of cloudy skies.
At least I can write on the discs for minimal instant recall.
I finally put it all into perspective by realizing that after I pop my clogs and go to the great Spirit in the Sky that some relative may go through my discs and wonder why most of them are black images (darks) or bright images (flats) and purple frames of uninteresting lights. If they even bother.
They will toss them, or make decorative strings of them à la 1980s space dividers.
There are better ways I know but so far DVD's are as future proof as anything else.
Much like the Edison phonograph, reel-to-reel, 8-track tape, 45s, 33s, 78s and even CDs.
Soon most of our data will only be viewable and recoverable by visiting museums.
I still have a VHS player, a cassette player, a turntable and -WAIT FOR IT- a Walkman!
Ordered a 5TB drive for in-the-field use.
You should look at what needs backing up. There are 4 basic types of files on a PC: Data (all of the pictures, docs, movies, music, etc. that we accumulate), Settings and Preferences (these vary between operating systems and programs; think of the Windows registry and ini files, with other OSes using various kinds of files for these), Programs (everything you work with on the PC to read and write the data), and the Operating System (Windows, Mac, Linux, etc.). Of these, the data is the most important. The OS can be gotten again from the internet or a CD/DVD source. Same with the programs; just keep a separate record of all installation/registration keys so you don't pay for them again. The Settings and Preferences can be a little tricky depending on the OS, but most of the time it's how you want something to look and can be recreated from memory.
Most current OSes have some way of backing up the PC built in, and if you buy an external drive like what TxStars listed, they often come with backup software to use. Most people should do a backup whenever they have changed enough files that losing them would cause great pain. Depending on how much you do, this can be hourly, daily, weekly, or monthly; it all depends on how much is too much to lose. That's why businesses run backups each night and keep off-site storage in case the building itself is damaged. For home use most people can be very comfortable doing weekly incremental backups of data and a full backup once a month. Change the weekly to daily if you do a LOT of work that would be painful to lose midweek. Then keep a hard copy of the full backup in a safe if you are really paranoid about data safety.
An incremental backup means that you only back up the files that have changed or are new since the last backup. It saves a lot of space! The full backup can be just data or everything. The advantage of backing up just the data is that the amount of space used is a lot smaller; the disadvantage is that after a major system crash you then need to manually reinstall the OS and all programs, which can take a lot of time.
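The incremental idea can be sketched in a few lines: copy only the files that are missing from the backup or newer than the copy already there. This is a simplified illustration only; real backup tools such as rsync also handle deletions, permissions, and verification:

```python
import shutil
from pathlib import Path

def incremental_backup(source: Path, backup: Path) -> list:
    """Copy files from source that are missing from, or newer than, the backup tree."""
    copied = []
    for src in source.rglob("*"):
        if not src.is_file():
            continue
        dst = backup / src.relative_to(source)
        # Copy only if the backup copy is absent or older than the source.
        if not dst.exists() or src.stat().st_mtime > dst.stat().st_mtime:
            dst.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(src, dst)  # copy2 preserves timestamps
            copied.append(str(src.relative_to(source)))
    return copied
```

Because `copy2` preserves modification times, running this a second time with no changes copies nothing, which is exactly the space saving the incremental scheme is after.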
Something else to keep in mind is that ALL media has a lifespan. Memory sticks, hard drives, tape: they all have a limit on the number of times they can be written to and read from before they start losing data. Even CDs & DVDs have a limited life. It's often measured in decades, but they can go bad: the data is written by etching a metal layer inside the plastic disc, and if that metal corrodes it can't be read. End of disk!
The bottom line is that nothing lasts forever; it just has to last long enough.
12 Replies to “Charon Imaged by Amateur Astronomers”
That’s truly amazing. With off-the-shelf equipment and a little know-how, amateur astronomers can do things professionals struggled with only 20-30 years ago.
With this kind of capability, a motivated amateur can do real science. And no doubt there are many simple and beautiful things left to discover… even with a modest telescope.
Has anyone tried yet to detect Nix and Hydra with amateur equipment? Is it possible?
They would need a bigger telescope to do that. Besides, these two moons were discovered only a few years ago, by the Hubble Telescope (I think). It will probably take at least 10-20 years before ordinary amateur astronomers can do that.
Another interesting question is whether amateur astronomers could take images of extrasolar planets. How long will it take them to do that? Any guess?
well, if the trend continues, and Nix and Hydra will take about 10-20 years, we should have an amateur image of an extrasolar planet as soon as adaptive optics become commercially available…maybe, 30-40 years?
We should redefine what an “amateur” is and what “amateur equipment is”.
Most major observatories have at least a few C14’s that are used routinely for astronomical observations. That is the SCT that we are talking about here.
There is no such thing as AMATEUR EQUIPMENT or PROFESSIONAL EQUIPMENT. Amateurs use the same equipment that professionals use and vice versa. … it is really about who pays the bills, and the person or corporation or gov’t that does pay the bills determines the goals and priorities for the equipment.
Some of these so called “amateurs” must have VERY DEEP POCKETS to be able to afford such equipment, a permanent observatory from which to use it, and the time to wait for the right conditions to attain images. What is the real difference between a professional and an amateur? A degree? Hogwash: the discoverer of Pluto had no degree. A funding source? Yes, but some of today’s so-called amateurs have much higher budgets than many professionals, and these amateurs have no need to make a case for the acquisition, use, or priority of the equipment.
In many cases … so called amateur equipment is superior to professional equipment of just a couple years ago because professional observatories have limited resources as well. In many cases, amateurs and professionals use the same equipment … exactly.
It really is all about $$. Almost any group of amateurs with enough $$ can achieve the same results as professionals. In past times, so called “amateurs” built many of the great observatories that “professionals” use every day.
But … are they really amateurs anymore? Do they really work 40 hours a week in non-astronomy professions, or do they have a magnificent pile of money or an endowment so they can focus on astronomy without worrying about the day-to-day issues of eking out a living? Who can afford such equipment and time without an outside benefactor (or an inheritance) to pay some or all of the bills and expenses?
It really comes down to $$. Those of us who don’t have large budgets to commit to astronomy still make achievements as amateurs. Those with large budgets … well they really are professionals.
For these so-called “professional amateurs”, astronomy is an Avocation if not a vocation.
Many of us, even with significant investments, just can’t manage the $$ to become a “professional amateur”.
Amazing work. Top job guys…
# Zibit Says:
October 30th, 2008 at 3:01 pm
How much money for a 14 inch telescope?
# Zibit Says:
October 30th, 2008 at 3:12 pm
Found my answer, anywhere from $6000-$10,000.
A 14 inch instrument or similar can be found in a much wider price range than that; the price all depends on the ultimate quality and capability of the instrument. For example, a 12.5 inch scope from a premium manufacturer such as RCOS, optical assembly only, will set you back more than US$20,000. A 16 inch Meade Lightbridge can be had for under US$2000… Mind you, you won’t be imaging Charon with the Lightbridge. If you did, you’d be quite the hero…
@ Steven C Says:
October 30th, 2008 at 1:02 pm
I would agree with you largely about the equipment being similar in some cases; there is some overlap between lower-end professional setups and very high-end amateur setups. Certainly you can find pros using RCOS scopes and even some of the higher-end Meades, etc. But what separates a professional astronomer from an amateur? A pro astronomer gets paid to do what they love, whereas an amateur does not. Simple really, when it comes down to it…
I know very little about astronomy and perhaps the answer to this question is obvious to everyone, but why couldn’t Hubble image Charon? Is it too close?
So where is his image? I see a nice pretty HST image and some false color image that is marked up at the top. Is that his image after processing on PC?
“why couldn’t Hubble image Charon? Is it too close?”
Hubble can image Charon. The bottom image is a HST image. I don’t know what the blob in the top image is.
@ Bob & ruf, check out the Bad Astronomy link in the story for a more complete description of the equipment and techniques used to make this amazing image. This is the real thing, heavily processed of course, but the real deal nonetheless.