This post discusses the steps taken in processing NGC 4567 and NGC 4568 – Siamese Twins.
Acquisition
Subexposures for this image were taken on 15 different nights from our remote observatory in Fort Davis, Texas, using a Planewave CDK 17″ f/6.8 telescope on an Astrophysics 1100GTO mount and an FLI PL16803 CCD camera. ACP was used to take 120 20-minute subexposures in LRGB, totaling 40 hours of integration.
Processing Strategy
PixInsight and Photoshop were used to process this image. PixInsight excels at preprocessing and integration, noise reduction, deconvolution, color calibration and color combination. Photoshop excels at targeted detail extraction, sharpening, noise reduction and final color balancing.
Preprocessing
File Types Used throughout processing
Individual subs taken with our imaging systems are saved as FITS files. Once we begin processing these subs in PixInsight, we use their .xisf file format. .XISF files are saved with the following settings.
Finally, we use 16-bit TIFF images to move files back and forth between PixInsight and Photoshop.
BatchPreProcessing (BPP) Script
Our preprocessing of this image began using PixInsight’s Batch Preprocessing (BPP) script. This was the first time we didn’t first create master bias and master dark frames for our preprocessing. Instead we added all bias and dark subs into the BPP script. The reason for this is to avoid clipping when calibrating the dark frames or master dark frame (see section 4 of this post for details: https://pixinsight.com/forum/index.php?topic=11968.msg73522#msg73522). We use the BPP script only for image calibration. Although the script can also perform image registration and integration, we include several other manual steps in our preprocessing before registration.
Blinking Images
Once we have our images calibrated, we review them using PixInsight’s Blink process. This process reveals any subs that should be thrown away because of apparent issues such as clouds, poor tracking, and other visual anomalies. We are careful to only blink subs from a single filter so that there is a valid visual comparison across subs. Rejected subs are moved into their own sub-folder (the Blink process handles this) so they won’t be used in successive steps.
Cosmetic Correction
PixInsight’s CosmeticCorrection process is optional, and is primarily used to remove hot/cold pixels. We don’t have serious hot/cold pixel issues with our cameras, and hot/cold pixels are usually effectively rejected during image integration. However, we do use this process to remove defect columns. This hasn’t been an issue with our PL16803 CCDs, so we did not perform CosmeticCorrection on the Siamese Twin subs. We usually require column defect correction for our ML16200 and ML/PL29050 CCDs.
Subframe Selection and Weighting
Next we grade our subexposures using the SubframeSelector (SS) process. This shouldn’t be confused with the SubframeSelector script which has been a part of PixInsight for years. The SS PCL process is relatively new. If you do not see the SS process listed in your process menu, you will have to download and manually install the PCL process. You can download the SubframeSelector PCL process for both Mac and PC from this post: https://pixinsight.com/forum/index.php?topic=11780.0. Make sure to read through the entire discussion so that you download the latest version. The SS process runs much more quickly than the script, saving a great deal of time during preprocessing. It also caches your image data, so returning to a group of subexposures doesn’t require remeasuring the set. To use the SS process, make sure to enter the system parameters for your system, camera and bin level. Run one filter/bin combination at a time. After measuring the subexposures, we review the graphs and charts to exclude any subs that don’t meet our requirements. This can be done automatically with an Approval formula; however, we prefer reviewing the data ourselves. For example, we look at the Star Support graph or table. If most of the subs have 4,000 stars, but a few have 1,000, then it’s pretty clear there was either a focus issue, clouds, or humidity for the subs with fewer stars. We can eliminate those subs without fear of throwing away good data. In short, we look for outliers that don’t fit with the bulk of the data. We hate throwing away data, but adding poor data is worse than integrating less data.
Once we have excluded any undesirable subs, we use the following Weighting formula to output the approved subs, with the weight captured in a new FITS keyword “SSW”:
(FWHM*((1/(FWHM*FWHM))-(1/(FWHMMax*FWHMMax)))/((1/(FWHMMin*FWHMMin))-(1/(FWHMMax*FWHMMax)))+Eccentricity*(1-(Eccentricity-EccentricityMin)/(EccentricityMax-EccentricityMin))+SNRWeight*(SNRWeight-SNRWeightMin)/(SNRWeightMax-SNRWeightMin)+(100-(FWHM+Eccentricity+SNRWeight)))/100
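For readers who want to sanity-check the formula outside PixInsight, here is a direct Python transcription. The min/max statistics used as defaults below are hypothetical sample values; in practice they come from the SubframeSelector measurements for the filter/bin group being weighted.

```python
def ssw_weight(fwhm, ecc, snr,
               fwhm_min=2.0, fwhm_max=4.0,    # hypothetical group statistics
               ecc_min=0.4, ecc_max=0.6,
               snr_min=10.0, snr_max=20.0):
    """Python transcription of the SubframeSelector Weighting formula."""
    # Reward tight stars: 1 at FWHMMin, 0 at FWHMMax
    fwhm_term = fwhm * ((1 / fwhm**2 - 1 / fwhm_max**2) /
                        (1 / fwhm_min**2 - 1 / fwhm_max**2))
    # Reward round stars: 1 at EccentricityMin, 0 at EccentricityMax
    ecc_term = ecc * (1 - (ecc - ecc_min) / (ecc_max - ecc_min))
    # Reward strong signal: 0 at SNRWeightMin, 1 at SNRWeightMax
    snr_term = snr * (snr - snr_min) / (snr_max - snr_min)
    return (fwhm_term + ecc_term + snr_term
            + (100 - (fwhm + ecc + snr))) / 100

best = ssw_weight(2.0, 0.4, 20.0)   # tightest stars, strongest signal
worst = ssw_weight(4.0, 0.6, 10.0)  # softest stars, weakest signal
```

The best sub in the group comes out with the highest weight, which ImageIntegration later reads back from the SSW FITS keyword.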
We do one additional critical step during the SS process: we note the best subexposure for each filter. We usually select the subexposure with the most stars, but we also make sure that the FWHM for the selected sub is good. The selected sub for each filter will be used in a later step when we apply the LocalNormalization process, and the best L sub will be used as the reference image during StarAlignment. The subs we selected for the Siamese Twins image were:
- L and Registration: NGC_4567_date_20180213_time_044513_Lum_exp_1200s_bin_1_002_c
- R: NGC_4567_date_20180221_time_031707_Red_exp_1200s_bin_1_001_c
- G: NGC_4567_date_20180221_time_022922_Green_exp_1200s_bin_1_002_c
- B: NGC_4567_date_20180221_time_045211_Blue_exp_1200s_bin_1_002_c
Note: for those still using the SubframeSelector script, the above formula will not work. Instead, we use a spreadsheet posted in various locations online by David Ault to create our formula based on SubframeSelector script measurements.
StarAlignment for Image Registration
We use the StarAlignment process to align all of our subexposures. We use the best L frame as our reference image, and keep the rest of PixInsight’s defaults. For this image we did not Drizzle the data.
LocalNormalization
After registration, we run the LocalNormalization process on our subs, one filter at a time. For each filter group, we use the best subexposure of that group (as noted during the SubframeSelector process) as the reference image. Running this process now will provide better rejection and noise reduction when the subexposures are integrated. This process takes a bit of time to run, but is fully automated. The only change we make to the process settings is to set the scale to 256 (from the default 128).
ImageIntegration
We are finally ready to integrate our individual filter subexposures using the ImageIntegration process.
This screenshot shows our typical settings for ImageIntegration:
Notes on ImageIntegration settings:
- After adding your files for integration, make sure you add the LocalNormalization files (see the blue box in the screenshot)
- Make sure to select FITS Keyword for weighting, and enter the correct keyword you applied during the SubframeSelector process (we use “SSW”)
- We’ve found that by tweaking the Buffer Size and Stack Size settings we can greatly improve the speed of this process. We set Buffer Size (in Mbytes) to just larger than the size of the files being integrated. At this stage of our Siamese Twins processing our .xisf files were each 65MB, so we set Buffer Size to 66MB. For Stack Size, the larger the better, but this is dependent on available system RAM. We apply 20GB of the available RAM on our machine to this process. You can leave this at the default with no ill effect except for the overall speed of the process
- Since we typically have more than 20 subexposures for each filter, we use Linear Fit Clipping as our rejection algorithm. For Normalization, remember to select Local Normalization so that the LocalNormalization files are applied
- In Pixel Rejection settings, we find that using values of 4 and 2 for Linear fit low/high provide the best rejection of satellite trails, cosmic rays, etc.
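The idea behind Linear Fit Clipping can be illustrated in a few lines of numpy: fit a straight line to each pixel's values across the stack, then reject samples whose residuals fall outside asymmetric low/high sigma bounds (4 and 2 here, matching the settings above). This is a simplified sketch of the technique with synthetic data, not PixInsight's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(42)
n_frames = 24
stack = rng.normal(100.0, 2.0, size=(n_frames, 8, 8))
stack[5, 3, 3] += 500.0  # simulated satellite trail / cosmic ray hit

t = np.arange(n_frames, dtype=float)
rejected = np.zeros_like(stack, dtype=bool)
for y in range(stack.shape[1]):
    for x in range(stack.shape[2]):
        v = stack[:, y, x]
        b, a = np.polyfit(t, v, 1)           # fit value vs. frame index
        resid = v - (a + b * t)
        sigma = resid.std()
        # asymmetric clipping: 4 sigma low, 2 sigma high
        rejected[:, y, x] = (resid < -4 * sigma) | (resid > 2 * sigma)

clean = np.where(rejected, np.nan, stack)
result = np.nanmean(clean, axis=0)
```

The transient spike is rejected while the underlying pixel value survives, which is exactly why this rejection family handles satellites and cosmic rays so well.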
Postprocessing – PixInsight
Following are steps we took preparing the LRGB image in PixInsight.
Dynamic Crop
PixInsight’s DynamicCrop process allows you to precisely crop all of your master images to the same crop region. To do this, open all of the master images (L, R, G, B). Using the DynamicCrop process, draw your desired crop region on any one of the images (here we’ve drawn it on L). Take care to observe how this crop would look on all four images, making sure that all dark edges will be cropped out from all images with the crop you are drawing. Once you are satisfied with the crop region, DO NOT apply the crop. Instead, drag the Instance triangle from the bottom left of the DynamicCrop process window onto the PixInsight workspace. This will save the precise location/dimension settings of the crop area. You can rename this saved instance by right clicking on it, and even save it to your hard drive (this might be useful if you intend to add more data in the future and would want to match the crop of this old data). Once you have the instance on your workspace, cancel the DynamicCrop process without applying it to the L image. Now, all you have to do is drag the DynamicCrop instance on each of the four images, and the precise crop will be applied to each of them.
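The effect of dragging one saved crop instance onto every channel is easy to picture in code: a single crop region, defined once, applied identically to each master. A toy numpy sketch (the random arrays stand in for the real masters, which for a PL16803 would be 4096 x 4096):

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical uncropped master frames for each filter
channels = {name: rng.random((512, 512)) for name in ("L", "R", "G", "B")}

# One crop region, defined once and applied identically everywhere,
# mirroring the saved DynamicCrop instance
crop = (slice(20, 500), slice(16, 496))
cropped = {name: img[crop] for name, img in channels.items()}
```

Because the same slice object is reused, every channel ends up with exactly the same pixel grid, which is what later channel combination requires.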
You’ll want to save the newly cropped images with a new name (we typically name it “L_crop”). Here is the L channel before and after applying Dynamic Crop:
L Channel Processing
Processing of the cropped L (detail) channel typically follows these steps:
- DynamicBackgroundExtraction (DBE)
- Noise reduction (linear)
- Deconvolution
- HistogramTransformation (stretching)
- Noise reduction (nonlinear)
- Curves and additional HistogramTransformation
DynamicBackgroundExtraction (DBE)
The DBE process is unique for each image, and for each channel. We didn’t save an interim image showing our DBE for the Siamese Twins. There are many good tutorials about DBE on the web, but this is one area where we could use some improvement. In any case, DBE should be used to remove as much of the uneven background as possible (from external light sources, imperfect flat field removal, etc.).
Noise Reduction (linear)
Noise reduction while the image is in its linear state is critical, as it will allow you to more aggressively apply deconvolution and stretch the image without exaggerating noise. We use the noise reduction techniques described in this excellent tutorial for linear noise reduction: https://jonrista.com/the-astrophotographers-guide/pixinsights/effective-noise-reduction-part-1/. Here is a comparison of the L image before and after noise reduction. Note, this is PixInsight’s default stretch:
As you can see, there is a definite improvement in the overall noise profile. Not all of the noise is removed. We will take additional steps for noise reduction later.
Deconvolution
In March, 2018 we were contacted by Adam Block asking us if we would test out his new deconvolution video. The video is part of his suite of PixInsight processing tutorials, so we cannot share it here. His detailed demonstration and explanations have greatly improved the results we get from deconvolution. In short, the process requires creating a Point Spread Function (PSF) from stars in the image, an overall image luminance mask for targeting areas for sharpening, and a local support mask to protect larger stars from the deconvolution process. Each image requires different tweaked settings. Adam’s video explains how the settings work, and shows how to build each of these three components required for the process. Here is the L channel before and after deconvolution:
Detail has clearly been enhanced. Just as importantly, larger stars remain untouched. Stars in front of the galaxy display no dark ringing (this is due to a well-made local support star mask, and using proper local deringing settings within the Deconvolution process).
Additional Noise Reduction
At this stage in processing, we sometimes do another round of noise reduction. However, in this case we did not.
HistogramTransformation (stretching)
The HistogramTransformation process is used to stretch the image. With some galaxy images which have very high dynamic range we perform three stretches (one for the core, one for the main portion of the galaxy and one for the faint outer areas), save each of the stretches as a TIFF file, and blend them together using masks in Photoshop before bringing them back into PixInsight for further processing (see our M 101 processing walkthrough for an example). For the Siamese Twins this wasn’t necessary; the initial stretch brought out detail everywhere without blowing out the core. Here is the image after stretching:
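PixInsight's stretch is built on its midtones transfer function (MTF), which HistogramTransformation applies between the shadow and highlight clip points. A small sketch of the function itself:

```python
import numpy as np

def mtf(m, x):
    """PixInsight's midtones transfer function: maps the midtones balance m
    so that mtf(m, m) = 0.5, with the endpoints 0 -> 0 and 1 -> 1 fixed."""
    x = np.asarray(x, dtype=float)
    return ((m - 1.0) * x) / ((2.0 * m - 1.0) * x - m)

# A midtones balance below 0.5 brightens the image: faint values move up
stretched = mtf(0.05, np.array([0.0, 0.05, 0.1, 1.0]))
```

Sliding the midtones point left (smaller m) lifts the faint signal while the black and white points stay pinned, which is why a single careful stretch can reveal the outer arms without blowing out the core.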
Noise Reduction (nonlinear)
At this point, we carefully applied a round of nonlinear noise reduction in two steps. To protect areas of detail in the image, we created a clipped luminance mask. We first duplicated the original image, then used the auto clipping shadows and highlights buttons in the HistogramTransformation process to exaggerate the contrast.
This high contrast image was then applied to the original image and inverted as a mask. The red areas will be protected from noise reduction processes:
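Building such a clipped, inverted luminance mask is straightforward to sketch in numpy. The clip points below are hypothetical; in practice the auto clipping buttons in HistogramTransformation choose them from the image statistics.

```python
import numpy as np

rng = np.random.default_rng(2)
lum = np.clip(rng.normal(0.15, 0.05, (64, 64)), 0, 1)
lum[20:30, 20:30] = 0.8   # bright "galaxy" region we want to protect

# Clip shadows/highlights to exaggerate contrast (hypothetical clip points),
# then invert so bright detail is protected (mask value near 0 there)
shadows, highlights = 0.1, 0.5
mask = np.clip((lum - shadows) / (highlights - shadows), 0.0, 1.0)
inverted = 1.0 - mask      # apply this as the noise-reduction mask
```

With the mask inverted, noise reduction acts strongly on the dark background (mask near 1) and barely at all on the galaxy and stars (mask near 0).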
We applied two processes to the masked image: AtrousWaveletTransformation and ACDNR. Our settings are shown here:
Here is a before/after of our stretched L image showing noise reduction results:
The L image is now set aside while we process the color information.
RGB Color Channel Processing
Processing of the cropped R, G and B (color) channels typically follows these steps:
- DynamicBackgroundExtraction (DBE)
- Noise reduction (linear)
- LinearFit
- ChannelCombination
- PhotometricColorCalibration
- HistogramTransformation
- CurvesTransformation and additional HistogramTransformation
However in reviewing the processing of this image, for some reason we changed the order of the first three steps. We began with linear noise reduction, then performed a linear fit and finally DBE. This order is not our standard process, but we’re pleased with the results.
Noise Reduction (linear)
Linear noise reduction for each of the color channels follows exactly the same process as described above for L channel linear noise reduction. We can afford to be more aggressive with our noise reduction, as our image detail will come primarily from the L image. Here is the Red channel before and after linear noise reduction:
LinearFit
PixInsight’s LinearFit process is used to ensure that the relative strength of signal in each of the color channels is similar. This helps to ensure a proper color balance when combining the channels into an RGB image. Using the HistogramTransformation process, we examine the histogram of each of our R, G and B images. The image with the signal furthest to the right in its histogram will become our reference, and the other two images will be fit to match the reference image. In this case, the R image had the strongest signal, so using that as the reference image we applied LinearFit to G and B, and saved these. Visually we don’t see any difference in the images, but the histograms will now match better when we combine the channels.
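The principle behind LinearFit can be sketched as a global least-squares fit of one linear channel to the reference: find a and b so that a + b * target best matches the reference, then apply that linear transform to the whole channel. This toy version uses a single global fit over synthetic data; it is an illustration of the idea, not PixInsight's exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(3)
# "reference": the strongest channel (R in our case)
reference = np.clip(rng.normal(0.3, 0.05, (64, 64)), 0, 1)
# "target": a weaker channel (e.g. G), roughly half the signal plus offset
target = 0.5 * reference + 0.02 + rng.normal(0, 0.002, (64, 64))

# Least-squares fit: find a, b so that a + b*target ~= reference
b, a = np.polyfit(target.ravel(), reference.ravel(), 1)
fitted = a + b * target
```

After the fit, the weaker channel's histogram sits on top of the reference's, so ChannelCombination starts from balanced data.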
DynamicBackgroundExtraction (DBE)
As mentioned earlier, normally we would not apply DBE after linear fit, but for whatever reason we did. We usually first set up DBE on the R channel, but before applying it we save an instance of the DBE process to the workspace. Later when applying DBE to the G and B channels, we can double click this saved instance, and it will provide a very good starting point for DBE’s point placement on these other channels. Make sure when applying DBE to select Subtraction as the Target Image Correction. Applying DBE can make it appear that the image is gaining noise, but in reality removing gradients is merely revealing noise which is already present. Here is the Red channel before and after DBE (note: these are default stretches):
ChannelCombination
The individual color channels are now ready to be combined. The ChannelCombination process is used with default settings to combine the R, G, and B channels (right) into a single RGB channel (left). You can begin to see a hint of color in the combined image but it’s clear that much more color work is required:
PhotometricColorCalibration
Color calibrating the image is a critical step in attaining proper color balance. We used to use two processes to do this: BackgroundNeutralization and ColorCalibration. However, PixInsight has added a PhotometricColorCalibration process, which uses actual color balance data from the region of the sky your image comes from to apply a proper white balance. To use this process, enter your system’s focal length and your camera’s pixel size, and search your object’s coordinates using the search tool. Applying the process will plate solve your image, calculate the proper white balance, and then apply the color calibration to your image:
HistogramTransformation
The RGB image is now ready for the HistogramTransformation (stretching) process. As you can see, after stretching there is definitely color present (especially the yellowish star near the top), but at this stage we pay more attention to exposure than color saturation when stretching. If we push color too much in the HistogramTransformation processing we risk oversaturating the image, resulting in an unnatural appearance. We want to maintain translucency in the fainter portions of the galaxy.
CurvesTransformation
We use the CurvesTransformation process to push the color hidden within the image. We sometimes look at various images online to get a feeling of how blue or dusty brown a galaxy should be, and tweak individual color channels and overall saturation using the CurvesTransformation process. Here is our image before and after CurvesTransformation. First, the entire image, then a closeup of the Siamese Twins:
We use the stars as our main indicator of how far to push the curves, and while we know there is more color to be had, we will attack this in a more controlled manner using Photoshop and Nik Plugins.
Sometimes during stretching and pushing the color with curves we end up with color areas in the background that clearly are not correct; where they should be neutral gray they take on either red, green or blue color. This often results when DBE hasn’t done a perfect job at removing background gradients. When the gradients don’t match from R to G to B, you end up with patches of background that take on the color of the channel where the background contains excess data when highly stretched. There are different ways to deal with this. You could apply a luminance mask before pushing the color, only allowing the color to be pushed on galaxies and stars. Or you could just let it happen. We find that later, in Photoshop, we can use the Nik Viveza plugin to neutralize the background or desaturate, by targeting only those areas which have taken on such colors, and even out the background brightness differences using Nik ColorEfex Pro, targeting those brighter areas with control points and bringing them down with levels or curves to match the rest of the image background. Of course if you have nebulosity or IFN in your image background you must be very careful when using these methods. True signal should never be destroyed.
LRGB Processing
Processing of the final LRGB image in PixInsight typically follows these steps:
- LRGBCombination
- CurvesTransformation
LRGBCombination
The LRGBCombination process is used to blend our Luminance detail image into our RGB color image. To use this process, we open the final L and RGB images, and we make certain that the RGB image is the active one by clicking it. This will ensure the L gets added into the color, rather than the other way around. In the LRGBCombination process window we select only our L image, then we apply the process to the color image. We play with the Lightness and Saturation sliders (lower is more), and we leave the Channel Weights untouched. When playing with this tool, we uncheck Chrominance Noise Reduction to speed up testing, but once we settle on our final slider numbers we turn this noise reduction back on. This image shows our LRGBCombination settings, with the L and RGB images at bottom, and the resulting LRGB image behind them:
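A crude way to picture what LRGBCombination does is luminance replacement: rescale the color channels so their recomputed luminance equals the processed L image, preserving the hue ratios. PixInsight actually works in a CIE color space with chrominance noise reduction and adjustable lightness/saturation transfer, so treat this strictly as a sketch with synthetic data.

```python
import numpy as np

eps = 1e-6
rng = np.random.default_rng(4)
rgb = np.clip(rng.random((3, 32, 32)) * 0.4, 0, 1)   # stretched color image
new_l = np.clip(rng.random((32, 32)) * 0.6, 0, 1)    # processed L channel

# Recompute luminance from the color image (Rec. 709 weights), then
# rescale every channel so the luminance becomes the detail-rich L.
# Broadcasting the scale over the channel axis preserves hue ratios.
# (A real implementation would clip the result back into [0, 1].)
lum = 0.2126 * rgb[0] + 0.7152 * rgb[1] + 0.0722 * rgb[2]
scale = new_l / np.maximum(lum, eps)
lrgb = rgb * scale
```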
CurvesTransformation
We apply one more round of CurvesTransformation to enhance overall color. In some cases, we also apply an additional round of HistogramTransformation to lighten or darken the background, but in this case we didn’t need to. Here is the before/after of our final CurvesTransformation (note: we are very careful not to lose fine detail when pushing color):
The LRGB image is saved as a 16-bit TIFF for use in Photoshop. Make sure to uncheck the “Associated Alpha Channel” box.
Postprocessing – Photoshop
Following are steps we took finalizing the LRGB image in Photoshop. We used several Nik Collection plugins and tools, as well as some Photoshop tools. The changes we made were mostly subtle, with the most noticeable being drawing out the detail and signal toward the outer edges of the Siamese Twins.
Here is the image before and after Photoshop processing:
After these adjustments, we made a final crop for a tighter presentation of the Siamese Twins, and then fractally upsized the image. Here is the final cropped image (full image details can be found here):
Deep Sky Stacker Tutorial
I have been using DeepSkyStacker to get the most out of my astrophotography images since I began shooting through a telescope in 2011. This useful and easy-to-use freeware tool simplifies the pre-processing steps of creating a beautiful deep sky image.
The concept of stacking in astrophotography is simple: by combining multiple images, the signal-to-noise ratio of the result improves.
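The improvement is easy to demonstrate with simulated subs: averaging N frames with independent noise reduces the noise by a factor of sqrt(N), so the SNR grows by the same factor. A small numpy demonstration:

```python
import numpy as np

rng = np.random.default_rng(5)
signal = 50.0
n_frames = 64
# Each "sub" is the same signal plus independent read/sky noise (sigma = 10)
subs = signal + rng.normal(0.0, 10.0, size=(n_frames, 1000))

single_snr = signal / subs[0].std()
stacked = subs.mean(axis=0)
stacked_snr = signal / stacked.std()
improvement = stacked_snr / single_snr   # expect roughly sqrt(64) = 8
```

Sixty-four stacked frames give roughly an eightfold SNR improvement over a single sub, which is why multi-night integrations pay off.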
With so much time and effort going into the acquisition stages of astrophotography, it would be a shame not to achieve the best possible results when stacking your images. In this post, I will explain the DeepSkyStacker settings I use to stack and register all of my astrophotography images.
If you haven’t already done so, download DeepSkyStacker for free. The version I currently use to stack and register my astrophotography images is DeepSkyStacker 4.2.3.
I have used DeepSkyStacker to align, calibrate and integrate every deep-sky astrophotography image I have ever taken. It is well worth your time to learn how to use this free software successfully, as you will enjoy it for years to come.
Over the past 8 years, I’ve stacked images created using a DSLR camera, dedicated astronomy camera, and CCD camera. Whether you are stacking .RAW image files from a Canon DSLR, or .FIT files from a CCD camera (or dedicated CMOS), the right settings can be the difference between a good image and a great one.
Integration is the key to a great astrophotography image. This is the reason why amateur astrophotographers spend multiple nights collecting pictures of a single deep sky target. Calibration is another vitally important component of the process, as it removes unwanted elements from your image that would otherwise spoil the picture.
For an in-depth, step-by-step guide to DeepSkyStacker and Adobe Photoshop, please consider downloading my premium image processing guide.
Page 35 of my premium image processing guide.
Main Features
For many amateur astrophotographers, DeepSkyStacker (DSS) is an integral part of their image processing workflow. For myself, I find that DeepSkyStacker does an exceptional job of registering astrophotography images taken using a variety of methods. This includes everything from untracked DSLR and camera lens shots to deep sky astrophotography through a telescope.
DSS can register images of everything from a wide-angle Milky Way panorama to a deep sky emission nebula. Most of my experience with this software has been on a Windows 10 PC, stacking Canon RAW files from a DSLR. To run Deep Sky Stacker on a Mac computer, a workaround such as using a virtual machine is necessary.
Let’s take a look at the main features of this software:
- Registration of picture sets
- Creation and use of offsets, flats and dark frames
- Native use of RAW files from most DSLRs
- Multiple Stacking methods including average, median, kappa-sigma clipping and more
- Preview of all pictures including RAW and FIT file types
- Simple and intuitive user interface
DSS offers some advanced features I have not yet put into practice myself, such as comet stacking. The steps outlined on this page are most useful for beginners using a DSLR camera to capture their images. The official website offers some great resources for understanding how the process works.
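Kappa-sigma clipping, one of the stacking methods listed above, is simple enough to sketch directly: iteratively reject samples more than kappa standard deviations from the per-pixel mean, then average what survives. This mirrors the general technique with synthetic data, not DSS's exact implementation.

```python
import numpy as np

def kappa_sigma_clip(stack, kappa=2.5, iterations=3):
    """Iterative kappa-sigma clipped mean along the frame axis:
    repeatedly reject samples further than kappa*sigma from the mean."""
    data = np.ma.masked_invalid(np.asarray(stack, dtype=float))
    for _ in range(iterations):
        mean = data.mean(axis=0)
        sigma = data.std(axis=0)
        data = np.ma.masked_where(np.abs(data - mean) > kappa * sigma, data)
    return data.mean(axis=0).filled(np.nan)

rng = np.random.default_rng(6)
stack = rng.normal(100.0, 3.0, size=(20, 16, 16))
stack[4, 8, 8] = 5000.0   # an airplane/satellite sample in one frame
result = kappa_sigma_clip(stack)
```

The transient bright sample is thrown out, while every other pixel averages down to its true value, which is how a trail disappears from the final stack.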
If you want to review the statistics of your images and stack them as they are captured, you can try using DeepSkyStacker Live.
It’s important to remember that DeepSkyStacker was meant to integrate and calibrate your data into a useful intermediate file. It does not include the robust image processing tools of an application like Adobe Photoshop.
All of the images that run through the pre-processing stages in DSS are then brought into Adobe Photoshop for final image processing. The image below shows a stacked image before and after processing in Photoshop.
See the difference post-processing makes?
When you have successfully created your intermediate file in DeepSkyStacker, you can process it much further. Read my Photoshop image processing tutorial for a basic walkthrough of the process. Or, download my premium processing guide for an in-depth look at all of the techniques I use to process astrophotography images.
Tutorial (Deep Sky Images)
There are several applications available to register, calibrate, and stack astrophotography images including Astro Pixel Processor, and PixInsight. However, DeepSkyStacker is completely free and continues to receive new updates from the developer (version 4.2.2 was published in August 2019).
The software may seem confusing at first, but the good news is that the default settings generally work best.
I regularly capture images on the same deep-sky object over multiple nights to increase the signal-to-noise ratio. I shoot through heavy light pollution in my backyard, which means I need to capture up to 4x or more the amount of exposure time someone living under dark skies would (see this article for a better understanding of this calculation).
I have experimented with many different combinations of options for stacking DSLR raw files, and have found that most of the default settings work best. DSS includes a handy “recommended settings” tab, that will highlight helpful settings to use based on your image data.
File Preparation Before Stacking
If you follow my astrophotography tutorials, you will have captured light frames, dark frames, flat frames and offset/bias frames during each of your imaging sessions. These support files (calibration frames) will go a long way towards improving your final image. I recommend capturing new calibration files for each night of imaging unless you are certain that your master files match your light frames.
Only stack your best images
Before opening the files in Deep Sky Stacker, I pre-qualify the images I want to stack. I use a RAW image preview application called Adobe Bridge to review and organize my images. Any photos with football-shaped stars from hiccups in autoguiding are tossed in the recycling bin. The same goes for frames with airplanes, satellites or passing clouds.
You can also use the scoring feature built into DeepSkyStacker for a calculated interpretation of your image data.
Using the Score Feature
After you register your pictures, DeepSkyStacker will provide a score for each of your light frames. The values of the score will vary widely depending on the imaging equipment used; there is no benchmark number to achieve.
This is handy when stacking your final picture, as you may want to only include the light frames with the highest score in your final stack. Instead of clicking Check all as you did when registering the files, click Check above a threshold.
If you have prescreened your images already, you will likely stack most of the images you registered anyway!
Remember, the scores will only appear after you register the picture files. Once you have selected a minimum score value, DeepSkyStacker will only stack your best light frames into the final image. I recommend choosing a minimum score value that will use at least 70-80% of your light frames, as you want to use as much integration time as possible for the best signal-to-noise ratio.
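Choosing a score cutoff that keeps 70-80% of frames amounts to picking a percentile of the score distribution. A small sketch with hypothetical scores:

```python
import numpy as np

rng = np.random.default_rng(7)
scores = rng.normal(1500.0, 200.0, size=40)   # hypothetical DSS scores

# Choose the threshold so that ~75% of the registered frames are kept
threshold = np.percentile(scores, 25.0)
kept = scores[scores >= threshold]
fraction_kept = kept.size / scores.size
```

Entering that threshold as the minimum score keeps the top three-quarters of the frames, preserving most of the integration time while dropping the worst subs.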
Stacking FIT files (CCD or Dedicated CMOS)
If you are transitioning from a DSLR camera to a dedicated astronomy camera, one of the first hurdles to overcome is the new file type the camera produces. It’s called “FITS”. Stacking FIT files in DeepSkyStacker presented a bigger learning curve than I anticipated.
I’ve had most of my success using trial and error. For example, I was able to produce an image with the correct color balance using a dedicated astronomy camera with an RGGB Bayer pattern. I discovered this during my Markarian’s Chain imaging session, by using a specific color adjustment setting.
You can adjust the RAW/FITS Digital Development Process Settings to make sure that you have the correct Bayer Pattern Filter for your specific camera selected. For most color dedicated astronomy cameras (including the ZWO ASI294MC Pro camera I use), the correct setting is Generic RGGB.
These files can be hard to preview, because they need to be debayered first. For this file type, I inspect and remove poor-quality frames within DeepSkyStacker itself. This method can be a bit tedious, but it is a necessary step to ensure your final image only includes the best data.
Keep Your Image Sets Organized
Organize your images into four folders: Lights, Darks, Flats and Offset/Bias.
In the Main Group:
Open Picture Files
Select all of your light frames from your first night of imaging. Since you have already reviewed and approved all of the images in this folder, this is simply a matter of selecting every RAW file in your light frame folder.
Dark Files
Select the dark frames you captured from the same imaging session. These need to be the same exposure length, ISO and temperature as your light frames, and can be easily captured with the lens cap on your camera. I recommend using a minimum of 15 dark files.
I believe that dark calibration frames are a must for DSLR astrophotography. In my experience, they reduce a significant amount of noise in the final image through dark frame subtraction.
Flat frames
Flat frames require a little more effort than dark frames but can be collected in a very short amount of time. Stretch a white t-shirt over the objective of your telescope, and smooth out all of the folds.
Point your telescope towards the blue dawn sky (or an evenly-lit artificial light source), and capture a number of shots with your DSLR set to AV mode. 15 flat files can make a significant improvement to your final image. They remove artifacts such as dust and correct vignetting and gradients in your image.
Offset/bias
Offset/bias files are quick and simple to capture with your DSLR camera. Just take about 15 exposures with the lens cap on your DSLR. These exposures need to be taken at the fastest possible shutter speed, using the same ISO as your light frames. (On the Canon 450D, that's a 1/4000-second exposure.)
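Together, the flat and bias frames let the stacker divide out vignetting: the master flat (with the bias level removed and normalized to a mean of one) acts as a per-pixel sensitivity map. Here is a toy NumPy sketch of the standard calibration arithmetic; the numbers are invented for illustration, and DSS performs all of this internally:

```python
import numpy as np

# Toy 2x2 "sensor": the lens is dimmer toward one corner (vignetting),
# and every exposure carries a constant bias level. All numbers are
# invented for illustration.
vignette = np.array([[1.0, 0.8],
                     [0.8, 0.6]])
bias_level = 10.0

light = 200.0 * vignette + bias_level              # vignetted light frame
flats = [500.0 * vignette + bias_level for _ in range(15)]

master_bias = bias_level                           # average of the bias frames
master_flat = np.median(flats, axis=0) - master_bias
master_flat /= master_flat.mean()                  # normalize to unity mean

calibrated = (light - master_bias) / master_flat   # flat-field correction
print(np.allclose(calibrated, calibrated[0, 0]))   # → True: vignetting removed
```

After division, every pixel of the toy frame lands on the same value, which is exactly the "evenly illuminated" result that flat-field correction aims for.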
How to Combine Images from Multiple Nights
Use the tabs to group your image sets
Once you’ve got your picture files (lights) and all of your support files loaded into the main group, it’s time to load up your files from night 2. Click on the small Group 1 tab at the bottom left of the screen, and repeat the process for opening files from imaging night 2.
Remember, you can stack different variations of exposures together in Deep Sky Stacker. This means a range of ISO sensitivity and exposure length.
Some imaging sessions may include all three support files to complement the light frames, and some may not. This is fine. After all of the image files have been loaded into their respective categories, it is time to register and stack the frames into a single file. Finally, make sure to click "Check all" so that all of the frames you have loaded are selected.
Before we click Register and Stack images, let’s take a look at the current default settings.
The register and stacking settings are accessible by clicking "Settings…" under the Options tab.
The default settings for registering is set to a 10% star detection threshold. In my experience, the default value of 10% has worked very well for stacking images captured using my 12MP Canon EOS Rebel DSLR. If you decrease the star detection threshold, DSS will detect fainter stars. The number of stars in a given light frame is displayed in the lower half of the screen.
With a light frame selected, look for the #Stars category.
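To see why lowering the threshold picks up fainter stars, here is a deliberately simplified sketch. DSS's real star detection is far more sophisticated than a single brightness cut, so treat this only as an illustration of the threshold idea, with made-up pixel values:

```python
import numpy as np

# Toy frame: one bright "star" and one faint one.
frame = np.zeros((10, 10))
frame[2, 2] = 1.0      # bright star
frame[7, 7] = 0.05     # faint star

def count_stars(img, threshold):
    # A "star" here is simply any pixel brighter than
    # threshold * max(img). DSS's actual detection is much smarter.
    return int(np.count_nonzero(img > threshold * img.max()))

print(count_stars(frame, 0.10))   # 1 — faint star missed at 10%
print(count_stars(frame, 0.02))   # 2 — a lower threshold finds both
```

More detected stars generally means a more robust registration, but set the threshold too low and noise starts being mistaken for stars.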
The following checkboxes should be checked before hitting "OK" and letting DSS begin its process.
- Register already registered pictures
- Automatic detection of hot pixels
- Stack after registering
The DeepSkyStacker website states that the automatic detection of hot pixels only works with the Super-pixel, Bayer Drizzle, bilinear, and AHD interpolation modes. However, I leave this box checked regardless, and hot pixels and stacking errors have never been an issue.
Stacking Parameters
Unless you are experiencing errors in the stacking process, leave all of the values in the stacking parameter dialogue box unchanged. Yes, this sounds like a conveniently simple option, but default values are usually set for a reason.
If you want, go ahead and click on the different modes in the "Result" tab. The program will show you a preview of the final composition created using the Mosaic and Intersection modes. I prefer to use Adobe Photoshop for the final framing and cropping of the image.
As for the stacking parameters of the light and dark frames, Kappa-Sigma clipping and Median work well in the Light, Dark, Flat and Bias/Offset categories. I do not use any additional features such as the detection and cleaning of hot pixels in the Cosmetic tab.
One setting I do change, however, is the output location folder of the Autosave.tif file. I prefer that these images populate in a specific folder of my choice rather than mixed in with a folder of light frames.
Depending on the quality of and amount of light frames available, I usually select the best 80-90% of pictures and stack them.
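For the curious, kappa-sigma clipping works per pixel: values that deviate from the stack's mean by more than kappa standard deviations are rejected, and the survivors are averaged. This is what makes it good at removing satellite trails and cosmic-ray hits. A simplified NumPy sketch (not DSS's exact implementation, and the frame values are synthetic):

```python
import numpy as np

def kappa_sigma_stack(frames, kappa=2.0, iterations=3):
    """Per-pixel kappa-sigma clipped mean: values beyond
    kappa * sigma of each pixel's stack are excluded, then the
    remaining values are averaged. A sketch of the general
    technique, not DSS's exact algorithm."""
    stack = np.asarray(frames, dtype=float)
    mask = np.ones_like(stack, dtype=bool)
    for _ in range(iterations):
        kept = np.where(mask, stack, np.nan)
        mu = np.nanmean(kept, axis=0)          # per-pixel mean of survivors
        sigma = np.nanstd(kept, axis=0)        # per-pixel spread
        mask = np.abs(stack - mu) <= kappa * sigma
    return np.nanmean(np.where(mask, stack, np.nan), axis=0)

# Five frames agree on ~100; one frame carries a bright satellite trail.
frames = [np.full((2, 2), 100.0) for _ in range(5)] + [np.full((2, 2), 5000.0)]
result = kappa_sigma_stack(frames)             # the 5000s are rejected
```

A plain mean of these six frames would land near 917 and leave the trail visible; the clipped mean recovers 100, which is why this mode is the usual choice once you have enough light frames.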
Ready to Stack?
You’ve got all of your lights, darks, flats and offset/bias frames loaded. The default settings are selected, and the ever-comforting green bar is displayed (confirming your use of all support files). But wait: is there a way to confirm that all of the files are as they should be?
The Stacking Steps Window
Before you run DSS, be sure to check for any warnings in the dialogue window. In the case above, there was a single flat frame with a mismatched ISO speed. These warnings are useful for catching little mistakes in your file organization that can potentially make a big impact on your image.
At this point, you can remove or add any frames based on the information that DSS has provided.
If all looks well, and there are no more warning messages in the Stacking Steps window, you can proceed to run the register and stacking process. I enjoy the information preview about the estimated total exposure time.
Deep Sky Stacker Tutorial (Video)
In the video tutorial below, I walk through some of the basic settings used in DeepSkyStacker. I then bring the image into Adobe Photoshop for further image processing.
When DSS has completed its process of registering and stacking all of the image frames together, a preview of the constructed Autosave.tif file is displayed onscreen. Based on the design of this software, you might think that the next logical step is to make adjustments in the RGB/K Levels, Luminance, and Saturation area. If you plan on processing your image in Adobe Photoshop, I recommend leaving these settings as they are.
Balancing levels, curve adjustments, and boosting saturation are all staples of an Astrophotography processing workflow in Adobe Photoshop. Photoshop offers many more options and a higher level of control than Deep Sky Stacker for such edits.
What about the Recommended Settings option?
DeepSkyStacker has a “Recommended Settings” option that offers suggestions based on the image files submitted. Some of the recommendations include changing the stacking mode used such as “Use Median Combination Method”.
I have tested both the recommended settings and the default settings and found the default to produce better results.
If you are determined to see the subtle differences in the final stacked image, you can go through the entire process using the default Deep Sky Stacker settings vs. the recommended settings. I found that the recommended settings had varying results, with fuzzier, more washed-out stars than the original stack.
I prefer to try both stacking methods and compare the results on a per-image basis. You may find that the stacking modes suggested by DSS improve your image.
Below: The Andromeda Galaxy stacked in DeepSkyStacker. Final processing in Adobe Photoshop.
To view the techniques I use in Adobe Photoshop to finish the image, watch my image processing tutorial video featuring the Soul Nebula. There is a link in the description to download the RAW data and process the image yourself.
Stacking wide-angle Camera Lens Images
Although I mostly use DSS for deep sky images, it is also very useful to stack wide-angle astrophotos through a camera lens as well. The same signal-to-noise benefits can be achieved by stacking multiple images together.
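The signal-to-noise benefit is easy to demonstrate: averaging N frames reduces random noise by roughly the square root of N. A quick NumPy simulation with synthetic data, purely for illustration:

```python
import numpy as np

# 64 simulated "frames" of the same scene: a constant signal of 100
# plus Gaussian read/shot noise with a standard deviation of 10.
rng = np.random.default_rng(42)
signal = 100.0
frames = signal + rng.normal(0.0, 10.0, size=(64, 10000))

single_noise = frames[0].std()            # noise in one frame (~10)
stacked_noise = frames.mean(axis=0).std() # noise after averaging 64 frames

print(single_noise / stacked_noise)       # close to sqrt(64) = 8
```

Stacking 64 frames cuts the random noise by a factor of about eight, which applies just as well to a wide-angle nightscape as it does to a galaxy.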
You may experience a number of issues when attempting to register and stack images that include terrestrial elements such as trees or other landscape features. If you are using a star tracker to compensate for the apparent rotation of the night sky, the ground will blur. If you are using a stationary tripod (non-tracking), it’s the sky that moves between each frame.
The photo below was captured using an iOptron SkyGuider Pro to track the night sky, with a DSLR camera and wide-angle lens mounted on top. As you can see, the rooftop of my neighbor’s house is blurred, because DSS registered the images with the moving sky.
An excessive amount of movement between the night sky and the foreground (over time) can make stacking images like this difficult. One solution is to photograph the night sky and foreground separately and combine the images together in Photoshop later.
Recommended Settings and Tips
For my wide-angle shots, I use a modified Canon DSLR with a light pollution filter. The settings I recommend below will work well for a modified DSLR shooting through moderate to heavy light pollution. Those shooting with a stock DSLR may have to experiment with these settings to produce a pleasing result.
White Balance Settings
If you are using a modded DSLR, make sure to leave the white balance checkboxes unchecked. Using auto white balance or the "camera white balance" setting with a modified camera will produce odd results. I would also suggest checking the "set the black point to 0" option.
This should provide you with a final image with a background sky that is much easier to correct in post-processing. Gradient Xterminator does a great job of correcting gradients in wide-angle shots of the night sky.
Recommended Settings
As for DeepSkyStacker’s recommended settings, the graphic below shows you which ones I like to use on a wide-angle starry sky photo. One of the important settings is to use Per Channel background calibration – as the RGB background calibration does a poor job of producing correct colors in my experience.
Scenarios and recommended settings:
- Scenario: You are processing long exposure and possibly good SNR images
- Recommendation: Use AHD debayering
- Scenario: If you are using a modded DSLR
- Recommendation: Reset all white-balance settings
- Scenario: If you are processing narrowband images (especially Ha)
- Recommendation: Use super-pixel mode
- Scenario: You are stacking (x amount) of light frames
- Recommendation: Use Sigma-Clipping combination method
- Scenario: You are creating a master dark from 31 dark frame(s)
- Recommendation: Use Sigma-Clipping combination method
- Scenario: If the resulting images look too gray
- Recommendation: Use Per Channel Median combination method
- Scenario: If the color balance in the resulting images is hard to fix in post-processing
- Recommendation: Use RGB background calibration
What to do if DeepSkyStacker Crashes
I have experienced this issue many times while attempting to register and stack both RAW image files from a DSLR and .FIT files from a CCD camera. It can be a frustrating experience, especially if you have left your computer to let DSS do its thing. You come back 20 minutes later expecting to view your stacked image and instead find an error message saying “This program has stopped working”, or any number of other error messages.
I have found that the following steps can decrease your chances of producing an error using DSS:
1. Don’t run other applications while stacking
I am a multitasker. Usually, I have 5-6 windows open at a time, from my Google Chrome browser to Adobe Photoshop. These all use RAM on your machine, which DSS needs to process your image. Give DeepSkyStacker your full RAM capacity to use during its process.
2. Pay attention to the options you’ve selected
Certain options, such as “superpixel mode”, are very demanding on your system and have been known to cause crashes. Take a screenshot of the settings used before stacking, so you can compare results and try another stacking parameter next time.
3. Try stacking fewer images
The more frames you stack, the more time and resources DSS will pull from your machine. Try being more selective with the images you plan to register, and only include the absolute best images.
4. Try an external hard drive
You can tell DSS to utilize the space available on an external hard drive to render your images. The temporary files can require up to 100GB of space or more depending on the number of images in the set. This destination is selected under Settings > Stacking Settings > Temporary Files Folder.
I hope you were able to learn something new about DeepSkyStacker following my tutorial. It’s one of the few applications that hasn’t changed very much since I began using it in 2011, and it continues to deliver consistent results.
Alternatives to DeepSkyStacker
Everyone prefers to process and stack their astrophotography images in their own way. DeepSkyStacker isn’t the only software available to calibrate and stack your image frames. Here is a list of alternatives to DeepSkyStacker: