Discovery Astrophotography with ZWO ASTRO

Everything you need to know about Astrophotography Pixel Binning (the fundamentals)

What is pixel binning? What can it do for you? How do you use it? We see astrophotographers from all over the world asking these questions every day. In this article we explain how you can start binning your astro photos.



1. What is it?

Binning is when you merge adjacent pixels into one larger pixel, most commonly in a 2×2 formation, where 4 pixels are combined in software to make one large pixel, as shown in the figure below.


Bin1 vs Bin2


Something you will notice when you bin 2×2 is that your camera resolution is quartered, as you effectively have a quarter as many pixels. At the same time, the signal-to-noise ratio doubles.
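Both effects can be seen in a minimal NumPy sketch (the frame below is a simulated flat field with made-up numbers, not data from any real camera):

```python
import numpy as np

def bin2x2_mean(img):
    """Average-bin a mono image 2x2; height and width must be even."""
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

rng = np.random.default_rng(0)
# Hypothetical flat field: 100 e- signal with 10 e- of Gaussian read noise
frame = 100 + rng.normal(0, 10, size=(1000, 1000))

binned = bin2x2_mean(frame)
print(binned.shape)               # (500, 500): resolution is quartered
print(frame.std(), binned.std())  # noise std roughly halves, so SNR doubles
```

Averaging 4 independent pixels halves the noise standard deviation while the mean signal is unchanged, which is exactly the doubling of SNR described above.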

Binning 2×2 or even up to 4×4 can be a very effective way of framing and focusing your target. This is because the larger pixels gather light much faster, revealing faint nebulae and galaxies in shorter exposures.


2. Mono BIN


Mono binning is an option for colour cameras. If selected, the camera ignores the Bayer matrix information and merges neighbouring pixel values into a grayscale image. The result is close to that of a monochrome sensor, but it is important to remember you only get a quarter of the resolution.
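The SDK's exact merging rule may differ from this (the driver may weight or select nearby pixel values rather than average them), but as an illustrative sketch, mono BIN can be modeled as collapsing each 2×2 Bayer cell into one gray pixel:

```python
import numpy as np

def mono_bin(raw):
    """Mono BIN sketch: average each 2x2 Bayer cell (R, G, G, B in an RGGB
    pattern) into one gray pixel, discarding the color information."""
    h, w = raw.shape
    return raw.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

raw = np.array([[10, 20],
                [20, 30]], dtype=np.float64)  # a single RGGB cell
print(mono_bin(raw))  # [[20.]] -- one gray pixel, quarter resolution
```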



3. CCD BIN VS CMOS BIN

CCD and CMOS sensors have different readout methods, and pixel binning happens at different stages. CCD pixel binning takes place in the analog domain, where the charge of adjacent pixels can be combined before readout.

Suppose we have 4 pixels, each with a read noise of 1x. Without binning, the 4 pixels must be read out separately; because noise from independent readouts adds in quadrature, summing them gives a total read noise of √4 = 2x.

According to the pixel merging rule of CCD, after BIN2 the combined charge of the 4 pixels needs to be read out only once, so the total read noise is just 1x, while the signal is 4 times that of a single pixel. Without considering other noise sources, the read-noise-limited signal-to-noise ratio therefore becomes 4 times that of a single pixel.

The pixel merging of CMOS occurs in the digital domain, after the analog signal has already been converted into a digital signal. At this point, pixel merging can only be completed in software.

As above, suppose there are 4 pixels, each with a read noise of 1x. A CMOS sensor must still read every pixel individually, so BIN2 combines 4 independent readouts.

According to the pixel merging rule of CMOS, the read noise of those 4 readouts adds in quadrature: the total read noise after BIN2 is √4 × 1x = 2x, while the signal is 4 times that of a single pixel. Without considering other noise sources, the signal-to-noise ratio is therefore 2x the original.
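The bookkeeping for both cases can be checked in a few lines (the signal and read-noise figures are arbitrary placeholders; only the ratios matter):

```python
import math

S, sigma = 100.0, 1.0        # signal per pixel (e-) and read noise (e-), hypothetical
snr_single = S / sigma       # read-noise-limited SNR of one unbinned pixel

# CCD BIN2: charge from 4 pixels is combined on-chip and read out once
snr_ccd = (4 * S) / sigma

# CMOS (software) BIN2: 4 separate readouts, read noise adds in quadrature
snr_cmos = (4 * S) / (math.sqrt(4) * sigma)

print(snr_ccd / snr_single, snr_cmos / snr_single)  # 4.0 2.0
```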

At first glance it seems that CCD BIN is more effective than CMOS BIN, but let's bring in actual data and simply compare the read noise of the ASI6200 (CMOS camera) and the KAI11002 (CCD camera):


Since the original pixel size of the ASI6200 is 3.76 µm, the pixel size after BIN2 is 7.52 µm. The pixel size of the popular KAI11002 is 9 µm. The two pixel sizes are relatively close, so it is fair to compare the two cameras.

It can be seen that even after BIN2, when the read noise of a single ASI6200 pixel doubles (note that 1 pixel after BIN2 is equivalent to 4 original pixels), its maximum of 7 electrons is still lower than the 10 electrons of the KAI11002.


Similarly, comparing the ASI1600 and the KAF8300: although the BIN2 pixels of the ASI1600 are larger than the BIN1 pixels of the KAF8300, its read noise is still lower.

Since the read noise of current CMOS sensors is much lower than that of CCDs, even after BIN2 the read noise per pixel is still very small. We have also been ignoring the fact that the signal itself is noisy (this is called shot noise). After BIN2 the signal quadruples while its shot noise only doubles, so the shot-noise-limited signal-to-noise ratio doubles. The SNR benefit of binning is therefore very obvious on both CMOS and CCD.
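The shot-noise argument follows from Poisson statistics, where the noise of a signal of N photoelectrons is √N. With an arbitrary example signal:

```python
import math

S = 400.0                      # photoelectrons per pixel, hypothetical
snr_single = S / math.sqrt(S)  # shot-noise-limited SNR of one pixel

# After BIN2 the combined signal is 4S, and its shot noise is sqrt(4S) = 2*sqrt(S)
snr_bin2 = (4 * S) / math.sqrt(4 * S)

print(snr_bin2 / snr_single)   # 2.0 -- holds for CCD and CMOS alike
```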


4. Hardware VS Software BIN

After comparing CMOS BIN and CCD BIN, let's look at another pair of concepts: hardware BIN and software BIN.

Hardware BIN, as the name implies, merges pixels in hardware, usually on the chip itself. Due to their different chip structures, CCDs can do this but CMOS sensors cannot; CMOS "hardware BIN" is more like pixel skipping. Skipping pixels increases the frame rate, but the signal-to-noise benefit is limited. We generally use it in scenes that require high frame rates, such as shooting solar system objects. For deep-sky shooting, we recommend binning in software on CMOS cameras.
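Modeling CMOS "hardware BIN" as simple pixel skipping (an assumption for illustration; real sensor readout modes vary), the difference from software BIN is easy to see on a simulated noisy frame:

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical flat frame: 100 e- signal, 10 e- Gaussian noise
frame = 100 + rng.normal(0, 10, size=(1000, 1000))

# "Pixel skip": keep one pixel out of every 2x2 block; fast, but noise unchanged
skipped = frame[::2, ::2]

# Software BIN: average all 4 pixels; noise std is halved
binned = frame.reshape(500, 2, 500, 2).mean(axis=(1, 3))

print(skipped.shape, binned.shape)                       # both (500, 500)
print(round(skipped.std(), 1), round(binned.std(), 1))   # ~10 vs ~5
```

Both paths give the same output resolution, but only true averaging improves the signal-to-noise ratio, which is why software BIN is preferred for deep-sky work.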

Here we also define hardware BIN as pixel merging on the CMOS sensor, and software BIN as pixel merging in the SDK. Please note that neither is the same as CCD on-chip BIN.


In our ASCOM driver and in the ASIImg and ASILive software, the default BIN is software BIN; only ASICAP has a hardware BIN option. You can find this option under ASICAP->Control->More (provided that the camera supports hardware BIN).

5. What is the difference between BIN with RAW8 and RAW16?

Software BIN can be calculated in different ways: you can average adjacent pixels, or add up the pixel values. In the RAW8 format we use the additive method, so the image becomes brighter; in RAW16 we use the averaging method, so the image does not become brighter, but its signal-to-noise ratio still improves.
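The two methods differ only in the final step, as a small sketch with a made-up flat frame shows (note how the additive result saturates once it exceeds the 8-bit maximum of 255):

```python
import numpy as np

raw = np.full((4, 4), 200, dtype=np.uint16)      # hypothetical flat frame
cells = raw.reshape(2, 2, 2, 2).astype(np.uint32)

# Additive BIN (RAW8 style): pixel values are summed, the image gets brighter
summed = cells.sum(axis=(1, 3))
summed8 = np.clip(summed, 0, 255).astype(np.uint8)  # clips at the 8-bit ceiling

# Averaged BIN (RAW16 style): brightness unchanged, noise reduced
averaged = cells.mean(axis=(1, 3)).astype(np.uint16)

print(summed[0, 0], summed8[0, 0], averaged[0, 0])  # 800 255 200
```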

Add up BIN


As BIN increases, the brightness of the image also increases.



But at the same time the resolution declines, and the details of the letters "PCO" in the picture are lost.

Averaged BIN

Taking the ASI294MC as an example: select RAW16 in ASICAP, the image is saved in 16 bit, and after BIN the average of the original pixel values is taken.


BIN1 (4144×2822) VS BIN2 (2072×1411)

In this case the number of pixels in BIN2 is again reduced to 1/4, and the resolution and details are reduced accordingly. The only difference from RAW8 is that the brightness does not increase; but if you look closely, you will find that the noise is reduced.

6. When to use BIN?

Generally, we use BIN to improve efficiency when focusing and framing. If the system is oversampling when shooting, BIN can also solve that problem.


So, should we use BIN when shooting deep-sky objects with a CMOS camera?

Because CMOS cameras always use software BIN for deep-sky shooting, post-processing can accomplish the same thing.

If you are sure the image is oversampled, you can use BIN to reach a suitable sampling; otherwise we recommend keeping the original image during the shooting process, and deciding whether to BIN in post-processing.
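To judge oversampling, a common rule of thumb is to compare the image scale (206.265 × pixel size in µm ÷ focal length in mm, in arcsec/pixel) against your typical seeing; values much finer than about 1 arcsec/pixel often indicate oversampling. The focal length below is an assumed example, not a recommendation:

```python
def image_scale(pixel_um, focal_mm):
    """Image scale in arcsec/pixel: 206.265 * pixel size (um) / focal length (mm)."""
    return 206.265 * pixel_um / focal_mm

# Hypothetical setup: ASI6200 (3.76 um pixels) on a 1000 mm focal length scope
print(round(image_scale(3.76, 1000), 2))        # 0.78 arcsec/px at BIN1
print(round(image_scale(3.76 * 2, 1000), 2))    # 1.55 arcsec/px at BIN2
```

Here BIN2 brings the sampling into a range better matched to typical seeing, which is the kind of case where binning in post-processing makes sense.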

