Image Processing

Introduction to Image Processing

Issues to learn:
– principle of operation of the CMOS matrix;

A CMOS (Complementary Metal Oxide Semiconductor) matrix sensor works on the same principle as the CCD. Light falls on a silicon crystal and generates electrical charges in the pixels. Each pixel has its own converter and its own "address". This technology was created for producing integrated circuits; its advantage is a relatively simple and cheap method of production.
– principle of operation of the CCD matrix;
A CCD (Charge Coupled Device) matrix is a detector that captures and registers the light striking it in the form of photons. When a photon falls on the CCD, it transmits its energy by freeing electrons through the photoelectric effect. The longer the exposure, the more electrons accumulate. An analog-to-digital converter then turns the received signal into a form comprehensible to the computer.

– technological differences of CCD and CMOS matrices;



Readout:
CCD – the content of a single pixel cannot be read on its own; the entire matrix must be read out, which makes the operation slower.
CMOS – any number of pixels can be read in any order, so readout is much faster.

Conversion:
CCD – the matrix has a single charge-to-voltage converter and a single A/D converter.
CMOS – each pixel has its own charge-to-voltage converter and its own readout circuit; in advanced CMOS sensors, each pixel also has its own A/D converter.

Power consumption:
CCD – consumes more power during operation, so it heats up faster and exhausts the battery sooner.
CMOS – consumes less electrical power.

Fill factor:
CCD – higher fill factor.
CMOS – lower fill factor.

Noise:
CCD – lower noise.
CMOS – higher noise.

Dark current:
CCD – acceptable dark current.
CMOS – larger dark current.

Amplification:
CCD – a single amplifier.
CMOS – since each pixel has its own amplifier, it is difficult to keep the amplification uniform.

Data transmission:
CCD – bigger disruptions in data transmission, due to the large distance the signal travels.
CMOS – small disturbances in data transmission, due to the short distance.

Cost:
CCD – higher manufacturing costs.
CMOS – low production cost.

Photosensitivity:
CCD – greater photosensitivity.
CMOS – lower photosensitivity.

– basic models of color space;

Common color spaces based on the RGB model include sRGB, Adobe RGB, ProPhoto RGB, scRGB, and CIE RGB.

1. RGB: Red, Green and Blue. The three primary colors, displayed together at maximum intensity, give white. Black is obtained by displaying none of the primary colors.

2. CMYK – a model whose name stands for Cyan, Magenta, Yellow and Key (black). This model is used in printing. Compared to RGB, it has a smaller gamut: less vivid colors with lower saturation.

3. HSB (Hue, Saturation, Brightness) – the most intuitive approach, used in many graphics programs, including Adobe products. It is an alternative to the RGB model: it is much easier to lighten a color simply by increasing its brightness.

4. Pantone – a color standardization system that produces printed swatches. In it, colors are identified by numbers and are created by mixing 18 base pigments.

5. HEX – colors in hexadecimal notation. Put simply, it is a way of describing a color with a specific number. It is used for creating websites, but it is so convenient that graphic designers also use it often.
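The relationship between the RGB and HSB models can be illustrated with a short sketch. The example below uses Python's standard-library `colorsys` module (its HSV model is the same as HSB); values are normalized to [0, 1]:

```python
import colorsys

# Pure red at full intensity in RGB (channels normalized to [0, 1]).
r, g, b = 1.0, 0.0, 0.0
h, s, v = colorsys.rgb_to_hsv(r, g, b)
print(h, s, v)  # red -> hue 0.0, full saturation, full brightness

# Lightening or darkening a color in HSB is a single change to the
# brightness channel, whereas in RGB all three channels must be rescaled.
darker = colorsys.hsv_to_rgb(h, s, v * 0.5)
print(darker)   # half-brightness red
```

This shows why HSB is considered more intuitive: brightness is an explicit axis rather than a property spread across three channels.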

– data structures and data types used in the Matlab environment to represent raster digital images;
A raster image is described by a grid (matrix, array, map) of pixels; a photograph from a digital camera is an image in raster form. In Matlab, such an image is stored as a numeric array: a grayscale image as an M-by-N matrix (typically of class uint8, uint16, or double), and a truecolor RGB image as an M-by-N-by-3 array with one plane per color channel. Each row-and-column element corresponds to a rectangular patch of the scene, with implied topological connectivity to adjacent patches.
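The same array layout can be sketched outside Matlab; here is a minimal NumPy equivalent showing the grayscale matrix and the M-by-N-by-3 RGB form:

```python
import numpy as np

# A tiny 2x3 grayscale raster: one uint8 value per pixel
# (0 = black, 255 = white), laid out as an M-by-N matrix.
gray = np.array([[0, 128, 255],
                 [64, 192, 32]], dtype=np.uint8)
print(gray.shape)  # (2, 3): rows x columns

# An RGB raster adds a third dimension: M-by-N-by-3, one plane per channel.
rgb = np.zeros((2, 3, 3), dtype=np.uint8)
rgb[..., 0] = gray  # place the grayscale values in the red channel
print(rgb.shape)    # (2, 3, 3)
```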

– methods of digital image normalization;
Image normalization consists of mapping the gray levels or color intensities of the pixels onto a chosen range. It is a process that changes the range of pixel intensity values.

The purpose of dynamic range expansion in the various applications is usually to bring the image, or other type of signal, into a range that is more familiar or normal to the senses. Often, the motivation is to achieve consistency in dynamic range for a set of data, signals, or images to avoid mental distraction or fatigue. For example, a newspaper will strive to make all of the images in an issue share a similar range of grayscale.
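A minimal sketch of such dynamic range expansion (min-max normalization) in NumPy; the function name `normalize` and the target range are illustrative choices, not a standard API:

```python
import numpy as np

def normalize(img, new_min=0, new_max=255):
    """Linearly rescale pixel intensities to the range [new_min, new_max]."""
    img = img.astype(np.float64)
    old_min, old_max = img.min(), img.max()
    if old_max == old_min:               # flat image: nothing to stretch
        return np.full(img.shape, new_min, dtype=np.uint8)
    scaled = (img - old_min) / (old_max - old_min)
    return (scaled * (new_max - new_min) + new_min).astype(np.uint8)

# A low-contrast image occupying only [100, 150] is stretched to [0, 255].
img = np.array([[100, 110],
                [140, 150]], dtype=np.uint8)
print(normalize(img))
```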

– the concept of the histogram of image data;

A histogram of image data is a function assigning to each gray level or color intensity the number of pixels with that brightness level. The histogram can be represented as a graph in which the horizontal axis holds the consecutive possible gray levels or color intensities, and the vertical axis the number of pixels at each level.
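For an 8-bit image this function is simply a count over the 256 possible levels, which can be sketched in one line with NumPy:

```python
import numpy as np

# A tiny 8-bit grayscale image.
img = np.array([[0, 0, 255],
                [0, 128, 255]], dtype=np.uint8)

# hist[k] = number of pixels with gray level k, for k = 0..255.
hist = np.bincount(img.ravel(), minlength=256)
print(hist[0], hist[128], hist[255])  # 3 1 2
```

Note that the counts over all levels sum to the total number of pixels.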

– equalization of the image histogram;

Histogram equalization, or flattening of the histogram, involves converting the gray levels or color intensities of the pixels so that the number of pixels in each intensity interval is approximately the same. The objective of this technique is to give a linear trend to the cumulative probability function associated with the image. Histogram equalization improves the global contrast of the processed image.
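A sketch of the classic CDF-based equalization for an 8-bit grayscale image (one common formulation; real implementations differ in how they handle rounding and the lowest occupied level):

```python
import numpy as np

def equalize(img):
    """Histogram equalization of an 8-bit grayscale image via its CDF."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()                  # cumulative pixel counts
    cdf_min = cdf[cdf > 0][0]            # first nonzero CDF value
    # Scale the CDF so the occupied levels span the full [0, 255] range.
    # (Assumes the image is not constant, i.e. img.size > cdf_min.)
    lut = np.round((cdf - cdf_min) / (img.size - cdf_min) * 255).astype(np.uint8)
    return lut[img]

img = np.array([[50, 50],
                [100, 200]], dtype=np.uint8)
print(equalize(img))
```

The lookup table sends the lowest occupied level to 0 and the highest to 255, linearizing the cumulative distribution in between.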

– the concept of digital image binarization (types of binarization and methods for determining the parameters of binarization)

One of the basic methods of point processing of images is binarization. The binarization process converts a source image with many gray levels or color intensities into a binary image whose pixels take only two values. Usually, the values correspond to black (0, zero brightness) and white (1, maximum brightness).

Binarization is a transformation that often immediately precedes image analysis. Most measurements and some complex transformations can be performed only on binary images.


Types of binarization:

o with a lower threshold;

o with an upper threshold;

o double-threshold binarization;

o multi-threshold binarization.
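The first three variants can be sketched as simple comparisons against the chosen threshold(s); the conventions below (lower threshold keeps pixels at or above t, upper threshold keeps pixels below t, double threshold keeps the band [t1, t2]) are one common choice, and the threshold values are arbitrary examples:

```python
import numpy as np

img = np.array([[10, 60, 120],
                [180, 230, 90]], dtype=np.uint8)

# Lower-threshold binarization: pixels at or above t become white (1).
lower = (img >= 100).astype(np.uint8)

# Upper-threshold binarization: pixels below t become white (1).
upper = (img < 100).astype(np.uint8)

# Double-threshold binarization: white only inside the band [t1, t2].
band = ((img >= 60) & (img <= 180)).astype(np.uint8)

print(lower)
print(band)
```

Multi-threshold binarization generalizes this by splitting the intensity range into several bands, each mapped to its own output value.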