Film and Electronic Image Formats: A Comprehensive Guide

Film Formats

Film Width

Different film formats are typically named based on their width, measured in millimeters. Common film formats include:

  • Super 8
  • 16mm
  • 35mm
  • 65mm

Note that variations exist within these formats (e.g., Super 35, Super 16), and some can be supplied with varying numbers of perforations. Older standards such as 55mm, 9.5mm, and Single 8 are now obsolete.

Super 8

Available in negative and reversal film stocks, Super 8 comes in cartridges and has long been the standard for home movies. It’s still used in education, television, and industrial applications.

16mm

16mm is used in a variety of applications. The Super 16 variant in particular is common in television, feature films, and short films, and can be blown up to 35mm for screening copies.

35mm

35mm (and Super 35) has been and remains the most popular format for feature films, advertising, and professional television productions.

65mm

Used as a camera film format and for 70mm widescreen positive film prints, such as IMAX and OMNIMAX.

Aspect Ratio

Aspect ratio refers to the ratio between the width and height of an image (width divided by height). In film, it’s usually expressed with the height normalized to 1 (e.g., 1.78:1). TV typically uses a whole-number ratio of width to height (e.g., 16:9).

Aspect ratios are independent of film width, as the same film format can accommodate various image formats. The industry standard for 35mm films (called the Academy ratio) was 1.37:1 from the advent of sound until the introduction of CinemaScope in 1953. Today, common ratios include 1.85:1 (flat) and 2.40:1 (scope).

The standard ratio for TV and video was 1.33:1 (or 4:3). A widely used aspect ratio in Europe is 1.66:1, originally the aspect ratio of Super 16, which is close to the newer widescreen TV standard of 1.78:1 (16:9). Two 70mm formats are also used: 2.2:1 and 1.43:1 (IMAX).
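The two conventions above are easy to convert between. A minimal sketch in Python (function names are illustrative) expresses a whole-number TV ratio in the film convention:

```python
def aspect_ratio(width: float, height: float) -> float:
    """Return the aspect ratio: width divided by height."""
    return width / height

def as_decimal_ratio(w: int, h: int) -> str:
    """Express a whole-number ratio such as 16:9 in the film
    convention, with the height normalized to 1 (e.g. '1.78:1')."""
    return f"{w / h:.2f}:1"

# TV's 4:3 corresponds to 1.33:1, and 16:9 to 1.78:1.
print(as_decimal_ratio(4, 3))   # 1.33:1
print(as_decimal_ratio(16, 9))  # 1.78:1
```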

The Electronic Image (Standard Analogue)

Image reproduction involves analysis and synthesis. Analysis breaks the picture (or image sequence) down into its significant components. Synthesis rebuilds a semblance of the image, incomplete but faithful enough to satisfy the human eye.

Cinematic images are a series of still images captured successively at regular intervals, representing the light reflecting off the scene. Electronic images, however, function differently.

TV Cameras and Image Decomposition

TV cameras use scanning systems to decompose the image. The picture is divided into horizontal lines, each consisting of image points with three basic parameters: lightness (brightness), hue (tint), and saturation (color purity).

Scanning starts with the top line and proceeds to the line below, moving from left to right and top to bottom. This process captures brightness and color information for each image point, used to reconstruct the image.
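The three parameters of each image point correspond to what a hue/lightness/saturation decomposition of an RGB value yields. A minimal sketch of the scanning order, using Python's standard colorsys module on a hypothetical 2x2 image:

```python
import colorsys

# Hypothetical 2x2 image: each image point is an (R, G, B) triple in [0, 1].
image = [
    [(1.0, 0.0, 0.0), (0.5, 0.5, 0.5)],
    [(0.0, 0.0, 1.0), (1.0, 1.0, 1.0)],
]

# Scan line by line, left to right and top to bottom, extracting the
# three basic parameters of each image point.
for line in image:
    for r, g, b in line:
        hue, lightness, saturation = colorsys.rgb_to_hls(r, g, b)
        print(f"lightness={lightness:.2f} hue={hue:.2f} saturation={saturation:.2f}")
```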

Frame Rate and Flicker Effect

Film projected at 24 frames per second creates the illusion of movement. To avoid flicker, each frame is flashed twice by the projector’s shutter, effectively doubling the rate to 48 images per second.

TV employs a similar approach, scanning and displaying 25 (PAL) or 30 (NTSC) frames per second. At those rates, however, flicker would be more noticeable, because each image is built up by illuminating successive points rather than being flashed all at once.
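The rates involved are simple arithmetic; a short sketch (constant names are illustrative) makes the doubling explicit:

```python
# Film: 24 frames per second, each flashed twice by the shutter.
FILM_FPS = 24
FLASHES_PER_FRAME = 2
film_flash_rate = FILM_FPS * FLASHES_PER_FRAME
print(film_flash_rate)  # 48 flashes per second

# TV: frame rates of the two standard analogue systems.
PAL_FPS = 25
NTSC_FPS = 30
```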

Interlaced Scanning

To address the flicker issue, interlaced scanning is used. Instead of analyzing and displaying all lines of each image sequentially, it’s done in two phases:

  • In 1/50 of a second (PAL) or 1/60 of a second (NTSC), odd-numbered lines are scanned and reproduced.
  • Immediately after, the even-numbered lines are scanned in the next 1/50 (PAL) or 1/60 (NTSC) of a second.

This system is called interlaced scanning because lines are sampled alternately. The full image (frame) consists of two semi-images (fields). Because each odd line is close to the next even line, the eye doesn’t perceive the slight shift between fields, avoiding flicker without increasing bandwidth.
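The two-phase split described above can be sketched as a simple slicing operation; the function name and toy frame below are illustrative:

```python
def split_into_fields(frame):
    """Split a full frame (a list of scan lines) into its two fields.

    Lines are numbered from 1, so the first field carries the
    odd-numbered lines and the second the even-numbered lines.
    """
    odd_field = frame[0::2]    # lines 1, 3, 5, ...
    even_field = frame[1::2]   # lines 2, 4, 6, ...
    return odd_field, even_field

frame = ["line 1", "line 2", "line 3", "line 4", "line 5", "line 6"]
odd, even = split_into_fields(frame)
print(odd)   # ['line 1', 'line 3', 'line 5']
print(even)  # ['line 2', 'line 4', 'line 6']
```

Interleaving the two fields back together reproduces the original frame, which is why the eye sees a single full-resolution image.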

The Video Signal

Initially, the TV signal conveyed only luminance values (black and white). To maintain compatibility with black and white TVs, the color video signal was designed with two main parts:

  • Luminance: Contains the luminance signal of the images and vertical and horizontal synchronization pulses.
  • Chrominance: Contains the color information and is added to the luminance signal without interference. It’s zero for black and white images.

Luminance and chrominance are processed separately and then combined into a composite video signal for analog TV broadcasts.
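One way to see why chrominance is zero for black-and-white images is to compute both components from RGB, using the luma weights standardized for standard-definition TV (ITU-R BT.601); the function names here are illustrative:

```python
def luminance(r: float, g: float, b: float) -> float:
    """Luminance (Y) with the weights used in SD television (ITU-R BT.601)."""
    return 0.299 * r + 0.587 * g + 0.114 * b

def chrominance(r: float, g: float, b: float):
    """Colour-difference components (B-Y, R-Y); both vanish for any
    grey value, i.e. whenever r == g == b."""
    y = luminance(r, g, b)
    return b - y, r - y

# A mid-grey point carries no colour information, so a black-and-white
# receiver can simply ignore the chrominance part of the signal.
print(chrominance(0.5, 0.5, 0.5))
```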

Luminance Signal

The luminance signal of each field contains information for each point (image signal) and reference signals (sync pulses) for positioning.

Image Signal

The image signal varies between a maximum and minimum level. Black corresponds to zero signal level, while white represents the maximum. Other tones fall between these levels.
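The mapping of tones onto signal levels can be sketched as a linear scale. The 0.7 (volt) maximum used below is an illustrative assumption, not taken from the text; actual levels depend on the broadcast standard:

```python
def image_signal(tone: float, max_level: float = 0.7) -> float:
    """Map a tone in [0, 1] (0 = black, 1 = white) onto the
    image-signal range: black sits at zero level, white at the
    maximum, and intermediate tones fall linearly in between.

    The 0.7 default maximum is illustrative only.
    """
    if not 0.0 <= tone <= 1.0:
        raise ValueError("tone must lie between 0 (black) and 1 (white)")
    return tone * max_level

print(image_signal(0.0))  # 0.0 -> black
print(image_signal(1.0))  # 0.7 -> white
```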