Fundamentals of Computer Graphics: Concepts and Applications

Major Applications of Computer Graphics

  • Entertainment and Media: Used in movies, animation, video games, and visual effects to create realistic or fantastical scenes.
  • Computer-Aided Design (CAD): Employed by architects, engineers, and designers to create precise technical drawings and 3D models of buildings, vehicles, and machinery.
  • Medical Imaging: Helps visualize complex structures of the human body through techniques like MRI, CT scans, and 3D reconstructions.
  • Scientific Visualization: Used to graphically represent scientific data, such as fluid dynamics, molecular structures, and weather patterns.
  • User Interfaces (UI): Integral to designing graphical user interfaces for software applications, operating systems, and mobile apps, enhancing usability and interaction.

Color Models Used in Computer Graphics

In computer graphics, color models are mathematical ways to represent colors in a standard format. Each model defines a color space using a set of components (like red, green, and blue). Here are the most popular color models used:

1. RGB (Red, Green, Blue)

  • Type: Additive color model
  • Use: Displays (monitors, TVs, digital cameras)
  • How it works: Combines red, green, and blue light in various intensities to produce different colors. When all are combined at full intensity, white is produced; at zero intensity, black.
  • Range: Usually 0–255 per channel in 8-bit systems.
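As a minimal sketch of the additive model (the helper name `mix_additive` is illustrative, not from any standard library), 8-bit RGB channels can be mixed by summing and clamping each channel:

```python
def mix_additive(*colors):
    """Additively mix RGB colors, clamping each channel to 255."""
    return tuple(min(255, sum(c[i] for c in colors)) for i in range(3))

RED   = (255, 0, 0)
GREEN = (0, 255, 0)
BLUE  = (0, 0, 255)

# All three primaries at full intensity combine to white:
print(mix_additive(RED, GREEN, BLUE))  # (255, 255, 255)
# Red plus green light gives yellow:
print(mix_additive(RED, GREEN))        # (255, 255, 0)
```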

2. CMY/CMYK (Cyan, Magenta, Yellow, Black)

  • Type: Subtractive color model
  • Use: Printing (inkjet, laser printers)
  • How it works: Based on absorbing (subtracting) light. Colors are created by combining cyan, magenta, and yellow. Black (K) is added to enhance depth and detail.
  • Range: Typically 0–100% per component.
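The relationship between the two models can be shown with the standard RGB-to-CMYK conversion formula (the function name is illustrative; real print workflows use ICC color profiles rather than this naive formula):

```python
def rgb_to_cmyk(r, g, b):
    """Convert 8-bit RGB to CMYK percentages (0-100) via the naive formula."""
    if (r, g, b) == (0, 0, 0):
        return (0.0, 0.0, 0.0, 100.0)   # pure black: 100% K only
    rp, gp, bp = r / 255, g / 255, b / 255
    k = 1 - max(rp, gp, bp)             # black replaces the common gray part
    c = (1 - rp - k) / (1 - k)
    m = (1 - gp - k) / (1 - k)
    y = (1 - bp - k) / (1 - k)
    return tuple(round(v * 100, 1) for v in (c, m, y, k))

# Pure red absorbs green and blue, so it needs magenta and yellow ink:
print(rgb_to_cmyk(255, 0, 0))  # (0.0, 100.0, 100.0, 0.0)
```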

3. HSV (Hue, Saturation, Value) / HSL (Hue, Saturation, Lightness)

  • Type: Perceptual (intuitive) model
  • Use: Image editing, graphic design, color pickers
  • How it works: Helps users think about colors in more human-friendly terms than RGB.
  • Components:
    • Hue: The type of color (0–360° on a color wheel).
    • Saturation: Intensity or purity of the color.
    • Value (HSV): Brightness of the color.
    • Lightness (HSL): Average of the maximum and minimum RGB values.
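Python's standard-library `colorsys` module implements these conversions (it works in 0–1 floats, so the sketch below rescales to the conventional degree/percent ranges; the wrapper name is illustrative):

```python
import colorsys

def rgb_to_hsv_degrees(r, g, b):
    """Convert 8-bit RGB to (hue in degrees, saturation %, value %)."""
    h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
    return (round(h * 360), round(s * 100), round(v * 100))

print(rgb_to_hsv_degrees(255, 0, 0))    # (0, 100, 100): pure red
print(rgb_to_hsv_degrees(0, 255, 255))  # (180, 100, 100): cyan, opposite on the wheel
```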

Virtual Reality (VR) Systems and Components

Virtual Reality (VR) is a computer-generated environment that simulates a realistic experience, allowing users to interact with it in a way that feels immersive. VR differs from the real world primarily because it is entirely synthetic and controllable by software, offering experiences that are physically impossible or impractical in reality.

Key Components of a VR System

  1. Head-Mounted Display (HMD):
    • Worn on the head like goggles.
    • Displays stereoscopic 3D visuals for depth perception.
    • Includes motion sensors (gyroscopes, accelerometers) to track head movement.
  2. Input Devices:
    • Enable interaction with the virtual environment.
    • Examples include motion controllers (e.g., Oculus Touch, HTC Vive), haptic gloves (simulate touch), keyboards, gamepads, or gesture tracking systems.
  3. Tracking System:
    • Tracks user movements in real-time (head, hand, body).
    • Types: Outside-in tracking (uses external sensors) and Inside-out tracking (sensors built into the HMD).
  4. VR Software/Applications:
    • The digital content or simulations (games, training modules, virtual tours).
    • Responsible for rendering graphics, physics, and interactions.
  5. Audio System:
    • Spatial or 3D audio enhances immersion.
    • Often integrated into the headset or provided via headphones.

Polygon Representation and Curve Types

Using Polygon Tables to Represent Geometry

In computer graphics, polygon tables are data structures used to efficiently represent and manage polygons, especially for rendering and geometric processing.

A polygon table typically stores:

  1. Geometric Information: Vertex coordinates.
  2. Topological Information: How vertices are connected (edges).
  3. Attribute Information: Color, texture, normal vectors, etc.

Structure of a Polygon Table

1. Vertex Table

Stores the coordinates of all vertices.

Vertex_ID   X   Y   Z
V1          0   0   0
V2          1   0   0
V3          1   1   0

2. Edge Table (Optional)

Defines edges by connecting vertex pairs.

Edge_ID   Start_Vertex   End_Vertex
E1        V1             V2
E2        V2             V3

3. Polygon Table

Lists which vertices (or edges) make up each polygon. May include attributes like color or surface normal.

Polygon_ID   Vertex_List   Color   Normal
P1           V1, V2, V3    Red     (0, 0, 1)
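The three tables map naturally onto simple dictionaries. This sketch mirrors the tables above (an extra edge E3, not shown in the table, is added here to close the triangle) and shows how a renderer resolves vertex IDs to coordinates:

```python
# Vertex table: ID -> (x, y, z) coordinates
vertex_table = {
    "V1": (0.0, 0.0, 0.0),
    "V2": (1.0, 0.0, 0.0),
    "V3": (1.0, 1.0, 0.0),
}

# Edge table: ID -> (start vertex, end vertex); E3 closes the triangle
edge_table = {
    "E1": ("V1", "V2"),
    "E2": ("V2", "V3"),
    "E3": ("V3", "V1"),
}

# Polygon table: ID -> vertex list plus attributes
polygon_table = {
    "P1": {"vertices": ["V1", "V2", "V3"],
           "color": "Red",
           "normal": (0.0, 0.0, 1.0)},
}

# Resolve a polygon's vertex IDs to coordinates for rendering:
coords = [vertex_table[v] for v in polygon_table["P1"]["vertices"]]
print(coords)  # [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (1.0, 1.0, 0.0)]
```

Because polygons share vertices through IDs instead of duplicating coordinates, moving one vertex automatically updates every polygon that references it.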

Representation of Three Common Curves

Curve Type   Defined By                     Passes Through    Use Case
Bezier       Control points (2–n)           First & last      Graphics, animation
B-Spline     Control points + knot vector   Generally no      CAD, modeling
Hermite      Endpoints + tangents           Yes (endpoints)   Path interpolation
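The Bezier row can be illustrated with de Casteljau's algorithm, which evaluates the curve at a parameter t by repeated linear interpolation of the control points. This sketch assumes 2-D control points and demonstrates the "passes through first & last" property:

```python
def de_casteljau(points, t):
    """Evaluate a Bezier curve at parameter t (0 <= t <= 1)."""
    pts = [tuple(p) for p in points]
    while len(pts) > 1:
        # Linearly interpolate each consecutive pair of points
        pts = [tuple((1 - t) * a + t * b for a, b in zip(p, q))
               for p, q in zip(pts, pts[1:])]
    return pts[0]

ctrl = [(0, 0), (1, 2), (3, 2), (4, 0)]          # four control points (cubic)
print(de_casteljau(ctrl, 0.0))  # (0.0, 0.0): curve passes through the first point
print(de_casteljau(ctrl, 1.0))  # (4.0, 0.0): ...and through the last point
print(de_casteljau(ctrl, 0.5))  # (2.0, 1.5): interior point pulled toward the middle controls
```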

Polygon Clipping Algorithms

Polygon clipping in computer graphics refers to the process of removing parts of a polygon that lie outside a defined clipping region (usually a rectangle or window). This is essential for rendering only the visible parts of objects within a viewport or screen.

Common Polygon Clipping Algorithms

  1. Sutherland–Hodgman Polygon Clipping Algorithm
  2. Weiler–Atherton Polygon Clipping Algorithm

1. Sutherland–Hodgman Polygon Clipping

This algorithm processes the polygon against each boundary of the clipping window in turn (left, right, bottom, top). At each stage, the vertex list produced by the previous stage is clipped against one boundary, generating a new vertex list; only the parts inside that boundary are retained. The method is simple and fast, but it is guaranteed to produce a single correct output polygon only for convex subject polygons; clipping a concave polygon can leave extraneous connecting edges.

Sutherland–Hodgman Steps:

For each edge of the clipping rectangle:

  • Go through each edge of the polygon.
  • For each pair of consecutive vertices:
    • If both endpoints are inside → add the end vertex.
    • If entering (start outside, end inside) → add the intersection point, then the end vertex.
    • If exiting (start inside, end outside) → add only the intersection point.
    • If both endpoints are outside → add nothing.

Example:

Given a square polygon: A(1, 1), B(5, 1), C(5, 5), D(1, 5)

Clipping window: x ∈ [2, 4], y ∈ [2, 4]

The process involves sequential clipping:

  1. Clip against left boundary (x = 2).
  2. Clip against right boundary (x = 4).
  3. Clip against bottom boundary (y = 2).
  4. Clip against top boundary (y = 4).
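The four-boundary pipeline above can be sketched as follows (function names are illustrative; a production implementation would also handle degenerate output):

```python
def clip_polygon(polygon, x_min, y_min, x_max, y_max):
    """Sutherland-Hodgman clipping against an axis-aligned window."""

    def clip_edge(pts, inside, intersect):
        out = []
        for i, end in enumerate(pts):
            start = pts[i - 1]                  # previous vertex (wraps around)
            if inside(end):
                if not inside(start):           # entering: intersection first
                    out.append(intersect(start, end))
                out.append(end)                 # end vertex is kept
            elif inside(start):                 # exiting: intersection only
                out.append(intersect(start, end))
        return out

    def x_cross(p, q, x):                       # intersection with vertical line x
        t = (x - p[0]) / (q[0] - p[0])
        return (x, p[1] + t * (q[1] - p[1]))

    def y_cross(p, q, y):                       # intersection with horizontal line y
        t = (y - p[1]) / (q[1] - p[1])
        return (p[0] + t * (q[0] - p[0]), y)

    pts = polygon
    pts = clip_edge(pts, lambda p: p[0] >= x_min, lambda p, q: x_cross(p, q, x_min))
    pts = clip_edge(pts, lambda p: p[0] <= x_max, lambda p, q: x_cross(p, q, x_max))
    pts = clip_edge(pts, lambda p: p[1] >= y_min, lambda p, q: y_cross(p, q, y_min))
    pts = clip_edge(pts, lambda p: p[1] <= y_max, lambda p, q: y_cross(p, q, y_max))
    return pts

# The example above: square A(1,1) B(5,1) C(5,5) D(1,5), window [2,4] x [2,4]
square = [(1, 1), (5, 1), (5, 5), (1, 5)]
print(clip_polygon(square, 2, 2, 4, 4))  # the 2x2 square with corners (2,2) and (4,4)
```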

2. Weiler–Atherton Polygon Clipping

This algorithm is more general than Sutherland–Hodgman: it handles concave polygons and polygons with holes, and it can produce multiple output polygons. It traces the subject polygon's boundary, inserting intersection points where it crosses the clip region. When the traced path exits the clip region, the algorithm switches to following the clip boundary until it reaches the next intersection where the subject polygon re-enters, then resumes tracing the polygon.

Applications of Polygon Clipping

  • Rendering visible portions of scenes.
  • GUI window management.
  • Geospatial mapping (e.g., trimming map regions).
  • Games and 2D/3D modeling tools.

Basic Steps for Computer Animation Workflow

Computer animation is the process of creating motion and shape change using computers. Whether it’s for 2D cartoons, 3D movies, or video games, animation follows a structured workflow:

  1. Storyboarding

    Purpose: To plan the storyline visually.

    • Creating a sequence of sketches or illustrations.
    • Representing key scenes and actions.
    • Planning camera angles and scene transitions.
  2. Modeling

    Purpose: To create the characters, objects, and environment.

    • 2D modeling for flat images or 3D modeling for depth-based objects (using software like Blender or Maya).
    • Applying textures and colors to models.
  3. Rigging

    Purpose: To define how objects move.

    • Adding a skeleton (bones/joints) to models.
    • Setting constraints and controls for animating.
  4. Animation

    Purpose: To bring motion to models.

    • Keyframing: Defining start and end positions/times.
    • Tweening (In-betweening): Automatically generating intermediate frames.
    • Using motion capture for realistic human movement.
    • Applying physics-based or procedural animations.
  5. Lighting

    Purpose: To simulate realistic or artistic light and shadows.

    • Setting up virtual light sources.
    • Adjusting light intensity, color, and angles.
  6. Camera Setup

    Purpose: To define what the audience sees.

    • Placing and animating virtual cameras.
    • Executing zooming, panning, and tracking shots.
  7. Rendering

    Purpose: To generate final images or frames.

    • Converting 3D scenes into 2D images.
    • Processing lighting, textures, and effects.
    • Outputting frames for video or real-time use.
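The keyframing and tweening described in stage 4 can be sketched as simple linear interpolation between two keyframe positions (a hypothetical helper; real animation tools layer easing curves on top of this):

```python
def tween(key_a, key_b, frames):
    """Linearly interpolate (tween) between two 2-D keyframe positions."""
    (x0, y0), (x1, y1) = key_a, key_b
    # t sweeps from 0.0 (first keyframe) to 1.0 (second keyframe)
    return [(x0 + (x1 - x0) * t, y0 + (y1 - y0) * t)
            for t in (i / (frames - 1) for i in range(frames))]

# Keyframes: object moves from (0, 0) to (100, 50) over 5 frames.
for frame in tween((0, 0), (100, 50), 5):
    print(frame)   # (0.0, 0.0) ... (100.0, 50.0) in equal steps
```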