Operating System Fundamentals: Types, Functions, and Architecture

Operating System Resources

The operating system manages resources such as:

  • Processor
  • Memory
  • Input/Output
  • Communication Devices


Key functions of an operating system include:

  • User interface management
  • Simultaneous resource access for users
  • Data sharing between users
  • Input/output control (disks, printers)
  • Resource accounting for processes and users
  • Communication management
  • Secure and fast data storage organization
  • Process and task planning and monitoring


System Operation Modes

Systems can operate in batch, time-sharing, and real-time modes.


Workloads are classified by processor and peripheral usage:

  • Process-bound jobs: Consume most time processing information with minimal I/O.
  • I/O-bound jobs: Spend most time on I/O, with the processor idle for long periods.

Multiprogramming addresses processor idleness during I/O by switching to another process.

Each process appears to have its own virtual processor, but the system switches between them constantly.

Users perceive processes as running in parallel, though only one is active at any moment.


This system introduces challenges:

  • Processor access requires rules for all jobs.
  • Memory management is needed for shared resources.
  • Resource competition among processes must be managed.


  • Batch Processing

    Batch systems do not require user intervention during job execution. Long jobs are queued (FIFO). Multiple processes run “simultaneously” based on system workload.

    The OS manages the queue, executes jobs, and stores results for user access.

  • Time Sharing

    Interactive multiprogramming uses terminals (keyboard-screen) for user interaction. Users provide data as needed and receive immediate responses.

    Work is organized in sessions, from user login to logout.

    A shell process manages the dialogue between the user and the OS.

    Users perceive they have all machine resources, despite other active sessions.


Time-sharing systems have the following characteristics:

  • Conversational operation
  • Multi-user support
  • Short response times (seconds)
  • Sequential polling of user requests
  • Strong records management
  • Buffering and spooling
  • Virtual memory management


Current OS often combine batch and time-sharing techniques, allowing users to choose the system for their processes.



  • Real Time

    Real-time systems are used for control systems with sensors requiring very short response times.

    Key features include:

    • Heavy restrictions on response time (milliseconds)
    • Constant information updates
    • System remains mostly inactive for quick event response
    • Effective interrupt management
    • Easy priority management
    • Real memory management

    Examples include industrial process control, booking systems, stock management, and satellite launches.



Types of Operating Systems

Operating System Variations

How a system is operated depends on the number of users, processes, and processors, and on the required response time.


  • Number of Users

    • Single-user: One user has access to all system resources.
    • Multi-user: Multiple users share system resources simultaneously.


  • Number of Processes

    • Monoprogramming/Single-tasking: Only one process runs at a time.
    • Multiprogramming/Multitasking: Multiple processes run simultaneously, sharing processor time.



  • Number of Processors

    • Uniprocessor: A single processor handles all tasks.
    • Multiprocessor: Multiple processors work on the same or different jobs.


  • Response Time

    • Real-Time: Immediate response to process requests.
    • Time-Sharing: Process time depends on other running processes.
    • Batch Processes: Results are not needed immediately, executed during idle times.


3.1.5 .- OPERATING SYSTEM ARCHITECTURE AND COMPONENTS.


  • Monolithic Systems

    The OS is built as a set of procedures compiled and linked into a single file. Each procedure has a specific task and interface.

  • Hierarchical Layers

    The OS is divided into layers around a core. Each layer has a specific function, with core operations directly on hardware. Layers closer to the user have lower priority functions.

    A typical structure has four layers:

    • Core Level: Controls all computer operations, manages processes, and handles hardware communication.
    • Executive Level: Manages memory.
    • Supervisor Level: Manages communication between the system and user, and controls I/O.
    • User Level: Controls user processes without managing memory or I/O.

    This structure can be seen as concentric rings, with inner layers more privileged and protected.


  • Virtual Machine

    VM/370 was the first virtual machine OS, offering multiprogramming and an extended (virtual) interface. It separates these two functions.


  • Client-Server

    The most recent OS model, used on a wide range of machines. It moves most OS functions out of the kernel and into user processes, leaving only a minimal kernel. User processes (clients) request services from server processes, and the kernel manages the communication between them.

    This model supports memory management, process management, and inter-process communication. It can also be used in distributed systems.



3.1.6 .- SYSTEM AND INFORMATION SECURITY.


OS must protect against user errors and malicious users. Protective functions prevent problems between processes and between them and the OS.


  • Input/Output Protection

    External devices have specific routines (drivers) to control I/O. Drivers protect against incorrect access, passing control to the OS kernel when errors occur.


  • Memory Protection

    Each process has an allocated memory area (its address space) and cannot access other areas. Boundary registers indicate the memory limits of each process. The OS detects and reports an error if a process tries to access addresses outside its space.

  • Processor Protection

    Infinite loops or processor access without release can cause problems. Hardware includes a timer to interrupt processes and return control to the OS.



3.2 .- OPERATING SYSTEM FUNCTIONS: RESOURCE MANAGEMENT.


The OS kernel manages the processor, memory, I/O, and other resources to address job operations and requests.


Process and Processor Management

The process is a key concept in an OS, representing its unit of work.

The OS executes its own code and user-created processes concurrently.

The kernel manages process creation, deletion, planning, synchronization, communication, and deadlock management.



Definition and Concept of Process

A process is a running program with its associated environment (files, records, variables, instructions).

When the system executes a program, it:

  • Places it in the job queue (secondary memory).
  • Finds main memory for machine language orders, variables, and the process stack.
  • Loads instructions from secondary memory.
  • Allocates resources.
  • Creates a PCB in the ready queue.

A process is a program running with specific data and resources. The environment includes:

  • Registers (PC, SP, etc.)
  • Stack data
  • Variable data
  • Running program instructions


Process Control Block (PCB)

A process is represented internally by a Process Control Block (PCB), which includes its state, resources, registers, etc.

The PCB contains:

  • Process State: Priority, execution method, and internal register information.
  • Occupation Statistics: Time and resource usage for processor planning.
  • Internal Memory Occupation: Memory and data usage.
  • Resources in Use: I/O devices.
  • Files in Use
  • Privileges

The OS manages lists of PCBs to:

  • Locate information about each process.
  • Keep records for suspending and resuming execution.

This information is in main memory and accessed as needed.

A process change involves actions by the OS to switch from one process to another.



Process State

Process control blocks are stored in queues, each representing a specific process state. Queues exist for blocked and ready processes.

The OS manages process states transparently to the user.

Process states are divided into active and inactive.


  • Active States

    Compete for the processor or are able to do so:

    • Running: The process that currently has control of the processor.
    • Ready: Processes ready to execute but waiting their turn.
    • Blocked: Processes that cannot run due to unavailable resources.


  • Inactive States

    Cannot compete for the processor, but can do so again after certain operations. The PCB remains parked until the process is reactivated. These are processes whose work has been interrupted by some condition that prevented them from continuing (for example, no floppy disk in the drive, or a printer out of paper); they can be resumed from the point at which they stopped, without having to be re-run from the beginning. They are of two types:

    • Suspended Blocked: Suspended process pending an event.
    • Suspended Ready: A suspended process with no remaining cause for blocking.


State transition: Changes a process experiences throughout its existence.

Transitions include:

  • Start of execution: The process is created, its PCB is built, and it is placed in the ready queue.
  • Transition to the running state: The processor becomes free and the first process in the ready queue is dispatched.
  • Transition to the blocked state: A running process requests an I/O operation and moves to the blocked state.
  • Transition to the ready state: Occurs in the course of program execution, on I/O completion, or on an interrupt.
  • Transition to the suspended-blocked state: A blocked process is suspended.
  • Transition to the suspended-ready state: Occurs when a ready process is suspended or a suspended-blocked process is unblocked.
  • Destruction of a process: The process terminates and its PCB is removed and destroyed.
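
As an illustration, the transitions above can be sketched as a small state machine. The sketch below is hypothetical: the state names, event labels, and the `transition` function are assumptions for this example, not part of any particular OS.

```python
from enum import Enum, auto

class State(Enum):
    READY = auto()
    RUNNING = auto()
    BLOCKED = auto()
    SUSPENDED_BLOCKED = auto()
    SUSPENDED_READY = auto()
    TERMINATED = auto()

# Allowed transitions, following the list above.
TRANSITIONS = {
    (State.READY, "dispatch"): State.RUNNING,            # transition to the running state
    (State.RUNNING, "request_io"): State.BLOCKED,        # transition to the blocked state
    (State.RUNNING, "quantum_expired"): State.READY,     # transition to the ready state
    (State.BLOCKED, "io_complete"): State.READY,
    (State.BLOCKED, "suspend"): State.SUSPENDED_BLOCKED, # transition to suspended-blocked
    (State.SUSPENDED_BLOCKED, "io_complete"): State.SUSPENDED_READY,
    (State.SUSPENDED_READY, "resume"): State.READY,
    (State.RUNNING, "exit"): State.TERMINATED,           # destruction of the process
}

def transition(state, event):
    """Return the next state, or report an illegal transition."""
    try:
        return TRANSITIONS[(state, event)]
    except KeyError:
        raise ValueError(f"illegal transition: {state.name} on '{event}'")

if __name__ == "__main__":
    s = State.READY
    for event in ("dispatch", "request_io", "suspend", "io_complete", "resume"):
        s = transition(s, event)
        print(event, "->", s.name)
```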


Process Management

Process management involves policies and mechanisms for processor management. Planning optimizes processor use by setting the order in which processes run.

There are several levels of planning:

  • Long-term planning (job scheduler): Loads programs into memory for execution. Creates processes and places them in the ready queue.
  • Medium-term planning (swapping): Manages suspended processes. Temporarily removes processes from memory and returns them later.
  • Short-term planning (CPU scheduler/dispatcher): Decides which memory-resident processes use the processor. Manages multiprogramming and provides good service to interactive processes.

The dispatcher is activated when a change occurs in the system state and performs the following steps:

  • Decides whether to change the process.
  • Saves the current volatile environment.
  • Selects the process to run from the ready queue.
  • Uploads its volatile environment and transfers control.
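
A minimal sketch of these four dispatcher steps, assuming a hypothetical `PCB` structure and a ready queue held in memory; the register values are invented for illustration.

```python
from collections import deque
from dataclasses import dataclass, field

@dataclass
class PCB:
    pid: int
    registers: dict = field(default_factory=dict)   # the volatile environment (PC, SP, ...)

ready_queue = deque([PCB(1, {"PC": 0x100}), PCB(2, {"PC": 0x200}), PCB(3, {"PC": 0x300})])
running = None

def dispatch(cpu_registers):
    """Save the current volatile environment, select the next ready process, load its environment."""
    global running
    if running is not None:
        running.registers = dict(cpu_registers)   # save the volatile environment in the PCB
        ready_queue.append(running)               # the outgoing process goes back to the ready queue
    running = ready_queue.popleft()               # select the process to run
    return running.registers                      # "load" its volatile environment into the CPU

if __name__ == "__main__":
    cpu_registers = {}
    for _ in range(4):
        cpu_registers = dispatch(cpu_registers)
        print("running PID", running.pid, "with registers", cpu_registers)
```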


Planning Criteria

Process planning decides which process in the ready queue is assigned to the CPU. Key criteria for comparison include:

  • Processor Time: Time a process uses the processor (excluding I/O).
  • Waiting Time: Time a process spends active without being executed.
  • Runtime: Theoretical time for a process to execute alone.
  • Turnaround/Service Time: Total time from submission to completion of a process.
  • Efficiency: Processor utilization.
  • Productivity/Performance: Number of jobs/processes per unit time.
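
A small worked example of these criteria, assuming a hypothetical workload served in first-come, first-served order with all jobs arriving at time 0.

```python
# Hypothetical workload: (process name, CPU burst in ms), all arriving at time 0,
# served in first-come, first-served order.
bursts = [("P1", 24), ("P2", 3), ("P3", 3)]

clock = 0
for name, burst in bursts:
    waiting = clock                   # time spent ready but not executing
    turnaround = clock + burst        # total time from submission to completion
    print(f"{name}: waiting = {waiting} ms, turnaround = {turnaround} ms")
    clock += burst

print(f"throughput = {len(bursts) / clock:.3f} jobs/ms")
```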


Algorithm 5 .- Multilevel Queues with Feedback.

Multiple queues with feedback divide processes into several ready queues, each with its own policy. Lower priority is assigned to higher queue numbers. The processor selects the first process from the highest priority queue. A process that has consumed its allocated quantum is placed at the end of the next lower priority queue.
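
A minimal sketch of this policy, assuming three queues; the quantum values and the jobs are hypothetical.

```python
from collections import deque

QUANTUM = [2, 4, 8]                 # assumed quantum per priority level (0 = highest priority)
NUM_QUEUES = len(QUANTUM)

remaining = {"A": 5, "B": 3, "C": 9}               # hypothetical jobs: pid -> remaining CPU time
queues = [deque(remaining), deque(), deque()]      # every job starts in the highest-priority queue

while any(queues):
    level = next(i for i, q in enumerate(queues) if q)    # first process of the highest-priority non-empty queue
    pid = queues[level].popleft()
    run = min(QUANTUM[level], remaining[pid])
    remaining[pid] -= run
    print(f"run {pid} at level {level} for {run} time units")
    if remaining[pid] > 0:
        # The process consumed its whole quantum: demote it to the next lower-priority queue.
        queues[min(level + 1, NUM_QUEUES - 1)].append(pid)
```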


Memory Management

Main memory is an essential resource for processor and I/O access. Programs and data must be stored in main memory (address space).

Multiprogramming systems require good memory management to keep programs and data in memory simultaneously without mixing address spaces.


Memory Manager

The memory manager controls memory, allocating physical memory to processes. It manages used and unused memory, allocates space to new processes, and releases space used by completed processes. It also manages process exchange between memory and disk.


Address Allocation

Programs are defined as a series of instructions. The compiler translates the source program to machine language, numbering each instruction with a logical address. The first instruction has address 0 (zero relative). This sets the logical address space.

When a program runs, the OS loader puts it into memory at a physical address, converting relative addresses into absolute addresses. This defines the physical address space.

This separation between logical and physical address spaces allows for more efficient memory management.


Address Relocation

There are two ways to perform relocation:

  • Static Relocation: The loader modifies all logical addresses by adding the base address, updating the program code.
  • Dynamic Relocation: The program is loaded without changing its logical addresses. A hardware device adds the base address during execution to obtain the physical address.
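
A minimal sketch of dynamic relocation, assuming a hypothetical base register and limit; the limit check also illustrates the memory protection by boundary registers described earlier.

```python
def translate(logical_address, base, limit):
    """Dynamic relocation: add the base register, trapping if the address leaves the partition."""
    if logical_address >= limit:
        raise MemoryError(f"address {logical_address:#x} is outside the process's space (limit {limit:#x})")
    return base + logical_address

if __name__ == "__main__":
    BASE, LIMIT = 0x4000, 0x1000      # assumed partition: 4 KiB starting at physical address 0x4000
    print(hex(translate(0x0123, BASE, LIMIT)))    # logical 0x0123 -> physical 0x4123
    try:
        translate(0x2000, BASE, LIMIT)            # outside the partition: the hardware would trap
    except MemoryError as error:
        print("trap:", error)
```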


Monoprogramming

In early computers, programmers managed memory directly. Memory was occupied by a single program, with no OS or memory manager. Unused memory was wasted.


Division of Memory: The Resident Monitor

The OS divides memory into two zones: one for the system (the resident monitor) and the other for the user. Any memory in the user zone not occupied by the program remains unused.

Memory sharing requires a mechanism to protect the OS memory area. An address boundary limits the OS area. Hardware compares requested addresses to this boundary.

Swapping involves downloading a program from memory to disk (swap-out) and loading another program (swap-in). Overlapping execution of an exchange program with another splits the user area into two parts.


Multiprogramming

To divide the processor between processes, they must be in main memory. Memory is divided into partitions, each hosting a different process. The number of partitions indicates the degree of multiprogramming.

The memory manager controls the start and end addresses of memory partitions. It also manages occupied and free areas, and allocates/releases address spaces.

Memory allocation techniques include:

  • Fixed Partitions: Partitions of the same or different sizes. Programs are placed based on their size. This is easy to control but can lead to internal fragmentation.
  • Variable-Sized Partitions: Partitions are created with the exact size requested by programs. This avoids internal fragmentation but can lead to external fragmentation. Compaction is needed to rearrange memory areas, requiring temporary suspension of all work.
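
A minimal sketch of variable-sized partitions with first-fit allocation, using a hypothetical free list; it shows how external fragmentation appears when a freed hole is too small for a later request.

```python
# Free list of (start, size) holes in a hypothetical 1000-unit user area.
free_list = [(0, 1000)]

def allocate(size):
    """First fit: place the request in the first hole that is large enough."""
    for i, (start, hole) in enumerate(free_list):
        if hole >= size:
            if hole == size:
                free_list.pop(i)
            else:
                free_list[i] = (start + size, hole - size)   # shrink the hole
            return start
    raise MemoryError("no single hole is large enough (external fragmentation)")

def release(start, size):
    """Return a partition to the free list (coalescing of adjacent holes omitted for brevity)."""
    free_list.append((start, size))

if __name__ == "__main__":
    allocate(300)
    b = allocate(200)
    allocate(400)
    release(b, 200)                   # leaves a 200-unit hole in the middle of memory
    print("free holes:", free_list)   # 300 units free in total...
    try:
        allocate(250)                 # ...but no single hole can hold 250 contiguous units
    except MemoryError as error:
        print(error)
```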


A system routine handles the page-fault interrupt using the following steps:

  • Find the page containing the requested address in secondary storage.
  • Find a free frame in main memory.
  • If there is a free frame, load the page and update the tables affected by the change.
  • If not, use a replacement algorithm to select the page to replace.
  • Save the replaced page to secondary storage, updating the affected tables.
  • Load the requested page into the free frame and update the corresponding tables.
  • Resume execution of the program.
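
A minimal sketch of this page-fault routine, using hypothetical in-memory structures for the page table, the frames, and the backing store, and FIFO as the replacement algorithm (replacement algorithms are discussed below).

```python
from collections import OrderedDict

PAGE_SIZE = 4096
NUM_FRAMES = 3

page_table = {}                        # page number -> frame number (resident pages only)
frames = OrderedDict()                 # frame number -> page number, in load order (for FIFO)
free_frames = list(range(NUM_FRAMES))
backing_store = {}                     # page number -> saved contents (stands in for the disk)

def handle_page_fault(page):
    """Find a frame (evicting with FIFO if necessary), load the page, update the tables."""
    if free_frames:
        frame = free_frames.pop()
    else:
        frame, victim = frames.popitem(last=False)              # select the page to replace (FIFO)
        backing_store[victim] = f"contents of page {victim}"    # save the victim to secondary storage
        del page_table[victim]
    page_table[page] = frame                                    # load the page, update the tables
    frames[frame] = page
    print(f"page fault: loaded page {page} into frame {frame}")

def access(address):
    page = address // PAGE_SIZE
    if page not in page_table:
        handle_page_fault(page)
    return page_table[page] * PAGE_SIZE + address % PAGE_SIZE   # resume with the real address

if __name__ == "__main__":
    for addr in (0, 5000, 9000, 13000, 100):
        print(hex(access(addr)))
```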


Replacement Algorithm

  • Optimal Algorithm: Selects the page that will not be used for the longest time. This is difficult to predict but can be approximated from past memory references.
  • FIFO Replacement Algorithm: Replaces the page that has been in memory the longest. This is simple but can replace frequently used pages.
  • LRU Algorithm: Replaces the page that has been least recently used. This is a good approximation of the optimal solution.

Other algorithms use a reference bit or a modified bit to track page usage.
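
A minimal simulation comparing FIFO and LRU on a hypothetical reference string; the reference string and frame count are assumptions chosen only to show the difference in page-fault counts.

```python
from collections import OrderedDict

def count_faults(references, num_frames, lru):
    """Count page faults; lru=True refreshes a page's position on every hit, otherwise pure FIFO."""
    resident = OrderedDict()            # resident pages, ordered from oldest to newest
    faults = 0
    for page in references:
        if page in resident:
            if lru:
                resident.move_to_end(page)      # a hit refreshes recency only under LRU
            continue
        faults += 1
        if len(resident) == num_frames:
            resident.popitem(last=False)        # evict the oldest (FIFO) or least recently used (LRU)
        resident[page] = None
    return faults

if __name__ == "__main__":
    refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2]
    print("FIFO faults:", count_faults(refs, 3, lru=False))
    print("LRU faults: ", count_faults(refs, 3, lru=True))
```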


Page Replacement Criteria

When a page fault occurs, algorithms help select the page to replace. Selection can be local (within the process) or global (among all existing pages).


Peripheral Management

Peripheral devices vary in speed, contact mode, and data representation. Processes should not need to know the characteristics of each peripheral. The Input/Output manager masks device characteristics, allowing information transfer between peripherals and the processor or memory.

The objectives of the I/O manager are:

  • Provide an interface between devices and the rest of the system.
  • Handle errors during peripheral use.
  • Manage shared and dedicated devices.
  • Manage asynchronous computer-peripheral communication using buffers.


Hardware Devices

Devices are classified by function and how they handle information:

  • Function:
    • Storage Media: Magnetic tape, hard drives, CD-ROMs, etc.
    • User Interface Devices: Keyboard, monitor, mouse, etc.
    • Transmission Devices: Network cards, modems, etc.
  • Information Handling:
    • Block Devices: Treat information in fixed-size blocks with addresses.
    • Character Devices: Receive streams of characters without a predetermined structure.


Interface Processor/Peripheral

The speed and complexity of peripherals determine how they connect to the processor. Three types of connections are:

  • Registers: Communication between the CPU and simple devices occurs through ports (device registers the CPU can address). Each device has an identifier or base address. Ports include:
    • Input Buffer: Data from the device to the CPU.
    • Output Buffer: Data from the CPU to the device.
    • State Register: Device status information.
    • Order Register: CPU instructions for the device.

    The CPU controls I/O operations, checking device status, sending read/write orders, and transferring data. The CPU can use polling or interrupts to determine when an operation is complete (see the polling sketch after this list).

  • Controllers: Complex devices connect through a controller, which acts as the interface between the CPU and the device. The OS works with the controller, not the device itself. Types of controllers include:
    • Serial: Bits are stored in a buffer, checked for errors, and then passed to main memory.
    • Parallel: Similar to serial controllers, but the information is received in parallel.
    • Direct Memory Access (DMA): Used with block device controllers to avoid wasting CPU time. The CPU gives the controller the source, destination, and size of the transfer. The DMA controller loads the information into its buffer and then into main memory.
  • Channel: Large computer models use multiple buses (channels) controlled by special processors. Channels treat devices as virtual or abstract. Channels can be selectors (handle multiple devices but transfer data one by one) or multiplexers (manage multiple devices and transfer data simultaneously).
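
A minimal sketch of programmed I/O with polling, as mentioned for register-based communication above; the status register, input buffer, and device behaviour are simulated with plain variables and are purely hypothetical.

```python
# Hypothetical device registers, simulated with plain variables.
status_register = "BUSY"
input_buffer = None
ticks_until_ready = 5

def device_tick():
    """Stand-in for the hardware: the device becomes ready after a few cycles."""
    global status_register, input_buffer, ticks_until_ready
    ticks_until_ready -= 1
    if ticks_until_ready == 0:
        input_buffer = 0x41             # the byte the device produced
        status_register = "READY"

def read_byte_polling():
    """Programmed I/O: the CPU loops on the state register until the device signals completion."""
    polls = 0
    while status_register != "READY":   # busy waiting: these cycles are lost to useful work
        device_tick()
        polls += 1
    print(f"received byte {input_buffer:#x} after {polls} polls")

if __name__ == "__main__":
    read_byte_polling()
```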


Paging

Paging is a management technique that allows memory to be allocated discontinuously. Physical memory is divided into fixed-size pieces called frames, and programs are divided into blocks of the same size called pages. The OS maintains a page table linking each page to its frame address.


Paging Mechanism

Hardware dynamically translates addresses. Each address is divided into a page number (p) and a shift in the page (d). The page number is used as an index to find the frame address in the page table. The shift is added to the frame address to obtain the real address.
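
A minimal sketch of this translation, assuming a 4 KiB page size and a small hypothetical page table.

```python
PAGE_SIZE = 4096                       # assumed page size (4 KiB)

page_table = {0: 5, 1: 9, 2: 1}        # hypothetical page table: page number -> frame number

def translate(logical_address):
    p = logical_address // PAGE_SIZE   # page number: used as the index into the page table
    d = logical_address % PAGE_SIZE    # shift (displacement) within the page
    frame = page_table[p]              # a missing entry here would mean a page fault
    return frame * PAGE_SIZE + d       # real address = frame address + shift

if __name__ == "__main__":
    print(hex(translate(0x1234)))      # page 1, shift 0x234 -> frame 9 -> 0x9234
```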


Memory Management

Paging reassigns addresses dynamically. The system analyzes each new job, finds free frames, loads the program pages, and builds a page table. This avoids external fragmentation. Internal fragmentation may still exist in the frame containing the last page.


Segmentation

Segmentation defines memory blocks of varying size to accommodate program segments. This eliminates internal fragmentation. External fragmentation can occur. Each logical address is expressed as a segment number (s) and an offset (d).


Segmentation Hardware

Hardware transforms logical addresses into real addresses using a segment table, which relates each segment number to its load address in real memory.
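
A minimal sketch of segment-table translation with a limit check; the segment bases and lengths are hypothetical.

```python
# Hypothetical segment table: segment number -> (base address, segment length).
segment_table = {0: (0x4000, 0x0800), 1: (0x8000, 0x0400)}

def translate(s, d):
    """Map (segment number, offset) to a real address, trapping if the offset exceeds the segment."""
    base, length = segment_table[s]
    if d >= length:
        raise MemoryError(f"offset {d:#x} is beyond the end of segment {s}")
    return base + d

if __name__ == "__main__":
    print(hex(translate(1, 0x0123)))   # -> 0x8123
    try:
        translate(0, 0x0900)           # past the end of segment 0: the hardware would trap
    except MemoryError as error:
        print("trap:", error)
```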


Combined System

Some systems combine paging and segmentation to harness their positive characteristics. Segmented paging uses a page table for each segment. Paged segmentation uses segments whose size is a whole number of pages.


Virtual Memory

Virtual memory allows the execution of programs that are only partially loaded into real memory. It must be implemented very efficiently, because otherwise program performance may worsen significantly.

Benefits include:

  • Logical memory can be larger than available real memory.
  • Each program takes up less real memory, increasing multiprogramming.

Parts of a program are loaded as needed. Key aspects include:

  • Load Mode: Portions are loaded when they are referenced (on demand) or in advance (prepaging).
  • Placement: Virtual memory systems using segmentation decide where to load new segments.
  • Replacement: When real memory is full, a new part of a program replaces an existing one.

Demand loading (loading on request) is the most common loading scheme. When the processor generates an address belonging to a page that is not in memory (a page fault), the system locates the page on the secondary storage device and brings it into main memory.

Page Replacement

Demand paging reduces the number of memory pages each program occupies during its execution. When only a few pages of each program are loaded into memory, it is normal for page faults to occur frequently. For a program that generates a page fault to continue executing, the page it needs must be brought into memory.


Software for Control of Input/Output

I/O information is treated at two levels:

  • Device-independent software: Provided by the OS.
  • Device-dependent software: Provided by the manufacturer.


Device-Independent Software

Executes common I/O operations without regard to specific device characteristics. It includes routines, triggered by interrupts, whose mission is to:

  • Provide a uniform interface to device drivers.
  • Name devices to link logical file names to physical devices.
  • Control access to devices.
  • Provide a block size independent of the device.
  • Manage buffers.
  • Manage space allocation for block devices.
  • Reserve and release dedicated devices.
  • Report errors.


Device-Dependent Software (Driver)

Specific control programs for I/O devices. Each device has a driver that responds to requests from device-independent software. Driver functions include:

  • Define peripheral characteristics to the OS.
  • Initialize records associated with the peripheral.
  • Enable and disable the device for a process.
  • Process all I/O requests.
  • Cancel I/O operations.
  • Treat errors and communicate with the user.

The driver works as follows:

  1. Gets a request from the device-independent software.
  2. Translates the request to specific driver commands.
  3. Writes the commands into the appropriate controller registers.
  4. Waits for the device to respond and perform the operation.
  5. Checks for errors by checking the controller’s status register.
  6. Informs the device-independent software of the end of the transaction.

A driver is a set of routines and tables that control I/O operations on a device. Drivers are permanently housed in main memory and require high execution speed.
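
A minimal sketch of the six-step request handling described above, with the controller registers simulated by a dictionary and a single hypothetical READ command; real drivers program actual hardware registers and wait on interrupts.

```python
# Hypothetical controller registers, simulated in memory.
controller = {"order": None, "data": None, "status": "IDLE"}

def controller_execute():
    """Stand-in for the hardware: carry out the order written to the order register."""
    order, block = controller["order"]
    if order == "READ":
        controller["data"] = f"contents of block {block}"
        controller["status"] = "DONE"
    else:
        controller["status"] = "ERROR"

def driver_read_block(block_number):
    """Steps 1-6: translate the request, program the controller, wait, check errors, report back."""
    controller["order"] = ("READ", block_number)   # 2-3: translate the request and write the command
    controller_execute()                           # 4: wait for the device to perform the operation
    if controller["status"] != "DONE":             # 5: check the controller's status register
        raise IOError(f"device error while reading block {block_number}")
    return controller["data"]                      # 6: report the result to the independent software

if __name__ == "__main__":
    print(driver_read_block(7))
```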


Interrupt Vectorizer

The OS allocates memory to store addresses of interrupt handlers associated with each device. The interrupt vector is a table containing addresses of routines that handle each interruption.
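
A minimal sketch of an interrupt vector, modelled as a table mapping hypothetical interrupt numbers to handler routines; the numbers and handlers are invented for illustration.

```python
def keyboard_handler():
    print("key pressed")

def disk_handler():
    print("disk transfer complete")

# Hypothetical interrupt vector: interrupt number -> address of its handler routine.
interrupt_vector = {1: keyboard_handler, 14: disk_handler}

def interrupt(number):
    """Dispatch an interrupt through the vector, as the hardware and kernel would."""
    interrupt_vector[number]()

if __name__ == "__main__":
    interrupt(14)
    interrupt(1)
```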


Address of Input/Output Device

Peripheral devices or drivers have hardware registers that the OS can read or write. The memory area containing these registers is the I/O address of the device.


Data Management

The OS presents stored information from a logical point of view, independent of physical reality. The system must translate information in storage media and find space to record files. It must also protect files from unauthorized access. The file subsystem manages these functions.


Physical Support of Information

Logical structure of information:

Information consists of bits, which are combined to form characters. From these characters, we get the following logical units:

  • Field: A set of interrelated characters.
  • Record: A set of fields containing information about the same entity.
  • File: A set of related records.
  • Databases: Groups of interrelated files managed together.

Users handle information in logical units (logical records). The physical structure of disks determines the size of the basic unit of information transferred (the physical record or block). The OS can work with blocked records, grouping multiple logical records into a single physical record.


File Subsystem Software

Users see files as groups of interrelated information. The subsystem manager must manage operations on files, adapting to physical devices. Tasks include:

  • Storage Management: Deciding how to allocate space for efficient use and fast access.
  • Access Methods: Defining how to access stored information (sequential, direct, indexed).
  • File Management: Controlling existing files, their relationships, creation, and sharing.
  • Protection and Integrity: Ensuring information integrity and privacy.


Storage Management: Allocation of Space

To manage storage space, the system must know what is in use and what is available. For each device, the subsystem maintains a list of the space occupied by files. To manage this space, the system reserves an area on each disk called the directory. A file directory is a table with an entry for each file, plus another table with information on the available space. File entries record information such as:

  • File name
  • File type
  • Physical location on disk
  • Size
  • Usage counters
  • Protection
  • Accounting

The OS allocates space for files, either in contiguous blocks or non-contiguous blocks. Options include:

  • Contiguous Allocation: Each file is placed in a group of contiguous blocks. The directory contains the starting block address and the number of blocks. This allows effective sequential and direct access.
  • Linked Allocation: Each file is a linked list of disk blocks. The directory contains a pointer to the first block. This facilitates sequential processing of the file but is less suited to direct access.
  • Indexed Allocation: Each file has its own index block. The directory contains the index block address. This avoids external fragmentation and speeds up access.
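
A minimal sketch of indexed allocation, with the disk and directory modelled as dictionaries (all names and block numbers are hypothetical); it also shows why direct access to a given block is fast.

```python
# Hypothetical disk: block number -> contents.
disk = {
    10: [4, 7, 12, 2],                 # index block of "report.txt": its data blocks, in order
    4: "part 1 ", 7: "part 2 ", 12: "part 3 ", 2: "part 4",
}

# The directory entry stores only the address of the index block.
directory = {"report.txt": {"index_block": 10, "size": 4}}

def read_file(name):
    """Sequential access: fetch the index block, then every data block it points to."""
    index = disk[directory[name]["index_block"]]
    return "".join(disk[block] for block in index)

def read_block(name, n):
    """Direct access: jump straight to the n-th logical block without reading the previous ones."""
    index = disk[directory[name]["index_block"]]
    return disk[index[n]]

if __name__ == "__main__":
    print(read_file("report.txt"))
    print(read_block("report.txt", 2))
```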


Access Method

Information can be accessed logically as sequential, direct, or indexed. The OS file subsystem defines supported access methods. Some systems provide a single method, while others allow multiple methods.

  • Sequential Access: Records are accessed in a predetermined order, from first to last. The system maintains a pointer to the next logical record.
  • Direct Access: Allows direct access to any part of the file. This is best for quick access to large amounts of information.
  • Indexed Access: Creates an index or table containing relationships between keys and physical blocks. Access is achieved by going first to the index and then to the block address.


File Management

The subsystem controls the location of all files using directories. The directory structure can be simple or complex. Basic operations include:

  • Search: Locating a particular file.
  • Creation: Adding a new entry in the directory and storing the file contents.
  • Delete: Removing a file entry and freeing its disk space.
  • List: Listing the contents of the directory.

Directory structures include:

  • Single-Level Directories: Simple but cannot define two files with the same name.
  • Two-Level Directories: A master directory defines a subdirectory for each user.
  • Multi-Level Structures (Directory Trees): Each user can create subdirectories, grouping files according to their criteria.


Protection and Integrity of Files

File security focuses on:

  • Availability: Ensuring files can be accessed when needed.
  • Privacy: Controlling access to files.

Techniques for availability include:

  • Backups: Regular copies of file contents.
  • Log Files: Recording each transaction to update files.

Techniques for privacy include:

  • User Identification: Using a username and password.
  • Domains of Protection: A collection of resources and allowed operations.


External Security

External security mechanisms prevent the physical destruction of information and improper access. External security is divided into:

  • Physical Security: Prevents physical agents from destroying information. Includes protection against disasters and intruders.
  • Security Management: Prevents improper access through a system terminal or communication network. Includes access protection, cryptography, and functional safety.


Access Protection

Controls attempts to enter the system using a user ID and password.


Cryptography

Transforms information to make it secret. Techniques include:

  • Exclusive Or: Bitwise exclusive-or operation with a key.
  • Data Encryption Standard (DES): Uses a 56-bit key.
  • Rivest, Shamir, and Adleman (RSA): Uses different keys for encryption and decryption.
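
A minimal sketch of the exclusive-or technique from the list above, with a hypothetical key; note that XOR with a short repeating key is illustrative only and not secure.

```python
def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Bitwise exclusive-or with a repeating key; applying it twice recovers the original data."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

if __name__ == "__main__":
    key = b"secret"                          # hypothetical key
    ciphertext = xor_cipher(b"top secret file contents", key)
    print(ciphertext)
    print(xor_cipher(ciphertext, key))       # decryption uses the same operation and key
```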


Functional Safety

Covers aspects concerning the functioning of the system and facility security. Includes security in data transmission, which uses techniques such as:

  • Compaction of Data: Compressing data to occupy less space.
  • Cryptography: Hiding information in a transmission.
  • Reliability: Adding a small portion of information to check if the received data coincide with those sent.

Fault-tolerant systems are used where information could be lost due to a malfunction. They use networks of two or more computers so that one can take over if another fails.


Internal Security

Internal security includes OS mechanisms designed to ensure the computer system. Includes:

  • Processor Security: Protected states (kernel) and unprotected states (user), and hardware clock.
  • Memory Security: Boundary registers and protected/unprotected processor states.
  • File Security: Backups, log files, and control of access to resources.


User Interface Models

An OS manifests itself through a communication interface. This interface lets the user identify the type of OS and enter commands. Interfaces are either text-based or graphical.

  • Text: Commands and responses are character strings, entered via the keyboard and displayed on screen. Examples include MS-DOS, Unix, Xenix, CP/M, and Novell Netware.
  • Graphic: Uses windows, menus, and icons. Access is achieved by double-clicking with the mouse. Examples include OS/2, System-7, Windows 9x, Windows 3.0, 3.1, 3.11, KDE, GNOME, and XMOTIF.


Most Important Operating Systems

DOS: Disk Operating System (PC-DOS, MS-DOS). Created in 1981 by Microsoft for IBM PCs. It has a text interface and works in single-user, single-tasking, and single-processor mode. It can handle only 640 KB of memory and does not recognize hard drives over 2 GB. Its use is almost abandoned.

Windows: Microsoft created Windows with a graphical user interface. It displays icons on the screen that represent different files or programs. It has multitasking capabilities. Early versions were GUIs that ran on MS-DOS. Windows 95 was a true OS with a 32-bit architecture. Windows 98 added support for USB, DVD, and FAT32. Windows ME was the last version with the old DOS core. Windows NT was designed for advanced workstations and servers. Windows 2000 Professional was designed to be simple and reliable. Windows XP is an evolution of Windows 2000. It is the most used OS for personal computers today.

OS/2: IBM announced OS/2 in 1987. It was a 32-bit OS with a graphical user interface. It had real and protected modes; the protected mode allowed multitasking. It did not receive enough application support and was eventually displaced by Windows.

Mac OS: Created by Apple Computer, Inc. It was one of the first to use a graphical interface. It is user-friendly and good for organizing files. It only works on Apple-branded computers.

UNIX: A very old OS that has diversified over time. It is robust, stable, and popular in large computers and network servers. It is a time-sharing, multi-user, multitasking OS with large storage capacity and good network support. It uses a text interface, although some versions have a graphical interface. Linux is a popular version of Unix for personal computers.