Virtual Memory Management in Operating Systems

Hardware Structures and Control

  • All memory references within a process are logical addresses translated into physical addresses during execution.
  • A process can be divided into several parts (pages or segments) that do not need to occupy contiguous locations in main memory during execution.
    • This is possible through the combination of dynamic run-time address translation and the use of page tables or segment tables.
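As a minimal sketch of how dynamic translation lets pages sit in non-contiguous frames (the 4 KiB page size and the example page table are illustrative assumptions, not from the text):

```python
# Minimal sketch of paged address translation; the page size and the
# example page table below are illustrative assumptions.
PAGE_SIZE = 4096  # 4 KiB pages

def translate(logical_addr, page_table):
    """Split a logical address into (page, offset) and map page -> frame."""
    page = logical_addr // PAGE_SIZE
    offset = logical_addr % PAGE_SIZE
    frame = page_table[page]          # frame holding this page
    return frame * PAGE_SIZE + offset

# Pages 0..2 of a process placed in non-contiguous frames 5, 2, 9.
page_table = {0: 5, 1: 2, 2: 9}
```

Consecutive logical addresses in different pages thus map to physically scattered frames, which is exactly what frees the loader from finding one contiguous region.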

Running a Program

  • The operating system loads only a few fragments of the program into main memory, including the fragment containing the program’s entry point.
  • The resident set is the part of the process that is actually in main memory.
  • If the processor encounters an address not in main memory, it generates an interrupt.
  • The operating system puts the interrupted process in the Blocked state.
  • The fragment of the process that caused the failure is brought into main memory. The operating system:
    • Issues a read I/O request to disk.
    • Schedules another process to run while the I/O operation completes.
    • Once the I/O operation completes, an interrupt is issued and the operating system moves the affected process back to the Ready state.
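The fault-handling sequence above can be sketched as follows; the process dictionary and the dict-based “disk” are simplifying assumptions standing in for real OS control blocks and I/O:

```python
# Hypothetical sketch of the page-fault sequence described above.
# The state strings and the fake disk read are illustrative assumptions.
def handle_page_fault(process, page, memory, disk):
    process["state"] = "Blocked"   # OS blocks the faulting process
    memory[page] = disk[page]      # read I/O request brings the fragment in
    # (while the real I/O is in flight, the OS would dispatch another process)
    process["state"] = "Ready"     # I/O-complete interrupt: back to Ready
    return process["state"]
```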

Advantages of this Approach

  • More processes can be kept in main memory.
    • Only a few fragments of each process need to be loaded into main memory.
    • More efficient use of the processor.
  • It becomes likely that at least one of the processes in main memory will be in the Ready state at any given time.
  • A process can be larger than all of main memory.
    • With virtual memory based on paging or segmentation, the programmer does not need to worry if their program is too large.

Types of Memory (for Execution)

  • Real memory
    • Main Memory
  • Virtual memory
    • Allows for effective multiprogramming.
    • Removes the user’s main memory constraints.

Thrashing

  • If a fragment is swapped out just before it is needed, it must be brought back almost immediately.
  • This excessive swapping of fragments is known as thrashing.
  • A thrashing system spends more time exchanging fragments than executing user instructions.

Principle of Locality

  • References to data and the program within a process tend to cluster.
  • For short periods, only a few fragments of the process will be needed.
  • Intelligent predictions can be made about what portions of a process will be needed in the near future.

Necessary Support for Virtual Memory

  • Hardware must support paging and segmentation.
  • The operating system should include software to manage the movement of pages or segments between main memory and secondary storage.

Pages

  • Each process has its own page table.
  • Each entry in the page table contains the frame number of the corresponding page in main memory.
  • A bit is needed to indicate if the page is in main memory or not.
  • Another bit indicates whether the page’s contents have been modified since it was loaded into main memory. If not, there is no need to update the page when it is replaced.
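A page-table entry carrying these two bits might be modeled as below; the exact bit layout (frame number shifted past two flag bits) is an assumption for illustration:

```python
# Sketch of a page-table entry packing the present and modified bits
# alongside the frame number; this bit layout is an assumption.
PRESENT  = 1 << 0   # page is in main memory
MODIFIED = 1 << 1   # page was written since it was loaded

def make_entry(frame, present=False, modified=False):
    flags = (PRESENT if present else 0) | (MODIFIED if modified else 0)
    return (frame << 2) | flags

def needs_writeback(entry):
    """A replaced page must be written back to disk only if it was modified."""
    return bool(entry & MODIFIED)
```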

Page Tables

  • There is one page table per process.
  • Page tables can occupy large amounts of main memory.
  • Page tables are stored in virtual memory.
  • When a process is running, at least part of its page table must be in main memory.

Translation Lookaside Buffer (TLB)

  • Each memory reference can generate two memory accesses:
    • One to fetch the page table entry.
    • One to obtain the desired data.
  • To avoid this doubled access, most virtual memory schemes use a special cache of page table entries called a Translation Lookaside Buffer (TLB).
  • Contains the most recently used page table entries.
  • Given a virtual address, the processor examines the TLB.
  • If the page table entry is present, the frame number is obtained, and the actual address can be derived.
  • If not found, the processor uses the page number as an index to search the process’s page table.
  • If the entry is found and its present bit is set, the page is in main memory, and the processor can obtain the frame number and form the physical address.
  • The processor updates the TLB.
  • If the presence bit is not active, the page is not in main memory, causing a page fault.
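The lookup path just described can be sketched as follows, assuming dict-based stand-ins for the TLB and page table:

```python
# Sketch of the TLB lookup path described above; the dict-based TLB and
# page table are simplifying assumptions.
def lookup(page, tlb, page_table):
    if page in tlb:                       # TLB hit: frame comes straight out
        return tlb[page]
    entry = page_table.get(page)          # TLB miss: walk the page table
    if entry is None or not entry["present"]:
        raise LookupError("page fault")   # page is not in main memory
    tlb[page] = entry["frame"]            # update the TLB for next time
    return entry["frame"]
```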

Page Size

  • The smaller the page size, the lower the internal fragmentation.
  • The smaller the page, the greater the number of pages required per process.
  • More pages mean larger page tables.
  • With large page tables, most of the tables will be in virtual memory.
  • The physical characteristics of most secondary storage devices are suitable for larger page sizes.
  • With a small page size, more pages of each process fit in main memory.
  • After a while, the pages in memory all contain portions of the process close to recent references, so the page fault rate stays low.
  • As the page size increases, each page contains locations farther from any recent reference, weakening locality and increasing the risk of page faults.
  • Different page sizes provide greater flexibility in TLB usage.
  • Larger pages can be assigned to program instructions.
  • Smaller pages can be assigned to thread stacks.
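The trade-off between internal fragmentation and page-table size above can be made concrete with a little arithmetic; the process and page sizes below are illustrative:

```python
# Internal fragmentation sketch: only the last page of a process is
# partially used, so smaller pages waste less space but require more
# page-table entries. The sizes used in the test are illustrative.
def last_page_waste(process_bytes, page_size):
    """Bytes wasted in the final, partially filled page."""
    used = process_bytes % page_size
    return 0 if used == 0 else page_size - used

def pages_needed(process_bytes, page_size):
    """Number of pages (hence page-table entries) the process needs."""
    return -(-process_bytes // page_size)   # ceiling division
```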

Segmentation

  • Segmentation allows the developer to view memory as consisting of multiple address spaces.
  • Segments can be of different sizes and can even change size dynamically.
  • Memory addresses consist of a segment number and displacement.
  • Simplifies the management of growing data structures.
  • Allows programs to be modified and recompiled independently.
  • Allows data sharing between processes.
  • Allows protection.

Segment Table

  • Each entry contains the starting address of the segment in main memory and its length.
  • Contains a bit indicating whether the segment is already in main memory.
  • Contains a bit indicating whether the segment has been modified since it was loaded into main memory.
  • Contains other control bits for protection or sharing.
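Using the fields above, segment translation with a length check might look like this sketch; the tuple layout `(base, length, present)` is an assumed simplification of a real segment-table entry:

```python
# Sketch of segmented address translation using the table fields above;
# the (base, length, present) tuple layout is an illustrative assumption.
def seg_translate(segment, offset, seg_table):
    base, length, present = seg_table[segment]
    if not present:
        raise LookupError("segment fault: segment must be brought in")
    if offset >= length:                 # displacement must fit the segment
        raise IndexError("protection violation")
    return base + offset
```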

Combination of Paging and Segmentation

  • Combines the advantages of paging and segmentation.
    • Each segment can be divided into fixed-size pages.
    • Each process has a segment table and several page tables.

Protection and Sharing

  • Uses a ring protection scheme.
  • Inner rings have more privileges.
  • A program can only access data from its ring or less privileged rings.
  • A program can make calls to services that reside on the same ring or more privileged rings.

Operating System Policies on Virtual Memory

  • Memory management design in an operating system depends on three areas:
    • Whether or not virtual memory techniques are used.
    • Whether to use paging, segmentation, or a combination of the two.
    • The memory management algorithms.

The first two items depend on the available hardware platform. The third point is the domain of operating system software.

Memory Management Algorithms

  • Performance: aims to minimize page faults.
    • Page faults incur software overhead.

The operating system must:

  • Replace pages.
  • Schedule another process for execution.

The goal is to minimize the likelihood of referencing a missing page.

Reading Policy

  • Concerns the decision of when to load a page into main memory.
    • Demand paging

A page is brought into memory only when a location on that page is referenced.

When a process is first executed, there will be many page faults.

  • Prepaging

Pages other than the ones causing a page fault are loaded.

If pages are loaded sequentially in secondary memory, it is more efficient to bring in a block of contiguous pages.

  • Prepaging should not be confused with swapping. When a process is unloaded from memory and moved to the suspended state, all its resident pages are also swapped out.

Placement Policies

  • Determine where parts of the process reside in main memory.
  • For pure segmentation: best fit, first fit, and next fit are used.
  • With a combination of segmentation and paging, placement is irrelevant, because the address translation hardware can use any available frame with equal efficiency.

Replacement Policies

  • Select the page to replace when loading a new page.
  • Objective: Replace the page least likely to be referenced in the near future.
  • Attempts to predict the future based on the past.

Frame Locking

  • A restriction on the replacement policy.
    • When a frame is locked, the page loaded in that frame cannot be replaced.
  • The operating system kernel and key control structures are in locked frames.
  • A lock bit is associated with each frame.

Replacement Algorithms

  • Optimal policy
    • Replaces the page that will not be referenced for the longest time.
    • Requires the operating system to have precise knowledge of future events.
    • Serves as a standard for comparing other algorithms.
  • Least Recently Used (LRU) policy
    • Replaces the page that has not been referenced for the longest time.
    • Based on the principle of locality, this is the page least likely to be referenced in the near future.
    • Implementation requires labeling each page with the time of its last reference, which involves overhead.
  • First-In-First-Out (FIFO) policy
    • Treats the frames allocated to a process as a circular buffer.
    • Pages are removed from memory using a round-robin technique.
    • Simple to implement.
    • Objective: Replace the page that has been in memory the longest.
    • Weakness: a heavily used page may also be the one that has been in memory the longest, so it is repeatedly ejected and brought back in.
  • Clock policy
    • Requires an additional bit associated with each frame called the use bit.
    • When a page is first loaded into a memory frame, its use bit is set to 1.
    • When the page is referenced, the bit is set to 1.
    • When it is time to replace a page, the first frame encountered with a use bit of zero is replaced.
    • During the search, every use bit that is 1 is changed to 0.
  • Page buffering
    • Uses the FIFO policy.
    • The replaced page is assigned to one of two lists:
      • Free page list: if the page has not been modified.
      • Modified page list: if the page has been modified.
    • The page is not physically moved out of main memory; only its entry in the page table is removed and added to the appropriate list.
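The clock policy described above can be sketched as follows; the frame count, reference string, and helper names are illustrative assumptions:

```python
# Sketch of the clock replacement policy: frames form a circular buffer,
# each with a use bit; the helper names and sizes are illustrative.
def clock_replace(frames, use_bits, hand):
    """Advance the hand, clearing use bits, until a 0 bit is found."""
    while True:
        if use_bits[hand] == 0:
            victim = hand
            return victim, (hand + 1) % len(frames)
        use_bits[hand] = 0               # give the page a second chance
        hand = (hand + 1) % len(frames)

def access(page, frames, use_bits, state):
    """Reference one page; return True if it caused a page fault."""
    if page in frames:                   # hit: set the use bit
        use_bits[frames.index(page)] = 1
        return False
    if None in frames:                   # a free frame is still available
        i = frames.index(None)
        frames[i] = page
        use_bits[i] = 1
        return True
    victim, state["hand"] = clock_replace(frames, use_bits, state["hand"])
    frames[victim] = page
    use_bits[victim] = 1                 # newly loaded page: use bit set
    return True
```

Running the reference string 1, 2, 3, 1, 4 against three frames shows the second-chance behavior: the re-referenced page 1 survives the first sweep because its use bit is cleared rather than replaced outright.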

Resident Set Management

  • Resident set size
    • The operating system must decide how much main memory to allocate to a particular process.

    • The fewer pages assigned to each process, the more processes can reside in main memory.
    • If too few pages are assigned to a process, its page fault rate will be high.
    • Beyond a certain size, allocating additional frames to a particular process has no noticeable effect on its page fault rate.

  • Fixed allocation
    • Gives the process a fixed number of frames.
    • If a page fault occurs, a page from the process must be replaced with the needed page.
  • Variable allocation
    • Allows the number of assigned frames to change.
    • If a process has many page faults, it will be assigned more frames.

Replacement Scope

  • The scope of a replacement strategy can be classified as global or local.
    • Global: Considers all memory pages as candidates for replacement.
    • Local: Considers only the pages of the process that caused the page fault.

Load Control

  • Determines the number of processes that can be in main memory.
  • With too few processes in memory, all of them may be blocked at the same time, leaving the processor idle and much time spent swapping.
  • With too many processes in memory, the chance of page faults increases, resulting in thrashing.

Process Suspension

  • To reduce the degree of multiprogramming, one or more processes resident in main memory should be suspended.
    • Processes with lower priority.
    • Processes with high page fault rates.
    • The last process activated.
    • The process with the smallest resident set.
    • The process with the most pages.
    • The process with the largest remaining run time.

Benefits of Using Virtual Memory

  • Better processor utilization.
  • Removes size limitations in software development.
  • Logical addresses are translated to physical addresses during execution.
    • This allows processes to be placed anywhere in main memory and to change location over time.
  • A process can be divided into fragments.
    • These fragments do not need to be located contiguously.
    • Not all fragments need to be in main memory during execution.

  • The basic virtual memory approaches are paging and segmentation.
  • A memory management scheme requires hardware and software support.
    • Hardware: provides the processor with:
      • Virtual address translation.
      • Interrupt generation on page faults.
    • Software: the memory management algorithms.
  • Design issues related to operating systems providing memory management support:
    • Reading policy
    • Placement policies
    • Replacement policies
    • Resident set management
    • Swapping policies
    • Load control