Memory Management in Operating Systems



Memory Management

The entire program and data of a process must be in main memory for the process to execute. Memory consists of a large array of words or bytes, each with its own address. Generally, memory management is concerned with the following questions.
How do we keep track of the processes currently being executed?
Which processes should be loaded when memory space becomes available?
How can processes that are larger than main memory be loaded?
How do processes share main memory?
The part of the OS that is responsible for handling these issues is called the memory manager.

Memory Management Issues

The main memory management design issues are as follows.
·         Uniprogramming or multiprogramming system.
·         Absolute or relocatable partitions.
·         Multiple fixed or multiple variable partitions.
·         Partitions created in a rigid manner or dynamically.
·         Programs run from a specific partition or from anywhere in memory.
·         Jobs placed in a contiguous block or in any available slots.

Memory Management Requirements

Relocation

·         The programmer does not know where the program will be placed in memory when it is executed.
·         While the program is executing, it may be swapped out to disk and returned to main memory
          at a different location (relocated).
·         Memory addresses used in the code must be translated to actual physical memory addresses.
Hardware used: base (relocation) register and limit register.

Protection

·         Prevent processes from interfering with the OS or with other processes. A process should not be able to reference memory locations belonging to another process without permission.
·         It is impossible to check absolute addresses at compile time, since the program may be relocated; the check must be done at run time.
·         Often integrated with relocation.

Sharing

·         Allow several processes to access the same portion of memory (to share data).
·         It is better to allow several processes to access the same copy of a program than to give each its own separate copy.

Logical and Physical Memory Addresses

The address generated by the CPU is called a logical address; it is also known as a virtual address. An address seen by the memory unit, that is, the one loaded into the memory address register, is called a physical address. The set of all logical addresses generated by a program is the logical address space; the set of all physical addresses corresponding to these logical addresses is the physical address space. The run-time mapping from virtual to physical addresses is done by a hardware device called the memory management unit (MMU). Mapping a logical address to a physical address is relocation. Two registers, base and limit, are used in the mapping. The same mechanism is also used for memory protection: memory outside the legal range is protected.

Base (relocation) register: holds the smallest legal physical memory address.
Limit register: contains the size of the legal address range.
Example:
Logical address = 300, limit = 350
If base = 1500, then since 300 < 350 the address is legal, and
Physical address = 1500 + 300 = 1800.
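
The translation can be illustrated in a few lines of C. This is a minimal sketch of the base/limit check described above, using the hypothetical values from the example (base = 1500, limit = 350); it is not a description of any particular hardware.

#include <stdio.h>
#include <stdlib.h>

#define BASE  1500   /* relocation register: smallest legal physical address */
#define LIMIT 350    /* limit register: size of the legal logical range */

/* Translate a logical address; trap if it falls outside the legal range. */
unsigned translate(unsigned logical)
{
    if (logical >= LIMIT) {                 /* protection check by the MMU */
        fprintf(stderr, "trap: address %u out of range\n", logical);
        exit(EXIT_FAILURE);
    }
    return BASE + logical;                  /* relocation by the MMU */
}

int main(void)
{
    printf("physical = %u\n", translate(300));   /* prints 1800 */
    return 0;
}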


Uniprogramming Model (Without Swapping and Paging)



·         Run one program at a time, with a single segment per process.
·         Memory is divided into two partitions by convention; one partition is allocated to the OS and the other to the executing process.
·         As soon as the user types a command, the OS copies the requested program from disk to memory and executes it.
·         Used in early DOS systems.

Disadvantages:
·         Only one process can run at a time.
·         Processes can destroy the OS (an erroneous process can crash the entire system).


Multiprogramming Model: Fixed Partition Multiprogramming


Multiple partitions are created to allow multiple user processes to reside in memory simultaneously. The partitions are fixed in size (possibly unequal) and can be implemented using base and limit registers. A partition table stores the partition information (base address, size, and status). When a job arrives, it can be put into the input queue for the smallest partition large enough to hold it. Since the partitions are fixed, any space in a partition not used by a process is lost. This scheme was used by OS/360 on large mainframes.
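
As an illustration of the partition table mentioned above, here is a minimal sketch in C. The field names and the partition layout are illustrative assumptions, not the actual OS/360 format.

#include <stdio.h>

struct partition {
    unsigned base;   /* start address of the partition      */
    unsigned size;   /* fixed size of the partition         */
    int      pid;    /* occupying process id, or -1 if free */
};

int main(void)
{
    /* Example layout: OS in low memory, three unequal user partitions. */
    struct partition table[] = {
        { 0,   100, 0 },    /* operating system       */
        { 100, 200, -1 },   /* small partition, free  */
        { 300, 400, -1 },   /* medium partition, free */
        { 700, 324, -1 },   /* large partition, free  */
    };

    for (size_t i = 0; i < sizeof table / sizeof table[0]; i++)
        printf("partition %zu: base=%u size=%u %s\n",
               i, table[i].base, table[i].size,
               table[i].pid < 0 ? "free" : "in use");
    return 0;
}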

Implemented in two ways:


·         Dedicated partitions for each process (absolute translation).
·         Maintaining a single queue (relocatable translation).



Dedicating Partitions

·         A separate input queue is maintained for each partition.
·         Processes are translated with absolute assemblers and compilers to run only in a specific partition.
Problems:
If a process is ready to run but its partition is occupied, that process has to wait even if other partitions are available; storage is wasted.

Maintaining Single Queue

·         A single queue is maintained for all processes.
·         When a partition becomes free, the process closest to the front of the queue that fits in it can be loaded into the empty partition and run.
·         So as not to waste a large partition on a small process, another strategy is to search the whole input queue whenever a partition becomes free and pick the largest process that fits (a sketch of this strategy follows below).
Problems:
·         Eliminates the problems of absolute translation, but the implementation is more complex.
·         Storage is wasted when many processes are small.
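
The "pick the largest job that fits" strategy can be sketched as follows; the job sizes and the freed partition size are illustrative assumptions.

#include <stdio.h>

/* Returns the index of the largest job that fits the partition, or -1 if none fits. */
int pick_job(const int jobs[], int njobs, int partition_size)
{
    int best = -1;
    for (int i = 0; i < njobs; i++)
        if (jobs[i] <= partition_size && (best == -1 || jobs[i] > jobs[best]))
            best = i;
    return best;
}

int main(void)
{
    int queue[] = { 30, 120, 70, 95 };     /* job sizes in KB (hypothetical) */
    int freed_partition = 100;             /* size of the partition that just freed */
    int i = pick_job(queue, 4, freed_partition);
    if (i >= 0)
        printf("load job %d (%d KB) into the %d KB partition\n",
               i, queue[i], freed_partition);
    return 0;
}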


Variable Partition Multiprogramming

How can a process be allocated exactly as much space as it needs?
- Variable partition multiprogramming.
When processes arrive, they are given as much storage as they need.
When processes finish, they leave holes in main memory; the OS fills these holes with other processes from the input queue.
Advantages:
Processes get a partition of exactly the size they request, so no memory is wasted at allocation time.
Problems:
What happens to the holes that are left behind?
When holes are given to other processes they may be partitioned again; the remaining holes get smaller, eventually becoming too small to hold new processes, and memory is wasted.




Memory Compaction

By moving processes in memory, the memory holes can be collected into a single section of unallocated space; this is called memory compaction.

Problems:
Not possible with absolute translation.
Even when possible, it is highly expensive because it requires a lot of CPU time.
For example, on a 256-MB machine that can copy 4 bytes in 40 nsec, compacting all of memory means about 64 million copy operations (256 MB / 4 bytes), which takes roughly 2.7 sec. Everything stops during compaction, so this technique is usually not used.

Memory Management with Bitmaps

Memory is divided into allocation units, and each bit in the bitmap indicates whether or not the corresponding unit is allocated: 0 if the unit is free and 1 if it is occupied (or vice versa). When a process comes into memory, the allocator searches the bitmap for the required number of consecutive 0 bits (a sketch of this search follows below). The size of the allocation unit is an important design issue: increasing it decreases the size of the bitmap, but increases the amount of memory wasted when the process size is not a multiple of the allocation unit.

Advantages:
Provides a simple way to keep track of memory words.
Problems:
Searching a bitmap for a run of a given length is a slow operation.
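
A minimal sketch of the bitmap search described above; the number of allocation units and the scanning strategy (a simple linear scan) are assumptions for illustration.

#include <stdio.h>
#include <string.h>

#define UNITS 64                          /* number of allocation units */
static unsigned char bitmap[UNITS / 8];   /* one bit per unit           */

static int  get_bit(int i) { return (bitmap[i / 8] >> (i % 8)) & 1; }
static void set_bit(int i) { bitmap[i / 8] |= 1u << (i % 8); }

/* Linear scan for `need` consecutive free units; returns first unit or -1. */
int bitmap_alloc(int need)
{
    int run = 0;
    for (int i = 0; i < UNITS; i++) {
        run = get_bit(i) ? 0 : run + 1;
        if (run == need) {
            int start = i - need + 1;
            for (int j = start; j <= i; j++)
                set_bit(j);               /* mark the run as allocated */
            return start;
        }
    }
    return -1;                            /* no run of free units is long enough */
}

int main(void)
{
    memset(bitmap, 0, sizeof bitmap);     /* all units initially free */
    printf("first process at unit %d\n", bitmap_alloc(5));   /* unit 0 */
    printf("second process at unit %d\n", bitmap_alloc(3));  /* unit 5 */
    return 0;
}

Even in this toy version, allocation is a linear scan over the bitmap, which is why the search is slow for large memories.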

Memory Management with Linked Lists

The OS maintains a linked list of allocated and free memory segments, where a segment is either a process or a hole between two processes. Each entry in the list records the segment's size and a pointer to the next segment; the list is kept sorted by address, with H representing a hole and P representing a process. Sorting by address has the advantage that when a process terminates, updating the list is straightforward. The list may also be implemented as a doubly linked list to make searching more convenient. When a process terminates, if either neighbour is already a hole, the holes are merged; this is called coalescing. A small sketch of the segment list and of coalescing follows below.
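
This is a minimal sketch of such a segment list and of coalescing on process termination; the layout, sizes, and helper names are illustrative assumptions.

#include <stdio.h>
#include <stdlib.h>

struct seg {
    char type;            /* 'P' = process, 'H' = hole    */
    unsigned start, size; /* kept sorted by start address */
    struct seg *next;
};

static struct seg *make(char t, unsigned s, unsigned n, struct seg *nx)
{
    struct seg *x = malloc(sizeof *x);
    x->type = t; x->start = s; x->size = n; x->next = nx;
    return x;
}

/* Turn the process segment `p` into a hole and coalesce with its neighbours. */
void free_segment(struct seg *head, struct seg *p)
{
    p->type = 'H';
    if (p->next && p->next->type == 'H') {        /* merge with the next hole */
        struct seg *n = p->next;
        p->size += n->size;
        p->next = n->next;
        free(n);
    }
    for (struct seg *q = head; q; q = q->next)    /* merge with the previous hole */
        if (q->next == p && q->type == 'H') {
            q->size += p->size;
            q->next = p->next;
            free(p);
            break;
        }
}

int main(void)
{
    /* List: H(0,3) -> P(3,5) -> H(8,2) */
    struct seg *h2 = make('H', 8, 2, NULL);
    struct seg *p  = make('P', 3, 5, h2);
    struct seg *head = make('H', 0, 3, p);

    free_segment(head, p);                 /* all three merge into H(0,10) */
    for (struct seg *q = head; q; q = q->next)
        printf("%c start=%u size=%u\n", q->type, q->start, q->size);
    return 0;
}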


Partition Selection Algorithms

Situation: several memory holes are large enough to contain a process; the OS must select which hole the process will be loaded into.

First Fit

Allocate the first hole that is big enough. The search stops as soon as a free hole that is large enough is found. The hole is then broken into two pieces, one for the process and one for the unused memory.
Advantage:
·         It is a fast algorithm because it searches as little as possible.
Problem:
·         Not good in terms of storage utilization.

Best Fit

Allocate the smallest hole that is big enough. The entire list is searched, and the smallest hole that is big enough is taken. Rather than breaking up a big hole that might be needed later, it finds a hole close to the actual size requested.
Advantage:
·         Often better storage utilization than first fit, but not always.
Problem:
·         Slower than first fit because the entire list must be searched each time.
·         Creates tiny holes that may never be used.

Worst Fit
Allocate the largest hole. The entire list is searched and the largest hole is taken. Rather than creating a tiny hole, it produces the largest leftover hole, which may be more useful.
Advantage:
Sometimes better storage utilization than first fit and best fit.
Problem:         Not good for either performance or utilization.
These algorithms, however, suffer from external fragmentation. As processes are loaded into and removed from memory, the free memory space is broken down into little pieces. External fragmentation exists when enough total memory space exists to satisfy a request but the space is not contiguous; storage is fragmented into a large number of small holes. This fragmentation problem can be severe.
Example: consider a swapping system in which memory consists of the following hole sizes, in memory order: 10K, 4K, 20K, 18K, 7K, 9K, 12K, and 15K. Which hole is taken for successive segment requests of (a) 12K, (b) 10K, and (c) 9K under the first fit, best fit, and worst fit algorithms?
Solution:                                     For the first fit algorithm
Initially                                    10K, 4K, 20K, 18K, 7K, 9K, 12K, 15K
12K                                          10K, 4K, 8K, 18K, 7K, 9K, 12K, 15K
10K                                          0K, 4K, 8K, 18K, 7K, 9K, 12K, 15K
9K                                            0K, 4K, 8K, 9K, 7K, 9K, 12K, 15K
The 12K request takes the 20K hole, the 10K request takes the 10K hole, and the 9K request takes the 18K hole (the first remaining hole that is large enough).

Solution:                                     For the best fit algorithm
Initially                                    10K, 4K, 20K, 18K, 7K, 9K, 12K, 15K
12K                                          10K, 4K, 20K, 18K, 7K, 9K, 0K, 15K
10K                                          0K, 4K, 20K, 18K, 7K, 9K, 0K, 15K
9K                                            0K, 4K, 20K, 18K, 7K, 0K, 0K, 15K
The 12K request takes the 12K hole, the 10K request takes the 10K hole, and the 9K request takes the 9K hole.

Solution:                                     For the worst fit algorithm
Initially                                    10K, 4K, 20K, 18K, 7K, 9K, 12K, 15K
12K                                          10K, 4K, 8K, 18K, 7K, 9K, 12K, 15K
10K                                          10K, 4K, 8K, 8K, 7K, 9K, 12K, 15K
9K                                            10K, 4K, 8K, 8K, 7K, 9K, 12K, 6K
The 12K request takes the 20K hole, the 10K request takes the 18K hole, and the 9K request takes the 15K hole.
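
The three answers can be reproduced with a small allocator sketch. The code below is illustrative only: it keeps the holes from the example in memory order and applies each placement policy in turn.

#include <stdio.h>

#define N 8

/* Return the index of the hole chosen for `req` under the given policy. */
static int choose(const int holes[], int req, char policy)
{
    int pick = -1;
    for (int i = 0; i < N; i++) {
        if (holes[i] < req) continue;
        if (policy == 'f') return i;                          /* first fit */
        if (pick == -1 ||
            (policy == 'b' && holes[i] < holes[pick]) ||      /* best fit  */
            (policy == 'w' && holes[i] > holes[pick]))        /* worst fit */
            pick = i;
    }
    return pick;
}

int main(void)
{
    const char *names[] = { "first", "best", "worst" };
    const char policies[] = { 'f', 'b', 'w' };
    const int requests[] = { 12, 10, 9 };

    for (int p = 0; p < 3; p++) {
        int holes[N] = { 10, 4, 20, 18, 7, 9, 12, 15 };   /* sizes in KB */
        printf("%s fit:", names[p]);
        for (int r = 0; r < 3; r++) {
            int i = choose(holes, requests[r], policies[p]);
            if (i < 0) { printf("  %dK -> no hole", requests[r]); continue; }
            printf("  %dK -> hole of %dK", requests[r], holes[i]);
            holes[i] -= requests[r];      /* the chosen hole shrinks */
        }
        printf("\n");
    }
    return 0;
}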

Fragmentation

When a process requests memory space and the space allocated to it is greater than the memory it actually needs, a tiny unusable hole is created; this is called fragmentation. For example, a process that requests 812 bytes but is given an 815-byte block leaves a tiny hole of 3 bytes.
Fragmentation is of two types:
1)      Internal fragmentation.
2)      External fragmentation.
The distinction between internal and external fragmentation is as follows.
·         Internal fragmentation occurs with fixed-size allocation, such as paging, while external fragmentation occurs with variable-size allocation, such as segmentation.
·         When a process requests a memory segment and the block allocated to it is larger than the memory it needs, a tiny hole is created inside the allocation; the difference between the two sizes is internal fragmentation.
·         External fragmentation occurs when a process cannot obtain a block of the size it needs even though enough total free memory exists, because the free memory is scattered in holes that are each smaller than the request. It can be addressed by memory compaction and coalescing.



Memory Management with the Buddy System

·         The entire available space is treated as a single block of size 2^U.
·         If a request of size s satisfies 2^(U-1) < s <= 2^U, the entire block is allocated.
·         Otherwise the block is split into two equal buddies.
·         The process continues until the smallest block greater than or equal to s is generated (see the sizing sketch below).
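
A minimal sketch of the sizing rule above, assuming a total space of 2^10 = 1024 units and requests of at least one unit:

#include <stdio.h>

#define U 10                       /* total memory = 2^10 = 1024 units */

/* Smallest power-of-two block that holds a request of size s (1 <= s <= 2^U). */
unsigned buddy_block(unsigned s)
{
    unsigned block = 1u << U;      /* start with the whole space      */
    while (block / 2 >= s)         /* split while the half still fits */
        block /= 2;
    return block;
}

int main(void)
{
    printf("request 100 -> block of %u\n", buddy_block(100));   /* 128  */
    printf("request 512 -> block of %u\n", buddy_block(512));   /* 512  */
    printf("request 513 -> block of %u\n", buddy_block(513));   /* 1024 */
    return 0;
}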



Swapping

Until now we have assumed that a user process remains in main memory until completion, with contiguous allocation.
In a timesharing system the situation may be different; sometimes there is not enough memory to hold all processes.
How is this problem handled? Swapping.
Excess processes must be kept on disk and brought into memory dynamically to run.
A process can be swapped temporarily out of memory to disk and then brought back into memory for continued execution.
In a timesharing system a process may be swapped in and out many times before it completes.
In some swapping systems, only one process occupies main storage at a time; when that process can no longer continue, it relinquishes both the storage and the CPU.

Overlays

In the past, when programs were too big to fit in the available main memory, the solution adopted was to split the program into pieces called overlays.
Overlay 1 would start running first; when it finished, it would call overlay 2, and so on.
The overlays were kept on disk and swapped in and out by the OS.
Problem:
The job of splitting the program into pieces (creating the overlays) had to be done by the programmer, which was time-consuming and tedious.
Solution: virtual memory.

Virtual memory

Virtual memory is a concept associated with the ability to address a memory space much larger than the available physical memory.

The basic idea behind virtual memory is that the combined size of the program, data, and stack may exceed the amount of physical memory available for it. The OS keeps the parts of the program currently in use in main memory and the rest on disk.

Virtual storage is not a new concept; it was devised by Fotheringham in 1961 and used in the Atlas computer system. Its widespread use in operating systems is more recent, however, and virtually all modern microprocessors now support virtual memory. Virtual memory can be implemented by two commonly used methods, paging and segmentation, or by a mix of both.

Virtual address space vs. Physical address space

The set of all virtual (logical) addresses generated by a program is the virtual address space; the set of all physical addresses corresponding to these virtual addresses is the physical address space.
MMU
The run-time mapping from virtual addresses to physical addresses is done by a hardware device called the memory management unit (MMU).

