Two-level scheduling

Two-level scheduling is a technique in computer science for performing process scheduling efficiently when some of the processes are swapped out of main memory.

Consider this problem: a system contains 50 runnable processes, all with equal priority. However, the system's memory can hold only 10 processes at a time, so at any moment 40 processes are swapped out to virtual memory on the hard disk. Swapping a process out takes 50 ms, and swapping one in takes another 50 ms.

With straightforward round-robin scheduling, every context switch would require a swap-in, because the process chosen next is the one that has waited longest and is therefore always among the 40 on disk. Even choosing randomly among all 50 processes would only reduce the probability of needing a swap to 80% (40/50). Whenever a process is swapped in, another must be swapped out to make room. Swapping is costly, so the scheduler would spend much of its time performing these swaps.
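To make the cost concrete, the short Python sketch below works through the numbers of the example. The 10 ms time quantum is an assumed value chosen only for illustration; the 50 ms swap times come from the scenario above.

```python
swap_out_ms = 50   # time to swap a process out to disk (from the example)
swap_in_ms = 50    # time to swap a process in from disk (from the example)
quantum_ms = 10    # hypothetical round-robin time quantum (assumption)

# With straightforward round-robin, the process chosen next is always on
# disk, so every context switch pays for one swap-out plus one swap-in.
overhead_per_switch_ms = swap_out_ms + swap_in_ms                    # 100 ms
useful_fraction = quantum_ms / (quantum_ms + overhead_per_switch_ms)

print(f"Swap overhead per context switch: {overhead_per_switch_ms} ms")
print(f"Fraction of CPU time spent on useful work: {useful_fraction:.0%}")  # about 9%
```

Under these assumptions the CPU would spend roughly 90% of its time swapping rather than running processes.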

That is where two-level scheduling enters the picture. It uses two different schedulers: a lower-level scheduler, which selects only among the processes currently in memory and could, for example, be a round-robin scheduler; and a higher-level scheduler, whose only concern is to swap processes into and out of memory. Because swapping takes so much time, the higher-level scheduler runs much less often than the lower-level one.
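A minimal sketch of this structure, assuming a simple queue-based model, is shown below. The class and method names are hypothetical and the selection policies are deliberately naive; it is meant only to illustrate how the two levels divide the work.

```python
from collections import deque

class TwoLevelScheduler:
    """Illustrative two-level scheduler; not taken from the cited source."""

    def __init__(self, processes, memory_slots=10):
        self.in_memory = deque(processes[:memory_slots])  # resident processes
        self.on_disk = deque(processes[memory_slots:])    # swapped-out processes

    def lower_level_pick(self):
        """Round-robin among in-memory processes only; runs every quantum."""
        chosen = self.in_memory[0]
        self.in_memory.rotate(-1)   # move the chosen process to the back
        return chosen

    def higher_level_rebalance(self, k=2):
        """Runs much less often: exchange k residents for k on-disk processes.

        A real policy would weigh residency time, process size and priority
        (see the list below); here the most recently run residents are simply
        exchanged for the longest-waiting processes on disk.
        """
        for _ in range(min(k, len(self.in_memory), len(self.on_disk))):
            self.on_disk.append(self.in_memory.pop())          # swap out (costly)
            self.in_memory.appendleft(self.on_disk.popleft())  # swap in (costly)
```

The essential point is that lower_level_pick never touches the disk; the expensive swapping is confined to the rarely invoked higher_level_rebalance.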

Thus, the higher-level scheduler selects among the processes in memory those that have run for a long time and swaps them out, replacing them with processes on disk that have not run for a long time. Exactly how it selects processes is up to the implementation of the higher-level scheduler; a simple scoring heuristic is sketched after the list. A compromise has to be made involving the following variables:

* Response time: A process should not stay swapped out for too long, or some other process (or the user) will have to wait needlessly. If this variable is not considered, resource starvation may occur and a process may never complete.
* Size of the process: Larger processes should be swapped less often than smaller ones, because they take longer to swap and, while resident, leave room for fewer other processes in memory.
* Priority: The higher the priority of the process, the longer it should stay in memory so that it completes faster.
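One way the higher-level scheduler could weigh these variables is with a simple linear score, as in the hypothetical function below. The weights, the attribute names (resident_since, size_mb, priority) and the linear form are assumptions made for illustration, not a policy from the cited source.

```python
def swap_out_score(process, now, w_time=1.0, w_size=0.5, w_priority=2.0):
    """Higher score = better candidate for the higher-level scheduler to swap out.

    Hypothetical heuristic: long residency argues for swapping a process out,
    while large size and high priority argue for keeping it in memory. The
    mirror image (how long a process has sat on disk) would drive the swap-in
    choice, bounding response time and avoiding starvation.
    """
    time_resident = now - process.resident_since
    return (w_time * time_resident            # has held memory for a long time
            - w_size * process.size_mb        # large: swapping it is expensive
            - w_priority * process.priority)  # important: keep it resident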

References

* Tanenbaum, Andrew S.; Woodhull, Albert S. Operating Systems: Design and Implementation, p. 92.

