Rate-monotonic scheduling

In computer science, rate-monotonic scheduling [Liu, C. L.; Layland, J. (1973). "Scheduling algorithms for multiprogramming in a hard real-time environment". Journal of the ACM 20(1): 46–61. doi:10.1145/321738.321743] is a scheduling algorithm used in real-time operating systems with a static-priority scheduling class.

These operating systems are generally preemptive and have deterministic guarantees with regard to response times. Rate monotonic analysis is used in conjunction with those systems to provide scheduling guarantees for a particular application.


A simple version of rate-monotonic analysis assumes that threads have the following properties:

*No resource sharing (processes do not share resources, e.g. a hardware resource, a queue, or any kind of semaphore, whether blocking or non-blocking (busy-waits))
*Deterministic deadlines are exactly equal to periods
*Static priorities (the task with the highest static priority that is runnable immediately preempts all other tasks)
*Static priorities assigned according to the "rate monotonic" conventions (tasks with shorter periods/deadlines are given higher priorities)
*Context switch times and other thread operations are free and have no impact on the model

Rate-monotonic analysis is a mathematical model: it simulates the periods of a closed system of threads, covering cases where round-robin and time-sharing schedulers would otherwise fail to meet the scheduling requirements. The analysis models a run of all threads in the system and determines how much time is needed to meet the guarantees for the set of threads in question.
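The run modeling described above can be sketched as a tick-by-tick simulation over the task set's hyperperiod. This is a minimal illustration of the idea (function and parameter names are invented for this example), not the algorithm of any particular system:

```python
from math import gcd
from functools import reduce

def rm_schedulable(tasks):
    """Simulate preemptive fixed-priority scheduling of periodic tasks.

    tasks: list of (C, T) pairs -- computation time and period, with
    deadline equal to period. Priorities are rate monotonic (shorter
    period -> higher priority). Returns True if no deadline is missed
    over one hyperperiod (the LCM of all periods).
    """
    tasks = sorted(tasks, key=lambda ct: ct[1])        # RM priority order
    hyper = reduce(lambda a, b: a * b // gcd(a, b),
                   (t for _, t in tasks))              # lcm of the periods
    remaining = [0] * len(tasks)                       # work left per task
    for tick in range(hyper):
        for i, (c, t) in enumerate(tasks):
            if tick % t == 0:
                if remaining[i] > 0:                   # previous job unfinished
                    return False                       # deadline missed
                remaining[i] = c                       # release a new job
        for i in range(len(tasks)):                    # run the highest-priority
            if remaining[i] > 0:                       # ready task for one tick
                remaining[i] -= 1
                break
    return all(r == 0 for r in remaining)
```

For example, the three-task set used later in this article, `[(1, 8), (2, 5), (2, 10)]`, simulates without any missed deadline, while an overloaded set such as `[(3, 5), (3, 6)]` (utilization 1.1) does not.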

Liu and Layland (1973) proved that for a set of "n" periodic tasks with unique periods, a feasible schedule that will always meet deadlines exists if the CPU utilization satisfies:

:U = \sum_{i=1}^{n} \frac{C_i}{T_i} \leq n(\sqrt[n]{2} - 1)

where "Ci" is the computation time, and "Ti" is the release period (with the deadline one period later). For example, "U" ≤ 0.8284 for "n" = 2. As the number of processes tends towards infinity, this expression tends to:

:\lim_{n \rightarrow \infty} n(\sqrt[n]{2} - 1) = \ln 2 \approx 0.693147\ldots

So a rough estimate is that RMS can, in the general case, meet all the deadlines if CPU utilization is 69.3%; the other 30.7% of the CPU can be dedicated to lower-priority, non-real-time tasks. It is known that a randomly generated periodic task system will usually meet all deadlines when the utilization is 85% or less [Lehoczky, J.; Sha, L.; Ding, Y. (1989). "The rate monotonic scheduling algorithm: exact characterization and average case behavior". IEEE Real-Time Systems Symposium, 166–171. doi:10.1109/REAL.1989.63567]; however, this depends on knowing the exact task statistics (periods, deadlines), which cannot be guaranteed for all task sets.
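The Liu–Layland bound above translates directly into a sufficient (but not necessary) schedulability test. A small sketch, where each task is represented as a hypothetical (C, T) pair:

```python
def liu_layland_bound(n):
    """The Liu-Layland utilization bound n(2^(1/n) - 1) for n periodic tasks."""
    return n * (2 ** (1.0 / n) - 1)

def passes_ll_test(tasks):
    """Sufficient-only RMS schedulability test.

    tasks: list of (C, T) pairs. Returns True when total utilization
    is at or below the bound; a False result is inconclusive.
    """
    u = sum(c / t for c, t in tasks)
    return u <= liu_layland_bound(len(tasks))
```

For a single task the bound is 1.0 (the whole CPU), for two tasks about 0.8284, and it decreases toward ln 2 ≈ 0.693 as the task count grows.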

The "rate monotonic" priority assignment is "optimal", meaning that if any "static priority" scheduling algorithm can meet all the deadlines, then the "rate monotonic" algorithm can too. The deadline-monotonic scheduling algorithm is also optimal when periods and deadlines are equal; in fact, in this case the two algorithms are identical. In addition, deadline-monotonic scheduling is optimal when deadlines are less than periods. [Leung, J. Y.; Whitehead, J. (1982). "On the complexity of fixed-priority scheduling of periodic, real-time tasks". Performance Evaluation 2(4): 237–250.]

An optimal static-priority scheduling algorithm when deadlines are greater than periods is an "open problem".

Avoiding priority inversion

In many practical applications, resources are shared, and unmodified RMS is then subject to priority inversion and deadlock hazards. In practice, this is solved by introducing priority inheritance. Alternative methods are to use lock-free algorithms or to avoid sharing a mutex/semaphore across threads with different priorities, so that resource conflicts cannot arise in the first place.

Disabling of preemption

Preemption can also be avoided by disabling it around critical sections, using kernel primitives such as:

*The OS_ENTER_CRITICAL() and OS_EXIT_CRITICAL() primitives that lock CPU interrupts in a real-time kernel, e.g. the MicroC/OS-II (commonly written uC/OS-II) kernel
*The splx() family of primitives, which nest the locking of device interrupts (FreeBSD 5.x/6.x)

Priority inheritance

Examples of priority inheritance algorithms include (from simplest to most complex):

*An in-kernel implementation of priority inheritance, part of the Linux real-time effort to make the entire Linux kernel fully preemptive, documented at http://rt.wiki.kernel.org/index.php/Main_Page
*The "basic priority inheritance protocol" [Lampson, B. W.; Redell, D. D. (1980). "Experience with processes and monitors in Mesa". Communications of the ACM 23(2): 105–117. doi:10.1145/358818.358824], which waits until a high-priority task requests a lock held by a low-priority task, and then boosts the low-priority task's priority up to the high-priority level until the lock is released
*The "highest locker protocol" which requires off-line analysis of all task conflicts to find a "ceiling priority"
*The priority ceiling protocol [Sha, L.; Rajkumar, R.; Lehoczky, J. P. (1990). "Priority inheritance protocols: an approach to real-time synchronization". IEEE Transactions on Computers 39(9): 1175–1185. doi:10.1109/12.57058], an enhancement of basic priority inheritance which assigns a "ceiling priority" to each semaphore, namely the priority of the highest job that will ever access that semaphore. A job then cannot preempt a lower-priority critical section if its priority is lower than the ceiling priority for that section.
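The boost-on-contention behavior of the basic protocol can be sketched with a toy, single-threaded lock model. The class and method names here are invented for illustration and omit all real concurrency concerns:

```python
class PITask:
    """A task with a fixed base priority and a possibly boosted active one."""
    def __init__(self, name, priority):
        self.name = name
        self.base = priority
        self.active = priority          # may be raised by inheritance

class PILock:
    """Toy mutex implementing basic priority inheritance."""
    def __init__(self):
        self.holder = None

    def acquire(self, task):
        if self.holder is None:
            self.holder = task
            return True
        # Contention: boost the holder to the waiter's priority if higher,
        # so it cannot be preempted by medium-priority tasks.
        if task.active > self.holder.active:
            self.holder.active = task.active
        return False                    # caller must block and retry

    def release(self, task):
        task.active = task.base         # drop any inherited boost
        self.holder = None
```

A low-priority holder is boosted only when a higher-priority task actually contends (the "lazy" flavor described below), and reverts to its base priority on release.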

Priority inheritance algorithms can be characterized by two parameters. First, is the inheritance lazy (applied only when essential) or immediate (the priority is boosted before there is a conflict)? Second, is the inheritance optimistic (boost by the minimum amount) or pessimistic (boost by more than the minimum amount)?

In practice there is no mathematical difference (in terms of the Liu–Layland system utilization bound) between the lazy and immediate algorithms, and since the immediate algorithms are more efficient to implement, they are the ones used by most practical systems.

The basic priority inheritance protocol can produce chained blocking, that is, an almost arbitrarily long delay from the time a high priority task requests a critical section, until it is served. The other protocols guarantee that at most one lower priority critical section must finish before the high priority task gets its critical section.

The priority ceiling protocol is available in the VxWorks real-time kernel but is seldom enabled.

A well-known use of basic priority inheritance is the "Mars Pathfinder reset bug" [http://research.microsoft.com/~mbj/Mars_Pathfinder/] [http://anthology.spacemonkeys.ca/archives/770-Mars-Pathfinder-Reset-Bug.html], which was fixed on Mars by changing the creation flags of the offending semaphore to enable priority inheritance.


Example

Consider three periodic tasks with computation times C1 = 1, C2 = 2, C3 = 2 and periods T1 = 8, T2 = 5, T3 = 10 (deadlines equal to the periods). The total utilization is:

:U = \frac{1}{8} + \frac{2}{5} + \frac{2}{10} = 0.725

The sufficient condition for n = 3 processes, under which we can conclude that the system is schedulable, is:

:U \leq 3(\sqrt[3]{2} - 1) \approx 0.77976\ldots

Since 0.725 < 0.77976\ldots, the system is certainly schedulable.

Remember, however, that this condition is sufficient but not necessary: a system whose utilization exceeds the bound may still be schedulable under this algorithm; the test simply cannot confirm it.
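An exact (necessary and sufficient) test for this task model is response-time analysis, as in the Joseph–Pandya paper listed under Further reading. A sketch, with hypothetical function names:

```python
def response_time(tasks, i):
    """Worst-case response time of task i by fixed-point iteration.

    tasks: list of (C, T) pairs sorted by priority, highest first
    (i.e. shortest period first under RMS). Returns None if the
    response time exceeds the period (deadline miss).
    """
    c, t = tasks[i]
    r = c
    while True:
        # Interference from all higher-priority tasks: ceil(r/Tj) * Cj
        interference = sum(-(-r // tj) * cj for cj, tj in tasks[:i])
        r_next = c + interference
        if r_next == r:
            return r                   # fixed point reached
        if r_next > t:
            return None                # deadline (== period) missed
        r = r_next

def rta_schedulable(tasks):
    """Exact RMS schedulability test via response-time analysis."""
    tasks = sorted(tasks, key=lambda ct: ct[1])   # rate-monotonic order
    return all(response_time(tasks, i) is not None
               for i in range(len(tasks)))
```

For instance, the harmonic task set `[(1, 2), (1, 4), (1, 8)]` has utilization 0.875, above the three-task bound of about 0.7798, yet passes this exact test, illustrating that the Liu–Layland condition is only sufficient.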

See also

*Dynamic priority scheduling
*Earliest deadline first scheduling
*RTEMS, an open source real-time operating system containing a working Rate Monotonic Scheduler.


Further reading

*Joseph, M.; Pandya, P. (1986). "Finding response times in real-time systems". BCS Computer Journal 29(5): 390–395.
*Liu, Jane W. S. (2000). Real-Time Systems. Upper Saddle River, NJ: Prentice Hall. Chapter 6.

External links

* [http://research.microsoft.com/~mbj/Mars_Pathfinder/Authoritative_Account.html Mars Pathfinder Bug] from Research @ Microsoft
* [http://www.cs.berkeley.edu/~brewer/cs262/PriorityInversion.html Priority Inversion explained via the Mars Pathfinder Bug]
