CS 452/652 Fall 2019 - Lecture 33
November 25, 2019
Real-Time Scheduling / Theory and Practice
CPU Scheduling
embedded (special-purpose) system
guarantee response latency - predictability!
hard vs. soft/near real-time
soft/near: statistical distribution of response times ok
hardware / os / application co-design
general-purpose computer system
throughput and fairness
hardware -> os -> application: independent designs
Cyclic Executive
polling loop w/ strict timing
components: subroutine or coroutine
subroutine runs to completion
coroutine suspends/resumes (need stack)
known upper bound for each component
sleep to fill gaps → predictable timing
t = current_time(); if (C1) then A1; sleep((t + target) - current_time())
monolith, limited flexibility
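The polling loop with sleep-to-fill-gaps can be sketched as follows; a minimal illustration, where the component names and the frame length are assumptions, and real components would have a known worst-case run time shorter than the frame:

```python
import time

# Hypothetical bounded-time components (placeholders, not from the notes).
def poll_sensors():
    pass  # assumed: known upper bound on run time

def update_speeds():
    pass  # assumed: known upper bound on run time

def cyclic_executive(period_s=0.05, frames=3):
    """Run every component once per frame; sleep fills the remaining gap,
    so each frame takes the same wall-clock time -> predictable timing."""
    next_release = time.monotonic() + period_s
    for _ in range(frames):
        poll_sensors()
        update_speeds()
        gap = next_release - time.monotonic()
        if gap > 0:
            time.sleep(gap)           # idle until the frame boundary
        next_release += period_s      # advance by the period, not "now",
                                      # so timing error does not accumulate

cyclic_executive()
```

Note the drift-free idiom: the next release time is computed from the previous one, not from the current time, so jitter in any one frame does not shift all later frames.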
Real-Time Task Scheduling
separate OS kernel and application tasks for flexibility
dynamic creation of tasks
tasks annotated with real-time requirements
is the set of real-time tasks feasible? → need online admission test
periodic tasks: arrival, run-time
aperiodic tasks (sporadic) much more difficult to analyze
classic paper:
Scheduling Algorithms for Multiprogramming in a Hard-Real-Time Environment
(Liu/Layland)
periodic tasks; deadline = arrival of next request
immediate preemption
rate-monotonic (RM) scheduling
assign priorities according to arrival rate (not utilization)
admission analysis based on total CPU utilization
starvation under overload
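The Liu/Layland admission test for RM can be computed directly: n periodic tasks with total utilization U = Σ Cᵢ/Tᵢ are schedulable if U ≤ n(2^(1/n) − 1). A sketch (function name and task tuples are my notation):

```python
def rm_admission_test(tasks):
    """tasks: list of (run_time, period) pairs.
    Sufficient (not necessary) Liu/Layland test: schedulable under
    rate-monotonic priorities if U <= n * (2**(1/n) - 1)."""
    n = len(tasks)
    utilization = sum(c / t for c, t in tasks)
    bound = n * (2 ** (1.0 / n) - 1)   # -> ln 2 ~ 0.693 as n grows
    return utilization <= bound

# Two tasks: bound = 2*(sqrt(2)-1) ~ 0.828
print(rm_admission_test([(1, 4), (2, 5)]))   # U = 0.65 -> True
print(rm_admission_test([(2, 4), (2, 5)]))   # U = 0.90 -> False
```

Because the test is only sufficient, a task set rejected here may still be schedulable; the bound is what makes the online admission decision cheap.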
earliest deadline first (EDF) scheduling
dynamic priorities -> sorting / computational overhead
no guarantees under overload
feasible iff CPU utilization ≤ 100% (EDF is optimal)
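EDF's admission test and its dynamic-priority rule are both simple to state in code; a sketch, where the function names and the ready-queue representation are assumptions:

```python
def edf_feasible(tasks):
    """tasks: list of (run_time, period), deadline = next arrival.
    EDF schedules the set iff total utilization <= 1 (EDF is optimal)."""
    return sum(c / t for c, t in tasks) <= 1.0

def edf_pick_next(ready):
    """ready: list of (task_id, absolute_deadline).
    Dynamic priority: always run the task with the earliest deadline.
    A linear scan here; a real kernel would keep a sorted structure,
    which is the sorting/computational overhead noted above."""
    return min(ready, key=lambda entry: entry[1])[0]

print(edf_feasible([(2, 4), (2, 5)]))   # U = 0.9 -> True (RM's bound rejects this set)
print(edf_pick_next([("a", 30), ("b", 12), ("c", 25)]))   # "b"
```

This also illustrates the overload caveat: the feasibility test gives no guidance once U exceeds 1, and EDF then misses deadlines unpredictably rather than starving a fixed low-priority task as RM does.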
Trains
symmetric/homogeneous tasks (train control) → but tasks not independent!
interrupt handling latency (minimal)
central event query and sensor attribution
worst case: maximum number of track operations?
speed and/or turnout
shared serial interface
internal organization ⇒ priorities sufficient (buffering)
best outcome: assess how many trains can be controlled
BUT: CPU is not bottleneck!
additional challenge: error handling & robustness