CS 452/652 Winter 2023 - Lecture 20
Mar 28, 2023
Real-Time Scheduling / Theory and Practice
CPU Scheduling
embedded (special-purpose) system
hardware / operating system / application co-design
guaranteed response latency - predictability! (too early?)
hard vs. firm vs. near vs. soft real-time
hard: "no delay" - any missed deadline is a system failure
firm: rare deadline miss ok - a late result is useless and discarded
near: some delay ok
soft: statistical distribution of response times ok, degrading utility
general-purpose computer system
hardware - operating system - application: independent designs
throughput and fairness objectives
hard real-time almost impossible
pipeline vs. run-to-completion
Cyclic Executive
polling loop w/ strict timing
components: subroutine or coroutine
subroutine runs to completion
coroutine suspends/resumes (need stack)
known upper bound for each component
sleep to fill gaps → predictable timing
t = current_time(); if (C1) then A1; ... sleep(t + target - current_time());
monolith, limited flexibility
Real-Time Task Scheduling
separate OS kernel and application tasks for flexibility
dynamic creation of tasks
tasks annotated with real-time requirements
periodic (arrival, run-time) vs. aperiodic tasks (sporadic)
aperiodic much more difficult to analyze
admission control: set of feasible real-time tasks
selection mechanism
static priority vs. dynamic/deadline-based
hybrid: can have background tasks
mathematical model
Fundamentals
classic paper:
Scheduling Algorithms for Multiprogramming in a Hard-Real-Time Environment
(Liu/Layland)
periodic tasks; deadline = arrival of next request
immediate preemption
rate-monotonic (RM) scheduling
assign priorities according to arrival rate (not runtime)
admission analysis based on total CPU utilization: U ≤ n(2^(1/n) − 1) → ln 2 ≈ 69.3% for large n, commonly approximated as < 70%
starvation under overload
earliest deadline first (EDF) scheduling
dynamic priorities → sorting / computational overhead
optimal: any task set with total CPU utilization ≤ 100% is schedulable
no guarantees under overload
refinement for networking:
Generalized Processor Sharing
other popular name:
Weighted Fair Queueing
variation and extension of Liu/Layland's EDF
adding work conservation as an objective
network traffic: transmission rate, burst, non-preemptive packet transmission
later work uses traffic characteristics to build O(1) scheduling algorithms
all of the above are single-resource results!
optimal multi-processor scheduling is NP-hard
also keep in mind effects from resource sharing
Train Control
rate-monotonic scheduling seems sufficient
scheduling analysis vs. latency analysis
scheduling:
service tasks, identical engineers - arrival, runtime?
interrupt handling latency minimal
assess how many trains can be controlled
CPU not bottleneck - track limits number of trains running!
train control latency:
central event query and sensor attribution
central train/track operations - buffering?