CS452 F23 Lecture Notes
Lecture 16 - 16 Nov 2023
1. Nov 16th
1.1. Collision Avoidance
- full path reservation
- does new path for train \(T\) overlap with active paths of other trains? (see the overlap-check sketch after this list)
- if so, either
- wait
- re-route \(T\)
- advantages:
- no deadlocks
- disadvantages:
- slow, poor efficiency (coarse-grained reservations)
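A minimal sketch of the overlap check for full path reservation, assuming each path is stored as an ordered array of zone (or track-node) IDs; the names (Path, can_reserve_path) are hypothetical, not a course-provided API.

```c
#include <stdbool.h>

#define MAX_PATH_LEN 64

// A path is the ordered list of zone IDs the train will occupy.
typedef struct {
    int zones[MAX_PATH_LEN];
    int len;
} Path;

// True if the proposed path shares any zone with an active path.
static bool paths_overlap(const Path *proposed, const Path *active) {
    for (int i = 0; i < proposed->len; i++) {
        for (int j = 0; j < active->len; j++) {
            if (proposed->zones[i] == active->zones[j]) return true;
        }
    }
    return false;
}

// Full path reservation: admit the new path only if it conflicts with no
// active path; otherwise the caller waits or re-routes the train.
bool can_reserve_path(const Path *proposed, const Path *active_paths, int n_active) {
    for (int k = 0; k < n_active; k++) {
        if (paths_overlap(proposed, &active_paths[k])) return false;
    }
    return true;
}
```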
- on-demand reserve and release
- divide track into reservable zones (bounded by sensors?)
Figure 1: Reservable Track Zones
- zone granularity?
- larger zones: fewer reservations required, but increased conflict
- moving train must acquire next zone before moving into it (see the reservation sketch after this list)
- release previous zone when entering new one
- unable to acquire next zone?
- stop the requesting train and either
- wait, or
- re-route
- either way, need to be able to stop in time
- predict when train will enter next zone
- if the reservation is not granted, issue the stop command far enough in advance
- emergency stop if train enters unreserved zone
- can use the train reverse command as an emergency stop
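A minimal sketch of the on-demand reserve/release bookkeeping and the stop-in-time check described above, assuming one owning train per zone and distances in millimetres; zone_try_acquire, zone_release, and must_stop_now are hypothetical names.

```c
#include <stdbool.h>

#define NUM_ZONES 32
#define NO_OWNER  (-1)

static int zone_owner[NUM_ZONES];   // train holding each zone, or NO_OWNER

// Try to reserve `zone` for `train`; fails if another train holds it.
bool zone_try_acquire(int zone, int train) {
    if (zone_owner[zone] != NO_OWNER && zone_owner[zone] != train) return false;
    zone_owner[zone] = train;
    return true;
}

// Release the zone the train just left (on entering the next one).
void zone_release(int zone, int train) {
    if (zone_owner[zone] == train) zone_owner[zone] = NO_OWNER;
}

// Issue the stop command early enough: if the next zone has not been
// granted by the time the train is one stopping distance away, it must
// begin stopping now; waiting or re-routing happens once it is stopped.
bool must_stop_now(int mm_to_next_zone, int stopping_distance_mm) {
    return mm_to_next_zone <= stopping_distance_mm;
}
```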
- wait vs. re-route
- waiting is simple, but risk of deadlock (hold-and-wait)
- example scenarios:
- head-on conflict (always 2-cycle deadlock)
- following conflict
- cross-traffic
- deadlock detection
- cycle checking in wait-for graph (see the sketch after this list)
- nodes for lockable track regions and trains
- timeout
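A minimal sketch of the cycle check on a wait-for graph whose nodes are trains and lockable track regions, as above; the adjacency matrix and function names are hypothetical.

```c
#include <stdbool.h>

#define MAX_NODES 64   // trains + lockable track regions

// waits_for[a][b]: node a is waiting for node b
// (train -> zone it wants, zone -> train that currently holds it).
static bool waits_for[MAX_NODES][MAX_NODES];

static bool dfs(int node, int n, bool *visited, bool *on_stack) {
    visited[node] = true;
    on_stack[node] = true;
    for (int next = 0; next < n; next++) {
        if (!waits_for[node][next]) continue;
        if (on_stack[next]) return true;   // back edge: cycle, i.e. deadlock
        if (!visited[next] && dfs(next, n, visited, on_stack)) return true;
    }
    on_stack[node] = false;
    return false;
}

// True if the wait-for graph contains a cycle (a deadlock).
bool deadlock_detected(int n_nodes) {
    bool visited[MAX_NODES] = { false };
    bool on_stack[MAX_NODES] = { false };
    for (int i = 0; i < n_nodes; i++) {
        if (!visited[i] && dfs(i, n_nodes, visited, on_stack)) return true;
    }
    return false;
}
```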
- deadlock avoidance
- prioritize trains?
- resource (zone) ordering?
- must lock in resource order!
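A minimal sketch of resource ordering: every train requests its zones in ascending zone-ID order, so no wait-for cycle can form. It reuses the hypothetical zone_try_acquire/zone_release helpers from the reservation sketch and releases everything on failure, which also avoids hold-and-wait.

```c
#include <stdbool.h>
#include <stdlib.h>

// Assumed from the reservation sketch above (hypothetical helpers).
bool zone_try_acquire(int zone, int train);
void zone_release(int zone, int train);

static int cmp_int(const void *a, const void *b) {
    return *(const int *)a - *(const int *)b;
}

// Acquire all zones in the globally agreed order (ascending ID).
// On any failure, release what was acquired and report failure so the
// caller can wait (holding nothing) or re-route.
bool acquire_zones_in_order(int train, int *zones, int n) {
    qsort(zones, n, sizeof(int), cmp_int);
    for (int i = 0; i < n; i++) {
        if (!zone_try_acquire(zones[i], train)) {
            for (int j = 0; j < i; j++) zone_release(zones[j], train);
            return false;
        }
    }
    return true;
}
```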
- time-domain locking
- plan predicts zones that will be needed, and when
- reserve zone \(z\) from time \(t_1\) to \(t_2\)
- allows for advance reservation for entire route
- train control slows/stops train to ensure that it does not arrive too early/late
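A minimal sketch of time-domain locking, assuming a reservation is a (zone, \(t_1\), \(t_2\)) triple in clock ticks; two reservations conflict only if they name the same zone and their intervals overlap. Names are hypothetical.

```c
#include <stdbool.h>

typedef struct {
    int zone;
    int t1;   // ticks: earliest time the train may enter the zone
    int t2;   // ticks: latest time the train will have left it
} TimedReservation;

// Same zone and overlapping half-open intervals [t1, t2).
static bool conflicts(const TimedReservation *a, const TimedReservation *b) {
    return a->zone == b->zone && a->t1 < b->t2 && b->t1 < a->t2;
}

// Check a proposed reservation against all existing ones; if every zone on
// the route fits, the whole route can be reserved in advance, and train
// control later slows/stops the train so it arrives neither early nor late.
bool timed_reserve_ok(const TimedReservation *req,
                      const TimedReservation *existing, int n) {
    for (int i = 0; i < n; i++) {
        if (conflicts(req, &existing[i])) return false;
    }
    return true;
}
```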
1.2. On-Demand Switching
- ensure that switch is not moved while a train is on it
- integrate with reservation system?
- only switch reserved switches
- switch before notifying train that reservation is granted?
- ensures that the zone is empty when it is switched
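A minimal sketch of the grant-then-switch flow: the reservation server sets the switches inside a zone after granting it but before replying to the requesting train, so the zone is known to be empty while the switches move. marklin_set_switch and ZoneSwitchPlan are hypothetical stand-ins for whatever the Marklin/track servers actually provide.

```c
#include <stdbool.h>

// Hypothetical helpers assumed to exist elsewhere.
bool zone_try_acquire(int zone, int train);
void marklin_set_switch(int switch_id, char direction);   // 'S' or 'C'

typedef struct {
    int n_switches;
    int switch_ids[4];
    char directions[4];
} ZoneSwitchPlan;

// Grant the reservation, then throw the switches the route needs.
// The reply to the train is sent only after this returns, so the train
// cannot be sitting on a switch while it moves.
bool grant_and_switch(int zone, int train, const ZoneSwitchPlan *plan) {
    if (!zone_try_acquire(zone, train)) return false;   // caller waits or re-routes
    for (int i = 0; i < plan->n_switches; i++) {
        marklin_set_switch(plan->switch_ids[i], plan->directions[i]);
    }
    return true;
}
```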
1.3. Route Finding
- what is a good route when there is contention?
- with a single train, shortest path = fastest path
- not necessarily true when there is contention!
- conflict-oblivious routing
- find shortest route, as for TC1
- deal with contention while en-route
- conflict-aware routing
- simple form: route avoiding zone that cannot be reserved
- use to re-route around a reservation conflict
- more general: try to account for potential conflicts in route selection
- for example: score path based on number of potentially contended nodes and distance (see the scoring sketch after this list)
- use reservation zones as routing overlay?
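A minimal sketch of the scoring idea above: path length plus a fixed penalty for each potentially contended zone, with lower scores preferred; the penalty constant and struct names are hypothetical tuning choices.

```c
#include <stdbool.h>

#define NUM_ZONES 32
#define CONTENTION_PENALTY_MM 500   // how much one contended zone "costs"

typedef struct {
    int n_zones;
    int zone_ids[64];
    int length_mm;                  // physical length of the path
} CandidatePath;

// contended[z] is true if another train's route may also use zone z.
// Lower score is better: distance plus a penalty per contended zone.
int score_path(const CandidatePath *p, const bool contended[NUM_ZONES]) {
    int score = p->length_mm;
    for (int i = 0; i < p->n_zones; i++) {
        if (contended[p->zone_ids[i]]) score += CONTENTION_PENALTY_MM;
    }
    return score;
}
```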
1.4. Priorities
1.4.1. How to Set Priorities?
- Prioritize application-meaningful operations
- sensor-command activation cycle
- sensor polling loop
- UI
- route planning, route setup
- Mapping operations to tasks
- problem: same task doing high- and low-priority operations
- route planning and train control (sensor-activation)
- time services for low- and high-priority tasks
- train server handling high (stop) and low (go) priority commands
- static solution: split tasks, e.g., high- and low-priority train servers
- not always practical, e.g., want a single train server to ensure one command at a time to the Marklin
- servers often have to handle requests from processes with different priorities
1.4.2. Priority Inversions
- Assume lower numbers are higher priorities, and task IDs reflect their priorities
- Scenario 1:
- \(T_2\) is running, server \(T_3\) is idle (blocked)
- \(T_2\) gets preempted by \(T_1\), which sends a request to idle server \(T_3\) then blocks
- problem: server \(T_3\) does not run until \(T_2\) finishes. \(T_1\) is effectively waiting for \(T_2\)
- Scenario 2:
- server \(T_3\) is handling request from \(T_2\)
- server gets preempted by \(T_1\), which sends it a request
- problem: \(T_1\)’s work is waiting for \(T_2\)’s work
- Scenario 3:
- server \(T_3\) is running some request, and has a queued request from \(T_2\) (blocked waiting for response)
- \(T_1\) preempts \(T_3\) and sends it a request, which queues behind \(T_2\)’s request. \(T_1\) blocks waiting for response.
- problem: \(T_1\)’s work waiting for \(T_2\)’s work
- Dynamic Prioritization (tools for combatting priority inversions)
- promote server priority to priority of active request
- would fix the problem in Scenario 1
- promote server priority to that of highest priority queued request
- would help with Scenario 2
- \(T_1\) still waits for \(T_2\)’s request to finish, but not as long
- prioritize Receive queue
- would help with Scenario 3
- \(T_1\) waits for request the server was already running, but not for \(T_2\)’s request
- Kernel can support dynamic prioritization
- carry sender’s priority with message on Send()
- priority queue of waiting Send()ers
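A minimal sketch of a sender queue ordered by the priority carried with each Send(), assuming lower numbers mean higher priority as above; this is illustrative, not the kernel's actual data structure. The kernel can also promote the server to the priority of the sender it dequeues.

```c
#include <stddef.h>

// A task blocked in Send() to this server, with its priority carried along.
typedef struct Sender {
    int tid;
    int priority;            // lower number = higher priority
    struct Sender *next;
} Sender;

// Insert in priority order; FIFO among senders of equal priority.
void send_queue_insert(Sender **head, Sender *s) {
    while (*head != NULL && (*head)->priority <= s->priority) {
        head = &(*head)->next;
    }
    s->next = *head;
    *head = s;
}

// Dequeue the highest-priority waiting sender for the next Receive();
// the server's priority can be promoted to s->priority while it handles
// the request, then restored on Reply().
Sender *send_queue_pop(Sender **head) {
    Sender *s = *head;
    if (s != NULL) *head = s->next;
    return s;
}
```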