CS 452/652 Winter 2022 - Lecture 20
February 28, 2022
Train Modelling - Velocity
- Kinematics: area of Mechanics (and thus Physics)
- studies "motion of objects without reference to the forces which cause the motion."
- model trains emulate real trains
- we model the model trains
- velocity is inherently an average: distance / time
Goals / Objectives
- location tracking
- collision avoidance
- accurate stopping
- train efficiency: complete trips as fast as possible
Experiments and Data
- how many speed levels to investigate?
- determine minimum speed (per segment?) to avoid getting stuck
- stopping from a lower speed is more accurate
- how much data can you feasibly collect?
- document and justify decisions!
Measurement
- recommendation: use a tight polling loop (no kernel); see the sketch after this list
- sensor → timestamp
- measurement errors
- signal delivery, Märklin processing: constant (?)
- constant errors cancel out when subtracting timestamps
- delivery to software: variable (polling loop)
- statistical modelling (see below)
- processing in software: small (?)
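A minimal sketch of such a polling loop, for illustration only: uart_putc/uart_getc, timer_read, and record_trigger are assumed helper names (not given in the lecture), and 0x85 is taken to be the "dump all 5 sensor modules" command.

```c
#include <stdint.h>

/* Assumed helpers (not part of the lecture): blocking Marklin UART I/O,
 * a free-running timer, and a raw-data logger (see the sketch further below). */
extern void uart_putc(uint8_t c);
extern uint8_t uart_getc(void);
extern uint32_t timer_read(void);
extern void record_trigger(int sensor, uint32_t ticks);

#define DUMP_ALL_SENSORS 0x85   /* 0x80 + 5: request a dump of all 5 modules */
#define SENSOR_BYTES 10         /* 2 bytes per module, 5 modules */

void poll_sensors(void) {
    uint8_t prev[SENSOR_BYTES] = {0};
    for (;;) {
        uart_putc(DUMP_ALL_SENSORS);
        uint32_t t = timer_read();              /* one timestamp per dump */
        for (int i = 0; i < SENSOR_BYTES; i++) {
            uint8_t b = uart_getc();
            uint8_t newly = b & (uint8_t)~prev[i];   /* rising edges only */
            for (int bit = 0; bit < 8; bit++)
                if (newly & (0x80u >> bit))          /* MSB = lowest-numbered sensor */
                    record_trigger(i * 8 + bit, t);
            prev[i] = b;
        }
    }
}
```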
Uncertainty
- assume sensor timestamps uniformly distributed across duration of polling loop (~70ms)
- time interval between two sensors is sum of uniform distributions
- general case: Irwin-Hall distribution
- special case (N=2): triangular distribution
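A short derivation of that special case (not in the original notes; it assumes the two polling offsets are independent and uniform over the loop duration w ≈ 70ms):

```latex
% measured timestamps: true time plus a uniform polling offset
\hat{t}_i = t_i + e_i, \qquad e_i \sim \mathrm{U}(0, w) \text{ independent}, \quad w \approx 70\,\mathrm{ms}
% the interval error is the difference of two offsets: triangular on [-w, w]
\hat{t}_2 - \hat{t}_1 = (t_2 - t_1) + (e_2 - e_1)
% zero mean, variances add:
\mathrm{E}[e_2 - e_1] = 0, \qquad
\mathrm{Var}[e_2 - e_1] = \tfrac{w^2}{12} + \tfrac{w^2}{12} = \tfrac{w^2}{6}
\quad\Rightarrow\quad \sigma = w/\sqrt{6} \approx 28.6\,\mathrm{ms}
```

So a single interval measurement carries roughly 29ms of standard deviation; averaging N independent interval samples shrinks that by a factor of √N.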
- sample mean/variance are good estimates of real mean/variance
- could also use min/max to estimate the midpoint?
- but velocity (distance/time) is non-linear in time!
- averaging time is straightforward; averaging velocity is not
- consider simple example: 100m distance, 2 time samples 10s, 20s
- speed: 10m/s, 5m/s → average would be 7.5m/s
- compute average time first, then speed: 100m / 15s ≈ 6.67m/s
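The same arithmetic as a tiny standalone check (illustrative only, not course code):

```c
#include <stdio.h>

int main(void) {
    const double dist = 100.0;           /* metres between sensors */
    const double t1 = 10.0, t2 = 20.0;   /* two measured times, seconds */

    /* averaging the per-sample velocities overstates the speed... */
    double v_bad = (dist / t1 + dist / t2) / 2.0;   /* 7.5 m/s */

    /* ...averaging the times first gives the physically meaningful answer */
    double v_good = dist / ((t1 + t2) / 2.0);       /* ~6.67 m/s */

    printf("avg of velocities:    %.2f m/s\n", v_bad);
    printf("velocity of avg time: %.2f m/s\n", v_good);
    return 0;
}
```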
- observation: measured data typically forms bimodal distribution
- why? is that a problem?
- low-frequency sampling (~70ms): timestamps are quantized to polling iterations, so measured intervals cluster around two adjacent multiples of the loop period → sample distribution bimodal (corner cases trimodal)
- general recommendation: keep the raw experiment data whenever possible
- bug in processing → repeat only processing
- new approach to data processing → possible with raw data
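One possible shape for such a raw log (an assumption, and one way to define the record_trigger helper used in the polling sketch above): append (sensor, timestamp) pairs to a flat buffer and dump it verbatim after the run, so a processing change never requires re-running the experiment.

```c
#include <stdint.h>

struct raw_sample {
    uint16_t sensor;   /* sensor index as reported by the polling loop */
    uint32_t ticks;    /* raw timer value, unconverted */
};

#define LOG_CAP 4096
static struct raw_sample log_buf[LOG_CAP];
static unsigned log_len = 0;

/* Record one trigger; silently stops when full (fine for an experiment). */
void record_trigger(int sensor, uint32_t ticks) {
    if (log_len < LOG_CAP) {
        log_buf[log_len].sensor = (uint16_t)sensor;
        log_buf[log_len].ticks = ticks;
        log_len++;
    }
}
```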
Dynamic/Continuous Calibration
- verify the validity of current estimates and/or update them
- long-term variability: track degradation? (or improvement)
- real-world variations (wear and tear): difficult to model
- need window of recent measurements
- basic technique: Exponentially Weighted Moving Average (EWMA)
- c: current estimate; d: next data sample; a: weighting factor
- c := c * (1 - a) + d * a
- no need to store array of samples
- with an appropriate choice of a (e.g. a = 1/2^k), a bitshift can replace the division (sketch below)
- can use similar approximation for standard deviation
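A minimal integer EWMA sketch, assuming a = 1/2^k so each update needs only a shift (names are illustrative):

```c
#include <stdint.h>

#define EWMA_SHIFT 3   /* a = 1/2^3 = 1/8 */

/* c := c*(1-a) + d*a, rearranged to c := c + (d - c)*a, so a single
 * arithmetic shift replaces the division (this relies on arithmetic
 * right shift of negative values, which common compilers/targets provide). */
static int32_t ewma_update(int32_t c, int32_t d) {
    return c + ((d - c) >> EWMA_SHIFT);
}

/* Same trick for spread: an EWMA of the absolute deviation |d - c|
 * is a cheap stand-in for the standard deviation. */
static int32_t ewma_dev_update(int32_t dev, int32_t c, int32_t d) {
    int32_t err = d - c;
    if (err < 0) err = -err;
    return dev + ((err - dev) >> EWMA_SHIFT);
}
```

For example, each new sensor-to-sensor velocity sample d can be fed into ewma_update to keep the velocity table current without storing any history.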