
RTS typical exam questions

What are Logical Clocks?

Logical clocks do not timestamp events with actual (physical) time values, but assign a number from a monotonically increasing counter. They have desirable properties, such as $a \to b \Rightarrow C(a) < C(b)$ (monotonicity/consistency) or sometimes even $a \to b \Leftrightarrow C(a) < C(b)$ (strong consistency).

One such concept is given by Lamport's Logical Clocks, which is a set of rules that establish a certain degree of order between messages:

  • each logical clock of a process $p_i$ has its own local view of the global time
  • non-negative integer $C_i$ represents the local time of $p_i$
  • strict clock update rules:
    1. $p_i$ increments $C_i$ for each local event: $C_i = C_i + 1$
    2. each message transports the value of the sender's clock, $C_{msg}$
    3. when $p_i$ receives a message: $C_i = \max\{C_i, C_{msg}\} + 1$

Lamport's Logical Clocks have monotonicity, but no strong consistency. For this, vector time is needed: keeping not only the local logical clock values, but a snapshot of all other logical clock values to reconstruct causality.
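
A minimal sketch of the three update rules above (the `LamportClock` class and its method names are illustrative, not part of the lecture material; sending is treated as a local event):

```python
class LamportClock:
    """Per-process logical clock following Lamport's update rules."""

    def __init__(self):
        self.time = 0  # non-negative integer C_i, the local view of global time

    def local_event(self):
        # Rule 1: increment C_i for each local event
        self.time += 1
        return self.time

    def send(self):
        # Sending counts as a local event; the returned value is attached
        # to the outgoing message as C_msg (Rule 2).
        self.time += 1
        return self.time

    def receive(self, c_msg):
        # Rule 3: C_i = max(C_i, C_msg) + 1
        self.time = max(self.time, c_msg) + 1
        return self.time
```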

What is a clock?

A digital physical clock typically consists of an oscillator vibrating at a certain frequency that is used to count individual microticks. Depending on the frequency of the oscillation, a second is composed of a differing number of microticks. The duration between two consecutive microticks is the granularity of the clock.

Every real clock will deviate slightly in its frequency from an ideal reference clock. This clock drift is parameterized as the rate parameter $\rho$.

Accuracy? Precision?

We denote precision with the symbol $\Pi$. In an ensemble of clocks, precision is given by the maximum offset of any two clocks (where the offset is the absolute time difference between two clocks at the same microtick). Without continuous resynchronization, free-running clocks have a $\Pi$ that continually grows larger.

Accuracy is another parameter defined in terms of offset, this time, however, between a clock and the ideal reference clock.

Limits of Time Measurement?

  1. A one tick difference for the timestamp produced by clocks that have observed the same event (due to limitations of clock synchronization).
  2. An observed interval $d_{obs}$ is in actuality bounded by $(d_{obs} - 2g) < d_{true} < (d_{obs} + 2g)$.
  3. The temporal order of events can be recovered if the difference between their timestamps is equal to or greater than 2 ticks.
  4. The temporal order can always be recovered if the event set is at least $0/3g$ precedent (i.e., any two events either happen at the same instant or are at least $3g$ apart).
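
A quick worked instance of limit 2 (the numbers are made up): with a granularity of $g = 1\,\text{ms}$ and an observed duration of $d_{obs} = 5\,\text{ms}$, the true duration is only known to satisfy

$$ 5\,\text{ms} - 2g < d_{true} < 5\,\text{ms} + 2g \quad\Longrightarrow\quad 3\,\text{ms} < d_{true} < 7\,\text{ms}. $$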

What is the Reasonableness Condition? What do we learn from it?

The reasonableness condition is defined in terms of the precision and global granularity: $g > \Pi$. It is called reasonableness condition, because it ensures that the synchronization error is bounded to less than one macrogranule. This in effect means that an event that is observed by all clocks will have timestamps that differ from each other by 1 tick at most. This is one of the limits of time measurement.

What is sparse time?

In a sparse timebase, events are only allowed to happen (and thus be timestamped) outside of specified intervals of inactivity. These intervals of inactivity are cleverly chosen to be large enough so as to make the temporal order of events undeniably clear without use of an agreement protocol.

In a dense timebase however, any two events that are closer together than $3g$ are not necessarily classifiable in terms of temporal order. Such a dilemma has to be resolved by an agreement protocol, during which the nodes typically exchange their views and apply a deterministic algorithm to arrive at the same conclusions. Such a routine takes additional computational resources and, perhaps more importantly, precious communication time. As such, a sparse timebase is sometimes preferable (wherever applicable).

What is a convergence function?

The convergence function $\Phi$ bounds the maximum offset between any two clocks immediately after resynchronization; it depends on the synchronization algorithm and the message latency jitter $\varepsilon$.

What is the Synchronization Condition?

The global time ticks must be periodically resynchronized to establish a global timebase with a specified precision.

$$ \Phi + \Gamma \leq \Pi $$

Where $\Phi$ is the convergence function, $\Gamma$ the drift offset and $\Pi$ the precision. The convergence function bounds the max offset between any two clocks after applying its algorithm. The drift offset on the other hand is the maximum offset that any two (free-running) clocks achieve right before the periodic resynchronization procedure.
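
A quick numeric sanity check (the values are illustrative): if the convergence function guarantees $\Phi = 4\,\mu\text{s}$ right after each resynchronization and the clocks can drift apart by at most $\Gamma = 6\,\mu\text{s}$ until the next one, then the achievable precision is bounded by

$$ \Pi \geq \Phi + \Gamma = 4\,\mu\text{s} + 6\,\mu\text{s} = 10\,\mu\text{s}, $$

i.e. the ensemble cannot be specified to be more precise than $10\,\mu\text{s}$.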

What synchronization algorithms exist?

  1. Central Master Algorithm: external synchronization, master node sends periodic resynchronization requests
  2. Fault Tolerant Average (FTA): internal synchronization, clients exchange their times and take an average except for some $k$ Byzantine clock values (outermost values)

What is the Central Master Algorithm?

Record local arrival time => compute difference between master/local clocks => adjust for latency (has to be known) => adjust local clock

Has a precision of $\Pi_{central} = \varepsilon + \Gamma$.
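
A sketch of the slave-side adjustment described above (function and parameter names are my own; the transmission latency is assumed to be known):

```python
def central_master_correction(local_arrival_time, master_time, known_latency):
    """Slave-side step of the Central Master Algorithm.

    The master's timestamp plus the (assumed known) transmission latency
    is what the slave's clock *should* read at the arrival instant.
    The returned correction term is added to the slave's local clock.
    """
    expected_local_time = master_time + known_latency
    return expected_local_time - local_arrival_time
```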

What is the FTA algorithm?

Globally exchange local times => discard the $k$ outermost time values => execute convergence function => apply correction term on local time

In the presence of $k$ Byzantine clocks there must be at least $N \geq 3k + 1$ clocks in total.

The convergence function is $\Phi(N, k, \varepsilon) = k \Pi / (N - 2k) + \varepsilon$.

The precision is $\Pi(N, k, \varepsilon, \Gamma) = (\varepsilon + \Gamma) \, \mu(N, k)$ (where $\mu(N, k)$ approaches $1$ as $N$ grows large relative to $k$).
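
A minimal sketch of one FTA round (names are illustrative; following the usual FTA formulation, the $k$ largest and $k$ smallest readings are the "outermost" values that get dropped):

```python
def fta_correction(clock_offsets, k):
    """Fault-Tolerant Average over the offsets of all N clocks
    relative to the local clock (the local clock contributes offset 0).

    Drops the k smallest and k largest values, averages the rest and
    returns the correction term for the local clock.
    """
    n = len(clock_offsets)
    assert n >= 3 * k + 1, "FTA tolerates k Byzantine clocks only if N >= 3k + 1"
    trimmed = sorted(clock_offsets)[k:n - k]  # discard the outermost values
    return sum(trimmed) / len(trimmed)
```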

What is an RT-entity? What does it consist of?

An RT-entity is a state variable assigned for a given purpose. It can either be physical or part of a computer system, that is to say, continuous or discrete. RT-entities usually change their state as a function of real time.

They can have static attributes: name, type, value domain, maximum rate of change

But also dynamic attributes: actual value at a point in time

An RT-entity can be modified inside its sphere of control, but only observed outside of it.

What is observation of an RT-entity?

Observations are snapshots of RT-entity states coupled with a specific instant in time, for example: Observation = <Name, Time, Value>. Usually this information is transmitted in messages.

There are two kinds of observations: state and event observations.

  • state observation: take absolute value of the state of the RT-entity with the time at which the RT-entity was observed
  • event observation: take relative value of "old" and "new" state of the RT-entity with the time at which the "new" state applies

What is an RT-image? What does temporal accuracy mean?

An RT-image is a "picture of an RT-entity". It is valid at a given point in time if it corresponds to the actual RT-entity in both value and time. It can be based on an observation or an estimation, and can be stored inside a computer or within a control system component.

Temporal accuracy refers to the problem of judging how long an RT-image remains valid. Some RT-images may become obsolete immediately after being sampled. To be more exact, we consider temporal accuracy to be the length of the recent history $d_{acc} = t_i - t_{i-k}$. This value is determined by the dynamics of the RT-entity in the controlled object. In general: if an RT-entity changes its value quickly, $d_{acc}$ must be short.

What is a parametric (phase-insensitive) RT-Image? What is phase sensitivity?

An RT-image is parametric or phase-insensitive if $d_{acc} > d_{update} + WCET_{send} + WCCOM + WCET_{rec}$. That is to say, if the time the RT-image stays accurate is longer than $d_{update}$ plus the worst-case times for sending, communication and reception. Another way to say it: the instant at which the image is sampled does not matter, because the image is going to be temporally accurate for long enough anyway.

An RT-image is phase-sensitive if $d_{acc} \leq d_{update} + WCET_{send} + WCCOM + WCET_{rec}$ and $d_{acc} > WCET_{send} + WCCOM + WCET_{rec}$. That is to say, the RT-image stays accurate for a shorter duration than the full update-and-transmission interval, making the instant at which it is sampled crucial. Another way to say it: only if the image is sampled at the right moment will it still be temporally accurate when it is used.
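
The two conditions can be expressed as a small check (the parameter names simply mirror the symbols above; this is only an illustration):

```python
def classify_rt_image(d_acc, d_update, wcet_send, wccom, wcet_rec):
    """Classify an RT-image by its temporal accuracy d_acc."""
    transmission = wcet_send + wccom + wcet_rec
    if d_acc > d_update + transmission:
        return "parametric / phase-insensitive"
    elif d_acc > transmission:
        return "phase-sensitive"
    else:
        # remaining case: the image expires before it can even be delivered
        return "unusable"
```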

When is a message permanent?

A message is permanent only once all messages that logically come before it have also been received. Acting on a non-permanent message might lead to errors or inconsistencies!

What is meant by Action Delay?

The action delay represents the interval between the sending of a message and the point in time when the receiver knows that the message is permanent.

In a properly designed system, the action delay is smaller than $d_{acc}$. (otherwise we need state estimation!)

What are the syntactic and semantic differences between state and event based messages?

What are time anomalies (on the layer of instructions)?

What is explicit flow control? implicit flow control?

What is Traffic Shaping? How does the Token Bucket algorithm work?

What are event-triggered protocols?

What are time-triggered protocols?

What is an End-to-End protocol?

What is redundancy? Why is it important?

What is the WCET?

The worst-case execution time (WCET) of a task must be known a priori to guarantee the schedulability of a system. It depends on the source code of the task, on properties of the object code generated by the compiler, on characteristics of the target hardware, but also on whether the application uses semaphores or mutexes (for C-tasks). In general, computing the WCET is undecidable, but it becomes possible under certain restrictions:

  • absence of unbounded control statements at the beginning of a loop
  • absence of recursive function calls
  • absence of dynamic data structures

It is difficult to determine the WCET. Measuring is not necessarily viable, as rare but possible executions might be missed. The WCET is also dependent on the state of the hardware (caches, pipelines, parallelism, perhaps even temperature). It is characteristic for a WCET analysis to make estimates for loop bounds.

What is tree-based WCET analysis?

The tree-based WCET analysis, sometimes also called the timing-schema approach, is a technique in which the syntax tree is traversed from the bottom up. A loop, for example, can be unrolled as $T(\text{for}) = (LB+1) \cdot T(\text{test}) + LB \cdot T(\text{body})$, an if-statement as $T(\text{if}) = T(\text{test}) + \max\{T(\text{then}), T(\text{else})\}$, and so forth.
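
A toy version of the bottom-up traversal (the node encoding and function name are invented for illustration):

```python
def wcet(node):
    """Bottom-up timing schema over a toy syntax-tree encoding.

    Nodes are tuples: ("stmt", cost), ("seq", [children]),
    ("if", test, then, else_), ("for", loop_bound, test, body).
    """
    kind = node[0]
    if kind == "stmt":                       # basic block with known cost
        return node[1]
    if kind == "seq":                        # sequence: sum of the parts
        return sum(wcet(child) for child in node[1])
    if kind == "if":                         # T(if) = T(test) + max(T(then), T(else))
        _, test, then, else_ = node
        return wcet(test) + max(wcet(then), wcet(else_))
    if kind == "for":                        # T(for) = (LB+1)*T(test) + LB*T(body)
        _, loop_bound, test, body = node
        return (loop_bound + 1) * wcet(test) + loop_bound * wcet(body)
    raise ValueError(f"unknown node kind: {kind}")

# e.g. a loop with bound 10, test cost 2 and body cost 5:
# wcet(("for", 10, ("stmt", 2), ("stmt", 5))) == (10+1)*2 + 10*5 == 72
```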

What is the IPET WCET analysis?

The implicit path enumeration technique (IPET) for WCET analysis is a method in which the program is given as a control-flow graph together with a goal function describing execution time. The worst-case execution time is found by maximizing this goal function; the worst-case path is characterized implicitly by the solution rather than enumerated explicitly.

This analysis method is advantageous because complex flow facts can be described, constraints can easily be generated and tools already exist. However, this sort of analysis is NP-hard in general, which leads to a high runtime. Additionally, not all flow facts are easy to integrate.
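
In its usual form the problem is cast as an integer linear program (a generic sketch, not any particular tool's notation): let $x_i$ be the execution count and $c_i$ the cost of basic block $i$, and $f_e$ the count of CFG edge $e$; then

$$ \max \sum_i c_i x_i \quad \text{s.t.} \quad x_{\text{entry}} = 1, \qquad \sum_{e \in \text{in}(i)} f_e = x_i = \sum_{e \in \text{out}(i)} f_e, \qquad x_{\text{body}} \leq LB \cdot x_{\text{header}}, $$

where the last constraint is an example of a loop-bound flow fact.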

What is the Adversary Argument?

An argument outlining why dynamic scheduling can never guarantee an optimal solution (clairvoyant solution) when tasks are sporadic. The argument goes as follows: Two tasks $T_1$ and $T_2$ have a period of $4$, although $T_2$ has a deadline of only $1$ and is sporadic. In an adversarial case, whenever a dynamic scheduler thinks it safe to initiate $T_1$, $T_2$ springs to life soon after. The schedule can't be met because $T_2$ has a deadline of only $1$.

What is the model we are using for scheduling?

single processor, fixed set of $n$ tasks, all tasks periodic (with known periods), worst-case execution times known, tasks are independent, overhead/context-switching time is ignored

What is Cyclic Executive Scheduling?

A completely static schedule is created before commissioning of the RTS, in which all tasks are executed before their deadlines. All tasks are repeated in a large periodic cycle, called the major cycle. The smallest period within the schedule is called the minor cycle; the major cycle consists of several such minor cycles. This makes the scheduling fully deterministic and time-driven.

It is often implemented in terms of procedure calls within a single program (sharing the same address space) instead of independent tasks at runtime. As such, it is difficult to accommodate long periods, because tasks cannot necessarily be split up. Sporadic or aperiodic tasks are not supported. But mutual exclusion is guaranteed by design.

What is Fixed Priority Scheduling (FPS)?

Each task is assigned a static priority (computed before runtime, derived from temporal requirements), the value of which is used to determine the execution order of available tasks.

What is Rate Monotonic Scheduling (RMS)?

Fixed Priority Scheduling, combined with a specific way to compute the priorities of tasks: the shorter the task period (the higher the rate of a task), the higher its priority, in other words $P_i > P_j \Leftrightarrow T_i < T_j$. When dispatching, the task with the highest priority among all ready tasks is selected.

Rate-monotonic priority assignment is optimal for FPS. This means, if the task set can be scheduled with a fixed priority schedule, it can also be scheduled with RMS.

What is the Schedulability Test?

In mathematics, if a necessary property is violated, we know for a fact that something does not work. If a sufficient property is fulfilled, we know that something works. Similarly with schedulability tests, there exist sufficient and necessary schedulability tests:

| result   | sufficient test | necessary test  |
|----------|-----------------|-----------------|
| positive | schedulable     | no assertion    |
| negative | no assertion    | not schedulable |

What is the Utilization-Based Schedulability Test?

The utilization is a value that is computed as follows: $U = \sum C_i / T_i$. In English: The utilization of a task is its worst-case execution time ("CPU time", WCET) over its period, while the total utilization is the sum of the utilization of all tasks.

A necessary schedulability test for RMS is $U \leq 1$.

A sufficient schedulability test for RMS is $U \leq n (2^{1/n} - 1)$. For big $n$, this utilization threshold approaches $\ln 2 \approx 0.69$. (by the theorem of Liu and Layland)
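
A small sketch of both tests (the task set at the bottom is made up for illustration):

```python
def rms_utilization_test(tasks):
    """tasks: list of (C_i, T_i) pairs, i.e. WCET and period per task.

    Returns the total utilization U and the Liu & Layland bound n*(2^(1/n)-1).
    U <= bound is sufficient for RMS schedulability; U <= 1 is only necessary.
    """
    n = len(tasks)
    utilization = sum(c / t for c, t in tasks)
    bound = n * (2 ** (1 / n) - 1)
    return utilization, bound

# Illustrative task set: C/T = 1/4, 2/8, 2/12
u, bound = rms_utilization_test([(1, 4), (2, 8), (2, 12)])
# u ≈ 0.667 <= bound ≈ 0.780, so this set passes the sufficient test
```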

How can RMS schedulability be tested?

By picturing the worst-case scenario: all tasks arrive at the same instant, the critical instant. If every task still meets its first deadline in this scenario, the schedule is guaranteed to work.

What is Earliest Deadline First (EDF) Scheduling?

Instead of assigning static priorities, the ready queue is ordered by deadline: the task with the earliest absolute deadline is always selected next. EDF can achieve a higher utilization than RMS in general. EDF is also less predictable and thus harder to reason about, and it is a bit harder to implement. EDF is not optimal in multiprocessor systems.

Utilization-Based Schedulability can be tested with a (familiar) condition that is both sufficient and necessary: $U = \sum C_i / T_i \leq 1$.

What is Response Time Analysis (for FPS)?

Response Time Analysis (for FPS) consists of a computation and verification of results:

  1. compute the worst-case response time $R_i$, which is the task's own computation time plus the "interference" $I_i$ from other (higher-priority) tasks: $R_i = C_i + I_i$
  2. check whether the task meets its deadline: $R_i \leq D_i$

In order for this computation to happen, we need to estimate the interference. The maximum number of preemptions of task $i$ by a higher-priority task $j$ is given by $\lceil \frac{R_i}{T_j} \rceil$, giving us a maximum interference of $\lceil \frac{R_i}{T_j} \rceil C_j$. To compute the worst-case response time we now consider all tasks with a higher priority: $R_i = C_i + \sum_{j \in hp(i)} \lceil \frac{R_i}{T_j} \rceil C_j$. This is a recurrence relation that needs to be solved.

The response-time analysis is a necessary and sufficient schedulability test. (passed test = no missed deadlines, failed test = deadlines will be missed)

(the response time can be appended with a blocking term $B_i$ if mutexes are involved)
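
A sketch of solving the recurrence by fixed-point iteration (the task indexing scheme and the assumption $D_i = T_i$ are my own simplifications):

```python
import math

def response_time(i, C, T):
    """Worst-case response time of task i via fixed-point iteration.

    C, T: lists of WCETs and periods, sorted by priority so that the
    higher-priority tasks hp(i) are simply the indices 0..i-1.
    Returns None if R_i grows beyond T_i (task deemed unschedulable,
    assuming D_i = T_i).
    """
    r = C[i]                                    # start with R_i = C_i
    while True:
        interference = sum(math.ceil(r / T[j]) * C[j] for j in range(i))
        r_next = C[i] + interference            # R_i = C_i + sum ceil(R_i/T_j)*C_j
        if r_next == r:                         # fixed point reached
            return r
        if r_next > T[i]:
            return None
        r = r_next
```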

What is Least-Laxity First (LLF) Scheduling?

Laxity is defined as the difference between the deadline and remaining computation time $L_i = D_i - C_i$. The task with the smallest laxity gets the highest (dynamic) priority and is therefore selected next. This is optimal in uniprocessor systems, but not so in multiprocessor systems.

The variant Modified LLF reduces the number of task switches.

What is Priority Inversion?

The phenomenon where a higher-priority task is blocked by a lower-priority task. This is due to blocking that happens because of mutual exclusion constraints: if a high-priority task needs a resource that a low-priority task currently holds, it has to wait. There can also be indirect blocking: a low-priority task holds a resource that a high-priority task needs, but cannot release it because a medium-priority task preempts it and executes first.

This leads to the concept of Priority Inheritance: when a low-priority task blocks one or more tasks of higher priority, it temporarily assumes the highest priority of the tasks it blocks. Deadlocks are still possible under priority inheritance, however.

A more advanced way to deal with this is the Priority Ceiling Protocol.

What is the Priority Ceiling Protocol?

  1. Give each task a default priority.
  2. Assign a priority ceiling for each resource that is equal to the priority of the highest-priority task that uses the resource.
  3. At run time, each task executes at a dynamic priority that is the maximum of its own static priority and the ceiling values of all the resources it has locked.
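
A minimal sketch of steps 2 and 3 (the data layout and names are invented; larger numbers mean higher priority here):

```python
def resource_ceilings(users):
    """Step 2: the ceiling of a resource is the priority of the
    highest-priority task that uses it.

    users: dict mapping resource name -> list of static priorities
    of the tasks that use that resource.
    """
    return {resource: max(priorities) for resource, priorities in users.items()}

def dynamic_priority(static_priority, locked_resources, ceilings):
    """Step 3: a task runs at the maximum of its own static priority
    and the ceilings of all resources it currently holds."""
    return max([static_priority] + [ceilings[r] for r in locked_resources])

# Example: R1 used by tasks with priorities 3 and 7, R2 only by priority 5
ceilings = resource_ceilings({"R1": [3, 7], "R2": [5]})
# A priority-3 task holding R1 runs at priority max(3, 7) = 7
assert dynamic_priority(3, ["R1"], ceilings) == 7
```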