Real-time embedded systems have become much more complex due to the introduction of substantial new functionality in a single application and to the running of multiple applications concurrently. This increases the dynamic nature of today's applications and systems and tightens their constraints in terms of deadlines and energy consumption. State-of-the-art design methodologies try to cope with these issues by identifying the most frequently occurring cases and dealing with them separately, thereby reducing the newly introduced complexity.
Embedded systems usually comprise processors that execute domain-specific applications. These systems are software intensive, having much of their functionality implemented in software running on one or multiple processors, leaving only the high-performance functions implemented in hardware. Typical examples include TV sets, cellular phones, wireless access points, MP3 players and printers. Most of these systems run multimedia and/or telecom applications and support multiple standards. These applications are therefore highly dynamic, i.e., their execution costs (e.g., number of processor cycles, memory usage, energy) depend on the environment (e.g., input data, processor temperature).
Scenario-based design in general has been used for some time in both the hardware and software design of embedded systems. In both cases, scenarios concretely describe, in an early phase of the development process, the use of a future system. These scenarios are called use-case scenarios. They focus on the application's functional and timing behaviour and on its interaction with the users and environment, not on the resources required by a system to meet its constraints. These scenarios serve as input for design approaches centred around the application context.
The present disclosure, however, focuses on a different and complementary type of scenario, called system scenarios. System scenario-based design methodologies have recently been applied successfully to reduce the costs of dynamic embedded systems. They provide a systematic way of constructing workload-adaptive embedded systems and have already been proposed for the multimedia and wireless domains. At design-time, the system is separately optimized for a set of system scenarios with different costs, e.g., alternative mappings and schedulings of tasks on multiprocessor systems. At run-time, certain parameters are monitored, changes in the current scenario situation are detected, and the mapping and scheduling are changed accordingly. To date, only so-called control variable-based system scenario approaches based on bottom-up clustering have been studied in depth in the literature.
State-of-the-art advanced multiprocessor system-on-chip platforms and applications typically require dynamic scheduling of instructions and threads to meet stringent performance and low-energy requirements. The Task Concurrency Management (TCM) methodology is a systematic approach for mapping an application onto a multiprocessor platform in real-time embedded systems. It is based on a two-step scheduling technique: the computation-intensive parts of the mapping and scheduling are performed at design-time, leaving for run-time only the low-overhead computation of the actual schedule.
An application is divided into thread frames, each consisting of a number of thread nodes, as will be further detailed below. At design-time, each thread node is profiled to find its execution time and power consumption for all possible input data and on all possible processors of the platform. By profiling is meant the simulation or hardware-based emulation of the system behaviour to obtain the system responses for a representative set of input stimuli. The resulting numbers are used to find all thread frame schedules with an optimal trade-off between execution time and energy consumption. A schedule candidate is optimal if, e.g., it has the lowest energy consumption for a given execution time. As a result, each thread frame has a set of optimal schedules along a curve in a two-dimensional execution time-energy consumption solution space. In cases where different sets of input data give too wide a spread in a thread node's execution times, system scenarios can be used to find a thread frame Pareto-curve for each individual scenario. Each system scenario then corresponds to a different (non-overlapping) cluster of run-time situations that are similar enough in their Pareto-curve positions (close enough in the N-dimensional trade-off space, see further below). At run-time, input data is monitored to keep track of the currently active scenario. It should be stressed, though, that system scenario-based approaches for identifying clusters of similar run-time situations are usable in many more contexts than thread-frame scheduling of dynamic embedded systems alone. This disclosure focuses on the broadly applicable system scenario concept itself.
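By way of illustration, the selection of such Pareto-optimal schedule candidates can be sketched as follows. The candidate list and cost figures are hypothetical; in practice they would result from the per-thread-node profiling described above.

```python
# Sketch: filter thread-frame schedule candidates down to the Pareto-optimal
# set in the (execution time, energy) trade-off space. All numbers are
# hypothetical stand-ins for profiled costs.

def pareto_front(candidates):
    """Keep only schedules not dominated in both execution time and energy."""
    front = []
    for time, energy in sorted(candidates):   # sort by execution time
        # after sorting, a candidate is dominated iff a faster (or equally
        # fast) schedule already achieves lower-or-equal energy
        if not front or energy < front[-1][1]:
            front.append((time, energy))
    return front

candidates = [(10, 9.0), (12, 5.0), (12, 6.5), (15, 4.0), (18, 4.5), (20, 3.0)]
print(pareto_front(candidates))
# each retained point has the lowest energy for its execution-time budget
```

Each point on the resulting curve is a distinct operating point the run-time scheduler can choose from for a thread frame.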
The scenarios are derived from the combination of the application behaviour and the application mapping on the system platform. These scenarios are used to reduce the system cost by exploiting information about what can happen at run-time to make better design decisions at design-time, and to exploit the time-varying behaviour at run-time. While use-case scenarios classify the application's behaviour based on the different ways the system can be used in its overall context, system scenarios classify the behaviour based on the multi-dimensional cost trade-off during the implementation trajectory. By optimizing the system per scenario and by ensuring that the actual system scenario is predictable at run-time, a system setting can be derived per scenario to optimally exploit the system scenario knowledge.
FIG. 1 depicts a design trajectory using use-case and system scenarios. It starts from a product idea, for which the product's functionality is manually defined as use-case scenarios 1, 2, and 3. These scenarios characterize the system from a user perspective and are used as an input to the design of an embedded system that includes both software and hardware components. In order to optimize the design of the system, the detection and usage of system scenarios augments this trajectory (the cost perspective box of FIG. 1). The run-time behaviour of the system is classified into several system scenarios (A and B in FIG. 1), with similar cost trade-offs within a scenario. For each individual scenario, more specific and aggressive design decisions can be made. The sets of use-case scenarios and system scenarios are not necessarily disjoint, and one or more use-case scenarios may correspond to a single system scenario. Still, they usually do not coincide: it is likely that a use-case scenario is split into several system scenarios, or even that several system scenarios each intersect several use-case scenarios.
The system scenario-based design methodology is a powerful tool that can also be used for fine-grain optimizations at the task abstraction level and for the simultaneous optimization of multiple system costs. The ability to handle multiple and non-linear system costs differentiates system scenario-based design methodologies from the dynamic run-time managers intended for Dynamic Voltage and Frequency Scaling (DVFS) type platforms. DVFS methodologies concentrate on the optimization of a single cost (the energy consumption of the system) that scales monotonically with frequency and voltage. They select the system reconfiguration directly from the current workload situation. This, however, cannot be generalized to costs that depend on the parameters in a non-uniform way, as that would make the decision in one run-time step too complex. Scenario-based design methodologies solve this problem with a two-stage approach at run-time: they first identify which scenario the working situation belongs to, and then choose the best system reconfiguration for that scenario. Since the relationship between the parameters and the costs is, in practice, very complex, the scenario identification is performed at design-time.
When using system scenarios, both information about the system at design-time and the occurrence of certain types of input at run-time, which results in particular (groupings of) run-time situations, are considered. The term “run-time situation” (RTS) is an important concept in task-level system scenario-based design methodologies. An RTS is a piece of system execution with an associated cost that is treated as a unit. The cost usually consists of one or several primary costs, such as quality and resource usage (e.g., number of processor cycles, memory size). One complete run of the application on the target platform is a sequence of RTSs. The current RTS is known only at the moment it occurs. However, at run-time, using various system parameters—so-called RTS parameters—it can be predicted in advance in which RTS the system will run next for a non-zero future time window. If all possible RTSs in which a system may run are known at design-time, and the RTSs are considered in the different steps of the embedded system design, a better optimized (e.g., faster or more energy efficient) system can be built, because specific and aggressive design decisions can be made for each RTS. These per-RTS optimizations lead to a smaller, cheaper and more energy efficient system that delivers the required quality. In general, any combination of N cost dimensions may be targeted. However, the number of cost dimensions and all possible values of the considered RTS parameters may lead to an exponential number of RTSs. This would degenerate into a long and overly complicated design process that does not deliver the optimal system. Moreover, the run-time overhead of detecting all these different RTSs would be too high compared to the expected gain over their (quite) short time windows.
To avoid this situation, the RTSs are classified and clustered from an N-dimensional cost perspective into system scenarios, such that the cost trade-off combinations within a scenario are always fairly similar (i.e., their Euclidean distance in the N-dimensional cost space is relatively small), the RTS parameter values allow an accurate prediction, and a system setting can be defined that makes it possible to exploit the scenario knowledge and optimizations.
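A minimal sketch of such a clustering is given below. It uses a greedy, single-pass approximation with a hypothetical distance threshold and hypothetical two-dimensional cost vectors; an actual implementation would perform the full bottom-up pairwise merging.

```python
# Sketch: cluster RTSs in the N-dimensional cost space. RTSs whose cost
# vectors lie within a Euclidean-distance threshold of a scenario centroid
# are merged into that scenario. Threshold and cost vectors are hypothetical.
import math

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def cluster_rts(cost_vectors, threshold):
    """Greedy single-pass clustering: assign each RTS to the first scenario
    whose centroid is close enough, else open a new scenario."""
    scenarios = []  # each scenario: {"centroid": ..., "members": [...]}
    for rts in cost_vectors:
        for scen in scenarios:
            if euclidean(rts, scen["centroid"]) <= threshold:
                scen["members"].append(rts)
                n = len(scen["members"])
                scen["centroid"] = tuple(
                    sum(m[i] for m in scen["members"]) / n
                    for i in range(len(rts)))
                break
        else:
            scenarios.append({"centroid": rts, "members": [rts]})
    return scenarios

# two cost dimensions: (processor cycles in millions, energy in mJ)
rts_costs = [(1.0, 2.0), (1.1, 2.1), (5.0, 0.9), (5.2, 1.0), (1.05, 1.95)]
print(len(cluster_rts(rts_costs, threshold=0.5)))  # two scenarios emerge
```

The threshold directly trades the number of scenarios (run-time detection overhead) against how similar the cost trade-offs within each scenario are.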
In the paper ‘System scenario based design of dynamic embedded systems’ (V. Gheorghita et al., ACM Trans. on Design Automation of Electronic Systems, Vol. 14, No. 1, January 2009), a general methodology is proposed that can be instantiated in all contexts in which a system scenario concept is applicable. Important steps of the method comprise choosing a good scenario set, deciding which scenario to switch to (or whether to switch at all), using the scenario to change the system knobs, and updating the scenario set based on new information gathered at run-time. This leads to a five-step methodology with design-time and run-time phases. The first step, identification, is somewhat special in this respect, in the sense that its run-time phase is merged into the run-time phase of the final step, calibration: the initial identification is incorporated in the detection step, but a re-identification can happen at run-time in the calibration step.
The various steps are now introduced, with special attention paid to the steps most relevant to the present disclosure.
Design-Time Identification of the Scenario Set
In order to gain the advantages offered by a scenario approach, it is necessary to identify the different scenarios that group all possible RTSs. A scenario identification technique therefore lies at the heart of any system scenario-based design methodology. It determines how the different observed RTSs should be divided into groups with similar costs, i.e., the system scenarios, and how these system scenarios should be represented to make their run-time prediction as simple as possible. In prior-art system scenario-based solutions, the parameters that decide the scenario boundaries have been limited to control variables, or variables with a limited number of distinct values.
In the identification step the relevant RTS parameters are selected and the RTSs are clustered into scenarios. This clustering is based on the cost trade-offs of the RTSs, or an estimate thereof. The identification step should take into account, as much as possible, the overhead costs introduced into the system by the subsequent steps of the methodology. As this is not easy to achieve, an alternative solution is to refine the scenario identification (i.e., to further cluster RTSs) during those steps.
A task-level scenario identification can be split into two steps. In the first step, the variables in the application code are analyzed, either statically or through profiling of the application with a representative data set. The variables having the most impact on the run-time cost of the system are determined. These variables are called RTS parameters, denoted ξ1, ξ2, . . . , ξk, and are used to characterize system scenarios and to design the scenario prediction mechanism. Typically, a small set of RTS parameters is selected to keep the run-time prediction overhead low.
The output of the first step is the set of selected RTS parameters and, for each RTS i, its RTS signature, given by Equation 1 below:

r(i) = ξ1(i), ξ2(i), . . . , ξk(i); c(i),  (1)

containing the parameter values ξ1(i), ξ2(i), . . . , ξk(i) and the corresponding task cost c(i), i.e., each run instance of each task has its own RTS signature. The number N of RTS signatures will hence be very large. Depending on the number of RTS parameters and how many different values each of them can take, there is a small or large number of different RTS signatures. This is important for the complexity of the second step of the scenario identification. In this second step, the RTS signatures are divided into groups with similar costs—the system scenarios. This can be done by a bottom-up clustering of RTS signatures, with the resulting multi-valued decision diagram (MDD) used as a predictor for the upcoming system scenario (see the scenario prediction step).
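The grouping of RTS signatures into scenarios, and the resulting lookup structure, can be sketched as follows. The parameter values, costs and similarity threshold are hypothetical, and a flat dictionary stands in for the actual multi-valued decision diagram for simplicity.

```python
# Sketch: RTS signatures (Eq. 1) grouped into scenarios by cost similarity.
# A dictionary from parameter values to scenario index stands in for the
# MDD predictor. All values are hypothetical.
from collections import namedtuple

Signature = namedtuple("Signature", ["params", "cost"])  # r(i) = xi_1..xi_k; c(i)

signatures = [
    Signature(params=(0, "low"),  cost=120),   # cost, e.g., kcycles
    Signature(params=(0, "high"), cost=480),
    Signature(params=(1, "low"),  cost=130),
    Signature(params=(1, "high"), cost=500),
]

def assign_scenarios(sigs, threshold=50):
    """Group signatures whose costs differ by at most `threshold` and
    return the parameter-values -> scenario-index lookup table."""
    scenarios = []   # one representative cost per scenario
    mapping = {}
    for sig in sigs:
        for idx, rep in enumerate(scenarios):
            if abs(sig.cost - rep) <= threshold:
                mapping[sig.params] = idx
                break
        else:
            mapping[sig.params] = len(scenarios)
            scenarios.append(sig.cost)
    return mapping

print(assign_scenarios(signatures))
# signatures with similar costs share a scenario regardless of param values
```

Note how two different parameter combinations can map to the same scenario when their costs are close; that is exactly the clustering that keeps the scenario set small.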
As mentioned, the impact of RTS parameters can be evaluated either statically or through profiling. When based on purely static analysis of the code, the frequencies of occurrence of the different RTS parameter values are not taken into consideration. Therefore, system scenarios may be produced that almost never occur. This technique can be extended with profiling information, and then forms a system scenario set that exploits run-time statistics. This approach typically leads to only a limited number of parameters being labelled as important enough to incorporate in the identification step, which is crucial to limit the complexity.
The existing scenario identification approaches cannot be used in a system with many-valued RTS parameters, as these cause an explosion of states in the MDD and of the associated run-time/implementation costs. In that case the bottom-up clustering approach has a fundamental complexity limitation that cannot be overcome by incremental changes alone.
Prediction of the Scenario
At run-time, a scenario has to be selected from the scenario set based on the actual parameter values. This selection process is referred to as scenario prediction. In the general case, the parameter values may not be known before the RTS starts, so they may have to be estimated. Prediction is not a trivial task; both the number of parameters and the number of scenarios may be considerable, so a simple lookup in a list of scenarios may not be feasible. The prediction incurs a certain run-time overhead, which depends on the chosen scenario set. Therefore, the scenario set may be refined based on the prediction overhead. In this step two decisions are made at design-time, namely the selection of the run-time detection algorithm and the refinement of the scenario set.
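A minimal sketch of such a run-time prediction is given below, assuming a design-time prediction table and a conservative backup scenario for unseen parameter combinations; the table contents, scenario names and parameter values are hypothetical.

```python
# Sketch of run-time scenario prediction: look up the scenario for the
# observed RTS-parameter values, falling back to a conservative backup
# scenario when the combination was never seen at design-time.

PREDICTION_TABLE = {        # built at design-time from the scenario set
    ("video", 0): "A",
    ("video", 1): "B",
    ("audio", 0): "A",
}
BACKUP_SCENARIO = "B"       # worst-case scenario, always safe to select

def predict_scenario(rts_params):
    return PREDICTION_TABLE.get(rts_params, BACKUP_SCENARIO)

print(predict_scenario(("video", 0)))   # known combination -> "A"
print(predict_scenario(("audio", 7)))   # unseen combination -> backup "B"
```

The backup scenario guarantees that the constraints are still met when the monitored parameters fall outside the design-time characterization; in a real system the flat table would be replaced by a compact decision structure to bound the lookup overhead.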
Exploitation of the Scenario Set
At design-time, the exploitation step essentially builds on the optimization that would be applied in the absence of a scenario approach. A scenario approach can simply be put on top of this optimization by applying it to each scenario of the scenario set separately. Using the additional scenario information enables better optimization.
Switching from One Scenario to Another
Switching is the act of changing the system from one set of knob positions (see below) to another. This implies some overhead (e.g., time and energy), which may be large (e.g., when migrating a task from one processor to another). Therefore, even when a scenario different from the current one is predicted, it is not always a good idea to switch to it, because the overhead may be larger than the gain. The switching step selects at design-time an algorithm that is used at run-time to decide whether or not to switch. It also introduces into the application the mechanism for implementing the switch, and refines the scenario set by taking the switching overhead into account.
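The run-time switching decision can be sketched as a simple gain-versus-overhead comparison; the cost model and all numbers below are hypothetical.

```python
# Sketch: switch to the predicted scenario only when the expected gain over
# the remaining time window outweighs the switching overhead.

def should_switch(cost_current, cost_predicted, window_length, switch_overhead):
    """Costs are per-RTS (e.g., energy in mJ); window_length is the expected
    number of RTSs for which the predicted scenario remains active."""
    gain = (cost_current - cost_predicted) * window_length
    return gain > switch_overhead

# large gain over a long window: switching pays off
print(should_switch(cost_current=5.0, cost_predicted=4.0,
                    window_length=10, switch_overhead=3.0))   # True
# marginal gain: better to stay in the current scenario
print(should_switch(cost_current=5.0, cost_predicted=4.8,
                    window_length=10, switch_overhead=3.0))   # False
```

A real decision algorithm would also account for the uncertainty of the prediction and of the window length, both of which this sketch assumes to be known.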
Calibration
The previous steps of the methodology make different choices (e.g., scenario set, prediction algorithm) at design-time that depend very much on the values the RTS parameters typically take at run-time; it makes no sense to support a certain scenario if in practice it (almost) never occurs. To determine the typical values of the parameters, profiling augmented with static analysis can be used. However, the ability to predict the actual run-time environment, including the input data, is obviously limited. Therefore, support is also foreseen for infrequent calibration at run-time, which complements all the methodology steps previously described.
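One aspect of such run-time calibration can be sketched as follows: tracking how often each scenario actually occurs and dropping scenarios whose observed share stays below a threshold. The scenario names and the threshold are hypothetical.

```python
# Sketch of infrequent run-time calibration: count scenario occurrences and
# prune scenarios that (almost) never occur in practice.
from collections import Counter

class Calibrator:
    def __init__(self, scenario_set, min_share=0.05):
        self.scenarios = set(scenario_set)
        self.min_share = min_share      # minimum occurrence share to keep
        self.counts = Counter()

    def observe(self, scenario):
        """Called on every scenario detection; must be very cheap."""
        self.counts[scenario] += 1

    def recalibrate(self):
        """Called infrequently; returns the pruned scenario set."""
        total = sum(self.counts.values())
        if total:
            self.scenarios = {s for s in self.scenarios
                              if self.counts[s] / total >= self.min_share}
        return self.scenarios

cal = Calibrator({"A", "B", "C"})
for s in ["A"] * 60 + ["B"] * 39 + ["C"]:   # "C" occurs only 1% of the time
    cal.observe(s)
print(sorted(cal.recalibrate()))            # "C" is pruned from the set
```

A full calibration step could also trigger a re-identification, i.e., add new scenarios for frequently observed RTS signatures that fit none of the existing ones.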
Hence, there is a need for an improved method for system scenario based design.