(1) Field of the Invention
The present invention relates to a method of testing an electronic system.
The term “electronic system” is used to designate a system comprising electronic and/or electrical and/or computer equipment.
(2) Description of Related Art
The electronic system may in particular be arranged on board an aircraft, and specifically a rotorcraft. In addition, an electronic system may have at least one screen suitable for displaying images that are of use to an operator, such as a pilot, for example. In the context of an aircraft, an electronic system is more specifically referred to as an “avionics” system.
The term “image” is used herein to designate a graphical representation as such, and not, for example, a digital encoding of such an image. Nevertheless, an image may be obtained from its graphical representation on a screen, or by capturing its digital representation by means of a video output or a digital bus, for example.
For example, a complex system includes not only pieces of equipment, but also wired and/or wireless connections between such pieces of equipment. Such connections may comprise digital buses and also analog and/or discrete connections for passing information, such as values relating to parameters or indeed data relating to an image, for example.
Amongst this information, an electronic system makes use of input data items referred to more simply as “inputs” injected into the electronic system for processing. Furthermore, the electronic system generates output data items referred to more simply as “outputs”, that may possibly be in the form of images.
By way of example, data relating to a pressure as measured by a Pitot tube may be injected into an electronic system and then processed in order to lead to a speed being displayed on a screen. The pressure then represents an input to the electronic system, whereas the image including the speed represents an output from the electronic system.
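By way of illustration only, the pressure-to-speed processing mentioned above can be sketched as follows, assuming the simple incompressible-flow relation v = sqrt(2q/ρ) between dynamic pressure q and airspeed v; the function name, units, and sea-level density value are assumptions and are not taken from the source.

```python
import math

# Illustrative sketch (not the source's method): converting a Pitot-tube
# pressure measurement into a speed for display, assuming incompressible
# flow: dynamic pressure q = p_total - p_static, and v = sqrt(2*q/rho).
SEA_LEVEL_AIR_DENSITY = 1.225  # kg/m^3, ISA sea-level value (assumption)

def airspeed_from_pressures(p_total_pa: float, p_static_pa: float,
                            rho: float = SEA_LEVEL_AIR_DENSITY) -> float:
    """Return airspeed in m/s from total and static pressures in Pa."""
    q = p_total_pa - p_static_pa  # dynamic pressure
    if q < 0:
        raise ValueError("total pressure must be >= static pressure")
    return math.sqrt(2.0 * q / rho)

# A dynamic pressure of 612.5 Pa corresponds to roughly 31.6 m/s.
speed = airspeed_from_pressures(101_937.5, 101_325.0)
```

In the terms of the text above, the two pressures play the role of inputs to the electronic system, and the computed speed is the value that ends up in the displayed image, i.e. an output.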
During a development stage, a manufacturer may need to test an electronic system in order to verify that the electronic system complies with specifications. The development of an electronic system may give rise to multiple variants before the system is finalized. A manufacturer then tests these different variants in order to verify that they operate properly, and where appropriate in order to modify the electronic system. By way of example, it is possible for several tens or even several hundreds of versions to be tested before the electronic system of an aircraft is finalized.
Furthermore, a manufacturer may need to test an electronic system during production stages. The manufacturer may test each electronic system as it is fabricated and before it is put on the market.
During a test session, an electronic system may be tested by following a test procedure. The test procedure may require a plurality of distinct operating points to be tested by setting a variety of values on certain inputs of the electronic system. The term “test case” is used to designate a stable state of the electronic system obtained from certain inputs to the electronic system.
The operation of the electronic system then leads to outputs being obtained from the electronic system. These outputs can be analyzed in real time by an operator or by an automatic device for verifying that the outputs correspond to the expected outputs.
The outputs obtained may possibly be stored for subsequent analysis.
In order to test the system, one method thus suggests establishing a test case that sets values for the inputs of the system, and then verifying the behavior of the system with the help of the outputs that are obtained. A plurality of distinct test cases may be applied in order to test a variety of operating points of the system.
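The test-case method described above can be sketched as follows. The interface names (`TestCase`, `set_input`, `wait_until_stable`, `get_output`) are assumptions chosen for illustration; the source does not specify any API.

```python
# Illustrative sketch of the test-case method described above: set values
# on the system's inputs, wait for the system to stabilize, then verify
# the outputs against the expected outputs. All names are assumptions.
from dataclasses import dataclass

@dataclass
class TestCase:
    inputs: dict    # values injected into the system's inputs
    expected: dict  # outputs expected once the system is stable

def run_test_case(system, case: TestCase) -> bool:
    """Apply a test case's inputs, wait for stability, compare outputs."""
    for name, value in case.inputs.items():
        system.set_input(name, value)
    system.wait_until_stable()  # input values must stabilize first
    outputs = {name: system.get_output(name) for name in case.expected}
    return outputs == case.expected
```

A plurality of such `TestCase` objects, run one after another, corresponds to testing the distinct operating points mentioned above.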
At each instant, the behavior of the system thus depends on the initial values of the inputs, and possibly also on how those values change once the test has started.
It can be difficult to observe the behavior of the system. For example, observing the outputs of the system may require waiting for the values of the inputs to stabilize and/or waiting for a processing time to expire.
Under such conditions, a first method consists in causing test cases to be run in an interactive manner, with their results being analyzed in full by a tester operator, either during or after the tests. It is therefore necessary for an operator to be present in order to analyze all of the results.
In a second method, the test is automated.
It can be difficult to automate a test procedure.
An electronic system may make use of numerous pieces of equipment and numerous connections for passing a large amount of data, e.g. of the order of several thousand different data items.
In order to guide the development of the electronic system and the tests that need to be envisaged, the electronic system may be subdivided virtually, on a functional basis.
Each tested function is then analyzed by an operator thus working on a functional perimeter that is limited. The operator thus pays attention only to certain outputs from the electronic system, i.e. to certain items of data and/or certain portions of images. If the test is performed interactively, the operator in charge of analyzing the results takes account only of information that is pertinent for the function being tested.
Nevertheless, the test cases used for validating any one function may have effects on the behaviors of other functions. The person skilled in the art then refers to “side effects” that may influence test results. Such a side effect may be difficult to observe and to predict, in particular when the test is concentrating on a single function.
For example, a screen may display an image presenting outputs associated with a plurality of functions. Observing a portion of the image that relates to a given function does not make it possible to detect a side effect that shows up in some other portion of the image in question. Furthermore, this side effect may not be understood by an operator trained to study some other function of the system.
Furthermore, performing tests in a totally automatic manner assumes that the results of one test session are compared with the results that are to be expected, without human intervention. In order to achieve this target, the results must be reproducible from one session to another.
This constraint can be difficult to satisfy.
In order to ensure that outputs are reproducible, it is appropriate to simulate changes in all of the inputs of the system in the same manner for the various test sessions. However the number of inputs to the system may be very large, thereby making the situation more complex.
The inputs may be arranged in several categories.
A first category relates to inputs that have an influence on the function under test. These inputs are under the control of the designer of the test and they may be modified if there is a change in the specifications for the function under test. This first category includes inputs that allow the function to be tested and inputs that are directly associated with the function that is to be tested.
A second category relates to inputs that, a priori, have no influence on the behavior of the function under test. The designer of a test is sometimes unconcerned by these inputs. These inputs may have values that are attributed to them implicitly by the test tools. Under such circumstances, these values may depend on the versions, configurations, and parameter settings of the test tools in use. Their influence on those outputs of the system that are not associated with the function under test may differ from one test session to another. These inputs may also be in states that are not valid, e.g. relating to connections to inputs that are disconnected or indeed to digital connections that are not powered by the test tools. These states are not necessarily under control in a complex environment.
Furthermore, an electronic system may exist in several variants or configurations that may have an influence on the outputs. Under such circumstances, if a general function of the system that is applicable to all of the variants is under test, then an image may include symbols that are associated with the function under test but that differ from one variant of the system to another.
Furthermore, the electronic system and the test platform may be made up of numerous pieces of equipment, and not all of them need necessarily be essential for the tests that are to be performed. The presence of these pieces of equipment, and also their states, may have an influence on certain outputs of the system.
Furthermore, changes in the specifications of the system may have an influence on its outputs.
Finally, the images generated by an electronic system may include defects that are not reproducible from one test session to another. For example, the images obtained by a camera may include spots or reflections that are not the same from one day to another.
In this context, a first automatic test method that is applicable to digital outputs seeks only to compare digital outputs with expected reference outputs.
With a system that is complex, it is difficult or even impossible to check all outputs automatically in a reasonable length of time. Under such circumstances, a first method may be limited to certain outputs only. Nevertheless, side effects, if any, are difficult to observe.
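The first automatic method, restricted to certain outputs only, can be sketched as follows; the function signature, the use of a numerical tolerance, and the data shapes are assumptions made for illustration.

```python
# Illustrative sketch of the first automatic test method described above:
# compare only a selected subset of the system's digital outputs against
# pre-established reference values. The tolerance value is an assumption.
def compare_outputs(actual: dict, reference: dict,
                    checked: set, tolerance: float = 1e-6) -> list:
    """Return the names of the checked outputs that differ from the reference."""
    failures = []
    for name in checked:
        if abs(actual[name] - reference[name]) > tolerance:
            failures.append(name)
    return failures
```

Because only the outputs named in `checked` are examined, a side effect appearing in any other output goes unnoticed, which is precisely the limitation noted above.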
In a second automatic test method applicable to systems having graphics outputs only, the images resulting from each of the tests are automatically analyzed and compared with pre-established descriptions. That solution makes use of known methods for recognizing shapes and for comparing images. Reference may be made to the literature in order to obtain details explaining such methods.
Nevertheless, the time required for processing can be long.
Furthermore, pre-established descriptions of expected images depend on the configuration and the state of the system under test, on the environment of the test, and possibly also on the state of specifications and any changes thereto. Under such circumstances, an automatic test runs the risk of leading to results that are not reproducible.
In another method seeking to limit the above-mentioned problems of reproducibility, only those digital outputs that are associated with the function under test are examined. Under such circumstances, portions of images that do not correspond to the function are masked.
That method can require a set of masks that need to change with changing specifications of the system.
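The masking method described above can be sketched as follows, representing images as lists of pixel rows and masks as axis-aligned rectangles; both representations are assumptions chosen for simplicity, not taken from the source.

```python
# Illustrative sketch of the masking method described above: before a
# produced image is compared with a reference image, pixels lying inside
# the masked regions (portions unrelated to the function under test) are
# ignored. The rectangle-based mask format is an assumption.
def images_match(image, reference, masks) -> bool:
    """Compare two images (lists of pixel rows), ignoring masked rectangles.

    Each mask is (row_start, row_end, col_start, col_end), end-exclusive.
    """
    def masked(r, c):
        return any(r0 <= r < r1 and c0 <= c < c1
                   for (r0, r1, c0, c1) in masks)
    for r, (row_a, row_b) in enumerate(zip(image, reference)):
        for c, (a, b) in enumerate(zip(row_a, row_b)):
            if not masked(r, c) and a != b:
                return False
    return True
```

As the text notes, the mask set must be maintained alongside the specifications: whenever the layout of the image changes, the rectangles must change with it.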
Nevertheless, side effects can be difficult to detect.
Furthermore, that method suffers from the problem of reproducibility associated with imperfections, possibly temporary imperfections, that might appear in the images.
One known document is US 2009/241076.
Also known is the document by L. J. Aartman et al.: “An independent verification tool for multi-vendor mode S airborne transponder conformance testing”, Proceedings of the 21st Digital Avionics Systems Conference (DASC), Irvine, Calif., Oct. 27-31, 2002, IEEE, New York, N.Y., US, Vol. 2, Oct. 27, 2002 (2002-10-27), XP010616263, ISBN: 978-0-7803-7367-9.
Also known is the document by C. Nebut et al., “Automatic test generation: a use case driven approach”, IEEE Transactions on Software Engineering, Vol. 32, No. 3, Mar. 1, 2006 (2006-03-01), XP055142522, ISSN: 0098-5589, DOI: 10.1109/TSE.2006.22.