Use of communication systems is widespread, since more and more devices are capable of being connected, foremost by wireless, but also by wired, technologies. One use of communication systems concerns control systems in various applications.
A known control system may include a device controlled by a server. The device can be an object or a process. Within the control system, transmissions are transferred to and from the device and/or the server in order to provide control information for controlling of the device and feedback information to be used by the server when controlling the device. In this context, latency can be defined as a time interval, starting when a transmission is sent from the server and ending when a response to the transmission is received by the server.
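The latency definition above can be illustrated with a minimal sketch. The helper names (`measure_latency`, the fake send/receive pair) and the simulated 10 ms delay are assumptions for illustration only, not part of any particular control system:

```python
import time

def measure_latency(send, receive):
    # Latency as defined above: the interval starting when a
    # transmission is sent from the server and ending when the
    # response to that transmission is received by the server.
    start = time.monotonic()
    send(b"control")   # transmission from server to device
    receive()          # response from device back to server
    return time.monotonic() - start

# Simulated link for illustration: the "network" simply delays.
def fake_send(msg):
    pass

def fake_receive():
    time.sleep(0.01)  # pretend the round trip takes ~10 ms

latency = measure_latency(fake_send, fake_receive)
```

In a real deployment, `send` and `receive` would wrap the actual network stack; the measured value is what the description below calls the actual latency.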
For the device, a required latency is a measure of the network performance, in terms of latency, that is needed in order for the device to be properly, and safely, controlled.
For the known control system, an actual latency, which is a measure of the current or actual network performance in terms of latency, can vary depending on fluctuations among a plurality of factors. Examples of such factors are the number of connected devices, the data rates of connected devices, the type of connection, i.e. wired or wireless, and even weather conditions. This implies that when the factors indicate harsh conditions, e.g. bad weather and high network load, the actual latency becomes higher.
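The relation between required and actual latency can be sketched as a simple check. The function name and the 50 ms threshold are hypothetical examples, not values from the description:

```python
REQUIRED_LATENCY_S = 0.05  # assumed example requirement: 50 ms

def is_safe_to_control(actual_latency_s, required_latency_s=REQUIRED_LATENCY_S):
    # Control is considered proper and safe only while the actual
    # latency does not exceed the required latency for the device.
    return actual_latency_s <= required_latency_s

# Under normal conditions the actual latency meets the requirement;
# under harsh conditions (e.g. bad weather, high load) it may not.
normal_ok = is_safe_to_control(0.03)   # 30 ms actual latency
harsh_ok = is_safe_to_control(0.08)    # 80 ms actual latency
```

A real system would derive the actual latency from ongoing measurements rather than a fixed value, but the comparison itself is this simple.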
In a real-life example, the device is a truck and the server is a computer used by a remote driver to remotely control the truck. In view of the above-explained unintentional variations in actual latency depending on the plurality of factors, a problem may be that harsh conditions, such as bad weather, cause the actual latency to increase, which makes it more difficult for the remote driver to properly, and safely, control the truck.