The invention relates to a method and an apparatus for processing data in a vehicle.
An Audio Video Bridging (AVB) technology is known which permits the transport of time-synchronized audio and video data with low delay while providing a quality of service (QoS) by way of an Ethernet connection. An audio-visual data stream (A/V stream) can be identified according to AVB by an identifier (Stream ID). This identifier comprises a MAC address of the AVB source.
Furthermore, a transport protocol according to IEEE 1722 is known (“AVB Transport Protocol”). In this case, an Ethernet frame comprises an IEEE 1722 data stream, a packet of the data stream comprising a presentation time (also called an AVB TP time stamp). By means of the AVB technology, it can already be determined on layer 2, by analyzing the so-called Ethertype, whether an IEEE 1722 packet carries A/V data or other information (such as other IP data). The IEEE 1722 packet therefore does not first have to be analyzed elaborately over several layers of a protocol stack before the type of the data content can be determined. The above-mentioned presentation time is determined by the AVB source. The IEEE 1722 packet comprises a payload, for example, in the form of an IEC 61883 packet. Other A/V formats can be used correspondingly.
A protocol for the time synchronization or clock synchronization of various components of a network is also known according to IEEE 802.1AS or PTP (Precision Time Protocol).
Vehicles increasingly have monitors (also called display units or displays) which make it possible for the passengers (the front passenger; when the vehicle is stationary, particularly also the driver; and/or passengers in the rear compartment of the vehicle) to view audio-visual contents, such as films or telecasts. For this purpose, several monitors are often arranged in the vehicle, for example, one or two monitors in the rear compartment and one or two monitors for the driver as well as the front passenger. When all passengers are viewing the same audio-visual content, the output of the audio contents can take place by way of a main sound field of the vehicle, comprising, for example, at least one amplifier with connected loudspeakers. In this case, it is important that the output of the sound takes place in a lip-synchronized manner with respect to the output of the image on all monitors. Even slight deviations from such a lip-synchronized output of the sound are perceived as annoying by the users.
In this case, it is problematic that a data source (A/V source) provides a compressed audio-visual data stream which is outputted, for example, by way of a front processing unit (for example, a so-called head unit), on at least one front monitor and, by way of a rear processing unit (for example, a rear seat entertainment unit (RSE)), on at least one rear monitor of the vehicle and, in the process, the audio-visual data stream is decoded separately by the front as well as by the rear processing unit. The duration of the transmission of the data stream and of the decoding of the data stream may differ, whereby the image data outputted in the front and in the rear will drift apart, and the lip synchronicity of the reproduced images with respect to the outputted sound is not ensured. It should be noted that, when one of the processing units is used for controlling the main sound field, the other processing unit will not automatically reproduce the decoded image data on the connected monitor in a lip-synchronized manner with respect to the sound emitted by way of the main sound field.
It is an object of the invention to avoid the above-mentioned disadvantages and, in particular, create a solution for outputting a piece of audio information in a lip-synchronized manner with respect to video information, in which case, especially the video information can be decoded by different processing units of a vehicle.
This object is achieved according to the characteristics of the independent claims. Further developments of the invention are also contained in the dependent claims.
For achieving this object, a method is suggested for processing data in a vehicle,
wherein the data are received by a first processing unit by way of a network;
wherein the data are decoded by the first processing unit;
wherein a piece of reproduction information comprising an output time or a piece of synchronization information is transmitted to a second processing unit.
Advantageously, the second processing unit is therefore informed of an output time by the first processing unit, specifically after the decoding of the received data in the first processing unit. For example, the output time can be determined by the first processing unit in such a way that it takes into account transit times for the transmission of data in the network and/or processing times in the individual components of the network, and the second processing unit is informed accordingly. It can thereby be achieved that the data are output synchronously at several processing units or output units of the network. In particular, a lip-synchronized output of audio and video information can therefore take place (for example, on several monitors of the vehicle which are controlled at least partially by different processing units).
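The determination of such an output time can be sketched as follows. This is a minimal illustration, not the claimed method itself: the unit names, latency figures, and safety margin are purely hypothetical assumptions; in a real vehicle network these values would be measured or configured.

```python
# Hypothetical per-unit latencies in milliseconds (illustrative values only).
NETWORK_TRANSIT_MS = {"head_unit": 2, "rse_unit": 5}
DECODE_MS = {"head_unit": 40, "rse_unit": 60}

def determine_output_time(receive_time_ms: int, safety_margin_ms: int = 10) -> int:
    """Choose an output time far enough in the future that every
    processing unit can receive and decode the data beforehand."""
    worst_case_ms = max(NETWORK_TRANSIT_MS[unit] + DECODE_MS[unit]
                        for unit in NETWORK_TRANSIT_MS)
    return receive_time_ms + worst_case_ms + safety_margin_ms

print(determine_output_time(1000))  # -> 1075 (1000 + max(42, 65) + 10)
```

The first processing unit would then transmit this value as the piece of reproduction information, so that all units hold their decoded data back until the common deadline.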
It should be noted here that correspondingly more than two processing units can be provided, in which case, the first processing unit can then correspondingly treat a further processing unit like the second processing unit.
The data may, for example, be combined audio and video information which is transmitted particularly in a compressed manner and is received by the first (and possibly the second) processing unit.
The piece of audio information can be outputted by way of at least one loudspeaker, and the piece of video information can be outputted by way of at least one monitor (display unit). In particular, at least one monitor can be connected to the first and/or second processing unit, on which monitor the piece of video information of the received and decoded data is outputted.
The network may be a connection structure which permits a communication between components of the network. The network is, for example, a packet-oriented network, such as an Ethernet or an IP-based network. The network may comprise wired or wireless communication sections (for example, radio links). The network may, for example, have a wireless network, such as a wireless LAN (WLAN) or at least one Bluetooth connection. The network may also comprise a bus system, such as a MOST network (also called MOST bus or MOST bus system), in which the connected components know a common time base (for example, a so-called MOST system time) and can utilize the latter correspondingly.
The processing unit may be a control device of the vehicle. The first processing unit may, for example, be a central control device of the vehicle (for example, a so-called head unit). The second processing unit may, for example, be a control device of an RSE unit. In particular, the processing unit may be a mobile device that is connected with the network, for example, by way of a radio interface. It thereby becomes possible that, for example, a monitor functionality of a mobile device is integrated as a (second) processing unit; the output of the piece of sound information may, for example, take place by way of the main sound field of the vehicle. Thus, a passenger in the vehicle can use the display of his mobile terminal for viewing the piece of video information from the source and, with respect to the latter, hear the piece of audio information by way of the vehicle loudspeakers in a lip-synchronized manner.
It should further be noted that the piece of synchronization information is a piece of information by means of which a synchronization of the second processing unit (or of a part, for example, a decoding unit) with the first processing unit (or of a part, for example, a decoding unit) can be achieved. In particular, it can thereby be ensured that the processing unit, which controls the output of the piece of sound information, also causes or controls the synchronization with the at least one remaining processing unit.
Furthermore, it is an option that the output time and the piece of synchronization information are transmitted as a piece of reproduction information to the second processing unit.
The piece of reproduction information comprises, for example, a telegram that is transmitted, for example, by means of a PTP to the second processing unit.
In particular, the processing unit may have at least one memory (also called “buffer”) for temporarily storing incoming data (for example, data packets) and/or for temporarily storing decoded video and/or audio information.
It is also conceivable that the first processing unit has a transmitter for transmitting the piece of reproduction information and, as required, the decoded data to the second (and/or an additional) processing unit and/or to an output unit.
The data can be made available in the vehicle by a source. The source may, for example, comprise a transcoder, a receiver, a hard disk or a playback drive (for example, a CD-ROM, DVD or Blu-ray drive). The data may comprise audio-visual contents, such as films, telecasts, or the like. In particular, the data can be provided by mobile devices, such as a computer, a mobile telephone, a personal digital assistant (PDA) or the like.
The data can be reproduced locally in the vehicle from a data carrier or can be loaded or received at least partially by way of a, for example, wireless interface (such as a DVB, WLAN or mobile radio interface). The source preferably provides the data as a packet-oriented data stream by way of the network. The providing can take place, for example, as a transmission to all users, to some of the users, or to an individual user of the network (broadcast, multicast, unicast).
It is a further development that the data are received by the first processing unit and by the second processing unit (particularly from the source by way of the network).
It is a further development that the data are decoded by the second processing unit,
wherein a portion of the decoded data, particularly the audio or the video information, is discarded;
wherein the decoded data that are not discarded are outputted at the transmitted output time, particularly by the second processing unit.
For example, the data comprising the audio and video information are decoded, and the audio information is then discarded by the second processing unit because this audio information is also decoded by the first processing unit, and a main sound field of the vehicle (comprising at least one amplifier and at least one loudspeaker in the vehicle) is controlled by this first processing unit.
As an alternative, it is conceivable that the piece of video information is discarded. A portion of the piece of audio information or a portion of the piece of video information can also be discarded. In particular, portions of a piece of audio information can be decoded and outputted at different locations or processing units of the vehicle (for example, different sound channels in the case of stereo, surround or other multi-channel sound outputs).
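The discarding step described above can be sketched as follows. The frame representation (kind, payload, presentation time) and the literal kind names are assumptions made purely for illustration; the source text does not prescribe any particular data structure.

```python
def filter_decoded_frames(frames, discard_kind):
    """Keep only the decoded frames this processing unit is responsible
    for outputting; the rest are discarded (e.g. the second processing
    unit discards audio because the first processing unit drives the
    main sound field of the vehicle)."""
    return [(kind, payload, pts) for kind, payload, pts in frames
            if kind != discard_kind]

frames = [("audio", b"a0", 100), ("video", b"v0", 100), ("audio", b"a1", 140)]
print(filter_decoded_frames(frames, "audio"))  # only the video frame remains
```

Discarding after decoding (rather than not decoding at all) keeps both units' decoder pipelines in the same state, which is what allows them to be driven from a common output time.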
Furthermore, it is a further development that the data have a presentation time, starting at which the received data are decoded by the first processing unit and by the second processing unit.
It is also a further development that the data comprise data packets, at least one data packet containing a presentation time.
In particular, these may be data packets according to an AVB protocol or based on the AVB transport protocol (AVTP). For example, the data can be transmitted in IEEE 1722 packets.
It is also a further development that the second processing unit has an audio-decoding unit and a video-decoding unit, the output of the video-decoding unit being controlled by way of the output time of the first processing unit.
In particular, the output of the video-decoding unit of the second processing unit can be delayed by means of the transmitted output time. The first processing unit therefore controls the video-decoding unit of the second processing unit by means of the transmitted output time.
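A minimal sketch of this delay control, assuming millisecond timestamps on a common time base (both the function name and the late-frame policy are illustrative assumptions, not part of the description above):

```python
def video_output_delay_ms(locally_ready_ms: int, transmitted_output_ms: int) -> int:
    """Delay the video-decoding unit's output so the frame is presented
    at the output time transmitted by the first processing unit.
    A frame that is already late is output immediately (delay 0)."""
    return max(0, transmitted_output_ms - locally_ready_ms)

print(video_output_delay_ms(1040, 1075))  # frame ready early: wait 35 ms
print(video_output_delay_ms(1080, 1075))  # frame ready late: wait 0 ms
```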
Within the scope of an additional further development, the first processing unit has an audio-decoding unit and a video-decoding unit, an output of the video-decoding unit of the first processing unit being controlled, particularly delayed, by the audio-decoding unit of the first processing unit.
Here, it is an advantage that the audio-decoding unit may be implemented in software; in the event of a (temporarily high) load on the processing unit caused by other functions, this audio-decoding unit controls the video-decoding unit such that the reproduction of the piece of video information still always takes place in a lip-synchronized manner with respect to the decoded and outputted piece of audio information.
Another further development consists of the fact that the video-decoding unit of the first processing unit is controlled by means of the output time.
It is an embodiment that at least a portion of the data decoded by the first processing unit is transmitted by way of the network to an output unit.
The output unit preferably is a component that is connected with the network and can receive data by way of this network. The output unit may have an audio output unit (for example, having at least one amplifier and at least one loudspeaker) and/or a video output unit.
The decoded data provided by the first processing unit can be outputted preferably without further decoding or transcoding.
In particular, it is a further development that the portion of the decoded data transmitted to the output unit comprises audio data.
As an alternative, it is also conceivable that the decoded data are video data or comprise such data.
An alternative embodiment consists of the fact that the output unit comprises at least one amplifier with at least one loudspeaker respectively.
In particular, the output unit may have a buffer (memory) for temporarily storing the incoming decoded data.
Especially a main sound field of a vehicle can be controlled by means of the output unit.
It is a further embodiment that the decoded data are transmitted together with the output time to the output unit.
It is therefore also conceivable that the output unit outputs the decoded data at the output time. As a result, a lip-synchronized output of pieces of audio and video information can be achieved even when several distributed components are used which are mutually connected by way of a network.
It is also an embodiment that the decoded data are transmitted with the output time to the output unit by means of an AVB protocol, particularly at least one IEEE 1722 packet.
The output time can correspondingly be inserted as the presentation time into the packets of the AVB protocol.
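Carrying the output time as the presentation time can be sketched as follows. IEEE 1722 does carry a 32-bit AVTP timestamp (the low 32 bits of the nanosecond gPTP time), but the packet layout below is heavily reduced for illustration; a real AVTP header additionally contains subtype, sequence number, stream ID, and further fields.

```python
import struct

def pack_with_presentation_time(payload: bytes, output_time_ns: int) -> bytes:
    """Prefix the payload with a 32-bit timestamp, mimicking how an
    AVTP packet carries its presentation time (timestamps wrap every
    2**32 nanoseconds, i.e. roughly every 4.3 seconds)."""
    return struct.pack("!I", output_time_ns & 0xFFFFFFFF) + payload

def unpack_presentation_time(packet: bytes) -> int:
    """Read the 32-bit timestamp back out of the simplified packet."""
    (timestamp,) = struct.unpack("!I", packet[:4])
    return timestamp

pkt = pack_with_presentation_time(b"frame", 5_000_000_123)
print(unpack_presentation_time(pkt))  # low 32 bits of 5_000_000_123
```

The receiver reconstructs the full output time by combining the 32-bit value with its own synchronized clock, which is why the wrap-around is unproblematic for deadlines a few seconds ahead.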
A further development consists of the fact that the components connected to the network are synchronized (with respect to the time).
It is suggested in particular that several components of the network are synchronized to a common time base, so that the output of the processed data can take place based on this common time base.
The common time base of the several components of the network can be achieved by means of a suitable protocol, such as a PTP (Precision Time Protocol, according to IEEE 1588 or IEEE 802.1AS). By means of the synchronized components of the network, it becomes possible that the presentation time is interpreted in the same fashion in different components, and therefore the output can take place in a time-synchronized manner in distributed components.
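The effect of such a common time base can be sketched as follows; the component names and clock offsets are illustrative assumptions. Once PTP synchronization has established each component's offset to the grandmaster (common) clock, every component maps the same presentation time onto its own local clock, so all outputs occur at the same physical instant.

```python
class SyncedComponent:
    """A network component whose local clock is related to the PTP
    grandmaster (common) time by a known offset: local = common + offset."""

    def __init__(self, offset_ms: int):
        self.offset_ms = offset_ms

    def local_deadline(self, presentation_common_ms: int) -> int:
        """Translate a presentation time (common base) to local time."""
        return presentation_common_ms + self.offset_ms

    def to_common(self, local_ms: int) -> int:
        """Translate a local time back to the common base."""
        return local_ms - self.offset_ms

head_unit = SyncedComponent(offset_ms=3)   # assumed offsets after PTP sync
rse_unit = SyncedComponent(offset_ms=-2)
# Both units' local deadlines correspond to the same common instant.
print(head_unit.to_common(head_unit.local_deadline(2000)))  # -> 2000
print(rse_unit.to_common(rse_unit.local_deadline(2000)))    # -> 2000
```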
Another further development consists of the fact that the network comprises a bus system.
In particular, the bus system may be constructed as a MOST system. A synchronous and/or isochronous channel of the bus system can be used for the transmission of the piece of synchronization information.
In addition, it is a further development that, by means of the piece of synchronization information, the second processing unit is synchronized with the first processing unit.
In particular, the piece of synchronization information may be a time stamp of the video decoding unit of the first processing unit. Thus, the video-decoding units of the first processing unit and of the second processing unit can synchronize themselves, and a lip-synchronized output of the piece of audio and video information can be ensured.
The above-mentioned object is also achieved by means of an apparatus comprising a processing unit which is set up such that the method described here can be implemented.
The processing unit may, for example, be a (partially) analog or (partially) digital processing unit. It may be constructed as a processor and/or as an at least partially hard-wired circuit arrangement which is set up such that the method can be implemented as described here.
The processor may be any type of processor or computer with the correspondingly required periphery (memory, input/output interfaces, input/output devices, etc.) or comprise such a computer. Furthermore, a hard-wired circuit unit, for example, an FPGA or an ASIC or another integrated circuit, can be provided.
It is a further embodiment that the apparatus comprises a control device or a part of a control device of the vehicle.
The above-mentioned object is also achieved by means of a vehicle comprising at least one of the apparatuses described here.
Embodiments of the invention will be illustrated and explained in the following by means of the drawings.
Other objects, advantages and novel features of the present invention will become apparent from the following detailed description of one or more preferred embodiments when considered in conjunction with the accompanying drawings.