1. Technical Field
The present invention relates to a method of and apparatus for data non-repudiation.
2. Related Art
An abiding concern in the history of communication techniques is the worry that messages or conversations, whilst purporting to come from or be with a particular party, might in fact have been made up by another (malicious) party.
Various methods to attempt to raise the level of trust that parties may place in messages or conversations have been introduced, many of them relying on cryptographic techniques. It is certain that, at least as far as the public domain is concerned, cryptographic techniques have themselves been developing apace over the last few decades, leading to the widespread introduction of such technologies as public key encryption, digital signatures and public key infrastructures, which now provide for the much-vaunted e-commerce revolution.
In terms of the sending of messages, the notion of so-called ‘non-repudiation’ may embrace both that a message originator is unable to deny either having created the contents of a message or having sent the message and that a message recipient cannot deny having received the message (i.e. non-repudiation of both transmission and receipt).
Cryptographic techniques have been used to address the problem of non-repudiation and related problems such as authentication (providing for a guarantee of message originator identity), confidentiality (providing for a guarantee that the message has not been intercepted and read during transmission) and integrity (providing for a guarantee that the message can only be changed by an authorised party).
Further definitions may be found for “non-repudiation” and associated terms in, for example, Internet Engineering Task Force (IETF) Request For Comments (RFC) 2828, “Internet Security Glossary”.
A thorough discussion of cryptographic techniques and applications is to be found in “Applied Cryptography”, Bruce Schneier, Wiley 1996.
A typical cryptographic approach to providing non-repudiation of, for example, transmission of a message has first involved the formation of a so-called message digest or hash of the message and then the signing of the hash with a key. The hashing step tends to account for only a small portion of the time taken in such approaches, whereas the signing step, being much slower, tends to account for a large portion. For non real-time operations, the slowness of this step may be of no significance. For applications which have a (near) real-time operation, however, the slowness engendered by the signing of messages may well be intolerable.
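The hash-then-sign approach described above can be sketched as follows. This is a minimal illustration only: the tiny textbook-RSA key pair below is an assumption made for self-containment and is hopelessly insecure; a real system would use a proper signature library. The fast hashing step and the comparatively slow private-key signing step are both visible.

```python
import hashlib

# Toy textbook-RSA parameters (p = 61, q = 53), purely illustrative.
N, E, D = 3233, 17, 2753  # modulus, public exponent, private exponent

def sign(message: bytes) -> int:
    # Fast step: reduce the message to a fixed-size digest.
    digest = int.from_bytes(hashlib.sha256(message).digest(), "big") % N
    # Slow step: private-key modular exponentiation over the digest only.
    return pow(digest, D, N)

def verify(message: bytes, signature: int) -> bool:
    digest = int.from_bytes(hashlib.sha256(message).digest(), "big") % N
    # Public-key operation recovers the digest signed by the originator.
    return pow(signature, E, N) == digest
```

Note that only the short digest, never the full message, passes through the expensive exponentiation; the per-message cost of signing is nonetheless what makes this approach problematic for real-time streams.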
An application might, for example, produce a stream of messages each carrying data such as a portion of a voice conversation between two parties. Non-repudiation of both transmission and receipt would be desirable to provide for the possibility of proof of who had said what in the conversation and who had heard it. These voice messages may suitably be carried in the form of so-called packets. For an introduction to packet protocols see, for example, “Internet Core Protocols”, Eric A. Hall, O'Reilly publishers, 2000.
Internet Protocol version 4 (IPv4) is in widespread use on computer networks at the present time. IP was originally intended for asynchronous data transport rather than real time media sessions. IP itself provides no guarantees as to packet delay, ordering or even arrival. Higher level protocols have been developed for use over an IP layer. The Transmission Control Protocol (TCP; hence the TCP/IP protocol suite) is a connection oriented service, with which the unreliability associated with IP is managed through the use of acknowledgements to confirm packet arrival (and to re-send where necessary). TCP/IP is not of general use for carrying the data of real time applications since it can introduce intolerable delays occasioned by waiting for lost packets to be re-sent. By way of contrast, the User Datagram Protocol (UDP; hence the UDP/IP protocol suite) is a connectionless service which is therefore more suitable for real time applications in that it does not hold up transmission of a sequence of packets if one of the packets in the sequence is lost.
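The connectionless character of UDP described above can be illustrated with a short sketch. The host, port and 4-byte sequence-number framing below are assumptions for illustration; UDP itself provides no sequence numbers, which is why one is prepended here so a receiver could detect loss or reordering.

```python
import socket

def send_frames(frames, host="127.0.0.1", port=5004):
    # SOCK_DGRAM selects UDP: each sendto() is an independent datagram,
    # so a lost packet never holds up transmission of the packets after it.
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    for seq, frame in enumerate(frames):
        # Prepend a sequence number, something UDP itself does not provide.
        packet = seq.to_bytes(4, "big") + frame
        sock.sendto(packet, (host, port))
    sock.close()
```

By contrast, a TCP sender (SOCK_STREAM) would block and retransmit on loss, introducing exactly the delays the passage above identifies as intolerable for real-time media.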
By way of conversion into a suitable form for packetisation, an analogue signal such as speech can be digitised (including a degree of compression) using techniques such as Pulse Code Modulation (PCM; an 8 bit code converting speech into a 64 kbps stream), Adaptive Differential Pulse Code Modulation (ADPCM; a 4 bit code converting speech into a 32 kbps stream) or Linear Predictive Coding (utilising a model of human speech).
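The PCM and ADPCM rates quoted above follow directly from multiplying the sampling rate by the bits used per sample; the calculation below assumes the standard 8 kHz telephony sampling rate, which the passage does not state explicitly.

```python
def bitrate_kbps(sample_rate_hz: int, bits_per_sample: int) -> float:
    # Bitrate of an uncompressed-frame codec: samples per second times
    # bits per sample, expressed in kilobits per second.
    return sample_rate_hz * bits_per_sample / 1000

print(bitrate_kbps(8000, 8))  # PCM (8-bit code):   64.0 kbps
print(bitrate_kbps(8000, 4))  # ADPCM (4-bit code): 32.0 kbps
```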
Whilst bandwidth efficient, this digitisation process introduces a delay which may add to other sources of delay arising from sending a packet over a network such as propagation delay (rather small transmission time on hops between network nodes) and handling delay (rather larger time taken in network nodes incurred by such processes as buffering). By way of example, a one way delay of some 100 milliseconds might be introduced in this way.
Rather more serious, however, than even this absolute delay (with which conversation can still be carried out by making some concessions to the need for parties to speak in turn) is the introduction of a variable delay through packet transport mechanisms. A variable delay, or so-called 'jitter', between, for example, syllables can cause severe difficulties in understanding speech. Techniques have therefore been developed to buffer the received packets carrying audio data. Thus, no matter what jitter has been suffered in the receipt of the buffered packets, the audio data contained within can be played out of the packet buffer at an acceptable constant rate. Whilst smoothing the variable delay, this clearly introduces a further element of absolute end-to-end delay. If a packet arrives outside an acceptable buffer period, instead of holding up the playout, it is generally discarded.
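The buffering and late-packet-discard behaviour described above can be sketched as follows. The 20 ms packetisation interval and 60 ms buffer depth are assumptions chosen for illustration; the principle is simply that a packet arriving after its scheduled playout instant (packet interval times sequence number, plus the buffer depth) is dropped rather than allowed to hold up the constant-rate playout.

```python
def playout(packets, buffer_ms=60, frame_ms=20):
    # packets: iterable of (seq, arrival_offset_ms, payload) tuples.
    played = []
    for seq, arrival, payload in sorted(packets):
        # Scheduled playout instant for this packet, plus buffer slack.
        deadline = seq * frame_ms + buffer_ms
        if arrival <= deadline:
            played.append(payload)
        # else: the packet arrived too late for its slot and is discarded.
    return played
```

A deeper buffer tolerates more jitter but adds directly to the absolute end-to-end delay, which is the trade-off the passage identifies.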
Accordingly, to provide a mechanism for non-repudiation of both transmission and receipt in such a field of application would require addressing the issue that the signing of each message may well add an intolerable delay to the cumulative delays already arising from the message network transport. If, as is likely for the reasons indicated above, an unreliable packet protocol is used, the issue of the occasional loss of a message must also be addressed.
The following method might be considered for application to this class of non-repudiation problems:
Gennaro and Rohatgi ("How to sign digital streams", Crypto '97, pp. 180-197) proposed a scheme for signing a real-time digital stream wherewith the stream is broken into blocks and authentication information is transmitted with each. Packet i is then authenticated by a public key contained in packet i−1 using a fast 1-time signature scheme. If the first packet is authenticated using standard public key cryptography then the whole stream is signed. Whilst faster (using a hash-computation-based 1-time scheme), problems arise with the overhead incurred by each packet containing the public key for the next and a signature of itself, and with the fact that authentication cannot continue if a packet is dropped.
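The chaining idea can be illustrated with a simplified sketch. Note this implements the simpler hash-chain variant (each packet carrying the hash of the next, with only the first packet's hash signed by conventional public-key cryptography) rather than the full 1-time-signature scheme; the `sign_fn`/`verify_fn` callables are stand-ins for a real signature primitive. The sketch also makes visible the drawback identified above: once one packet in the chain is lost or altered, no subsequent packet can be authenticated.

```python
import hashlib

HASH_LEN = 32  # SHA-256 digest length in bytes

def chain_sign(blocks, sign_fn):
    # Build the chain backwards: each augmented block carries the hash
    # of the next augmented block; the final block carries a terminator.
    augmented = []
    next_hash = b"\x00" * HASH_LEN
    for block in reversed(blocks):
        augmented.append(block + next_hash)
        next_hash = hashlib.sha256(augmented[-1]).digest()
    augmented.reverse()
    # Only this one hash is signed with slow public-key cryptography.
    signature = sign_fn(next_hash)  # next_hash == hash of augmented[0]
    return signature, augmented

def chain_verify(signature, augmented, verify_fn):
    # One conventional signature check covers the whole stream...
    if not verify_fn(signature, hashlib.sha256(augmented[0]).digest()):
        return False
    # ...then each packet authenticates the next by a fast hash check.
    for i in range(len(augmented) - 1):
        carried = augmented[i][-HASH_LEN:]
        if carried != hashlib.sha256(augmented[i + 1]).digest():
            return False  # chain broken: authentication cannot continue
    return True
```

Per-packet cost is thus one hash rather than one signature, but the overhead of a carried hash (or, in the full scheme, a public key and 1-time signature) in every packet remains, as does the fragility to packet loss.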
Neither the straightforward hashing and signing of every packet nor the proposed alternative method presents a solution to the identified problems.