Recent decades have seen an enormous evolution of telecommunication technology and industry. Initially, it addressed basic needs, such as connecting places, offices, and homes, by means of the fixed-line phone. The next need, the communication of people, was met by means of the mobile phone. Currently, technology and industry are preparing the connection of “things”, which is often referred to as the Internet-of-Things (IoT) or Machine-to-Machine (M2M) communication. It has been projected that by 2015, there will be 50 billion devices connected to the network.
A proposed logical architecture of a Machine-to-Machine (M2M) platform involves a so-called M2M connectivity layer and a so-called M2M service enablement layer. The main purpose of the M2M connectivity layer is to guarantee that there is a connection to devices, that such a connection is reliable, that roaming is possible, and, in general, that the connectivity infrastructure is optimized for several constraints. Such constraints reflect that connected devices usually exhibit low mobility (except in transport- and logistics-related businesses), generate little traffic, and exist in large numbers per owner.
The main purpose of the M2M Service Enablement layer is to ensure the context and meaningfulness of the information that the devices measure or generate, which can be accessed over application layer protocols (usually over TCP/IP, such as HTTP/XML, as well as SMS and the like). Thus, service enablement must guarantee transparent access to devices regardless of the connectivity technology, a uniform naming of such devices (usually through a URI, a Uniform Resource Identifier), the exposure of a uniform application programming interface (API) to developers, access control and secure access to devices, and quality of service assurance. The latter may include assigning priorities to data coming from sensors or throttling their output.
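The responsibilities of the service enablement layer described above can be illustrated by a minimal sketch. All names here (the `DeviceRegistry` class, the `m2m://` URI scheme, the developer identifiers) are illustrative assumptions, not part of any standard or actual platform:

```python
# Minimal sketch of service-enablement responsibilities: uniform URI naming
# of devices, access control per device, and transparent reads regardless of
# the underlying connectivity technology. All names are illustrative.

class DeviceRegistry:
    """Maps uniform device URIs to their latest readings and access rights."""

    def __init__(self):
        self._readings = {}  # device URI -> latest reading
        self._acl = {}       # device URI -> set of authorized developer ids

    def register(self, uri, allowed_developers):
        self._readings[uri] = None
        self._acl[uri] = set(allowed_developers)

    def publish(self, uri, value):
        # The connectivity layer delivers a reading; how it arrived
        # (cellular, fixed, etc.) is invisible at this layer.
        self._readings[uri] = value

    def read(self, uri, developer_id):
        # Access control: only developers authorized by the device owner
        # may read the device's data.
        if developer_id not in self._acl.get(uri, set()):
            raise PermissionError(
                "developer %s not authorized for %s" % (developer_id, uri))
        return self._readings[uri]


registry = DeviceRegistry()
registry.register("m2m://example-operator/cars/car-42/speedometer",
                  {"example-cars-co"})
registry.publish("m2m://example-operator/cars/car-42/speedometer", 87.5)
print(registry.read("m2m://example-operator/cars/car-42/speedometer",
                    "example-cars-co"))  # 87.5
```

An unauthorized developer attempting the same read would receive a `PermissionError`, reflecting the secure-access requirement stated above.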
FIGS. 5A and 5B show an approach to M2M Service Enablement. Different organizations may take care of M2M Connectivity and M2M Service Enablement, as coupling between them is not mandatory. Device owners (the entities owning devices and therefore the ones that may grant access to the data measured or generated by those devices) use M2M Service Enablement platforms to access their devices and thereby create applications fulfilling their requirements. For instance, “Example Cars Co.” could access the speed of its cars, as read from their speedometers, through an API of the M2M Service Enablement platform provided by “Example Network Operator”. This information could feed an application that collects such data in order to calculate the average speed of the fleet.
An existing service provider which allows owners of sensors and devices to connect sensor data to the web, and developers to build their own applications using such data, is “Pachube”. Its offering is structured around a hierarchy of the following data types: [1] Environments or feeds (a collection of measured data, often at a particular geographical location, defined by the creator of the feed and measured by sensors and devices; an environment can represent measures coming from both physical entities, such as a room, a mobile device, a building or a forest, and virtual entities, such as a Second Life model, server bandwidth monitoring, etc.), [2] Datastreams (an individual sensor or measuring device within an environment, having a unique ID and possibly specifying ‘units’, e.g. ‘watts’, as well as user-defined ‘tags’, e.g. ‘fridge_energy’), and [3] Datapoints (a single value of a datastream at a specific point in time, possibly represented by a key-value pair of a timestamp and the value at that time).
A typical usage of the Pachube service may be described by the following example: One wires up a hallway (‘environment’) with temperature, humidity and CO2 sensors (‘datastreams’). One creates a Pachube feed named ‘Example Hallway’ with three datastreams whose IDs could be ‘temperature’, ‘humidity’ and ‘CO2’; these might be tagged ‘thermal, non-contact’, ‘capacitive, SHT21’ and ‘MG811’, respectively, and have units ‘Celsius’, ‘% RH’ and ‘ppm’. Individual datapoints at a given point in time might be ‘23.2’, ‘34’ and ‘3820’, respectively.
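The environment/datastream/datapoint hierarchy of the hallway example above can be sketched as follows. The class names and structure are an illustrative assumption for clarity, not the actual Pachube API:

```python
# Illustrative model of the feed hierarchy: an Environment (feed) contains
# Datastreams, each of which accumulates timestamped Datapoints.
from collections import namedtuple

Datapoint = namedtuple("Datapoint", ["timestamp", "value"])

class Datastream:
    """An individual sensor within an environment, with a unique ID."""
    def __init__(self, stream_id, units=None, tags=()):
        self.stream_id = stream_id
        self.units = units
        self.tags = list(tags)
        self.datapoints = []  # time-ordered (timestamp, value) pairs

    def add(self, timestamp, value):
        self.datapoints.append(Datapoint(timestamp, value))

class Environment:
    """A feed: a named collection of datastreams."""
    def __init__(self, name):
        self.name = name
        self.datastreams = {}

    def add_datastream(self, stream):
        self.datastreams[stream.stream_id] = stream

# The 'Example Hallway' feed from the text:
hallway = Environment("Example Hallway")
for sid, units, tags in [("temperature", "Celsius", ["thermal", "non-contact"]),
                         ("humidity", "% RH", ["capacitive", "SHT21"]),
                         ("CO2", "ppm", ["MG811"])]:
    hallway.add_datastream(Datastream(sid, units, tags))

# One datapoint per datastream (timestamps are made up for illustration):
hallway.datastreams["temperature"].add("2011-06-01T12:00:00Z", 23.2)
hallway.datastreams["humidity"].add("2011-06-01T12:00:00Z", 34)
hallway.datastreams["CO2"].add("2011-06-01T12:00:00Z", 3820)

print(sorted(hallway.datastreams))  # ['CO2', 'humidity', 'temperature']
```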
Pachube's service is currently offered to developers at three levels (so-called plans): [1] Basic (the developer may access all public feeds, access historical data from feeds up to one month old, and access a limited number of datastreams, with a limited number of requests per time unit), [2] Pro (the developer may access all public feeds, access historical data from feeds up to one year old, and access a larger number of datastreams, with a limited but larger number of requests per time unit), and [3] Premium (the developer may access all public feeds as well as private feeds, access all historical data from feeds, and access an even larger set of datastreams with larger limits).

However, the following problems can be identified in conventional services: Firstly, developers may not be allowed to selectively access only those data feeds that they are actually interested in. Instead, developers get access to the whole set of existing data feeds, depending on the level the developer has paid for. The access control model is therefore very coarse-grained and does not support accessing only the feeds a user is interested in. While from a deployment point of view this coarse-grained access control model may be simpler to implement and deploy, such a design decision may have one or more of the following drawbacks: it makes it very difficult to dimension the system. That is, once a level of access is granted to a developer, he may access whatever feed he wishes. Estimates for the dimensioning of the system must take this into account, which eventually results in an excess of capacity. Additionally, feeds always provide all the data points from their datastreams. This lack of modulation in the amount of information provided to developers prevents the system from prioritizing clients (developers) and offering them different levels of quality of service.
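The coarse granularity of such a plan-based model can be sketched as follows. The plan names and limits are taken from the description above; the code structure itself (the `PLANS` table, the `may_access` function) is an illustrative assumption:

```python
# Sketch of a coarse-grained, plan-based access model: the access decision
# depends only on the developer's plan and the feed's visibility -- never on
# *which* feed is requested. Limits are illustrative values.
PLANS = {
    "basic":   {"history_days": 30,   "private_feeds": False},
    "pro":     {"history_days": 365,  "private_feeds": False},
    "premium": {"history_days": None, "private_feeds": True},  # None: unlimited
}

def may_access(plan, feed_is_private):
    # No per-feed check is possible: a developer at a given level may read
    # every public feed, which is why the system is hard to dimension.
    limits = PLANS[plan]
    return limits["private_feeds"] or not feed_is_private

print(may_access("basic", feed_is_private=False))   # True: any public feed
print(may_access("basic", feed_is_private=True))    # False
print(may_access("premium", feed_is_private=True))  # True
```

Because `may_access` never receives a feed identifier beyond its visibility flag, capacity planning must assume every developer may read every public feed, which is the dimensioning drawback noted above.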
Secondly, conventional models follow a strict publication-consumption model. In that model, clients (developers) can pay for and access device data published in the respective platform. However, they cannot play an active role by stating which types of data they would be interested in accessing, or which service level conditions would be sufficient for them, for device data being published in the platform.
Accordingly, there is a need for an improved data distribution platform that handles data streams from a plurality of generating devices on an input side and distributes the data to a plurality of client devices on an output side. More specifically, there is a need to modify data streams between the input side and the output side so as to more efficiently address the needs of the client devices. There is a further need to allow clients to select the data they are interested in with a finer granularity and/or to specify offered data packages on their own, that is, to at least propose the form in which data is offered on data distribution platforms.
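The finer-grained model called for above can be sketched as per-feed subscriptions in which the client itself proposes its service level conditions. All names (`Subscription`, `DistributionPlatform`, the feed identifiers and parameters) are hypothetical, chosen only to illustrate the contrast with the plan-based model:

```python
# Sketch of fine-grained, client-driven data selection: each client
# subscribes to individual feeds and proposes its own delivery conditions,
# so the platform can dimension capacity per subscription rather than
# assuming every client may read everything.

class Subscription:
    def __init__(self, client_id, feed_id, max_rate_hz, resolution):
        self.client_id = client_id
        self.feed_id = feed_id
        self.max_rate_hz = max_rate_hz  # client-proposed delivery rate
        self.resolution = resolution    # client-proposed data granularity

class DistributionPlatform:
    def __init__(self):
        self.subscriptions = []

    def subscribe(self, subscription):
        self.subscriptions.append(subscription)

    def clients_for(self, feed_id):
        # Only clients that explicitly selected this feed receive its data;
        # unselected feeds generate no load for a given client.
        return [s.client_id for s in self.subscriptions
                if s.feed_id == feed_id]

platform = DistributionPlatform()
platform.subscribe(Subscription("dev-1", "hallway/temperature", 1.0, "0.1 C"))
platform.subscribe(Subscription("dev-2", "cars/speed", 0.2, "1 km/h"))
print(platform.clients_for("hallway/temperature"))  # ['dev-1']
```

In such a scheme, the aggregate of the subscriptions, each carrying a client-proposed rate and resolution, gives the platform an explicit basis for dimensioning and for offering differentiated quality of service.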