Monitoring the flow, location and state of products has always been an important part of any supply chain management system. Only with this information can a business manager get a clear picture of the state of the system and judge the impact of changes on it, so that the system can be effectively maintained and improved.
A number of technologies have been used over the years to identify products in a supply chain, one of the most recent being optical bar codes. Identifying products, boxes, pallets and containers with bar codes has been very useful, but reading the bar code labels themselves has been problematic as they require line-of-sight reading by an optical sensor. Bar codes can only be read if they are physically close enough to the reader and are properly oriented. Even if the bar code reading system is set up carefully, the reading-failure rate in an automated environment is still so high that many companies do their bar code reading manually despite the higher cost.
In recent years, the technology behind RFID (radio frequency identification) has been improving and it is being positioned to replace bar code technology. RFID can be used in many of the same supply chain, process control, inventory control and system management applications, but unlike bar coding, RFID uses wireless tags which do not need line-of-sight access to be read. RFID scanning can be done at greater physical distances from sensors than bar coding, and the orientation of the RFID tag with respect to the sensor is a far less significant problem. As a result, RFID technology is providing much higher read rates, allowing process speeds and efficiency to be increased. It is no surprise that RFID is quickly becoming one of the hottest new technologies, encouraged by mandates from the United States Department of Defense and large commercial retailers.
At a basic level, an RFID process control system consists of a collection of electronic RFID tags and a network of sensors at different physical locations within the system. Each electronic RFID tag is embedded with a unique electronic product code (EPC), transmitter and micro-antenna. When a tagged item passes within range of a sensor, the sensor receives the EPC via radio waves, identifies the item and its location, and relays this information to a central computer.
A standard database on the central computer can be used to collect and manage the RFID data in the same way that a database handles data from other kinds of sensors. However, existing databases are not designed to handle the “avalanche” of sensor data generated in an RFID system, let alone to do so efficiently. Thus, there is a need for a database system which can manage very large streams of RFID data; that is, a system which can manage the spatial and temporal fidelity at which we currently instrument and analyze the physical world.
In fact, this avalanche of data is a problem that is common to many of today's hardware and software systems and is not exclusive to RFID. Today's process management systems produce more and more data as data acquisition tools and devices become cheaper and more widely available. RFID is just one example of sensor data produced by such systems.
Currently available sensor data management systems fall short of what is required in many respects, a number of which are outlined hereinafter. Once a database system is developed that can manage very large streams of RFID data, one will be able to support larger and more complex sensor networks where objects have to be tracked and traced throughout multiple operational steps and correlated with other sensor events. There are many environments in which such systems would be highly desirable, including supply chains of perishable goods, sensitive and hazardous material management, security and access control, patient/employee safety, asset management, and continuous and automated monitoring of systems and vehicles. Security systems (anti-terror, disaster relief, general defense) may also incorporate a multitude of sensors and potentially hundreds of dynamically changing applications that require sensor data. Thus, there is clearly a need to correlate data between various sensors in a complex and dynamic environment.
The management strategies that govern these operational environments are often formulated as mandates or directives. These directives apply to the environment itself as well as the movement of objects through it. For example, one may have a directive that pallets containing perishable food cannot be exposed to more than 20 F for more than 2 hours, regardless of where they are in the system. As another example, one may have a directive that a ceiling light is turned on only when an authorized person enters the room.
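A directive of the first kind above can be expressed as a simple rule over a stream of sensor readings. The following is a hedged sketch, not a definitive implementation: the function name, the reading format (elapsed hours paired with a temperature in degrees Fahrenheit) and the defaults are assumptions, with the 20 F threshold and 2-hour limit taken from the example directive:

```python
from datetime import timedelta

# Hypothetical directive check: flag a pallet whose cumulative exposure
# above a temperature threshold exceeds a time limit, regardless of where
# in the system the readings occurred. Each reading is a pair of
# (hours since the previous reading, temperature in degrees F).
def violates_exposure_directive(readings, threshold_f=20.0,
                                limit=timedelta(hours=2)):
    exposed = timedelta()
    for hours, temp_f in readings:
        if temp_f > threshold_f:
            exposed += timedelta(hours=hours)
            if exposed > limit:
                return True  # directive violated
    return False

# 1.5 h at 25 F, then 1 h at 18 F, then 1 h at 25 F: 2.5 h above threshold.
print(violates_exposure_directive([(1.5, 25.0), (1.0, 18.0), (1.0, 25.0)]))  # True
```

Because the rule is stated over the readings rather than over any one location, the same check applies wherever the pallet travels in the system, which is exactly the "regardless of where they are" property the directive demands.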
There are currently no sensor networks which organize sensor observation data in an efficient and application-agnostic way such that these operational processes can be easily formulated, controlled and executed. Also, there is currently no way to accommodate contextual change, that is, the use of sensor data to change the operational process flow. For example, a change in a terror alert level may require that procedures be changed, but existing systems cannot accommodate such changes. As another example, NASA's chemical waste management procedures may change when a space shuttle is on site, but again, existing systems cannot accommodate such change.
As well, application processes and events are usually defined on a higher interpretation level than pure observation data. That is, a receiving process is not simply an atomic sensor observation, but is usually a correlation of several sensor observations and context information. This includes, for example, the sensor which reported the observation. Legacy applications usually cannot be “sensor enabled” easily to correlate data and context information in this way.
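The correlation described above can be sketched as follows. The scenario is hypothetical: the sensor-role sets, sensor names and time window are assumptions, used only to show how a higher-level "receiving" event is derived from several raw observations plus context (which sensor reported each observation) rather than from any single atomic read:

```python
from datetime import datetime, timedelta

# Context information: which sensors play which role in the environment.
DOCK_SENSORS = {"dock-door-1", "dock-door-3"}
SHELF_SENSORS = {"shelf-A", "shelf-B"}

def receiving_events(observations, window=timedelta(minutes=30)):
    """observations: time-ordered list of (epc, sensor_id, timestamp).

    A "receiving" event is emitted when the same EPC is seen at a
    dock-door sensor and then at a shelf sensor within the window.
    """
    last_dock_read = {}  # epc -> timestamp of most recent dock-door read
    events = []
    for epc, sensor_id, ts in observations:
        if sensor_id in DOCK_SENSORS:
            last_dock_read[epc] = ts
        elif sensor_id in SHELF_SENSORS:
            dock_ts = last_dock_read.get(epc)
            if dock_ts is not None and ts - dock_ts <= window:
                # Two raw observations plus sensor-role context become
                # one higher-level "received" event.
                events.append((epc, dock_ts, ts))
    return events

t0 = datetime(2024, 1, 1, 8, 0)
obs = [("epc1", "dock-door-3", t0),
       ("epc1", "shelf-A", t0 + timedelta(minutes=10)),
       ("epc2", "shelf-B", t0 + timedelta(minutes=5))]
events = receiving_events(obs)  # only epc1 yields a receiving event
```

Note that `epc2` produces no event despite a valid shelf read, because the correlating dock-door observation is missing; this is the gap between raw observation data and application-level events that legacy applications cannot easily bridge.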
Traditional process control systems follow objects throughout the operational environment; thus, they have to specify all of the unexpected situations that the objects can get into. This requires the application to keep complete control and knowledge of the state of the objects as well as the environment. In an operational environment it is usually assumed that such unexpected situations occur about 1% of the time, but nevertheless, the application should not crash when these situations occur. The application therefore has to be programmed with all the possible exceptions that could occur. Software developers generally spend 90% of their time dealing with this relatively small number of exceptions, dramatically expanding the code base and the complexity of every application. Existing systems do nothing to mitigate this problem.
Currently, most commercial retailers and suppliers are simply using RFID for tracking, mainly at shipping or receiving, with tags seldom being reused in subsequent processes or correlated with other sensor readings. In such an environment, RFID merely automates previously known processes and does not implement a more complex Sensor Network Environment definition. As well, the RFID sensor data generated in such an environment is mostly used by only one application. However, as observed from recent customer and industry trends (Factory Networks, Distributed Sensor Networks, RFID Journal, RFID Show and GridWorld, for example) as well as through EPC Global activities, both retailers and suppliers recognize that the use of additional sensor data and its correlation, as well as the utilization of this information by multiple applications, will yield a significantly higher return on investment. There are currently no effective systems which support such complexity.
Other problems with current implementations of process control systems include the following:
a) it is difficult to analyze data in existing systems because the sensor and analysis system has been generated without any regard for the underlying processes and operational environment context; data is simply monitored and stored; and
b) application processes and events are usually defined at a high level, yet typical database systems are intended to deliver pure observed data. Thus, there is a gap between the data being received and the judgments and observations which must be made.
There is therefore a need for a method of and system for managing sensor data in a computer database environment which addresses one or more of the problems outlined above. It is desirable that this design be application-agnostic (i.e. the data collected and analyzed can be integrated with independent and disparate software applications). It is also desirable that this design be developed with consideration for efficiency, flexibility and cost.