Within a given organization, it is common to see several different systems used for managing content and monitoring the health status of different types of remote devices. As the number and type of remote devices grows, the cost of maintaining and integrating these separate content management and device management systems eventually becomes prohibitive. In addition, when many different types of playback devices are configured to show the content—for example, PCs, mobile phones, tablets, and dynamic signs—these devices will typically have different physical form factors and playback capabilities, so the desired content may need to be transmitted via different channels or in different formats to ensure correct playback. Regardless of the level of integration achieved, these systems also require that repetitive tasks be performed manually in order to manage the content schedules on different device types.
When prior art content management systems attempt to provide a universal playback environment to replace these disparate systems, they typically rely on a rudimentary web-based rendering environment to play back content that is streamed over the Internet. While some prior art discusses storing content prior to display, the prior art does not provide a method in which the playback system may dictate the content to be displayed in conjunction with such storage. Further, these systems lack the ability to store logging and status data on the device after such data is collected. As a result, existing solutions are highly sensitive to interruptions in the device's network connection.
Accordingly, there is a need for a system that can dictate the storage and playback of content obtained over a communication network (such as the Internet) on a diverse set of electronic devices, and reliably collect logging and status data from each device.