In an era when there has been a vast expansion in the number and variety of electronic devices, particularly portable and wireless devices, as well as an expansion in the types of application programs that run on them, there has been a corresponding need for data and code to be shared directly between diverse, heterogeneous device types in order to carry out the intent of applications which can only be accomplished by employing the electronic and programmatic resources of a plurality of devices. A need has also arisen, and continues to grow, for a device user to be able to communicate with other devices that may or may not be set up in advance for the type of communication, data transfer, or sharing that the user desires. For example, a user may have or wish to create a picture collection on a digital camera and then transfer that collection of pictures directly to a personal digital assistant (PDA) type device, television, or projector for viewing, to a storage device for storage, or to a printer. The user may also or alternatively wish to transfer code which implements a sequenced slide show encapsulating the pictures, titles, index of slides, and the like to another device, such as a device that has a larger screen resolution or better graphics capability than the device on which the slide images reside. The user may also want to be able to select and print some subset of pictures in the slide show on an available printer. There are many other examples of such code, data, and content sharing.
Conventional Interoperability Issues, Problems, and Limitations
The sharing of data and code, along with the sharing of associated device computing resources and device control, between similar (homogeneous) and dissimilar (heterogeneous) devices or device types is known in the art as “device interoperability,” or simply as “interoperability”. Some of the necessary and optional enhancement issues involved in providing this interoperability include: (i) content adaptation; (ii) content format; (iii) device drivers; (iv) device-to-device communication; (v) individual device resources and capabilities; (vi) application programs resident on the devices; (vii) loading application programs on the devices; (viii) costs associated with providing device resources to support interoperability; (ix) power or energy management of interoperable devices; and (x) robustness of code executing in an interoperability environment where connections between devices may be intermittent and unreliable. Additional issues include: (xi) the scope of the development, deployment, and testing efforts necessary to enable interoperability; (xii) the reliability problems inherent in having independently developed and/or distributed interoperability components even where detailed interoperability standards exist; (xiii) the difficulty imposed on end users, who must have a high level of technical knowledge and spend appreciable amounts of time and effort administering interoperability; (xiv) the security of interoperable devices, data, and content; and (xv) the size, performance, power management, and cost tradeoffs that can be made with respect to interoperability infrastructure. These issues are addressed in additional detail below.
With respect to content adaptation, there is a need for intelligent scaling or adaptation of content in terms of such application- and data-type-dependent parameters as picture size, user interface, controls and special effects, content format, features, and the like when transferring data, application programs (applications), or control from one device type to another. These operations are collectively referred to as “adaptation.” The greater the sophistication of the adaptation when sharing content, applications, and control, the larger the set of interoperable devices; the more advanced the features on each device, the more efficient the transfer of data, information, and/or other capabilities can be; and the easier the devices, code, data, and content are to use to carry out applications.
A second interoperability issue arises from the undesirable requirement that the user may generally need to specify or at least consider the content format. If the user desiring interoperability with another device is not familiar with the content format and/or how the other devices will deal with the content format even if it can be communicated to the other device, this factor alone may preclude interoperability.
A third interoperability issue arises from the undesirable requirement that the user may generally need to specify, consider, or carry out the loading of one or more special purpose drivers, code, data or content on one or more devices before interoperability can be carried out.
A fourth interoperability issue arises from the undesirable requirement that the user specify, consider, or select the physical communications mechanisms and protocols to be employed in a communication between the user's device and one or more other devices, each of which may have or require a communication mechanism, protocol, interface, or the like.
A fifth interoperability issue arises from the undesirable requirement that the user may need to consider or choose which devices will have the capabilities, memory, processor, and other features necessary to interoperate with his or her device or with the data or applications required.
A sixth interoperability issue arises from the undesirable requirement that the user may need to specify, consider, and/or load the applications that must reside on some or all of the involved and potentially interoperable devices.
A seventh interoperability issue arises from complete or partial application failure due to missing, outdated, or incompatible versions of code, data, or content on one or more devices.
An eighth interoperability issue arises from the undesirable requirement that devices have all the code needed to carry out applications resident at the time of manufacture, or at some time prior to the need for it arising, or else have that code explicitly loaded on some or all of the devices by the user.
A ninth interoperability issue arises from the monetary cost associated with providing the amount of processor or CPU resource, memory resources, electronic gates or logic, or other physical infrastructure necessary to implement the communications and other protocols and applications on devices intended to interoperate.
A tenth interoperability issue arises from the desirability of providing effective power or energy management methodologies to extend battery life or reduce the size of the batteries needed for portable or mobile devices that are intended to interoperate. Although not specifically required for short term interoperability, such power management is highly desirable so that interoperating with other devices will not create such a battery power drain on such devices that users would rarely use the capabilities or be hesitant to permit another user to access their device.
An eleventh interoperability issue arises from the need for a degree of robustness of applications which need to continue to operate in an environment where connections between devices are often intermittent or transient and unreliable. For example, code to carry out an application on a first device that is interoperating with and in communication with a second device should not itself freeze, hang, or otherwise cause a major problem or result in the device itself freezing, hanging, or causing a major problem when the second device moves out of range or otherwise fails to reply to a communication from the first device. Furthermore, it is desirable for all the code, data and content necessary to carry out an interoperability application to be automatically restored and updated if such a second device becomes reliably available again.
A twelfth interoperability issue arises from the unreliability of applications where devices are produced by independent manufacturers based on interoperability standards, which are inherently weak in their ability to predict realistic current and future device needs and capabilities, and which depend on the ability of programmers or circuit designers to completely and correctly understand and implement them and to have such implementations correctly deployed. A thirteenth interoperability issue arises from the slow speed of executing code which cannot rely on the optimizations necessary for graphics, video, sound, and the like.
A fourteenth interoperability issue arises from the lack of availability of interoperable code, data, and content to carry out applications, and of devices which might be employed for interoperability, due to all the issues listed above, which discourage both users and providers.
These fourteen interoperability issues are merely exemplary of the types of issues that do or may arise and are not intended to be a complete list or to identify issues that arise in all situations. For example, interoperability between two identical devices that are intended to interoperate with each other at the time of manufacture may not present all, or even any, of the issues described here; but this type of homogeneous device interoperability does not represent the more common situation that device users face today, and the modest attempts that have been made to address heterogeneous device interoperability issues have been incomplete, insufficiently insightful, and clearly not successful.
Conventional Static and Procedural Solution Attempts
Conventional attempts at providing interoperability solutions have generally fallen into two categories, namely (i) static interoperability solutions (“static”), or (ii) procedural interoperability solutions (“procedural”). Conventional static solutions require each device to support the same specific communications protocols and to send specific, rigidly specified data structures with fixed field layouts. In static approaches, the semantics, code, and display capabilities must exist on all devices before interoperability can be established between those devices. Each content type, application, or device capability must be known, implemented, and installed at the time of manufacture of all devices involved; or alternatively, the user must install application programs, protocols, and/or drivers as required prior to initiating the desired interoperability of devices, software, data, or content. As the user may not be a trained information technology professional, or may not know of or have a copy of the driver, application, operating system component, protocol, or the like, it may be impossible to provide the desired interoperability within the time available. Furthermore, it is often necessary with static approaches to implement a specific set of static solutions in combination.
For example, the sharing of a set of pictures with slideshow capabilities between a digital camera and a television or display device (TV) might require a common static protocol, such as, for example, a Bluetooth wireless capability for sending the slide image data and slide order or sequence information to a TV. It would also require a static content format for the slides and slide order information to be recognized on the TV as something it knows how to deal with. And at least one static slide show program that can render and control a slide show with the specific content format must exist on both the TV and the digital camera. The user may or may not have to separately initiate transfer of the images or pictures and slide order information, find and associate the information on the TV, and run the correct slide show application on the TV. Depending on the sophistication of the static slideshow programs on both sides, the controls on the digital camera may or may not be usable to control the slide show on the TV which was initiated on the camera. Where such camera-based control is not possible, some other mechanism for control must necessarily be provided. Static approaches can result in highly optimized solutions to well-understood specific applications known at the time of manufacture of all the device types which can interoperate. Static approaches, however, have major limitations, including the requirement for almost all capabilities to be known and custom implemented at the time of manufacture; limited ability to upgrade or fix errors after manufacture; and a conventional requirement that each static program implementation must be correctly and completely ported to run on the different devices and exist on all devices prior to interoperation. Often this is accomplished by the loading and updating of specific drivers for specific applications, communications mediums, and the desired set of devices where interoperability is required.
Even when static solutions are available, reliability is compromised due to the inevitability of different versions of standards and applications. Hence, when two devices wish to share data or procedures, failure can occur when the devices adhere to different versions of the standard, or the programs adhere to different versions of the standard, or different versions of the applications reside on the devices. Additional reliability problems arise from inadvertent errors or shortcuts made in the independent implementations of the set of standards used to interoperate. Such standards implementations may interact in unpredictable ways when any two implementations attempt to work together. In general, it is often impractical or impossible to test all the permutations of all the standard implementations across all sets of devices, especially as all the target devices with which an initiating device must interoperate did not exist at the time of manufacture of the initiating device.
One of the more significant limitations of static approaches is that the amount of work required to make N devices or applications interoperable grows very quickly as N, the number of devices and/or applications, gets larger. Manufacturers are currently struggling to create hundreds of static standards for content types, application programs (“programs”), communications protocols, and the like to try to make even limited-size sets of devices interoperable across an ever-increasing set of devices and applications. This also conventionally requires that every device have the memory, screen size, controls, processor, and battery power to support every static solution for every desired interoperability option across all desired interoperable devices. Otherwise, true device and application interoperability is not achieved. To better illustrate this conventional problem and limitation, consider that currently, adaptation requires a software engineering development project for each type of device (N devices) that has to share with each other type of device (N−1 devices). From a development point of view this is an N×N or N2 order problem, because in a universe of N device types that all wish to interoperate with each other, there are N×(N−1) adaptations to consider, develop, implement, and test.
Moreover, as the number of devices increases and the required adaptations rise toward N2, the expense and difficulty of attaining a high-quality product tends to increase at an even faster rate due to the increased overall complexity. This is because the difficulty of attaining high-reliability, high-quality software and hardware solutions increases as overall complexity increases. This is not purely a “size of source code” issue, but is due to exactly the kinds of factors prevalent when trying to get devices from different manufacturers to work together, including unpredictability of behavior, unpredictability of events, unknown future capabilities, and so on.
For example, as the number of devices increases from 5 to 6, interoperability adaptation requirements increase according to the relationship N×(N−1) from 20 to 30. And as this number increases, the overall complexity of the project grows faster still.
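The N×(N−1) growth described above can be tabulated with a few lines of arithmetic; the short sketch below (Python is used here purely as an illustrative scratch pad and is not part of any described system) shows how quickly the adaptation burden rises as device types are added:

```python
def adaptations(n: int) -> int:
    """Pairwise adaptations needed for n device types: each of the
    n types must be adapted to each of the other n - 1 types."""
    return n * (n - 1)

# The adaptation burden grows quadratically as device types are added.
for n in range(5, 9):
    print(f"{n} device types -> {adaptations(n)} adaptations")
# 5 -> 20, 6 -> 30, 7 -> 42, 8 -> 56
```

Note the count of fifty-six adaptations for eight device types, which corresponds to the brute force case of FIG. 2 discussed below.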
This N-squared order problem wherein getting N devices to work together requires substantially N2 adaptations is illustrated in FIG. 1. It will be appreciated that conventional inter-device cooperation using a static approach may be relatively simple for a user to use but requires a high degree of development, administration and deployment effort and continual updating to maintain compatibility and interoperability between old devices, applications, and data types, and new devices, applications, and data types.
With reference to FIG. 2, there is shown the interactions for inter-device cooperation, or limited interoperability, for just eight device types using a brute force approach requiring fifty-six adaptations and an additional fifty-six test procedures.
Separate from the N2 problem of the brute force method, most static approaches also involve combining a number of standard or standards efforts. The Microsoft originated UPnP (Universal Plug and Play) approach has perhaps the largest following and scope of the static approaches. UPnP is a static non-procedural approach to solving some of the problems associated with device interoperability by incorporating a set of static (non-procedural based) standards, and attempting to enumerate all the different classes of devices and services, each with a different XML or data structure based description. However, even the UPnP approach suffers significant limitations, some of which are briefly described below.
First, UPnP is heavyweight in that it requires large collections of modules and code, power, and memory to run. This makes it unsuitable for thin, low-cost, battery-powered devices that may have a very modest processor, little random access memory, and a small battery capacity.
Second, UPnP offers little content or feature optimization capability. UPnP generally assumes one size application, content, or user interface will work well on all involved devices. This may have been a reasonable assumption a decade ago for Microsoft Windows based desktop personal computers (PCs), but is now a poor assumption and basis for operation in a world filled with devices that must interoperate that are as different as a pager, a digital camera, and a personal computer, not to mention the likely set of hybrid and diverse electronic devices to arise in the next few decades.
Third, UPnP offers only a limited set of user interfaces that do not meet the needs of the large set of diverse devices now available.
Fourth, UPnP requires programs and drivers that are needed to perform the requested task to reside on all devices before they can be used.
Fifth, while the intent of UPnP is at least partially to avoid the N-Squared (N2) problem, the reality is that using UPnP as a basis for interoperability would still require a massive N-Squared (N2) development/deployment/testing effort, as described above, to bring out new applications, which require programs, code, data, and content to be ported, distributed, and tested for all interoperating devices if all the permutations of independent implementations of complex standards are to result in reliable interoperability.
Sixth, UPnP programs, devices, content, and standards must all be synchronized so that the same, or at least compatible, versions and updates are deployed simultaneously.
Seventh, as device and content capabilities evolve, devices based on existing UPnP programs, data, and content tend to fail to support new devices.
Eighth, the cost of maintaining compatibility among existing device, standard (including UPnP), and application versions increases overall project complexity ever more rapidly.
Ninth, as the project complexity increases, due to the problems inherent in the complexity and diversity of UPnP as a standards-static-based approach, ease-of-use and reliability degrade.
Tenth, UPnP would still not address the requirements imposed by the frequent need to send one or many data structures between devices, especially where these data structures are expressed in a human-readable text format that requires considerably more transmission bandwidth and time than binary representations. Furthermore, many of the data structures to be sent between devices are expressed in XML, a human-readable text format, rather than in a binary or less generalized format; the XML format requires significantly more CPU operations, memory, and/or software program code to perform the CPU-intensive parsing operations that XML demands.
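The text-versus-binary overhead described above can be made concrete with a minimal sketch; the record fields, element names, and layout below are invented for illustration and do not correspond to any actual UPnP data structure:

```python
import struct
import xml.etree.ElementTree as ET

# Hypothetical three-field record (names and layout invented for illustration).
device_id, width, height = 4097, 1024, 768

# Human-readable XML text representation of the record.
xml_text = (
    f"<frame><device>{device_id}</device>"
    f"<width>{width}</width><height>{height}</height></frame>"
)

# Equivalent packed binary representation: three unsigned 32-bit integers.
binary = struct.pack("<III", device_id, width, height)

print(len(xml_text.encode()), "bytes as XML vs", len(binary), "bytes as binary")
# 75 bytes as XML vs 12 bytes as binary

# Recovering the fields from XML requires a full parse ...
root = ET.fromstring(xml_text)
parsed = tuple(int(child.text) for child in root)

# ... while the binary form needs only a single fixed-offset unpack.
unpacked = struct.unpack("<III", binary)
assert parsed == unpacked == (device_id, width, height)
```

Even in this tiny example the text form is several times larger than the binary form, and the parsing step required to recover the fields has no counterpart in the fixed-layout binary case.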
Finally, with respect to a few of the limitations imposed by conventional static approaches to device and application interoperability, static standards often limit the number of protocols, content types, and application types in order to reduce the overall complexity and size of standards-based implementations. An example is that UPnP allows only TCP/IP as a base communication protocol. This effectively eliminates the efficient use of other important existing communications protocols such as Bluetooth, USB, MOST, and IR, and all other non-TCP/IP protocols.
Conventional Procedural Solution Attempts
An alternative to the static standards approach relies on creating a procedural standard. Procedural standards techniques implemented in hardware or emulated in software are ubiquitous. There exist a large number of hardware microprocessors, each with an instruction set and interfaces optimized to different classes of problems, and there are numerous higher-level software-emulated instruction sets and environments in existence that are optimized around specific task sets. These include, for example, Java (an approach generally optimized for portability and ease of programming), PostScript (an approach generally optimized to represent printed pages and printer control functions), and Storymail Stories (generally optimized for efficiently representing a very broad range of rich multimedia messages). Java and PostScript are well known in the computer arts. Aspects of Storymail Stories and related systems and methods are described, for example, in United States Patent Application Publication No. 20030009694 A1, published 9 Jan. 2003, entitled Hardware Architecture, Operating System And Network Transport Neutral System, Method And Computer Program Product For Secure Communications And Messaging, and naming Michael L Wenocur, Robert W. Baldwin, and Daniel H. Illowsky as inventors; in United States Patent Application Publication No. 20020165912 A1, published 7 Nov. 2002, entitled Secure Certificate And System And Method For Issuing And Using Same, and naming Michael L Wenocur, Robert W. Baldwin, and Daniel H. Illowsky as inventors; and in other patent applications.
Procedural interoperability approaches typically involve establishing or otherwise having or providing a common runtime environment on all devices that are to interoperate, so that programs, procedures, data, and content can be sent between devices in addition to static data structures and static applications. Currently one leading procedural interoperability solution is the Java platform along with the JINI extensions to it. As an example of a Java-based procedural approach, a slideshow written in Java could encapsulate or reference the pictures and slide ordering or sequence data, interrogate the other device, adapt the content to the other device, and send the information and a Java slideshow program to the TV. The Java slideshow program could be run on the camera after manufacture and enable interoperability with a Java-enabled TV even if the slide show program did not pre-exist on the TV.
While Java has been widely deployed and has had some limited success in providing correspondingly limited interoperability, it has serious deficiencies that have prevented its broad use, especially for small mobile devices where cost, power efficiency, processor efficiency, and memory efficiency for storing program code, data, and temporary buffers are very important issues. Also, the Java Virtual Machine (VM) approach to binary compatibility of applications running on different devices is in conflict with the very reason those devices exist. Java and other conventional procedural interoperability approaches have severe limitations. Five exemplary limitations are described below.
First, the Java Virtual Machine approach makes, or at least attempts to make, all devices look like the exact same virtual computer to applications in order to allow the same binary code (Java binary code) to run on all devices. In order to maintain binary compatibility, it is necessary to avoid attempts to access device capabilities that were not predefined as part of the Virtual Machine definition and implementation. Thus binary compatibility is lost across multiple devices if native functions are needed to access capabilities of any device that are not part of the common Virtual Machine definition. Since most non-PC device hardware and software are specialized or optimized for a particular purpose, form factor, price point, user interface, or functionality, it is often the case that their basic, unique native functions or capabilities must be accessed by the applications most often targeted for the device. For most portable or special purpose devices, the very reason for their existence is that there is a need for uniquely different capabilities and functions. This runs counter to the Java Virtual Machine approach of hiding the differences between devices to make them all look the same to the application.
Secondly, Java is a general purpose language optimized for ease of programming at the expense of efficient execution and efficient memory use. Therefore, it will not be the most efficient or effective solution for many thin devices with modest processing capability and little available memory or where cost is important.
Thirdly, multimedia content response times cannot be assured using a Java procedural approach. Most Java programs are heavily reliant on the frequent allocation and de-allocation of varying size memory structures causing memory fragmentation. This memory fragmentation often leads to periods when the processor within the device must stop rendering the content while it performs garbage collection within the memory. Users will often experience a breakup in smoothness of audio and video rendering when this occurs.
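The garbage-collection pause described above can be sketched schematically. The example below uses Python, whose cycle collector differs from a JVM garbage collector, but the stop-the-world effect during a collection pass is analogous; the object counts are arbitrary illustration values:

```python
import gc
import time

# Build many small cyclically-linked objects: the kind of garbage that
# reference counting alone cannot reclaim, so a collection pass must find it.
garbage = []
for _ in range(200_000):
    a, b = [], []
    a.append(b)
    b.append(a)
    garbage.append(a)
del garbage  # the cycles are now unreachable but not yet freed

# Time an explicit full collection; during a pause like this, a device
# could not keep rendering audio or video smoothly.
start = time.perf_counter()
unreachable = gc.collect()
pause = time.perf_counter() - start

print(f"collector found {unreachable} objects in {pause * 1000:.1f} ms")
```

The measured pause grows with the amount of fragmented garbage accumulated between collections, which is why allocation-heavy multimedia code exhibits the playback breakups described above.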
Fourthly, Java presents significant speed and size issues. Java and its associated technologies and libraries necessary for interoperability are relatively heavyweight and require a relatively large number of CPU cycles for execution and relatively large amounts of memory for storage. Interoperability programs written in Java that include user interfaces, multimedia rendering, device enumeration, robust cross-platform device interoperability, and dynamic adaptation of code and data to send to different device types require a large amount of code to be written and exchanged, because all of these functions must be built up using libraries or special-purpose Java code sequences, as none of these operations are native to the Java instruction set or environment. The result is that Java programs for interoperability are large and slow, limiting their use on devices where limited processor power, battery life, or cost are important issues. Where the devices do not have sufficient resources, a Java-based interoperability solution is not possible.
Fifthly, Java at best provides a limited and incomplete base implementation for interoperability. This makes it necessary for a large number of libraries to be existent on all devices that are to interoperate, or for the devices to have a high-speed, always-on connection to servers which contain the necessary program code. Functionality for operations not included in the base instruction set of Java must be provided in the Java language itself, greatly limiting the runtime performance versus that of native code that might otherwise be used to implement these operations. Missing interoperability base operations include native support for: (i) multimedia animation playback; (ii) adaptation of programs, data, content, user interface, or controls to target other devices; (iii) computer generation of custom programs so that devices that originate content can automatically and easily marry that content with interoperability programs; (iv) device, service, and resource discovery over a wide variety of protocols; (v) synchronization and/or serialization of processes running in different devices; (vi) device power management; and (vii) application and synchronization recovery when devices intermittently lose and regain their connections.
Often, when a Java VM specification proves to be deficient for a class of devices or applications, a new Java VM specification arises to address the now-known native support needs of this new class of devices; however, Java programs written for one VM are not generally binary compatible or interoperable with devices which conform to different VM specifications. Java VM specifications exist for various device classes, including J2ME, MIDP 1.0, MIDP 2.0, and CDC, but this proliferation of ever more non-interoperable Java VM specifications and implementations continues to cause a form of fragmentation of the types and forms of programs and devices that achieve even a small degree of interoperability through the use of Java VMs.
Xerox Palo Alto Research Center (PARC) has announced a variation on the Java plus Jini interoperability technologies which it calls “Obje”. Obje is explicitly based on Java or, as an alternative, an unspecified and unrealized similar virtual-machine-based technology. While Obje points to some ways of providing the procedural methodologies needed to effectively team devices and to eliminate the requirement for all devices to have the programs ported or resident on all machines, it is expected that Obje implementations will have capabilities and limitations similar to those of the Java plus Jini approach, as they offer no details to indicate any divergence from the Java VM model for the procedural base to be used.
PostScript, another procedural approach, has been around for a considerable time and provides a printed page description language which has been very effective at establishing a high degree of interoperability between PostScript documents and PostScript printers. PostScript documents are programs which, when executed on a PostScript software engine inside a printer, control the hardware printer engine and recreate the image of printed pages while taking advantage of the highest resolution possible on the printer on which they find themselves. PostScript is largely limited to the interoperability of documents and printers. Some of the reasons for this limitation include the fact that PostScript documents are expressed in human-readable text. This expands the size of documents and programs greatly over binary programs. The text requires parsing operations when the programs are run, requiring more processor cycles and scratch memory than would be necessary if the programs were expressed in binary. Furthermore, PostScript does not provide any significant native support for: (i) multimedia video/audio/animation playback; (ii) adaptation of application, data, content, user interface, or controls to target other devices; (iii) device, service, and resource discovery; (iv) synchronization and serialization of programs running in multiple devices; (v) device power management; (vi) finding and using other devices; (vii) maintaining robust connections between devices; or (viii) efficient access to various storage mediums, including the flash memory now common on devices.
Storymail Stories provide a variable-length procedural instruction set designed for encapsulating multimedia messages. Aspects of Storymail Stories and related systems and methods are described, for example, in the United States Patent Application Publications identified above, and in other patent applications.
The Storymail invention, including the Storymail Story structure and associated technologies, was invented by Daniel Illowsky, the same inventor as in this patent application. This Storymail instruction set allows trans-coded multimedia content to be represented in a universal procedural format called "Stories," which provided significant advantages over multimedia content representations known theretofore. However, the Storymail technologies did not fully address device-to-device interoperability issues, and should one attempt to apply the Storymail technology to achieve device-to-device interoperability, several problems and limitations would quickly become apparent. First, the Storymail instruction set is optimized for a small engine size, at the cost of requiring a large amount of scratch memory. Second, the Storymail instruction set is burdened with the implementation of a specialized threading model while running content. Third, Storymail Stories require trans-coding of even basic content types, such as JPEG or Bitmap images. Fourth, Storymail Story technology does not provide native support for device, service, or resource discovery. Fifth, Storymail Story technology has no native base support for synchronization of programs running in multiple devices. Therefore, even though Storymail technology and the universal procedural media format provided significant advancements over the conventional arts of the day, it does not satisfactorily solve the device-to-device interoperability issues that are now apparent in the electronic and computer arts.
It will therefore be apparent that neither static nor current procedural approaches provide satisfactory device-to-device interoperability, particularly for heterogeneous devices and a priori unknown applications. The problem is further compounded when the current client-server and peer-to-peer inter-device interoperability models are considered.
Conventional Client-Server and Peer-to-Peer Models
In the current state of the art, interoperating programs running on multiple devices generally use either a Client-Server model or a Peer-to-Peer model.
In a client-server environment model, one device (i.e., the server) provides the services and another device (i.e., the client) makes use of the services. This allows multiple client devices to take advantage of a single, more capable server device to store and process data, while the lighter-weight client device need only be powerful enough to make requests and display the results. A limitation of the client-server approach is that generally the server must be registered on a network and accessible to the clients at all times over a relatively high-speed connection.
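The asymmetry described above can be illustrated with a minimal sketch, here using Python TCP sockets purely for illustration (the hostname, port selection, and the "uppercase" service are hypothetical stand-ins for a real server's storage and processing role): the server owns the processing, while the thin client only sends a request and receives the result.

```python
import socket
import threading

def serve_once(server_sock):
    """The 'capable' server: accepts one client, does the processing."""
    conn, _addr = server_sock.accept()
    with conn:
        # Read the client's full request (client signals end-of-request
        # by shutting down its write side).
        request = b""
        while True:
            chunk = conn.recv(1024)
            if not chunk:
                break
            request += chunk
        # Stand-in for server-side processing of stored data.
        conn.sendall(request.upper())

# Bind to an ephemeral loopback port for this self-contained demo.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]

t = threading.Thread(target=serve_once, args=(server,))
t.start()

# The 'thin' client: just powerful enough to make a request
# and display the result.
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", port))
client.sendall(b"hello from a thin client")
client.shutdown(socket.SHUT_WR)          # end of request
chunks = []
while True:
    c = client.recv(1024)
    if not c:
        break
    chunks.append(c)
reply = b"".join(chunks).decode()
client.close()
t.join()
server.close()

print(reply)  # -> HELLO FROM A THIN CLIENT
```

Note that the sketch also exhibits the limitation discussed above: the client can do nothing useful unless the server is reachable at a known address for the entire exchange.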
In the peer-to-peer environment model, any interoperating device is generally assumed to have all the facilities necessary to carry out the application. Also in the peer-to-peer model, generally all devices that interoperate must have knowledge of the programs or services to be employed before establishing a connection, so that they can agree on appropriate coupling and protocols. Having to possess the full capabilities to carry out the application, and the need to have the peered program protocol layers pre-existing on all interoperating devices, are significant limitations to applying a peer-to-peer model to device interoperability. In practice, peer-to-peer devices will often encounter different, non-perfect implementations or versions of software which will fail to cooperate due to unpredictable interactions of the differing non-perfect implementations. To correct these problems, often drivers or other software must be updated, distributed and installed. While this can often correct interoperability problems, the administration, implementation, and distribution burdens, the frustration associated with failures, and the complexity, sophistication and time needed to fix them remain serious problems of peer-to-peer based device interoperability.
In the world of personal computers, the current most popular, though limited, interoperability platform is the Microsoft Windows™ operating system. Under Microsoft Windows™, theoretically any application binary image can run and be useful on any standard PC-architecture device running the Microsoft Windows™ operating system, regardless of whether the PC was manufactured by IBM, Toshiba, Sharp, Dell or any other manufacturer. In practical terms, however, even Microsoft Windows™ has limitations with interoperability in real-world operating environments.
As computing devices and information appliances diverge from the generic, general-purpose Personal Computer model into more specific, specialized devices such as mobile phones, mobile music players, remote controls, networkable media players and routers, and a myriad of other devices, there is a need for an application platform that allows the quick and efficient development of applications that are not only binary compatible across all (or at least most) devices, but which can form ad-hoc teams of devices on-the-fly over multiple protocols based on the resources of each device. The applications need to be able to spread their execution across all the devices in order to carry out applications that no one device has all the software, hardware or other resources needed to implement on its own.
Currently in the state of the art, there is no effective software platform for creating programs that can run and spread themselves across multiple devices, particularly when the devices to be spread to are of diverse types and have heterogeneous device hardware, software, and operating system (if any) characteristics. Although there are many standardized embedded operating systems, because of the need for the applications to access the unique capabilities and features of the device, including the arrangement of displays and controls, there is little chance that a program built for one device will be useful if run on a different device, even if both devices use the same embedded operating system and processor. Thus, it is clear that there is a great need in the art for an improved method and system for providing reliable, easy-to-use device, application program, data and content interoperability, while avoiding the shortcomings and drawbacks of the prior art apparatus and methodologies heretofore known.