Personal computers (PCs) traditionally have been used to execute a multitude of software applications, features, and functions. The applications provide the user with tools to accomplish tasks, such as, but not limited to, document processing, spreadsheet management, email exchanges, and Internet browsing. These applications are executed over an operating system installed in the PC. Previously, only a limited number of operating systems were available for PCs, for example, Windows®, Mac OS®, and Linux®, which remain the predominant PC operating systems.
However, recently the popularity of newly developed operating systems (e.g., iOS™ by Apple®, Android™ by Google®, Chrome OS™, and the like) has significantly increased. Such operating systems have been developed mainly to be installed in handheld computing devices, such as smart phones, tablet computers, and the like. These devices are also capable of executing software applications (known as apps) over the operating system of the device.
Software applications, regardless of the type of operating system, are installed and set up using an automated installation process. The installation process is designed to enable the integration of the new functionality into the operating system, as well as to ensure that the application can be removed.
Virtualization technology allows executing virtual applications inside an isolated virtual environment having its own virtual file system and virtual registry. That is, execution of a virtual application will not conflict with, or impact the execution of, other virtual applications that may coexist in the virtual environment.
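The isolation concept described above can be illustrated with a minimal sketch, assuming a copy-on-write overlay in which each virtual application writes to a private layer while reads fall through to the shared host state (the class and key names below are illustrative, not taken from any actual implementation):

```python
class VirtualRegistry:
    """Copy-on-write overlay: writes go to a private per-application
    layer; reads fall through to the shared host registry when no
    private value exists."""

    def __init__(self, host):
        self._host = host       # shared, read-only host view
        self._private = {}      # this application's virtual layer

    def set(self, key, value):
        self._private[key] = value

    def get(self, key):
        if key in self._private:
            return self._private[key]
        return self._host.get(key)


host = {"HKLM/Software/Shared/Version": "1.0"}
app_a = VirtualRegistry(host)
app_b = VirtualRegistry(host)
app_a.set("HKLM/Software/Shared/Version", "2.0")  # visible only to app_a
```

Because writes never reach the shared layer, the two virtual applications see different values for the same key and therefore cannot conflict with each other.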
Typically, an operating system operable in PCs, such as Microsoft® Windows XP®, Microsoft Vista®, or Microsoft® Windows 7, includes a registry file for storing operating system, user, and application settings, dynamic link libraries (DLLs) which contain shared code, and named objects for naming functions shared by different processes. However, this structure of an operating system is incompatible with other types of operating systems designed for execution on handheld devices.
For example, the Android™ OS is based on the Linux® OS kernel for core system services such as security, memory management, process management, network stack, and driver model. The kernel also acts as an abstraction layer between the hardware and upper layers of the Android™ OS. These layers include a set of C/C++ libraries used by various components of the Android™ OS, a runtime, and an application framework that provides an interface between the applications and the operating system's functions. As a result, an application, e.g., Microsoft Office Word™ 2010, developed for Windows-based operating systems, cannot be executed over the Android™ OS. This limits the user to a set of applications developed specifically for a certain type of operating system. That is, a user of a smart phone equipped with an Android™ OS is limited to accessing and running only applications developed for this type of operating system.
In addition, a PC's operating system operates in conjunction with the file system of the PC. For example, the new technology file system (NTFS) is the standard file system for the Windows XP®, Windows Server® 2003, Windows Server® 2008, Windows Vista®, and Windows® 7 operating systems. Handheld computing devices do not implement NTFS, and their operating systems often do not support this file system. Thus, files cannot be saved in a structure of folders and sub-folders, such as those provided by the NTFS. In most cases, users of handheld computing devices rely on cloud storage services, e.g., Dropbox™, Box.net, and the like, to store their data files.
One technique that allows for the execution of a software application across different operating systems includes: executing a cloudified application corresponding to the software application upon launching a mobile application on a computing device compliant with a second type of operating system; receiving inputs generated by the mobile application; rendering outputs based in part on the received inputs; and streaming the rendered output to the computing device to be displayed by the mobile application. The technique allows for execution even when the first OS and second OS are incompatible with each other, for example, when the first OS is a Windows-based OS while the second OS is a non-Windows-based OS (e.g., Android™). An example of such a technique is disclosed in the above-referenced patent application Ser. No. 13/720,093, assigned to the common assignee and herein incorporated by reference in its entirety.
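The receive-inputs, render, and stream steps listed above can be sketched as a simple server-side loop. The following is a minimal illustration only; the class and method names are hypothetical stand-ins, not the actual interfaces of the referenced technique:

```python
class CloudifiedApp:
    """Hypothetical stand-in for a legacy application executed
    server-side as a cloudified application."""

    def __init__(self):
        self.text = ""

    def inject_input(self, event):
        # Replay an input generated by the mobile application.
        if event["type"] == "key":
            self.text += event["char"]

    def render_output(self):
        # Render the updated screen as a frame to be streamed.
        return f"<frame showing: {self.text!r}>"


def serve_session(app, events, stream):
    """Core loop: replay each received input into the cloudified
    application, render the new output, and stream it back."""
    for event in events:
        app.inject_input(event)
        stream.append(app.render_output())


frames = []
serve_session(CloudifiedApp(),
              [{"type": "key", "char": "H"}, {"type": "key", "char": "i"}],
              frames)
```

In an actual deployment the `stream.append` step would push each rendered frame to the mobile device over the network (e.g., via HTTP), as described below for the system 100.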
FIG. 1 shows a diagram illustrating a system 100 useful in describing the execution of software applications across different operating systems. In the system 100, a plurality of servers 110-1 to 110-N can be accessed by a computing device 120 through a network connection 130. Each of the plurality of servers 110-1 to 110-N allows the execution of software applications across different types of operating systems. The computing device 120 is typically a handheld computing device including, for example, a smart phone, a tablet computer, a PDA, and the like, that runs an operating system that is not Windows-based, for example, iOS®, Android™, Chrome OS™, and the like.
Typically, the servers 110-1 to 110-N are deployed in one or more datacenters that can provide cloud computing services. In the system 100, the computing device 120 can execute legacy applications, which are software applications designed to be compliant with Windows-based operating systems. Examples of such legacy applications include the various versions of Microsoft Office® Word, Excel, PowerPoint, Outlook, Visio, Publisher, and the like, as well as Microsoft Internet Explorer®, Flash® applications, Java® applications, Silverlight® applications, numerous in-house legacy applications that were all developed only for Windows and are too costly or not feasible to re-write for other operating systems, and so on. Software applications executed on mobile-compatible operating systems, such as iOS®, Android™, and Chrome OS™, will also be referred to hereinafter as "mobile applications". Typically, such applications are downloaded from a marketplace of the operating system's provider, e.g., Android Market and Apple Store. The servers 110-1 to 110-N are configured to execute a virtualized cloud version of a legacy application, referred to as a "cloudified application".
A mobile application is downloaded from the marketplace of the operating system's provider and displayed as a mobile application icon on a display of the computing device 120. The mobile application is related to the legacy application (and hence to the cloudified application) to be executed. By clicking on the icon, the user causes execution of the application to launch on the device 120. As a result, the user will enjoy the full functionality of the legacy application while it is executed as a mobile application on the device 120. In the system 100, a cloudified application corresponding to the legacy application is executed by one or more of the servers 110-1 to 110-N, and the rendered outputs are streamed to the computing device 120, over the network 130, by means of, for example, HTTP. To the user, it appears that the legacy application is actually executed by the device 120.
As an example, the legacy application Microsoft Office Word 2010™ is executed on a tablet computer. As shown in FIG. 2A, an icon 210 of the Word mobile application is placed on the tablet's display 200, as with any other mobile application. As shown in FIG. 2B, tapping on the icon 210 causes the device to launch the Word application. As can be seen, the user interface and the functionality are as if a Word application had been executed on a PC running a Windows-based OS. The user can open, review, edit, and save a document.
Returning to FIG. 1, a server 110-i (i=1 to N) hosts a cloudified application version of the Word application, and every input from the device is communicated to the server 110-i. The captured inputs include, for example, the location of the cursor, any character typed using the keypad, a selection of a task from the toolbar (e.g., opening a file, changing font size or color, applying track-changes options), and so on. The server 110-i processes the received input and renders a new output display that is communicated to the tablet computer. For example, as shown in FIG. 2C, upon a selection by a user to open a document, a dialog box is displayed to the user.
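The captured inputs listed above must be serialized into messages before being communicated to the server 110-i. A minimal sketch of such an encoding follows; the field names and the use of JSON are purely illustrative assumptions, not the actual wire format of the system 100:

```python
import json


def encode_event(kind, **fields):
    """Serialize one captured input (cursor move, keystroke, toolbar
    selection, etc.) as a JSON message for transmission to the server.
    The message schema here is hypothetical."""
    return json.dumps({"event": kind, **fields}, sort_keys=True)


# Examples corresponding to the captured inputs described above:
msgs = [
    encode_event("cursor", x=120, y=45),        # location of the cursor
    encode_event("key", char="a"),              # character typed on keypad
    encode_event("toolbar", action="open_file") # task selected from toolbar
]
```

On the server side, each message would be decoded and replayed into the cloudified application before the new output display is rendered.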
Standard techniques for capturing changes in the screens and sending such changes from a server to a client are based on terminal services (TS) and a remote desktop protocol (RDP). As shown in FIG. 3, applications use standard Application Programming Interfaces (APIs), such as the Graphics Device Interface (GDI) 400, User32 405, and DirectX 410, to transmit the information displayed on the screen of the computer 415. These APIs interact with a kernel 420 of the operating system that passes the information to the screen driver 425 using the TS 430. Similarly, I/O events, such as keyboard and mouse events, are sent back to the application through mouse and keyboard drivers 435. The TS module 430 is located between the kernel 420 and the computer's drivers 425, 435. The capturing of screens and I/O events is performed by the TS module 430.
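The change-capturing step performed by such a TS module can be illustrated with a tile-based sketch: the screen is divided into fixed-size tiles, and only tiles whose contents changed since the previous frame are transmitted. This is a conceptual simplification, not the actual RDP encoding:

```python
TILE = 2  # tile edge length in pixels (tiny, for illustration only)


def changed_tiles(prev, curr):
    """Yield (row, col) indices of tiles that differ between two frames.
    Frames are lists of equal-length rows of pixel values."""
    for r in range(0, len(curr), TILE):
        for c in range(0, len(curr[0]), TILE):
            block = lambda f: [f[r + i][c:c + TILE] for i in range(TILE)]
            if block(prev) != block(curr):
                yield (r // TILE, c // TILE)


# A 4x4 frame in which a single pixel changes in the top-left tile:
prev = [[0, 0, 0, 0] for _ in range(4)]
curr = [row[:] for row in prev]
curr[0] = [1, 0, 0, 0]
dirty = list(changed_tiles(prev, curr))  # only the top-left tile is dirty
```

Transmitting only the dirty tiles, rather than the full frame, is what makes change-based remote display protocols bandwidth-efficient.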
Generally, a Windows-based OS provides objects managed using Windows Stations and Desktops, each of which is a securable object. A Windows Station is a securable object associated with a process, and it contains a clipboard, an atom table, and at least one desktop object. A Windows desktop is a securable object contained within a Windows Station. A desktop has a logical display surface and contains user interface objects, such as windows, menus, and hooks. A desktop can be used only to create and manage windows.
Windows objects exist within Windows Sessions, with each session consisting of all processes and other objects that represent a single user's logon session. This includes windows, Desktops, and Windows Stations. Each session can contain more than one Windows Station, and each Windows Station can have more than one desktop. However, at any given time, only one Windows Station is permitted to interact with the user at the console. This interactive station, called Winsta0, contains three desktops, winlogon, default, and disconnect, corresponding to the logon screen, the user desktop, and the disconnect desktop. All three have separate logical displays, so when a user locks the screen, for example, the default desktop disappears and the winlogon desktop appears instead.
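The Session / Window Station / Desktop hierarchy described above, together with the single-interactive-station rule, can be modeled with a short sketch. The class names below are illustrative Python stand-ins, not the Win32 objects themselves:

```python
class Desktop:
    def __init__(self, name):
        self.name = name


class WindowStation:
    def __init__(self, name, interactive=False):
        self.name = name
        self.interactive = interactive
        self.desktops = []      # a station can hold multiple desktops


class Session:
    """A logon session: holds Window Stations, of which at most one
    may be interactive (mirroring the Winsta0 rule)."""

    def __init__(self):
        self.stations = []

    def add_station(self, station):
        if station.interactive and any(s.interactive for s in self.stations):
            raise ValueError("only one interactive Window Station allowed")
        self.stations.append(station)


session = Session()
winsta0 = WindowStation("Winsta0", interactive=True)
for name in ("winlogon", "default", "disconnect"):
    winsta0.desktops.append(Desktop(name))
session.add_station(winsta0)
session.add_station(WindowStation("Service-0x0-3e7$"))  # non-interactive
```

Attempting to add a second interactive station to the session raises an error, which mirrors the constraint that makes the multi-station approach discussed below unworkable for concurrent graphical transmission.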
The conventional approach for capturing and transmitting screens and I/O events suffers from various limitations. For example, to transmit multiple applications to multiple users from a single server, each user must have his own Windows Session, which is resource intensive. To overcome this limitation, multiple Window Stations can be executed under the same Windows Session. Theoretically, two applications can run in two different Window Stations and be transmitted to a user concurrently. However, this cannot be performed directly, as there can be only a single Interactive Window Station (Winsta0) in the OS. All other Window Stations (such as Services) are non-interactive and cannot communicate graphically with the user.
Another potential technique for transmitting multiple applications to multiple users from a single server includes opening multiple Windows Desktops in a single Window Station. In a Windows XP® OS, for example, the Interactive Window Station (Winsta0) includes three Desktops: WinLogon, a default (regular) desktop, and Disconnect. Theoretically, multiple Desktops can be created in Winsta0, and an application can be opened in each one of the Desktops and transmitted concurrently. However, this cannot be achieved since, at any given time, there can be only one Interactive Desktop; the desktops are separate logical displays, only one of which is available at any given time.
Furthermore, when two applications are opened, they both reside in the default Desktop. It would be beneficial to transmit each one of them separately, regardless of which window is top-most. As shown in FIG. 4, when two application windows 450, 455 are opened, one over the other, both cannot be transmitted as two separate windows. That is, there is no way to transmit the windows of two or more applications at the same time, because the screen driver can only show the top window. In the exemplary diagram provided in FIG. 4, the top application window 450 and only a portion of the application window 455 are displayed.
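The occlusion limitation shown in FIG. 4 can be illustrated with a small compositing sketch: the screen driver paints windows bottom-to-top, so any region covered by the top-most window hides whatever lies beneath it, and the lower window can never be captured in full from the screen surface. The coordinates and labels below are illustrative only:

```python
def composite(screen_w, screen_h, windows):
    """Paint windows onto the screen bottom-to-top.
    windows: list of (x, y, w, h, label), bottom window first."""
    screen = [["." for _ in range(screen_w)] for _ in range(screen_h)]
    for x, y, w, h, label in windows:   # later windows paint over earlier
        for r in range(y, min(y + h, screen_h)):
            for c in range(x, min(x + w, screen_w)):
                screen[r][c] = label
    return screen


# Two overlapping windows, analogous to windows 455 (bottom, "B")
# and 450 (top, "T") in FIG. 4:
screen = composite(6, 4, [(0, 0, 4, 3, "B"),
                          (2, 1, 4, 3, "T")])
```

In the resulting screen buffer the overlap region holds only "T" pixels: the portion of the bottom window under the top window is simply not present on the display surface, which is why capture at the screen-driver level cannot yield two complete, separate windows.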
Therefore, it would be advantageous to provide an efficient solution to overcome the shortcomings of currently available techniques for capturing and transmitting screens and I/O events.