The present invention relates to computer application technology. In particular, it relates to an improved method and system for providing the end-user of a mainframe application running a limited, character-oriented transfer protocol, like the IBM 3270 protocol, with a combined rendering of non-character data, i.e., new media data, and traditional character data.
The so-called new media data extends traditional computer data formats such as EBCDIC coded text files, DB2 records and tables into more natural data formats for the interaction of humans and computers by incorporating images, motion pictures, voice, audio, and video.
This kind of data is becoming more and more important in the information technology business. It accompanies the traditional computer data, and an end-user dealing with both types of data expects to view them at the same time on the same rendering device.
Within today""s information technology environments we normally see a two-tiered or three-tiered infrastructure:
Tier 1 is thereby represented by an "intelligent" PC (Personal Computer), NC (Network Computer), or Workstation, which is used to render the data produced by applications running on tier 2, the so-called application servers.
Mostly the tier 1 machines communicate with tier 2 machines through so-called terminal emulators, such as 3270 emulators connecting to an S/390 application server running CICS/IMS applications, or a telnet emulator which connects to application servers using the UNIX operating system, like SAP/R3.
The application server in that case does not know that it is connected to an "intelligent PC" but just sends ASCII or EBCDIC data down to that "terminal". In all those cases all functionality resides on the tier 2 application server. That means that the structured data is "pushed" from the host to the client and the rendering of this data is controlled by the host.
In the context of said new media the term 'rendering' is to be understood as comprising playing back image data, audio data, video data, or motion pictures with the respective suited hardware and/or software arrangement.
The rendering of new media data, in contrast, is normally initiated from the client. A media renderer "pulls" the data from the application server, or at least initiates the "push" from the server. If a user, for example, wants to view a video, the client workstation passes the so-called Universal Resource Locator, further referred to herein as URL, and the corresponding meta data to the server. Then the server streams that video to the client. However, the client workstation has to get the URL first, in order to initiate a "play" request to the server.
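The client-initiated "pull" model described above can be sketched as follows; this is a minimal, hypothetical illustration in which the function names and the message format are assumptions, not part of any actual product interface:

```python
# Hypothetical sketch of the client-initiated "pull" model: the client
# must first hold the media URL before it can issue a "play" request.
# Function and message names are illustrative assumptions only.

def request_playback(media_url, send_to_server):
    """Pass the URL and a play request to the server, which then
    streams the asset back to the client."""
    if not media_url:
        raise ValueError("client cannot initiate playback without a URL")
    return send_to_server({"command": "play", "url": media_url})

def stub_server(message):
    # Stand-in for the real server: acknowledges the play request.
    return "streaming " + message["url"]
```

Note that the sketch makes the precondition explicit: without the URL, the client cannot initiate playback at all, which is exactly the gap the invention addresses.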
The problem is thus to combine the two paradigms so as to enable the application server on tier 2 to command all the logic required to render all data on the same end-user workstation.
A prior-art way to solve said problem of combining traditional data and new media data is to "wrap" the traditional user interface on the client workstation by introducing program logic on the client side which performs the integration. In order to achieve that, a windows-oriented "wrapper" program must be written which envelops the mainframe application on the client side, i.e., which accesses the relevant data of each mainframe application 'panel' and feeds it to the 'modern' back-end program.
As, however, a mainframe application usually has a very large number of panels provided to the end-user for displaying and entering mainframe application data, such work is per se very complex, because it normally requires a change of the programming paradigm from the traditional model to an object-oriented model, i.e., a business object model.
The most relevant obstacles, however, to achieving an efficient integration of new media data into those mainframe applications are:
1. In order to extend an existing character-oriented application with new media data, changes have to be applied both to the mainframe application and to the "wrapper" programs running on the client workstations.
2. Such a windows-oriented "wrapper" program has to be installed at multiple locations in the network, i.e., at each end-user location. The maintenance of such an end-user IT environment then requires a large amount of work, as maintenance has to be provided at those multiple, maybe thousands of, locations in the network.
The costs associated with such an approach can thus be tremendous.
It is thus an object of the present invention to provide a method and system for providing non-character data to a client computer coupled to an application server located in a network and using a character-oriented protocol in order to run said application, whereby the above-mentioned obstacles are removed.
The present invention provides a host-initiated mechanism to render non-character data, such as the before-mentioned image, audio, and video data, on a client workstation, while modifying only one single place of application code, namely within the host-based application, e.g., of the CICS or IMS type.
The example used below describes the process of rendering video data using the streaming technique, because this is the most comprehensive one. However, the invention works the same way for rendering image data or audio data using streaming or store and forward techniques.
This approach splits into finding solutions for the following two separate problems:
1. The host application has somehow to pass the media URL to the client workstation, such that the latter will be able to initiate a play request to the media delivery server, further referred to herein as Stream Server, based on the given URL.
2. Client workstations running CICS and IMS 3270 types of terminal emulation are mostly attached via the so-called SNA protocol, while at the same time playing media data requires TCP/IP sessions.
So there is an inherent network protocol and address mapping problem that has to be solved.
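The second problem can be illustrated with a minimal sketch, assuming a simple lookup table: the host knows the client only by its terminal-session address (e.g., an SNA PU/LU pair), while media delivery needs the client's TCP/IP address. The table and function names below are assumptions for illustration only:

```python
# Illustrative sketch of the address-mapping problem: the host knows
# the client only by its terminal-session address (e.g., an SNA PU/LU
# pair), while media delivery needs the client's TCP/IP address.
# The table and function names are assumptions for illustration.

address_map = {}

def register_client(pu, lu, ip):
    """Record the terminal-session address next to the TCP/IP address."""
    address_map[(pu, lu)] = ip

def resolve_terminal(pu, lu):
    """Map a terminal address onto the client's TCP/IP address."""
    return address_map[(pu, lu)]
```

This corresponds to the static-configuration variant described below, where the client registers both addresses at startup time.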
Briefly summarizing the basic concepts of the present invention, it is proposed to install an individually programmed program component, called Server Media Resolution Service (SMRS), on the application server site, and a matching program component, called Client Media Resolution Service (CMRS), which is a universal, standard component without any individual application-specific features. The SMRS is told the client computer destination, searches the requested media address, and feeds this meta information to the CMRS, which in turn manages the start of a client-site media renderer in order to render the new media data received from a data store such as the file system. A practical example of the media renderer is a media player which requests and renders a streamable asset provided by a stream server (22) to said media player. Thus, when any change in the host application program is required, such changes have to be done in the host application only; the SMRS and the plurality of CMRS components may remain untouched, which reduces the system programmers' work significantly.
The following summarizing list of items shows the data and command flow and the components involved according to a preferred aspect of the inventional method and system, respectively.
1) The SMRS and CMRS components are installed on the server and the client, respectively. These components are preferably started on both systems at boot time. When a UNIX system environment is used at the application server, these components can be implemented as daemon processes.
One example of the address mapping in static configurations is that the Client Media Resolution Service tells the Server Media Resolution Service both addresses at startup time: the address of the terminal emulator session, e.g., in an SNA connection the PU/LU address pair of the system, as well as the TCP/IP address of the system. When dynamic configuration techniques are used, the address mapping is done in the Communication Server on the Application Server, and the corresponding addresses can be retrieved dynamically.
2) Within his 'legacy' application subsystem running on CICS or IMS, the user presses a function key which requests the rendering of the media asset belonging to the current transaction and allows for playback of said media on the client computer.
3) As the media often reside in some media store accessible via a URL, the corresponding media URL is requested from the underlying system managing the access to said dedicated media asset.
4) The 'legacy' application subsystem calls a Browse function of the Server Media Resolution Service. The SMRS component, which acts as a 'media provision manager', is thus told the terminal address of the current client session and the media URL. The SMRS maps the terminal address to the TCP/IP address of the client computer. Thus, the SMRS component knows all meta data required for a subsequent media data delivery to the client.
5) Optionally, in the case of streamable media, the Server Media Resolution Service prepares all functions on the server side to prepare the streaming. This can include a physical movement of the media to a Stream Server connected with the application server. Said stream server can be located at the location of the application server, or there can also be a LAN or even a WAN connection between them. In any case, the stream server is capable of streaming the media to the client computer system. The SMRS gets back some Streaming Initiation Data, further referred to herein as SID, from the stream server. This SID data, issued by said stream server, includes the basic meta information specifying the stream server: important control information about the stream server location, the type of connection to the client system, etc.
6) Then, the SMRS sends a message to the Client Media Resolution Service to execute a Browsing/Rendering service, and passes the Streaming Initiation Data to the client, or the media itself in the case of store and forward. Optionally, URLs pointing to the streaming metadata or the media file itself can be exchanged.
7) Depending on the client-site hardware and software environment, the Client Media Resolution Service may generate a temporary file or a metafile, as is necessary for some media players.
8) Then, the CMRS launches the player with all necessary parameters.
9) The player requests the media asset from the Stream Server.
10) The Stream Server streams the asset to the client workstation.
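The flow of steps 2) to 10) above can be condensed into a hypothetical sketch; the class and method names (Smrs, Cmrs, StreamServer) are illustrative assumptions only and do not denote any actual product interface:

```python
# A condensed, hypothetical sketch of steps 2) to 10) above; all names
# are illustrative only.

class StreamServer:
    def prepare(self, url):
        # Step 5: return Streaming Initiation Data (SID) for the asset.
        return {"location": "stream.example", "url": url}

    def stream(self, sid):
        # Steps 9 and 10: deliver the asset to the requesting player.
        return "streaming " + sid["url"]

class Cmrs:
    def render(self, stream_server, sid):
        # Steps 6 to 9: build any metafile needed, launch the player,
        # which then requests the asset from the stream server.
        return stream_server.stream(sid)

class Smrs:
    def __init__(self, terminal_to_ip, clients):
        self.terminal_to_ip = terminal_to_ip  # terminal -> TCP/IP map
        self.clients = clients                # one CMRS per client IP

    def browse(self, terminal_addr, media_url, stream_server):
        # Step 4: map the terminal address to the client's TCP/IP
        # address; steps 5 and 6: obtain the SID from the stream
        # server and pass it to the client's CMRS for rendering.
        ip = self.terminal_to_ip[terminal_addr]
        sid = stream_server.prepare(media_url)
        return self.clients[ip].render(stream_server, sid)
```

The sketch makes the key design point visible: only the host-side call to the Browse function carries application-specific knowledge, while the CMRS side is generic.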
The method and system herein can be used in every environment where a host application needs to render new media data. Typical examples are CICS and IMS applications in an IBM environment, as well as the 'traditional' UNIX applications using a TELNET type end-user interface or an "X-Windows" type end-user interface, such as products from SAP or BaaN, for example.
The advantage of the inventional method compared to the before-mentioned prior-art integration on the client side is that only the host application logic has to be changed when new media data is integrated or when maintenance of the host application is desired. No program logic on the client side has to be introduced, and the traditional client interface can remain untouched.
These and other objects and advantages will be apparent to one skilled in the art from the following drawings and detailed description of the invention.