User interfaces are the means by which a computer user interacts or communicates with the computer system. User interfaces are typically implemented with a display screen and a user-controlled input device such as a keyboard, mouse, microphone, light pen, or the like. The display screen displays information to the user and the user uses the input device to issue commands and provide information to the computer system.
As computers have become very widely used in recent years, much work has been done on the development of user interfaces which do not require users to know large numbers of specific and complex commands and syntax rules. For example, graphical user interfaces (GUIs) which are in common usage present to the user on a display screen a plurality of icons which are small stylised representations of computer system entities (applications, folders, etc) and actions (print, save, cut, paste, etc). A user can select and work with an icon via an input device (for example moving a cursor using a mouse so as to point to a required icon and then clicking a mouse button) without needing to remember and type specific commands to select and invoke the associated action or entity. The system is programmed to recognise the mouse click as a selection. Typical GUIs present icons within tool bars and in the client area of windows (the area reserved for displaying information to the user).
An icon in a tool bar is just one example of the general requirement for a user interface to enable users to select and invoke particular operations on the computer system and to select and modify particular properties. In a typical user interface environment, operation selection may be achieved by defining `actions` which the user can select via `views` of those actions provided via the user interface. An action in this context can be defined as a user-initiated event which invokes an operation. In addition to the ability to select and invoke operations, users also require the ability to select and modify certain `properties`, for example to select text fonts, colours of displayed graphics, audio volume, or criteria to be applied in a search operation. `Properties` can be defined as data attributes associated with particular components or applications. A property may be any data attribute in a computer system, and most applications have a large number of properties associated with them, but property data is typically related to configuration of the application, as in the example of the font of a block of text. Taking one example, the code controlling a computer system to display help text automatically as a cursor is moved across a screen (known as `Hover Help` or `Bubble Help`) is known to be implementable as a component. The Hover Help component may have a number of properties, such as the font of its help text, the colour of that text, and the background colour of the `bubble`. References herein to modifiable properties are to properties having user-modifiable values.
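The notion of a property as an encapsulated, user-modifiable data attribute can be sketched as follows. This is an illustrative sketch only, using the Hover Help example above; all class and attribute names are hypothetical and are not prescribed by the text.

```python
# Minimal sketch of a user-modifiable property, as described above.
# Class and attribute names are hypothetical.

class Property:
    """A named data attribute of a component, optionally user-modifiable."""
    def __init__(self, name, value, modifiable=True):
        self.name = name
        self.value = value
        self.modifiable = modifiable

    def set(self, new_value):
        # Reject attempts to change a property that is not user-modifiable.
        if not self.modifiable:
            raise ValueError(f"property {self.name!r} is not user-modifiable")
        self.value = new_value

# The Hover Help component might expose properties such as:
hover_help = {
    "font": Property("font", "Helvetica 10"),
    "text_colour": Property("text_colour", "black"),
    "bubble_colour": Property("bubble_colour", "pale yellow"),
}
hover_help["font"].set("Courier 12")
```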
The `views` used to represent an action or a property within the user interface may take a number of different forms, both visual and non-visual. Typical examples of visual views used in GUIs are a word or phrase on a menu bar (as illustrated in FIG. 9a where the `Print` action is selected from a word view), a graphical representation such as an icon or bitmap on a tool bar (as illustrated in FIG. 9b where the `Print` action is selected from a print bitmap view), or other visual controls that enable user selection or input. Examples of such other controls are a combination box control that allows selection of a font property (as illustrated in FIG. 9c), or an entry field that allows the setting of a string property such as setting the name of a directory or a tool bar name (as illustrated in FIG. 9d). These controls could appear anywhere within the user interface. Further examples of visual views are illustrated in FIG. 9e, where the print action is selectable from a context menu displayed by depressing the right mouse button whilst the cursor is positioned over the window area, and in FIG. 9f where the print action is selectable from a push button view in a window layout.
Examples of non-visual views could be accelerator data selectable by accelerator key sequences using a keyboard (generally a short sequence of key strokes defined to invoke an operation, such as `Ctrl+P` used to select a print action), speech pattern data selectable by speech input, or gesture data selectable by input gestures such as the stroke of a pen on a tablet. Hereafter, all such mechanisms for selecting actions or properties will be referred to as `views` whether they are visual or non-visual. Menu lists, toolbars, accelerator tables, edit fields and windows which portray properties and actions are referred to generically herein as `item lists` or as action/property `viewers`. Individual item lists each have a type (e.g. a toolbar is a type of item list or viewer) and a list of zero or many `items`, which items may be actions, properties, or other item lists.
Currently, when developing application programs, a significant amount of developer effort is required to provide the functionality enabling user selection and invocation of actions or user selection and modification of properties within an application program. Additionally, there is a significant problem in achieving user interface and functional consistency between and within applications that display different views of properties (or actions) which views are perceived by the user as being the same. Different views that appear to be the same may have quite different code implementations, and some applications will allow certain functions which others do not.
A major reason why providing this support is so time consuming for the developer is that an application is often required to provide a number of different places where the user can select the same action or property, and a number of views of a particular action or property for selection by the user in different places within the user interface. The developer is required to write separate control code for each of these views. Views on properties include those found within drop-down lists, sets of radio buttons, check boxes, edit fields, etc. Views on actions typically also appear as items within drop-down menu lists, as tool bar buttons, and within context menus. Within some applications there are also many places where the user is presented with a push button that opens a dialog or window. An example of the above is where a user can select a print option from a menu bar pull-down list, a tool bar button, an entry in a context menu, or a push button on a dialog. Essentially the same operation of opening a print dialog occurs, but the visual appearance and implementation of the selection mechanisms differ to achieve the same result.
In addition to requiring application developers to write code to enable access to actions and properties from multiple places, users may also require the developer to provide different action and property views depending on the type of viewer (e.g. a bitmap representation may be wanted in a toolbar whereas a text identifier may be preferred in a menu list, both because of space constraints and for consistency of presentation with other items in each viewer). Further requirements may include the need for property values to be persistent (to be saved across sessions even when the computer is switched off) and an ability for the user to move or copy views of properties to more convenient places within the user interface (e.g. using a drag and drop mechanism or clipboard). For each property for which such user-modification features are required, the application developer's task currently includes at least:
defining a set of attributes for the property;
defining the interface controls that the user has to interact with to access the attributes;
providing the logic that responds to a user selection when an interaction occurs; and
providing a persistence mechanism that allows attributes to be stored persistently and to be restored with the same values when the application is subsequently restarted.
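The persistence requirement, that property attributes survive the application being closed and restarted with the same values, can be sketched as a simple round trip. The storage format and function names here are assumptions for illustration only.

```python
import json

# Illustrative persistence sketch: property attributes are serialised at
# shutdown and restored with the same values at the next start.

def save_properties(props):
    # Stand-in for writing to a profile or configuration file.
    return json.dumps(props)

def restore_properties(stored):
    # Stand-in for reading the profile back at the next session.
    return json.loads(stored)

session1 = {"font": "Courier 12", "bubble_colour": "pale yellow"}
stored = save_properties(session1)
session2 = restore_properties(stored)
```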
If more advanced user interaction is required or the developer wishes to provide the capability of applying more complete selection mechanisms on a visual property, then the developer has to provide much more advanced logic. For example, providing concurrency of both the value and state (such as whether it is modifiable) of a property wherever it is used in the application, whether by the application logic or the end user, or enabling users to drag views of properties to other property viewers would each entail significant further work for the application developer.
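Keeping the value and state of a property concurrent wherever it is viewed is, in essence, an observer scheme: every view is notified whenever the property changes. The following is a minimal sketch under that assumption; the class and method names are hypothetical.

```python
# Sketch of keeping every view of a property concurrent: all attached
# views see the same value as soon as it changes. Names are illustrative.

class ObservedProperty:
    def __init__(self, value):
        self._value = value
        self._views = []          # callables notified on every change

    def attach(self, view):
        self._views.append(view)
        view(self._value)         # bring the new view up to date at once

    def set(self, value):
        self._value = value
        for view in self._views:
            view(value)           # all views updated together

font = ObservedProperty("Helvetica 10")
combo_box, status_bar = [], []    # two stand-in views recording updates
font.attach(combo_box.append)
font.attach(status_bar.append)
font.set("Courier 12")
```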
Currently, there is no help for developers who wish to enable end-user modification or customisation of properties within their applications and the development effort required to provide a flexible solution is considerable.
A further consideration which increases the amount of effort required by the developer is that there are a number of different input mechanisms that can be employed by the user to select the same action or property. The standard point-and-click mechanism of the mouse is well understood, and this would be the typical mechanism used to select visual views displayed via the GUI. However, a number of actions also have accelerator options that allow selection via a specific character on the keyboard (e.g. `P` for selecting the `Print` action). Further, speech-enabled applications allow the user to speak commands, so that for example the user can say `Print` to open the print dialog, and if a pen gesture has been defined for opening a print dialog, pen enabling will add another view that may be available for the user to select the action.
Additionally, there are situations where small interface behaviours are so pervasive that the same piece of code is written repeatedly throughout an application. An example of this is when an application has a number of dialog type windows that allow the user to cancel out of the window without applying any changes. The visual representation of this function may, for example, be a `Cancel` push button, but usually there is also a keyboard mechanism to achieve the same result, for example selection of the `Esc` key. Traditionally the developer would have to code three things to achieve this function, namely: a) to provide a push button on a dialog; b) to add the escape accelerator to the accelerator table; and c) to provide the cancel procedure within the application code. These three stages would normally be repeated every time the escape function is needed for an individual window.
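The three-part cancel wiring described above (push button, escape accelerator, and cancel procedure) can be contrasted with registering the behaviour once and binding it to both triggers. This is a hedged sketch only; the class and binding names are illustrative, not taken from any toolkit.

```python
# Sketch: one cancel procedure, registered once, reachable from both a
# `Cancel` push button and the Esc accelerator. Names are hypothetical.

class Dialog:
    def __init__(self):
        self.open = True
        self.bindings = {}

    def add_cancel(self):
        def cancel():
            self.open = False     # close without applying any changes
        # The same procedure is bound to both selection mechanisms.
        self.bindings["button:Cancel"] = cancel
        self.bindings["key:Esc"] = cancel

    def press(self, trigger):
        self.bindings[trigger]()

dlg = Dialog()
dlg.add_cancel()
dlg.press("key:Esc")
```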
The application developer thus currently has to write a significant amount of code to fully support the various required views of actions and properties, and to support modification. When new technology is introduced (e.g. speech) additional work is required to make that new selection mechanism available to the user. Since there are a number of permutations for the developer to remember, there are times when certain selection mechanisms are not enabled within products. This leads to usability problems across products, where one application works one way, and another does not. Added to this, the various views on the same actions and properties usually are constructed in a number of different ways, and once constructed have little to no user-customisation capabilities and can only be extended to satisfy future requirements with considerable extra effort.
It is an object of the present invention to provide a system and a method which alleviate at least some of the above identified problems.
Copending UK patent application number 9613767.4, which is incorporated herein by reference, discloses support for different views of actions by use of a single generic mechanism for dealing with actions. An action object defines, for each available view that can be used to represent an action, the attributes required to provide that view and an identifier to identify the operation to be invoked upon selection of the action. Copending UK patent application number 9615293.9, which is also incorporated herein by reference, discloses provision of a note pad object accessible from an action palette which allows a user to create new user pages into which commonly used properties and actions can be placed.
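In the spirit of the action object described in the first copending application, such an object records, for each available view, the attributes required to provide that view, together with an identifier for the operation invoked on selection. The sketch below illustrates that shape; the field names and attribute sets are assumptions, not taken from the copending application itself.

```python
# Illustrative sketch of a generic action object: per-view attributes plus
# an operation identifier. Field names are hypothetical.

class ActionObject:
    def __init__(self, operation_id, view_attributes):
        self.operation_id = operation_id          # operation invoked on selection
        self.view_attributes = view_attributes    # per-view rendering data

print_action = ActionObject(
    operation_id="open_print_dialog",
    view_attributes={
        "menu": {"text": "Print"},
        "toolbar": {"bitmap": "print.bmp"},
        "accelerator": {"keys": "Ctrl+P"},
    },
)
```

Any viewer can then render the same action from its own attribute set, and all views resolve to the one operation identifier.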