In consideration of the wide deployment of risk transfer technologies worldwide, in particular as provided by insurance and reinsurance systems, the importance of these systems in maintaining operation and operational conditions on an industry-wide level becomes apparent. However, a technological approach to many problems in this field is often difficult. For example, it is important to improve and adjust the physical design and technological implementation of such risk transfer systems in order to cope with emerging problems; problems involving, e.g., ensuring that risk assessment is based on reproducible results, that the error rate in the measurement of pooled risks is minimized, that the enormous amount of data can be processed and taken into account in the measurements, and that the operation of such systems can be adjusted to quickly changing environmental condition-related parameters and/or improved by self-adaptation. Furthermore, it is also important that the use of the enormous resources of data can be systematized and dynamically exploited, which requires appropriate technical modalities and physical system designs.
Significant concerns for most automated risk transfer systems are data quality and data quantity. Estimates in the industry have found that about 25% of the operational time of insurance systems is expended on data quality issues. Moreover, regarding the technical impact of bad data quality, a further survey found that about 30% of automation problems are due to poor data quality and that automated analyses are adversely affected by data quality issues, thereby often rendering automated risk transfer systems too unreliable for self-sufficient day-to-day live operation, thus preventing their use as stand-alone running systems; i.e., systems that operate without human interaction and control, thereby measuring and reacting in a self-adapting manner to changing environmental conditions. Some basic technological problems are merely related to the accuracy of data and the amount and complexity of data to be processed. Other technical issues relate to characteristics such as completeness and timeliness; however, these issues often reduce, again, to the problems of data recognition, data acquisition, data quality and, finally, data processing. The measurement and determination of risks, in particular during the process of risk pooling of exposed components by means of resource pooling systems and, more particularly, at the level of fixing the individual risk transfer condition parameters, are key technical features in the operation of automated risk-transfer and risk-pooling systems. However, measurement of the risk, as it relates to a specific risk transfer, requires the systems to operate on the most up-to-date data available, which, in particular, requires not only a modality for measuring and capturing appropriate measuring parameters but also appropriately fast and reliable data recognition and processing.
It must be added to the above comments that cheap data storage, along with changes in regulatory requirements, has led to extraordinary amounts of data being captured, stored, and provided to insurance systems. On the one hand, the processing of these data quantities requires appropriately adapted systems, as mentioned. On the other hand, these enormous amounts of data also amass an enormous incidence of errors and inconsistencies, which hinders the operation of automated systems. Therefore, it is important to ensure that, already at the technical level of generating new data, monitoring modalities are in place by means of appropriately implemented and fast-reacting control and monitoring systems.
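The monitoring modality described above can be illustrated with a minimal sketch in which incoming records are validated against completeness and plausibility rules at the point of data capture, so that non-conforming records are flagged before entering downstream processing. All field names, rules, and thresholds below are illustrative assumptions, not part of any particular described system:

```python
# Minimal sketch of a data-quality gate at the point of data generation.
# Field names and plausibility rules are hypothetical assumptions.

def validate_record(record, required_fields, plausibility_rules):
    """Return a list of quality issues found in a single data record."""
    issues = []
    for field in required_fields:
        if record.get(field) in (None, ""):
            issues.append(f"missing:{field}")
    for field, rule in plausibility_rules.items():
        value = record.get(field)
        if value is not None and not rule(value):
            issues.append(f"implausible:{field}")
    return issues

def monitor_stream(records, required_fields, plausibility_rules):
    """Split a record stream into accepted and flagged records."""
    accepted, flagged = [], []
    for record in records:
        issues = validate_record(record, required_fields, plausibility_rules)
        (flagged if issues else accepted).append((record, issues))
    return accepted, flagged
```

In this sketch, flagged records would be routed to a control system for correction or rejection rather than silently entering the automated risk measurement.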
For automated underwriting systems providing the basis for the automated adaptation and determination of condition-related parameters of associated risk transfers, an underwriting process-flow for underwriting workflow objects comprises the technical and/or procedural steps required for executing the underwriting process with regard to an object; i.e., the underwriting process, the technical and other means to conduct the processing steps, and the transfer and flow of data/signaling between the means and/or steps for executing the process on the object. Each step is defined by a set of processes, activities or tasks that need to be implemented. Within an underwriting workflow, objects for underwriting (e.g., risk transfer objects comprising operational condition parameters for the risk transfer, i.e., technical objects affecting the operation and interaction of the resource pooling system and providing risk cover by pooling resources from risk-exposed components, and triggering risk events in order to automatically cover the impact of occurring risk events by means of the transfer of the pooled resources) pass through the different steps in the specified order, from start to finish, and the underwriting processes at each step are executed either by dedicated technical processing devices or means, by activating specific system functionalities (also, e.g., computer program products), or by dedicated signaling to specific devices or people intended to perform activities on the object. Automated underwriting workflow systems can be set up using a visual front end, or they can be hard-coded, and their execution is delegated to a workflow execution engine that handles the call-up and signal generation of the remote devices or applications.
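The pass-through of underwriting objects through ordered workflow steps, with execution delegated to an engine, can be sketched as follows. The class and step names are hypothetical; in a real system each handler would stand in for a dedicated processing device, a specific system functionality, or signaling to a remote application, rather than a local function:

```python
# Hypothetical sketch of a workflow execution engine: an underwriting
# object passes through the defined steps in order, from start to finish.

class WorkflowEngine:
    def __init__(self, steps):
        # steps: ordered list of (name, handler) pairs; each handler is a
        # stand-in for a processing device, system functionality, or
        # signaling target that performs activities on the object.
        self.steps = steps

    def execute(self, underwriting_object):
        """Run the object through all steps in order; return the
        processed object and the execution history."""
        history = []
        for name, handler in self.steps:
            underwriting_object = handler(underwriting_object)
            history.append(name)
        return underwriting_object, history

# Illustrative steps of an underwriting process flow.
def capture_parameters(obj):
    obj["captured"] = True
    return obj

def assess_risk(obj):
    obj["risk_score"] = 0.2  # placeholder risk measurement
    return obj

engine = WorkflowEngine([("capture", capture_parameters),
                         ("assess", assess_risk)])
```

A hard-coded setup would fix the step list at build time, whereas a visual front end would assemble the same ordered list from a graphical process model.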
In the prior art, underwriting workflow systems for automating risk transfer belong to the field of so-called production workflow systems. Such production or industrial process systems are dedicated to steering and executing the processing steps of technical objects, such as operational parameters, devices or products, by steering and operating appropriate devices for executing the activities of the workflow objects. Regarding data objects, such process systems also serve for the functional processing and computation of data objects, in particular for the purpose of standardizing the operational interaction of such systems by generating and adjusting workflow objects through processing them within the workflow, i.e., the underwriting process flow.
Concerning the monitoring and control systems for such automated underwriting systems, the prior art envisions automated underwriting workflow systems that are able to provide various capabilities for the monitoring of workflow processes, which are modeled and executed within the workflow system. Such capabilities can include, for example, analysis tools for the measurement and display of metrics with respect to the status of the processes, the times for executing work steps in the context of the processes, and the identification of bottlenecks within the processes. These capabilities can also be transferred to the underwriting workflow system for underwriting workflow processes that are executed in systems external to the underwriting workflow system. Many underwriting systems of the prior art comprise, as the core, a workflow execution engine, a process management system or a similar control device/system for controlling and monitoring the processing of the workflow objects. The workflow execution engine of the workflow systems can, e.g., be implemented as a processor-based automation means of the underwriting process flow. The workflow execution engine steers a sequence of activities (process tasks), interactions and signaling with execution devices or means, or in interaction with human resources (users) or IT resources (software applications and databases), and applies rules for controlling the progression of processes throughout the various stages that are associated with each activity.
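The monitoring capabilities mentioned above — per-step execution times and bottleneck identification — can be illustrated by a minimal sketch. The class and step names are assumptions for illustration; a production monitor would receive its timing signals from the workflow execution engine rather than from manual calls:

```python
# Illustrative sketch of workflow monitoring: recording per-step
# execution times and identifying the bottleneck step.
from collections import defaultdict

class WorkflowMonitor:
    def __init__(self):
        # step name -> list of observed execution durations
        self.durations = defaultdict(list)

    def record(self, step, duration):
        """Register one observed execution time for a work step."""
        self.durations[step].append(duration)

    def average_durations(self):
        """Average execution time per step, as a metric for display."""
        return {step: sum(d) / len(d) for step, d in self.durations.items()}

    def bottleneck(self):
        """The step with the highest average execution time."""
        averages = self.average_durations()
        return max(averages, key=averages.get)

monitor = WorkflowMonitor()
monitor.record("data_entry", 5.0)
monitor.record("data_entry", 7.0)
monitor.record("risk_assessment", 2.0)
```

Metrics of this kind give the control system the measured basis on which progression rules can be adjusted, e.g., by re-routing objects away from a congested step.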
However, at the various stages of the underwriting process, activities typically require human interactions: i.e., user control or data entry through a form. For certain underwriting workflow systems, one of the ways of automating and operating the steering and monitoring tasks of the processes by means of a workflow execution engine is to develop the appropriate processor codes and applications that guide a processor-based workflow execution engine through the execution of the required steps of the underwriting process; however, in practice, such underwriting workflow execution engines are not able to accurately execute all the steps of the underwriting process by means of the underwriting workflow system while assuring, thereby, the operational stability of the underlying resource pooling system for the pooled risks. To solve this problem, the typical approach in the prior art envisions the use of a combination of software and human intervention; however, this approach is quite complex, rendering the reproducibility, the predictability, and even the information flow and the documentation process difficult. Further, with this approach, due to the amount of processed workflow objects, it is impossible to provide an automated control for underwriting systems that operates dynamically and in an optimized manner based on a monitoring of all processed underwriting workflow objects.
Another problem in the prior art underwriting systems is that workflows are difficult to generate and/or to adapt dynamically, due to a lack of appropriate measuring, monitoring and control systems that allow for filtering and dynamically recognizing non-conforming workflow objects. Moreover, upon reaching a certain process step in the workflow, it can become necessary to make adjustments to the processing by way of steps which are not predictable at the beginning of the underwriting process flow or workflow, and which can depend on environmental parameters or operational parameters of the risk-exposed components and/or the resource pooling systems, i.e., the automated insurance systems. However, such an adaptation of the underwriting conditions and/or the pooled risk portfolio of transferred risks critically depends on correctly measuring and recognizing non-conforming underwriting objects, and, even more so, on a correct measurement of the possible impact of such non-conforming underwriting objects based on their level of non-conformity.
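The filtering and grading of non-conforming underwriting objects described above can be sketched as follows. The parameter names, reference ranges, and the simple graded non-conformity measure are illustrative assumptions; a real system would derive its reference ranges from the operational parameters of the resource pooling system:

```python
# Hypothetical sketch: filtering non-conforming underwriting objects and
# grading their level of non-conformity against assumed reference ranges.

def non_conformity_level(obj, reference_ranges):
    """Fraction of parameters outside their reference range:
    0.0 (fully conforming) .. 1.0 (fully non-conforming)."""
    violations = 0
    for param, (low, high) in reference_ranges.items():
        value = obj.get(param)
        if value is None or not (low <= value <= high):
            violations += 1
    return violations / len(reference_ranges)

def filter_objects(objects, reference_ranges, threshold=0.0):
    """Split objects into conforming and non-conforming sets,
    attaching the measured level of non-conformity to each object."""
    conforming, non_conforming = [], []
    for obj in objects:
        level = non_conformity_level(obj, reference_ranges)
        target = non_conforming if level > threshold else conforming
        target.append((obj, level))
    return conforming, non_conforming
```

The graded level, rather than a binary flag, is what would allow the control system to weigh the possible impact of a non-conforming object on the pooled risk portfolio.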