Approaches for addressing specific conditions are frequently evaluated using investigatory events. Unfortunately, each event is often performed and/or assessed in isolation, despite the fact that a given investigatory event typically overlaps to some degree with one or more other investigatory events (e.g., in the type of event or protocol being investigated, selection criteria, etc.). This isolation not only results in sub-optimal assessment of event results, but also hinders the identification and implementation of design improvements for subsequent investigatory events.
A further challenge in improving data integration is the high degree of variability in event protocols, selection techniques, and semantic coding across events. This variability makes it difficult even to identify similarities across events, much less to aggregate their data in a sound manner.
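The coding-variability problem described above can be made concrete with a minimal sketch. All names, codes, and the mapping table below are hypothetical and for illustration only; they are not part of the described system. The sketch shows why mapping each event's local semantic codes to a shared vocabulary is a precondition for even detecting overlap between events, let alone aggregating their data.

```python
# Illustrative sketch (all identifiers and codes are hypothetical): two
# investigatory events record the same underlying conditions under
# different semantic coding schemes.

# Per-event records as (participant_id, raw_condition_code) pairs.
event_a = [("a1", "ICD:I10"), ("a2", "ICD:E11")]
event_b = [("b1", "LOCAL:HTN"), ("b2", "LOCAL:DM2")]

# Hypothetical mapping from each event's local codes to a common vocabulary.
to_common = {
    "ICD:I10": "hypertension",
    "LOCAL:HTN": "hypertension",
    "ICD:E11": "type_2_diabetes",
    "LOCAL:DM2": "type_2_diabetes",
}

def common_conditions(records):
    """Return the set of common-vocabulary conditions studied in an event."""
    return {to_common[code] for _, code in records if code in to_common}

# With the raw codes alone, the events appear disjoint; after mapping to
# the common vocabulary, their overlap becomes visible.
overlap = common_conditions(event_a) & common_conditions(event_b)
print(sorted(overlap))
```

Without such a mapping, a naive comparison of the raw codes (`ICD:I10` vs. `LOCAL:HTN`) would report no similarity between the two events at all, which is precisely the failure mode described above.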