Viruses, malicious code, and other computer threats are rapidly evolving and are becoming more and more difficult to detect. Automated mechanisms to ensure safety are also evolving, but existing methods commonly detect only what has already happened. Anti-virus software detects the infection of a system post-mortem. An intrusion detection system identifies events which have already occurred. Such systems are also generally unable to prevent situations which rely on human error. Often the first and last defenses are decisions made by the computer user. However, sophisticated ruses are used to fool users into making precisely the decisions necessary for an exploitation to occur. Even though threats to computing systems have become progressively more adept at exploiting human users, little work has been done to provide human users with better information for decision making regarding computer security. Much of this threat relies on the user lacking sufficient information regarding system state. To cure these failings, the last two decades have provided little more than icons denoting software certification and a plethora of dialog decision boxes.
Conventional human-machine interfaces of computing displays rely almost exclusively on primary attentional information streams. The periphery is used, but is generally limited to event notification in the form of binary indicators of interesting information (time and date, weather, etc.) or of the need for user action or response (e.g. emblems on application icons, notifiers in the status bar of active processes). Peripheral information displays have been shown to increase a user's awareness of supplemental knowledge. Nonetheless, only a handful of iconic and graphical display elements providing peripheral information are seen within modern computer interfaces. These are akin to road signs within physical environments, which convey meaning in a direct fashion. The ambient activity monitor as conceived within this invention conveys meaning indirectly, by correlating the display with the state of, and interactions with, a computing system. To use a similar analogy, this is akin to road, engine, and tire noise providing peripheral cues as to the state of an automobile as it races around a sharp corner, engine revving and tires chirping.
When a user interacts with a computing device the underlying state of the machine is generally hidden and only the intended output of a set of active computing tasks is visible. This differs significantly from traversal of a physical environment (e.g. walking down a sidewalk) where the rich environmental milieu provides constant and myriad peripheral cues to the active state of the local environment (e.g. footsteps of other pedestrians, automobile noises, sounds of children playing). These environmental cues, while peripheral to the task at hand (e.g. walking to a particular destination), are often necessary for doing so safely (e.g. avoiding a speeding car, or a child on a skateboard). We know that threat avoidance in real environments relies on peripheral information. Further, the peripheral information is not a measurement or abstraction of the threat itself, but information which can be used to predict potential threats (whether real or imagined).
Indicators of emerging or current threats to a computing environment are currently provided to a user in several ways:
a) Indicators may be provided early, prior to an event, warning of specific threats (as in an alert or dialog box).
b) Indicators may be provided late, after an event (as in the output of an intrusion detection system).
c) Indicators may also be provided in the form of a near real-time measurement of network connection, process, or other machine states (such as a process monitor).
In some cases, real-time measurements (such as memory and CPU-time of a running process) may provide subtle hints that something may be awry in much the same way as peripheral cues in natural environments. However, the conventional approach for process monitoring is generally limited for several reasons:
a) they are non-peripheral, demanding a user's full attention;
b) they represent intended cues and as such reflect an abstraction and judgment of the underlying machine state rather than simply a representation of the machine state itself;
c) they significantly simplify and aggregate the representation of the machine state for the purposes of exposing particular measurement semantics (e.g. CPU usage percentage, counts of disk reads/writes); and
d) they generally do not correlate precisely with the activities of the user or the underlying system state, often being delayed by seconds for purposes of decreasing the process monitor's resource demands.
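The aggregation and sampling delay described above can be sketched as a minimal polling monitor. This is an illustrative sketch, not an implementation from the invention: the `read_counter` callable is a hypothetical stand-in for reading a cumulative resource counter (for example, a process's accumulated CPU time), and the two-second interval is an assumed default chosen to show how conventional monitors trade timeliness for lower resource demands.

```python
import time


class PollingMonitor:
    """Sketch of a conventional process monitor: it samples a raw
    cumulative counter at a fixed interval and exposes only an
    aggregated percentage, illustrating both the aggregation of
    machine state and the seconds-long sampling delay."""

    def __init__(self, read_counter, interval_s=2.0, clock=time.monotonic):
        self.read_counter = read_counter  # hypothetical cumulative counter (e.g. CPU seconds)
        self.interval_s = interval_s      # assumed polling period in seconds
        self.clock = clock                # injectable clock, monotonic by default
        self.last_value = read_counter()  # baseline counter reading
        self.last_time = clock()          # baseline timestamp

    def poll(self):
        """Return utilisation over the elapsed interval as a percentage.

        Everything between two polls is collapsed into one number, so
        any brief anomaly inside the interval is invisible to the user.
        """
        now = self.clock()
        value = self.read_counter()
        elapsed = now - self.last_time
        used = value - self.last_value
        self.last_value, self.last_time = value, now
        return 100.0 * used / elapsed if elapsed > 0 else 0.0
```

Because the monitor reports only the ratio `used / elapsed`, a process that spiked and then idled within a single interval is indistinguishable from one that ran at a steady moderate load, which is precisely the loss of correlation with underlying system state noted above.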
There also exist unintended peripheral cues within modern computing systems (such as the unexpected slowdown of a software application, the erratic behavior of running processes, disk drive noises, or fan noises). However, it is the intent of good software design to eliminate these unintended effects; the computing environment is designed to be sterile with respect to them. These effects represent bugs or other software deficiencies.
It is important to note that neither the intended nor the unintended peripheral cues have rich semantics tying them to particular user or system activities. If a software application hangs due to an internal defect or system malfunction, the only peripheral cue to the user may be that the software appears unresponsive. When a software application is infected by malicious code, there are often no peripheral cues until long after data has been destroyed or stolen. And while existing software and system instrumentation mechanisms can expose these (and other) system defects and security problems in a direct fashion, using such mechanisms completely disrupts the user's primary purpose for using the computing system. While some users may be willing to direct their attention to a process monitor and examine the memory usage of a particular software application, unless the user employs sophisticated debuggers and software instrumentation, they gain no deeper knowledge as to why the memory usage is at a particular level or what the software is doing internally. If a user did wish to understand the reason for a software problem, they would no longer be using the computing system as originally intended, but would instead spend all of their time addressing the nuances of the underlying software and system states. The system activity and software instrumentation mechanism would become the user's primary task.