A fundamental endeavor of mankind is to overcome natural and manmade limitations through invention and design, with two innate goals: to live a contented life and, in due course, to overcome mortality. The present invention is especially aimed at furthering these goals by the disclosure and use of a data logging and memory enhancement system and method. While humans have the natural ability to pass on physical attributes directly through genetic reproduction, they do not naturally have the ability to pass on memory and thought processes through reproduction. Only very recently in human history has mankind had the ability not just to evolve but to determine how to evolve. Traditionally, the environment drove changes in man's evolution, often requiring many generations and thousands to millions of years for significant changes to occur naturally. But increasingly humans can use modern procedures and technological advances to change their make-up at once. Increasingly, modern technology is being offered that goes beyond traditional maintenance of the human faculties we are born with and provides systems and methods to substitute, replace, or enhance what humans are provided with naturally at birth. The artificial heart, the Dobelle Eye, grown artificial body parts, stem cells differentiated into host cells, and genetic engineering are just a few examples. This invention is aimed at providing "designed evolutionary" systems and methods that accomplish this type of utility. Part of being a sentient being is being self-aware and realizing that there are past, present, and future consequences of one's actions. It is therefore conceived as part of this invention that the user of the data logging and memory enhancement system, when coupled with problem solving, mobility, and available resources, will perform maintenance that will allow himself, herself, or itself to continue to exist in some fashion indefinitely.
The brain is the center of all human thought and memory. The sentience of a being ultimately encompasses what the being has processed and retained in the being's central nervous system. The being constantly perceives the surrounding environment in order to update his or her thought and memory. With this in mind, a central objective of the present invention is developing a human internal-to-external correlation of conscious percepts relative to a particular being, realized in a system and method for personal data logging and memory enhancement. For purposes of the present invention, the concepts of "Neural Correlates of Consciousness" (NCC) and "Conscious Percept" (CP) are discussed and defined in publications and presentations by, for example, Christof Koch in "The Quest for Consciousness: A Neurobiological Approach" dated 2010 and as depicted in a diagram by Dlende entitled "Neural Correlates of Consciousness" dated 2008. Neural Correlates of Consciousness (NCC) can be defined as "the minimal neuronal mechanisms jointly sufficient for any one conscious percept" of a self-aware being, the being also being aware of the surrounding environment and other conscious contents to which that being is relating at a given time. A "conscious percept" may be defined as a subject's focus of attention as a being, machine, or bio-mechanical system in a particular surrounding environment at a given time. A surrounding environment may also be referred to as "place", which may, for example, be defined by imagery, audio, brain activity, or geospatial information. Achieving this objective can be done by recording and measuring internal and external activity and relating the activity in the mind to the activity and subjects that the person is thinking about in the surrounding environment. Studies have taught us that various senses can stimulate the central nervous system. Examples focused on in the present invention are those which yield the most utility for learning.
Approximately 78% of all information taken in is through our eyes, 12% through our ears, 5% through touch, 2.5% through smell, and 2.5% through taste. It is an objective of the present invention, and it will be understood by those skilled in the art, that various internal and external types of sensor systems (i.e., audio, imagery, video camera, geospatial, position, orientation, brain activity, and biometric systems) may be used to record sensory data and that this data may be processed in a computer to build a correlation, transcription, and translation system for human-to-machine interaction. Statistical correlations are useful in the present invention because they can indicate a predictive relationship that can be exploited in practice. A computer can operate upon recorded sensory data using adaptive filters (i.e., Kalman and/or Bloom filter algorithms implemented in computer language) to determine the strength of the statistical relationship between internal and external representations. Thresholds for retaining, disregarding, or acting upon the data may be based on these statistical relationships and used to determine targeted data output. In the present invention, translation is the communication of the meaning of a source language, be it human or machine. It is an objective of the present invention to incorporate machine translation (MT) as a process wherein computer program(s) analyze inter-related raw and preprocessed sensor data and produce target output data (i.e., human-understandable GUI text, video, or synthesized voice audio output into human interactive input devices of a human user) with little or no human intervention.
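By way of illustration, the correlation-and-threshold triage described above can be sketched in software as follows. This is a minimal sketch: it uses a simple Pearson correlation in place of the Kalman or Bloom filtering named above, and the function names (`pearson`, `triage`) and the threshold values are illustrative assumptions, not part of the disclosed system.

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length signals,
    e.g. a brain-activity trace and an external sensor trace."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def triage(internal, external, retain_at=0.7, discard_below=0.2):
    """Retain, hold, or discard a paired reading based on the strength
    of the internal/external statistical relationship."""
    r = abs(pearson(internal, external))
    if r >= retain_at:
        return "retain"
    if r < discard_below:
        return "discard"
    return "hold"
```

A full implementation would operate on streaming sensor windows and adapt its thresholds over time rather than using fixed constants.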
In the context of the current invention, computer-assisted translation (CAT), also called "computer-aided translation," "machine-aided human translation" (MAHT), and "interactive translation," is a form of translation wherein a machine translation system, with the assistance of computer program(s), creates a target language, be it human or machine, correlated with text, sub-vocalization, brain activity, and sensory signatures of subjects and activities in the surrounding environment. It is an objective of the present invention to use the above translations to form the basis of a relational database which may be drawn upon by a user to perform various functions using a mobile computing device such as a smartphone or the like as described in the present invention.
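As a sketch of how such a relational database might be organized, the following uses an in-memory SQLite store. The table name `percepts`, its columns, and the sample row are hypothetical illustrations, not taken from the specification.

```python
import sqlite3

# In-memory sketch of the correlation store; schema is illustrative only.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE percepts (
    id INTEGER PRIMARY KEY,
    timestamp REAL,
    subject TEXT,          -- subject of the conscious percept
    brain_signature TEXT,  -- encoded brain-activity pattern
    place TEXT             -- geospatial / imagery-derived place label
)""")
conn.execute(
    "INSERT INTO percepts (timestamp, subject, brain_signature, place) "
    "VALUES (?, ?, ?, ?)",
    (1234.5, "sunset", "sig-07", "beach"))
conn.commit()

def recall(subject):
    """Query the log for every place and signature tied to a subject."""
    cur = conn.execute(
        "SELECT place, brain_signature FROM percepts WHERE subject = ?",
        (subject,))
    return cur.fetchall()
```

A user query such as `recall("sunset")` would return the logged places and brain signatures associated with that subject.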
Spherical field-of-view sensing and logging about the user is preferable when it comes to recording how the mind works, because the mind constantly perceives the space one finds himself or herself occupying. In an academic paper entitled "Intelligent Systems in the Context of Surrounding Environment", Joseph Wakeling and Per Bak of the Department of Mathematics, London, UK, dated 29 Jun. 2001, describe a biological learning pattern based on "Darwinian selection" that suggests that intelligence can only be measured in the context of the surrounding environment of the organism studied: i.e., "We must always consider the embodiment of any intelligent system." The preferred embodiment reflects that the mind and its surrounding environment (including the physical body of the individual) are inseparable and that intelligence only exists in the context of its surrounding environment. Studies by O'Keefe and Nadel (1978), entitled "The Hippocampus as a Cognitive Map" (Clarendon Press: Oxford), by Rotenberg, Mayford, Hawkins, Kandel, and Muller (1996), and the classic studies of John O'Keefe and John Dostrovsky (1971) provide strong evidence of why a video logging machine needs to provide a panoramic FOV about a user in order to get a true representation or reproduction of their consciousness. In 1971 it was discovered that the pyramidal cells of the hippocampus (the cells one examines artificially using electrical stimuli to the Schaffer collateral pathway while studying LTP) are "place cells"; they actually encode extra-personal space in real life. A given pyramidal cell will fire only when the head of a user is in a certain part of an enclosed space: the cell's place field. Thus, when a person bearing the present invention walks in a given space, a particular subset of pyramidal cells in the hippocampus becomes active. When the user is in a different space, a different set of pyramidal cells becomes active.
Cells of the hippocampus form an internal neural representation, or "cognitive map", of the space surrounding the user. This holistic neural representation permits the user to solve spatial problems efficiently. When placed in a new environment, a person forms an internal representation of the new space (the coordinated firing of a population of place cells) within minutes, and once this representation is formed it is normally stable for at least several days. The same cell will have the same firing field each time the person is reintroduced to that environment. When the person is then placed in a second environment, a new map is formed, again in minutes, in part from some of the cells that made up the map of the first environment and in part from pyramidal cells that had been silent previously. These place cells and spatial memory can be studied by recording brain pattern activation using MRI and various other brain activity systems such as AMR, fMRI, fNRI, EEG, PET, or DECI to record brain activity from individual pyramidal cells in the hippocampus (ref. Kandel and Squire, 1998). Studies show that regions of the brain containing place cells are active when one is in a familiar place but not when one is in an unfamiliar place. Activity is especially noticeable in these cells when a person is navigating a space in the dark. Human memory works to recall and visualize what was there in the daylight to help a user of the present invention navigate a dark space.
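The place-cell behavior described above can be caricatured in code: a "cell" fires when the head position falls inside its place field. The cell names, the field centers, and the circular-field model are all illustrative assumptions.

```python
def place_cell_activity(position, place_fields, radius=1.0):
    """Return the subset of 'pyramidal cells' whose circular place field
    contains the current head position (x, y). A toy model: real place
    fields are irregular and learned, not fixed circles."""
    x, y = position
    active = []
    for cell, (cx, cy) in place_fields.items():
        if (x - cx) ** 2 + (y - cy) ** 2 <= radius ** 2:
            active.append(cell)
    return active
```

Re-entering the same region reproduces the same active subset, mirroring the stability of firing fields noted above.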
Neurological research has identified specific locations, processes, and interactions for thinking and memory down to the level of the human neuron and molecule. Research has shown that human neurons and synapses are both actively involved in thought and memory, and that brain imaging technology such as Magnetic Resonance Imaging (MRI), also called Nuclear Magnetic Resonance Imaging or Magnetic Resonance Tomography (MRT), can be used to observe this brain activity at the molecular level. Recently, atomic magnetometers have enabled the development of cheap and portable MRI instruments that image parts of the human anatomy, including the brain, without the large magnets used in traditional MRI machines. There are over 100 billion brain cells/neurons in the brain, each of which has synapses that are involved in memory and learning, which can also be observed by brain imaging techniques. It has also been shown that new brain cells are created whenever one learns something new. Whenever stimuli in the environment or in thought make a significant enough impact on the being's brain, new neurons are formed. During this process synapses carry on electro-chemical activities that reflect activity related to both memory and thought. Important for purposes of the present invention is that, using modern technological devices such as an atomic magnetometer, this activity in the brain at the molecular level can be detected, measured, stored, and operated upon using computers according to the present invention as these processes are taking place in the brain. Research has also shown that, even though there are important similarities in the brain activity of different people, each person has a unique brain "fingerprint". This fingerprint is unique to each person's thought processes and to how and where they store their memories in their brain.
It is an objective of the present invention to facilitate recording and translating the uniqueness of a subject's brain and the subject's corresponding brain activity, and additionally to design a universal brain translation system and method that facilitates communication between different beings, machines, or a combination thereof.
In September 2006, Stefan Posse and his colleagues at the University of New Mexico used MRI techniques to observe brain activity correlated with the thought of a single word, and they have since recorded longer imaging sequences and decomposed the thought processes into individual thoughts. When images of Marilyn Monroe were shown, a specific neuron fired; when images of another actor were shown, a neuron specific to that actor fired. Likewise, Francis Crick and Christof Koch, in the periodical Nature Neuroscience, Vol. 6, No. 2, dated February 2003, in an article entitled "A Framework for Consciousness", along with their more recent findings, demonstrate that certain neurons fire selectively to certain visual stimuli. Koch argues for including the neural correlates of conscious percepts as part of understanding how human beings are consciously aware. Koch's research has shown that neural correlates of both basal arousal and activity in the inferior temporal cortex are necessary for a human being to be consciously aware, and that brain decoding techniques can translate readings of a patient's mind into images. In one study, 20-30 specific neurons were monitored to infer what the patient was conscious of. Research by Koch has also shown that physical input (i.e., a person actually looking at an object) and imagined input (i.e., a person closing their eyes and imagining an object in their mind) stimulate the same neurons. It is an object of the present invention to correlate repeated recordings and loggings of user physiological activity (i.e., user brain activity, sub-vocalizations, etc.) with recordings and loggings of the surrounding environmental activity (i.e., panoramic video images of the gaze of the user upon a subject, etc.) to build an esemplastic patterned language using the present invention.
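A crude sketch of identifying stimulus-selective neurons from repeated trials, in the spirit of the Monroe-neuron finding above, follows. The trial format (label plus set of neuron IDs that fired) and the `min_rate` criterion are assumptions made for illustration.

```python
from collections import defaultdict

def selective_neurons(trials, min_rate=0.8):
    """Given (stimulus_label, fired_neuron_ids) trials, return the neurons
    that fired on at least min_rate of a stimulus's presentations: a crude
    stand-in for the stimulus-selective cells described above."""
    shown = defaultdict(int)                       # presentations per label
    fired = defaultdict(lambda: defaultdict(int))  # fire counts per label/neuron
    for label, neurons in trials:
        shown[label] += 1
        for n in neurons:
            fired[label][n] += 1
    return {label: sorted(n for n, k in fired[label].items()
                          if k / shown[label] >= min_rate)
            for label in shown}
```

Repeated logging of stimulus/response pairs, as the invention proposes, is exactly what makes such a per-stimulus firing-rate tally possible.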
The computerized logging and assistance system that forms the present invention thus yields a representation of the consciousness and understanding of the world from the given point-of-view of the being whose information is operated upon. It thereby provides an informational system that may be operated upon to assist a user being, machine, or combination thereof in negotiating the world in which he, she, or it respectively lives or operates.
An example of a brain activity sensing system providing enabling technology incorporated into the present invention is a portable Magnetic Resonance Imaging device such as the Atomic Magnetometer Sensor Array Magnetic Resonance (AMR) imaging systems and methods. Recently, portable atomic MR systems such as those described in U.S. Patent 2009/0149736, dated 11 Jun. 2009, by Skidmore et al., and U.S. Patent 2010/0090697, dated 15 Apr. 2010, by Savukov, have been disclosed that are of a type compatible with and enabling of the present invention. Further, John Kitching, a physicist at the National Institute of Standards and Technology in Boulder, Colo., has developed tiny (grain-of-rice-sized) atomic magnetic sensors of a type compatible for use in the present invention. Specifically, the systems and devices disclosed by Skidmore and Kitching present a wearable, portable array of reduced size and low power consumption, reducible to wafer level, with rapid signal transfer and a decreased magnetic field, facilitating lower cost and easy mounting on and/or inside a person, animal, or inanimate object. U.S. Patent Application 20100016752, by Jeffery M. Sieracki, dated 21 Jan.
2010 entitled System and Method for Neurological Activity Signature Determination, Discrimination, and Detection discloses a system for automatically correlating neurological activity to a predetermined physiological response comprising: at least one sensor operable to sense signals indicative of the neurological activity; a processing engine coupled to said sensor, said processing engine being operable in a first system mode to execute a simultaneous sparse approximation jointly upon a group of signals sensed by said sensor to generate signature information corresponding to the predetermined physiological response; and, a detector coupled to said sensors, said detector being operable in a second system mode to monitor the sensed signals and generate upon selective detection according to said signature information a control signal for actuating a control action according to the predetermined physiological response.
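The two-mode scheme of the Sieracki disclosure (learn a signature in one mode, then detect it in another) might be sketched, in highly simplified form, as follows. A cosine-similarity match is substituted for the disclosure's simultaneous sparse approximation, and the threshold and function name are illustrative assumptions.

```python
import math

def detect(signal, signature, threshold=0.9):
    """Detection-mode sketch: cosine similarity of a sensed window
    against a stored signature; emits a control action when the match
    exceeds threshold. Illustrative only; the actual disclosure uses
    simultaneous sparse approximation, not cosine matching."""
    dot = sum(a * b for a, b in zip(signal, signature))
    na = math.sqrt(sum(a * a for a in signal))
    nb = math.sqrt(sum(b * b for b in signature))
    match = dot / (na * nb)
    return "actuate" if match >= threshold else None
```

In the first system mode the `signature` vector would be generated from a group of sensed signals; the sketch covers only the second, monitoring mode.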
Still alternatively, U.S. Patent Application 2010/0042011, dated 18 Feb. 2010, by Doidge et al entitled “Three-dimensional Localization, Display, Recording, and Analysis of Electrical Activity in the Cerebral Cortex” discloses a computerized Dynamic Electro-cortical Imaging (DECI) method and apparatus for measuring EEG signatures of the brain in real time. The DECI system and method is portable and can be worn by the user to generate dynamic three-dimensional (voxel) information of the electrical activity occurring in the cerebral cortex of the brain. The DECI system is of a type that may be incorporated in the present invention to provide brain activity information according to the present invention. U.S. Patent Application 2010/0041962, dated 18 Feb. 2010 by Causevic et al., entitled “Flexible Headset for Sensing Electrical Activity” discloses a headset worn on the outside of the head for sensing brain activity.
Additionally, scientific studies show that images we recall in our imagination are not always as detailed as a photographic image. In 1999, researchers led by Yang Dan at the University of California, Berkeley decoded neuronal firings to reproduce images seen by laboratory animals. The team used an array of electrodes embedded in the thalamus (which integrates all of the brain's sensory input) of the animals. Researchers targeted 177 brain cells in the thalamus's lateral geniculate nucleus area, which decodes signals from the retina. The animals were shown eight short movies, and their neuron firings were recorded. Using mathematical filters, the researchers decoded the signals to generate movies of what the animals saw and were able to reconstruct recognizable scenes and moving objects. An object of the present invention is to provide imagery and audio of the subject of the CP and the surrounding environment, correlated to brain activity, which can be queried by a user of the invention from logged information recorded by the invention and which is more complete and accurate than what the brain remembers. To derive this utility from the above-mentioned brain activity systems, like the AMR system, the resulting brain activity signatures are related to thoughts and memories associated with things in the surrounding environment with respect to the individual using the AMR system. A monocular or binocular camera system may be incorporated into the present invention, but preferably a camera system with stereoscopic capability is incorporated. U.S. Patent Application 20070124292 A1, by Kirshenbaum et al., dated 31 May 2007, entitled Autobiographical and Other Data Collection System, describes a system for collecting/recording, storing, retrieving, and transmitting video information that may be incorporated into the present invention.
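The linear-filter decoding used by the Dan team can be caricatured as follows: each cell contributes a per-cell spatial filter scaled by its firing rate, and the sum approximates the viewed frame. The 2x2 "filters" and the cell names below are illustrative stand-ins for the fitted filters of the actual study.

```python
def decode_frame(firings, decoding_filters):
    """Linear decoding sketch: reconstruct an image frame as the sum of
    each cell's spatial filter weighted by that cell's firing rate.
    Filters are nested lists of equal shape; values are arbitrary units."""
    first = next(iter(decoding_filters.values()))
    h, w = len(first), len(first[0])
    frame = [[0.0] * w for _ in range(h)]
    for cell, rate in firings.items():
        filt = decoding_filters[cell]
        for i in range(h):
            for j in range(w):
                frame[i][j] += rate * filt[i][j]
    return frame
```

Fitting the filters themselves (by regressing recorded firings against known stimuli) is the substantial part of such a study and is omitted here.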
Stereoscopic cameras that approximate human vision are preferable because they reflect how humans naturally see and experience the world and provide depth cues to the brain. Panoramic stereoscopic cameras are more preferable still because they provide more measurable data and added spatial awareness like that which persons experience, and allow replay of the total surrounding environment that is more attuned to what actually stimulates the user's senses, memories, and resulting thoughts in the real world. Portable head-mounted panoramic video cameras of a type that may be used in the present invention include: U.S. Pat. No. 6,552,744 B2, by Chen, dated Apr. 22, 2003, entitled Virtual Reality Camera, which presents a camera that records discrete still or video images that can be stitched together to create a panoramic scene and incorporates computer processing so that the user may pan and zoom around the panoramic scene; U.S. Patent Application 2001/00105555 and U.S. Pat. No. 6,539,547, by Driscoll, dated Aug. 2, 2001, which disclose a Method and Apparatus for electronically recording, storing, and distributing panoramic images from a panoptic camera system to a remote location using the internet; U.S. Patent Publication 2005/0157166, by Peleg, dated Jul. 21, 2005, entitled Digitally Enhanced Depth Image, which discloses a camera method to simultaneously record, store, and process panoramic stereoscopic imagery; U.S. Pat. No. 5,023,725, by McCutchen, dated Jun. 11, 1991, FIG. 21, which discloses a cap with a plurality of high-resolution video cameras that record imagery that may be stitched together to form a hemispherical scene; U.S. Patent Application 20020015047, by Okada, Hiroshi, et al., dated Feb.
7, 2002, entitled "Image cut-away/display system", which describes a panoramic camera, processing, and display system in which the images are combined to form a single wide-area view image for use as a virtual environment, telepresence environment, texture-mapped three-dimensional simulated environment, or augmented reality environment consistent for use in the present invention; U.S. Patent Application Publication 2005/0128286, dated 16 Jun. 2005, by Angus Richards, which discloses a panoramic-camera-mounted helmet that also includes a head-mounted display (HMD) with telecommunication capabilities; U.S. Pat. Nos. 5,130,794 and 5,495,576, and grandparent, parent, and pending related applications by Ritchey and Ritchey et al.; and U.S. Patent Applications 2005/0128286, dated 16 Jun. 2006, and 2006/0082643, dated 20 Apr. 2006, which disclose HMD systems of a type compatible for incorporation in the present invention. All of the camera systems cited in this paragraph are of a type that may be incorporated as a component of the present invention.
Still alternatively, in-eye and on-eye contact lenses may include cameras for recording and displaying imagery according to the present invention. For example, a camera device that is mounted on and/or inside the eye is disclosed in U.S. Patent 20090189974 A1, by Michael F. Deering, dated 30 Jul. 2009, entitled Systems Using Eye Mounted Displays (EMD). Deering describes how a still and/or video camera could be placed directly on the eye mounted display worn on or in the user's eye(s). Such a camera in essence automatically tracks the motions of the user's eye(s) because it is effectively part of the user's eye(s). The eye-mounted camera is folded within the EMD using some of the same optical folding techniques used in folding the display optics of the EMD. The processing of the image is handled on the contact lens, by an electronics package on the user's body, or by a remote processing center. A remote user can pan and tilt the camera to point in the same direction as the user's eyes, using the direction information from the eye tracking subsystem. Such a camera greatly reduces the time and the physical effort of grabbing an external camera when taking a picture; as an example, a particularly gorgeous sunset can be photographed with something as simple as a quick glance and a double eye blink. The camera can be located in one or both eyes. A plurality of camera systems, like EMD and panoramic camera systems, may be integrated in the present invention to attain the required FOV coverage and overall system functionality. An EMD system of this type may provide capture and/or display for the present invention, and may transmit to and from the smartphone when incorporated according to the present invention. Additionally, another EMD design consists of a contact lens that harvests radio waves to power an LED that displays information beamed to the contact lens from mobile devices, like a smartphone.
The EMD system was invented by Babak Parviz and is currently in prototype at the University of Washington (Ref. New Scientist, 12 Nov. 2009 by Vijaysree Venkatraman). The above systems are of a type compatible with and are incorporated into the present invention.
A smartphone is a portable electronic device (PED) that combines the functions of a personal digital assistant (PDA) with a mobile phone. Smartphones typically have computer and computer processing hardware, firmware, and software built into the unit. Examples of smartphones are the iPhone 4S and 5, sold by Apple Inc. Later models added the functionality of portable media players, low-end compact digital cameras, pocket video cameras, and global positioning system (GPS) navigation units to form one multi-use device. Modern smartphones also include high-resolution touch screens and web browsers that display standard web pages as well as mobile-optimized sites. High-speed data access is provided by Wi-Fi and mobile broadband. The most common mobile operating systems (OS) used by modern smartphones include Google's Android, Apple's iOS, Nokia's Symbian, RIM's BlackBerry OS, Samsung's Bada, Microsoft's Windows Phone, Hewlett-Packard's webOS, and embedded Linux distributions such as Maemo and MeeGo. Such operating systems can be installed on many different phone models, and typically each device can receive multiple OS software updates over its lifetime.
It is also known in the art that small independent pill capsules may be used to capture imagery. A very small wireless video camera and lens, transceiver, data processor, and power system with components that may be integrated and adapted to form the panoramic-capable wireless communication terminals/units is disclosed by Dr. David Cumming of Glasgow University and by Dr. Blair Lewis of Mt Sinai Hospital in New York. It is known as the "Given Diagnostic Imaging System" and is administered orally as a pill/capsule that can pass through the body and is used for diagnostic purposes. U.S. Pat. No. 7,662,093, by Gilad et al., dated 16 Feb. 2010, entitled Reduced Size Imaging Device, describes a swallowable imaging capsule that includes an imager, processing, and wireless transmission system that may be incorporated into and is compatible with the present invention. Others similarly include U.S. Pat. No. 7,664,174 and U.S. Patent Applications 20080033274 and 20080030573. Small pen cameras, tie cameras, and the like used in the spy and surveillance industries may also be incorporated to form camera components of the present invention. Objective micro-lenses suitable as taking lenses in the present invention, especially in the panoramic taking assembly, are manufactured by AEI North America, of Skaneateles, N.Y., which provides small and compact visual inspection systems. AEI sells micro-lenses for use in borescopes, fiberscopes, and endoscopes. AEI manufactures objective lens systems (including the objective lens and relay lens group) from 4-14 millimeters in diameter and 4-14 millimeters in length, with circular FOV coverage from 20 to approximately 180 degrees.
Of specific note is that AEI can provide an objective lens with over 180-degree FOV coverage, required for some embodiments of the panoramic sensor assembly incorporated in the present invention in order to achieve overlapping adjacent hemispherical FOV coverage from two back-to-back fisheye lenses, or stereoscopic panoramic coverage when four lenses are incorporated at 90-degree intervals. The above cameras, transmitters, and lenses may be incorporated into the above video logging system or another portion of the panoramic-capable wireless communication terminals/units to form the present invention. Camera systems may be powered and controlled via clad wire, fiber optics, or a radio-frequency signal. Camera signals may be processed and transmitted separately or multiplexed in any manner familiar to those in the art in the present invention. Both EMD and pill camera technology are enabling and are incorporated in the present invention to record and transmit imagery of the user and the scene surrounding the user.
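Stitching two back-to-back fisheye images into a full spherical view reduces to mapping each world direction onto coordinates in one of the two lens images. The sketch below assumes an ideal equidistant fisheye with a 185-degree FOV; the lens model, the axis convention, and the function name are all assumptions for illustration.

```python
import math

def fisheye_lookup(lon, lat, half_fov=math.radians(92.5)):
    """Map a world direction (longitude, latitude, radians) to
    (lens_index, u, v) in normalized image coordinates for two
    back-to-back equidistant fisheyes, each covering ~185 degrees.
    Lens 0 looks along +x, lens 1 along -x; seam overlap resolves
    to the nearer lens."""
    x = math.cos(lat) * math.cos(lon)
    y = math.cos(lat) * math.sin(lon)
    z = math.sin(lat)
    lens = 0 if x >= 0 else 1
    ax = x if lens == 0 else -x
    theta = math.acos(max(-1.0, min(1.0, ax)))  # angle off the lens axis
    if theta > half_fov:
        return None                             # outside this lens's FOV
    r = theta / half_fov                        # equidistant projection
    phi = math.atan2(z, y if lens == 0 else -y)
    return lens, r * math.cos(phi), r * math.sin(phi)
```

Iterating this lookup over every pixel of an equirectangular output grid yields the stitched panorama; real systems additionally calibrate lens distortion and blend across the overlap band.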
As stated above, deriving utility from the above-mentioned brain activity systems includes relating the brain activity to subject(s) in the surrounding environment at the time the focus was on the subject observed. User-borne position and orientation systems, geospatial position and orientation systems, target designators, and eye tracking systems may be incorporated in the present invention to accomplish the task of recording what the attention of the user is focused upon. Pointing devices may be any user-operated pointing device including, but not limited to, a joystick, a trackball, a touch-sensitive pad or screen, a set of directional "arrow" cursor control keys, a helmet-mounted sight, or an eye-tracking system. Many navigation systems, surveillance systems, and weapon systems provide a user with a video image of a region of interest (ROI) from which the user may designate an object or feature for tracking. In a typical tracker, the user selects the desired target and from that point onward the target is tracked automatically. Known techniques for video-based target designation employ a user-operated pointing device (e.g., joystick, trackball, helmet-mounted sight, eye-tracking system, etc.) to either move a cursor/marker or move a gimbal on which the camera is mounted so that a marker (e.g., a crosshair) is located on the desired target on the live video display. Then, by pushing a button, the user locks the tracker on the current target. A video scaler and rangefinder may be incorporated as part of the target tracking system. A tracking module is then actuated and attempts to reliably acquire a trackable target at the designated position within the image for subsequent automated tracking. Target tracking systems may be integrated with eye tracking systems to determine what the eyes of a person are focused upon. Tracking and pointing devices may be manually operated, or automatically operated by a computer given a rule set.
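The designate-and-lock flow described above (move a marker onto the live video, push a button, lock the tracker) can be sketched as snapping a designation point to the nearest detected object. The detection format and the pixel tolerance below are illustrative assumptions.

```python
def lock_target(gaze_xy, detections, max_dist=30.0):
    """Snap the user's gaze/cursor position to the nearest detected
    object center within max_dist pixels; returns that object's id, or
    None if nothing is close enough to lock onto."""
    gx, gy = gaze_xy
    best, best_d = None, max_dist
    for obj_id, (x, y) in detections.items():
        d = ((x - gx) ** 2 + (y - gy) ** 2) ** 0.5
        if d <= best_d:
            best, best_d = obj_id, d
    return best
```

With an eye tracker supplying `gaze_xy`, the same routine serves the hands-free designation case; a tracking module would then follow the returned object frame to frame.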
Eye tracking systems are known in prior art that monitor the position of a user's eye within its socket in order to determine the user's line of gaze, for example to enable the user to control a device, such as a weapon, by eye movements or to determine whether the user is watching a predetermined location, such as a location on a television screen, or simply to determine the state of wakefulness of the user.
Furthermore, a number of different methods have been proposed for monitoring the position of the user's eye associated with gaze and focus on a subject in the user's field-of-view (FOV), including the so-called corneal reflection (CR) method, in which a point light source is used to produce a bright image on the anterior surface of the cornea and a tracking system monitors the position of the image. A differential CR/pupil tracking method has been developed in which the relative positions of the pupil and a corneal reflection are monitored by a suitable camera, with a wavelength-sensitive beam splitter being used to ensure that the user's view is not obstructed by the light source and camera. This method is less sensitive to sensor movements. Generally the eye is illuminated by a near-infrared source (or multiple sources) and a solid-state video camera captures an image of the eye. In so-called bright pupil imaging the light source produces a light beam which is coaxial with the camera axis, and light reflected back from the retina makes the pupil appear to be a bright circle, the apparent brightness increasing roughly with the fourth power of pupil diameter. In so-called dark pupil imaging the light source produces a light beam which is off-axis relative to the camera axis, and a dark pupil image is produced. Real-time image analysis is used to identify the pupil and corneal reflections and to find their centers. Portable target tracking and pointing devices of a type that can be incorporated in the present invention, to associate the image observed in the surrounding environment with specific subjects therein and with brain activity in order to facilitate correlated recording and designation, include the eye tracking system generally described above and specifically described in U.S. Patent Application 20040196433, by Durnell, dated 7 Oct. 2004, titled Eye Tracking System, and in U.S. Patent Application 20080205700, by Nir, dated 28 Aug.
2008, titled “Apparatus and Method for Assisted Target Designation”, which includes video designation and tracking via imagery and/or directional audio. The systems referenced in this paragraph produce information that can be digitally stored and processed by a computer. The eye tracking, gaze, directional FOV, and GPS data derived from the systems described in this paragraph can be correlated with recorded and stored AMR and camera data of objects and scenes according to the present invention. The Ultra-Vis, Leader, system developed by ARA, which includes the subsidiary companies MWD, Vertek, and KAD, Lockheed Martin, and Microvision Incorporated, is a type of target designation and tracking system that may be integrated into the present invention. The portable iLeader system includes a HMD system with a micro-laser range finder system for target designation, see-through eyewear, a head and eye tracking system, waveguide display goggles, video cameras for recording the view directly ahead of where the user is looking, helmet electronics, an eye tracking and target designation system, voice mics and earbuds, and an associated electronics unit to control the HMD, telecommunications network and GPS interface, iGlove, battery power and sensor feed, and a soldier augmented reality (AR) system. In the planning and patrol mode the user operates the see-through HMD of the iLeader system to designate and record targets in the surrounding environment and overlay information on the see-through display. The overlaid information displayed to the user may be from associated sensors the user is wearing, sensors other users are wearing, or other information on networked devices wirelessly transmitted from a remote location that is part of the telecommunication system and network that includes the iLeader system. Technology of a type disclosed in the iLeader system is consistent with and may be incorporated into the present invention.
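By way of a non-limiting illustrative sketch, the bright-pupil imaging approach described above reduces, at its simplest, to thresholding the infrared image and taking an intensity-weighted centroid as a first estimate of the pupil center. The function name, threshold value, and synthetic test image below are assumptions for illustration only and are not drawn from the cited prior art.

```python
import numpy as np

def find_pupil_center(frame, threshold=200):
    """Locate the bright-pupil centroid in a grayscale eye image.

    In bright-pupil imaging the coaxial IR source makes the pupil the
    brightest region; thresholding and taking the intensity-weighted
    centroid gives a first estimate of the pupil center.
    frame: 2-D numpy array of pixel intensities (0-255).
    Returns (row, col) of the centroid, or None if no pixel exceeds
    the threshold.
    """
    mask = frame >= threshold
    if not mask.any():
        return None
    rows, cols = np.nonzero(mask)
    weights = frame[rows, cols].astype(float)
    r = float(np.average(rows, weights=weights))
    c = float(np.average(cols, weights=weights))
    return (r, c)

# Synthetic 8x8 eye image with a bright 2x2 "pupil" at rows 3-4, cols 5-6.
frame = np.full((8, 8), 50, dtype=np.uint8)
frame[3:5, 5:7] = 250
print(find_pupil_center(frame))  # -> (3.5, 5.5)
```

A production eye tracker would additionally locate the corneal glint and track the pupil-glint vector; this sketch shows only the centroid step.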
As mentioned above, audio input systems provide a significant portion of human sensory input. A microphone system is incorporated to record audio from and about the user as part of the video logging system described in the present invention. Microphones are faced inward to record audio from the user and outward to record audio about the user. Typically microphones are located on the user as a device worn or carried by the user. Small microphones are known to those in the art and are commonly used for hands-free cell phone operation, such as throat mics that fit around the ear or lapel mics worn by those in the television and security industries, and are of a type that is compatible with and incorporated into the present invention. The microphone can be part of an audio recording or communication system common on cellular telephones and in the cellular telephone industry. Alternatively, three-dimensional surround-sound ambisonic audio recording systems exist that capture sound using a tetrahedrally arranged quartet of cardioid-pattern microphone capsules connected to simple circuitry that converts the outputs to a standard B-format signal. B-format signals represent a 3D sound-field with four signals: X, Y, and Z, representing three orthogonal figure-of-eight patterns, and an omnidirectional W reference signal. Audio from ambisonic microphones may be spatially encoded using surround-sound encoders, and the resulting spatial audio may be played back in a user's earphones or earbuds. Ambisonic microphones may be distributed in an outward-facing manner according to the present invention. Ambisonic hardware known as TetraMic Spheround, with associated software of a type applicable to the present invention, is manufactured by Core Sound of Teaneck, N.J., USA.
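The A-format-to-B-format conversion mentioned above can be sketched as a simple matrix operation. The capsule ordering and 0.5 scaling below follow one common convention and are assumptions for illustration; actual microphones (including the TetraMic) apply manufacturer-specific calibration in their software.

```python
import numpy as np

# A-format capsule order for a tetrahedral microphone, assumed here as:
# FLU (front-left-up), FRD (front-right-down),
# BLD (back-left-down), BRU (back-right-up).
A_TO_B = np.array([
    [1,  1,  1,  1],   # W: omnidirectional reference
    [1,  1, -1, -1],   # X: front-back figure-of-eight
    [1, -1,  1, -1],   # Y: left-right figure-of-eight
    [1, -1, -1,  1],   # Z: up-down figure-of-eight
], dtype=float) * 0.5

def a_to_b(a_format):
    """Convert (4, n_samples) A-format capsule signals to B-format W, X, Y, Z."""
    return A_TO_B @ np.asarray(a_format, dtype=float)

# A sound arriving equally at all four capsules encodes as pure W
# (the directional X, Y, Z channels cancel to zero).
samples = np.ones((4, 3))
b = a_to_b(samples)
print(b[0])  # W channel: [2. 2. 2.]
```

The directional channels can then be fed to a surround-sound decoder or binaural renderer for playback in earphones, as the paragraph above describes.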
Vocal representations of the user or of a remote user, be they words spoken aloud or sub-vocalized, may be sensed and provide data input according to the present invention. Audio can be used for correlation purposes or for command and control of the logging and enhancement system according to the present invention. Speech recognition (also known as automatic speech recognition or computer speech recognition) converts spoken words to text. The term “voice recognition” is sometimes used to refer to recognition systems that must be trained to a particular speaker, as is the case for most desktop recognition software; recognizing the speaker can simplify the task of translating speech. In the present invention a microphone is a user interface for recording audio signatures of the user and the surrounding environment for input into an associated computer in order to facilitate hands-free computing. Conventional voice-command systems that use conventional voice recognition systems of a type that may be used in the present invention include the large-vocabulary Kurzweil Applied Intelligence (KAI) Speech Recognition System for commercial use.
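Once a recognizer has converted speech to text, command and control of the logging system amounts to routing the transcript to an action. The sketch below illustrates this with a deliberately simple "first matching phrase wins" rule; the phrases and handler names are illustrative assumptions, not commands from any cited system.

```python
def dispatch_command(transcript, handlers):
    """Route recognized speech to a logging-system action.

    transcript: text from any speech recognizer (e.g., the output of a
    large-vocabulary engine); handlers: dict mapping command phrases to
    callables. A real system would use the recognizer's own grammar or
    an intent model rather than substring matching.
    """
    text = transcript.lower()
    for phrase, action in handlers.items():
        if phrase in text:
            return action()
    return "unrecognized"

# Illustrative hands-free commands for the data logging system.
handlers = {
    "start logging": lambda: "logging started",
    "stop logging": lambda: "logging stopped",
    "mark event": lambda: "event marked",
}
print(dispatch_command("please start logging now", handlers))  # -> logging started
```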
An embodiment and sensor input component of the present invention includes a sub-vocalization system. Sub-vocalization is the tendency of a user to silently say individual words to themselves as they read or think. Sub-vocal recognition (SVR) is the process of taking sub-vocalization and converting the detected results to a digital text-based or text-synthesized voice audio output. It is similar to voice recognition except that silent sub-vocalization is being detected. A sub-vocalization system of a type that may be incorporated into the present invention as a component is disclosed in U.S. Pat. No. 6,272,466, dated 7 Aug. 2001, by Harada, et al., entitled “Speech detection apparatus using specularly reflected light”, and that described in the NASA Sub-vocal Recognition (SVR) program begun in 1999 and later renamed the Extension of Human Senses program. In the NASA program, electromyographic (EMG) signatures of the vocal-tract muscles are sensed by contact sensors placed on the throat (either internally or externally to the body). The signatures are read out as electrical signals which are translated by a computer into patterns recognized by classifiers as words or word components. Other sensor input systems that may be integrated with the present logging and memory enhancement system and method include infrared and LIDAR systems. LIDAR (Light Detection and Ranging) is an optical remote sensing technology that measures properties of scattered light to find range and/or other information of a distant target. LIDAR systems can see through fog and darkness to record the shape and motion of objects in their FOV, overcoming the limitation of visible-spectrum cameras. LIDAR systems and methods of a type that may be integrated into and are compatible with the present invention are those found in U.S. Patent Application 2003/0154010 and U.S. Pat. No. 6,859,705, by Rae et al., dated 14 Aug. 2003 and 22 Feb.
2005, entitled “Method for Operating a pre-crash sensing system in a vehicle having a countermeasure system”, using a radar and camera; U.S. Patent Application 2007/0001822, by Karsten Haug, dated 4 Jan. 2004, entitled “Method for improving vision in a motor vehicle”; and that mentioned in U.S. patent application Ser. No. 11/432,568, entitled “Volumetric Panoramic Sensor Systems”, filed 11 May 2006, and LIDAR systems cited in related patent applications by the present inventor. An objective of the present invention is to provide an embodiment which includes a LIDAR system for logging the surrounding environment, including man-portable systems described in U.S. Patent Application Publication 2011/0273451, dated 10 Nov. 2011, by Salemann, and in a publication entitled “An approach for collection of geo-specific 3D features from terrestrial LIDAR”, by Dr. David Optiz et al., of Overwatch Geospatial Incorporated, of Missoula, Mont., dated 28 Apr. 2008, at the ASPRS Conference.
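The core ranging computation behind any pulsed time-of-flight LIDAR of the kind cited above is straightforward: the emitted pulse travels to the target and back, so the one-way range is half the round-trip distance at the speed of light. The following minimal sketch (function name is illustrative) shows the arithmetic.

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def lidar_range(round_trip_seconds):
    """Range to a target from a LIDAR pulse's round-trip time.

    The pulse travels out and back, so range = c * t / 2.
    """
    return C * round_trip_seconds / 2.0

# A pulse returning after 1 microsecond indicates a target roughly 150 m away.
print(round(lidar_range(1e-6), 1))  # -> 149.9
```

Scanning such a rangefinder across the field of view yields the 3-D point clouds of surrounding objects that the logging system would record.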
Turning now to user feedback systems of a type incorporated into the present invention: feedback to the user can be through any of the user's senses. Portable audio-visual devices of a type that may be incorporated in the present invention to provide visual and audio information to the user include information appliances such as cellular phones, head-mounted displays, laptops, and speaker headphones. Additionally, separate eye and audio capture and presentation devices may be worn by the user. The separate devices may be connected via a radio-frequency, infrared, wired, or fiber-optic communications network on or off the user. Processing of the audio and visual signature information may occur at the site of the sensor, downstream in the body, or outside the body on a system mounted on or carried by the user, or at a remote server in communication with the user's video logging and enhancement/assistance system.
According to many users, a current limitation of panoramic head-mounted display (HMD) systems integrated with panoramic camera systems is that they are too heavy and bulky. The addition of wider field-of-view displays and viewing optics, microphones, speakers, cameras, global positioning systems, head and eye tracking systems, telecommunications, associated power and processing capabilities, and helmet padding adds further weight and bulk. These problems contribute to the majority of head-mounted displays being too large and not portable. Correspondingly, a further limitation is that putting on, adjusting, and taking off the HMD is a difficult task. Finally, another limitation is that good head-mounted displays are expensive. Head-mounted display (HMD) devices of a type that are compatible with the present invention are described in the present inventor's previously disclosed prior art. HMD designs well known to those skilled in the art that may be used in the present invention are described in the following papers: “Head-Worn Displays: The Future Through New Eyes”, by Jannick Rolland and Ozan Cakmakci, published by the Optical Society of America, April 2009; and “Head-Worn Displays: A Review”, by Jannick Rolland and Ozan Cakmakci, published by IEEE in the Journal of Display Technology, Vol. 2, No. 3, September 2006. Specifically, a type of system applicable to the present invention is a low-profile writeable holographic head-worn display (HWD) that has see-through capabilities that facilitate augmented reality. U.S. Patent Application 20100149073, by David Chaum et al., dated 17 Jun. 2010, entitled “Near to Eye Display System and Appliance”, is such a holographic type of display compatible with and incorporated into the present invention.
Such a system, compatible with and incorporated by reference into the present invention, manufactured by Microvision of Redmond, Wash., includes the small portable Integrated Photonics Module (IPM), only a couple of centimeters square, that is mounted on a HMD device. The IPM uses integrated electronics to control a laser and a bi-axial MEMS scanner to project an image through optics onto eyeglasses a user is wearing. Furthermore, U.S. Patent Application 2005/0083248, by Biocca, Frank and Rolland, Jannick et al., dated 21 Apr. 2005, entitled “Mobile face capture and image processing system and method”, discloses a camera system that looks inward to capture a user's face but not outward, so that no continuous panoramic view of the remaining surrounding scene can be recorded and interacted with; such a view is critical for two-way teleconferencing and for establishing neural correlates of consciousness with the surrounding environment. A further limitation of Biocca is that the inward-facing cameras block the user's peripheral FOV.
Flexible electronic displays of a type integrated in the present invention are shown in U.S. Patent Application Publication 2010/0045705, dated 25 Feb. 2010, by Vertegaal et al., entitled “Interaction Techniques For Flexible Displays”, which incorporates what is referred to as “e-paper” in the display industry. Display screens and associated computerized image-processing systems that drive flexible, thin, lightweight, soft or semi-rigid, energy-saving, irregularly shaped and curved LED display systems of a type integrated into the present invention are manufactured by Beijing Brilliant Technology Co., Ltd., China, under the trade name “flexible display”. It is known that both non-see-through and see-through LED and OLED systems are manufactured; see-through LED and OLED displays are frequently used in augmented-reality HMD applications. Systems referenced in this paragraph are of a type that may be integrated, retrofitted, and in some cases improved upon to realize the present invention.
Providing electrical power to the smartphone, portable brain activity sensing system, surround video logging system, correlation system, and sub-components is an enabling technology for the operation of the present invention. A conventional battery charger may be operated to recharge the battery carried by the user, typically in the smartphone. Landline transfer of energy, especially for recharging portable systems, is well known to those skilled in the art and may be used in some embodiments of the system that comprises the current invention. However, while less common, wireless energy transfer or wireless power transmission for recharging electrical devices is preferable because it facilitates ease of use in some embodiments described in the present invention. Wireless energy transfer or wireless power transmission is the process by which electrical energy is transmitted from a power source to an electrical load without interconnecting wires. An induction charging system of a type that may be used to recharge devices external to the body of the user or implanted in the user is of a type put forth in the Provisional Application by Ritchey et al.; in U.S. patent Ser. No. 13/222,2011, dated 31 Aug. 2011, by Parker et al., published as US Patent Application Publication No 2012/0053657 on 1 Mar. 2011, entitled “Implant Recharging”; and in U.S. Pat. No. 5,638,832, issued 17 Jun. 1997, by Singer et al., entitled “Programmable Subcutaneous Visible Implant”. Another method of providing electrical power incorporated in the present invention is kinetic energy replacement, where electrical power is generated by movement and used to power electrical devices. Energy can also be harvested to power small autonomous sensors such as those developed using Micro-Electromechanical Systems (MEMS) technology. These systems are often very small, require little power, and have applications otherwise limited by reliance on battery power.
Scavenging energy from ambient vibrations, wind, heat, or light enables smart computers and sensors in the present invention to function indefinitely. Energy can be stored in a capacitor, super-capacitor, or battery. In small applications (wearable and implanted electronics), the power follows this path: after being transformed (by, e.g., an AC/DC-to-DC/DC inverter) and stored in an energy buffer (e.g., a battery, condenser, or capacitor), the power is drawn by a microprocessor (fitted with optional sensors), which then transmits the gathered sensor data (usually wirelessly) over a transceiver. Biomechanical energy harvesters have been created and are incorporated into the present invention. One current model is the biomechanical energy harvester of Max Donelan, which straps around the knee. Devices such as this allow the generation of 2.5 watts of power per knee, enough to power some five cell phones. Incorporation of the above-mentioned electrical power and battery technologies is anticipated in realizing the present invention.
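The harvest-to-buffer-to-load power path described above can be illustrated with a simple per-second energy budget. The function and all numeric values below are illustrative assumptions (the 2.5 W figure echoes the knee harvester mentioned above; the load figure is hypothetical), not measured data.

```python
def simulate_energy_buffer(harvest_w, load_w, buffer_j, capacity_j, seconds):
    """Step a simple harvest -> storage buffer -> load energy budget.

    Each second, harvested energy is added to the buffer (capped at its
    capacity) and the sensor/transmit load drains it; the load only runs
    when the buffer holds enough energy for that second. Returns
    (final buffer joules, seconds the load actually ran).
    """
    ran = 0
    for _ in range(seconds):
        buffer_j = min(buffer_j + harvest_w, capacity_j)
        if buffer_j >= load_w:
            buffer_j -= load_w
            ran += 1
    return buffer_j, ran

# A 2.5 W knee harvester feeding a hypothetical 2.0 W logging load for 10 s:
# the 0.5 W surplus accumulates in the buffer while the load runs continuously.
print(simulate_energy_buffer(2.5, 2.0, 0.0, 100.0, 10))  # -> (5.0, 10)
```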
Correlation processing of information from the portable brain activity sensing system, surround video logging system, and other sensing systems is a key part of the present invention. Post-processing of sensor data includes noise filtering of brain activity data transmitted from the brain activity sensor system, such as an AMR or other internal biometric or physiological sensor system, and also includes post-processing of external data representing the surrounding environment recorded by devices such as panoramic video cameras. A key part of the correlation is target identification and tracking, which involves performing target recognition and filtering out false targets. Computer software and firmware of a type that is incorporated into the present invention to filter data and make correlations between brain pattern data and video is disclosed in U.S. Patent Application 2009/0196493, dated 6 Aug. 2009, by Widrow et al., entitled “Cognitive Method and Auto-Associative Neural Network Based Search Engine for Computer and Network Located Images and Photographs”. Hierarchical tree and relational databases familiar to those in the computer industry and discipline are incorporated in the present invention to organize and retrieve information in the computer. Widrow teaches storing input data, images, or patterns, and quickly retrieving them as part of a computer system when cognitive memory is prompted by a query pattern that is related to the sought stored pattern. Widrow teaches search and filtering techniques, pre-processing of sensor data, post-processing of sensor data, comparator operations performed on data, storage of data, and keying techniques incorporated into the present invention. Widrow also teaches that the computer may be part of a computer or information appliance and that the system may be remotely connected to the global information grid (GIG)/internet with the processes distributed. U.S.
Patent Application 20070124292 A1, by Kirshenbaum et al., dated 31 May 2007, entitled “Autobiographical and Other Data Collection System”, teaches a stereoscopic video logging system with recall. However, neither Widrow nor Kirshenbaum teaches a portable device for brain-pattern correlation with video logging and memory enhancement as does the present invention, and neither teaches spherical recording with brain correlation. Compact computer processing systems, including the latest 3G, 4G, and 5G telecommunication systems and follow-on devices such as smartphones (i.e. the Apple iPhone 4S and 5, Samsung Epic 4G, and Blackberry 4G smartphones); chips, PCBs, DSPs, and FPGAs; the powerful compact portable computer processing and image generator modules of Quantum 3D Inc., San Jose, Calif. (i.e. the IDX 7000, ExpeditionDI, and Thermite 4110); Mini & Small PCs by Stealth Computer Inc.; the Pixel Edge Center 3770 HTPC with Dual Core i7 or dual-chip Xeon processors; U.S. Pat. No. 7,646,367, dated Jan. 9, 2006, by Hajime Kimura, entitled “Semiconductor device, display device and electronic apparatus”; and associated telecommunications systems and methods disclosed in U.S. Pat. No. 7,720,488, by Kamilo Feher, dated Jun. 21, 2007, entitled “RFID wireless 2G, 3G, 4G, 5G internet systems including Wi-Fi, Wi-Max, and OFDM”, and the like are compatible with and of a type incorporated into the present invention.
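The pattern-prompted recall that the correlation system performs can be sketched as a nearest-pattern lookup: a query brain-activity feature vector retrieves the logged video segment whose stored signature it most resembles. This toy cosine-similarity stand-in (segment names and vectors are invented for illustration) is far simpler than the auto-associative neural network Widrow describes, but shows the query-to-retrieval flow.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def recall_segment(query_pattern, logged):
    """Return the logged video segment whose stored brain-activity
    signature best matches the query pattern.

    logged: list of (segment_id, signature_vector) pairs built during
    data logging.
    """
    return max(logged, key=lambda item: cosine(query_pattern, item[1]))[0]

# Illustrative log: each video clip keyed to a brain-activity signature.
logged = [
    ("clip_beach", [0.9, 0.1, 0.0]),
    ("clip_office", [0.1, 0.8, 0.2]),
    ("clip_concert", [0.0, 0.2, 0.9]),
]
print(recall_segment([0.85, 0.15, 0.05], logged))  # -> clip_beach
```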
Dynamic user/host command and control of the present invention through interactive machine-assist systems is a major feature of the above invention. Interactive computer machine-assist and learning systems are incorporated in the present invention to assist the host in command and control of the logging and memory system. Once neural correlates are identified using the technologies specifically described in the preceding paragraph, the information is referenced by artificial intelligence (AI) and AI-like systems to form an enduring cognitive assistant for the user or another client in the present invention. AI computer hardware and software of a type that may be integrated with the present invention is the Cognitive Agent that Learns and Organizes (CALO), developed by SRI between 2003 and 2008. CALO is a PC-based cognitive software system that can reason, learn from experience, be told what to do, explain what it is doing, reflect on its experience, and respond robustly to a client's specific commands or to a client's repeated actions when using the CALO system. The SIRI system is a software application on the iPhone 4S and 5, a portable electronic device manufactured by Apple Inc., CA. The SIRI application is a personal assistant that learns (PAL) application that runs on the iPhone 4S and 5. The SIRI system includes speech recognition and speech synthesis applications that may be integrated with the smartphone of the present invention to interact with on-board and off-system devices and software applications that comprise the entire system of the current invention. It is an object of the present invention to integrate AI and AI-like CALO and SIRI software, Widrow's 2009/0196493 art, and Kirshenbaum's logging and database software and hardware into a single integrated computer architecture to achieve the objectives of the present invention.
Microprocessor speed and memory capacity have increased along a number of dimensions that enable the present invention. Computers get twice as powerful relative to price every eighteen months; in other words, they increase by about an order of magnitude every five years. Additionally, decreases in the size and volume of mobile computing and communication devices continue to make them even more portable. Bandwidth is also increasing dramatically. Therefore, new uses for such powerful machines, programs, and bandwidth may be developed, as evidenced by the present invention. Particularly, as computing speed and memory capacity drop in price, personal-use systems become more powerful and more available. Personal communication systems, such as smartphones with video-call capability, may be used in part or in whole in the present invention to process, display, transmit, and receive data. One valuable use for powerful computing processes is multimedia, surveillance, and personal data collection. There are known in the art individual devices which already employ microprocessors and application-specific integrated circuits for recording specific types of data: e.g., video cameras (with soundtrack capability) for recording the local surroundings (including day-date imprints); pen-size digital dictation devices for sound recording; satellite-connected global positioning systems (GPS) for providing instantaneous position, movement tracking, and date and time information; smartphone-downloadable note-taking and other computing applications; biofeedback devices, e.g., portable cardiovascular monitors, for medical patients and sports enthusiasts; and the like. Additionally, remotely located servers may be incorporated into the present invention to receive and transmit data to and from users of the data logging and communication system comprising the present invention.
An additional feature of the command and control portion of the present invention, typically conducted by the user operating a host computer, is an integral part of the present invention. U.S. Patent Application 2009113298, by Edward Jung et al., dated 30 Apr. 2009, entitled “Method of selecting a second content based on a user's reaction to a first content”, provides a method of a type compatible with and incorporated into the present invention. Accordingly, data sensed or recorded by the logging and video enhancement system of the present invention may be operated upon in response to other data sensed or recorded, including at least one of a person's gaze, attention, gaze dwell time, facial movements, eye movements, pupil dilation, physiological parameters (heart rate, respiration rate, etc.), stance, sub-vocalization (and other non-word audio), P-300 response, brain waves, brain patterns, or other detectable aspects. In another embodiment, data indicative of a response may include data indicative of at least one of a user's physiological, behavioral, emotional, voluntary, or involuntary responses sensed by the system of the present invention.
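Selecting a second content from a sensed reaction to a first content can be sketched as a simple decision rule over the measured responses listed above. The field names, threshold values, and catalog entries below are illustrative assumptions and are not drawn from the cited application, which covers far richer response models.

```python
def select_second_content(reaction, catalog, dwell_threshold=1.5):
    """Choose follow-on content from a user's sensed reaction to the first.

    reaction: dict of sensed measures, e.g. gaze dwell time in seconds
    and a normalized 0-1 arousal score from physiological sensors.
    A long dwell combined with elevated arousal is read as interest,
    so related content is queued; otherwise a default is returned.
    """
    interested = (reaction.get("gaze_dwell_s", 0.0) >= dwell_threshold
                  and reaction.get("arousal", 0.0) >= 0.5)
    return catalog["related"] if interested else catalog["default"]

catalog = {"related": "more_on_topic", "default": "next_in_queue"}
print(select_second_content({"gaze_dwell_s": 2.4, "arousal": 0.7}, catalog))
# -> more_on_topic
```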
User activation and authentication are important in the present invention because inadvertent input might cause confusion in a host being's brain or malfunctioning in the host's and remote server machines' processing. Surreptitious activation by a hostile being or machine, either locally or remotely, could introduce unwanted input and control of the host being or machine. Thus, at least standard intrusion detection and information security systems and methods are incorporated into the present invention (i.e. firewalls and virus protection software). Preferably, the present system incorporates an identification and authentication system for activating and deactivating the system, given the critical nature of the access the present invention allows. It is an object to integrate and combine both standard and novel identification (ID) and authentication systems into the present invention.
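One standard authentication pattern suited to guarding activation of such a system is challenge-response with a keyed hash: the system issues a fresh random challenge, and activation proceeds only if the responder proves possession of a shared secret. This is a minimal sketch of that generic technique, using Python's standard library; the key and flow are illustrative, not a disclosed mechanism of the invention.

```python
import hashlib
import hmac
import secrets

def make_challenge():
    """Fresh random challenge so captured responses cannot be replayed."""
    return secrets.token_bytes(16)

def respond(shared_key, challenge):
    """User side: prove possession of the shared key without sending it."""
    return hmac.new(shared_key, challenge, hashlib.sha256).digest()

def verify(shared_key, challenge, response):
    """System side: activate only if the response matches, using a
    constant-time comparison to resist timing attacks."""
    expected = hmac.new(shared_key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

key = b"user-provisioned-secret"
challenge = make_challenge()
print(verify(key, challenge, respond(key, challenge)))           # -> True
print(verify(key, challenge, respond(b"wrong-key", challenge)))  # -> False
```

In practice the shared secret could itself be released only after a biometric check, layering identification on top of authentication as the paragraph above contemplates.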
In some instances it may be preferable to locate at least some processing and database storage of the present invention at a remote location. This may be preferable in order to reduce weight and because of limited space considerations. Additionally, locating processing at a remote location may be important for safety and security reasons.
Size, location, unobtrusiveness, concealment, and support of components borne by the user, whether external or internal to the body of the user, are important parts of the present invention. These requirements vary and dictate the various embodiments of this invention. Traditional support assemblies include securing components onto the clothing of the user. Backpacks and belt-packs are one such conventional example. Distribution of some components in the present invention is a technique used to decrease the weight and volume of the present invention.
Improved and novel systems and methods of positioning and securing devices to or in the host user are an important contribution and objective of the present invention. These systems and methods of dividing up and securing the components overcome many of the limitations mentioned above with HMDs. Alternatives include using invasive and/or noninvasive techniques. The present invention includes various systems and methods that lessen or disguise the visual impact of people wearing data logging and memory enhancement systems. U.S. Pat. No. 4,809,690, dated 7 Mar. 1989, by Jean-Francois Bouyssi et al., entitled “Protective skull cap for the skull”, is compatible and of a type that may be integrated into the present invention. Additionally, data derived from the present invention may be transmitted for presentation by a programmable subcutaneous visual implant as described in U.S. Pat. No. 5,638,832 by Singer in order to hide, or to communicate with others in the surrounding environment in a non-verbal manner compatible with the present invention. Concealing implants by the use of a hair-piece, wig, fall, synthetic skin, prosthetics, optical film, skin-colored and tattoo sleeves, sticky material, or material coverings that blend into and with the exterior body and extremities is an objective of the present invention. For instance, skull caps may be used to hide or conceal components of the present invention that are mounted in and on the head of the user. It is a further objective to integrate a covering that conceals the camera optics, comprised of a one-way film of the kind used in the optical industry on contact lenses and eyeglasses. These concealment devices are well known to those in the medical, optical, and cosmetic industries. However, the use of these concealment devices as described in the present invention is not known in the prior art.
In the present invention miniaturization allows sensor, input, processing, storage, and display devices to be positioned on the exterior of the user by means of conventional double-sided adhesive-based techniques commonly used in the medical industry to mount heart and brain monitoring sensors to a patient. Body piercings known to people in the body-art industry are used to support components of the present invention. Specifically, industrial, snug, forward helix, conch, and lobe piercings of the skin may support components. In medicine, a fistula is an unnatural connection or passageway between two organs or areas that do not connect naturally. While fistulas may be surgically created for therapeutic reasons, in the present invention fistulas are created to provide passageways for components that facilitate and form the present invention. Fistulas used in the present invention include: blind, with only one end open; complete, with both external and internal openings; and incomplete, with an external skin opening that does not connect to any internal organ. While most fistulas are in the form of a tube, some can also have multiple branches and various shapes and sizes. In medicine, a cannula is a tube that can be inserted into the body, often for the delivery or removal of fluid. A cannula may be inserted by puncturing the skin. Alternatively, a cannula may be placed into the skull by drilling or cutting a portion of the skull away and replacing it with an appropriate material or device. In the present invention fistulas and cannulas are used to house, support, connect, and conceal components of the present invention.
Subdermal and transdermal implants are known in the body modification industry and medical profession and are adapted in the present invention to hold components of the invention in place. Subdermal implants are implanted objects that reside entirely below the dermis, e.g. horn implants for body art, a pacemaker placed beneath the skin for medical purposes, or a magnet implanted beneath the skin to assist a user in mounting or picking up devices above the skin. In contrast, transdermal implants are placed under the skin but also protrude out of it. Binding and healing of the skin around and over implants and piercings is an important part and objective of the present invention. Aftercare of implants is known in the body modification industry and medical profession and is also a part of the present invention. (Ref. Shannon Larratt (Mar. 18, 2002), ModCon: The Secret World of Extreme Body Modification, BMEbooks, ISBN 0973008008; and various medical atlases of plastic surgery, ENT surgery, and neurosurgery.)
Surgical methods used to implant components in the present invention are described in various surgical atlases known to those in the medical field. Making holes in the skin and skull of living animals and ensuring their survival is done routinely in the medical and veterinary professions. For instance, a paper by Laflin and Gnad, DVM, entitled “Rumen Cannulation: Procedure and Use of a Cannulated Bovine”, published in 2008 by Kansas State University, and an article by Hodges and Simpson, DVM, in 2005, entitled “Bovine Surgery for Fistulation of the Rumen and Cannula Placement”, describe surgical techniques for making large holes between the outer skin and stomach of cattle. These techniques demonstrate the surgical methods and the survivability of animals when large cannulas and fistulas are placed in them. In the present invention these techniques are used to make passageways for communication between implanted electronic components, using cannulas and fistulas into and on the body of users consistent with the present invention.
It is known in medicine that specialty implants are used in plastic surgery to achieve aesthetic results. Common implants include chin, calf, pectoral, nasal, carving, and cheekbone implants. Additionally, it is known that implants are used in the body-art industry to create bumps as body art. A manufacturer of such implants is Spectrum Designs Medical, of Carpinteria, Calif. These implants may be filled with silicone, foam, or Teflon and are typically placed just beneath the skin. In the present system implants are filled with electronic components. The components may be connected to the interior and exterior of the body via fistulas and cannulas. Furthermore, Craig Sanders et al., in an article entitled “Force Requirements for Artificial Muscle to Create an Eyelid Blink With Eyelid Sling”, dated 19 Jan. 2010, in Arch Facial Plastic Surgery, Vol. 12, No. 1, January/February 2010, and an article entitled “Artificial muscles restore ability to blink, save eyesight”, by U.C. Davis Health System, dated 11 Feb. 2010, describe an implanted artificial muscle system to restore a person's eyelid blink. The eyelid blinking system demonstrates the surgical implantation techniques and methods of placing small electrical processors, batteries, servos, and wiring beneath the skin of a type used in and enabling certain aspects of the present invention.
With respect to implants, it is known by neurosurgeons in the medical profession that artificial plastic skull plates may replace sections of the skull; ref. “Applications of Rapid Prototyping in Cranio-Maxillofacial Surgery Procedures”, Igor Drstvensek et al, International Journal of Biology and Biomedical Engineering, Issue 1, Volume 2, 2008. And it is known in the electronics industry that plastic is the base material on which many printed circuit boards are built. Printed circuit boards are traditionally flat; however, curved printed circuit boards have recently been produced. It is an objective of the present invention to incorporate printed circuit board technology into irregular and cranial skull plate implants to facilitate some embodiments of the present invention. Developments in curved printed circuit boards of a type that enable and are compatible with the present invention include those produced at the Center for Rapid Product Development, Creative Research Engineering Institute, Auckland University of Technology, New Zealand in 2009 under their Curved Layer Rapid Prototyping, Conductive 3D Printing, and Rapid Prototyping and Design Methodology Programs. It is therefore an objective of the present invention to enable implantation of specially designed curved and irregularly shaped printed circuit boards as a substitute for removed sections of the skull.
Additionally, it is an objective to use optical concealment and cloaking systems and methods in the present invention to conceal worn devices and implants mounted over, on top of, into, and under the skin. Systems and methods for cloaking integrated into and compatible with the present invention include those described in: U.S. Patent Application Publication 2002/0090131, by Alden, dated 11 Jul. 2002, entitled “Multi-perspective background simulation cloaking process and apparatus”; and U.S. Patent Application Publication 2002/0117605, by Alden et al, dated 29 Aug. 2002, entitled “Three-Dimensional Receiving and Displaying Process and Apparatus with Military Application”.
It is an object to input data and information derived by the present invention into a simulation system. Host simulations of a type consistent with the present invention include U.S. Pat. No. 5,495,576, by Ritchey, dated 27 Feb. 1996, entitled “Panoramic image based virtual reality/telepresence audio-visual system and method”. Other enabling simulation technology of a type compatible with and that may be integrated into the present invention includes U.S. Patent Application 2004/0032649 by Kondo et al, dated 19 Feb. 2004, entitled “Method and apparatus for taking an image, method and apparatus for processing an image, and program and storage medium”; U.S. Patent Application 2004/0247173, by Frank Nielson et al, dated 9 Dec. 2004, entitled “Non-flat image processing apparatus, image processing method, recording medium, and computer program”; U.S. Patent Application 2010/0030578, by Siddique et al, dated 4 Feb. 2010, entitled “System and Method for Collaborative Shopping, Business, and Entertainment”; U.S. Patent Application 2010/0045670, by O'Brien et al, dated 25 Feb. 2010, entitled “Systems and Methods for Rendering Three-Dimensional Objects”; U.S. Patent Application 2009/0237564, by Kikinis et al, dated 24 Sep. 2009, entitled “Interactive Immersive Virtual Reality and Simulation”; U.S. Patent Application 201000115579 by Jerry Schlabach, dated 21 Jan. 2010, entitled “Cognitive Amplification for Contextual Game-Theoretic Analysis of Courses of Action Addressing Physical Engagements”; U.S. Patent Application 2005/0083248 A1, by Frank Biocca, Jannick P. Roland et al., dated 21 Apr. 2005, entitled “Mobile Face Capture and Image Processing System and Method”; U.S. Patent Application 2004/0104935, by Williamson et al, entitled “Virtual reality immersion system”; and U.S. Patent Application 2005/0128286.
Host computer servers for storing and retrieving data and information derived by the present invention's data logging system, and other social network and search engine systems operated by a user via a wireless telecommunication system, of a type consistent with the present invention include those in U.S. Patent Application 2007/0182812, specifically FIGS. 47-51, those in the above-mentioned U.S. Patent Application 2007/0124292 A1, by Kirshenbaum et al, and those in U.S. Patent Application 2009/0196493 by Widrow et al. For instance, Google Earth™ and video chat like technologies and graphics may be adapted as a platform for geospatial referencing and video teleconferencing in which users of the present invention interact with one another. It is an objective of the present invention to describe a social telecommunication network that allows users to interactively share their thoughts and a view of themselves and their surrounding environments using the present invention. Telecommunications systems that are integrated with the internet of a type that may be incorporated into the present invention to accomplish video communications within the scope of the present invention are described in U.S. Patent Application Publication 2007/0182812 A1, dated 9 Aug. 2007, by Ritchey, entitled “Panoramic Image-based Virtual Reality/Telepresence Audio-Visual System and Method”, and are incorporated by reference.
Robotic and cybertronic systems of a type that may be populated with data derived by a data logging system of a type compatible with the present invention include those discussed at the: Proceedings of the 18th International Joint Conference on Artificial Intelligence, Aug. 9-15, 2003, Acapulco, Mexico, in the article “Non-Invasive Brain-Actuated Control of a Mobile Robot”, by José del R. Millán et al; the ongoing NASA Robonaut 2 Program; the scientific paper “A Brain-Actuated Wheelchair: Asynchronous and Non-Invasive Brain-Computer Interfaces for Continuous Control of Robots” by F. Galán et al of the IDIAP Research Institute, Martigny, Switzerland, dated 2007; U.S. Patent Application 2004/0104702 by Nakadai, Kazuhiro, et al., dated Jun. 3, 2004, entitled “Robot audiovisual system”; U.S. Patent Application 2004/0236467, by Sano, Shigeo, dated Nov. 25, 2004, entitled “Remote control device of bipedal mobile robot”; and U.S. Patent Application 2006/0241808 by Nakadai, Kazuhiro, et al, dated Oct. 26, 2006, entitled “Robotics Visual and Auditory System”. It is known by those skilled in the art that robotic devices may be remotely piloted or operate autonomously. It is also known that robots can be programmed to replicate characteristics of a being by translating information derived from data logged about a given being and converting that data into computer code based on those characteristics of the living being, consistent with some embodiments of the present invention.
Video logging and memory enhancement devices that form the present invention carried on and in a being can add additional weight. Exoskeletal systems compatible with and of a type that may be incorporated to support the additional weight of the system disclosed in the present invention include U.S. Patent Application Publication 2007/0123997, by Herr et al, dated 31 May 2007, entitled “Exoskeletons for running and walking”. Passive and active exoskeletal systems known to those skilled in the art may be incorporated into the present invention. An exoskeleton like that disclosed in U.S. Patent Application 2003/0223844, by Schile et al, dated 4 Dec. 2003, entitled “Exoskeleton for the Human Arm, in Particular for Space Applications”, which may be used for remote control of robots, may be integrated into the present invention. Astronaut suits, scuba gear, other life support garb and equipment, protective garments, backpacks, helmets, and so forth may be supported. Garb integrated with and supported by a user in the present invention may incorporate various displays, microphones, cameras, communication devices like cell phones, body armor, power sources, or computers and associated devices. In one embodiment of the data logging and memory enhancement system of the present invention, the helmet and backpack are supported by an exoskeletal system in order to reduce the weight on the being carrying the portion of the invention borne by that being. Alternatively, the helmet design can be supported by the weightlessness of outer space or by an underwater buoyancy compensation apparatus in some situations. Still alternatively, an opaque helmet design embodiment that captures imagery from camera systems and displays the imagery on the interior and exterior of the helmet is disclosed in the present invention.
Recently developed thin-form flat, curved, flexible, opaque, and see-through display devices known in the industry are integrated into the novel helmet design, enabling various embodiments of the present invention.
Direct sensing and stimulation of existing brain cells to drive the data logging and memory enhancement system is an objective of the present invention. Direct sensing and stimulation systems and methods of a type compatible with and incorporated into the present invention include: U.S. Patent Application Publication 2008/0097496, dated 24 Apr. 2008, by Chang et al, entitled “System and Method for Securing an Implantable Interface to a Mammal”; U.S. Patent Application Publication 2009/0105605, dated 23 Apr. 2009, by Marcio Abreu, entitled “Apparatus and Method for Measuring Biological Parameters”; U.S. Patent Application Publications 2009/0163982 and 2009/0306741, by Christopher deCharms, dated 25 Jun. 2009 and 10 Dec. 2009, entitled “Applications of the Stimulation of Neural Tissue Using Light”; U.S. Patent Application Publication, by Hogle et al, dated 10 Dec. 2009, entitled “Systems and Methods for Altering Brain and Body Functions and for Treating Conditions and Diseases of the Same”; U.S. Patent Application 2009/0062825, dated 5 Mar. 2009, by Scott Pool et al, entitled “Adjustable Implant and Method of Use”; U.S. Patent Application 2009/0108974 by Michael Deering (cited earlier); U.S. Patent Application 2002/0082665, by Markus Haller et al, dated 27 Jun. 2002, entitled “System and method of communicating between an implantable medical device and a remote computer system or health care professional”; U.S. Patent Application 2005/0084513, by Liping Tang, dated 21 Apr. 2005, entitled “Nanocoating for improving biocompatibility of medical implants”; U.S. Patent Application 2005/0209687, dated 22 Sep. 2005, by James Sitzmann et al, entitled “Artificial vessel scaffold and artificial organs therefrom”; U.S. Patent Application 2007/0045902, dated 1 Mar. 2007, entitled “Analyte Sensor”; atlases and articles on surgical implants; and neurosurgical atlases familiar to those in the medical profession. Biological material grown in vitro or ex vivo containing data and/or information derived from operating the present invention may be implanted in the same or a different recipient. Additionally, logged data derived according to the present invention may be incorporated into a genetically modified organism (GMO) or genetically engineered organism (GEO), an organism whose genetic material has been altered using genetic engineering techniques. These techniques, generally known as recombinant DNA technology, use DNA molecules from different sources, which are combined into one molecule to create a new set of genes. This DNA is then transferred into an organism, giving it modified or novel genes. Transgenic organisms, a subset of GMOs, are organisms which have inserted DNA that originated in a different species. In such an instance, additional and enhanced sensor systems, embedded communication devices, disease resistance, hostile environment survival capabilities, and superior brain and muscle strength may be engineered into the DNA such that humans with unique and enhanced capabilities develop from birth with data logged according to the present invention recorded by a user of previous generations. Still further, it is an objective of the present invention that a cloned being may be stimulated with historical data derived from the data logging system in an immersive manner such that the brain of the cloned being is stimulated similarly to that of the original being from which the data was logged.
A related objective to that described in the two preceding paragraphs is the loading and monitoring of implanted stem cells with data logged and data evoked by logged data according to the present invention. Adult neurogenesis (the creation of new brain cells in adult brains) was first discovered in 1965, but only recently (1998) has it been accepted as a general phenomenon that occurs in many species, including humans. Like stem cells, progenitor cells have a capacity to differentiate into a specific type of cell. In contrast to stem cells, however, they are already far more specific: they are pushed to differentiate into their “target” cell. The most important difference between stem cells and progenitor cells is that stem cells can replicate indefinitely, whereas progenitor cells can only divide a limited number of times. Systems and methods of a type applicable to the present invention include: those discussed in the International Review of Cytology, Volume 228, 2003, Pages 1-30, by Kiminobu Sugaya, University of Illinois at Chicago, entitled “Potential Use of Stem Cells in Neuro-replacement Therapies for Neurodegenerative Diseases”; in Stem Cell Research & Therapy 2010, 1:17, by Jackson et al, entitled “Homing of stem cells to sites of inflammatory brain injury after intracerebral and intravenous administration: a longitudinal imaging study”; U.S. Patent Application Publication 2008/0255163, by Kiminobu Sugaya et al, dated 16 Oct. 2008, entitled “Use of Modified Pyrimidine Compounds to Promote Stem Cell Migration and Proliferation”; PHYSorg.com, 31 Oct. 2007, entitled “Stem cells can improve memory after brain injury”; and in Molecules 2010, 15, 6743-6758, doi:10.3390/molecules15106743, Yong-Ping Wu et al, entitled “Stem Cells for the Treatment of Neurodegenerative Diseases”.
Nanobots may also be introduced into the brain of a recipient with data and/or information derived from operating the present invention. The data and/or information may be introduced in order to reintroduce lost memory to a prior user or to add a new memory to a new user. A recipient's implanted data and/or information may be derived from another user. Incorporating programmable nanobots and computer electronic interfaces with brain tissue are additional methods of sensing brain activity and of introducing information derived from queries in the present invention into the brain, which is a further objective of the present invention. It is therefore an objective of the present invention to record and incorporate information that has been logged or derived from data logged using the present invention such that it may be placed in storage, loaded into nanobots, and the nanobots targeted to replace neurons in the brain. Additionally, nanobots may be introduced into the brain to block neural connections to inhibit or allow information formulated by the video logging and memory enhancement system according to the present invention. Nanobot technologies of a type compatible with and integrated into the present invention include those described in the internet video entitled “Nanobot Replacing Neurons 3D Animation” by info@cg4tv.com dated Jun. 6, 2011. The host computer or a server may be used to transmit electronic signatures through electrodes or light fibers into the brain of the user. The stimulants may represent feedback responses to information queries conducted by the user of the present invention. Machine interfaces to brain tissue that are of a type compatible with and integrated into the present invention include: U.S. Patent Application Publication 2003/0032946 A1, dated 13 Feb. 2003, by Fisherman et al, entitled “Artificial Synapse Chip Interface for Electronic Prosthetic Retina”.
It is also an object of the present invention to disclose sensor methods and systems that interface audio, electro-optical, and other sensors directly with body tissues according to the Fisherman '946 application.
The data logged by individuals may be operated upon for programming nanobots that may be introduced into the brain to restore memory or introduce information into the neural network of the brain. Additionally, data logged by the present invention may be incorporated into bio-engineered human systems that carry memories forward by encoding those memories in human DNA and RNA. U.S. Patent Publication 2005/0053968, by Bharadwaj et al, dated 10 Mar. 2005, and techniques disclosed in the UCD, Dublin, 2012 Bioinformatics publication entitled “DNA Data Embedding Benchmark”, by David Haughton, which describes a system and method for embedding information in a DNA string while still preserving the biological meaning of the string, are incorporated in full as systems and methods of a type which are integrated with the present invention to encode and decode raw or correlated information derived from the present invention into human DNA. The logged information may include a text file, image file, or audio file in which large sequences are divided into multiple segments and placed in DNA introduced into the user human or other organism. It is therefore an object to provide an invention that logs a being's life experience such that at least some portions of the logged data may be codified and stored in DNA and RNA and passed to later generations, as stored information in a living organism or a cadaver, or transferred to another living being through reproduction.
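The encoding step described above can be illustrated with a minimal sketch. This is a simplified base-4 mapping (two bits per nucleotide) with fixed-length segmentation, offered only as an assumed illustration; it is not the biologically constrained embedding scheme of the cited “DNA Data Embedding Benchmark”, which additionally preserves the biological meaning of the host sequence.

```python
# Minimal sketch: encode a logged byte payload as DNA nucleotide strings,
# split the result into segments, and decode it back.
BASES = "ACGT"  # each nucleotide carries 2 bits

def encode_bytes_to_dna(payload: bytes) -> str:
    """Map every 2-bit pair of each byte (high bits first) to one nucleotide."""
    dna = []
    for byte in payload:
        for shift in (6, 4, 2, 0):
            dna.append(BASES[(byte >> shift) & 0b11])
    return "".join(dna)

def decode_dna_to_bytes(dna: str) -> bytes:
    """Inverse mapping: every 4 nucleotides reassemble into 1 byte."""
    out = bytearray()
    for i in range(0, len(dna), 4):
        byte = 0
        for base in dna[i:i + 4]:
            byte = (byte << 2) | BASES.index(base)
        out.append(byte)
    return bytes(out)

def segment(dna: str, length: int) -> list[str]:
    """Divide a long encoded sequence into fixed-length segments for insertion."""
    return [dna[i:i + length] for i in range(0, len(dna), length)]
```

For example, `encode_bytes_to_dna(b"A")` yields `"CAAC"` (0x41 read as the 2-bit pairs 01 00 00 01), and a large file would be encoded once and then divided with `segment` into pieces of a chosen length before embedding.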
Finally, in accordance with the present invention, historical data from brain activity sensing systems, like AMR recordings, along with other physiological and biometric data, is read into life support systems to assist in keeping a user on life support alive. Using historical biometric data and information from a given user derived by the present invention that is consistent with the user's homeostasis, when the user is a patient, can assist in making the settings of a life support system compatible with the specific patient. It is conceived that historical data logged and derived from the system 100 will be used in brain, head, body, or other transplants to achieve this objective. Alternatively, robotic, prosthetic, and cybertronic systems may also be adapted and connected to the life support system in order to receive and operate on the logged data derived from system 100. Brain and head transplant methods and techniques applicable to the present invention are disclosed by: Browne, Malcolm W. (May 5, 1998), “Essay; From Science Fiction to Science; The Whole Body Transplant”, in the New York Times; White, Robert J., “Head Transplants”, in Scientific American; and U.S. Pat. No. 4,666,425, entitled “Device for perfusing an animal head”.
The above-mentioned references and information, all of which are distinctly different from the current invention, are incorporated by reference as enabling the present invention.