1. Field of the Disclosure
The present disclosure relates to medical treatment evaluation systems and methods. More particularly, it concerns advanced evaluation systems and methods concerning prospective treatments for progressive brain disorders (both neurological and psychiatric) and the progressive effects of aging on the brain.
2. Description of the Related Art
Alzheimer's Disease
Alzheimer's disease (AD) is a rapidly growing public health problem. Clinically, AD is characterized by a gradual and progressive decline in memory and other cognitive functions, including language skills, the recognition of faces and objects, the performance of routine tasks, and executive functions. It is also frequently associated with other distressing and disabling behavioral problems. Histopathological features of AD include: neuritic and diffuse plaques (in which the major constituent is the β-amyloid protein), neurofibrillary tangles (in which the major constituent is the hyperphosphorylated form of the microtubule-associated protein τ), and the loss of neurons and synapses.
Aside from its debilitating effect on patients themselves, AD also places a terrible burden on their families. Tellingly, about half of all primary caregivers caring for AD patients become clinically depressed.
AD is a prevalent problem. According to one community survey, AD afflicts about 10% of those over the age of 65 and almost half of those over the age of 85. As the population grows older, the prevalence and cost of AD are expected to increase dramatically. By 2050, for example, the prevalence of AD in the United States is projected to quadruple from about 4 million cases to about 16 million cases, even without accounting for any increase in life expectancy. Unsurprisingly, the annual cost of caring for these patients is also estimated to quadruple, from about 190 million to about 750 million dollars, without any adjustment for inflation. Effective prevention therapies are urgently needed to avert what is becoming an overwhelming public health problem.
In recent years, scientific progress has raised the hope of identifying treatments that may halt the progression of AD or even prevent its onset altogether. This recent progress has included: the discovery of genetic mutations and at least one susceptibility gene that account for many cases of AD; the characterization of other AD risk factors and pathogenic molecular events that could be targeted by potential treatments; the development and use of improved research methods for identifying new therapeutic targets (e.g., in the fields of genomics and proteomics); the development of promising animal models (including transgenic mice containing one or more AD genes) that may help to clarify disease mechanisms and screen candidate treatments; suggestive evidence that several available interventions (e.g., estrogen-replacement therapy, anti-inflammatory medications, and statins) might be associated with a lower risk and later onset of AD; the discovery of medications that at least modestly attenuate AD symptoms (e.g., several acetylcholinesterase inhibitors and the N-methyl-D-aspartate [NMDA] receptor antagonist memantine); and the development of other potentially disease-modifying investigational treatments (e.g., anti-amyloid immunization and medication therapies that inhibit the production, aggregation, and neurotoxic sequelae of Aβ and/or promote its clearance; drugs that inhibit the hyperphosphorylation of tau; and drugs that protect neurons against oxidative, inflammatory, excitatory, and other potentially toxic events).
Problematic Subject Size, Study Durations, and Expense in Prevention Studies
Even if a prevention therapy is only modestly helpful, it could provide an extraordinary public health benefit. For instance, a therapy that delays the clinical onset of AD by only five years might reduce the number of cases by half. Unfortunately, however, determining whether or when cognitively normal persons treated with a candidate preclinical AD prevention therapy develop cognitive impairment and AD requires thousands of volunteers, many years, and great expense.
One way to reduce the sample sizes and time required to assess the efficacy of an AD prevention therapy is to conduct a clinical trial in patients with mild cognitive impairment (MCI), who may have a 10-15% annual rate of conversion to probable AD and commonly have histopathological features of AD at autopsy. Randomized, placebo-controlled clinical trials in patients with MCI could thus help establish the efficacy of putative “early AD” therapies. Using clinical outcome measures, the only practical way to establish the efficacy of “preclinical AD therapies” (i.e., interventions started in cognitively unimpaired persons and intended to postpone, reduce the risk of, or completely prevent the clinical onset of AD) has been to restrict the randomized, placebo-controlled study to subjects in advanced age groups, a strategy that still requires extremely large samples, a study duration of several years, and significant cost.
While these strategies are likely to play significant roles in the identification of effective prevention therapies, it remains possible that subjects will require treatment at a younger age, or at an even earlier stage of the underlying disease, for a candidate prevention therapy to exert its most beneficial effects. Those of skill in the art will readily recognize and appreciate the value of developing prevention (i.e., preclinical AD) therapies, and such development places an increasing emphasis on the earliest possible detection of the brain changes associated with a predisposition to this disorder. Accordingly, the scientific community needs a new paradigm that reduces the impractically large subject samples, time, and cost currently required to establish the efficacy of putative preclinical AD prevention therapies, encourages industry and government agencies to sponsor the required trials, and helps prevent this growing problem without losing a generation along the way. The scientific community further needs a viable approach for evaluating putative treatment modalities for brain disorders other than AD, such as mild cognitive impairment (MCI), the decline in cognitive ability due to other age-related atrophy, or other disorders.
Drawbacks of Prior Imaging Processes Used in Prevention Studies
Recently, researchers have begun using 18F-fluorodeoxyglucose (FDG) positron emission tomography (PET) and magnetic resonance imaging (MRI) to detect and track changes in brain function and structure that precede the onset of brain disorder symptoms in cognitively normal persons who are at risk for developing brain disorders such as Alzheimer's. Suggested risk factors for AD include older age, female gender, lower educational level, a history of head trauma, cardiovascular disease, higher cholesterol and homocysteine levels, lower serum folate levels, a reported family history of AD, trisomy 21 (Down's syndrome), at least 12 missense mutations of the amyloid precursor protein (APP) gene on chromosome 21, at least 92 missense mutations of the presenilin 1 (PS1) gene on chromosome 14, at least 8 missense mutations of the presenilin 2 (PS2) gene on chromosome 1, candidate susceptibility loci on chromosomes 10 and 12, and the APOE ε4 allele on chromosome 19.
Next to age, the APOE ε4 allele is the best-established risk factor for late-onset AD. Thus, it is especially relevant to human brain imaging studies. The APOE gene has three major alleles, ε2, ε3, and ε4. Compared to the ε3 allele (the most common variant), the ε4 allele is associated with a higher risk of AD and a younger age at dementia onset. The ε2 allele, on the other hand, may be associated with a lower risk of AD and an older age at dementia onset.
In one of the original case-control studies, for instance, individuals with no copies of the ε4 allele had a 20% risk of AD and a median age of 84 at dementia onset; those with one copy of the ε4 allele (found in about 24% of the population) had a 47% risk of AD and a median age of 76 at dementia onset; and those with two copies of the ε4 allele (the ε4/ε4 genotype, found in 2-3% of the population) had a 91% risk of AD by 80 years of age and a mean age of 68 at dementia onset. In another study, 100% of ε4 carriers with cognitive loss had neuritic plaques at autopsy. In a related study, 23% of AD cases were attributed to the absence of the ε2 allele, and another 65% were attributed to the presence of one or more copies of the ε4 allele.
Numerous clinical, neuropathological, and community-based case-control studies have confirmed the association between the ε4 allele and AD. Farrer et al. conducted a worldwide meta-analysis of data from 5930 patients with probable or autopsy-confirmed AD and 8607 controls from various ethnic and racial backgrounds. In comparison with persons with the ε3/ε3 genotype, the risk of AD was significantly increased in the ε2/ε4 (odds ratio [OR]=2.6), ε3/ε4 (OR=3.2), and ε4/ε4 (OR=14.9) genotypes, and significantly decreased in the ε2/ε3 (OR=0.6) and ε2/ε2 (OR=0.6) genotypes. Community-based, prospective studies promise to better characterize the absolute risk of AD in persons with each APOE genotype.
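For readers unfamiliar with the statistic, odds ratios such as those reported by Farrer et al. are computed from a standard 2×2 case-control table. The sketch below illustrates the computation; the counts used are hypothetical numbers chosen for illustration, not the actual Farrer et al. data:

```python
# Odds ratio from a 2x2 case-control table (hypothetical counts,
# not the actual Farrer et al. data):
#
#                  cases    controls
#   genotype+        a          b
#   genotype-        c          d

def odds_ratio(a, b, c, d):
    """Odds of carrying the genotype among cases, divided by the
    odds of carrying it among controls."""
    return (a / c) / (b / d)

# Hypothetical example: 300 of 1000 AD cases carry a given genotype,
# versus 120 of 1000 controls.
a, b = 300, 120   # genotype carriers among cases / controls
c, d = 700, 880   # non-carriers among cases / controls
print(round(odds_ratio(a, b, c, d), 2))  # -> 3.14
```

An OR above 1 indicates that the genotype is over-represented among cases; ratios such as the 14.9 reported for ε4/ε4 reflect a far stronger imbalance than this toy example.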
Prior imaging processes have focused on demonstrating that baseline reductions in structural or functional measurements from a single imaging session predict subsequent clinical decline in patients with dementia, and that baseline measurements in MCI predict a higher rate of conversion to AD. Those findings, however, have been unable to establish that the selected brain imaging process is an adequate surrogate marker for the prevention or delayed onset of a disease state. More specifically, the process must be able to show that the surrogate marker correlates with clinical severity in patients. It must also be able to demonstrate that when a change in the measurement is attributable to administration of a treatment regimen, that change predicts an improved clinical outcome. Prior single-baseline imaging techniques are insufficient in this regard.
According to Temple's commonly cited definition, a surrogate endpoint of a clinical trial is “a laboratory measurement or a physical sign used as a substitute for a clinically meaningful endpoint that measures directly how a patient feels, functions, or survives; changes induced by a therapy on a surrogate endpoint are expected to reflect changes in a clinically meaningful endpoint.” According to Fleming and DeMets, a valid surrogate endpoint is not just a correlate of the clinical outcome; rather, it should reliably and meaningfully predict the clinical outcome and it should fully capture the effects of the intervention on this outcome. Citing several examples, they note several ways in which an otherwise promising surrogate endpoint might fail to provide an adequate substitute for a clinical endpoint.
Although few if any surrogate endpoints have been rigorously validated, the 1997 United States “FDA Modernization Act” authorizes the approval of drugs for the treatment of serious and life-threatening illnesses, including AD, based on their effects on an unvalidated surrogate. In order to promote the study and expedite the approval of drugs for the treatment of these disorders, “fast track” approval may be granted if the drug has an effect on a surrogate marker that is “reasonably likely” to predict a clinical benefit; in such cases, the drug sponsor may be required to conduct appropriate post-marketing studies to verify the drug's clinical benefit and validate the surrogate endpoint.
Linking Functional and Structural Brain Images
Neuroimaging researchers frequently acquire a combination of functional and structural brain images. Examples of functional brain images include those obtained via positron emission tomography (PET) or functional magnetic resonance imaging (fMRI); an example of a structural brain image is one obtained via volumetric MRI. Structural MRI data is often used in PET/fMRI studies for anatomical localization of functional alterations, definition of regions of interest for co-registered PET/fMRI data extraction, and partial volume correction. Neuroimages have most commonly been analyzed using univariate methods. However, multivariate analyses have also been used to characterize inter-regional correlations in brain imaging studies. Multivariate algorithms have included principal component analysis (PCA), the PCA-based Scaled Subprofile Model (SSM), and the Partial Least Squares (PLS) method. These methods have typically been used to characterize regional networks of brain function (and, more recently, brain anatomy) and to test their relation to measures of behavior. Such multivariate methods, however, have not yet been used to identify patterns of regional covariance between functional and structural brain imaging datasets.
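To make the cross-modal covariance idea concrete, the core step of a PLS-style analysis can be sketched as follows: given subject-by-region matrices from two modalities, the singular value decomposition of their cross-covariance matrix yields one spatial pattern per modality whose subject-wise projections covary maximally. The sketch below uses small synthetic data and plain NumPy; it is a generic illustration of the technique, not an implementation of the system described in the present disclosure:

```python
import numpy as np

rng = np.random.default_rng(0)

n_subjects = 20
p_func, p_struct = 50, 40   # regions per modality (toy sizes)

# Synthetic data: rows = subjects, columns = regional measurements
# (e.g., FDG-PET uptake and MRI gray-matter volume), column-centered.
F = rng.standard_normal((n_subjects, p_func))
S = rng.standard_normal((n_subjects, p_struct))
F -= F.mean(axis=0)
S -= S.mean(axis=0)

# Cross-covariance between the two modalities (p_func x p_struct).
C = F.T @ S / (n_subjects - 1)

# The leading singular vectors give a paired functional/structural
# pattern whose latent scores have maximal covariance -- the core
# computation of Partial Least Squares.
U, sv, Vt = np.linalg.svd(C, full_matrices=False)
func_pattern, struct_pattern = U[:, 0], Vt[0]

# Each subject's expression of the paired patterns.
func_scores = F @ func_pattern
struct_scores = S @ struct_pattern
```

By construction, the sample covariance of the two score vectors equals the leading singular value, so the first pattern pair captures the strongest shared functional-structural signal.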
A major challenge to the multivariate analysis of regional covariance across multiple imaging modalities is the extremely high dimensionality of the data matrix created by combining high-resolution neuroimaging datasets. The scientific community needs an advanced technology that can perform, in a practical and useful fashion, a multivariate covariance analysis on such high-dimensional datasets.
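One standard numerical device for taming this dimensionality (a generic linear-algebra technique, not a method claimed here) exploits the fact that with n subjects and p voxels, p >> n, the p×p voxel covariance matrix has rank at most n-1, so its nonzero eigenstructure can be recovered from a thin SVD of the n×p data matrix rather than from the full covariance. A minimal NumPy sketch:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 15, 10_000            # few subjects, many voxels (p >> n)

X = rng.standard_normal((n, p))
X -= X.mean(axis=0)          # center each voxel across subjects

# Naive approach: eigendecompose the p x p covariance X.T @ X / (n-1),
# infeasible for high-resolution images. Instead take the thin SVD
# X = U * diag(s) * Vt: the rows of Vt are the covariance eigenvectors
# and s**2 / (n - 1) are its nonzero eigenvalues, at O(n^2 * p) cost.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
eigvals = s**2 / (n - 1)
components = Vt              # principal spatial patterns (at most n)

print(components.shape)      # (15, 10000): n patterns, not p
```

The same trick underlies PCA on high-resolution images generally: all computation happens in the small n-dimensional subject space, and only the final spatial patterns live in voxel space.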
As discussed above, another major drawback to previously existing methods and systems for evaluating prospective treatments for AD is that they fail to provide sufficient statistical power to evaluate those treatments in a meaningful or useful way.