Treating costly conditions such as Alzheimer’s disease may soon collapse healthcare systems around the world, yet companies hesitate to invest in the long, large clinical trials required to discover disease-modifying therapies. New incentives are necessary to turn this tide.
Although there is some disagreement about the right therapeutic target to combat Alzheimer’s disease—whether it’s β-amyloid, tau phosphorylation or something else—there is overwhelming agreement about how to address many of the other problems that plague this field.
If you were to conduct a poll of Alzheimer’s researchers, virtually all would agree that current clinical testing of potential new therapies starts too late, after the brain is severely damaged by the disease. They would also agree that early diagnosis and biomarkers predictive of clinical progression are crucial for combating the disease.
Incorporating these views into clinical-trial design would certainly result in better trials, but such trials would also be very large and very long. Longitudinal studies are beginning to identify people at risk of developing Alzheimer’s disease: subjects with subtle cognitive or biochemical changes who, years later, will go on to develop the pathology. But validating these markers in a clinical trial will require the trial to start as early as a decade before the onset of the disease, when the presumptive biomarkers start to appear and before brain damage has advanced too far. More important, because biomarkers only identify people at risk, a fraction of whom will ultimately never develop Alzheimer’s, the trial would have to enroll thousands of patients to compensate for these ‘false positives’ and still detect a statistically significant therapeutic signal. Such a trial would be prohibitively expensive.