Posts Tagged 'NIH'

NIH, here’s some of what we need from you

In answer to the recent NIH Request For Information (RFI), several advocates and organizations submitted the attached response: Response to NIH June 2016

The response covers essentials such as:

  • show us the money (quickly ramp up committed budget)
  • give us access to research by bringing down the walls (address NIH Institute process and policy barriers)
  • nothing about us without us (provide for meaningful engagement of experts and patient advocates)
  • do it right (establish rigorous research standards)
  • the need for speed (speed delivery of treatment)

Talk is Cheap

For decades, stakeholders have advocated for funding commensurate with the severity of ME/CFS. The government’s response has been to hold an occasional meeting and commission a report from time to time, yet little if anything has changed. More words are spoken, with clinicians and researchers saying things that have been said before. But officials haven’t followed through with the necessary funding increases or with the sustained attention required to address this severely disabling disease, whose economic impact wildly exceeds the paltry dollars allotted to research.

For example, NIH hosted a State of the Knowledge Workshop in April 2011. The report from that meeting bears a great deal of resemblance to the NIH’s Pathways to Prevention (P2P) Workshop report published in June 2015. Four years have passed, but the situation remains the same.

Both reports acknowledged patients’ suffering.

2011: Individuals with ME/CFS, their families, and their caregivers have gone through untold suffering and difficulties from a disease that is poorly understood and for which there is relatively little to offer in the way of specific treatments. (p.5)

2015: Unfortunately, ME/CFS is an area where the research and health care community has frustrated its constituents, by failing to appropriately assess and treat the disease and by allowing patients to be stigmatized. (p.2)

Both reports recommended research on biomarkers and epidemiology.

2011: Continued research on biomarkers for ME/CFS, including biomarkers that are mediators of the illness, has the potential to aid in diagnosis, and treatment and prevention. (p.15)

2011: There is a lack of longitudinal, natural history, early detection, pediatric-versus-adult-onset, and animal model studies. . . . In addition, few studies look at comorbid conditions, biomarkers, or genetics.  (p.18)

2015: Research priorities should be shifted to include basic science and mechanistic work that will contribute to the development of tools and measures such as biomarker or therapeutics discovery. (p.8)

2015: Epidemiological studies of ME/CFS, including incidence and prevalence, who is at high risk, risk factors, geographical distribution, and the identification of potential health care disparities are critical.   (p.11)

Both reports recommended a network of collaborative centers.

2011: Creating coordinated and collaborative systems for sharing research was an important topic that included creating standard operating procedures for the field, within and across labs, as well as common data elements. (p.18)

2015: Create a network of collaborative centers working across institutions and disciplines, including clinical, biological, and social sciences. These centers will be charged with determining the biomarkers associated with diagnosis and prognosis, epidemiology (e.g., health care utilization), functional status and disability, patient-centered QOL outcomes, cost-effectiveness of treatment studies, the role of comorbidities in clinical and real-life settings, and providing a complete characterization of control populations, as well as those who recover from ME/CFS. (p.15)

Both reports recommended central repositories.

2011: To capture the extensive information from such studies, a centralized interactive database, using common data elements and accessible to everyone, is sorely needed to collect, aggregate, store, and analyze results.   (p.18)

2015: Biologic samples (e.g., serum and saliva, RNA, DNA, whole blood or peripheral blood mononuclear cells, and tissues) and de-identified survey data should be linked in a registry/repository to understand pathogenesis and prognosis, and facilitate biomarker discovery. (p.11)

Both reports highlighted the urgent need for consensus on case definition.

2011: Throughout the Workshop, participants identified opportunities for advancement in the current research paradigm for ME/CFS, beginning with a need to define and standardize the terminology and case definitions.   (p.6)

2015: Define disease parameters. Assemble a team of stakeholders (e.g., patients, clinicians, researchers, federal agencies) to reach consensus on the definition and parameters of ME/CFS.   (p.9)

2015: Thus for progress to occur, we recommend (1) that the Oxford definition be retired, (2) that the ME/CFS community agree on a single case definition (even if it is not perfect), and (3) that patients, clinicians, and researchers agree on a definition for meaningful recovery.   (p.16)

Both reports highlighted the need for collaboration and new scientists.

2011: The study of ME/CFS can benefit from an interdisciplinary collaborative approach using well-connected clinical and research networks. . . . Moreover, additional highly qualified investigators must be attracted to study ME/CFS.   (p.18)

2015: [T]here is a need for partnerships across institutions to advance the research and develop new scientists.   (p.14)

Both reports noted the need for educated clinicians.

2011: However, the biggest barrier to treating patients, according to Workshop participants, is lack of informed clinicians… (p.6)

2015: Thus, a properly trained workforce is critical…   (p.14)

***************

If I just listed the quotes without telling you which report they came from, I bet you would not be able to tell which were from 2011 and which were from 2015. That the same points are repeated without substantive differences illustrates how little has changed, other than the year the report was issued.

Perhaps time moves at a different pace for those in charge of allocation of funds and they don’t feel the urgency we feel. However, for more than thirty years patients have grown up, lived and died, all the while being subjected to disdain and neglect. Failed policies mean there are no treatments, and this horrid disease is so disabling that patients usually live isolated, impoverished lives.

We NEED better and we DESERVE far better than occasional meetings and federal lip service.

  • We (patients/caregivers, healthcare professionals, policy makers, HHS) need to be very clear about the disease being addressed.
  • We need a total overhaul of federal policy regarding this disease with stakeholders as active participants.
  • We need a sustained and meaningful increase in biomedical research funding and we need it now!
  • We need an awareness campaign like the one outlined here.
  • We need to be meaningfully involved at every step of the way in all of this.
  • We want and deserve to have our productive lives back! NOW!
  • We need to work together in a sustained manner to push for these changes.

And HHS ABSOLUTELY must do its part. The IOM report has been out for months and the P2P report is out now, yet there is no indication from HHS as to what it is going to do with these reports. So, HHS, tell us: what are you going to do, and when are you going to do it?

Talk is cheap. It’s relatively easy and cheap to hold a meeting and write a report. Investing the requisite resources in research and building the infrastructure needed to sustain progress is hard work. It’s expensive. But this is what is needed. Not more meetings. Not more spin.

Talk is cheap. It’s time to show ME the money.

Release date for final P2P report

After delays and process problems, the release date for the final P2P report has been announced. ODP (Office of Disease Prevention) says the final report will be in print and posted on Tuesday June 16, 2015.

“Important Notice: The ODP recently discovered that one set of public comments was not forwarded to the panel for consideration. Because the ODP is committed to ensuring that all public comments have been considered, we paused the publication process in order to give the panel time to consider the new information and determine if changes are needed before the release of the final report.

“Status Update (April 16, 2015): The new publication date for the panel’s final report will be Tuesday, June 16, 2015, in print in the Annals of Internal Medicine and online on the ODP website. Thank you for your patience to allow for consideration of all public comments.”

https://prevention.nih.gov/programs-events/pathways-to-prevention/workshops/me-cfs

The final NIH P2P report will be out on Tuesday 14 April 2015

ODP had originally indicated that the final report would be published a couple of weeks after the end of the public comment period (16 Jan 2015). The time frame now seems to be closer to 12 weeks.

(FWIW, there was a similar delay in publication of the final opioid P2P report: it was due out in October 2014 but was not published until January 2015.)

From the ODP (Office of Disease Prevention) website:

Draft Report

Download the Draft Report (PDF – 3.45 MB)

An unbiased, independent panel developed a draft report of the 2014 NIH Pathways to Prevention Workshop: Advancing the Research on Myalgic Encephalomyelitis/Chronic Fatigue Syndrome, which summarizes the workshop and identifies future research priorities. The public comment period is now closed. The final report will be available on the ODP website on Tuesday, April 14, 2015.

https://prevention.nih.gov/programs-events/pathways-to-prevention/workshops/me-cfs/workshop-resources#draftreport

Link to Miriam E. Tucker’s article “Chronic Fatigue Syndrome: Wrong Name, Real Illness”:

http://www.medscape.com/viewarticle/837577_2
“Chronic Fatigue Syndrome: Wrong Name, Real Illness
Miriam E. Tucker
January 08, 2015

Introduction

Sufferers of what has been called chronic fatigue syndrome (CFS) are challenging patients, presenting with complaints of postexertional malaise, persistent flulike symptoms, unrefreshing sleep, “brain fog,” and often a long list of other symptoms that don’t seem to fit any recognizable pattern. Some appear ill, but many don’t. And the routine laboratory tests typically come back negative. …”


Confusion at NIH

Quote from “P2P and Dr. Francis Collins”

“Ok, let’s pause for a minute. NIH is co-sponsoring the IOM study under their contract with the National Academy. The IOM contract had been controversial for months, and Dr. Maier was scheduled to speak at the IOM meeting in just three weeks. Yet the Deputy Director of NIH had no idea what is going on with it, Dr. Collins needed an explanation of the difference between IOM and P2P, and now Dr. Murray had to scramble to figure out if there was a third meeting he was not aware of.”

Would a trail of breadcrumbs help them?

The full post is well worth reading:

http://www.occupycfs.com/2014/11/10/p2p-and-dr-francis-collins/

P2P draft agenda is now available

Good grief!

Does P2P actually stand for “Purpose: to Prevent” meaningful research?

Given important portions of the draft agenda (and those presenting those portions), one would not be mistaken in thinking so….

For instance,

The second main topic of the Workshop is titled: “Given the unique challenges of ME/CFS, how can we foster innovative research to enhance the development of treatments for patients?”

The three speakers for this section are Dr. Dedra Buchwald, Dr. Dan Clauw, and Dr. Niloofar Afari. If anyone thought that psychosocial theories and functional somatic syndromes would not make an appearance at the Workshop, I’m afraid I must correct your false workshop belief.

Read the full post at OccupyCFS (here: http://www.occupycfs.com/2014/10/31/p2p-agenda-what-the-huh/)

link for draft agenda:

https://prevention.nih.gov/programs-events/pathways-to-prevention/upcoming-workshops/me-cfs/agenda

Evidence Review Comments Due Monday October 20th

You had a preview of the systematic evidence review comments that some advocates submitted, and now you can read the full version (it is in two parts and is lengthy) by accessing the links in this post or by following this link:

http://www.occupycfs.com/2014/10/18/comments-on-p2p-systematic-evidence-review/


Remember – comments are due Monday October 20th.

http://www.effectivehealthcare.ahrq.gov/research-available-for-comment/comment-draft-reports/?pageaction=displayDraftCommentForm&topicid=586&productID=1976

Comments on Evidence Review due Oct 20th

Evidence Review Comments Preview

This post comes via Mary Dimmock, Claudia Goodell, Denise Lopez-Majano, and myself. You are welcome to publish it on your site with attribution and a link back to this post. You are also welcome to use this (and other material gathered here) as a framework for your own comments on the draft evidence review due October 20th.

It’s been a challenging few weeks, digesting and analyzing the AHRQ Draft Systematic Evidence Review on Diagnosis and Treatment of ME/CFS.  We continue to be deeply concerned about the many flaws in the review, in terms of both the approach it took and how it applied the study protocol.

Our comments on the Review will reflect our significant concerns about how the Evidence Review was conducted; the conclusions it draws about diagnostics, subgroups, treatments, and harms; and the risk of undue harm that this report creates for patients with ME. We believe a final version should not be published until these scientific issues are resolved.

Most fundamentally, the Evidence Review is grounded in the flawed assumption that eight CFS and ME definitions all represent the same group of patients that are appropriately studied and treated as a single entity or group of closely related entities. Guided by that assumption, this Evidence Review draws conclusions on subgroups, diagnostics, treatments and harms for all CFS and ME patients based on studies done in any of these eight definitions. In doing so, the Evidence Review disregards its own concerns, as well as the substantial body of evidence that these definitions do not all represent the same disease and that the ME definitions are associated with distinguishing biological pathologies. It is unscientific, illogical and risky to lump disparate patients together without regard to substantive differences in their underlying conditions.

Compounding this flawed assumption are the a priori choices in the Review Protocol that focused on a more narrow set of questions than originally planned and that applied restrictive inclusion and exclusion criteria. As a result, evidence that would have refuted the flawed starting assumption or that was required to accurately answer the questions was never considered. Some examples of how these assumptions and protocol choices negatively impacted this Evidence Review include:

  • Evidence about the significant differences in patient populations and in the unreliability and inaccuracy of some of these definitions was ignored and/or dismissed. This includes: Dr. Leonard Jason’s work undermining the Reeves Empirical definition; a study that shows the instability of the Fukuda definition over time in the same patients; studies demonstrating that Fukuda and Reeves encompass different populations; and differences in inclusion and exclusion criteria, especially regarding PEM and psychological disorders.
  • Diagnostic methods were assessed without first establishing a valid reference standard. Since there is no gold reference standard, each definition was allowed to stand as its own reference standard without demonstrating it was a valid reference.
  • Critical biomarker and cardiopulmonary studies, some of which are in clinical use today, were ignored because they were judged to be intended to address etiology, regardless of the importance of the data. This included most of Dr. Snell’s and Dr. Keller’s work on two day CPET, Dr. Cook’s functional imaging studies, Dr. Gordon Broderick’s systems networking studies, Dr. Klimas’s and Dr. Fletcher’s work on NK cells and immune function, and all of the autonomic tests. None of it was considered.
  • Treatment outcomes associated with all symptoms except fatigue were disregarded, potentially resulting in a slanted view of treatment effectiveness and harm. This decision excluded Dr. Lerner’s antiviral work, as well as entire classes of pain medications, antidepressants, anti-inflammatories, immune modulators, sleep treatments and more. If the treatment study looked at changes in objective measures like cardiac function or viral titers, it was excluded. If the treatment study looked at outcomes for a symptom other than fatigue, it was excluded.
  • Treatment trials that were shorter than 12 weeks were excluded, even if the treatment duration was therapeutically appropriate. The big exclusion here was the rituximab trial; despite following patients for 12 months, it was excluded because administration of rituximab was not continuous for 12 weeks (even though rituximab is not approved for 12 weeks continuous administration in ANY disease). Many other medication trials were also excluded for not meeting the 12 week mark.
  • Counseling and CBT treatment trials were inappropriately pooled without regard for the vast differences in therapeutic intent across these trials. This meant that CBT treatments aimed at correcting false illness beliefs were lumped together with pacing and supportive counseling studies, and treated as equivalent.
  • Conclusions about treatment effects and harms failed to consider what is known about ME and its likely response to the therapies being recommended. This means that the PACE (an Oxford study) results for CBT and GET were not only accepted (despite the many flaws in those data), but were determined to be broadly applicable to people meeting any of the case definitions. Data on the abnormal physiological response to exercise in ME patients were excluded, and so the Review did not conclude that CBT and GET could be harmful to these patients (although it did allow it might be possible).
  • The Evidence Review states that its findings are applicable to all patients meeting any CFS or ME definition, regardless of the case definition used in a particular study.

The issues with this Evidence Review are substantial in number, magnitude and extent. At its root is the assumption that any case definition is as good as the rest, and that studies done on one patient population are applicable to every other patient population, despite the significant and objective differences among these patients. The failure to differentiate between patients with the symptom of subjective unexplained fatigue on the one hand, and objective immunological, neurological and metabolic dysfunction on the other, calls into question the entire Evidence Review and all conclusions made about diagnostic methods, the nature of this disease and its subgroups, the benefits and harms of treatment, and the future directions for research.

As the Evidence Review states, the final version of this report may be used in the development of clinical practice guidelines or as a basis for reimbursement and coverage policies. It will also be used in the P2P Workshop and in driving NIH’s research strategy. Given the likelihood of those uses and the Evidence Review’s claim of broad applicability to all CFS and ME patients, the flaws within this report create an undue risk of significant harm to patients with ME and will likely confound research for years to come. These issues must be addressed before this Evidence Review is issued in its final form.

They Know What They’re Doing (Not)

by Mary Dimmock (please re-post freely with attribution to Mary Dimmock)

Last week, Jennie Spotila and Erica Verillo posted summaries of just some of the issues with AHRQ’s Draft Systematic Evidence Review, conducted for P2P.

Jennie and Erica highlighted serious and sometimes insurmountable flaws with this Review, including:

  • The failure to be clear and specific about what disease was being studied.
  • The acceptance of 8 disparate ME or CFS definitions as equivalent in spite of dramatic differences in inclusion and exclusion criteria.
  • The bad science reflected in citing Oxford’s flaws and then using Oxford studies anyway.
  • The well-known problems with the PACE trial.
  • The flawed process that used non-experts on such a controversial and conflicted area.
  • Flawed search methods that focused on fatigue.
  • Outright errors in some of the basic information in the report and apparent inconsistencies in how inclusion criteria were applied.
  • Poorly designed and imprecise review questions.
  • Misinterpretation of cited literature.

In this post, I will describe several additional key problems with the AHRQ Evidence Review.

Keep in mind that comments must be submitted by October 20, 2014. Directions for doing so are at the end of this post.

We Don’t Need No Stinking Diagnostic Gold Standard

Best practices for diagnostic method reviews state that a diagnostic gold standard is required as the benchmark. But there is no agreed upon diagnostic gold standard for this disease, and the Review acknowledges this. So what did the Evidence Review do? The Review allowed any of 8 disparate CFS or ME definitions to be used as the gold standard and then evaluated diagnostic methods against and across the 8 definitions. But when a definition does not accurately reflect the disease being studied, that definition cannot be used as the standard. And when the 8 disparate definitions do not describe the same disease, you cannot draw conclusions about diagnostic methods across them.

What makes this worse is that the reviewers recognized the importance of PEM but failed to consider the implications of Fukuda’s and Oxford’s failure to require it. The reviewers also excluded, ignored or downplayed substantial evidence demonstrating that some of these definitions could not be applied consistently, as CDC’s Dr. Reeves demonstrated about Fukuda.

Beyond this, some diagnostic studies were excluded because they did not use the “right” statistics or because the reviewer judged the studies to be “etiological” studies, not diagnostic methods studies. Was NK-Cell function eliminated because it was an etiological study? Was Dr. Snell’s study on the discriminative value of CPET excluded because it used the wrong statistics? And all studies before 1988 were excluded. These inclusion/exclusion choices shaped what evidence was considered and what conclusions were drawn.

Erica pointed out that the Review misinterpreted some of the papers expressing harms associated with a diagnosis. The Review failed to acknowledge the relief and value of finally getting a diagnosis, particularly from a supportive doctor. The harm is not from receiving the diagnostic label, but rather from the subsequent reactions of most healthcare providers. At the same time, the Review did not consider other harms like Dr. Newton’s study of patients with other diseases being diagnosed with “CFS” or another study finding some MS patients were first misdiagnosed with CFS. The Review also failed to acknowledge the harm that patients face if they are given harmful treatments out of a belief that CFS is really a psychological or behavioral problem.

The Review is rife with problems: Failing to ask whether all definitions represent the same disease. Using any definition as the diagnostic gold standard against which to assess any diagnostic method. Excluding some of the most important ME studies. It is no surprise, then, that the Review concluded that no definition had proven superior and that there are no accepted diagnostic methods.

But remarkably, reviewers felt that there was sufficient evidence to state that those patients who meet CCC and ME-ICC criteria were not a separate group but rather a subgroup with more severe symptoms and functional limitations. By starting with the assumption that all 8 definitions encompass the same disease, this characterization of CCC and ICC patients was a foregone conclusion.

But Don’t Worry, These Treatment Trials Look Fine

You would think that at this point in the process, someone would stand up and ask about the scientific validity of comparing treatments across these definitions. After all, the Review acknowledged that Oxford can include patients with other causes of the symptom of chronic fatigue. But no, the Evidence Review continued on to compare treatments across definitions regardless of the patient population selected. Would we ever evaluate treatments for cancer patients by first throwing in studies with fatigued patients? The assessment of treatments was flawed from the start.

But the problems were then compounded by how the Review was conducted. The Review focused on subjective measures like general function, quality of life and fatigue, not objective measures like physical performance or activity levels. In addition, the Review explicitly decided to focus on changes in the symptom of fatigue, not PEM, pain or any other symptom. Quality issues with individual studies were either not considered or ignored. Counseling and CBT studies were all lumped into one treatment group, without consideration of the dramatic difference in therapeutic intent of the two. Some important studies like Rituxan were not considered because the treatment duration was considered too short, regardless of whether it was therapeutically appropriate.

And finally, the Review never questioned whether the disease theories underlying these treatments were applicable across all definitions. Is it really reasonable to expect that a disease that responds to Rituxan or Ampligen is going to also respond to therapies that reverse the patient’s “false illness beliefs” and deconditioning? Of course not.

If their own conclusions on the diagnostic methods and the problems with the Oxford definition were not enough to make them stop, the vast differences in disease theories and therapeutic mechanism of action should have made the reviewers step back and raise red flags.

At the Root of It All

This Review brings into sharp relief the widespread confusion on the nature of ME and the inappropriateness of having non-experts attempt to unravel a controversial and conflicting evidence base about which they know nothing.

But just as importantly, this Review speaks volumes about the paltry funding and institutional neglect of ME reflected in the fact that the study could find only 28 diagnostic studies and 9 medication studies to consider from the last 26 years. This Review speaks volumes about the institutional mishandling that fostered the proliferation of disparate and sometimes overly broad definitions, all branded with the same “CFS” label. The Review speaks volumes about the institutional bias that resulted in the biggest, most expensive and greatest number of treatment trials being those that studied behavioral and psychological pathology for a disease long proven to be the result of organic pathology.

This institutional neglect, mishandling and bias have brought us to where we are today. That the Evidence Review failed to recognize and acknowledge those issues is stunning.

Shout Out Your Protest!

This Evidence Review is due to be published in final format before the P2P workshop and it will affect our lives for years to come. Make your concerns known now.

  1. Submit public comments on the Evidence Review to the AHRQ website by October 20.
  2. Contact HHS and Congressional leaders with your concerns about the Evidence Review, the P2P Workshop and HHS’ overall handling of this disease. Erica Verillo’s recent post provides ideas and links for how to do this.

The following information provides additional background to prepare your comments:

However you choose to protest, make your concerns known!