Evidence and clinical effectiveness in rehabilitation

Gaining evidence for rehabilitation interventions can be messy

There are two questions uppermost in most clinicians' minds when they are approached to adopt new technology. The first is: "Where is the evidence for this?" Whilst evidence-based healthcare is a rational way to ensure that we don't invest in approaches that do not work, in practice this is easier said than done - especially when it comes to rehabilitation.

Carrying out cost-effective trials of rehabilitation interventions is genuinely difficult. This wouldn't be a problem if we knew that rehabilitation provision was always optimal and people were already receiving necessary, sufficient and intensive interventions. As it is, the burden of providing evidence for something new is high, and the effect is inertia - resistance to change from the status quo (which is probably not perfect by any means).

It is very difficult to establish the quality of rehabilitation research studies by "traditional" evidence-based methods. Many clinicians hold that large, randomised, double-blind trials are the "gold standard". That may be so, but this type of trial doesn't really work when we talk about rehabilitation.

Rehabilitation depends on complex, experience-based treatments. These interventions might involve training, behaviour change, and elements that cannot typically be hidden (blinded) from the patient or the therapist. One immediate consequence is that studies cannot remove potential bias due to the well-known "placebo effect". We know that even the attitude and behaviour of the therapist have an impact on the patient.

Anyway - how should we measure outcomes? Rehabilitation is best when it is "person-centred", so the goals and results associated with successful treatment will vary from person to person. Simplistic outcome measures may not provide a universal and objective indication of improvement. Individuals may have vastly different potential for improvement even though they carry the same clinical label (stroke, MS, TBI, etc.). So a highly meaningful intervention may appear meaningless if the wrong outcome measure is selected for that particular situation. As if that were not enough, rehabilitation interventions are often delivered by members of multiple disciplines, which complicates the application of robust measurements and quality standards.

Randomised controlled trials (RCTs), in which patients are randomly assigned to at least two comparison groups, are certainly best able to protect the validity of studies and ensure that experimental and control groups are indeed comparable. This strengthens the basis for statistical inference. As we say above, the problem is that constructing an RCT with an adequate sample size, appropriate techniques to account for variability in the diagnostic conditions, and a suitable combination of outcome measures is extremely difficult. Even when we can conceive such an approach, the chances are that it would be hard to fund, due to high competition for rehabilitation research funding and the individual nature of brain injuries.
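
To see why adequate sample size is such a hurdle, here is a minimal sketch in Python using the standard normal-approximation formula for a two-group comparison. The effect sizes are hypothetical; the point is that the patient-to-patient variability typical of rehabilitation shrinks the standardised effect size and inflates the numbers required.

```python
from math import ceil
from statistics import NormalDist

def sample_size_per_group(effect_size, alpha=0.05, power=0.8):
    """Approximate per-group n for a two-sample comparison, given a
    standardised effect size (Cohen's d = mean difference / pooled SD)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided significance
    z_beta = NormalDist().inv_cdf(power)           # desired statistical power
    return ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

# Hypothetical effects: heterogeneous outcomes inflate the pooled SD,
# which pushes the standardised effect size down and the required n up.
for d in (0.8, 0.5, 0.2):  # large, medium, small effects
    print(f"d = {d}: {sample_size_per_group(d)} patients per group")
```

Halving the standardised effect size roughly quadruples the required sample size, which is exactly the trap that heterogeneous rehabilitation populations set.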

There are also ethical considerations in using RCTs, particularly with severely affected patients, for whom withholding services in order to conduct an RCT is likely to be unethical.

Due to the many difficulties above, the majority of published studies describing patients with brain injury use single-case designs or are small case series, reflecting the individual nature of rehabilitation interventions. Single-case studies are usually ranked at the bottom of the traditional hierarchy of evidence, but they are not necessarily inferior. In fact, they may be ideal for exploring a new treatment or for understanding why some patients respond to a treatment of known effectiveness whereas others do not.
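
For a flavour of how single-case evidence can be quantified, here is a minimal sketch computing the Percentage of Non-overlapping Data (PND), one common single-case effect metric. The weekly scores are invented purely for illustration.

```python
# Hypothetical weekly outcome scores from a single-case AB design.
baseline = [12, 14, 13, 15, 13]      # phase A: before treatment
intervention = [16, 18, 17, 20, 19]  # phase B: during treatment

# PND: the share of intervention-phase points that exceed the single
# best baseline point. By convention, above ~90% is read as a strong
# effect and below ~50% as no reliable effect.
threshold = max(baseline)
pnd = 100 * sum(score > threshold for score in intervention) / len(intervention)
print(f"PND = {pnd:.0f}%")
```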

There are of course disadvantages associated with single-case designs, including the difficulty of drawing statistically valid cause-and-effect conclusions or extrapolating findings to a wider population.

Functional therapies tend to be safe and, due to their context-dependent nature, their effectiveness may be better examined using observational techniques which permit natural heterogeneity. Similarly, vocational rehabilitation interventions are by definition contextual, depending on the nature of the specific job, the employment sector, the country, and so on.

One approach used to try to aggregate the findings of many smaller studies is meta-analysis. This can be undertaken only if the study populations, interventions, outcomes and study designs are agreed to be sufficiently consistent to allow pooling of data, which has tended to limit the use of the technique. Nevertheless, some authors have pooled data across a range of study designs, making a range of assumptions about the quality of the data.
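
To make the pooling idea concrete, here is a minimal sketch of fixed-effect inverse-variance pooling in Python. The study effect sizes and standard errors are invented for illustration.

```python
from math import sqrt

# Hypothetical (effect size, standard error) pairs from three
# small rehabilitation studies measuring the same outcome.
studies = [(0.40, 0.25), (0.15, 0.30), (0.55, 0.20)]

# Fixed-effect inverse-variance pooling: each study is weighted
# by 1 / variance, so more precise studies count for more.
weights = [1 / se ** 2 for _, se in studies]
pooled = sum(w * es for (es, _), w in zip(studies, weights)) / sum(weights)
pooled_se = sqrt(1 / sum(weights))

print(f"pooled effect = {pooled:.2f} (SE {pooled_se:.2f})")
```

The fixed-effect model assumes every study estimates the same underlying effect; given the heterogeneity described above, a random-effects model is often the more defensible choice in rehabilitation.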

In short, there are significant challenges in obtaining robust evidence for many interventions in rehabilitation. But remember: absence of evidence is not evidence of absence. An intervention that lacks supporting evidence so far may still work.

What was the second question? "How much does it cost?"