Clinical Outcome Indicators Report May 2002 | Annex 7

**Indirect Standardisation: How It Works**

**Direct and indirect standardisation**

**Indirect standardisation: how it is carried out**

**The power of indirect standardisation**

The aim of the outcome indicators in this report is to identify variations in outcome which may act as pointers to variation in the quality of care. Other than quality of care, the main determinants of outcome at patient level tend to fall under the category of case mix. For example, other things being equal, older people are likely to have higher post-admission or post-operative mortality than younger people. Some surgical procedures have a higher average post-operative mortality than others. (See p22 of the 1999 CRAG Clinical Outcome Indicators Report for a fuller discussion of the possible determinants of outcome).

Thus differences in outcome between hospitals are likely to reflect to some extent differences in the case mix of the patients admitted. A hospital which admits older patients on average would be expected to have poorer outcomes. A hospital which carries out a higher proportion of 'high-risk' surgical procedures is likely, for that reason, to have a higher overall surgical mortality rate than a hospital which carries out a higher proportion of 'low-risk' procedures.

Where we have information at patient level on such elements of case mix, e.g. age, sex, deprivation category or type of procedure, it is possible to adjust for their effects using standardisation.

There are two main methods of standardisation: direct and indirect. Direct standardisation is often the preferred method, especially in epidemiological contexts. In this report it is used, for example, for the analysis of cancer incidence and mortality. However, in the context of case mix adjustment for the outcome indicators presented in Sections C, D and E of this report (readmission rates and post-admission and post-operative mortality) it has one overwhelming drawback. Direct standardisation is inadvisable if the number of cases in any of the cells of the cross-classification of the standardisation variables is small. Thus if one is standardising for age, sex and deprivation and there is a possibility of very low numbers in any combination of age, sex and deprivation categories, direct standardisation should be avoided. If any cell of the classification may contain no cases at all (a 'zero cell'), direct standardisation is entirely ruled out. Indirect standardisation, by contrast, is highly robust in the presence of small cell numbers.
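The zero-cell problem can be illustrated with a minimal Python sketch using entirely hypothetical stratum counts (the strata and numbers below are invented for illustration, not taken from this report's data):

```python
# Hypothetical illustration: direct vs indirect standardisation
# when a hospital has a 'zero cell' (no patients in some stratum).

# Scottish reference data per stratum: (patients, deaths).
scotland = {"men 65-69": (1000, 8), "women 65-69": (1200, 6)}

# Hospital A treated no men aged 65-69 at all: a zero cell.
hospital = {"men 65-69": (0, 0), "women 65-69": (40, 1)}

# Direct standardisation needs the hospital's OWN rate in every
# stratum, which is undefined (0/0) for the empty stratum.
for stratum, (n, d) in hospital.items():
    rate = d / n if n else None  # None: rate cannot be estimated
    print(stratum, "hospital rate:", rate)

# Indirect standardisation only needs the Scottish rates and the
# hospital's patient counts, so the empty stratum simply
# contributes nothing and causes no difficulty.
expected = sum(
    hospital[s][0] * (d / n)  # hospital patients x Scottish rate
    for s, (n, d) in scotland.items()
)
observed = sum(d for _, d in hospital.values())
print("expected deaths:", expected)                     # 0.2
print("ratio observed/expected:", observed / expected)  # 5.0
```

The empty stratum would force direct standardisation to either drop or impute a rate, whereas the indirect calculation goes through unchanged.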

*It cannot be stressed strongly enough that, despite the possible implication of the very terms ('indirect' vs 'direct') that indirect standardisation is somehow less powerful than direct standardisation, in the current context of case mix adjustment of clinical outcome indicators for multiple factors, indirect standardisation is the more robust method.*

**Indirect standardisation: how it is carried out**

Let us take a hypothetical example of post-operative surgical mortality. Here we are standardising for differences in patient composition or case mix between Trusts in terms of age, sex, deprivation category and the type of procedure performed.

The steps involved are as follows.

1. *Calculate Scottish 'reference rates'.* Take all the patients included in the analysis (here all patients in Scotland undergoing the relevant procedures and included in calculating the outcome). For each sub-category of patients defined in terms of all the standardisation variables (here age, sex, deprivation and surgical procedure) calculate the proportion dying within 30 days. Thus, for example, we might calculate the Scottish 30 day mortality rate for women aged 65-69 in deprivation category 4 undergoing a mastectomy as 0.5%. This is repeated for all other combinations of the standardisation variables.

2. *Apply 'reference rates' to patients in each Trust.* Let us assume that in Trust A there were 2 patients who were women aged 65 to 69 in deprivation category 4 who underwent a mastectomy. If Trust A were to experience the same mortality rates for each category of patient as Scotland as a whole, then, on average, these two patients would be expected to contribute 0.5% of 2, or 0.01 deaths, to the total deaths in Trust A.

3. *Calculate the total expected number of deaths for each Trust.* If we carry out this same procedure across all sub-categories of patient and sum the expected deaths contributed by each sub-category, we obtain the total number of deaths for the Trust which would be expected if the Trust experienced the same mortality rate as Scotland as a whole for each sub-category of patients.

4. *Compare the expected number of deaths in each Trust with the actual number of deaths.* The ratio between the actual number of deaths in each Trust and the expected number of deaths as calculated above gives us a form of the Standardised Mortality Ratio. Suppose a Trust actually experienced 30 deaths within 30 days of the relevant basket of surgical procedures in the relevant period. However, suppose it was calculated that, if the Trust experienced the same mortality rate as Scotland for all sub-categories of patients, it would have been expected to experience 24 deaths. The ratio of actual to expected deaths would thus be 30/24, or 1.25: the Trust has experienced more deaths than would be expected. We apply the ratio of 1.25 to the Scottish 30 day mortality rate of (say) 0.4% to give a standardised mortality rate for the Trust of 0.5%. This standardised rate is the outcome indicator presented in this report.
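The four steps above can be sketched in Python. The mastectomy stratum reuses the 0.5% rate and the 2 patients from the worked example in the text; the second stratum and all counts are invented for illustration:

```python
# Sketch of steps 1-4 of indirect standardisation (illustrative data).

# Step 1: Scottish reference rates. Each stratum is one combination of
# the standardisation variables (age band, sex, deprivation, procedure);
# values are (patients, deaths within 30 days) for all of Scotland.
scotland = {
    ("65-69", "F", "dep4", "mastectomy"): (2000, 10),  # rate 0.5%
    ("70-74", "M", "dep2", "colectomy"):  (1500, 30),  # rate 2.0%
}
reference_rate = {s: d / n for s, (n, d) in scotland.items()}

# Trust A's case mix: patients per stratum, plus the deaths it
# actually observed across all strata.
trust_a_patients = {
    ("65-69", "F", "dep4", "mastectomy"): 2,
    ("70-74", "M", "dep2", "colectomy"):  50,
}
trust_a_observed_deaths = 2

# Steps 2-3: apply the reference rate to Trust A's patients in each
# stratum and sum, giving the total expected number of deaths.
expected = sum(
    n * reference_rate[s] for s, n in trust_a_patients.items()
)  # 2 * 0.005 + 50 * 0.02 = 1.01

# Step 4: standardised mortality ratio (observed / expected), then
# standardised rate = SMR x overall Scottish rate.
smr = trust_a_observed_deaths / expected
scotland_rate = (
    sum(d for _, d in scotland.values())
    / sum(n for n, _ in scotland.values())
)
standardised_rate = smr * scotland_rate
print(f"expected deaths: {expected:.2f}")
print(f"SMR: {smr:.2f}")
print(f"standardised rate: {standardised_rate:.2%}")
```

Note that the mastectomy stratum contributes 2 × 0.5% = 0.01 expected deaths, exactly as in step 2 of the text.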

**The power of indirect standardisation**

It is hoped that this account has made clear that, at least in terms of those aspects of case mix for which we have information, it is difficult to envisage a more robust technique for case mix adjustment. Not only does indirect standardisation adjust for the effects of individual variables, such as age, on outcome, it automatically adjusts for any interactions resulting from particular combinations of case mix variables: it literally adjusts for the effects of all combinations of the case mix variables. Some systems for case mix adjustment have used statistical techniques such as logistic regression as the basis for calculating expected rates and thus for case mix adjustment. A paper produced as part of the Clinical Indicators Support Team research programme shows that indirect standardisation produces exactly the same results as (is logically equivalent to) standardisation based on a fully saturated logistic regression model. This paper, "Adjusting outcomes for case mix: indirect standardisation and logistic regression", is available on the CIST website: http://www.indicators.scot.nhs.uk
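The equivalence with a saturated logistic regression can be sketched briefly (the notation below is ours, not necessarily that of the cited paper). Let stratum *s* contain *n<sub>s</sub>* patients with *d<sub>s</sub>* deaths nationally, and let Trust *T* have *n<sub>T,s</sub>* patients in stratum *s*. A saturated model fits one parameter per stratum, and its maximum-likelihood fitted probability in each stratum is simply the observed stratum rate, so the expected count it produces is exactly the indirectly standardised expectation of step 3:

```latex
\[
\operatorname{logit}(p_s) = \beta_s
\quad\Longrightarrow\quad
\hat p_s = \frac{d_s}{n_s},
\qquad
E_T = \sum_s n_{T,s}\,\hat p_s = \sum_s n_{T,s}\,\frac{d_s}{n_s}.
\]
```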