Are cognitive screening tools good enough?

Many stroke survivors experience problems with cognitive functioning in the months and years following a stroke event. Impairments in cognitive abilities such as memory, attention, planning and processing speed can have a negative effect on post-stroke recovery, quality of life and engagement in activities. Consequently, it is imperative that possible cognitive dysfunction after stroke is adequately assessed so that suitable rehabilitation is put into place to improve recovery.

In clinical practice, due to time and resource constraints, there is a reliance on cognitive screening tools that are cheap, quick and easy to administer. The problem is that these tools (e.g., the Mini Mental State Examination (MMSE) and the Montreal Cognitive Assessment (MoCA)) were not designed to be used in a stroke population. Their suitability in terms of sensitivity, specificity, positive predictive value (PPV) and negative predictive value (NPV) needs to be examined to make sure that stroke survivors with cognitive impairments are not missed, and that those with intact cognitive functions do not undergo unnecessary subsequent cognitive assessments.
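For readers less familiar with these four metrics, they can all be computed from a simple 2×2 table comparing the screening result against the reference assessment. A minimal sketch in Python (the counts below are purely illustrative and not taken from the review):

```python
def screening_metrics(tp, fp, fn, tn):
    """Compute the four accuracy metrics for a screening tool
    judged against a reference (gold-standard) assessment."""
    sensitivity = tp / (tp + fn)  # impaired patients correctly flagged
    specificity = tn / (tn + fp)  # intact patients correctly cleared
    ppv = tp / (tp + fp)          # flagged patients who are truly impaired
    npv = tn / (tn + fn)          # cleared patients who are truly intact
    return sensitivity, specificity, ppv, npv

# Illustrative counts: 100 truly impaired (80 flagged, 20 missed),
# 100 truly intact (60 cleared, 40 false alarms)
sens, spec, ppv, npv = screening_metrics(tp=80, fp=40, fn=20, tn=60)
print(f"sensitivity={sens:.2f} specificity={spec:.2f} ppv={ppv:.2f} npv={npv:.2f}")
```

In this invented example the tool would just meet the review's sensitivity threshold (>80%) and specificity threshold (>60%), yet a third of flagged patients would not actually be impaired, which is why the review also looks at predictive values.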

This recent systematic review by Stolwyk and colleagues aimed to determine if current cognitive screening tools are sensitive and specific enough for use after stroke.

Methods

In line with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines, the reviewers searched Medline, PsycINFO, Scopus, PubMed and CINAHL. Studies that met the following criteria were included:

  • Male or female participants >18 years
  • Confirmed ischaemic or haemorrhagic stroke
  • Analysis of the sensitivity and specificity of a cognitive screening tool compared with a comprehensive neuropsychological assessment
  • Cognitive screening tools that were designed to measure cognitive impairment and took <30 minutes to administer
  • Comprehensive neuropsychological assessments that used multiple domain-specific measures with established reliability and validity

Two authors independently reviewed the results from the search and identified possible articles for inclusion.

Results

13,201 articles were identified through the initial search. This was reduced to 66 after screening titles and abstracts. A further 50 were then removed because they did not meet the inclusion criteria, leaving 16 articles for inclusion in the review.


The reviewers used the PRISMA method to identify 16 studies to include in their analysis.

  • The MMSE and the MoCA were the most commonly used screening tools.
  • Eleven studies used the MMSE, although only three reported adequate sensitivity (>80%) and specificity (>60%).
  • Of these three that reported adequate sensitivity and specificity, positive predictive values (PPVs) were >80% and negative predictive values (NPVs) ranged from 65% to 73%.
  • Five studies used the MoCA, and three of these reported adequate sensitivity and specificity.
  • Only two studies that used the MoCA reported PPV and NPV >80%.
  • Four studies compared the MMSE and the MoCA and found that overall the MoCA had better sensitivity but poorer specificity than the MMSE.
  • The Repeatable Battery for the Assessment of Neuropsychological Status (RBANS), Cognistat and the Barrow Neurological Institute (BNI) screen were assessed and found to have acceptable sensitivity and specificity.
  • The Middlesex Elderly Assessment of Mental State, Addenbrooke's Cognitive Examination-Revised, Screening Instrument for Neuropsychological Impairments in Stroke and the Clock Drawing Test failed to achieve adequate levels of sensitivity and specificity.
  • Few studies examined whether other factors or patient variables affected sensitivity and specificity results.

Conclusions

There was some preliminary support for the use of the MoCA, BNI, Cognistat and RBANS as cognitive screening measures for stroke in a clinical environment, but not for the MMSE. However, methodological factors need to be taken into consideration. For example, of the studies that reported adequate sensitivity and specificity, not all of them reported PPV and NPV, or, if they did, the values were below the acceptable level of 80%.

There were other issues that also need to be taken into account: the sensitivity and specificity values obtained at different cut-off points for the MoCA; the exclusion of cognitive domains such as calculation, praxis and processing speed; the non-stratification of results by demographic and stroke-specific factors; and poor reporting of when the screening tools and reference tests were administered.

The Stroke Elf’s view

The review is timely, given that a recent survey carried out in association with the James Lind Alliance reported that understanding cognition and ways to improve cognitive impairment was number one on the list of research priorities. The review highlighted the importance of reliable and valid screening tools: when the nature and extent of cognitive problems are identified in the first instance, the most suitable treatment can be delivered to those who need it, while time and resources are not wasted implementing a therapy for those who do not.

However, there are a few limitations that should be noted. First, the keywords used may have produced an inadequate search of the literature. For example, only ‘stroke’ and ‘cerebrovasc*’ were used, whereas some articles use the broader term ‘acquired brain injury’, under which stroke falls. The inclusion of additional terms would have made the review more convincing. Second, the search produced a very high number of hits, indicating that it was not specific enough to retrieve only relevant articles. The article does not state whether Boolean operators were used; if they were, the search would have been more accurate and specific.

Third, the authors drew attention to the need to address not only sensitivity and specificity but also positive and negative predictive values (PPV and NPV). However, their search did not include the terms PPV or NPV, which again could have limited the findings. It is also important to highlight that, although it is useful to include PPV and NPV when evaluating the usefulness of screening tools, predictive values obtained in one study of stroke survivors will not apply to all stroke survivors, especially given the heterogeneous nature of the cognitive impairments a stroke can cause.

One final point is that the authors could have been more comprehensive by providing rater agreement statistics. They stipulated that two authors independently reviewed the articles and then decided which ones should be included; however, the extent to which they agreed could have been quantified, for example with Gwet's AC1 statistic, which would have been useful to include.
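For two raters making binary include/exclude decisions, Gwet's AC1 corrects raw agreement with a chance-agreement term based on the raters' mean 'include' rate. A small sketch with invented ratings (the two rating lists are hypothetical, not the reviewers' actual decisions):

```python
def gwet_ac1(ratings_a, ratings_b):
    """Gwet's AC1 chance-corrected agreement for two raters making
    binary decisions (1 = include, 0 = exclude) on the same items."""
    n = len(ratings_a)
    # Raw agreement: proportion of items both raters coded the same way
    pa = sum(x == y for x, y in zip(ratings_a, ratings_b)) / n
    # Mean proportion of 'include' decisions across the two raters
    pi = (sum(ratings_a) / n + sum(ratings_b) / n) / 2
    # Chance-agreement term for two categories
    pe = 2 * pi * (1 - pi)
    return (pa - pe) / (1 - pe)

# Hypothetical screening decisions by two reviewers over ten abstracts
rater1 = [1, 1, 0, 0, 1, 0, 0, 1, 0, 0]
rater2 = [1, 1, 0, 0, 1, 0, 1, 1, 0, 0]
print(f"AC1 = {gwet_ac1(rater1, rater2):.2f}")
```

Unlike Cohen's kappa, AC1 is less distorted when one category (here, 'exclude') dominates, which is the usual situation when screening thousands of search hits.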

Links

Altman, D. G. & Bland, J. M. (1994). Diagnostic tests 2: Predictive values. British Medical Journal, 309, 102. doi: 10.1136/bmj.309.6947.102

Gwet, K. (2002). Inter-rater reliability: dependency on trait prevalence and marginal homogeneity. Statistical Methods for Inter-Rater Reliability Assessment, 2, 1-9. http://www.agreestat.com/research_papers/inter_rater_reliability_dependency.pdf

Jaillard, A., Naegele, B., Trabucco-Miguel, S., LeBas, J. F. & Hommel, M. (2009). Hidden dysfunctioning in subacute stroke. Stroke, 40, 2473-2479. doi: 10.1161/STROKEAHA.108.541144

Stolwyk, R. J., O’Neill, M. H., McKay, A. J. D., & Wong, D. K. (2014). Are cognitive screening tools sensitive and specific enough for use after stroke? Stroke, 45, 3129-3134. doi: 10.1161/STROKEAHA.114.004232

Stroke in Scotland: A James Lind Alliance Priority Setting Partnership