
Mental status scales to evaluate cognition

Author:
Mario F Mendez, MD, PhD
Section Editor:
Michael J Aminoff, MD, DSc
Deputy Editor:
Janet L Wilterdink, MD
Literature review current through: Dec 2022. | This topic last updated: Apr 16, 2019.

INTRODUCTION — The mental status examination is an important tool for assessing brain function, particularly cognition. While technological advances in neuroimaging have enabled direct observation of brain structure, blood flow, metabolic function, and deposition of abnormal proteins, there is still no way to directly observe cognitive function. Therefore, cognitive assessment remains critical for clinical diagnosis, patient care, and research.

The cognitive assessment portion of the mental status examination is designed to distinguish between normal and abnormal performance arising across a range of different conditions. It can be divided into three levels of rigor:

Mental status scales are short instruments (≤30 minutes) that assess memory and/or other cognitive domains, with structured administration and scoring and predetermined cutoff scores. These tests are designed to efficiently distinguish patients with impaired cognition. While they are particularly useful for identifying cognitively impaired individuals who might benefit from more extensive assessments, they provide less insight into which brain areas might be affected or potential underlying etiologies and incorporate few, if any, adjustments for the patient's age or level of education.

The extended mental status examination includes more detailed assessments and observations regarding a broader range of cognition and behavior and may take 30 to 60 minutes to perform. The overall pattern of performance helps identify which brain regions might be dysfunctional and provides important clues regarding underlying neuropathology.

Formal neuropsychological testing incorporates the most detailed assessments with normative values that can account for a wide range of demographic factors. Neuropsychological testing can last up to several hours, and can be divided across multiple visits. Referral to neuropsychology often occurs in clinical settings where screening scales and/or the extended mental status exam are not sufficiently conclusive to render a diagnosis.

This topic will specifically review the use of mental status scales, with a particular focus on their use in older adult patients. The extended mental status examination is presented separately. (See "The mental status examination in adults".)

Other aspects of the evaluation of patients with cognitive disorders are also discussed separately. (See "Evaluation of cognitive impairment and dementia".)

CLINICAL USE OF MENTAL STATUS SCALES

Screening for cognitive impairment and dementia — The mental status scales that are typically used for screening can be administered in routine clinical appointments and over shorter durations (≤30 minutes) than formal neuropsychological testing, but are more limited in both the breadth and depth of assessment [1].

Indications — Mental status scales are perhaps most frequently used to screen older adult patients for mild cognitive impairment (MCI) or dementia. Assessment for cognitive impairment is a required component of the Medicare Annual Wellness Visit for older adults [2]. Because routine medical history and physical examinations in this patient population may be insufficient for identifying significant cognitive impairment [3], screening for cognitive impairment with a mental status scale is suggested as an additional approach [2]. However, many expert groups explicitly do not recommend screening for dementia in otherwise asymptomatic patients, and some actively recommend against such assessments in that patient population. Recommendations for screening for cognitive impairment and dementia in older patients are discussed in detail separately. (See "Comprehensive geriatric assessment", section on 'Cognition'.)

Mental status scales are also used in other populations that are at higher risk for cognitive deficits, including patients with multiple sclerosis, traumatic brain injuries, and psychiatric disorders [1].

Interpreting screening results

"Normal" performance – Overall, screening mental status scale scores in the unimpaired range (particularly for patients with demographic profiles similar to the applicable validation cohorts) should be considered reassuring to patients and their families and can be used to support a decision not to pursue further testing.

However, performance in the unimpaired range on these scales does not rule out the possibility of more subtle (yet still meaningful) cognitive deficits that might be detected through more comprehensive neuropsychological testing [1]. This is particularly true for scales with shorter assessment times that are optimized for detecting dementia rather than MCI. In addition, ceiling effects may limit the sensitivity of these scales for detecting significant deficits for younger and/or more highly educated patients.

If there is continued concern for cognitive impairment despite normal screening results, patients should be followed regularly for evidence of worsening cognition. (See 'Longitudinal assessment' below.)

Formal neuropsychological testing, which provides more granular normative data across broader ranges of age and education, will have greater sensitivity, both cross-sectionally and longitudinally.

"Impaired" performance – Screening mental status scale scores in the impaired range are of concern, and in most cases, further workup and/or specialist referrals are indicated, with a particular focus on identifying potential reversible causes of cognitive impairment [4].

Nevertheless, depending on the scale and the cutoff score that is used, the possibility of false-positive results should be considered, especially in patients with less education, from different cultural backgrounds, or tested in a language other than their native language [1].

Importantly, screening mental status scale scores, irrespective of the length of the assessment, represent only a portion of the necessary workup for cognitive impairment or dementia, and should be interpreted in the context of other clinical information when rendering such diagnoses [4]. (See "Evaluation of cognitive impairment and dementia", section on 'Evaluation'.)

Longitudinal assessment — Longitudinal assessments with mental status scales may provide additional insight into a patient's clinical trajectory (eg, worsening, remaining stable, or improving) [5] and are recommended for this purpose by the American Academy of Neurology (AAN) clinical practice guidelines [4].

Serial performance of mental status scales could be used to support or reconsider specific diagnoses in a cognitively impaired patient, depending on whether changes in test scores observed over time conform to an expected trajectory. Progressive decline across serial assessments is characteristic of cognitive impairment associated with neurodegenerative conditions, while stable or fluctuating cognitive deficits may be more indicative of other etiologies. When normal individuals are serially assessed with such instruments, they often exhibit some initial improvement in their scores on their next assessment, with stable scores thereafter [6], though subtle longitudinal declines may be seen in older patients, particularly those over 80 years of age [7,8].

Two scales with moderate assessment times, the Mini-Mental State Examination (MMSE) [9] and the Montreal Cognitive Assessment (MoCA) [10], have been studied most frequently for this purpose, and average annual rates of decline in performance have been calculated for older adult populations with Alzheimer disease (AD) dementia [11] and MCI [12]. However, these scales were not specifically developed for longitudinal assessments, and the interpretation of serial scores in individual patients may not always be straightforward. In particular, intraindividual variance has been seen on repeated testing on both the MMSE and MoCA across shorter intervals in cognitively normal individuals, suggesting that only relatively large declines (≥3 points on the MMSE and ≥4 points in the MoCA) are likely to be meaningful [13]. (See 'Mini-Mental State Examination (MMSE)' below and 'Montreal Cognitive Assessment (MoCA)' below.)
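As a rough illustration, the reliable-change thresholds noted above (≥3 points on the MMSE, ≥4 points on the MoCA) could be applied as follows. This is a sketch for clarity only; the function name and interface are assumptions, not part of any published scoring manual:

```python
def likely_meaningful_decline(baseline: int, follow_up: int, scale: str) -> bool:
    """Flag declines large enough to exceed the short-interval test-retest
    variability reported for cognitively normal individuals:
    >=3 points on the MMSE, >=4 points on the MoCA.
    Smaller drops may reflect measurement noise rather than true decline."""
    thresholds = {"MMSE": 3, "MoCA": 4}
    return (baseline - follow_up) >= thresholds[scale]
```

For example, a drop from 28 to 25 meets the 3-point MMSE threshold, while the same 3-point drop on the MoCA would fall short of its 4-point threshold.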

Changes on the MMSE and MoCA are less sensitive in detecting longitudinal decline in cognition than formal neuropsychological testing [14]. Furthermore, such scales can exhibit ceiling and floor effects as well as nonlinear rates of change at different stages of disease progression [15]. Given these effects, the MoCA may be better for detecting decline in MCI, while the MMSE may be better for detecting decline in mild to moderate dementia, though neither might be expected to perform as well as more comprehensive scales such as the Consortium to Establish a Registry for Alzheimer's Disease Neuropsychological Battery (CERAD-NP).

Longitudinal testing with shorter mental status scales, which provide largely dichotomous results of impaired versus unimpaired performance, can be useful for patients who perform in the unimpaired range at baseline to detect significant subsequent cognitive decline. However, for patients who perform in the impaired range at baseline, subsequent assessments with such instruments have less utility, given their relative insensitivity for detecting further disease progression.

Longitudinal scores could also be used to assess responses to different therapeutic interventions. At present, however, longitudinal mental status assessments provide only limited guidance regarding interventional approaches, given the relatively modest benefits of available treatments and the wide interindividual variability in rates of cognitive decline. Should a broader range of more efficacious therapies become available, longitudinal assessments may become increasingly important in the optimization of treatment plans.

Limitations — Screening mental status scale scores in patients from demographic backgrounds that differ from the population from which they were derived should be interpreted with caution, as the sensitivities and specificities reported in initial validation studies may be less applicable.

Because mental status scales were developed for use across a broad swath of the general population, both the difficulty of the assessments and the cutoffs that distinguish normal versus abnormal performance are typically designed around initial study cohorts that range from 65 to 80 years of age, have approximately 12 to 14 years of formal education, and are tested in their native language (usually English). Many of these assessments exhibit significant effects of age (poorer performance with increased age), education (better performance with increased education), and ethnicity (better performance in non-Hispanic white patients). While some scales, such as the Short Portable Mental Status Questionnaire (SPMSQ), MoCA, and Saint Louis University Mental Status Examination (SLUMS), have adjusted cutoffs based on education, none make explicit adjustments for age or ethnicity.

In addition, since items that explicitly assess language comprise a substantial portion of the longer mental status scales and since language skills can modulate performance on other items, assessments administered in languages other than the patient's native language should be interpreted with caution. While an increasing number of mental status scales have undergone translation and cultural adaptation for use across broader patient populations, such conversions do not adhere to a uniform methodology and may have subtly different psychometric properties that may complicate comparisons [16].

Most mental status scales focus on testing memory and orientation [3], as these cognitive domains are often affected early in the course of AD, the most common cause of dementia in older adults [17]. Thus, the test characteristics (sensitivity, specificity, accuracy) are less well defined for other conditions.

Limitations more specific to individual scales are discussed in the sections below.

SPECIFIC MENTAL STATUS SCALES — The choice of the appropriate mental status scale to use will be dictated by the clinical situation [18]. Primary care practitioners may seek to most efficiently identify cognitive impairment for possible referral for further workup, and thus prefer shorter scales that can be rapidly administered and easily interpreted. In specialty clinics, more extensive testing using longer scales that assess cognition in greater breadth and depth may provide additional information to guide more precise diagnosis and treatment options.

In this section, a range of different mental status scales are described in further detail and organized by the time needed for their administration.

Scales with shorter assessment times (<5 minutes) — Mental status scales that can be rapidly administered are often preferred in busy primary care settings where cognitive assessment is just one of multiple goals for the patient visit. However, while these shorter scales may be more efficient for cognitive screening, they are correspondingly more limited in both the depth and breadth of deficits that can be detected.

Many of these tests focus primarily on memory function (rather than a broader range of cognition) and may be most effective at detecting the more severe deficits seen in dementia (as opposed to the more subtle deficits seen in mild cognitive impairment [MCI]). Abnormal results on these brief assessments should be confirmed and elaborated on through further workup.

Memory Impairment Screen (MIS) — The MIS focuses on memory, both recall and recognition [19]. Patients are presented with four written words (all nouns), which belong to different semantic categories (eg, "checkers" is a game, a "saucer" is a dish). They read the words aloud and, when given a category cue, identify the word that belongs in that category. After three minutes of distraction, patients are asked to recall the items; category cues are used for items not freely recalled. The total score is calculated as 2 x (number of items freely recalled) + (number of items recalled with cue) [20].

Across studies, a score of ≤4 identifies individuals with dementia with variable sensitivity (43 to 86 percent) but better specificity (93 to 97 percent) [21]. Given its specific emphasis on memory, the MIS is best suited for screening for Alzheimer disease (AD) and less sensitive for other forms of dementia, especially at their early stages [22]. Age, gender, and educational level do not impact test performance [23].
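The MIS scoring rule and cutoff lend themselves to a simple sketch (a hypothetical illustration only; the function names are assumptions):

```python
def mis_score(free_recall: int, cued_recall: int) -> int:
    """Memory Impairment Screen total:
    2 x (items freely recalled) + (items recalled with a category cue).

    Each of the 4 items is freely recalled, recalled with a cue, or missed,
    so the total ranges from 0 (none recalled) to 8 (all 4 freely recalled)."""
    if not (0 <= free_recall and 0 <= cued_recall and free_recall + cued_recall <= 4):
        raise ValueError("counts must describe at most 4 items")
    return 2 * free_recall + cued_recall

def mis_flags_impairment(score: int) -> bool:
    # Scores <=4 have been used across studies to flag possible dementia.
    return score <= 4
```

For example, a patient who freely recalls two items and retrieves one more with a cue scores 2 x 2 + 1 = 5, just above the ≤4 cutoff.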

This test is freely available through the Alzheimer's Association.

Six-Item Screener (SIS) — The SIS incorporates three memory questions and three orientation questions [24]. Patients are read three words and asked to repeat them; repetition of these items is not scored. They are then asked three temporal orientation questions (year, month, day of the week), which are scored. As with the MIS, three minutes of distraction then intervene before the patient is asked to recall the three words; recall of these items is scored.

Further diagnostic workup for possible cognitive impairment is indicated if the patient incorrectly answers ≥2 of the 6 combined orientation and memory questions.

While there are relatively fewer published data with this instrument, the SIS has a sensitivity of 86 to 89 percent and a specificity of 78 to 88 percent for detecting dementia in outpatient and community settings [24-26]. However, its sensitivity for detecting MCI is much poorer (34 percent) [26].

The SIS is freely available through a number of organizations, including the University of Washington.

Clock-Drawing Test (CDT) — The CDT, which does not explicitly draw upon memory function, fulfills many of the requirements for an effective screening tool: convenient administration and scoring; applicability to a wide range of patients, irrespective of language, education, or cultural background; and high inter-rater reliability, test-retest reliability, sensitivity, and specificity [27]. Patients are verbally asked to draw an analog clock, including all of the numbers, and set the hands to a specified time (eg, 10 minutes past 11:00). Performance on the CDT is supported by a combination of visuospatial abilities, executive function, motor execution, attention, language comprehension, and numerical knowledge.

A wide range of scoring systems has been proposed; all are somewhat subjective, but most use varying anchor points focused on number and hand placement and the overall spacing and organization of the drawing. However, no single scoring system is clearly superior for dementia screening, and a simple subjective qualitative interpretation of the clock drawing as "normal" or "abnormal" may suffice [28]. In primary care settings, the performance characteristics of the CDT for identifying dementia vary considerably, with sensitivities ranging from 67 to 98 percent and specificities ranging from 69 to 94 percent. Some of this variability may arise from the diverse scoring systems that have been used [21]. Like many other brief assessments, the CDT performs less well for identifying MCI (sensitivity 41 to 85 percent; specificity 44 to 85 percent) [29]. The multitude of cognitive domains that underlie performance on the CDT contributes to its sensitivity. However, this same characteristic limits its utility in determining the specific etiologies that may underlie abnormal performance, which can be seen with a wide range of medical conditions that affect the brain [30].

Mini-Cog — The Mini-Cog combines free recall of three unrelated words (presented verbally) and a version of the CDT, which is dichotomously scored as normal (all numbers present in correct sequence with the hands correctly displaying the specified time) or abnormal [20]. Mini-Cog performance is judged to be impaired if patients are unable to recall any of the three words or if they recall only one or two words and have an abnormal clock drawing.
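The dichotomous Mini-Cog decision rule described above can be sketched as follows (an illustrative sketch; the function name and parameters are assumptions):

```python
def mini_cog_impaired(words_recalled: int, clock_normal: bool) -> bool:
    """Mini-Cog decision rule: impaired if no words are recalled,
    or if only 1-2 words are recalled AND the clock drawing is abnormal.
    Recall of all 3 words is unimpaired regardless of the clock result."""
    if not 0 <= words_recalled <= 3:
        raise ValueError("words_recalled must be 0-3")
    if words_recalled == 0:
        return True
    if words_recalled in (1, 2) and not clock_normal:
        return True
    return False
```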

In primary care settings, the Mini-Cog has demonstrated reasonable sensitivity (76 to 100 percent) but relatively poorer specificity (54 to 85 percent) for identifying patients with dementia [21]; it performs less well for identifying patients with MCI (sensitivity 39 to 84 percent; specificity 73 to 88 percent) [29].

This test is freely available through the Alzheimer's Association.

Short Portable Mental Status Questionnaire (SPMSQ) and Abbreviated Mental Test Score (AMTS) — The SPMSQ [31] and AMTS [32] differ from many of the other tests described in this section in that they are not primarily driven by performance on delayed recall or recognition of newly presented word lists. They instead focus on assessing the patient's temporal (eg, time or date) and spatial (eg, hospital or clinic) orientation as well as semantic knowledge (eg, name of the president or year that World War I began). Both the SPMSQ and AMTS are composed of 10 such questions. Patients who make ≥3 errors on the SPMSQ [31] or ≥4 errors on the AMTS [32] are considered to have at least MCI. Scoring on the SPMSQ is further adjusted for level of education, with the threshold for significant cognitive impairment set at ≥4 errors for those with a grade school education or less and ≥2 errors for those with an education that extends beyond high school [31].
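The education-adjusted SPMSQ thresholds can be expressed as a small lookup (a sketch under the assumption that education falls into the three bands described above; the function and key names are illustrative):

```python
def spmsq_impaired(errors: int, education: str = "high school") -> bool:
    """SPMSQ error thresholds adjusted for education, as described above:
    >=3 errors is the unadjusted threshold, raised to >=4 for a grade school
    education or less and lowered to >=2 for education beyond high school."""
    thresholds = {
        "grade school or less": 4,
        "high school": 3,          # unadjusted threshold
        "beyond high school": 2,
    }
    return errors >= thresholds[education]
```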

The SPMSQ is reasonably sensitive (67 to 74 percent) and highly specific (91 to 100 percent) for identifying moderate to severe dementia, though it is less sensitive for mild dementia [33,34]. The AMTS has been reported to have higher sensitivity (78 to 94 percent) but lower specificity (86 to 89 percent) for detecting dementia in outpatient settings [35-37]. Both tests have also been used to assess confusion and delirium in inpatient settings [34,38-41].

The SPMSQ is available freely through a number of organizations, including the National Palliative Care Research Center. The AMTS is available freely through a number of organizations, including the British Geriatrics Society.

Informant questionnaires — Whereas the mental status scales described above directly assess a patient's current cognitive abilities, an alternative approach is to query an informant who knows the patient well and can provide a longer-term perspective on their cognitive and functional performance. Such strategies are particularly useful when the validity of direct cognitive testing with the patient may be less certain (eg, testing would not be conducted in the patient's native language, or the patient has a low level of formal education); however, they are subject to potential biases related to informant characteristics and/or their relationship with the patient [42]. They are also less useful for identifying MCI, due to the added emphasis on functional decline.

The Informant Questionnaire on Cognitive Decline in the Elderly (IQCODE) [43,44] and the Eight-Item Interview to Differentiate Aging and Dementia (AD8) [45] are brief informant questionnaires that distinguish older adult patients with normal cognition from those with dementia. Both questionnaires ask informants to rate observed changes in the patient's memory, orientation, judgment, and performance of instrumental activities of daily living.

The full version of the IQCODE includes 26 items [43], but a shorter version has subsequently been developed that includes only 16 items [44]. For both versions, an informant rates the patient's current performance on different aspects of cognition and function on a 5-point scale (1: "much improved"; 2: "a bit improved"; 3: "not much change"; 4: "a bit worse"; 5: "much worse") as "compared with 10 years ago" [43,44]. As informants may not be able to provide responses for all of the questions, the overall score is derived by calculating an average item score from the available responses. The optimal cutoff score for identifying dementia ranges from >3.3 to >4.0, depending on the composition of the study cohort and the IQCODE version used [42].
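The averaging approach for the IQCODE can be sketched as follows (illustrative only; `None` stands in for an unanswered item, and the function name is an assumption):

```python
def iqcode_score(responses):
    """Average item score across answered IQCODE items.
    Each answered item is rated 1 ("much improved") to 5 ("much worse");
    unanswered items (None) are simply excluded from the average."""
    answered = [r for r in responses if r is not None]
    if not answered:
        raise ValueError("no answered items")
    if any(not 1 <= r <= 5 for r in answered):
        raise ValueError("item ratings must be between 1 and 5")
    return sum(answered) / len(answered)
```

An informant who rates most items as "a bit worse" (4) would yield an average near 4, exceeding the lower end of the reported cutoff range.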

In primary care settings, the IQCODE exhibits sensitivities of 75 to 88 percent and specificities of 65 to 91 percent [21]. Since the full and short versions of the IQCODE demonstrate similar screening accuracy, the short form may be preferred due to its more rapid administration [46].

The short form of the IQCODE is freely available through the Alzheimer's Association.

The AD8 is composed of eight items. For each item, the informant is asked to provide a dichotomous answer regarding whether or not the patient has experienced a decline "in the last several years." If the informant endorses deficits on ≥2 items, the patient is considered likely to have significant cognitive impairment [45].

In outpatient and community settings, the AD8 has demonstrated good sensitivity (73 to 97 percent), but highly variable specificity (17 to 90 percent) [47-52].

The AD8 is freely available through the Alzheimer's Association.

Scales with moderate assessment times (5 to 15 minutes) — Mental status scales that take from 5 to 15 minutes to administer include the Mini-Mental State Examination (MMSE) [9], the Montreal Cognitive Assessment (MoCA) [10], and the Saint Louis University Mental Status Examination (SLUMS), which are amongst the most widely used and studied cognitive assessments. Their increased length allows for evaluation of a broader range of cognitive abilities and, in some instances, the detection of more subtle deficits such as those seen in MCI.

These longer scales may be better suited for initial testing in specialty clinic settings, where the relative patterns of performance across different cognitive domains may be helpful in clarifying the differential diagnosis and guiding further investigations [18]. However, even in those settings, scores from such scales alone should not be considered sufficient for rendering cognitive diagnoses [1]. (See "Evaluation of cognitive impairment and dementia".)

Mini-Mental State Examination (MMSE) — The MMSE is scored on a 30-point scale, with items that assess orientation (temporal and spatial; 10 points), memory (registration and recall; 6 points), attention/concentration (5 points), language (verbal and written; 8 points), and visuospatial function (1 point) [9].

While different cutoff points have been used across studies, scores ≤23 are most commonly regarded as abnormal and indicative of cognitive impairment [53]. However, age, education, and race/ethnicity each appear to have significant effects on overall MMSE scores [54,55], suggesting that these demographic variables should be taken into account when evaluating individual patient performance.

The MMSE was originally derived to identify cognitive impairment across a range of different etiologies seen amongst inpatient psychiatric patients [9]. However, it has subsequently been most commonly used to identify patients with dementia in outpatient settings [56]. When the MMSE has been used to distinguish patients with dementia from cognitively normal controls, it exhibits a pooled sensitivity of 81 percent and a pooled specificity of 89 percent [53]. When it has been used to identify patients with MCI, the MMSE exhibits a pooled sensitivity of 62.7 percent and a pooled specificity of 63.3 percent [56]. MMSE scores are driven by three primary factors: verbal, memory, and constructional abilities [30]. Therefore, this scale may be most appropriate for identifying patients with AD dementia of mild to moderate severity, where deficits in these domains are characteristically seen. The MMSE exhibits poorer sensitivity when used to identify cognitive impairment in broader cohorts of general neurologic and psychiatric patients, whose wider range of deficits may not be as specifically addressed by its items [55]. In particular, the MMSE does not directly assess executive function, which can be an early feature of cognitive impairment caused by the frontal and/or subcortical dysfunction that can be seen in vascular cognitive impairment or frontotemporal dementia (FTD).

A number of studies have examined longitudinal changes in MMSE scores with progressive dementia. The average rate of decline seen in AD patients is 3.3 points/year, though there is significant heterogeneity across studies and between patients [11]. More rapid rates of decline ranging from 4.7 to 6.7 points/year have been reported with more advanced FTD [57,58]. Since the individual items of the MMSE are not equivalent in difficulty, the scale is most sensitive to change in the middle of its score range (ie, between 10 and 20) and less sensitive to change at the higher and lower portions of its score range [15,55].

While the MMSE has been widely published, can still be downloaded from a number of sources on the internet, and was initially freely distributed, Psychological Assessment Resources (PAR) currently owns its copyright. Therefore, unlike the other scales described in this article, MMSE users must now register with PAR, obtain permission to use it, and pay a fee for each form and use, which has generated some controversy [59].

Montreal Cognitive Assessment (MoCA) — The MoCA is another widely used screening test of moderate length (comparable to the MMSE) that has been more specifically designed to detect the more subtle cognitive deficits that characterize MCI [10]. Like the MMSE, the MoCA is scored on a 30-point scale, with items that assess delayed word recall (5 points), visuospatial/executive function (7 points; includes clock-drawing), language (6 points), attention/concentration (6 points), and orientation (6 points).

In the original publication describing the MoCA, scores ≤25/30 indicated the presence of significant cognitive impairment [10]. Subsequent versions of the test have added a cutoff point of ≤24/30 for patients with ≤12 years of formal education, but in broader patient populations, a cutoff point of ≤22/30 may perform better and reduce the frequency of false-positive results [60]. The MoCA exhibits a pooled sensitivity of 91 percent and a pooled specificity of 81 percent for identifying dementia, and a pooled sensitivity of 89 percent and a pooled specificity of 75 percent for identifying MCI [53]. Studies examining head-to-head performance of patients on the MMSE and MoCA have shown that the MoCA is more difficult; MoCA scores are consistently lower than those obtained on the MMSE [61-64]. The MoCA appears to be more sensitive than the MMSE for detecting MCI, though perhaps slightly less specific [65]. The wider range of cognitive domains assessed by the MoCA may facilitate the identification of MCI [66], as well as cognitive impairment across a broad spectrum of conditions beyond AD, including cerebrovascular and cardiovascular conditions; Parkinson, Huntington, and Korsakoff disease; traumatic brain injury; and HIV [60].

The MoCA has been less comprehensively studied than the MMSE in longitudinal settings. The bulk of the data has focused on cognitively normal cohorts, in which MoCA scores remain largely stable over a one-year period, though small declines can be seen in patients over 70 years of age over longer intervals [6,8]. There are fewer data regarding the magnitude of annual decline in patients with baseline cognitive impairment. One small study showed a one-point decline on the MoCA in patients with prodromal to mild AD over a one-year interval [12]. A larger study that spanned three years reported no detectable decline in MCI and an average decline of 2.5 points in dementia over that interval [14]. Further work is needed to more clearly define expected rates of decline on the MoCA in patients with neurodegenerative disease and other conditions.

The MoCA is freely accessible for clinical use at the MoCA website. Forms and instructions are available for over 90 different versions of the MoCA, which allow it to be used in a multitude of patient populations and clinical settings. The availability of multiple versions of the MoCA in some languages may facilitate longitudinal assessments with this instrument with less contamination by test-retest effects, though alternate versions may not be fully equivalent in difficulty [67].

Saint Louis University Mental Status Examination (SLUMS) — Like the MMSE and MoCA, the SLUMS is scored on a 30-point scale, with questions that assess orientation, calculation, semantic verbal fluency, word and story recall, reverse digit span, clock-drawing, and visuospatial function. Because this instrument includes items that more directly measure executive function, it has demonstrated better discriminability than the MMSE for MCI [68-70].

The SLUMS form specifies different thresholds for identifying MCI (<25 points if <12 years of education, <27 points if ≥12 years of education) and dementia (<20 points if <12 years of education, <21 points if ≥12 years of education) [68]. However, across patient cohorts, a range of optimized cutoffs has emerged, yielding sensitivities of 67 to 98 percent and specificities of 61 to 87 percent for MCI and sensitivities of 84 to 100 percent and specificities of 87 to 100 percent for dementia [68-72]. Relative to the MMSE and MoCA, the SLUMS has far fewer published studies that examine its utility. More research may be required to further validate consensus cutoff points for normal cognition, MCI, and dementia.

The SLUMS has been made freely available for clinical use through the Saint Louis University School of Medicine, which offers over 20 separate versions intended for use in different countries and languages.

Scales with longer assessment times (>15 minutes) — While mental status scales with moderate assessment times can test multiple cognitive domains, each domain is typically only assessed with a few items, which limits both the range of performance that can be ascertained and the sensitivity for detecting deficits. With longer mental status scales, different cognitive domains are assessed with separate subtests, rather than individual items, which allows for more sensitive detection of more subtle cognitive deficits as well as more reliable determination of longitudinal decline. These longer scales can begin to approximate the more comprehensive tests that are used in formal neuropsychological testing. However, while neuropsychological testing incorporates normative data across a broad age range for comparison and can be used to differentiate a wide spectrum of different cognitive disorders, even the longer mental status scales are targeted towards a more limited range of ages (eg, older adults) and diagnoses (eg, neurodegenerative diseases such as AD).

Consortium to Establish a Registry for Alzheimer's Disease Neuropsychological Battery (CERAD-NP) — The CERAD-NP was developed specifically to identify cognitive deficits characteristic of mild AD dementia and to measure their longitudinal progression. In its original configuration, it included the entirety of the MMSE along with separate subtests for semantic verbal fluency, naming, word list learning, constructional praxis, delayed word list recall, and delayed word list recognition. Altogether, these assessments typically take 20 to 30 minutes to administer [73]. However, the CERAD-NP has subsequently been treated as comprising only the non-MMSE subtests, which have been examined both separately [74,75] and as a total score [76].

When considered separately, the different CERAD-NP subtests demonstrate varying utility for differentiating cognitively intact from cognitively impaired cohorts, which is perhaps unsurprising, given that the subtests are intended to assess different neuropsychological constructs. In the original validation cohort, the most efficient subtest for diagnostic distinctions was delayed word list recall, which correctly classified 86 percent of mild AD dementia patients and 96 percent of moderate to severe AD dementia patients as cognitively impaired [74]. Similar results have been observed in other cohorts, in which the delayed word list recall (sensitivity 84 to 91 percent; specificity 83 to 96 percent) and word list learning (sensitivity 88 to 90 percent; specificity 83 to 88 percent) subtests have shown the greatest utility for identifying patients with dementia [77,78]. Like other screening tests that have been discussed, the individual CERAD-NP subtests perform more poorly in identifying MCI, with only the word list learning subtest differentiating patients with normal cognition from those with MCI (sensitivity 73 percent; specificity 80 percent) [79].

The CERAD-NP performs better as a screening instrument when individual subtest scores are summed into a 100-point composite scale [76]. While the specific cutoffs for total CERAD-NP scores can vary substantially across different study cohorts (age, gender, and education each affect performance), excellent sensitivities (90 to 100 percent) and specificities (91 to 94 percent) have been reported for distinguishing normal cognition from dementia [76,78,80,81]. The total CERAD-NP score also exhibits reasonable sensitivity (79 to 92 percent) but poorer specificity (56 to 90 percent) when used to identify MCI [61,76,80-82] and has shown better diagnostic accuracy for this purpose than either the MMSE or MoCA when these assessments have been compared head-to-head [61,80,82].
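The construction of the 100-point composite can be sketched as a clamped sum of subtest scores. The subtest maxima below follow a commonly cited configuration of the total score [76] but are assumptions here and should be verified against the official scoring materials; the function name is illustrative.

```python
# Illustrative composition of the 100-point CERAD-NP total score [76].
# Subtest maxima are assumed from commonly cited descriptions and should
# be checked against the official CERAD scoring manual.
CERAD_SUBTEST_MAX = {
    "semantic_fluency": 24,       # animal naming, capped
    "naming": 15,                 # 15-item naming subtest
    "word_list_learning": 30,     # 3 trials x 10 words
    "constructional_praxis": 11,
    "delayed_recall": 10,
    "recognition": 10,
}

def cerad_total(scores: dict) -> int:
    """Sum subtest scores into the composite, clamping each subtest
    at its assumed maximum; missing subtests contribute zero."""
    return sum(min(scores.get(name, 0), max_pts)
               for name, max_pts in CERAD_SUBTEST_MAX.items())
```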

The greater depth and breadth of CERAD-NP assessments allow for distinctions between AD patients of different severity [80,83] and, in turn, for more precise assessment of disease progression. Patients with AD dementia or progressive MCI exhibit annualized change scores averaging from -7.2 to -8.8 points, and can clearly be distinguished from patients with nonprogressive MCI or normal cognition, whose performance typically remains stable or even improves over the same interval [84,85]. Unlike the MMSE [15,55], longitudinal rates of change on the CERAD-NP are relatively independent of disease severity [83,85].
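The annualized change scores described above are straightforward to compute from two visits. A minimal sketch, assuming only a total score at each visit and the interval between them (function name illustrative):

```python
def annualized_change(score_t0: float, score_t1: float,
                      days_between: float) -> float:
    """Annualized change in a total score between two visits
    (points per year); negative values indicate decline."""
    return (score_t1 - score_t0) * 365.25 / days_between
```

For example, a decline from 70 to 62 points over two years corresponds to -4 points per year, within the stable range for nonprogressive MCI but well short of the -7.2 to -8.8 points per year typical of AD dementia or progressive MCI [84,85].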

The CERAD-NP has been translated into over 20 languages, allowing for its administration in a range of settings. The clinical forms and stimuli, as well as instructions for their use, are available for a fee through Duke University.

Addenbrooke's Cognitive Examination (ACE) — In contrast with the CERAD-NP, which was specifically developed to assess AD, the ACE was designed to detect the presence of cognitive impairment across a range of dementia syndromes and to differentiate patterns of performance that are characteristic of different etiologies [86]. The original ACE (2000), like the CERAD-NP, incorporated the MMSE in its entirety, but included additional items that assessed executive functioning (eg, clock-drawing, phonemic verbal fluency), memory, and language in greater depth to produce a 100-point scale [87]. A revised version (ACE-R; 2006) was subsequently introduced, which exhibited improved psychometric properties and had more clearly defined subscales to allow for closer analyses of specific cognitive domains (attention/orientation, memory, verbal fluency, language, and visuospatial function) [88]. Another subsequent revision (ACE-III; 2013) was developed to address copyright issues that complicated the inclusion of the MMSE in the ACE-R [87] and to further improve its psychometric properties [89]. The ACE-III replaces proprietary elements of the MMSE with similar items that more effectively assess the same cognitive constructs but retains all of the non-MMSE items, allowing scores on the ACE-III to maintain an excellent correlation with scores from the ACE-R (r = 0.99) [89]. Administration times for each ACE version range from 15 to 20 minutes.

Across studies, the ACE showed excellent sensitivity (97 percent) but poorer specificity (77 percent) for identifying patients with dementia [90]. The ACE-R has better specificity than the ACE, particularly in lower-prevalence settings [90], and performs better than the MMSE for identifying both dementia (pooled sensitivity 92 percent; pooled specificity 89 percent) [53] and MCI (pooled sensitivity 82 percent; pooled specificity 78 percent) [91]. While the suggested cutoffs for identifying cognitive impairment using the ACE-R are either <88 (maximizing sensitivity) or <82 (maximizing specificity) [88], different optimal cutoffs have been identified across cohorts [90,91], suggesting that clinicians should consider using normative scores that most closely resemble their patient population, given that both age and education affect performance [87]. There are fewer data available to date regarding the performance of the ACE-III, but its performance in dementia (sensitivity 79 to 100 percent; specificity 83 to 100 percent) [89,92-97] and MCI (sensitivity 77 to 84 percent; specificity 6 to 75 percent) [92,93] appears to be similar to the ACE-R and superior to the MoCA or MMSE.
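The two suggested ACE-R cutoffs serve different screening goals, which can be made explicit as a tiered interpretation. This is an illustrative sketch of the published thresholds [88] only; as noted above, optimal cutoffs vary across cohorts, so local normative data should take precedence.

```python
def interpret_ace_r(score: int) -> str:
    """Interpret an ACE-R total (0-100) against the two suggested
    cutoffs [88]: <88 maximizes sensitivity, <82 maximizes specificity.
    Illustrative only; cohort-specific cutoffs may differ."""
    if score < 82:
        return "impairment likely (below specificity-optimized cutoff)"
    if score < 88:
        return "possible impairment (below sensitivity-optimized cutoff)"
    return "above both screening cutoffs"
```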

Amongst the strengths of the ACE, ACE-R, and ACE-III is their ability to distinguish AD from FTD (both behavioral and primary progressive aphasia [PPA] variants) [86,88,89]. Cross-sectionally, AD patients perform more poorly on items assessing orientation and delayed recall for a newly learned address, while FTD patients perform more poorly on items assessing verbal fluency (particularly phonemic) and language [86]. Ratios derived from scores on these items demonstrate some utility in distinguishing AD patients from non-AD patients and FTD patients from non-FTD patients on both the ACE [86] and ACE-R [88]. While only sparse longitudinal data are available with various ACE versions, significant declines can be detected over a 12-month interval with the ACE-R in patients with AD or PPA, with larger declines seen in PPA [98]. Versions of the ACE have also been used in other patient populations, including those with depression, parkinsonian syndromes, stroke and vascular dementia, and traumatic brain injury [87].

The ACE-III is freely available through the University of Sydney Brain and Mind Centre. Over 30 versions are currently available in over 25 languages, allowing for cross-cultural use in a variety of clinical settings.

Computerized cognitive assessments — The mental status scales described in the preceding sections are administered via paper and pencil. Potential benefits of using automated cognitive batteries include more rapid and cost-effective assessments, more accurate recording of responses and greater ease of comparisons with prior performance, more immediate availability of reports detailing and interpreting performance, further standardization of administration, measurement of novel endpoints (eg, reaction times), availability of more alternate forms, and the ability to adaptively tailor the difficulty of the assessments to the patient's abilities [99]. However, potential challenges to their adoption include the more limited available data on normative performance and psychometric properties (eg, even direct conversion of paper and pencil tests to computerized platforms does not guarantee equivalent results [100]) and the potential for increased anxiety and/or frustration associated with perceived barriers to the use of computer technology and novel interfaces, especially amongst older adult patients with cognitive impairment [99]. Clinicians remain concerned about the differing levels of patient experience with computer use and the potential for age-associated changes in vision, hearing, and motor function to confound the results [101]. While patients are increasingly accepting of computerized cognitive assessments, their level of acceptance is likely conditioned by their own personal familiarity with computer use [102].

The computerized cognitive testing landscape has been steadily evolving, as evidenced by review articles published on this topic over recent years [99,103,104]. Computerized assessments can be broadly categorized by both intended use (eg, screening versus in-depth evaluation) and administration (eg, by patient, technician, or examiner). Prior recommendations for computerized cognitive screening tools in primary care settings prioritized specific features (self-administration; relatively shorter administration times; availability of test-retest, normative, and validation data; and interpretative reports) and suggested that the Computer Assessment of Mild Cognitive Impairment (CAMCI), Computer-Administered Neuropsychological Screen for Mild Cognitive Impairment (CANS-MCI), and CNS Vital Signs might each be acceptable [103]. However, given the plethora of options, including many that have since become available, there is no current consensus regarding which assessments are best suited for particular uses or settings [99]. Regardless of which computerized assessments might be used, the results will continue to require clinician input and judgment for valid interpretation [101,102].

SUMMARY

Mental status scales are structured instruments that assess memory and/or other cognitive domains with standardized administration and scoring. While they can be used for a variety of purposes, they are most commonly used in older adult patients who present with cognitive complaints to screen for significant cognitive impairment or detect further cognitive decline. Screening mental status scale scores in patients from demographic backgrounds that differ from the population from which they were derived should be interpreted with caution. (See 'Clinical use of mental status scales' above.)

The diagnosis of dementia cannot be made solely on the basis of a low score on one of these tests; a detailed history, including the perspective of an informant, is fundamental. The evaluation of cognitive impairment and dementia is discussed separately. (See "Evaluation of cognitive impairment and dementia".)

The choice of the appropriate mental status scale to use will depend on the clinical context and setting:

Shorter cognitive assessments (<5 minutes), such as the Memory Impairment Screen (MIS), Six-Item Screener (SIS), Clock-Drawing Test (CDT), Mini-Cog, Short Portable Mental Status Questionnaire (SPMSQ), Abbreviated Mental Test Score (AMTS), Eight-Item Interview to Differentiate Aging and Dementia (AD8), and Informant Questionnaire on Cognitive Decline in the Elderly (IQCODE), are suitable for efficiently screening individuals for dementia, but have limited sensitivity for more subtle cognitive impairment and less utility for assessing disease progression. (See 'Scales with shorter assessment times (<5 minutes)' above.)

Cognitive assessments of moderate length (5 to 15 minutes), such as the Mini-Mental State Examination (MMSE), Montreal Cognitive Assessment (MoCA), and Saint Louis University Mental Status Examination (SLUMS), are more sensitive in screening for mild cognitive impairment (MCI) and test a broader spectrum of cognition, but provide a limited range of scores within individual cognitive domains. (See 'Scales with moderate assessment times (5 to 15 minutes)' above.)

Longer cognitive assessments (>15 minutes), such as the Consortium to Establish a Registry for Alzheimer's Disease Neuropsychological Battery (CERAD-NP) and the Addenbrooke's Cognitive Examination (ACE), allow for more granular determination of performance within individual cognitive domains and may be more sensitive to longitudinal progression of cognitive deficits, but only begin to approximate the sensitivity and specificity of formal neuropsychological testing. (See 'Scales with longer assessment times (>15 minutes)' above.)

ACKNOWLEDGMENT — The editorial staff at UpToDate would like to acknowledge Edmond Teng, MD, PhD, who contributed to an earlier version of this topic review.

  1. Roebuck-Spencer TM, Glen T, Puente AE, et al. Cognitive Screening Tests Versus Comprehensive Neuropsychological Test Batteries: A National Academy of Neuropsychology Education Paper†. Arch Clin Neuropsychol 2017; 32:491.
  2. Centers for Medicare and Medicaid Services: Annual Wellness Visit, ICN 905706, 2018. www.cms.gov/Outreach-and-Education/Medicare-Learning-Network-MLN/MLNProducts/downloads/awv_chart_icn905706.pdf (Accessed on January 11, 2019).
  3. Holsinger T, Deveau J, Boustani M, Williams JW Jr. Does this patient have dementia? JAMA 2007; 297:2391.
  4. Petersen RC, Lopez O, Armstrong MJ, et al. Practice guideline update summary: Mild cognitive impairment: Report of the Guideline Development, Dissemination, and Implementation Subcommittee of the American Academy of Neurology. Neurology 2018; 90:126.
  5. Galasko D, Corey-Bloom J, Thal LJ. Monitoring progression in Alzheimer's disease. J Am Geriatr Soc 1991; 39:932.
  6. Cooley SA, Heaps JM, Bolzenius JD, et al. Longitudinal Change in Performance on the Montreal Cognitive Assessment in Older Adults. Clin Neuropsychol 2015; 29:824.
  7. Jacqmin-Gadda H, Fabrigoule C, Commenges D, Dartigues JF. A 5-year longitudinal study of the Mini-Mental State Examination in normal aging. Am J Epidemiol 1997; 145:498.
  8. Malek-Ahmadi M, O'Connor K, Schofield S, et al. Trajectory and variability characterization of the Montreal cognitive assessment in older adults. Aging Clin Exp Res 2018; 30:993.
  9. Folstein MF, Folstein SE, McHugh PR. "Mini-mental state". A practical method for grading the cognitive state of patients for the clinician. J Psychiatr Res 1975; 12:189.
  10. Nasreddine ZS, Phillips NA, Bédirian V, et al. The Montreal Cognitive Assessment, MoCA: A brief screening tool for mild cognitive impairment. J Am Geriatr Soc 2005; 53:695.
  11. Han L, Cole M, Bellavance F, et al. Tracking cognitive decline in Alzheimer's disease using the mini-mental state examination: A meta-analysis. Int Psychogeriatr 2000; 12:231.
  12. Costa AS, Reich A, Fimm B, et al. Evidence of the sensitivity of the MoCA alternate forms in monitoring cognitive change in early Alzheimer's disease. Dement Geriatr Cogn Disord 2014; 37:95.
  13. Feeney J, Savva GM, O'Regan C, et al. Measurement Error, Reliability, and Minimum Detectable Change in the Mini-Mental State Examination, Montreal Cognitive Assessment, and Color Trails Test among Community Living Middle-Aged and Older Adults. J Alzheimers Dis 2016; 53:1107.
  14. Phua AKS, Hiu SKW, Goh WK, et al. Low Accuracy of Brief Cognitive Tests in Tracking Longitudinal Cognitive Decline in an Asian Elderly Cohort. J Alzheimers Dis 2018; 62:409.
  15. Philipps V, Amieva H, Andrieu S, et al. Normalized Mini-Mental State Examination for assessing cognitive change in population-based brain aging studies. Neuroepidemiology 2014; 43:15.
  16. Mirza N, Panagioti M, Waheed MW, Waheed W. Reporting of the translation and cultural adaptation procedures of the Addenbrooke's Cognitive Examination version III (ACE-III) and its predecessors: a systematic review. BMC Med Res Methodol 2017; 17:141.
  17. Scheltens P, Blennow K, Breteler MM, et al. Alzheimer's disease. Lancet 2016; 388:505.
  18. Wang Z, Dong B. Screening for Cognitive Impairment in Geriatrics. Clin Geriatr Med 2018; 34:515.
  19. Buschke H, Kuslansky G, Katz M, et al. Screening for dementia with the memory impairment screen. Neurology 1999; 52:231.
  20. Borson S, Scanlan J, Brush M, et al. The mini-cog: a cognitive 'vital signs' measure for dementia screening in multi-lingual elderly. Int J Geriatr Psychiatry 2000; 15:1021.
  21. Lin JS, O'Connor E, Rossom RC, et al. Screening for cognitive impairment in older adults: A systematic review for the U.S. Preventive Services Task Force. Ann Intern Med 2013; 159:601.
  22. Kuslansky G, Buschke H, Katz M, et al. Screening for Alzheimer's disease: the memory impairment screen versus the conventional three-word memory test. J Am Geriatr Soc 2002; 50:1086.
  23. Ismail Z, Rajji TK, Shulman KI. Brief cognitive screening instruments: an update. Int J Geriatr Psychiatry 2010; 25:111.
  24. Callahan CM, Unverzagt FW, Hui SL, et al. Six-item screener to identify cognitive impairment among potential subjects for clinical research. Med Care 2002; 40:771.
  25. Xue J, Chiu HFK, Liang J, et al. Validation of the Six-Item Screener to screen for cognitive impairment in primary care settings in China. Aging Ment Health 2018; 22:453.
  26. Chen MR, Guo QH, Cao XY, et al. A preliminary study of the Six-Item Screener in detecting cognitive impairment. Neurosci Bull 2010; 26:317.
  27. Hazan E, Frankenburg F, Brenkel M, Shulman K. The test of time: a history of clock drawing. Int J Geriatr Psychiatry 2018; 33:e22.
  28. Mainland BJ, Amodeo S, Shulman KI. Multiple clock drawing scoring systems: simpler is better. Int J Geriatr Psychiatry 2014; 29:127.
  29. Lin JS, O'Connor E, Rossom RC, et al. Screening for cognitive impairment in older adults: An evidence update for the U.S. Preventive Services Task Force. Report no. 14-05198-EF-1, Agency for Healthcare Research and Quality, Rockville, MD 2013.
  30. Lezak MD, Howieson DB, Loring DW. Neuropsychological Assessment, 4th ed, Oxford University Press, New York 2004.
  31. Pfeiffer E. A short portable mental status questionnaire for the assessment of organic brain deficit in elderly patients. J Am Geriatr Soc 1975; 23:433.
  32. Hodkinson HM. Evaluation of a mental test score for assessment of mental impairment in the elderly. Age Ageing 1972; 1:233.
  33. Roccaforte WH, Burke WJ, Bayer BL, Wengel SP. Reliability and validity of the Short Portable Mental Status Questionnaire administered by telephone. J Geriatr Psychiatry Neurol 1994; 7:33.
  34. Erkinjuntti T, Sulkava R, Wikström J, Autio L. Short Portable Mental Status Questionnaire as a screening test for dementia and delirium among the elderly. J Am Geriatr Soc 1987; 35:412.
  35. Rocca WA, Bonaiuto S, Lippi A, et al. Validation of the Hodkinson abbreviated mental test as a screening instrument for dementia in an Italian population. Neuroepidemiology 1992; 11:288.
  36. Sarasqueta C, Bergareche A, Arce A, et al. The validity of Hodkinson's Abbreviated Mental Test for dementia screening in Guipuzcoa, Spain. Eur J Neurol 2001; 8:435.
  37. Foroughan M, Wahlund LO, Jafari Z, et al. Validity and reliability of Abbreviated Mental Test Score (AMTS) among older Iranian. Psychogeriatrics 2017; 17:460.
  38. Eissa A, Andrew MJ, Baker RA. Postoperative confusion assessed with the Short Portable Mental Status Questionnaire. ANZ J Surg 2003; 73:697.
  39. Sands LP, Yaffe K, Covinsky K, et al. Cognitive screening predicts magnitude of functional recovery from admission to 3 months after discharge in hospitalized elders. J Gerontol A Biol Sci Med Sci 2003; 58:37.
  40. Pendlebury ST, Klaus SP, Mather M, et al. Routine cognitive screening in older patients admitted to acute medicine: abbreviated mental test score (AMTS) and subjective memory complaint versus Montreal Cognitive Assessment and IQCODE. Age Ageing 2015; 44:1000.
  41. Ní Chonchubhair A, Valacio R, Kelly J, O'Keefe S. Use of the abbreviated mental test to detect postoperative delirium in elderly people. Br J Anaesth 1995; 75:481.
  42. Jorm AF. The Informant Questionnaire on cognitive decline in the elderly (IQCODE): a review. Int Psychogeriatr 2004; 16:275.
  43. Jorm AF, Korten AE. Assessment of cognitive decline in the elderly by informant interview. Br J Psychiatry 1988; 152:209.
  44. Jorm AF. A short form of the Informant Questionnaire on Cognitive Decline in the Elderly (IQCODE): development and cross-validation. Psychol Med 1994; 24:145.
  45. Galvin JE, Roe CM, Powlishta KK, et al. The AD8: a brief informant interview to detect dementia. Neurology 2005; 65:559.
  46. Quinn TJ, Fearon P, Noel-Storr AH, et al. Informant Questionnaire on Cognitive Decline in the Elderly (IQCODE) for the diagnosis of dementia within community dwelling populations. Cochrane Database Syst Rev 2014; :CD010079.
  47. Galvin JE, Roe CM, Xiong C, Morris JC. Validity and reliability of the AD8 informant interview in dementia. Neurology 2006; 67:1942.
  48. Larner AJ. AD8 Informant Questionnaire for Cognitive Impairment: Pragmatic Diagnostic Test Accuracy Study. J Geriatr Psychiatry Neurol 2015; 28:198.
  49. Dong Y, Pang WS, Lim LB, et al. The informant AD8 is superior to participant AD8 in detecting cognitive impairment in a memory clinic setting. J Alzheimers Dis 2013; 35:159.
  50. Yang YH, Galvin JE, Morris JC, et al. Application of AD8 questionnaire to screen very mild dementia in Taiwanese. Am J Alzheimers Dis Other Demen 2011; 26:134.
  51. Ryu HJ, Kim HJ, Han SH. Validity and reliability of the Korean version of the AD8 informant interview (K-AD8) in dementia. Alzheimer Dis Assoc Disord 2009; 23:371.
  52. Malmstrom TK, Miller DK, Coats MA, et al. Informant-based dementia screening in a population-based sample of African Americans. Alzheimer Dis Assoc Disord 2009; 23:117.
  53. Tsoi KK, Chan JY, Hirai HW, et al. Cognitive Tests to Detect Dementia: A Systematic Review and Meta-analysis. JAMA Intern Med 2015; 175:1450.
  54. Crum RM, Anthony JC, Bassett SS, Folstein MF. Population-based norms for the Mini-Mental State Examination by age and educational level. JAMA 1993; 269:2386.
  55. Tombaugh TN, McIntyre NJ. The mini-mental state examination: a comprehensive review. J Am Geriatr Soc 1992; 40:922.
  56. Mitchell AJ. A meta-analysis of the accuracy of the mini-mental state examination in the detection of dementia and mild cognitive impairment. J Psychiatr Res 2009; 43:411.
  57. Chow TW, Hynan LS, Lipton AM. MMSE scores decline at a greater rate in frontotemporal degeneration than in AD. Dement Geriatr Cogn Disord 2006; 22:194.
  58. Rascovsky K, Salmon DP, Lipton AM, et al. Rate of progression differs in frontotemporal dementia and Alzheimer disease. Neurology 2005; 65:397.
  59. Newman JC, Feldman R. Copyright and open access at the bedside. N Engl J Med 2011; 365:2447.
  60. Carson N, Leach L, Murphy KJ. A re-examination of Montreal Cognitive Assessment (MoCA) cutoff scores. Int J Geriatr Psychiatry 2018; 33:379.
  61. Roalf DR, Moberg PJ, Xie SX, et al. Comparative accuracies of two common screening instruments for classification of Alzheimer's disease, mild cognitive impairment, and healthy aging. Alzheimers Dement 2013; 9:529.
  62. Trzepacz PT, Hochstetler H, Wang S, et al. Relationship between the Montreal Cognitive Assessment and Mini-mental State Examination for assessment of mild cognitive impairment in older adults. BMC Geriatr 2015; 15:107.
  63. Saczynski JS, Inouye SK, Guess J, et al. The Montreal Cognitive Assessment: Creating a Crosswalk with the Mini-Mental State Examination. J Am Geriatr Soc 2015; 63:2370.
  64. Bergeron D, Flynn K, Verret L, et al. Multicenter Validation of an MMSE-MoCA Conversion Table. J Am Geriatr Soc 2017; 65:1067.
  65. Pinto TCC, Machado L, Bulgacov TM, et al. Is the Montreal Cognitive Assessment (MoCA) screening superior to the Mini-Mental State Examination (MMSE) in the detection of mild cognitive impairment (MCI) and Alzheimer's Disease (AD) in the elderly? Int Psychogeriatr 2019; 31:491.
  66. Albert MS, DeKosky ST, Dickson D, et al. The diagnosis of mild cognitive impairment due to Alzheimer's disease: Recommendations from the National Institute on Aging-Alzheimer's Association workgroups on diagnostic guidelines for Alzheimer's disease. Alzheimers Dement 2011; 7:270.
  67. Lebedeva E, Huang M, Koski L. Comparison of Alternate and Original Items on the Montreal Cognitive Assessment. Can Geriatr J 2016; 19:15.
  68. Tariq SH, Tumosa N, Chibnall JT, et al. Comparison of the Saint Louis University mental status examination and the mini-mental state examination for detecting dementia and mild neurocognitive disorder--a pilot study. Am J Geriatr Psychiatry 2006; 14:900.
  69. Kaya D, Isik AT, Usarel C, et al. The Saint Louis University Mental Status Examination Is Better than the Mini-Mental State Examination to Determine the Cognitive Impairment in Turkish Elderly People. J Am Med Dir Assoc 2016; 17:370.e11.
  70. Szcześniak D, Rymaszewska J. The usefulness of the SLUMS test for diagnosis of mild cognitive impairment and dementia. Psychiatr Pol 2016; 50:457.
  71. Cummings-Vaughn LA, Chavakula NN, Malmstrom TK, et al. Veterans Affairs Saint Louis University Mental Status examination compared with the Montreal Cognitive Assessment and the Short Test of Mental Status. J Am Geriatr Soc 2014; 62:1341.
  72. Stern S. Psychometric Properties of the Saint Louis University Mental Status Examination (SLUMS) for the Identification of Mild Cognitive Impairment (MCI) in a Veteran Sample, Dissertation, Georgia State University, 2014. https://scholarworks.gsu.edu/psych_diss/125 (Accessed on January 22, 2019).
  73. Morris JC, Heyman A, Mohs RC, et al. The Consortium to Establish a Registry for Alzheimer's Disease (CERAD). Part I. Clinical and neuropsychological assessment of Alzheimer's disease. Neurology 1989; 39:1159.
  74. Welsh K, Butters N, Hughes J, et al. Detection of abnormal memory decline in mild cases of Alzheimer's disease using CERAD neuropsychological measures. Arch Neurol 1991; 48:278.
  75. Welsh KA, Butters N, Hughes JP, et al. Detection and staging of dementia in Alzheimer's disease. Use of the neuropsychological measures developed for the Consortium to Establish a Registry for Alzheimer's Disease. Arch Neurol 1992; 49:448.
  76. Chandler MJ, Lacritz LH, Hynan LS, et al. A total score for the CERAD neuropsychological battery. Neurology 2005; 65:102.
  77. Sotaniemi M, Pulliainen V, Hokkanen L, et al. CERAD-neuropsychological battery in screening mild Alzheimer's disease. Acta Neurol Scand 2012; 125:16.
  78. Wolfsgruber S, Jessen F, Wiese B, et al. The CERAD neuropsychological assessment battery total score detects and predicts Alzheimer disease dementia with high diagnostic accuracy. Am J Geriatr Psychiatry 2014; 22:1017.
  79. Karrasch M, Sinervä E, Grönholm P, et al. CERAD test performances in amnestic mild cognitive impairment and Alzheimer's disease. Acta Neurol Scand 2005; 111:172.
  80. Seo EH, Lee DY, Lee JH, et al. Total scores of the CERAD neuropsychological assessment battery: validation for mild cognitive impairment and dementia patients with diverse etiologies. Am J Geriatr Psychiatry 2010; 18:801.
  81. Aguirre-Acevedo DC, Jaimes-Barragán F, Henao E, et al. Diagnostic accuracy of CERAD total score in a Colombian cohort with mild cognitive impairment and Alzheimer's disease affected by E280A mutation on presenilin-1 gene. Int Psychogeriatr 2016; 28:503.
  82. Paajanen T, Hänninen T, Tunnard C, et al. CERAD neuropsychological battery total score in multinational mild cognitive impairment and control populations: the AddNeuroMed study. J Alzheimers Dis 2010; 22:1089.
  83. Hallikainen I, Hänninen T, Fraunberg M, et al. Progression of Alzheimer's disease during a three-year follow-up using the CERAD-NB total score: Kuopio ALSOVA study. Int Psychogeriatr 2013; 25:1335.
  84. Paajanen T, Hänninen T, Tunnard C, et al. CERAD neuropsychological compound scores are accurate in detecting prodromal alzheimer's disease: a prospective AddNeuroMed study. J Alzheimers Dis 2014; 39:679.
  85. Rossetti HC, Munro Cullum C, Hynan LS, Lacritz LH. The CERAD Neuropsychologic Battery Total Score and the progression of Alzheimer disease. Alzheimer Dis Assoc Disord 2010; 24:138.
  86. Mathuranath PS, Nestor PJ, Berrios GE, et al. A brief cognitive test battery to differentiate Alzheimer's disease and frontotemporal dementia. Neurology 2000; 55:1613.
  87. Hodges JR, Larner AJ. Addenbrooke’s Cognitive Examinations: ACE, ACE-R, ACE-III, ACEapp, and M-ACE. In: Cognitive Screening Instruments: A Practical Approach, 2nd ed, Larner AJ (Ed), Springer International Publishing, Cham, Switzerland 2017. p.109.
  88. Mioshi E, Dawson K, Mitchell J, et al. The Addenbrooke's Cognitive Examination Revised (ACE-R): a brief cognitive test battery for dementia screening. Int J Geriatr Psychiatry 2006; 21:1078.
  89. Hsieh S, Schubert S, Hoon C, et al. Validation of the Addenbrooke's Cognitive Examination III in frontotemporal dementia and Alzheimer's disease. Dement Geriatr Cogn Disord 2013; 36:242.
  90. Larner AJ, Mitchell AJ. A meta-analysis of the accuracy of the Addenbrooke's Cognitive Examination (ACE) and the Addenbrooke's Cognitive Examination-Revised (ACE-R) in the detection of dementia. Int Psychogeriatr 2014; 26:555.
  91. Breton A, Casey D, Arnaoutoglou NA. Cognitive tests for the detection of mild cognitive impairment (MCI), the prodromal stage of dementia: Meta-analysis of diagnostic accuracy studies. Int J Geriatr Psychiatry 2019; 34:233.
  92. Peixoto B, Machado M, Rocha P, et al. Validation of the Portuguese version of Addenbrooke's Cognitive Examination III in mild cognitive impairment and dementia. Adv Clin Exp Med 2018; 27:781.
  93. Matias-Guiu JA, Cortés-Martínez A, Valles-Salgado M, et al. Addenbrooke's cognitive examination III: diagnostic utility for mild cognitive impairment and dementia and correlation with standardized neuropsychological tests. Int Psychogeriatr 2017; 29:105.
  94. Wang BR, Ou Z, Gu XH, et al. Validation of the Chinese version of Addenbrooke's Cognitive Examination III for diagnosing dementia. Int J Geriatr Psychiatry 2017; 32:e173.
  95. Cheung G, Clugston A, Croucher M, et al. Performance of three cognitive screening tools in a sample of older New Zealanders. Int Psychogeriatr 2015; 27:981.
  96. Jubb MT, Evans JJ. An Investigation of the Utility of the Addenbrooke's Cognitive Examination III in the Early Detection of Dementia in Memory Clinic Patients Aged over 75 Years. Dement Geriatr Cogn Disord 2015; 40:222.
  97. Elamin M, Holloway G, Bak TH, Pal S. The Utility of the Addenbrooke's Cognitive Examination Version Three in Early-Onset Dementia. Dement Geriatr Cogn Disord 2016; 41:9.
  98. Hsieh S, Hodges JR, Leyton CE, Mioshi E. Longitudinal changes in primary progressive aphasias: differences in cognitive and dementia staging measures. Dement Geriatr Cogn Disord 2012; 34:135.
  99. Zygouris S, Tsolaki M. Computerized cognitive testing for older adults: a review. Am J Alzheimers Dis Other Demen 2015; 30:13.
  100. Ruggeri K, Maguire Á, Andrews JL, et al. Are We There Yet? Exploring the Impact of Translating Cognitive Tests for Dementia Using Mobile Technology in an Aging Population. Front Aging Neurosci 2016; 8:21.
  101. Millett G, Naglie G, Upshur R, et al. Computerized Cognitive Testing in Primary Care: A Qualitative Study. Alzheimer Dis Assoc Disord 2018; 32:114.
  102. Robillard JM, Lai JA, Wu JM, et al. Patient perspectives of the experience of a computerized cognitive assessment in a clinical setting. Alzheimers Dement (N Y) 2018; 4:297.
  103. Tierney MC, Lermer MA. Computerized cognitive assessment in primary care to identify patients with suspected cognitive impairment. J Alzheimers Dis 2010; 20:823.
  104. Wild K, Howieson D, Webbe F, et al. Status of computerized cognitive testing in aging: a systematic review. Alzheimers Dement 2008; 4:428.
Topic 14058 Version 5.0
