Anemia is a common complication of chronic kidney disease (CKD) that develops early in the course of the disease and becomes increasingly severe as it progresses. Managing anemia in CKD patients requires an appropriate balance between stimulating the generation of erythroblasts (erythropoiesis) and maintaining sufficient iron levels for optimal hemoglobin (Hb) production. Because iron, like erythropoietin, is essential for Hb formation, assessing iron status is integral to both iron and anemia management in CKD patients. However, classical laboratory biomarkers of iron deficiency exhibit wide biological variability in CKD; in response, newer, less variable markers have been proposed.
The objective of this review was to summarize the literature on the use of newer versus classical laboratory biomarkers of iron status as part of management strategies for iron deficiency in stage 3–5 CKD patients (nondialysis and dialysis).
Data sources were all published articles identified through MEDLINE® and the Cochrane Central Register of Controlled Trials, from inception to May 2012.
Two reviewers independently selected studies on the basis of predetermined eligibility criteria. We considered studies of pediatric and adult nondialysis patients with stage 3, 4, or 5 CKD; patients with CKD undergoing dialysis (hemo- or peritoneal dialysis); and patients with a kidney transplant. We included studies that compared newer laboratory biomarkers of interest, such as reticulocyte hemoglobin content (CHr), percentage of hypochromic red blood cells (%HYPO), erythrocyte zinc protoporphyrin (ZPP), soluble transferrin receptor (sTfR), hepcidin, and superconducting quantum interference devices (SQUID), with classical laboratory biomarkers, such as bone marrow iron stores, serum iron, transferrin saturation (TSAT), iron-binding capacity, and serum ferritin.
One reviewer abstracted article information into predesigned extraction forms; a second reviewer checked information for accuracy. A standardized protocol was used to extract details on designs, diagnoses, interventions, outcomes, and methodological issues.
A total of 30 articles were accepted, including one Polish- and one Japanese-language publication. We did not identify any study that provided data directly addressing our overarching question (Key Question 1) regarding the impact of using newer laboratory biomarkers on patient-centered outcomes (mortality, morbidity, quality of life, and adverse effects). We identified 27 studies to answer Key Question 2, which addresses the performance of newer markers of iron status as a replacement for, or in addition to, classical markers.
The synthesis of data for Key Question 2 was complicated by the lack of generally accepted reference standard tests for determining iron deficiency in the context of CKD. Of the 27 included studies, 15 used classical markers of iron status to define “iron deficiency” as the reference standard in calculating the test accuracy (sensitivity and specificity) of newer markers of iron status. For the purpose of our review, this approach was analogous to assessing the concordance between classical and newer biomarkers of iron status; thus, these studies were included only for subquestion 2a (What reference standards are used for the diagnosis of iron deficiency in studies evaluating test performance?). The remaining 12 studies investigated the test performance of newer or classical markers of iron status, using a response to IV iron treatment as the reference standard for the diagnosis of iron deficiency. We therefore synthesized these 12 studies for Key Question 2. Of these 12 studies, most enrolled only adult CKD patients on hemodialysis (HD CKD patients), though a few examined adult peritoneal dialysis (PD) and nondialysis (ND) CKD patients. Only one study enrolled pediatric CKD patients. Although the reviewed studies evaluated many newer markers, such as CHr, %HYPO, reticulocyte hemoglobin equivalent (RetHe), sTfR, hepcidin, and ZPP, the majority assessed CHr or %HYPO among adult HD CKD patients.
Based on our analysis, we concluded that there is a low level of evidence that both CHr and %HYPO have similar or better overall test accuracy compared with classical markers (TSAT or ferritin) for predicting a response to IV iron treatment (the reference standard for iron deficiency). In addition, data from a few studies suggest that CHr (with cutoff values of <27 or <28 pg) and %HYPO (with cutoff values of >6% or >10%) have better sensitivities and specificities for predicting iron deficiency than classical markers (TSAT <20% or ferritin <100 ng/mL). There is also a low level of evidence that sTfR has a test performance similar to that of classical markers (TSAT or ferritin) in predicting a response to IV iron treatment. However, the strength of evidence was insufficient to reach a conclusion regarding the test performance of newer markers of iron status as an add-on to older markers, or regarding the performance of ZPP and hepcidin. It should be noted that, across studies, there is a high degree of heterogeneity in the test comparisons, the definitions of the reference standard (a response to IV iron treatment), the iron status of the study populations (assessed by TSAT or ferritin), and the background treatment. This heterogeneity may limit the comparability of findings across studies.
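To make the test-accuracy comparison concrete, the following sketch shows how sensitivity and specificity are computed for a cutoff-based marker (e.g., CHr <28 pg classifying a patient as iron deficient) against the reference standard of a response to IV iron treatment. The 2×2 counts below are hypothetical illustrations only, not data from the reviewed studies.

```python
# Hypothetical illustration: counts are invented, not taken from the reviewed studies.
# Reference standard: iron deficiency = a response to IV iron treatment.
# Index test: CHr below a cutoff (e.g., <28 pg) flags a patient as iron deficient.

def sensitivity_specificity(tp, fp, fn, tn):
    """Return (sensitivity, specificity) from 2x2 diagnostic-accuracy counts."""
    sensitivity = tp / (tp + fn)   # proportion of responders flagged by the test
    specificity = tn / (tn + fp)   # proportion of non-responders correctly ruled out
    return sensitivity, specificity

# Hypothetical 2x2 table: 40 responders and 60 non-responders to IV iron
tp, fn = 32, 8    # responders with CHr <28 pg / CHr >=28 pg
fp, tn = 12, 48   # non-responders with CHr <28 pg / CHr >=28 pg

sens, spec = sensitivity_specificity(tp, fp, fn, tn)
print(f"sensitivity={sens:.2f}, specificity={spec:.2f}")  # sensitivity=0.80, specificity=0.80
```

Comparing markers then amounts to recomputing these two proportions for each marker and cutoff (e.g., TSAT <20% or ferritin <100 ng/mL) against the same reference standard, which is why heterogeneity in how the reference standard is defined limits comparability across studies.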
For Key Question 3 (impact of newer markers, compared with older markers, on intermediate outcomes), we identified only two short-term RCTs (4 and 6 months), enrolling a total of 354 adult HD CKD patients. We concluded that there is a low level of evidence for a reduction in the number of iron status tests and resulting intravenous iron treatments (a post hoc intermediate outcome) administered to patients whose iron management was guided by CHr compared with those guided by TSAT or ferritin. Both RCTs reported that hematocrit (Hct) remained in the targeted ranges (an indicator of the adequacy of anemia management) throughout the study period in all randomized arms, though the Hct target was higher in the U.S. trial than in the Japanese trial.
For Key Question 4 (factors affecting the test performance and clinical utility of newer markers), we included 3 studies (1 RCT and 2 prospective cohorts) as well as relevant data from all 27 studies included in Key Question 2; however, we found insufficient evidence to draw any conclusions, as only single studies or indirect comparisons across studies provided relevant data.
The available data are very limited by a high degree of heterogeneity. Many different definitions of a response to IV iron treatment were used as the reference standard for iron deficiency. Moreover, there was no uniform regimen of intravenous iron treatment across studies in terms of dosage, iron formulation, treatment frequency, and follow-up duration for the iron challenge test (used to define a response). Many studies included in our review were also rated as being at high risk of bias, limiting their utility in informing clinical practice.
Combining the evidence addressing Key Questions 2, 3, and 4, we conclude that no currently available laboratory biomarker of iron status (newer or classical), used singly, has ideal predictive ability for iron deficiency as defined by a response to an iron challenge test. Furthermore, there is insufficient evidence to determine the test performance of combinations of newer biomarkers, or of newer and classical biomarkers, for diagnosing iron deficiency. However, CHr and %HYPO may have better predictive ability for a response to IV iron treatment than classical markers (TSAT <20% or ferritin <100 ng/mL) in HD CKD patients. In addition, results from two RCTs showed a reduction in the number of iron status tests and resulting IV iron treatments administered to patients whose iron management was guided by CHr, compared with those guided by TSAT or ferritin. These results suggest that CHr-guided management may reduce potential harms from IV iron treatment by lowering the frequency of iron testing, although the evidence for the potential harms associated with testing or test-associated treatment is insufficient.
Nevertheless, the strength of evidence supporting these conclusions is low, and there remains considerable clinical uncertainty regarding the use of newer markers in the assessment of iron status and management of iron deficiency in stages 3–5 CKD patients (both nondialysis and dialysis). In addition, factors that may affect the test performance and clinical utility of newer laboratory markers of iron status remain largely unexamined.