
New and Improved Registries for Evaluating Patient Outcomes and HIT

Webcast Transcript, September 27, 2010
Elise Berliner, Ph.D., Moderator

Title Slide

DR. BERLINER: Good morning everyone. Welcome to New and Improved Registries for Evaluating Patient Outcomes and HIT. This is the release of the Second Edition of our registries handbook.

Slide 2: Agenda

My name is Elise Berliner, and I'm the Task Order Officer for this project. We've been working on this update for a number of years now, and we're just so proud of it. Our senior editors are Richard Gliklich and Nancy Dreyer from Outcome, which is one of the DEcIDE Centers for the Effective Health Care Program. They are going to talk about the major changes and new sections. And then, at the end, we will have an open discussion on the Third Edition. We hope the Third Edition will be even larger than the Second Edition, and we want to hear your suggestions about what it should include.

Slide 3: Background

The First Edition was published in 2007. It has been widely used as a reference for designing, operating, and evaluating patient registries, which we're so happy about. I forgot to acknowledge Michelle Leavy, the managing editor of the Handbook, for her contributions. Michelle did a literature search to see how many citations there were for the Registry Handbook, and she found over 60 citations in the literature and in significant government publications such as the Federal Register and the FCC Report to the President. So we are happy that this document is getting used.

Slide 4: Purpose of the Second Edition

So why did we do a Second Edition? As registries continue to evolve, many new methodological and practical issues have arisen. We wanted to update and expand the First Edition of the User's Guide with new information gathered from recent publications and reported experiences encountered by researchers and other professionals who utilize registries for research. And, we wanted to expound on selected topics in the original guide and add new topics that deserve further in-depth discussion.

Slide 5: Process of Creating the Guide

The topics were identified based on public comments received on the First Edition. For the new sections, we gathered author and reviewer teams with backgrounds similar to those for the First Edition; all of the chapters include authors from industry, government, physician groups, and patient representatives. We tried to get a balance of stakeholders in order to have everyone's perspective represented. We posted the new sections for public comment and revision, and then we posted the whole draft of the Second Edition for another round of public comment. For the original chapters, the original authors and reviewers were invited to participate. Additions were made for new topic areas when necessary, and you'll hear about that from Rich and Nancy. We also had an open call for case examples. One of the great things about this handbook is that throughout we have cases where people who have developed real registries tell us about problems they've had and how they've solved them. We think this is useful: as people face their own problems, they can see how others have solved them.

Slide 6: Second Edition Collaborators

So our Second Edition. We had 55 contributors from industry, academia, health plans, physician societies, and government. We had 49 invited peer reviewers plus we invited anyone to comment during the public review process. We had 38 case studies illustrating challenges and solutions. There are 20 new examples. Rich and Nancy were the editors and Michelle Leavy was our managing editor.

Slide 7: Second Edition: Table of Contents

Here is the table of contents, and in yellow (on this slide) are the major new sections. Every chapter was updated as needed. In Chapter 3, we added a discussion of planning for the end of a patient registry. In Chapter 4, we added a whole new section on the use of registries in product safety assessment. In Chapter 7, we looked at linking registry data to other sources of data and at the technical and legal considerations involved. And in Chapter 11, we looked at interfacing registries with electronic health records. All of these are very timely topics.

With that, I will turn it over to Rich and Nancy.

Slide 8: Registries for Evaluating Patient Outcomes: A User’s Guide Second Edition

DR. GLIKLICH: Thank you, Elise. Good morning. It is both an honor to be here and a great feeling to be talking about this second edition, which you can see is a bit thicker than the first one. As Dr. Berliner said, this is the work of a large number of collaborators and hopefully a very valuable contribution in the field.

Slide 9: Review of Major Changes

This morning we're going to primarily focus on how the Second Edition builds on and expands the First Edition, with a particular emphasis on the four new chapters or chapter parts and the case examples.

As many of you know, the User's Guide is divided into three sections: creating, operating and evaluating registries. The first two sections provide basic information on the key areas of registry development and operations, which highlight the spectrum of practices in each of those areas and their potential strengths and weaknesses.

First, I should say that every part of every chapter was reviewed and there are many updates and changes; however, this morning I'll just review the key changes to the original chapters, discuss the new case studies and present a more expanded review of two of the new chapters. I'll cover linking registry data and interfacing registries with EHRs and my colleague, co-editor Dr. Dreyer, will review the other two major additions as well as the changes to the last chapter which is on evaluating registries.

Slide 10: Updates to 1st Edition Chapters

So, let’s go through the key chapters that had significant changes. Chapter 2 covers planning. There are several key steps in planning a patient registry, including articulating its purpose, determining whether it is an appropriate means of addressing the research question, identifying stakeholders, defining the scope and the target population, assessing feasibility, securing funding, and so on. To this chapter, we have now added a more significant discussion of public/private partnerships.

Chapter 3 also had significant changes. Chapter 3 covers registry design. A patient registry should be designed with respect to its major purpose or purposes with the understanding that different levels of rigor may be required for registries designed to address focused analytical questions versus registries that are more descriptive in nature. As you know, design and analysis go hand in hand and what we tried to do in these revisions is to align this chapter much more closely with the analysis and interpretation chapter, which is now Chapter 13.

A third chapter which changed quite a bit is Chapter 5. This is on data elements. The selection of data elements requires balancing such factors as their importance for the integrity of the registry and the analysis of the primary outcomes, as well as things like reliability, burden, risk of patient identification, and so on. What we did here was further stress data standards. We explained what data standards are. We pointed the reader to a lot more data standards because these will have direct consequences on our ability to aggregate and link registries in the future.

Slide 11: Updates to 1st Edition Chapters

Another chapter which had significant additions is Chapter 8. Chapter 8 focuses on registry ethics, data ownership, and privacy; there are obviously critical ethical and legal issues that arise in collecting and using data for patient registries. In the First Edition, we had a pretty strong review of HIPAA and the Common Rule. In this edition, we expanded the review to include the Patient Safety and Quality Improvement Act of 2005, which covers patient safety organizations; the Genetic Information Nondiscrimination Act, which touches a little bit on biosamples; and the HITECH Act. In addition to adding a chapter on using patient registries for product safety assessment, we felt that the original chapter on adverse event detection required some updating, since risk evaluation and mitigation strategies, or REMS, did not exist at the time the first Handbook was published. So we added sections on that.

Slide 12: Case Examples (New Examples Highlighted)

So I've listed the case examples on the next three slides. We added another 20 case examples to this edition. Many of the examples touch on the new chapters, such as linking registry data or electronic health record use, but many examples were added to the older chapters, such as designing a registry for health technology assessment or using a collaborative approach to plan and implement a registry.

Slide 13: Case Examples (New Examples Highlighted)

And these are some more of the case examples that have been added. The case examples that have been added are in that brownish/reddish color and the original ones are in the black typeface.

Slide 14: Case Examples (New Examples Highlighted)

So just one more comment on the case examples. The purpose of their inclusion just as in the First Edition is solely to illustrate a point in the text. We don't endorse any of these registries for their quality. It's simply to use them as an example.

Slide 15: New Chapter: Linking Registry Data

So let me talk briefly about two chapters. The first chapter is linking registry data. Registry data may be linked to other data sources, for example, administrative data sources or other registries, to examine questions that cannot be addressed using the registry data alone. For these projects, we need to determine what the risk of identifying patients becomes when we combine data, and what the legal and ethical requirements are in performing the linkage.

Slide 16: Linking Registry Data: Overview

Basically, the chapter is divided into two equally weighted and important sets of questions that must be addressed in the data linkage process. First, what is a feasible technical approach to linking the data; and second, is linkage legally feasible under the permissions, terms, and conditions that apply to the original formation or collection of the dataset?

Slide 17: Table 10: Technical Planning Questions

This table, which is from the chapter, outlines a number of questions researchers need to ask to address the central issues in data linkage. There are many statistical techniques for linking records: deterministic matching, probabilistic matching, and so on. The choice of technique should be guided by the types of data available, and linkage projects need to manage the issues those data present, for example, records that exist in only one database, variations in units of measure, the types of things that you can imagine.
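
To make the distinction concrete, here is a minimal sketch, in Python, of the two linkage styles just mentioned. It is not from the Handbook; the field names, weights, and score threshold are hypothetical, and a real project would estimate agreement weights from the data (for example, in the Fellegi-Sunter style) and handle missing values and blocking much more carefully.

```python
from difflib import SequenceMatcher

def deterministic_match(rec_a, rec_b, keys=("dob", "zip5", "last_name")):
    """Link only when every key field agrees exactly."""
    return all(rec_a.get(k) == rec_b.get(k) for k in keys)

def probabilistic_score(rec_a, rec_b, weights=None):
    """Weighted agreement score; higher means more likely to be the same person."""
    weights = weights or {"last_name": 4.0, "first_name": 2.0, "dob": 5.0, "zip5": 1.0}
    score = 0.0
    for field, w in weights.items():
        a, b = str(rec_a.get(field, "")), str(rec_b.get(field, ""))
        if not a or not b:
            continue  # a missing value contributes nothing either way
        sim = SequenceMatcher(None, a.lower(), b.lower()).ratio()
        score += w * (2 * sim - 1)  # +w for a perfect match, -w for complete disagreement
    return score

registry_rec = {"last_name": "Smith", "first_name": "Ann", "dob": "1950-03-02", "zip5": "02139"}
claims_rec   = {"last_name": "Smyth", "first_name": "Ann", "dob": "1950-03-02", "zip5": "02139"}

print(deterministic_match(registry_rec, claims_rec))        # False: the spelling variant breaks an exact match
print(probabilistic_score(registry_rec, claims_rec) > 6.0)  # True: the records still score as a likely link
```

In practice the score threshold, the agreement weights, and the handling of near-misses all have to be tuned and validated against the specific datasets being linked.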

In addition, it is important to understand that linkage of de-identified data may result in accidental reidentification. The risks of reidentification vary depending on the variables used and should be managed with guidance from legal and statistical experts to minimize risk and ensure compliance with HIPAA, as well as the Common Rule, state statutes, and so on.

Slide 18: Table 9: Legal Planning Questions

So these additional tables in that chapter walk through questions that someone planning a linkage should be thinking about: what is the purpose of the data linkage, what were the conditions under which the original data were collected, are there biospecimens present, what kind of data do you have, who is conducting the linkage…

Slide 19: Table 9: Legal Planning Questions (Continued)

…other laws or policies that might apply, future uses of the aggregated data, whether those future uses have been considered in the consent process, and so on.

Slide 20: Linking Registry Data: Case Example

As in all the new chapters, we have added new case studies that emphasize some relevant point or feature. Here is an example of one of the case studies which is the linkage between the American College of Cardiology CathPCI Registry which lacks long-term follow-up data, and Medicare data which lacks detailed procedural information. So you can see that there is a complementarity between those two. By combining them, the registry developers in this case and the linkage experts were able to better understand outcomes for diagnostic and interventional cardiac catheterization.

Slide 21: New Chapter: Interfacing Registries with Electronic Health Records (EHRs)

The other chapter I'm going to talk about is achieving interoperability between electronic health records and registries, which will become increasingly important as EHR adoption increases, particularly under ARRA, as patient registry use increases now that the Handbook is out, and as the purposes for both continue to grow. In this chapter, we take the viewpoint that such interoperability should be based on open standards that enable any willing provider to interface with any applicable registry without requiring customization or permission from the EHR vendor. The chapter explores how we might achieve this in the near term.

Slide 22: Interfacing Registries & EHRs: Overview

This chapter has four main topics: the role of EHRs and patient registries in health care, the vision of EHR interoperability, some of the challenges in interoperability, and the partial or potential solutions that can get us at least most of the way there.

Slide 23: Interfacing Registries & EHRs: Definitions

One of the things that we do early on in this chapter is distinguish clearly between an EHR and a registry because there is some confusion in the literature and as people talk about these things. What is the definition of an EHR? It is a record of health-related information on an individual. An EHR is individually-focused whereas a patient registry -- this is the definition from the Registry Handbook -- is an organized system that evaluates specified outcomes for a population. So, one is individually focused, one is population focused.

Slide 24: Interoperability Challenges

Interoperability for health information systems requires accurate and consistent data exchange, and then the use of the information that has been exchanged. There are two different realms to that: syntactic interoperability, the ability to exchange data, and semantic interoperability, the ability to understand the exchanged data. Those are the core constructs of interoperability. Both must be present in order for EHRs and registries to share data successfully. There are other important issues, such as managing patient identifiers and so on. We touch on those, but not in as much detail.
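
As a toy illustration of those two layers (not an example from the chapter), consider two systems that can already exchange a well-formed message but use different codes for the same laboratory test. The payload, the local code, and the standard code below are all hypothetical.

```python
import json

# Syntactic layer: the EHR emits a message the registry can parse.
ehr_message = json.dumps({"test_code": "GLU-LOCAL-17", "value": 112, "unit": "mg/dL"})

# Semantic layer: the registry only understands codes from a shared vocabulary.
registry_vocabulary = {"STD-0042": "Serum glucose"}   # hypothetical standard code
local_to_standard = {"GLU-LOCAL-17": "STD-0042"}      # terminology map maintained by the interface

received = json.loads(ehr_message)                    # exchange succeeds: syntactic interoperability
standard_code = local_to_standard.get(received["test_code"])

if standard_code in registry_vocabulary:              # meaning is understood: semantic interoperability
    print("Store:", registry_vocabulary[standard_code], received["value"], received["unit"])
else:
    print("Exchanged but not understood; flag for terminology mapping")
```

Without the mapping step, the data move but their meaning does not, which is exactly the gap between the two kinds of interoperability described above.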

The summary finding of the chapter is that full, complete, seamless interoperability is unlikely to be achieved for many years.

Slide 25: Partial and Potential Solutions

But there are partial solutions today that are really quite good and get us most, if not very close to all, of the way there. Specifically, we point to several open standard components. An open standard is one that is freely available to any system vendor who wants to use it. Many of these open standards are well tested and can provide very real and substantial benefits today. We call this functional interoperability in the chapter.

Slide 26: Building-block Approach: Retrieve Form for Data Capture/HITSP-TP50

We describe examples of groups of those standards, such as what's called Retrieve Form for Data Capture, HITSP-TP50, which is a collection of open standards that provides enough of a framework to enable us to have real interoperability today. This diagram shows an EHR linked to a registry or multiple registries. The clinician sits down with the EHR to enter information on the patient. Through open standards, the EHR recognizes that the patient is already enrolled in a registry or meets the criteria for a new registry. It signals to the clinician that this is the case, assuming the other permissions such as informed consent are in place, and the clinician can opt to move forward. The registry form itself actually emerges within the EHR; the clinician doesn't go anywhere else, and the form gets partially or completely filled out by whatever data is already in the EHR. But because we don't yet have exact matching of data standards between registries and EHRs, and we don't yet have outcomes defined for everything, there are always a few data elements that still need to be completed. The clinician can complete those, hit submit, and the data goes on to the registry as well as into the EHR. In the case examples, I'll talk about how the practical implementation of this actually drives down the time it takes to participate in a registry with a patient -- from the 15-20 minutes it takes to go between the two systems, to a few seconds to a few minutes. So it has potentially very great value for enabling this field to grow.
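
The workflow is easier to see in a small sketch. This is not the actual RFD/HITSP-TP50 transaction set, whose messages are defined by the standard itself; the function and field names below are invented purely to illustrate the pre-population step described above, in which the EHR supplies what it already holds and the clinician completes only the registry-specific elements.

```python
# Hypothetical illustration of registry-form pre-population; not an RFD implementation.

ehr_record = {
    "patient_id": "12345",
    "dob": "1948-07-01",
    "diagnosis": "acute myocardial infarction",
    "procedure_date": "2010-09-20",
}

# The registry form asks for four elements; one is registry-specific and
# does not exist in the EHR.
registry_form_fields = ["dob", "diagnosis", "procedure_date", "functional_class"]

def prepopulate(form_fields, ehr):
    """Fill the retrieved registry form with whatever the EHR already holds."""
    form = {field: ehr.get(field) for field in form_fields}
    missing = [field for field, value in form.items() if value is None]
    return form, missing

form, still_needed = prepopulate(registry_form_fields, ehr_record)
print("Pre-filled from the EHR:", {k: v for k, v in form.items() if v is not None})
print("Clinician completes:", still_needed)
# On submit, the completed form would be returned to the registry and retained in the
# EHR, which is where the time savings described in the case examples come from.
```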

Slide 27: Summary

The chapter concludes that EHR registry interoperability will be increasingly important as both are growing. While a complete solution does not yet exist, there are enough open standard building blocks to significantly improve work flow and reduce duplication of effort, and these should generally be adopted. It will be helpful to all.

Slide 28: Interfacing Registries & EHRs: Case Examples

Okay, so three of the case examples presented in the Handbook are on this HITSP-TP50 model. In one case, the registry focused on the effectiveness of pain management. It was made interoperable with a commercial EHR, with a dramatic reduction in work by the clinicians. In a second case, called ASTER, the same model was used to facilitate adverse event reporting to the FDA. In this case, the form is triggered in the EHR by a potential adverse event; it emerges, is completed, and is then transferred to the FDA via the National Health Information Network, or NHIN. In a third case, the EHR is made interoperable with a quality registry from a professional society, the American College of Rheumatology, while simultaneously meeting the needs of the Centers for Medicare & Medicaid Services physician quality reporting initiatives, so that a clinician entering a patient in both registries through the same interface can actually meet the requirements for both programs at the same time.

I'd like to now turn this over to my colleague and co-editor, Dr. Nancy Dreyer, and I look forward to your questions later in this session.

Slide 29: Review of Major Changes

DR. DREYER: Thanks, Rich. Before I start, I wanted to thank all of you who are in the room and watching the Webcast, who were contributors or reviewers. This Handbook represents a contribution of a lot of people. It is intended as a very practical guide. Without your help in bringing together that vast amount of experience, it wouldn't be very useful. So a heartfelt, “thank you,” on behalf of all the editors to all the contributors.

My assignment today is to talk to you about the rest of the new sections in the book, specifically, planning for the end of a registry and using a registry for product safety assessment, with some case examples. I will then take you through some of the changes to the last section which is about how to evaluate a patient registry.

Slide 30: New Section: When to End a Registry

The section on how to end a registry is within a chapter. It is a sidebar, but it is a pretty important one because if any of you have ever been on the funding side or the leading side of a registry, it seems like a great idea when you start it. You make a commitment to doing something important for public health or for whatever reason, and then it becomes quite a challenge to figure out when you have enough information to stop. For example, what is your obligation to the audience? So this section addresses some of those issues.

Slide 31: Stopping Decisions and Registry Goals

When you are planning to stop a registry, it helps to have put forth some measurable goals when you started it, so you understand when you have achieved your purpose. Ideally, those goals would be codified in the protocol, and you could use them as a guide to periodically assess whether you've achieved them or not.

On the other side of the coin, it's also important to use those goals to see whether you are failing to meet them. A registry that fails to meet measurable goals should be considered a candidate to be stopped.

There are also other reasons that people sometimes stop a registry. They have to do with funding, with the purpose no longer being relevant, or with the data quality being insufficient for the goals of the registry.

Slide 32: What Happens When a Registry Stops?

Stopping is not one clear, simple point. Stopping could mean no longer accruing new patients while continuing to follow those already in the registry, or it could mean stopping recruitment and folding everything down. So, regardless of how you're stopping, you need to think through exactly what would be stopped and what the processes would be.

Slide 33: What Happens When a Registry Stops?

For example, sometimes there may be a multi-sponsored registry and one sponsor wants to back out and the other sponsors are willing to take up the slack or to take full possession of the ownership and to continue to run it. In that case, it's important to go through the issues of who owns the data, confidentiality, and all of the ethical obligations attendant to making a transfer. But it's not impossible and it is often a very good way to transition a registry.

A correlated or related idea is, what do you do with the data? If you are folding down a registry, it is often desirable to think through whether the data should be preserved and, if so, how. Sometimes data that were collected for one purpose can serve other purposes that you hadn't thought of at the time. Sometimes they are of historical importance; sometimes they are just useful for descriptive information or for generating hypotheses. So it is important to think through what the potential uses might be, whether you in fact could use the data, and, if so, where and how it should be stored.

Slide 34: Ending a Registry: Case Example

Here’s an example. One of the 38 case examples is about bupropion, a drug marketed as Wellbutrin and Zyban and used for depression and smoking cessation. It is a widely used drug. In this example, the manufacturer created a registry to look at teratogenic effects. This was a very large, very long-term project; it ran for more than ten years. They accrued data on 1,500 babies who had been born to mothers who used this product during their pregnancy. This case study is about a teratogenic signal that emerged, and the question was, “What do you do?” This was a registry that only had treated or exposed babies. You couldn't really evaluate the signal much further in the registry, but you could characterize it. So, they went to an external database, in this case a health insurance claims database. They put together a sizable cohort, also over ten years, with comparators, and looked for the presence of this teratogenic effect, which was evident at birth. Based on that external validation and comparison effort, they were able to see that the signal was in fact consistent with background noise. Based on that, an advisory committee advised that the registry had served its purpose. So, this is an example of a registry that I don't think had a clear endpoint, but it generated a signal. The signal was evaluated, and they decided that this was enough information.

Slide 35: New Chapter: Use of Registries for Product Safety Assessments

Another important chapter deals with registries for product safety assessments. This is becoming an increasingly important tool. When new products come to market, they have been extensively evaluated, but generally in optimal populations that are not inclusive of the full range of patients who will use the product. So, registries are becoming particularly useful for understanding subpopulations that wouldn't have been in a clinical trial, whether because of their age, their fragility, co-morbidities, or other factors. So, we devoted a whole chapter to this: particularly, how the data should be looked at for monitoring adverse events; how to figure out what the expectations should be, so you can gauge your observations against what you should expect, and how often you should look; and then what your legal and ethical responsibilities are for reporting, including informing physicians.

Slide 36: Use of Registries for Product Safety Assessments: Overview

This chapter is divided into four sections: registries that are specifically designed for safety assessments; registries that weren't designed for safety assessments but can provide important information; signal detection; and requirements for reporting. These are all hot topics. I think you'll find this chapter useful if you deal with this work.

Slide 37: Registries Designed for Safety Assessments

The utility of this chapter is that it talks about the challenges of looking at large and diverse populations over extended times, trying to parse out the effects of the multiple treatments that people are using, and those types of issues. We deal a little bit with study size questions; for example, when you are trying to detect safety events, size matters. So it is important to understand when you have enough information to confidently rule out a problem as well as to confirm one.

We also talk about some of the challenges in recruiting the appropriate populations of interest, figuring out how well your recruitment has gone and whether you in fact have a good representation of the target population. Then, we address some of the issues of trying to figure out timing, treatment switching, and multiple therapies. I want to make it clear that this is not a methods handbook for how to do analysis of complex treatments. It is a practical guide to some of the issues, but it points you to other references. It is not everything you ever wanted to know, but it has a lot of information.

Slide 38: Registries Designed for Purposes Other than Safety

We also talk about the challenges of using a registry that was designed for purposes other than safety. Sometimes these are registries that were targeted to study the natural history of the disease, but there can be safety signals in them or safety data in them. So we talk about how you handle that.

Slide 39: Use of Registries for Product Safety Assessment: Case Example

We give an example of how registries like this are used. This example is from the British Society for Rheumatology Biologics Register. This important registry, created by rheumatologists, has been used by the National Institute for Health and Clinical Excellence (NICE) in the UK, a group that decides what goes on the national health formulary. It is an important decision-making body for the United Kingdom. They used this register to try to understand anti-TNF therapy, and the registry was used by decision-makers to allow a product, or class of products, to come on formulary.

Slide 40: Evaluating Registries

The last chapter I wanted to talk about is one that is widely cited. I am very interested in your comments on anything, but on this chapter in particular, because this is the chapter about evaluating registries. There has been some discussion about whether it should be a checklist. It's not, but I want to know if you think that that's important. What it does is lay out two categories of quality: research quality and evidence quality. Research quality has to do with the process, and evidence quality has to do with what you get.

For each of these -- research quality and evidence quality -- we have two levels of evaluation. One is a basic level that every registry should have, and the second category is potential enhancements that are really quite good, but they are not essential. So I'll take you through some of those issues.

Slide 41: Table 20: Research Quality – Basic Elements of Good Practice for Establishing and Operating Registries

This is the table that we give on the basic elements of good practice for research quality. I know the print is small, but we're talking about the basic elements of research design. The registry should have objectives. You should know who you are studying. You should think about defining clinically meaningful outcomes. Think about an efficient and reliable way to collect data, and use validated scales where they exist. Think about the follow-up time you need to get enough information about what you're studying. We're not saying that you have to have enough follow-up for the ideal study, but we are saying you have to think it through and acknowledge what the desirable information is and what you can do. For example, if you are studying something rare and you think you need a million people, but you can only afford to study 10,000, you write that down: we can't rule out a risk of one in a million, but we can tell you about the experience in 10,000 people. And we talk about reporting and planning an analysis. So I think when you get a chance to look at this in a font size that you can read, you'll see that there should be no surprises here. This is basic good research practice.
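
The arithmetic behind that example is worth making explicit. If a registry of n patients observes zero events, the exact 95% upper confidence bound on the event rate is 1 - 0.05^(1/n), which the familiar "rule of three" approximates as 3/n. Here is a minimal sketch of that calculation; the numbers follow directly from the formula.

```python
def upper_bound_zero_events(n, alpha=0.05):
    """Exact upper confidence bound on an event rate when 0 events are seen in n patients."""
    return 1 - alpha ** (1.0 / n)

n = 10_000
print(f"0 events in {n:,} patients: 95% upper bound ~ {upper_bound_zero_events(n):.5f} "
      f"(about 3 per 10,000)")
print(f"Confidently ruling out a 1-in-a-million risk takes roughly {round(3 / 1e-6):,} patients")
```

In other words, 10,000 patients with no events can only rule out risks larger than about 3 in 10,000, and acknowledging that limit in the protocol is exactly the kind of statement described above.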

Slide 42: Table 21: Research Quality – Potential Enhancements to Good Practice for Establishing and Operating Registries

This table shows potential enhancements. Here we talk about the difference between having an informal study plan and a formal study protocol and the value of having internal comparisons. One of the big challenges in registries is how to know if what you found is unusual or meaningful. There is some value to internal comparison groups, that is, information on nonexposed comparators that were selected at the same point in time. That is not always the Holy Grail, but certainly something to think about.

We also discuss the value of a formal statistical evaluation of study size requirements and the potential added value you could get from linkage with external sources. For example, if you are looking for serious side effects, you might want to collect enough data elements that allow you to link, at some point in time, to the National Death Index. That is a tremendous resource that will tell you not only who died but also their cause of death. Those are potential enhancements to a good research design.

Slide 43: Evidence Quality – Indicators of Good Evidence Quality for Registries

Now, remember, research quality refers to how good your processes were; evidence quality refers to how it all worked out. This table has been reorganized from the last edition in a more logical fashion; for example, here we talk about external validity and internal validity.

Slide 44: Evidence Quality – Indicators of Enhanced Good Evidence Quality for Registries

External validity refers to the extent to which the results are generalizable, and internal validity refers to the extent to which what you got actually represents the truth. So, external validity is basic evidence quality. For example, were the people in your study similar to the target population that you aimed to get? That’s a fairly practical question. Do you have good completeness of information? And then there is internal validity. For example, were you able to get the data on the key exposures? Did you get reasonable follow-up on people, so you could draw some conclusions? Were you able to do any quality assurance, even checking a sample of data against source documents?
We talk about the importance of reasonably complete follow-up and checking the quality of your data. I would propose that nothing here is shocking. It should be familiar to all of you as a marker of quality.

For the enhancements, we talk about things that you can do to ensure or assure that you have in fact got good quality data. So did you confirm, for example, that the patients in your study were in fact eligible to be in your study? Did you try to evaluate the potential for selection bias? That's my epidemiology jargon for understanding whether certain people got into your study more so than others who also represented the population of interest. You can do that by comparing your study population with other information that characterizes them. For example, for people with rheumatoid arthritis: how similar are the people in my RA registry to what's known about people with rheumatoid arthritis? For internal validity, we talk in terms of enhancements about doing things like collecting data in an unbiased fashion, for example, the value of results that can be confirmed by an unbiased observer. Instead of a global assessment of patient health, for example, which depends on a particular doctor's appraisal, it might be a measurement of some lab value that can be confirmed in another lab, or a chest X-ray that can be read by somebody else; we talk about the value of those types of measures.

Regarding the value of quantitative evaluation of risks and benefits, I am sure many of you have read articles where the conclusion is that there were no statistically significant differences. This chapter talks about the importance of reporting the rate that was observed; that is, you give a rate and a confidence interval. It is not just about statistical significance, it is about quantifying the benefit or the harm. We talk about quantification and the value of contemporaneously collected data for comparators if you are doing a comparative effectiveness study. We talk about using validated tools where possible. One of the other new contributions is to talk a little bit more about the value of sensitivity analyses. No matter how well you've done your study, there will be quirks about the analysis or the population. So, there is value in basic sensitivity analyses to understand how much your conclusions depended on assumptions that you made in the analyses.
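
Here is a small sketch of that reporting practice, with made-up counts: rather than stating only that no statistically significant difference was found, report the observed rate with a confidence interval. The Wilson interval is used here because it behaves reasonably when the number of events is small; any standard interval would make the same point.

```python
from math import sqrt

def wilson_ci(events, n, z=1.96):
    """Wilson score 95% confidence interval for a proportion."""
    p = events / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - half, center + half

events, n = 12, 4_000   # hypothetical: 12 events among 4,000 exposed patients
low, high = wilson_ci(events, n)
print(f"Observed rate {events / n:.4f} (95% CI {low:.4f} to {high:.4f})")
```

Reporting the interval tells the reader how large a harm or benefit the data can and cannot rule out, which is the quantification being asked for above.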

Then finally, and of course, there is value in describing the consistency of what you observed with what's known about the issues that you are studying.

With that, I'm going to turn it over to Elise and we'll look forward to any comments you have about this, what else is missing, or things that you think should be changed. Thanks very much.

Slide 45: Plan for the Third Edition

DR. BERLINER: We are happy that we've released the Second Edition, but we are already starting work on the Third Edition. The full text of the Handbook is posted on the Effective Health Care Web site. There is also information on the Web site about ordering a printed copy. The Web site is www.effectivehealthcare.ahrq.gov.

I want to talk briefly about what we're planning for the Third Edition. We are planning to begin development right away. Through public comment on the Second Edition, seven topics were identified as potential new chapters, but we are planning to do 11 new chapters, so we definitely want to hear feedback about other topics that you think would be important. I am going to review the seven topics that were identified previously, and we can talk about framing those topics.

Slide 46: Topics of the Third Edition

Topics for the Third Edition that were identified through the public comment include: 1) patient identity management; 2) protection of data from litigation, a topic that comes up often and is of high concern; 3) data protection concerns; 4) public/private partnerships; 5) statistical techniques for analyzing combined data; 6) more on pregnancy registries; and 7) registry transitions. So, those are the seven topics that people told us were important to cover in more detail than is currently covered.

Slide 47: Additional Topics

As I mentioned, we have four more possible topics. If anyone in the room has any questions on anything that we've said or any comments or ideas for the next version, we'd love to hear it. So with that I'll open it up to questions.

AUDIENCE MEMBER: I wonder if you might comment on the relationship between future registry development, EHRs, and meaningful use.

DR. GLIKLICH: Some of those points are in the interoperability chapter. There is a long preface in the interoperability chapter about what impact ARRA or HITECH funding will have on EHR adoption and what the potential strategies are for integrating registries with that, because once data -- as you're implying in your question -- is in electronic form, nobody wants to reenter it. They are happy to enter it if it's in paper form, but less willing once it's electronic. They want the magic of electricity to do its thing. We do talk about that in the current chapter.

There is another side to your question which may be directed to Elise as a potential area. What we don't have currently is a chapter which focuses specifically on registries for quality measurement and improvement. And, what you're describing as meaningful use is somewhere between HIT issues and quality improvement issues.

DR. BERLINER: We had a lot of discussion about that when the project started, and the scope was specifically limited to registries for evaluating patient outcomes. But I think it probably is good to revisit that question now, as we move forward, because we know that there is a lot of overlap, with quality improvement registries being used for measuring patient outcomes. So thank you.

Another project that we're working on at AHRQ is a longitudinal study of implantable cardiac defibrillators, where we are linking data from health records, not always electronic, and from the implantable cardiac defibrillators themselves to study their effectiveness. Wouldn’t it be great if you could just download all the data from the device directly into the registry? I think that would be something that would be really interesting to look at. I'd love to hear thoughts here.

DR. DREYER: I have a question for you. On the section on evaluating patient registries, I was interested to learn whether it would be of value to have a checklist rather than a description. Those people who find a checklist helpful, could you raise your hands? And everyone who finds the description adequate without a checklist, let’s have a show of hands. Okay, that's very helpful. Thanks.

DR. FORDIS: Elise, we have a couple of questions that have come from the folks on the Webcast. For the Second Edition, have you researched national registries established in other countries? And also, as you discussed patient safety issue capture, is there a plan to aggregate the issue of unintended consequences identified by registries?

DR. DREYER: I can address the question about national registries. When we have done both editions of this Handbook, we have tried to use a variety of resources available to us to call for case studies. That call did go internationally, and you will find that there are many international case examples here. It is not a comprehensive resource for national registries, but it does reflect international contributions about registries. Although this is funded by the U.S. government, we thought that the utility would be improved by looking around the world for good examples.

DR. GLIKLICH: Just to follow up on that question and to see if it's worthy of a suggestion for the next edition: in both editions we've tried to state that this is a U.S. text, but we draw references from outside the U.S. In this edition, in the first chapter, called patient registries, there is a little bit more discussion about what's happening globally with registries. So, one question that anyone here could ask is, to what extent might a chapter on international registries, or aspects of international registries, or laws, or differences be useful? If somebody does think that it would be useful, please suggest it.

AUDIENCE MEMBER: This isn't really a question, just a comment on what's just been mentioned. I work in Qatar, out in the Arabian Gulf. I spent a long time in Saudi Arabia, and we've developed various registries along the way, if you're looking for contributions on international registries. We are also about to embark on a pregnancy registry for Qatar, and I might be coming back and asking someone some questions about that. But my real question -- it sounds quite simple, but to me it's quite complicated -- is that you used the term registry, but in my mind I would call it a clinical database. I've looked at a registry as something more than a database; it is a complete database of the specified information. It seems to me that you're talking a lot about sampling and representation today. Do you distinguish between the two?

DR. GLIKLICH: I'm not sure what the definition of a clinical database is. We tried very carefully to outline the definition of a patient registry as an organized system for collecting uniform data about exposures and outcomes for a specified population that is defined by an exposure -- the exposure can be a disease, a procedure, or a condition -- and that serves predetermined scientific, clinical, or policy purposes. A clinical database could have a lot of definitions. I have colleagues who are surgeons who just collect data on every patient. They don't have a specific purpose in mind. It gets close, but it would not fit this definition of a registry.

AUDIENCE MEMBER: So the definition of registry you're working with has more to do with the structure of the data, and perhaps even the quality, and not so much with the completeness of the data?

DR. GLIKLICH: Well, completeness is something we evaluate at the end of the registry. The intention of a registry is to collect uniform data. It doesn't mean that every patient will have a specified visit at which you collect data, because you are collecting it in a naturalistic, real-world type of setting. But I think the real key is the last part of the definition, which is that it serves a predetermined purpose. You could have a purpose where complete data is not necessary to serve that purpose, and that would still potentially constitute a registry.

AUDIENCE MEMBER: Thank you.

AUDIENCE MEMBER: I am with the American Society of Transplantation. I agree with the point he's made. The way I tend to look at this is that a registry is essentially a lot of data on a smaller number of subjects, and then you're trying to deal with the issue of how much of the data are descriptive versus how much are explanatory, how much of it is superficial versus very complex. In our field, we tend to have a lot of clinical people who want to create essentially what I would call registries where they've got a lot of data -- or a few data elements on a lot of people -- and they are not particularly helpful. Many of the databases, I feel, are very complex, cover a large number of subjects, and include a good mix of essentially descriptive data, namely outcomes, and explanatory variables. So I think the point he's making is extremely valuable and one we see in our field. So I think there is a difference between a registry and a database.

DR. DREYER: Thank you. I'd like to add that the registries this handbook focuses on are generally those that have some follow-up. So I think that's particularly important. In the transplant example, is the question what it is about the transplant that allowed it to be successful or not? Then there are other questions about the long-term safety or toxicity of immunosuppressants. The point I'm trying to make is that registries are driven by a scientific purpose. If your purpose is to understand what works best, then you design a registry that will drive a study that will allow you to have enough information to answer that question.

DR. GLIKLICH: What I missed in responding to the first gentleman's comment is that this handbook, and the comments that we're making today, are specific to registries for evaluating patient outcomes. I think the clinical databases you are talking about are generally geared towards evaluating clinical outcomes, whereas there may be other types of registries which are descriptive or merely count numbers and so on. This handbook does not cover those.

DR. FORDIS: We have more questions from our Webcast audience. The first one is this: Is there something in the report that addresses how a registry may harmonize its data elements with NCI common data elements?

DR. GLIKLICH: The chapter on data elements talks a bit about data standards -- the NCI data standards are one example -- and recommends that users apply data standards when developing their registries so that they actually do harmonize. Harmonization is key for us as we go forward. In the ICD registry that Elise mentioned, the ability to track procedures forward by linking different datasets and registries ultimately requires that you have data elements that are ideally the same, or else you have to make judgments. So, the ability to aggregate across registries and to make interoperability between registries and EHRs easier depends ultimately on using consensus standards. So, I think that point is well taken.

DR. FORDIS: We have a question about whether there is a registry of all of the national health-related registries, or some source where all the health-related registries and their contact information may be found.

DR. BERLINER: That is a project that we are going to be starting at AHRQ, a registry of patient registries. So, stay tuned for more information.

DR. FORDIS: Are there any thoughts or suggestions on ways to encourage participants to enter data in the registry? Mandatory versus voluntary: other than having the luck of a mandatory registry, what incentives could be provided?

DR. GLIKLICH: The chapter on physician and patient participation in registries talks quite a bit about incentives. I won't summarize Chapter 9 too much, but if physicians are to be driven by it, the registry has to be credible to them. Generally, they are looking for the registry to be transparent and to deliver value that meets some need for them, to balance the time that it takes to participate. Sometimes incentives come from benchmarking and seeing how they are doing. Sometimes, if the purpose of the registry is being driven by someone other than the clinician, they may require financial compensation to pay for that time. On the patient side, it is not dissimilar. Many patients have an interest in research, but not all. Patients often are guided by what is important to their physicians and are willing to participate to the extent they think it will help other patients like themselves, but if you apply a lot of burden to a patient, you may scare them away. An IRB will rightfully suggest that you make a payment to the patient, for example, to complete 45 pages of quality of life forms, and so on. So the whole range is discussed in this modified chapter in the handbook.

DR. DREYER: I wanted to add, we strongly encourage registry developers to be prudent in how much data they collect. I never met a good researcher who didn't want just a little more about something. We all want that, but if you overburden the docs and the patients, you will get nothing. So we try to caution you to be mindful about the burden on the respondents.

DR. BERLINER: Now that I've started working with Outcome and with other investigators on developing registries, I am seeing how difficult it is to define the elements. One thing that happens is that data elements are added, or the definitions change over time. For example, you have a long-term registry, but then patients in certain cohorts have different definitions and different datasets. I am wondering if there are statistical methods that could be used to address that and whether that would be a good chapter. Again, we're looking for feedback from all of you. So let us know.

DR. GLIKLICH: If we have single standards for every data element, we won't have to do that anymore.

AUDIENCE MEMBER: Dr. Gliklich, I'll tackle you on that last issue about standards. Perhaps one of the things that would be good in the next chapter would be how to deal with changing technology. As health care evolves, standards change. Our definition of an MI relies on troponins now. How do we put that into a registry?

DR. GLIKLICH: I gave a comical answer to a very complex question. I agree that changing technology implies a change of definition, because the technology changes the underlying way things are coded in a database. There is a case from the USRDS in the current handbook that talks about those issues, about new versions of registries and changing technologies, and so on. I think for someone like Nancy, who is thinking a lot about how you evaluate these registries, one of the real problems -- and I think it would be worth a chapter to talk about how the methods are done -- is that it's hidden. So when you bring data forward in a registry, somebody is making a decision that's usually not well detailed or described. They just carry it forward, and that's very hard to discern as you audit a registry. So, part of it is how you do it, and the other part is how you document it and evaluate the documentation. So, that's a very good point.

AUDIENCE MEMBER: I have one practical question you might like to put in a chapter. You are changing the clinical coding from ICD-9 to ICD-10 in a couple of years. In Canada we made that shift, and I can tell you that the experience is not easy. One major challenge we faced was the training and the comparability of different versions of coded data, and it took us a couple of years to make that process work. If you want to know further details, we'll be happy to help you. Thank you.

DR. BERLINER: Thank you very much for that comment.
I can also mention that for the registry of patient registries project, we have been thinking about this question of how you represent outcome measures. If you had a registry of patient registries, how would each registry put into its database a description of what outcome measures it is measuring, at a sufficient level of detail? It is just an idea right now. If anyone wants to participate in any of these processes and in any of the new chapters, just let us know. Use the “Contact Us” form on the site and put in the subject line, “Registry Handbook,” and then give us feedback -- anything you want to tell us about the Second Edition and any ideas for the Third Edition. As we move forward on these other projects, we'll be posting information on the Web site.

DR. DREYER: I'd like to just add support for that, Elise. The example we just heard from Canada is a great example of a practical challenge; that is just the kind of information that everybody else can learn from. So we welcome your offer, and we encourage the rest of you to think through practical issues or challenges that you've experienced that others might learn from. I've been told that that's what made the First Edition come to life and be useful, and we've tried to extend that even more in the Second Edition. So when you get a chance to look at this, please don't hesitate to send us comments, either in terms of suggestions for what should be added or what you might add, or things that would make it more readable or more usable from your perspective. Thanks.

DR. BERLINER: We've built into that task order a lot of feedback from different stakeholders. So we want to hear from people and want everyone to become involved in this process.

DR. FORDIS: We have another question from our Webcast listeners: “There are certainly benefits for a company considering establishing registries; what are the risks that they should be thinking about?”

DR. DREYER: I'll jump into that one. I think the risks of doing a registry are similar to the risks of doing any type of research where you face public attention to what you're researching. So when you ask a question, you need to be willing to listen to the answer. Like any type of research, you can't guarantee the answer you're going to get. So, the concern for a sponsor, whether it is a company or otherwise, is understanding that some of the information may be useful and serve their purposes, and some of it may hold surprises; they need to always be mindful of the importance of disseminating information. The value of a registry is not just what you've learned for yourself; the real importance is getting that information out. So, when planning a registry, they need to plan for how they will deal with the good news and with the news that may not be so good, but still is important.

DR. BERLINER: We did get a question about protection of data from litigation, and I think that reflects at least one large fear that industry might have in participating in registries. Another area that I am personally interested in is the idea of getting different stakeholders to work together on registries. For example, the implantable cardiac defibrillator study that we're doing is a collaboration with funding from AHRQ, NIH, and industry. The American College of Cardiology is also participating. We've brought together a lot of different stakeholders to participate in the study. That process was long and difficult, but I think the idea of how AHRQ can work together with different stakeholders to build something that's useful for different purposes is a good one. There are a lot of questions, including the litigation question, and hopefully we can help to address them in the future.

DR. DREYER: I wanted to ask you all, along the lines of data protection concerns, are you interested in more information about data sharing? It's a topic that we read about a lot in the journals. Some journals are taking the stand that the data should be made publicly available on their Web site. We see a big discussion about people interested in accessing data and who gets first rights to the data. I was wondering whether it would be useful to try to build some of the knowledge about, once you've put together a registry, whether there are accepted processes for who gets the first chance to analyze and report on the data, when it is fair and responsible to share it with other people, and what the processes are for that. Anybody interested in that?

DR. BERLINER: People are shaking their heads, yes.

DR. DREYER: Something to think about.

AUDIENCE MEMBER: The USRDS has a place where people can request the data, and there is some funding that they have to make available in some cases to access it, and the same is true of the Scientific Registry of Transplant Recipients. People can individually request data, and they are given a dataset that they can then analyze. So those are two good examples where they have really worked out the procedures in quite some detail.

AUDIENCE MEMBER: We worry about protecting patients and HIPAA reminds us of that all the time, and particularly in registries. But, we have to also be concerned about higher level organizational protections because if you're from a state where you are the only large organization in that state, and you are reporting out about that particular state, you could be identified and so there are legal protections that you have to worry about from that perspective.

DR. BERLINER: I think that issue was part of the impetus for the new chapter on looking at patient safety and adverse events in registries. One of the concerns is that when you detect something that looks like a signal in a registry, is that really a signal? What's the validity of that signal? Are you really picking up a patient safety issue, or are you picking up something else? I think that you have to be very careful to represent what your findings are, and that's definitely a concern. I would really encourage you to see what's there and let us know what's missing for the next version.

AUDIENCE MEMBER: This question is not quite related to her question but to the issue of linking administrative databases with registries. Administrative databases, as we all know, are basically not geared towards research. They have multiple purposes, usually around billing and other things. Is there anything in terms of critical appraisal, standards, or suggestions for linking those, so that you can either critically appraise the opportunity or put together some sort of check and balance, or some ability to look at the accuracy of that type of data?

DR. GLIKLICH: That's an important topic. It is covered in two places in this version of the handbook. It is covered in the data sources section, which talks about secondary sources that might be used in a registry and the links in that chain -- and the chain is only as good as its weakest link. Then, the linkage chapter itself talks about the limitations of those databases. That would be an important area; I think you should go through those chapters and see whether you've gotten enough from them or whether they require another expansion in the next edition.

DR. DREYER: In the chapter about planning registries, we talk a little bit about governance, and I want to know whether we have enough or we need more, because that's a question that comes up a lot. I think some people don't want more because they want room to do it their own way, but I think there are some really good examples of good governance that would be useful to people. To me, it is related to data protection and data sharing. So, my question to you is, when you read it, let me know if you need more about governance or if this is satisfactory.

DR. BERLINER: There is an open invitation for everyone to give us feedback on any chapter. So, we are planning these seven new chapters plus four new chapters on topics not on this list. But we are committed to updating every single chapter and keeping this document updated. Any comments that anyone has we would really appreciate.

DR. GLIKLICH: I wanted to make sure that we publicly thank the people at AHRQ who have been working on this project hand in hand with us, Elise Berliner who has been the project officer, and Scott Smith. They both work for Jean Slutsky. And, the editorial production staff at AHRQ has been unbelievable. So if it reads well, it's because of them. So I just want to make sure we thank them.

DR. BERLINER: I also want to thank the staff from our OCKT office. They worked so hard so we would have a printed copy to show everybody today and we really thank everybody. Are there any other questions? No? Well, if no one has any other questions, thank you all so much.