Research Report - Final – Feb. 12, 2014
Information Technology Infrastructure for N-of-1 Trials (Chapter 5)
This is a chapter from Design and Implementation of N-of-1 Trials: A User’s Guide. The full report can be downloaded from the Overview page.
Table of Contents
- What Does an N-of-1 Trial Platform Do?
- Implementation Feature Overview
- Example System: MyIBD
- Design Considerations
- Cross-Cutting Concerns
N-of-1 trials have not been adopted for broad use despite early successes and the promise of improved care and reduced costs.1 Barriers to adoption include education (addressed in Chapter 6), operational complexity,2 and costs. For example, in one system, the time spent per trial was approximately 16.75 hours; half of that time was spent on setup and another third on trial execution.3 Some of these time costs were attributable to carrying out the n-of-1 process itself, involving direct patient education and discussion. The remaining time was spent on activities related to trial design, transmission of the design to the pharmacist, analysis, and preparation and presentation of results. A detailed discussion of n-of-1 costs can be found in Chapter 3.
This chapter describes how a modern information technology (IT) infrastructure and design approach can reduce the costs, burden to end-users (patients and their clinicians), and complexity of administering and running n-of-1 trials at scale by automating trial workflow. IT infrastructure also offers new opportunities such as the ability to pull data from electronic medical records (EMRs), integrate with emerging consumer health devices, interact with patients or collect data via mobile platforms, and embed statistical analysis and visualization directly into Web-enabled platforms.
Prior work has addressed the procedures necessary to run individual trials using pen-and-paper techniques or simple electronic tools such as a spreadsheet,3 but to date attempts to build IT platforms for n-of-1 trials have not been commercially successful (see Chapter 3). Unfortunately, existing clinical trial management systems are inadequate for managing n-of-1 trials and are difficult to extend for this purpose. This chapter discusses features of an IT-based trial platform that will enable efficient scaling beyond individual provider use. It is written to provide clinicians, health services researchers, and IT specialists with a shared framework for discussing the use of IT to support n-of-1 trials. It defines relevant terms, identifies key requirements of n-of-1 trial systems, introduces tradeoffs to be considered, and warns of common pitfalls to avoid. While all of the proposed features should be considered during a project's design phase, only a subset is likely to be implemented in any one platform due to the complexity inherent in modern health care IT systems.
A research IT system, called MyIBD, is presented as an early example of such an integrated system. MyIBD is a prototype of a Personalized Learning System that uses longitudinal data collected from the context of everyday life to inform the management of chronic disease. Support of individual n-of-1 trials was one of the design goals for the platform. A pilot study by four health services researchers engaged a small pool of providers from three hospitals toward a target of 20 concurrent patients to identify issues involved in performing and scaling IT-supported n-of-1 trials as an intrinsic part of the care of chronic diseases. A future revision of the platform, informed by the pilot, is intended for deployment in the 50 or more centers of the ImproveCareNow4 pediatric inflammatory bowel disease (IBD) quality improvement network.
This chapter should be read as providing an array of practical options for organizations looking to develop an n-of-1 trial platform. As commercial and open-source IT platforms become available, this document will help organizations evaluate these offerings.
What Does an N-of-1 Trial Platform Do?
A general trials administration platform facilitates all phases of design, execution, and analysis activity as illustrated by Figure 5–1. Compared with traditional clinical trials, n-of-1 trials allow for greater user (patient and provider) participation in the selection and design phases through a dialog between a health professional and a patient and/or family member. The design of an n-of-1 trial may be specified by the care provider independently, through a shared decisionmaking exercise between the patient and the provider, or by a patient-driven process, to determine the treatments to compare, the outcomes to track, the format and content of the final report to be presented at the end of the trial, etc. In all cases, IT can support these steps by providing ready access to standard libraries of characterized treatments, outcome measures, and statistical tools. IT can help clinicians and patients jointly explore tradeoffs that affect the time, strength, and overall burden of the trial.
The end result of the design phase is a schedule of treatment periods with specified treatments and measurements necessary to execute the trial. An electronic platform can facilitate the trial’s execution through data collection, treatment reminders, and pharmacy interaction where necessary. Some trials may involve action plans that accommodate real-world challenges such as acute comorbidity, changes in routine, or periods of nonadherence. The platform should support a variety of adjustments to trials and track these adjustments for use in subsequent analyses.
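A schedule of treatment periods of this kind can be represented quite simply. The sketch below is a hypothetical illustration (the function and field names are ours, not any platform's) that generates a randomized multiperiod schedule in which treatment order is shuffled within each block:

```python
import random

def make_schedule(treatments, n_blocks, period_days, seed=None):
    """Build a randomized n-of-1 schedule: within each block, the order
    of the candidate treatments is shuffled. (Counterbalanced pairs are
    a common alternative to full shuffling.)"""
    rng = random.Random(seed)
    schedule = []
    day = 0
    for _ in range(n_blocks):
        order = list(treatments)
        rng.shuffle(order)
        for t in order:
            schedule.append({
                "treatment": t,
                "start_day": day,
                "end_day": day + period_days - 1,
            })
            day += period_days
    return schedule

# e.g., a two-treatment, three-block design with 14-day periods
plan = make_schedule(["A", "B"], n_blocks=3, period_days=14, seed=42)
```

Each entry then drives downstream execution: reminders are generated per period, and measurements are tagged with the period in which they fall.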
When the trial is complete, the platform should execute statistical analyses as specified in the protocol, which may include adaptive model selection; and the results should be displayed for joint clinician-patient interpretation (see Chapter 4). Because nearly all n-of-1 trials are conducted to inform a treatment decision in the ongoing care of a patient, visualizations and reports must be understandable to patients and their clinicians. Shared decisionmaking aids are an important part of facilitating the use and interpretation of n-of-1 trials in clinical care.
At scale, many individual trials are likely to be variations on a common theme. A library of experimental designs should also be supported by the platform. The track records for prior trials using a specific design can inform future users of problems with the protocol, the rate of successful trial completion, the ability to reach a definitive conclusion, and user feedback on their experience. Providers using the same protocol can exchange notes within or outside the platform. An automated scientific review process could be added to this IT infrastructure to ensure the soundness of the protocol and analysis plan for each trial.
Figure 5–1. Platform processes
Implementation Feature Overview
We recommend a modular and extensible architecture be used in the design of an n-of-1 trial platform.5 In this spirit, we introduce a list of desired capabilities of a trial platform as a guide to evaluating approaches for automating n-of-1 trials, or as a jumping-off point for designing a new approach.
Requirements of n-of-1 trial platform
Features supporting n-of-1 trials
- Record clinician goals and patient goals
- Document the experimental hypothesis
- Protocol implementation support
- Library of characterized treatments (including details of onset, carryover, etc.)
- Library of characterized measures (including precision and variance)
- Support for randomization
- Web service connections to acquire/share libraries of standard measures
- Trial protocol specification
- Choice of characterized treatments
- Choice of measures
- Choice of duration and number of treatment periods
- Decision on important covariates to track
- Analytical design
- Connection to Electronic Medical Records (EMRs), Personal Health Records (PHRs), pharmacy records (obtain medication context, lab reports, etc.)
- Data collection and user engagement support
- Data capture modules (e.g., choice lists, visual analog scales)
- Applications programming interfaces (APIs) to third-party data services such as sensors, apps (e.g., for symptom tracking)
- Direct email or Short Message Service (SMS) submission of patient-reported outcomes (PRO)
- Trial progress review screens for patients and clinicians, and other user engagement modules (e.g., leaderboards, rewards)
- Data analysis and review
- Data preprocessing modules
- Statistical analysis modules
- Visualization modules
- Data review and decision-support modules
Institutional support for n-of-1 trials
- Integration with electronic health records (EHRs) for recruiting and screening
- Configurable eligibility requirements
- Support for external informed consent processes and documentation requirements
- Population review
- Summary reports (e.g., participation, utilization)
Aggregation of n-of-1 trial results
- De-identification of patient record (for real-time in situ analysis, or for download to external systems for secondary analysis)
- Statistical analysis and aggregation of raw individual patient-level data
- Statistical analysis and aggregation of summary results data
- Statistical analysis and modeling of aggregated outcomes
- Models for using aggregated group outcomes to facilitate “borrow from strength” for individual treatment effects and to estimate individual-level heterogeneity of treatment effect
- Secure data storage
- Data transmission security
- Data downloading in multiple formats
- Authorization controls (who can do what)
- De-identified views of data
An n-of-1 trial platform will also need to support a variety of user roles at different stages of the process with different access to information maintained by the platform (Table 5–1). Not all technical platforms need to implement all roles explicitly.
Table 5–1. User roles

|Role|Capabilities|
|---|---|
|Patient and/or Caregiver|Access own data. Codesign n-of-1 trial. Enter data via Web, Short Message Service (SMS), mobile app, device, or third-party service. View and interpret results.|
|Clinician|Recruit and manage a sample of patients. Codesign n-of-1 trials. Monitor trial and data collection progress, and intervene as needed. View and interpret results.|
|Administrator|Provide institutional oversight, user account creation, and management.|
|Pharmacist|Receive instructions for specific patients/trials, including randomization schedule and blinding requirements. May interact through the trial system or by fax or secure email.|
|Statistician/Researcher|Review trial design and collected data for validity and/or aggregate analysis. May download identified or de-identified data for offline analysis.|
|Systems Administrator|Support operation of the IT system; provide user technical support.|
|Developer|Maintain and troubleshoot operational code.|
Example System: MyIBD
The MyIBD6 IT platform was developed by Cincinnati Children's Hospital Medical Center (CCHMC) and a third-party consulting group (including author I.E.) as part of its Collaborative Chronic Care Network (C3N) health services research project. The team targeted a minimal set of requirements to facilitate definition and management of up to 100 concurrent, independently designed n-of-1 trials. The platform is part of a personalized learning system intended to capture evidence from a patient's daily life to improve and augment patient-provider dialog. The system is intended to validate the efficacy of treatments with known heterogeneity of response, or to evaluate other treatments for which minimal research exists. Aggregation of n-of-1 trials was not a goal of the version of MyIBD reported here.
A Web form is used to capture the trial goals and design constraints. The system supports simple A-B treatment comparisons as well as multiphase withdraw/reversal or alternating designs. Trial outcomes are monitored using Shewhart-style statistical control charts (Figure 5–2). A single data review screen provides a scrollable view of all measures. The measures are plotted on an Xbar control chart using 3-sigma control lines calculated from the first 20 measured data points (the minimal baseline period for this platform).
Figure 5–2. Control charts
Screenshot Copyright © 2014 Cincinnati Children's Hospital Medical Center and Vital Reactor; all rights reserved.
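The 3-sigma control lines described above are straightforward to compute. The sketch below uses the moving-range convention common for individuals (XmR) Shewhart charts to estimate sigma from a baseline window; MyIBD's exact calculation is not specified here, so treat this as illustrative:

```python
def control_limits(baseline, d2=1.128):
    """3-sigma limits for an individuals (XmR) control chart.

    Sigma is estimated as (average moving range) / d2, with d2 = 1.128
    for subgroups of size 2 -- a common Shewhart convention.
    """
    if len(baseline) < 2:
        raise ValueError("need at least two baseline points")
    center = sum(baseline) / len(baseline)
    # average absolute difference between consecutive points
    mrbar = sum(abs(a - b) for a, b in zip(baseline[1:], baseline)) / (len(baseline) - 1)
    sigma = mrbar / d2
    return center - 3 * sigma, center, center + 3 * sigma

# limits from a 20-point baseline, matching the platform's minimal window
lcl, cl, ucl = control_limits([4, 5, 6, 5, 4, 5, 6, 5, 4, 5,
                               6, 5, 4, 5, 6, 5, 4, 5, 6, 5])
```

Points outside the resulting limits flag special-cause variation for clinician review.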
The MyIBD system currently supports three roles: Administrator, Researcher, and Patient. A Clinician role linked to a subset of patients with clinician population management features is planned. The service consists of a simple administrative dashboard to create and review user accounts. Administrators are the only users able to see patient identifiers. Researchers are shown a de-identified list of the patient population (Figure 5–3) and can click to review individual charts.
Figure 5–3. De-identified population view
Screenshot Copyright © 2014 Cincinnati Children's Hospital Medical Center and Vital Reactor; all rights reserved.
Users are given a dashboard showing outstanding data recording actions, recently recorded data, and additional reports for configuring third-party services, updating data recording schedules, and a generic journal entry facility (Figure 5–4). Due to the challenges of integrating with the hospital's commercial EMR and network registry, manual entry of medication and treatment periods is facilitated via the Journal Entry mechanism.
Figure 5–4. User tasks
Screenshot Copyright © 2014 Cincinnati Children's Hospital Medical Center and Vital Reactor; all rights reserved.
The initial target population is pediatric patients with IBD such as Crohn's and ulcerative colitis. An integration of standard measures is provided including PROMIS Fatigue (weekly), PROMIS Pain (weekly), and PEDS QL (monthly). The system supports a user-extensible catalog of measures, treatments, and experimental protocols (Figure 5–5). This catalog is maintained manually and reviewed by researchers, and made available to patients and clinicians. For now, no procedures have been implemented to import measures from external data sources.
Figure 5–5. Catalog browser for measures
As of this writing, the clinical team has opted not to support treatment blinding, as many of the planned treatments, such as diet, home supplementation, or lifestyle changes, are difficult to blind. Consequently, all of the collected data and treatment context are available to both patients and clinicians throughout the trial.
The full development costs for this system are anticipated to be over $250,000. The ongoing infrastructure and maintenance costs are currently $400/month. This fee includes software and service licenses and a set of four cloud-hosted servers supporting high availability, backups, Short Message Service (SMS) transaction fees, and other miscellaneous costs. Additionally, a $1,000/month support contract was in place for the first year to secure 1-2 days/month of consultant time for critical bug fixes and small feature enhancements. The project is exploring making the existing functionality more widely usable, including a software-as-a-service offering and open-source licensing of the underlying code base.
The benefit of this large up-front investment is that sustaining and per-patient costs are minimized, requiring only a small per-user fee. A single installation of the MyIBD platform will eventually scale to thousands of patients with only minor changes to the user interface. Scaling can also be accomplished by deploying the service over multiple servers. The incremental per-user cost in the current system is almost entirely driven by usage fees for the SMS gateway service. Tracking three daily values per patient requires approximately 90 messages/month. At one cent per message and with thousands of total users, the amortized IT cost per user per month is estimated to be around $1.00. While asymptotic infrastructure costs can be very low, this does not mean that using a trial system is inexpensive. The majority of costs will be in services such as user support, technical support, ongoing development, multilingual translation, statistical and methodological review, and clinician review. Fortunately, many of these service costs are amortizable (e.g., translations and technical support).
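The per-user SMS arithmetic above can be made explicit; the figures are the ones assumed in the text (three tracked values per day, one cent per message):

```python
# Rough per-user messaging cost, using the assumptions stated in the text.
values_per_day = 3
price_per_message = 0.01            # one cent per SMS

messages_per_month = values_per_day * 30   # ~90 messages/month
sms_cost = messages_per_month * price_per_message  # ~$0.90/month per user
```

The remaining dime or so of the estimated $1.00/month covers the amortized share of infrastructure and hosting.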
The following sections take a deeper look into many of the important components of an n-of-1 trial platform listed in "Requirements of n-of-1 trial platform" and discussed above. As introduced in Chapter 1 and demonstrated in the MyIBD example, n-of-1 trials ensure validity of the trial inference through two primary mechanisms: protocol-defined, time-varying exposure, and systematic measurement of outcomes (patient reported and others) via EMRs, Web sites, mobile devices, and sensors. Proper interpretation of these data requires a data cleaning and analysis pipeline to generate findings that populate result reports and visualizations. The tradeoffs involved in defining an n-of-1 protocol can be subtle and complicated. Not all providers are equally facile at measure or trial design, so we recommend the inclusion of features to import or generate libraries of measures and protocols to facilitate reuse.
Time-varying exposure refers to the restriction on patient behavior during different treatment periods to ensure that he/she is exposed to the alternative treatment conditions being compared (one of which can be the control condition, including no treatment, placebo, or current treatment) according to a prespecified schedule. As discussed in Chapters 1 and 4, the schedule of exposure needs to account for the operating characteristics of the treatment conditions, such as onset time, washout (for withdraw/alternating designs), and the variability of the measurement (how many samples are needed over what duration of time to get an accurate estimate of the outcome).
For n-of-1 trials, the issues involved in treatment adherence are accentuated due to the complexity of scheduled switches between treatments being tested. Treatment adherence may be facilitated by automated reminders, generated from the IT system, of what behavior is needed at a particular point in time, as well as (for drug treatments) prepackaged dose packs for specific treatment intervals managed by the pharmacy.7 IT systems can also support blinding by masking the clinician, pharmacist, and the patient to the patient’s assigned treatment.
An IT platform should accommodate the effects of unanticipated events (such as hospitalization, vacations, nonadherence) by allowing the study protocol to be adjusted midstream. For example, data collection may need to be suspended for a period of time, or phases of the trial may need to be restarted, possibly including reverification of the patient’s baseline. Changes may need to be made in trial execution (e.g., producing reminders, interacting with a pharmacy) or in analysis. This highlights the difference between an intended “planned study protocol” and the actual “executed study protocol.”8 Accomplishing automatic adjustments requires that the system maintain a representation of treatment effects such as onset and washout periods or facilitate explicit changes in the schedule by an expert (see related discussions in the section “Adaptation” in Chapter 4).
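One way to keep the planned and executed protocols distinct, as described above, is to leave the planned schedule intact and log adjustments as events. The sketch below is a minimal illustration; the class and field names are hypothetical, not drawn from any existing platform:

```python
from dataclasses import dataclass, field

@dataclass
class Period:
    treatment: str
    start_day: int
    end_day: int
    status: str = "planned"   # planned | suspended | restarted

@dataclass
class Protocol:
    periods: list
    adjustments: list = field(default_factory=list)  # the executed-protocol log

    def suspend(self, day, reason):
        """Suspend the period covering `day` and log the deviation, so the
        executed protocol can later be distinguished from the planned one."""
        self.adjustments.append({"day": day, "action": "suspend", "reason": reason})
        for p in self.periods:
            if p.start_day <= day <= p.end_day:
                p.status = "suspended"

proto = Protocol([Period("A", 0, 13), Period("B", 14, 27)])
proto.suspend(day=16, reason="hospitalization")
```

The analysis phase can then decide how to treat suspended periods (exclude, truncate, or restart) using the adjustment log rather than silently altered data.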
The second critical component of an experiment is the measurement of observations of the patient over time. In published n-of-1 trials, measurements often consist of questionnaire responses or lab values at the end of a trial phase; however, n-of-1 trials are increasingly leveraging time-series data, especially with mobile technologies that enable frequent “ecological momentary assessments”9 (EMA) of patient symptoms. Time-series data involve repeated daily or weekly measurements that are taken within a given phase of a trial (see the section “Statistical Models and Analytics” in Chapter 4). Each measurement may be a single value such as weight or a compound multidimensional assessment, such as:
- Time point or time unit: Length of time since start of a treatment period, or a timeframe of validity (e.g., over the last week)
- Measure(s) (integer, decimal, categorical, ordinal, free text, compound, complex):
- Devices often generate multiple and compound measures for a single time point (e.g., systolic and diastolic blood pressures or GPS latitude/longitude)
- Complex responses such as survey items10 may be summarized into summary scores or indexes
- Devices may need to be calibrated and harmonized, especially if a variety of different devices are used for the same measure
- Definition: What does this measure represent? Often this includes binding to a terminology or coding system (e.g., LNC 1558-6, the LOINC code for fasting blood glucose), which is particularly important for aggregating data across multiple trials and/or centers.
In any trial, but especially in n-of-1 trials, the context for a data element may have a large effect on observed measures and on overall trial results. For example, if a patient records a pain score of “2 at 10:30 p.m. EST on Tuesday, March 5, 2013,” we may also want to know:
- Mode. How did the patient record this measure? Was it via telephone, SMS, Web application, a measurement device, or pencil and paper?
- Time recorded. When did the patient record the measure? He or she may have forgotten to record that morning and instead recorded it that evening, for example, reporting at 10 p.m. a pain score that was actually experienced at 2 p.m.
- Prompt. What, if any, prompt elicited the measure?
- Schedule. Was the prompt delivered at a random moment or on a fixed schedule? Randomization of prompts may improve the accuracy of the resulting data.
- History. Were the data ever changed or updated?
- Respondent. Were the data entered by the patient, entered by a proxy (e.g., parent, caretaker), or recorded by an automated device?
These “data about data” are called metadata. It is critical that a trial IT platform have robust support for collecting, storing, and analyzing metadata, because these contextual factors can interact with time-series data and greatly impact trial inference. For trials with small effects or poor precision (e.g., because the number of treatment periods is limited), these interaction effects may overwhelm and mask the underlying effect of interest. Metadata are particularly critical to facilitate aggregation of data from individual trials to estimate population effects or to predict the likely outcome of future n-of-1 trials. In the absence of existing standards around metadata capture, a platform should support an open-ended set of elements and allow providers of observations and other data to expand the set of labels over time without requiring central coordination or standards.
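A single observation record combining the measurement fields above with an open-ended metadata set might look like the following sketch; the schema is illustrative, not a standard:

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class Observation:
    """One time-series data point plus open-ended metadata.

    The field names are illustrative; only the LOINC code itself
    (1558-6, fasting glucose) is a real terminology binding.
    """
    code: str                 # terminology binding, e.g., a LOINC code
    value: float
    recorded_at: datetime
    # open-ended label set: mode, respondent, prompt, schedule, history...
    metadata: dict = field(default_factory=dict)

obs = Observation(
    code="1558-6",            # LOINC: fasting blood glucose
    value=92.0,
    recorded_at=datetime(2013, 3, 5, 22, 30),
    metadata={"mode": "SMS", "respondent": "patient", "prompt": "scheduled"},
)
```

Because the metadata field is an open dictionary, new labels can be added by data providers over time without a schema migration, matching the no-central-coordination requirement above.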
Statistical Analysis Modules
Chapter 4 identifies a variety of options for performing statistical analysis of n-of-1 trials, including simple statistical tests on observed results (t-test, ANOVA, etc.), regression analysis, and Bayesian analysis using closed-form solutions or numerical solutions such as techniques based on Markov-chain Monte Carlo. A variety of statistical procedures are needed for model selection, filling in missing data, adjusting for time-series effects (e.g., autocorrelation models), removing short-term special-cause variations, preprocessing the data for visualization, et cetera. Over time, different measures and different trial conditions may require new forms of analysis.
It is essential that the IT system supports the automation of the statistical procedures needed (see the section “Automation of Statistical Modeling and Analysis Procedures” in Chapter 4). For statistical procedures that are implemented in a statistical language such as R or Matlab, export procedures for analyzing the data in external packages are often needed. R, for example, can be embedded or linked as a Web service to other software, which may simplify the creation and extension of the statistical facilities of the trial platform. Extension of the statistical facilities may become especially important if the platform incorporates in situ aggregation techniques, already an active area of research in statistics and a likely active area of research and development in IT in the coming years.
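As a minimal example of the simple tests listed above, the following sketch computes Welch's t statistic for two treatment-period samples using only the standard library. A production platform would more likely delegate to a full statistics package (e.g., R, or `scipy.stats.ttest_ind` with `equal_var=False`), which also supplies the p-value omitted here:

```python
from math import sqrt
from statistics import mean, stdev

def welch_t(a, b):
    """Welch's t statistic (unequal variances) comparing two samples of
    outcome measurements, e.g., from treatment A vs. treatment B periods."""
    va = stdev(a) ** 2 / len(a)
    vb = stdev(b) ** 2 / len(b)
    return (mean(a) - mean(b)) / sqrt(va + vb)

# daily symptom scores from one treatment period vs. another (toy data)
t = welch_t([6, 7, 6, 8, 7], [4, 5, 4, 5, 5])
```

Embedding such procedures behind a uniform interface lets the platform swap in autocorrelation-aware or Bayesian models later without changing the trial workflow.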
For n-of-1 trials with strong effects, it might be straightforward to analyze the trial outcomes using simple visual techniques. Visualizations typically involve a data preparation phase (data transformation, filtering, etc.), a customization of the data presentation (colors, emphasis, and labeling), and a rendering phase. If the trial platform is Web based, there are many off-the-shelf charting packages that may suffice, but for implementing the annotations that are often helpful to interpreting n-of-1 trial results, a more flexible graphics language such as D3 may be a more appropriate foundation.
Many clinicians and most patients have limited statistical numeracy.11 N-of-1 trials therefore require very clear communication of trial results that use visual heuristics and user-friendly, comprehensible statistical interpretation of the trial findings (see the section “Presentation of Results” in Chapter 4).
Health IT systems are typically designed for clinical settings and often lack an effective interaction design12 (i.e., the workflow and experience of a software-enabled process that encompasses the user interface). Designing and implementing a user interface is complex and costly, but building a seamless and engaging user experience can be an order of magnitude more challenging and expensive. With increasingly sophisticated user experiences becoming commonplace in consumer software (e.g., Apple iPhone), many patients and clinicians will expect equally sophisticated design from health IT. Thus, IT platforms designed to scale up n-of-1 trials must devote adequate resources to user interaction design to ensure user uptake and engagement.
Important capabilities of n-of-1 platforms include sharing of n-of-1 outcomes among providers and researchers for secondary analysis, aggregation of results for estimation of population effect, and population management. Platforms should export both identified and de-identified datasets, or support dynamic updating of population-oriented data systems such as i2b213 or SHRINE.14
Further, the move to patient-centered care has increased the emphasis on enabling patients to easily access their own data. An emerging consumer ecosystem is using longitudinal health data for exercise performance, weight-loss coaching, personal health tracking, Quantified Self,15 and other activities. Making data available via an applications programming interface (API) to third parties, with explicit patient consent, will be an important capability of future systems. The Open mHealth project is a leading effort to provide interoperability standards at the data and protocol layers.16
Another form of sharing that can be valuable is for patients to share their own data, displays, and reports with family, friends, peers, or over social media channels. This level of semipublic sharing may not be suitable for all provider settings, but facilitating patient use of these sources of social support and reinforcement can have therapeutic value. Sharing data over social media can be enabled through exposing patient-approved, public visualizations of the data as sharable URLs or through social media via n-of-1 platforms that send status updates.
Templates and Libraries
As described above, a library of treatments and measures will help to simplify the process of trial design and specification, to increase methodological strength, and to enhance user engagement. Designers of n-of-1 trials can consult such a library to find detailed information about treatments, including their speed of onset, washout periods, and other necessary information in designing a proper n-of-1 trial. Detailed information on outcome measures would also be helpful. Measurements have statistical properties that can be characterized for the average respondent, such as reliability, variance, and reproducibility. Measurements may also be subject to biases, including practice effects, onset behavior, etc. These characteristics of treatments and measures must be taken into account to ensure methodologically strong scheduling and analysis of a specific trial. Examples of preexisting libraries for measures include: PROMIS,10 GEM,17 NeuroQOL,18 NIH Toolbox,19 and PatientsLikeMe.20
An n-of-1 methodology library should also include parameterized templates of successful trials that can be used or adapted “off the shelf” by other patients or providers on the same platform with similar study questions. This will reduce the barrier for clinicians and patients who do not have the statistical or methodological expertise to design and run n-of-1 trials on their own. Classes of trials that have been successfully reviewed by statisticians and methodologists may be fully automatable, reducing overall personnel costs and burden, while increasing the methodological quality of executed studies.
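A parameterized template of the kind described above might be instantiated as in the sketch below. The template contents and names (including the example template itself) are hypothetical:

```python
def instantiate(template, **params):
    """Fill a parameterized trial template, refusing to proceed if any
    required parameter is missing."""
    missing = set(template["parameters"]) - set(params)
    if missing:
        raise ValueError(f"missing parameters: {sorted(missing)}")
    trial = dict(template["defaults"], **params)
    trial["design"] = template["design"]
    return trial

# a hypothetical reviewed template with two free parameters
SLEEP_TEMPLATE = {
    "design": "ABAB",
    "parameters": ["dose_mg", "period_days"],
    "defaults": {"outcome": "PROMIS Sleep Disturbance"},
}

trial = instantiate(SLEEP_TEMPLATE, dose_mg=3, period_days=14)
```

Because the design, default outcome, and parameter list are fixed by the reviewed template, each instantiation inherits the methodological review rather than requiring a new one.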
A methodology library such as we discuss here could be a component of a full-featured n-of-1 IT platform, or it could be a common shared resource. The more closely such a library is integrated into an operational IT platform, the more easily those treatment and outcome measure characteristics could be populated directly from the results of prior trials (e.g., the typical within-subject and across-subject variations of a measurement).
Technology promises to help close health disparities, but underserved populations have special needs that require special attention. The user interaction design should be culturally sensitive, and instructions and prompts should be adapted and translated expertly to accommodate multicultural, multilingual populations. Section 508 compliance21 should also be sought if the targeted audience includes those visually or hearing impaired.
An n-of-1 trial platform needs to be designed with the same consideration given to any health IT system that maintains patient data. The goal is to facilitate patient access, clinician utility, and third-party review with minimal effort, while preserving privacy consonant with ethical principles and applicable regulations. De-identification of data can provide privacy protection sufficient to enable authorized people to review records and explore aggregate data analysis safely and ethically. However, some types of data (e.g., location traces, genomic data) are almost impossible to de-identify. Privacy is best maintained and assured by a combination of technology and policy.
In U.S.-based settings where research is performed on the data collected by the platform, all developers and system administrators will need to complete Human Subjects Training, as they will have physical access to patient identifiers. (See Chapter 2 for a more complete treatment of human subjects issues relevant to n-of-1 trials.)
Data Security and HIPAA/HITECH
U.S.-based systems that store or operate on patient data must adhere to a set of regulations created by the Health Insurance Portability and Accountability Act of 1996 (HIPAA) and the Health Information Technology for Economic and Clinical Health Act (HITECH). HIPAA and HITECH set out the rules that govern the security and privacy of Protected Health Information (PHI), such as names, dates of service, and contact information, provided to “covered entities,” such as health plans or health care providers. These provisions are relevant to platform and service providers because covered entities are required to ensure that they have formal Business Associate Agreements (BAA) with any vendor that processes patient information. The vendor, in turn, must have a BAA with any of their vendors, such as cloud service providers, that store or transmit PHI in nonencrypted form, even if such access is transient and limited.
Some of the provisions of HIPAA require that PHI be encrypted both “at rest” and “in transit.” This means that data stored on the platform servers or in backup archives must be encrypted, and data transmitted between components of the platform (such as between Web server and database, or between Web servers and clients such as Web browsers) must also be encrypted. Further, all procedures for managing access to data and system administration must be formally documented and controlled. There are two approaches to encryption: bulk encryption, such as an encrypted database or file system, and “column” encryption, which singles out HIPAA-delineated identifiers and encrypts them at the application layer. Unless the database explicitly supports column encryption, managing encryption and decryption of specific fields at the application layer can add considerable complexity. Indexing and searching over PHI fields are often required in IT systems and can be facilitated by creating proxy fields that are one-way hashes (enabling indexed lookup of a patient’s email address) or transformed values (such as date of birth truncated to a year) that allow a query to restrict to a superset of the desired range of dates.
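The proxy-field idea above can be sketched as follows. This is an illustrative example, not a compliance recipe: the record fields and the key-handling shortcut (a hard-coded key) are assumptions for brevity, and a real deployment would keep the key in a key management service.

```python
import hashlib
import hmac

# Deployment-specific secret for keyed hashing. In practice this would
# come from a key management service, never from source code.
LOOKUP_KEY = b"example-secret-key"

def email_proxy(email: str) -> str:
    """One-way keyed hash of an email address, stored alongside the
    encrypted value so exact-match lookups can use a database index
    without exposing the cleartext address."""
    normalized = email.strip().lower().encode("utf-8")
    return hmac.new(LOOKUP_KEY, normalized, hashlib.sha256).hexdigest()

def dob_proxy(dob_iso: str) -> int:
    """Truncate a date of birth (YYYY-MM-DD) to its year, so a query
    can restrict to a superset of the desired date range before
    decrypting and filtering the exact values."""
    return int(dob_iso[:4])

# Lookup works by hashing the query the same way as the stored proxy.
assert email_proxy("Jane.Doe@example.org") == email_proxy(" jane.doe@example.org ")
assert dob_proxy("1975-06-14") == 1975
```

Normalizing before hashing (trim, lowercase) is what makes the proxy usable for lookup; a keyed hash rather than a plain hash prevents offline dictionary attacks against the proxy column.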
One of the most challenging aspects of adhering to the latest regulations in the emerging ecosystem of data sources is identifying the line of demarcation between a trial platform and a third-party service. Standard SMS, for example, is not a secure, HIPAA-compliant communications channel, and the third-party SMS provider is storing the patient’s mobile phone number, a piece of PHI. However, regular text messaging is far easier for patients to use than “secure” text messaging solutions developed for provider-to-provider communication. The same rules apply for consumer devices and services such as Fitbit22 and FitnessKeeper23 that collect and store consumer data on systems that are typically not HIPAA compliant.
The intent of HIPAA was to protect patients and improve their access to data, even if in practice it appears to make access more difficult. Under the 2013 omnibus regulations,24 communication channels such as SMS and email are acceptable for patient-provider communication and data exchange if users have explicitly asked to exchange data over a particular channel, the risks are modest, and they have been informed of those risks.
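A platform acting under the omnibus rule needs an auditable record that the patient chose the channel and was informed of its risks. The sketch below is a hypothetical data model (the field names and `ChannelConsent` type are illustrative, not from any regulation or existing system):

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class ChannelConsent:
    """Record of a patient's explicit, informed choice of an
    unsecured communication channel (e.g., SMS or email)."""
    patient_id: str
    channel: str           # e.g., "sms" or "email"
    risks_disclosed: bool  # patient was told the channel is unsecured
    recorded_at: str       # UTC timestamp for the audit trail

def record_consent(patient_id: str, channel: str,
                   risks_disclosed: bool) -> ChannelConsent:
    # Consent over an unsecured channel is only valid if the risks
    # were disclosed first; refuse to record it otherwise.
    if not risks_disclosed:
        raise ValueError("risks must be disclosed before consent is valid")
    return ChannelConsent(patient_id, channel, True,
                          datetime.now(timezone.utc).isoformat())

consent = record_consent("p-001", "sms", risks_disclosed=True)
assert consent.channel == "sms"
```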
Even if platform operations are outsourced to a third party, it is important to designate someone in the provider organization who can review the organization’s obligations under current legislation with regard to risk exposure, documentation, and notification in the event of a breach. That person should audit platform procedures and recommend improvements and/or fixes on a regular basis and prior to major updates.
Authentication and Authorization
One critical consideration for platforms that provide interfaces to users (clinicians, patients, or any other user) is how those users will be authenticated to the system. Authentication is the process of determining whether the user is indeed the intended user, and authorization is the logic that determines what a specific user can do. N-of-1 systems are particularly amenable to role-based authorization, where a given user satisfies one or more roles that in turn dictate what data or reports they have access to. For example, clinicians can review only data on their own patients, and reviewers can see only de-identified population-level data and review summary records.
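The role-based scheme described above can be expressed very compactly. A minimal sketch, with illustrative role and permission names (not drawn from any particular system):

```python
# Map each role to the set of permissions it grants. A user holds one
# or more roles; roles, not users, carry the access rules.
ROLE_PERMISSIONS = {
    "clinician": {"view_own_patients", "design_trial"},
    "patient":   {"view_own_data", "enter_observations"},
    "reviewer":  {"view_deidentified_aggregate", "review_summaries"},
}

def is_authorized(user_roles: set, permission: str) -> bool:
    """A user is authorized if any of their roles grants the permission."""
    return any(permission in ROLE_PERMISSIONS.get(role, set())
               for role in user_roles)

assert is_authorized({"clinician"}, "design_trial")
assert not is_authorized({"reviewer"}, "view_own_patients")
```

Note that a role check like this still needs to be paired with data scoping (a clinician sees only *their own* patients), which is typically enforced in the query layer rather than the role table.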
Everyone faces the problem of having to remember passwords for a wide variety of systems. Where possible, an n-of-1 platform should seek to integrate with existing authentication systems to facilitate easy credential recovery according to industry best practices. At the time of this writing, the OAuth25 standard is emerging as the most widely used consumer authorization framework for third-party access to data via APIs. Within the health care sector, discussions around a national Unique Patient Identifier involve many technical and policy challenges.26 Current thinking is to ensure unique identification through a combination of patient identifiers and processes (e.g., two-factor identification), and any n-of-1 platform should align with the identification policies specific to health care.
Though n-of-1 trials are an old technique, they remain novel in most health settings and have received limited attention from the academic research community. Techniques and trial design styles are likely to evolve, and it is prudent for trial platform designers to adopt a modular and extensible approach to facilitate the adoption of new techniques over time. Ideally, extensibility is made possible through a “plug-in” architecture, allowing third parties to add functionality to a well-defined API or data format without requiring a deeper understanding and extensive remodeling of the existing platform.
The areas where extensibility is most important include: a catalog of reusable components and templates (e.g., standardized measures like PROMIS), user interaction templates for surveys (to facilitate development and adoption of new adaptive assessment models), processing modules, statistical modeling and analysis modules, visualization modules, and shared decision support.
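One simple way to realize such a “plug-in” architecture is a registry keyed by module name, against which third parties register implementations of a small, well-defined interface. The sketch below is hypothetical (the registry, decorator, and toy statistic are assumptions, not part of any named platform):

```python
from typing import Callable, Dict, Sequence

# Catalog of analysis plug-ins: name -> function over a data series.
ANALYSIS_PLUGINS: Dict[str, Callable[[Sequence[float]], float]] = {}

def register_analysis(name: str):
    """Decorator that adds an analysis module to the platform catalog
    without any change to core platform code."""
    def wrap(fn):
        ANALYSIS_PLUGINS[name] = fn
        return fn
    return wrap

@register_analysis("mean_difference")
def mean_difference(series: Sequence[float]) -> float:
    # Toy statistic: difference between the means of the second and
    # first halves of the series, standing in for a real n-of-1
    # treatment-effect estimate.
    mid = len(series) // 2
    a, b = series[:mid], series[mid:]
    return sum(b) / len(b) - sum(a) / len(a)

# The platform dispatches by name; new modules need only register.
assert ANALYSIS_PLUGINS["mean_difference"]([2.0, 2.0, 4.0, 4.0]) == 2.0
```

The same pattern extends to visualization and survey-template modules: the core platform defines the interface and iterates over whatever has been registered.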
Open mHealth16 defines an open data and software interoperability approach, along with a specification for measures, that is congruent with this perspective. It defines a framework for modular assembly of data processing, statistical modeling and analysis, and visualization modules to create specific time-based or summary views of time-series data, and allows integration of Web-based services. For example, Web services can match a patient’s electronic health record data to heuristics about the types of patients and situations amenable to n-of-1 studies (discussed in Chapter 6), and thus prompt the clinician to consider an n-of-1 study for that patient. However, the state of the art in computational eligibility determination is quite rudimentary.27
Another model to consider is the Substitutable Medical Apps Reusable Technologies (SMART) platform,28 a standard that defines a model for pluggable applications for EMR systems. SMART defines a Web-based model for interoperability, allowing third-party Web-based user interfaces to plug into the patient portal of an EMR or personal health record. A SMART approach could support the provision of n-of-1 studies as one of several point-of-care research studies that clinicians can “order.”29 Providing similar facilities would allow new developers to build shared decision support screens, Web-based instrumentation, or new visualization and/or processing solutions for a core platform.
When considering buying or building an n-of-1 platform, the total cost of ownership should be carefully analyzed. Though initial development of a trial platform may be a fixed cost borne by a research grant or one-time funds, ongoing support costs, particularly as needs evolve, can be substantial. Resources will be required for ongoing user support, technical support, hosting, and new feature development. Adaptation and translation costs for multicultural, multilingual support can be significant not only up front, but also over time as site content is updated. Service costs can also be significant if users are contracting with third-party services such as a gateway for eliciting data via SMS, telecom costs for phone/fax, and transcription costs for manually transcribing paper responses. Institutional owners of n-of-1 platforms may consider recharge mechanisms for defraying carrying costs.
Development of a custom IT-based n-of-1 trial platform calls for significant up-front investment, as illustrated by the MyIBD example. Moreover, it requires significant ongoing investment for both clinical and IT operations. Given the high costs, the lack of strong evidence establishing value, and the small market, there are currently no commercially available n-of-1 trial platforms. It is likely that such platforms will be developed instead with government or foundation funding intended to characterize the applicability and use of n-of-1 trials at scale.
Institutions interested in IT support for n-of-1 trials will find it prudent to maximize knowledge transfer and to amortize investments across multiple institutions by leveraging existing open-source projects and approaches. If in-house development and management of a clinical trials platform is impractical, it may be possible to use one of these open-source platforms through collaborations with other institutions hosting such platforms.
Another option is to investigate reuse or extension of existing clinical trial management or other data acquisition systems, although many clinical trial management systems are designed explicitly for traditional parallel group randomized trials and might be difficult to adapt to the time-varying exposure design in n-of-1 trials. In time, commercial services and/or software offerings may lower costs of ownership and increase functionality beyond what is offered by open-source alternatives.
Nevertheless, it is possible to facilitate many n-of-1 trial activities without the comprehensive design approach advocated here. MyIBD provides one example of simplifying and accelerating n-of-1 trial deployment with only a small subset of these features. However, each of the features advocated here will expand the population a platform can serve with greater ease of use and reduced costs.
| Checklist item | Comments |
| --- | --- |
| Determine the purpose for which you are creating or buying an n-of-1 trial IT platform | |
| Decide whether to build or buy | |
| Decide on open- or closed-source solutions | |
| Choose a hosting model | Is the service hosted in your institution’s facility or managed by external resources on servers not under your direct control? A cloud solution is preferred if it satisfies your institution’s HIPAA and/or Institutional Review Board obligations and integrates with your clinical systems where needed. |
| Define patient ownership of and access to data | |
| Assure that platform is sufficiently flexible to support the range of anticipated n-of-1 designs | Involve methodologists and statisticians in developing the design specifications of the system. |
| Protocol design support | |
| Provide a population management view | |
| Adaptive schedule management | |
| Provide a Web-based portal for trial review by all participants | |
| Provide built-in data collection facilities | |
| Support download of trial data for post-trial analysis | Allow for de-identified download of raw data for additional statistical review in case platform analysis is insufficient for a specific trial. |
| Obtain requisite data from the electronic medical record | |
| Enable connection to pharmacist services | |
| Provide multilingual and culturally sensitive versions | |
| Ensure Section 508 compliance if applicable | |
| Integrate with other institutional IT systems | |
| Provide printed forms and reports; support manual transcription from paper | |
| Support scanning of printed records | Optional: support for scanning and/or OCR (optical character recognition) of paper records will reduce workflow costs and enable de-identified transcription. |
| Interoperate with third-party services | Provide support for importing data from mobile devices, medical and consumer devices, and third-party service platforms. |
| Provide educational materials | |
| Simplify human subjects research | |
| Simplify methodology review | Provide facilities for online and offline methodology review. |
| Provide user support | |
| Provide technical support | |
- Scuffham PA, Nikles J, Mitchell GK, et al. Using n-of-1 trials to improve patient management and save costs. J Gen Intern Med. 2010;25(9):906-913.
- Larson EB. N-of-1 trials: a new future? J Gen Intern Med. 2010;25(9):891-892.
- Larson EB, Ellsworth AJ, Oas J. Randomized clinical-trials in single patients during a 2-year period. JAMA. 1993;270(22):2708-2712.
- Crandall W, Kappelman MD, Colletti RB, et al. ImproveCareNow: the development of a pediatric IBD improvement network. Inflamm Bowel Dis. 2011;17:450-457.
- Chen C, Haddad D, Selsky J, et al. Making sense of mobile health data: an open architecture to improve individual- and population-level health. J Med Internet Res. 2012;14(4):e112.
- Eslick I, Kaplan H, Chaffins J, et al. MyIBD. https://myibd.c3nproject.org/. Accessed July 14, 2013.
- Guyatt G, Sackett D, Adachi J, et al. A clinician's guide for conducting randomized trials in individual patients. CMAJ. Sep 15 1988;139(6):497-503.
- Sim I, Niland JC. Study Protocol Representation. In: Richesson RL, Andrews JE (Eds.) Clinical Research Informatics. London: Springer; 2012.
- Shiffman S, Stone A, Hufford MR. Ecological momentary assessment. Annu Rev Clin Psychol. 2008;4:1-32.
- The National Institutes of Health. The Patient-Reported Outcomes Measurement Information System (PROMIS), 2013. http://nihpromis.org. Accessed July 14, 2013.
- Gigerenzer G, Gaissmaier W, Kurz-Milcke E, et al. Helping doctors and patients make sense of health statistics. Psychological Science in the Public Interest. 2008;8(2):53-96.
- Cooper A, Reimann R, Cronin D. About Face 3: The Essentials of Interaction Design. Indianapolis, IN: Wiley; 2012.
- Kohane IS, Altman RA. Health-information altruists – a potentially critical resource. N Engl J Med. 2005;353:2074-2077.
- Weber GM, Murphy SN, McMurry AJ, et al. The Shared Health Research Information Network (SHRINE): a prototype federated query tool for clinical data repositories. J Am Med Inform Assoc. 2009 Sep-Oct;16(5):624-630.
- Wolf G. Know thyself: Tracking every facet of life, from sleep to mood to pain. Wired Magazine. 2009:24(7):365.
- Chen C, Haddad D, Selsky J, et al. Making sense of mobile health data: an open architecture to improve individual- and population-level health. J Med Internet Res. 2012;14(4):e112. doi: 10.2196/jmir.2152.
- National Cancer Institute, GEM Grid-Enabled Measures Database. https://www.gem-beta.org. Accessed July 14, 2013.
- Northwestern University, NeuroQOL: Quality of Life in Neurological Disorders. http://www.neuroqol.org. Accessed July 14, 2013.
- National Institutes of Health and Northwestern University, NIH Toolbox: For the assessment of Neurological and Behavioral Function, 2006-2013. http://www.nihtoolbox.org. Accessed July 14, 2013.
- Frost, J, Massagli, M. Social uses of personal health information within PatientsLikeMe, an online patient community. What can happen when patients have access to one another’s data. J Med Internet Res. 2008;10(3):e15.
- Office of Federal Contract Compliance Programs. Section 508 of the Rehabilitation Act of 1973, as amended. 29 USC Section 793. 1993. http://www.section508.gov/.
- Fitbit, Inc., 2013. http://www.fitbit.com/company.
- Fitness Keeper, Inc., 2013. http://runkeeper.com.
- Federal Register Vol 78, No 17, 45 CFR Parts 160 and 164. Modifications to the HIPAA Privacy, Security, Enforcement, and Breach Notification Rules Under the Health Information Technology for Economic and Clinical Health Act and the Genetic Information Nondiscrimination Act; Other Modifications to the HIPAA Rules 2013 Department of Health and Human Services.
- Internet Engineering Task Force. The OAuth 2.0 Authorization Framework. RFC 6749, 2012. http://tools.ietf.org/html/rfc6749.
- Analysis of Unique Patient Identifier Options. National Committee on Vital and Health Statistics, 1997. http://www.ncvhs.hhs.gov/app3.htm.
- Cuggia M, Besana P, Glasspool D. Comparing semi-automatic systems for recruitment of patients to clinical trials. Int J Med Inform. 2011;80(6):371-388.
- Mandl KD, Mandel JC, Murphy SN, et al. The SMART Platform: early experience enabling substitutable applications for electronic health records. J Am Med Inform Assoc. 2012 Jul;19(4):597-603.
- Fiore LD, Brophy M, Ferguson RE, et al. A point-of-care clinical trial comparing insulin administered using a sliding scale versus a weight-based regimen. Clin Trials. 2011 Apr;8(2):183-195.
- Office of the National Coordinator for Health Information Technology, HealthIT.gov. http://www.healthit.gov/bluebutton. Accessed July 14, 2013.
Eslick I, Sim I, the DEcIDE Methods Center N-of-1 Guidance Panel. Information Technology (IT) Infrastructure for N-of-1 Trials. In: Kravitz RL, Duan N, eds, and the DEcIDE Methods Center N-of-1 Guidance Panel (Duan N, Eslick I, Gabler NB, Kaplan HC, Kravitz RL, Larson EB, Pace WD, Schmid CH, Sim I, Vohra S). Design and Implementation of N-of-1 Trials: A User’s Guide. AHRQ Publication No. 13(14)-EHC122-EF. Rockville, MD: Agency for Healthcare Research and Quality; January 2014: Chapter 5, pp. 55-70. http://www.effectivehealthcare.ahrq.gov/N-1-Trials.cfm