IRB: Ethics & Human Research

A Study to Evaluate the Effect of Investigator Attendance on the Efficiency of IRB Review

Institutional Review Boards (IRBs) are challenged to review increasing volumes of proposed research studies while meeting high ethical and regulatory standards.1 A growing literature has documented concerns about the IRB review process, including administrative delays,2 inadequate institutional support,3 and poor investigator-IRB relations.4 IRBs also carry significant operational costs,5 creating a strong incentive for institutions to identify mechanisms to improve the efficiency of the review process.

Some suggest that IRBs can become more efficient and decrease misunderstandings with principal investigators (PIs) by inviting them to attend IRB meetings when their protocols are scheduled for review and discussion.6 IRBs can involve investigators in the review of protocols in a number of ways: 1) invite PIs to meetings whenever new protocols are reviewed; 2) invite PIs on an as-needed basis, often when the IRB has particularly deep concerns about a protocol; or 3) never invite PIs to attend meetings. In addition, we know from anecdotal evidence that some IRBs (or individual IRB members) communicate with PIs after the submission of their protocol but prior to the full committee meeting to gather additional information regarding particular concerns or issues that they identify in their review of protocols.

The limited data on IRBs indicate that they do not routinely invite PIs to attend convened meetings. In a national sample of university-based IRBs, Hayes et al. found that 9% required the PI’s presence at IRB meetings, 79% said they meet with PIs by request only, and 12% indicated they never meet with PIs.7 In contrast, Jones et al. found that 69% of hospital-based IRBs reported that a presentation by the PI at the IRB meeting is a routine part of their protocol review process.8 Finally, the Bell study reported that 42% of low-volume IRBs and 17% of high-volume IRBs said they routinely encouraged PIs to attend committee meetings, either in person or by phone. Yet 22% of low-volume IRBs and 41% of high-volume IRBs indicated that PIs attended meetings only at the IRB’s request.9

For this study, we hypothesized that PI attendance at the IRB meeting could influence the IRB review process. Specifically, we hypothesized that PI attendance could positively influence the review with respect to efficiency (by decreasing misunderstandings and thus minimizing the amount of correspondence and length of time before final resolution); satisfaction (fostering confidence in each party’s comprehension of the protocol and minimizing impersonal communication); attitudes/relations (improving investigators’ satisfaction with the review and attitudes about the IRB by demystifying the review process, or by engendering more collegial interaction); and quality (through enhanced understanding and communication).

No studies have experimentally assessed the effect of PI presence on any of these domains. At the Johns Hopkins Medical Institutions (JHMI), four IRBs review protocols that require full committee (rather than expedited) review. Two of these IRBs routinely require PIs to attend the initial review, and two do not. This “natural experiment” provided the opportunity for a retrospective record review to assess the effect of PI attendance on one of the above four domains, efficiency of the review. We acknowledge that efficiency is not the only goal of the IRB review process. Satisfaction, investigator-IRB relationships, and quality of review are also critical. Adopting a highly efficient review process at the expense of review quality would be antithetical to the goals of the IRB system. At the same time, given that inefficiency is frequently cited as a cause for criticism of the IRB process, it is appropriate to examine interventions potentially relevant to efficiency in and of itself.

As an initial step in exploring the effect of PIs attending IRB meetings that undertake an initial review of their protocols, we conducted a retrospective review of IRB records for research protocols from the four JHMI IRBs. The project examined whether PI presence during initial IRB review improves the efficiency of the review, as measured by the number of days from protocol submission to approval, the volume of correspondence exchanged between the PI and the IRB prior to approval, and the number of convened IRB meetings required prior to approval.

Study Methods

Study Population. JHMI operates five IRBs; one was excluded from the sample because it conducts only expedited reviews. All of the IRBs are constituted to review any protocol submitted, include an average of eight members each, and are assigned protocols to review on a random basis. The exception to this rule is that one IRB includes an experimental psychologist, and protocols conducted by faculty affiliated with the Kennedy Krieger Institute (a research institute affiliated with Johns Hopkins that focuses on the diagnosis, treatment, and prevention of childhood neurological diseases and developmental disabilities) are assigned to that IRB only. All of the IRBs review approximately the same number of protocols on an annual basis and meet once a week for a scheduled two- to three-hour meeting. The time from an IRB meeting to the correspondence it generates to the PI is comparable across all IRBs. At the time of data collection, two IRBs routinely invited PIs to meetings, and the other two did not. JHMI has no standard policy regarding PI attendance at IRB meetings; rather, an IRB’s policy regarding the presence of the PI at reviews is at the discretion of the individual IRB chair. Anecdotally, we know that some chairs and/or primary reviewers contact PIs in advance of the IRB meeting. We were unable to track this activity systematically across IRBs, as such contact is neither systematically collected nor documented in the IRB file.

Data Collection. We reviewed IRB records for 125 new protocol applications submitted to four JHMI IRBs between March 12, 2002 and June 30, 2005 (Diagram 1). We purposely sampled 25 protocols from each of the four IRBs that conduct full committee reviews, allowing us to compare 50 reviews where the PI attended to 50 reviews where the PI did not. For the cross-sectional comparison, 50 subjects in each group is sufficient to provide 80% power to detect a 0.6 SD difference in review times. For the historical comparison, 25 subjects per group provided 80% power to detect a 0.8 SD difference in review times. In addition, one of the IRBs changed its policy within this time period, moving from a model where PIs were not invited to attend meetings to one where they are. We therefore sampled 50 protocols from that IRB to allow a separate pre- and post-change internal comparison of efficiency for reviews conducted by this IRB. This sampling methodology allowed for two types of comparisons: a historical comparison within one IRB, and a cross-sectional comparison between two IRBs that invite PIs to meetings and two that do not.
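The stated power figures are consistent with the standard normal-approximation formula for a two-sample comparison of means, n = 2(z_{1-α/2} + z_{1-β})²/d² per group. The following is an illustrative check only, not the authors' calculation (which is not described and may have used exact t-based methods):

```python
import math

# Standard normal quantiles for two-sided alpha = 0.05 and power = 0.80.
Z_ALPHA = 1.959964  # Phi^-1(0.975)
Z_BETA = 0.841621   # Phi^-1(0.80)

def n_per_group(effect_size_sd):
    """Per-group sample size for a two-sample comparison of means
    (normal approximation): n = 2 * (z_a + z_b)^2 / d^2, d in SD units."""
    return math.ceil(2 * (Z_ALPHA + Z_BETA) ** 2 / effect_size_sd ** 2)

# 0.6 SD (cross-sectional): ~44 per group, so 50 per group suffices.
# 0.8 SD (historical): ~25 per group.
print(n_per_group(0.6), n_per_group(0.8))  # 44 25
```

This reproduces both reported thresholds: 50 per group comfortably exceeds the ~44 needed for a 0.6 SD effect, and 25 per group matches the requirement for a 0.8 SD effect.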

Protocols were eligible for inclusion if they were approved and active at the time of data collection; received initial full committee review by one of the four IRBs; were submitted by faculty from the departments of medicine, neurology, or oncology;10 and did not involve children. Using these inclusion criteria, a total of 25 protocols were sampled from each IRB. In some cases, more than one protocol from a single investigator was included in the sample to meet our target of 25 protocols from each IRB. The Committee on Human Research at the Johns Hopkins Bloomberg School of Public Health—which is administratively separate from the JHMI IRBs (and has a unique Federalwide Assurance [FWA] from the Office for Human Research Protections)—concluded that this study fell within the exempt category of the Common Rule and thus did not require IRB review.

Data Abstraction and Analysis. We abstracted data on the PIs, the study, and the review characteristics, as well as on review outcomes of interest from IRB paper files and electronic databases. A comprehensive list of domains included in the data abstraction tool is found in Figure 1. Data were abstracted by hand, entered into an Excel database, and later uploaded into STATA statistical software, version 7.0, for analysis.11 

Descriptive statistics and bivariate analyses were conducted. Three different variables served as outcome measures of efficiency in this study: the number of days from protocol submission to approval; the number of pieces of correspondence exchanged between PI and IRB during the review; and the number of convened IRB meetings at which the protocol was discussed prior to approval.

We focused on two sets of comparisons of efficiency: 1) a cross-sectional comparison between the IRBs that invite PIs to their meetings and those that do not (n = 100), and 2) the change in efficiency after a change in policy for one IRB (n = 50).

Results

Background Characteristics. A total of 125 protocols were reviewed, representing the protocols of 93 PIs (i.e., 23 investigators are represented by more than one protocol). PIs affiliated with the Department of Medicine submitted 50% of the protocols, PIs affiliated with Oncology submitted 34%, and PIs affiliated with Neurology submitted the remaining 16%.

Protocol and review characteristics are summarized in Table 1. Half of the protocols reviewed were clinical trials,12 and half were sponsored by federal agencies. One-quarter of the trials included subjects from federally defined vulnerable populations,13 and one-fifth included healthy volunteers. Three-quarters had evidence of administrative review by IRB staff prior to review by the committee, and almost two-thirds were reviewed by one or more Johns Hopkins review committees (e.g., radiation, conflict of interest) besides the IRB prior to approval by the JHMI IRB.14

Efficiency-Related Measures. There were three primary outcome measures related to efficiency of review: time to approval, volume of correspondence, and number of convened IRB reviews before approval. For the complete sample of 125 protocols, the mean time from date of submission to date of approval was 75 calendar days (median = 64 days). On average there were 5.6 pieces of correspondence (both letters and e-mail messages) between the IRB and PI per protocol. The average protocol was reviewed at 1.8 IRB meetings prior to approval. No PI or protocol characteristics were found to be associated with time to approval, pieces of correspondence per protocol, or number of convened meetings before approval. Efficiency measures by IRB are presented in Table 2.

For the cross-sectional sample of 100 protocols, the mean time from date of submission to date of approval was 65 calendar days (median = 57). On average there were 5.1 pieces of correspondence between the IRB and PI per protocol. The average protocol was reviewed at 1.6 IRB meetings prior to approval.

Using a basic Chi-square test of the difference in means and a Kruskal-Wallis test of the difference in medians, we found no statistically significant difference in mean or median time from submission to final approval (p = 0.98), pieces of correspondence (p = 0.31), or number of convened meetings (p = 0.88) between IRBs that routinely invite PIs to their meetings (n = 50) and those that do not (n = 50).

For the historical comparison of the one IRB that changed its own policy regarding PI attendance, differences were found between the means (Chi-square test). Specifically, the average time from submission to approval was considerably longer when the PI did not attend (mean = 114 days) than when the PI did attend (mean = 70 days; p = 0.012). Assuming unequal variance, we performed a Kruskal-Wallis test and found a statistically significant difference between the relevant medians as well (median when PI did not attend = 92; median when PI did attend = 66; p = 0.016).

We also found a difference between the number of convened meetings needed for approval when a PI was present (mean = 1.7 meetings, median = 2) compared to the number needed when the PI was not present (mean = 2.4 meetings, p = 0.009; median = 2, Kruskal-Wallis p = 0.014) (Table 3).
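The study's analyses were run in STATA. As an illustrative sketch only (not the study's code, and using hypothetical review times rather than the study data), a two-group Kruskal-Wallis comparison can be computed in pure Python; with exactly two groups it is equivalent to the Mann-Whitney U test:

```python
import math

def kruskal_wallis_two_groups(a, b):
    """Two-group Kruskal-Wallis H test (equivalent to Mann-Whitney U),
    chi-square approximation with 1 degree of freedom; ties receive
    average ranks (no tie correction applied)."""
    combined = sorted((v, gi) for gi, grp in enumerate((a, b)) for v in grp)
    n = len(combined)
    # Assign ranks 1..n, averaging within runs of tied values.
    ranks = [0.0] * n
    i = 0
    while i < n:
        j = i
        while j + 1 < n and combined[j + 1][0] == combined[i][0]:
            j += 1
        avg_rank = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[k] = avg_rank
        i = j + 1
    rank_sums = [0.0, 0.0]
    for (_, gi), r in zip(combined, ranks):
        rank_sums[gi] += r
    sizes = (len(a), len(b))
    h = 12.0 / (n * (n + 1)) * sum(rs * rs / sz
                                   for rs, sz in zip(rank_sums, sizes)) - 3 * (n + 1)
    # Chi-square survival function with 1 df: P(X > h) = erfc(sqrt(h / 2)).
    p = math.erfc(math.sqrt(max(h, 0.0) / 2.0))
    return h, p

# Hypothetical review times in days (not the study data).
h, p = kruskal_wallis_two_groups([92, 105, 120, 88, 130], [60, 66, 70, 58, 72])
print(f"H = {h:.2f}, p = {p:.3f}")
```

Because the test is rank-based, it makes no assumption about the shape of the review-time distributions, which (as the gap between means and medians above suggests) are right-skewed.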

Discussion

Our results are inconclusive as to whether PI presence during initial IRB review improves the efficiency of the review process. A cross-sectional analysis comparing the efficiency of four IRBs—two that invite PIs to meetings, and two that do not—seems to indicate no difference in efficiency when PIs attend; but a historical comparison within one IRB shows a sizable increase in efficiency when the PI is present. For this IRB, the average length of time from protocol submission to approval decreased from approximately four months to just over two months when the PI routinely attended initial review. Because this study was not a controlled trial, we cannot say that the presence of the PI alone was responsible for this change, but it indicates that PI presence may play a role in IRB efficiency. We are unaware of any other substantial changes to the IRB process that could explain this difference but acknowledge that other factors, alone or in combination with PI presence, may be responsible for this increase in efficiency. The hypothesis that PI presence improves efficiency is further supported by the decrease in the amount of correspondence when PIs attended meetings.
Several factors could explain our divergent results. First, each IRB has a different chair and may have developed informal policies not related to PI attendance but relevant to efficiency. Anecdotally, we are aware that some IRBs or IRB chairs communicate with the PI in advance of the meeting to gather missing or more detailed information about the protocol. Since this form of communication is not systematically recorded in IRB records, we were unable to measure the extent to which this occurs, or whether it is more likely to occur in IRBs that do not invite the PI to attend meetings at which protocols are reviewed. This advance communication may dilute the effect of PI presence at the meeting and suggests that a variety of approaches might be considered to reduce misunderstandings, improve efficiency and quality of the review, and improve relations between IRBs and PIs. Second, we did not track whether investigators’ response time was uniform across IRBs. That is, some delays in review may be a result of a PI’s failure to respond to IRB concerns, not a result of delays in the IRB office. We assumed that PI response time varied somewhat randomly across IRBs. Third, a different team of personnel provides administrative and substantive support for each IRB. Because staff members play an important role in facilitating communication between investigators and the IRB, their effectiveness could influence the efficiency of the overall review process. At the same time, we knew before initiating this study that the average time from meeting to initial correspondence was fairly uniform across all four IRBs. To address these issues, further research to explore IRB-specific strategies is warranted.
Similar natural experiments could be studied at high-volume institutions that have adopted interventions to increase the efficiency of their IRB processes, or high-volume IRBs that routinely invite PIs to initial review meetings could simply be compared to those that do not. Ultimately, randomized controlled trials of inviting PIs to attend meetings, as well as of other interventions designed to improve IRB efficiency, should be conducted.

Our findings are limited in that the data were drawn from only one large academic medical center with multiple IRBs. Furthermore, review efficiency is only one domain subject to influence by investigator presence at IRB meetings. For example, PI presence may render the IRB deliberation process more transparent and lead to an improvement in PI-IRB relations. Further experimental work related to IRB efficiency is warranted. It is important to minimize the degree to which IRB review is a rate-limiting step in the conduct of research. Moreover, institutions throughout the country are spending millions of dollars to enhance their human subjects protection programs. It would be important to know whether these funds could be spent more wisely without compromising the quality of research reviews.

Acknowledgments

We would like to acknowledge the Program in Research Ethics, Johns Hopkins Berman Institute of Bioethics, for their input into the design and methods.
Holly A. Taylor, MPH, PhD, is Assistant Professor, Department of Health Policy and Management, Bloomberg School of Public Health and Berman Institute of Bioethics, Johns Hopkins University, Baltimore, MD; Peter Currie, MHS, is a JD Candidate, Georgetown Law Center, Georgetown University, Washington, DC; and Nancy E. Kass, ScD, is Phoebe R. Berman Professor of Bioethics and Public Health, Bloomberg School of Public Health and Berman Institute of Bioethics, Johns Hopkins University, Baltimore, MD.

References

1. Department of Health and Human Services, Office of the Inspector General. Institutional Review Boards: Promising Approaches. Washington, DC: DHHS, 1998; Institute of Medicine. Responsible Research: A Systems Approach to Protecting Research Participants. Washington, DC: National Academy Press, 2002.

2. Ahmed AH, Nicholson KG. Delays and diversity in the practice of local research ethics committees. Journal of Medical Ethics 1996;22(5):263-236.

3. Ellis GB. Keeping research subjects out of harm’s way. JAMA 1999;282:1963-1965.

4. Burke GS. Looking into the institutional review board: Observations from both sides of the table. The Journal of Nutrition 2005;135(4):921-924.

5. Sugarman J, Getz K, Speckman JL, et al. The cost of institutional review boards in academic medical centers. New England Journal of Medicine 2005;352(17):1825-1827.

6. Paul C. Health researchers’ views of ethics committees functioning in New Zealand. New Zealand Medical Journal 2000;113(1111):210-214; Hirshon JM, Krugman SD, Witting MD, et al. Variability in institutional review board assessment of minimal-risk research. Academic Emergency Medicine 2002;9(12):1417-1420.

7. Hayes GJ, Hayes SC, Dykstra T. A survey of university Institutional Review Boards: Characteristics, policies, and procedures. IRB: A Review of Human Subjects Research 1995;17(3):1-6.

8. Jones JS, White LJ, Pool LC, et al. Structure and practice of institutional review boards in the United States. Academic Emergency Medicine 1996;3(8):804-809.

9. Bell H, Whiton J, Connelly S. Evaluation of NIH Implementation of Section 491 of the Public Health Service Act, Mandating a Program of Protection for Research Subjects, 1998.

10. These were the only departments from which new proposals were submitted to all four IRBs during the period of interest.

11. StataCorp LP, College Station, Texas. STATA 7.0.

12. Clinical trial was defined as a research program designed to evaluate a new medical treatment, drug, or device. The ultimate purpose of the clinical trial had to be the discovery of new and improved methods of treating diseases and conditions. As such, Phase I studies were considered to be clinical trials because, although their purpose does not meet the definition identified above, they are a necessary precursor to efficacy evaluation. On the other hand, studies testing new diagnostic procedures were not defined as clinical trials because they did not include a treatment component.

13. The sample population was considered vulnerable if any of the following were targeted for inclusion: prisoners; staff/employees; students; nursing home residents; terminally ill; pregnant women; fetus/fetal tissue; poor/uninsured; illiterate; institutionalized; handicapped; mentally disabled; cognitively impaired; and emergency department patients. Thus, it is possible that almost any study could encounter a potential subject who was in some way vulnerable. However, this alone would not necessitate the vulnerable population designation. On the other hand, if a study were specifically enrolling subjects with cognitive impairment, etc., then the designation would apply.

14. Internal Johns Hopkins Medicine (JHM) committees included the Committee on Conflict of Interest, Sidney Kimmel Cancer Center Committee, Clinical Radiation Research Committee, Institutional Biosafety Committee, and Radioactive Drug Research Committee. Committees external to JHM included external IRBs, international consultants, and student representatives (for studies enrolling JHU students).

Holly A. Taylor, Peter Currie, and Nancy E. Kass, “A Study to Evaluate the Effect of Investigator Attendance on the Efficiency of IRB Review,” IRB: Ethics & Human Research 30, no. 1 (2008): 1-5.