Ethics in Research

Ethics plays an important role in the conduct of research. While the individual researcher is responsible for his or her own ethical practice, any research sponsored by an organization, university, or government program has a review process guided by a code of ethics. Beyond the code of ethics, the Institutional Review Board (IRB) plays an important role in monitoring research. Your task in this discussion is to examine the origins of the IRB and analyze how the review board guides or monitors research. In your 250-300-word post, answer the following three questions:

  • What does the IRB do to protect a research participant from harm?
  • One of the reasons for the existence of the IRB has to do with screening research applications. What is the basis for such screening?
  • The IRB provides the guidelines for ethical research practice. What would the researcher's responsibilities be under these guidelines?

In response to increasing concerns regarding inconsistency in the decision‐making of institutional review boards (IRBs), we introduce the decision‐maker’s dilemma, which arises when complex, normative decisions must be made regularly. Those faced with such decisions can either develop a process of algorithmic decision‐making, in which consistency is ensured but many morally relevant factors are excluded from the process, or embrace discretionary decision‐making, which makes space for morally relevant factors to shape decisions but leads to decisions that are inconsistent. Based on an exploration of similarities between systems of criminal sentencing and of research ethics review, we argue for a discretionary system of decision‐making, even though it leads to more inconsistency than does an algorithmic system. We conclude with a discussion of some safeguards that could improve consistency while still making space for discretion to enter IRBs’ decision‐making processes.

Keywords: human subjects research; institutional review boards; IRB decision‐making

A good deal has been written, mostly in complaint, about the functioning of institutional review boards (IRBs)[1] and the judgments they make when conducting an ethics review of proposed research with human subjects. One clearly discernible line of complaint has been about variation or, more normatively, inconsistency in decision‐making. Such inconsistency has been documented across IRBs at different research sites that review the same research protocol and within a single institution's IRB, and it has been the source of mounting criticisms from researchers who require approval from these committees before they can begin their research. In this article, we acknowledge that, while consistency in IRB decision‐making is an important value in research ethics and oversight, achieving it fully may come at too great a cost.

In the first section, we begin by mapping out the issue of inconsistency, including the different kinds of inconsistency seen in IRB decision‐making and the responses to this phenomenon. The second section introduces a distinction between procedural and content consistency, describes how the principle of fairness provides the ethical grounding for consistency, and considers the relationship between fairness and the two types of consistency. The third section lays out what we call the “decision‐maker’s dilemma.” The dilemma suggests that those faced with a series of complex normative decisions are pulled in two directions: On the one hand, they can adopt an algorithmic decision‐making procedure to ensure consistency, but risk excluding morally relevant factors from the process. On the other hand, they can embrace a discretionary decision‐making procedure, allowing for all morally relevant factors to be taken into account, but risking the inclusion of morally irrelevant factors in the process. In the final section, we consider the question of which horn of the dilemma IRBs ought to embrace when conducting an ethics review of research protocols. We argue that discretionary decision‐making is preferable in light of important parallels between the justice system and systems of research ethics. Finally, we suggest several important safeguards that should be put in place to increase both fairness and perceptions of fairness in research governance.

Inconsistency in Research Ethics Review

Concerns about the inconsistency of IRB decision‐making have been mounting for decades.[2] Initial concerns revolved primarily around the experiences of research teams conducting multisite research who were required to seek approval from several IRBs.[3] These IRBs would often request different, sometimes conflicting, changes to be made to the same research protocol, which frequently would consume a considerable amount of time and funding and, at times, would decrease the value of the research (for example, through reducing generalizability).[4] To reduce the redundancy and variation seen across local IRBs within multisite research, several countries have implemented centralized systems of research ethics review. In the United Kingdom, the creation of multisite research ethics committees ensures that research conducted on patients in the U.K.'s National Health Service (NHS) is reviewed by only one ethics committee.[5] As a result of two recent federal policies, most federally funded multisite research in the United States is now required to undergo review by a single IRB.[6] Australia has taken similar measures: investigators involved in multisite research are required to submit a national application form recognized across the country.[7] Several jurisdictions in Canada are moving toward centralized ethics committee review as well, although not all have done so.[8]

Despite these changes, inconsistency in IRB decision‐making is still a widely reported issue. An IRB at one institution is sometimes found to be inconsistent, such as when it finds fault with a research protocol that it previously approved, even though the researchers made no changes to the protocol after it was approved, or when the IRB treats two similar protocols differently. Additionally, while the final decisions of IRBs that review protocols for multisite studies tend to converge (such as whether to approve the protocol), how the IRBs reach their decisions and the number and types of modifications they request along the way tend to differ substantially, illuminating discrepancies between their decision‐making processes.[9] Documented inconsistencies across IRB processes include differences in the amount of preparation done before review, the average time to approval of a protocol, and the costs of review.[10] Inconsistencies have also been found across IRBs in terms of the type of review assigned to a particular protocol (exempt or expedited, for instance), whether informed consent waivers are deemed acceptable, and the ways in which regulations are interpreted.[11] For example, an investigation by Shah et al. found that while federal regulations in the United States permit children to be enrolled in research only when risks are determined to be minimal or a minor increase over minimal or when the research leads to direct benefits, IRBs vary considerably in how they define "minimal risk" or "direct benefit."[12] These inconsistencies can appear between IRBs in the same city and even within the same institution,[13] suggesting that local context is not always the source of the differences.

Critics point out that inconsistency is harmful in that it can lead to wasted resources, frustrated investigators, and the delay of valuable research. Green et al. reported that their research team dedicated 4,680 hours of staff time toward the IRB process over 19 months.[14] Inconsistencies within IRBs are likely to lead to frustrations among researchers and reductions in confidence in IRB functioning, while inconsistencies across IRBs may increase the practice of IRB shopping, so that IRBs perceived as faster and more permissive will receive more protocols for review. The reasons behind inconsistencies within and across IRBs are likely to be multiple and varied. Evidence suggests that IRB members often have different degrees of knowledge about existing regulations[15] and different senses of their role in relation to research participants and investigators (as protectionists or expediters, for example),[16] as well as different concerns, past experiences, and personalities,[17] all of which may contribute to the variation seen in their decision‐making. Furthermore, policies and regulations that provide guidance for IRBs are frequently vague or inconsistent, making it more likely that IRBs will interpret or apply regulations differently.[18] Stark's examination of IRB practices also suggests that individual IRBs tend to develop local precedents when novel ethical issues arise and are resolved; these precedents may increase consistency within committees, as they tend to guide responses to similar protocols, but may contribute to increases in inconsistency across committees.[19]

Barriers to increasing consistency within and across IRBs are substantial. As several commentators have pointed out, moral disagreement is ubiquitous and inevitable, there is no algorithm available with which to produce the "right" ethical output of committees, and there is little incentive for IRB members to try to increase consistency.[20] In light of these barriers, some go so far as to defend inconsistency. Edwards et al. provide the most detailed defense of inconsistency. Asserting that we ought "to reject the view that we should strive for complete consistency,"[21] they provide three arguments to justify their claim: a justice argument that maintains that local context and, in particular, cultural differences often justify differences in IRB functioning; a moral‐pluralism argument that makes the case that disagreement between IRBs is both legitimate and desirable; and a due‐process argument that holds that the process of moral deliberation is often more important than the outcome of the deliberation.[22] Similarly, McGuinness argues that inconsistency in IRB decision‐making is legitimate as long as it is based on a fair procedure.[23] This emphasis on process closely relates to the distinction between procedural and content consistency that we outline in the following section, and we are sympathetic to the conclusions these authors reach. Sayers also defends the inconsistency of IRBs, arguing that the goal of consistency is not in fact desirable when one looks more closely. Sayers rhetorically suggests that one way to reach such a goal would be to screen IRB members in advance to determine their moral views or to tell them how to think, leading to an Orwellian state of affairs.[24] Qualitative research with IRB members has revealed that they sometimes defend inconsistencies between committees as well, although with little apparent justification.[25]

In line with Edwards et al., McGuinness, and Sayers, we contend that research ethics review should not aim for complete consistency. Rather than focusing on acceptable forms of inconsistency, as Edwards and colleagues do, we defend the importance of consistency in IRB decision‐making, and, in line with Sayers's warning that inconsistency is the lesser of two evils, we argue that embracing some inconsistency in IRB decision‐making may be the best approach available, given the nature of the decisions being made. In the next section, we outline an important distinction between two types of consistency, the moral basis of consistency in decision‐making, and how the two are connected.

Examining Consistency

Procedural and content consistency

Two kinds of consistency in decision‐making can be distinguished: procedural and content consistency.[26] While procedural consistency is concerned with the process by which a decision is reached, content consistency is concerned with the outcome of the decision. These can come apart quite cleanly in principle, although, as will be discussed in more detail below, they rarely do in practice. In the context of research ethics review, mere procedural consistency could occur in the case of an IRB that made all decisions on the basis of a coin flip. Regardless of the outcomes of these decisions, the IRB would be procedurally consistent, because its process for making decisions would always be the same. Content consistency, by contrast, results only from agreement between the substantive judgments made by committees, not from the process. If two committees approve the same research protocol and require no modifications, one as the result of reading and discussing the protocol in detail, and the other because it always gives a full pass to protocols that are submitted on a Tuesday, the committees would be consistent in content, but not in procedure. Note that in neither of these cases of mere procedural or content consistency is there any reference to an independent standard of what counts as “getting it right.”

Fairness and consistency

We often care about consistency because it represents fairness. To be consistent is to treat equals equally, ensuring that individuals are not treated differently when there is no good reason for them to be. Of course, there are important limits on when and where we expect the content of decisions to be consistent. In many situations, we agree that there are important differences that ought to prevent us from treating two cases in the same way. In these cases, to ensure fairness, it is important for decision‐makers to be procedurally consistent, to decide on the same basis, taking into account the same kinds of reasons in different cases.

Content consistency, in which the same decision is made for two cases with the same relevant features, is not always required for fairness. Individuals accept inconsistency in content when they feel that the process has been fair, which, for some decisions, can involve a process relying on chance. If all options are equal at the outset, using a system based on chance ensures that no differential treatment is given when it is not justified. If one person wins in a lottery and another does not, even though there is no content consistency, there is no sense of injustice, as long as each person was given an equal chance in the lottery process.

Sometimes, however, fairness depends on there being the right kind of relationship between the procedure and the content of the decision, and in these cases, procedural and content consistency are closely related. This occurs when there are relevant factors that ought to inform the decision but that would be excluded if the decision were left to mere chance. In such cases, the procedure ought to take into account the right kinds of reasons in the right way, in order to be fair. Take the allocation of scarce resources, such as organs. If one person was likely to die if they didn’t receive a liver transplant in the next six months, while another was not in dire shape, it would not be fair to flip a coin between these two to decide who should receive the next available liver. This is because there are morally relevant factors that ought to be taken into consideration (such as urgency of need and prognosis) within a decision‐making process regarding the allocation of organ donations, and flipping a coin fails to recognize these as reasons. Systems that have developed in order to ensure the just allotment of organs (such as the model for end‐stage liver disease [MELD] scores that are assigned to potential recipients by the United Network for Organ Sharing to capture these factors) aim to connect the procedure of decision‐making with the content of the decision and take into account the right reasons in the right way.[27]
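The MELD component of this allocation system is explicitly algorithmic: a fixed formula maps a few laboratory values to a priority score, so identical inputs always yield identical priority. The following is a minimal sketch of the original UNOS formula; the clamping rules and later revisions (such as MELD-Na) are simplified here, so treat it as illustrative rather than as the current allocation rule:

```python
import math

def meld_score(bilirubin_mg_dl: float, inr: float, creatinine_mg_dl: float) -> int:
    """Simplified sketch of the original UNOS MELD formula (since revised).

    Lab values below 1.0 are rounded up to 1.0, and creatinine is capped
    at 4.0 mg/dL, per the original allocation rules (other caps omitted).
    """
    bili = max(bilirubin_mg_dl, 1.0)
    inr = max(inr, 1.0)
    crea = min(max(creatinine_mg_dl, 1.0), 4.0)
    score = (3.78 * math.log(bili)
             + 11.2 * math.log(inr)
             + 9.57 * math.log(crea)
             + 6.43)
    return round(score)

# A sicker patient receives a higher score, and so higher priority:
print(meld_score(bilirubin_mg_dl=1.2, inr=1.1, creatinine_mg_dl=1.0))  # low urgency
print(meld_score(bilirubin_mg_dl=8.0, inr=2.5, creatinine_mg_dl=3.0))  # high urgency
```

The point of the sketch is the design choice, not the coefficients: by fixing the mapping from inputs to score in advance, the system connects procedure and content, taking urgency of need into account in the same way for every candidate.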

Fairness and consistency in IRBs

The decisions of IRBs are more like decisions related to the allocation of organs than decisions based on the results of a lottery. There are important differences between the risks and benefits contained within different research protocols that IRBs ought to take into account during the review process. This suggests that the kind of consistency that we should be concerned with when it comes to IRB decision‐making is not only consistency in terms of the final decisions that IRBs make (for example, approve, approve with modifications, reject)[28] but also in terms of the reasons or values given in support of the decisions that they reach.[29] Of course, if the same kinds of reasons or values are guiding the approaches of different IRBs, it is all the more likely that they will reach the same decisions about different protocols. This points to an important way in which procedural and content consistency are related: the more procedural consistency one has, the more likely one is to have content consistency (except in cases in which the procedure is based on chance).

As with other types of consistency, fairness underlies the importance of consistency in IRB decision‐making. This is for the sake of both investigators and potential research participants. It is unfair to treat two identical protocols differently if they are being proposed for the same context (so there are no relevant local differences), because this allows one investigator to proceed with her proposed research, while another is required to wait. Alternatively, one researcher may substantially improve her research protocol through engagement with the committee's requests, while another may not benefit from this engagement. Likewise, potential participants can be either unfairly benefited or harmed by inconsistencies in research ethics review. If a protocol with risks that outweigh its benefits is approved by one committee and not another, the patient population in the jurisdiction of one IRB is protected, whereas the other is not. However, a risk‐averse IRB may unfairly disadvantage a local population by turning down a research protocol that includes the opportunity for patients to enroll in a clinical trial, while another committee may approve the same protocol, thereby benefiting potential participants in a particular geographic area.[30]

Perceptions of fairness

Importantly, perceptions of fairness shape how individuals interact with a system. In many contexts, an increase in perceived injustice leads to an increase in deviant behavior.[31] For example, research investigating the link between perceptions of injustice and workplace behavior suggests that retaliatory behavior (such as theft or calling in sick when not ill) is more likely when employees feel a sense of injustice in their workplace.[32] Extending these findings to the realm of IRBs, Keith‐Spiegel and colleagues have suggested that "the perception of unfairness will motivate some investigators to engage in subterfuge and misconduct designed to 'level the playing field.'"[33] Keith‐Spiegel and Koocher argue that particular features of IRBs (such as flexibility in interpretation of guidelines, pressure to protect the institution, or a lack of training or professionalization) make them especially susceptible to being perceived as unfair, making it likely that, paradoxically, their behaviors may contribute to unethical behavior in scientists.[34]

Others have also pointed to the ways in which perceived injustice within IRB decision‐making may alienate researchers from the ethics review process and lead them to cut corners or leave morally relevant information out of applications.[35] Research suggests that the majority of complaints from investigators regarding IRBs relate to issues of fairness, and when polled, researchers report, somewhat surprisingly, that procedural justice (how decisions are made) and interactional justice (interpersonal sensitivity and justification) are more important to them in IRBs than competence, the absence of bias, or protecting the rights of participants.[36] This suggests that, in addition to being unfair, inconsistency within and across IRBs may have harmful downstream effects, as it may lead to dishonesty within research protocols, IRB shopping, and damage to the relationships between IRB members and researchers. This points to the importance of having buy‐in from researchers within a system of research governance; without it, the system risks encouraging behavior that poses a direct threat to the purpose of research ethics review. This is a tall order: what it requires is not only a fair system of research governance but a system that is both fair and perceived as fair.

Perceptions of fairness also depend on the availability of information about the decision‐making process. If a decision‐making process is unknown to those affected by it, content consistency is all that is available and so is central to perceptions of fairness. For example, if two people with the same terminal illness wish to gain access to an investigational drug and only one is granted access, but they have no knowledge of how the decision was reached, the decision‐making process may well be perceived as unfair, regardless of whether it was. Similarly, when investigators are unaware of how decisions about research proposals are made within IRBs, they may be inclined to assume the process is unfair, considering the variety of experiences they have heard about and the lack of justification given for the modifications requested, regardless of what actually took place within IRB discussions. This suggests that transparency can play an important role in perceptions of fairness, a point we will return to later.

The Decision‐Maker’s Dilemma

Sometimes, a decision is complex enough that the procedure for decision‐making cannot be specified in advance, because the relevant factors are too many or interact in unpredictable ways. In these cases, individual or community discretion is often valued (when interviewing candidates for a job, for example). This is particularly the case in normative decisions, because the ethical criteria that are relevant are often under dispute and resist easy measurement (as in determining the sentence for a crime committed). The task of IRBs, in reviewing research protocols to evaluate their ethical and scientific quality, is both complex and normative, making it the kind of decision‐making process in which discretion is valued.[37] As a result, IRBs are designed to consist of a diverse group of individuals with a range of experiences and areas of expertise.

However, the more complex and normative the decision that must be made, the more likely it is that factors thought to be irrelevant to the decision will slip into a discretionary decision‐making process. This results from both the complexity, in that the many variables involved are difficult to specify in advance, and the normativity, in that moral judgments across individuals tend to be diverse and resist easy consensus. Take decisions regarding promotions at work, for example. While managers may have every intention of narrowing the pay gap between men and women, there are various ways in which gender stereotypes with regard to men and leadership positions can infiltrate the process of deciding whom to promote. In the many cases in which women are paid less than men for the same job, it appears that a morally irrelevant factor (gender) has slipped into the decision‐making process, creating an unfair system of inconsistent rewards. Similarly, there is little doubt that irrelevant factors enter the decision‐making processes of IRBs. Qualitative research by Klitzman reveals that idiosyncratic features of individuals and communities can play a substantial role in shaping the feedback IRBs give to investigators, as a result of both the complexity of the decisions involved and differences in moral judgments.[38] Uncertainty also tends to exacerbate the difficulties of decision‐making that is complex and normative; the less that is known about which factors are relevant (such as the risks and benefits that might be involved in a particular research protocol) and about the relationship between different values (such as how to weigh patient autonomy against the potential knowledge that might be gained), the less likely it is that only the right factors will enter the decision‐making process. This inevitably leads to inconsistencies.

In decision‐making systems in which such inconsistencies begin to appear, those responsible for decision‐making are faced with a dilemma. On one horn of the dilemma is the option of specifying in advance how factors relevant to the decision should affect it, fixing the content to ensure consistency and prevent irrelevant information from entering the decision‐making process. This can be thought of as algorithmic decision‐making. This leads to both procedural and content consistency. In response to the gender pay gap, a boss might decide that, rather than offering promotions based on the boss’s perception of employees’ merit, individuals will receive promotions on the basis of how long they have been with the company. While this new system will ensure that men and women are not treated differently on the basis of their gender, other factors (such as performance) that ought to contribute to decisions regarding promotions will be left out. This suggests that in algorithmic decision‐making processes, consistency comes with a cost. While irrelevant factors tend to be excluded from the decision, some relevant ones tend to go too, both increasing and decreasing the fairness of the decision.

On the other horn, a decision‐maker can embrace the importance of discretion and work toward creating a more consistent procedure of decision‐making, without fixing the content of the decision in relation to relevant factors. This can be thought of as discretionary decision‐making. This allows all relevant factors that ought to be taken into account to, at least potentially, be included. A manager might be concerned that he tends to promote men more often than women, and he might also see the benefit of allowing features other than time with the company to contribute to decisions about promotions. Embracing discretionary decision‐making in this context ensures that a significant range of relevant factors can be considered in any one decision about a promotion (so that taking maternity leave wouldn't negatively affect an employee's chances even if it meant that she had actually worked fewer days at the company than a colleague who was hired at the same time), but it also inevitably allows irrelevant factors (such as shared interests, which he may have more of with male, rather than female, employees) to enter the decision‐making process. Perhaps he will try to patch up the discretionary procedure by reducing the amount of irrelevant information that enters the decision‐making process (by, for example, taking a course in implicit bias or including women on the hiring team), hoping this will contribute to content consistency. While these measures may do some good, a discretionary process of decision‐making will always be limited in terms of both procedural and content consistency, since more than just morally relevant factors will enter into the process.
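To make the dilemma's two horns concrete, here is a minimal sketch of the promotion example; all names, weights, and thresholds are hypothetical, invented purely for illustration:

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    years_at_company: float
    performance: float        # morally relevant
    shared_interests: float   # morally irrelevant, but hard to exclude

def algorithmic_promotion(c: Candidate) -> bool:
    # Horn 1: perfectly consistent, but blind to performance,
    # since the rule considers only tenure.
    return c.years_at_company >= 5.0

def discretionary_promotion(c: Candidate, manager_bias: float) -> bool:
    # Horn 2: all relevant factors can count, but so can irrelevant ones,
    # modeled here as a bias weight that varies from manager to manager.
    score = (0.5 * c.performance
             + 0.2 * c.years_at_company
             + manager_bias * c.shared_interests)
    return score >= 3.0
```

The algorithmic rule yields identical outcomes for identical tenure, while the discretionary rule can weigh performance; but because the bias weight varies across decision‐makers, the discretionary rule is exactly where inconsistency enters.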

Embracing Inconsistency

How should systems of research governance respond to this dilemma? We argue that IRBs should choose the second horn of the dilemma and continue to rely on a discretionary process of decision‐making, embracing (some) inconsistency. Rather than attempting to specify in advance precisely what factors should contribute to decisions regarding research protocols, IRBs should aim to improve content consistency by way of procedural changes.[39] This choice involves recognizing that consistency is merely one value among many within research ethics; with the choice of discretionary decision‐making, some consistency is sacrificed in order to improve the likelihood that the broad range of relevant ethical issues (such as novel risks and benefits) that can arise in research protocols will be identified by an IRB and appropriate modifications will be requested. In reflecting on our recommendation of discretionary decision‐making, we discuss here several features that are shared between systems of justice and systems of research governance. In light of these similarities, we argue that systems of research ethics ought to follow the approach courts use in recognizing the importance of discretion, which requires accepting the limitations of attaining consistency. We also discuss several safeguards that systems of research governance could adopt in order to counter the risks that arise from embracing the second horn and adopting a discretionary process of decision‐making.

The justice system

Many research ethicists have noted the similarities between research ethics systems and the justice system, but the connection has not been explored in great detail.[40] We think there is real weight and importance to these comparisons. As with systems of research oversight, consistency is a constant concern in systems of justice. For example, in the United States, those who have been convicted of the murder of a white female are 10 times more likely to be executed than those who have been convicted of the murder of a black male.[41] This violates the principle of parity, which is considered to be foundational in law: those who have committed the same offense should, all things being equal, receive equal punishment. As Bagaric and Pathinayake explain, "At the core of the principle of parity is the concept of equal justice and the patent unfairness in dealing differently with offenders who are similarly placed."[42] Just as fairness underlies the importance of consistency in IRB decision‐making, the principle of parity underlies the importance of consistency in criminal sentencing.

While ensuring fairness through consistency is thought to be a central goal of systems of justice, the principle of parity is also generally accepted as impossible to fully achieve. Several features of the justice system contribute to this impossibility, each of which is shared by systems of research governance. The first feature is that there is no available ground truth that can provide a foundation for consistent moral decision‐making. While both systems have legislation, regulations, guiding documents, and principles, no universal moral system has been agreed upon as a guide.[43] As Angell and colleagues have put it with regard to IRBs, there is no "final moral authority."[44] This contributes to the second feature, which is continuous evolution when it comes to ethical agreement. That which is considered right or wrong in systems of justice and research ethics changes over time, as do interpretations of regulations and principles. While, in some cases, these changes will be instantiated in changes to standards and regulations, in other cases, they occur gradually within processes of decision‐making. This makes consistency over time a very difficult, and arguably undesirable, goal.[45]

The third feature is context. Both within legal systems and within research governance, the details of a case are inseparable from an ethical examination of it. This rules out the possibility of embracing the dilemma’s first alternative by developing an algorithmic decision‐making procedure, because the weighting of particular factors cannot be specified in advance. Given the complexity of context within sentencing and research protocols, the factors are too many and too varied, and they interact in unpredictable ways. Consider the role of aggravating and mitigating circumstances in sentencing as an illustration of the difficulty of disentangling the details of a case from its evaluation. Whereas certain features are intuitively classed as either aggravating (past convictions, for example) or mitigating (remorse), these features tend to express themselves differently in different contexts, resisting attempts to quantify them. The remorse of an offender with a long list of past convictions takes on a different hue from the remorse of a first‐time offender, suggesting that to assign each of these features a particular weight in all circumstances would be to miss out on relevant information within sentencing.[46] Similarly, potential risks and benefits change shape within research settings, depending on the context, and would be very hard to pin down in a settled system of measurement. A risky protocol submitted by an experienced and conscientious investigator may well be less risky than the very same protocol submitted by a hands‐off investigator who spends most of her time overseas.[47]

A strong case can be made for the importance of choosing discretionary decision‐making in systems of research ethics, given these features shared with systems of justice. In recognizing these features, those grappling with the complex, normative decisions that must be made within sentencing embrace such discretion, recognizing that to map out the correct relationship between every case and every outcome is an unachievable and undesirable aim. In embracing discretionary decision‐making, the justice system risks leaving room for morally irrelevant features to enter these decisions, but it ensures that many of those that are morally relevant, and the intricate relationships between them, are included. As systems that are also tasked with decisions that are evolving, ungrounded, and contextual, research oversight systems are faced with the same dilemma. Given the similarities between systems of justice and systems of research ethics, we argue that, like courts, IRBs ought to maintain discretionary decision‐making processes. However, as will be discussed below, there are safeguards that can be put in place to prevent too many morally irrelevant details from entering decisions within research ethics review.

It is worth noting that while systems of justice have had centuries to evolve into the shape they are in today, systems of research governance are, by comparison, very young. To take up the first of the dilemma’s two alternatives and fix the relationship between moral factors and decisions with respect to proposed research, one must be confident that the moral factors one has identified as inputs, the decisions one has selected as outputs, and the relationships between the two are the best they can be. We are a far cry from this stage within research governance. Consider the shifts that have taken place in research ethics in just the past several decades. Single‐site, institutionally funded trials have ballooned into multisite, international, industry‐funded trials, while technologies such as body cameras and genetic tests have arrived on the scene, introducing novel ethical issues that IRBs must grapple with. The priorities of the field of research ethics have shifted as well, from an initial emphasis on protectionism within research toward one of increasing and encouraging participation in research.[48] Discussions about dissolving the boundary between research and practice, the importance of granting researched communities ownership over research design and data, and the growth of industry‐led investigations that fall outside the current domain of research oversight are all taking place, each of which could fundamentally impact the shape of research governance.[49] In a rapidly shifting realm such as this one, to attempt to specify the process of deliberation within research ethics too closely would be a significant misstep.

One might object to this argument, which paints a picture of justice systems as having chosen the second alternative, by pointing toward several efforts to increase consistency within sentencing that look quite a bit like algorithmic decision‐making. In the United States, the Federal Sentencing Guidelines, introduced in 1984 and mandatory until 2005, sought to reduce the influence of morally irrelevant factors in sentencing practices by specifying the criteria that ought to shape an offender’s sentence. Two inputs (criminal history and the severity of an offense) were combined to produce a score that indicated the sentence an offender should receive.[50] Similarly, in Australia, a sentencing calculus was introduced to reduce inconsistencies and increase fairness.[51] However, although these measures included suggested limitations for sentencing, discretion still guided decisions at the end of the day, particularly with the incorporation of aggravating and mitigating factors into a sentence. In the United States, this use of discretion was so significant that, in 2002, 35% of sentences fell outside the federal guidelines.[52] While this use of discretion probably occurred because the sentencing guidelines were unable to capture all morally relevant features, it also certainly allowed for morally irrelevant criteria, such as race and income, to impact sentencing.[53] As Bagaric writes of the calculus in Australia, “The complexity of the sentencing calculus, stemming from the large number of aggravating and mitigating factors and the reluctance of the courts and legislature to ascribe weight to them, reduces the parity principle from a concrete principle to an aspirational ideal.”[54]
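For a sense of how such a scheme works mechanically, consider a deliberately simplified, hypothetical sketch of a guidelines-style lookup; the real Federal Sentencing Guidelines table uses 43 offense levels and six criminal-history categories, and the numbers below are invented for illustration:

```python
# Hypothetical, heavily simplified guidelines-style grid:
# (offense_severity, criminal_history_category) -> (min, max) months.
SENTENCE_MONTHS = {
    (1, 1): (0, 6),
    (1, 2): (2, 8),
    (2, 1): (6, 12),
    (2, 2): (10, 16),
}

def guideline_range(offense_severity: int, history_category: int) -> tuple[int, int]:
    """Algorithmic sentencing: the same two inputs always yield the same range."""
    # Combinations outside the grid simply have no defined sentence,
    # which is part of what makes a fixed grid so rigid.
    return SENTENCE_MONTHS[(offense_severity, history_category)]
```

Precisely because the mapping is fixed in advance, any factor not encoded in the two inputs, whether morally relevant (remorse) or irrelevant (race), can enter only through departures from the grid, which is where the guidelines reintroduced discretion.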

Safeguards

Although we argue that the second alternative is preferable to the first in the realm of research ethics, we also acknowledge that embracing a discretionary process that inevitably involves inconsistency requires safeguards to prevent too many morally irrelevant factors from entering the decision process, as well as to ensure that morally relevant factors are considered. These safeguards should be aimed at reducing both unfairness and perceptions of unfairness.

Unfairness can be reduced by improving content consistency, which can be achieved through improvements in procedural consistency. Many such improvements are discussed in detail elsewhere, so here we merely flag some ways that procedural consistency can be improved.[55] One important aspect of the decision‐making procedure that ought to be consistent is the makeup of the committee evaluating research protocols. This includes the members who sit on the committee and their backgrounds and expertise, as well as the training they received before beginning their membership. Although some have pointed out that consistent expertise and shared training will not be sufficient to ensure consistency between committees, they are nonetheless an important step toward it.[56] Similarly, guidelines that require IRB discussions to cover several questions or categories in relation to each research protocol can contribute to procedural consistency. IRB members in the United States take guidance from The Belmont Report's emphasis on respect for persons, beneficence, and justice,[57] and ten ethical "review domains" have been identified to guide IRB discussions in the United Kingdom.[58] Despite this, evidence from the United States suggests that the vast majority of IRB meetings are spent discussing consent forms, and very little time is spent on other important topics, such as the impact of the research on the community involved.[59] This suggests that more needs to be done to ensure that IRB members consider several ethical aspects of a protocol during discussion. Such a process would resemble the current use of sentencing guidelines as recommendations, rather than as a mandatory framework for sentencing decisions. In hopes of better understanding ethics committees' decision‐making processes, the National Research Ethics Service in the United Kingdom has been conducting an exercise, the Shared Ethical Debate, to collect details about committees' decisions, the rationale for decision‐making, and their processes of deliberation.[60] Ideally, the findings from this exercise will be used to improve standard operating procedures and enable each ethics committee to reflect on how it deliberates and makes decisions.

As with trust and trustworthiness, where trust should be placed only in processes, individuals, or institutions that are in fact trustworthy, perceptions of IRBs as fair ought to be improved only if the processes they engage in are in fact fair. In light of this, two interrelated changes that might help to improve both fairness and perceptions of fairness in research oversight are for IRBs to increase the transparency of their processes and to give reasons for their decisions.[61] Although some argue that there are good reasons (such as the protection of confidentiality and freedom of discussion)[62] to refrain from making IRB decision‐making processes available to the public, allowing investigators to peek within the black box of IRB reasoning may also help them to better understand the ethical issues that arise in their research and how they might respond to them. Fernandez Lynch proposes several ways that IRB transparency could be increased, including releasing redacted IRB minutes, sharing information about internal processes (such as timelines and mechanisms for self‐quality improvement), or opening up IRB meetings to various stakeholders (such as investigators, the public, and participants).[63] While inviting investigators to discuss their protocols at IRB meetings is common in some countries (such as the United Kingdom) and reportedly uncommon in others (such as the United States), many have suggested that such face‐to‐face meetings can significantly reduce tensions between IRBs and investigators.[64] Furthermore, increases in IRB transparency may not only lead to improvements in investigators’ perceptions of fairness in IRB functioning but may also increase public and participant trust in the process of research ethics review, improve the efficiency of IRB processes, and decrease inconsistencies within and across IRBs.[65]

A particular form of transparency that is likely to have a significant impact on perceptions of fairness is transparency with respect to the reasons behind the decisions IRBs make and their requests for modifications to protocols.[66] As Fernandez Lynch has pointed out, federal regulations in the United States require IRBs to provide reasons for their decisions when they disapprove a protocol, but not when they request modifications, a much more common occurrence. As she suggests, providing reasons for these can help investigators to learn about research ethics and can encourage self‐reflection among IRB members, as "the process of articulating reasons may cause IRBs to reassess their own views."[67] The connection between fairness and giving reasons for one's decisions is also reflected in the research of Keith‐Spiegel and colleagues discussed above, which measures the importance that researchers place on fairness in research ethics processes and the link between retaliatory behavior and perceptions of unfairness. These authors suggest that requiring IRB members to explain to researchers, in ethical terms, their requests for modifications of protocols may well reduce perceptions of injustice and thereby decrease noncompliant behaviors.[68] As discussed above, when only the content of decisions is available, perceptions of the fairness of a process are filled in by whatever one is able to imagine, and when the decision is not the desired one, the imagination is not always generous. Requiring IRBs to offer reasons for their decisions may also have the effect of reducing the number of unwarranted requests they make (to change the font, for instance), given that members will be required to reflect on each request and offer a justification for it. Increased transparency of IRB deliberations may also give researchers insight into ethical issues related to their research that could aid them in the development of future protocols, making everyone's job easier.[69]

In response to persisting concerns related to inconsistency in the ethical review of research, we have offered a partial defense of such inconsistency. This defense rests on the recognition of a dilemma faced by all who aspire to make consistent, fair decisions in response to complex and normative cases. This dilemma results in either the achievement of consistency but the exclusion of morally relevant factors, through algorithmic decision‐making, or inconsistency but the inclusion of both morally relevant and irrelevant factors, through discretionary decision‐making. In light of several parallels between systems of research governance and systems of justice, we argue that the second horn, that of discretionary decision‐making, is preferable within IRB processes, but warn that certain safeguards should be put in place to prevent too many morally irrelevant factors from entering the decision‐making process.

ACKNOWLEDGMENTS

Sheehan and Friesen are funded by the National Institute for Health Research (NIHR) Oxford Biomedical Research Centre, through grant BRC‐1215‐20008 to the Oxford University Hospitals NHS Foundation Trust and the University of Oxford. The views expressed are those of the authors and not necessarily those of the NHS, the NIHR, or the Department of Health and Social Care. Sheehan is a member of the National Research Ethics Advisors Panel, which advises the National Research Ethics Service in the United Kingdom. Yusof is grateful for the support of the Universiti Teknologi MARA, in Malaysia. We would all like to thank the members of the National Research Ethics Advisors Panel (Andrew George, Peter Heasman, Soren Holm, Ros Levenson, Hugh Davies, Peter Keen, Simon Woods, and Clive Collett), Jonathan Montgomery, Janet Wisely, and Ana Iltis for their comments on and discussion of earlier versions of this paper.

Disclosure

Friesen is a member of a research ethics committee specializing in social care research in the United Kingdom, and Yusof is a member of the research ethics committee of the Universiti Teknologi MARA, in Malaysia.

REFERENCES

1

 An IRB in the United States is effectively the same as a research ethics committee (REC) or a research ethics board (REB). There are some important dissimilarities across countries that depend on regulatory, historical, and cultural differences. These do not have a significant impact on the arguments here and are beyond the scope of this article.

2

 Trace, S., and S. E. Kolstoe, ” Measuring Inconsistency in Research Ethics Committee Review ,” BMC Medical Ethics 18, no. 1 (2017): 874 – 75 ; Abbott, L., and C. Grady, “A Systematic Review of the Empirical Literature Evaluating IRBs: What We Know and What We Still Need to Learn,” Journal of Empirical Research on Human Research Ethics 6, no. 1 (2011): 3–19; Klitzman, R., The Ethics Police? The Struggle to Make Human Research Safe (New York: Oxford University Press, 2015); Edwards, S. J., T. Stone and T. Swift, “Differences between Research Ethics Committees,” International Journal of Technological Assessment of Health Care 23, no. 1 (2007): 17–23; Stark, L., Behind Closed Doors: IRBs and the Making of Ethical Research (University of Chicago Press, 2011); Goldman, J., and M. D. Katz, “Inconsistency and Institutional Review Boards,” Journal of the American Medical Association 248, no. 2 (1982): 197–202.

3

 McWilliams, R., et al., ” Problematic Variation in Local Institutional Review of a Multicenter Genetic Epidemiology Study ,” Journal of the American Medical Association 290, no. 3 (2003): 360 – 66 ; Caulfield, T., N. Ries and G. Barr, “Variation in Ethics Review of Multi-site Research Initiatives,” Amsterdam LF 3 (2011): 85–100; Driscoll, A., et al., “Ethical Dilemmas of a Large National Multi-centre Study in Australia: Time for Some Consistency,” Journal of Clinical Nursing 17, no.16 (2008): 2212–20.

4

 Petersen, L. A., et al., ” How Variability in the Institutional Review Board Review Process Affects Minimal-Risk Multisite Health Services Research ,” Annals of Internal Medicine 156, no. 10 (2012): 728 – 35.

5

 U.K. Department of Health, Multi-Centre Research Committees, HSG(97)23, 1997.

6

 ” Final NIH Policy on the Use of a Single Institutional Review Board for Multi-site Research ,” National Institutes of Health, 2016, https://grants.nih.gov/grants/guide/notice-files/not-od-16-094.html.

7

 Needham, A. C., M. Z. Kapadia, and M. Offringa, ” Ethics Review of Pediatric Multi-center Drug Trials ,” Pediatric Drugs 17, no. 1 (2015): 23 – 30.

8

 Page, S. A., and J. Nyeboer, ” Improving the Process of Research Ethics Review ,” Research Integrity and Peer Review 2, no. 1 (2017): 14 – 21.

9

 Vadeboncoeur, C., et al., ” Variation in University Research Ethics Review: Reflections following an Inter-university Study in England ,” Research Ethics 12, no. 4 (2016): 217 – 33.

Hirshon, J. M., et al., ” Variability in Institutional Review Board Assessment of Minimal-Risk Research ,” Academic Emergency Medicine 9, no. 12 (2002): 1417 – 20 ; Khan, M. A., et al., “Variability of the Institutional Review Board Process within a National Research Network,” Clinical Pediatrics 53, no. 6 (2014): 556–60; Byrne, M. M., et al. “Variability in the Costs of Institutional Review Board Oversight,” Academic Medicine 81, no. 8 (2006): 708–12; Vick, C. C., et al., “Variation in Institutional Review Processes for a Multisite Observational Study,” American Journal of Surgery 190, no. 5 (2005): 805–9; Dziak, K., et al., “Variations among Institutional Review Board Reviews in a Multisite Health Services Research Study,” Health Services Research 40, no. 1 (2005): 279–90.

Petersen, et al., ” How Variability in the Institutional Review Board Review Process Affects Minimal-Risk Multisite Health Services Research”; Hirshon et al., “Variability in Institutional Review Board Assessment of Minimal-Risk Research”; Khan et al., “Variability of the Institutional Review Board Process within a National Research Network”; Silverman, H., S. C. Hull, and J. Sugarman, “Variability among Institutional Review Boards’ Decisions within the Context of a Multicenter Trial ,” Critical Care Medicine 29, no. 2 (2001): 235 – 41.

Shah, S., et al., ” How Do Institutional Review Boards Apply the Federal Risk and Benefit Standards for Pediatric Research? ,” Journal of the American Medical Association 291, no. 4 (2004): 476 – 82.

Klitzman, R., ” The Myth of Community Differences as the Cause of Variations among IRBs ,” AJOB Primary Research 2, no. 2 (2011): 24 – 33.

Green, L. A., et al., ” Impact of Institutional Review Board Practice Variation on Observational Health Services Research ,” Health Services Research 41, no. 1 (2006): 214 – 30 ; Whitney, S. N., et al., “Principal Investigator Views of the IRB System,” International Journal of Medical Sciences 5, no. 2 (2008): 68–72; Stark, L., and J. A. Greene, “Clinical Trials, Healthy Controls, and the Birth of the IRB,” New England Journal of Medicine 375, no. 11 (2016): 1013–15.

Mhaskar, R., et al., ” Those Responsible for Approving Research Studies Have Poor Knowledge of Research Study Design: A Knowledge Assessment of Institutional Review Board Members ,” Acta Informatica Medica 23, no. 4 (2015): 196 – 201.

Sayers, G. M., ” Should Research Ethics Committees Be Told How to Think? ,” Journal of Medical Ethics 33, no. 1 (2007): 39 – 42.

Klitzman, The Ethics Police? ; Klitzman, “The Myth of Community Differences as the Cause of Variations among IRBs.”

Friesen, P., B. Redman, and A. Caplan, “Of Straws, Camels, Research Regulation, and IRBs,” Therapeutic Innovation & Regulatory Science (September 3, 2018): doi: 10.1177/2168479018783740 ; Emanuel, E. J., et al, “Oversight of Human Participants Research: Identifying Problems to Evaluate Reform Proposals,” Annals of Internal Medicine 141, no. 4 (2004): 282–91.

Stark, Behind Closed Doors: IRBs and the Making of Ethical Research.

Klitzman, ” The Myth of Community Differences as the Cause of Variations among IRBs “; Angell, E. L., et al., “Is ‘Inconsistency’ in Research Ethics Committee Decision-Making Really a Problem? An Empirical Investigation and Reflection,” Clinical Ethics 2, no. 2 (2007): 92–99; Edwards, S. J., R. Ashcroft and S. Kirchin, “Research Ethics Committees: Differences and Moral Judgement,” Bioethics 18, no. 5 (2004): 408–27.

Edwards, Ashcroft and Kirchin, ” Research Ethics Committees ,” 416.

Edwards, Ashcroft and Kirchin, ” Research Ethics Committees. “

McGuinness, S., ” Research Ethics Committees: The Role of Ethics in a Regulatory Authority ,” Journal of Medical Ethics 34, no. 9 (2008): 695 – 700.

Sayers, ” Should Research Ethics Committees Be Told How to Think? “

Klitzman, ” The Myth of Community Differences as the Cause of Variations among IRBs. “

This distinction relates closely to one drawn by McGuiness in her discussion of IRB deliberations, which refers to procedural and substantive issues in deliberation (McGuinness, “Research Ethics Committees”), as well as to Rawls’s distinction between procedural and substantive justice. See, Rawls, J., A Theory of Justice (Cambridge, MA : Belknap Press, 1971).

This is not an argument that these systems have been successful in selecting the right criteria to take into consideration, but only an acknowledgement that they intend to. See ” Questions and Answers for Transplant Candidates about the Liver Allocation ,” United Network for Organ Sharing, 2014, https://unos.org/wp-content/uploads/unos/Liver_patient .

Angell, et al., ” Consistency in Decision Making by Research Ethics Committees: A Controlled Comparison ,” Journal of Medical Ethics 32, no. 11 (2006): 662 – 64 ; Stair, T. O., et al., “Variation in Institutional Review Board Responses to a Standard Protocol for a Multicenter Clinical Trial,” Academic Emergency Medicine 8, no. 6 (2001): 636–41; Stark, A., J. Tyson and P. Hibberd, “Variation among Institutional Review Boards in Evaluating the Design of a Multicenter Randomized Trial,” Journal of Perinatology 30, no. 3 (2010): 163–69.

Goldman and Katz, ” Inconsistency and Institutional Review Boards. “

Interestingly, those writing about inconsistencies in IRBs have tended to focus on fairness to either investigators or research participants, but not both (e.g., Sayers writes primarily about injustice to researchers, while Edwards et al. discuss unfairness to participants [Sayers, ” Should Research Ethics Committees Be Told How to Think? “; Edwards, Ashcroft, and Kirchin, “Research Ethics Committees”]).

Keith-Spiegel, P., and G. P. Koocher, ” The IRB Paradox: Could the Protectors Also Encourage Deceit? ,” Ethics & Behavior 15, no. 4 (2005): 339 – 49.

Skarlicki, D. P., and R. Folger., ” Retaliation in the Workplace: The Roles of Distributive, Procedural, and Interactional Justice ,” Journal of Applied Psychology 82, no. 3 (1997): 434 – 43.

Keith-Spiegel, P., G. Koocher, and B. Tabachnick, ” What Scientists Want from Their Research Ethics Committee ,” Journal of Empirical Research on Human Research Ethics 1, no. 1 (2006): 67-82, at 78.

Keith-Spiegel and Koocher, ” The IRB Paradox. “

Dubois, J. M., ” Is Compliance a Professional Virtue of Researchers? Reflections on Promoting the Responsible Conduct of Research ,” Ethics & Behavior 14, no. 4 (2004): 383 – 95 ; Klitzman, “The Myth of Community Differences as the Cause of Variations among IRBs.”

Keith-Spiegel, Koocher and Tabachnick, ” What Scientists Want from Their Research Ethics Committee. “

There is a nest of issues in here about the relationship between scientific and ethical considerations that are beyond the scope of this paper but that are important to acknowledge. See, Freedman, B., ” Scientific Value and Validity as Ethical Requirements for Research: A Proposed Explication ,” IRB: A Review of Human Subjects Research 9, no. 6 (1987): 7 – 10 ; Dawson, A. J., and S. M. Yentis, “Contesting the Science/Ethics Distinction in the Review of Clinical Research,” Journal of Medical Ethics 33, no. 3 (2007): 165–67; Angell, E. L., et al., “An Analysis of Decision Letters by Research Ethics Committees: The Ethics/Scientific Quality Boundary Examined,” BMJ Quality & Safety 17, no. 2 (2008): 131–36.

Klitzman, The Ethics Police? ; Klitzman, “The Myth of Community Differences as the Cause of Variations among IRBs.”

By unpacking this dilemma, we do not intend to suggest that any research ethicists are arguing for an entirely algorithmic process of decision-making but merely to illuminate the issues that are likely to arise if one leans too far toward either horn of the dilemma.

Angell et al., “Is ‘Inconsistency’ in Research Ethics Committee Decision-Making Really a Problem? An Empirical Investigation and Reflection”; Klitzman, “The Myth of Community Differences as the Cause of Variations among IRBs”; Hirshon et al., “Variability in Institutional Review Board Assessment of Minimal-Risk Research.” An exception is Schneider, who offers a detailed argument in favor of replacing the current system of research governance with a system of tort law and self-regulation, but this is a different connection than the one we emphasize here (Schneider, C., The Censor’s Hand: The Misregulation of Human-Subject Research [Cambridge, MA: MIT Press, 2015]).

Baumgartner, F. R., et al., “These Lives Matter, Those Ones Don’t: Comparing Execution Rates by the Race and Gender of the Victim in the US and in the Top Death Penalty States,” Albany Law Review 79 (2015): 797–860.

Bagaric, M., and A. Pathinayake, “The Paradox of Parity in Sentencing in Australia: The Pursuit of Equal Justice That Highlights the Futility of Consistency in Sentencing,” Journal of Criminal Law 77, no. 5 (2013): 399–416, at 400.

Note that this is arguably the case for all moral problems, and yet, as discussed above (with respect to organ donations), in less complex cases, the most important morally relevant features can be selected and built into an adequate system for decision-making.

Angell et al., “Is ‘Inconsistency’ in Research Ethics Committee Decision-Making Really a Problem?,” 13.

Of course, evolution within ethical reasoning does not justify inconsistencies within or across IRBs in the present day, but adopting an algorithmic process of decision-making would restrict the potential for such evolution to take place.

Maslen, H., “Penitence and Persistence: How Should Sentencing Factors Interact?,” in Exploring Sentencing Practice in England and Wales, ed. J. V. Roberts (Basingstoke, Hampshire, England: Springer, 2015), 173–93.

Interestingly, it appears that IRB members’ confidence in their risk assessments decreases as the riskiness of research increases. Grinnell, F., et al., “Confidence of IRB/REC Members in Their Assessments of Human Research Risk: A Study of IRB/REC Decision Making in Action,” Journal of Empirical Research on Human Research Ethics 12, no. 3 (2017): 140–49.

Friesen, P., et al., “Rethinking the Belmont Report?,” American Journal of Bioethics 17, no. 7 (2017): 15–21.

Faden, R. R., et al., “An Ethics Framework for a Learning Health Care System: A Departure from Traditional Research Ethics and Clinical Ethics,” Ethical Oversight of Learning Health Care Systems, special report, Hastings Center Report 43, no. 1 (2013): S16–S27; Quigley, D., Compilation on Environmental Health Research Ethics Issues with Native Communities (Syracuse, NY: Syracuse Initiative for Research Ethics in Environmental Health, 2001); Puschmann, C., and E. Bozdag, “Staking Out the Unclear Ethical Terrain of Online Social Experiments,” Internet Policy Review 3, no. 4 (2014): 1–15.

Mustard, D. B., “Racial, Ethnic, and Gender Disparities in Sentencing: Evidence from the US Federal Courts,” Journal of Law and Economics 44, no. 1 (2001): 285–314.

Bagaric and Pathinayake, “The Paradox of Parity in Sentencing in Australia.”

Department of Justice, Fact Sheet: The Impact of United States v. Booker on Federal Sentencing, March 15, 2006, https://www.justice.gov/archive/opa/docs/United%5fStates%5fv%5fBooker%5fFact%5fSheet.

Mustard, “Racial, Ethnic, and Gender Disparities in Sentencing: Evidence from the US Federal Courts.”

Bagaric and Pathinayake, “The Paradox of Parity in Sentencing in Australia,” 400.

Emanuel, E. J., et al., “Oversight of Human Participants Research: Identifying Problems to Evaluate Reform Proposals.”

Sayers, “Should Research Ethics Committees Be Told How to Think?”; Edwards, Ashcroft, and Kirchin, “Research Ethics Committees.”

Ryan, K., et al., The Belmont Report: Ethical Principles and Guidelines for the Protection of Human Subjects of Research (Washington, DC: U.S. Government Printing Office, 1979).

Trace and Kolstoe, “Measuring Inconsistency in Research Ethics Committee Review.”

Abbott and Grady, “A Systematic Review of the Empirical Literature Evaluating IRBs”; Angell et al., “Is ‘Inconsistency’ in Research Ethics Committee Decision-Making Really a Problem?”

Trace and Kolstoe, “Measuring Inconsistency in Research Ethics Committee Review”; Heasman, P., A. Gregoire, and H. Davies, “Helping Research Ethics Committees Share Their Experience, Learn from Review and Develop Consensus: An Observational Study of the UK Shared Ethical Debate,” Research Ethics 7, no. 1 (2011): 13–18; Davies, H., “Standards for Research Ethics Committees: Purpose, Problems and Possibilities,” Journal of Medical Ethics 35 (2008): 152–57.

Klitzman, R., “From Anonymity to ‘Open Doors’: IRB Responses to Tensions with Researchers,” BMC Research Notes 5, no. 1 (2012): 347–58; Fernandez Lynch, H., “Opening Closed Doors: Promoting IRB Transparency,” Journal of Law, Medicine & Ethics 46, no. 1 (2018): 145–58.

Sheehan, M., “Should Research Ethics Committees Meet in Public?,” Journal of Medical Ethics 34, no. 8 (2008): 631–35.

Fernandez Lynch, “Opening Closed Doors.”

Stark, Behind Closed Doors: IRBs and the Making of Ethical Research; Klitzman, “From Anonymity to ‘Open Doors.’”

Fernandez Lynch, “Opening Closed Doors.”

Drawing on Daniels and Sabin’s concept of “accountability for reasonableness,” McGuinness also emphasizes the importance of reason giving in IRB decision-making. McGuinness, “Research Ethics Committees”; Daniels, N., and J. Sabin, “Limits to Health Care: Fair Procedures, Democratic Deliberation, and the Legitimacy Problem for Insurers,” Philosophy & Public Affairs 26, no. 4 (1997): 303–50.

Fernandez Lynch, “Opening Closed Doors,” at 153.

Keith-Spiegel, Koocher, and Tabachnick, “What Scientists Want from Their Research Ethics Committee.”

It is worth noting that another aspect of IRB decision-making that can improve perceptions of fairness is the availability of an appeals process. Currently, such a process is available in several countries (such as the United Kingdom, Australia, and Canada), but not in the United States.


By Phoebe Friesen, Aimi Nadia Mohd Yusof, and Mark Sheehan


References

Friesen, P., Yusof, A. N. M., & Sheehan, M. (2019). Should the Decisions of Institutional Review Boards Be Consistent? Ethics & Human Research, 41(4), 2–14. https://doi.org/10.1002/eahr.500022
