May 2, 2013

Janssen and Johnson & Johnson to Provide Webcast Presentation of Simeprevir Phase 3 Clinical Data Presented at The International Liver Congress of the European Association for the Study of the Liver (EASL)


NEW BRUNSWICK, N.J., May 2, 2013 /PRNewswire/ -- Janssen R&D Ireland (Janssen) and Johnson & Johnson (NYSE: JNJ) will provide a pre-recorded webcast for investors and other interested parties on Friday, May 3, at approximately 8:30 a.m., Eastern Time, to discuss simeprevir phase 3 clinical data presented at The International Liver Congress of the European Association for the Study of the Liver (EASL).

A pre-recorded webcast featuring management from Janssen will provide an overview of results from the phase 3 QUEST-1 and QUEST-2 studies of the investigational protease inhibitor simeprevir (TMC435) administered once daily in combination with pegylated interferon and ribavirin in treatment-naive genotype 1 chronic hepatitis C patients.

The webcast/podcast can be accessed by visiting the Johnson & Johnson website at www.investor.jnj.com and clicking on "Webcasts/Presentations."

SOURCE Johnson & Johnson

RELATED LINKS
http://www.jnj.com

Source

Donor-recipient matching: Myths and realities

Journal of Hepatology
Volume 58, Issue 4, Pages 811-820, April 2013

Javier Briceño, Ruben Ciria, Manuel de la Mata

Received 25 July 2012; received in revised form 17 September 2012; accepted 13 October 2012; published online 25 October 2012

Summary

Liver transplant outcomes keep improving, with refinements of surgical technique, immunosuppression and post-transplant care. However, these excellent results and the limited number of organs available have led to an increasing number of potential recipients with end-stage liver disease worldwide. Deaths on waiting lists have led liver transplant teams to maximize every organ offered and used, in terms of both pre- and post-transplant benefit. Donor-recipient (D-R) matching can be defined as the technique of verifying that D-R pairs are adequately associated according to patterns of donor and recipient variables. D-R matching has been extensively analysed, and donor allocation policies have tried to maximize organ utilization whilst still protecting individual interests. However, D-R matching has developed through trial and error, and the introduction of each new score has been followed by strong discrepancies and controversies. Current allocation systems are based on isolated or combined donor or recipient characteristics. This review analyzes current knowledge about D-R matching methods, focusing on three main categories: patient-based policies, donor-based policies and combined donor-recipient systems. They rest on three mainstays that support three different concepts of D-R matching: prioritarianism (favouring the worst-off), utilitarianism (maximising total benefit) and social benefit (cost-effectiveness). Each, with its pros and cons, offers a controversial topic for discussion. Together they define D-R matching today, turning into myth what we considered a reality in the past.

Abbreviations: D-R, donor-recipient; UNOS, United Network for Organ Sharing; MELD, Model for End-Stage Liver Disease; SBE, symptom-based exceptions; INR, international normalized ratio; LT, liver transplantation; ECD, extended criteria donors; SB, survival benefit; CIT, cold ischemia time; DRI, donor risk index; DCD, donation after cardiac death; SRTR, Scientific Registry of Transplant Recipients; PGF, primary graft failure; SOLD, Score Of Liver Donor; SOFT, Survival Outcomes Following Liver Transplant; ROC, receiver operating characteristic; BAR, balance of risk score; HCC, hepatocellular carcinoma; HCV, hepatitis C virus

Keywords: Liver, Transplantation, Donor-recipient, Matching, Outcomes, Allocation

Introduction

Liver allocation policies have progressed through successive strategies aimed at turning arbitrary criteria into well-established, objective models of prioritization. The speed of this transition has led to the coexistence of different models and metrics, each with its pros and cons, its strengths and limitations, its dogmas and fashions; in short, with their myths and realities.

Liver transplant (LT) outcomes have improved over the past two decades. Unfortunately, with an increasing number of individuals with end-stage liver disease and a limited number of organs to meet this demand, the growing discrepancy has produced the dismal scenario of waitlist deaths [1]. Moreover, the use of less stringent selection criteria to expand the donor pool has highlighted the importance of recipient and donor factors in transplant outcomes [2].

Donor-recipient (D-R) matching has been extensively analysed, and donor allocation policies have tried to maximize organ utilization whilst still protecting individual patient interests. However, D-R matching has developed through trial and error, starting from weak baseline rules [3] that have changed continuously. Repeated analyses and over-analyses of databases have yielded non-uniform donor and/or recipient selection criteria for appropriate D-R matching. In the late 1990s, traditional regression models, estimating the average association of one factor with another, were used [4]. With these, an independent association could be demonstrated whilst adjusting for other confounding factors. However, this was a simplistic approach when many donor and recipient variables were considered [5], [6], [7]. The subsequent move to stratified models was more realistic, and very useful scores have been derived with this approach [8]. However, the rising expectations that accompanied the development of each new score have been followed by strong discrepancies and controversies [9], [10].

Match is defined as “a pair suitably associated” [11]. D-R matching could thus be defined as “the technique of verifying that D-R pairs are adequately associated according to patterns of donor and patient variables”. This definition, however, lacks a purpose. Possible purposes include graft survival, patient survival, waitlist survival, survival benefit and evidence-based survival; furthermore, all of them can be weighed in terms of transparency, individual and/or social justice, population utility and overall equity [10]. D-R matching combines a donor acceptance policy and an allocation policy to gain advantages (i.e., survival) in precision over a random, experimental or subjective assignment. In some circumstances, D-R matching simply means greater exactness. Current allocation systems are based on isolated or combined donor or recipient characteristics (Fig. 1). This review considers patient-based systems, donor-risk-based systems, and combined D-R-based systems (Table 1).


Fig. 1. Current composite formulations of donor-risk-based systems (left), patient-risk-based systems (right) and combined donor-recipient-based systems (middle) for donor-recipient matching available in the literature. COD (cause of death), CVA (cerebrovascular accident), DCDD (donation after circulatory determination of death) and PVT (portal vein thrombosis).

Table 1. Classification of allocation models useful for donor-recipient matching.


Patient-based policies

Urgency principle: MELD/MELD-like

In the early years of LT, allocation was a clinician-guided decision. Time on the waiting list became the major determinant of receiving a graft, but this allocation system engendered an unacceptable number of inequities for many candidate subsets and profound regional and centre differences. In this context, a score named MELD [12], the acronym of the Model for End-Stage Liver Disease, became a metric by which the severity of liver disease could be accurately described. Moreover, listed candidates could be ranked by their risk of waiting list mortality independently of time on the list (medical urgency) [5]. UNOS made several changes to the calculation of the MELD score [13]. These adjustments resulted in a continuous score spanning the lowest to the highest probability of 3-month waitlist mortality. As the model is based on purely objective laboratory variables, a potentially transparent, observer-independent method has been implemented in the United States since 2002, and soon afterwards worldwide [14], [15], [16], [17].
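For illustration, the sketch below computes the classic laboratory MELD score from its three components. It follows the widely cited UNOS formulation with its usual bounding rules (lab values floored at 1.0, creatinine capped at 4.0, final score capped at 40); as noted above, the exact adjustments have varied across UNOS policy updates, so treat this as an approximation rather than the official implementation.

```python
import math

def meld_score(creatinine_mg_dl, bilirubin_mg_dl, inr, on_dialysis=False):
    """Classic laboratory MELD score (pre-MELD-Na), as commonly cited.

    Lab values below 1.0 are rounded up to 1.0, creatinine is capped at 4.0
    (or set to 4.0 if the patient was dialysed at least twice in the past week),
    and the final score is rounded and capped at 40.
    """
    creatinine = 4.0 if on_dialysis else min(max(creatinine_mg_dl, 1.0), 4.0)
    bilirubin = max(bilirubin_mg_dl, 1.0)
    inr = max(inr, 1.0)
    raw = (9.57 * math.log(creatinine)
           + 3.78 * math.log(bilirubin)
           + 11.20 * math.log(inr)
           + 6.43)
    return min(round(raw), 40)

# Example: creatinine 2.0 mg/dl, bilirubin 4.5 mg/dl, INR 1.8 -> MELD of about 25
print(meld_score(2.0, 4.5, 1.8))
```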

The strengths of MELD are its significant contribution to reducing mortality on the waiting list [13], [18], a reduction in the number of futile listings, a decrease in the median waiting time to LT, and, finally, an increase in the mean MELD score of patients who underwent transplantation after MELD implementation [10].

Unfortunately, the initial enthusiasm with MELD was followed by some caveats and objections that weaken its solidity. First, although the empirical adjustments by UNOS in calculating the MELD score have some rationale, they were not based on validated studies [13], [19]. Second, the components of the MELD formula (creatinine, bilirubin and INR) are not so objective, owing to interlaboratory variability [9], [20], [21], with a reported risk of “gaming” the system by choosing one laboratory or another [22]. Third, MELD is only useful for most non-urgent cirrhotic patients; growing indications such as hepatocellular carcinoma and symptom-based exceptions (SBE) are all mis-scored by MELD, leaving half of the patients inadequately scored and dependent on priority extra points that have been arbitrarily adjusted up and down [17], [23], [24], [25]. Fourth, extreme MELD values may be considered exceptions as well, i.e., the highest MELD values (>40) [26] and lower-MELD patients with cirrhosis and hyponatremia. A move to replace the classic MELD score with a more comprehensive MELD-sodium score is under debate [27]. Alongside this MELD-Na score, a myriad of MELD-based refinements for particular situations has been described: delta-MELD [28]; MELD-XI [29]; and MELD-gender [30]. The UKELD score is the equivalent of MELD in the United Kingdom [31]. None of these variations has gained enough traction to replace the original MELD score [32].

D-R matching occurs at the time of organ procurement. However, because the MELD score ignores donor characteristics, the assignment of a donor to the sickest listed patient cannot be considered true D-R matching. Therefore, in a MELD-based allocation policy, a given D-R combination does not necessarily mean the best combination in terms of outcome. One of the drawbacks of the MELD score is the impossibility of donor selection. This is most evident in patients with similar MELD scores, as a similar score does not mean equal outcomes, especially with the growing use of extended criteria donors (ECD). The MELD score may correctly stratify patients according to their level of sickness, but D-R pairs are not well categorized in accordance with the net benefit of their combinations. MELD was not designed for D-R matching and is therefore a suboptimal tool for this aim.

Utility-based principle (survival benefit)

MELD lacks utility in predicting post-transplant outcomes. Several studies have shown a poor correlation between pretransplant disease severity and post-transplant outcome [13], [33], [34], [35], [36]. An allocation policy based on the utility principle would give priority to candidates with better expected outcomes, rather than emphasizing waitlist mortality.

Considering both waitlist mortality (urgency principle) and post-transplant mortality (utility principle), a survival-benefit concept has recently been introduced [6]. Survival benefit (SB) computes the difference between the mean lifetime with and without an LT. This allocation system seeks to minimize futile LT, giving priority to the patients with the greatest predicted lifetime gained from transplantation [37]. Under an SB model, an allocated graft goes to the patient with the greatest difference between the predicted post-transplant lifetime and the predicted waiting list lifetime for that specific donor [38], [39].
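As a minimal, purely illustrative sketch of the survival-benefit idea, the snippet below estimates SB as the difference between the restricted mean lifetime with and without transplantation, computed as the area under two survival curves. The curves, the horizon and the function names are hypothetical; the published SB models rest on far richer covariate-adjusted hazard models.

```python
import numpy as np

def restricted_mean_survival(surv_prob, times_years):
    """Area under a survival curve up to the end of follow-up (in life-years)."""
    return float(np.trapz(surv_prob, times_years))

def survival_benefit(times_years, surv_with_lt, surv_without_lt):
    """Survival benefit = expected lifetime with LT minus expected lifetime on
    the waiting list, both restricted to the same follow-up horizon."""
    return (restricted_mean_survival(surv_with_lt, times_years)
            - restricted_mean_survival(surv_without_lt, times_years))

# Hypothetical 5-year survival curves for one candidate/donor combination
times = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])               # years
with_lt = np.array([1.00, 0.90, 0.85, 0.80, 0.76, 0.72])       # post-transplant survival
without_lt = np.array([1.00, 0.60, 0.40, 0.28, 0.20, 0.15])    # waiting list survival
print(f"Estimated SB: {survival_benefit(times, with_lt, without_lt):.2f} life-years")
```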

In its first conception, Merion [6] analyzed post-transplant mortality risk (with transplant) and waiting list mortality risk (without transplant). This early analysis established some important concepts: (1) LT provides an overall advantage over remaining on the waiting list. (2) Survival benefit increases with rising MELD score. (3) Consequently, there is a set of listed patients who would not benefit from LT and who might preferably be removed from the list. The threshold for a survival benefit with transplantation is a MELD score of about 15 [40].

This initial report did not consider the impact of donor quality. Another important concern was the short post-transplant follow-up (a maximum of one year) used to estimate SB. Schaubel et al. [39] improved the SB estimates by including sequential stratifications of the donor risk index (DRI) [41]. They estimated SB according to cross-classifications of candidate MELD score and DRI. The conclusions of this study included: (1) in terms of SB, candidates with different MELD scores show different benefit according to DRI. Whilst higher-MELD patients have a significant SB from transplantation regardless of DRI, lower-MELD candidates who receive higher-DRI organs experience higher mortality and do not demonstrate significant SB. (2) High-DRI organs are more often transplanted into lower-MELD recipients and vice versa. This unintended consequence of the MELD allocation policy tries to avoid adding recipient and donor risks, but is accompanied by a consequent decrease in post-transplant survival [42].

Current contributions to the SB allocation scheme consist of a myriad of complex mathematical refinements [40]. SB now represents the balance between 5-year waiting list mortality and post-transplant mortality, globally combining patient and donor characteristics. Probably the most important lesson of the current SB score is that it captures not only individual but also collective benefit: the maximum gain to the patient population as a whole will occur if the patient with the greatest benefit score receives the organ. In this sense, the benefit principle is also a utilitarian principle [43].

Allocation of organs in LT has gone through three stages: from a “first-come, first-served” principle (waiting time) [44] to a “favouring the worst-off or prioritarianism” principle (sickest first) [45], and, recently, to a “maximising total benefits or utilitarianism” principle (survival benefit). The last combines two simple principles: the “number of lives saved” principle [46] and the “prognosis or life-years saved” principle [47]. Perhaps the most considerable advantage of an SB model is its ability to consider prognosis. Rather than saving the most lives, SB aims to save the most life-years. Living more years and saving more years are both valuable; in this sense, saving 2000 life-years per year is attractive [39]. However, three main considerations must be made. First, SB favours acutely ill patients irrespective of donor quality, because the most life-years saved occur in higher-MELD patients. Waitlist mortality risk considerably outweighs post-transplant mortality in higher-MELD candidates; healthier candidates, however, even with lower-DRI organs, are penalized because liver transplantation in this group is more hazardous than remaining on the waiting list. Giving many life-years to a few (highest-SB patients) differs from giving a few life-years to many (lowest-SB patients) [47]. Survival benefit is undeniably valuable but insufficient alone [43]. Second, the LT-SB principle is not a pure “maximising total benefits” principle. It was elaborated from the teachings of the MELD score, and that is why the caveats of the MELD score have been carried over into the SB scheme. Recently, Schaubel et al. [39] tried to obviate this problem by incorporating 13 candidate parameters (not only the three of MELD) and MELD exceptions. The situation becomes more troubling when the authors compute the post-transplant survival model by multiplying the hazard ratio of a given recipient by the hazard ratio of a given donor. This is biased because it assumes independence of the D-R hazards and an absence of interactions between cross-sections of D-R pairs [48]. Third, survival-benefit-based allocation fails to account for recipient age in the proposed benefit score. Even though age predicts both pre- and post-transplant survival, LT benefit does not differ much across the spectrum of patients aged 20–70 years. Ethically, saving life-years for the oldest and sickest patients may not be equivalent to saving them for younger people who have not yet lived a complete life and are unlikely to do so without LT. The SB score lacks distributive justice [43], [49].

Donor-based policies

The growing discrepancy between demand and supply is at the forefront of current dilemmas in LT. Two decades ago, the need to achieve consistent outcomes with this new therapy led clinicians to select top-quality liver donors. The increase in both listed patients and waiting list mortality led to the expansion of acceptance criteria [1], [50], [51], [52]. Progressively, the qualitative effect of individual donor variables became understood. The terms “marginal” or “suboptimal” donors gave way to the current terminology of “extended criteria donors” [53]. As clinicians have learnt to see ECD with eyes wide open, the use of lower-quality donors has become routine practice to expand the donor pool. Donors are generally considered “extended” if there is a risk of primary non-function or initial poor function, although those that may cause late graft loss may also be included [54].

Traditionally, liver donors have been considered “bad” or “good” according to whether an extended criterion was present or not [55]. However, some concerns must be highlighted. First, each extended criterion has an evolving and subjective “tolerance threshold”: a 50-year-old donor in the 1990s was considered as risky as an 80-year-old donor is nowadays [56], [57], [58]. Second, even with ECD, graft and/or patient outcomes are not necessarily poor, as graft survival depends on several factors [59], [60]. Moreover, ECD may work well in high-risk recipients [38]. Third, some extended criteria act in combination: liver grafts from elderly donors and/or donors with steatosis are even more affected by prolonged cold ischemia time (CIT) and preservation injury [61], [62]. Indeed, the accumulation of ECD variables influences graft survival in a MELD-based allocation system for LT [63].

Feng et al. [41] introduced the concept of the donor risk index (DRI). The DRI objectively weighs donor variables that affect transplant outcomes: donor age, donation after cardiac death (DCD) and split/partial grafts are strongly associated with graft failure; African-American race, short stature and cerebrovascular accident as the cause of death are modestly associated with graft failure. Together with two transplant factors, CIT and sharing outside the local donor area, these configure a quantitative, objective and continuous metric of liver quality based on factors recoverable at the time of an offer. Donor quality thus represents an easily computed continuum of risk [55]. Unlike simpler previous scores for donor risk assessment [64], [65], [66], [67], the DRI offers a single value for each donor and allows outcomes to be compared across clinical trials and practice guidelines. The DRI has been validated in the United States and, recently, in Europe [68], [69], [70], [71]. Perhaps the most important contribution of the DRI is to give formal consideration to variables that previously were only intuitive [53]. However, although it is useful and shows significant differences within strata, several discrepancies between DRI values in Europe and the United States have been reported [71], thus limiting its universal applicability.
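The sketch below illustrates the structure of a DRI-style index: additive log-hazard contributions from the donor and transplant factors listed above, exponentiated into a single continuous risk multiplier. The coefficients here are placeholders chosen purely for illustration and are not the values published by Feng et al., which should be taken from the original report.

```python
import math

def dri_like_index(donor_age, dcd, split_graft, cva_death,
                   height_cm, regional_share, national_share, cit_hours):
    """Illustrative donor-risk index in the spirit of the DRI: additive
    log-hazard terms exponentiated into one continuous multiplier of
    graft-failure risk. Coefficients are placeholders, not published values."""
    log_hazard = 0.0
    log_hazard += 0.01 * max(donor_age - 40, 0)          # donor age above 40 years
    log_hazard += 0.40 if dcd else 0.0                   # donation after cardiac death
    log_hazard += 0.40 if split_graft else 0.0           # split/partial graft
    log_hazard += 0.15 if cva_death else 0.0             # cerebrovascular cause of death
    log_hazard += 0.07 * max(170 - height_cm, 0) / 10    # short stature
    log_hazard += 0.10 if regional_share else 0.0        # shared outside the local area
    log_hazard += 0.25 if national_share else 0.0        # national share
    log_hazard += 0.01 * cit_hours                       # cold ischemia time
    return math.exp(log_hazard)

# A 65-year-old, 165 cm, brain-dead donor who died of a CVA, shared locally, 8 h CIT
print(round(dri_like_index(65, False, False, True, 165, False, False, 8), 2))
```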

The DRI offers a rationale for evidence-based D-R matching. However, even considering the DRI at the time of the offer, D-R matching is not easily scheduled, and the distribution of risks from donors and recipients depends on the allocation scheme. Two main possibilities of matching exist: (1) synergism-of-risks matching, where a high-risk-for-high-risk policy matches high-DRI donors to high-MELD patients, or (2) division-of-risks matching, where a high-risk-for-low-risk (and vice versa) policy matches high-DRI donors to low-MELD patients (and vice versa). Again, the allocation principle dictates which of these policies is adopted (Table 2): (a) under a sickest-first allocation, the trend has been division-of-risks matching. Because the sickest candidate may presumably have a difficult postoperative course, a wise criterion would be to avoid using a high-DRI organ (with its additive worsening of graft survival) [39], [42], [68], [70], [71], [72]. (b) Under an SB allocation system, where the main goal is the maximization of utility, synergism-of-risks matching must prevail [73]. In their first analysis, Schaubel et al. [38] showed a significantly higher risk to stable patients; however, grafts from ECD increase the chance of long-term survival for patients at high risk of dying of their liver disease. As Henri Bismuth stated, “the highest risk for a patient needing a new liver is the risk of never being transplanted” [74]. (c) Under a cost-effectiveness allocation system, MELD and DRI interact to synergistically increase the cost of LT. High DRI increases the cost of transplantation at all MELD strata. Moreover, the magnitude of the cost according to organ quality is greater in high-MELD patients, whereas low-MELD patients have a minimal rise in cost when receiving ECD grafts. From a conceptual view, division-of-risks matching is mandatory according to economics [75].

Table 2. Donor-recipient matching according to the allocation scheme and the principle for allocation of scarce liver donors.


Some limitations of the DRI must be addressed. First, the DRI was derived from pre-MELD era data [41]. Second, the DRI is largely a donor-age index, because age is the most striking variable in its formulation. Third, the DRI is a hazard model of donor variables known or knowable at the time of procurement (such as CIT and graft steatosis); in fact, macrosteatosis was excluded from the analysis even though it has been reported to be an independent risk factor for increased I/R injury and impaired graft survival [76], [77], [78] and could be useful in evaluating donor risk [79]. Finally, it has been argued that the DRI is impractical [80] because, at the time of its conception, several variables were not available in the Scientific Registry of Transplant Recipients (SRTR) database. The DRI may therefore only partially capture the real magnitude of donor quality on transplant outcome. Just as the MELD score lacks donor variables for prognosis, the DRI lacks candidate variables. Per se, the DRI alone is a suboptimal tool for D-R matching.

Combined donor-recipient-based systems (Table 3)

Relatively few studies have attempted to develop comprehensive models for predicting post-transplant survival, and none has been validated [64], [67]. We discussed in a previous section the SB model, in which candidates with different MELD scores show different benefit according to DRI [38], [39]. Amin et al. described a Markov decision-analytic model comparing estimated post-transplantation survival while waiting for a standard donor with survival after immediate ECD transplantation [81]. Under this decision model, transplantation with an available ECD graft should be preferred over waiting for a standard organ in patients with high MELD scores. At lower MELD scores, the SB depends on the risk of primary graft failure (PGF) associated with the ECD organ. These results support the concept of SB from a theoretical viewpoint. However, the study was based on simulation, and the validity of its results rests on unverifiable assumptions (the possibility of recovery from PGF, the rate of retransplantation for PGF, the lack of consideration of late graft failure and the availability of a standard criteria donor liver) [38], [82].

Table 3. Main combined donor-recipient-based systems for D-R matching.


n.a., not available.

Afterwards, Ioannou analysed a pre-MELD era database available from UNOS, covering 1994 to 2003 (about 30% of all eligible transplants in that period) [83]. Four donor and 9 recipient characteristics adequately predicted survival after LT in patients without hepatitis C virus, and a slightly different model was used for patients with hepatitis C virus. The study also derived a risk score from the 4 donor variables included in the survival models, called the Score Of Liver Donor (SOLD). This study only offered a series of single risk factors without developing a composite score (except for the donor criteria). Moreover, an excess of categories and strata among the included variables, and a data-splitting approach, made it of little practical use.

An ambitious scoring system that predicts recipient survival following LT deserves full consideration. Rana et al. [7] identified 4 donor, 13 recipient and 1 operative variable as significant predictors of 3-month mortality following LT. Two complementary scoring systems were designed: a preallocation Survival Outcomes Following Liver Transplant (P-SOFT) score and a SOFT score, the latter obtained by adding to the P-SOFT points those awarded for donor criteria, 1 recipient condition (portal bleeding within 48 h pretransplant) and two logistical factors (CIT and national allocation) at the time of procurement. Areas under the ROC curves for 3-month survival were 0.69 and 0.70 for the P-SOFT and SOFT scores, respectively. These two scores combine the sickest-first principle (MELD was included in the regression analysis) and a prognosis principle (3-month mortality), but SB is not considered. Although the SOFT score alongside the MELD score would theoretically allow practitioners to make a real-time decision on a particular offer, some concerns must be considered: (1) points were assigned to each risk factor based on its odds ratio (one positive/negative point was awarded for every 10% increase/decrease in risk), which seems arbitrary, as sketched below; (2) the score includes observer-dependent variables (encephalopathy, ascites pretransplant), time-dependent variables (recipient albumin), and therapy-dependent variables (dialysis prior to transplantation, intensive care unit or hospital admission pretransplant, life support pretransplant); (3) the weight of the preallocation variables (P-SOFT) is simply added to the variables included at the time of the offer, which is an empirical assumption; (4) warm ischemia time was removed from the SOFT score (OR 2.3) since it cannot be predicted prior to transplantation; (5) the point ranges of the risk groups in the SOFT score are arbitrarily unequal.
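The point-assignment rule criticized in point (1) can be written down directly from its description: each risk factor contributes one positive or negative point per 10% increase or decrease in risk. The helper below is a sketch of that mapping only, not of the published SOFT point table, and the rounding behaviour is an assumption.

```python
def soft_points_from_risk_change(percent_risk_change):
    """One point per 10% change in risk attributable to a factor, as described
    for the SOFT score; the rounding rule used here is an assumption."""
    return round(percent_risk_change / 10)

# A factor raising risk by 35% contributes +4 points under this rounding;
# one lowering risk by 20% contributes -2 points.
print(soft_points_from_risk_change(35), soft_points_from_risk_change(-20))
```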

There is certainly room for improvement in the high-risk-for-low-risk (and vice versa) policy. Recently, Halldorson et al. [84] proposed a simple score, D-MELD, combining the best of the sickest-first policy (laboratory MELD) and of the DRI (donor age). The product of these two continuous variables yields an incremental gradient of risk for operative mortality and for complications, estimated as length of hospitalization. A cut-off D-MELD score of 1600 defines a subgroup of D-R matches with poorer outcomes. The strengths of this system are simplicity, objectivity and transparency. A final rule of D-MELD would be to eliminate matches with D-MELD >1600 (futile transplants). D-MELD is a pure prognosis-based allocation system and clashes head-on with the SB principle: a 65-year-old donor would be refused for a MELD-25 candidate (D-MELD 1625), even though this match offers an SB of approximately 2.0 life-years saved [39]. This policy could endanger high-risk patients, especially in a low-donation-rate organ network, because refusal of a >1600 D-MELD match is not necessarily followed by a favourable match in due time. Based on the principle that transplants with a 5-year patient survival <50% (5-year PS <50%) [18], [85], [86], [87], [88] should not be performed, in order to avoid organ wastage, a national Italian study explored potential applications of D-MELD [89]. A cut-off value predicting 5-year PS <50% was identified only in HCV patients, at a D-MELD of 1750. For a given match, donor age (ranging from 18 to 80 years) outweighs MELD score (ranging from 3 to 40). It is paradoxical that the product of the MELD score (with its weak ability to predict post-transplant mortality) and donor age (which influences graft survival) could strongly predict short- and long-term patient outcomes. D-MELD needs further refinement and must tackle ethical challenges before implementation.
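D-MELD itself is simple enough to state in a few lines. The sketch below reproduces the rule exactly as described above: the score is the product of donor age and laboratory MELD, with 1600 as the proposed futility cut-off (1750 for HCV-positive recipients in the Italian national study); the function names are mine.

```python
def d_meld(donor_age_years, lab_meld):
    """D-MELD: the product of donor age and the recipient's laboratory MELD."""
    return donor_age_years * lab_meld

def acceptable_match(donor_age_years, lab_meld, cutoff=1600):
    """True if the match falls at or below the proposed futility cut-off
    (1600 in Halldorson et al.; 1750 for HCV recipients in the Italian study)."""
    return d_meld(donor_age_years, lab_meld) <= cutoff

# The example discussed above: a 65-year-old donor offered to a MELD-25 candidate
print(d_meld(65, 25), acceptable_match(65, 25))   # 1625 False
```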

A novel score based on a combination of the prognosis and justice principles has recently been reported. The balance of risk (BAR) score [90] includes six predictors of post-transplant survival: MELD, recipient age, retransplantation, life support dependence prior to transplant, donor age and CIT. The strongest predictor of 3-month mortality was recipient MELD score (0–14 of 27 possible points), followed by retransplantation (0–4 of 27 points). The BAR score discriminates overall mortality below and above a threshold of 18 points. Essentially, the BAR score is a division-of-risks matching system in which MELD (“justice”) is the largest contributing factor, although balanced by prognostic recipient and donor factors (“utility”). However, some cautions must be considered. First, the maximum BAR score for low-MELD patients (<15) can reach up to 13 points, while the minimum BAR score for high-MELD patients (>35) would be 14 points; the BAR score therefore correlates excessively with MELD. With a cut-off of 18 points to split patient survival, the BAR score of a high-MELD candidate with two or more additional risk criteria (e.g., a >60-year-old recipient and a >40-year-old donor) will exceed 18 points, and transplantation would be deemed futile. In the UNOS database, a BAR score >18 represents only 3% of all LT. Consequently, the BAR score is an “all-or-nothing” system more than a “matching” system: below a score of 18, LT would always be adequate, irrespective of recipient and/or donor factors. Second, and consequently, the BAR score offers no SB. Third, the BAR score was designed to predict 3-month mortality, which seems too short a horizon for long-acting factors.

Matching systems for specific entities

Hepatocellular carcinoma

Hepatocellular carcinoma (HCC) represents one third of the indications for liver transplantation. As these patients may not be adequately scored by MELD, a direct consequence has been the award of bonus points for them, with extraordinary variability across American [24], [25] and European centres [15]. The assignment of these awards has not been evidence-based and has been adjusted through trial and error. Single-centre series have reported individual experiences with locally devised, non-evidence-based systems, using monthly updated regional prioritization lists [14] or adjusted MELD scores [91]. Moreover, HCC recipients have been evaluated not only from the recipient side but also from the donor side, with some advocating an expanded use of ECD in these patients because of the risk of dropout, mainly in high-risk HCC [92]. All these series justify their results by achieving comparable waiting-list mortality or post-transplant survival. However, global series show that the introduction of the MELD allocation system, allowing priority scores for patients with limited-stage HCC, has resulted in a 6-fold increase in the proportion of liver transplant recipients who have HCC. This is particularly worrying when more than 25% of donor livers are currently being allocated to patients with HCC. More interestingly, the supposedly equal post-transplantation survival does not appear to hold for patients with tumors 3–5 cm in size, nor globally for all patients with HCC compared to patients without HCC [93]. It seems obvious that individual benefit is contrary to global benefit in the HCC setting. Under current modified HCC allocation systems, the demand for liver grafts represents a major challenge because of the limited organ resources [94]. On the other hand, not offering a transplant to patients who have the potential for good outcomes is ethically disturbing [95].

Hepatitis C

The problem of optimal D-R matching becomes considerably more complex in the HCV recipient setting. HCV re-infects the liver graft almost invariably following reperfusion [96], with histological patterns of acute HCV appearing between 4 and 12 weeks post-transplant and chronic HCV in 70–90% of recipients after 1 year and in 90–95% after 5 years [97]. Recurrent HCV will lead 10–30% of recipients to progress to cirrhosis within 5 years of transplantation, with rates of decompensation of >40% and >70% at 1 and 3 years, respectively [98].

Increased rates of re-transplantation and lower survival rates have been reported in most series of HCV recipients. Because of these results, the appropriateness of re-transplantation for HCV and the optimal timing of surgery in an era of organ shortage are under debate. In fact, it has been stated that re-transplantation is not an option for recurrent hepatitis C cirrhosis after LT unless performed in patients with late recurrence, stable renal function and the possibility of antiviral treatment post-LT [99]. This scenario adds extreme complexity to the development of matching systems for a subgroup of patients that is still the most common indication for LT in several countries. In this context, the question of which liver should be allocated to HCV recipients is difficult to answer, as several ECD factors such as age [100], [101], steatosis, CIT [102] and I/R injury [103] have been reported to significantly increase viral recurrence and decrease patient and graft survival. Current evidence supports tipping the balance towards HCV recipients, as ECD factors may have a disproportionately greater impact on their outcome compared to non-HCV recipients [102]. However, this individual survival benefit is based on the current standard of a minimum benefit criterion of a 5-year patient survival >50%. If a global survival benefit were to be considered, with increasing rates of post-transplant survival and more stringent criteria, not only HCV recipients but also older and HCC recipients would have significantly reduced eligibility for liver transplantation [104].

The “ideal” D-R matching system

The ideal D-R matching system remains a chimera, mainly because of two factors: inconsistent evidence and a lack of reliable end points. While the concept of survival benefit is certainly very attractive, it is currently unrealistic for use in real practice, given the difficulty of unbiased calculations; calculation of the social benefit is obviously even more complex. However, although sickest-first principles still prevail everywhere and efforts towards optimization of isolated donor-recipient matching may be more practical and realistic, utilitarianism should be considered in the near future. The main problem nowadays is that every model relies on simple statistical calculations that analyze the impact of individual variables in the context of multiple regression models. Unfortunately, this is not enough, and calculations that may determine life or death should perhaps not rest on human-guided decisions alone. The most complex models of survival benefit include no more than 20 variables with a single end point. Interesting attempts to improve the current simplistic evidence have been reported using artificial intelligence, which can compute hundreds of variables, combining their individual contributions (even when weak) and addressing different end points [105]. Although not yet prospective or tested in a randomized multicenter trial, such a tool could potentially perform real-time analyses including many variables, providing an objective allocation system. Surely this concept will change allocation policies in the near future (Fig. 2).
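As a loose illustration of the kind of machine-learning approach alluded to here, the sketch below trains a classifier on combined donor and recipient variables to predict 3-month graft survival for a candidate D-R pair; such predictions could then feed an allocation rule. The features, the synthetic data and the model choice are entirely hypothetical and are not the system of reference [105].

```python
# Minimal sketch with synthetic data: a multi-variable model predicting 3-month
# graft survival from combined donor and recipient features. Hypothetical only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 500
X = np.column_stack([
    rng.integers(18, 80, n),   # donor age (years)
    rng.integers(6, 40, n),    # recipient laboratory MELD
    rng.uniform(2, 14, n),     # cold ischemia time (h)
    rng.integers(0, 2, n),     # DCD donor (0/1)
])
# Synthetic outcome: higher combined risk -> lower chance of 3-month graft survival
risk = 0.01 * X[:, 0] + 0.02 * X[:, 1] + 0.03 * X[:, 2] + 0.30 * X[:, 3]
y = (rng.uniform(size=n) > 0.6 * risk / risk.max()).astype(int)  # 1 = graft survives

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
# Predicted 3-month graft survival probability for one hypothetical D-R pair
print(model.predict_proba([[65, 25, 8.0, 0]])[0, 1])
```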


Fig. 2. Chronology of donor-recipient matching in liver transplantation. As observed, prior to 1995, D-R matching was unclear. In the last 15–20 years, several changes have led to a change in the mentality of liver transplant teams, leading to more complex methods of allocation in order to obtain the best survival and the lowest rate of deaths on the waiting list.

In his theory of oligopoly (1838), Antoine Augustin Cournot described how firms choose how much output to produce in order to maximize their own profit. However, the best output for one firm depends on the outputs of the others. A Cournot-Nash equilibrium occurs when each firm’s output maximizes its profit given the output of the other firms [106]. By analogy, the “ideal” D-R matching system would satisfy the global benefit while optimizing individual benefits, including both short- and long-term survival, and fully assessing every donor, recipient and surgical variable known prior to the transplantation procedure.
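For readers unfamiliar with the reference, the standard formal statement of Cournot's setting (in generic notation, not taken from this review) is:

```latex
% Firm i chooses output q_i to maximize its own profit, given the other firms'
% outputs q_{-i}, where P is the inverse demand (price) function and C_i the cost:
\[
  \pi_i(q_i, q_{-i}) \;=\; q_i \, P\!\Big(\sum_{j} q_j\Big) \;-\; C_i(q_i).
\]
% A Cournot-Nash equilibrium is an output vector (q_1^*, \dots, q_n^*) in which
% no firm can increase its profit by unilaterally changing its own output:
\[
  q_i^* \;\in\; \arg\max_{q_i \ge 0}\; \pi_i\big(q_i, q_{-i}^*\big) \qquad \text{for every firm } i.
\]
```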


Conclusions

Donor-recipient matching is an appealing theoretical concept that nowadays is more a myth than a reality. Every score currently available focuses on isolated or combined donor and recipient variables. Unfortunately, to date, these scores are not statistically robust enough. Future scores for D-R matching will have to be built from more than a list of variables; they should consider the probability of death on the waiting list, post-transplant survival, cost-effectiveness and global survival benefit. Only when everything is considered in a single method may transparency, justice, utility and equity be achieved. Probably, the human mind is not accurate enough to order so many interactions. Surely, the future of graft allocation will be guided by computational tools that might give objectivity to an action that should be more a reality than a myth.


Source

Interferon free regimens for the “difficult-to-treat”: Are we there?

Journal of Hepatology
Volume 58, Issue 4, Pages 643-645, April 2013

Maria-Carlota Londoño, Sabela Lens, Xavier Forns

Liver Unit, Hospital Clínic, IDIBAPS, University of Barcelona, Barcelona, Spain

Centro de Investigación Biomédica en Red de Enfermedades Hepáticas y Digestivas (CIBEREHD), Barcelona, Spain

Received 11 December 2012; received in revised form 7 January 2013; accepted 8 January 2013; published online 16 January 2013.

See Articles, pages 646–654 and pages 655–662

Developments in the treatment of chronic hepatitis C over the last 2 years have been remarkable. For the first time ever, we are now certain that this chronic infection can be cured without the need for interferon and ribavirin. Gane and colleagues provided the proof of concept that oral antiviral therapy with two direct-acting antivirals (DAAs) without interferon can suppress viral replication [1]. In their study, they showed that the combination of an NS5B nucleoside polymerase inhibitor (RG7128) and an NS3 protease inhibitor (danoprevir) had potent antiviral activity even in null responders; some patients achieved undetectable HCV-RNA only 14 days after treatment initiation. Unfortunately, the combination of DAAs in this study was limited to 2 weeks and was followed immediately by treatment with peginterferon and ribavirin, thus preventing the assessment of sustained virological response to an interferon-free regimen [1]. The combination of the protease inhibitor asunaprevir with the NS5A inhibitor daclatasvir is the first oral interferon-free regimen proved to be effective [2], [3]. In the study by Lok et al. [3], 11 previous null responders received both drugs for 24 weeks and a total of 4 patients (2 of 9 with HCV genotype 1a and 2 of 2 with genotype 1b) achieved a sustained virologic response (SVR). In the study by Chayama et al. [2], 11 genotype 1b null responders underwent the same interferon-free regimen and the 9 individuals who completed 24 weeks of therapy achieved SVR.

In this issue of the Journal of Hepatology, Suzuki et al. [4] evaluated the efficacy of dual therapy with asunaprevir and daclatasvir in 43 subjects infected with genotype 1b who were considered poor candidates for current hepatitis C treatment (21 null responders and 22 ineligible for or intolerant of interferon-based therapy). SVR at 12 and 24 weeks was 90% in null responders and 64% in patients ineligible for/intolerant of interferon-based therapies. Treatment was well tolerated, and virological failures were only observed in the cohort of ineligible/intolerant patients (3 breakthroughs and 4 relapses). In the accompanying manuscript, Karino et al. [5] characterized the escape viral mutations in patients experiencing virological failure. The authors found that NS3 and NS5A resistance-associated variants (RAVs) were detected together at the time of virological failure.

One of the strengths of the study by Suzuki et al. [4] is that it deals with difficult-to-treat patients: well-documented null responders and patients who are intolerant of or ineligible for interferon. Although the latter group was rather heterogeneous (individuals older than 70 years, with depression or other co-morbidities), this profile of patients represents a significant proportion of our current candidates for antiviral therapy. Obviously, the combination of peginterferon, ribavirin and a first-generation protease inhibitor (boceprevir or telaprevir) is not an option for patients with absolute contraindications to interferon, and it is also a poor choice for individuals with co-morbidities or for the elderly. In Japan and China, hepatitis C virus spread decades before it did in the United States and Europe [6]. Therefore, candidates for antiviral therapy in Asia are often older than corresponding patients in Western countries. Older age is not an absolute contraindication for interferon-based therapy: a French group showed good efficacy in a small group of patients older than 65 years treated with pegylated interferon and ribavirin [7]. Nevertheless, other studies have demonstrated a trend towards lower SVR rates, as well as higher rates of dose reductions and discontinuations of therapy, in this population compared to younger individuals [6], [8]. Currently, there are no data on the safety and efficacy of triple therapy in older patients. In the French CUPIC cohort, cirrhotic patients up to 83 years old have been included: although the number of severe adverse events with triple therapy seems clearly higher than that reported with peginterferon and ribavirin alone [9], a specific analysis in older patients has not been performed.

Similarly, triple therapy is not an ideal alternative for most previous null responders, since SVR rates in this subpopulation range only between 30% and 40% [10], [11]. Moreover, subgroup analyses from the REALIZE study [10] suggest that in cirrhotic null responders SVR is below 15%. To fully capture the definition of “difficult-to-treat” patients, it would have been interesting if the study by Suzuki et al. had included patients with advanced liver disease (biopsy-proven cirrhosis was an exclusion criterion).

Response rates obtained in this study using daclatasvir and asunaprevir can be considered excellent. It is surprising, though, that the only virological failures reported were in the group of intolerant/ineligible patients [5]. Although the small number of patients precludes any definitive interpretation, there are several potential explanations. Firstly, it is important to note that 10 of the 21 null responders (sentinel cohort) received a significantly higher dose of asunaprevir, which was not the case in any of the 22 intolerant/ineligible individuals. Second, patients experiencing virological failure had below-median daclatasvir and asunaprevir levels, but this was also the case for other individuals who achieved sustained viral clearance. A lack of compliance did not seem to play a major role in the lower efficacy in this group (though it cannot be completely excluded). A more interesting hypothesis is the potential effect of pre-existing resistance-associated variants (RAVs). In a complementary manuscript, Karino et al. [5] performed a careful characterization of virological escape mutants in patients included in the first study. Interestingly, most patients experiencing viral breakthrough or relapse had daclatasvir RAVs at baseline, with NS5A-Y93H being the predominant polymorphism in all 3 patients with virological breakthrough and in 2 of the 4 relapsers. The global prevalence of this variant is around 4% [5], [12] and may be higher in genotype 1b-infected patients (∼10%). Indeed, NS5A-Y93H was found at baseline in five other patients who achieved SVR in this study.

In every patient with virological failure, resistant variants to both agents emerged together at the time of failure (NS3-D168A/V and NS5A-L31M/V-Y93H). At baseline, a combination of these NS3 and NS5A variants was not detected by clonal sequencing; however, their presence at low levels cannot be excluded due to the limited number of clones analyzed. Currently, assessment of minor NS3 plus NS5A RAVs from the same RNA sequence is not possible by ultra-deep sequencing technologies, since the size of the analyzed fragments is still a limitation (a fragment of ∼4000 base pairs encompassing NS3, NS4 and NS5A is far too large for the current technology).

A final point analyzed in the accompanying manuscript by Karino et al. was the persistence of RAVs after treatment interruption [5]. This is a very relevant topic, since it may impact future treatment options in patients who develop drug resistance. As reported with other protease inhibitors, asunaprevir-resistant NS3-D168 substitutions generally decayed during the follow-up period, which implies a lack of replicative fitness compared to the wild-type virus in the absence of selective pressure (drug). This was also reproduced in the replicon system, where double NS3 RAVs (D168V plus Q80L or S122G) had a replicative ability similar to the D168V variant alone. Obviously, a more thorough sequence analysis using ultra-deep pyrosequencing would be necessary to fully establish the dynamics of decay of these RAVs after treatment interruption and to make sure that these variants do not remain enriched for longer periods relative to baseline. In fact, a small study including 5 patients who were first treated with simeprevir monotherapy (5 days), and then retreated more than 1 year later with pegylated interferon, ribavirin and simeprevir, analyzed the potential clinical implications of the presence of RAVs. In this study [13], 3 patients achieved SVR and 2 did not. Deep sequencing indicated low-level persistence of simeprevir RAVs in the 2 patients who did not achieve SVR. We do not know whether the presence of these resistant strains at low levels explained the lack of response to re-treatment. What is really interesting in the study by Karino et al. [5] is that in some individuals, NS5A variants associated with daclatasvir resistance persisted for at least 48 weeks after treatment interruption. As already mentioned, longer follow-up studies are important to establish the clinical impact of these fitter resistant strains should these patients be retreated with NS5A inhibitors.

Overall, the ideal combination of DAAs is still unknown, but some of the inherent characteristics of the antiviral agents may help predict which combination will be more effective (Table 1). The inclusion of a nucleos(t)ide NS5B polymerase inhibitor in a combination seems reasonable [14]. These drugs offer a high barrier to resistance (RAVs have very poor fitness), are pangenotypic and have proved to be very effective in several phase 2 trials. The simple combination of sofosbuvir and ribavirin for 12 weeks appears to be extremely successful in naïve genotype 1, 2 or 3 patients (though such a short regimen is insufficient to cure previous null responders) [15]. Combinations including more than 2 DAAs targeting different viral proteins also seem a good approach. Recently, a study including both naïve and null-responder genotype 1 patients assessed the efficacy of ABT-450/r (ritonavir-boosted NS3 inhibitor), ABT-267 (NS5A inhibitor), ABT-333 (NS5B non-nucleoside inhibitor) and ribavirin. This combination achieved SVR12 rates close to 100% in naïve patients and around 90% in null responders [16]. Unfortunately, patients with advanced liver disease have not yet been included in these studies. The only data on cirrhotic patients treated with oral regimens come from the SOUND-C2 study, in which an NS3 protease inhibitor (faldaprevir), a non-nucleoside NS5B inhibitor (BI207127) and ribavirin were combined in genotype 1 naïve patients: reported SVR12 rates in cirrhotics were around 60% [17].

Table 1. Characteristics of direct antiviral agents approved for hepatitis C treatment or entering phase 3 studies.


Genotypes in parentheses indicate documented activity in vitro.

Within the next few years, we will certainly witness more progress. When choosing a combination of antiviral agents, we will need to take into consideration a number of variables: potency, genetic barrier to resistance, range of activity (pangenotypic or not), and potential drug–drug interactions. Importantly, safety and simplicity of the regimen will also be very relevant. Up to now, most of the oral compounds appear to be safe and well tolerated by most patients, but until large phase 3 studies are finished, safety needs to be closely monitored. Most of our current knowledge on interferon-free regimens is based on phase 2 trials including small numbers of patients. In addition, we still have very little information on the safety and efficacy of these regimens in difficult-to-treat subjects, particularly in null responders with advanced fibrosis or cirrhosis, or in special populations such as transplant patients with hepatitis C recurrence. Over the next 2–3 years, we will start to see data from large cohorts (phase 3 studies) and from small series of really difficult-to-treat individuals and special populations. By then, it will be easier to answer the question: “are we there?”.


Source

Clinical management of drug–drug interactions in HCV therapy: Challenges and solutions

Journal of Hepatology
Volume 58, Issue 4, Pages 792-800, April 2013

David Burger, David Back, Peter Buggisch, Maria Buti, Antonio Craxì, Graham Foster, Hartwig Klinker, Dominique Larrey, Igor Nikitin, Stanislas Pol, Massimo Puoti, Manuel Romero-Gómez, Heiner Wedemeyer, Stefan Zeuzem

Received 17 August 2012; received in revised form 22 October 2012; accepted 25 October 2012; published online 6 November 2012

Summary

Hepatitis C virus (HCV) infected patients often take multiple co-medications to treat adverse events related to HCV therapy, or to manage other co-morbidities. Drug–drug interactions associated with this polypharmacy are relatively new to the field of HCV pharmacotherapy. With the advent of the direct-acting antivirals telaprevir and boceprevir, which are both substrates and inhibitors of the cytochrome P450 (CYP) 3A iso-enzyme, knowledge and awareness of drug–drug interactions have become a cornerstone in the evaluation of patients starting and continuing HCV combination therapy. In our opinion, an overview of conducted drug–drug interaction studies and a list of contraindicated medications is not enough for the clinical management of these drug–drug interactions. Knowledge of pharmacokinetic profiles and concentration–effect relationships is key for the interpretation of these data, and insight into how to manage these interactions (e.g., dose adjustments, safe alternatives and therapeutic drug monitoring) is of equal importance. This review provides a practical overview of the safe and effective management of these clinical challenges.

Abbreviations: HCV, hepatitis C virus; DAAs, direct-acting antivirals; HIV, human immunodeficiency virus; SPCs, summary of product characteristics; Peg, polyethylene glycol; BID, twice daily; AKR, aldo-ketoreductases; P-gp, P-glycoprotein; AUC, area under the plasma concentration vs. time curve; FDA, Food and Drug Administration; EMA, European Medicines Agency; TDM, therapeutic drug monitoring; CYP, cytochrome P450; ECG, electrocardiogram; UGT, UDP-glucuronosyltransferase; ACE, angiotensin converting enzyme; AT1, angiotensin II receptor; PDE5, phosphodiesterase type 5; BOC, boceprevir; TVR, telaprevir; RBV, ribavirin; IFN, interferon

Keywords: Drug interactions, Hepatitis C virus infection, Boceprevir, Telaprevir, Pharmacokinetics

Introduction

With the introduction of the direct-acting antivirals (DAAs) telaprevir and boceprevir in Europe, the US and other countries in 2011–2012, the management of drug–drug interactions in the treatment of patients with hepatitis C virus (HCV) infection has gained wide interest. Drug–drug interactions were not entirely new to the field, as certain combinations of ribavirin and human immunodeficiency virus (HIV) nucleoside analogues had been shown to be problematic before [1], and transplant hepatologists long ago learned to consider drug–drug interactions with ciclosporin and tacrolimus. The current attention to drug–drug interactions and their clinical management, however, is unprecedented in hepatology and many other disease areas, and can only be compared to the introduction of HIV protease inhibitors in the mid-90s.


Each health professional involved with HCV treatment (hepatologist, infectious disease specialist, nurse specialist, clinical pharmacist, etc.) needs a sound and complete understanding of the potential for drug–drug interactions in every patient treated for HCV infection. This is a rapidly evolving field and many questions on specific drug combinations remain unanswered. Most drug–drug interaction studies are initially presented at conferences and many do not appear in the peer-reviewed literature. Besides knowledge of the mechanisms that underlie drug–drug interactions, one should also have an overview of the most frequent or most serious potential drug combinations. Finally, awareness of how to find reliable and up-to-date information is essential.

One of the reliable and up-to-date sources is a website from the University of Liverpool: www.hep-druginteractions.org. However, since only a small number of interactions have been studied, one of the challenges is to provide expert opinion on potential interactions based on metabolic data and an understanding of the mechanisms. Currently, the website does not always provide information on effective alternatives when faced with a problematic interaction, and this would be a useful addition.

In this paper, we review drug interactions between HCV agents and a number of therapeutic groups. As (potential) drug interactions between HIV and HCV drugs are extensive [2], they deserve a separate review and are therefore not included here. Likewise, alternative and complementary medicines (e.g., herbals) may cause drug interactions but have not yet been studied and are consequently outside the scope of this paper. Before discussing potential drug interactions with anti-HCV agents, the pharmacokinetic properties of the drugs and current knowledge of their concentration–effect relationships will be reviewed, since this basic knowledge is required for an adequate interpretation of drug interaction data. It is important to remember that drug–drug interactions can be bidirectional, i.e., both drugs may be affected.

Data on drug interactions were extracted from published literature, Summaries of Product Characteristics (SPCs) [3], [4], abstract books from medical conferences, and clinical experience. If possible, a safe alternative is given to manage a specific drug interaction although it should be noted that clinical experience is limited. Data have been updated until July 1, 2012 and this overview is restricted to licensed anti-HCV agents.

Pharmacokinetics of anti-HCV agents

This paragraph focuses on the currently available anti-HCV agents ribavirin, polyethylene glycol (Peg)-interferon alfa, telaprevir and boceprevir. Ribavirin is a nucleoside analogue and as such a prodrug requiring intracellular activation to a triphosphate. Ribavirin-triphosphate accumulates in red blood cells because these cells lack the enzyme to degrade the triphosphate. Ribavirin has a bioavailability of approximately 64%, which is largely dependent on simultaneous intake of food. Absorption is dose-limited, so it is recommended to take ribavirin twice daily (BID), although based on its long elimination half-life (approximately 300 h) less frequent dosing might have been more logical. Ribavirin is not metabolised by hepatic enzymes and does not influence hepatic metabolism of other agents. It is eliminated unchanged by the kidneys.

Interferon alfa is a recombinant version of a natural protein that can only be administered parenterally; its pharmacokinetic profile is improved by attachment of a polyethylene glycol ("peg") coat, as a result of which the dosing frequency could be reduced to once-weekly subcutaneous administration. Two forms are marketed, 2a and 2b, which have limited pharmacokinetic differences; in this paper, interferon alfa refers to both the 2a and 2b products. Peg-interferon alfa is not a substrate of hepatic metabolism and has no direct inducing or inhibitory effect on the hepatic metabolism of other agents.

Boceprevir and telaprevir are both orally available HCV protease inhibitors with food-dependent absorption and relatively short elimination half-lives, necessitating three times daily administration (BID administration of telaprevir is currently being evaluated in a phase III clinical trial). Both agents are substrates of CYP3A, although for boceprevir this is not the primary route of metabolism: boceprevir is primarily metabolised by aldo–keto reductases (AKR) and only a minor proportion is subject to CYP3A-mediated metabolism [5]. Both boceprevir and telaprevir are substrates of the membrane transporter P-glycoprotein (P-gp), which is present at many sites, including the gastro-intestinal tract, the blood–brain barrier and the placenta; P-gp is a so-called efflux pump that prevents uptake of its substrates and as such can be seen as a protection of the body against noxious substances. Telaprevir and boceprevir are both strong inhibitors of CYP3A, with telaprevir having a stronger inhibitory effect than boceprevir (see the data on immunosuppressants and midazolam below). Both agents also appear to be inhibitors of P-gp (again, with boceprevir being a weaker inhibitor than telaprevir, based on the digoxin data), although this is more difficult to assess because of the large overlap between CYP3A and P-gp substrates. Both agents are so-called mechanism-based inhibitors of CYP3A, which means that CYP3A is inactivated; as a consequence, reduced CYP3A activity persists even after telaprevir or boceprevir is discontinued, until new CYP3A enzyme is synthesised (approximately 1 week).

Based on the above, much attention will be directed to interactions between boceprevir/telaprevir on one hand and CYP3A/P-gp substrates/inhibitors/inducers on the other hand.

Concentration–effect relationships

It is difficult to interpret results from drug–drug interaction studies without detailed insight into concentration–response relationships. If there is no change in plasma concentrations when drug A is added to drug B, one can easily conclude that both agents can be safely combined from a pharmacokinetic perspective. But what change in drug concentration is generally accepted to be related to reduced efficacy: −30%, −50% or −70%? At which elevated drug concentration is the risk of toxicity significantly increased? Which pharmacokinetic parameter is most closely associated with therapeutic response and should thus be used for interpretation: the average exposure to the drug during one dosing interval (the area under the concentration–time curve, AUC), or, for instance, the trough concentration (Cmin)? A sensible statement can only be made if a concentration–effect relationship is known, if there is some idea of a target concentration, and if one knows how far the "average" patient is from this putative threshold and how large the interpatient variability in pharmacokinetics is. Not surprisingly, such a well-balanced and thorough evaluation of drug interaction data is hardly possible in clinical practice, and inevitably we have to look at drug interaction outcomes in a more general way. Regulatory bodies such as the Food and Drug Administration (FDA) or the European Medicines Agency (EMA) could decide that any reduction in exposure of more than a certain percentage (e.g., 30%, 40%, 50%) is clinically relevant, so that any combination of drugs producing such a change in plasma drug concentrations should lead to either a dose adjustment or a contraindication. Such a general rule could then be applied to agents with comparable mechanisms of action and pharmacokinetic properties. This is, however, difficult to justify given differences in concentration–effect relationships, and it is currently not an FDA or EMA viewpoint.
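For clarity, the parameters used throughout this review can be written in standard pharmacokinetic notation (added here as a reference, not part of the original data):

\[
\mathrm{AUC}_{0\text{-}\tau} = \int_{0}^{\tau} C(t)\,dt, \qquad C_{\min} = C(\tau)\ \text{(trough concentration just before the next dose)},
\]

where \(\tau\) is the dosing interval. The magnitude of an interaction is then usually expressed as the ratio of exposures with and without the interacting drug,

\[
\text{ratio} = \frac{\mathrm{AUC}_{\text{combined}}}{\mathrm{AUC}_{\text{alone}}},
\]

so that a ratio of 0.5 corresponds to the "−50%" change mentioned above, and a ratio of 70.3 corresponds to the 70.3-fold increase discussed in the section on immunosuppressants.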

Another important consideration is that in pharmacokinetic studies, plasma (or serum) pharmacokinetic parameters are assessed, while it is known that anti-HCV agents are primarily active inside the hepatocyte and not in plasma. Hepatocytes, however, do not represent an easily accessible biological matrix. Animal data may not always reflect the human situation, as there are differences in expression of uptake transporters between species [6]. As a result, we tend to assume that globally, there is a correlation between concentrations at the site of activity and in plasma, and hence changes in plasma concentrations will result in more or less similar changes inside the hepatocyte. For all DAAs, correlations have been found between plasma concentrations and HCV RNA decline after the start of treatment [7], [8]. Thus, the assumption that plasma concentrations are a surrogate for levels inside the hepatocyte appears valid so far. However, in individual patients, there could be a “mismatch” between plasma and hepatocyte concentrations, for instance caused by genetic polymorphisms in uptake or efflux transporters present on the cell membrane of a hepatocyte. Another example of a mismatch could be for nucleoside analogues that are activated intracellularly to triphosphates: plasma concentrations of the parent compound may not always be related to intracellular concentrations of the triphosphate.

A further important aspect is the possibility of using therapeutic drug monitoring (TDM) to assess whether a clinically relevant drug interaction is present. TDM can play a major role in the management of a drug–drug interaction and in evaluating the effectiveness of an intervention such as a dose adjustment. For the anti-HCV agents, this is currently only possible for ribavirin, in a number of specialised laboratories. The literature suggests that steady-state plasma concentrations of ribavirin at week 8 or later should be 2.0 mg/L or higher to minimise the risk of virological failure [9]. If a drug interaction with ribavirin is known or suspected, ribavirin plasma concentrations may change and TDM can then be recommended. TDM is also possible, and probably indispensable, for a number of therapeutic groups that are influenced by HCV protease inhibitors, such as immunosuppressants and antiretroviral agents. In other situations too, the adage "one dose does not fit all" can be invoked to determine whether a drug interaction is causing inter- or intrapatient variability in drug concentrations. Currently, TDM of the HCV protease inhibitors telaprevir and boceprevir is not yet possible because of practical issues around blood sampling, storage of samples, limited availability of pure compounds, etc. In addition, TDM comes at a cost and tends only to be performed in specialist centers.
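As a minimal illustration of how a TDM result might be translated into a dose suggestion, the sketch below assumes approximately dose-proportional (linear) steady-state pharmacokinetics and uses the 2.0 mg/L ribavirin target cited above [9]; the function and variable names are illustrative only, and this is not a validated dosing algorithm.

def suggest_dose(current_dose_mg, measured_trough_mg_per_l, target_trough_mg_per_l=2.0):
    """Proportional dose suggestion, assuming linear (dose-proportional)
    pharmacokinetics at steady state; any actual adjustment remains a
    clinical decision by the treating physician."""
    if measured_trough_mg_per_l <= 0:
        raise ValueError("measured trough must be positive")
    return current_dose_mg * target_trough_mg_per_l / measured_trough_mg_per_l

# Example: a patient on 1000 mg/day with a week-8 trough of 1.4 mg/L
# would, under these assumptions, need about 1000 * 2.0 / 1.4 = 1430 mg/day.
print(round(suggest_dose(1000, 1.4)))

In practice such arithmetic is only a starting point; the measured concentration after any dose change should be re-checked before further adjustment.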

Immunosuppressive agents (including steroids)

Without doubt, some of the most important drug interactions with the currently available anti-HCV agents are those with immunosuppressants such as tacrolimus and ciclosporin [10]. These immunosuppressants are substrates of both CYP3A and P-gp and, given the above-described inhibitory effects of boceprevir and telaprevir on CYP3A and P-gp, it was expected that plasma concentrations of the immunosuppressants would be markedly increased. In particular, the interaction between tacrolimus and telaprevir has a magnitude that is unprecedented in clinical pharmacology: the AUC of tacrolimus is increased 70.3-fold, and this combination would be lethal if doses were not adjusted [11]. Ciclosporin levels are increased "only" 4.1-fold when combined with telaprevir. For boceprevir too, the interaction with tacrolimus is stronger than that with ciclosporin, but the differences are less pronounced than with telaprevir: tacrolimus levels increase 17-fold and ciclosporin levels 2.6-fold when combined with boceprevir [12]. Less attention has been paid to the effects of the immunosuppressants on the levels of the HCV protease inhibitors, but no influence is expected.

The above-mentioned data were collected in healthy volunteers; preliminary data presented at EASL 2012 suggest that, with TDM of the immunosuppressants from the very start of combined treatment, these combinations are indeed manageable, with required dose adjustments that appear to be about half the magnitude predicted from the healthy-volunteer studies [13]. Overall, the ciclosporin dose needed to be adjusted by an average factor of 1.3, whereas the interaction study in healthy volunteers showed a 2.6-fold increase. However, in this study, patients were admitted to hospital for correction of drug dosing and were intensively monitored. Phase II studies are ongoing that might allow less intensive monitoring and more flexible dosing of the immunosuppressants, for instance a very low dose of tacrolimus taken once a week when combined with telaprevir. At present, there is some uncertainty as to whether the safety and efficacy of once-weekly tacrolimus (with telaprevir) can be extrapolated from daily use of tacrolimus (without telaprevir), even when similar target trough levels of tacrolimus are achieved. The combination of ciclosporin and boceprevir causes the smallest interaction and could be considered a preferred option. There are no data on the use of other immunosuppressants such as sirolimus and everolimus, but the effects are expected to be similar to those with tacrolimus.
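As a rough illustration of why such extreme dose reductions are needed (a simplified calculation assuming dose-proportional kinetics, not a dosing recommendation): if co-administration increases the AUC by a factor \(k\) at an unchanged dose, keeping the average exposure roughly constant requires dividing the dosing rate by approximately \(k\),

\[
D_{\text{adjusted}} \approx \frac{D_{\text{usual}}}{k}.
\]

For tacrolimus with telaprevir (\(k \approx 70\)), a hypothetical patient taking 2 mg twice daily (28 mg per week) would need on the order of 28/70 ≈ 0.4 mg per week, consistent with the very low, infrequent dosing described above. In practice, the patient data [13] suggest smaller adjustments than the healthy-volunteer ratio predicts, so dosing is titrated against measured trough levels rather than by this arithmetic.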

Systemically administered corticosteroids such as prednisone and methylprednisolone are CYP3A substrates, and higher steroid levels may occur when they are combined with telaprevir or boceprevir; these combinations are therefore not recommended. This also holds true for corticosteroids that are applied locally by inhalation or intranasally, such as budesonide and fluticasone: Cushing's syndrome may occur when they are combined with DAAs. The systemic glucocorticoid dexamethasone, by contrast, can act as an enzyme inducer and may be associated with low DAA levels. There are data suggesting that beclomethasone can be used safely in patients on strong CYP3A inhibitors [14], and consequently this could be the corticosteroid of choice for patients on HCV protease inhibitors. Dermally applied steroids are not expected to give significant systemic absorption; this could be different for anorectal administration to treat anorectal discomfort.

Antimicrobial agents (non-HIV)

Ketoconazole is a prototype CYP3A inhibitor, often used during clinical development of putative CYP3A substrates such as telaprevir and boceprevir to investigate interactions. Telaprevir levels were increased by 62%, and ketoconazole levels were also elevated, by 46–125%, demonstrating telaprevir's CYP3A/P-gp inhibitory potential [15]. It is recommended that telaprevir can be dosed normally, but that the ketoconazole dose should not exceed 200 mg/day to avoid the development of toxicity. This recommendation has been extended to itraconazole, although the combination with telaprevir was not formally studied. For boceprevir and ketoconazole, similar effects have been observed; consequently, maximum doses of ketoconazole and itraconazole of 200 mg/day are also included in the product label for boceprevir.

Besides ketoconazole, the macrolide clarithromycin is a well-known CYP3A inhibitor. Plasma concentrations of boceprevir were only marginally increased (+21%) during co-administration and, therefore, these agents can be safely combined without dose adjustments [16]. Telaprevir has not been tested with clarithromycin, but a similar recommendation can be given. The potential increase in clarithromycin levels, however, warrants electrocardiogram (ECG) monitoring in patients also on telaprevir, as QT prolongation may occur. Where possible, azithromycin is an alternative to clarithromycin, as the former macrolide is not a CYP3A inhibitor or substrate.

Rifampin is the prototype of a strong enzyme inducer and is often difficult to combine with CYP substrates such as telaprevir and boceprevir. The AUC of telaprevir in combination with rifampin was reduced by 92% when compared to telaprevir alone, and this combination is contraindicated [15]. Boceprevir has not been tested with rifampin, but a contraindication also applies.

Methadone/buprenorphine

Because (former or current) intravenous drug use is a major transmission route for HCV, a considerable number of patients who are receiving opiate substitution therapy and/or actively using illicit drugs will be considered for therapy with DAAs; hence there is a risk of drug–drug interactions. Methadone is commonly used in opiate substitution programs and has been extensively studied. Telaprevir reduced methadone levels by an average of 29%, but this effect is most probably attributable to displacement of methadone from plasma protein-binding sites [17]; free, active concentrations of methadone remained largely unchanged. A somewhat smaller decrease in methadone levels was seen with boceprevir: AUC and Cmax were reduced by 22% and 15%, respectively; free methadone levels were not reported [18]. Surprisingly, the use of peg-interferon alfa appeared to cause a small increase in methadone levels of about 15%, which is unlikely to necessitate a dose reduction to prevent methadone toxicity. Taking these apparently opposite effects of telaprevir/boceprevir and peg-interferon alfa together, close monitoring might be prudent in patients on methadone when HCV combination therapy is initiated, and there should be a low threshold for methadone dose adjustment based on the patient's response. In some centers, patients who are being considered for antiviral therapy, as well as their friends or family, are provided with an opiate antagonist (naloxone) along with instructions for its use; this may be a prudent precaution in individuals with erratic consumption of illicit opiates.

Buprenorphine is an alternative to methadone for patients with opiate addiction. It has multiple metabolic pathways, including CYP3A, so an increase in plasma concentrations could be expected when it is combined with CYP3A inhibitors such as telaprevir or boceprevir. However, buprenorphine levels were not increased when it was combined with telaprevir, and no signs of toxicity were observed [19]. Boceprevir caused a minor increase in buprenorphine AUC (+19%), which was associated with a 45% decrease in the AUC of norbuprenorphine, demonstrating an effect of boceprevir on this CYP3A pathway [18]. These changes, however, are not considered clinically relevant.

Buprenorphine is also available in a fixed-dose combination with naloxone. Systemic bioavailability of oral naloxone is very low (<3%) due to extensive first-pass metabolism (mainly UDP-glucuronosyltransferase (UGT) and partly CYP3A). Boceprevir increased naloxone AUC by 33%, suggesting that bioavailability of naloxone is somewhat improved when these agents are co-administered [18].

Antidepressants

It is generally accepted that the use of peg-interferon alfa can lead to psychiatric disorders, including depression. Concomitant use of antidepressants with HCV treatment will thus not be a rarity, and this poses the risk of a drug–drug interaction with DAAs, as both groups are CYP substrates. Escitalopram has been studied for the prevention of peg-interferon-induced depression and was, therefore, a logical candidate for drug-interaction testing with the HCV protease inhibitors. With telaprevir, telaprevir levels did not change, but escitalopram levels decreased by an average of 35% [20]. When initiating escitalopram in a patient on telaprevir, one should therefore titrate the dose sufficiently before concluding that the antidepressant is not effective. The effect of boceprevir on escitalopram was in the same direction as that of telaprevir, although the magnitude of the decrease in AUC was smaller (−17%) [21].

It is unlikely that all patients can be effectively treated with escitalopram, and clinicians may prefer other antidepressants based on personal experience. Some of these agents (e.g., sertraline and mirtazapine) are CYP3A substrates, and increased plasma concentrations of the antidepressant may occur when they are combined with telaprevir or boceprevir. Other antidepressants are more selectively metabolised by CYP2D6 (e.g., paroxetine, duloxetine and fluoxetine), and their pharmacokinetics are not expected to be influenced by telaprevir or boceprevir, as the latter agents do not possess CYP2D6 inhibitory activity. More research is needed in this area.

Sedatives

A number of benzodiazepines are heavily dependent on CYP3A for their metabolism, and interactions with boceprevir and telaprevir can therefore be expected. Midazolam is a prototype CYP3A substrate, but is also relevant here as it is used as premedication before endoscopic procedures. The AUC of oral midazolam was increased 9-fold with telaprevir [22] and 5.3-fold with boceprevir [2]; oral midazolam is therefore contraindicated with both DAAs. The magnitude of the interaction with parenteral midazolam is smaller than that observed with oral midazolam, as inhibition of presystemic CYP3A metabolism is no longer relevant. Indeed, midazolam AUC increased only 3.4-fold when i.v. midazolam was added to telaprevir (vs. 9-fold for oral midazolam, see above), and there was no change in the Cmax of midazolam [22]. Administration of 50% of the normal parenteral dose in patients on boceprevir or telaprevir is probably safe.
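The different magnitudes for the oral and intravenous routes can be rationalised with standard pharmacokinetic relationships (a simplified explanation added for clarity, not part of the original data):

\[
\mathrm{AUC}_{\mathrm{iv}} = \frac{D_{\mathrm{iv}}}{CL}, \qquad \mathrm{AUC}_{\mathrm{oral}} = \frac{F \cdot D_{\mathrm{oral}}}{CL},
\]

where \(CL\) is systemic clearance and \(F\) is oral bioavailability. CYP3A inhibition lowers \(CL\) for both routes, but only the oral route additionally gains from an increase in \(F\) (less presystemic gut-wall and first-pass hepatic metabolism), which is consistent with the roughly 9-fold rise in oral midazolam AUC versus the 3.4-fold rise after intravenous administration.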

Other oral benzodiazepines such as triazolam and alprazolam [23] are contraindicated. Zolpidem levels were reduced by approximately 50% with steady-state telaprevir, so a higher dose of zolpidem may be needed [23]. Ketamine is extensively metabolised in the liver by various CYP enzymes and consequently, even if CYP3A is involved, there are potentially multiple escape pathways. Propofol is mainly eliminated via conjugation followed by renal excretion of its metabolites and, therefore, no interaction with DAAs is expected.

Statins

Most statins are also CYP3A substrates and, not surprisingly, CYP3A inhibitors such as telaprevir and boceprevir are expected to increase statin levels and with them the risk of severe toxicity such as rhabdomyolysis. Indeed, atorvastatin levels were elevated almost eight-fold with telaprevir [24]. This combination is contraindicated, as is the combination with simvastatin, which has not been formally tested. The effect of boceprevir on atorvastatin was less strong: statin levels increased 2.3-fold, and this interaction appears manageable by starting with a low dose of atorvastatin (10 mg) [25]. An alternative option might be pravastatin, as this statin is not a CYP substrate. Pravastatin levels were only marginally increased when combined with boceprevir (1.5-fold), probably owing to inhibition of the organic anion-transporting polypeptide (OATP) 1B1 [25], [26]. There are no data on rosuvastatin and HCV protease inhibitors.

Some clinicians are of the opinion that, given the relatively short duration of DAA treatment (12 weeks in the case of telaprevir), the statin can also simply be stopped temporarily to avoid the toxicity associated with a potential drug–drug interaction.

Cardiovascular agents (other than statins)

Calcium entry blockers are known CYP3A (and partly also P-gp) substrates, so increased exposure can be expected with CYP3A inhibitors such as telaprevir and boceprevir. This was indeed observed: amlodipine levels increased 1.8-fold when combined with telaprevir [24]. It is advised to start with a low dose of amlodipine (5 mg) and titrate to the desired effect. The effects of telaprevir on other calcium channel blockers are expected to be more severe, since most of these agents undergo a larger CYP3A-mediated first-pass effect and telaprevir can therefore cause a more pronounced interaction. There are currently no data on boceprevir and amlodipine, but as boceprevir is also a CYP3A inhibitor (albeit weaker than telaprevir), a recommendation similar to that for telaprevir appears logical.

Some of the calcium entry blockers have very low systemic bioavailability (4–8%: barnidipine, lacidipine, lercanidipine) due to extensive first-pass metabolism. However, when combined with CYP3A inhibitors, such as telaprevir or boceprevir, systemic exposure may easily increase several-fold; therefore, these agents should not be used as a first choice.

Diuretics, angiotensin-converting enzyme (ACE) inhibitors and angiotensin II (AT1) receptor antagonists are all classes of agents without extensive CYP metabolism, and hence their combination with telaprevir or boceprevir is not expected to be problematic. β-receptor blocking agents are also not expected to cause problems, as they are mainly eliminated renally (e.g., atenolol and sotalol) or metabolised through CYP2D6 (e.g., metoprolol and carvedilol). Anti-arrhythmics have a narrow therapeutic window and some are CYP3A substrates (e.g., amiodarone and bepridil); these are contraindicated with the strong CYP3A inhibitor telaprevir, and caution is warranted with the moderate CYP3A inhibitor boceprevir.

Digoxin has been tested with telaprevir [22] and boceprevir [27] as a prototype P-gp substrate. Digoxin levels were increased by 85% with telaprevir, so this DAA can be classed as a moderate P-gp inhibitor, and one should start with a low dose of digoxin in a patient on telaprevir. With boceprevir, the impact on digoxin levels was smaller: AUC and Cmax of digoxin increased by 19% and 18%, respectively, suggesting that boceprevir is a very mild P-gp inhibitor.

Antidiabetics

The use of antidiabetics should be monitored carefully in patients with hepatic impairment to avoid the occurrence of severe hypoglycaemia. Repaglinide is one of the few oral antidiabetics that is partially metabolised by CYP3A and could theoretically interact with the CYP3A inhibitors boceprevir and telaprevir. Repaglinide's primary route of metabolism, however, is CYP2C8, which would serve as the escape pathway in the presence of a CYP3A inhibitor; therefore, no interaction with DAAs through this mechanism is expected. Repaglinide is also a substrate for OATP transporters and may consequently interact with DAAs via a non-CYP-mediated mechanism. Some other oral antidiabetics are also metabolised by the liver, but not by the CYP3A isoenzyme; for instance, glimepiride is a CYP2C9 substrate, and this enzyme is not influenced by boceprevir [26] or telaprevir. Metformin is not expected to cause problems when combined with DAAs.

Other agents

Finally, this section describes some agents that do not fall into one of the main therapeutic areas listed above. These include agents that have been formally tested as well as those with a contraindication based on theoretical considerations.

Plasma concentrations of the estrogen component of oral contraceptives are reduced by about 25–30% when combined with boceprevir [16] or telaprevir [28], and it is recommended to take additional (non-hormonal) precautions to prevent pregnancy. This recommendation is based not only on the observed drug interaction data, but also on the fact that HCV therapy includes ribavirin, which is teratogenic; pregnancy must therefore be avoided from that important perspective as well.

The following agents are contraindicated with telaprevir and boceprevir because they are strongly dependent on CYP3A for their metabolism and have a narrow therapeutic range: alfuzosin, cisapride, ergotamine and its derivatives, and pimozide.

Colchicine is another CYP3A substrate with a narrow therapeutic range; the drug labels contain a dosing algorithm for combined use of DAAs with colchicine, depending on its indication.

Besides rifampin, other strong enzyme inducers are carbamazepine, phenytoin, phenobarbital and St John’s wort; these inducers should not be combined with DAAs to avoid the occurrence of subtherapeutic levels of DAAs. Alternative anti-epileptic agents, such as valproic acid, levetiracetam and lamotrigine, are not enzyme inducers or CYP3A substrates. Therefore, they should be easier to combine with DAAs.

Phosphodiesterase type 5 (PDE5) inhibitors such as sildenafil and tadalafil are CYP3A substrates, and toxic levels can occur when they are combined with telaprevir or boceprevir. When these agents are used at high doses for the treatment of pulmonary hypertension, they are contraindicated with DAAs. For their use in erectile dysfunction, however, lower doses and/or less frequent dosing should be safe: sildenafil, 25 mg per 48 h; tadalafil, 10 mg per 72 h; vardenafil, 2.5 mg per 24 h (boceprevir) or 2.5 mg per 72 h (telaprevir). The proton pump inhibitor esomeprazole does not influence telaprevir exposure. The analgesic agents ibuprofen and diflunisal are interesting in this respect, as they are known AKR inhibitors and AKR is responsible for part of boceprevir's metabolism; a drug–drug interaction study, however, did not show an effect of diflunisal or ibuprofen on boceprevir pharmacokinetics [4].

An overview of the drug interactions with frequently used co-medications in HCV-infected patients is shown in Table 1.

Table 1. Overview of drug interactions with frequently used co-medications in HCV-infected patients. (See below-mentioned references for further information.)

CI, contraindicated; BOC, boceprevir; TVR, telaprevir; RBV, ribavirin; IFN, interferon; IV, intravenous; HCV PI, hepatitis C virus protease inhibitor; INR, international normalized ratio; Y, yes

Limitations of current drug interaction data

Many of the above-mentioned drug–drug interaction studies have been performed in healthy volunteers, to avoid potential harm to patients who are at risk of toxicity or subtherapeutic effects when potentially interacting drugs are combined. This approach assumes that the effect of a given drug–drug interaction is similar in healthy subjects and in HCV-infected patients, which might not always be the case. For instance, HCV-infected patients with cirrhosis may have impaired CYP450 capacity and higher plasma concentrations of CYP450 substrates than healthy subjects. Theoretically, this would put them at even greater risk of drug toxicity when a drug–drug interaction based on CYP450 inhibition occurs, but at lower risk of subtherapeutic effects when a drug–drug interaction is based on enzyme induction, impaired absorption, etc. Nevertheless, extrapolation from healthy subjects to patients is still considered the norm, although in individual cases therapeutic drug monitoring, if available, might help to assess the clinical relevance for that specific patient.

Conclusions

This overview illustrates that drug–drug interactions are an important and potentially frequent problem when DAAs are used in clinical practice. It also shows, however, that many of these interactions are manageable, either by dose adjustments or by selecting a safe alternative, but only if one has sufficient knowledge and expertise to deal with these pharmacokinetic issues. The aim of this review was to provide this insight, as well as to raise awareness that drug–drug interactions in modern HCV treatment may have unwanted effects, such as increased toxicity or a lack of therapeutic effect. This is of course most likely to affect patients on multiple medications and/or treated by multiple physicians. Whenever there is doubt about the safety of a certain combination, a pharmacist or clinical pharmacologist should be consulted.

Financial support

We thank Gardiner-Caldwell Communications for general styling and co-ordination support, which was funded by Janssen Pharmaceuticals.

Conflict of interest

DBu has received research grants, honoraria for advisory boards and speakers fees from Merck and Tibotec/Janssen.

DBa has received research grants, honoraria for advisory boards and speakers fees from Merck, Janssen and Vertex.

PB has received speakers fees from Merck, Janssen, Bristol-Myers Squibb and Roche.

MB has been an investigator in clinical trials for, and an advisor to, Janssen and Merck.

HK has received research grants from Abbott, Boehringer, Bristol-Myers Squibb, Gilead, Janssen, MSD, Novartis, Roche, honoraria and speakers fees from Abbott, Boehringer, Bristol-Myers Squibb, Gilead, GlaxoSmithKline, Janssen, MSD, Novartis, Roche and ViiV.

SP has received consulting and lecturing fees from Bristol-Myers Squibb, Boehringer Ingelheim, Janssen, Gilead, Roche, Schering-Plough/Merck, Novartis, Abbott, Sanofi and GlaxoSmithKline, and grants from Bristol-Myers Squibb, Gilead, Roche and Merck/Schering Plough.

MP has received consulting and lecturing fees from Abbott, Boehringer Ingelheim, Bristol-Myers Squibb, Gilead Sciences, Janssen, MSD and Roche; has been an investigator in clinical trials for MSD, Janssen, Roche, Boehringer Ingelheim and Bristol-Myers Squibb; and has received a research grant from Gilead Sciences.

MRG has received research grants from Abbott and Roche, and consulting and lecture fees from Bristol-Myers Squibb, Ipsen Farma, Janssen, Gilead, Roche, Schering-Plough/Merck, Novartis, Abbott, Digna Biotech, Transgene and GlaxoSmithKline.

SZ has consulted for Abbott, Achillion, Astra Zeneca, Bristol-Myers Squibb, Boehringer Ingelheim, Gilead, Idenix, Janssen, Merck, Novartis, Presidio, Roche, Santaris, Vertex.

All other authors have not provided a conflict of interest.

References

Source