Abstract

Over the last decade there has been a dramatic increase in the use of oral anticoagulants, primarily as a result of the demonstration of their benefit in atrial fibrillation (Laupacis et al, 1998). It has been estimated that 2·3% of people older than 40 years and 5·9% of those older than 65 years suffer from atrial fibrillation (Feinberg et al, 1995). In Cambridge, UK, 0·5% of the population take warfarin (Baglin, 1998), while the Dutch Thrombosis Service monitors the anticoagulation of 270 000 persons annually (1·8% of a population of 15 million) (Breukink-Engbers, 1999).

Three oral anticoagulant drugs are in widespread use. Warfarin is the anticoagulant most frequently prescribed in the UK and North America, while acenocoumarol and phenprocoumon are more commonly used elsewhere. The half-lives of these drugs differ, and the recommendations given below refer to anticoagulation with warfarin, except in the last section, in which the specific problems associated with acenocoumarol and phenprocoumon are addressed.

The coumarins act by inhibiting the post-ribosomal γ-carboxylation of glutamic acid residues in the N-terminal regions of the vitamin K-dependent factors (II, VII, IX and X) during synthesis in the liver. Carboxylation is dependent on the availability of vitamin K1 in its fully reduced hydroquinone form and results in its conversion to vitamin K epoxide. Under normal circumstances this is then reduced via epoxide reductase and quinone reductase back to the hydroquinone form. Warfarin competitively inhibits these two enzymes, leading to a deficiency of the hydroquinone. The resultant vitamin K-dependent proteins are undercarboxylated, cannot bind calcium and, therefore, do not participate normally in the assembly of the tenase and prothrombinase complexes; thrombin generation is thus impaired. A further quinone reductase, which is NADH-dependent and is not inhibited by coumarins, provides the probable mechanism by which administration of vitamin K reverses anticoagulation (Whitlon et al, 1978).

Orally administered warfarin is rapidly absorbed, mostly albumin-bound in plasma and eliminated almost entirely by the liver. Interindividual variation in dose response is determined by race and genetic factors such as the hepatic cytochrome P-450 polymorphism (Aithal et al, 1999; Mannucci, 1999). Intraindividual variation occurs as a result of changes in diet, intercurrent illness and concurrent drug therapy. Warfarin has a half-life of 35 h, acenocoumarol of 9 h and phenprocoumon of 5·4 d. These differences have important implications for both the initiation and the resolution of the anticoagulant effect.

The vitamin K-dependent clotting factors also have different half-lives: that of factor VII is the shortest at 6 h and that of factor II the longest at 50–80 h, while the half-lives of factors IX and X are intermediate at 24 h and 25–60 h respectively (Saito, 1996). These differences may be relevant to the rate of reversal of warfarin therapy following administration of vitamin K and to the duration of function in the circulation following coagulation factor replacement. Warfarin therapy does not result in uniform reduction of the vitamin K-dependent factors (Paul et al, 1987). In studies of stable anticoagulated patients, factor II and X levels are most profoundly affected by warfarin, with less reduction in factor VII and the least reduction in factor IX levels (Kumar et al, 1990). An identical relationship between the four factors has been observed in patients presenting with life-threatening bleeding (Makris et al, 1997).
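The half-lives quoted above largely determine how quickly each factor falls once warfarin takes effect. Purely as an illustration (a minimal sketch assuming simple first-order decay of pre-existing functional factor once synthesis of carboxylated factor stops; real kinetics are more complex), the fraction of each factor remaining at a given time can be computed from its half-life:

```python
# Illustrative only: first-order decay of the vitamin K-dependent factors,
# using the half-lives quoted in the text (factor X taken at the midpoint of
# its 25-60 h range, factor II at the midpoint of 50-80 h).

HALF_LIVES_H = {"VII": 6, "IX": 24, "X": 42.5, "II": 65}  # hours

def fraction_remaining(half_life_h: float, t_h: float) -> float:
    """Fraction of functional factor left t_h hours after synthesis of
    carboxylated factor stops, assuming simple exponential decay."""
    return 0.5 ** (t_h / half_life_h)

if __name__ == "__main__":
    for t in (24, 48, 72):
        levels = {f: fraction_remaining(hl, t) for f, hl in HALF_LIVES_H.items()}
        print(f"{t:>3} h: " + ", ".join(f"F{f} {v:.0%}" for f, v in levels.items()))
```

On these assumptions, factor VII falls to a few per cent of baseline within a day while factor II remains above half of baseline, which helps explain why the prothrombin time changes well before the full antithrombotic effect is established.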
Many studies have reported the risks of haemorrhage in patients on oral anticoagulants and have investigated the factors associated with an increased bleeding risk. Most report bleeding in one of three categories: fatal, life-threatening or major, and minor. It is difficult to compare these studies, however, because of the lack of conformity in the classification of bleeding episodes. It is probable that most diagnoses of fatal haemorrhage are correct but, nevertheless, it is desirable that such events be objectively confirmed by overwhelming clinical evidence, imaging or post-mortem study. The classification of a bleed as major or life-threatening is more problematic, primarily because of the lack of internationally agreed criteria. As a result, investigators apply their own definitions when reporting studies, often adopting subjective, non-reproducible criteria. While classification dependent on the site of haemorrhage may be appropriate, there are no generally agreed criteria as to which sites of bleeding should be regarded as major, except in the case of intracranial haemorrhage. In some studies a low and arbitrary haemoglobin concentration is used to define major haemorrhage, while others, even more subjectively, classify bleeds as major if they trigger transfusion of red cell concentrate. Some studies include the need for surgical or endoscopic intervention to stem haemorrhage as a criterion for major bleeding. While this can be criticised as subjective, such treatment is not undertaken lightly and is clearly of major relevance to the patient; in our opinion, it should be included in the definition of major haemorrhage. Finally, bleeding that results in shock is clearly serious, and the development of shock, defined as hypotension with or without oliguria, should in our view also define a bleed as major.

Adoption of a uniform classification of anticoagulant-related haemorrhage would allow accurate comparison of the incidence of bleeding by category between studies. The main features of such a classification should be simplicity, reproducibility, objectivity and clinical relevance. A proposed classification that we feel fulfils these criteria is illustrated in Table I.

A review of observational studies in 1993 reported annual rates of fatal, major and minor bleeding of 0·8%, 4·9% and 15% respectively (Landefeld & Beyth, 1993). In large prospective studies of cohorts of varying age and indications for anticoagulation, using non-uniform criteria to define the severity of bleeding, the reported rates of haemorrhage were highly variable: fatal 0·1–1·0% per year, major 0·5–6·5% per year and minor 6·2–21·8% per year (Fihn et al, 1996; Palareti et al, 1996; Beyth et al, 1998). Two factors probably explain these differences. Firstly, bleeding rates differ between cohorts depending on their composition. Secondly, these data reflect the non-conformity in the classification of bleeding adopted. Despite this, it is clear that in the reported cohorts there are characteristics associated with a much increased haemorrhagic risk: hypertension, a history of gastrointestinal bleeding, previous cerebrovascular accident and recent initiation of anticoagulants have all been shown to be associated with a higher risk of haemorrhage (Table II).
The two variables most consistently associated with bleeding risk are the intensity of anticoagulation and age (Fig 1). In the Italian Study on Complications of Oral Anticoagulant Therapy (ISCOAT), the risk of bleeding at an International Normalized Ratio (INR) of greater than 7 was 40 times the risk at an INR in the range 2–2·9 and 20 times the risk at an INR in the range 3–4·4 (Palareti et al, 1996). While there is also little doubt that bleeding is more common in older patients on anticoagulants, some authors suggest that age per se is not a risk factor when the data are adjusted for other variables (Fihn et al, 1996).

Fig 1. INR and bleeding risk: rate of bleeding per 100 patient-years by INR level. ▪, all bleeding; ○, major bleeding. Adapted from Palareti et al (1996).

Panneerselvam et al (1998) reviewed the factors most commonly associated with over-anticoagulation. In a retrospective study, they demonstrated that recent intercurrent illness requiring alteration of medication (especially treatment with antibiotics) and a high target INR were the main risk factors. The importance of the target INR was highlighted by a prospective study of patients with prosthetic heart valves treated to either a high INR target (3·5–4·5) or a lower INR target (2·5–3·5) with additional aspirin 100 mg/d: those set the higher target INR had an increased risk of major haemorrhage (Meschengieser et al, 1997). Clearly, the risk of haemorrhage is also determined by the duration of therapy (Fihn et al, 1993).

Guidelines on the optimal duration and intensity of anticoagulation for different indications have been produced [British Committee for Standards in Haematology, BCSH (Baglin & Rose, 1998); American College of Chest Physicians, ACCP (Hyers et al, 2001); Scottish Intercollegiate Guidelines Network (SIGN, 1999)] and the following principles should be adhered to:
• Oral anticoagulants should not be used when they are not the most appropriate method of thromboprophylaxis.
• When oral anticoagulation is required, the appropriate target INR should be set.
• When oral anticoagulation is appropriate therapy, optimal monitoring of anticoagulation should be available (Poller et al, 1998; Cromheecke et al, 2000).
• Prolonged anticoagulation for which there is no evidence should be discouraged.
Application of these basic principles should minimize the number of patients presenting with bleeding related to anticoagulation.

The majority of decisions to reduce the intensity of anticoagulation fall into the category of non-emergency or elective reversals. This is appropriate when a patient presents with minor, non-life-threatening haemorrhage or, even more commonly, with an INR perceived to be associated with a significantly increased bleeding risk. Assessment of bleeding risk is, however, difficult. A recent study suggested that, in clinical practice, the ability of physicians to predict the risk of bleeding in a cohort of patients was no better than chance (Beyth et al, 1998). This probably reflects a lack of knowledge of the risk factors for bleeding in patients on warfarin. As discussed earlier, certain groups of patients are at increased risk of haemorrhage while on warfarin, and this must be borne in mind when decisions about acceptable, safe levels of anticoagulation and the need for elective reversal are made.
Two options are available in situations in which the intensity of anticoagulation needs to be decreased:
• Temporary withdrawal of warfarin.
• Administration of vitamin K.
These options may be considered alone or in combination with each other.

Warfarin withdrawal

Discontinuation of warfarin results in very slow reversal of anticoagulation. In a study of 232 over-anticoagulated patients in whom warfarin was withheld as the only method of reversal, 33%, 68% and 89% returned to the therapeutic range after 24, 48 and 72 h respectively (Cosgriff, 1956). Pengo et al (1993) showed that, in patients with an INR of 5·0–8·0, 100% still had an INR > 3·0 and 42% an INR > 5·0 at 24 h after stopping warfarin; at 48 h after discontinuation, 84% still had an INR > 3·0. In patients within the therapeutic range, with a mean INR of 2·6 (1·95–3·8), in whom warfarin was temporarily discontinued, the INR decreased in an exponential fashion with a half-life of 0·5–1·2 d and a delay of 24–36 h before the onset of the maximum decline. At 65 h after discontinuation the mean INR was 1·6 (1·1–2·2) and 91% of patients were still anticoagulated (INR > 1·2); indeed, nearly 5 d after withdrawal the INR was still > 1·2 in 23% of patients (White et al, 1995). (A simple illustrative model of this decline is sketched below.) We are not aware of published reports on the rate of fall of the INR following warfarin withdrawal at INRs greater than 7·0.

In summary, the majority of over-anticoagulated patients will return to the therapeutic range within 3 d of discontinuing therapy, while subjects with an INR in the therapeutic range require 3–5 d to ensure complete reversal of anticoagulation. Importantly, a delay of 24–36 h is seen before the maximal rate of fall of the INR begins. Clearly, withdrawal of warfarin alone is not a sufficient intervention in situations in which rapid correction of anticoagulation is required.

Administration of vitamin K

The administration of low-dose vitamin K, either alone or in combination with temporary withdrawal of warfarin, is often recommended for the treatment of patients with minor bleeding or who are perceived to be at increased risk of bleeding (Baglin & Rose, 1998). The aim of intervention is to reduce the INR to a safer level without rendering the patient resistant to further warfarin therapy, using a preparation and route of administration that are safe for the patient and feasible given the clinical and geographical circumstances. Several questions about low-dose vitamin K administration need to be addressed:
• What route of administration is most appropriate?
• What dose is optimal?
• Is there a difference between preparations taken by the oral route?
• Which asymptomatic over-anticoagulated individuals should be given vitamin K?

Route of administration

To partially reverse the level of anticoagulation in a patient who is not bleeding, vitamin K may be administered by the intravenous, subcutaneous or oral route. Intravenous administration results in rapid correction of anticoagulation, with a significant effect on the prothrombin time 4–6 h after administration (Anderson & Godal, 1975; Preston et al, 1999; Hung et al, 2000). The rate of reversal seen after administration of vitamin K by the oral route is slower. Satisfactory reversal of anticoagulation can nevertheless be achieved within 24 h using oral vitamin K (Cosgriff, 1956; Crowther et al, 1998; Preston et al, 1999), although some authors have voiced reservations about the use of low-dose regimens in patients with very high INR values (Weibert et al, 1997).
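Returning to the withdrawal kinetics described under 'Warfarin withdrawal' above, the sketch below is a minimal illustration only, assuming that the INR decays exponentially towards a baseline of 1·0 after a fixed lag, with the lag (24–36 h) and half-life (0·5–1·2 d) ranges reported by White et al (1995). It is not the published model and not a dosing tool; it simply shows why dose omission alone is slow.

```python
# Illustrative sketch only (not the published model): INR decline after
# stopping warfarin, assuming exponential decay towards a baseline INR of 1.0
# beginning after a fixed lag. Lag and half-life defaults are taken from the
# ranges reported by White et al (1995) for patients in the therapeutic range.

def inr_after_withdrawal(inr_start: float, hours: float,
                         lag_h: float = 30.0, half_life_h: float = 20.0) -> float:
    """Predicted INR a given number of hours after the last warfarin dose."""
    if hours <= lag_h:
        return inr_start                      # little change before the lag ends
    decayed = 0.5 ** ((hours - lag_h) / half_life_h)
    return 1.0 + (inr_start - 1.0) * decayed  # decay of the excess over baseline

if __name__ == "__main__":
    for h in (24, 48, 72, 96, 120):
        print(f"{h:>3} h: INR ~{inr_after_withdrawal(2.6, h):.1f}")
```

With a half-life towards the longer end of the reported range, the same sketch reproduces the observation that a substantial proportion of patients remain anticoagulated (INR > 1·2) several days after withdrawal.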
Subcutaneous administration of vitamin K is more favoured in North America than in Europe and guidelines for its use have been published (Hirsh et al, 1995). In a randomized comparison of similar vitamin K doses administered by the intravenous or subcutaneous route, the INR at 24 h was significantly lower in the intravenous recipients; the mean INR in the subcutaneous recipients at 24 h remained outside the therapeutic range for both the 0·5 mg and 3 mg doses (Nee et al, 1999).

Apart from efficacy, other factors such as safety, patient preference, availability and ease of administration need to be considered when deciding the most appropriate route of administration of vitamin K. Although the intravenous route probably offers the most predictable and rapid reversal of the coumarin effect, it has been associated with the development of anaphylaxis and is inconvenient for use in remote areas and in the patient's home. On the other hand, although oral administration of vitamin K produces satisfactory, if slightly slower, reversal of anticoagulation, there is still a paucity of data on the early effects of oral vitamin K, a question over the comparative efficacy of different preparations and a lack of suitable low-dose formulations.

Appropriate dose of vitamin K

In patients who are over-anticoagulated but not bleeding, the aim of vitamin K administration is to bring the INR back into the therapeutic range more quickly than by withholding warfarin alone, without rendering the patient refractory to warfarin for a prolonged period. Published evidence suggests that this can be achieved by intravenous administration of 0·1–3 mg of vitamin K (Shetty et al, 1992; Whitling et al, 1998; Nee et al, 1999; Hung et al, 2000). Predictably, administration of higher doses more commonly results in a subtherapeutic INR. When the intravenous route is chosen, the appropriate initial dose therefore appears to be 0·5–1 mg.

The data on low-dose oral administration of vitamin K are more difficult to interpret. Several studies have demonstrated that appropriate lowering of the INR into the therapeutic range can be achieved using a variety of vitamin K preparations at low dose (Cosgriff, 1956; Pengo et al, 1993; Weibert et al, 1997; Crowther et al, 1998, 2000). To date, there is no consensus on the best formulation for oral administration. Our own recent observations confirm that there is a difference in the efficacy of orally administered products in the management of anticoagulant reversal (Preston et al, 1999). Further work is required to clarify the most appropriate dose and preparation for this indication.

Which asymptomatic over-anticoagulated individuals should be given vitamin K?

The need for administration of vitamin K in over-anticoagulated individuals is determined by the perceived risk of serious bleeding. Guidelines vary in their recommendations, but most base their proposed strategy on the risk of bleeding associated with the INR value alone (Hirsh et al, 1995; Baglin & Rose, 1998). A more appropriate and scientific approach might be to consider the risk of bleeding associated with an INR value and to compare it with the risk of thrombosis that may result from over-reversal of anticoagulation. In doing so, it is also appropriate to consider the individual circumstances that contribute to haemorrhagic and thrombotic risk. The model shown in Fig 2 attempts to address these issues.
Fig 2. The risk of major haemorrhage by INR over a 48-h period. Two groups of patients are shown: standard-risk patients and those at high risk of bleeding according to the factors shown in Table II. Also indicated is the thrombotic risk over a 96-h period without anticoagulation in patients with atrial fibrillation (––––) and prosthetic heart valves (……..).

Three assumptions are made:
• The risk of haemorrhage per unit of time associated with short periods of over-anticoagulation is the same as that associated with prolonged periods.
• The risk of thrombosis per unit of time associated with short periods of under-anticoagulation is the same as that associated with prolonged periods.
• There is no increased prothrombotic risk associated with low-dose vitamin K administration other than its effect in reducing the INR.

In the model, the risk of major haemorrhage by INR is derived from the ISCOAT study, a well-conducted prospective study of the bleeding complications of oral coumarin therapy in a mixed cohort (Palareti et al, 1996). The risk of haemorrhage has been calculated for a 48-h period; this period was chosen on the basis of previous observations that there is a significant lag between withholding warfarin therapy and a decline in the INR (Cosgriff, 1956; White et al, 1995). The thrombosis risk for two different indications for warfarin therapy (prosthetic heart valve and atrial fibrillation) has been calculated for 96 h of no anticoagulation, based on annual thrombosis risks of 20% and 4% respectively for these conditions in non-anticoagulated patients.

In a situation in which appropriate low doses of vitamin K are administered, the model probably overestimates the thrombosis risk, for three reasons. Firstly, fewer than a third of patients given ≤ 2·5 mg of oral vitamin K, and 50% or fewer of those given ≤ 1 mg of intravenous vitamin K, will have an INR of less than 2 at 24 h (Shetty et al, 1992; Weibert et al, 1997; Crowther et al, 1998; Whitling et al, 1998; Preston et al, 1999; Hung et al, 2000), whereas the model assumes 100%. Secondly, reintroduction of anticoagulation following partial correction of the INR has not been shown to be associated with refractoriness in recent studies in which patients were followed up for 4–7 d (Weibert et al, 1997; Crowther et al, 1998). Thirdly, the model assumes that the patient is rendered completely non-anticoagulated by overcorrection, which is rarely the case.

The model suggests that the bleeding risk for all individuals with an INR of > 7 exceeds the risk of thrombosis associated with possible over-reversal and, therefore, that these individuals, irrespective of the indication for anticoagulation, should have the INR partially corrected using low-dose vitamin K. For most individuals with an INR in the range 4·5–6·9 this does not apply. However, certain groups of patients are at much increased risk of bleeding: Beyth et al (1998) identified characteristics that may be associated with a bleeding risk increased by as much as 17-fold. The patient's age, previous history and co-morbid conditions (Table II) contribute to this risk and should therefore be considered. When this information is added to the model, it demonstrates that some patients have a bleeding risk that exceeds the risk of thrombosis even at an INR of 4·5–6·9 and who may, therefore, benefit from partial reversal of anticoagulation.
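The arithmetic behind this comparison can be made explicit. The following is a minimal sketch using only the annual thrombosis risks quoted above (20% for prosthetic heart valves, 4% for atrial fibrillation) and treating the bleeding rate per 100 patient-years at a given INR as an input to be read from the ISCOAT data in Fig 1; whether the published model used this exact conversion or simple linear pro-rating is not stated in the text, although for risks of this size the two give essentially the same answer.

```python
# Minimal sketch of the risk comparison in the model (illustrative only).
# Annual risks are converted to the probability of an event in a short window
# assuming a constant event rate: p = 1 - (1 - annual_risk) ** (hours / 8760).

def short_window_risk(annual_risk: float, window_h: float) -> float:
    """Probability of at least one event in window_h hours, given an annual risk."""
    return 1.0 - (1.0 - annual_risk) ** (window_h / (365 * 24))

def favours_partial_reversal(bleeds_per_100_pt_years: float,
                             annual_thrombosis_risk: float) -> bool:
    """Compare 48 h of bleeding risk at the current INR (rate read from Fig 1
    and supplied by the caller) against 96 h of thrombosis risk without
    anticoagulation, as in the model described in the text."""
    bleed_48h = short_window_risk(bleeds_per_100_pt_years / 100.0, 48)
    thromb_96h = short_window_risk(annual_thrombosis_risk, 96)
    return bleed_48h > thromb_96h

if __name__ == "__main__":
    # Annual thrombosis risks quoted in the text for non-anticoagulated patients.
    for indication, risk in (("prosthetic heart valve", 0.20),
                             ("atrial fibrillation", 0.04)):
        print(f"{indication}: 96-h thrombosis risk ~{short_window_risk(risk, 96):.3%}")
        # Hypothetical worked example: a bleeding rate of 10 per 100 patient-years
        # at the current INR (the true value must be read from Fig 1).
        print("  partial reversal favoured?", favours_partial_reversal(10.0, risk))
```

The output illustrates the shape of the argument: over a few days the absolute thrombotic risk is small, so at sufficiently high bleeding rates (high INR, or high-risk patients) the balance tips towards partial reversal.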
The model does not take into account episodes of further increased thrombotic risk, such as surgery, and is most applicable to 'cold' situations in which over-anticoagulation is detected on routine testing.

Life-threatening bleeding requires rapid and probably complete reversal of anticoagulation. An analogy may be drawn with haemophilia treatment, in which it is known that rapid correction of the coagulopathy reduces bleeding and adverse outcomes. The majority of fatal anticoagulant-related bleeds are intracranial. In a Swedish study, intracerebral bleeds in individuals on warfarin were double the size of, and associated with twice the mortality of, cerebral bleeds in patients not on anticoagulants (Radberg et al, 1991). Furthermore, intracranial haemorrhages sustained on anticoagulants show a greater propensity to enlarge over the following 24-h period (Hart et al, 1995). We feel that these observations provide a rationale for prompt, rapid and complete reversal of anticoagulation in the event of major or life-threatening bleeding.

There are three strategies that can be adopted in the management of life-threatening haemorrhage in a subject taking warfarin:
• Withholding warfarin.
• Administration of vitamin K.
• Transfusion of coagulation factors.

Withholding warfarin

It is self-evident that warfarin should be withheld at least until the situation is controlled. However, dose omission alone has no significant role in the emergency situation because of the very slow resolution of the anticoagulant effect.

Administration of vitamin K

Although current UK guidelines suggest that either oral or intravenous vitamin K is suitable for use in patients with major bleeding on warfarin (Baglin & Rose, 1998), we disagree with this recommendation. Although there are no comparative studies, we feel that the more predictable and rapid onset of correction of the coagulopathy achieved by the intravenous route (Anderson et al, 1975; Preston et al, 1999; Hung et al, 2000) makes this the approach of choice in the management of major bleeding. In our opinion there is rarely, if ever, justification for giving oral vitamin K to patients with life-threatening bleeding. Doses of 0·5–3 mg do not produce complete reversal of anticoagulation within 24 h. Although formal studies of larger doses have not been performed, anecdotally we have found a dose of 5 mg given intravenously to be the most useful: it provides complete correction in the vast majority of situations, irrespective of the INR, and does not render the patient resistant to re-anticoagulation. Intravenous vitamin K provides 70% of its correction of the INR within 8 h (Raj et al, 1999).

Coagulation factor replacement

In major or life-threatening anticoagulant-related bleeding, the deficient clotting factors II, VII, IX and X should be replaced as quickly as possible. This can be achieved using fresh-frozen plasma (FFP) or prothrombin complex concentrate (PCC). The properties, advantages and disadvantages of these products are compared in Table III.

FFP

In the UK and USA, FFP is still the more commonly used product for this purpose. The recommended dose for coumarin reversal is 15 ml/kg body weight (Baglin & Rose, 1998). There are a number of reasons why FFP is not the optimal form of coagulation factor replacement:
• The recommended volume of FFP for an average adult weighing 70 kg is 1050 ml. This is problematic, especially in recipients who may have pre-existing impairment of cardiovascular function.
• FFP has to be blood group-specific and the patient's blood group must therefore be known.
• Thawing incurs delay in administration. It is unlikely that, in an emergency, a venous sample can be drawn, the blood group checked and 1 l of FFP thawed, transported and administered in less than 1–2 h.
• In the UK, most FFP is a single-donor product that has not been subjected to a virus inactivation procedure and therefore carries a small but finite risk of transmission of blood-borne pathogens. Virus-inactivated plasma can be produced as a pooled product by solvent-detergent (SD) methodology or as a single-donor product using the methylene blue (MB) method. Both methods incur additional cost but offer a marginal benefit in terms of the virus safety of plasma. Furthermore, use of UK donor plasma carries the theoretical risk of transmission of new variant Creutzfeldt–Jakob disease (CJD). This risk does not apply to plasma used in the manufacture of PCC, as UK donors are excluded by law following a recommendation by the Committee on Safety of Medicines.
• The quality control of FFP is assessed by estimation of the factor VIII concentration; the vitamin K-dependent factors are not routinely assayed in single-donor products. The median (range) concentrations of factors II, VII, IX and X in 20 batches of FFP were 82·5 u/dl (53–121), 92 u/dl (41–140), 61 u/dl (32–102) and 90·5 u/dl (61–150) respectively (Makris et al, 1997). By extrapolation, it can be calculated that the administration of 15 ml/kg FFP provides the average adult with 640 u of factor IX (FIX), sufficient to raise the FIX level by only 0·09 u/ml, or 9% (the arithmetic is reproduced in the sketch at the end of this section). When the INR on warfarin is > 4·0, the FIX level is often less than 0·20 u/ml and may be less than 0·10 u/ml (Makris et al, 1997). The recommended volume of FFP is therefore not sufficient to correct the coagulopathy completely.

This has been confirmed by a study in which 12 patients with INRs of 2·9–22 were given approximately 800 ml of FFP. The median concentrations of factors II, VII, IX and X were 3, 5, 10 and 6 iu/dl before administration and 17, 19, 19 and 20 iu/dl afterwards; thus the administration of 800 ml of FFP failed to achieve a median concentration above 20 iu/dl for any of the coagulation factors measured (Makris et al, 1997).

It is initially surprising that clinicians report that FFP produces satisfactory correction of the INR in most cases. However, the INR system was developed to assess anticoagulation in patients on coumarins, and interventions such as administration of coagulation factors may make the results more difficult to interpret (Makris et al, 1997). The INR is also insensitive to changes in factor IX that may be important in determining the outcome of anticoagulant-related bleeding. Bearing these facts in mind, it is clear that what are interpreted as reasonable falls in the INR following FFP administration are in fact only partial corrections of the starting coagulation deficit (Makris et al, 1997). These points should be borne in mind when monitoring coagulation factor replacement in anticoagulated patients, and we would urge caution in interpreting INR values in isolation in these clinical situations.
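The factor IX arithmetic quoted above can be reproduced as follows. This is a minimal sketch, assuming the conventional approximation that 1 iu/kg of infused factor IX raises the plasma level by roughly 1 iu/dl; actual recovery varies between patients and products.

```python
# Minimal sketch of the FFP dose arithmetic discussed above (illustrative only).
# Assumes the conventional approximation that 1 iu/kg of infused factor IX
# raises the plasma factor IX level by about 1 iu/dl.

def ffp_factor_ix_rise(weight_kg: float,
                       ffp_dose_ml_per_kg: float = 15.0,
                       ffp_fix_iu_per_dl: float = 61.0) -> tuple[float, float]:
    """Return (total factor IX infused in iu, expected rise in iu/dl)."""
    volume_dl = weight_kg * ffp_dose_ml_per_kg / 100.0     # ml -> dl
    units_infused = volume_dl * ffp_fix_iu_per_dl          # iu of factor IX given
    rise_iu_per_dl = units_infused / weight_kg * 1.0       # ~1 iu/dl per iu/kg
    return units_infused, rise_iu_per_dl

if __name__ == "__main__":
    units, rise = ffp_factor_ix_rise(70)   # 70 kg adult; median FIX 61 iu/dl in FFP
    print(f"~{units:.0f} iu factor IX infused, expected rise ~{rise:.0f} iu/dl")
```

With a starting factor IX level that may be below 10–20 iu/dl at an INR above 4·0, a rise of this size leaves the level well short of normal, which is the point the authors are making about the recommended FFP dose.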
Prothrombin complex concentrate

A more convenient and rapid way to administer coagulation factors in the dose required to correct anticoagulation fully is as prothrombin complex concentrate (PCC). PCC is produced by the fractionation of pooled plasma, and the coagulation factor content of the final product depends on the chromatographic procedure employed. Most preparations contain factors II, VII, IX and X in approximately equal concentrations, but some contain little factor VII. PCCs were used for the treatment of haemophilia B for more than 20 years before the introduction of purer FIX concentrates and recombinant factor IX. They are supplied in lyophilized form as a powder, which is reconstituted with sterile water immediately before administration. The shelf-life is at least 2 years and 500–1000 units of each factor can be reconstituted in a final volume of 20 ml. It is therefore possible to reverse the warfarin-induced coagulation factor deficiency completely and rapidly, whatever the initial INR.

We are aware of only two studies directly comparing FFP with PCC in life-threatening bleeding, both retrospective. In 29 patients with life-threatening bleeding, the administration of PCC reduced the INR from a mean of 5·8 (range 2·2–20) to 1·3 (0·9–3·8) within 15 min; the median levels of factors II, VII, IX and X increased from 15, 23, 35·5 and 14·5 iu/dl to 50, 74, 68·5 and 72 iu/dl respectively (Makris et al, 1997). Fredriksson et al (1992) reported 17 patients with objectively proven intracranial haemorrhage related to anticoagulation who received FFP or PCC. There was a significant benefit in terms of the final prothrombin time achieved and the rate of change in favour of PCC, and patients treated with PCC showed significantly less clinical deterioration than FFP recipients. Although these studies were retrospective and non-randomized and did not compare equivalent quantities of coagulation factor replacement, they represent reasonable comparisons of what can be achieved in terms of warfarin reversal in the first 4–6 h after presentation with bleeding. Another retrospective study, of 18 patients, also demonstrated the rapid correction of anticoagulation achieved with PCC (Nitu et al, 1998).

The optimal dose of PCC or FFP required to correct warfarin coagulopathy and stop bleeding is unknown, owing to a lack of knowledge of the relative importance of the individual clotting factors for haemostasis and a lack of published data on the effect of achieved coagulation factor levels on significant outcomes. In haemophilia B, in which there is a strong relationship between residual coagulation factor levels and bleeding, most clinicians faced with a patient with life-threatening bleeding would aim to normalize the FIX level by administering the appropriate calculated dose of concentrate. Based on this principle, and extrapolating from the FIX levels of stable anticoagulated patients, we recommend the following doses of PCC: 25 u/kg for patients with an INR of 2·0–3·9, 35 u/kg for an INR of 4·0–5·9 and 50 u/kg for an INR of ≥ 6·0. At the same time, because the half-life of FVII is only 6 h, it is essential to give intravenous vitamin K.
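The banded dose recommendation above can be written out directly. The sketch below is our reading of that recommendation as stated in the text, for illustration only; it is not a validated dosing algorithm, and local product literature and haematology advice take precedence.

```python
# Minimal sketch of the PCC dose recommendation given in the text
# (25 u/kg for INR 2.0-3.9, 35 u/kg for INR 4.0-5.9, 50 u/kg for INR >= 6.0).
# Illustrative only; not a validated dosing algorithm.

def pcc_dose_units(inr: float, weight_kg: float) -> float:
    """Total PCC dose in units for reversal of major warfarin-related bleeding,
    following the banded recommendation described in the text."""
    if inr < 2.0:
        raise ValueError("The recommendation in the text starts at INR 2.0")
    if inr < 4.0:
        units_per_kg = 25
    elif inr < 6.0:
        units_per_kg = 35
    else:
        units_per_kg = 50
    return units_per_kg * weight_kg

if __name__ == "__main__":
    for inr in (3.0, 5.0, 8.0):
        print(f"INR {inr}: ~{pcc_dose_units(inr, 70):.0f} u PCC for a 70 kg patient,"
              " plus intravenous vitamin K")
```

The reminder about intravenous vitamin K is included in the output because, as noted above, the infused factor VII has a half-life of only 6 h and the correction is otherwise short-lived.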
The main concerns about PCCs are the potential to induce thrombosis and disseminated intravascular coagulation (DIC), and the risk of virus transmission. Cases of both arterial and venous thrombosis have been reported to complicate their use in haemophilia B and in warfarin reversal. The risk of thrombosis appears to be higher when PCCs are used in prothrombotic clinical settings such as lower limb orthopaedic surgery, and when concentrate is used at high dosage and for prolonged periods (Lusher, 1991). In the 1970s, it was demonstrated that these products were associated with more thrombotic problems in patients with liver disease (Aledort, 1977). Recently, there have been a number of reports of thromboses in patients who received PCC to reverse over-anticoagulation. McNeill et al (1998) reported the development of testicular vein thrombosis 48 h after the administration of PCC to treat gastrointestinal (GI) tract bleeding following a deliberate warfarin overdose. In Germany, five fatal cases of DIC and thrombosis, probably associated with the use of a single batch of PCC, have been reported; all of the cases had risk factors for thrombosis, including cirrhosis, congestive cardiomyopathy, resuscitation and shock, and carcinoma (Kohler et al, 1998). It must be appreciated that almost all subjects treated with warfarin will, by definition, have a condition predisposing to thrombosis.

Other side-effects of PCC administration, including virus transmission (especially of hepatitis B) and rare allergic reactions, also need to be borne in mind. Although presently available PCCs are the safest they have ever been in terms of virus transmission, a small risk persists. Because, unlike haemophiliacs, these recipients are not vaccinated against hepatitis B, they are susceptible to this infection. In 1994, over 30 cases of hepatitis B virus transmission by a single PCC product were reported in Germany after its use in over-anticoagulated patients (Arzneimittelkommission der deutschen Arzteschaft, 1994). While thrombosis and infection are important adverse events, they have to be balanced against the risks of bleeding and of delay in treatment that might be incurred if PCC is not used. Caution is nevertheless essential, and these products should be reserved for patients with major, life-threatening haemorrhage. Pre-existing DIC and uncompensated liver disease are significant contraindications to the use of PCC.

Bleeding episodes commonly occur while the INR is within the therapeutic range. In the ISCOAT study, 84% of all haemorrhages and 80% of major bleeds occurred at an INR of < 4·5, as did all of the fatal intracranial bleeds for which an INR value at the time of the event was available (Palareti et al, 1996). The principles of management of these patients remain as outlined above: for non-life-threatening bleeding, a reduction in the INR with oral or intravenous vitamin K is desirable, while for life-threatening bleeding, treatment with clotting factor concentrates and intravenous vitamin K is preferred.

The use of oral anticoagulants is based on an assessment of the relative thrombotic and bleeding risks, and after every serious bleed this assessment should be repeated. The risk of re-bleeding is high. In a retrospective study, 32% of 156 anticoagulated patients had a recurrence within 1 year (Fihn et al, 1993), and White et al (1996) found a re-bleeding rate of 57% in 25 patients whose warfarin was restarted after a life-threatening bleed. Often, especially in atrial fibrillation, aspirin may provide a safer alternative and should be considered. Great difficulty arises in patients with prosthetic metal heart valves who have intracerebral bleeds, as their thrombotic risk remains very high. Butler & Tait (1998) reported their experience of 13 patients with metal heart valves who had central nervous system bleeds.
None had a valve thrombosis while their anticoagulation was reversed in the face of acute bleeding and it was possible to re-anticoagulate all of them with warfarin. Over a median follow-up of 23·5 months, one patient had a recurrent non-fatal subdural haemorrhage and three patients suffered cerebral thrombotic events.

While warfarin is used almost exclusively for oral anticoagulation in the UK and North America, acenocoumarol and phenprocoumon are widely used elsewhere; in Germany, for example, 95% of patients on anticoagulants receive phenprocoumon. Because of its short half-life, non-bleeding patients on acenocoumarol with a high INR often need only the omission of one or more doses, while bleeding patients can be managed as described for those on warfarin. In contrast, because of its long half-life, patients on phenprocoumon may require repeated doses of vitamin K. Ortin et al (1998) retrospectively compared patients on acenocoumarol with an INR of > 6·0 who received 0·5–2·5 mg of subcutaneous vitamin K with a group who simply had their anticoagulant withheld; although the vitamin K group had a greater fall in INR, similar numbers of patients in each group reached a 'safe' INR within 24 h. In a study from the Netherlands, patients on phenprocoumon received 1–5 mg of oral vitamin K when their INR was > 6·0 and were followed up daily. Although the maximum fall in INR occurred at 48 h, most patients continued to have an INR of > 4·0, and this increased over the subsequent 5 d, indicating that the dose used was insufficient and that a second dose of vitamin K may be required (Penning-van Beest et al, 1999). Most of the data in the literature on anticoagulant reversal relate to warfarin, and there is an urgent need for formal studies to determine the optimal management of over-anticoagulation in patients on other oral anticoagulants.

Although the risk factors for over-anticoagulation and anticoagulant-related haemorrhage are well documented, the lack of a uniform classification of bleeding makes comparison of reported bleeding rates difficult. For this reason, we feel that a uniform classification, such as that presented here, would be an advance. In the event of life-threatening haemorrhage, there is a strong rationale for rapid and complete reversal of anticoagulation; although there are few comparative data, we feel that this is probably best achieved by administering PCC and intravenous vitamin K in appropriate doses. In cases of minor bleeding or asymptomatic elevation of the INR, reversal of anticoagulation into the desired therapeutic range can be achieved within 24 h using low-dose vitamin K given by mouth or intravenously. Doses of 0·5–1 mg of vitamin K intravenously correct over-anticoagulation, while administration of a larger dose will probably render the patient subtherapeutic. Low-dose oral vitamin K also successfully reverses over-anticoagulation, but there may be differences in efficacy between products and further studies to determine the optimal dose and formulation are required. A more scientific approach to the management of asymptomatic over-anticoagulation is needed: the indication for anticoagulation, patient age and co-morbidity, as well as the INR, should be considered when deciding which patients should receive vitamin K for this indication. Recent advances, such as computerized dosing systems and patient-based monitoring of treatment, aim to improve the safety of anticoagulant therapy.
It is important that appropriate strategies to manage over-anticoagulation and bleeding are in place in conjunction with these advances.
