Q1: What are the relevant changes concerning the cardiac biomarkers in the 2020 ESC guidelines for the management of acute coronary syndrome in patients presenting without persistent ST segment elevation?
I would say the 0h/1h algorithm recommended by the ESC is now given higher priority: it should be used in preference to the 0h/3h algorithm that was proposed in the previous ESC recommendations from 2011 and 2015. What is also new is that the ESC 0h/2h algorithm is recommended as an alternative to the 0h/1h algorithm when a validated protocol for the 0h/1h algorithm is not available; but then, of course, a validated protocol for the ESC 0h/2h algorithm is required.
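As an illustration only, the triage logic of such a 0h/1h protocol can be sketched roughly as follows. The function name and all numeric cut-offs are placeholders chosen for this sketch; the real cut-offs are assay-specific and come from the validated protocol mentioned above, not from this example.

```python
def triage_esc_0h_1h(trop_0h, trop_1h, onset_over_3h,
                     very_low=5.0, low=12.0, delta_low=3.0,
                     high=52.0, delta_high=5.0):
    """Sketch of a 0h/1h triage into rule-out / rule-in / observe.

    trop_0h, trop_1h : hs-cTn concentrations (ng/L) at 0 h and 1 h
    onset_over_3h    : True if chest-pain onset was more than 3 h before the 0 h sample
    All cut-offs are illustrative placeholders; real values are assay-specific.
    """
    delta = abs(trop_1h - trop_0h)

    # Rule-out: very low baseline value (only if the patient is not an early
    # presenter) or a low baseline value without a relevant 1 h change.
    if (trop_0h < very_low and onset_over_3h) or (trop_0h < low and delta < delta_low):
        return "rule-out"

    # Rule-in: clearly elevated baseline value or a relevant 1 h rise.
    if trop_0h >= high or delta >= delta_high:
        return "rule-in"

    # Grey zone: repeat testing (e.g. at 3 h) plus clinical and imaging work-up.
    return "observe"


# Example with the placeholder cut-offs: a late presenter with 3 ng/L at 0 h
# and 4 ng/L at 1 h would be ruled out.
print(triage_esc_0h_1h(3, 4, onset_over_3h=True))  # -> "rule-out"
```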
Another new element is the upgrade of imaging with computed tomography for patients with suspected acute coronary syndrome and a low-to-intermediate pre-test probability. There has also been a downgrading of clinical risk scores, namely the GRACE score, which was strongly recommended in the past guidelines and has now received a class IIa recommendation.
Regarding the diagnostic and management algorithm, I think there are only quite small changes, for example regarding the observation of patients before and after coronary angiography and regarding the timing of the invasive approach. There is one change here: we no longer have a group of patients with intermediate-risk characteristics who can undergo an invasive strategy within 72 hours. Currently, the guidelines recommend either an immediate angiography or an early planned coronary angiography, depending on the individual patient's risk. This is clearly defined, but some of the intermediate-risk criteria that were previously used to select patients for a delayed invasive approach have simply been omitted.
Q2: What is the rationale for the 0h/1h and 0h/2h algorithms taking precedence over the 0h/3h algorithm in the latest ESC guidelines?
I think the rationale for the 0h/1h algorithm is the accumulating evidence supporting its safety and practicability, and also its ability to relieve congestion in crowded emergency departments. Since the last guidelines, we have had a randomised trial, the RAPID-TnT study performed in Australia, which compared the 0h/1h algorithm with the standard approach based on sampling at three hours. There have also been several observational studies with prospective validation, and a very nice meta-analysis that brings all these studies together: an overview of around 11,000 patients from ten countries, covering three high-sensitivity troponin assays. There we see very good sensitivities and negative predictive values for this algorithm, as well as very good safety margins at 30 days and one year.
And we have two very nice real-life studies showing that, in practice, patients are triaged well into the different categories: rule-out, rule-in, and a small fraction of patients who have to be investigated further because they fall into the grey or observe zone; but that is only a fraction. These real-life studies also show that three-quarters of these patients can be discharged, and that the discharge of these low-risk patients is safe, with very low mortality rates. So it is very convenient for hospitals that face crowding: they can really decongest their emergency departments without increasing the risk for patients. And what is very clear is that, provided you exert some scrutiny and remain cautious about which patients you apply this protocol to, you can use it across all subgroups, even complicated patients with pre-existing coronary artery disease or chronic kidney disease, and early presenters; it applies to almost every subgroup. This evidence justified upgrading the recommendation level for the 0h/1h algorithm.
To be very clear, the 0h/2h algorithm in question is not the ADP 0h/2h algorithm, which is well established in the Asia-Pacific region, in Australia and Asia, where we have very nice studies from very renowned groups, and where there is also a randomised trial from Martin Than. It is the ESC 0h/2h algorithm. Many people were not aware that this algorithm is also very effective and very safe, and indeed several observational studies have confirmed the efficacy and safety of this ESC 0h/2h algorithm.
Whether this algorithm is as safe and effective as the 0h/3h algorithm, we do not know; there are no comparative studies, and this is something that is under debate. Regarding the 0h/3h algorithm, I am not sure that the recommendation disfavouring it is really substantiated, because the 99th percentile is the decision cut-off for the 0h/3h algorithm, while the decision cut-offs for the other algorithms are based on lower concentrations. These concentrations have been optimised to reach the highest sensitivity and specificity, but they are not always biologically plausible. The 99th percentile is our biological equivalent of the definition of myocardial injury and infarction, so if we omit it, we also omit our objective evidence of infarction. I am a bit cautious about whether we can really go to the 0h/1h algorithm alone, but at least I think it will help to promote the use of high-sensitivity assays and faster protocols.
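To make that contrast concrete, the 0h/3h rule-out is anchored at the assay's 99th percentile, whereas the 0h/1h sketch above uses lower, statistically optimised thresholds. The snippet below is a sketch only; the 99th-percentile value and the ancillary criteria are placeholder assumptions, not figures taken from the guideline.

```python
def rule_out_0h_3h(trop_0h, trop_3h, pain_free, grace_below_140,
                   percentile_99=14.0):
    """Sketch of a 99th-percentile-based 0h/3h rule-out: both samples stay
    below the assay's 99th percentile (placeholder value here) in a
    pain-free patient with a low-risk GRACE score.
    """
    below_uln = trop_0h < percentile_99 and trop_3h < percentile_99
    return below_uln and pain_free and grace_below_140


# Example with the placeholder value: 8 ng/L at 0 h and 10 ng/L at 3 h in a
# pain-free, low-GRACE patient would be ruled out under this sketch.
print(rule_out_0h_3h(8, 10, pain_free=True, grace_below_140=True))  # -> True
```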