“Can you go over Sensitivity-Specificity, and what that actually means for us in the operating room?”
I think the second part of that question is the most important (what’s that mean for us in the OR). If you’re taking the DABNM boards, of course, you’re going to need to know the definition. But being able to define sensitivity-specificity is far less important than being able to apply it.
But you have to start somewhere, so let’s start with some definitions so we can use them as building blocks to understanding.
Sensitivity: the proportion of patients with the disease who test positive.
Specificity: the proportion of patients without disease who test negative.
True Positive: correctly identified, or sick people correctly identified as sick.
True Negative: correctly rejected, or healthy people correctly identified as healthy.
False Positive: incorrectly identified, or healthy people incorrectly identified as sick.
False Negative: incorrectly rejected, or sick people incorrectly identified as healthy.
“Now we can start to add these concepts together to figure out what it means for our monitoring, and what the percentages mean that we see in the papers we read.”
Sensitivity deals with one of the biggest “uh oh” moments we face in neuromonitoring. The cases that lower your overall monitoring sensitivity are the ones that get the OR staff involved and start discussions about dropping your contract.
Here’s what it looks like: you record the case and see no changes. But the patient wakes with a problem when you thought there wouldn’t be one (false negative).
Now you have to prove that it was a shortcoming of the modality (poor sensitivity or poor understanding of what is being monitored) and not your ability to monitor/interpret the results.
Not detecting a problem when one occurs lowers your sensitivity because of the following equation:
Sensitivity = (# of true positives) / (# of true positives + # of false negatives)
Sensitivity = (# of people with surgical injury detected by monitoring) / (# of people with surgical injury detected by monitoring + # of people with surgical injury that had no significant changes in monitoring)
Sensitivity = probability of a significant monitoring change given there was an injury
So we would like our outcome to be 1, or 100%. If there is a deficit that was not picked up on monitoring, then our denominator is larger than our numerator, bringing our percentage down.
For example, 11 patients wake up with a deficit, but monitoring only picked up 10.
Sensitivity = (10) / (10 + 1)
= 0.91, or 91%
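The arithmetic above can be sketched as a small helper function (the function name is mine, not from any monitoring software):

```python
def sensitivity(true_positives, false_negatives):
    """Proportion of injured patients whose monitoring showed a significant change."""
    return true_positives / (true_positives + false_negatives)

# The example above: 11 patients woke with a deficit, monitoring caught 10.
print(round(sensitivity(10, 1), 2))  # 0.91, or 91%
```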
You have to be careful when reading papers to check that the modality is being assessed for the correct purpose. For instance, SSEP monitors the dorsal columns, or sensory tract. Is it fair to count a false negative if the damage was to the anterior horn cells?
Specificity is what we get challenged on all the time. It’s what sets off panic mode. You say, “Doc, I have a change in SSEP!” S/he says, “Are you sure it isn’t technical? I didn’t do anything.” What s/he is saying is that you do not have a specific modality, or that your positive result (the change) lacks the means to identify patient injury. S/he is praying for a false positive, because that would mean the patient is OK even though you are saying there is a problem. But the surgeon doesn’t want this to be a recurring situation either, because then s/he can’t trust you. A high number of false positives means poor specificity (which means, “Why the hell am I using you again? If this keeps up we need to discuss the problem…”), because of the equation used to find specificity:
Specificity = (# of true negatives) / (# of true negatives + # of false positives)
Specificity = (# of people with no surgical injuries and no significant monitoring changes) / (# of people with no surgical injuries and no significant monitoring changes + # of people with no surgical injury but had significant monitoring changes)
Specificity = probability of no significant monitoring findings given there was no injury
So if someone new to the field is calling EMG for every burst they see, they have become a false positive machine. If they did 10 cases and made clinically significant calls in 5 of those cases, and all 10 patients woke up OK, then they have a poor specificity.
Specificity = (5) / (5+5)
=0.5, or 50%
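The same kind of helper works here (again, a hypothetical function name for illustration):

```python
def specificity(true_negatives, false_positives):
    """Proportion of uninjured patients with no significant monitoring changes."""
    return true_negatives / (true_negatives + false_positives)

# The example above: 10 patients woke up OK, but significant calls were made in 5 cases.
print(specificity(5, 5))  # 0.5, or 50%
```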
Application of Sensitivity-Specificity numbers to tcMEP
For our purposes, having a lower specificity is usually more desirable than having a lower sensitivity. And you can’t have it both ways. As criteria change to improve one, the other will suffer.
That’s why you’ll see some monitoring groups move away from the all-or-none criterion suggested for tcMEP. When you understand everything that goes into recording a muscle potential after stimulating the cortex through the cranium, a 100% reduction makes the most sense (and you can argue that the way most groups measure amplitudes lacks accuracy). The specificity goes way up. But since there have been reports of post-op deficits with some CMAP response still present, there is a reduction in sensitivity.
Some groups have adopted a 75-80% reduction criterion to further minimize any loss of sensitivity, even if the drop in specificity is far greater.
Should a group choose to really cover their bases against false negatives and lowered sensitivity, they might call a 50% reduction in tcMEP a significant change. In my experience, they would have a sensitivity of 100%, but their specificity would be unacceptable (I’m talking about tcMEP for the spinal cord here, not brainstem or peripheral nerve monitoring).
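The threshold trade-off can be made concrete with a toy example. The case data below is entirely invented to illustrate the point, not real outcomes: each case pairs a maximum tcMEP amplitude drop with whether the patient woke with a deficit.

```python
# Hypothetical cases: (max tcMEP amplitude drop in %, post-op deficit?).
# Invented numbers purely to show the sensitivity/specificity trade-off.
cases = [
    (100, True), (85, True), (60, True),    # injured patients
    (95, False), (70, False), (55, False),  # uninjured patients with large drops
    (30, False), (20, False), (10, False), (5, False),
]

def sens_spec(threshold):
    """Sensitivity and specificity when a drop >= threshold is called significant."""
    tp = sum(1 for drop, injured in cases if injured and drop >= threshold)
    fn = sum(1 for drop, injured in cases if injured and drop < threshold)
    tn = sum(1 for drop, injured in cases if not injured and drop < threshold)
    fp = sum(1 for drop, injured in cases if not injured and drop >= threshold)
    return tp / (tp + fn), tn / (tn + fp)

for threshold in (100, 80, 50):
    sens, spec = sens_spec(threshold)
    print(f"{threshold}% criterion: sensitivity {sens:.0%}, specificity {spec:.0%}")
```

With this made-up data, the all-or-none (100%) criterion gives perfect specificity but misses injuries, while the 50% criterion catches every injury at the cost of flagging several uninjured patients: the same tension described above.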
So this is one of the reasons that there is no agreed upon alarm criteria for a lot of what we do, even if there are guidelines set by our associations.
One last observation on sensitivity-specificity.
For the neuromonitoring tech in the room, we can find ourselves in a tough situation. A lot of our surgeons only want to be told about something if it is a problem. Giving far too many false alarms is an easy way to get kicked out of their room.
For the oversight neuromonitoring doctor, we can find ourselves in a tough situation. We are there to make sure that the surgeon is informed of possible deficits. And because some causes are time dependent, the sooner the surgeon is informed and can make corrective measures the better.
I’ve been on both sides of the monitor, as the clinician in the room, remote doc overseeing the case and doing cases in the operating room without any other oversight.
There are definitely different emotional factors at play.
In the operating room, it is easier to lean towards making sure specificity is not forgotten about. You don’t want any false negatives, but you’re not looking to jump the gun either and become a false positive machine.
Overseeing someone else running the case is a little nerve wracking. There is a loss of control, and the talent level on the other side of the monitor can vary greatly. Human instinct makes you lean more towards making sure all changes are reported and sensitivity is as high as possible. Specificity sometimes takes a back seat.
Remember, I am talking about human emotions here.
My observation: not many remote docs work in the OR (some never have), and not many clinicians working in the OR also do oversight (usually an in-house program or a DABNM fills this role, and there are only about 150 active DABNMs).