A positive screening test does not always mean you have the disease. A negative test does not always mean you're free of it. Explore how prevalence, sensitivity, and specificity shape what your test result actually tells you.
Before diving into the simulator, let's build a foundation. Every screening test has four possible outcomes when applied to a population. These outcomes depend on two things: the test's accuracy and how common the disease is.
The 2×2 Table
When a screening test is given to a group of people, every person falls into one of four categories:
Test Positive, Has Disease: True Positive (TP), correctly identified as having the disease
Test Positive, No Disease: False Positive (FP), incorrectly told they have the disease
Test Negative, Has Disease: False Negative (FN), the disease is missed by the test
Test Negative, No Disease: True Negative (TN), correctly identified as disease-free
Sensitivity
TP ÷ (TP + FN)
Of all people who truly have the disease, what percentage does the test correctly identify as positive? High sensitivity means fewer false negatives — the test rarely misses a sick person.
Specificity
TN ÷ (TN + FP)
Of all people who do NOT have the disease, what percentage does the test correctly identify as negative? High specificity means fewer false positives — healthy people are rarely told they're sick.
Positive Predictive Value
TP ÷ (TP + FP)
If your test is positive, what is the probability that you actually have the disease? This is what patients care about most. PPV depends heavily on prevalence.
Negative Predictive Value
TN ÷ (TN + FN)
If your test is negative, what is the probability that you truly do not have the disease? NPV also depends on prevalence — when disease is rare, even a mediocre test gives reassuring negatives.
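The four definitions above can be written as small helper functions. This is a minimal sketch; the example counts (90 TP, 10 FN, 50 FP, 850 TN in a population of 1,000) are hypothetical numbers chosen for illustration.

```python
def sensitivity(tp, fn):
    """Of all people who truly have the disease, the fraction the test flags positive."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """Of all people without the disease, the fraction the test correctly calls negative."""
    return tn / (tn + fp)

def ppv(tp, fp):
    """Given a positive result, the probability the disease is actually present."""
    return tp / (tp + fp)

def npv(tn, fn):
    """Given a negative result, the probability the disease is actually absent."""
    return tn / (tn + fn)

# Hypothetical 2x2 table for 1,000 people: 90 TP, 10 FN, 50 FP, 850 TN
print(f"sensitivity: {sensitivity(90, 10):.3f}")   # sensitivity: 0.900
print(f"specificity: {specificity(850, 50):.3f}")  # specificity: 0.944
print(f"PPV: {ppv(90, 50):.3f}")                   # PPV: 0.643
print(f"NPV: {npv(850, 10):.3f}")                  # NPV: 0.988
```

Note that this hypothetical test is quite accurate (90% sensitivity, about 94% specificity), yet its PPV is only about 64%: more than a third of positives are false alarms.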
Why Prevalence Matters
Prevalence is the proportion of people in a population who have the disease at a given time. It is the single biggest factor that determines what a positive or negative result means for you.
Low prevalence (e.g., 1%)
Most people being tested are healthy. Even a small false-positive rate produces many false alarms relative to the few true cases. A positive result is less likely to be a true positive.
High prevalence (e.g., 30%)
A substantial portion of the tested group has the disease. True positives outnumber false positives. A positive result is much more likely to be a true positive.
This is why screening programs target high-risk groups — the disease is more prevalent in those groups, so the test results are more meaningful.
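The effect of prevalence can be seen directly from Bayes' theorem. A minimal sketch, assuming a hypothetical test with 90% sensitivity and 95% specificity (the numbers are illustrative, not from the article):

```python
def ppv_from_prevalence(prevalence, sens, spec):
    # Bayes' theorem: P(disease | positive test)
    tp_rate = sens * prevalence               # true positives per person tested
    fp_rate = (1 - spec) * (1 - prevalence)   # false positives per person tested
    return tp_rate / (tp_rate + fp_rate)

# Hypothetical test: 90% sensitivity, 95% specificity
for prev in (0.01, 0.30):
    print(f"prevalence {prev:.0%}: PPV = {ppv_from_prevalence(prev, 0.90, 0.95):.0%}")
# prevalence 1%: PPV = 15%
# prevalence 30%: PPV = 89%
```

Same test, same accuracy: at 1% prevalence only about 1 in 6 positives is real, while at 30% prevalence almost 9 in 10 are.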
Screening Test Simulator
Adjust the sliders to see how prevalence, sensitivity, and specificity interact to determine what test results mean. The simulated population below contains 1,000 people.
[Interactive simulator: quick-scenario presets, live readouts for Positive Predictive Value (the chance a positive result is a true case) and Negative Predictive Value (the chance a negative result means no disease), a 2×2 results table for the 1,000 simulated people, and charts showing who tests positive and who tests negative.]
How PPV Changes with Prevalence
This curve shows PPV at every prevalence level for the current sensitivity and specificity. The dot marks your current setting.
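The same curve can be traced numerically. A sketch of how the simulator might compute PPV and NPV across prevalence levels; the sensitivity and specificity values here are stand-ins for the slider settings, not figures from the article:

```python
def predictive_values(prev, sens, spec):
    """Return (PPV, NPV) for a given prevalence, via expected rates per person tested."""
    tp = sens * prev                # true positive rate
    fp = (1 - spec) * (1 - prev)    # false positive rate
    tn = spec * (1 - prev)          # true negative rate
    fn = (1 - sens) * prev          # false negative rate
    return tp / (tp + fp), tn / (tn + fn)

sens, spec = 0.95, 0.90  # stand-ins for the current slider settings
for pct in (1, 5, 10, 30, 50):
    ppv, npv = predictive_values(pct / 100, sens, spec)
    print(f"prevalence {pct:3d}%: PPV {ppv:.1%}, NPV {npv:.1%}")
```

Sweeping prevalence this way shows both trends at once: PPV climbs steeply as the disease becomes more common, while NPV slowly falls from near 100%.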
Real-World Scenarios
These scenarios show why the same test can mean very different things depending on context.
Hepatitis C Antibody Screening
If Bob tests positive, there's only about a 37% chance he truly has Hepatitis C. That's why a confirmatory test (RNA PCR) is always ordered after a positive antibody screen — most positive screens in low-prevalence populations are false alarms.
Rapid Influenza Testing
In summer: Most positive tests are false positives. Only about 12% of positive rapid tests are true influenza cases. Doctors should not trust a positive result in summer without confirmation.
In winter: A positive test is reliable (about 78% PPV). But 30% of true flu cases are missed (false negatives), so a negative test doesn't rule it out.
Screening Mammography
In this scenario, only about 11% of positive mammograms represent actual breast cancer. Most callbacks are false alarms. This is why guidelines emphasize informed decision-making about when to begin screening — the anxiety from false positives is a real cost.
HIV in a High-Risk Population
Modern HIV antibody/antigen combo tests have >99% sensitivity and >99.5% specificity. In a high-risk population (prevalence around 15%), a positive test has a PPV of about 97%.
This is why targeted testing in high-prevalence groups is so effective. The same test given to the general population (prevalence ~0.3%) would have a much lower PPV — around 37%.
Even with a near-perfect test, a positive result in this very low-prevalence group has a relatively low positive predictive value — meaning many positives will be false alarms. A confirmatory HIV-1/2 differentiation assay is always required before a diagnosis is made.
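The two HIV figures above can be checked with the same Bayes calculation, plugging in the numbers the scenario gives (99% sensitivity, 99.5% specificity, 15% vs 0.3% prevalence):

```python
def ppv(prev, sens, spec):
    # P(disease | positive test), from Bayes' theorem
    return sens * prev / (sens * prev + (1 - spec) * (1 - prev))

# Figures from the scenario: 99% sensitivity, 99.5% specificity
sens, spec = 0.99, 0.995

print(f"high-risk group (15% prevalence):     PPV = {ppv(0.15, sens, spec):.0%}")   # 97%
print(f"general population (0.3% prevalence): PPV = {ppv(0.003, sens, spec):.0%}")  # 37%
```

The test itself never changes; only the population does, and the PPV drops from 97% to 37%.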
Mammography in BRCA Carriers
Because prevalence is so high in this group, a positive mammogram is far more likely to represent a true cancer than in average-risk women. However, the lower sensitivity means mammography alone misses roughly 1 in 4 cancers in BRCA carriers — which is why MRI is typically added to the surveillance protocol.
Key Takeaways
A positive screening test is not a diagnosis. It means further testing is needed, especially when the disease is rare in your group.
When prevalence is low, even highly specific tests produce many false positives relative to true positives.
When prevalence is high, even moderately sensitive tests miss fewer cases, and positive results are more reliable.
Sensitivity and specificity are properties of the test itself. PPV and NPV depend on the population being tested.
Clinicians improve predictive values by targeting screening to groups where the disease is more common.