Let's say we come up with some kind of test where a positive result correctly detects that a disease is present some percentage of the time (this percentage is called the sensitivity of the test). Set or vary the sensitivity of the test here: %. Of course, this also means that (unless the test is 100% sensitive) the test will fail to detect the disease even when it is present, and based on the sensitivity you chose, that happens the remaining percentage of the time (100% minus the sensitivity).
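As a quick sanity check, here is a minimal simulation sketch in Python (the sensitivity of 80% and the group of 10,000 people who all have the disease are assumed placeholder values, since the page lets you choose the sensitivity yourself): the fraction detected hovers around the sensitivity, and the fraction missed hovers around 100% minus the sensitivity.

```python
import random

sensitivity = 0.80    # chance a diseased person tests positive (assumed value)
n_diseased = 10_000   # people who truly have the disease (assumed value)

# Each diseased person tests positive with probability equal to the sensitivity.
positives = sum(random.random() < sensitivity for _ in range(n_diseased))
false_negatives = n_diseased - positives

print(f"Detected (true positives): {positives / n_diseased:.1%}")       # ~ sensitivity
print(f"Missed (false negatives):  {false_negatives / n_diseased:.1%}") # ~ 100% - sensitivity
```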
Let's combine the idea of prior probability with the sensitivity of a test. The prior probability of the disease in the population you are studying is %. The sensitivity of the test you chose above is %. The red circles around persons with the disease below indicate positive tests.
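To make the combination concrete, here is a minimal simulation sketch in Python (the prior of 10%, sensitivity of 80%, and population of 10,000 are assumed placeholder values standing in for whatever you set above): each person has the disease with probability equal to the prior, and each diseased person gets a red circle (a positive test) with probability equal to the sensitivity.

```python
import random

prior = 0.10         # prior probability of disease in the population (assumed value)
sensitivity = 0.80   # chance the test detects the disease when present (assumed value)
population = 10_000  # number of people in the population (assumed value)

# Each person has the disease with probability equal to the prior.
diseased = sum(random.random() < prior for _ in range(population))

# Each diseased person gets a "red circle" (positive test) with probability
# equal to the sensitivity; the rest are missed (false negatives).
flagged = sum(random.random() < sensitivity for _ in range(diseased))

print(f"People with the disease:                  {diseased}")           # ~ prior * population
print(f"Flagged by a positive test (red circles): {flagged}")            # ~ sensitivity * diseased
print(f"Missed despite having the disease:        {diseased - flagged}")
```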
So what do we notice when there are a lot of people with the disease and the sensitivity of a test is low? Is that a desirable test?