Evaluating Segmentations

You can use the Segmentation Comparator to compare an automated segmentation with a ground truth and objectively evaluate the results with a number of metrics, such as Accuracy, Dice, True Positive Rate, True Negative Rate, and others. Results can be exported in the comma-separated values (*.csv) file format for further analysis or archiving.
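Comma-separated files of metric values are easy to produce or parse with any scripting tool. As a minimal Python sketch of such post-processing (the metric values, file name, and two-column Metric/Value layout below are illustrative assumptions, not the comparator's documented export schema):

```python
import csv

# Hypothetical metric values, e.g. copied from the Segmentation Comparator dialog.
metrics = {"Accuracy": 0.97, "Dice": 0.91, "TPR": 0.93, "TNR": 0.98}

# Write a two-column *.csv file for further analysis or archiving.
with open("segmentation_comparison.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["Metric", "Value"])
    writer.writerows(metrics.items())
```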

Select the ROIs or multi-ROIs that you want to compare and then choose Open Segmentation Comparator from the pop-up menu to open the dialog shown below.

Segmentation Comparator dialog

You can also choose Utilities > Structured Grids Comparator on the menu bar to open the Structured Grids Comparator, which provides the same options for comparing segmentations with a ground truth.

The following metrics are available for objectively evaluating segmentations:

| Available metrics | Description |
| --- | --- |
| Accuracy | The fraction of pixels that were classified correctly, calculated in terms of positives and negatives as (TP + TN) / (TP + TN + FP + FN). |
| Dice | A measure of a model's accuracy, calculated from precision and recall as 2TP / (2TP + FP + FN). Precision is the number of true positive results divided by the number of all positive results, including those not identified correctly, and recall is the number of true positive results divided by the number of all samples that should have been identified as positive. Precision is also known as positive predictive value, and recall is also known as sensitivity in binary classification. |
| FN | False negative, an outcome for which the model incorrectly predicted the negative class. |
| FP | False positive, an outcome for which the model incorrectly predicted the positive class. |
| TN | True negative, an outcome for which the model correctly predicted the negative class. |
| TP | True positive, an outcome for which the model correctly predicted the positive class. |
| TPR | True positive rate, calculated as TP / (TP + FN). Also known as sensitivity or recall. |
| TNR | True negative rate, calculated as TN / (TN + FP). Also known as specificity. |
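To make the relationship between these counts and the derived metrics concrete, the following minimal Python sketch computes the metrics above from two binary masks. The function name and example masks are illustrative; this is not the application's own implementation:

```python
import numpy as np

def evaluate_segmentation(predicted, truth):
    """Derive the metrics above from two binary masks of the same shape."""
    pred = np.asarray(predicted, dtype=bool)
    gt = np.asarray(truth, dtype=bool)
    tp = np.count_nonzero(pred & gt)    # correctly predicted positives
    tn = np.count_nonzero(~pred & ~gt)  # correctly predicted negatives
    fp = np.count_nonzero(pred & ~gt)   # incorrectly predicted positives
    fn = np.count_nonzero(~pred & gt)   # incorrectly predicted negatives
    return {
        "Accuracy": (tp + tn) / (tp + tn + fp + fn),
        "Dice": 2 * tp / (2 * tp + fp + fn),
        "TPR": tp / (tp + fn),  # sensitivity
        "TNR": tn / (tn + fp),  # specificity
    }

# Example with two small masks (1 = foreground, 0 = background).
predicted = np.array([[1, 1, 0], [0, 1, 0]])
ground_truth = np.array([[1, 0, 0], [0, 1, 1]])
print(evaluate_segmentation(predicted, ground_truth))
```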