Comments (7)
Dear @hwjung92,
If you want to show the plots with the Average Precision graphs for each class, you need to call the function evaluator.PlotPrecisionRecallCurve and set the parameter showInterpolatedPrecision=True.
I encourage you to see this sample.
If you want to use the command line, you can use --savepath and omit the --noplot argument.
Please close this issue if your question was answered properly.
Regards,
Rafael
from object-detection-metrics.
Thank you for the rapid response.
My question is how to draw the average precision graph for all classes, instead of for each class.
Actually, I simply modified the GetPascalVOCMetrics() function as shown below.
I concatenated the FP and TP arrays of every class, then calculated acc_FP and acc_TP.
FPS.append(FP)
TPS.append(TP)
# compute precision, recall and average precision
FPS2 = [j for sub in FPS for j in sub]
TPS2 = [j for sub in TPS for j in sub]
acc_FP = np.cumsum(FPS2)
acc_TP = np.cumsum(TPS2)
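The merged-accumulation step can be sketched in isolation; this is a minimal, self-contained example with made-up per-class TP/FP indicator arrays, not the repository's actual data:

```python
import numpy as np

# Hypothetical per-class TP/FP indicator arrays (1 = hit, 0 = miss),
# one array per class, each already sorted by detection confidence
TPS = [np.array([1, 1, 0]), np.array([1, 0])]
FPS = [np.array([0, 0, 1]), np.array([0, 1])]
npos = [3, 2]  # number of ground-truth boxes per class

# Flatten the per-class arrays into single sequences
TPS2 = np.concatenate(TPS)
FPS2 = np.concatenate(FPS)

# Accumulate and derive precision/recall over all classes at once
acc_TP = np.cumsum(TPS2)
acc_FP = np.cumsum(FPS2)
rec = acc_TP / np.sum(npos)
prec = acc_TP / (acc_TP + acc_FP)

print(rec[-1], prec[-1])  # 0.6 0.6
```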
It works, thank you.
I want to draw the average Precision-Recall graph over all classes.
My implementation is posted below.
def GetPascalVOCMetricsForAllClass(self,
                                   boundingboxes,
                                   IOUThreshold=0.5,
                                   method=MethodAveragePrecision.EveryPointInterpolation):
    # List with all ground truths (Ex: [imageName, class, confidence=1, (bb coordinates XYX2Y2)])
    groundTruths = []
    # List with all detections (Ex: [imageName, class, confidence, (bb coordinates XYX2Y2)])
    detections = []
    # List with all classes
    classes = []
    # Loop through all bounding boxes and separate them into GTs and detections
    for bb in boundingboxes.getBoundingBoxes():
        # [imageName, class, confidence, (bb coordinates XYX2Y2)]
        if bb.getBBType() == BBType.GroundTruth:
            groundTruths.append([
                bb.getImageName(),
                bb.getClassId(), 1,
                bb.getAbsoluteBoundingBox(BBFormat.XYX2Y2)
            ])
        else:
            detections.append([
                bb.getImageName(),
                bb.getClassId(),
                bb.getConfidence(),
                bb.getAbsoluteBoundingBox(BBFormat.XYX2Y2)
            ])
        # collect class
        if bb.getClassId() not in classes:
            classes.append(bb.getClassId())
    classes = sorted(classes)
    # TP/FP arrays are computed per class, then merged over all classes
    npos = []
    TPS = []
    FPS = []
    for c in classes:
        # Get only detections of class c
        dects = [d for d in detections if d[1] == c]
        # Get only ground truths of class c
        gts = [g for g in groundTruths if g[1] == c]
        npos.append(len(gts))
        # Sort detections by decreasing confidence
        dects = sorted(dects, key=lambda conf: conf[2], reverse=True)
        TP = np.zeros(len(dects))
        FP = np.zeros(len(dects))
        # Create dictionary with amount of gts for each image
        det = Counter([cc[0] for cc in gts])
        for key, val in det.items():
            det[key] = np.zeros(val)
        # Loop through detections
        for d in range(len(dects)):
            # Find ground truths of the same image
            gt = [gt for gt in gts if gt[0] == dects[d][0]]
            iouMax = sys.float_info.min
            jmax = 0
            for j in range(len(gt)):
                iou = Evaluator.iou(dects[d][3], gt[j][3])
                if iou > iouMax:
                    iouMax = iou
                    jmax = j
            # Assign detection as true positive or false positive
            if iouMax >= IOUThreshold:
                if det[dects[d][0]][jmax] == 0:
                    TP[d] = 1  # count as true positive
                    det[dects[d][0]][jmax] = 1  # flag GT as already 'seen'
                else:
                    # A detected "cat" overlapped a GT "cat" with IOU >= IOUThreshold,
                    # but that GT was already matched by a higher-confidence detection
                    FP[d] = 1  # count as false positive
            else:
                FP[d] = 1  # count as false positive
        FPS.append(FP)
        TPS.append(TP)
    # Compute precision, recall and average precision over all classes at once
    FPS2 = [j for sub in FPS for j in sub]
    TPS2 = [j for sub in TPS for j in sub]
    acc_FP = np.cumsum(FPS2)
    acc_TP = np.cumsum(TPS2)
    rec = acc_TP / np.sum(npos)
    prec = np.divide(acc_TP, (acc_FP + acc_TP))
    # Depending on the method, call the right implementation
    if method == MethodAveragePrecision.EveryPointInterpolation:
        [ap, mpre, mrec, ii] = Evaluator.CalculateAveragePrecision(rec, prec)
    else:
        [ap, mpre, mrec, _] = Evaluator.ElevenPointInterpolatedAP(rec, prec)
    # Single result dictionary covering all classes
    ret = {
        'class': 'all classes',
        'precision': prec,
        'recall': rec,
        'AP': ap,
        'interpolated precision': mpre,
        'interpolated recall': mrec,
        'total positives': np.sum(npos),
        'total TP': np.sum(TPS2),
        'total FP': np.sum(FPS2)
    }
    return ret
I found that the graph for each class is correct.
However, the combined curve is not correct, because it is a summed (pooled) curve rather than an average.
Can anyone help me?
Dear @hwjung92,
I don’t understand the need to plot a mAP graph. The mAP is a single number that summarizes your detections over all classes.
Sorry, but I don’t know what your need is.
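For reference, the mAP is simply the unweighted mean of the per-class AP values, which a short sketch can illustrate (the class names and AP numbers below are made up):

```python
import numpy as np

# Hypothetical per-class average-precision values
ap_per_class = {'cat': 0.82, 'dog': 0.74, 'bird': 0.60}

# mAP is the unweighted mean of the per-class APs
mAP = np.mean(list(ap_per_class.values()))
print(f'mAP: {mAP:.2f}')  # mAP: 0.72
```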
Sorry for the confusing question. I misunderstood the concept of mAP. What I actually want is to draw the average Precision-Recall graph over all classes. How can I solve this?
I simply pool the FP and TP of all classes and calculate Precision and Recall from them.
The new result looks like an average PR curve over all classes.
If there is a better way or idea, let me know.
Thank you.
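One caveat with concatenating the per-class TP/FP arrays is that the pooled curve then depends on the order of the classes, because detections are only sorted by confidence within each class. A possible refinement, shown here as a sketch with made-up data rather than the repository's API, is to pool all detections first and sort them globally by confidence before accumulating:

```python
import numpy as np

# Hypothetical pooled detections: (confidence, is_true_positive)
detections = [
    (0.95, 1), (0.90, 0), (0.85, 1),  # e.g. class 'cat'
    (0.80, 1), (0.70, 0),             # e.g. class 'dog'
]
n_gt = 5  # total number of ground-truth boxes over all classes

# Sort all detections together by decreasing confidence
detections.sort(key=lambda x: x[0], reverse=True)
tp = np.array([d[1] for d in detections], dtype=float)
fp = 1.0 - tp

# Accumulate globally, then derive the pooled (micro-averaged) PR curve
acc_tp = np.cumsum(tp)
acc_fp = np.cumsum(fp)
rec = acc_tp / n_gt
prec = acc_tp / (acc_tp + acc_fp)
print(rec[-1], prec[-1])  # 0.6 0.6
```

With a global sort, the curve is independent of how the detections were grouped by class, which is the usual convention for a pooled precision-recall curve.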