
the method of mean_ap (hhl, closed, 2 comments)

TJJTJJTJJ commented on June 30, 2024
the method of mean_ap


Comments (2)

TJJTJJTJJ commented on June 30, 2024

Hi,
I'm sorry to bother you again. I'm confused by your code for mAP.
In open-reid, it is computed like this:

from sklearn.metrics import average_precision_score
aps.append(average_precision_score(y_true, y_score))
# examples
>>> import numpy as np
>>> from sklearn.metrics import average_precision_score
>>> y_true = np.array([0, 0, 1, 1])
>>> y_scores = np.array([0.1, 0.4, 0.35, 0.8])
>>> average_precision_score(y_true, y_scores)  
0.83...
# after sorting by y_scores in descending order, y_true is 1, 0, 1, 0
# recall = [1., 0.5, 0.5, 0.], precision = [0.66666667, 0.5, 1., 1.]
# AP = -np.sum(np.diff(recall) * np.array(precision)[:-1])
# 0.83 = 1/2*2/3 + 0*1/2 + 1/2*1 = 5/6
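To make the step-wise rule concrete, here is a minimal check (my own sketch; it assumes scikit-learn >= 0.19, where average_precision_score reduces to this step-wise sum):

import numpy as np
from sklearn.metrics import precision_recall_curve

y_true = np.array([0, 0, 1, 1])
y_scores = np.array([0.1, 0.4, 0.35, 0.8])

# precision_recall_curve returns points ordered by increasing threshold,
# so recall is decreasing: recall = [1., 0.5, 0.5, 0.]
precision, recall, thresholds = precision_recall_curve(y_true, y_scores)

# step-wise AP: weight each precision by the recall gained at that step
ap_step = -np.sum(np.diff(recall) * precision[:-1])
print(ap_step)  # 0.8333... = 5/6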

In your code, it is like this:

def average_precision_score(y_true, y_score, average="macro",
                            sample_weight=None):
    def _binary_average_precision(y_true, y_score, sample_weight=None):
        precision, recall, thresholds = precision_recall_curve(
            y_true, y_score, sample_weight=sample_weight)
        return auc(recall, precision)

    return _average_binary_score(_binary_average_precision, y_true, y_score,
                                 average, sample_weight=sample_weight)
# examples (with this older average_precision_score)
>>> import numpy as np
>>> from sklearn.metrics import average_precision_score
>>> y_true = np.array([0, 0, 1, 1])
>>> y_scores = np.array([0.1, 0.4, 0.35, 0.8])
>>> average_precision_score(y_true, y_scores)
0.7916666666666666

recall = [1., 0.5, 0.5, 0.], precision = [0.66666667, 0.5, 1., 1.]
After I calculate it by hand, it is:
0.79 = 1/2*(2/3+1/2)*1/2 + 0*(1/2+1)*1/2 + 1/2*(1+1)*1/2 = 19/24
So it seems to take the mean of the two adjacent precision values (the trapezoidal rule) rather than a single precision value.
But even if it takes the mean, I would expect it to be:
1/2*(2/3+1)*1/2 + 0*(1/2+1)*1/2 + 1/2*(1+1)*1/2 = 11/12
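To verify the trapezoidal reading, here is a small sketch of mine that reproduces 19/24, both with auc and by hand:

import numpy as np
from sklearn.metrics import auc, precision_recall_curve

y_true = np.array([0, 0, 1, 1])
y_scores = np.array([0.1, 0.4, 0.35, 0.8])
precision, recall, _ = precision_recall_curve(y_true, y_scores)

# trapezoidal AP: auc averages adjacent precisions over each recall step
ap_trapz = auc(recall, precision)
print(ap_trapz)  # 0.7916... = 19/24

# equivalent by hand: sum of |d_recall| * (p_i + p_{i+1}) / 2
steps = -np.diff(recall) * (precision[:-1] + precision[1:]) / 2
print(steps.sum())  # 0.7916... = 19/24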

Maybe there is something I am getting wrong. If you have time, please point out my mistake; I am trying to better understand your code.
Thank you.


zhunzhong07 commented on June 30, 2024

Hi, this problem is explained here by @huanghoujing. In our code, we use the average_precision_score of scikit-learn version 0.18.1, but in open-reid, average_precision_score behaves differently from version 0.19 on. Our result is the same as the official evaluation code.
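As a quick side-by-side illustration (a sketch of the two formulas, not our evaluation code): the step-wise sum matches scikit-learn >= 0.19, while the trapezoidal rule matches 0.18.1:

import numpy as np
from sklearn.metrics import precision_recall_curve

y_true = np.array([0, 0, 1, 1])
y_scores = np.array([0.1, 0.4, 0.35, 0.8])
precision, recall, _ = precision_recall_curve(y_true, y_scores)

# step-wise sum (scikit-learn >= 0.19, used by open-reid): 5/6
print(-np.sum(np.diff(recall) * precision[:-1]))

# trapezoidal rule (scikit-learn 0.18.1, used in our code): 19/24
print(np.sum(-np.diff(recall) * (precision[:-1] + precision[1:]) / 2))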

