SemEval (Semantic Evaluation Exercises) is a series of workshops whose main aim is the evaluation and comparison of semantic analysis systems. The data and corpora they provide have become a de facto benchmark suite for the NLP community.
The SemEval event provides data and evaluation frameworks for several tasks; one of them, Semantic Textual Similarity (STS), is the focus of this project.
All information on the 2012 edition is available at this link.
All information on Task 6 (Semantic Textual Similarity) is available at this link.