Ontology Alignment Evaluation Initiative - OAEI-2013 Campaign

Results OAEI 2013::Large BioMed Track

Summary Results (top systems)

The following table summarises the results for the systems that completed all 6 tasks of the Large BioMed Track. The table shows the total time in seconds to complete all tasks, together with the averages for Precision, Recall, F-measure and Incoherence degree. The systems are ordered by average F-measure and, in case of ties, by Incoherence degree.

YAM++ was a step ahead and obtained the best average Precision, Recall and F-measure.

AML-R obtained the second best average Precision while AML-BK obtained the second best average Recall.

Regarding mapping incoherence, LogMap-BK produced, on average, the mapping sets with the lowest incoherence degree, i.e., those leading to the smallest number of unsatisfiable classes. The configurations of AML using (R)epair also obtained very good results in terms of mapping coherence.

Finally, LogMapLt was the fastest system. The rest of the tools, apart from ServOMap and SPHeRe, were also very fast and only needed between 11 and 53 minutes to complete all 6 tasks. ServOMap required around 4 hours to complete the 6 tasks, while SPHeRe required almost 12 hours.


System      Total Time (s)   Avg. Precision   Avg. Recall   Avg. F-measure   Avg. Incoherence
YAM++                2,066            0.942         0.728            0.817             14.0%
AML-BK               1,814            0.908         0.709            0.792             44.2%
LogMap-BK            2,391            0.904         0.700            0.785             0.014%
AML-BK-R             1,987            0.921         0.692            0.785             0.027%
AML                  1,681            0.926         0.683            0.783             43.1%
LogMap               2,485            0.910         0.689            0.780             0.015%
AML-R                1,821            0.939         0.666            0.776             0.029%
ServOMap            15,300            0.875         0.690            0.766             52.4%
GOMMA2012            1,920            0.813         0.567            0.654             28.8%
LogMapLt               371            0.874         0.517            0.598             34.1%
SPHeRe              42,040            0.857         0.464            0.569             21.4%
IAMA                   704            0.912         0.386            0.517             46.4%
Table 1: Summary Results for the top systems.
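As a reminder of how the F-measure relates to the other two columns, the sketch below computes it as the harmonic mean of precision and recall. Note that Table 1 reports the F-measure averaged over the 6 tasks, so the value computed from the averaged precision and recall need not match the tabulated average exactly (for YAM++ it gives roughly 0.821 rather than 0.817).

```python
def f_measure(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall (the F1 score)."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Example with YAM++'s average precision and recall from Table 1.
# This differs slightly from the reported average F-measure (0.817)
# because the table averages F per task rather than recomputing it
# from the averaged precision and recall.
print(round(f_measure(0.942, 0.728), 3))
```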