Ontology Alignment Evaluation Initiative - OAEI-2013 Campaign

Results OAEI 2013::Large BioMed Track

Results of the FMA-NCI matching problem

The following tables summarize the results for the tasks in the FMA-NCI matching problem.

LogMap-BK and YAM++ provided the best results in terms of both recall and F-measure in Task 1 and Task 2, respectively. IAMA obtained the best precision in both tasks, although its recall was below average. Hertuda achieved competitive recall, but its low precision hurt its final F-measure. Conversely, StringsAuto, XMapGen and XMapSiG produced alignments with relatively high precision, but their F-measures suffered from low recall. Overall, the results were very positive: many systems obtained an F-measure greater than 0.80 in both tasks.
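For reference, the F-measure reported below is the harmonic mean of precision and recall. A minimal sketch that reproduces a row of Table 1 (the helper function is ours, not a tool from the campaign):

    def f_measure(precision: float, recall: float) -> float:
        """Harmonic mean of precision and recall (F1)."""
        return 2 * precision * recall / (precision + recall)

    # LogMap-BK in Task 1: Precision = 0.949, Recall = 0.883 (Table 1)
    print(round(f_measure(0.949, 0.883), 3))  # -> 0.915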

Note that matching quality in Task 2 decreased with respect to Task 1: the average F-measure dropped from 0.810 to 0.799, and every system that completed both tasks scored lower in Task 2. This is mostly because the whole ontologies involve many more candidate alignments, which makes it harder to keep precision high without damaging recall, and vice versa.
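To get a feel for the scale difference, the naive candidate space for matching two ontologies grows with the product of their class counts. A rough sketch, using approximate sizes from the track description (FMA fragment ~3,700 and NCI fragment ~6,500 classes in Task 1; whole FMA ~79,000 and whole NCI ~66,700 classes in Task 2; these figures are approximations, not part of the results below):

    # Naive pairwise candidate space |O1| x |O2| for each task.
    # Class counts are approximate figures, not taken from the tables below.
    tasks = {
        "Task 1 (small fragments)": (3_700, 6_500),
        "Task 2 (whole ontologies)": (79_000, 66_700),
    }

    for name, (fma, nci) in tasks.items():
        print(f"{name}: {fma * nci:,} candidate pairs")

    # Task 2's candidate space is roughly 200x larger than Task 1's,
    # while the reference alignment stays the same size.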


Task 1: FMA-NCI small fragments

System        Time (s)  # Mappings  Precision  Recall  F-measure  Unsat.  Degree
LogMap-BK           45       2,727      0.949   0.883      0.915       2   0.02%
YAM++               94       2,561      0.976   0.853      0.910       2   0.02%
GOMMA2012           40       2,626      0.963   0.863      0.910   2,130   20.9%
AML-BK-R            43       2,619      0.958   0.856      0.904       2   0.02%
AML-BK              39       2,695      0.942   0.867      0.903   2,932   28.8%
LogMap              41       2,619      0.952   0.851      0.899       2   0.02%
AML-R               19       2,506      0.963   0.823      0.888       2   0.02%
ODGOMS-v1.2     10,205       2,558      0.953   0.831      0.888   2,440   24.0%
AML                 16       2,581      0.947   0.834      0.887   2,598   25.5%
LogMapLt             8       2,483      0.959   0.813      0.880   2,104   20.7%
ODGOMS-v1.1      6,366       2,456      0.963   0.807      0.878   1,613   15.8%
ServOMap           141       2,512      0.951   0.815      0.877     540    5.3%
SPHeRe              16       2,359      0.960   0.772      0.856     367    3.6%
HotMatch         4,372       2,280      0.965   0.751      0.845     285    2.8%
Average          2,330       2,527      0.896   0.754      0.810   1,582   15.5%
IAMA                14       1,751      0.979   0.585      0.733     166    1.6%
Hertuda          3,404       4,309      0.589   0.866      0.701   2,675   26.3%
StringsAuto      6,359       1,940      0.838   0.554      0.667   1,893   18.6%
XMapGen          1,504       1,687      0.833   0.479      0.608   1,092   10.7%
XMapSiG          1,477       1,564      0.864   0.461      0.602     818    8.0%
MaasMatch       12,410       3,720      0.407   0.517      0.456   9,988   98.1%
Table 1: Results for Large BioMed Track Task 1 (FMA-NCI small fragments). Unsat.: number of unsatisfiable classes in the integrated ontology; Degree: percentage of unsatisfiable classes.
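The Degree column can be reproduced from the Unsat. column: it is the number of unsatisfiable classes divided by the total number of classes in the integrated ontology. A quick check in Python (the total of roughly 10,180 classes for Task 1 is inferred from the table itself, e.g. 2,130 / 0.209; it is not an official figure):

    # Degree = unsatisfiable classes / total classes in the integrated ontology.
    TOTAL_CLASSES_TASK1 = 10_180  # inferred from Table 1, not an official count

    for system, unsat in [("GOMMA2012", 2_130), ("MaasMatch", 9_988)]:
        print(f"{system}: {unsat / TOTAL_CLASSES_TASK1:.1%}")
    # GOMMA2012: 20.9%, MaasMatch: 98.1% -- matching Table 1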


Task 2: FMA-NCI whole ontologies

System        Time (s)  # Mappings  Precision  Recall  F-measure  Unsat.  Degree
YAM++              366       2,759      0.899   0.846      0.872       9   0.01%
GOMMA2012          243       2,843      0.860   0.834      0.847   5,574    3.8%
LogMap             162       2,667      0.874   0.795      0.832      10   0.01%
LogMap-BK          173       2,668      0.872   0.794      0.831       9   0.01%
AML-BK             201       2,828      0.816   0.787      0.802  16,120   11.1%
AML-BK-R           205       2,761      0.826   0.778      0.801      10   0.01%
Average          1,064       2,711      0.840   0.770      0.799   9,223    6.3%
AML-R              194       2,368      0.892   0.721      0.798       9   0.01%
AML                202       2,432      0.880   0.730      0.798   1,044    0.7%
SPHeRe           8,136       2,610      0.846   0.753      0.797   1,054    0.7%
ServOMap         2,690       3,235      0.727   0.803      0.763  60,218   41.3%
LogMapLt            60       3,472      0.686   0.813      0.744  26,442   18.2%
IAMA               139       1,894      0.901   0.582      0.708     180    0.1%
Table 2: Results for Large BioMed Track Task 2 (FMA-NCI whole ontologies). Unsat. and Degree as in Table 1.
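As a final sanity check, the Average rows appear to be column-wise means of the per-system scores, rather than the F-measure of the average precision and recall (the latter would give about 0.803 for Task 2 instead of the reported 0.799). A short verification against Table 2:

    # Mean of the F-measure column for the 12 systems in Table 2.
    f_task2 = [0.872, 0.847, 0.832, 0.831, 0.802, 0.801,
               0.798, 0.798, 0.797, 0.763, 0.744, 0.708]
    print(round(sum(f_task2) / len(f_task2), 3))  # -> 0.799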