Ontology Alignment Evaluation Initiative - OAEI-2014 Campaign

Results OAEI 2014::Large BioMed Track

Results of the FMA-NCI matching problem

The following tables summarize the results for the tasks in the FMA-NCI matching problem.

LogMap-Bio and AML provided the best results in terms of recall and F-measure, respectively, in both Task 1 and Task 2. OMReasoner provided the best precision, although its recall was below average. Among last year's participants, XMap and MaasMatch considerably improved their performance with respect to both runtime and F-measure, while AML and LogMap again obtained very good results. LogMap-Bio improves LogMap's recall in both tasks, although at the cost of precision, especially in Task 2.
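For reference, the F-measure reported here is the harmonic mean of precision and recall. A minimal sketch (the function name is ours) that reproduces, for example, AML's Task 1 score from its precision and recall in Table 1:

    def f_measure(precision: float, recall: float) -> float:
        """Harmonic mean of precision and recall."""
        if precision + recall == 0:
            return 0.0
        return 2 * precision * recall / (precision + recall)

    # AML, Task 1 (Table 1): precision 0.960, recall 0.899
    print(round(f_measure(0.960, 0.899), 3))  # -> 0.928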

Note that the effectiveness of the systems in Task 2 decreased with respect to Task 1. This is mostly because the larger ontologies involve many more candidate mappings, which makes it harder to keep precision high without damaging recall, and vice versa. Furthermore, AOT, AOTL, RSDLWB and MaasMatch could not complete Task 2: the first three did not finish within 10 hours, while MaasMatch raised an "out of memory" exception.


Task 1: FMA-NCI small fragments

System  Time (s)  # Mappings  Precision  Recall  F-measure  Unsat.  Degree
AML 27 2,690 0.960 0.899 0.928 2 0.02%
LogMap 14 2,738 0.946 0.897 0.921 2 0.02%
LogMap-Bio 975 2,892 0.914 0.918 0.916 467 4.5%
XMap 17 2,657 0.932 0.848 0.888 3,905 38.0%
LogMapLite 5 2,479 0.967 0.819 0.887 2,103 20.5%
LogMap-C 81 2,153 0.962 0.724 0.826 2 0.02%
MaasMatch 1,460 2,981 0.808 0.840 0.824 8,767 85.3%
Average 3,193 2,287 0.910 0.704 0.757 2,277 22.2%
AOT 9,341 3,696 0.662 0.855 0.746 8,373 81.4%
OMReasoner 82 1,362 0.995 0.466 0.635 56 0.5%
RSDLWB 2,216 728 0.962 0.236 0.380 22 0.2%
AOTL 20,908 790 0.902 0.237 0.375 1,356 13.2%
Table 1: Results for largebio Task 1 (FMA-NCI small fragments). Unsat. and Degree give the number of unsatisfiable classes and the corresponding degree of incoherence.
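The Degree column appears to express the unsatisfiable classes as a percentage of all classes in the integrated ontology (the two input ontologies plus the computed mappings); the class totals themselves are not listed in these tables. A hedged sketch of that computation:

    def incoherence_degree(unsat_classes: int, total_classes: int) -> str:
        """Unsatisfiable classes as a percentage of the classes in the
        integrated ontology (both input ontologies plus the mappings)."""
        return f"{100.0 * unsat_classes / total_classes:.2f}%"

    # Hypothetical usage -- total_classes is not reported in the tables above:
    # incoherence_degree(2, total_classes)   # AML, Task 1: "0.02%"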


Task 2: FMA-NCI whole ontologies

System  Time (s)  # Mappings  Precision  Recall  F-measure  Unsat.  Degree
AML 112 2,931 0.832 0.856 0.844 10 0.007%
LogMap 106 2,678 0.863 0.808 0.834 13 0.009%
LogMap-Bio 1,226 3,412 0.724 0.874 0.792 40 0.027%
XMap 144 2,571 0.835 0.745 0.787 9,218 6.3%
Average 5,470 2,655 0.824 0.746 0.768 5,122 3.5%
LogMap-C 289 2,124 0.877 0.650 0.747 9 0.006%
LogMapLite 44 3,467 0.675 0.819 0.740 26,441 18.1%
OMReasoner 36,369 1,403 0.964 0.466 0.628 123 0.084%
Table 2: Results for largebio Task 2 (FMA-NCI whole ontologies); columns as in Table 1.