The content of this page is not yet final and might be extended with additional evaluation results. In particular, we will integrate a probabilistic evaluation. The set of evaluated systems might also be extended (including systems from the Process Matching community). However, to keep to the OAEI time schedule, we present preliminary results now.
This page reports the results of the 2017 edition of the Process Model Matching track, which was already part of the OAEI in 2016. Compared to the 2016 edition of this track, we have added another dataset by converting it to an ontology representation: the birth registration dataset of the Process Model Matching Contest 2015. Thus, we base our evaluation on two datasets.
Only three systems generated non-empty results when run against our datasets: AML, LogMap, and I-Match. Note that we tried to execute all systems marked as instance matching systems; however, the other systems threw exceptions or produced empty alignments. We have collected all generated non-empty alignments; they are available as a zip file via the following link. These alignments are the raw results on which the following report is based.
In our evaluation, we computed precision and recall, as well as their harmonic mean, the F-measure. The datasets we used consist of several test cases. We aggregated the results and present below the micro-average results for each of the two datasets. The gold standards used in our evaluation are based on those used for the University Admission and Birth Registration evaluation experiments reported at the Process Model Matching Contest 2015 [1]; we only corrected some minor mistakes. To compare our results with those obtained by the process model matching community, we also present the recomputed values of the submissions to the 2015 contest. We also added the results of the OAEI 2016 edition to the result tables.
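The micro-average aggregation described above can be sketched as follows: true positives, false positives, and false negatives are summed over all test cases of a dataset before precision, recall, and F-measure are derived. This is an illustrative sketch with made-up correspondence sets, not the actual evaluation tooling.

```python
def micro_average(test_cases):
    """Micro-averaged precision, recall, and F-measure.

    test_cases: list of (system_alignment, gold_standard) pairs,
    each a set of correspondences.
    """
    tp = fp = fn = 0
    for system, gold in test_cases:
        tp += len(system & gold)   # correct correspondences found
        fp += len(system - gold)   # spurious correspondences
        fn += len(gold - system)   # missed correspondences
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f_measure = 2 * precision * recall / (precision + recall)
    return precision, recall, f_measure

# Toy example with two test cases (hypothetical correspondences):
cases = [
    ({("a1", "b1"), ("a2", "b2")}, {("a1", "b1"), ("a3", "b3")}),
    ({("c1", "d1")}, {("c1", "d1"), ("c2", "d2")}),
]
p, r, f = micro_average(cases)
# totals: tp=2, fp=1, fn=2  →  p=2/3, r=1/2, f=4/7
```

In contrast to a macro average, which would compute the scores per test case and then average them, the micro average weights each correspondence equally, so larger test cases influence the result more.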
The following tables show the results of our evaluation. Participants of the Process Model Matching Contest and the OAEI 2016 edition are depicted in grey font, while this year's OAEI participants are shown in black font. Note that some systems participated with a version whose results are unchanged between the OAEI 2016 and 2017 submissions; for these, we added a single entry labeled OAEI-16/17. This applies only to the first dataset, which we already used in 2016.
The OAEI participants are ranked at positions 1, 11, and 12 among the 17 systems listed in the table. The best results were achieved by AML; these are the same results it already achieved in 2016, so they come as no big surprise. LogMap also achieves the same results as in 2016. I-Match participates in 2017 for the first time. Compared to the results of the tools specialized in process model matching, the results of I-Match are still very good: five systems specifically designed for matching process models achieve worse results.
Name | OAEI/PMMC | Size | P | R | FM |
---|---|---|---|---|---|
AML | OAEI-16/17 | 221 | 0.719 | 0.685 | 0.702 |
RMM-NHCM | PMMC-15 | 220 | 0.691 | 0.655 | 0.673 |
RMM-NLM | PMMC-15 | 164 | 0.768 | 0.543 | 0.636 |
Match-SSS | PMMC-15 | 140 | 0.807 | 0.487 | 0.608 |
OPBOT | PMMC-15 | 234 | 0.603 | 0.608 | 0.605 |
Know-Match-SSS | PMMC-15 | 261 | 0.513 | 0.578 | 0.544 |
RMM-SMSL | PMMC-15 | 262 | 0.511 | 0.578 | 0.543 |
DKP | OAEI-16 | 177 | 0.621 | 0.474 | 0.538 |
DKPx | OAEI-16 | 150 | 0.680 | 0.440 | 0.534 |
TripleS | PMMC-15 | 230 | 0.487 | 0.483 | 0.485 |
LogMap | OAEI-16/17 | 267 | 0.449 | 0.517 | 0.481 |
I-Match | OAEI-17 | 192 | 0.521 | 0.431 | 0.472 |
BPLangMatch | PMMC-15 | 277 | 0.368 | 0.440 | 0.401 |
KnoMa-Proc | PMMC-15 | 326 | 0.337 | 0.474 | 0.394 |
AML-PM | PMMC-15 | 579 | 0.269 | 0.672 | 0.385 |
RMM-VM2 | PMMC-15 | 505 | 0.216 | 0.470 | 0.296 |
pPalm-DS | PMMC-15 | 828 | 0.162 | 0.578 | 0.253 |
The probabilistic evaluation of the results is not yet available; we will add it within the next weeks.
The results for the Birth Registration dataset are more interesting, because we use this dataset in 2017 for the first time. Moreover, the dataset contains a larger number of correspondences that are hard to find by comparing the labels on a lexical level. This usually results in a significantly lower F-measure compared to the University Admission dataset.
The results show that AML is no longer the best matching system overall: four systems from the process matching community achieve better results in terms of F-measure. This dataset is dominated by the OPBOT system, while AML is among a group of runner-up systems that still perform significantly better than the rest of the field. The other two OAEI systems, LogMap and I-Match, achieve similar results, slightly below the field average. It is interesting to see that the ranking among the three OAEI systems is the same across the two datasets.
Name | OAEI/PMMC | Size | P | R | FM |
---|---|---|---|---|---|
OPBOT | PMMC-15 | 383 | 0.713 | 0.468 | 0.565 |
pPalm-DS | PMMC-15 | 490 | 0.502 | 0.422 | 0.459 |
RMM-NHCM | PMMC-15 | 267 | 0.727 | 0.333 | 0.456 |
RMM-VM2 | PMMC-15 | 492 | 0.474 | 0.400 | 0.433 |
AML | OAEI-17 | 502 | 0.454 | 0.391 | 0.420 |
BPLangMatch | PMMC-15 | 279 | 0.645 | 0.309 | 0.418 |
AML-PM | PMMC-15 | 503 | 0.423 | 0.365 | 0.392 |
Know-Match-SSS | PMMC-15 | 185 | 0.800 | 0.254 | 0.385 |
RMM-SMSL | PMMC-15 | 354 | 0.508 | 0.309 | 0.384 |
TripleS | PMMC-15 | 266 | 0.613 | 0.280 | 0.384 |
LogMap | OAEI-17 | 239 | 0.615 | 0.252 | 0.358 |
I-Match | OAEI-17 | 188 | 0.734 | 0.237 | 0.358 |
Match-SSS | PMMC-15 | 128 | 0.922 | 0.202 | 0.332 |
RMM-NLM | PMMC-15 | 128 | 0.859 | 0.189 | 0.309 |
KnoMa-Proc | PMMC-15 | 740 | 0.234 | 0.297 | 0.262 |
The probabilistic evaluation of the results is not yet available; we will add it within the next weeks.
In 2016 we organized the Process Model Matching track for the first time. Our evaluation effort was motivated by the idea that ontology matching methods and techniques can also be used in the related field of process model matching. For that reason, we converted one (and in 2017 two) of the most prominent process model matching test datasets into an ontological representation. The resulting matching problems are instance matching tasks.
While we were aware that an instance matching system cannot exploit the sequential aspects of the given process models out of the box, we expected lexical components to generate results that are already at an acceptable level. Even though one of the systems (AML) generated very good results, overall only a few of the systems participating in the OAEI were capable of generating any results for our test cases. We still do not fully understand the reasons for this outcome. The participation rate indicates that only a limited number of participants are interested in process model matching. For that reason, we will not offer a third edition of this track in 2018.
If you have any questions or remarks, feel free to contact us.
[1] Goncalo Antunes, Marzieh Bakhshandeh, Jose Borbinha, Joao Cardoso, Sharam Dadashnia, Chiara Di Francescomarino, Mauro Dragoni, Peter Fettke, Avigdor Gal, Chiara Ghidini, Philip Hake, Abderrahmane Khiat, Christopher Klinkmüller, Elena Kuss, Henrik Leopold, Peter Loos, Christian Meilicke, Tim Niesen, Catia Pesquita, Timo Péus, Andreas Schoknecht, Eitam Sheetrit, Andreas Sonntag, Heiner Stuckenschmidt, Tom Thaler, Ingo Weber, Matthias Weidlich: The Process Model Matching Contest 2015. In: 6th International Workshop on Enterprise Modelling and Information Systems Architectures (EMISA 2015), September 3-4, 2015, Innsbruck, Austria.
[2] Elena Kuss, Henrik Leopold, Han van der Aa, Heiner Stuckenschmidt, Hajo A. Reijers: Probabilistic Evaluation of Process Model Matching Techniques. ER2016, Nov. 14-17, 2016, Gifu, Japan.