Process Model Matching - Results @ OAEI 2017

The content of this page is not yet final and might be extended by additional evaluation results. In particular, we will integrate a probabilistic evaluation. The set of evaluated systems might also be extended (including systems from the process matching community). However, to stick to the OAEI time schedule, we already present preliminary results now.

This page reports the results of the 2017 edition of the Process Model Matching track, which was already part of the OAEI in 2016. Compared to the 2016 edition of this track, we have added another dataset by converting it to an ontology representation: the Birth Registration dataset of the Process Model Matching Contest 2015. Thus, we base our evaluation on two datasets.

Participants

Only three systems generated non-empty results when run against our datasets: AML, LogMap, and I-Match. Note that we tried to execute all systems marked as instance matching systems; however, the other systems threw exceptions or produced empty alignments. We have collected all generated non-empty alignments. They are available as a zip file via the following link. These alignments are the raw results on which the following report is based.

>>> Download-Link

Evaluation Measures

In our evaluation, we computed precision and recall, as well as their harmonic mean, the F-measure. The datasets we used consist of several test cases. We aggregated the results and present in the following the micro-average results for each of the two datasets. The gold standards we used for our evaluation are based on the gold standards of the University Admission and Birth Registration evaluation experiments reported at the Process Model Matching Contest 2015 [1]. We corrected only some minor mistakes. In order to compare the results to those obtained by the process model matching community, we also present the recomputed values of the submissions to the 2015 contest. We also added the results of the OAEI 2016 edition to the results tables.
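
For clarity, the following sketch illustrates how micro-average precision, recall, and F-measure over a set of test cases can be computed. It is only an illustration of the measures, not the evaluation code used for the track, and the data structures (each test case given as a pair of correspondence sets) are assumptions.

    # Minimal sketch of the micro-average computation (illustration only).
    # Each test case is assumed to be a pair of sets of correspondences:
    # the system alignment and the gold standard.
    def micro_scores(test_cases):
        tp = fp = fn = 0
        for system, gold in test_cases:
            tp += len(system & gold)   # correspondences found and correct
            fp += len(system - gold)   # correspondences found but not in the gold standard
            fn += len(gold - system)   # gold-standard correspondences that were missed
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f_measure = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
        return precision, recall, f_measure

    # Example with two small, made-up test cases:
    cases = [({("a1", "b1"), ("a2", "b2")}, {("a1", "b1"), ("a3", "b3")}),
             ({("c1", "d1")}, {("c1", "d1")})]
    print(micro_scores(cases))  # (0.666..., 0.666..., 0.666...)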

Results

The following tables show the results of our evaluation. Participants of the Process Model Matching Contest and the OAEI 2016 edition are shown in grey font, while this year's OAEI participants are shown in black font. Note that some systems participated with a version whose results did not change between the OAEI 2016 and 2017 submissions. For these systems we added only one entry, labelled OAEI-16/17. This concerns only the first dataset, which we already used in 2016.

University Admission Dataset

The OAEI participants are ranked at positions 1, 11, and 12 among the 17 systems listed in the table. The best results were achieved by AML. These are the same results AML already achieved in 2016, so they come as no big surprise. LogMap also achieves the same results as in 2016. I-Match participates in 2017 for the first time. Compared to the tools specialized in process model matching, the results of I-Match are still very good: five systems that have been designed specifically for matching process models achieve worse results.

Results for the University Admission Dataset
Name             OAEI/PMMC    Size   Precision   Recall   F-Measure
AML              OAEI-16/17    221       0.719    0.685       0.702
RMM-NHCM         PMMC-15       220       0.691    0.655       0.673
RMM-NLM          PMMC-15       164       0.768    0.543       0.636
Match-SSS        PMMC-15       140       0.807    0.487       0.608
OPBOT            PMMC-15       234       0.603    0.608       0.605
Know-Match-SSS   PMMC-15       261       0.513    0.578       0.544
RMM-SMSL         PMMC-15       262       0.511    0.578       0.543
DKP              OAEI-16       177       0.621    0.474       0.538
DKPx             OAEI-16       150       0.680    0.440       0.534
TripleS          PMMC-15       230       0.487    0.483       0.485
LogMap           OAEI-16/17    267       0.449    0.517       0.481
I-Match          OAEI-17       192       0.521    0.431       0.472
BPLangMatch      PMMC-15       277       0.368    0.440       0.401
KnoMa-Proc       PMMC-15       326       0.337    0.474       0.394
AML-PM           PMMC-15       579       0.269    0.672       0.385
RMM-VM2          PMMC-15       505       0.216    0.470       0.296
pPalm-DS         PMMC-15       828       0.162    0.578       0.253

The probabilistic evaluation of the results is not yet available. We will add it within the next few weeks.

Birth Registration Dataset

The results for the Birth Registration dataset are more interesting, because we use this dataset in 2017 for the first time. Moreover, the dataset contains a higher number of correspondences that are hard to find by comparing the labels on a lexical level. This usually results in a significantly lower F-measure compared to the University Admission dataset.

The results show that AML is no longer the best of all matching systems. Four systems from the process matching community achieve better results in terms of F-measure. This dataset is dominated by the OPBOT system, while AML is among a group of runner-up systems that still perform significantly better than the rest of the field. The other two OAEI systems, LogMap and I-Match, achieve results close to each other that are slightly below the average. It is interesting to see that the ranking among the three OAEI systems is the same across the two datasets.

Results for the Birth Registration Dataset
Name             OAEI/PMMC   Size   Precision   Recall   F-Measure
OPBOT            PMMC-15      383       0.713    0.468       0.565
pPalm-DS         PMMC-15      490       0.502    0.422       0.459
RMM-NHCM         PMMC-15      267       0.727    0.333       0.456
RMM-VM2          PMMC-15      492       0.474    0.400       0.433
AML              OAEI-17      502       0.454    0.391       0.420
BPLangMatch      PMMC-15      279       0.645    0.309       0.418
AML-PM           PMMC-15      503       0.423    0.365       0.392
Know-Match-SSS   PMMC-15      185       0.800    0.254       0.385
RMM-SMSL         PMMC-15      354       0.508    0.309       0.384
TripleS          PMMC-15      266       0.613    0.280       0.384
LogMap           OAEI-17      239       0.615    0.252       0.358
I-Match          OAEI-17      188       0.734    0.237       0.358
Match-SSS        PMMC-15      128       0.922    0.202       0.332
RMM-NLM          PMMC-15      128       0.859    0.189       0.309
KnoMa-Proc       PMMC-15      740       0.234    0.297       0.262

The probabilistic evaluation of the results is not yet available. We will add it within the next few weeks.

Conclusion

In 2016 we organized the Process Model Matching track for the first time. Our evaluation effort was motivated by the idea that ontology matching methods and techniques can also be used in the related field of process model matching. For that reason, we converted one (and in 2017 two) of the most prominent process model matching test datasets into an ontological representation. The resulting matching problems are instance matching tasks.
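
To give an impression of what such a representation can look like, the following Python sketch (using rdflib, which we assume to be available) represents a single process activity as an ontology instance carrying its label, which is the kind of information a lexical instance matcher can compare. The namespace, class, and property names are purely illustrative and do not reflect the actual schema used for the track datasets.

    # Illustrative sketch only: one process activity as an ontology instance
    # with a label. The vocabulary (pm:Activity, the namespace URI) is
    # hypothetical, not the schema actually used for the track datasets.
    from rdflib import Graph, Literal, Namespace, RDF, RDFS

    PM = Namespace("http://example.org/process#")  # hypothetical namespace

    g = Graph()
    g.bind("pm", PM)

    activity = PM["check_application_documents"]
    g.add((activity, RDF.type, PM.Activity))        # hypothetical class
    g.add((activity, RDFS.label, Literal("Check application documents")))

    print(g.serialize(format="turtle"))  # rdflib 6+ returns a string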

While we were aware that an instance matching system cannot exploit the sequential aspects of the given process models out of the box, we expected lexical components to generate results that are already on an acceptable level. Even though one of the systems (AML) generated very good results, overall only a few of the systems participating in the OAEI were capable of generating any results for our test cases. We still do not fully understand the reasons for this outcome. The participation rate indicates that only a limited number of participants are interested in process model matching. For that reason, we will not offer a third edition of this track in 2018.

Contact

If you have any questions or remarks, feel free to contact us.

References

[1] Goncalo Antunes, Marzieh Bakhshandeh, Jose Borbinha, Joao Cardoso, Sharam Dadashnia, Chiara Di Francescomarino, Mauro Dragoni, Peter Fettke, Avigdor Gal, Chiara Ghidini, Philip Hake, Abderrahmane Khiat, Christopher Klinkmüller, Elena Kuss, Henrik Leopold, Peter Loos, Christian Meilicke, Tim Niesen, Catia Pesquita, Timo Péus, Andreas Schoknecht, Eitam Sheetrit, Andreas Sonntag, Heiner Stuckenschmidt, Tom Thaler, Ingo Weber, Matthias Weidlich: The Process Model Matching Contest 2015. In: 6th International Workshop on Enterprise Modelling and Information Systems Architectures (EMISA 2015), September 3-4, 2015, Innsbruck, Austria.

[2] Elena Kuss, Henrik Leopold, Han van der Aa, Heiner Stuckenschmidt, Hajo A. Reijers: Probabilistic Evaluation of Process Model Matching Techniques. In: ER 2016, November 14-17, 2016, Gifu, Japan.