AnyBURL

This is the home of the rule learner AnyBURL (Anytime Bottom Up Rule Learning). AnyBURL has been designed for the use case of knowledge base completion; however, it can also be applied to any other use case where rules are helpful. You can use it to (i) learn rules, (ii) apply them to create candidate rankings / make predictions, (iii) evaluate a ranking, and (iv) explain why a certain candidate makes sense.

AnyBURL has meanwhile been under development for several years. Via this page you get access to, and find information about, the current version, called AnyBURL-23-1. Older versions can be found below in the Previous and Special Versions section.

If you want to use rules learned by AnyBURL for making predictions or for explaining them, we recommend the use of PyClause. PyClause is a rule-application framework that offers rich functionality for working with rules and also allows you to integrate them into machine learning pipelines. You can also use an AnyBURL wrapper and learn rules directly from within PyClause.

Alternative approaches to knowledge base completion, which currently dominate the research field in number of publications, embed a given graph into a low-dimensional vector space. If you want to compare AnyBURL to these approaches, we recommend the use of LibKGE.

Results

These are the results of the newest AnyBURL version, called AnyBURL-23-1, in comparison to previous AnyBURL versions. It corresponds to the version used in the VLDB journal paper, which is the most recent and comprehensive description of AnyBURL. The time used for learning the rules was restricted to 1000 seconds (~17 minutes) in most of the runs. An exception are the learning times for Wikidata5M and Freebase, where we learned rules for 10000 seconds (~3 hours). The IJCAI results have been computed on a laptop; the other results have been computed on different compute servers. The largest dataset, Freebase, requires around 900 GB RAM. Datasets up to the size of YAGO03-10 require less than 16 GB RAM and can be run on a standard laptop.

Dataset     | IJCAI-19        | AnyBURL-RE      | AnyBURL-JUNO    | AnyBURL-22             | AnyBURL-22 (large-scale) | AnyBURL-23-1
Metric      | hits@1  hits@10 | hits@1  hits@10 | hits@1  hits@10 | hits@1  hits@10  MRR   | hits@1  hits@10  MRR     | hits@1  hits@10  MRR
WN18        | 93.9    95.6    | 94.8    96.2    | 94.8    96.1    | 94.6    96.1     95.2  | 94.7    96.0     95.2    | 94.8    96.2     95.3
WN18RR      | 44.6    55.5    | 45.7    57.7    | 45.7    57.6    | 45.2    57.4     49.3  | 44.7    55.4     48.2    | 46.0    57.9     49.9
FB15k       | 80.4    89.0    | 81.4    89.4    | 80.9    89.4    | 82.0    89.7     84.6  | 82.3    89.4     84.8    | 81.0    89.7     84.0
FB237       | 23.0    47.9    | 27.3    52.2    | 24.5    50.6    | 24.0    50.1     32.6  | 24.1    49.8     32.6    | 24.7    50.5     33.2
YAGO03-10   | 42.9    63.9    | 49.2    68.9    | 49.8    69.2    | 49.3    68.5     55.9  | 48.6    67.9     55.2    | 49.7    68.7     56.4
CODEX-L     | -       -       | -       -       | 25.6    42.6    | 25.6    43.0     31.5  | 25.4    42.6     31.3    | 25.6    42.8     31.4
Wikidata5M  | -       -       | -       -       | -       -       | -       -        -     | 30.8    42.2     34.8    | 31.2    43.3     35.3
Freebase    | -       -       | -       -       | -       -       | -       -        -     | 70.5    72.5     71.3    | 70.2    72.8     71.1

IMPORTANT NOTE: All results have been created with the default parameter setting. Exceptions are the results for WN18 and WN18RR. For these datasets it is possible to increase the maximal length of cyclic rules from three (the default value) to five, which gives a small additional improvement. To do this, add the line MAX_LENGTH_CYCLIC = 5 to the configuration file that describes the input and output of rule learning. For Freebase we set THRESHOLD_CORRECT_PREDICTIONS = 5; the default value for this parameter is 2.

SPECIAL REMARK: In AnyBURL-RE we implemented a specific technique to learn from the validation set what can be expected to work well for the test set. Sounds okay, ... well, decide on your own. It is described here in Section 4.4. Its impact is restricted to the FB237 dataset (the AnyBURL-RE numbers for FB237 in the table above). This technique has been deactivated in all other releases. It is still an open issue whether it is fair to apply this technique.

SPECIAL REMARK II: Partly unintended, the current AnyBURL-23-1 release uses specific weights for the confidence of certain rule types. Rules that we call zero rules, which reflect only the distribution of entities, are weighted with 0.01. Rules that we call U_d rules (constant in the head, existentially quantified variable in the body) are weighted with 0.1. Both rule types predict candidates that can typically be expected in that argument position of the relation.

Download

AnyBURL

AnyBURL is packaged as a jar file and requires no external resources. You can download the jar file here.

If you have problems running the jar due to, e.g., a Java version conflict, you can build an AnyBURL.jar on your own. If you want (or need) to do this, continue as follows; otherwise skip the following lines. Download the source code and unzip it. First create a folder build, then compile with the following command.

javac de/unima/ki/anyburl/*.java -d build

Package in a jar:

jar cfv AnyBURL-23-1.jar -C build .

There is a dot . at the end of the line; it is required. Afterwards you can delete the build folder.
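In summary, the whole build consists of three commands, executed from the root folder of the unzipped source code:

# create the build folder
mkdir build
# compile the sources into build
javac de/unima/ki/anyburl/*.java -d build
# package the class files into the jar (note the trailing dot)
jar cfv AnyBURL-23-1.jar -C build .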

Datasets

You can use AnyBURL on any dataset that comes as a set of triples. The supported format is rather simple and should look like this. The separator can be a blank or a tab.

anne loves bob
bob marriedTo charly
bob hasGender male
...

Take care! AnyBURL expects that each identifier that appears in the file shown above starts with an alphabetic character and consists of at least 2 letters. If this is not the case, you have to set the parameter SAFE_PREFIX_MODE = true in each configuration file (learn, apply, eval, explain). This puts a letter in front of each entity and relation name when reading the input files.

We have zipped FB15k, FB237, and WN18 into one file. Please download and unzip. YAGO03-10 and WN18RR are available at the ConvE webpage. The datasets Freebase and Wikidata5M have been made available by the authors of the paper Kochsiek, Gemulla: Parallel Training of Knowledge Graph Embedding Models: A Comparison of Techniques. Proceedings of the VLDB Endowment, 2022. Download Freebase here and Wikidata5M here.

Run AnyBURL

AnyBURL can be used (i) to learn rules and (ii) to apply the learned rules to solve prediction tasks. These predictions can (iii) be evaluated against a gold standard, and you can use AnyBURL (iv) to explain its own or other predictions. Note that (i) and (ii) are two distinct processes that have to be started independently.

Learning

Download and open the file config-learn.properties and modify the line that points to the training file, choosing the dataset that you want to apply AnyBURL to. Run AnyBURL with the following command. In earlier versions the class name was LearnReinforced instead of Learn; check that you use the correct class name in case of error messages.

java -Xmx12G -cp AnyBURL-23-1.jar de.unima.ki.anyburl.Learn config-learn.properties

The parameter -Xmx12G specifies how much memory is available to Java; here we specified 12 gigabytes. It might be required to increase this for larger datasets. This command creates three files rules-10, rules-50, and rules-100, which contain the rules learned after 10, 50, and 100 seconds. On most datasets (including large datasets) learning for 1000 seconds will yield good results; for very large datasets you should choose a higher value. Please also change the parameter that sets the number of threads used for the rule mining process. See also the sketch below.
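For orientation, a minimal config-learn.properties could look like the following sketch. The parameter names follow the example file shipped with AnyBURL; the paths are placeholders that you have to adapt, and in case of doubt the comments in the distributed file are authoritative:

# location of the training triples
PATH_TRAINING  = data/WN18RR/train.txt
# prefix of the rule files; the snapshot time is appended (rules-10, rules-50, ...)
PATH_OUTPUT    = out/rules
# points in time (in seconds) at which rule snapshots are written
SNAPSHOTS_AT   = 10,50,100
# number of threads used for rule mining
WORKER_THREADS = 4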

Parameters relevant for Learning

You can change the following parameters to modify the standard learning behaviour of AnyBURL. Any changes have to be made by writing a line (or changing a line) into the config-learn.properties file.

In some scenarios there is only a specific target relation; suppose it is called relation17. In these scenarios you should add the line SINGLE_RELATIONS = relation17 to the config-learn.properties. If there are several target relations, you can list them separated by commas. Do not use blanks in between.
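For example, with relation17 and a second (hypothetical) target relation relation23, the line would read:

SINGLE_RELATIONS = relation17,relation23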

As noted in the Datasets section, set SAFE_PREFIX_MODE = true whenever an identifier in your input files does not start with an alphabetic character or consists of fewer than 2 letters. This puts a letter in front of each entity and relation name when reading the input files; the same line has to be added to each configuration file (learn, apply, eval, explain).

Predicting

Download and open the file config-apply.properties and modify it according to your needs (if required). Create the output folder predictions, then run AnyBURL with this command. Note that you have to specify the rules that have been learned previously.

java -Xmx12G -cp AnyBURL-23-1.jar de.unima.ki.anyburl.Apply config-apply.properties

This will create one file preds-100 that contains the top-10 rankings for the completion tasks. These rankings are already filtered (this is the only reason why the validation and test sets must be specified in the apply config file).

Prediction Parameters

You can change the following parameters to modify the standard prediction (= rule application) behaviour of AnyBURL.
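As an illustration, a config-apply.properties for the workflow above might look like this sketch. Again, the parameter names follow the example file shipped with AnyBURL and the paths are placeholders:

# the rule file learned in the previous step (here: the 100 seconds snapshot)
PATH_RULES    = out/rules-100
# training, validation and test sets; all three are needed to produce filtered rankings
PATH_TRAINING = data/WN18RR/train.txt
PATH_VALID    = data/WN18RR/valid.txt
PATH_TEST     = data/WN18RR/test.txt
# the file the rankings are written to
PATH_OUTPUT   = predictions/preds-100
# length of the candidate ranking per completion task
TOP_K_OUTPUT  = 10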

Evaluating Results

To evaluate these results, use this command after modifying config-eval.properties (if required). The evaluation result is printed to standard out.

java -Xmx12G -cp AnyBURL-23-1.jar de.unima.ki.anyburl.Eval config-eval.properties

If you follow the whole workflow using the referenced config files, the evaluation program should print results similar to the following output (this example is based on the config-learn/apply examples that use the WN18RR dataset):

0.4497 0.5010 0.5601 0.4859

The first three columns are the hits@1, hits@3, and hits@10 scores. The last column is the MRR (approximated, as it is based on the top-k rankings only).

The evaluation command line interface is only intended for demonstration purposes; it is not meant to be used in a large-scale experimental setting. We might have modified it a bit in the meantime, so the output might look slightly different.

Default Setting vs. Large Scale Setting

If you apply AnyBURL to very large datasets (full Freebase or larger), you might want to change the threshold for the support of the rules that are stored. If you want each stored rule to make at least 5 correct predictions, write this in the learning configuration.

THRESHOLD_CORRECT_PREDICTIONS = 5

There are a few other things that you should keep in mind when running AnyBURL on large datasets. Reserve a sufficient amount of memory; set, for example, -Xmx750G to allow Java to use 750 GB RAM. You should also not forget to use a high number of worker threads. If large datasets have already been preprocessed to run KGE or similar models on them, the ids in the input files might be numbers; we explained above that you have to set SAFE_PREFIX_MODE = true in such a setting.
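Putting these hints together, a large-scale learning run might use configuration lines like the following (the thread count is an illustrative value, not a recommendation) and be started with an increased memory limit:

# store only rules that make at least 5 correct predictions
THRESHOLD_CORRECT_PREDICTIONS = 5
# required if the entity and relation ids are plain numbers
SAFE_PREFIX_MODE = true
# use a high number of worker threads on a large compute server
WORKER_THREADS = 30

java -Xmx750G -cp AnyBURL-23-1.jar de.unima.ki.anyburl.Learn config-learn.properties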

Extensions

Explaining Predictions

We applied AnyBURL also to explain predictions. These predictions might have been made by AnyBURL or by any other knowledge graph completion technique. The input required is a set of triples that you want to explain, listed in a file called target.txt. Suppose for example that a model predicted 01062739 as tail of the query 00789448 _verb_group ? (example taken from WN18RR). Then you add 00789448 _verb_group 01062739 to your target.txt file.
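A target.txt file thus simply lists one triple per line, in the same whitespace-separated format as the training data. For the example above it would contain the line:

00789448 _verb_group 01062739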

The folder of the target.txt file is the first argument; the explanations and temporary results (e.g., the rule file) are stored in that folder. The folder of the dataset is the second argument. It is assumed that the relevant files in that folder are train.txt, valid.txt, and test.txt. You call the explanation code as follows:

java -Xmx3G -cp AnyBURL-22.jar de.unima.ki.anyburl.Explain explanations/ data/WN18RR/

Unfortunately this explanation support (the class Explain used above) is only available in the 22 version. Please use that version if you want to use this functionality.

Several output files are generated. For details we refer to our IJCAI 2022 publication listed below. If you are mainly interested in an explanation, look at the file delete-verbose.txt. With respect to the example mentioned above, you will find an entry like this:

00789448 _verb_group 01062739 01062739 _verb_group 00789448 572 533 0.9318181818181818 _verb_group(X,Y) <= _verb_group(Y,X)

This means: the strongest reason for deriving the triple at the beginning is the second triple (which can be found in the training set) together with the rule listed at the end. The two integers appear to be the rule's prediction counts; note that 533/572 ≈ 0.9318, the confidence value printed next to them. Here we can observe that the symmetry of _verb_group was (probably) the reason for the prediction. Again, more details can be found in our publication.

The code is currently restricted to cyclic rules of length 2 and acyclic rules of length 1. If no rule can be found within that language bias, our approach puts a null at the end of the line. This setting covered most of the test cases of the datasets we used in our experiments (we used the same datasets that have been used in another publication to compare against). Recently we noticed that this restriction does, unfortunately, not work well for predictions for which there are no clear signals in the dataset.

If you are interested in the complete code to reproduce the results of the IJCAI 2022 Adversarial Explanations paper, please use the code in this bundle.

Explaining AnyBURL Predictions

If you want to use AnyBURL's explanation component for the predictions made by AnyBURL, and you are only interested in the most important rules that caused a prediction, you can simply add the following line to the config-apply script (change to the desired folder/filename).

PATH_EXPLANATION = folder/explanation-out.txt

Take care: As this explains every prediction in the top-k ranking for the file specified via PATH_TEST, the resulting output file will be very large. Moreover, its format might look a bit odd, as it lists the predictions and relevant rules in a tree structure. If you have problems understanding this output, write a mail and attach a part of such an explanation file. We will explain how to read this output.

SAFRAN

SAFRAN (Scalable and fast non-redundant rule application) is a framework for fast inference of groundings and aggregation of logical rules on large heterogeneous knowledge graphs. It requires a rule set learned by AnyBURL as input, which is used to make predictions for the standard KBC task. This means that it can be used as an alternative to the rule application method that is built into AnyBURL. In most cases it is significantly faster and slightly better in terms of hits@k and MRR.

Publications

The two main publications are shown in bold letters. Please cite these papers (or one of these papers) if you are not using AnyBURL for a specific purpose that might be better reflected in one of the other papers.

Previous and Special Versions

19.06.2023: The VLDB paper has been published. We added this publication to the list of AnyBURL publications.

14.02.2023: The last bug fix resulted in another horrible bug (reported by Adrian Kochsiek), which caused an infinite loop writing the same completion tasks into the ranking files over and over again. We fixed this flaw. Again, we did not change the version number as no new functionality was added.

26.01.2023: Based on a hint from Simon Ott we fixed a flaw in the current version. The code we released still had a reference to an unused package, which resulted in a build failure. We removed these references and replaced the faulty 23-1 version with the fixed version.

06.01.2023: Fixed some bugs and added some minor improvements to the 22 version; the new version is called 23-1.

22.09.2022: We became aware that the current AnyBURL version uses a rather specific weighting scheme that we implemented to try something out and forgot to remove from the final version of AnyBURL-22. We added Special Remark II at the top of the page explaining that setting.

21.07.2022: Published new version called AnyBURL-22. Added additional information to parameter settings and explained how to compute explanations. Added the ESWC and IJCAI paper to the list of related publications. Removed the paragraph about the light weight setting. Please use the large-scale setting instead of that.

22.11.2021: Added a paragraph about running AnyBURL in a light mode to the webpage. Further experiments with this setting are planned. (UPDATE on 23.11.2021: while adding more results we detected an inconsistency in the evaluation code that will be resolved within the next days.)

14.09.2021: Added the AKBC paper to the list of related publications.

08.07.2021: Fixed some minor issues in the descriptions and updated the webpage with a hint on the parameter SINGLE_RELATIONS.

10.06.2021: Updated webpage with new version of AnyBURL called AnyBURL-JUNO. Only minor modifications compared to the RE version.

02.02.2021: Added license information at the end of this page and extended the remark below the results table with the WN18/WN18RR specific setting.

06.11.2020: Updated webpage with a hint on how to achieve the results shown in the results table. See the paragraph below the table. Thanks to Simon Ott for pointing out that slightly worse results are achieved when run with the default setting.

23.03.2020: Updated webpage with new version of AnyBURL using Reinforcement Learning (RE version).

12.06.2019: Some minor issues in the sources of the 2019-05 version have been fixed (thanks to the feedback of Andrea Rossi). Now there should be no build problems related to the encoding of German umlauts in some comments (most of the comments are in English) or to the reference to an outdated (and unused) package.

Contact

If you have any questions, feel free to write a mail at any time. We are also interested in hearing about applications where AnyBURL has been useful.

License

AnyBURL is available under the 3-clause BSD license, sometimes referred to as the modified BSD license:

Copyright (c) University Mannheim, Data and Web Science Group

Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:

1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.

2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.

3. Neither the name of the copyright holder nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission.

THIS SOFTWARE IS PROVIDED BY THE REGENTS AND CONTRIBUTORS “AS IS” AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE REGENTS OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

Colophon

Wikipedia: " A burl [...] is a tree growth in which the grain has grown in a deformed manner. It is commonly found in the form of a rounded outgrowth on a tree trunk or branch that is filled with small knots from dormant buds." If you cut it you get what is shown in the background. The small knots, which are also called burls, can be associated with constants and the regularities that are associated with the forms and structures that surround them.