Activity Recognition Testsuite

This page describes the technical aspects of an evaluation approach that has been developed in the context of [1]. Our aim is to define an easy-to-use format for describing an evaluation test set for human activity recognition tasks. We call a test set described in that format an activity recognition test suite. Furthermore, we have developed a technology that is capable of automatically running an activity recognition tool against such a test suite, which can be stored remotely on a standard web server or locally on the file system. Our approach resembles the one that has successfully been applied in the context of the SEALS (Semantic Evaluation At Large Scale) project [2]. By offering the proposed technology, we hope that more researchers and developers will use the same test sets for their experiments and that the measured results become reproducible and comparable.

In §1 we first explain the interface that needs to be implemented in order to wrap a tool so that it can consume a given test suite. An example of a Java implementation that always assigns the majority class is shown in §2. Note that tools not developed in Java are also supported; in that case, however, you have to write a small Java class that acts as a bridge. In §3 we show how to run a wrapped tool, using our example implementation, against a given test suite. If you are interested in defining your own test suite, we refer to §4. As an example of such a test suite, we have already converted a small subset of the data collection that can be found in [3].

§1 Minimal Interface

In order to process a test suite, your tool has to implement the interface de.unima.ki.ar.benchmark.tool.ActivityRecognitionTool, which is packaged in the jar file arc.jar and defines the following two methods.

public void learnFrom(TrainingExample example);
public String classify(Example example);
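
Putting these together, the complete declaration presumably looks like the following sketch (the package of Example and TrainingExample is inferred from the import statements in §2):

package de.unima.ki.ar.benchmark.tool;

import de.unima.ki.ar.benchmark.testsuite.Example;
import de.unima.ki.ar.benchmark.testsuite.TrainingExample;

public interface ActivityRecognitionTool {

	// called once for every training example; the training phase
	// is completed before the first example is classified
	public void learnFrom(TrainingExample example);

	// called once for every test example; returns the predicted activity label
	public String classify(Example example);

}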

The first method is called iteratively for each training example in the training set. The attribute values and the target class of each training example can then be accessed within your implementation (see below for details) to learn a classifier or to set up any other kind of technique that is later required to classify unknown examples.

The second method is called iteratively for each example in the test suite that needs to be classified. The method should return the target class as a string (e.g. "standing", "jumping", ...). The return value is then used for computing the accuracy by comparing it against a given gold standard.

In the following section you will see how to access the relevant attributes of the examples and training examples. These examples correspond to time windows that comprise several points in time together with the sensor data that is available for these points in time.

§2 Example of an Implementation

The following code is a simple and complete example of a (baseline) tool that reads the attributes of the training examples to determine the majority class among them, which is then used as the prediction for each unseen example. You can also download this class.

package de.unima.ki.ar;

import java.util.ArrayList;
import java.util.HashMap;

import de.unima.ki.ar.benchmark.testsuite.*;
import de.unima.ki.ar.benchmark.tool.ActivityRecognitionTool;

public class MyTool implements ActivityRecognitionTool {
	
	// number of training examples seen so far
	private int exampleCounter = 0;
	// the majority class, computed lazily on the first call to classify
	private String defaultAnswer = null;
	// counts how often each target class occurs in the training data
	private HashMap<String, Integer> tcCounter = new HashMap<String, Integer>();
	
	public void learnFrom(TrainingExample example) {
		String tc = example.getTargetClass();
		if (tcCounter.containsKey(tc)) tcCounter.put(tc, tcCounter.get(tc) + 1);
		else tcCounter.put(tc, 1);
		// access to the data of the sensors can be done like this
		// here for demonstration attributes of the first given example are printed
		if (exampleCounter == 0) {
			System.out.println("* first seen example is labeled as " + tc);
			ArrayList<Sensordata> sensordatalist = example.getSensordata();
			for (Sensordata sd : sensordatalist) {
				String sensortype = sd.getSensortype();
				System.out.println("* sensortype: " + sensortype);
				System.out.print("* ");
				for (String h : sd.getHeader()) {
					System.out.print(h + "\t");
				}
				System.out.println("* ");
				ArrayList<String[]> valuelist = sd.getValuelist();
				for (String[] values : valuelist) {
					System.out.print("* ");
					for (String v : values) {
						System.out.print(v + "\t");
					}
					System.out.println();
				}
				System.out.println();
			}
			System.out.println("* all of the following examples are processed without generating any output ...");
			
		}
		exampleCounter++;	
	}

	public String classify(Example example) {
		if (defaultAnswer == null) {
			computeDefault();
		}
		return defaultAnswer;
	}
	
	private void computeDefault() {
		int max = -1;
		for (String tc : tcCounter.keySet()) {
			if (max < tcCounter.get(tc)) {
				max = tcCounter.get(tc);
				defaultAnswer = tc;
			}
		}
	}

}

For you as a tool developer, the most important part is how to access the attributes of the training examples; the relevant code is marked by a comment. When you run the tool, as shown in the next section, the data that represents the first example is printed. This will also help you understand how an example is structured and how to access its attributes.

Note that the class TrainingExample extends the class Example. Thus, both classes have nearly all methods in common. The only difference is the method getTargetClass(), which gives access to the target class and is only available for instances of the class TrainingExample.
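
A minimal illustration of the difference (the method bodies are our own; only the signatures are prescribed by the interface):

public void learnFrom(TrainingExample example) {
	// the label is visible during training ...
	String label = example.getTargetClass();
	System.out.println("labeled as " + label);
}

public String classify(Example example) {
	// ... but not during classification; here the label has to be predicted
	return "standing";
}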

§3 Running an Evaluation

We assume in the following that you have packaged your tool with all dependencies in a file named myTool.jar. For running an example evaluation, we have packaged the class described in the previous section as myTool.jar. You also need to download the Activity Recognition Client library, which is available as the jar arc.jar.
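
If you compile and package the class yourself, the following two calls should suffice (a sketch; we assume that MyTool.java lies in a directory structure that mirrors its package):

E:\temp\arc>javac -cp arc.jar de\unima\ki\ar\MyTool.java
E:\temp\arc>jar cf myTool.jar de\unima\ki\ar\MyTool.class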

Running your tool against the test suite that is available at http://web.informatik.uni-mannheim.de/arc/testsuite-example/HAR_s1w1_forearm/ can be done as shown in the following. We assume that the referenced jar files are in the working directory.

E:\temp\arc>java -cp arc.jar;myTool.jar de.unima.ki.ar.benchmark.Client
-> Running activity recognition client via command line mode
Missing required options: c, t
usage: run the benchmark client with the following options
 -c,--classname    class implementing the interface
 -t,--testsuite    url of the testuite

The client requires two arguments. Via the parameter -c you have to specify the name of the class that implements the interface. Via -t you have to specify the URL that points to the base folder of the test suite. Specifying these two parameters results in the following command line call.

Note that this command line call works on Windows. On Unix-based systems you have to replace the ; by a : when listing the jar files in the classpath argument. Furthermore, check that you copied the URL of the test suite correctly, which is http://web.informatik.uni-mannheim.de/arc/testsuite-example/HAR_s1w1_forearm/.

E:\temp\arc>java -cp arc.jar;myTool.jar de.unima.ki.ar.benchmark.Client -c de.unima.ki.ar.MyTool -t http://web.informatik.uni-mannheim.de/arc/testsuite-example/HAR_s1w1_forearm/
-> Running activity recognition client via command line mode
-> Instantiate the activity recognition tool
-> Learn from training examples
* first seen example is labeled as standing
* sensortype: acc
* id    attr_time       attr_x  attr_y  attr_z  *
* 15691 1435991988017   -9.49029        -1.60825        1.79036
* .... many more ...
* 15739 1435991988985   -9.6162 -1.98218        1.5816

* sensortype: Gyroscope
* id    attr_time       attr_x  attr_y  attr_z  *
* 15996 1435991988018   -0.316299       -0.0532989      -0.0515747
* .... many more ...
* 
* 16045 1435991988988   0.258347        -0.0201874      -0.0334167

* sensortype: MagneticField
* id    attr_time       attr_x  attr_y  attr_z  *
* 15996 1435991988018   44.3893 8.57239 8.39539
* .... many more ...
* 16045 1435991988988   45.6161 10.1547 7.55463

* all of the following examples are processed without generating any output ...
-> 2,4%
-> 5,1%
...
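
On a Unix-based system, the same call differs only in the classpath separator:

$ java -cp arc.jar:myTool.jar de.unima.ki.ar.benchmark.Client -c de.unima.ki.ar.MyTool -t http://web.informatik.uni-mannheim.de/arc/testsuite-example/HAR_s1w1_forearm/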

When you try to run your own tool instead of the baseline tool packaged in myTool.jar, you might get the following (or a similar) error message:

Exception in thread "main" java.lang.UnsupportedClassVersionError: ... 
has been compiled by a more recent version of the Java Runtime (class file version 53.0),
this version of the Java Runtime only recognizes class file versions up to 52.0. 

The client was compiled with Java 1.8. The previous error message is shown if your tool or tool bridge has been compiled with a newer version of Java (here Java 9, which produces class file version 53.0). You have to target Java 1.8 when you compile your own tool bridge.
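
You do not necessarily need an older JDK for this; with JDK 9 or later, the cross-compilation option of javac produces class files of version 52.0 (a sketch, using the same hypothetical paths as above):

E:\temp\arc>javac --release 8 -cp arc.jar de\unima\ki\ar\MyTool.java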

Note that this is not the whole output that is generated when running the client. The printouts shown above appear within the first seconds, while the results of the evaluation are generated after several minutes, once the whole test suite has been processed. Remember that our implementation of the required interface prints all of the sensor data values of the first example. The output of MyTool starts with a *, the output generated by the client starts with ->. The remaining output, which should appear after several minutes (maybe up to 15 minutes, depending on your internet connection), looks like this.

-> Classify test examples

This message is printed when all training examples have been processed and the client starts to iterate over the examples that have to be classified. Finally the following output is generated.

-> Evaluation results per activity (p=precision. r=recall. f=f1-measure)
-> running: p=undef r=0.000 f=undef
-> standing: p=undef r=0.000 f=undef
-> jumping: p=undef r=0.000 f=undef
-> walking: p=undef r=0.000 f=undef
-> climbingup: p=0.148 r=1.000 f=0.258
-> climbingdown: p=undef r=0.000 f=undef
-> sitting: p=undef r=0.000 f=undef
-> lying: p=undef r=0.000 f=undef

These specific results are caused by our implementation, which corresponds to the majority baseline: MyTool classified all of the given examples as climbingup.
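
The undef entries follow directly from the usual definitions: for an activity that is never predicted, the precision tp / (tp + fp) divides zero by zero. The following sketch shows how such per-class scores can be computed (our own illustration, not the client's actual code):

// tp = examples of the class that are predicted correctly,
// fp = other examples predicted as this class,
// fn = examples of the class predicted as something else
public static double[] precisionRecallF1(int tp, int fp, int fn) {
	double p = (tp + fp > 0) ? (double) tp / (tp + fp) : Double.NaN; // NaN = "undef"
	double r = (tp + fn > 0) ? (double) tp / (tp + fn) : Double.NaN;
	double f = (p > 0 || r > 0) ? 2 * p * r / (p + r) : Double.NaN;
	return new double[] { p, r, f };
}

For the run above, the baseline yields p = 0.148 and r = 1.000 for climbingup, and thus f = 2 * 0.148 * 1.000 / 1.148 ≈ 0.258.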

Note that an evaluation test suite can also be stored and accessed locally, as long as the reference is given in terms of a URL.
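
For example, a local copy of the example test suite should be usable via a file URL (the path is hypothetical):

E:\temp\arc>java -cp arc.jar;myTool.jar de.unima.ki.ar.benchmark.Client -c de.unima.ki.ar.MyTool -t file:///E:/temp/arc/HAR_s1w1_forearm/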

§4 Creating a Test Suite

It is rather simple to convert an existing dataset into an activity recognition test suite, as long as you have prepared the data in a certain way. The (training and test) examples of a test suite have to be stored in CSV files, which might look like this. The following excerpt shows the measurements of a gyroscope that was positioned at the upper arm while some jumping actions were performed.

id,attr_time,attr_x,attr_y,attr_z
1,1438189989519,-0.0061086523,0.03481932,-0.004581489
2,1438189989520,-0.008246681,0.030543262,-0.002443461
3,1438189989521,-0.009468411,0.035430185,-0.005192355
4,1438189989522,-0.016187929,0.00580322,-0.004581489
5,1438189989523,-0.02107485,-0.00061086525,-0.0015271631
6,1438189989524,-0.028405234,0.017715093,0.002443461
7,1438189989525,-0.01863139,0.039095376,-0.0015271631
8,1438189989526,-0.03390302,0.0290161,0.01038471
9,1438189989527,-0.03634648,0.013439035,0.020158553
10,1438189989528,-0.03023783,-0.0076358155,0.028405234
11,1438189989529,-0.028710667,-0.015882496,0.017715093
12,1438189989530,-0.026267206,-0.005192355,0.003970624
13,1438189989531,-0.01740966,0.0036651916,-0.00061086525
14,1438189989532,-0.019242255,0.0015271631,-0.0036651916
15,1438189989533,-0.024740042,-0.004581489,-0.00030543262
16,1438189989534,-0.02565634,-0.0143553335,-0.0033597588
17,1438189989535,-0.022296581,-0.024740042,-0.005192355
...

All other sensor data has to be stored in the same way. Given a set of such files, a training or test example can simply be defined as a timespan together with the information which sensors should be used and where (= the names of the files) the relevant data can be found. We propose an XML format to store this information, as shown in the following.

<example targetclass="climbingdown">
	<sensordata type="acc" pos="forearm" source="proband1/data/acc_climbingdown_forearm.csv" start="1435997166000" end="1435997167000" time-col="2"/>
	<sensordata type="Gyroscope" pos="forearm" source="proband1/data/Gyroscope_climbingdown_forearm.csv" start="1435997166000" end="1435997167000" time-col="2"/>
	<sensordata type="MagneticField" pos="forearm" source="proband1/data/MagneticField_climbingdown_forearm.csv" start="1435997166000" end="1435997167000" time-col="2"/>
</example>

The important attributes are start and end, together with the attribute time-col. The time-col attribute defines in which column of the referenced file the time is stored, and the other two attributes define the time interval that is used for the example. This simple mechanism makes it possible to store the relevant data in a way that is largely independent from its concrete usage in an evaluation test suite.
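
Conceptually, resolving such a sensordata element amounts to scanning the referenced CSV file and keeping the rows whose timestamp lies within [start, end]. The following sketch illustrates this idea (our own code, not part of arc.jar; time-col presumably counts columns starting at 1, so time-col="2" refers to attr_time):

import java.io.BufferedReader;
import java.io.FileReader;
import java.util.ArrayList;

public class WindowFilter {

	// returns all rows of the csv file whose timestamp lies in [start, end]
	public static ArrayList<String[]> rowsInWindow(String csvFile, int timeCol,
			long start, long end) throws Exception {
		ArrayList<String[]> rows = new ArrayList<String[]>();
		BufferedReader in = new BufferedReader(new FileReader(csvFile));
		String line = in.readLine(); // skip the header line (id,attr_time,...)
		while ((line = in.readLine()) != null) {
			String[] values = line.split(",");
			long time = Long.parseLong(values[timeCol - 1]);
			if (start <= time && time <= end) rows.add(values);
		}
		in.close();
		return rows;
	}
}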

The examples have to be declared as training examples or as test examples, which is done as shown in the following.

<activitiyrecognition-suite name="some name" description="a description.">
    <trainingdata>
    ... list training examples here ...
    </trainingdata>
    <testdata>
    ... list test examples here ...
    </testdata>
</activitiyrecognition-suite>

The complete description of our example testsuite is available here.

So far we have created seven test suites that vary with respect to the position of the sensor. We have described an initial evaluation experiment using these test suites in [1].

http://web.informatik.uni-mannheim.de/arc/testsuite-example/HAR_s1w1_chest/
http://web.informatik.uni-mannheim.de/arc/testsuite-example/HAR_s1w1_forearm/
http://web.informatik.uni-mannheim.de/arc/testsuite-example/HAR_s1w1_head/
http://web.informatik.uni-mannheim.de/arc/testsuite-example/HAR_s1w1_shin/
http://web.informatik.uni-mannheim.de/arc/testsuite-example/HAR_s1w1_thigh/
http://web.informatik.uni-mannheim.de/arc/testsuite-example/HAR_s1w1_upperarm/
http://web.informatik.uni-mannheim.de/arc/testsuite-example/HAR_s1w1_waist/

§5 Contact

Feel free to contact us if you have problems using the proposed technology, questions, or any kind of remarks.

- Christian Meilicke (christian |AT| informatik.uni-mannheim.de)
- Timo Sztyler (timo |AT| informatik.uni-mannheim.de)
- Heiner Stuckenschmidt (heiner |AT| informatik.uni-mannheim.de)

§6 Bibliography

[1] T. Sztyler, C. Meilicke, H. Stuckenschmidt. Towards Systematic Benchmarking of Activity Recognition Algorithms. Submitted to the 14th Workshop on Context and Activity Modeling and Recognition (COMOREA-18), 2018.
[2] C. Trojahn, C. Meilicke, J. Euzenat, and H. Stuckenschmidt: Automating OAEI campaigns (first report). In: Proceedings of the International Workshop on Evaluation of Semantic Technologies, 2010.
[3] http://sensor.informatik.uni-mannheim.de/