Mining the Top-K Non-Redundant Sequential Rules (SPMF documentation)

This example explains how to run the TNS algorithm using the SPMF open-source data mining library.

How to run this example?

What is TNS?

TNS is an algorithm for discovering the top-k non-redundant sequential rules appearing in a sequence database. It is an approximate algorithm in the sense that it always generates non-redundant rules, but these may not always be the top-k non-redundant rules. TNS uses a parameter named delta, a positive integer that can be used to improve the chance that the result is exact (the higher the delta value, the higher the chance that the result will be exact).

Why is it important to discover top-k non-redundant sequential rules? Because other sequential rule mining algorithms require the user to set a minimum support (minsup) parameter that is hard to choose (users usually set it by trial and error, which is time-consuming). Moreover, the result of sequential rule mining algorithms usually contains a high level of redundancy (for example, thousands of rules can be found that are variations of other rules having the same support and confidence). The TNS algorithm provides a solution to both of these problems by letting users directly indicate k, the number of rules to be discovered, and by eliminating redundancy in the results.

What is the input of TNS ?

TNS takes four parameters as input: a sequence database, a parameter k (the number of rules to be discovered), a minimum confidence threshold minconf (a value in [0, 1]), and a parameter delta (a positive integer).

A sequence database is a set of sequences where each sequence is a list of itemsets. An itemset is an unordered set of items. For example, the table shown below contains four sequences. The first sequence, named S1, contains 5 itemsets. It means that item 1 was followed by items 1, 2 and 3 appearing at the same time, which were followed by 1 and 3, followed by 4, and followed by 3 and 6. It is assumed that items in an itemset are sorted in lexicographical order. This database is provided in the file "contextPrefixSpan.txt" of the SPMF distribution.

ID Sequences
S1 (1), (1 2 3), (1 3), (4), (3 6)
S2 (1 4), (3), (2 3), (1 5)
S3 (5 6), (1 2), (4 6), (3), (2)
S4 (5), (7), (1 6), (3), (2), (3)
What is the output of TNS ?

TNS outputs an approximation of the k most frequent non-redundant sequential rules having a confidence higher than or equal to minconf.

A sequential rule X ==> Y is a sequential relationship between two sets of items X and Y such that X and Y are disjoint, and both X and Y are unordered. The support of a rule X ==> Y is the number of sequences that contain all items of X before all items of Y, divided by the number of sequences in the database. The confidence of a rule is the number of sequences that contain all items of X before all items of Y, divided by the number of sequences that contain all items of X.
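As an illustration of these definitions, the support and confidence of a rule can be computed directly from a sequence database. Below is a minimal Python sketch (independent of SPMF; the helper names are our own, and the database is the one from the table above):

```python
# The example database from the table above: a list of sequences,
# where each sequence is a list of itemsets (sets of items).
DB = [
    [{1}, {1, 2, 3}, {1, 3}, {4}, {3, 6}],   # S1
    [{1, 4}, {3}, {2, 3}, {1, 5}],           # S2
    [{5, 6}, {1, 2}, {4, 6}, {3}, {2}],      # S3
    [{5}, {7}, {1, 6}, {3}, {2}, {3}],       # S4
]

def rule_support_count(db, X, Y):
    """Count sequences where all items of X appear before all items of Y."""
    count = 0
    for seq in db:
        seen = set()
        j = None
        for i, itemset in enumerate(seq):
            seen |= itemset
            if X <= seen:          # earliest position by which all of X appeared
                j = i
                break
        if j is not None and j + 1 < len(seq):
            after = set().union(*seq[j + 1:])
            if Y <= after:         # all of Y must occur strictly after that position
                count += 1
    return count

def confidence(db, X, Y):
    """Sequences matching the rule, divided by sequences containing all of X."""
    containing_x = sum(1 for seq in db if X <= set().union(*seq))
    return rule_support_count(db, X, Y) / containing_x
```

For example, `rule_support_count(DB, {1, 4}, {3})` gives 3 and `confidence(DB, {1, 4}, {3})` gives 1.0, matching the rule 1,4 ==> 3 shown in the result list below.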

A sequential rule ra: X → Y is redundant with respect to another rule rb: X1 → Y1 if and only if the two rules have the same support, the two rules have the same confidence, X1 ⊆ X, Y ⊆ Y1, and ra and rb are not the same rule. In other words, a rule is redundant if a more general rule exists with the same support and confidence.
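Since redundant rules are variations of other rules having the same support and confidence, a redundancy check can be sketched in a few lines of Python. This is an illustration only, not SPMF code; the tuple encoding of a rule is an assumption made for the example:

```python
def is_redundant(ra, rb):
    """Return True if rule ra is redundant with respect to rule rb.

    Each rule is encoded as a tuple (antecedent, consequent, support,
    confidence), where antecedent and consequent are sets of items.
    ra is considered redundant w.r.t. rb when both rules have the same
    support and confidence, rb's antecedent is contained in ra's
    antecedent, rb's consequent contains ra's consequent, and the two
    rules are not identical.
    """
    Xa, Ya, sup_a, conf_a = ra
    Xb, Yb, sup_b, conf_b = rb
    return (sup_a == sup_b and conf_a == conf_b
            and Xb <= Xa and Ya <= Yb
            and (Xa, Ya) != (Xb, Yb))
```

For instance, a rule {1, 2} → {3} is redundant with respect to {1} → {3, 4} when both have the same support and confidence, because the latter is more general on both sides.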

For example, if we run TNS with k = 10, minconf = 0.5 and delta = 2, the following set of non-redundant rules is found:

2 ==> 3 sup= 3 conf= 0.75
1,3 ==> 2 sup= 3 conf= 0.75
1,4 ==> 3 sup= 3 conf= 1.0
1 ==> 2,3 sup= 4 conf= 1.0
3 ==> 4 sup= 3 conf= 1.0
2,5 ==> 6 sup= 2 conf= 1.0
2,3 ==> 4 sup= 2 conf= 0.66
1 ==> 2,3,4,6 sup= 2 conf= 0.5
3,5 ==> 6 sup= 2 conf= 1.0
2 ==> 3,4,6 sup= 2 conf= 0.5

For instance, the rule 1,4 ==> 3 means that if 1 and 4 appear in any order, they will be followed by 3 with a confidence of 100 %. Moreover, this rule has a support of 75 % (sup = 3) because it appears in three sequences (S1, S2 and S3) out of four sequences.

Note that for some values of k and some datasets, TNS may return more than k rules. This can happen if several rules have exactly the same support, and it is normal. It is also possible that the algorithm returns slightly fewer than k rules in some circumstances, because the algorithm is approximate.

Input file format

The input file format is defined as follows. It is a text file where each line represents a sequence from a sequence database. Each item from a sequence is a positive integer, and items from the same itemset within a sequence are separated by single spaces. Note that it is assumed that items within a same itemset are sorted according to a total order and that no item can appear twice in the same itemset. The value "-1" indicates the end of an itemset. The value "-2" indicates the end of a sequence (it appears at the end of each line). For example, the sample input file "contextPrefixSpan.txt" contains the following four lines (four sequences).

1 -1 1 2 3 -1 1 3 -1 4 -1 3 6 -1 -2
1 4 -1 3 -1 2 3 -1 1 5 -1 -2
5 6 -1 1 2 -1 4 6 -1 3 -1 2 -1 -2
5 -1 7 -1 1 6 -1 3 -1 2 -1 3 -1 -2

The first line represents a sequence where the itemset {1} is followed by the itemset {1, 2, 3}, followed by the itemset {1, 3}, followed by the itemset {4}, followed by the itemset {3, 6}. The next lines follow the same format.
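A reader for this format can be sketched in a few lines of Python. This is a hypothetical helper for illustration, not part of SPMF:

```python
def parse_spmf_line(line):
    """Convert one SPMF sequence line into a list of itemsets.

    "-1" closes the current itemset; "-2" marks the end of the sequence.
    """
    sequence, itemset = [], []
    for token in line.split():
        if token == "-2":        # end of the sequence
            break
        elif token == "-1":      # end of the current itemset
            sequence.append(itemset)
            itemset = []
        else:
            itemset.append(int(token))
    return sequence
```

Applied to the first line above, it yields [[1], [1, 2, 3], [1, 3], [4], [3, 6]], i.e. the sequence S1 from the earlier table.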

Note that it is also possible to use a text file containing a text (several sentences) as an alternative to the default input format, if the text file has the ".text" extension. If the algorithm is applied on a text file from the graphical interface or command line interface, the text file will be automatically converted to the SPMF format by dividing the text into sentences separated by ".", "?" and "!", where each word is considered as an item. Note that when a text file is used as input of a data mining algorithm, the performance will be slightly lower than with the native SPMF file format, because the input file is automatically converted before launching the algorithm and the result also has to be converted back. This cost, however, should be small.

Output file format

The output file format is defined as follows. It is a text file where each line is a sequential rule. Each item from a sequential rule is a positive integer. On each line, the items from the rule antecedent are listed first, separated by commas. Then the keyword "==>" appears, followed by the items from the rule consequent, separated by commas. Then, the keyword "#SUP:" appears, followed by an integer indicating the support of the rule as a number of sequences. Then, the keyword "#CONF:" appears, followed by a double value in the [0, 1] interval indicating the confidence of the rule. For example, an output file is shown below:

3 ==> 2 #SUP: 3 #CONF: 0.75
1 ==> 2,3 #SUP: 4 #CONF: 1.0
4 ==> 3 #SUP: 3 #CONF: 1.0

Consider the second line. It indicates that the rule {1} ==> {2, 3} has a support of 4 sequences and a confidence of 100 %. The other lines follow the same format.
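A small parser for such output lines can be sketched as follows. This is an illustrative helper (not part of SPMF); it tolerates both comma- and space-separated item lists:

```python
def parse_rule_line(line):
    """Parse one output line such as '1 ==> 2,3 #SUP: 4 #CONF: 1.0'.

    Returns (antecedent, consequent, support, confidence).
    """
    left, rest = line.split("==>")
    consequent_part, sup_part = rest.split("#SUP:")
    sup_str, conf_str = sup_part.split("#CONF:")
    antecedent = [int(x) for x in left.replace(",", " ").split()]
    consequent = [int(x) for x in consequent_part.replace(",", " ").split()]
    return antecedent, consequent, int(sup_str), float(conf_str)
```

For the second line of the example, this yields ([1], [2, 3], 4, 1.0).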

Optional feature: giving names to items

Some users have requested the feature of giving names to items instead of using numbers. This feature is offered in the user interface of SPMF and in the command line of SPMF. To use this feature, your file must include @CONVERTED_FROM_TEXT as its first line, followed by several lines defining the names of the items in your file. For example, consider the example database "contextPrefixSpan.txt". Here we have modified the file to give names to the items:

@CONVERTED_FROM_TEXT
@ITEM=1=apple
@ITEM=2=orange
@ITEM=3=tomato
@ITEM=4=milk
@ITEM=5=bread
@ITEM=6=noodle
@ITEM=7=rice
@ITEM=-1=|
1 -1 1 2 3 -1 1 3 -1 4 -1 3 6 -1 -2
1 4 -1 3 -1 2 3 -1 1 5 -1 -2
5 6 -1 1 2 -1 4 6 -1 3 -1 2 -1 -2
5 -1 7 -1 1 6 -1 3 -1 2 -1 3 -1 -2

In this file, the first line indicates that it is a file where names are given to items. Then, the second line indicates that the item 1 is called "apple". The third line indicates that the item 2 is called "orange". The 9th line indicates that the symbol "-1" must be replaced by "|". Then the following lines define four sequences in the SPMF format.

Then, if we apply a sequential pattern mining algorithm using this file using the user interface of SPMF or the command line, the output file contains several patterns, including the following ones:

apple ==> orange,tomato #SUP: 4 #CONF: 1.0
bread,noodle ==> orange,tomato #SUP: 2 #CONF: 1.0

Note that this feature can also be used from the source code of SPMF using the ResultConverter class. However, there is currently no example provided.
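To illustrate what such a conversion involves, the item-name mapping described above can be read and applied with a couple of small helpers. This is a hypothetical sketch, not the actual ResultConverter class:

```python
def load_item_names(lines):
    """Build a mapping from item codes to names out of the
    @ITEM=<code>=<name> header lines of a @CONVERTED_FROM_TEXT file."""
    names = {}
    for line in lines:
        line = line.strip()
        if line.startswith("@ITEM="):
            _, code, name = line.split("=", 2)
            names[code] = name
    return names

def rename_tokens(line, names):
    """Replace item codes by their names in a space-separated line;
    tokens without a mapping (such as -2) are left unchanged."""
    return " ".join(names.get(tok, tok) for tok in line.split())
```

For example, with the header shown above, the token "1" would be rendered as "apple" and "-1" as "|" in the converted output.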


TNS is an efficient algorithm. It is based on the TopSeqRules algorithm for discovering top-k sequential rules. The main difference between TNS and TopSeqRules is that TNS includes additional strategies to eliminate redundancy in the results, and that TNS is an approximate algorithm, while TopSeqRules is not.

TNS and TopSeqRules are more intuitive to use than regular sequential rule mining algorithms such as RuleGrowth. However, it should be noted that the problem of top-k sequential rule mining is more computationally expensive than the problem of sequential rule mining. Therefore, it is recommended to use TNS or TopSeqRules for k values of up to 1000 or 2000, depending on the dataset. If more rules need to be found, it may be better to use RuleGrowth or TRuleGrowth, for more efficiency.

Where can I get more information about this algorithm?

The TNS algorithm is described in this paper:

Fournier-Viger, P., Tseng, V. S. (2013). TNS: Mining Top-K Non-Redundant Sequential Rules. Proc. 28th Symposium on Applied Computing (ACM SAC 2013). ACM Press, pp. 164-166.