University of Twente Student Theses


Distributed state space generation for graphs up to isomorphism

Kant, Gijs (2010) Distributed state space generation for graphs up to isomorphism.

PDF, 834kB
Abstract: In order to achieve reliability of systems, formal verification techniques are required. We model systems as graph transition systems, where states are represented by graphs and transitions between states are defined by graph transformation rules. For formal verification, often the state space of the system has to be generated, i.e., the set of all reachable states. The main problem with generating the state space is its size, which tends to grow exponentially with the size of the modelled system. Even for relatively small systems, this results in enormous computation time and memory usage.

Recent developments in hardware design aim towards multi-core and multi-processor architectures. To benefit from these, we require distributed tools: the task of state space generation needs to be split into subtasks that can be distributed to multiple cores, called workers. In this report we present a tool for distributed state space generation for graph-based systems. The tool uses LTSMIN [23] for efficiently distributing the states over the workers and for storing the state space, and instances of GROOVE [28] for computing successor states by applying graph transformation rules. Because LTSMIN uses fixed-size state vectors to represent states, a serialisation from graphs to state vectors and its reverse are required. We define two such serialisation functions: one that partitions the graph by its nodes (node vector) and one that uses edge labels to partition the graph (label vector).

In graph transformation systems a powerful symmetry reduction can be achieved by using isomorphism checking [27]. Isomorphic states can be merged, resulting in a reduced transition system that is bisimilar to the transition system without reduction, which implies that they satisfy the same semantic properties [29]. In our distributed tool we compute a canonical form for each computed successor graph, which enables us to also distribute the isomorphism reduction to the workers. For computing canonical forms, BLISS (described in [16]) is used, together with a conversion that is needed because GROOVE uses a slightly different graph formalism (edge-labelled graphs) than BLISS does (coloured graphs); an illustrative sketch of the serialisation and of this conversion is given below the abstract.

We have performed experiments to investigate the time and memory performance of the distributed tool, based on LTSMIN, with the two different serialisation functions, compared to the sequential version of GROOVE, for three different case studies. The experiments show that the node vector encoding is better than the label vector encoding in both memory usage and execution time. For the very symmetric cases, the distributed setup with the node vector serialisation uses orders of magnitude less memory than sequential GROOVE. This is surprising, because the storage in GROOVE is optimised for graphs, storing only the differences (deltas) between state graphs instead of the complete graphs themselves. The execution time of LTSMIN with one core is much worse than that of GROOVE, because GROOVE uses a canonical hash code, which often makes full isomorphism checking unnecessary, and a kind of partial order reduction for graph transition systems that is not applicable in the distributed setting. Also, the conversion to coloured graphs, needed for computing canonical forms with BLISS, blows up the size of the graphs. However, for the larger models the distributed solution scales well.
For all reported case studies there are start graphs for which GROOVE runs out of memory or cannot finish within the time limit, while the distributed tool can still generate the state space. In one of the cases, a speedup of 16 is achieved with 64 workers for the largest start graph for which GROOVE could also generate the state space within the time limit. For the very symmetric cases, most of the time in GROOVE is spent on isomorphism checking. In the distributed setting, the speedup is explained by the time spent on canonical form computation, which decreases linearly as the number of workers increases. As the number of workers increases, however, the communication overhead also grows.
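To make the two central encoding steps of the abstract concrete, the sketch below shows, in Python, (1) a serialisation of a graph into a fixed-size node vector and (2) the encoding of an edge-labelled graph as a vertex-coloured graph, as needed before a canonical form can be computed by a coloured-graph tool. This is a minimal illustration under assumed names and data representations; it is not the implementation used in the thesis, which works with LTSMIN's state vectors and BLISS's graph data structures rather than Python values.

    # Illustrative sketch only; all names and the (source, label, target)
    # edge representation are assumptions made for this example.

    def node_vector(edges, max_nodes, value_table):
        """Serialise a graph into a fixed-size state vector, one slot per node.

        edges: (source, label, target) triples of a canonically numbered graph.
        """
        # Partition the edges of the graph by their source node.
        per_node = [set() for _ in range(max_nodes)]
        for src, label, tgt in edges:
            per_node[src].add((label, tgt))
        # Intern each node's edge set so the vector holds small integer indices,
        # in the spirit of the value tables used for state vector slots.
        vector = []
        for edge_set in per_node:
            key = frozenset(edge_set)
            if key not in value_table:
                value_table[key] = len(value_table)
            vector.append(value_table[key])
        return tuple(vector)

    def to_coloured_graph(nodes, edges):
        """Encode an edge-labelled graph as a vertex-coloured graph.

        Each labelled edge becomes an extra coloured vertex (colour = its label),
        which is why this conversion blows up the size of the graphs.
        """
        colour_of_label = {}
        colours = {n: 0 for n in nodes}        # colour 0: original graph nodes
        plain_edges = []
        for i, (src, label, tgt) in enumerate(edges):
            if label not in colour_of_label:
                colour_of_label[label] = len(colour_of_label) + 1
            edge_vertex = ("edge", i)          # fresh vertex standing for this edge
            colours[edge_vertex] = colour_of_label[label]
            plain_edges.append((src, edge_vertex))   # direction preserved:
            plain_edges.append((edge_vertex, tgt))   # source -> edge vertex -> target
        return colours, plain_edges

    # Tiny example: a two-node graph with a 'Cell' self-loop and a 'next' edge.
    graph_edges = [(0, "next", 1), (1, "Cell", 1)]
    table = {}
    print(node_vector(graph_edges, max_nodes=4, value_table=table))
    print(to_coloured_graph([0, 1], graph_edges))

In the tool described above, the canonical form is presumably what fixes the node numbering that such a serialisation relies on; the extra vertex per edge in the coloured-graph encoding matches the blow-up mentioned in the abstract.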
Item Type: Essay (Master)
Faculty: EEMCS: Electrical Engineering, Mathematics and Computer Science
Subject: 54 computer science
Programme: Computer Science MSc (60300)
Link to this item: https://purl.utwente.nl/essays/59672