Benchmarking Hard Problems in Random Abstract AFs: The Stable Semantics
Stefano Bistarelli, Fabio Rossi, Francesco Santini
2014
Abstract
In this paper we test four implementations of reasoning tools dedicated to Abstract Argumentation Frameworks: ASPARTIX, dynPARTIX, Dung-O-Matic, and ConArg2. The tests are executed over three models of randomly generated graphs, namely the Erdős-Rényi model, the Kleinberg small-world model, and the scale-free Barabási-Albert model. We compare the four tools on the enumeration of all stable extensions. We then benchmark dynPARTIX and ConArg2 on the credulous and skeptical acceptance of arguments. Finally, we evaluate ConArg2 on checking the existence of a stable extension.
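To make the benchmarked problem concrete, the following minimal Python sketch enumerates all stable extensions of a small abstract argumentation framework by brute force. A set S of arguments is stable iff it is conflict-free and attacks every argument outside S. This is only an illustration of the semantics (exponential in the number of arguments); the function name and the set-of-pairs representation are our own, not the API of any of the tools above, which rely on far more efficient encodings.

```python
from itertools import combinations

def stable_extensions(args, attacks):
    """Enumerate all stable extensions of the AF (args, attacks) by brute force.

    args    -- iterable of argument names
    attacks -- iterable of (attacker, attacked) pairs
    A set S is stable iff no attack holds within S (conflict-freeness)
    and every argument outside S is attacked by some member of S.
    """
    args = set(args)
    atk = set(attacks)
    exts = []
    for r in range(len(args) + 1):
        for subset in combinations(sorted(args), r):
            s = set(subset)
            # conflict-free: no attack between two members of S
            if any((a, b) in atk for a in s for b in s):
                continue
            # S must attack every argument it does not contain
            if all(any((a, b) in atk for a in s) for b in args - s):
                exts.append(s)
    return exts

# A mutual attack a <-> b yields two stable extensions, {a} and {b};
# an odd attack cycle a -> b -> c -> a has no stable extension.
print(stable_extensions({"a", "b"}, [("a", "b"), ("b", "a")]))
print(stable_extensions({"a", "b", "c"},
                        [("a", "b"), ("b", "c"), ("c", "a")]))
```

From such an enumeration, credulous acceptance of an argument (membership in at least one stable extension) and skeptical acceptance (membership in all of them) follow directly, as does the existence check (a non-empty result list).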