Benchmarking Hard Problems in Random Abstract AFs: The Stable Semantics

Bistarelli, Stefano; Rossi, Fabio; Santini, Francesco
2014

Abstract

In this paper we test four different implementations of reasoning tools dedicated to Abstract Argumentation Frameworks. These systems are ASPARTIX, dynPARTIX, Dung-O-Matic, and ConArg2. The tests are executed over three different models of randomly generated graphs, i.e., the Erdős-Rényi model, the Kleinberg small-world model, and the scale-free Barabási-Albert model. We compare these four tools on the enumeration of all stable extensions. We then benchmark dynPARTIX and ConArg2 on the credulous and skeptical acceptance of arguments. Finally, we also evaluate ConArg2 on checking the existence of a stable extension.
ISBN: 978-1-61499-435-0
ISBN: 978-1-61499-436-7
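
As a rough illustration only (not the authors' actual benchmark generator), the sketch below shows how random attack graphs could be built under the three models mentioned in the abstract using networkx, and how stable extensions could be enumerated by brute force. The random edge orientation for the Barabási-Albert graphs and all parameter values are assumptions made for this sketch; the exhaustive search is exponential and only feasible for very small frameworks.

```python
import itertools
import random

import networkx as nx


def erdos_renyi_af(n, p, seed=None):
    """Directed Erdos-Renyi attack graph: each ordered pair attacks with probability p."""
    return nx.gnp_random_graph(n, p, seed=seed, directed=True)


def barabasi_albert_af(n, m, seed=None):
    """Scale-free Barabasi-Albert graph; its edges are undirected, so orient each one at random."""
    rng = random.Random(seed)
    g = nx.barabasi_albert_graph(n, m, seed=seed)
    d = nx.DiGraph()
    d.add_nodes_from(g.nodes())
    for u, v in g.edges():
        d.add_edge(*((u, v) if rng.random() < 0.5 else (v, u)))
    return d


def kleinberg_af(side, seed=None):
    """Kleinberg navigable small-world graph on a side x side grid (already directed)."""
    return nx.DiGraph(nx.navigable_small_world_graph(side, seed=seed))


def stable_extensions(af):
    """Brute-force enumeration: S is stable iff it is conflict-free and
    attacks every argument outside S. Exponential, so only for tiny frameworks."""
    args = list(af.nodes())
    attacks = set(af.edges())
    result = []
    for r in range(len(args) + 1):
        for subset in itertools.combinations(args, r):
            s = set(subset)
            conflict_free = not any((a, b) in attacks for a in s for b in s)
            attacks_all_outside = all(
                any((a, b) in attacks for a in s) for b in args if b not in s
            )
            if conflict_free and attacks_all_outside:
                result.append(s)
    return result


if __name__ == "__main__":
    af = erdos_renyi_af(8, 0.3, seed=42)
    print(stable_extensions(af))
```

In the paper the reasoning itself is performed by the benchmarked tools (ASPARTIX, dynPARTIX, Dung-O-Matic, ConArg2); the brute-force checker above only serves to make the stable semantics concrete.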
Files in this product:
There are no files associated with this product.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11391/1345517
Citations
  • PMC: not available
  • Scopus: 23
  • Web of Science (ISI): 16