Abstract of FKI-251-05

Document-Name:  fki-251-05

Title:		A Formal Model of Generalized Assertion-Level Trust	
Authors:	Felix Fischer, Matthias Nickles 
Revision-Date:	December 2005
Category:	Technical Report (Forschungsberichte Künstliche Intelligenz)
Abstract:	With open environments, like electronic marketplaces or the
		Semantic Web, moving to the center of attention, recent years
		have witnessed a rapidly growing interest in the subject of	
		computational trust. With respect to trust models, which allow
		agents to evaluate the trustworthiness of potential peers and
		are hence particularly useful in areas where complex interaction
		mechanisms are ineffective or inefficient, research has
		mainly focused on the numerical aggregation of information
		from different sources about various aspects of trust via so-called
		trust metrics, often without clearly defining what trust
		actually is. The strong emphasis on these quantitative approaches
		raises a number of fundamental questions regarding
		the goals and the basic direction of research into trust in
		multi-agent systems. In this paper, we propose a formal (i.e.,
		logical) definition in terms of predictive, behavioral expectations,
		resulting in a communication-centric, generalized, and
		contextual notion of trust. We argue that the formal semantics
		underlying such a definition is a necessary precondition
		for implementing a fine-grained yet feasible model with key
		properties like context-sensitivity, communicability, justification,
		and awareness of strategic lying. We further argue
		that existing logical formalizations, which ground trust in
		mental states, particularly those of the trustee, cannot provide sufficient
		guidance as to how trust is to be inferred, updated, or used.
Keywords:	Agents, Trust, Computational Expectation, Agent Communication
Size:		8 pages
Language:	English
ISSN:		0941-6358
Copyright:      The ``Forschungsberichte Künstliche Intelligenz''
                series includes primarily preliminary publications,
                specialized partial results, and supplementary
                material. In the interest of a subsequent final
                publication these reports should not be copied. All
                rights and the responsibility for the contents of the
                report are with the authors, who would appreciate
                critical comments.

FKI@AI/Cognition Group