A Statistical Matching Approach to Detect Privacy Violation for Trust-Based Collaborations

Mohamed Ahmed, Daniele Quercia and Stephen Hailes

Abstract

Distributed trust and reputation management mechanisms are often proposed as a means of providing assurance in dynamic and open environments by enabling principals to build up knowledge of the entities with which they interact. However, there is a tension between the preservation of privacy (which suggests withholding information) and the controlled release of information that is necessary both to accomplish tasks and to provide a foundation for the assessment of trustworthiness. If reputation-based systems are to be used in assessing the risks of privacy violation, it is necessary both to discover when sensitive information has been released and to evaluate the likelihood that each of the principals that knew that information was involved in its release. In this paper, we argue that statistical traceability can act as a basis for striking a proper balance between privacy and trust. To enable this, we assume that interacting principals negotiate service level agreements intended to constrain the ways in which personal information may be used; we then monitor violations, ascribing likelihoods of involvement in a release using an approach based on statistical disclosure control. Although our approach cannot guarantee perfect privacy protection for personal information, it provides a framework with which detected privacy violations can be mapped onto a measure of accountability, which is useful in deterring such violations.
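To make the intuition behind ascribing likelihoods of involvement concrete, the following is a minimal sketch, not the paper's actual estimator: it assumes we know which data items were leaked and which items each principal was entrusted with, and it scores principals by the overlap, weighting rare items more heavily since fewer principals could have been their source. All names (involvement_likelihoods, holdings, leaked) are hypothetical.

    def involvement_likelihoods(leaked, holdings):
        """Assign each principal a likelihood of involvement in a release.

        leaked   -- set of data items observed to have been disclosed
        holdings -- dict mapping each principal to the set of items
                    released to that principal under its agreement
        """
        scores = {}
        for principal, items in holdings.items():
            # A principal can only have leaked items it actually held.
            overlap = leaked & items
            score = 0.0
            for item in overlap:
                # Count how many principals held this item: an item held
                # by few principals is stronger evidence against each.
                n_holders = sum(1 for h in holdings.values() if item in h)
                score += 1.0 / n_holders
            scores[principal] = score
        total = sum(scores.values())
        # Normalise the scores into a likelihood distribution, if any
        # evidence points at anyone at all.
        return {p: (s / total if total else 0.0) for p, s in scores.items()}

    # Example: three principals, two leaked items.
    holdings = {
        "alice": {"name", "email", "dob"},
        "bob":   {"name", "email"},
        "carol": {"name"},
    }
    leaked = {"email", "dob"}
    print(involvement_likelihoods(leaked, holdings))
    # {'alice': 0.75, 'bob': 0.25, 'carol': 0.0}

Here "alice" attracts most of the suspicion because she is the only principal who held "dob", illustrating how statistical matching concentrates accountability on the principals whose holdings best explain the observed release.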