Social software is software that enables its users to interact socially on the Web through organizational structures such as a social network. Its uses span casual friendship networks to complex process management, and its user base is huge: Facebook.com alone has more than one billion users. Users of these systems typically share a vast amount of content. The content being shared is always targeted at some of the users of the system, but not at all of them. Every user has the right to demand that her privacy be preserved by not showing content to individuals for whom it was not meant. However, every day more and more cases of privacy breaches take place. A privacy breach is undesirable in itself, but it can also have further consequences. For example, if private credit card information is shared with unknown people, serious security problems can result; or, if an invention that a company needs to keep secret is mistakenly shared with a competitor, the invention details may leak. For these reasons, it is extremely important to preserve users' privacy in social software. To enable this, we need intelligent software that can keep track of users' privacy requirements, check whether these requirements can be met by the system, and lead the user to take appropriate action as needed. For example, the software can suggest that the user not share a certain piece of information if the social software cannot guarantee its privacy, or it can signal that important content has been shared with undesirable individuals. Accordingly, the aim of this project is to design and develop the components and techniques necessary for intelligent software that will manage a user's privacy on her behalf.
To reach this goal, the project will follow a path with three major milestones. In the first part, we will develop a formal representation to capture users' privacy specifications. A formal representation is necessary to develop techniques that can process users' privacy expectations automatically. Since different users will have different privacy expectations, the representation must be expressive enough to denote these various aspects. To realize this, the privacy agreements of existing social software systems will first be studied in depth and the constructs relevant to privacy will be identified. Next, these constructs will be represented using commitments, an abstraction widely used in multiagent systems to capture agent interactions. In the second part, concepts relevant to privacy and social software will be identified, and their properties as well as their interactions with other concepts will be represented in an ontology. This ontology will capture not only access control concepts but also concepts such as relation types, content types, and sharing mechanisms that are widely used in social software. In the third part, privacy management techniques will be developed. These techniques will make use of the formal and semantic representations developed in the previous two parts. There will be two main problems of study. First, given a privacy agreement, we will develop ways to detect whether the agreement is in itself consistent. It is well known that an agreement can contain conflicting terms that make its execution difficult, if not impossible. For example, an agreement stating both that personal information will not be shared and that an e-mail address can be shared is in conflict, since an e-mail address is itself personal information. Hence, it is important to have ways to detect such inconsistencies in a given agreement.
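To make the first and third milestones concrete, the following is a minimal sketch, under assumed names, of how agreement terms might be expressed as commitments and checked for internal consistency against a small concept hierarchy standing in for the project's ontology. All class, function, and concept names here are illustrative, not the project's actual design.

```python
# Hypothetical sketch: commitments plus a toy "is-a" ontology used to detect
# conflicting terms inside a single privacy agreement.

# ontology: maps a concept to its more general concepts (subsumption)
ONTOLOGY = {
    "email_address": {"personal_information"},
    "phone_number": {"personal_information"},
    "photo": {"content"},
}

def generalizations(concept):
    """Return the concept together with all of its ancestors."""
    seen = {concept}
    frontier = [concept]
    while frontier:
        c = frontier.pop()
        for parent in ONTOLOGY.get(c, ()):
            if parent not in seen:
                seen.add(parent)
                frontier.append(parent)
    return seen

class Commitment:
    """C(debtor, creditor, action, concept): the debtor commits to the
    creditor to perform (or refrain from) an action on content of a concept."""
    def __init__(self, debtor, creditor, action, concept):
        self.debtor, self.creditor = debtor, creditor
        self.action, self.concept = action, concept

def find_conflicts(commitments):
    """Flag pairs where one term permits sharing a concept while another
    term forbids sharing one of its generalizations."""
    conflicts = []
    for a in commitments:
        for b in commitments:
            if a.action == "may_share" and b.action == "must_not_share":
                if b.concept in generalizations(a.concept):
                    conflicts.append((a, b))
    return conflicts

# The e-mail example from the text: the two terms below conflict, because
# an e-mail address is itself personal information.
agreement = [
    Commitment("network", "user", "must_not_share", "personal_information"),
    Commitment("network", "user", "may_share", "email_address"),
]
print(len(find_conflicts(agreement)))  # prints 1
```

In the project itself, the ad-hoc dictionary would be replaced by the developed ontology and the conflict check by proper semantic reasoning; the sketch only illustrates why subsumption between concepts is what makes such conflicts detectable.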
Second, given a consistent privacy agreement, its execution may still be impossible because of a conflict with a second, related agreement. Consider a picture of two friends, where the first person does not want the picture to be public while the second one does. In such situations, it is impossible to honor both agreements; it is nevertheless important to detect such cases automatically and to suggest an action to be taken. In both problems, verification techniques such as model checking will be used to detect whether the system has evolved into a state where privacy violations are occurring, or whether it can possibly evolve into such a state. In addition to traditional model checking techniques, ontology-enhanced methods will be employed so that further semantic reasoning can be done on the privacy agreements.
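The cross-agreement problem above can be illustrated with a toy state-exploration sketch in the spirit of model checking: enumerate every reachable visibility state of a co-owned item and test each co-owner's requirement against it. The names and the three-value visibility model are invented for illustration only.

```python
# Hypothetical sketch: detect that no visibility state of a shared picture
# can honor all co-owners' agreements at once.

VISIBILITIES = ["private", "friends", "public"]

# Each agreement is modeled as a predicate over the item's visibility
# that must hold for that agreement to be honored.
agreements = {
    "alice": lambda v: v != "public",   # Alice forbids making the picture public
    "bob":   lambda v: v == "public",   # Bob wants the picture public
}

def satisfiable_states(agreements):
    """Return the visibility states in which every agreement is honored."""
    return [v for v in VISIBILITIES
            if all(req(v) for req in agreements.values())]

ok = satisfiable_states(agreements)
if not ok:
    # No state honors both agreements: flag the conflict so the system
    # can suggest an action (e.g. ask the co-owners to renegotiate).
    print("conflict: no visibility satisfies all co-owners")
```

A real model checker would explore the full evolution of the system rather than a single attribute, and the project's ontology-enhanced methods would let the predicates reason over concept hierarchies instead of string equality; the exhaustive check over reachable states is the shared idea.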
As a result of the studies in this project, we will develop software that can help the users of social software manage their privacy. This software will use the formal representation developed in the project to represent the privacy agreements between a user and a system. To enable understandability and semantic reasoning on the agreements, the developed ontology will be used. To ensure that the users' privacy requirements are being met, the developed techniques for checking the consistency of a given agreement as well as the coherence among agreements will be applied. We will evaluate the project's results on privacy scenarios that will be collected as part of the project. The resulting software will help users of social software maintain their privacy, enabling people to use social software with greater ease. The formal representations developed in this project, as well as the methods designed based on them, will be published in international conferences and journals in a timely manner.