Privacy & Online Rights


The pervasiveness of data collection, processing, and dissemination raises severe privacy concerns regarding individual and societal harms. Information leaks may cause physical or psychological damage to individuals, e.g., when published information can be used by thieves to infer when users are not at home, by enemies to find weak points from which to launch attacks on users, or by advertising companies to build profiles and influence users. On a large scale, this information can be used to influence society as a whole, causing irreversible harm to democracy. The extent of the harms that privacy loss causes highlights that privacy cannot simply be tackled as a confidentiality issue. Beyond keeping information private, it is important to ensure that the systems we build support freedom of speech and individuals’ autonomy of decision and self-determination.

The goal of this knowledge area is to introduce system designers to the concepts and technologies that are used to engineer systems that inherently protect users’ privacy. We aim to provide designers with the ability to identify privacy problems, to describe them from a technical perspective, and to select adequate technologies to eliminate, or at least mitigate, these problems.

Privacy is recognised as a fundamental human right [1]: “No one shall be subjected to arbitrary interference with his privacy, family, home or correspondence, nor to attacks upon his honour and reputation”. As such, it has been studied for many years from a socio-legal perspective with two goals. First, to better understand what privacy means for society and individuals. Second, to ensure that the legal frameworks that underpin our democracies support privacy as a right. The former studies proposed definitions such as privacy being ‘the right to be let alone’ [2], ‘the right to informational self-determination’ [3, 4] or ‘the freedom from unreasonable constraints on the construction of one’s own identity’ [5]. Probably one of the best examples of the latter is the set of principles and rules associated with the European Data Protection Legislation [6] covered in the Law & Regulation CyBOK Knowledge Area [7]. All of these conceptualisations are of great importance for defining and understanding the boundaries of privacy and its role in society. However, their abstract and context-free nature often makes them difficult to act upon for system designers who need to select technologies to ensure that privacy is supported in their systems.

To address this gap, in this knowledge area we conceptualise privacy in a similar way to how security engineering conceptualises security problems [8, 9]. We consider that privacy concerns, and the solutions that can address them, are defined by the adversarial model considered by the designer, the nature of the information to be protected, and the nature of the protection mechanism itself. Typical examples of adversarial models are: third-party services with whom data are shared are not trusted, the service provider itself is not trusted with users’ private data, or users of a service should not learn private data about other users. Typical examples of private data to be protected from these adversaries are: the content of users’ communications, their service usage patterns, or the mere existence of users and/or their actions. Finally, typical examples of protection mechanisms are techniques that enable information availability to be controlled, such as access control settings, or techniques to hide information, such as encryption.
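To make this framing concrete, the minimal sketch below takes one common combination of these three elements: the adversary is the service provider itself, the information to be protected is the content of users’ communications, and the protection mechanism is end-to-end encryption. It is an illustration only, not a technique prescribed by this knowledge area; it assumes the third-party Python `cryptography` package, and the fields in the provider’s view are illustrative. Note that the provider still observes metadata, which is precisely why metadata confidentiality is also addressed later.

```python
# Minimal sketch (illustrative assumption): the service provider is the
# adversary and is not trusted with the *content* of users' messages.
# End-to-end encryption hides the content, but the provider still observes
# metadata such as who talks to whom, when, and how much.
# Requires the third-party 'cryptography' package (pip install cryptography).
from datetime import datetime, timezone
from cryptography.fernet import Fernet

# Key shared only between the two communicating users, never with the provider.
shared_key = Fernet.generate_key()
channel = Fernet(shared_key)

ciphertext = channel.encrypt(b"Meet at the station at 18:00")

# What the untrusted provider stores and observes: ciphertext plus metadata.
provider_view = {
    "sender": "alice",
    "recipient": "bob",
    "timestamp": datetime.now(timezone.utc).isoformat(),
    "size_bytes": len(ciphertext),
    "payload": ciphertext,  # opaque to the provider
}

# Only the intended recipient, holding the key, recovers the content.
assert channel.decrypt(provider_view["payload"]) == b"Meet at the station at 18:00"
```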

This knowledge area is structured as follows. The first part, comprising three sections, considers three different privacy paradigms that have given rise to different classes of privacy technologies. The first is privacy as confidentiality (Section 1), in which the privacy goal is to hide information from the adversary. We review technological approaches to hide both data and metadata, and approaches to hinder the adversary’s ability to perform inferences using the data that cannot be hidden. The second is privacy as informational control (Section 2), in which the goal is to provide users with the means to decide what information they will expose to the adversary. We review technologies that support users in their privacy-oriented decisions and techniques that help them express their preferences when interacting with digital services. Finally, we introduce privacy as transparency (Section 3), in which the goal is to inform the user about what data she has exposed and who has accessed or processed these data. We review solutions that show users their digital footprint and solutions that support accountability through secure logging.
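As a concrete illustration of the transparency paradigm, the following minimal sketch shows one way accountability can be supported through secure logging: a hash-chained log in which each access record commits to its predecessor, so later modification or deletion of an entry is detectable. The record fields and helper functions are illustrative assumptions, not a specific system from the literature.

```python
# Minimal sketch of tamper-evident logging for accountability.
# Each entry commits to the previous one via a hash chain, so tampering
# with any past record breaks verification of the chain.
import hashlib
import json

def append_entry(log, record):
    """Append a record, chaining it to the hash of the previous entry."""
    prev_hash = log[-1]["entry_hash"] if log else "0" * 64
    body = json.dumps({"record": record, "prev_hash": prev_hash}, sort_keys=True)
    log.append({"record": record, "prev_hash": prev_hash,
                "entry_hash": hashlib.sha256(body.encode()).hexdigest()})

def verify_chain(log):
    """Recompute every hash; any tampering breaks the chain."""
    prev_hash = "0" * 64
    for entry in log:
        body = json.dumps({"record": entry["record"], "prev_hash": prev_hash},
                          sort_keys=True)
        if entry["prev_hash"] != prev_hash or \
           entry["entry_hash"] != hashlib.sha256(body.encode()).hexdigest():
            return False
        prev_hash = entry["entry_hash"]
    return True

access_log = []
append_entry(access_log, {"who": "clinic-app", "data": "location"})
append_entry(access_log, {"who": "ad-broker", "data": "browsing history"})
assert verify_chain(access_log)

access_log[0]["record"]["who"] = "someone-else"   # tampering...
assert not verify_chain(access_log)               # ...is detected
```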

The privacy requirements that define the privacy goals in the paradigms mentioned above are often context dependent. That is, revealing a particular piece of information may be acceptable in some environments but not in others. For instance, disclosing a rare disease is not considered a privacy concern in an interaction with a doctor, but would be considered a privacy violation in a commercial interaction. Nissenbaum formalises this concept as contextual integrity [10], which explicitly recognises that an information flow may present different privacy needs depending on the entities exchanging the information or the environment in which it is exchanged. We note that once the requirements for a flow are clear (including the adversarial model), a designer can directly apply the technologies described in this chapter.
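The following minimal sketch shows one way contextual integrity can be made operational. Following Nissenbaum, an information flow is described by its data subject, sender, recipient, information type, and transmission principle, and is acceptable only if it matches an entrenched norm of the context in which it occurs. The contexts, norms, and helper function here are illustrative assumptions chosen to mirror the rare-disease example above.

```python
# Minimal sketch of contextual integrity as a flow check.
# A flow is a violation unless it matches a norm of its context.
from dataclasses import dataclass

@dataclass(frozen=True)
class Flow:
    subject: str        # whom the information is about
    sender: str
    recipient: str
    attribute: str      # type of information
    principle: str      # transmission principle, e.g. "confidentiality"

# Each context lists the flows its (illustrative) norms permit.
NORMS = {
    "healthcare": [Flow("patient", "patient", "doctor", "rare disease", "confidentiality")],
    "commerce":   [Flow("customer", "customer", "merchant", "shipping address", "for delivery")],
}

def respects_contextual_integrity(context, flow):
    """Return True only if the flow matches an entrenched norm of the context."""
    return flow in NORMS.get(context, [])

# Disclosing a rare disease to a doctor is fine; the same information
# flowing to a merchant in a commercial interaction is not.
disclosure = Flow("patient", "patient", "doctor", "rare disease", "confidentiality")
assert respects_contextual_integrity("healthcare", disclosure)
assert not respects_contextual_integrity(
    "commerce", Flow("customer", "customer", "merchant", "rare disease", "for delivery"))
```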

The second part of the knowledge area is devoted to illustrating how privacy technologies can be used to support democracy and civil liberties (Section 4). We consider two core examples: systems for secure voting and systems to circumvent censorship. For the former, the privacy of votes is imperative for the functionality itself. For the latter, the privacy of communication partners is necessary to ensure that content cannot be blocked by a censor.

We acknowledge that privacy technologies can be used to support illicit activities (e.g., the distribution of child pornography) or anti-social behaviours (e.g., cyberbullying), as described in the Adversarial Behaviours CyBOK Knowledge Area [11]. While there exist solutions to selectively revoke the protection provided by privacy technologies, these are strongly discouraged by privacy researchers and privacy advocates. The reason is that adding backdoors or escrow capabilities to ease law enforcement inherently weakens the security of privacy-preserving systems, as these mechanisms can also be exploited by malicious actors to undermine users’ rights. Therefore, we do not consider such techniques within this document.

We conclude the knowledge area by outlining the steps involved in the engineering of privacy-preserving systems (Section 5). We provide guidelines for engineers to make informed choices about architectures and privacy technologies. These guidelines can help system designers to build systems in which the users’ privacy does not depend on a centralised entity that may become a single point of failure.

We note that many of the privacy technologies we review in this knowledge area rely on the cryptographic concepts introduced in the Cryptography CyBOK Knowledge Area [12]. Throughout this knowledge area, we assume that the reader is familiar with these basic concepts, and we avoid repeating cryptographic definitions and re-explaining common primitives.
