Security and Cryptography

Securing the Internet presents great challenges and research opportunities. Potential applications such as Internet voting, universally available medical records, and ubiquitous e-commerce are all hindered by serious security and privacy concerns. The epidemic of hacker attacks on personal computers and web sites only highlights the inherent vulnerability of the current computer and network infrastructure.

Adequately addressing security and privacy concerns requires a combination of technical, social, and legal approaches. Topics currently under active investigation in the department include mathematical modeling of security properties, implementation and application of cryptographic protocols, secure and privacy-preserving distributed algorithms, trust management, verification of security properties, and proof-carrying code. There is also interest in the legal aspects of security, privacy, and intellectual property, both within the department and in the world-famous Yale Law School, with which we cooperate. Some of these topics are described in greater detail below.

James Aspnes is interested in problems involved with securing large distributed algorithms against disruption by untrustworthy participants. Using cryptographic techniques, it may be possible to allow intermediate results in a distributed algorithm to be certified independently of who provides them, reducing the problem of choosing which machines to trust. These issues become especially important in systems, such as peer-to-peer networks, where association with the system is voluntary and cannot be limited only to machines under the control of the algorithm designer.
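The idea of self-certifying intermediate results can be illustrated with a toy sketch (the task and names here are hypothetical, not drawn from the actual research): untrusted workers search for a value whose hash has a required property, and any party can re-check a claimed result cheaply, without trusting the worker that produced it.

```python
import hashlib

def find_nonce(data: bytes, start: int, end: int, prefix: str = "000"):
    """Worker's task: search a sub-range for a nonce whose hash has the prefix."""
    for nonce in range(start, end):
        digest = hashlib.sha256(data + str(nonce).encode()).hexdigest()
        if digest.startswith(prefix):
            return nonce
    return None

def verify(data: bytes, nonce: int, prefix: str = "000") -> bool:
    """Anyone can re-check a claimed result; no trust in the worker is needed."""
    return hashlib.sha256(data + str(nonce).encode()).hexdigest().startswith(prefix)

# An untrusted worker reports a result; a coordinator verifies it independently.
data = b"shared task description"
nonce = find_nonce(data, 0, 100000)
assert nonce is not None and verify(data, nonce)
```

The point of the sketch is that verification is independent of which machine supplied the answer, so a voluntary, open population of workers can contribute results safely.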

Joan Feigenbaum is interested in the foundations of electronic commerce and in fundamental problems in complexity theory that are motivated by cryptology. One such problem is the power of “instance-hiding” computations. Can the owner of a private database use the superior processing power of one or more other machines (perhaps for a fee) without having to reveal the database to those machines? In a set of influential papers with Martin Abadi, Don Beaver, Lance Fortnow, and Joe Kilian, Professor Feigenbaum showed (1) that instance-hiding computations are limited in power if the private-database owner can consult only a single other machine; (2) that they are extremely powerful if the owner can consult multiple other machines; and (3) that instance hiding is closely related to some of the central themes of complexity theory, e.g., interactive provability, average vs. worst-case complexity, and the inherent communication costs of multiparty protocols.
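A minimal sketch of the multi-machine case (a simplified illustration, not the constructions from the papers): for a linear query, the owner can split the private input into additive shares, send one share to each of two servers, and combine their answers, so that neither server alone learns anything about the data.

```python
import random

MOD = 2**61 - 1  # a prime modulus; all arithmetic is done in this field

def share(x: int):
    """Split a secret into two additive shares; each share alone looks random."""
    r = random.randrange(MOD)
    return r, (x - r) % MOD

def server_eval(coeffs, share_vec):
    """Each server evaluates the public linear map on the shares it received."""
    return sum(c * s for c, s in zip(coeffs, share_vec)) % MOD

coeffs = [3, 1, 4]        # public linear query f(x) = 3*x0 + x1 + 4*x2
x = [10, 20, 30]          # the owner's private data
shares = [share(v) for v in x]
y1 = server_eval(coeffs, [s[0] for s in shares])  # server 1 sees only its shares
y2 = server_eval(coeffs, [s[1] for s in shares])  # server 2 likewise
result = (y1 + y2) % MOD  # linearity lets the owner recombine the answers
assert result == 3*10 + 1*20 + 4*30  # 170
```

This works because the query is linear, so it commutes with the sharing; the general instance-hiding results concern much richer classes of functions.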

In another direction, Professor Feigenbaum founded the research area of “trust management” in collaboration with Matt Blaze and Jack Lacy. Emerging Internet services that use encryption on a mass-market scale require sophisticated mechanisms for managing trust. E-businesses will receive cryptographically signed requests for action and will have to decide whether or not to grant these requests. In centralized (and small-scale distributed) computing communities, an authorizer can make such a decision based on the identity of the person who signed the request. Global, internet-scale e-businesses, however, cannot rely on identities. Most merchants will have had no contact with a typical prospective customer prior to the first time they receive a request from him. Making authorization decisions in this type of environment requires formal techniques for specifying security policies and security credentials, rigorously determining whether a particular set of credentials proves that a request complies with a policy, and deferring trust to third-party credential issuers. The “PolicyMaker” and “KeyNote” trust-management systems, which she co-invented with Blaze, Lacy, John Ioannidis, and Angelos Keromytis, have had wide-ranging impact on large-scale distributed-authorization mechanisms.
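The flavor of a compliance check can be conveyed with a toy model (hypothetical names; this is not PolicyMaker or KeyNote themselves): a policy names the keys it trusts directly, credentials record delegations from one key to another, and a request complies if its signing key is reachable from a trusted root through the credential graph.

```python
def complies(policy_roots, credentials, requester):
    """Toy compliance checker: is the requester's key reachable from a
    trusted root via delegation credentials (issuer -> subjects edges)?"""
    trusted = set(policy_roots)
    frontier = list(policy_roots)
    while frontier:
        issuer = frontier.pop()
        for subject in credentials.get(issuer, []):
            if subject not in trusted:
                trusted.add(subject)
                frontier.append(subject)
    return requester in trusted

# A merchant trusts a bank, which has issued a credential for a customer key.
creds = {"merchant-root": ["bank-A"], "bank-A": ["customer-key-42"]}
assert complies(["merchant-root"], creds, "customer-key-42")
assert not complies(["merchant-root"], creds, "unknown-key")
```

Real trust-management languages are far more expressive (credentials can carry conditions and restrictions), but the core question is the same: do these credentials prove that this request complies with this policy?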

Currently, Professor Feigenbaum is working on the tension between strong encryption and lawful surveillance. More generally, she is interested in “socio-technical” issues and in the interplay of computer science and law.

Michael Fischer is interested in security problems connected with Internet voting, and more generally in trust and security in multiparty computations. He has been developing an artificial society in which trust has a precise algorithmic meaning. In this setting, trust can be learned and used for decision making. Better decisions lead to greater social success. This framework allows for the development and analysis of some very simple algorithms for learning and utilizing trust that are easily implementable in a variety of settings and are arguably similar to what people commonly use in everyday life.
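One very simple algorithm in this spirit (a hypothetical illustration, not Professor Fischer's actual model) maintains a smoothed success rate for each partner and uses it to decide whom to interact with, so that better trust estimates lead to better decisions.

```python
class TrustLearner:
    """Toy trust learner: estimate each partner's reliability from
    observed interaction outcomes (Laplace-smoothed success rate)."""
    def __init__(self):
        self.successes = {}
        self.failures = {}

    def observe(self, partner, success: bool):
        table = self.successes if success else self.failures
        table[partner] = table.get(partner, 0) + 1

    def trust(self, partner) -> float:
        s = self.successes.get(partner, 0)
        f = self.failures.get(partner, 0)
        return (s + 1) / (s + f + 2)   # unknown partners start at 0.5

    def choose(self, partners):
        """Decision making: interact with the most trusted partner."""
        return max(partners, key=self.trust)

agent = TrustLearner()
for _ in range(8):
    agent.observe("honest", True)    # repeated good experiences
agent.observe("cheater", False)      # one bad experience
assert agent.choose(["honest", "cheater", "stranger"]) == "honest"
```

Rules of this kind are easy to implement in many settings and are arguably similar to how people weigh past experience when deciding whom to rely on.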

Mariana Raykova does research in cryptography and its applications to security. Her interests include secure computation, which enables multiple parties to compute jointly on their private data without revealing their individual sensitive inputs to each other. Her work develops new efficient constructions for secure computation, along with implementations that aim to bring these techniques closer to practical use. Secure computation has numerous applications in almost any scenario that involves private information, such as computation on medical data, running auctions, providing anonymous credentials, and processing financial records. A special case of secure computation with wide applications of its own, on which Dr. Raykova has worked, is secure search, which allows mutually distrusting parties to share parts of their private data in a controlled manner.
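A classic entry point to secure computation is the secure-sum protocol, sketched here as a toy (this is an illustration of the general idea, not Dr. Raykova's constructions): each party splits its input into additive shares and distributes them, each party publishes only the sum of the shares it received, and the published partial sums reveal the total but no individual input.

```python
import random

MOD = 2**61 - 1  # a prime modulus; shares are uniform in this field

def make_shares(value: int, n: int):
    """Split a private value into n additive shares modulo MOD."""
    shares = [random.randrange(MOD) for _ in range(n - 1)]
    shares.append((value - sum(shares)) % MOD)
    return shares

def secure_sum(private_inputs):
    """Each party i deals one share of its input to every party j; party j
    publishes only the sum of the shares it received, hiding each input."""
    n = len(private_inputs)
    dealt = [make_shares(v, n) for v in private_inputs]  # row i: party i's shares
    partial = [sum(dealt[i][j] for i in range(n)) % MOD for j in range(n)]
    return sum(partial) % MOD

# E.g., employees compute their total salary without revealing any one salary.
salaries = [70, 85, 90, 65]
assert secure_sum(salaries) == 310
```

Practical secure-computation protocols handle arbitrary functions, not just sums, and must also defend against parties who deviate from the protocol; efficiency in that general setting is a central research challenge.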

Zhong Shao leads the FLINT group at Yale, which is applying formal methods to complex security-sensitive systems in such a way that they can guarantee to users that these systems really are trustworthy.  There are several challenges standing in the way of achieving this goal. One challenge is how to specify the desired security policy of a complex system. In the real world, pure noninterference is too strong to be useful. It is crucial to support more lenient security policies that allow for certain well-specified information flows between users, such as explicit declassifications. A second challenge is that real-world systems are usually written in low-level languages like C and assembly, but these languages are traditionally difficult to reason about. A third challenge is how to actually go about conducting a security proof over low-level code and then link everything together into a system-wide guarantee.

He and his team are developing a new set of formal techniques and tools for overcoming all of these challenges. They are developing a new methodology for formally specifying, proving, and propagating information-flow security policies using a single unifying mechanism, called the “observation function.” A policy is specified in terms of an expressive generalization of classical noninterference, proved using a general method that subsumes both security-label proofs and information-hiding proofs, and propagated across layers of abstraction using a special kind of simulation that is guaranteed to preserve security. To demonstrate the effectiveness of the new methodology, they are building an actual end-to-end security proof, fully formalized and machine-checked in the Coq proof assistant, of a nontrivial concurrent operating system.
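The role of an observation function can be sketched in miniature (a hypothetical toy, far simpler than the FLINT group's Coq developments): the observation function projects out what a low-clearance user may see, and noninterference requires that observably equal states remain observably equal after a transition.

```python
def step(state):
    """Toy secure transition: the secret influences only the secret part."""
    return {"pub": state["pub"] + 1, "sec": state["sec"] * 2}

def observe(state):
    """Observation function: what a low-clearance user can see of the state."""
    return state["pub"]

def noninterfering(step_fn, states):
    """Check noninterference over a finite state space: observably-equal
    states must stay observably equal after one transition."""
    return all(
        observe(step_fn(s1)) == observe(step_fn(s2))
        for s1 in states for s2 in states
        if observe(s1) == observe(s2)
    )

states = [{"pub": p, "sec": s} for p in range(3) for s in range(3)]
assert noninterfering(step, states)

# A leaky transition lets the secret flow into the public part.
leaky = lambda st: {"pub": st["pub"] + st["sec"], "sec": st["sec"]}
assert not noninterfering(leaky, states)
```

In the actual methodology, varying the observation function yields policies more lenient than pure noninterference (e.g., permitting explicit declassifications), and the security proofs are carried across layers of abstraction by security-preserving simulations.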

Affiliated Faculty: James Aspnes, Joan Feigenbaum, Michael Fischer, Mariana Raykova, Zhong Shao.