Securing the Internet presents great challenges and research opportunities. Potential applications such as Internet voting, universally available medical records, and ubiquitous e-commerce are all hindered by serious security and privacy concerns. The epidemic of hacker attacks on personal computers and web sites only highlights the inherent vulnerability of the current computer and network infrastructure.
Adequately addressing security and privacy concerns requires a combination of technical, social, and legal approaches. Topics currently under active investigation in the department include mathematical modeling of security properties, implementation and application of cryptographic protocols, secure and privacy-preserving distributed algorithms, trust management, verification of security properties, and proof-carrying code. There is also interest in the legal aspects of security, privacy, and intellectual property, both within the department and in the world-famous Yale Law School, with which we cooperate. Some of these topics are described in greater detail below.
Joan Feigenbaum has conducted direction-setting research on various aspects of cryptography, security, and privacy for more than thirty years. For example, she was the co-founder (with Matt Blaze and Jack Lacy) of the security-research area of “trust management.” In 2020, Blaze, Feigenbaum, and Lacy won the IEEE Symposium on Security and Privacy Test-of-Time Award for their 1996 paper “Decentralized Trust Management.” Currently, Professor Feigenbaum is working on “socio-technical” issues in the interplay of computer science and law, including the tension between strong encryption and lawful surveillance.
Michael Fischer is interested in security problems connected with Internet voting, and more generally in trust and security in multiparty computations. He has been developing an artificial society in which trust has a precise algorithmic meaning. In this setting, trust can be learned and used for decision making, and better decisions lead to greater social success. This framework allows for the development and analysis of some very simple algorithms for learning and utilizing trust. Such algorithms are easily implementable in a variety of settings and are arguably similar to what people commonly use in everyday life.
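To make the idea of algorithmic trust concrete, here is a minimal illustrative sketch (not Fischer's actual model): each agent keeps a numeric trust score per peer, nudges it toward 1 or 0 after cooperative or uncooperative interactions, and uses the scores to decide whom to interact with. The class name, update rule, and learning rate are all hypothetical choices for illustration.

```python
class TrustAgent:
    """Toy agent that learns trust scores from interaction outcomes.

    Illustrative only: one of many simple, easily implementable
    trust-learning rules, not a specific published algorithm.
    """

    def __init__(self, peers, learning_rate=0.2):
        # Start neutral: trust 0.5 for every known peer.
        self.trust = {p: 0.5 for p in peers}
        self.lr = learning_rate

    def update(self, peer, cooperated):
        # Move trust toward 1 after a good interaction, toward 0 after a bad one.
        target = 1.0 if cooperated else 0.0
        self.trust[peer] += self.lr * (target - self.trust[peer])

    def choose_partner(self):
        # Decision making: prefer the most-trusted peer.
        return max(self.trust, key=self.trust.get)


agent = TrustAgent(["alice", "bob"])
agent.update("alice", cooperated=True)
agent.update("bob", cooperated=False)
print(agent.choose_partner())  # prints "alice" after these two updates
```

The point of such a sketch is that "trust" becomes a quantity an algorithm can maintain and act on, which is what allows precise analysis of how trust-based decisions affect social success.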
Zhong Shao leads the FLINT group at Yale, which is applying formal methods to complex security-sensitive systems with the goal of guaranteeing to users that these systems really are trustworthy. Several challenges stand in the way of achieving this goal. One challenge is how to specify the desired security policy of a complex system. In the real world, pure noninterference is too strong to be useful. It is crucial to support more lenient security policies that allow for certain well-specified information flows between users, such as explicit declassifications. A second challenge is that real-world systems are usually written in low-level languages like C and assembly, but these languages are traditionally difficult to reason about. A third challenge is how to actually go about conducting a security proof over low-level code and then link everything together into a system-wide guarantee.
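For readers unfamiliar with the term, classical noninterference can be stated roughly as follows (a standard textbook-style formulation, not the FLINT group's exact definition): if two initial states agree on everything a low-security observer can see, then running the program must produce final states that also agree on everything that observer can see.

```latex
% Classical noninterference (schematic): writing $s_1 =_L s_2$ for
% "states $s_1$ and $s_2$ are indistinguishable to a low observer",
% a program $P$ is secure when
\forall s_1, s_2.\quad s_1 =_L s_2 \;\Longrightarrow\; P(s_1) =_L P(s_2)
```

This is "pure" in the sense that no information about secret (high) data may ever influence low-observable behavior; a declassification policy deliberately relaxes the condition to permit certain well-specified flows.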
He and his team are developing a new set of formal techniques and tools for overcoming all of these challenges. They are developing a new methodology for formally specifying, proving, and propagating information-flow security policies using a single unifying mechanism, called the “observation function.” A policy is specified in terms of an expressive generalization of classical noninterference, proved using a general method that subsumes both security-label proofs and information-hiding proofs, and propagated across layers of abstraction using a special kind of simulation that is guaranteed to preserve security. To demonstrate the effectiveness of the new methodology, they are building an actual end-to-end security proof, fully formalized and machine-checked in the Coq proof assistant, of a nontrivial concurrent operating system.
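One way to read the role of the observation function, schematically: each principal $\ell$ is assigned a function $O_\ell$ mapping a whole machine state to the portion of it that $\ell$ can observe, and the security condition requires that execution preserve indistinguishability of observations. The rendering below is an illustrative generalization of the noninterference pattern, not the precise definition used in the FLINT work.

```latex
% Schematic security condition parameterized by an observation
% function $O_\ell$ and a step relation $\mathit{step}$:
\forall s_1, s_2.\quad
  O_\ell(s_1) = O_\ell(s_2)
  \;\Longrightarrow\;
  O_\ell(\mathit{step}(s_1)) = O_\ell(\mathit{step}(s_2))
```

Choosing $O_\ell$ appropriately recovers classical noninterference as a special case, while richer observation functions can encode policies with explicit declassification, which is what makes a single unifying mechanism plausible.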