Most real software systems consist of modules developed in multiple programming languages, and languages differ in their security assumptions and guarantees. Consequently, even if individual modules are secure in some language model and with respect to some security policy, there is usually no uniform security guarantee for the whole multilingual system. This project focuses on low-overhead techniques for providing security guarantees to software systems in which type-safe languages such as Java interoperate with native code. Native code is developed in low-level languages such as C, C++, and assembly; although widely used in software projects, it is notoriously insecure and a rich source of security vulnerabilities. The PIs are developing a two-layered approach to alleviating the security threats that native code poses to type-safe languages: (1) binary rewriting tools and their verifiers are being incorporated into the Java Virtual Machine (JVM) to rewrite and verify native modules at the machine-instruction level so that they obey security policies; (2) a safe dialect of C for interoperation with Java is being designed; with the help of programmer annotations, the safety of programs in this dialect can be statically verified. The outcome of this project will enable popular platforms such as the JVM and .NET, as well as other major programming languages (e.g., Python and OCaml), to incorporate native modules safely. The developed principles will also be applicable to web browsers and operating systems, which need to be extended with untrusted low-level modules without compromising host security.
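For concreteness, the sketch below shows the kind of native entry point that such tools would process: a C function exposed to Java through the Java Native Interface (JNI). The class demo.NativeMath and its checksum method are hypothetical, used only to illustrate how native code manipulates Java data through raw pointers, with no type or bounds checks.

    #include <jni.h>

    /* Hypothetical native implementation of a Java method declared as
     *   public static native int checksum(byte[] data);
     * in class demo.NativeMath. The body operates on raw memory, which
     * is exactly what binary rewriting and verification must constrain. */
    JNIEXPORT jint JNICALL
    Java_demo_NativeMath_checksum(JNIEnv *env, jclass cls, jbyteArray data)
    {
        jsize len = (*env)->GetArrayLength(env, data);
        jbyte *buf = (*env)->GetByteArrayElements(env, data, NULL);
        jint sum = 0;
        if (buf != NULL) {
            for (jsize i = 0; i < len; i++)
                sum += buf[i];   /* raw pointer arithmetic: no bounds or type checks */
            (*env)->ReleaseByteArrayElements(env, data, buf, JNI_ABORT);
        }
        return sum;
    }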
From online shopping to electronic voting, software systems have become an intrinsic part of modern society over the past few decades. However, software systems are not secure, and the media is full of reports of the catastrophic impact of software failures. One reason for the frequent failures is that modern software systems are complex by all measures. An important contributor to this complexity is that a software system is often multilingual; that is, it is developed in multiple programming languages. Languages differ in their security assumptions and guarantees. Consequently, even if individual components are secure in some language model and with respect to some security policy, there is usually no uniform security guarantee for the whole multilingual software system.

The goal of the project is to design and implement low-overhead techniques that provide verifiable security when safe languages such as Java interoperate with native code through Foreign Function Interfaces (FFIs). Native code is developed in low-level languages such as C, C++, or even assembly and is notoriously unsafe.

The two core research questions of the project are: (1) How can the safety and security of native libraries be improved when they are used by safe languages (e.g., Java), without sacrificing too much performance? (2) How can one formally prove that the resulting system satisfies the desired security properties?

For the first question, we have designed a complete framework that isolates faults of native libraries and provides safety and security in the context of Java, one of the most popular programming languages. Leveraging a technique called software-based fault isolation, the framework puts native code in a separate code sandbox and allows interaction between native and Java code only through a carefully designed pathway (a sketch of the underlying address-masking idea appears at the end of this summary). Two different implementations have been built to demonstrate the approach's efficiency and portability. In one implementation, the security framework is integrated into a Java Virtual Machine (JVM). In the second, the framework is built outside the JVM and takes advantage of JVM-independent interfaces; it is portable across JVMs at the expense of some performance degradation. Evaluation of our framework demonstrates that it incurs modest runtime overhead while significantly enhancing the security of Java applications.

For the second question, we have applied formal methods to verify the security of our techniques. To that end, we have built formal models of native code (in particular, a model of Intel's x86 instruction set) and a formal model of the Java Native Interface, the standard interface between Java and native code. Based on those models, we have built a formally verified checker, called RockSalt, that examines native code to determine whether it obeys the sandboxing policy of software-based fault isolation, the policy used in Google's Native Client to ensure that Chrome extensions developed in native code cannot damage the browser's integrity. The RockSalt checker is smaller, faster, and stronger than Google's original checker, and it comes with a fully mechanized correctness proof encoded in an interactive theorem prover.

Through the completion of the project, much progress has been made toward improving the safety and security of multilingual software. We believe many of our techniques are applicable to the settings of many other widely used programming languages such as Python.
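The following minimal sketch, referenced in the summary above, illustrates the address-masking idea behind software-based fault isolation. The sandbox size, layout, and names (SANDBOX_SIZE, sandboxed_store) are hypothetical; the actual framework enforces such guards at the machine-instruction level on rewritten binaries, and a verifier such as RockSalt checks that every store is guarded in this way.

    #include <stdint.h>
    #include <stdlib.h>

    /* Sketch of SFI address masking. The sandbox is modeled as a
     * heap-allocated region whose size is a power of two; the constants
     * and helpers are illustrative, not the project's actual layout. */
    #define SANDBOX_SIZE (1u << 24)        /* assumed 16 MB data sandbox */

    static uint8_t *sandbox;               /* base of the sandboxed region */

    static void sandboxed_store(uint32_t offset, uint8_t value)
    {
        /* Mask the offset so the store lands inside the sandbox no matter
         * what address untrusted native code computed. */
        sandbox[offset & (SANDBOX_SIZE - 1)] = value;
    }

    int main(void)
    {
        sandbox = malloc(SANDBOX_SIZE);
        if (sandbox == NULL)
            return 1;
        sandboxed_store(0xDEADBEEFu, 42);  /* wild offset is confined to the sandbox */
        free(sandbox);
        return 0;
    }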