In the last ten years, virtual machines (VMs) have been extensively used for security-related applications, such as intrusion detection systems, malicious software (malware) analyzers, and secure logging and replay of system execution. A VM is a software layer that emulates a computer's hardware. In the traditional usage model, security solutions are placed in the VM layer, which has complete control over system resources. The guest operating system (OS) is assumed to be easily compromised by malware and runs unaware of virtualization. The cost of this approach is the semantic gap problem, which hinders the development and widespread deployment of virtualization-based security solutions: there is a significant difference between the state observed by the guest OS (high-level semantic information) and by the VM (low-level semantic information). The guest OS works on abstractions such as processes and files, while the VM sees only lower-level abstractions, such as the CPU and main memory. To obtain information about the guest OS state, these virtualization solutions use a technique called introspection, in which the guest OS state is inspected from the outside (the VM layer), usually by building a map of the OS layout in an area of memory where these solutions can analyze it. We propose a new way to perform introspection: the guest OS, traditionally unaware of virtualization, actively collaborates with the VM layer underneath it, requesting services and exchanging data and information as equal peers at different levels of abstraction. Our approach allows stronger, more fine-grained, and more flexible security mechanisms to be developed, and it is no less secure than the traditional model, since introspection tools also depend on untampered OS data and code to report correct results.
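To make the semantic gap concrete, the sketch below shows how a traditional introspection tool on the VM side might reconstruct the guest's process list from raw guest memory. It is a minimal illustration, not part of the proposed system: the memory-access primitives (read_guest_word, read_guest_bytes), the init_task address, and the structure offsets are all assumptions standing in for hypervisor- and kernel-build-specific details.

```c
/*
 * Minimal introspection sketch (illustrative only): from the VM layer,
 * rebuild the guest's process list by walking the kernel's task_struct
 * list in raw guest memory. The primitives and constants below are
 * assumptions for this sketch; real tools must recover them from the
 * specific hypervisor interface and guest kernel build.
 */
#include <stdint.h>
#include <stdio.h>

/* Hypothetical hypervisor primitives for reading guest memory. */
extern uint64_t read_guest_word(uint64_t guest_vaddr);
extern void     read_guest_bytes(uint64_t guest_vaddr, void *buf, size_t len);

/* Assumed layout constants, specific to one guest kernel build. */
#define INIT_TASK_ADDR  0xffffffff81c124c0ULL  /* address of init_task      */
#define TASKS_OFFSET    0x298                  /* task_struct.tasks offset  */
#define COMM_OFFSET     0x5a8                  /* task_struct.comm offset   */
#define COMM_LEN        16

void list_guest_processes(void)
{
    uint64_t head = INIT_TASK_ADDR + TASKS_OFFSET;
    uint64_t next = read_guest_word(head);     /* first list_head.next      */

    while (next != head) {
        uint64_t task = next - TASKS_OFFSET;   /* back to task_struct base  */
        char comm[COMM_LEN];
        read_guest_bytes(task + COMM_OFFSET, comm, COMM_LEN);
        printf("guest process: %.*s\n", COMM_LEN, comm);
        next = read_guest_word(next);          /* follow list_head.next     */
    }
}
```

The example highlights why the semantic gap is costly: every piece of high-level information (here, process names) must be reassembled from raw bytes using externally supplied knowledge of the guest kernel's layout, and the result is only correct if that layout has not been tampered with.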
We will design, implement, and make available to the research community this collaborative architecture between a guest OS and a VM layer, and employ it to counter various types of kernel-level malware. The goal is to increase the cost for attackers by refining trust/integrity values for subjects and objects at the OS and VM layers, leveraging social trust. In this architecture, the guest OS and the VM actively collaborate, requesting services and exchanging data and information through special instructions protected from tampering. This opens up possibilities for malware analysis and defense that are not currently feasible due to the semantic gap problem, including preventing the actions of privacy-invading malware such as keyloggers, mitigating certain types of kernel DoS attacks and return-oriented rootkits, and increasing the cost for attackers by leveraging social trust to refine integrity levels and restrict system resources accordingly, to name a few. This research will also lead to the creation of a cyber security laboratory at Bowdoin, a liberal arts college located in Maine.
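As an illustration of the kind of protected communication channel the architecture relies on, the sketch below shows how a guest kernel could issue requests to the VM layer through a hardware-assisted hypercall (the VMCALL instruction on Intel VT-x). The service numbers, argument conventions, and helper names are assumptions made for this sketch, not part of the proposed interface.

```c
/*
 * Sketch of a guest-to-VM request channel (illustrative only). On Intel
 * VT-x hardware the guest can trap to the hypervisor with the VMCALL
 * instruction; the hypervisor's VM-exit handler validates the request
 * before acting on it. Service numbers and register conventions below
 * are assumptions for this sketch.
 */
#include <stdint.h>

#define HC_REPORT_PROCESS_CREATE  0x01  /* guest reports a new process        */
#define HC_REQUEST_INTEGRITY_CHK  0x02  /* guest asks VM to verify a region   */

/* Guest-side wrapper: place the service number and arguments in registers
 * and issue VMCALL; the hypervisor returns its result in RAX. */
static inline uint64_t hypercall(uint64_t nr, uint64_t a1, uint64_t a2)
{
    uint64_t ret;
    asm volatile("vmcall"
                 : "=a"(ret)
                 : "a"(nr), "D"(a1), "S"(a2)
                 : "memory");
    return ret;
}

/* Example use: the guest kernel asks the VM layer to verify the integrity
 * of a code region before trusting it. */
static inline int request_integrity_check(uint64_t start, uint64_t len)
{
    return (int)hypercall(HC_REQUEST_INTEGRITY_CHK, start, len);
}
```

Because the trap into the VM layer is enforced by hardware, such requests cannot be silently suppressed or redirected by kernel-level malware in the guest, which is what distinguishes this channel from ordinary in-guest communication.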