
Masaryk University

Faculty of Informatics

Qubes OS

Bachelor’s Thesis

Martin Páleník

Brno, Spring 2016



Declaration
Hereby I declare that this paper is my original authorial work, which
I have worked out on my own. All sources, references, and literature
used or excerpted during elaboration of this work are properly cited
and listed in complete reference to the due source.

Martin Páleník

Advisor: Ing. Mgr. et Mgr. Zdeněk Říha, Ph.D.

Abstract

The aim of this thesis was to evaluate the possibilities of virtu-
alization technologies in the field of operating system security and
to measure the performance and usability impact of employing vir-
tualization instead of the traditional security mechanisms to justify
the need for a novel approach. Current security mechanisms in con-
temporary operating systems are examined, with an emphasis on
the containerization approach represented by Docker, and Manda-
tory access control (MAC) represented by Security-Enhanced Linux
(SELinux). The Qubes OS has been analyzed as an example of security
based on virtualization.
The experimental part of this thesis consists of two different
components. Firstly, the thesis evaluates the extent of performance
deterioration when paravirtualization is employed instead of other
alternatives, such as a native Fedora Linux distribution, the same system
using the Qubes kernel, Docker containerization and the SELinux Sandbox,
including its variant for GUI application confinement. Secondly, a
usability evaluation has been conducted to assess the potential caveats
the target user might have to face when working with this operating
system.
The thesis concludes that Qubes OS is a viable alternative to a
Linux desktop operating system in terms of usability, performance and
stability. However, more work has to be done to improve hardware
compatibility and to achieve a more widespread adoption amongst casual
computer users, who are the main target audience according to its
developers.

Keywords
Qubes, operating system, os, security, virtualization, Xen, hypervisor,
container, sandbox, isolation

Contents
1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
2 Virtualization . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
2.1 Container-based virtualization . . . . . . . . . . . . . . . . 4
2.2 Hypervisor . . . . . . . . . . . . . . . . . . . . . . . . . . 4
2.2.1 Type 1 . . . . . . . . . . . . . . . . . . . . . . . . 5
2.2.2 Type 2 . . . . . . . . . . . . . . . . . . . . . . . . 6
2.3 Hypervisor-based virtualization . . . . . . . . . . . . . . . 6
2.3.1 Paravirtualization . . . . . . . . . . . . . . . . . . 6
2.4 Intel® VT-d . . . . . . . . . . . . . . . . . . . . . . . . . . 7
3 Qubes OS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
3.1 Definitions . . . . . . . . . . . . . . . . . . . . . . . . . . 9
3.2 Architecture . . . . . . . . . . . . . . . . . . . . . . . . . . 12
3.2.1 Domain . . . . . . . . . . . . . . . . . . . . . . . 12
3.2.2 Disposable VMs . . . . . . . . . . . . . . . . . . . 13
4 Performance profiling . . . . . . . . . . . . . . . . . . . . . . 17
4.1 Methodology . . . . . . . . . . . . . . . . . . . . . . . . . 17
4.1.1 Desktop graphics . . . . . . . . . . . . . . . . . . 20
4.1.2 Specialized benchmarks . . . . . . . . . . . . . . 21
4.2 Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
4.2.1 Desktop graphics . . . . . . . . . . . . . . . . . . 23
4.2.2 Specialized benchmarks . . . . . . . . . . . . . . 25
5 Usability evaluation . . . . . . . . . . . . . . . . . . . . . . . 27
5.1 Installation . . . . . . . . . . . . . . . . . . . . . . . . . . 27
5.2 Dual-boot with Fedora 23 . . . . . . . . . . . . . . . . . . . 28
5.3 Copying files from external devices . . . . . . . . . . . . . . 28
6 Conclusion and further work . . . . . . . . . . . . . . . . . . 31

List of Tables
4.1 Tested combinations of OS, kernels and SELinux 19
5.1 Disk layout for Qubes OS 27

List of Figures
2.1 Comparison of virtual machines and containers [10] 4
2.2 Hypervisor types [14] 6
2.3 Intel® VT-d diagram 8
3.1 Qubes architecture overview [2, p.8] 12
3.2 Secure file system sharing among many AppVMs [2,
p.10] 13
3.3 GUI subsystem design overview [2, p.25] 14
3.4 Networking overview [2, p.30] 14
4.1 Test distribution inside the chosen Suites 18
4.2 Graphics tests inside the SELinux Sandbox -X
environment 21
4.3 Hw/sw configuration for graphics benchmarks 22
4.4 Hw/sw configuration for specialized benchmarks 23
4.5 Comparison of results from desktop graphics tests 23
4.6 Comparison of results from the selected specialized
tests 26
5.1 Installing GRUB2 to enable dual-boot 28
5.2 Transferring a directory from USB3.0 HDD into an
AppVM 29

1 Introduction
This thesis examines a novel security-oriented desktop
operating system called Qubes OS, with an emphasis on experimental
performance comparison against similar approaches to application
confinement, namely containerization and mandatory access control.
The reason for such a comparison has been to justify the approach
and to help determine the extent of performance deterioration against
the more established, but often more lightweight, competition. The
results should help the reader to understand the potential benefits
and drawbacks associated with employing a secure operating system
based on paravirtualization.
The containerization approach is represented by Docker, confined
using SELinux on the reference configuration, and by the SELinux
Sandbox -X, tested on the same configuration. The reference
configuration is represented by Fedora 23 x86_64 Workstation running
on the base hardware.
The security of containerization technologies running on a Linux
operating system is mainly based on the assumption of an uncompro-
mised kernel, which becomes a single point of failure. Provided
there is an exploitable vulnerability inside the kernel, the whole sys-
tem might be compromised using a suitable exploit. Monolithic
kernels in contemporary operating systems might therefore not be
able to provide sufficient isolation, protecting not only the host operat-
ing system, but also the confidential data of the running applications.
The reason for the development of Qubes OS has been the per-
ceived unsustainability of the current security mechanisms, expressed
by its main developers, Joanna Rutkowska, Marek Marczykowski-
Górecki and Rafal Wojtczuk. They do not consider the so-called Secu-
rity by correctness [1] to be a feasible method to guarantee a reasonable
level of security in the contemporary operating systems and propose
a novel approach named Security by isolation.
Security by correctness, characterized by the fixing of known
security vulnerabilities, is considered to be a reactive approach, which
is often ineffective against unknown vulnerabilities. Moreover, the
less widespread vulnerabilities are frequently ignored, which leaves
systems susceptible to 0-day exploits.

Although Security by isolation is, to some extent, implemented
in contemporary operating systems, the authors of Qubes OS
express concerns [1] about its ability to provide effective isolation
between applications, making them prone to being compromised
in case any other application is exploited. Therefore, they suggest
a new architecture [2] called Qubes OS, adhering to the principles of
Security by isolation.
The thesis also focuses on the usability of the operating system. The
virtualization layer brings nontrivial issues, mainly concerning the
interaction with the input/output (I/O) devices. Elementary tasks,
such as interaction with an external disk storage, must be handled in
a secure way. Moreover, the typical user configuration often involves
multiple operating systems being installed on the same machine.
For these reasons, practical experience with the various user and
administrator tasks is described in a separate chapter.

2 Virtualization
Multiple definitions of the term virtualization are provided in [3, p. 3].
The following definition [4] has been chosen for the purposes of this
thesis.
Definition 1. Virtualization – is a technology that combines or divides
computing resources to present one or many operating environments using
methodologies like hardware and software partitioning or aggregation, partial
or complete machine simulation, emulation, time-sharing, and others.
The aspect of virtualization that is relevant to operating system
security is emphasized by defining virtualization [5] as a
“mechanism permitting a single physical computer to run sets of code
independently and in isolation from other sets.” The operating system
Qubes uses virtualization as a “framework of [sic] dividing the
resources of a computer into multiple execution environments” [6].
Two fundamental approaches [7, pp. 1-2] are employed in most
of the virtualization technologies.
Container-based virtualization1 is a lightweight virtualization ap-
proach using the host kernel to run multiple virtual environ-
ments, referred to as containers.
Hypervisor-based virtualization provides virtualization at the hard-
ware level. In contrast to container-based virtualization, a hy-
pervisor establishes complete virtual machines (VMs) on top of
the host operating system.2 Each virtual machine comprises
not only an application and its dependencies, but also an entire
guest OS along with a separate kernel.
The hypervisor-based virtualization approach can be implemented
using any of the following techniques.
1. Paravirtualization (PV)
2. Full Virtualization

1. also called OS-level virtualization [8, p. 4][9], or OS virtualization [8, p. 4], although
this terminology is often misused to refer to some mandatory access control
mechanisms, such as SELinux
2. in case of type 1 hypervisor, the VMs are established on top of the hypervisor
itself


2.1 Container-based virtualization


Container-based virtualization works at the operating system level,
thus allowing multiple applications to operate without redundantly
running other operating system kernels on the host. From the outside,
its containers look like normal processes that run on top of the kernel
shared with the host machine. They provide isolated environments
with necessary resources to execute applications. These resources
can be either shared with the host or installed separately inside the
container.

Figure 2.1: Comparison of virtual machines and containers [10]

2.2 Hypervisor
The hypervisor-based virtualization is usually implemented as a “layer
of software that provides the illusion of a real machine to multiple
instances of virtual machines.” [6] This layer is referred to as the
hypervisor, or virtual machine monitor (VMM).
Definition 2. Hypervisor – the software, firmware or hardware that
implements the virtualization and enforces the isolation of virtual machines.
A more elaborate definition is provided in [11].
Definition 3. A virtual machine monitor (VMM) – software for a computer
system that creates efficient, isolated programming environments that are
“duplicates” which provide users with the appearance of direct access to the real
machine environment. These duplicates are referred to as virtual machines.
A virtual machine is the environment created by the virtual
machine monitor.
Definition 4. Virtual machine (VM) – is a hardware-software duplicate of
a real existing computer system in which a statistically dominant subset of
the virtual processor’s instructions execute on the host processor in native
mode. [12]
The virtual machine monitor has multiple characteristics:
• Provides an environment for programs, which is “essentially
identical” with the original machine [13, p. 413],
• programs run in this environment show at worst only minor
decreases in speed [13, p. 413],
• it is in complete control of system resources3 [13, p. 413],
• enables concurrent execution of different operating systems on
the same hardware [11, p. 2],
• enforces isolation of untrusted applications [11, p. 2], or trusted
applications of a dubious (security-wise) quality,
• allows executing different versions of operating system applica-
tions [11, p. 2],
• can program the software for scalable computers [11, p. 2].
There are multiple types of hypervisors based on their level of
control over the system resources.

2.2.1 Type 1
Definition 5. Type 1 (native, bare metal or bare-metal) – runs directly on
the host’s hardware to control the hardware and to manage guest operating
systems. (see Figure 2.2)
The guest operating systems run on the level above the VMM.
Examples of type 1 hypervisors are Xen, Citrix XenServer, VMware
vSphere and Microsoft Hyper-V.

3. does not hold for the Type 2 (hosted) hypervisor


Figure 2.2: Hypervisor types [14]

2.2.2 Type 2
Definition 6. Type 2 (hosted) – runs as an application on a host operating
system and relies on the host OS for memory management, processor
scheduling, resource allocation, and hardware drivers. [11] (see Figure 2.2)
Only the host operating system has direct access to the underlying
hardware and is responsible for managing basic operating system
services. Examples are VMware Workstation, VirtualBox and KVM.

2.3 Hypervisor-based virtualization


2.3.1 Paravirtualization
In case of paravirtualization, the VMM provides the guest operating
system with an environment that is similar but not identical to the
environment preferred by the host. Therefore, the guest OS has to be
modified to run on the paravirtualized hardware.
Definition 7. Paravirtualization – a technique in which the guest operating
system is modified to work in cooperation with the VMM to optimize
performance. [15]
The benefit of this approach is “more efficient use of resources and
a smaller virtualization layer.” [15, p. 725]


2.4 Intel® VT-d


Intel® Virtualization Technology for Directed I/O (VT-d) [16] is an
extension to Intel® Virtualization Technology that claims to improve
security, reliability and performance of I/O devices in a virtualized
environment. The I/O requests from the guest VM are virtualized
by the hypervisor using the I/O-device virtualization (IOV) models –
emulation or paravirtualization. The main task of IOV models is to
ensure reliability and protection by proper confinement of the device
accesses to only the resources assigned to the device by the hypervisor.
Intel VT-d restricts direct memory access (DMA) of the I/O devices
to the assigned physical memory regions called domains [16]. This is a
hardware capability known as DMA-remapping. The VT-d hardware
logic in the chipset manages the communication between DMA-capable
I/O devices and the physical memory. The VT-d DMA-remapping
hardware logic is programmed by system software – in case of
virtualization, the system software is represented by the hypervisor.
In other cases, the native operating system is the system software. The
memory address from I/O is then translated to the correct physical
address. DMA-remapping then performs a check of whether the device
has the permissions required to access the physical address, based
on the information from the system software. The whole interaction is
illustrated in Figure 2.3.
The system software is able to create one or more DMA protection
domains and DMA-remapping enforces the permissions given by the
system software on this domain. The domain is defined as “an isolated
environment containing a subset of the host physical memory” [16].
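
Whether VT-d (an IOMMU) is actually present and enabled on a given
machine can be checked from a running Linux system; the following is
only a sketch, and the exact log wording differs between kernel and Xen
versions.

$ dmesg | grep -i -e DMAR -e IOMMU
$ sudo xl dmesg | grep -i iommu

The first command inspects the Linux kernel log; the second inspects the
Xen hypervisor log on a Xen-based system such as Qubes OS.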


Figure 2.3: Intel® VT-d diagram

3 Qubes OS
The authors of Qubes OS believe [17] there is considerably better
isolation between virtual machines than between processes in the
monolithic kernels of mainstream desktop operating systems. This is
a result of the substantially simpler interface between a virtual machine
and the hypervisor [17]. The hypervisor itself can be simpler as it does
not provide many services. Modern computers also provide hardware
virtualization support (VT-x, VT-d or Intel TXT), which allows the creation
of simplified hypervisor code and robust system configurations using
driver domains, special hardware-isolated containers for hosting
code that is prone to compromise.
The purpose of Qubes OS is not to be a proof-of-concept operating
system, but rather to “be of interest to various organizations,
commercial and government, which care about data security and are
willing to invest some effort into configuring Qubes desktops.” [18]
The Qubes OS utilizes existing open-source components, mainly
the Xen hypervisor, the K Desktop Environment (KDE) and the Xfce desktop
environment. The virtualization technology is used for two reasons
– its security isolation properties and the ability to reuse existing
software.
Each AppVM in the Qubes operating system is identified by a
name and a border color. When running an application within a
particular AppVM, the border of the application window contains the
name of the domain it is running in (in brackets), and the border has
the same color as is assigned to the particular domain. If another
application is run inside the same domain, then its border
also has the same color. An application with a particular color is not
isolated from other applications with the same color. However, all
the applications that run within a particular domain (marked with a
certain color) are isolated from all applications running in a different
domain (marked with another color).

3.1 Definitions
The terminology used in this thesis follows that given in [19], and the
most essential terms are defined in this section.


Domain A virtual machine (VM), i.e., a software implementation of a
machine (for example, a computer) that executes programs like
a physical machine.

Dom0 Domain Zero. Also known as the host domain, dom0 is the
initial domain started by the Xen hypervisor on boot. Dom0 runs
the Xen management toolstack and has special privileges relative
to other domains, such as direct access to most hardware.

DomU Unprivileged Domain. Also known as guest domains, domUs
are the counterparts to dom0. All domains except dom0 are
domUs. By default, most domUs lack direct hardware access.

AppVM Application Virtual Machine. Any VM which depends on a
TemplateVM for its root filesystem. In contrast to TemplateVMs,
AppVMs are intended for running software applications.

TemplateVM Template Virtual Machine. Any standalone VM which
supplies its root filesystem to other VMs, known as AppVMs. In
contrast to AppVMs, TemplateVMs are intended for installing
and updating software applications.

Standalone(VM) Standalone (Virtual Machine). In general terms, a
VM is described as standalone if and only if it does not depend
on any other VM for its root filesystem. (In other words, a
VM is standalone if and only if it is not an AppVM.) More
specifically, a StandaloneVM is a type of VM in Qubes which
is created by cloning a TemplateVM. Unlike TemplateVMs,
however, StandaloneVMs cannot supply their root filesystems
to other VMs. (Therefore, while a TemplateVM is a standalone
VM, it is not a StandaloneVM.)

Template-BasedVM Opposite of a Standalone(VM). A VM that
depends on a TemplateVM for its root filesystem.

NetVM Network Virtual Machine. A type of VM which connects
directly to a network and provides access to that network to
other VMs which connect to the NetVM. A NetVM called netvm
is created by default in most Qubes installations.

ProxyVM Proxy Virtual Machine. A type of VM which proxies
network access for other VMs. Typically, a ProxyVM sits between
a NetVM and a domU which requires network access.

FirewallVM Firewall Virtual Machine. A type of ProxyVM which is
used to enforce network-level policies (a.k.a. “firewall rules”).
A FirewallVM called firewallvm is created by default in most
Qubes installations.

DispVM Disposable Virtual Machine. A temporary AppVM which
can quickly be created, used, and destroyed.

PV Paravirtualization. An efficient and lightweight virtualization
technique originally introduced by the Xen Project and later
adopted by other virtualization platforms. Unlike HVMs,
paravirtualized VMs do not require virtualization extensions
from the host CPU. However, paravirtualized VMs require a
PV-enabled kernel and PV drivers, so the guests are aware of the
hypervisor and can run efficiently without emulation or virtual
emulated hardware.

HVM Hardware Virtual Machine. Any fully virtualized, or hardware-
assisted, VM utilizing the virtualization extensions of the host
CPU. Although HVMs are typically slower than paravirtualized
VMs due to the required emulation, HVMs allow the user to
create domains based on any operating system.

StandaloneHVM Any HVM which is standalone (i.e., does not
depend on any other VM for its root filesystem). In Qubes,
StandaloneHVMs are referred to simply as HVMs.

TemplateHVM Any HVM which functions as a TemplateVM by
supplying its root filesystem to other VMs. In Qubes,
TemplateHVMs are referred to as HVM templates.

PVH PV on HVM. To boost performance, fully virtualized HVM
guests can use special paravirtual device drivers (PVHVM or
PV-on-HVM drivers). These drivers are optimized PV drivers
for HVM environments and bypass the emulation for disk and
network IO, thus providing PV-like (or better) performance on
HVM systems. This allows for optimal performance on guest
operating systems such as Windows.

3.2 Architecture
Figure 3.1 shows a high-level overview of the Qubes operating
system architecture.

Figure 3.1: Qubes architecture overview [2, p.8]

3.2.1 Domain
Each application in Qubes OS is run in a domain. According to
[17], there are two types of domains. The first type of domain
is a SystemVM1,2. Network domain, storage domain and a special
privileged domain called Dom0 are all examples of SystemVMs.
These domains provide system-wide services and are common to all
installations of Qubes OS. The network domain isolates the exposed
networking code into an unprivileged virtual machine [17, p. 7]. This
effectively limits the impact of a compromise: the system would not be
compromised even if a vulnerability in the network domain is exploited
[17, p. 7]. The only possible consequence is a denial-of-service (DoS)
attack, as the connection would be terminated. However, the other VMs
of the user would remain intact [17, p. 7]. The second type of domain is
AppVM (virtual machine, domain), which runs on top of SystemVMs.
This domain is used to run one or more user applications.

1. also called a system virtual machine or system domain
2. for a definition see Section 3.1

Figure 3.2: Secure file system sharing among many AppVMs [2, p.10]
Each domain is given the fewest possible privileges, which means
that all domains except Dom0 are unprivileged. Dom0 is used to
provide some essential OS capabilities as well as secure graphical
user interface (GUI). It has almost the same amount of privileges as
the hypervisor itself and is the only domain whose compromise by
an attacker would give complete access to the system. Therefore,
it is separated from the outside network (which is handled by the
networking domain) and the amount of communication with other
AppVMs is limited to a minimum, which keeps the attack surface small.
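
For illustration, creating and using such an application domain from
dom0 can be sketched with the Qubes command-line tools; the domain
name, label and template below are examples only, and the exact options
may differ between Qubes releases.

$ qvm-create --template fedora-23 --label red personal
$ qvm-run personal firefox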

3.2.2 Disposable VMs


Disposable VMs are virtual machines created for the purpose of
executing a single application [20]. In comparison with a dedicated
AppVM, they are lightweight and therefore can be quickly created
and destroyed.


Figure 3.3: GUI subsystem design overview [2, p.25]

Figure 3.4: Networking overview [2, p.30]


The use case for a disposable VM starts when a user, working in
the “working” domain, receives a PDF e-mail attachment that might be
compromised. Opening such an untrusted attachment in
the working domain endangers the confidentiality of the system, as
it might expose sensitive information from this domain. Attacks on
integrity and availability are possible as well. Trust in
the sender in fact provides only a false sense of security, as the
e-mail itself might have been sent by malware without the sender's
knowledge [20].
The naive approach to overcoming the problem would be to open the
PDF file in an unprivileged domain. This would ensure the potential
malware would not get access to any confidential data. However,
the e-mail attachment might not be infected at all and might itself
contain confidential data. It is also possible that
the unprivileged domain has already been compromised. That would mean
that the confidential data would be available in a compromised
environment, which effectively compromises them as well [20].
Qubes OS introduces the disposable VM, a clean virtual machine
that would be created in this scenario only for the purpose of viewing
the PDF file. After the PDF is viewed, the VM is discarded. If the PDF
was malicious, it could only compromise the disposable VM, which
does not contain any sensitive data.
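
In practice, such a disposable VM can be started directly from the
AppVM that received the attachment; a minimal sketch using the standard
Qubes tooling (the file name is only an example):

$ qvm-open-in-dvm ~/Downloads/attachment.pdf

The viewer runs inside a freshly created DispVM, which is destroyed
after the window is closed.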

4 Performance profiling
Considering the fact there is an extra layer of indirection involved,
the potential performance deterioration must be determined and
assessed. The first part of this chapter therefore evaluates the extent
of performance deterioration on chosen desktop Linux distributions.
The distributions have been chosen to minimize the impact of other
factors, which could unintentionally influence and bias the results.

The testing environment

All of the tests have been performed on bare hardware, depicted in
Figure 4.4 and Figure 4.3. Considering the nature of the tested subjects,
virtualized hardware might have been detected by the testing utility.
Clean installations of Fedora 23 Workstation 64bit and Qubes OS
R3.1, including all updates released up to May 6, 2016, have been
used for testing. The Docker version used is 1.11.1, the latest stable
version, confined using SELinux on Fedora 23.
The BIOS settings had to be altered in order to enable
VT-x. Although Qubes OS does not utilize VT-x for PV guest
virtualization [21], it must be enabled in order to operate fully
virtualized AppVMs, such as Windows-based AppVMs used to run
Windows applications.

4.1 Methodology

For the purposes of performance evaluation I have chosen the Phoronix
Test Suite (PTS) benchmarking application for its comprehensive test
coverage, modularity, myriad test options and representative results,
including statistical and analytical options for correct interpretation of
results. Currently in version 6.4.0 Milestone 2, it offers tight integra-
tion with OpenBenchmarking.org, which stores discrete performance
Tests organized into Suites. A Suite is an XML file defining the set of
tests or suites, generally evaluating a chosen performance character-
istic, to be executed by the phoronix-test-suite executable. The tests
are downloaded on demand, which guarantees a small size of the
executable.
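
For illustration, the following commands list the Suites available on
OpenBenchmarking.org and execute one of them; the suite name below is
an example.

$ phoronix-test-suite list-available-suites
$ phoronix-test-suite benchmark pts/disk
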
For the purposes of this thesis, which were to demonstrate the
synthetic, as well as real-world, deterioration across multiple discrete
areas of performance under virtualization compared to bare metal, I have
chosen the list of Suites depicted in Figure 4.1.

Suite Tests
Disk Test Suite 13
Desktop Graphics 10
Memory Test Suite 3
Networking Test Suite 1
Kernel 27
Video Encoding 3
Linux System 27
Cryptography 4
CPU / Processor Suite 25
Summary 113

Figure 4.1: Test distribution inside the chosen Suites

Some of the Tests have failed to install for multiple reasons –
commonly because of a broken link, failure to compile due to missing
libraries, or an unsupported architecture. Where possible, appropriate
measures have been taken to repair the broken tests, but some tests,
usually requiring 32bit libraries or proprietary software, have been
omitted from the testing. One of the tests in desktop graphics, called
pts/qgears2, needed qmake to be installed. This command is installed
using

$ sudo dnf -y install qt-devel

however, a symbolic link to, or a copy of, the qmake executable file located
at /usr/bin/qmake-qt4 has to be created in /bin/, or in any other of
the system directories containing executable programs.

$ sudo ln -s /usr/bin/qmake-qt4 /bin/qmake


Fedora 23 running on bare metal has been used as the reference
configuration. The reason for this choice is that Qubes OS R3.1
employs a Fedora 20 template to build the dom0 domain, and a Fedora 23
template is the default for running the AppVMs. Therefore, the performance
inside an AppVM based on this template usually represents the
performance of an arbitrary application running under Qubes
OS R3.1. To eliminate any side effects and to improve the accuracy of
the tests, the choice of the kernel to be used on both operating systems
has been carefully examined.
Although some sources claim the performance overhead of a run-
ning SELinux system to approach seven percent [22], recent analyses
do not concur, and consider the overhead to be marginal [23] or “negli-
gible” [24]. Therefore, I have abstained from further evaluation of this
topic, which might be treated more appropriately in separate studies.
The tested combinations of operating system, kernel and SELinux
mode are summarized in Table 4.1.

                      kernel
OS              Fedora          Qubes
Fedora          enforcing       uncompiled
Qubes           —               uncompiled

Table 4.1: Tested combinations of OS, kernels and SELinux

Additionally, I have included the performance tests inside a Docker
container, running inside a Fedora 23 Workstation 64bit with SELinux
set to Enforcing. I have created a Dockerfile to build the Docker image
from the fedora:23 official image [25] on the Docker Hub. The Dockerfile
installs all the necessary dependencies, both required and optional,
and starts a shell for the user.
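
Assuming such a Dockerfile in the current directory, the image would be
built and entered roughly as follows (the image tag is illustrative, not
taken from the actual setup):

$ docker build -t pts-fedora23 .
$ docker run -it --rm pts-fedora23
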
Caution must be taken, however, when testing the file I/O perfor-
mance, as Docker uses a special type of filesystem called UnionFS
inside the Docker images. Not taking this fact into consideration,
the tests could have been misleading, testing the layered filesys-
tem performance instead of the real-world performance of a standard
persistent Linux filesystem.


For this reason, the Docker data volumes or Docker data volume
plugins should be used instead. This technique mounts an arbitrary
host directory into the given directory in the Docker container
directory tree. The data become persistent on the host machine and
become host-independent in case of using a Docker Engine data volume
plugin, such as Flocker [26].
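
A minimal sketch of this technique, with an illustrative host path and
the image tag from the previous example; the container-side path assumes
the tests run as root inside the container:

$ docker run -it --rm -v /home/user/pts-results:/root/.phoronix-test-suite pts-fedora23
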
Such ancillary tests could provide interesting results to compare
in further studies. This would require an identification of the most
disk I/O intensive directories used by the given pts/disk Suite. One
approach would be to confine the application with SELinux to the
home directory (~), provided it is being run as a standard Linux user,
and evaluate the AVC denials, if any. The most probable candidates
for I/O intensive computations are /tmp, /var and /usr.
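
If such a confinement experiment were conducted, the resulting AVC
denials could be inspected with the standard audit tooling, for example
(assuming auditd is running):

$ sudo ausearch -m avc -ts recent
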
Each of the four specialized benchmark runs took approximately
16 hours. Each of the three desktop graphics benchmark runs took
approximately 2 hours, and their detailed versions circa 5 hours.
According to the dmesg logs, the CPU has been throttled multiple
times during the tests, because of the high core temperature levels,
which might have influenced the test results. However, the room
temperature (25 °C) and the cooling mechanisms have been identical
in all of the tests; therefore, I found the conditions fair and do not
consider this a methodological flaw.
Because the performance requirements for desktop computing
differ from that of usual server applications, the performance tests
had to be designed appropriately. Therefore, the results have been
separated into two subsections - desktop graphics benchmarks and
specialized benchmarks.

4.1.1 Desktop graphics

Working inside the graphical user interface is the most common use
case for the ordinary desktop user, who is the target audience of
Qubes, according to its creator Joanna Rutkowska. Therefore, these
results should be emphasized in comparison with the more specialized
benchmarks, which are more relevant for server configurations.
The following is the list of the tested subjects.


mandatory access control
Fedora 23 with SELinux Enforcing running natively

containerization
SELinux Sandbox -X running on Fedora 23 with SELinux
Enforcing

paravirtualization
performed inside an arbitrary Qubes OS domain (AppVM) that
is based on the Fedora 23 template with SELinux uncompiled

The command used to initiate the testing of the containerization
method (under a regular Linux user) is demonstrated in Figure 4.2.

$ sandbox -X -M ~/home -T ~/tmp -t sandbox_net_t ...

Figure 4.2: Graphics tests inside the SELinux Sandbox -X environment

Docker has been omitted from the tests of desktop graphics
performance. The reason is that, even though it is possible to confine an
X application under Docker, it is not a typical use case for a Docker
container. Docker has not been designed to run X applications and the
practical experience with this approach is rather experimental, with
unpredictable results.
The hardware/software configuration of the three methods for
testing desktop graphics performance is depicted in Figure 4.3.
The first two tests [27], [28] have been chosen to measure the
performance of applications built with the Qt4 and GTK+ frameworks; the
last one [29] measures the performance of an open-source first-person
shooter game.

4.1.2 Specialized benchmarks


The testing subjects for the benchmarks targeting a specific perfor-
mance characteristic have been summarized in Table 4.1. The following
is a detailed description of the configurations referred to in Table 4.1.

native
native Fedora 23 with Qubes kernel with SELinux uncompiled

Desktop graphics benchmarks
fedora selinux-sandbox-x qubes

Processor Intel Core i5-2450M @ 3.10GHz (4 Cores) Intel Core i5-2450M @ 2.49GHz (4 Cores)
Motherboard HP 167C
Chipset Intel 2nd Generation Core Family DRAM
Memory 2 x 4096 MB DDR3-1333MHz Samsung 8192MB 2048MB
Disk 500GB Samsung SSD 850 97GB
Graphics Intel Gen6 Mobile (1300MHz) LLVMpipe
Audio IDT 92HD87B1/3
Network Realtek RTL8111/8168/8411 + Qualcomm Atheros AR9285 Wireless
OS Fedora 23
Kernel 4.4.8-300.fc23.x86_64 (x86_64) 4.1.13-9.pvops.qubes.x86_64 (x86_64)
Desktop GNOME Shell 3.18.5
Display Server X Server 1.18.3 X Server 1.18.3
OpenGL 3.3 Mesa 11.1.0 (git-525f3c2) 2.1 Mesa 11.1.0 (git-525f3c2) Gallium 0.4
File-System btrfs ext4
Screen Resolution 1600x900 1000x700 1600x900
Compiler GCC 5.3.1 20160406
System Layer Xen 4.6.0 Hypervisor
Compiler Details
- --build=x86_64-redhat-linux --disable-libgcj --disable-libunwind-exceptions --enable-__cxa_atexit --enable-bootstrap --enable-checking=release
--enable-gnu-indirect-function --enable-gnu-unique-object --enable-initfini-array --enable-languages=c,c++,objc,obj-c++,fortran,ada,go,lto
--enable-libmpx --enable-multilib --enable-plugin --enable-shared --enable-threads=posix --mandir=/usr/share/man --with-arch_32=i686 --with-isl
--with-linker-hash-style=gnu --with-tune=generic
Processor Details
- fedora, selinux-sandbox-x: Scaling Governor: intel_pstate powersave


Figure 4.3: Hw/sw configuration for graphics benchmarks

mandatory access control
native Fedora 23 with Fedora kernel with SELinux Enforcing

containerization
Docker running on Fedora 23 with SELinux Enforcing

paravirtualization
Qubes OS AppVM based on Fedora 23 template with SELinux
uncompiled

The SELinux Sandbox (and its -X version) has been omitted from the
specialized benchmarks. The reason is that it is no different, from a
performance perspective, from a system running SELinux in Enforcing
mode. The only difference between SELinux and SELinux Sandbox is
in the semantics – the SELinux Sandbox has been designed to confine
arbitrary, untrustworthy user-space applications, while SELinux itself
usually confines known system applications.
The detailed hardware/software configuration used for the
specialized benchmarks is depicted in Figure 4.4.

Specialized benchmarks
fedora-w-qubes-kernel-wo-selinux fedora-w-fedora-kernel-w-selinux docker-on-fedora-multiple-suites qubes-w-qubes-kernel-wo-selinux

Processor Intel Core i5-2450M @ 3.10GHz (4 Cores) Intel Core i5-2450M @ 2.49GHz (4 Cores)
Motherboard HP 167C
Chipset Intel 2nd Generation Core Family DRAM
Memory 2 x 4096 MB DDR3-1333MHz Samsung 8192MB 1536MB
Disk 500GB Samsung SSD 850 97GB
Graphics Intel Gen6 Mobile (1300MHz) (1300MHz) LLVMpipe
Audio IDT 92HD87B1/3
Network Realtek RTL8111/8168/8411 + Qualcomm Atheros AR9285 Wireless
OS Fedora 23
Kernel 4.1.13-9.pvops.qubes.x86_64 (x86_64) 4.4.8-300.fc23.x86_64 (x86_64) 4.1.13-9.pvops.qubes.x86_64 (x86_64)
Desktop GNOME Shell 3.18.5
Display Server X Server 1.18.3 Wayland Weston + SurfaceFlinger + GNOME Shell Wayland X Server 1.18.3
OpenGL 3.3 Mesa 11.1.0 (git-525f3c2) 2.1 Mesa 11.1.0 (git-525f3c2) Gallium 0.4
File-System btrfs ext4
Screen Resolution 1600x900
Compiler GCC 5.3.1 20160406
System Layer Xen 4.6.0 Hypervisor
Compiler Details
- --build=x86_64-redhat-linux --disable-libgcj --disable-libunwind-exceptions --enable-__cxa_atexit --enable-bootstrap --enable-checking=release --enable-gnu-indirect-function --enable-gnu-unique-object --enable-initfini-array
--enable-languages=c,c++,objc,obj-c++,fortran,ada,go,lto --enable-libmpx --enable-multilib --enable-plugin --enable-shared --enable-threads=posix --mandir=/usr/share/man --with-arch_32=i686 --with-isl --with-linker-hash-style=gnu --with-tune=generic
Disk Details
- fedora-w-qubes-kernel-wo-selinux: CFQ / relatime,rw,space_cache,ssd
- fedora-w-fedora-kernel-w-selinux: CFQ / relatime,rw,seclabel,space_cache,ssd,subvol=/root00,subvolid=258
- docker-on-fedora-multiple-suites: CFQ / relatime,rw,seclabel,space_cache,ssd,subvol=/root00/var/lib/docker/btrfs/subvolumes/c3a628efe5164f953fc3b645b03c128a995206f4c0364772724b71c879414bd0,subvolid=453
- qubes-w-qubes-kernel-wo-selinux: CFQ / data=ordered,discard,relatime,rw
Processor Details
- fedora-w-qubes-kernel-wo-selinux, fedora-w-fedora-kernel-w-selinux, docker-on-fedora-multiple-suites: Scaling Governor: intel_pstate powersave


Figure 4.4: Hw/sw configuration for specialized benchmarks

Desktop graphics benchmarks


fedora selinux-sandbox-x qubes

qgears2: XRender + Image Scaling 1988.47 1107.93 731.13


nexuiz: 1600 x 900 w/ HDR 26.60 5.14
gtkperf: GtkDrawingArea - Pixbufs 1.65 4.09 1.34
nexuiz: 1000 x 700 w/ HDR 7.90

Figure 4.5: Comparison of results from desktop graphics tests

4.2 Results

4.2.1 Desktop graphics

The test results of desktop graphics benchmarks are depicted in
Figure 4.5.
The first test, called pts/qgears2 [28], used the option to render
using the X Rendering Extension (XRender) and to conduct composition
and scaling of images. XRender has been optimized for 2D graphics,
contrary to OpenGL, although they are not completely orthogonal.
It provides rendering methods such as antialiasing and alpha blending.
The results in Figure 4.5 show, as expected, the native Fedora
performance to be superior to the others. The SELinux Sandbox -
X performance degradation has been considerable, with 55.7 % in
comparison with native Fedora. This is even more accentuated in the
second test [27]. The performance of Qubes OS has been moderately
slower (34 %) than the SELinux Sandbox -X, and considerably slower
(63.2 %) than the native Fedora, although this has not influenced the user
experience significantly.
The second test, named pts/gtkperf [27], measures the perfor-
mance of several different GTK operations. The option chosen for
this particular test was the combination of GtkDrawingArea with
Pixbufs [30]. The test uses the GTK2 widget GtkDrawingArea [31],
which mimics the creation of typical user elements, and should thus
represent the ordinary 2D desktop performance when interacting with
GUI applications.
The results confirmed the significantly diminished performance
of SELinux Sandbox -X, which has also been observed during
the usability evaluation in Chapter 5. The degradation has been
59.7 % in comparison with native Fedora, which rendered some more
demanding applications unusable. Even the common productivity
applications, like the browser, that SELinux Sandbox -X has been
designed to execute, have had a noticeably decreased responsiveness
to user input. This, however, has not prevented the user from completing
the required task.
This might be explained by the fact that a separate nested
Xephyr server [32] (an Xorg X11 server) has to be created to confine
each individual executed application. This is a resource-intensive
task, which might explain the worst results in the graphics benchmarks.
The third benchmark has been a 3D game named Nexuiz,
which “uses the DarkPlaces engine, which is a largely modified version
of the Quake engine with extra features such as High Dynamic Range
rendering and OpenGL 2.0 shaders.” [29]
Three of the 12 test results or test options have been missing
because they were not executed. To solve the problem, I have executed an
elaborate detailed test run, including all test options for each test and
test subject. The comparison of the detailed test results is available
at [33].

4.2.2 Specialized benchmarks


The test results of the specialized benchmarks are depicted in
Figure 4.6.
The most important bottleneck identified in the tests was certainly
I/O performance, which has been shown consistently across
numerous tests. The benchmark named pts/iozone [34] has been
run in 4 different settings and has shown a 12.4 %, 15.3 %, 15.4 %
and 92.2 % diminishment in comparison with native Fedora. The last
result deviates significantly and should probably be considered an
error in measurement. Nevertheless, even ignoring this deviation, the
I/O performance has been the worst of the four tested subjects.
The degradation can also be observed when running the test
pts/sqlite [35] on Fedora with the Qubes kernel instead of its own.
This could indicate that the problem is with the kernel configuration.
The Qubes kernel is only slightly older and has SELinux disabled (using
CONFIG_SECURITY_SELINUX_DISABLE=y), in comparison with
the Fedora kernel. The diminishment in this test has been 44.68 %,
relative to the mentioned Fedora running its default kernel.
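
The SELinux-related kernel configuration can be compared directly on
the running systems; a small sketch (the location of the config file may
differ between kernels and distributions):

$ grep SELINUX /boot/config-$(uname -r)
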
Compilebench is another benchmark that tries to “age a filesystem
by simulating some of the disk IO common in creating, compiling,
patching, stating and reading kernel trees. It indirectly measures how
well filesystems can maintain directory locality as the disk fills up and
directories age. This current test is setup to use the makej mode with
10 initial directories.” [36] The differences between Fedora with the Qubes
kernel and Qubes with the same kernel have been 43.73 %, 53.1 % and
54.1 %, respectively.
The other results were not as significant as the chosen ones and
could not be examined more thoroughly for lack of space.
However, they could be a subject of further studies, provided that the
missing results are obtained, even though this might prove impossible
for some tests for the reasons I mentioned in Section 4.1.


Specialized benchmarks
fedora-w-qubes-kernel-wo-selinux fedora-w-fedora-kernel-w-selinux docker-on-fedora-multiple-suites qubes-w-qubes-kernel-wo-selinux

aio-stress: Random Write 779.11 730.39


sqlite: Timed SQLite Insertions 193.53 101.17 153.44 182.77
dbench: 12 Clients 227.60 155.84 128.05
dbench: 128 Clients 798.63 430.94 374.37
dbench: 1 Clients 45.67 46.32 20.97
iozone: 4GB Read Performance 6097.67 5916.86 5644.45 460.11
iozone: 8GB Read Performance 532.59 529.11 529.23 463.74
iozone: 4GB Write Performance 460.12 475.87 458.52 402.85
iozone: 8GB Write Performance 485.82 485.44 475.03 410.53
compilebench: Test: Compile 651.20 616.77 443.51
compilebench: Test: Initial Create 142.54 108.49 105.68
compilebench: Test: Read Compiled Tree 577.68 535.99 150.76
unpack-linux: linux-2.6.32.tar.bz2 14.66 14.69 16.16
postmark: Disk Transaction Performance 4520 4360 3623 1506
qgears2: XRender + Image Scaling 1769.07 769.85
nexuiz: 1600 x 900 w/ HDR 25.90 26.75 5.01
openarena: 1280 x 1024 135.87 137.43
padman: 1600 x 900 166.33 166.67
urbanterror: 1280 x 1024 84.33 83.17
gtkperf: GtkDrawingArea - Pixbufs 1.24 0.45
ramspeed: Integer Add 11759.22 11504.48 11835.59 11588.18
ramspeed: Integer Copy 11437 11624.35 12034.58 11530.20
ramspeed: Integer Scale 11808.29 11334.25 12030.85 11734.67
ramspeed: Floating-Point Add 12737.81 13149.28 13394.91 13175.10
stream: Type: Copy 11687.02 11655.79 11807.89
stream: Type: Scale 11632.19 11579.28 11759.85
stream: Type: Triad 12995.38 12920.24 13066.73
stream: Add 13003.63 12893.91 13069.88
stream: Copy 11685.37 11655.79 11811.11
stream: Scale 11632.19 11579.28 11759.85
stream: Type: Add 13003.63 12893.91 13069.88
hmmer: Pfam Database Search 28.30 27.31 28.14 28.78
mafft: Multiple Sequence Alignment 14.84 13.08
gmpbench: Total Time 2831.10 2847.20 2782.10
byte: Dhrystone 2 23356676.20 23046775.07 23411762.07
byte: Integer Arithmetic 1 1 1
tscp: AI Chess Performance 913094 913699 894733
john-the-ripper: Test: Blowfish 1805 1945 1831 1876
john-the-ripper: Traditional DES 3146333 3366667 3226000 3304667
john-the-ripper: Test: Traditional DES 3259000 3368000 3236000 3198833
john-the-ripper: Blowfish 1768 1942 1783 1802
john-the-ripper: Test: MD5 25447 26150 25141 25549
ttsiod-renderer: Phong Rendering With Soft-Shadow Mapping 62.14 62.52
x264: H.264 Video Encoding 56.18 56.48
graphics-magick: HWB Color Space 120 119
graphics-magick: Local Adaptive Thresholding 69 68
graphics-magick: Sharpen 61 63
graphics-magick: Resizing 110 110
himeno: Poisson Pressure Solver 1304.82 1300.76 1260.05
compress-7zip: Compress Speed Test 7430 7387 6861
c-ray: Total Time 78.55 75.35 79.83 77.79
compress-pbzip2: 256MB File Compression 27.57 26.67 27.68 28.39
smallpt: Global Illumination Renderer; 100 Samples 260 253 264 259
bullet: Test: Convex Trimesh 1.83 1.92
compress-gzip: 2GB File Compression 16.83 16.94 17.02 19.35
compress-lzma: 256MB File Compression 418.76 418.02 414.19 428.75
crafty: Elapsed Time 97.65 97.67 97.26 100.82
dcraw: RAW To PPM Image Conversion 71.02 71.13 73.93
encode-flac: WAV To FLAC 9.69 9.84 10.86
encode-mp3: WAV To MP3 16.19 16.21 16.18 16.65
ffmpeg: H.264 HD To NTSC DV 18.97
mencoder: AVI To LAVC 27.57 29.10
minion: Solitaire 101.85 100.49
sudokut: Total Time 23.07 27.00 23.85
openssl: RSA 4096-bit Performance 191.10 185.42 187.97 190.03
pybench: Total For Average Test Times 2627 2634 2671 2643
apache: Static Web Page Serving 10525.44
sunflow: Global Illumination + Image Synthesis 7.39 7.29 8.28
dbench: 48 Clients 365.72 402.51

Figure 4.6: Comparison of results from the selected specialized tests

5 Usability evaluation

5.1 Installation
In spite of the announcement of UEFI support in Qubes OS R3.1,
the UEFI boot has not been possible on the tested machine. I have
examined the problem extensively with the developers and made
numerous attempts to correct the issue [37], [38], but it turned out
that the problem was a vendor bug and the solution would require a
rebuild of the whole Qubes OS. This would, however, undermine the
purpose of the tests, which was to evaluate the latest stable version of
Qubes OS.
Nevertheless, I built the trunk (development) version of Qubes
to see if the issue had been fixed upstream, but the problem
persisted. Another solution I tried was to boot using the rEFInd Boot
Manager instead of GRUB2, but it has been met with the same problem.
Needless to say, the UEFI boot on my machine works without any
issue in case of using the Fedora 23 Workstation 64bit live USB.
Finally, the hard drive partition table had to be changed
manually inside the GNOME Partition Editor Live Image to the GUID
Partition Table (GPT), and at least one partition had to be created
in order to force Qubes OS to install on a GPT partition table instead
of a master boot record (MBR).
A biosboot partition is required in order to boot from a GPT drive
in legacy mode. Usually, the size of this partition is 1 MB; however,
the installation fails in case a biosboot partition smaller than 2 MB is
created.
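
The same preparation could also be performed from a terminal with
parted; the following destructive sketch assumes the target drive is
/dev/sda:

$ sudo parted /dev/sda mklabel gpt
$ sudo parted /dev/sda mkpart biosboot 1MiB 3MiB
$ sudo parted /dev/sda set 1 bios_grub on
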
The partition layout used is depicted in Table 5.1. The Btrfs filesystem
has been used for the root partition and encryption has been
disabled to match the other tested subjects in Chapter 4.

biosboot 2 MB
/boot 500 MB
swap 20 GB
/ 150 GB

Table 5.1: Disk layout for Qubes OS


5.2 Dual-boot with Fedora 23


My initial approach to creating a machine that allows booting both
Qubes OS and Fedora 23 has failed [39], and an alternative
approach had to be considered [40]. Although the dual-boot
has eventually not been required, the methods mentioned are sound
and reportedly correct.
The problem with my initial approach was that I configured
Fedora to boot using UEFI, while Qubes used the Legacy BIOS
boot. The BIOS had been configured to enable the UEFI boot,
which was an experimental setting on this configuration and possibly
increased the number of booting problems.
The solution [39], [40] would be to boot both operating
systems using a common method, in this case Legacy BIOS (because
Qubes OS R3.1 does not support the UEFI boot on this machine):
install the bootloader of the first operating system on the partition
(not the drive) and then, after installing the second operating system,
set GRUB2 to chainload the other operating system [40], regenerate
the GRUB2 configuration and install the bootloader on the drive.
$ grub2-mkconfig -o /boot/grub2/grub.cfg
$ grub2-install /dev/sda

Figure 5.1: Installing GRUB2 to enable dual-boot
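
The chainload entry itself can be added as a custom GRUB2 menu entry;
the following is only a sketch, and the partition reference (hd0,gpt2) is
a placeholder that has to match the partition holding the other system's
bootloader.

$ sudo tee -a /etc/grub.d/40_custom <<'EOF'
menuentry "Fedora 23 (chainload)" {
    insmod part_gpt
    insmod chain
    set root=(hd0,gpt2)
    chainloader +1
}
EOF

After adding the entry, the grub2-mkconfig command from Figure 5.1 has
to be re-run to regenerate the menu.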

After reboot, the GRUB2 menu should offer the user the option to boot
either of the operating systems.

5.3 Copying files from external devices


To minimize the network traffic, the installed test files, containing
approximately 16 GB of data, have been stored on an external USB 3.0
hard drive and subsequently copied into the AppVM inside
Qubes OS for the Qubes performance testing.
an opportunity to assess the feasibility, user friendliness, as well as
performance of file transfer from an external I/O device. No rigorous
methodology has been chosen to assess the performance, therefore,
the results must be interpreted cautiously.


After connecting the external hard drive, the device is correctly
detected and the user is notified (in the K Desktop Environment)
using a pop-up message. However, the device has yet to be attached,
using the GUI tool called Qubes VM Manager, to a particular chosen
domain, which is usually one of the AppVMs or a domain template.
After this step, the device (not a partition) is attached to the
domain and can be listed using blkid or lsblk. Its filesystem is yet to
be mounted, however, which could be potentially misleading for an
average user. In spite of this, the integration with the Nautilus file
manager works flawlessly; the device is visible and mounted on demand.
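
Inside the AppVM, the attached device can be inspected and, if necessary,
mounted manually; a minimal sketch in which the device node /dev/xvdi1
and the mount point are assumptions that depend on how the drive was
attached:

$ lsblk
$ sudo mkdir -p /mnt/usb
$ sudo mount /dev/xvdi1 /mnt/usb
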
Nonetheless, it must be emphasized that the GUI method of
transferring larger files (over 500 MB) has been consistently met
with desktop unresponsiveness and eventually could not be
completed. This unresponsiveness has been localized only to the domain
in question and thus has not resulted in a complete desktop freeze.
Nevertheless, a different method had to be chosen to enable the
large file transfer.
I have used the rsync utility, a standard Linux tool, to
perform the file copy with real-time progress reporting during
the transfer of the directory. The command used is depicted in
Figure 5.2.

$ rsync -ah --info=progress2 pts ~/.phoronix-test-suite/

Figure 5.2: Transferring a directory from USB3.0 HDD into an AppVM

The maximum speed during the transfer has been 25 MBps, which
concurs with the maximum data transfer on the same hardware
configuration [41] using the USB2.0 port instead of the USB3.0 port.
This could be explained by missing USB 3.0 support in Qubes OS R3.1,
which has been confirmed by the developers. The average speed has
been oscillating between 15 MBps and 20 MBps.

6 Conclusion and further work
The results have generally concurred with the expected trend towards
performance degradation with increasing level of abstraction and
isolation, with a notable exception in the graphics performance results.
To summarize the desktop graphics results obtained in Sec-
tion 4.2.1, I would like to emphasize the significantly diminished per-
formance of the SELinux Sandbox in the graphics performance benchmarks.
The SELinux Sandbox -X performance degradation has been consider-
able, with 55.7 % in comparison with native Fedora. This is even more
accentuated in the second test [27]. The performance of Qubes OS has
been only moderately slower (34 %) than the SELinux Sandbox -X, but
considerably slower (63.2 %) than the native Fedora, although this has
not influenced the user experience significantly.
The specialized benchmark results, which are covered in detail in
Section 4.2.2, have indicated a significant decline in I/O performance.
Some test results show a 14.3 % decrease in I/O performance, compared
to the native Fedora. Another noteworthy result has been obtained by
benchmarking the Fedora distribution running on an identical kernel
to Qubes OS. The tests (pts/sqlite, pts/compilebench) have shown even
larger performance diminishment, which has been on average 44.68 %
and 50.31 %, respectively.
These have unexpectedly shown SELinux Sandbox -X, the con-
tainerization technology for confining X applications, to be inferior
to the paravirtualized Qubes OS. This result has been supported em-
pirically and incurred a qualitative difference in user experience, ren-
dering some GUI applications unusable. For common applications,
such as the browser, the graphical performance of the applications
run inside the Qubes AppVM has outperformed the containeriza-
tion approach and, disregarding the longer application start-up
caused by virtual machine (domain) initialization, could be considered
comparable to native execution.
The main bottleneck of the novel virtualization-based Qubes OS
was the I/O performance. The test results have consistently exhibited
diminishing performance in many areas related to disk storage, which
encouraged me to conduct an experiment with large file data transfer
from an external USB 3.0 data storage. It has been demonstrated
that the data transfer has not only failed to reach the USB 3.0
theoretical or real-world throughput, but also barely reached the USB
2.0 maximum speeds, measured on the same machine with the Ubuntu
15.10 operating system.
Although Qubes OS is said to be dedicated to an average
computer user, it must be concluded that some common user
actions have demanded a qualified administrator intervention. An
example is the handling of an external USB storage device, which is
detected automatically after connection. Even though the integration with
the Nautilus file manager has been fully functional and the device
was detected and connected on demand inside the domain (provided
it has been assigned to the domain inside the Qubes VM Manager
GUI), the attempt to transfer larger files led to unresponsiveness of
the affected domain and demanded its termination or restart. The file
transfer had to be completed inside the terminal, with manual
mounting of the device, which might be considered a nontrivial user
interaction.
Another sacrifice in user experience has been problems with
multimedia, namely the inability to watch online videos inside the
browser. Although the sound has been working, the position inside the
video could not be controlled and the window could not be
expanded to full screen. This is the same problem I experienced
when using the SELinux Sandbox -X confined Firefox browser, which
has been running inside the Xephyr server. The problem persisted
even after different browsers, Google Chrome and Chromium, had
been installed.
The problems might have been caused by incompatible hardware, which is a
common issue with Qubes OS. Nevertheless, although some recommended
technologies, such as Intel VT-d or a motherboard with native UEFI support,
have been absent from the tested system, the system used an Intel graphics
card with the open-source drivers. This is the recommended graphics card and
is supposed to be the most thoroughly tested, as the competing vendors fail to
provide the necessary cooperation with the open-source community.
Nevertheless, it must be concluded that the desktop environment has been
remarkably stable, and I have suffered no data loss or integrity corruption on
the host hard drive during the evaluation. It has been reasonably responsive,
disregarding the longer time penalty when
starting a domain. However, once the domain had been started, the penalty
became negligible.
I have made numerous proposals for further studies throughout the text, and I
would like to make two more propositions, besides the proposal for further
evaluation of and investigation into the test results, which could not be
performed within the scope of this thesis.
Firstly, the lack of hardware support for the Intel VT-d technology has been
repeatedly claimed, by numerous Qubes developers including Joanna Rutkowska,
to considerably reduce the security of a host running Qubes OS, but no proof
of concept has yet been presented. The technology is also claimed to improve
performance, and it might be interesting to see how much it would change the
test results, solve the existing issues, or improve the user experience.
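As a starting point for such an experiment, the presence of an active VT-d
(IOMMU) unit can be checked with standard tools; the exact log messages are
hardware dependent and the commands below are only a sketch.

    # in Qubes dom0: search the Xen hypervisor log for VT-d / IOMMU messages
    xl dmesg | grep -i -e 'vt-d' -e iommu
    # on a plain Linux installation: look for DMAR / IOMMU initialization messages
    dmesg | grep -i -e dmar -e iommu
    # Qubes also provides a hardware compatibility report that covers VT-d support
    qubes-hcl-report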
Secondly, an interesting decision has been made by the Qubes developers to
compile but not enable SELinux inside the application domains. This decision
is claimed not to influence the confidentiality and integrity of the user data
in any way. Even when SELinux is enforcing (with the targeted policy),
subjects (usually processes) running in the unconfined_t domain are not
restricted by SELinux policies. In a desktop operating system for casual
users, the majority of applications usually run in the unconfined_t domain,
but server applications, such as Apache, should always be confined for
security purposes. How could a Qubes domain benefit from turning on SELinux?
How could this influence the security of multiple applications running inside
the same domain in case one application gets compromised? These questions
might be answered in subsequent articles.
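As a starting point for such an investigation, the SELinux state inside a
TemplateVM or AppVM could be inspected and switched to enforcing mode roughly
as follows. Whether the Qubes templates ship the required selinux-policy
packages is an assumption that would have to be verified first.

    # inspect the current SELinux state and the context of the current shell
    sestatus
    getenforce
    id -Z
    # enable SELinux persistently and request a filesystem relabel on the next boot
    sudo sed -i 's/^SELINUX=.*/SELINUX=enforcing/' /etc/selinux/config
    sudo touch /.autorelabel
    # or, if a policy is already loaded, switch to enforcing mode temporarily
    sudo setenforce 1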

Bibliography
[1] J. Rutkowska, The three approaches to computer security, Blogger, 2008. [Online]. Available: http://blog.invisiblethings.org/2008/09/02/three-approaches-to-computer-security.html (visited on 2016-05-19).
[2] J. Rutkowska and R. Wojtczuk, “Qubes os architecture,” Invisible Things Lab Tech Rep, p. 54, 2010. [Online]. Available: https://www.qubes-os.org/attachment/wiki/QubesArchitecture/arch-spec-0.3.pdf (visited on 2016-05-19).
[3] R. Y. Ameen and A. Y. Hamo, “Survey of server virtualization,” ArXiv preprint arXiv:1304.3557, 2013. [Online]. Available: http://arxiv.org/pdf/1304.3557v1 (visited on 2016-05-19).
[4] S. N. T.-c. Chiueh, “A survey on virtualization technologies,” [Online]. Available: http://www.ecsl.cs.sunysb.edu/tr/TR179.pdf (visited on 2016-05-19).
[5] J. White and A. Pilbeam, “A survey of virtualization technologies with performance testing,” ArXiv preprint arXiv:1010.3233, 2010. [Online]. Available: http://arxiv.org/pdf/1010.3233v1 (visited on 2016-05-19).
[6] R. J. Andresen, “Virtual machine monitors,” CERN OpenLab for Data grid application, 2004. [Online]. Available: https://openlab-mu-internal.web.cern.ch/openlab-mu-internal/03_Documents/3_Technical_Documents/Technical_Reports/2004/vmm.pdf (visited on 2016-05-19).
[7] T. Bui, “Analysis of docker security,” ArXiv preprint arXiv:1501.02967, 2015. [Online]. Available: http://arxiv.org/pdf/1501.02967v1 (visited on 2016-05-19).
[8] C. Takemura and L. S. Crawford, The book of Xen: A practical guide for the system administrator. No Starch Press, 2009. (visited on 2016-05-19).
[9] E. Reshetova, J. Karhunen, T. Nyman, and N. Asokan, “Security of os-level virtualization technologies,” in Secure IT Systems, Springer, 2014, pp. 77–93. [Online]. Available: http://arxiv.org/pdf/1407.4245v1 (visited on 2016-05-19).

[10] P. Galbraith. (2014-06-05). Docker: Containers for the masses, [Online]. Available: patg.net/containers,virtualization,docker/2014/06/05/docker-intro (visited on 2015-10-15).
[11] U. A. Force, “Analysis of the intel pentium’s ability to support a secure virtual machine monitor,” 2000. [Online]. Available: https://www.usenix.org/legacy/events/sec2000/full_papers/robin/robin.pdf (visited on 2016-05-19).
[12] R. P. Goldberg, “Architectural principles for virtual computer systems,” DTIC Document, Tech. Rep., 1973. [Online]. Available: www.dtic.mil/cgi-bin/GetTRDoc?AD=AD0772809 (visited on 2016-05-19).
[13] G. J. Popek and R. P. Goldberg, “Formal requirements for virtualizable third generation architectures,” Communications of the ACM, vol. 17, no. 7, pp. 412–421, 1974. [Online]. Available: http://cs.nyu.edu/courses/fall14/CSCI-GA.3033-010/popek-goldberg.pdf (visited on 2016-05-19).
[14] W. Commons. (2011). Hypervisor. File: Hyperviseur.png, [Online]. Available: https://commons.wikimedia.org/wiki/File:Hyperviseur.png (visited on 2015-10-30).
[15] P. B. Galvin, G. Gagne, and A. Silberschatz, Operating System Concepts, 9th. New York, NY, USA: John Wiley & Sons, Inc., 2013, isbn: 1118093755, 9781118093757. (visited on 2016-05-19).
[16] D. Abramson, “Intel virtualization technology for directed i/o,” Intel technology journal, vol. 10, no. 3, pp. 179–192, 2006. [Online]. Available: https://software.intel.com/en-us/articles/intel-virtualization-technology-for-directed-io-vt-d-enhancing-intel-platforms-for-efficient-virtualization-of-io-devices (visited on 2016-05-19).
[17] J. Rutkowska and R. Wojtczuk, “Qubes os architecture,” Invisible Things Lab Tech Rep, p. 54, 2010. [Online]. Available: http://sistemas.unla.edu.ar/sistemas/sls/ls-4-sistemas-operativos/pdf/SO-L-Qubes-Arch-spec-0.3.pdf (visited on 2016-05-19).
[18] Qubes os: An operating system designed for security, 2011. [Online]. Available: http://www.tomshardware.com/reviews/qubes-os-joanna-rutkowska-windows,3009-4.html (visited on 2016-05-19).

[19] (2016). Glossary of qubes terminology, [Online]. Available: https://www.qubes-os.org/doc/glossary/ (visited on 2016-01-01).
[20] J. Rutkowska. (2010). Disposable vms, [Online]. Available: http://blog.invisiblethings.org/2010/06/01/disposable-vms.html (visited on 2016-01-01).
[21] Can i install qubes on a system without vt-x? 2016. [Online]. Available: https://www.qubes-os.org/doc/user-faq/#can-i-install-qubes-on-a-system-without-vt-x/ (visited on 2016-05-19).
[22] (2006). The unofficial selinux faq, [Online]. Available: http://www.crypt.gen.nz/selinux/faq.html#WWW.14 (visited on 2016-05-08).
[23] M. Larabel. (2015-16). Does selinux have much of a performance impact on fedora 23? [Online]. Available: https://www.phoronix.com/scan.php?page=news_item&px=Fedora-23-SELinux-Impact (visited on 2016-05-08).
[24] R. Vaidyanath. (2010). Real-time performance of a selinux-enabled redhawk™ linux® system, [Online]. Available: https://www.concurrent.com/wp-content/uploads/2015/04/real-time-performance-of-selinux-enabled-redhawk.pdf (visited on 2016-05-08).
[25] Official docker builds of fedora, 2016. [Online]. Available: https://hub.docker.com/_/fedora/ (visited on 2016-05-19).
[26] About flocker, 2016. [Online]. Available: https://clusterhq.com/flocker/introduction/ (visited on 2016-05-19).
[27] Gtkperf [pts/gtkperf], 2016. [Online]. Available: http://openbenchmarking.org/test/pts/gtkperf (visited on 2016-05-19).
[28] Qgears2 [pts/qgears2], 2016. [Online]. Available: http://openbenchmarking.org/test/pts/qgears2 (visited on 2016-05-19).
[29] Nexuiz [pts/nexuiz], 2016. [Online]. Available: http://openbenchmarking.org/test/pts/nexuiz (visited on 2016-05-19).
[30] Pixbufs, 2010. [Online]. Available: https://developer.gnome.org/gdk2/stable/gdk2-Pixbufs.html (visited on 2016-05-19).

[31] Gtkdrawingarea, 2010. [Online]. Available: https://developer.gnome.org/gtk2/stable/GtkDrawingArea.html (visited on 2016-05-19).
[32] Cool things with selinux... introducing sandbox -x, 2009. [Online]. Available: http://danwalsh.livejournal.com/31146.html (visited on 2016-05-19).
[33] M. Páleník, Detailed comparison of desktop graphics, Brno, 2016. [Online]. Available: http://openbenchmarking.org/result/1605186-HA-GRAPHICSD01 (visited on 2016-05-19).
[34] Iozone [pts/iozone], 2016. [Online]. Available: http://openbenchmarking.org/test/pts/iozone-1.8.0 (visited on 2016-05-19).
[35] Sqlite [pts/sqlite], 2016. [Online]. Available: http://openbenchmarking.org/test/pts/sqlite-1.9.0 (visited on 2016-05-19).
[36] Compile bench [pts/compilebench], 2016. [Online]. Available: http://openbenchmarking.org/test/pts/compilebench-1.0.1 (visited on 2016-05-19).
[37] ——, (2016-19). Uefi install hangs on hp probook 4730s, [Online]. Available: https://github.com/QubesOS/qubes-issues/issues/1857 (visited on 2016-05-14).
[38] M. Marczykowski-Górecki. (2015-8). Support for efi boot, [Online]. Available: https://github.com/QubesOS/qubes-issues/issues/794 (visited on 2016-05-14).
[39] M. Páleník. (2016-05-1). Qubes does not boot on fedora/qubes dual-boot machine, [Online]. Available: https://groups.google.com/forum/#!topic/qubes-users/OQAI77pMbHk (visited on 2016-05-14).
[40] M. Lee. (2014-23). Dual-booting qubes and ubuntu with encrypted disks, [Online]. Available: https://micahflee.com/2014/04/dual-booting-qubes-and-ubuntu-with-encrypted-disks (visited on 2016-05-14).
[41] M. Páleník. (2016-10). Do i need usb 3.0 on my router? [Online]. Available: https://palenikmartin.wordpress.com/2016/02/10/do-i-need-usb-3-0-on-my-router (visited on 2016-05-14).
