Virtualization Final
Cameron Trujillo
Based on the information introducing the computer lab scenario in the instructions
document for this assignment, the 50 users currently working in the computer lab are using
something of a traditional setup. There is local infrastructure in the environment the computer
lab is in, such as servers along with printers and other accessories. The users likely have access
to these extra peripherals over the computer lab’s Local Area Network (LAN) or are otherwise
connected to anything that is unable to communicate over a LAN, perhaps through a technology
like Bluetooth.
More specifically for the purposes of this project, though, it is perhaps most important to
discuss the fact that all 50 personal computers in this computer lab environment are expected to
locally download, install, and run all software required by the organization. Given that this
computer lab is advanced enough to include multiple Microsoft-based server computers, it is
reasonably safe to assume that it exists to execute more intensive, or at least more specialized,
work. Making each individual machine responsible for managing software, and files in general,
creates a lot of room for error.
of room for error. From the information given in the instructions document, and personal
background experience in a computer lab, I can infer that it would be important for all users to be
running the same software, and the same versions of said software, complete with the all-
important latest security and functionality updates. In addition to the software executable files, it
is also reasonable to assume that each user, and by extension machine, in the computer lab might
need access to specific other files, perhaps important data. Even with the use of a group policy
tool, it would prove rather difficult to attempt to keep all 50 machines up to date simultaneously.
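The version-drift problem described above can be sketched with a small script. This is purely illustrative – the machine names, application names, and version numbers are invented, and a real lab would pull its inventory from a management tool rather than a hard-coded dictionary.

```python
# Hypothetical sketch: detecting version drift across lab machines.
# Machine names and version data are invented for illustration.

EXPECTED = {"office_suite": "16.0.2", "antivirus": "5.3.1"}

def find_drift(inventory):
    """Return {machine: [apps whose installed version differs from EXPECTED]}."""
    drift = {}
    for machine, apps in inventory.items():
        stale = [app for app, ver in EXPECTED.items() if apps.get(app) != ver]
        if stale:
            drift[machine] = stale
    return drift

inventory = {
    "LAB-PC-01": {"office_suite": "16.0.2", "antivirus": "5.3.1"},  # up to date
    "LAB-PC-02": {"office_suite": "16.0.1", "antivirus": "5.3.1"},  # stale office
    "LAB-PC-03": {"office_suite": "16.0.2"},                        # missing antivirus
}

print(find_drift(inventory))
```

Even a script this small hints at the real problem: it only reports drift; someone still has to visit or remotely manage each of the 50 machines to fix it.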
Project Description
In the previous section, it was described how each of the 50 individual machines in the
computer lab is expected to have up-to-date software installed locally at all times. To the
unacquainted, this may seem to be the only option, and the issues that are associated with this
setup may be seen as nothing more than an unfortunate reality. However, in the given scenario, a
better option is available through a hypervisor – software that allows a user or administrator to
create, configure, manage, and use virtual instances of specific operating systems. Put simply,
“virtualization is a technology that has been
widely applied for sharing the capabilities of physical computers by splitting the resources
among OSs” (Rodriguez-Haro et al., 2012, p.1). By using the hypervisor to allocate certain
resources from the main computer to the “virtual” computer, including RAM and drive space,
users can essentially create “another computer” within their main computer that they can access
and use.
While there are many uses and reasons why an administrator may implement
virtualization, in this scenario the administrator of the computer lab is interested in creating
virtual machines for the 50 users of the
computer lab to access remotely using their individual machines. The administrator is interested
in creating a sort of a local cloud environment. It was mentioned that there are Microsoft-based
servers already in the network of this computer lab. Because the administrator wants to run
everything off of these servers, depending on their level, they may need to be upgraded. In
addition, the administrator may be interested in creating backups for their servers in case of
failure so that the users can still get work done. In that case, there may need to be duplicate
servers purchased and configured.
Scope of Work
The first step in approaching this project is to assess the capability and state of the
servers in the existing computer lab infrastructure. As mentioned briefly in
the previous section, the administrator is specifically interested in hosting all software
applications that users will run in virtual machines running on the servers in the infrastructure.
Naturally, supporting 50 users simultaneously with solid quality of service is a difficult task
that would demand exceptionally capable machines. So, a thorough analysis of the existing server
infrastructure would be required in order to provide a specific list of any upgrades or
replacements needed. Since there is already a working network in the computer lab, the lab
should already have all of the network interface hardware it would need for the new setup –
namely network switches and cables. Also, considering it is the
servers that will be running software applications going forward, it should not be necessary to
change out any of the 50 user machines, provided there is nothing failing at this time.
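A rough capacity check like the one this analysis would perform can be sketched as follows. All of the hardware figures here are assumptions for illustration, not measurements from the actual lab servers.

```python
import math

# Hypothetical sizing sketch: how many servers would be needed to host 50 VMs?
# Every figure below is an assumption, not data from the lab.

VM_RAM_GB = 4        # assumed RAM allocated per user VM
VM_VCPUS = 2         # assumed vCPUs per user VM
HOST_RAM_GB = 256    # assumed usable RAM per server (after hypervisor overhead)
HOST_CORES = 48      # assumed physical cores per server
OVERSUB = 2          # assumed vCPU-to-core oversubscription ratio

def servers_needed(vm_count):
    """Servers required, constrained by whichever resource runs out first."""
    by_ram = math.ceil(vm_count * VM_RAM_GB / HOST_RAM_GB)
    by_cpu = math.ceil(vm_count * VM_VCPUS / (HOST_CORES * OVERSUB))
    return max(by_ram, by_cpu)

print(servers_needed(50))
```

Plugging real server specifications into a calculation like this would turn the "thorough analysis" above into a concrete purchase list.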
After the possible new hardware is integrated into the network infrastructure at the
computer lab, the next steps are all software. The administrator in the computer lab will create an
adequate number of virtual machines, split up across the different servers as they see fit
depending on the server hardware. Then, the administrator could install the required software for
the needs of the organization – or, the administrator could execute these steps in reverse order,
installing all the software on one virtual machine and then using a replication tool such as
Microsoft’s Hyper-V Replica to duplicate that one properly configured virtual machine any
number of times. Finally, the administrator will share access to the virtual machines with each of
the 50 users.
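The "configure once, then replicate" approach can be illustrated with a small sketch. Real cloning operates on disk images through the hypervisor's own tooling; this toy version just models VM configurations as dictionaries, and the naming scheme is an assumption.

```python
import copy

# Illustrative sketch of "configure one golden VM, then duplicate it".
# Real tools (e.g. Hyper-V export/import) work on disk images, not dicts.

GOLDEN = {
    "os": "Windows 11",
    "ram_gb": 4,
    "software": ["office_suite", "antivirus"],  # invented application names
}

def clone_vms(base, count):
    """Produce `count` independent copies of the golden configuration."""
    vms = []
    for i in range(1, count + 1):
        vm = copy.deepcopy(base)        # each clone owns its own config
        vm["name"] = f"LAB-VM-{i:02d}"  # naming scheme is an assumption
        vms.append(vm)
    return vms

fleet = clone_vms(GOLDEN, 50)
print(len(fleet), fleet[0]["name"], fleet[-1]["name"])
```

The design benefit is the same one described above: software is installed and verified exactly once, and every clone inherits that known-good state.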
In previous sections, I briefly discussed what is likely to be the primary reason the
organization behind the computer lab has decided to move forward with a virtualization-based
solution for the lab. This is, of course, ease of device management. Even with the use of a great
Microsoft PC group policy tool, the task of getting 50 individual machines configured for a
specific set of software is something of an unruly task. By concentrating the task of configuration
to a few select pieces of hardware that will be running virtual machines, the administrator’s
workflow will likely become significantly more efficient.
Beyond ease of administration, though, there are many more reasons why a computer lab
might move to a virtualization-based setup. Chief among these secondary reasons is one of the
primary reasons why virtualization was
popularized to begin with. This feature I refer to is the enhanced security of a virtualization-
based setup.
In a traditional client-server environment, where the clients (user PCs) run software on
the local hardware and servers mainly exist to store data and run more intense processes, a
single point of critical importance naturally forms on the server itself. That is, just one piece of
hardware running just one operating system carries out a multitude of essential functions for the
given organization, and if someone were interested in attacking said organization, crippling its
operations could be as simple as taking down a single server.
This single point of failure exists in any traditional
setup, such as the one that has been introduced and described for the computer lab thus far in this
document. However, when introducing virtualization to that traditional setup described in the
previous paragraph, a particular opportunity comes to light for the administrator to make the life
of a potential hacker or attacker exponentially more difficult. With the use of multiple virtual
machines running on a single piece of server hardware, the systems administrator can split the
core operations of the server, including holding sensitive data, across what are effectively
completely separate and isolated computers. This means that, in the event that a cyber-attacker is
able to bypass every other security precaution in place and gains access to what they think is an
all-important server, what they will find is that they’ve gained access to just a small fraction of
the server’s functionality. Practically, “An isolated VM can have no unauthorized interaction
with other non-hypervisor software running on the real machine”, so it is much safer to run
sensitive operations inside isolated virtual machines.
This cyber-attacker scenario is a great segue into another important factor that makes
virtualization technologies such a popular choice to implement for systems administrators. This
is reliability. It is a common suggestion inside and outside the Information Technology field to
“always have a plan B”. Before and after the popularization of virtualization technologies, many
administrators opt to have duplicate servers at the ready to deploy when and if a server in use
goes down for any number of reasons. While this is still a common recommendation in a
virtualization-based setup, virtualization also allows the administrator to create backup machines
on a much smaller scale as virtual machines. Commonly, administrators will implement what’s
known as a failover cluster environment. This means that many virtual machines are working
together to execute a function or provide a service. Notably, these machines should be capable of
running their designated process plus a little extra. This way, if one virtual machine in the cluster
goes down, the remaining machines can work together to fill in for the function of the downed
machine. Virtualization provides the organization with heavily increased reliability of service.
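The headroom requirement described above ("their designated process plus a little extra") can be expressed as a simple check. The capacities and load figures below are invented for illustration.

```python
# Sketch of the failover-cluster headroom rule: if one node fails, the
# surviving nodes must be able to absorb its load. Figures are invented.

def survives_single_failure(node_capacity, node_load, nodes):
    """True if any one node's load fits into the spare capacity of the rest."""
    spare_per_node = node_capacity - node_load
    return spare_per_node * (nodes - 1) >= node_load

# e.g. 3 nodes each carrying load 60 out of capacity 100:
# spare = 40 per survivor, 2 survivors -> 80 of headroom for a 60 load
print(survives_single_failure(100, 60, 3))
```

This is why cluster nodes "should be capable of running their designated process plus a little extra": without that margin, the math above fails and the cluster cannot actually tolerate a loss.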
Project Implementation
As discussed, the hardware side of this project will depend heavily on certain information that
was not provided in the instructions document for this project plan. Assessing the existing
hardware will therefore be the first step in completing this project. If
the servers and other hardware already available in the computer lab are not quite up to snuff to be
running so many virtual machines with a decent failover setup, then the process of scoping out
new hardware that fits the organization’s budget, acquiring said hardware, and implementing it
into the existing physical local area network will be a major portion of this project.
Beyond the hardware, the next step in project implementation is, naturally, software. As
explained earlier, at this step, the administrator will create the virtual machines they wish for
users to access to complete daily tasks in the organization. Then, the administrator will need to
provide the users with access to the virtual machines running on the servers in the computer lab.
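Provisioning that access could be as simple as handing each user a connection file pointing at an assigned virtual machine. The sketch below generates minimal .rdp file contents; the hostnames, domain suffix, and user-to-VM naming scheme are all assumptions for illustration.

```python
# Hypothetical sketch: mapping each of the 50 users to an assigned VM and
# generating a minimal .rdp connection file. Hostnames are invented.

def rdp_file(vm_host):
    # Minimal settings understood by the Windows Remote Desktop client
    return f"full address:s:{vm_host}\nscreen mode id:i:2\n"

def assign_vms(users):
    """Deterministically pair user N with VM N (an assumed naming scheme)."""
    return {user: f"LAB-VM-{i:02d}.lab.local"
            for i, user in enumerate(users, start=1)}

users = [f"user{i:02d}" for i in range(1, 51)]
assignments = assign_vms(users)
print(assignments["user01"], len(assignments))
```

In practice the administrator might instead use a connection broker so assignments are not fixed, but a static one-to-one mapping is the simplest scheme to reason about.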
The final major software step is implementing failover and disaster recovery. Failover was
introduced in the previous section. In
this project, multiple backup virtual machines should be created in case of failure of one or more
of them. In addition, the organization should likely purchase entire backup servers in case of
failure of one of the servers that is in use. This is to protect uptime in the organization. In the
event of server hardware failure, users could simply switch over to using virtual machines
running on a backup server, as opposed to twiddling their thumbs for weeks or months while a
replacement is sourced and configured. For disaster recovery,
the computer lab should set up cloud backup of sensitive and important data. While backup
servers are a great way to protect uptime in case of server hardware failure, the organization
should also consider a plan of action for a true disaster, such as fire or tornado. A cloud backup
would keep critical data safe and recoverable even if the entire lab were destroyed.
Project Team
In the instructions document, there was not any information provided about a team to
execute this operation, besides the number of users for the eventual completed computer lab
project, which was disclosed at 50 users. Thus, for this brief section, I will attempt to lay out a
basic staffing plan that could be adapted to whatever resources the organization has available.
As for the planning and management of the project, I would recommend for the
organization to bring on the system administrator or system administrators for the eventual
completed lab, along with one or a small group of users for the eventual project. Together, this
would create a very practical project management team and would hopefully produce a
completed computer lab that is something of a happy medium between ease of administration
and quality of the user experience.
As for the actual implementation of the hardware and configuration of hypervisor and
virtualization software, I believe the team should consist of the system administrator or system
administrators that will be running the computer lab as well as an outside expert on virtualization
technology. Ideally, this outside virtualization expert would guide the system administrators
through the setup and deployment of the virtualization setup. This way, not only would there be a
guarantee that the setup is configured correctly, but ideally, the system administrators would
learn a lot about virtualization in the process and be better equipped to manage the computer lab
going forward.
Timeline for Implementation
As with any successful project in any field, IT or otherwise, the first step is always a
great plan. Of course, this document itself would be considered part of the planning phase for the
computer lab project. However, this is really only the beginning, setting some basic guidelines
for the project so that it may get off the ground. In a scenario more grounded in reality, it would
be necessary for whatever project management or project planning team the organization has
created to come together and decide on more of the details of the given project. On the topic of
the project team in the previous section, I suggested this project management/project planning
team to consist of both the system administrators of the computer lab and some of the eventual
users of the completed lab. Once a solid plan has been developed alongside the guidelines of this
document, then the project can advance to the next major step – and the first major deliverable.
While a solid plan could feasibly be developed in a single meeting, management should probably
allow a few days to one week for this phase of the project.
As mentioned a few times across this document so far, the second major step, and thus
the first major deliverable, on the timeline for the computer lab virtualization revamp project
is acquiring and configuring hardware – namely, servers up to the task of supporting 50 or more
virtual machines simultaneously (along with their other functions), plus backup servers for
those servers. As previously discussed, the instructions document fails to go into any sort of
detail regarding the state and quality of the existing server hardware in the computer lab
environment. So, it is entirely possible that the existing infrastructure in the computer lab is
enough to support the new virtualization-based setup. However, if it is not, then the organization
can expect the scouting of the new server hardware as well as purchase, delivery, and
implementation into the infrastructure to take between one and three weeks depending on
shipping.
At this point in the project, multiple, perhaps new, technically capable server computers
should have been implemented into the local area network infrastructure in the computer lab.
Since hardware is handled at this point in the project, it is time to move on to the set-up and
configuration of software. In the project planning phase, the project managers should have
defined exactly what the users should be doing on the client machines and what pieces of
software would be required to complete the given tasks effectively. These pieces of software
should be acquired and compiled in a singular digital location. Next, the system administrators
must choose a hypervisor tool for use on the servers as well as some sort of software solution
that would allow the users in the computer lab to access and use the virtual machines created and
maintained by the hypervisor software. This could be as simple as tools already present in
Microsoft Windows, or the project management team may have opted for an alternative
third-party solution.
Whatever choice of hypervisor and user access software is made by the
project management team, a recommendation was made in the previous section about a certain
outside virtualization expert to assist the project managers in the configuration of the software
tools described thus far. Of course, if there is already someone in the organization, or even on
this specific project, who would already call themselves an expert in virtualization technology, then it
would be completely unnecessary to pull in some sort of outside specialist. The important thing
here is for there to be someone who is confident they know what they’re doing involved in the
software configuration so that they can educate those who will be covering the day-to-day
maintenance and administration tasks in the computer lab. This will be the final deliverable of
the project, which is a fully synthesized, operable, and capable computer lab in the virtualization
configuration described in the instructions document. Management should expect this to take one
to two weeks.
Resources Required
The specific resources required for the successful completion of this computer lab project
have been addressed at significant length already over the course of this document. Namely, the
hardware purchases required along with the human resources required have been explained in
dedicated sections in this document prior to this one. Some of the reasoning behind the resource
decisions made in this document has been explained as well.
However, for the sake of completion, this section will be formatted as something of a resource
for the finance side of this project – or, perhaps a section for those associated with this project to
easily reference the tools and materials required to make this project successful at a glance.
Hardware
As discussed multiple times thus far during this document, it is entirely possible that the
servers that the instructions document claims are already in use at the computer lab are plenty
reliable and technically advanced enough to support more than fifty virtual machines running
and being accessed simultaneously. However, in the event they aren’t, it will be necessary to
purchase new server computer hardware. Additionally, if the organization is at all concerned
with data security and the security of uptime for the users, the organization should be thinking
about purchasing backup server computer hardware as well, ideally obtaining a complete
duplicate set for the “A team” of servers, or the ones that will be running to begin with. While I
am admittedly not the most acquainted with the modern server computer market, I would
imagine such a hardware setup would be rather costly. On top of the server computers
themselves, there are multiple accessories the project managers should be considering. Of
course, network cabling as well as power cables and the like would be imperative. On top of
that, perhaps the project managers should consider surge protection and/or UPS systems for
further protection of the uptime of the service being provided by the servers.
Software
The first major piece of software worth discussing as a required resource for the
successful completion of the computer lab project is, in fact, many pieces of software – whatever
the organization needs for the users to complete tasks effectively. The instructions document failed
to disclose the real-world operations of the computer lab. This could be a very advanced high
school or university computer lab, it could be a call center, or it may be something of a data entry
and billing department in a hospital. I could go on. The bottom line here is, since multiple virtual
machines are going to be created that should house everything the users need for their daily tasks,
the full list of required applications must be defined and acquired.
The second category of software required for the successful completion of the project is
virtualization technology itself. Most obviously, this is the hypervisor or hypervisor-adjacent
software that allows the administrator to create, allocate resources to, and run virtual machines.
This could be anything
from Hyper-V to VMware, or another option. Another software tool that will be integral to the project
is some sort of software solution that allows users to access and use the virtual machines. This
could simply be the Microsoft RDP tools built into Windows, or the administrators may opt for a
third-party remote access solution.
Human Resources
The people required to complete the project would mostly be contained in the project
planning and management team, which, for a project this size, would likely only be 2-5 people.
These people would also take on other responsibilities over the length of the project, such as
hardware hookup and software configuration. Outside of this team, there would really only be
the possible
virtualization specialist that was introduced as a good idea to train the system administrators.
Budget Requested
Similar to the previous section, the different cost areas of the computer lab project have
already been described in this document. Here, I will briefly review the resources in this
project that may require spending and offer a short discussion of what each may cost.
1. New Hardware
Again, it is possible that the organization can simply use existing servers. In such a
scenario, this part could cost a whopping $0. However, in the event existing
infrastructure is not up to snuff, the organization would need to purchase new servers and
new backup servers for those servers, which would likely come out to several thousand
dollars.
2. New Software
Again, this is dependent fully on the organization and software managers’ preference.
While the function of this computer lab is unclear, in 2023, many computer operations can be
completed by free and open-source software. By extension of this statement, the organization
could create, manage, and run virtual machines all completely for free. So, again, if the
organization is willing to make compromises, this could also total just $0. However, this is
unlikely if any of the required applications are commercial.
3. Human Resources
My proposal for the project management team was people who already work for the
organization. So, they would only be getting paid their normal salary to work on this project.
The only money the organization may be spending on people is the proposed assistance from
a virtualization specialist. Due to their extensive expertise, this expert would likely charge
between $50 and $100 per hour – but they would only be needed for a couple days of work.
Testing
In an ideal world, all systems, including the ones discussed for implementation for this
computer lab project, would just work exactly as expected. However, as many are painfully
well-acquainted with, this image is not a good representation of the reality of many systems, IT or
otherwise. Dealing with virtualization specifically, “Accepting the reality we must admit,
machines were never designed with the aim to support virtualization” (Nanda & Chiueh, 2005,
p.3).
At all steps during the execution of the project plan that was laid out and described in the
“Timeline for Implementation” section, there should be testing to ensure the steps done were
done successfully. I did not delve into this in prior sections because such testing is commonly
assumed when completing projects in the IT field. For example, when building a computer, it is
intelligent and often recommended to check the functionality of the motherboard and core
components before mounting them in the case. Regardless, there should be testing done at each
step of this project.
More specifically, after all steps have been completed and the computer lab is just about
“open for business” with the new virtualization-based setup, the project team including system
administrators and users should conduct a formal stress-test. Stress tests are also very common in
IT projects such as the one that is the subject of this document. In this specific scenario, this
stress test would likely consist of all 50 user machines attempting to log into their assigned VM
and complete some sort of task. A successful stress test would be if all 50 user machines were
able to complete their task with no problem. In addition, even if the test goes well, it may be
smart for the systems administrators to schedule a controlled downtime of one or more VMs to
stress-test the failover configuration as well.
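The 50-user stress test could be sketched as follows. The session function here is a stand-in – a real test would drive actual logins from the 50 client machines – but the structure of firing all sessions concurrently and counting successes is the same.

```python
import concurrent.futures

# Sketch of the 50-user stress test: run 50 simulated "log in and do a task"
# jobs concurrently and count successes. simulated_session is a placeholder
# for a real RDP login driven from a client machine.

def simulated_session(user_id):
    # A real implementation would connect to the user's assigned VM and
    # perform a representative task; here we simply report success.
    return f"user{user_id:02d}", True

def stress_test(user_count=50, workers=50):
    with concurrent.futures.ThreadPoolExecutor(max_workers=workers) as pool:
        results = dict(pool.map(simulated_session, range(1, user_count + 1)))
    passed = sum(results.values())
    return passed, user_count

passed, total = stress_test()
print(f"{passed}/{total} sessions succeeded")
```

A pass criterion of "all 50 sessions succeed" matches the definition of a successful stress test given above; any shortfall identifies exactly which simulated users failed.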
Naturally, it will also be important to educate users on the new setup. Notably, users will
need to log into the VM, as opposed to accessing software locally like they’re used to.
Depending on the user base, this could be as easy as a 30-minute class or a sheet of paper
containing instructions taped to each machine. If the users happen to be small children or other
groups needing extra support, more hands-on training may be required.
Maintenance
Echoing the daydreams of an ideal world on display in the previous section, it would be
similarly amazing to just set up any system once and have it just work, forever. Unfortunately,
this, too, is little more than a pipe dream. In the real world, in order for systems like the
virtualization setup for the computer lab to stay effective over time, certain maintenance is
required.
In the modern technical landscape, cybersecurity is one of the utmost concerns for any
organization. Even the smallest mom-and-pop businesses may have sensitive information such as
bank account and credit card information that opportunist hackers and attackers may be
interested in fraudulently accessing. The first, and sometimes only, line of defense against these
attacks is the anti-hacker and anti-malware technology built into many pieces of modern
software. One example of this is Microsoft Windows, the world’s most popular operating
system, which releases security patches on a regular basis to keep the user base up to date on
their defenses against known threats. Many software vendors similarly patch known
vulnerabilities through regular software updates. This is one of the main reasons why it is so
important to keep every piece of software in the lab up to date.
With knowledge of the system administrators’ day-to-day schedules limited, I can only
arbitrarily suggest a weekly window for applying updates. This could perhaps be scheduled after
business hours on Fridays, or whatever day may be most convenient for the organization.
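The weekly after-hours patch window suggested above can be computed mechanically. The Friday 18:00 cutoff is an assumption; any day and time could be substituted to suit the organization.

```python
import datetime

# Sketch of the weekly after-hours patch window: find the next Friday at
# 18:00 from a given moment. The 18:00 "after business hours" cutoff is
# an assumption, not a requirement from the scenario.

def next_patch_window(now):
    days_ahead = (4 - now.weekday()) % 7   # Monday=0, so Friday is 4
    candidate = (now + datetime.timedelta(days=days_ahead)).replace(
        hour=18, minute=0, second=0, microsecond=0)
    if candidate <= now:                   # already past this Friday 18:00
        candidate += datetime.timedelta(days=7)
    return candidate

print(next_patch_window(datetime.datetime(2023, 11, 6, 9, 0)))  # a Monday morning
```

An administrator could wire a helper like this into a scheduler so that VM snapshots are taken just before each window, making it easy to roll back a bad patch.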
Conclusion
When this project begins, the computer lab will be in a quite traditional, and thus,
arguably outdated setup where users are expected to configure their own machines, and keep
local software up to date. By moving to a setup where virtual machines are hosted on one or
more servers and the users simply use their personal machines to access these virtual ones, the
burden of device management is drastically reduced.
Aside from that benefit, though – with a setup based in virtualization technology, the
computer lab has the opportunity to experience greatly improved security through system
isolation. In addition, the computer lab has the opportunity to adopt a much more robust
approach to failover, effectively increasing the uptime security of the computer lab
exponentially. This project will absolutely take time, money, dedication, and education to
complete effectively. However, in the end, the computer lab technical environment and its users
will be far better off.
“Virtualization has been revolutionizing the ways in which IT is developed” for many
years already (Yu et al., 2018, p.3). It’s time for the computer lab to make the change.
References
Uhlig, R., Neiger, G., Rodgers, D., Santoni, A. L., Martins, F. C., Anderson, A. V., Bennett,
S. M., Kägi, A., Leung, F. H., & Smith, L. (2005). Intel virtualization technology. Computer,
38(5), 48–56.
Rodríguez-Haro, F., Freitag, F., Navarro, L., Hernández-Sánchez, E., Farías-Mendoza, N.,
Guerrero-Ibáñez, J. A., & González-Potes, A. (2012). A summary of virtualization techniques.
Procedia Technology, 3, 267–272.
Mergen, M. F., Uhlig, V., Krieger, O., & Xenidis, J. (2006). Virtualization for high-performance
computing. ACM SIGOPS Operating Systems Review, 40(2), 8–11.
Nanda, S., & Chiueh, T. (2005). A survey on virtualization technologies. RPE Report, 142.
Stony Brook University.
Yu, F. R., Liu, J., He, Y., Si, P., & Zhang, Y. (2018). Virtualization for distributed ledger
technology (vDLT). IEEE Access, 6.