Assignment in Ethics 2
Engineers shall hold paramount the safety, health, and welfare of the public.
a. If engineers' judgment is overruled under circumstances that endanger life or property,
they shall notify their employer or client and such other authority as may be appropriate.
b. Engineers shall approve only those engineering documents that are in conformity with
applicable standards.
c. Engineers shall not reveal facts, data, or information without the prior consent of the client
or employer except as authorized or required by law or this Code.
d. Engineers shall not permit the use of their name or associate in business ventures with
any person or firm that they believe is engaged in fraudulent or dishonest enterprise.
e. Engineers shall not aid or abet the unlawful practice of engineering by a person or firm.
f. Engineers having knowledge of any alleged violation of this Code shall report thereon to
appropriate professional bodies and, when relevant, also to public authorities, and
cooperate with the proper authorities in furnishing such information or assistance as may
be required.
In the last several decades, considerable effort has been devoted to defining and
addressing the ethical issues that arise in the context of biomedical experimentation. Central
ethical concerns that have been identified in such research include the relationship of risks to
benefits and the ability of research subjects to give informed and voluntary prior consent.
Assurance of adequate attention to these issues has normally been achieved by review of
research protocols by an independent body, such as an Institutional Review Board (IRB). For
example, in the United States, institutions engaging in biomedical research and receiving
Public Health Service research funds are subject to strict federal governmental guidelines for
such research, including review of protocols by an IRB, which considers the risks and benefits
involved and the obtaining of informed consent of research subjects. To a large degree, this
is a model which has come to be applied to scientific research on human subjects in
democratic societies around the world (Brieger et al. 1978).
Although the shortcomings of such an approach have been debated—for example, in a
recent Human Research Report, Maloney (1994) says some institutional review boards are
not doing well on informed consent—it has many supporters when it is applied to formal
research protocols involving human subjects. The deficiencies of the approach appear,
however, in situations where formal protocols are lacking or where studies bear a superficial
resemblance to human experimentation but do not clearly fall within the confines of academic
research at all. The workplace provides one clear example of such a situation. Certainly, there
have been formal research protocols involving workers that satisfy the requirements of risk-
benefit review and informed consent. However, where the boundaries of formal research blur
into less formal observations concerning workers’ health and into the day-to-day conduct of
business, ethical concerns over risk-benefit analysis and the assurance of informed consent
may be easily put aside.
As one example, consider the Dan River Company “study” of its workers’ exposure to cotton
dust at its Danville, Virginia, plant. When the US Occupational Safety and Health
Administration’s (OSHA) cotton dust standard went into effect following US Supreme Court
review in 1981, the Dan River Company sought a variance from compliance with the standard
from the state of Virginia so that it could conduct a study. The purpose of the study was to
address the hypothesis that byssinosis is caused by micro-organisms contaminating the
cotton rather than by the cotton dust itself. Thus, 200 workers at the Danville plant were to be
exposed to varying levels of the micro-organism while being exposed to cotton dust at levels
above the standard. The Dan River Company applied to OSHA for funding for the project
(technically considered a variance from the standard, and not human research), but the
project was never formally reviewed for ethical concerns because OSHA does not have an
IRB. Technical review by an OSHA toxicologist cast serious doubt on the scientific merit of
the project, which in and of itself should raise ethical questions, since incurring any risk in a
flawed study might be unacceptable. However, even if the study had been technically sound,
it is unlikely to have been approved by any IRB since it “violated all the major criteria for
protection of subject welfare” (Levine 1984). Plainly, there were risks to the worker-subjects
without any benefits for them individually; major financial benefits would have gone to the
company, while benefits to society in general seemed vague and doubtful. Thus, the concept
of balancing risks and benefits was violated. The workers’ local union was informed of the
intended study and did not protest, which could be construed to represent tacit consent.
However, even if there was consent, it might not have been entirely voluntary because of the
unequal and essentially coercive relationship between the employer and the employees.
Since Dan River Company was one of the most important employers in the area, the union’s
representative admitted that the lack of protest was motivated by fear of a plant closing and
job losses. Thus, the concept of voluntary informed consent was also violated.
Fortunately, in the Dan River case, the proposed study was dropped. However, the questions
it raises remain and extend far beyond the bounds of formal research. How can we balance
benefits and risks as we learn more about threats to workers’ health? How can we guarantee
informed and voluntary consent in this context? To the extent that the ordinary workplace
may represent an informal, uncontrolled human experiment, how do these ethical concerns
apply? It has been suggested repeatedly that workers may be the “miner’s canary” for the
rest of society. On an ordinary day in certain workplaces, they may be exposed to potentially
toxic substances. Only when adverse reactions are noted does society initiate a formal
investigation of the substance’s toxicity. In this way, workers serve as “experimental subjects”
testing chemicals previously untried on humans.
Some commentators have suggested that the economic structure of employment already
addresses risk/benefit and consent considerations. As to the balancing of risks and benefits,
one could argue that society compensates hazardous work with “hazard pay”—directly
increasing the benefits to those who assume the risk. Furthermore, to the extent that the risks
are known, right-to-know mechanisms provide the worker with the information necessary for
an informed consent. Finally, armed with the knowledge of the benefits to be expected and
the risks assumed, the worker may “volunteer” to take the risk or not. However, “voluntariness”
requires more than information and an ability to articulate the word no. It also requires
freedom from coercion or undue influence. Indeed, an IRB would view a study in which the
subjects received significant financial compensation—“hazard pay”, as it were—with a
sceptical eye. The concern would be that powerful incentives minimize the possibility for truly
free consent. As in the Dan River case, and as noted by the US Office of Technology
Assessment,
(t)his may be especially problematic in an occupational setting where workers may perceive
their job security or potential for promotion to be affected by their willingness to participate in
research (Office of Technology Assessment 1983).
If so, cannot the worker simply choose a less hazardous occupation? Indeed, it has been
suggested that the hallmark of a democratic society is the right of the individual to choose his
or her work. As others have pointed out, however, such free choice may be a convenient
fiction since all societies, democratic or otherwise,
have mechanisms of social engineering that accomplish the task of finding workers to take
available jobs. Totalitarian societies accomplish this through force; democratic societies
through a hegemonic process called freedom of choice (Graebner 1984).
Thus, it seems doubtful that many workplace situations would satisfy the close scrutiny
required of an IRB. Since our society has apparently decided that those fostering our
biomedical progress as human research subjects deserve a high level of ethical scrutiny and
protection, serious consideration should be given before denying this level of protection to
those who foster our economic progress: the workers.
It has also been argued that, given the status of the workplace as a potentially uncontrolled
human experiment, all involved parties, and workers in particular, should be committed to the
systematic study of the problems in the interest of amelioration. Is there a duty to produce
new information concerning occupational hazards through formal and informal research?
Certainly, without such research, the workers’ right to be informed is hollow. The assertion
that workers have an active duty to allow themselves to be exposed is more problematic
because of its apparent violation of the ethical tenet that people should not be used as a
means in the pursuit of benefits to others. For example, except in very low risk cases, an IRB
may not consider benefits to others when it evaluates risk to subjects. However, a moral
obligation for workers’ participation in research has been derived from the demands of
reciprocity, i.e., the benefits that may accrue to all affected workers. Thus, it has been
suggested that “it will be necessary to create a research environment within which workers—
out of a sense of the reciprocal obligations they have—will voluntarily act upon the moral
obligation to collaborate in work, the goal of which is to reduce the toll of morbidity and
mortality” (Murray and Bayer 1984).
Whether or not one accepts the notion that workers should want to participate, the creation
of such an appropriate research environment in the occupational health setting requires
careful attention to the other possible concerns of the worker-subjects. One major concern
has been the potential misuse of data to the detriment of the workers individually, perhaps
through discrimination in employability or insurability. Thus, due respect for the autonomy,
equity and privacy considerations of worker-subjects mandates the utmost concern for the
confidentiality of research data. A second concern involves the extent to which the worker-
subjects are informed of research results. Under normal experimental situations, results
would be available routinely to subjects. However, many occupational studies are
epidemiological, e.g., retrospective cohort studies, which traditionally have required no
informed consent or notification of results. Yet, if the potential for effective interventions exists,
the notification of workers at high risk of disease from past occupational exposures could be
important for prevention. If no such potential exists, should workers still be notified of findings?
Should they be notified if there are no known clinical implications? The necessity for and
logistics of notification and follow-up remain important, unresolved questions in occupational
health research (Fayerweather, Higginson and Beauchamp 1991).
Given the complexity of all of these ethical considerations, the role of the occupational health
professional in workplace research assumes great importance. The occupational physician
enters the workplace with all of the obligations of any health care professional, as stated by
the International Commission on Occupational Health and reprinted in this chapter:
Occupational health professionals must serve the health and social well-being of the workers,
individually and collectively. The obligations of occupational health professionals include
protecting the life and the health of workers, respecting human dignity and promoting the
highest ethical principles in occupational health policies and programmes.
In addition, the participation of the occupational physician in research has been viewed as a
moral obligation. For example, the American College of Occupational and Environmental
Medicine’s Code of Ethical Conduct specifically states that “(p)hysicians should participate in
ethical research efforts as appropriate” (1994). However, as with other health professionals,
the workplace physician functions as a “double agent”, with the potentially conflicting
responsibilities that stem from caring for the workers while being employed by the corporation.
This type of “double agent” problem is not unfamiliar to the occupational health professional,
whose practice often involves divided loyalties, duties and responsibilities to workers,
employers and other parties. However, the occupational health professional must be
particularly sensitive to these potential conflicts because, as discussed above, there is no
formal independent review mechanism or IRB to protect the subjects of workplace exposures.
Thus, in large part it will fall to the occupational health professional to ensure that the ethical
concerns of risk-benefit balancing and voluntary informed consent, among others, are given
appropriate attention.
Privacy, security and trust are intertwined: privacy preservation and security provision rely
on trust (e.g., one will allow only those whom one trusts to enter one’s zone of
inaccessibility; one will not feel secure unless one trusts the security provider). Violation of
privacy constitutes a risk and, thus, a threat to security. Law provides a resolution when
ethics cannot (e.g., ethics knows that stealing is wrong; the law punishes thieves); ethics
can provide context to law (e.g., law allows trading for the purpose of making a profit, but
ethics provides input into ensuring trade is conducted fairly). Privacy breaches disturb trust
and run the risk of diluting or losing security; they are a show of disrespect to the law and a
violation of ethical principles.
Data privacy (or information privacy or data protection) concerns the access, use and
collection of data, and the data subject’s legal right to those data.
Though different cultures place different values on privacy, making it impossible to define a
single, universal value, there is broad consensus that privacy has an intrinsic, core and
social value. Hence, a privacy approach that embraces the law, ethical principles, and
societal and environmental concerns is possible despite the complexity of and difficulty in
upholding data privacy.
The need for data privacy protection is also urgent because demand for it comes from many
directions. Information protection becomes an essential information security function,
helping to develop and implement strategies that ensure data privacy policies, standards,
guidelines and processes are appropriately enhanced, communicated and complied with,
and that effective mitigation measures are implemented. These policies and standards need
to be technically efficient, economically/financially sound, legally justifiable, ethically
consistent and socially acceptable, since many of the problems commonly found after
implementation and contract signing are of a technical and ethical nature, and information
security decisions become ever more complex and difficult.
Data privacy protection is complex due to socio-techno risk, a new security concern. This
risk occurs with the abuse of technology that is used to store and process data. For
example, taking a company universal serial bus (USB) device home for personal
convenience runs the risk of breaching a company regulation that no company property
shall leave company premises without permission. That risk becomes a data risk if the
USB contains confidential corporate data (e.g., data about the marketing strategy,
personnel performance records) or employee data (e.g., employee addresses, dates of
birth). The risk of taking the USB also includes theft or loss.
Using technology in a manner that is not consistent with ethical principles creates ethical
risk, another new type of risk. In the previous example, not every staff member would take
the company USB home, and those who decide to exploit the risk of taking the USB may
do so based on their own sense of morality and understanding of ethical principles. The
ethical risk (in addition to technical risk and financial risk) arises when considering the
potential breach of corporate and personal confidentiality. This risk is related partly to
technology (the USB) and partly to people (both the perpetrator and the victims) and is,
therefore, a risk of a technological-cum-social nature—a socio-techno risk. Hence, taking
home a USB is a vulnerability that may lead to a violation of data privacy.
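To make the socio-techno risk concrete, the following minimal sketch (in Python) flags files on a removable drive whose names suggest confidential content. The mount point, tag words and function name are hypothetical illustrations of one possible technical mitigation, not the behaviour of any real data-loss-prevention product.

```python
# Minimal sketch: flag files on removable media whose names suggest
# confidential content. CONFIDENTIAL_TAGS and the mount point are
# assumptions for illustration only.
from pathlib import Path

CONFIDENTIAL_TAGS = ("confidential", "personnel", "strategy")  # assumed labels

def scan_removable_media(mount_point: str) -> list:
    """Return files on a removable drive whose names match a confidential tag."""
    flagged = []
    for path in Path(mount_point).rglob("*"):
        if path.is_file() and any(tag in path.name.lower() for tag in CONFIDENTIAL_TAGS):
            flagged.append(path)
    return flagged

if __name__ == "__main__":
    for path in scan_removable_media("/media/usb"):  # hypothetical mount point
        print(f"Policy alert: confidential file on removable media: {path}")
```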
However, the problem of data privacy is not unsolvable. The composite approach alluded
to earlier, which weighs the tangible physical and financial conditions alongside intangible
measures against logical loopholes, ethical violations and social desirability, is feasible.
The method suggested in this article, built on a six-factor framework, can accomplish this
objective by drawing on three components:
The International Data Privacy Principles (IDPPs) for establishing and maintaining data
privacy policies, operating standards and mitigation measures
Hong Kong’s Data Protection Principles of personal data (DPPs) for reinforcing those
policies, standards and guidelines
The hexa-dimension metric operationalization framework for executing policies,
standards and guidelines
International Data Privacy Principles
Data privacy can be achieved through technical and social solutions. Technical solutions
include safeguarding data from unauthorized or accidental access or loss. Social solutions
include creating acceptability and awareness among customers about whether and how
their data are being used, and doing so in a transparent and confidential way. Employees
must commit to complying with corporate privacy rules, and organizations should instruct
them in how to actively avoid activities that may compromise privacy.
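As one illustration of such a technical solution, the sketch below encrypts a personal-data record at rest using the third-party Python cryptography package, so that accidental or unauthorized access yields only ciphertext. The record contents and the in-memory key handling are assumptions for the example; a real deployment would provision the key from a secrets manager.

```python
# Minimal sketch of a "technical solution": encrypt personal data at rest.
# Requires the third-party "cryptography" package (pip install cryptography).
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # in practice, stored in a secrets manager, not in code
cipher = Fernet(key)

record = b"employee: Jane Doe, date of birth: 1990-01-01"  # illustrative record
token = cipher.encrypt(record)          # ciphertext is what gets written to disk
print(cipher.decrypt(token).decode())   # readable only with the key
```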
Next to technical and social solutions, the third element of achieving privacy is complying
with data protection laws and regulations, which involves two issues. The first concern is
that legal regulation is slow and, thus, unable to keep up with the rapid developments of
information technology. Legal solutions are usually at least one step behind technological
developments. Data privacy by electronic means should, therefore, be based not only on
traditional jurisdiction, but also on soft law, i.e., self-binding policies such as the existing
data privacy principles. Soft law may be more effective than hard law. The reactions of
disappointed customers, especially when those reactions are spread by social media, and
the fact that noncompliance with corporate governance may result in unfair competition
and/or liability toward affected customers (unfair competition by not complying with self-
binding policies/liability toward customers by breach of contract) will often be more
effective than mere fines or penalties.
The second problem of data protection has to do with the fact that these regulations are
not internationally harmonized, causing severe complications (especially between the
United States and the European Union) on a cross-border basis, which is the rule rather
than the exception in modern business. To make data privacy rules work in a global
environment, the principles outlined in this article consider US standards (e.g., the US
Federal Trade Commission’s Fair Information Practices), European standards (e.g., Data
Protection Directive 95/46/EC and the General Data Protection Regulation [GDPR]), Asian
regulations (e.g., Hong Kong Personal Data Privacy Ordinance [PDPO]) and international
benchmarks (e.g., the Organization for Economic Co-operation and Development [OECD]
Privacy Framework Basic Principles).
This article also considers that common data privacy regulations, especially in Europe,
tend to focus on a traditional human rights approach, neglecting the fact that nowadays
data are usually given away voluntarily upon contractual agreement. When
using sites such as Google, Baidu, Amazon, Alibaba or Facebook, users agree with the
terms and conditions of these companies. Data privacy should consider not only mere
data protection, but also contractual principles, among which one of the oldest and most
fundamental is do ut des, meaning a contract in which there is a certain balance between
what is given and what is received. That philosophy explains why companies such as
Google or Facebook, for whose services the customer does not pay, have the right to use
personal data. In other words, that tradeoff—data for services—is the balance.
That the consumer is less protected when receiving free services is a basic element of the
European E-Commerce Directive, which does not apply to services that are offered free
of charge. But this consideration is only a first step. Applied to a modern data environment,
a balance also has to be struck in relation to other parameters relevant to contractual
aspects of data privacy. Since data are a contractual matter, it is important to consider what
kind of personal data are at issue (e.g., sensitive and nonsensitive data have to
be distinguished and treated differently), and since contracts are concluded by mutual
consent, the extent of such consent also has to be taken into account. For example, does
consent have to be declared explicitly, or is accepting the terms of use sufficient?
The IDPPs approach takes into consideration Asian, European, US and international
data protection standards and focuses on personal data, but it can apply to corporate data
as well. These principles suggest that the three parameters (payment, consent and data
category) should be balanced and combined with the previously mentioned Asian,
European, US and international standards, yielding a set of privacy rules.
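The following sketch shows one plausible way to encode the three parameters (payment, consent and data category) as a single usage check in Python. The specific rule it applies, explicit consent for sensitive data or paid contracts and terms acceptance for free nonsensitive use, is an illustrative reading of the principles listed below, not a statement of the IDPPs themselves.

```python
# Hedged sketch: combine payment, consent and data category into one check.
# The rule encoded here is one plausible condensation of the IDPPs below.
from dataclasses import dataclass

@dataclass
class DataUseRequest:
    paid_contract: bool      # did the customer pay for the service?
    explicit_consent: bool   # separate, individual consent given?
    accepted_terms: bool     # terms of use accepted?
    sensitive: bool          # sensitive vs. nonsensitive data category

def use_permitted(req: DataUseRequest) -> bool:
    if req.sensitive or req.paid_contract:
        # sensitive data and paid contracts: explicit consent required
        return req.explicit_consent
    # free service, nonsensitive data: the data-for-services trade-off
    return req.accepted_terms

print(use_permitted(DataUseRequest(paid_contract=False, explicit_consent=False,
                                   accepted_terms=True, sensitive=False)))  # True
```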
Organizations in compliance with international data privacy standards should commit to
the following 13 IDPPs (a code sketch of the retention principle follows the list):
1. Comply with national data protection or privacy law, national contract law, and other
legal requirements or regulations relating to data privacy.
2. Comply with current security standards to protect stored personal data from illegitimate
or unauthorized access or from accidental access, processing, erasure, loss or use.
3. Implement an easily perceptible, accessible and comprehensible privacy policy with
information on who is in charge of data privacy and how this person can be individually
contacted, why and which personal data are collected, how these data are used, who will
receive these data, how long these data are stored, and whether and which data will be
deleted or rectified upon request.
4. Instruct employees to comply with such privacy policies and to avoid activities that
enable or facilitate illegitimate or unauthorized access in terms of these IDPPs.
5. Do not use or divulge any customer data (except for statistical analysis and when the
customer’s identity remains anonymous), unless the company is obliged to do so by law
or the customer agrees to such use or circulation.
6. Do not collect customer data if such collection is unnecessary or excessive.
7. Use or divulge customer data in a fair way and only for a purpose related to the
activities of the company.
8. Do not outsource customer data to third parties unless they also comply with standards
comparable to these IDPPs.
9. Announce data breaches relating to sensitive data.
10. Do not keep personal data for longer than necessary.
11. Do not transfer personal data to countries with inadequate or unknown data protection
standards unless the customer is informed about these standards being inadequate or
unknown and agrees to such a transfer.
12. In the case of a contract between the company and the customer in which the
customer commits to pay for services or goods:
a. Inform the customer individually and as soon as reasonably possible in the event of a
data breach.
b. Inform the customer upon request about which specific data are stored, and delete
such data upon request unless applicable laws or regulations require the company to
continue storing such data.
c. Do not use or divulge content-related personal data.
d. Do not use or divulge any other personal data without the customer’s explicit, separate
and individual consent.
e. Do not store, use or divulge any customer data, unless applicable laws or regulations
require the company to continue storing such data.
13. In the absence of a contract between the company and the customer in which the
customer commits to pay for services or goods:
a. Inform the customer as soon as reasonably possible in the event of data breaches.
b. Inform the customer upon request what types of sensitive data are stored, and delete
such data upon request when such data are outdated, unless applicable laws or
regulations require the company to continue storing such data.
c. Do not use or divulge sensitive data without the customer’s explicit, separate and
individual consent.
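As referenced above, here is a minimal Python sketch of the retention principle (IDPP 10, “do not keep personal data for longer than necessary”): records past an assumed retention period are purged unless applicable law requires continued storage. The field names and the 365-day period are illustrative assumptions, not part of the IDPPs.

```python
# Minimal sketch of IDPP 10: purge personal data past an assumed retention
# period unless a legal or regulatory duty requires continued storage.
from dataclasses import dataclass
from datetime import date, timedelta

RETENTION = timedelta(days=365)  # assumed retention period

@dataclass
class PersonalRecord:
    customer_id: str
    collected_on: date
    legal_hold: bool  # applicable law requires continued storage

def purge_expired(records: list, today: date) -> list:
    """Keep records still within retention or under a legal hold; drop the rest."""
    return [r for r in records if r.legal_hold or today - r.collected_on <= RETENTION]

records = [PersonalRecord("c1", date(2020, 1, 1), False),
           PersonalRecord("c2", date(2024, 6, 1), True)]
print(purge_expired(records, date(2025, 1, 1)))  # only c2 survives
```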
4. Provide at least two examples of ethical issues in privacy.
Ethics and Data Privacy
By Robert Bond
From Compliance & Ethics Professional
Even if Scott McNealy was right in 1999 (when he reportedly said, “You have zero privacy anyway
– Get over it.”), individuals deserve respect for their privacy. This respect does not have to be
imposed by law, but should be a matter of integrity and ethics.
Recently the European Data Protection Supervisor (EDPS) published Opinion 4/2015, entitled
“Towards a new digital ethics – data, dignity and technology.” The Opinion was published on 11
September 2015 and follows on from the previous Opinion on the General Data Protection
Regulation, which aims to assist the main institutions of the EU in reaching the right consensus
on a workable, future-orientated set of rules to bolster the rights and freedoms of the individual.
The latest Opinion focuses heavily on Article 1 of the EU Charter of Fundamental Rights, namely
that “human dignity is inviolable and must be respected and protected.”
The Opinion sets out a number of principles which state that the fundamental rights to privacy
and to the protection of personal data must reflect the protection of human dignity more than ever;
that technology should not dictate values and rights; in today’s digital environment, adherence to
the law is not enough and we have to consider the ethical dimension of data processing; and
finally that these issues have engineering, philosophical, legal and moral implications.
The Opinion goes on to propose a four-tier “big data protection ecosystem”:
1. Future-orientated regulation of data processing and respect for the rights of privacy and data
protection;
2. Accountable controllers who determine personal information processing;
3. Privacy-conscious engineering and design of data processing products and services; and
4. Empowered individuals.
The Opinion looks at a number of recent developments, namely big data, the Internet of Things,
cloud computing, drones and connected autonomous vehicles.
The Opinion proposes the creation of a European Ethics Advisory Board made up of academic,
legal and other professionals in the arena, to advise the EDPS on the ethical issues of big data
and related activities.
The Opinion preceded a meeting of the Privacy Advisory Group of the United Nations in The
Hague in late October on similar topics, where I chaired the discussions on ethics and big data.
5. Define the ethics of security.
THE NEED FOR A CODE OF ETHICS
FOR INFORMATION SECURITY PROFESSIONALS
Many international professional bodies, such as GIAC, EC Council and ISC2, use codes
of ethics to provide a benchmark for their professional members’ self-evaluation and
to establish a framework for professional behavior and responsibilities.
Looking at the scenario in Malaysia, there are many people who have acquired their
information security expertise through mastering the intellectual skills, training, education
and experience, without receiving certification from the well-known international
professional bodies. However, these information security professionals do not have any
common ethical framework to which they can channel or attach themselves.
Therefore, this paper proposes a code of ethics that can be adopted by the group of
information security professionals described above. In fact, any newly established
information security professional association or organization can adopt this proposed
code of ethics as its standard of professional conduct and practice, providing guidance to
its members in their day-to-day work and services.
Protect and maintain an appropriate level of confidentiality, integrity and availability of
sensitive information in the course of professional activities.
Respect the confidentiality of information acquired during the course of their duties.
Do not use or disclose, share, disseminate or distribute any confidential or proprietary
information without proper and specific authority or unless there is a legal or professional
requirement to do so.
Avoid misusing any confidential information for personal gain.
Treat all information received from a client or employer as confidential unless such
information is in the public domain.
Take appropriate steps to minimize or mitigate potential risks, including recommending the
engagement of another professional if the need arises.
Professional bodies’ codes of ethics address the ethical side of the job, but again, there is
no requirement for IT security personnel to belong to those organizations.
In fact, many IT pros don't even realize that their jobs involve ethical issues. Yet we make
decisions on a daily basis that raise ethical questions.
Many of the ethical issues that face IT professionals involve privacy. For example:
Should you read the private e-mail of your network users just because you can? Is it OK to read
employees' e-mail as a security measure to ensure that sensitive company information isn't being
disclosed? Is it OK to read employees' e-mail to ensure that company rules (for instance, against
personal use of the e-mail system) aren't being violated? If you do read employees' e-mail, should
you disclose that policy to them? Before or after the fact?
Is it OK to monitor the Web sites visited by your network users? Should you routinely keep logs
of visited sites? Is it negligent to not monitor such Internet usage, to prevent the possibility of
pornography in the workplace that could create a hostile work environment?
Is it OK to place key loggers on machines on the network to capture everything the user types?
What about screen capture programs so you can see everything that's displayed? Should users
be informed that they're being watched in this way?
Is it OK to read the documents and look at the graphics files that are stored on users' computers
or in their directories on the file server?
Remember that we're not talking about legal questions here. A company may very well have the
legal right to monitor everything an employee does with its computer equipment. We're talking
about the ethical aspects of having the ability to do so.
As a network administrator or security professional, you have rights and privileges that allow you
to access most of the data on the systems on your network.
You may even be able to access encrypted data if you have access to the recovery agent account.
What you do with those abilities depends in part on your particular job duties (for example, if
monitoring employee mail is a part of your official job description) and in part on your personal
ethical beliefs about these issues.
The slippery slope
A common concept in any ethics discussion is the "slippery slope." This pertains to the ease with
which a person can go from doing something that doesn't really seem unethical, such as scanning
employees' e-mail "just for fun," to doing things that are increasingly unethical, such as making
little changes in their mail messages or diverting messages to the wrong recipient.
In looking at the list of privacy issues above, it's easy to justify each of the actions described. But
it's also easy to see how each of those actions could "morph" into much less justifiable actions.
For example, the information you gained from reading someone's e-mail could be used to
embarrass that person, to gain a political advantage within the company, to get him/her disciplined
or fired, or even for blackmail.
The slippery slope concept can also go beyond using your IT skills. If it's OK to read other
employees' e-mail, is it also OK to go through their desk drawers when they aren't there? To open
their briefcases or purses?
What if your perusal of random documents reveals company trade secrets? What if you later leave
the company and go to work for a competitor? Is it wrong to use that knowledge in your new job?
Would it be "more wrong" if you printed out those documents and took them with you, than if you
just relied on your memory?
What if the documents you read showed that the company was violating government regulations
or laws? Do you have a moral obligation to turn them in, or are you ethically bound to respect
your employer's privacy? Would it make a difference if you signed a nondisclosure agreement
when you accepted the job?
IT and security consultants who do work for multiple companies have even more ethical issues to
deal with. If you learn things about one of your clients that might affect your other client(s), where
does your loyalty lie?
Then there are money issues. The proliferation of network attacks, hacks, viruses and other
threats to their IT infrastructures has caused many companies to "be afraid, be very afraid." As
a security consultant, you may find it very easy to play on that fear to convince companies to spend far
more money than they really need to. Is it wrong for you to charge hundreds or even thousands
of dollars per hour for your services, or is it a case of "whatever the market will bear?"
Is it wrong for you to mark up the equipment and software that you get for the customer when you
pass the cost through? What about kickbacks from equipment manufacturers? Is it wrong to
accept "commissions" from them for persuading your clients to go with their products? Or what if
the connection is more subtle? Is it wrong to steer your clients toward the products of companies
in which you hold stock?
Another ethical issue involves promising more than you can deliver, or manipulating data to obtain
higher fees. You can install technologies and configure settings to make a client's network more
secure, but you can never make it completely secure. Is it wrong to talk a client into replacing their
current firewalls with those of a different manufacturer, or switching to an open source operating
system – which changes, coincidentally, will result in many more billable hours for you – on the
premise that this is the answer to their security problems?
Here's another scenario: What if a client asks you to save money by cutting out some of the
security measures that you recommended, yet your analysis of the client's security needs shows
that sensitive information will be at risk if you do so? You try to explain this to the client, but he/she
is adamant. Should you go ahead and configure the network in a less secure manner? Should
you "eat" the cost and install the extra security measures at no cost to the client? Should you
refuse to do the job? Would it make a difference if the client's business were in a regulated industry,
and implementing the lower security standards would constitute a violation of the Health
Insurance Portability and Accountability Act, the Gramm-Leach-Bliley Act, Sarbanes-Oxley or
other laws?
Summary
This article has raised a lot of questions, but has not attempted to provide set answers. That's
because, ultimately, the answer to the question "Is it ethical?" must be answered by each
individual IT professional.
Unlike older, more established professions such as medicine and law, most ethical issues that IT
and security professionals confront have not been codified into law, nor is there a standard
mandatory oversight body, such as the national or state medical association or bar association,
that has established a detailed code of ethics.