A Security Model for Military Message Systems
CARL E. LANDWEHR, CONSTANCE L. HEITMEYER, and JOHN McLEAN
Naval Research Laboratory
Military systems that process classified information must operate in a secure manner; that is, they
must adequately protect information against unauthorized disclosure, modification, and withholding.
A goal of current research in computer security is to facilitate the construction of multilevel secure
systems, systems that protect information of different classifications from users with different
clearances. Security models are used to define the concept of security embodied by a computer system.
A single model, called the Bell and LaPadula model, has dominated recent efforts to build secure
systems but has deficiencies. We are developing a new approach to defining security models based on
the idea that a security model should be derived from a specific application. To evaluate our approach,
we have formulated a security model for a family of military message systems. This paper introduces
the message system application, describes the problems of using the Bell-LaPadula model in real
applications, and presents our security model both informally and formally. Significant aspects of
the security model are its definition of multilevel objects and its inclusion of application-dependent
security assertions. Prototypes based on this model are being developed.
Categories and Subject Descriptors: C.2.0 [Computer-Communication Networks]: General--
Security and protection; D.4.6 [Operating Systems]: Security and Protection--access controls;
information flow controls; verification; F.3.1 [Logics and Meaning of Programs]: Specifying and
Verifying and Reasoning about Programs--assertions; invariants; specification techniques; H.4.3
[Information Systems Applications]: Communications Applications--electronic mail
General Terms: Security, Verification
Additional Key Words and Phrases: Storage channels, message systems, confinement
1. INTRODUCTION
A system is secure if it adequately protects information that it processes against
unauthorized disclosure, unauthorized modification, and unauthorized withhold-
ing (also called denial of service). We say "adequately" because no practical
system can achieve these goals without qualification; security is inherently
relative. A secure system is multilevel secure if it protects information of different
classifications from users with different clearances; thus some users are not
cleared for all of the information that the system processes.
Security models have been developed both to describe the protection that a
computer actually provides and to define the security rules it is required to
enforce [14]. In our view, a security model should enable users to understand
how to operate the system effectively, implementors to understand what security
controls to build, and certifiers to determine whether the system's security
Authors' address: Computer Science and Systems Branch, Information Technology Division, Naval
Research Laboratory, Washington, D.C. 20375.
© 1984 ACM 0734-2071/84/0198-0222 $00.00
ACM Transactions on Computer Systems, Vol. 2, No. 3, August 1984, pages 198-222.
controls are consistent with the relevant policies and directives and whether
these controls are implemented correctly [13].
In recent years, the Bell and LaPadula model [4, 8] has dominated efforts to
build secure systems. The publication of this model advanced the technology of
computer security by providing a mathematical basis for examining the security
provided by a given system. Moreover, the model was a major component of one
of the first disciplined approaches to building secure systems.
The model describes a secure computer system abstractly, without regard to
the system's application. Its approach is to define a set of system constraints
whose enforcement will prevent any application program executed on the system
from compromising system security. The model includes subjects, which represent
active entities in a system (such as active processes), and objects, which represent
passive entities (such as files and inactive processes). Both subjects and objects
have security levels, and the constraints on the system take the form of axioms
that control the kinds of access subjects may have to objects.
One of the axioms, called the *-property ("star-property"), prohibits a subject
from simultaneously having read access to one object at a given security level
and write access to another object at a lower security level. Its purpose is to
prevent subjects from moving data of a given security level to an object marked
with a lower security level. Originally, the model applied this constraint to all
subjects, since a subject might execute any arbitrary application program, and
arbitrary programs executed without this constraint could indeed cause security
violations.
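As a reading aid, the *-property can be sketched in code. The class and function names below are our own illustration, not part of the model, and compartments are omitted so that levels are linearly ordered:

```python
# Sketch of the *-property: a subject may not simultaneously hold read access
# to one object and write access to another object at a lower security level.
from dataclasses import dataclass, field

# Linearly ordered sensitivity levels (compartments omitted for simplicity).
LEVELS = {"UNCLASSIFIED": 0, "CONFIDENTIAL": 1, "SECRET": 2, "TOP SECRET": 3}

@dataclass
class Subject:
    reads: dict = field(default_factory=dict)   # object name -> sensitivity level
    writes: dict = field(default_factory=dict)  # object name -> sensitivity level

def satisfies_star_property(subj: Subject) -> bool:
    """True iff no write target is below any read source (no write-down)."""
    return all(
        LEVELS[w] >= LEVELS[r]
        for r in subj.reads.values()
        for w in subj.writes.values()
    )

s = Subject(reads={"doc": "CONFIDENTIAL"}, writes={"memo": "UNCLASSIFIED"})
print(satisfies_star_property(s))  # False: simultaneous read at CONFIDENTIAL, write at UNCLASSIFIED
```

Note that the check constrains only the levels of objects a subject holds open at once; it says nothing about what the subject's program actually copies, which is why the original model applied it to all subjects.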
A system that strictly enforces the axioms of the original Bell-LaPadula model
is often impractical: in real systems, users may need to invoke operations that,
although they do not violate our intuitive concept of security, would require
subjects to violate the *-property. For example, a user may need to extract an
UNCLASSIFIED paragraph from a CONFIDENTIAL document and use it in
an UNCLASSIFIED document. A system that strictly enforces the *-property
would prohibit this operation.
Consequently, a class of trusted subjects has been included in the model. These
subjects are trusted not to violate security even though they may violate the
*-property. Systems based on this less restrictive model usually contain mecha-
nisms that permit some operations the *-property prohibits, for example, the
trusted processes in KSOS [17] and SIGMA [1]. The presence of such mechanisms
makes it difficult to determine the actual security policy enforced by the system
and complicates the user interface.
To avoid these problems, we propose a different approach. Instead of starting
with an application-independent abstraction for a secure computer system and
trying to make an application fit on top of it, we start with the application and
derive the constraints that the system must enforce from both the functional
and security requirements of the application. In this way, it is possible to
construct a set of assertions that is enforced uniformly on all the system software.
To evaluate our approach, we have formulated a security model for a family of
military message systems. Defining an application-based security model is part
of a larger effort whose goals are (1) to develop a disciplined approach to the
production of secure systems and (2) to produce fully worked-out examples of a
requirements document and a software design for such systems. In this paper,
we introduce the message system application, discuss the Bell-LaPadula trusted
process approach to building secure systems, and present a security model for
military message systems both informally and formally.
Military message systems are required to enforce certain security rules. For
example, they must ensure that users cannot view messages for which they are
not cleared. Unfortunately, most automated systems cannot be trusted to enforce
such rules. The result is that many military message systems operate in "system-
high" mode: each user is cleared to the level of the most highly classified
information on the system. A consequence of system-high operation is that all
data leaving the computer system must be classified at the system-high level
until a human reviewer assigns the proper classification.
A goal of our research is to design message systems that are multilevel secure.
Unlike systems that operate at system-high, multilevel secure systems do not
require all users to be cleared to the level of the highest information processed.
Moreover, information leaving such a system can be assigned its actual security
level rather than the level of the most highly classified information in the system.
Unlike a system that operates at system-high, a multilevel system can preserve
the different classifications of information that it processes.
During the Military Message Experiment (MME), military officers and staff
personnel used SIGMA, the message system developed for the experiment, to
process their messages [21, 22]. Although SIGMA was
built on the nonsecure TENEX operating system, its user interface was designed
as though it were running on a security kernel (i.e., a minimal, tamperproof
mechanism that assures that all accesses subjects have to objects conform to a
specified security model). SIGMA's user interface was designed so that it would
not change if SIGMA were rebuilt to operate with a security kernel.
During the planning phase of the MME, it was decided that SIGMA would
enforce the Bell-LaPadula model [1]. This decision led to a number of difficulties,
three of which are described below. The first problem arose from the initial
decision, later changed, to adopt the model without trusted subjects; the other
two problems apply to Bell-LaPadula with or without trusted subjects.
--Prohibition of write-downs. The *-property of Bell-LaPadula disallows write-
downs; yet, in certain cases, message system users need to lower the classification
of information. For example, a user may create a message at TOP SECRET, and,
after he has entered the message text, decide that the message classification
should be SECRET. A system that strictly enforces the *-property would prohibit
a user from reducing the message classification. The user would be required to
create a new message at SECRET and re-enter the text.
--Absence of multilevel objects. Bell-LaPadula recognizes only single-level
objects; some message system data objects (e.g., messages and message files) are
inherently multilevel. A computer system that treats a multilevel object as single-
level can cause some information to be treated as more highly classified than it
really is. For example, when a user of such a system extracts an UNCLASSIFIED
paragraph from a SECRET message, the system labels the paragraph SECRET
even though the paragraph is actually UNCLASSIFIED.
--No structure for application-dependent security rules. Military message sys-
tems must enforce some security rules that are absent in other applications. An
example is a rule that allows only users with release authority to invoke the
release operation. 1 Such application-dependent rules are not covered by Bell-
LaPadula, and, hence, must be defined outside of it.
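The multilevel-object problem in the second item above can be made concrete with a small sketch. The data structures are hypothetical, not the paper's formalism; they show only how a per-part label changes what an extraction operation can report:

```python
# Sketch: a message whose paragraphs carry their own labels, versus a
# single-level system that labels every extract at the whole message's level.
from dataclasses import dataclass

@dataclass
class Paragraph:
    text: str
    level: str

@dataclass
class Message:
    level: str        # classification of the message as a whole
    paragraphs: list

msg = Message("SECRET", [Paragraph("weather report", "UNCLASSIFIED"),
                         Paragraph("troop movements", "SECRET")])

def extract_single_level(m: Message, i: int):
    # A single-level system knows only the message's level, so every
    # extract inherits it.
    return (m.paragraphs[i].text, m.level)

def extract_multilevel(m: Message, i: int):
    # A multilevel object keeps a label on each part, so an extract
    # retains its own classification.
    return (m.paragraphs[i].text, m.paragraphs[i].level)

print(extract_single_level(msg, 0))  # ('weather report', 'SECRET')
print(extract_multilevel(msg, 0))    # ('weather report', 'UNCLASSIFIED')
```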
To address the first problem (and, to some extent, the third), the SIGMA
developers designed a trusted process that is not constrained by the *-property
and is, therefore, permitted to perform write-downs. For example, a SIGMA user
could search a file containing both UNCLASSIFIED and SECRET messages
and then display an UNCLASSIFIED message whose citation was returned by
the search; such an operation required the intervention of the trusted process
since the message citation was transmitted from the SECRET process that did
the search to the UNCLASSIFIED process that handled the message display.
Unlike the Bell-LaPadula model, which puts no explicit constraints on write-
downs performed by the trusted subjects, SIGMA's trusted process narrowly
limited the cases in which write-downs were permitted. Ames [1] provides further
details on the role of the trusted process in SIGMA.
SIGMA's use of a trusted process was helpful in that it relaxed the rigid
constraints of Bell-LaPadula, thus permitting users to perform required opera-
tions. However, adding the trusted process also caused a serious problem: it made
the security policy that SIGMA enforced difficult to understand. Interviews held
during the MME revealed that few SIGMA users clearly understood the security
policy that was being enforced. It was an assumption of SIGMA's design that
user confirmation of security-relevant operations would prevent security viola-
tions. However, because users issued confirmations without comprehending why
these confirmations were needed, this assumption was unwarranted.
3.3 KSOS
KSOS [17] was to be a security-kernel based system with a UNIX-compatible
program interface on a DEC PDP-11. The KSOS security kernel was designed
to strictly enforce the axioms of the Bell-LaPadula model on user-provided
programs. To handle those situations where strict enforcement is incompatible
with functional requirements, the kernel recognizes certain "privileges" that
allow some processes to circumvent parts of this enforcement. These privileges
include the ability to violate the *-property, to change the security or integrity
level [5] of objects, and to invoke certain security kernel functions.
KSOS developers defined a special category of software, called Non-Kernel
Security Related (NKSR), that supports such privileges. For example, the "Secure
Server" of the KSOS NKSR allows a user to reduce the security level of files he
owns and to print a file classified at a lower security level without raising the
security level of the printed output to the level of this process. Both of these
operations would be prohibited by strict enforcement of the Bell-LaPadula
axioms.
3.4 Guard
The Guard message filter [24] is a computer system that supports the monitoring
and sanitization of queries and responses between two database systems operating
at different security levels. When a user of the less sensitive system requests data
from the more sensitive system, a human operator of the Guard must review the
response to ensure that it contains only data that the user is authorized to see.
The operator performs this review via a visual display terminal.
One version of the Guard is being built on a security kernel that enforces the
axioms of the Bell-LaPadula model. However, strict enforcement of the
*-property is not possible since a major requirement of the Guard system is to
allow the operator to violate it, that is, to allow information from the more
sensitive system to be sanitized and "downgraded" (or simply downgraded), so
that it can be passed to systems that store less sensitive information. An
important component of this version's design is the trusted process that performs
this downgrading.
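The controlled downgrading performed by the Guard operator (and later captured by the model's downgrader role) can be caricatured as a role-gated reclassification. Everything here is illustrative; the function and role names are our own:

```python
# Toy sketch: lowering an item's classification is permitted only to a user
# who holds an explicit "downgrader" authorization (names hypothetical).
LEVELS = {"UNCLASSIFIED": 0, "CONFIDENTIAL": 1, "SECRET": 2, "TOP SECRET": 3}

def reclassify(msg: dict, new_level: str, user_roles: list) -> dict:
    """Return a copy of msg at new_level; lowering requires the downgrader role."""
    if LEVELS[new_level] < LEVELS[msg["level"]] and "downgrader" not in user_roles:
        raise PermissionError("write-down requires the downgrader role")
    updated = dict(msg)
    updated["level"] = new_level
    return updated

draft = {"level": "TOP SECRET", "text": "draft body"}
print(reclassify(draft, "SECRET", ["downgrader"])["level"])  # SECRET
```

The point of the gate is that the write-down is still possible, but only through one narrow, auditable path, rather than through an unconstrained trusted subject.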
2 Indirectly, trusted subjects can implement any arbitrary security policy. For example, a trusted
subject that acts as a type manager can provide multilevel objects, and application-dependent security
rules can be enforced by making controlled operations available only through trusted subjects. Our
point here is that the notion of trusted subjects in itself serves only to draw a circle around the
aspects of security policy not addressed by the axioms of the Bell-LaPadula model. It does not provide
any framework for formulating that policy.
4.1 Definitions
The definitions below correspond in most cases to those in general use and are
given here simply to establish an explicit basis for the model. We distinguish
between objects, which are single-level, and containers, which are multilevel. We
also introduce the concept of user roles, which define job-related sets of privileges.
Classification3--a designation attached to information that reflects the damage
that could be caused by unauthorized disclosure of that information. A classifi-
cation includes a sensitivity level (UNCLASSIFIED, CONFIDENTIAL, SECRET,
TOP SECRET).
3 This definition corresponds to that used by other authors for security level. In this paper, security
level and classification are synonyms.
UI is a set of userIDs.
RL is a set of user roles.
US is a set of users. For all u ∈ US, CU(u) ∈ L is the clearance of u, R(u) ⊆
RL is the set of authorized roles for u, and RO(u) ⊆ RL is the current
role set for user u.
RF is a set of references. This set is partitioned into a set, DR, of direct
references and a set, IR, of indirect references. Although the exact nature
of these references is unimportant, we assume that the direct references
can be ordered by the integers. In this model we treat each direct reference
as a unary sequence consisting of a single integer, for example, (17).
Each indirect reference is treated as a finite sequence of two or more
integers, for example, (n1, ..., nm), where (n1) is a direct reference.
VS is a set of strings (bit or character). These strings serve primarily as
entity values (e.g., file or message contents).
TY is a set of message system data types that includes "DM" for draft
messages and "RM" for released messages.
ES is a set of entities. For all e ∈ ES, CE(e) ∈ L is the classification of e.
AS(e) ⊆ (UI ∪ RL) × OP × N is a set of triples that compose the access
set of e. (u, op, k) ∈ AS(e) iff u is a userID or user role authorized to
perform operation op with a reference to e as op's kth parameter. T(e) ∈
TY is the type of entity e. V(e) ∈ VS is the value of entity e. If T(e) =
DM or T(e) = RM, then V(e) includes a releaser field RE(e), which, if
nonempty, contains a userID. ES contains as a subset the set of entities
that are containers. For any entity e in this set, H(e) = (e1, ..., en), where
entity ei is the ith entity contained in e. CCR(e) is true iff e is marked
CCR, else false. If T(e1) = T(e2), then e1 and e2 are both containers or
both objects. The set O of output devices is a subset of the set of
containers.4 Elements o ∈ O serve as the domain of two further functions.
D(o) is a set of ordered pairs {(x1, y1), (x2, y2), ..., (xn, yn)}, where each yi
is displayed on o. Each xi is either a user or an entity, and the correspond-
ing yi is either a reference, a userID, or the result of applying one of the
above functions to xi.5 We require that (x, V(x)) ∈ D(o) → x ∈ H(o).6
CD(o) gives the maximum classification of information that may be
displayed on o. This allows CE(o) to be used as the current upper limit of
the classification of information to be displayed by the output device, so
that users can restrict the classification of output to be less than the
maximum level permitted.
4 In implementations, some kinds of output "disappear" from the system state (e.g., information sent
to a printer or a telecommunications port) while others persist (e.g., information displayed on the
screen of a terminal, which a user may later refer to and modify). In the formalization, we do not
distinguish between these types; both are intended to be covered by O.
5 Both the item and what is displayed must be specified so that, for example, cases in which two
entities have identical values but different security levels can be distinguished.
6 We extend the set-theoretic notions of membership and intersection to apply to tuples in the obvious
sense.
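As a reading aid, the entity functions above (CE, AS, T, V, RE, H, CCR) might be mirrored in code roughly as follows. This is our own illustrative sketch, not part of the formal model, and the type name "MF" for a message file is an assumption:

```python
# Sketch mirroring the model's entity functions as record fields.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Entity:
    ce: str                                        # CE(e): classification
    ty: str                                        # T(e): type, e.g. "DM", "RM"
    value: str = ""                                # V(e): value string
    access_set: set = field(default_factory=set)   # AS(e): triples (who, op, k)
    releaser: Optional[str] = None                 # RE(e), for DM/RM entities

@dataclass
class Container(Entity):
    h: list = field(default_factory=list)  # H(e): contained entities, in order
    ccr: bool = False                      # CCR(e): container-clearance-required

# A SECRET draft message held in a CCR-marked message file:
m = Entity(ce="SECRET", ty="DM", value="message text",
           access_set={("jones", "view", 1)})
f = Container(ce="SECRET", ty="MF", h=[m], ccr=True)
print(("jones", "view", 1) in m.access_set)  # True
```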
States s = (U, E, LO) and s* = (U*, E*, LO*) are equivalent except for some set
of references ρ iff (1) U = U*, (2) LO = LO*, (3) dom(E) = dom(E*), (4) for any
entity function F except V, F_s = F_s*, and (5) for any reference r ∈ dom(E) − ρ,
V_s(r) = V_s*(r).
We now define potential modification as follows:
u, i, s potentially modify r iff ∃ s1, s*: s1 is equivalent to s except possibly for some
set of references and T(u, i, s1) = s* and for some entity function F,
F(r_s1) ≠ F(r_s*).10
Call y a contributing factor in such a case iff y = r or ∃ s1 as above and s2, s2*: s1
and s2 are equivalent except for {y} and T(u, i, s2) = s2* and F(r_s*) ≠ F(r_s2*).
9 We could have developed the entire formal model in terms of referential counterparts, but preferred
the simplicity of functions to working with the predicate H_s.
10 This covers cases of creation (and deletion) since F(r_s1) will be undefined and F(r_s*) will be defined
(although possibly empty).
single exception to this assumption; it changes the type of r and, potentially, the
releaser field of r's value as well.
The exact nature of these operations is unimportant since these assumptions
are included solely for ease of exposition. Their purpose is not to rule out
implementation commands that affect different parts of entities, but to eliminate
the problem of unspecified side effects in the formal model (e.g., permission to
view a message marked CCR is not permission to clear the CCR mark). Imple-
mentation commands that can alter more than a single part of a single entity
correspond to a sequence of formal operations. For a given implementation, this
correspondence is determined by the semantics of the implementation command
language. Once this correspondence has been determined, so that the security-
relevant effects of each user command are clear, I can be replaced by the set of
implementation commands with access sets also changed accordingly. Neverthe-
less, prudence dictates that modifications (e.g., changing a user's clearance) that
can be made only by the security officer, be restricted so that there is only a single
command that performs them in any implementation.
The following constraints on the system transform lead to the definition of a
secure history and a secure system. Where quantification is not explicit below,
universal quantification is assumed.
Definition 5. A transform T is access secure iff ∀ u, i, s, s*: T(u, i, s) = s*,
[(op ∈ i ∩ OP and r_k ∈ i ∩ RF) → ((u, op, k) ∈ AS(E(r_k)) or ∃ l ∈ RO(u_s) and (l,
op, k) ∈ AS(E(r_k)))] or s = s*.11
Definition 6. A transform T is copy secure iff ∀ u, i, s, s*: T(u, i, s) = s*, x is
potentially modified with y as a contributing factor → CE(x_s) ≥ CE(y_s).
Definition 7. A transform T is CCR secure iff ∀ u, i, s, s*: T(u, i, s) = s*,
r ∈ i ∩ IR is based on y and CCR(y) and z is potentially modified with r as a
contributing factor → CU(u_s) ≥ CE(y_s).
Definition 8. A transform T is translation secure iff ∀ u, i, s, s*: T(u, i, s) = s*,
x ∈ DR and (x_s*, x) ∈ D(ō_s*) → ∃ r ∈ i ∩ RF: r_s = x_s and (r is based on z and CCR(z)
→ CU(u_s) ≥ CE(z_s)).12
Definition 9. A transform T is set secure iff ∀ u, i, s, s*: T(u, i, s) = s*, (a)
∃ o ∈ dom(E ∩ (RF × O)): CD(o_s) ≠ CD(o_s*), or ∃ x ∈ dom(U): CU(x_s) ≠ CU(x_s*) or
R(x_s) ≠ R(x_s*) → security_officer ∈ RO(u_s); and (b) x ∈ dom(U) and
RO(x_s) ≠ RO(x_s*) → u_s = x_s or security_officer ∈ RO(u_s).
Definition 10. A transform T is downgrade secure iff ∀ u, i, s, s*: T(u, i, s) = s*,
x ∈ dom(E − (RF × {ō_s})) and CE(x_s) > CE(x_s*) → downgrader ∈ RO(u_s).
Definition 11. A transform T is release secure iff ∀ u, i, s, s*: T(u, i, s) = s*,
(T(x_s) = RM → T(x_s*) = RM and RE(x_s*) = RE(x_s)) and (T(x_s) ≠ RM and
T(x_s*) = RM → RE(x_s*) = u_s, ∃ r: r_s = x_s, i is the operation (release, r), re-
leaser ∈ RO(u_s), and T(x_s) = DM).
Definition 12. A transform is transform secure iff it is access secure, copy
secure, CCR secure, translation secure, set secure, downgrade secure, and release
secure.
Definition 13. A history is secure if all its states are state secure and its
transform is transform secure.
Definition 14. A system is secure if each of its histories is secure.
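Of these conditions, access secure is the most mechanical: either the operation leaves the state unchanged, or every referenced entity's access set lists the user (or one of the user's current roles) for that operation and parameter position. A sketch of that check, using our own hypothetical representation of access sets:

```python
def access_secure_ok(user, roles, op, operands, access_sets):
    """Check the access-secure condition for one invocation.
    operands: entity names in parameter order (k is 1-based, as in the model);
    access_sets: entity name -> set of (userID-or-role, operation, k) triples."""
    for k, name in enumerate(operands, start=1):
        allowed = access_sets[name]
        if not any((who, op, k) in allowed for who in [user, *roles]):
            return False
    return True

acc = {"msg1": {("releaser", "release", 1)}}
print(access_secure_ok("smith", ["releaser"], "release", ["msg1"], acc))  # True
print(access_secure_ok("smith", [], "release", ["msg1"], acc))            # False
```

A failed check corresponds to the "or s = s*" disjunct: the only permissible effect of an unauthorized invocation is an error report.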
5.2 Discussion
Perhaps the most basic decision we made in formalizing the MMS model concerns
our general conception of a computer system, in particular the relation between
a system state and a system. We considered a view where a system state consists
of entities and their relations, and where a system adds to this users and user
operations on entities. Hence, all restrictions on user properties (in particular,
the restriction that for all u, RO(u) ⊆ R(u)) are included in the definition of a secure
system. We chose instead to view the distinction between system states and
systems in terms of static as opposed to dynamic properties. Static properties are
those that hold for all secure states and, hence, can be checked by examining a
state in isolation; dynamic properties are those that hold for the relation between
secure states and, hence, can be checked only by comparing two or more states.
In the view we adopted, all static security properties are included in the definition
of a secure state.
To a large extent the choice in conceptualizations is a matter of taste. Bell and
LaPadula [4] use the latter, while Feiertag et al. [8] lean to the former. By
minimizing the notion of a secure state, the former view makes the Basic Security
Theorem shorter. The deciding factor in our adopting the latter view is that it
makes it impossible for a system to undergo a security-relevant change without
undergoing a change in state.
Principal difficulties we encountered in formalizing the MMS security model
were in representing "copy" and "view," system output, and the notion of an
authorized operation. Assertion 3 (changes to objects) in the informal model
requires formal semantics to reflect the movement of information between
entities, while assertion 4 (viewing) requires formal semantics to reflect making
an entity visible to a user. Assertion 5 (accessing CCR entities) now addresses
both copying and viewing. The semantics for "copy," embodied in the definitions
of "potential modification" and "contributing factor," are based on a broad
interpretation of "copy." Information is considered to be copied, not only if it is
directly moved from one entity to another, but also if it contributes to the
potential modification of some other entity. For example, if an operation scans
message file A and copies messages selected by a filter F to message file B, both
A and F contribute to the potential modification of B (and are therefore subject
to the constraints imposed by copy secure and CCR secure), even if both A and F
are empty. The semantics for "view" are straightforward: a thing is viewed if an
operation makes it a member of an output container. In light of these consider-
ations, we have used "access" instead of "view" in assertion 5.
In the formalization, system output is interpreted as a set of containers; other
entities, parts of entities, references, and classifications that are made visible to
a user are interpreted as being copied to his output container. We assume that
in any implementation the classifications displayed appear close to the entities
(or parts) they correspond to, but we have not formalized this assumption.
References are explicitly included as a part of output because the same operation
applied to the same entities can yield different results, depending on how the
entities are referenced. This leads to the constraint (translation secure) on
operations that produce as output direct references that are translations of
indirect ones. To enforce this constraint, the system must recognize references
as a particular kind of output.
Formalizing the concept of an authorized operation is difficult because the
semantics of authorized operations are unspecified. Our definition of access secure
requires that, if an operation changes the system state (beyond producing an
error message as output), then for each entity in the set of operands the user or
role, operation, and operand index must appear in the access set. Unauthorized
operations must not alter the system state except to report that they are
erroneous.
5.5 A Basic Security Theorem for the Formal MMS Security Model
In formalizations where a secure system is a collection of secure states, some feel
that a Basic Security Theorem is needed to show the restrictions on system
transforms that ensure that a system starting in a secure state will not reach a
state that is not secure [4]. Such theorems are of little practical significance,
since their proofs do not depend on the particular definition of security provided
by the model [18]. Further, in our approach such a theorem is not pressing since
the concept of a secure system is defined largely in terms of a secure transform.
Nevertheless, we do appeal to the notion of a secure state, and some readers may
feel that some form of Basic Security Theorem is needed. Those readers should
find it trivial to prove the following analog of the Basic Security Theorem for
our definition of a secure state.
THEOREM. Every state of a system Σ is secure if s_0 is secure and T meets the
following conditions for all u, i, s, s*: T(u, i, s) = s* and for all x, y ∈ RF, w ∈ US:
(1) x_s ∉ H(y_s) and x_s* ∈ H(y_s*) → CE(x_s*) ≤ CE(y_s*).
(2) x_s ∈ H(y_s) and CE(x_s*) ≰ CE(y_s*) → x_s* ∉ H(y_s*).
(3) x_s ∉ H(ō_s) and x_s* ∈ H(ō_s*) → CU(w_s*) ≥ CE(x_s*).
(4) x_s ∈ H(ō_s) and CU(w_s*) ≱ CE(x_s*) → x_s* ∉ H(ō_s*).
(5) (x_s, V(x_s)) ∉ D(ō_s) and (x_s*, V(x_s*)) ∈ D(ō_s*) → (x_s*, CE(x_s*)) ∈ D(ō_s*).
(6) (x_s, V(x_s)) ∈ D(ō_s) and (x_s*, CE(x_s*)) ∉ D(ō_s*) → (x_s*, V(x_s*)) ∉ D(ō_s*).
(7) R(w_s) ≠ R(w_s*) or RO(w_s) ≠ RO(w_s*) → RO(w_s*) ⊆ R(w_s*).
(8) CE(ō_s) ≠ CE(ō_s*) or CD(ō_s) ≠ CD(ō_s*) → CD(ō_s*) ≥ CE(ō_s*).
Together, (1)-(8) are necessary and sufficient conditions for every state of a
system to be secure in any system that does not contain states that are unreach-
able from s_0.
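The theorem's first four conditions have a simple static reading: a contained entity never exceeds its container's classification, and a user's output device never holds information above that user's clearance. A sketch of such a state check, assuming linearly ordered levels (the names and dictionary representation are our own):

```python
# Sketch of a static state check corresponding to the containment and
# output-device conditions of the theorem.
LEVELS = {"UNCLASSIFIED": 0, "CONFIDENTIAL": 1, "SECRET": 2, "TOP SECRET": 3}

def containment_ok(ce: dict, h: dict) -> bool:
    """ce: entity name -> level; h: container name -> contained entity names.
    Every contained entity must be classified no higher than its container."""
    return all(LEVELS[ce[x]] <= LEVELS[ce[y]] for y in h for x in h[y])

def output_ok(ce: dict, h: dict, cu: dict, out_dev: dict) -> bool:
    """cu: user -> clearance; out_dev: user -> that user's output container.
    Output devices may hold only information within the user's clearance."""
    return all(LEVELS[ce[x]] <= LEVELS[cu[w]]
               for w, o in out_dev.items() for x in h[o])

ce = {"file": "SECRET", "m1": "SECRET", "m2": "UNCLASSIFIED", "out": "SECRET"}
h = {"file": ["m1", "m2"], "out": ["m2"]}
print(containment_ok(ce, h))                                  # True
print(output_ok(ce, h, {"w": "CONFIDENTIAL"}, {"w": "out"}))  # True
```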
6. CONCLUSIONS
We favor an approach to building secure systems that includes an application-
based security model. An instance of such a model and its formalization have
been presented. They are intended as examples for others who wish to use this
approach. Important aspects of the model are summarized below:
(1) Because it is framed in terms of operations and data objects that the user
sees, the model captures the system's security requirements in a way that is
understandable to users.
(2) The model defines a hierarchy of entities and references; access to an
entity can be controlled based on the path used to refer to it.
(3) Because the model avoids specifying implementation strategies, software
developers are free to choose the most effective implementation.
(4) The model and its formalization provide a basis for certifiers to assess the
security of the system as a whole.
Simplicity and clarity in the model's statement have been primary goals. The
model's statement does not, however, disguise the complexity that is inherent in
the application. In this respect, we have striven for a model that is as simple as
possible but stops short of distorting the user's view of the system.
The work reported here demonstrates the feasibility of defining an application-
based security model informally and subsequently formalizing it. The security
model described has been used almost without change by another message system
project [9], and has been adapted for use in document preparation and biblio-
graphic systems [2].
Judgments about the viability of our approach as a whole must await its
application in building full-scale systems. This we are pursuing in the develop-
ment of message system prototypes [11, 12].
ACKNOWLEDGMENTS
Many individuals contributed to the work reported here. Discussions with David
Parnas led to an initial version of the security model. Later revisions of the
model were based on reviews by Jon Millen, Stan Wilson, Mark Cornwell, Rob
Jacob, Jim Miller, Marv Schaefer, and others too numerous to mention. Participants
in the 1982 Air Force Summer Study on Multilevel Data Management
Security also provided many helpful comments. For providing the funding that
allows us to continue our work, we are grateful to H. O. Lubbes of the Naval
Electronic Systems Command and to the Office of Naval Research.
REFERENCES
1. AMES, S.R., JR., AND OESTREICHER, D.R. Design of a message processing system for a
multilevel secure environment. In Proceedings of the AFIPS 1978 National Computer Conference
(June 5-8), Vol. 47. AFIPS Press, Reston, Va., 765-771.
2. Air Force Studies Board. Multilevel Data Management Security. Commission on Engineering
and Technical Systems, National Research Council, National Academy Press, Washington, D.C.,
1983.
3. BELL, D.E. Secure computer systems: A refinement of the mathematical model. MTR-2547,
Vol. III, MITRE Corp., Bedford, Mass., Apr. 1974, 30-31. Available as NTIS AD 780 528.
4. BELL, D.E., AND LAPADULA, L.J. Secure computer system: Unified exposition and Multics
interpretation. MTR-2997, MITRE Corp., Bedford, Mass., Mar. 1976. Available as NTIS ADA
023 588.
5. BIBA, K.J. Integrity considerations for secure computer systems. ESD-TR-76-372, ESD/AFSC,
Hanscom AFB, Bedford, Mass., Apr. 1977. Available as MITRE MTR-3153, NTIS AD A039 324.
6. COHEN, E. Information transmission in computational systems. In Proceedings of the 6th ACM
Symposium on Operating Systems Principles, West Lafayette, Ind. ACM SIGOPS Oper. Syst.
Rev. 11, 5 (Nov. 1977), 133-139.
7. DENNING, D.E. A lattice model of secure information flow. Commun. ACM 19, 5 (May 1976),
236-243.
8. FEIERTAG, R.J., LEVITT, K.N., AND ROBINSON, L. Proving multilevel security of a system
design. In Proceedings of the 6th ACM Symposium on Operating Systems Principles, West
Lafayette, Ind. ACM SIGOPS Oper. Syst. Rev. 11, 5 (Nov. 1977), 57-65.
9. FORSDICK, H.C., AND THOMAS, R.H. The design of Diamond--A distributed multimedia
document system. BBN Rep. 5204, Bolt, Beranek, and Newman, Cambridge, Mass., Oct. 1982.
10. HEITMEYER, C.L., AND WILSON, S.H. Military message systems: Current status and future
directions. IEEE Trans. Commun. COM-28, 9 (Sept. 1980), 1645-1654.
11. HEITMEYER, C.L., LANDWEHR, C.E., AND CORNWELL, M.R. The use of quick prototypes in
the secure military message systems project. ACM SIGSOFT Softw. Eng. Notes 7, 5 (Dec. 1982),
85-87.
12. HEITMEYER, C.L., AND LANDWEHR, C.E. Designing secure message systems: The Military
Message Systems (MMS) project. In Proceedings of the IFIP 6.5 Working Conference on
Computer-Based Message Services (Nottingham, U.K., May 1984) Elsevier North-Holland, New
York, pp. 245-255.
13. LANDWEHR, C.E. Assertions for verification of multilevel secure military message systems.
ACM SIGSOFT Softw. Eng. Notes 5, 3 (July 1980), 46-47.
14. LANDWEHR, C.E. Formal models for computer security. ACM Comput. Surv. 13, 3 (Sept. 1981),
247-278.
15. LANDWEHR, C.E. What security levels are for and why integrity levels are unnecessary. NRL
Tech. Memo 7590-308:CL:uni, Naval Research Laboratory, Washington, D.C., Feb. 1982.
16. LANDWEHR, C.E., AND HEITMEYER, C.L. Military message systems: Requirements and security
model. NRL Memo. Rep. 4925, Naval Research Laboratory, Washington, D.C., Sept. 1982.
Available as NTIS ADA 119 960.
17. MCCAULEY, E.J., AND DRONGOWSKI, P.J. KSOS--The design of a secure operating system.
In Proceedings of the AFIPS 1979 National Computer Conference (June 4-7), Vol. 48. AFIPS
Press, Reston, Va., 345-353.
18. MCLEAN, J. A comment on the basic security theorem of Bell and LaPadula. Inf. Proc. Lett.,
Elsevier North-Holland, New York, 1984, to be published.
19. MOOERS, C.D. The HERMES guide. BBN Rep. 4995, Bolt, Beranek, and Newman, Cambridge,
Mass., Aug. 1982.
20. POPEK, G.J., AND FARBER, D.A. A model for verification of data security in operating systems.
Commun. ACM 21, 9 (Sept. 1978), 737-749.
21. ROTHENBERG, J. SIGMA message service: Reference manual, Version 2.3. Rep. ISI/TM-78-
11.2, USC/Inform. Sci. Inst., Marina del Rey, Calif., June 1979. Available as NTIS ADA 072 840.
22. STOTZ, R., TUGENDER, R., AND WILCZYNSKI, D. SIGMA--An interactive message service for
the military message experiment. In Proceedings of the AFIPS 1979 National Computer Conference
(June 4-7), Vol. 48. AFIPS Press, Reston, Va., 855-861.
23. WILSON, S.H., GOODWIN, N.C., BERSOFF, E.H., AND THOMAS, N.M., III. Military message
experiment--Vol. I executive summary. NRL Rep. 4454, Naval Research Laboratory, Washing-
ton, D.C., Mar. 1982. Available as NTIS ADA 112 789.
24. WOODWARD, J.P.L. Applications for multilevel secure operating systems. In Proceedings of the
AFIPS 1979 National Computer Conference (June 4-7), Vol. 48. AFIPS Press, Reston, Va.,
319-328.