Srs DST: Casting From To

This document discusses various vulnerabilities in object-oriented programming (OOP) and how they can be exploited. It provides examples of vulnerabilities related to casting, private constructors, private members, virtual methods, unsafe plugins, scripting languages, and virtual machine environments. It also discusses mitigations for these vulnerabilities like sandboxing, evidence-based security, and verifying code integrity.


Vulnerabilities in OOP: bad casting
Casting from srs to dst
#include <iostream>

class dst { public: int x; };

class srs
{
private:
    int x;
public:
    srs(int a) : x(a) {}
    // user-defined conversion operator copies the private field out
    operator dst() { dst d; d.x = x; return d; }
};

int main()
{
    srs s(1);
    dst d = (dst)s;     // the cast invokes operator dst()
    std::cout << d.x;   // prints the private value 1
    return 0;
}

The cast gives access to private data.
Using private constructors

public class Capability
{
    private int capabilities;
    static private int specialCapability = 4;

    private Capability(int c) { capabilities = c; }

    public bool Test(int capnum)
    { return (capabilities & (1 << capnum)) != 0; }

    // the only way to obtain an instance: the factory strips the
    // special capability bit before calling the private constructor
    static public Capability Create(int c)
    { return new Capability(c & ~(1 << specialCapability)); }
}
Using private members with restrictive
accessors (a setter that allows only one assignment)

class SetException {};

class Uid
{
private:
    int id;
public:
    Uid() : id(0) {}
    int get() { return id; }
    void set(int new_id)
    {
        if (id == 0)
            id = new_id;          // first assignment succeeds
        else
            throw SetException(); // any later change is rejected
    }
};
Vulnerabilities in OOP: virtual-method
vulnerabilities

 vtable; vptr
 Countermeasures:
 placing them before member variables in
memory
 but what if several objects are allocated in
contiguous memory?
Vulnerabilities in OOP: unsafe plugins
(Abadi and Fournet, 2003)

using System;
using System.IO;

namespace ConsoleApplication1
{
    abstract class Trusted
    {   // full privileges
        static protected String tempfile = "/tmp/tempfile";
        abstract public void proceed();

        static void Main()
        {
            BadPlugin bp = new BadPlugin();
            try { bp.proceed(); }
            // the plugin has redirected tempfile, so the trusted
            // cleanup code now deletes /etc/passwd on its behalf
            catch (Exception) { File.Delete(Trusted.tempfile); throw; }
        }
    }

    class BadPlugin : Trusted
    {   // low privileges
        override public void proceed()
        {
            tempfile = "/etc/passwd";
            throw new Exception();  // force the trusted catch block to run
        }
    }
}
Who cares about security?
Businesses do:
Feb 2000:
● Yahoo, Buy.com, Amazon.com, CNN, etc. shut down
by a massive DDoS attack. Yahoo lost more than
$1m per minute...
Aug 2000:
● Fake news report posted on an internet news agency's
computer - Emulex Corporation's CEO resigns and
quarterly earnings restated as a loss, not a profit. Share
price drops 60% in hours ($billions!).
Sandbox model - a security technique that runs
applications from unsafe sources in an
isolated environment (sandbox) so that they can
be tested without any breach of privilege
affecting the original system. This isolation
can be:
 a virtual machine providing validation of data
types and authorization for access to memory;
 a native API controlling access to resources. In
this case, methods like
SecurityManager.checkPermission
(checkRead, checkAccept, ...)
or similar variants are called before any sensitive
operation.
What is the .NET Framework?
 Microsoft’s cross-language development platform
• Execution environment / VM:
Common Language Runtime (CLR)
• Intermediate Language: MSIL (similar to bytecode)
• Class libraries: “The Framework” (FCL/BCL)
• Language compilers (30+, MS & 3rd party)
• Development tools: Visual Studio .NET

 Any language that compiles to MSIL can use and
extend the .NET Framework

 The CLR provides a secure execution environment for
“partially-trusted code”
Vulnerabilities in Virtual-Machine
Environments
 bytecode in Java and IL in .NET must run on
multiple processors ==> the compiled code
is simpler, and therefore easier to
understand (and reverse-engineer).
 even unmanaged C++ is not secure when
shipped as a desktop application.
 ILDASM.EXE provides well-formatted
IL code enriched with metadata
 Countermeasures:
 either use unmanaged (non-IL) code,
 or have all valuable code run on servers and
expose only interfaces, so there is no chance of
anyone seeing the IL code /see validations on the web/
Vulnerabilities in Virtual-Machine
Environments
 DLL hell
 Incompatible versions
 DLL stomping
 Incorrect COM registration
 Shared in-memory modules
 Lack of serviceability
 Countermeasure: strong names, which consist of
the file's simple text name, version number, and
culture information, plus a public key and a
digital signature
Vulnerabilities in Virtual-Machine
Environments
 the application contains hints about the IT
infrastructure (e.g. SQL statements, DB
connection strings, etc.)
 Don’t store secrets in config file !
 Avoid writing software that requires admin
privileges !
Vulnerabilities in Scripting
Languages
 protecting source code is a vital part of
information security (keeping attackers without
access to source code).
 scripting languages are not compiled
=> the source code is revealed
 client-side scripting for validation
 reduces round-trips for sensitive data
 but having validation routines in clear text can
tell an attacker the kind of input that the
server cannot handle
Vulnerabilities in Scripting
Languages
• .NET config files are stored as XML !

• “hidden fields” echo pricing data to a
web browser, usually in the absence of
another method of maintaining state

• Promote web services and server-
based programming
Vulnerabilities in Scripting
Languages
• XSS (“Cross-Site Scripting”) -
a vulnerability that allows the injection of
executable script code, through a variable or an
unfiltered input field, into a web page served by a
server
Vulnerabilities in Scripting
Languages - XSS
<IMG SRC="javascript:alert('XSS')"
<SCRIPT>alert("XSS");//<</SCRIPT>
<IMG """><SCRIPT>alert("XSS")</SCRIPT>">
<DIV STYLE="width: expression(alert('XSS'));">
<IMG STYLE="xss:expr/*XSS*/ession(alert('XSS'))">
<SCRIPT>window.location.href=
"http://www.ase.ro"//<</SCRIPT>
validateRequest=false (ASP.NET setting that disables built-in request validation)
Test:
http://localhost/stareSesiune2/default.aspx
Key Components of the
.NET Security System
 Type safety and verification
 Permissions, demands and stack inspection
 Policy/trust management system for
assigning permissions to assemblies
 Application deployment model
Type Safety & Verification
 CLR provides memory protection
through type safety verification
 Every MSIL method is type-safe
verified before being allowed to be
called by another method
 Verification happens as part of JIT
compilation to binary
 The right to run unverifiable code is
governed by a security permission (like
any other privileged operation)
Strong typing
Strong typing: strict enforcement of type rules, with
no exceptions. All types are known at compile
time, i.e. are statically bound. For variables
that can store values of more than one type,
incorrect type usage is detected at run
time.

 An integer is not a pointer (reference).
 A byte array is not a function.
 An array / string cannot be accessed beyond its
bounds.
 All variables are initialized before being read.
Strong typing
Can be enforced
 dynamically (an ill-typed program halts):
check types during execution; tag data with their types.
Example: when evaluating x + y, check that x and y contain integers;
compute the sum and tag it as an integer.

 statically (an ill-typed program is rejected at
compile time):
check types at compile time, using static analysis; execution proceeds
without type tests, on untagged data.
Example: when compiling x + y, check that x and y have been declared
as integers; record that this expression has type int.
Strong typing does not cover
bad casting
Casting from:
class C { int x; }
to
class D { byte[] a; }

causes the pointer a to be forged from the integer x.


.NET security

 security mechanism with two general
features:
 Code Access Security
 Validation and verification
 Code Access Security uses evidence to
determine the permissions granted to the
code
 Code that performs some privileged action will
make a demand for one or more permissions
.NET security

 validation: the CLR checks that the assembly
contains valid metadata and CIL, and
that the internal tables are correct
 verification: checks whether the code
does anything 'unsafe'

 Assemblies are the core unit of code
development and distribution
Similar to a .DLL or shared library
 Assemblies are the minimum code unit
that has an identity

 Permissions are granted on an assembly-
wide basis
All methods in the same assembly have the same rights

 Applications are collections of
assemblies dynamically assembled based
on referenced classes
Evidence
 Application directory
 Publisher - assembly's publisher's digital
signature
 URL- the complete URL where the library was
downloaded from.
 Site - hostname of the URL.
 Zone - defined security zones (defined by
browser)
 Hash- a cryptographic hash of the assembly,
which identifies a specific version.
 Strong Name - the assembly's strong name (simple
name, version and public key), which uniquely
identifies the assembly and its publisher
Policy
A set of expressions using evidence to determine code-
group membership, which yields a permission set for
the assemblies

 Enterprise - policy for a family of
machines that are part of an Active
Directory installation.
 Machine - policy for the current machine.
 User - policy for the logged-on user.
 AppDomain - policy for the executing
application domain.
Permissions, Demands &
Stack Inspection
A permission is a set (or subset) of capabilities
with respect to a resource. Ex:
FileIOPermission(READ, “c:\”)
 Most permissions are code-access permissions
and implement stack-walking semantics
 A demand for a code-access permission must
be satisfied by the grants of every stack frame
above the demanding frame
 Stack-walking is a defense against luring
attacks:
less-trusted code tricking more-trusted code into
performing protected operations
Stack Inspection
 A group of techniques that controls how methods call each
other, ensuring that a method with reduced privileges cannot
indirectly benefit from the extensive privileges of another method by
calling it
 File permissions: read, write, execute, delete (for files).
 Socket permissions: connect, accept connections (for the
host server).
 Runtime permissions: exit the VM, load native code, define your own
class loader, define your own security manager (for launching
a task).
 GUI permissions: access the clipboard, read pixels on the screen
(for user interfaces).

 The point is that a function, even in a library, operates differently when it
is called from secure code (such as the browser) than from unsafe code (such as
an applet).
Policy Evaluation in the CLR
 Security policy evaluation is the process
of determining the set of permissions to
grant to code based on evidence known
about that code
 Evidence is typically info about a code
assembly (e.g. publisher identity, URL)
 We assign permissions to assemblies in
the CLR, just as we assign rights to
groups of users.
// enumerate the evidence the CLR has collected for this assembly
Assembly thisAssembly =
    Assembly.GetExecutingAssembly();
Evidence ev = thisAssembly.Evidence;

textBox1.Text = "Host Evidence: ";
IEnumerator enumerator = ev.GetHostEnumerator();
while (enumerator.MoveNext())
{
    textBox1.Text += enumerator.Current + "\r\n";
}
textBox1.Text += "\r\n";

textBox1.Text += "Assembly Evidence:";
enumerator = ev.GetAssemblyEnumerator();
while (enumerator.MoveNext())
{
    textBox1.Text += enumerator.Current + "\r\n";
}
Memory management in a
GC’ed environment
 Usually, developers don’t have to explicitly
manage memory in a GC’d environment, but
that’s not quite true: they have to worry
about manually clearing sensitive data as
soon as they are done with it

 SecureString: specialized class for handling
secrets like passwords (V2 feature)

 Proper use has impact throughout the class
libraries (e.g. GUI, XML processing, etc.)
Memory management in a
GC’ed environment
Explicit deallocation:
 Free block A, keep the reference to A
around, and wait until the memory manager moves
B into A’s spot.
Overwriting:
 Writing well-chosen integers or references
into the vtable allows arbitrary code to be
executed.
Get the App Model Right
 A single API set/framework/SDK can support
multiple application models
 The corresponding security system has to work for
all of these models
 But it should best support and facilitate the model
that developers use most
 CLR V1 security was aimed at rich clients built
from a soup of assemblies from multiple sources
 Most developers don’t do this – they write single-
source applications
The “ClickOnce” Model
 An “application-based model” built on top of V1 security
primitives
 Key features added for V2
 An “application” now has an identity of its own: a collection of
assemblies + metadata, defined by an XML manifest
 Applications are self-describing
 The application becomes the common unit of code
deployment, not the assembly
 Applications that run in the sandbox with the default set of
permissions “just run”
 Applications that require “elevated” permissions (more than
the default) have to declare additional necessary grants in the
manifest
 User consent required to run, but decisions can be persisted
based on code signer identity
Some common reasons why
users turn off or ignore the security
system
 Too hard to develop in the security model.
 The system is too hard to administer;
 Can’t easily figure out the correct security
policy or map it to real-world requirements
 The system is too restrictive – it doesn’t let
them get their work done
Axiom 1 (Murphy) All programs are buggy.
Theorem 1 (Law of Large Programs) Large
programs are even buggier than their size
would indicate.
Corollary 1.1 A security-relevant program has
security bugs.
Theorem 2 If you do not run a program, it
does not matter whether or not it is buggy.
Corollary 2.1 If you do not run a program, it
does not matter if it has security holes.
Theorem 3 Exposed machines should run as
few programs as possible; the programs that
are run should be as small as possible.
