MANDT Kernelpool PAPER
Tarjei Mandt
1 Introduction
As software bugs are hard to completely eliminate due to the complexity of
modern day computing, vendors are doing their best to isolate and prevent ex-
ploitation of security vulnerabilities. Mitigations such as DEP and ASLR have
been introduced in contemporary operating systems to address a variety of com-
monly used exploitation techniques. However, as exploit mitigations do not ad-
dress the root cause of security vulnerabilities, there will always be edge case
scenarios where they fall short. For instance, DEP alone is easily circumvented
using return-oriented programming (ROP) [15]. Furthermore, novel techniques
leveraging the capabilities of powerful application-embedded scripting engines
may bypass DEP and ASLR completely [4].
A complementary approach to exploit mitigations is privilege isolation. By
imposing restrictions on users and processes using the operating system’s built-
in security mechanisms, an attacker cannot easily access and manipulate system
files and registry information in a compromised system. Since the introduction
of user account control (UAC) in Vista, users no longer run regular applications
with administrative privileges by default. Additionally, modern browsers [2] and
document readers [13][12] use "sandboxed" render processes to lessen the impact
of security vulnerabilities in parsing libraries and layout engines. In turn, this
has motivated attackers (as well as researchers) to focus their efforts on privilege
escalation attacks. By executing arbitrary code in the highest privileged ring,
operating system security is undermined.
Privilege escalation vulnerabilities are in most cases caused by bugs in the
operating system kernel or third party drivers. Many of the flaws originate in
the handling of dynamically allocated kernel pool memory. The kernel pool is
analogous to the user-mode heap and was for many years susceptible to generic
write-4 attacks abusing the unlink operation of doubly-linked lists [8][16]. In
response to the growing number of kernel vulnerabilities, Microsoft introduced
safe unlinking in Windows 7 [3]. Safe unlinking ensures that the pointers to
adjacent pool chunks on doubly-linked free lists are validated before a chunk is
unlinked.
An attacker’s goal in exploiting pool corruption vulnerabilities is to ulti-
mately execute arbitrary code in ring 0. This often starts with an arbitrary
memory write or n-byte corruption at a chosen location. In this paper, we show
that in spite of the security measures introduced, the kernel pool in Windows 7
is still susceptible to generic attacks. In turn, these attacks may allow an
attacker to fully compromise the operating system kernel. We also show that safe
unlinking, designed to remediate write-4 attacks, may under certain conditions
fail to achieve its goal and allow an attacker to corrupt arbitrary memory. To
thwart the presented attacks, we conclude by proposing ways to further harden
and enhance the security of the kernel pool.
The rest of the paper is organized as follows. In Section 2 we elaborate on
the internal structures and changes made to the Windows 7 (and Vista) kernel
pool. In Sections 3 and 4 we discuss and demonstrate practical kernel pool attacks
affecting Windows 7. In Section 5 we discuss counter-measures and propose ways
to harden the kernel pool. Finally, in Section 6 we provide a conclusion of the
paper.
The pool descriptor holds several important lists used by the memory man-
ager. The delayed free list, pointed to by PendingFrees, is a singly-linked list
of pool chunks waiting to be freed. It is explained in detail in Section 2.8. The
ListHeads is an array of doubly-linked lists of free pool chunks of the same
size. Unlike the delayed free list, the chunks in the ListHeads lists have been
freed and can be allocated by the memory manager at any time. We discuss the
ListHeads in the following section.
The ListHeads lists, or free lists, are ordered by size at 8-byte granularity and
are used for allocations of up to 4080 bytes (the remaining page fragment cannot
be used if the requested size exceeds 4080 bytes). Free chunks are indexed into
the ListHeads array by block size, computed as the requested number of bytes plus
the size of the pool header, rounded up to a multiple of 8 and divided by 8, or
BlockSize = (NumberOfBytes+0xF) >> 3. The extra bytes reserve space for the pool
header, a structure preceding all pool chunks. The pool header is defined as
follows on x86 Windows.
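The structure below reconstructs the x86 pool header from publicly available debug symbols (field names and bit widths as exposed by the Windows 7 symbols); the leading typedefs are only there to make the listing self-contained:

```c
#include <assert.h>
#include <stdint.h>

typedef uint16_t USHORT;
typedef uint32_t ULONG;

typedef struct _POOL_HEADER
{
    union
    {
        struct
        {
            USHORT PreviousSize : 9;  /* block size of the preceding chunk */
            USHORT PoolIndex    : 7;  /* index into the pool descriptor array */
            USHORT BlockSize    : 9;  /* size of this chunk, in 8-byte blocks */
            USHORT PoolType     : 7;  /* pool type OR'ed with the in-use bitmask */
        };
        ULONG Ulong1;                 /* allows the header to be read as one dword */
    };
    union
    {
        ULONG PoolTag;                /* four-character tag identifying the owner */
        struct
        {
            USHORT AllocatorBackTraceIndex;
            USHORT PoolTagHash;
        };
    };
} POOL_HEADER, *PPOOL_HEADER;
```

Note that the header itself occupies exactly one block (8 bytes), which is why the BlockSize formula above adds the header size before rounding.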
The pool header holds information necessary for the allocation and free algo-
rithms to operate properly. PreviousSize indicates the block size of the preced-
ing pool chunk. As the memory manager always tries to reduce fragmentation
by merging bordering free chunks, it is typically used to locate the pool header
of the previous chunk. PreviousSize may also be zero, in which case the pool
chunk is located at the beginning of a pool page.
PoolIndex provides the index into the associated pool descriptor array, such
as nt!ExpPagedPoolDescriptor. It is used by the free algorithm to make sure
the pool chunk is freed to the proper pool descriptor ListHeads. In Section 3.4,
we show how an attacker may corrupt this value in order to extend a pool header
corruption (such as a pool overflow) into an arbitrary memory corruption.
As its name suggests, PoolType defines a chunk’s pool type. However, it also
indicates if a chunk is busy or free. If a chunk is free, PoolType is set to zero. On
the other hand, if a chunk is busy, PoolType is set to its descriptor's pool type
(a value in the POOL_TYPE enum, shown below) OR'ed with a pool-in-use bitmask.
This bitmask is set to 2 on Vista and later, while it is set to 4 on XP/2003. For
example, for a busy paged pool chunk on Vista and Windows 7, PoolType =
PagedPool|2 = 3.
typedef enum _POOL_TYPE
{
NonPagedPool = 0 /*0x0*/,
PagedPool = 1 /*0x1*/,
NonPagedPoolMustSucceed = 2 /*0x2*/,
DontUseThisType = 3 /*0x3*/,
NonPagedPoolCacheAligned = 4 /*0x4*/,
PagedPoolCacheAligned = 5 /*0x5*/,
NonPagedPoolCacheAlignedMustS = 6 /*0x6*/,
MaxPoolType = 7 /*0x7*/,
NonPagedPoolSession = 32 /*0x20*/,
PagedPoolSession = 33 /*0x21*/,
NonPagedPoolMustSucceedSession = 34 /*0x22*/,
DontUseThisTypeSession = 35 /*0x23*/,
NonPagedPoolCacheAlignedSession = 36 /*0x24*/,
PagedPoolCacheAlignedSession = 37 /*0x25*/,
NonPagedPoolCacheAlignedMustSSession = 38 /*0x26*/
} POOL_TYPE, *PPOOL_TYPE;
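As a small illustrative sketch (the helper names are ours, not the kernel's; the enum is a subset of POOL_TYPE above), the busy/free encoding amounts to:

```c
#include <assert.h>

#define POOL_IN_USE_MASK 2   /* Vista and later; 4 on XP/2003 */

enum POOL_TYPE { NonPagedPool = 0, PagedPool = 1 };

/* PoolType field value stored in the header of a busy chunk */
unsigned EncodeBusyPoolType(enum POOL_TYPE type)
{
    return (unsigned)type | POOL_IN_USE_MASK;
}

/* a free chunk simply has PoolType == 0 */
int IsChunkFree(unsigned poolTypeField)
{
    return poolTypeField == 0;
}
```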
If a pool chunk is free and on a ListHeads list, its pool header is immediately
followed by a LIST_ENTRY structure. For this reason, chunks of a single block
size (8 bytes, i.e. the pool header alone) are not maintained by the ListHeads
lists, as they are not large enough to hold the structure.
typedef struct _LIST_ENTRY
{
/*0x000*/ struct _LIST_ENTRY* Flink;
/*0x004*/ struct _LIST_ENTRY* Blink;
} LIST_ENTRY, *PLIST_ENTRY;
The LIST_ENTRY structure is used to join pool chunks on doubly-linked lists.
Historically, it has been the target in exploiting memory corruption vulnerabili-
ties in both the user-mode heap [5] and the kernel pool [8][16], primarily due to
well-known "write-4" exploitation techniques. Microsoft addressed LIST_ENTRY
attacks in the user-mode heap with the release of Windows XP SP2 [5], and
similarly in the kernel pool with Windows 7 [3].
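A minimal model of the safe unlink check reads as follows (illustrative names; the kernel bug checks on failure rather than returning a status):

```c
#include <assert.h>
#include <stddef.h>

typedef struct _LIST_ENTRY {
    struct _LIST_ENTRY *Flink;
    struct _LIST_ENTRY *Blink;
} LIST_ENTRY;

/* Illustrative safe unlink: both neighbors must point back at the
   entry being removed, otherwise the list is corrupt and the unlink
   is refused (the kernel would raise a bug check here). */
int SafeRemoveEntryList(LIST_ENTRY *Entry)
{
    LIST_ENTRY *Flink = Entry->Flink;
    LIST_ENTRY *Blink = Entry->Blink;

    if (Flink->Blink != Entry || Blink->Flink != Entry)
        return 0;   /* corruption detected */

    Blink->Flink = Flink;
    Flink->Blink = Blink;
    return 1;
}
```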
For the paged and non-paged lookaside lists, maximum block size is 0x20.
Hence, there are 32 unique lookaside lists per type. Each lookaside list is defined
by the GENERAL_LOOKASIDE_POOL structure, shown below.
typedef struct _GENERAL_LOOKASIDE_POOL
{
union
{
/*0x000*/ union _SLIST_HEADER ListHead;
/*0x000*/ struct _SINGLE_LIST_ENTRY SingleListHead;
};
/*0x008*/ UINT16 Depth;
/*0x00A*/ UINT16 MaximumDepth;
/*0x00C*/ ULONG32 TotalAllocates;
union
{
/*0x010*/ ULONG32 AllocateMisses;
/*0x010*/ ULONG32 AllocateHits;
};
/*0x014*/ ULONG32 TotalFrees;
union
{
/*0x018*/ ULONG32 FreeMisses;
/*0x018*/ ULONG32 FreeHits;
};
/*0x01C*/ enum _POOL_TYPE Type;
/*0x020*/ ULONG32 Tag;
/*0x024*/ ULONG32 Size;
union
{
/*0x028*/ PVOID AllocateEx;
/*0x028*/ PVOID Allocate;
};
union
{
/*0x02C*/ PVOID FreeEx;
/*0x02C*/ PVOID Free;
};
/*0x030*/ struct _LIST_ENTRY ListEntry;
/*0x038*/ ULONG32 LastTotalAllocates;
union
{
/*0x03C*/ ULONG32 LastAllocateMisses;
/*0x03C*/ ULONG32 LastAllocateHits;
};
/*0x040*/ ULONG32 Future[2];
} GENERAL_LOOKASIDE_POOL, *PGENERAL_LOOKASIDE_POOL;
In order to allocate pool memory, kernel modules and third-party drivers call
ExAllocatePoolWithTag (or any of its wrapper functions), exported by the ex-
ecutive kernel. This function will first attempt to use the lookaside lists, followed
by the ListHeads lists, and if no pool chunk could be returned, request a page
from the pool page allocator. The following pseudocode roughly outlines its im-
plementation.
PVOID
ExAllocatePoolWithTag( POOL_TYPE PoolType,
                       SIZE_T NumberOfBytes,
                       ULONG Tag)
{
    // 1. try the (per-processor) lookaside lists
    // 2. try the pool descriptor ListHeads lists, splitting oversized chunks
    // 3. otherwise, request a fresh page from nt!MiAllocatePoolPages
}
If a chunk larger than the size requested is returned from the ListHeads[n]
list, the chunk is split. In order to reduce fragmentation, the part of the oversized
chunk returned by the allocator depends on its relative page position. If the
chunk is page aligned, the requested size is allocated from the front of the chunk.
If the chunk is not page aligned, the requested size is allocated from the back
of the chunk. Either way, the remaining (unused) fragment of the split chunk is
put at the tail of the appropriate ListHeads list.
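The split policy can be sketched as follows; PAGE_ALIGNED and the block arithmetic are simplified stand-ins for the kernel's internal logic:

```c
#include <assert.h>
#include <stdint.h>

#define PAGE_ALIGNED(p) (((uintptr_t)(p) & 0xFFF) == 0)

/* Returns the address handed to the caller when a free chunk of
   ChunkBlocks blocks satisfies a request of WantBlocks blocks
   (one block = 8 bytes on x86). The unused fragment stays behind
   and is put at the tail of the appropriate ListHeads list. */
uintptr_t SplitChunk(uintptr_t Chunk, unsigned ChunkBlocks, unsigned WantBlocks)
{
    if (ChunkBlocks == WantBlocks)
        return Chunk;                       /* exact fit: no split */

    if (PAGE_ALIGNED(Chunk))
        return Chunk;                       /* page aligned: allocate from the
                                               front, fragment keeps the tail */

    /* not page aligned: allocate from the back of the chunk so the
       fragment stays where it is */
    return Chunk + (uintptr_t)(ChunkBlocks - WantBlocks) * 8;
}
```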
VOID
ExFreePoolWithTag( PVOID Entry,
                   ULONG Tag)
{
    if (PAGE_ALIGNED(Entry)) {
        // large allocation: call nt!MiFreePoolPages
        // return on success
    }
    ...
    if (Entry->BlockSize != NextEntry->PreviousSize)
        BugCheckEx(BAD_POOL_HEADER);
    ...
}
The DELAY FREE pool flag (nt!ExpPoolFlags & 0x200) enables a perfor-
mance optimization that frees several pool allocations at once to amortize pool
acquisition and release. This mechanism was briefly mentioned in [11] and is
enabled on Windows XP SP2 and later if the number of available physical
pages (nt!MmNumberOfPhysicalPages) is greater than or equal to 0x1fc00, which
roughly translates to 508 megabytes of RAM on IA-32 and AMD64 architectures.
When used, every new call to ExFreePoolWithTag appends the chunk to be freed to
the PendingFrees list, specific to each pool descriptor. If the list holds 32 or
more chunks (determined by PendingFreeDepth), it is processed in a call to
ExDeferredFreePool. This function iterates over each entry and frees it to the
appropriate ListHeads list, as illustrated by the following pseudocode.
VOID
ExDeferredFreePool( PPOOL_DESCRIPTOR PoolDesc,
                    BOOLEAN bMultipleThreads)
{
    // walk the PendingFrees list and free each chunk to the
    // appropriate ListHeads list
}
Frees to the lookaside and pool descriptor ListHeads are always put in the
front of the appropriate list. Exceptions to this rule are remaining fragments of
split blocks which are put at the tail of the list. Blocks are split when the memory
manager returns chunks larger than the requested size (as explained in Section
2.7), such as full pages split in ExpBigPoolAllocation and ListHeads entries
split in ExAllocatePoolWithTag. In order to use the CPU cache as frequently
as possible, allocations are always made from the most recently used chunks,
from the front of the appropriate list.
[Figure: Safe unlink bypass. A pool overflow corrupts a free chunk's LIST_ENTRY
so that its Flink points to a crafted FakeEntry. Safe unlinking validates
NextEntry.Blink, PreviousEntry.Flink, and ListHeads[n].Flink/Blink, yet after
the unlink FakeEntry.Blink = ListHeads[n] and ListHeads[n].Flink = FakeEntry.]
This attack requires at least two free chunks to be present on the target
ListHeads[n] list. Otherwise, ListHeads[n].Blink will validate the unlinked
chunk’s forward link. In Example 1, the forward link of a pool chunk on a
ListHeads list has been corrupted with an address chosen by the attacker. In
turn, when this chunk is allocated in ExAllocatePoolWithTag, the algorithm
attempts to write the address of ListHeads[n] (esi) to the backward link of
the LIST_ENTRY structure at the attacker-controlled address (eax).
nt!ExAllocatePoolWithTag+0x4b7:
8296f067 897004 mov dword ptr [eax+4],esi ds:0023:80808084=????????
Lookaside lists are designed to be fast and lightweight, hence do not introduce the
same consistency checking as the doubly-linked ListHeads lists. Being singly-
linked, each entry on a lookaside list holds a pointer to the next entry. As there
are no checks asserting the validity of these pointers, an attacker may, using a
pool corruption vulnerability, coerce the pool allocator into returning an arbi-
trary address in retrieving the next free lookaside chunk. In turn, this may allow
the attacker to corrupt arbitrary kernel memory.
[Figure: Lookaside list attack. A pool overflow corrupts the Next pointer of a
chunk on a per-processor lookaside list (PPNPagedLookasideList[n]); after an
allocation has been made for that BlockSize, the list head's Next pointer
points to an arbitrary attacker-supplied address, which is returned by a
subsequent allocation.]
[Figure: Pool page attack. A pool overflow corrupts the page-aligned Next
pointer of an entry on a pool page free list (e.g. NonPagedPoolSListHead[n]),
such that nt!MiAllocatePoolPages returns a page at an address the attacker
controls.]
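The weakness can be modeled with an ordinary singly-linked pop (an illustrative sketch; the kernel uses interlocked S-list operations, but the absence of validation is the same):

```c
#include <assert.h>
#include <stddef.h>

typedef struct _SINGLE_LIST_ENTRY {
    struct _SINGLE_LIST_ENTRY *Next;
} SINGLE_LIST_ENTRY;

/* Illustrative lookaside pop: the head is replaced with the popped
   entry's Next pointer without any validation, so a corrupted Next
   pointer is handed out verbatim by a later allocation. */
void *LookasidePop(SINGLE_LIST_ENTRY *ListHead)
{
    SINGLE_LIST_ENTRY *entry = ListHead->Next;
    if (entry != NULL)
        ListHead->Next = entry->Next;   /* attacker-controlled on overflow */
    return entry;
}
```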
Recall from Section 2.8 that pool entries waiting to be freed are stored on singly-
linked PendingFrees lists. As no checks are performed in traversing these lists,
an attacker could leverage a pool corruption vulnerability to corrupt the Next
pointer of a PendingFrees list entry. In turn, this would allow the attacker to
free an arbitrary address to a chosen pool descriptor ListHeads list and possibly
control the memory of subsequent pool allocations (Figure 4).
[Figure 4: PendingFrees pointer attack. A pool overflow corrupts the Next
pointer of a chunk on the PendingFrees list of a pool descriptor (PoolType at
offset 0x0, PagedLock at 0x4, ListHeads[512] at 0x140), causing an arbitrary
address to be freed to a chosen ListHeads list.]
If more than one pool descriptor is defined for a given pool type, a pool chunk’s
PoolIndex denotes the index into the associated pool descriptor array. Hence,
when working with ListHeads entries, a pool chunk is always freed to its proper
pool descriptor. However, due to insufficient validation, a malformed PoolIndex
may trigger an out-of-bounds array dereference and subsequently allow an at-
tacker to overwrite arbitrary kernel memory.
[Figure: The paged pool descriptor array (nt!ExpPagedPoolDescriptor), in which
only the lower entries (e.g. indices 2 and 3, at 8b1ae280 and 8b1af3c0) point
to valid descriptors, while entries beyond the configured number of paged pools
(here, up to index 15) are null. A malformed PoolIndex selects one of the null
entries; within a descriptor, PendingFrees is at offset 0x100 and the
ListHeads[n] Flink/Blink pairs start at offset 0x140 + N*8.]
For paged pools, PoolIndex always denotes an index into the paged pool
descriptor array (nt!ExpPagedPoolDescriptor). On checked builds, the index
value is validated in a compare against nt!ExpNumberOfPagedPools to pre-
vent any out-of-bounds array access. However, on free (retail) builds, the in-
dex is not validated. For non-paged pools, PoolIndex denotes an index into
nt!ExpNonPagedPoolDescriptor only when there are multiple nodes present in
a NUMA-aware system. Again, on free builds, PoolIndex is not validated. Note
that each pool descriptor implements a lock, so two threads will never actually
operate on the same free list simultaneously.
A malformed PoolIndex (requiring only a 2-byte pool overflow) may cause an
allocated pool chunk to be freed to a null pool descriptor pointer (Figure 5). By
mapping the virtual null page, an attacker may fully control the pool descriptor
and its ListHeads entries. In turn, this may allow the attacker to write the
address of a pool chunk to an arbitrary address when linking in to a list. This is
because the Blink of the chunk currently in front is updated with the address
of the freed chunk, such that ListHeads[n].Flink->Blink = FreedChunk. Of
note, as the freed chunk is not returned to any real pool descriptor, there is no
need to clean up (remove stale entries, etc.) the kernel pool.
[Figure 5: PoolIndex overwrite. An out-of-bounds PoolIndex selects a null entry
in the pool descriptor array; with the null page mapped, the attacker controls
the descriptor's ListHeads, and the freed chunk is put in front of
ListHeads[n], writing its address to ListHeads[n].Flink->Blink.]
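The write primitive just described can be modeled by simulating the link-in operation against a fully attacker-controlled list head (illustrative names; the 4-byte offset assumes the x86 LIST_ENTRY layout):

```c
#include <assert.h>
#include <stddef.h>

typedef struct _LIST_ENTRY {
    struct _LIST_ENTRY *Flink;
    struct _LIST_ENTRY *Blink;
} LIST_ENTRY;

/* Illustrative insertion at the front of a ListHeads[n] list: if the
   attacker controls the descriptor, ListHead->Flink can be pointed one
   pointer-width below a chosen target so that the Flink->Blink update
   writes the chunk's address to the target. */
void LinkInFront(LIST_ENTRY *ListHead, LIST_ENTRY *Chunk)
{
    Chunk->Flink = ListHead->Flink;
    Chunk->Blink = ListHead;
    ListHead->Flink->Blink = Chunk;   /* the arbitrary write */
    ListHead->Flink = Chunk;
}
```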
If delayed pool frees (as described in Section 2.8) is enabled, a similar effect
can be achieved by creating a fake PendingFrees list (Figure 6). In this case, the
first entry on the list would point to an attacker controlled address. Additionally,
the value of PendingFreeDepth in the pool descriptor would be greater or equal
to 0x20 to trigger processing of the PendingFrees list.
Example 2 demonstrates how a PoolIndex overwrite could potentially cause
a user-controlled page address (eax) to be written to an arbitrary destination
address (esi). In order to execute arbitrary code, an attacker could leverage
this method to overwrite an infrequently used kernel function pointer with the
user-mode page address, and trigger its execution from the same process context.
nt!ExDeferredFreePool+0x2e3:
8293c943 894604 mov dword ptr [esi+4],eax ds:0023:80808084=????????
The PoolIndex overwrite attack can be applied to any pool type if the
chunk's PoolType is also overwritten (e.g. by setting it to PagedPool). As this
requires the BlockSize to be overwritten as well, the attacker must either know
the size of the overflowed chunk or create a fake bordering chunk embedded inside
it. This is required because FreedBlock->BlockSize = NextBlock->PreviousSize
must hold, as checked by the free algorithm. Additionally, the block size should
be greater than 0x20 to avoid the lookaside lists (which ignore the PoolIndex).
Note, however, that embedded pool chunks may potentially corrupt important fields
or pointers in the chunk data.
As processes can be charged for allocated pool memory, pool allocations must
provide sufficient information for the pool algorithms to return the charged quota
to the right process. For this reason, pool chunks may optionally store a pointer
to the associated process object. On x64, the process object pointer is stored in
the last eight bytes of the pool header as described in Section 2.9, while on x86,
the pointer is appended to the pool body. Overwriting this pointer (Figure 7) in
a pool corruption vulnerability could allow an attacker to free an in-use process
object or corrupt arbitrary memory in returning the charged quota.
[Figure 7: Quota process pointer overwrite. On x86, a pool overflow into the
bordering chunk overwrites the process object pointer appended to the pool
body, which points to the EPROCESS structure (and, through it, the
EPROCESS_QUOTA_BLOCK) charged for the allocation.]
VOID
InitPoolDescriptor ( PPOOL_DESCRIPTOR PoolDescriptor,
                     PPOOL_HEADER PoolAddress,
                     PVOID WriteAddress )
{
    ULONG i;

    // queue the fake chunk on the delayed free list, with a depth
    // large enough to trigger processing on the next free
    PoolDescriptor->PendingFrees = (PSINGLE_LIST_ENTRY)(PoolAddress + 1);
    PoolDescriptor->PendingFreeDepth = 0x20;

    // point each Flink 4 bytes below the target so that linking in
    // (ListHeads[n].Flink->Blink = Chunk) writes to WriteAddress
    for (i = 0; i < 512; i++) {
        PoolDescriptor->ListHeads[i].Flink = (PLIST_ENTRY)((PUCHAR)WriteAddress - 4);
        PoolDescriptor->ListHeads[i].Blink = (PLIST_ENTRY)((PUCHAR)WriteAddress - 4);
    }
}
Listing 1. Function initializing a crafted pool descriptor
We assume the delayed free list is used, as most systems have 512 MB of RAM
or more. Thus, the address of the user-controlled pool chunk will end up being
written to the address indicated by WriteAddress in the process of linking in.
This can be leveraged to overwrite a kernel function pointer, making exploitation
trivial. If the delayed free list were not used, the address of the freed kernel
pool chunk (a kernel address) would be written to the address specified, in
which case other means, such as partial pointer overwrites, would be required
to execute arbitrary code.
The final task before triggering the overflow is to initialize the memory
pointed to by PoolAddress such that the fake pool chunk (on the pending frees
list) is properly returned to the crafted ListHeads lists (triggering the arbitrary
write). In the function of Listing 2 we create a layout of two bordering pool
chunks for which PoolIndex again references an out-of-bounds index into the
associated pool descriptor array. Additionally, BlockSize must be large enough
to prevent the lookaside lists from being used.
#define BASE_POOL_TYPE_MASK 1
#define POOL_IN_USE_MASK 2
#define BLOCK_SHIFT 3 // 4 on x64

VOID
InitPoolChunks ( PVOID PoolAddress, USHORT BlockSize )
{
    POOL_HEADER * pool;
    SLIST_ENTRY * entry;

    // chunk to be freed
    pool = (POOL_HEADER *)PoolAddress;
    pool->PreviousSize = 0;
    pool->PoolIndex = 5; // out-of-bounds pool index
    pool->BlockSize = BlockSize;
    pool->PoolType = POOL_IN_USE_MASK | (PagedPool & BASE_POOL_TYPE_MASK);

    // terminate the fake pending frees list after this single entry
    entry = (SLIST_ENTRY *)(pool + 1);
    entry->Next = NULL;

    // bordering chunk, satisfying the free algorithm's check that
    // BlockSize equals the next chunk's PreviousSize
    pool = (POOL_HEADER *)((PUCHAR)PoolAddress + (BlockSize << BLOCK_SHIFT));
    pool->PreviousSize = BlockSize;
    pool->PoolIndex = 5;
    pool->BlockSize = BlockSize;
    pool->PoolType = POOL_IN_USE_MASK | (PagedPool & BASE_POOL_TYPE_MASK);
}
Listing 2. Function initializing a crafted pool layout
[Figure: Per-processor non-paged lookaside lists hardened with a pool chunk
cookie. Even if a pool overflow corrupts the Next pointer of a chunk on
PPNPagedLookasideList[n], ExAllocatePoolWithTag verifies the Cookie before
returning the chunk.]
As PendingFrees lists are singly-linked, they obviously share the same problems
as the aforementioned lookaside lists. Thus, PendingFrees lists could also benefit
from an embedded pool chunk cookie in order to prevent exploitation of pool
overflows. Although a doubly-linked list could be used instead, this would require
additional locking in ExFreePoolWithTag (upon inserting entries to the list)
which would be computationally expensive and defeat the purpose of the deferred
free list.
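A per-chunk cookie along these lines can be sketched as follows (purely illustrative; the names and the cookie derivation are ours, with the seed standing in for a per-boot random secret):

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

typedef struct _PROTECTED_ENTRY {
    struct _PROTECTED_ENTRY *Next;
    uintptr_t Cookie;                  /* stored when the chunk is freed */
} PROTECTED_ENTRY;

static uintptr_t PoolCookieSeed = 0x9E3779B9;  /* per-boot random in practice */

/* illustrative cookie: binds the secret to the chunk's own address */
static uintptr_t ComputeCookie(PROTECTED_ENTRY *entry)
{
    return PoolCookieSeed ^ (uintptr_t)entry;
}

void ProtectedPush(PROTECTED_ENTRY **head, PROTECTED_ENTRY *entry)
{
    entry->Cookie = ComputeCookie(entry);
    entry->Next = *head;
    *head = entry;
}

/* refuses to pop (the point where the kernel would bug check) if the
   cookie does not verify, catching overflows into the free chunk */
PROTECTED_ENTRY *ProtectedPop(PROTECTED_ENTRY **head)
{
    PROTECTED_ENTRY *entry = *head;
    if (entry == NULL || entry->Cookie != ComputeCookie(entry))
        return NULL;
    *head = entry->Next;
    return entry;
}
```

Because the cookie is checked on the consumer side, the singly-linked list stays lock-free on insertion, preserving the performance rationale of the deferred free list.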
As PoolIndex is used as a pool descriptor array index, the proper way of ad-
dressing the attack is to validate its value against the total number of array
entries before freeing a chunk. In turn, this would prevent an attacker from ref-
erencing an out-of-bounds array index and controlling the pool descriptor. The
PoolIndex overwrite, as demonstrated in Section 4, could also be prevented if
the kernel pool performed validation on bordering chunks before linking in.
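The proposed check amounts to a single bounds test before the descriptor array is dereferenced (an illustrative sketch; the variable mirrors nt!ExpNumberOfPagedPools and its value here is arbitrary):

```c
#include <assert.h>

/* number of additional paged pool descriptors in use; stands in for
   the kernel's nt!ExpNumberOfPagedPools (value chosen for illustration) */
static unsigned ExpNumberOfPagedPools = 4;

/* Proposed validation: reject any PoolIndex beyond the configured
   descriptors before nt!ExpPagedPoolDescriptor[PoolIndex] is used.
   Returns 1 if the index may be used, 0 if the free should bug check. */
int ValidatePoolIndex(unsigned PoolIndex)
{
    return PoolIndex <= ExpNumberOfPagedPools;
}
```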
Note that this technique was also another clear case of null-pointer abuse.
Thus, denying mapping of virtual address null (0) in non-system processes could
be a solution not only to address this particular attack, but many other ex-
ploitable null-pointer kernel vulnerabilities as well. Currently, the null page is
primarily used for backwards compatibility, such as by the Virtual DOS Machine
(VDM) for addressing 16-bit memory in WOW applications. Hence, an attacker
could circumvent a null page mapping restriction by injecting into a WOW pro-
cess.
6 Conclusion
In this paper we have shown that in spite of safe unlinking, the Windows 7 kernel
pool is still susceptible to generic attacks. However, most of the identified attack
vectors can be addressed by adding simple checks or adopting exploit prevention
features from the userland heap. Thus, in future Windows releases and service
packs, we are likely to see additional hardening of the kernel pool. In particular,
the kernel pool would benefit greatly from a pool header checksum or cookie in
order to thwart exploitation involving pool header corruption or malicious pool
crafting.
References
[1] Alexander Anisimov: Defeating Microsoft Windows XP SP2 Heap
Protection and DEP Bypass. http://www.ptsecurity.com/download/
defeating-xpsp2-heap-protection.pdf
[2] Adam Barth, Collin Jackson, Charles Reis: The Security Architecture
of the Chromium Browser. http://crypto.stanford.edu/websec/chromium/
chromium-security-architecture.pdf
[3] Pete Beck: Safe Unlinking in the Kernel Pool. Microsoft Security Re-
search and Defense. http://blogs.technet.com/srd/archive/2009/05/26/
safe-unlinking-in-the-kernel-pool.aspx
[4] Dion Blazakis: Interpreter Exploitation: Pointer Inference and JIT Spraying. Black
Hat DC 2010. http://www.semantiscope.com/research/BHDC2010
[5] Matt Conover & Oded Horovitz: Windows Heap Exploitation. CanSecWest 2004.
[6] Matthew Jurczyk: Windows Objects in Kernel Vulnerability Exploita-
tion. Hack-in-the-Box Magazine 002. http://www.hackinthebox.org/misc/
HITB-Ezine-Issue-002.pdf
[7] Matthew Jurczyk: Reserve Objects in Windows 7. Hack-in-the-Box Magazine 003.
http://www.hackinthebox.org/misc/HITB-Ezine-Issue-003.pdf
[8] Kostya Kortchinsky: Real World Kernel Pool Exploitation. SyScan 2008. http:
//www.immunitysec.com/downloads/KernelPool.odp
[9] Adrian Marinescu: Windows Vista Heap Management Enhancements. Black
Hat USA 2006. http://www.blackhat.com/presentations/bh-usa-06/
BH-US-06-Marinescu.pdf
[10] Microsoft Security Bulletin MS10-058: Vulnerabilities in TCP/IP Could Allow
Elevation of Privilege. http://www.microsoft.com/technet/security/Bulletin/
MS10-058.mspx
[11] mxatone: Analyzing Local Privilege Escalation in win32k. Uninformed Journal,
vol. 10, article 2. http://www.uninformed.org/?v=10&a=2
[12] Office Team: Protected View in Office 2010. Microsoft Office 2010 Engi-
neering. http://blogs.technet.com/b/office2010/archive/2009/08/13/
protected-view-in-office-2010.aspx
[13] Kyle Randolph: Inside Adobe Reader Protected Mode - Part 1 - Design. Adobe Se-
cure Software Engineering Team (ASSET) Blog. http://blogs.adobe.com/asset/
2010/10/inside-adobe-reader-protected-mode-part-1-design.html
[14] Ruben Santamarta: Exploiting Common Flaws in Drivers. http://reversemode.
com/index.php?option=com_remository&Itemid=2&func=fileinfo&id=51
[15] Hovav Shacham: The Geometry of Innocent Flesh on the Bone: Return-into-libc
without Function Calls (on the x86). In Proceedings of CCS 2007, pages 552-561.
ACM Press, Oct. 2007.
[16] SoBeIt: How To Exploit Windows Kernel Memory Pool. Xcon 2005. http:
//packetstormsecurity.nl/Xcon2005/Xcon2005_SoBeIt.pdf
[17] Matthieu Suiche: Microsoft Security Bulletin (August). http://moonsols.com/
blog/14-august-security-bulletin