Caching Architecture Guide for .NET Framework Applications

Contents

Chapter 2
Understanding Caching Technologies
  Using the ASP.NET Cache
    Using Programmatic Caching
    Using an Output Cache
    Using the ASP.NET Cache in Non-Web Applications
    Managing the Cache Object
  Using Remoting Singleton Caching
  Using Memory-Mapped Files
  Using Microsoft SQL Server 2000 or MSDE for Caching
  Using Static Variables for Caching
  Using ASP.NET Session State
    Choosing the Session State Mode
    Determining What to Cache in the Session Object
    Implementing Session State

Chapter 3
Caching in Distributed Applications
  Caching in the Layers of .NET-based Applications
    Caching in the User Services Layer
    Caching in the Business Services Layer
    Caching in the Data Services Layer
    Caching in the Security Aspects
    Caching in the Operational Management Aspects
  Selecting a Caching Technology
    Caching in Browser-based Clients
    Caching in Smart Clients
    Caching in .NET Compact Framework Clients
    Caching in ASP.NET Server Applications
    Caching in Server Applications
  Considering Physical Deployment Recommendations
  Summary

Chapter 4
Caching .NET Framework Elements
  Planning .NET Framework Element Caching
    Ensuring Thread Safety
    Cloning
    Serializing a .NET Class
    Normalizing Cached Data
    Choosing a Caching Technology
  Implementing .NET Framework Element Caching
    Caching Connection Strings
    Caching Data Elements
    Caching XML Schemas
    Caching Windows Forms Controls
    Caching Images
    Caching Configuration Files
    Caching Security Credentials
  Summary

Chapter 5
Managing the Contents of a Cache
  Loading a Cache
    Caching Data Proactively
    Caching Data Reactively
  Determining a Cache Expiration Policy
    Using Expiration Policies
    Using External Notifications
  Flushing a Cache
    Using Explicit Flushing
    Implementing Scavenging
  Locating Cached Data
  Summary

Chapter 6
Understanding Advanced Caching Issues
  Designing a Custom Cache
    Introducing the Design Goals
    Introducing the Solution Blueprint
  Securing a Custom Cache
    Signing Cache Items
    Encrypting Cached Items
  Monitoring a Cache
    Implementing Performance Counters
    Monitoring Your Cache Performance
  Synchronizing Caches in a Server Farm
  Summary

Chapter 7
Appendix
  Appendix 1: Understanding State Terminology
    Understanding the Lifetime of State
    Understanding the Scope of State
  Appendix 2: Using Caching Samples
    Implementing a Cache Notification System
    Implementing an Extended Format Time Expiration Algorithm
  Appendix 3: Reviewing Performance Data
    Introducing the Test Scenarios
    Defining the Computer Configuration and Specifications
    Presenting the Performance Test Results
1
Understanding Caching Concepts
This chapter introduces the concepts of caching. It is important to be familiar with
these concepts before trying to understand the technologies and mechanisms you
can use to implement caching in your applications.
This chapter contains the following sections:
● “Introducing the Problems that Caching Solves”
● “Understanding State”
For example, in a Web application the Web server is required to render the user
interface for each user request. You can cache the rendered page in the ASP.NET
output cache to be used for future requests, freeing resources to be used for other
purposes.
Caching data can also help scale the resources of your database server. By storing
frequently used data in a cache, fewer database requests are made, meaning that
more users can be served.
● Availability — Occasionally the services that provide information to your appli-
cation may be unavailable. By storing that data in another place, your application
may be able to survive system problems such as network outages, Web service
failures, or hardware failures.
For example, each time a user requests information from your data store, you can
return the information and also cache the results, updating the cache on each
request. If the data store then becomes unavailable, you can still service requests
using the cached data until the data store comes back online.
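The following sketch illustrates this fallback pattern using the ASP.NET cache described in Chapter 2, "Understanding Caching Technologies." The class name, cache key, and the GetProductsFromStore stand-in are illustrative assumptions rather than part of a prescribed design.

using System;
using System.Data;
using System.Web;
using System.Web.Caching;

public class ProductCatalog
{
    public DataSet GetProducts()
    {
        Cache cache = HttpRuntime.Cache;
        try
        {
            // Refresh the cache on every successful read.
            DataSet products = GetProductsFromStore();
            cache.Insert("Products", products);
            return products;
        }
        catch (Exception)
        {
            // The data store is unavailable; serve the last cached copy if one exists.
            DataSet cached = cache["Products"] as DataSet;
            if (cached != null)
            {
                return cached;
            }
            throw;
        }
    }

    private DataSet GetProductsFromStore()
    {
        // Stand-in for the real database or Web service call.
        return new DataSet("Products");
    }
}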
To successfully design an application that uses caching, you need to thoroughly
understand the caching techniques provided by the Microsoft® .NET Framework
and the Microsoft Windows® operating system, and you also need to be able to
address questions such as:
● When and why should a custom cache be created?
● Which caching technique provides the best performance and scalability for a
specific scenario and configuration?
● Which caching technology complies with the application’s requirements for
security, monitoring, and management?
● How can the cached data be kept up to date?
It is important to remember that caching isn’t something you can just add to your
application at any point in the development cycle; the application should be de-
signed with caching in mind. This ensures that the cache can be used during the
development of the application to help tune its performance, scalability, and
availability.
Now that you have seen the types of issues that caching can help avoid, you are
ready to look at the types of information that may be cached. This information is
commonly called state.
Understanding State
Before diving into caching technologies and techniques, it is important to have an
understanding of state, because caching is merely a framework for state manage-
ment. Understanding what state is, and being aware of its characteristics such as
lifetime and scope, is important for making better decisions about whether to cache it.
State refers to data, and the status or condition of that data, being used within a
system at a certain point in time. That data may be permanently stored in a data-
base, may be held in memory for a short time while a user executes a certain func-
tion, or the data may exist for some other defined length of time. It may be shared
across a whole organization, it may be specific to an individual user, or it may be
available to any grouping in between these extremes.
For more details and examples of the lifetime of state, see Chapter 7, “Appendix.”
● Farm — state that is accessible from any computer within an application farm
For more details and examples of physical scope, see Chapter 7, “Appendix.”
For more details and examples of logical scope, see Chapter 7, “Appendix.”
Table 1.1 describes and gives examples of the representations of state during the
different stages in the transformation pipeline.
Table 1.1: Data representation types
● Raw — Data in its raw format. Example: a dataset reflecting data in a database.
● Processed — Data that has gone through business logic processing; at this stage of the pipeline, the same data may undergo several different transformations. Example: different representations of the same dataset.
● Rendered — Data that is rendered and ready to be displayed in the user interface. Examples: a rendered combo-box Web control containing a list of countries; a rendered Windows Forms TreeView control.
When you plan the caching of state in your application, you need to decide which of
the state representation types is the best one to be cached. Use the following guide-
lines to aid you in this decision:
● Cache raw data when any staleness of the cache can be tolerated by your business
logic. Raw data should be cached in:
● Data access components.
● Service agents.
● Cache processed data to save processing time and resources within the system.
Processed data should be cached in:
● Business logic components.
● Service interfaces.
● Cache rendered data when the amount of data to be displayed is large and
the rendering time of the control is long (for example, the contents of a large
TreeView control). Rendered data should be cached in user interface (UI)
components.
For a summarized review of .NET-based application architecture, its common
elements, and their roles, see Chapter 3, “Caching in Distributed Applications.”
State is used in one form or another in all types of applications. Because it is time
consuming to access state, it is often wise to cache the state to improve overall
application performance.
The benefits that are important to you vary depending on the type of application
that you are developing.
Components of a distributed application often require data from a provider that
executes in a different process. This can be less efficient when the application re-
quires a large amount of data to be moved between the processes or when one
process is making chatty (that is, numerous small) calls to the other to obtain data.
Making calls between processes requires using remote procedure calls (RPCs) and
data serialization, both of which can result in a performance hit. By using a cache to
store static or semi-static data in the consuming process, instead of retrieving it each
time from the provider process, the RPC overhead decreases and the application’s
performance improves.
You have many different options when deciding the physical location and logical
location for the cache. The following sections describe some of the options.
You can reduce the number of expensive disk operations that need to be made by
storing the data in memory, and you can minimize the amount of data that needs
to be moved between different processes by storing the data in the memory of the
consumer process.
● Disk resident cache — This category contains caching technologies that use disk-
based data storages such as files or databases. Disk based caching is useful when:
● You are handling large amounts of data.
● Data in the application services (for example, a database) may not always be
available for reacquisition (for example, in offline scenarios).
● Cached data lifetime must survive process recycles and computer reboots.
You can reduce the overhead of data processing by storing data that has already
been transformed or rendered, and you can reduce interprocess communications
by storing the data nearer to the consumer.
Figure 1.2
Layered architecture elements (users, UI components, UI process components, service interfaces, and data services, with the cross-cutting communication, operational management, and security aspects)
● Serialization — When storing an object in a cache and the cache storage serializes
data in order to store it, the stored object must support serialization.
● Normalizing cached data — When storing state in a cache, make sure that it is
stored in a format optimized for its intended usage.
For more information about formatting cached data, see the “Planning .NET Frame-
work Element Caching” section in Chapter 4, “Caching .NET Framework Elements.”
Introducing Security
When caching any type of information, you need to be aware of various potential
security threats. Data that is stored in a cache may be accessed or altered by a
process that is not permitted access to the master data. This can occur because when
the information is held in its permanent store, security mechanisms are in place to
protect it. When taking data out of traditional trust boundaries, you must ensure
that there are equivalent security mechanisms for the transmission of data to, and
the storage of, data in the cache.
For more information about security issues when caching, see the “Securing a
Custom Cache” section in Chapter 6, “Understanding Advanced Caching Issues.”
Introducing Management
When you use caching technologies, the maintenance needs of your application
increase. During the application deployment, you need to configure things such as
the maximum size of the cache and the flushing mechanisms. You also need to
ensure that the cache performance is monitored, using some of the techniques made
available in Windows and the .NET Framework (for example, event logging and
performance counters).
For more information about maintenance issues, see the “Monitoring a Cache”
section in Chapter 6, “Understanding Advanced Caching Issues.”
Summary
In this chapter, you have been introduced to some of the problems involved in
developing distributed applications and how implementing caching within the
application can help alleviate some of these issues. You have also been introduced
to some of the considerations that you need to take into account when planning
caching mechanisms for different types of applications.
Now that you have an overall understanding of the concepts involved in caching,
you are ready to begin learning about the different caching technologies available.
2
Understanding Caching Technologies
Chapter 1 introduced you to the different types of state that can be used within a
distributed .NET-based application and provided an overview of why and where to
cache some of this state. You also learned about some of the issues you must bear in
mind when you design the caching policy for your applications.
This chapter describes the different caching technologies you can use when you
build distributed enterprise applications using the Microsoft .NET Framework.
Other technologies can be used to cache data, such as NTFS file system caching,
Active Directory® directory service caching, and the COM+ Shared Property Man-
ager. However, in most cases, these methods are not recommended and are not
described in this chapter.
This chapter describes the different caching technologies and Chapter 3, “Caching in
Distributed Applications,” describes how to select the right technology to fit your
needs.
This chapter contains the following sections:
● “Using the ASP.NET Cache”
● Key dependency — Invalidates a specific cache item when another cached item
changes. For example, when you cache basic data alongside calculation results on
that data, the calculated data should be invalidated when the basic data changes
or becomes invalid. As another example, although you cannot directly remove a
page from the ASP.NET output cache, you can use a key dependency on the page
as a workaround. When the key on which your pages are dependent is removed
from cache, your cached pages are removed as well.
The following example shows how to make one cache item dependent on an-
other.
// Create a cache entry.
Cache["key1"] = "Value 1";
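A minimal sketch of the dependent entry follows. It assumes the CacheDependency constructor that accepts an array of cache key names; the item keys simply continue the example above.

// Make key2 dependent on key1: when key1 changes or is removed from the
// cache, key2 is invalidated as well.
string[] dependencyKeys = new string[] { "key1" };
CacheDependency dependency = new CacheDependency(null, dependencyKeys);
Cache.Insert("key2", "Value 2", dependency);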
Using a duration that’s too short limits a cache’s usefulness; using a duration
that’s too long may return stale data and unnecessarily load the cache with pages
that are not requested. If using page output caching does not improve perfor-
mance, your cache duration might be too short. Try fast concurrent page requests
to see whether pages are served rapidly from cache.
If you do not have a mechanism, such as dependencies, to invalidate a cached
page, keep the duration short. In general, keep your cache duration shorter than
the update interval of the data in your page. Usually, if you receive pages that are
out of date, your cache duration is too long.
A common question is how to implement a cache database dependency. This re-
quires you to implement a notification mechanism that informs your cache of
changes to the original data in the database. For information about using external
notifications, see Chapter 5, “Managing the Contents of a Cache.”
Check whether data in the cache is valid before using it to avoid using stale items.
Use this method when the cost of validating a cached item is significantly less than
the cost of regenerating or obtaining the data.
Items added to the cache automatically expire and are removed from the cache if
they are not used for a given amount of time. Use the expiration cache feature when
the data from which the cached items are generated is updated at known intervals.
Dependencies and expirations initiate the removal of items from a cache. Sometimes
you want to write code to be called when the removal occurs.
public void RemovedCallback(String key, Object value, CacheItemRemovedReason r)
{
    // Test whether the item expired, and if so reinsert it into the cache.
    if (r == CacheItemRemovedReason.Expired)
    {
        // Reinsert it into the cache, registering this method as the removal callback again.
        CacheItemRemovedCallback onRemove = null;
        onRemove = new CacheItemRemovedCallback(this.RemovedCallback);
        Cache.Insert(key,
                     value,
                     null,
                     Cache.NoAbsoluteExpiration,
                     Cache.NoSlidingExpiration,
                     CacheItemPriority.Default,
                     onRemove);
    }
}
Notice that in addition to expiration parameters, the Insert method also uses a
CacheItemPriority enumeration.
Flushing a Cache
There is no direct support for automatic flushing of the ASP.NET output cache. One
way to do so is to set an external dependency that clears all cached items automati-
cally. For example, the following statement flushes the ASP.NET output cache as
soon as it executes.
Response.Cache.SetExpires(DateTime.Now);
This code flushes the output cache, but the page does not reflect this until the
original cache duration completes. For example, if you use the following directive to
configure your cache, the cache resets after 10 seconds.
<%@ OutputCache Duration="10" VaryByParam="none" %>
Flushing the entire output cache is generally not required, and better alternatives are
available to your application design. For example, when using the ASP.NET Cache
object, you can reload your cache items when they become stale, overwriting the
existing cache content.
● Pages with a known update frequency. For example, a page that displays a stock
price where that price is updated at given intervals.
● Pages that have several possible outputs based on HTTP parameters, and those
possible outputs do not often change — for example, a page that displays weather
data based on country and city parameters.
● Results being returned from Web services. The following example shows how
you can declaratively cache these results.
[WebMethod(CacheDuration=60)]
public string HelloWorld()
{
return "Hello World";
}
Caching these types of output avoids the need to frequently process the same page
or results.
Pages with content that varies — for example, based on HTTP parameters — are
classed as dynamic, but they can still be stored in the page output cache. This is
particularly useful when the range of outputs is small.
For more information about the caching attributes in ASP.NET cache, see “Page
Output Caching” and “Caching Multiple Versions of a Page,” in the MSDN Library.
Configuring the Output Cache Location
You can control the location of your output cache by using the Location attribute
of the @OutputCache directive. The Location attribute is supported only for page
output caching and not for page fragment caching. You can use it to locate the cache
on the originating server, the client, or a proxy server. For more information about
the Location attribute, see the @ OutputCache directive documentation in the MSDN Library.
The following example shows how to programmatically configure the cache using
the cache API.
private void Page_Load(object sender, System.EventArgs e)
{
// Enable page output caching.
Response.Cache.SetCacheability(HttpCacheability.Server);
// Set the Duration parameter to 20 seconds.
Response.Cache.SetExpires(System.DateTime.Now.AddSeconds(20));
// Set the Header parameter.
Response.Cache.VaryByHeaders["Accept-Language"] = true;
// Set the cached parameter to 'state'.
Response.Cache.VaryByParams["state"] = true;
// Set the custom parameter to 'minorversion'.
Response.Cache.SetVaryByCustom("minorversion");
…
}
You can implement page fragment caching by adding the necessary directives (high-
level, declarative implementation) to the user control or by using metadata at-
tributes in the user control class declaration.
For more information about page fragment caching, see “Caching Portions of an
ASP.NET Page,” in the MSDN Library.
Determining What to Cache with Page Fragment Caching
Use page fragment caching when you cannot cache the entire Web page. There are
many situations that can benefit from page fragment caching, including:
● Page fragments (controls) that require high server resources to create.
● Page fragments that can be used more than once by multiple users.
Note: ASP.NET version 1.1, part of the Microsoft Visual Studio® .NET 2003 development
system, introduces a new Shared attribute in the user control’s <%@ OutputCache %> direc-
tive. This attribute allows multiple pages to share a single instance of a cached user control. If
you don’t specify the Shared attribute, each page gets its own instance of the cached control.
When the page containing the user control is requested, only the user control — not
the entire page — is cached.
The following code shows how you can access the ASP.NET cache object from a
generic application.
HttpRuntime httpRT = new HttpRuntime();
Cache cache = HttpRuntime.Cache;
After you access the cache object for the current request, you can use its members in
the usual way.
Note that the Cache API counters do not track internal usage of the cache by
ASP.NET. The Cache Total counters track all cache usage.
The Turnover Rate counters help determine how effectively the cache is being used.
If the turnover is large, the cache is not being used efficiently because cache items
are frequently added and removed from the cache. You can also track cache effec-
tiveness by monitoring the Hit counters.
For more information about these performance counters, see “Performance Counters
for ASP.NET,” in the MSDN Library.
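A remoting singleton can host a cache for the lifetime of its host process by overriding InitializeLifetimeService. The following sketch illustrates the idea; the class name and its members are assumptions for illustration, not part of a prescribed design.

using System;
using System.Collections;

public class CacheService : MarshalByRefObject
{
    // Cached items held by the singleton for as long as the host process runs.
    private Hashtable items = Hashtable.Synchronized(new Hashtable());

    public void AddItem(string key, object value)
    {
        items[key] = value;
    }

    public object GetItem(string key)
    {
        return items[key];
    }

    // Returning null gives the singleton an infinite lifetime lease, so the
    // remoting infrastructure never releases the object.
    public override object InitializeLifetimeService()
    {
        return null;
    }
}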
This code ensures that the garbage collector does not collect the object by returning
a null lifetime manager for the object.
You can implement the remoting singleton as a caching mechanism in all layers of a
multilayer architecture. However, because of the relatively high development cost of
its implementation, you’re most likely to choose this solution when a custom cache
is needed in the application and data tiers.
You can often use solutions based on Microsoft SQL Server™ instead of remoting
singleton caching solutions. It can be tempting to choose remoting singleton caching
because the remoting infrastructure is readily available, but its relatively poor
performance and lack of scalability mean that this choice is often misguided.
● Cache management DLL — Implements the specific cache functionality, such as:
● Inserting and removing items from the cache.
● Flushing the cache using algorithms such as Least Recently Used (LRU) and
scavenging. For more information about flushing, see Chapter 5, “Managing
the Contents of a Cache.”
● Validating data to protect it against tampering or spoofing. For more informa-
tion about security, see Chapter 6, “Understanding Advanced Caching Issues.”
Because a memory-mapped file is a custom cache, it is not limited to a specific layer
or technology in your application.
Memory-mapped file caches are not easy to implement because they require the use
of complex Win32® application programming interface (API) calls. The .NET Frame-
work does not support memory-mapped files, so any implementations of a memory-
mapped file cache run as unmanaged code and do not benefit from any .NET
Framework features, including memory management features, such as garbage
collection, and security features, such as code access security.
Management functionality for memory-mapped file custom cache needs to be
custom developed for your needs. Performance counters can be developed to show
cache use, scavenging rate, hit-to-miss ratios, and so on.
Note: In caching terms, SQL Server 2000 and MSDE provide the same capabilities. All refer-
ences to SQL Server in this section equally apply to MSDE.
If your application requires cached data to persist across process recycles, reboots,
and power failures, in-memory cache is not an option. In such cases, you can use a
caching mechanism based on a persistent data store, such as SQL Server or the NTFS
file system. It also makes sense to use SQL Server to cache smaller data items to gain
persistency.
SQL Server 2000 limits you to 8 KB of data when you are accessing VarChar and
VarBinary fields using Transact-SQL or stored procedures. If you need to store
larger items in your cache, use an ADO.NET SqlDataAdapter object to access
DataSet and DataRow objects.
SQL Server caching is easy to implement by using ADO.NET and the .NET Frame-
work, and it provides a common development model to use with your existing data
access components. It provides a robust security model that includes user authenti-
cation and authorization and can easily be configured to work across a Web farm
using SQL Server replication.
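As a sketch of the basic write path, the following code stores a serialized item in a cache table through ADO.NET. The CacheItems table, its column names, and the connection string are illustrative assumptions, not a schema defined by this guide.

using System;
using System.Data;
using System.Data.SqlClient;

public class SqlCacheStore
{
    private string connectionString;

    public SqlCacheStore(string connectionString)
    {
        this.connectionString = connectionString;
    }

    // Stores an item that has already been serialized to a byte array.
    public void Put(string key, byte[] serializedValue, DateTime expires)
    {
        string sql =
            "UPDATE CacheItems SET ItemValue = @value, Expires = @expires WHERE ItemKey = @key " +
            "IF @@ROWCOUNT = 0 " +
            "INSERT INTO CacheItems (ItemKey, ItemValue, Expires) VALUES (@key, @value, @expires)";

        using (SqlConnection connection = new SqlConnection(connectionString))
        using (SqlCommand command = new SqlCommand(sql, connection))
        {
            command.Parameters.Add("@key", SqlDbType.NVarChar, 256).Value = key;
            command.Parameters.Add("@value", SqlDbType.Image).Value = serializedValue;
            command.Parameters.Add("@expires", SqlDbType.DateTime).Value = expires;
            connection.Open();
            command.ExecuteNonQuery();
        }
    }
}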
Because the cache service needs to access SQL Server over a network and the data is
retrieved using database queries, the data access is relatively slow. Carefully com-
pare the cost of recreating the data versus retrieving it from the database.
SQL Server caching can be useful when you require a persistent cache for large data
items and you do not require very fast data retrieval. Carefully consider how much
memory you have and how big your data items are. Do not use all of your memory
to cache a few large data items because doing so degrades overall server perfor-
mance. In such cases, caching items in SQL Server can be a good practice. Remem-
ber, though, that implementing a SQL Server caching mechanism requires installing,
managing, and licensing at least one copy of SQL Server or MSDE.
When you use SQL Server for caching, you can use the SQL Server management
tools, which include a host of performance counters that can be used to monitor
cache activity. For example, use the SQLServer:Databases counters for your cache
database to monitor your cache performance, and use counters such as Trans-
actions/sec and Data File(s) Size (KB) to monitor usage.
The following example shows how to implement a static variable cache by using a
hash table.
static Hashtable mCacheData = new Hashtable();
The scope of the static variables can be limited to a class or a module or be defined
with project-level scope. If the variable is defined as public in your class, it can be
accessed from anywhere in your project code, and the cache will be available
throughout the application domain. The lifetime of static variables is directly linked
to their scope.
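Building on the declaration above, the following sketch wraps the static hash table in a small class; Hashtable.Synchronized returns a wrapper that makes individual reads and writes thread safe. The class and member names are assumptions for illustration.

using System.Collections;

public class StaticCache
{
    // Shared across the application domain for the lifetime of the process.
    private static Hashtable cacheData = Hashtable.Synchronized(new Hashtable());

    public static void Add(string key, object item)
    {
        cacheData[key] = item;
    }

    public static object Get(string key)
    {
        return cacheData[key];
    }
}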
You can specify the mode to be used for your application by using the mode at-
tribute of the <sessionState> element of the application configuration file.
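For example, a web.config entry of the following form selects the out-of-process state server; the connection strings and timeout shown here are placeholder values.

<configuration>
  <system.web>
    <!-- mode can be Off, InProc, StateServer, or SQLServer. -->
    <sessionState mode="StateServer"
                  stateConnectionString="tcpip=127.0.0.1:42424"
                  sqlConnectionString="data source=127.0.0.1;Trusted_Connection=yes"
                  cookieless="false"
                  timeout="20" />
  </system.web>
</configuration>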
The scope of the ASP.NET session state is limited to the user session it is created in.
In out-of-process configurations — for example, using StateServer or SQLServer
mode in a Web farm configuration — the session state can be shared across all servers
in the farm. However, this benefit does come at a cost. Performance may decrease
because ASP.NET needs to serialize and deserialize the data and because moving the
data over the network takes time.
For more information about ASP.NET session state, see “ASP.NET Session State,” in
the MSDN Library.
For more information, see article Q324479, “FIX: ASP.NET SQL Server Session State Impersonation Is Lost
Under Load,” in the Microsoft Knowledge Base.
For session state to be maintained across different servers in a Web farm, the appli-
cation path of the Web site (for example, \LM\W3SVC\2) in the IIS metabase
should be identical across all servers in that farm. For details about how to imple-
ment this, see article Q325056, “Session State Is Lost in Web Farm If You Use
SqlServer or StateServer Session Mode,” in the Microsoft Knowledge Base.
You can configure ASP.NET session state to use SQL Server to store objects. Access to
the session variables is slower in this situation than when you use an in-process
cache; however, when session data persistency is critical, SQL Server provides a
good solution. By default, SQL Server stores session state in the tempdb database,
which is re-created after a SQL Server computer reboots. You can configure SQL
Server to store session data outside of tempdb so that it is persisted even across
reboots. For information about script files that you can use to configure SQL Server,
see article Q311209, “Configure ASP.NET for Persistent SQL Server Session State
Management,” in the Microsoft Knowledge Base.
When you use session state in SQL Server mode, you can monitor the SQL Server
tempdb counters to provide some level of performance monitoring.
Because session state can function as both an in-process and an out-of-process cache,
it is suitable for a wide range of solutions. It is simple to use, but it does not support
data dependencies, expiration, or scavenging.
The following example shows how you can use the Session object to store and access
user-specific data, such as a shopping cart identifier.
private void SaveSession(string CartID)
{
Session["ShoppingCartID"] = CartID;
}
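A matching read, again through the Session object (the method name is an illustrative assumption), might look like this.

private string RestoreSession()
{
    // Returns null if no value has been stored in this session.
    return (string)Session["ShoppingCartID"];
}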
Remember to secure any credentials when caching them. For more information
about securing custom caches, see “Securing a Custom Cache,” in Chapter 6,
“Understanding Advanced Caching Issues.”
● View state
● Hidden frames
● Cookies
● Query strings
Each of these mechanisms is useful for caching different types of state in different
scenarios.
Note: If you use hidden fields, you must submit your pages to the server by using the HTTP
POST method rather than by requesting the page using the HTTP GET method.
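One way to create such a field from a page's code-behind is sketched below; the field name is an illustrative assumption.

private void Page_Load(object sender, System.EventArgs e)
{
    // Renders a hidden input field named "hiddenField" into the page output.
    this.RegisterHiddenField("hiddenField", "Initial Value");
}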
This code creates a hidden field containing the string data Initial Value.
If you cannot use hidden fields because of the potential security risk, consider using
view state instead.
In some situations, using view state is not recommended because it can degrade
performance. For example:
● Avoid using view state in pages that do not post back to the server, such as logon
pages.
● Avoid using view state for large data items, such as DataSets, because doing so
increases the page size and the time taken by round trips to the server.
● Avoid using view state when session timeout is required because you cannot time
out data stored in the ViewState property.
View state is a simple and easy-to-use implementation of hidden fields that provides
some security features. Most of the performance considerations, benefits, and
limitations of view state and hidden fields are similar.
For more information about ViewState performance, see “Maintaining State in a
Control” in the MSDN Library.
● The values in view state are hashed and encoded, representing a higher level of
security than plain hidden fields.
● View state is good for caching data in Web farm configurations because the data
is cached on the client.
Limitations to using view state include:
● Page loading and posting performance decreases when large values are stored
because view state is stored in the page.
● Although view state stores data in a hashed format, it can still be tampered with
because it is stored in a hidden field on the page. The information in the hidden
field can also be seen if the page output source is viewed directly, creating a
potential security risk.
For more information about using view state, see “Saving Web Forms Page Values
Using View State,” in the MSDN Library.
When your application runs on a low-bandwidth client — for example, a mobile
device — you can store the view state on the server by using the LosFormatter class,
returning slimmer pages to the client with less of a security risk.
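A minimal sketch of this technique overrides the page's view-state persistence methods and keeps the serialized state in session state; the session key scheme and class name are assumptions for illustration.

using System.IO;
using System.Web.UI;

public class ServerViewStatePage : Page
{
    protected override void SavePageStateToPersistenceMedium(object viewState)
    {
        // Serialize the view state and keep it on the server instead of in the page.
        LosFormatter formatter = new LosFormatter();
        StringWriter writer = new StringWriter();
        formatter.Serialize(writer, viewState);
        Session["ViewState_" + Request.Path] = writer.ToString();
    }

    protected override object LoadPageStateFromPersistenceMedium()
    {
        string state = (string)Session["ViewState_" + Request.Path];
        if (state == null)
        {
            return null;
        }
        LosFormatter formatter = new LosFormatter();
        return formatter.Deserialize(state);
    }
}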
● The ability to cache and access data items stored in different hidden frames.
● The ability to access JScript® variable values stored in different frames if they
come from the same site.
The limitations of using hidden frames are:
● Functionality of hidden frames is not supported by all browsers. Do not rely on
hidden frames to provide data for the essential functionality of your pages.
● The hidden frame can be tampered with, and the information in the page can be
seen if the page output source is viewed directly, creating a potential security
threat.
● Although there is no limit to the number of frames (embedded or not) that you can
hide, a large number of frames — particularly frames that retrieve data from the
database or contain several Java applets or images — can negatively affect the
performance of the first page load.
Hidden frames are useful for storing many items of data and resolve some of the
performance issues inherent in hidden field implementations.
You can then use the data stored in this frame in your Web page by using client-side
scripting.
Using Cookies
You can use cookies to store small amounts of infrequently changing information on
the client. That information can then be included in a request to the server.
Note: Cookies are often used for personalization, in which content is customized for a known
user. In most cases, identification is the issue rather than authentication, so it is usually
enough to merely store the user name, account name, or a unique user ID (such as a GUID) in
a cookie and use it to access the user personalization infrastructure of a site.
Implementing Cookies
The following example shows how to store and retrieve information that cookies
hold.
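A minimal sketch of this pattern in a page's code-behind follows; the cookie name, key, and expiration are illustrative assumptions.

private void SaveUserName(string userName)
{
    // Store the value in a cookie that the browser returns with later requests.
    HttpCookie cookie = new HttpCookie("UserSettings");
    cookie.Values["UserName"] = userName;
    cookie.Expires = DateTime.Now.AddDays(7);
    Response.Cookies.Add(cookie);
}

private string LoadUserName()
{
    HttpCookie cookie = Request.Cookies["UserSettings"];
    return (cookie == null) ? null : cookie.Values["UserName"];
}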
Never rely on cookies for essential functionality in your Web applications, because
users can disable them in their browsers.
Note: Query strings are a viable option only when a page is requested through its URL using
HTTP GET. You cannot read a query string from a page that has been submitted to the server
using HTTP POST.
Query strings can be very useful in certain circumstances, such as when you are
passing parameters to the server to customize the formatting of the data returned.
The following example shows how to access the information at the server.
// Check for a query string in a request.
string user = Request.QueryString["User"];
if( user != null )
{
// Do something with the user name.
}
Query strings enable you to send string information directly from the client to the
server for server-side processing.
Note: Internet Explorer caching is supported only in an Internet Explorer client environment. If
other client browsers are also being used, you must use customization code in your application
and provide an alternative caching solution for other types of browser.
● You can use persistence through DHTML behaviors that allow you to store
information on the client. Doing so reduces the need to query a server database
for client-specific information, such as color settings or screen layout preferences,
and increases the overall performance of the page. Persistence stores the data
hierarchically, making it easier for the Web developer to access.
Note: Users can choose to work offline by selecting Work Offline on the File menu in Internet
Explorer. When Work Offline is selected, the system enters a global offline state independent
of any current network connection, and content is read exclusively from the cache.
Another method of making Internet Explorer cache pages is by manually setting the
page expiration by using the Web page's file properties in IIS. The site administrator
can perform this operation.
To set the expiration date manually in IIS
1. Right-click the file name.
2. Click Properties, and then click the HTTP Headers tab.
3. Select the Enable Content Expiration check box.
4. Set a date in the future, and then click OK.
For more information about the Internet Explorer cache, see “Control Your Cache,
Speed up Your Site” in the MSDN Library.
This example caches the page or object until the date specified by the value attribute.
Summary
This chapter introduced you to various technologies available for implementing
caching mechanisms in .NET-based applications. You saw examples of how to
implement these mechanisms and learned the benefits and limitations of each.
The next step is to map how to use these technologies in the various application
layers and deployment scenarios in distributed applications.
3
Caching in Distributed Applications
The previous chapters of this guide have reviewed the available caching technolo-
gies that exist in the Microsoft .NET Framework and Microsoft Windows, and have
explored their benefits and limitations. The next step is to map them to the applica-
tion layers and elements that exist in distributed applications. This helps when you
are selecting the appropriate caching technique for a specific layer and element.
This chapter contains the following sections:
● “Caching in the Layers of .NET-based Applications”
Figure 3.1
.NET-based distributed application architecture (users, UI components, UI process components, and service interfaces, with the cross-cutting communication, operational management, and security aspects)
For a complete overview of the application layers, see “Application Architecture for
.NET: Designing Applications and Services” in the MSDN Library.
Caching in UI Components
Most solutions need to provide some way for users to interact with the application.
User interfaces can be implemented using Windows Forms, ASP.NET pages, con-
trols, or any other technology capable of rendering and formatting data for users
and acquiring and validating data from users.
● Windows Forms controls with a long render time (for example, TreeView controls)
● Business data
● Configuration data
● Metadata
Caching these types of data in UI process components can improve the performance
of your applications.
Figure 3.2
Caching business data in the user services layer (UI components call UI process components, which consult a cache before delegating to the business layer; the numbered steps are described below)
The steps that occur when caching data in user service elements are:
1. The UI component elements make a request for data from the UI process
components.
2. The UI process component checks if a suitable response is present in the cache,
and if so, returns it.
3. If the requested data is not cached or not valid, the UI process components
delegate the request to the business layer elements and, if appropriate, place the
response in the cache.
This configuration means that you can reuse the same caching technique for mul-
tiple user interfaces.
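The following sketch illustrates steps 2 and 3 in a UI process component that uses the ASP.NET cache as its store. The class name, cache key, expiration, and the LoadCustomersFromBusinessLayer stand-in are assumptions for illustration.

using System;
using System.Data;
using System.Web;
using System.Web.Caching;

public class CustomerListProcess
{
    public DataSet GetCustomerList()
    {
        Cache cache = HttpRuntime.Cache;

        // Step 2: check whether a suitable response is already cached.
        DataSet customers = cache["CustomerList"] as DataSet;
        if (customers == null)
        {
            // Step 3: delegate to the business layer and cache the response.
            customers = LoadCustomersFromBusinessLayer();
            cache.Insert("CustomerList", customers, null,
                         DateTime.Now.AddMinutes(10), Cache.NoSlidingExpiration);
        }
        return customers;
    }

    private DataSet LoadCustomersFromBusinessLayer()
    {
        // Stand-in for the call to the business layer elements.
        return new DataSet("Customers");
    }
}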
Business workflows are typically long-running, multi-step business processes, and
they can be implemented using business process management tools such as
Microsoft BizTalk® Server Orchestration Designer.
You can store the state being used in a business workflow between the workflow
steps to make it available throughout the entire business process. The state can be
stored in a durable medium, such as a database or a file on a disk, or it can be stored
in memory. The decision of which storage type to use is usually based on the size of
the state and the durability requirements.
Figure 3.3
Storing business entities in the cache (a business component stores custom business entity objects directly in the cache)
Figure 3.4 shows the second pattern, where a business entities factory is used to
retrieve relevant state. In this pattern, the state is stored in standard formats and
translated to its original format when the data is required.
Figure 3.4
Storing data for business entities in the cache (the business component invokes a business entities factory, which finds entity data held in the cache in standard formats such as XML, DataSets, and XmlNodes)
You should store business entities in a cache only when the translation of the data in
a business entities factory is expensive.
Caching these types of information in data access components can improve the
performance of your applications.
You should use data access helpers to cache data such as stored procedure param-
eters. (For more information, see the SqlHelperParameterCache class in “Data
Access Application Block” in the MSDN Library.)
Figure 3.5
Caching process in service agents (the business logic invokes the service agent, which consults a cache before invoking the service; the numbered steps are described below)
The steps that occur when caching data in service agents are:
1. The application business logic invokes the service agent.
2. The service agent checks whether a suitable response (for example, valid data
that meets the requested criteria) is present in the cache, and if so, returns it.
3. If the requested data is not cached or not valid, the Web service is invoked, and if
appropriate, the requested data is also placed in the cache.
You should use service agents to cache data such as:
● State returned from queries in a non-transactional context
● User credentials
Caching these types of data in service agents can improve the performance of your
applications.
Figure 3.6
Aspects of security (secure communication, authentication, authorization, profile management, and auditing, spanning the presentation, business, and data layers)
Not all of these aspects are relevant to the subject of caching, and this section
describes only profile management.
Figure 3.7
Aspects of operational management (service location, exception management, business monitoring, configuration, monitoring, and metadata, spanning the presentation, business, and data layers)
For more information about caching connection strings, see Chapter 4, “Caching
.NET Framework Elements.”
Caching Metadata
You may design your application so that it knows information about itself to make it
more flexible to changing runtime conditions. This type of information is known as
metadata. Designing your application using metadata in certain places can make it
more maintainable and lets it adapt to common changing requirements without
costly redevelopment or deployment.
Metadata can be stored in multiple places, as described in the preceding “Caching
Configuration Information” section. For centralized stores, you can use SQL Server
databases or Active Directory. If you want your metadata to be distributed with
your assemblies, you can implement it in XML files, or even custom .NET attributes.
● XML schemas — This includes XSDs, XML Data Reduced Schemas (XDRs), and
Document Type Definitions (DTDs).
● Extended metadata — In some situations you may want to provide developers
with additional metadata such as:
● Logical naming — In some cases, mapping logical names to physical entities
makes programming tasks easier. For example, instead of letting the developer
access a certain stored procedure using its physical name
(sp_GetCustomersByName), a logical name (Customers) can be provided and
translated to the actual name in the data services layer.
● Filter restrictions — Providing information about which fields in a data table
can be used for filtering based on the available stored procedures. For ex-
ample, you may want to enable filtering only on the first and last name fields
in a customers form. By providing filter restrictions metadata, a client applica-
tion can determine whether to enable filtering.
● Sort restrictions — Providing information about which fields in a data table
can be used for sorting that table based on the available stored procedures. For
example, you may want to enable sorting only on the first and last name fields
in a customers form. By providing sort restrictions metadata, a client
application can determine whether to enable sorting.
Now that you are aware of the layers within an application’s architecture, you are
ready to start deciding the caching technology that is best to be used in each layer.
● Smart clients (based on the .NET Framework or the .NET Compact Framework)
Figure 3.8
Common elements in a distributed application (including smart clients based on Windows Forms and the .NET Compact Framework)
Note: Although the .NET Framework and the Windows platform provide many caching, design,
and deployment options, this section concentrates on the solutions that most closely suit
distributed applications.
The following tables compare the scope, durability, and scenario usage of browser-
based caching technologies. Table 3.1 compares the scope of different technologies
available for caching data for browser-based clients.
Table 3.1: Scope of browser-based caching technologies
● ViewState — Single page: yes. Multiple pages (same window): additional implementation required. Multiple windows: no.
● Cookies — Single page: yes. Multiple pages (same window): yes. Multiple windows: yes.
● Hidden frames — Single page: yes. Multiple pages (same window): yes. Multiple windows: no.
Table 3.2 compares the durability of different technologies available for caching data
for browser-based clients.
Table 3.2: Durability of browser-based caching technologies
● ViewState — None; the data does not survive browser shutdown or reboots.
● Cookies — Subject to the expiration rules set on the client; cookies can survive browser shutdown and reboots until they expire.
● Hidden frames — None; the data does not survive browser shutdown or reboots.
Table 3.3 shows commonly used scenarios for each of these technologies.
Table 3.3: Browser-based caching scenarios
● ViewState — Small amounts of data used between requests of the same page.
● Cookies — Small amounts of data used between requests of the same application.
● Hidden frames — Relatively large amounts of data accessed only from a client-side script.
Use the information in the preceding tables to select the appropriate caching tech-
nology for your browser-based clients.
Note: Caching state in a browser results in the cached information being stored as clear text.
When storing sensitive data in this way, you should consider encrypting the data to avoid
tampering.
Table 3.5 compares the durability of different technologies available for caching data
in Windows Forms applications.
Table 3.5: Durability of smart client caching technologies
● Static variables — None; cached data does not survive process recycles.
● Memory-mapped files — Survives process recycles; does not survive reboots.
● Remoting singleton — Survives recycles of the consuming process (not of the remoting host process); does not survive reboots.
● SQL Server 2000 / MSDE — Survives both process recycles and reboots.
Table 3.6 shows commonly used scenarios for each of these technologies.
Table 3.6: Smart client caching scenarios
● Static variables — In-process, high performance caching.
● Memory-mapped files — Computer-wide, high performance caching.
● Remoting singleton — Organization-wide caching (not high performance).
● SQL Server 2000 / MSDE — Any application that requires high durability.
Use the information in the preceding tables to select the appropriate caching tech-
nology for your smart clients.
Table 3.8 compares the durability of different technologies available for caching data
in .NET Compact Framework applications.
Table 3.8: Durability of .NET Compact Framework caching technologies
● SQL Server CE — Survives both process recycles and reboots.
● Static variables — None; cached data does not survive process recycles.
● File system — Survives both process recycles and reboots.
Table 3.9 shows commonly used scenarios for each of these technologies.
Table 3.9: .NET Compact Framework caching scenarios
● SQL Server CE — Any scope (within the device) that requires high durability.
● Static variables — In-process, high performance caching.
● File system — Any scope that requires high durability.
Use the information in the preceding tables to select the appropriate caching tech-
nology for your .NET Compact Framework clients.
Table 3.11 compares the durability of different technologies available for caching
data in ASP.NET server applications.
Table 3.11: Durability of ASP.NET server-side caching technologies
● ASP.NET Cache — None; cached data does not survive process recycles.
● ASP.NET Session object (InProc) — None; session data does not survive process recycles.
● ASP.NET Session object (StateServer) — Survives aspnet_wp process recycles; does not survive reboots.
● ASP.NET Session object (SQLServer) — Survives aspnet_wp process recycles; can also survive reboots (for more information, see article Q311209, “HOW TO: Configure ASP.NET for Persistent SQL Server Session State Management,” in the Microsoft Knowledge Base).
● ViewState — Not applicable.
● Static variables — None; cached data does not survive process recycles.
● Memory-mapped files — Survives process recycles; does not survive reboots.
● Remoting singleton — Good for surviving the application's process recycle (not the remoting process recycle); does not survive reboots.
● SQL Server 2000 / MSDE — Survives both process recycles and reboots.
Table 3.12 shows commonly used scenarios for each of these technologies.
Table 3.12: ASP.NET server-side caching scenarios
● ASP.NET Cache — In-process, high performance caching. Good for scenarios that require specific caching features.
● ASP.NET Session object (InProc) — User session scope cache. Good for small amounts of session data that require high performance.
● ASP.NET Session object (StateServer) — User session scope cache. Good for sharing session data in a Web farm scenario where SQL Server is not available.
● ASP.NET Session object (SQLServer) — User session scope cache. Good for sharing session data in a Web farm scenario where SQL Server is available.
● ViewState — Good for request scope scenarios.
● Static variables — In-process, high performance cache.
● Memory-mapped files — Computer-wide, high performance cache.
● Remoting singleton — Organization-wide cache (not high performance).
● SQL Server 2000 / MSDE — Can be used for any scope requirement that demands high durability.
Use the information in the preceding tables to select the appropriate caching tech-
nology for your ASP.NET server-side applications.
Table 3.14 compares the durability of different technologies available for caching
data in server applications.
Table 3.14: Durability of server-side caching technologies
● ASP.NET Cache — None; cached data does not survive process recycles.
● Static variables — None; cached data does not survive process recycles.
● Memory-mapped files — Survives process recycles; does not survive reboots.
● Remoting singleton — Survives process recycles; does not survive reboots.
● SQL Server 2000 / MSDE — Survives both process recycles and reboots.
Table 3.15 shows commonly used scenarios for each of these technologies.
Table 3.15: Server-side caching scenarios
● ASP.NET Cache — In-process, high performance caching. Good for scenarios that require specific caching features.
● Static variables — In-process, high performance caching.
● Memory-mapped files — Computer-wide, high performance caching.
● Remoting singleton — Organization-wide cache (not high performance).
● SQL Server 2000 / MSDE — Can be used for any scope requirement that requires high durability.
Use the information in the preceding tables to select the appropriate caching tech-
nology for your server-side applications.
Summary
In this chapter, you have learned about the application layers in a .NET-based
distributed application, and you have seen how the different caching technologies
available best suit different layers.
You are now ready to consider how to cache the many elements of a .NET-based
application.
4
Caching .NET Framework Elements
Preceding chapters explained caching, including available caching technologies and
the appropriate locations for the different types of cached data. In this chapter, you
learn about how to make your own classes cacheable, what issues to consider when
you implement caching, and what types of Microsoft .NET Framework elements can
be cached in a distributed application.
This chapter contains the following sections:
● “Planning .NET Framework Element Caching”
● Cloning
● Serializing
Some of these issues can result in inconsistent data, whereas some simply influence
the application’s efficiency.
When the data types that you cache are thread safe, coordinated access to the shared
data eliminates the risk that one thread will interfere with and modify the data
elements of another thread, thus circumventing potential data race situations.
If your code is not thread safe, the following can occur:
● A cached item is read from the cache on one thread and then updated on another,
resulting in the use of inconsistent data.
● A cached item is flushed from the cache by one thread while a client using
another thread tries to read the item from cache.
● Two or more threads concurrently update a cached item with different values.
● Remove requests result in an item being removed from the cache while it is being
read by another thread.
You can avoid these problems by ensuring that your code is thread safe, allowing
only synchronized access to the cached data. You can do so either by implementing
locking in the caching mechanism or by using the ReaderWriterLock class in the
cache items.
Ensuring thread safety of cached items guarantees synchronized access of the items
and avoids such issues as dirty reads and multiple writes.
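For example, a minimal sketch of a thread-safe cache wrapper that uses the ReaderWriterLock class might look like the following; the SynchronizedCache class and its member names are illustrative assumptions, not code from this guide.

using System.Collections;
using System.Threading;

// Minimal sketch of a thread-safe cache wrapper (illustrative names).
public class SynchronizedCache
{
   private Hashtable m_items = new Hashtable();
   private ReaderWriterLock m_lock = new ReaderWriterLock();

   public object Get(string key)
   {
      // Many readers can hold the reader lock at the same time.
      m_lock.AcquireReaderLock(Timeout.Infinite);
      try { return m_items[key]; }
      finally { m_lock.ReleaseReaderLock(); }
   }

   public void Add(string key, object value)
   {
      // Only one writer at a time; readers are blocked while the item is updated.
      m_lock.AcquireWriterLock(Timeout.Infinite);
      try { m_items[key] = value; }
      finally { m_lock.ReleaseWriterLock(); }
   }

   public void Remove(string key)
   {
      m_lock.AcquireWriterLock(Timeout.Infinite);
      try { m_items.Remove(key); }
      finally { m_lock.ReleaseWriterLock(); }
   }
}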
For more information about thread synchronization, see “Safe Thread Synchroniza-
tion” in the MSDN Magazine.
Cloning
Another solution to thread safety and synchronization issues is to use cloning.
Instead of obtaining a reference to the original copy of your object when retrieving
cached items, your cache can clone the object and return a reference to the copy.
To do so, your cached objects must implement the ICloneable interface. This inter-
face contains one member, Clone(), which returns a copy of the current object. Take
care to implement a deep copy of your object, copying all class members and all
objects recursively referenced by your object.
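For example, a deep copy of a cached object might look like the following sketch; the Customer and Address classes are hypothetical and are used only to illustrate cloning recursively referenced objects.

using System;

public class Address : ICloneable
{
   public string City;

   public object Clone()
   {
      // Address contains only value and immutable members, so a
      // member-by-member copy is already a deep copy.
      return this.MemberwiseClone();
   }
}

public class Customer : ICloneable
{
   public string Name;
   public Address HomeAddress = new Address();

   public object Clone()
   {
      // Copy all members, and then clone recursively referenced objects
      // instead of copying their references.
      Customer copy = (Customer)this.MemberwiseClone();
      copy.HomeAddress = (Address)this.HomeAddress.Clone();
      return copy;
   }
}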
If you design your cache to include interfaces for receiving both a cloned object and
a reference to the original cached object, you will find new possibilities for resolving
synchronization issues in your application:
● If your object is not thread safe, you can use cloning to create a copy of your
cached data, update it, and then update the cache with the new object. This
technique solves synchronization issues when data is updated. In this scenario,
updating the cached item is performed as an atomic action, locking the access
to the object during the update.
● One process can be responsible for updating an object while another process uses
a cloned copy of the object for read-only purposes. This solves the problem of
synchronizing read-after-write operations on your cache because the read object
is merely a copy of the cached object, and the original object can be updated at
the same time.
Figure 4.1
Cloning cached objects (process A gets a cloned copy of the cached object, updates it, and writes it back to the cache; process B gets a cloned copy for read-only use)
Note: An alternative solution to synchronization issues is for process A to create a new object
instead of cloning an existing cached object and then to update the cache with the new object.
Serializing
If you plan to use a caching technology that requires objects to be serialized, you
must make your .NET classes serializable when you design them. You can do so by
using the Serializable attribute or by custom serializing your classes by implementing
the ISerializable interface.
Note: To see whether a specific caching technology requires serialization support, see “Choos-
ing a Caching Technology,” later in this chapter.
The following example shows how to use the Serializable attribute to make a class
serializable.
[Serializable]
public class MyClass {
…
}
For more information about serializing a .NET class by using the Serializable
attribute, see “Basic Serialization” in the MSDN Library.
The following example shows how to custom serialize a .NET class by implementing
the ISerializable interface.
public class MyClass1: ISerializable
{
public int size;
public String shape;
//Deserialization constructor
public MyClass1 (SerializationInfo info, StreamingContext context) {
size = (int)info.GetValue("size", typeof(int));
shape = (String)info.GetValue("shape", typeof(string));
}
//Serialization function
public void GetObjectData(SerializationInfo info, StreamingContext context)
{
info.AddValue("size", size);
info.AddValue("shape", shape);
}
}
For more information about custom serialization, see “Custom Serialization” in the
MSDN Library.
It is recommended that you implement ISerializable instead of using the
Serializable attribute because ISerializable does not use reflection and results in
better performance.
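To illustrate where serialization fits in, the following sketch serializes a cache item to a byte array with the BinaryFormatter, for example before it is written to a memory-mapped file or a SQL Server table; the helper class is an assumption and is not part of the guide's samples.

using System.IO;
using System.Runtime.Serialization.Formatters.Binary;

public class SerializationHelper
{
   // Serialize a cache item to a byte array, for example before writing it
   // to a memory-mapped file or a SQL Server image column.
   public static byte[] ToBytes(object item)
   {
      BinaryFormatter formatter = new BinaryFormatter();
      MemoryStream stream = new MemoryStream();
      formatter.Serialize(stream, item);
      return stream.ToArray();
   }

   // Rebuild the cached item from its serialized form.
   public static object FromBytes(byte[] data)
   {
      BinaryFormatter formatter = new BinaryFormatter();
      return formatter.Deserialize(new MemoryStream(data));
   }
}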
For more information about serializing a .NET class, see “Run-time Serialization,
Part 2” and “Run-time Serialization, Part 3” in the MSDN Magazine.
Note: This definition of normalization differs from the classic data processing definition of the
term, which is concerned with saving the data in the most efficient and memory-saving form.
When you consider normalizing cached data, your first priority is usually performance rather
than memory.
For more information about securely caching connection strings in your applica-
tions, see the guidelines for using Data Protection API (DPAPI) in the “Storing
Database Connection Strings Securely” section of “Building Secure ASP.NET Appli-
cations” in the MSDN Library.
For more information about using DPAPI encryptions, see the following articles in
“Building Secure ASP.NET Applications,” in the MSDN Library:
● “How To: Create a DPAPI Library”
● “How To: Use DPAPI (User Store) from ASP.NET with Enterprise Services”
These How To articles contain step-by-step instructions to help you learn how to use
DPAPI encryption in your ASP.NET applications.
Also do not cache SqlDataAdapter objects because they use DataReader objects in
their underlying data access method.
A control may be required by several forms in your application. In this situation, you
can clone several instances of the control to be used concurrently by all of the forms
that require it.
The following Windows Forms controls are classic examples of good caching
candidates:
● DataGrid — Displays ADO.NET data in a scrollable grid
The following example shows how to create a TreeView control and insert it into the
cache that has been created.
private void btnCreate_Click(object sender, System.EventArgs e)
{
   // Create the TreeView control and cache it in m_CachedControls
   // (a collection field on the form, for example a Hashtable).
   TreeView m_treeView = new TreeView();
   m_treeView.Name = "tvCached";
   m_treeView.Dock = System.Windows.Forms.DockStyle.Left;
   m_CachedControls.Add(m_treeView.Name, m_treeView);
}
The following example shows how to obtain the control from the cache and bind it
to a form.
private void GetAndBindControl(Form frm)
{
frm.Controls.Add((Control)m_CachedControls["tvCached"]);
frm.Show();
}
These code samples are all you need to implement Windows Forms control caching
in your application.
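The samples above assume a collection field on the form that holds the cached controls; a minimal declaration might look like the following (the field name follows the samples, but the collection type is an assumption).

// Assumed declaration of the control cache used by the preceding samples.
private System.Collections.Hashtable m_CachedControls =
   new System.Collections.Hashtable();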
Caching Images
Because images are relatively large data items, they are good caching candidates.
An image can be loaded from various locations, including:
● A local disk drive
Loading from local drives is faster than the other options; however, images cached
in memory load extremely quickly and improve your application’s performance and
the user’s experience.
Before you decide where to cache an image, consider how the image is used in the
application. There are two common scenarios for image usage:
● Images displayed to the user in the presentation tier of an application
● Explicitly manage the caching of images on the page. Images contained on a page
are loaded using separate HTTP requests, and they are not automatically cached
when using ASP.NET page caching.
● Use file dependencies on the cached image file to ensure that the image is invali-
dated when the data changes.
● Consider your cache expiration time, taking into account:
● Whether your image refresh time is dependent on other cache keys or on disk
files
For more information about specifying cache expiration times for images in Web
applications, see “Using Dependencies and Expirations” in Chapter 2, “Under-
standing Caching Technologies.”
Figure 4.2
Caching image files from a database (the application reads the image from the remote computer or database, caches it on the local disk, and gets the image from the cache when needed)
Caching images in the presentation tier helps increase the user-perceived perfor-
mance of your application.
Summary
This chapter helps you plan the caching of .NET Framework elements and covers
the options available for doing so.
After you decide what to cache, where to cache it, and how to cache it, you must
consider how to get the data into the cache and how to manage it once it’s there.
5
Managing the Contents of a Cache
So far in this guide, you have seen what data to cache and where to cache it. This
chapter describes other issues concerning caching management, including how to
load data into a cache and how to manage the data inside the cache.
This chapter contains the following sections:
● “Loading a Cache”
● “Flushing a Cache”
Loading a Cache
Data can be loaded into a cache using various methods. This section covers the
different options for populating the cache and how to select a method for loading
the data.
When determining the data acquisition method for your cache, consider how much
of the data you want to acquire and when you want to load it. For example, you
may decide to load data into the cache when the application initializes or to acquire
the data only when it is requested. Table 5.1 shows the options available for loading
a cache.
Table 5.1: Cache loading options
Loading type        Synchronous methods         Asynchronous methods
Proactive loading   Not recommended             Asynchronous pull loading
                                                Notification-based loading
Reactive loading    Synchronous pull loading    Not applicable
● You are using state of a known size. If you use proactive cache data loading when
you don’t know the size of the data, you might exhaust system resources. You
must not try to use resources that you do not have.
Figure 5.1
Asynchronous pull loading (the application business logic, a dispatcher, the cache, and the application service)
As Figure 5.1 shows, the steps for asynchronous pull loading are:
1. Based on a schedule or other type of notification, a dispatcher queries application
services for their data.
2. Return values from the service agents are placed in the cache, and possibly
aggregated with other data.
3. The application queries the cache for subsets of data and uses them.
Asynchronous pull loading ensures that the data is loaded into the cache indepen-
dently of the application flow and is available for use whenever the application
requests it.
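A minimal sketch of this approach follows; the dispatcher class, the five-minute schedule, and the GetLatestNews service call are illustrative assumptions rather than part of the guide's samples.

using System;
using System.Collections;
using System.Threading;

public class NewsCacheDispatcher
{
   private Hashtable m_cache = Hashtable.Synchronized(new Hashtable());
   private Timer m_timer;

   public void Start()
   {
      // 1. On a schedule, query the application service for its data.
      m_timer = new Timer(new TimerCallback(RefreshCache), null,
                          TimeSpan.Zero, TimeSpan.FromMinutes(5));
   }

   private void RefreshCache(object state)
   {
      // 2. Place the returned data in the cache, independently of the application flow.
      m_cache["news"] = GetLatestNews();
   }

   public object GetNews()
   {
      // 3. The application queries the cache whenever it needs the data.
      return m_cache["news"];
   }

   private object GetLatestNews()
   {
      // Placeholder for the call to the underlying application service.
      return DateTime.Now.ToString();
   }
}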
(Figure: notification-based loading — SQL Server 2000 Notification Services pushes notifications to the rich client application cache, which also pulls or pushes data from Web services and SQL Server.)
For a sample application design that demonstrates how to use Notification Services
as a push mechanism for cached items, see Chapter 7, “Appendix.”
You may want to use synchronous pull loading only when your system is running in
a steady state, so that, for example, your application won’t suffer from unexpected
network latencies.
Advantages of Synchronous Pull Loading
It is relatively easy to implement synchronous pull loading because all of the load-
ing code is written within the application and none is needed in the underlying
application services. For example, you do not need to create or maintain SQL Server
triggers or Windows Management Instrumentation (WMI) event listeners.
The synchronous pull loading implementation is independent of the underlying
services. Synchronous pull loading does not rely on a specific application services
technology, so it can be used to retrieve data from services such as Web services and
legacy systems.
Disadvantages of Synchronous Pull Loading
The major problem that occurs when using pull loading is state staleness. Because
no mechanism exists to notify the cache of data changes in the underlying applica-
tion services, data changes in the application services might not be reflected in the
cached data. For example, you may be caching flight departure information in your
application. If a flight departure time changes and that change is not reflected in the
cache, you may miss your flight.
Recommendations for Synchronous Pull Loading
Use synchronous pull loading for populating the cache when:
● The state retrieved from the different application services can have known and
acceptable degrees of staleness.
● The state can be refreshed at known intervals. For example, cached stock quotes
in a delayed stock ticker application can be refreshed once every 20 minutes.
● You are working with third-party services, such as Web services or legacy systems,
or you are using services that don't implement notification mechanisms.
Figure 5.3 on the next page shows how you can implement synchronous pull load-
ing in a service agent element.
As Figure 5.3 shows, the steps for synchronous pull loading are:
1. The application business logic calls the service agent and requests the data.
2. The service agent checks whether a suitable response is available in the cache —
that is, it looks for the item in the cache and the expiration policies of any rel-
evant items in it.
3. If the data is available in the cache, it is used. If not, the application service is
called to return the data, and the service agent places the returned data in the
cache.
Figure 5.3
Synchronous pull loading (the service agent, the cache, and the application service)
Note: It is recommended that you separate service agents that use caching from those that do
not. Doing so avoids having excess conditional logic in the service agent code.
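The following sketch outlines these steps in a caching service agent that uses the ASP.NET cache; the flight schedule service call, the cache key format, and the 20-minute expiration are illustrative assumptions.

using System;
using System.Data;
using System.Web;
using System.Web.Caching;

public class FlightScheduleServiceAgent
{
   public DataSet GetSchedule(string flightNumber)
   {
      Cache cache = HttpRuntime.Cache;
      string key = "schedule:" + flightNumber;

      // Steps 1-2: the business logic asks the agent, which checks the cache first.
      DataSet schedule = (DataSet)cache[key];
      if (schedule == null)
      {
         // Step 3: not cached (or expired), so call the application service
         // and place the returned data in the cache.
         schedule = CallFlightScheduleService(flightNumber);
         cache.Insert(key, schedule, null,
                      DateTime.Now.AddMinutes(20), Cache.NoSlidingExpiration);
      }
      return schedule;
   }

   private DataSet CallFlightScheduleService(string flightNumber)
   {
      // Placeholder for the underlying Web service or legacy system call.
      return new DataSet("FlightSchedule");
   }
}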
In addition to reloading cache items to keep them valid, you may also want to
implement an expiration policy to invalidate cached items.
Figure 5.4
Policy dependencies (time-based expirations are either relative time with a sliding window, or absolute time, which can be one-time or recurrent)
The following example shows how you can implement absolute time expiration.
internal bool CheckSimpleAbsoluteExpiration(DateTime nowDateTime,
DateTime absoluteExpiration)
{
bool hasExpired = false;
//Check expiration.
if(nowDateTime.Ticks > absoluteExpiration.Ticks)
{
hasExpired = true;
}
else
{
hasExpired = false;
}
return hasExpired;
}
● Sliding — Sliding expiration policies allow you to define the lifetime of an item
by specifying the interval between the item being accessed and the policy defin-
ing it as expired.
The following example shows how you can implement sliding time expiration.
internal bool CheckSlidingExpiration(DateTime nowDateTime,
DateTime lastUsed,
TimeSpan slidingExpiration)
{
bool hasExpired = false;
//Check expiration.
if(nowDateTime.Ticks > (lastUsed.Ticks + slidingExpiration.Ticks))
{
hasExpired = true;
}
else
{
hasExpired = false;
}
return hasExpired;
}
● WMI events
You can implement file dependencies by using the FileSystemWatcher class, in the
System.IO namespace. The following code demonstrates how to create a file depen-
dency class for invalidating cached items.
namespace Microsoft.Samples.Caching.Cs
{
   public class FileDependency
   {
      private System.IO.FileSystemWatcher m_fileSystemWatcher;
      public FileDependency(string path, string fileName)
      {
         // Watch the specified file and raise an event whenever it changes.
         m_fileSystemWatcher = new System.IO.FileSystemWatcher();
         m_fileSystemWatcher.Path = path;
         m_fileSystemWatcher.Filter = fileName;
         m_fileSystemWatcher.EnableRaisingEvents = true;
         this.m_fileSystemWatcher.Changed += new
            System.IO.FileSystemEventHandler(this.fileSystemWatcher_Changed);
      }
      private void fileSystemWatcher_Changed(object sender, System.IO.FileSystemEventArgs e)
      {
         // Place your custom invalidation code for the cached item here.
      }
   }
}
Whenever the specified file (or folder) changes, the Changed event fires, and your
custom code in the handler can invalidate the corresponding cached item.
Figure 5.5
Creating a dependency for external service notification (a service dependency in the cache of each rich client application and Web application receives the notifications)
Figure 5.6 shows how to create an internal pub-sub mechanism that obtains external
notifications of changes and notifies the internal subscribers of these events.
When you build applications with a large number of event subscribers, you should
handle external notifications through a pub-sub mechanism to improve scalability and availability.
If your application’s data is stored in a SQL Server database, you can use triggers to
implement a simple notification mechanism for your application’s cache. The main
advantage of using triggers is that your application is immediately notified of
database changes.
One disadvantage of using this technique is that it is not optimized for many sub-
scribers or for many events, which can lead to a decrease in database performance
and can create scalability issues. Another disadvantage is that triggers are not easy
to implement. In the database, you must write stored procedures containing the
code to run when the trigger executes and write custom listeners in the application
to receive the notifications.
Figure 5.6
Using internal pub-sub for external notifications (SQL Server 2000 Notification Services and external services send notifications through a message interface to an internal pub-sub, which notifies the caches of the rich client and Web applications)
For more information and sample code showing how to implement dependencies
using SQL Server triggers, see Rob Howard’s team page at http://www.gotdotnet.com
/team/rhoward.
Expirations work well for removing invalid items from a cache; however, this may
not always be enough to ensure an efficiently managed cache.
Flushing a Cache
Flushing allows you to manage cached items to ensure that storage, memory, and
other resources are used efficiently. Flushing is different from expiration in that you
may decide in some cases to flush valid cache items to make space for more fre-
quently used items, whereas expiration policies are used to remove invalid items.
There are two categories of flushing techniques: explicit flushing and scavenging.
Scavenging can be implemented to flush items based on when they were last used,
how often they have been used, or using priorities that you assign in your application.
Figure 5.7
Flushing techniques (explicit flushing, and scavenging based on LRU, LFU, or priorities)
Explicit flushing requires that you write the code to determine when the item should
be flushed and the code to flush it. With scavenging, you write an algorithm for the
cache to use to determine what items can be flushed.
Implementing Scavenging
You can use a scavenging algorithm to automatically remove seldom used or unim-
portant items from the cache when system memory or some other resource becomes
scarce.
Typically, scavenging is activated when storage resources become scarce, but it can
also be activated to save computing resources — for example, to reduce the time and
CPU cycles required to look up specific cached items.
The following code shows how to initiate a process based on the amount of memory
available.
using System;
using System.Management;
namespace Microsoft.Samples.Caching.Cs.WMIDependency
{
public delegate void WmiMonitoringEventHandler(object sender, object newValue);
eventArg = (ManagementBaseObject)(e.NewEvent["TargetInstance"]);
string memory =
eventArg.Properties["AvailableKBytes"].Value.ToString();
Int64 freeMemory = Convert.ToInt64(memory);
if (freeMemory <= m_MemoryLimit)
{
if(MemoryLimitReached != null)
MemoryLimitReached(this, freeMemory);
}
}
catch(Exception ex) {throw(ex);}
}
void IDisposable.Dispose()
{
if(memoryEventWatcher != null)
memoryEventWatcher.Stop();
}
}
}
Flushing items from memory can be initiated using one of three common algorithms:
● Least Recently Used algorithm — The Least Recently Used (LRU) algorithm
flushes the items that have not been used for the longest period of time.
The following pseudocode shows how the LRU algorithm works.
Sort_cache_items_by_last_used_datetime;
While (cache_size_as_%_of_available_memory >
max_cache_size_as_%_of_available_memory)
{
item = peek_first_item;
Remove_item_from_cache( item );
}
● Least Frequently Used algorithm — The Least Frequently Used (LFU) algorithm
flushes the items that have been used least frequently since they were loaded. The
main disadvantage of this algorithm is that it may keep obsolete items in the cache
indefinitely.
The following pseudocode shows how the LFU algorithm works.
Sort_cache_items_by_usage_count;
While (cache_size_as_%_of_available_memory >
max_cache_size_as_%_of_available_memory)
{
item = peek_first_item;
Remove_item_from_cache( item );
}
● Priority algorithm — The priority algorithm instructs the cache to assign certain
items priority over other items when it scavenges. Unlike in the other algorithms,
the application code rather than the cache engine assigns priorities.
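As an illustration of the priority approach, the following sketch removes the lowest-priority items first; the PrioritizedItem wrapper, the item limit, and the CacheItemPriority values (which mirror the enumeration described in Chapter 6) are assumptions made for this example.

using System.Collections;

public enum CacheItemPriority { Low = 0, Normal = 1, High = 2, NotRemoveable = 3 }

// Illustrative item wrapper; the application assigns the priority.
public class PrioritizedItem
{
   public string Key;
   public object Value;
   public CacheItemPriority Priority;
}

public class PriorityScavenger
{
   // Remove the lowest-priority items until the cache is back under its item limit.
   public void Scavenge(ArrayList cacheItems, int maxItems)
   {
      cacheItems.Sort(new PriorityComparer());
      while (cacheItems.Count > maxItems)
      {
         PrioritizedItem candidate = (PrioritizedItem)cacheItems[0];
         if (candidate.Priority == CacheItemPriority.NotRemoveable)
            break;                   // Nothing removable is left.
         cacheItems.RemoveAt(0);     // The lowest-priority item is flushed first.
      }
   }

   private class PriorityComparer : IComparer
   {
      public int Compare(object x, object y)
      {
         // Lower priorities sort first so that they are scavenged first.
         return ((PrioritizedItem)x).Priority - ((PrioritizedItem)y).Priority;
      }
   }
}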
You have seen how to load a cache and how to remove items from a cache for
different reasons. The last main planning task is to determine how to access the
information stored in the cache.
For example, you may have an asynchronously pull loaded cache that obtains news feeds from many news
agencies and places them in a SQL Server database. When the application retrieves
cached data, it may simply request news items related to Microsoft. In this scenario,
you must implement some identification method based on the cached data at-
tributes, such as keywords, which allows this type of search to be used.
Figure 5.8 shows how data is commonly requested, using either a known key or a
set of attributes.
Figure 5.8
Locating cached data (by a known key, or tagged/indexed by attributes)
Whether you are implementing a known key or a data attribute-based cache search-
ing mechanism, use the following guidelines for locating an item:
● If there is only one primary key or identifier, use a key-value pair collection such
as a hash table or a hybrid dictionary.
● If there are multiple potential keys, create one hash table for each identifier, as
Figure 5.9 shows. However, doing so can make it harder to remove items from
the cache because you need information from all of the hash tables that refer to
the cache item.
Figure 5.9
Using multiple hash tables (each hash table indexes the same news items by a different key)
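To make the guideline concrete, the following sketch indexes the same news items by two identifiers, as in Figure 5.9; the NewsItem class and the choice of identifiers are assumptions, and a real cache would allow many items per keyword.

using System.Collections;

public class NewsItem
{
   public string Id;          // Primary identifier.
   public string Keyword;     // Secondary identifier used for attribute-based lookup.
   public string Text;
}

public class NewsItemCache
{
   // One hash table per identifier that can be used to locate an item.
   private Hashtable m_byId = new Hashtable();
   private Hashtable m_byKeyword = new Hashtable();

   public void Add(NewsItem item)
   {
      m_byId[item.Id] = item;
      m_byKeyword[item.Keyword] = item;
   }

   public NewsItem GetById(string id)
   {
      return (NewsItem)m_byId[id];
   }

   public NewsItem GetByKeyword(string keyword)
   {
      return (NewsItem)m_byKeyword[keyword];
   }

   public void Remove(NewsItem item)
   {
      // Removal needs the keys from every hash table that refers to the item.
      m_byId.Remove(item.Id);
      m_byKeyword.Remove(item.Keyword);
   }
}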
Summary
This chapter explains the options for loading data into a cache and managing the
data after it is in the cache. However, you must consider other issues when planning
and designing a cache, including security, monitoring, and synchronizing caches in
server farms.
6
Understanding Advanced Caching Issues
Now that you have seen what caching technologies are available and how to use
them in Microsoft .NET-based applications, you have an overall picture of how to
implement caching. In this chapter, you learn about some of the advanced issues
related to the subject.
This chapter contains the following sections:
● “Designing a Custom Cache”
● “Monitoring a Cache”
● Provide support for cache-specific features, such as dependencies and expirations,
and enable you to write your own expiration and dependency implementations.
● Provide support for cache management features such as scavenging.
● Enable you to design your own cache storage solution by implementing the
storage class interfaces provided.
● Enable you to design your own cache scavenging algorithms by implementing
the classes and interfaces provided.
This design should enable you to create reusable cache mechanisms for your .NET
distributed applications.
Figure 6.1
Custom cache block diagram
Figure 6.2
AppDomain scope deployment (the CacheManager, CacheStorage, and CacheService all run within the application AppDomain)
Figure 6.3
Machine scope deployment (the CacheManager and CacheStorage in the application AppDomain, and the CacheService and CacheStorage in a service AppDomain, use shared memory on the same computer)
In this configuration, every application can access the storage directly while the
metadata is managed separately from a single location.
Using Application Farm Scope Deployment
In this deployment scenario the storage is implemented as a SQL Server durable
storage system as shown in Figure 6.4.
Figure 6.4
Application farm scope deployment (the CacheManager and the CacheService each access CacheStorage backed by SQL Server)
Figure 6.5
Class diagram (the CacheManager exposes Add, Remove, Clear, Get, and GetItem; the CacheService exposes Add, Remove, Clear, Get, and Notify; the CacheStorage class implements the ICacheStorage and ICacheMetadata interfaces)
CacheManager
● Get — The Get method retrieves a specific item from the cache identified by the
provided key. This method returns an object reference to that item in the cache.
● GetItem — The GetItem method returns a specific CacheItem object identified by
the provided key. This object includes the metadata associated with the cached
item alongside the cached item value.
The CacheManager class provided in the custom cache does not need to be changed
regardless of the storage technology you choose and your deployment model.
Using the CacheService Class
The CacheService class is responsible for managing the metadata associated with
cached items. This metadata may include expiration policies on cached items,
cached item priorities, and callbacks.
The definition of this class is shown in Figure 6.7 on the next page.
The CacheService class is implemented as a singleton, meaning that only one
instance of this class can exist for the cache. The instantiation of the CacheService
may change, depending on the scope of your cache data store and the deployment
scenario. If the scope of your cache data store is an AppDomain (for example, when
using static variable based cache storage systems) the CacheService class can exist
within the AppDomain of your application and cache. On the other hand, if your
cache store is shared across multiple AppDomains or processes (for example, when
using memory-mapped files or SQL Server based cache storage systems), the
CacheService class needs to be instantiated by a separate process and it is accessible
from all processes using the cache. Note that each AppDomain or process using the
cache has its own instance of the CacheManager class regardless of the deployment
scenario.
CacheService
● Get — The Get method returns a specified CacheItem object identified by the
provided key. This object includes the metadata associated with the cached item.
● Notify — The Notify method is used by the CacheManager to indicate that a
cached item was accessed. This notification is passed by the CacheService to the
expiration object associated with a cached item. For example, this notification can
be used to manage sliding expiration implementations where the expiration time
is reset every time the cached item is accessed.
These methods provide all of the functionality that you need to implement a class to
handle the metadata associated with cached items.
● ICacheMetadata
<interface>
ICacheStorage
● Remove — The Remove method takes a key as input and removes the stored item
value identified by this key from the storage.
● Clear — The Clear method clears all items from the storage.
● Get — The Get method takes a cached item’s key as input and returns the stored
item value identified by this key from the storage.
● GetSize — The GetSize method is used by the IScavengingAlgorithm
implementation.
These methods are used by the CacheManager to store cache items.
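Expressed in C#, the ICacheStorage interface might look like the following sketch; the parameter and return types are assumptions based on the method descriptions above, and an Add method is assumed even though its description is not shown here.

// Sketch of the ICacheStorage interface, inferred from the methods described
// above; the exact signatures are assumptions.
public interface ICacheStorage
{
   void Add(string key, object value);    // Store an item value under a key (assumed).
   void Remove(string key);               // Remove the stored item value for a key.
   void Clear();                          // Clear all items from the storage.
   object Get(string key);                // Return the stored item value for a key.
   int GetSize();                         // Used by the IScavengingAlgorithm implementation.
}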
Using the ICacheMetadata Interface
The CacheStorage class can also implement the ICacheMetadata interface, whose
definition is shown in Figure 6.9. The CacheService uses this interface to persist the
metadata. Implement this interface only for storage that is persistent, such as SQL
Server-based storage systems; in such cases, the metadata has to be persisted
alongside the cached items.
<interface>
ICacheMetadata
● Remove — The Remove method removes an item’s metadata from the persistent
storage. This method is used by the CacheService when an item is removed from
cache.
● Clear — The Clear method clears all metadata information from the storage.
● Get — The Get method retrieves all metadata from the storage. This method is
used by the CacheService to initialize the cache metadata from the persistent store
after a power failure or a process recycle.
You need to implement this interface only when you are using persistent storage
systems, for example, SQL Server.
Using the CacheItem Class
The CacheItem class is a wrapper class around a cached item’s value and metadata.
The class declaration is shown in Figure 6.10.
CacheItem
+ Key : string
+ Value : object
+ Expirations : ICacheItemExpiration[]
+ Priority : CacheItemPriority
Figure 6.10
CacheItem class
An object of this type is returned by the CacheManager class and the CacheService
class when the item’s metadata is requested using the GetItem method.
Using the ICacheItemExpiration Interface
This interface needs to be implemented by your expiration class. The interface
definition is shown in Figure 6.11.
<interface>
ICacheItemExpiration
+ Key : string
+ HasExpired ( ) : bool
+ Notify ( ) : void
<<signal>> + ItemDependencyChange (in key : string)
Figure 6.11
ICacheItemExpiration interface
● Notify — This method is used by the CacheService to notify the expiration object
attached to a cached item that the item has been accessed. For example, this is
useful for sliding expiration schemes, where the expiration time resets every time
the item is accessed.
● OnChange — This event is listened to by the CacheService, which, in response,
invalidates the cached item and removes it from the cache.
You can use these methods to manage the expiration of your cached items.
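A sketch of the interface, together with a sliding expiration implementation, might look like the following; the member signatures are assumptions based on Figure 6.11, and the ItemDependencyChange event is omitted for brevity.

using System;

// Sketch of the ICacheItemExpiration interface; signatures are assumptions.
public interface ICacheItemExpiration
{
   string Key { get; set; }
   bool HasExpired();
   void Notify();
}

// An illustrative sliding expiration: each call to Notify resets the window.
public class SlidingExpiration : ICacheItemExpiration
{
   private string m_key;
   private DateTime m_lastUsed = DateTime.Now;
   private TimeSpan m_window;

   public SlidingExpiration(TimeSpan window)
   {
      m_window = window;
   }

   public string Key
   {
      get { return m_key; }
      set { m_key = value; }
   }

   public bool HasExpired()
   {
      // Expired if the item has not been accessed within the sliding window.
      return DateTime.Now.Ticks > (m_lastUsed.Ticks + m_window.Ticks);
   }

   public void Notify()
   {
      // Called by the CacheService whenever the item is accessed.
      m_lastUsed = DateTime.Now;
   }
}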
Using the IScavengingAlgorithm Interface
This interface needs to be implemented by your scavenging algorithm class. Its
definition is shown in Figure 6.12.
<interface>
IScavengingAlgorithm
● Clear — The Clear method clears all metadata information from the scavenger
class.
● Scavenge — The Scavenge method is called by the CacheService and is used to
run the scavenging algorithm that removes items from the cache to free cache
storage resources.
● Notify — The Notify method is called by the CacheService to notify the scaveng-
ing algorithm object that a cached item was accessed. The scavenging algorithm
uses this information when performing the scavenging process to decide which
items should be removed from cache based on the scavenging policy (LRU, LFU,
or any other implementation).
These methods are used by the CacheStorage class to execute the scavenging pro-
cess when cache storage resources become scarce.
Using Additional Support Classes
In addition to the classes described so far, there are several other classes supporting
the design, as shown in Figure 6.13.
<enumeration>
CacheItemRemoveCause
+ Expired = 0
+ Removed = 1
+ Scavenged = 2

<enumeration>
CacheItemPriority
+ Low = 0
+ Normal = 1
+ High = 2
+ NotRemoveable = 3

<delegate>
ItemDependencyChange

<delegate>
CacheItemRemovedCallback
Because this is an XML file, it is easy to view and update without recompiling or
redistributing sections of your caching system.
Figure 6.14
Cache initialization use case (the cache objects are created and the metadata is loaded through ICacheMetadata : Get)
The use case shown in Figure 6.15 details the sequence of events when an item is
added to the cache.
Figure 6.15
Add item use case (the Add call flows to ICacheMetadata : Add and IScavengingAlgorithm : Add)
The use case shown in Figure 6.16 details the sequence of events when an item is
removed from the cache.
Figure 6.16
Remove item use case (the Remove call flows to ICacheStorage : Remove and ICacheMetadata : Remove)
The use case shown in Figure 6.17 details the sequence of events when an item is
retrieved from the cache.
Figure 6.17
Get item use case (the Get call notifies the scavenging algorithm through IScavengingAlgorithm : Notify)
The use case shown in Figure 6.18 details the sequence of events when an item with
metadata is retrieved from the cache.
Figure 6.18
Get item and metadata use case (the GetItem call notifies the scavenging algorithm through IScavengingAlgorithm : Notify)
After you design and implement a cache for your application, you may decide that
the data it contains is sensitive to your organization. If this is the case, you should
add some code to secure the contents, just as you would secure them in their natural
storage mechanism.
//Create a SignedXml object.
SignedXml signedXml = new SignedXml();
// Add a KeyInfo clause containing the RSA key.
KeyInfo keyInfo = new KeyInfo();
keyInfo.AddClause( new RSAKeyValue( key ) );
signedXml.KeyInfo = keyInfo;
Using this code, you can ensure that your application is aware of any tampering that
occurs to items stored in your cache.
[StructLayout(LayoutKind.Sequential, CharSet=CharSet.Unicode)]
internal struct CRYPTPROTECT_PROMPTSTRUCT
{
public int cbSize;
public int dwPromptFlags;
public IntPtr hwndApp;
public String szPrompt;
}
#endregion
[DllImport("kernel32.dll", CharSet=CharSet.Auto)]
private unsafe static extern int FormatMessage(int dwFlags,
ref IntPtr lpSource,
int dwMessageId,
int dwLanguageId,
ref String lpBuffer,
int nSize,
IntPtr *Arguments);
#endregion
#region Constants
public enum Store {Machine = 1, User};
static private IntPtr NullPtr = ((IntPtr)((int)(0)));
private const int CRYPTPROTECT_UI_FORBIDDEN = 0x1;
private const int CRYPTPROTECT_LOCAL_MACHINE = 0x4;
#endregion
These types must be declared for the encryption and decryption code to function.
int dwFlags;
try
{
try
{
int bytesSize = plainText.Length;
plainTextBlob.pbData = Marshal.AllocHGlobal(bytesSize);
if(IntPtr.Zero == plainTextBlob.pbData)
throw new Exception("Unable to allocate plaintext buffer.");
plainTextBlob.cbData = bytesSize;
Marshal.Copy(plainText, 0, plainTextBlob.pbData, bytesSize);
}
catch(Exception ex)
{
throw new Exception("Exception marshalling data. " + ex.Message);
}
dwFlags = CRYPTPROTECT_LOCAL_MACHINE|CRYPTPROTECT_UI_FORBIDDEN;
//Check to see if the entropy is null
if(null == optionalEntropy)
{//Allocate something
optionalEntropy = new byte[0];
}
try
{
int bytesSize = optionalEntropy.Length;
entropyBlob.pbData = Marshal.AllocHGlobal(optionalEntropy.Length);
if(IntPtr.Zero == entropyBlob.pbData)
throw new Exception("Unable to allocate entropy data buffer.");
This code ensures that the data stored in the cache cannot be read without access to
the shared key.
try
{
try
{
int cipherTextSize = cipherText.Length;
cipherBlob.pbData = Marshal.AllocHGlobal(cipherTextSize);
if(IntPtr.Zero == cipherBlob.pbData)
throw new Exception("Unable to allocate cipherText buffer.");
cipherBlob.cbData = cipherTextSize;
Marshal.Copy(cipherText, 0, cipherBlob.pbData, cipherBlob.cbData);
}
catch(Exception ex)
{
throw new Exception("Exception marshalling data. " + ex.Message);
}
DATA_BLOB entropyBlob = new DATA_BLOB();
int dwFlags;
dwFlags = CRYPTPROTECT_LOCAL_MACHINE|CRYPTPROTECT_UI_FORBIDDEN;
//Check to see if the entropy is null
if(null == optionalEntropy)
{//Allocate something
optionalEntropy = new byte[0];
}
try
{
int bytesSize = optionalEntropy.Length;
entropyBlob.pbData = Marshal.AllocHGlobal(bytesSize);
if(IntPtr.Zero == entropyBlob.pbData)
throw new Exception("Unable to allocate entropy buffer.");
entropyBlob.cbData = bytesSize;
Marshal.Copy(optionalEntropy, 0, entropyBlob.pbData, bytesSize);
}
catch(Exception ex)
{
throw new Exception("Exception entropy marshalling data. " + ex.Message);
}
retVal = CryptUnprotectData(ref cipherBlob, null, ref
entropyBlob, IntPtr.Zero, ref prompt,
dwFlags, ref plainTextBlob);
if(false == retVal) throw new Exception("Decryption failed.");
//Free the blob and entropy.
if(IntPtr.Zero != cipherBlob.pbData)
Marshal.FreeHGlobal(cipherBlob.pbData);
if(IntPtr.Zero != entropyBlob.pbData)
Marshal.FreeHGlobal(entropyBlob.pbData);
}
catch(Exception ex)
{
throw new Exception("Exception decrypting. " + ex.Message);
}
byte[] plainText = new byte[plainTextBlob.cbData];
Marshal.Copy(plainTextBlob.pbData, plainText, 0, plainTextBlob.cbData);
return plainText;
}
This code uses the shared key to decrypt the data that is encrypted in the cache.
After you implement your cache, and optionally secure it, you are ready to start
using it. As with all pieces of software, you should monitor the performance of your
cache and be prepared to tune any aspects that do not behave as expected.
Monitoring a Cache
Monitoring your cache usage and performance can help you understand whether
your cache is performing as expected and helps you fine-tune your cache solution.
This section of the guide describes which cache parameters should be monitored
and what to look for when monitoring your cache.
Note: Chapter 7, “Appendix,” includes cache performance data and graphs that were collected
and analyzed under varying cache loads.
● Your maximum allowed cache size is too small, causing frequent scavenging
operations, which results in cached items being removed to free up memory.
3. Check your cache turnover rate. If this is high, it indicates that items are inserted
and removed from cache at a high rate. Possible causes for this include:
● Your maximum allowed cache size is too small, causing frequent scavenging
operations which result in cached items being removed to free up memory.
● Faulty application design, resulting in improper use of the cache.
Regular monitoring of your cache should highlight any changes in data use and any
bottlenecks that these might introduce. This is the main management task associated
with the post-deployment phase of using a caching system.
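For ASP.NET applications that use the ASP.NET cache, the entry count and turnover rate can be read from the ASP.NET Applications performance object, as the following sketch shows; the "__Total__" instance aggregates all applications on the server, and a custom cache would need to publish its own counters instead.

using System;
using System.Diagnostics;

public class CacheMonitor
{
   public static void ReportCacheCounters()
   {
      // The "ASP.NET Applications" category exposes cache counters per application;
      // "__Total__" aggregates all applications on the server.
      PerformanceCounter entries = new PerformanceCounter(
         "ASP.NET Applications", "Cache Total Entries", "__Total__");
      PerformanceCounter turnover = new PerformanceCounter(
         "ASP.NET Applications", "Cache Total Turnover Rate", "__Total__");

      Console.WriteLine("Cache entries: {0}", entries.NextValue());
      Console.WriteLine("Cache turnover rate: {0}", turnover.NextValue());
   }
}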
Summary
This chapter has described the design details for a custom cache framework. You can
use this solution as a basis for your own caching systems. It has also introduced how
you can secure the contents of a cache and the items as they are transmitted to and
from the cache, and it has described methods for monitoring a deployed caching
solution.
7
Appendix
This chapter includes the following sections:
● “Appendix 1: Understanding State Terminology”
● Physical scope
● Logical scope
State refers to data, and the condition of that data, being used in a system at a
certain point in time. The lifetime and scope of state need to be taken into account
when you are caching state, because the cache must be accessible for the same
period of time and from the same locations as the original state.
In addition to considering the duration of the validity of the state, you also need to
consider where the state can be accessed from.
Understanding your application’s state and its characteristics, such as lifetime and
scope, is important for planning your caching policies and mechanisms.
Figure 7.1
Flight schedule application flow (the Flights Schedule Web application, its cache, and the Flights Schedule Web service)
The problem with this scenario is that if the flight schedule in the Web service
changes, the change is not reflected in the cache, and the Web application displays
obsolete data.
To solve this problem, you can implement a notification mechanism using SQL
Server Notification Services to notify the cache of flight schedule changes. The flow
of the application after implementing this mechanism is shown in Figure 7.2.
Figure 7.2
Flight schedule application flow with Notification Services (the Flights Schedule Web application, its cache, and the notification flow described in the following steps)
The flow of data in the application now follows a different set of steps:
1. The application looks for the flight schedule data in the cache. If it’s there, the
data is returned and displayed.
2. If the data is not in the cache, the Web application retrieves the data from the Web
service and instructs the cache to store the data. The cache stores the data and
subscribes to a FlightsNotification event in Notification Services.
3. Notification Services monitors the Web service for flight schedule changes.
4. When a change occurs in the Web service, Notification Services informs the cache,
which invalidates the obsolete item and then removes it.
Using Notification Services ensures that any changes to schedules are reflected in
the Web application in a timely manner.
The full code sample for the flight schedule application can be downloaded from the
Patterns and Practices Caching Architecture Guide workspace at GotDotNet.
For more information about Notification Services, see the SQL Server 2000 Notifica-
tion Services Web site at http://www.microsoft.com/sql/ns/.
/*
minute - 0-59
hour - 0-23
day of month - 1-31
month - 1-12
day of week - 0-7 (Sunday is 0 or 7)
wildcards - * means run every
Examples:
5 * * * * - run 5th minute of every hour
5,31 * * * * - run every 5th and 31st minute of every hour
1 21 * * * - run at the 1st minute of the 21st hour of every day (9:01 PM)
1 15,21 * * *- run at the 1st minute of the 15th and the 21st hours of every day
31 15 * * * - run 3:31 PM every day
7 4 * * 6 - run Saturday 4:07 AM
15 21 4 7 * - run 9:15 PM on 4 July
*/
namespace ExtendedFormat
{
class Test
{
/// <summary>
/// The main entry point for the application
/// </summary>
[STAThread]
static void Main(string[] args)
{
Console.WriteLine(
ExtendedFormat.IsExtendedExpired(
"0 15,22 * * *",
new DateTime( 2002, 4, 22, 16, 0, 0 ),
new DateTime( 2002, 4, 22, 22, 0, 0 )));
}
}
/// <summary>
/// Test the extended format with a given date.
/// </summary>
/// <param name="format">The extended format string</param>
/// <param name="getTime">The time when the item has been refreshed</param>
/// <param name="nowTime">DateTime.Now, or the date to test with</param>
/// <returns>true if the item has expired; otherwise, false</returns>
public static bool IsExtendedExpired( string format,
DateTime getTime,
DateTime nowTime )
{
//Validate arguments
if( format == null )
throw new ArgumentNullException( "format" );
TimeSpan ts;
int day;
// formatItem holds the day-of-month field parsed from the format string
// (the parsing of the other fields is omitted from this excerpt).
string[] days = formatItem.Split( ',' );
foreach( string sday in days )
{
day = int.Parse( sday );
ts = nowTime.Subtract( getTime );
if( ts.TotalDays >= 30 )
{
return true;
}
else
{
if( ( getTime.Day < day && nowTime.Day >= day ) ||
( getTime.Month != nowTime.Month && nowTime.Day >= day ) ||
( getTime.Month != nowTime.Month && getTime.Day < day ))
{
return true;
}
}
}
return false;
}
}
}
This code allows you to specify your time expirations in recurring absolute terms,
such as at midnight on the first day of every month. This can be useful for a caching
mechanism that requires expiration policies other than standard absolute or sliding
expirations.
Figure 7.3
Test bed configuration (ACT clients, a Web server running IIS 5.0, and a database server running SQL Server 2000)
All computers were running Windows 2000 Advanced Server with Service Pack 2.
The database server was running SQL Server 2000, and the Web server was running
Internet Information Services (IIS) 5.0. Table 7.4 summarizes the specifications of
each computer in the test system.
Table 7.4: Computer specifications
Computer            Type and CPU                    Number of CPUs    Memory    Disk
Client              Dell PowerEdge 2550, 1 GHz      2                 256 MB    16.9 GB
Web server          Compaq ProLiant DL380, 2 GHz    2                 1 GB      16.9 GB
Database server     Compaq ProLiant DL380, 2 GHz    2                 1 GB      16.9 GB
The following graph results were obtained by running the described tests on these computers.
The only technology that behaves significantly differently is memory-mapped files.
This is because the memory-mapped file technology is unmanaged and is
implemented by streaming data to and from specified memory locations. Memory-
mapped files are neither an in-process technology nor an out-of-process technology.
Figure 7.4
Item size versus retrieval request execution time
As the size of the items increases, the number of retrieval requests that the cache
can handle per second stays relatively constant for the in-process technologies, but
it decreases for the others. In this scenario, the ASP.NET session cache produces
consistently better results for items less than 6 MB in size than the memory-mapped
file cache. The other technologies do not perform well with anything other than very
small items.
Figure 7.5 shows the effect of the item size on the number of retrieval requests
serviced per second.
Figure 7.5
Item size versus retrieval requests per second
These figures show that ASP.NET session and static variables produce consistently
good results for retrieving all sizes of items from the cache.
Storing Items in the Cache
The results for the time taken to store increasingly sized items in the cache are
similar to those for retrieving items for all technologies except for SQL Server 2000,
which is relatively slower. This is because when you retrieve data, SQL Server
automatically stores frequently used tables in memory to minimize input/output
operations and to improve performance. So although the retrieval of SQL Server
items is similar to the other technologies, the storage of those items is significantly
slower due to the need for SQL Server to do more than simply store the data, for
example, to create indexes.
Figure 7.6 shows the effect of the item size on the storage request execution time.
The results for the effect of item size on the number of storage requests that can
be handled per second are very similar to the results for the retrieval test, and the
reasoning behind those results is the same.
Figure 7.6
Item size versus storage request execution time
Figure 7.7 shows the effect of the item size on the number of storage requests
serviced per second.
Figure 7.7
Item size versus storage requests per second
From all of these results you can deduce that the best performing caches are based
on the in-process technologies, and the worst are based on out-of-process technolo-
gies. However, the data presented here can help you make the correct choice of
technology based on the size of the items you need to cache alongside all the other
constraints discussed in Chapter 2, “Understanding Caching Technologies.”
Figure 7.8
User load versus retrieval execution time
As the number of users increases, the number of retrieval requests that the cache
handles per second grows until a bottleneck is hit, and then the number of requests
per second flattens out. For the in-process technologies, the number of requests per
second at the bottleneck (about 100 users) is much higher than for the out-of-process
technologies.
Figure 7.9 shows the effect of the user load on the number of retrieval requests
serviced per second.
Figure 7.9
User load versus retrieval requests per second
The static variable cache produces the best retrieval results for increasing user load,
as it did for increasing item sizes.
Storing Items in the Cache
The results for the time taken to store items in the cache as the user load increased
are similar to those for retrieving items for all technologies except for SQL Server
2000, which is relatively slower. This is because when you are storing data, SQL
Server has to index the inserted data in the required tables and to physically store
the data on the disk. These types of operations take a relatively significant amount
of time to complete.
Figure 7.10 on the next page shows the effect of the user load on the storage request
execution time.
The results for the effect of user load on the number of storage requests that can be
handled per second are very similar to the results for the retrieval test, and the
reasoning behind those results is the same.
Figure 7.10
User load versus storage request execution time
Figure 7.11 shows the effect of the user load on the number of storage requests
serviced per second.
Figure 7.11
User load versus storage requests per second
The performance tests for differing user loads show the same overall results as the
differing item size tests: that the in-process caching technologies perform signifi-
cantly better than out-of-process systems. Use the data presented here along with
your own parameters, estimated user load and item size data, to select the appropri-
ate caching technology for your application within the technology constraints
discussed in Chapter 2, “Understanding Caching Technologies.”