SFC Developer's Guide
Release 4.5.X
23 November 2007
Doc ID: SDK453UMC++.070
Copyright © 1995, 1996, 1998, 2000-2007 Reuters Group plc. All rights reserved.
Except as permitted by law no part of this document may be reproduced or transmitted by any process or means
without the prior consent of Reuters Group plc.
Reuters, by publishing this document, does not guarantee that any information contained herein is and will remain
accurate or that use of the information will ensure correct and faultless operation of the relevant service or
equipment. Reuters, its agents and employees shall not be held liable to or through any user for any loss or
damage whatsoever resulting from reliance on the information contained herein.
This document refers to the products and trademarks of manufacturers. Acknowledgment is made of all
trademarks, registered trademarks and trading names that are referred to in the text.
TIB is a registered trademark and TIBCO and TIBCO Rendezvous are trademarks of TIBCO Software Inc.
REUTERS, the Sphere Logo, Triarch, and Reuters SSL are trademarks of the Reuters Group of Companies round
the world.
This SOFTWARE, including but not limited to, the code, screen, structure, sequence, and organization thereof,
and DOCUMENTATION are protected by United States copyright laws and international treaty provisions.
This manual is subject to U.S. export regulations.
TABLE OF CONTENTS
1 OVERVIEW
1.1 What are the System Foundation Classes?
1.2 Audience
1.3 Organization of the SFC Documents
1.4 Conventions
1.4.1 Typographic
1.4.2 Programming
1.4.3 Naming
1.5 The SFC Development Process and Change Management Policies
1.6 Scope and Availability
2 DESIGN PHILOSOPHY
2.1 Modeling Principles
2.2 Design by Contract
2.3 Component Design
2.4 Application Design
4 SFC MODELS
4.1 Record Subscription
4.1.1 Overview
4.1.1.1 Record Items
4.1.1.2 Real-time Records
4.1.1.3 Snapshot Records
4.1.1.4 Record Chains
4.1.1.5 Record Services
4.1.1.6 Record Service Pools
1.2 Audience
Users of the SFC Library should have working knowledge of the C++ language and a general
understanding of the concepts of object-oriented analysis and design.
Some useful books for learning C++ are:
[1] Object-Oriented Programming Using C++ by Ira Pohl, ISBN 0-8053-5382-8
[2] Effective C++ by Scott Meyers, ISBN 0-201-56364-9
[3] Advanced C++: Programming Styles and Idioms by James O. Coplien, ISBN 0-204-54855-0
Some useful books for learning object-oriented concepts are:
[4] Object-Oriented Software Construction by Bertrand Meyer, ISBN 0-13-629049-3
[5] Object-Oriented Analysis and Design by Grady Booch, ISBN 0-8053-5340-2
[6] Design Patterns by Erich Gamma, Richard Helm, Ralph Johnson, and John Vlissides, ISBN 0-201-65561-2
The remaining sections in this chapter discuss the conventions used in the manual, the SFC
development process, and the current scope of the product.
The SFC Migration Guide details the differences between the SFC in SSL Developers Kit - Classic
Edition 4.0.x and this release of SFC. It also describes what changes will need to be made when
migrating programs.
The SFC Reference Manual contains a class reference with detailed descriptions of each class in the
SFC.
Also included with the SFC documentation is the Reuters Marketfeed Reference Manual which defines
the syntax of those Marketfeed messages that may be sent by source applications or received by sink
applications.
The SFC documentation does not cover the installation and configuration of the Triarch and RMDS
market data infrastructures. Consult the product documentation of those infrastructures for information
on market data components, including RVD, and Sink, Source and Service Distributors.
1.4 Conventions
1.4.1 Typographic
• Important notes and tips are marked off with a box.
NOTE: Most files referenced in this document with a .C extension will have a .cpp extension in
the win32 load.
1.4.2 Programming
• All class names start with the letters “RTR” (for ReuTeR classes; e.g. RTRString).
• All function names start with lower case letters, but subsequent words are capitalized (e.g.,
append() and lastString()).
• In order to make function names easy to understand, abbreviations are used sparingly. Some
class names may seem rather long, but these names should be easier to understand than
abbreviated names. On the technical side, it is important that class names do not clash with other
class libraries or client defined classes. The combination of the “RTR” prefix and the long name
format will avoid name collisions with classes from other libraries.
• To provide consistent, easy to learn class interfaces, function names are assigned such that
functions are given the same name in different classes if they have the same use. Application
developers will be able to understand new classes more quickly when the naming is consistent.
For instance, the addClient() function is used throughout the real-time classes as the mechanism
that allows components to register for events.
• The SFC uses a common boolean type RTRBOOL, which can have either RTRTRUE or
RTRFALSE as its value.
• Where class methods return a pointer, the return value may be null. In circumstances where
returned values will never be null, an object reference will be returned.
• The SFC uses the following memory management guideline: “If you allocated the object, you
delete it. If you didn’t allocate the object, don’t delete it.” Any exceptions to the guideline will be
explicitly highlighted in the appropriate section(s) of the SFC Reference Manual. When possible,
objects should be deleted in the reverse order from which they were allocated.
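As a small illustration of this guideline (plain C++ with hypothetical application classes, not SFC code):

#include <iostream>

struct Connection {                    // stands in for an object the application allocates
    ~Connection() { std::cout << "connection closed" << std::endl; }
};

struct Client {
    explicit Client(Connection& c) : conn(c) {}
    Connection& conn;                  // referenced, not owned: never deleted by Client
};

int main()
{
    Connection *conn = new Connection();   // allocated here, so deleted here
    Client *client = new Client(*conn);    // allocated here, so deleted here
    // ... application work ...
    delete client;                         // delete in reverse order of allocation
    delete conn;
    return 0;
}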
1.4.3 Naming
SFC originally only included an implementation for the Triarch (SSL) infrastructure. While the SFC
name originates from SSL Foundation Classes, the core part of SFC, the abstract models, is generic
for any market data infrastructure. So even though SFC now includes a TIB implementation for TIC-
based TIB and Reuters Market Data System (RMDS) infrastructures, the name “SFC” applies to the
abstract models of both implementations.
The class names for the Triarch and RMDS (P2PS, MDH) implementation include "SSL" and "Default."
These classes retain those names for backwards compatibility. The class names for the TIB
implementation include "TIB." Throughout the rest of this document, "SSL" refers to the implementation
for the Triarch and RMDS (P2PS, MDH) infrastructures, and "TIB" refers to the implementation for the
RTIC-based RMDS infrastructure. "TSA" is used in implementations that work only with the RMDS
Distribution Layer.
SFC’s TIB implementation should be used when consuming data from RMDS. Both the SSL and the
TIB record publishing implementations can be used with an RMDS infrastructure. The TSA insert
implementation can be used to contribute to an RMDS infrastructure. See Figure E.1 for more
information.
The TIB implementation is designed to work with TIBCO Rendezvous, so it is used with the
Rendezvous Distribution Layer of RMDS. However, due to subtle differences in the infrastructures,
SFC must be told which infrastructure it is using. See section 4.8.1 and section 4.8.2 for information on
how to do this.
NOTES:
• When this document, the SFC Reference Manual, or the example programs refer to “TSA”
(Trading Solution Architecture) they are referring to RMDS. In that context, the terms TSA and
RMDS are interchangeable.
• The TIB implementation is designed to work with the RTIC-based RMDS infrastructure.
Users of the SFC are strongly encouraged to provide feedback on both the conceptual models and the
implementation.
Ultimately, application programming interfaces (APIs) are required. When using an object-oriented
programming environment, the result is a set of class libraries. A given model may have more than one
implementation. The models which comprise a class library must be modular in their design and
implementation. That is, it must be possible to add and remove both models and implementations
thereof. This allows the scope of the library to grow and change over time in a manageable fashion.
Why go to all the trouble of developing abstract data types, iterating through prototypes, and building
components? Clearly, the production of a reusable component requires high standards of design and
implementation, and, perhaps, imposes restrictions on the way in which software can be implemented.
Perhaps the most important benefit of building components is the flexibility it affords. Components can
be swapped out, enhanced, maintained, and interchanged. Monolithic software systems inhibit
maintenance and enhancement.
Additionally, a well conceived conceptual model affords a much friendlier, more intuitive programming
environment, letting programmers concentrate on problems specific to their particular domain rather
than having to master the details of many different domains and implementations.
time record model to retrieve a record item. It would need access to formatting information specifying
which fields to display, where to display them, and which presentation attributes to use in doing so.
Ideally, this component is independent of the particular user interface being used and also
independent of the display technology.
Another example is a limit minding component which monitors fields from a record and tests whether
those fields exceed predefined limits. Ideally, this kind of capability is implemented so that it is
independent of display and user interface.
domain. Distributed systems often offer multiple services or data providers. Again, the purpose of the
service abstraction is to shield applications from the details of system implementation.
• Services are uniquely identified by name.
• Services have state and undergo state changes which may be indicated by explanatory textual
information.
• Services may provide alerts, e.g., data items worthy of special notice.
• Services may be permissioned to control access to their data.
application specific processing of the events generated by the SFC Libraries. An application consists
of one or more instances of various specialized client types which register with instances of market
data items to receive events relating to those items.
Clients register to receive events with individual data items (or other event generating components) in
which they have interest. The implementation of the model is responsible for managing this client
registration and for delivering the appropriate events to registered clients.
implementation of their choice. Where portability is of concern, designers may find that they can use
the RTREventNotifier interface to their advantage.
4.1.1 Overview
definition. The identifier also implicitly defines the semantics of the field. For example there are many
different kinds of price fields, but a BID price and an ASK price have very different semantics.
Clients may use fields in their "generic" form or interpret their contents according to their type. For
example, while a date can be represented as the string "MAR 11 1994", it may be useful to use the
numeric equivalent where day=11, month=3, year=1994. A price may be represented as the string "29
2/8" or as the floating point value 29.25. Type-specific fields are represented in the SFC for
alphanumeric, date-time, enumerated, integer, numeric, and price values. These type-specific fields
provide the capability to interpret the string value according to the rules associated with its particular
type.
NOTE: Record chain elements correspond to positions in a chain, not records in a chain. If the values in the third and fourth positions of a chain swap, chain element clients for the third and fourth chain elements will each receive an update event.
no corresponding item of market data with the same name) or the application may not have permission
to access the underlying data. The text of an inactive record explains the reason that data will not be
provided. Once in an inactive state, a record will never transition to the active state; it is in an
unrecoverable error state. Applications should not maintain references (pointers) to inactive items.
This rule is based on practical considerations, e.g., memory management. When items are inactive,
the resources they use (memory, etc.) are eligible to be reclaimed.
4.1.2 Design
This section provides more detail concerning the analysis and design of the real-time record model.
The purpose of this section is to show how the abstract analysis translates into specific class features.
Not all features of all classes are described. Please refer to the SFC Reference Manual for full details.
The real-time record states and their attributes are:
• Active_noData (hasData = False, stale = True, active = True): Records in this state cannot provide any fields. The initial "image" of the record is not yet available. Records in this state may transition to any of the other three states. Once out of this state, a record instance will never return to it.
• Active_valid (hasData = True, stale = False, active = True): Records in this state contain valid data. Records in this state may transition to the Active_stale state or to the Inactive state.
• Active_stale (hasData = True, stale = True, active = True): Records in this state contain data which may be out-of-date, i.e. it is stale. Stale records are in a recoverable error condition and will take all necessary steps to return to an Active_valid state.
• Inactive (hasData = don't care, stale = don't care, active = False): Records transition to this state when the service which created the given record is unable or unwilling to continue providing access to the underlying data item. Once in this state, a record will never transition to another state.
State transitions are propagated to clients as events. The events triggered by state transitions are
described as follows:
• Sync - This event indicates the availability of an initial "image". This event will only occur once for
a given record instance. The event triggers a transition to either the Active_valid state or the
Active_stale state. Record clients will receive a processRecordSync() event.
• Resync - This event indicates the availability of a subsequent "image". It may also indicate a
change in state between Active_stale and Active_valid. When a resync occurs, the state of the
record must be examined. Record clients will receive a pair of events, processRecordResync()
and processResyncComplete(), to indicate the data change. If there is also a change in state,
record clients will receive an additional event, processRecordStale() or
processRecordNotStale(), to indicate a state change from Active_valid to Active_stale or from
Active_stale to Active_valid respectively.
• Stale - This event indicates a transition from the Active_valid state to the Active_stale state.
Record clients will receive a processRecordStale() event.
• NotStale - This event indicates a transition from the Active_stale state to the Active_valid state.
Record clients will receive a processRecordNotStale() event.
• Inactive - This event indicates a transition to the Inactive state. Record clients will receive a
processRecordInactive() event.
Record will also generate informational events when appropriate. Clients will receive a
processRecordInfo() event. These events never indicate a change in record state.
Records have a text() method which provides a textual explanation of the current record state. Record
clients are kept informed of progress by means of events which indicate that there is new textual data
available concerning the state of the record.
Figure 4.1 shows the various real-time record states and state transitions.
The snapshot record states and their attributes are:
• Active_noData (hasData = False, stale = True, active = True): Records in this state cannot provide any fields. The initial "image" of the record is not yet available. Records in this state may transition to either of the other states. Once out of this state, a record instance will never return to it.
• Active_stale (hasData = True, stale = True, active = True): Records in this state contain data which may be out-of-date, i.e. it is stale. Snapshots are always considered stale because the data may have been updated immediately after the snapshot was received.
• Inactive (hasData = don't care, stale = don't care, active = False): Records transition to this state when the service which created the given record is unable or unwilling to continue providing access to the underlying data item. Once in this state, a record will never transition to another state.
State transitions are propagated to clients as events. A snapshot record can only receive two events:
• Sync - This event indicates the availability of an initial "image". This event will only occur once for
a given record instance. The event triggers a transition to the Active_stale state. Record clients
will receive a processRecordSync() event.
• Inactive - This event indicates a transition to the Inactive state. Record clients will receive a
processRecordInactive() event. Snapshot clients receive this event if the record does not exist
or if the record object is deleted.
Figure 4.2 shows the various snapshot record states and state transitions.
The record chain states and their attributes are:
• notComplete (isValid = don't care, complete = False, stale = don't care, error = False): The chain has been requested. Nothing else is known about its validity or state yet.
• Complete_stale (isValid = True, complete = True, stale = True, error = False): The chain is valid and complete, but one of the chain headers is currently stale.
• Complete_notStale (isValid = True, complete = True, stale = False, error = False): The chain is valid, complete, and ready to be used.
• Error (isValid = don't care, complete = don't care, stale = don't care, error = True): Some error occurred when processing one of the chain headers. This state could occur before a chain is complete if the last valid chain header points to a record that does not exist (e.g. if the NEXT_LR field of 1#.AV.O were set to 2#.AV.O, a non-existent RIC, the chain would go into the error state). This state could also occur after the chain is complete if any of the records that constitute chain headers become inactive for any reason.
• Invalid (isValid = False, complete = False, stale = don't care, error = False): The RIC used for the chain name is a valid RIC, but it is not a chain header. The application may wish to re-request the RIC as a regular record.
State transitions are propagated to clients as events. Events that trigger state transitions are described
as follows:
• Complete - This event indicates that all of the values currently available for the chain have been
completely received. This event will only occur once for a given chain instance. The event triggers
a transition from the notComplete state to either the Complete_stale or Complete_notStale state.
The chain client will receive a processChainComplete() event.
• Error - This event indicates that some error occurred while processing one of the chain headers
(e.g. 1#.DJI). The chain client will receive a processChainError() event. This event will only
occur once for a given chain instance. If the chain’s count() is greater than zero, the current
contents of the chain can still be used. If the client continues to use the chain, it should check to
see if the chain resized.
• Stale - This event indicates a transition from the Complete_notStale state to the Complete_stale
state. The chain client will receive a processChainStale() event.
• NotStale - This event indicates a transition from the Complete_stale state to the
Complete_notStale state. The chain client will receive a processChainNotStale() event.
• Invalid - This event indicates a transition to the Invalid state. This event will occur if the requested
chain is a valid record, but it does not include all of the required fields. The chain client will receive
a processRecordInactive() event. This event will only occur once for a given chain instance.
A record chain is complete when its elements can be completely constructed. A record chain is invalid
when the provided name is an available record, but that record is not a chain (e.g. IBM.N is requested
as a chain). A record chain is in the error state if it could not be completely constructed or if one of its
internal records (records that define the chain, not the chain elements’ records) becomes inactive. For
example, if 1#.DJI were to become inactive, the chain client would receive a processChainError()
event.
Chain clients can receive additional events to indicate that the contents of the chain have changed.
• Partial - This event indicates that some elements have been added to the chain, but the chain is
not yet complete. The chain client will receive a processChainPartial() event. This event is only
received when the chain is in the notComplete state. It can be used by applications to proactively
request records listed in a chain before the chain is complete.
• Resize - This event indicates that elements have been added to or removed from the chain. The
chain client will receive a processChainResize() event. This event is only received when the
chain is in the Complete_stale or Complete_notStale state. When this event is received, the chain
client should also check to see if the chain moved between the Complete_stale and
Complete_notStale states.
NOTE: Record chain elements are always added to and removed from the end of the chain. For
example, if a value 'Y' is inserted before the last value 'Z' in the chain, the last chain element's client
will receive a ChainElementUpdate event for a change from 'Z' to 'Y'. Then the chain client will
receive a Resize event for 1 additional element. This element will contain the value 'Z'.
• UpdateComplete - This event indicates that a number of chain elements have changed but the
change did not resize. The chain client will receive a processChainUpdateComplete() event.
This event is only received when the chain is in the Complete_stale or Complete_notStale state.
When this event is received, the chain client should also check to see if the chain moved between
the Complete_stale and Complete_notStale states.
Figure 4.3 shows record chain states and events causing state transitions.
The record service states and their attributes are:
• Stale (stale = True, active = True): The service is not available on the network. This state could be because the initial connection between the SFC application and the infrastructure has not been established, information from this service has not been retrieved from the infrastructure, or the service is simply not available. Stale is a recoverable condition. Items may be requested from stale services, but they will remain stale until the service itself is no longer stale and can automatically request the items.
• Ok (stale = False, active = True): The service has been found in the market data infrastructure and is ready to handle item requests.
• Inactive (stale = don't care, active = False): This state should never occur for services on a network because the application can never know if the service will recover sometime in the future. Services representing local publishers could go into this state because the application has control over the state of the local publisher. Services also go into this state right before their objects are destroyed.
State transitions are propagated to clients as events. The events triggered by state transitions are
described as follows:
• Sync - This event indicates the service has been found and is ready to accept requests. Unlike
the record sync event, this event can occur multiple times for a given service instance. However,
it will never occur twice in a row. This event triggers a transition from the Stale state to the Ok
state. Service clients will receive a processServiceSync() event.
• Stale - This event indicates a transition from the Ok state to the Stale state. Service clients will
receive a processServiceStale() event.
• Inactive - This event indicates a transition to the Inactive state. Service clients will receive a
processServiceInactive() event and should release all references to the service.
Services will also generate informational events when appropriate. Clients will receive a
processServiceInfo() event. These events never indicate a change in the service’s state. Services
have a text() method which provides a textual explanation of the current state of the service.
Figure 4.4 shows the various service states and state transitions.
classes, i.e., specialized descendants of RTRRTRecordClient. The first, a "quote display" client is
designed to display a single record. The second, a "limit minder" client, is designed to display the trade
prices of one or more records. At run-time the application will consist of one or more quote display
clients and a limit minding client. Figure 4.5 illustrates the run-time relationships which result when the
application is displaying two records ("IBM.N" and "DEM=") and also performing limit minding on the
same records. In this case, there are two record instances and three client instances, two of type
"Quote Display" and one of type "Limit Minder". Each record has two clients, one each of type Quote
Display and type Limit Minder. The single instance of Limit Minder is a client of two records; the
instances of Quote Display are each clients of a single record.
In a typical application, these relationships are defined dynamically, i.e., on the basis of user input or
other events, meaning that clients may be dynamically added (allocated) or removed (deleted) at any
time.
The RTRRTRecord class defines three methods for "client management". They are:
void addClient(RTRRTRecordClient&)
This method registers a client with an individual record so that the given client will receive all
events subsequently generated by that record.
void dropClient(RTRRTRecordClient&)
This method un-registers a client with a particular record so that the given client will no longer
receive events from that record.
RTRBOOL hasClient(RTRRTRecordClient&)
This method is used to determine whether a given client is currently registered to receive events
from a particular record.
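As an illustration, client registration might look as follows. This is a sketch only: QuoteDisplay stands for a hypothetical specialized descendant of RTRRTRecordClient, and record for an RTRRTRecord obtained from a record service.

// Illustrative sketch only.
// RTRRTRecord& record;              // obtained from a record service
QuoteDisplay display;                // hypothetical RTRRTRecordClient descendant
record.addClient(display);           // display now receives this record's events
if ( record.hasClient(display) )     // query the current registration
{
    // the record will deliver processRecordSync(), processRecordStale(), etc.
}
record.dropClient(display);          // stop delivering events to this client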
Again, this "event" occurs prior to the individual field update events. The default implementation
does nothing, i.e., pre-update events are ignored.
virtual void processUpdateComplete(RTRRTRecord&)
This event is generated by a record after all fields affected by a tick have been modified. Clients
who wish to post-process an update event can redefine this method. The default implementation
does nothing, i.e., post-update events are ignored.
virtual void processResyncComplete(RTRRTRecord&)
This event is generated by a record after all fields affected by a tick have been modified. Clients
who wish to post-process a re-sync event can redefine this method. The default implementation
invokes processUpdateComplete().
RTRRTField ("BID")
RTRRTField ("ASK")
RTRRTRecordClient RTRRTFieldClient
RTRRecord (DEM=)
processTick()
processUpdate()
processFieldUpdate()
processFieldUpdate()
processUpdateComplete()
4.1.3 Implementation
Currently, the SFC provides two implementations of the real-time and snapshot record models. While
most application components should refer only to the base classes mentioned in the preceding
section, the "main" routine or initialization section of an application must instantiate one or more
implementation specific classes.
SFC includes implementation classes for use by both single (simple) and multiple service (more
complex) applications. These classes encapsulate the procedures for creating an SSL connection and
the record services which use that connection. They are:
Table 4.5: Real-time Record Classes vs. Snapshot Record Classes (Continued)
Real-time                  Snapshot
RTRRTRecord                RTRSnapshotRecord
RTRRTRecordClient          RTRSnapshotRecordClient
RTRRTRecordIterator        RTRSnapshotRecordIterator
RTRRTField                 RTRSnapshotField
4.1.5 Examples
SFC includes several examples which highlight different parts of the record model. All of these
examples can be found in the sfc/examples/realtime/ directory, with the exception of Quote, which is
only available for Windows. The rest of the examples are very similar. They typically have two parts:
1. main() function that:
• parses command line arguments
• creates a service pool factory or record service
• requests a record, field or chain
• creates the appropriate SFC client and registers it with the market data item
• creates the event control loop
2. SFC client implementation that is responsible for
• processing data and status events
• printing output to standard out
The following table summarizes the record examples and details where more information about them
can be found.
class RecordSnapshotClient :
public RTRSnapshotRecordClient
{
public:
// Constructor
RecordSnapshotClient(RTRSnapshotRecordService& service, const char *symbol);
// Destructor
~RecordSnapshotClient();
void processSnapshotSync(RTRSnapshotRecord&);
// This method invoked when there is an initial image for the
// record. (It can only happen once.)
void processSnapshotInactive(RTRSnapshotRecord&);
// This method invoked when the record has made the transition to
// the inactive state.
protected:
// Implementation attributes
RTRSnapshotRecord *_record;
};
#endif
#include "recsnapclient.h"
// Constructor
RecordSnapshotClient::RecordSnapshotClient(
RTRSnapshotRecordService& service, const char *symbol)
: _record(0)
{
// 1
_record = &(service.snapshotRecord(symbol));
_record->addClient(*this);
if ( _record->active() )
{
if ( _record->hasData() )
processSnapshotSync(*_record);
else
processSnapshotInfo(*_record);
}
else
{
// 2
cout << symbol << " - Inactive: " << _record->text() << endl;
_record->dropClient(*this);
_record = 0;
}
}
// Destructor
RecordSnapshotClient::~RecordSnapshotClient()
{
// 3
if ( _record )
_record->dropClient(*this);
}
// 6
RTRSnapshotRecordIterator iterator = record.iterator();
for (iterator.start(); !iterator.off(); iterator.forth())
{
// Print the name and value of each field
RTRSnapshotField& field = iterator.field();
cout << field.name() << " " << field.string() << endl;
}
// Clean up and terminate the application
// 7
record.dropClient(*this);
_record = 0;
RTREventNotifierInit::notifier->disable();
}
// record.
//
// 6 Records provide both sequential and "random" access to their constituent
// fields.
//
// 7 This client terminates the application once the record sync event
// is received. This is not typical behaviour for record clients.
#include "recsnapclient.h"
#include "rtr/selectni.h"// RTRSelectNotifier
#include "rtr/tibrtsvc.h"
#include "rtr/sslrtrs.h"
RTRCmdLine RTRCmdLine::cmdLine;
RTRRTRecordService *service = 0;
if ( !service->active() )
{
cerr << "Service error: " << service->text() << endl;
delete (service);
return -1;
}
delete service;
return 0;
}
• RecordSync
• RecordInfo
• RecordInactive
• RecordStale
• RecordNotStale
RecordUpdateClient also processes this event by overriding the default behavior (which does nothing):
• UpdateComplete
#ifndef _recupdclient_h
#define _recupdclient_h
#include "rtr/rtrec.h"// Defines RTRRTRecordClient
class RecordUpdateClient :
public RTRRTRecordClient
{
public:
// Constructor
RecordUpdateClient(RTRRTRecordService& srvc, const char *symbol);
// Destructor
~RecordUpdateClient();
void processRecordSync(RTRRTRecord&);
// This method invoked when there is an initial image for the
// record. (It can only happen once.)
void processRecordStale(RTRRTRecord&);
// This method invoked when the record has made the transition to
// the stale state.
void processRecordNotStale(RTRRTRecord&);
// This method invoked when the record has made the transition out
// of the stale state.
void processRecordInactive(RTRRTRecord&);
// This method invoked when the record has made the transition to
// the inactive state.
void processUpdateComplete(RTRRTRecord&);
// This method invoked when the record has completed an update.
protected:
// Implementation attributes
RTRRTRecord *_record;
};
#endif
This procedure should be used in all record client classes. It is especially important that clients do not
assume that they are the "first" or only client to access a given record. This is because the record may,
for example, already be in the "hasData" state, in which case it will never again generate the Sync
event. In general it should be assumed that a record instance may already be in use by some other
client within the application.
Note that while the processRecordSync() and processRecordNotStale() methods use an instance
of RTRRTRecordIterator to access all fields in the record, the processUpdateComplete() method
uses an instance of RTRRTRecordUpdateIterator which provides access to only those fields which
have been updated by the most recent update event.
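As an illustration, a processUpdateComplete() redefinition might walk only the changed fields as follows. This is a sketch only: MyRecordClient is a hypothetical RTRRTRecordClient descendant and the updateIterator() accessor is an assumption, while the iteration protocol mirrors the record iterators shown elsewhere in this chapter.

// Illustrative sketch only; the updateIterator() accessor name is assumed.
void MyRecordClient::processUpdateComplete(RTRRTRecord& record)
{
    RTRRTRecordUpdateIterator iterator = record.updateIterator();
    for (iterator.start(); !iterator.off(); iterator.forth())
    {
        RTRRTField& field = iterator.field();
        cout << field.name() << " " << field.string() << endl;  // print only changed fields
    }
}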
//
// This file contains the implementation of RecordUpdateClient
//
#include "recupdclient.h"
// 1
RecordUpdateClient::RecordUpdateClient(
RTRRTRecordService& service, const char *symbol)
: _record(0)
{
// Obtain a record reference, register with the record and check
// its state.
_record = &(service.rtRecord(symbol));
_record->addClient(*this);
if ( _record->active() )
{
if ( _record->hasData() )
processRecordSync(*_record);
else
processRecordInfo(*_record);
}
else
{
cout << symbol << " - Inactive: " << _record->text() << endl;
_record->dropClient(*this);
_record = 0;
}
}
RecordUpdateClient::~RecordUpdateClient()
{
if ( _record )
_record->dropClient(*this);
}
}
}
class TickerMonitor :
public RTRRTRecordServicePoolClient
{
public:
// Constructor
TickerMonitor(
const RTRObjectId& context,
const char *name,
RTRRTRecordServicePool& pool);
// Destructor
~TickerMonitor();
void processRTRecordServiceAdd(
RTRRTRecordServicePool&,
RTRRTRecordService&);
protected:
// Implementation attributes
const RTRObjectId _instanceId;
RTRString _serviceName;
RTRRTRecordServicePool& _pool;
int _tickCount;
TickerClientPtr *_tickStore;
};
#endif
#include "tickmon.h"
#include "tickclient.h"
// Constructor
TickerMonitor::TickerMonitor(
const RTRObjectId& context,
const char *name,
RTRRTRecordServicePool& pool)
: _instanceId(context, name), _pool(pool),
_tickCount(0), _tickStore(0)
{
// Event processing
// 3
void TickerMonitor::processRTRecordServiceAdd(
RTRRTRecordServicePool&,
RTRRTRecordService& service)
{
if (_serviceName == service.name())
{
// 4
// Get a list of symbols to retrieve
RTRListOfExternalValue ricList = RTRConfig::configDb().variable(
_classId,
_instanceId,
"symbol_list",
"RTRSY.O").list(‘,’);
// 5
_pool.dropClient(*this);
}
}
// 6
RTRObjectId TickerMonitor::_classId("TickerMonitor");
//
// 6 One way to define a class identifier common to all instances of a
// class is to use the static data member construct.
[Figure 4.9: class relationships among RTRRecordChainClient, RTRRecordChainElementClient, RTRRTRecordClient, RTRRTFieldClient, ChainUpdChainClient, ChainUpdRecordClient, and ChainElementRecordClient.]
4.1.10.1 ChainElementRecordClient
Figure 4.9 includes one class that has not been mentioned yet, ChainElementRecordClient.
ChainElementRecordClient merges implementations of a RTRRecordChainElementClient and a
RTRRTRecordClient.
The elements in statistical chains, like .AV.O and .NG.L, often change. Remember that chain elements
are fixed to a position, not a record symbol. So when statistical chains re-sort or resize, the
RTRRecordChainElementClients have to react. Actions are required when:
• the chain shrinks, causing some elements to be removed
• chain elements update, causing some elements to point to new records
In ChainElementRecordClient, these events result in:
• dropping all references so the client’s element and record can safely be deleted
4.2.1 Overview
Section 4.1, Record Subscription, described the basic characteristics of real-time record data and how
application components subscribe to that data. These real-time record characteristics remain the same
for publishing and this section assumes that the Record Subscription section has already been read.
The main difference between the models for record publication and subscription is that the publication
model adds new methods to allow application components to "write” to the record. Specifically, the
publication model allows applications to:
• control the contents of a record service
• determine the contents of a service’s constituent real-time records
• update the data of real-time records
• change the state of the service and records
• propagate data and state change events to other interested application components at the record,
field and service level
• provide entitlement data and associated events to other interested application components for
each record
NOTE: This model is used to publish records which represent market data instruments.
The data provider is the application component that interacts with the record publication model. The
application developer creates the data provider component.
capability to interpret the stored field string value according to the rules associated with its particular
type.
A field definition database (fidDb) defines the set of all possible fields that may be added to a published
record. A field definition defines the semantic meaning of a record field, including the name, numeric
identifier (FID), type and maximum data length of a field. Every field in a record is provided a field
definition when the field is created.
A published record may change state depending on the ability of the data provider to supply up-to-date
record data. The published record data may also change as updates are made to the data. Each of
these changes results in events being propagated to downstream client components.
NOTE: A record propagates both state change events and data change events. State changes
reflect the overall state of the record and all constituent fields. Data change events reflect updates to
some or all constituent fields in a record.
In the SFC model, a RTRRTRecordImpl class represents a published real-time record. This class
provides public methods that are used by application components to populate a real-time record with
data, to update the record’s data and to propagate the state and data change events to other
interested application components. Instances of RTRRTRecordImpl are obtained from a service of
type RTRRTRecordServiceImpl.
The main benefits of the RTRRTRecordImpl class are:
• it implements the basic data caching and event propagation code for a real-time record (thus the
name RTRRTRecordImpl)
• it makes it easy to control how the record functions
• it provides a standardized interface for publishing record data to different downstream
components and architectures (e.g. Triarch and RMDS networks)
• it allows application components to stay focused on the task of determining how the record data is
obtained and updated
4.2.2 Design
NOTE: The SFC record publication service implementations always cache the item data that is
being published. Requests from downstream clients for items already cached in the record publication
service will be satisfied from that service without notifying the data provider code.
Records can be added to the service at any time. However, if the publishing service only supports a
source-driven cache (like the TIB implementation), then it is best to load all the records into the
publishing service at application start-up to ensure that these records will be there when the service
becomes available to downstream client components and end-users.
Source-driven mode
A source-driven, or non-interactive, publisher determines the entire contents of the publishing service
cache on its own. Input is not accepted from downstream components.
NOTE: After the publisher indicates that the service is sync, if a client requests a record that has not
been sent out to the Source Distributor by the publisher, that client will see the record status as
inactive. As a result, the client has to re-request the record.
Sink-driven mode
A sink-driven, or interactive, publisher determines the entire contents of the publishing service cache
based on requests from downstream components.
The publishing service can also create new published records itself based on demand from
downstream components. Note that not all publishing service implementations necessarily support this
capability. The data provider component must register with the publishing service to receive an event
whenever a record not currently cached in the publishing service is requested by a downstream
component. Only a descendant of RTRRTRecordServiceImplClient can register with the service via
the RTRRTRecordServiceImpl::setClient() method. Note that there is one and only one event client
for a RTRRTRecordServiceImpl instance.
When a new RTRRTRecordImpl instance is created due to user demand, the
RTRRTRecordServiceImplClient::processNewRecord() method is invoked by the service. This
method is implemented in the data provider code. The data provider may choose to allow or disallow
the record in the service’s cache. This decision may be made during the event call-back or at a later
time if the data provider needs to query an upstream component before deciding.
Initially, a new published record is in the Active_stale state. If the data provider determines the record
is valid, it will populate the record with fields and update the record’s state and data as needed (see
section 4.2.2.5).
If the data provider determines that the record is not valid for some reason (e.g. not a valid instrument
or instrument is no longer supported), the data provider sets the record state to Inactive, using the
RTRRTRecordImpl::setInactive() and notifies downstream components of the record state change
using RTRRTRecordImpl::indicateInactive(). If the data provider wishes to provide the record for
that symbol later on, it must first remove the record from the publishing service via the
RTRRTRecordServiceImpl::removeRTRecordImpl() method. Then it can create a new record and
add it to the service.
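As an illustration, a data provider that rejects an unknown symbol might handle the new-record event as follows. This is a sketch only: MyDataProvider is a hypothetical descendant of RTRRTRecordServiceImplClient, isKnownInstrument() is hypothetical application code, and the processNewRecord() parameter list and removeRTRecordImpl() argument shown here are assumptions.

// Illustrative sketch only; see the note above about assumed signatures.
void MyDataProvider::processNewRecord(
    RTRRTRecordServiceImpl& service, RTRRTRecordImpl& record)
{
    if ( isKnownInstrument(record) )        // hypothetical application check
    {
        // allow the record: populate it with fields, then indicate sync when ready
    }
    else
    {
        record.setInactive();               // mark the record as invalid
        record.indicateInactive();          // notify downstream components
        service.removeRTRecordImpl(record); // required before re-publishing the symbol later
    }
}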
Mixed mode
An application can also be designed to load items into the publishing service cache and accept
requests from downstream components for new records as well. To do this, the application simply uses
both of the methods described above.
Note that certain implementations do not support the sink-driven mode. In particular, the TIB
implementation will only allow the source-driven mode to be utilized. This means that the typical
application using TIB will need to publish all of its records during initialization so they will be available
when the application goes live on the RMDS network on a TIBCO Rendezvous Distribution Layer.
Note that your application can still be designed to use the sink-driven mode, but when used with the
TIB implementation, it will never receive requests from downstream components.
[Figure: published real-time record states and state transitions (Sync, Resync, Stale, NotStale, Inactive).]
The published record states and their attributes are:
• Active_noData (hasData = False, stale = True, active = True)
• Active_valid (hasData = True, stale = False, active = True)
• Active_stale (hasData = True, stale = True, active = True)
• Inactive (hasData = don't care, stale = don't care, active = False)
NOTE: A transition to the Inactive state is a permanent condition. It is meant to indicate that this
record will not be available anymore. If a record is temporarily unavailable, it should be put into the
Stale state, not the Inactive state.
Records have a text() method which provides a textual explanation of the current record state. Record
clients are kept informed of progress by means of events which indicate that there is new textual data
available concerning the state of the record.
A record will also generate informational events when appropriate. The record’s text() should be
changed to describe the informational event. This event is used to pass on information that may be of
interest to downstream client components. This might include information regarding progress in
initializing the record data or in recovering record data. These events never indicate a change in
record state. The RTRRTRecordImpl::indicateInfo() method is called to send this event.
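As an illustration, an informational event might be sent as follows (a sketch only; the setText() argument form is an assumption, and publishedRec stands for the RTRRTRecordImpl being published):

// Illustrative sketch only.
// RTRRTRecordImpl& publishedRec;
publishedRec.setText("Recovering data from the upstream feed");
publishedRec.indicateInfo();   // informational only; the record state does not change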
See section 4.1.1.7 on Record Clients for information on how events are typically handled in
downstream client components.
NOTE: Fields can be added to records at any time. Downstream clients will only receive these new
fields if the publisher calls either RTRRTRecordImpl::indicateSync() or
RTRRTRecordImpl::indicateResyncComplete(RTRRTFieldUpdateList& fldList). On the client side,
an added field will not be contained in the RTRRTRecordUpdateIterator and will only show up in the
RTRRTRecordIterator during the client's RTRRTRecordClient::processResyncComplete() callback.
Adding fields after a record transitions out of the Active_noData state is not supported when record
templates are used. Adding fields may work if the added field is in the template; however, if the field
is not in the template, the RTIC will not cache the field.
• A RTRFidDefinition instance that properly represents that field. This means that the
RTRFidDefinition::type() must match the type of field being created as indicated in the following
table:
RTRFidDefinition type          Field class
TimeSecs, Time, DateTime       RTRRTDateTimeField
Integer                        RTRRTIntegerField
Numeric                        RTRRTNumericField
Price                          RTRRTPriceField
Enumerated                     RTRRTEnumeratedField
Binary                         RTRRTField
The FID definition is obtained from the service that created the published record
(RTRRTRecordServiceImpl::fidDb()).
• A character array that will be used by the field to store data. Both the character array and the
length of the array are passed.
• The length of the allocated memory. This should be the maximum length defined by the FID
definition (i.e. RTRFidDefinition::length()) plus 1 byte to accommodate an end-of-field delimiter
character in the field data. Note that this is not the size of the data currently found in the buffer, but
the maximum size of the buffer.
NOTE: When allocating memory for the field, you must allocate 1 extra byte to accommodate an
end-of-field delimiter; e.g. "char* buf = new char[fidDef->length() + 1]". If the correct length is not
supplied, the field may not include all of the data.
After allocating the field, the initial data value is set using the RTRRTField::set() method.
After the record is fully populated with fields, the state of the record should be changed to either
Active_stale or Active_valid. The record data can be either stale or up-to-date when a Sync event is
propagated. The RTRRTRecordImpl::setNotStale() method is used to change the record data state
from stale to notStale. This must be done prior to sending the Sync event. Also, the
RTRRTRecordImpl::setText() method may be used to reset the descriptive text associated with the
record.
The RTRRTRecordImpl::indicateSync() method is called to propagate the Sync event to
downstream client components. The RTRRTRecordImpl::hasData() record state variable will
automatically be transitioned to True on this call.
NOTE: A transition from the Active_noData state occurs only once during the lifetime of the record.
Future re-synchronization of the record is indicated by sending Resync and ResyncComplete
events (see section 4.2.2.8).
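As an illustration, once a new record has been fully populated, the initial image might be propagated as follows (a sketch only; the setText() argument form is an assumption, and publishedRec stands for the RTRRTRecordImpl being published):

// Illustrative sketch only.
// RTRRTRecordImpl& publishedRec;
publishedRec.setNotStale();   // the data is up to date, so leave the stale state
publishedRec.setText("OK");   // optional descriptive text (argument form assumed)
publishedRec.indicateSync();  // propagate the Sync event; hasData() becomes True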
The data provider then updates the fields that have changed. Each field that is to be updated is
obtained from the record using one of the record’s field accessor methods. Fields are then updated by
calling the RTRRTField::set() method and providing the new data for the field.
NOTE: If a length value of 0 is given to the RTRRTField::set() method, the field data will be cleared.
If the length value given to the RTRRTField::set() method is greater than the maximum length pro-
vided to the field at construction, the provided data will be truncated at the maximum length of the
field.
After the appropriate fields have been updated, the indicateUpdateComplete() method is called. This
method takes a RTRRTFieldUpdateList reference which is used as a container for the fields from this
record that have been updated. The RTRRTFieldUpdateList class provides methods to insert fields by
name, by FID or by reference. Upon completion of the indicateUpdateComplete() method, all of the
record’s registered event clients have been notified of the updates to the fields passed in the field
update list.
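The update sequence described above can be sketched as follows. This is illustrative only: the RTRRTField::set() argument form and the RTRRTFieldUpdateList insertion call are assumptions (the text states only that fields may be inserted by name, by FID or by reference), and publishedRec and bidField stand for a previously obtained RTRRTRecordImpl and RTRRTField.

// Illustrative sketch only; see the note above about assumed signatures.
// RTRRTRecordImpl& publishedRec;
// RTRRTField* bidField;             // obtained earlier via one of the record's field accessors
bidField->set("29.25", 5);           // new field data and its length (assumed argument form)
RTRRTFieldUpdateList updatedFields;
updatedFields.addField(*bidField);   // hypothetical insert-by-reference method
publishedRec.indicateUpdateComplete(updatedFields);  // notify registered event clients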
Note that the transition from Active to Inactive represents a permanent state change and that the
record is no longer valid. No more data or state changes may occur to the record once it has
transitioned to the Inactive state. Typically, the record is removed from the service (using the service’s
removeRTRecordImpl() method) after the Inactive event is propagated to downstream client
components.
There are two discrete events associated with refreshing record fields. The first event indicates that a
resync is about to occur and zero or more fields will be updated in the record. The method
RTRRTRecordImpl::indicateResync() is called to send a Resync event. The second event indicates
that the fields associated with the refresh have been updated and a list of the changed fields is
available. The RTRRTRecordImpl::indicateResyncComplete() is called to send a ResyncComplete
event.
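As an illustration, a refresh might be bracketed by the two resync indications as follows (again, the set() argument form and the update-list insertion call are assumptions):

// Illustrative sketch only.
publishedRec.indicateResync();         // a resync is about to occur
bidField->set("29.50", 5);             // refresh whichever fields require it
RTRRTFieldUpdateList refreshedFields;
refreshedFields.addField(*bidField);   // hypothetical insertion method
publishedRec.indicateResyncComplete(refreshedFields);  // changed fields now available to clients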
Sequential access is achieved by obtaining an iterator from the service with the
RTRRTRecordServiceImpl::iterator() method. An instance of RTRRTRecordServiceImplIterator
provides access to all the published records within a service.
• Inactive (stale = don't care, active = False): The service is being deactivated. All constituent items in this service must also be changed to the Inactive state. Users across a Triarch network will be notified that the service is Stale (not Inactive, since other services on the network with the same name may be available to handle user requests). Users on an RMDS network on a TIBCO Rendezvous Distribution Layer will receive individual inactives. Users within the same process must drop all references to this service as this is a non-recoverable condition.
The data provider component is responsible for determining the current state of the publishing service.
State transitions are propagated to clients as events generated by the data provider. The events
triggered by state transitions are described as follows:
• Sync - This event indicates that the service is ready to accept requests and will recover data for
any items currently cached in the service. Unlike the record sync event, this event can occur
multiple times for a given service instance. However, it will never occur twice in a row. This event
triggers a transition from the Stale state to the Ok state. Triarch and in-process service clients will
receive a processServiceSync() event. This also allows the publisher to publish to the network.
• Stale - This event indicates a transition from the Ok state to the Stale state. Triarch and in-process
service clients will receive a processServiceStale() event. Publishing to the network is disabled
(the network connection is still up). The publisher can still publish data; however, the data will only go
to the publisher's cache. Also, any inserts in the cache are N-ACKed and deleted.
• Inactive - This event indicates a transition to the Inactive state. In-process service clients will
receive a processServiceInactive() event and should release all references to the service, but
this inactive event will not be propagated across the network. Consumers will receive stale
events.
Two methods are available for sending group messages efficiently to downstream clients.
• GroupStale - This event indicates Stale for all records in a particular group. This event can be
used to efficiently send stale notifications to all records of a group. The indicateGroupStale()
method is called to send this event. Unlike a service stale, a GroupStale still allows the publisher
to publish to the network. However, all the items published to the network are stale.
• GroupNotStale - This event indicates NotStale for all records in a particular group. This event can
be used to efficiently send NotStale notifications to all records of a group. The
indicateGroupNotStale() method is called to send this event.
Services will also generate informational events when appropriate. Triarch and in-process service
clients will receive a processServiceInfo() event. These events never indicate a change in the
service’s state. Services have a text() method which provides a textual explanation of the current state
of the service.
Figure 4.11 shows the various service states and state transitions.
NOTE: Some DACS specific configuration must occur in order for respective entitlement data to be
published properly. See section 5.3 Entitlements for implementation specific details.
Overview
The RTRRTRecordImpl class provides methods to set entitlement data for each record and to
propagate an event indicating the existence of the entitlement data. The
RTRRTRecordImpl::setEntitlementData() method is used to set new entitlement data. The
entitlement data is passed in the form of an RTREntitlementData object. After setting the entitlement
data, the RTRRTRecordImpl::indicateEntitlementData() is used to propagate an EntitlementData
event to downstream components. This event indicates that entitlement data or updated entitlement
data is available for the record.
The RTREntitlementData class encapsulates a buffer containing the entitlement data and a format
attribute which identifies the format in which the entitlement data is encoded. A pre-defined format
(RTREntitlementData::DacsAccessLockFormat) is provided for using the DACS entitlement
system, the standard entitlement system for RMDS. The purpose of the format attribute is to allow
different entitlement data formats to be provided in the future.
The following code segment shows how entitlement data is created with a DACS access lock, then
published to a record. This code segment assumes that a DACS access lock has already been created
and that the appropriate RTRRTRecordImpl instance has been obtained:
#include "rtr/rtrecimp.h"
// int lockLength;
// unsigned char* lockPtr;
// RTRRTRecordImpl& publishedRec;
RTREntitlementData *edata = 0;
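The remainder of this segment might continue as sketched below. This continuation is illustrative only: the RTREntitlementData constructor arguments and the setEntitlementData() parameter form are assumptions based on the surrounding description, not on the class reference.

// Sketch of a possible continuation (assumed constructor and parameter forms).
edata = new RTREntitlementData(lockPtr, lockLength,
    RTREntitlementData::DacsAccessLockFormat);
publishedRec.setEntitlementData(*edata);  // store the new entitlement data
publishedRec.indicateEntitlementData();   // propagate the EntitlementData event downstream
delete edata;  // assumes the record keeps its own copy of the entitlement data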
Overview
Data provider components that are interactive typically find it useful to know when interest in a given
record has been added or dropped. One possible use for these events would be to remove records
that have not been actively monitored for some period of time.
To receive interest add and drop events from RTRRTRecordImpl, the data provider component
registers with the published record. The registered published record client will then receive events
whenever the first client adds interest in the record and when the last client drops interest in the record.
NOTE: Not all implementations of RTRRTRecordImpl support interactive, sink-driven requests for
items. Therefore, registered data provider components may or may not receive these notifications
depending on the particular implementation of RTRRTRecordImpl being used. See
The typical use of this capability is to better manage memory and CPU resources by removing records
that are not being used by downstream clients. Note that the actual number of downstream clients
using the item is not available to the data provider component; this information may not be available to
the published record as it may be publishing to a distributed network.
have registered with a record for the purpose of receiving record data and state change events.
Record implementations have their own event clients that are notified whenever the number of
consumers monitoring the record transitions from zero to one or from one to zero. There may be
multiple clients of a record implementation.
• RTRRTRecordImplClient - This is the abstract base class for application components which
wish to register with one or more real-time record implementations in order to receive data and
state change events from those records.
• RTRRTRecordImplIterator - Instances of this class provide sequential access to all the
constituent fields within a given record implementation.
• RTRRTFieldUpdateList - Instances of this class are used during update events to provide
sequential access to those fields within a real-time record implementation that have been
modified.
• RTRRTField - This class defines the base type for the constituent parts of a record. Fields are
identified by name or FID, have a specific type, and provide various forms of access to the
underlying data which they represent.
• RTRRTAlphanumericField, RTRRTDateField, RTRRTEnumeratedField, RTRRTIntegerField,
RTRRTPriceField - These descendants of RTRRTField provide type-specific interpretation of the
underlying data. These instances are created by the data provider and added into a record
implementation instance.
• RTRRTRecordServiceImpl - This class provides methods used by application components to
control the creation/deletion of record implementations, to manipulate the state of the service, and
to propagate state change events to interested service consumers. A RTRRTRecordServiceImpl
implements all the caching and memory management of record implementations.
• RTRRTRecordServiceImplClient - This is the abstract base class for components that need to
register with one or more instances of record service implementation in order to receive state
change events whenever a new record is added to the service due to service consumer demand.
• RTRRTRecordServiceImplIterator - Instances of this class provide sequential access to all the
constituent published records within a given publishing service implementation.
• RTRFidDb - This class represents the database of field definitions used by a particular service.
• RTRFidDbClient - This is the abstract base class for components that need to register with a
RTRFidDb to receive events when the database has completed initialization or when an error has
occurred while initializing the database.
• RTRFidDefinition - This class represents the definition for an individual field. A definition
comprises the identification (name and FID) of a field and the type. The type determines the way
in which the raw data should be interpreted; the identity defines the meaning of the field.
• RTREntitlementData - This class represents entitlement data which is used to entitle users in
downstream components. The class encapsulates a buffer of entitlement data and a format
attribute identifying the encoding format for the data.
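The following condensed sketch shows how several of these classes cooperate when a data provider publishes an update. It is distilled from the Simulator example later in this section; the field name "BID" and the values are illustrative only.
// "record" is an RTRRTRecordImpl obtained from a record service
// implementation.
RTRRTFieldUpdateList updList(record);
record.indicateUpdateTick(); // an update is about to be applied
RTRRTField *fld = record.fieldByName("BID");
if (fld)
{
fld->set("10.5", 4); // new raw value and its length
fld->indicateFieldUpdated();
updList.putField(*fld); // remember which field changed
}
record.indicateUpdateComplete(updList); // one event for the whole update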
4.2.4 Implementation
Currently, the SFC provides three implementations of the record publication classes—an SSL
publisher, a TIB publisher, and an in-process publisher. The names of these publishing service
implementations reflect how data is published into the service and how it is published out to
downstream client components. Specifically:
• RTRRTFieldToSSLRecordService - publishes records to the SSL infrastructure, to the RMDS
on the Market Data Hub, or to a directly connecting SSL-based application.
• RTRRTFieldToTIBRecordService - publishes records to the TIB infrastructure or to the RMDS
on the Rendezvous Distribution Layer.
• RTRRTFieldToFieldRecordService - publishes records to other in-process components as a
RTRRTRecordService service type.
The following table provides a comparison of the main features of the different publishing service
implementations.
Client connectivity
Client applications that can connect to this service include any user application using SSL 4.X or
higher libraries and infrastructure components that support connecting into a source application, like
the 4.1.X and higher versions of the Source Distributor.
In contrast to earlier versions, this publishing service implementation allows client applications to
connect directly to the publishing application. The RTRSSLConnectionServer opens a well-known port
("triarch_sink” by default) that client applications connect to using SSL 4.X or higher protocol. From the
perspective of the client application, the SSL publishing service looks like a Sink Distributor.
NOTE: SFC-based client applications can connect directly to an SFC publishing application. This is
new in the 4.1 release.
Multiple instances of publishing services may use the same connection server to allow client
applications to access all services over the same connection. Several different constructors are
available to provide programmers flexibility in choosing which services publish to which port.
The RTRSSLConnectionServer class manages the well-known port that downstream applications use
to establish communications with SSL publishing services. This class is used by SSL publishing
services to find out when new downstream connections have been established so that the publishing
services can be made available on the channel.
For applications publishing a single service, an instance of RTRSSLConnectionServer is typically
created automatically by the publishing service (based on which service constructor is used).
Alternatively, application components can create an instance of this class and pass it to one or more
publishing services. This is typically done when the application is publishing multiple services and
each service needs to share the same downstream connection.
Another reason to create and pass an instance of RTRSSLConnectionServer to publishing services is
if multiple well-known ports will be used on a given machine to publish services. In this case a different
instance of RTRSSLConnectionServer would be created for each publishing service and given
different well-known ports.
By default, RTRSSLConnectionServer uses the well-known port name "triarch_sink." This is the default port used by
the Source Distributor (version 4.1 and later) and by SSL and SFC-based end user applications to
access market data.
If the connection server is unable to open the well-known port, it will continue retrying the port at 5-
second intervals and will log a message. This condition would typically occur if another application
already has the port open.
Application structure
Typically, an application is structured so that an instance of RTRRTFieldToSSLRecordService is
created during initialization (the program’s "main” for instance) and then passed to application
components as a RTRRTRecordServiceImpl. This allows the data provider code to focus only on its
given task, publishing data via the RTRRTRecordServiceImpl instance. This also hides the
implementation type from the vast majority of your application code, allowing your application to easily
switch between publisher implementations.
After construction, this service will be in the Active_stale state. It is the responsibility of the
programmer's application code to determine when the service should transition to Active_valid. As
client applications connect to the service, an indication of the state of the service will be sent to the
client application.
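The sketch below illustrates this structure using the SSL implementation. MyProvider is a hypothetical application component, and appId, pubServiceName, and sslfdb are assumed to have been created as in the Simulator example later in this section.
// A data provider written against the abstract publishing interface.
class MyProvider
{
public:
MyProvider(RTRRTRecordServiceImpl& svc) : _svc(svc) {}
void start()
{
_svc.setText("Ready!");
_svc.setNotStale(); // transition the service to Active_valid
_svc.indicateSync(); // propagate the state change to consumers
}
protected:
RTRRTRecordServiceImpl& _svc;
};
// In main(): create the SSL implementation and hand it to the provider
// as the abstract RTRRTRecordServiceImpl type.
RTRRTFieldToSSLRecordService *pubService =
new RTRRTFieldToSSLRecordService(appId, pubServiceName, *sslfdb);
MyProvider provider(*pubService);
provider.start();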
The default subject prefix is "_TIC."; therefore, the SFC publisher would send the subject
"_TIC.A.B.C.D" when publishing the subject "A.B.C.D" to RTIC.
TIB connectivity
The TIB publishing service will communicate with the RMDS network via the Rendezvous Daemon
(RVD). The connection to the RVD is created and handled by the RTRTIBConnection class. The TIB
connection can be shared with multiple TIB publishing and subscription services.
By default, the TIB publishing service will create a TIB connection using default parameters. The data
provider application may override the default values for these parameters or may create a
RTRTIBConnection and pass it into the service at construction.
An application may have multiple instances of publishing services use the same RVD connection by
creating a single RTRTIBConnection and passing it into each RTRRTFieldToTIBRecordService
instance at construction. Several different constructors are available to provide programmers flexibility
in choosing which services publish through which RVD.
See section 4.8.1 and see the SFC Reference Manual for more information on RTRTIBConnection.
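For example, two TIB publishing services could share a single connection as sketched below. The constructor arguments mirror those used in the Examples section; the service names are illustrative, and servicePort, network, and daemon are assumed to come from configuration.
// One RVD connection shared by two TIB publishing services.
RTRTIBConnection *connection =
new RTRTIBConnection(appId,
"tibconnection",
servicePort,
network,
daemon);
connection->connect();
RTRRTFieldToTIBRecordService *serviceA =
new RTRRTFieldToTIBRecordService(appId, "IDN_A", *connection);
RTRRTFieldToTIBRecordService *serviceB =
new RTRRTFieldToTIBRecordService(appId, "IDN_B", *connection);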
Application structure
Typically, an application is structured so that an instance of RTRRTFieldToTIBRecordService is
created during initialization (the program’s "main” for instance) and then passed to application
components as a RTRRTRecordServiceImpl. This allows the data provider code to focus only on its
given task, publishing data via the RTRRTRecordServiceImpl instance.
After construction, this service will be in the Active_stale state. It is the responsibility of the
programmer’s application code to determine when the service should transition to Active_valid and
when it will start publishing records.
4.2.5 Examples
Several sample programs have been provided to show different ways in which a record publisher
application can be constructed.
Figure 4.12 provides a component view of the various interfaces and implementations. Note that the
Data Provider and Data Consumer components are written to the abstract interfaces, then are
"plugged in” with a specific implementation of the abstract interface.
While most application components should refer only to the base classes mentioned in the preceding
section, the "main” routine or initialization section of an application must instantiate one or more
implementation specific classes. Please refer to the alphabetical reference section and the relevant
example programs for more details concerning these classes.
Table 4.10 provides a brief overview of the example applications provided with the software
distribution. A few representative samples of these programs are detailed in the sections following this
table.
You are encouraged to study the examples as a guide to using the SFC record publication classes.
Figure 4.12 Data flow through the components: a Data Provider component and a Data Consumer component are written to the abstract real-time record interfaces and are plugged in with a specific field implementation, either in-process (real-time to real-time within one application) or as networked solutions.
Table 4.10 Record publication example programs

Example Program: Simulator
Refers To: simulator.C (main), simrec.C, simsrvc.C, fcsimsrvc.C (section 4.2.6)
Description: Simple application that publishes canned record data based on user requests. This example shows how to:
• create specific field types
• properly initialize publishing services and records
• optionally set template numbers
• change state and propagate state change events
• update record data and propagate data change events
• publish as a sink-driven (interactive) or source-driven (non-interactive) service
• create necessary components to publish to TIB, to SSL, or within an application process

Example Program: Gateway
Refers To: gateway.C (main), gatesvc.C, sinkgates.C, srcgates.C, gaterec.C (section 4.2.7)
Description: An application that obtains record data via the subscription SFC interfaces and re-publishes the data using the publisher SFC interfaces. This example shows the same capabilities as the Simulator example, plus how to:
• publish data from an asynchronous data source (an SFC RTRecordService in this case)
• handle differences in field definitions between subscription and publication services
• design a bridging or value-added gateway application using SFC
4.2.6 The Simulator Example
This section describes the design and implementation of a new class or "component" and shows how
to use that class in an application.
4.2.6.1 Requirements
The application must respond to requests from downstream clients for canned record data and
subsequent update and state change notifications. The main purpose of the program is to show the
basics of how to set up a publishing service and use the various methods from the publishing classes.
This application also serves to provide all the different types of events associated with a real-time
record publisher.
This application can be used as a starting point for most user applications.
NOTE: This type of sink-driven, or interactive, service model will not work with the TIB
implementation because new requests will not be forwarded to the publisher on an RMDS (RTIC)
system. Rather, requests are always made to an RTIC process on the RMDS network. To work on
RTIC, the simulated service must pre-load records into the publishing service. See the
FullCacheSimulatedService class in section 4.2.6.5 for details.
Before handling new records, the simulated service makes sure that the FID database is completely
populated because the FID database’s definitions are needed for creating new record fields. To do this,
the service checks the state of the FID database. If the FID database is not in the Complete state, then
the simulated service registers with the FID database to be notified when it is complete or when an
error occurs. Once the FID database is complete, the simulated service may handle requests and
populate records with fields.
The SimulatedRecord provides an initial set of fields to the given published record
(RTRRTRecordImpl) and subsequent state and data changes and events. The state and data changes
occur automatically in each SimulatedRecord based on random time intervals. The data provided to
the published record is hard-coded into the SimulatedRecord, so the record can be populated as soon
as the request for the new symbol is received.
The simulated record will produce many of the possible state and data change events that a record
can transition through as well as a variety of event combinations. In this respect, the SimulatedRecord
represents an interesting test of the ability of the downstream components to handle various events
that may rarely occur on live systems.
#include "rtr/rtrsvimp.h"
#include "rtr/logevnt.h"
#include "rtr/objid.h"
class SimulatedService
: public virtual RTRRTRecordServiceImplClient,
public RTRFidDbClient
{
public:
SimulatedService(RTRObjectId& context,
RTRRTRecordServiceImpl& s,
int lowRangeUpdateRate = 3,
int highRangeUpdateRate = 60);
virtual ~SimulatedService();
void processFidDbComplete(RTRFidDb&);
void processFidDbError(RTRFidDb&);
void processNewRecord(RTRRTRecordServiceImpl&, RTRRTRecordImpl&);
// From RTRRTRecordServiceImplClient (implemented below).
void initPubService();
// Complete initialization once the FID database is ready.
protected:
RTRRTRecordServiceImpl& _implService;
RTRObjectId _instanceId;
RTRLogEvent _logEvent;
int _lowRangeUpdateRate;
int _highRangeUpdateRate;
};
#endif
NOTE: The processNewRecord() method will only be called if a new record is added to the publish-
ing service due to demand from downstream client components. Any future requests for a record
that is already populated in the publishing service will be handled by the publishing service; the sim-
ulated service will not be notified.
When the processNewRecord() method is called, the simulator service simply creates a new
instance of SimulatedRecord, providing it with the published record, FID database and high and low
intervals for sending new events to the record (remember, the simulated record sends canned data
and events, so it must determine when to send the events). The simulated record will then handle all
data and state changes associated with that published record.
#include "simsrvc.h"
#include "simrec.h"
SimulatedService::SimulatedService(RTRObjectId& context,
RTRRTRecordServiceImpl& s,
int lr,
int hr)
: _implService(s), _instanceId(context, s.name()),
_lowRangeUpdateRate(lr),
_highRangeUpdateRate(hr)
{
_logEvent.setComponent(_instanceId);
if (_implService.fidDb().complete())
{
initPubService();
}
else
{
_implService.setText("Waiting for fid db to initialize...");
_implService.indicateInfo();
RTRFidDb* fidDb = (RTRFidDb*)&_implService.fidDb();
fidDb->addClient(*this);
}
}
void SimulatedService::initPubService()
{
_implService.setText("Ready!");
_implService.setNotStale();
_implService.indicateSync();
_implService.setClient(*this);
}
SimulatedService::~SimulatedService()
{
_implService.unsetClient();
}
void SimulatedService::processNewRecord(RTRRTRecordServiceImpl&,
RTRRTRecordImpl& newRecord)
{
RTRString tmp("Got NewRecord event for record ");
tmp.append(newRecord.symbol());
_logEvent.setText(tmp);
_logEvent.setSeverity(RTRLogSeverity::debug);
_logEvent.log();
// Create a SimulatedRecord to supply canned data and events to the
// new published record. (The start of this statement is reconstructed;
// the context argument is assumed to be _instanceId.)
new SimulatedRecord(_instanceId,
newRecord,
_implService.fidDb(),
_lowRangeUpdateRate,
_highRangeUpdateRate);
};
// The list of symbols to pre-load is read from configuration. (The
// opening of this statement is reconstructed; compare the
// createFieldClients() listing later in this section.)
RTRListOfExternalValue ricList = RTRConfig::configDb().variable(
_instanceId,
"symbolList",
"RTRSY.O").list(',');
RTRRTRecordImpl *_implRecord = 0;
for (ricList.start(); !ricList.off(); ricList.forth())
{
// Check for duplicate symbols.
if (!_implService.hasRTRecordImpl(ricList.item()))
{
// Create a new record in the _implService.
//
_implRecord = &(_implService.newRTRecordImpl( ricList.item() ));
void FullCacheSimulatedService::processNewRecord(
RTRRTRecordServiceImpl& service,
RTRRTRecordImpl& newRecord)
{
// As a Full Cache service, only the records that were pre-loaded
// into the cache will be made available. By setting the text on
// the newRecord and calling indicateInactive(), we are able to provide
// some reason why the request will not be accepted.
//
// As an alternative, this application component could have not
// become a client of the RTRRTRecordServiceImpl, in which case
// the implService would handle requests for un-cached items. In
// particular, a canned text string ("Item Not Available") would be
// sent to the requesting consumer.
//
newRecord.setText("Record not pre-loaded; not available from this service.");
newRecord.indicateInactive();
};
class SimulatedRecord
: public RTRRTRecordImplClient,
public RTRTimerCmd
{
public:
SimulatedRecord(RTRObjectId& context,
RTRRTRecordImpl& record,
const RTRFidDb& fidDb,
long lowRange,
long highRange);
virtual ~SimulatedRecord();
void setTimer();
// From RTRTimerCmd
void processTimerEvent();
// From RTRRTRecordImplClient
void processHasEventClient(RTRRTRecordImpl&);
void processNotHasEventClient(RTRRTRecordImpl&);
// Event generation
void sendUpdate();
void sendInfo();
void sendCorrection();
void sendClosingRun();
void sendStale();
void sendNotStale();
void sendResync();
// Utility
void updateFields();
protected:
RTRRandomNumberGenerator _randomValue;
RTRRTFieldUpdateList _updList;
RTRRTRecordImpl& _record;
RTRObjectId _instanceId;
RTRLogEvent _logEvent;
RTRVariableDateTime _dateTime;
int _counter;
int _bidVal;
int _tradeVal;
int _askVal;
int _acvolVal;
};
#endif
When its timer expires, the simulated record uses the timer event to update fields and to modify the
state of the record. Associated events are sent to downstream client
components through the RTRRTRecordImpl interface methods. After taking the appropriate action, the
simulated record gets the next random time interval (between the high and low ranges) and activates
the timer again. This continues for the life of the simulated record.
As a registered RTRRTRecordImplClient, the simulated record prints out a message whenever a
transition occurs between having and not having downstream event clients.
The simulated record will continue running indefinitely.
#include "simrec.h"
#include "rtr/rtstrhsh.h"
SimulatedRecord::SimulatedRecord(RTRObjectId& context,
RTRRTRecordImpl& record,
const RTRFidDb& fidDb,
long lowRange,
long highRange)
: _record(record), _updList(record),
_bidVal(1), _askVal(3), _tradeVal(2), _acvolVal(0),
_counter(1), _instanceId(context,record.symbol()),
_randomValue((int)strHashFunction(
record.symbol()),
highRange, lowRange)
{
_logEvent.setComponent(_instanceId);
_record.addClient(*this);
_record.setText("Ready.");
_record.setNotStale();
_record.indicateSync();
setTimer();
activate();
}
SimulatedRecord::~SimulatedRecord()
{
_record.dropClient(*this);
}
// From RTRTimerCmd
void SimulatedRecord::processTimerEvent()
{
// Do different kinds of data and state
// events periodically.
_counter++;
if (!(_counter % 4))
{
if (_record.stale())
sendInfo();
else
sendStale();
}
else if (!(_counter % 5))
{
sendResync();
}
else if (!(_counter % 3))
{
sendCorrection();
}
else if (!(_counter % 120))
{
sendClosingRun();
}
else
sendUpdate();
setTimer();
activate();
}
void SimulatedRecord::setTimer()
{
_randomValue.getNext();
setTimerOffset(_randomValue, 0);
}
// From RTRRTRecordImplClient
void SimulatedRecord::processHasEventClient(RTRRTRecordImpl&)
{
_logEvent.setText("Users are now monitoring this record.");
_logEvent.setSeverity(RTRLogSeverity::debug);
_logEvent.log();
}
void SimulatedRecord::processNotHasEventClient(RTRRTRecordImpl&)
{
_logEvent.setText("Users are no longer monitoring this record.");
_logEvent.setSeverity(RTRLogSeverity::debug);
_logEvent.log();
}
void SimulatedRecord::updateFields()
{
static RTRString val;
RTRRTField *fld = 0;
_tradeVal++;
_bidVal++;
_askVal++;
_acvolVal += 100;
_updList.reinitialize(_record);
val.clear(); // the static buffer may hold data from a previous call
fld = _record.field(6);
if (fld)
{
val.append(_tradeVal);
fld->set(val,val.count());
fld->indicateFieldUpdated();
_updList.putFieldByFid(6);
}
val.clear();
fld = _record.fieldByName("ASK");
if (fld)
{
val.append(_askVal);
fld->set(val,val.count());
fld->indicateFieldUpdated();
_updList.putField(*fld);
}
val.clear();
fld = _record.fieldByName("ACVOL_1");
if (fld)
{
val.append(_acvolVal);
fld->set(val,val.count());
fld->indicateFieldUpdated();
_updList.putField(*fld);
}
val.clear();
fld = _record.fieldByName("TRDTIM_1");
if (fld)
{
_dateTime.setToSystemTime();
val.append(_dateTime.hours());
val.append(":");
val.append(_dateTime.minutes());
val.append(":");
val.append(_dateTime.seconds());
fld->set(val,val.count());
fld->indicateFieldUpdated();
_updList.putField(*fld);
}
}
void SimulatedRecord::sendUpdate()
{
_record.indicateUpdateTick();
updateFields();
_record.indicateUpdateComplete(_updList);
}
void SimulatedRecord::sendCorrection()
{
_record.indicateCorrectionTick();
updateFields();
_record.indicateUpdateComplete(_updList);
}
void SimulatedRecord::sendClosingRun()
{
// Update the record and indicate.
//
_record.indicateCloseTick();
updateFields();
_record.indicateUpdateComplete(_updList);
}
void SimulatedRecord::sendInfo()
{
// Notify info
_record.setText("Informational message...");
_record.indicateInfo();
}
void SimulatedRecord::sendStale()
{
// Notify info
_record.setText("GOING STALE NOW");
_record.setStale();
_record.indicateStale();
}
void SimulatedRecord::sendNotStale()
{
_record.setText("OK NOW");
_record.setNotStale();
_record.indicateNotStale();
}
void SimulatedRecord::sendResync()
{
_record.setText("Resyncronized");
if (_record.stale())
_record.setNotStale();
else
_record.setStale();
_record.indicateResync();
updateFields();
_record.indicateResyncComplete(_updList);
}
fld->set(tmp, tmp.count());
_record.putField(*fld);
}
fld->set(tmp, tmp.count());
_record.putField(*fld);
}
fidDef = fidDb.defByName("TRDPRC_4");
if (fidDef)
{
farea = new char[fidDef->length() + 1];
fld = new RTRRTPriceField(*fidDef, farea, 0);
fld->set("0", 1);
_record.putField(*fld);
}
This code creates the Triarch FID database, the SSL publishing service, and the simulated service that will provide data to the publishing service:
initSSLFidDb();
pubService = new RTRRTFieldToSSLRecordService(appId,
pubServiceName,
*sslfdb);
simService = new SimulatedService(appId, *pubService);
This code creates a Triarch FID database which is loaded from a disk file:
void initSSLFidDb()
{
if (!sslfdb)
{
sslfdb = new RTRFileFidDb (appId, "fidDb");
((RTRFileFidDb *) sslfdb)->load();
if (sslfdb->error())
{
cout<< "FidDb error: " << sslfdb->errorText() <<endl;
cleanup(-2);
}
}
}
This code initializes the notifier to run and cleans up resources after the notification loop exits. Note
that in this example, the system select() version of the notifier is being used. Each implementation of
notifier will have a different way to start the main loop. See section 4.8.3 for details.
Note that all cleanup must be done in the opposite order of creation to ensure that resources are
properly freed and objects do not reference components that have already been deleted:
RTRSelectNotifier::run();
delete simService;
delete pubService;
delete sslfdb;
delete logger;
delete configDb;
return 0;
This code creates the TIB publishing service and the simulated service that will provide data to the
publishing service. Note that the RTRRTFieldToTIBRecordService has several different constructors
that may be used. This particular constructor takes a context ID, name of the service and a connection
at construction. The connection handles connectivity to RTIC via the RVD. The publishing service will
automatically download the data dictionary (FID database).
connection = new RTRTIBConnection(appId,
"tibconnection",
servicePort,
network,
daemon);
connection->connect();
RTRRTFieldToTIBRecordService *pubService =
new RTRRTFieldToTIBRecordService(appId,
pubServiceName,
*connection);
simService = new FullCacheSimulatedService(appId, *pubService);
This code initializes the notifier to run and cleans up resources after the notification loop exits. Note
that in this example, the system select() version of the notifier is being used. Each implementation of
notifier will have a different way to start the main loop. See section 4.8.3 for details.
Note that all cleanup must be done in the opposite order of creation to ensure that resources are
properly freed and objects do not reference components that have already been deleted:
RTRSelectNotifier::run();
delete simService;
delete pubService;
delete connection;
delete logger;
delete configDb;
return 0;
This code creates the in-process (field-to-field) publishing service and the simulated service that will
provide data to it. This particular constructor takes the name of the service and a FID database at
construction. The publishing service is then given to a number of SFC consumer clients that display
record data to stdout.
// RTRRTFieldToFieldRecordService is both a RTRRTRecordServiceImpl
// and a RTRRTRecordService.
initSSLFidDb();
RTRRTFieldToFieldRecordService *pubService =
new RTRRTFieldToFieldRecordService(pubServiceName,
*sslfdb);
simService = new SimulatedService(appId,
*pubService);
createFieldClients(*pubService);
This code initializes several display components that obtain data from an RTRRTRecordService and
display it to stdout. Each TickerClient component watches a single record and displays various fields
from the record to stdout. Note that the service passed into the createFieldClients() method takes a
type of RTRRTRecordService. Since the RTRRTFieldToFieldRecordService implements the
RTRRTRecordService (consumer) interface in addition to the RTRRTRecordServiceImpl (publishing)
interface, the publishing service is passed directly to the createFieldClients() method.
void createFieldClients(RTRRTRecordService &service)
{
RTRListOfExternalValue ricList = RTRConfig::configDb().variable(
appId, "Simulator", "symbolList", "RTRSY.O").list(’,’);
clients = new TickerClientPtr [ricList.count()];
int i = 0;
for (ricList.start(); !ricList.off(); ricList.forth(), i++)
clients[i] = new TickerClient(service, ricList.item());
}
This code initializes the notifier to run and cleans up resources after the notification loop exits. Note
that in this example, the system select() version of the notifier is being used. Each implementation of
notifier will have a different way to start the main loop. See section 4.8.3 for details.
Note that all cleanup must be done in the opposite order of creation to ensure that resources are
properly freed and objects do not reference components that have already been deleted:
RTRSelectNotifier::run();
delete [] clients;
delete simService;
delete pubService;
delete logger;
delete configDb;
return 0;
4.2.7 The Gateway Example
4.2.7.1 Requirements
The application must respond to requests from downstream clients for live record data and subsequent
update and state change notifications that are obtained from a different upstream market data system.
No translation of the data is made; it is simply published straight to the SFC publishing service.
The main purpose of the program is to show how to set up a gateway publishing service and how to
ensure that your application can be made portable across multiple market data systems. This
application also shows how an application can respond to new record requests in an asynchronous
fashion and apply data updates and state changes in an asynchronous fashion.
This application can be used as a starting point for gateway style applications.
class GatewayService
: public RTRMDServiceClient
{
public:
// Constructor
GatewayService(RTRObjectId& context,
RTRRTRecordServiceImpl& implService,
RTRRTRecordService& gateService);
// Destructor
virtual ~GatewayService();
// Identification
const char *name();
// Access
RTRBOOL hasRecord(const RTRString& symbol);
// Is the gateway record associated with symbol currently
// cached?
protected:
RTRRTRecordService& _gateService;
RTRRTRecordServiceImpl& _implService;
RTRLinkedList<GatewayRecord> _cache;
RTRObjectId _instanceId;
RTRObjectId _classId;
RTRLogEvent _logEvent;
};
#endif
The GatewayService registers with the subscription service to receive service-level state change
events. Each of the methods inherited from RTRMDServiceClient is implemented to pass the event and
the gateway service text on to the publishing service.
There are several methods that are used to create, store and destroy all GatewayRecord instances
created in this service. This cache of gateway records is needed to allow for proper clean-up when the
gateway service is destructed.
The destructor for this class will basically undo all the previous actions. Specifically, the publishing
service is sent an Inactive event, all the gateway records are destroyed and the gateway service
deregisters from the subscription service.
GatewayService::GatewayService(
RTRObjectId& context,
RTRRTRecordServiceImpl& implService,
RTRRTRecordService& gateService)
: _implService(implService), _gateService(gateService),
_instanceId(context, gateService.name()),
_classId("GatewayService")
{
_logEvent.setComponent(_instanceId);
_gateService.addClient(*this);
_implService.setText(gateService.text());
_implService.setStale();
};
GatewayService::~GatewayService()
{
if (_gateService.hasClient(*this))
_gateService.dropClient(*this);
_implService.setText("Service unavailable.");
_implService.setInactive();
_implService.indicateInactive();
// Destroy all cached gateway records. (The loop head below is
// reconstructed; RTRLinkedList is assumed to support the same
// start()/off()/forth()/item() iteration protocol as the other SFC
// list classes.)
for (_cache.start(); !_cache.off(); _cache.forth())
{
GatewayRecord *gateRec = _cache.item();
delete gateRec;
}
};
void GatewayService::addRecord(GatewayRecord *gateRec)
// Add a gateway record to the cache. (The signature of this method is
// reconstructed from its use in the GatewayRecord constructor.)
{
_cache.extend(gateRec);
}
#include "gatesvc.h"
class SinkDrivenGatewayService
: public GatewayService,
public virtual RTRRTRecordServiceImplClient
{
public:
// Constructor
SinkDrivenGatewayService(RTRObjectId& context,
RTRRTRecordServiceImpl& implService,
RTRRTRecordService& gateService);
// Destructor
virtual ~SinkDrivenGatewayService();
// From RTRRTRecordServiceImplClient
void processNewRecord(RTRRTRecordServiceImpl&, RTRRTRecordImpl&);
};
#endif
A processNewRecord() event is generated by the publishing service whenever a user of that service
requests data for a record that is not currently in the service's cache. The term "sink driven" is used to
describe a service that has the ability to receive record requests interactively from the publishing service.
It is the responsibility of this component to take action whenever a processNewRecord() event
occurs. In this implementation, a new GatewayRecord instance is created to supply data and state
information to the given published record. The data and state information is obtained from a different
record service, the subscription service which is given to the new GatewayRecord instance at
construction.
As a descendant of GatewayService, this class inherits the ability to handle service level events from
the subscription service and to cache GatewayRecords.
At construction, the SinkDrivenGatewayService is given two other services: an RTRRTRecordService
(subscription service) from which to access data and state and a RTRRTRecordServiceImpl
(publishing service) that is used to publish the same data and state information. If the subscription
service is in the NotStale state, then the publishing service is also initialized to the NotStale state and a
ServiceSync event is sent to all downstream client components. If the subscription service is Stale,
then this class waits until the subscription service transitions to NotStale (as indicated when the
GatewayService::processServiceSync() event is called by the subscription service).
The processNewRecord() method is implemented to create a new GatewayRecord instance for the
given published record and to log the event through the SFC logger. The GatewayRecord constructor
takes an instance of RTRRTRecord (subscription record) that is obtained from the subscription
service. This is the record that the gateway record will be publishing information from. Note that the
GatewayRecord will add itself to the cache of this gateway service.
#include "sinkgates.h" // SinkDrivenGatewayService
SinkDrivenGatewayService::SinkDrivenGatewayService(
RTRObjectId& context,
RTRRTRecordServiceImpl& implService,
RTRRTRecordService& gateService)
: GatewayService(context, implService, gateService)
{
_implService.setClient(*this);
if (!_gateService.stale())
{
_implService.setText(_gateService.text());
_implService.setNotStale();
_implService.indicateSync();
}
};
SinkDrivenGatewayService::~SinkDrivenGatewayService()
{
_implService.unsetClient();
};
void SinkDrivenGatewayService::processNewRecord(
RTRRTRecordServiceImpl&,
RTRRTRecordImpl& newRecord)
{
RTRString tmp("Got NewRecord event for new record ");
tmp.append(newRecord.symbol());
_logEvent.setText(tmp);
_logEvent.setSeverity(RTRLogSeverity::debug);
_logEvent.log();
// Create a GatewayRecord to supply the new published record with data
// obtained from the subscription service. (The remainder of this
// listing is omitted here.)
};
#include "gatesvc.h"
class SourceDrivenGatewayService
: public GatewayService
{
public:
// Constructor
SourceDrivenGatewayService(RTRObjectId& context,
RTRRTRecordServiceImpl& implService,
RTRRTRecordService& gateService);
// Destructor
virtual ~SourceDrivenGatewayService();
// Initialization
void load();
// Initialize the cache.
};
#endif
SourceDrivenGatewayService::SourceDrivenGatewayService(
RTRObjectId& context,
RTRRTRecordServiceImpl& implService,
RTRRTRecordService& gateService)
: GatewayService(context, implService, gateService)
{
// Initialize the cache.
load();
if (!_gateService.stale())
{
_implService.setNotStale();
_implService.setText(_gateService.text());
_implService.indicateSync();
}
};
SourceDrivenGatewayService::~SourceDrivenGatewayService(){};
void SourceDrivenGatewayService::load()
{
// Pre-load the cache of gateway records (the body of this listing is
// omitted here).
}
class GatewayService;
class GatewayRecord
: public RTRRTRecordImplClient,
public RTRRTRecordClient
{
public:
// Constructor
GatewayRecord(RTRObjectId& context,
RTRRTRecord& gateRec,
RTRRTRecordImpl& srcRec,
GatewayService& service);
// Destructor
virtual ~GatewayRecord();
// Identification
const RTRString& symbol() const;
// State
RTRBOOL active();
// From RTRRTRecordImplClient
void processHasEventClient(RTRRTRecordImpl&);
// Handle the case where the first downstream client has
// started watching the record
void processNotHasEventClient(RTRRTRecordImpl&);
// Handle the case where the last downstream client has
// stopped watching the record.
protected:
RTRRTFieldUpdateList _updList;
RTRRTRecordImpl& _implRecord;
RTRRTRecord& _gateRecord;
GatewayService& _service;
static RTRLogEvent _logEvent;
RTRObjectId _instanceId;
};
#endif
At construction, if the subscription record is already populated (_gateRecord.hasData()), then the
published record is populated with fields and its state is
initialized by calling the processRecordSync() method. If the subscription record is not populated yet,
then the gateway record will wait until the subscription record sends a Sync event.
The processRecordSync() method is implemented to iterate through all the fields found in the
subscription record and add a new field of the same type into the published record. To do this, an
instance of RTRRTRecordIterator is obtained from the subscription record and, for each field in the
iteration, a new field of the same type is created. It is possible that the publishing service’s FID
database will not contain all the same fields as the subscription service. When this occurs, the gateway
record will ignore the field.
Note that the RTRFidDefinitions passed to the constructor of each new field are obtained from the
RTRFidDb of the publishing service. Since the subscription and publishing services can have different
FID databases, the gateway application must get the FID definitions from the publishing service.
NOTE: Use the field name (RTRRTField.name()) when querying the FID database for a FID defini-
tion. The field names are more likely to be consistent across different systems' FID databases. This
is the case for the RMDS and SSL systems, where the field ID values tend to be inconsistent, while the
field names are consistent.
After the new fields are populated into the published record, the record’s state is changed to reflect that
of the subscription record and a RecordSync event is propagated to all downstream client
components.
As updates occur in the subscription record, the processUpdateComplete() method is called. This
method is implemented to iterate through all the fields that have updated in the subscription record and
reset the values of the corresponding published record. After all fields have been updated, the list of
fields is passed to the published record and an UpdateComplete event is propagated to all
downstream client components.
The same processing occurs when the subscription record is resynchronized, or refreshed, and the
processResyncComplete() method is called.
As other events are received from the subscription record, the events are reflected in the published
record. Whenever a subscription record state change event occurs, the published record’s state and
descriptive text are changed and the same event is propagated to the published record.
If a RecordInactive event is received from the subscription record, the gateway record will propagate
the event to the published record, remove the published record from the publishing service, remove
itself from the gateway service, de-register from the subscription and published records and then
destroy itself. An Inactive event is a permanent indication that the symbol is no longer valid.
Note that the processHasEventClient() and processNotHasEventClient() callbacks are not
implemented to take any action. This is because, for the purposes of this example program, the
record will remain cached for the life of the application once it is added.
#include "gaterec.h"
#include "gatesvc.h"
GatewayRecord::GatewayRecord(RTRObjectId& context,
RTRRTRecord& gateRec,
RTRRTRecordImpl& srcRec,
GatewayService& service)
: _gateRecord(gateRec), _implRecord(srcRec),
_instanceId(context, gateRec.symbol()),
_service(service), _updList(srcRec)
{
RTPRECONDITION( !service.hasRecord(gateRec.symbol()) );
_implRecord.setStale();
_implRecord.addClient(*this);
_gateRecord.addClient(*this);
if ( _gateRecord.active() )
{
_service.addRecord(this);
if ( _gateRecord.hasData() )
{
processRecordSync(_gateRecord);
}
else
{
_implRecord.setText(_gateRecord.text());
_implRecord.indicateInfo();
}
}
else
{
_gateRecord.dropClient(*this);
}
};
GatewayRecord::~GatewayRecord()
{
_implRecord.dropClient(*this);
_implRecord.setText(_gateRecord.text());
if (_gateRecord.hasClient(*this))
_gateRecord.dropClient(*this);
};
RTRBOOL GatewayRecord::active()
{
return _gateRecord.active();
};
void GatewayRecord::processRecordSync(RTRRTRecord& gateRec)
{
RTRRTRecordIterator iter = gateRec.iterator();
char* farea;
RTRRTField *fld = 0;
// Iterate over each field in the subscription record and look up the
// corresponding definition in the publishing service's FID database.
// (The signature and loop head of this listing are reconstructed from
// the surrounding description.)
for (iter.start(); !iter.off(); iter.forth())
{
RTRRTField& field = iter.field();
RTRFidDefinition *def =
(RTRFidDefinition*)_implRecord.fidDb().defByName(field.name());
if (def)
{
// NOTE: Always allocate one extra byte for storing the
// end-of-data delimiter.
int fLength = def->length() + 1;
if (field.type() == RTRFidDefinition::Price)
{
farea = new char [fLength];
fld = new RTRRTPriceField(*def, farea, 0);
}
else if (field.type() == RTRFidDefinition::Alphanumeric ||
field.type() == RTRFidDefinition::LongAlphanumeric )
{
farea = new char [fLength];
fld = new RTRRTAlphanumericField(*def, farea, 0);
}
else if (field.type() == RTRFidDefinition::TimeSecs ||
field.type() == RTRFidDefinition::Date ||
field.type() == RTRFidDefinition::Time )
{
farea = new char [fLength];
fld = new RTRRTDateTimeField(*def, farea, 0);
}
else if (field.type() == RTRFidDefinition::Integer)
{
farea = new char [fLength];
fld = new RTRRTIntegerField(*def, farea, 0);
}
else if (field.type() == RTRFidDefinition::Numeric)
{
farea = new char [fLength];
fld = new RTRRTNumericField(*def, farea, 0);
}
else if (field.type() == RTRFidDefinition::Enumerated)
{
int eLength = def->expandedLength() + 1;
if(eLength > fLength)
farea = new char [eLength];
else
farea = new char [fLength];
const RTREnumTable *tbl =
_implRecord.fidDb().enumTableByName(field.name());
fld = new RTRRTEnumeratedField(*def, farea, 0, *tbl);
}
else
{
farea = new char [fLength];
fld = new RTRRTField(*def, farea, 0);
}
// Set the field value and add the field to the published record.
fld->set(field.to_c(), field.count());
_implRecord.putField(*fld);
}
}
if (!gateRec.stale())
_implRecord.setNotStale();
_implRecord.setText(gateRec.text());
_implRecord.indicateSync();
};
void GatewayRecord::processRecordInactive(RTRRTRecord&)
{
// Note - clean up of gate and impl records is done in
// the destructor.
delete this;
};
void GatewayRecord::processUpdate(RTRRTRecord&)
{
_implRecord.indicateUpdateTick();
};
void GatewayRecord::processTick(RTRRTRecord&)
{
_implRecord.indicateUpdateTick();
};
void GatewayRecord::processCloseTick(RTRRTRecord&)
{
_implRecord.indicateCloseTick();
};
void GatewayRecord::processCorrectionTick(RTRRTRecord&)
{
_implRecord.indicateCorrectionTick();
};
// Body of GatewayRecord::processUpdateComplete(). "iter" provides
// sequential access to the fields of the subscription record that
// have been updated.
_updList.reinitialize(_implRecord);
for (iter.start(); !iter.off(); iter.forth())
{
// Update the _implRecord’s field with the value
// from the _gateRecord’s field.
//
RTRRTField& gateFld = iter.field();
RTRRTField& implFld = *_implRecord.fieldByName(gateFld.name());
if (&implFld)
{
implFld.set(gateFld.to_c(), gateFld.count());
_updList.putField(implFld);
implFld.indicateFieldUpdated();
}
}
_implRecord.indicateUpdateComplete(_updList);
};
// Body of GatewayRecord::processResyncComplete(), called when the
// subscription record is resynchronized. "iter" provides sequential
// access to the refreshed fields.
_updList.reinitialize(_implRecord);
for (iter.start(); !iter.off(); iter.forth())
{
// Update the _implRecord’s field with the value
// from the _gateRecord’s field.
//
RTRRTField& gateFld = iter.field();
RTRRTField& implFld = *_implRecord.fieldByName(gateFld.name());
if (&implFld)
{
implFld.set(gateFld.to_c(), gateFld.count());
_updList.putField(implFld);
implFld.indicateFieldUpdated();
}
}
_implRecord.indicateResyncComplete(_updList);
};
// From RTRRTRecordImplClient
void GatewayRecord::processHasEventClient(RTRRTRecordImpl&)
{
_logEvent.setComponent(_instanceId);
_logEvent.setText("Users are now monitoring this record.");
_logEvent.setSeverity(RTRLogSeverity::debug);
_logEvent.log();
};
void GatewayRecord::processNotHasEventClient(RTRRTRecordImpl&)
{
_logEvent.setComponent(_instanceId);
_logEvent.setText("Users are no longer monitoring this record.");
_logEvent.setSeverity(RTRLogSeverity::debug);
_logEvent.log();
};
RTRLogEvent GatewayRecord::_logEvent;
4.3.1 Overview
4.3.1.1 Pages
A page represents an item of market data which is paginated, i.e., the data has embedded formatting
and display information. This is done by encoding escape sequences in the data or by providing
formatting data in a separate attribute field. It is this embedded display information that differentiates
pages from records. In Triarch and RMDS, pages are sometimes called ANSI pages. In TIB, they are
referred to as effects pages.
In all other ways, pages represent real-time market data, just like records. The content of a page may
change over time. These changes are referred to as updates and constitute distinct events which must
be conveyed to the clients of any given page. Page clients are application components which are
interested in the events associated with a particular page. This interest is expressed by the act of
registering a client with a page.
The data represented by a page can be characterized by its condition. The condition of the data may
affect the way in which it is used by a client. For example, if data is suspect, or stale, a display
application might indicate this by using a different color to display that data. Data condition is
expressed in the model by state variables associated with a page. Changes in the condition of the data
result in changes to page state. As with updates, page state changes are generally of interest to
clients, requiring timely propagation of these events to individual clients of the page.
Some services support a "next" and "previous" page. The names of these adjacent pages are
dependent on the service and the infrastructure. The infrastructure-specific names are available from
the page.
The logical page abstraction is represented in SFC by the class RTRPage. Page regions are
represented by the class RTRPageRegion. Clients can register interest in a page’s state and data
events with the interface RTRPageClient. They can register interest in an individual region using the
interface RTRPageRegionClient.
4.3.1.3 Attributes
Logical pages also include attributes that describe how the data should be displayed. They can
describe properties such as highlighting, inverse printing, and colors. Attributes are stored with the
page, so an individual cell’s attribute data can be accessed from the page. A page actually stores two
sets of attributes, the normal attributes and the fade attributes. The fade attributes are used to
temporarily highlight regions that have updated. Figure 4.13 shows the object structure of a RTRPage.
Figure 4.13 Object structure of RTRPage: an RTRPage holds its RTRPageAttributes and an ordered set of RTRPageRegions. RTRPageClients register with the page, and RTRPageRegionClients register with individual regions, to receive processSync(), processUpdate(), processRename(), processInactive(), processInfo(), processResync(), processStale(), and processNotStale() events.
A page’s attributes describe how each cell within the page should be displayed. All of a page’s
attributes are stored in a single object of type RTRPageAttributes. The page model supports the
following attributes:
• background color - enumeration of type RTRPageAttributes::Color
• foreground color - enumeration of type RTRPageAttributes::Color
• character set - enumeration of type RTRPageAttributes::CharSet
Character sets can be used to select a display character set that supports the necessary
extended ASCII characters (characters above 127).
• blink - RTRBOOL
• bold (also called bright) - RTRBOOL
• dim - RTRBOOL
• overline - RTRBOOL
• reverse - RTRBOOL
• underline - RTRBOOL
4.3.2 Design
State transitions are propagated to clients as events on the RTRPageClient interface. Figure 4.14
shows page states and events causing state transitions. The state transition diagram is the same as
the one for records.
Figure 4.14 Page states and state transition events: a Sync event moves a page from Active_noData to Active_valid; Resync and Sync events refresh it; Stale and NotStale events change the condition of the data; an Inactive event moves the page to the Inactive state.
• RTRPage - This class defines an item of real-time page data that has been shredded into a
logical format. A RTRPage maintains an ordered list of RTRPageRegions and the attribute
information for the page. Also, it keeps track of the state of the market data item.
• RTRPageClient - This is the abstract base class for application components that wish to register
with one or more real-time pages in order to receive data and state events from those pages.
• RTRPageRegion - This class stores the data for a single row of a page.
• RTRPageRegionClient - This is the abstract base class for application components which wish
to register to receive data events for one or more page regions.
• RTRPageRegionIterator - This class provides sequential access to all regions in a page.
• RTRPageRegionUpdate - This class is used during page updates to keep track of the offsets in a
page region that were updated. It is only valid during a RTRPageClient::processUpdate() or a
RTRPageRegionClient::processUpdate() callback.
• RTRPageRegionUpdateIterator - During a RTRPageClient::processUpdate() callback, this
class provides sequential access to the regions that were updated.
• RTRPageAttributes - This class stores all of the attributes of a page. Attributes for a single page
cell can be accessed by specifying the row and column.
• RTRPageService - This class provides access to real-time page data and manages the
aggregate state of those pages. Service state change events are propagated to those application
components that register to receive those events.
• RTRMDServiceClient - This is the abstract base class for components that wish to register with
one or more instances of page or record services in order to receive data and state events from
those services.
• RTRPageServicePool - Instances of this class serve as a repository for all available real-time
page services. Typically there is only one instance of this type within an application. A pool
provides random and sequential access to the services it contains. A pool provides the means to
insert and remove services and allows clients to register for the change events which are
generated when the contents are modified.
• RTRPageServicePoolClient - This is the abstract base class for components that wish to
register with one or more real-time page service pools in order to receive data and state events
from those pools. This interface does not indicate when the state of a service changes. To monitor
state changes in a service (e.g. when a service becomes stale), register directly with the service
using RTRMDServiceClient.
4.3.4 Implementation
Currently, the SFC provides two implementations of the real-time page model, although other
implementations are possible. While most application components should refer only to the base
classes mentioned in the preceding section, the "main" routine or initialization section of an application
must instantiate one or more implementation specific classes. The implementation classes mentioned
here are similar in functionality and implementation to those that provide access to real-time record
data and which are presented in section 4.1.3. Please refer to that section for more details.
The SFC page model includes implementation classes for use by both single (simple) and multiple
service (more complex) applications. These classes encapsulate the procedures for creating an SSL
connection and the page services which use that connection. They are:
• RTRSSLPageService - This class is a descendant of RTRPageService and creates its own
RTRRTPageService to retrieve data. It is typically used by simple applications which require
access to only a single service.
• RTRTIBPageService - This class is a descendant of RTRPageService which creates its own TIB
connection. It is typically used by simple applications which require access to only a single
service.
Because attribute information is decoded into the logical page model, the underlying Triarch and TIB
formats do not have to be interpreted by the application. Typical applications will
still need to map attributes from SFC boolean function values to the display API's encoding format.
The WinPage and CursesPage examples have functions that map SFC semantic attributes to display
attributes.
TIB services do not deliver fade attributes, so SFC will return the same data for
RTRPage::attributes() and RTRPage::fadeAttributes().
Some datafeeds for TIB page services do not use the TIB standard method for encoding attributes.
Instead of sending color information, they encode the colors in the attribute fields. SFC provides a
mechanism for mapping these attributes to the correct TIB-semantic attributes. For more information
on configuring a TIB page service, see section 5.5.8.2.
4.3.4.3 Configuration
Both implementations provide the following configuration variables for setting up a page service:
• enableColors - should colors be enabled (default is true)
• defaultFgColor - set the default foreground color (7 for white)
• defaultBgColor - set the default background color (0 for black)
4.3.5 Examples
SFC includes a few examples that highlight different parts of the page model. These examples can be
found in the sfc/examples/pages/ directory. They typically have two parts:
1. a main() function that creates the page service and the example client and then runs the notifier
2. a client class that registers with a page (or page region) and displays its data and events
Example Program: WinPage (win32 only)
Refers To: section 4.3.8, winpage.C, consolepageclient.* (win32 only)
Description: Displays a page's data and applies its attributes to a Windows console using Win32 platform SDK functions.

Example Program: CursesPage (Unix only)
Refers To: section 4.3.8, cursespage*.* (Unix only)
Description: Displays a page's data and applies its attributes to an xterm using the X/Open curses library.
The name of this class is PageUpdClient. In order to receive page events, this class is a descendant of
RTRPageClient. This class must provide implementations of all pure virtual methods inherited from its
ancestor. These methods correspond to the events that can be received from RTRPage.
This class processes these events:
• Sync
• Resync
• Stale
• NotStale
• Info
• Inactive
• Rename
• Update
#ifndef _pageupdclient_h_
#define _pageupdclient_h_
#include "rtr/page.h"
#include "rtr/pgreg.h"
class RTRPageService;
class PageUpdClient :
public RTRPageClient
{
public:
PageUpdClient(RTRPageService &service, RTRString &ric);
virtual ~PageUpdClient();
// RTRPageClient
virtual void processSync(RTRPage&);
virtual void processResync(RTRPage&);
virtual void processStale(RTRPage&);
virtual void processNotStale(RTRPage&);
virtual void processInfo(RTRPage&);
virtual void processInactive(RTRPage&);
virtual void processUpdate(RTRPage&);
virtual void processRename(RTRPage&);
protected:
RTRPage *_page;
};
#endif
PageUpdClient::~PageUpdClient()
{
if (_page && _page->hasClient(*this))
_page->dropClient(*this);
}
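A constructor for such a client might look like the sketch below. The accessor used to obtain the RTRPage from the RTRPageService (called page() here) is an assumption, as is the inactive-page check, which mirrors the AnsiPage examples later in this chapter.
// Sketch only: the RTRPageService accessor name is assumed.
PageUpdClient::PageUpdClient(RTRPageService &service, RTRString &ric)
: _page(0)
{
RTRPage &page = service.page(ric); // assumed accessor
if ( page.active() )
{
_page = &page;
_page->addClient(*this); // register for data and state events
}
}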
#include "pageupdclient.h"
#include "rtr/selectni.h"// RTRSelectNotifier
#include "rtr/tibpgsrvc.h"
#include "rtr/sslpgsrvc.h"
RTRCmdLine RTRCmdLine::cmdLine;
RTRPageService *service = 0;
// ... a RTRSSLPageService or RTRTIBPageService is created here and
// assigned to "service" (creation code omitted from this excerpt) ...
if ( !service->active() )
{
cerr << "Service error: " << service->text() << endl;
delete (service);
return -1;
}
#include "regionupdclient.h"
// ...
RegionUpdClient::~RegionUpdClient()
{
if (_region && _region->hasClient(*this))
_region->dropClient(*this);
if (_page && _page->hasClient(*this))
_page->dropClient(*this);
}
// ...
4.4.1 Overview
4.4.1.2 Services
Real-time page services are very similar to real-time record services, except that they provide pages
instead of records. As with record services, if a page service is unable (or unwilling) to provide data for
a requested page, the page will be in an inactive state. The text of an inactive page explains the
reason that data will not be provided. Once in an inactive state, a page will never transition to the
active state. Applications should not maintain references (pointers) to inactive items. This rule is based
on practical considerations, e.g. memory management. Resources used by inactive items are eligible
to be reclaimed. In the SFC, logical page services are represented by the class RTRPageService, and
page stream services are represented by the class RTRRTPageService.
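For example, a component requesting a page stream from an RTRRTPageService should check whether the page is active before holding a reference to it. The fragment below follows the AnsiPage examples later in this section; "FXFX" is an illustrative symbol, and service is assumed to be an RTRRTPageService.
// Request a page stream and honour the inactive-page rule above.
RTRRTPage *page = &(service.rtPage("FXFX"));
if ( !page->active() )
{
// The page will never transition back to the active state; report
// the reason and drop the reference so that its resources can be
// reclaimed by the service.
cout << "Page inactive: " << page->text() << endl;
page = 0;
}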
4.4.2 Design
Stale
Active_valid Active_stale
NotStale
Inactive
Inactive
Inactive
To monitor state changes in a service (e.g. when a service becomes stale), register directly with the
service using RTRMDServiceClient.
• RTRBufferReadIterator - This class provides access to the encoded page images and updates.
4.4.4 Implementation
Currently, the SFC provides an SSL based implementation of the real-time page model, although other
implementations are possible. While most application components should refer only to the base
classes mentioned in the preceding section, the "main" routine or initialization section of an application
must instantiate one or more implementation specific classes. The implementation classes mentioned
here are similar in functionality and implementation to those which provide access to real-time record
data and which are presented in section 4.1.3. Please refer to that section for more details.
The SFC page model includes implementation classes for use by both single (simple) and multiple
service (more complex) applications. These classes encapsulate the procedures for creating an SSL
connection and the page services which use that connection. They are:
• RTRDefaultRTPageService - This class is a descendant of RTRRTPageService (via
RTRSSLRTPageService) which creates its own SSL session. It is typically used by simple
applications which require access to only a single service.
• RTRDefaultRTPageServicePool - This class is a descendant of RTRRTPageServicePool which
creates its own SSL session and monitors that session for page based services. For each
observed service, the pool adds to itself an instance of RTRSSLRTPageService. Other
application components can, as usual, monitor the pool for new services without being dependent
on SSL implementation details.
4.4.5 Examples
Please refer to the alphabetical reference section and the relevant example programs for more details
concerning these classes. Examples are presented for the snapshot and update page clients described below.
#ifndef _ansipagesnapclient_h
#define _ansipagesnapclient_h
class AnsiPageSnapshotClient :
public virtual RTRRTPageClient
{
public:
// Constructor
AnsiPageSnapshotClient(RTRRTPageService&, const char *);
// Destructor
~AnsiPageSnapshotClient();
// Event processing
void processStreamSync(RTRBufferReadIterator&);
// The stream has an initial image; this is where the snapshot is taken.
void processStreamResync(RTRBufferReadIterator& ) {};
// The stream has new image. It may still be stale. Clients should
// check the condition of the stream. It is not relevant here, since
// this class takes a snapshot and terminates the application.
void processStreamInactive();
// The stream is invalid. Drop all references immediately. Resources
// consumed by the stream will be reclaimed by the service which
// provided it.
void processStreamInfo();
// The stream has new informational text. The state of the
// stream has not changed.
protected:
// Implementation attributes
RTRRTPage *_page;
};
#endif
Page data is made available through an instance of RTRBufferReadIterator. For the purposes of page
clients, this is a contiguous sequence of storage. The buffer iterator provides access to the storage
and conveys the number of bytes in the message. The processStreamSync() method illustrates the
technique for accessing the data. Notice that the buffer indexing is based on one, not zero. This class
uses some simple escape sequences for cursor positioning when printing status messages.
//
// This file contains the implementation of AnsiPageSnapshotClient
//
//
// Constructor
//
AnsiPageSnapshotClient::AnsiPageSnapshotClient(
RTRRTPageService& service, const char *symbol)
{ // 1
cout << CLR_SCRN << flush;
_page = &(service.rtPage(symbol));
_page->addClient(*this);
if ( _page->active() )
{
if ( _page->imageAvailable() )
processStreamSync(_page->imageData());
else
processStreamInfo();
}
else
{ // 2
cout << STATUS_POSITION << "Page inactive:" << _page->text() << flush;
_page->dropClient(*this);
_page = 0;
}
}
//
// Destructor
//
AnsiPageSnapshotClient::~AnsiPageSnapshotClient()
{
// 3
if ( _page )
_page->dropClient(*this);
}
//
// Event processing
//
void AnsiPageSnapshotClient::processStreamInactive()
{
cout << STATUS_POSITION << "Page inactive:" << _page->text() << flush;
_page->dropClient(*this);
_page = 0;
}
void AnsiPageSnapshotClient::processStreamInfo()
{
// 4
cout << STATUS_POSITION << "Page info:" << _page->text() << flush;
}
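The body of processStreamSync() is not reproduced above. The following sketch shows the general
technique; the iterator accessor names count() and operator[] are assumptions used here for illustration
only and should be checked against the SFC Reference Manual.
void AnsiPageSnapshotClient::processStreamSync(RTRBufferReadIterator& iter)
{
// Print each byte of the page image; buffer indexing starts at one, not zero.
// (count() and operator[] are assumed accessor names, not taken from the manual.)
for (int i = 1; i <= iter.count(); i++)
cout << iter[i];
cout << flush;
// A snapshot client would then drop its reference to the page and stop the
// application, for example by disabling the event notifier.
}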
#ifndef _ansipageupdclient_h
#define _ansipageupdclient_h
class AnsiPageUpdateClient :
public virtual RTRRTPageClient
{
public:
// Constructor
AnsiPageUpdateClient(RTRRTPageService&, const char *);
// Destructor
~AnsiPageUpdateClient();
// Event processing
void processStreamSync(RTRBufferReadIterator& iter);
// The stream has received its initial image.
void processStreamResync(RTRBufferReadIterator& iter);
// The stream has new image. It may still be stale. Clients should
// check the condition of the stream.
void processStreamStale();
// The stream data is out-of-date. Stream state information is
// available via text().
void processStreamNotStale();
// The stream is no longer stale.
void processStreamInactive();
// The stream is invalid. Drop all references immediately. Resources
// consumed by the stream will be reclaimed by the service which
// provided it.
void processStreamInfo();
// The stream has new informational text. The state of the
// stream has not changed.
RTRRTPage *page();
// The page to which this client refers.
protected:
// Implementation attributes
RTRRTPage *_page;
};
#endif
// 1
AnsiPageUpdateClient::AnsiPageUpdateClient(
RTRRTPageService& service, const char *symbol)
{
cout << CLR_SCRN << flush;
_page = &(service.rtPage(symbol));
if ( _page->active() )
{
_page->addClient(*this);
if ( _page->imageAvailable() )
processStreamSync(_page->imageData());
}
else
{
cout << STATUS_POSITION << "Page inactive:" << _page->text() << flush;
_page = 0;
}
}
AnsiPageUpdateClient::~AnsiPageUpdateClient()
{
if ( _page )
_page->dropClient(*this);
}
RTRRTPage *AnsiPageUpdateClient::page()
{
return _page;
}
void AnsiPageUpdateClient::processStreamStale()
{
cout << STATUS_POSITION << "Page stale:" << _page->text() << flush;
}
void AnsiPageUpdateClient::processStreamNotStale()
{
cout << STATUS_POSITION << "Page ok:" << _page->text() << flush;
}
void AnsiPageUpdateClient::processStreamInactive()
{
cout << STATUS_POSITION << "Page inactive:" << _page->text() << flush;
_page->dropClient(*this);
_page = 0;
}
void AnsiPageUpdateClient::processStreamInfo()
{
cout << STATUS_POSITION << "Page info:" << _page->text() << flush;
}
4.5 Time-Series
4.5.1 Overview
4.5.1.1 Time-Series
A time-series represents the historical record of market activity for some financial instrument. The
history includes data for one or more fields of interest, e.g. price or trading volume over some period. A
series is a time-ordered sequence of data samples. Each sample contains the values of the relevant
fields at the point in time represented by the sample.
A periodic series provides samples which occur with a fixed frequency, e.g. daily or weekly, and which
represent a summary of trading activity for the period between two samples. An aperiodic, or tick,
series provides samples which represent actual market activity. The semantics of the two types of
series are different. For example, in a periodic series, volume would typically represent accumulated
volume for the period, while in an aperiodic series, volume would represent the volume of a particular
trade. The difference in semantics of the two series manifests itself in the field content of the series.
Samples are the constituent parts of a series. Each has a time-stamp and provides access to its
constituent values. In some circumstances a sample may be invalid, e.g. a sample from a daily series
which falls on a holiday. Each value has an identifier corresponding to that of the field which it
represents. The semantics of the value is determined by the identifier.
Applications need to traverse some or all of the available samples in a series. They may need a fixed
number of samples or all of the samples that fall within a certain time period. A client of a series must
specify the range of samples which is required, i.e., the "view" which it will take. Retrieval of data for a
given view may be synchronous or asynchronous. If asynchronous, a series may be in an incomplete
state for some period of time. Once complete, the series is responsible for informing the client that the
requested view has been established.
The SFC class representing time-series is RTRTimeSeries. Samples are of type RTRTimeSample,
while clients are of type RTRTimeSeriesClient. The time-series client class provides the mechanism by
which a series propagates state change events and informational messages to an interested
application component.
NOTE: SFC provides two request types for the time-series data model because Reuters may disable
the streaming (updating) TS1 capability. The *requestTS1RealTime configuration parameter allows
users to change the primary record request behavior so that TS1 data is requested in real time rather
than as a snapshot. The default value is False.
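Using the configuration file syntax described in section 4.8.5, this behavior can be enabled for all
components with a single wildcarded entry, for example:
*requestTS1RealTime : True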
[State diagram omitted: snapshot time-series states Incomplete, Complete, and Error.]
State transitions are propagated to clients as events. Figure 4.17 shows the real-time time-series states
and the events causing state transitions. The *requestTS1RealTime configuration option must be set to True.
[Figure 4.17 omitted: real-time time-series states Incomplete, Complete, and Error, with Info, /NewSample, /NewEvent, and Error transitions.]
• RTRSeriesValue - Values contain the historical data values for a particular field of interest, e.g.,
trading volume for the interval represented by a sample.
• RTRTimeSeriesClient - This is the abstract base class for application components which can
own a time-series and which will receive events relating to that time-series.
• RTRTimeSeriesService - This is the abstract base class for providers of time-series data.
• RTRSimpleSeries - A series which contains values for only a single field. Instances of this class
are derived from a complete series and are used to provide simplified access to data for a
particular field.
• RTRTSValDefDb - The database of field definitions used by a particular service.
• RTRTSValDefDbClient - This is the abstract base class for application components which need
to know when the RTRTSValDefDb is completely populated and when it updates.
• RTRSeriesValueDefinition - Provides the description for a column in a time-series. A definition
comprises the name, identifier, and type of column. The type determines the way in which the raw
data should be interpreted; the name and identifier define the meaning of the field.
4.5.4 Implementation
Currently, the SFC provides an implementation of the time-series model which relies on the TS1 data
available over the Reuters datafeeds. Other implementations are possible. The TS1 implementation
uses the abstract interface of either the snapshot or real-time (setting *requestTS1RealTime to True)
record model to retrieve the encoded time-series data. While most application components should
refer only to the base classes mentioned in the preceding section, the "main" routine or initialization
section of an application must instantiate one or more implementation specific classes.
Typically, applications will create a single instance of a TS1 time-series service, which is described
as follows:
• RTRTS1TimeSeriesService - This implementation of RTRTimeSeriesService uses an instance
of RTRRTRecordService to retrieve snapshot records or real-time records (setting
*requestTS1RealTime to True) containing compressed time-series data in the TS1 format. Please
refer to section 4.1.1 for more information on the real-time and snapshot record models and
associated implementation classes. TS1 data may span multiple records and must be
decompressed prior to use. The implementation relies on numerous underlying classes such as
RTRTS1TimeSeries and RTRTS1TimeSample. A complete description of these and related
implementation classes is beyond the scope of this manual.
4.5.5 Examples
SFC includes several examples which highlight different parts of the time-series model. All of these
examples can be found in the sfc/examples/historical/ directory. These examples typically have two
parts:
1. main() function that:
• parses command line arguments
• creates a service pool factory or record service
• creates a time-series client
• creates the event control loop
2. SFC client implementation that is responsible for
• requesting a time-series
• printing output to standard out
The following table summarizes the time-series examples and details where more information about
them can be found.
TSDump - creates a time-series client to get weekly data for a specified number of samples
(refer to section 4.5.6, tsdump.C, tsbycount.*)
#ifndef _tsbycount_h
#define _tsbycount_h
class TSClientByCount :
public virtual RTRTimeSeriesClient
{
public:
// Constructor
TSClientByCount(
RTRTimeSeriesService&,// The service to use
const char *, // name of instrument to retrieve
int // the number of samples to retrieve
);
void processSeriesComplete(RTRTimeSeries&);
// Process a completion event generated by the series.
void processSeriesError(RTRTimeSeries&);
// Process an error event generated by the series.
void processSeriesInfo(RTRTimeSeries&);
// Process an informational event generated by the series.
protected:
// Implementation attributes
RTRTimeSeries _series;
};
#endif
//
// This file contains the implementation of TSClientByCount.
//
#include "rtr/rtrnotif.h"// Access to event loop
#include "tsbycount.h"
//
// Constructor
//
TSClientByCount::TSClientByCount(
RTRTimeSeriesService& service,
const char *symbol,
int numSamples
)
: _series(service.timeSeries(symbol, RTRTimeSeriesService::Weekly, *this))
// 1
{// 2
// Specify the desired content of the series and check its state.
_series.setView(numSamples);
if (!_series.error())
{
if (_series.complete())
processSeriesComplete(_series);
else
processSeriesInfo(_series);
}
else
processSeriesError(_series);
}
//
// Event processing
//
void TSClientByCount::processSeriesComplete(RTRTimeSeries&)
{
// 3
cout << _series << endl;
RTREventNotifierInit::notifier->disable();
}
void TSClientByCount::processSeriesError(RTRTimeSeries& s)
{
cerr << s.symbol() << " - Error :" << s.text() << endl;
RTREventNotifierInit::notifier->disable();
}
void TSClientByCount::processSeriesInfo(RTRTimeSeries& s)
{
cerr << s.symbol() << " - Info :" << s.text() << endl;
}
This class by itself does not comprise an application. A complete application must provide access to
an implementation of RTRTimeSeriesService and must instantiate one (or more) instance of
TSClientByCount. The next section illustrates such an application.
#include <iostream.h>
#include <stdlib.h>
#include "tsbycount.h"
NOTES:
• The series provides array style access to its constituent samples.
• The series provides attributes which define the valid range of indices (upper() & lower()).
Likewise, the output routine used above to print values from a sample illustrates the proper technique
to use when extracting values from a sample:
NOTES:
• Like a series, a sample provides array style access to its constituent values.
• The sample provides attributes which define the valid range of indices (upper() & lower()).
NOTES:
• Access to the floating point representation of a value is only meaningful if the value is "valid".
• A value provides an operator which allows the application to cast the value as a floating point
number (in this case, the explicit cast is required).
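The output routines themselves are not reproduced above. The following sketch illustrates the
traversal technique described in these notes; the assumption that series[i] yields an RTRTimeSample
reference and that a sample answers valid() is illustrative, while the upper()/lower() bounds, the
array-style access, the value valid() test, and the float cast are taken directly from the notes.
ostream& operator<<(ostream& os, RTRTimeSeries& series)
{
// Traverse the samples using the series' own index bounds.
for (int i = series.lower(); i <= series.upper(); i++)
{
RTRTimeSample& sample = series[i]; // return type assumed
if (!sample.valid()) // e.g. a daily sample falling on a holiday (query assumed)
continue;
// Traverse the values of this sample using the sample's index bounds.
for (int j = sample.lower(); j <= sample.upper(); j++)
{
if (sample[j].valid())
os << (float)sample[j] << " "; // explicit cast to the float representation
}
os << endl;
}
return os;
}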
#ifndef _tsbydate_h
#define _tsbydate_h
class TSClientByDate :
public virtual RTRTimeSeriesClient
{
public:
// Constructor
TSClientByDate(
RTRTimeSeriesService&,// The service to use
const char *, // name of instrument to retrieve
const char *, // string representing desired start date
const char * // string representing desired end date
);
void processSeriesComplete(RTRTimeSeries&);
// Process a completion event generated by the series.
void processSeriesError(RTRTimeSeries&);
// Process an error event generated by the series.
void processSeriesInfo(RTRTimeSeries&);
// Process an informational event generated by the series.
protected:
// Implementation attributes
RTRTimeSeries _series;
};
#endif
//
// 1
#include "rtr/rtrnotif.h"
#include "tsbydate.h"
//
// Constructor
//
TSClientByDate::TSClientByDate(
RTRTimeSeriesService& service,
const char *symbol,
const char *startDate,
const char *endDate
)
: _series(service.timeSeries(symbol, RTRTimeSeriesService::Weekly, *this))
{
// 2
_series.setView(startDate, endDate);
if (!_series.error())
{
if (_series.complete())
processSeriesComplete(_series);
else
processSeriesInfo(_series);
}
else
processSeriesError(_series);
}
//
// Event processing
//
void TSClientByDate::processSeriesComplete(RTRTimeSeries&)
{
cout << _series << endl;
RTREventNotifierInit::notifier->disable();
}
void TSClientByDate::processSeriesError(RTRTimeSeries& s)
{
cerr << s.symbol() << " - Error :" << s.text() << endl;
RTREventNotifierInit::notifier->disable();
}
void TSClientByDate::processSeriesInfo(RTRTimeSeries& s)
{
cerr << s.symbol() << " - Info :" << s.text() << endl;
}
The rest of the client implementation stays the same. The application (tsdates.C) which uses this
client differs slightly from the original application because it must provide two dates: it takes two
string arguments in place of the integer sample count. The original version extracted four weeks of
data. The data for the same four weeks of the preceding year can be extracted using tsdates as follows:
> tsdates -infra ssl -servicename IDN_SELECTFEED -symbol RTRSY.O -d1 11/30/1993 -d2
11/01/1993
Date OPN CLS HI LOW VOL
11/26/93 36.125 35.25 36.1875 34.875 2.0072e+06
11/19/93 34.9375 36.5625 36.75 34 5.2614e+06
11/12/93 36.5625 36.125 37.125 35.9375 2.6146e+06
11/05/93 36.4375 36.4375 37.375 35.75 1.941e+06
The SimpleSeriesClient class shown below takes an additional argument (in addition to the service,
symbol, and number of samples arguments) to the constructor which specifies the name of the field on
which to perform the computation (in this example, an average).
#ifndef _simpleclient_h
#define _simpleclient_h
class SimpleSeriesClient :
public virtual RTRTimeSeriesClient
{
public:
// Constructor
SimpleSeriesClient(
RTRTimeSeriesService&,// the service to use
const char *, // name of instrument to retrieve
int, // the number of samples to retrieve
const char * // name of the field to retrieve
);
void processSeriesComplete(RTRTimeSeries&);
// Process a completion event; computes and prints the average.
void processSeriesError(RTRTimeSeries&);
// Process an error event generated by the series.
protected:
// Implementation attributes
RTRTimeSeries _series;
RTRString _fieldName;
};
#endif
#include "rtr/rtrnotif.h"
#include "simpleclient.h"
SimpleSeriesClient::SimpleSeriesClient(
RTRTimeSeriesService& service,
const char *symbol,
int numSamples,
const char *fieldName
)
: _series(service.timeSeries(symbol, RTRTimeSeriesService::Weekly, *this)),
_fieldName(fieldName)
{
_series.setView(numSamples);
if (!_series.error())
{
if (_series.complete())
processSeriesComplete(_series);
}
else
processSeriesError(_series);
}
void SimpleSeriesClient::processSeriesComplete(RTRTimeSeries&)
{
RTRSimpleSeries ss = _series.simpleSeries(_fieldName, _fieldName);
float sum = 0;
int n = 0;
for (int i = ss.lower(); i <= ss.upper(); i++)
{
if (ss[i].valid())
{
sum += (float)ss[i];
n++;
}
}
if (n)
cout << "Average is : " << sum/n << endl;
else
cout << "No valid data" << endl;
RTREventNotifierInit::notifier->disable();
}
void SimpleSeriesClient::processSeriesError(RTRTimeSeries& s)
{
cerr << "Error - " << s.symbol() << ":" << s.text() << endl;
RTREventNotifierInit::notifier->disable();
}
An application called tsavg uses the new class and produces the following output:
> tsavg -infra ssl -servicename IDN_SELECTFEED -symbol RTRSY.O -n 3 -field vol
Average is : 954233
>
4.6 Inserts
NOTE: The insert model can only be used to contribute data to Triarch and RMDS. See section
5.5.9 for details on how to contribute on an RTIC-based RMDS infrastructure.
4.6.1 Overview
An insert is used to pass information related to a particular market data item to a service associated
with that item. An insert is passed to an insert service, where the insert may be accepted or denied.
An insert transaction starts when an insert is passed to an insert service and ends when the service
accepts or rejects the insert. An insert client is the application component that is notified at the
completion of an insert transaction.
The format and content of the information provided in an insert is service specific. Therefore, an
application utilizing inserts must know how to format information for each insert service. Typically this
format is Marketfeed. See the Reuters Marketfeed Reference Manual for details on the Marketfeed
data format.
Note that upon completion of the transaction, all references to the insert must be released so that the
insert object may be freed by the insert service.
1. Publishers are able to receive inserts; subscription clients are able to publish inserts. In this section, the 'insert
publisher' is the subscription client, and the 'insert request server' is the publisher that is able to respond to inserts.
2. The setText method limits text length to 80 characters; any characters beyond that limit will be truncated.
method. It may also set the failure code by using the setNakCode method. See the SFC Reference Manual
for more details.
If the default implementation is overridden, the insert request server must make sure to send
acknowledgements in a timely fashion, for the following reason. Each insert received is added to a list.
When the insert request server sends a success or failure for an insert, that particular insert is removed
from the list and deleted. If the insert request server never sends success or failure for an insert, that
insert is only removed and deleted when the service is marked stale, at which time a failure for the
insert is sent.
The consequence is that an insert request server which never sends success or failure for its inserts can
eventually exhaust memory, since every insert received is held in the list until it is acknowledged. The
insert request server application must therefore keep track of its outstanding inserts at all times.
Source file
class SimpleInsertExample
: public RTRInsertServiceClient,
public RTRInsertClient
{
protected:
RTRInsertService& _service;
RTRString _itemName;
RTRString _insertData;
void releaseResources();
public:
// Constructor
// Destructor
virtual ~SimpleInsertExample();
void sendInsert();
// Send an insert and wait for a response.
};
SimpleInsertExample::~SimpleInsertExample(){;};
void SimpleInsertExample::sendInsert()
{
cout<<"Sending insert for item "<<_itemName<<" and data "<<_insertData;
cout<<" to service "<<_service.name()<<endl<<endl;
srvc.dropClient(*this);
sendInsert();
};
releaseResources();
};
releaseResources();
};
void SimpleInsertExample::releaseResources()
{
RTRObjectId appId("SimpleInsert");
RTRInsertService *insertService = 0;
void cleanup(int, char * = 0);
void setup(int, char **);
void createLogger(int, char **);
RTRString infra, servicename, symbol, servicePort, daemon, network, value;
RTRSelectNotifier::run();
cleanup(0, argv[0]);
return 0;
}
4.7 Session
NOTE: The session model is independent of infrastructure. It does not use Rendezvous or
Triarch.
The goals of the session model are to:
• provide one simple interface (RTRMessageSession) for exchanging messages between two peer
application components,
• provide implementation classes of the abstract interface that utilize particular communication
protocols and shield the user from the intricacies of coding directly to the interfaces of those
protocols.
The abstract message session interface defines the message exchange portion of peer-to-peer
communication while the specific implementation classes, like a TCP Client Session, define and
implement the abstract interface by using a particular communication protocol and determine how a
message session is created and how associations are made to a peer application component.
The implementation for this class follows the declaration. It illustrates the correct procedure to use in
reading and writing messages with a message session. In pseudo-code, the procedure for writing a
message is:
Allocate message from message session
If message available from session then
Get a write iterator for the allocated message
Format data into the message
Send the message
else
Wait for allocation ready event from message session or take other action
In pseudo-code, the procedure for reading a message when the processSessionMessage() function
is called is:
Obtain from message session a read iterator to the new message
Use the iterator methods to extract message information
#include "rtr/rtrdefs.h"
#include "rtr/msgsess.h"// Message session
class Reply
: public virtual RTRMessageSessionClient
{
public:
// Constructor
Reply();
// Destructor
virtual ~Reply();
// From RTRMessageSessionClient
void processSessionMessage(RTRMessageSession& session);
// This method is invoked when a new message has been
// received from the session. The method
// session.lastMessageReceived() is invoked to retrieve
// the message.
RTRBOOL hasSession();
// Is a session set for this instance?
protected:
void sendReply();
// Format and send the reply message "YES. I AM HERE"
// to the given peer session.
void releaseResources();
// Release all resources held by this instance.
RTRMessageSession *_session;
// The peer message session, assigned via setSession().
};
#endif // _reply_h
#include "reply.h"
Reply::Reply()
: _session(0)
{
};
Reply::~Reply()
{
releaseResources();
};
RTRBOOL Reply::hasSession()
{
return _session != 0 ? RTRTRUE : RTRFALSE;
};
void Reply::sendReply()
{
// Allocate a new message write buffer.
_session->allocateMessage();
void Reply::releaseResources()
{
// Delete the session.
if (_session != 0)
{
_session->terminate();
_session = 0;
}
};
#ifndef _request_h
#define _request_h
#include "rtr/rtrdefs.h"
#include "rtr/msgsess.h"// Message session
#include "rtr/timercmd.h"// RTRTimerCmd
class Request
: public virtual RTRMessageSessionClient,
public virtual RTRTimerCmd
{
public:
// Constructor
Request();
// Destructor
virtual ~Request();
// From RTRMessageSessionClient
void processSessionMessage(RTRMessageSession& session);
// This method is invoked when a new message has been
// received from the session. The method
// session.lastMessageReceived() is invoked to retrieve
// the message.
void processSessionAllocationReady(RTRMessageSession&);
// This method is invoked when the session is again
// capable of provided messages after a
// session.allocateMessage() call had failed.
// From RTRTimerCmd
void processTimerEvent();
// This method is invoked when the timer goes off.
void setSession(RTRMessageSession* s);
// Set the message session to be used by
// this instance.
// REQUIRE: !hasSession()
RTRBOOL hasSession();
// Is a session set for this instance?
void start();
// Start sending request messages.
// REQUIRE: hasSession()
protected:
void sendRequest();
// Format and send a request message.
void activateTimer();
// Activate the timer to go off in 5 seconds.
void releaseResources();
// Release all resources held by this instance.
RTRMessageSession *_session;
// The message session used to send requests (set via setSession()).
int _messageCount;
// Counter used when sending request messages (type assumed).
};
#endif //_request_h
Request::Request()
: RTRTimerCmd(),
_session(0),
_messageCount(1)
{
};
Request::~Request()
{
releaseResources();
};
delete this;
}
// Otherwise activate the timer to send another message in 5 seconds.
else
activateTimer();
};
void Request::processSessionAllocationReady(RTRMessageSession&)
{
// Messagess are now available, so attempt to send the
// request message.
sendRequest();
};
void Request::processTimerEvent()
{
// The 5 second timer has expired. Send another request message.
sendRequest();
};
void Request::setSession(RTRMessageSession* s)
{
// Precondition enforces the fact that the session
// for this instance can only be set once.
RTPRECONDITION( !hasSession() );
_session = s;
};
RTRBOOL Request::hasSession()
{
return _session != 0 ? RTRTRUE : RTRFALSE;
};
void Request::start()
{
RTPRECONDITION( hasSession() );
sendRequest();
};
void Request::sendRequest()
{
RTPRECONDITION( _session != 0 );
if (!_session->connected())
{
processSessionDisconnected(*_session);
}
else
// The processSessionAllocationReady() method will be
// invoked by the message session when more messages
// can be sent.
cout<<"Requestor: Unable to send message! Waiting..."<<endl;
};
void Request::activateTimer()
{
// Set the time, in seconds, for the timer and activate.
setTimerOffset((long)5, (short)0);
activate();
};
void Request::releaseResources()
{
// Delete the session.
if (_session != 0)
{
_session->terminate();
_session = 0;
}
};
The specific communication protocols that may be used to implement the abstract interface are
determined by RTRMessageSession descendants.
One such descendant relies upon the TCP/IP protocol suite and sockets to implement a message
session. TCP presents users with a client/server model in which a server is established on a well
known port and clients connect to that port to start a communication session. When the TCP server
receives a connection attempt, a new socket is allocated for communications with the TCP client.
The TCP implementation of the message session classes uses a similar model. A TCP session server
(RTRTcpSessionServer) is a TCP server that accepts connections from TCP client session instances
and propagates to its event client an event indicating that a new message session is available.
A TCP client session (RTRTcpClientSession) is a message session descendant that attempts to
establish a connection with a specific TCP session server instance. If the connection to the server
succeeds, the message session is considered established and messages may be passed between
itself and a peer session.
A TCP server session (RTRTcpServerSession) is a message session descendant that is created by
the TCP session server whenever another session connects to the session server. Upon creation, the
session is used to exchange messages with a peer TCP client session.
#include "rtr/rtrdefs.h"
#include "rtr/rtstring.h"// String
#include "rtr/msgsess.h"// Message session
#include "rtr/tcpssvr.h"// Tcp Session Server
class ReplyServer
: public virtual RTRTcpSessionServerClient
// Inherit TCP session server client to receive event notification.
{
public:
// Constructor
ReplyServer(RTRTcpSocketIdentifier& identifier);
// Destructor
~ReplyServer();
// From RTRMessageSessionServerClient
void processNewSession(RTRTcpSessionIdentifier& id,
RTRTcpSessionServer& svr);
// This method is invoked when a new message session is
// available from the session server.
protected:
RTRTcpSessionServer _server;
// Session server instance
};
#endif // _reply_server_h
ReplyServer::ReplyServer(RTRTcpSocketIdentifier& identifier)
: _server(identifier.to_c(),
identifier,
*this)
{
// The Tcp Session Server is created with this instance as
// the event client. Note that default values will be used
// for configurable variables of the message session instances
// created by this session server.
};
ReplyServer::~ReplyServer()
{
};
RTRString sessionName("ReplySession");
RTRTcpServerSession::setDefaultNumberWriteMessages(5);
RTRTcpServerSession::setDefaultWriteMessageSize(200);
_newReply->setSession(_session);
};
RTRBOOL ReplyServer::error()
{
// The reply server's error state is based on the state of its
// TCP session server.
return _server.error();
};
if (!_replyServer.error())
{
cout<<endl<<"Reply server ready for requests"<<endl;
return -1;
}
if (_socketId.isValid())
{
// Create the client of the Tcp Client Session.
Request *_requestor = new Request();
RTRTcpClientSession *_session;
RTRTcpClientSession::setDefaultWriteMessageSize(300);
RTRTcpClientSession::setDefaultNumberWriteMessages(2);
RTRTcpClientSession::setDefaultReadBias(3);
// Clean up
delete _session;
_session = 0;
}
}
else
{
cout<<"Socket identifier invalid : "<<_socketId.errorText()<<endl;
}
}
else
cout<<"Usage: "<<argv[0]<<" host_name service_name"<<endl;
return -1;
}
4.8 Support
This cluster describes the SFC "infrastructure" models:
• connection classes (section 4.8.1)
• service pool factory classes (section 4.8.2)
• the event loop abstraction and implementation (section 4.8.3)
• configuration classes (section 4.8.5)
• event logging classes (section 4.8.6)
• command line classes (section 4.8.7)
• international character support (section 4.8.8.7)
4.8.1 Connection
4.8.1.1 Overview
The connection classes provide a way to control and monitor the connection between SFC and a
market data infrastructure. The abstract base class for connections is RTRMDConnection. Market data
connections are created internally by most market data services and by service pool factories (section
4.8.2), so SFC applications typically do not have to use them directly. Connection classes can be
useful in the following circumstances.
• The application needs to monitor the status of the connection. A class which inherits from
RTRMDConnectionClient can register to receive status and information events from a connection.
• Multiple services need to share a connection. The connection can be created and passed into the
constructors of the services. Most of the time, it’s easier to use a service pool factory. However,
this approach is especially useful for sharing a RTRTIBConnection between consuming and
publishing services, since publishing services are not maintained by service pool factories.
• The application needs fine-tuned control over connection parameters. Some parameters can be
controlled through parameters on services or service pool factories. Most parameters can also be
controlled through configuration. The connection classes can be used to directly set the mount
parameters without using a configuration file. All mount parameters must be set before calling
connect(), which puts the connection in an initialized state.
4.8.1.2 Implementation
SFC includes three implementations of RTRMDConnection: RTRSSLConnection,
RTRSSLConnectionServer, and RTRTIBConnection. Each implementation class has unique mount
parameters and connection values that are type specific. See the SFC Reference Manual for details on
these parameters.
When entitlements are enabled, RTRTIBConnection represents both the RVD connection and the
connection to the DACS daemon. An entitled TIB connection is only connected when both connections
have been established. RTRTIBConnection can connect to a Rendezvous network using either the
SASS2 protocol or the SASS3 protocol. By default, it uses RTRTIBConnection::SASS3. It can be
changed on a per connection basis using:
RTRTIBConnection::protocol(RTRTIBConnection::Protocol p)
4.8.1.3 Examples
The following code shows how to create a TIB connection. Most of the connection parameters can be
set in the constructor.
RTRObjectId _instanceId("appId");
RTRTIBConnection _connection(_instanceId, "TIBConnection",
service, network, daemon);
_connection.connect();
Additional connection parameters are needed when using a TMF in an RMDS infrastructure (Appendix E).
This second example shows how to create a TIB connection with a TMF session. All of the TMF
session parameters can be set through the setUpdateSession() method.
RTRObjectId _instanceId("appId");
RTRTIBConnection _connection(_instanceId, "TIBConnection",
service, network, daemon);
_connection.setUpdateSession("7502", "", ""); // TMF update session parameters
_connection.connect();
This third example shows how to create an SSL connection. Since SSL has more connection
parameters, they cannot be set on the constructor. Instead, they are set individually before calling
connect().
RTRObjectId _instanceId("appId");
RTRSSLConnection _connection(_instanceId, "sslDispatcher");
_connection.userName("jsmith");
_connection.serverList("sinkdistmach1 sinkdistmach2");
_connection.port(8102);
_connection.connect();
4.8.2 Service Pool Factory
4.8.2.1 Overview
Service pool factories make it easier to create and manage the life-cycle of market data connections,
FID databases, services, and service pools. A service pool factory is responsible for creating and
destroying its connection, services, and pools. The abstract base class for service pool factories is
RTRMDServicePoolFactory.
The RTRMDServicePoolFactory is responsible for specifying the service pool factory interface and
generic configuration. From this service pool factory, a program can retrieve RTRecord, Page,
RTPage, and TimeSeries service pools. The pools are created with lazy initialization. Since a factory
has a single connection, all of the pools use that same, shared connection. The service pool factory
classes follow the Abstract Factory pattern [6]. RTRMDServicePoolFactory has two
infrastructure-specific subclasses: RTRSSLServicePoolFactory and RTRTIBServicePoolFactory.
The Composite pattern [6] is used by RTRCompositeServicePoolFactory to provide a way of merging
service pools from multiple factories into a set of pools. For example, this feature can be used to
automatically create a single RTRRTRecordServicePool that contains services from two SSL and one
TIB connection. If there is a service name conflict, services are included in the composite pool on a
first-come, first-served basis. Conflicts are also logged.
4.8.2.2 Implementation
While the abstract class for service pool factories has an infrastructure-agnostic interface, some
implementation details should be considered.
• On Triarch, services can be dynamically discovered. The enableDynamic*ServicePool
configuration variables on RTRMDServicePoolFactory turn this functionality on and off. By
default, dynamic discovery is enabled by RTRSSLServicePoolFactory. With RTIC (SASS2),
however, all service names must be known by the application, so these values are disabled and
ignored by RTRTIBServicePoolFactory.
• Since services cannot be dynamically discovered on RTIC (SASS2), they must be specified through
configuration or by calling one of the install*Service(const char*) methods. These methods are
generic, so they can also be used to add services when dynamic discovery is disabled on Triarch, or
to ensure that certain services are listed first in the pool.
• Page stream services are not available on RTIC (SASS2), so rtPageServicePool() returns 0. In
general, the return values of pageServicePool(), timeSeriesServicePool(), and
rtRecordServicePool() should always be checked for null before being used (see the sketch
following this list).
• RTRSSLServicePoolFactory and RTRTIBServicePoolFactory each have some additional
configuration variables that can be used to customize their connections.
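As a minimal sketch of the null checks recommended above (the factory construction mirrors the SSL
example in the next section; the error handling is illustrative only):
RTRObjectId appId("appId");
RTRSSLServicePoolFactory factory(appId, "factory");
// Pools that are not supported on a given infrastructure are returned as null
// pointers, so check each pointer before using it.
RTRRTRecordServicePool *recordPool = factory.rtRecordServicePool();
if (recordPool)
{
// Safe to add pool clients or look up services here.
}
RTRRTPageServicePool *rtPagePool = factory.rtPageServicePool();
if (!rtPagePool)
cout << "Page stream services are not available on this infrastructure" << endl;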
4.8.2.3 Examples
The following example code shows how SSL and TIB service pool factories are created and how a
composite factory can be used:
RTRObjectId instanceId("appId");
// At this point, none of the service pools exist yet because they are
// created with lazy initialization.
tibFactory.installRTRecordService("RSF");
// A RTRTIBRTRecordService named RSF is created and added to the new
// RTRRTRecordServicePools in _tibFactory and _factory.
More Examples
The connection and factory models and the single service constructors were designed for both
flexibility and ease of use. While the wide variety of these examples may seem complicated on the
surface, they are shown here to demonstrate the flexibility of SFC instantiation. Please keep in
mind that, for most purposes, a factory or single service constructor will work with minimal setup.
These examples may use the following includes and declarations:
#include "rtr/objid.h" // RTRObjectId
#include "rtr/tibrtsvc.h"// RTRTIBRTRecordService
#include "rtr/sslrtrs.h" // RTRSSLRTRecordService
#include "rtr/tconnect.h"// RTRTIBConnection
#include "rtr/sconnect.h"// RTRSSLConnection
#include "rtr/sslconns.h"// RTRSSLConnectionServer
#include "rtr/tibsplf.h" // RTRTIBServicePoolFactory
#include "rtr/sslsplf.h" // RTRSSLServicePoolFactory
#include "rtr/fldtossl.h"// RTRRTFieldToSSLRecordService
#include "rtr/fldtotib.h"// RTRRTFieldToTIBRecordService
RTRObjectId appId("appId");
1. Single SASS3 service
RTRTIBRTRecordService service(appId, "RSF");
2. Single SASS3 service with mount parameters
RTRTIBRTRecordService service(appId, "RSF", "7501", "tcp:rvdhost:7500");
3. Single SASS2 service
RTRTIBConnection::DefaultSassVersion = RTRTIBConnection::SASS2;
RTRTIBRTRecordService service(appId, "RSF");
4. Single SASS2 service with mount parameters
RTRTIBConnection::DefaultSassVersion = RTRTIBConnection::SASS2;
RTRTIBRTRecordService service(appId, "RSF", "7501", "", "tcp:rvdhost:7500");
5. Single SSL service, using sslapi.cnf or ipcroute for mount location
RTRSSLRTRecordService service(appId, "IDN_SELECTFEED", "");
6. Single SSL service, customizing mount location
RTRSSLRTRecordService service(appId, "IDN_SELECTFEED", "sinkDistHost1
sinkDistHost2");
7. Customizing connection for single SASS3 service
RTRTIBConnection connection(appId, "tibconnection", "7501");
connection.userName("mylogin");
connection.connect();
RTRTIBRTRecordService service(appId, "RSF", connection);
8. Customizing connection for single SASS3, TMF service
RTRTIBConnection connection(appId, "tibconnection", "7501");
connection.setUpdateSession("7502", "", "");
connection.connect();
RTRTIBRTRecordService service(appId, "RSF", connection);
9. Customizing connection for single SASS2 service
RTRTIBConnection connection(appId, "tibconnection", "7501");
connection.enableEntitlements(RTRFALSE);
connection.protocol(RTRTIBConnection::SASS2);
connection.connect();
RTRTIBRTRecordService service(appId, "RSF", connection);
10.Customizing connection for single SSL service
RTRSSLConnection connection(appId, "sslconnection");
connection.userName("mylogin");
connection.connect();
RTRSSLRTRecordService service(appId, "IDN_SELECTFEED", connection);
11. Two SASS3 services from factory
RTRTIBServicePoolFactory factory(appId, "factory");
// the next two lines are not needed if the config file includes
// *appId.factory : RSF, RDF
factory.installRTRecordService("RSF");
factory.installRTRecordService("RDF");
RTRRTRecordServicePool *pool = factory.rtRecordServicePool();
RTRRTRecordService *s1 = pool->service("RSF");
RTRRTRecordService *s2 = pool->service("RDF");
12.Two SASS2 services from factory
// The next two lines could be replaced with
// RTRTIBConnection::DefaultSassVersion = RTRTIBConnection::SASS2;
RTRTIBConnection connection(appId, "tibconnection", "7501");
connection.protocol(RTRTIBConnection::SASS2);
RTRTIBServicePoolFactory factory(appId, "factory", connection);
// the next two lines are not needed if the config file includes
// *appId.factory : RSF, IDN_RDF
factory.installRTRecordService("RSF");
factory.installRTRecordService("IDN_RDF");
RTRRTRecordServicePool *pool = factory.rtRecordServicePool();
RTRRTRecordService *s1 = pool->service("RSF");
RTRRTRecordService *s2 = pool->service("IDN_RDF");
13.Two SSL services from factory
RTRSSLServicePoolFactory factory(appId, "factory");
// the next two lines are not needed if the config file includes
// *appId.factory : IDN_SELECTFEED, IDN_RDF
factory.installRTRecordService("RSF");
factory.installRTRecordService("IDN_RDF");
RTRRTRecordServicePool *pool = factory.rtRecordServicePool();
RTRRTRecordService *s1 = pool->service("IDN_SELECTFEED");
RTRRTRecordService *s2 = pool->service("IDN_RDF");
OR
// AppRecSrvcPoolClient client, subclass of RTRRTRecordServicePoolClient
RTRSSLServicePoolFactory factory(appId, "factory");
RTRRTRecordServicePool *pool = factory.rtRecordServicePool();
pool->addClient(client);
// Then, in the client's processRTRecordServiceAdd(RTRRTRecordService& srvc) callback:
{
if (srvc.name() == "IDN_SELECTFEED")
s1 = &srvc;
else if (srvc.name() == "IDN_RDF")
s2 = &srvc;
}
14.Two SASS3 services with two connections
RTRTIBRTRecordService s1(appId, "RSF");
RTRTIBRTRecordService s2(appId, "RDF");
15.Two SASS2 services with separate connections
RTRTIBConnection::DefaultSassVersion = RTRTIBConnection::SASS2;
RTRTIBRTRecordService s1(appId, "RSF");
RTRTIBRTRecordService s2(appId, "RDF");
16.Two SSL services with two connections
RTRSSLRTRecordService s1(appId, "IDN_SELECTFEED", "");
RTRSSLRTRecordService s2(appId, "IDN_RDF", "");
17.Two SASS3 service with one connection
RTRTIBRTRecordService s1(appId, "RSF");
RTRTIBConnection &connection = s1.connection();
RTRTIBRTRecordService s2(appId, "RDF", connection);
18.Two SASS2 service with one connection
RTRTIBConnection::DefaultSassVersion = RTRTIBConnection::SASS2;
RTRTIBRTRecordService s1(appId, "RSF");
RTRTIBConnection &connection = s1.connection();
RTRTIBRTRecordService s2(appId, "RDF", connection);
19.Two SSL services with one connection
RTRSSLRTRecordService s1(appId, "IDN_SELECTFEED", "");
RTRSSLConnection &connection = s1.connection();
RTRSSLRTRecordService service(appId, "IDN_RDF", connection);
20.Single SSL pub service
RTRRTFieldToSSLRecordService s1(appId, "MY_SRVC1");
// ...cleanup
}
• The RTRWindowsNotifier can be used with MFC. If MFC is used and the event loop is to be
compiled into a DLL, use libmfcml.lib.
Timers have a minimum resolution set by the operating system. The default minimum resolution is
10ms. Some operating systems round timer values, so setting a timer for 23ms may actually set a timer
of 30ms.
Applications can set a repeating timer by calling activate() inside processTimerEvent(). This schedules
a timer event to execute a user-determined amount of time in the future. If the timer set inside
processTimerEvent() is a null timer, it will automatically be set to the minimum resolution time
(10ms). This is also true for timers set inside a Windows event callback.
NOTE: For performance reasons, the SFC notifier implementations only check the current time
once through a notifier loop. This means that timers set at the beginning and end of a callback will
be set to fire at the same system time.
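As a minimal sketch of the repeating timer technique described above (the second argument to
setTimerOffset() is passed as 0, as in the examples which follow, since its meaning is not described
here):
#include <iostream.h>
#include "rtr/timercmd.h"
class RepeatingTimer :
public RTRTimerCmd
{
public:
// Constructor
RepeatingTimer()
{
setTimerOffset((long)1, (short)0); // first expiration in one second
activate();
}
void processTimerEvent()
{
cout << "tick" << endl;
setTimerOffset((long)1, (short)0); // re-arm for the next interval
activate(); // schedules the next timer event
}
};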
Each of the examples which follows uses a different version of the notifier and instantiates one
instance of IOTimerClient.
#include "rtr/timercmd.h"
#include "rtr/ioclient.h"
class IOTimerClient :
public RTRTimerCmd,
public RTRIOClient
{
public:
// Constructor
IOTimerClient(char *);
// Destructor
~IOTimerClient();
// Event processing
void processIORead(int);
// Invoked by the notifier when the registered file descriptor is readable.
void processTimerEvent();
// Invoked when the timer expires.
protected:
// Implementation attributes
int fd;
};
#ifndef IBMRS
#include <sys/unistd.h>
#endif
#include <unistd.h>
#include <fcntl.h>
#include <iostream.h>
#include <stdlib.h>
#include "rtr/rtrnotif.h"
#include "iotimerclient.h"
IOTimerClient::IOTimerClient(char *fnm)
{
fd = open(fnm, O_RDWR);
RTREventNotifierInit::notifier->addReadClient(*this, fd);
cout << "Enter time:" << flush;
}
IOTimerClient::~IOTimerClient()
{
RTREventNotifierInit::notifier->dropReadClient(fd);
}
void IOTimerClient::processIORead(int)
{
char buf[100];
int len = read(fd, buf, 10);
buf[len] = '\0';
int s = atoi(buf);
if (active())
{
cout << "Canceling current event" << endl;
deactivate();
}
if ( s > 0 )
{
cout << "Adding event for " << s << " seconds" << endl;
setTimerOffset(s, 0);
activate();
}
else
{
if ( active() )
deactivate();
RTREventNotifierInit::notifier->dropReadClient(fd);
}
}
void IOTimerClient::processTimerEvent()
{
cout << "HELLO WORLD, AGAIN" << endl << "Enter time:" << flush;
}
#include "iotimerclient.h"
void main()
{
IOTimerClient client("/dev/tty");
RTRSelectNotifier::run();
}
#include <stdio.h>
#include "rtr/xtenimp.h"
#include "iotimerclient.h"
#define XTFUNCPROTO 1
#define MAXLEN 50
i = 0;
XtSetArg(args[i], XtNlabel, string); i++;
XtSetValues(w, args, i);
}
XtAppContext RTRXtNotifier::appContext = 0;
topLevel = XtAppInitialize(
&app_context,
"XClickcount", /* Application class */
NULL, /* Resource Mgr. options */
0, /* number of RM options */
#if (XtSpecificationRelease < 5)
(Cardinal *)&argc, /* number of args */
#else
&argc, /* number of args */
#endif
argv, /* command line */
NULL,
&arg,
0
);
hello = XtCreateManagedWidget(
"click me", /* arbitrary widget name */
commandWidgetClass, /* widget class from Label.h */
topLevel, /* parent widget*/
NULL, /* argument list */
0 /* arg list size */
);
XtRealizeWidget(topLevel);
XtResizeWidget(topLevel, 120, 80, 10);
RTRXtNotifier::appContext = app_context;
IOTimerClient client("/dev/tty");
XtAppMainLoop(app_context);
}
#include "iotimerclient.h"
#define __sys_unistd_h
#define __SIGNAL_H
extern "C" {
#include <xview/xview.h>
#include <xview/panel.h>
#include <xview/openmenu.h>
}
int
selected(Panel_item item, Event *)
{
printf("%s selected\n", xv_get(item, PANEL_LABEL_STRING));
return XV_OK;
}
void
menu_proc(Menu, Menu_item)
{
}
Frame RTRXViewNotifier::baseFrame = 0;
Panel panel;
Menu menu;
Frame base_frame;
RTRXViewNotifier::baseFrame = base_frame;
IOTimerClient client("/dev/tty");
xv_main_loop(base_frame);
}
4.8.5 Configuration
4.8.5.1 Overview
In general, configuring software components is a matter of assigning values to named variables.
Reusable software, and in particular, object-oriented software, makes the problem more complicated.
The primary concern is avoiding name conflicts. Variable names must be chosen without knowledge of
the entire application. Another aspect of the problem is the complex composition of the end product,
and the fact that the same component may appear in many different applications. The SFC provides a
model to address these problems.
The SFC configuration model is derived from the following analysis:
class, but is not very extensible. Suppose TickerMonitor had a descendant called Foo, also
needing configuration. The class identifier should be "TickerMonitor.Foo". Foo can declare its
own instance of RTRObjectId and initialize it appropriately. The problem is that when the method
Foo::processRTRecordServiceAdd is invoked, it will use the class identifier from the ancestor
(TickerMonitor) not the identifier of the descendant (TickerMonitor.Foo).
Solving this problem requires a change in design of the TickerMonitor class. The class identifier should
not be hard-coded. One solution is to provide a constructor argument which has a default value and
use that to initialize a class identifier object. The new constructor for TickerMonitor would be declared
as follows:
TickerMonitor(
const RTRObjectId&,
const char *,
RTRRTRecordServicePool&,
const char* classId = "TickerMonitor");
....
protected:
RTRObjectId _classId;
Then the initialization list needs an entry:
_classId(classId)
The descendant class Foo then invokes the constructor with the fourth argument set to
"TickerMonitor.Foo".
The examples in this manual use this technique. For example, the ticker application (ticker.C) which
creates an instance of TickerMonitor has an application identifier of "ticker". This is passed to the
instance of TickerMonitor along with a unique name, in this case "ticker_monitor". The service pool in
this application can also be configured; its unique name is "pool". This means that the instance of
TickerMonitor has an instance identifier of "ticker.ticker_monitor", and the pool has an instance
identifier of "ticker.pool".
This technique works well in most cases but has some flaws. If two of these applications run on the
same system and use the same configuration file, there is no way to tell them apart. The application
designer may want to ensure uniqueness by providing a command line argument which gives this
application its name. Furthermore, in distributed systems, it may be appropriate to use the hostname
as the first level of identification.
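For illustration, using the configuration file syntax shown in the examples below (the variable name
updateInterval is hypothetical), entries for the ticker application's components might look like:
ticker.ticker_monitor.updateInterval : 5
*updateInterval : 10
The first entry applies only to the instance whose identifier is ticker.ticker_monitor; the wildcarded
entry supplies a value to any other component that queries updateInterval.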
• a default value that is returned if a variable cannot be found in the database that matches either of
the identifiers
Note that the query for "var4" does not provide a default value. The returned variable must therefore
be checked for an error.
#include "rtr/rtxfdb.h"
int main()
{
RTRXFileDb configDb("./config_file");
if (!configDb.error())
{
// Example of full expanded name
RTRConfigVariable var1 = configDb.variable(
"RequestQueue",
"machine2.sink_distributor.request_queue",
"queueSize",
"500");
// Example of wildcarded id
RTRConfigVariable var2 = configDb.variable(
"RequestQueue",
"machine30.sink_distributor.request_queue",
"queueSize",
"500");
// Example of using default value
RTRConfigVariable var3 = configDb.variable(
"SessionManager",
"mySessionClient.session_manager",
"maxSessions",
"30");
// Example of no default and no value
RTRConfigVariable var4 = configDb.variable(
"RrmpManager",
"machine22.V301.sink_distributor.rrmp_manager",
"rrmpAddress");
cout<<"Value of variable 'var1' is '"<<var1.value().to_c()<<"'"<<endl;
cout<<"Value of variable 'var2' is '"<<var2.value().to_c()<<"'"<<endl;
cout<<"Value of variable 'var3' is '"<<var3.value().to_c()<<"'"<<endl;
if (var4.error())
{
cerr << "Error: No value for ‘var4’" << endl;
}
}
else
cout<<"Config db error: "<<configDb.errorText()<<endl;
}
Given the following configuration file entries in the file ./config_file:
machine2.sink_distributor.request_queue.queueSize : 1000
*queueSize : 200
the program output is:
Value of variable 'var1' is '1000'
Value of variable 'var2' is '200'
Value of variable 'var3' is '30'
Error: No value for 'var4'
The value of var1 is 1000 because its fully expanded name matches the first entry exactly. The value
of var2 is 200 because only the wildcarded entry *queueSize matches its name.
Configuration variable var3 shows how the default value is used if a value cannot be found in the
configuration database.
Configuration variable var4 is returned with error() set to True because no value is found in the
configuration database and no default value was provided.
The SFC provides an implementation of RTRConfigDb called RTRRegistryDb which applications can
use to access configuration data in the registry. This implementation provides the RTRConfigDb
interface for reading configuration variables from the database and also provides new functions to
allow writing values for variables into the configuration database.
The convention for storing software configuration information in the registry is to store configuration
values in a tree in the HKEY_LOCAL_MACHINE hive whose key name is of the form:
SOFTWARE\<Vendor>\<Application>\<Version>
The hive and key name may be specified by the application programmer when creating an instance of
RTRRegistryDb (or the default values of HKEY_LOCAL_MACHINE and
SOFTWARE\Reuters\SFC\CurrentVersion may be used). Within this tree, RTRRegistryDb uses two
separate sub-trees with key names Class and Instance to store values for configuration by class and
instance, respectively.
When a configuration variable is accessed from the RTRRegistryDb, the specified class and instance
object ID names for the variable are converted to key names using backslashes to separate the
components of the object ID names. These key names are then used as sub-keys in the registry
configuration tree under the Instance and Class sub-trees.
When a variable is retrieved using the variable() function, the Instance sub-tree is searched before the
Class sub-tree.
int main()
{
RTRRegistryDb configDb(HKEY_LOCAL_MACHINE,
"SOFTWARE\\MyCompany\\MyApplication\\CurrentVersion");
if (!configDb.error())
{
// Retrieve initial configuration value
RTRConfigVariable var1 = configDb.variable("RequestQueue",
"machine2.sink_distributor.request_queue", "queueSize", "500");
cout<<"Value of variable ‘var1’ is ‘";
cout<<var1.value().to_c()<<"‘"<<endl;
NOTE: Unlike file configuration variables, which permit wildcarding, registry configuration variables
must be fully specified. For example, the configuration variable logger.defaultFileAction.selector can be
set in a file as:
*logger*selector: *.debug
but in the registry the fully specified key
\\HKEY_LOCAL_MACHINE\SOFTWARE\MyComp\MyApp\Version\logger\defaultFileAction\selector
must be used.
4.8.6 Event Logging
4.8.6.1 Overview
The event logging cluster addresses the problem of allowing components to generate events without
being concerned with application context. For example, it may not be appropriate for a library
component to arbitrarily print error messages on the standard error device. Application designers may
want those errors to be displayed somewhere else, perhaps in a dialogue box of a windowing system.
The event logging cluster provides an extensible event logging mechanism which allows the
application designer to decide how events are handled.
The class representing an event is RTRMgmtEvent. Components allocate instances of
RTRMgmtEvent and assign it identity, severity and text. The event can then be logged as necessary.
The component generating the event does not determine what to do when the event is created.
"Actions" are objects which decide how to process events. For example, a file action could write an
event to a log file, and a stderr action could write a message on the standard error device.
An application has a single instance of an event router. An event informs this centralized component
when it needs to be distributed, and the router then passes the event to all actions that have registered
with it.
Actions choose which events they will process with a filter. An action’s filter keeps track of a list of
"selection pairs". A selection pair consists of a component name and a severity level. The selection
pair list defines the log events that the log action instance will process. To generate a match, the
component name of the selection pair must match the identity of the component generating the event.
In addition, the severity level of the selection pair must match the severity level of the event.
The identity used in generating log events is an arbitrary value set by the designer of the component.
The technique used in assigning instance identities in configuration (section 4.8.5.3) is also
appropriate for logging events. Severities have one of the following values (listed from lowest severity
to highest):
1. Debug
2. Info
3. Notice
4. Warning
5. Error
6. Critical
7. Alert
8. Emergency
NOTE: An action logs all events at the specified severity level and higher. So, if the
Info severity level is specified, all events except Debug events will be logged.
Through the use of wild-carded selection pairs, actions can be configured to trap log events in a
number of ways:
• from a particular component with any type of severity
• from all components with a specific severity (and higher)
• from a particular component and severity pair
• or from all components and all severity types
int main()
{
RTRXFileDb configDb("./config");
RTRConfig::setConfigDb(configDb);
#include "rtr/mgmtevnt.h"
MyClass::doSomething()
{
RTRMgmtEvent logEvent;
• Configuration file for a system with a default file action and a default stderr action:
logger.install_stderr_logger: True
By adding this line to the configuration file, the default logger will automatically create and install a
default stderr action to go along with the default file log action. Since there are no special
configurations for either the file action or the stderr action, some special default characteristics
specific to these default log actions are put into place. Specifically, the stderr action and file action
will each process log events for any component name and for severity levels "info" to
"emergency".
The other default configuration options for default logger’s file action are shown in the following
table.
max_bytes integer 10000 The file size which, when exceeded, will cause the file to be saved to a ".old" file
The other default configuration options for both default logger’s stderr action and default logger’s
file actions are shown in the following table.
priority integer 100 Sets the priority of this action with respect to other actions
• Configuration file for a system with only a default file action and with selector overrides and
overrides for log file name and log file size:
*logger.defaultFileAction.file: /var/triarch/my_log_file
*logger.defaultFileAction.max_bytes: 100000
*logger.defaultFileAction.selector: *.info
This configuration causes the default logger to create and install a file action that uses the file
/var/triarch/my_log_file to log event text, sets the maximum number of bytes the file will contain
to 100,000, and accepts log events that are from any component and have a severity of "info" or
higher.
• Configuration file for a system with only a default stderr action with selector overrides:
*logger.install_file_action: False
*logger.install_stderr_action: True
*defaultStdErrorAction.selector: *.*
With this configuration, the default logger will automatically create and install a stderr action that
will accept log events that are from any component and have any severity. Notice that the "*"
notation is used with the instance ID of the stderr action configuration.
• Configuration file for a system with win32 event logging:
*install_system_action: True
This configuration allows you to monitor SFC’s events via Windows event viewer.
4.8.7 Command Line
4.8.7.1 Overview
The command line classes are utility classes that make it easier to specify and parse command line
arguments. They are used extensively in the example programs. All of the command line classes
begin with RTRCmdLine-. The base class representing the entire command line is RTRCmdLine. Only
one RTRCmdLine will be in an application, so the static variable RTRCmdLine::cmdLine should be
used to access it.
All of the command line arguments are descendants of RTRCmdLineArg. They can be strings,
numerics, flags, or a list. Each command line variable consists of a tag, a name, and a purpose. For
some types of variables, a default value must be specified. By default, all variables are required,
although variables can be made optional using an extra argument in the constructor. When a variable is
specified on the command line, its tag is prefixed with '-'. See section 4.8.8.6 for more
information on the types of command line arguments. See the SFC Reference Manual for more detailed
information about the classes that implement the various types of command line arguments.
The RTRCmdLineFlag -? is always included in RTRCmdLine. When the -? command line argument is
specified, usage information for the application will be printed. This information is derived from the
tags, names, and purposes specified in the command line variable constructors.
4.8.7.2 Usage
Five typical steps are taken when using the command line model.
1. Declare the static RTRCmdLine::cmdLine.
2. Declare and specify all of the command line variables.
3. Call RTRCmdLine::cmdLine.resolve(argc, argv).
4. Check the command line proper usage.
5. Access each variable’s typed data.
The following example code shows the steps listed above.
#include <iostream.h>
#include "cmdline.h"
// other includes for the command line argument classes
// (see the SFC Reference Manual for the classes and their constructors)

RTRCmdLine RTRCmdLine::cmdLine; // Step 1: create the static first

// Step 2: declare and specify the command line variables here, e.g. a flag
// with tag "d", a string with tag "config", a numeric with tag "i", and a
// list variable named "files" that collects the trailing file names.
// (Each constructor takes a tag, a name, and a purpose.)

int main(int argc, char **argv)
{
    // Step 3: resolve the command line
    RTRCmdLine::cmdLine.resolve(argc, argv);

    // Step 4: check for proper usage
    if ( RTRCmdLine::cmdLine.error() )
    {
        RTRCmdLine::cmdLine.printUsage(cerr, argv[0]);
        return -1;
    }

    // Step 5: access each variable's typed data, e.g. the values
    // collected in the "files" list variable declared in step 2
    cout << "outFiles are ";
    const RTRDLinkList<RTRCmdLineData, RTRDLink0>& l = files.values();
    for ( RTRCmdLineData *d = l.first(); d; d = l.next(d) )
    {
        cout << *d;
        if ( d != l.last() )
            cout << ",";
    }
    cout << endl;
    return 0;
}
For this example, the command:
a.out -d -config config.cnf -i 24 file1.txt file2.txt file3.txt
would result in debugging on, config.cnf used as a configuration file, 24 as the instance number, and a
file list that includes file1.txt, file2.txt, and file3.txt. Note that the ‘-d’ command line flag does not have a
sub-argument. When a command line flag is followed immediately by another command line
argument, its sub-argument is optional.
The command line arguments can be in any order. The following command is equivalent to the first.
a.out -config config.cnf -d true file1.txt file2.txt file3.txt -i 24
Note that all extra arguments (file*.txt) are added to the list. The ‘true’ sub-argument for -d is needed to
make sure file1.txt is not picked up as the optional sub-argument of -d.
4.8.8.1 Connection
• RTRMDConnection - This class is the abstract base class for implementing connections to
market data infrastructure.
• RTRMDConnectionClient - This class defines the interface by which a subclass can receive
state and informational events about an RTRMDConnection.
• RTRTIBConnection - This class encapsulates a Rendezvous session. RTRTIBConnection
also listens to all advisory messages.
• RTRSSLConnection - This class encapsulates a connection to an upstream SSL component.
• RTRSSLConnectionServer - This class encapsulates a well-known listen port.
RTRMDConnectionClients are notified if a DACS connection is lost or established.
• RTRTimerCmd - This is the abstract base class for timers, i.e. commands which will be executed
after a specified interval.
• RTRSelectNotifier - This is an implementation of a control loop based on the select() system
call.
• RTRXtNotifier - This is an implementation of a control loop which uses the facilities provided by
the Xt library.
• RTRXViewNotifier - This is an implementation of a control loop which uses the facilities provided
by the XView library.
• RTRWindowsNotifier- This is an implementation of a control loop which uses facilities of the
Windows API.
4.8.8.4 Configuration
• RTRConfigDb - This is the abstract base class for a database of configuration variables.
Variables are accessed by means of three keys: the class identifier of the requesting component,
the instance identifier of the requesting component, and the name of the variable. An additional
parameter, the default value to be assigned to the variable, is optional.
• RTRConfigVariable - This descendant of RTRString (via RTRExternalValue) is the basic unit of
configuration. A configuration variable accessed from the database will be in an error state if a
value has not been explicitly configured and no default is provided. A configuration variable
provides various mechanisms by which the underlying data may be converted to some other
form.
• RTRObjectId - This class is used for both class and instance identifiers. It can be thought of as
a compound string.
• RTRExternalValue - This descendant of RTRString is an ancestor of RTRConfigVariable and
provides the means to transform the underlying data into other representations, e.g., into a list of
values based on a given delimiter.
• RTRListOfExternalValue - This class is the result of interpreting a configured value as a
delimited list of values.
• RTRConfig - This class provides static functions which allow access to a "global" configuration
database. Unless otherwise specified by the application, the database to which access is
provided is of type RTRDefaultConfigDb.
5.3 Entitlements
The SFC has the ability to interact with a permissioning system to ensure that users can only access
data for which they are entitled. By default, this capability is disabled. If a permissioning system is
available at your site, your SFC applications will need to be enabled to take advantage of entitlements
support. See section 5.3.2 for details on enabling entitlements.
There are several different types of entitlements that are supported by the SFC. Following is a
description of each type:
• User Based Entitlements - This type of entitlement determines if the user is allowed to gain
access to any infrastructure resources. The application is not allowed to access any services until
the SFC has successfully “logged in” with the permissioning system. SFC services will remain in
the stale state until the login occurs. Each user has a profile which determines a user’s level of
entitlements. A username is passed to the permissioning system to match the application with the
proper entitlements profile. See section 5.3.3 for details on how the username is set in an SFC
application.
• Subject Based Entitlements (SBE) - This type of entitlement determines if the user is allowed to
access data from or publish/insert data to a particular data service and data item based on the
user’s profile. Examples of services are specific feed handlers or third party publishing
applications. An example of a data item is the quote data for the symbol “IBM” from the NYSE
exchange. Just before the item is to be requested from the infrastructure, the subject based
entitlement check will be made. If the check fails, an Inactive event is propagated to the
subscriber indicating that entitlements are denied. In the case of publishing applications, a
message is logged indicating that the item cannot be published. In the case of inserts, an
InsertNak event is propagated to the insert initiator.
• Content Based Entitlements (CBE) - This type of entitlement determines if the user is allowed to
access a particular data service and data item based on entitlement data specific to that data
item. The entitlement data typically contains codes that map into various entitlement categories,
such as products, exchanges or vendors which are set up in the permissioning system database.
The permissioning system is able to use the codes to check whether a given user is allowed to
access the data item based on their entitlements in each category. The entitlement data is
received from the infrastructure in conjunction with an image. The content based entitlement
check is done before the image is forwarded to the subscriber. If the check fails, an Inactive event
is propagated to the subscriber indicating that entitlements are denied.
• Publishing Entitlement Data - This is not so much an entitlement type as it is a mechanism used
by publishing applications to provide entitlement data to downstream infrastructure clients for use
in CBE checks. Entitlement data is provided before the initial image so that CBE checks can be
performed before image data is processed. See section 4.2.2.11, Publishing Entitlement Data, for
details. For information on how to create DACS access locks, refer to Appendix G, DACS
LIBRARY FUNCTIONS.
In most cases, the entitlement checks occur within the SFC library at the desktop using profile data
that is obtained from the permissioning system at start-up. The profile data is cached in the SFC library
for quick access. In the case of SSL subscription, the entitlement checks occur in the Triarch
infrastructure.
The DACS permissioning system is supported with the RMDS infrastructure (RRDP Market Data Hub
and RMDS Distribution layer using SASS3) and the Triarch infrastructure.
By default, data entitlements are enabled for SSL publication and RMDS (SASS3), but they can be
disabled either through SFC configuration files or by application code through methods available in the
connection classes.
To enable or disable entitlements through configuration, use the *enableEntitlements parameter in the
SFC configuration file with a value of "True" or "False" (case insensitive). (See sections 5.5.10.1 and
5.5.10.2.)
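For example, an entry along the following lines in the SFC configuration file would disable entitlements (a sketch only; any instance scoping of the variable is omitted here):
*enableEntitlements : False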
To enable or disable entitlements programmatically, explicitly create a connection and pass it to the
service or service pool factory:
RTRObjectId appId("appId");
RTRTIBConnection connection(appId, "connection");
// to disable entitlements, pass RTRFALSE
// instead of RTRTRUE on the following line
connection.enableEntitlements(RTRTRUE);
connection.connect();
RTRTIBRTRecordService service(appId, "RSF", connection);
The RTRMDConnection base class has an entitlementsEnabled() state attribute. This attribute has
different semantics for the SSL implementation than for the RMDS or TIB implementations. On RTIC-based
RMDS, this attribute reflects whether entitlement checking will be performed in the SFC or not. For SSL
subscription, this attribute has no meaning since all subscription entitlement checks are performed in the
Triarch infrastructure.
The enableEntitlements(RTRBOOL) method is available in both the RTRTIBConnection and
RTRSSLConnectionServer classes to enable or disable entitlement checking for that connection.
Entitlements can also be enabled through configuration with the *enableEntitlements variable.
When entitlements are enabled for TIB implementations, the SFC services will remain in a Stale state
and will not request data until the connection has successfully logged the user into the permissioning
system. The login process is initiated when the connect() method is invoked on an instance of
RTRTIBConnection or RTRSSLConnectionServer.
When entitlements are enabled for TIB implementations, the system will block until all entitlements
information is loaded in from the permissioning system. This entitlement profile from the permissioning
system is loaded after the data dictionary is complete. This means the connect() call will block and
SFC services will not be available during that time.
The User_name field identifies a particular person who has been assigned a unique name for access
to the infrastructure (e.g., “E_DICKINSON”). The SFC does not attempt to verify the supplied name
beyond ensuring that the name corresponds to an actual user. The length of the User_name field must
not exceed 255 characters. If this field is omitted, the SFC assigns the default user name which is the
name that the owner of the current process used to log into the operating system. This default is
suitable for most applications. The explicit specification of the user name is needed mainly for
applications that handle multiple users in a single process.
The Application_ID field provides the identity of the SSL application. The application ID must be an
integer in the range 1 to 511. The SFC will verify that the application ID is in this range.
Application IDs are assigned by the Reuters International Product Manager for Entitlements. A unique
ID should be obtained if the application is to be used at more than one client site. For site-specific
applications, select any ID in the range 257-511. If this field is omitted, the SFC assigns the default
application ID number, which is 256.
In the case of a single user running the same application on the same machine, the Position field can
be used to uniquely identify the sink application. The length of the Position field must not exceed
239 characters (255 minus the 16 bytes that must be reserved for the IP address, which is added to this
field by the SFC). If this field is omitted, the SFC assigns the default Position “net”. The Position is used in
conjunction with the IP address of the machine on which the process is running to check entitlements
based upon physical location.
Some examples of valid UserName strings are as follows:
1. “V_Kandinsky+7+13”
This means the user V_Kandinsky is running application number 7 on position 13 of this
machine.
2. “C_Monet+7”
User C_Monet is running application number 7; the position takes the default value.
3. “+8”
The user name takes the default value. The application ID is 8. The position takes the default
value.
4. “”
The user name, application ID, and position all take the default values. This is the most
common usage.
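If the composite string is supplied through configuration rather than code, it would typically go in the *username variable described in sections 5.5.10.1 and 5.5.10.2; the following entry is an illustrative sketch only, assuming that variable accepts the composite format, with example user name and application ID values:
*username : E_DICKINSON+257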
WARNING: If a mapping does not exist for the services associated with a given DACS access lock,
the entitlement data will not be published and an error message will be logged in the SFC log file.
If your publishing application is creating its own DACS access locks, your application must pass to the
DACS access lock creation function the same service type (also called service ID) that is used by the
DACS system for your published service name. This service type is embedded into the access lock
and is used by subscription applications to make CBE checks against that service. The administrator
for the DACS system sets the service ID.
Furthermore, the service mapping table in the SFC must contain a mapping for that service ID to the
name of the published service. This is required by the SFC so the access lock can be translated into a
format acceptable by the Source Distributor. Your publishing application has several options for setting
the service ID/service name mapping entry.
• Allow the SFC to automatically add the mapping entry via configuration variable.
This will only work if the DACS access locks were created with a service ID that maps to the
service name that is being published to the Source Distributor. The SFC will look for the entry
"*<service_name>*serviceId : <id>" in the SFC configuration file, where <service_name> is the
name of your published service and <id> is the service ID associated with that service name in
the DACS system and in the Source Distributor.
An example configuration entry is "*MY_SOURCE*serviceId : 123". This same service ID must
also be set in the Source Distributor configuration file for the service name "MY_SOURCE".
• Add one or more service ID / service name mappings to the SFC directly.
This method should be used if your application does not use the SFC configuration file or if your
DACS access locks contain a service ID that does not map to the same service name that is
being published. For instance, if you create an access lock or compound lock that contains the
service ID 13033 which represents service name "IDN_RSF", but your application is publishing to
service name "MY_SOURCE" which has a service ID of 123 (as determined in the DACS system
and the Source Distributor configuration file), then your application needs to add a new SFC map
entry for service ID 13033 and service name "IDN_RSF".
The following code shows how this mapping entry is added to the SFC from application code:
#include "rtr/sslinterface.h"
entry = RTRSSLInterface::ServiceIdMap.getPair(serviceName);
if (entry == 0)
{
cout<< "Adding service map entry for "<< serviceName;
cout<< " and id "<< serviceId << endl;
RTRSSLInterface::ServiceIdMap.addEntry( serviceName, serviceId );
}
else
{
cout<< "Already have entry for " << entry->serviceName();
cout<< " with service id " << entry->serviceId() << endl;
}
NOTE: The service ID used in the SFC service name mapping table must be the same as the service
ID set in the DACS system and the service ID set in the Source Distributor configuration file for
this published service name. If not set correctly, the published entitlement data will not make it
through the RRDP Market Data Hub.
NOTE: Publishing a DACS access lock to an RTIC does not require a mapping entry.
machine as the application. Once the connection is established, the SFC will attempt a login to the
DACS permissioning system using the information available in the username (see section 5.3.3). If this
succeeds, a profile for that user is passed back to the SFC and cached there. At this point all data
services will become available in the normal fashion, depending on availability from the infrastructure.
As new items are requested, items are published, or inserts are made, the SBE and CBE checks are
invoked and the results handled as described at the top of this section.
If exceptional conditions occur, the SFC will handle them as follows:
Connection to DACS Daemon is lost - The SFC will automatically attempt to reconnect to the DACS
Daemon at 60-second intervals. The interval is configurable via the DACS_retry_connection_interval
configuration parameter. While the connection is down, entitlements checks will continue using the
existing user profile cached in the SFC. Furthermore, by default, no interruption in data service will be
seen. After the reconnection to DACS is successful, all entitlements will be re-checked in case the user
profile has been changed.
Login to DACS Daemon fails - The SFC will automatically retry the login at 1-second intervals. This
interval is configurable via the DACS_user_login_retry_interval config parameter. While the SFC is not
successfully logged into the DACS permissioning system, no access will be allowed to the infrastructure
services. All subscription services will remain in the Stale state during this time. Publishing applications
will not publish records to the infrastructure during this time because the connection server will not be
activated until the login succeeds. All insert services will be in the Stale state until the login succeeds.
DACS system changes the profile for a user - The SFC will receive an event from DACS to re-verify all
items. Both SBE and (where feasible) CBE checks will be made. Subscription items that fail the
entitlements check will be transitioned to the Inactive state. All publishing items that fail the entitlement
check will stop propagating data to the infrastructure and log a message to the SFC log file. Note that
publishing applications may not receive a notification of this event except for an indication that no more
clients are watching the item. All services that fail the check will be transitioned to the Stale state, as will
all items associated with the service.
DACS system un-entitles a user - The SFC will receive an event indicating the user login is no longer
valid. All data services will be transitioned to the Stale state. All data items will be transitioned to the
Stale state with appropriate text indicating that the user login failed. No attempts are made to re-login.
It is up to the application to do this via the RTRMDConnection::connect() method.
For Windows [NT, 2000, XP or 2003], an environment variable can be set at the system level by using
the normal Windows facility for setting environment variables.
Variable Name: DACSAPI_THREAD_AWARE
Variable Value: true
When SFC does not have an item cached, the time that it takes to provide data images via sync events
depends on the infrastructure and on SFC’s request queue. If the infrastructure does not have items
cached, then the image rate will be slower and SFC’s CPU utilization will be lower. For
information on how the request queue affects image rates, see section 5.4.4.
• RTRField::set(double) and RTRField::set(int) use sprintf to write numbers into a field’s string
storage.
• RTRField::set(const char *, int) searches the data for character repetition, partial field update,
and RMTES escape sequences. Alphanumeric Marketfeed data may include these escape
sequences. This method is less efficient than setData().
• RTRField::setFromExpandedValue(const char *) is a convenience method that converts an
expanded string to an enumeration integer and then writes the integer to the field’s string storage.
• RTRField::use(char *) is a utility method that can be used for rippling fields efficiently. The char *
argument is swapped for the field’s internal storage. The old storage is returned and the
application must be sure to clean up the field. No data copying is done, so this method is very
efficient. The following code shows how the method can be used to ripple fields:
// RTRRTRecord _record
// RTRRTField * fld; // a field that ripples
RTRRTField * rfld = fld;
char * tmpfld = (char *) fld->to_c();
while (rfld->rippleDefinition())
{
rfld = _record.field(rfld->rippleDefinition()->fid());
if (!rfld)
break;
tmpfld = rfld->use(tmpfld);
}
fld->use(tmpfld);
fld->setData("new data");
After setting a field’s data for an update, the field must be added to a RTRRTFieldUpdateList. This
class includes several methods for adding the field: putFieldByName(const char*),
putFieldByFid(int), and putField(RTRRTField&). putFieldByName(const char *) looks up the FID
and calls putFieldByFid(int), which searches for the field in the record and then calls
putField(RTRRTField&). So putField(RTRRTField&) is the most efficient of the three methods.
RTRRTFieldUpdateList and RTRRTRecordImpl maintain fields in a sorted array. So fields should be
added using putField(RTRRTField&) in FID order whenever possible to avoid resorting.
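As a rough sketch of that guidance (the record and update-list instances, and the FID values, are assumptions for illustration; how those instances are obtained is not shown here):
// RTRRTRecord & record;            // a record whose fields are being updated
// RTRRTFieldUpdateList & updates;  // the update list for that record
RTRRTField * f1 = record.field(22); // illustrative FIDs, looked up once
RTRRTField * f2 = record.field(25);
if ( f1 && f2 )
{
    f1->setData("101.25");
    f2->setData("101.27");
    updates.putField(*f1);   // add in ascending FID order to avoid
    updates.putField(*f2);   // resorting the sorted field array
}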
client tries to recover, it will again request all of the items, thereby repeating the problem. This situation
is often called thrashing.
Sometimes, requests for valid market data items take a longer period of time, especially when they are
not cached by the infrastructure. SFC uses request timing to ensure that a few slow pending items do
not keep other queued requests waiting an unreasonable amount of time.
Many of the properties, such as queue sizes and request time-out intervals, can be customized. Most
client applications can use the defaults. However, depending on factors such as client machine speed
and network bandwidth, some applications may see better behavior by changing the request
properties.
The request queuing and timing implementations are the same for both the SSL and the TIB
implementations of SFC.
In light request traffic, the pending limit will remain at its current level and requests will go out as they
are received. When a large burst of requests causes the pending limit to be hit, the limit will be raised
and a new set of requests will be made upon the successful completion of the last pending request.
Normally the pending list is only resized when it is empty. However, if a few of the requests take a very
long time to receive images or inactive events, requests in the waiting list will not be sent, and image
throughput will suffer. If pending_resize_trigger is set to be > 0, the slow items will not prevent the
queue from resizing and additional requests from being made. When the pending list reduces to the
size of the resize trigger, the queue resize algorithm described above will be used.
5.4.4.3 Configuration
The following parameters can be set in the SFC configuration file:
*request_queue*initial_pending_limit : 10
*request_queue*max_pending_limit : 100
*request_queue*limit_multiplier : 2
*request_queue*pending_resize_trigger : 0
*request_item_config*timeout_seconds : 100
*request_item_config*retry_seconds : 10
*request_item_config*retry_cap_seconds : 5000
*request_item_config*retry_multiplier : 3
All of the values shown above are the defaults.
This configuration assumes that the application creates an instance of RTRXFileDb and makes this
instance the global configuration database by calling RTRConfig::setConfigDb() and passing the
new RTRConfigDb instance.
NOTE: Make sure you include the "*" in front of the "request_queue" portion of the identifier. (See
section 4.8.5 for details.)
When the source application controls the size of the cache list and maxCache is configured as 0
(zero) or is not present in the Source Distributor’s configuration file, the SFC source application’s
publisher service *number_of_items value will be used as the maximum cache size. If the Source
Distributor’s *maxCache is configured to a value greater than 0 (zero), the SFC source application’s
publisher service *number_of_items value will be used as the Source Distributor’s openLimit value.
In this case, the Source Distributor will set its cache size for this service to the lesser of the Source
Distributor’s maxCache and SFC’s *number_of_items.
More information is available in the Reuters Market Data Hub - Source Infrastructure 4.2 Software
Installation Manual.
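As an illustrative sketch only, a source application that wants the infrastructure to size its cache for 5,000 items might set the publisher service variable along these lines (the value is arbitrary and any per-service scoping is omitted):
*number_of_items : 5000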
When the source application completes a connection to an infrastructure component, the source
application will issue the images for all of the items in a service over a period of time. This set of
images is broken down into subsets (defined by *imagesPerInterval) and sent to the Source
Distributor at intervals (defined by *imageInterval) which would, by default, publish 300 images per
second. These parameters allow the user to avoid sending out a very large number of images in a very
short period of time, which could overload the socket connection or the Rendezvous daemon.
NOTE: In source-driven mode, a source application should pre-load its cache before a Source
Distributor connects to it or it connects to an RV daemon. Otherwise, the pacing algorithm described
above will not be used. To pre-load SFC’s cache, create and populate all records before calling
RTRRTRecordServiceImpl.indicateSync().
When publishing to a Source Distributor, it is important to have the Source Distributor configured to
handle a very large number of images. The Source Distributor also has a parameter called "open
window". This specifies the maximum number of outstanding image requests allowed at any given
time. Currently, the default is 40. The configuration variable to set the Source Distributor’s open
window is *initialOpenLimit. This is a configuration variable found in the triarch.cnf file. Please see
the Source Distributor’s documentation for more details.
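As an illustrative sketch only (the value is arbitrary and the exact triarch.cnf entry syntax should be confirmed in the Source Distributor documentation), raising the open window might look like:
*initialOpenLimit : 200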
5.4.6 Tracing
SFC provides several mechanisms for tracing that can aid in troubleshooting the network configuration
and the application.
To turn on tracing, the debug filter must first be enabled in the selector configuration variable, and a
trace level must be set:
*selector : *.debug
*traceLevel : <level>
This will begin logging debug messages to the logger. <level> is an integer bitmask used to set
various levels of tracing. Valid values are created by adding the integer values of each level that is to
be logged.
1 = service level tracing
2 = basic item level tracing
4 = full item level tracing
8 = item image/update data tracing
In the TIB implementation, level 8 sends the full parsed TIBMsg to stderr.
The "debug" filter must also be enabled in the *selector configuration variable.
For example, to turn on service level and item image and update data tracing, the following entries
would be used:
*selector: *.debug
*traceLevel: 9
Additionally, IPC messages to Triarch and the Market Data Hub can be traced using the sslapi.cnf
configuration variables *eventLogging, *messageTracing, and *functionLogging, described in
section B.6.3. The IPCTRACE file created with this configuration is always in the local directory.
For RMDS, SASS3 library error messages are sent to SASS3.log. The name of the SASS3 log file can
be set with the *tiblogfile configuration variable.
The following entries summarize these and related configuration variables with their defaults:
*sslLogFile : SSL.log
*sslLogFileSize : 10000
*mountTrace : False
*tiblogfile : SASS3.log
• Both dictionaries must at least know the FID definitions for all of the field names that will be
published. If SFC does not have a FID definition for a field, it will throw it out. If the infrastructure
cache does not have a FID definition for a field, it may also throw it out.
• The size of a price field is hardcoded to 17 bytes, causing price fields larger than 17
bytes to be truncated. With a Marketfeed-based dictionary, this can be solved by modifying appendix_a. To
increase the size while using a TSS-based dictionary, SFC provides the configuration variable
*decimalSize to work around this issue. The TSS-based format has an opaque data type that
supports fixed-size binary data. SFC translates that type to RTRFidDefinition::Binary.
However, the Marketfeed data format used by Triarch and RMDS is limited to a 182-character set.
Binary fields in the appendix_a are actually base64 encoded in Marketfeed; RTRField’s
setFromBinaryData() and binaryData() methods can be used to encode and decode binary data
for all infrastructures.
• Page data in the Triarch and RMDS infrastructures is delivered as a stream of ANSI data. While
SFC parses the ANSI data and makes it available in a logical model, sometimes status text
messages also have ANSI escape sequences in them. SFC does not parse these ANSI escape
sequences, so the text in processInfo events will still contain the sequences.
• When using TSS dictionary format, SFC now allows consumers access to the SEQ_NO FID.
Previously SFC dropped this field. If it is necessary to have SFC automatically drop this field, set
*dropSeqNo to True. *dropSeqNo is False by default and will only need to be changed in rare
cases.
• When publishing to an RTIC (SASS2) infrastructure, SFC automatically creates fields for
"SYMBOL" and "Output Ticker Symbol". Those fields are populated with the RIC for the item. The
configuration parameter *sendSymbolFields allows the publisher to disable the publishing of the
SYMBOL and Output Ticker Symbol fields. The default value for this configuration parameter is
true, meaning that SYMBOL and Output Ticker Symbol are published.
• When publishing a chain over an RTIC (SASS3) infrastructure, the SFC configuration variable
*enableNextChainHeaderAsString must be set to True; otherwise, the size of chain elements
on the subscriber side will not be changed.
*enableNextChainHeaderAsString = True
• On the SASS3 protocol, the SFC publisher can be configured to encode the published data in
Marketfeed (MF) data format by setting the configuration variable *mfencoder to True.
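As a hedged illustration, a publisher might combine several of the settings described above in its SFC configuration file; which of these entries applies depends on the infrastructure in use, and the combination shown is for illustration only:
*sendSymbolFields : false
*enableNextChainHeaderAsString : True
*mfencoder : True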
5.5.2.1 Enumerations
Consumers
When using SFC with a Marketfeed-based infrastructure, both enumerated and expanded values will be
available for all enumerated fields. If the datafeed is expanding the enumerated fields upstream, SFC
will not look up the enumerated value if the configuration variable srcExpandsEnumFields is set to
True. The srcExpandsEnumFields configuration variable can be set on a per service basis.
A TSS-based infrastructure expands all enumerations upstream and does not provide a way of looking up the
original enumerated value. No RTRRTEnumeratedFields will be created by the TIB implementation of
SFC. They will instead be of type RTRRTAlphanumericField.
For compatibility between the models, first check the type of the field. If it is enumerated, use
RTRRTEnumeratedField::expandedValue(). Otherwise use RTRField::string() to access the value.
// RTRRTRecord & record;
if ( record.hasFieldByName("RDN_EXCHID") )
{
    RTRField *field = record.fieldByName("RDN_EXCHID");
    if ( field->type() == RTRFidDefinition::Enumerated )
        cout << ((RTRRTEnumeratedField *)field)->expandedValue();
    else
        cout << field->string();
}
Publishers
Publishing enumerated fields presents some challenging tradeoffs. The issue centers around sending
data in the format (either enumerated or expanded) that the consuming program expects.
• If the publisher will only be used in a single infrastructure, then it can use the format expected in
that infrastructure.
• If the consumer is also an SFC-based program, then one of the consumer workarounds described
above can be used.
• If the publisher will be publishing to consuming applications that cannot be changed or
configured, then the publisher may need to add conditional code or configuration to change what
it publishes in the infrastructure in which it is deployed.
The following code shows how to conditionally publish an enumerated field as either Enumerated
or Alphanumeric:
// RTRRTRecordImpl & record;
// RTRFidDb & fidDb;
The following table summarizes how enumerations are received for various infrastructures. In each
row, the RDN_EXCHID field is being published. The value sent by the publisher is what will be sent
through the infrastructure. The client values show what will be available through SFC. The f and ef
variables are RTRRTField and RTRRTEnumeratedField, respectively. The value returned by string()
is the value actually stored by SFC.
5.5.2.2 Hint
SFC allows the system to identify the data display format by determining a hint, which can be grouped
into three forms as described in Table 5.6.
NOTE: Hints are only applicable to the SFC consumer. The SFC publisher has no control over hints;
SFC will automatically generate an appropriate hint for each field when it is published.
Scenario 1. SFC will strictly follow the hint if the data buffer size is sufficient.
Scenario 2. SFC will display the data as hinted, but the decimal part may be rounded up or rounded
down depending on the available buffer size.
Scenario 3. SFC will display the data in exponential form, with no hint applied, when the buffer
size is not sufficient for the whole number. This causes the data to be rounded up or rounded
down in some cases.
• When record templates are used, the RTIC will never add or remove fields, even if another image
is published. When record templates are not used, the RTIC will replace the item in the cache
when a TIB_MSG_VERIFY (i.e. record resync) is received.
• REC_TYPE is required when using the record publishing model to send contributions on a
Rendezvous network.
• Record template numbers have different values in TSS and Marketfeed data dictionaries. Since
most Marketfeed data dictionaries do not use fixed record templates, the TSS data dictionary
values can be used.
Due to these issues, SFC’s TIB implementation has an additional mechanism for controlling when
record template numbers are published. First, the generic record publishing code can be written to
always publish record template numbers. Then record template number publishing can be controlled
for a service either programmatically or through configuration.
RTRRTFieldToTIBRecordService::setUseTemplates();
RTRRTFieldToTIBRecordService::clrUseTemplates();
setUseTemplates() enables publishing of the REC_TYPE field, and clrUseTemplates() disables
publishing of that field, regardless of whether the setRecordTemplateNumber() method was used. If
the template behavior is not set programmatically, the *useTemplates configuration variable can be
used. That configuration variable defaults to true. Setting *useTemplates to false is equivalent to
calling clrUseTemplates().
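For example, disabling REC_TYPE publishing through configuration rather than through clrUseTemplates() would use an entry along these lines (any per-service scoping is omitted here):
*useTemplates : false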
NOTE: If the record template number is only being used to distinguish between different types of
records and is not being used to define an exact set of fields for a record template, then the
RDNDISPLAY field can probably be used. This field is typically used to select a display template,
and it is available from all infrastructures.
dictionary that is loaded from tss_fields.cf, tss_records.cf, and supporting files. Many data
discrepancies between infrastructures can be linked to data dictionary differences.
"network", then the data dictionary will be downloaded from the P2PS or RTIC. See Appendix C.2.3
and Appendix E.4 for information on how to configure the infrastructure to support data dictionary
download. If the *fidDbLocation variable is not set, the default location for that type of data dictionary
will be used. For TSS, the default location is network. For Marketfeed, the default location is file.
When the *fidDbLocation is "file" for a TSS data dictionary, the fid_file_path configuration variable
must be set to the fully qualified filename of tss_fields.cf . Also, the cfile_path environment variable
must include the directory in which tss_fields.cf and its support files are found. The fid_file_path
config variable and the cfile_path environment variable should be set using forward slashes (e.g.
cfile_path = /var/tib; *fid_file_path: /var/tib/tss_fields.cf).
If the data dictionary is loaded from files, the *fid_file_path and *enum_file_path configuration
variables can be used to set the location. The priority for determining the data dictionary location is as
follows:
1. filenames passed in constructor or load() methods
2. filenames found in *fid_file_path and *enum_file_path
3. the current working directory
4. \\HKEY_LOCAL_MACHINE\SOFTWARE\Reuters\Triarch\MASTER_FID_FILE and
\\HKEY_LOCAL_MACHINE\SOFTWARE\Reuters\Triarch\ENUM_FILE
registry entries (Win32 only)
5. MASTER_FID_FILE and ENUM_FILE environment variables
In order to load both tss_fields.cf and appendix_a from files specified in configuration, configuration
class IDs should be used. For example:
*fidDbLocation: file
FileFidDB.fid_file_path: /var/triarch/appendix_a
FileFidDB.enum_file_path: /var/triarch/enumtype.def
TIBFidDB.fid_file_path: /var/tib/tss_fields.cf
based and the other is Marketfeed-based. When a RTRTIBFidDb is loaded, one of the two global data
dictionaries is loaded. If another RTRTIBFidDb of the same type is loaded, the data dictionary in the
parser is replaced, and services based on the first data dictionary will begin to use the second for
parsing.
5.5.4.3 Examples
The examples in this section show how to use the RTRTIBFidDb::load() methods and service
constructors to share data dictionaries. The examples may use some of the following includes and
declarations:
#include "rtr/objid.h" // RTRObjectId
#include "rtr/tibrtsvc.h" // RTRTIBRTRecordService
#include "rtr/tfdb.h" // RTRTIBFidDb
#include "rtr/tconnect.h" // RTRTIBConnection
#include "rtr/fldtossl.h" // RTRRTFieldToSSLRecordService
#include "rtr/fldtotib.h" // RTRRTFieldToTIBRecordService
RTRObjectId appId("appId");
RTRRTRecordService * subservice;
RTRRTRecordServiceImpl * pubservice;
RTRTIBConnection * c;
RTRTIBFidDb * fidDb;
1. RMDS local publishing and subscribing services, Marketfeed-based data dictionary from file
c = new RTRTIBConnection(appId, "connection");
c->connect();
fidDb = new RTRTIBFidDb(appId, c);
fidDb->load(); // Marketfeed, File is default
subservice = new RTRTIBRTRecordService(appId, "IDN", *fidDb, *c);
pubservice = new RTRRTFieldToTIBRecordService(appId, "PUB", *fidDb, *c);
2. RMDS local publishing and subscribing services, Marketfeed-based data dictionary from
network
c = new RTRTIBConnection(appId, "connection");
c->connect();
fidDb = new RTRTIBFidDb(appId, c);
fidDb->load(RTRTIBFidDb::Marketfeed, RTRTIBFidDb::Network);
subservice = new RTRTIBRTRecordService(appId, "IDN", *fidDb, *c);
pubservice = new RTRRTFieldToTIBRecordService(appId, "PUB", *fidDb, *c);
3. RMDS local publishing and subscribing services, TSS-based data dictionary from file
c = new RTRTIBConnection(appId, "connection");
c->protocol(RTRTIBConnection::SASS2); //SASS3 is also valid if
//using a RTIC(SASS3)
fidDb = new RTRTIBFidDb(appId, c);
fidDb->load(RTRTIBFidDb::TSS, RTRTIBFidDb::File);
subservice = new RTRTIBRTRecordService(appId, "RSF", *fidDb, *c);
pubservice = new RTRRTFieldToTIBRecordService(appId, "PUB", *fidDb, *c);
4. RMDS local publishing and subscribing services, TSS-based data dictionary from network
c = new RTRTIBConnection(appId, "connection");
c->protocol(RTRTIBConnection::SASS2); //SASS3 is also valid if
//using a RTIC(SASS3)
fidDb = new RTRTIBFidDb(appId, c);
fidDb->load(RTRTIBFidDb::TSS); // Network is default for TSS
subservice = new RTRTIBRTRecordService(appId, "RSF", *fidDb, *c);
pubservice = new RTRRTFieldToTIBRecordService(appId, "PUB", *fidDb, *c);
5. Sharing a data dictionary between an RMDS service and a Triarch service.
Note that the FidDb is loaded from a file. Also note that an RTRTIBFidDb is used, not an
RTRFileFidDb. This is necessary because RTRTIBFidDb has the side-effect of correctly
initializing the RMDS parser.
c = new RTRTIBConnection(appId, "connection");
c->connect();
fidDb = new RTRTIBFidDb(appId, c);
fidDb->load(); // Marketfeed, File is default
subservice = new RTRTIBRTRecordService(appId, "IDN", *fidDb, *c);
pubservice = new RTRRTFieldToSSLRecordService(appId, "PUB", *fidDb, *c);
6. Downloading a data dictionary when the RTIC’s RV3_OUTPUT_* and RV3_PUBLISH_*
parameters or the RTIC’s RV2_OUTPUT_* and RV2_PUBLISH_* parameters are set.
const char * outservice = "7501";
const char * pubRVService = "7502"; // renamed so it does not clash with the pubservice pointer above
c = new RTRTIBConnection(appId, "connection", outservice, "", "");
c->connect();
fidDb = new RTRTIBFidDb(appId, c);
fidDb->load(RTRTIBFidDb::TSS); // Network is default for TSS
pubservice =
    new RTRRTFieldToTIBRecordService(appId, "PUB", *fidDb, pubRVService, "", "");
Note that for this setup, the SFC configuration file must have the following entry:
TIBDistribution*pingInterval : 0
or
TIBDistribution*enablePingSubscription : false
5.5.4.4 Configuration
*fid_file_path - Default location of appendix_a. This configuration variable is also used for loading
tss_fields.cf from a local file. To set both the appendix_a and tss_fields.cf filenames through
configuration, the FileFidDB and TIBFidDB class IDs must be used. When loading tss_fields.cf, the
cfile_path environment variable must also be set to include the directory with the supporting .cf files.
See section 5.5.4.1 for details.
*enum_file_path - Default location of enumtype.def.
*fidDbInterval - If a downloaded RTRSSLFidDb or RTRTIBFidDb does not load within this time period (in
seconds), the download will be retried. The default value is 20.
*fidDbLocation - Has no default. Can be set to "network" or "file" for RTRSSLFidDb or RTRTIBFidDb.
If it is not set, the default value for that infrastructure will be used.
*fidDbType - Has no default. Can be set to "Marketfeed" or "TSS" for RTRTIBFidDb. If it is not set, the
default value for that infrastructure will be used.
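For instance, an application that loads a Marketfeed data dictionary from local files might use entries along these lines (the paths repeat the earlier example and are illustrative only):
*fidDbType : Marketfeed
*fidDbLocation : file
*fid_file_path : /var/triarch/appendix_a
*enum_file_path : /var/triarch/enumtype.def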
When sourcing data from an RMDS service (SASS2/SASS3), an item that is in a stale state will
transition to a not-stale state when an update whose record status equals OK is received and the
configuration variable *allowUpdatesToChangeStaleToOk is set to true. Conversely, an item
that is in a not-stale state will transition to a stale state when an update whose record status equals
STALE_VALUE is received.
stale events. These events fall into two categories: those from which SFC will try to recover (for example,
CACHE_FULL) and those for which SFC will let the infrastructure recover (for example, TMF_DOWN).
In most circumstances, SFC’s default mapping behavior is acceptable. In some environments, the
event mapping needs to be changed. The following configuration can be used to make some large
grain changes to how status codes fall into those categories:
*forceSFCDrivenRecovery : False
*forceInfrastructureDrivenRecovery : False
When *forceSFCDrivenRecovery is set to True, SFC will close all stale items and will re-request the
items. While this could speed up recovery in some circumstances, it could also lead to network request
"storms" if hundreds of applications are configured this way. If the RTIC is down, SFC will wait until
the RTIC is up and then re-request the items.
When *forceInfrastructureDrivenRecovery is set to True, SFC will not re-request any stale items.
This can alleviate the network request load in complete system recovery situations. However, in some
circumstances, the items may never recover. This is especially true in the case of the SASS2 protocol.
If these configuration variables need to be set, then typically only one of them is set. These
configuration variables do not affect connection recovery or request queue timeouts.
5.5.6 Discovery
The *serviceProvider configuration variable controls the defaults of several other configuration
variables. The following table shows the relationship between various configuration variables.
(Table columns: *serviceProvider or dynamic discovery value; equivalent configuration; other values controlled.)
The first, third and fourth parts of the name can be determined from the SFC service name and the
market data item name. The sector, however, must be guessed. On the client side, SFC takes its best
guess (based on the service type and the RIC). If that guess does not work, it cycles through the other
sectors. Table 5.12 shows some examples of the four-part names that SFC creates from the service
names and RICs.
(Table 5.12 example row: service TELERATE, item 12 maps to the four-part name TELERATE.PAGE.12.NaE.)
Sector selection is slightly different with an RMDS infrastructure. When a consuming SFC application
accesses data provided by the Market Data Hub through an RTIC, it will always use the sector from
service provider discovery (see section 5.5.6.2), which is typically "ANY". SFC local publishers,
publishing on the Rendezvous portion of an RTIC SASS3 infrastructure, will always use the "ANY"
sector by default. In an RTIC/SASS2 infrastructure, SFC will always guess which sector should be used for
the publishing symbol. The sector can be "REC", "LINK", "PAGE", or "BRCAST", depending on the
publishing symbol.
(Figure: decision tree used to guess the sector from the item name. The checks shown include whether the first character is 'n', 't', or 'd', whether the name contains '#', and whether it contains '/' other than as the first character; the outcomes shown are the REC and PAGE sectors.)
Since page services have simpler sector schemes (i.e., the sector is either PAGE or ANY), the default sector of
"." is only supported for record subscription and publishing services. The *defaultSector configuration
variable can still be used with page services to set the actual sector.
N2_UBMS is a special RIC in the RMDS and Triarch infrastructures to deliver news headlines, alerts,
and corrections. In the TIB infrastructure this data is typically delivered in seven separate subjects. For
a service RSF, these subjects would be:
RSF.BRCAST.NEWS2K_ALERT.NaE
RSF.BRCAST.NEWS2K_HL.NaE
RSF.BRCAST.NEWS2K_SUB_HL.NaE
RSF.BRCAST.NEWS2K_CORRECT.NaE
RSF.BRCAST.NEWS2K_CORECTD.NaE
RSF.BRCAST.NEWS2K_DELETE.NaE
RSF.BRCAST.NEWS2K_EXPIRE.NaE
For the TIB infrastructure, the C++ Edition supports mapping the single N2_UBMS subject to the seven
TIB subjects. This allows SFC applications to request N2_UBMS from any infrastructure. When the
application subscribes to N2_UBMS, SFC will subscribe to each of the seven news subjects. Subject-
based entitlement checks will be performed on each of the TIB news subjects. Data from any of those
subjects will be forwarded to clients of the N2_UBMS record.
N2_UBMS mapping is enabled by default, but it can be disabled by setting the *newsEnabled
configuration variable to false. The *newsSymbol configuration variable can be used to change the
application requested RIC from N2_UBMS.
When forwarding news subjects to the N2_UBMS record, SFC maps the subject to a numeric value
stored in the DSPLY_NAME field. The mapping can be configured using the *newsInstrumentList.
Applications using the TIB infrastructure still have the option of subscribing directly to the seven news
RICs: NEWS2K_ALERT, NEWS2K_HL, NEWS2K_SUB_HL, NEWS2K_CORRECT,
NEWS2K_CORECTD, NEWS2K_DELETE, and NEWS2K_EXPIRE.
The *newsSectorList configuration variable provides a list of sectors that the default record
customizer for TIB uses with news subjects. The *newsPrefix configuration variable is used to
determine which subjects use *newsSectorList and which use the *sectorList.
News configuration has the following defaults:
*newsInstrumentList: NEWS2K_ALERT.NaE 1, NEWS2K_HL.NaE 2, NEWS2K_SUB_HL.NaE 3,
NEWS2K_CORRECT.NaE 4, NEWS2K_CORECTD.NaE 5, NEWS2K_DELETE.NaE 7,
NEWS2K_EXPIRE.NaE 8
*newsSectorList: BRCAST, INTERACTIVE
*newsPrefix: NEWS2K_
*newsSymbol: N2_UBMS
*newsEnabled: true
5.5.7.7 Configuration
The complete list of customizer configurations is shown below with the default values. All values can
be configured on a per service basis.
The default record subscription customizer for TIB has the following configuration:
*sectorList: REC, PAGE, LINK
*newsSectorList: BRCAST, INTERACTIVE
*newsPrefix: NEWS2K_
The default record subscription and publishing customizers for TIB have the following configuration.
*localSubjectMapFile: ! disabled
*globalSubjectMapFile: ! disabled
*subjectMapSize: 1000 ! size of subjectMap hash table
*number_of_items: 503 ! size of RTRServiceCustomizer hash table
TIB page services have the following default configuration:
*defaultSector: PAGE
*defaultExchange: NaE
*columnsInPage : 80
The Source Distributor must also be configured with those settings.
Contributions are sent the same way that locally published data is sent. Like local publish services, the
contribution service is specified as a broadcast service in the TIC’s configuration file. The difference is
that a TIB contribution server (e.g. MarketLink) happens to be consuming the locally published data. It
validates the data and forwards it through the contribution infrastructure. SFC’s insert model cannot be
used; the record publishing model must be used to send contributions. The application will never
receive an ACK or NAK because TIB contribution servers do not send them. SFC publishes the data in
TIBMsg self-describing format, and the TIC converts the data to a QForm based on the record template
number provided to SFC.
5.5.10.1 RTIC
The standard Rendezvous connection parameters can be set using the constructor or through the
RTRConfigDb:
• *service - which service group should be used. Default value is NULL which means the default
Rendezvous service will be used.
• *network - which network interface should be used. Default value is NULL which means the
primary network interface for the host computer will be used.
• *daemon - location of the Rendezvous daemon used to establish communication. Default value
is NULL which means the local daemon on TCP socket 7500 will be used.
• *updateService - which TMF service group should be used. Default value is NULL which means
the TMF default Rendezvous service will be used.
• *updateNetwork - which TMF network interface should be used. Default value is NULL which
means the TMF primary network interface for the host computer will be used.
• *updateDaemon - location of the TMF Rendezvous daemon used to establish communication.
Default value is NULL which means the TMF local daemon on TCP socket 7500 will be used.
NOTE: If updateService, updateNetwork, and updateDaemon are all NULL, then TMF is disabled
by default.
Entitlements are checked on both subscriber and publisher. The entitlements profile is loaded
after the data dictionary is complete. While the data dictionary and entitlements profile are
downloading, the subscription is not allowed and the service is not available.
• *reconnect_interval - If the Rendezvous connection fails due to a recoverable condition, this
value is used to schedule a reconnection attempt. The default value is 5 (seconds).
• *slowconsumer_interval - If the Rendezvous connection fails due to a slow consumer advisory
message, this value is used to schedule a reconnection attempt. The default value is 20
(seconds). This is slower than reconnect_interval to allow the consumer’s machine more
opportunity to catch up. If the value is less than or equal to zero, the RVD connection will not be
terminated when a slow consumer advisory message is received.
• *username - the DACS username used for entitlement checking. This option only applies when
entitlements are enabled and the connection is a SASS3 connection. See section 5.3.3 in this
manual or see RTRTIBConnection in the SFC Reference Manual for details.
• *enablePingSubscription - This configuration variable has two purposes. (1) For SASS2 and SASS3, it
enables RTRRTFieldToTIBRecordService to send heartbeats to the RTIC. These heartbeats
allow the RTIC to determine whether the SFC publisher has gone down; this mechanism works
in conjunction with the RTIC’s BC_DQA_MODE and SFC’s *groupId configuration. (2) For SASS2 only, these
heartbeats also provide the SFC publisher with a mechanism to determine whether the RTIC has gone down.
It does this by subscribing to its own ping over the same Rendezvous session on which it is
publishing. This configuration variable must be set to false when the RTIC is using different
values for the RVx_OUTPUT_* and RVx_PUBLISH_* RTIC configuration parameters.
Specifically, *enablePingSubscription must be set to false when:
Specifically, *enablePingSubscription must be set to false when:
O the RV2_OUTPUT_* and RV2_PUBLISH_* parameters are set in the RTIC’s configuration file
O the RV3_OUTPUT_* and RV3_PUBLISH_* parameters are set in the RTIC’s configuration
file
O sending contributions
O the RTIC has a partitions.cf that filters out the PING sector
• *pingInterval - the frequency, in seconds, at which a heartbeat will be sent to the RTIC. The default
value is 10 seconds and it can be set on a per service basis. If the RTIC’s BC_DQA_MODE is set to
TRUE and SFC’s *groupId is set, *pingInterval must be less than the RTIC’s
FEED_FAIL_TIMEOUT. Otherwise the RTIC will think that the publisher has died and will mark all
of the publisher’s items stale.
5.5.10.2 SSL
Most SSL client connection parameters are configured in the sslapi.cnf. The following values can be
set using the SFC configuration file for a SSL Connection:
• *username - the DACS username used when making a sink connection. See section 5.3.3 in this
manual or see RTRSSLConnection in the SFC Reference Manual for details.
• *mountRetryInterval - the number of seconds to wait before retrying an initial connection to a
Sink Distributor. The default value is 5 (seconds).
The following values can be set using the SFC configuration file for a SSL Connection Server:
• *username - the DACS username used when making a sink connection. See section 5.3.3 in this
manual or see RTRSSLConnectionServer in the SFC Reference Manual for details.
• *ipcServerName - the name of the well-known port to open for accepting connections
• *maxSessions - maximum number of connections accepted from other applications
• *reconnect_interval - The default value is 10 (seconds).
• *enableEntitlements - See section 5.3.2. This parameter is used to enable or disable
entitlements.
• *tcpNoDelay - The *tcpNoDelay parameter controls the use of the TCP_NODELAY socket option
(which disables the Nagle algorithm). By default, SFC does not set TCP_NODELAY on its sockets;
this improves overall performance but may briefly delay transmission of smaller
packets. If that delay is undesirable or unacceptable, setting *tcpNoDelay to true may reduce
end-to-end latency at the cost of higher CPU usage.
The following values can be set using the SFC configuration file for a SSL publisher session:
• *dispatchInterval - This parameter is the interval at which the publisher session dispatches events
from its event queue. Setting it lower may improve latency under high update rates but
results in more CPU usage. The default value is 25 (milliseconds); the lowest value that can be
set is 10.
If your application needs to implement the main-loop, it needs to implement the SFC event notifier
rather than the Rendezvous event manager. (See the RTRSelectNotifier and
RTRWindowsNotifier class references for examples of how SFC notifiers can be implemented.
Also note that the source code for all notifier implementations is provided in this product
package.)
• Your build environment must include the Rendezvous 5 header files from Rendezvous version
5.3.
These header files are not included in the C++ Edition.
5.6 Configuration
Active A market data item is initially in this state when it is created. It remains
active until it is closed or destroyed.
ANSI The American National Standards Institute (ANSI) develops standards
widely used as guidelines by US firms. Data processing standards developed
from ANSI range from the definition of ASCII to the determination of
overall datacom system performance.
ANSI Sequence A string of characters that invokes a mode or status change in the display
system. ANSI sequences start with an ESC character (1B hex) and then
follow one of several patterns. The receiving device must recognize these
patterns and act upon the supported sequences while ignoring unsupported
sequences. ANSI sequences can refer to position or attribute information.
ANSI Page The Triarch name for Effects Page. Pages on Triarch are delivered as a
stream of ANSI encoded data. See also Page.
Attributes See Page Attributes.
CBE Content Based Entitlements
Client A component that receives events from SFC.
Client Application An application that receives data and events from a market data infrastructure.
CMON A TIB infrastructure component that monitors the status of other infrastructure components.
Consumer See Subscriber, Client Application, Sink Client.
Contribution See Insert.
DACS The Data Access Control System is an entitlement tool that allows customers
to automatically control who is permitted to use which sets of data in
the customer’s financial information management system. In this way, the
customer can demonstrate to information providers and vendors how
many people are using which sets of data.
Data Dictionary See FID Database.
Distribution Layer The TIBCO Rendezvous network that consists of the RVDs of the RTIC
and end-user programs.
DQA Data Quality Assurance. The TIB infrastructure uses the CMON process to
send heartbeats to other market data components. If one of them does not
respond, a DQA message is broadcast to indicate that the data might be
stale.
Effects Pages An ANSI Page, after it has been converted into a Logical Page to be
broadcast on a TIB market data system.
Entitlement Code If a vendor requires subservice permissioning, a code must be provided
with each data item from that vendor. This entitlement code is used by the
permissioning system (DACS) to control access to data.
Enumerated Type An enumerated type is one of the field types defined for Marketfeed. It consists
of a set of mnemonics, each having its own specific meaning. A displayable
string is associated with each of these mnemonics.
Enumerated Value The integer value equivalent of data that has an enumerated type.
ETIC Entitlements TIC. Entitlements TIC provides the entitlement profile
needed for a TIB publisher and subscriber.
Expanded Value The full mnemonic string for data that has an enumerated type.
Fading An ANSI extension that many users and sources require, where an area of
updated text must be highlighted for a short time to draw attention to it.
After the end of the fade period, the text returns to its original color.
Applications implement fading as follows: when a character is changed, it is
displayed initially with the “Fading” set of attributes; after a fixed period of
time, it is re-displayed with the default attributes.
Inactive A market data item is in this state when it is closed or destroyed.
Sometimes, the market data infrastructure will force an item to be closed,
and thus become inactive. Inactive is a terminal state. Once an item is
inactive, it cannot become active again.
Item See Market Data Item.
Local Publisher A non-interactive publisher on the Rendezvous distribution network of
RMDS or a TIB market data system.
Logical Data Real-time data distributed across a market data system in a
display-independent format. By implication, applications can access both
the syntax and semantics of all constituent elements of the data.
Implemented as records on Triarch.
Logical Page A page that has been broken down into regions. Page data and attributes
are available by accessing a row and column.
Marketfeed Originally the Reuters presentation protocol providing public access to
data supplied by Reuters. Its purpose is to support the transfer of data
between Reuters and user computer systems in a consistent and logical
format for Triarch or RMDS.
Market Data Hub The set of components that provide resilient and scalable integration of the
RMDS with external sources of real-time market data and news. The hub
is comprised of the information source, source distributors, and compatible
feed handlers.
Market Data Item Information from a specified source (e.g., IDN_SELECTFEED or
YOUR_SOURCE) for a specified item (e.g., DEM=). Market data items are
identified by the name of the source service which supplies them and the
name of the individual item within that source service.
MDDS Market Data Distribution System
Node A device connected to a network cable. This usually refers to a server or a
workstation.
Non-Interactive Publisher A publisher that cannot accept dynamic requests for market data items
that it does not know about. See also Full-Cache Service, Source-Driven.
Page A page is a type of data item formatted for distribution to display systems.
The data includes attribute information.
Page Attributes Information that describes how a page’s text should be displayed; e.g.
properties such as highlighting, blinking, color, etc.
Page Record A logical record that contains rows of text. Page records are different from
pages because page records do not contain attribute information. They are
also delivered from record sources. An example is an IDN Page Record.
Page Source A source that supplies data items in the form of pages. Pages are of
variable dimensions, but are typically displayed in 80 columns and 24 or
25 rows.
Permissionable Entity (PE) A numeric code included in each Reuters IDN record. The PE is used to
determine to which subservice(s) the record belongs. For example, the PE
value 62 indicates that the item is from the New York Stock Exchange.
Publisher An application that creates market data items and distributes them to a
market data infrastructure. Publishers can manage the state and update
the values of the market data items. In contrast, contributions do not have
a state model.
Record A record is a type of data item encoded in a form that is convenient for use
by computer applications.
Record Source A source that supplies data items in the form of records.
Record Template A specification of all of the fields that are in a record.
Record Template Number The number that specifies which record template was used to create the
record of a given type. Also called Field List Number.
Rendezvous See TIBCO Rendezvous.
RIC Reuters Instrument Code. A RIC is a unique identifier for a record.
RMDS Reuters Market Data System. The market data system that fully leverages
the TSA Framework and consists of a best-of-breed combination of Triarch
and the TIB MDDS.
RRCP Reuters Reliable Communication Protocol. The UDP broadcast-based
communications layer of RRDP.
RRDP Reuters Reliable Datagram Protocol. A stack of protocols used to commu-
nicate between the key processes which make up the Triarch backbone or
the Reuters Market Data Hub. RRDP consists of RRMP and RRCP.
RRMP Reuters Reliable Messaging Protocol. The session management layer of
RRDP.
RTIC TIC—RMDS Edition. The RMDS caching server that connects a Reuters
Market Data Hub with a Rendezvous Distribution Layer.
RV See TIBCO Rendezvous.
RVD Rendezvous Daemon. RVD is a process that listens to multicast traffic on
a Rendezvous network. SFC gets data directly from the RVD.
SASS Subject Addressed Subscription Service. The SASS protocol is used by
the TIC and RTIC to deliver market data over a Rendezvous network.
SBE Subject Based Entitlements
Server A server is a process or several coordinating processes whose function it
is to satisfy client requests.
Service A logical entity, made up of one or more source applications that have
been configured to provide a single, coherent view of a set of data. Sink
applications request data from services.
Service Distributor (a.k.a. Service Manager) An optional process that implements many of the
advanced information resource management features of an RRDP-based
Market Data Hub. The Service Distributor ensures that requests made by a
Sink Distributor (on Triarch) or an RTIC (on RMDS) are directed to the
most appropriate Source Distributor at the optimum rate.
Sink A consumer of data from the RMDS or Triarch network.
Source Service The name by which a particular vendor or data contributor is identified on
the Triarch network. A source service is comprised of one or more source
servers on the Triarch network.
SSL Source-Sink Library. A Reuters software product that provides an
application programming interface to the Triarch network. “SSL” is also
used when referring to the Triarch implementation of SFC.
System Foundation Classes (SFC) A set of object-oriented class libraries written in C++. The SFC includes a
series of abstract interfaces and implementations to enable easy and
consistent access to both real-time and historical data.
Stale A state indicating that a service is unavailable or a market data item may
not have the current value. This state could result from mis-configuration,
network problems, or failures of upstream market data components.
State For a given application, each channel and data stream may be in one of
several possible defined states (e.g., open, data stale). Certain events
cause a transition from one state to another.
Status A status indicates the state of a data item. For example, a status may
indicate that the item is unavailable, that the data provided may not be
complete, or that the item is now up to date.
Subscriber An application that listens for market data events. See also Consumer,
Client Application, Sink Client.
Triarch A network infrastructure aimed at the financial market place. It is designed
to distribute market data in an efficient manner. Core distribution compo-
nents of Triarch are the Source, Service, and Sink Distributors.
TIB The Information Bus. “TIB” is used when referring to the Rendezvous
implementation of SFC. It is also sometimes used to refer to the TIC-based
TIB infrastructure.
TIBCO The Information Bus Company
TIBCO Rendezvous A broadcast-based message delivery system that can be used to deliver
market data. The TIB implementation of SFC uses the Rendezvous infra-
structure.
TIC TIBCO Information Cache. The caching server that connects a TIB
ciServer feed network with a ciServer or Rendezvous-based client-delivery
network. It caches market data broadcast traffic so it can provide initial
images to client applications. The TIC is the caching component of the TIB
infrastructure.
Update An update is a modification to the contents of a data item. A source sends
updates asynchronously as the contents of the item change.
(SSL) Channel A channel connects the sink application to the sink distributor. Multiple
channels may be mounted, and each channel is independent of any other
channel.
Most of the function calls available via the SSL API require that the
application specify a channel number; likewise, every event delivered to
the application carries a channel number indicating the connection to
which the event applies.
For sink application programming, it is also important to understand the concept of a data stream.
Data Stream A data stream is opened for a data item, allowing information about that
item to flow across a channel. The data stream logically relates a channel,
service name, and item name. An open data stream contains an image
message followed by any number of updates and state information.
Information concerning the state of the data is guaranteed to be sent,
whether the state is OK, STALE, or CLOSED.
In order to receive a data item, a sink application must open a data stream for that item. Likewise, a
source application sends an item to the network through an open data stream. Multiple data streams
may be opened on a channel, but only one data stream can be opened for a particular item on one
channel. If multiple channels are mounted, one data stream may be opened for the same data item on
each of the channels.
Once a channel has been mounted by a sink application, the sink application can request data items.
It is important to understand that the response to a sink application's request for a data item is not a
single, discrete event, but instead is a continual stream of data events (see Figures B.1 and B.2). As
the market changes throughout the day, the original data will be updated (as long as the source
remains active and the application is connected). Status messages and (in some cases) new images
will be sent if the data becomes stale. Data and status information will continue to flow through the
stream, as it is available, until the data stream is closed.
(Figures B.1 and B.2: image, update, and status messages flow from the source application to the
Source Distributor, across the network backbone to the Sink Distributor, and on to the sink
application; the backbone may include multiple Reuters servers and Source Distributors serving
ATW and PTW sink applications.)
Unless a “snapshot” request is made by the sink application requesting the data, the request is for a
data stream to be opened to the service. Assuming that the request is for an actual data item and any
permission checks have been satisfied, the service will provide an image of the current contents of the
data item. As the data in the item changes, updates will be provided to the sink application until it
indicates that it is no longer interested in the data item, by closing the data stream.
(Figure: the sink application uses the SSL Library and a socket IPC connection to reach the Sink
Distributor, which attaches to the network backbone.)
A sink application accesses data through its connection to the Sink Distributor. It is the role of the
Sink Distributor to retrieve data items and forward updates on behalf of the sink application. The
sink application and the Sink Distributor communicate by passing messages using a socket
Interprocess Communication (IPC) mechanism. This allows great configuration flexibility, since
these components may reside on any node on the network.
Where multiple source servers are used to implement a single service, the system has been
designed to provide redundancy as well as to optimize server usage in normal operating
circumstances. Optimized management is key when datafeeds impose a complex set of
constraints such as data storage limitations, data request (or throttle) limitations, or mixed
delivery mechanisms.
Several key features are available for resource optimization:
• Optimized Resource Management
Multiple instances of a source server appear as a single service to the users, making the
optimization transparent. Where multiple instances of a server are present, the system strives to
ensure that any given item is supplied by only a single server. All the available data item
storage (called “cache”) slots will be used before new item requests force items to be removed
from storage. When resources are strictly limited, managed data storage removal (called
“preemption”) is available.
A powerful scheme allows the use of priorities to determine the best candidate to be preempted
from the cache to make way for a new request. The priority scheme supports various features
including the locking of cache items and allowing unconditional candidates for preemption.
Automatic re-request management enables the system, rather than the user, to manage request
retries. Control is asserted during times of insufficient resources, and server failure or failover is
managed gracefully.
• Optimum Response Timing
Requests for data items are managed intelligently. If any source server is already supplying an
item stored in its cache, a new request for that item will be fulfilled automatically from that server.
If multiple items are requested at once and the data is available on multiple servers, the requests
are grouped and sent to different servers for processing in parallel.
• Request Throttling
A source server can indicate that it is temporarily unable to service requests, i.e., it is “throttled”,
in order to avoid server overload. The server may also indicate the optimum queue size that it is
willing to accept.
If multiple source servers are used to implement a single service and an individual server fails, the
system automatically recovers the failed items while sharing the load equally among the remaining
servers.
To sink applications, these servers appear as a single service. For example, if a Triarch system is
configured with two Selectservers, a sink application can determine that the Selectserver service is
available, but the sink application is unable to distinguish the actual number of servers.
Figure B.5 shows a typical configuration used in network request routing where (N) source servers of
the same type are present on the network. All of these source servers appear as a single service on
the network. Each may support a single datafeed or several datafeeds.
(Figure B.5: several source servers, each supporting one or more datafeeds of Type A, attached to
the network backbone and appearing to the sinks as a single service.)
In SSL 4.x, an optional process (the Service Distributor) performs resource management. Because the
Service Distributor is optional, two scenarios exist for request routing:
• Request routing among source servers without the presence of a Service Distributor
• Request routing among source servers when the Service Distributor is present
In either case, the SSL Infrastructure:
• Ensures that all available source servers or datafeeds share the load of responding to requests
for data from Triarch users.
• Ensures that if a copy of a given data item is already available on Triarch, then subsequent
requests for the same item are satisfied from this copy, rather than by re-requesting the item from
another datafeed. This avoids so-called cache duplication, where the same item is open
simultaneously on several datafeeds.
(Figure: request routing and response routing between the sink application, the Sink Distributor,
and the datafeeds.)
The SSL Infrastructure tracks this activity. The system is dynamic; i.e., a new workstation may be added without
restarting the system. When an additional source server of an existing type is added to the network,
the network protocol ensures that all workstations are automatically able to access it. When a new
service is added to the network, the protocol ensures that workstations are informed.
The network protocol suite supports “Keep Alive” messages from both sources and sinks. These
messages, which are sent out at regular intervals, are used to detect the presence and absence of
network nodes. Keep Alive messages are particularly important for source servers. Sinks, and the
Service Distributor when active, receive these messages from all active sources and use this
information in deciding where to address requests for data items.
If a source fails, the Keep Alive messages from that server are no longer seen by the sinks. After the
absence of several such messages, a sink presumes that the server has failed, and any data from the
failed source is immediately marked as stale. If another source server can provide this data, the data
will be retrieved from it. Otherwise, the system places the items in an automatic recovery queue and
maintains the stale status of this data until the source recovers or the application is no longer
interested in the data. The sinks stop sending requests to a failed source until that source server has
recovered and again sends Keep Alive messages.
NOTE: The length of time a sink (or source) will wait before assuming failure of a node is a config-
urable parameter. This time interval should be kept fairly short to ensure that any data that has
recently become stale is not presumed to be valid.
B.6.1 Overview
The SSL Library configuration file provides centralized configuration database functionality for all
SSL applications. This allows for consistent SSL Library behavior across many interrelated nodes
and applications. In addition, tuning and troubleshooting of SSL applications may be performed
without program recompilation. The SSL Library configuration file consists of a number of
configuration variable settings which may be specified on a per-installation, per-host, or
per-application-name basis.
ClientHost is the name of the machine where the application that requests
connection to the SSL Library will run.
AppName is a string which identifies the SSL application on the machine.
AppInstanceID is used to uniquely identify an instance of an application with a
specific AppName on the specific ClientHost.
SSLVersion is used to restrict the version number of the SSL Library to which the
entry applies. For SSL Library version 4.5, specify “SSLAPI_V45”.
ParamName is the name of the configuration variable.
test1*eventLogging:20000
B.6.3.1 eventLogging
Parameter Function: Automatically enables the logging of event handling failures with a
specified file size. The name of the log file is SSL_elog<pid>, where
<pid> is a unique numeric process ID assigned by the operating
system.
B.6.3.2 functionLogging
B.6.3.3 ipcConnectionTimeout
B.6.3.4 ipcRoute
Parameter Function: Allows specification of the SSL IPC server and host name pair.
Syntax 1:
*ipcRoute: ipcService [hostname...]
Syntax 2:
*name1.ipcRoute: ipcService [hostname1...]
*name2.ipcRoute: ipcService [hostname2...]
...
Syntax 1 specifies the TCP service port and host to which a sink
application will be connected when no value is set for
SSL_SNK_MO_IPC_ROUTE_NAME as part of sslSnkMount().
B.6.3.5 ipcRouteName
Parameter Function: Controls which ipcRoute entry will be used by sink applications that
do not specify an SSL_SNK_MO_IPC_ROUTE_NAME value with
sslSnkMount(). Only useful if multiple named ipcRoute entries are
present. If SSL_SNK_MO_IPC_ROUTE_NAME is specified, that
value overrides this parameter. This parameter is not used by source
applications. See ipcRoute for more information.
B.6.3.6 ipcServer
B.6.3.7 maxMounts
Default Value: 4 less than the per-process file descriptor limit set by the operating
system
Related Property: SSL_SRC_MAX_SESSIONS
Becomes Effective: sslSrcMount() call
Parameter Function: Controls the maximum number of sessions supported by a single
source channel.
B.6.3.8 maxProcessBuffers
B.6.3.9 maxRemountDelay
B.6.3.10 maxUnconfirmedMsgs
Default Value: 10
Related Property: SSL_OPT_MAX_UNCONFIRMED_MSG_COUNT
Becomes Effective: SSL_ET_SESSION_ACCEPTED event and sslSnkMount() call
Parameter Function: Controls the setting of the preferred number of unconfirmed IPC
messages.
B.6.3.11 messageTracing
B.6.3.12 messageTracingFlags
B.6.3.13 minRemountDelay
B.6.3.14 msgMountDelimiter
B.6.3.15 numInputBuffers
Default Value: 2
Related Property: None
Becomes Effective: SSL_ET_SESSION_ACCEPTED event or sslSnkMount() call
Parameter Function: Controls the number of 3300-byte SSL message input buffers used to
handle and store incoming messages per channel. There must be at
least two input buffers per channel.
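For example, the following entry (the value is illustrative only) would raise the per-channel input
buffer count for every application covered by the configuration file:
*numInputBuffers: 4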
B.6.3.16 numOutputBuffers
B.6.3.17 pingInterval
B.6.3.18 serviceId
Parameter Function: Controls the mapping of service names to numeric service IDs.
Required for every service so that the source application can support
Bandwidth Enhancement capabilities. Used when an SSL 4.0 sink
application is connected to an SSL 4.5 source application and the
sink application requires access locks. This parameter is used to
translate permission data format into access lock format. The serviceId
parameter must be specified as a suffix of the string containing
the actual service name; e.g.:
IDN_SELECTFEED.serviceID: 20.
This statement assigns a service ID of 20 to the IDN_SELECTFEED
service.
B.6.3.19 snkResponseThrottle
B.6.3.20 tcpQuickAck
Parameter Function: Enables or disables the TCP_QUICKACK socket option from the SSL
configuration file, sslapi.cnf. It is disabled by default and is effective
only on Linux platforms.
#
# Session timeout is 3X pingInterval
*pingInterval: 10
*msgMountDelimiter: +
#
# Disable message trace
#*messageTracingFlags: SSL_TRACE_IN SSL_TRACE_OUT SSL_TRACE_DATA SSL_TRACE_HEX
#*messageTracing: 20000
#
# Enable logging
*functionLogging: 20000
*eventLogging: 20000
C.2 Architecture
The SFC on TIB enables client applications to either publish data or consume data from a TIB
infrastructure using Rendezvous. A TIC must be used to cache published data. The SASS2 protocol
must be used with the TIB infrastructure. Figure C.1 illustrates the possible deployments that can be
achieved. The following sections describe the type of deployment that can be interfaced to by the
SFC API.
NOTE: It is necessary that a publisher provide a record template when operating with an SFC client,
or the TIC will not cache the data. A record template must also be supplied to support historical
Rendezvous-based market data products.
(Figure C.1: an SFC publishing application publishing to a TIC cache process.)
The record template number is supplied in the REC_TYPE field using the
setRecordTemplateNumber(RTRString *) method. See section 5.5.3 for details.
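As an illustration only, a call might look like the following sketch. PublishedRecord is a hypothetical
stand-in for the actual SFC class that exposes this method (see section 5.5.3), and the assumption
that RTRString can be constructed from a character string is not confirmed by this excerpt.

// Hypothetical sketch: assign a record template number to a published record.
template <class PublishedRecord>
void assignTemplateNumber(PublishedRecord& record)
{
    RTRString templateNumber("49");                   // illustrative value only
    record.setRecordTemplateNumber(&templateNumber);  // signature taken from this guide
}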
C.2.3 Configuration
The following things may need to be configured in the TIC configuration file to support the SFC:
1. Communicate via Rendezvous and the SASS2 protocol. (This can be done for both publishing
clients and subscribing clients.)
RV_TRANSPORT_IN TRUE;
RV_TRANSPORT_OUT TRUE;
RV_SASS2_SUPPORT TRUE;
2. If you use the RTRTIBFidDb (which is needed to download the data dictionary from the TIC),
you must configure a TIC to send the data dictionary to clients.
PROVIDE_DATA_DICTIONARY TRUE;
3. If you are publishing to a new service, you must configure a TIC to accept the messages for
that service.
BROADCAST
{
SOURCE “MYSOURCE”;
# ...
}
4. If any of the following configuration values are specified in the TIC configuration file:
RV2_OUTPUT_SERVICE
RV2_OUTPUT_NETWORK
RV2_OUTPUT_DAEMON
RV2_PUBLISH_SERVICE
RV2_PUBLISH_NETWORK
RV2_PUBLISH_DAEMON
then SFC publishers will need to include the following configuration in the SFC configuration
file:
TIBDistribution*pingInterval : 0
Also, to download a dictionary for the publisher, an RTRTIBFidDb will need to be created
external to the publishing service, using a different RTRTIBConnection. See section 5.5.4.3 for
an example of the workaround.
Other configuration may be needed depending on your environment and usage.
If TIB Entitlements 4 (ETIC) is configured to send entitlement data on a different multicast channel or
broadcast service port than the data from the TIC, then SFC will need to be configured to use the
global entitlement session. See section 5.5.10 for details on how to set the configuration parameters.
See section 4.8.1.3 for information on how to set the parameters programmatically.
D.2 Architecture
The SFC enables client applications to either publish data or consume data from a TIB SASS3
infrastructure using Rendezvous. The TIC must be version 10.x or later. Figure D.1 illustrates the
possible deployments that can be achieved.
(Figure D.1: a TIC and a DACS Server, each with its own RV Daemon; SFC TIB-based consumer
and publisher applications connect through their local RV Daemons and the DACS Daemon.)
The TIC in Figure D.1 can be replaced with a DTIC. To support a DTIC, the SFC’s service provider
discovery mechanism must be disabled. See section 5.5.6.2 for details. In the rest of this section, DTIC
can be interchanged with TIC.
The following sections describe the type of deployment that can be interfaced to by the SFC API.
NOTE: It is necessary that a publisher provide a record template when operating with an SFC client,
or the TIC will not cache the data. A record template must also be supplied to support historical
Rendezvous-based market data products.
D.3 TMF
The SFC enables client applications to consume data from either a SASS3 TIC or SASS3 TMF in a
TIB infrastructure. A TIC must be set up to publish images and updates separately. The TMF receives
the updates from the TIC, aggregates the updates, and publishes the updates out on the distribution
LAN. Figure D.2 illustrates a possible deployment that can be achieved.
SFC can set TMF connection parameters either with configuration (section 5.5.10.1) or
programmatically (section 4.8.1.3).
(Figure D.2: the TIC publishes images on port 7501 and updates on port 7505; the TMF receives
updates on port 7505 and republishes aggregated updates on port 7600; each component
communicates through its RV Daemon.)
D.4 Configuration
The following things may need to be configured to support SFC on your TIB SASS3 infrastructure:
1. Rendezvous and SASS3 must be enabled in the client communication configuration file
(typically tic.cf or dtic.cf).
RV_TRANSPORT_IN TRUE;
RV_TRANSPORT_OUT TRUE;
RV_SASS3_SUPPORT TRUE;
2. If you use the RTRTIBFidDb to download the data dictionary from the network, you must
configure the TIC to send the data dictionary to clients in the tic.cf.
PROVIDE_DATA_DICTIONARY TRUE;
3. If you are publishing to a new service, you must configure tic.cf to accept the messages for
that service.
BROADCAST
{
SOURCE “MYSOURCE”;
# ...
}
4. If any of the following configuration values are specified in the tic.cf:
RV3_OUTPUT_SERVICE
RV3_OUTPUT_NETWORK
RV3_OUTPUT_DAEMON
RV3_PUBLISH_SERVICE
RV3_PUBLISH_NETWORK
RV3_PUBLISH_DAEMON
then SFC publishers will need to include the following configuration in the SFC configuration
file:
*enablePingSubscription : false
Also, to download a dictionary for the publisher, an RTRTIBFidDb will need to be created
external to the publishing service, using a different RTRTIBConnection. See section 5.5.4.3 for
an example of the workaround.
5. The following configuration variables must be set in TMF’s configuration file:
SUPPORT_SASS3 TRUE
RV_UPDATE_SERVICE # match value from RTIC’s RV3_OUTPUT_SERVICE
RV_UPDATE_NETWORK # match value from RTIC’s RV3_OUTPUT_NETWORK
RV_INITIAL_SERVICE # match value from RTIC’s RV3_SERVICE
RV_INITIAL_NETWORK # match value from RTIC’s RV3_NETWORK
Services must be included in SUBJECTS. For example
SUBJECTS
{
ISFS...;
IDN_RDF...;
}
6. SFC’s updateService and updateNetwork configuration variables must match these values
in the TMF’s configuration file:
RV_OUT_SERVICE
RV_OUT_NETWORK
7. For the TIC to detect when broadcast (local) publishers die, the following configuration must be
set:
BC_DQA_MODE TRUE;
FEED_FAIL_TIMEOUT 120; # if a message is not received every 120
# seconds, stale messages are sent for all items.
This mechanism requires TIC 10.1 or later. The groupId must also be configured in the SFC
publisher. See section 5.5.5.1 for details.
Other configuration may be needed depending on your environment and usage.
E.2 Architecture
The SFC enables client applications to either publish data or consume data from an RMDS
infrastructure using Rendezvous. An RTIC must be used to connect RRDP and Rendezvous networks
and to cache published data. Figure E.1 illustrates the possible deployments that can be achieved.
(Figure E.1: datafeeds, feed handlers, and an SFC SSL-based publisher feed an RTIC; the RTIC
and a DACS Server connect through RV Daemons and the DACS Daemon to SFC TIB-based
consumer and publisher applications.)
The RTIC in Figure E.1 can be replaced with a DTIC for RMDS (RDTIC). To support an RDTIC, the
SFC’s service provider discovery mechanism must be disabled. See section 5.5.6.2 for details. In the
rest of this section, RDTIC can be interchanged with RTIC.
The following sections describe the type of deployment that can be interfaced to by the SFC API.
E.3 TMF
The SFC enables client applications to consume data from either an RTIC or TMF in an RMDS
infrastructure. An RTIC must be set up to publish images and updates separately. The TMF receives
the updates from the RTIC, aggregates the updates, and publishes the updates out on the distribution
LAN. Figure E.2 illustrates a possible deployment that can be achieved.
SFC can set TMF connection parameters either with configuration (section 5.5.10.1) or
programmatically (section 4.8.1.3).
(Figure E.2: datafeeds, feed handlers, and an SFC SSL-based publisher feed Source Distributors;
the RTIC publishes images on port 7501 and updates on port 7505; the TMF receives updates on
port 7505 and republishes aggregated updates on port 7600; each component communicates
through its RV Daemon.)
E.4 Configuration
The following things may need to be configured to support SFC on your RMDS infrastructure:
1. Rendezvous and SASS3 must be enabled in the client communication configuration file
(typically rtic.cf or rdtic.cf).
RV_TRANSPORT_IN TRUE;
RV_TRANSPORT_OUT TRUE;
RV_SASS3_SUPPORT TRUE;
2. If you use the RTRTIBFidDb to download the data dictionary from the network, you must
configure the RTIC to send the data dictionary to clients in the rtic.cf.
PROVIDE_DATA_DICTIONARY TRUE;
APPENDIX_A /var/triarch/appendix_a
ENUMTYPE_DEF /var/triarch/enumtype.def
3. If you are publishing to a new service, you must configure rtic.cf to accept the messages for
that service.
BROADCAST
{
SOURCE “MYSOURCE”;
# ...
}
4. If any of the following configuration values are specified in the rtic.cf:
RV3_OUTPUT_SERVICE
RV3_OUTPUT_NETWORK
RV3_OUTPUT_DAEMON
RV3_PUBLISH_SERVICE
RV3_PUBLISH_NETWORK
RV3_PUBLISH_DAEMON
then SFC publishers will need to include the following configuration in the SFC configuration
file:
*enablePingSubscription : false
Also, to download a dictionary for the publisher, an RTRTIBFidDb will need to be created
external to the publishing service, using a different RTRTIBConnection. See section 5.5.4.3 for
an example of the workaround.
5. If the following configuration is in the RTIC’s Market Data Hub configuration file (typically
triarch.cnf):
*sectorName : REC ! anything besides the default value “ANY”
and SFC includes the configuration
*serviceProvider : RTIC
then SFC must configure the default sector.
*defaultSector : REC
See section 5.5.6.2 for details.
6. The following configuration variables must be set in TMF’s configuration file:
SUPPORT_SASS3 TRUE
APPENDIX_A /var/triarch/appendix_a
ENUMTYPE_DEF /var/triarch/enumtype.def
RV_UPDATE_SERVICE # match value from RTIC’s RV3_OUTPUT_SERVICE
RV_UPDATE_NETWORK # match value from RTIC’s RV3_OUTPUT_NETWORK
RV_INITIAL_SERVICE # match value from RTIC’s RV3_SERVICE
RV_INITIAL_NETWORK # match value from RTIC’s RV3_NETWORK
Services must be included in SUBJECTS. For example
SUBJECTS
{
ISFS...;
IDN_RDF...;
}
7. SFC’s updateService and updateNetwork configuration variables must match these values
in the TMF’s configuration file:
RV_OUT_SERVICE
RV_OUT_NETWORK
8. For the RTIC to detect when broadcast (local) publishers die, the following configuration must
be set:
BC_DQA_MODE TRUE;
FEED_FAIL_TIMEOUT 120; # if a message is not received every 120
# seconds, stale messages are sent for all items.
This mechanism requires RTIC 10.1 or later. The groupId must also be configured in the SFC
publisher. See section 5.5.5.1 for details.
9. If any of the following configuration is set, then the RTIC may have been configured to support
legacy TIB applications.
FIELDS “tss_fields.cf”;
RECORDS “tss_records.cf”;
CI_SASS2_SUPPORT TRUE;
CI_TRANSPORT_OUT TRUE;
CI_TRANSPORT_IN TRUE;
RV_SASS2_SUPPORT TRUE;
RV2_*
When this configuration is set, the RTIC has been configured to look like a TIC. SFC
applications should be configured as if deployed on a TIB/SASS3 network. SFC does not
support the SASS2 or CI protocols to or from an RTIC.
Note that this configuration should only be used for specific migration situations.
Other configuration may be needed depending on your environment and usage.
Synopsis
DACS_CsLock (nm_channel, item_name, newLock, newLockLen, lockList, Dacs_Error)
int nm_channel;
char *item_name;
unsigned char **newLock;
int *newLockLen;
COMB_LOCK_TYPE lockList[];
DACS_ERROR_TYPE *Dacs_Error;
Description
This function is called to combine a list of access locks into a single composite DACS access lock. This
function is used to form complex access locks from constituent access locks that were created by a
previous call to the DACS_CsLock() or DACS_GetLock() functions.
Function Arguments
nm_channel
The nm_channel parameter is no longer used but it must be 0 (zero) for backward
compatibility.
item_name
The item_name parameter must be an empty string (“”). (This is also for backward
compatibility.)
newLock
The newLock parameter is a pointer to an unsigned char pointer that shall point to the
generated access lock for the “item” built by the source/compound server. If the pointer is
NULL on entry to the DACS_CsLock() function, space for the new access lock will be
dynamically allocated using malloc(). It is the responsibility of the caller of DACS_CsLock()
to free() the memory allocated when the newly generated access lock is no longer required.
newLockLen
The newLockLen parameter is updated to reflect the length of the newly generated lock. If
the newLock parameter does not point to a NULL pointer, then the source/compound server
application must supply the *newLockLen with the maximum size of the user-supplied
access lock pointer. If the *newLockLen parameter supplied by the source/compound server
application is less than the length required to fit the access lock, a PERM_FAILURE error is
returned.
lockList
The lockList parameter is a pointer to an array of pointers to the access locks of associated
component items to be combined by the source/compound server. A NULL pointer terminates
the list.
Dacs_Error
The Dacs_Error parameter points to a data structure of type DACS_ERROR_TYPE in which
returned errors will be placed.
Data Structures
The COMB_LOCK_TYPE data structure is defined to be as follows:
typedef struct {
int server_type;
char *item_name;
unsigned char *access_lock;
int lockLen;
} COMB_LOCK_TYPE;
where:
Return Values
This function returns DACS_SUCCESS if the function did not encounter a fatal error, and the
newLock and newLockLen parameters shall be populated.
DACS_FAILURE is returned if a fatal unrecoverable error was encountered. An ASCII explanation of
the error can be further determined by passing the Dacs_Error data structure to the DACS_perror()
function.
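The following sketch illustrates one way a source/compound server might call DACS_CsLock(),
based only on the synopsis and descriptions above. The item names, the helper function, and the
assumption that the DACS Library header declaring these names has been included are all
illustrative, and the zeroed final array entry is this sketch's interpretation of the NULL-terminated
lockList requirement.

#include <cstdlib>
#include <cstring>

// Sketch: combine two previously obtained component access locks (for
// example, locks returned by DACS_GetLock()) into one composite lock.
// Returns the malloc()'d composite lock, which the caller must free(),
// or 0 on failure.
unsigned char *combineLocks(int serviceType,
                            unsigned char *lockA, int lockALen,
                            unsigned char *lockB, int lockBLen,
                            int *combinedLen)
{
    COMB_LOCK_TYPE components[3];
    std::memset(components, 0, sizeof(components)); // entry [2] stays zeroed
                                                    // to terminate the list
    components[0].server_type = serviceType;
    components[0].item_name   = (char *)"ITEM.A";   // hypothetical component item
    components[0].access_lock = lockA;
    components[0].lockLen     = lockALen;

    components[1].server_type = serviceType;
    components[1].item_name   = (char *)"ITEM.B";   // hypothetical component item
    components[1].access_lock = lockB;
    components[1].lockLen     = lockBLen;

    unsigned char *newLock = 0;   // NULL: let DACS_CsLock() malloc() the lock
    *combinedLen = 0;
    DACS_ERROR_TYPE dacsError;

    if (DACS_CsLock(0, (char *)"", &newLock, combinedLen,
                    components, &dacsError) != DACS_SUCCESS)
        return 0;                 // details can be reported via DACS_perror()

    return newLock;               // caller frees when no longer required
}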
G.2 DACS_GetLock()
Synopsis
DACS_GetLock (service_type, ProductCodeList, LockPtr, LockLen, Dacs_Error)
int service_type;
PC_LIST *ProductCodeList;
unsigned char **LockPtr;
int *LockLen;
DACS_ERROR_TYPE *Dacs_Error;
Description
This function is used to make a DACS access lock from a list of entitlement codes (i.e., codes specified
by the data vendor on which all permissioning is based).
Function Arguments
service_type
The service_type parameter is used to pass the service type of the server that is to be
encoded into the DACS access lock. This value is a unique numeric ID assigned to a Triarch
service by the service’s serviceId parameter in the global Triarch configuration file.
ProductCodeList
The ProductCodeList parameter is used to pass the list of entitlement codes which shall be
encoded into the DACS access lock.
LockPtr
The LockPtr parameter is a pointer to an unsigned char pointer that shall point to the
generated access lock that represents the entitlement code list in DACS lock format. If the
pointer is NULL on entry to the DACS_GetLock() function, space for the new access lock will
be dynamically allocated using malloc(). It is the responsibility of the caller of
DACS_GetLock() to free() the memory allocated when the newly generated access lock is
no longer required.
NOTE: If the DACS_GetLock() function returns an error, then no space for the access
lock will have been allocated.
LockLen
The LockLen parameter is a pointer to an int. This parameter is updated to reflect the length
of the newly generated access lock. If the LockPtr parameter does not point to a NULL
pointer on entry, then the user must supply the LockLen with the maximum size of the user-
supplied access lock buffer. If the user-supplied LockLen is less than the length required to
fit the access lock, then a DACS_FAILURE error is returned.
Dacs_Error
The Dacs_Error parameter points to a data structure of type DACS_ERROR_TYPE in which
returned errors will be placed.
Data Structures
The PC_LIST data structure is defined to be as follows:
typedef struct {
char operator;
unsigned short pc_listLen;
unsigned long pc_list[/* pc_listLen */];
} PC_LIST;
where:
operator specifies how the entitlement codes in the list are combined. The
vertical bar character ‘|’ (for an “OR” list) implies that the user of the
item only needs access to any one of the entitlement codes
contained within the access lock.
pc_listLen is used to flag the number of entries that are contained within the
pc_list array.
Return Value
This function returns DACS_SUCCESS if the function did not encounter a fatal error.
DACS_FAILURE is returned if a fatal unrecoverable error was encountered. An ASCII explanation of
the error can be further determined by passing the Dacs_Error data structure to the DACS_perror()
function.
G.3 DACS_CmpLock()
Synopsis
DACS_CmpLock (Lock1Ptr, Lock1Len, Lock2Ptr, Lock2Len, Dacs_Error)
unsigned char *Lock1Ptr;
int Lock1Len;
unsigned char *Lock2Ptr;
int Lock2Len;
DACS_ERROR_TYPE *Dacs_Error;
Description
This function is used to compare two access locks for equality. This function can be used to verify
whether or not a new access lock is different from a previously generated access lock, thus reducing
the possible overhead incurred in redistribution of an unchanged access lock.
Function Arguments
Lock1Ptr
The Lock1Ptr parameter is a pointer to the first access lock that is to be compared.
Lock1Len
The Lock1Len parameter is an integer. This parameter is the length of the first access lock
that is to be compared.
Lock2Ptr
The Lock2Ptr parameter is a pointer to the second access lock that is to be compared.
Lock2Len
The Lock2Len parameter is an integer. This parameter is the length of the second access
lock that is to be compared.
Return Value
This function returns DACS_SUCCESS if the function did not encounter a fatal error and the access
locks were logically identical.
DACS_DIFF is returned if a fatal error was not encountered but the two access locks were logically
different.
DACS_FAILURE is returned if a fatal unrecoverable error was encountered. An ASCII explanation of
the error can be further determined by passing the Dacs_Error data structure to the DACS_perror()
function.
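As an illustration of the intended use described above, the following hedged sketch redistributes an
access lock only when it differs from the previously distributed one; the helper function name is
hypothetical, and the DACS Library header declaring these names is assumed to be included.

// Sketch: returns true when the newly generated lock should be distributed,
// i.e., when it is logically different from the previous lock (or when the
// comparison itself fails and the conservative answer is chosen).
bool lockNeedsRedistribution(unsigned char *oldLock, int oldLockLen,
                             unsigned char *newLock, int newLockLen)
{
    DACS_ERROR_TYPE dacsError;
    int rc = DACS_CmpLock(oldLock, oldLockLen, newLock, newLockLen, &dacsError);

    if (rc == DACS_SUCCESS)       // locks are logically identical
        return false;
    if (rc == DACS_DIFF)          // locks are logically different
        return true;
    return true;                  // DACS_FAILURE: redistribute to be safe
}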
G.4 DACS_perror()
Synopsis
DACS_perror (err_buffer, buffer_len, text, Dacs_Error);
unsigned char *err_buffer;
int buffer_len;
unsigned char *text;
DACS_ERROR_TYPE *Dacs_Error;
Description
This function is used to create a textual message describing the last error generated by a DACS
Library call. The message is saved to the supplied buffer. The error number is taken from the
Dacs_Error->dacs_errno variable supplied by the user application when a library function returns
DACS_FAILURE.
Function Arguments
err_buffer
The err_buffer parameter contains the destination address for the generated error string. If
the supplied buffer is smaller than the generated error string, DACS_FAILURE will be
returned.
buffer_len
The buffer_len is a variable that indicates the maximum size of the err_buffer.
text
The text parameter is a pointer to a null-terminated character string that is placed into the
buffer before the DACS error message. This string and the error message will be separated
by a colon and a blank space. If an empty character string is specified, only the DACS Library
error message will be placed into the err_buffer.
Dacs_Error
The Dacs_Error parameter points to a data structure of type DACS_ERROR_TYPE into
which returned errors shall be placed.
Return Values
This function returns DACS_SUCCESS if the function did not encounter a fatal error.
DACS_FAILURE is returned if a fatal unrecoverable error was encountered.
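A hedged usage sketch follows; the helper function name, the buffer size, and the inclusion of the
DACS Library header and <cstdio> are assumptions of this example.

#include <cstdio>

// Sketch: print the last DACS Library error, prefixed with a caller-supplied
// context string, following the DACS_perror() synopsis above.
void reportDacsError(DACS_ERROR_TYPE *dacsError, const char *context)
{
    unsigned char message[256];                       // arbitrary size
    if (DACS_perror(message, (int)sizeof(message),
                    (unsigned char *)context, dacsError) == DACS_SUCCESS)
        std::fprintf(stderr, "%s\n", (const char *)message);
}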