PRODML Technical Usage Guide v2.2 Doc v1.0
Acknowledgements
Sincere thanks to Laurence Ormerod for his many years of dedication to PRODML development,
project management, and documentation.
Amendment History
Standard Version   Doc Version   Date           Comment
2.2                1.0           May 16, 2022   For a summary of changes, see PRODML_v2.2_Release_Notes_v1.0.PDF.
Table of Contents
No internal table of contents was generated inside the document. Use the table of contents pane in
Acrobat by clicking on the icon shown below, which is located at the far left in an open PDF
document.
If viewing this document in Microsoft Word, on the View tab, in the "Show" box, check "Navigation
Pane", which displays a table of contents pane, showing all headings in the document.
1 Introduction to PRODML
PRODML is an upstream oil and gas industry standard for the exchange of data related to production
management. This domain covers the flow of fluids from the point where they enter the wellbore to the
point of custody transfer, together with the results of field services and engineering analyses required
in production operation workflows.
The production phase of an oil or gas field covers a wide range of data transfer requirements.
PRODML covers some of these requirements but does not yet present a coherent view of the whole
domain and PRODML’s role within it. A “top-down” full development of such a large scope of data
transfers is too large a task for an essentially voluntary effort. Hence, PRODML has followed a
“bottom-up” development approach, driven by member company project requirements.
PRODML has been developed by Energistics members—which include oil companies, oilfield
service companies, software and technology companies, regulatory agencies and academic
institutions—through the PRODML special interest group (SIG), and its various work groups and
project teams.
Segment                         Scope
Production volumes              Internal use, to regulators, to partners.
Completion and well services    Hardware and operations over producing life.
Surveillance                    Monitoring production in the wellbore, the reservoir (near wellbore), and on the surface.
Optimization                    Decision support for both design and operational optimization.
NOTE: As described in Section 1.1.2, completion and well services is now incorporated into WITSML.
However, per the integration of the Energistics domain standards (see Section 2.4), WITSML data objects
are easily available to users of PRODML.
These segments are defined in more detail in Figure 1-1, which shows some of the main data scope
requirements within each segment. This figure is intended to be illustrative rather than exhaustive. For
each item of data scope, the v2.x coverage of PRODML is shown, together with a high-level estimate
of current usage—the two together giving some idea of maturity. The bands of high-medium-low
scores have the following meanings:
1. PRODML Coverage
a. High: strong coverage where a multi-company group of domain experts has specified the
requirement and validated the model.
b. Medium: reasonable coverage where significant work has been done, but where the
requirement was not the primary aim of the effort, or where the effort was focused on one
aspect of the requirements, not general coverage.
c. Low: a low level of coverage, either where the scope required is only partially covered or not
at all, or where work has been donated and incorporated but based on one company’s views
with no peer review.
2. PRODML Usage
a. High: widespread uptake over a material number of companies/situations. The standard is
therefore well-tested and known to be complete.
b. Medium: material uptake but in a limited number of cases, e.g. for one purpose, or by a small
number of companies. The standard is therefore workable but may need adapting for high
usage.
c. Low: usage limited to one or two companies, or not used at all. The standard therefore may
need work and adaption before it can be used more widely.
Segment                        Required scope                                           PRODML Coverage    Usage within PRODML
Production volumes             Volume reports daily                                     High               Medium
                               Volume reports monthly                                   High               Medium
                               Operational Reports                                      Medium             Low
                               Networks                                                 Medium             Low
Completion and well services   Initial completion                                       High               Medium
                               Workover and well service change history                 High               Medium
                               Completion correct details at any instant                High               Medium
                               Well services - details of each job                      Low                Low
                               Artificial Lift equipment                                Low                Low
                               Workover - details of each job                           Medium (WITSML)    Low
Surveillance                   Well tests (steady state production tests)               High               Medium
                               Producing well performance                               Medium             Low
                               Downhole point measurements - time series                Low                Low
                               Gathering system point measurements - time series        Low                Low
                               Surface equipment surveillance                           Low                Low
                               Downhole distributed measurements                        High               Medium
                               Transient well tests/formation tests                     High               Low
                               Gradient surveys/other artificial lift surveillance      Low                Low
                               Production logging                                       Low                Low
                               Fluid samples and analysis                               High               Low
Optimization                   Well simulation (inc. operational and design usage)      Low                Low
                               Network simulation (inc. operational and design usage)   Low                Low
                               Reservoir to production simulation and optimization      Low (RESQML)       Low
Figure 1-1. High-level scope of sub-domains within production segments, showing PRODML maturity.
The following sections describe in more detail the capability and usage within the segments of
production data outlined above.
Production Volumes
The origins of PRODML were focused on the use of the product volume data object. This has been
adopted on a large scale on the Norwegian Continental Shelf (NCS) where all producing fields report
to partners and the regulator using this standard. For more information, see Chapter 28.
The product volume data object has been used by other member companies for volume reporting but
adoption has not been large, and feedback has been that this is owing to the complexity of this model.
The simple product volume reporting capability (SPVR) has been developed and added in version 2.x.
The aim throughout this development has been to keep the model as simple as possible, while still
covering most, if not all, requirements.
The coverage of requirements in this area is therefore believed to be high using SPVR or product
volume if the most flexible approach is needed. Maturity is medium, since uptake outside the NCS is
low.
Surveillance
Production surveillance activities consist of a wide variety of measurement and analysis methods.
PRODML coverage of this whole universe of data types is limited. The areas highlighted above as
being covered to a high degree do have high-detail models and these are all described in this
document.
Usage of some of these (well performance, distributed fiber optics, fluid sampling for example) is
shown as low because these are recent developments, either in version 2.x or in version 1.3, which
preceded it. There does appear to be significant interest in uptake of these sub-domains.
Optimization
Within the production management domain, there is somewhat of a hierarchy of requirements. For
example: it is hard to report surveillance without a description of what is being measured, and it is
hard to optimize something without both a description of what is being optimized, and the
measurements which are input to the optimization.
The optimization segment has low scores for its sub-domains simply because the foundational data
models are not yet sufficiently developed to support major optimization workflows. For example: well
simulation (commonly known as nodal analysis) needs a model covering the following elements:
• The physical hardware in the well—available through the Completion segment since 2014.
• The paths by which fluid flows through the hardware—no model developed.
• The properties of the connection to the reservoir—available through the Completion segment.
• The flow properties of the near-wellbore reservoir and of the connection to the reservoir—no
model developed for well simulation, though gridded properties available in RESQML.
• The fluid properties flowing in the system—available from v2.x in 2016.
• The control of flow—chokes, artificial lift, etc.—no model developed.
• The reporting of well simulation results (flowrates, sensitivities, flowing gradients, etc.)—no model
developed.
Much of the input data can be modelled using other segments of PRODML and other Energistics
domain standards, but the specific data needed to run a simulation using this base physical data has
not yet been modelled.
Therefore, in general, the coverage of optimization is low, being limited to partial solutions. The
segment is a good opportunity for future development.
Audience Assumptions
This guide assumes that the reader has a good general understanding of programming and XML, and
a basic understanding of the exploration and production (E&P) domains and related workflows.
1.5 Resource Set: What Do You Get When You Download PRODML?
Each of the Energistics domain standards—RESQML, WITSML or PRODML—is a set of XML
schemas (XSD files) and other resources freely available to download and use from the Energistics
website. Energistics common is a set of data objects shared by the domain standards. To download
the standards, go to https://www.energistics.org/download-standards/.
Figure 1-2. The contents of the energyML download, containing all three MLs and the shared version of
Energistics common.
Keeping these folders in this relative location means that the schemas all reference the correct paths
to common elements. All three MLs have the same folder structure, as seen in Figure 1-3.
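For example (a hedged sketch only: the namespace URI, relative path, and file name below are assumptions and are not copied from the download), a PRODML schema imports the shared common schemas by a relative path of this general form, which only resolves if the folder layout above is preserved:
  <!-- Hypothetical import; check the actual xsd_schemas files in your download for the real path. -->
  <xs:import namespace="http://www.energistics.org/energyml/data/commonv2"
             schemaLocation="../../common/v2.3/xsd_schemas/EmlAllObjects.xsd"/>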
PRODML is a set of XML schemas (XSD files) and other technologies freely available to download
and use from the Energistics website. When you download PRODML, you get a zip file structured as
shown in Figure 1-4, which contains the resources listed in the tables below in these two main groups:
• PRODML-specific (Section 1.5.1). The schemas and documentation specific to PRODML. Note
in Figure 1-4 (left) (red square):
− ProdmlAllObjects.xsd is a schema that includes all of the other schemas shown.
- The ProdmlCommon schema includes objects shared by other PRODML data objects.
• Energistics Common Technical Architecture (Section 1.5.2) (in the common->v2.3 folder)
Components of the Energistics Common Technical Architecture (CTA), which is a set of
specifications, schemas, and technologies shared by all Energistics domain standards.
To download the latest version of this standard, visit: https://www.energistics.org/download-standards/
Figure 1-4. You download the PRODML standard as a zip file containing all three MLs from the Energistics
website. It contains the resources described in the two tables below. The figure shows (left) the PRODML
schemas (xsd files) and (right) the Energistics common schemas.
PRODML-Specific Resources
The PRODML download installs the folder structure described above. Within the prodml\v2.2 folder,
you will find the resources listed in the table below.
Document/Resource Description
Prodmlv2.2 (folder)
1. xsd_schemas (folder) Schemas for all of the data objects in PRODML v2.x. This
folder contains all top-level objects outlined in Section 2.2,
which map onto the sub-domains of PRODML as shown in
Figure 2-2.
2. ancillary (folder) Contains supporting material (optional).
doc (folder) contains the following
documents:
3. PRODML Technical Usage Guide Provides an overview of PRODML and details about the
(This document.) sub-domains and supporting data objects.
If just getting started with PRODML, begin with this
document.
4. PRODML Technical Reference Guide Generated from the UML model, lists and defines all
elements in the data model (for easy reference).
5. PRODML Product Volume, Network Model & Documentation for previously published versions (v1.x) of
Time Series Usage Guide these PRODML data objects:
• Product Volume
• Product Flow Model (Network)
• Time Series
Some minor updates have been made to this document for
v2.x, and some of the material that is most relevant to v2.x
usage of these data objects is incorporated into the
PRODML Technical Usage Guide.
6. PowerPoint presentations Presentations for the following PRODML sub-domains:
• Simple Product Volume Reporting (SPVR)
• DAS
• DTS
• PVT
• PTA
7. docexample (folder) This folder contains sub-folders for each of the following
sub-domains. Each folder contains data files for extended
worked examples, which are also explained in detail in the
corresponding chapters of this PRODML Technical Usage
Guide.
• DAS
• DTS
• PVT
• SPVR
Documentation Updates
Energistics is committed to providing quality documentation to help people understand, adopt, and
implement its standards. As uptake of the standards increases, lessons learned, best practices, and
other relevant information will be captured and incorporated into the documentation. Updated versions
of the documentation will be published as they become available.
Figure 2-1. The Energistics family of standards includes WITSML, RESQML and PRODML, which share
the Energistics Common Technical Architecture.
PRODML comprises a set of XML schemas covering a range of sub-domains of the whole scope of
production management in upstream oil and gas. Additional components of PRODML are located
within the Energistics Common Technical Architecture. PRODML is available as a downloaded zip
file, which installs the schemas, together with examples and documentation, into a folder structure
which includes the CTA elements.
This chapter describes:
• An overview of the main CTA components and the purpose of each.
• PRODML schema top-level data objects and the mapping of these onto the sub-domains of
production management as outlined in Chapter 1.
• Usage of the elements of CTA in PRODML v2.x.
• Common expected usage of other Energistics domain standards by PRODML.
Top-Level Object Schema           PRODML Sub-Domain
ProdmlAllObjects.xsd              PRODML General
ProdmlCommon.xsd                  PRODML General
Report.xsd                        PRODML General
ReportingEntityModel.xsd          SPVR *
SimpleProductVolume.xsd **        SPVR *
  AssetProductionVolumes          SPVR *
  ProductionWellTest              SPVR *
  WellProductionParameters        SPVR *
  TerminalLifting                 SPVR *
  Transfer                        SPVR *
FluidAnalysis.xsd                 PVT
FluidCharacterization.xsd         PVT
FluidSample.xsd                   PVT
FluidSampleAcquisitionJob.xsd     PVT
FluidSampleContainer.xsd          PVT
FluidSystem.xsd                   PVT
DtsInstalledSystem.xsd            DTS
DtsInstrumentBox.xsd              DTS
DtsMeasurement.xsd                DTS
FiberOpticalPath.xsd              DTS and DAS
DasAcquisition.xsd                DAS
FlowTestJob.xsd                   Pressure Transient Testing
FlowTestActivity.xsd              Pressure Transient Testing
PressureTransientAnalysis.xsd     Pressure Transient Testing
Deconvolution.xsd                 Pressure Transient Testing
PreProcess.xsd                    Pressure Transient Testing
ProductVolume.xsd                 Product Volume
ProductFlowModel.xsd              Product Flow Model
ProductionOperation.xsd           Production Operation
TimeSeriesData.xsd                Time Series
TimeSeriesStatistic.xsd           Time Series
* SPVR = Simple Product Volume Reporting
** SimpleProductVolume is abstract; the SPVR objects listed beneath it are derived from it.
The Report top-level object is a “header” type object, which has been retained in this version of
PRODML for continuity, but in version 2 it is less likely to be needed.
Having selected the sub-domain in which to work, you can open the schema files and/or import the
.XMI file containing the UML model (see Section 1.5.1) into a UML tool for visualization.
You should do this in conjunction with reading the rest of this chapter, and the chapters of this
document concerned with the relevant sub-domain. You can also navigate to the appropriate example
files (again, for details, see Section 1.5.1).
Value Types
In terms of the Energistics common data types, PRODML makes extensive use of a package of data
types called Value Types. These types cover measurements where the measurement
conditions act as a qualifier to the measured value:
• Pressure: whether absolute or relative/gauge pressure has been measured; if relative or gauge,
then the reference/atmospheric pressure must/may be provided.
• Volume, Density and Flowrate: where the pressure and temperature conditions of the
measurement have a profound impact on the underlying “value” of the measurement. A choice is
available—either to supply the pressure and temperature of measurement, or to choose from a
list of standards organizations’ reference conditions. Note that the enum list of standard
conditions is extensible, allowing for local measurement condition standards to be used.
The four types are called “xxxValue” where “xxx” is one of the four measurement types listed
immediately above. The PressureValue is shown in Figure 2-3, and the other three types in Figure
2-4.
[Figure 2-3 (class PressureValueTypes): PressureValue contains an AbstractPressureValue (an AbstractMeasure
with a uom attribute of type PressureUom and an optional referencePressureKind attribute of type
ReferencePressureKind), plus ReferencePressure; the ReferencePressureKind enumeration values are absolute,
ambient, and legal.]
[Figure 2-4 (class OtherValueTypes): the MeasurementPressureTemperature association is a choice of
ReferenceTemperaturePressure (ReferenceTempPres: ReferenceConditionExt) or TemperaturePressure
(Temperature: ThermodynamicTemperatureMeasure, Pressure: PressureMeasure).]
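As a hedged illustration of these value types (the nesting below is assumed for the sketch and is not a validated PRODML instance; the names come from the classes summarized in Figures 2-3 and 2-4, and the unit symbols and numbers are placeholders), a volume can either carry its actual measurement conditions or point to a reference condition:
  <!-- Sketch only: a volume qualified by the actual temperature and pressure of measurement... -->
  <Volume uom="ft3">1250.0</Volume>
  <MeasurementPressureTemperature>
    <Temperature uom="degF">75</Temperature>
    <Pressure uom="psi">450</Pressure>
  </MeasurementPressureTemperature>
  <!-- ...or, alternatively, qualified by a standards organization's reference conditions: -->
  <MeasurementPressureTemperature>
    <ReferenceTempPres><!-- a value from the extensible ReferenceConditionExt enumeration --></ReferenceTempPres>
  </MeasurementPressureTemperature>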
Extending Enums
In Energistics standards, enumerations can be extended by providing an authority and the new enum
value, as shown here:
Authority:enum
For example: CompanyX:newenum
Enumerations which can be extended have type names ending in “Ext” (for example, ReferenceConditionExt
and ProductFluidKindExt). Other enumerations cannot be extended.
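As a brief sketch (the XML serialization shown is illustrative only, not taken from a validated instance), an extended value of the ProductFluidKindExt type used later in SPVR might be carried as:
  <!-- Sketch of an extended enumeration value using the Authority:enum pattern described above. -->
  <ProductFluidKind>CompanyX:newenum</ProductFluidKind>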
The WITSML and RESQML resources are included in the downloaded zip file and can be found in
their folders as shown in Figure 1-2.
Acknowledgement
The detailed input into and leadership of this project by the following companies is gratefully
acknowledged: P2 Energy Solutions, Oxy, Accenture, Energysys, and Halliburton. The contributions
in terms of helping set requirements from the following companies are also highly appreciated: IHS,
WellEZ, Infosys, Anadarko, Baker Hughes, Chevron, QEP, Schlumberger, and Statoil.
3.1 Overview of Simple Product Volume Reporting, the NAPR Project, and the
earlier PRODML models devised for the Norwegian Continental Shelf
NAPR stands for North America Production Reporting. The objective of the NAPR project was to
analyze and extend (if needed) PRODML schemas necessary for the reporting of daily and monthly
production related data to non-operating joint venture (NOJV) partners, and royalty owners in North
America. The result is the Simple Product Volume package of PRODML, first released as part of
PRODML v2.x.
Although the focus of the project has been North America requirements, the hope is that the data
standard can be used worldwide.
For a quick overview or to be able to make a presentation to colleagues, see the slide set: Worked
Example SPVR.pptx which is provided in the folder: energyml\data\prodml\v2.x\doc in the Energistics
downloaded files.
Note that the Simple Product Volume capability is, as the name suggests, a standard which aims to
cover the minimum requirements for volume reporting. It was developed following widespread
feedback that the capabilities within earlier versions of PRODML, whilst comprehensive, were also
complex and therefore hard to understand and to implement.
The earlier version of volume reporting does remain part of PRODML. It comprises principally the
data object Product Volume, and the supporting objects Product Flow Model, and Production
Operation. This reporting standard has been adopted (and extensively developed) for use on the
Norwegian Continental Shelf (NCS). The NCS reporting is based on version 1.x of PRODML. The
data objects concerned have been migrated to Energistics CTA standards but otherwise left
unaltered, for compatibility with previous work. A fuller description can be found in Chapter 28,
Product Volume; Chapter 29, Product Flow Model; Chapter 31, Production Operation.
[Figure: schematic of the asset model, showing reservoirs; production wells and injectors; well production and
injection streams; water, gas, and sand dispositions; storage facilities; and the asset boundary across which
oil, gas, and condensate flow out of the operator’s scope of control (export to refineries, etc.).]
3. On an event-driven basis, transfer other important ancillary data, e.g., well test data, well
production parameter changes, tanker lifting, or transfers of product in and out of the asset.
4. Once per periodic or event-driven transfer, describe the fluids whose volumes are being reported
(e.g., oil, gas, water and any details of these).
5. If the configuration of the asset changes, transfer the changes/updates. For example, if a new
well is completed, the sender must send to the receiver the new well information and its place in
the reporting entity hierarchy.
Figure 5-1. Overall data model for Simple Product Volume showing reporting entities section (orange
border), periodic volumes section (green border) and event-driven section (blue border).
The Reporting Entity object is important because all the volume and event-driven data transfers use it
to refer to the “thing” whose properties they are reporting. Figure 5-2 shows a UML diagram of the
data model (simplified to the top levels); it shows how the other types of data objects reference back
to the reporting entity for identity. (NOTE: The UML model is used to produce the PRODML schemas
and is provided (as an XMI file) when you download the standard.)
Figure 5-2. Top-level UML model diagram showing central role of reporting entity.
• The behavior of reporting entities according to Kind is not enforced; for example, the schema has
no mechanism to restrict well tests being associated only to reporting entities whose kind is “well”.
• Aliases can be used within a reporting entity to enable alternative identifiers to be used (e.g., a
well’s name in multiple systems, if they differ).
• Although named “volumes” in line with industry usage, different quantities may be reported, such
as volume, mass, and energy content.
• Where an actual volume measurement is reported, this will of course be dependent on the
measurement conditions of temperature and pressure. The purpose of SPVR is to transfer
volume and similar quantities for internal, partner and regulator reporting, not to transfer field
operational measurements. Hence the assumption is made that all volumes have been expressed
at the same temperature-pressure conditions for the current transfer. The Standard Conditions
element is mandatory in all the SPVR objects. A choice is available – either to supply the
temperature and pressure for all the volumes which follow, or to choose from a list of standards
organizations’ reference conditions. Note that the enum list of standard conditions is extensible,
allowing for local measurement condition standards to be used. See Figure 2-4, which shows the
Abstract Temperature Pressure class: this is the type for standard conditions in all SPVR objects.
• Use is also made within SPVR of the Pressure Value type, allowing absolute or relative
pressures. See Section 2.3 and Figure 2-3.
Fluid Quantities
The essence of volume reporting is to report the quantities of fluids produced, sold, injected, etc.
Whenever this is done, the element used is the Abstract Product Quantity (Figure 5-3). The Abstract
Product Quantity takes one of two forms: Product Fluid or Service Fluid. Both of these have a Kind,
which is an enumerated list of common fluid types, e.g. “Oil-gross” or “dry gas” for product fluid, and
e.g., “demulsifier” or “methanol” for service fluid.
Optionally, a reference can be made to a specific fluid type in the Fluid Component Catalog (see
Section 5.3.2), which will be required if the physical properties of the fluid are required to be reported.
Optionally for a Product Fluid, an Overall Composition can be added. This may be used, for example,
to be able to report a quantity of gas (as Product Fluid Kind “dry gas”) with an associated composition
(fraction methane, ethane…).
Note that when a volume (or elsewhere, a density or flowrate) is reported, the data kind is “xxxValue”
(where xxx is volume, etc.). Use of this construct allows for the transfer of measurement condition
pressure and temperature (Figure 5-4). BUSINESS RULE: Unless a pressure and temperature are
specified for a given quantity, it is assumed to have been corrected to the Standard Conditions which
are reported once at the start of the file. This allows items like flow meters to be reported with actual
metered volume, and the corresponding actual temperature and pressure of measurement.
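Putting these pieces together, a single reported quantity might look like the following sketch (element names follow Figure 5-3, but the nesting, uid values, catalog reference, units, and numbers are hypothetical, and the fragment is not schema-validated):
  <!-- Sketch only: a ProductFluid quantity of kind "dry gas", reported as an actual metered volume
       with its measurement temperature and pressure, and optionally pointing to an entry in the
       Fluid Component Catalog via productFluidReference ("ng1" is a hypothetical uid). -->
  <ProductFluid uid="q1" productFluidReference="ng1">
    <ProductFluidKind>dry gas</ProductFluidKind>
    <Volume uom="ft3">120000</Volume>
    <MeasurementPressureTemperature>
      <Temperature uom="degF">75</Temperature>
      <Pressure uom="psi">450</Pressure>
    </MeasurementPressureTemperature>
  </ProductFluid>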
[Figure 5-3 (UML): AbstractProductQuantity, with elements Volume: VolumeValue [0..1], Mass: MassMeasure [0..1],
Moles: AmountOfSubstanceMeasure [0..1] and a uid: String64 attribute, specialized as ProductFluid
(ProductFluidKind: ProductFluidKindExt, GrossEnergyContent and NetEnergyContent: EnergyMeasure [0..1],
productFluidReference: String64 [0..1]) and ServiceFluid (ServiceFluidKind: ServiceFluidKindExt,
serviceFluidReference: String64 [0..1]). ProductFluid optionally (0..1) carries a
ProdmlCommon::OverallComposition (Remark: String2000 [0..1]) containing 0..* OverallComponent entries of type
ProdmlCommon::FluidComponent (MassFraction, MoleFraction, KValue, fluidComponentReference: String64). The
MeasurementPressureTemperature choice of ReferenceTemperaturePressure (ReferenceTempPres: ReferenceConditionExt)
or TemperaturePressure (Temperature, Pressure) is repeated from Figure 2-4.]
Figure 5-4. "Value Types" where measurement value depends on measurement conditions (P and T).
The Fluid Component Catalog is transferred only once per asset production volumes data object; all volumes
contained within reference the appropriate components from the catalog.
Within an asset production volumes data object, each instance of a quantity refers to one of the
members of the Fluid Component Catalog in that same asset production volumes, using the UID for
reference.
The fluid components in the catalog include:
• stock tank oil
• natural gas
• formation water
• pure fluid component
• plus fluid component
• pseudo fluid component
• sulfur component
The first three are aimed at black oil descriptions and the second three at compositional descriptions;
however, the two kinds may be mixed: quantities of produced fluid can be described in either
black oil or compositional terms, or both. The schema shows the attributes of each type of fluid
component.
Note that the Fluid Component Catalog and the Fluid Components are contained in the PRODML
Common package, and are shared with the PVT data objects, so are compatible with lab analysis
reports created using PRODML v2.
Note that the two kinds of movement in or out of the asset can be reported either as elements within the
periodic asset production volumes object, or in an event-driven manner with standalone Terminal
Lifting and Transfer data objects (see Section 5.5).
The worked example in Chapter 6 shows many of these different volume types.
Both the Transfer object and the Terminal Lifting object use the same reference to fluid components in a
fluid component catalog as does the Asset Production Volumes (described in Section 5.3.1). Each
instance of transfer and terminal lifting has its own fluid component catalog so that it can behave as a
standalone data object. However, where they are used within Asset Production Volumes (see above),
then they share the fluid component catalog of the parent asset production volumes.
The asset description is completed by Figure 6-2, which shows the reporting entities outside the
asset’s direct control and which play a part in transfers and terminal Liftings. These are a separate
Lease which transfers production to Lease X, and an oil tanker, Barge 99, which after a tanker lifting,
transports production to Terminal Z.
XML Objects
The set of reporting entity data objects is listed in Figure 6-3. The left side of the figure lists the entities by kind and name, and the
right side shows a view of the folder containing one XML reporting entity data object per entity. In the
worked example, these files can be found in the sub-folder “ReportingEntities”.
Note that this set of data is expected to be transferred only at the start of the reporting relationship.
Any new entity (for example, a new well) requires that a new XML data object be transferred when
needed.
Figure 6-3. Data objects in the test data set: reporting entities.
Figure 6-4 shows the details of one reporting entity object. Important to note are:
• The UUID is used in the other objects (hierarchies, asset production volumes, production well
test, well production parameters, transfer and terminal lifting) to identify the reporting entity(ies)
concerned (blue box).
• Title is the element in the Energistics Common Citation class used to represent the name (red oval). A sketch of this reference pattern follows below.
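In that sketch (the wrapper and child element names for the reference are placeholders; only the roles of the UUID and the Title come from the bullets above), another data object points back to this reporting entity like this:
  <!-- Hypothetical reference back to a reporting entity; element names and UUID value are placeholders. -->
  <ReportingEntity>
    <Uuid>f1a2b3c4-0000-0000-0000-000000000001</Uuid>   <!-- UUID of the reporting entity data object -->
    <Title>well 01</Title>                              <!-- name, from the Citation Title -->
  </ReportingEntity>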
Figure 6-5. Optionally, reporting entities can reference a data object containing full data, using the
Associated Object element.
The structure of a hierarchy data object is shown in Figure 6-7. The left side of the figure shows one
of the example hierarchies. The right side shows some of the XML hierarchy data object. Note:
• The root node of a hierarchy is the reporting node (red circle).
• Child nodes nest under this root and each other (blue circle).
• The use of reporting entity as the data object reference back to the reporting entity object (for well
01 in this example) using UUID (green box); a sketch of this nesting follows the figure caption below.
Figure 6-7. Hierarchies showing reference to reporting entities and the root node.
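In that sketch (the element names, such as ReportingHierarchyNode, are hypothetical placeholders; only the pattern of a root node, nested child nodes, and UUID references back to reporting entities comes from the description above):
  <!-- Hypothetical sketch of hierarchy nesting; element names and UUID values are placeholders. -->
  <ReportingHierarchyNode>                               <!-- root node, e.g., the parent asset -->
    <ReportingEntityReference uuid="uuid-of-lease-x"/>
    <ReportingHierarchyNode>                             <!-- child node, e.g., well 01 -->
      <ReportingEntityReference uuid="uuid-of-well-01"/>
    </ReportingHierarchyNode>
  </ReportingHierarchyNode>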
The worked example contains one asset production volumes XML data object, found in the worked
example root folder (Figure 6-9).
Figure 6-9. Periodic transfer with one asset production volumes XML data object.
So far as possible, the worked example has realistic numbers for the volumes, which are shown in
Figure 6-10 and can be found in the “Numbers” worksheet of the spreadsheet “Worked Example
NAPR.xlsx”. Figure 6-11 shows how the numbers in the spreadsheet are reconciled. Figure 6-12 shows
the timeline of the worked example. The events on this timeline are represented in the various data
objects transferred.
Figure 6-10. Volumes reported in the worked example. (For a copy of this spreadsheet, see the file named
Worked Example Simple Product Volume Spreadsheet.xls included in the example download.)
Figure 6-11. Reconciliation of quantities.
Figure 6-13. Asset production volumes has repeating reporting entity volumes per reporting entity.
If you open any reporting entity volumes element, you can see the different volume types included
(per the matrix in Figure 6-8). Figure 6-14 shows the volumes for the Lease X parent asset.
• Shown (details collapsed):
− Disposition: Terminal Lifting
− Disposition: Transfer
− Disposition: Product Sale
− Inventory: Opening
- Inventory: Closing
• Shown (details expanded):
- Production
• Not shown but available:
− Injection
- Deferred Production
Figure 6-14. Kinds of quantity contained within each reporting entity volumes.
Figure 6-15. Fluid components transferred once per asset production volumes object and then
referenced.
Figure 6-16 shows more details of the Production data. Every Production element has a Quantity
Method, which is an enumeration listing the ways in which that quantity was determined (red box).
Every Production Quantity element has:
• A mandatory Product Fluid Kind, which is an enumeration of the product that the production
quantity represents (green box).
• An optional Product Fluid Reference, which is the reference to an entry in the Fluid Catalog,
which gives the physical properties of the component (blue box).
Figure 6-16. Quantity methods and product fluid kind can be included in the volumes.
#4 Service Fluids
In addition to product fluids, service fluids can be reported. Figure 6-17 shows an example. Like
Product Fluids, Service Fluids are reported by a mandatory Service Fluid Kind enumeration (purple
box), and have an optional Service Fluid Reference to the Fluid Component Catalog in the (unlikely)
event that the composition of the service fluid needs to be specified.
#8 Deferred Production
Deferred production can be reported using the specific element for this, Deferred Production Event.
Figure 6-21 shows the worked example, and the following elements can be seen:
A Deferred Production Event sits under a Reporting Entity Volumes, which associates it with a
specific Reporting Entity.
Each event has a start and end time and a duration. The duration is mandatory and the times are
optional because you may not always know exactly when a failure occurred.
There is a code for the downtime reason. Because the codes are company specific, they are not part of
the standard. Figure 6-22 shows the UML for the downtime reason code. These codes can be arranged in
any desired hierarchy. In the example, two levels of code are shown. Use the Authority element
to record which company’s (or standard’s) codes are being used.
Deferred Production can be assigned as planned or unplanned.
With the deferred Production Volume, an Estimation Method enumeration can be included, which lists
the various ways in which deferred volumes can be calculated.
• Section 6.4.5 describes how these can be incorporated within the asset production volumes data
object.
Figure 6-23. Event-driven transfers showing example XML data objects in worked example folder.
Figure 6-24. Production well tests: showing the fluid components used throughout the well test(s)
transfer, and the reference to the Flow Test Activity data object(s) containing the well test(s) data.
Figure 6-25. Flow Test Activity for one well test (there can be multiple of these per transfer of Production
Well Tests).
• Fluid component catalog and the references to fluid components work in the same way as other
objects, such as asset production volumes as shown in Section 6.3.2 and Figure 6-15.
• A range of operating parameters can be included (red box).
• The transfer can optionally be split into a number of Producing Well Periods using multiple
production well period elements. The example shows the choke change which resulted in the
volume reporting for this well, as shown in Section 6.3.2 and Figure 6-20 (two blue boxes).
• Optionally well production parameters can report flow rates (purple box).
• For Well Production Parameters as for Production Well Test, BUSINESS RULE: the Reporting
Entity data object referenced is expected to be to a well (etc.) type of reporting entity, but this is
not enforced in the schema (black box).
• Each Producing Well Period can refer to a different well and this way, a collection of well
performance parameters can be transferred in one file (similar to Production Well Tests).
Figure 6-26. Well production parameters event-driven or reported across any number of discrete periods.
Transfer
The transfer example in the worked example is shown in Figure 6-27. This is a standalone data
object with the following highlights:
• Fluid component catalog, the references to fluid components and the quantities work in the same
way as other objects, such as asset production volumes as shown in Section 6.3.2 and Figure
6-15 (red arrows).
• Transfer has a source facility and a destination facility. No tanker is involved; the transfer happens
via pipeline (green arrows). These are data object references to the reporting entities for these
objects.
• As an alternative to using a standalone transfer data object, transfers can be embedded in the
periodic asset production volumes as a special type of disposition; see Section 6.4.5 and Figure
6-29.
Terminal Lifting
The terminal lifting example in the worked example is shown in
Figure 6-28. This is a standalone data object with the following highlights:
• Fluid component catalog, the references to fluid components and the quantities work in the same
way as other objects, such as asset production volumes as shown in Section 6.3.2 and Figure
6-15 (red arrows).
• Terminal Lifting has a loading terminal and a destination terminal. A tanker is involved in the lifting
from the former to the latter (green arrows). These are data object references to the reporting
entities for these objects.
• A certificate number can be added for the reference to the document defining the lifting onto the
tanker.
• Note that reporting entity kinds are available for “oil tanker” and “tanker truck”, for ship and land
(truck) transport.
• As an alternative to using a standalone terminal lifting data object, terminal liftings can be embedded in the
periodic asset production volumes as a special type of disposition; see Section 6.4.5 and Figure
6-29.
Figure 6-29. Transfer and terminal lifting optional inclusion in asset production volumes period report.
The attributes of the standalone data objects can be included in the Disposition (red boxes); shown are
"Tanker" and "Transfer Direction" (green boxes).
well (Asset): A single well, possibly with many wellbores (side tracks). Optionally, it can reference a
WITSML well object.
wellbore (Asset): A single wellbore (side track) within a well. Optionally, it can reference a WITSML
wellbore object.
Contact Interval (Asset): Represents the details of a single physical connection between well and reservoir,
e.g., the perforation details, depth, and reservoir connected. Meaning: this is the physical nature of a
connection from reservoir to wellbore. Optionally, it can reference a WITSML ContactInterval class within the
wellboreCompletion object.
Well Completion (Asset): The well completion data object represents a “flow” or “stream” from the well (e.g.,
from a wellhead port) that is associated with a set of wellbore completions. When there is more than one such
wellbore completion, the flows from them commingle in the well (the wellbore completions may be located in
multiple wellbores). The well completion represents this commingled flow. Optionally, it can reference a
WITSML wellCompletion object.
facility (Asset): A generic label for a facility that is not described by the other physical reporting entity
kinds.
oil tanker (Asset): An oil tanker, a vessel that could be a barge, or a sea-going ship.
tanker truck (Asset): A truck which carries oil.
country (Geog): A single country.
county (Geog): A single county.
field (Geog): A single field.
formation (Geog): A bed or deposit composed throughout of substantially the same kind of rock.
rock-fluid unit feature (Geog): A fluid phase plus one or more stratigraphic units. A unit may correspond to a
pair of horizons that are not adjacent stratigraphically, e.g., a coarse zonation, and is often used to define
the reservoir. Available to match reported production to the rock-fluid feature in RESQML. Optionally, it can
reference a RESQML rock-fluid unit feature object.
state (Geog): A single state or province.
field - part (Geog): An area or zone that forms part of a field.
reservoir (Geog): A single reservoir.
company (Org): A company name that is the name of the operator company.
lease (Org): A single lease.
license (Org): A regulatory agreement that gives the licensees exclusive rights to investigate, explore, and
recover petroleum deposits within the geographical area and time period stated in the agreement.
Goal: Provide production data for a fixed duration necessary for monitoring, decision-making,
forecasting and reporting, and financial record-keeping associated with operated properties
to others with working and/or revenue interest in those properties.
Business Value:
• Better operational insight supporting improved decision making and/or requests for operational changes.
• Improved estimate of future production potential (reserve estimates/reporting, marketing deliverability, etc.).
• Timelier data for booking related financial transactions, governmental/financial reporting, etc.
• Reduced cost and effort of both providing data and consuming data through a consistent standard
(currently each company tends to have its own solutions, so each party has to develop a separate
solution for each partner in order to load their data).
Summary Description: For producing properties that are subject to joint operating agreements, the Operating
Partner is responsible for collecting operational information on those properties and determining well-level
and facility-level production volumes by product (or flow stream). They are also typically responsible for
providing this information to the other parties in the agreement (Non-operating Partners). The Non-operating
Partner uses this data to update its own data store(s) that support making operational decisions, estimating
reserves, reporting to outside parties, and booking related financial transactions. Although typically
transmitted on a regular on-going basis, the Non-operating Partner may from time to time request data that
encompasses prior periods to verify and/or complete its own data store. Likewise, the Operating Partner may
at times advise Non-operating Partners of prior period adjustments and re-issue previously transmitted data
that has been amended.
Actors: Non-operating Partner; Operating Partner; non-working interest partners (e.g., royalty owners).
Pre-conditions: Non-operating Partner has a revenue and/or working interest in a producing property operated
by the Operating Partner. Operating Partner has collected and verified operating data and performed necessary
calculations to determine gross well-level production data.
Primary or Typical Scenario: The data is transmitted from the Operating Partner to interested parties (working
interest partner, royalty owner, inter-departmental use, etc.) on a regular basis encompassing all new data
since the previous report (note: a new well or facility may include historical data). For records representing
the aggregation of data over a period of time, the duration of each record is typically fixed (full day,
partial day, week, month, etc.) in a report, but there can be multiple reports, each consisting of different
data ranges and record durations. A Non-operating Partner that has not been tracking operating information on
a well may need a one-time issue of all historical data to establish a starting point. It may be necessary to
distinguish between data exchanges which are intended to over-write old records and transmissions of new data.
Post-conditions: Non-operating Partner has data necessary to assemble a production history for the property
adequate for making operational decisions, estimating reserves, reporting to outside parties and booking
related financial transactions.
Business Rules:
Extended Requirements:
• Well Type
• Well Status
• Artificial Lift History
• Well Maintenance History
• Facility measurement data
• Measurement point identifier (NB facility identification may not be a standard)
• Measurement point type (tank, meter, etc.)
• Inventory (tank volumes)
• Volume flowed (meters)
• Fluid characteristics (temperature, pressure, density, etc.)
• Pressures/temperatures (vessels/equipment)
Notes:
Definitions:
Goal: Reduce cost and effort and support automation of data room presale activities and generate added
value for properties being sold, as well as for those purchasing properties.
Business Value:
• Consistent format and contents of production information reduces the amount of custom programming
required for both buyer and seller.
• Consistent format and contents of production information ensures consistency, accuracy and completeness
of data sets, avoiding costly rework and potential delays.
• Making production information available virtually ensures access to hard copy data by the divesting operator.
• Ability to use pre-existing solutions based on PRODML standards reduces time and cost for data room
functions (both seller and prospective buyers).
• Ability to support common data exchange formats and interfaces (PRODML, oData, web services, ETL) will
provide flexibility and efficiency in data access for potential buyers.
• Providing additional information allows prospective buyers to determine the maximum value of properties,
allowing higher potential bids. Publish the meta information format.
Summary Description: During the data room pre-sale process, the divesting partner provides information on
assets being offered for sale. Prospective acquiring parties analyze the available information and determine
whether to make a bid on the properties and how much to bid. Typically, multiple prospective acquiring parties
will come into the data room to view data and documents and to ask questions. The number and length of visits
varies depending on the size and complexity of the sale.
Additional discussions and meetings outside the scope of the formal data room are common and may occur after
the data room phase is completed. The end result of this process is for the prospective acquiring parties to
submit their bids so the divesting party can determine whether to sell the property and who the acquiring
party will be.
Pre-conditions: Driven by divesting party decisions for the data room phase of the sales process.
Primary or Typical Scenario: Prospective acquiring parties may be invited by the divesting party or the data
room may be open to the public. Typically, a dedicated facility is made available containing documents,
computers, and telecommunications hookups for prospective acquiring parties. Resources are made available to
answer questions and required data is made available for all prospective acquiring parties. The data room
phase of the sale process will have a specific duration and prospective acquiring parties may make multiple
visits.
Alternative Scenarios: A physical data room may be eliminated or minimized in exchange for virtual meetings
and external links for data on properties being sold. In this case, prospective acquiring parties will
download data and analyze it locally. Questions will be handled via teleconferences.
Post-conditions: During the due diligence phase of the sales process (after the acquiring party is selected
and the bid is finalized), the acquiring party may download current/updated information and may request
information about the source of data or may request additional information. Both of these activities are
outside the scope of this use case for PRODML.
Business Rules:
Notes:
Reviewer(s):
Goal: Reduce cost and effort of post-acquisition data loads and improve completeness and accuracy of loaded
data. Scope of allocated volumes includes the entire history of the properties. Scope for other data is based
on availability of the data.
Additional data requirements that are outside the scope of PRODML include:
• Complete well master data, including wells, well bores, and well completions (lat long, field, reservoir,
operator, completion date, date of production, location information, operating lease)
• Facility data
• Well and Facility Maintenance history
• Cost information
Summary Description: Once the acquisition is complete, the next step is for the acquiring company to take over
the properties and start operating them. The immediate need is for the minimum information to be able to
perform closings for financial and production accounting systems. Additional reserves need to be calculated
and booked, and production and status history needs to be loaded for future production accounting functions.
The time frame for doing the initial loads is often very short, as deadlines for taking over acquired assets
tend to be very short.
Once the initial data loads needed to support accounting closings are completed, the rest of the data is
loaded and validated as resources are available. Success in this second phase supports more efficient and
successful operation of the acquired property.
Triggers: The initial load is triggered when the sale is closed and the final contracts are signed. The
secondary load is triggered when the initial load is completed and the acquiring company has taken over
operation of the properties.
Pre-conditions: NA
Primary or Typical Scenario: The acquiring party is typically given all relevant data that can be provided by
the divesting party (although the quality of data may be inconsistent). The acquiring party determines the
critical data needed to support the initial closing process. Once the required data is located, they attempt
to load the data and perform their initial accounting closings. If critical data is missing, requests will be
made to the divesting party.
The complete load may take several months, and data gaps found during this phase are much less likely to be
addressed by the divesting party.
Alternative Scenarios: Follow-up data transfers may be requested when needed data is missing from the initial
data load process.
Post-conditions: Validation of the acquired data is performed to determine completeness, consistency, and
accuracy. In addition, reformatting may be necessary to convert data into the format used by the acquiring
company (PRODML can provide value here by reducing the data conversions needed).
Business Rules:
Definitions: Well completion – the zone/completion of the well where production or injection is occurring
(typically assigned an API_NO14 in US properties).
Version 0.1
Goal: Ensure that when data is exchanged the sender is able to specify the rules for access to the data, such
that the receiver may enforce privacy rules specified by the data owner.
Business Requirement: Maintain privacy over sensitive data at all times, notably after onward transmission.
Allow configuration of the rules for data access to be transferred between two systems. The responsibility for
ensuring all data is clearly marked with access conditions lies with the data owner.
Business Value: Contractual conditions typically specify who can see sensitive production numbers at various
locations. Unauthorized disclosure of this data to a third party is at best embarrassing and at worst litigious.
Summary Description: In several of the use cases, two parties agree to exchange information. The Operating
Partner typically collects and sends data to the Non-Operating Partner. In some cases the data may be
distributed, possibly via a third-party hub. When this occurs, it is important that the data access rights are
carried with the data, so that the rules for data security can be asserted by the receiving system. It is
expected that many cases of this type of data permissions need to be supported.
Alternative Scenarios: Data where the supplied permission entities are not found. The receiving system does
not support the enforcement of the required permissions.
Post-conditions:
Business Rules: If no permissions are set, then the assumption is that the data is accessible to anyone. If
any of the identifiers within the permission blocks is set, then the receiver is expected to allow only the
identified permissions entity.
Data Requirements:
Notes: The data transfer can be considered as having an Envelope and a Content Body. The Envelope of the data,
when present, will define the required permissioned entities allowed to view the content. Systems could
exchange the defined capabilities for enforcement of access permissions. This would allow the refusal to
transfer data where the privacy cannot be enforced. It is required that the data permissions be configured
such that the rules for any contract can be described and managed from the schema. An optional permissions
entity could be added to any business data transferred. While it is outside the scope of the standard to say
how this is used, it must be enough to allow the receiver the ability to apply access controls.
Issues: The parties exchanging data will need a way to identify the Permitted Entities. This means that they
will need to have shared identifiers for the names of the items. This will be difficult to manage but is
required to allow the secure transfer of permissions between the two parties.
5. The way a Deferred Production Event now works is as follows (a sketch follows this list). A Deferred Production Event has:
a. planned|unplanned kind
b. duration and event times
c. downtime code
d. remark
e. 0..* Deferred Production elements. The purpose of these is to be able to report the deferred
quantities related to the parent Event. Each Deferred Production has an Estimation Method,
and a mandatory Abstract Product Quantity, which either is a Product Fluid or a Service Fluid
element, as used elsewhere in SPVR for fluid kind and quantity data.
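Here is that sketch (element names track the list above, but the nesting, the attribute-versus-element choices, and the enumeration strings are assumptions rather than a validated instance):
  <!-- Sketch only: a Deferred Production Event and its deferred quantities. -->
  <DeferredProductionEvent uid="dpe1">
    <Kind>unplanned</Kind>                               <!-- planned | unplanned -->
    <Duration uom="h">36</Duration>                      <!-- mandatory; start/end times are optional -->
    <DowntimeReasonCode authority="CompanyX">ESP failure</DowntimeReasonCode>  <!-- company-specific code -->
    <Remark>Well shut in pending workover.</Remark>
    <DeferredProduction>
      <EstimationMethod><!-- value from the estimation method enumeration --></EstimationMethod>
      <ProductFluid uid="d1">
        <ProductFluidKind>Oil-gross</ProductFluidKind>
        <Volume uom="bbl">450</Volume>
      </ProductFluid>
    </DeferredProduction>
  </DeferredProductionEvent>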
Acknowledgement
Special thanks to the following companies and the individual contributors from those companies for
their work in designing, developing and delivering the Fluid and PVT Analysis data model: Calsep,
Chevron, ExxonMobil, KBC Advanced Technologies, Shell, Schlumberger and Weatherford.
These PRODML PVT data objects have been designed to address these challenges and improve the
accuracy and completeness of the information transferred in these workflows.
Process Challenges
Standing (1981) declared “the greatest use of PVT data lies in calculating reserves of oil and gas in
place in the reservoir and the effect of field operational methods on the recovery of oil and gas.” In
other words, PVT data is fundamental to our ability to understand and predict resource size and
recovery.
Some of the challenges (per Standing):
• No assurance that samples obtained in one well are representative of the entire reservoir.
− Gravity effects can create sample differences
- Samples at the same structural elevation may have differences based on geologic activity
over time
• How representative are samples captured in a well?
− Fluid may be commingled (from multiple reservoir zones)
− Fluid may be contaminated
- Where two phases exist in contact in the same zone, flow rates are almost always different
• How complete is the samples collection data?
- Collection points, times, etc.
• How depleted is the reservoir?
− Prefer above 80% of original pressure
− Below 40% will likely lead to inaccuracies
- Samples should be re-run as reservoir depletes
References
Standing, M.B. 1981. Volumetric and Phase Behavior of Oil Field Hydrocarbon Systems, ninth edition.
Richardson, Texas: Society of Petroleum Engineers of AIME.
It has been difficult to estimate the impact of more accurate and consistent fluid data on the
operational bottom line, but the quantification of its impact should include:
• Less risk for estimates of initial rates, final recovery fractions, and development/depletion
strategies.
• Better assessments of peak capacities used to size surface operating facilities.
• Improved accuracy of subsurface flow potential and wellbore tubular design.
Other benefits may be realized through one standard by its interaction with other standards (i.e.,
Metcalfe’s Law). By delivering fluid and PVT analysis as useful data objects in PRODML, these can
be readily accessed and used by other data objects in both RESQML and WITSML and by the flow
network and completion/nodal analysis data objects in PRODML.
Figure 8-1. High-level fluid sample gathering and analysis workflow, which serves as basis for defining
requirements for PRODML PVT data objects.
The solid lines show the data movement processes that have been addressed. The goal of these data
objects is to provide complete and accurate XML-based machine readable documents that can be
transferred between the different parties and software systems that exist today. The need for this
capability extends from capturing fluid samples in the field to supplying data to these consumers:
• Systems of record (e.g., corporate/technical databases, etc.)
• Interpretation software (e.g., proprietary PVT analysis packages)
• End-user applications (e.g., nodal analysis, reservoir simulation, etc.)
Out of scope for this release (dashed lines in Figure 8-1):
• Sample tracking through material movement systems and storage
• Water chemistry and geochemical analysis
• Fluid sampling and analysis for custody transfer
• Project databases
Fluid Sample Acquisition can also reference the wells, wellbores, etc. at which the sample was taken, or
the facility concerned if the sample is from a commingled field stream as opposed to a single well
stream. The Fluid Sample Acquisition also can link to a Flow Test Activity (for details of flow
conditions around the time of the sample acquisition).
After the fluid samples arrive at the laboratory and based on the business needs identified by the
customer and the information gathered during sample acquisition, a matrix of samples and laboratory
analyses is usually created. For each sample employed, the chain of custody is updated to detail how
much of which sample was used. Executing this plan, these tests are conducted and the resulting
measurements are reported. For a list of and description of these tests, see Section 10.6.1.
Laboratory reports may be included as documents within the EPC (Energistics Packaging
Convention) package.
After the laboratory analysis is received, the usual first action is to complete a consistency and
correctness check of the data. If the information coming from the laboratory is found to be non-
coherent or not self-consistent, the laboratory may be requested to repeat some or all the
experiments, if possible. In other cases, to increase the confidence in the results, some or all of the
experiments may be repeated, either by the original laboratory or by another laboratory. Regardless
of validity, all measurements on fluid samples may be potential candidates for characterization and
may be described by the fluid analysis data object. For example, a sample that may have too much
gas may form a basis for a future sensitivity analysis.
In the past, samples were characterized using traditional methods, such as direct input of the lab data
superimposed with correlations/smoothing functions and possibly coupled with K-value charts or
correlations. To convey this legacy data, the tabular format (see Section 10.7.2) could be used.
Modern reports, however, are overwhelmingly analyzed using tools with built-in EoS engines for
which the parametric format is appropriate. Either or both of these formats may be used as needed for
a specific set of results. The final goal of the PVT workflow, regardless of the points of origin, is to use
it in the tools that help in engineering decisions.
Conducting the sample characterization soon after the PVT test makes it easier to repeat or add some of the experiments if needed. However, in the current business environment or depending on in-company processes, staff turnover may impede technical continuity, depending on how long it takes to complete the full analysis cycle. Many times, it is difficult to locate the correct data unless it has been properly categorized, indexed, and stored. The fluid analysis data object is designed to carry key system identifiers throughout its lifecycle to make this task easier and more consistent.
Many disciplines and their respective software tools need a fluid system description that adequately
represents how the fluid is going to behave at different pressures and temperatures; for example:
reservoir simulators, lift packages, process simulators, and others targeting various disciplines such
as reservoir engineering, production and well engineering, facilities design, and operations. In the
context of the current effort, fluid system characterization refers to developing and characterizing fluid
components within the fluid system for the purpose of simultaneously evaluating the validity and
applicability of earlier fluid sample analyses and developing a mathematical model to represent
holistic behavior of the reservoir fluid system.
In the fluid characterization data object, references may be made to the fluid components defined for
the fluid system. This includes the use of specific oil, water and gas components, standard
components used in sample compositional analysis, or new pseudo-components intended for use in
reservoir or process simulators. The additional properties and characteristics (such as molecular
weight, boiling point, etc.) for each fluid component used are added as data in the fluid
characterization data object.
This use case is specifically intended to support the requirements from laboratories and operators for
loading, searching, and exporting fluid analysis and characterization datasets from systems of record, such as corporate databases.
The two general requirements for the data storage use case are:
• Accurate capture and organization of data to meet the needs of future workflows
• Preservation of context for data items that are used in multiple workflows.
These requirements significantly influenced the structure and behavior of the data objects that make
up this standard—such as the separation of the fluid sample from distinct sample acquisition and
laboratory analysis objects—so that implementers of the standard are not burdened with the need to
“make up” data not otherwise available. For example, the distinction between fluid analysis data and
fluid characterization data more clearly defines the difference between measured and calculated data
while better supporting the description of user-defined data because it can be handled without having
to fake “laboratory tests”.
Another product of these requirements was the assignment of identifiers within each data object and
allowing all data objects to reference other data objects by these identifiers. For example, this enables
the data in the sample acquisition data object to remain connected to the fluid samples it produces
and vice versa. These schemas are also designed to incorporate the operator’s system of record’s
identification (e.g., completion, well, or facility identifiers) within the initially created data objects, which
are then carried forward through the workflow, making it easier to load them into a system of record.
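As an informal illustration of this uid-based cross-referencing, the following Python sketch (not part of PRODML; the class and field names are simplified stand-ins for the schema elements) shows a fluid sample keeping a reference to the acquisition that produced it, together with an operator well identifier.

# Illustrative sketch only (not the PRODML API): uid-based references keep a
# fluid sample connected to the acquisition that produced it and carry the
# operator's system-of-record identifiers forward through the workflow.
from dataclasses import dataclass
from typing import Optional
import uuid

@dataclass
class DataObjectReference:
    """Simplified stand-in for an Energistics data-object reference."""
    uid: str
    title: str

@dataclass
class FluidSampleAcquisition:
    uid: str
    well_reference: Optional[DataObjectReference] = None   # operator well identifier
    flow_test_activity_reference: Optional[DataObjectReference] = None

@dataclass
class FluidSample:
    uid: str
    acquisition_uid: Optional[str] = None   # reference back to the acquisition

acq = FluidSampleAcquisition(
    uid=str(uuid.uuid4()),
    well_reference=DataObjectReference(uid="WELL-001", title="Well A-12"),
)
sample = FluidSample(uid=str(uuid.uuid4()), acquisition_uid=acq.uid)
assert sample.acquisition_uid == acq.uid    # the link survives across systems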
Because of the expanded scope of these specifications over earlier ones (Aydelotte et al. 2003), the
scope of the corporate repositories needing to store these datasets may need to be expanded.
Specifically, additional tables for sample acquisition, laboratory analysis tests, and fluid
characterization may be needed unless these data are de-normalized into existing tables, which could
potentially limit the system’s ability to export the original data. Also, a capability to manage document attachments (field notes, laboratory reports, etc.) may be required. Definition of the scope or details of these potential changes is beyond the scope of any transfer standard and is the province of the owner of the relevant databases.
[UML diagram: overview of the relationships between FluidSampleAcquisitionJob, FluidSampleAcquisition, FluidSampleContainer, FluidCharacterizationSource, FluidSystem, and FluidCharacterization.]
For more detailed analysis and characterization, any number of fluid components may be defined in other data objects.
The fluid system can also contain references to multiple rock fluid unit features, which are RESQML
objects describing the concept of a combination of a fluid kind in a reservoir zone. Samples can
subsequently be associated with a specific rock fluid unit feature from which they are taken.
The main elements of the fluid system object are shown in Figure 10-2. The fluid system is
referenced by several of the other objects, as shown in Figure 10-1.
[UML diagram (Figure 10-2, class FluidSystem): the FluidSystem object with elements StandardConditions, ReservoirFluidKind, PhasesPresent, ReservoirLifeCycleState, RockFluidUnitFeatureReference, SaturationPressure, SolutionGOR, and Remark, and its associations to the StockTankOil, FormationWater, and NaturalGas fluid components (each carrying gravity, molecular weight, salinity, and energy-content elements as applicable).]
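The following Python sketch gives an informal picture of the fluid system elements named above; the field names, units, and example values are illustrative assumptions, not the normative schema definitions.

# Hedged sketch: a simplified in-memory representation of the Fluid System
# elements shown in Figure 10-2. Units and types are simplified.
from dataclasses import dataclass
from typing import Optional

@dataclass
class FluidSystem:
    standard_conditions: str            # e.g., "15 degC / 101.325 kPa"
    reservoir_fluid_kind: str           # e.g., "black oil", "gas condensate"
    solution_gor_m3_per_m3: float       # solution GOR for the fluid system
    phases_present: Optional[str] = None
    reservoir_life_cycle_state: Optional[str] = None
    saturation_pressure_kpa: Optional[float] = None
    remark: Optional[str] = None

fs = FluidSystem(
    standard_conditions="15 degC / 101.325 kPa",
    reservoir_fluid_kind="black oil",
    solution_gor_m3_per_m3=95.0,
    saturation_pressure_kpa=18500.0,
)
print(fs.reservoir_fluid_kind, fs.solution_gor_m3_per_m3)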
The fluid sample acquisition job describes the operation performed to collect one or more fluid samples. Fluid sample acquisition elements repeat, one per
sample acquired, within one job. Fluid sample acquisitions can be made in five types of locations:
surface facilities, separators, wellheads, downhole, or directly from the formation by wireline formation
tester. Each type of location is defined with specific characteristics so that the important
measurements for each type are captured, such as measured depth for downhole samples and the
operating conditions for separator samples. Figure 10-3 and Figure 10-4 show the fluid sample
acquisition job data object.
[UML diagram: the FluidSampleAcquisitionJob object and its link to FlowTestJob, each carrying Client, ServiceCompany, StartTime, and EndTime elements.]
Figure 10-3. Fluid Sample Acquisition Job Object, showing links to other Objects.
[UML diagram (class FluidSampleAcquisitionJob): the FluidSampleAcquisition element (start/end times; acquisition pressure, temperature, volume, and GOR; formation pressure and temperature) and the SeparatorSampleAcquisition specialization (separator identity and conditions, sampling point, corrected and measured oil, gas, and water rates), with links to FlowTestActivity and ReportingEntity.]
Figure 10-4. Detail of the various types of Fluid Sample Acquisition showing link to Flow Test or to
Reporting Entity depending on the type of Fluid Sample acquired.
Using data objects defined elsewhere in PRODML, these sample acquisition locations may be linked
to identified separators, wells, wellbores, completions or facilities when these objects are known. To
allow sample acquisition activities to be defined within the context of other well work, they may be
linked also to a Flow Test Activity data object which reports flowing conditions around the time of
sampling.
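As a hedged illustration of one acquisition type, the Python sketch below models a separator sample acquisition carrying the separator operating conditions and an optional reference to a Flow Test Activity; all names and values are illustrative assumptions, not the schema itself.

# Hedged sketch: a separator sample acquisition with the operating conditions
# named in Figure 10-4 and an optional link to the Flow Test Activity recorded
# around the time of sampling. Field names are illustrative only.
from dataclasses import dataclass
from typing import Optional

@dataclass
class SeparatorSampleAcquisition:
    uid: str
    start_time: str
    end_time: str
    separator: str
    separator_pressure_kpa: float
    separator_temperature_degc: float
    sampling_point: Optional[str] = None
    flow_test_activity_uid: Optional[str] = None  # link to flowing conditions

acq = SeparatorSampleAcquisition(
    uid="ACQ-001",
    start_time="2022-05-16T08:00:00Z",
    end_time="2022-05-16T09:30:00Z",
    separator="Test separator 1",
    separator_pressure_kpa=3500.0,
    separator_temperature_degc=60.0,
    flow_test_activity_uid="FTA-042",
)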
[UML diagrams: the FluidSample object and its links to FluidSampleAcquisitionJob, FluidSystem, and FluidSampleContainer; the FluidSampleKind enumeration; and the workflow classes SampleRecombinationSpecification, RecombinedSampleFraction, and FluidSampleChainOfCustodyEvent (with the SampleAction enumeration).]
Figure 10-6. Fluid Sample Object showing details supporting workflow for samples.
Each fluid sample is assigned a name to identify it within the context of its lifecycle. Each fluid sample
can have a description of its source geologic feature. Information such as the expected reservoir
temperature, pressure, and gas-liquid ratio are also recorded for each sample (some of the data is
within the associated fluid sample acquisition in the fluid sample acquisition job). Other characteristics
of a fluid sample include qualitative descriptors for sample quality and representativeness of the fluid
sample.
Because fluid samples may be combined with other fluid samples—either in the case of
recombination samples, or for specific laboratory investigations using additional fluids—a new fluid
sample can be described from the recombination of other fluid samples. These combined samples are
thereafter treated as regular fluid samples with additional data (the Recombined Sample Fraction)
describing the identity and amounts of source fluid samples which were combined. For a recombined
liquid and vapor sample, the Sample Recombination Specification (i.e., the specification for the
recombination) can be defined. An Associated Fluid Sample can also be reported, e.g., one sample of
a sequence of samples can be recorded as being associated with the others.
Often it is important to record the handling of the fluid sample before its analysis. To meet this requirement, a chain of custody may be recorded for each fluid sample. The chain of custody information includes the identity of the fluid sample containers in which the sample is stored, transfers of specific volumes from one container to another, the pressures, temperatures, locations, and dates at which these transfers were conducted, and the identity of the custodians. As an essential element of the chain-of-custody process, integrity checks, such as opening and closing pressures and temperatures for each fluid sample, may also be recorded for each transfer. Also, the disposition of a fluid sample may be recorded to describe what happened to the sample after analysis.
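The following Python sketch illustrates, informally, how chain-of-custody events of this kind might be accumulated for a sample; the class and field names are simplified assumptions, not the schema itself.

# Hedged sketch: recording chain-of-custody events for a fluid sample
# (container transfers with volumes, conditions, dates, and custodians).
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class ChainOfCustodyEvent:
    custody_date: str
    custody_action: str                 # e.g., "sampleTransfer", "stored"
    custodian: str
    current_container_uid: str
    prev_container_uid: Optional[str] = None
    transfer_volume_cm3: Optional[float] = None
    transfer_pressure_kpa: Optional[float] = None
    sample_integrity: Optional[str] = None   # e.g., opening-pressure check result

@dataclass
class FluidSampleCustody:
    sample_uid: str
    events: List[ChainOfCustodyEvent] = field(default_factory=list)

    def current_container(self) -> Optional[str]:
        """Container holding the sample after the most recent event."""
        return self.events[-1].current_container_uid if self.events else None

custody = FluidSampleCustody(sample_uid="SAMPLE-7")
custody.events.append(ChainOfCustodyEvent(
    custody_date="2022-05-17", custody_action="sampleTransfer",
    custodian="Lab A", current_container_uid="BOTTLE-22",
    prev_container_uid="BOTTLE-09", transfer_volume_cm3=450.0,
    sample_integrity="valid",
))
print(custody.current_container())  # BOTTLE-22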
[UML diagram (class FluidSampleContainer): container attributes including Make, Model, SerialNumber, BottleID, Capacity, Owner, Kind, Metallurgy, PressureRating, TemperatureRating, LastInspectionDate, TransportCertificateReference, and Remark.]
[UML diagram: the FluidAnalysis object with its HydrocarbonAnalysis, WaterAnalysis, and NonHydrocarbonAnalysis specializations, the list of supported hydrocarbon test types, and links to FluidSample and FlowTestActivity.]
Figure 10-8 shows the types of analysis supported by this data object.
For hydrocarbon analysis, all of the routine/standard laboratory analyses have been addressed (these
are listed later). The common structure of each test is to define each one with a “header” record
containing contextual data and a series of “step” records that record the constant conditions and
measurements. Typically, the header record is given the name of the test and a laboratory-assigned
test number. Other information in the header includes the constant and/or reference conditions (such
as temperature in isothermal tests) at which the test is conducted. The step record consists of the
conditions changed in the test (such as pressure) and the property measurements made at that step.
For many test types, the volumes and compositions of the fluid phases present at each step may also
be captured.
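The "header plus steps" pattern can be sketched informally as follows, illustrated for a constant composition expansion style test; the names and units are simplified assumptions rather than the schema definitions.

# Hedged sketch of the header-plus-steps pattern: the header carries the
# constant conditions (test number, test temperature); each step carries the
# varied condition (pressure) and the measurements made at that step.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class TestStep:
    step_number: int
    step_pressure_kpa: float
    relative_volume: Optional[float] = None
    liquid_fraction: Optional[float] = None

@dataclass
class CceTest:
    test_number: int
    test_temperature_degc: float        # constant/reference condition
    remark: Optional[str] = None
    steps: List[TestStep] = field(default_factory=list)

cce = CceTest(test_number=1, test_temperature_degc=110.0)
for i, p in enumerate([40000.0, 30000.0, 20000.0, 15000.0], start=1):
    cce.steps.append(TestStep(step_number=i, step_pressure_kpa=p))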
The same header and step record structure is used for water analysis. The properties for water
include isothermal density, formation volume factor, viscosity, salinity, solution gas-water ratio and its
corresponding gas composition, and compressibility as functions of pressure. Water (ionic)
composition can also be recorded.
Note that the composition of fluids throughout uses a common model which is described in Section
10.8.
Sample Contaminant
A fluid analysis is conducted on one fluid sample. This sample may have been contaminated, e.g., by
mud filtrate. This can be reported by the sample contaminant object. Multiple contaminants can be
reported. Each reports the proportion of contaminant, its composition and other properties, and can
optionally reference a sample of the contaminant itself (e.g., a mud filtrate sample which was taken for
this purpose).
[UML diagram: the SampleContaminant class (with the FluidContaminant enumeration and the contaminant fractions, composition, and properties), and the SampleIntegrityAndPreparation and SampleRestoration classes.]
Figure 10-9. QC aspects of Fluid Analysis, showing the ability to report Sample Contaminants and how the sample was prepared for analysis.
Hydrocarbon Analyses
The different types of hydrocarbon analysis can be seen in Figure 10-10 (at outline level). An
instance of fluid analysis can contain any or all of the analyses shown.
Figure 10-11 shows the model for an example analysis (Constant Composition Expansion) and also
shows a concept common to many analysis kinds—the test step. This is a set of properties that
repeat at different pressures or temperatures.
The same figure also shows a concept used in certain analyses – the volume reference. This is for
tests where the liquid volume is reported as a fraction, and this volume reference is used to define
what that fraction refers to. The parent “Test” contains one or more Fluid Volume Reference elements
and these use an enumeration of kinds of reference volume (e.g., at saturation conditions) to which
the volume at a test step is referenced using the Liquid Fraction concept. The liquid fraction is of type
Relative Volume Ratio, which uses the uid of the relevant Fluid Volume Reference to define which reference condition applies. (It is done this way to support the use of multiple volume references within one test, if required.)
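An informal Python sketch of this volume-reference mechanism is shown below; the names are simplified stand-ins for Fluid Volume Reference and Relative Volume Ratio, and the example values are illustrative.

# Hedged sketch: each test defines one or more volume references (each with a
# uid); a liquid fraction at a test step points at one of them via that uid.
from dataclasses import dataclass
from typing import Dict

@dataclass
class FluidVolumeReference:
    uid: str
    kind: str                       # e.g., "saturation-measured", "stock tank"
    reference_volume_cm3: float

@dataclass
class RelativeVolumeRatio:
    value: float                    # the liquid fraction itself
    fluid_volume_reference: str     # uid of the reference it is relative to

references: Dict[str, FluidVolumeReference] = {
    "vref-1": FluidVolumeReference("vref-1", "saturation-measured", 100.0),
}
liquid_fraction = RelativeVolumeRatio(value=0.62, fluid_volume_reference="vref-1")

# Resolve the fraction back to an absolute volume using the referenced basis.
basis = references[liquid_fraction.fluid_volume_reference]
liquid_volume_cm3 = liquid_fraction.value * basis.reference_volume_cm3
print(liquid_volume_cm3)  # 62.0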
To save space, the model diagrams for the remaining hydrocarbon analyses are not shown but can all
be seen in the UML provided in the documentation.
[UML diagrams: Figure 10-10, the outline of hydrocarbon analysis test types; Figure 10-11, the Constant Composition Expansion test model, including the FluidVolumeReference class, the VolumeReferenceKind enumeration, and the RelativeVolumeRatio type.]
Saturation Test
Within the context of PVT studies, a saturation test is a test in which the bubble point or dew point pressure of a liquid or gas system is determined at a constant temperature. The test involves determining the pressure/volume relationship of a constant amount of reservoir fluid at constant temperature. In most cases, a saturation point determination can be part of a constant composition expansion (CCE) test. This test is considered the most basic PVT study on a live fluid sample.
Differential Liberation
Differential liberation is a test to simulate the depletion of a black-oil or volatile-oil reservoir system at
a constant reservoir temperature. This test is designed to measure what happens to the reservoir fluid
as it is depleted below the bubble point—at which point evolved gas begins to flow segregated from
the oil. Measurements from this test include formation volume factors for oil and gas and the solution
gas-oil ratios and viscosities below saturation pressure. The measurements from this test, corrected
to separator test results, are appropriate for material balance calculations in reservoir engineering
models.
Separator
A separator test mimics the processing of produced fluids through one or more stages of separation.
Primarily for black-oil and volatile-oil fluid systems, the results of this test are used to optimize the
recovery of hydrocarbon products by testing samples at a series of separation stages and settings.
This test also measures key parameters, such as formation volume factors and densities, shrinkages,
API gravity, and producing GOR that are used both in reservoir engineering calculations and facilities
design.
Transport
This test assesses the properties of the fluid relevant to transportation. A series of test steps for
properties (such as viscosity, density, etc.) can be defined. Additionally, tabular data can be defined
for the test, using the same tabular format as available for fluid characterization (see Section 10.7.2).
Vapor-Liquid Equilibrium
A vapor-liquid equilibrium (VLE) test is a specialized PVT test that is used to represent phase equilibria in a gas injection enhanced oil recovery (EOR) process. In this test, a mixture of oil and injection gas is equilibrated at a fixed pressure and temperature at which two distinct vapor and liquid phases are present. The composition and pressure of the system (generally at fixed reservoir temperature) are chosen so that equilibrium is established where maximum mass transfer occurs between the two phases (near-critical condition/fluid). The properties of each phase (composition, density, viscosity) are also used to optimize an equation-of-state model to represent phase equilibria during the gas injection EOR process.
Swelling Test
A swelling test is a PVT test that is used for characterization of gas injection EOR processes. In essence, a swelling test is a series of constant composition expansion tests, in each of which an increasing amount of a particular injection gas is added to a reservoir oil sample at fixed temperature. In each expansion test, the saturation pressure and the physical and transport properties of the mixture (density and viscosity) can be determined. Swelling test data are used to optimize an equation-of-state model that properly represents the phase behavior of the reservoir oil and injection gas mixture at different mixing ratios.
STO Analysis
For representative samples of stock tank oil, an additional set of stock tank oil (STO) analysis tests
may be conducted to provide measurements of viscosity (at temperature), pour point, cloud point, wax
appearance temperature, paraffin content, asphaltene content and SARA, Reid vapor pressure, total
acid number, total sulfur, molecular weight, water content, and the amounts of lead, nickel, vanadium
and elemental sulfur present.
Interfacial Tension
Interfacial tension tests measure the interfacial tension between two phases. These are labelled the
wetting phase and the non-wetting phase. A surfactant can also be defined. Each test step reports the
interfacial tension and optionally, the surfactant concentration.
Water Analysis
In addition to the hydrocarbon tests described above, a number of standardized laboratory tests can be performed on water samples. The measurements supported include bulk properties of the water such as salinity and hardness. Optionally, test steps can be defined reporting properties such as dissolved gases and volume factors at a series of pressure-temperature conditions. In addition, a set of anion/cation/organic acid concentrations can be recorded for use in scale modeling and water injection applications. Figure 10-12 shows the water analysis data model.
[UML diagram (Figure 10-12): the WaterAnalysis data model, comprising WaterAnalysisTest (bulk properties such as salinity, dissolved and suspended solids, hardness, and alkalinity), WaterSampleComponent (anion, cation, and organic acid concentrations), and WaterAnalysisTestStep (per-step properties such as pH, dissolved gases, viscosity, density, compressibility, and formation volume factor).]
Non-Hydrocarbon Analysis
Non-hydrocarbon analysis refers to the detection of specific species such as H2S which are present
in the fluid stream.
If these are detected in the laboratory analysis, they can be reported in the appropriate test, as listed
above. The non-hydrocarbon molecular species are listed in the enumeration for Pure Fluid Component. Sulfur species are split out in a separate Sulfur Fluid Component with its own enumeration. However, some of the tests for these species are carried out with in-line equipment and no sample is
taken, nor is there transport of a sample to a laboratory. For this reason, this type of Fluid Analysis
can link either to a Fluid Sample (like Hydrocarbon and Water Analyses) or to a Flow Test Activity.
This can be seen in Figure 10-8.
To report Non-hydrocarbon Analysis as a standalone data object, the Non Hydrocarbon Analysis type
of Fluid Analysis can be used. This is shown in Figure 10-13.
A single test may yield the concentrations of one or more molecular species. The concentrations are
reported using the Non Hydrocarbon Concentrations element seen in the figure. This uses the same
construct of Overall Composition as used for the various Hydrocarbon Analyses. As stated above,
non-hydrocarbon molecular species are listed in the enumeration for Pure Fluid Component. Sulfur species are split out in a separate Sulfur Fluid Component with its own enumeration.
[UML diagram (Figure 10-13): the NonHydrocarbonAnalysis object and its NonHydrocarbonTest class, recording test conditions (time, volume, phases tested, temperature, pressure), analysis method, sampling point, cell and instrument identifiers, and the NonHydrocarbonConcentrations element (an OverallComposition).]
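Informally, a non-hydrocarbon test of this kind might be represented as in the Python sketch below, with concentrations keyed by component name and a link to either a fluid sample or a flow test activity; all names and values are illustrative assumptions.

# Hedged sketch: a non-hydrocarbon test reporting concentrations of species
# such as H2S and CO2 as an overall composition, linking to either a fluid
# sample or a flow test activity (for in-line measurements).
from dataclasses import dataclass, field
from typing import Dict, Optional

@dataclass
class NonHydrocarbonTest:
    test_number: int
    test_temperature_degc: Optional[float] = None
    test_pressure_kpa: Optional[float] = None
    analysis_method: Optional[str] = None
    # Mole fractions keyed by component name (informal stand-in for OverallComposition).
    concentrations: Dict[str, float] = field(default_factory=dict)
    fluid_sample_uid: Optional[str] = None        # used when a sample was taken
    flow_test_activity_uid: Optional[str] = None  # used for in-line measurement

inline_test = NonHydrocarbonTest(
    test_number=3,
    analysis_method="in-line gas chromatograph",
    concentrations={"hydrogen sulfide": 0.0004, "carbon dioxide": 0.021},
    flow_test_activity_uid="FTA-042",
)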
[UML diagram (class FluidCharacterizationTopLevel): the FluidCharacterization object with its Source, FluidComponentCatalog, Model, and TableFormat elements, and the FluidCharacterizationModel with its optional ModelSpecification (AbstractPvtModel) and FluidCharacterizationParameterSet.]
Figure 10-14. Fluid characterization high-level model, showing fluid characterization by model (green
box), by table (red box) or by a set of parameters (blue box).
The basic structure is shown in Figure 10-14. A fluid characterization has any number of fluid
characterization sources. Each is a reference to a fluid analysis and to the specific tests in that
analysis which were used for this characterization. The fluid system is also referenced. There is then a set of fluid characterization models, each of which can have a parametric model representation (see below), a set of tables of output properties, and/or a set of fluid characterization parameters.
A fluid characterization may also be represented as a combination of tabular and parametric
approaches (e.g., in thermal compositional reservoir simulation applications), where part of fluid
characterization (e.g., k-values) is represented in a parametric model (EoS), and another part of fluid
characterization (e.g., component viscosities) is represented in a tabular format. It is important to note
that the current schema is designed to represent only one fluid sample/system at a time.
With either representation, information is included to describe the geologic feature and the fluid
system being characterized. Also provided are references to the specific fluid samples and laboratory
tests that were used as the basis for the characterization. Additional information found in the fluid
characterization data object are the identity (and version) of the application used for its
characterization, and the use case for which the characterization was done.
Note that the composition of fluids throughout uses a common model which is described in Section
10.8.
Model-Parametric Output
In the model-parametric approach, parameters of a named EoS model determined during the
characterization effort are provided to an application using the same model formulation. Typical
information found in a parametric format includes:
• Identification and parameters of EoS models.
• Characteristics of each component used in characterization, such as critical properties, acentric
factor, etc.
• Identification and parameters of viscosity models.
• Parameters of equations to represent thermal properties of components.
• Mixing parameters for density and viscosity models.
The models are arranged in a hierarchy, with the top level being compositional (i.e., with a set of fluid
components each having fluid component properties), or correlation (i.e., a set of parameters and a
model which fits the behavior of the whole fluid) (Figure 10-15). A fluid characterization model can
have a model specification, and the specification is abstract down to the level of a known model which
then has its required parameters (see below for more details).
Correlation models are currently all viscosity models and are divided into dead oil, bubble point,
under-saturated oil and gas viscosity correlations.
Compositional models are available as EoS (phase behavior) and viscosity models.
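The hierarchy can be sketched informally as follows in Python (simplified names and structures only; the actual model definitions and required parameters are those in the schema and listed in Section 12.1.4).

# Hedged sketch of the model hierarchy: an abstract PVT model specialized into
# correlation models (whole-fluid parameter fits) and compositional models
# (per-component properties plus binary interaction coefficients).
from dataclasses import dataclass, field
from typing import Dict, Tuple

@dataclass
class AbstractPvtModel:
    name: str

@dataclass
class CorrelationViscosityModel(AbstractPvtModel):
    # Whole-fluid fit: a named correlation and its parameters.
    parameters: Dict[str, float] = field(default_factory=dict)

@dataclass
class CompositionalEosModel(AbstractPvtModel):
    mixing_rule: str = "classical"
    # Per-component critical properties, acentric factor, etc.
    component_properties: Dict[str, Dict[str, float]] = field(default_factory=dict)
    # Binary interaction coefficients keyed by the pair of component references.
    binary_interaction_coefficients: Dict[Tuple[str, str], float] = field(default_factory=dict)

eos = CompositionalEosModel(
    name="example cubic EoS characterization",
    component_properties={"C1": {"critical_pressure_kpa": 4599.0,
                                 "critical_temperature_k": 190.6,
                                 "acentric_factor": 0.011}},
    binary_interaction_coefficients={("C1", "C7+"): 0.03},
)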
[UML diagram (Figure 10-15): the fluid characterization model hierarchy, with AbstractPvtModel specialized into AbstractCorrelationModel and AbstractCompositionalModel (the latter carrying a MixingRule).]
Figure 10-16 shows more details of the models. All models (correlation and compositional) can have
model-wide PVT model parameters. All models (correlation and compositional) can also have custom
extensions, allowing extra parameters which specialist studies may have defined, to be added.
Compositional models also can have a range of fluid component properties defined per fluid
component, and a set of binary interaction coefficients, each relating one fluid component to another.
[UML diagram (Figure 10-16): detail of the PVT models, showing the PvtModelParameterSet and CustomPvtModelParameter extensions available to all models and, for compositional models, the FluidComponentProperty class (critical properties, acentric factor, volume shift, parachor, etc.) and the BinaryInteractionCoefficientSet.]
For a list of the specific models (which are concrete classes that may be instantiated in the XML data
transfer), see Section 12.1.4. The schema is designed so that each model inherits its correct
parameter set, and as noted above, custom parameters for extended models can be added.
Tabular Output
In a tabular format, fluid properties (such as pressure, density, viscosity, etc.) are provided as rows of
a table where a target application could use these fluid properties directly. The organization of each
table is defined for each characterization file, and many table formats may be used in a single
characterization. This organization gives the ordering, properties, and units of measure for each
column, and allows constant values and delimiters to be assigned. Typical information found in a
tabular format includes:
• Independent operating conditions such as pressure and temperature.
• Fluid properties such as oil and gas formation volume factor, solution gas or dissolved liquid,
viscosity and compressibility.
• Slope of compressibility/formation volume factor vs. pressure line in the under-saturated region, if
saturated fluid properties are provided.
• Tabular representation of k-values or component viscosities as a function of pressure and
temperature.
• Miscibility tables in case of gas/solvent EOR fluid characterization.
The organization of the tabular model is shown in Figure 10-17. The table format defines null values
and delimiters, plus a set of column headings which detail what comes in each column. The rows are
then strings of delimited values (e.g., CSV) which contain the values in the columns, plus a
saturated/under-saturated flag. Additionally, the table can contain table constants. The contents of the
Columns or Constants are available as an enumerated list of Output Fluid Property seen in the figure.
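As an informal illustration of how a consuming application might interpret such rows, the Python sketch below parses delimited row strings using a table format that declares the delimiter, null value, and ordered columns; the names and values are illustrative, not the PRODML API.

# Hedged sketch: parsing fluid characterization table rows using a declared
# table format (delimiter, null value, ordered column definitions with
# property name and unit of measure).
from dataclasses import dataclass
from typing import Dict, List, Optional

@dataclass
class TableColumn:
    property_name: str   # e.g., "Pressure", "Formation Volume Factor"
    uom: str

@dataclass
class TableFormat:
    delimiter: str
    null_value: str
    columns: List[TableColumn]

def parse_row(row: str, fmt: TableFormat) -> Dict[str, Optional[float]]:
    """Split a delimited row and map values onto the declared columns."""
    values = row.split(fmt.delimiter)
    parsed: Dict[str, Optional[float]] = {}
    for col, raw in zip(fmt.columns, values):
        parsed[col.property_name] = None if raw == fmt.null_value else float(raw)
    return parsed

fmt = TableFormat(
    delimiter=",",
    null_value="-999.25",
    columns=[TableColumn("Pressure", "kPa"),
             TableColumn("Formation Volume Factor", "m3/m3"),
             TableColumn("Viscosity", "cP")],
)
print(parse_row("20000,1.35,-999.25", fmt))
# {'Pressure': 20000.0, 'Formation Volume Factor': 1.35, 'Viscosity': None}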
[UML diagram (Figure 10-17, class FluidCharacterizationTable): the tabular model, showing FluidCharacterizationTableFormat (null value, delimiter, and TableColumn definitions with property and unit of measure), FluidCharacterizationTable with its TableRow strings and TableConstant parameters, and the OutputFluidProperty enumeration of available column and constant properties.]
[UML diagram: the FluidComponentCatalog (holding StockTankOil, NaturalGas, FormationWater, PureFluidComponent, PseudoFluidComponent, PlusFluidComponent, and SulfurFluidComponent entries) and the FluidComponentFraction class, which references catalog entries by uid.]
The Fluid Analysis and Fluid Characterization both include a Fluid Component Catalog at the top
level. All analyses and characterizations, respectively, use this catalog to reference the fluid
compositions which they require. If desired, every fluid analysis and characterization within a workflow can use a single set of possible fluid components, by having the same Fluid Component Catalog in
the parent analysis or characterization objects. Note that a fluid composition does not need to use all
of the fluid components in the fluid component catalog, only the ones required. It follows that every
component which is going to be used to describe a composition in a fluid analysis or fluid
characterization in that data object must be included in the fluid component catalog.
An example of the use of a fluid composition within fluid analysis can be seen in Figure 10-11, which
shows the Constant Composition Expansion test. Each test step (a pressure-temperature step)
contains vapor, liquid and overall compositions. Each of these composition types is shown in Figure
10-19.
Another use of this referencing of a fluid component within the fluid component catalog can be seen in
Figure 10-16: in the fluid component property class, the attribute at the bottom of the list is the fluid
component reference. This again contains the uid for the fluid component which is within the catalog
for the parent fluid characterization object. This fluid component property class contains the various
Equation of State parameters required for that component. The binary interaction coefficient has the
same referencing but for the two components whose binary interaction coefficient is being reported.
Figure 10-20 shows the fluid components (all derived from an abstract) in the catalog which includes:
• stock tank oil
• natural gas
• formation water
• pure fluid component
• pseudo fluid component
• plus fluid component
«XSDcomplexType» AbstractFluidComponent
«XSDelement»
+ MassFraction: MassPerMassMeasure [0..1]
+ VolumeConcentration: MassPerVolumeMeasureExt [0..1]
+ MoleFraction: AmountOfSubstancePerAmountOfSubstanceMeasure [0..1]
+ ConcentrationRelativeToDetectableLimits: DetectableLimitRelativeStateKind [0..1]
«XSDattribute»
+ uid: String64

«XSDcomplexType» FormationWater
«XSDelement»
+ SpecificGravity: double [0..1]
+ Salinity: MassPerMassMeasure [0..1]
+ Remark: String2000 [0..1]

«XSDcomplexType» StockTankOil
«XSDelement»
+ APIGravity: APIGravityMeasure [0..1]
+ MolecularWeight: MolecularWeightMeasure [0..1]
+ GrossEnergyContentPerUnitMass: EnergyPerMassMeasure [0..1]
+ NetEnergyContentPerUnitMass: EnergyPerMassMeasure [0..1]
+ GrossEnergyContentPerUnitVolume: EnergyPerVolumeMeasure [0..1]
+ NetEnergyContentPerUnitVolume: EnergyPerVolumeMeasure [0..1]
+ Remark: String2000 [0..1]

«XSDcomplexType» NaturalGas
«XSDelement»
+ GasGravity: double [0..1]
+ MolecularWeight: MolecularWeightMeasure [0..1]
+ GrossEnergyContentPerUnitMass: EnergyPerMassMeasure [0..1]
+ NetEnergyContentPerUnitMass: EnergyPerMassMeasure [0..1]
+ GrossEnergyContentPerUnitVolume: EnergyPerVolumeMeasure [0..1]
+ NetEnergyContentPerUnitVolume: EnergyPerVolumeMeasure [0..1]
+ Remark: String2000 [0..1]

«XSDcomplexType» PureFluidComponent
«XSDelement»
+ Kind: PureComponentKindExt
+ MolecularWeight: MolecularWeightMeasure [0..1]
+ HydrocarbonFlag: boolean
+ Remark: String2000 [0..1]

«XSDcomplexType» SulfurFluidComponent
«XSDelement»
+ Kind: SulfurComponentKindExt
+ MolecularWeight: MolecularWeightMeasureExt [0..1]
+ Remark: String2000 [0..1]

«XSDcomplexType» PseudoFluidComponent
«XSDelement»
+ Kind: PseudoComponentKindExt
+ SpecificGravity: double [0..1]
+ StartingCarbonNumber: NonNegativeLong [0..1]
+ EndingCarbonNumber: NonNegativeLong [0..1]
+ AvgMolecularWeight: MolecularWeightMeasure [0..1]
+ AvgDensity: MassPerVolumeMeasure [0..1]
+ StartingBoilingPoint: ThermodynamicTemperatureMeasure [0..1]
+ EndingBoilingPoint: ThermodynamicTemperatureMeasure [0..1]
+ AvgBoilingPoint: ThermodynamicTemperatureMeasure [0..1]
+ Remark: String2000 [0..1]

«XSDcomplexType» PlusFluidComponent
«XSDelement»
+ Kind: PlusComponentKindExt
+ SpecificGravity: double [0..1]
+ StartingCarbonNumber: NonNegativeLong [0..1]
+ StartingBoilingPoint: ThermodynamicTemperatureMeasure [0..1]
+ AvgDensity: MassPerVolumeMeasure [0..1]
+ AvgMolecularWeight: MolecularWeightMeasure [0..1]
+ Remark: String2000 [0..1]
Section 12.1 lists all the fluid components defined in the schema. However, the enumerated lists are
extensible so user-defined components can be added.
Note that the Fluid Component Catalog and the Fluid Components are contained in the PRODML
Common package. These are also used by the Simple Product Volume and Flow Test Activity data
objects, so are compatible with production volume reports created using PRODML.
Figure 11-1. Overall workflow for PVT worked example. Top-level object types are each given their own color, with the type names underlined. Red boxes indicate events that result in the creation of a new object.
Fluid System
The fluid system is referenced by many other objects (Figure 11-2). This is not a detailed object; the
full model for a reservoir and its sub-divisions containing hydrocarbons is the domain of RESQML.
The rock fluid unit feature in RESQML is the level of granularity which can be reproduced in the fluid
system description. Multiple such units can be included; they may correspond to, e.g., layers within an
overall reservoir comprising a fluid system.
Figure 11-2. Fluid system describes the reservoir and fluid contained in it (high-level outline).
Figure 11-3. Fluid sample acquisition job describes the operation of sample acquisition.
As described in Section 10.3, there are five types of fluid sample acquisition; the worked example uses
the downhole sample acquisition type, representing the data from a downhole sampling tool in a
flowing well. One of the fluid sample acquisitions is shown in Figure 11-4. Additional data can be
added from the schema. Each type of sample acquisition has its own data and references. A well,
wellbore and a well test are included in the examples and are referenced from this fluid sample
acquisition.
Figure 11-4. Fluid sample acquisition has the details of the operation to acquire each individual sample.
Figure 11-5. Fluid sample container example. It does not reference other objects; they reference it.
Fluid Sample
A total of three fluid samples exist in the worked example:
1. Sample is acquired, found to be invalid and destroyed.
2. Sample is acquired and retained, and used for hydrocarbon analysis.
3. Sample is sub-sampled from Sample 2, with a water sample being removed and used for water
analysis.
Important information about the sample concerns its provenance: where it came from in the reservoir,
and which operation created it. This aspect is shown in Figure 11-6.
Note that a fluid sample and its fluid sample container are not in 1:1 correspondence, because the
sample may be moved between containers during its lifetime. The original fluid sample container is
recorded with the fluid sample acquisition job. Any subsequent changes are recorded in the fluid
sample chain of custody event element. Two such events are recorded for Sample 2:
1. Custody transfer to a new custodian (a PVT Lab), also recording the current fluid sample
container.
2. Sub-sample of dead fluid taken. This is into a new container and creates a new fluid sample. This
event is shown in detail in Figure 11-7.
Figure 11-7. Fluid sample chain of custody event, showing a sub-sample of aqueous phase being taken.
Note that while not shown in the example, an important feature of the model is the ability to combine
samples into a new sample, for example recombining separator oil and gas phase samples.
Fluid Analysis
The fluid component catalog is unique to each fluid analysis object (also to each fluid
characterization). The worked example contains a catalog containing all the kinds of fluid component,
as listed and shown in Figure 11-9. Each analysis test then references items from the fluid catalog to
report molar composition, etc. Note that the UID attribute is the means by which a fluid component is
referenced from a test. See Figure 11-12 for an example. The UID only needs to be unique within this
data object; it is not a UUID. Hence, any convenient naming convention can be used.
Fluid sample integrity and preparation is described in the element of that name, as shown in Figure
11-10.
The use of multiple sample contaminant elements, one of which includes a reference to a sample of
the contaminant itself, is shown in Figure 11-11. Note that this “sample of contaminant” refers to the
water sample which was sub-sampled from Sample 2, becoming Sample 3 (see Section 11.1.4).
An example analysis test is shown in Figure 11-12. This example shows how the fluid components in
the catalog are referenced using their UIDs. Each test also has a UID and this can be used to
reference the test from elsewhere. This will be used in the fluid characterization to identify the tests
that were used in a given characterization.
Figure 11-12. Fluid analysis—use of fluid components previously defined in fluid component catalog.
As shown in Figure 11-13, many tests have a pattern of common data followed by recurring test
steps. This example figure shows the constant composition expansion test. In the worked example,
the separator tests have multiple test steps, some of which report fluid composition of multiple
phases, so combining aspects of the examples detailed here.
Figure 11-13. Fluid analysis—many tests have a pattern of common data followed by recurring test steps.
Fluid Characterization
There are three ways in which a fluid can be characterized, and they can be mixed in the same fluid
characterization data object. Here they are shown in separate worked examples; the third method is a
standalone example.
1. Fluid characterization using models (file: Good Oil 4.ModelCharacterization).
2. Fluid characterization using tables (file: Good Oil 4.TabularCharacterization).
3. Fluid characterization using set of fluid parameters (file: PTA PVT Using Fluid Parameter Set).
The high level content of the model approach is shown in Figure 11-15. The tabular approach follows
later and shares the same high-level data elements, but adds tabular data in place of (or in addition
to) model data.
The software used to generate the characterization and its intended target software for consumption
of the data can be reported (Figure 11-15).
The source of the characterization identifies which fluid analysis test results were used in this fluid
characterization. This is done by referencing the UID of each test concerned, together with a reference to the
parent fluid analysis data object, as can be seen in Figure 11-16.
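For illustration only, such a reference might be serialized along the following lines; the element names FluidAnalysisReference and FluidAnalysisTestReference are placeholders invented for this sketch, and the uuid value is a dummy:

<FluidCharacterizationSource>
  <!-- data object reference (DOR) to the parent fluid analysis data object (dummy uuid) -->
  <FluidAnalysisReference uuid="00000000-0000-0000-0000-000000000000"/>
  <!-- uid of the test, within that fluid analysis, whose results fed this characterization -->
  <FluidAnalysisTestReference>cce-test-1</FluidAnalysisTestReference>
</FluidCharacterizationSource>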
Standard conditions can be reported. It is also possible, although not shown in the example, to report
separation conditions (one or more stages plus stock tank conditions), these being the conditions at
which the fluid is characterized.
The fluid component catalog works the same way as for the fluid analysis example (see Section
11.1.5.1). The worked example does not use the same catalog as the analysis, instead having one
with fewer components as would typically be used in software models.
The table format defines the columns that appear in the table. It also defines the column-delimiter
ASCII character (e.g., a comma) and the null value (e.g., -999.25). The attributes of each column are
described below.
Property is an enumeration that can be extended and the properties are listed in Section 12.3. Phase
is also an enumeration and is listed in Section 12.3. Phase and fluid component reference are
provided so that properties may be output that relate to a specific phase or fluid component. Unit of
measure is not a controlled list. Keyword alias is provided in case it is useful to map properties onto
keywords in a specific software package.
An example extract from the worked example fluid characterization table format is shown in Figure
11-19.
The fluid characterization table then contains the actual values defined by the format. These are
contained in table row elements where the values are separated by the ASCII character defined
above.
In addition to the table rows, the table also has table constants. These have the same attributes as
listed above for table columns, except that sequence is not present (table constants do not have a sequence). A
value attribute is added and is mandatory.
The table has a UID so that it can be referenced from elsewhere, and a name.
An example table is shown in Figure 11-20.
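To make the format/table split concrete, the fragment below is a hand-written sketch in the spirit of the worked example; the element and attribute names (TableFormat, TableColumn, TableRow, TableConstant, tableFormatRef) approximate the schema rather than quote it, and the delimiter, null value and numbers are illustrative:

<TableFormat uid="fmt-1">
  <NullValue>-999.25</NullValue>
  <Delimiter>,</Delimiter>
  <TableColumn sequence="1" uom="psi">
    <Property>Pressure</Property>
  </TableColumn>
  <TableColumn sequence="2" uom="m3/m3" phase="oil">
    <Property>Formation Volume Factor</Property>
  </TableColumn>
</TableFormat>
<Table uid="tbl-1" name="CCE results" tableFormatRef="fmt-1">
  <!-- constants carry the column attributes (minus sequence) plus a mandatory value -->
  <TableConstant uom="degC" value="93.3">
    <Property>Temperature</Property>
  </TableConstant>
  <!-- each row holds the column values separated by the delimiter defined above -->
  <TableRow>5000,1.45</TableRow>
  <TableRow>4000,1.48</TableRow>
</Table>

Because rows are plain delimited strings, a consumer needs the format definition to interpret each value's property, phase and unit of measure.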
The table example contains some other illustrations, e.g., an extension to the enum list of properties,
which can be seen by inspecting the file.
Compositional EoS
• Peng-Robinson 76
• Peng-Robinson 78
• SRK
Compositional – Viscosity
• C S Pedersen 84
• C S Pedersen 87
• Lohrenz-Bray-Clark
• Friction Theory
Compositional – Thermal
Currently, no specific models are defined, so a generic model together with custom model extensions can
be used.
Acknowledgement
Special thanks to Kappa Engineering and Schlumberger for donating their work to Energistics, which
is the basis for the current PRODML PTA data model.
13.2 Background
This work was originally done by Schlumberger and Kappa Engineering. It was proposed to be a
“PTAML” akin to PRODML or WITSML. The initial use case foreseen was data transfer
from a PTA package (e.g., Kappa’s) to a 3D geological model (e.g., Schlumberger’s).
The data model was developed with input from Energistics and experienced “ML” modelers and
donated to Energistics. The donated model was in the Energistics 1.x style which was current at the
time of development.
Since the donation, Energistics has updated the model to the v2.x style. It was incorporated into
PRODML for the first time in version 2.1.
Since the initial development, new and useful additions have been made to the more integrated
“energy ML” (both data objects available for re-use, and the Common Technical Architecture), which
have been re-used where possible.
Figure 13-1. Overall Workflow of Pressure Transient Testing and Fluid Sampling.
Figure 13-2. Use Case diagram for pressure transient data. The data transfers from external systems
shown in the red box are in scope. Processes internal to PTA are out of scope.
Deconvolution
The Deconvolution object deals with the specialist pre-processing activity of deconvolution. Not
shown on the figure for simplicity is the fact that the output of deconvolution is further time-series
data, and these data are also contained in linked Channels.
Preprocess
The Preprocess object deals with the general pre-processing activity such as re-sampling, smoothing,
merging measured data channels. The exact methods of pre-processing are not detailed, just the
generic type. Not shown on the figure for simplicity is the fact that the output of pre-processing is
further time-series data, and these data are also contained in linked Channels.
Figure 13-3. Top Level Objects used for Flow Tests and Pressure Transient Analysis, also showing link to
fluid sampling.
13.6 General Use of Time Series Data in Flow Test Data objects
All of the Flow Test Data objects (except Flow Test Job) use time series data. This is variously
measured data from sensors, pre-processed or deconvolved data, or simulated data from analysis. A
hierarchy of classes is used to describe the metadata appropriate to the time series type, and then the
Channel object is used to contain the time series data points. Channel is a WITSML object; Channels
are collected into Channel Sets and the Channel Set is where the actual values are stored, in the
Channel Data class.
The time series data model is shown in Figure 13-4. The four top-level objects for flow testing are
shown in the lower left. They all have associations where needed to the Abstract Flow Test Data
class, which has DORs to the Channel and Channel Set holding the time data for the particular
instance of time series data.
(Diagram content: the top-level objects FlowTestJob, FlowTestActivity, PressureTransientAnalysis, PtaDataPreProcess, PtaDeconvolution and FluidSampleAcquisitionJob, each linked through AbstractFlowTestData to the WITSML Channel and ChannelSet objects. FlowTestJob and FluidSampleAcquisitionJob carry Client, ServiceCompany, StartTime and EndTime elements.)
Figure 13-4. Time series data utilizes links to Channel and Channel Set objects which contain the
“columns” of data.
Figure 13-5 shows the three types of data (green boxes) that can then be represented by the
appropriate inheritance; a sketch of one such instance follows the numbered list below.
(Diagram content: the FlowTestActivity object and the AbstractFlowTestData class, which carries ChannelSet: ChannelSet, TimeChannel: Channel, TimeSeriesPointRepresentation [0..1] (enumeration: point by point, stepwise value at end of period), Remark [0..1] and a uid attribute, plus PreProcess: PtaDataPreProcess and Deconvolution: PtaDeconvolution elements on the relevant subclasses.)
Figure 13-5. Flow test activity channel diagram, showing the three main types of data (green boxes):
pressure, flow and flow test.
1. Pressure data: this has a pressure channel and, optionally, a derivative channel (used in
analysis). The Abstract PTA Pressure Data itself can be realized as one of four concrete types.
These are useful to differentiate when performing analysis, since the pressure data (in
particular) may have been pre-processed, and it is useful to know what steps were taken
prior to analysis.
a. Measured (sometimes informally referred to as "raw").
b. PreProcessed – which also has a data object reference (DOR) to a PreProcess data
object containing details of the pre-processing (see Section 16.1).
c. Deconvolved – which also has a DOR to a Deconvolution data object containing
details of the deconvolution (see Section 16.2).
d. Output – used for simulation, deconvolution output.
2. Flow data: this has an enum to state what is being measured (to allow for different types of
separation and metering of one or more phases). Like pressure data, Abstract PTA Flow Data
can be realized as one of the same four concrete types described in the previous point.
3. Flow test data: this is used for channels such as temperature, where no direct input to
pressure transient analysis is used, but they are useful contextual data.
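As a sketch of item 1b in the list above (the element names PreProcessedPressureData, PreProcess and PressureChannel are approximations for this illustration, and the uuid values are dummies; the schema links these via DORs to the PtaDataPreProcess object and to the WITSML Channel holding the values):

<PreProcessedPressureData>
  <!-- DOR to the PtaDataPreProcess object that records how the raw data were conditioned -->
  <PreProcess uuid="00000000-0000-0000-0000-000000000001"/>
  <!-- DOR to the Channel (within a Channel Set) containing the pressure values -->
  <PressureChannel uuid="00000000-0000-0000-0000-000000000002"/>
</PreProcessedPressureData>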
As noted above, the Channel object is referenced by a DOR. It contains generic metadata
concerning the data in the Channel. For more information about the Channel data object, see
Energistics Online: WITSML: Channel.
The actual values of data are contained in the Channel Data class of the Channel Set, which collects
together the Channels that share a common time index. For details of the format, see Energistics
Online: WITSML: Channel Set. Figure 13-6 shows an example of a Channel Data instance.
(Diagram content: the FlowTestActivity family of types: DrillStemTest, FormationTesterStation (with TieInLog: DataObjectReference), ProductionTransientTest, ProductionFlowTest, InjectionFlowTest and WaterLevelTest, each containing one or more FlowTestMeasurementSet instances via IntervalMeasurementSet associations.)
Figure 14-1. UML Diagram for Flow Test Activity - High Level.
(Diagram content: FlowTestLocation; FlowTestMeasurementSet (with Remark [0..1] and a TestPeriod association); TestPeriod (StartTime, EndTime, TestPeriodKind [0..1], WellFlowingCondition [0..1], Remark [0..1] and a uid attribute); and the ProdmlCommon FluidComponentCatalog.)
b. A Well Completion represents a "flow from the wellhead" which may be one to one
with a single Wellbore Completion, or it may represent the commingled flow from
multiple Wellbore Completions.
Note: using Wellbore Completion has the merit that this describes the mechanical details of
the wellbore to reservoir connection, e.g., perforation types and shots per foot, etc.
Note also that Wellbore Completion itself has a DOR to the Wellbore which contains it, and also
a DOR to the Well Completion which contains it. Well Completion has a DOR to the Well
which contains it.
3. Reporting Entity, which is a generic source of flow used in volume reporting. For more
information on reporting entities, see Section 5.2. This option will make particular sense for
the Production Flow Test type of Flow Test Activity (see Section 14.1) if this is being used as
part of volume reporting.
Figure 14-6. Flow Test Activity links to Channels to report Measured data (showing attributes inherited by
Pressure, Flow, Other data).
Figure 15-1. Top Level Diagram for Pressure Transient Analysis Data object showing Referencing to Flow
Test Activity, Fluid and Reservoir Properties, Analysis, Pressure Transient Models.
See Figure 15-2 which shows the concept for a simple case where the Job comprises one Activity
(one test location, see bottom sketch of the figure), and two Analyses. The Activity contains all the
measurements needed and defines the location. Each Analysis references the Activity. It also
references (by uid within the Activity) the Test Period(s) used for the analysis (e.g., “final build up”).
See also Figure 15-4 where the red and blue boxes highlight these two types of reference.
Figure 15-2. Example showing links for a DST, one interval, with multiple analyses.
Interference testing is slightly more complex. In this case, references must be provided to the Principal and
also to any Interfering Test Intervals (which will be multiple Flow Test Measurement Sets within the
Flow Test Activity). See Figure 15-3, which shows a vertical interference test. The Analysis has a DOR
to the Activity as before, but now it is also necessary to state which is the principal location and which
is the interfering location. The Principal Flow Test Interval Ref must be used (referencing the uid
of that interval within the Activity), and then each Interfering Flow Test Interval must be referenced within the
Activity. See in Figure 15-4 the orange box and the green box for these two interval refs. The test
period(s) from the interfering interval, and the interfering flow rate, also need to be identified.
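As an illustrative sketch of these references (PrincipalFlowTestMeasurementSetRef and PrincipalTestPeriodRef appear in the Pressure Transient Analysis class shown below; the interfering-interval element name, the uid values and the dummy uuid are assumptions for this example):

<PressureTransientAnalysis>
  <!-- DOR to the Flow Test Activity containing all measurement sets -->
  <FlowTestActivity uuid="00000000-0000-0000-0000-000000000003"/>
  <!-- the principal (observation) interval and its analysed test period, referenced by uid -->
  <PrincipalFlowTestMeasurementSetRef>station-2</PrincipalFlowTestMeasurementSetRef>
  <PrincipalTestPeriodRef>final-build-up</PrincipalTestPeriodRef>
  <!-- one reference per interfering interval within the same Activity (element name assumed) -->
  <InterferingFlowTestMeasurementSetRef>station-1</InterferingFlowTestMeasurementSetRef>
</PressureTransientAnalysis>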
Figure 15-3. Example showing links for WFT with a VIT between two stations.
Figure 15-4. Pressure Transient Analysis Links to Flow Test Activity elements.
AbstractObject
«XSDcomplexType,XSDtopLevelElement» PressureTransientAnalysis
«XSDelement»
+ ModelName: String2000
+ TimeAppliesFrom: TimeStamp [0..1]
+ MethodName: String2000 [0..1]
+ TimeAppliesTo: TimeStamp [0..1]
+ IsNumericalAnalysis: boolean [0..1]
+ FluidCharacterization: FluidCharacterization [0..1]
+ NumericalPtaModel: DataObjectReference [0..1]
+ FlowTestActivity: FlowTestActivity
+ PrincipalFlowTestMeasurementSetRef: String64 [0..1]
+ PrincipalTestPeriodRef: String64 [1..*]
+ WellboreModel: WellboreBaseModel [0..1]
+ LayerModel: LayerModel [0..*]
+ FluidPhaseAnalysisKind: FluidPhaseKind
+ PressureNonLinearTransformKind: PressureNonLinearTransformKind
+ PseudoPressureEffectApplied: PseudoPressureEffectApplied [0..1]
+ TimeNonLinearTransformKind: TimeNonLinearTransformKind
+ Remark: String2000 [0..1]

TypeEnum «enumeration» FluidPhaseKind
multiphase gas+water, multiphase oil+gas, multiphase oil+water, multiphase oil+water+gas, single phase gas, single phase oil, single phase water

TypeEnum «enumeration» PressureNonLinearTransformKind
pressure (un-transformed), pressure squared, gas pseudo-pressure, normalised gas pseudo-pressure, normalised multi-phase pseudo-pressure

TypeEnum «enumeration» PseudoPressureEffectApplied
TypeEnum «enumeration» TimeNonLinearTransformKind
Figure 15-5. Options for Representing Handling of Fluid Phases and Fluid Non-linearity.
15.4 Analysis
Analysis describes the use of techniques including log-log analysis, or other “specialized” analyses.
Pressure transient or rate transient analysis can be transferred. The choice is made by which version
of the Abstract Analysis is used, as seen in Figure 15-6 (Red Box).
A key role of this data model is to enable referencing of the time series which were used in the
analysis, and also of any simulated time series (i.e., the history matches to the data).
1. Rate Transient Analysis (RTA) has input pressure and flowrate, and simulated flowrate. See
Purple Box.
2. Pressure Transient Analysis (PTA) has input pressure and simulated pressure, and for
flowrate data a choice shown in the Blue Box:
a. Use a single effective flow rate
b. Use a set of Test Periods for rate history (this will give constant rate steps)
Figure 15-6. Analysis Class showing Pressure or Rate Transient, Inputs and Outputs linking to Channels,
Choice of Analysis Methods, Choice of Flow Rate Data Type.
Figure 15-7. The Pressure Transient Analysis model has one Wellbore Section (red box), and as many
Layer Models as required; each Layer Model can contain one each of the Near Wellbore, Reservoir and
Boundary Sections.
Layer to Layer flow communication is also supported via the Layer To Layer Connection element.
Figure 15-8. Hierarchy of Model Sections which spans most of the well-known pressure transient models.
A large set of well-known Parameters is also pre-defined, with the parameters which belong to each
model controlled by the schema as described above. For a list of all the well-known Parameters
containing numerical values included in the schema, see Section 17.1.2. Each parameter has the
following attributes (a sketch of one parameter instance follows the list):
1. The type is the name of the well-known parameter, e.g., “wellbore volume” as seen in Figure
15-9.
2. The abbreviation is a fixed string in the schema and is the commonly used abbreviation often
used in reports to save space, e.g., “Vw”.
3. The value with its unit of measure, constrained by the data type, e.g., “volume measure types”
such as m3.
4. Uid.
5. Source Result Ref, which is a string containing the uid of a different result (in the same Analysis)
which was used as the input for this parameter.
6. Remark.
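A sketch of one such parameter instance, assuming the value is carried as an element with a uom attribute and the abbreviation is the fixed string defined by the schema (the exact serialization, uid values and remark text are illustrative):

<WellboreVolume uid="param-01" sourceResultRef="result-03">
  <Abbreviation>Vw</Abbreviation>
  <Value uom="m3">58.2</Value>
  <Remark>Volume of tubing plus open hole below the shut-off valve.</Remark>
</WellboreVolume>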
Figure 15-9. Parameters are provided which are needed by the pressure transient models.
Some Parameters are enums, e.g., “Boundary1 Type” (for a boundary model) can be “constant
pressure” or “no-flow”. For a list of all the well-known enum Parameters included in the schema, see
Section 17.1.3.
There are a small number of “sub models”, where an aspect of the Model requires a recurring set of
parameters. An example is for a hydraulic fracture which can recur to represent a multiply fractured
horizontal well. Each fracture has its own instance of a Single Fracture Sub Model. These are shown
in Figure 15-10.
«XSDcomplexType» SingleFractureSubModel
«XSDelement»
+ FractureTip1Location: LocationIn2D
+ FractureTip2Location: LocationIn2D
+ FractureHeight: FractureHeight
+ DistanceMidFractureHeightToBottomBoundary: DistanceMidFractureHeightToBottomBoundary [0..1]
+ FractureFaceSkin: FractureFaceSkin [0..1]
+ FractureConductivity: FractureConductivity [0..1]
+ FractureStorativityRatio: FractureStorativityRatio [0..1]
+ FractureModelType: FractureModelType

«XSDcomplexType» DistributedParametersSubModel
«XSDelement»
+ IsPermeabilityGridded: boolean
+ PermeabilityArrayRefID: ResqmlModelRef [0..1]
+ IsThicknessGridded: boolean
+ ThicknessArrayRefID: ResqmlModelRef [0..1]
+ IsPorosityGridded: boolean
+ PorosityArrayRefID: ResqmlModelRef [0..1]
+ IsDepthGridded: boolean
+ DepthArrayRefID: ResqmlModelRef [0..1]
+ IsKvToKrGridded: boolean
+ KvToKrArrayRefID: ResqmlModelRef [0..1]
+ IsKxToKyGridded: boolean
+ KxToKyArrayRefID: ResqmlModelRef [0..1]

«XSDcomplexType» ResqmlModelRef
«XSDelement»
+ ResqmlModelRef: DataObjectReference

«XSDcomplexType» LocationIn2D
«XSDelement»
+ CoordinateX: LengthMeasure
+ CoordinateY: LengthMeasure

«XSDcomplexType» InternalFaultSubModel
«XSDelement»
+ IsLeaky: boolean
+ TransmissibilityReductionRatioOfLinearFront: TransmissibilityReductionFactorOfLinearFront [0..1]
+ IsConductive: boolean
+ IsFiniteConductive: boolean
+ Conductivity: FractureConductivity [0..1]
+ FaultRefID: ResqmlModelRef

«XSDcomplexType» ReservoirZoneSubModel
«XSDelement»
+ BoundingPolygonPoint: LocationIn2D [1..*]
+ Permeability: HorizontalRadialPermeability [0..1]
+ Porosity: Porosity [0..1]
+ Thickness: TotalThickness [0..1]
Figure 15-10. Some types of pressure transient model require sub-model classes.
Custom parameters and custom models can be used to describe specialized PTA models not
included in the schema. They follow the pattern shown in Figure 15-11. Each model section has, in
addition to the well-known models, a Custom xx Model (where xx is one of the four model section
types). The custom model can use zero to many Any Parameter elements (the base Abstract
Parameter, so that any of the well-known Parameters can be re-used), and zero to many Custom Parameter
elements which, unlike the well-known Parameters, can have their name and abbreviation set and carry a Measure
Value of the General Measure Type (i.e., there is no defined set of UoM, since it can
represent any measurement dimensionality). A sketch of a custom model instance follows.
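A minimal sketch of such a custom model (the model name, parameter names, values and units are invented for illustration; the class structure follows the diagram below):

<CustomWellboreModel>
  <ModelName>Example dual-chamber storage model</ModelName>
  <!-- a well-known parameter re-used via the AnyParameter element -->
  <WellboreVolume uid="p1">
    <Value uom="m3">58.2</Value>
  </WellboreVolume>
  <!-- a fully custom parameter with its own name, abbreviation and general measure value -->
  <CustomParameter uid="p2">
    <Name>Chamber Split Ratio</Name>
    <Abbreviation>Rc</Abbreviation>
    <MeasureValue uom="Euc">0.35</MeasureValue>
  </CustomParameter>
</CustomWellboreModel>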
«XSDcomplexType» WellboreBaseModel
«XSDelement»
+ WellboreRadius: WellboreRadius
+ WellboreStorageCoefficient: WellboreStorageCoefficient
+ WellboreVolume: WellboreVolume [0..1]
+ WellboreFluidCompressibility: WellboreFluidCompressibility [0..1]
+ TubingInteralDiameter: TubingInteralDiameter [0..1]
+ FluidDensity: FluidDensity [0..1]
+ WellboreDeviationAngle: WellboreDeviationAngle [0..1]
+ WellboreStorageMechanismType: WellboreStorageMechanismType [0..1]

«XSDcomplexType» ConstantStorageModel

«XSDcomplexType» ChangingStorageFairModel
«XSDelement»
+ RatioInitialToFinalWellboreStorage: RatioInitialToFinalWellboreStorage
+ DeltaTimeStorageChanges: DeltaTimeStorageChanges

«XSDcomplexType» CustomWellboreModel
«XSDelement»
+ ModelName: ModelName
+ AnyParameter: AbstractParameter [0..*]
+ CustomParameter: CustomParameter [0..*]

AbstractParameter
«XSDcomplexType» PtaParameters::CustomParameter
«XSDelement»
+ Name: String64
+ Abbreviation: String64
+ MeasureValue: GeneralMeasureType
For a matrix of all the Model Sections vs. all the Parameters, showing which parameters
are mandatory and optional in which model, see Section 17.1.4.
16.1 Preprocess
(Diagram content: the PtaDataPreProcess top-level data object and the FlowTestActivity DataConditioning enumeration.)
16.2 Deconvolution
The Deconvolution data object is relatively simple as shown in Figure 16-2. The principal elements
are:
1. The Flow Test Activity and Flow Test Measurement Set within it, and Flow Test Period(s) within
that, which is the source of the pre-processed data.
2. The input pressure data and flowrate data, and the reconstructed output data.
3. Parameters required by Deconvolution.
There are two forms of output deconvolved pressure, both of which contain deconvolved pressure
Channel(s) and associated reference flowrate(s); a sketch follows the list:
1. A single deconvolved pressure output.
2. A set of multiple deconvolved pressure outputs, each one giving the response of a specific Test
Flow Period.
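An illustrative sketch of the two forms (class names follow the diagram below; the uid values, units and the way the deconvolved pressure Channel is referenced are assumptions for this example):

<!-- form 1: a single deconvolved pressure response -->
<DeconvolutionSingleOutput>
  <DeconvolutionOutput>
    <DeconvolvedPressure>
      <!-- DOR to the Channel holding the deconvolved pressure values (dummy uuid) -->
      <Channel uuid="00000000-0000-0000-0000-000000000004"/>
    </DeconvolvedPressure>
    <DeconvolutionReferenceFlowrateValue uom="m3/d">250</DeconvolutionReferenceFlowrateValue>
  </DeconvolutionOutput>
</DeconvolutionSingleOutput>
<!-- form 2: one output per test flow period, each identifying its period by uid -->
<DeconvolutionMultipleOutput>
  <TestPeriodOutputRefId>flow-period-3</TestPeriodOutputRefId>
  <DeconvolutionOutput>
    <DeconvolvedPressure>
      <Channel uuid="00000000-0000-0000-0000-000000000005"/>
    </DeconvolvedPressure>
    <DeconvolutionReferenceFlowrateValue uom="m3/d">180</DeconvolutionReferenceFlowrateValue>
  </DeconvolutionOutput>
</DeconvolutionMultipleOutput>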
AbstractObject
«XSDcomplexType,XSDtopLevelElement» PtaDeconvolution
«XSDelement»
+ FlowTestActivity: FlowTestActivity
+ FlowTestMeasurementSetRef: String64 [0..1]
+ FlowTestPeriodRef: String64 [0..*]
+ MethodName: String2000
+ InitialPressure: PressureMeasure
+ InputPressure: AbstractPtaPressureData
+ InputFlowrate: AbstractPtaFlowData
+ ReconstructedPressure: DeconvolvedPressureData [0..1]
+ ReconstructedFlowrate: DeconvolvedFlowData [0..1]
+ Remark: String2000 [0..1]
+ DeconvolutionOutput: AbstractDeconvolutionOutput [1..*]

«XSDcomplexType» AbstractDeconvolutionOutput
(concrete types: DeconvolutionSingleOutput; DeconvolutionMultipleOutput, which adds + TestPeriodOutputRefId: String64)

«XSDcomplexType» DeconvolutionOutput
«XSDelement»
+ DeconvolvedPressure: DeconvolvedPressureData [0..1]
+ DeconvolutionReferenceFlowrateValue: VolumePerTimeMeasure
Models

Wellbore Section
Wellbore Base Model: Abstract wellbore response model from which the other wellbore response model types are derived.
Constant Storage Model: Constant wellbore storage model.
Changing Storage Fair Model: Changing wellbore storage model using the Fair model.
Changing Storage Hegeman Model: Changing wellbore storage model using the Hegeman model.
Changing Storage Spivey Packer Model: Changing wellbore storage model using the Spivey Packer model.
Changing Storage Spivey Fissures Model: Changing wellbore storage model using the Spivey Fissures model.
Custom Wellbore Model: Wellbore Storage Model allowing for the addition of custom parameters to support extension of the model library provided.

Near Wellbore Section
Fractured Horizontal Finite Conductivity Model: Fracture model, with horizontal fracture (sometimes called "pancake fracture") flow. Finite Conductivity Model.
Partially Penetrating Model: Partially Penetrating model, with flowing length of wellbore less than total thickness of reservoir layer (as measured along wellbore).
Slanted Fully Penetrating Model: Slanted wellbore model, with full penetrating length of wellbore open to flow.
Slanted Partially Penetrating Model: Slanted wellbore model, with flowing length of wellbore less than total thickness of reservoir layer (as measured along wellbore).
Horizontal Wellbore Model: Horizontal wellbore model with wellbore positioned at arbitrary distance from lower surface of reservoir layer.
Horizontal Wellbore 2 Layer Model: Horizontal wellbore model with wellbore positioned at arbitrary distance from lower surface of reservoir layer, and with additional upper layer parallel to layer containing wellbore.
Horizontal Wellbore Multiple Equal Fractured Model: Horizontal wellbore model with wellbore positioned at arbitrary distance from lower surface of reservoir layer, containing a number "n" of equally spaced identical vertical fractures.
Horizontal Wellbore Multiple Variable Fractured Model: Horizontal wellbore model with wellbore positioned at arbitrary distance from lower surface of reservoir layer, containing a number "n" of non-identical vertical fractures. These may be unequally spaced and each may have its own orientation with respect to the wellbore, and its own height. Expected to be modelled numerically.
Custom Near Wellbore Model: Near Wellbore Model allowing for the addition of custom parameters to support extension of the model library provided.

Reservoir Section
Reservoir Base Model: Abstract reservoir model from which the other types are derived.
Homogeneous Model: Homogeneous reservoir model.
Dual Porosity Pseudo Steady State Model: Dual Porosity reservoir model, with Pseudo-Steady-State flow between the two porosity systems.
Dual Porosity Transient Spheres Model: Dual Porosity reservoir model, with transient flow between the two porosity systems, and assuming spherical shaped matrix blocks.
Dual Porosity Transient Slabs Model: Dual Porosity reservoir model, with transient flow between the two porosity systems, and assuming slab shaped matrix blocks.
Dual Permeability With Crossflow Model: Dual Permeability reservoir model, with Cross-Flow between the two layers.
Radial Composite Model: Radial Composite reservoir model, in which the wellbore is at the center of a circular homogeneous zone, communicating with an infinite homogeneous reservoir. The inner and outer zones have different reservoir and/or fluid characteristics. There is no pressure loss at the interface between the two zones.
Linear Composite Model: Linear Composite reservoir model in which the producing wellbore is in a homogeneous reservoir, infinite in all directions except one where the reservoir and/or fluid characteristics change across a linear front. On the farther side of the interface the reservoir is homogeneous and infinite but with a different mobility and/or storativity. There is no pressure loss at the interface between the two zones.
Linear Composite With Leaky Fault Model: Linear Composite reservoir model in which the producing wellbore is in a homogeneous reservoir, infinite in all directions except one where the reservoir and/or fluid characteristics change across a linear front. On the farther side of the interface the reservoir is homogeneous and infinite but with a different mobility and/or storativity. There is a fault or barrier at the interface between the two zones, but this is "leaky", allowing flow across it.
Linear Composite With Conductive Fault Model: Linear Composite reservoir model in which the producing wellbore is in a homogeneous reservoir, infinite in all directions except one where the reservoir and/or fluid characteristics change across a linear front. On the farther side of the interface the reservoir is homogeneous and infinite but with a different mobility and/or storativity. There is a fault or barrier at the interface between the two zones, but this is "leaky", allowing flow across it, and conductive, allowing flow along it. It can be thought of as a non-intersecting fracture.
Linear Composite With Changing Thickness Across Leaky Fault Model: Linear Composite reservoir model in which the producing wellbore is in a homogeneous reservoir, infinite in all directions except one where the reservoir and/or fluid characteristics change across a linear front. On the farther side of the interface the reservoir is homogeneous and infinite but with a different mobility and/or storativity and thickness. There is a fault or barrier at the interface between the two zones, but this is "leaky", allowing flow across it.
Numerical Homogeneous Reservoir Model: Numerical model with homogeneous reservoir. This model may have constant value or reference a grid of geometrically distributed values for the following parameters: permeability (k), thickness (h), porosity (phi), depth (Z), vertical anisotropy (KvToKr) and horizontal anisotropy (KyToKx). Internal faults can be positioned in this reservoir.
Numerical Dual Porosity Reservoir Model: Numerical model with dual porosity reservoir. This model may have constant value or reference a grid of geometrically distributed values for the following parameters: permeability (k), thickness (h), porosity (phi), depth (Z), vertical anisotropy (KvToKr) and horizontal anisotropy (KyToKx). Internal faults can be positioned in this reservoir.
Custom Reservoir Model: Reservoir Model allowing for the addition of custom parameters to support extension of the model library provided.

Boundary Section
Boundary Base Model: Abstract boundary model from which the other types are derived.
Infinite Boundary Model: Infinite boundary model - there are no boundaries around the reservoir.
Single Fault Model: Single fault boundary model. A single linear boundary runs along one side of the reservoir.
Closed Circle Model: Closed circle boundary model.
Two Parallel Faults Model: Two parallel faults boundary model. Two linear parallel boundaries run along opposite sides of the reservoir.
Two Intersecting Faults Model: Two intersecting faults boundary model. Two linear non-parallel boundaries run along adjacent sides of the reservoir and intersect at an arbitrary angle.
U Shaped Faults Model: U-shaped faults boundary model. Three linear faults intersecting at 90 degrees bound the reservoir on three sides with the fourth side unbounded.
Closed Rectangle Model: Closed rectangle boundary model. Four faults bound the reservoir in a rectangular shape.
Pinch Out Model: Pinch Out boundary model. The upper and lower bounding surfaces of the reservoir are sub-parallel and intersect some distance from the wellbore. Other directions are unbounded.
Numerical Boundary Model: Numerical boundary model in which any arbitrary outer shape of the reservoir boundary can be imposed by use of any number of straight line segments which together define the boundary.
Custom Boundary Model: Boundary Model allowing for the addition of custom parameters to support extension of the model library provided.
Numerical Parameters

Wellbore
Wellbore Radius (Rw): The radius of the wellbore, generally taken to represent the open hole size.
Wellbore Storage Coefficient (Cs): The wellbore storage coefficient equal to the volume which flows into the wellbore per unit change in pressure in the wellbore. NOTE that by setting this parameter to 0, the model becomes equivalent to a "No Wellbore Storage" model.
Ratio Initial To Final Wellbore Storage (Ci/Cs): In models in which the wellbore storage coefficient changes, the ratio of initial to final wellbore storage coefficients.
Leak Skin (Sl): In Spivey (a) Packer and (c) Fissure models of wellbore storage, the Leak Skin controls the pressure communication through the packer (a), or between the wellbore and the high permeability region (b - second application of model a), or between the high permeability channel/fissures and the reservoir (c). In case c, the usual Skin parameter characterizes the pressure communication between the wellbore and the high permeability channel/fissures.
Wellbore Volume (Vw): The volume of the wellbore equipment which influences the wellbore storage. It will be the sum of volumes of all components open to the reservoir up to the shut-off valve.
Wellbore Fluid Compressibility (Cw): The compressibility of the fluid in the wellbore, such that this value * wellbore volume = wellbore storage coefficient.
Tubing Internal Diameter (ID): Internal diameter of the tubing, generally used for estimations of wellbore storage when the tubing is filling up.
Fluid Density (Rho): The density of the fluid in the wellbore, generally used for estimations of wellbore storage when the tubing is filling up.
Wellbore Deviation Angle (Deviation): The angle of deviation from vertical of the wellbore, generally used for estimations of wellbore storage when the tubing is filling up.
Model Name (ModelName): The name of the model. Available only for Custom Models, to identify the name of the model.

NearWellbore
Skin Relative To Total Thickness (S): Dimensionless value, characterizing the restriction to flow (+ve value) or extra capacity for flow (-ve value) into the wellbore. This value is stated with respect to radial flow using the full layer thickness (h), i.e., the "reservoir radial flow" or "middle time region" of a pressure transient. It comprises the sum of "MechanicalSkinRelativeToTotalThickness" and "ConvergenceSkinRelativeToTotalThickness", both of which are also expressed in terms of h.
Rate Dependent Skin Factor (D): Value characterizing the rate at which an apparent skin effect, due to additional pressure drop due to turbulent flow, grows as a function of flowrate. The additional flowrate-dependent Skin is this value D * Flowrate. The total measured Skin factor would then be S + DQ, where Q is the flowrate.
Delta Pressure Total Skin (dP Skin): The pressure drop caused by the total skin factor. Equal to the difference in pressure at the wellbore between what was observed at a flowrate and what would be observed if the radial flow regime in the reservoir persisted right into the wellbore. The reference flowrate will be the stable flowrate used to analyse a drawdown, or the stable last flowrate preceding a buildup.
Mechanical Skin Relative To Total Thickness (Smech): Dimensionless value, characterizing the restriction to flow (+ve value, damage) or additional capacity for flow (-ve value, e.g., acidized) due to effective permeability around the wellbore. This value is stated with respect to radial flow using the full reservoir thickness (h), i.e., the radial flow or middle time region of a pressure transient. It therefore can be added to "ConvergenceSkinRelativeToTotalThickness" skin to yield "SkinRelativeToTotalThickness".
Orientation Of Fracture Plane (OrientationOfFracturePlane): For an induced hydraulic fracture which is assumed for PTA purposes to be planar, the azimuth of the fracture in the horizontal plane represented in the CRS.
Orientation Well Trajectory (OrientationWellTrajectory): For a slant wellbore or horizontal wellbore model, the azimuth of the wellbore in the horizontal plane, represented in the local CRS. This is intended to be a value representative of the azimuth for the purposes of PTA. It is not necessarily the azimuth which would be recorded in a survey of the wellbore trajectory.
Length Horizontal Wellbore Flowing (hw): For a horizontal wellbore model, the length of the flowing section of the wellbore.
Distance Wellbore To Bottom Boundary (Zw): For a horizontal wellbore model, the distance between the horizontal wellbore and the lower boundary of the layer.
Distance Mid Fracture Height To Bottom Boundary (Zf): For a hydraulic fracture, the distance between the mid-height level of the fracture and the lower boundary of the layer.
Number Of Fractures (Nf): For a multiple fractured horizontal wellbore model, the number of fractures which originate from the wellbore. In a "HorizontalWellboreMultipleEqualFracturedModel" these fractures are identical and equally spaced, including one fracture at each end of the length represented by "LengthHorizontalWellboreFlowing".
Fracture Height (Hf): In any vertical hydraulic fracture model (including the cases where the wellbore can be vertical or horizontal), the height of the fractures. In the case of a vertical wellbore, the fractures are assumed to extend an equal distance above and below the mid perforations depth, given by the parameter "DistanceMidPerforationsToBottomBoundary". In the case of a horizontal wellbore, the fractures are assumed to extend an equal distance above and below the wellbore.
Fracture Angle To Wellbore (FractureAngleToWellbore): For a multiple fractured horizontal wellbore model, the angle at which fractures intersect the wellbore. A value of 90 degrees indicates the fracture plane is normal to the wellbore trajectory.
Fracture Storativity Ratio (etaD): Dimensionless value characterizing the fraction of the pore volume occupied by the fractures to the total pore volume (fractures plus reservoir).

Reservoir
Horizontal Radial Permeability (K): The radial permeability of the reservoir layer in the horizontal plane.
Total Thickness (h): The total thickness of the reservoir layer.
Permeability Thickness Product (k.h): The product of the radial permeability of the reservoir layer in the horizontal plane * the total thickness of the layer.
Pressure Datum TVD (datum): The depth TVD of the datum at which reservoir pressures are reported for this layer. Note, this depth may not exist inside the layer at the Test Location but it is the reference depth to which pressures will be corrected.
Average Pressure (Pbar): The average pressure of the fluids in the reservoir layer. "Average" is taken to refer to "at the time at which the rate history used in the pressure transient analysis ends".
Ratio Layer 1 To Total Permeability Thickness Product (Kappa): In a two-layer model, the ratio of layer 1 to the total PermeabilityThickness.
Layer 2 Thickness (h layer 2): In a two-layer model, the Thickness (h) of layer 2.
Inner To Outer Zone Mobility Ratio (M): In a Radial or Linear Composite model, the mobility (permeability/viscosity) ratio of inner zone/outer zone.
Region 2 Thickness (h region 2): In a Linear Composite model where the thickness of the inner and outer zones is different, the thickness h of the outer region (2).
Orientation Of Anisotropy X Direction (OrientationOfAnisotropy_XDirection): In the case where there is horizontal anisotropy, the orientation of the x direction represented in the local CRS. Optional since many models do not account for this parameter.

Boundary
Radius Of Investigation (Ri): For any transient test, the estimated radius of investigation of the test.
Pore Volume Of Investigation (PVinv): For any transient test, the estimated pore volume of investigation of the test.
Pore Volume Measured (PVmeas): In a closed reservoir model, the Pore Volume measured. This is to be taken to mean that the analysis yielded a measurement, as opposed to the RadiusOfInvestigation or PoreVolumeOfInvestigation Parameters which are taken to mean the estimates for these parameters derived from diffuse flow theory, but not necessarily measured.
Distance To Pinch Out (Lpinch): In a model where the reservoir model is a Pinch Out, the distance from the wellbore to the pinch-out.
Enum Parameters

Wellbore
Wellbore Storage Mechanism Type (values: full well | rising level | closed chamber): Parameter used to indicate which physical mechanism type is believed responsible for the wellbore storage. Enumeration with choices: "full well" = wellbore is full of fluid and storage likely related to wellbore volume * wellbore fluid compressibility; "rising level" = wellbore is filling up and storage likely related to cross sectional area / (cos deviation angle * density); "closed chamber" = reservoir is flowing into a closed chamber as pressure builds up.

NearWellbore
Fracture Model Type (values: infinite conductivity | uniform flux | finite conductivity | compressible fracture finite conductivity): For a Horizontal Wellbore Multiple Equal Fractured Model, the model type which applies to all the fractures. Enumeration with choices of infinite conductivity, uniform flux, finite conductivity, or compressible fracture finite conductivity.

Reservoir
Upper Boundary Type (values: no-flow | constant pressure): The type of the upper boundary of the layer. Enumeration with choices of: "no-flow" or "constant pressure". Optional since many models do not account for this parameter.
Lower Boundary Type (values: no-flow | constant pressure): The type of the lower boundary of the layer. Enumeration with choices of: "no-flow" or "constant pressure". Optional since many models do not account for this parameter.

Boundary
Boundary [N] Type, N = 1 to 4 (values: no-flow | constant pressure): In any bounded reservoir model, the type of Boundary [N]. Enumeration with choice of "no-flow" or "constant pressure".
Model Sections vs Parameters Matrix
(The matrix column headings, rotated in the original table, are the well-known Parameter names listed in the Numerical Parameters and Enum Parameters tables above (Sections 17.1.2 and 17.1.3). The rows below list each model with a code per parameter, as defined in the key following the matrix.)
WellboreBaseModel 1 1 1 0 0 0 2 2 2 2 2 0 2 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 2 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
ConstantStorageModel 0 1 1 0 0 0 2 2 2 2 2 0 2 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 2 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
ChangingStorageFairModel 0 1 1 1 1 0 2 2 2 2 2 0 2 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 2 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
ChangingStorageHegemanModel 0 1 1 1 1 0 2 2 2 2 2 0 2 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 2 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
ChangingStorageSpiveyPackerMode0 1 1 1 0 1 2 2 2 2 2 0 2 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 2 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
ChangingStorageSpiveyFissuresMod0 1 1 1 0 1 2 2 2 2 2 0 2 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 2 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
CustomWellboreModel 0 1 1 0 0 0 2 2 2 2 2 1 2 3 3 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 2 0 0 0 0 0 0 0 0 1 0 0 3 3 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 3 3 0 0 0
NearWellboreBaseModel 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 2 2 2 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
FiniteRadiusModel 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 2 2 2 2 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
(Remaining rows of the model parameter matrix, not reproduced here: the fractured, fractured horizontal, partially penetrating, slanted, and horizontal wellbore near-wellbore models plus CustomNearWellboreModel; the homogeneous, dual porosity, dual permeability, composite, and numerical reservoir models plus CustomReservoirModel; and the infinite, fault, closed, pinch-out, and numerical boundary models plus CustomBoundaryModel.)
Key
Abstract model section – models inherit these parameters
1 = 1..1 parameter per model
2 = 0..1 parameter per model
3 = 0..* parameter per model
Acknowledgement
Thanks are due to the members of the PRODML SIG who joined the effort to develop the revised DTS
data object.
In particular, the efforts donated by members from the following companies must be acknowledged: Shell,
Chevron, AP Sensing, Perfomix, Schlumberger, Sristy Technologies, Tendeka, Teradata, and
Weatherford. The provision of definitions of terminology by the SEAFOM Joint Industry Forum is also
gratefully acknowledged.
18 Introduction to DTS
This section describes the data model in PRODML that covers the most common business scenarios and workflows related to DTS, as identified by the industry at large. These include:
• Describing a physical fiber installation, where we will highlight the different possibilities to
accommodate multiple deployment scenarios that have occurred in real life. The option to capture
how a physical installation has changed over time is also covered by this data schema.
• Capturing measurements from DTS instrumentation so they can be transferred from the instrument to
other locations and ultimately be stored in a repository. This includes the ‘raw’ measurements
obtained by the light box as well as ‘derived’ temperature log curves.
• Manipulation of DTS-derived temperature log curves (depth adjustments, for example) while
maintaining a record of all the changes performed. The PRODML schema supports multiple forms of
data manipulation and versioning so that most business workflows surrounding DTS can also be
represented.
For a detailed explanation of these use cases and related key concepts, see Chapter 19.
• Represent DTS Installation comprising one Optical Path and one Instrument Box. This is the “unit”
which generates measurements. Various configurations can be represented this way, such as an
Instrument Box that will be shared among multiple optical fibers in one or multiple physical locations
(a “drive-by” instrument box).
deployment. The meaning and usage of the colored boxes are explained in the next section of the document.
Each of these boxes is covered by a top-level object in the data model:
• Fiber Optical path
• DTS Instrument box
• DTS Installed system
• DTS Measurement
Figure 19-2. Example of a dual-ended optical path installation, showing different types of components.
In this example, we have one optical path that is deployed in a well in a dual-ended configuration.
To support the requirement to be able to track the changes in the path over time, the optical path is
represented using the inventory and network pattern explained in the following sections.
• Optical Path Inventory. This is a list of all the components used in the Optical Path over the whole
time being reported. (For more information, see Section 19.3.2 (below).)
• Optical Path Network. This is a representation of the connectivity of a set of components at a given
time. (For more information, see Section 19.3.3.)
Figure 19-3. Representation of the optical path components in a PRODML Product Flow Model (network).
For the UML model that represents this network, see Figure 19-4. This diagram also shows the
mandatory elements and notes as to which convention to follow.
Note that the connectedNode element for two ports on different units indicates that these two units are connected. For example, “Port 2” of “fiber segment 1” and “Port 1” of “connector 1” both show connectedNode as “Node a”. The first component can reference a specific named port on the instrument box if it is required to report this, and the same applies to the port on the terminator if “looped back to instrument box”. In this way the instrument box connections are reported.
Figure 19-4. Product Flow Model which represents a network, showing mandatory elements (red border) and
including explanatory notes (blue boxes).
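As an illustration of this connected-node convention, the short Python sketch below (illustrative only: the dictionaries are a simplified stand-in for the Product Flow Model XML, and the unit, port, and node names are taken from the example above) derives which ports, and therefore which units, are connected because they share the same connectedNode value.

from collections import defaultdict

# Simplified stand-in for the Product Flow Model network: each unit lists its ports,
# and each port declares the node it is connected to.
units = [
    {"name": "fiber segment 1",
     "ports": [{"name": "Port 1", "connectedNode": "Node z"},
               {"name": "Port 2", "connectedNode": "Node a"}]},
    {"name": "connector 1",
     "ports": [{"name": "Port 1", "connectedNode": "Node a"},
               {"name": "Port 2", "connectedNode": "Node b"}]},
]

# Group (unit, port) pairs by node; two ports that share a node are connected.
by_node = defaultdict(list)
for unit in units:
    for port in unit["ports"]:
        by_node[port["connectedNode"]].append((unit["name"], port["name"]))

for node, endpoints in by_node.items():
    if len(endpoints) == 2:
        (u1, p1), (u2, p2) = endpoints
        print(f"{node}: {u1} / {p1} <-> {u2} / {p2}")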
Facility Mappings
The purpose of the Facility Mappings element within the Fiber Optical Path object is to represent the
relationship between an optical path and the facility(s) (sometimes called “assets”) where the optical fiber
has been deployed. For locations where the fiber is used in one facility only, such as a well, the
relationship between the facility and the fiber (optical path) is naturally 1 to 1.
However, if a fiber (optical path) is long enough that it can cover multiple assets (for example, 2 wells and 1 pipeline, as shown in Figure 19-5), then you must represent the relationship between that one optical path and the 2 wells and the pipeline.
Figure 19-5. Example of an optical path that spans more than one well (asset)
When a PRODML distributed data measurement is obtained, we must be able to map the “optical path
length” of the measurement (which gives values along the measured length of the fiber) to the “facility
length” of the measurement (which gives distributed values in reference to lengths or depths within the
facility, for example in a well, as shown in Figure 19-5).
The Facility Mapping element provides that translation between the actual distance along the optical path
as measured from the instrument box and the actual distance of the measurement locations as measured
from a reference datum defined for the facility. See the example in Figure 19-6 wherein the mapping is
shown as the grey “facility lengths,” which are formed from certain “optical path lengths” along the total
length of the optical path.
With this arrangement, it is possible to receive a distributed measurement in which all the “raw” values are indexed by the absolute distance along the optical path, together with the mapping that allows measurements to be indexed against the distance along the facility.
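To make the translation concrete, the hypothetical helper below converts an optical path distance into a facility length for one mapping entry described by an optical path start distance, a facility start length, and an overstuffing factor. The function and parameter names are assumptions for this sketch only and are not schema element names.

def to_facility_length(optical_path_distance_m,
                       optical_path_start_m,       # where this mapping starts along the fiber
                       facility_start_m,           # corresponding start along the facility datum
                       overstuffing_factor=1.0):   # > 1.0 when extra fiber is installed per unit of facility length
    """Translate a distance measured along the fiber into a length along the facility."""
    fiber_offset = optical_path_distance_m - optical_path_start_m
    return facility_start_m + fiber_offset / overstuffing_factor

# Example: a sample measured 130.0 m along the fiber, for a wellbore mapping that starts
# at 35.0 m of optical path distance and 0.0 m of measured depth, with 1% overstuffing.
print(to_facility_length(130.0, 35.0, 0.0, overstuffing_factor=1.01))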
Fiber Defects
The purpose of the Defects object is to facilitate troubleshooting and data analysis after the
measurements have been collected. Figure 19-7 shows an example of a defect.
It is quite common to have areas in the optical path where the measurement obtained is different from
what was obtained in surrounding areas. This difference in the measurement is not caused by a true
change in the environment but rather by an artificial ‘artifact’ that is causing the light scatter to return
abnormally. By documenting the areas where these anomalies are known to be, anybody analyzing these measurements can promptly recognize these measurement changes as artifacts of the physical installation rather than as locations of unexpected change.
OTDR Acquisition
If an optical time domain reflectometry (OTDR) survey is carried out, then an OTDR Acquisition data
object can be created to report the results. If the OTDR is carried out for calibration purposes, then a
reference to this OTDR Acquisition data object can be added to the corresponding calibration data object
as shown in Figure 19-8.
The Fiber Optical Path top-level object can reference an OTDR Acquisition top-level element, which contains details about the OTDR survey. The OTDR data itself is contained in a file referenced by a string for the file path and name (Data In OTDR File), or in an image or plot of the data (OTDR Image File). There is no Energistics file format standard for OTDR, and hence the nature of these files is out of scope.
(UML diagram: the OTDRAcquisition top-level object, with elements Name, ReasonForRun, DTimRun, DataInOTDRFile, OTDRImageFile, OpticalPathDistanceStart, OpticalPathDistanceEnd, Direction, and Wavelength; the OTDRReason enumeration (dts, other, post-installation, pre-installation, run) and the OTDRDirection enumeration (backward, forward); and associations to FiberOpticalPath, FiberOTDRInstrumentBox, and ProdmlCommon::BusinessAssociate for the InstrumentVendor, InstallingVendor, and MeasurementContact roles.)
Conveyance
Conveyance refers to the physical means by which fiber is installed in or carried into the facility. The
categories of conveyance are listed in the following table:
Fiber in Control Line Applies when fiber is pumped into the facility inside a control line (small-
diameter tube) and reports:
• The size and type of control line.
• Pumping details for operation in which fiber is pumped into control line.
Intervention Applies to temporary installations (in a wellbore) and contains details such
as the intervention method by which fiber is conveyed into the well (on
wireline, coiled tubing, etc.).
Historical Information
Fiber Optical Path uses a common Energistics XML pattern that separates the inventory of components from the implemented network configuration, as described in Sections 19.3.2 and 19.3.3.
Use of this pattern makes it possible to keep items in the optical path description that have been decommissioned, which has the value that failure modes can be tracked.
For example, Figure 19-9 shows an optical path spanning two wells and one pipeline. At some point, it is
assumed that the pipeline segment is decommissioned for some reason. This would give rise to:
• one set of inventory (including all the segments and components)
• two networks: an earlier one that includes the pipeline components and a later one without them
This way, distributed measurements through the whole life of the installed system can be associated with
the right optical path and facilities.
Figure 19-10. A simple single optical path and instrument box installation.
However, in real life there are multiple variations on how the equipment is installed and used. The first
variation can be found when one instrument box is used to take measurements from more than one
optical path. See Figure 19-11. Here, in the Optical Path Network object, it is possible to record the
channel to which each optical path is connected at the instrument box.
Figure 19-11. One instrument box being used with two optical paths comprises two installed systems (blue
and red). In DTS each would be one installed system, both pointing to the same instrument box. In DAS, the
DAS acquisitions would point to the appropriate optical path, and both would point to the same instrument
box.
A second example is found when the same instrument box is moved around to different locations and
used to take measurements from multiple optical paths. This is sometimes known as a ‘drive-by’
instrument box. See Figure 19-12.
Figure 19-12. Drive-by instrument box: a box is taken from well to well and is connected only occasionally to
each optical path. Each such combination is an installed system.
From the hardware point of view, the instrument box does not change, and neither does the optical path,
so these two objects previously introduced are sufficient to describe these configurations. Things get
more interesting when we start dealing with the measurements obtained, which depend on the situation.
As an instrument box is moved from one optical path to another, it uses different calibration settings and
may generate different diagnostics information. When analyzing a distributed measurement, it is very
important that the data consumer knows exactly which instrument box took the measurement and to
which optical path this measurement relates. Also, it is conceivable that at some point the instrument box
is upgraded for a newer model and, therefore, the new measurements are now taken with a different
instrument box for the same optical path.
The most extreme example of the temporary use of a distributed installed system is when an instrument
box is used for a logging operation, in which case both the instrument box installation and the optical path
are temporary. See Figure 19-13.
Figure 19-13. Instrument box and portable fiber used for temporary logging operation.
Figure 19-14. Relationship between the Measured Trace Set (blue) and the Interpreted Log (red).
The actual data (index of distance/length and the different measured or derived channels) are stored in
WITSML objects called ChannelSets. For more information about ChannelSets, see Energistics Online:
http://docs.energistics.org/#WITSML/WITSML_TOPICS/WITSML-000-056-0-C-sv2000.html
Examples of WITSML ChannelSets are shown in the next chapter, as part of the worked examples.
There are two types of DTS measurement data:
• Measured Traces: the “raw” measurement, indexed along the optical path distance, and as measured
by the instrument box.
• Interpreted Logs: the “adjusted” (i.e., calibrated, re-sampled, smoothed, etc.) temperature indexed
against the facility length of the physical facility (well, pipe, etc.) being measured. This may be
considered the “product” of DTS measurement plus analysis/processing.
The mnemonics expected to be used for these two types of DTS measurement are listed below:
Measurement Trace (index mnemonic: FiberDistance, expected UoM: m)
Mnemonic Expected UoM
Antistokes mW
Stokes mW
ReverseAntiStokes mW
ReverseStokes mW
Rayleigh1 mW
Rayleigh2 mW
BrillouinFrequency GHz
Loss dB/Km
LossRatio dB/Km
CumulativeExcessLoss dB
FrequencyQualityMeasure dimensionless
MeasurementUncertainty degC
BrillouinAmplitude mW
OpticalPathTemperature (assumed to be adjusted to be the correct temperature) degC
UncalibratedTemperature1 degC
UncalibratedTemperature2 degC
Interpreted Log (index mnemonic: FacilityDistance, expected UoM: m)
Mnemonic Expected UoM
AdjustedTemperature degC
Note that the index is mandatory for these types of data, but the other channels are not. In particular, the
measurement trace set of mnemonics is suitable for either Raman- or Brillouin-type DTS systems, and
any one system will certainly not generate all the measurement traces.
BUSINESS RULE: These mnemonics must be observed but are not enforced in the schemas.
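As an illustration of how a data consumer might apply this business rule, the sketch below restates the expected mnemonics from the table above and flags any channel name that is not in the expected set (the check itself is not part of the standard):

MEASURED_TRACE_MNEMONICS = {
    "FiberDistance": "m", "Antistokes": "mW", "Stokes": "mW",
    "ReverseAntiStokes": "mW", "ReverseStokes": "mW",
    "Rayleigh1": "mW", "Rayleigh2": "mW", "BrillouinFrequency": "GHz",
    "Loss": "dB/Km", "LossRatio": "dB/Km", "CumulativeExcessLoss": "dB",
    "FrequencyQualityMeasure": "dimensionless", "MeasurementUncertainty": "degC",
    "BrillouinAmplitude": "mW", "OpticalPathTemperature": "degC",
    "UncalibratedTemperature1": "degC", "UncalibratedTemperature2": "degC",
}
INTERPRETED_LOG_MNEMONICS = {"FacilityDistance": "m", "AdjustedTemperature": "degC"}

def unexpected_mnemonics(channel_names, expected):
    """Return channel names that are not in the expected mnemonic set (schemas do not enforce this)."""
    return [name for name in channel_names if name not in expected]

# Example: a Raman-type trace set with an index, Stokes, anti-Stokes, and an uncalibrated temperature.
print(unexpected_mnemonics(
    ["FiberDistance", "Stokes", "Antistokes", "UncalibratedTemperature1"],
    MEASURED_TRACE_MNEMONICS))   # -> [] (all recognized)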
Figure 20-1. Worked example fiber optical path showing fiber segments, optical path lengths and facility
lengths
For the optical path shown in Figure 20-1, the key sections are as shown below.
First, the inventory lists out the components comprising the optical path (Figure 20-2). Note that each
component has a name and a type (and optionally additional data), and also a uid (red box) which is used
to reference this component from the network (see below).
Next, the Network section shows how these components listed in Inventory are connected. (For details on
how this is done, see Section 19.3.3, Figure 19-3.) The network contains Units which are a nominal
representation of an item of inventory ("facility" is the Energistics terminology for general hardware) within
the network. Figure 20-3 shows a network Unit (blue box), which uses a facility reference (uidRef) back to
the uid of the component in Inventory (red box).
The unit has one or more Ports which are where connection occurs. Ports can be inlet or outlet (taking
direction from the lightbox). Ports contain a Connected Node element, and where two ports share the
same name and uid of Node, this defines their connection. See Figure 20-4, in which the outlet port (i.e.,
“far end”) of the surface fiber (in blue box), can be seen to be connected to the inlet port (i.e., “near end”)
of the surface connector (in red box). Both have a connected node called Surface Connector 1 with uid
CSC1, which defines the connection between the surface cable and this wellhead connector.
Figure 20-4. Optical path network showing Node connecting Ports of two components.
In Figure 20-5, we see the full inventory and network of the worked example with the links and
connections all shown.
Figure 20-5. Worked example optical path network showing how connectivity is defined.
The facility mapping section is where the fiber optical path distances are mapped onto the facility lengths
of physical equipment. Here, two mappings are shown: 1) a one-to-one mapping to the surface cable from
the surface segment of the optical path and 2) a mapping to the wellbore, which includes the offset and
the overstuffing, as shown in Figure 20-1. Figure 20-6 shows the two mappings. Note that the
FiberFacilityWell type of mapping (the second one) includes a well datum and a data object reference to
the wellbore concerned.
Figure 20-7 shows some of the other sections of the Optical Path. Defects can be recorded (for the
example, see Figure 20-1 and, for real-world defect examples, see Figure 19-7). OTDR surveys to verify
the optical path quality can be referenced. The data for OTDR are not in an Energistics standard because
OTDR is a generic fiber optical industry measurement with its own file formats; an external file is
referenced. Other data can be added as seen, such as vendor, the facility identifier, etc.
Instrument Box
Declaring a DTS Instrument Box is also very straightforward (Figure 20-8). Required elements include a unique identifier, a name, and the type. All other attributes are optional.
Installed System
Once both the optical path and instrument box are declared, they need to be ‘tied together’ as a DTS
installed system. All measurements will be generated by an installed system, not an individual instrument
box or an optical path. The DTS Installed System is a simple object to do this, as seen in Figure 20-9. It
additionally allows for a calibration to be associated with the system of path + box as a combination.
A ‘permanent’ DTS installation (i.e., the lightbox is fixed and so is the optical path) will only contain a
<dateMin> element with no <dateMax>, to indicate that this installation is still active.
After these three entities are declared, captured measurements can be made and associated with the installed system described above.
It is strongly recommended that the installed system object also includes calibration information so that
any measurements obtained from the installed system can be compared against the calibration
parameters to help determine if any further fine tuning is required for the lightbox.
20.2 Use Case 2: Capturing DTS measurements for Transport and Storage
As stated previously, there are two types of measurements that can be obtained from a DTS installed
system: 1) the actual ‘raw’ DTS measurement along the length of the optical path and 2) the interpretation
log, which has a calculated temperature along the length of the facility only. This section contains
examples of XML for representing a DTS Measurement obtained from the installation described in the
previous use case.
In the previous use case, we assume the well is 483.25m deep and the entire optical path is 517.43m
long. Therefore, it is expected to have a measurement from distance 0 to distance 517.43m and an
interpretation log from distance 25m to distance 470m (the sample closest to the last measurement at
505m, see Figure 20-1).
The ‘raw’ measurement traces with stokes, antistokes, and an uncalibrated temperature would look as
shown in Figure 20-10. This is the example file DTS Measurement.xml. Note that there can be many
traces per DTS Measurement xml. Each has a uid (red box). Optionally, a trace can have an attribute
parentMeasurementID which contains the uid of another trace which may be the parent of this trace (in
the sense that this trace may have been depth shifted or otherwise derived from the parent).
The DTS Measurement object does not contain the actual numerical data—this is instead in a WITSML
object called Channel Set (for more details and references, see Section 19.6). An extract of the channel
set object corresponding to the DTS measured traces is shown in Figure 20-11. This shows the Index
element, which describes the index, and then the three channels listed above. The first of these is shown
expanded (antistokes). This is the example file ChannelSetContainingDtsMeasurement.xml.
The data then follows, as shown in Figure 20-12. The comma-separated values in the data section
correspond to the channels defined previously in this object.
Similarly, an interpreted log is shown in the example file DTS Measurement with Log.xml. This replaces
the measurement traces with an interpreted log but is otherwise similar. The data runs from 5m to 470m
of facility length (i.e., measured depth in this wellbore), see Figure 20-1. The Channel Set file is DTS Log
Interpretation In ChannelSet.xml. See Figure 20-13 which shows part of the DTS measurement file. Note
the two referenced Channel Sets (red and blue boxes). Note also that the InterpretationData has a mandatory
measurement reference attribute which contains the uid of the measurement trace from which this
interpretation log was derived (green boxes).
Figure 20-13. DTS measurement with measurement trace and interpretation log.
It is possible to transfer multiple interpretation logs in one DTS Measurement and to identify the preferred one. See Figure 20-14. This shows the example file DTS Interpretation Log With Versions.xml.
This contains three versions of interpretation log data with its original measurement trace. The tag which
identifies which is the preferred interpretation log (ID3) that should be used for business decisions is
shown in a green box. All three interpretation data instances refer to the same measurement trace (red
boxes). The example also shows how interpretation logs can show that they are derived from a parent
interpretation log (blue box). The interpretation processing type (purple box) indicates what was
processed to generate this instance. Here, ID1 was temperature shifted, ID2 took ID1 and averaged it, and ID3 took ID2 and denormalized it. Note: in the example, all three interpretation logs point to the same
Channel Set data.
Figure 20-14. Multiple DTS interpretation logs in one transfer, showing preferred log, parentage, and
processing type.
21.1 Principles
When light is transmitted along an optic fiber, transmitted light collides with atoms in the lattice structure
of the fiber and causes them to emit bursts of light at frequencies slightly different from the transmitted
radiation, which propagate back along the fiber and can be detected at its end. This is known as
backscattering; this is shown in Figure 21-1.
Optic fiber thermometry depends upon the phenomenon that the frequency shifts occur to bands both
less than and greater than the transmitted frequency. Furthermore, in the case of the Raman shifted
bands the intensity of the lower frequency band is only weakly dependent on temperature, while the
intensity of the higher frequency band is strongly dependent upon temperature (Anti-stokes and Stokes
bands). See Figure 21-2. Less commonly, the Brillouin bands are used.
Figure 21-2. Backscatter spectrum, showing Rayleigh, Brillouin, and Raman bands.
The ratio of these intensities is related to the temperature of the optic fiber at the site where the
backscattering occurs: i.e., all along the fiber. These intensities are averaged at regular Sampling
Intervals. See Figure 21-3. The intensities of backscatter of individual pulses are very weak,
necessitating statistical stacking of thousands of backscattered light pulses.
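For reference, the relation commonly used in Raman DTS thermometry between this intensity ratio and temperature (a standard textbook form quoted here for orientation, not reproduced from this guide) can be written in LaTeX notation as:

\frac{I_{\mathrm{anti\text{-}Stokes}}}{I_{\mathrm{Stokes}}}
  = \left(\frac{\lambda_{\mathrm{Stokes}}}{\lambda_{\mathrm{anti\text{-}Stokes}}}\right)^{4}
    \exp\!\left(-\frac{h\,c\,\Delta\nu}{k_{B}\,T}\right)

where \Delta\nu is the Raman shift of the fiber, h is Planck's constant, c is the speed of light, k_B is Boltzmann's constant, and T is the absolute temperature at the scattering site. The differential attenuation correction mentioned in Section 21.4 is applied in addition to this ratio.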
Figure 21-3. Illustration of temperature data which is averaged over fixed Sample Intervals.
contamination. For these reasons, calibrations of the equipment are routinely necessary to monitor the
transmission characteristics.
21.4 Calibrations
Two kinds of calibration are applied to DTS data. These are:
• Factors pertaining to the Instrument Box, such as the oven reference temperature and the number of launches being stacked to get a temperature value at the required resolution.
• Factors related to the fiber, including the (practically linear) differential attenuation along the fiber, and
the refractive index of the fiber.
Where the fiber may be installed in a well bore for several years, it is necessary to recalibrate these
factors periodically in order to be able to compare reservoir temperatures over these periods.
The Optical Time Domain Reflectometry (OTDR) technique provides a routine means of examining the
condition of the fiber. OTDR consists of observing the attenuation of backscattered light in the frequency
band of the transmitted Rayleigh waves. An OTDR profile is illustrated in Figure 21-5.
21.5 Measurements
The measurements produced by the Instrument Box for a temperature profile typically comprise a set of
records containing contextual data followed by the temperature records. The contextual data identify the
well, the equipment, the date and time, and various calibration data. The temperature records contain the length along the fiber of the temperature sample, the computed temperature, measurements of the Stokes and anti-Stokes intensities, and other measured curves as appropriate to the Instrument Box concerned.
Acknowledgement
Special thanks to the following companies and the individual contributors from those companies for their
work in designing, developing and delivering the DAS data object: Shell, Fotech, OptaSense, BP, Baker
Hughes, Chevron, Pinnacle/Halliburton, Schlumberger, Silixa, Weatherford, Ziebel.
22 Introduction to DAS
This chapter serves as an overview and executive summary of both distributed acoustic sensing (DAS)
and the DAS data objects.
Figure 22-1. DAS Overview: Oil and gas assets are outfitted with optical fibers, which are vibrated by an
acoustic source. DAS interrogator systems measure tiny variations in the back-scattered light, essentially
turning the fiber itself into a sensitive distributed sensor that can measure acoustic, temperature, and strain
variations over distances of many kilometers. DAS data acquisition raw and processed (frequency band and
spectrum) data are used for input to many production surveillance software applications.
For a quick overview or to be able to make a presentation to colleagues, see the slide set: Worked
Example DAS.pptx which is provided in the folder: energyml\data\prodml\v2.x\doc in the Energistics
downloaded files.
Challenges of DAS
The challenge for DAS is how to manage the huge volumes of data produced, which is typically terabytes
for raw data and hundreds of gigabytes for frequency filtered data. Because of the large data volumes
and lack of standards, industry workers spend lots of time manually reformatting data, which is costly and
error prone. These challenges hamper uptake of the commercial DAS applications and development of
new DAS applications.
For these reasons, a group of operators and service providers proposed to Energistics to define a shared
industry exchange format for DAS data types. Previous work at Energistics has been done for distributed
temperature sensing (DTS). Because of the similarities, the DAS design work is based on the DTS
design.
DAS Workflow
Figure 22-2 shows a very high-level DAS workflow. Assets such as wells and pipelines are fitted with
fiber optic cables. DAS interrogator systems are connected to these fibers and collect huge volumes of
raw acoustic data. The raw data is collected in the field; often, some pre-processing takes place to extract relevant data and reduce the data volume. The raw and/or extracted datasets are then transferred to the
office where they are further processed and stored. Oil and gas software uses both the raw and
processed data as inputs for applications such as surveillance.
For a more detailed workflow, see Section 23.2.
Figure 22-2. High-level DAS workflow, from the field to the office.
Terminology
Because DAS is an emerging technology with new and specialized terminology, definitions are provided
in this document.
SEAFOM (an international joint industry forum aimed at promoting the growth of fiber optic monitoring
system installations in the upstream oil and gas industry) and the Energistics DAS work group have
collaborated to define key terms for DAS. A few key terms are defined here for convenience. For a
detailed list of definitions, see Section 27.1.
DAS Job: A set of one or more DAS acquisitions acquired in a defined timeframe using a common optical path and DAS instrument box.
DAS Acquisition: Collection of DAS data acquired during the DAS survey.
DAS Instrument Box: The DAS measurement instrument. Sometimes called the “interrogator unit”.
Optical Path: A series of fibers, connectors, etc. that together form the path for the light pulse emitted from the interrogator unit.
Interrogation Rate (or Pulse Rate): The rate at which the DAS instrument box interrogates the fiber sensor. For most instruments, this is informally known as the pulse rate.
Output Data Rate: The rate at which the measurement system provides output data for all ‘loci’ (spatial samples). This is typically equal to the interrogation rate/pulse rate or an integer fraction thereof.
Trace (or Scan): Array of sensor values for all loci interrogated along the fiber for a single “pulse.”
Locus (plural Loci): A particular location that indicates a spatial sample point along the sensing fiber at which a “time series” of acoustic measurements is made.
Measurement Start Time: The time at the beginning of a data “sample” in a “time series.” This is typically a GPS-locked time measurement.
Sample: A single measurement, i.e., the acoustic signal value at a single locus for a single trace.
Spatial Resolution: The ability of the measurement system to discriminate signals that are spatially separated. It should not be confused with “spatial sampling interval.”
Spatial Sampling Interval: The separation between two consecutive “spatial sample” points on the fiber at which the signal is measured. It should not be confused with “spatial resolution.”
DAS Raw Data: DAS data exchange format that describes unprocessed DAS data provided by the vendor to a customer.
DAS FBE Data: A category of “processed” data. DAS data exchange format for frequency band extracted (FBE) DAS data provided by the vendor to a customer. This DAS data type describes a dataset that contains one or more frequency band filtered time series of a “raw” DAS dataset, for example, the RMS of a band-pass filtered time series or the level of a frequency spectrum.
DAS Spectrum Data: A category of “processed” data. DAS data exchange format for frequency spectrum information extracted from a (sub)set of “raw” DAS data.
For a simple but effective animation that better shows this concept, see the PowerPoint slides included in
the DAS Example folder (slide 9) of the PRODML download. To see the animation, run the presentation in Slide Show mode or see Energistics Online:
http://docs.energistics.org/#PRODML/PRODML_TOPICS/PRO-DAS-000-014-0-C-sv2000.html
In coherent OTDR, a launch pulse travels along the fiber, scattering from tiny imperfections inherent to the fiber. The scattered light is detected and used to create a signal for each location along the fiber, known
as a scatter pattern. Because the intensity of this signal at each location is determined by the coherent
addition of millions of waves scattered from millions of independent scatter sites, the intensity varies
randomly as a function of distance. However, upon successive measurements, this pattern nominally
remains constant.
In Figure 23-1, the pair of plots on the left is a “time zero” when the pulse (blue) is sent down the fiber; the
pair of images on the right is some time later as the pulse nears the end of the fiber, travelling from left to
right, whilst the back scattered light from each point along the fiber is making its way back up the fiber
towards the detector, from right to left (red).
For a simple but effective animation that better shows this concept, see the PowerPoint slides included in
the DAS Example folder (slide 11) of the PRODML download. To see the animation, run the presentation in Slide Show mode or see Energistics Online:
http://docs.energistics.org/#PRODML/PRODML_TOPICS/PRO-DAS-000-015-0-C-sv2000.html
Figure 23-3. DAS installed system loci and raw data time series.
The DAS instrument box (or interrogation unit or IU) is connected to a sensing fiber. The sensing fiber
consists of a surface part and a downhole part together presenting the full fiber optical path. The DAS IU
sends light pulses in the fiber at a pre-configured rate (interrogation or pulse rate), and samples the
backscattered light creating an ensemble of N loci samples along the fiber. Such an ensemble is often
referred to as a “trace” or “scan,” and is shown as columns of dots in the figure. The time at which a trace
is collected is indicated by the measurement start time. A DAS IU interrogating the full fiber outputs a
maximum of N x pulse rate “raw” DAS samples per second. Each locus sample is shown as a dot in
Figure 23-3.
In many cases the DAS IU is configured such that it decimates the output by an integer factor M and only
outputs every Mth trace. The rate at which scans are output by the interrogator is called the “output data
rate.” The red dots in Figure 23-3 show such a decimation example (decimation factor 4).
Further, the DAS IUs often collect only measurements for a subset of the loci along the fiber. In Figure
23-3, the shaded grey box indicates such a scenario where only loci 8 to 24 are collected. In the figure,
“start locus index” and “number of loci” describe which subset is collected. To ensure a unique numbering
system, the first locus at the interrogator has by definition index ‘0’.
Measurement samples output for a single locus can form a time series of samples, represented by rows
of dots in the figure.
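A minimal sketch of this sample bookkeeping, using illustrative values (the variable names echo the terms defined above, not schema elements):

pulse_rate_hz = 10_000            # interrogation (pulse) rate
decimation_factor_m = 4           # only every Mth trace is output by the interrogator
start_locus_index = 8             # first collected locus (the grey box in Figure 23-3)
number_of_loci = 17               # loci 8..24 inclusive

output_data_rate_hz = pulse_rate_hz / decimation_factor_m
raw_samples_per_second = number_of_loci * output_data_rate_hz
collected_loci = range(start_locus_index, start_locus_index + number_of_loci)

print(f"output data rate: {output_data_rate_hz:.0f} traces/s")
print(f"raw samples per second: {raw_samples_per_second:.0f}")
print(f"collected loci: {collected_loci.start}..{collected_loci.stop - 1}")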
Figure 23-4 introduces a number of additional concepts when processing raw data into frequency bands
and spectra using Fourier transforms. An example of the raw data is shown in box ‘A’ containing a
representation of a sine wave exciting a single channel (locus) along the fiber (left plot) and the
corresponding waterfall DAS raw data plot (right plot). When we now apply filters to this raw data, one
filter which includes and one that excludes the sine wave’s frequency, then we obtain two processed
filtered images shown in box ‘B’ referred two as processed DAS Frequency Band Extracted (FBE) data.
We can also calculate the full signal frequency spectrum for each DAS channel using a Fast Fourier
Transform shown in box ‘C which is referred to as Spectrum data.
Figure 23-4. DAS raw and processed data types. (A) DAS raw data: a sine wave exciting a single locus along
the fiber (left) and the resulting waterfall plot (right). (B) DAS processed FBE data: the top plot shows a
filtered version of the raw data containing the sine wave frequency, whereas the bottom plot shows a filtered
version without the sine wave frequency. (C) DAS processed spectrum data: frequency spectrum plotted as a function of time.
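To make the distinction between boxes A, B, and C concrete, the NumPy-only sketch below (all parameter values are illustrative) generates a 50 Hz tone on a single locus, computes an FBE-style band RMS for one band that contains the tone and one that does not, and computes the full spectrum with an FFT:

import numpy as np

fs = 1000.0                                   # output data rate (Hz), illustrative
t = np.arange(0, 1.0, 1.0 / fs)               # 1 s of data for one locus
raw = np.sin(2 * np.pi * 50.0 * t)            # "raw" DAS time series (box A)

def band_rms(x, fs, f_lo, f_hi):
    """RMS of the signal within a frequency band, via FFT masking (FBE-style, box B)."""
    spectrum = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    mask = (freqs >= f_lo) & (freqs <= f_hi)
    filtered = np.fft.irfft(spectrum * mask, n=len(x))
    return np.sqrt(np.mean(filtered ** 2))

print("band 40-60 Hz (contains the tone):", band_rms(raw, fs, 40.0, 60.0))
print("band 100-200 Hz (excludes the tone):", band_rms(raw, fs, 100.0, 200.0))

# Full frequency spectrum for the locus (box C).
amplitudes = np.abs(np.fft.rfft(raw))
freqs = np.fft.rfftfreq(len(raw), d=1.0 / fs)
print("dominant frequency:", freqs[np.argmax(amplitudes)], "Hz")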
Figure 23-5. High-level DAS workflow, from the field to the office.
Data Model
Figure 24-1 is a high-level diagram of the set of DAS data objects, which includes definitions of the
optical path and the DAS instrument box, and an acquisition data set. Each of these components is
explained in more detail in the sections below.
Figure 24-1. The basic DAS installed system is composed of an optical path and instrument box which
produces the DAS acquisition data.
Technology
Like all Energistics data objects, developers implement the DAS data object so that software can read
and write the common format; they do this using the key technologies in the Energistics Common
Technical Architecture (CTA). For more information, see the CTA Overview Guide and other CTA
resources, which are included in the PRODML download.
The main technologies are briefly described here. For important information on DAS usage of these
technologies—particularly HDF5 and EPC—see Section 24.1.2.7.
24.1.2.2 XML
Each data object in an Energistics standard is defined by an XSD file and is stored as an XML file. XML is
used because of its portability and because it's both machine-readable and human-readable. Common
design patterns leverage the CTA, which promotes consistency and integration across all Energistics
standards.
For more information, see the CTA Overview Guide.
24.1.2.5 HDF5
XML is not very efficient at handling large volumes of numerical or array data, so for this purpose
Energistics uses the Hierarchical Data Format, version 5 (HDF5), which is a data model, a set of open file
formats, and libraries designed to store and organize large amounts of numerical/array data for improved
speed and efficiency of data processing.
DAS uses HDF5 to store both raw and processed measurement data. HDF5 is also part of the CTA, and
its general use in Energistics is described in the CTA Overview Guide.
For more information about how to use HDF5 with DAS, see Section 24.5.
NOTE: The HDF5 version policy is to use the “backward version compatibility flag” set to HDF5 V1.8.x
when using HDF5 v1.10.x or above. This is to ensure compatibility with DAS HDF5 files written using the
older (v1.8.x) versions of HDF5 libraries.
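A minimal sketch of applying this version policy with the h5py library (assuming h5py 2.9 or later, which exposes the HDF5 library version bounds; the file name and attribute are illustrative only):

import h5py

# Pin the upper library version bound to HDF5 v1.8.x so that files written with
# newer HDF5 libraries remain readable by tools built against v1.8.x.
with h5py.File("example_das.h5", "w", libver=("earliest", "v108")) as f:
    acquisition = f.create_group("Acquisition")
    acquisition.attrs["Description"] = "illustrative attribute only"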
24.1.2.6 EPC
The Energistics Packaging Conventions (EPC) is a set of conventions that allows multiple files to be
grouped together as a single package (or file), which makes it easier to exchange the many files that may
make up a DAS model. It is an implementation of the Open Packaging Conventions (OPC), a commonly
used container file technology standard supported by two international standards organizations.
Essentially, an EPC file is a .zip file. You may open it and look at its content using any .zip tool. EPC is
also part of the CTA, and how it works is described in the EPC Specification.
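Because an EPC file is simply a .zip package, its parts can be listed with any zip tool; for example, using Python's standard library (the file name is hypothetical):

import zipfile

# List the parts of an EPC package: XML data objects, _rels/*.rels relationship
# files, and the OPC [Content_Types].xml part.
with zipfile.ZipFile("DasAcquisitionExample.epc") as epc:
    for name in epc.namelist():
        print(name)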
For DAS, the EPC stores all the data objects described in Section 24.1.1, which includes the metadata for
the raw and/or processed datasets (if any). Each EPC file must refer to only one DAS Acquisition.
However, one DAS Acquisition may have multiple EPC files referring to it, depending on the business use
case.
For a typical DAS use case, an EPC file will be accompanied by a set of Hierarchical Data Format,
version 5 (HDF5) files for storing the raw and processed data and time arrays. These HDF5 files are
stored external to the EPC file. Figure 24-2 shows a conceptual model of how EPC works for DAS. For
more information on how HDF5 files are formatted, see Section 24.5. For more information on how the
HDF5 files are referenced from an EPC file, see Sections 25.2 and 25.3.
Figure 24-2. Diagram showing how EPC provides the technology to group together related files and
exchange them as a single package. (Container ship photo from Wikipedia:
http://fr.wikipedia.org/wiki/Fichier:CMA_CGM_Marco_Polo_arriving_Port_of_Hamburg_-_16._01._2014.jpg. Licence: Creative
Commons paternité – partage à l’identique 3.0 (non transposée)
• Metadata. All of the metadata about the DAS Acquisition—but not the arrays of data—are stored in
XML files. The metadata also is stored in the HDF5 files, along with the arrays of numbers. The
advantages of having this data in the XML include:
− XML has built-in schema validation, and all Energistics XML standards work on the basis that the
standard is represented by XSD schemas. HDF5 does not include schema-based validation. This
means that the DAS XML files can be checked for validity by simple schema validation built in to
most XML tools and editors.
- The XML files are small and can be opened in any editor, including a text editor, which may make
it much easier to figure out what the content of the set of acquisition files is, compared to opening
all the potentially huge HDF5 array files. All the metadata can sit in one small XML file, while the
actual measurement values are in many large HDF5 files.
The HDF5 files also contain the metadata. The metadata is duplicated from the XML files because it
is possible that the HDF5 files— which can easily fill several hard drives with raw data—may become
physically separated from the small XML file during transit. Having the metadata in each HDF5 file
enables files to be assembled back together as a coherent set.
For more information about HDF5 configuration options for DAS, see Section 24.6.3.
• Relationships between data objects, files, and the EPC file. The XML content is also used to
provide the links between the metadata and the data arrays, through Energistics data object
references, which are implemented in the EPC file.
• EPC. The EPC Specification states that HDF5 files may be stored either inside or outside of the EPC
file; however, DAS requires that HDF5 files be stored outside the EPC file.
24.1.2.8 Important Implementation Guidance: Implement BOTH EPC and HDF5 and begin with
EPC
PRODML DAS has been designed so that the EPC file specifies all relationships among the various data
objects/files and related HDF5 files that comprise a DAS data set. As a convenience and to prevent
potential problems associated with huge data sets that are so big they may span multiple physical disks,
the design includes some redundancy of metadata among the XML and HDF5 files (as described above).
This redundancy in metadata has led some people to believe they can implement PRODML DAS using
HDF5 only. This approach is NOT RECOMMENDED for a couple of reasons. First and foremost, this
HDF5-only approach is NOT a correct implementation of the PRODML DAS standard, which requires an
EPC file. If two organizations are exchanging DAS data using PRODML DAS and one has implemented
EPC and the other has not, a data transfer would fail. Second, the relationships are specified in the EPC
file. If you begin with HDF5, you may have some initial success figuring out the relationships and writing
and reading files, but a solution won't scale over the many permutations and complexity of relationships
inherent in DAS data sets.
For these reasons, the recommendation is that when implementing PRODML DAS, you begin with the EPC-related capabilities, then HDF5.
class DasInstrumentBox
AbstractObject
«XSDcomplexType,XSDtopLevelElement»
DasInstrumentBox
«XSDelement»
+ SerialNumber: String64 [0..1]
+ Parameter: IndexedObject [0..*]
+ FacilityIdentifier: FacilityIdentifier [0..1]
+ Instrument: Instrument
+ FirmwareVersion: String64
+ PatchCord: DtsPatchCord [0..1]
+ InstrumentBoxDescription: String2000 [0..1]
Figure 24-3. DAS Instrument Box model. Note that the Instrument element contains further generic details
about the hardware.
Figure 24-5 shows the DAS data arrays that are used to store the DAS raw, DAS FBE, DAS spectra
samples and their corresponding sample times. Because of the volume of data, these arrays are only
stored in the HDF5 files. For more information about DAS data arrays, see Section 24.8.
Detailed definitions of individual attributes can be found in Chapter 27.
AbstractObject
«XSDcomplexType,XSDtopLevelElement,Group»
DasAcquisition
«XSDelement»
+ AcquisitionId: UuidString
+ AcquisitionDescription: String2000 [0..1]
+ OpticalPath: FiberOpticalPath
+ DasInstrumentBox: DasInstrumentBox
+ FacilityId: String64 [1..*]
+ ServiceCompanyName: String64
+ ServiceCompanyDetails: BusinessAssociate [0..1]
+ PulseRate: FrequencyMeasure [0..1]
+ PulseWidth: TimeMeasure [0..1]
+ GaugeLength: LengthMeasure [0..1]
+ SpatialSamplingInterval: LengthMeasure
+ MinimumFrequency: FrequencyMeasure
+ MaximumFrequency: FrequencyMeasure
+ NumberOfLoci: NonNegativeLong
+ StartLocusIndex: long
+ MeasurementStartTime: TimeStamp
+ TriggeredMeasurement: boolean
«enumeration» FacilityKind (TypeEnum): generic, pipeline, wellbore
Associations: +Raw: DasRaw [0..*], +Custom: DasCustom [0..1], +Processed: DasProcessed [0..1], +FacilityCalibration: FacilityCalibration [0..*]
Calibration column kinds: FacilityLength, LocusIndex, OpticalPathDistance
Figure 24-4. UML diagram of XML content of DAS Acquisition file (described in Section 24.7 below).
class DasArray
«enumeration» DasDimensions: frequency, locus, time
ObjectReference::ExternalDatasetPart (AbstractObject), «XSDelement»: + Count: PositiveLong, + PathInExternalFile: String2000, + StartIndex: NonNegativeLong
DasExternalDatasetPart (extends ExternalDatasetPart), «XSDelement»: + PartStartTime: TimeStamp [0..1], + PartEndTime: TimeStamp [0..1]
BaseTypes::IntegerExternalArray (AbstractIntegerArray), «XSDelement»: + NullValue: long, + Values: ExternalDataset; association +TimeArray 1
Figure 24-5. UML diagram of HDF5 DAS arrays for raw samples, FBE samples, spectra samples and
corresponding times (described in Section 24.8 below)
Figure 24-6. DAS data can consist of both raw and processed (spectra and FBE) arrays. This figure shows
conceptually how different data arrays are related.
In addition to the arrays of values, attributes of the groups and arrays are used to provide the necessary ancillary data and metadata. The HDF5 file can also contain a Custom group; Custom contains custom attributes for the whole acquisition or any data group (e.g., raw, FBE or spectra).
If full mapping(s) from loci to optical path and facility lengths are available, they are also stored in an HDF5 file, in the same way as the arrays of DAS data.
At its root level, the HDF5 file must contain one attribute, “uuid”, which is the unique reference linked to the file proxy reference in the EPC.
Figure 24-7. DAS example HDF5 file structure. Use of DAS requires data stored in HDF5 to contain the
structure and naming conventions shown in the figure.
Figure 24-8. Possible HDF5 (.h5) file configurations for use with DAS (1 of 2).
Figure 24-9. Possible HDF5 (.h5) file configurations for use with DAS (2 of 2).
The following instructions apply when writing/reading HDF groups and datasets (a minimal sketch follows this list):
• Create HDF groups only in order to create a dataset(s) as a leaf node, unless the group is a Custom group.
− For example, to write RawData, you need to create the Acquisition Group and the Raw Group, which then require attributes to be written for these groups. For more information, see the next sections on HDF attributes.
• The tree structure of the HDF5 groups mimics the tree structure in the XML. For example, Processed
is a child group of Acquisition, and Fbe[0] is a child group of Processed.
• LocusDepthPoint is a look-up table for mapping every locus point to the optical path and facility
lengths. It is written to the HDF5 file as an array with a compound datatype such that each item
corresponds to a single locus point. The members of the compound datatype are: LocusIndex,
OpticalPathDistance and FacilityLengthDistance. The order of these members must correspond to
the Column sub-element under LocusDepthPoint. You should not rely on a fixed order of the columns but should instead use the datatype member name to retrieve a specific column. Following the first rule above, if LocusDepthPoint is absent (e.g., it cannot be computed from the given calibration input points), the parent Calibration HDF group must be absent too, because it would not have any datasets as leaf nodes.
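A minimal h5py sketch of these rules follows. The Acquisition/Raw[0]/RawData path matches the structure described above; the calibration group path, array shapes, and values are assumptions for illustration only.

import numpy as np
import h5py

with h5py.File("part1.h5", "w") as f:                    # file name illustrative
    acquisition = f.create_group("Acquisition")          # groups are created only to hold leaf datasets
    raw = acquisition.create_group("Raw[0]")
    raw.create_dataset("RawData", data=np.zeros((100, 16), dtype=np.float32))  # traces x loci

    # LocusDepthPoint: compound datatype with one item per locus point; members are
    # retrieved by name, not by a fixed column order.
    locus_dtype = np.dtype([("LocusIndex", np.int64),
                            ("OpticalPathDistance", np.float64),
                            ("FacilityLengthDistance", np.float64)])
    points = np.zeros(16, dtype=locus_dtype)
    points["LocusIndex"] = np.arange(16)
    points["OpticalPathDistance"] = 35.0 + 1.02 * np.arange(16)   # illustrative values
    points["FacilityLengthDistance"] = 1.0 * np.arange(16)
    calibration = acquisition.create_group("FacilityCalibration[0]").create_group("Calibration[0]")
    calibration.create_dataset("LocusDepthPoint", data=points)

# Reading back: retrieve a column by its datatype member name.
with h5py.File("part1.h5", "r") as f:
    table = f["Acquisition/FacilityCalibration[0]/Calibration[0]/LocusDepthPoint"]
    print(table["FacilityLengthDistance"][:3])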
Attributes in HDF5
• Each HDF group and dataset has a corresponding XML object. Each of these XML objects may
contain sub-elements and attributes that are either defined locally or inherited from more abstract
types. All of these sub-elements and attributes of the XML object should be copied to the HDF group
if they meet ONE of the following criteria:
− XML element or attribute is derived from XSD primitive type (e.g. “xs:long”, “xs:boolean”). The
name of the element/attribute is used for the HDF attribute. For the corresponding HDF data
types, see the next section.
- XML element or attribute is a derived type of AbstractMeasure. The name of the element/attribute
is used for the HDF attribute. In addition, the “uom” attribute of the AbstractMeasure is copied,
with a name formatted as “ElementName.uom”.
• For each HDF Dataset, if there is a DasExternalDatasetPart referring to it, the sub-elements and
attributes of the DasExternalDatasetPart object must also be copied to the HDF Dataset as attributes.
The criteria specified in the previous bullet point also apply. However, the element
“PathInExternalFile” should be skipped because it duplicates the HDF path for the dataset.
• Where the above criteria are met (a minimal sketch follows this list):
− If an attribute is mandatory in the XML schema, it is also mandatory in the HDF and it must have a value. All mandatory attributes must have a valid entry (which means either a valid attribute value or a defined NULL value). Non-mandatory attributes (marked in the schema with [0..1]) or groups (marked with 0..*) should be omitted from the XML or HDF files if they are not used (i.e., empty). When the attribute’s maximum number of occurrences is 1, the HDF attribute should be a scalar.
− When the attribute’s maximum number of occurrences is larger than 1 (e.g., unbounded), the HDF attribute should be an array, even if there is only one item. For example, “Dimensions” is presented as an array of strings in the worked example.
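A minimal h5py sketch of these attribute rules (NumberOfLoci, TriggeredMeasurement, GaugeLength, and Dimensions are element names from the acquisition model; the values, and the use of h5py, are illustrative only):

import numpy as np
import h5py

with h5py.File("part1.h5", "a") as f:                     # file name illustrative
    acq = f.require_group("Acquisition")

    # Primitive-typed XML elements with a maximum occurrence of 1 become scalar HDF attributes.
    acq.attrs["NumberOfLoci"] = np.int64(16)
    acq.attrs["TriggeredMeasurement"] = False

    # A measure type is copied together with its unit, named "ElementName.uom".
    acq.attrs["GaugeLength"] = np.float64(10.0)
    acq.attrs["GaugeLength.uom"] = "m"

    # Elements that may occur more than once are written as arrays, even with one item.
    raw = acq.require_group("Raw[0]")
    raw.attrs["Dimensions"] = ["time", "locus"]            # stored as an array of strings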
The optionality and requirements for HDF groups described above mean that, in practice, it is allowable not to include groups such as Raw, Processed, Spectra, and FBE. However, attributes that are marked as mandatory, whether in groups like Das Acquisition or in optional groups that are used (such as NumberOfLoci, RawDataTime, etc.), must always be properly populated if the associated group is used.
HDF Compression
In general, HDF compression is not required. However, if needed, GZIP is recommended because it is
widely available. Compression often requires decisions on how the data arrays should be chunked;
therefore, its use requires agreement between the providers and the users.
In v2.1, Vendor Code was changed to the type Business Associate, which is a common type across the MLs containing vendor details (such as name, contact details, etc.). Because it is now a complex type and no longer a string, it was removed from the HDF file under the rules for mapping from XML to HDF.
It turns out that some users had been using this element for the software version used to create the DAS data. In 2.1 (and later) this field no longer appears in the HDF file.
In every XML file, the Citation structure appears, and this has an element Format. This is documented as: “Software or service that was used to originate the object and the file format created. Must be human and machine readable and unambiguously identify the software by including the company name, software name and software version. This is the equivalent in ISO 19115 to the distributionFormat.MD_Format. The ISO format for this is [vendor:applicationName]/fileExtension where the application name includes the version number of the application.”
This Citation:Format string therefore contains the required content and in the XML DAS Acquisition file,
should be used for the purpose of identifying the software vendor and software name and version.
In the HDF file, the convention will be that the same string can be inserted in the Custom group. The
variable would be called Format and the content would be per the Energistics Citation:Format element.
Facility Calibration
The Facility Calibration object (FacilityCalibration) contains one or many Calibrations for one facility in the
DAS acquisition. Each Calibration is a mapping of loci-to-fiber length and facility distance along the optical
path. The actual calibration points are provided in an array of Locus Depth Point. See Section 24.6.4.
DAS Custom
The DAS Custom object (DasCustom) contains service–provider-specific customization parameters.
Service providers can define the contents of this data element as required. This data object has
intentionally not been described in detail to allow for flexibility. DasCustom can be inserted as a separate
data group under DasAcquisition, but also under the DasRaw, Das Fbe and Das Spectra. Note that these
objects are optional; they should not be required for reading and interpreting the data arrays. If used, the
service provider needs to provide a description of the data elements to the customer.
DAS Raw
The DAS Raw object (DasRaw) contains the attributes of raw data acquired by the DAS measurement
instrument. This includes the raw data unit, the location of the raw data acquired along the fiber optical
path, and information about times and (optional) triggers. Note that the actual raw data samples, times
and trigger times arrays are not present in the XML files but only in the HDF5 files because of their size.
The XML files only contain references to locate the corresponding HDF files, which contain the actual raw
samples, times, and (optional) trigger times.
For more information, see Sections 24.6.4 and 24.8.
DAS Processed
The DAS Processed object (DasProcessed) contains data objects for processed data types and has no
data attributes. Currently only two processed data types have been defined: the frequency band extracted
(FBE) and spectra. In the future other processed data types may be added.
Note that a DasProcessed object is optional and only present if DAS FBE or DAS spectra data is
exchanged.
Multiple FBE and spectra groups may be used with a DasProcessed object. Multiple groups of FBE bands
could be exchanged, for example, one group with linearly distributed frequency bands (e.g. two bands:
0.01-10.00 Hz and 10.01-20.00 Hz) and another group with logarithmically distributed frequency bands
(e.g. three bands: 0.1-1.0 Hz, 1.0-10.0 Hz and 10.0-100.0 Hz). In the HDF5 file these groups should then
be named Fbe[1] and Fbe[2], with the number added in square brackets [n]. If there is only one
FBE or Spectra group then the brackets and number can be omitted. In the XML file this is not necessary.
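The naming convention can be illustrated with a minimal Python (h5py) sketch; the parent group path used here is an assumption made for illustration only, and the exact layout is defined in Section 24.6.4.

import h5py

with h5py.File("part1.h5", "a") as f:
    processed = f.require_group("/Acquisition/Processed")   # assumed parent group for processed data
    # Two FBE groups are present, so the [n] suffix is required on each group name.
    fbe_linear = processed.require_group("Fbe[1]")           # linearly distributed bands
    fbe_log = processed.require_group("Fbe[2]")              # logarithmically distributed bands
    # With only one FBE (or Spectra) group, a plain "Fbe" (or "Spectra") name would suffice.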
DAS FBE
The DAS FBE object (DasFbe) contains the attributes of FBE processed data. This includes the FBE data
unit, location of the FBE data along the fiber optical path, information about times, and (optional) filter-related
parameters. Note that the actual FBE data samples and times arrays are not present in the XML files but
only in the HDF5 files because of their size. The XML files only contain references to locate the
corresponding HDF files containing the actual FBE samples and times.
DAS Spectra
The DAS Spectra object (DasSpectra) contains the attributes of spectra processed data. This includes the
spectra data unit, location of the spectra data along the fiber optical path, information about times,
(optional) filter-related parameters, and the UUIDs of the original raw data from which the spectra file was
processed and/or the UUIDs of the FBE files that were processed from the spectra files. Note that the
actual spectrum data samples and times arrays are not present in the XML files but only in the HDF5 files
because of their size. The XML files only contain references to locate the corresponding HDF files
containing the actual spectrum samples and times.
For more information on the arrays, see Section 24.8.
DAS Raw
Two-dimensional array containing raw data samples acquired by the DAS acquisition system.
24.8.5.1 Example
A DAS acquisition records 10 minutes of raw data starting at 1 May 2016 at noon. Because of the large
number of loci (5000) and the high OutputDataRate (1 kHz), it is decided to split the data set into two
5-minute recordings and store the raw data in two HDF5 files called part1.h5 and part2.h5.
The Das External Dataset Parts for these files in the XML and H5 will then contain:
Part1.h5
ExternalDataset
Count = NumberOfLoci x OutputDataRate x duration = 5000 x 1000 x (5 x 60) = 1 500 000 000
PathInExternalFile = /Acquisition/Raw[0]/RawData
StartIndex = 0
DasExternalDatasetPart:
PartStartTime = 2016-05-01T12:00:00.000000+00:00
PartEndTime = 2016-05-01T12:04:59.999000+00:00
RawDataTime
StartTime = 2016-05-01T12:00:00.000000+00:00
EndTime = 2016-05-01T12:09:59.999000+00:00
Part2.h5
ExternalDataset
Count = NumberOfLoci x OutputDataRate x duration = 5000 x 1000 x (5 x 60) = 1 500 000 000
PathInExternalFile = /Acquisition/Raw[0]/RawData
StartIndex = 1 500 000 001
DasExternalDatasetPart
PartStartTime = 2016-05-01T12:05:00.000000+00:00
PartEndTime = 2016-05-01T12:09:59.999000+00:00
RawDataTime
StartTime = 2016-05-01T12:00:00.000000+00:00
EndTime = 2016-05-01T12:09:59.999000+00:00
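The Count values above can be checked with a short, purely illustrative calculation (the variable names below are not schema element names):

# Minimal sketch checking the Count arithmetic of the worked example.
number_of_loci = 5000        # loci recorded along the fiber
output_data_rate = 1000      # samples per locus per second (1 kHz)
part_duration_s = 5 * 60     # each HDF5 file holds 5 minutes of raw data

count_per_part = number_of_loci * output_data_rate * part_duration_s
print(count_per_part)        # 1500000000 raw samples per HDF5 file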
Figure 25-3. DAS worked example conceptual model with HDF5, which is a “hybrid” implementation (from the
list in Section 24.5).
The XML files are all contained in the EPC file. The HDF5 files are transferred separately. Thus in the
worked example, which shows the arrays split over 2 HDF5 files and the calibration data stored in 1 HDF5
file (see Figure 25-3), there are 4 files in total. The top of Figure 25-4 also shows these 4 files: one with
extension .epc and the three HDF5 files with the .h5 extension.
Opening the EPC file (i.e., unzipping it) shows 7 XML data files: the DAS Acquisition, Instrument Box,
Optical Path, and OTDR Acquisition files (blue colored arrows) and the three EPC external part reference
files (red colored arrows); see the middle of Figure 25-4.
Relationships to related files are stored in the rels folder. For each data file, there is a corresponding .rels
file which stores the relationship of that file to the other files. These files have the extension .xml.rels.
The .rels files specify the relationships among ALL files—the XML files stored in the EPC file and the
externally stored HDF5 files (see the bottom of Figure 25-4).
• Files that are stored externally to the EPC file (such as the three HDF5 files in this example) are
specified using an EPC mechanism called external part reference. Each external file has a
corresponding external part reference XML file.
• Relationships to related files that are stored internally in the EPC file (such as the Instrument Box
and Optical Path files in this example) are specified using an internal EPC mechanism with a
TargetMode “Internal”.
The content of these is explained below.
Figure 25-4. The 4 files comprising the worked example (top); the content of the EPC file (middle)—7 XML
files; and the content of the Rels folder within the EPC file—one relationship file per XML data object
(bottom).
In the main folder of the EPC file, the DAS Acquisition XML file has the metadata as outlined previously.
In the middle of Figure 25-5, this file is shown with a light blue background. The UUID of the whole DAS
Acquisition is an attribute of this file. The incorporation of the UUID into the file name itself, as shown, is
also recommended (see the blue boxes in the figure).
Similarly, the EpcExternalPartReference files have their own UUID representing the UUID of the external
part (the red box in the lower part of Figure 25-5). These UUIDs are used within the DAS Acquisition XML
to reference the external part concerned (as part of the element EpcExternalPartReference within the
XML), shown by the red box within the DAS Acquisition file in Figure 25-5.
Because an EpcExternalPartReference is to an HDF5 file which may contain multiple arrays, every array
referenced from within the DAS Acquisition XML must have a specified path in the HDF5 file. This is
shown by the green box in the middle part of Figure 25-5.
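For example, once the physical HDF5 file name has been resolved (see the rels discussion below), the array can be read at the path given by PathInExternalFile. A minimal Python (h5py) sketch, using the path from the worked example, might look as follows:

import h5py

# Path carried in the DAS Acquisition XML for the raw array (worked example value).
path_in_external_file = "/Acquisition/Raw[0]/RawData"

with h5py.File("part1.h5", "r") as f:
    raw = f[path_in_external_file]     # two-dimensional raw data array (time samples and loci)
    print(raw.shape, raw.dtype)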
Figure 25-5. The 7 files in the EPC root folder (DAS Acquisition, DAS Instrument Box, Optical Path, OTDR
Acquisition and 3 EpcExternalPartReference) (top); an example of a reference to a single HDF5 array from
within the DasAcquisition file with the path to the H5 file (green) and UUID of EpcExternalPartReference (red)
(middle); and how this UUID is located in the EpcExternalPartReference XML itself (bottom).
The rels folder contains one file (extension .xml.rels) per file contained in the root folder. See Figure 25-6,
the top snippet, light brown border.
The rels file for the DasAcquisition lists all the relationships for this file—that is, to two files internal to the
EPC (Optical Path and Instrument Box), and to two external EpcExternalPartReference files. See Figure
25-6, the middle snippet, blue border.
The rels file(s) for the HDF5 files show a relationship back to the DasAcquisition (blue bubble showing its
UUID), and a relationship to an external file (the .h5 file) (green bubble showing the file name itself). See
Figure 25-6, the bottom snippet, pink border.
By these means, the references all tie in with each other and the specific path to an array in a specific
HDF5 file can be discovered.
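Because the .rels files use the Open Packaging Conventions relationship format on which EPC is based, they can be read with any XML parser. The following minimal Python sketch (illustrative only; the file name shown is hypothetical) lists the external targets, i.e., the physical .h5 file names, declared in one .rels file:

import xml.etree.ElementTree as ET

# Namespace used by Open Packaging Conventions relationship (.rels) files.
RELS_NS = "{http://schemas.openxmlformats.org/package/2006/relationships}"

def external_targets(rels_path):
    """Return the Target of every relationship marked TargetMode="External"
    in one .rels file, e.g. the physical .h5 file referenced by an
    EpcExternalPartReference."""
    root = ET.parse(rels_path).getroot()
    return [rel.get("Target")
            for rel in root.findall(RELS_NS + "Relationship")
            if rel.get("TargetMode") == "External"]

# Hypothetical relationship file name; in the worked example the real name embeds
# the UUID of the EpcExternalPartReference object.
print(external_targets("rels/EpcExternalPartReference_example.xml.rels"))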
Figure 25-6. Showing the 7 files in the EPC rels folder (one for each data file comprising the set) (top); the
relationships existing from the DasAcquisition with the three internal XML files, plus three
EpcExternalPartReferences (middle); and the relationships existing from the EpcExternalPartReference to a
physical H5 file (bottom).
When arrays are split over multiple HDF5 files (as they are in the worked example), then a single logical
array in the DasAcquisition XML (e.g. for a Raw array) contains 2 (or more) ExternalFileProxy elements.
Figure 25-7 shows the same worked example and shows how the Count and StartIndex elements are
used to define the partitioning of the array across the two files. The paths for both arrays are defined here.
To find the physical HDF5 file name, look in the rels file for the EpcExternalPartReference, as explained
above.
The DasExternalFileProxy elements PartStartTime and PartEndTime show the start and end times of the
partial Raw DAS data array stored in each HDF5 file.
Figure 25-7. Shows how a single array (raw data in this case) is split across two physical files per the worked
example and the figures above. The files are identified by green bubble/arrow/label (1st file, bottom left) and
by a purple bubble/arrow/label (2nd file, bottom right).
The above description has focused on how to navigate from the XML, via the data in the EPC, to the
required arrays in the HDF5 files. The HDF5 files also contain identification data (Figure 25-8). The root
level of the .h5 file contains the UUID of the EpcExternalPartReference and the Acquisition group
contains the UUID of the DasAcquisition XML. Every HDF5 file that is part of the sequence of HDF5 files
has this same ID so that if an HDF5 file gets separated (e.g., a disk is misplaced), it can be associated
with the right acquisition.
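This identification data can be read back programmatically. The following Python (h5py) sketch assumes the UUIDs are stored as attributes named uuid, per the convention shown in Figure 25-8 and Section 24.6.4; treat the attribute names as illustrative.

import h5py

with h5py.File("part1.h5", "r") as f:
    # UUID of the EpcExternalPartReference, assumed stored as an attribute at the file root.
    part_reference_uuid = f.attrs.get("uuid")
    # UUID of the DasAcquisition XML, assumed stored on the Acquisition group; this is what
    # lets a stray .h5 file be matched back to the acquisition that describes it.
    acquisition_uuid = f["/Acquisition"].attrs.get("uuid")
    print(part_reference_uuid, acquisition_uuid)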
Figure 25-8. DAS worked example XML (displayed in text editor) and related HDF5 files (displayed with an
HDF5 utility). Showing where EpcExternalPartReference is used to reference the .h5 files. The .h5 files
contain the uuid ID which is the UUID of the DasAcquisition XML, allowing a reference back to the XML that
describes the whole acquisition.
Processed data, i.e., FBE band or spectra data, is stored and referenced the same way (Figure 25-9).
Figure 25-9. Spectra processed data referenced from the XML. In this case, each FBE band has a unique
array name in the HDF5 file, and these can be seen in the PathInExternalFile elements in the snippets.
The PRODML optical path object requires you to declare, at a minimum, the following:
• Unique Identifier for the entire optical path
• Name
• Inventory section where all the different segments, connectors, turnarounds, splices and terminators
are declared
• Network section where you specify the connectivity of the components (referencing them in the
inventory section)
For the optical path drawn above, the worked example is described in the DTS Section. See Section 20.1.
DAS Acquisition
The DAS Acquisition data object specifies the key instrument attributes for the full DAS acquisition
including InstrumentBox and VendorFormat, MaximumFrequency, MinimumFrequency, PulseRate,
PulseWidth, GaugeLength and the SpatialSampling interval along the fiber.
• The PulseRate is the number of pulses sent into the fiber per second.
• The GaugeLength is the distance (length along the fiber) between a pair of pulses used in a dual-
pulse or multi-pulse system.
• For each pulse, the backscatter is sampled along the fiber (by an AD converter that samples at a
much higher rate than the PulseRate). The OpticalPath points to a metadata object that describes the
(fiber) path along which the light pulses travel.
The StartLocusIndex is the first location (locus) along the fiber at which a recording is made. The distance
between the recorded loci is determined by the SpatialSamplingInterval. The number of loci recorded,
starting from and including the first locus, is stored in the NumberOfLoci record. Figure 25-11
shows the concept.
Figure 25-11 also shows the metadata attributes for the example file. Some of the main attributes are
explained here.
The DAS acquisition job name is indicated by the Title attribute. In addition, the Originator can be indicated.
The location of a locus on the fiber can be estimated by multiplying the locus number by the
SpatialSamplingInterval. The index of the first locus is 0; hence locus 0 has a distance of 0 and indicates the
measurement taken over the first spatial sampling interval, which starts at the DAS instrument
connector/splice and has a length of SpatialSamplingInterval. Note that this is always an estimate
because the fiber refractive index is typically an average and may vary slightly along the fiber, and vary
more if the optical path consists of fibers with different refractive index values spliced together. In
addition, splices or certain optical components may cause delays, which must be corrected for by the end
user in a distance (for wells often called depth) calibration exercise.
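A minimal sketch of this estimate follows (illustrative only; the sampling interval value is hypothetical, and a proper distance/depth calibration supersedes such an estimate):

# Illustrative values; a real acquisition takes these from the DasAcquisition XML.
spatial_sampling_interval_m = 1.021    # hypothetical metres of fiber per locus

def estimated_locus_distance_m(locus_number):
    """Rough fiber distance of a locus from the DAS instrument connector/splice.
    Ignores refractive-index variation, splices and optical components, which a
    distance (depth) calibration must correct for."""
    return locus_number * spatial_sampling_interval_m

print(estimated_locus_distance_m(0))     # 0.0 (measurement over the first sampling interval)
print(estimated_locus_distance_m(100))   # ~102.1 m of fiber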
Figure 25-11. DAS acquisition attributes for part1.h5. In the left pane, note the file structure and naming
conventions as explained in Section 24.6.4 above.
Figure 25-12. Link between DAS acquisition time, trigger time, and DAS raw time elements.
Another important calibration point is the fiber distance from the last locus to the end of the fiber; this is a
useful calibration point for permanent downhole installations. The operator can use this measure to create
exactly the same interrogation interval along the fiber after a DAS instrument or surface cabling has been
changed out. This calibration point is optional and is defined in the Calibration group as
LastLocusToEndOfFiber.
Figure 25-13. Worked example Facility Calibration including some of the Calibration Input Points and the
External Part Reference to the Locus Depth Point array in the HDF5 file.
The following tables show how the Calibration Input Points can be used to provide a depth calibration for
the optical path example in Figure 25-10.
The first table refers to the calibration data for the surface cable. Locus indexes 0 to 4 represent calibrated
points along the surface cable.
The second table refers to calibration data for the downhole cable. There are 4 CalibrationInputPoint
objects for this facility and these are the entries shown in this table. Locus indexes 5 to 100 are the
interrogated part of the downhole cable.
Figure 25-10 further provides the measured depth derived using these: a surface cable of 25.0 m with no
overstuffing on a perfectly flat surface, a downhole cable installed in a perfectly vertical well, with the well
head connector 1.5 m above the surface (negative depth), and an overstuffed downhole cable of 483.25 m
(1.9% overstuffing, ~9.182 m).
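The overstuffing figure quoted above can be checked with a short calculation (assuming, as an illustration, that the 1.9% is applied to the 483.25 m downhole cable length):

# Extra fiber length introduced by overstuffing the downhole cable.
downhole_cable_length_m = 483.25
overstuffing_fraction = 0.019           # 1.9% overstuffing

extra_fiber_m = downhole_cable_length_m * overstuffing_fraction
print(round(extra_fiber_m, 3))          # 9.182 m of extra fiber relative to the cable length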
Figure 25-14. DAS calibration data stored in calibrations.h5. This shows the full mapping LocusDepthPoint
for both the surface cable (top) and the downhole cable (bottom). Notice that the LocusDepthPoint specifies
mappings for all locus points, derived from the smaller set of CalibrationInputPoints provided for the downhole cable.
Figure 25-15. DAS raw data array values: 12 loci (horizontal 0-11), 38 samples (vertical 0-37) in h5 file
part1.h5.
Figure 25-16. DAS raw trace times array and attributes in h5 file part1.h5.
Figure 25-18. DAS FBE attributes and array values for frequency band FbeData[0] in h5 file part1.h5.
Figure 25-20. DAS FBE times array and attributes in h5 sample file part1.h5.
Figure 25-22 shows one of the FFT sub-arrays (FFT index 4 out of 32) for the 4 overlapping windows
(index 0 to 3 on the vertical axis) calculated for the 5 loci (index 0 to 4 on the horizontal axis).
Figure 25-22. DAS FFT spectra data sub-array for one locus (index 3) for the overlapping windows (index 0 to
31 on the vertical axis) calculated at different times (index 0 to 3 on the horizontal axis) for the spectra data
array in h5 sample file part1.h5.
Figure 25-23. Spectra data time array and attributes in H5 sample file part1.h5.
Actors
All DAS use cases have a set of potential actors, which are defined below and used in the use
cases described in this section.
Actor Description
DAS Field Engineer Engineer configuring and operating the DAS equipment in the field.
DAS Project Engineer Expert in DAS and its applications. Often the person who recommends
acquisition settings, QCs the acquisition, and does the final data
processing and transcribing of data to the required delivery medium.
DAS Instrumentation The DAS acquisition system (interrogator unit).
Fiber Vendor The vendor that supplied (and installed) the fiber.
DAS Vendor The vendor that supplied the DAS instrumentation.
Customer Field Engineer The engineer in the field representing the customer.
Customer Project Engineer Usually the engineer responsible for the requirements and successful
completion of the operation. Usually the recipient of the acquired
measurements.
Third-Party System IT system receiving DAS data.
Use Case Name DAS Project Engineer and Customer agree on the acquisition requirements of the
DAS service
Version 1.1
Goal Agreed definition of acquisition requirements and final deliverables including reports,
data, and media formats.
Summary Description Define the requirements and scope of the service that the DAS vendor shall provide.
Agree on: ping rates, spatial resolution, spatial sampling, type of DAS data to provide
(raw or reduced, e.g. frequency bands), when to acquire data, format and media of
deliverables.
Actors DAS Project Engineer, Customer Project Engineer
Triggers Request for service
Pre-conditions Suitable fiber optic downhole or surface installation compatible with the DAS
interrogator unit to be provided.
Primary or Typical Scenario Customer Project Engineer and DAS Project Engineer agree the requirements of
the acquisition.
Alternative Scenarios
Post-conditions Acquisition configuration agreed. The requirements are reflected in the metadata that
is transferred with the DAS data. Currently, no requirement exists in the standard to
transfer only this data.
Business Rules
Notes
Use Case Name Customer provides fiber optic installation layout to DAS project engineer and agrees
the optical path configuration for the DAS acquisition.
Version 1.1
Goal Agree optical path configuration for the DAS acquisition.
Summary Description In most services the DAS Vendor provides the DAS acquisition instrument and the
Customer is responsible for providing the fiber optic cable installation in the field from
which the data will be acquired. To properly operate the DAS instrument, the DAS
Vendor needs to be informed about the details of the field fiber optic installation and
cable properties (path, fiber length, optical connectors, fiber type single or multi-mode
fiber etc.).
Actors DAS Project Engineer, Customer Project Engineer
Triggers
Pre-conditions Suitable fiber optic downhole or surface installation compatible with the DAS
interrogator unit to be provided.
Primary or Typical Scenario Customer Project Engineer provides DAS Project Engineer with the configuration
of the optical installation to which the DAS acquisition instrument will be connected.
Alternative Scenarios For some DAS acquisitions the DAS vendor provides the fibers in the field, for
example by providing a flexible rod with integrated fibers that can be spooled into a
well.
Post-conditions Fiber optic path understood.
Business Rules
Notes
Version(s) 1.1
Goal DAS equipment configured, QCed and ready for acquisition to begin.
Summary Description DAS Field Engineer tests fiber integrity and notes the positions of any losses and
reflections.
DAS Field Engineer configures the DAS equipment with the required acquisition settings.
DAS Field Engineer performs initial depth calibration (facility mapping).
DAS Project Engineer performs QC.
DAS Project Engineer delivers QC report to Customer Field and Project Engineers.
Ensure the Instrument Box is configured to timestamp the data as per the customer’s
requirements, e.g. GPS time synced, or synced to other equipment in the operation.
Actors DAS Field Engineer, DAS Project Engineer, Customer Project Engineer, Customer Field
Engineer, DAS Instrumentation
Triggers Request from client to deploy to site and prepare for acquisition.
Post-conditions DAS Instrument Box Connected, configured and ready for acquisition.
QC Setup report delivered to DAS Project Engineer, Customer Project and Field
Engineer.
These requirements are covered by the DAS instrument box and optical path data
objects that are part of this standard.
Business Rules
Notes
Definitions See Section 27.1.
Version 1.1
Goal Acquire the required DAS measurements
Refine the depth calibration (facility mapping)
Provide operation log for time event association
Summary Description In its simplest form the DAS Field Engineer presses start record and stop record.
DAS Project Engineer may perform timely QC.
The Customer Project Engineer may request previews of the acquisition data.
Actors DAS Field engineer, DAS Project Engineer, DAS Instrumentation, Customer
Field Engineer, Customer Project Engineer
Triggers Client requests acquisition to commence.
Pre-conditions Configuration of DAS Instrumentation complete.
Feedback from customer on setup QC report received.
Primary or Typical Scenario Perform acquisition for a vertical seismic profiling service.
Alternative Scenarios Perform acquisition for other borehole or surface monitoring services.
Post-conditions DAS data has been acquired to ‘field media’
This data can be transferred using the standard.
Business Rules
Notes Field media may be different from the media on which the final acquired data is
delivered.
Definitions See Section 27.1.
Version 1.1
Goal Optionally post process the acquired data after acquisition and deliver to
customer or customer’s Third Party System.
Summary Description Post processing may include:
• Refining the depth calibration (facility mapping)
• Re-decimation or extraction of different frequency bands
• Updating or adding metadata
Alternative Scenarios
Post-conditions Post Processed DAS data available for transfer using desired delivery media in
PRODML DAS format.
Derived formats such as SEGY for vertical seismic profiles may also be produced.
Details of post processing activities should be documented accordingly in the
metadata of the delivered data.
Business Rules
Notes For specific DAS acquisition services (e.g., vertical seismic profiles or pipeline
monitoring) the client may only be interested in the DAS post processed derived
product (e.g., SEGY for VSP or a list of events for pipeline monitoring) and may agree
with the DAS service provider to store and retain the field or reduced processed data
for an agreed period.
Note that in the list of attributes, Vendor Code was renamed to Service Company Name. An optional (in XML
only) element called Service Company Details was added, with type Business Associate. This will not
appear in the HDF5 file because of the rules for which attribute types are included, but it can be used to
supplement Service Company Name.
• PulseWidth
• GaugeLength
27.3.1.3 Obsoleted
• GaugeLengthUnit
• SpatialSamplingIntervalUnit
• PulseWidthUnit
Fixes to EPC
Various fixes have been made to the model around details of the way the Energistics Packaging Conventions
(EPC) work for DAS.
Documentation
The DAS content has been updated to clarify some of the concepts and the rules that implement them, and to
address the changes listed above. For a more detailed list of documentation changes, see the
Amendment History (at the front of this document).
28 Product Volume
Acknowledgement
Special thanks to the following companies and the individual contributors from those companies for their
work in designing, developing and delivering the Product Volume and Flow Network data objects: the
initial work was donated by ExxonMobil; this was then adapted for the NCS by, amongst others, EPIM,
Statoil and TietoEnator. Saudi Aramco, Peformix (now Baker Hughes) and Atman Consulting then did
various modifications and generated useful documentation.
• temperature
• pressure
• flow rate
Parameter Sets allow time varying "usage" parameters to be defined for the facility. For example, a valve
status may be toggled from "open" to "closed" to indicate that a well is offline.
Flows allow reporting for a flow of a product. For example, it may be used to specify the rate of oil
produced for a specified well.
The relevant enumerations found in the enumValuesProdml.xml file are as follows:
• Reporting Periods (e.g. day, month, year, etc.) are given in the ReportingPeriod enumeration.
• Facility Kinds (e.g. wellhead, separator, flow line, choke, etc.) are given in the ReportingFacility
enumeration.
• Facility Parameters (e.g. block valve status, reciprocating speed, etc.) are given in the
FacilityParameter enumeration.
• Flow Kinds (e.g. production, injection, export, etc.) are given in the ReportingFlow enumeration.
• Flow Qualifiers (e.g. measured, allocated, etc.) are given in the FlowQualifier and FlowSubQualifier
enumerations.
• Product Kinds (e.g. oil, water, gas, etc.) are given in the ReportingProduct enumeration.
The combination of Flow Kind and Flow Qualifier(s) is used to characterize the underlying nature of the
flow. For example, the following combination is used to indicate a production flow that is measured.
<flow>
<kind>production</kind>
<qualifier>measured</qualifier>
...
</flow>
Similarly, the following combination is used to indicate an injection flow that is simulated.
<flow>
<kind>injection</kind>
<qualifier>simulated</qualifier>
...
</flow>
Figure 28-1. Product Volume Report referencing Product Flow Model unit & port
• Online documentation for the NCS. This defines a standard set of patterns for the usage of product
volume for volume reporting to partners and regulators via the EPIM Reporting Hub. This is a very
large-scale adoption of PRODML where additional business rules have been added to define the
norms for the usage of the base standards.
− EPIM Reporting Hub introduction https://epim.no/erh/
− EPIM Reporting Formats https://epim.no/erf/ . These are very similar schemas to Product Volume,
although in the earlier v1.x style. They have been modified for use in reporting in Norway but may
be useful to review alongside the example linked to below.
− Norwegian Regulator’s introduction http://www.npd.no/en/Reporting/Production/Reporting-
production-data-to-the-Norwegian-Petroleum-Directorate/
- Worked example at http://www.npd.no/Global/Norsk/7-Rapportering/Produksjon/generell-felt.xml
• A PRODML companion document which was produced as part of an extension to PRODML further to
a change request from a member organization. This extension added a number of features to make
production surveillance easier to implement. The document deals with the interplay between product
volume, product flow model and time series data objects to be able to use them for production
surveillance. This document is entitled PRODML Product Volume, Network Model & Time Series
Usage Guide and it can be found in the folder: energyml\data\prodml\v2.x\doc in the Energistics
downloaded files. Extracts from that document have been included here. The document shows
PRODML v1.x style schemas and examples but the data model has not changed in v2.x.
Construction of a PRODML flow network can be summarized in the following steps, which are further
explained below:
1. Draw the real world diagram indicating measurement points.
2. Draw boundaries to include facility parameters and exclude flow measurements.
3. Make each boundary a unit in the flow network and identify the ports.
Table 1 summarizes the units of the flow network, their assigned unique identifiers, and the associated facility
kinds.
Table 1 Simple Network Units
1 outlet A-C
2 inlet A-C
3 outlet B-C
4 inlet B-C
5 outlet C-P
Figure 29-7 illustrates the definition of port 5 in XML. Note that this snippet shows PRODML v1.x style;
the content has not changed in v2.
30 Time Series
Acknowledgement
Special thanks to the following companies and the individual contributors from those companies for their
work in designing, developing and delivering the Time Series data object: Shell, OSISoft, Petrotechnical
Data Systems.
Keyword Kind Description
asset A formatted URI identifier of the asset (facility) related to the value. This captures the
identifier kind of asset as well as the unique identifier of the asset within a specified context
(the authority). The identifier may define a hierarchy of assets. Refer to the CTA
Technical Usage Guide for more information on Energistics identifiers.
qualifier A qualifier of the meaning of the value. This is used to distinguish between variations
of an underlying meaning based on the method of creating the value (e.g., measured
versus simulated). The values associated with this keyword must be from the list
defined by type FlowQualifier.
subqualifier A specialization of a qualifier. The values associated with this keyword must be from
the list defined by type FlowSubQualifier.
product The type of product that is represented by the value. This is generally used with
things like volume or flow rate. It is generally meaningless for things like temperature
or pressure. The values associated with this keyword must be from the list defined by
type ReportingProduct.
flow Defines the part of the flow network where the asset is located. This is most useful in
situations (e.g., reporting) where detailed knowledge of the network configuration is
not needed. Basically, this classifies different segments of the flow network based on
its purpose within the context of the whole network. The values associated with this
keyword must be from the list defined by type ReportingFlow.
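As a purely illustrative summary (not the Time Series XML itself), the keyword kinds above combine to classify a single time series. In the sketch below, the asset identifier is hypothetical, and the qualifier, product, and flow values are taken from the enumeration examples in Chapter 28:

# Illustrative classification of one time series using the keyword kinds above.
# The asset URI is hypothetical; see the CTA Technical Usage Guide for the actual
# Energistics identifier format.
time_series_keywords = {
    "asset": "eml:///witsml20.Well(example-uuid)",   # hypothetical asset identifier
    "qualifier": "measured",        # from the FlowQualifier list
    "product": "oil",               # from the ReportingProduct list
    "flow": "production",           # from the ReportingFlow list
}
print(time_series_keywords)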
31 Production Operation
Acknowledgement
For Acknowledgements of detailed contributors, see Chapter 28.
UML snippet: the ProductionOperationLostProduction complex type («XSDcomplexType») carries the elements
Type: OperationKind [0..1], DTimStart: dateTime [0..1], DTimEnd: dateTime [0..1] and Comment: String2000 [0..*],
plus the attribute («XSDattribute») uid: String64.