
PRODML Technical Usage Guide v2.2

Overview: The PRODML standard facilitates data transfer among the many software applications used in production operations, which helps promote interoperability and data integrity among these applications and improve workflow efficiency.

Version of Standard: 2.2
Version of Document: 1.0
Date published: May 16, 2022
Prepared by: Energistics and various work groups of the PRODML SIG
Abstract: Explains key concepts, data object organization and intended usage.
Document type: Usage guide
Language: U.S. English
Keywords: standards, energy, data, information, production operations, production reporting
PRODML Technical Usage Guide

Acknowledgements
Sincere thanks to Laurence Ormerod for his many years of dedication to PRODML development,
project management, and documentation.

Usage, Intellectual Property Rights, and Copyright


This document was developed using the Energistics Standards Procedures. These procedures help implement
Energistics’ requirements for consensus building and openness. Questions concerning the meaning of the
contents of this document or comments about the standards procedures may be sent to Energistics at
[email protected].
The material described in this document was developed by and is the intellectual property of Energistics.
Energistics develops material for open, public use so that the material is accessible and can be of maximum
value to everyone.
Use of the material in this document is governed by the Energistics Intellectual Property Policy document and the
Product Licensing Agreement, both of which can be found on the Energistics website,
https://www.energistics.org/legal-page/.
All Energistics published materials are freely available for public comment and use. Anyone may copy and share
the materials but must always acknowledge Energistics as the source. No one may restrict use or dissemination
of Energistics materials in any way.
Trademarks
Energistics®, Adopt>Advance>Accelerate®, Energistics Certified Product® and their logos are registered
trademarks and WITSML™, PRODML™, RESQML™ are trademarks or registered trademarks of Energistics
Consortium, Inc. in the United States. Access, receipt, and/or use of these documents and all Energistics
materials are generally available to the public and are specifically governed by the Energistics Product Licensing
Agreement (http://www.energistics.org/product-license-agreement).
Other company, product, or service names may be trademarks or service marks of others.

Standard: v2.2 / Document: v1.0 / May 16, 2022



Amendment History
Standard Version   Doc Version   Date           Comment
2.2                1.0           May 16, 2022   For a summary of changes, see PRODML_v2.2_Release_Notes_v1.0.PDF.


Table of Contents
No internal table of contents was generated inside this document. In Acrobat, use the bookmarks pane, opened from the icon at the far left of an open PDF document.
If viewing this document in Microsoft Word, on the View tab, in the "Show" group, check "Navigation Pane" to display a table of contents pane showing all headings in the document.


1 Introduction to PRODML
PRODML is an upstream oil and gas industry standard for the exchange of data related to production
management. This domain covers the flow of fluids from the point where they enter the wellbore to the
point of custody transfer, together with the results of field services and engineering analyses required
in production operation workflows.
The production phase of an oil or gas field covers a wide range of data transfer requirements.
PRODML covers some of these requirements but does not yet present a coherent view of the whole
domain and PRODML's role within it. A "top-down" full development of such a large scope of data
transfers is too large a task for an essentially voluntary effort. Hence, PRODML development has
followed a "bottom-up" approach, driven by member company project requirements.
PRODML has been developed by Energistics members—oil companies, oilfield service companies,
software and technology companies, regulatory agencies and academic institutions—through the
PRODML special interest group (SIG) and its various work groups and project teams.

1.1 PRODML Overview


PRODML is an XML-based data-exchange standard that facilitates reliable, automated exchange of
data among software packages used in production management workflows. It defines a set of data
objects that software can implement to read and write data in the PRODML standard format, which
allows PRODML-enabled software to more easily share and use this data.
Underpinning PRODML’s domain-specific data models is the Energistics Common Technical
Architecture, which provides various technical standards and methods upon which PRODML builds.
The table below lists the main segments of data transfer within production; PRODML standards are in
daily commercial use in all of these segments.

Segment                        Scope
Production volumes             Internal use, to regulators, to partners.
Completion and well services   Hardware and operations over producing life.
Surveillance                   Monitoring production in the wellbore, the reservoir (near wellbore), and on the surface.
Optimization                   Decision support for both design and operational optimization.

NOTE: As described in Section 1.1.2, completion and well services is now incorporated into WITSML.
However, through the integration of the Energistics domain standards (see Section 2.4), WITSML data
objects are readily available to users of PRODML.
These segments are defined in more detail in Figure 1-1, which shows some of the main data scope
requirements within each segment. This figure is intended to be illustrative rather than exhaustive. For
each item of data scope, the v2.x coverage of PRODML is shown, together with a high-level estimate
of current usage—the two together giving some idea of maturity. The bands of high-medium-low
scores have the following meanings:
1. PRODML Coverage
a. High: strong coverage where a multi-company group of domain experts has specified the
requirement and validated the model.
b. Medium: reasonable coverage where significant work has been done, but where the
requirement was not the primary aim of the effort, or where the effort was focused on one
aspect of the requirements, not general coverage.
c. Low: a low level of coverage, where the required scope is covered only partially or not at all,
or where work has been donated and incorporated but is based on one company's views with no
peer review.


2. PRODML Usage
a. High: widespread uptake over a material number of companies/situations. The standard is
therefore well-tested and known to be complete.
b. Medium: material uptake but in a limited number of cases, e.g. for one purpose, or by a small
number of companies. The standard is therefore workable but may need adapting for high
usage.
c. Low: usage limited to one or two companies, or not used at all. The standard therefore may
need work and adaptation before it can be used more widely.

Segment                        Required scope                                          Coverage within PRODML   PRODML Usage
Production volumes             Volume reports daily                                    High                     Medium
                               Volume reports monthly                                  High                     Medium
                               Operational Reports                                     Medium                   Low
                               Networks                                                Medium                   Low
Completion and well services   Initial completion                                      High                     Medium
                               Workover and well service change history                High                     Medium
                               Completion correct details at any instant               High                     Medium
                               Well services - details of each job                     Low                      Low
                               Artificial Lift equipment                               Low                      Low
                               Workover - details of each job                          Medium (WITSML)          Low
Surveillance                   Well tests (steady state production tests)              High                     Medium
                               Producing well performance                              Medium                   Low
                               Downhole point measurements - time series               Low                      Low
                               Gathering system point measurements - time series       Low                      Low
                               Surface equipment surveillance                          Low                      Low
                               Downhole distributed measurements                       High                     Medium
                               Transient well tests/formation tests                    High                     Low
                               Gradient surveys/other artificial lift surveillance     Low                      Low
                               Production logging                                      Low                      Low
                               Fluid samples and analysis                              High                     Low
Optimization                   Well simulation (inc. operational and design usage)     Low                      Low
                               Network simulation (inc. operational and design usage)  Low                      Low
                               Reservoir to production simulation and optimization     Low (RESQML)             Low
Figure 1-1. High-level scope of sub-domains within production segments, showing PRODML maturity.

The following sections describe in more detail the capability and usage within the segments of
production data outlined above.

Production Volumes
The origins of PRODML were focused on the use of the product volume data object. This has been
adopted on a large scale on the Norwegian Continental Shelf (NCS) where all producing fields report
to partners and the regulator using this standard. For more information, see Chapter 28.
The product volume data object has been used by other member companies for volume reporting, but
adoption has not been large; feedback indicates this is owing to the complexity of the model.
The simple product volume reporting (SPVR) capability was developed and added in version 2.x;
the aim throughout its development has been to keep the model as simple as possible, while still
covering most, if not all, requirements.


The coverage of requirements in this area is therefore believed to be high using SPVR or product
volume if the most flexible approach is needed. Maturity is medium, since uptake outside the NCS is
low.

Completion and Well Services


A joint PRODML-WITSML project during 2012-2013 resulted in a comprehensive completion domain
capability. The model comprises data objects for:
• The well completion equipment itself (assembled into multiple strings, e.g. tubing, casing, etc.);
couplings between elements of the completion such as plugs and packers, cement.
• The connections to the reservoir, by position, by type (perforations, gravel pack, etc.).
• The time span of each item of equipment and each connection, so that the configuration at any
time can be reported.
• A “ledger” of jobs performed to change the completion equipment, which ties in with the time
spans of equipment per the previous list item.
The set of objects does not address in any detail the different kinds of well service or workover “jobs”.
Artificial lift is only dealt with at a skeletal level.
WITSML itself deals with some of the drilling rig type operations.
The capability described has been used in a large commercial setting and by the companies involved,
to various degrees.
The scores above reflect this experience and capability.
NOTE: This whole segment, although developed largely under the auspices of PRODML, is now
packaged as part of WITSML. For more details, see the WITSML Technical Usage Guide (Chapter 8).
Although this document does not refer further to the completion segment of production management,
it is vital to know that the level of coverage and maturity described is available in the Energistics
portfolio.

Surveillance
Production surveillance activities consist of a wide variety of measurement and analysis methods.
PRODML coverage of this whole universe of data types is limited. The areas highlighted above as
being covered to a high degree do have high-detail models and these are all described in this
document.
Usage of some of these (well performance, distributed fiber optics, fluid sampling for example) is
shown as low because these are recent developments, either in version 2.x or in version 1.3, which
preceded it. There does appear to be significant interest in uptake of these sub-domains.

Optimization
Within the production management domain, there is something of a hierarchy of requirements. For
example: it is hard to report surveillance without a description of what is being measured, and it is
hard to optimize something without both a description of what is being optimized, and the
measurements which are input to the optimization.
The optimization segment has low scores for its sub-domains simply because the foundational data
models are not yet sufficiently developed to support major optimization workflows. For example: well
simulation (commonly known as nodal analysis) needs a model covering the following elements:
• The physical hardware in the well—available through the Completion segment since 2014.
• The paths by which fluid flows through the hardware—no model developed.
• The properties of the connection to the reservoir—available through the Completion segment.
• The flow properties of the near-wellbore reservoir and of the connection to the reservoir—no
model developed for well simulation, though gridded properties available in RESQML.
• The fluid properties flowing in the system—available from v2.x in 2016.
• The control of flow—chokes, artificial lift, etc.—no model developed.


• The reporting of well simulation results (flowrates, sensitivities, flowing gradients, etc.)—no model
developed.
Much of the input data can be modelled using other segments of PRODML and other Energistics
domain standards, but the specific data needed to run a simulation using this base physical data has
not yet been modelled.
Therefore, in general, the coverage of optimization is low, being limited to partial solutions. The
segment is a good opportunity for future development.

1.2 Use Cases


Energistics aims to focus its development on use cases defined by its SIGs and its members. Use
cases for specific domains are included in the parts/chapters for each specific domain.

1.3 What’s New in PRODML v2.2?


PRODML v2.2 does not add any major new scope for the data objects. Minor upgrades are limited to
Fluid Analysis. For the detailed change list, see PRODML_v2.2_Release_Notes.PDF.
Fluid Analysis gains capability in two main areas:
1. Water Analysis. A good many new attributes have been added to make Water Analysis much
more comprehensive. There are now enumerated lists of anions, cations and organic acids.
2. Non-hydrocarbon analysis has been added to deal with analysis for various non-organic
“impurity” molecules, and the testing for these. The tests are not always laboratory-based.
Some are performed at the wellsite. Additionally, not all such tests require a sample to be
taken and stored prior to analysis. Some of the tests work in-line. A new type of Fluid Analysis
has been added to deal with these cases.
To support the above, in PRODML Common, inorganic species have been added to the enumeration
lists of kinds of Fluid Component. A new type of Fluid Component has been added, Sulfur Fluid
Component, which has an enumerated list of commonly reported sulfur molecule species.
A useful improvement to clarity has been to rename what was “Fluid Component” in the Fluid
Composition classes to “Fluid Component Fraction”, to avoid confusion between this element and the
Fluid Components which are contained in the Fluid Component Catalog. The “fraction” element
contains a fraction (by moles, mass, etc.) of a component and a reference to that component within
the catalog. In the catalog can be found the list of fluid components themselves, each with its required
parameters and description.
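The catalog/fraction pattern described above can be sketched in simplified XML. The element names and structure below are illustrative assumptions, not copied from the v2.2 schemas; consult ProdmlCommon.xsd for the actual definitions.

```xml
<!-- Hypothetical sketch: a "Fluid Component Fraction" in a composition holds
     only a fraction and a reference; the component itself, with its required
     parameters, appears once in the Fluid Component Catalog. Element names
     are illustrative, not the exact v2.2 schema. -->
<FluidComponentCatalog>
  <PureFluidComponent uid="comp-methane">
    <Kind>methane</Kind>
    <MolecularWeight uom="g/mol">16.04</MolecularWeight>
  </PureFluidComponent>
</FluidComponentCatalog>

<FluidComposition>
  <FluidComponentFraction>
    <!-- reference into the catalog, rather than repeating the component -->
    <FluidComponentReference>comp-methane</FluidComponentReference>
    <MoleFraction uom="mol/mol">0.85</MoleFraction>
  </FluidComponentFraction>
</FluidComposition>
```

This separation lets many compositions share one definition of each component without duplicating its parameters.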

1.4 Audience, Purpose and Scope


This guide is intended for both business/domain people who use data and applications, and for
information technology (IT) professionals—software engineers, data managers and developers—who
want to implement PRODML into software.
This guide describes how the objects have been designed and are intended to be used. Use of XML
means the standards can be implemented on many platforms and in many programming languages,
so no specifics on implementation are given.
Chapter 2 gives a technical overview of PRODML, and subsequent chapters go through each sub-
domain.

Audience Assumptions
This guide assumes that the reader has a good general understanding of programming and XML, and
a basic understanding of the exploration and production (E&P) domains and related workflows.


1.5 Resource Set: What Do You Get When You Download PRODML?
Each of the Energistics domain standards—RESQML, WITSML or PRODML—is a set of XML
schemas (XSD files) and other resources freely available to download and use from the Energistics
website. Energistics common is a set of data objects shared by the domain standards. To download
the standards, go to https://www.energistics.org/download-standards/.

Options for Downloading and File Structure


With the final publication of these versions of the Energistics standards, you can choose between
these options for downloading the standards:
• The "traditional" ML-specific option, where you download WITSML, PRODML or RESQML
with its appropriate version of Energistics common. The current releases of the domain
standards all use Energistics common v2.3. The download packages available are:
− WITSML_v2.1.zip
− PRODML_v2.2.zip
− RESQML_v2.2.zip
• All Energistics domain standards bundled together into a single download package, with
the shared version of Energistics common v2.3. (NOTE: This is how the standards were
packaged for the public review that ran from December 2021 to March 2022.)
− The download package is energyML.zip.
The figures below show what the energyML package looks like. A domain-specific package has a
folder for the domain standard you downloaded (EXAMPLE: if you downloaded PRODML_v2.2.zip, it
would contain only the PRODML folder) plus Energistics common.

Figure 1-2. The contents of the energyML download, containing all three MLs and the shared version of
Energistics common.

Keeping these folders in this relative location means that the schemas all reference the correct paths
to common elements. All three MLs have the same folder structure, as seen in Figure 1-3.

Figure 1-3. Contents of each ML’s folder.

PRODML is a set of XML schemas (XSD files) and other technologies freely available to download
and use from the Energistics website. When you download PRODML, you get a zip file structured as
shown in Figure 1-4, which contains the resources listed in the tables below in these two main groups:
• PRODML-specific (Section 1.5.1). The schemas and documentation specific to PRODML. Note
in Figure 1-4 (left) (red square):
− ProdmlAllObjects.xsd is a schema that includes all of the other schemas shown.
− The ProdmlCommon schema includes objects shared by other PRODML data objects.


• Energistics Common Technical Architecture (Section 1.5.2) (in the common\v2.3 folder).
Components of the Energistics Common Technical Architecture (CTA), a set of
specifications, schemas, and technologies shared by all Energistics domain standards.
To download the latest version of this standard, visit: https://www.energistics.org/download-standards/

Figure 1-4. You download the PRODML standard as a zip file from the Energistics
website. It contains the resources described in the two tables below. The figure shows (left) the PRODML
schemas (XSD files) and (right) the Energistics common schemas.

PRODML-Specific Resources
The PRODML download installs the folder structure described above. Within the
prodml\v2.2 folder are the resources listed in the table below.
Document/Resource Description
Prodmlv2.2 (folder)
1. xsd_schemas (folder) Schemas for all of the data objects in PRODML v2.x. This
folder contains all top-level objects outlined in Section 2.2,
which map onto the sub-domains of PRODML as shown in
Figure 2-2.
2. ancillary (folder) Contains supporting material (optional).
doc (folder) contains the following
documents:
3. PRODML Technical Usage Guide Provides an overview of PRODML and details about the
(This document.) sub-domains and supporting data objects.
If just getting started with PRODML, begin with this
document.
4. PRODML Technical Reference Guide Generated from the UML model, lists and defines all
elements in the data model (for easy reference).
5. PRODML Product Volume, Network Model & Documentation for previously published versions (v1.x) of
Time Series Usage Guide these PRODML data objects:
• Product Volume
• Product Flow Model (Network)
• Time Series
Some minor updates have been made to this document for
v2.x, and some of the material that is most relevant to v2.x
usage of these data objects is incorporated into the
PRODML Technical Usage Guide.
6. PowerPoint presentations Presentations for the following PRODML sub-domains:
• Simple Product Volume Reporting (SPVR)
• DAS

• DTS
• PVT
• PTA
7. docexample (folder) This folder contains sub-folders for each of the following
sub-domains. Each folder contains data files for extended
worked examples, which are also explained in detail in the
corresponding chapters of this PRODML Technical Usage
Guide.
• DAS
• DTS
• PVT
• SPVR

Energistics CTA (Common Technical Architecture) Resources


These resources are included in a standards download, for use with version 2 and higher of all
Energistics domain standards. Not all standards use all of these technologies. The technologies used
within PRODML are listed in Section 2.3.
For details on all available CTA resources, see energyML_Download_Package_ReadMe_File.pdf.
Many of these Common elements used in PRODML are not covered in detail in this document, so you
should make sure you look over the documentation included in the Common package before
proceeding far with PRODML implementation. A brief introduction to the CTA can be found in Section
2.1, and the PRODML usage of the CTA is outlined in Section 2.3.

Documentation Updates
Energistics is committed to providing quality documentation to help people understand, adopt, and
implement its standards. As uptake of the standards increases, lessons learned, best practices, and
other relevant information will be captured and incorporated into the documentation. Updated versions
of the documentation will be published as they become available.

1.5.4.1 v2.2 Documentation Status


All the PRODML data objects for v2.2 have documentation chapters in this Technical Usage Guide.
The chapters for the following data objects are less comprehensive than those for the newer data
objects, and do not have associated detailed worked examples. However, the documentation should
still be sufficient to guide implementation.
• Chapter 28: (Product Volume). Note: This data object is no longer recommended unless detailed
reporting not available in the SPVR data objects is required (see Section 28.1 for more details).
• Chapter 29: (Product Flow Model).
• Chapter 30: (Time Series).
• Chapter 31: (Production Operation).

1.5.4.1.1 Note on figures of worked examples:


Owing to lack of resources, it has not been possible to fully update the figures in this Technical
Usage Guide. In many figures, the only change in the XML shown is the content of the Data Object
Reference (i.e., the reference from the current data object to another), which differs in v2.2 from
what is shown in several figures.
The worked example files themselves are updated correctly.
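For orientation, a Data Object Reference (DOR) embedded in a data object has broadly the following shape. This is a simplified sketch based on the Energistics common model; the element names are assumptions, so verify them against the Energistics common v2.3 schemas.

```xml
<!-- Assumed, simplified sketch of a DOR from one data object to another;
     check the common v2.3 schemas for the exact element names and order. -->
<FluidSystem>
  <Uuid>9f1a6e4c-2b7d-4c1e-8a3f-0d5e6b7c8d9e</Uuid>
  <Title>Fluid system for the example well</Title>
  <QualifiedType>prodml22.FluidSystem</QualifiedType>
</FluidSystem>
```

The referencing object carries only this pointer; the referenced object is transferred separately and matched by UUID.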

1.5.4.2 Documentation Conventions


Note that version references and paths are often shown as "2.x" because they are not expected to
change for subsequent versions of PRODML. Some figures continue to show "2.0" or “2.1” but should
be understood as "2.x" (or the current version of PRODML); all else in the figure is still applicable.
The following other conventions are used in this document.
Artefact/Concept Description of Convention
Document Hyperlinks: Internal Though no special text-formatting convention is used, all document-
internal section, page and figure number references in this and all
PRODML and Energistics documents are hyperlinks.
Business Rules Some requirements for the PRODML design cannot be enforced in the
schema, so the requirement is specified in this documentation. These
requirements are referred to as business rules and are indicated in text as
"BUSINESS RULE:" Software implementing PRODML should implement
these rules.


2 Technology Overview of PRODML


Each of the Energistics domain standards—RESQML, WITSML and PRODML—leverages components in
the Energistics Common Technical Architecture (Figure 2-1).

Figure 2-1. The Energistics family of standards includes WITSML, RESQML and PRODML, which share
the Energistics Common Technical Architecture.

PRODML comprises a set of XML schemas covering a range of sub-domains of the whole scope of
production management in upstream oil and gas. Additional components of PRODML are located
within the Energistics Common Technical Architecture. PRODML is available as a downloadable zip
file, which unpacks the schemas, together with examples and documentation, into a folder structure
that includes the CTA elements.
This chapter describes:
• An overview of the main CTA components and the purpose of each.
• PRODML schema top-level data objects and the mapping of these onto the sub-domains of
production management as outlined in Chapter 1.
• Usage of the elements of CTA in PRODML v2.x.
• Common expected usage of other Energistics domain standards by PRODML.

2.1 CTA: Main Components and What They Do


NOTE: For more information about standards used in the CTA, see the CTA Overview Guide.
Each Energistics domain standard (RESQML, WITSML, and PRODML) has its own set of schemas,
which include a <domain>common package (e.g., ProdmlCommon) of schemas for consistency
across each ML (Figure 2-1). The underlying technology to define the schemas (XSD files) for the
objects, artefacts, data, and metadata is XML, with HDF5 used for large numeric arrays. Each
instance of a top-level data object must be identified by a universally unique identifier (UUID).
Each domain ML leverages components of the Energistics CTA.
Energistics standards are designed using the Unified Modeling Language (UML), which is also used
to produce the schemas and some documentation.
• Energistics common schemas. The common package is standardized across all Energistics
domain standards; for each of the MLs, the same common package is included in the download.
Like the Energistics domain schemas, these schemas are also XML XSD files. Data object
schemas can be considered in these categories:


− Mandatory (for example, AbstractObject, ObjectReference, objects related to units of


measure (UOM), etc.).
− Optional, available for use if wanted (for example, the Data Assurance and Activity Model
objects).
- Objects defined by Energistics specifications (see next bullet), which may be optional or
mandatory, depending on the specification and domain ML.
• Energistics specifications describe objects and behaviors for handling mandatory and optional
functionality across domains. For example, units of measure, metadata, and object identification
are mandatory. Other standards, such as packaging objects together for exchange, are optional
or ML-specific. Related data objects are implemented in the Energistics common schemas. The
specs describe additional behavior requirements. For the complete list of CTA resources, see
energyML_Download_Package_ReadMe_File.pdf.
− Energistics Transfer Protocol (ETP) is the Energistics spec that serves as the new
application programming interface (API) for all Energistics domain MLs. Initially designed to
replace the WITSML SOAP API, ETP is based on the WebSocket protocol. It delivers real-time
streaming capabilities and is being expanded to provide CRUD (create, read, update and
delete) capabilities.
• Information technology (IT) standards. Energistics standards leverage existing IT standards
for various purposes. For example:
− The Unified Modeling Language (UML) is used to develop the data model and produce the
schemas and some documentation.
− XML is used to define the data object schemas (XSD) and instances of data (XML files).
− UUID (as specified by RFC 4122 from the IETF) is used to uniquely identify an instance of a
data object.
− HDF5 is used when needed as a companion to the XML data object to store large numeric
data sets.
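Putting the XML and UUID points above together, a PRODML top-level object instance might look broadly like the following. This is a simplified sketch: the namespace URI, attribute names, and Citation children are assumptions based on the Energistics common model, so verify them against the downloaded schemas.

```xml
<!-- Assumed sketch of a top-level object instance: the root element carries
     the instance UUID (RFC 4122) and the schema version, plus common
     citation metadata. Details are illustrative, not the exact schemas. -->
<FluidSystem xmlns="http://www.energistics.org/energyml/data/prodmlv2"
             uuid="9f1a6e4c-2b7d-4c1e-8a3f-0d5e6b7c8d9e"
             schemaVersion="2.2">
  <Citation>
    <Title>Example fluid system</Title>
    <Originator>example-user</Originator>
    <Creation>2022-05-16T00:00:00Z</Creation>
    <Format>example-application</Format>
  </Citation>
  <!-- domain-specific content follows -->
</FluidSystem>
```

Because every instance carries a UUID, any other object can refer to it unambiguously regardless of file name or transfer mechanism.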

2.2 Mapping of the PRODML Sub-Domains onto Top-Level Objects


PRODML is defined by a set of schemas. These are found in the xsd_schemas folder (for more on
the folders, see Section 1.5). The schemas are a set of top-level objects (as defined by the CTA),
other than the two schema files highlighted with a red box in Figure 1-4.
The two schema files which are not top-level objects are:
1. Prodml All Objects: this includes all the other objects into one schema file. This is provided
because certain software development environments can efficiently consume this file.
2. Prodml Common: this has various elements that are used by multiple PRODML top-level objects
and therefore are put into one common location. This fulfils a similar role for PRODML as the
schema files in the Energistics common package do for all the Energistics domain standards.
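The aggregating role of the Prodml All Objects schema can be sketched as follows. This is an illustrative assumption about its structure (the actual file may differ in detail); it simply uses the standard XSD include mechanism to pull the individual object schemas into one file.

```xml
<!-- Illustrative sketch of an aggregating schema in the style of
     ProdmlAllObjects.xsd; the real file may differ. xs:include requires the
     included schemas to share the same target namespace. -->
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
           targetNamespace="http://www.energistics.org/energyml/data/prodmlv2"
           elementFormDefault="qualified">
  <xs:include schemaLocation="FluidSystem.xsd"/>
  <xs:include schemaLocation="FluidSample.xsd"/>
  <xs:include schemaLocation="DasAcquisition.xsd"/>
  <!-- ...one include per top-level object schema -->
</xs:schema>
```

A development environment can then consume this single file instead of resolving each object schema individually.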
Most of the mappings of these schemas to the sub-domains of PRODML are evident from the file
names. Figure 2-2 shows the mapping from PRODML sub-domains onto data objects. One additional
factor is that the simple product volume schema file is an abstract top-level object, packaged in one
schema file. There are five non-abstract top-level objects which are defined for this sub-domain.
These schemas are shown with blue text. It can be seen that the more recent sub-domains tend to
use multiple top-level objects (from 2 to 6 each in version 2.x), while the older PRODML sub-domains
commonly have a 1-to-1 mapping to data objects (which tends to make those older schemas more
complex).


Schema file                     PRODML sub-domain
ProdmlAllObjects.xsd            PRODML General
ProdmlCommon.xsd                PRODML General
Report.xsd                      PRODML General
ReportingEntityModel.xsd        SPVR *
SimpleProductVolume.xsd **      SPVR *
  AssetProductionVolumes        SPVR *
  ProductionWellTest            SPVR *
  WellProductionParameters      SPVR *
  TerminalLifting               SPVR *
  Transfer                      SPVR *
FluidAnalysis.xsd               PVT
FluidCharacterization.xsd       PVT
FluidSample.xsd                 PVT
FluidSampleAcquisitionJob.xsd   PVT
FluidSampleContainer.xsd        PVT
FluidSystem.xsd                 PVT
DtsInstalledSystem.xsd          DTS
DtsInstrumentBox.xsd            DTS
DtsMeasurement.xsd              DTS
FiberOpticalPath.xsd            DTS and DAS
DasAcquisition.xsd              DAS
FlowTestJob.xsd                 Pressure Transient Testing
FlowTestActivity.xsd            Pressure Transient Testing
PressureTransientAnalysis.xsd   Pressure Transient Testing
Deconvolution.xsd               Pressure Transient Testing
PreProcess.xsd                  Pressure Transient Testing
ProductVolume.xsd               Product Volume
ProductFlowModel.xsd            Product Flow Model
ProductionOperation.xsd         Production Operation
TimeSeriesData.xsd              Time Series
TimeSeriesStatistic.xsd         Time Series

* SPVR = Simple Product Volume Reporting
** Simple Product Volume is abstract; the five SPVR objects listed beneath it are derived from it.

Figure 2-2. Mapping between PRODML sub-domains and schema files.

The Report top-level object is a "header"-type object, which has been retained in this version of
PRODML for continuity, but it is less likely to be needed in version 2.
Having selected the sub-domain in which to work, you can open the schema files and/or import the
.XMI file containing the UML model into a UML tool for visualization (see Section 1.5.1).
Do this in conjunction with reading the rest of this chapter and the chapters of this
document concerned with the relevant sub-domain. You can also navigate to the appropriate example
files (again, for details, see Section 1.5.1).


2.3 PRODML Use of CTA


As noted previously, PRODML makes extensive use of the CTA.
All the PRODML sub-domains use data objects, which are modeled in UML and defined by XSD
schemas, and which use common data objects and units of measure.
DAS uses the additional CTA elements of:
• HDF5 (for high-volume data).
• EPC (for packaging multiple files together). For more information on EPC, see Energistics Online:
Energistics Packaging Conventions Specification.
Other sub-domains could use the EPC; for example, multiple PVT XML files and other
non-Energistics documents used to report PVT data could be packaged together using EPC. However,
this usage has not been documented or tested.
The Energistics Transfer Protocol (which is not downloaded with PRODML; see Section 1.5.2) could
potentially be used to transfer XML data objects created by use of PRODML. This capability also
remains undocumented and untested.
Wider use of EPC and ETP is a candidate for further PRODML development.

Value Types
In terms of the Energistics common data types, PRODML makes extensive use of a package of data
types called Value Types. These types cover measurements where the measurement
conditions act as a qualifier to the measured value:
• Pressure: whether absolute or relative/gauge pressure has been measured; for relative pressure
the reference/atmospheric pressure must be provided, and for gauge pressure it may optionally
be provided.
• Volume, Density and Flowrate: where the pressure and temperature conditions of the
measurement have a profound impact on the underlying "value" of the measurement. A choice is
available: either supply the pressure and temperature of measurement, or choose from a
list of standards organizations' reference conditions. Note that the enum list of standard
conditions is extensible, allowing local measurement condition standards to be used.
The four types are named "xxxValue", where "xxx" is one of the four measurement types listed
immediately above. The PressureValue type is shown in Figure 2-3, and the other three types in
Figure 2-4.
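As a hedged illustration of how these types can appear in an instance document (element and attribute names follow the classes in Figures 2-3 and 2-4; the containing element names, uom symbols, and values are illustrative assumptions, not taken from the schemas):

```xml
<!-- A gauge pressure; the ReferencePressure is optional (0..1) for gauge pressure. -->
<AcquisitionPressure>
  <GaugePressure uom="psi">250.0</GaugePressure>
  <ReferencePressure uom="psi" referencePressureKind="ambient">14.7</ReferencePressure>
</AcquisitionPressure>

<!-- A volume qualified by a standards organization's reference conditions
     (extensible enum; the value is left as a placeholder here). -->
<MeasuredVolume>
  <Volume uom="ft3">52000</Volume>
  <MeasurementPressureTemperature>
    <ReferenceTempPres>…</ReferenceTempPres>
  </MeasurementPressureTemperature>
</MeasuredVolume>
```

AcquisitionPressure and MeasuredVolume are hypothetical containing-element names used only to give the fragments context.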


Figure 2-3 shows the PressureValue data type as a UML class diagram. PressureValue contains an
AbstractPressureValue, which is specialized into three complex types:
• GaugePressure (element GaugePressure: PressureMeasureExt), with an optional (0..1) ReferencePressure.
• AbsolutePressure (element AbsolutePressure: PressureMeasureExt).
• RelativePressure (element RelativePressure: PressureMeasure), with a mandatory (1) ReferencePressure.
ReferencePressure extends AbstractMeasure and carries the attributes uom (PressureUom) and
referencePressureKind (ReferencePressureKind [0..1], an enumeration with values absolute, ambient,
and legal).

Figure 2-3. PressureValue data type.

Figure 2-4 shows the corresponding types for density, flow rate, and volume: DensityValue (element
Density: MassPerVolumeMeasure), FlowRateValue (element FlowRate: VolumePerTimeMeasure), and
VolumeValue (element Volume: VolumeMeasure). Each has an optional (0..1)
MeasurementPressureTemperature element of type AbstractTemperaturePressure, which is specialized
into:
• ReferenceTemperaturePressure (element ReferenceTempPres: ReferenceConditionExt), for selecting standard reference conditions.
• TemperaturePressure (elements Temperature: ThermodynamicTemperatureMeasure and Pressure: PressureMeasure), for supplying the actual measurement conditions.

Figure 2-4. Density, Flowrate and Volume Value data types.


Extending Enums
In Energistics standards, enumerations can be extended by providing an authority and the new enum
value, as shown here:
Authority:enum
For example: CompanyX:newenum
Enumerations which can be extended have type names ending in "Ext" (for example,
ProductFluidKindExt or ReferenceConditionExt). Other enumerations cannot be extended.
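For instance, an extensible enumeration element might carry either a standard value or an authority-qualified extension (the element shown, ServiceFluidKind, is of the extensible type ServiceFluidKindExt; the extended value itself is a hypothetical example):

```xml
<!-- Standard value taken from the built-in enumeration list: -->
<ServiceFluidKind>methanol</ServiceFluidKind>

<!-- Extended value: the authority prefix identifies who defined the new value. -->
<ServiceFluidKind>CompanyX:corrosion-inhibitor-blend</ServiceFluidKind>
```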

2.4 PRODML Use of Other Energistics Domain Standards


With version 2 of PRODML and WITSML, these domain standards join RESQML in using the
Energistics CTA. This upgrade makes it possible to easily integrate data from the three domain
standards. PRODML sub-domains are expected to be used together with certain other WITSML and
RESQML data objects (although these usages are not mandatory).
The usage pattern is that data that is useful to the PRODML sub-domain can be referenced (using the
data object reference element in Energistics common) from certain PRODML data objects. These are
generally either:
• Well/wellbore/completion/rock-fluid unit feature references, for reference to flow, sample or
measurement sources. These are contextual references and so are to some degree optional. See
Figure 2-5, pink cells (value 1).
• Log/Channel Set/Channel references, so that number arrays can be transferred using these data
objects and referenced from PRODML. Note that for DAS arrays, HDF binary files are used in a
similar manner to these log usages. These are references to objects which transfer essential
data, and hence to use them, the WITSML schemas are required. See Figure 2-5, red cells
(value 2).
Figure 2-5 lists these usages, by referencing as outlined above, from each PRODML sub-domain to
other domain data objects.

Figure 2-5 has one column per PRODML sub-domain (SPVR*, PVT, DTS, DAS, Pressure Transient
Analysis, Product Volume, Product Flow Model, Production Operation, Time Series) and one row per
referenced object; each cell marks the kind of reference made by that sub-domain:

Other ML Element                    References
WITSML: well                        contextual (1), from three sub-domains
WITSML: well completion             contextual (1), from three sub-domains
WITSML: wellbore                    contextual (1), from five sub-domains
WITSML: wellbore completion         contextual (1), from three sub-domains
WITSML: log                         (none marked)
WITSML: Channel Set                 essential data (2), from two sub-domains
WITSML: Channel                     essential data (2), from one sub-domain
RESQML: Rock Fluid Unit Feature     contextual (1), from one sub-domain

* SPVR = Simple Product Volume Reporting
1 = a reference to an object from another ML for contextual information
2 = a reference to an object from another ML for transfer of essential information
Figure 2-5. Mapping between other Energistics "ML" objects and their usage in PRODML

The WITSML and RESQML resources are included in the downloaded zip file and can be found in
their folders as shown in Figure 1-2.


Part II: Simple Product Volume Report


Part II contains Chapters 3 through 7, which explain the set of PRODML data objects for simple
product volume reporting.

Acknowledgement
The detailed input into and leadership of this project by the following companies is gratefully
acknowledged: P2 Energy Solutions, Oxy, Accenture, Energysys, and Halliburton. The contributions
of the following companies in helping set requirements are also highly appreciated: IHS,
WellEZ, Infosys, Anadarko, Baker Hughes, Chevron, QEP, Schlumberger, and Statoil.


3 Introduction to Simple Product Volume Report Objects
This chapter serves as an overview and executive summary of the Simple Product Volume data
objects.

3.1 Overview of Simple Product Volume Reporting, the NAPR Project, and the
earlier PRODML models devised for the Norwegian Continental Shelf
NAPR stands for North America Production Reporting. The objective of the NAPR project was to
analyze and extend (if needed) the PRODML schemas necessary for reporting daily and monthly
production-related data to non-operating joint venture (NOJV) partners and royalty owners in North
America. The result is the Simple Product Volume package of PRODML, first released as part of
PRODML v2.x.
Although the focus of the project has been North American requirements, the hope is that the data
standard can be used worldwide.
For a quick overview or to be able to make a presentation to colleagues, see the slide set: Worked
Example SPVR.pptx which is provided in the folder: energyml\data\prodml\v2.x\doc in the Energistics
downloaded files.
Note that the Simple Product Volume capability is, as the name suggests, a standard which aims to
cover the minimum requirements for volume reporting. It was developed following widespread
feedback that the capabilities within earlier versions of PRODML, whilst comprehensive, were also
complex and therefore hard to understand and to implement.
The earlier version of volume reporting does remain part of PRODML. It comprises principally the
data object Product Volume, and the supporting objects Product Flow Model, and Production
Operation. This reporting standard has been adopted (and extensively developed) for use on the
Norwegian Continental Shelf (NCS). The NCS reporting is based on version 1.x of PRODML. The
data objects concerned have been migrated to Energistics CTA standards but otherwise left
unaltered, for compatibility with previous work. A fuller description can be found in Chapter 28,
Product Volume; Chapter 29, Product Flow Model; Chapter 31, Production Operation.

3.2 The Business Case


Some of the challenges of reporting production data to other relevant entities are:
• Lack of a standard (protocol, format and mechanism).
• Associated requirement for a significant amount of manual effort both in creating data to be
transmitted, and receiving/consuming it.
• NOJV partners are increasingly requesting more detailed data (daily and monthly), resulting in a
significantly increased quantity of data to be transmitted.
• Timelier reporting needed to support decision making (e.g., marketing, participation in projects)
and internal reporting (accruals, etc.).
Therefore the benefits expected are:
• Streamlined approach to communicating production related data.
• Less manual effort and thus reduced cost.
• Greater accuracy (due to less manual intervention/preparation).
• Greater insight into the status of properties operated by others (OBO) supporting more informed
decision making.
• Quicker access to data from joint venture partners.


3.3 Scope of Simple Product Volume Data Objects


The scope of this section pertains specifically to the Simple Product Volume data objects. Data in
scope for the current version includes:
• The standards established and lessons learned by the Norwegian Production Reporting
Guidelines using the Product Volume object first released in PRODML v1.1, and retained
(updated to Energistics Common Technical Architecture style) in PRODML v2.x. Most of the data
transfers supported by this standard have been included in the Simple Product Volume package.
See the note in Section 3.1 concerning the earlier data objects.
• The previous work by the National Data Repository workgroup has been considered and
leveraged, where appropriate, in developing North American standards.
Out of scope are:
• Regulatory reporting in North America, which is complex and varies by state.
• Regulatory reporting in the rest of the world, although it is hoped that in many cases the current
data model will be adequate for this purpose.
• Security infrastructure, such as user identity and authentication, as used for example in the
transfer of data between partners.

3.4 Use Cases: Overview


The Simple Product Volume data objects support these use cases:
1. Receive monthly and/or daily production data directly from the operating partner;
2. Transmit monthly and daily production data directly to a participating joint venture partner;
3. Provide historical production data (various frequencies and date ranges possible) for divested
properties;
4. Obtain historical production data (various frequencies and date ranges possible) for acquired
properties;
5. Transmit monthly and/or daily production data on multiple wells to a central data exchange
(optionally including a list of participating entities by well so that the exchange can limit access to
the production data as appropriate);
6. Obtain monthly and/or daily production data from a central data exchange.
For the detailed use cases, see Chapter 7.

3.5 Data Object Overview


The set of Simple Product Volume Reporting data objects work together to cover the following
capabilities:
• Describe the Reporting Entities:
− List of entities, any arrangement of hierarchies, reference the data for the physical entity
• Report Volumes per Reporting Entity:
− Production, injection, dispositions, deferred production and inventories, transfers to other
entities, terminal lifting
− For any time period; any type of produced or service fluid
• Transfer of event-driven reports and supporting data:
- Well tests, well production parameters


4 Simple Product Volume: Use Cases, Data Types, and Workflow
This chapter provides a brief overview of the business context for using the Simple Product Volume
data objects, the use cases they support, key data types, and a high-level workflow.

4.1 Business Context: Asset Production Volume Flows


In general, oil and gas fields consist of largely discrete “assets”, administered by different operators.
This means that production flows are measured and reported on a per-asset basis. Importantly, the
financial imperative of custody transfer ensures that any inter-asset transfer flows or exports are also
measured, and to a high standard.
An asset in this context is a set of interlinked oil production infrastructure administered by a single
operator. Figure 4-1 shows the generalized asset group flow. The Simple Product Volume data
objects are designed to handle the requirements for reporting the production flows of an asset as
generalized in this figure.

[Diagram: reservoirs are produced through wells and receive injection through injectors; well
production of oil, water, gas, and sand passes through production and/or storage facilities to
dispositions; across the asset boundary, oil, gas, and condensate flow either out of the operator's
scope of control (export to refineries, etc.) or to and from other assets (transfers).]

Figure 4-1. Asset production flows.


4.2 Use Cases: Overview


For details of these use cases, see Chapter 7.
1. Receive monthly and/or daily production data directly from the operating partner. The goal
is to provide others with a working and/or revenue interest in jointly owned properties with the
production data (for a fixed duration) necessary for monitoring, decision-making, forecasting and
reporting, and financial record-keeping associated with operated properties.
2. Transmit monthly and daily production data directly to a participating joint venture partner.
This is the “mirror image” of Use Case 1.
3. Provide historical production data (various frequencies and date ranges possible) for
divested properties. The goal is to reduce cost and effort and support automation of data room
presale activities and generate added value for properties being sold, as well as for those
purchasing properties.
4. Obtain historical production data (various frequencies and date ranges possible) for
acquired properties. The goal is to reduce cost and effort of post-acquisition data loads and
improve completeness and accuracy of loaded data.
5. Transmit monthly and/or daily production data on multiple wells to a central data
exchange. The goal is to ensure that when data is exchanged, the sender can specify the rules
for access to the data, such that the receiver may enforce privacy rules specified by the data
owner. Therefore, the transmission should optionally include a list of participating entities by well
so that the exchange can limit access to the production data as appropriate.
6. Obtain monthly and/or daily production data from a central data exchange. This is the
“mirror image” of Use Case 5. The functionality depends on the data exchange having information
concerning the rights of different users to access specific items of data.

4.3 Key Data Types


The data types required for Simple Product Volume include:
• Well identification information (name, unique well identifier, owner well number)
• Composition of hierarchies for reporting (e.g., groups of wells in a field)
• Total produced volumes of oil, gas, NGL, and water by well
• Volumes sub-divided by other entities such as formation, string, lease, field
• Volumes sub-divided by final disposition (sales, fuel, injection, etc.)
• Reporting of periodic production over different periods (e.g. daily and monthly)
• Temperatures and pressures
• Wellhead measured volumes (where applicable)
• Producing status
• Hours operated/down with downtime reasons and comments
• Well tests
• Parties authorized to access data by well
The data model is described in Chapter 5, and a full worked example showing these data types is
described in Chapter 6 and included in the downloaded PRODML v2.x zip file.

4.4 Typical Workflow


A data exchange using the Simple Product Volume data object is expected to proceed as follows:
1. Establish the "reporting entities" (e.g., the wells, completed intervals, fields, etc., against which
volumes are reported) between sender/reporter and receiver/partner. The sender needs to send
the details of the reporting entities just once (and update them if they change). Subsequent
transfers only need to include the volumes being reported against the previously transferred
entities.
2. On an agreed periodic basis, transfer the volumes associated with these entities.


3. On an event-driven basis, transfer other important ancillary data, e.g., well test data, well
production parameter changes, tanker lifting, or transfers of product in and out of the asset.
4. Once per periodic or event-driven transfer, describe the fluids whose volumes are being reported
(e.g., oil, gas, water and any details of these).
5. If the configuration of the asset changes, transfer the changes/updates. For example, if a new
well is completed, the sender must send to the receiver the new well information and its place in
the reporting entity hierarchy.


5 Simple Product Volume: Data Model


This chapter gives an overview of the Simple Product Volume data objects, how they are used, and
how they are related.
Chapter 6 gives a detailed description of the worked example (that is downloaded with the Simple
Product Volume standard) and also gives more details on the data model and how it works.

5.1 Overview of the Simple Product Volume Data Object


Figure 5-1 shows the main data types and relationships between them that comprise the Simple
Product Volume data objects; this set of objects works together to cover these capabilities:
• Describe the reporting entities:
− List of entities,
− Any arrangement of hierarchies,
- Reference the data for the physical entity
• Report volumes per reporting entity:
- Production, injection, dispositions, deferred production and inventories, transfers to other
entities, and terminal lifting
- For any time period, any type of produced or service fluid
• Transfer supporting data:
− Well tests, well production parameters

Figure 5-1. Overall data model for Simple Product Volume showing reporting entities section (orange
border), periodic volumes section (green border) and event-driven section (blue border).

The Reporting Entity object is important because all the volume and event-driven data transfers use it
to refer to the “thing” whose properties they are reporting. Figure 5-2 shows a UML diagram of the
data model (simplified to the top levels); it shows how the other types of data objects reference back
to the reporting entity for identity. (NOTE: The UML model is used to produce the PRODML schemas
and is provided (as an XMI file) when you download the standard.)


Figure 5-2. Top-level UML model diagram showing central role of reporting entity.

5.2 Describing the Reporting Entities


It is important to be certain that volumes are being reported against the correct component within an
asset; that is, production is reported for the correct well. For this reason, one function of the Simple
Product Volume data objects is to satisfy the requirements of identity of asset components, or
“reporting entities” as they are called here. The reporting entity functionality is sub-divided into:
defining the entities, showing how they are arranged in hierarchies, and linking them to any available
detailed descriptions. The following sections describe this.

Reporting Entity Defined


A reporting entity refers to a physical, organizational or geographic “thing” that production data is
reported against. For example: wells, fields, leases, business units, countries or states are reporting
entities.
At its basic level, the reporting entity data object is simply a “placeholder” object, which all other
Simple Product Volume data objects reference. That is, the object identifies the name or ID of the
entity against which production data is being reported, but not much other information about it.
Optionally, you can provide additional data for a reporting entity, using these methods:
• Define hierarchies to give appropriate context. For example, a hierarchy might be: business unit,
fields within a BU, wells within a field, and wellbores within a well (see Section 5.2.4).
• Reference a physical data object (see Section 5.2.2).
• Reporting entities may be of different kinds, which are enumerated in the model (for the list of
enumerations, see Section 7.1):
− Physical asset, e.g., well
− Geographical, e.g., state
− Organizational, e.g., company


• The behavior of reporting entities according to Kind is not enforced; for example, the schema has
no mechanism to restrict well tests being associated only to reporting entities whose kind is “well”.
• Aliases can be used within a reporting entity to enable alternative identifiers to be used (e.g., a
well’s name in multiple systems, if they differ).

Reporting Entities Can Reference “Full” Data Objects


Optionally, reporting entities can reference a data object containing data about the physical object
concerned (see Figure 5-1 and the link marked “Optional Ref”). Section 5.2 explains how a reporting
entity is essentially a “placeholder” for the volume reporting. It contains very little actual data about the
entity. However, you can provide a link to another data object that contains more data about an entity;
for example, a well reporting entity can link to a well data object in WITSML.
This reference is done using a data object reference, which is a mechanism in the Energistics CTA
that can reference any v2.x data object from any of the Energistics standards (PRODML, WITSML,
and RESQML).
There are two ways to reference a physical data object:
• Use the AssociatedObject to refer to an existing Energistics object, e.g., the Well object.
• Use the AssociatedFacility to reference a PRODML Facility object, which is a placeholder
containing an enumerated kind of facility. This will be extended in the future to deal with more
details of facility objects.
For an example of how a data object reference works, see Section 6.2.2.
For a list of example current Energistics data objects, see the right-hand column in the table of
Reporting Entity Kinds in Section 7.1.
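As a hedged sketch of the data object reference mechanism (the content-type string, title, and UUID below are invented for illustration; the exact element set follows the Energistics common DataObjectReference and should be checked against the common schemas):

```xml
<AssociatedObject>
  <!-- Identifies the referenced WITSML Well by content type, title, and UUID. -->
  <ContentType>application/x-witsml+xml;version=2.0;type=Well</ContentType>
  <Title>Well A-1</Title>
  <Uuid>4a4e9a55-0c1a-4f3e-9d4b-2f6d5a8c9e01</Uuid>
</AssociatedObject>
```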

Add a Reporting Entity for each "place" you want to Report


Correct pattern of usage: create a reporting entity for each place where you want to report a volume
(not to extend the disposition kind list).
For example, if we have 3 flares: high, medium and low pressure (HP, MP, LP), set up 3 reporting
entities: e.g., "Flare HP", "Flare MP", and "Flare LP" and then report Dispositions of kind "flared" at
each of these entities. (Do NOT create extensions to dispositions, e.g., HP flared, MP flared, LP
flared, and assign these to a single reporting entity named "Flare".) The recommended option
maintains interoperability; the non-recommended one creates new kinds of quantity.

Reporting Entities can be Organized into Hierarchies


The key features of the Hierarchy data object are as follows:
• Hierarchies can be defined for reporting entities. To view this conceptually, see Figure 5-1 and as
a UML construction, see Figure 5-2.
• As many hierarchies can be included as are required for different purposes. For examples of the
use of different kinds of hierarchy, see Figure 6-6.
• Hierarchies have a name, but no enumerated kind.
• Hierarchies have a root “reporting node”, which then can have as many child “reporting hierarchy
nodes” in as many levels as required (Figure 5-2).
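A loosely sketched hierarchy instance might nest nodes as follows (the element names here are inferred from the "reporting node" terminology above and are not schema-verified; each node points back at a reporting entity by UUID):

```xml
<ReportingHierarchy>
  <Name>Business unit rollup</Name>
  <!-- Root node, with child nodes nested to any depth. -->
  <ReportingNode>
    <ReportingEntityReference uuid="…"/>  <!-- the business unit -->
    <ChildNode>
      <ReportingEntityReference uuid="…"/>  <!-- a field within the BU -->
      <ChildNode>
        <ReportingEntityReference uuid="…"/>  <!-- a well within the field -->
      </ChildNode>
    </ChildNode>
  </ReportingNode>
</ReportingHierarchy>
```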

5.3 Asset Production Volumes


The Asset Production Volumes object transfers all volume data for all reporting entities. This section
explains the details of the use of this data object. The worked example in Chapter 6 covers all these
usage patterns with an illustration of actual use. Some general points are:
• Each reporting entity is taken in turn and all its volumes are reported. The Reporting Entity
Volumes contains all the volumes for a single reporting entity. See Figure 5-2 (blue box) where
the overall period (time start and time end) is contained in the asset production volumes
“container” and then a repeating pattern of reporting entity volumes follows.
• Reporting entity volumes reports all volumes for a single reporting entity, and contains a reference
back to the reporting entity using its UUID for reference.


• Although named “volumes” in line with industry usage, different quantities may be reported, such
as volume, mass, and energy content.
• Where an actual volume measurement is reported, this will of course be dependent on the
measurement conditions of temperature and pressure. The purpose of SPVR is to transfer
volume and similar quantities for internal, partner and regulator reporting, not to transfer field
operational measurements. Hence the assumption is made that all volumes have been expressed
at the same temperature-pressure conditions for the current transfer. The Standard Conditions
element is mandatory in all the SPVR objects. A choice is available – either to supply the
temperature and pressure for all the volumes which follow, or to choose from a list of standards
organizations’ reference conditions. Note that the enum list of standard conditions is extensible,
allowing for local measurement condition standards to be used. See Figure 2-4, which shows the
Abstract Temperature Pressure class: this is the type for standard conditions in all SPVR objects.
• Use is also made within SPVR of the Pressure Value type, allowing absolute or relative
pressures. See Section 2.3 and Figure 2-3.
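The containment pattern described in the bullets above can be sketched as follows (element names track the object and class names used in this chapter; ordering, namespaces, and housekeeping attributes are omitted, so treat this as an illustrative outline rather than a schema-valid document):

```xml
<AssetProductionVolumes>
  <!-- Overall reporting period, stated once in the container. -->
  <PeriodStart>2022-04-01T00:00:00Z</PeriodStart>
  <PeriodEnd>2022-05-01T00:00:00Z</PeriodEnd>
  <!-- Mandatory Standard Conditions: the single temperature-pressure basis
       assumed for every corrected volume in this transfer. -->
  <StandardConditions>
    <ReferenceTempPres>…</ReferenceTempPres>
  </StandardConditions>
  <!-- Repeating pattern: all volumes for one reporting entity per block,
       referenced back to the reporting entity by its UUID. -->
  <ReportingEntityVolumes>
    <ReportingEntityReference uuid="…"/>
    <!-- production, injection, dispositions, inventories, deferred production... -->
  </ReportingEntityVolumes>
  <ReportingEntityVolumes>
    <ReportingEntityReference uuid="…"/>
  </ReportingEntityVolumes>
</AssetProductionVolumes>
```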

Fluid Quantities
The essence of volume reporting is to report the quantities of fluids produced, sold, injected, etc.
Whenever this is done, the element used is the Abstract Product Quantity (Figure 5-3). The Abstract
Product Quantity takes one of two forms: Product Fluid or Service Fluid. Both of these have a Kind,
which is an enumerated list of common fluid types, e.g. “Oil-gross” or “dry gas” for product fluid, and
e.g., “demulsifier” or “methanol” for service fluid.
Optionally, a reference can be made to a specific fluid type in the Fluid Component Catalog (see
Section 5.3.2), which will be required if the physical properties of the fluid are required to be reported.
Optionally for a Product Fluid, an Overall Composition can be added. This may be used, for example,
to be able to report a quantity of gas (as Product Fluid Kind “dry gas”) with an associated composition
(fraction methane, ethane…).
Note that when a volume (or elsewhere, a density or flowrate) is reported, the data kind is “xxxValue”
(where xxx is volume, etc.). Use of this construct allows for the transfer of measurement condition
pressure and temperature (Figure 5-4). BUSINESS RULE: Unless a pressure and temperature are
specified for a given quantity, it is assumed to have been corrected to the Standard Conditions which
are reported once at the start of the file. This allows items like flow meters to be reported with actual
metered volume, and the corresponding actual temperature and pressure of measurement.
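The business rule can be illustrated with a hedged sketch (the VolumeValue structure follows Figure 5-4; the values and uom symbols are illustrative):

```xml
<!-- No measurement conditions given: the value is assumed corrected to the
     Standard Conditions declared once at the start of the transfer. -->
<Volume>
  <Volume uom="bbl">1250.0</Volume>
</Volume>

<!-- Actual metered volume: the measurement temperature and pressure are
     carried along with the value. -->
<Volume>
  <Volume uom="bbl">1262.4</Volume>
  <MeasurementPressureTemperature>
    <Temperature uom="degF">85</Temperature>
    <Pressure uom="psi">45.0</Pressure>
  </MeasurementPressureTemperature>
</Volume>
```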


Figure 5-3 shows the fluid model as a UML class diagram. AbstractProductQuantity carries the
quantity elements Volume (VolumeValue [0..1]), Mass (MassMeasure [0..1]), and Moles
(AmountOfSubstanceMeasure [0..1]), plus a uid attribute (String64). It is specialized into:
• ProductFluid, with elements ProductFluidKind (ProductFluidKindExt), GrossEnergyContent (EnergyMeasure [0..1]), and NetEnergyContent (EnergyMeasure [0..1]), and attribute productFluidReference (String64 [0..1]).
• ServiceFluid, with element ServiceFluidKind (ServiceFluidKindExt) and attribute serviceFluidReference (String64 [0..1]).
A ProductFluid may optionally (0..1) contain an OverallComposition (from ProdmlCommon), which has
an optional Remark (String2000 [0..1]) and any number (0..*) of OverallComponent entries of type
FluidComponent, each with MassFraction (MassPerMassMeasure [0..1]), MoleFraction and KValue
(both AmountOfSubstancePerAmountOfSubstanceMeasure [0..1]), and the attribute
fluidComponentReference (String64).

Figure 5-3. Fluid Model for PRODML.

Figure 5-4 repeats the value types of Figure 2-4: DensityValue (element Density:
MassPerVolumeMeasure), FlowRateValue (element FlowRate: VolumePerTimeMeasure), and
VolumeValue (element Volume: VolumeMeasure), each with an optional (0..1)
MeasurementPressureTemperature element of type AbstractTemperaturePressure, specialized into
ReferenceTemperaturePressure (element ReferenceTempPres: ReferenceConditionExt) and
TemperaturePressure (elements Temperature: ThermodynamicTemperatureMeasure and Pressure:
PressureMeasure).

Figure 5-4. "Value Types" where measurement value depends on measurement conditions (P and T).

Fluid Component Catalog


Fluid components are specified by selecting them from the Fluid Component Catalog, which lets you
specify fluid by component types or composition. NOTE: The Fluid Component Catalog is transferred
only once per asset production volume data object; all volumes contained within it reference the
appropriate components from the catalog.
Within an asset production volumes data object, each instance of a quantity refers to one of the
members of the Fluid Component Catalog in that same asset production volumes object, using the
UID for reference.
The fluid components in the catalog include:
• stock tank oil
• natural gas
• formation water
• pure fluid component
• plus fluid component
• pseudo fluid component
• sulfur component
The first three are aimed at black-oil descriptions and the next three at compositional descriptions;
however, the two kinds may be mixed: quantities of produced fluid can be described in either
black-oil or compositional terms, or both. The schema shows the attributes of each type of fluid
component.
Note that the Fluid Component Catalog and the Fluid Components are contained in the PRODML
Common package, and are shared with the PVT data objects, so are compatible with lab analysis
reports created using PRODML v2.

Different Volume Types


Asset production volumes support transfer of these types of volumes:
• Production:
− Production
− Injection
• Inventory:
− Opening Inventory
− Closing Inventory
• Dispositions:
− flared
− sold
− used on-site
− fuel
− buyback-fuel
− vented
− disposal
− gas lift
− lost or stolen
− other
• Movement in or out of an asset:
− Terminal Lifting
− Transfer
• Deferred Production
These different types of volumes, as required, all sit within a single reporting entity volumes element
in the asset production volumes object. Each quantity refers to a common Fluid Component Catalog
as described in Section 5.3.1.
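These volume types are related by a simple balance that receiving software often checks (a hypothetical validation; the schema itself does not enforce it): opening inventory plus production and inbound movements, minus dispositions and outbound movements, gives closing inventory.

```python
# Hypothetical reconciliation check across the volume types listed above.
# The PRODML schema does not enforce this balance; it is a typical
# software-side validation. All values are invented for illustration.
opening_inventory = 1000.0   # e.g., stock tank oil, m3
production = 500.0
transfers_in = 80.0          # movement into the asset
dispositions = {             # subset of the disposition kinds above
    "sold": 450.0,
    "flared": 5.0,
    "fuel": 20.0,
}
terminal_lifting_out = 60.0  # movement out of the asset

closing_inventory = (opening_inventory + production + transfers_in
                     - sum(dispositions.values()) - terminal_lifting_out)
print(closing_inventory)  # -> 1045.0
```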


Note that the two kinds of movement in or out of asset can be reported either as elements within the
periodic asset production volumes object, or in an event-driven manner with standalone Terminal
Lifting and Transfer data objects (see Section 5.5).
The worked example in Chapter 6 shows many of these different volume types.

5.4 Production Well Tests and Well Production Parameters


Well tests and well operating parameters can be transferred as part of the Simple Product Volume
capability. Both these types of transfer are designed to be transferred upon events happening or upon
demand, rather than periodically as for asset production volumes. For this reason, they are available
as standalone objects (Figure 5-2, purple box).
In the case of Production Well Tests, the Flow Test Activity object conveys the “technical” data, while the SPVR Production Well Tests object carries the contextual “reporting” data. There is a data object reference from Production Well Tests to Flow Test Activity, and the Flow Test Activity in turn has a data object reference to the Reporting Entity. See Section 6.4.1.
The Flow Test Activity object and the Well Production Parameters also use the data object reference
mechanism (described in Section 5.2.2 and shown in Figure 5-1 and Figure 5-2) to reference a single
Reporting Entity. BUSINESS RULE: The kind of Reporting Entity is expected to be a well, a wellbore,
a well completion or wellbore completion.
To transfer multiple production well tests within one XML transfer, use more than one Flow Test
Activity instance under the top level Production Well Tests. Each Flow Test Activity can refer to a
different well (etc.) so that the option exists to transfer all the period’s well tests in one file.
Well Production Parameters can also be used to transfer the data for multiple wells in one file. In this case, use more than one Production Well Period element under the top-level Well Production Parameters. Each Production Well Period can refer to a different well (etc.).
Both the Production Well Test object and the Well Production Parameters object use the same
reference to fluid components in a fluid component catalog as the Asset Production Volumes object
(described in Section 5.3.1). Each instance of production well test or well production parameters has
its own fluid component catalog so that it can behave as a standalone data object.

5.5 Transfer and Tanker Lifting


Volumes associated with transfers and tanker liftings can be transferred as part of the Simple Product
Volume capability. Again, both these types of transfer are designed to be transferred upon events
happening, rather than periodically as for asset production volumes. For this reason, they are
available as standalone objects (see Figure 5-2, purple box). However, unlike the production well test
object and the well production parameters objects, they can also be transferred embedded within an
asset production volumes data object. The reason for this is an operator may wish to do an end-of-
period (typically end-of-month) report showing all production and all dispositions of production,
including all transfers and tanker liftings. When used within asset production volumes in this way,
transfers and tanker liftings are to be identified as specific types of disposition.
The transfer object and tanker lifting object both use the same data object reference described above
(and shown in Figure 5-2). However, they differ in that they refer to multiple reporting entities.
Transfer: 2 Reporting Entities:
• source facility
• destination facility
Tanker Lifting: 3 Reporting Entities:
• loading terminal
• destination terminal
• tanker
For facility and terminal entities, the reporting entity should be some kind of facility; for oil tankers and tanker trucks, the reporting entity kind is expected to be a tanker. As noted above, this expected behavior of reporting entity kinds is not enforced by the schema, so it needs to be validated by the software in which the data objects are implemented.
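A minimal sketch of such software-side validation, assuming illustrative kind names (the sets below are examples, not the full reporting entity kind enumeration):

```python
# Hypothetical software-side validation of reporting entity kinds, since the
# schema does not enforce the expectations described above. The kind sets
# are illustrative subsets, not the full enumeration.
FACILITY_KINDS = {"lease", "field", "terminal", "platform", "facility"}
TANKER_KINDS = {"oil tanker", "tanker truck"}

def check_transfer(source_kind: str, destination_kind: str) -> bool:
    # Transfer: both reporting entities should be some kind of facility.
    return source_kind in FACILITY_KINDS and destination_kind in FACILITY_KINDS

def check_tanker_lifting(loading_kind: str, destination_kind: str,
                         tanker_kind: str) -> bool:
    # Tanker lifting: two terminals plus a tanker.
    return (loading_kind in FACILITY_KINDS
            and destination_kind in FACILITY_KINDS
            and tanker_kind in TANKER_KINDS)

print(check_transfer("lease", "terminal"))                         # -> True
print(check_tanker_lifting("terminal", "terminal", "oil tanker"))  # -> True
print(check_tanker_lifting("terminal", "terminal", "well"))        # -> False
```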


Both the Transfer object and Tanker Lifting object use the same reference to fluid components in a
fluid component catalog as does the Asset Production Volumes (described in Section 5.3.1). Each
instance of transfer and tanker lifting has its own fluid component catalog so that it can behave as a
standalone data object. However, where they are used within Asset Production Volumes (see above),
then they share the fluid component catalog of the parent asset production volumes.


6 Simple Product Volume: Worked Example


This chapter contains details of the worked example included with the download. The contents of the
example are covered in the same order as the model has been described in Chapter 5.
In addition to the example data files which are in energyml\data\prodml\v2.x\doc\examples\SPVR, the
SPVR example also includes:
• A presentation file found at: energyml\data\prodml\v2.x\doc\Worked Example Simple Product
Volume.pptx
• A spreadsheet file found at: energyml\data\prodml\v2.x\doc\examples\SPVR\Worked Example
Simple Product Volume Spreadsheet.xls

6.1 Reporting Entities: Physical and Commercial Layout


A set of Reporting Entities is provided to support the example of a small field containing 5 wells, but with sufficient complexity to demonstrate the main features of the data transfer model.
Figure 6-1 shows the reporting entities for the example. The physical reporting entities include:
• 5 wells (Well 1, 2, 3, 4, 5, which intersect 2 reservoirs that each have 3 contact intervals):
− Reservoir A
− Contact Interval 1-A, 2-A and 3-A
− Reservoir B
− Contact Interval 3-B, 4-B and 5-B
• Note that Well 3 intersects both reservoirs, at Contact Intervals 3-A and 3-B.
Commercially, these wells sit in Lease X. A group called ABC Interest (perhaps landowners) has an interest in Wells 1-4, and the XYZ Company (e.g., a non-operating partner) has an interest in Wells 3-5. So ABC Interest and the XYZ Company have commercial interests and are therefore candidates to receive reports about production from these assets/reporting entities.
The XML for the example is explained below, but you can begin to see the various physical and
commercial reporting entities that might arise from this simple example.

Figure 6-1. Reporting entities: physical and commercial layout.

The asset description is completed by Figure 6-2, which shows the reporting entities that are outside the asset’s direct control and play a part in transfers and terminal liftings. These are a separate lease, which transfers production to Lease X, and an oil tanker, Barge 99, which, after a tanker lifting, transports production to Terminal Z.


Figure 6-2. Reporting entities: other related facilities.

6.2 Reporting Entities: Organization in the Data Transfer


With the set of facilities visualized as shown in Section 6.1, you need to create a set of reporting entity
data objects to define them.

XML Objects
The set is listed in Figure 6-3. The left side of the figure lists the entities by kind and name, and the
right side shows a view of the folder containing one XML reporting entity data object per entity. In the
worked example, these files can be found in the sub-folder “ReportingEntities”.
Note that this set of data is expected to be transferred only at the start of the reporting relationship. Any new entities (for example, a new well) require that a new XML data object be transferred when needed.

Figure 6-3. Data objects in the test data set: reporting entities.

Figure 6-4 shows the details of one reporting entity object. Important to note are:
• The UUID is used in the other objects (hierarchies, asset production volumes, production well
test, well production parameters, transfer and terminal lifting) to identify the reporting entity(ies)
concerned (blue box).
• Title is the element in the Energistics Common Citation class to represent the name (red oval).


• The kind is the enum of reporting entity kinds (green oval).

Figure 6-4. Details of the reporting entity object.
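The three highlighted items (UUID, Citation Title, and kind) can be sketched as a minimal reporting entity construction; the element names below are illustrative paraphrases of the prose, not guaranteed to match the schema tags exactly:

```python
import uuid
import xml.etree.ElementTree as ET

# Minimal sketch of a reporting entity: a UUID for cross-object reference,
# a Citation Title carrying the name, and a kind from the reporting entity
# kind enumeration. Element names are illustrative, not exact schema tags.
entity_uuid = str(uuid.uuid4())

entity = ET.Element("ReportingEntity", uuid=entity_uuid)
citation = ET.SubElement(entity, "Citation")
ET.SubElement(citation, "Title").text = "Well 1"
ET.SubElement(entity, "Kind").text = "well"

# Other objects (hierarchies, asset production volumes, well tests, ...)
# would refer to this entity by entity_uuid.
print(entity.find("Citation/Title").text, entity.find("Kind").text)
```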

Reference to a WITSML Well Data Object


Optionally, a reporting entity can reference a data object containing full data. An example of this is shown in Figure 6-5. The XML on the left shows a reporting entity object, in this case a well. In this example, more information about the well is desired, so a link to a WITSML Well object is specified. The right side of the figure shows the referenced WITSML Well object.
The expectation that a reporting entity of kind “well” links to a Well object is not enforced by the schema (red circle).
As noted above, there are 2 ways to reference a physical data object (see Section 5.2.2): AssociatedObject, to refer to an existing Energistics object, as shown in Figure 6-5; or AssociatedFacility, to reference a PRODML Facility object, which is a placeholder containing an enumerated facility kind.
The link is made using the Energistics Common data object reference class (from the Energistics
CTA), which is named Target Facility. The key data element is the UUID, which is the same as the
well object’s own UUID (blue box).
In the worked example, these “full data” object files can be found in the sub-folders “Wells” and
“wellboreCompletions”. Examples are provided for all the 5 wells and 6 wellbore completions listed in
Figure 6-4.
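The essential rule in the blue box (the Target Facility reference carries the same UUID as the well object's own UUID) can be checked mechanically; a sketch using illustrative structures:

```python
import uuid

# Hypothetical consistency check: the Target Facility data object reference
# must carry the same UUID as the referenced WITSML well object's own UUID.
# The dictionary structures here are illustrative, not the schema layout.
well_uuid = str(uuid.uuid4())
witsml_well = {"uuid": well_uuid, "title": "Well 1"}

reporting_entity = {
    "kind": "well",  # expected (but not schema-enforced) for a Well target
    "TargetFacility": {"uuid": witsml_well["uuid"]},
}

def reference_is_consistent(entity: dict, target: dict) -> bool:
    """True when the entity's reference points at the target's own UUID."""
    return entity["TargetFacility"]["uuid"] == target["uuid"]

print(reference_is_consistent(reporting_entity, witsml_well))  # -> True
```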


Figure 6-5. Optionally, reporting entities can reference a data object containing full data, using
Associated Object element.

Referencing a Facility Object


The Associated Facility element can be used to reference a Facility object. It works exactly the same as Associated Object and differs only in being named “Facility”, reflecting the fact that the Facility object is not a “full” data object but just a placeholder containing an enum of facility kinds.
For an example of how Associated Facility is used, see the example file ReportingEntity-
TerminalZ.xml (a Reporting Entity object). The example Facility is file Facility-Terminal Z.xml.

Defining the Hierarchies


The worked example contains 3 example hierarchies. As noted above, any number can be created to
show the context of the reporting entities. Figure 6-6 shows the hierarchies included. The left side of
the figure lists the hierarchy names and underneath, the hierarchy of nodes. The right side shows
these as 3 XML data objects of type reporting hierarchy. These can be found in the sub-folder
“ReportingHierarchies”. The 3 hierarchies demonstrate:
• Typical order for volume reporting: lease (field)/well/contact interval (layer)
• Order for reservoir offtake reporting: lease (field)/reservoir/contact interval (layer)
• Order for commercial relationships, e.g. for defining access to data: company (commercial
entity)/well
Again, you only need to update the hierarchy object when changes are made (e.g., a new well comes
on line, etc.).
Note that the third hierarchy supports the requirements of Use Cases 5 and 6 (see Section 4.2), in which reporting is done via a third-party “hub”. This hierarchy makes it possible to list all the commercial entities that are associated with the asset being reported and require access to data pertaining to particular reporting entities, e.g., the different well interests shown in Figure 6-1.


Figure 6-6. Worked example reporting hierarchies.

The structure of a hierarchy data object is shown in Figure 6-7. The left side of the figure shows one
of the example hierarchies. The right side shows some of the XML hierarchy data object. Note:
• The root node of a hierarchy is the reporting node (red circle).
• Child nodes nest under this root and each other (blue circle).
• The use of reporting entity as the data object reference back to the reporting entity object (for well
01 in this example) using UUID (green box).

Figure 6-7. Hierarchies showing reference to reporting entities and the root node.
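The nesting described above can be sketched as a small recursive structure, with each node holding a UUID reference back to its reporting entity (names and UUID strings below are illustrative placeholders):

```python
# Hypothetical hierarchy sketch: a root reporting node with nested child
# nodes, each node referencing a reporting entity object by UUID.
entities = {  # UUID -> name, standing in for the reporting entity objects
    "u-lease-x": "Lease X",
    "u-well-01": "Well 1",
    "u-ci-1a": "Contact Interval 1-A",
}

hierarchy = {
    "ReportingNode": {  # the root node of the hierarchy
        "ReportingEntity": "u-lease-x",
        "ChildNodes": [
            {"ReportingEntity": "u-well-01",
             "ChildNodes": [{"ReportingEntity": "u-ci-1a", "ChildNodes": []}]},
        ],
    }
}

def walk(node, depth=0, out=None):
    """Resolve each node's entity reference and record an indented listing."""
    out = [] if out is None else out
    out.append("  " * depth + entities[node["ReportingEntity"]])
    for child in node["ChildNodes"]:
        walk(child, depth + 1, out)
    return out

print("\n".join(walk(hierarchy["ReportingNode"])))
```

Running the walk prints the lease/well/contact-interval nesting of the first example hierarchy.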

6.3 Periodic Volume Reporting using Asset Production Volumes

Worked Example Content


With the assets defined, now you can transfer volume data on a periodic basis. There are many
combinations possible. Figure 6-8 shows the worked example content illustrating:
• The reporting entities used (left hand column, blue box)
• The types of volume used (column headings across the top, orange box)
• The instances of volume: entries in the matrix, where the word shows the qualifier for the volume (see below for details; purple box).
The list of “Examples Shown” below the matrix shows how the figures and explanations below work through the different usages. The colors correspond to the entries in the matrix (row headings, column headings, or cell entries). The explanation below has sub-headings that refer back to the numbers in the “Examples Shown” list in Figure 6-8.


Figure 6-8. Content of asset production volumes example.

The worked example contains one asset production volumes XML data object, found in the worked
example root folder (Figure 6-9).

Figure 6-9. Periodic transfer with one asset production volumes XML data object.

So far as possible, the worked example has realistic numbers for the volumes, which are shown in Figure 6-10 and can be found in the “Numbers” worksheet of the spreadsheet “Worked Example NAPR.xlsx”. Figure 6-11 shows how the numbers in the spreadsheet are reconciled. Figure 6-12 shows the timeline of the worked example. The events on this timeline are represented in the various data objects transferred.

Figure 6-10. Volumes reported in the worked example. (For a copy of this spreadsheet, see the file named
Worked Example Simple Product Volume Spreadsheet.xls included in the example download.)

Figure 6-11. Reconciliation of quantities.

Figure 6-12. Worked example timeline.

Worked Example Volumes Walkthrough


The worked example shows the various kinds of volumes and is explained by the content and order
shown in Figure 6-8.
Note that the volumes and flow rates in this report are assumed to be at the standard conditions, which are provided once per asset production volumes transfer.
#1 Multiple Entities Per Report
The asset production volumes object contains a repeating reporting entity volumes element, which
repeats for each reporting entity. Figure 6-13 shows the asset production volumes XML data object,
with a snippet showing some of the reporting entity volumes elements (in collapsed state, green box).
The reporting entity element references the reporting entity (here, for Contact Interval 3B, red box).
The standard conditions are transferred only once (and are mandatory) and apply to all volumes in
this transfer.


Figure 6-13. Asset production volumes has repeating reporting entity volumes per reporting entity.

#2 Different Volume Kinds Per Lease

If you open any reporting entity volumes element, you can see the different volume types included
(per the matrix in Figure 6-8). Figure 6-14 shows the volumes for the Lease X parent asset.
• Shown (details collapsed):
− Disposition: Terminal Lifting
− Disposition: Transfer
− Disposition: Product Sale
− Inventory: Opening
− Inventory: Closing
• Shown (details expanded):
− Production
• Not shown but available:
− Injection
− Deferred Production


Figure 6-14. Kinds of quantity contained within each reporting entity volumes.

#3 Use of Production Fluid Catalog and Quantity Method


Where the Reporting Entity Volumes are expanded as in Figure 6-14, you can see the reference to
the corresponding fluid component in the fluid component catalog. Remember, each transfer requires
only one fluid component catalog. It lists and characterizes all components needed. These could be
any of the following, as examples:
• One each of oil, gas, water, in a simple black oil system.
• Multiple oils, etc., with different qualities such as gravities.
• Compositional with pure, pseudo and plus fractions.
Figure 6-15 shows this in more detail, with the fluid component catalog for the asset product volumes
example (left side of the figure, red box). The figure also shows three black oil type fluid components
and below, some pure fluid components and plus fluid components.
The right side of the figure shows the fluid component reference within (in this case) a production quantity element, with water used as the example reference (blue box). Note that because this referencing is within one XML data object, a UID rather than a UUID is used (the UID only needs to be unique within the data object). For example, in Figure 6-15, you can see that for the compositional fluid components, UIDs such as “C2” have been used, which are only unique within this transfer.


Figure 6-15. Fluid components transferred once per asset production volumes object and then
referenced.

Figure 6-16 shows more details of the Production data. Every Production element has a Quantity
Method, which is an enumeration listing the ways in which that quantity was determined (red box).
Every Production Quantity element has:
• A mandatory Product Fluid Kind, which is an enumeration of the product that the production
quantity represents (green box).
• An optional Product Fluid Reference, which is the reference to an entry in the Fluid Catalog,
which gives the physical properties of the component (blue box).

Figure 6-16. Quantity methods and product fluid kind can be included in the volumes.
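These rules lend themselves to a small software-side check (a hypothetical sketch; the fluid kind set below is an illustrative subset, not the full enumeration):

```python
# Hypothetical validation of a production quantity, reflecting the rules
# above: Product Fluid Kind is mandatory, Product Fluid Reference (into the
# fluid component catalog) is optional but must resolve when present.
PRODUCT_FLUID_KINDS = {"oil", "gas", "water", "condensate"}  # illustrative subset

def validate_quantity(qty: dict, catalog_uids: set) -> list:
    errors = []
    kind = qty.get("ProductFluidKind")
    if kind not in PRODUCT_FLUID_KINDS:
        errors.append("ProductFluidKind is mandatory and must be a known kind")
    ref = qty.get("ProductFluidReference")  # optional
    if ref is not None and ref not in catalog_uids:
        errors.append("ProductFluidReference must point at a catalog member")
    return errors

catalog_uids = {"oil1", "w1", "C2"}
print(validate_quantity({"ProductFluidKind": "water",
                         "ProductFluidReference": "w1"}, catalog_uids))  # -> []
print(validate_quantity({"ProductFluidReference": "bogus"}, catalog_uids))
```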

#4 Service Fluids
In addition to product fluids, service fluids can be reported. Figure 6-17 shows an example. Like
Product Fluids, Service Fluids are reported by a mandatory Service Fluid Kind enumeration (purple
box), and have an optional Service Fluid Reference to the Fluid Component Catalog in the (unlikely)
event that the composition of the service fluid needs to be specified.


Figure 6-17. Service fluids can be included in the volumes.

#5 Compositional Fluid Reporting


Figure 6-18 shows how compositional reporting of fluid is done. In this example, the fluid component
catalog (left side of figure) has in addition to the black oil components, compositional components C1,
C2 and C3 (green box) and a C10+ plus component (red box). Pseudo and plus components can
contain additional data about them in the catalog (e.g., molecular weight).
The right side shows the use of the overall composition element, which is optional for any volume. It
shows the references to the compositional components and their fractions—mass in this example
(blue box). Note that the black oil description is also used, showing the volume of type gas (purple
arrows).

Figure 6-18. Fluids can be reported compositionally.
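A typical software-side sanity check on such an overall composition (not a schema rule) is that the referenced components' fractions sum to one. The component UIDs below mirror the example's “C1”, “C2”, “C3”, and “C10+”; the fraction values are invented for illustration:

```python
# Hypothetical check that the mass fractions of an overall composition sum
# to 1 within tolerance. Component UIDs mirror the worked example; the
# fraction values themselves are invented for illustration.
overall_composition = {
    "C1": 0.62,
    "C2": 0.11,
    "C3": 0.07,
    "C10+": 0.20,
}

total = sum(overall_composition.values())
assert abs(total - 1.0) < 1e-9, "mass fractions should sum to 1"
print(round(total, 9))  # -> 1.0
```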

#6 Allocation to Layer Level


One requirement is the ability to allocate volumes to reservoir layers, where required, in commingled systems. This is done in the worked example for Well 03, which is completed in two reservoirs, the connections being represented by two wellbore completions. See Figure 6-1.
The pattern of reporting volumes does not change. You only need to include the required level of detail in the reporting entities and hierarchies (as shown), and then reference these as shown in Figure 6-19. Note that, by way of illustration, the quantity method for these two entities is “estimated” (being downhole, the flows cannot be measured directly unless downhole metering is installed).


Figure 6-19. Allocation to individual layers.

#7 Split Period Due to Choke Change


A further requirement is to be able to report allocated volumes across sub-periods within the overall reporting period. For example, you may need to report a well that was shut in for part of the period, with its flow allocated entirely to the flowing sub-period. The worked example shows an example of a choke change part way through the period. To see how this is done, see Figure 6-20.
Multiple sub-periods can be defined using the optional start time and duration elements within multiple
instances of Reporting Entity Volumes. In this case, there are two Reporting Entity Volumes for the
entity concerned (well 01), one for each sub-period. The figure shows the start date, duration, and
volumes for this entity for the first period (red boxes) and second period (blue boxes).
To see how the changing choke settings (in this example) are transferred for these same periods, see
Section 6.4.2.

Figure 6-20. Splitting quantities allocated across a period.
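The sub-period mechanism can be sketched as follows (all values are hypothetical): each Reporting Entity Volumes instance carries a start time and a duration, the sub-periods tile the reporting period, and the period total is simply their sum:

```python
from datetime import datetime, timedelta

# Hypothetical sub-period split for one reporting entity (Well 1): two
# Reporting Entity Volumes instances, each with a start time and duration,
# whose volumes together cover the full reporting period.
sub_periods = [
    {"start": datetime(2022, 5, 1), "duration": timedelta(days=10), "oil_m3": 300.0},
    {"start": datetime(2022, 5, 11), "duration": timedelta(days=21), "oil_m3": 520.0},
]

# The sub-periods should tile the reporting period with no gap...
assert sub_periods[0]["start"] + sub_periods[0]["duration"] == sub_periods[1]["start"]

# ...and the allocated volume for the whole period is simply their sum.
period_total = sum(p["oil_m3"] for p in sub_periods)
print(period_total)  # -> 820.0
```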


#8 Deferred Production
Deferred production can be reported using the specific element for this, the Deferred Production Event. Figure 6-21 shows the worked example, in which the following elements can be seen:
• A Deferred Production Event sits under a Reporting Entity Volumes element, which associates it with a specific Reporting Entity.
• Each event has a start time, an end time, and a duration. The duration is mandatory and the times are optional, because you may not always know exactly when a failure occurred.

Figure 6-21. Deferred production.

There is a code for downtime reason. Because the codes are company specific, they are not part of the standard. Figure 6-22 shows the UML for the downtime reason code. The codes can be arranged in any desired hierarchy; in the example, two levels of code are shown. Use the Authority element to record which company’s (or standard’s) codes are being used.
Deferred production can be assigned as planned or unplanned.
With the Deferred Production Volume, an Estimation Method enumeration can be included, which lists the various ways in which deferred volumes can be calculated.

Figure 6-22. Downtime reason can be reported using a hierarchy of codes.
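The rules above (mandatory duration, optional start and end times, plus a company-specific reason code and authority) can be sketched as a small validation; all field names and code values here are illustrative, not schema tags:

```python
from datetime import datetime, timedelta

# Hypothetical check of a Deferred Production Event: duration is mandatory,
# start and end times are optional, and when both times are present they
# should agree with the duration. Field names are illustrative.
def validate_event(event: dict) -> bool:
    if "duration" not in event:
        return False  # duration is mandatory
    start, end = event.get("start"), event.get("end")
    if start is not None and end is not None:
        return end - start == event["duration"]
    return True  # times are optional (failure time may be unknown)

event = {
    "reason_code": "PUMP-01/SEAL",   # company-specific hierarchy of codes
    "authority": "ExampleCo",        # who defines the codes (illustrative)
    "planned": False,
    "duration": timedelta(hours=36),
    "start": datetime(2022, 5, 14, 6, 0),
    "end": datetime(2022, 5, 15, 18, 0),
}
print(validate_event(event))                             # -> True
print(validate_event({"start": datetime(2022, 5, 14)}))  # -> False
```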

#9 Transfer or Terminal Lifting


A transfer or terminal lifting can happen at any time during the period being reported. There is a
choice of using a standalone data object transfer each time this happens, or of incorporating the data
within the periodic report (using asset production volumes).
• Section 6.4.3 describes the transfer as a standalone.
• Section 6.4.4 describes the terminal lifting as a standalone.


• Section 6.4.5 describes how these can be incorporated within the asset production volumes data
object.

6.4 Event-driven Production Reporting


There are four kinds of event which are expected to give rise to a data transfer that is asynchronous
to the periodic volume reporting described in Section 6.3. The worked example has instances of all
four. They are:
• production well tests
• well production parameters
• transfer
• terminal lifting
These are described in Sections 5.4 and 5.5. Their occurrence in the worked example month is
shown in Figure 6-12. Figure 6-23 shows the example files for these transfers.

Figure 6-23. Event-driven transfers showing example XML data objects in worked example folder.

Production Well Tests


Figure 6-24 and Figure 6-25 show the worked example production well test. Note these key points:
• The data object Production Well Tests contains the same contextual data as all the other
reporting data objects (red boxes).
− Fluid Component Catalog and the references to fluid components work in the same way as in other objects, such as asset production volumes, as shown in Section 6.3.2 and Figure 6-15 (purple box). Note: the Fluid Component Catalog is shown here in the Production Well Tests data object; it could instead be put in each Flow Test Activity if fluid properties (e.g., gravity) need to be reported separately for each test.
− The well test measurement data is in the Flow Test Activity data object. A Data Object Reference is used for each well test being transferred (just one in this example, but there can be many) (grey box).
− Note that Flow Test Activity has multiple flow test types (see Figure 14-1); the types expected to be used for this purpose are Production Flow Test and Injection Flow Test.
• Navigating to the Flow Test Activity data object, of which there can be any number:
− Well test method and validation can be included (orange box).
− A range of operating parameters for the test period can be included (blue box).
− Well test reports flow rates rather than quantities (green box).
− The reporting entity data object reference is expected to be to a well type of reporting entity, although this match is not enforced in the schema (black box).
• Note: the Flow Test Activity is abstract and can support various types of flow test (see Chapters
13 to 17). In the case of it being used in conjunction with Production Flow Tests as here, the Type
is expected to be either Production Flow Test or Injection Flow Test since these Types carry the
required validation attributes.


Figure 6-24. Production well tests: showing the fluid components used throughout the well test(s) transfer, and the reference to the Flow Test Activity data object(s) containing the well test(s) data.

Figure 6-25. Flow Test Activity for one well test (there can be multiple of these per transfer of Production Well Tests).

Well Production Parameters


Figure 6-26 shows the worked example well production parameters. Note these key points:


• Fluid component catalog and the references to fluid components work in the same way as other
objects, such as asset production volumes as shown in Section 6.3.2 and Figure 6-15.
• A range of operating parameters can be included (red box).
• The transfer can optionally be split into a number of Producing Well Periods using multiple
production well period elements. The example shows the choke change which resulted in the
volume reporting for this well, as shown in Section 6.3.2 and Figure 6-20 (two blue boxes).
• Optionally well production parameters can report flow rates (purple box).
• For Well Production Parameters, as for Production Well Tests, there is a BUSINESS RULE: the Reporting Entity data object referenced is expected to be a well (etc.) type of reporting entity, but this is not enforced in the schema (black box).
• Each Producing Well Period can refer to a different well and this way, a collection of well
performance parameters can be transferred in one file (similar to Production Well Tests).

Figure 6-26. Well production parameters event-driven or reported across any number of discrete periods.

Transfer
The transfer example in the worked example is shown in Figure 6-27. This is a standalone data
object with the following highlights:
• Fluid component catalog, the references to fluid components and the quantities work in the same
way as other objects, such as asset production volumes as shown in Section 6.3.2 and Figure
6-15 (red arrows).
• Transfer has a source facility and a destination facility. No tanker is involved; the transfer happens
via pipeline (green arrows). These are data object references to the reporting entities for these
objects.
• As an alternative to using a standalone transfer data object, transfers can be embedded in the
periodic asset production volumes as a special type of disposition; see Section 6.4.5 and Figure
6-29.


Figure 6-27. Transfer: standalone data object in the event-driven mode.

Terminal Lifting
The terminal lifting example in the worked example is shown in Figure 6-28. This is a standalone data object with the following highlights:
• Fluid component catalog, the references to fluid components and the quantities work in the same
way as other objects, such as asset production volumes as shown in Section 6.3.2 and Figure
6-15 (red arrows).


• Terminal Lifting has a loading terminal and a destination terminal. A tanker is involved in the lifting
from the former to the latter (green arrows). These are data object references to the reporting
entities for these objects.
• A certificate number can be added for the reference to the document defining the lifting onto the
tanker.
• Note that reporting entity kinds are available for “oil tanker” and “tanker truck”, for ship and land
(truck) transport.
• As an alternative to using a standalone terminal lifting data object, terminal liftings can be embedded in the periodic asset production volumes as a special type of disposition; see Section 6.4.5 and Figure 6-29.

Figure 6-28. Terminal lifting: standalone data object in event-driven mode.

Transfer and Terminal Lifting as Dispositions within Periodic Asset Production Volumes Reporting
Transfer and terminal lifting data can be included as dispositions within the periodic asset production
volumes reporting. This is in addition to their being able to be reported in an event-driven manner as
standalone data objects for the reasons explained in Section 5.5. The standalone examples are
shown in Sections 6.4.3 and 6.4.4. The embedded example of the same data is shown in Figure 6-29.
Note these key points about this way of reporting transfer and terminal lifting:
• Transfer and terminal lifting are available as specific types of disposition (red boxes). Note: all disposition types other than these two are typed “product disposition”, with an enumeration to define the kind of disposition. See the example in Figure 6-14.
• The same elements that can be included in the standalone version can also be included in this
embedded version. The example shows the elements tanker and transfer direction for illustration
(green boxes).
• The UUID of the standalone data object is not included in the embedded version (because this
data object has the UUID of the parent asset production volumes). The example here shows the
remark element used to refer to the event-driven transfer by UUID (purple arrow).


Figure 6-29. Transfer and terminal lifting optionally included in the asset production volumes period report. The attributes of the standalone data objects can be included in the Disposition (red boxes) – shown: “Tanker” and “Transfer Direction” (green boxes).

7 Appendix for Simple Product Volume

7.1 Reporting Entity Kinds


The behavior of reporting entities according to kind is not enforced; for example, there is nothing in
the schema to restrict well tests to being associated only to reporting entities whose kind is “well”.
The Category column in this table is not a schema feature; it is provided here for information and
guidance only. Three categories are used for the reporting entity kinds: Asset, Geographic, and
Organizational.

Kind Description Category

platform A single platform. Asset


tank A single tank. Asset
terminal A physical object that is an industrial facility for the storage of oil Asset
and/or petrochemical products and from which these products are
usually transported to end users or further storage facilities.

well A single well, possibly with many wellbores (side tracks). Optionally, it Asset
can reference a WITSML well object.

wellbore A single wellbore (side track) within a well. Optionally, it can reference Asset
a WITSML wellbore object.

Contact Interval Represents the details of a single physical connection between well
and reservoir, e.g., the perforation details, depth, and reservoir connected.
Meaning: this is the physical nature of a connection from reservoir to wellbore.
Optionally, it can reference a WITSML ContactInterval class within the
wellboreCompletion object. Asset

Wellbore Completion Each wellbore completion represents a flowing connection between
wellbore and reservoir. It contains contact intervals, which reference the physical
aspects of these connections detailed in downholeComponents. Optionally, it can
reference a WITSML wellboreCompletion object. Asset

Well Completion The well completion data object represents a “flow” or “stream” from
the well (e.g., from a wellhead port) that is associated with a set of wellbore
completions. When there is more than one such wellbore completion, the flows from
them commingle in the well (the wellbore completions may be located in multiple
wellbores). The well completion represents this commingled flow. Optionally, it can
reference a WITSML wellCompletion object. Asset

flow meter A single flow meter. Asset


pipeline A fluid conductor that consists of pipe, possibly also including pumps, Asset
valves, and control devices, intended for conveying liquids, gases, or
finely divided solids.
gas plant A facility that processes natural gas to achieve the recovery of natural Asset
gas liquids and/or removal of contaminants.

facility A generic label for a facility that is not described by the other physical Asset
reporting entity kinds.

production processing facility A single production processing facility. Asset
FPSO Floating production, storage and offloading facility. Asset

oil tanker An oil tanker, a vessel that could be a barge, or a sea-going ship. Asset
tanker truck A truck which carries oil. Asset
country A single country. Geog
county A single county. Geog
field A single field. Geog
formation A bed or deposit composed throughout of substantially the same kind Geog
of rock.
rock-fluid unit feature A fluid phase plus one or more stratigraphic units. A unit
may correspond to a pair of horizons that are not adjacent stratigraphically, e.g., a
coarse zonation, and is often used to define the reservoir. Available to match
reported production to the rock-fluid feature in RESQML. Optionally, it can reference
a RESQML rock-fluid unit feature object. Geog
state A single state or province. Geog
field - part An area or zone that forms part of a field. Geog
reservoir A single reservoir. Geog
company A company name that is the name of the operator company. Org
lease A single lease. Org
license A regulatory agreement that gives the licensees exclusive rights to Org
investigate, explore, and recover petroleum deposits within the
geographical area and time period stated in the agreement.

commercial entity An organizational construct through which a group of organizations
or facilities are aggregated as if it were a single organization. Org
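Because the schema does not enforce category behavior, an application that wants the Asset/Geographic/Organizational guidance from the table above has to carry it as its own lookup. A minimal hypothetical sketch (with only a subset of the kinds from the table):

```python
# Hypothetical lookup mirroring the guidance table above; the PRODML
# schema itself does not enforce these categories. Abbreviated subset.
REPORTING_ENTITY_CATEGORY = {
    # Asset (physical) kinds
    "platform": "Asset", "tank": "Asset", "terminal": "Asset",
    "well": "Asset", "wellbore": "Asset", "flow meter": "Asset",
    "pipeline": "Asset", "gas plant": "Asset", "facility": "Asset",
    # Geographic kinds
    "country": "Geographic", "county": "Geographic", "field": "Geographic",
    "formation": "Geographic", "state": "Geographic",
    "reservoir": "Geographic",
    # Organizational kinds
    "company": "Organizational", "lease": "Organizational",
    "license": "Organizational", "commercial entity": "Organizational",
}

def category_of(kind):
    """Return the guidance category for a reporting entity kind; kinds
    missing from this abbreviated table fall back to the generic
    "facility" treatment, i.e. Asset."""
    return REPORTING_ENTITY_CATEGORY.get(kind, "Asset")
```

Any validation built on such a table (e.g., only associating well tests with "well" entities) is an application-level convention, not something the schema will reject.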

7.2 Use Cases


This appendix contains the detailed use cases that are listed in Section 4.2.

Use Case 1 & 2: Direct Exchange of Production Data Between Partners


Use Case 1: Receive from a partner
Use Case 2: Transmit to a partner

Use Case Name Direct Exchange of Production Data Between Partners

Version 1.0

Author Terry Kite (Merrick)

Reviewer(s) Bill Logsdon/Ashraf Wardeh (Oxy) and William Gilmore (Accenture)

Goal Provide production data for a fixed duration necessary for monitoring, decision-making,
forecasting and reporting, and financial record-keeping associated with operated properties
to others with working and/or revenue interest in those properties.

Business Requirement
• Discharge contractual obligations
• Assess ongoing participation in properties operated by others (OBO)
• Evaluate current operations of OBO properties
• Comply with financial and reporting obligations related to OBO properties
• Forecast future potential of OBO properties

Business Value
• Better operational insight supporting improved decision making and/or requests for operational changes
• Improved estimate of future production potential (reserve estimates/reporting; marketing deliverability, etc.)
• Timelier data for booking related financial transactions, governmental/financial reporting, etc.
• Reduced cost and effort of both providing and consuming data through a consistent standard (currently each company tends to have its own solution, so each party has to develop a separate solution for each partner in order to load their data)

Summary Description For producing properties that are subject to joint operating
agreements, the Operating Partner is responsible for collecting operational
information on those properties and determining well-level and facility-level
production volumes by product (or flow stream). They are also typically responsible
for providing this information to the other parties in the agreement (Non-operating
Partners). The Non-operating Partner uses this data to update its own data store(s)
that support making operational decisions, estimating reserves, reporting to outside
parties and booking related financial transactions. Although typically transmitted on
a regular on-going basis, the Non-operating Partner may from time to time request
data that encompasses prior periods to verify and/or complete its own data store.
Likewise, the Operating Partner may at times advise Non-operating Partners of prior
period adjustments and re-issue previously transmitted data that has been amended.

Actors Non-operating Partner; Operating Partner; non-working interest partners (e.g., royalty
owners)

Triggers Agreed upon periodic schedule (daily, weekly, monthly, etc.)


Out-of-Schedule request by Non-operating partner

Pre-conditions Non-operating Partner has a revenue and/or working interest in a producing property
operated by the Operating Partner
Operating Partner has collected and verified operating data and performed necessary
calculations to determine gross well-level production data.

Primary or Typical Scenario The data is transmitted from the operating partner to
interested parties (working interest partner, royalty owner, inter-departmental use,
etc.) on a regular basis encompassing all new data since the previous report (note: a
new well or facility may include historical data). For records representing the
aggregation of data over a period of time, the duration of each record is typically
fixed (full day, partial day, week, month, etc.) in a report, but there can be
multiple reports, each consisting of different date ranges and record durations.

Alternative Scenarios Operating Partner amends production-related information
previously reported to the Non-operating Partner as part of the agreed-upon periodic
reporting process. The Operating Partner transmits the report with all modified data,
including the producing period.

Non-operating partner wishes to refresh their production history on a property or
properties in which they have an interest (to verify the accuracy of their data, to
fill gaps in their data store, outages, etc.). They would request that the Operating
Partner transmit a report over a specified date range using a specific record
duration (day, week, month, etc.).

A non-operating partner that has not been tracking operating information on a well
may need a one-time issue of all historical data to establish a starting point.

It may be necessary to distinguish between data exchanges intended to overwrite old
records and those transmitting new data.

Post-conditions Non-operating Partner has data necessary to assemble a production
history for the property adequate for making operational decisions, estimating
reserves, reporting to outside parties and booking related financial transactions.

Business Rules

Data Requirements
Well identifier
Production date and period
Total produced volume of oil, water and gas
Injection volumes (oil, water or gas); allocated and raw
Pressures (tubing, casing, etc.)
Temperature
Sales volumes (oil, gas)
Fuel volumes (gas)
Flare/Vent volumes (gas)
Downtime/Deferred Production
  Hours (minimum)
  Volume Deferred
  Comment (minimum formatted reason code/description)
  Start date/time
  End date/time
Operations comments
Well tests (Oil/Water/Gas 24-hour rates and test date)
Last modified date/time
Overwrite/new data flag

Extended Requirements:
Well Type
Well Status
Artificial Lift History
Well Maintenance History
Facility measurement data
  Measurement point identifier (NB facility identification may not be a standard)
  Measurement point type (tank, meter, etc.)
  Inventory (tank volumes)
  Volume flowed (meters)
  Fluid characteristics (temperature, pressure, density, etc.)
  Pressures/temperatures (vessels/equipment)

Notes

Definitions

Use Case 3: Provide Historical Data Pre-Sale (“Divestiture”)

Use Case Name Data Room Pre Sale

Version 1.0

Author Bill Logsdon/Ashraf Wardeh (Oxy)

Reviewer(s) Joe Palatka (BP)


Shaji John (Halliburton)

Goal Reduce cost and effort and support automation of data room presale activities and generate
added value for properties being sold, as well as for those purchasing properties

Business Requirement General requirements:
• Provide required data room production information in an easy-to-access format
• Support standard data exchange interfaces
• Provide consistent, complete information needed to make purchase decisions

Business Value
• Consistent format and contents of production information reduces the amount of custom programming required for both buyer and seller
• Consistent format and contents of production information ensures consistency, accuracy and completeness of data sets, avoiding costly rework and potential delays
• Making production information available virtually ensures access to hard-copy data held by the divesting operator
• Ability to use pre-existing solutions based on PRODML standards reduces time and cost for data room functions (both seller and prospective buyers)
• Ability to support common data exchange formats and interfaces – PRODML, OData, web services, ETL – will provide flexibility and efficiency in data access for potential buyers
• Providing additional information allows prospective buyers to determine the maximum value of properties, allowing higher potential bids
• Publish the meta-information format

Summary Description During the data room pre-sale process, the divesting partner
provides information on assets being offered for sale. Prospective acquiring parties
analyze the available information and determine whether to make a bid on the
properties and how much to bid. Typically multiple prospective acquiring parties will
come into the data room to view data and documents and to ask questions. The number
and length of visits varies depending on the size and complexity of the sale.

Additional discussions and meetings outside the scope of the formal data room are common
and may occur after the data room phase is completed. The end result of this process is for
the prospective acquiring parties to submit their bids so the divesting party can determine
whether to sell the property and who the acquiring party will be.

Actors
Divesting Party – organization selling properties
Prospective Acquiring Parties – organizations considering bidding on a property

Triggers Initiated when divesting party decides to offer properties for sale and to host a data room
Initial event and update cycles determined by divesting party based on the length of the data
room phase of the sales process
Multiple data room events can be scheduled if the mix of assets being sold changes or other
events require

Pre-conditions Driven by divesting party decisions for data room phase of sales process

Primary or Typical Scenario Prospective acquiring parties may be invited by the
divesting party or the data room may be open to the public. Typically a dedicated
facility is made available containing documents, computers, and telecommunications
hookups for prospective acquiring parties. Resources are made available to answer
questions and required data is made available for all prospective acquiring parties.
The data room phase of the sale process will have a specific duration and prospective
acquiring parties may make multiple visits.

Alternative Scenarios Physical data room may be eliminated or minimized in exchange
for virtual meetings and external links for data on properties being sold. In this
case prospective acquiring parties will download data and analyze it locally.
Questions will be handled via teleconferences.

Post-conditions During the due diligence phase of the sales process (after the
acquiring party is selected and the bid is finalized), the acquiring party may
download current/updated information and may request information about the source of
data or may request additional information. Both of these activities are outside the
scope of this use case for PRODML.

Business Rules

Data Requirements Specific data requirements relevant for PRODML include:
• Monthly allocated production and injection volumes by product by well completion
• Monthly end inventory by product and facility
• Well status history by wellbore
• Producing method history
• Well test data by well completion
• Downtime and Deferred Production by well completion
• Gas composition and energy content (BTU) data, if available
• Daily meter volume by product, if available (optional)
• Hydrocarbon qualitative information, if available
Scope of allocated volumes includes the entire history of the properties. Scope for
other data is based on availability of the data.
Additional data requirements that are outside the scope of PRODML include:
• Well master data (lat/long, field, reservoir, operator, completion date)
• Facility data
• Maintenance history
• Cost information

Notes

Definitions Well completion – the zone/completion of the well where production or
injection is occurring (typically assigned an API_NO14 in US properties).

Use Case 4: Obtain Historical Production Data Post-Sale (“Acquisition”)

Use Case Name Acquisition Post Sale

Version 1.0

Author Bill Logsdon/Ashraf Wardeh (Oxy)

Reviewer(s)
Goal Reduce cost and effort of post-acquisition data loads and improve completeness and
accuracy of loaded data.

Business Requirement General requirements:
• Provide needed production data in an easy-to-access format
• Provide consistent, complete information needed to operate acquired properties

Specific data requirements relevant for PRODML include:
• Monthly allocated production and injection volumes by well completion
• Complete well status, type, and Method of Production (i.e., artificial lift type) history
• Well test data by well completion
• Downtime and lost production by well completion
• Daily raw injection volumes
• Daily meter volume, if available
• Gas composition data, if available and supported by PRODML
• Analog data, if available (inferred production, beam & ESP settings, etc.) and if supported by PRODML
• Booked production volumes (i.e., sold volumes) reported by financial accounting systems. (Note these are not revised; instead a prior period adjustment would be made in case of error: hence allocated and booked volumes can be different.)

Scope of allocated volumes includes the entire history of the properties. Scope for
other data is based on availability of the data.
Additional data requirements that are outside the scope of PRODML include:
• Complete well master data, including wells, wellbores, and well completions (lat/long, field, reservoir, operator, completion date, date of production, location information, operating lease)
• Facility data
• Well and facility maintenance history
• Cost information

Business Value
• Consistent format and contents of production information reduces the amount of custom programming required for both buyer and seller
• Ability to use pre-existing solutions based on PRODML standards reduces time and cost for data room functions (both seller and prospective buyers)
• Fully incorporating the PRODML data model into the acquisitions process will yield more consistent and complete data, which will drive more operational success for acquiring companies and ultimately better sales prices for divesting companies

Summary Description Once the acquisition is complete, the next step is for the
acquiring company to take over the properties and start operating them. The immediate
need is for minimum information to be able to perform closings for financial and
production accounting systems. Additional reserves need to be calculated and booked,
and production and status history need to be loaded for future production accounting
functions. The time frame for doing the initial loads is often very short, as
deadlines for taking over acquired assets tend to be very short.

Once the initial data loads needed to support accounting closings are completed, the
rest of the data is loaded and validated as resources are available. Success in this
second phase supports more efficient and successful operation of the acquired
property.

Actors Divesting Party – organization selling properties


Prospective Acquiring Parties – organizations considering bidding on a property

Triggers
Initial load is triggered when the sale is closed and the final contracts are signed.
Secondary load is triggered when the initial load is completed and the acquiring
company has taken over operation of the properties.

Pre-conditions NA

Primary or Typical Scenario The acquiring party is typically given all relevant data
that can be provided by the divesting party (although quality of data may be
inconsistent). The acquiring party determines the critical data needed to support the
initial closing process. Once the required data is located, they attempt to load the
data and perform their initial accounting closings. If critical data is missing,
requests will be made to the divesting party.

Complete load may take several months, and data gaps found during this phase are much
less likely to be addressed by the divesting party.

Alternative Scenarios Follow-up data transfers may be requested when needed data is
missing from the initial data load process.

Post-conditions Validation of the acquired data is performed to determine
completeness, consistency, and accuracy. In addition, reformatting may be necessary
to convert data into the format used by the acquiring company (PRODML can provide
value here by reducing data conversions needed).

Business Rules

Notes

Definitions Well completion – the zone/completion of the well where production or
injection is occurring (typically assigned an API_NO14 in US properties).

Use Cases 5 and 6: Transfer/Receive Data from Third-Party Source


Use Case 5: Transmit monthly data to central data exchange
Use Case 6: Receive monthly data from a central data exchange

Standard North American Production Reporting Standard

Version 0.1

Author Peter Westwood (EnergySys)

Reviewer(s) Barry Barksdale (PDS Energy) / Shaji John (Halliburton)

Goal Ensure that when data is exchanged the sender is able to specify the rules for access to the
data, such that the receiver may enforce privacy rules specified by the data owner.

Business Requirement
• Maintain privacy over sensitive data at all times, notably after onward transmission.
• Allow configuration of the rules for data access to be transferred between two systems.
• The responsibility for ensuring all data is clearly marked with access conditions lies with the data owner.

Business Value Contractual conditions typically specify who can see sensitive
production numbers at various locations. Unauthorized disclosure of this data to a
third party is at best embarrassing and at worst litigious.

Summary Description In several of the use cases two parties agree to exchange
information. The Operating Partner typically collects and sends data to the
Non-operating Partner. In some cases the data may be distributed, possibly via a
third-party hub.

When this occurs, it is important that the data access rights are carried with the
data, so that the rules for data security can be asserted by the receiving system.

Use Case Scope TBS

Primary or Typical Scenario Real examples that this may apply to include:
• Well production data should indicate which interested parties are permitted to access the numbers for a given well.
• Where a party receives information concerning the production, apportioned by NRI (or other such split), only the personal net allocation and possibly the total, as defined by the contract, should be viewable by a receiving party.

It is expected that many cases of this type of data permission need to be supported.

Alternative Scenarios
• Data where the supplied permission entities are not found.
• The receiving system does not support the enforcement of the required permissions.

Post-conditions
Business Rules
• If no permissions are set, then the assumption is that the data is accessible to anyone.
• If any of the identifiers within the permission blocks is set, then the receiver is expected to allow only the identified permission entities.
• Entities that may be permissioned include…
• Permissioned roles will be…
• Permission classes include Read or Read/Write.

Data Requirements
Notes The data transfer can be considered as having an Envelope and a Content Body. The
Envelope of the data, when present, will define the required permissioned entities allowed to
view the content.

Systems could exchange the defined capabilities for enforcement of access permissions.
This would allow the refusal to transfer data where the privacy cannot be enforced.

It is required that the data permissions be configured such that the rules for any contract can
be described and managed from the schema.

An optional permissions entity could be added to any business data transferred. While it is
outside the scope of the standard to say how this is used, it must be enough to allow the
receiver the ability to apply access controls.
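The envelope-plus-content-body pattern and the default-open business rule above could be sketched as follows. All names here are hypothetical; the use case deliberately leaves the concrete schema open.

```python
# Hypothetical sketch of the Envelope / Content Body pattern from this
# use case. Business rule from above: if no permission blocks are set,
# the data is assumed accessible to anyone; otherwise only listed
# entities with a Read or Read/Write class may view the content body.

def can_read(envelope, entity):
    """Return True if the named permitted entity may view the content."""
    permissions = envelope.get("permissions")
    if not permissions:          # no permission blocks set: open access
        return True
    return permissions.get(entity) in ("Read", "Read/Write")

message = {
    "permissions": {"partner-a": "Read", "operator": "Read/Write"},
    "content_body": {"well": "W-1", "oil_volume": 1250.0},
}
```

A receiving system that cannot enforce such a check would, per the notes above, be a candidate for refusing the transfer outright.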

Definitions
Content Body – any business data being transferred.
Permitted Entity – group identity for whom the access is being defined.
Identity Map – mapping between the entities used in one or more systems.

Issues The parties exchanging data will need a way to identify the Permitted
Entities. This means that they will need shared identifiers for the named items. This
will be difficult to manage but is required to allow the secure transfer of
permissions between the two parties.

7.3 v2.1 SPVR Release Notes


Changes are as follows. See the documentation in the Technical Reference Guide for the guidance
as to exactly what any added elements represent.

General SPVR Changes


1. Reporting Entity. To enable a better identification of a Facility associated with a Reporting Entity,
the following changes were made:
a. A new Top-level Object (TLO) called Facility has been added.
b. Facility has a single element Kind: ReportingFacilityExt (extensible form of Reporting Facility,
which is a list of facility kinds already in PRODML)
c. In Reporting Entity, Target Facility Reference is renamed to Associated Facility, and is an
optional Data Object Reference (DOR) to a Facility TLO. Until specific facility type data
objects are added, this is to be used to reference a Facility TLO, which can contain the type
and identity of the facility associated with this reporting entity.
d. In Reporting Entity, a new element Associated Object has been added. This is an optional
DOR to various types of existing Energistics subsurface objects available now, for example,
well, wellbore, wellbore completion etc.
2. To better support use of “legacy units of measure” (e.g., psig, scf, stb) the following changes were
made:
a. In Energistics common, made:
i. DensityValue.Density use MassPerVolumeMeasure instead of
MassPerVolumeMeasureExt so it can use Legacy units.
ii. FlowRateValue.FlowRate use VolumePerTimeMeasure instead of
VolumePerTimeMeasureExt so it can use Legacy units.
iii. VolumeValue.Volume use VolumeMeasure instead of VolumeMeasureExt so it can use
Legacy units.
iv. link between DensityValue and AbstractTemperaturePressure optional.
v. link between FlowRateValue and AbstractTemperaturePressure optional.
vi. link between VolumeValue and AbstractTemperaturePressure optional.
vii. Removed uid from Product Rate
b. In PRODML made SimpleProductVolume: ProductRate.VolumeFlowRate use FlowRateValue
instead of VolumePerTimeMeasure.
3. In the places where fluid quantity or flow rates are reported, made a consistent pattern whereby
Product Fluid Kind (or, for a Service Fluid, Service Fluid Kind) is mandatory, and Product Fluid
Reference (or, for a Service Fluid, Service Fluid Reference) is optional.
a. Product Fluid Kind (or, for a Service Fluid, Service Fluid Kind) are enums, e.g., “crude -
stabilized”
b. Product Fluid Reference (or, for a Service Fluid, Service Fluid Reference) are references
(using the uid) to components contained in the Fluid Component Catalog, where not only the
kind but the physical properties of the fluid can be transferred.
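The mandatory-kind / optional-reference pattern in item 3 might be checked as follows. This is a hypothetical validator: only "crude - stabilized" is quoted in the guide, and the other kind values and field names here are illustrative.

```python
# Hypothetical check of the SPVR pattern described above: Product Fluid
# Kind (an enum, e.g. "crude - stabilized") is mandatory, while Product
# Fluid Reference (a uid into the Fluid Component Catalog) is optional.
# Kind values other than "crude - stabilized" are illustrative only.
KNOWN_PRODUCT_FLUID_KINDS = {"crude - stabilized", "gas", "water"}

def validate_product_fluid(entry):
    """Return a list of validation errors for one fluid quantity entry."""
    errors = []
    kind = entry.get("product_fluid_kind")
    if kind is None:
        errors.append("product_fluid_kind is mandatory")
    elif kind not in KNOWN_PRODUCT_FLUID_KINDS:
        errors.append("unrecognized product_fluid_kind: " + kind)
    # product_fluid_reference is optional; when present it should resolve
    # to a component uid in the Fluid Component Catalog (not checked here).
    return errors
```

The same shape applies to service fluids, substituting Service Fluid Kind and Service Fluid Reference.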

Asset Production Volumes Changes


1. Schema elements were re-ordered compared to v2.x, to make the model more human-readable.
This change made no difference to content.
2. Disposition: “buyback – fuel” added as a possible value in the enum list.
3. In Deferred Production Event:
a. added planned|unplanned enum
b. added Remark
4. In Deferred Production, made Abstract Product Quantity mandatory (1..1).

5. The way a Deferred Production Event now works is, Deferred Production Event has:
a. planned|unplanned kind
b. duration and event times
c. downtime code
d. remark
e. 0..* Deferred Production elements. The purpose of these is to be able to report the deferred
quantities related to the parent Event. Each Deferred Production has an Estimation Method,
and a mandatory Abstract Product Quantity, which either is a Product Fluid or a Service Fluid
element, as used elsewhere in SPVR for fluid kind and quantity data.

Production Well Tests and Well Performance Parameters Changes


1. In Production Well Tests, added Effective Date – the date after which this test is effective in
production allocation calculations.
2. Moved Test Condition, with Well Flowing Condition, and Fluid Component Catalog, to Flow Test
Activity. See Chapter 14, in particular 14.2 and Figure 14-2. This set of classes was previously in
Production Well Tests. The change is to re-use this content in general Flow Testing and thereby
share flow test software development more easily.
3. As a result of change 2, the way to represent a production well test is now to have a Production
Well Tests top level object and for each well test to be transferred, have a data object reference to
a Flow Test Activity object.
a. Note that Flow Test Activity has multiple flow test types (see Figure 14-1) and the
types expected to be used for this purpose are Production Flow Test and Injection
Flow Test.
b. The Flow Test Activity is where the link to the Reporting Entity is made, via the
Location class (see Figures 14-2 and 14-3).
4. These changes were made to Test Period:
a. Renamed Test Condition to Test Period.
b. Added an enum Test Period Kind to Test Period. This includes transient test types as well as,
for the purposes of SPVR, the expected value of “production well test”.
c. Deleted Test Duration.
d. Added End Time (so start and end are now of type dateTime).
e. Added XSD attribute uid: String64 (because there can be multiple Test Periods included).
5. These changes were made to Well Flowing Condition:
a. Removed Flowing Pressure (ambiguous).
b. Removed Bottom Hole Static Pressure (a different flow condition, e.g., a shut-in period, now
is reported as a separate Test Period, see below.)
c. Removed Bottom Hole Shut In Pressure (see previous).
d. Removed Tubing Head Shut In Pressure (see previous).
e. Added Casing Head Stabilized Temperature.
f. Renamed Bottom Hole Gauge Depth MD to Bottom Hole Pressure Datum MD.
g. Renamed the four elements for bottom-hole and tubing-head pressure and temperature to
use the word "stabilized" in attribute names, replacing "flowing". (See the comment below
concerning use of separate Test Periods for shut ins; "stabilized" is the description for steady
state conditions whether flowing or shut in.)
h. Added Fluid Level, Base Usable Water. This allowed the deprecation of the old Well Test data
object.
6. Because the Flowing Conditions class no longer has “flowing” and “shut in” (see previous),
instead, a Test Period should be used for each condition, e.g., one Test Period to report the
flowing conditions and another one for the shut in conditions. This can be done in the Flow Test
Activity data object by having two (or more) Test Periods under the Interval Measurement Set.
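Under this pattern, one well test that captures both a flowing condition and a shut-in condition could be represented as two Test Periods. This is illustrative only: the field names and the "shut-in" kind value below are simplified stand-ins, not schema names.

```python
# Illustrative only: one production well test carried as two Test
# Periods under one Interval Measurement Set, per the changes above.
# Field names and kind values are simplified stand-ins.
test_periods = [
    {"uid": "tp-1", "test_period_kind": "production well test",
     "start_time": "2022-05-01T08:00:00Z",
     "end_time": "2022-05-01T20:00:00Z",
     "bottom_hole_stabilized_pressure": "2450 psi"},   # flowing condition
    {"uid": "tp-2", "test_period_kind": "shut-in",     # kind illustrative
     "start_time": "2022-05-01T20:00:00Z",
     "end_time": "2022-05-02T08:00:00Z",
     "bottom_hole_stabilized_pressure": "3100 psi"},   # shut-in condition
]

def distinct_uids(periods):
    """Each Test Period carries its own uid because multiple periods can
    now be included (change 4e above)."""
    return len({p["uid"] for p in periods}) == len(periods)
```

Note that "stabilized" covers both periods: it describes steady-state conditions whether the well is flowing or shut in, which is why the separate shut-in pressure elements were removed.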

7. In Well Production Parameters:


a. Changed multiplicity of Well Production Period from 0..* to 1..*
b. Moved Reporting Entity to Well Production Period.
c. In Well Production Period, made Reporting Entity mandatory (there must be a well we are
reporting against).
d. As a result of a-c, multiple periods and/or multiple well performance data can now be
transferred in one Well Production Parameters file.

Part III: Fluid and PVT Analysis


Part III contains Chapters 8 through 12, which explain the set of PRODML data objects for fluid and
PVT analysis.

Acknowledgement
Special thanks to the following companies and the individual contributors from those companies for
their work in designing, developing and delivering the Fluid and PVT Analysis data model: Calsep,
Chevron, ExxonMobil, KBC Advanced Technologies, Shell, Schlumberger and Weatherford.
8 Introduction to Fluid and PVT Analysis Data Objects

8.1 Overview of Fluid and PVT Analysis Reporting
PRODML includes a set of XML data objects that can be used to consistently capture and
communicate fluid and pressure-volume-temperature (PVT) analysis data covering:
1. Fluid System and “rock-fluid feature” from which a sample was taken;
2. Sample acquisition, including the methods and conditions of the operation;
3. Sample Container in which a sample is transported;
4. Laboratory Analysis and results thereof;
5. Fluid System Characterization and property generation for upstream technical workflows.
The PVT data objects are designed to improve reliability and reduce costs for data exchanges
between field personnel, laboratory personnel, subject matter experts, and end users including
technical software applications. The standard is also designed to support the evolution of a single
“document” for a fluid sample’s lifecycle. Other uses include storing these data in a system of record.
These chapters do not specify how to perform these steps in the reservoir fluid’s lifecycle; however,
the standard has been designed to allow data to be initially incomplete and updated as additional data
are developed. Also, final products—such as a fluid property table or an equation-of-state (EoS)
model—remain connected to the earlier lifecycle stages of characterization, analysis, and acquisition
and the reports and documents created at each stage.
For a quick overview, or to make a presentation to colleagues, see the slide set Worked
Example PVT.pptx, which is provided in the folder energyml\data\prodml\v2.x\doc of the Energistics
download.
8.2 Business Case: Why is a PVT Data-Exchange Standard Important?
Data associated with reservoir fluids are diverse, detailed, and important. Good reservoir fluid
characterization is critical for most business decisions about the development of oil and gas fields.
Reliable reservoir fluid properties obtained during the field’s lifecycle are critical factors in planning
reservoir, facilities, and well developments to maximize return on capital.
However, fluids data are often based on samples, which are normally collected, analyzed, and
modelled early in the reservoir’s lifecycle. Because of the diversity and physical nature of reservoir
fluids, the data are usually characterized by complex experiments that represent a best approximation
of fluid behavior along the entire production pathway. The quality of the results of these experiments
depends on the test sample’s handling and preparation, the experimental process used, and the test
conditions used. To ensure that adequately representative properties are available for a specific
technical workflow, it is vital to understand and accurately communicate this background process for
fluid properties.
Data Exchange Challenges
The extended workflow—from sample collection and handling, to lab analysis and interpretation, to
communicating results to end users—is a complex, data-intensive process with ample opportunity for
errors, both in the actual workflow process (see a list of these challenges in Section 8.2.2) as well as
the data exchange required as part of that process. The challenges associated with data exchange
include:
• Chain of custody/time lags
• Multiple labs/multiple clients
• Incomplete, misinterpreted, or misunderstood information
• Sample results, while accurate, may need adjustments for use
• Multiple available data stores, vocabularies, and formats
• Multiple consumers of the data
These PRODML PVT data objects have been designed to address these challenges and improve the
accuracy and completeness of the information transferred in these workflows.
Process Challenges
Standing (1981) declared “the greatest use of PVT data lies in calculating reserves of oil and gas in
place in the reservoir and the effect of field operational methods on the recovery of oil and gas.” In
other words, PVT data is fundamental to our ability to understand and predict resource size and
recovery.
Some of the challenges (per Standing):
• No assurance that samples obtained in one well are representative of the entire reservoir.
− Gravity effects can create sample differences.
− Samples at the same structural elevation may have differences based on geologic activity
over time.
• How representative are samples captured in a well?
− Fluid may be commingled (from multiple reservoir zones).
− Fluid may be contaminated.
− Where two phases exist in contact in the same zone, flow rates are almost always different.
• How complete is the sample collection data?
− Collection points, times, etc.
• How depleted is the reservoir?
− Sampling above 80% of original pressure is preferred.
− Sampling below 40% will likely lead to inaccuracies.
− Samples should be re-run as the reservoir depletes.
References
Standing, M.B. 1981. Volumetric and Phase Behavior of Oil Field Hydrocarbon Systems, ninth edition.
Richardson, Texas: Society of Petroleum Engineers of AIME.
Benefits of Use of PVT Data Objects
Use of the PVT data objects will improve both the accuracy and availability of usable fluid and PVT
data within and between upstream companies. Significant improvements are expected in:
• Simplifying the project management of a fluids analysis program while increasing
efficiencies in the communication between the customer and the laboratory personnel.
• Simplifying the reporting of detailed laboratory results by using a common, self-validating
XML format.
• Improving the quality and consistency of fluid property data created for use in technical
applications, both by systematically capturing the pedigree of fluid data in the workflow all the
way back to sample acquisition and by standardizing input data structures.
• Increasing the quantity of fluid and PVT analysis data in systems of record by simplifying
the workflow used to load field and laboratory measurements. Also, enabling standard PVT data
storage and search and allowing the data to be exchanged between databases, fluid
characterization applications, and end-user applications.
• Reducing mistakes in the communication of results caused by differences in technical
understanding of the fluid property data, through consistent handling of details such as reference
conditions and units of measure.
• Making field notes more available to laboratory analysts, so that the experiments and
measurements more realistically represent real-life conditions, and making field and laboratory
notes more available to fluid property specialists seeking to characterize individual samples or to
produce a system characterization.
As this standard is adopted, these improvements are expected to be realized in both service and
software offerings from vendors and in operators’ internal workflows. Opportunities for additional
impacts have been identified for workflows attempting to automate field surveillance and performance
prediction because the use of these fluid data standards will support the consistent expression of fluid
properties across multiple and diverse applications and workflows.
It has been difficult to estimate the impact of more accurate and consistent fluid data on the
operational bottom line, but the quantification of its impact should include:
• Less risk for estimates of initial rates, final recovery fractions, and development/depletion
strategies.
• Better assessments of peak capacities used to size surface operating facilities.
• Improved accuracy of subsurface flow potential and wellbore tubular design.
Other benefits may be realized through the interaction of one standard with other standards (cf.
Metcalfe's Law). By delivering fluid and PVT analysis as useful data objects in PRODML, these can
be readily accessed and used by other data objects in both RESQML and WITSML and by the flow
network and completion/nodal analysis data objects in PRODML.
8.3 Scope of PVT Data Objects
Figure 8-1 shows a high-level workflow from which a set of functional requirements was developed.
Figure 8-1. High-level fluid sample gathering and analysis workflow, which serves as basis for defining
requirements for PRODML PVT data objects.
The areas of interest covered by these data objects include:
• Fluid sample acquisition
• Sample analysis (i.e., laboratory measurements)
• Characterization, including both the validation of laboratory measurements and the generation of
calculated results
The solid lines show the data movement processes that have been addressed. The goal of these data
objects is to provide complete and accurate XML-based machine-readable documents that can be
transferred between the different parties and software systems that exist today. The need for this
capability extends from capturing fluid samples in the field to supplying data to these consumers:
• Systems of record (e.g., corporate/technical databases, etc.)
• Interpretation software (e.g., proprietary PVT analysis packages)
• End-user applications (e.g., nodal analysis, reservoir simulation, etc.)
Out of scope for this release (dashed lines in Figure 8-1):
• Sample tracking through material movement systems and storage
• Water chemistry and geochemical analysis
• Fluid sampling and analysis for custody transfer
• Project databases
8.4 Fluid and PVT Use Cases
The main use cases are listed and described in Chapter 9.
9 Fluid and PVT Analysis: Use Cases
This chapter provides an overview of the main use cases and explains how the set of PVT data
objects support them. Use cases include:
• Use Case 1: Sample Acquisition defines the requirements to capture all the relevant information
about a fluid sample and its capture, information which may subsequently be needed to
understand, quality check, and apply results of laboratory analysis. For details, see Section 9.1.
• Use Case 2: Laboratory Analysis provides requirements for the laboratory fluid analysis
program and the resulting experiment measurements, which involves many different participants
and detailed, accurate communication. For details, see Section 9.2.
• Use Case 3: Sample Characterization is the process of building a numerical model of the
laboratory measurements from an individual sample and determining the quality of the results
received from laboratory fluid analysis; it involves a QC and verification of the samples and
results. For details, see Section 9.3.
• Use Case 4: System Characterization is a process with a goal of building a numerical fluid
model for data-specific operating scenarios and fluid systems and providing a fluid description to
be used in engineering models; data from multiple samples may be used for conditions not
investigated in the laboratory. For details, see Section 9.4.
• Use Case 5: Data Storage addresses the storage, search, and retrieval of information in systems
of record for the lifecycle of the fluid sample data (i.e., from acquisition to utilization to disposal).
For details, see Section 9.5.
9.1 Use Case 1: Sample Acquisition
The requirements for this use case are to capture all the relevant information about collecting a fluid
sample, including everything that may be needed to understand, quality-control, and apply the results
of laboratory analysis. Figure 9-1 highlights (red box) the sample acquisition portion of the workflow.
Figure 9-1. Sample acquisition use case.
How the Solution Supports this Use Case
The sample acquisition data object helps to answer questions like these:
• Which reservoir did this sample flow from?
• Which conditions was the sample acquired under? Where? When?
• What method was used to acquire it? Which particular tool was used?
• How much actual and usable sample volumes were collected?
• Were multiple samples taken from the same location?
• What are the intended uses of the samples?
• Who handled the sample? What did they do? When?
This information is needed to assess the sample's representativeness, its potential for contamination,
and its susceptibility to the removal of trace components (e.g., by adsorption onto surfaces of the
sampling tool). This information is also needed when designing the laboratory analyses and selecting
the best samples for characterization. The ability to describe the physical configuration and conditions
of the flow stream being sampled—either at the surface or downhole—was recognized as a very
important part of this use case. For this reason, the fluid sample acquisition data object contains
references to other available data objects. One of these is the Flow Test Job (for cases where the
sampling is performed in the context of an encompassing well testing operation of some kind). The
Fluid Sample Acquisition can also reference the wells, wellbores, etc., at which the sample was taken,
or the facility concerned if the sample is from a commingled field stream as opposed to a single well
stream. The Fluid Sample Acquisition can also link to a Flow Test Activity (for details of flow
conditions around the time of the sample acquisition).
9.2 Use Case 2: Laboratory Analysis
This use case provides requirements for the laboratory fluid analysis program and the resulting
experiment measurements, which involves many different participants and detailed, accurate
communication.
Figure 9-2 shows the components used to define the laboratory analysis program and its
measurements. Most of the information for this use case is described using the fluid analysis data
object.
Figure 9-2. Laboratory analysis use case.
After the fluid samples arrive at the laboratory, and based on the business needs identified by the
customer and the information gathered during sample acquisition, a matrix of samples and laboratory
analyses is usually created. For each sample employed, the chain of custody is updated to detail how
much of which sample was used. Executing this plan, these tests are conducted and the resulting
measurements are reported. For a list and description of these tests, see Section 10.6.1.
Laboratory reports may be included as documents within the EPC (Energistics Packaging
Convention) package.
9.3 Use Case 3: Fluid Sample Characterization
Figure 9-3 shows the next step in the fluid and PVT analysis lifecycle, sample characterization: the
process of building a numerical model of the laboratory measurements from an individual sample and
determining the quality of the results received from laboratory fluid analysis. This use case involves
QC and verification of the samples and results. The goal of this step is to develop a mathematical
model that can faithfully reproduce the observed laboratory measurements for a single fluid sample.
This model may then be used to calculate the properties of the fluid system (i.e., system
characterization, the next use case) for historical field conditions or for various depletion strategies
(such as EOR).
Figure 9-3. Sample characterization use case.
After the laboratory analysis is received, the usual first action is to complete a consistency and
correctness check of the data. If the information coming from the laboratory is found to be incoherent
or not self-consistent, the laboratory may be asked to repeat some or all of the experiments, if
possible. In other cases, to increase confidence in the results, some or all of the experiments may be
repeated, either by the original laboratory or by another laboratory. Regardless of validity, all
measurements on fluid samples are potential candidates for characterization and may be described
by the fluid analysis data object. For example, a sample that may have too much gas may form a
basis for a future sensitivity analysis.
In the past, samples were characterized using traditional methods, such as direct input of the lab data
superimposed with correlations/smoothing functions, possibly coupled with K-value charts or
correlations. To convey this legacy data, the tabular format (see Section 10.7.2) can be used.
Modern reports, however, are overwhelmingly analyzed using tools with built-in EoS engines, for
which the parametric format is appropriate. Either or both of these formats may be used as needed for
a specific set of results. The final goal of the PVT workflow, regardless of the point of origin, is to use
the resulting fluid description in the tools that support engineering decisions.
Conducting the sample characterization soon after the PVT test makes it easier to repeat or add
experiments if needed. However, depending on the business environment and in-company
processes, staff turnover may impede technical continuity, particularly when the full-cycle analysis
takes a long time to complete. It is often difficult to locate the correct data unless it has been properly
categorized, indexed and stored. The fluid analysis data object is designed to carry system identifiers
throughout its lifecycle to make this task easier and more consistent.
9.4 Use Case 4: Fluid System Characterization
Fluid System characterization (Figure 9-4) is a process with a goal of building a numerical fluid model
for data-specific operating scenarios and fluid systems and providing a fluid description to be used in
engineering models; data from multiple samples may be used for conditions not investigated in the
laboratory.
Figure 9-4. System characterization use case.
Many disciplines and their respective software tools need a fluid system description that adequately
represents how the fluid is going to behave at different pressures and temperatures; for example:
reservoir simulators, lift packages, process simulators, and others targeting various disciplines such
as reservoir engineering, production and well engineering, facilities design, and operations. In the
context of the current effort, fluid system characterization refers to developing and characterizing fluid
components within the fluid system for the purpose of simultaneously evaluating the validity and
applicability of earlier fluid sample analyses and developing a mathematical model to represent
holistic behavior of the reservoir fluid system.
In the fluid characterization data object, references may be made to the fluid components defined for
the fluid system. This includes the use of specific oil, water and gas components, standard
components used in sample compositional analysis, or new pseudo-components intended for use in
reservoir or process simulators. The additional properties and characteristics (such as molecular
weight, boiling point, etc.) for each fluid component used are added as data in the fluid
characterization data object.
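As an illustrative sketch of this idea, a pseudo-component with added characterization properties might be conveyed as follows (a hypothetical fragment: element and attribute names approximate the schema described here and are not confirmed PRODML 2.2 names):

```xml
<!-- Hypothetical sketch only: a "C7+" pseudo-component defined in the
     fluid characterization data object for use in a reservoir or process
     simulator, carrying added properties such as molecular weight and
     boiling point. Check the published XSDs for actual names. -->
<PseudoFluidComponent uid="C7plus">
  <Kind>c7+</Kind>
  <AvgMolecularWeight uom="g/mol">190</AvgMolecularWeight>
  <AvgBoilingPoint uom="degC">240</AvgBoilingPoint>
</PseudoFluidComponent>
```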
9.5 Use Case 5: Data Storage
Data storage (Figure 9-5) addresses the storage, search, and retrieval of information in systems of
record for the lifecycle of the fluid sample data (i.e., from acquisition to utilization to disposal).
Figure 9-5. Data storage use case.
This use case is specifically intended to support the requirements from laboratories and operators for
loading, searching, and exporting fluid analysis and characterization datasets from systems of
record, such as corporate databases.
The two general requirements for the data storage use case are:
• Accurate capture and organization of data to meet the needs of future workflows
• Preservation of context for data items that are used in multiple workflows.
These requirements significantly influenced the structure and behavior of the data objects that make
up this standard—such as the separation of the fluid sample from distinct sample acquisition and
laboratory analysis objects—so that implementers of the standard are not burdened with the need to
“make up” data not otherwise available. For example, the distinction between fluid analysis data and
fluid characterization data more clearly defines the difference between measured and calculated data
while better supporting the description of user-defined data because it can be handled without having
to fake “laboratory tests”.
Another product of these requirements was the assignment of identifiers within each data object and
allowing all data objects to reference other data objects by these identifiers. For example, this enables
the data in the sample acquisition data object to remain connected to the fluid samples it produces
and vice versa. These schemas are also designed to incorporate the operator’s system of record’s
identification (e.g., completion, well, or facility identifiers) within the initially created data objects, which
are then carried forward through the workflow, making it easier to load them into a system of record.
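The reference mechanism described above can be sketched as follows (a hypothetical fragment: the wrapper element names are illustrative, though the Energistics common DataObjectReference with content type, title, and UUID is the general pattern):

```xml
<!-- Hypothetical sketch only: a fluid sample pointing back to the
     acquisition job that produced it, via a standard Energistics
     DataObjectReference carrying the target object's identifier. -->
<FluidSampleAcquisitionJobSource>
  <FluidSampleAcquisitionJobReference>
    <eml:ContentType>application/x-prodml+xml;version=2.2;type=FluidSampleAcquisitionJob</eml:ContentType>
    <eml:Title>Acquisition job, Well A-1, May 2022</eml:Title>
    <eml:Uuid>5f8e7c1a-1111-4a2b-9c3d-000000000001</eml:Uuid>
  </FluidSampleAcquisitionJobReference>
</FluidSampleAcquisitionJobSource>
```

Because the reference carries a stable UUID rather than a free-text name, a system of record can resolve the link in either direction.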
Because of the expanded scope of these specifications over earlier ones (Aydelotte et al. 2003), the
scope of the corporate repositories needing to store these datasets may need to be expanded.
Specifically, additional tables for sample acquisition, laboratory analysis tests, and fluid
characterization may be needed unless these data are de-normalized into existing tables, which could
potentially limit the system's ability to export the original data. Also, a capability for managing
document attachments (field notes, laboratory reports, etc.) may be required. Definition of the
scope or details of these potential changes is beyond the scope of any transfer standard and is the
province of the owner of the relevant databases.
10 Fluid and PVT Analysis: Data Model
This chapter describes the PVT data model, which is composed of the key data objects shown in
Figure 10-1 and explained in the sections below.
[Figure 10-1 is a UML class diagram showing the key data objects and their associations:
FluidSample, FluidAnalysis, FluidSampleAcquisitionJob (containing FluidSampleAcquisition),
FluidSampleContainer, FluidSystem, and FluidCharacterization (with FluidCharacterizationSource).]

Figure 10-1. Key data objects of PRODML PVT capabilities.
10.1 Common Elements
Note that a system for describing fluid composition in terms of a catalog of fluid components and
their properties is common to several objects, wherever fluid composition is required; this is
described in Section 10.8.
The Standard Conditions element is used in many of the PVT objects. A choice is available: either
supply the temperature and pressure for all the volumes which follow, or choose from a list of
standards organizations' reference conditions. Note that the enum list of standard conditions is
extensible, allowing local measurement condition standards to be used. See Figure 2-4, which
shows the Abstract Temperature Pressure class; this is the type for standard conditions in all PVT
objects.
Use is also made within PVT of the Pressure Value type, allowing absolute or relative pressures. See
Section 2.3 and Figure 2-3.
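The two choices for Standard Conditions can be sketched as follows (hypothetical fragments: element names and the enum string are illustrative only; the actual choice structure is defined by the Abstract Temperature Pressure class in the XSDs):

```xml
<!-- Hypothetical sketch only. Option 1: supply an explicit temperature
     and pressure that apply to all the volumes which follow. -->
<StandardConditions>
  <Temperature uom="degF">60</Temperature>
  <Pressure uom="psi">14.696</Pressure>
</StandardConditions>

<!-- Option 2: select a standards organization's reference conditions
     from the (extensible) enumeration instead. -->
<StandardConditions>
  <ReferenceCondition>60 degF and 14.696 psia</ReferenceCondition>
</StandardConditions>
```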
Chapter 11 provides an explanation of the example files included with the PVT data object
download. The objective is this: an understanding of the basic data model described in this chapter,
combined with the example files and related explanations, provides a good working knowledge of how
PVT works and can be used.
10.2 Fluid System Object
The fluid system data object was created to designate each distinct subsurface accumulation of
economically significant fluids. This data object primarily serves to identify the source of one or more
fluid samples and provides a connection to the geologic environment that contains it. Characteristics
of the fluid system include the type of system (e.g., black oil, dry gas, etc.), the fluid phases present,
and its lifecycle status (e.g., undeveloped, producing, etc.).
Within the fluid system, the products that make up the fluid system can be described: hydrocarbon
components for crude oil and natural gas (with the GOR), and formation water. For
more detailed analysis and characterization, any number of fluid components may be defined in other
data objects.
The fluid system can also contain references to multiple rock fluid unit features, which are RESQML
objects describing the combination of a fluid kind and a reservoir zone. Samples can
subsequently be associated with the specific rock fluid unit feature from which they are taken.
The main elements of the fluid system object are shown in Figure 10-2. The fluid system is
referenced by several of the other objects, as shown in Figure 10-1.
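A minimal instance of the object might look like the following sketch (hypothetical fragment: element names follow the classes described in Figure 10-2, but enum values, namespaces, and mandatory attributes are illustrative and should be checked against the XSDs):

```xml
<!-- Hypothetical sketch only: a FluidSystem describing a black-oil
     accumulation, with its stock tank oil and natural gas products. -->
<FluidSystem>
  <ReservoirFluidKind>black oil</ReservoirFluidKind>
  <PhasesPresent>oil and gas</PhasesPresent>
  <ReservoirLifeCycleState>producing</ReservoirLifeCycleState>
  <SolutionGOR uom="ft3/bbl">800</SolutionGOR>
  <StockTankOil>
    <APIGravity uom="dAPI">32</APIGravity>
  </StockTankOil>
  <NaturalGas>
    <GasGravity>0.75</GasGravity>
  </NaturalGas>
</FluidSystem>
```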
[Figure 10-2 is a UML class diagram of FluidSystem, with elements StandardConditions,
ReservoirFluidKind, PhasesPresent, ReservoirLifeCycleState, RockFluidUnitFeatureReference,
SaturationPressure, SolutionGOR, and Remark, associated with the fluid components StockTankOil
(APIGravity, MolecularWeight, gross/net energy content per unit mass and volume), FormationWater
(SpecificGravity, Salinity), and NaturalGas (GasGravity, MolecularWeight, gross/net energy content
per unit mass and volume).]

Figure 10-2. Fluid System Object (some details omitted).
10.3 Fluid Sample Acquisition Job Object
The fluid sample acquisition job data object is used to describe the method, equipment, time, place
and operating conditions for each fluid sample acquired. The sample acquisition job represents the
operation to collect one or more fluid samples. Fluid sample acquisition elements repeat, one per
sample acquired, within one job. Fluid sample acquisitions can be made in five types of locations:
surface facilities, separators, wellheads, downhole, or directly from the formation by wireline formation
tester. Each type of location is defined with specific characteristics so that the important
measurements for each type are captured, such as measured depth for downhole samples and the
operating conditions for separator samples. Figure 10-3 and Figure 10-4 show the fluid sample
acquisition job data object.
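An acquisition job containing a single downhole acquisition might be sketched as follows (hypothetical fragment: element names follow Figures 10-3 and 10-4, but the exact structure, namespaces, and type mechanism should be checked against the XSDs):

```xml
<!-- Hypothetical sketch only: one job collecting one downhole sample.
     Location-specific types specialize FluidSampleAcquisition in the
     model; xsi:type is used here for illustration. -->
<FluidSampleAcquisitionJob>
  <StartTime>2022-05-16T06:00:00Z</StartTime>
  <EndTime>2022-05-16T18:00:00Z</EndTime>
  <FluidSampleAcquisition uid="acq-1" xsi:type="DownholeSampleAcquisition">
    <StartTime>2022-05-16T10:00:00Z</StartTime>
    <EndTime>2022-05-16T10:30:00Z</EndTime>
    <AcquisitionTemperature uom="degC">95</AcquisitionTemperature>
    <AcquisitionVolume uom="cm3">600</AcquisitionVolume>
    <SamplingRun>1</SamplingRun>
    <TopMD uom="m">2450</TopMD>
  </FluidSampleAcquisition>
</FluidSampleAcquisitionJob>
```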
[Figure 10-3 is a UML class diagram of FluidSampleAcquisitionJob (Client, ServiceCompany,
StartTime, EndTime), with links to FlowTestJob and FluidSystem, containing FluidSampleAcquisition
elements (StartTime, EndTime, AcquisitionPressure, AcquisitionTemperature, AcquisitionVolume,
AcquisitionGOR, formation pressure/temperature and datum, Remark), each linked to a FluidSample
and to a FluidSampleContainer (Make, Model, SerialNumber, BottleID, Capacity, Owner, Kind,
Metallurgy, pressure and temperature ratings, inspection and transport certificate details).]

Figure 10-3. Fluid Sample Acquisition Job Object, showing links to other Objects.
[Figure 10-4 is a UML class diagram of the FluidSampleAcquisition specializations:
FacilitySampleAcquisition, WellheadSampleAcquisition, SeparatorSampleAcquisition,
DownholeSampleAcquisition, and FormationTesterSampleAcquisition, each with location-specific
elements (e.g., SeparatorPressure, SeparatorTemperature, and corrected/measured oil, gas, and
water rates for separators; Wellbore, SamplingRun, TopMD, and BaseMD for downhole; MdTop,
MdBase, tool and sample container details for formation tester), with links to FlowTestActivity and to
ReportingEntity.]

Figure 10-4. Detail of the various types of Fluid Sample Acquisition, showing links to Flow Test or to
Reporting Entity depending on the type of Fluid Sample acquired.
Using data objects defined elsewhere in PRODML, these sample acquisition locations may be linked to identified separators, wells, wellbores, completions, or facilities when these objects are known. To allow sample acquisition activities to be defined within the context of other well work, they may also be linked to a Flow Test Activity data object, which reports flowing conditions around the time of sampling.
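As a concrete illustration, a minimal acquisition-job document might carry a downhole acquisition that points at its wellbore through a DataObjectReference. The sketch below builds such a fragment with Python's ElementTree; element names follow the UML in Figures 10-3 and 10-4, but the omitted namespaces, the simplified DataObjectReference sub-elements (Title, Uuid), and all values are illustrative assumptions, not the normative schema.

```python
import xml.etree.ElementTree as ET

# Build a minimal (non-normative) FluidSampleAcquisitionJob fragment.
job = ET.Element("FluidSampleAcquisitionJob")
acq = ET.SubElement(job, "FluidSampleAcquisition", uid="acq-1")
ET.SubElement(acq, "StartTime").text = "2022-05-16T08:00:00Z"
ET.SubElement(acq, "EndTime").text = "2022-05-16T09:30:00Z"

# DataObjectReference to the wellbore in which the sample was taken
# (sub-elements simplified; the uuid is a placeholder, not a real identifier).
wellbore = ET.SubElement(acq, "Wellbore")
ET.SubElement(wellbore, "Title").text = "Wellbore A-12"
ET.SubElement(wellbore, "Uuid").text = "00000000-0000-0000-0000-000000000001"

xml_text = ET.tostring(job, encoding="unicode")
```

A consuming application would resolve the reference by Uuid against its store of WITSML Wellbore objects.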

Standard: v2.2 / Document: v1.0 / May 16, 2022 80



10.4 Fluid Sample Data Object


Initially in a sampling project, each fluid sample represents a small amount of fluid extracted from a parent fluid system, as described by the fluid sample acquisition within the fluid sample acquisition job. Figure 10-5 and Figure 10-6 show the main elements of the fluid sample data object, which are further explained below the figures.

[Diagram content for Figure 10-5: the FluidSample object (SampleKind, RockFluidUnitInterpretation, Representative, SampleDisposition, Remark) with links to its FluidSampleAcquisitionJob source, a FluidSystem, an original FluidSampleContainer, and associated fluid samples.]

Figure 10-5. Fluid Sample Object showing links to other Objects.

[Diagram content for Figure 10-6: the FluidSample object with its FluidSampleKind enumeration, the FluidSampleChainOfCustodyEvent element (transfer volume/pressure/temperature, sample integrity, remaining and lost volumes, custody date, custodian, container location) with its SampleAction enumeration and current/previous container references, and the SampleRecombinationSpecification and RecombinedSampleFraction elements.]

Figure 10-6. Fluid Sample Object showing details supporting workflow for samples.

Each fluid sample is assigned a name to identify it within the context of its lifecycle. Each fluid sample can have a description of its source geologic feature. Information such as the expected reservoir temperature, pressure, and gas-liquid ratio is also recorded for each sample (some of these data are within the associated fluid sample acquisition in the fluid sample acquisition job). Other characteristics of a fluid sample include qualitative descriptors for sample quality and representativeness of the fluid sample.
Because fluid samples may be combined with other fluid samples—either in the case of
recombination samples, or for specific laboratory investigations using additional fluids—a new fluid
sample can be described from the recombination of other fluid samples. These combined samples are
thereafter treated as regular fluid samples with additional data (the Recombined Sample Fraction)
describing the identity and amounts of source fluid samples which were combined. For a recombined
liquid and vapor sample, the Sample Recombination Specification (i.e., the specification for the
recombination) can be defined. An Associated Fluid Sample can also be reported, e.g., one sample of
a sequence of samples can be recorded as being associated with the others.
Often it is important to record the handling of the fluid sample before its analysis. To meet this requirement, a chain of custody may be maintained for each fluid sample. The chain of custody information includes the identity of the fluid sample containers in which the sample is stored, transfers of specific volumes from one container to another, the pressures, temperatures, locations, and dates at which these transfers were conducted, and the identity of the custodians. As an essential element of the chain-of-custody process, integrity checks, such as opening and closing pressures and temperatures for each fluid sample, may also be recorded for each transfer. Also, the disposition of a fluid sample may be recorded to describe what happened to the sample after analysis.
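The chain-of-custody bookkeeping described above can be sketched as a simple event list. The class and field names below loosely mirror FluidSampleChainOfCustodyEvent and the SampleAction enumeration in Figure 10-6, but the Python structure, the units, and the example custodian and container names are hypothetical.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ChainOfCustodyEvent:
    custody_action: str                        # e.g. "sampleTransfer", "stored"
    custodian: str
    transfer_volume_ml: Optional[float] = None
    remaining_volume_ml: Optional[float] = None
    prev_container: Optional[str] = None
    current_container: Optional[str] = None

# Hypothetical history: a transfer from a tool chamber to a bottle, then storage.
events = [
    ChainOfCustodyEvent("sampleTransfer", "Acme Lab", 150.0, 450.0,
                        prev_container="chamber-3", current_container="bottle-07"),
    ChainOfCustodyEvent("stored", "Acme Lab", current_container="bottle-07"),
]

# Simple integrity check: remaining volume must never increase between events.
volumes = [e.remaining_volume_ml for e in events if e.remaining_volume_ml is not None]
assert volumes == sorted(volumes, reverse=True)
```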

10.5 Fluid Sample Container Object


Each fluid sample may be contained within a fluid sample container at a point in time. These containers represent the internal tool chambers (fixed or removable) and external sample bottles in which fluid samples are stored and transported. The information recorded for these containers may include their serial number, owner, make, model, and identity, as well as their capacity, metallurgy, service compatibility, fluid integrity capability, pressure and temperature ratings, last inspection date, and transport certification. Figure 10-7 shows the fluid sample container data object.

class FluidSampleContainer

AbstractObject
«XSDcomplexType,XSDtopLevelElement»
FluidSampleContainer

«XSDelement»
+ Make: String64 [0..1]
+ Model: String64 [0..1]
+ SerialNumber: String64 [0..1]
+ BottleID: String64 [0..1]
+ Capacity: VolumeMeasure [0..1]
+ Owner: String64 [0..1]
+ Kind: String64 [0..1]
+ Metallurgy: String64 [0..1]
+ PressureRating: PressureMeasure [0..1]
+ TemperatureRating: ThermodynamicTemperatureMeasure [0..1]
+ LastInspectionDate: date [0..1]
+ TransportCertificateReference: DataObjectReference [0..1]
+ Remark: String2000 [0..1]

Figure 10-7. Fluid Sample Container Object.

10.6 Fluid Analysis Object


The fluid analysis data object details the measurements made on fluid samples by a variety of laboratory or online tests. With the separation of fluid samples into hydrocarbon and water samples, laboratory fluid analysis has also been separated into hydrocarbon and water analysis. In addition, analysis for non-hydrocarbon species, which is often done in-line, is supported. For laboratory hydrocarbon or water analysis, the sample being tested is identified. For non-hydrocarbon analysis, either the sample or the associated flow test (for in-line measurements) is identified. All types can report the date, purpose, analyst, and company conducting the test. As with sample acquisition, the laboratory report or reports resulting from the analysis may be linked or attached to the data file produced.



Figure 10-8 shows the types of analysis supported by this data object.

[Diagram content for Figure 10-8: the FluidAnalysis object (request date, client, start/end times, analysis description, purpose, site, lab contact, standard conditions, analysis quality, fluid component catalog, remark) and its specializations HydrocarbonAnalysis, WaterAnalysis, and NonHydrocarbonAnalysis, with links to FluidSample and FlowTestActivity.]

Figure 10-8. Types of fluid analysis tests.

For hydrocarbon analysis, all of the routine/standard laboratory analyses have been addressed (these are listed later). Each test follows a common structure: a “header” record containing contextual data, and a series of “step” records containing the changing conditions and measurements. Typically, the header record is given the name of the test and a laboratory-assigned test number. Other information in the header includes the constant and/or reference conditions (such as temperature in isothermal tests) at which the test is conducted. The step record consists of the conditions changed in the test (such as pressure) and the property measurements made at that step. For many test types, the volumes and compositions of the fluid phases present at each step may also be captured.
The same header and step record structure is used for water analysis. The properties for water include isothermal density, formation volume factor, viscosity, salinity, solution gas-water ratio and its corresponding gas composition, and compressibility as functions of pressure. Water (ionic) composition can also be recorded.
Note that fluid composition throughout uses a common model, which is described in Section 10.8.
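The header/step pattern shared by these tests can be sketched as follows. The class and field names are illustrative only (the actual elements differ per test type); the example values are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class TestStep:
    step_number: int
    step_pressure_psia: float            # the condition varied across steps
    measurements: dict = field(default_factory=dict)  # e.g. {"RelativeVolume": ...}

@dataclass
class AnalysisTest:
    test_number: int                     # laboratory-assigned, in the "header"
    test_temperature_degF: float         # constant/reference condition (header)
    steps: list = field(default_factory=list)

# A hypothetical isothermal test: one header, several pressure steps.
cce = AnalysisTest(test_number=7, test_temperature_degF=212.0)
cce.steps.append(TestStep(1, 5000.0, {"RelativeVolume": 0.98}))
cce.steps.append(TestStep(2, 3500.0, {"RelativeVolume": 1.04}))
```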


Types of Laboratory Analysis Supported


The types of hydrocarbon analysis tests are shown in Figure 10-10. The water analysis tests are
shown in Figure 10-12.

Sample Integrity and Preparation


The initial description of a fluid sample is handled as a distinct test type. The data captured includes opening pressures, volumes and phases present, gas-oil ratios (GORs), densities, and/or specific gravities. Contaminants or other adverse characteristics may be identified within the fluid sample, and the laboratory may conduct and record additional steps to make the fluid sample more representative of the in-situ reservoir fluid or stock tank oil. The purpose of these analyses is to establish sample validity and repeatability, ensuring confidence in the results of PVT testing analyses yet to be performed. Extended validation tests, such as saturation (bubble point) pressure, viscosity measurements, air content, water content, etc., are not described in this test but may be reported in other tests.
An important step in the preparation of some separator samples is the recombination of distinct oil
and gas samples into a reservoir fluid sample. This involves the physical manipulation of separator oil
and gas samples to duplicate the reservoir and producing conditions observed in the field. The
sample produced by this test can then be subjected to a variety of PVT and flow assurance testing to
describe the reservoir from which the samples were collected.
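Conceptually, the recombined wellstream composition is a mole-fraction-weighted blend of the separator gas and oil compositions. The sketch below shows that mole balance; in practice the gas mole fraction would be derived from the measured GOR and fluid properties, and the two-component compositions here are purely hypothetical.

```python
def recombine(gas_comp, oil_comp, gas_mole_fraction):
    """Mole-balance blend: z_i = fg * y_i + (1 - fg) * x_i.
    gas_comp / oil_comp map component name -> mole fraction."""
    fg = gas_mole_fraction
    components = set(gas_comp) | set(oil_comp)
    return {c: fg * gas_comp.get(c, 0.0) + (1 - fg) * oil_comp.get(c, 0.0)
            for c in components}

# Hypothetical two-component illustration with 60 mol% separator gas.
z = recombine({"methane": 0.9, "ethane": 0.1},
              {"methane": 0.2, "ethane": 0.8},
              gas_mole_fraction=0.6)

# The blended composition must still sum to unity.
assert abs(sum(z.values()) - 1.0) < 1e-12
```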

Sample Contaminant
A fluid analysis is conducted on one fluid sample. This sample may have been contaminated, e.g., by
mud filtrate. This can be reported by the sample contaminant object. Multiple contaminants can be
reported. Each reports the proportion of contaminant, its composition and other properties, and can
optionally reference a sample of the contaminant itself (e.g., a mud filtrate sample which was taken for
this purpose).


[Diagram content for Figure 10-9: the SampleContaminant element (contaminant kind, stock-tank and live-sample weight/volume fractions, molecular weight, density, contaminant composition, description) with its FluidContaminant enumeration and an optional reference to a sample of the contaminant, and the SampleIntegrityAndPreparation element (opening date, initial volume, opening pressure/temperature, saturation pressure/temperature, basic sediment and water, free water volume, water content in hydrocarbon) with its SampleRestoration steps (restoration pressure/temperature, mixing mechanism).]

Figure 10-9. QC aspects of Fluid Analysis, showing the ability to report sample contaminants and how the sample was prepared for analysis.

Hydrocarbon Analyses
The different types of hydrocarbon analysis can be seen in Figure 10-10 (at outline level). An
instance of fluid analysis can contain any or all of the analyses shown.
Figure 10-11 shows the model for an example analysis (Constant Composition Expansion) and also
shows a concept common to many analysis kinds—the test step. This is a set of properties that
repeat at different pressures or temperatures.
The same figure also shows a concept used in certain analyses: the volume reference. This is for tests where the liquid volume is reported as a fraction, and the volume reference defines what that fraction refers to. The parent “Test” contains one or more Fluid Volume Reference elements, which use an enumeration of kinds of reference volume (e.g., at saturation conditions); the volume at a test step is referenced to one of these using the Liquid Fraction concept. The liquid fraction is of type Relative Volume Ratio, which uses the uid of the relevant Fluid Volume Reference to define which reference condition applies. (It is done this way to support the use of multiple volume references within one test, if required.)
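A small sketch of this uid-based lookup follows. Only the referencing pattern follows Figure 10-11; the data values and the resolution helper are hypothetical.

```python
# FluidVolumeReference elements carried by the parent test, keyed by uid.
volume_references = {
    "ref-sat": {"Kind": "saturation-measured", "ReferenceVolume_cm3": 55.2},
    "ref-stock": {"Kind": "stock tank"},
}

# A step's liquid fraction names the reference it is expressed against
# via its fluidVolumeReference attribute (a Fluid Volume Reference uid).
liquid_fraction = {"value": 0.31, "fluidVolumeReference": "ref-sat"}

def resolve_reference(fraction, references):
    # Look up which reference volume the fraction is relative to.
    return references[fraction["fluidVolumeReference"]]

assert resolve_reference(liquid_fraction, volume_references)["Kind"] == "saturation-measured"
```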
To save space, the model diagrams for the remaining hydrocarbon analyses are not shown but can all
be seen in the UML provided in the documentation.


[Diagram content for Figure 10-10: the HydrocarbonAnalysis class and its test types: OtherMeasurementTest, AtmosphericFlashTestAndCompositionalAnalysis, VaporLiquidEquilibriumTest, ConstantCompositionExpansionTest, SwellingTest, SaturationTest, SlimTubeTest, DifferentialLiberationTest, MultipleContactMiscibilityTest, ConstantVolumeDepletionTest, STOAnalysis, FluidSeparatorTest, and InterfacialTensionTest.]

Figure 10-10. Hydrocarbon analysis types.

Atmospheric Flash and Compositional Analysis


An atmospheric flash takes the reservoir fluid to stock tank conditions, producing a dead oil sample
and a gas sample. These samples are used to measure basic properties like the oil’s API gravity, the
gas-oil ratio, and the compositions of the oil and gas.
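The API gravity reported from the atmospheric flash is a fixed transform of the dead oil's specific gravity relative to water; this is the standard industry definition rather than anything PRODML-specific.

```python
def api_gravity(specific_gravity: float) -> float:
    """Degrees API from the stock tank oil's specific gravity (water = 1.0)."""
    return 141.5 / specific_gravity - 131.5

# By definition, oil with the density of water is 10 degrees API;
# lighter oils have higher API gravity.
assert api_gravity(1.0) == 10.0
```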

Constant Composition Expansion


A constant composition expansion (CCE) is an isothermal test that establishes the pressure/volume relationship for a fluid system as it may be depleted in a reservoir. The fluid sample is progressively allowed to expand with decreasing pressure without removing any of the sample. Measurements of this test include saturation pressure, relative volume, single-phase densities, and other details depending on fluid type, for example, compressibility (oil), deviation factors (gas), and liquid dropout (condensates). As an example of test types, the CCE model is shown in Figure 10-11.


[Diagram content for Figure 10-11: the ConstantCompositionExpansionTest class (test number, test temperature, saturation pressure, liquid fraction and relative volume references, test steps, remark) and ConstantCompositionExpansionTestStep (step number, step pressure, liquid fraction, oil density/compressibility/viscosity, total volume, relative volume ratio, gas density/Z factor/compressibility/viscosity, Y function, fluid condition, phases present, vapor/liquid/overall compositions), together with the FluidVolumeReference and RelativeVolumeRatio types and the VolumeReferenceKind enumeration (reservoir, saturation-calculated, saturation-measured, separator stages 1 through 10, stock tank, test step, other).]

Figure 10-11. Example of a hydrocarbon analysis model (Constant Composition Expansion).
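The YFunction element in the CCE test step is conventionally the PVT smoothing function Y = (Psat - P) / (P * (V/Vsat - 1)), evaluated at step pressures below the saturation pressure. The sketch below states that conventional definition for context; PRODML itself simply stores the reported value.

```python
def y_function(p_sat: float, p: float, relative_volume: float) -> float:
    """Conventional PVT Y-function at a CCE step below the bubble point.
    relative_volume is V/Vsat, which exceeds 1.0 below saturation pressure."""
    if p >= p_sat or relative_volume <= 1.0:
        raise ValueError("Y-function is defined only below the saturation pressure")
    return (p_sat - p) / (p * (relative_volume - 1.0))

# Hypothetical step: Psat = 5000 psia, step at 2500 psia with V/Vsat = 1.5.
assert y_function(5000.0, 2500.0, 1.5) == 2.0
```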

Saturation Test
Within the context of PVT studies, a saturation test is a test in which the bubble point or dew point pressure of a liquid or gas system is determined at a constant temperature. The test involves determining the pressure/volume relationship of a constant amount of reservoir fluid at constant temperature. In most cases, a saturation point determination can be part of a constant composition expansion (CCE) test. This test is considered the most basic PVT study on a live fluid sample.

Differential Liberation
Differential liberation is a test to simulate the depletion of a black-oil or volatile-oil reservoir system at a constant reservoir temperature. This test is designed to measure what happens to the reservoir fluid as it is depleted below the bubble point, at which point evolved gas begins to flow segregated from the oil. Measurements from this test include formation volume factors for oil and gas, and the solution gas-oil ratios and viscosities below saturation pressure. The measurements from this test, corrected using separator test results, are appropriate for material balance calculations in reservoir engineering models.
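One widely used way to make that correction (standard reservoir-engineering practice, e.g. McCain's adjustment, not something defined by PRODML) scales the differential data by the ratio of the flash to differential formation volume factors at the bubble point. The example values below are hypothetical.

```python
def corrected_bo(bo_d: float, bo_fb: float, bo_db: float) -> float:
    """Separator-corrected oil FVF at pressure P.
    bo_d: differential FVF at P; bo_fb: flash FVF at the bubble point
    (from the separator test); bo_db: differential FVF at the bubble point."""
    return bo_d * (bo_fb / bo_db)

def corrected_rs(rs_d: float, rs_fb: float, rs_db: float,
                 bo_fb: float, bo_db: float) -> float:
    """Separator-corrected solution GOR at pressure P, using the same
    bubble-point FVF ratio to scale the differential solution-gas change."""
    return rs_fb - (rs_db - rs_d) * (bo_fb / bo_db)

# Hypothetical data: differential Bo of 1.30 at P, with bubble-point
# values Bofb = 1.28 (flash) and Bodb = 1.35 (differential).
bo = corrected_bo(1.30, 1.28, 1.35)
rs = corrected_rs(400.0, 600.0, 700.0, 1.28, 1.35)
```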

Constant Volume Depletion


Constant volume depletion (CVD) is a test to simulate the production of a retrograde gas-condensate system through the depletion of the reservoir below the dew point. The test simulates what occurs when the liquid dropout stays in the reservoir and the surface production stream becomes leaner relative to the single-phase system. Deliverables from this test include well stream compositions below the dew point, and the average liquid yields and Z-factors as the fluid system is produced to abandonment pressure.


Separator
A separator test mimics the processing of produced fluids through one or more stages of separation.
Primarily for black-oil and volatile-oil fluid systems, the results of this test are used to optimize the
recovery of hydrocarbon products by testing samples at a series of separation stages and settings.
This test also measures key parameters, such as formation volume factors and densities, shrinkages,
API gravity, and producing GOR that are used both in reservoir engineering calculations and facilities
design.

Transport
This test assesses the properties of the fluid relevant to transportation. A series of test steps for properties (such as viscosity, density, etc.) can be defined. Additionally, tabular data can be defined for the test, using the same tabular format as available for fluid characterization (see Section 10.7.2).

Vapor-Liquid Equilibrium
A vapor-liquid equilibrium (VLE) test is a specialized PVT test that is used to represent phase equilibria in a gas injection enhanced oil recovery (EOR) process. In this test, a mixture of oil and injection gas is equilibrated at a fixed condition of pressure and temperature where two distinct vapor and liquid phases are present. The composition and pressure of the system (generally at fixed reservoir temperature) are chosen so that equilibrium is established where maximum mass transfer occurs between the two phases (near the critical condition/fluid). Properties of each phase (composition, density, viscosity) are also used to optimize an equation-of-state model to represent phase equilibria during the gas injection EOR process.

Swelling Test
A swelling test is a PVT test that is used for characterization of gas injection EOR processes. In essence, a swelling test is a series of constant composition expansion tests, in each of which an increasing amount of a particular injection gas is added to a reservoir oil sample at fixed temperature. In each expansion test, the saturation pressure and the physical and transport properties of the mixture (density and viscosity) can be determined. Swelling test data are used to optimize an equation-of-state model that properly represents the phase behavior of the reservoir oil and injection gas mixture at different mixing ratios.

Slim Tube Test


A slim tube test is used to study the miscibility condition of a reservoir oil sample when it comes in contact with a gas sample flowing inside a long tube filled with sand (porous media) in a near one-dimensional flow condition. In general, a number of slim tube tests are performed at constant temperature with varying pressures to determine the minimum pressure at which miscibility is achieved after injection of a certain pore volume of gas. The minimum miscibility pressure (MMP), together with production information during slim tube tests, may be used to optimize an equation-of-state model for use in reservoir simulations. Slim tube tests may also be used to optimize an injection gas composition that ensures miscibility at the conditions of a particular oil reservoir.

Multiple Contact Miscibility


A multiple contact miscibility test is a specialized PVT test that represents the establishment of miscibility when a gas and an oil sample come in contact under a fixed condition of pressure and temperature. The test involves multiple contacts of a fixed amount of gas with multiple fresh charges of an oil sample, where intermediate components from the oil sample are stripped consecutively (forward contact). In the case of a backward contact test, a fixed charge of an oil sample comes in contact with multiple charges of fresh rich gas, so that intermediate components in the gas phase are transferred to the oil phase. The process continues until miscibility is achieved. During each contact, the composition of each phase is measured. Such a mass transfer process and compositional change may be shown using a pseudo-ternary diagram. Results of such a study are generally used to optimize an equation-of-state (EoS) model. The optimized EoS model is generally used in a compositional reservoir simulator to properly capture the miscibility process in a reservoir during a gas injection EOR process.

STO Analysis
For representative samples of stock tank oil, an additional set of stock tank oil (STO) analysis tests
may be conducted to provide measurements of viscosity (at temperature), pour point, cloud point, wax
appearance temperature, paraffin content, asphaltene content and SARA, Reid vapor pressure, total
acid number, total sulfur, molecular weight, water content, and the amounts of lead, nickel, vanadium
and elemental sulfur present.

Standard: v2.2 / Document: v1.0 / May 16, 2022 88

PRODML Technical Usage Guide

Interfacial Tension
Interfacial tension tests measure the interfacial tension between two phases, which are labelled the
wetting phase and the non-wetting phase. A surfactant can also be defined. Each test step reports the
interfacial tension and, optionally, the surfactant concentration.

Water Analysis
In addition to the hydrocarbon tests described above, a number of standardized lab tests can be
performed on water samples. The measurements supported include properties of the water such
as salinity and hardness. Optionally, test steps can be defined reporting properties such as
dissolved gases and volume factors at a series of pressure-temperature conditions. In addition, a set
of anion/cation/organic acid concentrations can be reported, for use in scale modeling and water
injection studies. Figure 10-12 shows the water analysis data model.

[Figure 10-12 diagram: the WaterAnalysis top-level object contains WaterAnalysisTest (0..*) and WaterSampleComponent (0..*) elements; each WaterAnalysisTest contains WaterAnalysisTestStep (0..*) elements carrying pressure, temperature, dissolved-gas, and water-property measurements.]

Figure 10-12. Water Analysis model.
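As a concrete illustration, a water analysis instance might look like the hypothetical fragment below. Element names follow the model in Figure 10-12, but the nesting, units, and values are illustrative only and have not been validated against the schema:

```xml
<WaterAnalysis>
  <WaterAnalysisTest uid="wat-1">
    <TestNumber>1</TestNumber>
    <SalinityPerMass uom="ppm">35000</SalinityPerMass>
    <TotalHardnessPerMass uom="ppm">420</TotalHardnessPerMass>
    <!-- Optional test steps at a series of pressure-temperature conditions -->
    <WaterAnalysisTestStep uid="step-1">
      <StepNumber>1</StepNumber>
      <StepPressure uom="psi">2500</StepPressure>
      <StepTemperature uom="degF">180</StepTemperature>
      <pH>6.8</pH>
      <WaterDensity uom="kg/m3">1025</WaterDensity>
    </WaterAnalysisTestStep>
  </WaterAnalysisTest>
  <!-- Ion concentrations for scale modeling -->
  <WaterSampleComponent uid="comp-1">
    <Cation>sodium</Cation>
    <MassConcentration uom="ppm">11000</MassConcentration>
  </WaterSampleComponent>
</WaterAnalysis>
```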

Non-Hydrocarbon Analysis
Non-hydrocarbon analysis refers to the detection of specific species, such as H2S, which are present
in the fluid stream.


If these are detected in the laboratory analysis, they can be reported in the appropriate test, as listed
above. The non-hydrocarbon molecular species are listed in the enumeration for Pure Fluid
Component. Sulfur species are split out into a separate Sulfur Fluid Component with its own enumeration.
However, some of the tests for these species are carried out with in-line equipment; no sample is
taken, nor is a sample transported to a laboratory. For this reason, this type of Fluid Analysis
can link either to a Fluid Sample (like Hydrocarbon and Water Analyses) or to a Flow Test Activity.
This can be seen in Figure 10-8.
[Figure 10-8 diagram: the abstract FluidAnalysis object (with elements such as RequestDate, Client, StandardConditions, AnalysisQuality, and FluidComponentCatalog) is specialized into NonHydrocarbonAnalysis, HydrocarbonAnalysis (carrying the test types described above), and WaterAnalysis; a FluidAnalysis references 0..* associated FluidSample objects, and a NonHydrocarbonAnalysis may instead reference a FlowTestActivity.]
Figure 10-8.
To report a Non-hydrocarbon Analysis as a standalone data object, the Non Hydrocarbon Analysis type
of Fluid Analysis can be used. This is shown in Figure 10-13.
A single test may yield the concentrations of one or more molecular species. The concentrations are
reported using the Non Hydrocarbon Concentrations element seen in the figure. This uses the same
construct of Overall Composition as used for the various Hydrocarbon Analyses. As stated above,
non-hydrocarbon molecular species are listed in the enumeration for Pure Fluid Component. Sulfur
species are split out into a separate Sulfur Fluid Component with its own enumeration.

[Figure 10-13 diagram: NonHydrocarbonAnalysis contains NonHydrocarbonTest (0..*) elements, each with TestNumber, TestTime, TestVolume, PhasesTested, TestTemperature, TestPressure, AnalysisMethod, SamplingPoint, CellId, InstrumentId, NonHydrocarbonConcentrations (an OverallComposition), OtherMeasuredProperties, and Remark.]

Figure 10-13. Non-hydrocarbon Analysis.
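For example, an in-line H2S measurement could be captured in a hypothetical fragment like the following. Names follow Figure 10-13; the fluidComponentReference assumes an H2S entry exists in the parent Fluid Component Catalog, and the values are illustrative only:

```xml
<NonHydrocarbonAnalysis>
  <NonHydrocarbonTest uid="nht-1">
    <TestNumber>1</TestNumber>
    <TestTemperature uom="degC">60</TestTemperature>
    <TestPressure uom="bar">80</TestPressure>
    <AnalysisMethod>in-line gas chromatography</AnalysisMethod>
    <!-- Concentrations use the common Overall Composition construct -->
    <NonHydrocarbonConcentrations>
      <FluidComponent fluidComponentReference="h2s">
        <MoleFraction uom="%">0.5</MoleFraction>
      </FluidComponent>
    </NonHydrocarbonConcentrations>
  </NonHydrocarbonTest>
</NonHydrocarbonAnalysis>
```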


10.7 Fluid Characterization


The fluid characterization data object (Figure 10-14) describes the characteristics and properties of a
fluid sample or fluid system under the conditions expected in a historical or future state. The
characterization may be based on the measurements from one or more fluid samples, and usually
(but not always) uses a specialized application that simulates fluid behavior. The typical workflow is to
characterize individual fluid samples as soon as their data are delivered to the sample owner from a
fluid analysis laboratory. Although not standardized, the typical approach includes iterative
optimization of characteristics (and in some cases amount) of individual compositional components
representing the fluid sample, and parameters of an EoS model (or other mathematical models) to
simultaneously represent (simulate) and evaluate (consistency/QC check) experimental PVT data. In
some cases, it is necessary either to represent multiple fluid samples with one characterization or to re-
optimize an existing characterization for different operating conditions.
Such a developed characterization may be used to produce a data set descriptive of the fluid behavior
appropriate for use in upstream workflows, such as reservoir simulation, flow in tubulars, and flow
through surface facilities. Flow assurance data (e.g., solid deposit rates, multiphase flow regime
maps) are not currently supported, although some of the underlying data (e.g., asphaltenes) are included.
Three basic formats are available to represent fluid characterization results for delivery to consumer
applications:
• Model (model kind + parameters) (green box)
• Tabular (red box)
• Set of Fluid Characterization Parameters (blue box).


[Figure 10-14 diagram: the FluidCharacterization top-level object contains FluidCharacterizationModel (0..*) elements; each model may have a ModelSpecification (an AbstractPvtModel, green box), FluidCharacterizationTable (0..*) elements (red box), and a FluidCharacterizationParameterSet of 1..* FluidCharacterizationParameter elements (blue box).]

Figure 10-14. Fluid characterization high-level model, showing fluid characterization by model (green
box), by table (red box) or by a set of parameters (blue box).

The basic structure is shown in Figure 10-14. A fluid characterization has any number of fluid
characterization sources. Each is a reference to a fluid analysis and to the specific tests in that
analysis which were used for this characterization. The fluid system is also referenced. There is then
a set of fluid characterization models, each of which can have a parametric model representation
(see below), and/or a set of tables of output properties, and/or a set of Fluid Characterization
Parameters.
A fluid characterization may also be represented as a combination of tabular and parametric
approaches (e.g., in thermal compositional reservoir simulation applications), where part of fluid
characterization (e.g., k-values) is represented in a parametric model (EoS), and another part of fluid
characterization (e.g., component viscosities) is represented in a tabular format. It is important to note
that the current schema is designed to represent only one fluid sample/system at a time.
With either representation, information is included to describe the geologic feature and the fluid
system being characterized. Also provided are references to the specific fluid samples and laboratory
tests that were used as the basis for the characterization. Additional information found in the fluid
characterization data object includes the identity (and version) of the application used for the
characterization, and the use case for which the characterization was done.


Note that the composition of fluids throughout uses a common model which is described in Section
10.8.

Model-Parametric Output
In the model-parametric approach, parameters of a named EoS model determined during the
characterization effort are provided to an application using the same model formulation. Typical
information found in a parametric format includes:
• Identification and parameters of EoS models.
• Characteristics of each component used in characterization, such as critical properties, acentric
factor, etc.
• Identification and parameters of viscosity models.
• Parameters of equations to represent thermal properties of components.
• Mixing parameters for density and viscosity models.
The models are arranged in a hierarchy, with the top level being compositional (i.e., with a set of fluid
components each having fluid component properties), or correlation (i.e., a set of parameters and a
model which fits the behavior of the whole fluid) (Figure 10-15). A fluid characterization model can
have a model specification, and the specification is abstract down to the level of a known model which
then has its required parameters (see below for more details).
Correlation models are currently all viscosity models and are divided into dead oil, bubble point,
under-saturated oil and gas viscosity correlations.
Compositional models are available as EoS (phase behavior) and viscosity models.
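As a sketch of the parametric representation, a compositional EoS model for one pseudo component might be transferred as below. The concrete model name, element nesting, units, and values here are hypothetical placeholders; the actual concrete model classes that may be instantiated are listed in Section 12.1.4:

```xml
<Model uid="model-1">
  <Name>EoS characterization</Name>
  <!-- "PengRobinson" is a placeholder for a concrete EoS model class -->
  <ModelSpecification xsi:type="PengRobinson">
    <FluidComponentProperty fluidComponentReference="c7plus">
      <CriticalPressure uom="bar">25.3</CriticalPressure>
      <CriticalTemperature uom="K">660</CriticalTemperature>
      <AcentricFactor>0.52</AcentricFactor>
      <VolumeShiftParameter>0.08</VolumeShiftParameter>
    </FluidComponentProperty>
    <!-- Each coefficient relates one fluid component to another -->
    <BinaryInteractionCoefficientSet>
      <Coefficient fluidComponent1Reference="c1"
                   fluidComponent2Reference="c7plus">0.03</Coefficient>
    </BinaryInteractionCoefficientSet>
  </ModelSpecification>
</Model>
```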


[Figure 10-15 diagram: AbstractPvtModel specializes into AbstractCorrelationModel (with subtypes AbstractCorrelationViscosityModel and CorrelationThermalModel) and AbstractCompositionalModel (with an optional MixingRule of asymmetric or classical, and subtypes AbstractCompositionalViscosityModel, CompositionalThermalModel, and AbstractCompositionalEoSModel).]

Figure 10-15. Top Level of PVT Model hierarchy.

Figure 10-16 shows more details of the models. All models (correlation and compositional) can have
model-wide PVT model parameters. All models can also have custom
extensions, allowing extra parameters defined by specialist studies to be added.
Compositional models can also have a range of fluid component properties defined per fluid
component, and a set of binary interaction coefficients, each relating one fluid component to another.

[Figure 10-16 diagram: every AbstractPvtModel can carry a PvtModelParameterSet (1..* PvtModelParameter, each with a kind from PvtModelParameterKindExt) and a CustomPvtModelExtension of custom parameters; an AbstractCompositionalModel can additionally carry FluidComponentProperty elements (critical pressure/temperature/volume, acentric factor, omega A/B, volume shift parameter, parachor, etc., each keyed by fluidComponentReference) and a BinaryInteractionCoefficientSet whose coefficients reference two fluid components.]

Figure 10-16. Details of PVT model types and parameters available.

For a list of the specific models (which are concrete classes that may be instantiated in the XML data
transfer), see Section 12.1.4. The schema is designed so that each model inherits its correct
parameter set, and as noted above, custom parameters for extended models can be added.

Tabular Output
In a tabular format, fluid properties (such as pressure, density, viscosity, etc.) are provided as rows of
a table where a target application could use these fluid properties directly. The organization of each
table is defined for each characterization file, and many table formats may be used in a single
characterization. This organization gives the ordering, properties, and units of measure for each
column, and allows constant values and delimiters to be assigned. Typical information found in a
tabular format includes:
• Independent operating conditions such as pressure and temperature.
• Fluid properties such as oil and gas formation volume factor, solution gas or dissolved liquid,
viscosity and compressibility.
• Slope of compressibility/formation volume factor vs. pressure line in the under-saturated region, if
saturated fluid properties are provided.
• Tabular representation of k-values or component viscosities as a function of pressure and
temperature.
• Miscibility tables in case of gas/solvent EOR fluid characterization.
The organization of the tabular model is shown in Figure 10-17. The table format defines null values
and delimiters, plus a set of column headings which detail what comes in each column. The rows are
then strings of delimited values (e.g., CSV) which contain the values in the columns, plus a
saturated/under-saturated flag. Additionally, the table can contain table constants. The contents of the
Columns or Constants are available as an enumerated list of Output Fluid Property, seen in the figure.
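To make the row structure concrete, a consumer application might split a delimited row back into named columns along the lines of the sketch below. The column names, delimiter, and null value are hypothetical examples, not taken from an actual PRODML file:

```python
# Sketch: parse one FluidCharacterizationTable row using its declared
# table format (delimiter and null value from FluidCharacterizationTableFormat).
def parse_table_row(row, columns, delimiter=" ", null_value="-999.25"):
    """Split a delimited table row into a {column: float-or-None} mapping."""
    values = [v for v in row.split(delimiter) if v != ""]
    if len(values) != len(columns):
        raise ValueError("row does not match the declared column count")
    # Null-valued cells become None instead of a misleading number.
    return {c: (None if v == null_value else float(v)) for c, v in zip(columns, values)}

columns = ["Pressure", "Formation Volume Factor", "Viscosity"]
parsed = parse_table_row("250.0 1.205 -999.25", columns)
```

Here the viscosity cell carries the format's null value, so it parses to `None` rather than `-999.25`.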

[Figure 10-17 diagram: each FluidCharacterizationModel contains FluidCharacterizationTable (0..*) elements governed by a FluidCharacterizationTableFormat (NullValue, Delimiter, and 1..* FluidCharacterizationTableColumn entries with property, uom, sequence, phase, and keyword aliases); each table holds TableRow (1..*) delimited-string rows with an optional saturated/undersaturated SaturationKind flag, plus TableConstant parameters.]

Figure 10-17. Tabular Output model.

Set of Fluid Parameters Output


This option for fluid characterization is appropriate where the application only needs a set of
parameters. An example is pressure transient analysis, which for liquid phases generally assumes
constant temperature and constant compressibility; a table would therefore provide extra dimensions
that are not needed. It is simpler to provide a set of parameters (e.g., density, viscosity) at a
fixed pressure-temperature condition.
This can be achieved by using a Fluid Characterization Parameter Set, which simply contains many
Fluid Characterization Parameters. These each hold a Property (from the extensible enumeration),
and its value and UoM. Optionally, a reference to the Fluid Component concerned can be added
(Figure 10-18).
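For instance, a minimal parameter set for pressure transient analysis might be transferred as below. Element names follow Figure 10-18; the parameter names, units, and values are illustrative only:

```xml
<FluidCharacterizationParameterSet>
  <FluidCharacterizationParameter name="oil density" value="845.0" uom="kg/m3">
    <Property>Density</Property>
  </FluidCharacterizationParameter>
  <FluidCharacterizationParameter name="oil viscosity" value="1.9" uom="cP">
    <Property>Viscosity</Property>
  </FluidCharacterizationParameter>
  <FluidCharacterizationParameter name="total compressibility" value="1.2E-5" uom="1/psi">
    <Property>Compressibility</Property>
  </FluidCharacterizationParameter>
</FluidCharacterizationParameterSet>
```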


[Figure 10-18 diagram: a FluidCharacterizationModel may carry a FluidCharacterizationParameterSet of 1..* FluidCharacterizationParameter elements (name, value, uom, optional fluidComponentReference, and a Property drawn from the extensible OutputFluidProperty enumeration, whose values include Compressibility, Density, Enthalpy, Entropy, Formation Volume Factor, K value, Pressure, Saturation Pressure, Solution GOR, Temperature, Viscosity, Z Factor, and others).]

Figure 10-18. Fluid Characterization by Set of Fluid Parameters.

10.8 Fluid Composition


Fluid composition refers to the mixture of fluid components in a hydrocarbon fluid. A fluid composition
has a list of fluid components and the molar fraction of each.
Fluid components are the individual molecular types and are specified by including them inside the
fluid component catalog. This is the place where properties such as molecular weight are reported.
A particular Fluid Composition is then a collection of Fluid Component Fractions. Each Fluid
Component Fraction has the molar fraction and a reference (using uid) to the fluid component
within the Fluid Component Catalog.
This method of describing fluid composition is used both in the fluid analysis and the fluid
characterization objects. It is also used in the Simple Product Volume Reporting and in the Flow Test
Activity PRODML schemas.
The roles of fluid composition, fluid component and fluid component catalog are shown in Figure
10-19.
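The catalog-and-reference pattern can be sketched as the hypothetical, non-validated fragment below. Element names follow Figure 10-19; the uids, kinds, and fractions are illustrative:

```xml
<FluidComponentCatalog>
  <PureFluidComponent uid="c1">
    <Kind>methane</Kind>
    <MolecularWeight uom="g/mol">16.04</MolecularWeight>
    <HydrocarbonFlag>true</HydrocarbonFlag>
  </PureFluidComponent>
  <PseudoFluidComponent uid="c7plus">
    <Kind>c7+</Kind>
    <AvgMolecularWeight uom="g/mol">210</AvgMolecularWeight>
  </PseudoFluidComponent>
</FluidComponentCatalog>
<!-- ... later, a composition references the catalog entries by uid ... -->
<OverallComposition>
  <FluidComponent fluidComponentReference="c1">
    <MoleFraction uom="%">62</MoleFraction>
  </FluidComponent>
  <FluidComponent fluidComponentReference="c7plus">
    <MoleFraction uom="%">38</MoleFraction>
  </FluidComponent>
</OverallComposition>
```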


[Figure 10-19 diagram: the FluidComponentCatalog lists StockTankOil, NaturalGas, FormationWater, PureFluidComponent, PseudoFluidComponent, PlusFluidComponent, and SulfurFluidComponent entries (each 0..*); OverallComposition, LiquidComposition, and VaporComposition each contain 0..* FluidComponentFraction elements whose fluidComponentReference attribute points, by uid, at a component in the catalog.]

Figure 10-19. Fluid Composition model common to all PRODML.

The Fluid Analysis and Fluid Characterization both include a Fluid Component Catalog at the top
level. All analyses and characterizations, respectively, use this catalog to reference the fluid
compositions which they require. If desired, every fluid analysis and characterization within a workflow
can use a single set of possible fluid components, by having the same Fluid Component Catalog in
the parent analysis or characterization objects. Note that a fluid composition does not need to use all
of the fluid components in the fluid component catalog, only the ones required. It follows, however, that
every component which is going to be used to describe a composition in a fluid analysis or fluid
characterization in that data object must be included in the fluid component catalog.
An example of the use of a fluid composition within fluid analysis can be seen in Figure 10-11, which
shows the Constant Composition Expansion test. Each test step (a pressure-temperature step)
contains vapor, liquid and overall compositions. Each of these composition types is shown in Figure
10-19.
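A consumer could enforce this catalog rule with a check along the lines of the sketch below: every fluidComponentReference used in a composition must resolve to a component uid in the catalog. Element names follow the figures; the embedded instance fragment is hypothetical and not schema-validated:

```python
import xml.etree.ElementTree as ET

# Hypothetical instance: a two-entry catalog and a composition that references it.
doc = ET.fromstring("""
<FluidAnalysisExample>
  <FluidComponentCatalog>
    <PureFluidComponent uid="c1"><Kind>methane</Kind></PureFluidComponent>
    <PureFluidComponent uid="h2s"><Kind>hydrogen sulfide</Kind></PureFluidComponent>
  </FluidComponentCatalog>
  <OverallComposition>
    <FluidComponent fluidComponentReference="c1"><MoleFraction uom="%">85</MoleFraction></FluidComponent>
    <FluidComponent fluidComponentReference="h2s"><MoleFraction uom="%">15</MoleFraction></FluidComponent>
  </OverallComposition>
</FluidAnalysisExample>
""")

# Every reference in the composition must resolve to a uid in the catalog.
catalog_uids = {c.get("uid") for c in doc.find("FluidComponentCatalog")}
references = {f.get("fluidComponentReference") for f in doc.find("OverallComposition")}
unresolved = references - catalog_uids
```

An empty `unresolved` set means the composition only uses components declared in the catalog, as the text requires.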
Another use of this referencing of a fluid component within the fluid component catalog can be seen in
Figure 10-16: in the fluid component property class, the attribute at the bottom of the list is the fluid
component reference. This again contains the uid for the fluid component which is within the catalog
for the parent fluid characterization object. This fluid component property class contains the various
Equation of State parameters required for that component. The binary interaction coefficient has the
same referencing but for the two components whose binary interaction coefficient is being reported.
Figure 10-20 shows the fluid components (all derived from an abstract) in the catalog which includes:
• stock tank oil
• natural gas
• formation water
• pure fluid component
• pseudo fluid component
• plus fluid component

• sulfur fluid component


The first three are aimed at black oil descriptions and the remaining four at compositional descriptions;
however, the two kinds may be mixed. Figure 10-20 shows the attributes of each type of fluid
component.

«XSDcomplexType» AbstractFluidComponent
   «XSDelement»
   + MassFraction: MassPerMassMeasure [0..1]
   + VolumeConcentration: MassPerVolumeMeasureExt [0..1]
   + MoleFraction: AmountOfSubstancePerAmountOfSubstanceMeasure [0..1]
   + ConcentrationRelativeToDetectableLimits: DetectableLimitRelativeStateKind [0..1]
   «XSDattribute»
   + uid: String64

«XSDcomplexType» FormationWater
   «XSDelement»
   + SpecificGravity: double [0..1]
   + Salinity: MassPerMassMeasure [0..1]
   + Remark: String2000 [0..1]

«XSDcomplexType» StockTankOil
   «XSDelement»
   + APIGravity: APIGravityMeasure [0..1]
   + MolecularWeight: MolecularWeightMeasure [0..1]
   + GrossEnergyContentPerUnitMass: EnergyPerMassMeasure [0..1]
   + NetEnergyContentPerUnitMass: EnergyPerMassMeasure [0..1]
   + GrossEnergyContentPerUnitVolume: EnergyPerVolumeMeasure [0..1]
   + NetEnergyContentPerUnitVolume: EnergyPerVolumeMeasure [0..1]
   + Remark: String2000 [0..1]

«XSDcomplexType» NaturalGas
   «XSDelement»
   + GasGravity: double [0..1]
   + MolecularWeight: MolecularWeightMeasure [0..1]
   + GrossEnergyContentPerUnitMass: EnergyPerMassMeasure [0..1]
   + NetEnergyContentPerUnitMass: EnergyPerMassMeasure [0..1]
   + GrossEnergyContentPerUnitVolume: EnergyPerVolumeMeasure [0..1]
   + NetEnergyContentPerUnitVolume: EnergyPerVolumeMeasure [0..1]
   + Remark: String2000 [0..1]

«XSDcomplexType» PureFluidComponent
   «XSDelement»
   + Kind: PureComponentKindExt
   + MolecularWeight: MolecularWeightMeasure [0..1]
   + HydrocarbonFlag: boolean
   + Remark: String2000 [0..1]

«XSDcomplexType» SulfurFluidComponent
   «XSDelement»
   + Kind: SulfurComponentKindExt
   + MolecularWeight: MolecularWeightMeasureExt [0..1]
   + Remark: String2000 [0..1]

«XSDcomplexType» PseudoFluidComponent
   «XSDelement»
   + Kind: PseudoComponentKindExt
   + SpecificGravity: double [0..1]
   + StartingCarbonNumber: NonNegativeLong [0..1]
   + EndingCarbonNumber: NonNegativeLong [0..1]
   + AvgMolecularWeight: MolecularWeightMeasure [0..1]
   + AvgDensity: MassPerVolumeMeasure [0..1]
   + StartingBoilingPoint: ThermodynamicTemperatureMeasure [0..1]
   + EndingBoilingPoint: ThermodynamicTemperatureMeasure [0..1]
   + AvgBoilingPoint: ThermodynamicTemperatureMeasure [0..1]
   + Remark: String2000 [0..1]

«XSDcomplexType» PlusFluidComponent
   «XSDelement»
   + Kind: PlusComponentKindExt
   + SpecificGravity: double [0..1]
   + StartingCarbonNumber: NonNegativeLong [0..1]
   + StartingBoilingPoint: ThermodynamicTemperatureMeasure [0..1]
   + AvgDensity: MassPerVolumeMeasure [0..1]
   + AvgMolecularWeight: MolecularWeightMeasure [0..1]
   + Remark: String2000 [0..1]
Figure 10-20. Types of fluid components supported for fluid composition.

Section 12.1 lists all the fluid components defined in the schema. However, the enumerated lists are
extensible so user-defined components can be added.
Note that the Fluid Component Catalog and the Fluid Components are contained in the PRODML
Common package. These are also used by the Simple Product Volume and Flow Test Activity data
objects, so are compatible with production volume reports created using PRODML.

Standard: v2.2 / Document: v1.0 / May 16, 2022 99


PRODML Technical Usage Guide

11 Fluid and PVT Analysis: Worked Examples
To help explain how the PVT data objects have been designed to work, an example (all files as part of
the PRODML download) is included and explained here.
The worked example is found in the folder energyml\data\prodml\v2.x\doc\examples\PVT\Worked
Examples and all the files are called “Good Oil 4 xxx” where “xxx” is the name of the data object type
and optionally a number.

11.1 End-to-End Worked Example


The main worked example supplied covers a complete workflow, from acquiring a sample to
characterizing it. All the data objects, and multiple key types and sub-classes within them are used to
show how this workflow comes together in a typical scenario. Figure 11-1 shows the worked example
workflow.
In a fluid sample acquisition job, two samples are taken from a fluid system, filling two fluid sample
containers. One sample is invalid and destroyed. The other sample is split by having a water sample
removed. The hydrocarbon phase and the water phase are then sent for appropriate fluid analyses.
The hydrocarbon is then characterized using compositional models, and tabular output is generated
from black oil models.

Figure 11-1. Overall workflow for PVT worked example. Top-level object types are each given their own
color and the types are underlined. Red boxes indicate events that result in the creation of a new object.

Fluid System
The fluid system is referenced by many other objects (Figure 11-2). This is not a detailed object; the
full model for a reservoir and its sub-divisions containing hydrocarbons is the domain of RESQML.
The rock fluid unit feature in RESQML is the level of granularity which can be reproduced in the fluid
system description. Multiples can be included and they may correspond to, e.g., layers within an
overall reservoir comprising a fluid system.

Figure 11-2. Fluid system describes the reservoir and fluid contained in it (high-level outline).

Fluid Sample Acquisition Job


This object describes the operation whereby one or more samples are acquired. The high-level outline
of the worked example is shown in Figure 11-3. The fluid sample acquisition element recurs, one for
each sample acquired within the job.

Figure 11-3. Fluid sample acquisition job describes the operation of sample acquisition.

As described in Section 10.3, there are five types of fluid sample acquisition; the worked example is for
the downhole sample acquisition type. This represents the data from a downhole sampling tool in a
flowing well. One of the fluid sample acquisitions is shown in Figure 11-4. Additional data can be
added from the schema. Each type of sample acquisition has its own data and references. A well,
wellbore and a well test are included in the examples and are referenced from this fluid sample
acquisition.

Figure 11-4. Fluid sample acquisition has the details of the operation to acquire each individual sample.

Fluid Sample Container


Three fluid sample containers are used in the example: two for the original fluid sample acquisition
and one for the water sample which is sub-sampled from the retained sample. An example is shown
in Figure 11-5. These are relatively simple data objects, which are referenced from other objects as
needed.

Figure 11-5. Fluid sample container example. It does not reference other objects; they reference it.

Fluid Sample
A total of three fluid samples exist in the worked example:
1. Sample is acquired, found to be invalid and destroyed.
2. Sample is acquired and retained, and used for hydrocarbon analysis.
3. Sample is sub-sampled from Sample 2, with a water sample being removed and used for water
analysis.

Important information about the sample concerns its provenance: where it came from in the reservoir,
and which operation created it. This aspect is shown in Figure 11-6.

Figure 11-6. Fluid sample showing references to its provenance.

Note that a fluid sample does not have a 1:1 correspondence with a fluid sample container, because
the sample may be moved between containers during its lifetime. The original fluid sample container is
recorded with the fluid sample acquisition job. Any subsequent changes are recorded in the fluid
sample chain of custody event element. Two such events are recorded for Sample 2:
1. Custody transfer to a new custodian (a PVT Lab), also recording the current fluid sample
container.
2. A sub-sample of dead fluid is taken into a new container, creating a new fluid sample. This
event is shown in detail in Figure 11-7.

Figure 11-7. Fluid sample chain of custody event, showing a sub-sample of aqueous phase being taken.

Note that while not shown in the example, an important feature of the model is the ability to combine
samples into a new sample, for example recombining separator oil and gas phase samples.
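Although recombination is not in the worked example, the pattern described later in the release notes (a Sample Recombination Specification holding two or more Recombined Sample Fractions, each with a fraction and a reference to a source sample) can be sketched as plain data with a consumer-side sanity check. All names, values, and the sum-to-one rule here are illustrative assumptions, not normative schema content:

```python
# Hypothetical sketch of a Sample Recombination Specification: two or more
# Recombined Sample Fractions, each holding a fraction and a reference to
# the source sample. Field names follow the guide's prose, not the schema.
recombination_spec = {
    "remark": "Recombine separator oil and gas phase samples",
    "recombined_sample_fractions": [
        {"fluid_sample_reference": "separator-oil-sample", "volume_fraction": 0.35},
        {"fluid_sample_reference": "separator-gas-sample", "volume_fraction": 0.65},
    ],
}

def check_recombination(spec):
    """Sanity check: at least two source fractions, summing to one."""
    fracs = [f["volume_fraction"] for f in spec["recombined_sample_fractions"]]
    return len(fracs) >= 2 and abs(sum(fracs) - 1.0) < 1e-9

ok = check_recombination(recombination_spec)
```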

Fluid Analysis

11.1.5.1 Hydrocarbon Analysis


The hydrocarbon analysis example illustrates a number of key features of the fluid analysis:

1. Reference to fluid sample analyzed
2. Reference to an external report file
3. The fluid component catalog
4. Details of sample contaminants
5. Details of sample integrity and preparation
6. Details of the analysis tests themselves.
This high-level content is shown in Figure 11-8. Items 3 - 6 on this list are expanded below.

Figure 11-8. Fluid analysis—hydrocarbon, high-level content.

The fluid component catalog is unique to each fluid analysis object (also to each fluid
characterization). The worked example contains a catalog containing all the kinds of fluid component,
as listed and shown in Figure 11-9. Each analysis test then references items from the fluid catalog to
report molar composition, etc. Note that the UID attribute is the means by which a fluid component is
referenced from a test. See Figure 11-12 for an example. The UID only needs to be unique within this
data object; it is not a UUID. Hence, any convenient naming convention can be used.
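A consumer reading such a file might apply the two checks this paragraph implies: UIDs unique within the data object, and every reference from a test resolvable to a catalog entry. A minimal sketch, with all identifiers illustrative:

```python
# Illustrative catalog uids and test references; uids only need to be
# unique within one data object (they are not UUIDs).
catalog_uids = ["c1", "c2", "c3", "c7+", "stock-tank-oil"]
test_references = ["c1", "c7+", "stock-tank-oil"]

def uids_locally_unique(uids):
    """True if no uid is repeated within this data object."""
    return len(uids) == len(set(uids))

def unresolved_references(refs, uids):
    """References from a test that do not resolve to a catalog uid."""
    known = set(uids)
    return [r for r in refs if r not in known]

assert uids_locally_unique(catalog_uids)
dangling = unresolved_references(test_references, catalog_uids)
```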

Figure 11-9. Fluid analysis—fluid component catalog showing various kinds.

Fluid sample integrity and preparation is described in the element of that name, as shown in Figure
11-10.

Figure 11-10. Fluid analysis—sample integrity and preparation.

The use of multiple sample contaminant elements, one of which includes a reference to a sample of
the contaminant itself, is shown in Figure 11-11. Note that this “sample of contaminant” refers to the
water sample which was sub-sampled from Sample 2, becoming Sample 3 (see Section 11.1.4).

Figure 11-11. Fluid analysis—sample contaminant, showing reference to a sample of contaminant.

An example analysis test is shown in Figure 11-12. This example shows how the fluid components in
the catalog are referenced using their UIDs. Each test also has a UID and this can be used to
reference the test from elsewhere. This will be used in the fluid characterization to identify the tests
that were used in a given characterization.

Figure 11-12. Fluid analysis—use of fluid components previously defined in fluid component catalog.

As shown in Figure 11-13, many tests have a pattern of common data followed by recurring test
steps. This example figure shows the constant composition expansion test. In the worked example,
the separator tests have multiple test steps, some of which report fluid composition of multiple
phases, so combining aspects of the examples detailed here.

Figure 11-13. Fluid analysis—many tests have a pattern of common data followed by recurring test steps.

11.1.5.2 Water Analysis


Water analysis is one of the two types of fluid analysis object (a fluid analysis is of either the
hydrocarbon or the water analysis type) (Figure 11-14). There is only one kind of water analysis test,
and it can contain multiple test steps.

Figure 11-14. Fluid analysis—water analysis test.

Fluid Characterization
There are three ways in which a fluid can be characterized, and they can be mixed in the same fluid
characterization data object. The first two methods are shown in worked examples based on the main
dataset; the third method is a standalone example.
1. Fluid characterization using models (file: Good Oil 4.ModelCharacterization).
2. Fluid characterization using tables (file: Good Oil 4.TabularCharacterization).

3. Fluid characterization using set of fluid parameters (file: PTA PVT Using Fluid Parameter Set).
The high-level content of the model approach is shown in Figure 11-15. The tabular approach follows
later and shares the same high-level data elements, but adds tabular data in place of (or in addition
to) model data.

Figure 11-15. Fluid characterization—model, high-level contents.

The software used to generate the characterization and its intended target software for consumption
of the data can be reported (Figure 11-15).
The source of the characterization identifies which fluid analysis test results were used in this fluid
characterization. This is done by referencing the UIDs of the tests concerned, together with a
reference to the parent fluid analysis data object, as can be seen in Figure 11-16.

Figure 11-16. Fluid characterization source.

Standard conditions can be reported. Separation conditions (one or more stages plus stock tank
conditions), these being the conditions at which the fluid is characterized, can also be reported but are
not shown in the example.
The fluid component catalog works the same way as for the fluid analysis example (see Section
11.1.5.1). The worked example does not use the same catalog as the analysis, instead having one
with fewer components as would typically be used in software models.

11.1.6.1 Model Specification Option


The model specification contains the different kinds of parameter appropriate to the model. Figure
11-17 shows the worked example. Note that the model type (seen at the top, red box) is one of the
models listed in Section 12.1.4. Each of these model types has its own set of parameters which the
schema enforces. See Figure 10-15 and Figure 10-16 for the concept behind this. Details are in the
schema.

Figure 11-17. Fluid characterization – model specification.

11.1.6.2 Tabular Option


The tabular fluid characterization uses the following two main elements:
1. One or more fluid characterization tables. These have table constants (per table). The data is
then transferred in table rows within each table.
2. One or more fluid characterization table formats. Each contains the table column headings
(defining what the contents of each column are).
The UID of the fluid characterization table format is used in the fluid characterization table to show
which format applies to the table (Figure 11-18).

Figure 11-18. Table and table format.

The table format defines the columns that appear in the table. It also defines the column delimiter
ASCII character (e.g., a comma) and the null value (e.g., -999.25). The attributes of each column are
as follows.
Attribute                Description                                                   Optionality
Name                     User-defined name for this attribute                          optional
Sequence                 Integer to allow this column attribute to be indexed          optional
Property                 The physical property that this value represents              mandatory
Uom                      String to represent the unit of measure for the property      mandatory
                         values
fluidComponentReference  Reference to the fluid component to which this value relates  optional
Phase                    Reference to the phase to which this value relates            optional
KeywordAlias             Used to apply a consumer product keyword to this value,       optional (zero
                         with "authority" being the product name concerned             to many)

Property is an enumeration that can be extended and the properties are listed in Section 12.3. Phase
is also an enumeration and is listed in Section 12.3. Phase and fluid component reference are
provided so that properties may be output that relate to a specific phase or fluid component. Unit of
measure is not a controlled list. Keyword alias is provided in case it is useful to map properties onto
keywords in a specific software package.
An example extract from the worked example fluid characterization table format is shown in Figure
11-19.

Figure 11-19. Table format.

The fluid characterization table then contains the actual values defined by the format. These are
contained in table row elements where the values are separated by the ASCII character defined
above.
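A consumer-side sketch of expanding one table row against its table format is shown below; the delimiter and null-value handling follow the description above, while the column definitions, property names, and values are hypothetical rather than taken from the worked example file:

```python
# Format-level settings, as described in the text: a delimiter character
# and a null value string apply to every row of the table.
delimiter = ","
null_value = "-999.25"

# Hypothetical column definitions from a table format (positional).
columns = [
    {"property": "Pressure", "uom": "psi"},
    {"property": "Formation Volume Factor", "uom": "bbl/bbl"},
    {"property": "Viscosity", "uom": "cP"},
]

row = "5000,1.35,-999.25"  # one table row element's text content

def parse_row(row, columns, delimiter, null_value):
    """Split a row on the delimiter and map values to column properties,
    converting the null-value marker to None."""
    values = row.split(delimiter)
    return {
        col["property"]: (None if v == null_value else float(v))
        for col, v in zip(columns, values)
    }

parsed = parse_row(row, columns, delimiter, null_value)
```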
In addition to the table rows, the table also has table constants. These have the same attributes as
listed above for table columns, except that sequence is not present (constants do not need to be
indexed as columns are). A value attribute is added and is mandatory.
The table has a UID so that it can be referenced from elsewhere, and a name.
An example table is shown in Figure 11-20.

Figure 11-20. Fluid characterization table.

The table example contains some other illustrations, e.g. an extension to the enum list of properties,
which can be seen by inspecting the file.

11.1.6.3 Fluid Characterization Parameter Option


The worked example for this method does not use the same dataset as the rest of the worked
example; it is a standalone example based on the requirements of a constant compressibility pressure
transient analysis. For an extract of the example, see Figure 11-21. Note how the data for each
phase in a black oil description can be contained in one set by use of the fluidComponentReference
attribute (the UIDs of the three phases being "oil", "gas", and "water" in this example).
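This phase-keyed pattern can be sketched as follows; property names, units, and values are made up for illustration, and the flat-list representation is an assumption about shape rather than the schema's exact structure:

```python
# Sketch of a Fluid Characterization Parameter Set for a black oil PTA
# description: one flat list of single-condition parameters, with
# fluidComponentReference distinguishing the phase ("oil", "gas", "water"
# being the phase uids, as in the example). Values are illustrative.
parameter_set = [
    {"property": "Formation Volume Factor", "uom": "bbl/bbl",
     "fluidComponentReference": "oil", "value": 1.35},
    {"property": "Viscosity", "uom": "cP",
     "fluidComponentReference": "oil", "value": 0.8},
    {"property": "Viscosity", "uom": "cP",
     "fluidComponentReference": "gas", "value": 0.02},
    {"property": "Compressibility", "uom": "1/psi",
     "fluidComponentReference": "water", "value": 3.0e-6},
]

def by_phase(params, phase):
    """Pick out the parameters for one phase from the flat list."""
    return {p["property"]: p["value"]
            for p in params if p["fluidComponentReference"] == phase}

oil_params = by_phase(parameter_set, "oil")
```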

Figure 11-21. Fluid Characterization Parameter Set example.

11.2 Other examples


Three other worked examples of lab analyses are included. These can be found in the Other
Examples folder. Only the fluid analysis objects are included.
1. Volatile Oil
2. Gas Condensate
3. Oil Plus Swelling Tests
Data in the form of Excel files for the first two of these are provided.

12 Appendix for Fluid and PVT Analysis


This chapter contains appendix information for the previous main fluid and PVT analysis chapters.

12.1 Fluid Components Defined in the Model

Pure Fluid Components


The table below shows the enumeration of pure fluid components. The enumeration is extensible so
that additional molecule types can be added.

Aliphatic          Cyclic and Aromatic       Inorganic
c1                 methylcyclopentane        ar
c2                 cyclohexane               co2
c3                 methylcyclohexane         h2
n-c4               ethylcyclohexane          h2s
i-c4               benzene                   he
n-c5               toluene                   hg
i-c5               m-xylene                  n2
neo-c5             o-xylene                  ra
n-c6               p-xylene                  cos
2-methylpentane    ethylbenzene
3-methylpentane    1-2-4-trimethylbenzene
2-dimethylbutane
3-dimethylbutane
n-c7
2-methylhexane
3-methylhexane
n-c8
2-methylheptane
3-methylheptane
n-c9
n-c10

Pseudo Fluid Components


The table below shows the enumeration of pseudo fluid components. The enumeration is extensible
so that additional pseudo components can be added. Additional parameters are available with which
to characterize all pseudo components.

Range of carbon numbers   Single carbon numbers   Single carbon numbers (cont.)
c2-c4+n2                  c4                      c20
R-SH                      c5                      c21
                          c6                      c22
                          c7                      c23
                          c8                      c24
                          c9                      c25
                          c10                     c26
                          c11                     c27
                          c12                     c28
                          c13                     c29
                          c14                     c30
                          c15                     c31
                          c16                     c32
                          c17                     c33
                          c18                     c34
                          c19                     c35

Plus Fluid Components


The table below shows the enumeration of plus fluid components. The enumeration is extensible so
that additional plus components can be added. Additional parameters are available with which to
characterize all plus components.

Plus Carbon Number   Plus Carbon Number (cont.)   Plus Carbon Number (cont.)
c5+                  c9+                          c20+
c6+                  c10+                         c25+
c7+                  c11+                         c30+
c8+                  c12+                         c36+

Sulfur Fluid Components


The table below shows the enumeration of sulfur fluid components. The enumeration is extensible so
that additional sulfur components can be added. Additional parameters are available with which to
characterize all sulfur components.

Sulfur Component Sulfur Component (cont.) Sulfur Component (cont.)


Hydrogen Sulfide Carbonyl Sulfide Methyl Mercaptan
Ethyl Mercaptan Dimethyl Sulfide Carbon Disulfide
Isopropyl Mercaptan Tert-Butyl Mercaptan n-Propyl Mercaptan
Ethyl-Methyl Sulfide Thiophene Methyl Isopropyl Sulfide
Sec-Butyl Mercaptan Isobutyl Mercaptan Diethyl Sulfide
n-Butyl Mercaptan Dimethyl Disulfide 2-Methyl Thiophene
3-Methyl Thiophene 2-Methyl 1-Butanethiol Tetra-Hydro Thiophene
Isopentyl Mercaptan n-Pentyl Mercaptan 2-Ethyl Thiophene
2-Ethyl Thiophene 2,5-Dimethyl Thiophene 3-Ethyl Thiophene
2,4 & 2,3 Dimethyl Thiophene Dipropyl Sulfide 3,4-Dimethyl Thiophene
Ditert.Butyl Sulfide n-Hexyl Mercaptan Diethyl Disulfide
Ethyl Isopropyl Disulfide Di-Sec.Butyl Sulfide n-Heptyl Mercaptan
Dibutyl Sulfide n-Octyl Mercaptan n-Nonyl Mercaptan
Benzothiophene

12.2 PVT Models

Correlation PVT Models – Viscosity


For dead oil, bubble point and under-saturated conditions:
• Bergman-Sutton
• DeGhetto
• Dindoruk-Christman
• Petrosky-Farshad
• Standing

Compositional EoS
• Peng-Robinson 76
• Peng-Robinson 78
• SRK

Compositional – Viscosity
• C S Pedersen 84
• C S Pedersen 87
• Lohrenz-Bray-Clark
• Friction Theory

Compositional – Thermal
Currently, no specific models are defined, so a generic model together with custom model extensions
can be used.

12.3 Fluid Properties that can be Output to Tables


The table below shows the enumeration of fluid properties which can be output to tables. The
enumeration is extensible so that additional fluid properties can be added.
Also listed are the enumeration values for phase.

Output Fluid Property                       Phase
Compressibility                             aqueous
Density                                     oleic
Derivative of Density w.r.t Pressure        total hydrocarbon
Derivative of Density w.r.t Temperature     vapor
Enthalpy
Entropy
Expansion Factor
Formation Volume Factor
Gas-Oil Interfacial Tension
Gas-Water Interfacial Tension
Index
K value
Misc Bank Critical Solvent Saturation
Misc Bank Phase Density
Misc Bank Phase Viscosity
Miscibility Parameter (Alpha)
Mixing Parameter Oil-Gas
Oil-Gas Ratio
Oil-Water Interfacial Tension
Parachor
Pressure
P-T Cross Term
Saturation Pressure
Solution GOR
Solvent Density
Specific Heat
Temperature
Thermal Conductivity
Viscosity
Viscosity Compressibility
Water vapor mass fraction in gas phase
Z Factor

12.4 v2.1 PVT Release Notes

Fluid Sample Acquisition Job Changes


1. Link to and from Flow Test Job added. Removed Operation, which was a string representing the
ID of an (at the time unspecified) associated “job” object; this association is now definitively the
Flow Test Job.
2. Removed Field Note Reference. This was aimed at linking to (for example) a PDF report to
be transferred as part of the “job” data together with the XML file in an Energistics Packaging
Convention (EPC) container file. However, this technology does not currently support
non-Energistics files, so this element has no purpose.
3. Removed Service Company from Fluid Sample Acquisition. It is instead at the level of Fluid
Sample Acquisition Job where it belongs, because the Job is done by one Service Company and
individual Sample Acquisitions are done under that same job.
4. WFT specific acquisition renamed to Formation Tester Sample Acquisition. WftRun link removed
from this. Attributes relevant to formation testers added.
5. Link added to Flow Test Activity in the Sample Acquisition types for which it is appropriate:
Separator, Wellhead, Downhole, and Formation Tester. All these can occur during a flow test.
The old link to Production Well Tests was removed.

Fluid Sample Changes


1. Changes to Recombination of Samples:
a. Renamed Sample Recombination Requirement to Sample Recombination Specification for
clarity.
b. Renamed Fluid Sample Composition to Recombined Sample Fraction for clarity and made
this a class below Sample Recombination Specification.
c. Hence if the sample is a recombined one, the Sample Recombination Specification describes
the objective and conditions of the recombination (all optional) and then has 2 or more
Recombined Sample Fraction, each of which contains the fraction of a sample and has a
reference to the sample concerned.
d. Deleted Liquid Sample and Vapor Sample from Sample Recombination Specification because
this is now covered by the sub-classes of Recombined Sample Fraction.

Fluid Characterization Changes


1. Made Fluid System Characterization Type optional.

2. AbstractCorrelationFluidViscosityBubblePointModel – for Solution GOR, corrected typo and
wrong type.
3. Removed Fluid Characterization Table Format Set as container for multiple table formats. Each
Fluid Characterization Table Format now sits under Fluid Characterization.
4. For PTA purposes:
a. Added Pseudo pressure and Normalized pseudo pressure to Output Fluid Property.
b. To have a single condition set of fluid parameters (rather than needing a table which spans
multiple conditions), made these changes:
i. Renamed Fluid Characterization Table Constant to Fluid Characterization Parameter.
ii. In Fluid Characterization Table, the element TableConstant is now of type Fluid
Characterization Parameter; otherwise it is the same as v2.0.
iii. Added Fluid Characterization Parameter Set to Fluid Characterization Model. It contains
1..* Fluid Characterization Parameter. This gives the third means of defining a fluid
characterization (in addition to a Table or a Model).


Part IV: Flow Test Activity and Pressure Transient Analysis (PTA)
Part IV contains Chapters 13 through 17, which explain the set of PRODML data objects for Flow Test
Activity and Pressure Transient Analysis (PTA).

Acknowledgement
Special thanks to Kappa Engineering and Schlumberger for donating their work to Energistics, which
is the basis for the current PRODML PTA data model.


13 Introduction to Flow Test Activity and PTA
13.1 Objective
The objective of the pressure transient capability in PRODML is to transfer the following kinds of data
for all kinds of flow test:
• Measured data
• Pre-processed data
• Analysis and result data

13.2 Background
This work was originally done by Schlumberger and Kappa Engineering. It was proposed to be a
“PTAML” akin to PRODML or WITSML. The initial use case foreseen was data transfer from a PTA
package (e.g., Kappa’s) to a 3D geological model (e.g., Schlumberger’s).
The data model was developed with input from Energistics and experienced “ML” modelers and
donated to Energistics. The donated model was in the Energistics 1.x style which was current at the
time of development.
Since the donation, Energistics has updated the model to v2.x style. It was first incorporated into
PRODML in version 2.1.
Since the initial development, new and useful additions have been made to the more integrated
“energy ML” (both data objects available for re-use, and the Common Technical Architecture), which
have been re-used where possible.

13.3 Pressure Transient Testing Workflow


The pressure transient testing workflow within reservoir and production analysis is shown in Figure
13-1. From left to right this shows how:
• a pressure transient is induced in the reservoir (generally by opening or closing flow from a
completed interval);
• flow is usually measured at surface and pressure downhole, which give rise to a set of different
time series data;
• fluid samples may be taken and PVT analysis performed;
• the time series, once pre-processed, can be analyzed together with the fluid properties;
• the results of the analysis are generally used for some combination of well performance (inflow)
model or prediction, and/or for reservoir description.
The capabilities of PRODML pressure transient (taken with other data objects) cover this whole
workflow. Well performance modelling (“Nodal analysis”) is not covered at present.


Figure 13-1. Overall Workflow of Pressure Transient Testing and Fluid Sampling.

13.4 Use Cases


The use cases for pressure transient data are as shown in Figure 13-2. Because Energistics is
focused on data transfer between different parties (companies, or software packages), the following
guidelines have been applied:
• Transfers between data sources and consumers are in-scope (red box)
• “Internals” of analysis processes (the how and what was done) are out of scope (blue box).
The use cases shown in the figure can be briefly described as follows:
1. Input data from a flow test job. (NB “flow test” rather than “pressure transient” has been used
as the generic term since this can be used for a broader range of test types).
a. Time series data
b. PVT fluid and reservoir input data
c. The potential for real time streaming of (a).
2. The concept of input of full or partial flow test data sets using PRODML:
a. From a wellsite report – e.g., the measured data without analysis, from the wellsite
contractor
b. From one PTA package to another for further analysis
3. Results and output data
a. Archiving flow test and PTA data
b. Taking the results to well performance analysis (the Inflow Parameters shown in
Figure 13-1)
c. Taking the results to reservoir description
d. Producing “paper-like” reports for end users.


Figure 13-2. Use Case diagram for pressure transient data. The data transfers from external systems
shown in the red box are in scope. Processes internal to PTA are out of scope.

13.5 PTA Data Model: Top Level Objects


Pressure transient has the top-level objects (TLOs) listed below. In addition, some other TLOs from
PRODML and WITSML are used to round out the capability. The high-level model is shown in Figure
13-3. Their purpose is outlined in the following sections.

Flow Test Job


In PRODML v2.1 the Flow Test Job object acts as a means of grouping together the activity (i.e.,
measurement data) of the flow test(s), the analyses and any pre-processing performed (see below for
these data objects). This is done by referencing the different data objects comprising the whole Job
using Data Object References. Figure 13-3 shows the zero to many links from Flow Test Job to each
of the following four TLOs, which contain the technical data for Flow Testing.
This object contains start and end times, and the service company concerned, for the Job. There is
also a link possible to a Fluid Sample Acquisition Job data object. This is for cases where a single Job
encompasses flow testing and fluid sampling, e.g., a drill stem test or a wireline formation test.
Flow Test Job may in future be extended to include further “operational” data (tools, events, etc.).
This data object is not described in further detail.


Flow Test Activity


The Flow Test Activity object deals with the measured data, including the location of the flow test
(wellbore, interval, etc.), the flow rates and fluids, and the time series data which are contained in
Channel objects as seen in the figure. All kinds of flow test—e.g., drill stem test (DST), wireline
formation tester (WFT)—are included.

Pressure Transient Analysis


The Pressure Transient Analysis object contains details of pressure- (and rate-) transient analyses
performed on the data which was reported in a Flow Test Activity object. It deals with the measured
data used, fluid properties used (contained in a Fluid Characterization object as shown in the figure),
analysis methods and results, and the pressure transient model used.

Deconvolution
The Deconvolution object deals with the specialist pre-processing activity of deconvolution. Not
shown on the figure for simplicity is the fact that the output of deconvolution is further time-series
data, and these data are also contained in linked Channels.

Preprocess
The Preprocess object deals with general pre-processing activities such as re-sampling, smoothing, and merging measured data channels. The exact methods of pre-processing are not detailed, just the
generic type. Not shown on the figure for simplicity is the fact that the output of pre-processing is
further time-series data, and these data are also contained in linked Channels.

Figure 13-3. Top Level Objects used for Flow Tests and Pressure Transient Analysis, also showing link to
fluid sampling.

13.6 General Use of Time Series Data in Flow Test Data objects
All of the Flow Test Data objects (except Flow Test Job) use time series data. This is variously
measured data from sensors, pre-processed or deconvolved data, or simulated data from analysis. A
hierarchy of classes is used to describe the metadata appropriate to the time series type, and then the
Channel object is used to contain the time series data points. Channel is a WITSML object; Channels
are collected into Channel Sets and the Channel Set is where the actual values are stored, in the
Channel Data class.
The time series data model is shown in Figure 13-4. The four top-level objects for flow testing are
shown in the lower left. They all have associations where needed to the Abstract Flow Test Data
class, which has DORs to the Channel and Channel Set holding the time data for the particular
instance of time series data.
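As an illustrative sketch of this pattern (element names follow the data model just described; exact namespaces, DOR contents, ordering, and required elements are governed by the official PRODML and WITSML schemas, so treat this as a sketch, not a valid instance):

```xml
<!-- Sketch only: a flow test data instance pointing, via Data Object
     References (contents abbreviated), at the WITSML Channel Set and
     Channels that hold the actual time series values. -->
<MeasuredPressureData uid="pressure-data-1">
  <ChannelSet> <Title>DST gauge data</Title> <!-- ... --> </ChannelSet>
  <TimeChannel> <Title>Time</Title> <!-- ... --> </TimeChannel>
  <PressureChannel> <Title>Bottomhole pressure</Title> <!-- ... --> </PressureChannel>
  <TimeSeriesPointRepresentation>point by point</TimeSeriesPointRepresentation>
</MeasuredPressureData>
```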


[UML diagram content: Flow Test Job (Client, ServiceCompany, StartTime, EndTime) has 0..* associations to Flow Test Activity, Pressure Transient Analysis, Pta Deconvolution and Pta Data Pre Process, and a 0..1 association to Fluid Sample Acquisition Job (Client, ServiceCompany, StartTime, EndTime). The flow test objects associate, via Abstract Flow Test Data, with the Channel Set and Channel objects that hold the time series values.]

Figure 13-4. Time series data utilizes links to Channel and Channel Set objects which contain the
“columns” of data.

Figure 13-5 shows the three types of data (green boxes) that can then be represented by the
appropriate inheritance.

[UML diagram content: Abstract Flow Test Data (ChannelSet, TimeChannel, TimeSeriesPointRepresentation [point by point | stepwise value at end of period], Remark, uid) is specialized into Abstract Pta Pressure Data (PressureChannel, optional PressureDerivativeChannel, PressureReferenceDepth, Datum), Abstract Pta Flow Data (FluidPhaseMeasuredKind, FlowChannel) and Other Data (DataChannel). Pressure and flow data each have four concrete types: Measured, Pre-Processed (with a PreProcess reference to Pta Data Pre Process), Deconvolved (with a Deconvolution reference to Pta Deconvolution) and Output.]

Figure 13-5. Flow test activity channel diagram, showing the three main types of data (green boxes):
pressure, flow and flow test.

1. Pressure data: this has a pressure channel and optionally, a derivative channel (used in
analysis). The Abstract PTA Pressure Data itself can inherit from one of four concrete types.
These are useful to differentiate when performing analysis since the pressure data (in
particular) may have been pre-processed and it is useful to know what steps were taken in
this prior to analysis.
a. Measured (sometimes informally referred to as "raw").


b. PreProcessed – which also has a data object reference (DOR) to a PreProcess data
object containing details of the pre-processing (see Section 16.1).
c. Deconvolved – which also has a DOR to a Deconvolution data object containing details of the deconvolution (see Section 16.2).
d. Output – used for simulation, deconvolution output.
2. Flow data: this has an enum to state what is being measured (to allow for different types of
separation and metering of one or more phases). Like pressure data, Abstract PTA Flow Data
can inherit from one of the same four concrete types described in the previous point.
3. Flow test data: this is used for channels such as temperature, which provide no direct input to pressure transient analysis but are useful contextual data.
As noted above, the Channel object is referenced by a DOR. It contains generic metadata concerning the data in the Channel. For more information about the Channel data object, see Energistics Online: WITSML: Channel.
The actual data values are contained in the Channel Data class of the Channel Set, which collects together the Channels that share a common time index. For details of the format, see Energistics Online: WITSML: Channel Set. Figure 13-6 shows an example of a Channel Data instance.

Figure 13-6. Channel Data is in text format as string or in a file


14 Flow Test Activity


This chapter covers the Flow Test Activity data object, starting with its high-level organization and
then going through the details of each main section.
As noted above, the purpose of this object is to transfer the measurement data of all kinds of flow test.

14.1 Types of Test


Most types of flow test are covered. Types that are not specifically included can be requested for addition in later versions.
Figure 14-1 shows how the top level object of Flow Test Activity can inherit the specific type of flow
test from a range of possible types:
1. Transient Tests: Drillstem Test, Formation Tester Station, Production Transient Test, Vertical
Interference Test, Interwell Test. The ones normally done using wireline, Formation Tester
Station and Vertical Interference Test, can reference a WITSML Tie-In Log, performed for
depth correlation.
2. Production well tests: Production Flow Test, Injection Flow Test. These carry attributes
related to test validation for production allocation. These are expected not to have transient
test data but only to report steady state flow conditions, although time series data can be
included if required. They are expected to be used by Simple Product Volume Reporting
(SPVR) use cases, see Section 6.4.1.
3. Statutory Tests: Water Level Test.
The actual technical data of the flow test is contained in a sub-class called Flow Test Measurement
Set. The purpose of the inheritance of the different types of flow test is to set up how many
measurement sets will exist to describe the flow test. The relationship is called Interval Measurement
Set, and the multiplicity of Interval Measurement Set aligns with the test type. Most tests measure at
one location and therefore have just one Interval Measurement Set, whilst Interference tests measure
at multiple locations and each location requires an Interval Measurement Set.
Note that in a case where multiple intervals are tested (for example, a drill stem test where the testing
tool is moved and various intervals tested in sequence, or for a wireline formation tester where it is
standard to test multiple “stations” in a job) then each interval or station is a separate Flow Test
Activity. So for example, a wireline formation tester job with 10 stations would give rise to one Flow
Test Job which would have links to 10 Flow Test Activities, each of which would correspond to one
station with one depth interval being tested, and one set of corresponding time series and flowing
data.
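The 10-station example can be pictured as one Flow Test Job carrying one Data Object Reference per Activity (a sketch only; DOR contents are abbreviated and titles are invented for illustration):

```xml
<FlowTestJob uid="wft-job-1">
  <StartTime>2021-06-01T06:00:00Z</StartTime>
  <EndTime>2021-06-02T18:00:00Z</EndTime>
  <!-- One DOR per station: 10 stations give 10 Flow Test Activity references -->
  <FlowTestActivity> <Title>Station 1</Title> <!-- ... --> </FlowTestActivity>
  <FlowTestActivity> <Title>Station 2</Title> <!-- ... --> </FlowTestActivity>
  <!-- ... Stations 3 through 10 ... -->
</FlowTestJob>
```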


[UML diagram content: Flow Test Activity is specialized into Drill Stem Test, Formation Tester Station (with TieInLog DOR), Production Transient Test, Vertical Interference Test (with TieInLog DOR), Interwell Test, Production Flow Test and Injection Flow Test (each with Validated, WellTestMethod and EffectiveDate elements), and Water Level Test. Each type has an Interval Measurement Set association to Flow Test Measurement Set (Remark, uid): multiplicity 1 for single-location tests, 1..* for the interference tests.]

Figure 14-1. UML Diagram for Flow Test Activity - High Level.


14.2 Flow Test Measurement Set


The Interval Measurement Set is contained in a Flow Test Measurement Set class. See Figure 14-2
for the overview of this class. There are four sub-classes which are described in the following sections
in more detail. These are: location, test periods, fluids flowing, and flow test (time series) data.

[UML diagram content: Flow Test Measurement Set (Remark) contains a mandatory Flow Test Location; Test Periods (StartTime, EndTime, TestPeriodKind, WellFlowingCondition, Remark, uid); 0..* Measured Flow Rates (Abstract Pta Flow Data: FluidPhaseMeasuredKind, FlowChannel); and an optional Fluid Component Catalog (StockTankOil, NaturalGas, FormationWater, PureFluidComponent, PseudoFluidComponent, PlusFluidComponent, SulfurFluidComponent, each 0..*).]

Figure 14-2. Flow Test Measurement Set – High Level.

14.3 Flow Test Location


Flow Test Location is a mandatory class in the Flow Test Measurement Set—a measurement is
meaningless unless one knows what it is measuring. Within the Flow Test Location class there are a
number of options. See Figure 14-3. To give flexibility, there are no mandatory combinations but it is
expected that the following combinations would be commonly used:
1. Well and/or Wellbore + interval flowing. The interval flowing could be:
a. Md Interval in the case of normal radial flow (i.e., perforated, open hole, straddle
packers, gravel packed completions, etc.). Md Interval is an Energistics Common
object containing Md Top and Md Bottom depths.
b. Probe Depth and Probe Diameter, in the case of a formation tester tool probe being
the connection to reservoir.
Note: Well and Wellbore are Data Object References (DORs) (for more information, see
Energistics Online: Object Reference) which reference other data objects (WITSML Well and
Wellbore respectively). However, a “dummy” reference can be created by simply completing
the mandatory elements in the DOR and without having to have a separate full well or
wellbore object.
2. Well Completion and/or Wellbore Completion, which use a Data Object Reference to a
separate WITSML data object.
a. A Wellbore Completion represents a "flow from a wellbore". It is possible for a
Wellbore to have multiple Wellbore Completions, e.g., if it is dual-completed then
each Wellbore Completion can flow up its own tubing string and they can be tracked
separately. Or it may be a multi-lateral and each lateral will have its own Wellbore
Completion. But a flow test which measures one such flow can use the Wellbore
Completion option.


b. A Well Completion represents a "flow from the wellhead" which may be one to one
with a single Wellbore Completion, or it may represent the commingled flow from
multiple Wellbore Completions.
Note: using Wellbore Completion has the merit that it describes the mechanical details of the wellbore-to-reservoir connection, e.g., perforation types and shots per foot, etc.
Note also that Wellbore Completion itself has a DOR to the Wellbore which contains it, and also a DOR to the Well Completion which contains it. Well Completion has a DOR to the Well which contains it.
3. Reporting Entity, which is a generic source of flow used in volume reporting. For more
information on reporting entities, see Section 5.2. This option will make particular sense for
the Production Flow Test type of Flow Test Activity (see Section 14.1) if this is being used as
part of volume reporting.
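A sketch of combination 1a above, a wellbore plus a flowing MD interval (element names follow the Flow Test Location class and the Md Top/Md Bottom description above; values and the exact DOR layout are illustrative assumptions, not schema-validated content):

```xml
<Location>
  <!-- DOR to a WITSML Wellbore; may be a "dummy" reference as noted above -->
  <Wellbore> <Title>WB-01</Title> <!-- ... --> </Wellbore>
  <MdInterval>
    <MdTop uom="m">3050.0</MdTop>
    <MdBottom uom="m">3065.0</MdBottom>
  </MdInterval>
</Location>
```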

[UML class content: Flow Test Location, with optional elements Wellbore (DOR), MdInterval, WellboreCompletion (DOR), Well (DOR), WellCompletion (DOR), ReportingEntity, Remark, ProbeDepth, ProbeDiameter and Datum.]

Figure 14-3. Flow Test Location class.

14.4 Test Period


The Test Period class holds details of the flow and shut-in periods of the flow test. It is a re-use of the same object as used in Simple Product Volume Reporting; it can be seen in Figure 14-4. A Test Period is one in which the flow rate is constant (to within the tolerance required by the purpose of the test; e.g., a "drawdown" may have a varying rate but is broadly a constant flow with a constant choke setting). The main data elements are:
1. Start and End times of the period.
2. Test Period Kind – an enum as shown.
3. Well Flowing Conditions – the choke size and basic bottom hole and wellhead stabilized flow
measurements (Note: the actual varying measurement streams of data can be recorded in
time series as described below; the values here are just to represent typical or stabilized
conditions).
4. Flow rates – these are reported using the Product Rate class. A mass or volume rate can be
used; if volume, then optionally the Volume Value class has the measurement conditions
(pressure, temperature) since the volume depends on these. The Product Rate repeats per
fluid component (phase, or compositional component, as required).
a. If a simple description of the phase is all that is needed (e.g., “gas”) then the enum of
Product Fluid Kind (which is extensible) can be used. This can be seen in Figure
6-16.
b. Optionally, if the physical fluid properties are required to be reported, or a fluid
composition, then the Fluid Component Catalog needs to be filled in. See Figure
14-5: either black oil or compositional components can be contained in the Catalog.
Each fluid type in the catalog has a type, associated physical properties, and a UID.
For more details of the Fluid Components and the Fluid Component Catalog, see
Section 10.7.3. The Product Rate element then references (using the UID) this
specific Fluid Component in the catalog. For an example (using Product Quantity
rather than Product Rate, but which works the same way), see Figure 6-15.
Note that the UID can be local to this catalog, so it can be a human-readable string like "gas"; it does not need to be a UUID.


Figure 14-4. Test Period and Associated Classes.

[UML class content: Fluid Component Catalog, with elements StockTankOil, NaturalGas, FormationWater, PureFluidComponent, PseudoFluidComponent, PlusFluidComponent and SulfurFluidComponent, each 0..*.]

Figure 14-5. Fluid Component Catalog, as used elsewhere in PRODML.

14.5 Time Series Data


Using the pattern for time series data described in Section 13.6, the Flow Test Activity is concerned
with measured data and so has references to zero to many channels of Pressure (type measured),
Flow and Other data. There are commonly multiple pressure, flow and other sensors all recording
time series data during a flow test, and the intention is that all these can be recorded. Subsequently,
before analysis, they may be pre-processed and only one channel of pressure and flow will actually
be used in any one analysis. Figure 14-6 shows the three associations to measured data.
Note that for a complex downhole tool such as a wireline formation tester, there may be many
channels of data. From the point of view of flow testing, most of these will count as “Other” since only
pressure and flow get specific analysis in flow testing.


Figure 14-6. Flow Test Activity links to Channels to report Measured data (showing attributes inherited by
Pressure, Flow, Other data).


15 Pressure Transient Analysis


15.1 Overview of Pressure Transient Analysis
The Pressure Transient Analysis data object contains details of the analysis itself, and references to
the data input, and any simulated output.
The key roles of the top level Pressure Transient Analysis object are:
• Referencing to Flow Test Activity
• Fluid and Reservoir Properties
• Analysis details
• Pressure Transient Models
Figure 15-1 shows the elements and associations which achieve these roles, and which are color-
coded by function.

Figure 15-1. Top Level Diagram for Pressure Transient Analysis Data object showing Referencing to Flow
Test Activity, Fluid and Reservoir Properties, Analysis, Pressure Transient Models.

15.2 Referencing to Flow Test Activity


An Analysis is useless without knowing where the measured data comes from! Hence there is a
mandatory Data Object Reference (DOR) to the Flow Test Activity whose data we are analyzing.
Reference is also needed to which Test Period(s) are being analyzed. Examples: the final build-up, or
a set of build-ups used simultaneously.


See Figure 15-2 which shows the concept for a simple case where the Job comprises one Activity
(one test location, see bottom sketch of the figure), and two Analyses. The Activity contains all the
measurements needed and defines the location. Each Analysis references the Activity. It also
references (by uid within the Activity) the Test Period(s) used for the analysis (e.g., “final build up”).
See also Figure 15-4 where the red and blue boxes highlight these two types of reference.
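In instance terms, the two kinds of reference look roughly like this (a sketch; the element names come from the Pressure Transient Analysis model, and DOR contents are abbreviated):

```xml
<PressureTransientAnalysis uid="pta-1">
  <!-- Mandatory DOR to the Flow Test Activity whose data is analyzed -->
  <FlowTestActivity> <Title>DST #1</Title> <!-- ... --> </FlowTestActivity>
  <!-- Reference, by uid within that Activity, to the Test Period analyzed -->
  <PrincipalTestPeriodRef>final-build-up</PrincipalTestPeriodRef>
</PressureTransientAnalysis>
```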

Figure 15-2. Example showing links for a DST, one interval, with multiple analyses.

Interference testing is slightly more complex. In this case, references must be provided to the Principal and also to any Interfering Test Intervals (which will be multiple Flow Test Measurement Sets within the
Flow Test Activity). See Figure 15-3 which shows a vertical interference test. The Analysis has a DOR
to the Activity as before, but now it is necessary to also state which is the principal location and which
is the interfering location. The Principal Flow Test Interval Ref must reference the uid of the principal interval within the Activity, and each Interfering Flow Test Interval must likewise be referenced within the Activity. See the orange and green boxes in Figure 15-4 for these two interval refs. The test period(s) from the interfering interval, and the interfering flow rate, also need to be identified.


Figure 15-3. Example showing links for WFT with a VIT between two stations.

Figure 15-4. Pressure Transient Analysis Links to Flow Test Activity elements.

15.3 Use of PRODML Fluid Characterization Top Level Object


Pressure Transient Analysis needs fluid characterization data – either under constant compressibility
assumptions (most liquids) or variable compressibility assumptions (most gases).
The PVT capability of PRODML includes the Fluid Characterization Top Level Object. This supports
the following options, and see the further details referenced here:
1. Transfer a set of fluid parameters (constant compressibility analysis). For the model, see Section
10.7.3. For a worked example and also the worked example file PTA PVT Using Fluid Parameter
Set.xml, see Section 11.1.6.3.
2. Transfer a table of pseudo-pressures (variable compressibility). For the model, see Section
10.7.2. For a worked example and also the worked example file PTA PVT Using Pseudo
Pressure.xml, see Section 11.1.6.2


Reservoir compressibility is contained in an additional class, Compressibility Parameters; see Figure 15-1.
A number of enum elements are included in the Pressure Transient Analysis data object to describe
fully the treatment applied to fluids. These can be seen in Figure 15-5 and are:
1. Fluid Phase (single or multi phases)
2. Pressure Non-Linear Transform (pressure, pseudo pressure etc.)
3. Pseudo Pressure Effect Applied (gas, multiphase, etc.). Note, there may be multiple effects
applied.
4. Time Non-Linear Transform (time, pseudo time, etc.).
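For example, a gas-well analysis run in pseudo-pressure and pseudo-time would carry enum values such as the following (values taken from the enumerations shown in Figure 15-5; element nesting abbreviated):

```xml
<FluidPhaseAnalysisKind>single phase gas</FluidPhaseAnalysisKind>
<PressureNonLinearTransformKind>gas pseudo-pressure</PressureNonLinearTransformKind>
<PseudoPressureEffectApplied>gas properties with pressure</PseudoPressureEffectApplied>
<TimeNonLinearTransformKind>pseudo-time transform</TimeNonLinearTransformKind>
```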

[UML diagram content: Pressure Transient Analysis (ModelName, MethodName, TimeAppliesFrom, TimeAppliesTo, IsNumericalAnalysis, FluidCharacterization, NumericalPtaModel DOR, FlowTestActivity DOR, PrincipalFlowTestMeasurementSetRef, PrincipalTestPeriodRef [1..*], WellboreModel, LayerModel [0..*], Remark) carries four enum elements: FluidPhaseAnalysisKind (single phase oil | single phase gas | single phase water | multiphase oil+gas | multiphase oil+water | multiphase gas+water | multiphase oil+water+gas); PressureNonLinearTransformKind (pressure (un-transformed) | pressure squared | gas pseudo-pressure | normalised gas pseudo-pressure | normalised multi-phase pseudo-pressure); PseudoPressureEffectApplied (gas properties with pressure | multiphase flow properties with pressure | variable desorption with pressure | variable poroperm with pressure | other); TimeNonLinearTransformKind (time (un-transformed) | pseudo-time transform | material balance pseudo-time).]

Figure 15-5. Options for Representing Handling of Fluid Phases and Fluid Non-linearity.

15.4 Analysis
Analysis describes the use of techniques including log-log analysis, or other “specialized” analyses.
Pressure transient or rate transient analysis can be transferred. The choice is made by which version
of the Abstract Analysis is used, as seen in Figure 15-6 (Red Box).
A key role of this data model is to enable referencing of the time series that were used in the analysis, and also of any simulated time series (i.e., the history matches to the data).
1. Rate Transient Analysis (RTA) has input pressure and flowrate, and simulated flowrate. See
Purple Box.
2. Pressure Transient Analysis (PTA) has input pressure and simulated pressure, and for
flowrate data a choice shown in the Blue Box:
a. Use a single effective flow rate
b. Use a set of Test Periods for rate history (this will give constant rate steps)


c. Use a Channel of potentially continuously varying flowrate.


Required test parameters are also included, e.g., "P0", the pressure at the start of the test.
Once the method and data sources are identified there are two types of analysis available (Green
Box):
1. Log-log – the most general kind used.
2. Specialized – where specific data transforms (e.g., semi log, square root, etc.) are used. In
these cases, there is no enum of the possible types – the x and y data transforms are
specified by text strings (“Axis Description” elements seen in the figure). Specialized analyses
can also have the generic Any Parameter or Custom Parameter added (see 15.5.2).
For both types of analysis, the Analysis Pressure (transformed) element is the time series for the data
as functioned for this analysis method. For the log-log, for example, it will have the superposition
pressure and time functions, transformed to log axes – which is what the analyst will use visually to
perform the analysis.
Both types of analysis can also report straight lines used in the analysis – the slope and intercept
referring to the same transformed axes as the Analysis Pressure element referred to above.

Figure 15-6. Analysis Class showing Pressure or Rate Transient, Inputs and Outputs linking to Channels,
Choice of Analysis Methods, Choice of Flow Rate Data Type.

15.5 Pressure Transient Models

Wellbore Model and Layer Model


A single Pressure Transient Analysis contains (see Figure 15-7):
• One Wellbore Model (since all layers share the same wellbore effects – e.g., wellbore storage)
(Red Box).
• One or more Layer Models. This allows multi-layer models to be represented. (Yellow Box and
lower class in figure).
The Layer Model has an MD top and bottom and can reference by DOR a geology feature (probably a RockFluidUnitFeature in a RESQML model). Where there is more than one Layer Model, one of them can also represent the aggregate behavior of all the layers; this is done by setting the Aggregate Layers Model Boolean.
Finally, the layer model can represent the inflow performance relationship (IPR) either as:
1. A Productivity Index (PI) (flowrate per unit pressure drawdown), generally used for liquids; or
2. Laminar (Darcy flow) and turbulent (non-Darcy flow) coefficients, generally used for gas.
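These are the two conventional well-test deliverability forms, recalled here for context (standard reservoir engineering relationships, not text taken from the schema):

```latex
% Productivity Index (liquids): rate proportional to drawdown
q = J\left(\bar{p} - p_{wf}\right)
% Laminar plus turbulent (gas), shown here in the pressure-squared form:
% A is the Darcy (laminar) coefficient, B the non-Darcy (turbulent) coefficient
\bar{p}^{\,2} - p_{wf}^{2} = A\,q + B\,q^{2}
```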


Figure 15-7. The Pressure Transient Analysis model has one Wellbore Section (red box), and as many
Layer Models as required; each Layer Model can contain one each Near Wellbore, Reservoir and
Boundary Sections.

Layer to Layer flow communication is also supported via the Layer To Layer Connection element.

Models and Model Parameters


As seen in Figure 15-7, there are four sections to a pressure transient model. These are modelled separately to allow mixing and matching of different model sections, depending on the capabilities of the analysis software in use. They are:
1. Wellbore – the wellbore storage effects
2. Near-Wellbore – flow near the well, leading to “skin factor” effects
3. Reservoir – flow away from the well, representing the “true reservoir”
4. Boundary – flow which is affected by the boundaries of the reservoir.
As noted above, the wellbore model will be common to all layers so is associated with the pressure
transient analysis class, whereas the other three model sections are associated with individual layer
models, which will repeat per layer.
A large set of well-known Models is pre-defined in the schema, together with the well-known
Parameters which comprise these models. Custom models and parameters can be used.
The part of the data model for Model is shown in Figure 15-8. There is an Abstract Model Section, which is specialized into one of four abstract xx Base Models, where xx is one of the four sections listed above. In turn, concrete classes for each model inherit from the abstract base models. The figure shows examples for two of the model sections, with two concrete model sections for each of those. The concept is that each model section contains only the Parameters relevant to that model. This is to prevent data transfers from becoming a "free for all" of non-standard models with non-standard parameters. Custom models can be used for genuine innovations beyond well-known industry norms. For a list of all the well-known models included in the schema, see Section 17.1.1.


Figure 15-8. Hierarchy of Model Sections which spans most of the well-known pressure transient models.

A large set of well-known Parameters is also pre-defined, with the parameters which belong to each
model controlled by the schema as described above. For a list of all the well-known Parameters
containing numerical values included in the schema, see Section 17.1.2. Each parameter has the
following attributes:
1. The type is the name of the well-known parameter, e.g., “wellbore volume” as seen in Figure
15-9.
2. The abbreviation is a fixed string in the schema: the commonly used short form often used in reports to save space, e.g., "Vw".
3. The value with its unit of measure, constrained by the data type, e.g., “volume measure types”
such as m3.
4. Uid.
5. Source Result Ref which is a string containing the uid of a different result (in the same Analysis)
which was used as the input for this parameter.
6. Remark
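A sketch of one such parameter instance (the concrete element name, nesting, and values here are illustrative assumptions based on the attribute list above; see Figure 15-9 for the actual schema shape):

```xml
<!-- Well-known parameter "wellbore volume", abbreviation "Vw" -->
<WellboreVolume uid="vw-1">
  <Abbreviation>Vw</Abbreviation> <!-- fixed string defined in the schema -->
  <Value uom="m3">2.5</Value>     <!-- constrained to volume measure types -->
  <Remark>Volume from completion tally</Remark>
</WellboreVolume>
```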

Figure 15-9. Parameters are provided which are needed by the pressure transient models.

Some Parameters are enums, e.g., “Boundary1 Type” (for a boundary model) can be “constant
pressure” or “no-flow”. For a list of all the well-known enum Parameters included in the schema, see
Section 17.1.3.


There are a small number of “sub models”, where an aspect of the Model requires a recurring set of
parameters. An example is for a hydraulic fracture which can recur to represent a multiply fractured
horizontal well. Each fracture has its own instance of a Single Fracture Sub Model. These are shown
in Figure 15-10.

SingleFractureSubModel:
+ FractureTip1Location: LocationIn2D
+ FractureTip2Location: LocationIn2D
+ FractureHeight: FractureHeight
+ DistanceMidFractureHeightToBottomBoundary: DistanceMidFractureHeightToBottomBoundary [0..1]
+ FractureFaceSkin: FractureFaceSkin [0..1]
+ FractureConductivity: FractureConductivity [0..1]
+ FractureStorativityRatio: FractureStorativityRatio [0..1]
+ FractureModelType: FractureModelType

LocationIn2D:
+ CoordinateX: LengthMeasure
+ CoordinateY: LengthMeasure

DistributedParametersSubModel:
+ IsPermeabilityGridded: boolean
+ PermeabilityArrayRefID: ResqmlModelRef [0..1]
+ IsThicknessGridded: boolean
+ ThicknessArrayRefID: ResqmlModelRef [0..1]
+ IsPorosityGridded: boolean
+ PorosityArrayRefID: ResqmlModelRef [0..1]
+ IsDepthGridded: boolean
+ DepthArrayRefID: ResqmlModelRef [0..1]
+ IsKvToKrGridded: boolean
+ KvToKrArrayRefID: ResqmlModelRef [0..1]
+ IsKxToKyGridded: boolean
+ KxToKyArrayRefID: ResqmlModelRef [0..1]

ResqmlModelRef:
+ ResqmlModelRef: DataObjectReference

InternalFaultSubModel:
+ IsLeaky: boolean
+ TransmissibilityReductionRatioOfLinearFront: TransmissibilityReductionFactorOfLinearFront [0..1]
+ IsConductive: boolean
+ IsFiniteConductive: boolean
+ Conductivity: FractureConductivity [0..1]
+ FaultRefID: ResqmlModelRef

ReservoirZoneSubModel:
+ BoundingPolygonPoint: LocationIn2D [1..*]
+ Permeability: HorizontalRadialPermeability [0..1]
+ Porosity: Porosity [0..1]
+ Thickness: TotalThickness [0..1]
Figure 15-10. Some types of pressure transient model require sub-model classes.
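The recurring sub-model pattern can be sketched as follows: each hydraulic fracture of a multiply fractured horizontal well gets its own SingleFractureSubModel, with its tips given as 2D coordinates (LocationIn2D). Element names follow Figure 15-10; the parent element, coordinate values, and units are illustrative assumptions, not normative PRODML content.

```python
import xml.etree.ElementTree as ET

# Parent model element is illustrative; two non-identical fractures,
# each defined by its two tip locations (x, y) and a height.
model = ET.Element("HorizontalWellboreMultipleVariableFracturedModel")

fractures = [((100.0, 50.0), (100.0, -50.0)),
             ((250.0, 40.0), (250.0, -40.0))]

for tip1, tip2 in fractures:
    frac = ET.SubElement(model, "SingleFractureSubModel")
    for name, (x, y) in (("FractureTip1Location", tip1),
                         ("FractureTip2Location", tip2)):
        loc = ET.SubElement(frac, name)
        ET.SubElement(loc, "CoordinateX", uom="m").text = str(x)
        ET.SubElement(loc, "CoordinateY", uom="m").text = str(y)
    ET.SubElement(frac, "FractureHeight", uom="m").text = "30"

fracture_count = len(model.findall("SingleFractureSubModel"))
```

Because each fracture is a separate sub-model instance, fractures can differ in location, orientation, and height, as the Horizontal Wellbore Multiple Variable Fractured Model requires.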

Custom parameters and custom models can be used to describe specialized PTA models not
included in the schema. They follow the pattern shown in Figure 15-11. Each model section has, in
addition to the well-known models, a Custom xx Model (where xx is one of the four model section
types). The custom model can use zero to many of the Any Parameter element (the base Abstract
Parameter, from which all the well-known Parameters inherit), and zero to many Custom Parameter
elements which, unlike the well-known Parameters, can have their name and abbreviation set, and
have a Measure Value of the General Measure Type (i.e., there is no defined set of UoM, since it can
represent any measurement dimensionality).
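A minimal sketch of this custom-model pattern: a CustomWellboreModel identified by ModelName, carrying one CustomParameter whose Name, Abbreviation, and MeasureValue (General Measure Type, so any unit of measure) are chosen by the sender. The model name and parameter shown are invented examples, not well-known PRODML definitions.

```python
import xml.etree.ElementTree as ET

# Custom wellbore model with a sender-defined parameter.
custom = ET.Element("CustomWellboreModel")
ET.SubElement(custom, "ModelName").text = "DualValveStorage"  # hypothetical model name

cp = ET.SubElement(custom, "CustomParameter", uid="cp-1")
ET.SubElement(cp, "Name").text = "Valve Opening Delay"        # hypothetical parameter
ET.SubElement(cp, "Abbreviation").text = "tV"
# GeneralMeasureType: no fixed UoM set, so any dimensionality is allowed here
ET.SubElement(cp, "MeasureValue", uom="s").text = "12.0"

custom_xml = ET.tostring(custom, encoding="unicode")
```

The design trade-off is visible here: a receiver cannot validate the unit or meaning of a CustomParameter the way it can for a well-known Parameter, which is why the guide reserves custom models for genuine innovations.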


WellboreBaseModel:
+ WellboreRadius: WellboreRadius
+ WellboreStorageCoefficient: WellboreStorageCoefficient
+ WellboreVolume: WellboreVolume [0..1]
+ WellboreFluidCompressibility: WellboreFluidCompressibility [0..1]
+ TubingInternalDiameter: TubingInternalDiameter [0..1]
+ FluidDensity: FluidDensity [0..1]
+ WellboreDeviationAngle: WellboreDeviationAngle [0..1]
+ WellboreStorageMechanismType: WellboreStorageMechanismType [0..1]

ConstantStorageModel: no additional parameters. (The same pattern applies to all the other Models for this model section.)

ChangingStorageFairModel:
+ RatioInitialToFinalWellboreStorage: RatioInitialToFinalWellboreStorage
+ DeltaTimeStorageChanges: DeltaTimeStorageChanges

CustomWellboreModel:
+ ModelName: ModelName
+ AnyParameter: AbstractParameter [0..*]
+ CustomParameter: CustomParameter [0..*]

CustomParameter (inherits AbstractParameter):
+ Name: String64
+ Abbreviation: String64
+ MeasureValue: GeneralMeasureType
Figure 15-11. Custom parameters and custom models.

For a matrix of all the Model Sections vs. all the Parameters, showing which parameters
are mandatory and optional in which model, see Section 17.1.4.


16 Data PreProcess and Deconvolution


16.1 Data PreProcess
The Data PreProcess data object is relatively simple as shown in Figure 16-1. Note that the model is
not attempting to capture the details of the pre-processing algorithms and parameters, only the kind of
processing that was carried out. The principal elements are:
1. The Flow Test Activity and Flow Test Measurement Set within it, which is the source of the
pre-processed data.
2. The input data (which will be one or more time series—depending on the kind of pre-
processing—of pressure, flow or other data, each referencing a Channel).
3. The output data, which is the PreProcessed (output) data, which is a single time series
element—of type pressure, flow or other data—referencing a Channel.
An enum, Data Conditioning, has a list of commonly used pre-processing methods, covering: data
conditioning, data corrections, channel splicing, channel differencing for QAQC purposes, and data
transforms from one measurement type/location to another. The enum is extensible (refer to
Common) so that custom values can be added. The enum may be repeated to show multiple data
conditioning steps that have been performed. For example, a single wavelet-filtering step can
perform both data smoothing and data reduction; in such a case, the enum DataConditioningExt can
be repeated to list each kind of conditioning applied in the one step. A Remark
element can be used to further describe the processing.
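A minimal sketch of such a record, in which one wavelet-filtering pass performed both smoothing and reduction, so DataConditioning is repeated. The enum strings come from the DataConditioning list in Figure 16-1; the measurement-set reference and the overall layout are simplified illustrations.

```python
import xml.etree.ElementTree as ET

# Data PreProcess record with two conditioning enum values for one step.
pre = ET.Element("PtaDataPreProcess")
ET.SubElement(pre, "FlowTestMeasurementSetRef").text = "mset-1"  # illustrative uid
for step in ("data smoothed", "data reduced"):
    ET.SubElement(pre, "DataConditioning").text = step
ET.SubElement(pre, "Remark").text = (
    "Wavelet filter applied smoothing and reduction in one pass.")

conditioning_steps = [e.text for e in pre.findall("DataConditioning")]
```

Note that the record describes only the kind of processing performed, not the algorithm's internal parameters, which matches the intent stated above.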

PtaDataPreProcess (inherits AbstractObject):
+ FlowTestActivity: FlowTestActivity
+ FlowTestMeasurementSetRef: String64 [0..1]
+ InputData: AbstractFlowTestData [1..*]
+ PreProcessedData: AbstractFlowTestData
+ DataConditioning: DataConditioningExt [0..*]
+ Remark: String2000 [0..1]

AbstractFlowTestData:
+ ChannelSet: ChannelSet
+ TimeChannel: Channel
+ TimeSeriesPointRepresentation: TimeSeriesPointRepresentation [0..1]
+ Remark: String2000 [0..1]
+ uid: String64 (attribute)

DataConditioning (enum) values: data outliers removed, data reduced, data smoothed, data time shifted, tide corrected, trend removal, data value shifted, flow to volume, fluid level to pressure, fluid level to volume, gauge to datum pressure, pressure to flow, volume to flow, data difference, data channel spliced, data channels averaged, rate reallocation, total rate calculation from phase rates. DataConditioningExt is the extensible union of this enum.
Figure 16-1. Data Preparation top level object.

16.2 Deconvolution
The Deconvolution data object is relatively simple as shown in Figure 16-2. The principal elements
are:
1. The Flow Test Activity and Flow Test Measurement Set within it, and Flow Test Period(s) within
that, which is the source of the pre-processed data.
2. The input pressure data and flowrate data, and the reconstructed pressure and flowrate data (a form of output).
3. Parameters required by Deconvolution.
There are two forms of output deconvolved pressure, both of which contain deconvolved pressure
Channel(s) and associated reference flowrate(s):
1. A single deconvolved pressure output.
2. A set of multiple deconvolved pressure outputs, each one giving the response of a specific Test
Flow Period.
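The second (multiple-output) form can be sketched as follows: one deconvolved response per Test Flow Period, each tied back to its period by TestPeriodOutputRefId and carrying a reference flowrate. The exact nesting, uids, and flowrate values are illustrative assumptions based on Figure 16-2, not a normative serialization.

```python
import xml.etree.ElementTree as ET

# Deconvolution record with two per-period outputs.
decon = ET.Element("PtaDeconvolution")
for period_uid, rate in (("flow-period-1", "150"), ("flow-period-2", "220")):
    out = ET.SubElement(decon, "DeconvolutionMultipleOutput")
    ET.SubElement(out, "TestPeriodOutputRefId").text = period_uid
    inner = ET.SubElement(out, "DeconvolutionOutput")
    ET.SubElement(inner, "DeconvolutionReferenceFlowrateValue",
                  uom="m3/d").text = rate

period_refs = [e.text for e in decon.findall(".//TestPeriodOutputRefId")]
```

The reference flowrate matters because a deconvolved pressure response is only meaningful relative to the rate it was normalized against.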


PtaDeconvolution (inherits AbstractObject):
+ FlowTestActivity: FlowTestActivity
+ FlowTestMeasurementSetRef: String64 [0..1]
+ FlowTestPeriodRef: String64 [0..*]
+ MethodName: String2000
+ InitialPressure: PressureMeasure
+ InputPressure: AbstractPtaPressureData
+ InputFlowrate: AbstractPtaFlowData
+ ReconstructedPressure: DeconvolvedPressureData [0..1]
+ ReconstructedFlowrate: DeconvolvedFlowData [0..1]
+ Remark: String2000 [0..1]
+ DeconvolutionOutput: AbstractDeconvolutionOutput [1..*]

AbstractDeconvolutionOutput is specialized as DeconvolutionSingleOutput and DeconvolutionMultipleOutput (the latter carrying TestPeriodOutputRefId: String64); each references a DeconvolutionOutput:

DeconvolutionOutput:
+ DeconvolvedPressure: DeconvolvedPressureData [0..1]
+ DeconvolutionReferenceFlowrateValue: VolumePerTimeMeasure
Figure 16-2. Deconvolution top level object.


17 Appendix for PTA


This chapter contains appendices for the PTA section.

17.1 List of Pressure Transient Analysis Models, Parameters, and the Matrix Relating These Two

17.1.1 Models

Wellbore Section
Wellbore Base Model: Abstract wellbore response model from which the other wellbore response model types are derived.
Constant Storage Model: Constant wellbore storage model.
Changing Storage Fair Model: Changing wellbore storage model using the Fair model.
Changing Storage Hegeman Model: Changing wellbore storage model using the Hegeman model.
Changing Storage Spivey Packer Model: Changing wellbore storage model using the Spivey Packer model.
Changing Storage Spivey Fissures Model: Changing wellbore storage model using the Spivey Fissures model.
Custom Wellbore Model: Wellbore Storage Model allowing for the addition of custom parameters to support extension of the model library provided.

Near Wellbore Section
Near Wellbore Base Model: Abstract near-wellbore response model from which the other near-wellbore response model types are derived.
Finite Radius Model: Finite radius model with radial flow into wellbore with skin factor.
Fractured Uniform Flux Model: Fracture model, with vertical fracture flow. Uniform Flux Model.
Fractured Infinite Conductivity Model: Fracture model, with vertical fracture flow. Infinite Conductivity Model.
Fractured Finite Conductivity Model: Fracture model, with vertical fracture flow. Finite Conductivity Model.
Fractured Horizontal Uniform Flux Model: Fracture model, with horizontal fracture (sometimes called "pancake fracture") flow. Uniform Flux Model.
Fractured Horizontal Infinite Conductivity Model: Fracture model, with horizontal fracture (sometimes called "pancake fracture") flow. Infinite Conductivity Model.
Fractured Horizontal Finite Conductivity Model: Fracture model, with horizontal fracture (sometimes called "pancake fracture") flow. Finite Conductivity Model.
Partially Penetrating Model: Partially penetrating model, with flowing length of wellbore less than total thickness of reservoir layer (as measured along wellbore).
Slanted Fully Penetrating Model: Slanted wellbore model, with full penetrating length of wellbore open to flow.
Slanted Partially Penetrating Model: Slanted wellbore model, with flowing length of wellbore less than total thickness of reservoir layer (as measured along wellbore).
Horizontal Wellbore Model: Horizontal wellbore model with wellbore positioned at arbitrary distance from lower surface of reservoir layer.
Horizontal Wellbore 2 Layer Model: Horizontal wellbore model with wellbore positioned at arbitrary distance from lower surface of reservoir layer, and with additional upper layer parallel to layer containing wellbore.
Horizontal Wellbore Multiple Equal Fractured Model: Horizontal wellbore model with wellbore positioned at arbitrary distance from lower surface of reservoir layer, containing a number "n" of equally spaced identical vertical fractures.
Horizontal Wellbore Multiple Variable Fractured Model: Horizontal wellbore model with wellbore positioned at arbitrary distance from lower surface of reservoir layer, containing a number "n" of non-identical vertical fractures. These may be unequally spaced and each may have its own orientation with respect to the wellbore, and its own height. Expected to be modelled numerically.
Custom Near Wellbore Model: Near Wellbore Model allowing for the addition of custom parameters to support extension of the model library provided.

Reservoir Section
Reservoir Base Model: Abstract reservoir model from which the other types are derived.
Homogeneous Model: Homogeneous reservoir model.
Dual Porosity Pseudo Steady State Model: Dual Porosity reservoir model, with Pseudo-Steady-State flow between the two porosity systems.
Dual Porosity Transient Spheres Model: Dual Porosity reservoir model, with transient flow between the two porosity systems, and assuming spherical shaped matrix blocks.
Dual Porosity Transient Slabs Model: Dual Porosity reservoir model, with transient flow between the two porosity systems, and assuming slab shaped matrix blocks.
Dual Permeability With Crossflow Model: Dual Permeability reservoir model, with Cross-Flow between the two layers.
Radial Composite Model: Radial Composite reservoir model, in which the wellbore is at the center of a circular homogeneous zone, communicating with an infinite homogeneous reservoir. The inner and outer zones have different reservoir and/or fluid characteristics. There is no pressure loss at the interface between the two zones.
Linear Composite Model: Linear Composite reservoir model in which the producing wellbore is in a homogeneous reservoir, infinite in all directions except one where the reservoir and/or fluid characteristics change across a linear front. On the farther side of the interface the reservoir is homogeneous and infinite but with a different mobility and/or storativity. There is no pressure loss at the interface between the two zones.
Linear Composite With Leaky Fault Model: Linear Composite reservoir model in which the producing wellbore is in a homogeneous reservoir, infinite in all directions except one where the reservoir and/or fluid characteristics change across a linear front. On the farther side of the interface the reservoir is homogeneous and infinite but with a different mobility and/or storativity. There is a fault or barrier at the interface between the two zones, but this is "leaky", allowing flow across it.
Linear Composite With Conductive Fault Model: Linear Composite reservoir model in which the producing wellbore is in a homogeneous reservoir, infinite in all directions except one where the reservoir and/or fluid characteristics change across a linear front. On the farther side of the interface the reservoir is homogeneous and infinite but with a different mobility and/or storativity. There is a fault or barrier at the interface between the two zones, but this is "leaky", allowing flow across it, and conductive, allowing flow along it. It can be thought of as a non-intersecting fracture.
Linear Composite With Changing Thickness Across Leaky Fault Model: Linear Composite reservoir model in which the producing wellbore is in a homogeneous reservoir, infinite in all directions except one where the reservoir and/or fluid characteristics change across a linear front. On the farther side of the interface the reservoir is homogeneous and infinite but with a different mobility and/or storativity and thickness. There is a fault or barrier at the interface between the two zones, but this is "leaky", allowing flow across it.
Numerical Homogeneous Reservoir Model: Numerical model with homogeneous reservoir. This model may have constant value or reference a grid of geometrically distributed values for the following parameters: permeability (k), thickness (h), porosity (phi), depth (Z), vertical anisotropy (KvToKr) and horizontal anisotropy (KxToKy). Internal faults can be positioned in this reservoir.
Numerical Dual Porosity Reservoir Model: Numerical model with dual porosity reservoir. This model may have constant value or reference a grid of geometrically distributed values for the following parameters: permeability (k), thickness (h), porosity (phi), depth (Z), vertical anisotropy (KvToKr) and horizontal anisotropy (KxToKy). Internal faults can be positioned in this reservoir.
Custom Reservoir Model: Reservoir Model allowing for the addition of custom parameters to support extension of the model library provided.

Boundary Section
Boundary Base Model: Abstract boundary model from which the other types are derived.
Infinite Boundary Model: Infinite boundary model; there are no boundaries around the reservoir.
Single Fault Model: Single fault boundary model. A single linear boundary runs along one side of the reservoir.
Closed Circle Model: Closed circle boundary model.
Two Parallel Faults Model: Two parallel faults boundary model. Two linear parallel boundaries run along opposite sides of the reservoir.
Two Intersecting Faults Model: Two intersecting faults boundary model. Two linear non-parallel boundaries run along adjacent sides of the reservoir and intersect at an arbitrary angle.
U Shaped Faults Model: U-shaped faults boundary model. Three linear faults intersecting at 90 degrees bound the reservoir on three sides, with the fourth side unbounded.
Closed Rectangle Model: Closed rectangle boundary model. Four faults bound the reservoir in a rectangular shape.
Pinch Out Model: Pinch Out boundary model. The upper and lower bounding surfaces of the reservoir are sub-parallel and intersect some distance from the wellbore. Other directions are unbounded.
Numerical Boundary Model: Numerical boundary model in which any arbitrary outer shape of the reservoir boundary can be imposed by use of any number of straight line segments which together define the boundary.
Custom Boundary Model: Boundary Model allowing for the addition of custom parameters to support extension of the model library provided.

17.1.2 Numerical Parameters

Wellbore
Wellbore Radius (Rw): The radius of the wellbore, generally taken to represent the open hole size.
Wellbore Storage Coefficient (Cs): The wellbore storage coefficient, equal to the volume which flows into the wellbore per unit change in pressure in the wellbore. NOTE that by setting this parameter to 0, the model becomes equivalent to a "No Wellbore Storage" model.
Ratio Initial To Final Wellbore Storage (Ci/Cs): In models in which the wellbore storage coefficient changes, the ratio of initial to final wellbore storage coefficients.
Delta Time Storage Changes (dT): In models in which the wellbore storage coefficient changes, the time at which the initial wellbore storage coefficient changes to the final coefficient.
Leak Skin (Sl): In Spivey (a) Packer and (c) Fissure models of wellbore storage, the Leak Skin controls the pressure communication through the packer (a), or between the wellbore and the high permeability region (b, a second application of model a), or between the high permeability channel/fissures and the reservoir (c). In case c, the usual Skin parameter characterizes the pressure communication between the wellbore and the high permeability channel/fissures.
Wellbore Volume (Vw): The volume of the wellbore equipment which influences the wellbore storage. It will be the sum of the volumes of all components open to the reservoir up to the shut-off valve.
Wellbore Fluid Compressibility (Cw): The compressibility of the fluid in the wellbore, such that this value * wellbore volume = wellbore storage coefficient.
Tubing Internal Diameter (ID): Internal diameter of the tubing, generally used for estimations of wellbore storage when the tubing is filling up.
Fluid Density (Rho): The density of the fluid in the wellbore, generally used for estimations of wellbore storage when the tubing is filling up.
Wellbore Deviation Angle (Deviation): The angle of deviation from vertical of the wellbore, generally used for estimations of wellbore storage when the tubing is filling up.
Model Name (ModelName): The name of the model. Available only for Custom Models, to identify the name of the model.

NearWellbore
Skin Relative To Total Thickness (S): Dimensionless value characterizing the restriction to flow (+ve value) or extra capacity for flow (-ve value) into the wellbore. This value is stated with respect to radial flow using the full layer thickness (h), i.e., the "reservoir radial flow" or "middle time region" of a pressure transient. It comprises the sum of "MechanicalSkinRelativeToTotalThickness" and "ConvergenceSkinRelativeToTotalThickness", both of which are also expressed in terms of h.
Rate Dependent Skin Factor (D): Value characterizing the rate at which an apparent skin effect, due to additional pressure drop caused by turbulent flow, grows as a function of flowrate. The additional flowrate-dependent Skin is this value D * Flowrate. The total measured Skin factor would then be S + DQ, where Q is the flowrate.
Delta Pressure Total Skin (dP Skin): The pressure drop caused by the total skin factor. Equal to the difference in pressure at the wellbore between what was observed at a flowrate and what would be observed if the radial flow regime in the reservoir persisted right into the wellbore. The reference flowrate will be the stable flowrate used to analyse a drawdown, or the stable last flowrate preceding a buildup.
Ratio Dp Skin To Total Drawdown (Ratio dP Skin To Total): The ratio of the DeltaPressureTotalSkin to the total drawdown pressure. Indicates the fraction of the total pressure drawdown due to completion effects such as convergence, damage, etc. The remaining pressure drop is due to radial flow in the reservoir.
Skin Layer 2 Relative To Total Thickness (S2): In a two-layer model with both layers flowing into the wellbore, the skin factor of the second layer. This value is stated with respect to radial flow using the full layer thickness (h), i.e., the "reservoir radial flow" or "middle time region" of a pressure transient.
Convergence Skin Relative To Total Thickness (Sconv): Dimensionless value characterizing the restriction to flow (+ve value, convergence) or additional capacity for flow (-ve value, fractured or horizontal wellbore) owing to the geometry of the wellbore connection to reservoir. This value is stated with respect to radial flow using the full reservoir thickness (h), i.e., the radial flow or middle time region of a pressure transient. It therefore can be added to "MechanicalSkinRelativeToTotalThickness" to yield "SkinRelativeToTotalThickness".
Mechanical Skin Relative To Total Thickness (Smech): Dimensionless value characterizing the restriction to flow (+ve value, damage) or additional capacity for flow (-ve value, e.g., acidized) due to effective permeability around the wellbore. This value is stated with respect to radial flow using the full reservoir thickness (h), i.e., the radial flow or middle time region of a pressure transient. It therefore can be added to "ConvergenceSkinRelativeToTotalThickness" to yield "SkinRelativeToTotalThickness".
Fracture Half Length (Xf): The half-length of an induced hydraulic fracture, measured from the wellbore to the tip of one "wing" of the fracture.
Fracture Face Skin (Sf): Dimensionless value characterizing the restriction to flow (+ve value, damage) or additional capacity for flow (-ve value, e.g., acidized) due to effective permeability across the face of a hydraulic fracture, i.e., controlling flow from reservoir into fracture. This value is stated with respect to radial flow using the full reservoir thickness (h), i.e., the radial flow or middle time region of a pressure transient. It therefore can be added, in a fractured well, to "ConvergenceSkinRelativeToTotalThickness" to yield "SkinRelativeToTotalThickness".
Orientation Of Fracture Plane (OrientationOfFracturePlane): For an induced hydraulic fracture which is assumed for PTA purposes to be planar, the azimuth of the fracture in the horizontal plane represented in the CRS.
Fracture Conductivity (Fc): For an induced hydraulic fracture, the conductivity of the fracture, equal to Fracture Width * Fracture Permeability.
Fracture Radius (Rf): For a horizontal ("pancake") induced hydraulic fracture, which is assumed to be circular in shape in the horizontal plane, the radius of the fracture.
Distance Fracture To Bottom Boundary (Zf): For a horizontal ("pancake") induced hydraulic fracture, the distance between the plane of the fracture and the lower boundary of the layer.
Perforated Length (hp): For a partial penetration (a vertical or slant well with less than full layer thickness open to flow) or a hydraulically fractured model, the length of the perforated section of the wellbore.
Distance Mid Perforations To Bottom Boundary (Zp): For a partial penetration (a vertical or slant well with less than full layer thickness open to flow), the distance from the mid-perforation point to the bottom boundary of the layer.
Orientation Well Trajectory (OrientationWellTrajectory): For a slant wellbore or horizontal wellbore model, the azimuth of the wellbore in the horizontal plane, represented in the local CRS. This is intended to be a value representative of the azimuth for the purposes of PTA. It is not necessarily the azimuth which would be recorded in a survey of the wellbore trajectory.
Length Horizontal Wellbore Flowing (hw): For a horizontal wellbore model, the length of the flowing section of the wellbore.
Distance Wellbore To Bottom Boundary (Zw): For a horizontal wellbore model, the distance between the horizontal wellbore and the lower boundary of the layer.
Distance Mid Fracture Height To Bottom Boundary (Zf): For a hydraulic fracture, the distance between the mid-height level of the fracture and the lower boundary of the layer.
Number Of Fractures (Nf): For a multiple fractured horizontal wellbore model, the number of fractures which originate from the wellbore. In a "HorizontalWellboreMultipleEqualFracturedModel" these fractures are identical and equally spaced, including one fracture at each end of the length represented by "LengthHorizontalWellboreFlowing".
Fracture Height (Hf): In any vertical hydraulic fracture model (including the cases where the wellbore can be vertical or horizontal), the height of the fractures. In the case of a vertical wellbore, the fractures are assumed to extend an equal distance above and below the mid-perforations depth, given by the parameter "DistanceMidPerforationsToBottomBoundary". In the case of a horizontal wellbore, the fractures are assumed to extend an equal distance above and below the wellbore.
Fracture Angle To Wellbore (FractureAngleToWellbore): For a multiple fractured horizontal wellbore model, the angle at which fractures intersect the wellbore. A value of 90 degrees indicates the fracture plane is normal to the wellbore trajectory.
Fracture Storativity Ratio (etaD): Dimensionless value characterizing the fraction of the pore volume occupied by the fractures relative to the total pore volume (fractures plus reservoir).

Reservoir
Horizontal Radial Permeability (K): The radial permeability of the reservoir layer in the horizontal plane.
Total Thickness (h): The total thickness of the reservoir layer.
Permeability Thickness Product (k.h): The product of the radial permeability of the reservoir layer in the horizontal plane * the total thickness of the layer.
Porosity (Phi): The porosity of the reservoir layer.
Initial Pressure (Pi): The initial pressure of the fluids in the reservoir layer. "Initial" is taken to mean "at the time at which the rate history used in the pressure transient analysis starts".
Pressure Datum TVD (datum): The depth TVD of the datum at which reservoir pressures are reported for this layer. Note, this depth may not exist inside the layer at the Test Location, but it is the reference depth to which pressures will be corrected.
Average Pressure (Pbar): The average pressure of the fluids in the reservoir layer. "Average" is taken to refer to "at the time at which the rate history used in the pressure transient analysis ends".
Interporosity Flow Parameter (Lambda): The dimensionless interporosity flow parameter, known as Lambda. In dual porosity, represents the ability of the matrix to flow into the fissure network. In dual permeability or other multi-layer cases, represents the ability of flow to move from one layer to another.
Storativity Ratio (Omega): The dimensionless storativity ratio, known as Omega, equal to the fracture storativity divided by total storativity. Storativity = porosity * total compressibility.
Ratio Layer 1 To Total Permeability Thickness Product (Kappa): In a two-layer model, the ratio of layer 1 to the total PermeabilityThickness.
Layer 2 Thickness (h layer 2): In a two-layer model, the Thickness (h) of layer 2.
Inner To Outer Zone Mobility Ratio (M): In a Radial or Linear Composite model, the mobility (permeability/viscosity) ratio of inner zone/outer zone.
Inner To Outer Zone Diffusivity Ratio (D): In a Radial or Linear Composite model, the diffusivity (permeability/(porosity*viscosity*total compressibility)) ratio of inner zone/outer zone.
Distance To Mobility Interface (Li): In a Radial or Linear Composite model, the distance to the boundary of the inner and outer zones.
Orientation Of Linear Front (OrientationOfLinearFront): In a Linear Composite model, the orientation of the boundary of the inner and outer zones represented in the local CRS.
Transmissibility Reduction Factor Of Linear Front (Leakage): The transmissibility reduction factor of a fault in a Linear Composite model where the boundary of the inner and outer zones is a leaky fault. If T is the complete transmissibility which would be computed without any fault between point A and point B (T is a function of permeability, etc.), then Tf = T * leakage. Therefore: leakage = 1 implies that the fault is not a barrier to flow at all; leakage = 0 implies that the fault is sealing (no transmissibility at all between points A and B).
Fault Conductivity (Fc): In a Linear Composite model where the boundary of the inner and outer zones is a leaky and conductive fault, the fault conductivity (i.e., along the face of the fault).
Region 2 Thickness (h region 2): In a Linear Composite model where the thickness of the inner and outer zones is different, the thickness h of the outer region (2).
Vertical Anisotropy Kv To Kr (kvTokr): The vertical anisotropy of permeability, K(vertical)/K(radial). K(radial) is the effective horizontal permeability, which with anisotropic horizontal permeability equals square root (Kx^2+Ky^2). Optional since many models do not account for this parameter. It will, however, be mandatory in some models, e.g., limited entry or horizontal wellbore models.
Horizontal Anisotropy Kx To Ky (kxToky): The horizontal anisotropy of permeability, K(x direction)/K(y direction). Optional since many models do not account for this parameter.
Orientation Of Anisotropy X Direction (OrientationOfAnisotropy_XDirection): In the case where there is horizontal anisotropy, the orientation of the x direction represented in the local CRS. Optional since many models do not account for this parameter.

Boundary
Radius Of Investigation Ri For any transient test, the estimated radius of investigation
of the test.

Pore Volume Of PVinv For any transient test, the estimated pore volume of
Investigation investigation of the test.

Drainage Area A In a closed reservoir model, the Drainage Area measured.


Measured This is to be taken to mean that the analysis yielded a
measurement, as opposed to the RadiusOfInvestigation or
PoreVolumeOfInvestigation Parameters which are taken to
mean the estimates for these parameters derived from
diffuse flow theory, but not necessarily measured.

Pore Volume Measured PVmeas In a closed reservoir model, the Pore Volume measured.
This is to be taken to mean that the analysis yielded a
measurement, as opposed to the RadiusOfInvestigation or
PoreVolumeOfInvestigation Parameters which are taken to
mean the estimates for these parameters derived from
diffuse flow theory, but not necessarily measured.

Distance To Boundary 1 L1 In any bounded reservoir model, the distance to the
Boundary 1. The orientation of this can be thought of
conceptually (i.e., in relationship to other boundaries in the
model, not literally) as "East".

Distance To Boundary 2 L2 In any bounded reservoir model, the distance to the
Boundary 2. The orientation of this can be thought of
conceptually (i.e., in relationship to other boundaries in the
model, not literally) as "North".

Distance To Boundary 3 L3 In any bounded reservoir model, the distance to the
Boundary 3. The orientation of this can be thought of
conceptually (i.e., in relationship to other boundaries in the
model, not literally) as "West".

Distance To Boundary 4 L4 In any bounded reservoir model, the distance to the
Boundary 4. The orientation of this can be thought of
conceptually (i.e., in relationship to other boundaries in the
model, not literally) as "South".



Orientation Of Normal To Boundary 1 OrientationOfNormalToBoundary1 In any bounded reservoir model,
the orientation of the normal to Boundary 1. This is an absolute orientation in the local CRS.

Angle Between Boundaries AngleBetweenBoundaries In a boundary model with two Intersecting Faults, the angle
of intersection. 90 degrees indicates two boundaries which
are normal to each other.

Distance To Pinch Out Lpinch In a model where the reservoir model is a Pinch Out, the
distance from the wellbore to the pinch-out.

Enum Parameters
Name Values Documentation
Wellbore
Wellbore Storage Mechanism Type full well | rising level | closed chamber Parameter used to indicate
which physical mechanism type is believed responsible for the wellbore storage.
Enumeration with choices: "full well" = wellbore is full of fluid
and storage likely related to wellbore volume * wellbore fluid
compressibility; "rising level" = wellbore is filling up and
storage likely related to cross sectional area / (cos deviation
angle * density); "closed chamber" = reservoir is flowing into
a closed chamber as pressure builds up.

NearWellbore
Fracture Model Type infinite conductivity | uniform flux | finite conductivity | compressible fracture finite
conductivity For a Horizontal Wellbore Multiple Equal Fractured Model, the model type which applies to
all the fractures. Enumeration with choices of infinite conductivity, uniform flux, finite conductivity, or
compressible fracture finite conductivity.

Reservoir
Upper Boundary Type no-flow | constant pressure The type of the upper boundary of the layer.
Enumeration with choices of: "no-flow" or "constant pressure". Optional since many models do
not account for this parameter.

Lower Boundary Type no-flow | constant pressure The type of the lower boundary of the layer.
Enumeration with choices of: "no-flow" or "constant pressure". Optional since many models do
not account for this parameter.

Boundary
Boundary [N] Type (N = 1 to 4) no-flow | constant pressure In any bounded reservoir model, the type of
Boundary [N]. Enumeration with choice of "no-flow" or "constant pressure".



Matrix of Model vs Parameter

[The rotated column headings of this matrix (one per parameter abbreviation from the tables above) are
not legible in this text extraction. Each column corresponds to one parameter; cell values use the key
below, and some model names in the rows are truncated at the original page margin.]
WellboreBaseModel 1 1 1 0 0 0 2 2 2 2 2 0 2 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 2 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
ConstantStorageModel 0 1 1 0 0 0 2 2 2 2 2 0 2 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 2 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
ChangingStorageFairModel 0 1 1 1 1 0 2 2 2 2 2 0 2 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 2 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
ChangingStorageHegemanModel 0 1 1 1 1 0 2 2 2 2 2 0 2 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 2 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
ChangingStorageSpiveyPackerMode0 1 1 1 0 1 2 2 2 2 2 0 2 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 2 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
ChangingStorageSpiveyFissuresMod0 1 1 1 0 1 2 2 2 2 2 0 2 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 2 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
CustomWellboreModel 0 1 1 0 0 0 2 2 2 2 2 1 2 3 3 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 2 0 0 0 0 0 0 0 0 1 0 0 3 3 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 3 3 0 0 0
NearWellboreBaseModel 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 2 2 2 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
FiniteRadiusModel 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 2 2 2 2 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
FracturedUniformFluxModel 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 2 2 2 2 0 0 1 2 2 0 0 0 0 0 0 0 0 0 2 0 2 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
FracturedInfiniteConductivityMode0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 2 2 2 2 0 0 1 2 2 0 0 0 0 0 0 0 0 0 2 0 2 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
FracturedFiniteConductivityModel 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 2 2 2 2 0 0 1 2 2 1 0 0 0 0 0 0 0 0 2 0 2 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
FracturedHorizontalUniformFluxMo0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 2 2 2 0 0 0 0 0 0 0 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
FracturedHorizontalInfiniteConduct0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 2 2 2 0 0 0 0 0 0 0 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
FracturedHorizontalFiniteConductiv0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 2 2 2 0 0 0 0 0 0 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
PartiallyPenetratingModel 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 2 2 2 2 2 2 0 0 0 0 0 0 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
SlantedFullyPenetratingModel 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 1 2 2 2 2 2 2 0 0 0 0 0 0 0 0 1 2 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
SlantedPartiallyPenetratingModel 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 1 2 2 2 2 2 2 0 0 0 0 0 0 1 1 1 2 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
HorizontalWellboreModel 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 2 2 2 0 2 2 0 0 0 0 0 0 0 0 0 2 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
HorizontalWellbore2LayerModel 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 2 2 2 0 2 2 0 0 0 0 0 0 0 0 0 2 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
HorizontalWellboreMultipleEqualF 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 2 2 2 0 2 2 1 2 0 2 0 0 0 0 0 2 1 2 2 1 2 1 2 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
HorizontalWellboreMultipleVariabl0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 2 2 2 0 2 2 0 0 0 0 0 0 0 0 0 2 1 2 0 1 0 0 0 0 0 3 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
CustomNearWellboreModel 0 0 0 0 0 0 0 0 0 0 0 1 0 3 3 1 2 2 2 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 3 3 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 3 3 0 0 0
ReservoirBaseModel 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 2 2 0 0 0 0 0 0 0 0 0 0 0 2 2 2 0 2 2 0 0 0 0 0 0 0 0
HomogeneousModel 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 2 2 0 0 0 0 0 0 0 0 0 0 0 2 2 2 0 2 2 0 0 0 0 0 0 0 0
DualPorosityPseudoSteadyStateMo0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 2 2 1 1 0 0 0 0 0 0 0 0 0 2 2 2 0 2 2 0 0 0 0 0 0 0 0
DualPorosityTransientSpheresMode0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 2 2 1 1 0 0 0 0 0 0 0 0 0 2 2 2 0 2 2 0 0 0 0 0 0 0 0
DualPorosityTransientSlabsModel 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 2 2 1 1 0 0 0 0 0 0 0 0 0 2 2 2 0 2 2 0 0 0 0 0 0 0 0
DualPermeabilityWithCrossflowMo0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 2 2 1 1 1 2 0 0 0 0 0 0 0 2 2 2 0 2 2 0 0 0 0 0 0 0 0
RadialCompositeModel 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 2 2 0 0 0 0 1 1 1 0 0 0 0 2 2 2 0 2 2 0 0 0 0 0 0 0 0
LinearCompositeModel 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 2 2 0 0 0 0 1 1 1 2 0 0 0 2 2 2 0 2 2 0 0 0 0 0 0 0 0
LinearCompositeWithLeakyFaultMo0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 2 2 0 0 0 0 1 1 1 2 1 0 0 2 2 2 0 2 2 0 0 0 0 0 0 0 0
LinearCompositeWithConductiveFa0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 2 2 0 0 0 0 1 1 1 2 1 1 0 2 2 2 0 2 2 0 0 0 0 0 0 0 0
LinearCompositeWithChangingThic0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 2 2 0 0 0 0 0 0 1 2 1 0 1 2 2 2 0 2 2 0 0 0 0 0 0 0 0
NumericalHomogeneousReservoirM0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 2 2 0 0 0 0 0 0 0 0 0 0 0 2 2 2 0 0 0 3 1 3 0 0 0 0 0
NumericalDualPorosityReservoirMo0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 2 2 1 1 0 0 0 0 0 0 0 0 0 2 2 2 0 0 0 3 1 3 0 0 0 0 0
CustomReservoirModel 0 0 0 0 0 0 0 0 0 0 0 1 0 3 3 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 3 3 1 1 1 1 1 2 2 0 0 0 0 0 0 0 0 0 0 0 2 2 2 1 2 2 0 0 0 3 3 0 0 0
BoundaryBaseModel 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 2 2 0
InfiniteBoundaryModel 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 2 2 0
SingleFaultModel 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 2 2 0
ClosedCircleModel 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 2 2 0
TwoParallelFaultsModel 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 2 2 0
TwoIntersectingFaultsModel 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 2 2 0
UShapedFaultsModel 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 2 2 0
ClosedRectangleModel 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 2 2 2
PinchOutModel 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 2 2 0
NumericalBoundaryModel 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 2 2 2
CustomBoundaryModel 0 0 0 0 0 0 0 0 0 0 0 1 0 3 3 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 3 3 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 3 3 2 2 0

Key
1 Abstract model section – models inherit these parameters
1 1..1 parameter per model
2 0..1 parameter per model
3 0..* parameter per model

Part V: Distributed Temperature Sensing (DTS)
Part V contains Chapters 18 through 21 which explain the set of PRODML data objects for distributed
temperature sensing (DTS).

Acknowledgement
Thanks are due to the members of the PRODML SIG who joined the effort to develop the revised DTS
data object.
In particular, the efforts donated by members from the following companies must be acknowledged: Shell,
Chevron, AP Sensing, Perfomix, Schlumberger, Sristy Technologies, Tendeka, Teradata, and
Weatherford. The provision of definitions of terminology by the SEAFOM Joint Industry Forum is also
gratefully acknowledged.


18 Introduction to DTS
This section describes the data model in PRODML to cover the most common business scenarios and
workflows related to DTS as identified by the industry at large.

18.1 Overview of DTS


Distributed Temperature Sensing (DTS) is a technology where one sensor can collect temperature data
that is spatially distributed over many thousands of individual measurement points throughout a facility.
DTS requires that the facilities being monitored—for example, wellbores or pipelines—are fitted with fiber
optic cable for gathering temperature data along the entire length, instead of using individual gauges.
While still a relatively young technology in the oil and gas industry, DTS and other fiber technologies
offer considerable promise for improved temperature monitoring and for quickly detecting production-related
problems. Research is showing that, in addition to temperature sensing, fiber optic technology can also
be used to detect sound, pressure, flow, and fluid composition. For more information about DTS, see
Chapter 21.

18.2 The Business Case


The applications of DTS data in the oil & gas industry are growing rapidly. Business workflows have
emerged that depend on DTS data to make timely decisions. This is a radical change from the way DTS
data was handled in the past, when only a limited audience consumed the data.
In addition to the changes observed in the oil & gas industry, DTS technology is increasingly being
applied in other areas. It is important that the new standard can accommodate these scenarios and, if
required, be extended beyond upstream oil and gas to other industries.

Business Drivers and Benefits


By having a common data schema that provides comprehensive coverage of DTS data and all its
potential uses, we provide a mechanism by which different vendors and operating companies can
integrate solutions seamlessly.
The main goal for this PRODML capability is to ensure that all the business requirements outlined by the
participating oil companies and oil services companies are covered. This in turn allows different
companies to create products (for data transport, storage, management, analysis, etc.) that can
integrate seamlessly, allowing hardware and software to be mixed and matched.

Challenges with DTS Data


Extensive use of the previous PRODML schema for DTS revealed the following areas requiring
enhancements, which were first made in version 1.3.
• The data schema provided definitions for describing a fiber installation, but more detailed information
was desired.
• A clearer distinction was needed between ‘raw trace’ measurements obtained from the fiber and the
derived ‘log-like’ curve data that is eventually used for interpretation and analysis.
• A mechanism to label data with ‘versions’ was needed, to accommodate the diverse business
workflows that involve the use of DTS data.
• Placeholders for annotations and contextual data were needed to help subject matter experts
troubleshoot any issues related to the DTS measurements.

18.3 Scope and Use Cases


Sample use cases are listed and documented to illustrate the level of coverage of DTS in PRODML and
to provide a few examples of how the data model could be used. From these examples it will be easy to
extrapolate additional use cases that will apply to more specific situations. The use cases documented
below focus on these areas:


• Describing a physical fiber installation, where we will highlight the different possibilities to
accommodate multiple deployment scenarios that have occurred in real life. The option to capture
how a physical installation has changed over time is also covered by this data schema.
• Capturing measurements from DTS instrumentation so they can be transferred from the instrument to
other locations and ultimately be stored in a repository. This includes the ‘raw’ measurements
obtained by the light box as well as ‘derived’ temperature log curves.
• Manipulation of DTS-derived temperature log curves (depth adjustments, for example) while
maintaining a record of all the changes performed. The PRODML schema supports multiple forms of
data manipulation and versioning so that most business workflows surrounding DTS can also be
represented.
For detailed explanation of these use cases and related key concepts, see Chapter 19.


19 Use Cases and Key Concepts


This chapter provides an overview of use cases addressed with this data object, related key concepts,
and data object organization.

19.1 Use Cases


The DTS data objects have been designed to address these primary use cases:
1. Represent an optical fiber installation;
2. Capture DTS measurements for transport and storage;
3. Manipulate DTS-derived temperature log curves.
Each is outlined in the following sections.

Use Case 1: Represent an Optical Fiber Installation


The Optical Path data object has been designed to address the following scenarios:
• Installation of an optical path that consists of multiple fiber segments joined via splices,
connections and turnarounds. Optical fibers can adopt multiple configurations such as:
− Straight fiber with a termination at the end.
− ‘J’ configuration.
− Dual-ended fiber that terminates back in the instrument box.
For diagrams of these and other geometries, see Figure 21-4.
Optical fiber installation is not restricted to wells. The data schema has been designed to accommodate
other deployment scenarios where optical fiber could also be applied, such as pipelines.
Further note that optical fiber installations are used in a similar manner for distributed acoustic and
strain sensing (DAS and DSS) applications. The same definition of optical path installation applies to the
PRODML distributed acoustic sensing (DAS) data objects.
The data schema is highly detailed, allowing both great flexibility and fine granularity. It makes it
possible to easily document things such as:
• Location of splices, and their type.
• Signal loss and reflectivity properties on a per-fiber-segment basis.
• Overstuffing (whereby the length of fiber is greater than that of the physical facility being measured).
• Type of material used in the optical fiber.
• Conveyance of the fiber, e.g. in a control line, in a permanent cable, deployed in a wireline logging
mode, etc.
• Map the length along the optical path to specific facility lengths, so that the analyst knows which
parts of a measurement pertain to the wellbore, pipeline etc. which they are analyzing.
• Denote locations in the fiber where fiber defects exist so that future troubleshooting of the
measurements can take into account the presence of these defects.
• Store calibrations of the Instrument Box or whole system.
• Store OTDR (Optical Time Domain Reflectometry, a type of diagnostic test on the optical path)
information, including the type of equipment and personnel used for taking the OTDR.
• Represent the DTS Instrumentation Box that has been installed, either permanently connected or
as a temporary installation, to the optical fiber. Several details regarding the instrumentation can be
represented through the data schema, ranging from make/model of the box, contact information on
the person who installed it, configuration data, and calibration data, to diagnostics information. The
DTS Instrument Box is sometimes known as a “Lightbox”.


• Represent DTS Installation comprising one Optical Path and one Instrument Box. This is the “unit”
which generates measurements. Various configurations can be represented this way, such as an
Instrument Box that will be shared among multiple optical fibers in one or multiple physical locations
(a “drive-by” instrument box).

Use Case 2: Capture DTS Measurements for Transport and Storage


The DTS data object clearly differentiates between the ‘raw’ measurements obtained by the instrument
box (such as Stokes and anti-Stokes curves) and the final, derived temperature value along the fiber
that is recorded in the form of a temperature log curve for easier loading into different visualization and
analysis tools.
Each measurement set has associated with it the DTS Installation which created it, so that traceability can
be assured.
The data schema does not impose requirements as to what measurement curves are required so that
each different installation can make use of the appropriate curves. However, the family of curves which
can be used is limited to a set agreed by a group of major DTS suppliers. One family of curve names
covers both Rayleigh and Brillouin methods of DTS measurement, with absent channels being omitted.
The structure used for representing the measurements was chosen in order to maintain a balance
between the size of the resulting XML file and compatibility with current Energistics XML representations
and libraries.
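The raw-versus-derived distinction above can be illustrated with a small sketch. This is not part of the PRODML schema; it is a simplified calculation in Python, and the calibration constants `gamma` and `c` are hypothetical stand-ins for the vendor-specific instrument calibration a real light box applies:

```python
import math

def temperature_from_ratio(stokes, anti_stokes, gamma=482.0, c=0.04):
    """Simplified DTS relation T = gamma / (c - ln(I_as / I_s)).

    gamma and c stand in for instrument calibration constants;
    the values used here are illustrative only.
    """
    return gamma / (c - math.log(anti_stokes / stokes))

# A 'raw' trace (per-point Stokes and anti-Stokes intensities) becomes a
# 'derived' temperature log curve, one value per position along the fiber.
stokes_trace = [1.00, 0.98, 0.95]
anti_stokes_trace = [0.21, 0.22, 0.23]
temperature_log = [temperature_from_ratio(s, a)
                   for s, a in zip(stokes_trace, anti_stokes_trace)]
```

The point of the sketch is only that the instrument's raw channels and the derived temperature curve are different data, which is why the schema stores them as distinct curve families.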

Use Case 3: Manipulation of DTS-derived Temperature Log Curves


A very important part of DTS data usage is support for diverse business workflows that not only use
DTS data for analysis but also apply different transformations to it to better support decision-making
activities.
When modifying DTS data it is critical that one can always trace the origins of any transformation. The
DTS Data object provides all the necessary fields so that any modifications made to the data (such as a
depth shift) can be transferred while maintaining their association with the original reading obtained from the DTS
Installed System. The data model allows representation of scenarios where a single measurement from a DTS
Installed System has undergone different transformations for various reasons, keeping all the
transformations (by versioning) and having all those transformations reference the original measurement
for full traceability.
In addition to data versioning the data object offers a number of flags that can be used to denote
attributes such as:
• Whether the measurement is ‘bad’ or not.
• Use of keyword “tags” for easier search and retrieval.
• When there are multiple versions of interpretations derived from one measurement, a flag that
shows which interpretation is the ‘approved’ one for business decisions.
• Whether the measurement is empty or not, allowing tracking of how often the instrumentation
generates readings that are completely empty (because the fiber is disconnected,
for example).
There are also placeholders for storing diagnostics information from the instrument box that can be used
for troubleshooting any issues that may be found with the measurement itself.
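As a sketch of how a consuming application might use the flags described above, the following Python fragment selects the approved, usable interpretation among several versions of one measurement. All class and field names here are hypothetical illustrations, not PRODML element names:

```python
from dataclasses import dataclass, field

@dataclass
class MeasurementVersion:
    """Illustrative stand-in for one versioned DTS interpretation."""
    version: str
    source_measurement_uid: str      # traceability to the original reading
    bad_flag: bool = False           # measurement flagged as 'bad'
    empty_flag: bool = False         # reading came back completely empty
    approved: bool = False           # version approved for business decisions
    tags: list = field(default_factory=list)

def approved_interpretation(versions):
    """Return the approved, usable version, or None if there is none."""
    for v in versions:
        if v.approved and not v.bad_flag and not v.empty_flag:
            return v
    return None

# Two versions of the same measurement; both keep a reference to the
# original reading for full traceability.
versions = [
    MeasurementVersion("raw", "meas-001"),
    MeasurementVersion("depth-shifted", "meas-001",
                       approved=True, tags=["depth-shift"]),
]
best = approved_interpretation(versions)
```

Note how every version carries the identifier of the original measurement, which is the traceability requirement the text emphasizes.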

19.2 Key Data Model Concepts


This section presents an overview of the DTS data object model, and then goes through each major
element and concept in turn.

Data Model Overview


Figure 19-1 is a simple diagram of the data model for a DTS implementation. The colored boxes
represent the minimum set of objects in the DTS data object that are needed to represent any DTS
deployment. The meaning and usage of the colored boxes is explained in the next section of the
document.
Each of these boxes is covered by a top-level object in the data model:
• Fiber Optical path
• DTS Instrument box
• DTS Installed system
• DTS Measurement

Figure 19-1. Overall representation of a DTS installation and measurements.

19.3 Defining the Optical Path


The fiber optical path is used by both the DTS and the DAS sub-domains of PRODML. This section 19.3
is therefore common to both.
For the UML diagrams of the Fiber Optical Path, see the UML model (the XMI file included with PRODML
download).

Fiber Optical Path


An optical path is a set of continuous optical waveguides that acts as a linear sensor, used to record
temperature or acoustic or dynamic strain events along its length, as shown for example in Figure 19-2. It
can be composed of one or more optical path components and has one termination. An optical path
component could be a fiber segment, a connector, a splice, or a turnaround. See Section 21.3 for some
example physical fiber installations in wellbores which may make up the Optical Path.
In the PRODML distributed data objects, the optical path is represented by a Fiber Optical Path top-level
object. This object contains one optical path object that represents both the collection of components
used along the path and the connection network.


Figure 19-2. Example of a dual-ended optical path installation, showing different types of components.

In this example, we have one optical path that is deployed in a well in a dual-ended configuration.
To support the requirement to be able to track the changes in the path over time, the optical path is
represented using the inventory and network pattern explained in the following sections.
• Optical Path Inventory. This is a list of all the components used in the Optical Path over the whole
time being reported. (For more information, see Section 19.3.2 (below).)
• Optical Path Network. This is a representation of the connectivity of a set of components at a given
time. (For more information, see Section 19.3.3.)

Optical Path Inventory Representation


The optical path in Figure 19-2 is composed of the following components, which are referred to as the
Inventory of components in the optical path.
• 5 fiber segments, one from each of these locations:
− the instrument box to the first connector (at the wellhead)
− the first connector to the turnaround at the bottom of the well
− the turnaround to the second connector
− the second connector to the splice
− the splice back to the terminator at the instrument box
• 2 connectors
• 1 splice
• 1 turnaround
• 1 terminator of type ‘looped back to instrument box’ to represent that the optical path ends back into
the instrument box in a dual-ended configuration. (The alternative type of terminator is ‘termination at
cable’, where the optical path is single ended).
It is necessary to declare only those components and segments that are of relevance to the particular
application or well design. If there is no interest in keeping track of splices and connectors, you can easily
declare this optical path to consist of 1 fiber segment that happens to be the entire length of the optical
path. Of course, doing so means that there is no mapping of possible anomalies to connections, etc., and
no way to record different physical properties of each fiber segment.
In addition, OTDR (optical time domain reflectometry) tests can be reported for the optical path.
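The inventory of the dual-ended example above can be tallied in a short sketch. The component type names and location strings below are simplified stand-ins, not the schema's component class names:

```python
from collections import Counter

# Inventory of the dual-ended example: (component type, location/span).
inventory = [
    ("fiber segment", "instrument box to first connector (wellhead)"),
    ("connector",     "wellhead"),
    ("fiber segment", "first connector to turnaround"),
    ("turnaround",    "bottom of well"),
    ("fiber segment", "turnaround to second connector"),
    ("connector",     "surface"),
    ("fiber segment", "second connector to splice"),
    ("splice",        "surface"),
    ("fiber segment", "splice to terminator"),
    ("terminator",    "looped back to instrument box"),
]

counts = Counter(kind for kind, _ in inventory)
# 5 fiber segments, 2 connectors, 1 splice, 1 turnaround, 1 terminator.
```

Collapsing this to a single fiber segment, as the text notes, is equally valid when splice- and connector-level detail is not of interest.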


Optical Path Network Representation


The optical path network is represented by a PRODML Product Flow Model (a standard object for
networks in PRODML) (Figure 19-3). This model uses the concept of units, which are “black boxes” that
have ports, representing connection points. Each component in the inventory that is part of the optical
path is represented by a unit. Connectivity is represented by nodes, which are virtual objects representing
the connection between two ports.
Generally, components along the optical path are connected linearly, so units have 2 ports. One
benefit of this approach is standardization with other network representations, e.g., the flow network of
PRODML.

Figure 19-3. Representation of the optical path components in a PRODML Product Flow Model (network).

For the UML model that represents this network, see Figure 19-4. This diagram also shows the
mandatory elements and notes as to which convention to follow.
Note that matching connectedNode values on two ports of different units indicate that those two units are
connected. For example, “Port 2” of “fiber segment 1” and “Port 1” of “connector 1” both show connectedNode as “Node
a”. The first component can reference a specific named port on the instrument box if this needs to be
reported, and the same applies to the port on the terminator if “looped back to instrument box”. In this
way the instrument box connections are reported.
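A consuming application could recover the unit-to-unit connectivity from the connectedNode convention along these lines. This is an illustrative sketch, not PRODML API code; the unit, port, and node names follow the pattern of Figure 19-3:

```python
# Each component is a 'unit' with two ports; two ports (on different
# units) that name the same connectedNode are joined at that node.
ports = [
    ("fiber segment 1", "Port 2", "Node a"),
    ("connector 1",     "Port 1", "Node a"),
    ("connector 1",     "Port 2", "Node b"),
    ("fiber segment 2", "Port 1", "Node b"),
]

def connections(port_list):
    """Pair up ports that reference the same connectedNode value."""
    by_node = {}
    for unit, port, node in port_list:
        by_node.setdefault(node, []).append((unit, port))
    # A node referenced by exactly two ports is one unit-to-unit link.
    return {node: pair for node, pair in by_node.items() if len(pair) == 2}

links = connections(ports)
```

In a linear optical path every interior node pairs exactly two ports, so the resulting links reproduce the component chain of the inventory.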


Figure 19-4. Product Flow Model which represents a network, showing mandatory elements (red border) and
including explanatory notes (blue boxes).


Facility Mappings
The purpose of the Facility Mappings element within the Fiber Optical Path object is to represent the
relationship between an optical path and the facility(s) (sometimes called “assets”) where the optical fiber
has been deployed. For locations where the fiber is used in one facility only, such as a well, the
relationship between the facility and the fiber (optical path) is naturally 1 to 1.
However, if a fiber (optical path) is long enough to cover multiple assets (for example, 2 wells and
1 pipeline, as shown in Figure 19-5), then you must represent the relationship between that one optical
path and the 2 wells and the pipeline.

Figure 19-5. Example of an optical path that spans more than one well (asset)

When a PRODML distributed data measurement is obtained, we must be able to map the “optical path
length” of the measurement (which gives values along the measured length of the fiber) to the “facility
length” of the measurement (which gives distributed values in reference to lengths or depths within the
facility, for example in a well, as shown in Figure 19-5).
The Facility Mapping element provides that translation between the actual distance along the optical path
as measured from the instrument box and the actual distance of the measurement locations as measured
from a reference datum defined for the facility. See the example in Figure 19-6 wherein the mapping is
shown as the grey “facility lengths,” which are formed from certain “optical path lengths” along the total
length of the optical path.

Figure 19-6. Mapping Optical Path Distance and Facility Distance.
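The translation just described can be sketched as a small function. This is illustrative only, not part of the PRODML schemas: a single linear mapping segment is assumed, defined by its start point on the fiber, the corresponding start point on the facility, and an overstuffing ratio, and all names and numbers are hypothetical.

```python
# Illustrative sketch of one facility-mapping segment: translate a distance
# measured along the optical path (from the instrument box) into a distance
# measured along the facility (from its reference datum). The function and
# parameter names are hypothetical, not part of the PRODML schemas.

def to_facility_distance(optical_path_distance, fiber_start, facility_start,
                         overstuffing=0.0):
    """Map a distance along the fiber to a distance along the facility."""
    fiber_offset = optical_path_distance - fiber_start
    # Overstuffed fiber is longer than the facility interval it covers,
    # so divide by (1 + overstuffing) to recover facility length.
    return facility_start + fiber_offset / (1.0 + overstuffing)

# Example: a segment starting 25 m along the fiber, mapped to facility
# datum 0 m, with 1.9% overstuffing: an optical path distance of 126.9 m
# corresponds to (126.9 - 25) / 1.019, i.e. about 100 m of facility distance.
md = to_facility_distance(126.9, fiber_start=25.0, facility_start=0.0,
                          overstuffing=0.019)
```

In a real optical path, the Facility Mapping contains a set of such segment mappings, one per grey “facility length” interval of the kind shown in Figure 19-6.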

With this arrangement, a distributed measurement can be received with all the “raw” values indexed by
absolute distance along the optical path, together with the mapping needed to re-index those
measurements against distance along the facility.

Fiber Defects
The purpose of the Defects object is to facilitate troubleshooting and data analysis after the
measurements have been collected. Figure 19-7 shows an example of a defect.
It is quite common for areas of the optical path to yield measurements that differ from those in
surrounding areas. Such differences are caused not by a true change in the environment but by an
artifact that causes the light scatter to return abnormally. By documenting the areas where these
anomalies are known to exist, anyone analyzing the measurements can promptly attribute the changes to
the physical installation rather than to unexpected changes in the environment.

Figure 19-7. Examples of defects found on a distributed measurement trace.

OTDR Acquisition
If an optical time domain reflectometry (OTDR) survey is carried out, then an OTDR Acquisition data
object can be created to report the results. If the OTDR is carried out for calibration purposes, then a
reference to this OTDR Acquisition data object can be added to the corresponding calibration data object
as shown in Figure 19-8.
The Fiber Optical Path top-level object can reference an OTDR Acquisition top-level object, which
contains details about the OTDR survey. The OTDR data is contained in a file referenced by a string for
the file path and name (Data In OTDR File), or in an image or plot of the data (OTDR Image File). There is
no Energistics file format standard for OTDR, so the nature of these files is out of scope.

[Figure content: UML diagram. The FiberOpticalPath top-level object references OTDRAcquisition (0..*), a top-level object whose elements are: Name (String64), ReasonForRun (OTDRReason [0..1]), DTimRun (TimeStamp), DataInOTDRFile (String64 [0..1]), OTDRImageFile (String64 [0..1]), OpticalPathDistanceStart (LengthMeasure), OpticalPathDistanceEnd (LengthMeasure), Direction (OTDRDirection), Wavelength (LengthMeasure). Enumerations: OTDRReason (dts, other, post-installation, pre-installation, run); OTDRDirection (backward, forward). OTDRAcquisition also references a FiberOTDRInstrumentBox (a specialization of AbstractDtsEquipment, DtsInstrumentBox::Instrument) and ProdmlCommon::BusinessAssociate in the roles InstrumentVendor, InstallingVendor, and MeasurementContact.]

Figure 19-8. OTDR class.

Conveyance
Conveyance refers to the physical means by which fiber is installed in or carried into the facility. The
categories of conveyance are listed in the following table:

Category of Conveyance: Description/Application

Permanent: Applies to permanent installations and contains details, such as fiber clamping or other
methods of attaching to tubulars. The fiber is encapsulated in a control line; however, it is not pumped
down the control line, in contrast with the following method.

Fiber in Control Line: Applies when fiber is pumped into the facility inside a control line (small-diameter
tube) and reports:
• The size and type of control line.
• Pumping details for the operation in which fiber is pumped into the control line.

Intervention: Applies to temporary installations (in a wellbore) and contains details such as the
intervention method by which fiber is conveyed into the well (on wireline, coiled tubing, etc.).

Historical Information
Fiber Optical Path uses a common Energistics XML pattern that separates the inventory of components
from the implemented network configuration, as described in Sections 19.3.2 and 19.3.3.
This pattern makes it possible to keep decommissioned items in the optical path description, which is
valuable because failure modes can then be tracked.
For example, Figure 19-9 shows an optical path spanning two wells and one pipeline. At some point, it is
assumed that the pipeline segment is decommissioned for some reason. This would give rise to:
• one set of inventory (including all the segments and components)
• two networks that include the pipeline and later another one without the pipeline components included
This way, distributed measurements through the whole life of the installed system can be associated with
the right optical path and facilities.

Figure 19-9. Keeping track of historical changes.

19.4 Defining the DTS Instrument Box


The DTS Instrument Box object represents all the hardware equipment located at the site which is
responsible for generating DTS measurements. The DTS Instrument Box is usually located next to the
facility element for which DTS surveys are taken (a well or a pipeline, for example). It consists of, among
other devices, an optoelectronic instrument to which the optical fiber is attached. This contains a
controllable light source, optical switches, photonic detection devices, and a reference temperature
chamber. In some cases there is a computer or server connected to the instrument that will be
responsible for capturing the measurements and initiating the data transmission.
Hardware setup can vary widely from one deployment to the next and also depends on the vendor and
make/model of the hardware used. The data schema was designed to let you capture the relevant
parameters of the installation without worrying about how the hardware is physically laid out. Attributes
available for the instrumentation cover areas such as:
• Make/model of the instrument
• Number of channels
• Software Version
• Factory Calibration information

• Warmup and Startup times


• Oven location
• Reference temperature
• Calibration parameters
The PRODML data schema also supports keeping track of inventory, including old equipment, as it is
possible to specify decommissioning details for every instrument box.

19.5 Defining the Distributed Sensing Installed System


The installed system concept is used by both the DTS and the DAS sub-domains of PRODML. This
Section 19.5 is therefore common to both.

Distributed Sensing Installed System Concept


NOTE: In this section, the DTS and DAS data models treat the Distributed Sensing Installed System
slightly differently:
• DTS uses an intermediate object called DTS Installed System. This contains pointers (data object
references) to the Optical Path and DTS Instrument Box data objects, which comprise the Installed
System. Calibrations may also be attached to the Installed System. The DTS Measurement data
object points to the DTS Installed System data object that generated the measurement.
• DAS does not use an intermediate data object; instead, a DAS Acquisition data object points to the
Optical Path and DAS Instrument Box data objects that generated the acquired data.
• In both cases, the important point is: there is a ‘pairing in time’ of an optical path and the
instrumentation.
− For permanent deployments (where the instrument box is fixed and not reused with other optical
paths), this pairing in time is permanent.
− For a drive-by instrument box, an installed system is formed every time one instrument box is
temporarily connected to an optical path in order to obtain measurements. As the instrument box
is moved from one location to the next, it employs different calibration settings and therefore
constitutes a different installation.

Distributed Sensing Installed System Examples


The installed system concept is highly relevant, because all distributed measurements are directly
associated with one installed system, and through that installed system the data consumer determines
the instrument box and optical path for which the measurement was taken. In Figure 19-10 to Figure
19-13, the installed system is represented by the curved boundary around the various combinations of
(DTS or DAS) Instrument Box and Optical Path making the measurements at any particular time.
A typical distributed sensing (DTS or DAS) installation consists of an instrument box and a fiber (optical
path). See Figure 19-10.

Figure 19-10. A simple single optical path and instrument box installation.

However, in real life there are multiple variations on how the equipment is installed and used. The first
variation can be found when one instrument box is used to take measurements from more than one
optical path. See Figure 19-11. Here, in the Optical Path Network object, it is possible to record the
channel to which each optical path is connected at the instrument box.

Figure 19-11. One instrument box being used with two optical paths comprises two installed systems (blue
and red). In DTS each would be one installed system, both pointing to the same instrument box. In DAS, the
DAS acquisitions would point to the appropriate optical path, and both would point to the same instrument
box.

A second example is found when the same instrument box is moved around to different locations and
used to take measurements from multiple optical paths. This is sometimes known as a ‘drive-by’
instrument box. See Figure 19-12.

Figure 19-12. Drive-by instrument box: a box is taken from well to well and is connected only occasionally to
each optical path. Each such combination is an installed system.

From the hardware point of view, the instrument box does not change, and neither does the optical path,
so these two objects previously introduced are sufficient to describe these configurations. Things get
more interesting when we start dealing with the measurements obtained, which depend on the situation.
As an instrument box is moved from one optical path to another, it uses different calibration settings and
may generate different diagnostics information. When analyzing a distributed measurement, it is very
important that the data consumer knows exactly which instrument box took the measurement and to
which optical path this measurement relates. Also, it is conceivable that at some point the instrument box
is upgraded for a newer model and, therefore, the new measurements are now taken with a different
instrument box for the same optical path.

The most extreme example of the temporary use of a distributed installed system is when an instrument
box is used for a logging operation, in which case both the instrument box installation and the optical path
are temporary. See Figure 19-13.

Figure 19-13. Instrument box and portable fiber used for temporary logging operation.

19.6 DTS Measurements


As mentioned previously, DTS measurements are obtained from a DTS installed system, and can contain
both the ‘raw’ measurement and the interpreted temperature log curves (Figure 19-14). Both types are
actually optional, depending on the situation and the requirements for that particular installation. DTS
Measurement is another top-level object.
Every DTS Measurement object contains a mandatory header section that contains metadata about the
measurement sets contained. Metadata includes elements such as:
• Timestamp, indicating when the measurement began.
• Timestamp, indicating when the measurement finished.
• Different flags and tag entries.
• Pointers to other measurements that may be related, e.g., to the parent measurement data for
interpreted data.

Figure 19-14. Relationship between the Measured Trace Set (blue) and the Interpreted Log (red).

The actual data (index of distance/length and the different measured or derived channels) are stored in
WITSML objects called ChannelSets. For more information about ChannelSets, see Energistics Online:
http://docs.energistics.org/#WITSML/WITSML_TOPICS/WITSML-000-056-0-C-sv2000.html
Examples of WITSML ChannelSets are shown in the next chapter, as part of the worked examples.
There are two types of DTS measurement data:
• Measured Traces: the “raw” measurement, indexed along the optical path distance, and as measured
by the instrument box.
• Interpreted Logs: the “adjusted” (i.e., calibrated, re-sampled, smoothed, etc.) temperature indexed
against the facility length of the physical facility (well, pipe, etc.) being measured. This may be
considered the “product” of DTS measurement plus analysis/processing.
The mnemonics expected for these two types of DTS measurement data are listed below.

Measurement Trace (mnemonic: expected UoM):
• FiberDistance (index): m
• Antistokes: mW
• Stokes: mW
• ReverseAntiStokes: mW
• ReverseStokes: mW
• Rayleigh1: mW
• Rayleigh2: mW
• BrillouinFrequency: GHz
• BrillouinAmplitude: mW
• Loss: dB/km
• LossRatio: dB/km
• CumulativeExcessLoss: dB
• FrequencyQualityMeasure: dimensionless
• MeasurementUncertainty: degC
• OpticalPathTemperature (assumed to be adjusted to the correct temperature): degC
• UncalibratedTemperature1: degC
• UncalibratedTemperature2: degC

Interpreted Log (mnemonic: expected UoM):
• FacilityDistance (index): m
• AdjustedTemperature: degC

Note that the index is mandatory for these types of data, but the other channels are not. In particular, the
measurement trace set of mnemonics is suitable for either Raman- or Brillouin-type DTS systems, and
no single system will generate all of the measurement traces.
BUSINESS RULE: These mnemonics must be observed but are not enforced in the schemas.
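Because the schemas do not enforce this rule, a data consumer may choose to validate channels itself. The following Python sketch is illustrative only; the mnemonic/UoM pairs come from the conventions above, while the validation logic and names are assumptions.

```python
# Illustrative consumer-side check (the schemas do not enforce the rule):
# verify that DTS measurement-trace channels use the expected mnemonics
# and units of measure. The function name and dict layout are hypothetical.

EXPECTED_TRACE_UOM = {
    "FiberDistance": "m",            # index channel (mandatory)
    "Antistokes": "mW",
    "Stokes": "mW",
    "ReverseAntiStokes": "mW",
    "ReverseStokes": "mW",
    "Rayleigh1": "mW",
    "Rayleigh2": "mW",
    "BrillouinFrequency": "GHz",
    "BrillouinAmplitude": "mW",
    "Loss": "dB/km",
    "LossRatio": "dB/km",
    "CumulativeExcessLoss": "dB",
    "FrequencyQualityMeasure": "dimensionless",
    "MeasurementUncertainty": "degC",
    "OpticalPathTemperature": "degC",
    "UncalibratedTemperature1": "degC",
    "UncalibratedTemperature2": "degC",
}

def check_trace_channels(channels):
    """Return a list of problems for a {mnemonic: uom} channel mapping."""
    problems = []
    if "FiberDistance" not in channels:
        problems.append("missing mandatory index channel FiberDistance")
    for mnemonic, uom in channels.items():
        expected = EXPECTED_TRACE_UOM.get(mnemonic)
        if expected is None:
            problems.append(f"unexpected mnemonic: {mnemonic}")
        elif uom != expected:
            problems.append(f"{mnemonic}: expected UoM {expected}, got {uom}")
    return problems
```

For example, `check_trace_channels({"FiberDistance": "m", "Stokes": "mW"})` returns an empty list, while omitting the index or using a wrong unit produces a problem entry per violation.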

20 DTS Worked Examples


This chapter reviews XML code examples for implementing the DTS data objects for each of the use
cases described in Chapter 19. The examples are provided in the PRODML installation in the folder
energyml\data\prodml\v2.x\doc\examples\DTS.

20.1 Use Case 1: Represent an Optical Fiber Installation


This section contains examples of XML code for representing a common style of fiber installation
(Figure 20-1). There are two fiber segments of 25 m and 492.43 m, giving a total optical path distance of 517.43 m.
There is overstuffing of 1.9% in the downhole fiber segment, giving it a length along the facility of
483.25m. Two facility mappings are made: 1) the surface cable, which is one to one, and 2) the downhole
cable mapped to the wellbore, which includes the effects of the offset to the wellhead and of the
overstuffing.

Figure 20-1. Worked example fiber optical path showing fiber segments, optical path lengths and facility
lengths
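The lengths quoted above are related by simple arithmetic, sketched here in Python (the variable names are illustrative):

```python
# Worked-example lengths: two fiber segments, with 1.9% overstuffing
# applied to the downhole segment only.

surface_segment = 25.00      # m of fiber, mapped one-to-one to the surface cable
downhole_segment = 492.43    # m of fiber along the wellbore
overstuffing = 0.019         # 1.9% extra fiber per unit of facility length

total_optical_path = surface_segment + downhole_segment           # 517.43 m
downhole_facility_length = downhole_segment / (1 + overstuffing)  # ~483.25 m
```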

For the optical path shown in Figure 20-1, the key sections are as shown below.
First, the inventory lists out the components comprising the optical path (Figure 20-2). Note that each
component has a name and a type (and optionally additional data), and also a uid (red box) which is used
to reference this component from the network (see below).

Figure 20-2. Inventory section of the Optical Path.

Next, the Network section shows how these components listed in Inventory are connected. (For details on
how this is done, see Section 19.3.3, Figure 19-3.) The network contains Units which are a nominal
representation of an item of inventory ("facility" is the Energistics terminology for general hardware) within
the network. Figure 20-3 shows a network Unit (blue box), which uses a facility reference (uidRef) back to
the uid of the component in Inventory (red box).

Figure 20-3. Network section of Optical Path showing reference to Inventory.

The unit has one or more Ports which are where connection occurs. Ports can be inlet or outlet (taking
direction from the lightbox). Ports contain a Connected Node element, and where two ports share the
same name and uid of Node, this defines their connection. See Figure 20-4, in which the outlet port (i.e.,
“far end”) of the surface fiber (in blue box), can be seen to be connected to the inlet port (i.e., “near end”)
of the surface connector (in red box). Both have a connected node called Surface Connector 1 with uid
CSC1, which defines the connection between the surface cable and this wellhead connector.

Figure 20-4. Optical path network showing Node connecting Ports of two components.
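This port/node convention can be resolved programmatically: two ports are connected exactly when they reference the same Connected Node uid. The Python sketch below illustrates the idea; the data structure and function are assumptions, not a PRODML API.

```python
from collections import defaultdict

# Illustrative sketch: infer unit-to-unit connectivity from the Product
# Flow Model convention that two ports are connected when they reference
# the same Connected Node uid. The input layout is hypothetical.

def connections(units):
    """units: {unit_name: [(port_name, connected_node_uid), ...]}
    Returns {node_uid: [(unit_name, port_name), (unit_name, port_name)]}
    for every node referenced by exactly two ports."""
    by_node = defaultdict(list)
    for unit, ports in units.items():
        for port, node in ports:
            by_node[node].append((unit, port))
    # A connection exists wherever a node is shared by two ports.
    return {node: ends for node, ends in by_node.items() if len(ends) == 2}

network = {
    "surface fiber": [("Port 1", "NODE0"), ("Port 2", "CSC1")],
    "surface connector 1": [("Port 1", "CSC1"), ("Port 2", "NODE2")],
}
links = connections(network)
# links["CSC1"] pairs the outlet of the surface fiber with the inlet of
# the connector, matching the Surface Connector 1 node (uid CSC1) above.
```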

In Figure 20-5, we see the full inventory and network of the worked example with the links and
connections all shown.

Figure 20-5. Worked example optical path network showing how connectivity is defined.

The facility mapping section is where the fiber optical path distances are mapped onto the facility lengths
of physical equipment. Here, two mappings are shown: 1) a one-to-one mapping to the surface cable from
the surface segment of the optical path and 2) a mapping to the wellbore, which includes the offset and
the overstuffing, as shown in Figure 20-1. Figure 20-6 shows the two mappings. Note that the
FiberFacilityWell type of mapping (the second one) includes a well datum and a data object reference to
the wellbore concerned.

Figure 20-6. Facility Mapping section of the Optical Path.

Figure 20-7 shows some of the other sections of the Optical Path. Defects can be recorded (for the
example, see Figure 20-1 and, for real-world defect examples, see Figure 19-7). OTDR surveys to verify
the optical path quality can be referenced. The data for OTDR are not in an Energistics standard because
OTDR is a generic fiber optical industry measurement with its own file formats; an external file is
referenced. Other data can be added as seen, such as vendor, the facility identifier, etc.

Figure 20-7. Defect, OTDR, and other sections of Optical Path.

Instrument Box
Declaring a DTS Instrument Box is also very straightforward (Figure 20-8). Required elements include
unique identifier, a name, and the type. All other attributes are optional.

Figure 20-8. DTS Instrument Box example.

Installed System
Once both the optical path and instrument box are declared, they need to be ‘tied together’ as a DTS
installed system. All measurements will be generated by an installed system, not an individual instrument
box or an optical path. The DTS Installed System is a simple object to do this, as seen in Figure 20-9. It
additionally allows for a calibration to be associated with the system of path + box as a combination.

[Figure callouts: reference to the optical path which made this measurement; reference to the instrument box which made this measurement.]

Figure 20-9. DTS Installed System.

A ‘permanent’ DTS installation (i.e., the lightbox is fixed and so is the optical path) will only contain a
<dateMin> element with no <dateMax>, to indicate that this installation is still active.
After these three entities are declared, then captured measurements can be made and associated with
the installed system above.
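The ‘tying together’ described above can be pictured as a small data structure holding the two references and a validity window. This is a sketch only; the class and field names paraphrase the dateMin/dateMax elements mentioned below and are not the PRODML schema.

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative sketch of the DTS Installed System pairing: references to
# one optical path and one instrument box, valid over a time window.
# A permanent installation carries only date_min (date_max stays None).

@dataclass
class InstalledSystem:
    optical_path_uid: str
    instrument_box_uid: str
    date_min: str                  # ISO 8601 timestamp
    date_max: Optional[str] = None

    def is_active(self) -> bool:
        """A still-active installation has no end date."""
        return self.date_max is None

perm = InstalledSystem("FIBER1", "BOX1", "2022-05-16T00:00:00Z")
```

A drive-by deployment would create one such pairing per temporary connection, each with both a start and an end date.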

It is strongly recommended that the installed system object also include calibration information, so that
any measurements obtained from the installed system can be compared against the calibration
parameters to help determine whether any further fine tuning is required for the lightbox.

20.2 Use Case 2: Capturing DTS measurements for Transport and Storage
As stated previously, there are two types of measurements that can be obtained from a DTS installed
system: 1) the actual ‘raw’ DTS measurement along the length of the optical path and 2) the interpretation
log, which has a calculated temperature along the length of the facility only. This section contains
examples of XML for representing a DTS Measurement obtained from the installation described in the
previous use case.
In the previous use case, we assume the well is 483.25m deep and the entire optical path is 517.43m
long. Therefore, it is expected to have a measurement from distance 0 to distance 517.43m and an
interpretation log from distance 25m to distance 470m (the sample closest to the last measurement at
505m, see Figure 20-1).
The ‘raw’ measurement traces with stokes, antistokes, and an uncalibrated temperature would look as
shown in Figure 20-10. This is the example file DTS Measurement.xml. Note that there can be many
traces per DTS Measurement XML document. Each has a uid (red box). Optionally, a trace can have a
parentMeasurementID attribute, which contains the uid of another trace that may be the parent of this
trace (in the sense that this trace may have been depth shifted or otherwise derived from the parent).

[Figure callouts: reference to the Installed System which made this measurement; reference to the Channel Set containing the data for this measurement.]

Figure 20-10. DTS Measurement with measurement traces data.

The DTS Measurement object does not contain the actual numerical data; this is instead in a WITSML
object called Channel Set (for more details and references, see Section 19.6). An extract of the channel
set object corresponding to the DTS measured traces is shown in Figure 20-11. This shows the Index
element, which describes the index, and then the three channels listed above. The first of these is shown
expanded (antistokes). This is the example file ChannelSetContainingDtsMeasurement.xml.

Figure 20-11. Channel Set for DTS Measurement traces.

The data then follows, as shown in Figure 20-12. The comma-separated values in the data section
correspond to the channels defined previously in this object.

Figure 20-12. Data section of DTS Measurement for measurement traces.
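A consumer reading such a data section might split the comma-separated rows back into per-channel arrays. Below is a minimal Python sketch, assuming the columns follow the declared channel order with the index first; the values and function name are hypothetical.

```python
import csv
from io import StringIO

# Illustrative sketch: parse comma-separated Channel Set data rows into
# per-channel value lists, assuming columns follow the declared channel
# order with the index (FiberDistance) first. Values are hypothetical.

def parse_rows(text, mnemonics):
    """Return {mnemonic: [values]} for comma-separated data rows."""
    columns = {m: [] for m in mnemonics}
    for row in csv.reader(StringIO(text)):
        for mnemonic, value in zip(mnemonics, row):
            columns[mnemonic].append(float(value))
    return columns

data = "0.0,1.21,2.02\n1.01,1.19,2.01\n"
cols = parse_rows(data, ["FiberDistance", "Antistokes", "Stokes"])
# cols["FiberDistance"] now holds the index values, one per data row.
```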

Similarly, an interpreted log is shown in the example file DTS Measurement with Log.xml. This replaces
the measurement traces with an interpreted log but is otherwise similar. The data runs from 5m to 470m
of facility length (i.e., measured depth in this wellbore), see Figure 20-1. The Channel Set file is DTS Log
Interpretation In ChannelSet.xml. See Figure 20-13, which shows part of the DTS measurement file. Note
the two referenced Channel Sets (red and blue boxes). Note also that InterpretationData has a mandatory
measurement reference attribute, which contains the uid of the measurement trace from which this
interpretation log was derived (green boxes).

Figure 20-13. DTS measurement with measurement trace and interpretation log.

20.3 Use Case 3: Manipulation of DTS-derived Temperature Log Curves


This section contains example XML that represents situations in which temperature logs are modified by
end users after DTS measurements or interpretation logs have been obtained and stored in a repository.
These types of modifications are common where additional workflows enable subject matter experts to
apply different mathematical algorithms to the data (for example, de-noising the temperature values) and
to store the results for sharing with other people. When changing the values of interpreted logs, it is very
important that the original log is never changed. It is also important that any modified log is connected to
the original log from which the calculations were performed (i.e., log ‘versioning’). The PRODML schema
can accommodate all these scenarios.
Furthermore, there will be cases where several versions of a temperature log have been generated, but
there is one version that is ‘preferred’ for use towards business decisions and deeper analysis. The
PRODML schema provides a special tag that allows you to flag one of the log versions as the preferred
one. See Figure 20-14, which shows the example file DTS Interpretation Log With Versions.xml.
This contains three versions of interpretation log data with its original measurement trace. The tag which
identifies which is the preferred interpretation log (ID3) that should be used for business decisions is
shown in a green box. All three interpretation data instances refer to the same measurement trace (red
boxes). The example also shows how interpretation logs can show that they are derived from a parent
interpretation log (blue box). The interpretation processing type (purple box) indicates what was
processed to generate this instance. Here, ID1 was temperature shifted; ID2 took ID1 and averaged it;
and ID3 took ID2 and denormalized it. Note: in the example, all three interpretation logs point to the same
Channel Set data.

Figure 20-14. Multiple DTS interpretation logs in one transfer, showing preferred log, parentage, and
processing type.
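The versioning pattern in this example can be walked programmatically: starting from the preferred log, follow the parent references back to the first version. The Python sketch below uses the uids and processing types from the example; the dictionary layout and function are assumptions for illustration.

```python
# Illustrative sketch of the interpretation-log versioning pattern: each
# log optionally references a parent interpretation log, and one version
# is flagged as preferred. The dict layout is hypothetical.

logs = {
    "ID1": {"parent": None,  "processing": "temperature shifted", "preferred": False},
    "ID2": {"parent": "ID1", "processing": "averaged",            "preferred": False},
    "ID3": {"parent": "ID2", "processing": "denormalized",        "preferred": True},
}

def preferred_lineage(logs):
    """Return the processing history of the preferred log, oldest first."""
    uid = next(u for u, log in logs.items() if log["preferred"])
    chain = []
    while uid is not None:
        chain.append((uid, logs[uid]["processing"]))
        uid = logs[uid]["parent"]
    return list(reversed(chain))
```

Walking the chain for ID3 reproduces the history described above: ID1 (temperature shifted), then ID2 (averaged), then ID3 (denormalized).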

21 Appendix for DTS: Fiber Optic Technology and DTS Measurements
This appendix provides details on what DTS is, implications for oil and gas wells, and some of the
challenges that the DTS data object was designed to address.

21.1 Principles
When light is transmitted along an optic fiber, the transmitted light collides with atoms in the lattice
structure of the fiber, causing them to emit bursts of light at frequencies slightly different from the
transmitted radiation; these propagate back along the fiber and can be detected at its end. This is known
as backscattering and is shown in Figure 21-1.

Figure 21-1. Principle of DTS Measurement.

Optic fiber thermometry depends upon the phenomenon that frequency shifts occur to bands both below
and above the transmitted frequency. Furthermore, in the case of the Raman-shifted bands, the intensity
of the lower-frequency (Stokes) band is only weakly dependent on temperature, while the intensity of the
higher-frequency (anti-Stokes) band is strongly dependent upon temperature. See Figure 21-2. Less
commonly, the Brillouin bands are used.

Figure 21-2. Backscatter spectrum, showing Rayleigh, Brillouin, and Raman bands.

The ratio of these intensities is related to the temperature of the optic fiber at the site where the
backscattering occurs, i.e., all along the fiber. These intensities are averaged over regular sampling
intervals. See Figure 21-3. The backscatter intensities of individual pulses are very weak, necessitating
statistical stacking of thousands of backscattered light pulses.

Figure 21-3. Illustration of temperature data which is averaged over fixed Sample Intervals.
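For Raman-based systems, the temperature dependence of the anti-Stokes/Stokes ratio follows the standard relation R(T) ∝ exp(−hcΔν̃/kT), where Δν̃ is the Raman shift. The Python sketch below is a simplified illustration that ignores differential attenuation along the fiber; the ~440 cm⁻¹ shift for silica and the unit prefactor are assumed typical values, not taken from this guide.

```python
import math

# Simplified Raman DTS relation (ignores differential attenuation along
# the fiber): the anti-Stokes/Stokes intensity ratio varies with absolute
# temperature T as R(T) = C * exp(-h*c*dv/(k*T)), where dv is the Raman
# shift in wavenumbers and C absorbs wavelength-dependent factors.

H = 6.62607015e-34       # Planck constant, J s
C_LIGHT = 2.99792458e8   # speed of light, m/s
K_B = 1.380649e-23       # Boltzmann constant, J/K

def anti_stokes_ratio(temperature_k, raman_shift_per_m=44000.0, prefactor=1.0):
    """Anti-Stokes/Stokes ratio at temperature_k.

    raman_shift_per_m defaults to 44000 1/m (~440 cm^-1, typical of silica).
    """
    return prefactor * math.exp(
        -H * C_LIGHT * raman_shift_per_m / (K_B * temperature_k))

# The ratio increases monotonically with temperature, which is what makes
# it usable as a thermometer once calibrated against a reference oven.
```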

21.2 Equipment: Instrument Box and Fiber


Several organizations manufacture and/or operate DTS equipment on a service basis for the E&P
industry. The optic fibers are mass-manufactured by a number of companies using a variety of materials
and designs for different applications. The instrument boxes comprise a laser light source, an oven
which provides a reference temperature, and photo-electric sensors which capture the backscattered
light, along with digital circuitry which both controls the laser transmission and analyzes the frequency and
intensity of the backscatter. Instrument boxes and fibers can readily be interchanged.
The transmission characteristics of optic fibers are dependent upon several factors in addition to
temperature, which include their construction, physical deformation, and chemical (particularly aqueous)
contamination. For these reasons, calibrations of the equipment are routinely necessary to monitor the
transmission characteristics.

21.3 Installation in Wellbores


The location of the source of the backscattered light is determined, in terms of distance along the fiber,
by the two-way transmission time and the velocity of light in the transmitting medium. It is therefore
necessary to know how length along the fiber correlates to measured depth along the wellbore in which
the fiber is installed. This information includes knowledge of the positions in the well and along the
fiber where connections through casing shoes, packers, and other features occur. All of these can be
assumed to remain constant between initial installation and mechanical interventions in the wellbore.
There are several fiber configurations deployed in wellbores, illustrated in Figure 21-4, which enable
various calibrations to be made to the temperature data.

Figure 21-4. DTS fiber configuration patterns.

21.4 Calibrations
Two kinds of calibration are applied to DTS data. These are:
• Factors pertaining to the Instrument Box, such as the oven reference temperature and the number of
launches stacked to obtain a temperature value at the required resolution.
• Factors related to the fiber, including the (practically linear) differential attenuation along the fiber, and
the refractive index of the fiber.
Where the fiber may be installed in a well bore for several years, it is necessary to recalibrate these
factors periodically in order to be able to compare reservoir temperatures over these periods.
The Optical Time Domain Reflectometry (OTDR) technique provides a routine means of examining the
condition of the fiber. OTDR consists of observing the attenuation of backscattered light in the frequency
band of the transmitted Rayleigh waves. An OTDR profile is illustrated in Figure 21-5.

Figure 21-5. Optical Time Domain Reflectometry (OTDR) Profile.

21.5 Measurements
The measurements produced by the Instrument Box for a temperature profile typically comprise a set of
records containing contextual data followed by the temperature records. The contextual data identify the
well, the equipment, the date and time, and various calibration data. The temperature records contain
the length along the fiber of each temperature sample, the computed temperature, measurements of
the Stokes and anti-Stokes intensities, and other measured curves as appropriate to the Instrument Box
concerned.


Part VI: Distributed Acoustic Sensing (DAS)


Part VI contains Chapters 22 through 27 which explain the set of PRODML data objects for distributed
acoustic sensing (DAS).

Acknowledgement
Special thanks to the following companies and the individual contributors from those companies for their
work in designing, developing and delivering the DAS data object: Shell, Fotech, OptaSense, BP, Baker
Hughes, Chevron, Pinnacle/Halliburton, Schlumberger, Silixa, Weatherford, Ziebel.


22 Introduction to DAS
This chapter serves as an overview and executive summary of both distributed acoustic sensing (DAS)
and the DAS data objects.

22.1 Overview of DAS


Distributed acoustic sensing (DAS) is a fiber optic technology used for oil and gas surveillance
applications, which include wellbore, pipeline and surface facility monitoring. DAS datasets are used as
input to many production surveillance software applications. Examples of downhole applications include
monitoring of flow, hydraulic fractures, and seismic surveys. Examples of surface applications include
monitoring of pipelines, intrusion detection, and equipment vibration.
At a high level, here’s how DAS works (Figure 22-1): Oil and gas assets, such as wells and pipelines, are
outfitted with optical fibers (fiber optic cable). Acoustic sources create small vibrations of the fiber, and
DAS interrogator systems measure tiny variations in the back-scattered light caused by these vibrations—
which essentially turns the fiber itself into a sensitive distributed sensor that can measure acoustic,
temperature, and strain variations over distances of many kilometers.
The data collected by the DAS interrogator unit is often referred to as raw data and is further processed;
this processing often includes data reduction, filtering of frequency bands of interest, or transformations
into spectrum data. Processed datasets that are commonly provided to customers are frequency band
extracted (FBE) data and Fourier transformed spectrum datasets. Surveillance software applications
derive information from both raw and processed datasets.

Figure 22-1. DAS Overview: Oil and gas assets are outfitted with optical fibers, which are vibrated by an
acoustic source. DAS interrogator systems measure tiny variations in the back-scattered light, essentially
turning the fiber itself into a sensitive distributed sensor that can measure acoustic, temperature, and strain
variations over distances of many kilometers. DAS data acquisition raw and processed (frequency band and
spectrum) data are used for input to many production surveillance software applications.

For a quick overview or to be able to make a presentation to colleagues, see the slide set: Worked
Example DAS.pptx which is provided in the folder: energyml\data\prodml\v2.x\doc in the Energistics
downloaded files.


22.2 The Business Case


DAS applications for oil and gas have seen rapid development over the past 5 years by both service
companies and operators. Several commercial applications for surveillance based on DAS measurements
have been developed and are being offered on the market today, including a large range of wellbore
applications such as noise monitoring, hydraulic fracturing and perforation monitoring, flow profiling for
several well types, and vertical seismic profiling. Furthermore, several pipeline monitoring applications
were under development when this document was being written.

Benefits of DAS-based Systems


In many cases, fiber-based DAS applications are replacing conventional downhole measurements
performed with wireline tools or permanently installed downhole sensors, because they can often provide
data more frequently, at higher resolution, over longer distances, in harsh environments, and
sometimes at significantly lower cost than conventional approaches.

Challenges of DAS
The challenge for DAS is managing the huge volumes of data produced: typically terabytes
of raw data and hundreds of gigabytes of frequency-filtered data. Because of the large data volumes
and the lack of standards, industry workers spend significant time manually reformatting data, which is
costly and error prone. These challenges hamper uptake of commercial DAS applications and the
development of new DAS applications.
For these reasons, a group of operators and service providers proposed that Energistics define a shared
industry exchange format for DAS data types. Energistics had previously done similar work for distributed
temperature sensing (DTS); because of the similarities, the DAS design work is based on the DTS
design.

22.3 Scope of DAS Data Objects


Data in scope for the current version of the DAS data object includes:
• A description of the DAS optical path and instrument box (leveraging the existing PRODML DTS
(distributed temperature sensing) data object).
• Raw and processed DAS data; processed data types included are frequency band extracted (FBE)
and Fourier-transformed spectrum data.
Out of scope are data formats for specific applications that derive end products from the raw, frequency
band or spectrum DAS data; examples are seismic traces, leak detection events, and flow profiles. For
many of these applications, industry standards already exist (SEG standards for seismic, LAS for well log
data, PRODML and WITSML for DTS, etc.).

DAS Data Objects


The DAS data objects were developed to provide an industry-defined, vendor-neutral format for
exchanging the large volumes of data associated with DAS. They build on earlier work to define data-
exchange standards for data associated with distributed temperature sensing (DTS), which is also part
of PRODML.
Like all Energistics standards, DAS is implemented in software using standards-based technology, such
as XML and HDF5. Developers implement the DAS data object so that software can read and write the
DAS format, thereby allowing software packages to “exchange” data. For more information on technology
used, see Section 24.1.2.

22.4 DAS Use Cases and Workflow


The current version of the DAS data object supports exchanging data between the business process use
cases that are listed below and described in Section 27.2.
• Define DAS acquisition requirements with client
• Represent fiber optic installation


• Configure DAS equipment for acquisition


• Perform DAS data acquisition
• Post-process DAS data

DAS Workflow
Figure 22-2 shows a very high-level DAS workflow. Assets such as wells and pipelines are fitted with
fiber optic cables. DAS interrogator systems are connected to these fibers and collect huge volumes of
raw acoustic data. The raw data is collected in the field, where pre-processing often takes place to extract
relevant data and reduce the data volume. The raw and/or extracted datasets are then transferred to the
office, where they are further processed and stored. Oil and gas software uses both the raw and
processed data as inputs for applications such as surveillance.
For a more detailed workflow, see Section 23.2.

Figure 22-2. High-level DAS workflow, from the field to the office.


23 DAS: Key Concepts, Workflow, and Use Cases
This chapter provides:
• Useful concepts and terminology to help you understand DAS technology and processes
• An overview of the DAS workflow
Additional information:
• For business process use cases that are supported by the DAS data object, see Section 27.2.
• For information on the DAS data model, see Chapter 24.

23.1 DAS Measurement Concepts

Terminology
Because DAS is an emerging technology with new and specialized terminology, definitions are provided
in this document.
SEAFOM (an international joint industry forum aimed at promoting the growth of fiber optic monitoring
system installations in the upstream oil and gas industry) and the Energistics DAS work group have
collaborated to define key terms for DAS. A few key terms are defined here for convenience. For a
detailed list of definitions, see Section 27.1.

• DAS Job: A set of one or more DAS acquisitions acquired in a defined timeframe using a common
optical path and DAS instrument box.
• DAS Acquisition: The collection of DAS data acquired during the DAS survey.
• DAS Instrument Box: The DAS measurement instrument; sometimes called the "interrogator unit."
• Optical Path: A series of fibers, connectors, etc. that together form the path for the light pulse emitted
from the interrogator unit.
• Interrogation Rate (or Pulse Rate): The rate at which the DAS instrument box interrogates the fiber
sensor. For most instruments, this is informally known as the pulse rate.
• Output Data Rate: The rate at which the measurement system provides output data for all "loci"
(spatial samples). This is typically equal to the interrogation rate/pulse rate or an integer fraction
thereof.
• Trace (or Scan): The array of sensor values for all loci interrogated along the fiber for a single "pulse."
• Locus (plural Loci): A particular location that indicates a spatial sample point along the sensing fiber
at which a "time series" of acoustic measurements is made.
• Measurement Start Time: The time at the beginning of a data "sample" in a "time series." This is
typically a GPS-locked time measurement.
• Sample: A single measurement, i.e., the acoustic signal value at a single locus for a single trace.
• Spatial Resolution: The ability of the measurement system to discriminate signals that are spatially
separated. It should not be confused with spatial sampling interval.
• Spatial Sampling Interval: The separation between two consecutive "spatial sample" points on the
fiber at which the signal is measured. It should not be confused with spatial resolution.
• DAS Raw Data: The DAS data exchange format that describes unprocessed DAS data provided by
the vendor to a customer.
• DAS FBE Data: A category of "processed" data; the DAS data exchange format for frequency band
extracted (FBE) DAS data provided by the vendor to a customer. This data type describes a dataset
that contains one or more frequency-band-filtered time series of a "raw" DAS dataset, for example the
RMS of a band-pass-filtered time series or the level of a frequency spectrum.
• DAS Spectrum Data: A category of "processed" data; the DAS data exchange format for frequency
spectrum information extracted from a (sub)set of "raw" DAS data.

Principles: Pulse and Backscatter of Light


An optical time-domain reflectometer (OTDR) is an optoelectronic instrument used to characterize an
optical fiber. An OTDR in its most basic form uses a pulse of optical radiation to interrogate a waveguide,
typically an optical fiber.
As the pulse propagates along the fiber, discontinuities formed by tiny density fluctuations (frozen into the
fabric of the fiber at the time of manufacture) cause a small fraction of the incident light to scatter. This
process is usually referred to as Rayleigh scattering.
This scattered light is then recaptured by the fiber and guided back toward the starting end where it can
be detected. By measuring properties of the scattered light as a function of time, properties of the
waveguide or fiber as a function of distance can be inferred.
The simplest way to understand an OTDR is as a one-dimensional (1D) radar operating at optical wavelengths.
All OTDR systems share the following properties:
• Detection time maps linearly to location along the fiber.
• Properties of the scattered light are a function of the local properties of the waveguide.
By tailoring the properties of the launched pulse and the processing applied, different information about
the waveguide/fiber being interrogated can be inferred. In the case of coherent OTDR, the fiber is
interrogated by a pulse of coherent light of a finite length.
As stated above, this pulse propagates along the fiber and a small fraction is scattered. The resulting
intensity of the scattered light as a function of time and hence distance is determined by the coherent sum
of scatter from typically millions of scatter sites. The distribution of the position and magnitude of the
scatter sites are random and hence the intensity of the scattered field is also random as a function of
distance.
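This coherent-sum behavior can be imitated with a toy model. This is a sketch only, with a thousand random phasors standing in for the millions of real scatter sites: the resulting intensity is random in value, yet exactly repeatable while the scatter sites stay frozen in place.

```python
# Toy model of coherent Rayleigh backscatter: the intensity at one location
# is the squared magnitude of a sum of random phasors (one per scatter site).
import cmath
import random

def scatter_intensity(seed: int, num_sites: int = 1000) -> float:
    """Coherent sum of random-amplitude, random-phase scattered waves."""
    rng = random.Random(seed)  # the seed plays the role of the frozen-in sites
    field = sum(rng.random() * cmath.exp(1j * rng.uniform(0.0, 2.0 * cmath.pi))
                for _ in range(num_sites))
    return abs(field) ** 2

# Undisturbed fiber: identical scatter sites give an identical pattern.
assert scatter_intensity(seed=1) == scatter_intensity(seed=1)
# Different locations (different sites) give uncorrelated, random values.
print(scatter_intensity(seed=1) != scatter_intensity(seed=2))  # True
```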
From this point on, different DAS technologies may use different properties of the scattered light to infer
the action of an acoustic disturbance. For example, systems may measure the absolute phase of the
scattered light or they may measure the differential phase; however, for illustration of how the OTDR
principle may form a DAS system and for simplicity, the following example describes an intensity-based,
single-pulse DAS system.

23.1.2.1 Example: Intensity-Based, Single-Pulse DAS system


Figure 23-1 (and the linked animation referenced below) shows the scattered intensity as a function of
time from a fiber illuminated by a single pulse of coherent light. The important thing to note for the
application of DAS is that, if the fiber is unperturbed, the scatter pattern generated by the fiber remains
constant on successive pulses.


For a simple but effective animation that better shows this concept, see the PowerPoint slides included in
the DAS Example folder (slide 9) of the PRODML download. To see the animation, run the presentation
in Slide Show mode or see Energistics Online:
http://docs.energistics.org/#PRODML/PRODML_TOPICS/PRO-DAS-000-014-0-C-sv2000.html
In coherent OTDR, a launch pulse travels along the fiber, scattering from tiny imperfections inherent to the
fiber. The scattered light is detected and used to create a signal for each location along the fiber, known
as a scatter pattern. Because the intensity of this signal at each location is determined by the coherent
addition of millions of waves scattered from millions of independent scatter sites, the intensity varies
randomly as a function of distance. However, upon successive measurements, this pattern nominally
remains constant.
In Figure 23-1, the pair of plots on the left is a “time zero” when the pulse (blue) is sent down the fiber; the
pair of images on the right is some time later as the pulse nears the end of the fiber, travelling from left to
right, whilst the back scattered light from each point along the fiber is making its way back up the fiber
towards the detector, from right to left (red).

(Image courtesy of OptaSense.)


Figure 23-1. The first two plots (left) are a “time zero” when the pulse (blue) is sent down the fiber; the
second pair (right) is some time later, as the pulse nears the end of the fiber, travelling from left to right,
while the back scattered light from each point along the fiber is making its way back up the fiber towards the
detector, from right to left (red). See the animation in the PowerPoint example; run it in Slide Show mode or
see Energistics Online: http://docs.energistics.org/#PRODML/PRODML_TOPICS/PRO-DAS-000-014-0-C-
sv2000.html

Measurements: Pulse Rate and Sample Locations


A typical DAS measurement cycle consists of the fiber being interrogated by a sequence of pulses at a
repetition rate arranged such that the scatter from the distal end of the fiber has time to return along the
length of the fiber and be detected before the next pulse is launched. Such an interrogation is often
referred to as a ping, in the same sense as that used in sonar.
In Figure 23-2 (and the linked animation referenced below), the coherent scatter pattern remains
constant on successive pings until the fiber is disturbed. Vibration from an acoustic source modulates the
fiber, altering the physical length of the waveguide and, in turn, the relative positions of the scatter sites.
The coherent sum of the scatter sites in the disturbed region is therefore also modified, which affects the
scatter pattern. Recording the scatter pattern over several pulses makes it possible,
with correct processing, to determine a history of the acoustic source as a function of position.


For a simple but effective animation that better shows this concept, see the PowerPoint slides included in
the DAS Example folder (slide 11) of the PRODML download. To see the animation, run the presentation
in Slide Show mode or see Energistics Online:
http://docs.energistics.org/#PRODML/PRODML_TOPICS/PRO-DAS-000-015-0-C-sv2000.html

(Image courtesy of OptaSense)


Figure 23-2. Vibration from an acoustic source modulates the fiber and hence the scatter pattern. The data
gathered makes it possible to determine a history of the nature of acoustic source as a function of position.
The left hand lower plot shows the back scattered intensity as a function of time (equivalent to distance
along fiber) with no disturbance to the fiber (black). The right hand plots show that when a sound source is
adjacent to the fiber at any location, the back scattered intensity from that location is altered (red line), so
that it repeats the undisturbed signal for most of the fiber length, but diverges from it where the noise source
is (red). By processing these variations all along the fiber length, the acoustic signal at each point can be
obtained. See the animation in Energistics Online:
http://docs.energistics.org/#PRODML/PRODML_TOPICS/PRO-DAS-000-015-0-C-sv2000.html

DAS Equipment and Data Types: Raw and Processed Data


Figure 23-3 shows basic DAS equipment configuration and usage and introduces some important
concepts about the raw data and how it’s represented in the DAS PRODML data schemas, which are
explained in Chapter 24.


Figure 23-3. DAS installed system loci and raw data time series.

The DAS instrument box (or interrogation unit, IU) is connected to a sensing fiber. The sensing fiber
consists of a surface part and a downhole part, which together form the full fiber optical path. The DAS IU
sends light pulses into the fiber at a pre-configured rate (the interrogation or pulse rate) and samples the
backscattered light, creating an ensemble of N locus samples along the fiber. Such an ensemble is often
referred to as a "trace" or "scan," and is shown as columns of dots in the figure. The time at which a trace
is collected is indicated by the measurement start time. A DAS IU interrogating the full fiber outputs a
maximum of N x pulse rate "raw" DAS samples per second. Each locus sample is shown as a dot in
Figure 23-3.


In many cases the DAS IU is configured such that it decimates the output by an integer factor M and only
outputs every Mth trace. The rate at which scans are output by the interrogator is called the “output data
rate.” The red dots in Figure 23-3 show such a decimation example (decimation factor 4).
Further, DAS IUs often collect measurements for only a subset of the loci along the fiber. In Figure
23-3, the shaded grey box indicates such a scenario, where only loci 8 to 24 are collected. In the figure,
"start locus index" and "number of loci" describe which subset is collected. To ensure a unique numbering
system, the first locus at the interrogator is by definition assigned index 0.
Measurement samples output for a single locus can form a time series of samples, represented by rows
of dots in the figure.
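The sampling geometry just described lends itself to a back-of-envelope data-rate estimate. All parameter values below are illustrative assumptions, not values from this guide, but they show why raw DAS acquisitions reach the terabyte volumes noted in Section 22.2.

```python
# Sketch: raw DAS output rate = number of loci x output data rate x sample
# size, where output data rate = pulse rate / decimation factor M.
# All parameter values here are illustrative assumptions.
def raw_bytes_per_second(num_loci: int, pulse_rate_hz: float,
                         decimation: int = 1, bytes_per_sample: int = 2) -> float:
    output_data_rate = pulse_rate_hz / decimation  # traces per second
    return num_loci * output_data_rate * bytes_per_sample

# 10,000 loci, 10 kHz pulse rate, decimation factor M = 4, int16 samples:
rate = raw_bytes_per_second(10_000, 10_000.0, decimation=4)
print(rate / 1e6, "MB/s")             # 50.0 MB/s
print(rate * 3600 / 1e12, "TB/hour")  # 0.18 TB/hour
```

Even this decimated output accumulates almost 0.2 TB per hour; an undecimated acquisition at these assumed settings would be four times larger.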
Figure 23-4 introduces a number of additional concepts that arise when processing raw data into frequency bands
and spectra using Fourier transforms. An example of the raw data is shown in box 'A', containing a
representation of a sine wave exciting a single channel (locus) along the fiber (left plot) and the
corresponding waterfall DAS raw data plot (right plot). Applying two filters to this raw data, one that
includes and one that excludes the sine wave's frequency, produces the two processed
filtered images shown in box 'B', referred to as processed DAS frequency band extracted (FBE) data.
The full signal frequency spectrum for each DAS channel can also be calculated using a fast Fourier
transform, shown in box 'C', which is referred to as spectrum data.
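The spectrum transformation can be sketched in miniature: a discrete Fourier transform of a single-locus time series recovers the test tone's frequency bin. This is an illustrative toy, with an assumed 1 kHz output data rate, an assumed 125 Hz tone, and a naive O(N²) DFT rather than a production FFT pipeline.

```python
# Toy sketch: from a raw single-locus time series to "spectrum" data via a
# naive discrete Fourier transform. The 1 kHz sample rate and 125 Hz test
# tone are assumptions for illustration only.
import cmath
import math

def dft(samples):
    n = len(samples)
    return [sum(s * cmath.exp(-2j * cmath.pi * k * i / n)
                for i, s in enumerate(samples)) for k in range(n)]

fs = 1000.0  # assumed output data rate, Hz
raw = [math.sin(2 * math.pi * 125.0 * i / fs) for i in range(64)]  # 125 Hz tone

spectrum = [abs(x) for x in dft(raw)]          # "spectrum" data for this locus
peak_bin = spectrum.index(max(spectrum[:32]))  # search the one-sided half
print(peak_bin * fs / len(raw))                # 125.0 -> the tone's frequency
```

An FBE value for a band is then a level computed from the bins (or filtered samples) falling inside that band, repeated per locus and per time window.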


Figure 23-4. DAS raw and processed data types. (A) DAS raw data: a sine wave exciting a single locus along
the fiber (left) and the resulting waterfall plot (right). (B) DAS processed FBE data: the top plot shows a
filtered version of the raw data containing the sine wave frequency, whereas the bottom plot shows a filtered
version without the sine wave frequency (C) DAS processed spectrum data: frequency spectrum plotting as
a function of time.


23.2 DAS Data Life Cycle


Figure 23-5 is a high-level view of the entire DAS workflow, from acquisition through processing, use, and storage.

Figure 23-5. High-level DAS workflow, from the field to the office.


24 DAS: Data Model


This chapter explains the DAS data model, provides a brief overview of the technology used to implement
DAS, and explains how the technology maps to the data model. Specifically, it:
• Explains the DAS equipment data model, including the XML data objects and options for transferring
the fiber optical path and DAS instrument box.
• Explains the acquisition data model, which includes:
− Specific optical path and instrument box for a particular acquisition
− The acquisition array data and options for how it can be stored
IMPORTANT IMPLEMENTATION GUIDANCE: Key technologies for implementing DAS include the
Energistics Packaging Conventions (EPC) and HDF5. It is important to implement BOTH technologies
and to implement EPC FIRST. For more information, see Section 24.1.2.
For more information:
• For a walkthrough of the worked example included in the DAS download, see Chapter 24.
• For examples of other DAS data transfer configurations, see Chapter 24.

24.1 Data Model and Technology Overview

Data Model
Figure 24-1 is a high-level diagram of the set of DAS data objects, which includes definitions of the
optical path and the DAS instrument box, and an acquisition data set. Each of these components is
explained in more detail in the sections below.

Figure 24-1. The basic DAS installed system is composed of an optical path and instrument box which
produces the DAS acquisition data.

• Optical Path: defined using the Fiber Optical Path schema; see Section 19.3.
• OTDR (Optical Time Domain Reflectometry): defined using the OTDR Acquisition schema; see
Section 19.3.6. This is an optional link from a Calibration.
• DAS Instrument Box: defined using the DAS Instrument Box schema; see Section 24.2.
• DAS Installed System (to represent the path+box pairing): defined using the DAS Acquisition
schema; see Section 24.4.
• DAS Acquisition: defined using the DAS Acquisition schema; see Section 24.5.

Technology
Like all Energistics data objects, developers implement the DAS data object so that software can read
and write the common format; they do this using the key technologies in the Energistics Common
Technical Architecture (CTA). For more information, see the CTA Overview Guide and other CTA
resources, which are included in the PRODML download.
The main technologies are briefly described here. For important information on DAS usage of these
technologies—particularly HDF5 and EPC—see Section 24.1.2.7.

24.1.2.1 UML for Data Modeling and XSD Generation


Energistics uses the Unified Modeling Language™ (UML®), implemented with Enterprise Architect (EA),
a data modeling software tool, to design PRODML, including DAS. The DAS download includes an XMI
file of the UML model; the XMI file can be imported into any UML modeling tool.
The UML model provides the "big picture" of data objects and the relationships between them. Energistics
also produces the XSD files (which define various data objects in XML) from the UML model.
For an overview of the DAS data object UML models, see Section 24.5.

24.1.2.2 XML
Each data object in an Energistics standard is defined by an XSD file and is stored as an XML file. XML is
used because of its portability and because it's both machine-readable and human-readable. Common
design patterns leverage the CTA, which promotes consistency and integration across all Energistics
standards.
For more information, see the CTA Overview Guide.

24.1.2.3 Top-Level Data Objects and Energistics AbstractObject


An Energistics data object schema is a complete, high-level schema from one of the Energistics
domain specifications (RESQML, WITSML, PRODML). These schemas are represented in XSD files as
global (i.e., root level) XML elements. For Energistics standards, there is a single root element across all
schemas, which is AbstractObject (though this is not a general requirement of XML schemas).
A data object is a single-instance document described by one of these schemas. Energistics refers to
these as ‘top-level data objects’, which inherit from AbstractObject and, by definition, each has a UUID
and citation metadata (based on the Energistics EIP, a profile of an ISO metadata standard created
specifically for upstream oil and gas).
Top-level data objects in DAS include a DAS acquisition, an optical path and a DAS instrument box. In
Energistics UML diagrams, top-level data objects (and in some cases, other important objects) generally
are represented as blue boxes and abstract elements as green boxes.
IMPORTANT: Abstract objects typically have required data elements. For specific data objects, the
required elements (1..*) are defined in the combination of the data object and any abstract object(s) it may
inherit from.
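The inheritance pattern above can be illustrated with a minimal XML sketch. This is illustrative only: element names and namespace handling are simplified, and it should not be read as the normative PRODML schema.

```xml
<!-- Illustrative only: simplified names/namespaces, not the normative schema.
     Shows a top-level data object carrying its UUID and Citation metadata. -->
<DasAcquisition uuid="1b4e28ba-2fa1-11d2-883f-0016d3cca427"
                xmlns="http://www.energistics.org/energyml/data/prodmlv2">
  <Citation>
    <Title>Example DAS acquisition</Title>
    <Originator>Example author</Originator>
    <Creation>2022-05-16T00:00:00Z</Creation>
  </Citation>
  <!-- acquisition-specific elements would follow here -->
</DasAcquisition>
```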


24.1.2.4 Universally Unique Identifiers (UUIDs)


To manipulate and exchange data objects independently, each instance of an Energistics data object
requires a universally unique identifier (UUID). This UUID accompanies the Citation data element, which
contains key metadata for each Energistics data object derived from AbstractObject.
Energistics uses UUID standard RFC 4122 from the Internet Engineering Task Force (IETF)
(https://tools.ietf.org/html/rfc4122).
According to the abstract of the RFC: "This specification defines a Uniform Resource Name namespace
for UUIDs (Universally Unique IDentifier), also known as GUIDs (Globally Unique IDentifier). A UUID is
128 bits long, and can guarantee uniqueness across space and time."
For UUIDs, Energistics standards are case-insensitive.
For more information about use of UUIDs by Energistics, see the Energistics Identifier Specification.
Additionally, other elements and attributes may use UUID formats for unique identification. For example:
• In DAS, the AcquisitionId (on the DasAcquisition data object) refers to the acquisition job that produced
raw and related processed data.
• Each HDF5 file is uniquely identified with a UUID; the EPC file references related HDF5 files by each
file's UUID.
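A short stdlib sketch of the two UUID behaviors above: generation per RFC 4122 and case-insensitive comparison. The lowercase-on-write convention shown is an assumption for illustration, not a mandate of this guide.

```python
# Sketch: generating an RFC 4122 UUID and comparing UUIDs case-insensitively.
import uuid

object_uuid = uuid.uuid4()       # random 128-bit RFC 4122 UUID for a new object
print(str(object_uuid).lower())  # writing lowercase is an assumed convention

# Energistics treats UUIDs as case-insensitive, so parse before comparing:
a = "1B4E28BA-2FA1-11D2-883F-0016D3CCA427"
b = "1b4e28ba-2fa1-11d2-883f-0016d3cca427"
assert uuid.UUID(a) == uuid.UUID(b)  # uuid.UUID accepts either case
```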

24.1.2.5 HDF5
XML is not very efficient at handling large volumes of numerical or array data, so for this purpose
Energistics uses the Hierarchical Data Format, version 5 (HDF5), which is a data model, a set of open file
formats, and libraries designed to store and organize large amounts of numerical/array data for improved
speed and efficiency of data processing.
DAS uses HDF5 to store both raw and processed measurement data. HDF5 is also part of the CTA, and
its general use in Energistics is described in the CTA Overview Guide.
For more information about how to use HDF5 with DAS, see Section 24.5.
NOTE: The HDF5 version policy is to use the “backward version compatibility flag” set to HDF5 V1.8.x
when using HDF5 v1.10.x or above. This is to ensure compatibility with DAS HDF5 files written using the
older (v1.8.x) versions of HDF5 libraries.
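As a sketch of how such an array might be written, assuming the h5py and numpy packages: the group/dataset and attribute names below are illustrative inventions, not the normative DAS HDF5 layout, and the libver bound implements the v1.8 compatibility policy in the note above (the "v108" value requires h5py built against HDF5 1.10.2 or later).

```python
# Sketch (assumes h5py + numpy): write a (trace, locus) raw array with the
# HDF5 v1.8 backward-compatibility bound. All names are illustrative only.
import uuid
import numpy as np
import h5py

traces = np.zeros((1000, 256), dtype=np.int16)  # (trace, locus) raw samples

# driver="core" keeps this demo in memory instead of writing to disk.
with h5py.File("demo.h5", "w", driver="core", backing_store=False,
               libver=("earliest", "v108")) as f:
    dset = f.create_dataset("Acquisition/Raw/RawData", data=traces)
    # Duplicate key metadata so the file stays self-describing if it is
    # separated from its XML during transit (see Section 24.1.2.7).
    f.attrs["uuid"] = str(uuid.uuid4())
    dset.attrs["StartLocusIndex"] = 0
    dset.attrs["NumberOfLoci"] = traces.shape[1]
    print(dset.shape)  # (1000, 256)
```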

24.1.2.6 EPC
The Energistics Packaging Conventions (EPC) is a set of conventions that allows multiple files to be
grouped together as a single package (or file), which makes it easier to exchange the many files that may
make up a DAS model. It is an implementation of the Open Packaging Conventions (OPC), a commonly
used container-file technology standard supported by two international standards organizations.
Essentially, an EPC file is a .zip file. You may open it and look at its content using any .zip tool. EPC is
also part of the CTA, and how it works is described in the EPC Specification.
For DAS, the EPC stores all the data objects described in Section 24.1.1, which includes the metadata for
the raw and/or processed datasets (if any). Each EPC file must refer to only one DAS Acquisition.
However, one DAS Acquisition may have multiple EPC files referring to it, depending on the business use
case.
For a typical DAS use case, an EPC file will be accompanied by a set of Hierarchical Data Format,
version 5 (HDF5) files for storing the raw and processed data and time arrays. These HDF5 files are
stored external to the EPC file. Figure 24-2 shows a conceptual model of how EPC works for DAS. For
more information on how HDF5 files are formatted, see Section 24.5. For more information on how the
HDF5 files are referenced from an EPC file, see Sections 25.2 and 25.3.
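Because an EPC package is a .zip file, standard zip tooling can build and inspect one. The part names below are simplified illustrations; a real EPC/OPC package also carries content-type declarations and _rels relationship parts.

```python
# Sketch: an EPC file is a zip package, so Python's zipfile can read/write
# it. Part names here are simplified assumptions, not full OPC layout.
import io
import zipfile

buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as epc:
    epc.writestr("[Content_Types].xml", "<Types/>")            # OPC content types
    epc.writestr("DasAcquisition_1.xml", "<DasAcquisition/>")  # XML metadata part

with zipfile.ZipFile(buf) as epc:
    print(epc.namelist())  # ['[Content_Types].xml', 'DasAcquisition_1.xml']
```

Note that for DAS the HDF5 array files live outside the package, so they would not appear in this listing; the package's XML parts reference each external HDF5 file by its UUID.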


Figure 24-2. Diagram showing how EPC provides the technology to group together related files and
exchange them as a single package. (Container ship photo from Wikipedia:
http://fr.wikipedia.org/wiki/Fichier:CMA_CGM_Marco_Polo_arriving_Port_of_Hamburg_-_16._01._2014.jpg. License: Creative
Commons Attribution-ShareAlike 3.0 Unported.)

24.1.2.7 Important Usage of these Technologies for DAS


This section gives important usage information for the technologies listed above for implementing DAS.
For more information on these technologies, see the CTA Overview Guide and other CTA resources,
which are included in the PRODML download.
• DAS data objects use the following XSDs:
− The optical path is represented using the PRODML Fiber Optical Path, which is shared with the
DTS (Distributed Temperature Sensing) schemas (see Section 19.3).
− All other DAS concepts use schemas developed specifically for DAS.
Additionally:
− All top-level data objects inherit from AbstractObject (in Energistics common) which contains the
Citation object, which includes key metadata shared by all Energistics top-level data objects. This
metadata includes the data object instance's UUID.
− Keep in mind, required elements/attributes for some data objects may be specified in abstract
object(s) from which the data object inherits.
• Arrays and use of HDF5. The DAS acquisition process and resulting raw data are extremely large
arrays of loci and corresponding data values. They are data sets so huge they generally exceed the
capacity of modern hard drives. More array data is produced when those raw data sets are processed
and related FBE and/or spectra data sets are produced. HDF5 is specifically designed to manage and
access extremely large array data sets.


• Metadata. All of the metadata about the DAS Acquisition—but not the arrays of data—is stored in
XML files. The metadata is also stored in the HDF5 files, along with the arrays of numbers. The
advantages of having this data in the XML include:
− XML has built-in schema validation, and all Energistics XML standards work on the basis that the
standard is represented by XSD schemas. HDF5 does not include schema-based validation. This
means that the DAS XML files can be checked for validity by simple schema validation built in to
most XML tools and editors.
− The XML files are small and can be opened in any editor, including a text editor, which may make
it much easier to figure out what the content of the set of acquisition files is, compared to opening
all the potentially huge HDF5 array files. All the metadata can sit in one small XML file, while the
actual measurement values are in many large HDF5 files.
The HDF5 files also contain the metadata. The metadata is duplicated from the XML files because it
is possible that the HDF5 files—which can easily fill several hard drives with raw data—may become
physically separated from the small XML file during transit. Having the metadata in each HDF5 file
enables files to be assembled back together as a coherent set.
For more information about HDF5 configuration options for DAS, see Section 24.6.3.
• Relationships between data objects, files, and the EPC file. The XML content is also used to
provide the links between the metadata and the data arrays, through Energistics data object
references, which are implemented in the EPC file.
• EPC. The EPC Specification states that HDF5 files may be stored either inside or outside of the EPC
file; however, DAS requires that HDF5 files be stored outside the EPC file.
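Because an EPC file is a zip package following the Open Packaging Conventions (OPC), its parts can be inspected with ordinary zip tooling. The sketch below builds a toy EPC-style package in memory and lists its parts; the part names used here are illustrative, not taken from the EPC Specification.

```python
# Sketch only: a toy EPC-style (OPC zip) package with illustrative part names.
import io
import zipfile

buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as epc:
    epc.writestr("[Content_Types].xml", "<Types/>")            # required by OPC
    epc.writestr("_rels/.rels", "<Relationships/>")            # package relationships
    epc.writestr("DasAcquisition_0.xml", "<DasAcquisition/>")  # an XML metadata part

with zipfile.ZipFile(buf) as epc:
    parts = epc.namelist()
print(parts)
```

Note that the DAS HDF5 files themselves stay outside the package; the EPC parts carry the XML metadata and the relationships that point at the external HDF5 files.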

24.1.2.8 Important Implementation Guidance: Implement BOTH EPC and HDF5 and begin with
EPC
PRODML DAS has been designed so that the EPC file specifies all relationships among the various data
objects/files and related HDF5 files that comprise a DAS data set. As a convenience and to prevent
potential problems associated with huge data sets that are so big they may span multiple physical disks,
the design includes some redundancy of metadata among the XML and HDF5 files (as described above).
This redundancy in metadata has led some people to believe they can implement PRODML DAS using
HDF5 only. This approach is NOT RECOMMENDED for a couple of reasons. First and foremost, this
HDF5-only approach is NOT a correct implementation of the PRODML DAS standard, which requires an
EPC file. If two organizations are exchanging DAS data using PRODML DAS and one has implemented
EPC and the other has not, a data transfer would fail. Second, the relationships are specified in the EPC
file. If you begin with HDF5, you may have some initial success figuring out the relationships and writing
and reading files, but a solution won't scale over the many permutations and complexity of relationships
inherent in DAS data sets.
For these reasons, the recommendation is that when implementing PRODML DAS, you begin with the
EPC-related capabilities, then HDF5.

24.2 Defining the Optical Path


The Optical Path is a shared object between the DTS and DAS data models. It is described in Section 19.3.
An additional data object, OTDR Acquisition, has been added for optical time-domain reflectometer
surveys of the optical path. For more details, see Section 19.3.6.

24.3 Defining the DAS Instrument Box


The DAS Instrument Box object is shown in Figure 24-3. This represents the hardware equipment
located at the site that is responsible for generating DAS measurements. The DAS instrument box is
usually located next to the facility element for which DAS surveys are taken (a well or a pipeline, for
example). It consists of, among other devices:
• An optoelectronic instrument to which the optical fiber is attached. This instrument contains:


− a controllable light source


− optical switches
− photonic detection devices
• In some cases, a computer or server is connected to the instrument that is responsible for capturing
the measurements and initiating the data transmission.
Hardware setup can vary widely from one deployment to the next and also depends on the vendor and
the make/model of the hardware used. The data schema was created to capture the relevant parameters
of the installation without worrying about how the hardware is physically laid out. Attributes available for
the instrumentation cover areas such as:
• Make/model of the instrument
• Number of channels
• Software version
• Factory calibration information
• Calibration parameters

class DasInstrumentBox

AbstractObject
«XSDcomplexType,XSDtopLevelElement»
DasInstrumentBox

«XSDelement»
+ SerialNumber: String64 [0..1]
+ Parameter: IndexedObject [0..*]
+ FacilityIdentifier: FacilityIdentifier [0..1]
+ Instrument: Instrument
+ FirmwareVersion: String64
+ PatchCord: DtsPatchCord [0..1]
+ InstrumentBoxDescription: String2000 [0..1]

Figure 24-3. DAS Instrument Box model. Note that the Instrument element contains further generic details
about the hardware.

24.4 Defining the DAS Installed System


A DAS acquisition system consists of a pairing of an optical path and a DAS instrument box, which are
used to acquire DAS measurements. Because there are several ways in which an optical path and a DAS
instrument box can be combined to make measurements, the DAS Acquisition object contains a
reference to an optical path and an instrument box from which the measurement was generated. This
pairing of optical path and DAS instrument box is known as a DAS Installed System. For examples of the
combinations of Optical Path and Instrument Box which can be represented, see Section 19.5.

24.5 Overview of the DAS Acquisition UML Model


This section provides an overview of the DAS Acquisition data model using UML diagrams. For a
complete description of all elements in the UML model/DAS XSDs, see the PRODML UML model or the
PRODML Technical Reference Guide.
Figure 24-4 shows the content of the XML DAS Acquisition file. The data elements in this XML schema
are repeated in the HDF5 file as HDF attributes, distributed through the file associated with the group or
array concerned. For more information about the DAS acquisition, see Section 24.7.


Figure 24-5 shows the DAS data arrays that are used to store the DAS raw, DAS FBE, DAS spectra
samples and their corresponding sample times. Because of the volume of data, these arrays are only
stored in the HDF5 files. For more information about DAS data arrays, see Section 24.8.
Detailed definitions of individual attributes can be found in Chapter 27.

[UML class diagram: DasAcquisition (a top-level object inheriting from AbstractObject) with elements
including AcquisitionId, AcquisitionDescription, OpticalPath, DasInstrumentBox, FacilityId,
ServiceCompanyName, PulseRate, PulseWidth, GaugeLength, SpatialSamplingInterval,
MinimumFrequency, MaximumFrequency, NumberOfLoci, StartLocusIndex, MeasurementStartTime, and
TriggeredMeasurement. Child groups: Raw (DasRaw, 0..*), Processed (DasProcessed, 0..1), Custom
(DasCustom, 0..1), and FacilityCalibration (0..*). Supporting types include DasFbe, DasSpectra,
Calibration, DasCalibrationInputPoint, and enumerations such as FacilityKind,
DasCalibrationInputPointKind, and DasCalibrationColumn (FacilityLength, LocusIndex,
OpticalPathDistance).]
Figure 24-4. UML diagram of XML content of DAS Acquisition file (described in Section 24.7 below).


[UML class diagram: the array classes DasRawData, DasSpectraData, DasFbeData, and DasTimeArray,
each carrying Dimensions (from the DasDimensions enumeration: frequency, locus, time) plus start/end
frequency or time elements as applicable. The data and time arrays reference external HDF5 storage
through ExternalDatasetPart (Count, PathInExternalFile, StartIndex) and DasExternalDatasetPart
(optional PartStartTime and PartEndTime).]
Figure 24-5. UML diagram of HDF5 DAS arrays for raw samples, FBE samples, spectra samples and
corresponding times (described in Section 24.8 below)

24.6 DAS Acquisition Data


One DAS acquisition job may consist of many data arrays, including both raw and processed (FBE and
spectra) data. Because DAS arrays are so large, HDF5 is used to store and manage the measurement
data arrays.
NOTE: DAS arrays may be so large that they are split across multiple HDF5 files. XML is used to store all
the metadata, and this metadata is repeated in the HDF5. Comments below about metadata refer to both
formats.


24.6.1 High-Level Conceptual Model


Figure 24-6 shows the high-level conceptual model for how raw and processed data are related. The
orange “Arrays” boxes in the diagram refer to data arrays. In this model, a DAS Acquisition may have
multiple raw arrays (0..*) and one processed array set, but the processed array set may have multiple
spectra and FBE (0..*) arrays.
There are three kinds of data: raw, plus two types of processed data: spectra and FBE.
Each kind of data has two parts:
• the “data” array (the values of the measurements across time and across loci for raw and for FBE,
and the values of Fourier transformed data across frequencies for the spectra)
• the “time” array (the times which are common to all loci)

Figure 24-6. DAS data can consist of both raw and processed (spectra and FBE) arrays. This figure shows
conceptually how different data arrays are related.

In addition to the arrays of values, attributes of the groups and arrays are used to provide the necessary
ancillary data and metadata. The HDF5 file also contains a Custom group, which holds custom attributes
for the whole acquisition or for any data group (e.g., raw, FBE, or spectra).
If full mappings from loci to optical path and facility lengths are available, they are also stored in an
HDF5 file, in the same way as the arrays of DAS data.
At its root level, the HDF5 file must contain one attribute, “uuid”, which is the unique reference linked
to the file proxy reference in the EPC file.
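A minimal sketch of this root-level requirement, using the h5py library (the file name and the h5py calls are our own choices; only the root-level “uuid” attribute itself is mandated by the guide):

```python
import uuid
import h5py  # assumes the h5py package is installed

acq_uuid = str(uuid.uuid4())
with h5py.File("das_example.h5", "w") as f:
    # Root-level attribute linking this file to its proxy reference in the EPC package
    f.attrs["uuid"] = acq_uuid
    f.create_group("Acquisition")

with h5py.File("das_example.h5", "r") as f:
    stored = f.attrs["uuid"]
```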

24.6.2 Overview of the HDF5 File Arrays


Since an HDF5 file is referred to by only one EPC, each HDF5 file must relate to one and only one DAS
Acquisition. NOTE: The screen shots of HDF5 (Figure 24-7), XML and EPC files in the following sections
are taken from the worked example, which has:
• 1 raw array, which is split over 2 physical HDF5 files


• 1 processed FBE data set containing 2 bands of FBE data


• 1 processed spectrum
The cardinality for this data is listed here. Note that the pattern shown here remains the same, but the
specific cardinalities of arrays change for other examples:
• Custom: 0..1 per parent group
• Raw groups: 0..* (one group in this example)
• FBE groups: 0..* (one group in this example)
• FBE bands within an FBE group: 1..* (two bands in this example)
• Spectra groups: 0..* (one in this example)

Figure 24-7. DAS example HDF5 file structure. Use of DAS requires data stored in HDF5 to contain the
structure and naming conventions shown in the figure.

24.6.3 HDF5 File Array Configuration Options


DAS data arrays can be very large (especially raw arrays) and can exceed the capacity for individual
HDF5 files (i.e., exceed the space available on a disk). For this reason, it is possible to split arrays across
multiple physical HDF5 files. Only one raw array can be transferred within one DAS acquisition EPC file.
However, multiple processed arrays may be transferred within one DAS acquisition EPC file. These
processed arrays all are expected to be derived from the one raw array.
There are several options for how to store these combinations of DAS array data in HDF5 files, which are
listed in the table below and shown in Figure 24-8 and Figure 24-9.

Configuration Options / Description of Contents

1. Processed in separate file
• One or more files that contain the one raw array split across them.
• The processed data is in one or more separate files.

2. Processed with own raw data
• One or more files that contain the one raw array split across them.
• Each file contains a sub-section of the raw array and the raw array’s corresponding processed
array(s), which are also organized into sub-section(s).

3. Hybrid of 1 and 2
• One or more files that contain the one raw array split across them.
• The processed array(s) for the whole of the raw array are in one of the files, along with one
sub-section of the raw array.

4. Raw only (Recommended)
• One or more files that contain the one raw array split across them and no processed arrays.
• Typically used when one company does the acquisition and another company does any subsequent
processing.

5. Processed only
• One file containing the processed arrays, with no raw arrays.
• Typically used when one company does the acquisition and another company does the processing.

6. FBE only (Recommended)
• One or more files that contain FBE arrays split across them along the time axis.
• Typically used because (1) one company does the acquisition and another company does the
processing, and (2) for file management purposes, because FBE files are smaller than the
corresponding spectra arrays.

7. Spectra only (Recommended)
• One or more files that contain spectra arrays split across them along the time axis.
• Typically used because (1) one company does the acquisition and another company does the
processing, and (2) for file management purposes, because spectra data arrays tend to be larger
than FBE arrays.

Figure 24-8. Possible HDF5 (.h5) file configurations for use with DAS (1 of 2).


Figure 24-9. Possible HDF5 (.h5) file configurations for use with DAS (2 of 2).

24.6.4 HDF5 Structure and Naming Conventions (Required for DAS)


Figure 24-7 (above) shows the DAS requirements for structure and naming conventions of an HDF5 file.
HDF data groups are available for the following data kinds. The XML contains elements with the same
names and attributes (but does not contain the array data):


Group Path Description Multiplicity


(min, max)
/ Provide one attribute “uuid” for linking each HDF file 1, 1
to the EPC. For more information, see the
Energistics Packaging Conventions Specification on
external part references and HDF5 files.
/Acquisition Refers to the DasAcquisition object in the XML and 1, 1
it contains attributes for the acquisition. There must
be one Acquisition group in each file.
/Acquisition/Raw[n] Contains only 1 raw data array and at least one time 0, unbound
array(s). Each acquisition may have more than one
raw group. The group must be named “Raw[n]”, with
“n” being consecutive numbers starting from 0, e.g.
“Raw[0]”, “Raw[1]”. If there is only one raw group, it
must still be named “Raw[0]”. The Raw[n] group
contains additional groups and data sets (arrays)
shown in Figure 24-7.
/Acquisition/Processed One processed group is allowed. It is used to 0, 1
contain FBE and/or spectra groups as children.
This group may have zero to many FBE and spectra
groups. If there is no FBE or spectra group at all,
this group may be omitted.
/Acquisition/Processed/Fbe[n] The group that contains FBE data and time arrays. 0, unbound
The group must be named “Fbe[n]” where “n” are
consecutive integers starting from 0, e.g. “Fbe[0]”,
“Fbe[1]”. If there is only one FBE group, it must still
be named “Fbe[0]”. The group contains multiple
arrays for the frequency bands and these must be
named FbeData[m] with “m” identifying the band
index, which is also zero-based. There is one
FbeDataTime array for the times common to all
FbeData[] bands.
/Acquisition/Processed/Spectra[n] The group that contains spectra data and time 0, unbound
arrays. The group must be named “Spectra[n]”
where “n” are consecutive integers starting from 0,
e.g. “Spectra[0]”, “Spectra[1]”. If there is only one
spectra group, it must still be named “Spectra[0]”.
The group contains one array named “SpectraData”
for the data and one array named
“SpectraDataTime” for time.
/Acquisition/Custom An optional HDF group named “Custom” can be 0, 1 (per
/Acquisition/Raw[n]/Custom added. Custom data is any service-provider-specific parent group)
customization parameters, which service providers
/Acquisition/Processed/Fbe[n]/Custom can provide as required. For more information, see
/Acquisition/Processed/Spectra[n]/Custom Section 24.7.5
If you look at the UML diagram (Figure 24-4), you
can see that any custom data can also be added to
the individual data groups (raw, FBE, and spectra).


/Acquisition/FacilityCalibration[n] A group for all Calibrations of one facility. The group 0, 1


must be named “FacilityCalibration[n]” where “n” are
consecutive integers starting from 0. If there is only
one FacilityCalibration, the group must still be
named “FacilityCalibration[0]”. The group contains
one or many calibration groups for the same facility.
The number of groups does not necessarily
correspond to the number of facilities because some
facilities may not have calibration data.
/Acquisition/FacilityCalibration[m]/Calibration[n] A group for a single calibration for one facility. The 1, unbound
group must be named “Calibration[n]” where “n” are
consecutive integers. The group must still be named
“Calibration[0]” even if there is only one such group.
Each group must contain one array named
“LocusDepthPoint”.
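As an illustrative sketch with the h5py library, the required group structure and naming can be created as follows (array shapes and values below are dummies; only the group and dataset names follow the conventions in the table above):

```python
import numpy as np
import h5py

with h5py.File("das_acq.h5", "w") as f:
    acq = f.create_group("Acquisition")
    raw = acq.create_group("Raw[0]")              # zero-based, even for one group
    raw.create_dataset("RawData", data=np.zeros((10, 4), dtype=np.int16))
    raw.create_dataset("RawDataTime", data=np.arange(10, dtype=np.int64))
    fbe = acq.create_group("Processed/Fbe[0]")    # Processed holds FBE/spectra groups
    fbe.create_dataset("FbeData[0]", data=np.zeros((5, 4), dtype=np.float32))
    fbe.create_dataset("FbeDataTime", data=np.arange(5, dtype=np.int64))

with h5py.File("das_acq.h5", "r") as f:
    paths = []
    f.visit(paths.append)    # collect every group/dataset path in the file
```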

The following instructions apply when writing/reading HDF groups and datasets:
• Create HDF groups only in order to create dataset(s) as leaf nodes, unless the group is a Custom
group.
− For example, to write RawData, you need to create the Acquisition Group and the Raw Group,
which then require attributes to be written for these groups. For more information, see the next
sections on HDF attributes.
• The tree structure of the HDF5 groups mimics the tree structure in the XML. For example, Processed
is a child group of Acquisition, and Fbe[0] is a child group of Processed.
• LocusDepthPoint is a look-up table for mapping every locus point to the optical path and facility
lengths. It is written to the HDF5 file as an array with a compound datatype such that each item
corresponds to a single locus point. The members of the compound datatype are: LocusIndex,
OpticalPathDistance and FacilityLengthDistance. The order of these members must correspond to
the Column sub-element under LocusDepthPoint. You should not rely on a fixed order of the columns;
instead, use the datatype member name to retrieve a specific column. Following the first rule above, if the
LocusDepthPoint is absent (e.g., it cannot be computed from the given calibration input points), the
parent Calibration HDF group must be absent too, because it does not have datasets as leaf nodes.
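A sketch of the LocusDepthPoint compound datatype with h5py and numpy. The member names below follow the DasCalibrationColumn enumeration shown in Figure 24-4, and the values are made up for illustration; as advised above, columns are retrieved by member name rather than by position.

```python
import numpy as np
import h5py

# Compound datatype: one item per locus point (illustrative values)
dt = np.dtype([("LocusIndex", np.int64),
               ("OpticalPathDistance", np.float64),
               ("FacilityLength", np.float64)])
table = np.array([(0, 100.0, 0.0), (1, 101.02, 1.0)], dtype=dt)

with h5py.File("calib.h5", "w") as f:
    cal = f.create_group("Acquisition/FacilityCalibration[0]/Calibration[0]")
    cal.create_dataset("LocusDepthPoint", data=table)

with h5py.File("calib.h5", "r") as f:
    ds = f["Acquisition/FacilityCalibration[0]/Calibration[0]/LocusDepthPoint"]
    distances = ds["OpticalPathDistance"][:]   # column lookup by member name
```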

24.6.5 Attributes in HDF5
• Each HDF group and dataset has a corresponding XML object. Each of these XML objects may
contain sub-elements and attributes that are either defined locally or inherited from more abstract
types. All of these sub-elements and attributes of the XML object should be copied to the HDF group
if they meet ONE of following criteria:
− XML element or attribute is derived from XSD primitive type (e.g. “xs:long”, “xs:boolean”). The
name of the element/attribute is used for the HDF attribute. For the corresponding HDF data
types, see the next section.
- XML element or attribute is a derived type of AbstractMeasure. The name of the element/attribute
is used for the HDF attribute. In addition, the “uom” attribute of the AbstractMeasure is copied,
with a name formatted as “ElementName.uom”.
• For each HDF Dataset, if there is a DasExternalDatasetPart referring to it, the sub-elements and
attributes of the DasExternalDatasetPart object must also be copied to the HDF Dataset as attributes.
The criteria specified in the previous bullet point also apply. However, the element
“PathInExternalFile” should be skipped because it duplicates the HDF path for the dataset.
• Where the above criteria are met:
− If an attribute is mandatory in the XML schema, it is also mandatory in the HDF and it must have
a value. All mandatory attributes must have a valid entry (either a valid attribute value or a
defined NULL value). Non-mandatory attributes (marked in the schema with [0..1]) or groups
(marked with 0..*) should be omitted from the XML or HDF files if they are not used (i.e.,
empty). When the attribute’s maximum number of occurrences is 1, the HDF attribute should be
a scalar.
− When the attribute’s maximum number of occurrences is larger than 1 (e.g., unbound), the HDF
attribute should be an array, even if there is only one item. For example, “Dimensions” is
presented as an array of strings in the worked example.
The optionality and requirements for HDF groups described above mean that, in practice, it is allowable
to omit groups such as raw, processed, spectra, and FBE. However, attributes that are marked as
mandatory within optional groups (such as NumberOfLoci, RawDataTime, etc.) must always be properly
populated if the associated group is used.
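A sketch of these attribute-mapping rules with h5py. The element names (GaugeLength, NumberOfLoci, Dimensions) come from the DAS schemas shown earlier, but the values are invented for illustration:

```python
import numpy as np
import h5py

with h5py.File("attrs.h5", "w") as f:
    acq = f.create_group("Acquisition")
    # A measure element <GaugeLength uom="m">10.0</GaugeLength> maps to two
    # HDF attributes: the value, plus "ElementName.uom" for the unit
    acq.attrs["GaugeLength"] = 10.0
    acq.attrs["GaugeLength.uom"] = "m"
    acq.attrs["NumberOfLoci"] = np.int64(4)        # max occurrence 1 -> scalar
    raw = acq.create_group("Raw[0]")
    rd = raw.create_dataset("RawData", data=np.zeros((4, 4), dtype=np.int16))
    # Max occurrence > 1 -> array, even if there were only one item
    rd.attrs["Dimensions"] = ["time", "locus"]

with h5py.File("attrs.h5", "r") as f:
    uom = f["Acquisition"].attrs["GaugeLength.uom"]
    dims = list(f["Acquisition/Raw[0]/RawData"].attrs["Dimensions"])
```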

24.6.6 Data Types in HDF Files


The norms in the following table cannot be enforced by HDF5 schemas (no such concept exists in
HDF5); they are therefore rules which must be followed in order to ensure interoperability.

XSD Type HDF Data Type


xs:string Fixed-length string, i.e. H5T_C_S1 in C.
xs:integer and its derived types Integer data type whose bit width follows that defined in the XSD type.
Example:
• int64 is used for “xs:long”
• int32 is used for “xs:int”
• int16 is used for “xs:short”
xs:float float32
xs:double float64
xs:boolean One of these formats:
• int8 or uint8 where 0 corresponds to False, 1 corresponds to
True.
• Enum type where the underlying values are 0 and 1, as above.

Dataset HDF Data Type


RawData Various data types may be used depending on the situation, including: uint8,
int8, uint16, int16, uint32, int32, uint64, int64, float32, float64
On-demand data conversion can be applied, with a performance cost. See
HDF5 documentation on Data Conversion
https://support.hdfgroup.org/HDF5/doc/H5.user/Datatypes.html
RawDataTime, int64
RawDataTriggerTime,
FbeDataTime,
SpectraDataTime
FbeData float32
SpectraData float64


24.6.7 HDF Compression
In general, HDF compression is not required. However, if needed, GZIP is recommended because it is
widely available. Compression often requires decisions on how the data arrays should be chunked;
therefore, its use requires agreement between the providers and the users.
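For example, with h5py, GZIP compression is enabled per dataset; the chunk shape and compression level below are arbitrary choices of the kind that producer and consumer would need to agree on:

```python
import numpy as np
import h5py

rng = np.random.default_rng(0)
data = rng.integers(-100, 100, size=(5000, 64)).astype(np.int16)

with h5py.File("raw_gz.h5", "w") as f:
    f.create_dataset("Acquisition/Raw[0]/RawData", data=data,
                     chunks=(500, 64),            # chunking choice to agree on
                     compression="gzip", compression_opts=4)

with h5py.File("raw_gz.h5", "r") as f:
    ds = f["Acquisition/Raw[0]/RawData"]
    comp = ds.compression
    roundtrip_ok = bool((ds[...] == data).all())  # compression is lossless
```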

24.7 Contents of DAS Acquisition Object


The DAS Acquisition object (see Figure 24-4) contains metadata about the DAS acquisition common to
the various types of data acquired during the acquisition, which includes DAS measurement instrument
data, fiber optical path, time zone, and core acquisition settings like pulse rate and gauge length,
measurement start time, and whether or not this was a triggered measurement.
Note that the DAS Acquisition object is derived from AbstractObject, which also defines a set of
mandatory and optional attributes. The “uuid” (defined in AbstractObject) and the “AcquisitionId” attributes
(defined in DasAcquisition) have different purposes and they must NOT have the same value. The “uuid”
identifies the entire dataset package, which may include raw and/or FBE and/or spectra data (see Section
24.6.3). The “AcquisitionId” identifies the DAS acquisition job. One example is that a raw-only package is
created for an acquisition. Then a separate FBE-only package is derived from this raw dataset. Because
these two packages are separate (there are two EPC files), they have different values for the “uuid” under
DasAcquisition, but they share the same value for “AcquisitionId” because they refer to the same
acquisition job.
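As a minimal illustration of this rule, with plain Python dictionaries standing in for the two EPC packages:

```python
import uuid

acquisition_id = str(uuid.uuid4())   # identifies the acquisition job itself

# A raw-only package and a derived FBE-only package from the same job:
# distinct "uuid" values, shared "AcquisitionId"
raw_package = {"uuid": str(uuid.uuid4()), "AcquisitionId": acquisition_id}
fbe_package = {"uuid": str(uuid.uuid4()), "AcquisitionId": acquisition_id}
```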

24.7.1 Recommended file content regarding software version


In V2.0 of PRODML DAS there was an element “Vendor Code” which was a string and which therefore
appeared in the HDF file as well as being an element of the XML DasAcquisition file.

In V2.1, Vendor Code was changed to the type Business Associate, which is a common type across the
MLs containing vendor details (such as name, contact details, etc.). Because it is a complex type and
not a string, it was removed from the HDF file under the rules for mapping from XML to HDF.

It turns out that some users had been using this element for the software version used to create the DAS
data. In 2.1 (and later) this field no longer appears in the HDF file.

In every XML file, the Citation structure appears, and this has an element Format. This is documented as:
“Software or service that was used to originate the object and the file format created. Must be human and
machine readable and unambiguously identify the software by including the company name, software
name and software version. This is the equivalent in ISO 19115 to the distributionFormat.MD_Format.

The ISO format for this is [vendor:applicationName]/fileExtension where the application name includes the
version number of the application.

SIG Implementation Notes


- Legacy DCGroup from v1.1 - publisher
- fileExtension is not relevant and will be ignored if present.
- vendor and applicationName are mandatory.”

This Citation Format string therefore contains the required content and, in the XML DAS Acquisition file,
should be used to identify the software vendor, software name, and software version.

In the HDF file, the convention is that the same string can be inserted in the Custom group. The
attribute is called Format, and its content follows the Energistics Citation Format element.
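A sketch of this convention with h5py (the vendor and application name in the string are hypothetical):

```python
import h5py

# Hypothetical vendor:applicationName string per the Citation Format convention
fmt = "Acme:DasRecorder 3.2"

with h5py.File("acq_fmt.h5", "w") as f:
    custom = f.create_group("Acquisition/Custom")
    custom.attrs["Format"] = fmt   # software vendor, name, and version

with h5py.File("acq_fmt.h5", "r") as f:
    stored_fmt = f["Acquisition/Custom"].attrs["Format"]
```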


24.7.2 DAS Instrument Box


Note that the DAS Instrument Box data elements (DasInstrumentBox) are described in the DAS
Instrument Box XML file and are not repeated in the DAS Acquisition XML or HDF5 files. A reference to
the DAS Instrument Box data object is provided. For more information on data object references, see the
Energistics Identifier Specification.

24.7.3 Fiber Optic Path


Note that the Fiber Optical Path data elements are described in the Fiber Optical Path XML file and are
not repeated in the DAS Acquisition XML or HDF5 files. A reference to the Fiber Optical Path data object
is provided. For more information on data object reference, see the Energistics Identifier Specification.

24.7.4 Facility Calibration
The Facility Calibration object (FacilityCalibration) contains one or many Calibrations for one facility in the
DAS acquisition. Each Calibration is a mapping of loci-to-fiber length and facility distance along the optical
path. The actual calibration points are provided in an array of Locus Depth Point. See Section 24.6.4.

24.7.5 DAS Custom
The DAS Custom object (DasCustom) contains service–provider-specific customization parameters.
Service providers can define the contents of this data element as required. This data object has
intentionally not been described in detail to allow for flexibility. DasCustom can be inserted as a separate
data group under DasAcquisition, but also under DasRaw, DasFbe, and DasSpectra. Note that these
objects are optional; they should not be required for reading and interpreting the data arrays. If used, the
service provider needs to provide a description of the data elements to the customer.

DAS Raw
The DAS Raw object (DasRaw) contains the attributes of raw data acquired by the DAS measurement
instrument. This includes the raw data unit, the location of the raw data acquired along the fiber optical
path, and information about times and (optional) triggers. Note that the actual raw data samples, times
and trigger times arrays are not present in the XML files but only in the HDF5 files because of their size.
The XML files only contain references to locate the corresponding HDF files, which contain the actual raw
samples, times, and (optional) trigger times.
For more information, see Sections 24.6.4 and 24.8.

DAS Processed
The DAS Processed object (DasProcessed) contains data objects for processed data types and has no
data attributes. Currently only two processed data types have been defined: the frequency band extracted
(FBE) and spectra. In the future other processed data types may be added.
Note that a DasProcessed object is optional and only present if DAS FBE or DAS spectra data is
exchanged.
Multiple FBE and spectra groups may be used with a DasProcessed object. Multiple groups of FBE bands
could be exchanged, for example, one group with linearly distributed frequency bands (e.g. two bands:
0.01-10.00 Hz and 10.01-20.00 Hz) and another group with logarithmically distributed frequency bands
(e.g. three bands: 0.1-1.0 Hz, 1.0-10.0 Hz and 10.0-100.0 Hz). In the HDF5 file these groups should then
be named Fbe[1] and Fbe[2], with the number added in square brackets [n]. If there is only one
FBE or Spectra group, then the brackets and number can be omitted. In the XML file this is not necessary.

DAS FBE
The DAS FBE object (DasFbe) contains the attributes of FBE processed data. This includes the FBE data
unit, the location of the FBE data along the fiber optical path, information about times, and (optional) filter-related
parameters. Note that the actual FBE data samples and times arrays are not present in the XML files but
only in the HDF5 files because of their size. The XML files only contain references to locate the
corresponding HDF files containing the actual FBE samples and times.


For more information on the arrays, see Section 24.8.

DAS Spectra
The DAS Spectra object (DasSpectra) contains the attributes of spectra processed data. This includes the
spectra data unit, location of the spectra data along the fiber optical path, information about times,
(optional) filter-related parameters, and the UUIDs of the original raw files from which the spectra file was
processed and/or the UUIDs of the FBE files that were processed from the spectra files. Note that the
actual spectrum data samples and times arrays are not present in the XML files but only in the HDF5 files
because of their size. The XML files only contain references to locate the corresponding HDF files
containing the actual spectrum samples and times.
For more information on the arrays, see Section 24.8.

24.8 DAS Array


The DAS Array (DasArray) schema (Figure 24-5) defines the raw, FBE, and spectrum sample arrays and
the corresponding times arrays used in the HDF5 files.
The DasArray schema also defines ExternalDataSetPart and the derived class DasExternalDatasetPart,
which help the user keep track of which HDF5 file contains which part of the DAS sample data. Use of
these parts is necessary for very large acquisitions, where the DAS data samples array may become so
large that it must be stored in multiple HDF5 files.
The raw, FBE, and spectra arrays have a Dimensions attribute that specifies the ordering of elements in
the data array. The values for Dimensions can be ‘locus’, ‘time’ or ‘frequency’. In the XML, there are two
(raw, FBE) and three (spectra) instances of Dimensions which is an enum element with the list of possible
values given above. In the HDF5, it is a single attribute which is an array of the same values. In XML, the
array element with the highest index and, in HDF5, the last instance in the array, contains the fastest-
running index.
Typically the acquisition system writes the raw data with locus as the fastest running index, because it
acquires all active loci on the fiber for each pulse and writes these out trace after trace. In this case, the
XML will have two elements: Dimension = “time” followed by Dimensions = “locus”, and the HDF attribute
of Dimensions will contain “time,locus”. However, the ordering may differ from this, and may also
vary from system to system. Hence, the Dimensions attribute is important and is mandatory.
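The ordering rule above (the last Dimensions entry is the fastest-running index) can be sketched in Python. This is an illustration only, not part of the standard; the helper name and the dictionary-based sizes are assumptions made for the example:

```python
# Illustrative sketch: map a (time, locus) sample position to its flat
# position in a 1D buffer, given the Dimensions ordering from the XML/HDF5.
# The last entry in `dimensions` is the fastest-running index (row-major).

def flat_index(dimensions, sizes, position):
    """dimensions: e.g. ["time", "locus"]; sizes/position keyed by name."""
    index = 0
    for name in dimensions:                 # slowest to fastest
        index = index * sizes[name] + position[name]
    return index

# Raw data written trace after trace: Dimensions = "time", "locus".
sizes = {"time": 75, "locus": 101}          # worked-example sizes
# Sample for scan 2, locus 10 sits at 2 * 101 + 10 = 212 in the flat buffer.
print(flat_index(["time", "locus"], sizes, {"time": 2, "locus": 10}))
```

The same helper extends to the 3D spectra case by adding a "frequency" entry as the last (fastest-running) dimension.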
• The raw data array is a 2D array containing data samples for each ‘scan’ (array size = number of
scans x NumberOfLoci).
• The FBE data array is a frequency band filtered version of the raw data and also a 2D array (array
size = number of scans x NumberOfLoci).
• The spectrum data array is a 3D array because each DFT provides N points, with N equal to the
TransformSize (array size = number of scans x NumberOfLoci x TransformSize).
In addition, the FBE and spectra arrays have StartFrequency and EndFrequency attributes:
• For the FBE data, these attributes contain the start and end frequency of the frequency band.
• In the spectra data set, these correspond to the minimum and maximum frequency in the spectra.
The Times arrays contain the ‘scan’ or ‘trace’ times at which the raw, FBE and spectrum arrays were
acquired or processed. See Section 24.8.4.
Figure 2-3 shows traces/scans as vertical columns of dots. Note that these Times arrays contain the
times in Unix time (microseconds elapsed since 1 January 1970).

DAS Raw
Two-dimensional array containing raw data samples acquired by the DAS acquisition system.


DAS FBE Data


Two-dimensional (loci & time) array containing processed frequency band extracted data samples. This
processed data type is obtained by applying a frequency band filter to the raw data acquired by the DAS
acquisition system. For each frequency band provided, a separate DAS FBE data (DASFbeData) array
object is created.

DAS Spectra Data


Three-dimensional array (loci, time, transform) containing spectrum data samples. Spectrum data is
processed data obtained by applying a mathematical transformation function to the DAS raw data
acquired by the acquisition system. The array is 3D and contains TransformSize points for each locus and
time for which the data is provided. For example, many service providers will provide Fourier transformed
versions of the raw data to customers, but other transformation functions are also allowed.

DAS Time Array


The Times arrays contain the ‘scan’ or ‘trace’ times at which the raw, FBE and spectrum arrays were
acquired or processed:
• For raw data, these are the times for which all loci in the ‘scanned’ fiber section were interrogated by
a single pulse of the DAS measurement system.
• For the processed data, these are the times of the first sample in the time window used in the
frequency filter or transformation function to calculate the FBE or spectrum data.
Figure 2-3 shows traces/scans as vertical columns of dots. Note that these Times arrays contain the
times in Unix time (elapsed since 1 January 1970). The time unit is specified by the Uom attribute.
Because these Unix times are difficult for a human user to decipher, the Times arrays also contain a
StartTime and EndTime in human-readable format, for example 2015-07-20T07:23:45.678000+00:00,
corresponding to Unix time 1437377025678000 in microseconds. Note that the timestamp must comply
with the ISO 8601 format and always include the time zone.
StartTime and EndTime always refer to the full raw/FBE/spectra dataset across all file parts. EndTime is
optional, but StartTime is required. PartStartTime and PartEndTime are used to keep track of start and
end times of individual recordings (see Section 24.8.5).
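The correspondence between the Unix-time values in the Times arrays and the human-readable StartTime/EndTime strings can be checked with a short Python sketch (standard library only; the function names are illustrative, not part of the standard):

```python
from datetime import datetime, timedelta, timezone

EPOCH = datetime(1970, 1, 1, tzinfo=timezone.utc)

def unix_us_to_iso(microseconds):
    """Convert a Times-array value (microseconds since 1 Jan 1970 UTC)
    to an ISO 8601 string with time zone, as used for StartTime/EndTime."""
    return (EPOCH + timedelta(microseconds=microseconds)).isoformat()

def iso_to_unix_us(timestamp):
    """Inverse conversion back to integer microseconds (exact arithmetic,
    avoiding float rounding)."""
    d = datetime.fromisoformat(timestamp) - EPOCH
    return (d.days * 86_400 + d.seconds) * 1_000_000 + d.microseconds

# The example value from the text:
print(unix_us_to_iso(1437377025678000))  # 2015-07-20T07:23:45.678000+00:00
```

The round trip reproduces the example pair given above: Unix time 1437377025678000 microseconds corresponds to 2015-07-20T07:23:45.678000+00:00.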

DAS External Dataset Part


DAS Raw data recordings are often large, which requires the recording to be split into parts stored
across multiple HDF5 files. In some use cases, processed data may also be split into parts. To keep track
of the contents of each part, the PRODML classes ExternalDatasetPart and its subclass
DasExternalDatasetPart are used. Note that even if a data array fits within one HDF5 file,
DasExternalDatasetPart must still be used.
ExternalDatasetPart attributes:
• Count: the size of the partial array
• PathInExternalFile: the path to the array within the HDF5 file
• StartIndex: the start index of the partial array in the larger dataset
The following DasExternalDatasetPart attributes describe the start and end times of the partial arrays in
human readable format:
• PartStartTime
• PartEndTime.
Note that DasTimeArray: StartTime and EndTime attributes in the DAS Time Array object always refer to
the time span of the full array spanning across multiple files.


24.8.5.1 Example
A DAS acquisition records 10 minutes of raw data starting at noon on 1 May 2016. Because of the large
number of loci (5000) and the high OutputDataRate (1 kHz), it is decided to split the data set into two 5-
minute recordings and store the raw data in two HDF5 files called part1.h5 and part2.h5.
The Das External Dataset Parts for these files in the XML and H5 will then contain:

Part1.h5
ExternalDataset
Count = NumberOfLoci x OutputDataRate x duration = 5000 x 1000 x (5 x 60) = 1 500 000 000
PathInExternalFile = /Acquisition/Raw[0]/RawData
StartIndex = 0
DasExternalDatasetPart:
PartStartTime = 2016-05-01T12:00:00.000000+00:00
PartEndTime = 2016-05-01T12:04:59.999000+00:00
RawDataTime
StartTime = 2016-05-01T12:00:00.000000+00:00
EndTime = 2016-05-01T12:09:59.999000+00:00

Part2.h5
ExternalDataset
Count = NumberOfLoci x OutputDataRate x duration = 5000 x 1000 x (5 x 60) = 1 500 000 000
PathInExternalFile = /Acquisition/Raw[0]/RawData
StartIndex = 1 500 000 000
DasExternalDatasetPart:
PartStartTime = 2016-05-01T12:05:00.000000+00:00
PartEndTime = 2016-05-01T12:09:59.999000+00:00
RawDataTime
StartTime = 2016-05-01T12:00:00.000000+00:00
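The part bookkeeping in this example can be reproduced with a small Python sketch. The variable names mirror the attributes above; this is an illustration (assuming 0-based StartIndex values and contiguous, equally sized parts), not a normative algorithm:

```python
# Sketch of the ExternalDataset part arithmetic for the example above.
# Assumes 0-based StartIndex values and contiguous, equally sized parts.

number_of_loci = 5000
output_data_rate = 1000          # samples per second per locus (1 kHz)
part_duration_s = 5 * 60         # each HDF5 file holds 5 minutes

samples_per_part = number_of_loci * output_data_rate * part_duration_s

parts = []
start_index = 0
for name in ("part1.h5", "part2.h5"):
    parts.append({"file": name,
                  "Count": samples_per_part,
                  "StartIndex": start_index})
    start_index += samples_per_part

for p in parts:
    print(p)
# Count = 1 500 000 000 samples per part; part2.h5 starts at index 1 500 000 000.
```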


25 DAS Worked Example


The DAS download includes an example DAS project, which consists of a set of files containing sample
data. The data from this project is used in the examples that appear throughout this document and is
often referred to as the “worked example.”
This chapter:
• Lists the files that comprise the example
• Provides detailed descriptions and explanations of the worked example, including:
− The EPC file
− The XML files contained in the EPC file
- The DAS acquisition file (which is stored outside the EPC file in HDF5 files)

25.1 Overview of the Example Files


The example file contains a simplified DAS project. To keep the example manageable, only 101 loci are
included (compared to the thousands that might be in an actual DAS acquisition), and only 75 samples in
time.
The example is composed of these files:
• An EPC file, named example.epc, which contains all the XML files comprising the acquisition (for
details, see Section 25.2 below).
• 3 HDF5 files, named calibration.h5, part1.h5, and part2.h5. These are external to the EPC
“container” (Figure 25-1).

Figure 25-1. Worked example folder contents.

25.2 EPC “Container” File


For more information about EPC, see Section 24.1.2.6 and the Energistics Packaging Conventions
(EPC) Specification referenced in Section 2.4.
The EPC file (which is actually a .zip file and can be opened by any zip file tool, e.g., Winzip, if its file
extension is changed to .zip for this purpose) (Figure 25-2) contains all XML data that comprises a DAS
acquisition, and includes:
• DAS Acquisition
• Instrument Box
• Optical Path
• OTDR Acquisition
• References to the related HDF5 files (which the DAS standard requires to be stored outside the EPC
file) using 3 EPC External Part Reference (Proxy) files
• Relationship files
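Because an EPC file is an ordinary zip archive, its contents can be listed programmatically without renaming it. The sketch below builds a minimal stand-in archive (the member names are illustrative only, and a real EPC package also contains [Content_Types].xml) and lists it with Python's standard-library zipfile module:

```python
import os
import tempfile
import zipfile

# Build a minimal stand-in EPC package (a plain zip) for illustration.
# Real EPC files also contain [Content_Types].xml and further _rels entries.
tmpdir = tempfile.mkdtemp()
epc_path = os.path.join(tmpdir, "example.epc")
with zipfile.ZipFile(epc_path, "w") as z:
    z.writestr("DasAcquisition.xml", "<DasAcquisition/>")
    z.writestr("_rels/DasAcquisition.xml.rels", "<Relationships/>")

# zipfile does not care about the .epc extension, so no rename is needed.
with zipfile.ZipFile(epc_path) as z:
    names = z.namelist()
print(names)
```

Note that Python's zipfile (unlike some desktop zip tools) opens the .epc file directly; the extension change to .zip is only needed for tools that filter on file extension.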


Figure 25-2. Content of the EPC file (which is a .zip file).

25.3 Relationships Between XML Metadata and Arrays


This section provides a detailed walk-through of how the relationships between the various files of the
worked example are specified. This is achieved using the Energistics Packaging Convention (EPC), as
introduced in Section 24.1.2.6.
Figure 25-3 shows the conceptual data model and HDF5 storage for the arrays in the worked example
and Figure 25-4 shows how this model is implemented in EPC and HDF5.

Figure 25-3. DAS worked example conceptual model with HDF5, which is a “hybrid” implementation (from the
list in Section 24.5).

The XML files are all contained in the EPC file. The HDF5 files are transferred separately. Thus, in the
worked example, which shows the arrays split over 2 HDF5 files and the calibration data stored in 1 HDF5
file (see Figure 25-3), there are 4 files in total. The top of Figure 25-4 also shows these 4 files: one with
the extension .epc and three HDF5 files with the .h5 extension.
Opening the EPC file (i.e., unzipping it) shows 7 XML data files: the DAS Acquisition, Instrument Box,
Optical Path, and OTDR Acquisition files (blue colored arrows) and the three EPC external part reference
files (red colored arrows) (see the middle of Figure 25-4).


Relationships to related files are stored in the rels folder. For each data file, there is a corresponding .rels
file which stores the relationship of that file to the other files. These files have the extension .xml.rels.
The .rels files specify the relationships among ALL files—the XML files stored in the EPC file and the
externally stored HDF5 files (see the bottom of Figure 25-4).
• Files that are stored externally to the EPC file (such as the 3 HDF5 files in this example) are
specified using an EPC mechanism called external part reference. Each external file has a
corresponding external part reference XML file.
• Relationships to related files that are stored internally in the EPC file (such as the Instrument Box
and Optical Path files in this example) are specified using an internal EPC mechanism with a
TargetMode “Internal”.
The content of these is explained below.

Figure 25-4. The 4 files comprising the worked example (top); the content of the EPC file (middle)—7 XML
files; and the content of the Rels folder within the EPC file—one relationship file per XML data object
(bottom).

In the main folder of the EPC file, the DAS Acquisition XML file has the metadata as outlined previously.
In the middle of Figure 25-5, this file is shown with a light blue background. The UUID of the whole DAS
Acquisition is an attribute of this file. The incorporation of the UUID into the file name itself, as shown, is
also recommended (see the blue boxes in the figure).
Similarly, the EpcExternalPartReference files have their own UUID representing the UUID of the external
part (the red box in the lower part of Figure 25-5). These UUIDs are used within the DAS Acquisition XML
to reference the external part concerned (as part of the element EpcExternalPartReference within the
XML), as shown by the red box within the DAS Acquisition file in Figure 25-5.
Because an EpcExternalPartReference is to an HDF5 file which may contain multiple arrays, every array
referenced from within the DAS Acquisition XML must have a specified path in the HDF5 file. This is
shown by the green box in the middle part of Figure 25-5.


Figure 25-5. The 7 files in the EPC root folder (DAS Acquisition, DAS Instrument Box, Optical Path, OTDR
Acquisition and 3 EpcExternalPartReference) (top); an example of a reference to a single HDF5 array from
within the DasAcquisition file with the path to the H5 file (green) and UUID of EpcExternalPartReference (red)
(middle); and how this UUID is located in the EpcExternalPartReference XML itself (bottom).

The rels folder contains one file (extension .xml.rels) per file contained in the root folder. See Figure 25-6,
the top snippet, light brown border.
The rels file for the DasAcquisition lists all the relationships for this file—that is, to two files internal to the
EPC (Optical Path and Instrument Box), and to two external EpcExternalPartReference files. See Figure
25-6, the middle snippet, blue border.
The rels file(s) for the HDF5 files show a relationship back to the DasAcquisition (blue bubble showing its
UUID), and a relationship to an external file (the .h5 file) (green bubble showing the file name itself). See
Figure 25-6, the bottom snippet, pink border.
By these means, the references all tie in with each other and the specific path to an array in a specific
HDF5 file can be discovered.


Figure 25-6. Showing the 7 files in the EPC rels folder (one for each data file comprising the set) (top); the
relationships existing from the DASAcquisition with the three internal XML files, plus three
EpcExternalPartReferences (middle); and the relationships existing from the EpcExternalPartReference to a
physical H5 file (bottom).

When arrays are split over multiple HDF5 files (as they are in the worked example), then a single logical
array in the DasAcquisition XML (e.g. for a Raw array) contains 2 (or more) ExternalFileProxy elements.
Figure 25-7 shows the same worked example and shows how the Count and StartIndex elements are
used to define the partitioning of the array across the two files. The paths for both arrays are defined here.
To find the physical HDF5 file name, look in the rels file for the EpcExternalPartReference, as explained
above.
The DasExternalDatasetPart elements PartStartTime and PartEndTime show the start and end times of
the partial Raw DAS data array stored in each HDF5 file.


Figure 25-7. Shows how a single array (raw data in this case) is split across two physical files per the worked
example and the figures above. The files are identified by green bubble/arrow/label (1st file, bottom left) and
by a purple bubble/arrow/label (2nd file, bottom right).

The above description has focused on how to navigate from the XML, via the data in the EPC, to the
required arrays in the HDF5 files. The HDF5 files also contain identification data (Figure 25-8). The root
level of the .h5 file contains the UUID of the EpcExternalPartReference and the Acquisition group
contains the UUID of the DasAcquisition XML. Every HDF5 file that is part of the sequence of HDF5 files
has this same ID so that, if an HDF5 file gets separated (e.g., a disk is misplaced), it can be associated
with the right acquisition.


Figure 25-8. DAS worked example XML (displayed in text editor) and related HDF5 files (displayed with an
HDF5 utility). Showing where EpcExternalPartReference is used to reference the .h5 files. The .h5 files
contain the uuid ID which is the UUID of the DasAcquisition XML, allowing a reference back to the XML that
describes the whole acquisition.

Processed data, i.e., FBE band or spectra data, is stored and referenced the same way (Figure 25-9).

Figure 25-9. Spectra processed data referenced from the XML. In this case, each FBE band has a unique
array name in the HDF5 file, and these can be seen in the PathInExternalFile elements in the snippets.


25.4 Optical Path


A fiber installation consists of the description of the optical path and the instrument box. For the simple
worked example, consider the following optical path:

Figure 25-10. Example optical path.

The PRODML optical path object requires you to declare, at a minimum, these areas:
• Unique Identifier for the entire optical path
• Name
• Inventory section where all the different segments, connectors, turnarounds, splices and terminators
are declared
• Network section where you specify the connectivity of the components (referencing them in the
inventory section)
For the optical path drawn above, the worked example is described in the DTS Section. See Section 20.1.

25.5 OTDR Acquisition


Optionally, if one or more OTDR surveys have been performed, each can be represented by a top-level
OTDR Acquisition object. These OTDR Acquisition objects can be referenced from either:
• The Fiber Optical Path object (e.g., for an OTDR performed at the time of installation), or
• The DAS Acquisition object where each Calibration instance can reference an OTDR.
The worked example includes an OTDR Acquisition which is included in the EPC and is referenced from
the Fiber Optical Path object.


25.6 Instrument Box


Declaring a DAS instrument box is also very straightforward. Not many elements are required other than
the unique identifier, a name, and the type. All other attributes are optional.

25.7 Acquisition Data


This section describes the equipment and optical path used for the worked example acquisition and the
raw and processed measurement data.
The following sections explain the content for part1.h5, but the same general principles apply to all HDF5
files. The metadata is repeated in the XML file. The relationships and navigation from the XML to the
arrays in the .h5 files have been described above.

DAS Acquisition
The DAS Acquisition data object specifies the key instrument attributes for the full DAS acquisition
including InstrumentBox and VendorFormat, MaximumFrequency, MinimumFrequency, PulseRate,
PulseWidth, GaugeLength and the SpatialSampling interval along the fiber.
• The PulseRate is the number of pulses sent into the fiber per second.
• The GaugeLength is the distance (length along the fiber) between a pair of pulses used in a dual-
pulse or multi-pulse system.
• For each pulse, the backscatter is sampled along the fiber (by an AD converter that samples at a
much higher rate than the PulseRate). The OpticalPath points to a metadata object that describes the
(fiber) path along which the light pulses travel.
The StartLocusIndex is the first location (locus) along the fiber at which a recording is made. The distance
between the recorded loci is determined by the SpatialSamplingInterval. The number of loci sampled,
from and including the first locus, is stored in the NumberOfLoci element. Figure 25-11
shows the concept.
Figure 25-11 also shows the metadata attributes for the example file. Some of the main attributes are
explained here.
The DAS acquisition job name is indicated by the Title attribute. The Originator can also be indicated.
The location of a locus on the fiber can be estimated by multiplying the locus number by the
SpatialSamplingInterval. The index of the first locus is 0; hence locus 0 has a distance of 0 and indicates
the measurement taken over the first spatial sampling interval, which starts at the DAS instrument
connector/splice and has a length of SpatialSamplingInterval. Note that this is always an estimate
because the fiber refractive index is typically an average; it may vary slightly along the fiber, and vary
more if the optical path consists of fibers with different refractive index values spliced together. In
addition, splices or certain optical components may cause delays, which must be corrected for by the end
user in a distance (for wells, often called depth) calibration exercise.
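As a quick illustration of this estimate (the helper name is hypothetical, and the result is the naive pre-calibration value only, which a distance calibration may correct):

```python
def estimated_fiber_distance(locus_index, spatial_sampling_interval):
    """Rough optical-path distance of a locus: locus number times the
    SpatialSamplingInterval. An estimate only - refractive-index
    variation, splices, and optical components are not accounted for."""
    return locus_index * spatial_sampling_interval

# e.g. with a 5 m spatial sampling interval:
print(estimated_fiber_distance(4, 5.0))  # naive estimate of 20.0 m
```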


Figure 25-11. DAS acquisition attributes for part1.h5. In the left pane, note the file structure and naming
conventions as explained in Section 24.6.4 above.

25.7.1.1 Acquisition Timing


The DasAcquisition data object specifies the following time elements for the acquisition:
• MeasurementStartTime (string): Specifies the UTC time of the first raw DAS sample in the
acquisition.
• TimeZone (string): The time zone of the recording; this can be used to convert the UTC time into local
time and vice versa.
• TriggeredMeasurement (Boolean): Flag set to TRUE if the recording is triggered.
• TriggeredTime (string): Specifies the time at which the trigger was activated. This time can be used to
identify the first sample of the recording for the triggered event (for example, a microseismic event) in
the DAS raw data recorded during the acquisition.
Figure 25-12 shows how the DAS acquisition time elements link to the DAS raw data recording time
elements described in the DAS raw times object.


Figure 25-12. Link between DAS acquisition time, trigger time, and DAS raw time elements.

DAS Facility Calibration


The optional Facility Calibration group contains one or more mappings of loci index to optical path (fiber)
distance and facility length for a given facility. A facility is any physical object being measured, e.g., a
wellbore or a pipeline.
• If calibrations exist for multiple facilities in the DAS acquisition, there will be more than one Facility
Calibration class.
• If multiple calibrations have been measured for the same facility, there will be more than one
Calibrations class within the Facility Calibration.
With that, there is a one-to-many relationship from DAS Acquisition to Facility Calibration, and a one-to-
many relationship from a Facility Calibration to Calibration.
Each Calibration mapping concerns only one facility in the optical path and is represented by:
1. A set of DasCalibrationInputPoint elements in the XML only, for the input (calibration points) to the
calibration.
2. An array of Locus Depth Points, in the HDF5 only, which is a lookup table of locus vs. optical path
distance and facility length.
3. Datums and metadata.
4. Reference to any OTDR Acquisitions relating to this Calibration.
The Locus Depth Point array is analogous to the Time array in that it is the lookup for the 2- (or 3-)
dimensional array of measured values contained in Raw, FBE or Spectra arrays.
The locus depth points can be generated from the various inputs to the calibration by DAS viewing
software.

25.7.2.1 Das Calibration Input Point


A DasCalibrationInputPoint structure contains five elements: LocusIndex, OpticalPathDistance,
FacilityLength, InputPointKind and Remark (Figure 25-13). The OpticalPathDistance is the distance the
light has traveled in the fibers; the FacilityLength translates this into a length along the facility (e.g.,
wellbore, pipeline), which corrects for overstuffing, fiber defects, refractive index variations, etc. DAS
visualization software can use this table with other metadata to display data in a view familiar to the user
(for example, measured depth in a wellbore or location along a pipeline).
The InputPointKind can be used to differentiate between different calibration points. The user is free to
define their own calibration types, but two special types have been pre-defined, namely ‘tap test’ and
‘other calibration point’:
• A so-called ‘tap test’ is a test where an operator uses a (spark-free!) hammer to create a vibration at
a known location, such as the wellhead, to provide the interpreter with a known point for depth calibration.
• The ‘other calibration point’ is a general calibration point that is not a tap test.


Another important calibration point is the fiber distance from the last locus to the end of the fiber; this is a
useful calibration point for permanent downhole installations. The operator can use this measure to create
exactly the same interrogation interval along the fiber after a DAS instrument or surface cabling has been
changed out. This calibration point is optional and is defined in the Calibration group as
LastLocusToEndOfFiber.

Figure 25-13. Worked example Facility Calibration including some of the Calibration Input Points and the
External Part Reference to the Locus Depth Point array in the HDF5 file.

The following tables show how the Calibration Input Points can be used to provide a depth calibration for
the optical path example in Figure 25-10.
The first table refers to the calibration data for the surface cable. Locus indexes 0 to 4 represent
calibrated points along the surface cable.

LocusIndex OpticalPathDistance FacilityLength


0 5.000 5.000
1 10.000 10.000
2 15.000 15.000
3 20.000 20.000
4 23.50 23.50

The second table refers to calibration data for the downhole cable. There are 4 CalibrationInputPoint
objects for this facility, and these are the entries shown in the table. Locus indexes 5 to 100 are the
interrogated part of the downhole cable.


LocusIndex OpticalPathDistance FacilityLength


5 30.000 29.907
6 35.000 34.814
99 500.000 491.143
100 505.000 496.050

Figure 25-10 further provides the measured depth derived using these: a surface cable of 25.0 m with no
overstuffing on a perfectly flat surface, a downhole cable installed in a perfectly vertical well with the
wellhead connector 1.5 m above the surface (negative depth), and an overstuffed downhole cable of
483.25 m (1.9% overstuffing, ~9.182 m).

25.7.2.2 Locus Depth Point Array


These CalibrationInputPoint objects allow you to calculate the mapping from locus to optical path
distances and facility lengths, for one calibration of one facility. In some situations, optical path distance or
facility length information may be missing, and the full mapping cannot be calculated. When a full
mapping is available, it is stored as an external array in an HDF5 file. This array is referred to as
LocusDepthPoint (Figure 25-14).
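One simple way to expand sparse calibration input points into a per-locus lookup is linear interpolation. The sketch below is an illustration only (not the normative PRODML algorithm; the function name is hypothetical), using the downhole-cable input points from the tables above:

```python
# Expand sparse CalibrationInputPoint rows into a per-locus mapping by
# linear interpolation. Illustrative only; actual calibrations may use
# other methods. Rows: (LocusIndex, OpticalPathDistance, FacilityLength).
calibration_points = [
    (5, 30.000, 29.907),
    (6, 35.000, 34.814),
    (99, 500.000, 491.143),
    (100, 505.000, 496.050),
]

def locus_depth_points(points):
    """Return {locus: (optical_path_distance, facility_length)} for every
    locus between the first and last calibration input point."""
    mapping = {}
    for (i0, d0, f0), (i1, d1, f1) in zip(points, points[1:]):
        for locus in range(i0, i1 + 1):
            t = (locus - i0) / (i1 - i0)
            mapping[locus] = (d0 + t * (d1 - d0), f0 + t * (f1 - f0))
    return mapping

table = locus_depth_points(calibration_points)
print(round(table[7][0], 3))  # locus 7: roughly 40.0 m (35 + 465/93)
```

The interpolated table reproduces the input points exactly at their own locus indexes and fills in every locus in between, analogous to the full LocusDepthPoint mapping shown in Figure 25-14.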

Figure 25-14. DAS calibration data stored in calibrations.h5. This shows the full mapping LocusDepthPoint
for both the surface cable (top) and the downhole cable (bottom). Notice that the LocusDepthPoint specifies
mappings for all locus points, from a fewer set of CalibrationInputPoint provided for the downhole cable.


DAS Raw data


The Raw Group stores the raw sample measurements recorded by the DAS instrument. For each DAS
pulse sent out, a measurement is obtained for all the loci interrogated by the instrument. This means that
for each locus, a time series is typically recorded at the instrument PulseRate frequency set in the
Acquisition group. If the raw output data is written at a different rate, then OutputDataRate must be
present. If OutputDataRate has not been set, then it is assumed equal to the acquisition PulseRate.
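This defaulting rule can be captured in a one-line helper (a sketch with an assumed function name, not part of the standard):

```python
def effective_output_rate(pulse_rate, output_data_rate=None):
    """Rate at which raw traces are written: OutputDataRate if present,
    otherwise assumed equal to the acquisition PulseRate."""
    return output_data_rate if output_data_rate is not None else pulse_rate

print(effective_output_rate(10_000))         # no OutputDataRate -> 10000
print(effective_output_rate(10_000, 1_000))  # decimated output -> 1000
```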
The recorded data is stored in a 2D array named DasRawData. Figure 25-15 shows the stored values of
the first 38 samples (numbered 0 to 37) of the time series for the first 12 loci (numbered 0 to 11) for the
DAS raw data recorded in the example file.

Figure 25-15. DAS raw data array values: 12 loci (horizontal 0-11), 38 samples (vertical 0-37) in h5 file
part1.h5.

25.7.3.1 Das Raw Times


A DAS Raw object has the following attributes specifying the recorded sample times in the example file
(Figure 25-16):
• StartTime (string): Specifies the UTC start time of the samples recorded in this h5 file's RawData 2D
array (2D because a series of samples is recorded for each locus).
• RawDataTime (array object): A 1D array that contains the times in microseconds for each sample in
the current recording. The array object has two additional attributes, StartTime and EndTime, which
contain the start and end times of the full acquisition. For each partial h5 file, the start and end times
are stored in PartStartTime and PartEndTime.
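Assuming the RawDataTime values are microsecond offsets relative to StartTime and are spaced at 1/OutputDataRate, the absolute sample timestamps can be reconstructed as follows (a sketch; the helper name and the ISO-8601 parsing of StartTime are assumptions, not mandated by the standard):

```python
from datetime import datetime, timedelta, timezone

def raw_data_times(start_time_iso, output_data_rate_hz, number_of_samples):
    """Build per-sample timestamps for one raw recording, assuming RawDataTime
    holds microsecond offsets spaced 1/OutputDataRate apart."""
    start = datetime.fromisoformat(start_time_iso).astimezone(timezone.utc)
    step_us = 1_000_000 // output_data_rate_hz  # integer microseconds per sample
    offsets_us = [i * step_us for i in range(number_of_samples)]
    stamps = [start + timedelta(microseconds=us) for us in offsets_us]
    return offsets_us, stamps

offsets, stamps = raw_data_times("2014-08-19T07:57:11+00:00", 10_000, 4)
```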


Figure 25-16. DAS raw trace times array and attributes in h5 file part1.h5.

DAS FBE data


The Processed DAS FBE data object stores so-called frequency band extracted (FBE) data values
calculated from raw DAS data recordings. The FBE data values are typically calculated as the energy in a
specific frequency band within a filter window applied over the raw DAS data recording for each locus.
This calculation produces a 'trace' (1D array) that contains an energy value for each locus, each time the
filter window is applied.
Figure 25-17 shows the attributes for the DAS FBE data object in the example file. Key attributes stored
are the FilterType applied while creating the frequency bands, and the StartIndex and NumberOfLoci
recorded. Note that the FBE trace loci can be a subset of the loci recorded for the DAS raw data from
which the FBE data was generated. The OutputDataRate determines the number of FBE traces output
per second in the DAS FBE object and is equal to or smaller than the DAS instrument's PulseRate used
to collect the DAS raw data. For each frequency band, the calculated energy values are stored in an
FbeData array, represented as a 2D array as shown in Figure 25-18. Each FbeData array has
StartFrequency and EndFrequency attributes indicating the FBE frequency band for which the data was
calculated and stored. Figure 25-18 shows part of the first frequency band, stored in the FbeData[0] array
in the h5 sample file part1.h5. The frequency band covers 0.5-1.0 Hz and contains energy data for 30 loci
(NumberOfLoci), starting at index 10 (StartLocus) and covering index numbers 10 to 39. There are 15
FBE traces (indexed 0 to 14).
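The windowed energy extraction described above can be sketched as follows. This is a simplified illustration: the band-pass filtering step is omitted (vendor filtering is proprietary), RMS is used as the energy measure, and the function name is invented for this example.

```python
import math

def fbe_trace(samples, window_size, window_overlap):
    """Compute one RMS 'energy' value per filter window over a single locus's
    (already band-pass filtered) time series; consecutive windows may overlap."""
    step = window_size - window_overlap
    trace = []
    for start in range(0, len(samples) - window_size + 1, step):
        window = samples[start:start + window_size]
        trace.append(math.sqrt(sum(s * s for s in window) / window_size))
    return trace

# 8 samples, window of 4 with 2-sample overlap -> 3 FBE values for this locus
energy = fbe_trace([0.0, 1.0, 0.0, -1.0, 0.0, 1.0, 0.0, -1.0], 4, 2)
```

Repeating this per locus and per frequency band yields the 2D FbeData arrays shown in Figure 25-18.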


Figure 25-17. DAS FBE attributes h5 file part1.h5.

Figure 25-18. DAS FBE attributes and array values and for frequency band FbeData[0] in h5 file part1.h5.


25.7.4.1 DAS FBE ‘trace’ Times


An individual FBE trace's time is specified as the time of the first sample in the filter window. The number
of FBE traces output per second is determined by the OutputDataRate attribute (equal to or smaller than
the DAS instrument's 'raw' PulseRate).
A DasFbe object stores the time values for each FBE 'trace' in an FbeDataTime array object. This is a 1D
array that contains the times in microseconds for each FBE 'trace' in the current recording. Note that the
times in the 1D example array are spaced 1/OutputDataRate seconds apart. The FbeDataTime array
object has two additional attributes describing the StartTime and EndTime of the series of FBE traces
recorded: StartTime corresponds to the first stored FBE trace time and EndTime corresponds to the last
stored FBE trace time in this DAS FBE object. Figure 25-19 shows how the DAS FBE 'trace' times link
together. Figure 25-20 shows the FbeDataTime array attributes and the 15 FbeDataTime values (one for
each FBE trace) in microseconds in h5 file part1.h5.

Figure 25-19. DAS FBE ‘trace’ times.


Figure 25-20. DAS FBE times array and attributes in h5 sample file part1.h5.

DAS Spectra data


The DAS Spectrum object stores one spectrum or multiple spectra calculated from the raw DAS data
recordings. The spectra are typically calculated as N-point fast Fourier transforms (FFTs) along a window
of FilterWindowSize raw DAS samples. The FilterWindowSize equals the FFT size N and is the
number of output points from the discrete Fourier transform (DFT) calculation.
For each locus in the array of loci specified by StartLocus and NumberOfLoci, the FFT calculation
produces FilterWindowSize data points. FFT calculations can be repeated by shifting the filter window
over the raw DAS samples. The windows can overlap, and the number of samples that overlap is
specified by the FilterWindowOverlap attribute. This means that the spectrum values are stored in a 3D
array [time:locus:FFT-value]. Figure 25-21 shows an example of 4 overlapping windows in the spectra
object in the example file.
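The size of the time axis of the 3D spectra array follows from FilterWindowSize and FilterWindowOverlap. The sketch below computes the window start indices; the values 32 and 16 are assumptions chosen to reproduce the four overlapping windows of the example file, and the function name is illustrative.

```python
def spectra_window_starts(number_of_samples, filter_window_size, filter_window_overlap):
    """Start sample index of each FFT window: consecutive windows are shifted
    by (size - overlap) samples, and any partial trailing window is dropped."""
    shift = filter_window_size - filter_window_overlap
    return list(range(0, number_of_samples - filter_window_size + 1, shift))

# e.g. 80 raw samples, 32-point FFT windows overlapping by 16 samples
starts = spectra_window_starts(80, 32, 16)  # four windows, shifted 16 samples apart
```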


Figure 25-21. Spectra[0] attributes in H5 sample file part1.h5.

Figure 25-22 shows one of the FFT sub-arrays (FFT index 4 out of 32) for the 4 overlapping windows
(index 0 to 3 on the vertical axis) calculated for the 5 loci (index 0 to 4 on the horizontal axis).


Figure 25-22. DAS FFT spectra data sub-array for one locus (index 3) for the overlapping windows (index 0 to
31 on the vertical axis) calculated at different times (index 0 to 3 on the horizontal axis) for the spectra data
array in h5 sample file part1.h5.

25.7.5.1 DAS Spectra ‘trace’ times


A DAS Spectrum object stores the start times of each FFT calculation window applied in the
SpectraDataTime array object. This is a 1D array that contains the times in microseconds for each FFT
window stored in the DAS Spectrum object. Note that the times in the 1D example array are spaced
1/OutputDataRate seconds apart. The SpectraDataTime array object has two additional attributes
describing the times of the windows recorded: StartTime corresponds to the first stored FFT window time
and EndTime corresponds to the last stored FFT window time in this DAS Spectrum object.
Figure 25-22 shows an example where spectra data is generated at an OutputDataRate of 3.125 Hz
(note that this is 1/16th of the DAS instrument's 'raw' PulseRate of 50 Hz, and corresponds to the number
of raw samples the calculation filter window is shifted when calculating the FFT for each locus). (The
microsecond start times corresponding to the samples are printed in a small font; the corresponding
recording times (seconds only) are printed in bold black on the time axis.) The sample times in
microseconds are stored in the SpectraDataTime array; they are relative to the StartTime of the raw
recording. Figure 25-23 shows the SpectraDataTime array corresponding to the spectrum windows in
Figure 25-22 in the example file.
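The SpectraDataTime values follow from the window shift: at a 50 Hz PulseRate with a 16-sample shift (OutputDataRate 3.125 Hz), consecutive windows start 1/3.125 s = 320 000 microseconds apart. A sketch of this arithmetic (helper name is illustrative):

```python
def spectra_data_times(pulse_rate_hz, window_shift_samples, number_of_windows):
    """Window start times in microseconds, relative to the raw recording's StartTime."""
    step_us = round(window_shift_samples * 1_000_000 / pulse_rate_hz)
    return [i * step_us for i in range(number_of_windows)]

# the example file's four windows at 50 Hz PulseRate, shifted 16 raw samples each
times_us = spectra_data_times(50, 16, 4)
```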


Figure 25-23. Spectra data time array and attributes in H5 sample file part1.h5.


26 Code Examples: Use Cases


For help in implementing the DAS PRODML standard, the download includes the example described in
detail in Chapter 24, plus five additional examples of typical use cases for acquisition and processed data
exchange between service providers and operators.
The six use cases presented are:
• Use Case 1: Field Data Acquisition: raw, single transport medium. The EPC package contains
acquisition, optical path, OTDR Acquisition and instrument box XML files and two external part
reference files: one points to a single HDF5 file containing all the raw DAS data for the acquisition,
the other one points to a single HDF5 file containing the calibration data.
• Use Case 2: Field Data Acquisition and Processed Data; raw and frequency band; single
transport medium. The EPC package contains acquisition, optical path, OTDR Acquisition and
instrument box XML files and two external part reference files: one points to a single HDF5 file
containing all the raw and processed frequency band arrays, the other one points to a single HDF5
file containing the calibration data.
• Use Case 3: Field Data Acquisition and Processed Data: raw, frequency band and spectra
data; single transport medium. The EPC package contains acquisition, optical path, OTDR
Acquisition and instrument box XML files and two external part reference files: one points to a single
HDF5 file containing all the raw and processed frequency band and spectra arrays, the other one
points to a single HDF5 file containing the calibration data.
• Use Case 4: Field Data Acquisition and Processed Data; raw, frequency band and spectra;
multiple transport media. The EPC package contains acquisition, optical path, OTDR Acquisition
and instrument box XML files and three external part reference files: one points to a first HDF5 file
containing all the raw data acquired, one points to a second HDF5 file containing all the processed
frequency band and spectra arrays, and the last one points to a third HDF5 file containing all the
calibration data.
• Use Case 5: Field Data Acquisition and Processed Data; raw, frequency band and spectra;
multiple transport media. The EPC package contains acquisition, optical path, OTDR Acquisition
and instrument box XML files and four external part reference files: two point to HDF5 files containing only
raw data, one points to an HDF5 file that contains both raw data and the processed frequency band
and spectra arrays, the last one points to an HDF5 file containing all the calibration data.
• Use Case 6: Field Data Acquisition and Processed Data; raw, frequency band and spectra;
multiple transport media. The EPC package contains acquisition, optical path, OTDR Acquisition
and instrument box XML files and seven external part reference files: two external part references point to
HDF5 files containing only raw data, two point to HDF5 files containing only processed frequency
band arrays, two point to HDF5 files containing only processed spectra arrays, and the last one points
to an HDF5 file containing all the calibration data.


27 Appendix for DAS


27.1 DAS Terminology
This appendix contains a detailed list and definitions of DAS terminology.
Each entry below gives the term, its definition, and, where applicable, its unit and an example value.

Decimated Output Data Rate: An integer fraction of the 'Output Data Rate'. Unit: Hz (1/s). Example: 1 kHz.
Decimation Factor: The integer number by which the 'Output Data Rate' is decimated. Unit: Integer. Example: 5.
Fiber Distance: The 'Optical Path' distance from the connector of the measurement instrument to the desired acoustic sample point along the fiber that is the furthest from the measurement instrument for that particular test. Unit: m*. Example: 1234 m.
Start Locus Index: The first 'Locus' acquired by the interrogator unit, where 'Locus Index 0' is the acoustic sample point at the connector of the measurement instrument. Unit: Integer. Example: 20.
Frequency Response (of the Interrogator Unit): The Nyquist frequency (or some fraction thereof) of the 'Interrogation Rate' or 'Pulse Rate'. Unit: Hz (1/s). Example: 20 kHz.
Gauge Length: The distance (length along the fiber) between a pair of pulses used in a dual-pulse or multi-pulse system. This is a distance which the DAS Interrogator Unit manufacturer designs/implements by hardware/software to effect the Interrogator Unit spatial resolution. Unit: m*. Example: 40 m.
Locus* (plural: Loci) or 'Spatial Sample': A particular location indicating a spatial sample point along the sensing fiber at which a 'Time Series' of acoustic measurements is made.
Locus Distance: The 'Fiber Distance' to the center of the 'Locus'. Unit: m*. Example: 1250 m.
Locus Index: The index number of a 'Locus' represents a spatial sample, where the numbering of the spatial samples starts from the output of the Interrogator Unit (IU) with '0'. Successive loci are numbered as positive integers which increase along the length of the sensor fiber. The separation of consecutive loci along the sensor fiber is determined by the spatial sample interval. The dots in Figure 2-3 indicate the center of the particular spatial sample. Unit: Integer. Example: 1.
Measurement Start Time: The time at the beginning of a data 'Sample' in a 'Time Series'. This is typically a GPS-locked time measurement. Unit: Clock Time. Example: 2014-08-19 07:57:11.453721 +- X.
Number of Loci: The total number of 'Loci' (acoustic sample points) acquired by the measurement instrument in a single 'scan' of the fiber. The number of 'scans' per second is determined by the 'Interrogation Rate' or 'Pulse Rate'. Unit: Integer. Example: 100.
Number of Samples: The number of 'Samples' in a 'Time Series'. Unit: Integer. Example: 12345.
Optical Path: A series of fibers, connectors, etc. together forming the path for the light pulse emitted from the measurement instrument.
Output Data Rate: The rate at which the measurement system provides output data for all 'Loci' (Spatial Samples). This is typically equal to the Interrogation Rate/Pulse Rate or an integer fraction thereof. Unit: Hz (1/s). Example: 5.0 kHz.
Pulse: A single burst of laser light into the fiber.
Interrogation Rate / Pulse Rate: The rate at which the Interrogator Unit interrogates the fiber sensor. For most interrogators this is informally known as the pulse rate. Unit: Hz (1/s). Example: 2.5 kHz.
Pulse Width: The width of the 'Pulse' sent down the fiber. Unit: s. Example: 10 ns.
Trace / Scan: Array of sensor values for all loci interrogated along the fiber for a single 'pulse' (columns in Figure 2-3).
Sample Number: The index of a particular 'Sample' within its parent 'Time Series'. Unit: Integer. Example: 3.
Sample Time: Time at which a particular 'Sample' was acquired. Unit: s.
Spatial Resolution: The ability of the measurement system to discriminate signals that are spatially separated. It should not be confused with Spatial Sampling Interval. Unit: m*. Example: 5.0 m.
Spatial Sampling Interval*: The separation between two consecutive 'Spatial Sample' points on the fiber at which the signal is measured. It should not be confused with 'Spatial Resolution'. Unit: m*. Example: 1.0 m.
Time Offset: The time offset relative to 'Measurement Start Time'. Unit: μs. Example: 1234567.
Time Series: A data set for a particular 'Locus' which is sampled at the Interrogation Rate/Pulse Rate (rows in Figure 2-3).
Time Series Sampling Rate: The rate at which acoustic samples are acquired for a particular 'Locus'. This is equal to the 'Output Data Rate' or an integer fraction thereof ('Decimated Output Data Rate'). Unit: Hz (1/s). Example: 5 kHz.
Total Fiber Length: The maximum 'Fiber Distance' from the connector of the measurement instrument to the final end of the 'Optical Path'. This end is either a purposefully cut or terminated end of the fiber. Unit: m*. Example: 5120 m.
Trigger Accuracy: (to be defined)
Triggered Measurement: Measurement for an acquisition which requires synchronization between a transmitting source (Tx) and a recording (Rx) measurement system. This needs to be recorded for every measurement regardless of what application it will serve. Unit: Boolean. Example: True, False.
Triggered Time: The time of the trigger in a 'Triggered Measurement'. Unit: Clock Time. Example: 2014-08-19 07:57:11.453721 +- X.
DAS Acquisition: Collection of DAS data acquired during the DAS survey.
Acquisition ID: A universally unique identifier (UUID) for the acquisition.
Facility: Generic term for equipment being measured, e.g., a wellbore, a pipeline, etc.
Facility Length: A distance along a physical Facility from the connector of the Interrogator Unit to the physical end of the Facility. Unit: m. Example: 10 m.
Optical Path: A series of fibers, connectors, etc. together forming the path for the light pulse emitted from the Interrogator Unit.
Optical Path Distance: Fiber distance along the Optical Path measured from the connector of the Interrogator Unit. Unit: m*.
Instrument Box / Interrogator Unit: The measurement instrument. Often referred to as Interrogator Unit or IU.
Instrument Location: Latitude, Longitude, Mean Sea Level. If a GPS antenna is not connected, leave as N/A - N/A - N/A. Unit: (Latitude, Longitude, MSL) pairs or WGS84. Example: (35.466088, -84.415512, -85.9).
Time Zone: UTC time zone. Unit: String.
DAS Calibration: A mapping of loci to fiber distance to a physical location and/or depth along an Optical Path.
Offset End Fiber Distance: The Fiber Distance to the end of the fiber. Unit: m*. Example: 5678 m.
Offset End Fiber Locus Index: The index of the last Locus interrogated by the measurement instrument. The position of this Locus is determined by the Offset End Fiber Distance.
Vendor Code: A string describing the data acquisition provided by the vendor to the client. This is backed by a document that the vendor issues to its clients describing the data collected. For example, data mode 1 for Vendor A might mean raw ADC single pulse, and mode 2 might be raw ADC single pulse with a high-pass filter. This must be vendor specific as it is commercially sensitive. Unit: String. Example: "OptaSense 010 DAS raw decimated".
Tap Test: A test to identify a certain physical location on the fiber in the field. This is often done by 'tapping' the wellhead, hence the name Tap Test.
Tap Test Locus Index: The locus index of the Tap Test.
Tap Test Fiber Distance: The Fiber Distance of the Tap Test.
DAS Raw Data: DAS data exchange format that describes 'raw' (unprocessed) DAS data provided by the vendor to a customer. Unit: System dependent.
DAS FBE Data: DAS data exchange format for Frequency Band Extracted (FBE) DAS data provided by the vendor to a customer. This DAS data type describes a dataset that contains one or more frequency-band filtered time series of a 'raw' DAS dataset. This could be, for example, the RMS of a band-pass filtered time series or the level of a frequency spectrum. Unit: System dependent.
DAS Spectrum Data: DAS data exchange format for frequency spectrum information extracted from a (sub)set of 'raw' DAS data. Unit: System dependent.
DAS Job: A set of one or more DAS acquisitions acquired in a defined timeframe using a common Optical Path and DAS Instrument Box.
Start Frequency: Start of an individual frequency band in a DAS FBE data set. This corresponds to the frequency of the 3 dB point of the filter. Unit: Hz. Example: 1.00 Hz.
End Frequency: End of an individual frequency band in a DAS FBE data set. This corresponds to the frequency of the 3 dB point of the filter. Unit: Hz. Example: 9.99 Hz.
Minimum Frequency: The minimum signal frequency a measurement instrument can provide as specified by the vendor. Unit: Hz. Example: 0.1 Hz.
Maximum Frequency: The maximum signal frequency a measurement instrument can provide as specified by the vendor. This is the Nyquist frequency (or some fraction thereof) of the Time Series. Unit: Hz. Example: 10.0 kHz.
Filter: Describes the mathematical operation applied during pre-processing to remove unwanted components of the signal. Filters can be described in the time domain as a time series or in the frequency domain as a response per frequency 'sub'-band.
Filter Vendor Technique: A string describing the filter applied by the vendor to data provided to the client. This is backed by a document that the vendor issues to its clients describing the data collected. Filters applied are part of a vendor's proprietary processing techniques. Unit: String. Example: "Vendor X Bandpass".
Filter Type: A string describing the type of filter applied by the vendor. Important frequency filter classes are frequency response filters (low-pass, high-pass, band-pass, notch filters) and Butterworth, Chebyshev and Bessel filters. The filter type and characteristics applied to the acquired or processed data are important information for end-user applications. Unit: String. Example: "Vendor X Bandpass".
Filter Window Size: The number of samples in the filter window applied. Unit: Integer. Example: 128.
Filter Window Overlap: The number of samples of overlap between consecutive filter windows applied. Unit: Integer. Example: 64.
Depth: Depths are commonly specified either as measured depth (MD) along the borehole or true vertical depth (TVD), which is the measured depth relative to a depth reference. Common datums used are Mean Sea Level (MSL) and Kelly Bushing (KB). Unit: m. Example: 1234.0 m (KB) or 1228.1 m (TVD).
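Several of the terms above are related by simple arithmetic: a locus's fiber distance is its index times the Spatial Sampling Interval, measured from the instrument connector (Locus Index 0). A sketch of this relationship (the function and its optional start offset are illustrative, not part of the standard):

```python
def locus_distance_m(locus_index, spatial_sampling_interval_m, start_offset_m=0.0):
    """'Fiber Distance' to the center of a locus, per the terminology above.
    start_offset_m allows for a Start Locus Index that is not at the connector."""
    return start_offset_m + locus_index * spatial_sampling_interval_m

# with a 1.0 m Spatial Sampling Interval, locus 1250 sits 1250 m along the fiber
d = locus_distance_m(1250, 1.0)
```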

27.2 Use Cases


These are high-level business process use cases. The DAS data object supports transfer of the data that
is produced from these use cases, but not the entire workflow of the use case.
The DAS business process use case involves the following steps:
• Define DAS acquisition requirements with customer
• Represent fiber optic installation
• Configure DAS equipment for acquisition
• Perform DAS data acquisition
• Post-process DAS data

Actors
All DAS use cases share a set of potential actors, which are defined below and used in the use cases
described in this section.

DAS Field Engineer: Engineer configuring and operating the DAS equipment in the field.
DAS Project Engineer: Expert in DAS and its applications. Often the person who recommends acquisition settings, QCs the acquisition, and does the final data processing and transcribing of data to the required delivery medium.
DAS Instrumentation: The DAS acquisition system (interrogator unit).
Fiber Vendor: The vendor that supplied (and installed) the fiber.
DAS Vendor: The vendor that supplied the DAS instrumentation.
Customer Field Engineer: The engineer in the field representing the customer.
Customer Project Engineer: Usually the engineer responsible for the requirements and successful completion of the operation. Usually the recipient of the acquired measurements.
Third-Party System: IT system receiving DAS data.

Use Case: Define Acquisition Requirements with Customer


Requirements for a job that a Customer specifies to an acquisition company.

Use Case Name: DAS Project Engineer and Customer agree on the acquisition requirements of the DAS service
Version: 1.1
Goal: Agreed definition of acquisition requirements and final deliverables, including reports, data, and media formats.

Summary Description: Define the requirements and scope of the service that the DAS vendor shall provide. Agree on: ping rates, spatial resolution, spatial sampling, type of DAS data to provide (raw or reduced, e.g. frequency bands), when to acquire data, and the format and media of deliverables.
Actors: DAS Project Engineer, Customer Project Engineer
Triggers: Request for service
Pre-conditions: Suitable fiber optic downhole or surface installation compatible with the DAS interrogator unit to be provided.
Primary or Typical Scenario: Customer Project Engineer and DAS Project Engineer agree on the requirements of the acquisition.
Alternative Scenarios:
Post-conditions: Acquisition configuration agreed. The requirements are reflected in the metadata that is transferred with the DAS data. Currently, no requirement exists in the standard to transfer only this data.
Business Rules:
Notes:
Definitions: See Section 27.1.

Use Case: Represent Fiber Optic Installation



Use Case Name: Customer provides fiber optic installation layout to DAS Project Engineer and agrees the optical path configuration for the DAS acquisition
Version: 1.1
Goal: Agree the optical path configuration for the DAS acquisition.
Summary Description: In most services the DAS Vendor provides the DAS acquisition instrument and the Customer is responsible for providing the fiber optic cable installation in the field from which the data will be acquired. To properly operate the DAS instrument, the DAS Vendor needs to be informed about the details of the field fiber optic installation and cable properties (path, fiber length, optical connectors, fiber type single- or multi-mode, etc.).
Actors: DAS Project Engineer, Customer Project Engineer
Triggers:
Pre-conditions: Suitable fiber optic downhole or surface installation compatible with the DAS interrogator unit to be provided.
Primary or Typical Scenario: Customer Project Engineer provides DAS Project Engineer with the configuration of the optical installation to which the DAS acquisition instrument will be connected.
Alternative Scenarios: For some DAS acquisitions the DAS vendor will provide the fibers in the field, for example by providing a flexible rod with integrated fibers that can be spooled into a well.
Post-conditions: Fiber optic path understood.
Business Rules:

Notes:
Definitions: See Section 27.1.

Use Case: Configure DAS Equipment for Acquisition


Use Case Name: Configure DAS Equipment for Acquisition
Version(s): 1.1
Goal: DAS equipment configured, QCed, and ready for acquisition to begin.
Summary Description: DAS Field Engineer tests fiber integrity and notes the positions of any losses and reflections. DAS Field Engineer configures the DAS equipment with the required acquisition settings. DAS Field Engineer performs the initial depth calibration (facility mapping). DAS Project Engineer performs QC. DAS Project Engineer delivers the QC report to the Customer Field and Project Engineers. Ensure the Instrument Box is configured to timestamp the data as per the customer's requirements, e.g. GPS time synced, or synced to other equipment in the operation.
Actors: DAS Field Engineer, DAS expert, Customer Project Engineer, Customer Field Engineer, DAS Instrumentation
Triggers: Request from client to deploy to site and prepare for acquisition.
Pre-conditions: Fiber deployed/installed. Fiber specification known (refractive index, type, optical path geometry, length, end depth). OTDR completed, results delivered with OTDR acquisition settings. Acquisition requirements agreed with operator. Service company deployed to site. Calibrations performed.
Primary or Typical Scenario: Configure acquisition for a vertical seismic profiling service.
Alternative Scenarios: Configure acquisition for alternative DAS borehole or surface applications. Typical examples of borehole applications are hydraulic fracture monitoring, flow profiling, and well and casing integrity monitoring. Typical examples of surface applications are pipeline and equipment vibration monitoring.
Post-conditions: DAS Instrument Box connected, configured, and ready for acquisition. QC setup report delivered to DAS Project Engineer, Customer Project and Field Engineers. These requirements are covered by the DAS instrument box and optical path data objects that are part of this standard.
Business Rules:
Notes:
Definitions: See Section 27.1.


Use Case: Perform Data Acquisition


Use Case Name: Perform Data Acquisition and Provide Configuration and Measured Data
Version: 1.1
Goal: Acquire the required DAS measurements. Refine the depth calibration (facility mapping). Provide an operation log for time event association.
Summary Description: In its simplest form, the DAS Field Engineer presses start record and stop record. DAS Project Engineer may perform timely QC. The Customer Project Engineer may request previews of the acquisition data.
Actors: DAS Field Engineer, DAS Project Engineer, DAS Instrumentation, Customer Field Engineer, Customer Project Engineer
Triggers: Client requests acquisition to commence.
Pre-conditions: Configuration of DAS Instrumentation complete. Feedback from customer on the setup QC report received.
Primary or Typical Scenario: Perform acquisition for a vertical seismic profiling service.
Alternative Scenarios: Perform acquisition for other borehole or surface monitoring services.
Post-conditions: DAS data has been acquired to 'field media'. This data can be transferred using the standard.
Business Rules:
Notes: Field media may be a different medium from the one on which the final acquired data is delivered.
Definitions: See Section 27.1.

Use Case: Post-Process DAS Data


Use Case Name: Post Process and Deliver DAS Data
Version: 1.1
Goal: Optionally post-process the acquired data after acquisition and deliver it to the customer or the customer's Third-Party System.
Summary Description: Post-processing may include: refining the depth calibration (facility mapping); re-decimation or extraction of different frequency bands; updating or adding metadata.
Actors: DAS Project Engineer
Triggers: Acquisition complete
Pre-conditions: Acquisition complete and stored in the PRODML DAS format.
Primary or Typical Scenario: Refining the depth calibration (facility mapping) based on further analysis. Re-decimation or extraction of different frequency bands.

Updating or adding metadata. Converting acquired data to other industry standard formats such as SEG-Y.
Alternative Scenarios:
Post-conditions: Post-processed DAS data available for transfer using the desired delivery media in the PRODML DAS format. Derived formats such as SEG-Y for vertical seismic profiles may also be produced. Details of post-processing activities should be documented accordingly in the metadata of the delivered data.
Business Rules:
Notes: For specific DAS acquisition services (e.g., vertical seismic profiles or pipeline monitoring) the client may only be interested in the DAS post-processed derived product (e.g., SEG-Y for VSP or a list of events for pipeline monitoring) and may agree with the DAS service provider to store and retain the field or reduced processed data for an agreed period.
Definitions: See Section 27.1.

27.3 Release Notes for v2.1


These release notes describe the main changes to DAS for v2.1. As part of the adoption/implementation,
these improvements were made to the model.

DasAcquisition Fields – Changes to Optionality


For more detail on these fields, see the documentation in the UML model or the PRODML Technical
Reference Guide (which is produced from the UML model).

Note that in the list of attributes, Vendor Code was renamed to Service Company Name. An optional (in
XML only) element called Service Company Details was added, with type Business Associate. This will
not appear in the HDF5 because of the rules governing which attribute types are included, but it can be
used to supplement Service Company Name.

27.3.1.1 Mandatory fields


• AcquisitionId
• OpticalPath
• DasInstrumentBox
• FacilityId
• ServiceCompanyName
• NumberOfLoci
• StartLocusIndex
• MeasurementStartTime
• MinimumFrequency
• MaximumFrequency
• TriggeredMeasurement
• SpatialSampleInterval

27.3.1.2 Optional fields


• AcquisitionDescription
• PulseRate


• PulseWidth
• GaugeLength

27.3.1.3 Obsoleted
• GaugeLengthUnit
• SpatialSamplingIntervalUnit
• PulseWidthUnit

Array Dimension Metadata


Change from:
Dimensions[1] = "time"
Dimensions[2] = "locus"
to:
Dimensions = ["time", "locus"]

Changes to DAS Calibration

In v2.x, the calibration model used the following hierarchy of elements:


DAS Acquisition
Calibration
Calibration Data Points.

This has changed to:


DAS Acquisition
Facility Calibration [one per facility for optical paths, which span multiple facilities]
Calibration [one per calibration version on this facility]
Calibration Input Point [the input test points, e.g., “tap test” and location]
Locus Depth Point [array of locus indices vs location – optional]
The attributes of these elements have changed to match their roles – see the UML model.

Fixes to EPC
Various fixes to the model around details in the way the Energistics Packaging Conventions (EPC) work
for DAS have been made.

Changes to Worked Examples


Worked examples were updated to reflect the above design changes.

Documentation
The DAS content has been updated to clarify some of the concepts and implementation rules, and to
address the changes listed above. For a more detailed list of documentation changes, see the
Amendment History (at the front of this document).


Part VII: Other PRODML Data Objects


PRODML has several data objects that for historical reasons have neither been fully documented nor
widely adopted. The combination of new PRODML development and resource constraints has left time
to provide only an introduction and overview of these data objects. Where available, links to other related
resources are provided.
The data objects covered in this section are:
• Product Volume (Chapter 28)
• Product Flow Model (Chapter 29)
• Time Series and Time Series Statistic (Chapter 30)
• Production Operation (Chapter 31)


28 Product Volume
Acknowledgement
Special thanks to the following companies and the individual contributors from those companies for their
work in designing, developing and delivering the Product Volume and Flow Network data objects: the
initial work was donated by ExxonMobil; this was then adapted for the NCS by, amongst others, EPIM,
Statoil, and TietoEnator. Saudi Aramco, Peformix (now Baker Hughes), and Atman Consulting then made
various modifications and generated useful documentation.

28.1 Usage of Product Volume vs. Simple Product Volume Reporting


The Product Volume data object is used as the standard for reporting production on the Norwegian
Continental Shelf (NCS). It has been used elsewhere for volume reporting, and also for production
surveillance.
However, it has often been found to be overly complex for the task of volume reporting, which is a core
requirement for PRODML in order to support production management. Product Volume is very flexible
and can be used for many purposes. The drawback to having this flexibility is that it is somewhat complex
to understand, and it sometimes can be found to offer multiple ways of achieving the same result. In the
case of the NCS, by dint of a central regulatory authority defining exactly how Product Volume is to be
used by reporting companies, the data standard has been successfully deployed and rolled out across a
major oil province.
Because of the drawbacks mentioned above, and the desire to have a simple standard for volume
reporting, the Simple Product Volume Reporting (SPVR) capability has been added to PRODML. It is
believed that parties wishing to exchange volume data without needing the additional data and flexibility
allowed by Product Volume will choose to standardize on SPVR. See Chapters 3 to 7 for full details on
SPVR.
The main additional functionality provided by Product Volume (PV) compared to SPVR is:
• PV works with the Product Flow Network (see Chapter 29) so that flows can be reported at precisely
defined points within a network. Time Series (see Chapter 30) can also reference the Product Flow
Network making detailed production surveillance data exchange possible. SPVR uses a list of
Reporting Entities (e.g. well, platform) and then reports volumes against these. It is possible to
arrange reporting entities in hierarchies, but there is no sense of a network. SPVR does not support
time series data.
• PV has the concept of flows of fluids which can have function (e.g., production, gas lift) and which can
be broken down into products which describe the composition of the fluid. SPVR only deals with
products.
• PV also supports the transfer of Facility Parameters, which are non-flow data for any facility. SPVR
again supports only volume-related flows, although it does have a Well Production Parameters data
object for exchanging well parameters (the most common requirement).

28.2 Overview of Product Volume


The Product Volume data object can be used to report production flows or other parameters. For
instance, it can be used to report the daily allocated volume of oil production for a well or group of wells. It
could also be used to report other characteristics (pressure, temperature, flow rate, concentrations, etc.)
associated with a specific wellhead. It uses a general hierarchy of:
Product Volume
Facility (wellhead, separator, flow line, choke, completion ...)
Parameter Set (block valve status, reciprocating speed, available room ...)
Parameter
Flow (production, injection, export, import, gas lift ...)
Product (oil, water, gas, CO2, oil-gas, cuttings ...)
Period (instant, day, month, year …)


Property (temperature, pressure, flow rate ...)
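As a mental model only, the hierarchy above can be pictured as nested data. All names and values in this sketch are invented for illustration and do not reflect the XML schema:

```python
# Schematic only: a Product Volume report as nested dicts, following the
# Facility > Flow > Product > Period hierarchy above. Facility names,
# values, and units are invented.
report = {
    "facility": {
        "name": "W01", "kind": "wellhead",
        "flows": [{
            "kind": "production",
            "qualifier": "measured",
            "products": [{
                "kind": "oil",
                "periods": [{
                    "kind": "day",
                    "properties": {"pressure": (250.0, "bar"),
                                   "flow rate": (1200.0, "m3/d")},
                }],
            }],
        }],
    },
}
oil = report["facility"]["flows"][0]["products"][0]
rate, uom = oil["periods"][0]["properties"]["flow rate"]
```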
Parameter Sets allow time varying "usage" parameters to be defined for the facility. For example, a valve
status may be toggled from "open" to "closed" to indicate that a well is offline.
Flows allow reporting for a flow of a product. For example, it may be used to specify the rate of oil
produced for a specified well.
The relevant enumerations found in the enumValuesProdml.xml file are as follows:
• Reporting Periods (e.g. day, month, year, etc.) are given in the ReportingPeriod enumeration.
• Facility Kinds (e.g. wellhead, separator, flow line, choke, etc.) are given in the FacilityParameter
enumeration.
• Facility Parameters (e.g. block valve status, reciprocating speed, etc.) are given in the
FacilityParameter enumeration.
• Flow Kinds (e.g. production, injection, export, etc.) are given in the ReportingFlow enumeration.
• Flow Qualifiers (e.g. measured, allocated, etc.) are given in the FlowQualifier and FlowSubQualifier
enumerations.
• Product Kinds (e.g. oil, water, gas, etc.) are given in the ReportingProduct enumeration.
The combination of Flow Kind and Flow Qualifier(s) is used to characterize the underlying nature of the
flow. For example, the following combination is used to indicate a production flow that is measured.
<flow>
  <kind>production</kind>
  <qualifier>measured</qualifier>
  ...
</flow>

Similarly, the following combination is used to indicate an injection flow that is simulated.
<flow>
  <kind>injection</kind>
  <qualifier>simulated</qualifier>
  ...
</flow>
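Such snippets can be produced with any XML library; a sketch using Python's standard xml.etree (the `flow_element` helper is hypothetical, and namespaces are omitted):

```python
import xml.etree.ElementTree as ET

def flow_element(kind, qualifier):
    """Build a <flow> element with the kind/qualifier combination
    used to characterize the underlying nature of the flow."""
    flow = ET.Element("flow")
    ET.SubElement(flow, "kind").text = kind
    ET.SubElement(flow, "qualifier").text = qualifier
    return flow

xml = ET.tostring(flow_element("production", "measured"), encoding="unicode")
# <flow><kind>production</kind><qualifier>measured</qualifier></flow>
```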

28.3 Product Volume Associated with Product Flow Model


A product flow network is not mandatory when using product volume. It is quite acceptable simply to use
names on Facility elements under which the flows and products will be reported (see the hierarchy of
product volume shown in Section 28.2).
However, where it is desired to exchange the detailed network model, typically so that the flow paths and
the precise ports at which volume data is reported can be defined, then a product flow model can be used
(probably only at the outset and then whenever it changes, with product volume being used each time
dynamic data is to be exchanged). Having used a product flow model (refer to Chapter 29 for details),
product volume can then be associated with any given unit in the product flow model. The XML
snippet in Figure 28-1 shows how to reference the associated unit and port from a Product Volume
Report. Note that this snippet shows PRODML v1.x style; the content has not changed in v2.
The unit element provides the name and uidRef attribute that refers to the unique identifier for the flow
unit C in the previous simple flow network example. Similarly, the port element provides the name and
unique identifier for the port associated with a reported flow.
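The published guide reproduces the actual snippet in Figure 28-1. Schematically, the pattern described above looks like the following; the enclosing element, names, and uid values here are invented for illustration:

```xml
<!-- Schematic only: a reported flow referencing a named unit and port,
     where uidRef points back into the Product Flow Model. -->
<facility>
  <unit uidRef="U03">C</unit>
  <port uidRef="P05">5</port>
  ...
</facility>
```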


Figure 28-1. Product Volume Report referencing Product Flow Model unit & port

28.4 Further Information on Product Volume


Product Volume has not been updated during the development of PRODML version 2 (other than to adopt
the style of XML as defined in the CTA). This is primarily to keep the data model compatible with the
usage of version 1.x Product Volume, as used primarily on the NCS. There is no exhaustive usage guide
for product volume, product flow model, and time series, which can work together as outlined in this chapter,
Chapter 29, and Chapter 30. However, there are documentation sources that use version 1.x examples
and are still applicable to version 2 because the data model has not changed.


• Online documentation for the NCS. This defines a standard set of patterns for the usage of product
volume for volume reporting to partners and regulators via the EPIM Reporting Hub. This is a very
large-scale adoption of PRODML, where additional business rules have been added to define the
norms for the usage of the base standards.
− EPIM Reporting Hub introduction https://epim.no/erh/
− EPIM Reporting Formats https://epim.no/erf/ . These are very similar schemas to Product Volume,
although in the earlier v1.x style. They have been modified for use in reporting in Norway but may
be useful to review alongside the example linked below.
− Norwegian Regulator’s introduction http://www.npd.no/en/Reporting/Production/Reporting-
production-data-to-the-Norwegian-Petroleum-Directorate/
- Worked example at http://www.npd.no/Global/Norsk/7-Rapportering/Produksjon/generell-felt.xml
• A PRODML companion document, produced as part of an extension to PRODML following a
change request from a member organization. This extension added a number of features to make
production surveillance easier to implement. The document deals with the interplay between the product
volume, product flow model, and time series data objects so they can be used for production
surveillance. The document is entitled PRODML Product Volume, Network Model & Time Series
Usage Guide and can be found in the folder energyml\data\prodml\v2.x\doc in the Energistics
downloaded files. Extracts from that document have been included here. The document shows
PRODML v1.x style schemas and examples, but the data model has not changed in v2.x.


29 Product Flow Model


Acknowledgement
For Acknowledgements of detailed contributors, see Chapter 28.

29.1 Overview of Product Flow Model


The Product Flow Model data object can be used to define a directed graph of flow connections (Figure
29-1). The basic building block is a Unit, which can be used to define the flow behavior of any facility
(where the term facility represents any use of equipment to perform a function), such as a separator, a
wellhead, a valve, or a flow line. It uses a general hierarchy of:
Model (collection of networks)
Network (collection of connected units)
Unit (black box with ports)
Port (allows flow in or out)
Node (allows ports to connect)
The Network represents the internal behavior of the model or a unit in another network and is a collection
of connected units. A Unit is essentially a black box that can represent anything (big or small). Ports allow
flow in or out of a Unit. Nodes are used to connect ports.
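The hierarchy maps naturally onto plain graph data. A sketch in Python (the port tuples anticipate the simple example in Section 29.2; the data layout is hypothetical, not the schema):

```python
# Sketch: ports as (unit, port name, direction, node); ports sharing a
# node are connected, so the network is a directed graph over units.
ports = [
    ("A", "1", "outlet", "A-C"),
    ("C", "2", "inlet",  "A-C"),
    ("B", "3", "outlet", "B-C"),
    ("C", "4", "inlet",  "B-C"),
    ("C", "5", "outlet", "C-P"),
]

def connections(ports):
    """Group ports by node, yielding (outlet units, inlet units) per node."""
    by_node = {}
    for unit, _name, direction, node in ports:
        by_node.setdefault(node, {"inlet": [], "outlet": []})[direction].append(unit)
    return {node: (d["outlet"], d["inlet"]) for node, d in by_node.items()}

conns = connections(ports)   # e.g. node "A-C" joins A's outlet to C's inlet
```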

Figure 29-1. Product Flow Model data object.

In any given Product Flow Model, the following is assumed:


• Steady state fluid flow across nodes and ports. That is, pressure is constant across internally and
externally connected ports and nodes.
• Conservation of mass across a node or port.
• Pressure can vary internally between ports on a unit.
• Connections between models should be one-to-one so that mass balance concerns are internal to
each model.
A variety of models may be created and used for different systems (Figure 29-2). For instance, a
production accounting system will have a different model than a production operations dashboard that is
used to monitor real-time data from a facility. However, by using PRODML, these various models may be
exchanged using the same standard format.
Note that Product Flow Model is also used in the Optical Path object (see Section 19.3) in distributed
sensing (DTS and DAS), to show the connection of the components in a fiber optical path.


Figure 29-2. Various network models.

29.2 Simple Flow Network Example


This section provides a very simple example of how to construct a PRODML Product Flow Model, in order
to introduce the overall concepts that are then used in a more complex example.
Figure 29-3 shows a very simple production network. This network consists of two producing oil wells
connected to a pipeline. Although wells typically produce a combination of oil, gas, and water, we will only
consider the flow of oil for the purposes of this example.

Figure 29-3. Simple oil production network.

Product Flow Model Construction


When constructing a Product Flow Model, it is important to differentiate between the “product” type
measurements (e.g., oil rate, gas rate, pressure, etc.) which are reported at Ports, and the “facility” type
measurements (e.g., motor speed, choke position, valve status, etc.) which can be measured and
reported within a unit.
Information about a flow is best reported where it leaves or enters Units (facilities), which may modify the
flow (e.g., separators, manifolds, etc.). On the other hand, facility parameters are internal to the “workings”
of a facility.


IMPORTANT: The Product Flow Model assumes:


• Steady state fluid flow across nodes and ports (i.e. pressure is constant across internally and
externally connected ports and nodes).
• Conservation of mass across a node or port.
• Pressure can vary internally between ports on a unit.
• Connections between models should be one-to-one so that mass balance concerns are internal
to each model.
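The conservation-of-mass assumption can be expressed as a simple numerical check. A sketch (the rate values and tolerance are invented):

```python
def mass_balance_ok(inflows, outflows, tol=1e-6):
    """Conservation of mass across a node or port: total rate in must
    equal total rate out, within a numerical tolerance."""
    return abs(sum(inflows) - sum(outflows)) <= tol

# Hypothetical mass rates (same unit) on either side of a node:
assert mass_balance_ok([1200.0], [1200.0])
assert not mass_balance_ok([1200.0], [1100.0])
```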

Construction of a PRODML flow network can be summarized in the following steps, which are further
explained below:
1. Draw the real world diagram indicating measurement points.
2. Draw boundaries to include facility parameters and exclude flow measurements.
3. Make each boundary a unit in the flow network and identify the ports.

Step 1: Draw the Diagram


First, draw the real-world diagram indicating measurement points (Figure 29-4).

Figure 29-4. Measurement points for a simple flow network.


Step 2: Draw Boundaries


The next step is to draw boundaries around items with no flow measurements (Figure 29-5).

Figure 29-5. Unit boundaries for a simple flow network.

Step 3: Make Each Boundary a Flow Unit


Finally, make each boundary a unit in the flow network and label each unit and node with a unique name
(Figure 29-6).

Figure 29-6. Simple network units and nodes.

Table 1 summarizes the units of the flow network, their assigned unique identifiers, and the associated
facility kinds.
Table 1 Simple Network Units

Unit Name   Unit ID   Facility ID   Facility Kind   Kind Qualifier      Measurements
A           U01       W01           well                                (outlet) pressure
B           U02       W02           well                                (outlet) pressure
C           U03       HDR01         manifold        production header   (outlet) oil flow rate
                                                                        (outlet) pressure

Table 2 Simple Network Ports

Port   Direction   Connected Node
1      outlet      A-C
2      inlet       A-C
3      outlet      B-C
4      inlet       B-C
5      outlet      C-P

Table 3 Simple Network Flow Properties

Port   Property    Flow         Product   Qualifier
1      pressure    production   oil       measured
3      pressure    production   oil       measured
5      pressure    production   oil       measured
5      flow rate   production   oil       measured

Figure 29-7 illustrates the definition of port 5 in XML. Note that this snippet shows PRODML v1.x style;
the content has not changed in v2.


Figure 29-7. Port definition example in XML

29.3 Further Information on Product Flow Network


Product Flow Network has not been updated during the development of PRODML version 2 (other than to
adopt the style of XML as defined in the CTA). There is no exhaustive usage guide for product volume,
product flow model, and time series, which can work together as outlined in Chapter 28, this chapter, and
Chapter 30. There are, however, documentation sources that use version 1.x examples but are
still applicable to version 2, since the data model has not changed.
• A companion document, as described in Section 28.4, entitled PRODML Product Volume, Network
Model & Time Series Usage Guide, can be found in the folder energyml\data\prodml\v2.x\doc in
the Energistics downloaded files. Extracts from that document have been included here. The
document shows PRODML v1.x style schemas and examples, but the data model has not changed in
v2.x.
• The Optical Path usage of Product Flow Network is shown in Section 19.3.3.


30 Time Series
Acknowledgement
Special thanks to the following companies and the individual contributors from those companies for their
work in designing, developing and delivering the Time Series data object: Shell, OSISoft, Petrotechnical
Data Systems.

30.1 Time Series in Energistics


The Time Series data object (and its companion, Time Series Statistic) is available for the exchange of time
series data and has been part of PRODML since v1.2. The PRODML Time Series data object is
intended for use in support of smart fields or high-frequency historian-type interactions.
It is likely that this data object will eventually be replaced by data objects associated with the
Energistics Transfer Protocol (ETP), which has been developed for the transfer of data channels streamed in
real time. For backwards compatibility, it is included here.

30.2 Time Series Data Object Overview


The Time Series data object is intended for use in transferring time series of data, e.g., from a historian.
It describes a context-free, time-based series of measurement data for the
purpose of targeted exchanges between consumers and providers of data services. It is intended to
support smart fields or high-frequency historian-type interactions, not reporting. It provides a
“flat” view of the data and uses a set of keyword-value pairs to define the business identity of the series,
as described in the following generalized hierarchy.
Time Series Data
Meta Data
Keyword Name/Value Pairs (asset identifier, qualifier, product, flow …)
Units of Measure (psi, rpm, mA, m …)
Measure Class (electric current, thermodynamic temperature, …)
Time/Value Pairs
The keyword value pairs are used to characterize the underlying nature of the values. The key value may
provide part of the unique identity of an instance of a concept or it may characterize the underlying
concept. For example, the following keyword value pairs are used to indicate the measured production
flow of oil.
<key keyword="flow">production</key>
<key keyword="product">oil</key>
<key keyword="qualifier">measured</key>
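Reading such key elements back into a mapping is straightforward with Python's standard xml.etree. In this sketch, the enclosing <keys> wrapper is an assumption made so the fragment parses on its own:

```python
import xml.etree.ElementTree as ET

snippet = """
<keys>
  <key keyword="flow">production</key>
  <key keyword="product">oil</key>
  <key keyword="qualifier">measured</key>
</keys>
"""

def keyword_pairs(xml_text):
    """Collect keyword/value pairs from <key keyword="...">value</key> elements."""
    root = ET.fromstring(xml_text)
    return {k.get("keyword"): k.text for k in root.iter("key")}

pairs = keyword_pairs(snippet)
# {"flow": "production", "product": "oil", "qualifier": "measured"}
```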

The following keywords are defined:
• asset identifier: A formatted URI identifier of the asset (facility) related to the value. This captures the
kind of asset as well as the unique identifier of the asset within a specified context
(the authority). The identifier may define a hierarchy of assets. Refer to the CTA
Technical Usage Guide for more information on Energistics identifiers.
• qualifier: A qualifier of the meaning of the value. This is used to distinguish between variations
of an underlying meaning based on the method of creating the value (e.g., measured
versus simulated). The values associated with this keyword must be from the list
defined by type FlowQualifier.
• subqualifier: A specialization of a qualifier. The values associated with this keyword must be from
the list defined by type FlowSubQualifier.


• product: The type of product that is represented by the value. This is generally used with
things like volume or flow rate; it is generally meaningless for things like temperature
or pressure. The values associated with this keyword must be from the list defined by
type ReportingProduct.
• flow: Defines the part of the flow network where the asset is located. This is most useful in
situations (e.g., reporting) where detailed knowledge of the network configuration is
not needed. Basically, this classifies different segments of the flow network based on
their purpose within the context of the whole network. The values associated with this
keyword must be from the list defined by type ReportingFlow.

30.3 Asset Identifier


In a Time Series Data object, the key element with the keyword "asset identifier" refers back to
the Product Flow Model. The asset identifier key is an Energistics URI identifier of the asset (facility)
related to the value. The identifier may define a hierarchy of assets. For more information on PRODML
identifiers, see the Energistics Identifier Spec, which was included in the PRODML download.
The example in Figure 30-1 shows an asset identifier for the manifold facility (yellow highlight) assigned
the unique identifier of HDR01 (blue highlight) in the naming system aramco.com (green highlight).

Figure 30-1. Time series asset identifier example.

30.4 Value Status Attribute


In a Time Series Data object, the value status attribute is used as an indicator of the quality of the value.
IMPORTANT: If the status attribute is absent and the value is not set to NaN, then the data value can be
assumed to be good with no restrictions.
The example in Figure 30-2 shows several readings, each with a status of frozen, which indicates the
sensor has been reading the same value for a specific period of time.
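The rule can be captured in a small helper. A sketch (the helper is hypothetical, and treating a NaN value with no status as unusable is our assumption):

```python
import math

def value_quality(value, status=None):
    """Apply the rule above: absent status plus a non-NaN value means
    the value can be assumed good; otherwise report the given status."""
    if status is not None:
        return status
    if isinstance(value, float) and math.isnan(value):
        return "bad"   # assumption: NaN with no status is not a usable value
    return "good"

value_quality(101.3)                   # "good"
value_quality(float("nan"))            # "bad"
value_quality(101.3, status="frozen")  # "frozen"
```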


Figure 30-2. Value status example.

30.5 Time Series Statistic Object


This data object is a companion to the Time Series Data object. It has the same elements as Time Series
Data, including the keyword concept, to identify a time series of data. Then, rather than the time series
data itself, the Time Series Statistic object has elements defining the minimum and maximum time values
between which the data statistics apply. This is followed by a set of statistical data applying to the time
series data, as follows:
• Minimum
• Maximum
• Sum
• Mean
• Median
• Standard deviation
• Time at Threshold, which shows the time during which data sits within a range as follows:
− Threshold minimum (optional): defines lower bound of the range.
− Threshold maximum (optional): defines upper bound of the range.
- Duration: the sum of the time over which data was within the range defined above.
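Time at Threshold can be derived from a series by summing the durations of samples whose values fall inside the (optional) bounds. A sketch (the sampling layout, values, and helper are invented for illustration):

```python
def time_at_threshold(samples, minimum=None, maximum=None):
    """Sum the durations of samples whose value lies within [minimum, maximum].
    `samples` is a list of (duration, value) pairs; either bound may be None."""
    total = 0.0
    for duration, value in samples:
        if minimum is not None and value < minimum:
            continue
        if maximum is not None and value > maximum:
            continue
        total += duration
    return total

# Hypothetical hourly samples of a pressure channel (duration in seconds):
samples = [(3600, 250.0), (3600, 180.0), (3600, 265.0)]
time_at_threshold(samples, minimum=200.0)   # 7200.0 seconds
```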

30.6 Further Information on Time Series


Time Series has not been updated during the development of PRODML version 2 (other than to adopt the
style of XML as defined in the CTA). There is no exhaustive usage guide for product volume, product flow
model, and time series, which can work together as outlined in Chapter 28, Chapter 29, and this chapter.
There are, however, documentation sources that use version 1.x examples but are still applicable
to version 2 because the data model has not changed.
• A companion document, as described in Section 28.4, entitled PRODML Product Volume, Network
Model & Time Series Usage Guide, can be found in the folder energyml\data\prodml\v2.x\doc in the
Energistics downloaded files. Extracts from that document have been included here. The document
shows PRODML v1.x style schemas and examples, but the data model has not changed in v2.x.


31 Production Operation
Acknowledgement
For Acknowledgements of detailed contributors, see Chapter 28.

31.1 Production Operation Data Object Overview


The Production Operation Data object is a further companion to the product volume object described in
Chapter 28. It enables the exchange of production operation data along the lines of a “morning report” for
production operations. The volumes would be expected to be transferred using product volume.
Production operation has an offshore operation orientation, reflecting its origins in the Norwegian
Continental Shelf reporting requirements.
Production operation has a set of identifying elements and common elements, and then has a repeating
Installation Report element. This identifies the installation, so that one transfer can deal with a number of
related installations, and then it contains the following categories of data:
• Count of crew, by crew type and beds available.
• Work hours performed for the day, month, and year to date.
• Production operation HSE (see below).
• Production operation activity (see below).
Production operation HSE is a list of statistics concerning HSE such as number of incidents, time since
last incident, and similar. It also contains a weather report with detailed weather attributes.
Production operation activity is a more detailed capability to report operational activity in a number of
areas:
• Water quality
• Alarms
• Cargo ship operations (which includes volumes loaded, duplicating a part of product volume, but from
the perspective of an operational report rather than a hydrocarbon accounting requirement).
• Marine operations (supply and standby vessels and related).
• Shutdowns (including time, duration, loss of oil and gas production).
• Lost production (which can also be referred to as deferred production, since it is not “lost” in the usual
sense of the word; in SPVR, deferred production was the term used; see Section 6.3.2, item 8). This
element has a volume-lost amount and an enumerated list of reasons for loss. In SPVR this enum was
not listed, since it was understood that each company would have its own set of reason codes. The
enum values here reflect agreed values for NCS reporting.
• Comments on operations, which are typed by an enum list.
For the UML diagram of production operation activity, giving a good overview, see Figure 31-1.


[Figure 31-1 is a UML class diagram. It shows the ProductionOperationActivity complex type and its child
complex types: ProductionOperationWaterCleaningQuality, ProductionOperationAlarm,
ProductionOperationCargoShipOperation, ProductionOperationMarineOperation,
ProductionOperationShutdown, ProductionOperationOperationalComment, and
ProductionOperationLostProduction (referenced as both LostProduction and LostInjection), each with its
XSD elements and uid attribute.]
Figure 31-1. Model of production operations activities.
