
Edition highlights:

OBIEE Delivers: Enterprise, Self Service & Search
Are Managed Support Services in your Future?
Moving Data Quickly with GoldenGate
Utilising New Technologies for Improved UX

OracleScene #OracleScene Spring 17 | Issue 63

Making Life Easier With Oracle Database 12c

www.ukoug.org
An independent publication not affiliated with Oracle Corporation

Welcome to Oracle Scene

Inside this issue

11 PROBLEMS FROM STATISTICS by Jonathan Lewis
16 ORACLE'S MODERN ANALYTICS PLATFORM by Antony Heljula
22 WHAT TO EXPECT FROM ORACLE DATABASE 12c by Maria Colgan
44 ARE MANAGED SUPPORT SERVICES THE FUTURE FOR ORACLE E-BUSINESS SUITE? by Andy Nellis

TECHNOLOGY
Responsive Design With ADF by Sten Vesterli 19
Oracle Management Cloud – Finding the Needle in a Haystack by Philip Brown 31
Improving Statspack Experience by Franck Pachot 37
Getting Started With Oracle GoldenGate by Neil Chandler 46
Recover Without Recover – Use Flashback by Marco Mishke 53

APPLICATIONS
The Importance of User Experience for Enterprise HCM Applications by Adrian Biddulph 08
Taking Oracle E-Business Suite from the Back Office to Mobile by Vishal Goyal 28

EVENTS
UKOUG 2017 Events 26

INTERVIEWS
Meet UKOUG's New President 06
The Office for National Statistics and Their Journey From Oracle E-Business Suite to Cloud Applications with Sarah Green and Debra Lilley 34

REGULAR FEATURES
News & Reviews 04

Oracle Scene Editorial Team
Editor: Brendan Tierney, Email: [email protected]
Deputy Editor (Tech): Khalil Rehman
Deputy Editor (Apps): Toby Price
UKOUG Contact: Karen Smith, Email: [email protected]
Sales: Kerry Stuart, Email: [email protected]

UKOUG Governance
A full listing of Board members, along with details of how the user group is governed, can be found at: www.ukoug.org/about-us/governance

UKOUG Office
UK Oracle User Group, User Group House, 591-593 Kingston Road, Wimbledon, London, SW20 8SA
Tel: +44 (0)20 8545 9670
Email: [email protected]
Web: www.ukoug.org

Produced and Designed by
Why Creative
Tel: +44 (0)7900 246400
Web: www.whycreative.co.uk

Next Oracle Scene Issues
Issue 64: June 2017 – Content deadline: 3rd April
Issue 65: September 2017 – Content deadline: 26th June
Issue 66: December 2017 – Content deadline: 4th September

More than 17,000 people follow UKOUG. Join them now. @UKOUG

OracleScene Digital: View the latest edition online and join UKOUG to access the archive: www.ukoug.org/os

Oracle Scene © UK Oracle User Group Ltd
The views stated in Oracle Scene are the views of the author and not those of the UK Oracle User Group Ltd. We do not make any warranty for the accuracy of any published information and the UK Oracle User Group will assume no responsibility or liability regarding the use of such information. All articles are published on the understanding that copyright remains with the individual authors. The UK Oracle User Group reserves the right, however, to reproduce an article, in whole or in part, in any other user group publication. The reproduction of this publication by any third party, in whole or in part, is strictly prohibited without the express written consent of the UK Oracle User Group. Oracle is a registered trademark of Oracle Corporation and/or its affiliates, used under license. This publication is an independent publication, not affiliated or otherwise associated with Oracle Corporation. The opinions, statements, positions and views stated herein are those of the author(s) or publisher and are not intended to be the opinions, statements, positions, or views of Oracle Corporation.
First Word

Welcome to the first edition of 2017


Our user group is constantly evolving, growing and changing. All of this is not possible without a very dedicated group of volunteers. New volunteers are always needed to help out with the various SIG groups, arranging meet-ups, helping out with conferences etc. You don't have to be an expert or a long time member of the user group to contribute. Members of all levels of experience are needed & welcomed. If you are a newbie or in the early stages of your Oracle career, you can help guide a SIG with the type of topics your peers are particularly interested in or contribute with judging papers and reviewing articles. Check out the UKOUG website for a current listing of various volunteer roles and the next time you are at a SIG ask the committee or UKOUG team how you can get involved.

With Oracle Scene we are looking to recruit some members to help out in a deputy editor role. We are looking to recruit two deputy editors to cover the Tech area and one deputy editor for the Apps area. The role of the deputy editor is to: help source & review articles, assist authors with improving their content, help select which articles will be included in the edition, take part in an editorial call and proof read the chosen content. This may sound like a lot of work but for each edition it probably takes about 4 hours over a 3 week period. I'm sure you can find the time for that? If this is something that you are interested in take a look at www.ukoug.org/editorialteam, then get in touch at [email protected].

In this issue, we have an article from Jonathan Lewis about problems with database statistics and from the next edition Jonathan will be commencing an 'Ask Jonathan' column. This is where you, the reader, can submit questions for him to answer. Now is your chance to challenge his encyclopedic knowledge of the Oracle Database. Get your thinking caps on! Check out page 15 for more details on how you can submit and get your question answered.

The popularity of Oracle Scene continues to grow, not just with readership numbers but also with the number of submissions we are receiving. For this edition we could have published at least twice the number of articles. Oracle Scene is definitely seen as the premier publication in the Oracle community around the world.

So thank you for the continued support and keep submitting those articles.

ABOUT THE EDITOR
Brendan Tierney, Consultant, Oralytics.com
Brendan is an Oracle ACE Director, independent consultant and lectures on Data Mining and Advanced Databases in DIT in Ireland. Brendan has extensive experience working in the areas of Analytics, Data Mining, Data Warehousing, Data Architecture and Database Design for over 20 years. He started working with the Oracle 5 Database, Forms 2.3 and ReportWriter 1.1, and has worked with all versions since then. Brendan is editor of the UKOUG Oracle Scene magazine and is the deputy chair of the OUG Ireland BI SIG. Brendan is a regular presenter at conferences around the world.

Contact Brendan at: [email protected]

More information on submitting an article can be found online at: www.ukoug.org/oraclescene

News & Reviews

2017 is a year for celebration... the UKOUG Partner of the Year Awards hits a milestone

It's the 10th anniversary of this prestigious event, and what better way to reward your team for their achievements, and receive recognition for the outstanding work & contribution you provide to the Oracle community, than by being a recipient of one of these coveted awards.

These awards are solely voted for by Oracle customers so winning really does give you that respected industry status.

Look out for nominations opening in April at www.ukoug.org/pya

Partner of the Year 2017/18

Women in IT Update
by Debra Lilley, UKOUG Member Advocate

At UKOUG we are always listening to feedback as to what works and what you want us to try. When everyone at UKOUG 2015 said having the Women in IT session as a breakfast forum, so as to not be against an educational session, worked, we knew 2016 needed to be in the same format. However we were also aware that we need to think about what we want to do, if anything, between conferences for WIT.

Like last year we had a full house of delegates happy to arrive early, drink coffee and eat bacon butties. We sat around tables and asked each group to share what their organisations do around WIT, if anything, and what they thought we should do, as well as capturing any issues they experience.

Then James Jeynes (UKOUG Executive Director) and I wandered around the tables and consolidated the post-its at the end of the session. Ideas captured included: starting them young, engagement with fun technology ie 3D printers and dedicated events.

Overwhelmingly this was a positive bunch, agreeing that children as early as possible, probably of primary school age, need to be encouraged to work in and not just with IT. We talked about joining up with existing schemes to share with schools and I had nearly 20 volunteers, so watch this space.

REFRESH YOUR SKILLS WITH OUR UPCOMING SPECIAL INTEREST GROUP EVENTS

MARCH
14th UKOUG Business Analytics SIG, Solihull
21st UKOUG Development SIG, Solihull
23rd UKOUG Middleware & Integration SIG, Solihull

APRIL
25th UKOUG Higher Education SIG, Solihull

MAY
9th UKOUG Public Sector Applications & Financials SIG
10th UKOUG Spatial & Graph SIG, Reading
18th UKOUG Public Sector HCM SIG, Solihull
23rd UKOUG APEX SIG, Solihull
25th UKOUG HCM Solutions SIG, London

Event dates correct at time of print


Moving with the times – a new name and a new focus
2017 sees the Application & Middleware SIG going through a process of change and rejuvenation.
Whilst the committee remains the same, the name is changing and we have several other activities
happening. The committee put forward to the UKOUG Board the proposal to change the name to
Middleware & Integration SIG and this has been accepted.

Why the new name? In the last couple of years we have seen enormous change and rejuvenation within Oracle's middleware portfolio and we wanted to reflect this change. This is not just in the arrival of cloud which still includes the trusty WebLogic WebServer as a core engine, which in some cases we can see, in others we know it is there but certainly hidden from our reach; but we are seeing a great number of offerings that go beyond WebLogic, SOA, OSB - new identity products, API products, Traffic Director (which doesn't even use WebLogic) and the support of open source technologies such as Node.js, Kafka and Docker. It is no longer just about the Application Server; it is the app server and much, much, more, be that on-premise, cloud or hybrid.

The SIG has looked beyond just the configuration and management of the middleware platform but considers the use of these technologies to be the glue that makes the web, mobile, social, bots and other solutions work with the applications and data stores. We hope that the change will help give more clarity to this as well.

New name, old SIG and same faces?
Yes and no, whilst SIG content has been evolving under its old name we're hoping some new faces will be attracted to get involved and contribute; after all there is so much to draw from here. We're also looking at some new ideas to share and collaborate. Last year the Journey to the Cloud event committee arranged a couple of one hour webinars which enjoyed very good attendance. Clearly such sessions are easier to engage with - no escaping the office for the day or travel demands. So we are currently exploring similar ideas for this SIG and by the time you are reading this, you may have seen something in the regular newsletters. We hope that you can join us for these events.

Dates for your diary:
23rd March Solihull
20th June Reading
28th September London

Other SIG name changes to look out for:


The Apps DBA SIG has changed its name to become UKOUG Apps Tech SIG and the HCM & HCM in the Cloud including Taleo SIG
will become UKOUG HCM Solutions and will be for those interested in Oracle Cloud and E-Business Suite HCM Solutions.

UKOUG CONFERENCES
Call For Papers Opening Soon!
4-6 DECEMBER 2017 | ICC BIRMINGHAM

Next month (April) we open the call for papers for the UKOUG conferences, where we look to the Oracle community to share their knowledge around all aspects of Oracle Technology, Analytics & Reporting & Applications.

If you've never presented before but are tempted to try – do take a look at the article in the last edition of Oracle Scene by first time presenter Brian Dwyer – his first experience was presenting at a SIG and now he's got the bug and ready for more.

Register your interest at: www.ukoug.org/conferences

If you're interested in sponsorship and exhibition opportunities at this year's events, please view our Partner Guide and contact Kerry Stuart for more information.

SEE YOUR SALES SOAR WITH UKOUG
Looking to grow your brand identity through UKOUG? Then take a look at our Partner Guide at www.ukoug.org/partnerguide

We have sponsorship options to fit every budget, with packages to suit your specific needs and accelerate your sales cycle. For more information, contact Kerry Stuart, Head of Sales, on +44 (0)20 8545 9685 / +44 (0) 7775 758 878 or email [email protected].

Interview

15 minutes with Paul Fitton

Meet Paul Fitton, UKOUG's New President

This month sees Linda Barker's two year term as UKOUG President come to an end and although we are sad to see Linda step down, we're delighted to welcome Paul Fitton, who transitions from President Elect into the role.

Hi Paul, some readers may have seen you on stage at the UKOUG's co-located conferences in December, where you gave a brief introduction about yourself, but for those that missed it can you tell us a little about yourself.
I was really pleased to be invited on stage at the conference last year by Linda to briefly introduce myself to the attendees, it really is a fantastic idea to have a period of time where I can shadow Linda and get up to speed before becoming the President. I work for an organisation called Home Group where I am Head of IS Architecture, managing a small team of domain and solution specialists in the design and management of our entire IT landscape. It's a demanding job, but I love it and I love what we do; we're England's number one provider of care and support services and one of the largest housing associations in the UK so it's easy to connect what we do in IT to the good we do as an organisation. I've been our main contact for the UKOUG membership since we joined, when we first started on our Oracle implementation 3 years ago. I have found the experience very beneficial hence why I put myself forward to become a part of the organisation.

As you are now making the transition from President Elect to President – how will the roles differ?
Well, right now all I'm really doing is shadowing Linda and getting up to speed with things. I've attended a number of board meetings and have started my induction. When I take over, I hope to take a more active role at some of our events and really start to get out and meet the membership. Maybe even do a few site visits if people are willing to have me!

What made you keen to stand for nomination for President?
I'm fairly new to Oracle and I've found it a pretty "closed" community, so when I came across the UKOUG and first experienced some of the events and content they provided for their membership, I realised just how valuable the organisation was. I've always believed that being new to something has as many advantages as it does disadvantages, and although I might ask a few silly questions at times, I think a fresh pair of eyes to complement the wealth of experience that currently sits on the board is something I was keen to add.

"Ultimately, I want to be in a position to help influence where we go as an organisation and to speak on your behalf to continue our solid relationship with Oracle."

What are you most looking forward to about the role?
I'm someone who enjoys connecting with people, particularly those who work in similar fields and will have similar experiences to those I have faced, or am likely to.


This is probably what excites me the most about this role, the ability to get to know our membership and find out more about what they need from us and how we can, together, become an even better organisation.

Do you have a message for the members?
I'd just like to reiterate what I said on stage; that I'm really pleased to have been given this opportunity and I genuinely am looking forward to connecting with you all so please do get in touch! I might not be able to respond immediately, but I will in due course.

What are your first official duties in the role?
One of the first things I'll be doing is attending the UKOUG Applications' Journey to the Cloud event in March, which I'm really looking forward to! Hopefully I'll be able to connect with a number of members and also soak up some really useful information.

Do you know what else is in store for your first year?
I'm definitely aiming to get around a number of SIGs and Events in my first year to understand the many perspectives that exist within our membership. I'm also keen to meet with members and understand how they benefit from their membership and how we might improve as an organisation. I'll be attending some events on behalf of our membership too such as the EMEA User Group meeting and also be looking to cement a positive working relationship with Oracle leadership in the UK.

In your manifesto you said you were keen to see UKOUG appeal to a wider audience. How will you be working with the UKOUG board & team to achieve this?
Before I was successful in being appointed as President Elect I worked with the strategy group on how we take the organisation forward in 2017 and beyond so I've been engaged on this for a little while now. We are very lucky as a board to have a good mix of experience and fresh new ideas so our challenge now is to distil this mix of ideas and pragmatism down into a business plan that will take us in the right direction.

I am keen to hear from the members on this one though and would welcome any input from them as to what the make-up of their membership is and who uses it for what purposes.

You also said that some organisations don't understand the value of joining a User Group – what value has being a member made to your organisation?
Oh we've seen massive benefits from being a member. You should see our teams the first couple of weeks after coming back from conference in December; it's like a melting pot of ideas and everyone is excited to explain what they've seen to everyone else. We like to do a debrief very soon afterwards in order to capture as many of the ideas as possible and categorise them into quick-wins vs. longer term initiatives. That's just the conference too; we're starting to attend some more SIGs now we've got a bit more time to do so and the Oracle Scene magazine regularly gets a mention in team meetings. Having said all of that though, I know there is more we could benefit from and more that the UKOUG could do for us moving forward, and I'm keen to understand how other members feel about this also.

The IT industry and especially the Oracle world is an ever changing place, what made you want to get into a career in IT?
Honestly? I wasn't really sure what to do with my life! I finished school, stumbled into University – didn't really enjoy that and then one day found myself needing to get a job (never mind a career). Luckily for me, I was able to take an apprenticeship in the IT department at a local council where I quickly realised something I hadn't cottoned onto before… I enjoyed hard work! After that it was just a natural progression thing, I worked hard and learnt fairly quickly. IT is really dynamic and fast paced but the thing I've always loved about it is you're very often the person tasked with solving a problem and how you do so is not predefined.

"It's a world of continual improvement and innovation and never stops still, which is a challenge, but means that you too don't get to rest on your laurels."

Very often I've found myself returning to a problem I've previously not been able to fully solve only this time with new knowledge or different applications in the portfolio which can make the improvements required on the second pass.

What do you see as the biggest challenge for the User Group over the next few years?
Maybe a little bit of the answer to my last question; moving with the times. The Oracle market is changing, much like the markets of many of the other major software vendors. More cloud, more loosely coupled platform applications that do specific things very well but are interchangeable and compete on quality, function and cost rather than being the "whole package". IT departments are changing too, meaning our members are changing also. Gone are the days where huge teams of internal resources are dedicated to "keeping the lights on" and more focus is on vendor management and service delivery roles. But much like my previous answer, we should see these challenges as opportunities.

We want our current membership to come with us on this journey, to embrace new members, new content and different types of events and engagements from the UKOUG but also to figure out where they fit in the new world and how we can help learn together as a community.

Thanks for your time Paul, if the members wanted to get in touch what is the best way?
I'm a keen social media advocate so would encourage anyone who wants to, to seek me out on Twitter (@paulfitton) or on LinkedIn but equally, happy to receive email through my UKOUG address which is, [email protected]

Don't forget: UKOUG members vote in their Member Advocate
At the time of printing UKOUG members were voting to elect a Member Advocate to represent them for the next two years on the UKOUG Board. Find out the results in the next edition or sign up to the UKOUG ebulletin (www.ukoug.org/ebulletin) to hear the news when announced.

Applications

The Importance of User Experience for Enterprise HCM Applications
Adrian Biddulph, Managing Consultant, Claremont

Modern consumer applications are engaging and easy to use. Leading brands like Facebook and Uber devote huge budgets to ensuring their user experience outrivals competitor apps and meets customer demand.

If these applications fail to innovate their user experience continually, they risk losing consumers – and with them, revenue. The battle to create the most attractive, intuitive app has raised the user experience bar.

At home, a user is a consumer; at work, he or she is an employee – and their expectations, when it comes to the usability of technology, are no different in the workplace than anywhere else.

Where are we now with UX for the Enterprise?
'Enterprise' applications are under the same usability pressures as 'consumer' applications. Modern businesses depend on them for collecting and organising the large amounts of data that allow them to make strategic business decisions.

But employees are now demanding access to technologies that are flexible, accessible and user friendly. So, enterprise applications must up their game, when it comes to usability, if they are going to continue to deliver what the business needs. Traditionally, enterprise applications, like Oracle E-Business Suite HCM, have neglected to put usability first, but it has now become a key challenge that business and IT leaders realise they must address.

This is reflected in Sierra Cedar's 2016-2017 HR Systems Survey, which found that 71% of users reported Oracle E-Business Suite to have "a poor user experience." 66% of the organisations who gave their vendors a low satisfaction rating, in this survey, also identified "poor user experience" as their primary reason for doing so.

This was a 40% increase on previous years – an upsurge likely due to the rising consumer (and therefore employee) expectations for user experience. As Stacey Harris and Erin Spencer put it: "Employees are becoming consumers of HR services and HR is seeing a shift in its role from administrator to service provider" (Sierra Cedar, 2016-2017, HR Systems Survey).

There is no question that businesses must address and respond to this shift. Organisations rely on employees engaging with enterprise applications, to collect the data needed to make strategic business decisions. Timely and accurate absence and holiday tracking, for example, is crucial to any HR or payroll department, which relies on this information for accurate payment. Information like this empowers businesses to know their


workforce better, to make operational changes, if required, and to respond quickly to legislative changes.

Driving user adoption wherever you are on your Oracle journey
Applications, like Oracle E-Business Suite HCM, were designed to provide businesses with important data – but they need to be used if they are going to deliver that critical information.

For organisations looking to get the most out of their HR technology investments, and increase employee adoption of these applications, improving the user experience is a critical first step.

Depending on where you are on your Oracle journey, you will have different options available to you. For example, there are some organisations that have made an investment in Oracle E-Business Suite with the view that it will underpin their business for the foreseeable future. These organisations can look to new technologies, which sit on top of applications to improve their usability. There are also those organisations that are ready and eager to embrace the latest in enterprise application software, deployed in Oracle's cloud.

New technologies for improved UX
One way to improve the user experience, and therefore, the user engagement, is through specialised software, which sits on top of Oracle E-Business Suite applications.

By way of example, Applaud Solutions works in this way to add a modern, fresh and intuitive user interface to Oracle E-Business Suite. This is complemented by native mobile and tablet applications.

The National Trust has taken this approach and seen significant improvements in user adoption of their Oracle HCM applications. They have seen a 100% increase in logged annual leave and over 47% increase in logged sick leave – the business now has much more, and better quality, information at its disposal.

This 360-degree view of staff has empowered The National Trust to improve operations, boost employee engagement and empower management to support employees. More data means smarter business decisions.

For those organisations who are looking to optimise Oracle E-Business Suite, Applaud Solutions is a great way to do this.

Oracle Cloud for greater usability
More and more organisations are making the jump to Oracle Cloud applications – a move that is becoming increasingly inviting as the product matures and benefits are realised.

With Oracle HCM Cloud, organisations gain a modern, intuitive user interface across desktop and mobile. Oracle has acknowledged the importance of, and the challenges surrounding, user experience, and has built the answer into the application. Oracle's commitment and investment to user experience in its suite of cloud applications is undeniable. Simplicity, mobility, extensibility and consistency are themes that run across all products; giving a great, consumer grade HCM experience to employees, managers and professional users alike.

As we saw in last quarter's Oracle Scene, Oracle's commitment to UX here in the UK goes back many years. The UX team has been attending the UKOUG APPS Conference for over a decade, allowing the team to interact directly with customers and test the latest UX design patterns with UK customers.


Of course, the expectations around user experience are always evolving. But, because Oracle HCM Cloud is consistently updated and patches are regularly applied, organisations can feel secure in the knowledge that the user experience will be continually optimised to keep up with user demand.

What does this mean for my business?
There is a plethora of options available for businesses looking to optimise the user experience of their Oracle applications, to drive user adoption and get employees and all stakeholders equally invested in the business's HR technology.

Whichever route you choose to take, it is always helpful to work with an experienced, proactive partner to help shape a solution that is right for your organisation. No matter where you are on your Oracle journey – UX can no longer take a back seat.

ABOUT THE AUTHOR
Adrian Biddulph, Managing Consultant, Claremont
Adrian is a Managing Consultant for Claremont and is responsible for the delivery of all things from a HCM technology perspective. With over fifteen years of global consultancy experience, Adrian is a leading expert in the implementation and on-going support of complex Human Capital Management solutions.

www.linkedin.com/in/adrianbiddulph/
@AdrianBiddulph

UKOUG EPM & HYPERION 2017
The Stage Awaits – Share your EPM & Hyperion knowledge

We've an agenda to fill and an audience wanting to hear about real life experiences and product insight that they can use to benefit their business. We're particularly looking for:

Customer insight stories
• How does your business utilise its EPM systems between on-premise & cloud
• Your cloud journey & experiences
• Implementation tales & migration quick wins

Can you help others to improve their:
• Planning, Budgeting & Forecasting
• Financial Processes & Close
• Profitability & Cost Management
• Data Integration
• Reporting

To take part in this event, submit your abstract by 3rd April at www.ukoug.org/epmhyperion

Technology

Problems From Statistics
Jonathan Lewis, Freelance Consultant, JL Computer Consultancy

If you want to give the optimizer the best chance of finding a good execution
plan in a reasonable time you need to have a well-structured database that has
been described sufficiently accurately through a set of suitable statistics.

If you have a poorly designed database there's often little you can do to change the structure but you may have some scope to do something about the statistics, so it's important to be able to recognise the problems and patterns of instability that appear when the optimizer is using guesswork rather than the actual statistics that you can see stored in the data dictionary.

Correlated Columns
Perhaps the most commonly known bit of guesswork the optimizer uses has to do with correlated columns. If column c1 is known to hold 100 distinct values and c2 is also known to hold 100 distinct values the optimizer still doesn't know anything about the number of combinations of the pair that actually exist in the table and simply assumes that there are 100 * 100 = 10,000 distinct combinations. If you know that there is some significant overlap in the "meaning" of these two columns you may know that the number of distinct values is far less than Oracle expects, so the optimizer may give a predicate like: "c1 = {constant} and c2 = {constant}" a much lower cardinality than you're expecting which, in turn, can lead to a highly inappropriate execution plan.

We have a workaround, of course: Oracle allows us to declare "column groups" using the "extended stats" approach so we may gather stats using a method_opt that includes the phrase "for columns (c1,c2) size 1". This will give us a hidden column in the data dictionary that (amongst other things) tracks the number of combinations of c1 and c2. It does mean a little extra work for the future stats collections and in 12.1 Oracle may generate a few sets of column groups automatically without giving us much of a clue that it is doing so. (This mechanism is part of the bundle enabled by the optimizer_adaptive_features parameter in 12.1 but has been isolated to a specific parameter in 12.2, with a backport available for 12.1, that replaces optimizer_adaptive_features with the two parameters optimizer_adaptive_plans and optimizer_adaptive_statistics).
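As a minimal sketch of that approach (the table name t1 and the columns c1 and c2 here are just placeholders, not objects from the test case later in this article), the column group can be created explicitly and then gathered without a histogram:

select dbms_stats.create_extended_stats(user, 't1', '(c1,c2)') from dual;

begin
        dbms_stats.gather_table_stats(
                ownname    => user,
                tabname    => 't1',
                method_opt => 'for all columns size 1 for columns (c1,c2) size 1'
        );
end;
/

The call to create_extended_stats isn't strictly necessary – gathering with that method_opt creates the extension as a side effect – but it returns the name of the generated hidden column, and you can see which extensions exist for a table through the user_stat_extensions view.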


There are, however, instability traps associated with column groups; specifically the issue of the column group "suddenly" not working. This tends to happen for two reasons – the first is a classic optimizer problem in a different disguise: the "out of range" issue (which I'll be coming back to). As time passes and the data grows you may find that the actual data values move outside the range recorded as the low and high values for the two columns; if, at this point, you optimize a query that references out of range values the optimizer doesn't use the column group stats but reverts to multiplying up individual selectivity (num_distinct) values – potentially introducing dramatic changes in plans when "nothing changed".

The second reason for column groups suddenly becoming irrelevant relates to the inherent instability of creating histograms (a threat that shrinks dramatically in 12c – at least for frequency histograms and the new top-N histograms). If any of the individual columns in a column group triggers Oracle into creating a histogram when the column group itself doesn't have a histogram on it the optimizer stops using those column group stats. This is just one of the things that makes a method_opt of "for all columns size auto" a dangerous strategy.

As you can see from the previous two paragraphs you have a Catch-22 to deal with: if you fail to maintain your stats you may suddenly find unexpected changes in plans relating to column groups, on the other hand if you DO run some code to maintain your stats then that, too, may result in unexpected changes in plans. You just have to know that column groups have side effects that are strongly dependent on the group and its constituent columns having consistent stats.

Unknown Values
Here's a quick and dirty bit of SQL to create a table with 100,000 rows – a nice easy number to help us spot coincidences. I've used the view all_objects as the basis for this data set because it has a set of well-known and easily remembered column names and with a full Enterprise install of 11.2.0.4 there are far more than 50,000 rows in the view:

create table t1 as select * from all_objects where rownum <= 50000;

update t1 set object_id = object_id + (select max(object_id) from t1);
insert into t1 select * from all_objects where rownum <= 50000;

create unique index t1_i1 on t1(object_id);
execute dbms_stats.gather_table_stats(user,'t1', method_opt=>'for all columns size 1')

Note that my call to gather_table_stats doesn't request any histograms even though, for example, the distribution of the object_type column would probably make it a good candidate for a frequency histogram. It's also worth remembering in tests like this that even though I haven't declared any columns as mandatory some columns will have picked up the NOT NULL constraint from the all_objects view.

Consider a query based on the maximum object_id for rows where the object_type is a particular value:

select  *
from    t1, t1 t2
where   t1.object_id >= (
                select  max(object_id)
                from    t1
                where   object_type = '&m'
        )
and     t2.object_name = t1.object_name
;

If the substitution variable &m is defined as "SYNONYM", the query returns (for my dataset) 6 rows. If the variable is defined as "SEQUENCE" the query returns 168,148 rows. Clearly, in the absence of histograms or dynamic sampling, the optimizer can only produce one plan for the query so we need to know how it will decide on suitable cardinality estimates.

Below is the plan:
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
--------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 18494 | 3503K| 222 (6)| 00:00:02 |
|* 1 | HASH JOIN | | 18494 | 3503K| 222 (6)| 00:00:02 |
| 2 | TABLE ACCESS BY INDEX ROWID| T1 | 5000 | 473K| 17 (0)| 00:00:01 |
|* 3 | INDEX RANGE SCAN | T1_I1 | 900 | | 4 (0)| 00:00:01 |
| 4 | SORT AGGREGATE | | 1 | 14 | | |
|* 5 | TABLE ACCESS FULL | T1 | 4762 | 66668 | 200 (4)| 00:00:02 |
| 6 | TABLE ACCESS FULL | T1 | 100K| 9472K| 203 (6)| 00:00:02 |
--------------------------------------------------------------------------------------

Predicate Information (identified by operation id):


---------------------------------------------------
1 - access(“T2”.”OBJECT_NAME”=”T1”.”OBJECT_NAME”)
3 - access(“T1”.”OBJECT_ID”>= (SELECT MAX(“OBJECT_ID”) FROM “T1” “T1”
WHERE “OBJECT_TYPE”=’SEQUENCE’))
5 - filter(“OBJECT_TYPE”=’SEQUENCE’)

You can see from the access predicate at operation 3 that Oracle drives the query from the subquery outwards, first producing
a single value that can be used to drive the rest of the query. (This behaviour is an example of “subquery pushing”). Notice the
strange cardinality estimates for operations 3 and 2 – having dictated a path that finds a specific object_id (which, of course, has
to be treated as an “unknown value” at optimization time) the optimizer concludes that an index range scan using this value will
identify 900 index entries which will then expand to 5,000 rows when Oracle visits the table. The optimizer has no idea what value
it will get from the subquery and uses a pair of inconsistent guesses (0.9% and 5%) for the selectivity of the predicate “column >=
{unknown value}”. (The same guesses appear for the other inequalities: “>”, “<”, “<=”.)


The anomaly doesn’t stop there. If you use a between clause with two subqueries a further inconsistency appears:

select *
from t1, t1 t2
where t1.object_id between (
select min(object_id)
from t1
where object_type = ‘&m’
)
and (
select max(object_id)
from t1
where object_type = ‘&m’
)
and t2.object_name = t1.object_name
;

--------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
--------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 925 | 175K| 215 (7)| 00:00:02 |
|* 1 | HASH JOIN | | 925 | 175K| 215 (7)| 00:00:02 |
| 2 | TABLE ACCESS BY INDEX ROWID| T1 | 250 | 24250 | 10 (0)| 00:00:01 |
|* 3 | INDEX RANGE SCAN | T1_I1 | 450 | | 3 (0)| 00:00:01 |
| 4 | SORT AGGREGATE | | 1 | 14 | | |
|* 5 | TABLE ACCESS FULL | T1 | 4762 | 66668 | 200 (4)| 00:00:02 |
| 6 | SORT AGGREGATE | | 1 | 14 | | |
|* 7 | TABLE ACCESS FULL | T1 | 4762 | 66668 | 200 (4)| 00:00:02 |
| 8 | TABLE ACCESS FULL | T1 | 100K| 9472K| 203 (6)| 00:00:02 |
--------------------------------------------------------------------------------------

Predicate Information (identified by operation id):


---------------------------------------------------
1 - access(“T2”.”OBJECT_NAME”=”T1”.”OBJECT_NAME”)
3 - access(“T1”.”OBJECT_ID”>= (SELECT MIN(“OBJECT_ID”) FROM “T1” “T1”
WHERE “OBJECT_TYPE”=’SEQUENCE’) AND “T1”.”OBJECT_ID”<= (SELECT
MAX(“OBJECT_ID”) FROM “T1” “T1” WHERE “OBJECT_TYPE”=’SEQUENCE’))
5 - filter(“OBJECT_TYPE”=’SEQUENCE’)
7 - filter(“OBJECT_TYPE”=’SEQUENCE’)

The plan tells us that Oracle will start by running the two scalar subqueries to find a couple of values to drive an index range scan at operation 3. In one respect the estimates in this plan are consistent with the previous plan and the general rules for multiple predicates – the estimated cardinality of the first t1 access (operation 2) is 250 which is a selectivity of 0.25%, in other words 5% of 5%; the optimizer has used the standard "multiply the two selectivities" rule. On the other hand the index cardinality (operation 3) is an "arbitrary" 450 – a hard-coded selectivity of 0.45% - producing the strange prediction that the number of table rows will be less than the number of rowids found in the index.

In fact there's one further anomaly we can get from this test. It's not always legal for the optimizer to "push" subqueries, a limitation we can model by putting the /*+ no_push_subq */ hint into the subqueries. With this hint in place the predicted cardinality for both queries changes to 369,000. This is a class of defect that shows up fairly frequently in the optimizer – a change in the choice of transformation can result in a significant change in cardinality estimates. This is one of the details that makes hinting difficult – a hint to control the optimizer's choice of transformation may result in such a drastic change in cardinality estimate that the optimizer manages to produce an even worse plan as a consequence.

We can get some minor variations in some of the intermediate cardinality estimates if we create histograms or pick a suitable level of dynamic sampling, and there are some minor differences between 11g and 12c, but essentially the problem with this query is that the optimizer has no idea about what the max (object_id) might be for a given object type – so it has to guess unless you supply some variant of a cardinality hint (SQL Profile, SQL Patch or opt_estimate hint) to give it some help.

On the plus side – if you're using partitioned tables, a query that uses scalar subqueries of this type to identify partition key(s) allows the optimizer to produce a "key/key" plan that the run-time engine can use to access only the required partitions.

Histograms
There are three main problems with histograms. The primary one is the cost of production; the other problems are a side effect of trying to minimize this cost. If you want a "stable" histogram you need a large sample size but a large sample size means that it's very expensive to generate the histogram from the data. A large fraction of this problem disappears in 12c where a new algorithm allows Oracle to generate a good frequency histogram or "Top-N" histogram very efficiently using 100% of the data – but in earlier versions of Oracle the default "auto_sample_size" would often result in a sample of around 5,500 rows being used even for data sets with millions of rows (and this small sample size still appears in the "Hybrid" histogram introduced in 12c to replace the older height-balanced histogram).

Essentially the different histogram types in Oracle can tell the optimizer something about "popular" values i.e. specific values that appear relatively frequently in the data set; on top of this, height-balanced or hybrid histograms can also tell you something about how unevenly your values are spread across different ranges (contrary to popular opinion, a histogram on a unique key column may sometimes be useful for exactly this reason, see: https://jonathanlewis.wordpress.com/2016/09/26/pk-histogram/).
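As a hedged illustration of how you might look at this in practice (the table and column names big_table and flag are invented for the example), you could request a frequency histogram explicitly rather than relying on "size auto", and then inspect what the gather actually captured:

begin
        dbms_stats.gather_table_stats(
                ownname    => user,
                tabname    => 'big_table',
                method_opt => 'for columns flag size 254'
        );
end;
/

select  endpoint_value, endpoint_number
from    user_tab_histograms
where   table_name  = 'BIG_TABLE'
and     column_name = 'FLAG'
order by
        endpoint_number;

If one of the low-frequency values you care about is missing from that output, the optimizer will fall back to the "half the frequency of the least common value" guess described in the example below.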


The two cardinality problems relate to the differences between "popular" and "non-popular" values and the probability that a small sample size will fail to capture good enough information consistently. If a value does appear frequently in the full data set it is likely to be captured as a "popular" value even in a small sample, but if it doesn't appear frequently enough in the full data set then there is a good chance that it won't appear in a small sample – and that's likely to matter because it's often the low (or lower) frequency data values that are the most interesting.

Consider, for example, a data set with 10M rows, with a flag column that can hold only 5 distinct values, 'A', 'B', 'C', 'X', 'Z'. Scattered around the most recent 10,000 rows in the table there are about 100 rows for each of the values B, C and X – and these are the rows you're most interested in querying. We'll use this data set to demonstrate the principles of how frequency histograms (in particular) can introduce instability.

On (probably extremely rare) good days the default sample size might capture one row each of B, C and X, with about 2,250 rows each of A and Z: so the optimizer will know that it has three rare values for which an indexed access path may be a good idea. Even with this "best possible" information the optimizer's estimate of selectivity will be roughly 1/5500 for each of the three rare values, which means a cardinality estimate of 10M/5500 = 1,818; an error factor of 18, which could easily be sufficient to encourage the optimizer to pick the wrong path in a more complex query.

More commonly you might find just one or two of the three rare values appearing in the histogram – in which case a query for a rare value that wasn't captured by the histogram would use the rule "half the frequency of the least common value", accidentally producing a better (though still poor) cardinality estimate of 909.

Finally, on a really unlucky day, you might find that Oracle didn't find any of the rare values it built in the histogram – in which case the same "half the frequency" rule would apply but this time the cardinality estimate would be 2.5M because the "least common" value would be one of the two popular values.

From day to day, then, a frequency histogram could cycle you through dramatically different cardinality estimates and execution plans. On top of everything else, you could be even more confused to find that the problem never appeared with one set of data, but appeared fairly frequently with a second similar set of data – which could happen if, for example, one of the rare values was also a low or high value in the first set, while the low and high were both popular values in the second set.

Bear in mind, also, that the timing of when you collected the stats could make a difference – perhaps the rare values come into existence between 7:00 am and 10:00 pm and have all disappeared by midnight. What happens if your stats collection happens to take place around 11:00 pm – perhaps on some nights all of the rare values have already disappeared, on other nights some of them still exist.

The second cardinality problem highlights the weakness of height-balanced histograms (and helps to explain the introduction of the hybrid histogram in 12c). Imagine we have requested a histogram of 254 buckets for one of our columns because we know that it holds too many distinct values for a frequency histogram – conveniently (for our arithmetic) the sample happens to select 5,334 rows, exactly 21 rows per bucket.

For height-balanced histograms Oracle records one end-point of each bucket, and identifies a value as popular if it appears as the end-point of at least two buckets. Consider the two special cases: a value that appears in 22 rows in the sample and a value that appears 41 times in the sample. With a little luck a value that appears 22 times could appear once at the end of one bucket and 21 times in the next bucket – making it a popular item; with a little luck a value that appears 41 times could appear 21 times in one bucket and 20 times in the next, just failing to be seen as a popular item. This means the value that (statistically) represents a much smaller amount of data is seen as the more popular value; given the randomness of the sampling it's entirely possible that a height-balanced histogram could cause plans to change every time it is gathered because a critical value could change from popular to non-popular (and vice versa) every time you gather stats.

There are strategies for dealing with some of the problems of frequency histograms and rare values (upgrade to 12c, for example, or create and code for virtual columns that "hide" the popular values); but short of creating histograms programmatically it's generally quite hard to deal with the instability that height-balanced histograms introduce if they rely on a small sample size.

Out of Range
The last common class of optimizer problem reflects the fact that the stats you gather on time-based (or sequence based) columns can go dangerously out of date the moment you've finished gathering them. The impact they have is that cardinality estimates can change catastrophically, leading to dramatic changes in execution plans. Fortunately (though, perhaps, bafflingly) it's a problem that doesn't always appear for data sets that seem to be fairly similar.

To keep things simple I'll talk about a data set that doesn't need any form of histogram, though the problem does appear (with a couple of extra sources of instability) when you have histograms in place. Imagine that on New Year's Eve you have a column that holds date-only values covering the whole of 2016 (and nothing else) for a total of 366 days at approximately 1,000 rows per day. Since, by default, statistics will only be gathered automatically after the number of table modifications exceeds 10% of the current row count, the automatic stats gathering job might not refresh the stats until roughly 5th Feb 2017. What does the optimizer do if we run a query requesting "all the data for 15th Jan 2017" at some time around the end of January?

Based on our knowledge of the data we expect to return roughly 1,000 rows. The optimizer notes that we are running a query for data that is outside the recorded date range of 1st Jan to 31st Dec 2016 and scales its initial estimate down according to a simple "linear decay" rule. The existing data range (from Oracle's perspective) is 365 days (31/12/2016 – 01/01/2016 = 365), so the optimizer assumes that in a further 365 days from 31st Dec 2016 the number of rows per day will have dropped from 1,000 to zero. So its arithmetic will say that on 15th Jan 2017 the number of rows per day will have dropped by 1,000 * 15/365, giving an estimate of 959 rows – which probably isn't too bad in this particular example.

giving an estimate of 959 rows – which probably isn’t too bad in moment when an input bind variable has a value that is now
this particular example. outside the high value. Imagine (an example, I have seen in
production systems) a query like:
Things get worse, though, if we start working with range-based
predicates. If, around the end of Jan 2017, we were to run a select * from tableX … where tableX.entry_stamp >= sysdate –
query requesting data where the date is greater than 15th 3/24 and …;
Jan, we probably expect an estimate of about 15,000 rows.
Unfortunately the optimizer is not self-consistent with what In other words: “show me everything that’s arrived in the last
it sees as “the future”; the estimated cardinality for “date_col 3 hours”. In a system that was receiving orders at a rate of 3
>= 15th Jan” is the same as “date_col = 15th Jan” – and the per second with timestamps accurate to the second this gives
estimate of 959 rows is suddenly much more of a threat. (As a roughly 10,000 rows. If the query was optimized shortly after
minor note, the estimated cardinality for the predicate “date_col stats were collected a suitable plan appeared and everyone was
between 15th Jan and 30th Jan” would actually go back to 1,000, happy because the optimizer’s estimate was in the right ballpark
i.e. using an estimate that didn’t allow for linear decay, but at – several thousand rows. If the plan was flushed from memory
the same time ignored the actual date range.) and re-optimized a few hours later then the new cardinality
estimate for tableX was 3 and the plan was a disaster. (For a few
Unlucky variations on this theme can be catastrophic, especially weeks the workaround to the problem was to gather stats on
when you remember that a single parse call can optimize a the table when the plan went wrong – the final workaround was
query (sensibly, we hope, and with a good cardinality estimate) to use a call to dbms_stats.set_column_stats() to update the
and leave a sharable cursor in place that is used for hours, or column’s high value with a safe value at regular intervals).
even days, very efficiently. Eventually such a cursor may be
flushed from memory and the statement re-optimized at a

Conclusion
I've covered a few of the commonest patterns that lead to unexpected execution plans. Some of the problems are
“static” problems – the plan is always bad – some are the rather more puzzling “dynamic” problems where plans change
catastrophically at random intervals for no apparent reason.
There are always ways to work around these problems, though some of the workarounds are basically undesirable and some
of them tend to be contrary to company policy or support contracts for 3rd party applications; nevertheless we can often
make some headway against even the worst problems if we recognise the patterns that cause them and invest some effort in
controlling the statistics that the optimizer is basing its decisions on.

ABOUT THE AUTHOR
Jonathan Lewis, Freelance Consultant, JL Computer Consultancy
Jonathan's experience with Oracle goes back more than 25 years. He specialises in physical database design, the strategic use of the Oracle database engine and solving performance issues. Jonathan is the author of 'Oracle Core', 'Cost Based Oracle – Fundamentals' and 'Practical Oracle 8i – Designing Efficient Databases' and has contributed to three other books about Oracle. He is one of the best-known speakers on the UK Oracle circuit, as well as being very popular on the international scene, having worked or lectured in 50 different countries. Further details of his published papers, presentations and tutorials can be found through his blog.

Blog: jonathanlewis.wordpress.com
uk.linkedin.com/pub/jonathan-lewis/2a/93a/340
@JLOracle

Coming soon to OracleScene: Ask Jonathan

If you have a question or problem around: performance, trouble-shooting, optimizer behaviour or internals, which can be described in a few sentences and would like an expert opinion, then try "Ask Jonathan", a new feature for Oracle Scene. In each issue Jonathan Lewis will respond to a short list of questions sent in by readers on any topic relating to making best use of the Oracle Database engine. Anything related to efficiency will be considered, but questions that will be of interest to the wider audience are more likely to be selected.

Submit your questions, listing the topic area, to [email protected]. Jonathan may summarise your question and, with your prior agreement, may contact you to fill in a bit of background.

Business Analytics

Oracle's Modern Analytics Platform
Antony Heljula, Technical Director, Peak Indicators

About 10 years ago I was absorbed into Oracle, along with many others, through its acquisition of Siebel. Siebel's dominance in the CRM market was of course the big attraction to Oracle at the time, but let's not forget about Siebel CRM: the real jewel in the crown was Siebel Analytics!

Siebel Analytics was quickly renamed Oracle Business Intelligence Enterprise Edition (OBIEE) and became Oracle's strategic BI platform (there wasn't much competition within Oracle, to be frank). Whilst Siebel CRM eventually ended up being overtaken by other products (e.g. Oracle CX Cloud) and is essentially obsolete, OBIEE has evolved continuously and today it remains a key player in the analytics space.

Back in those Siebel days, Siebel Analytics was easily dismissed by competitors as a "CRM-only reporting tool". By simply changing its name to OBIEE, the product instantly became an "enterprise" reporting tool that was capable of supporting any industry vertical.

But for years we witnessed the same old battle over and over again. OBIEE vs Qlik. OBIEE vs Tableau, or to put it another way: "Enterprise" vs "Self-Service".

The Tableau sales people were always great at showing how quickly you can create cool visualisations ("you mean we don't need IT?"). The problem was that after 6 months of Tableau the customers would be in trouble since their "self-service" BI strategy had become the dark web… no control, no standards, no security, no governance.

But on the other hand, nobody wants to wait 6 months for IT to deliver a report either. Deadlock.

The same competition exists even today, but the situation has changed significantly. Whilst the likes of Tableau continue to operate in one segment of the analytics market, Oracle now operates in three.

We are seeing the analytics marketplace segment into three different types of product designed to support three specific business needs or functions:

• Enterprise e.g. HR Manager
• Self-Service e.g. HR Analyst
• Search e.g. HR Operations

Figure 1 below outlines the high-level differences in each of the three segments:


Enterprise:
• Enterprise model delivered by IT, phased delivery
• Supports both analytical & operational
• Governed and secure
• Dashboards, pixel perfect, ad hoc
• Some upload capability

Self-Service:
• For power users & analysts
• Upload your own data
• Connect to / join with any data source
• Strong on visualisations, not on formatting, security or code reuse
• Knowledge of data required

Search:
• Designed to answer ad hoc questions
• Any user with zero training
• "Google Search" style interface e.g. revenue monthly 2014 bar chart
• Focus on ease of use, fast response
• Limited layout/formatting features

FIGURE 1

Together, the above capabilities form what I would like to define as a modern analytics platform. Great if you have all three, tough if you only have one!

Let's now take a look at how Oracle's modern and unified analytics platform delivers the above functions.

1) Enterprise
BI Dashboards have always been a core part of OBIEE, and in more recent times, the Oracle BI Cloud Service (BICS). The core capabilities of OBIEE as an Enterprise tool are:

• Self-service dashboards delivered via IT (typically phased releases)
• Role based & personalised
• Interactive & intuitive
• Supports both analytical and operational (real-time) needs
• Ability to build custom ad hoc reports (against IT delivered data sources)
• Drill-down from summary to detail
• Governed & strong on security
• Great integration features (javascript, security, 3rd party portals etc)
• High fidelity, pixel-perfect reporting (BI Publisher)
• Mobile enabled

Enterprise tools are excellent from a governance and security view point, but will often have limitations such as:

• Limited capability for end users to consume their own custom content and data sources (e.g. spreadsheets)
• Data sources typically have to be logically modelled by a developer before end users can build reports
• The tools have a wide variety of charts and formatting features, but are lacking the more advanced charting features available in other visualisation tools (e.g. Oracle Data Visualization)

2) Self-Service
Oracle Data Visualization has enabled Oracle to take a giant leap into the self-service market. Feedback from our customers is that it is much easier and more intuitive to use than other "legacy" self-service tools on the market.

Here are its key features:
• Ideal for Business Analysts & Power Users
• Does not require involvement from IT
• Comes embedded within Oracle BI or as its own standalone desktop version
• Excellent on visualisations & ease of use
• Provides connectors to a wide variety of on-premise & cloud systems. Users can seamlessly pull data from "governed" data sources such as OBIEE (via logical SQL or analyses)
• Great features such as trends, outliers, storyboarding, forecasting, free-form layout
• Supports "mash-ups" involving a wide variety of data sources (cloud, on-premise databases, spreadsheets etc)

Self-Service tools are extremely versatile but do come with limitations:
• End users will need to have some technical experience as well as knowledge of the backend data sources
• Governance & security can often be an issue – it is harder to control who has access to reports and to restrict what information people can see
• Administrators have little or no visibility of what reports are available or if users are building duplicate reports
• Self-service tools do not benefit from code re-use; you could have 10 reports using the same metrics but each with a slightly different definition
• Enterprise features such as variables and dashboard prompts are not as comprehensive, so it is harder to build sophisticated security models or applications in a Self-Service tool
• When you build visualisations within Oracle Data Visualization, the content cannot be viewed on OBIEE Dashboards – the visuals are completely separate to dashboards (maybe this will change in the future)

NOTE: Oracle Big Data Discovery is another self-service analytics product; it offers powerful data discovery and visualisation features aimed at data analysts / data scientists.

3) Search: BI Ask
BI Ask has had a relatively quiet introduction into the market, you may not have heard of it! But it comes embedded in Oracle BI (part of Oracle Data Visualization) and its simple "google search" style interface offers great potential as a "zero training" ad hoc question & answer tool:

Revenue X  2010 X  Per NameMonth X  bar chart X

The key features are:
• Designed for ad hoc questions & answers
• Google-style search bar
• Any user, zero training required
• Type-ahead / auto-complete features
• Metadata & data indexed
• Queries run against data sources provisioned by IT (Subject Areas)
• Voice integration planned

The search capability is achieved through indexing Subject Area contents within Oracle BI (both metadata and data).

Search tools naturally will come with limitations:
• They are designed for simple questions (e.g. "Sales 2016 Region Bar-Chart"), so you cannot ask questions that involve complex filters or calculations
• Limited opportunity to modify the layout and formatting
• Data sources need to be indexed prior to use, and re-indexed whenever fresh data is available
• Similar to search engines that are available on the internet, end users will expect results to appear promptly, so there is additional pressure on IT to make sure the queries generated will be fully optimised

In Summary
The analytics landscape has changed!
Oracle has delivered a modern analytics platform that provides comprehensive enterprise, self-service and search capabilities
to meet the differing needs of your business functions, all within a single front-end. Customers no longer need to choose
between “enterprise” or “self-service” vendors.
Analytics tools always have their pros and cons and it can be a challenge to balance the governed vs ungoverned ways of
working. It is essential therefore to have a long-term strategy in place to make sure the available tools will be used effectively
and appropriately.

ABOUT Antony Heljula


Technical Director, Peak Indicators
THE Antony is an Oracle ACE for Business Intelligence and is one of Europe’s leading BI
AUTHOR architects with a focus on the Oracle BI and related database and middleware products
including BI Foundation, Exalytics, BI Applications, Spatial, Real-Time Decisions (RTD), Big
Data Discovery, Data-Mining, Endeca, SOA Suite and Oracle VM. He has over 15 years’
experience working with Oracle BI and Data Warehousing and is the Technical Director at
Peak Indicators.

Blog: www.peakindicators.com/blog
uk.linkedin.com/in/antonyheljula
@aheljula

Technology

Responsive
Design With ADF
Responsive Design means building your applications so they change in response to the
available screen size. This allows you to build one application that will look good on
both large and small screens. This used to be cumbersome to achieve in ADF, but with
the new features available in ADF 12c release 2, it has become much easier.
This article will cover two important new features in ADF 12.2.1.x.x: The flowing Masonry
layout and the <af:matchMediaBehavior> tag. We’ll see how they work, where they don’t
work and how you can put them to use in your applications.

Sten Vesterli, More Than Code

Stretch and Squeeze
Most ADF applications start with a stretchable outer layout container like a PanelStretchLayout or a PanelGridLayout. This ensures that your application makes use of the entire available screen area when running on a large screen, but doesn't help you when your screen gets smaller. A stretchable layout on a small screen will behave erratically, because some components can be squeezed smaller, while others can't.

Another Brick in the Wall
To handle the situation of moving to a smaller screen, you can use the new ADF 12c release 2 Masonry layout manager. In my mind, this name conjures up an image of a fixed wall with bricks solidly set in mortar, but that's not what Oracle means with the Masonry layout manager.

Instead, ADF is laying the bricks as it draws the screen, and re-arranges them as you resize the screen. The bricks in a masonry layout can be any ADF Faces component that you provide with one of the special masonry CSS style classes.

How Big is a Brick?
Just like Lego bricks only fit together because they all have an exact 8 mm stud distance, the ADF Masonry layout only fits together if you use bricks of standard sizes. The smallest "brick" is 170 x 170 pixels, and you can make an ADF Faces component this size by setting the Style Class property to AFMasonryTileSize1x1.

Several standard brick sizes are already defined for you: 1x1, 1x2, 1x3, 2x1, 2x2, 2x3, 3x1, and 3x2.

Each brick has an 8-pixel border, so there are 16 pixels between bricks. This means that a 2x1 brick is 356 pixels wide (two times the standard size of 170 plus an extra 16 pixels, making it as wide as two 1x1 bricks with standard distance).

If you are not happy with these sizes, you can apply your own skin and override the standard brick size. However, the Masonry layout is already fickle, so I do not recommend messing with non-standard brick sizes. You can also extend the brick sizes to larger bricks like a 4x4 by defining extra style classes in your skin.

Bricklaying
When ADF renders a masonry layout, it adds bricks in the order they are listed in your ADF source code inside the <af:masonryLayout> tag. They are added in reading order, i.e. from left to right, unless you have reconfigured your ADF application for a right-to-left language. It adds bricks in one row


until it gets to the end of the screen or any layout container the
masonry layout is embedded in and then breaks to the next line.

From now on, whenever ADF picks up the next brick, it tries to fit
it into any open space that it had to leave open in an earlier row.
ADF might break to a new row when presented with a wide 3x1
brick, but it will keep trying to fit smaller bricks into the space
at the end of a line. Also, if you have bricks of different heights,
there might be space left in the middle of a line that ADF can fill
later with a smaller brick.

Building Ugly Walls


Pretty masonry is built from lots of little uniform bricks. Ugly
masonry is built from coarse bricks of uneven size.
If you want to use the Masonry layout, don't just add bricks of various sizes in random order. ADF follows a simple algorithm for placing tiles and cannot automatically create a nice layout if you give it pieces that don't fit.

So, if you decide to use a masonry layout so your application will


look good on tablets and/or phones, place your masonry layout
inside an outer container that limits it to something that still
looks good. For example, a Panel Group Layout with a max-width
style, like this:

<af:panelGroupLayout id="pgl1" layout="vertical"
                     inlineStyle="max-width:750px;">
  <af:masonryLayout id="ml1">
    <af:spacer id="s1" styleClass="AFMasonryTileSize2x1"
               inlineStyle="background-color:orange;"/>
    <af:spacer id="s2" styleClass="AFMasonryTileSize1x1"
               inlineStyle="background-color:green;"/>
    <!-- further tiles omitted -->
  </af:masonryLayout>
</af:panelGroupLayout>

Remember to use max-width, not the fixed width CSS style.


Building Pretty Walls
If you want to use the Masonry layout to achieve an acceptable layout, you need to make sure you have a sufficient supply of small bricks, i.e. elements inside the <af:masonryLayout> styled with AFMasonryTileSize1x1. If you have enough of these, ADF will eventually fill out all the holes in the masonry layout caused by larger bricks not fitting tightly together.

Note that you probably don't want to allow a masonry layout to expand without limit. On my large monitor workstation, the above layout looks like this: if you place a masonry layout inside something that has a fixed width, the tiles are never reordered. On the other hand, a max-width just means that the reordering stops once the outer container has reached that width.

Explicit Responses
If you want to react with more precision to changes in the display size, you can use the <af:matchMediaBehavior> tag. This tag allows you to change property values based on a CSS media query string.

MatchMediaBehavior Example
You use this tag inside another tag to change a property value
on the tag, like this:

<af:panelFormLayout id="pfl1" rows="6">
  <af:matchMediaBehavior propertyName="rows"
                         matchedPropertyValue="12"
                         mediaQuery="screen and (max-width: 768px)"/>
  <!-- form items omitted -->
</af:panelFormLayout>


This setting means that if the screen width is less than 768 pixels, the rows property has the value 12 instead of the default 6. The syntax for the mediaQuery property is a standard CSS media query – you can see some examples at http://www.w3schools.com/css/css3_mediaqueries_ex.asp.
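For orientation, here are a few typical media query strings of the kind the mediaQuery property expects. These are generic CSS examples for illustration only, not values taken from a tested ADF application:

• screen and (max-width: 480px) – phone-sized screens
• screen and (min-width: 768px) and (max-width: 1024px) – tablet-sized screens
• screen and (orientation: portrait) – any device held in portrait orientation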
Undocumented Feature…
Unfortunately, the <af:matchMediaBehavior> tag is almost completely undocumented, except for a brief section in the documentation, providing only an example like the above. The Tag Reference for Oracle ADF Faces (12.2.1.2) doesn't even contain this tag, and the ADF Faces demo application just shows the same example as the documentation.

…That Barely Works
The reason for this lack of documentation seems to be that this exact example is the only thing that works. If, for example, you don't explicitly set the property that you refer to in your <af:matchMediaBehavior> tag, your application comes crashing down with a NullPointerException:

Error 500--Internal Server Error
java.lang.NullPointerException
  at oracle.adfinternal.view.faces.taglib.behaviors.MatchMediaBehaviorTag.getBehaviorString(MatchMediaBehaviorTag.java:70)
  at oracle.adfinternal.view.faces.facelets.rich.MatchMediaBehaviorHandler.getBehavior(MatchMediaBehaviorHandler.java:72)

This happens in the above example if the PanelFormLayout doesn't contain the rows property.

There are many places where this would be useful, but few of them seem to work. The obvious use case is to de-clutter your user interface on smaller screens, but try as I may, I cannot change the rendered or visible property of anything. As the feature is effectively undocumented except for a few attributes, we are unable to raise bugs against it.

Conclusion
If you are building an application that needs to work on a desktop and on a tablet in both orientations and you have a number
of very small pieces of information, masonry layout could be just what you are looking for. On the other hand, masonry layout
is very inflexible about its tiles – they have an exact pixel size and your information needs to fit in the box.
The <af:matchMediaBehavior> tag should be considered a beta feature and not be used in production applications. It is unable
to do any of the useful things you’d expect it to do, but it can crash your application with ugly HTTP-500 errors. The idea is good
though, so check this feature again in future releases.

ABOUT Sten Vesterli


Principal, More Than Code
THE Sten Vesterli is one of the world’s leading experts on Oracle ADF. He has written two books
AUTHOR on ADF already and is currently writing a third, called ADF Survival Guide. Sten is an Oracle
ACE Director and helps ADF customers world-wide with ADF online and in-person training,
mentoring and architecture.

Blog: www.vesterli.com/blog
www.linkedin.com/in/stenvesterli/
@stenvesterli

OracleScene Needs You!


We’re expanding the editorial team and have vacancies for
3 deputy editors - 2x Tech & 1x Apps.
If you have a passion for reading well written Oracle focused content, can offer
editorial advice to authors, are able to spot an error at 50 paces and can devote time to
delivering 4 editions per year then we'd love to hear from you. Head to www.ukoug.org/editorialteam to find out more.
To apply, send a short paragraph to [email protected] about why you should be
considered for the role including any relevant experience and Oracle interests.

Technology

What to Expect
From Oracle
Database 12c

With each new release of the Oracle Database come fundamental architectural changes, driven by new technologies and user requirements, as well as smaller enhancements that make life easier for DBAs and developers.

Maria Colgan, Oracle

This has never been more evident than with Oracle Database 12c, which has been the most rapidly adopted release in over a decade. In this article I'll share some of my favourite new 12c features, both big and small, to give you a flavour of what to expect after upgrading to the latest release!

Fundamental Changes

Multitenant
With Oracle Database 12c comes a major change in the database architecture. Instead of having a stand-alone database for every application, Oracle Multitenant provides a new database consolidation model in which multiple Pluggable Databases (PDBs) are consolidated within a Container Database (CDB). This allows the PDBs to share the memory and background processes of a common CDB, while keeping many of the isolation aspects of single databases.

FIGURE 1: ORACLE'S NEW MULTITENANT ARCHITECTURE

The obvious benefit of this new approach is consolidation. By sharing memory and the background processes you can accommodate more databases on a single server. There is also less administrative overhead, as you can manage multiple databases as one, making back ups and patching more efficient. But what is probably the most appealing part of the new architecture is the ability to unplug and plug a PDB from one CDB to another, making upgrading either the database software or the underlying server less painful.

However, my favourite aspect of Multitenant came with Oracle Database 12c Release 2: the ability to provision a new database by hot cloning a PDB. Hot cloning allows you to create full copies of a production database for testing or development without interrupting operations on the production PDB.
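To give a flavour of how lightweight provisioning becomes, here is a minimal sketch of a local clone; the PDB names and the file name mapping are illustrative only:

CREATE PLUGGABLE DATABASE hr_test FROM hr_prod
  FILE_NAME_CONVERT = ('/u02/oradata/hr_prod/', '/u02/oradata/hr_test/');

ALTER PLUGGABLE DATABASE hr_test OPEN;

In Release 2 the source PDB can remain open read-write while the clone is taken, which is what makes this practical for refreshing test and development environments.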

The default install with 12c automatically creates a CDB with


one PDB. No application changes are required to take advantage
of this new architecture, so there is no harm in trying it.

Database In-Memory
It's long been known that a column format is ideal for analytics, as it allows faster data retrieval when only a few columns are selected but the query accesses a large portion of the data set. Up until now the Oracle Database has only stored data in a row format. With the introduction of Database In-Memory, data can now be populated into memory both in a row format (the buffer cache) and a new in-memory optimized column format, simultaneously.

FIGURE 2: ORACLE'S UNIQUE DUAL-FORMAT IN-MEMORY ARCHITECTURE

The database maintains full transactional consistency between the row and columnar formats, just as it maintains consistency between tables and indexes. The Oracle Optimizer is fully aware of what data exists in the column format and automatically routes analytic queries to the column format and OLTP operations to the row format, ensuring both outstanding performance and complete data consistency for all workloads without any application changes. There remains a single copy of the data on storage (in a row format), so there are no additional storage costs or redo / undo generated.

It's extremely easy to begin using Database In-Memory, as only two setup steps are required.

First you need to allocate an In-Memory column store, which is a new component of the System Global Area (SGA), called the In-Memory Area. It is a static pool within the SGA, whose size is controlled by the initialisation parameter INMEMORY_SIZE (default 0). Then you need to specify the new INMEMORY attribute either on a tablespace, table, (sub)partition, or materialised view, as only objects with the INMEMORY attribute are populated into the In-Memory column store. Once the objects are populated, your application will automatically begin utilising the columnar format for any analytical queries.
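As an illustration, a minimal sketch of those two steps might look like the following; the size and the table name are examples only, and the instance needs a restart for the new In-Memory Area to take effect:

-- Step 1: size the In-Memory Area (a static pool, so a restart is required)
ALTER SYSTEM SET inmemory_size = 2G SCOPE=SPFILE;

-- Step 2: mark the objects you want populated into the column store
ALTER TABLE sales INMEMORY;

-- Optional check: population status of In-Memory segments
SELECT segment_name, populate_status FROM v$im_segments;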
Sharding
Starting with Oracle Database 12c Release 2 it is possible to horizontally partition or shard a very large database across a pool of independent databases called shards (up to 1000 shards). Each shard runs on a separate server and no shared storage or clusterware is required. There is complete fault isolation between shards and data is partitioned across the pool based on a sharding key. The pool of databases is presented to the application as a single logical database. By specifying the sharding key, application queries are automatically directed to the appropriate database or shard.

FIGURE 3: ONE MASSIVE ORACLE DATABASE CAN NOW BE SHARDED INTO A POOL OF SMALLER DATABASES

However, it is possible to execute queries across all shards to get a holistic view of the entire data set. Sharding enables applications to scale data, transactions, and users to any level, simply by adding additional databases (shards) to the pool.

JSON in the Database
Although storing JSON (JavaScript Object Notation) in the database is not an architectural change, it is extremely useful technology worthy of a mention! Unlike XML, there is no new JSON data type in 12c. Instead JSON is stored as text, in any table column, using a VARCHAR2, CLOB or BLOB data type. Using existing data types ensures that JSON data is automatically supported with all of the existing database functionality, including Oracle Text search and Database In-Memory.

It's extremely easy for existing database users to access information within a JSON document, using the standard dot notation in SQL.

For example, you can select the city for each customer from within the JSON column using the following command:

SELECT c.json_column.address.city FROM customers c;

There are also a number of new JSON operators including IS JSON (to filter column values or to create constraints), JSON_VALUE (to select one scalar value in the JSON data and return it to SQL), JSON_EXISTS (to use in the WHERE clause to filter rows based on properties), and JSON_QUERY (to select a (scalar or complex) value in the JSON data).

SELECT json_query(custdata, '$.address[*].city' WITH ARRAY WRAPPER) FROM customers;
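As a further hedged sketch of how these operators fit together (the table and column follow the customers/custdata example above; the constraint name and JSON paths are invented):

-- Only well-formed JSON can be stored in the column
ALTER TABLE customers ADD CONSTRAINT custdata_is_json CHECK (custdata IS JSON);

-- Return a scalar from the document and filter on the presence of a property
SELECT json_value(custdata, '$.address.city') AS city
FROM   customers
WHERE  json_exists(custdata, '$.address.postcode');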

Small But Useful Enhancements
Some of the most appealing new features in Oracle Database 12c are small enhancements. Here are a couple that I think will make your life easier and shouldn't require a huge effort to take advantage of.

Online Statistics Gathering
From Oracle 9i onwards, whenever an index is created, Oracle automatically gathers optimizer statistics. The database piggybacks the statistics gather on the full data scan and sort operation necessary for the index creation. This approach has worked so well, few people even realise it's happening. So in Oracle Database 12c, the same technique is now applied for direct path operations such as Create Table As Select (CTAS) and INSERT /*+ APPEND */ As Select (IAS) operations into an empty table or partition. Piggybacking the statistics gather as part of the data loading operation means no additional full data scan is required to have statistics available immediately after the data is loaded.
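A quick way to see this in action is a sketch along the following lines (table names are invented). After the direct path load the statistics should already be in place, with no separate DBMS_STATS call:

CREATE TABLE sales_copy AS SELECT * FROM sales;

SELECT num_rows, last_analyzed
FROM   user_tab_statistics
WHERE  table_name = 'SALES_COPY';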
The additional time spent on gathering statistics is small, compared to a separate statistics collection process, and guarantees accurate statistics readily available from the get-go.

Longer names
Prior to Oracle Database 12c Release 2, all object names were limited to 30 bytes. Starting in 12.2 the limit has been increased to 128 bytes, making it easier to give all database objects descriptive names. However, you should remember that the limit is now 128 bytes, not characters. If you're using a multi-byte character set be careful you don't get too carried away with your descriptive names.

Create table for Partition Exchange
One of the benefits of partitioning is the ability to load data quickly and easily with minimal impact on the users by using an exchange partition command. The exchange partition command allows you to swap the data in a non-partitioned table into a particular partition in your partitioned table via a sub-second dictionary operation (no physical data moves). The command can only succeed if the non-partitioned table is identical to the partitioned table both in shape and semantics. In Oracle Database 12c Release 2, a new DDL command (CREATE TABLE FOR EXCHANGE WITH) was introduced that will create a new table, absolutely identical in both shape and semantics to the partitioned table, so the partition exchange command will always succeed.

CREATE TABLE sales_jan_2017 FOR EXCHANGE WITH TABLE sales;
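To complete the picture, a hedged sketch of how the new table might then be used; the staging table and the partition name are illustrative:

-- Load the standalone table, then swap it into the partitioned table
INSERT /*+ APPEND */ INTO sales_jan_2017 SELECT * FROM staging_jan_2017;

ALTER TABLE sales
  EXCHANGE PARTITION p_jan_2017 WITH TABLE sales_jan_2017
  INCLUDING INDEXES WITHOUT VALIDATION;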
Approximate Analytic Functions
In some cases the level of precision within an analytical query can be reduced, in a trade-off for a shorter elapsed time. For example, 'how many distinct visitors came to our website last month?' An approximate answer that is, say, within 1% of the actual value but is returned 10X faster is not only sufficient but also preferable. In order to address this requirement, Oracle has introduced three new approximate functions that provide approximate results in a fraction of the time: APPROX_COUNT_DISTINCT (12.1), APPROX_PERCENTILE, and APPROX_MEDIAN. Let's take APPROX_COUNT_DISTINCT as an example. This function uses a HyperLogLog algorithm, which enables the processing of large amounts of data significantly faster than COUNT DISTINCT with negligible deviation from the exact result. The APPROX_COUNT_DISTINCT function does not use sampling and its results are 100% deterministic. This technique was originally designed to improve the performance of statistics gathering in 11g and is used by the DBMS_STATS package to calculate the number of distinct values in a column when the ESTIMATE_PERCENT parameter is set to AUTO_SAMPLE_SIZE (the default).
Create table for Partition Exchange Upgrading to a new release is always a daunting task but
One of the benefits of partitioning is the ability to load data hopefully some of these new features will make it worth
quickly and easily with minimal impact on the users by using an your while!

ABOUT Maria Colgan


Master Product Manager, Oracle
THE Maria Colgan is a Master Product Manager at Oracle and has been with the company
AUTHOR since version 7.3 was released in 1996. Maria’s core responsibility is the Oracle Database
In-Memory Option. She is responsible for evangelising new database functionality and
getting feedback from customers and partners incorporated into future releases of
the product.
Based on her extensive experience in Oracle's Server Technology Performance Group, she creates material and lectures on the Oracle Database In-Memory Option and the best
practices for incorporating it into Oracle environments. She is also a contributing author to
the In-Memory blog http://blogs.oracle.com/In-Memory.

Blog: https://sqlmaria.com
@SQLMaria


Want to give User Group


back to the Community?
UKOUG is about community and being part of something collaborative
and influential. Our volunteers are the vital ingredient in ensuring that
our members; come together, share insights and are provided with
relevant products and member benefits.

Here’s why you should consider volunteering:


• Give your CV a boost
• Meet new people
• A chance to give back
• Improve your confidence
• You'll be making a difference

www.ukoug.org/volunteers

UKOUG Events 2017

TAKE YOUR PLACE AT


UKOUG’S 2017 EVENTS
Throughout the year UKOUG run a number of events focused around a product/
theme or regional community. There are many ways you and your company can get
involved whether it is; joining a committee, submitting an abstract to speak, taking
out a commercial sales opportunity or registering to attend.
Here’s a selection of what’s taking place in 2017 – view the full calendar at
www.ukoug.org/events

UKOUG APPLICATIONS’ OUG IRELAND 2017


JOURNEY
TO CLOUD 2 3 & 2 4 M A R C H 2 0 1 7 | TH E G R E S H A M H OTE L , D U B L I N

Over 200 attendees are expected at social for further opportunity to


The Gresham Hotel this March to hear engage with our exhibiting partners
from the likes of Maria Colgan, Alex and other attendees and day two
Nuijten, Edward Roske, Kiran Tailor and rounds off with a Question Time
other such noteworthy presenters. featuring a panel of leading Oracle
8 MARCH 2017 | LON DON
ACEs and Ask Tom’s Maria Colgan &
The agenda hosts over 55 sessions
This event presents the perfect Chris Saxon.
on subjects surrounding: Business
opportunity to find out all you Analytics & Big Data, Database, For more information and to book any
need to know before moving your Development and Cloud & APEX. last remaining places head to:
business to the cloud. The agenda Day one culminates with an event www.oug.org/ireland
features content surrounding ERP &
HCM, Digital CX and Technology.

www.ukoug.org/cloud | #ukoug_cloud
www.oug.org/ireland | #oug_ire


UKOUG EMEA PEOPLESOFT UKOUG NORTHERN


ROADSHOW 2017 TECHNOLOGY SIG 27 APRIL 2017
MANC H ESTER
26 APRI L 2017 | LON DON

Oracle’s Marc Weintraub will once again be Responding to the needs of the UKOUG membership we are
addressing the UKOUG audience as part of the delighted to provide a Northern Technology SIG which will
European roadshow. For more information on deliver three tracks of content surrounding the Database,
the agenda & to register head to Systems and RAC, Cloud Infrastructure & Availability.
www.ukoug.org/peoplesoft Attendees will have the opportunity to pick and choose
sessions from each track, creating a tailored learning
experience. Find out more at: www.ukoug.org/ntech
www.ukoug.org/peoplesoft | #ukoug_psoft

UKOUG EPM & OUG SCOTLAND 2017


HYPERION 2017 21 JUNE 2017 | GLASGOW
14 JUNE 2017 | ESHER
The largest annual gathering of This multi stream event will deliver
Scotland’s Oracle users will once again the latest Oracle insights by leading
Attendees looking to discover more return to the Radisson Blu Hotel in industry experts. Look out for the
about their EPM products will be Glasgow. agenda, available in April,
heading to Sandown Park Racecourse at www.oug.org/scotland
this June. Call for papers is open until
3rd April with the agenda launching
late April. Head to www.ukoug.org/
epmhyperion to find out how to
take part.
www.ukoug.org/epmhyperion
#ukoug_epmhyp www.oug.org/scotland | #oug_scot

UKOUG LICENCE MANAGEMENT EVENT 2017


2 4 O C TO B E R 2 0 1 7 | LO N D O N

The much talked about licensing event will address VLSS, Flexara Software, Version1, CGI and Nymad Limited
members concerns over auditing, compliance and answer and attendees will also have the opportunity to hear from
how best to engage with Oracle’s License Management their peers in dedicated roundtable sessions. For the full
Services team. agenda and further details on the day visit:
Presentations will be delivered from companies such as: www.ukoug.org/lme

www.ukoug.org/lme | #ukoug_lme17

Oracle E-Business Suite

Taking Oracle
E-Business Suite
From the
Back Office
to Mobile

In today’s highly competitive business environment, business users want all the
information to be available at their fingertips no matter where they are. Traditional
ERP applications have been used as back office applications accessed using desktops
and laptops, which are confined to a closed office space, most of the time.
Vishal Goyal, Program Manager, Fujitsu Consulting India

This creates a new business opportunity to make ERP systems available on mobile phones with real time access no matter where we are. This is creating huge demand for mobile apps to be made available for ERP applications with no or minimal additional investment.

It is very unfortunate that in this digital age, so many organisations still keep using their ERP applications as back office and do not take them to mobile devices.

The fault lies mostly with the IT managers of these organisations who do not take these solutions to the business users. Business users can be unaware of the different options available and how tough or easy it is to go mobile. The onus is therefore on IT managers to explore these options and to help users improve their experience of using Oracle E-Business Suite and improve their efficiency and productivity in completing these day to day functions.

There are several options available to digitise Oracle E-Business Suite and expand it on mobile apps.

Standard Mobile Apps
To start with, it can be as simple as using the standard mobile apps available from Oracle for Oracle E-Business Suite, JD Edwards etc. Oracle has released standard mobile applications across different modules (Asset Management, HCM, Financials, Logistics, Procurement, Projects, Manufacturing) which will help business users perform multiple functionalities no matter where they are.

These are standard apps and they come with no additional license cost. All one needs to do is to patch the application, download and configure the app and start using it. It is the quickest way to go mobile with your Oracle E-Business Suite. On the flip side, this approach needs the user to use a different app for each application area, which means logging in and out as many times as there are apps being used. Also, a lot of additional functionality cannot be included in this case. An example of some of the standard apps which can be configured with Oracle E-Business Suite is shown below. There are apps available for more than 20 modules in Oracle and the same is the case for JD Edwards.

Log in to Oracle E-Business Suite mobile apps using your Oracle E-Business Suite login credentials (user name and password). Mobile apps are compatible with both Release 12.1.3 and Release 12.2.3 and onwards, as well as iOS 8.0 or higher and Android 4.1 or higher. The next images show live screen shots for iProcurement, Procurement, Approvals and Fusion Expenses mobile apps.


If clients want to follow this approach, they need to decide which of the mobile apps best fits their business needs. Below is a suggested approach.

• Client assesses the different mobile apps available and which ones best suit them
• The relevant Oracle E-Business Suite functional consultant advises on the patches to be applied on the server and also takes care of the additional configuration required
• Mobile app to be installed on the test devices and the Oracle E-Business Suite URL configured
• If the client's Oracle E-Business Suite is behind a firewall, a client VPN will need to be installed on the mobile devices. If Oracle E-Business Suite is exposed on the internet, no VPN may be required
• Once testing is done, step 2 above can be repeated in UAT and production instances
• The mobile app can be distributed through any MDM solution which the client may be using, or communication sent to business users on how to download the apps and the steps to use them

Indicative Cost Assessment
There is no additional license cost. Oracle E-Business Suite mobile apps are available as part of existing product licenses and all of them are built using Oracle Mobile Application Framework (Oracle MAF), as well as additional components specific to Oracle E-Business Suite provided through the Oracle E-Business Suite Mobile Foundation. To use these mobile apps, you only need to apply consolidated server-side patches and perform some setup tasks to configure your mobile apps on the server. Different versions of the mobile apps may require different configuration steps on the Oracle E-Business Suite server. With the latest mobile foundation release, some level of customisation and branding of the apps is also possible. If clients are interested in doing that, then an additional mobility consultant will be required for the project.

This is the simplest and almost zero cost approach for clients, but it has its own limitations with regards to the functionality offered in the standard mobile apps.

Digitising Oracle Forms Using Mobile Applications
Another option is to look at Oracle Forms modernisation using 3rd party solutions. Oracle Forms is still being used to deliver most of the functionality in Oracle E-Business Suite. I have built solutions working with Auraplayer. Auraplayer offers an adapter which helps create REST services on top of Oracle E-Business Suite and consumes these web services into a mobile app using any platform. There is no coding required and no changes to Oracle Forms. All one needs to do is to buy a license from Auraplayer and start using it. License costs may look high initially, but how easy it is to expose Oracle Forms logic as is, without any change, makes it worth the investment.

Indicative Cost Assessment
1. Annual Subscription License from 3rd party vendor
2. Existing Oracle apps DBA and Oracle E-Business Suite consultants can work on installation and creation of web services
3. A dedicated mobility consultant will be required to build the mobile app using the framework selected by the client (Hybrid, Native etc)
4. A normal business scenario like automating "Creation of employee in HR module" can be modernised in 4-6 working days. Accordingly we can assess the total time required based on all the business scenarios to be modernised using mobile apps


Being able to automate the Oracle Forms which business users are very accustomed to is a great way to go mobile. This not only helps avoid lots of testing by the business but also the introduction of any changes, as it uses the existing code already built in the form (which is supported by Oracle).

Custom Mobile Apps
More complex business requirements need complete custom mobile app development as per the customisations done in the source application. Any functionality can be built, exposed as a web service and then consumed into a mobile app using a hybrid approach or native app development. This approach takes much longer but is tailor made to the business needs, meets any requirement, can be built using any platform (Native SDK, Xamarin, Oracle MAF etc) and can include any other functionality like digital signatures. Another big advantage is that a single login gives access to all functionalities.

Indicative Cost Assessment
1. Since this is complete custom development, it requires proper requirement gathering with business users to understand the different scenarios to be built. This can take between 2-3 weeks depending on the scope of work
2. Dedicated Oracle E-Business Suite functional, technical, middleware and mobility consultants will be required to do the development and testing. This development effort could range from 2-3 weeks to several months depending on complexity and scope

Building a customised mobile app has its own advantages. Though this is costly and time consuming, complex business needs may need this level of investment to get the real benefits of taking Oracle E-Business Suite to mobile devices.

Conclusion
Oracle ERP can no longer be restricted as a back office application and taking it to mobile devices needs to be considered. It
takes the entire experience to a new level, makes business users more productive and efficient and helps create a rich user
experience. There are a host of approaches to go mobile and the option to be used needs to be discussed among IT and business
teams to ensure the right solution is selected with benefits which will make the investment worth it. Happy Mobility to all.

ABOUT Vishal Goyal


Program Manager, Fujitsu Consulting India
THE Vishal is an experienced and proven technology consultant with a 16-year track record of
AUTHOR delivering results, adding value and motivating teams. Diverse experiences include leading
application support, as well as managing and leading global project teams working across
different geographies. For the last 6 years he has worked managing the rich technology
stack which includes: Oracle ERP, Siebel CRM, Hyperion Planning, WebCenter Content,
and Mobile applications with Fujitsu.

www.linkedin.com/in/vishal-goyal-309b122/

Technology

Oracle Management Cloud –

Finding the Needle


in a Haystack
It’s been said probably more times than you care to remember that; there has been a
huge shift in IT with the on-set of Cloud. Systems management technology has evolved
over the years; but our way of managing and monitoring environments hasn’t, i.e. the
culture of infrastructure monitoring. The scale of infrastructure we have to work across
is huge and transient; finding issues can be like… well, finding a needle in a haystack.
Philip Brown, Director of Cloud Strategy, Red Stack Tech

Oracle Management Cloud is a suite of monitoring and management tools for today's modern IT infrastructure. There are a couple of key things you need to understand about the Oracle Management Cloud. Firstly, it's based in the Cloud; this isn't an on-premise solution which you need to feed and water. The only thing you install is agents which gather operational data. The next thing to know is that this is a suite of tools. You can use these services individually but the benefits of the solution become more compelling when you combine the services. At the time of writing there are seven services: Application Performance Monitoring, Log Analytics, IT Analytics, Infrastructure Monitoring, Orchestration, Compliance, and Security Monitoring and Analytics.

In this article, we are going to explore the Oracle Management Cloud Log Analytics service. Here we will see how this service enables you to work across infrastructure and application tiers to provide a better understanding of errors and issues and turn the huge volumes of operational log data into a useful commodity.

The Problem…
Here is a little equation for you: ((applications + databases) * virtualisation) * cloud = ??? Fundamentally it equals lots of technology; tiers and tiers of technology all generating information which IS vital to the smooth running of the applications and enterprise. Even for the simplest application with one application server and a database server you could have the following log files:

• Host Logs
• Database Alert Log
• Database Audit Log
• Listener Log
• Web Server Logs
• Access Logs
• Application .out Logs

The challenge is that we need to search these log files efficiently and effectively. All the files will be in different locations on different servers so in reality there is no easy way to search all these logs. Traditional monitoring will search for each message in these logs but that is just searching silos of information. Being able to search these logs in a single command and look for trend analysis across these logs AND link this back to application performance issues is actually what we want to be doing. Log information is also transient and quickly gets deleted; being able to look retrospectively across time periods can also provide insight.

The Tools…
Oracle Management Cloud is a PaaS solution providing Log Analytics, IT Analytics and Application Performance Monitoring. At Oracle OpenWorld this year more services were introduced: Infrastructure Monitoring, Security and Compliance, and Security Analytics. Here we are going to talk about one of these components: Log Analytics.


As Oracle Management Cloud is a PaaS solution you don't provide the Platform, Oracle does that for you. All you need to do is send the information to the Management Cloud.

The Setup…
Oracle Management Cloud Log Analytics requires a Cloud Agent which collects the data from the server. This is sent to the Management Cloud directly or via a 'Gateway' which is effectively a proxy agent. In terms of terminology, here is what you need to know.

• Entity – this is the thing: host, database, server, listener, WLS
• Log Source – a logical collection of log files
• Log Entity – a single log file as part of a log source
• Log Parser – this digests the log information

Adding targets into the Management Cloud is done either by editing a pre-defined JSON file with key information or by auto discovery. Not all 'Oracle' components are auto-discovered but I believe this will be changing.

"name": "orcl12c",
"type": "omc_oracle_db_instance",
"displayName": "orcl12c",
"timezoneRegion": "PST",
"properties": {
    "host_name": {
        "displayName": "ldndb01",
        "value": "ldndb01.redstack.com"
…
    "adr_home": {
        "displayName": "ADR Home",
        "value": "/u01/app/oracle/diag/orcl12c/ORCL12C/rdbms"

Editing the file is straightforward, but prior to starting, gather all the information on entity names, log locations etc first as it will fast track the setup. The documentation is very good and clearly explains what needs to be done. It shouldn't take longer than a day from getting access to the Cloud service to uploading log data. For Log Analytics you don't need anything other than a Cloud Agent on the target server.

The Analysis…
In this example, we are looking at Alert Logs across a group of systems. The key thing here is searching at scale and visualisation of data to 'fast track' analysis. If you haven't seen the Log Analytics in Oracle Management Cloud I've annotated the picture:

A: Display Fields – What information do you want to see about each entity
B: Group By – For each log entity the log parser has broken down each log message into groupable attributes
C: Field Summary – Visualisation of Displayed Fields and Group By Fields
D: Histogram – Y Axis records and the X Axis is Date
E: Log Records – Ordered by Date

To search, it's simple: just type what you want to look for, in this example ORA-20011. What this has allowed us to do is search all alert logs for that error; to be fair, existing systems management technology would be able to do this, to a certain degree. However, what existing systems management tools can't do is quickly visualise the search and provide a drilldown for further analysis.

So to take it up a level: how about what are the common ORA- errors across all our environments? Because the Management Cloud has an Alert Log parser we can drag and drop fields into the 'group by' criteria. The log parser allows us to group on 'Error Text'. If we expand down the field summary suddenly we have a clear visualisation of our issues. So we have managed to search across all our alert log sources quickly and categorise ORA- errors without having to pre-define which ones we are looking for. This brings the ability to quickly search large volumes of data with the added intelligence of the log parser understanding the key components of those log messages.

By hovering over the 'Error Text' we can see which targets have been affected; alternatively, by hovering over the 'Entity' we can see which ORA- errors they have hit. If we then click, the histogram is updated to then allow drill-down into times and dates.


Logs, Logs and More Logs…
While searching across a number of alert log files is definitely a move forward in our ability to diagnose and troubleshoot issues, middleware logs provide more of a challenge. The first challenge is volume: while databases have one key alert log, middleware can have ten and they can be much more verbose in their output. While some log files will be automatically discovered, we want to also add log files in. To do this we simply create a log source, which is just a logical definition of a collection of log files, then we add in the specific log files. Depending on the log type there could be a pre-defined parser for it; if not we can use a generic parser or create our own.

Quantity not Quality?
Quantity is the issue, not just in the number of log locations but the volume of data that you need to search. Here are some stats from a recent collection of Weblogic Server Logs. The count is the number of log entities across one logical log group (i.e. a collection of log files). Therefore, in the first logical collection we are looking at 2.7 billion log entries; combined with the second that takes us to 4.7 billion. Do we want to search all that data… maybe… can we… yes! That is the key really: Log Analytics gives us the capability to search across these logs to find the needle in the haystack. We don't have to pick or choose what we want; the platform enables us to search across 5 minutes or 5 weeks and the volume of logs is inconsequential.

You Want More Logs…
Log Analytics is used to trend and analyse log files currently being generated, but you can also look at any log files from any system, or logs from a particular system on which you need to do some 'one-off' analysis and don't need on-going log analytics. For the Oracle Management Cloud, loading data 'On-Demand' is done via a CURL command. The command is simple and the key parameters are highlighted in RED.

curl --insecure \
  -u '[email protected]' \
  -X POST \
  -H 'X-USER-IDENTITY-DOMAIN-NAME:redstack' \
  --form 'data=@C:\cygwin64\alert_orcl.log' \
  "https://redstack.loganalytics.management.us2.oraclecloud.com/serviceapi/logan/uploads?uploadName=Upload1&targetName=orcl&targetType=omc_oracle_db_instance&createTarget=true&logSourceName=DBAlertLogSource&logParserName=db_dbalertlog_body_logtype"

The log file can either be a single log file or a ZIP file, it doesn't matter. The uploadName parameter is a way of logically grouping uploads into the Oracle Management Cloud, more on that in a sec. Finally, the logSourceName and logParserName determine how that log file will be interpreted.

As you load 'On Demand' through the CURL statement you will see the logs appear in the Management Cloud. From here you can drill directly into Analysis. If you give your logs different upload names it will logically group them separately.

Final Thought…
So going back to the phrase from the beginning of the article, '...huge shift in IT...' makes me want to draw out a particular point, and this isn't due to the on-set of Cloud. IT now more than ever needs to demonstrate value fast. Anything that takes months to enable and deep technical knowledge to derive benefit is going to lose before it even gets going. For me the compelling part of Oracle Management Cloud Log Analytics is that you can derive value immediately; it provides Log Analytics capability with pre-defined Log Parsers that understand the data. We shouldn't view Oracle Management Cloud services in isolation and the more we put into the tool the more we can get out.

ABOUT Philip Brown


Director of Cloud Strategy, Red Stack Tech
THE Philip is the Director of Cloud Strategy at Red Stack Tech. His role is to enable clients to
AUTHOR successfully adopt Cloud technology in all its IaaS, PaaS and SaaS forms. He is an active
member of the Oracle community presenting at UKOUG events since 2008 and regularly
blogs and writes articles on topics such as Oracle Management Cloud and Oracle
Enterprise Manager.

Blog: www.redstk.com/blog
uk.linkedin.com/in/philip-brown-159b4b13
@pbedba

Cloud

15
The Office for National
Statistics and Their Journey
From Oracle E-Business
Suite to Cloud Applications
The phrase ‘journey to cloud’ is heard all the time,

minutes but what does this actually mean for the existing
Oracle E-Business Suite team?

with Sarah Green, project manager for the


Office for National Statistics (ONS), talks
and why, but there wasn’t much else we
could do with it. Our team were probably

Sarah Green to Debra Lilley, VP of Certus Solutions


about their move from Oracle E-Business
one of the most experienced end user EBS
teams around.
Suite (EBS) to Oracle Cloud in 2016.
Office for National Firstly congratulations on your Go Live Tell me about those customisations?
Statistics Thank you, we are really proud of what
we have achieved and the difference it is
A good number of the customisations were
around managing the system technically:
making at ONS. We are still on our journey tablespace alerts, processing stats,
but have come a long way. monitoring etc. As we are now using SaaS
someone else looks after the technology so
those are redundant.
What were your drivers for moving to
Cloud? Other customisations were to allow us to do
Our Oracle E-Business Suite system was things, which we can do in the native Cloud,
old and the risk from being out of support to be honest some could be done in Oracle
was unsustainable. Our business is data E-Business Suite but we had customised
driven, providing the official statistics for in earlier releases and not taken up the
the country and yet our corporate systems standard functionality later on.
were old fashioned and didn’t inspire
people to work for us.
The flexibility of the Cloud
A journey has to start somewhere, system has allowed us to
where were ONS?
ONS implemented Oracle E-Business
deliver what we needed, so
Suite in 2004, covering all finance and most of the customisations
HR modules but not payroll. We were on
Release 11, no longer in premium support
simply aren’t needed.
but as we didn’t use payroll it still worked.

ONS hosted their system and have a small We took an approach of ‘like for like’, if it
team of around 4 who have supported the was done in Oracle E-Business Suite it has
system and the business use of it since to be done by Cloud. The system had to be
we first implemented it. Like most Oracle able to do what we had done before, but
E-Business Suite users we had customised not necessarily in the same way. In most
the system considerably. It was well cases the functionality is better and in
documented, we knew what we had done some cases it is different, but we haven’t


compromised any outcome. The phrase How was that go-live day? a big part of the project, but we have had
‘but we always did it’ was banned. Personally I was tired and nervous but great feedback. The system is intuitive
also very excited. As project manager I and users say once they understand the
We also had a lot of unnecessary checks in was in control of everything until that navigation, which is a simple training
the system, I think when we implemented final switchover. exercise, they can understand what they
Oracle E-Business Suite there was a lack of need to do much better.
trust in some of our processes, and so we The migration happened over the
checked everything. We are a much more weekend and if I‘m honest the cutover
mature organisation today, with a lot more was quicker and less frantic than I
trust. An example is expenses, today the expected, but then we had done the
The user experience is very
Cloud system defaults to their manager, preparation and testing; we were ready different but in a good way.
but in Oracle E-Business Suite we allowed for it. People were trained and we had
them to route to whomever they wanted. floor-walkers on hand for any thing in
People like using it.
Explaining that your manager should be those first few weeks.
aware, and they can reroute it as necessary,
made sense. Although it was a ‘big bang’ and we went Reporting is the other big change. A big
live on 31st October, it doesn’t all happen deal is made of the analytics in Cloud
ONS took this opportunity to have a new on day one. rather than simple transaction reporting,
chart of accounts, so there was a lot of and we still have a lot to exploit here. Our
mapping to do. Probably the biggest area It took almost five weeks for all areas to ‘like for like’ approach meant we have
was the many, many reports specific to be considered live, not because parts were created reports in the first instance where
ONS. Everyone in ONS was on board, we not live but because there were several the delivered analytics didn’t cover what
knew we couldn’t customise and we milestones before all processes were used. we needed.
didn’t want to; we wanted to get the best Self-service for HCM was the first area
out of the system. people adopted, our users had HR self One example of where we had
service in Oracle E-Business Suite so were embedded analytics immediately was
keen to use the new system. Another in performance. When we went live we
Cloud Applications don’t stand alone, major milestone was that payroll extract, were about to start mid term reviews
what about your interfaces? and then the first expenses run. Once we in ONS. In the past managers needed
All ONS interfaces are outbound, and had our first month end in Finance I was HR to run reports to see how much had
technically it was simply a case of able to relax in the knowledge we were been completed, now it is available as
replicating the files from the Cloud truly live. standard in the manager’s dashboard. All
application. Sounds simple, and in many the managers love having that immediate
cases it was. A few were more tricky, and insight into their teams.
testing without live data was a challenge
but we were confident they would work.
“Did we have challenges?
Yes of course we did but as I What are your support team doing now?
They still have some bedding in to do,
explained at several steering but their role has changed. They had
We had elected to have a boards, we had ice cubes to extended EBS to its limits and were kept
‘big bang’ upgrade so we had deal with, not icebergs”. busy reacting to issues. The continuous
innovation through 2 releases a year
contingencies in place for means upgrading is now actually Business
each of the interfaces. as Usual, but also understanding that this
Why did you choose a ‘big bang’? new functionality can deliver for ONS.
Mainly because of where we came
from, our Oracle E-Business Suite was Today they are pro-active, looking at how
Not surprisingly the interface with our
integrated Finance and HR, if we took a they can add value to the business with
payroll provider was the highest profile
phased approach we would have had to the tools available to them. There are
and the most difficult to test. People get
run the two in parallel and unstitch EBS as massive opportunities to improve how the
understandably nervous but we had layers
we rolled Cloud out. A ‘big bang’ was less business uses this single source of data.
of contingency here, ranging from manual
risky and much less messy.
input to amending the last electronic file.
We also had almost 3 weeks from the go
We didn’t go for a year-end though, we
live date till the first file was needed so we
will have been live for 5 months before
The EBS team were like a
were able to create files with live data in
that period and test again.
our next year end, plenty of time to runner on a treadmill, lots of
iron out any issues and be even more
confident of our new system.
effort but really boring and
How long was the implementation and
not going anywhere, now
when did you go live?
What did the users think of the new
they are running free with
We committed to Cloud in May 2015,
but we kicked off the implementation on
system? endless opportunities.
The biggest change for users is the ‘look
6th January 2016 and went live on the
and feel’, and change management was
31st October.

www.ukoug.org 35
OracleScene

SPRING 17
Interview with Sarah Green

Everybody is excited by it. Our executive this is. Our support team had to be more And what is next for ONS?
sponsor, Paul Layland has been really than just resources to the project. Lots of that pro-activity I mentioned and
supportive and encouraging – this is good a roll out to our field force who never
for both our staff personally and for ONS. When we had a challenge with the had access to EBS. This will remove the
implementation, it was addressed cultural divide we currently have.
One of the pro-active tasks the support together. No blame, no witch-hunt, just a
team will be working on is taking some resolution. The approach Certus took with Then we have phase II to start after
of our specific reports and adding them prototyping the system, and the mutual year-end, we will be considering adding
to dashboards. They have just taken our challenging of decisions in a positive EPM for our budgeting, and workforce
‘temporary promotion’ report and turned environment meant we were really planning.
that into an infolet, which drills down into comfortable with our build.
the detail; again instant information. We will also look forward to enhancing
our payroll processes, for example we
ONS have been very public with their currently manage overtime in Lotus
How would you sum up the change to project – why? Notes. Standard functionality in Cloud
analytics? We are public sector; we are funded by HCM will allow us to do that in a much
public money. We should be open as to more efficient way, again improving our
how we are spending that investment. It business from our single source of data.
The business of ONS is data, is also important we share our learning
we encourage our staff to and experience with other public sector
bodies, saving them more of that public
be curious about statistical money when they make their move.
This has been and continues
data and now they can apply to be challenging journey but
that same curiosity to our we are up for it and excited
corporate data. As active members of UKOUG about the possibilities.
we have benefited in the past
from other users’ knowledge
How did the engagement with your
systems implementer work?
sharing – now it is our turn to
We had a really good relationship with share.
Certus and it truly was a single project
team, don’t underestimate how important

ABOUT THE AUTHORS

Sarah Green
Project Manager, Office for National Statistics
Sarah is the Project Manager for the Oracle Cloud implementation for the Office for National Statistics and successfully delivered Oracle Cloud ERP and HCM for the organisation in October 2016. This was the first such implementation in government. Sarah now leads the team to embed the Oracle Cloud solution within the Office and maximise its potential for the organisation as the product continues to develop.

Debra Lilley
Vice President, Certus Solutions

Debra Lilley is a VP of Certus Cloud Services, an ACE Director and Member Advocate at
UKOUG. She has worked with Oracle Applications for 20 years and now works exclusively
with Cloud Applications.

Blog: debrasoracle.blogspot.co.uk
uk.linkedin.com/in/debralilley
@debralilley

Technology

Improving Statspack Experience

When you are in Standard Edition, or when you are in Enterprise Edition without the Diagnostic Pack, you cannot use AWR and the performance pages of Enterprise Manager (dbconsole, EM Express or Cloud Control) are empty. As we often have to troubleshoot performance problems that occurred in the past, it is recommended to install Statspack, which requires no additional option licensing.

Franck Pachot, dbi services

Installing Statspack is not difficult at all and is documented in the spdoc.txt provided in the Oracle Home rdbms/admin directory. But, through the years of using it on different versions and environments, there are a few (or more) things I do to enhance its usage. In this article, I would like to share some best practices, code snippets and ideas to improve your experience with Statspack.

I recommend having Statspack collect snapshots at least every hour in any database which is not covered by the Diagnostic Pack. When a user comes and tells me that the database was slow this morning, but is back to normal now, I can see nothing without a history of snapshots. But I will probably see a lot of system and application statistics if I have snapshots from that time.

Installation
Statspack is not installed by default but is easy to install by running spcreate.sql, which you find in ORACLE_HOME/rdbms/admin.

Here is the script (Listing 1) that I use to create a STATSPACK tablespace and install Statspack, to be run when connected as sysdba. As I recommend putting the Statspack tables in their own tablespace instead of SYSAUX, you just have to customise the file name. I usually start with a 2GB tablespace but you can increase it if you have a lot of big SQL statements and want a long retention.

set echo on
whenever sqlerror exit failure
create tablespace STATSPACK datafile '+DATA' size 100M autoextend on maxsize 2G;
define default_tablespace='STATSPACK'
define temporary_tablespace='TEMP'
column random new_value perfstat_password noprint
select '"'||dbms_random.string('a',30)||'"' random from dual;
alter session set "_oracle_script"=true;
@?/rdbms/admin/spcreate
alter session set "_oracle_script"=false;

LISTING 1 - STATSPACK INSTALLATION


You connect as sysdba to run it, and if you want to install it in a pluggable database, you first have to ALTER SESSION SET CONTAINER, or connect through the listener to the PDB service. If you want to install Statspack in CDB$ROOT, and I'll explain why later, in 12cR1 you need to set "_oracle_script" to true, so that the Statspack schema, the PERFSTAT user, can be created without the C## prefix. It's an Oracle maintained script anyway, so no problem in doing that. In 12.2.0.1 this setting is in spcreate.sql but still missing from spdrop.sql (Bug 25233027 opened for this).
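As a minimal sketch of the pluggable database case (the PDB name here is just a placeholder for your own), the session switch before running the creation script could look like this:

$ sqlplus / as sysdba
SQL> alter session set container = PDB1;
SQL> @?/rdbms/admin/spcreate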
You can see that I set a random password. Even if it's only performance data, it includes SQL statements, bind values and information about your application, so it's better to protect it. Change the password later or just don't connect with PERFSTAT. You will probably use it from your own DBA user (all tables are accessible via public synonyms) and can then even lock the PERFSTAT account.

The creation should finish with 'successful'. In case of error, you can deinstall with spdrop.sql.

Filter and Access Predicates
Actually, before installing, there's something I sometimes 'hack' in spcpkg.sql because the gathering of execution plans bypasses the most interesting information: the where clause predicates. This comes from old bugs in 9i where this gathering used to encounter an ORA-7445, because those columns from v$sql_plan are a bit special. The predicate information is not stored as-is and has to go through some 'reverse parsing'. However, there is no issue in the current versions; I query those columns very often and don't tend to see problems anymore. Then, you can replace the commented lines:

, 0 -- should be max(sp.access_predicates) (2254299)
, 0 -- should be max(sp.filter_predicates)

and change them to the ones commented out as 'should be':

, max(sp.access_predicates) -- not supported --
, max(sp.filter_predicates) -- not supported --

If you had already run the spcreate.sql above, just reload the package by running spcpkg.sql (set the current schema to PERFSTAT), but it's better to do that before taking any snapshot or you will have some plans that miss the predicates. Be aware that this is not supported, and I'm not responsible for any side effects.

Snapshot Level
By default, Statspack snapshots are taken at level 5, but I want to gather execution plans and segment statistics, and both are collected only from level 7.

Before setting the default to level 7 and running automatic jobs, I want to be sure that the level 7 gathering only takes a few seconds:

SQL> set timing on
SQL> exec statspack.snap(i_snap_level=>7);
PL/SQL procedure successfully completed.
Elapsed: 00:00:03.04

I've seen a few cases where it takes longer, and it's better to fix the issue before setting the default level to 7. One cause can be a lot of data in PERFSTAT with no statistics gathering. Another reason is when the SGA is sized too large: the shared pool may become huge, and long to read.

So, if it takes only a few seconds, you can set the default level to 7 (Listing 2).

SQL> exec STATSPACK.MODIFY_STATSPACK_PARAMETER (i_snap_level=>7, i_instance_number=>null);
PL/SQL procedure successfully completed.

LISTING 2 - SET STATSPACK DEFAULT LEVEL TO 7

If you are in RAC, you have to do that for each instance.

Scheduled Jobs
Now it's time to schedule the jobs that will be owned by the PERFSTAT user, so that they will be dropped if we de-install Statspack. If you used the random password for the create, you can connect:

connect perfstat/&perfstat_password

You will be sad to see that I still use the deprecated dbms_job, and you are free to use dbms_scheduler instead (a sketch is shown after Listing 4), but I have this script (Listing 3) that I've used for years, which has the advantage of creating the job with the instance number. If you are in RAC you have to run it on each instance, because statistics gathering and purge are done per instance.

There is one job to take a snapshot every hour and one job to call the purge job every week, keeping 45 days of history.

Check the jobs:

SQL> select last_sec,next_date,next_sec,log_user,instance,what from dba_jobs;

LAST_SEC  NEXT_DATE NEXT_SEC  LOG_USER  WHAT
--------  --------- --------  --------  --------------------------------------------------------
          29-JAN-17 10:00:00  PERFSTAT  statspack.snap;
          06-JAN-17 00:00:00  PERFSTAT  statspack.purge(i_num_days=>45,i_extended_purge=>true);


connect perfstat/&perfstat_password
variable jobno number
variable instno number
begin
  select instance_number into :instno from v$instance;
  dbms_job.submit(:jobno, 'statspack.snap;', trunc(sysdate+1/24,'HH'),
    'trunc(SYSDATE+1/24,''HH'')', TRUE, :instno);
  dbms_job.submit(:jobno, 'statspack.purge(i_num_days=>45,i_extended_purge=>true);',
    trunc(SYSDATE)+7, 'trunc(SYSDATE)+7', TRUE, :instno);
  commit;
end;
/

LISTING 3 - SCRIPT TO CREATE SNAPSHOT AND PURGE JOBS

delete from STATS$IDLE_EVENT;
insert into STATS$IDLE_EVENT
  select name from V$EVENT_NAME
  where wait_class='Idle';
commit;

LISTING 4 - REPLACE IDLE EVENT LIST BY THE IDLE WAIT EVENT ONES
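If you prefer dbms_scheduler over the deprecated dbms_job, a minimal sketch of equivalent jobs could look like the following. The job names are my own and the calls are the same snapshot and purge calls as in Listing 3; the per-instance considerations mentioned above for RAC still need to be thought through for your environment.

begin
  -- hourly snapshot, on the hour
  dbms_scheduler.create_job(
    job_name        => 'PERFSTAT.STATSPACK_SNAP',
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'begin statspack.snap; end;',
    start_date      => systimestamp,
    repeat_interval => 'FREQ=HOURLY;BYMINUTE=0;BYSECOND=0',
    enabled         => true);
  -- weekly purge, keeping 45 days of history
  dbms_scheduler.create_job(
    job_name        => 'PERFSTAT.STATSPACK_PURGE',
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'begin statspack.purge(i_num_days=>45,i_extended_purge=>true); end;',
    start_date      => systimestamp,
    repeat_interval => 'FREQ=WEEKLY;BYDAY=SUN;BYHOUR=0',
    enabled         => true);
end;
/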

merge into STATS$SNAPSHOT s


using (
select
dbid,instance_number,snap_id
,ltrim(substr(listagg(name||':'||ltrim(to_char(time_waited_micro/elapsed_micro,'09.9'))||' ')
within group(order by round(time_waited_micro/elapsed_micro,1) desc,name nulls first),1,21)
,':') ucomment
from (
select
dbid,instance_number,snap_id
,decode(name,'DB time','','DB CPU','CPU','User I/O','I/O','System I/O','I/O',substr(name,1,3)) name
,time_waited_micro-lag(time_waited_micro)over(partition by dbid,instance_number,name order by snap_id) time_waited_micro
,case when lag(snap_time)over(partition by dbid,instance_number,name order by snap_id)-startup_time > 0 then
(snap_time-lag(snap_time)over(partition by dbid,instance_number,name order by snap_id))*24*60*60*1e6
end elapsed_micro
from (
select dbid,instance_number,snap_id,wait_class name,sum(time_waited_micro) time_waited_micro
from stats$system_event join v$event_name using(event_id)
where wait_class not in ('Idle')
group by dbid,instance_number,snap_id,wait_class
union all
select dbid,instance_number,snap_id,stat_name name,value
from stats$sys_time_model join stats$time_model_statname using (stat_id) where stat_name in ('DB time','DB CPU')
) join stats$snapshot using(dbid,instance_number,snap_id) where ucomment is null
) where elapsed_micro >1e6 and time_waited_micro>1e6
group by dbid,instance_number,snap_id
) l on (s.snap_id=l.snap_id and s.dbid=l.dbid and s.instance_number=l.instance_number)
when matched then update set s.ucomment=l.ucomment;

LISTING 5- SET SNAPSHOT COMMENTS WITH AVERAGE DATABASE LOAD

This is an example, but you can schedule exec statspack.snap and exec statspack.purge(i_num_days=>45,i_extended_purge=>true) with whatever you like for scheduling jobs run as sysdba. You can also do it from crontab or Task Manager. The spauto.sql script that is provided also uses dbms_job but does not schedule any purge. It is always a bad idea to gather information automatically without thinking about retention, especially as the Statspack tables are stored in SYSAUX by default and it will grow.

SNAP_ID Sequence
The snapshots are identified with a SNAP_ID and some people like to see this number without a gap when they query the Statspack views. I don't need that because I use the LAG() analytic function, but if you prefer no gap, no problem, the sequence is not read frequently: SQL> ALTER SEQUENCE perfstat.stats$snapshot_id NOCACHE;

When you join Statspack tables, don't forget to include DBID and INSTANCE_NUMBER in addition to SNAP_ID, just in case you use your scripts later on a database with a new DBID (after an RMAN DUPLICATE for example) or in RAC.

Idle Events
The list of idle events in STATS$IDLE_EVENT often lags behind the new events introduced in new releases. This leads to very misleading wait events, as shown at http://blog.dbi-services.com/statspack-idle-events/, and my preferred solution, even if it's not the supported one, is to replace this list of idle events by the idle event wait class (Listing 4).

Date Format
Your locale date format may be larger than what was defined in spreport, such as when in the French language:

                                                             Snap
Instance     DB Name      Snap Id   Snap Started      Level Comment
------------ ------------ --------- ----------------- ----- --------
CDB          CDB                531 04 Juil. 2016 00:     7
                                    00
                                541 04 Juil. 2016 01:     7
                                    00

In that case it's better to set NLS_LANG=AMERICAN_AMERICA before running spreport.sql.

Snapshot Comment
Statspack has a feature that does not exist in AWR: a comment can be associated with a snapshot. For example, when you take a manual snapshot, you can tag it with a comment:

SQL> exec statspack.snap(i_ucomment=>'before load test #15');

But the automatic snapshots do not have any comment. I use the script in Listing 5 to set all null comments to one that shows the Average Active Sessions (AAS) and the main timed event.


Snap
Instance DB Name Snap Id Snap Started Level Comment
------------ ------------ --------- ----------------- ----- --------------------
cdb1 CDB 330 12 Jul 2016 17:00 7 20.0 I/O:15.1 CPU:01
332 12 Jul 2016 18:00 7 21.4 I/O:14.4 Sch:02
334 12 Jul 2016 19:00 7 20.1 I/O:14.1 CPU:01
337 12 Jul 2016 20:00 7 19.0 I/O:14.1 CPU:01
338 12 Jul 2016 21:00 7 18.4 I/O:13.2 CPU:01
340 12 Jul 2016 22:00 7 21.1 I/O:15.0 CPU:01
342 12 Jul 2016 23:00 7 22.6 I/O:13.3 Sch:03
345 13 Jul 2016 00:00 7 18.6 I/O:13.2 CPU:01
347 13 Jul 2016 01:00 7 19.0 I/O:14.2 CPU:01
349 13 Jul 2016 02:00 7 18.9 I/O:14.6 CPU:01
350 13 Jul 2016 03:00 7 17.8 I/O:13.9 CPU:01
352 13 Jul 2016 04:00 7 18.4 I/O:14.1 CPU:01

LISTING 6 - OUTPUT OF SPREPORT.SQL WITH AAS COMMENTS

create or replace view PERFSTAT.DELTA$SNAPSHOT as select


e.SNAP_ID
,DBID
,INSTANCE_NUMBER
,SNAP_TIME
,lag(e.SNAP_ID)over(partition by DBID,INSTANCE_NUMBER,STARTUP_TIME order by e.snap_id) BEGIN_SNAP_ID
,ROUND((SNAP_TIME-lag(SNAP_TIME)over(partition by DBID,INSTANCE_NUMBER,STARTUP_TIME order by e.snap_id))*24*60*60) SNAP_
SECONDS
,STARTUP_TIME,PARALLEL, VERSION, DB_NAME, INSTANCE_NAME, HOST_NAME
FROM PERFSTAT.STATS$SNAPSHOT e
join stats$database_instance i using(STARTUP_TIME,DBID,INSTANCE_NUMBER)
/

create or replace view PERFSTAT.DELTA$SYSSTAT as select


n.snap_time,n.snap_seconds
,e.SNAP_ID
,e.DBID
,e.INSTANCE_NUMBER
,e.STATISTIC#
,e.NAME
,e.VALUE-b.VALUE VALUE
,n.startup_time instance_startup_time,n.db_name,n.instance_name,n.host_name
from PERFSTAT.DELTA$SNAPSHOT n join PERFSTAT.STATS$SYSSTAT e
on(e.snap_id=n.snap_id and e.dbid=n.dbid and
e.instance_number=n.instance_number)
join PERFSTAT.STATS$SYSSTAT b
on(n.begin_snap_id=b.snap_id AND e.dbid=b.dbid AND
e.instance_number=b.instance_number and e.NAME=b.NAME)
/

LISTING 7 - CUSTOM ‘DELTA$’ VIEWS FOR STATS$SNAPSHOT AND STATS$SYSSTAT INFORMATION

With the comments set by Listing 5, the spreport.sql list of snapshot ids looks like the output in Listing 6. In this example, you can see that the Average Active Sessions figure is between 17 and 21, and the highest wait classes: 'I/O' for 'User I/O', 'CPU' for 'DB CPU', 'Sch' for 'Scheduler', etc.

Running these scripts helps to identify the periods of time where you have activity.

Delta Values
The Statspack tables store only the cumulative values from the start of the instance. They make sense only when calculating the delta values between two snapshots, which is what spreport.sql does.

When you want to go beyond the report, you can query the tables, but then you need to self-join the tables to get the previous snapshot and calculate the difference. An example of this for STATS$SNAPSHOT and STATS$SYSSTAT is in Listing 7. DELTA$SNAPSHOT adds the elapsed time (SNAP_SECONDS) and the previous SNAP_ID (BEGIN_SNAP_ID) using the analytic function LAG(). DELTA$SYSSTAT joins to DELTA$SNAPSHOT and joins STATS$SYSSTAT to itself to get the previous snapshot values to subtract.

I didn't want to write those kinds of views for each Statspack table and maintain them with Statspack evolution, so 10 years ago I created a script (see http://www.dba-village.com/village/dvp_scripts.ScriptDetails?ScriptIdA=3128) to generate them (see Listing 8). It calculates the delta value for all nullable number datatype columns, and joins on the primary key. The script generates the view creation in delta.tmp and runs it.

Pack Management
When you are in Enterprise Edition and don't have the Diagnostic Pack, you will use Statspack, but by default, AWR is still activated:

SQL> show parameter pack

NAME TYPE VALUE


------------------------------------ ----------- ------------------------------
control_management_pack_access string DIAGNOSTIC+TUNING
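Switching it off is a one-line change; a minimal sketch of the setting recommended below (run as sysdba, once you have weighed up the AWR-based functionality you lose):

SQL> alter system set control_management_pack_access='NONE' scope=both sid='*';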


define owner=PERFSTAT
define prefix=DELTA

whenever sqlerror exit failure


whenever oserror exit failure

CREATE OR REPLACE VIEW &owner..&prefix.$SNAPSHOT AS


SELECT
e.SNAP_ID
,DBID
,INSTANCE_NUMBER
,SNAP_TIME
,lag(e.SNAP_ID)over(partition by DBID,INSTANCE_NUMBER,STARTUP_TIME order by e.snap_id) BEGIN_SNAP_ID
,ROUND((SNAP_TIME-lag(SNAP_TIME)over(partition by DBID,INSTANCE_NUMBER,STARTUP_TIME order by e.snap_id))*24*60*60) SNAP_
SECONDS
,STARTUP_TIME,PARALLEL, VERSION, DB_NAME, INSTANCE_NAME, HOST_NAME
FROM &owner..STATS$SNAPSHOT e
join stats$database_instance i using(STARTUP_TIME,DBID,INSTANCE_NUMBER)
/

BEGIN
FOR v IN (
SELECT table_name , '&prefix.'||SUBSTR(table_name,INSTR(table_name,'$')) view_name FROM ALL_TAB_COLUMNS
WHERE table_name LIKE 'STATS$%' AND table_name NOT IN ('STATS$SNAPSHOT','STATS$DATABASE_INSTANCE')
AND nullable='N' AND column_name IN ('SNAP_ID','DBID','INSTANCE_NUMBER') GROUP BY table_name HAVING COUNT(*) = 3
) LOOP
dbms_output.put_line('');
dbms_output.put_line('create or replace view &owner..'||v.view_name||' as select n.snap_time,n.snap_seconds');
FOR c IN (
SELECT * FROM ALL_TAB_COLUMNS
WHERE NOT(nullable='Y' AND data_type IN ('NUMBER')) AND table_name=v.table_name AND owner='&owner.'
) LOOP
dbms_output.put_line(' ,e.'||c.column_name);
END LOOP;
FOR c IN (
SELECT * FROM ALL_TAB_COLUMNS
WHERE nullable='Y' AND data_type IN ('NUMBER') AND table_name=v.table_name AND owner='&owner.'
) LOOP
dbms_output.put_line(' ,e.'||c.column_name||'-b.'||c.column_name||' '||c.column_name);
END LOOP;
dbms_output.put_line(' ,n.startup_time instance_startup_time,n.db_name,n.instance_name,n.host_name');
dbms_output.put_line('from &owner..&prefix.$SNAPSHOT n join &owner..'||v.table_name||' e ');
dbms_output.put_line(' on(e.snap_id=n.snap_id and e.dbid=n.dbid and e.instance_number=n.instance_number)');
dbms_output.put_line(' join &owner..'||v.table_name||' b' );
dbms_output.put(' on(n.begin_snap_id=b.snap_id AND e.dbid=b.dbid AND e.instance_number=b.instance_number');
FOR c IN (
SELECT * FROM ALL_CONS_COLUMNS join ALL_CONSTRAINTS USING(owner,constraint_name)
WHERE constraint_type='P' AND column_name NOT IN ('SNAP_ID','DBID','INSTANCE_NUMBER')
AND owner='&owner.' AND ALL_CONSTRAINTS.table_name=v.table_name
) LOOP
dbms_output.put(' and e.'||c.column_name||'=b.'||c.column_name);
END LOOP;
dbms_output.put_line(')');
dbms_output.put_line('/');
END LOOP;
dbms_output.put_line('');
END;
.

set feedback off serveroutput on size 100000


spool delta.tmp
/
spool off
set feedback on echo on
start delta.tmp

LISTING 8 - CREATE DELTA$ VIEWS WITH SNAPSHOT INFORMATION AND DELTA VALUES FOR ALL METRICS

It is an overhead and, in addition to that, it is a risk that someone uses it and gets it flagged in DBA_FEATURE_USAGE_STATISTICS, which will be a problem in the case of a license audit. My recommendation is that you set control_management_pack_access to none. Oracle recommends leaving it set, because some functions accessible without the licensed option are based on the AWR framework; some examples are the segment advisors and undo advisors. In my opinion the risk of paying millions because of a recorded usage is more important. It's different in 12.2 multitenant, because lockdown profiles allow finer control over the features used.

In Standard Edition, control_management_pack_access is set to none by default and it is not recommended to change that.

Graphical View
What is missing with Statspack are the graphics that you can see on the EM Express or Cloud Control performance pages. They are unfortunately only based on AWR, so you have nothing when you don't have the Diagnostic Pack license. An alternative is to build your own queries, using the 'DELTA' views I've built above, and display the result in SQL Developer reports, or in Excel.
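As a minimal sketch of such a query, assuming the DELTA$SYSSTAT view generated by Listing 8 ('redo size' is just one statistic you might chart):

select snap_time,
       round(value/snap_seconds/1024/1024,2) redo_mb_per_sec
from   perfstat.delta$sysstat
where  name = 'redo size'
and    snap_seconds > 0
order  by snap_time;

Plotting the result over time gives a simple load profile, one point per snapshot.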


column "SNAP_ID" new_value end_snap noprint


column "LAG(SNAP_ID)OVER(ORDERBYSNAP_TIME)" new_value begin_snap noprint
select snap_id,lag(snap_id)over(order by snap_time) from stats$snapshot order by snap_time;
define report_name=sp_last.txt
@?/rdbms/admin/spreport
exit

LISTING 9 - SPLASTREP.SQL TO GENERATE THE LATEST REPORT

There are also third party tools that can do that, and I recommend Orachrome Lighty, which can graphically display what you get from Statspack reports. And Lighty can go further, with a job that simulates ASH and can then really simulate AWR. I've described how to test it in a blog post: http://blog.dbi-services.com/exploring-oracle-se-a-ee-performance-statistics-with-orachrome-lighty/.

I often use the splastrep.sql script in Listing 9 to automatically generate a report on the two latest snapshots. When the begin_snap, end_snap and report_name variables are defined, spreport.sql does not require any user input. Run Listing 9 with sqlplus (spreport.sql formatting is not compatible with sqlcl).

You can also try to set markup html on for some sections if you prefer html reports over text ones, but you will see that some sections are better displayed with preformatted text.

Other Possible Enhancements
The nice thing about Statspack is that we have the source, which is very useful to understand where the statistics come from and how the ratios are calculated. We can also make some changes; for example, for 12c you can gather snapshots for V$EVENT_HISTOGRAM_MICRO and add a section in the report. The enhancement request to add it for AWR was accepted for 12.2 but not as yet in Statspack.

Multitenant
In multitenant you install Statspack in each container. Oracle does not support installing it at CDB$ROOT, but my opinion is different, because you may want to capture the activity of sessions that switched to CDB$ROOT through metadata or data links. Having a report that covers the whole instance may be a good start for system-wide performance analysis. In each PDB, the statistics are related to the container (what you see locally from V$ views), but be careful as some statistics have a meaning only at CDB level (redo for example).

Conclusion
Many features that are available with options, like AWR, have their counterparts, which you need to spend time customising.
The good thing is that Statspack is very similar to AWR. AWR was an enhancement of Statspack (which itself was an
enhancement of UTLBSTAT/UTLESTAT which are still there in ORACLE_HOME). They are based on the same metrics: statistics
and wait events, so the interpretation can be done with the same knowledge and the help of Oracle Database reference
documentation.
Statspack has several interesting features that are not well known. You can baseline some snapshots to keep them beyond the purge retention (statspack.make_baseline). You can export/import the Statspack repository with Data Pump to a centralised one. When you upgrade the database you have to upgrade Statspack or, better, export the old repository elsewhere and re-create it in the new version.
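As a minimal sketch (the snapshot ids and the Data Pump directory are placeholders, and I am relying on the make_baseline overload that takes a begin and end snapshot id - check spdoc.txt for the exact signatures in your release):

SQL> exec statspack.make_baseline(330, 350);
$ expdp system directory=DATA_PUMP_DIR schemas=PERFSTAT dumpfile=perfstat.dmp logfile=perfstat_exp.log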
If you are running Active Data Guard you can use Statspack to analyse performance on the standby. You install it in the primary
(with sbcreate.sql and sbaddins.sql) and snapshot gathering uses database links to query the standby performance


views. This has been available since 11gR1 and Statspack is the only way, even with Diagnostic Pack, because the support of
AWR for Active Data Guard appears only in 12cR2.
Final word: remember that in addition to the snapshots you schedule (hourly for example) you can gather snapshots manually. Always try to analyse a report that covers a homogeneous sample of your performance issue. With a good method to analyse them, the 20-year-old Statspack reports are more powerful than most tuning tools found in other RDBMS.

ABOUT THE AUTHOR

Franck Pachot
Principal Consultant, dbi-services
Franck Pachot is principal consultant, trainer and technology leader at dbi-services in Switzerland. He has over 20 years of experience in Oracle databases in all areas, from development, data modeling and performance to administration and training. Franck is an Oracle Certified Master 11g and 12c and an Oracle ACE Director. Franck is co-author of 'Oracle 12cR2 Multitenant' (https://www.amazon.com/Oracle-Database-Release-Multitenant-Press/dp/1259836096).

Blog: http://blog.pachot.net
https://ch.linkedin.com/in/franckpachot
@FranckPachot

Oracle E-Business Suite

Are Managed
Support Services the
Future for Oracle
E-Business Suite?

It's hard to escape from all the talk of moving your systems and data to 'the cloud' these days: with even Oracle trying to persuade users of its E-Business Suite (EBS) to set their legacy systems aside and migrate their ERP functions onto Oracle Cloud.

Andy Nellis, Managing Director, e-Resolve

The thing is, while we're sure that most enterprise level users will ultimately do so, we believe that for many, there's a five to 10 year roadmap for this to happen. That's because not only are many organisations not yet ready to take such a dramatic step, there is also a significant business case for current EBS users to maintain their current platforms in order to fully realise the value of their investment.

Having said that, we believe that some changes are likely to happen right away, as an orderly transition to a cloud-based service is likely to involve many organisations taking a fresh look at the way they resource their current IT infrastructures. Discussions with new and incumbent clients across the public and private sectors support this.

Managing the expected shortfall in skilled personnel
Highly skilled IT personnel hate to be left behind by technology, so the resource pool for EBS specialists is already starting to decline as the brightest and the best start retraining and moving to more future-proof platforms.

This is bound to show itself in increased recruitment and retention costs for every big EBS user, as even the most loyal staff will want to move on to opportunities with greater long-term security.

A lack of trained Oracle E-Business Suite staff will also be felt more sharply in local government organisations. Here, the pressure to reduce headcounts in IT, combined with zero pay rises due to austerity cuts, could leave many organisations struggling to maintain their HR and financial infrastructure sooner rather than later.

Moving towards a managed support service
The business case for maintaining legacy Oracle E-Business Suite platforms in the medium term is inescapable for many organisations. It represents a very significant investment, and whether you have shareholders or stakeholders in mind, you need to do everything possible to realise its full value.

Yet the increased difficulty and expense of maintaining their own infrastructure has already led many major businesses and governmental organisations to move towards a managed support solution for legacy Oracle E-Business Suite platforms.


In our case, we're delivering a managed service with a blended mix of operational and strategic consultancy to provide transitional support to one local government organisation, and we are submitting on others.

In the first instance, this provides immediate cost savings, enabling valuable IT personnel who are already on the payroll to be deployed elsewhere, and eliminating the need for future investment in hardware.

Perhaps even more importantly than the cost savings, outsourcing to a specialist company provides a guarantee that essential HR and financial functions will continue to receive expert support throughout the lengthy transition to a cloud-based solution.

This certainty extends to the cost of maintaining your existing services, as ongoing costs can be agreed in advance, and you will only pay for additional services should they become necessary, e.g. in the event of new financial legislation.

Understanding that outsourcing is a collaborative process
In our experience, many organisations initially hold back from outsourcing the management of their Oracle E-Business Suite infrastructure because it feels like giving up control of essential business functions.

The fact is that while outsourcing takes away many worries and responsibilities, it still leaves you in full control, yet with the support of a trusted partner.

For one thing, most of e-Resolve's new relationships start with us providing a series of 'tweaks' to legacy systems that make them work more efficiently. We pride ourselves on then building lasting relationships by putting in the time and effort to understand the evolving needs of our clients and behaving proactively to ensure they receive the best value. The co-creation of managed services agreements, based on our reviews of their current EBS landscape, is an effective methodology for clients. The resulting managed service requirements specification enables them to move forward into procurement with a defined path to achieving operational stability. The upfront investment in time and resource pays dividends in helping clients find the appropriate support partner.

Ensuring that your procurement framework offers a balanced scorecard
I believe it almost goes without saying that outsourcing your Oracle E-Business Suite support will provide cost savings, together with a host of other benefits.

Having said that, this essential function is too important for you to make cost the main criterion when choosing a provider. Quality of service costs money if your chosen partner is going to be able to resource the contract with the time and experience it will require.

So, if you're looking for a trusted, long-term partner to look after your legacy Oracle E-Business Suite, perhaps the first step you need to take is to re-evaluate the way in which services like this are scored during the procurement process.

ABOUT THE AUTHOR

Andy Nellis
CEO, e-Resolve
Andy Nellis is the CEO of e-Resolve, a specialist e-Business Suite consultancy. Andy has been working in the field of ERP for over 20 years, starting out as an Oracle user, moving into support, then consultancy, and today as founder and CEO of e-Resolve.
www.linkedin.com/in/andy-nellis-9158514/

Technology

Getting Started With


Oracle GoldenGate
Replicating data between databases in a timely fashion can be a surprisingly tricky
thing to do. There are many ways to replicate data, from home-grown code and
database trigger based solutions, to Oracle Streams, Materialized Views over Database
Links and several 3rd Party replication products, such as Dell Shareplex and DBVisit
Replicate. The more timely and resilient you want your solution to be, the harder it
becomes to implement.
Neil Chandler, Chandler Systems

Whatever your reasons for moving data; migrating to the Cloud or to a new system/platform, feeding a Data Warehouse, performing an upgrade with minimal or zero downtime, or even implementing a bespoke Business Continuity system where only a fraction of the data is required for DR, GoldenGate will allow you to implement data movement across platforms and different storage engines easily and quickly, with built-in resilience.

GoldenGate is a platform independent data extraction, transformation and load tool (ETL). I have used it to reliably replicate data from Mainframe SQL/MX databases, transform it into Oracle, and modify and transform it again into SQL Server, as well as designing and implementing one-to-one and one-to-many data migrations and feeds on Oracle-centric systems. I have also implemented a multi-master ACTIVE/ACTIVE solution.

Basic uni-directional replication, which will encompass most GoldenGate implementations, is straightforward to setup. The best way to understand GoldenGate is to install it and use it in a test environment. For the purposes of this introduction, I will concentrate on showing how we can setup a simple Oracle-to-Oracle Master-to-Slave replication of an entire schema.

This example replication will be done on Oracle VM VirtualBox "Developer Days" Servers, downloadable here:

http://www.oracle.com/technetwork/database/enterprise-edition/databaseappdev-vm-161299.html

Architectural Overview
GoldenGate consists of several processes.

The EXTRACT process connects to the database, captures transactions and writes the transactions to a TRAIL FILE. The TRAIL FILE can either be local to the EXTRACT, to be used by a DATAPUMP, or directly sent to a remote destination server.

The DATAPUMP process is used to pick up transactions which have been written to a local TRAIL FILE and sends them to a remote destination server.


$ cd /home/oracle/install
$ unzip fbo_ggs_Linux_x64_shiphome.zip
$ cd /home/oracle/install/fbo_ggs_Linux_x64_shiphome/Disk1/ogg.rsp

$ cat ogg.rsp
#-------------------------------------------------------------------------------
# Do not change the following system generated value.
#-------------------------------------------------------------------------------
oracle.install.responseFileVersion=/oracle/install/rspfmt_ogginstall_response_schema_v12_1_2

# Specify a release and location to install Oracle GoldenGate


INSTALL_OPTION=ORA12c
SOFTWARE_LOCATION=/home/oracle/app/goldengate
INVENTORY_LOCATION=/home/oracle/app/oraInventory
UNIX_GROUP_NAME=oracle


$./runInstaller -silent -responseFile /home/oracle/install/fbo_ggs_Linux_x64_shiphome/Disk1/ogg.rsp
Starting Oracle Universal Installer...

Checking Temp space: must be greater than 120 MB. Actual 23701 MB Passed
Checking swap space: must be greater than 150 MB. Actual 2063 MB Passed
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2017-01-07_01-30-23PM. Please wait ...
You can find the log of this install session at:
/home/oracle/app/oraInventory/logs/installActions2017-01-07_01-30-23PM.log

The installation of Oracle GoldenGate Core was successful.


Please check ‘/home/oracle/app/oraInventory/logs/silentInstall2017-01-07_01-30-23PM.log’ for more details.
Successfully Setup Software.

FIGURE 1

NOTE: The DATAPUMP is an optional step, but it is best practice to use a DATAPUMP. This is to protect against running out of memory should the remote destination have any availability issues. It is technically a specialised EXTRACT process, and runs as an extract.

The COLLECTOR process on the remote destination server writes the transactions to a TRAIL FILE on the destination server. The COLLECTOR process is spawned automatically by the MANAGER when the EXTRACT or DATAPUMP connects to the remote server. It requires no other configuration.

The REPLICAT process reads the TRAIL FILE on the destination server and applies the change records to the destination database.

The MANAGER process looks after all of the other processes, and can start and restart them automatically. It also cleans up old TRAIL FILES and listens on a TCP port (7809) for incoming connections from source EXTRACT and DATAPUMP processes.

The TRAIL FILE is a series of binary files in a canonical format which contains all of the transactions we have captured in the EXTRACT. The TRAIL FILE format is identical, regardless of the source or destination system type. It is written to by EXTRACT processes, and read by DATAPUMP and REPLICAT processes. You can only name the file using 2 characters, so there's no real opportunity for a meaningful naming standard. The TRAIL FILE is defined with a maximum size which should relate to the number of transactions you are putting through the system. Once the maximum size is reached, or if you stop and start the extract process, a new TRAIL FILE will be started. The file format of the TRAIL FILE is XXnnnnnnnnn, where XX is your 2 character name and nnnnnnnnn is the incrementing sequence number (note: this is restricted to 6 characters pre v12.2 of GoldenGate). The old filename format - using the FORMAT keyword in the EXTRACT - may be required depending upon the platform to which you are replicating data.

Installing GoldenGate
GoldenGate is a straightforward install and is easily done via a responseFile as there are few parameters to supply. Unzip the downloaded installation file to an appropriate installation directory on each server, set up a response file [ogg.rsp] to identify the SOFTWARE_LOCATION of the GoldenGate install, and perform a silent install. You will need to install GoldenGate for both the source and destination database servers.

Example GoldenGate Install on Target Server (See Figure 1).

Initial Configuration
First of all we need to configure some global settings, directories and the GoldenGate Manager on the source and target servers. We should create a GLOBALS file in the GoldenGate Home installation directory /home/oracle/app/goldengate. The GLOBALS file is read each time we use the GoldenGate Command Interpreter "ggsci" and contains parameters which apply to the entire GoldenGate instance.

-- /home/oracle/app/goldengate/GLOBALS
ggschema goldengate
checkpointtable goldengate.checkpoint_table

We need to ensure all appropriate GoldenGate sub-directories have been created underneath the GoldenGate Home. We can use the "create subdirs" command within "ggsci" to do this. The 3 key subdirectories are:

dirprm – this contains all of the parameter files for the extract, datapump and replicat groups, as well as the parameter file for the manager and any include files

dirrpt – this contains all of the report log files from each group, showing information relating to group and manager processing


dirdat – this is the default directory for all of the trail files produced by the extract and datapump/collector processes. The files in this directory will contain all of the data for every transaction which is replicated, and so we need to ensure it has sufficient size and performance resources.

To complete the basic setup, we need to create the Manager parameter file and start the Manager.

$ ggsci
Oracle GoldenGate Command Interpreter for Oracle

GGSCI 1> create subdirs
Creating subdirectories under current directory /home/oracle/app/goldengate

Parameter files              /home/oracle/app/goldengate/dirprm: created
Report files                 /home/oracle/app/goldengate/dirrpt: created
Checkpoint files             /home/oracle/app/goldengate/dirchk: created
Process status files         /home/oracle/app/goldengate/dirpcs: created
SQL script files             /home/oracle/app/goldengate/dirsql: created
Database definitions files   /home/oracle/app/goldengate/dirdef: created
Extract data files           /home/oracle/app/goldengate/dirdat: created
Temporary files              /home/oracle/app/goldengate/dirtmp: created
Credential store files       /home/oracle/app/goldengate/dircrd: created
Masterkey wallet files       /home/oracle/app/goldengate/dirwlt: created
Dump files                   /home/oracle/app/goldengate/dirdmp: created

GGSCI 2> edit param mgr

GGSCI 3> start mgr
Manager started.

GGSCI 4> info mgr
Manager is running (IP port DevDaysSourceGG.7809, Process ID 11713).

The Manager Parameter File looks like this:

--/home/oracle/app/goldengate/dirprm/mgr.prm
PORT 7809 -- listener port
DYNAMICPORTLIST 7810-7830 -- port range for spawned server "collector" processes
--Uncomment the below once everything is configured and running smoothly
--PURGEOLDEXTRACTS /u01/app/gg12/dirprm/AA, USECHECKPOINTS
--AUTOSTART ER *
--AUTORESTART ER *, RETRIES 5, WAITMINUTES 1, RESETMINUTES 60
-- Interval at which problems and errors should be written to the ggserr.log file
DOWNREPORTMINUTES 15
LAGREPORTMINUTES 30 -- Interval at which lag is checked
LAGINFOMINUTES 5 -- Threshold at which lag is reported
LAGCRITICALMINUTES 15 -- Critical threshold reporting value

In the Database – Check the schema to be replicated
There is one key requirement for replicating data: it must be possible to uniquely identify each row of data. There are 3 ways to do this; a Primary Key (PK), a Unique Key (UK), or a combination of up to 38 columns which can be concatenated together to form a unique value. By default this will be the first 38 columns of any given table with no PK or UK, but you can define any 38 columns for this using the KEYCOLS parameter. If one of these conditions cannot be met, you cannot successfully replicate the data. I would recommend that if a table does not contain a unique identifier, and you are able to modify the schema, that a surrogate key column be added and populated. It can be a simple population using a DEFAULT sequence next-value to capture new values automatically (e.g. create sequence <table_seq>; alter table <table> add surrogate_unique_col default <table_seq>.nextval).

There are a few other schema-based problems which may need to be overcome, such as deferred constraints. There is a script within MOS article 1296168.1 which performs a check of your schema for replication compliance and provides some advice and metrics too.

If you need to replicate sequences, you must run the sequence.sql script in the target database. This script is located in the GoldenGate Home directory.

In the Database – Initialization parameters and database settings
There are a few recommended and mandatory settings for the source and target databases:

alter database force logging;
  To ensure that you do not miss any NOLOGGING operations. You may wish to do this at a more granular level, such as tablespace.

alter database add supplemental log data;
  This adds the required supplemental logging at a database level. Whilst this is low impact to the redo logs, you may wish to do this at a more granular level within ggsci using add trandata <schema>.<table>

alter database add supplemental log data (primary key) columns;
alter database add supplemental log data (unique) columns;
  This always includes the primary and/or unique key data in the redo stream to ensure we can identify each row, even if those columns have not been referenced by the transaction.

alter system set undo_retention=28801 scope=both sid='*';
  Keep undo for as long as it may be needed for the longest database transaction. This should be balanced with Bounded Recovery, which defaults to 4 hours, so at least 8 hours of UNDO is needed. Oracle recommends keeping 1 day of undo if possible.

alter system set enable_goldengate_replication=TRUE scope=both sid='*';
  From Oracle 11.2.0.4 / 12.1.0.2+ this is mandatory. It enables access to certain internals for GoldenGate in relation to TDE, LOGREADER access, trigger suppression, deferred constraints and other integration points.

alter system set streams_pool_size=200M scope=spfile sid='*';
  Integrated EXTRACT and REPLICAT use Streams technology. The streams pool should be at least 200M. Use the advisors to determine the optimal size for throughput for your system.
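Pulling those settings together, a minimal sketch of the database preparation might look like this (run as SYSDBA; the table-level trandata line shows the ggsci alternative from the table above, with placeholder names, and the sizes should be adjusted for your own system):

SQL> alter database force logging;
SQL> alter database add supplemental log data;
SQL> alter database add supplemental log data (primary key) columns;
SQL> alter database add supplemental log data (unique) columns;
SQL> alter system set undo_retention=28801 scope=both sid='*';
SQL> alter system set enable_goldengate_replication=TRUE scope=both sid='*';
SQL> alter system set streams_pool_size=200M scope=spfile sid='*';
-- table-level alternative to database-wide supplemental logging, from ggsci:
--   add trandata <schema>.<table>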


In the Database – GoldenGate accounts
There needs to be a GoldenGate account in both the source and the target databases. The Source EXTRACT will be mainly reading from the in-memory REDO stream to capture transactions. The target REPLICAT will be playing those transactions into the database.

The source database consists of a container database called "cdb1" and a pluggable database called "orcl" (this pre-exists in the downloaded VM image).

The target database consists of a container database called "cdb1" and a pluggable database called "orcltarget" (this is a newly created PDB in the downloaded VM image).

In the source container database, we need a common user for the EXTRACT, with GoldenGate-specific and PDB-level privileges:

SYS@cdb1 > create user c##goldengate identified by goldengate;
User created.

SYS@cdb1 > exec dbms_goldengate_auth.grant_admin_privilege('C##GOLDENGATE',container=>'ALL');
PL/SQL procedure successfully completed.

SYS@cdb1 > grant dba to c##goldengate container=all;
Grant succeeded.

In the target database, we need a GoldenGate user within the PDB itself:

SYS@orcltarget > create user goldengate identified by goldengate;
User created.

SYS@orcltarget > exec dbms_goldengate_auth.grant_admin_privilege('GOLDENGATE');
PL/SQL procedure successfully completed.

SYS@orcltarget > grant DBA to goldengate;
Grant succeeded.

NOTE: If you are not using Pluggable Databases, set up both GoldenGate users the same as the target GoldenGate user in the PDB.

Setup the EXTRACT and DATAPUMP
Before we synchronise the data between source and target, we should configure and start the Extract Process. This will ensure data overlap and means we will not miss any transactions. We need to create it, configure the TRAIL FILES, then register and start it:

$ ggsci
Oracle GoldenGate Command Interpreter for Oracle

GGSCI 1> edit param e_hr

GGSCI 2> dblogin userid c##goldengate, password goldengate
Successfully logged into database CDB$ROOT.

GGSCI 3> add extract e_hr, integrated tranlog, begin now
EXTRACT (Integrated) added.

GGSCI 4> add exttrail ./dirdat/AA, extract E_HR, megabytes 20
EXTTRAIL added.

GGSCI 5> register extract e_hr database container (orcl)
2017-01-08 15:34:05 INFO OGG-02003 Extract E_HR successfully registered with database at SCN 6252571.

We have ensured that all transactions after SCN 6252571 will be captured to the TRAIL FILES.

The EXTRACT parameter file looks like this:

-- /home/oracle/app/goldengate/dirprm/e_hr.prm

-- Setup Environment Variables so we login to the database correctly
SETENV (ORACLE_HOME='/home/oracle/app/oracle/product/12.1.0/dbhome_1')
SETENV (ORACLE_SID='cdb1')

-- Name the extract
EXTRACT e_hr

-- Login Details. These can be encrypted.
USERID c##goldengate, PASSWORD goldengate

-- Add our standard reporting options for every extract and replicat
include /home/oracle/app/goldengate/dirprm/i_report.prm

-- Name the Trail File. We only get 2 characters!
EXTTRAIL ./dirdat/AA

-- Makes trail files smaller and helps with primary key updates
UPDATERECORDFORMAT COMPACT

-- We want to replicate DDL too
DDL INCLUDE MAPPED

-- And report all DDL operations in full in the report logs.
DDLOPTIONS REPORT

-- Finally list the objects we are replicating: pdb.schema.object
-- Wildcard matching is OK as long as we aren't doing any data
-- transformation in this step.
SEQUENCE orcl.hr.*;
TABLE orcl.hr.*;

For the DATAPUMP, we need to create it, configure the remote TRAIL FILES, and start it:

GGSCI 1> info all
Program     Status      Group       Lag at Chkpt  Time Since Chkpt
MANAGER     RUNNING
EXTRACT     RUNNING     E_HR        00:00:10      00:00:06

GGSCI 2> edit param p_hr

GGSCI 3> add extract p_hr, exttrailsource /home/oracle/app/goldengate/dirdat/AA
EXTRACT added.

GGSCI 4> add rmttrail /home/oracle/app/goldengate/dirdat/AA, extract p_hr, megabytes 20
RMTTRAIL added.

GGSCI 5> start p_hr
Sending START request to MANAGER ...
EXTRACT P_HR starting

GGSCI 6> info all
Program     Status      Group       Lag at Chkpt  Time Since Chkpt
MANAGER     RUNNING
EXTRACT     RUNNING     E_HR        00:00:02      00:00:01
EXTRACT     RUNNING     P_HR        00:00:00      00:00:08
The DATAPUMP parameter file looks like this:

-- /home/oracle/app/goldengate/dirprm/p_hr.prm

-- Setup Environment Variables so we login to the database correctly
SETENV (ORACLE_HOME='/home/oracle/app/oracle/product/12.1.0/dbhome_1')
SETENV (ORACLE_SID='cdb1')

-- Name the datapump. Note that it's really a special type of extract
EXTRACT p_hr

-- Add our standard reporting options for every extract and replicat
include /home/oracle/app/goldengate/dirprm/i_report.prm

-- Specify where the trail file is being transmitted-to
RMTHOST DevDaysTargetGG , MGRPORT 7809
RMTTRAIL /home/oracle/app/goldengate/dirdat/AA

-- If you are not doing any transformation in the datapump
-- this parameter increases performance by up to 30%
PASSTHRU

-- This is needed to capture any data issues.
-- It is useful when debugging problems.
DISCARDFILE ./dirrpt/p_hr.dsc, PURGE

-- And capture the relevant objects.
SEQUENCE orcl.hr.*;
TABLE orcl.hr.*;

Initial Loading and Data Synchronisation
This can be the hardest part of any replication. Seeding the target to match the source can be difficult, especially across DB formats. This is possible within GoldenGate using a "Special Replicat", which is beyond the scope of this introduction. Special Replicats can be slow to run with large data volumes but may be your best option if replicating between different database types. With Oracle-to-Oracle, my two preferred initialisation methods are to either:

• Create a Physical Standby, start the extract, stop Data Guard, and force open the standby R/W, noting the V$DATABASE.STANDBY_BECAME_PRIMARY_SCN
• Use Export Datapump to extract the source data as of a particular SCN

I will be using the Export Datapump method here as I wish to rename the schema in the target system. Note the use of FLASHBACK_SCN in the Export Datapump to fix the point in time of the data extraction. We will use this SCN when starting the playback of transactions in the target later.

SYS @ cdb1 > select current_scn from v$database;
CURRENT_SCN
-----------
    6302652

$ expdp c##goldengate/goldengate@orcl directory=gg dumpfile=gg.dmp logfile=gg_exp.log schemas=hr flashback_scn=6302652

Copy the .dmp file to the target and import, with relevant re-mappings:

$ impdp goldengate/goldengate directory=gg dumpfile=gg.dmp logfile=gg_imp.log remap_schema=hr:hr_target remap_tablespace=users:hr

If you are unable to ensure definitive extract points, such as when you are using flat files extract and load to perform an initial population, it may be necessary to use the HANDLECOLLISIONS parameter in the REPLICAT. This temporary parameter may be used to get your REPLICAT started should you have transactions in your TRAIL FILE which are already in the database, and it endeavours to align your source and target using some sensible rules to handle clashes, e.g. if a record to be deleted does not exist, just ignore the fact it is not there as it will have the same outcome as if we had deleted it. However, HANDLECOLLISIONS does not cope with all potential scenarios and it should be switched off [NOHANDLECOLLISIONS] as soon as possible after initial REPLICAT synchronisation, otherwise it may slowly corrupt the target dataset.

Setup the REPLICAT
First of all we need to check that the GoldenGate DATAPUMP is transmitting changes to the target server by looking for the TRAIL FILE. Running multiple ls -l commands will show if transactions are being transmitted as the TRAIL FILE grows:

$ ls -l /home/oracle/app/goldengate/dirdat
total 4
-rw-r-----. 1 oracle oracle 2227 Jan 11 14:22 AA000000000

$ ls -l /home/oracle/app/goldengate/dirdat
total 4
-rw-r-----. 1 oracle oracle 2690 Jan 11 14:23 AA000000000

Excellent, the TRAIL FILE exists and is growing! We can now use this to start at the correct SCN, playing transactions into the target database. In Oracle, we refer to the "SCN" or System Change Number to keep track of transactional changes. GoldenGate refers to the "CSN" or Commit Sequence Number, as it needs to cope with multiple formats of CSN from different source databases. These terms can be used interchangeably.

For the REPLICAT we need to create a checkpoint table (used by all replicats to keep track of where they are), register the replicat and start it after the SCN we used for the Export Datapump:

GGSCI 1> info all
Program     Status      Group       Lag at Chkpt  Time Since Chkpt
MANAGER     RUNNING

GGSCI 2> edit param r_hr

GGSCI 3> dblogin userid goldengate password goldengate
Successfully logged into database ORCLTARGET.

GGSCI 4> add checkpointtable
No checkpoint table specified. Using GLOBALS specification (goldengate.checkpoint_table)...
Logon catalog name ORCLTARGET will be used for table specification ORCLTARGET.goldengate.checkpoint_table.
Successfully created checkpoint table ORCLTARGET.goldengate.checkpoint_table.

GGSCI 5> add replicat r_hr integrated exttrail /home/oracle/app/goldengate/dirdat/AA
REPLICAT (Integrated) added.

GGSCI 6> info all
Program     Status      Group       Lag at Chkpt  Time Since Chkpt
MANAGER     RUNNING
REPLICAT    STOPPED     R_HR        00:00:00      00:05:03

GGSCI 7> start r_hr aftercsn 6302652
Sending START request to MANAGER ...
GGSCI 8> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     RUNNING
REPLICAT    RUNNING     R_HR        00:00:00      00:00:02

The REPLICAT parameter file looks like this:

-- /home/oracle/app/goldengate/dirprm/r_hr.prm

-- Setup Environment Variables so we login to the database correctly
-- NOTE the TWO_TASK to connect to the correct PDB directly
SETENV (ORACLE_HOME='/home/oracle/app/oracle/product/12.1.0/dbhome_1')
SETENV (TWO_TASK='orcltarget')

-- name the replicat
REPLICAT r_hr

-- Login to the DB
USERID goldengate PASSWORD goldengate

-- Add our standard reporting options for every extract and replicat
include /home/oracle/app/goldengate/dirprm/i_report.prm

-- Controlling REPLICAT memory use and parallelism
DBOPTIONS INTEGRATEDPARAMS (max_sga_size 200, parallelism 1)

-- Key file used to show failed records. Needed when troubleshooting problems.
DISCARDFILE ./dirrpt/p_orcl2.dsc, PURGE

-- This is how we map the tables across from source to target
MAP orcl.hr.*, TARGET hr_target.*;

And the standard reporting i_report.prm file we have included in every group parameter file looks like this:

-- configure reporting to provide throughput stats
REPORT AT 23:59
REPORTROLLOVER AT 00:01 ON MONDAY
REPORTCOUNT EVERY 30 MINUTES, RATE
REPORTCOUNT EVERY 100000 RECORDS, RATE

And What has GoldenGate been Doing?
We can use the stats command to see how much traffic has been going through each group. Here we look at what went through the EXTRACT and the REPLICAT since they started.

EXTRACT e_hr STATS

GGSCI 1> stats e_hr, total

Sending STATS request to EXTRACT E_HR ...
Start of Statistics at 2017-01-11 15:37:00.

Output to ./dirdat/AA:

Extracting from ORCL.HR.JOBS to ORCL.HR.JOBS:

*** Total statistics since 2017-01-08 15:04:01 ***
Total inserts                       4.00
Total updates                       0.00
Total deletes                       0.00
Total discards                      0.00
Total operations                    4.00

Extracting from ORCL.HR.JOB_SUBTASKS to ORCL.HR.JOB_SUBTASKS:

*** Total statistics since 2017-01-08 15:04:01 ***
Total inserts                   93605.00
Total updates                       0.00
Total deletes                       0.00
Total discards                      0.00
Total operations                93605.00

End of Statistics.

REPLICAT r_hr STATS

GGSCI 1> stats r_hr, total

Sending STATS request to REPLICAT R_HR ...

Start of Statistics at 2017-01-11 15:31:41.

Integrated Replicat Statistics:

Total transactions                  4.00
Redirected                          0.00
DDL operations                      0.00
Stored procedures                   0.00
Datatype functionality              0.00
Event actions                       0.00
Direct transactions ratio          75.00%

Replicating from ORCL.HR.JOBS to ORCLTARGET.HR_TARGET.JOBS:

*** Total statistics since 2017-01-11 15:04:59 ***
Total inserts                       3.00
Total updates                       0.00
Total deletes                       0.00
Total discards                      0.00
Total operations                    3.00

Replicating from ORCL.HR.JOB_SUBTASKS to ORCLTARGET.HR_TARGET.JOB_SUBTASKS:

*** Total statistics since 2017-01-11 15:04:59 ***
Total inserts                   93605.00
Total updates                       0.00
Total deletes                       0.00
Total discards                      0.00
Total operations                93605.00

End of Statistics.

You may notice that there is one less insert in HR_TARGET.JOBS than were extracted. This is because the EXTRACT was started before the Export Datapump extracted the full schema. Between the starting of the EXTRACT and the Export Datapump, there was one insert transaction in the HR.JOBS table, but this was ignored in the REPLICAT as we started it from the SCN at the point of the Export Datapump, not the point that the initial EXTRACT started.

Can you Prove That? Sure.
The EXTRACT was started at SCN: 6252571. Most rows were already in place at this time, having an SCN of 6165829.

Row "IT_DBA" was inserted at SCN: 6284117 and was therefore captured by the EXTRACT, but not needed as the Export Datapump was executed with SCN: 6302652.

Rows "IT_SDBA", "IT_VSDBA", "IT_GGDBA" were inserted at SCN: 6310258 and were therefore captured by the EXTRACT and used by the REPLICAT.
1* select ora_rowscn, job_id, job_title, min_salary, max_salary from jobs order by 2;

ORA_ROWSCN JOB_ID     JOB_TITLE                           MIN_SALARY MAX_SALARY
---------- ---------- ----------------------------------- ---------- ----------
   6165829 AC_ACCOUNT Public Accountant                          4200       9000
   6165829 AC_MGR     Accounting Manager                         8200      16000
   6165829 AD_ASST    Administration Assistant                   3000       6000
   6165829 AD_PRES    President                                 20000      40000
   6165829 AD_VP      Administration Vice President             15000      30000
   6165829 FI_ACCOUNT Accountant                                 4200       9000
   6165829 FI_MGR     Finance Manager                            8200      16000
   6165829 HR_REP     Human Resources Representative             4000       9000
   6284117 IT_DBA     Database Admin                             4000      10000
   6310258 IT_GGDBA   GoldenGate DBA                             3000       9000
   6165829 IT_PROG    Programmer                                 4000      10000
   6310258 IT_SDBA    Senior DBA                                 8000      20000
   6310258 IT_VSDBA   Very Senior DBA                            9999      25000
   6165829 MK_MAN     Marketing Manager                          9000      15000
   6165829 MK_REP     Marketing Representative                   4000       9000
   6165829 PR_REP     Public Relations Representative            4500      10500
   6165829 PU_CLERK   Purchasing Clerk                           2500       5500
   6165829 PU_MAN     Purchasing Manager                         8000      15000
   6165829 SA_MAN     Sales Manager                             10000      20000
   6165829 SA_REP     Sales Representative                       6000      12000
   6165829 SH_CLERK   Shipping Clerk                             2500       5500
   6165829 ST_CLERK   Stock Clerk                                2000       5000
   6165829 ST_MAN     Stock Manager                              5500       8500

Conclusion
At a basic level, GoldenGate is very straightforward to implement but you need to take care. It is highly configurable and programmable, and a badly configured set of transformations will corrupt your target dataset.

You don't "switch on" GoldenGate, like you switch on Data Guard. It needs to work with the application to produce the best outcomes.

ABOUT THE AUTHOR

Neil Chandler
Data Architect, Chandler Systems

Neil has been working in IT since 1988, focused primarily within Oracle, SQL Server and their related server technologies: UNIX, Linux, Windows and SAN. He has been a successful technical lead for FTSE 100 companies with Development and Production Systems experience gained in the Financial, Real-Time Logistics, Property and Accountancy sectors. Neil is also an Oracle ACE and is Chairman of the UKOUG RAC, Cloud Infrastructure and Availability SIG, and is a regular presenter at Oracle conferences around the world.

Blog: https://chandlerdba.com
www.linkedin.com/in/nchandler
@ChandlerDBA

Feature in the next edition

Being part of the Oracle user community, we invite you to share your Oracle experiences & insight in print with our readers. Knowledge sharing is a valuable and rewarding experience and those that take part feel a sense of giving back to the industry that they have made their career in.

We're looking for compelling stories about your experiences with your Oracle technology, analytics & reporting and applications – from a technical and/or functional perspective. Whatever your story, whether it's a great tip, use of a product's new features, lessons learnt, innovative use of your applications or integrations with other solutions, we want to know about it.

Send your submissions for the Summer edition by 3rd April or Autumn edition by 26th June to: [email protected]

Did you know?
If you have an idea for an article, but would like some feedback on the topic before you start to write, you can submit the proposal to the editorial team at [email protected].

Technology

Recover Without Recover – Use Flashback
One of the main tasks of a DBA is planning and operating a backup strategy that meets all the requirements in terms of performance, recovery windows and time to recover. The typical methods used to implement these backups ensure recoverability from physical failures, such as lost files and corrupted data blocks. But repairing such failures is not the typical day-to-day task. It is far more likely that the data is simply wrong: human error or software bugs can change or delete data by accident. Recovering from those failures with conventional backups is usually difficult and requires much time and effort. That is where the different methods of Oracle Flashback come in. Flashback provides efficient and easy-to-use ways to cope with these data errors, and I will discuss and explain some of them in this article.

Marco Mischke, Lead Consultant, Robotron Datenbank-Software GmbH

Imagine the following scenario. Suddenly the phone rings and as you pick it up one of the developers in your company nervously tells you that some important data got deleted accidentally by a well-tested script to reorganise the master data. Actually the script wasn't tested at all, the developer simply chose the wrong database to test it, but that is something he cannot tell. The whole thing happened an hour ago, and the developer was trying to recover the data himself but did not succeed. As the database is production, you can't just stop the database and roll it back to how it was one hour ago. You could do a restore and point-in-time recovery, which means losing all the changes that were made in that hour. Or you could do a tablespace point-in-time recovery on a separate system, but that would take too long. Now Flashback comes into play.

Flashback – What it is
The thing called "Flashback" covers a couple of different methods and uses different technologies inside the database.

- Flashback Database: Used to roll back the whole database to a specific point in time
  Relies on Flashback Logs
- Flashback Drop: Used to undo drop operations of tables
  Requires an activated recycle bin
- Flashback Query: Provides access to data as it was at some time in the past
  Relies on undo data
- Flashback Version Query: Allows querying different versions of a specific dataset as it evolves over time
  Relies on undo data
- Flashback Transaction: Can back out specific transactions
  Relies on undo data
- Flashback Data Archive: Allows long-term storage of historical versions of changes to tables
  Requires Flashback Data Archives
- Flashback Export: Enables exports (datapump or conventional) to export data for a specific point in time
  Relies on undo

I will mainly cover Flashback Query, which makes use of the consistent read mechanism of the Oracle Database. It has been available since Version 9 and has been enhanced over time. Besides that, it comes for free with all editions of Oracle Database.

Flashback Query – Prerequisites
All the changes that transactions make to an Oracle Database are tracked in the UNDO segments to enable rollback of these transactions and to allow consistent reads for all sessions. By default Oracle keeps 15 minutes of committed transactions. This can be changed by setting the "undo_retention" parameter. Since the datafiles of the UNDO tablespace are typically autoextending, the database keeps only these 15 minutes regardless of the available space in the tablespace. When the datafiles of the UNDO tablespace are a fixed size, the value of "undo_retention" is ignored and the database will use all the available space to keep undo information. The actual value is available in the TUNED_UNDORETENTION column of the V$UNDOSTAT view.
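As a quick, hedged illustration of these prerequisites (not taken from the article; the tablespace name UNDOTBS1 and the 12-hour figure are assumptions), you could check the retention setting and whether the undo datafiles autoextend, and raise the retention target, like this:

-- Current retention target in seconds
SQL> show parameter undo_retention

-- Are the UNDO datafiles autoextensible? (UNDOTBS1 is an assumed name)
SQL> select tablespace_name, file_name, autoextensible
  2  from dba_data_files
  3  where tablespace_name = 'UNDOTBS1';

-- Raise the retention target to 12 hours (43200 seconds)
SQL> alter system set undo_retention = 43200;

If unexpired undo must never be overwritten, even under space pressure, the undo tablespace can additionally be altered with RETENTION GUARANTEE, at the risk of DML failing when the tablespace fills up.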


SQL> select * from (
  2    select TUNED_UNDORETENTION
  3    from V$UNDOSTAT
  4    order by END_TIME desc
  5  )
  6  where ROWNUM = 1;

TUNED_UNDORETENTION
-------------------
              36356

Flashback Query – How it works
In order to guarantee consistent sets of data during reads, the database session remembers the SCN at the beginning of the transaction. If another session is modifying data, the previous version of the data is kept in UNDO. Now, if the reading session is trying to read blocks that were modified in the meantime, these blocks have a higher SCN than the one stored at the beginning of the transaction, so the session uses the UNDO information to roll those blocks back until their SCN is lower than the stored one.

Flashback Query just makes use of this technique. It simply does not use the SCN that is current at the beginning of the transaction; instead it estimates an SCN that was current at some specific point in time. And as long as the UNDO information is still present, this works fine. Otherwise it will return the usual ORA-01555 "snapshot too old" error. This simply means that the UNDO information is not available anymore. So the "undo_retention" parameter defines how long the database can remember old data.

FIGURE 1: FLASHBACK QUERY, UNDO – TIME RELATIONSHIP

Simply size the UNDO tablespace to allow keeping 10-12 hours of UNDO. This will enable you to cover a typical work day, which can be very beneficial in case of accidentally modified, deleted or even added data.

Flashback Export
Now back to the developer that ran the script by accident. What do we do first? The UNDO information is definitely going to be deleted in the near future, so we can materialise it by simply doing a schema export using Data Pump. Just take care of the date and time format, which is best set as: 'YYYY-MM-DD-HH24:MI:SS'. The command would then be something like this:

$ expdp system/Oracle-1 dumpfile=scott_flashback.dmpdp \
  logfile=scott_flashback.expdp.log \
  directory=data_pump_dir \
  flashback_time='2016-05-11-11:00:00' schemas=scott

You could even use the old "exp" utility:

$ exp system/Oracle-1 file=/tmp/scott_flashback.dmp \
  log=/tmp/scott_flashback.exp.log \
  flashback_time='2016-05-11-11:00:00' owner=scott

The flashback_time parameter can be combined with any other parameters that you already know and use. For instance, you can just export some tables instead of a whole schema.

Having this export, you can simply import the data into another database, or just another schema in the same database, and then start comparing and repairing.

Flashback Query
Now that there is an export of the data from when everything was fine, we can try to repair the data directly. We can simply create a copy of a table with the data from the past. To achieve that we use Create Table As Select and the "as of timestamp" syntax to go back in time:

SQL> create table emp_old
  2  as
  3  select *
  4  from emp as of timestamp
  5    to_timestamp('2016-05-11 11:00:00',
  6                 'yyyy-mm-dd hh24:mi:ss');

Now both tables can be compared to find changed or deleted data by simple SQL queries:

SQL> select * from emp_old
  2  minus
  3  select * from emp;

Since the "as of timestamp" clause can be used in every kind of SQL, we can even compare the data directly without creating a backup table:

SQL> select * from emp as of timestamp
  2    to_timestamp('2016-05-11 11:00:00',
  3                 'yyyy-mm-dd hh24:mi:ss')
  4  minus
  5  select * from emp;

In the easiest case we can simply re-add the deleted records back into the table:

SQL> insert into emp
  2  select * from emp as of timestamp
  3    to_timestamp('2016-05-11 11:00:00',
  4                 'yyyy-mm-dd hh24:mi:ss')
  5  minus
  6  select * from emp;

SQL> commit;

Following this principle we can also recover records that were accidentally modified. The example shows how to reset the salary to the original values:

SQL> update emp e_live
  2  set sal = (select sal
  3             from emp as of timestamp
  4               to_timestamp('2016-05-11 11:00:00',
  5                            'yyyy-mm-dd hh24:mi:ss') e_orig
  6             where e_orig.empno = e_live.empno
  7            )
  8  ;

SQL> commit;

There are a lot more things that can be done. The above examples outline the possibilities when using Flashback Query.
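One further hedged sketch, not from the original article: the AS OF clause also accepts an SCN directly, and the built-in functions TIMESTAMP_TO_SCN and SCN_TO_TIMESTAMP convert between the two representations (the SCN value below is a made-up placeholder):

-- Estimate the SCN that was current at a given time
SQL> select timestamp_to_scn(
  2           to_timestamp('2016-05-11 11:00:00', 'yyyy-mm-dd hh24:mi:ss')) scn
  3  from dual;

-- Use an SCN (placeholder value) instead of a timestamp in Flashback Query
SQL> select * from emp as of scn 1234567
  2  minus
  3  select * from emp;

Bear in mind that the timestamp-to-SCN mapping is only retained for a limited period and is accurate to a few seconds, so for precise points in time it is safer to record the SCN itself.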

SQL> show recyclebin
ORIGINAL NAME  RECYCLEBIN NAME                 OBJ. TYPE  DROP TIME
-------------  ------------------------------  ---------  -------------------
EMP            BIN$yxLVQQOqC6rgQBAKtBUKlA==$0  TABLE      2016-05-11:13:29:19

SQL> select ORIGINAL_NAME, OBJECT_NAME, OPERATION, DROPTIME
  2  from USER_RECYCLEBIN;

ORIGINAL_NAME  OBJECT_NAME                     OPERATION  DROPTIME
-------------  ------------------------------  ---------  -------------------
EMP            BIN$yxLVQQOqC6rgQBAKtBUKlA==$0  DROP       2016-05-11:13:29:19
PK_EMP         BIN$yxLVQQOpC6rgQBAKtBUKlA==$0  DROP       2016-05-11:13:29:19
IX_ENAME       BIN$yxLVQQOoC6rgQBAKtBUKlA==$0  DROP       2016-05-11:13:29:19
IX_MGR         BIN$yxLVQQOnC6rgQBAKtBUKlA==$0  DROP       2016-05-11:13:29:19

FIGURE 2

Flashback Drop
In the above scenarios Flashback Query was used to recover unwanted DML. What if our developer dropped one or more tables by accident? Since this is DDL, such operations won't be logged to UNDO, at least not in the way that DML is being logged. The Oracle Database offers a "Recyclebin" to cover such scenarios. By default the recyclebin is switched on; the parameter "recyclebin" is used to enable or disable it. Now, if a table is dropped, the segment in the tablespace is marked as free but it is not released. If another segment needs more space, it first allocates free space in the tablespace. Only if there is no more free space left does it start using released segments of dropped objects. As long as the released segment was not overwritten, it is possible to restore it; this is called Flashback Drop.

To find out which segments can be restored, we can use the SQL*Plus command "show recyclebin" or simply query the USER/ALL/DBA_RECYCLEBIN view (see Figure 2 above). As you can see, SQL*Plus only shows the dropped tables while the views list all the dropped segments: tables as well as the corresponding indexes. From the output we see that the accidentally dropped table can be restored:

SQL> flashback table emp to before drop;

Flashback complete.

SQL> select ORIGINAL_NAME, OBJECT_NAME, OPERATION, DROPTIME
  2  from USER_RECYCLEBIN;

No rows selected

No rows left in the view means the database did not only restore the table but also the indexes. Let's crosscheck that:

SQL> select index_name from user_indexes
  2  where table_name='EMP';

INDEX_NAME
------------------------------
BIN$yxLVQQOoC6rgQBAKtBUKlA==$0
BIN$yxLVQQOnC6rgQBAKtBUKlA==$0
BIN$yxLVQQOpC6rgQBAKtBUKlA==$0

The table's indexes are back again, but they still have the names from the recyclebin. In order to correct that, we need to rename the indexes to their original names. That's why it is important to query the USER_RECYCLEBIN view before doing the Flashback Drop. Afterwards there is no chance to get the original names. The rename operation itself is quite simple:

SQL> alter index "BIN$yxLVQQOoC6rgQBAKtBUKlA==$0" rename to IX_ENAME;

Make sure to put the name in double quotes since it contains special characters as well as lower and upper case letters. The only thing we cannot restore are foreign key constraints. Those need to be checked manually and recreated if necessary.

More of Flashback
As mentioned at the very beginning, there are some more Flashback features that are only available in Enterprise Edition.

Flashback Version Query can show a history of modifications to a specific row, which includes INSERTs, UPDATEs as well as DELETEs. It can show the start and end date as well as the transaction ID for a specific version of a row. Flashback Transaction Query unveils information about a specific transaction and what it has done to the data; the FLASHBACK_TRANSACTION_QUERY view lists these changes. Flashback Table can roll back a whole table or even a set of tables to a specific point in time. This is useful when tables are linked by foreign key constraints. Flashback Transaction can back out a whole transaction. It even takes care of dependent transactions that happened afterwards; one can choose whether all dependent transactions should be backed out too or whether they should be kept. (A brief sketch of some of these features follows after the summary.)

Last but not least, Flashback Database rolls back the whole database. This requires separate logs called Flashback Logs. The retention for these logs can be controlled via the "db_flashback_retention_target" initialisation parameter. This feature is particularly useful for test environments when several scenarios should be tested against the same set of values. Simply flash back the database to a restore point right after the test has finished, then start over with the next test.

Summary
Even the Standard Edition of Oracle Database provides powerful tools and methods to recover unwanted changes to the data without influencing the availability of the database. The examples in this article just show the direction; there are many more possibilities, but I have given you a rough understanding of the technology and shown you a different way of coping with such situations.
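As promised above, here is a minimal, hedged sketch of two of these Enterprise Edition features. It is not taken from the original article; the table, column and time range reuse the EMP examples from earlier and are assumptions:

-- Flashback Version Query: every version of a row over the last hour,
-- together with the transaction that produced each version
SQL> select versions_xid, versions_operation,
  2         versions_starttime, versions_endtime, empno, sal
  3  from emp versions between timestamp
  4       systimestamp - interval '1' hour and systimestamp
  5  where empno = 7839;

-- Flashback Table: roll the table back in place
-- (row movement must be enabled first)
SQL> alter table emp enable row movement;

SQL> flashback table emp to timestamp
  2    to_timestamp('2016-05-11 11:00:00', 'yyyy-mm-dd hh24:mi:ss');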

ABOUT THE AUTHOR

Marco Mischke
Lead Consultant, Robotron Datenbank-Software GmbH

Marco Mischke is Lead Consultant at Robotron Datenbank-Software GmbH with his main focus on High Availability and Backup & Recovery. He has been an ACE Associate since 2016 and has worked with the Oracle Database since 2000, starting with version 7.3.4. Throughout his career he has built High Availability solutions on different operating systems and Oracle versions.

Blog: dbamarco.wordpress.com
www.linkedin.com/in/marco-mischke-914042114
@DBAMarco

Join UKOUG
& your network is within reach
From conferences and Special Interest Groups, to Oracle Scene and online
resources, become part of a network where users, partners and Oracle
collaborate, learn and share together.
We have a variety of membership packages to suit your individual, team and business requirements.
Take advantage of our current saving on individual memberships with our Bronze package priced at just
£165 +VAT. Use code OSB50 when applying at www.ukoug.org/join

Contact the membership team on [email protected] / Tel +44 (0)20 8545 9670
to learn more and to discuss the right package for you.
