
Autumn 13 Oracle Database 12c Supplement Brought to you by Oracle Scene


An in-depth look at Oracle Database 12c

IN THIS ISSUE

Lots of 12c topics for the DBA, Architect, Developer and Data Scientist

Supplement sponsor: Interoute

www.ukoug.org An Independent Publication not affiliated with Oracle Corporation


Welcome to Oracle Scene 12c Supplement

Introduction

You may have noticed over the past few months that Oracle has released version 12c of its database. With this release we have a new architecture and lots of new or enhanced features. In this special Oracle 12c supplement for Oracle Scene we have a number of articles that look at some of these features. You can use these to help you get started with the new release. In addition to the 12c release, many of Oracle's products have been updated and released. It has been a busy few months for Oracle. There are articles for the DBA, Developer, Architect and even those of you working in the Oracle Data Science area.

Over the coming months many of you will be upgrading to and using 12c or one of the many newly released products. You will be learning lots, discovering many hidden features and tricks, identifying workarounds, finding out how to get different products to work together, and so on. Oracle Scene is a great place for you to share your discoveries and experiences, by writing a short or long article. This is what being part of a user group is all about: sharing our knowledge. Content can be submitted all year round to [email protected]. Information about editorial deadlines, article guidelines etc. can be found here: www.ukoug.org/what-we-offer/oracle-scene

Inside this issue

HISTOGRAMS IN 12C by Jonathan Lewis
ALL THE MORE REASON TO USE DATABASE PL/SQL by Bryn Llewellyn
ROW LIMITING, PL/SQL WITH SQL, AND TEMPORAL VALIDITY: THREE IN 12C by Melanie Caffrey
ORACLE 12C SECURITY by Pete Finnigan
NEW FEATURES OF ORACLE ADVANCED ANALYTICS IN ORACLE 12C by Brendan Tierney
GETTING STARTED WITH FAR SYNC DATAGUARD IN ORACLE 12C by Bob Mycroft
PATTERN RECOGNITION: THE WEATHER by Alex Nuijten


Technology

Histograms in 12c

One of the most challenging problems a DBA has to face is defining a strategy for how and when to create statistics for a database; and one of the hardest parts of the problem is dealing with histograms, which can be very expensive to create while still leaving the DBA facing unstable execution plans. Jonathan Lewis, Freelance Consultant, JL Computer Consultancy

In 12c Oracle has introduced some terrific enhancements to histogram creation: histograms are allowed to have more buckets, you can create accurate frequency histograms at a much lower resource cost, and there are two new types of histogram that eliminate various problems with the old “height-balanced” histograms.

Background

Before talking about histograms in 12c, it's worth making a brief comment about a couple of features that appeared in 11g; these are the "approximate NDV" mechanism and the "table preferences" procedure (which only works properly in 11.2). If you enable the "approximate NDV" mechanism through a call to dbms_stats.set_param() or – the preferred method – dbms_stats.set_global_prefs(), then the various calls to gather stats will use a new mechanism to gather column stats whenever you leave the estimate_percent at its default value of "auto_sample_size". This allows Oracle to collect accurate statistics using a 100% sample size without having to do the expensive count(distinct()) on each column that it had to do in earlier releases of Oracle. In particular, of course, the automatic overnight stats collection job ought to do a better job with the newer, faster, more accurate mechanism.

A piece of advice that I've often given to clients has been to disable automatic histogram generation, and then create histograms programmatically for the few special cases where they can see it is necessary. This tends to reduce the occurrence of "random" bad plans appearing due to the variability of the samples that Oracle might take to generate a histogram – and it may be that the best time to create some histograms is early in the morning rather than late at night when the automatic job starts to run. However, with 12c you're likely to find that it is safe to let the automatic stats collection job generate some of your histograms – so it's nice to know that you can set up table-level preferences for stats collections that will tell the automatic job exactly which histograms to collect. Here's an example of how:

begin
  dbms_stats.set_global_prefs('METHOD_OPT', 'for all columns size 1');
  dbms_stats.set_table_prefs(
    ownname => 'test_user',
    tabname => 't1',
    pname   => 'method_opt',
    pvalue  => 'for all columns size 1 for columns n1 size 5 n3 size 254'
  );
end;
/

The first call (which you need only make once) tells Oracle that the general strategy is to collect no histograms. The second call tells Oracle that for table T1 you want a histogram of up to 5 buckets on column n1, a histogram of up to 254 buckets on column n3, and no histograms on any other column.
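Once the preferences are set, any straightforward gather call will pick them up, so the overnight job and manual gathers behave consistently. A minimal sketch, using the schema and table from the example above and letting method_opt default to the stored preference:

begin
  dbms_stats.gather_table_stats(
    ownname => 'test_user',
    tabname => 't1'
    -- method_opt deliberately omitted: the table preference supplies it
  );
end;
/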


Having set the background – let’s move on to the things that 12c does better.

Enhancements

The first enhancement is simple – you can specify up to 2,000 buckets for a histogram in 12c. The default, if you don't change it, is still 254. Frankly this is likely to be sufficient in many cases, and I don't think many people will need to increase the value beyond 500 (in part for reasons I'll mention shortly). If you are tempted to set the bucket count up to 2,000, remember that the calls to gather stats save the old stats to some history tables before overwriting them – and I have seen several complaints in the past from people who have noticed that their sysaux tablespace has become huge, with wri$_optstat_histgrm_history (the histogram history) as the single largest table in the tablespace.
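If you do experiment with larger bucket counts, it is worth keeping an eye on the space used by the statistics history. A quick sanity check (a sketch; it assumes you have select privilege on the relevant dictionary view):

select occupant_name, space_usage_kbytes
from   v$sysaux_occupants
where  occupant_name = 'SM/OPTSTAT';

The retention of that history can be reduced with dbms_stats.alter_stats_history_retention() if it grows too large.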

The next enhancement is a relatively simple change to efficiency and precision for frequency histograms – Oracle has overloaded the "approximate NDV" mechanism to accumulate enough information to create a frequency histogram at the same time as it counts the number of distinct values for a column. The original mechanism used to create a hash table of 2^64 buckets, but applied a cunning algorithm that allowed it to represent the full data set (approximately) by recording a maximum of just 16,384 hash values. In 12c Oracle records the number of rows that appeared for each of these hash values, and a rowid to point back to a row with the actual value. If, at the end of the build, the number of hash values hasn't exceeded the number of buckets requested for the histogram, the contents of the hash table can be converted into a very accurate frequency histogram by looking up each of the rowids, and then sorting the hash table by lookup value. The time required to create a frequency histogram is the time to do a full tablescan plus the time to look up the N (up to 2,000) rows by rowid.

New Features

But we don't just get a faster, more accurate, frequency histogram – Oracle gets even smarter with the contents of the hash table. Imagine you have asked for a histogram of 50 buckets, and Oracle finds that the hash table holds 3,000 hash values – far more than you had hoped for. Possibly you requested 50 buckets because you knew that almost all the data would be one of 50 critical values. Oracle will compare the total number of rows in the table with the number of rows in the largest 50 hash buckets, and if the left-over rows in the other 2,950 buckets account (in this case) for less than 1/50th of the data, Oracle will build a "Top-N" frequency histogram – setting up histogram information that reflects the top 50 buckets and allows for "a little spare data". The algorithm is a little more subtle than the description I've given, as the histogram has to reflect the low and high values for the column, so Oracle may have to inject a couple of "rogue" buckets into the histogram, but the basic principle is this: if you have a data set which you know has N popular values and a load of rows that represent no more than 1/Nth of the data, then create a histogram with N (plus a couple for safety) buckets, and you will get a histogram that is a good model for your data.

The final new feature of 12c is the "Hybrid" histogram – so called because it combines features of the frequency histogram and the height-balanced histogram. The hybrid histogram is something that Oracle will generate if you've requested a histogram and the data pattern doesn't allow Oracle to create a frequency or Top-N histogram. It can be expensive to create a hybrid histogram because Oracle has to sample the data and run a query that uses an analytic sum() function – the larger the sample becomes, the more expensive the query. Older versions of Oracle also run an analytic query (using the ntile() function) against the data, so the amount of work to generate a hybrid histogram is not different from the height-balanced version. The difference is in what Oracle does with the data. This is best described with a small example that gives the flavour of the mechanism. Imagine that (after sorting) I have the following 20 values for a column:

32 33 33 33 33 33 33 33 33 34 34 34 35 35 35 35 35 35 35 36

To create a height-balanced histogram of 4 buckets Oracle would record every fifth entry (20 rows / 4 buckets = 5 rows per bucket), plus the first entry to show that the first bucket wasn't completely full of 33's, so we'd get the following selection as our histogram:

32 33 34 35 36

Unfortunately Oracle only considers a value to be special (popular) if it appears in at least two consecutive positions in this list. So, from the perspective of the height-balanced histogram there's nothing particularly skewed about this data – even though we can see that 33 and 35 occur rather more frequently than the other values. A hybrid histogram of 4 buckets starts off with the same unit of 5 values as its basic measure but doesn't follow the fixed bucket size. After counting through five values it gets to a 33, but then keeps going until it reaches the last 33, remembering the size of the bucket and the number of 33's; then it counts 5 values, which takes it to a 35, and keeps going to the end of the 35's – and so on, to give us the following:


32 33 33 33 33 33 33 33 33 34 34 34 35 35 35 35 35 35 35 36

In this case we run out of data before we run out of buckets – which isn't too surprising since the algorithm tends to make buckets larger than the expected size as it works along the data. As with the height-balanced example, the first value in the list is also recorded, to show that the first bucket does have some values in it that don't match its high value. The data stored as this histogram looks like this:

(32,1) (33,8) (35,7) (36,1)

As you can see from this little example, I've managed to capture both of my popular values, even though in this case they appear only as single end-point values in the histogram. As a general rule, the hybrid histogram tends to capture more popular values for a similar amount of work than the equivalent height-balanced histogram – and thanks to the end-point counts it captures better information. Hybrid histograms are a significant step forward from height-balanced histograms; bear in mind, though, that a hybrid histogram is still based on a data sample – the smaller the sample the more variable the effect may be on your execution plans, the larger the sample the longer it will take to build the histogram.
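A convenient way to see which of these histogram types Oracle has actually chosen is to check the dictionary after gathering stats; in 12c the histogram column reports the new types. A sketch, assuming the T1 table from the earlier example:

select column_name, num_distinct, num_buckets, histogram
from   user_tab_col_statistics
where  table_name = 'T1';

The histogram column will show values such as FREQUENCY, TOP-FREQUENCY, HYBRID, or HEIGHT BALANCED.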

Summary

In 12c a histogram can have up to 2,000 buckets. This is probably overkill, so the default is still 254 buckets, and you should be a little cautious about much larger bucket counts; remember the impact this would have on the "optimizer history" tables. The new "Hybrid" histograms – generated automatically if you use the "auto sample size" option – are still as expensive to generate as the old "height-balanced" histograms, but the quality of information they hold is vastly superior. If you've enabled the 11g "approximate NDV" mechanism then Oracle can create accurate frequency histograms and "Top-N" frequency histograms very cheaply; combine this with setting "table preferences" and you may find that you can allow Oracle to deal with almost all your histogram requirements in the overnight "autostats" collection. Remember, though, that the picture of the data you want Oracle to see during the day may not be the same as the actual data content at the moment the auto stats job is running. Despite all the improvements, you may still find a few cases where you need to include extra calls to the dbms_stats package in your application code (typically during batch runs), and some of these calls may even need to use the set_column_stats() procedure to construct an artificial histogram.

ABOUT THE AUTHOR

Jonathan Lewis Freelance Consultant, JL Computer Consultancy Jonathan Lewis is a freelance consultant whose experience with Oracle goes back over 24 years. He specialises in physical database design, the strategic use of the Oracle database engine and solving performance issues. Jonathan is the author of ‘Oracle Core’ and ‘Cost Based Oracle – Fundamentals’ both published by Apress, and ‘Practical Oracle 8i – Designing Efficient Databases’ published by Addison-Wesley, and has contributed to three other books about Oracle. He is one of the best-known speakers on the UK Oracle circuit, as well as being very popular on the international scene – having worked or lectured in 50 different countries – and further details of his published papers, presentations and tutorials can be found through his blog at http://jonathanlewis.wordpress.com.


Technology

With Oracle Database 12c, there is all the more reason to use database PL/SQL

Fifty-year-old wisdom instructs us to expose the database to client side code as a PL/SQL API and to securely hide the implementation details — the names and structures of the tables, and the SQL statements that manipulate them — from the client. Yet a very large number of customers ignore, or refuse to follow, this wisdom. Bryn Llewellyn, Distinguished Product Manager, Database Division, Oracle

Some of the reasons for refusal have a reasonable justification. It is claimed that PL/SQL in the database cannot be patched without causing unacceptable downtime; that the datatypes for passing row sets between the client side code and the database PL/SQL are unnatural, cumbersome to use, and bring performance problems; and that it is impractical to give each of a large number of developers a private sandbox within which to work on the intended changes to database PL/SQL. Oracle Database 12c, hereinafter 12.1, brings changes that demolish each of these objections. There is now no excuse not to follow the best practice principle that is already universally followed in all software projects except those that use a relational database as the persistence mechanism.

Introduction

The general wisdom is that the successful implementation of a large software system depends upon good modular design. A module is a unit of code organization that implements a coherent subset of the system's functionality, exposes this, and only this, functionality via an API, and hides all the implementation details behind this API. Of course, this principle of modular decomposition is applied recursively: the system is broken into a small number of major modules, each of these is further broken down, and so on. This principle is arguably the most important one among the legion of best practice principles for software engineering — and it has been commonly regarded as such for at least the past fifty years.

These days, an application that uses Oracle Database to implement its database of record is decomposed at the coarsest level into the database tier, the middle tier, and the browser tier. The focus of this paper is limited to the database tier. The ultimate implementation of the database tier is the SQL statements that query from, and make changes to, the content of the application's tables. However, very commonly, an operation upon a single table implements just part of what, in the application's functional specification, is characterized as a business transaction. The canonical example is the transfer funds transaction within the scope of all the accounts managed by an application for a particular bank. The transfer funds transaction is parameterized primarily by identifying the source account, the target account, and the cash amount; other parameters, like the date on which the transaction is to be made, and a transaction memo, are sometimes required. This immediately suggests the API function:

Transfer_Funds(Source in ..., Target in ..., Amount in ..., ...) return Outcome_t is ...

The point is made without cluttering the example by showing the datatypes and all the possible parameters. The API is specified as a function to reflect the possibility that the attempt may be disallowed, and the return datatype is nonscalar to reflect the fact that the reason the attempt is disallowed might be parameterized, for example by the shortfall amount in the source account.
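To make the shape of such an API concrete, here is a minimal sketch of a package specification that could expose it; the names and the record components are illustrative, not taken from any real application:

create or replace package Accounts_API as

  -- Nonscalar outcome: reports whether the transfer was allowed,
  -- and, if not, by how much the source account fell short.
  type Outcome_t is record (
    allowed   boolean,
    shortfall number
  );

  function Transfer_Funds(
    Source  in integer,
    Target  in integer,
    Amount  in number,
    Tx_Date in date default sysdate,
    Memo    in varchar2 default null)
  return Outcome_t;

end Accounts_API;
/

The body of Transfer_Funds would own all the SQL against the underlying account tables; the client sees only this specification.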

We can see immediately that there are several different design choices. For example, there might be a separate table for each kind of account, reflecting the fact that different kinds of account have different account details; or there might be a single table, with an account kind column, together with a family of per account kind details tables. There will similarly be a representation of account holders, and again these might have various kinds, like personal and corporate, with different details.

The point is obvious: a single API design that exactly reflects the functional specification may be implemented in many different ways. The conclusion is equally obvious: the database module should be exposed by a PL/SQL API, and the details of the names and structures of the tables, and the SQL that manipulates them, should be securely hidden from the middle tier module. I know of many customers who strictly adhere to this rule; and I know of many who do not — to the extent that all calls to the database module are implemented as SQL statements that explicitly manipulate the application's tables. Customers in the first group seem generally to be very happy with the performance and maintainability of their applications. Ironically, customers in the second group routinely complain of performance problems (because the execution of a single business transaction often involves many round trips from the middle tier module to the database module). And they complain of maintenance problems (because even small patches to the application imply changes both to the implementation of the database module and to the implementation of the middle tier module). Why then do some customers refuse to acknowledge fifty-year-old wisdom, and choose rather to bring themselves problems? Here, briefly, are the reasons I hear:

• my religion prevents me from using PL/SQL in the database
• changing PL/SQL in the database requires shutting down the application; and the downtime might be very long
• the datatypes for passing row sets between the client side code and the database PL/SQL are unnatural, cumbersome to use, and bring performance problems
• it is impractical to give each of a large number of developers a private sandbox within which to work on the intended changes to database PL/SQL

Sadly, no release of Oracle Database will be able to address the first objection. However, 12.1 brings new features that demolish each of the remaining objections.

Edition-based redefinition enhancements

Edition-based redefinition, hereinafter EBR, was introduced in Oracle Database 11g Release 2, hereinafter 11.2. In that release, it had some properties that made its adoption, in general, possible only after making changes to the disposition of the application's objects among their owning schemas — sometimes to the extent of creating new schemas and repositioning objects into those. 12.1 brings enhancements that remove all barriers to adopting EBR.

EBR allows database PL/SQL, views, and synonyms to be changed in the privacy of a new edition, without at all interrupting the database's availability, using exactly the same DDL statements that would be used to patch the application's server side code in a downtime exercise. The changes made in the new edition are invisible to the running application. Only when the code in the new edition is fully installed and quality controlled are the application's database sessions failed over to the new edition, session by session, until no sessions are using the old edition any longer. Then the new edition is established as the run edition; and the old edition is retired.

EBR allows table changes, too. But to use it for this purpose requires some thought, some changes to the application's schema objects, and a new way of writing the patch scripts. However, the use of EBR, in 12.1, to allow online changes to only PL/SQL, views, and synonyms is fruit that is hanging so low that it is literally on the ground. Those who know something about EBR will know that the adoption barrier was a consequence of the rule that a non-editioned object may not depend upon an editioned object [1]. This barrier is removed by two distinct enhancements, thus:

• The granularity of the editioned state of the name of a PL/SQL unit, a view, or a synonym is now the single occurring name. New DDL statements control this new degree of freedom. This means that, for example, Scott.Name_1 can be chosen to be editioned and can denote a procedure in one edition and a function (obviously with a different implementation) in another edition. On the other hand Scott.Name_2 can be chosen to be non-editioned, and will, therefore, have the same implementation in all editions. In 11.2, the granularity of the editioned state was the entire schema: every name of every object in that schema, whose type is editionable, was either editioned or non-editioned. The new per-object granularity of the editioned state solves the adoption problem met when a table has a column whose datatype is defined by a user-defined type, hereinafter UDT, in the same schema as the table. Because tables are not editionable, and UDTs are editionable, a table is not allowed to depend on a UDT. Therefore if, in 11.2, an attempt was made to editions-enable a schema owning both a table and a UDT dependency parent, using the alter user enable editions statement, then the attempt caused an error, to prevent violation of the rule. The attempt would succeed only after re-establishing the UDT in a different schema, one that would not be editions-enabled. This, of course, implied moving all the data from a column defined by the UDT in its old location to one defined by the UDT in its new location.

• Materialized views and virtual columns have new metadata in 12.1 to specify the edition to be used to resolve the name of an editioned dependency parent. They also have new metadata to specify the range of editions within which the optimizer may consider the object. New DDL statements control setting this new metadata.
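For orientation, the basic mechanics of an EBR patching exercise look like this; a sketch, with illustrative edition and object names, run by a suitably privileged user:

-- Create the new edition and start working in it.
create edition patch_2013_10 as child of ora$base;
alter session set edition = patch_2013_10;

-- Replace the changed PL/SQL; sessions using ora$base still see the old code.
create or replace procedure Transfer_Funds_Impl as
begin
  null;  -- new implementation goes here
end;
/

New sessions can then be directed to the new edition, for example by making it the database default with alter database default edition = patch_2013_10.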

[1] In 11.2, the rule for the name resolution of an editioned dependency parent is to look for it in the same edition as the dependent object. So when the dependent object is non-editioned, the rule cannot be followed. Therefore any attempt to introduce a dependency that violates this rule causes an error. The attempt may be made explicitly, by a DDL statement on an individual object. Or it may be made implicitly by using the alter user enable editions statement.


The subquery that defines a materialized view may refer to a PL/SQL function. And a table may have a virtual column whose defining expression includes a reference to a PL/SQL function. (A virtual column is the modern way to implement the functionality of a function based index.) But materialized views and tables are all non-editionable. Therefore, in 11.2, a schema with either a materialized view or a table with a PL/SQL dependency parent could not be editions-enabled, because of the rule that a non-editioned object may not depend upon an editioned object. While the new per-object granularity for the editioned state might be used in 12.1 to overcome this problem, this would lead to discomfort in the bigger picture of the application design. A major benefit of EBR is exactly that it allows changes to PL/SQL units while they are in use — so this benefit must not be sacrificed. The solution is to address the rule that a non-editioned object may not depend upon an editioned object in a different way: the editioned dependency parent is found by looking in the edition that the non-editioned dependent object specifies explicitly in its metadata.

The semantic model for EBR specifies that creating a new edition causes a copy of every editioned object in the parent edition into the new child edition. But the customer's mental model of the implementation specifies that the copy-on-change paradigm will be used to create the new edition. Only when an editioned object is first changed will there be two distinct objects in the old and the new editions. According to the semantic model, a materialized view or an index on a virtual column would be usable only in the edition in which the PL/SQL dependency parent exists. This would imply creating a new materialized view or table, of course with a new name, for each new edition. To avoid this expense, a materialized view and a virtual column have new metadata to specify the range of editions within which the optimizer may consider the object when computing the execution plan for a SQL statement. This accommodates the fact that the PL/SQL might indeed have different copies in the parent and child editions because of a change to the unit that does not change the semantics of the function defined within the unit. This might occur, for example, if the function is defined in a package specification and a change is made to a different subprogram defined in the same unit.
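The per-object granularity discussed above surfaces in 12.1 DDL as the editionable and noneditionable keywords; a sketch with illustrative names:

-- Name_1 participates in editions: each edition can have its own copy.
create or replace editionable procedure Name_1 as
begin
  null;
end;
/

-- Name_2 is pinned: one implementation is seen in every edition.
create or replace noneditionable procedure Name_2 as
begin
  null;
end;
/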

Improved binding from client code to database PL/SQL

New in 12.1, the invocation of PL/SQL subprograms from the client can now bind values of non-SQL datatypes: records and index-by-PL/SQL-tables. This is important because, in a PL/SQL subprogram, an index-by-PL/SQL-table of records is the natural target for select... bulk collect into... and the natural source for forall j in 1..My_Collection.Count...

Through 11.2, the invocation of PL/SQL subprograms from the client could bind only variables of the SQL datatypes. This meant that the contents of a record, or an index-by-PL/SQL-table with any kind of payload, had to be copied to, or from, a variable of a corresponding SQL datatype to be passed to, or from, the client. For example, a record had to be copied to a UDT defined using create type t as object..., hereinafter ADT (for abstract datatype); and an index-by-PL/SQL-table of records had to be copied to a nested table of ADTs. This led to cumbersome code and a performance penalty. Now in 12.1, when using, for example, a JDBC client, JPublisher can be used to create the definitions of Java classes, to be used in the client for binding, by deriving them from PL/SQL datatypes defined in a package specification. The OCI, similarly, has mechanisms to bind values of non-SQL datatypes.

Recall that a PL/SQL subprogram must be invoked from a client by using an anonymous PL/SQL block — which is a kind of SQL statement [2]. Therefore, the ability to bind non-SQL datatypes from the client implies the ability to bind values of these datatypes into an anonymous block, and into other kinds of SQL statement. For example, a PL/SQL subprogram with a formal parameter declared using an index-by-PL/SQL-table of records can be invoked using dynamic SQL from another PL/SQL subprogram. Similarly, when a PL/SQL function is used in an expression in a DML statement, it can now have formals of non-SQL datatypes. Here is a particularly interesting example:

select * bulk collect into IBBI_1 from table(IBBI_2) where ...

Here, IBBI_1 and IBBI_2 are variables of the same index-by-PL/SQL-table of records type, declared in the package specification of the package whose body contains this code. This brings the declarative, set-manipulating power of SQL into the implementation of ordinary PL/SQL code. Compare this with the complexity of implementing the functionality of restriction on a collection using looping and if tests.
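A sketch of the natural shape being described, with illustrative names: a package specification declares an index-by-PL/SQL-table of records, and the same type serves both for bulk SQL inside the body and for binding from the client:

create or replace package Emp_API as
  type Emp_Rec is record (id number, name varchar2(30));
  type Emp_Tab is table of Emp_Rec index by pls_integer;
  procedure Insert_Emps(Emps in Emp_Tab);
end Emp_API;
/

create or replace package body Emp_API as
  procedure Insert_Emps(Emps in Emp_Tab) is
  begin
    -- Assumes a dense collection (indexed 1..Count) and an EMPLOYEES
    -- table whose columns match the record's shape.
    forall j in 1..Emps.Count
      insert into employees values Emps(j);
  end Insert_Emps;
end Emp_API;
/

In 12.1 a client, or a dynamic SQL caller, can bind a value of Emp_API.Emp_Tab directly to the Emps formal parameter.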

NOTE: If the schema, prior to EBR adoption, has an occurrence of an index defined on an expression that includes a reference to a PL/SQL function, and if the post-adoption plan needs this function to be editioned, then the index must, in the editions-enabled schema, be dropped and re-created on a virtual column defined using the expression that used to define the index.

The general availability of Oracle E-Business Suite Release 12.2 was announced in October 2013, certified on Oracle Database version 11.2.0.3. It has been enhanced to use EBR, by making whatever changes were needed to overcome the adoption barriers described above. From now on, every patch and upgrade will be done as an EBR exercise. The downtime that patching caused, in earlier releases, was of somewhat unpredictable duration on the order of hours, or even days. By using EBR, the downtime will be slashed to a predictable number of minutes.

The multitenant architecture and pluggable databases

Serious software development uses a source control system to record the canonical definitions of a system as it evolves and branches. The database module is represented in a source control system by the set of SQL statements that establishes the database artifacts that implement this module. There may also be sets of SQL statements that patch or upgrade one version of the database module to another version. The most interesting example of such a SQL statement, for the present discussion, is the create or replace DDL statement for a PL/SQL unit, because typically, during a development cycle, procedural code is changed far more frequently than are the definitions of the shapes of tables, and so on. In a particular release cycle, several of the SQL statements that define the database module are changed.

[2] The call statement is syntax sugar: it is transformed into an anonymous PL/SQL block under the covers.


When, as is common, the changes to the defining SQL statements are made jointly by the several members of a development team, then a discipline must be followed. Individual developers check out, change, and check back in individual SQL statements. Of course, in order to make and test the intended change to a SQL statement, it is necessary to instantiate the database module using the checked-in set of SQL statements as of some label, and to work within this environment. This suggests that every developer must have his own private database for this purpose. I know of several customers who, in recognition of this, have developed fairly sophisticated internal applications to allow a developer to self-provision a database as of a source control system label. The most successful of these internal applications make explicit use of thin provisioning schemes provided by filesystem vendors, because otherwise the space consumption by a database for each of many tens of developers would be unacceptable. Of course, writing such an internal application is a non-trivial task; and, sadly, I know of other customers who have found the ideal approach too daunting. Instead, they use an ad hoc scheme where many developers make the intended changes to their checked out SQL statements in a single, shared database. Obviously, this leads to no end of difficulty; and these difficulties increase exponentially as the size of the code base increases. This effect is exacerbated when the number of PL/SQL units is large. This has led some customers to limit the use of database PL/SQL — for example, by allowing it only for triggers. And it has led others to avoid it altogether.

The advent of the multitenant architecture in 12.1 changes this. A pluggable database, hereinafter PDB, can be cloned from a read-only master PDB that represents a particular source control system label using SQL statements. And it can later be dropped using SQL statements. When the underlying filesystem supports thin provisioning, then the snapshot keyword can be used in the clone PDB SQL statement so that the cloned PDB is provisioned instantly and with no storage expense. Only when changes are made will blocks be written to represent the deltas. The fact that the clone PDB and drop PDB operations are exposed as SQL statements makes it very straightforward to write a PL/SQL application to allow developers to self-provision their PDBs.
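A sketch of the self-provisioning flow being described; the container names are illustrative, and on a filesystem without thin provisioning the snapshot copy clause would be omitted in favour of a full copy:

-- Clone a developer PDB from a read-only master representing a label.
create pluggable database dev_scott from master_label_42 snapshot copy;
alter pluggable database dev_scott open;

-- ... develop and test in dev_scott ...

-- Tear the sandbox down when the work is checked back in.
alter pluggable database dev_scott close immediate;
drop pluggable database dev_scott including datafiles;

Because these are plain SQL statements, wrapping them in a small PL/SQL provisioning API is straightforward.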

Summary

The following enhancements, brought by Oracle Database 12c, have been discussed in this paper:

• Edition-based redefinition: The granularity of the editioned state of the name of a PL/SQL unit, a view, or a synonym is now the single occurring name. Materialized views and virtual columns have new metadata to specify the edition to be used to resolve the name of an editioned dependency parent. They also have new metadata to specify the range of editions within which the optimizer may consider the object.

• PL/SQL: Values of non-SQL datatypes can be bound to the formal parameters of database PL/SQL subprograms invoked from the client. In particular, row sets can now be passed between the client and the database using the natural datatype: an index-by-PL/SQL-table of records.

• The multitenant architecture: The clone PDB operation, taking advantage of the snapshot facility in the underlying filesystem, and the drop PDB operation are exposed as SQL statements. This makes it very straightforward to write a PL/SQL application to allow developers to rapidly, and thinly, self-provision a private database environment in which to change and test their checked out code.

There is now no excuse for violating the best practice principle that is already followed in the majority of software projects. Now you can confidently expose the Oracle Database using a PL/SQL API — hiding all the details of tables and SQL statements behind this API — knowing that this approach brings only benefits.

ABOUT THE AUTHOR


Bryn Llewellyn Distinguished Product Manager, Database Division, Oracle Bryn has worked in the software field for more than 30 years. He has been with Oracle since 1990, first in the UK and then, from 1996, in the USA at Oracle Headquarters. He has been the Product Manager for PL/SQL since 2001. In 2005, he took on the additional product management responsibility for the Oracle Database capabilities that support online application upgrade (known as edition-based redefinition – EBR for short).


Technology

Row Limiting, PL/SQL WITH SQL, and Temporal Validity:

Three in 12c

The latest, greatest incarnation of the Oracle database, 12c, is chock-full of interesting and exciting new features. This article introduces the reader to just three of them, available within SQL alone. Melanie Caffrey, Senior Development Manager, Oracle

Top-N queries can now employ simpler syntax with the use of the new row_limiting clause, which allows you to limit the rows returned by a query. You can now define a PL/SQL function in the WITH clause of a query, and reference it within the query, as well as within any of the query's subqueries. And last, but not least, temporal validity enables you to have data visible depending on its time-based validity, as determined by the periods of time for which it is considered valid. Any application can now support a situation where it is important to have visible only the data within a table considered valid as of a specified time period, instead of all the data within a table (the current default).

To demonstrate the power of the row limiting clause, I'll start by querying a table called EMPLOYEE. First the table is queried with no row limiting clauses, and then the table is queried with the new row limiting FETCH FIRST and ROWS ONLY clauses, for comparison purposes (Figure 1).

ashton%ORA12CR1> select employee_id, first_name, last_name, salary
  2  from employee
  3  order by salary desc;

EMPLOYEE_ID FIRST_NAME  LAST_NAME       SALARY
----------- ----------- ----------- ----------
       6573 Lori        Dovichi
       6571 Thomas      Jeffrey         300000
         28 Emily       Eckhardt        100000
       6569 michael     peterson         90000
       1234 Donald      Newton           80000
         37 Frances     Newton           75000
       7895 Matthew     Michaels         70000
       6572 Theresa     Wong             70000
       6570 mark        leblanc          65000
       6567 Roger       Friedli          60000
       6568 Betsy       James            60000

11 rows selected.

ashton%ORA12CR1> set feedback on
ashton%ORA12CR1> set lines 32000
ashton%ORA12CR1> select employee_id, first_name, last_name, salary
  2  from employee
  3  order by salary desc nulls last
  4  FETCH FIRST 5 ROWS ONLY;

EMPLOYEE_ID FIRST_NAME  LAST_NAME       SALARY
----------- ----------- ----------- ----------
       6571 Thomas      Jeffrey         300000
         28 Emily       Eckhardt        100000
       6569 michael     peterson         90000
       1234 Donald      Newton           80000
         37 Frances     Newton           75000

5 rows selected.

FIGURE 1



Notice how with just a few extra keywords, FETCH FIRST 5 ROWS ONLY, the query is limited to the Top-N records we are interested in. This is a syntactical simplification over, for example, the RANK and DENSE_RANK analytic functions also used within Oracle SQL. This is not to say that RANK and DENSE_RANK should not be used. (I am a huge fan of analytic functions.) This is simply a statement that many Top-N queries that are interested in, say, fetching the first five rows, and maybe the next five rows, could make good use of the new row limiting clause syntax available in 12c. The FETCH clause specifies the number of rows or percentage of rows to return. Comparing the second query with the first query, you can see that the omission of this clause results in all rows being returned. The second query fetches only the top five salary earners from the employee table (additionally, the second query omits any records with null salary values, so that only real salary values are returned). To then fetch the next top five salary earners, consider the following query (Figure 2).

The differences between the third query and the second query are the OFFSET and FETCH NEXT clauses. The OFFSET clause specifies the number of rows to skip before the row limiting begins. The way to look at this is to read it as "skip the first five salary earners, and return the next five salary earners only".

Effectively calling PL/SQL within SQL has been a challenge for many application developers since PL/SQL was first written. One of the challenges has consistently been how to write SQL that calls schema-level stored functions and procedures and ensure that, overall, the code performs well and is maintainable. As far as name resolution goes, a PL/SQL function declared within a SQL query has precedence over schema-level stored functions and procedures. The following query demonstrates how you can declare a function within the WITH clause of a SQL query (Figure 3). This type of query is particularly useful in a read-only database where the values are not likely to change often while the query is running.

ashton%ORA12CR1> select employee_id, first_name, last_name, salary
  2  from employee
  3  order by salary desc nulls last
  4  OFFSET 5 ROWS FETCH NEXT 5 ROWS ONLY;

EMPLOYEE_ID FIRST_NAME  LAST_NAME       SALARY
----------- ----------- ----------- ----------
       7895 Matthew     Michaels         70000
       6572 Theresa     Wong             70000
       6570 mark        leblanc          65000
       6567 Roger       Friedli          60000
       6568 Betsy       James            60000

5 rows selected.

FIGURE 2
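As an aside, the FETCH clause also accepts a percentage rather than a fixed count; a sketch against the same table:

select employee_id, first_name, last_name, salary
from   employee
order  by salary desc nulls last
fetch  first 20 percent rows only;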

ashton%ORA12CR1> WITH
  2  FUNCTION get_prod_name (sku VARCHAR2) RETURN VARCHAR2 IS
  3    start  BINARY_INTEGER;
  4    length BINARY_INTEGER;
  5  BEGIN
  6    start := INSTR(sku, '***.');
  7    length := INSTR(SUBSTR(sku, start + 4), '.') - 1;
  8    RETURN SUBSTR(sku, start + 4, length);
  9  END;
 10  SELECT get_prod_name(product_sku)
 11  FROM product_catalog;
 12  /

FIGURE 3

ashton%ORA12CR1> create table employee_valid (employee_id NUMBER,
  2    first_name VARCHAR2(30),
  3    last_name VARCHAR2(30),
  4    hire_date DATE,
  5    departed_date DATE,
  6    salary NUMBER(9,2),
  7    manager NUMBER,
  8    department_id NUMBER,
  9    PERIOD FOR emp_valid_time (hire_date, departed_date));

Table created.

FIGURE 4

Keep in mind that if the query in which you specify this clause is not a top-level SELECT statement, but a sub-query, then if the top-level statement is a SELECT statement, it must either have a PL/SQL function or procedure declared within a WITH clause or it must specify the WITH_PLSQL hint. If the top-level statement is an INSERT, UPDATE, DELETE or MERGE statement, then it must have the WITH_PLSQL hint. This hint is not an optimizer hint; it is used specifically for declaring PL/SQL within SQL.
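A sketch of the hint in a DML context; the target table is illustrative, and the function is an abbreviated version of the one in Figure 3:

insert /*+ WITH_PLSQL */ into product_names
WITH
  FUNCTION get_prod_name (sku VARCHAR2) RETURN VARCHAR2 IS
  BEGIN
    RETURN SUBSTR(sku, INSTR(sku, '***.') + 4);
  END;
SELECT get_prod_name(product_sku)
FROM product_catalog;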

Temporal Validity is sure to be a key audit feature, as it allows you to specify valid time periods or intervals within a table's data. A heap table can become a table that supports temporal validity with the addition of the PERIOD FOR clause, which allows you to specify a valid start and end period for which rows are valid and should, therefore, be visible to an application. Consider the following CREATE TABLE statement for a table named EMPLOYEE_VALID, a table that is similar to the EMPLOYEE table used in this article's first three queries (Figure 4).


Rows in the EMPLOYEE_VALID table now carry a period of time for which they are considered valid, defined by the emp_valid_time period created with the PERIOD FOR clause of the above CREATE TABLE statement. A subset of the records from this table reveals the following employee start and end periods (Figure 5).

ashton%ORA12CR1> select last_name, first_name, hire_date, departed_date
  2  from employee_valid;

LAST_NAME  FIRST_NAME  HIRE_DATE  DEPARTED_
---------- ----------- ---------- ---------
Eckhardt   Emily       07-JUL-04  01-JUL-05
Newton     Frances     14-SEP-05  07-DEC-06
Newton     Donald      24-SEP-06  03-OCT-10

3 rows selected.

FIGURE 5

This table can now be queried using Oracle Flashback Technology to perform AS OF and VERSIONS BETWEEN queries similar to the following (Figure 6).
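Incidentally, the visibility of valid-time data for ordinary queries can also be set per session through the DBMS_FLASHBACK_ARCHIVE package; a sketch, assuming the ENABLE_AT_VALID_TIME procedure as documented for 12c:

begin
  -- Show only rows whose validity period covers the current time;
  -- passing 'ALL' restores the default of showing every row.
  dbms_flashback_archive.enable_at_valid_time('CURRENT');
end;
/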

ashton%ORA12CR1> select last_name, first_name, hire_date, departed_date
  2  from employee_valid
  3  AS OF PERIOD FOR emp_valid_time TO_DATE('07-JUL-2004', 'DD-MON-YYYY');

LAST_NAME  FIRST_NAME  HIRE_DATE  DEPARTED_
---------- ----------- ---------- ---------
Eckhardt   Emily       07-JUL-04  01-JUL-05

1 row selected.

ashton%ORA12CR1> select last_name, first_name, hire_date, departed_date
  2  from employee_valid
  3  VERSIONS PERIOD FOR emp_valid_time BETWEEN
  4  TO_DATE('15-SEP-2005', 'DD-MON-YYYY') AND
  5  TO_DATE('15-JAN-2006', 'DD-MON-YYYY');

LAST_NAME  FIRST_NAME  HIRE_DATE  DEPARTED_
---------- ----------- ---------- ---------
Newton     Frances     14-SEP-05  07-DEC-06

1 row selected.

ashton%ORA12CR1> select last_name, first_name, hire_date, departed_date
  2  from employee_valid
  3  AS OF PERIOD FOR emp_valid_time TO_DATE('01-OCT-2006', 'DD-MON-YYYY');

LAST_NAME  FIRST_NAME  HIRE_DATE  DEPARTED_
---------- ----------- ---------- ---------
Newton     Frances     14-SEP-05  07-DEC-06
Newton     Donald      24-SEP-06  03-OCT-10

2 rows selected.

FIGURE 6

Summary

So, in summary, each of these three new 12c SQL features has its place in terms of providing a simpler way to code an often asked-for type of query. The new row limiting clause greatly simplifies the syntax required for creating, for example, a ranked Top-N query. You can now declare and use a PL/SQL function or procedure within a SQL query, which cuts down on necessary namespace resolution. And, last but not least, you can give your table data temporal validity and be sure that certain table rows are only visible when they are valid (and not visible when they are not).

ABOUT THE AUTHOR


Melanie Caffrey Senior Development Manager, Oracle Melanie Caffrey is a senior development manager for Oracle Corporation. She is co-author of several technical publications including Expert PL/SQL Practices for Oracle Developers and DBAs and Expert Oracle Practices: Oracle Database Administration from the Oak Table (Apress), and the SQL 101 series of articles for Oracle Magazine.



Technology

Oracle 12c Security: Selected New Features

The new release of the Oracle database version 12c was much anticipated, especially by those of us who were not on the beta program. Pete Finnigan, Independent Oracle Security Consultant, PeteFinnigan.com

Introduction

There has always been an association between the word "security" and Oracle databases; you only need to remember the "unbreakable" campaign, Oracle's push to get security certifications, and the security features added to the core product over the years – such as Advanced Network Security, Virtual Private Database, Label Security, Fine Grained Audit, Database Vault, Audit Vault and much more – to know that Oracle is very serious about security in its database products. New releases are therefore a time to review security. I for one was very interested to see the new 12c database when it arrived, and to see what is offered for increased data security and what has changed that may affect database security. The new security features fall into three main categories:

1. New security features, either free with the database license or additional cost options. An example is Data Redaction, included with Oracle Advanced Security.
2. New features that may affect security but are not security features themselves. The biggest example is the multitenant architecture.
3. Subtle changes that are new security features or may affect security but are not recorded as new features. An example would be the new hidden parameter _sys_logon_delay.

In this short article I want to introduce at a high level some of the core new security features and also spend some time focusing on what the security model means for people who want to secure their data in a 12c database, particularly when consolidation is used.

New features

I am not going to go into details on every new security feature, but will highlight the key elements and then discuss the feature with the biggest implication for data security: the multitenant architecture. See Table 1.

One final new thing that stands out is the effective increase in default users when you create a database: if using containers, you get a root/seed and a pluggable database. On the surface the number of default accounts does not seem to have increased, but there are 22 changes (new accounts added and old ones removed). Also, the number of PUBLIC grants has increased by over 30% from 11gR2.

What does 12c mean for securing data?

The biggest change in Oracle 12c is not security related but affects security a great deal; this is the addition of the multitenant architecture, which allows databases to be "plugged" into and removed from a container database. This is great for consolidation but has an impact on security. A 12c database that uses the multitenant architecture now has two containers by default: the root and the pluggable database. There is also a seed template, and you can add other templates or pluggable databases as needed.


Feature | Free/Cost | Description
Data Redaction | Part of OAS | Prevents sensitive data from being displayed. Uses a policy driven approach. Data can be redacted at run time without changing applications.
SHA-2 Support | Free | DBMS_CRYPTO has been updated to support SHA-2. SHA-2 is a collection of algorithms from SHA-224 to SHA-512. Security flaws were found in SHA-1 in 2005.
Unified Audit | Free | Easier to turn on and to control via database policies. Also unified audit is more secure, as the trails are written to insert-only tablespaces. Two new roles have also been added to allow separation of audit duties for viewing and admin.
Code Based Security | Free | This change allows roles to be granted to PL/SQL procedures directly instead of users, allowing finer grained security and least privilege.
New SYS privileges | Free | New system privileges are added for Data Guard, key management and backup with RMAN, with the addition of SYS privileges that now mean SYSDBA is not needed for all tasks.
Smaller Changes | Free | SELECT ANY DICTIONARY is locked down with fewer SYS objects available (be aware that this can be turned off with a parameter). The last logon time of a user is added, which is useful for auditing (note it doesn't work for SYSDBA). UNLIMITED TABLESPACE is no longer granted with the RESOURCE role. VPD has been extended to allow one or more context values to control when the policy function is evaluated.
Database Vault | Part of DV | New mandatory realms allow blocking of direct object privileges including "object owner". DV and OLS are now installed by default, which greatly simplifies their use and migration. Both are turned on via configuration.
Password Complexity | Free | Two new password complexity functions are shipped, and if you use the dbca then they can optionally be enabled.
Privilege Analysis | Part of DV | Useful feature that enables you to create a profile for a user that collects details of privileges that have been used. This can then be used to create lists of privileges to remove to achieve "least privilege".
Key Management | Part of OAS | An updated framework has been provided to manage encryption keys, and a common layer now makes the use of keystores and HSMs more transparent.
Transparent Sensitive Data Protection | Free | Allows use of policies to apply protections to data classifications (for example credit cards). This can be used with data redaction.

TABLE 1

This major change has brought common users and roles, Oracle-maintained users and roles, and also the ability to grant roles, system privileges and object privileges at a common level or a local level. Common means that the user, role or grant exists in all containers (root and pluggable databases); local means that it exists in just one pluggable database. Common users and roles can only be created in the root container, and Oracle defines the naming convention for these so that they must start 'C##' or 'c##'; conversely, local users created in a pluggable database must not start 'C##' or 'c##'. Roles and users can also be designated "Oracle maintained" when created; users such as SYS and SYSTEM have this flag. Privileges, both system and object, can be granted common or local, and roles can also be granted to users and roles on a common or local basis. This leads to very complex structures of privileges. I created some nice PL/SQL tools some years ago that show the hierarchy of privileges granted to objects, roles and system privileges (Figure 1).

SQL> who_can_access: Release 1.0.3.0.0 - Production on Tue Aug 27 18:06:14 2013 Copyright (c) 2004 PeteFinnigan.com Limited. All rights reserved. NAME OF OBJECT TO CHECK [USER_OBJECTS]: UTL_FILE OWNER OF THE OBJECT TO CHECK [USER]: SYS OUTPUT METHOD Screen/File [S]: FILE NAME FOR OUTPUT [priv.lst]: OUTPUT DIRECTORY [DIRECTORY or file (/tmp)]: EXCLUDE CERTAIN USERS [N]: USER TO SKIP [TEST%]: Checking object => SYS.UTL_FILE ==================================================================== Object type is => PACKAGE (TAB) Privilege => DEBUG is granted to => Role => C##ROLED (ADM = NO) which is granted to => User => SYS (ADM = YES) Role => C##ROLEA (ADM = NO) which is granted to => User => SYS (ADM = YES) User => C##TEST1 (ADM = NO) Privilege => EXECUTE is granted to => User => C##TEST1 (ADM = NO) User => DVSYS (ADM = NO) User => ORACLE_OCM (ADM = NO) User => ORDPLUGINS (ADM = NO) User => ORDSYS (ADM = NO) Role => PUBLIC (ADM = NO) Role => ROLEE (ADM = NO) which is granted to => User => TEST3 (ADM = NO) User => SYS (ADM = YES) User => WMSYS (ADM = NO) PL/SQL procedure successfully completed. For updates please visit http://www.petefinnigan.com/tools.htm SQL> FIGURE 1 – OUTPUT FROM WHO_CAN_ACCESS.SQL SHOWING PRIVILEGES FOR UTL_FILE
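For reference, the two kinds of user are created like this; a sketch with illustrative names, run with appropriate privileges:

-- Connected to the root (CDB$ROOT): a common user, visible in all containers.
create user c##sec_admin identified by "a-strong-password" container = all;

-- Connected to a pluggable database: a local user, visible only there.
create user app_owner identified by "another-strong-password" container = current;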


This privilege model is now much more complex, as a grant of EXECUTE on a package such as UTL_FILE can now be made at a common level or a local level. Such a grant, if made via a role or multiple roles, can appear very complex, as each role can be granted common or local and each role itself can be created common or local (Figure 2).

The output in Figure 2 shows the grants made to UTL_FILE in my 12.1 database in the root container and the pluggable database. The 'c' (common) or 'x' (local) in the line of digits is each object privilege. The first 'Y|N' pair indicates whether the user/role is common and whether it is Oracle maintained. The [C] or [L] is the grant of a role, common or local, and the {Y,N} pair indicates whether the role itself is common or local and whether it is Oracle maintained. The interesting fact is that if I compare the root container (orcl) and the pluggable database (pdborcl), then a common object privilege granted in the root container only makes it to the pluggable database if the complete chain of grants available via roles consists of common grants.

What does all of this mean? Well, I perform security audits for people, and I previously would audit a single database. In the future I can still audit a single 12c database, but that would be the pluggable database. I also need to consider the root container, and also the common users/roles and privileges visible in the container (plug) I am interested in, as they are potentially a route to other pluggable databases in the same container. If we think of current single databases, it makes sense to ensure separation of duties by having separate accounts and unique passwords. If we move to a CDB then we must, for security's sake, do the same. Common shared accounts across pluggable containers mean that a single person can access multiple databases (plugs) simply by setting the container.

Finally, the Oracle-maintained flag cannot be relied upon to identify default users/roles, as it's possible to set this flag by using:

ALTER SESSION SET "_ORACLE_SCRIPT"=TRUE;

We also cannot rely on common users being called 'c##...'; Oracle's own common users obviously are not named this way, and the prefix can be changed by setting the parameter "_common_user_prefix". If this is set to a NULL string, accounts with any name can be created. This is useful for legacy applications, but it also means that relying on accounts and roles called 'c##...' to identify common privileges will not work.
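A sketch of that change; as an underscore parameter this is for illustration only and should not be set on a real system without Oracle Support's blessing:

ALTER SYSTEM SET "_common_user_prefix"='' SCOPE=SPFILE;
-- after a restart, common users and roles no longer need the C## prefix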


C:\__scanner\__plsql_cracker>sqlplus sys/oracle1@//192.168.1.40:1521/orcl as sysdba

SQL*Plus: Release 11.2.0.1.0 Production on Tue Aug 27 17:07:30 2013
Copyright (c) 1982, 2010, Oracle. All rights reserved.

Connected to:
Personal Oracle Database 12c Release 12.1.0.1.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

SQL> set lines 120
SQL> set serveroutput on
SQL> @wca
SYS|C##TEST1|Y|N|000000H0000C000||
SYS|C##TEST2|Y|N|000000X00000000||
SYS|C##ROLEC|Y|N|00000000000X000||[R-NOGRANT]
SYS|C##TEST2|Y|N|00000000000X000|D|[C]-C##ROLEB-{Y,N}=>[L]-C##ROLEA-{Y,N}
SYS|C##TEST1|Y|N|00000000000X000|D|[L]-C##ROLEA-{Y,N}
SYS|PUBLIC|Y|Y|00000000000X000||
SYS|C##TEST2|Y|N|000000C00000000|D|[C]-C##ROLEB-{Y,N}=>[L]-C##ROLEA-{Y,N}=>[C]-C##ROLED-{Y,N}
SYS|C##TEST1|Y|N|000000C00000000|D|[L]-C##ROLEA-{Y,N}=>[C]-C##ROLED-{Y,N}

PL/SQL procedure successfully completed.

SQL> exit
Disconnected from Personal Oracle Database 12c Release 12.1.0.1.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

C:\__scanner\__plsql_cracker>sqlplus sys/oracle1@//192.168.1.40:1521/pdborcl as sysdba

SQL*Plus: Release 11.2.0.1.0 Production on Tue Aug 27 17:07:30 2013
Copyright (c) 1982, 2010, Oracle. All rights reserved.

Connected to:
Personal Oracle Database 12c Release 12.1.0.1.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

SQL> set lines 120
SQL> set serveroutput on
SQL> @wca
SYS|C##TEST1|Y|N|000000H0000C000||
SYS|C##ROLEA|Y|N|000000C00000000||[R-NOGRANT]
SYS|PUBLIC|Y|Y|00000000000X000||
SYS|TEST3|N|N|00000000000X000|D|[L]-ROLEE-{N,N}

PL/SQL procedure successfully completed.

SQL>

FIGURE 2 – OUTPUT FROM RUNNING WCA.SQL AGAINST THE ROOT AND PLUGGABLE CONTAINER IN 12CR1 FOR A SECURITY CHECK OF ACCESS TO UTL_FILE

There is also a new set of views called CDB_% which give a view across all containers; the container ID is included. These represent a security risk, as access to them would allow someone to see the gaps that may allow access to other databases. Fundamentally, the database architecture has changed when the multitenant option is used, so a slightly different mindset must be employed in your strategy for securing single and multiple databases. More care must be taken with common, local and shared containers. For more detailed comments on 12c security please see http://www.petefinnigan.com/weblog/entries/index.html
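As a minimal sketch of the sort of cross-container check these views allow (run from the root container; CDB_TAB_PRIVS and its CON_ID and COMMON columns are standard in 12.1, but the result will of course depend on your system):

-- who can execute UTL_FILE, in every open container?
SELECT con_id, grantee, privilege, common
FROM   cdb_tab_privs
WHERE  table_name = 'UTL_FILE'
ORDER  BY con_id, grantee;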

ABOUT THE AUTHOR

Pete Finnigan
Independent Oracle Security Consultant, PeteFinnigan.com

Pete Finnigan is well known in the Oracle world for specialising in helping people secure data held in their Oracle databases. Pete runs PeteFinnigan.com Limited, which specialises in training customers on Oracle security, performing security audits and also supplying database security software. See http://www.pfclscan.com and http://www.pfclobfuscate.com for details.



Technology

New Features of Oracle Advanced Analytics in Oracle 12c

Brendan Tierney, Consultant, Oralytics.com

By now you may have heard that Oracle 12c has been released. In addition to the new version of the database we also get a new version of SQL Developer (version 4).

For those of you who are interested in data mining, predictive analytics or data science, Oracle 12c has a number of new features for Oracle Data Miner, which is part of the Advanced Analytics Option. The purpose of this article is to outline these new features and to give some examples of how they can be used. The article has two main parts. The first part looks at new features that come with the Oracle Data Miner tool that is part of SQL Developer 4. The second part looks at the new in-database features that come with the Oracle 12c database.

The Oracle Advanced Analytics Option consists of Oracle Data Mining and Oracle R Enterprise (ORE). ORE is a separate product that has a separate install for both the server and client side. There has been no new release of ORE that coincided with the releases of 12c and SQL Developer 4, so this article will not be looking at what ORE can do.


New Features in the Oracle Data Miner Tool

The new features of the Oracle Data Miner tool, which comes as part of SQL Developer 4, can be grouped into two categories. The first category contains the new features that are available to all users of the tool (11.2 and 12c). The second category contains the new features that are only available against a 12c database. The new features of each of these categories are explained below.

Category 1 – Common new features for 11.2 and 12c database users

There is a new View Data feature that allows you to drill down to view the table object and to view nested tables.

A new Graph Node allows you to create graphs such as line, bar, scatter and box plots for data at any stage of a workflow. You can specify any of the attributes from the data source for the graphs, and there does not appear to be a limit on the number of graphs you can create.



This will output the deployment script to a file buried deep somewhere inside your SQL Developer directory. You can edit this location to a different, shorter location.

A new SQL Node. This is a welcome addition, as there have been many times that I've needed to write some SQL or PL/SQL to do a specific piece of processing on the data that was not available with the other nodes. There are two important elements to this SQL Node. The first is that you can write SQL and PL/SQL code to do whatever processing you want to do, but you can only do it on the Data node you are connected to.

The second is that you can use it to call some ORE code. This allows you to use the power of R, and the extensive range of packages that are available, to expand the analytic functionality that is available in the database. If there is some particular function that you cannot do in Oracle and it is available in R, you can now embed this function/code as an ORE object in the database. It can then be called using SQL and the SQL Node. WARNING: this particular feature will only work if you have ORE installed on your 11.2.0.3, 11.2.0.4 or 12.1 database.

New Model Build Node features include node-level text specifications for text transformations, displaying the heuristic rules responsible for excluding predictor columns, and being able to control the amount of classification and regression test results that are generated.

New Workflow SQL Script Deployment features. Up to now I found the workflow SQL script to be of limited use. The development team have put a lot of work into generating a proper script that can be used by developers and DBAs, but there are still some limitations. You can use the script to run the workflow automatically in the database without having to use the ODM tool, but it can only be run in the schema in which the workflow was generated. You will still have to do a lot of coding (although a lot less than you used to) to get your ODM models and workflows to run in another schema or database.

Category 2 – New features for 12c database users

Now for the new features that are only visible when you are running ODM / SQL Developer 4 against a 12c database. No configuration changes are needed: the ODM tool checks which version of the database you are logging into, and presents the available features based on that version.

New Predictive Query nodes allow you to build a node based on the new 12c feature called Predictive Queries (PQs). In SQL Developer we get four types of Predictive Query. These can be used for Anomaly Detection, Clustering, Feature Extraction and Classification.

It is important to remember that the underlying model produced by these PQs does not exist in the database after the query has executed. The model is created, used on the data and then deleted.

The Clustering node has the new Expectation Maximization algorithm in addition to the existing K-Means and O-Cluster algorithms. The Feature Extraction node has the new Principal Component Analysis algorithm in addition to the existing Non-Negative Matrix Factorization algorithm.

Text Transformations are now built into the model build nodes. These text transformations will


be part of the Automatic Data Processing steps for the model build nodes.

The Generalized Linear Model algorithm that is part of the Classification node has a Feature Selection option in the Algorithm Settings. The default setting is Ridge Regression; now there is an additional option of using Feature Selection.

This makes the outputs more understandable and the model more transparent. Feature Selection is not the default, so you will need to switch it on if you want to use it with the GLM algorithm.

• In previous versions of the database there could be some performance issues relating to the data types used. In 12c these have been addressed for BINARY_DOUBLE and BINARY_FLOAT, so if you are using these data types you should now see faster scoring of the data.

• There is a new in-database feature called Predictive Queries. This allows on-the-fly models: temporary models that are formed as part of an analytic clause. These models cannot be tuned and you cannot see the details of the model produced; they are formed for the query and do not exist afterwards.

SELECT cust_id, age, pred_age, age-pred_age age_diff, pred_det
FROM (SELECT cust_id, age, pred_age, pred_det,
             RANK() OVER (ORDER BY ABS(age-pred_age) DESC) rnk
      FROM (SELECT cust_id, age,
                   PREDICTION(FOR age USING *) OVER () pred_age,
                   PREDICTION_DETAILS(FOR age ABS USING *) OVER () pred_det
            FROM mining_data_apply_v))
WHERE rnk <= 5;
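A sketch of the same on-the-fly idea applied to clustering; this is my illustration rather than the article's, the cluster count of 4 is arbitrary, and mining_data_apply_v is the same sample view used above:

-- the transient clustering model behind CLUSTER_ID ... OVER ()
-- is built for this query, used to assign each row to one of
-- four clusters, and then thrown away
SELECT cust_id,
       CLUSTER_ID(INTO 4 USING *) OVER () cls
FROM   mining_data_apply_v;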

Getting Started with Far Sync Dataguard in Oracle 12c

Bob Mycroft, Oracle DBA, Capgemini

The Far Sync instance is registered with the Data Guard broker:

DGMGRL> ADD FAR_SYNC prmdb_fs AS CONNECT IDENTIFIER IS prmdb_fs.world;

Next create the standby database in the usual way using an OS copy or an RMAN "Duplicate database for standby" (see "Creating a Physical Standby Database (Doc ID 1475344.1)"). You will also need to add standby redo log groups to the primary and the Far Sync instance (a sketch follows the parameter listing below). You can set the LOG_FILE_NAME_CONVERT parameter for the Far Sync instance so that standby logs are created automatically when redo transport begins from the primary.

Parameters for Physical Standby "stbdb1"

On the standby database we simply need to amend the FAL_SERVER and LOG_ARCHIVE_CONFIG parameters to include the Far Sync instance. The FAL_SERVER setting simply tells the standby to get missing archivelogs from either the Far Sync instance or from the primary.

DB_UNIQUE_NAME=stbdb1
FAL_SERVER='prmdb_fs','prmdb1'
LOG_ARCHIVE_CONFIG='DG_CONFIG=(prmdb1,prmdb_fs,stbdb1)'
LOG_ARCHIVE_DEST_1='LOCATION=USE_DB_RECOVERY_FILE_DEST VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=stbdb1'
LOG_ARCHIVE_DEST_2='SERVICE=prmdb1 ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=prmdb1'
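As flagged above, a minimal sketch of adding the standby redo log groups; the paths, group numbers and 512M size are illustrative only, so match the size of your own online redo logs and create one more group than the primary has online log groups:

ALTER DATABASE ADD STANDBY LOGFILE GROUP 10 ('/u01/oradata/prmdb1/srl10.log') SIZE 512M;
ALTER DATABASE ADD STANDBY LOGFILE GROUP 11 ('/u01/oradata/prmdb1/srl11.log') SIZE 512M;
ALTER DATABASE ADD STANDBY LOGFILE GROUP 12 ('/u01/oradata/prmdb1/srl12.log') SIZE 512M;
ALTER DATABASE ADD STANDBY LOGFILE GROUP 13 ('/u01/oradata/prmdb1/srl13.log') SIZE 512M;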



The final thing to do is to set the protection mode for the standby database as below:

SQL> ALTER DATABASE SET STANDBY DATABASE TO MAXIMIZE AVAILABILITY;

Once this is all up and running you need to do the usual checks: manually switch logs and confirm that redo arrives at both the Far Sync instance AND the standby database. Make sure to check the STATUS column in V$ARCHIVE_DEST_STATUS, check for rows in V$ARCHIVE_GAP, and check the V$DATAGUARD_CONFIG view: you should see that the DEST_ROLE column in V$DATAGUARD_CONFIG shows 'FAR SYNC INSTANCE' and that the DATABASE_ROLE column in V$DATABASE has the new value of 'FAR SYNC'.
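A sketch of those checks; run the log switch on the primary, and note that the column lists here are trimmed for readability:

SQL> ALTER SYSTEM SWITCH LOGFILE;
SQL> SELECT dest_id, status, db_unique_name FROM v$archive_dest_status;
SQL> SELECT * FROM v$archive_gap;           -- no rows selected is good news
SQL> SELECT db_unique_name, dest_role FROM v$dataguard_config;
SQL> SELECT database_role FROM v$database;  -- 'FAR SYNC' on the Far Sync instance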

Best Practices?

Best Practices are pretty much what you have for a usual Dataguard system:

• Use FORCE LOGGING on the primary (a one-liner, shown after this list).
• Monitor the Far Sync instances in the same way that you would monitor the standby databases.
• Ensure the primary, Far Sync and standby databases have at least one more standby redo log group than the primary has online redo log groups.
• The Far Sync instance needs to be geographically separate from the primary, but not so far away as to introduce the risk of network slowdowns.
• Use more than one Far Sync instance on each site to avoid a single point of failure, and locate the Far Sync instances away from each other!
• Do plan for switchover and switchback: ensure that there are Far Sync instances running and ready on the standby side.
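The first of these is a single statement on the primary; it guarantees that NOLOGGING operations still generate the redo the standby needs:

SQL> ALTER DATABASE FORCE LOGGING;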

References

• Data Guard 12c New Feature: Far Sync Standby (Doc ID 1565071.1)
• Data Guard: Oracle 12c – New and updated Features (Doc ID 1558256.1)
• Creating a Physical Standby Database (Doc ID 1475344.1)
• Oracle® Data Guard Concepts and Administration 12c Release 1 (12.1)
• Note 1542969.1 Cascaded Standby Databases in Oracle 12c
• Oracle® Database Licensing Information, 12c Release 1 (12.1)

ABOUT THE AUTHOR

Bob Mycroft
Oracle DBA, Capgemini

Bob has worked with Oracle as a developer and DBA for over 18 years and has a general interest in all things Oracle. In 2009 Bob was awarded "Oracle ACE" by OTN, largely for the many articles written for Oracle Scene and presentations at the UKOUG conferences over the last 10 years. Bob is presenting at the UKOUG Tech13 conference in December on the Advanced Security Option, so come and say hello!



Technology

Pattern Recognition: The Weather

Alex Nuijten, Senior Consultant Manager, Ordina Oracle Solutions

The weather. Nobody, it seems, likes the weather. It's either too cold or too hot, too wet or too dry. Nobody ever seems to feel that the weather is just right.

When you talk to people they always, well almost always, complain about the weather. When you look at the weather data objectively, it might reveal a different picture. The weather was not as bad as they might say. In this article we will use one of the new Oracle SQL 12c capabilities to analyze the actual weather.

Setup

Before we can look at the new Oracle 12c capabilities we will need some data to work with. In this article I will use the following table: SQL> create table weather 2 (dt date not null 3 ,rain number not null 4 ); Table created.

The table above is very straightforward: two columns, one for the date and one for the amount of rain that has fallen on that particular date.

Now that we have the table, we still need sample data to work with. To generate some data I will use the following statement:

SQL> insert into weather
  2  with rsf(r)
  3  as
  4  (select 1
  5   from dual
  6   union all
  7   select r + 1
  8   from rsf
  9   where r < 31)
 10  select to_date (to_char (r)||'-08-2013', 'dd-mm-yyyy') dt
 11        ,round ((dbms_random.value * 20)) rain
 12  from rsf
 13  /

31 rows created.

The syntax above uses a recursive WITH statement, actually it's called Recursive Subquery Factoring, to generate the data. Lines 2 through 9 make up the Recursive Subquery Factoring part. This will generate 31 rows, one for each day in August. Line 10 creates the actual date, and line 11 uses a random numerical value, multiplies it by 20 and rounds it off to represent the amount of rain which has fallen on that particular date. The content of the table that I will use in this article looks like this:

SQL> select *
  2  from weather
  3  /

DT              RAIN
--------- ----------
01-AUG-13         14
02-AUG-13          0
03-AUG-13         19
04-AUG-13          6
05-AUG-13         20
06-AUG-13          1
07-AUG-13         17
08-AUG-13         17
09-AUG-13         14
10-AUG-13         18
11-AUG-13          9
12-AUG-13          4
13-AUG-13         17
14-AUG-13         16
15-AUG-13          5
16-AUG-13          5
17-AUG-13         10
18-AUG-13          5
19-AUG-13         14
20-AUG-13         19
21-AUG-13         15
22-AUG-13         12
23-AUG-13         18
24-AUG-13          2
25-AUG-13          5
26-AUG-13          4
27-AUG-13         15
28-AUG-13          7
29-AUG-13          0
30-AUG-13         12
31-AUG-13         11

31 rows selected.

As you can see in the data, some days it doesn’t rain while other days it rains without stopping. Let’s start to analyze some of this data.

Match_Recognize Clause

One of the new features in Oracle SQL 12c is the Match_Recognize clause. The syntax is something that I believe takes some getting used to. Let's take a look at an example to start with. First let's find out if the weather got better or worse as the dates went by.

SQL> select dt
  2        ,rain
  3        ,trend
  4  from weather
  5  match_recognize(
  6     order by dt
  7     measures classifier() as trend
  8     all rows per match
  9     pattern (better* worse* same*)
 10     define better as better.rain < prev (rain)
 11          ,worse as worse.rain > prev (rain)
 12          ,same as same.rain = prev (rain)
 13  );

DT              RAIN TREND
--------- ---------- ------------------------------
01-AUG-13         14
02-AUG-13          0 BETTER
03-AUG-13         19 WORSE
04-AUG-13          6 BETTER
05-AUG-13         20 WORSE
06-AUG-13          1 BETTER
07-AUG-13         17 WORSE
08-AUG-13         17 SAME
09-AUG-13         14 BETTER
10-AUG-13         18 WORSE
11-AUG-13          9 BETTER
12-AUG-13          4 BETTER
13-AUG-13         17 WORSE
14-AUG-13         16 BETTER
15-AUG-13          5 BETTER
16-AUG-13          5 SAME
17-AUG-13         10 WORSE
18-AUG-13          5 BETTER
19-AUG-13         14 WORSE
20-AUG-13         19 WORSE
21-AUG-13         15 BETTER
22-AUG-13         12 BETTER
23-AUG-13         18 WORSE
24-AUG-13          2 BETTER
25-AUG-13          5 WORSE
26-AUG-13          4 BETTER
27-AUG-13         15 WORSE
28-AUG-13          7 BETTER
29-AUG-13          0 BETTER
30-AUG-13         12 WORSE
31-AUG-13         11 BETTER

31 rows selected.

Lines 1 through 4 speak for themselves. Lines 5 through 13 make up the Match_Recognize clause. Because the Match_Recognize clause is all about pattern recognition, there needs to be a way to sort the data. This is done on line 6 with a simple ORDER BY which sorts the data based on the date. If you have ever worked with, or looked at, the MODEL clause (introduced in Oracle 10g) you might recognise the MEASURES keyword. The measures are the expressions which can be used in other parts of the SQL query.

As we only want to see whether the rain was getting better or worse, we only specify the CLASSIFIER() function. This function allows you to identify which row maps to the variables defined in the DEFINE section. There are other functions like that, such as the MATCH_NUMBER() function.

Line 8 instructs the query to return all rows. It is also possible to have it return one row whenever a match is found. This is done by using ONE ROW PER MATCH.

The actual pattern that we are looking for in our data is specified on line 9. The pattern that we want is really not a pattern at all; we just want to see how the rain of a certain day relates to the amount of rain on the previous day. The pattern states: zero or more iterations of "better", followed by zero or more iterations of "worse", followed by zero or more iterations of the "same" amount of rain. The "*" is a regular expression quantifier. Other quantifiers are listed later in this article.

The last part is where we define our variables, lines 10 through 12. The "better" variable is defined as follows: the rain needs to be less than the previous value of rain. The "worse" variable is defined by the opposite: the rain needs to be more than the previous value of rain. The "same" is defined by an equality of the current rain value and the previous one. To get the value of the previous day, the PREV() function is used. Of course there is also a function which can retrieve the next value: NEXT(). And yes, it is quite easy to answer this question using Analytic Functions (LEAD or LAG) and a CASE statement.
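For comparison, a sketch of that pre-12c alternative (my illustration, not the article's):

SQL> select dt
  2        ,rain
  3        ,case
  4           when rain < lag(rain) over (order by dt) then 'BETTER'
  5           when rain > lag(rain) over (order by dt) then 'WORSE'
  6           when rain = lag(rain) over (order by dt) then 'SAME'
  7         end as trend
  8  from weather
  9  order by dt;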

Make up your own Definitions

When it rains doesn’t necessarily mean that it needs to be considered as a wet day. Using the DEFINE clause, you determine


if a day is considered a wet day. Let's take a look at the next example:

SQL> select *
  2  from weather
  3  match_recognize
  4  (
  5     order by dt
  6     measures first (wet.dt) as first_wetday
  7             ,dry.dt as dryday
  8             ,last (wet.dt) as last_wetday
  9     one row per match
 10     pattern (wet dry{2,} wet*)
 11     define
 12        wet as wet.rain > 10,
 13        dry as dry.rain <= 10
 14  );

The pattern on line 10 reads: one wet day, followed by at least two dry days, followed by zero or more wet days. The "{2,}" is another regular expression quantifier, meaning two or more iterations.