30 December 2011

What is a "Generic" Oracle Error Message? (9614)

The 29 December quiz tested your knowledge of the differences between various ways of raising errors and communicating error messages to users. It was a word-based quiz, as opposed to one that is mostly code, so - no surprise - a number of players raised objections to some of the phrasing and scoring.

Let's go through these objections and see how much we can learn about PL/SQL through the process. Player comments are in blue. My response is in purple.

Choice 8990:  Both the PEI and RAE implementations allow you to set the error code to one that is not used by Oracle and is returned by a call to SQLCODE.

A player wrote:

1. PEI allows codes from -20000...-20999, but not only such codes;
2. Several codes from -20000...-20999 are used by Oracle, for example: "ORA-20000: ORU-10027: buffer overflow, limit of 2000 bytes"
3. Is it unambiguous to say that outside this range all codes are used by Oracle? For example, does "ORA-22567: Message 22567 not found; product=RDBMS; facility=ORA" mean that it's used by Oracle?

My response: you are absolutely right that Oracle does, in a few packages, use some of "our" error codes, such as ORA-20000. I've always felt that this was rude behavior on Oracle's part. We only get 1000 error codes with which to work; surely, you could leave all of those to us! So good point, but I don't think it makes this choice wrong in any way. With both those implementations, I can choose to set the error code to one that is not used by Oracle (such as -20704). The choice does not claim that it is impossible for me to choose a code that Oracle also uses. 

As to which codes Oracle "uses" - no, I would say that at least for now, -22567 is not in use. But it is certainly the case that Oracle could at some point use these error codes - and we cannot.
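
Here is a quick sketch of what the choice describes (the -20704 code and the message text are arbitrary, invented just for this illustration); in both cases SQLCODE returns the code I chose:
DECLARE
   my_exception   EXCEPTION;
   PRAGMA EXCEPTION_INIT (my_exception, -20704);
BEGIN
   RAISE my_exception;
EXCEPTION
   WHEN OTHERS
   THEN
      -- PEI: SQLCODE returns -20704
      DBMS_OUTPUT.put_line ('PEI: ' || SQLCODE);
END;
/

BEGIN
   RAISE_APPLICATION_ERROR (-20704, 'No order found for this customer');
EXCEPTION
   WHEN OTHERS
   THEN
      -- RAE: SQLCODE also returns -20704
      DBMS_OUTPUT.put_line ('RAE: ' || SQLCODE);
END;
/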

Choice 8989: PEI and VE offer "generic" Oracle error messages, while RAE provides an application-specific error message.

Two players raised questions about this choice, and both circle back to the use of the word "generic". My intention behind the use of this word, combined with the "Oracle error message" phrase, is that these are the error messages returned by Oracle and are the same across all installations of Oracle.

I marked answer 8989 as "Incorrect" because PEI uses an application-specific exception - i.e. it's not a "generic" Oracle error. I didn't realize this answer was about the error *message text* in particular. Seems like this answer was a bit ambiguous.

and

I disagree that "RAISE VALUE_ERROR" raises a "generic" error. It raises the very specific error associated with ORA-06502.

and

When you use PRAGMA EXCEPTION_INIT to change the error code of a user-defined exception, then the error message returned by SQLERRM or DBMS_UTILITY.FORMAT_ERROR_STACK is not generic, it is blank. So I’d suggest that while VE offers a "generic" Oracle error message and RAE provides an application-specific error message that PEI does neither. 

My response: the text of the choice is clearly about messages. And my point in this choice is that with the EXCEPTION_INIT pragma, you can change the error code associated with a named exception, but you cannot change the error message. As for VALUE_ERROR resulting in a specific rather than generic error message, I could understand this objection if I had written the choice as follows:

PEI and VE offer a single "generic" Oracle error message, while RAE provides an application-specific error message.

That is, if I had said or implied there was just one "generic" message. But I used the plural form, so I feel it is clear that I am talking about the error messages returned by Oracle, which cannot be changed by the developer with EXCEPTION_INIT or RAISE.

So is a blank message "generic"? If you do not use EXCEPTION_INIT with a user-defined exception, then the error message is, well, generic: "User-defined error". When you assign a different error code to a user-defined exception, the error message is then blank. Gee, I don't know, that seems rather generic to me!
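
To make that concrete, here is a small sketch of the messages each approach hands back through SQLERRM (again, the -20704 code and the RAISE_APPLICATION_ERROR text are arbitrary examples):
DECLARE
   my_exception   EXCEPTION;
   PRAGMA EXCEPTION_INIT (my_exception, -20704);
BEGIN
   RAISE my_exception;
EXCEPTION
   WHEN OTHERS
   THEN
      -- PEI: ORA-20704 with a blank message portion
      DBMS_OUTPUT.put_line (SQLERRM);
END;
/

BEGIN
   RAISE VALUE_ERROR;
EXCEPTION
   WHEN OTHERS
   THEN
      -- VE: the fixed Oracle message "ORA-06502: PL/SQL: numeric or value error"
      DBMS_OUTPUT.put_line (SQLERRM);
END;
/

BEGIN
   RAISE_APPLICATION_ERROR (-20704, 'No order found for this customer');
EXCEPTION
   WHEN OTHERS
   THEN
      -- RAE: "ORA-20704: No order found for this customer" - my own text
      DBMS_OUTPUT.put_line (SQLERRM);
END;
/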

Your thoughts?

28 December 2011

Thanks for a Great Year!

As 2011 comes to a close, I would like to thank the thousands of players who have played the quizzes at the PL/SQL Challenge, especially the daily PL/SQL quiz, and who have also volunteered their time to write quizzes, edit and review quizzes, and provide many ideas for ways to improve the website.

At the PL/SQL Challenge website this year, 4,670 Oracle technologists from 104 countries spent over 32,000 hours submitting 213,077 answers to quizzes. These numbers reflect a very impressive commitment by all these players to improving their skill set in PL/SQL, SQL and APEX.

I am grateful beyond words to our reviewers, listed below with the number of questions they reviewed in parentheses:

Michael Brunstedt (270)
Ken Holmslykke (175)
Elic (172)
Darryl Hurley (129)
Kim Berg Hansen (25)
Patrick Barel (18)
Viji Thatai (6)
Munky (3)

The impact of our reviewers can be seen most clearly in the reduction in errors in our quizzes. In the eight months of play in 2010, we issued corrections for 31 quizzes. In all of 2011, with the addition of SQL and APEX quizzes, we only needed to issue corrections for 20 quizzes. And since January 2011, there have never been more than 2 corrections in a month. That's still too many - and I take full responsibility for all errors! - but it is certainly a big improvement. I expect to see the number decrease further in 2012.

Next, my thanks to the following players who found the time to submit quizzes that were then played in 2011, listed below with the number of questions authored in parentheses:

_Nikotin (8)
Kim Berg Hansen (8)
mentzel.iudith (8)
koko (6)
Scott Wesley (5)
Jeff Kemp (5)
Patrick Barel (5)
Gary Myers (4)
Tim Hall (4)
Christian Rokitta (3)
Ken Holmslykke (3)
Joaquin Gonzalez (2)
anil_jha (2)
Vinod Kumar (2)
Sergey Porokh (2)
Jan Leers (iAdvise.be) (2)
Christopher Beck (2)
Chris Saxon (2)
Keith Hollins (1)
poelger (1)
Radoslav Golian (1)
Ramesh Samane (1)
Marc Thompson (1)
Alexander Polivany (1)
David Codl (1)
Oleg Borodin (1)
Viacheslav Stepanov (1)
Sreeguruparan PA (1)
D.J. Alexander (1)
Darryl Hurley (1)
senthil prakash Muthu Irulappan (1)
Ralf Koelling (1)
Sohilkumar Bhavsar (1)
Neal Hawman (1)
David Alexander (1)
Randy Gettman (1)
Oleg Gorskin (1)
Joni Vandenberghe (1)
voltrik (1)
Sailaja Pasupuleti (1)
Dennis Klemme (1)
Tony Winn (1)

Finally, we have been working on some exciting new features for quite a while now, and it looks like 2012 is the year in which they will appear, so please be on the lookout for announcements of big changes at the PL/SQL Challenge site soon.

Warmest holiday wishes to everyone!
Steven Feuerstein

21 December 2011

Are the Quizzes Too Long (Wordy)?

A player submitted this comment about the 20 December quiz:

The question for this quiz was much too long and contained a lot of unnecessary information. I used a lot of time to read and understand the question (279 seconds), while it was over the functionality of the simple built-in function LAST_DAY.

This happens regularly in the daily quizzes. Lots of quiz players like me are not native English speakers. (My English is very poor.) We need to spend much time reading the question to fully understand it. Therefore, we are necessarily disadvantaged as compared to the English-speaking participants.

Looking back at this quiz, I realize that I introduced a table and data in the table when it really wasn't necessary. That lengthened the quiz and could have been avoided.

So, definitely, this quiz could have been shortened and I am sure that many of the quizzes could be "stripped down" to the very basics.
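
For context, the built-in at the heart of that quiz really does fit on a single line; for example:
-- LAST_DAY returns the last day of the month containing its argument;
-- this query returns 31-DEC-2011.
SELECT LAST_DAY (DATE '2011-12-20')
  FROM DUAL
/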

I would love to hear what you think about this, and I will also create a poll on the question.

19 December 2011

EXTEND and NOT NULL

The 6 December quiz tested your knowledge of the effect of a NOT NULL constraint on a variable declaration, in this case through the declaration of a collection type whose elements are declared NOT NULL.

The last choice (8773) used a nested table as follows:
DECLARE
   l_string   VARCHAR2 (10);

   TYPE nt IS TABLE OF VARCHAR2 (10) NOT NULL;

   l_nt       nt := nt ();
BEGIN
   l_nt.EXTEND;
   l_nt (1) := l_string;
END;
When Oracle attempts to assign l_string to an element in l_nt, a VALUE_ERROR exception is raised. So far, so good.

But a player noticed something odd here. Marcus checked the documentation on EXTEND and found this:


Note:
EXTEND(n,i) is the only form that you can use for a collection whose elements have the NOT NULL constraint.

Yet the call to EXTEND did not raise an exception. Either the documentation is wrong or there is a bug in EXTEND. If it's the latter, it's a bug that has gone unnoticed for many versions. I have logged a comment to the Oracle Documentation team about this.
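
For reference, here is a small sketch of the form the documentation says we should be using with a NOT NULL element type - EXTEND(n, i), which appends n copies of element i (the values are just for illustration):
DECLARE
   TYPE nt IS TABLE OF VARCHAR2 (10) NOT NULL;

   l_nt   nt := nt ('abc');
BEGIN
   -- Append two copies of element 1, so no NULL element is ever created.
   l_nt.EXTEND (2, 1);

   DBMS_OUTPUT.put_line (l_nt.COUNT);   -- 3
   DBMS_OUTPUT.put_line (l_nt (3));     -- abc
END;
/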

Thanks, Marcus!

16 December 2011

%TYPE and RESULT_CACHE - Correction needed? (9604)

The 15 December quiz tested your knowledge of 11.2's ability to automatically detect when the contents of a table on which a result cache function relies have been changed (and then invalidate the cache).

Two players notified us that even though we scored the following choice as incorrect (which meant, in the context of this quiz, that a change to the table would not invalidate the cache):
CREATE OR REPLACE FUNCTION plch_func (n IN NUMBER)
   RETURN NUMBER
   RESULT_CACHE
IS
   l_n   plch_tab1.n%TYPE := n;
BEGIN
   DBMS_OUTPUT.put_line ('Running plch_func');
   RETURN l_n;
END;
The output from the verification does not support this conclusion.

So, first: the reason that this version of the function should not result in an invalidation of the cache highlights the distinction between dependencies among database objects and the "relies on" relationship for a result cache.

The plch_func function depends on the plch_tab1 table in the "traditional" sense: since there is a reference to the table in the function through the %TYPE declaration, when this table changes, the function is marked as invalid and will have to be recompiled (note that the Oracle Database 11g fine-grained dependency feature reduces the scenarios under which this invalidation will occur).

But that is a compile-time dependency, and very different from the "relies on" relationship that Oracle tracks automatically for purposes of ensuring clean data in a result cache. In other words, if the contents (but not the structure) of the plch_tab1 table change, the compilation status of plch_func is not affected.

This is not the case with the result cache feature, precisely because this cache is all about delivering data (the contents of the table) to users.

So...that was the idea: if the only "mention" of plch_tab1 in plch_func is %TYPE, then that function does not rely on plch_tab1, and changes to that table will not force an invalidation and flush of its cache.
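
By way of contrast, here is a sketch of a version that actually queries the table (the MAX is arbitrary; I just need a SELECT against plch_tab1). Because the table is now read at run time, Oracle records the "relies on" relationship and will flush the cached results when the table's contents are changed and committed:
CREATE OR REPLACE FUNCTION plch_func (n IN NUMBER)
   RETURN NUMBER
   RESULT_CACHE
IS
   l_n   plch_tab1.n%TYPE;
BEGIN
   DBMS_OUTPUT.put_line ('Running plch_func');

   -- A run-time reference to the table: this is what establishes "relies on".
   SELECT MAX (t.n) INTO l_n FROM plch_tab1 t;

   RETURN NVL (l_n, n);
END;
/
You can watch the cached result appear and then get invalidated after such a change in V$RESULT_CACHE_OBJECTS.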

Having said that, when I ran the verification code, I noticed that the output did not match my expectations. The first time I ran it, I saw this:
%TYPE Reference
1
1
when I should have seen this:
%TYPE Reference
Running plch_func
1
1
That seemed very strange. It was as though the compilation of the new version of the function did not flush the cache. I then put an update statement before the CREATE OR REPLACE to force the cache to be invalidated. Then I saw the expected results. Then I didn't believe that could possibly be necessary, so I removed the update statement and now do see the expected results consistently:
%TYPE Reference
Running plch_func
1
1
The two players who wrote in said they saw these results:
%TYPE Reference
Running plch_func
1
Running plch_func
1
Very odd! I cannot reproduce that, for sure. I'd love to hear about the experiences of any players who have an 11.2 instance with which to work (11.1 still requires that you provide an explicit RELIES ON clause to specify the tables). I have also added that update statement to the verification code.

Let me know what you discover!

Thanks, Steven

03 December 2011

The Pain of a Missing WHERE Clause Predicate

I sent the following email out to 1243 players today:

Dear PL/SQL Challenge player, 

One of the things I like most about the PL/SQL Challenge is that it keeps me humble. I am reminded on an almost daily basis by one or more players that there is lots for me to learn about PL/SQL and the wider Oracle technology stack. 

And then there are the moments of deep, deep humility - such as when I discover a nasty bug that has corrupted data in the PL/SQL Challenge repository. Painful lessons, those. 

Unfortunately, I write to you about precisely just such a moment. We discovered recently that a DELETE statement was missing a critical predicate in the WHERE clause (did a chill just run down your spine, as it did mine when I realized what was happening?). As a result, if a player pressed the "Start Over" button while taking the Oracle Magazine quiz, all the choices made by players for this quiz were deleted. 

This bug went undiscovered for quite a long time, so it is not possible to recover the data. Instead, we are going to "wipe the slate clean", extend the close date on these quizzes, and offer you the opportunity to take them again. 

Look on the bright side: the second time around, you will likely answer faster and with higher accuracy!

Our deepest apologies for wasting even a few moments of your time. I hope you will continue to play the daily, weekly and monthly quizzes, thereby deepening your expertise in key Oracle technologies.

Trying to look on the bright side of things, this same problem would have occurred if anyone competing in the quarterly championship playoff had pressed that same "Start Over" button (it is only visible when the competition has multiple quizzes in it). That would have wreaked havoc - and required me to come up with five more playoff-worthy quizzes. 

Still, I feel horrible that we would have made such a mistake and not noticed it in testing; and that I would cause any player to have to re-do something they've done already.

I have extended the close date for the Sept/Oct quiz to the end of the year. I hope that you will all find the time to take the quiz before then.

Again, my apologies....
Steven Feuerstein

The Complex World of Indexing (10193)

The SQL quiz for the week ending 2 December tested your knowledge of the circumstances under which Oracle would use an index to optimize a SQL statement.

Two players registered concerns about scoring the first choice as correct. This choice was:
SELECT last_name
  FROM plch_employees
 WHERE hire_date BETWEEN TO_DATE ('01-01-2010', 'MM-DD-YYYY')
                     AND TO_DATE ('12-31-2010', 'MM-DD-YYYY')
My intention was to isolate a small enough sampling of the total data set so as to invoke the index - and to also point out (through other choices) that if you use the column inside an expression, the index will not be used.
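
To illustrate that second point, here is a minimal sketch (the exact plans will of course vary with your data, statistics and version): a range predicate on the bare column can use the index, while wrapping the column in an expression prevents it - unless you create a function-based index on that expression.
-- The optimizer can consider the index on hire_date for this predicate.
SELECT last_name
  FROM plch_employees
 WHERE hire_date BETWEEN TO_DATE ('01-01-2010', 'MM-DD-YYYY')
                     AND TO_DATE ('12-31-2010', 'MM-DD-YYYY')
/

-- Applying a function to the indexed column rules out that index
-- (a function-based index on TRUNC (hire_date, 'YYYY') would be needed).
SELECT last_name
  FROM plch_employees
 WHERE TRUNC (hire_date, 'YYYY') = TO_DATE ('01-01-2010', 'MM-DD-YYYY')
/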

Here are the comments; I look forward to your thoughts.

Steven Feuerstein

1. I think that the first choice for this quiz is ambiguous. The problem is that the query selects about 3% of the rows. With the index in place, the database can choose either a full table scan or an index range scan followed by table access by rowid to run the query. Based on the assumptions, it's impossible to tell which execution plan will be more efficient and hence whether the index improves query performance. In a real-world situation, we could expect that the table is "almost" ordered by hire_date, which means that the clustering factor of the index would be low and the index access path would be more efficient than a full table scan. However, there was no such assumption specified for this quiz. It would be better to include last_name in the index or to select just a single month or a week instead of a year.

2. I have performed some research around the topic of this quiz, just to verify a little more deeply the case of choice number 1, which selects about 1/31 of the rows (1 year out of 31 years) and seemed to be the most obvious among all the choices. But query optimization is a huge topic, never ending and always dynamic, sometimes very frustrating and always exciting to research.

That is, though the query returns only about 3% of the rows, this is still not a complete guarantee that the index will be used in every case. It depends highly on the clustering factor of the index or, in other words, on how the rows were loaded into the table - to what degree they were loaded ordered by the HIRE_DATE column.

Below are two test cases, one in which the data is loaded with minimum clustering of the hire_date values (values generated in a circular fashion) and the other in which the values are highly clustered.

In both cases I used a direct-path insert to create the test data faster. Then, for each case, I created the index and checked the execution plan:

a. without any statistics collected, which means that dynamic sampling will be used
b. with statistics collected using 20% sampling
c. with complete statistics collected.
Also, I use two slightly different queries:

1. One query identical with that of choice 1 in the quiz:
SELECT last_name
  FROM plch_employees
 WHERE hire_date BETWEEN TO_DATE ('01-01-2010', 'MM-DD-YYYY')
                     AND TO_DATE ('12-31-2010', 'MM-DD-YYYY')
/
2. One that selects the HIRE_DATE instead of the LAST_NAME
SELECT hire_date
  FROM plch_employees
 WHERE hire_date BETWEEN TO_DATE ('01-01-2010', 'MM-DD-YYYY')
                     AND TO_DATE ('12-31-2010', 'MM-DD-YYYY')
/
The conclusions are as follows:

1. When the rows are loaded cyclically by HIRE_DATE (not clustered, but dispersed), then the index is NOT used. Here dynamic sampling accurately estimates cardinality.

2. When the rows are loaded clustered by HIRE_DATE, then the index is always used, even when the estimated cardinality is high. Here dynamic sampling highly underestimates the cardinality.

3. The index is always used when the query can be resolved entirely from the index alone (e.g. selecting HIRE_DATE instead of LAST_NAME).

See below the two test cases:

SET TIMING ON
SET LINESIZE 130
SET PAGESIZE 0


----------------------------------------------------------
CASE 1 -- data loaded in a circular fashion
----------------------------------------------------------
CREATE TABLE plch_employees
(
   employee_id     INTEGER
,  last_name       VARCHAR2 (100)
,  first_name      VARCHAR2 (100)
,  hire_date       DATE
,  salary          NUMBER
)
/

Table created.

Elapsed: 00:00:02.02

-- create a test case (10,000,000 rows) using smaller steps to avoid error
-- ORA-30009: Not enough memory for CONNECT BY operation

DECLARE
   K  PLS_INTEGER;
BEGIN
   FOR I IN 1 .. 1000
   LOOP
       K := (I-1)*10000;

       INSERT /*+APPEND */ INTO plch_employees
       SELECT K+ROWNUM, 
              'ABC '||(K+ROWNUM),
              'XYZ '||(K+ROWNUM),
               TO_DATE('01011980','DDMMYYYY') + MOD(K+ROWNUM, 31*365),
               10000 + MOD(K+ROWNUM, 20000)
       FROM DUAL
       CONNECT BY LEVEL <= 10000 ;

       COMMIT;
   END LOOP;
END;
/

PL/SQL procedure successfully completed.

Elapsed: 00:01:59.05


CREATE INDEX i_plch_employees
ON plch_employees (hire_date)
/

Index created.

Elapsed: 00:00:50.02

-- check execution plans without any statistics (will use dynamic sampling)

/* TEST1: selecting LAST_NAME */

EXPLAIN PLAN SET STATEMENT_ID='T1' FOR
SELECT last_name
  FROM plch_employees
 WHERE hire_date BETWEEN TO_DATE ('01-01-2010', 'MM-DD-YYYY')
                     AND TO_DATE ('12-31-2010', 'MM-DD-YYYY')
/
Explained.

Elapsed: 00:00:00.07


select plan_table_output 
  from table(dbms_xplan.display('plan_table','T1','serial'))
/

------------------------------------------------------------------------------------
| Id  | Operation         | Name           | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |                |   326K|    50M|  6883  (10)| 00:00:27 |
|*  1 |  TABLE ACCESS FULL| PLCH_EMPLOYEES |   326K|    50M|  6883  (10)| 00:00:27 |
------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("HIRE_DATE">=TO_DATE(' 2010-01-01 00:00:00', 'syyyy-mm-dd
              hh24:mi:ss') AND "HIRE_DATE"<=TO_DATE(' 2010-12-31 00:00:00', 'syyyy-mm-dd
              hh24:mi:ss'))

Note
-----
   - dynamic sampling used for this statement

19 rows selected.

Elapsed: 00:00:00.00

/* TEST2: selecting HIRE_DATE instead of LAST_NAME - index only access suffices */

EXPLAIN PLAN SET STATEMENT_ID='T2' FOR
SELECT hire_date
  FROM plch_employees
 WHERE hire_date BETWEEN TO_DATE ('01-01-2010', 'MM-DD-YYYY')
                     AND TO_DATE ('12-31-2010', 'MM-DD-YYYY')
/
Explained.

Elapsed: 00:00:00.00


select plan_table_output 
  from table(dbms_xplan.display('plan_table','T2','serial'))
/

-------------------------------------------------------------------------------------
| Id  | Operation        | Name             | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT |                  |   326K|  2866K|   869   (2)| 00:00:04 |
|*  1 |  INDEX RANGE SCAN| I_PLCH_EMPLOYEES |   326K|  2866K|   869   (2)| 00:00:04 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - access("HIRE_DATE">=TO_DATE(' 2010-01-01 00:00:00', 'syyyy-mm-dd
              hh24:mi:ss') AND "HIRE_DATE"<=TO_DATE(' 2010-12-31 00:00:00', 'syyyy-mm-dd
              hh24:mi:ss'))

Note
-----
   - dynamic sampling used for this statement

19 rows selected.

Elapsed: 00:00:00.00

-- gather statistics with 20% sampling
BEGIN
    DBMS_STATS.GATHER_TABLE_STATS(USER,'PLCH_EMPLOYEES',
                                   cascade     => TRUE,
                                   method_opt  => 'FOR ALL INDEXED COLUMNS',
                                   estimate_percent => 20);
END;
/

PL/SQL procedure successfully completed.

Elapsed: 00:00:46.00

/* TEST3: selecting LAST_NAME */

EXPLAIN PLAN SET STATEMENT_ID='T3' FOR
SELECT last_name
  FROM plch_employees
 WHERE hire_date BETWEEN TO_DATE ('01-01-2010', 'MM-DD-YYYY')
                     AND TO_DATE ('12-31-2010', 'MM-DD-YYYY')
/

Explained.

Elapsed: 00:00:01.04

select plan_table_output 
  from table(dbms_xplan.display('plan_table','T3','serial'))
/

------------------------------------------------------------------------------------
| Id  | Operation         | Name           | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |                |   315K|    12M|  6867  (10)| 00:00:26 |
|*  1 |  TABLE ACCESS FULL| PLCH_EMPLOYEES |   315K|    12M|  6867  (10)| 00:00:26 |
------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("HIRE_DATE">=TO_DATE(' 2010-01-01 00:00:00', 'syyyy-mm-dd
              hh24:mi:ss') AND "HIRE_DATE"<=TO_DATE(' 2010-12-31 00:00:00', 'syyyy-mm-dd
              hh24:mi:ss'))

15 rows selected.

Elapsed: 00:00:00.03

/* TEST4: selecting HIRE_DATE instead of LAST_NAME - index only access suffices */

EXPLAIN PLAN SET STATEMENT_ID='T4' FOR
SELECT hire_date
  FROM plch_employees
 WHERE hire_date BETWEEN TO_DATE ('01-01-2010', 'MM-DD-YYYY')
                     AND TO_DATE ('12-31-2010', 'MM-DD-YYYY')
/

Explained.

Elapsed: 00:00:00.00

select plan_table_output 
  from table(dbms_xplan.display('plan_table','T4','serial'))
/

-------------------------------------------------------------------------------------
| Id  | Operation        | Name             | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT |                  |   315K|  2463K|   907   (2)| 00:00:04 |
|*  1 |  INDEX RANGE SCAN| I_PLCH_EMPLOYEES |   315K|  2463K|   907   (2)| 00:00:04 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - access("HIRE_DATE">=TO_DATE(' 2010-01-01 00:00:00', 'syyyy-mm-dd
              hh24:mi:ss') AND "HIRE_DATE"<=TO_DATE(' 2010-12-31 00:00:00', 'syyyy-mm-dd
              hh24:mi:ss'))

15 rows selected.

Elapsed: 00:00:00.00

-- gather full statistics  -  almost same results as with 20%  
BEGIN
    DBMS_STATS.GATHER_TABLE_STATS(USER,'PLCH_EMPLOYEES',
                                   cascade     => TRUE,
                                   method_opt  => 'FOR ALL INDEXED COLUMNS',
                                   estimate_percent => 99);
END;
/

PL/SQL procedure successfully completed.

Elapsed: 00:02:52.03

/* TEST5: selecting LAST_NAME */

EXPLAIN PLAN SET STATEMENT_ID='T5' FOR
SELECT last_name
  FROM plch_employees
 WHERE hire_date BETWEEN TO_DATE ('01-01-2010', 'MM-DD-YYYY')
                     AND TO_DATE ('12-31-2010', 'MM-DD-YYYY')
/
Explained.

Elapsed: 00:00:00.00

select plan_table_output 
  from table(dbms_xplan.display('plan_table','T5','serial'))
/

------------------------------------------------------------------------------------
| Id  | Operation         | Name           | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |                |   315K|    12M|  6867  (10)| 00:00:26 |
|*  1 |  TABLE ACCESS FULL| PLCH_EMPLOYEES |   315K|    12M|  6867  (10)| 00:00:26 |
------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("HIRE_DATE">=TO_DATE(' 2010-01-01 00:00:00', 'syyyy-mm-dd
              hh24:mi:ss') AND "HIRE_DATE"<=TO_DATE(' 2010-12-31 00:00:00', 'syyyy-mm-dd
              hh24:mi:ss'))

15 rows selected.

Elapsed: 00:00:00.03

/* TEST6: selecting HIRE_DATE instead of LAST_NAME - index only access suffices */

EXPLAIN PLAN SET STATEMENT_ID='T6' FOR
SELECT hire_date
  FROM plch_employees
 WHERE hire_date BETWEEN TO_DATE ('01-01-2010', 'MM-DD-YYYY')
                     AND TO_DATE ('12-31-2010', 'MM-DD-YYYY')
/
Explained.

Elapsed: 00:00:00.00

select plan_table_output 
  from table(dbms_xplan.display('plan_table','T6','serial'))
/

-------------------------------------------------------------------------------------
| Id  | Operation        | Name             | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT |                  |   315K|  2462K|   853   (2)| 00:00:04 |
|*  1 |  INDEX RANGE SCAN| I_PLCH_EMPLOYEES |   315K|  2462K|   853   (2)| 00:00:04 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - access("HIRE_DATE">=TO_DATE(' 2010-01-01 00:00:00', 'syyyy-mm-dd
              hh24:mi:ss') AND "HIRE_DATE"<=TO_DATE(' 2010-12-31 00:00:00', 'syyyy-mm-dd
              hh24:mi:ss'))

15 rows selected.

Elapsed: 00:00:00.00


DROP TABLE plch_employees
/

----------------------------------------------------------
CASE 2 -- data loaded clustered by HIRE_DATE
----------------------------------------------------------
CREATE TABLE plch_employees
(
   employee_id     INTEGER
,  last_name       VARCHAR2 (100)
,  first_name      VARCHAR2 (100)
,  hire_date       DATE
,  salary          NUMBER
)
/

Table created.

Elapsed: 00:00:02.02

DECLARE
   K  PLS_INTEGER;
   N  PLS_INTEGER := TRUNC(10000000 / (31*365));
BEGIN
   FOR I IN 1 .. 1000
   LOOP
       K := (I-1)*10000;

       INSERT /*+APPEND */ INTO plch_employees
       SELECT K+ROWNUM, 
              'ABC '||(K+ROWNUM),
              'XYZ '||(K+ROWNUM),
               TO_DATE('01011980','DDMMYYYY') + FLOOR( (K+ROWNUM)/ N ),
               10000 + MOD(K+ROWNUM, 20000)
       FROM DUAL
       CONNECT BY LEVEL <= 10000 ;

       COMMIT;
   END LOOP;
END;
/

PL/SQL procedure successfully completed.

Elapsed: 00:01:51.00

-- check correct (expected) distribution
SELECT COUNT(*)
  FROM plch_employees
 WHERE hire_date BETWEEN TO_DATE ('01-01-2010', 'MM-DD-YYYY')
                     AND TO_DATE ('12-31-2010', 'MM-DD-YYYY')
/

  COUNT(*)
----------
    322295

1 row selected.

Elapsed: 00:00:13.04

CREATE INDEX i_plch_employees ON plch_employees (hire_date)
/

Index created.

Elapsed: 00:00:47.01

/* TEST1: selecting LAST_NAME -- 
now index is used, though wrong cardinality estimation ! */

EXPLAIN PLAN SET STATEMENT_ID='T1' FOR
SELECT last_name
  FROM plch_employees
 WHERE hire_date BETWEEN TO_DATE ('01-01-2010', 'MM-DD-YYYY')
                     AND TO_DATE ('12-31-2010', 'MM-DD-YYYY')
/
Explained.

Elapsed: 00:00:00.00

------------------------------------------------------------------------------------------------
| Id  | Operation                   | Name             | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |                  |  2500 |   393K|    26   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| PLCH_EMPLOYEES   |  2500 |   393K|    26   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN          | I_PLCH_EMPLOYEES |  2500 |       |     9   (0)| 00:00:01 |
------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("HIRE_DATE">=TO_DATE(' 2010-01-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss')
              AND "HIRE_DATE"<=TO_DATE(' 2010-12-31 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))

Note
-----
   - dynamic sampling used for this statement

19 rows selected.

Elapsed: 00:00:00.04


/* TEST2: selecting HIRE_DATE instead of LAST_NAME - index only access suffices */

EXPLAIN PLAN SET STATEMENT_ID='T2' FOR
SELECT hire_date
  FROM plch_employees
 WHERE hire_date BETWEEN TO_DATE ('01-01-2010', 'MM-DD-YYYY')
                     AND TO_DATE ('12-31-2010', 'MM-DD-YYYY')
/

Explained.

Elapsed: 00:00:00.00

-------------------------------------------------------------------------------------
| Id  | Operation        | Name             | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT |                  |  2500 | 22500 |     9   (0)| 00:00:01 |
|*  1 |  INDEX RANGE SCAN| I_PLCH_EMPLOYEES |  2500 | 22500 |     9   (0)| 00:00:01 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - access("HIRE_DATE">=TO_DATE(' 2010-01-01 00:00:00', 'syyyy-mm-dd
              hh24:mi:ss') AND "HIRE_DATE"<=TO_DATE(' 2010-12-31 00:00:00', 'syyyy-mm-dd
              hh24:mi:ss'))

Note
-----
   - dynamic sampling used for this statement

19 rows selected.

Elapsed: 00:00:00.01


-- gather statistics with 20% sampling
BEGIN
    DBMS_STATS.GATHER_TABLE_STATS(USER,'PLCH_EMPLOYEES',
                                   cascade     => TRUE,
                                   method_opt  => 'FOR ALL INDEXED COLUMNS',
                                   estimate_percent => 20);
END;
/
PL/SQL procedure successfully completed.

Elapsed: 00:00:48.02


/* TEST3: selecting LAST_NAME - now with correct (much higher) cardinality, index still used ! */

EXPLAIN PLAN SET STATEMENT_ID='T3' FOR
SELECT last_name
  FROM plch_employees
 WHERE hire_date BETWEEN TO_DATE ('01-01-2010', 'MM-DD-YYYY')
                     AND TO_DATE ('12-31-2010', 'MM-DD-YYYY')
/
Explained.

Elapsed: 00:00:00.00

------------------------------------------------------------------------------------------------
| Id  | Operation                   | Name             | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |                  |   318K|    12M|  3160   (2)| 00:00:12 |
|   1 |  TABLE ACCESS BY INDEX ROWID| PLCH_EMPLOYEES   |   318K|    12M|  3160   (2)| 00:00:12 |
|*  2 |   INDEX RANGE SCAN          | I_PLCH_EMPLOYEES |   318K|       |   861   (2)| 00:00:04 |
------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("HIRE_DATE">=TO_DATE(' 2010-01-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss')
              AND "HIRE_DATE"<=TO_DATE(' 2010-12-31 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))

15 rows selected.

Elapsed: 00:00:00.00


/* TEST4: selecting HIRE_DATE instead of LAST_NAME - index only access suffices */

EXPLAIN PLAN SET STATEMENT_ID='T4' FOR
SELECT hire_date
  FROM plch_employees
 WHERE hire_date BETWEEN TO_DATE ('01-01-2010', 'MM-DD-YYYY')
                     AND TO_DATE ('12-31-2010', 'MM-DD-YYYY')
/
Explained.

Elapsed: 00:00:00.00

-------------------------------------------------------------------------------------
| Id  | Operation        | Name             | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT |                  |   318K|  2490K|   861   (2)| 00:00:04 |
|*  1 |  INDEX RANGE SCAN| I_PLCH_EMPLOYEES |   318K|  2490K|   861   (2)| 00:00:04 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - access("HIRE_DATE">=TO_DATE(' 2010-01-01 00:00:00', 'syyyy-mm-dd
              hh24:mi:ss') AND "HIRE_DATE"<=TO_DATE(' 2010-12-31 00:00:00', 'syyyy-mm-dd
              hh24:mi:ss'))

15 rows selected.

Elapsed: 00:00:00.00


-- gather full statistics  -  almost same results as with 20%  
BEGIN
    DBMS_STATS.GATHER_TABLE_STATS(USER,'PLCH_EMPLOYEES',
                                   cascade     => TRUE,
                                   method_opt  => 'FOR ALL INDEXED COLUMNS',
                                   estimate_percent => 99);
END;
/

PL/SQL procedure successfully completed.

Elapsed: 00:02:42.05

/* TEST5: selecting LAST_NAME - 
now with correct (much higher) cardinality, index still used ! */

EXPLAIN PLAN SET STATEMENT_ID='T5' FOR
SELECT last_name
  FROM plch_employees
 WHERE hire_date BETWEEN TO_DATE ('01-01-2010', 'MM-DD-YYYY')
                     AND TO_DATE ('12-31-2010', 'MM-DD-YYYY')
/
Explained.

Elapsed: 00:00:00.00

------------------------------------------------------------------------------------------------
| Id  | Operation                   | Name             | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |                  |   321K|    12M|  3035   (2)| 00:00:12 |
|   1 |  TABLE ACCESS BY INDEX ROWID| PLCH_EMPLOYEES   |   321K|    12M|  3035   (2)| 00:00:12 |
|*  2 |   INDEX RANGE SCAN          | I_PLCH_EMPLOYEES |   321K|       |   868   (2)| 00:00:04 |
------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("HIRE_DATE">=TO_DATE(' 2010-01-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss')
              AND "HIRE_DATE"<=TO_DATE(' 2010-12-31 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))

15 rows selected.

Elapsed: 00:00:00.00


/* TEST6: selecting HIRE_DATE instead of LAST_NAME - index only access suffices */

EXPLAIN PLAN SET STATEMENT_ID='T6' FOR
SELECT hire_date
  FROM plch_employees
 WHERE hire_date BETWEEN TO_DATE ('01-01-2010', 'MM-DD-YYYY')
                     AND TO_DATE ('12-31-2010', 'MM-DD-YYYY')
/
Explained.

Elapsed: 00:00:00.00

-------------------------------------------------------------------------------------
| Id  | Operation        | Name             | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT |                  |   321K|  2511K|   868   (2)| 00:00:04 |
|*  1 |  INDEX RANGE SCAN| I_PLCH_EMPLOYEES |   321K|  2511K|   868   (2)| 00:00:04 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - access("HIRE_DATE">=TO_DATE(' 2010-01-01 00:00:00', 'syyyy-mm-dd
              hh24:mi:ss') AND "HIRE_DATE"<=TO_DATE(' 2010-12-31 00:00:00', 'syyyy-mm-dd
              hh24:mi:ss'))

15 rows selected.

Elapsed: 00:00:00.00

02 December 2011

Looking for a few good...SQL quizzes

Friends,

As many of you know, we now offer a weekly SQL quiz, in addition to the daily PL/SQL quizzes.

I have a nice solid backlog of PL/SQL quiz submissions from players - and I must use these very gradually due to their impact on rankings (the author gets the maximum possible score for their quiz).

We are in need, though, of SQL quizzes. I can and will write them, but my SQL expertise only goes so far. I'd love to feature the experience and creativity of players from all over the world.

Just click on Submit Quiz and have a go at it. I am happy to help you "polish up" the text, so don't worry about the fine details. Just make sure you provide working verification code, a nice variety of choices, and a solid explanation from which I can work.

Many thanks in advance,
Steven Feuerstein

Late Report: Results of Q3 2011 Championship Playoff

[My apologies to players of the championship - I just discovered that I had neglected to publish this post way back in October.]

You will find below the results of the Q3 2011 championship playoff, played on 25 October. The number next to the player's name is the number of times that player has been in a playoff.

You can take a look at the quizzes that were in the competition through the Library page. These results will also be available through the Rankings page.

Congratulations to all players, of course, but let's give a special round of applause to our top three players:

1. Frank Schrader, who has played in an astonishing five playoffs (this is the 6th since we started the PL/SQL Challenge in April 2010) and wins a US$250 Amazon.com gift card.

2. Kevan Gelling, who has played in three playoffs (a fine accomplishment in and of itself) and wins a US$175 Amazon.com gift card.

3. Justin Cave, who has played in four playoffs (very impressive) and wins a US$100 Amazon.com gift card.

Rank  Name (Playoffs)           Country          Total Time       Total Score
   1  Frank Schrader (5)        Germany          17 mins 08 secs         3052
   2  Kevan Gelling (3)         Isle of Man      19 mins 57 secs         2651
   3  Justin Cave (4)           United States    19 mins 59 secs         2635
   4  Randy Gettman (4)         United States    17 mins 49 secs         2629
   5  _Nikotin (3)              Russia           16 mins 28 secs         2566
   6  mentzel.iudith (4)        Israel           19 mins 54 secs         2512
   7  Jeff Kemp (6)             Australia        13 mins 25 secs         2342
   8  Niels Hecker (5)          Germany          17 mins 09 secs         2267
   9  Jerry Bull (2)            United States    19 mins 54 secs         2247
  10  Janis Baiza (3)           Latvia           18 mins 45 secs         2235
  11  Siim Kask (4)             Estonia          19 mins 27 secs         2201
  12  james su (3)              Canada           16 mins 46 secs         2070
  13  Anton Scheffer (2)        Netherlands      16 mins 04 secs         1974
  14  Viacheslav Stepanov (3)   Russia           19 mins 32 secs         1954
  15  Ludovic Szewczyk (2)      Belgium          19 mins 59 secs         1850
  16  Joaquin Gonzalez (3)      Spain            16 mins 42 secs         1786
  17  kowido (3)                No Country Set   19 mins 32 secs         1649
  18  Ninoslav Čerkez (1)       Croatia          19 mins 22 secs         1638
  19  Chad Lee (2)              United States    19 mins 56 secs         1596
  20  Syed Ariful Bari (1)      Bangladesh       05 mins 59 secs         1550
  21  Anna Onishchuk (3)        Ireland          17 mins 32 secs         1519
  22  Dmitry Pushkashu (1)      Moldova          16 mins 24 secs         1462
  23  Michal Cvan (4)           Slovakia         19 mins 41 secs         1426
  24  Thierry Poels (2)         Belgium          18 mins 32 secs         1394
  25  Rajesh Venkataramani (2)  India            19 mins 26 secs         1196
  26  Dejan Topalovic (1)       Austria          14 mins 09 secs         1147
  27  macabre (2)               Russia           19 mins 47 secs         1099
  28  Alain Boulianne (1)       French Republic  15 mins 11 secs         1061
  29  Spoon (2)                 Singapore        19 mins 40 secs          957
  30  dobloman (1)              Netherlands      16 mins 16 secs          700
  31  John Hall (3)             United States    19 mins 41 secs          481

30 November 2011

Nuances of Deterministic Functions in PL/SQL (9588)

The 23 November quiz tested your knowledge of the little-known fact that as of Oracle Database 11g, the compiler will optimize the execution of deterministic functions in PL/SQL blocks. Previously, defining a function with the DETERMINISTIC clause led to optimizations when the function was called inside an SQL statement, but not in PL/SQL code itself.

_Nikotin, a devoted and high-ranking PL/SQL Challenge player, took the quiz as a launch point for an in-depth analysis of this optimization and came up with several interesting conclusions:

1. The optimization only occurs when the values passed as arguments to the function are compile-time constants, such as 1 or 'abc' or TRUE - or even TO_NUMBER('123') when the optimization level is set to at least 2 (see the sketch after this list). If you pass to one of the IN arguments an expression that cannot be determined at compile time to be a constant, no optimization occurs. Makes perfect sense, as this optimization occurs at compile time.

2. The optimization does not seem to occur outside of the context of a loop. In other words, calling the function N times with the same compile-time static arguments explicitly in your code results in the body of the function being executed N times.

3. Furthermore, if the loop contains non-deterministic code (that is, it is not simply calling deterministic functions with compile-time static constants), then the optimization also does not occur.
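
Here is a minimal sketch of that first point (the plch_counter package is invented purely for this illustration, and I am assuming the default optimization level of 2 on Oracle Database 11g or higher). If the optimization applies as described, the first count reported should be far smaller than the second:
CREATE OR REPLACE PACKAGE plch_counter
IS
   g_calls   PLS_INTEGER := 0;

   FUNCTION twice (n IN NUMBER)
      RETURN NUMBER
      DETERMINISTIC;
END;
/

CREATE OR REPLACE PACKAGE BODY plch_counter
IS
   FUNCTION twice (n IN NUMBER)
      RETURN NUMBER DETERMINISTIC
   IS
   BEGIN
      -- Count how many times the function body actually runs.
      g_calls := g_calls + 1;
      RETURN n * 2;
   END;
END;
/

DECLARE
   l_result   NUMBER;
   l_arg      NUMBER := 21;
BEGIN
   FOR indx IN 1 .. 10
   LOOP
      l_result := plch_counter.twice (21);      -- literal: a compile-time constant
   END LOOP;

   DBMS_OUTPUT.put_line ('Literal argument: ' || plch_counter.g_calls);

   plch_counter.g_calls := 0;

   FOR indx IN 1 .. 10
   LOOP
      l_result := plch_counter.twice (l_arg);   -- variable: not a compile-time constant
   END LOOP;

   DBMS_OUTPUT.put_line ('Variable argument: ' || plch_counter.g_calls);
END;
/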

So at this point, it looks to _Nikotin and me that the applicability of this new optimization will be fairly narrow. Perhaps over time, the PL/SQL team will extend the circumstances under which the optimization will occur.

You will find below some details reinforcing the above points. In addition, I have attached a file to the list of resources for this quiz to demonstrate the circumstances under which the optimization occurs.

Cheers, Steven

DBMS_PROFILER Demonstrates No Optimization Outside of Loop (from _Nikotin)

def proc_name = PLCH_TEST_LOOPS
def func_name = PLCH_TEST_DETERM

create or replace function &func_name
(
  i varchar2
) return varchar2
  deterministic is
begin
  return i;
end;
/


create or replace procedure &proc_name is 
  res varchar2(1);
begin
  res := &func_name('a');
  res := &func_name('a');
  res := &func_name('a');
  res := &func_name('a');
  res := &func_name('a');
  res := &func_name('a');
  res := &func_name('a');
  res := &func_name('a');
  res := &func_name('a');
  res := &func_name('a');
end;
/

var res number
exec :res := dbms_profiler.start_profiler('&proc_name');

exec plch_test_loops;
exec :res := dbms_profiler.stop_profiler;


col text format a60
set pagesize 100

select s.text, d.total_occur
from plsql_profiler_data d
   , user_source s
   , plsql_profiler_units u
where d.runid =
      (
        select max(runid) runid
        from plsql_profiler_runs
        where run_comment = '&proc_name'
      )
  and d.runid = u.runid
  and u.unit_name in ('&proc_name', '&func_name')
  and d.unit_number = u.unit_number
  and s.name = u.unit_name
  and s.line = d.line#
order by u.unit_number, d.line#;

And the results:

TEXT                                        TOTAL_OCCUR
------------------------------------------- -----------
procedure PLCH_TEST_LOOPS is                 0
  res := PLCH_TEST_DETERM('a');              1
  res := PLCH_TEST_DETERM('a');              1
  res := PLCH_TEST_DETERM('a');              1
  res := PLCH_TEST_DETERM('a');              1
  res := PLCH_TEST_DETERM('a');              1
  res := PLCH_TEST_DETERM('a');              1
  res := PLCH_TEST_DETERM('a');              1
  res := PLCH_TEST_DETERM('a');              1
  res := PLCH_TEST_DETERM('a');              1
  res := PLCH_TEST_DETERM('a');              1
end;                                         1
function PLCH_TEST_DETERM                    0
  return                                    10
end;                                        10

For comparison, here is the same test with a loop:

def proc_name = PLCH_TEST_LOOPS
def func_name = PLCH_TEST_DETERM

create or replace function &func_name
(
  i varchar2

) return varchar2
  deterministic is
begin
  return i;
end;
/

create or replace procedure &proc_name is 
  res varchar2(1);
begin
  for i in 1 .. 10 loop
    res := &func_name('a');
  end loop;
end;
/

var res number
exec :res := dbms_profiler.start_profiler('&proc_name');

exec plch_test_loops;
exec :res := dbms_profiler.stop_profiler;

col text format a60
set pagesize 140

select s.text, d.total_occur
from plsql_profiler_data d
   , user_source s
   , plsql_profiler_units u
where d.runid =
      (
        select max(runid) runid
        from plsql_profiler_runs
        where run_comment = '&proc_name'
      )
  and d.runid = u.runid
  and u.unit_name in ('&proc_name', '&func_name')
  and d.unit_number = u.unit_number
  and s.name = u.unit_name
  and s.line = d.line#
order by u.unit_number, d.line#;

And the result is:

TEXT                                    TOTAL_OCCUR
--------------------------------------- -----------

procedure PLCH_TEST_LOOPS is              0
  for i in 1 .. 10 loop                  11
    res := PLCH_TEST_DETERM('a');        10
end;                                      1
function PLCH_TEST_DETERM                 0
  r                                       1
end;                                      1

Impact of Non-Deterministic Code Inside Loop

The following code uses inserts into a table (via an autonomous transaction) to demonstrate how often the body of the deterministic function is executed. You might think that this logging means the function is not deterministic and will therefore turn off the optimization, but that is not the case. The PL/SQL compiler does not validate that a function is truly deterministic; it does not decide whether or not to avoid execution of the function based on the code inside the function. All of that is decided at the time the code is compiled, based solely on the way the function is called - and, it turns out, on the other code executed in the loop.
CREATE TABLE plch_log (created_on DATE)
/

CREATE OR REPLACE PACKAGE plch_getdata
IS
   PROCEDURE log_count (title_in IN VARCHAR2);

   FUNCTION vc (vc VARCHAR2)
      RETURN VARCHAR2
      DETERMINISTIC;
END;
/

CREATE OR REPLACE PACKAGE BODY plch_getdata
IS
   PROCEDURE log_count (title_in IN VARCHAR2)
   IS
      l_count   PLS_INTEGER;
   BEGIN
      SELECT COUNT (*) INTO l_count FROM plch_log;
      DBMS_OUTPUT.put_line (title_in || '=' || l_count);
   END;

   PROCEDURE log_call
   IS
      PRAGMA AUTONOMOUS_TRANSACTION;
   BEGIN
      INSERT INTO plch_log VALUES (SYSDATE);
      COMMIT;
   END;

   FUNCTION vc (vc VARCHAR2)
      RETURN VARCHAR2 DETERMINISTIC
   IS
   BEGIN
      log_call ();
      RETURN vc;
   END;
END;
/

DECLARE
   vc   VARCHAR2 (100);
BEGIN
   FOR i IN 1 .. 10
   LOOP
      vc := plch_getdata.vc ('abc');
   END LOOP;

   plch_getdata.log_count ('VC 10 iterations in loop');

   FOR i IN 1 .. 10
   LOOP
      vc := plch_getdata.vc ('abc');
      vc := 'abc' || TO_CHAR (SYSDATE);
   END LOOP;

   plch_getdata.log_count (
      'VC 10 iterations with non-deterministic code in loop');
END;
/

VC 10 iterations in loop=1
VC 10 iterations with non-deterministic code in loop=11

DROP TABLE plch_log
/

DROP PACKAGE plch_getdata
/
Effect on Inlining Optimization (level 3) - from _Nikotin

Because Oracle can inline local subprograms when plsql_optimize_level = 3, the "non-deterministic" part of the loop can become deterministic:


def proc_name = PLCH_TEST_LOOPS
def func_name = PLCH_TEST_DETERM

create or replace function &func_name
(
  i varchar2
) return varchar2
  deterministic is
begin
  return i;
end;
/

alter session set plsql_optimize_level = 2;

create or replace procedure &proc_name is
  res varchar2(1);
  procedure inline_proc (i number) is
  begin
    null;
  end;
begin
  for i in 1 .. 10 loop
    res := &func_name('A');
    inline_proc(1);
  end loop;
end;
/

var res number
exec :res := dbms_profiler.start_profiler('&proc_name');
exec plch_test_loops;
exec :res := dbms_profiler.stop_profiler;

col text format a60
set pagesize 140

select s.text, d.total_occur
from plsql_profiler_data d
   , user_source s
   , plsql_profiler_units u
where d.runid =
      (
        select max(runid) runid
        from plsql_profiler_runs
        where run_comment = '&proc_name'
      )
  and d.runid = u.runid
  and d.unit_number = u.unit_number
  and s.name = u.unit_name
  and s.line = d.line#
order by u.unit_number, d.line#;

And the result:

TEXT                                             TOTAL_OCCUR
------------------------------------------------ -----------
procedure PLCH_TEST_LOOPS is                        1
  procedure inline_proc (i number) is               0
    null;                                          10
  for i in 1 .. 10 loop                            11
    res := PLCH_TEST_DETERM('A');                  10
    inline_proc(1);                                10
end;                                                1
function PLCH_TEST_DETERM                           0
  return i;                                        10
end;                                               10
And if one instead uses:
alter session set plsql_optimize_level = 3;

then the result is:

TEXT                                             TOTAL_OCCUR
------------------------------------------------ -----------
procedure PLCH_TEST_LOOPS is                        0
  for i in 1 .. 10 loop                            11
    res := PLCH_TEST_DETERM('A');                  10
    inline_proc(1);                                10
end;                                                1
function PLCH_TEST_DETERM                           0
  return i;                                         1
end;                                                1
 
And there are no rows at all for inline_proc.

23 November 2011

Explanation for Slow Performance of VARCHAR2 Overloading?

Valentin Nikotin, a PL/SQL Challenge player, recently brought to my attention some unexpectedly slow performance when executing, from within a query, an overloaded function that accepts a VARCHAR2 argument, compared to one that accepts a NUMBER argument. I have been able to reproduce this (see below). I thought I'd see if any readers of this blog had an idea as to what might be the cause.

We both ran this code on an 11.2 instance.
CREATE OR REPLACE PACKAGE pkg_test
AS
   FUNCTION fv (i VARCHAR2)
      RETURN NUMBER;

   FUNCTION fn (i NUMBER)
      RETURN NUMBER;
END;
/

CREATE OR REPLACE PACKAGE BODY pkg_test
AS
   FUNCTION fv (i VARCHAR2)
      RETURN NUMBER
   IS
   BEGIN
      RETURN 1;
   END;

   FUNCTION fn (i NUMBER)
      RETURN NUMBER
   IS
   BEGIN
      RETURN 1;
   END;
END;
/

CREATE OR REPLACE FUNCTION fv (i VARCHAR2)
   RETURN NUMBER
IS
BEGIN
   RETURN 1;
END;
/

CREATE OR REPLACE FUNCTION fn (i NUMBER)
   RETURN NUMBER
IS
BEGIN
   RETURN 1;
END;
/

/* No difference in PL/SQL 

"VC2 overloading" completed in: 2.5 seconds
"Number overloading" completed in: 2.52 seconds
"Schema-level VC2" completed in: 2.48 seconds
"Schema-level Number" completed in: 2.49 seconds

*/

DECLARE
   l             NUMBER;
   last_timing   NUMBER := NULL;

   PROCEDURE start_timer
   IS
   BEGIN
      last_timing := DBMS_UTILITY.get_cpu_time;
   END;

   PROCEDURE show_elapsed_time (message_in IN VARCHAR2 := NULL)
   IS
   BEGIN
      DBMS_OUTPUT.put_line (
            '"'
         || message_in
         || '" completed in: '
         || (DBMS_UTILITY.get_cpu_time - last_timing) / 100
         || ' seconds');
   END;
BEGIN
   start_timer;

   FOR indx IN 1 .. 10000000
   LOOP
      l := pkg_test.fv (NULL);
   END LOOP;

   show_elapsed_time ('VC2 overloading');
   --
   start_timer;

   FOR indx IN 1 .. 10000000
   LOOP
      l := pkg_test.fn (NULL);
   END LOOP;

   show_elapsed_time ('Number overloading');
   --
   start_timer;

   FOR indx IN 1 .. 10000000
   LOOP
      l := fv (NULL);
   END LOOP;

   show_elapsed_time ('Schema-level VC2');
   --
   start_timer;

   FOR indx IN 1 .. 10000000
   LOOP
      l := fn (NULL);
   END LOOP;

   show_elapsed_time ('Schema-level Number');
END;
/

/* Noticeable difference in SQL 

MAX(PKG_TEST.FV(NULL))
----------------------
                     1
1 row selected.
Elapsed: 00:00:07.93

MAX(FV(NULL))
-------------
            1
1 row selected.
Elapsed: 00:00:05.03

MAX(PKG_TEST.FN(NULL))
----------------------
                     1
1 row selected.
Elapsed: 00:00:04.45

MAX(FN(NULL))
-------------
            1
1 row selected.
Elapsed: 00:00:05.45

*/

SET TIMING ON

CREATE TABLE driver_table
AS
       SELECT LEVEL id
         FROM DUAL
   CONNECT BY LEVEL < 1000000
/

SELECT MAX (pkg_test.fv (NULL)) FROM driver_table
/

SELECT MAX (fv (NULL)) FROM driver_table
/

SELECT MAX (pkg_test.fn (NULL)) FROM driver_table
/

SELECT MAX (fn (NULL)) FROM driver_table
/

DROP TABLE driver_table
/

DROP PACKAGE pkg_test
/

DROP FUNCTION fn
/

DROP FUNCTION fv
/

10 November 2011

How Reliable are PL/Scope Results? (9403)

The 9 November quiz on PL/Scope asked players to draw conclusions about a package body from the data in the user_identifiers view.

One of our most dedicated players, Iudith Mentzel, spent some time testing out the results one gets from PL/Scope (and queries against user_identifiers) for various uses of labels and GOTOs. I publish her comments below for your consideration.
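
A quick note if you want to reproduce her queries: PL/Scope only gathers identifier data for units compiled while it is enabled, so you will need something along these lines before creating or recompiling the package body:
ALTER SESSION SET plscope_settings = 'IDENTIFIERS:ALL'
/

-- Recompile the unit after changing the setting; only then will
-- USER_IDENTIFIERS contain rows for it.
ALTER PACKAGE plch_pkg COMPILE BODY
/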

From Iudith Mentzel

I don't want to object either to the results, which are somewhat "colorful", or to the PL/Scope feature itself, but it looks like the "safe usage" of the feature is at least a little bit narrower than one may be (too optimistically) tempted to believe.

While the reasoning behind the answer presented for each choice is completely logical and probably follows the reasoning that the players used, drawing categorical conclusions about the code contents by ONLY looking at the data gathered by the PL/Scope feature can sometimes be a little bit dangerous ...

For example:

The LABEL and GOTO issue seems very clear at first glance; however, here is a small example of what can happen if we "tweak" the code a little bit:
-- amendment to add a GOTO ...
CREATE OR REPLACE PACKAGE BODY plch_pkg
IS
   PROCEDURE do_stuff
   IS
      l_items   DBMS_SQL.number_table;
   BEGIN
      FORALL indx IN 1 .. l_items.COUNT
         UPDATE plch_stuff
            SET amount = l_items (indx);

      GOTO all_done;
      NULL;
      <<all_done>>

      DBMS_OUTPUT.put_line (SYSDATE);      

   END do_stuff;
END plch_pkg;
/

-- there is NO LABEL at all in the result set, though we have a label and a GOTO !!!
SELECT type, usage, name
  FROM user_identifiers 
 WHERE object_name = 'PLCH_PKG'
 ORDER BY 1, 2
/

TYPE               USAGE       NAME
------------------ ----------- ------------------------------
ITERATOR           DECLARATION INDX
ITERATOR           REFERENCE   INDX
PACKAGE            DECLARATION PLCH_PKG
PACKAGE            DEFINITION  PLCH_PKG
PROCEDURE          DECLARATION DO_STUFF
PROCEDURE          DEFINITION  DO_STUFF
SYNONYM            CALL        PLITBLM
SYNONYM            REFERENCE   DBMS_SQL
VARIABLE           DECLARATION L_ITEMS
VARIABLE           REFERENCE   L_ITEMS
VARIABLE           REFERENCE   L_ITEMS

11 rows selected.
This is probably because the optimizing PL/SQL compiler has removed the "non-effective" stuff... but if we replace the NULL with some other code:
CREATE OR REPLACE PACKAGE BODY plch_pkg
IS
   PROCEDURE do_stuff
   IS
      l_items   DBMS_SQL.number_table;
   BEGIN
      FORALL indx IN 1 .. l_items.COUNT
         UPDATE plch_stuff
            SET amount = l_items (indx);

      GOTO all_done;
      DBMS_OUTPUT.put_line('Some stuff');
      <<all_done>>

      DBMS_OUTPUT.put_line (SYSDATE);      

   END do_stuff;
END plch_pkg;
/
then the LABEL is back, though the code still performs exactly the same as before!
SELECT type, usage, name
  FROM user_identifiers 
 WHERE object_name = 'PLCH_PKG'
 ORDER BY 1, 2
/

TYPE            USAGE       NAME
--------------- ----------- --------------------
FUNCTION        CALL        SYSDATE
ITERATOR        DECLARATION INDX
ITERATOR        REFERENCE   INDX
LABEL           DECLARATION ALL_DONE
LABEL           REFERENCE   ALL_DONE
PACKAGE         DECLARATION PLCH_PKG
PACKAGE         DEFINITION  PLCH_PKG
PROCEDURE       DECLARATION DO_STUFF
PROCEDURE       DEFINITION  DO_STUFF
SYNONYM         CALL        PLITBLM
SYNONYM         REFERENCE   DBMS_OUTPUT
SYNONYM         REFERENCE   DBMS_SQL
SYNONYM         REFERENCE   DBMS_OUTPUT
VARIABLE        DECLARATION L_ITEMS
VARIABLE        REFERENCE   L_ITEMS
VARIABLE        REFERENCE   L_ITEMS

16 rows selected.
Now I add one more label....
CREATE OR REPLACE PACKAGE BODY plch_pkg
IS
   PROCEDURE do_stuff
   IS
      l_items   DBMS_SQL.number_table;
   BEGIN
      FORALL indx IN 1 .. l_items.COUNT
         UPDATE plch_stuff
            SET amount = l_items (indx);

      GOTO all_done;
      NULL;
      <<all_done>>

      DBMS_OUTPUT.put_line (SYSDATE);      

      <<another_label>>
      DBMS_OUTPUT.put_line (SYSDATE);
   END do_stuff;
END plch_pkg;
/
And still no LABEL is seen in the output, though we have two labels and one GOTO ...
SELECT type, usage, name
  FROM user_identifiers 
 WHERE object_name = 'PLCH_PKG'
 ORDER BY 1, 2
/

TYPE            USAGE       NAME
--------------- ----------- --------------------
ITERATOR        DECLARATION INDX
ITERATOR        REFERENCE   INDX
PACKAGE         DECLARATION PLCH_PKG
PACKAGE         DEFINITION  PLCH_PKG
PROCEDURE       DECLARATION DO_STUFF
PROCEDURE       DEFINITION  DO_STUFF
SYNONYM         CALL        PLITBLM
SYNONYM         REFERENCE   DBMS_SQL
VARIABLE        DECLARATION L_ITEMS
VARIABLE        REFERENCE   L_ITEMS
VARIABLE        REFERENCE   L_ITEMS

11 rows selected.
However, it all depends on where the label is located, for example:
-- adding still another label, but at the beginning 
CREATE OR REPLACE PACKAGE BODY plch_pkg
IS
   PROCEDURE do_stuff
   IS
      l_items   DBMS_SQL.number_table;
   BEGIN
      <<still_another_label>>
      FORALL indx IN 1 .. l_items.COUNT
         UPDATE plch_stuff
            SET amount = l_items (indx);

      GOTO all_done;
      NULL;
      <<all_done>>

      DBMS_OUTPUT.put_line (SYSDATE);      

      <<another_label>>
      DBMS_OUTPUT.put_line (SYSDATE);
   END do_stuff;
END plch_pkg;
/

-- now only the first label appears ...
SELECT type, usage, name
  FROM user_identifiers 
 WHERE object_name = 'PLCH_PKG'
 ORDER BY 1, 2
/

TYPE               USAGE       NAME
------------------ ----------- ------------------------------
ITERATOR           DECLARATION INDX
ITERATOR           REFERENCE   INDX
LABEL              DECLARATION STILL_ANOTHER_LABEL
PACKAGE            DECLARATION PLCH_PKG
PACKAGE            DEFINITION  PLCH_PKG
PROCEDURE          DECLARATION DO_STUFF
PROCEDURE          DEFINITION  DO_STUFF
SYNONYM            CALL        PLITBLM
SYNONYM            REFERENCE   DBMS_SQL
VARIABLE           DECLARATION L_ITEMS
VARIABLE           REFERENCE   L_ITEMS
VARIABLE           REFERENCE   L_ITEMS

12 rows selected.
Now we have added yet another label, but still only one LABEL declaration appears, though another_label and still_another_label are both equally "ineffective" ...

Below I have two labels, one the target of a GOTO, but still no label appears in the user_identifiers view:
CREATE OR REPLACE PACKAGE BODY plch_pkg
IS
   PROCEDURE do_stuff
   IS
      l_items   DBMS_SQL.number_table;
   BEGIN
      FORALL indx IN 1 .. l_items.COUNT
         UPDATE plch_stuff
            SET amount = l_items (indx);

      GOTO another_label;
      NULL;
      <<all_done>>

      DBMS_OUTPUT.put_line (SYSDATE);      

      <<another_label>>
      DBMS_OUTPUT.put_line (SYSDATE);
   END do_stuff;
END plch_pkg;
/

-- still no LABEL, even though here the label actually affects the program logic!
SELECT type, usage, name
  FROM user_identifiers 
 WHERE object_name = 'PLCH_PKG'
 ORDER BY 1, 2
/

TYPE            USAGE       NAME
--------------- ----------- --------------------
ITERATOR        DECLARATION INDX
ITERATOR        REFERENCE   INDX
PACKAGE         DECLARATION PLCH_PKG
PACKAGE         DEFINITION  PLCH_PKG
PROCEDURE       DECLARATION DO_STUFF
PROCEDURE       DEFINITION  DO_STUFF
SYNONYM         CALL        PLITBLM
SYNONYM         REFERENCE   DBMS_SQL
VARIABLE        DECLARATION L_ITEMS
VARIABLE        REFERENCE   L_ITEMS
VARIABLE        REFERENCE   L_ITEMS

11 rows selected.
The label was effective, but it is still NOT shown in user_identifiers. The CALL to SYSDATE is also NOT shown !!! The reference to the DBMS_OUTPUT synonym is also NOT shown !!!

I just wanted to emphasize how risky it is to draw conclusions about source code based ONLY on the results in USER_IDENTIFIERS ...

These results seem to be generated AFTER the compiler optimizes the source code, so, at least in some respects, they may be misleading ...
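
One way to test that hypothesis, without asserting that it is the explanation, is to recompile the same body with the optimizer effectively disabled and compare the USER_IDENTIFIERS output; a minimal sketch (whether the missing LABEL rows reappear is left as an experiment):
-- Recompile with minimal optimization and PL/Scope still enabled,
-- then re-run the identifier query and compare with the earlier output.
ALTER SESSION SET plsql_optimize_level = 0
/
ALTER SESSION SET plscope_settings = 'IDENTIFIERS:ALL'
/
ALTER PACKAGE plch_pkg COMPILE BODY
/

SELECT type, usage, name
  FROM user_identifiers
 WHERE object_name = 'PLCH_PKG'
 ORDER BY 1, 2
/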

Regarding the choice that asked about FORALL: though the reasoning behind it seems correct, equally to you and to us, on reflection it, too, can be argued, because using FORALL requires a collection, and that collection would probably introduce additional data into the USER_IDENTIFIERS result set, whether it is a DBMS_SQL-based collection, one based on a locally defined TYPE, or even one based on a type referenced from some other package (a variant using a locally defined type is sketched below) ...
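
Here is a sketch of that variant, using a locally declared collection type instead of DBMS_SQL.number_table (the type name numbers_aat is illustrative only); compiling this version should add identifier rows for the local type declaration and its references:
CREATE OR REPLACE PACKAGE BODY plch_pkg
IS
   PROCEDURE do_stuff
   IS
      -- locally declared collection type instead of DBMS_SQL.number_table
      TYPE numbers_aat IS TABLE OF NUMBER
         INDEX BY PLS_INTEGER;

      l_items   numbers_aat;
   BEGIN
      FORALL indx IN 1 .. l_items.COUNT
         UPDATE plch_stuff
            SET amount = l_items (indx);

      <<all_done>>
      DBMS_OUTPUT.put_line (SYSDATE);
   END do_stuff;
END plch_pkg;
/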

The output for the sample package shown in the Verification code looks like this:
CREATE OR REPLACE PACKAGE BODY plch_pkg
IS
   PROCEDURE do_stuff
   IS
      l_items   DBMS_SQL.number_table;
   BEGIN
      FORALL indx IN 1 .. l_items.COUNT
         UPDATE plch_stuff
            SET amount = l_items (indx);

      <<all_done>>
      DBMS_OUTPUT.put_line (SYSDATE);
   END do_stuff;
END plch_pkg;
/

-- we see some SYNONYMS in the output that were not there in the original quiz
SELECT type, usage, name
  FROM user_identifiers 
 WHERE object_name = 'PLCH_PKG'
 ORDER BY 1, 2
/

TYPE               USAGE       NAME
------------------ ----------- ------------------------------
FUNCTION           CALL        SYSDATE
ITERATOR           DECLARATION INDX
ITERATOR           REFERENCE   INDX
LABEL              DECLARATION ALL_DONE
PACKAGE            DECLARATION PLCH_PKG
PACKAGE            DEFINITION  PLCH_PKG
PROCEDURE          DECLARATION DO_STUFF
PROCEDURE          DEFINITION  DO_STUFF
SYNONYM            CALL        PLITBLM
SYNONYM            REFERENCE   DBMS_OUTPUT
SYNONYM            REFERENCE   DBMS_SQL
VARIABLE           DECLARATION L_ITEMS
VARIABLE           REFERENCE   L_ITEMS
VARIABLE           REFERENCE   L_ITEMS

14 rows selected.
One last remark is about deciding whether a variable is defined at the package level or inside a subprogram: I think this can be done (perhaps preferably) by checking whether the "parent" (the context owner) of the variable declaration is the PACKAGE rather than a subprogram, for example:
--check variable context ownership
CREATE OR REPLACE PACKAGE BODY plch_pkg
IS
   g_variable   NUMBER ;

   PROCEDURE do_stuff
   IS
      l_items   DBMS_SQL.number_table;
   BEGIN
      FORALL indx IN 1 .. l_items.COUNT
         UPDATE plch_stuff
            SET amount = l_items (indx);

      <<all_done>>
      DBMS_OUTPUT.put_line (SYSDATE);
   END do_stuff;
END plch_pkg;
/

COLUMN TYPE FORMAT A15
COLUMN NAME FORMAT A20

SELECT var.type, var.usage, var.name, parent.type, parent.usage, parent.name
  FROM user_identifiers  var,
       user_identifiers  parent
 WHERE var.object_name = 'PLCH_PKG'
   AND var.type        = 'VARIABLE'
   AND var.usage       = 'DECLARATION'
   AND parent.object_name = var.object_name
   AND parent.object_type = var.object_type
   AND parent.usage_id    = var.usage_context_id
 ORDER BY 1, 2
/

TYPE            USAGE       NAME                 TYPE            USAGE       NAME
--------------- ----------- -------------------- --------------- ----------- ----------------
VARIABLE        DECLARATION L_ITEMS              PROCEDURE       DEFINITION  DO_STUFF
VARIABLE        DECLARATION G_VARIABLE           PACKAGE         DEFINITION  PLCH_PKG

2 rows selected.
In summary, looking only at the data in USER_IDENTIFIERS, we cannot derive 100% precise (YES/NO) conclusions regarding ALL aspects of the code ... Just a few thoughts regarding a tough quiz ... and, as I see it, not just tough for me ... For some reason, it reminds me of the one related to "implicit conversions" in the previous quarter ...

Best Regards,
Iudith

09 November 2011

Can Choice Be Correct If Error Raised? (9401)

The 7 November quiz tested your knowledge of the fact that this statement:
DROP PACKAGE package_name
will drop both the package specification and the package body.

Several players complained that we marked this choice as correct:
DROP PACKAGE plch_pkg
/

DROP PACKAGE BODY plch_pkg
/
As one person wrote: "If you drop the package and then try to drop the package body, you get the error ORA-04043: object PLCH_PKG does not exist. Answer 8571 is NOT correct."

One person even went so far as to say: "I really don't like to have the solution DROP PACKAGE plch_pkg / DROP PACKAGE BODY plch_pkg / scored as correct. Yes, the correct outcome is shown, BUT it is definitely bad style. And knowing that an error will pop up made me not choose this answer, so I'm blamed again for good style. I know that you will not rescore the answer. But this kind of scoring makes me think of withdrawing from the PL/SQL Challenge. So please harden your choices to support good style as well."

I must admit to being a little taken aback by these responses. Let's first address the "correctness" issue and then move on to what we should and should not include as correct choices in future quizzes.

This choice was marked correct because we asked this:

Which of the choices will result in the following two lines being displayed? [after calling a stored procedure that shows the status of the specified database object]
PACKAGE PLCH_PKG: UNDEFINED
PACKAGE BODY PLCH_PKG: UNDEFINED
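The verification procedure itself is not shown here, but a hypothetical sketch of such a status reporter (the procedure name and parameters are illustrative only, not the quiz's actual code) makes clear what "UNDEFINED" means in the lines above:
-- Hypothetical sketch only: report the object's status from USER_OBJECTS,
-- or UNDEFINED if the object no longer exists in the schema.
CREATE OR REPLACE PROCEDURE plch_show_status (
   type_in   IN VARCHAR2,
   name_in   IN VARCHAR2)
IS
   l_status   user_objects.status%TYPE;
BEGIN
   SELECT status
     INTO l_status
     FROM user_objects
    WHERE object_type = type_in
      AND object_name = name_in;

   DBMS_OUTPUT.put_line (type_in || ' ' || name_in || ': ' || l_status);
EXCEPTION
   WHEN NO_DATA_FOUND
   THEN
      DBMS_OUTPUT.put_line (type_in || ' ' || name_in || ': UNDEFINED');
END plch_show_status;
/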
It is, without doubt, true that if you run the two DROP statements above, (a) those two lines of text will be displayed, because both the specification and the body are gone (which is why I don't see any grounds for rescoring), and (b) the second statement will cause Oracle to raise this error:
ORA-04043: object string does not exist
    Cause: An object name was specified that was not recognized by the system.
Now, as noted above, the text will be displayed as required, so I do not think that any changes to the scoring should be performed.
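To spell out the behavior, here is a minimal sketch (assuming plch_pkg exists with both a specification and a body, as in the quiz setup):
DROP PACKAGE plch_pkg
/

-- Both the specification and the body are now gone; this query returns no rows.
SELECT object_type, status
  FROM user_objects
 WHERE object_name = 'PLCH_PKG'
/

-- ...so this statement has nothing left to drop and raises
-- ORA-04043: object PLCH_PKG does not exist
DROP PACKAGE BODY plch_pkg
/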

But should I not include such choices in the quiz? And if I do, should I always mark them as incorrect because they are "bad code"?

Certainly, it needs to be very clear that the second statement is unnecessary and will throw the error. More generally, we should provide strong recommendations against using a certain approach if it is problematic in some way.

But never include choices like this marked as "correct"? I just don't see why we would exclude such things. There is usually some sort of lesson to be learned, some way to help you reflect on the way you do your work, and the kinds of traps you can fall into.

In this case, for example, I could see a developer who wasn't clear on the fact that DROP PACKAGE also drops the body putting such statements in their clean-up scripts. They might never have noticed that errors were being raised, because the end result met their requirements: packages all gone.

So they would, in this case, have marked that choice as correct, but then learned from reading the explanation that the second statement is not necessary and should therefore be removed. Lesson learned.

What are your thoughts?

Cheers,
Grandpa Steven

03 November 2011

Explorations into Pipelined Functions (9398)

The 2 November quiz on pipelined table functions and autonomous transactions prompted the following submission from Chad Lee of Telligen, addressing in particular the second and third points in the explanation:

(1) A pipelined table function returns a row to its invoker (a query) with the PIPE ROW statement and then continues to process rows.

(2) Pipelined functions can help improve response time because the entire collection need not be constructed and returned to the server before the query can work with and return a single result row.

(3) In addition, the function consumes less Process Global Area memory, because the object cache need not materialize the entire collection.
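
As background for point (1), here is a minimal, self-contained illustration of PIPE ROW, independent of the quiz code (the type and function names are illustrative only):
-- Each PIPE ROW sends a row to the invoking query; the function then
-- continues with the next iteration of its loop.
CREATE OR REPLACE TYPE plch_numbers_nt AS TABLE OF NUMBER;
/

CREATE OR REPLACE FUNCTION plch_first_n (n_in IN PLS_INTEGER)
   RETURN plch_numbers_nt PIPELINED
IS
BEGIN
   FOR indx IN 1 .. n_in
   LOOP
      PIPE ROW (indx);
   END LOOP;

   RETURN;
END plch_first_n;
/

SELECT COLUMN_VALUE FROM TABLE (plch_first_n (3))
/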

I post his comments below for all to consider and discuss:

If one modifies the plch_pipeline package body by adding "dbms_lock.sleep(2);" just before the update statement, to introduce a 2-second delay between rows returned from the pipelined function, and creates the following get_sysdate function to return the current SYSDATE accurate to the second, one can see that the pipelined function does indeed return rows to the calling query at 2-second intervals, as seen in the output. This result does not directly counter (2) or (3).

However, the results of the query are returned all at once, rather than one row every two seconds (think all_rows vs first_rows), which seems to indicate there is some buffering/materializing of the rows due to the autonomous transaction / pipelined function before control is passed fully to the calling SQL statement. If the calling SQL statement was completely free to work with the individual rows returned from the pipelined autonomous transaction (the different get_sysdate value seems to indicate partial control at the calling SQL), wouldn't the rows have been returned to the SQL Plus prompt in 2 second intervals similar to the first_rows hint? This behavior would seem to counter (2) and (3).

If I put the query in a loop in a PL/SQL block, I can see that the loop does not execute until all rows from the pipelined function have been returned. Thus, the rows returned by the pipelined function are being buffered somewhere. Would this not be in the PGA? In effect, isn't the entire collection being materialized before control is returned to the calling program? Does this counter (2) and (3), as the loop does not work with the first row when it is returned, but instead waits for all rows to be materialized before starting execution of the loop?

This behavior reinforces my understanding of autonomous transactions: although it operates outside of the calling transaction, the autonomous transaction must complete fully before control is passed back to the calling transaction. Thus, any data returned by a pipelined function used within an autonomous transaction must in fact be materialized fully before control is passed back to the calling transaction. Whether this occurs one row at a time, with a back-and-forth interaction between the calling transaction and the autonomous transaction, or once all rows are computed by the autonomous transaction, a continually growing memory structure would be created. The manner in which the rows are returned would also seem to impact performance, with the back-and-forth action you describe seemingly slower than returning all at once.

A more in-depth example could be created to determine where the pipelined rows are being materialized before the calling transaction continues execution.
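The plch_pipeline specification is not included in the submission; based on the body that follows, it presumably looks something like this (plch_parts is the table from the original quiz setup):
CREATE OR REPLACE PACKAGE plch_pipeline
IS
   -- collection type returned by the pipelined function
   TYPE numbers_t IS TABLE OF NUMBER;

   -- weak ref cursor type for the incoming dataset
   TYPE refcur_t IS REF CURSOR;

   FUNCTION double_values (dataset refcur_t)
      RETURN numbers_t PIPELINED;
END plch_pipeline;
/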
CREATE OR REPLACE PACKAGE BODY plch_pipeline
IS
   FUNCTION double_values (dataset refcur_t)
      RETURN numbers_t PIPELINED
   IS
      PRAGMA AUTONOMOUS_TRANSACTION;
      l_number   NUMBER;
   BEGIN
      LOOP
         FETCH dataset INTO l_number;
         EXIT WHEN dataset%NOTFOUND;

         dbms_lock.sleep(2);      -- add sleep of 2 seconds

         UPDATE plch_parts SET partnum = partnum;
         COMMIT;
         
         PIPE ROW (l_number * 2);
      END LOOP;
      CLOSE dataset;
      RETURN;
   END;
END plch_pipeline;
/

-- generate the current sysdate to the second
create or replace function get_sysdate
return varchar2
is
  ls_date       varchar2(20);
begin
  select to_char(sysdate,'mm/dd/yyyy hh24:mi:ss') 
    into ls_date from dual;
  return ls_date;
end;
/

col get_sysdate format a20
SELECT a.column_value, rownum, get_sysdate
  FROM TABLE (plch_pipeline.double_values (
                CURSOR (SELECT line
                          FROM user_source
                         WHERE name = 'PLCH_PIPELINE'
                           AND type = 'PACKAGE'
                           AND line <= 3
                         ORDER BY line))) a
/
This verifies that rows are being returned by the autonomous transaction at 2-second intervals. However, the results are all returned to the SQL*Plus prompt at the same time, not as each row receives its get_sysdate value.
COLUMN_VALUE     ROWNUM GET_SYSDATE
------------ ---------- --------------------
           2          1 11/03/2011 10:33:10
           4          2 11/03/2011 10:33:12
           6          3 11/03/2011 10:33:14

begin
  for rec in (SELECT a.column_value, rownum, get_sysdate date_val
    FROM TABLE (plch_pipeline.double_values (
                CURSOR (SELECT line
                          FROM user_source
                         WHERE name = 'PLCH_PIPELINE'
                           AND type = 'PACKAGE'
                           AND line <= 3
                         ORDER BY line))) a) loop
  
    dbms_lock.sleep(1);
    dbms_output.put_line(
         rec.column_value || ', ' || rec.rownum || ', ' || 
         rec.date_val || ', ' || 
         to_char(sysdate,'mm/dd/yyyy hh24:mi:ss'));
  end loop;
end;
/
The timestamp generated and printed within the loop is later than the timestamp generated from the pipelined function, showing that the data from the pipelined autonomous-transaction function is being materialized before the loop starts execution.
2, 1, 11/03/2011 11:21:31, 11/03/2011 11:21:36
4, 2, 11/03/2011 11:21:33, 11/03/2011 11:21:37
6, 3, 11/03/2011 11:21:35, 11/03/2011 11:21:38
In addition, if I modify the get_sysdate function to introduce a delay of 3 seconds, one can see from the output that there is a delay of 5 seconds between each row returned from the pipelined function (2 seconds within the pipelined function and 3 seconds from the get_sysdate call), indicating that the entire operation is running in a strictly serial manner: (1) get a value from the pipelined function, (2) call get_sysdate for that row in the calling query, and repeat these two steps for each of the three rows returned by the pipelined function.
create or replace function get_sysdate
return varchar2
is
  ls_date       varchar2(20);
begin
  dbms_lock.sleep(3);
  select to_char(sysdate,'mm/dd/yyyy hh24:mi:ss') 
    into ls_date from dual;
  return ls_date;
end;
/

SELECT a.column_value, rownum, get_sysdate
  FROM TABLE (plch_pipeline.double_values (
                CURSOR (SELECT line
                          FROM user_source
                         WHERE name = 'PLCH_PIPELINE'
                           AND type = 'PACKAGE'
                           AND line <= 3
                         ORDER BY line))) a
/


COLUMN_VALUE     ROWNUM GET_SYSDATE
------------ ---------- --------------------
           2          1 11/03/2011 13:47:54
           4          2 11/03/2011 13:47:59
           6          3 11/03/2011 13:48:04

The same serial behavior is seen in the PL/SQL block, with all rows from the pipelined function materialized serially before the cursor loop executes.
begin
  for rec in (SELECT a.column_value, rownum, get_sysdate date_val
    FROM TABLE (plch_pipeline.double_values (
                CURSOR (SELECT line
                          FROM user_source
                         WHERE name = 'PLCH_PIPELINE'
                           AND type = 'PACKAGE'
                           AND line <= 3
                         ORDER BY line))) a) loop
  
    dbms_lock.sleep(1);
    dbms_output.put_line(
        rec.column_value || ', ' || rec.rownum || ', ' ||
        rec.date_val || ', ' || 
        to_char(sysdate,'mm/dd/yyyy hh24:mi:ss'));
  end loop;
end;
/

2, 1, 11/03/2011 13:51:45, 11/03/2011 13:51:56
4, 2, 11/03/2011 13:51:50, 11/03/2011 13:51:57
6, 3, 11/03/2011 13:51:55, 11/03/2011 13:51:58