
Loading Documents into Oracle

A short demonstration of loading documents into an Oracle Database.

SQL> @create_docs_table
SQL> spool create_docs_table
SQL>
SQL> drop table my_documents purge;

Table dropped.

SQL> create table my_documents
2 (doc_number number generated always as identity,
3 doc_title varchar2(100),
4 doc_description varchar2(256),
5 os_file_name varchar2(256),
6 file_content blob,
7 file_type varchar2(16))
8 lob(file_content) store as securefile (compress high keep_duplicates)
9 /

Table created.

SQL>
SQL> spool off
SQL>


I have created a table with a SecureFile LOB to hold the documents.
Next, I use SQL*Loader (sqlldr) to load the documents, based on a "list file".


C:\load_docs_to_oracle>type List_of_Files_to_Upload.csv
Oracle 12c Availability,High Availability Options in Oracle 12c,12c_Availability.pdf,PDF
Oracle 12c Security,Security Mechanisms in Oracle 12c,12c_Security.pdf,PDF
Monitoring and Diagnostics without OEM,Presentation at Oracle SOS Conference,Monitoring and Diagnostics without OEM.pdf,PDF
Setting up Data Guard in 11gR2,My document on DG setup in 11gR2 ,Setting up DataGuard in 11gR2.docx,DOCX
Flashback Database and DG DR Testing,How to use Flasback Database in DataGuard for DR Testing,Using FLASHBACK DATABASE for Destructive DR Testing.docx,DOCX
Create Docs Table,Source Code for Script to create DOCS Table,create_docs_table.sql,SQL

C:\load_docs_to_oracle>


C:\load_docs_to_oracle>type my_documents.ctl
load data
infile 'List_of_Files_to_Upload.csv'
into table my_documents
fields terminated by ','
(
doc_title ,
doc_description ,
os_file_name,
file_content lobfile(os_file_name) terminated by EOF,
file_type
)

C:\load_docs_to_oracle>


C:\load_docs_to_oracle>sqlldr mydocdb/mydocdb@pdb1 control=my_documents.ctl

SQL*Loader: Release 19.0.0.0.0 - Production on Sun Nov 7 17:01:33 2021
Version 19.3.0.0.0

Copyright (c) 1982, 2019, Oracle and/or its affiliates. All rights reserved.

Path used: Conventional
Commit point reached - logical record count 6

Table MY_DOCUMENTS:
6 Rows successfully loaded.

Check the log file:
my_documents.log
for more information about the load.

C:\load_docs_to_oracle>


C:\load_docs_to_oracle>type my_documents.log

SQL*Loader: Release 19.0.0.0.0 - Production on Sun Nov 7 17:01:33 2021
Version 19.3.0.0.0

Copyright (c) 1982, 2019, Oracle and/or its affiliates. All rights reserved.

Control File: my_documents.ctl
Data File: List_of_Files_to_Upload.csv
Bad File: List_of_Files_to_Upload.bad
Discard File: none specified

(Allow all discards)

Number to load: ALL
Number to skip: 0
Errors allowed: 50
Bind array: 250 rows, maximum of 1048576 bytes
Continuation: none specified
Path used: Conventional

Table MY_DOCUMENTS, loaded from every logical record.
Insert option in effect for this table: INSERT

Column Name Position Len Term Encl Datatype
------------------------------ ---------- ----- ---- ---- ---------------------
DOC_TITLE FIRST * , CHARACTER
DOC_DESCRIPTION NEXT * , CHARACTER
OS_FILE_NAME NEXT * , CHARACTER
FILE_CONTENT DERIVED * EOF CHARACTER
Dynamic LOBFILE. Filename in field OS_FILE_NAME
FILE_TYPE NEXT * , CHARACTER


Table MY_DOCUMENTS:
6 Rows successfully loaded.
0 Rows not loaded due to data errors.
0 Rows not loaded because all WHEN clauses were failed.
0 Rows not loaded because all fields were null.


Space allocated for bind array: 258000 bytes(250 rows)
Read buffer bytes: 1048576

Total logical records skipped: 0
Total logical records read: 6
Total logical records rejected: 0
Total logical records discarded: 0

Run began on Sun Nov 07 17:01:33 2021
Run ended on Sun Nov 07 17:01:35 2021

Elapsed time was: 00:00:01.17
CPU time was: 00:00:00.06

C:\load_docs_to_oracle>



Note : In "List_of_Files_to_Upload.csv" the incoming file names are *not* enclosed in quotes and there is no space-character before the filename.

Once you have loaded the documents, SQL Developer gives you the option of either opening the file (e.g. PDF files opened by Acrobat Reader) or downloading the file that you query.  (In my installation, DOCX and SQL files have to be downloaded for viewing as the External Editor is not configured for these file-extensions -- the DOCX is identified as a "jar" file).


SQL Developer view




External Editor Preferences in SQL Developer








Indexing Documents in Oracle


 In the previous post, I had demonstrated how to load documents into an Oracle Database.  

(Note : I am referring to documents as in PDF files, PowerPoint presentations, Word documents and text files in the traditional sense, not "JSON documents").

Here I demonstrate how to use Oracle Text to create indexes and make such documents searchable.

I assume that Oracle Text (the "CTXSYS" schema) has been installed when the database was created.

First, I grant the appropriate privileges.



SQL> alter session set container=pdb1;

Session altered.

SQL> GRANT EXECUTE ON CTXSYS.CTX_CLS TO MYDOCDB;

Grant succeeded.

SQL> GRANT EXECUTE ON CTXSYS.CTX_DDL TO MYDOCDB;

Grant succeeded.

SQL> GRANT EXECUTE ON CTXSYS.CTX_DOC TO MYDOCDB;

Grant succeeded.

SQL> GRANT EXECUTE ON CTXSYS.CTX_OUTPUT TO MYDOCDB;

Grant succeeded.

SQL> GRANT EXECUTE ON CTXSYS.CTX_QUERY TO MYDOCDB;

Grant succeeded.

SQL> GRANT EXECUTE ON CTXSYS.CTX_REPORT TO MYDOCDB;

Grant succeeded.

SQL> GRANT EXECUTE ON CTXSYS.CTX_THES TO MYDOCDB;

Grant succeeded.

SQL> GRANT EXECUTE ON CTXSYS.CTX_ULEXER TO MYDOCDB;

Grant succeeded.

SQL>


Next, I create a CONTEXT Index on my documents table.

SQL> connect mydocdb/mydocdb@pdb1
Connected.
SQL> CREATE INDEX my_documents_ctx_idx ON my_documents(file_content)
2 INDEXTYPE IS CTXSYS.CONTEXT
3 /

Index created.

SQL>


My documents are now indexed and searchable. The syntax for a context index query is slightly different.

SQL> select doc_title, doc_description, file_type
2 from my_documents
3 where contains(file_content, 'DBA Role') > 0
4 /

DOC_TITLE DOC_DESCRIPTION FILE_TYPE
------------------------------------------ ------------------------------------------ ----------------
Oracle 12c Security Security Mechanisms in Oracle 12c PDF
Monitoring and Diagnostics without OEM Presentation at Oracle SOS Conference PDF

SQL>
SQL> select doc_title, doc_description, file_type
2 from my_documents
3 where contains(file_content, 'sqlplus hemant/hemant@orclpdb1') > 0
4 /

DOC_TITLE DOC_DESCRIPTION FILE_TYPE
------------------------------------------ ------------------------------------------ ----------------
Flashback Database and DG DR Testing How to use Flasback Database in DataGuard DOCX
for DR Testing


SQL>
SQL> select doc_title, doc_description, file_type
2 from my_documents
3 where contains(file_content, 'Flex ASM') > 0
4 /

DOC_TITLE DOC_DESCRIPTION FILE_TYPE
------------------------------------------ ------------------------------------------ ----------------
Oracle 12c Availability High Availability Options in Oracle 12c PDF

SQL>
SQL>
SQL> -- unfortunately, some queries on my plain-text SQL file don't work as expected
SQL>
SQL> select doc_title, doc_description, file_type
2 from my_documents
3 where contains(file_content, 'doc_number number generated always as identity,') > 0
4 /
select doc_title, doc_description, file_type
*
ERROR at line 1:
ORA-29902: error in executing ODCIIndexStart() routine
ORA-20000: Oracle Text error:
DRG-50901: text query parser syntax error on line 1, column 48


SQL> select doc_title, doc_description, file_type
2 from my_documents
3 where contains(file_content, 'doc_number number generated always as identity') > 0
4 /

no rows selected

SQL> select doc_title, doc_description, file_type
2 from my_documents
3 where contains(file_content, 'doc_number number generated always') > 0
4 /

no rows selected

SQL> select doc_title, doc_description, file_type
2 from my_documents
3 where contains(file_content, '(doc_number number generated always') > 0
4 /
select doc_title, doc_description, file_type
*
ERROR at line 1:
ORA-29902: error in executing ODCIIndexStart() routine
ORA-20000: Oracle Text error:
DRG-50901: text query parser syntax error on line 1, column 36


SQL> select doc_title, doc_description, file_type
2 from my_documents
3 where contains(file_content, 'drop table my_documents') > 0
4 /

no rows selected

SQL>


Also, apparently the inclusion of characters such as "(" and "," is tricky when creating a query.
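One workaround that should help (I have not shown it in the runs above) is to escape such special characters by enclosing the whole phrase in curly braces, so that the Oracle Text query parser treats it literally :

select doc_title, doc_description, file_type
from my_documents
where contains(file_content, '{doc_number number generated always as identity,}') > 0
/

Whether this returns rows still depends on how the lexer tokenised the SQL file, but it should at least avoid the DRG-50901 parser error.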

Note : The "CONTAINS ... >0" clause is to check the SCORE returned by the filter in the CONTAINS section and to see that it is greater than zero.

Here I have created a CTXSYS.CONTEXT index and it works well for the PDF and DOCX files.  This type of Index is *not* automatically updated when you load a new document.  To update the Index you must call the "CTX_DDL.SYNC_INDEX" procedure.

Using the same method as in my previous blog post, I load a new PPT file (called "Flashback.ppt").


C:\load_docs_to_oracle>sqlldr mydocdb/mydocdb@pdb1 control=my_documents.ctl

SQL*Loader: Release 19.0.0.0.0 - Production on Sun Nov 14 17:04:55 2021
Version 19.12.0.0.0

Copyright (c) 1982, 2021, Oracle and/or its affiliates. All rights reserved.

Path used: Conventional
Commit point reached - logical record count 1

Table MY_DOCUMENTS:
1 Row successfully loaded.

Check the log file:
my_documents.log
for more information about the load.

C:\load_docs_to_oracle>


And then sync the index and query again

SQL> select doc_number, doc_title, file_type
2 from my_documents
3 order by doc_number
4 /

DOC_NUMBER DOC_TITLE FILE_TYPE
---------- ------------------------------------------ ----------------
1 Oracle 12c Availability PDF
2 Oracle 12c Security PDF
3 Monitoring and Diagnostics without OEM PDF
4 Setting up Data Guard in 11gR2 DOCX
5 Flashback Database and DG DR Testing DOCX
6 Create Docs Table SQL
7 Flashback PPT

7 rows selected.

SQL> exec CTX_DDL.SYNC_INDEX('my_documents_ctx_idx','4M');

PL/SQL procedure successfully completed.

SQL>
SQL> select doc_title, doc_description, file_type
2 from my_documents
3 where contains(file_content, 'Thomas Kyte') > 0
4 /

DOC_TITLE DOC_DESCRIPTION FILE_TYPE
------------------------------------------ ------------------------------------------ ----------------
Flashback Tom Kyte's Presentation on Flashback PPT

SQL> select doc_title, doc_description, file_type
2 from my_documents
3 where contains(file_content, 'Rowids therefore will change') > 0
4 /

DOC_TITLE DOC_DESCRIPTION FILE_TYPE
------------------------------------------ ------------------------------------------ ----------------
Flashback Tom Kyte's Presentation on Flashback PPT

SQL>


The call to CTX_DDL.SYNC_INDEX contains a memory parameter '4M' specifying the memory to be used for synchronization.

Besides the CTXSYS.CONTEXT IndexType, Oracle also allows a "CTXSYS.CTXCAT" IndexType which can be used to index small documents or text fragments.  The CTXSYS.CTXCAT Index is updated synchronously when a new document is loaded into the table (i.e. a call to CTX_DDL.SYNC_INDEX is not required).
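For completeness, here is a minimal sketch of the CTXCAT variant on a short text column (a CTXCAT index is queried with CATSEARCH rather than CONTAINS; my choice of the DOC_DESCRIPTION column here is only illustrative) :

create index my_documents_cat_idx on my_documents(doc_description)
indextype is ctxsys.ctxcat
/

select doc_title, file_type
from my_documents
where catsearch(doc_description, 'Flashback', null) > 0
/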

Typically, with a document database with moderate to large documents, you would use the CTXSYS.CONTEXT IndexType and configure a scheduler job to call CTX_DDL.SYNC_INDEX periodically (e.g. every 30 minutes or every 4 hours) to update the Index for new documents that are loaded infrequently and do not need to be searched immediately.
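A minimal sketch of such a scheduler job (the job name and the 30-minute interval are just examples) :

begin
  dbms_scheduler.create_job(
    job_name        => 'SYNC_MY_DOCUMENTS_CTX_IDX',  -- illustrative name
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'begin ctx_ddl.sync_index(''MY_DOCUMENTS_CTX_IDX'', ''4M''); end;',
    repeat_interval => 'FREQ=MINUTELY; INTERVAL=30',
    enabled         => TRUE);
end;
/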

The IndexType automatically excludes certain common words, called "STOPWORD"s, that may be listed in "STOPLIST"s.  These are very commonly used words that do not need to be indexed (e.g. words like "the" or "this" or "that").  Such STOPLISTs can be customised as well, so your implementation of the indexes is customisable.
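For example, a sketch of creating a custom stoplist and using it when (re)creating the CONTEXT index -- the stoplist name and the extra stopword are only illustrative :

begin
  ctx_ddl.create_stoplist('my_stoplist', 'BASIC_STOPLIST');
  ctx_ddl.add_stopword('my_stoplist', 'oracle');  -- a word that appears in every document, so not worth indexing
end;
/

drop index my_documents_ctx_idx;

create index my_documents_ctx_idx on my_documents(file_content)
indextype is ctxsys.context
parameters ('stoplist my_stoplist')
/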


See the Oracle Text Application Developer's Guide and the Oracle Text Reference for more examples and a detailed study.


SQL Server Migration Assistant for Oracle

Refreshable Clone PDB -- 1 Manual Refresh


The facility of creating Clone PDBs that are Refreshable either "on-demand" or to a set schedule was introduced in 12.2.  The Refreshable PDB can be opened only as a Read-Only database.

This is somewhat similar to having a Standby Database that can be opened Read-Only for queries and refreshed on demand (without using the Active Data Guard License).

Here is a quick demo in 19.12 on Linux.

First, on the Source database ORCLCDB, I verify that I have the required configuration of Local Undo and ArchiveLog mode and then set up a Common account with the requisite privileges:



oracle19c>sqlplus sys/manager@ORCLCDB as sysdba

SQL*Plus: Release 19.0.0.0.0 - Production on Tue Nov 30 23:01:41 2021
Version 19.12.0.0.0

Copyright (c) 1982, 2021, Oracle. All rights reserved.


Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.12.0.0.0

SQL> col property_value format a16
SQL> select property_value, log_mode, force_logging
2 from database_properties, v$database
3 where property_name = 'LOCAL_UNDO_ENABLED'
4 /

PROPERTY_VALUE LOG_MODE FORCE_LOGGING
---------------- ------------ ---------------------------------------
TRUE ARCHIVELOG YES

SQL>
SQL> alter pluggable database orclpdb1 open -- verify or open the source pdb
2 /
alter pluggable database orclpdb1 open -- verify or open the source pdb
*
ERROR at line 1:
ORA-65019: pluggable database ORCLPDB1 already open


SQL>
SQL> create user c##for_pdb_clones identified by for_pdb_clones
2 default tablespace users temporary tablespace temp
3 container=ALL -- a common user for all PDBs
4 /

User created.

SQL> grant create session, create pluggable database, sysoper -- privileges that are required (not DBA Role)
2 to c##for_pdb_clones
3 container=ALL -- across all PDBs
4 /

Grant succeeded.

SQL>
SQL> select con_id, privilege, common
2 from cdb_sys_privs -- check across all PDBs
3 where grantee = 'C##FOR_PDB_CLONES'
4 order by 1,2
5 /

CON_ID PRIVILEGE COM
---------- ---------------------------------------- ---
1 CREATE PLUGGABLE DATABASE YES
1 CREATE SESSION YES
3 CREATE PLUGGABLE DATABASE YES
3 CREATE SESSION YES

SQL>


Next, on the Target Database CDB2 (where I want to create the Clone PDB), I set up a Database Link and then issue the CREATE PLUGGABLE DATABASE command



oracle19c>sqlplus sys/manager@CDB2 as sysdba

SQL*Plus: Release 19.0.0.0.0 - Production on Tue Nov 30 23:06:41 2021
Version 19.12.0.0.0

Copyright (c) 1982, 2021, Oracle. All rights reserved.


Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.12.0.0.0

SQL> create database link for_pdb_clones -- create in the Root CDB
2 connect to c##for_pdb_clones identified by for_pdb_clones using 'ORCLCDB'
3 /

Database link created.

SQL> create pluggable database ro_pdb -- the cloned pdb will be Read-Only
2 from orclpdb1@for_pdb_clones -- create from pluggable database orclpd1
3 refresh mode manual -- not specifying an auto-refresh interval
4 -- file_name_convert=('ORCLCDB','CDB2') -- not required as I am using OMF with db_create_file_dest
5 /

Pluggable database created.

SQL>
SQL> alter pluggable database ro_pdb open -- this should be opened Read-Only
2 /
alter pluggable database ro_pdb open -- this should be opened Read-Only
*
ERROR at line 1:
ORA-65341: cannot open pluggable database in read/write mode


SQL>
SQL> alter pluggable database ro_pdb open read only -- this will succeed
2 /

Pluggable database altered.

SQL>


So, now the Source PDB ORCLPDB1 has been cloned to CDB2 as a new Pluggable Database RO_PDB.

How do I refresh it ?  I will update the Source and then query the RO_PDB at CDB2


SQL> connect hemant/hemant@ORCLPDB1
ERROR:
ORA-28002: the password will expire within 7 days


Connected.
SQL> create table list_of_objects
2 as
3 select * from dba_objects
4 /

Table created.

SQL> select count(*) from list_of_objects
2 /

COUNT(*)
----------
73645

SQL>
SQL> connect hemant/hemant@ro_pdb
ERROR:
ORA-28032: Your password has expired and the database is set to read-only


Warning: You are no longer connected to ORACLE.
SQL>
--- cannot connect to the Read Only database with an expired password
SQL> connect sys/manager@CDB2 as sysdba
Connected.
SQL> alter session set container=RO_PDB;

Session altered.

SQL> select count(*)
2 from dba_objects
3 where owner = 'HEMANT'
4 /

COUNT(*)
----------
32

SQL> select max(created)
2 from dba_objects
3 where owner = 'HEMANT'
4 /

MAX(CREAT
---------
04-SEP-21

SQL>
SQL> desc hemant.list_of_objects
ERROR:
ORA-04043: object hemant.list_of_objects does not exist


SQL>


So, the RO_PDB does not have the new table and does not even allow login with an Expired Password. I will first update my password at the source ORCLPDB1 and then refresh RO_PDB and query again.

SQL> connect hemant/hemant@ORCLPDB1
ERROR:
ORA-28002: the password will expire within 7 days


Connected.
SQL> password hemant
Changing password for hemant
Old password:
New password:
Retype new password:
Password changed
SQL>
SQL> connect sys/manager@CDB2 as sysdba
Connected.
SQL> alter pluggable database ro_pdb close; -- must CLOSE to allow a REFRESH

Pluggable database altered.

SQL> alter pluggable database ro_pdb refresh; -- REFRESH command

Pluggable database altered.

SQL> alter pluggable database ro_pdb open read only;

Pluggable database altered.

SQL>
SQL> connect hemant/newhemant@ro_pdb
Connected.
SQL> select count(*) from list_of_objects;

COUNT(*)
----------
73645

SQL>


The Refresh of the RO_PDB is a simple process. I simply CLOSE it, issue an ALTER ... REFRESH command and then OPEN READ ONLY again.
Now, RO_PDB allows connections with the new Password and has the updated data *as of the time the REFRESH command is issued*.  Any new data populated in the source or any change made in the source is not available in RO_PDB until a fresh REFRESH is executed.
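If you would rather not refresh manually, the refresh mode can be changed to a schedule while the PDB is closed. A sketch, assuming a 60-minute interval :

alter pluggable database ro_pdb close;
alter pluggable database ro_pdb refresh mode every 60 minutes;
alter pluggable database ro_pdb open read only;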

Refreshable Clone PDB -- 2 Datafile Names


In my previous post, I demonstrated creating a Refreshable Clone PDB.  How do you handle datafile names ? You can use the FILE_NAME_CONVERT clause to change file names based on substring matching of the file names.  Or you could simply rely on the good-old (old nowadays !) feature / parameter DB_CREATE_FILE_DEST.
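For reference, a sketch of the FILE_NAME_CONVERT variant of the CREATE PLUGGABLE DATABASE command I use later (the substring pair is only illustrative; my demo below relies on DB_CREATE_FILE_DEST instead) :

create pluggable database ro_pdb
from orclpdb1@for_pdb_clones
refresh mode manual
file_name_convert=('/opt/oracle/oradata/ORCLCDB','/opt/oracle/oradata/CDB2')
/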

In the ORCLPDB1 Pluggable Database at ORCLCDB (Source), these are my datafiles :

SQL> connect sys/manager@orclcdb as sysdba
Connected.
SQL> show parameter db_create_file_dest

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
db_create_file_dest string /opt/oracle/oradata
SQL> alter session set container=ORCLPDB1;

Session altered.

SQL> col file_name format a98
SQL> select file_id, file_name
2 from dba_data_files
3 order by file_id
4 /

FILE_ID FILE_NAME
---------- --------------------------------------------------------------------------------------------------
9 /opt/oracle/oradata/ORCLCDB/ORCLPDB1/system01.dbf
10 /opt/oracle/oradata/ORCLCDB/ORCLPDB1/sysaux01.dbf
11 /opt/oracle/oradata/ORCLCDB/ORCLPDB1/undotbs01.dbf
12 /opt/oracle/oradata/ORCLCDB/ORCLPDB1/users01.dbf
26 /opt/oracle/oradata/ORCLCDB/8812E29864F47615E0530100007FA424/datafile/o1_mf_data_min_j2p8z0qn_.dbf
31 /opt/oracle/oradata/ORCLCDB/8812E29864F47615E0530100007FA424/datafile/o1_mf_hemant_d_jtkoo5nr_.dbf
32 /opt/oracle/oradata/ORCLCDB/8812E29864F47615E0530100007FA424/datafile/o1_mf_hemant_d_jtkoojqn_.dbf

7 rows selected.

SQL>



Note that DB_CREATE_FILE_DEST can be modified at any time. Here you can see that it was modified before the last 3 tablespaces/datafiles were added to the database.  (The datafiles are automatically sized at an initial 100MB, autoextensible to 32GB).
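Pointing new OMF datafiles to a different location is a single (dynamic) ALTER SYSTEM, e.g. (the path here is only illustrative) :

alter system set db_create_file_dest='/opt/oracle/oradata_new' scope=both;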

So, what are the file names in the RO_PDB at CDB2 ?


SQL> connect sys/manager@cdb2 as sysdba
Connected.
SQL> alter pluggable database ro_pdb open read only;

Pluggable database altered.

SQL> show parameter db_create_file_dest

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
db_create_file_dest string /opt/oracle/oradata
SQL> alter session set container=RO_PDB;

Session altered.

SQL> col file_name format a98
SQL> select file_id, file_name
2 from dba_data_files
3 order by file_id
4 /

no rows selected

SQL>


HEY ! Why can't I see the data files in RO_PDB ?  This is likely Oracle Bug#30536162.  See MOS Document "Some Data File Missing From DBA_DATA_FILES or CDB_DATA_FILES in Refreshable PDB (Doc ID 2610743.1)".   The workaround is to use V$DATAFILE at the Instance Level.
Let me check that in both instances.

At the source :

SQL> connect sys/manager@ORCLCDB as sysdba
Connected.
SQL> select file#, name
2 from v$datafile
3 where con_id =
4 (select con_id
5 from v$pdbs
6 where name = 'ORCLPDB1')
7 order by 1
8 /

FILE# NAME
------ --------------------------------------------------------------------------------------------------
9 /opt/oracle/oradata/ORCLCDB/ORCLPDB1/system01.dbf
10 /opt/oracle/oradata/ORCLCDB/ORCLPDB1/sysaux01.dbf
11 /opt/oracle/oradata/ORCLCDB/ORCLPDB1/undotbs01.dbf
12 /opt/oracle/oradata/ORCLCDB/ORCLPDB1/users01.dbf
26 /opt/oracle/oradata/ORCLCDB/8812E29864F47615E0530100007FA424/datafile/o1_mf_data_min_j2p8z0qn_.dbf
31 /opt/oracle/oradata/ORCLCDB/8812E29864F47615E0530100007FA424/datafile/o1_mf_hemant_d_jtkoo5nr_.dbf
32 /opt/oracle/oradata/ORCLCDB/8812E29864F47615E0530100007FA424/datafile/o1_mf_hemant_d_jtkoojqn_.dbf

7 rows selected.

SQL>


And, at the target :

SQL> connect sys/manager@cdb2 as sysdba
Connected.
SQL> show parameter db_create_file_dest

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
db_create_file_dest string /opt/oracle/oradata
SQL>
SQL> select file#, name
2 from v$datafile
3 where con_id =
4 (select con_id
5 from v$pdbs
6 where name = 'RO_PDB')
7 order by 1
8 /

FILE# NAME
---------- --------------------------------------------------------------------------------------------------
43 /opt/oracle/oradata/CDB2/D240086D48C012D0E0550A00275FC834/datafile/o1_mf_system_jtncqq6j_.dbf
44 /opt/oracle/oradata/CDB2/D240086D48C012D0E0550A00275FC834/datafile/o1_mf_sysaux_jtncqq6n_.dbf
45 /opt/oracle/oradata/CDB2/D240086D48C012D0E0550A00275FC834/datafile/o1_mf_undotbs1_jtncqq6o_.dbf
46 /opt/oracle/oradata/CDB2/D240086D48C012D0E0550A00275FC834/datafile/o1_mf_users_jtncqq6r_.dbf
47 /opt/oracle/oradata/CDB2/D240086D48C012D0E0550A00275FC834/datafile/o1_mf_data_min_jtncqq6s_.dbf
48 /opt/oracle/oradata/CDB2/D240086D48C012D0E0550A00275FC834/datafile/o1_mf_hemant_d_jtncqq6t_.dbf
49 /opt/oracle/oradata/CDB2/D240086D48C012D0E0550A00275FC834/datafile/o1_mf_hemant_d_jtncqq6v_.dbf

7 rows selected.

SQL>


Unlike the source ORCLPDB1 in ORCLCDB, the datafiles in RO_PDB at CDB2 are all OMF files because I had configured DB_CREATE_FILE_DEST before creating RO_PDB.

Why are the FILE_IDs (FILE# in V$DATAFILE) different between the Source PDB and the Target PDB ?  This is because the Source Container Database and Target Container Database may have different numbers of Pluggable Databases, each with different numbers of datafiles.  So, when the Target Pluggable Database is created from the Source PDB, it is allocated FILE# values as they are available in the target Container Database.

For example, these are the numbers at the Source (4 CON_IDs, 18 Datafiles) :

SQL> select con_id, file#
2 from v$datafile
3 order by 1,2
4 /

CON_ID FILE#
---------- ------
1 1
1 3
1 4
1 7
2 5
2 6
2 8
3 9
3 10
3 11
3 12
3 26
3 31
3 32
4 27
4 28
4 29
4 30

18 rows selected.

SQL>
SQL> select con_id
2 from v$pdbs
3 where name = 'ORCLPDB1'
4 /

CON_ID
----------
3

SQL>


And these are at the Target (5 CON_IDs, 23 datafiles) :

SQL> select con_id, file#
2 from v$datafile
3 order by 1,2
4 /

CON_ID FILE#
---------- ----------
1 1
1 3
1 4
1 7
2 5
2 6
2 8
3 9
3 10
3 11
3 12
3 26
4 27
4 28
4 29
4 30
5 43
5 44
5 45
5 46
5 47
5 48
5 49

23 rows selected.

SQL> select con_id
2 from v$pdbs
3 where name = 'RO_PDB'
4 /

CON_ID
----------
5

SQL>


The Source ORCLPDB1 is CON_ID=3 at ORCLCDB and the Target RO_PDB is CON_ID=5 at CDB2.

If I refresh RO_PDB from the source

SQL> alter pluggable database ro_pdb close;

Pluggable database altered.

SQL> alter pluggable database ro_pdb refresh;

Pluggable database altered.

SQL> alter pluggable database ro_pdb open read only;

Pluggable database altered.

SQL>


I find this in the session's trace file at CDB2 (the Target doing the Refresh) :

Trace file /opt/oracle/diag/rdbms/cdb2/CDB2/trace/CDB2_ora_6159.trc
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.12.0.0.0
Build label: RDBMS_19.12.0.0.0DBRU_LINUX.X64_210715
ORACLE_HOME: /opt/oracle/product/19c/dbhome_1
System name: Linux
Node name: ora19cs2
Release: 4.14.35-1902.10.4.1.el7uek.x86_64
Version: #2 SMP Mon Jan 27 14:13:38 PST 2020
Machine: x86_64
Instance name: CDB2
Redo thread mounted by this instance: 1
Oracle process number: 59
Unix process pid: 6159, image: oracle@ora19cs2


*** 2021-12-04T17:39:25.431585+08:00 (RO_PDB(5))
*** SESSION ID:(377.22911) 2021-12-04T17:39:25.431608+08:00
*** CLIENT ID:() 2021-12-04T17:39:25.431613+08:00
*** SERVICE NAME:(CDB2) 2021-12-04T17:39:25.431617+08:00
*** MODULE NAME:(sqlplus@ora19cs2 (TNS V1-V3)) 2021-12-04T17:39:25.431622+08:00
*** ACTION NAME:() 2021-12-04T17:39:25.431626+08:00
*** CLIENT DRIVER:(SQL*PLUS) 2021-12-04T17:39:25.431630+08:00
*** CONTAINER ID:(5) 2021-12-04T17:39:25.431634+08:00

JIT: pid 6159 requesting stop

*** 2021-12-04T17:39:48.052938+08:00 (RO_PDB(5))
Started Serial Media Recovery
This is Foreign Recovery:- HotClone:1 Pre-Plugin:0 HotClone_for_Refresh:1. The Redo going to be applied in Current PDB(Target) will be from Foreign PDB(Source) and there will be file# conversion. Source AFN is from Foreign DB and Target AFN is on Current(local) DB. Dumping Source PDB:3 to Target PDB:5 AFN mapping.
Source AFN:26 = Target AFN:47
Source AFN:32 = Target AFN:49
Source AFN:12 = Target AFN:46
Source AFN:31 = Target AFN:48
Source AFN:9 = Target AFN:43
Source AFN:11 = Target AFN:45
Source AFN:10 = Target AFN:44
.... and there is more information in the trace file which I will cover in my next Blog Post


The Source FILE# values (from ORCLPDB1 at ORCLCDB) are : 26, 32, 12, 31, 9, 11, 10
The corresponding Target FILE# values (in RO_PDB at CDB2) are : 47, 49, 46, 48, 43, 45, 44

To match them, check that the single SYSTEM tablespace datafile is FILE#9 in ORCLPDB1 and FILE#43 in RO_PDB.
The single USERS tablespace datafile is FILE#12 in ORCLPDB1 and FILE#46 in RO_PDB.
The two HEMANT_DATA tablespace datafiles are FILE#s 31, 32 in ORCLPDB1 and FILE#s 48, 49 in RO_PDB.
Quite unfortunately, it seems that the mapping information in the Trace File is not ordered by FILE#.

The trace file for the ALTER PLUGGABLE DATABASE RO_PDB has more information which I will cover in the next Blog Post.

Refreshable Clone PDB -- 3 The ArchiveLog and ArchParLog files


 In my previous blog post about Datafile Name mapping for a Refreshable Clone PDB, I had presented only a portion of the trace file for the ALTER PLUGGABLE DATABASE RO_PDB REFRESH command.

Here is the full listing of that trace file (I have already explained the AFN mapping of FILE#s)


Trace file /opt/oracle/diag/rdbms/cdb2/CDB2/trace/CDB2_ora_6159.trc
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.12.0.0.0
Build label: RDBMS_19.12.0.0.0DBRU_LINUX.X64_210715
ORACLE_HOME: /opt/oracle/product/19c/dbhome_1
System name: Linux
Node name: ora19cs2
Release: 4.14.35-1902.10.4.1.el7uek.x86_64
Version: #2 SMP Mon Jan 27 14:13:38 PST 2020
Machine: x86_64
Instance name: CDB2
Redo thread mounted by this instance: 1
Oracle process number: 59
Unix process pid: 6159, image: oracle@ora19cs2


*** 2021-12-04T17:39:25.431585+08:00 (RO_PDB(5))
*** SESSION ID:(377.22911) 2021-12-04T17:39:25.431608+08:00
*** CLIENT ID:() 2021-12-04T17:39:25.431613+08:00
*** SERVICE NAME:(CDB2) 2021-12-04T17:39:25.431617+08:00
*** MODULE NAME:(sqlplus@ora19cs2 (TNS V1-V3)) 2021-12-04T17:39:25.431622+08:00
*** ACTION NAME:() 2021-12-04T17:39:25.431626+08:00
*** CLIENT DRIVER:(SQL*PLUS) 2021-12-04T17:39:25.431630+08:00
*** CONTAINER ID:(5) 2021-12-04T17:39:25.431634+08:00

JIT: pid 6159 requesting stop

*** 2021-12-04T17:39:48.052938+08:00 (RO_PDB(5))
Started Serial Media Recovery
This is Foreign Recovery:- HotClone:1 Pre-Plugin:0 HotClone_for_Refresh:1. The Redo going to be applied in Current PDB(Target) will be from Foreign PDB(Source) and there will be file# conversion. Source AFN is from Foreign DB and Target AFN is on Current(local) DB. Dumping Source PDB:3 to Target PDB:5 AFN mapping.
Source AFN:26 = Target AFN:47
Source AFN:32 = Target AFN:49
Source AFN:12 = Target AFN:46
Source AFN:31 = Target AFN:48
Source AFN:9 = Target AFN:43
Source AFN:11 = Target AFN:45
Source AFN:10 = Target AFN:44
Dumping database incarnation table:
Resetlogs 0 scn and time: 0x0000000000f22944 11/30/2021 22:58:02
Dumping PDB pathvec - index 0
0000 : pdb 5, dbinc 3, pdbinc 0
db rls 0x0000000000f22944 rlc 1090018682
incscn 0x0000000000000000 ts 0
br scn 0x0000000000000000 ts 0
er scn 0x0000000000000000 ts 0
0001 : pdb 5, dbinc 2, pdbinc 0
db rls 0x0000000000e88404 rlc 1084312848
incscn 0x0000000000000000 ts 0
br scn 0x0000000000000000 ts 0
er scn 0x0000000000000000 ts 0
Recovery target incarnation = 3, activation ID = 0
Influx buffer limit = 39160 min(50% x 78320, 100000)
Start recovery at thread 1 ckpt scn 16121330 logseq 37 block 41766

*** 2021-12-04T17:39:48.430170+08:00 (RO_PDB(5))
Media Recovery add redo thread 1

*** 2021-12-04T17:39:48.494955+08:00 (RO_PDB(5))
Media Recovery Log /opt/oracle/FRA/ORCLCDB/ORCLCDB/archivelog/2021_12_04/o1_mf_1_37_jtpcbxyv_.arc

*** 2021-12-04T17:39:48.868915+08:00 (RO_PDB(5))
Log read is SYNCHRONOUS though disk_asynch_io is enabled!

*** 2021-12-04T17:39:51.447683+08:00 (RO_PDB(5))
Media Recovery Log /opt/oracle/FRA/ORCLCDB/ORCLCDB/foreign_archivelog/ORCLPDB1/2021_12_04/o1_mf_1_38_jtpfttmz_.arc
Log read is SYNCHRONOUS though disk_asynch_io is enabled!

*** 2021-12-04T17:40:04.041793+08:00 (RO_PDB(5))
==== Redo read statistics for thread 1 ====
Total physical reads (from disk and memory): 89402Kb
-- Redo read_disk statistics --
Read rate (SYNC): 89402Kb in 15.61s => 5.59 Mb/sec
Total redo bytes: 89402Kb Longest record: 24Kb, moves: 78/213168 moved: 0Mb (0%)
Longest LWN: 2052Kb, reads: 3089
Last redo scn: 0x0000000000f7e789 (16246665)
Change vector header moves = 31243/386616 (8%)
----------------------------------------------

*** 2021-12-04T17:40:04.041915+08:00 (RO_PDB(5))
Media Recovery drop redo thread 1

*** 2021-12-04T17:40:07.389420+08:00 (RO_PDB(5))
KCBR: Number of read descriptors = 1024
KCBR: Media recovery blocks read (ASYNC) = 650
KCBR: Influx buffers flushed = 40 times
KCBR: Reads = 30 reaps (5 null, 8 wait), 2 all
KCBR: Redo cache copies/changes = 15025/15025

*** 2021-12-04T17:40:07.389466+08:00 (RO_PDB(5))
Completed Media Recovery
----- Abridged Call Stack Trace -----
(deleted by me)


The interesting parts are :
"Start recovery at thread 1 ckpt scn 16121330 logseq 37 block 41766"
and
"Media Recovery Log /opt/oracle/FRA/ORCLCDB/ORCLCDB/archivelog/2021_12_04/o1_mf_1_37_jtpcbxyv_.arc"
and
"Media Recovery Log /opt/oracle/FRA/ORCLCDB/ORCLCDB/foreign_archivelog/ORCLPDB1/2021_12_04/o1_mf_1_38_jtpfttmz_.arc"

It identifies the SCN from which it needs to recover transactions from the source CDB (ORCLCDB) and PDB (ORCLPDB1).  In a MultiTenant environment, all the PDBs share the same Redo Thread -- so any ArchiveLog may have transactions of multiple PDBs.
For the purpose of refreshing RO_PDB in CDB2, it identifies the entries from the source CDB ArchiveLogs.  For Log Sequence#38, it explicitly identifies the ORCLPDB1 (not the whole CDB) entries as a "foreign archivelog".  However, this "foreign archivelog" is later automatically purged by ORCLCDB, as I can see in this ORCLCDB alert log entry

2021-12-04T17:40:07.555215+08:00
Deleted Oracle managed file /opt/oracle/FRA/ORCLCDB/ORCLCDB/foreign_archivelog/ORCLPDB1/2021_12_04/o1_mf_1_38_jtpfttmz_.arc


However, sometimes the Refresh needs to read Redo entries for which a complete ArchiveLog has not yet been generated.  For example, a later Refresh at 22:41 required a partial ArchiveLog for Sequence#41 :

Start recovery at thread 1 ckpt scn 16246668 logseq 0 block 0

*** 2021-12-04T22:41:32.365818+08:00 (RO_PDB(5))
Media Recovery add redo thread 1

*** 2021-12-04T22:41:32.370629+08:00 (RO_PDB(5))
Media Recovery Log /opt/oracle/FRA/ORCLCDB/ORCLCDB/archivelog/2021_12_04/o1_mf_1_38_jtpzhl2s_.arc

*** 2021-12-04T22:41:32.470154+08:00 (RO_PDB(5))
Log read is SYNCHRONOUS though disk_asynch_io is enabled!

*** 2021-12-04T22:41:39.259689+08:00 (RO_PDB(5))
Media Recovery Log /opt/oracle/FRA/ORCLCDB/ORCLCDB/archivelog/2021_12_04/o1_mf_1_39_jtpzhxck_.arc
Log read is SYNCHRONOUS though disk_asynch_io is enabled!

*** 2021-12-04T22:41:42.187854+08:00 (RO_PDB(5))
Media Recovery Log /opt/oracle/FRA/ORCLCDB/ORCLCDB/archivelog/2021_12_04/o1_mf_1_40_jtpzhy18_.arc
Log read is SYNCHRONOUS though disk_asynch_io is enabled!

*** 2021-12-04T22:41:42.773949+08:00 (RO_PDB(5))
Media Recovery Log /opt/oracle/product/19c/dbhome_1/dbs/archparlog_1_41_f1dfc534_1084312848.arc
Log read is SYNCHRONOUS though disk_asynch_io is enabled!


Sequence#41 was still an Online Redo Log at ORCLCDB and had not yet been archived (in fact, the FRA at the source ORCLCDB was full and it was unable to archive log files). So, it seems that for Sequence#41, a Partial Extract was obtained so as to Refresh the Clone PDB.
This doesn't seem to be documented.

DataPump Export using SQL Developer


 I have published a video demonstration of using SQL Developer to run a DataPump export.

Although I did not choose the "Export read-consistent view of data" (at 04:00 in the video), I strongly recommend using this option when exporting multiple tables and/or multiple schemas in a database with concurrent transactions while the export is running.

For example :

Transaction T1 might Insert into Table Y and then Delete from Table X. If DataPump does an export of Table X before the Delete and/or an export of Table Y after the Insert, the exported data would be inconsistent as far as these two tables are concerned.

Transaction T1 might Update Table G. Some minutes later, Transaction T2 might Insert/Update/Delete data in Table H based on the (committed) rows in Table G. If DataPump exports Tables G and H before the DML on Table H (and after the Update at Table G), you would get inconsistent data in the export dump because the data in the dump would reflect two different points in time and not be read-consistent.

This can also happen with Materialized View Refreshes.
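If you run the export from the command line instead of SQL Developer, the equivalent of that "read-consistent" option is the FLASHBACK_TIME (or FLASHBACK_SCN) parameter. A sketch (the directory object and dump file names are only illustrative) :

expdp hemant/hemant@orclpdb1 schemas=HEMANT directory=DATA_PUMP_DIR dumpfile=hemant_consistent.dmp logfile=hemant_consistent.log flashback_time=systimestamp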





Most Viewed Blog Posts of H1-2021


The contents of the Database Controlfile -- 1 : Logical Structure


(No, I am not referring to the "alter database backup controlfile to trace ..." command here.)

In Oracle, the controlfile for a database is the "master" reference to the physical structure of the database and "known" backups.  


This is what I see in my 19c database :

SQL> select *
2 from v$controlfile_record_section
3 order by type
4 /

TYPE RECORD_SIZE RECORDS_TOTAL RECORDS_USED FIRST_INDEX LAST_INDEX LAST_RECID CON_ID
---------------------------- ----------- ------------- ------------ ----------- ---------- ---------- ----------
ACM OPERATION 104 64 11 0 0 0 0
ARCHIVED LOG 584 383 383 161 160 1407 0
AUXILIARY DATAFILE COPY 584 128 0 0 0 0 0
BACKUP CORRUPTION 44 1115 0 0 0 0 0
BACKUP DATAFILE 200 1063 175 1 175 175 0
BACKUP PIECE 780 1006 136 1 136 136 0
BACKUP REDOLOG 76 215 153 1 153 153 0
BACKUP SET 96 1022 125 1 125 125 0
BACKUP SPFILE 124 131 25 1 25 25 0
CKPT PROGRESS 8180 11 0 0 0 0 0
COPY CORRUPTION 40 1227 0 0 0 0 0
DATABASE 316 1 1 0 0 0 0
DATABASE BLOCK CORRUPTION 80 8384 0 0 0 0 0
DATABASE INCARNATION 56 292 4 1 4 4 0
DATAFILE 520 1024 32 0 0 1493 0
DATAFILE COPY 736 1000 3 1 3 3 0
DATAFILE HISTORY 568 57 0 0 0 0 0
DELETED OBJECT 20 818 635 1 635 635 0
FILENAME 524 4146 28 0 0 0 0
FLASHBACK LOG 84 2048 2 0 0 0 0
FOREIGN ARCHIVED LOG 604 1002 0 0 0 0 0
GUARANTEED RESTORE POINT 256 2048 1 0 0 2 0
INSTANCE SPACE RESERVATION 28 1055 1 0 0 0 0
LOG HISTORY 56 292 292 28 27 611 0
MTTR 100 8 1 0 0 0 0
MULTI INSTANCE REDO APPLY 556 1 0 0 0 0 0
OFFLINE RANGE 200 1063 678 1 678 678 0
PDB RECORD 780 10 5 0 0 26 0
PDBINC RECORD 144 113 0 0 0 0 0
PROXY COPY 928 1004 0 0 0 0 0
RECOVERY DESTINATION 180 1 1 0 0 0 0
REDO LOG 72 16 4 0 0 15 0
REDO THREAD 256 8 1 0 0 0 0
REMOVABLE RECOVERY FILES 32 1000 0 0 0 0 0
RESTORE POINT 256 2108 0 0 0 0 0
RMAN CONFIGURATION 1108 50 2 0 0 4 0
RMAN STATUS 116 141 141 58 57 339 0
STANDBY DATABASE MATRIX 400 128 128 0 0 0 0
TABLESPACE 180 1024 21 0 0 72 0
TABLESPACE KEY HISTORY 108 151 0 0 0 0 0
TEMPORARY FILENAME 56 1024 7 0 0 36 0
THREAD INSTANCE NAME MAPPING 80 8 8 0 0 0 0

42 rows selected.

SQL>


This structure is explained by :

SQL> select view_Definition from v$fixed_view_definition where view_name = 'GV$CONTROLFILE_RECORD_SECTION'
2 /

VIEW_DEFINITION
------------------------------------------------------------------------------------------------------------------------------------
select inst_id,decode(indx,0,'DATABASE',1, 'CKPT PROGRESS', 2, 'REDO THREAD',3,'REDO LOG',4,'DATAFILE',5,'FILENAME',6,'TABLESPACE',7
,'TEMPORARY FILENAME',8,'RMAN CONFIGURATION',9,'LOG HISTORY',10,'OFFLINE RANGE',11,'ARCHIVED LOG',12,'BACKUP SET',13,'BACKUP PIECE',
14,'BACKUP DATAFILE',15, 'BACKUP REDOLOG',16,'DATAFILE COPY',17,'BACKUP CORRUPTION',18,'COPY CORRUPTION',19,'DELETED OBJECT',20,'PRO
XY COPY',21,'BACKUP SPFILE',23,'DATABASE INCARNATION',24,'FLASHBACK LOG',25, 'RECOVERY DESTINATION', 26,'INSTANCE SPACE RESERVATION'
, 27, 'REMOVABLE RECOVERY FILES', 28, 'RMAN STATUS', 29, 'THREAD INSTANCE NAME MAPPING', 30, 'MTTR', 31, 'DATAFILE HISTORY', 32, 'ST
ANDBY DATABASE MATRIX', 33, 'GUARANTEED RESTORE POINT', 34, 'RESTORE POINT', 35, 'DATABASE BLOCK CORRUPTION', 36, 'ACM OPERATION', 3
7, 'FOREIGN ARCHIVED LOG', 38, 'PDB RECORD', 39, 'AUXILIARY DATAFILE COPY', 40, 'MULTI INSTANCE REDO APPLY', 41, 'PDBINC RECORD', 42
, 'TABLESPACE KEY HISTORY', 'UNKNOWN'),rsrsz,rsnum,rsnus,rsiol,rsilw,rsrlw, con_id from x$kccrs where indx not in (22)


SQL>


Thus, for example :

SQL> alter session set container=ORCLPDB1;

Session altered.

SQL> create tablespace x;

Tablespace created.

SQL>
SQL> select *
2 from v$controlfile_record_section
3 where type in ('DATAFILE','TABLESPACE')
4 /

TYPE RECORD_SIZE RECORDS_TOTAL RECORDS_USED FIRST_INDEX LAST_INDEX LAST_RECID CON_ID
---------------------------- ----------- ------------- ------------ ----------- ---------- ---------- ----------
DATAFILE 520 1024 33 0 0 1494 0
TABLESPACE 180 1024 22 0 0 73 0

SQL>


The datafile and tablespace record counts have incremented by 1 for the new tablespace and datafile in PDB ORCLPDB1. Next, I run an RMAN Backup of the Pluggable Database :

oracle19c>rman target /

Recovery Manager: Release 19.0.0.0.0 - Production on Sun Jan 16 16:48:54 2022
Version 19.12.0.0.0

Copyright (c) 1982, 2019, Oracle and/or its affiliates. All rights reserved.

connected to target database: ORCLCDB (DBID=2778483057)

RMAN> backup pluggable database orclpdb1;

Starting backup at 16-JAN-22
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=253 device type=DISK
allocated channel: ORA_DISK_2
channel ORA_DISK_2: SID=371 device type=DISK
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00010 name=/opt/oracle/oradata/ORCLCDB/ORCLPDB1/sysaux01.dbf
input datafile file number=00011 name=/opt/oracle/oradata/ORCLCDB/ORCLPDB1/undotbs01.dbf
input datafile file number=00031 name=/opt/oracle/oradata/ORCLCDB/8812E29864F47615E0530100007FA424/datafile/o1_mf_hemant_d_jtkoo5nr_.dbf
input datafile file number=00033 name=/opt/oracle/oradata/ORCLCDB/8812E29864F47615E0530100007FA424/datafile/o1_mf_x_jy7po5gg_.dbf
channel ORA_DISK_1: starting piece 1 at 16-JAN-22
channel ORA_DISK_2: starting full datafile backup set
channel ORA_DISK_2: specifying datafile(s) in backup set
input datafile file number=00009 name=/opt/oracle/oradata/ORCLCDB/ORCLPDB1/system01.dbf
input datafile file number=00012 name=/opt/oracle/oradata/ORCLCDB/ORCLPDB1/users01.dbf
input datafile file number=00026 name=/opt/oracle/oradata/ORCLCDB/8812E29864F47615E0530100007FA424/datafile/o1_mf_data_min_j2p8z0qn_.dbf
input datafile file number=00032 name=/opt/oracle/oradata/ORCLCDB/8812E29864F47615E0530100007FA424/datafile/o1_mf_hemant_d_jtkoojqn_.dbf
channel ORA_DISK_2: starting piece 1 at 16-JAN-22
channel ORA_DISK_1: finished piece 1 at 16-JAN-22
piece handle=/opt/oracle/FRA/ORCLCDB/ORCLCDB/8812E29864F47615E0530100007FA424/backupset/2022_01_16/o1_mf_nnndf_TAG20220116T164908_jy7q04z2_.bkp tag=TAG20220116T164908 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:46
channel ORA_DISK_2: finished piece 1 at 16-JAN-22
piece handle=/opt/oracle/FRA/ORCLCDB/ORCLCDB/8812E29864F47615E0530100007FA424/backupset/2022_01_16/o1_mf_nnndf_TAG20220116T164908_jy7q08kj_.bkp tag=TAG20220116T164908 comment=NONE
channel ORA_DISK_2: backup set complete, elapsed time: 00:00:56
Finished backup at 16-JAN-22

Starting Control File and SPFILE Autobackup at 16-JAN-22
piece handle=/opt/oracle/FRA/ORCLCDB/ORCLCDB/autobackup/2022_01_16/o1_mf_s_1094143804_jy7q1xp1_.bkp comment=NONE
Finished Control File and SPFILE Autobackup at 16-JAN-22
RMAN-08591: warning: invalid archived log deletion policy

RMAN>

SQL> select *
2 from v$controlfile_record_section
3 where type like 'BACKUP%'
4 order by type
5 /

TYPE RECORD_SIZE RECORDS_TOTAL RECORDS_USED FIRST_INDEX LAST_INDEX LAST_RECID CON_ID
---------------------------- ----------- ------------- ------------ ----------- ---------- ---------- ----------
BACKUP CORRUPTION 44 1115 0 0 0 0 0
BACKUP DATAFILE 200 1063 185 1 185 185 0
BACKUP PIECE 780 1006 140 1 140 140 0
BACKUP REDOLOG 76 215 153 1 153 153 0
BACKUP SET 96 1022 129 1 129 129 0
BACKUP SPFILE 124 131 27 1 27 27 0

6 rows selected.

SQL>


My RMAN Backup was for 8 datafiles and ran to 3 BackupPieces and 3 BackupSets.
Yet, the number of "BACKUP DATAFILE" records increased by 10, the number of "BACKUP PIECE" records by 4 and the number of "BACKUP SET" records by 4.  Also, note that the "BACKUP SPFILE" records increased by 2.
The difference is explained by the AutoBackup created immediately after I added the new tablespace 'X' and datafile to the Pluggable Database ORCLPDB1, and the fact that the controlfile AutoBackup is included in the "BACKUP DATAFILE" count.


RMAN> list backup completed after "sysdate-1"
2> ;


List of Backup Sets
===================


BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
126 Full 18.36M DISK 00:00:02 16-JAN-22
BP Key: 137 Status: AVAILABLE Compressed: NO Tag: TAG20220116T164836
Piece Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/autobackup/2022_01_16/o1_mf_s_1094143716_jy7pz5dd_.bkp
SPFILE Included: Modification time: 16-JAN-22
SPFILE db_unique_name: ORCLCDB
Control File Included: Ckp SCN: 16813310 Ckp time: 16-JAN-22

BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
127 Full 342.80M DISK 00:00:42 16-JAN-22
BP Key: 138 Status: AVAILABLE Compressed: NO Tag: TAG20220116T164908
Piece Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/8812E29864F47615E0530100007FA424/backupset/2022_01_16/o1_mf_nnndf_TAG20220116T164908_jy7q04z2_.bkp
List of Datafiles in backup set 127
Container ID: 3, PDB Name: ORCLPDB1
File LV Type Ckp SCN Ckp Time Abs Fuz SCN Sparse Name
---- -- ---- ---------- --------- ----------- ------ ----
10 Full 16813384 16-JAN-22 NO /opt/oracle/oradata/ORCLCDB/ORCLPDB1/sysaux01.dbf
11 Full 16813384 16-JAN-22 NO /opt/oracle/oradata/ORCLCDB/ORCLPDB1/undotbs01.dbf
31 Full 16813384 16-JAN-22 NO /opt/oracle/oradata/ORCLCDB/8812E29864F47615E0530100007FA424/datafile/o1_mf_hemant_d_jtkoo5nr_.dbf
33 Full 16813384 16-JAN-22 NO /opt/oracle/oradata/ORCLCDB/8812E29864F47615E0530100007FA424/datafile/o1_mf_x_jy7po5gg_.dbf

BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
128 Full 635.72M DISK 00:00:51 16-JAN-22
BP Key: 139 Status: AVAILABLE Compressed: NO Tag: TAG20220116T164908
Piece Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/8812E29864F47615E0530100007FA424/backupset/2022_01_16/o1_mf_nnndf_TAG20220116T164908_jy7q08kj_.bkp
List of Datafiles in backup set 128
Container ID: 3, PDB Name: ORCLPDB1
File LV Type Ckp SCN Ckp Time Abs Fuz SCN Sparse Name
---- -- ---- ---------- --------- ----------- ------ ----
9 Full 16813386 16-JAN-22 NO /opt/oracle/oradata/ORCLCDB/ORCLPDB1/system01.dbf
12 Full 16813386 16-JAN-22 NO /opt/oracle/oradata/ORCLCDB/ORCLPDB1/users01.dbf
26 Full 16813386 16-JAN-22 NO /opt/oracle/oradata/ORCLCDB/8812E29864F47615E0530100007FA424/datafile/o1_mf_data_min_j2p8z0qn_.dbf
32 Full 16813386 16-JAN-22 NO /opt/oracle/oradata/ORCLCDB/8812E29864F47615E0530100007FA424/datafile/o1_mf_hemant_d_jtkoojqn_.dbf

BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
129 Full 18.36M DISK 00:00:02 16-JAN-22
BP Key: 140 Status: AVAILABLE Compressed: NO Tag: TAG20220116T165004
Piece Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/autobackup/2022_01_16/o1_mf_s_1094143804_jy7q1xp1_.bkp
SPFILE Included: Modification time: 16-JAN-22
SPFILE db_unique_name: ORCLCDB
Control File Included: Ckp SCN: 16813424 Ckp time: 16-JAN-22

RMAN>


Thus, BackupSet 126 is the automatically-created fourth "BACKUP SET" and the AutoBackups in BackupSets 126 and 129 are the two additional "BACKUP DATAFILE"s.  Similarly, the SPFiles included in the two AutoBackups also incremented the "BACKUP SPFILE" count.

However, when you DROP a Tablespace (and remove its Datafile(s)), the record count does NOT decrement.


SQL> alter session set container=ORCLPDB1;

Session altered.

SQL> drop tablespace X including contents and datafiles;

Tablespace dropped.

SQL>

SQL> select *
2 from v$controlfile_record_section
3 where type in ('DATAFILE','TABLESPACE')
4 /

TYPE RECORD_SIZE RECORDS_TOTAL RECORDS_USED FIRST_INDEX LAST_INDEX LAST_RECID CON_ID
---------------------------- ----------- ------------- ------------ ----------- ---------- ---------- ----------
DATAFILE 520 1024 33 0 0 1495 0
TABLESPACE 180 1024 22 0 0 74 0

SQL>


It simply means that the "33rd" DATAFILE and "22nd" TABLESPACE records are reusable later.  (Note that LAST_RECID has also been incremented for the two entries).  Note how "RECORDS_TOTAL" is 1024 for both "DATAFILE" and "TABLESPACE"; this allows for slots that are present but not currently in use.

What about ArchiveLogs ?


SQL> select *
2 from v$controlfile_record_section
3 where type like 'ARCHIVE%'
4 order by type
5 /

TYPE RECORD_SIZE RECORDS_TOTAL RECORDS_USED FIRST_INDEX LAST_INDEX LAST_RECID CON_ID
---------------------------- ----------- ------------- ------------ ----------- ---------- ---------- ----------
ARCHIVED LOG 584 383 383 161 160 1407 0

SQL> alter system archive log current;

System altered.

SQL> alter system archive log current;

System altered.

SQL> select *
2 from v$controlfile_record_section
3 where type like 'ARCHIVE%'
4 order by type
5 /

TYPE RECORD_SIZE RECORDS_TOTAL RECORDS_USED FIRST_INDEX LAST_INDEX LAST_RECID CON_ID
---------------------------- ----------- ------------- ------------ ----------- ---------- ---------- ----------
ARCHIVED LOG 584 383 383 165 164 1411 0

SQL>
SQL> select count(*)
2 from v$archived_log
3 /

COUNT(*)
----------
383

SQL>


Apparently, Oracle reuses "ARCHIVED LOG" records in the Controlfile, while adjusting the FIRST_INDEX, LAST_INDEX and LAST_RECID values (note how, in this case, "FIRST_INDEX" is actually higher than "LAST_INDEX", indicating a wrap-around). So, this seems to be a Circular Structure that will expand only when necessary.

So, I generate two more ArchiveLogs and then check for and delete missing ArchiveLogs with :


SQL> alter system archive log current;

System altered.

SQL> alter system archive log current;

System altered.

SQL>


RMAN> crosscheck archivelog all; -- which returns a number of "validation failed for archived log" warnings, indicating ArchiveLogs that have been deleted at the OS level
and
RMAN> delete noprompt expired archivelog all; -- to delete all those marked expired after validation failed

SQL> select *
2 from v$controlfile_record_section
3 where type like 'ARCHIVE%'
4 order by type
5 /

TYPE RECORD_SIZE RECORDS_TOTAL RECORDS_USED FIRST_INDEX LAST_INDEX LAST_RECID CON_ID
---------------------------- ----------- ------------- ------------ ----------- ---------- ---------- ----------
ARCHIVED LOG 584 383 383 169 168 1415 0

SQL>
SQL> select dest_id, standby_dest, deleted, status, count(*)
2 from v$archived_log
3 group by dest_id, standby_dest, deleted, status
4 order by 1,2,3,4
5 /

DEST_ID STA DEL S COUNT(*)
---------- --- --- - ----------
1 NO NO A 102
1 NO YES D 23
2 NO NO A 8
2 YES NO A 97
3 YES NO A 105
4 YES NO A 48

6 rows selected.

SQL>


23 ArchiveLog entries at DEST_ID=1 are now marked as DELETED (by the "DELETE .. EXPIRED ARCHIVELOG ALL" command).
The FIRST_INDEX and LAST_INDEX have changed again. 

My query on v$archived_log shows a number of entries for DEST_IDs 2 to 4 as Standby Destinations (I have 3 Standby Databases, so this Database, as a Primary, is tracking the ArchiveLogs it has to send to the Standbys).  Only entries for DEST_ID=1 are on the Local Filesystem on this server.

So : The 383 Records in the Controlfile do not represent the actual count of Physical ArchiveLogs for this Database present on this server.  At some point in the past, the number of entries had hit 383 but now there are "empty slots" that are being reused.

In the meantime, "DELETED OBJECT" count has increased by 48.


SQL> select *
2 from v$controlfile_record_section
3 where type = 'DELETED OBJECT'
4 /

TYPE RECORD_SIZE RECORDS_TOTAL RECORDS_USED FIRST_INDEX LAST_INDEX LAST_RECID CON_ID
---------------------------- ----------- ------------- ------------ ----------- ---------- ---------- ----------
DELETED OBJECT 20 818 683 1 683 683 0

SQL>


Is it some form of "garbage collector" ?

The contents of the Database Controlfile -- 2 : Physical Structure


 In my previous post, I had demonstrated how you can view the "count of records" in different sections of the controlfile.  However, you cannot actually view the data in those records.

You can, however, do a Physical Dump of the controlfile :



SQL> select name, block_size*file_size_blks/1024
2 from v$controlfile;

NAME
------------------------------------------------------------------------------------------------------------------------------------
BLOCK_SIZE*FILE_SIZE_BLKS/1024
------------------------------
/opt/oracle/oradata/ORCLCDB/control01.ctl
18688

/opt/oracle/oradata/ORCLCDB/control02.ctl
18688


SQL> !ls -l /opt/oracle/oradata/ORCLCDB/control01.ctl
-rw-r-----. 1 oracle oinstall 19152896 Jan 18 22:15 /opt/oracle/oradata/ORCLCDB/control01.ctl

SQL> !bc
bc 1.06.95
Copyright 1991-1994, 1997, 1998, 2000, 2004, 2006 Free Software Foundation, Inc.
This is free software with ABSOLUTELY NO WARRANTY.
For details type `warranty'.
19152896/1024
18704
quit

SQL>
SQL> alter session set tracefile_identifier='my_cntrlfile_dump';

Session altered.

SQL>
SQL> select *
2 from v$diag_info
3 where name = 'Default Trace File'
4 /

INST_ID NAME
---------- ----------------------------------------------------------------
VALUE
------------------------------------------------------------------------------------------------------------------------------------
CON_ID
----------
1 Default Trace File
/opt/oracle/diag/rdbms/orclcdb/ORCLCDB/trace/ORCLCDB_ora_3207_my_cntrlfile_dump.trc
0


SQL>
SQL> alter session set events 'immediate trace name controlf level 10';

Session altered.

SQL> !ls -l /opt/oracle/diag/rdbms/orclcdb/ORCLCDB/trace/ORCLCDB_ora_3207_my_cntrlfile_dump.trc
-rw-r-----. 1 oracle oinstall 1449760 Jan 18 22:17 /opt/oracle/diag/rdbms/orclcdb/ORCLCDB/trace/ORCLCDB_ora_3207_my_cntrlfile_dump.trc

SQL>
SQL> quit
Disconnected from Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.12.0.0.0
oracle19c>view /opt/oracle/diag/rdbms/orclcdb/ORCLCDB/trace/ORCLCDB_ora_3207_my_cntrlfile_dump.trc


The "alter session set events 'immediate trace name controlf level 10'" command does a Physical Dump of the trace file.

I can't reproduce the whole file here, but am presenting the interesting bits.


DUMP OF CONTROL FILES, Seq # 1036136918 = 0x3dc231d6
V10 STYLE FILE HEADER:
Compatibility Vsn = 318767104=0x13000000
Db ID=2778483057=0xa59c4971, Db Name='ORCLCDB'
Activation ID=0=0x0
Control Seq=1036136918=0x3dc231d6, File size=1168=0x490
File Number=0, Blksiz=16384, File Type=1 CONTROL
Logical block number 1 (header block)
Dump of memory from 0x00007F02655C9000 to 0x00007F02655CD000
7F02655C9000 0000C215 00000001 00000000 04010000 [................]
7F02655C9010 0000791A 00000000 13000000 A59C4971 [.y..........qI..]
7F02655C9020 4C43524F 00424443 3DC231D6 00000490 [ORCLCDB..1.=....]
7F02655C9030 00004000 00010000 00000000 00000000 [.@..............]
7F02655C9040 32474154 30313230 54363239 33333132 [TAG20210926T2133]
7F02655C9050 00003230 00000000 00000000 00000000 [02..............]
7F02655C9060 AACA5B83 3C0C08F2 01025B25 00008000 [.[.....<%[......]
7F02655C9070 413A3EE4 00000000 00000000 00000000 [.>:A............]
7F02655C9080 00000000 00000000 00000000 00000000 [................]
Repeat 7 times
7F02655C9100 00000000 00000000 00000008 00000008 [................]
7F02655C9110 00000008 00000000 00000000 00000000 [................]
7F02655C9120 00000001 00000000 00000000 00000000 [................]
7F02655C9130 00000000 00000000 00000000 00000000 [................]
Repeat 1003 times
7F02655CCFF0 00000000 00000000 00000000 00001501 [................]
... nothing interesting
... except that, funnily, some RMAN Backup TAG is visible !


Logical block number 44
Dump of memory from 0x00007F02655AD000 to 0x00007F02655B1000
7F02655AD000 0000C215 00000059 3DC230EF 0401FFFF [....Y....0.=....]
7F02655AD010 0000461E 00030003 00000000 6F2F0000 [.F............/o]
7F02655AD020 6F2F7470 6C636172 726F2F65 74616461 [pt/oracle/oradat]
7F02655AD030 524F2F61 44434C43 65722F42 33306F64 [a/ORCLCDB/redo03]
7F02655AD040 676F6C2E 00000000 00000000 00000000 [.log............]
7F02655AD050 00000000 00000000 00000000 00000000 [................]
Repeat 28 times
7F02655AD220 00020003 00000000 6F2F0000 6F2F7470 [........../opt/o]
7F02655AD230 6C636172 726F2F65 74616461 524F2F61 [racle/oradata/OR]
7F02655AD240 44434C43 65722F42 32306F64 676F6C2E [CLCDB/redo02.log]
7F02655AD250 00000000 00000000 00000000 00000000 [................]
Repeat 28 times
7F02655AD420 00000000 00000000 00000000 00010003 [................]
7F02655AD430 00000000 6F2F0000 6F2F7470 6C636172 [....../opt/oracl]
7F02655AD440 726F2F65 74616461 524F2F61 44434C43 [e/oradata/ORCLCD]
7F02655AD450 65722F42 31306F64 676F6C2E 00000000 [B/redo01.log....]
7F02655AD460 00000000 00000000 00000000 00000000 [................]
Repeat 28 times
7F02655AD630 00000000 00000000 00070004 00000000 [................]
7F02655AD640 6F2F0000 6F2F7470 6C636172 726F2F65 [../opt/oracle/or]
7F02655AD650 74616461 524F2F61 44434C43 73752F42 [adata/ORCLCDB/us]
7F02655AD660 30737265 62642E31 00000066 00000000 [ers01.dbf.......]
7F02655AD670 00000000 00000000 00000000 00000000 [................]
Repeat 28 times
7F02655AD840 00000000 00040004 00000000 6F2F0000 [............../o]
7F02655AD850 6F2F7470 6C636172 726F2F65 74616461 [pt/oracle/oradat]
7F02655AD860 524F2F61 44434C43 6E752F42 62746F64 [a/ORCLCDB/undotb]
7F02655AD870 2E313073 00666264 00000000 00000000 [s01.dbf.........]
7F02655AD880 00000000 00000000 00000000 00000000 [................]
Repeat 28 times
7F02655ADA50 00010004 00000000 6F2F0000 6F2F7470 [........../opt/o]
7F02655ADA60 6C636172 726F2F65 74616461 524F2F61 [racle/oradata/OR]
7F02655ADA70 44434C43 79732F42 6D657473 642E3130 [CLCDB/system01.d]
7F02655ADA80 00006662 00000000 00000000 00000000 [bf..............]
7F02655ADA90 00000000 00000000 00000000 00000000 [................]
Repeat 27 times
7F02655ADC50 00000000 00000000 00000000 00030004 [................]
7F02655ADC60 00000000 6F2F0000 6F2F7470 6C636172 [....../opt/oracl]
7F02655ADC70 726F2F65 74616461 524F2F61 44434C43 [e/oradata/ORCLCD]
7F02655ADC80 79732F42 78756173 642E3130 00006662 [B/sysaux01.dbf..]
7F02655ADC90 00000000 00000000 00000000 00000000 [................]
Repeat 28 times
7F02655ADE60 00000000 00000000 00010007 00000000 [................]
7F02655ADE70 6F2F0000 6F2F7470 6C636172 726F2F65 [../opt/oracle/or]
7F02655ADE80 74616461 524F2F61 44434C43 65742F42 [adata/ORCLCDB/te]
7F02655ADE90 3130706D 6662642E 00000000 00000000 [mp01.dbf........]
7F02655ADEA0 00000000 00000000 00000000 00000000 [................]
Repeat 28 times
7F02655AE070 00000000 00050004 00000000 6F2F0000 [............../o]
7F02655AE080 6F2F7470 6C636172 726F2F65 74616461 [pt/oracle/oradat]
7F02655AE090 524F2F61 44434C43 64702F42 65657362 [a/ORCLCDB/pdbsee]
7F02655AE0A0 79732F64 6D657473 642E3130 00006662 [d/system01.dbf..]
7F02655AE0B0 00000000 00000000 00000000 00000000 [................]
Repeat 28 times
7F02655AE280 00060004 00000000 6F2F0000 6F2F7470 [........../opt/o]
7F02655AE290 6C636172 726F2F65 74616461 524F2F61 [racle/oradata/OR]
7F02655AE2A0 44434C43 64702F42 65657362 79732F64 [CLCDB/pdbseed/sy]
7F02655AE2B0 78756173 642E3130 00006662 00000000 [saux01.dbf......]
7F02655AE2C0 00000000 00000000 00000000 00000000 [................]
Repeat 27 times
7F02655AE480 00000000 00000000 00000000 00080004 [................]
7F02655AE490 00000000 6F2F0000 6F2F7470 6C636172 [....../opt/oracl]
7F02655AE4A0 726F2F65 74616461 524F2F61 44434C43 [e/oradata/ORCLCD]
7F02655AE4B0 64702F42 65657362 6E752F64 62746F64 [B/pdbseed/undotb]
7F02655AE4C0 2E313073 00666264 00000000 00000000 [s01.dbf.........]
7F02655AE4D0 00000000 00000000 00000000 00000000 [................]
Repeat 27 times
.... and so on ..
... So, in the above section we see the physical datafiles of the database.




7F02655B0760 6F000000 6F2F7470 6C636172 72702F65 [...opt/oracle/pr]
7F02655B0770 6375646F 39312F74 62642F63 656D6F68 [oduct/19c/dbhome]
7F02655B0780 642F315F 732F7362 6370616E 524F5F66 [_1/dbs/snapcf_OR]
7F02655B0790 44434C43 00662E42 00000000 00000000 [CLCDB.f.........]
7F02655B07A0 00000000 00000000 00000000 00000000 [................]
Repeat 27 times
7F02655B0960 00000000 00000000 00000000 6F000000 [...............o]
7F02655B0970 6F2F7470 6C636172 72702F65 6375646F [pt/oracle/produc]
7F02655B0980 39312F74 62642F63 656D6F68 642F315F [t/19c/dbhome_1/d]
7F02655B0990 732F7362 6370616E 524F5F66 44434C43 [bs/snapcf_ORCLCD]
7F02655B09A0 00662E42 00000000 00000000 00000000 [B.f.............]
7F02655B09B0 00000000 00000000 00000000 00000000 [................]
... here we see the Snapshot Controlfile



Logical block number 177
Dump of memory from 0x00007F02655C1000 to 0x00007F02655C5000
7F02655C1000 0000C215 00000163 3DC230C2 0401FFFF [....c....0.=....]
7F02655C1010 0000993A 00000001 59530006 58554153 [:.........SYSAUX]
7F02655C1020 00000000 00000000 00000000 00000000 [................]
Repeat 8 times
7F02655C10B0 00000000 00010003 00000000 00000000 [................]
7F02655C10C0 00000000 00000000 00000000 59530006 [..............SY]
7F02655C10D0 4D455453 00000000 00000000 00000000 [STEM............]
7F02655C10E0 00000000 00000000 00000000 00000000 [................]
Repeat 7 times
7F02655C1160 00000000 00000000 00010001 00000000 [................]
7F02655C1170 00000000 00000000 00000000 00000002 [................]
7F02655C1180 4E550008 42544F44 00003153 00000000 [..UNDOTBS1......]
7F02655C1190 00000000 00000000 00000000 00000000 [................]
Repeat 7 times
7F02655C1210 00000000 00000000 00000000 00010004 [................]
7F02655C1220 00000000 00000000 00000000 00000000 [................]
7F02655C1230 00000004 53550005 00535245 00000000 [......USERS.....]
7F02655C1240 00000000 00000000 00000000 00000000 [................]
Repeat 8 times
7F02655C12D0 00010007 00000000 00000000 00000000 [................]
7F02655C12E0 00000000 00000003 45540004 0000504D [..........TEMP..]
7F02655C12F0 00000000 00000000 00000000 00000000 [................]
Repeat 6 times
7F02655C1360 00000000 00000000 00010000 00000000 [................]
7F02655C1370 00000000 00000000 00000000 00000000 [................]
7F02655C1380 00000000 00010001 00000000 00000000 [................]
7F02655C1390 00000000 00000000 00000000 59530006 [..............SY]
7F02655C13A0 4D455453 00000000 00000000 00000000 [STEM............]
7F02655C13B0 00000000 00000000 00000000 00000000 [................]
Repeat 7 times
7F02655C1430 00000000 00000000 00020005 00000000 [................]
7F02655C1440 00000000 00000000 00000000 00000001 [................]
7F02655C1450 59530006 58554153 00000000 00000000 [..SYSAUX........]
7F02655C1460 00000000 00000000 00000000 00000000 [................]
.... and so on ...
... this section has Tablespace Names.




Logical block number 193
Dump of memory from 0x00007F02655C1000 to 0x00007F02655C5000
7F02655C1000 0000C215 00000183 3DC1CC8F 0401FFFF [...........=....]
7F02655C1010 00000887 48435241 4C455649 4420474F [....ARCHIVELOG D]
7F02655C1020 54454C45 204E4F49 494C4F50 00005943 [ELETION POLICY..]
7F02655C1030 00000000 00000000 00000000 00000000 [................]
Repeat 1 times
7F02655C1050 00000000 204F5400 4C505041 20444549 [.....TO APPLIED ]
7F02655C1060 41204E4F 53204C4C 444E4154 42205942 [ON ALL STANDBY B]
7F02655C1070 454B4341 50552044 54203120 53454D49 [ACKED UP 1 TIMES]
7F02655C1080 204F5420 4B534944 00000000 00000000 [ TO DISK........]
7F02655C1090 00000000 00000000 00000000 00000000 [................]
Repeat 60 times
7F02655C1460 00000000 00000000 49564544 54204543 [........DEVICE T]
7F02655C1470 00455059 00000000 00000000 00000000 [YPE.............]
7F02655C1480 00000000 00000000 00000000 00000000 [................]
Repeat 1 times
7F02655C14A0 00000000 00000000 53494400 4150204B [.........DISK PA]
7F02655C14B0 4C4C4152 53494C45 2032204D 4B434142 [RALLELISM 2 BACK]
7F02655C14C0 54205055 20455059 42204F54 554B4341 [UP TYPE TO BACKU]
7F02655C14D0 54455350 00000000 00000000 00000000 [PSET............]
7F02655C14E0 00000000 00000000 00000000 00000000 [................]
Repeat 944 times
... this section has RMAN CONFIGURATION information
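(These are the same persistent settings that RMAN reports with its SHOW command.  For example :

RMAN> show all;
RMAN> show archivelog deletion policy;

Settings still at their defaults are listed by SHOW ALL with a trailing "# default".)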



Logical block number 211
Dump of memory from 0x00007F0265589000 to 0x00007F026558D000
7F0265589000 0000C215 000001A8 3DC21EA3 0401FFFF [...........=....]
7F0265589010 000008C1 40845AAE 00010012 000001EE [.....Z.@........]
7F0265589020 00493300 00000000 3DC1C40E 00E6DAE8 [.3I........=....]
7F0265589030 00008000 40845629 00E6E7E1 00008000 [....)V.@........]
7F0265589040 40845A9A 0002631D 00000200 00000001 [.Z.@.c..........]
7F0265589050 A7521CCD 00010000 74706F2F 61726F2F [..R...../opt/ora]
7F0265589060 2F656C63 68637261 6C657669 4F2F676F [cle/archivelog/O]
7F0265589070 434C4352 312F4244 3439345F 3330315F [RCLCDB/1_494_103]
7F0265589080 38303136 2E343138 00666264 00000000 [6108814.dbf.....]
7F0265589090 00000000 00000000 00000000 00000000 [................]
Repeat 27 times
7F0265589250 00000000 00000000 00000000 40A141E4 [.............A.@]
7F0265589260 00010012 000001EF 00493300 00000000 [.........3I.....]
7F0265589270 3DC1C40E 00E6E7E1 00008000 40845A9A [...=.........Z.@]
7F0265589280 00E6E7E4 00008000 40A141E3 00000001 [.........A.@....]
7F0265589290 00000200 00000001 A7521CCD 00010000 [..........R.....]
7F02655892A0 74706F2F 61726F2F 2F656C63 68637261 [/opt/oracle/arch]
7F02655892B0 6C657669 4F2F676F 434C4352 312F4244 [ivelog/ORCLCDB/1]
7F02655892C0 3539345F 3330315F 38303136 2E343138 [_495_1036108814.]
7F02655892D0 00666264 00000000 00000000 00000000 [dbf.............]
7F02655892E0 00000000 00000000 00000000 00000000 [................]
Repeat 27 times
7F02655894A0 00000000 40A143D5 00010012 000001F0 [.....C.@........]
7F02655894B0 00493300 00000000 3DC1C40E 00E6E7E4 [.3I........=....]
7F02655894C0 00008000 40A141E3 00E86F7B 00008000 [.....A.@{o......]
7F02655894D0 40A143D5 000000A9 00000200 00000001 [.C.@............]
7F02655894E0 A7521CCD 00010000 74706F2F 61726F2F [..R...../opt/ora]
7F02655894F0 2F656C63 68637261 6C657669 4F2F676F [cle/archivelog/O]
7F0265589500 434C4352 312F4244 3639345F 3330315F [RCLCDB/1_496_103]
7F0265589510 38303136 2E343138 00666264 00000000 [6108814.dbf.....]
7F0265589520 00000000 00000000 00000000 00000000 [................]
Repeat 27 times
.... and so on ...
... this section has *some* ArchiveLog FileNames
... other ArchiveLog FileNames appear further down the file



Repeat 30 times
7F0265589DC0 00000000 40A143DD 0001061A 000001EA [.....C.@........]
7F0265589DD0 00493300 00000000 3DC1C40E 00E683F5 [.3I........=....]
7F0265589DE0 00008000 40844BA3 00E6841B 00008000 [.....K.@........]
7F0265589DF0 40844BA9 00000017 00000200 003C0001 [.K.@..........<.]
7F0265589E00 A7521CCD 00030000 42445453 00000032 [..R.....STDB2...]
7F0265589E10 00000000 00000000 00000000 00000000 [................]
.... here I see a reference to one of my Standby Databases STDB2




Logical block number 218
Dump of memory from 0x00007F02655A5000 to 0x00007F02655A9000
7F02655A5000 0000C215 000001B5 3DC1D6CE 0401FFFF [...........=....]
7F02655A5010 0000B131 3D94E011 3D94E010 00000001 [1......=...=....]
7F02655A5020 00018003 3D94E011 00000000 00008FA0 [.......=........]
7F02655A5030 4B534944 00000000 00000000 00000000 [DISK............]
7F02655A5040 706F0000 726F2F74 656C6361 6F72702F [..opt/oracle/pro]
7F02655A5050 74637564 6339312F 6862642F 5F656D6F [duct/19c/dbhome_]
7F02655A5060 62642F31 2D632F73 38373732 30333834 [1/dbs/c-27784830]
7F02655A5070 322D3735 30303230 2D333232 00003030 [57-20200223-00..]
7F02655A5080 00000000 00000000 00000000 00000000 [................]
Repeat 36 times
7F02655A52D0 00000000 32474154 30303230 54333232 [....TAG20200223T]
7F02655A52E0 37343232 00003434 00000000 00000000 [224744..........]
7F02655A52F0 00000000 00000000 B637B686 657ADF2F [..........7./.ze]
7F02655A5300 06F753E0 CA270AE8 3D94E011 00000000 [.S....'....=....]
7F02655A5310 00000000 00000000 00000000 00000000 [................]
7F02655A5320 3DC1C42D 3DC1C42C 00000024 0001C003 [-..=,..=$.......]
7F02655A5330 3DC1C42D 00000000 00008FA0 4B534944 [-..=........DISK]
7F02655A5340 00000000 00000000 00000000 706F0000 [..............op]
7F02655A5350 726F2F74 656C6361 4152462F 43524F2F [t/oracle/FRA/ORC]
7F02655A5360 4244434C 43524F2F 4244434C 7475612F [LCDB/ORCLCDB/aut]
7F02655A5370 6361626F 2F70756B 30323032 5F33305F [obackup/2020_03_]
7F02655A5380 6F2F3732 666D5F31 315F735F 31363330 [27/o1_mf_s_10361]
7F02655A5390 34383830 37685F34 66666E73 2E5F7862 [08844_h7snffbx_.]
7F02655A53A0 00706B62 00000000 00000000 00000000 [bkp.............]
7F02655A53B0 00000000 00000000 00000000 00000000 [................]
Repeat 34 times
7F02655A55E0 32474154 30303230 54373233 30303030 [TAG20200327T0000]
7F02655A55F0 00003434 00000000 00000000 00000000 [44..............]
7F02655A5600 00000000 B637B686 657ADF2F 06F753E0 [......7./.ze.S..]
7F02655A5610 CA270AE8 3DC1C42D 00000000 00000000 [..'.-..=........]
7F02655A5620 00000000 00000000 00000000 3E0E1E81 [...............>]
7F02655A5630 3E0E1E80 00000027 0001C003 3E0E1E97 [...>'..........>]
7F02655A5640 00000040 00093AA0 4B534944 00000000 [@....:..DISK....]
7F02655A5650 00000000 00000000 706F0000 726F2F74 [..........opt/or]
7F02655A5660 656C6361 4152462F 43524F2F 4244434C [acle/FRA/ORCLCDB]
7F02655A5670 43524F2F 4244434C 6361622F 7370756B [/ORCLCDB/backups]
7F02655A5680 322F7465 5F303230 325F3530 316F2F32 [et/2020_05_22/o1]
7F02655A5690 5F666D5F 646E6E6E 41545F66 32303247 [_mf_nnndf_TAG202]
7F02655A56A0 32353030 31325432 34323835 6864685F [00522T215824_hdh]
7F02655A56B0 31316E70 622E5F6C 0000706B 00000000 [pn11l_.bkp......]
7F02655A56C0 00000000 00000000 00000000 00000000 [................]
Repeat 31 times
7F02655A58C0 00000000 00000000 00000000 3E0E1E80 [...............>]
7F02655A58D0 00000000 0000002C 00000000 00000000 [....,...........]
7F02655A58E0 00000000 00000000 00000000 32474154 [............TAG2]
7F02655A58F0 30303230 54323235 38353132 00003432 [0200522T215824..]
7F02655A5900 00000000 00000000 00000000 00000000 [................]
7F02655A5910 B637B686 657ADF2F 06F753E0 CA270AE8 [..7./.ze.S....'.]
7F02655A5920 3E0E1E81 00000000 00000000 00000000 [...>............]
7F02655A5930 00000000 00000000 3E0E1E9A 3E0E1E9A [...........>...>]
7F02655A5940 00000028 0001C003 3E0E1EAC 00000040 [(..........>@...]
7F02655A5950 000409F0 4B534944 00000000 00000000 [....DISK........]
7F02655A5960 00000000 706F0000 726F2F74 656C6361 [......opt/oracle]
7F02655A5970 4152462F 43524F2F 4244434C 43524F2F [/FRA/ORCLCDB/ORC]
7F02655A5980 4244434C 3138382F 39324532 46343638 [LCDB/8812E29864F]
7F02655A5990 31363734 35304535 30313033 37303030 [47615E0530100007]
7F02655A59A0 32344146 61622F34 70756B63 2F746573 [FA424/backupset/]
7F02655A59B0 30323032 5F35305F 6F2F3232 666D5F31 [2020_05_22/o1_mf]
7F02655A59C0 6E6E6E5F 545F6664 30324741 35303032 [_nnndf_TAG202005]
7F02655A59D0 32543232 32383531 64685F34 746E7068 [22T215824_hdhpnt]
7F02655A59E0 2E5F7939 00706B62 00000000 00000000 [9y_.bkp.........]
.... and so on ...
... here I see some of the BackupPiece FileNames
... strangely enough, BackupPieces from May 2020 are still in the controlfile !!
... more appear later down the file


7F02655BF7B0 00000000 00000000 00000000 41374F03 [.............O7A]
7F02655BF7C0 00000000 00000155 00000000 00000000 [....U...........]
7F02655BF7D0 00000000 00000000 00000000 32474154 [............TAG2]
7F02655BF7E0 30323230 54363131 39343631 00003830 [0220116T164908..]
7F02655BF7F0 00000000 00000000 00000000 00000000 [................]
7F02655BF800 98E21288 1576F464 000153E0 24A47F00 [....d.v..S.....$]
7F02655BF810 41374F08 00000000 00000000 00000000 [.O7A............]
7F02655BF820 00000000 00000000 41374F3D 41374F3C [........=O7A<O7A]
7F02655BF840 000092E0 4B534944 00000000 00000000 [....DISK........]
7F02655BF850 00000000 706F2F00 726F2F74 656C6361 [...../opt/oracle]
7F02655BF860 4152462F 43524F2F 4244434C 43524F2F [/FRA/ORCLCDB/ORC]
7F02655BF870 4244434C 7475612F 6361626F 2F70756B [LCDB/autobackup/]
7F02655BF880 32323032 5F31305F 6F2F3631 666D5F31 [2022_01_16/o1_mf]
7F02655BF890 315F735F 31343930 30383334 796A5F34 [_s_1094143804_jy]
7F02655BF8A0 78317137 2E5F3170 00706B62 00000000 [7q1xp1_.bkp.....]
7F02655BF8B0 00000000 00000000 00000000 00000000 [................]
Repeat 32 times
7F02655BFAC0 00000000 00000000 41374F3C 00000000 [........<O7A....]



So, it seems that not every entry in the Controlfile is ordered.
Also, there are many entries referencing ArchiveLogs or BackupPieces that no longer exist but haven't been cleaned up yet !
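Such stale records are not, by themselves, a problem -- the slots in these circular record sections simply get reused once they are older than CONTROL_FILE_RECORD_KEEP_TIME days.  If I wanted to tidy them up explicitly, the usual approach would be the one sketched below (a generic sketch, not something I am running against this database in this post) :

SQL> show parameter control_file_record_keep_time

RMAN> crosscheck backup;
RMAN> crosscheck archivelog all;
RMAN> delete noprompt expired backup;
RMAN> delete noprompt expired archivelog all;

CROSSCHECK marks records whose physical files can no longer be found as EXPIRED, and DELETE EXPIRED then removes those records from the RMAN repository (which is the Controlfile, when no Recovery Catalog is in use).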




The Contents of the Database Controlfile -- 3 : Recreating the Controlfile


 In my previous posts, I have shown the Logical Structure and the Physical Contents of the Controlfile.  It is clear that some of the sections (e.g. for "ARCHIVED LOG" entries) have many slots (or "records") not all of which are currently in use.

What happens if you recreate the controlfile ?  Since my first post, I have generated some more Backup Pieces and ArchiveLogs (and purged ArchiveLogs) so the RECORDS_USED or INDEXes of "BACKUP DATAFILE", "BACKUP PIECE" and "ARCHIVED LOG" in this listing are different from that first listing.



SQL> select *
2 from v$controlfile_record_section
3 order by type
4 /

TYPE RECORD_SIZE RECORDS_TOTAL RECORDS_USED FIRST_INDEX LAST_INDEX LAST_RECID CON_ID
---------------------------- ----------- ------------- ------------ ----------- ---------- ---------- ----------
ACM OPERATION 104 64 11 0 0 0 0
ARCHIVED LOG 584 383 383 198 197 1444 0
AUXILIARY DATAFILE COPY 584 128 0 0 0 0 0
BACKUP CORRUPTION 44 1115 0 0 0 0 0
BACKUP DATAFILE 200 1063 210 1 210 210 0
BACKUP PIECE 780 1006 160 1 160 160 0
BACKUP REDOLOG 76 215 215 51 50 265 0
BACKUP SET 96 1022 149 1 149 149 0
BACKUP SPFILE 124 131 33 1 33 33 0
CKPT PROGRESS 8180 11 0 0 0 0 0
COPY CORRUPTION 40 1227 0 0 0 0 0
DATABASE 316 1 1 0 0 0 0
DATABASE BLOCK CORRUPTION 80 8384 0 0 0 0 0
DATABASE INCARNATION 56 292 4 1 4 4 0
DATAFILE 520 1024 34 0 0 1532 0
DATAFILE COPY 736 1000 3 1 3 3 0
DATAFILE HISTORY 568 57 0 0 0 0 0
DELETED OBJECT 20 818 818 14 13 831 0
FILENAME 524 4146 29 0 0 0 0
FLASHBACK LOG 84 2048 2 0 0 0 0
FOREIGN ARCHIVED LOG 604 1002 0 0 0 0 0
GUARANTEED RESTORE POINT 256 2048 1 0 0 2 0
INSTANCE SPACE RESERVATION 28 1055 1 0 0 0 0
LOG HISTORY 56 292 292 54 53 637 0
MTTR 100 8 1 0 0 0 0
MULTI INSTANCE REDO APPLY 556 1 0 0 0 0 0
OFFLINE RANGE 200 1063 696 1 696 696 0
PDB RECORD 780 10 5 0 0 26 0
PDBINC RECORD 144 113 0 0 0 0 0
PROXY COPY 928 1004 0 0 0 0 0
RECOVERY DESTINATION 180 1 1 0 0 0 0
REDO LOG 72 16 4 0 0 15 0
REDO THREAD 256 8 1 0 0 0 0
REMOVABLE RECOVERY FILES 32 1000 6 0 0 0 0
RESTORE POINT 256 2108 0 0 0 0 0
RMAN CONFIGURATION 1108 50 3 0 0 7 0
RMAN STATUS 116 141 141 98 97 379 0
STANDBY DATABASE MATRIX 400 128 128 0 0 0 0
TABLESPACE 180 1024 22 0 0 75 0
TABLESPACE KEY HISTORY 108 151 0 0 0 0 0
TEMPORARY FILENAME 56 1024 7 0 0 36 0
THREAD INSTANCE NAME MAPPING 80 8 8 0 0 0 0

42 rows selected.

SQL>
SQL> alter database backup controlfile to trace as '/home/oracle/controlfile_creation.txt';

Database altered.

SQL>

SQL> !cat /home/oracle/controlfile_creation.txt
-- The following are current System-scope REDO Log Archival related
-- parameters and can be included in the database initialization file.
--
-- LOG_ARCHIVE_DEST=''
-- LOG_ARCHIVE_DUPLEX_DEST=''
--
-- LOG_ARCHIVE_FORMAT=%t_%s_%r.dbf
--
-- DB_UNIQUE_NAME="ORCLCDB"
--
-- LOG_ARCHIVE_CONFIG='SEND, RECEIVE, NODG_CONFIG'
-- LOG_ARCHIVE_MAX_PROCESSES=4
-- STANDBY_FILE_MANAGEMENT=MANUAL
-- FAL_CLIENT=''
-- FAL_SERVER=''
--
-- LOG_ARCHIVE_DEST_1='LOCATION=USE_DB_RECOVERY_FILE_DEST'
-- LOG_ARCHIVE_DEST_1='OPTIONAL REOPEN=300 NODELAY'
-- LOG_ARCHIVE_DEST_1='ARCH NOAFFIRM NOVERIFY SYNC'
-- LOG_ARCHIVE_DEST_1='REGISTER'
-- LOG_ARCHIVE_DEST_1='NOALTERNATE'
-- LOG_ARCHIVE_DEST_1='NODEPENDENCY'
-- LOG_ARCHIVE_DEST_1='NOMAX_FAILURE NOQUOTA_SIZE NOQUOTA_USED NODB_UNIQUE_NAME'
-- LOG_ARCHIVE_DEST_1='VALID_FOR=(PRIMARY_ROLE,ONLINE_LOGFILES)'
-- LOG_ARCHIVE_DEST_STATE_1=ENABLE

--
-- Below are two sets of SQL statements, each of which creates a new
-- control file and uses it to open the database. The first set opens
-- the database with the NORESETLOGS option and should be used only if
-- the current versions of all online logs are available. The second
-- set opens the database with the RESETLOGS option and should be used
-- if online logs are unavailable.
-- The appropriate set of statements can be copied from the trace into
-- a script file, edited as necessary, and executed when there is a
-- need to re-create the control file.
--
-- Set #1. NORESETLOGS case
--
-- The following commands will create a new control file and use it
-- to open the database.
-- Data used by Recovery Manager will be lost.
-- Additional logs may be required for media recovery of offline
-- Use this only if the current versions of all online logs are
-- available.

-- After mounting the created controlfile, the following SQL
-- statement will place the database in the appropriate
-- protection mode:
-- ALTER DATABASE SET STANDBY DATABASE TO MAXIMIZE PERFORMANCE

STARTUP NOMOUNT
CREATE CONTROLFILE REUSE DATABASE "ORCLCDB" NORESETLOGS FORCE LOGGING ARCHIVELOG
MAXLOGFILES 16
MAXLOGMEMBERS 3
MAXDATAFILES 1024
MAXINSTANCES 8
MAXLOGHISTORY 292
LOGFILE
GROUP 1 '/opt/oracle/oradata/ORCLCDB/redo01.log' SIZE 200M BLOCKSIZE 512,
GROUP 2 '/opt/oracle/oradata/ORCLCDB/redo02.log' SIZE 200M BLOCKSIZE 512,
GROUP 3 '/opt/oracle/oradata/ORCLCDB/redo03.log' SIZE 200M BLOCKSIZE 512
-- STANDBY LOGFILE
-- GROUP 4 '/opt/oracle/oradata/ORCLCDB/stdbredo01.log' SIZE 200M BLOCKSIZE 512
DATAFILE
'/opt/oracle/oradata/ORCLCDB/system01.dbf',
'/opt/oracle/oradata/ORCLCDB/sysaux01.dbf',
'/opt/oracle/oradata/ORCLCDB/undotbs01.dbf',
'/opt/oracle/oradata/ORCLCDB/pdbseed/system01.dbf',
'/opt/oracle/oradata/ORCLCDB/pdbseed/sysaux01.dbf',
'/opt/oracle/oradata/ORCLCDB/users01.dbf',
'/opt/oracle/oradata/ORCLCDB/pdbseed/undotbs01.dbf',
'/opt/oracle/oradata/ORCLCDB/ORCLPDB1/system01.dbf',
'/opt/oracle/oradata/ORCLCDB/ORCLPDB1/sysaux01.dbf',
'/opt/oracle/oradata/ORCLCDB/ORCLPDB1/undotbs01.dbf',
'/opt/oracle/oradata/ORCLCDB/ORCLPDB1/users01.dbf',
'/opt/oracle/oradata/ORCLCDB/8812E29864F47615E0530100007FA424/datafile/o1_mf_data_min_j2p8z0qn_.dbf',
'/opt/oracle/oradata/ORCLCDB/C07D1F2BA98C23D0E0530100007F7D34/datafile/o1_mf_system_j80dl8qy_.dbf',
'/opt/oracle/oradata/ORCLCDB/C07D1F2BA98C23D0E0530100007F7D34/datafile/o1_mf_sysaux_j80dl94j_.dbf',
'/opt/oracle/oradata/ORCLCDB/C07D1F2BA98C23D0E0530100007F7D34/datafile/o1_mf_undotbs1_j80dl96d_.dbf',
'/opt/oracle/oradata/ORCLCDB/C07D1F2BA98C23D0E0530100007F7D34/datafile/o1_mf_rman_tbs_j80f07n8_.dbf',
'/opt/oracle/oradata/ORCLCDB/8812E29864F47615E0530100007FA424/datafile/o1_mf_hemant_d_jtkoo5nr_.dbf',
'/opt/oracle/oradata/ORCLCDB/8812E29864F47615E0530100007FA424/datafile/o1_mf_hemant_d_jtkoojqn_.dbf',
'/opt/oracle/oradata/ORCLCDB/8812E29864F47615E0530100007FA424/datafile/o1_mf_tpcctab_jyl0mzsp_.dbf'
CHARACTER SET AL32UTF8
;

-- Configure RMAN configuration record 1
VARIABLE RECNO NUMBER;
EXECUTE :RECNO := SYS.DBMS_BACKUP_RESTORE.SETCONFIG('ARCHIVELOG DELETION POLICY','TO BACKED UP 1 TIMES TO DISK');
-- Configure RMAN configuration record 2
VARIABLE RECNO NUMBER;
EXECUTE :RECNO := SYS.DBMS_BACKUP_RESTORE.SETCONFIG('DEVICE TYPE','DISK PARALLELISM 2 BACKUP TYPE TO BACKUPSET');
-- Configure RMAN configuration record 3
VARIABLE RECNO NUMBER;
EXECUTE :RECNO := SYS.DBMS_BACKUP_RESTORE.SETCONFIG('RETENTION POLICY','TO REDUNDANCY 2');
-- Commands to re-create incarnation table
-- Below log names MUST be changed to existing filenames on
-- disk. Any one log file from each branch can be used to
-- re-create incarnation records.
-- ALTER DATABASE REGISTER LOGFILE '/opt/oracle/FRA/ORCLCDB/ORCLCDB/archivelog/2022_01_20/o1_mf_1_1_%u_.arc';
-- ALTER DATABASE REGISTER LOGFILE '/opt/oracle/FRA/ORCLCDB/ORCLCDB/archivelog/2022_01_20/o1_mf_1_1_%u_.arc';
-- ALTER DATABASE REGISTER LOGFILE '/opt/oracle/FRA/ORCLCDB/ORCLCDB/archivelog/2022_01_20/o1_mf_1_1_%u_.arc';
-- ALTER DATABASE REGISTER LOGFILE '/opt/oracle/FRA/ORCLCDB/ORCLCDB/archivelog/2022_01_20/o1_mf_1_1_%u_.arc';
-- Recovery is required if any of the datafiles are restored backups,
-- or if the last shutdown was not normal or immediate.
RECOVER DATABASE

-- All logs need archiving and a log switch is needed.
ALTER SYSTEM ARCHIVE LOG ALL;

-- Database can now be opened normally.
ALTER DATABASE OPEN;

-- Open all the PDBs.
ALTER PLUGGABLE DATABASE ALL OPEN;

-- Commands to add tempfiles to temporary tablespaces.
-- Online tempfiles have complete space information.
-- Other tempfiles may require adjustment.
ALTER TABLESPACE TEMP ADD TEMPFILE '/opt/oracle/oradata/ORCLCDB/temp01.dbf'
SIZE 138412032 REUSE AUTOEXTEND ON NEXT 655360 MAXSIZE 32767M;
ALTER SESSION SET CONTAINER = "PDB$SEED";
ALTER TABLESPACE TEMP ADD TEMPFILE '/opt/oracle/oradata/ORCLCDB/pdbseed/temp012019-05-04_23-32-15-038-PM.dbf'
SIZE 76546048 REUSE AUTOEXTEND ON NEXT 655360 MAXSIZE 32767M;
ALTER SESSION SET CONTAINER = "ORCLPDB1";
ALTER TABLESPACE TEMP ADD TEMPFILE '/opt/oracle/oradata/ORCLCDB/ORCLPDB1/temp01.dbf'
SIZE 135266304 REUSE AUTOEXTEND ON NEXT 655360 MAXSIZE 32767M;
ALTER TABLESPACE TEMP ADD TEMPFILE '/opt/oracle/oradata/ORCLCDB/ORCLPDB1/temp02.dbf'
SIZE 52428800 REUSE AUTOEXTEND OFF;
ALTER SESSION SET CONTAINER = "RMANCAT";
ALTER TABLESPACE TEMP ADD TEMPFILE '/opt/oracle/oradata/ORCLCDB/C07D1F2BA98C23D0E0530100007F7D34/datafile/o1_mf_temp_j80dl97t_.dbf'
SIZE 77594624 REUSE AUTOEXTEND ON NEXT 655360 MAXSIZE 32767M;
ALTER SESSION SET CONTAINER = "CDB$ROOT";
-- End of tempfile additions.
--
--
--
----------------------------------------------------------
-- The following script can be used on the standby database
-- to re-populate entries for a standby controlfile created
-- on the primary and copied to the standby site.
----------------------------------------------------------
ALTER DATABASE ADD STANDBY LOGFILE THREAD 1 '/opt/oracle/oradata/ORCLCDB/stdbredo01.log'
SIZE 200M BLOCKSIZE 512 REUSE;
-- Set #2. RESETLOGS case
--
-- The following commands will create a new control file and use it
-- to open the database.
-- Data used by Recovery Manager will be lost.
-- The contents of online logs will be lost and all backups will
-- be invalidated. Use this only if online logs are damaged.

-- After mounting the created controlfile, the following SQL
-- statement will place the database in the appropriate
-- protection mode:
-- ALTER DATABASE SET STANDBY DATABASE TO MAXIMIZE PERFORMANCE

STARTUP NOMOUNT
CREATE CONTROLFILE REUSE DATABASE "ORCLCDB" RESETLOGS FORCE LOGGING ARCHIVELOG
MAXLOGFILES 16
MAXLOGMEMBERS 3
MAXDATAFILES 1024
MAXINSTANCES 8
MAXLOGHISTORY 292
LOGFILE
GROUP 1 '/opt/oracle/oradata/ORCLCDB/redo01.log' SIZE 200M BLOCKSIZE 512,
GROUP 2 '/opt/oracle/oradata/ORCLCDB/redo02.log' SIZE 200M BLOCKSIZE 512,
GROUP 3 '/opt/oracle/oradata/ORCLCDB/redo03.log' SIZE 200M BLOCKSIZE 512
-- STANDBY LOGFILE
-- GROUP 4 '/opt/oracle/oradata/ORCLCDB/stdbredo01.log' SIZE 200M BLOCKSIZE 512
DATAFILE
'/opt/oracle/oradata/ORCLCDB/system01.dbf',
'/opt/oracle/oradata/ORCLCDB/sysaux01.dbf',
'/opt/oracle/oradata/ORCLCDB/undotbs01.dbf',
'/opt/oracle/oradata/ORCLCDB/pdbseed/system01.dbf',
'/opt/oracle/oradata/ORCLCDB/pdbseed/sysaux01.dbf',
'/opt/oracle/oradata/ORCLCDB/users01.dbf',
'/opt/oracle/oradata/ORCLCDB/pdbseed/undotbs01.dbf',
'/opt/oracle/oradata/ORCLCDB/ORCLPDB1/system01.dbf',
'/opt/oracle/oradata/ORCLCDB/ORCLPDB1/sysaux01.dbf',
'/opt/oracle/oradata/ORCLCDB/ORCLPDB1/undotbs01.dbf',
'/opt/oracle/oradata/ORCLCDB/ORCLPDB1/users01.dbf',
'/opt/oracle/oradata/ORCLCDB/8812E29864F47615E0530100007FA424/datafile/o1_mf_data_min_j2p8z0qn_.dbf',
'/opt/oracle/oradata/ORCLCDB/C07D1F2BA98C23D0E0530100007F7D34/datafile/o1_mf_system_j80dl8qy_.dbf',
'/opt/oracle/oradata/ORCLCDB/C07D1F2BA98C23D0E0530100007F7D34/datafile/o1_mf_sysaux_j80dl94j_.dbf',
'/opt/oracle/oradata/ORCLCDB/C07D1F2BA98C23D0E0530100007F7D34/datafile/o1_mf_undotbs1_j80dl96d_.dbf',
'/opt/oracle/oradata/ORCLCDB/C07D1F2BA98C23D0E0530100007F7D34/datafile/o1_mf_rman_tbs_j80f07n8_.dbf',
'/opt/oracle/oradata/ORCLCDB/8812E29864F47615E0530100007FA424/datafile/o1_mf_hemant_d_jtkoo5nr_.dbf',
'/opt/oracle/oradata/ORCLCDB/8812E29864F47615E0530100007FA424/datafile/o1_mf_hemant_d_jtkoojqn_.dbf',
'/opt/oracle/oradata/ORCLCDB/8812E29864F47615E0530100007FA424/datafile/o1_mf_tpcctab_jyl0mzsp_.dbf'
CHARACTER SET AL32UTF8
;

-- Configure RMAN configuration record 1
VARIABLE RECNO NUMBER;
EXECUTE :RECNO := SYS.DBMS_BACKUP_RESTORE.SETCONFIG('ARCHIVELOG DELETION POLICY','TO BACKED UP 1 TIMES TO DISK');
-- Configure RMAN configuration record 2
VARIABLE RECNO NUMBER;
EXECUTE :RECNO := SYS.DBMS_BACKUP_RESTORE.SETCONFIG('DEVICE TYPE','DISK PARALLELISM 2 BACKUP TYPE TO BACKUPSET');
-- Configure RMAN configuration record 3
VARIABLE RECNO NUMBER;
EXECUTE :RECNO := SYS.DBMS_BACKUP_RESTORE.SETCONFIG('RETENTION POLICY','TO REDUNDANCY 2');
-- Commands to re-create incarnation table
-- Below log names MUST be changed to existing filenames on
-- disk. Any one log file from each branch can be used to
-- re-create incarnation records.
-- ALTER DATABASE REGISTER LOGFILE '/opt/oracle/FRA/ORCLCDB/ORCLCDB/archivelog/2022_01_20/o1_mf_1_1_%u_.arc';
-- ALTER DATABASE REGISTER LOGFILE '/opt/oracle/FRA/ORCLCDB/ORCLCDB/archivelog/2022_01_20/o1_mf_1_1_%u_.arc';
-- ALTER DATABASE REGISTER LOGFILE '/opt/oracle/FRA/ORCLCDB/ORCLCDB/archivelog/2022_01_20/o1_mf_1_1_%u_.arc';
-- ALTER DATABASE REGISTER LOGFILE '/opt/oracle/FRA/ORCLCDB/ORCLCDB/archivelog/2022_01_20/o1_mf_1_1_%u_.arc';
-- Recovery is required if any of the datafiles are restored backups,
-- or if the last shutdown was not normal or immediate.
RECOVER DATABASE USING BACKUP CONTROLFILE

-- Database can now be opened zeroing the online logs.
ALTER DATABASE OPEN RESETLOGS;

-- Open all the PDBs.
ALTER PLUGGABLE DATABASE ALL OPEN;

-- Commands to add tempfiles to temporary tablespaces.
-- Online tempfiles have complete space information.
-- Other tempfiles may require adjustment.
ALTER TABLESPACE TEMP ADD TEMPFILE '/opt/oracle/oradata/ORCLCDB/temp01.dbf'
SIZE 138412032 REUSE AUTOEXTEND ON NEXT 655360 MAXSIZE 32767M;
ALTER SESSION SET CONTAINER = "PDB$SEED";
ALTER TABLESPACE TEMP ADD TEMPFILE '/opt/oracle/oradata/ORCLCDB/pdbseed/temp012019-05-04_23-32-15-038-PM.dbf'
SIZE 76546048 REUSE AUTOEXTEND ON NEXT 655360 MAXSIZE 32767M;
ALTER SESSION SET CONTAINER = "ORCLPDB1";
ALTER TABLESPACE TEMP ADD TEMPFILE '/opt/oracle/oradata/ORCLCDB/ORCLPDB1/temp01.dbf'
SIZE 135266304 REUSE AUTOEXTEND ON NEXT 655360 MAXSIZE 32767M;
ALTER TABLESPACE TEMP ADD TEMPFILE '/opt/oracle/oradata/ORCLCDB/ORCLPDB1/temp02.dbf'
SIZE 52428800 REUSE AUTOEXTEND OFF;
ALTER SESSION SET CONTAINER = "RMANCAT";
ALTER TABLESPACE TEMP ADD TEMPFILE '/opt/oracle/oradata/ORCLCDB/C07D1F2BA98C23D0E0530100007F7D34/datafile/o1_mf_temp_j80dl97t_.dbf'
SIZE 77594624 REUSE AUTOEXTEND ON NEXT 655360 MAXSIZE 32767M;
ALTER SESSION SET CONTAINER = "CDB$ROOT";
-- End of tempfile additions.
--
--
--
----------------------------------------------------------
-- The following script can be used on the standby database
-- to re-populate entries for a standby controlfile created
-- on the primary and copied to the standby site.
----------------------------------------------------------
ALTER DATABASE ADD STANDBY LOGFILE THREAD 1 '/opt/oracle/oradata/ORCLCDB/stdbredo01.log'
SIZE 200M BLOCKSIZE 512 REUSE;

SQL>


The trace that is generated with the "alter database backup controlfile to trace ..." command actually contains the SQL statements that can be used to recreate the controlfile.  A number of useful comments are also included.
Essentially, there are two sets of SQL commands.  

The first set, titled "Set #1" is for the NORESETLOGS scenario where all the Online Redo Log files are available.
The second set, titled "Set #2" is for a RESETLOGS scenario where Incomplete Recovery is required because not all the Online Redo Log files are available.

In my case, I have all my Online Redo Log files so I do not lose any data and I can use the commands from Set #1.
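As an aside, the command also accepts RESETLOGS / NORESETLOGS qualifiers if you want only one of the two sets written out, optionally to a named file.  A quick sketch -- the file names below are just examples :

SQL> alter database backup controlfile to trace as '/home/oracle/cf_noresetlogs.sql' reuse noresetlogs;
SQL> alter database backup controlfile to trace as '/home/oracle/cf_resetlogs.sql' reuse resetlogs;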

Recreating the Controlfile :


SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL>

SQL> @Recreate_Controlfile.SQL
SQL> spool Recreate_Controlfile
SQL>
SQL> STARTUP NOMOUNT
ORACLE instance started.

Total System Global Area 1207955544 bytes
Fixed Size 9134168 bytes
Variable Size 671088640 bytes
Database Buffers 520093696 bytes
Redo Buffers 7639040 bytes
SQL> CREATE CONTROLFILE REUSE DATABASE "ORCLCDB" NORESETLOGS FORCE LOGGING ARCHIVELOG
2 MAXLOGFILES 16
3 MAXLOGMEMBERS 3
4 MAXDATAFILES 1024
5 MAXINSTANCES 8
6 MAXLOGHISTORY 292
7 LOGFILE
8 GROUP 1 '/opt/oracle/oradata/ORCLCDB/redo01.log' SIZE 200M BLOCKSIZE 512,
9 GROUP 2 '/opt/oracle/oradata/ORCLCDB/redo02.log' SIZE 200M BLOCKSIZE 512,
10 GROUP 3 '/opt/oracle/oradata/ORCLCDB/redo03.log' SIZE 200M BLOCKSIZE 512
11 -- STANDBY LOGFILE
12 -- GROUP 4 '/opt/oracle/oradata/ORCLCDB/stdbredo01.log' SIZE 200M BLOCKSIZE 512
13 DATAFILE
14 '/opt/oracle/oradata/ORCLCDB/system01.dbf',
15 '/opt/oracle/oradata/ORCLCDB/sysaux01.dbf',
16 '/opt/oracle/oradata/ORCLCDB/undotbs01.dbf',
17 '/opt/oracle/oradata/ORCLCDB/pdbseed/system01.dbf',
18 '/opt/oracle/oradata/ORCLCDB/pdbseed/sysaux01.dbf',
19 '/opt/oracle/oradata/ORCLCDB/users01.dbf',
20 '/opt/oracle/oradata/ORCLCDB/pdbseed/undotbs01.dbf',
21 '/opt/oracle/oradata/ORCLCDB/ORCLPDB1/system01.dbf',
22 '/opt/oracle/oradata/ORCLCDB/ORCLPDB1/sysaux01.dbf',
23 '/opt/oracle/oradata/ORCLCDB/ORCLPDB1/undotbs01.dbf',
24 '/opt/oracle/oradata/ORCLCDB/ORCLPDB1/users01.dbf',
25 '/opt/oracle/oradata/ORCLCDB/8812E29864F47615E0530100007FA424/datafile/o1_mf_data_min_j2p8z0qn_.dbf',
26 '/opt/oracle/oradata/ORCLCDB/C07D1F2BA98C23D0E0530100007F7D34/datafile/o1_mf_system_j80dl8qy_.dbf',
27 '/opt/oracle/oradata/ORCLCDB/C07D1F2BA98C23D0E0530100007F7D34/datafile/o1_mf_sysaux_j80dl94j_.dbf',
28 '/opt/oracle/oradata/ORCLCDB/C07D1F2BA98C23D0E0530100007F7D34/datafile/o1_mf_undotbs1_j80dl96d_.dbf',
29 '/opt/oracle/oradata/ORCLCDB/C07D1F2BA98C23D0E0530100007F7D34/datafile/o1_mf_rman_tbs_j80f07n8_.dbf',
30 '/opt/oracle/oradata/ORCLCDB/8812E29864F47615E0530100007FA424/datafile/o1_mf_hemant_d_jtkoo5nr_.dbf',
31 '/opt/oracle/oradata/ORCLCDB/8812E29864F47615E0530100007FA424/datafile/o1_mf_hemant_d_jtkoojqn_.dbf',
32 '/opt/oracle/oradata/ORCLCDB/8812E29864F47615E0530100007FA424/datafile/o1_mf_tpcctab_jyl0mzsp_.dbf'
33 CHARACTER SET AL32UTF8
34 ;

Control file created.

SQL>
SQL> -- Configure RMAN configuration record 1
SQL> VARIABLE RECNO NUMBER;
SQL> EXECUTE :RECNO := SYS.DBMS_BACKUP_RESTORE.SETCONFIG('ARCHIVELOG DELETION POLICY','TO BACKED UP 1 TIMES TO DISK');

PL/SQL procedure successfully completed.

SQL> -- Configure RMAN configuration record 2
SQL> VARIABLE RECNO NUMBER;
SQL> EXECUTE :RECNO := SYS.DBMS_BACKUP_RESTORE.SETCONFIG('DEVICE TYPE','DISK PARALLELISM 2 BACKUP TYPE TO BACKUPSET');

PL/SQL procedure successfully completed.

SQL> -- Configure RMAN configuration record 3
SQL> VARIABLE RECNO NUMBER;
SQL> EXECUTE :RECNO := SYS.DBMS_BACKUP_RESTORE.SETCONFIG('RETENTION POLICY','TO REDUNDANCY 2');

PL/SQL procedure successfully completed.

SQL> -- Recovery is required if any of the datafiles are restored backups,
SQL> -- or if the last shutdown was not normal or immediate.
SQL> -- I will be doing a SHUTDOWN IMMEDIATE so I do not need a RECOVER DATABASE
SQL> -- RECOVER DATABASE
SQL>
SQL> -- All logs need archiving and a log switch is needed.
SQL> ALTER SYSTEM ARCHIVE LOG ALL;

System altered.

SQL>
SQL> -- Database can now be opened normally.
SQL> ALTER DATABASE OPEN;

Database altered.

SQL>
SQL> -- Open all the PDBs.
SQL> ALTER PLUGGABLE DATABASE ALL OPEN;

Pluggable database altered.

SQL>
SQL> -- Commands to add tempfiles to temporary tablespaces.
SQL> -- Online tempfiles have complete space information.
SQL> -- Other tempfiles may require adjustment.
SQL> ALTER TABLESPACE TEMP ADD TEMPFILE '/opt/oracle/oradata/ORCLCDB/temp01.dbf'
2 SIZE 138412032 REUSE AUTOEXTEND ON NEXT 655360 MAXSIZE 32767M;

Tablespace altered.

SQL> ALTER SESSION SET CONTAINER = "PDB$SEED";

Session altered.

SQL> ALTER TABLESPACE TEMP ADD TEMPFILE '/opt/oracle/oradata/ORCLCDB/pdbseed/temp012019-05-04_23-32-15-038-PM.dbf'
2 SIZE 76546048 REUSE AUTOEXTEND ON NEXT 655360 MAXSIZE 32767M;

Tablespace altered.

SQL> ALTER SESSION SET CONTAINER = "ORCLPDB1";

Session altered.

SQL> ALTER TABLESPACE TEMP ADD TEMPFILE '/opt/oracle/oradata/ORCLCDB/ORCLPDB1/temp01.dbf'
2 SIZE 135266304 REUSE AUTOEXTEND ON NEXT 655360 MAXSIZE 32767M;

Tablespace altered.

SQL> ALTER TABLESPACE TEMP ADD TEMPFILE '/opt/oracle/oradata/ORCLCDB/ORCLPDB1/temp02.dbf'
2 SIZE 52428800 REUSE AUTOEXTEND OFF;

Tablespace altered.

SQL> ALTER SESSION SET CONTAINER = "RMANCAT";

Session altered.

SQL> ALTER TABLESPACE TEMP ADD TEMPFILE '/opt/oracle/oradata/ORCLCDB/C07D1F2BA98C23D0E0530100007F7D34/datafile/o1_mf_temp_j80dl97t_.dbf'
2 SIZE 77594624 REUSE AUTOEXTEND ON NEXT 655360 MAXSIZE 32767M;

Tablespace altered.

SQL> ALTER SESSION SET CONTAINER = "CDB$ROOT";

Session altered.

SQL> -- End of tempfile additions.
SQL>
SQL> spool off
SQL>
SQL> archive log list;
Database log mode Archive Mode
Automatic archival Enabled
Archive destination USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence 71
Next log sequence to archive 73
Current log sequence 73
SQL>


The ARCHIVE LOG LIST output shows that I did NOT have to do a RESETLOGS. The CREATE CONTROLFILE and ALTER DATABASE OPEN commands succeeded.
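If I want additional confirmation that the database is still in its original incarnation (i.e. that no RESETLOGS was performed), a simple sanity check -- nothing specific to this demo -- is :

SQL> select incarnation#, resetlogs_change#, resetlogs_time, status
  2  from v$database_incarnation
  3  order by incarnation#;

The CURRENT incarnation should still show the original RESETLOGS_CHANGE# and RESETLOGS_TIME.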

What do I now see in the Logical Entries in the Controlfile ?


SQL> select *
2 from v$controlfile_record_section
3 order by type
4 /

TYPE RECORD_SIZE RECORDS_TOTAL RECORDS_USED FIRST_INDEX LAST_INDEX LAST_RECID CON_ID
---------------------------- ----------- ------------- ------------ ----------- ---------- ---------- ----------
ACM OPERATION 104 64 11 0 0 0 0
ARCHIVED LOG 584 308 3 1 3 3 0
AUXILIARY DATAFILE COPY 584 128 0 0 0 0 0
BACKUP CORRUPTION 44 1115 0 0 0 0 0
BACKUP DATAFILE 200 1063 0 0 0 0 0
BACKUP PIECE 780 1006 0 0 0 0 0
BACKUP REDOLOG 76 215 0 0 0 0 0
BACKUP SET 96 1022 0 0 0 0 0
BACKUP SPFILE 124 131 0 0 0 0 0
CKPT PROGRESS 8180 11 0 0 0 0 0
COPY CORRUPTION 40 1227 0 0 0 0 0
DATABASE 316 1 1 0 0 0 0
DATABASE BLOCK CORRUPTION 80 8384 0 0 0 0 0
DATABASE INCARNATION 56 292 2 1 2 2 0
DATAFILE 520 1024 34 0 0 39 0
DATAFILE COPY 736 1000 0 0 0 0 0
DATAFILE HISTORY 568 57 0 0 0 0 0
DELETED OBJECT 20 818 0 0 0 0 0
FILENAME 524 4146 27 0 0 0 0
FLASHBACK LOG 84 2048 0 0 0 0 0
FOREIGN ARCHIVED LOG 604 1002 0 0 0 0 0
GUARANTEED RESTORE POINT 256 2048 0 0 0 0 0
INSTANCE SPACE RESERVATION 28 1055 1 0 0 0 0
LOG HISTORY 56 292 1 1 1 1 0
MTTR 100 8 1 0 0 0 0
MULTI INSTANCE REDO APPLY 556 1 0 0 0 0 0
OFFLINE RANGE 200 1063 12 1 12 12 0
PDB RECORD 780 10 4 0 0 6 0
PDBINC RECORD 144 113 0 0 0 0 0
PROXY COPY 928 1004 0 0 0 0 0
RECOVERY DESTINATION 180 1 1 0 0 0 0
REDO LOG 72 16 3 0 0 0 0
REDO THREAD 256 8 1 0 0 0 0
REMOVABLE RECOVERY FILES 32 1000 0 0 0 0 0
RESTORE POINT 256 2108 0 0 0 0 0
RMAN CONFIGURATION 1108 50 3 0 0 3 0
RMAN STATUS 116 141 0 0 0 0 0
STANDBY DATABASE MATRIX 400 128 128 0 0 0 0
TABLESPACE 180 1024 22 0 0 7 0
TABLESPACE KEY HISTORY 108 151 0 0 0 0 0
TEMPORARY FILENAME 56 1024 5 0 0 8 0
THREAD INSTANCE NAME MAPPING 80 8 8 0 0 0 0

42 rows selected.

SQL>


Recreating the Controlfile resulted in loss of information on all ArchivedLogs and RMAN Backups !
Compare the RECORDS_USED and INDEXes for "ARCHIVED LOG" and "BACKUP %" entries against the listing obtained before recreating the Controlfile.  (A few ArchivedLogs were generated by the "ALTER SYSTEM ARCHIVE LOG ALL" command in the script itself).

Can I add that information back ?


RMAN> catalog start with '/opt/oracle/FRA/ORCLCDB';

using target database control file instead of recovery catalog
searching for all files that match the pattern /opt/oracle/FRA/ORCLCDB

List of Files Unknown to the Database
=====================================
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/autobackup/2021_09_26/o1_mf_s_1084311179_jo0xnh8o_.bkp
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/autobackup/2022_01_16/o1_mf_s_1094143716_jy7pz5dd_.bkp
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/autobackup/2022_01_16/o1_mf_s_1094143804_jy7q1xp1_.bkp
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/autobackup/2022_01_16/o1_mf_s_1094145522_jy7rqmov_.bkp
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/autobackup/2022_01_20/o1_mf_s_1094469527_jyko4r0n_.bkp
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/autobackup/2022_01_20/o1_mf_s_1094469675_jyko9dj3_.bkp
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/autobackup/2022_01_20/o1_mf_s_1094469696_jykob1pp_.bkp
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/autobackup/2022_01_20/o1_mf_s_1094477941_jykxcrnx_.bkp
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/autobackup/2022_01_20/o1_mf_s_1094481585_jyl0xoq0_.bkp
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/backupset/2022_01_20/o1_mf_nnndf_TAG20220120T111624_jyko09yl_.bkp
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/backupset/2022_01_20/o1_mf_nnndf_TAG20220120T111624_jyko0b3k_.bkp
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/backupset/2022_01_20/o1_mf_annnn_TAG20220120T111944_jyko6m5w_.bkp
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/backupset/2022_01_20/o1_mf_annnn_TAG20220120T111944_jyko7djl_.bkp
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/backupset/2022_01_20/o1_mf_annnn_TAG20220120T111944_jyko98yc_.bkp
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/backupset/2022_01_20/o1_mf_ncnnf_TAG20220120T112134_jyko9zkl_.bkp
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/backupset/2022_01_20/o1_mf_annnn_TAG20220120T133854_jykxch2m_.bkp
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/backupset/2022_01_20/o1_mf_annnn_TAG20220120T133854_jykxchwb_.bkp
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/8812E29864F47615E0530100007FA424/backupset/2022_01_16/o1_mf_nnndf_TAG20220116T164908_jy7q04z2_.bkp
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/8812E29864F47615E0530100007FA424/backupset/2022_01_16/o1_mf_nnndf_TAG20220116T164908_jy7q08kj_.bkp
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/8812E29864F47615E0530100007FA424/backupset/2022_01_20/o1_mf_nnndf_TAG20220120T111624_jyko1qll_.bkp
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/8812E29864F47615E0530100007FA424/backupset/2022_01_20/o1_mf_nnndf_TAG20220120T111624_jyko20ty_.bkp
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/88129263B99F4BBDE0530100007F7BDF/backupset/2022_01_20/o1_mf_nnndf_TAG20220120T111624_jyko3xgo_.bkp
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/88129263B99F4BBDE0530100007F7BDF/backupset/2022_01_20/o1_mf_nnndf_TAG20220120T111624_jyko3y0y_.bkp
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/C07D1F2BA98C23D0E0530100007F7D34/backupset/2022_01_20/o1_mf_nnndf_TAG20220120T111624_jyko33fq_.bkp
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/C07D1F2BA98C23D0E0530100007F7D34/backupset/2022_01_20/o1_mf_nnndf_TAG20220120T111624_jyko33wn_.bkp
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/archivelog/2022_01_20/o1_mf_1_60_jyl0zmoz_.arc
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/archivelog/2022_01_20/o1_mf_1_61_jyl10qd1_.arc
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/archivelog/2022_01_20/o1_mf_1_62_jyl11rbq_.arc
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/archivelog/2022_01_20/o1_mf_1_63_jyl12p73_.arc
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/archivelog/2022_01_20/o1_mf_1_64_jyl13l2w_.arc
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/archivelog/2022_01_20/o1_mf_1_65_jyl14s7l_.arc
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/archivelog/2022_01_20/o1_mf_1_66_jyl1qrjm_.arc
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/archivelog/2022_01_20/o1_mf_1_67_jyl81z9z_.arc
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/archivelog/2022_01_20/o1_mf_1_68_jyl861ll_.arc
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/archivelog/2022_01_20/o1_mf_1_69_jyl8l6mo_.arc
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/archivelog/2022_01_20/o1_mf_1_70_jyl8p3d5_.arc
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/archivelog/2022_01_20/o1_mf_1_71_jyl8szdf_.arc

Do you really want to catalog the above files (enter YES or NO)? YES
cataloging files...
cataloging done

List of Cataloged Files
=======================
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/autobackup/2021_09_26/o1_mf_s_1084311179_jo0xnh8o_.bkp
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/autobackup/2022_01_16/o1_mf_s_1094143716_jy7pz5dd_.bkp
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/autobackup/2022_01_16/o1_mf_s_1094143804_jy7q1xp1_.bkp
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/autobackup/2022_01_16/o1_mf_s_1094145522_jy7rqmov_.bkp
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/autobackup/2022_01_20/o1_mf_s_1094469527_jyko4r0n_.bkp
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/autobackup/2022_01_20/o1_mf_s_1094469675_jyko9dj3_.bkp
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/autobackup/2022_01_20/o1_mf_s_1094469696_jykob1pp_.bkp
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/autobackup/2022_01_20/o1_mf_s_1094477941_jykxcrnx_.bkp
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/autobackup/2022_01_20/o1_mf_s_1094481585_jyl0xoq0_.bkp
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/backupset/2022_01_20/o1_mf_nnndf_TAG20220120T111624_jyko09yl_.bkp
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/backupset/2022_01_20/o1_mf_nnndf_TAG20220120T111624_jyko0b3k_.bkp
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/backupset/2022_01_20/o1_mf_annnn_TAG20220120T111944_jyko6m5w_.bkp
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/backupset/2022_01_20/o1_mf_annnn_TAG20220120T111944_jyko7djl_.bkp
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/backupset/2022_01_20/o1_mf_annnn_TAG20220120T111944_jyko98yc_.bkp
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/backupset/2022_01_20/o1_mf_ncnnf_TAG20220120T112134_jyko9zkl_.bkp
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/backupset/2022_01_20/o1_mf_annnn_TAG20220120T133854_jykxch2m_.bkp
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/backupset/2022_01_20/o1_mf_annnn_TAG20220120T133854_jykxchwb_.bkp
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/8812E29864F47615E0530100007FA424/backupset/2022_01_16/o1_mf_nnndf_TAG20220116T164908_jy7q04z2_.bkp
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/8812E29864F47615E0530100007FA424/backupset/2022_01_16/o1_mf_nnndf_TAG20220116T164908_jy7q08kj_.bkp
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/8812E29864F47615E0530100007FA424/backupset/2022_01_20/o1_mf_nnndf_TAG20220120T111624_jyko1qll_.bkp
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/8812E29864F47615E0530100007FA424/backupset/2022_01_20/o1_mf_nnndf_TAG20220120T111624_jyko20ty_.bkp
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/88129263B99F4BBDE0530100007F7BDF/backupset/2022_01_20/o1_mf_nnndf_TAG20220120T111624_jyko3xgo_.bkp
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/88129263B99F4BBDE0530100007F7BDF/backupset/2022_01_20/o1_mf_nnndf_TAG20220120T111624_jyko3y0y_.bkp
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/C07D1F2BA98C23D0E0530100007F7D34/backupset/2022_01_20/o1_mf_nnndf_TAG20220120T111624_jyko33fq_.bkp
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/C07D1F2BA98C23D0E0530100007F7D34/backupset/2022_01_20/o1_mf_nnndf_TAG20220120T111624_jyko33wn_.bkp
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/archivelog/2022_01_20/o1_mf_1_60_jyl0zmoz_.arc
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/archivelog/2022_01_20/o1_mf_1_61_jyl10qd1_.arc
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/archivelog/2022_01_20/o1_mf_1_62_jyl11rbq_.arc
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/archivelog/2022_01_20/o1_mf_1_63_jyl12p73_.arc
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/archivelog/2022_01_20/o1_mf_1_64_jyl13l2w_.arc
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/archivelog/2022_01_20/o1_mf_1_65_jyl14s7l_.arc
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/archivelog/2022_01_20/o1_mf_1_66_jyl1qrjm_.arc
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/archivelog/2022_01_20/o1_mf_1_67_jyl81z9z_.arc
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/archivelog/2022_01_20/o1_mf_1_68_jyl861ll_.arc
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/archivelog/2022_01_20/o1_mf_1_69_jyl8l6mo_.arc
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/archivelog/2022_01_20/o1_mf_1_70_jyl8p3d5_.arc
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/archivelog/2022_01_20/o1_mf_1_71_jyl8szdf_.arc

RMAN>
RMAN> list backup summary;


List of Backups
===============
Key TY LV S Device Type Completion Time #Pieces #Copies Compressed Tag
------- -- -- - ----------- --------------- ------- ------- ---------- ---
1 B F A DISK 26-SEP-21 1 1 NO TAG20210926T213259
2 B F A DISK 16-JAN-22 1 1 NO TAG20220116T164836
3 B F A DISK 16-JAN-22 1 1 NO TAG20220116T165004
4 B F A DISK 16-JAN-22 1 1 NO TAG20220116T171842
5 B F A DISK 20-JAN-22 1 1 NO TAG20220120T111847
6 B F A DISK 20-JAN-22 1 1 NO TAG20220120T112115
7 B F A DISK 20-JAN-22 1 1 NO TAG20220120T112136
8 B F A DISK 20-JAN-22 1 1 NO TAG20220120T133901
9 B F A DISK 20-JAN-22 1 1 NO TAG20220120T143945
10 B F A DISK 20-JAN-22 1 1 YES TAG20220120T111624
11 B F A DISK 20-JAN-22 1 1 YES TAG20220120T111624
12 B A A DISK 20-JAN-22 1 1 YES TAG20220120T111944
13 B A A DISK 20-JAN-22 1 1 YES TAG20220120T111944
14 B A A DISK 20-JAN-22 1 1 YES TAG20220120T111944
15 B F A DISK 20-JAN-22 1 1 NO TAG20220120T112134
16 B A A DISK 20-JAN-22 1 1 YES TAG20220120T133854
17 B A A DISK 20-JAN-22 1 1 YES TAG20220120T133854
18 B F A DISK 16-JAN-22 1 1 NO TAG20220116T164908
19 B F A DISK 16-JAN-22 1 1 NO TAG20220116T164908
20 B F A DISK 20-JAN-22 1 1 YES TAG20220120T111624
21 B F A DISK 20-JAN-22 1 1 YES TAG20220120T111624
22 B F A DISK 20-JAN-22 1 1 YES TAG20220120T111624
23 B F A DISK 20-JAN-22 1 1 YES TAG20220120T111624
24 B F A DISK 20-JAN-22 1 1 YES TAG20220120T111624
25 B F A DISK 20-JAN-22 1 1 YES TAG20220120T111624

RMAN> list archivelog all;

List of Archived Log Copies for database with db_unique_name ORCLCDB
=====================================================================

Key Thrd Seq S Low Time
------- ---- ------- - ---------
4 1 60 A 20-JAN-22
Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/archivelog/2022_01_20/o1_mf_1_60_jyl0zmoz_.arc

5 1 61 A 20-JAN-22
Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/archivelog/2022_01_20/o1_mf_1_61_jyl10qd1_.arc

6 1 62 A 20-JAN-22
Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/archivelog/2022_01_20/o1_mf_1_62_jyl11rbq_.arc

7 1 63 A 20-JAN-22
Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/archivelog/2022_01_20/o1_mf_1_63_jyl12p73_.arc

8 1 64 A 20-JAN-22
Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/archivelog/2022_01_20/o1_mf_1_64_jyl13l2w_.arc

9 1 65 A 20-JAN-22
Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/archivelog/2022_01_20/o1_mf_1_65_jyl14s7l_.arc

10 1 66 A 20-JAN-22
Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/archivelog/2022_01_20/o1_mf_1_66_jyl1qrjm_.arc

11 1 67 A 20-JAN-22
Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/archivelog/2022_01_20/o1_mf_1_67_jyl81z9z_.arc

12 1 68 A 20-JAN-22
Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/archivelog/2022_01_20/o1_mf_1_68_jyl861ll_.arc

13 1 69 A 20-JAN-22
Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/archivelog/2022_01_20/o1_mf_1_69_jyl8l6mo_.arc

14 1 70 A 20-JAN-22
Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/archivelog/2022_01_20/o1_mf_1_70_jyl8p3d5_.arc

1 1 70 A 20-JAN-22
Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/archivelog/2022_01_20/o1_mf_1_70_jylbnvgo_.arc

15 1 71 A 20-JAN-22
Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/archivelog/2022_01_20/o1_mf_1_71_jyl8szdf_.arc

2 1 71 A 20-JAN-22
Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/archivelog/2022_01_20/o1_mf_1_71_jylbo521_.arc

3 1 72 A 20-JAN-22
Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/archivelog/2022_01_20/o1_mf_1_72_jylbofq3_.arc


RMAN>


Yes ! RMAN can "add the information" back into the Controlfile using the CATALOG command (also see this earlier post on the CATALOG command) -- as long as you know where the Backups and ArchiveLogs ARE ! (Another good reason to use a single FRA location !)
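CATALOG is not limited to a single directory tree.  If some Backups or ArchiveLogs live elsewhere, they can be cataloged individually or from another directory -- the paths below are only placeholders, not from this environment :

RMAN> catalog archivelog '/some/other/path/1_496_1036108814.dbf';
RMAN> catalog backuppiece '/some/other/path/o1_mf_nnndf_TAG20220120T111624_placeholder_.bkp';
RMAN> catalog start with '/some/other/path' noprompt;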


SQL> select *
2 from v$controlfile_record_section
3 order by type
4 /

TYPE RECORD_SIZE RECORDS_TOTAL RECORDS_USED FIRST_INDEX LAST_INDEX LAST_RECID CON_ID
---------------------------- ----------- ------------- ------------ ----------- ---------- ---------- ----------
ACM OPERATION 104 64 11 0 0 0 0
ARCHIVED LOG 584 308 15 1 15 15 0
AUXILIARY DATAFILE COPY 584 128 0 0 0 0 0
BACKUP CORRUPTION 44 1115 0 0 0 0 0
BACKUP DATAFILE 200 1063 37 1 37 37 0
BACKUP PIECE 780 1006 26 1 26 26 0
BACKUP REDOLOG 76 215 112 1 112 112 0
BACKUP SET 96 1022 26 1 26 26 0
BACKUP SPFILE 124 131 10 1 10 10 0
CKPT PROGRESS 8180 11 0 0 0 0 0
COPY CORRUPTION 40 1227 0 0 0 0 0
DATABASE 316 1 1 0 0 0 0
DATABASE BLOCK CORRUPTION 80 8384 0 0 0 0 0
DATABASE INCARNATION 56 292 2 1 2 2 0
DATAFILE 520 1024 34 0 0 39 0
DATAFILE COPY 736 1000 0 0 0 0 0
DATAFILE HISTORY 568 57 0 0 0 0 0
DELETED OBJECT 20 818 0 0 0 0 0
FILENAME 524 4146 28 0 0 0 0
FLASHBACK LOG 84 2048 0 0 0 0 0
FOREIGN ARCHIVED LOG 604 1002 0 0 0 0 0
GUARANTEED RESTORE POINT 256 2048 0 0 0 0 0
INSTANCE SPACE RESERVATION 28 1055 1 0 0 0 0
LOG HISTORY 56 292 1 1 1 1 0
MTTR 100 8 1 0 0 0 0
MULTI INSTANCE REDO APPLY 556 1 0 0 0 0 0
OFFLINE RANGE 200 1063 12 1 12 12 0
PDB RECORD 780 10 4 0 0 6 0
PDBINC RECORD 144 113 0 0 0 0 0
PROXY COPY 928 1004 0 0 0 0 0
RECOVERY DESTINATION 180 1 1 0 0 0 0
REDO LOG 72 16 3 0 0 0 0
REDO THREAD 256 8 1 0 0 0 0
REMOVABLE RECOVERY FILES 32 1000 8 0 0 0 0
RESTORE POINT 256 2108 0 0 0 0 0
RMAN CONFIGURATION 1108 50 3 0 0 3 0
RMAN STATUS 116 141 3 1 3 3 0
STANDBY DATABASE MATRIX 400 128 128 0 0 0 0
TABLESPACE 180 1024 22 0 0 7 0
TABLESPACE KEY HISTORY 108 151 0 0 0 0 0
TEMPORARY FILENAME 56 1024 5 0 0 8 0
THREAD INSTANCE NAME MAPPING 80 8 8 0 0 0 0

42 rows selected.

SQL>
SQL> select dest_id, standby_dest, deleted, status, count(*)
2 from v$archived_log
3 group by dest_id, standby_dest, deleted, status
4 order by 1,2,3,4
5 /

DEST_ID STA DEL S COUNT(*)
---------- --- --- - ----------
1 NO NO A 15

SQL>


The RECORDS_USED for "ARCHIVED LOG", "BACKUP DATAFILE" and "BACKUP PIECE" are now populated again.
I would NOT advise that you periodically recreate your Controlfile.

But this is an available method to "reset" the controlfile. I could also take this opportunity to increase controlfile limits like MAXLOGFILES, MAXLOGMEMBERS and MAXDATAFILES (or even MAXINSTANCES !).


SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> quit

SQL> @Recreate_Controlfile.SQL
SQL> spool Recreate_Controlfile
SQL>
SQL> STARTUP NOMOUNT
ORACLE instance started.

Total System Global Area 1207955544 bytes
Fixed Size 9134168 bytes
Variable Size 671088640 bytes
Database Buffers 520093696 bytes
Redo Buffers 7639040 bytes
SQL> CREATE CONTROLFILE REUSE DATABASE "ORCLCDB" NORESETLOGS FORCE LOGGING ARCHIVELOG
2 MAXLOGFILES 24
3 MAXLOGMEMBERS 4
4 MAXDATAFILES 2048
5 MAXINSTANCES 8
6 MAXLOGHISTORY 512
7 LOGFILE
8 GROUP 1 '/opt/oracle/oradata/ORCLCDB/redo01.log' SIZE 200M BLOCKSIZE 512,
9 GROUP 2 '/opt/oracle/oradata/ORCLCDB/redo02.log' SIZE 200M BLOCKSIZE 512,
10 GROUP 3 '/opt/oracle/oradata/ORCLCDB/redo03.log' SIZE 200M BLOCKSIZE 512
....
....
....


RMAN> catalog start with '/opt/oracle/FRA/ORCLCDB';
....
....
....



SQL> select *
2 from v$controlfile_record_section
3 order by type
4 /

TYPE RECORD_SIZE RECORDS_TOTAL RECORDS_USED FIRST_INDEX LAST_INDEX LAST_RECID CON_ID
---------------------------- ----------- ------------- ------------ ----------- ---------- ---------- ----------
ACM OPERATION 104 64 11 0 0 0 0
ARCHIVED LOG 584 532 18 1 18 18 0
AUXILIARY DATAFILE COPY 584 128 0 0 0 0 0
BACKUP CORRUPTION 44 1115 0 0 0 0 0
BACKUP DATAFILE 200 1063 37 1 37 37 0
BACKUP PIECE 780 1006 26 1 26 26 0
BACKUP REDOLOG 76 215 112 1 112 112 0
BACKUP SET 96 1022 26 1 26 26 0
BACKUP SPFILE 124 131 10 1 10 10 0
CKPT PROGRESS 8180 11 0 0 0 0 0
COPY CORRUPTION 40 1227 0 0 0 0 0
DATABASE 316 1 1 0 0 0 0
DATABASE BLOCK CORRUPTION 80 8384 0 0 0 0 0
DATABASE INCARNATION 56 292 2 1 2 2 0
DATAFILE 520 2048 34 0 0 39 0
DATAFILE COPY 736 1000 0 0 0 0 0
DATAFILE HISTORY 568 57 0 0 0 0 0
DELETED OBJECT 20 818 0 0 0 0 0
FILENAME 524 6242 27 0 0 0 0
FLASHBACK LOG 84 2048 0 0 0 0 0
FOREIGN ARCHIVED LOG 604 1002 0 0 0 0 0
GUARANTEED RESTORE POINT 256 2048 0 0 0 0 0
INSTANCE SPACE RESERVATION 28 1055 1 0 0 0 0
LOG HISTORY 56 584 1 1 1 1 0
MTTR 100 8 1 0 0 0 0
MULTI INSTANCE REDO APPLY 556 1 0 0 0 0 0
OFFLINE RANGE 200 1063 12 1 12 12 0
PDB RECORD 780 10 4 0 0 6 0
PDBINC RECORD 144 113 0 0 0 0 0
PROXY COPY 928 1004 0 0 0 0 0
RECOVERY DESTINATION 180 1 1 0 0 0 0
REDO LOG 72 24 3 0 0 0 0
REDO THREAD 256 8 1 0 0 0 0
REMOVABLE RECOVERY FILES 32 1000 9 0 0 0 0
RESTORE POINT 256 2108 0 0 0 0 0
RMAN CONFIGURATION 1108 50 3 0 0 3 0
RMAN STATUS 116 141 1 1 1 1 0
STANDBY DATABASE MATRIX 400 128 128 0 0 0 0
TABLESPACE 180 2048 22 0 0 7 0
TABLESPACE KEY HISTORY 108 151 0 0 0 0 0
TEMPORARY FILENAME 56 2048 5 0 0 8 0
THREAD INSTANCE NAME MAPPING 80 8 8 0 0 0 0

42 rows selected.

SQL>


Note how I have increased the RECORDS_TOTAL for "DATAFILE" from 1024 to 2048, for "REDO LOG" from 16 to 24 and for "LOG HISTORY" from 292 to 584.


Refreshable Clone PDB -- 4 : Converting it to a Read Write PDB


As demonstrated in my first post on Refreshable Clone PDBs, the Clone PDB can be opened only in READ ONLY mode.  This is akin to a Basic Materialized View or a Read Only Standby Database, either of which is updated from the source table(s) / database.

Such a Refreshable Clone PDB is useful as a Reporting Database where you can run queries / extracts -- with data AS OF the last Refresh -- without putting load on the running Production (Transactional, Read Write Database).

But if you want to (finally) open the Clone PDB in Read Write mode (and sever any links with the source, thus preventing further Refreshes), you can do so by changing the Refresh Mode.
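For reference, such a Refreshable Clone would have been created with a statement of this form (a simplified sketch ; the database link name "cdb_link" is a placeholder and the actual creation was covered in the first post of this series) :

SQL> create pluggable database ro_pdb
  2  from orclpdb1@cdb_link           -- database link pointing to the source CDB
  3  refresh mode every 60 minutes;   -- or REFRESH MODE MANUAL for on-demand refresh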



SQL> alter pluggable database ro_pdb open read only;

Pluggable database altered.

SQL> connect hemant/newhemant@ro_pdb
Connected.
SQL> select count(*) from list_of_objects;

COUNT(*)
----------
73645

SQL> connect / as sysdba
Connected.

SQL> alter pluggable database ro_pdb close; -- close it so that I can do a REFRESH

Pluggable database altered.

SQL>
SQL> alter pluggable database ro_pdb refresh; -- execute the REFRESH

Pluggable database altered.

SQL> alter pluggable database ro_pdb open ; -- attempt to OPEN, defaulting to READ WRITE mode
alter pluggable database ro_pdb open
*
ERROR at line 1:
ORA-65341: cannot open pluggable database in read/write mode


SQL> alter pluggable database ro_pdb open read write;
alter pluggable database ro_pdb open read write
*
ERROR at line 1:
ORA-65341: cannot open pluggable database in read/write mode


SQL>
SQL> alter pluggable database ro_pdb open read only; -- open READ ONLY

Pluggable database altered.

SQL> connect hemant/newhemant@ro_pdb
Connected.
SQL> select count(*) from list_of_objects; -- verify that the database has been refreshed (new rows visible in the source table)

COUNT(*)
----------
83645

SQL>
SQL> connect / as sysdba
Connected.
SQL> alter pluggable database ro_pdb close; -- close it again

Pluggable database altered.

SQL>
SQL> alter pluggable database ro_pdb refresh mode none ; -- ***DISABLE FURTHER REFRESH's***

Pluggable database altered.

SQL>
SQL> alter pluggable database ro_pdb open read write; -- open in READ WRITE mode now !!

Pluggable database altered.

SQL>
SQL> connect hemant/newhemant@ro_pdb
Connected.
SQL> select count(*) from list_of_objects;

COUNT(*)
----------
83645

SQL> delete list_of_objects where owner = 'HEMANT'; -- proof that the database is now WRITABLE

32 rows deleted.

SQL> commit;

Commit complete.

SQL>
SQL> alter pluggable database ro_pdb close;

Pluggable database altered.

SQL> alter pluggable database ro_pdb refresh; -- check if it can be REFRESH'ed from the source
alter pluggable database ro_pdb refresh
*
ERROR at line 1:
ORA-65261: pluggable database RO_PDB not enabled for refresh


SQL>


Thus, to enable the PDB to be WRITABLE, REFRESH has to be disabled. 
 Once REFRESH is disabled, the PDB can no longer be refreshed from the source. It has diverged from the source.
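
In summary, the conversion sequence demonstrated above is :

SQL> alter pluggable database ro_pdb close;
SQL> alter pluggable database ro_pdb refresh;            -- optionally, one last refresh from the source
SQL> alter pluggable database ro_pdb refresh mode none;  -- disable further refreshes
SQL> alter pluggable database ro_pdb open read write;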

These are the alert log messages when REFRESH is disabled :


2022-01-23T15:35:55.766486+08:00
alter pluggable database ro_pdb refresh mode none
2022-01-23T15:35:55.846041+08:00
RO_PDB(6):Pluggable database RO_PDB pseudo opening
RO_PDB(6):SUPLOG: Initialize PDB SUPLOG SGA, old value 0x0, new value 0x18
RO_PDB(6):Autotune of undo retention is turned on.
RO_PDB(6):Endian type of dictionary set to little
RO_PDB(6):Undo initialization recovery: Parallel FPTR failed: start:1511350 end:1511367 diff:17 ms (0.0 seconds)
RO_PDB(6):Undo initialization recovery: err:0 start: 1511329 end: 1511464 diff: 135 ms (0.1 seconds)
RO_PDB(6):[6305] Successfully onlined Undo Tablespace 2.
RO_PDB(6):Undo initialization online undo segments: err:0 start: 1511465 end: 1511507 diff: 42 ms (0.0 seconds)
RO_PDB(6):Undo initialization finished serial:0 start:1511329 end:1511509 diff:180 ms (0.2 seconds)
RO_PDB(6):Database Characterset for RO_PDB is AL32UTF8
2022-01-23T15:35:57.144146+08:00
RO_PDB(6):Pluggable database RO_PDB pseudo closing
RO_PDB(6):JIT: pid 6305 requesting stop
RO_PDB(6):Closing sequence subsystem (1512195729).
RO_PDB(6):Buffer Cache flush started: 6
RO_PDB(6):Buffer Cache flush finished: 6
Completed: alter pluggable database ro_pdb refresh mode none


And these are the messages when it is OPENed in READ WRITE mode :



2022-01-23T15:36:56.085938+08:00
alter pluggable database ro_pdb open read write
2022-01-23T15:36:56.088166+08:00
RO_PDB(6):Pluggable database RO_PDB opening in read write
RO_PDB(6):SUPLOG: Initialize PDB SUPLOG SGA, old value 0x0, new value 0x18
RO_PDB(6):Autotune of undo retention is turned on.
RO_PDB(6):Endian type of dictionary set to little
RO_PDB(6):Undo initialization recovery: Parallel FPTR complete: start:1571711 end:1571732 diff:21 ms (0.0 seconds)
RO_PDB(6):Undo initialization recovery: err:0 start: 1571710 end: 1571732 diff: 22 ms (0.0 seconds)
2022-01-23T15:36:58.249919+08:00
RO_PDB(6):[6305] Successfully onlined Undo Tablespace 2.
RO_PDB(6):Undo initialization online undo segments: err:0 start: 1571732 end: 1573115 diff: 1383 ms (1.4 seconds)
RO_PDB(6):Undo initialization finished serial:0 start:1571710 end:1573161 diff:1451 ms (1.5 seconds)
RO_PDB(6):Deleting old file#9 from file$
RO_PDB(6):Deleting old file#10 from file$
RO_PDB(6):Deleting old file#11 from file$
RO_PDB(6):Deleting old file#12 from file$
RO_PDB(6):Deleting old file#26 from file$
RO_PDB(6):Deleting old file#31 from file$
RO_PDB(6):Deleting old file#32 from file$
RO_PDB(6):Deleting old file#33 from file$
RO_PDB(6):Deleting old file#34 from file$
RO_PDB(6):Adding new file#50 to file$(old file#9). fopr-1, newblks-48640, oldblks-19200
RO_PDB(6):Adding new file#51 to file$(old file#10). fopr-1, newblks-61440, oldblks-15360
RO_PDB(6):Adding new file#52 to file$(old file#11). fopr-1, newblks-58240, oldblks-12800
RO_PDB(6):Adding new file#53 to file$(old file#12). fopr-1, newblks-46880, oldblks-640
RO_PDB(6):Adding new file#54 to file$(old file#26). fopr-1, newblks-12800, oldblks-12800
RO_PDB(6):Adding new file#56 to file$(old file#31). fopr-1, newblks-12800, oldblks-12800
RO_PDB(6):Adding new file#55 to file$(old file#32). fopr-1, newblks-12800, oldblks-12800
RO_PDB(6):Adding new file#57 to file$(old file#34). fopr-1, newblks-115200, oldblks-12800
RO_PDB(6):Successfully created internal service RO_PDB at open
2022-01-23T15:36:59.694482+08:00
****************************************************************
Post plug operations are now complete.
Pluggable database RO_PDB with pdb id - 6 is now marked as NEW.
****************************************************************
RO_PDB(6):Pluggable database RO_PDB dictionary check beginning
RO_PDB(6):Pluggable Database RO_PDB Dictionary check complete
RO_PDB(6):Database Characterset for RO_PDB is AL32UTF8
2022-01-23T15:37:05.752451+08:00
RO_PDB(6):JIT: pid 6305 requesting full stop
2022-01-23T15:37:11.910855+08:00
RO_PDB(6):SUPLOG: Set PDB SUPLOG SGA at PDB OPEN, old 0x18, new 0x0 (no suplog)
2022-01-23T15:37:20.655852+08:00
RO_PDB(6):Opening pdb with no Resource Manager plan active
RO_PDB(6):joxcsys_required_dirobj_exists: directory object exists with required path /opt/oracle/product/19c/dbhome_1/javavm/admin/, pid 6305 cid 6
2022-01-23T15:37:22.578807+08:00
Pluggable database RO_PDB opened read write
Completed: alter pluggable database ro_pdb open read write


Note : To understand FILE# values, see my previous post on Datafile Names for Refresh Clone PDB.
(Some of the FILE# values have changed since that post because I have since added a new tablespace in the source database and recreated RO_PDB as a Refreshable Clone PDB.)

SQL Execution Statistics -- 1 : A Quick Introduction


A couple of quick examples of how to obtain SQL Execution Statistics using the GATHER_PLAN_STATISTICS Hint.


Here's how you get the Statistics for an SQL executed from the same session :
SQL> -- run the query with the /*+ gather_plan_statistics */ hint
SQL> -- then retrieve the execution statistics with :
SQL> select * from table(dbms_xplan.display_cursor(format=>'ALLSTATS LAST'));

For an SQL from another session, you will have to pass the SQL_ID (and, if necessary, the CHILD_NUMBER) as parameters to the DISPLAY_CURSOR function, as shown below.
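
For example (a sketch -- the SQL_ID and CHILD_NUMBER here are placeholders that you would substitute with the values for the statement of interest) :

SQL> select * from table(dbms_xplan.display_cursor('&sql_id', &child_number, format=>'ALLSTATS LAST'));

(The target statement must itself have been executed with the GATHER_PLAN_STATISTICS Hint, or with STATISTICS_LEVEL='ALL', for the A-Rows / A-Time / Buffers columns to be populated.)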

I present a query on DEPARTMENTS and EMPLOYEES with three different execution plans.  The first is the one automatically generated by the Optimizer, using a Hash Join.  The next two use Nested Loop(s), but differently, based on how I specify the LEADING and USE_NL Hints.


The first one using a HASH JOIN :


SQL> select /*+ gather_plan_statistics   */
2 d.department_id, d.department_name, e.first_name, e.last_name
3 from hr.departments d join hr.employees e
4 on d.department_id=e.department_id
5 order by 1,2,4,3
6 /

DEPARTMENT_ID DEPARTMENT_NAME FIRST_NAME LAST_NAME
------------- ------------------------------ -------------------- -------------------------
10 Administration Jennifer Whalen
20 Marketing Pat Fay
20 Marketing Michael Hartstein
30 Purchasing Shelli Baida
30 Purchasing Karen Colmenares
30 Purchasing Guy Himuro
30 Purchasing Alexander Khoo
30 Purchasing Den Raphaely
30 Purchasing Sigal Tobias
40 Human Resources Susan Mavris
50 Shipping Mozhe Atkinson
....
.... Intermediate rows deleted from the output presented here
....
100 Finance Ismael Sciarra
100 Finance Jose Manuel Urman
110 Accounting William Gietz
110 Accounting Shelley Higgins
280 New Department Hemant Chitale

107 rows selected.

SQL>
SQL> @show_last_sql_statistics.sql
SQL> select * from table(dbms_xplan.display_cursor(format=>'ALLSTATS LAST'))
2 /

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID 59vduj8s28fhn, child number 0
-------------------------------------
select /*+ gather_plan_statistics */ d.department_id,
d.department_name, e.first_name, e.last_name from hr.departments d join
hr.employees e on d.department_id=e.department_id order by 1,2,4,3

Plan hash value: 207737665

-----------------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers | OMem | 1Mem | Used-Mem |
-----------------------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 107 |00:00:00.01 | 14 | | | |
| 1 | SORT ORDER BY | | 1 | 107 | 107 |00:00:00.01 | 14 | 18432 | 18432 |16384 (0)|
|* 2 | HASH JOIN | | 1 | 107 | 107 |00:00:00.01 | 14 | 1572K| 1572K| 1652K (0)|
| 3 | TABLE ACCESS FULL | DEPARTMENTS | 1 | 29 | 29 |00:00:00.01 | 6 | | | |
| 4 | VIEW | index$_join$_002 | 1 | 108 | 107 |00:00:00.01 | 8 | | | |
|* 5 | HASH JOIN | | 1 | | 107 |00:00:00.01 | 8 | 1610K| 1610K| 1681K (0)|
| 6 | INDEX FAST FULL SCAN| EMP_DEPARTMENT_IX | 1 | 108 | 107 |00:00:00.01 | 4 | | | |
| 7 | INDEX FAST FULL SCAN| EMP_NAME_IX | 1 | 108 | 108 |00:00:00.01 | 4 | | | |
-----------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - access("D"."DEPARTMENT_ID"="E"."DEPARTMENT_ID")
5 - access(ROWID=ROWID)


27 rows selected.

SQL> SQL>




The second one with a simple NESTED LOOP using  DEPARTMENTS as the "Outer" table :

SQL> select /*+ gather_plan_statistics  leading(d) use_nl(e) */
2 d.department_id, d.department_name, e.first_name, e.last_name
3 from hr.departments d join hr.employees e
4 on d.department_id=e.department_id
5 order by 1,2,4,3
6 /

DEPARTMENT_ID DEPARTMENT_NAME FIRST_NAME LAST_NAME
------------- ------------------------------ -------------------- -------------------------
10 Administration Jennifer Whalen
20 Marketing Pat Fay
20 Marketing Michael Hartstein
30 Purchasing Shelli Baida
30 Purchasing Karen Colmenares
30 Purchasing Guy Himuro
30 Purchasing Alexander Khoo
30 Purchasing Den Raphaely
30 Purchasing Sigal Tobias
40 Human Resources Susan Mavris
50 Shipping Mozhe Atkinson
....
.... Intermediate rows deleted from the output presented here
....
100 Finance Ismael Sciarra
100 Finance Jose Manuel Urman
110 Accounting William Gietz
110 Accounting Shelley Higgins
280 New Department Hemant Chitale

107 rows selected.


SQL>
SQL> @show_last_sql_statistics.sql
SQL> select * from table(dbms_xplan.display_cursor(format=>'ALLSTATS LAST'))
2 /

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID 8q08zbr367cgg, child number 0
-------------------------------------
select /*+ gather_plan_statistics leading(d) use_nl(e) */
d.department_id, d.department_name, e.first_name, e.last_name from
hr.departments d join hr.employees e on d.department_id=e.department_id
order by 1,2,4,3

Plan hash value: 2178791499

----------------------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers | OMem | 1Mem | Used-Mem |
----------------------------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 107 |00:00:00.01 | 19 | | | |
| 1 | SORT ORDER BY | | 1 | 107 | 107 |00:00:00.01 | 19 | 22528 | 22528 |20480 (0)|
| 2 | NESTED LOOPS | | 1 | 107 | 107 |00:00:00.01 | 19 | | | |
| 3 | NESTED LOOPS | | 1 | 261 | 107 |00:00:00.01 | 9 | | | |
| 4 | TABLE ACCESS FULL | DEPARTMENTS | 1 | 29 | 29 |00:00:00.01 | 6 | | | |
|* 5 | INDEX RANGE SCAN | EMP_DEPARTMENT_IX | 29 | 9 | 107 |00:00:00.01 | 3 | | | |
| 6 | TABLE ACCESS BY INDEX ROWID| EMPLOYEES | 107 | 4 | 107 |00:00:00.01 | 10 | | | |
----------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

5 - access("D"."DEPARTMENT_ID"="E"."DEPARTMENT_ID")


26 rows selected.

SQL> SQL>


And the third, more complicated, Execution Plan for a slightly different SQL :

SQL> select /*+ gather_plan_statistics  leading(e) use_nl(d) */
2 d.department_id, d.department_name, e.first_name, e.last_name
3 from hr.departments d join hr.employees e
4 on d.department_id=e.department_id
5 order by 1,2,4,3
6 /

DEPARTMENT_ID DEPARTMENT_NAME FIRST_NAME LAST_NAME
------------- ------------------------------ -------------------- -------------------------
10 Administration Jennifer Whalen
20 Marketing Pat Fay
20 Marketing Michael Hartstein
30 Purchasing Shelli Baida
30 Purchasing Karen Colmenares
30 Purchasing Guy Himuro
30 Purchasing Alexander Khoo
30 Purchasing Den Raphaely
30 Purchasing Sigal Tobias
40 Human Resources Susan Mavris
50 Shipping Mozhe Atkinson
....
.... Intermediate rows deleted from the output presented here
....
100 Finance Ismael Sciarra
100 Finance Jose Manuel Urman
110 Accounting William Gietz
110 Accounting Shelley Higgins
280 New Department Hemant Chitale

107 rows selected.

SQL>
SQL> @show_last_sql_statistics.sql
SQL> select * from table(dbms_xplan.display_cursor(format=>'ALLSTATS LAST'))
2 /

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID 2kbc7rcpjcu0w, child number 0
-------------------------------------
select /*+ gather_plan_statistics leading(e) use_nl(d) */
d.department_id, d.department_name, e.first_name, e.last_name from
hr.departments d join hr.employees e on d.department_id=e.department_id
order by 1,2,4,3

Plan hash value: 2010582952

----------------------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers | OMem | 1Mem | Used-Mem |
----------------------------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 107 |00:00:00.01 | 119 | | | |
| 1 | SORT ORDER BY | | 1 | 107 | 107 |00:00:00.01 | 119 | 18432 | 18432 |16384 (0)|
| 2 | NESTED LOOPS | | 1 | 107 | 107 |00:00:00.01 | 119 | | | |
| 3 | NESTED LOOPS | | 1 | 108 | 107 |00:00:00.01 | 12 | | | |
| 4 | VIEW | index$_join$_002 | 1 | 108 | 107 |00:00:00.01 | 8 | | | |
|* 5 | HASH JOIN | | 1 | | 107 |00:00:00.01 | 8 | 1610K| 1610K| 1539K (0)|
| 6 | INDEX FAST FULL SCAN | EMP_DEPARTMENT_IX | 1 | 108 | 107 |00:00:00.01 | 4 | | | |
| 7 | INDEX FAST FULL SCAN | EMP_NAME_IX | 1 | 108 | 108 |00:00:00.01 | 4 | | | |
|* 8 | INDEX UNIQUE SCAN | DEPT_ID_PK | 107 | 1 | 107 |00:00:00.01 | 4 | | | |
| 9 | TABLE ACCESS BY INDEX ROWID| DEPARTMENTS | 107 | 1 | 107 |00:00:00.01 | 107 | | | |
----------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

5 - access(ROWID=ROWID)
8 - access("D"."DEPARTMENT_ID"="E"."DEPARTMENT_ID")


30 rows selected.

SQL> SQL>


All three executions return exactly the same Result Set.  What are different are the Execution Plans and the Logical Reads.
The first Execution Plan (using a HASH JOIN) required 14 Logical Reads (Buffer Gets).
Note how the Second Execution required 19 Logical Reads, while the Third Execution required 119 Logical Reads, having queried the DEPARTMENTS Table 107 times (reading 1 Block each time) via the Rows from the Nested Loop driven by the intermediate "view" index$_join$_002.

Since these are extremely small tables, the Actual Time is less than 1 second, but what matters in the comparison is the number of Logical Reads.

("Starts" is the number of times that particular Step (Id) was actually executed, 
"E-Rows" is the Estimated Rows for the Execution Plan,
"A-Rows" is the *Actual* number of Rows retrieved at Runtime,
"A-Time" is the *Actual* Time in Hours:Minutes:Seconds,
"0mem" is the "Estimated Optimal Memory Usage", 
"1mem" is the Estimated 1-pass (one write to temp) Memory Usage 
"Used-Mem" is the Actual Memory Used)


SQL> select table_name, blocks, num_rows
2 from dba_tables
3 where owner = 'HR'
4 and table_name in ('DEPARTMENTS','EMPLOYEES')
5 order by 1
6 /

TABLE_NAME BLOCKS NUM_ROWS
------------------------------ ---------- ----------
DEPARTMENTS 5 29
EMPLOYEES 5 108

SQL>


Given the sizes of the two tables, I am sure that all the blocks are cached in the Buffer Cache, so I am not interested in Physical I/O statistics and Wait Times.  The Logical Reads consume CPU time.

This is an example of how to read SQL Execution Statistics.  

You may extrapolate to tables with thousands of blocks and tens of thousands of rows.  

Also, consider Memory Usage for the operations when you may have dozens or hundreds of sessions concurrently doing HASH JOINs and SORTs while being limited by PGA_AGGREGATE_TARGET.  Some Memory operations may overflow to the TEMP Tablespace, noticeably increasing total execution time.
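
(If you want to check whether such workarea operations are actually spilling to disk, a quick sketch -- these statistic names are the standard ones in V$PGASTAT and V$SYSSTAT :

SQL> select name, value from v$pgastat
  2  where name in ('aggregate PGA target parameter','total PGA allocated','over allocation count');

SQL> select name, value from v$sysstat
  2  where name like 'workarea executions%';

The "workarea executions - onepass" and "- multipass" counts indicate operations that could not complete entirely in memory.)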

SQL Execution Statistics -- 2 : SQL_ID from a Tracefile


In my previous blog post, I demonstrated SQL execution statistics for the last SQL statement of the current session.

However, you can trace SQL execution in the current session or in any other session and then review the information from the tracefile.

(For different methods of SQL Tracing, see this series of blog posts.)
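
For tracing another session (rather than your own), one common method is DBMS_MONITOR -- a sketch, where the SID and SERIAL# are placeholders for the target session :

SQL> exec DBMS_MONITOR.SESSION_TRACE_ENABLE(session_id=>&sid, serial_num=>&serial, waits=>TRUE, binds=>FALSE);
SQL> -- ... let the target session run its SQL ...
SQL> exec DBMS_MONITOR.SESSION_TRACE_DISABLE(session_id=>&sid, serial_num=>&serial);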

For example, in an SQLPlus session :



SQL> alter session set tracefile_identifier='CUSTOMER_COUNT';

Session altered.

SQL>
SQL> select *
2 from v$diag_info
3 where name = 'Default Trace File';

INST_ID NAME
---------- ----------------------------------------------------------------
VALUE
---------------------------------------------------------------------------------------
CON_ID
----------
1 Default Trace File
/opt/oracle/diag/rdbms/orclcdb/ORCLCDB/trace/ORCLCDB_ora_9416_CUSTOMER_COUNT.trc
0


SQL>
SQL> exec DBMS_SESSION.SET_SQL_TRACE(sql_trace=>TRUE);

PL/SQL procedure successfully completed.

SQL> select /*+ gather_plan_statistics */ count(*) from tpcc.orders;

COUNT(*)
----------
303697

SQL> quit
Disconnected from Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.12.0.0.0
oracle19c>

From the tracefile :
PARSING IN CURSOR #139722640816616 len=62 dep=0 uid=106 oct=3 lid=106 tim=20489021058 hv=3158106211 ad='777865e0' sqlid='du49mwuy3ts33'
select /*+ gather_plan_statistics */ count(*) from tpcc.orders
END OF STMT
PARSE #139722640816616:c=10950,e=11447,p=0,cr=0,cu=0,mis=1,r=0,dep=0,og=1,plh=630573765,tim=20489021057
EXEC #139722640816616:c=52,e=51,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=1,plh=630573765,tim=20489021253
FETCH #139722640816616:c=78319,e=464572,p=1504,cr=1508,cu=0,mis=0,r=1,dep=0,og=1,plh=630573765,tim=20489485885
STAT #139722640816616 id=1 cnt=1 pid=0 pos=1 obj=0 op='SORT AGGREGATE (cr=1508 pr=1504 pw=0 str=1 time=464570 us)'
STAT #139722640816616 id=2 cnt=303697 pid=1 pos=1 obj=80213 op='TABLE ACCESS FULL ORDERS (cr=1508 pr=1504 pw=0 str=1 time=566811 us cost=411 size=0 card=303697)'
FETCH #139722640816616:c=1,e=1,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=0,plh=630573765,tim=20489486318
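
The tkprof summary below was generated from that tracefile with something like this (run from the trace directory; the output file name is just my choice) :

oracle19c> tkprof ORCLCDB_ora_9416_CUSTOMER_COUNT.trc customer_count.prf sys=no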


From a tkprof of the tracefile :
SQL ID: du49mwuy3ts33 Plan Hash: 630573765

select /*+ gather_plan_statistics */ count(*)
from
tpcc.orders


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.01 0.01 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 2 0.07 0.46 1504 1508 0 1
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 4 0.08 0.47 1504 1508 0 1

Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 106
Number of plan statistics captured: 1

Rows (1st) Rows (avg) Rows (max) Row Source Operation
---------- ---------- ---------- ---------------------------------------------------
1 1 1 SORT AGGREGATE (cr=1508 pr=1504 pw=0 time=464570 us starts=1)
303697 303697 303697 TABLE ACCESS FULL ORDERS (cr=1508 pr=1504 pw=0 time=566811 us starts=1 cost=411 size=0 card=303697)


SQL> select * from table(dbms_xplan.display_cursor('du49mwuy3ts33',format=>'ALLSTATS LAST +COST'));

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------
SQL_ID du49mwuy3ts33, child number 0
-------------------------------------
select /*+ gather_plan_statistics */ count(*) from tpcc.orders

Plan hash value: 630573765

-------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | Cost (%CPU)| A-Rows | A-Time | Buffers | Reads |
-------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 411 (100)| 1 |00:00:00.38 | 1506 | 1505 |
| 1 | SORT AGGREGATE | | 1 | 1 | | 1 |00:00:00.38 | 1506 | 1505 |
| 2 | TABLE ACCESS FULL| ORDERS | 1 | 303K| 411 (1)| 303K|00:00:00.07 | 1506 | 1505 |
-------------------------------------------------------------------------------------------------------------


14 rows selected.

SQL>


From the tracefile, I could identify the SQL_ID (du49mwuy3ts33) and then run dbms_xplan.display_cursor for that SQL_ID.  

There is a slight discrepancy between the tracefile and the dbms_xplan.display_cursor output on the Actual Time and Buffers ("cr" in the tracefile) and Physical Reads ("pr" in the tracefile).  Similarly, you might notice a discrepancy in the time reported in the raw trace file STAT lines for operation Ids 2 and 1.  When the timing is in microseconds, some discrepancies may arise.
You must also note that gather_plan_statistics itself introduces an overhead on execution time for SQL statements that run in microseconds/milliseconds.


(For an explanation of reading the tracefile, see this post on the details and this post on the summary.)

SQL Execution Statistics -- 3 : Using STATISTICS_LEVEL


 In my previous posts here and here, I have shown how to use the GATHER_PLAN_STATISTICS Hint to collect additional information.  But what if you can't modify the SELECT query statement ?

You could set STATISTICS_LEVEL='ALL' at the session level, run the query and then reset STATISTICS_LEVEL to the default 'TYPICAL'.



SQL> alter session set tracefile_identifier='H2';

Session altered.

SQL> alter session set statistics_level='ALL';

Session altered.

SQL> exec DBMS_SESSION.SET_SQL_TRACE(sql_trace=>TRUE); -- just to get the SQL_ID in the trace file

PL/SQL procedure successfully completed.

SQL>
SQL> select
2 d.department_id, d.department_name, e.first_name, e.last_name
3 from hr.departments d join hr.employees e
4 on d.department_id=e.department_id
5 order by 1,2,4,3
6 /
....
....
....
106 rows selected.

SQL>
SQL> exec DBMS_SESSION.SET_SQL_TRACE(sql_trace=>FALSE);

PL/SQL procedure successfully completed.

SQL>
SQL> select * from table(dbms_xplan.display_cursor('4aa2k5b98ybfr',format=>'ALLSTATS LAST')); -- SQL_ID identified from the trace file

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID 4aa2k5b98ybfr, child number 0
-------------------------------------
select d.department_id, d.department_name, e.first_name, e.last_name
from hr.departments d join hr.employees e on
d.department_id=e.department_id order by 1,2,4,3

Plan hash value: 2209899241

-------------------------------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers | Reads | OMem | 1Mem | Used-Mem |
-------------------------------------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 106 |00:00:00.10 | 10 | 14 | | | |
| 1 | SORT ORDER BY | | 1 | 106 | 106 |00:00:00.10 | 10 | 14 | 11264 | 11264 |10240 (0)|
| 2 | MERGE JOIN | | 1 | 106 | 106 |00:00:00.10 | 10 | 14 | | | |
| 3 | TABLE ACCESS BY INDEX ROWID| DEPARTMENTS | 1 | 27 | 12 |00:00:00.03 | 2 | 2 | | | |
| 4 | INDEX FULL SCAN | DEPT_ID_PK | 1 | 27 | 12 |00:00:00.03 | 1 | 1 | | | |
|* 5 | SORT JOIN | | 12 | 107 | 106 |00:00:00.06 | 8 | 12 | 9216 | 9216 | 8192 (0)|
| 6 | VIEW | index$_join$_002 | 1 | 107 | 106 |00:00:00.06 | 8 | 12 | | | |
|* 7 | HASH JOIN | | 1 | | 106 |00:00:00.06 | 8 | 12 | 1610K| 1610K| 1658K (0)|
| 8 | INDEX FAST FULL SCAN | EMP_DEPARTMENT_IX | 1 | 107 | 106 |00:00:00.06 | 4 | 6 | | | |
| 9 | INDEX FAST FULL SCAN | EMP_NAME_IX | 1 | 107 | 107 |00:00:00.01 | 4 | 6 | | | |
-------------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

5 - access("D"."DEPARTMENT_ID"="E"."DEPARTMENT_ID")
filter("D"."DEPARTMENT_ID"="E"."DEPARTMENT_ID")
7 - access(ROWID=ROWID)


30 rows selected.

SQL>


Thus, I was able to get similar execution statistics without the Hint if I set STATISTICS_LEVEL='ALL'.

(Note : The top level operation here is a MERGE JOIN while that in the first example of the first post of this series on 27-January-22 was a HASH JOIN.  The result here is 106 rows, not the 107 rows in the first example.  The first example was in an HR schema in another database -- e.g. you would see DEPARTMENT_ID=280 'New Department' and Employee 'HEMANT CHITALE' in that listing.  These are two different HR schemas where I have modified some of the data.  Thus, you'll see a difference in the Execution Plan as well between the first example and this one, although the query is similar -- because the databases, patch levels, possibly optimizer parameters, schema and data are different.)

Even a slight difference in data or parameters or patch levels can result in different execution plans !








SQL Execution Statistics -- 4 : Adaptive Plans


In the previous 3 blog posts, I have shown how you can use the GATHER_PLAN_STATISTICS Hint or STATISTICS_LEVEL='ALL' to get SQL execution statistics.

This may work with Adaptive Plans as well.



SQL> select /*+ gather_plan_statistics */ count(*)
2 from tables_list t
3 where (owner, table_name)
4 not in
5 (
6 select owner, table_name
7 from indexes_list i
8 )
9 /

COUNT(*)
----------
593

SQL> select * from table(dbms_xplan.display_cursor('',format=>'ALLSTATS LAST +COST +ADAPTIVE'));

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID g6q8m94krdvz7, child number 0
-------------------------------------
select /*+ gather_plan_statistics */ count(*) from tables_list t where
(owner, table_name) not in ( select owner, table_name from
indexes_list i )

Plan hash value: 2460844421

---------------------------------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | Cost (%CPU)| A-Rows | A-Time | Buffers | OMem | 1Mem | Used-Mem |
---------------------------------------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 11 (100)| 1 |00:00:00.01 | 37 | | | |
| 1 | SORT AGGREGATE | | 1 | 1 | | 1 |00:00:00.01 | 37 | | | |
| * 2 | HASH JOIN ANTI | | 1 | 22 | 11 (0)| 593 |00:00:00.01 | 37 | 1335K| 1335K| 1376K (0)|
|- 3 | NESTED LOOPS ANTI | | 1 | 22 | 11 (0)| 2202 |00:00:00.01 | 17 | | | |
|- 4 | STATISTICS COLLECTOR | | 1 | | | 2202 |00:00:00.01 | 17 | | | |
| 5 | INDEX FAST FULL SCAN| TABLES_LIST_NDX | 1 | 2202 | 5 (0)| 2202 |00:00:00.01 | 17 | | | |
|- * 6 | INDEX RANGE SCAN | INDEXES_LIST_NDX_1 | 0 | 2918 | 6 (0)| 0 |00:00:00.01 | 0 | | | |
| 7 | INDEX FAST FULL SCAN | INDEXES_LIST_NDX_1 | 1 | 2918 | 6 (0)| 2918 |00:00:00.01 | 20 | | | |
---------------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - access("OWNER"="OWNER" AND "TABLE_NAME"="TABLE_NAME")
6 - access("OWNER"="OWNER" AND "TABLE_NAME"="TABLE_NAME")

Note
-----
- this is an adaptive plan (rows marked '-' are inactive)


31 rows selected.

SQL>
SQL> alter session set statistics_level='ALL';

Session altered.

SQL> select count(*)
2 from tables_list t
3 where (owner, table_name)
4 not in
5 (
6 select owner, table_name
7 from indexes_list i
8 )
9 /

COUNT(*)
----------
593

SQL> select * from table(dbms_xplan.display_cursor('',format=>'ALLSTATS LAST +COST +ADAPTIVE'));

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID c1fdcdxhvjrat, child number 0
-------------------------------------
select count(*) from tables_list t where (owner, table_name) not in (
select owner, table_name from indexes_list i )

Plan hash value: 2460844421

---------------------------------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | Cost (%CPU)| A-Rows | A-Time | Buffers | OMem | 1Mem | Used-Mem |
---------------------------------------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 11 (100)| 1 |00:00:00.01 | 37 | | | |
| 1 | SORT AGGREGATE | | 1 | 1 | | 1 |00:00:00.01 | 37 | | | |
| * 2 | HASH JOIN ANTI | | 1 | 22 | 11 (0)| 593 |00:00:00.01 | 37 | 1335K| 1335K| 1381K (0)|
|- 3 | NESTED LOOPS ANTI | | 1 | 22 | 11 (0)| 2202 |00:00:00.01 | 17 | | | |
|- 4 | STATISTICS COLLECTOR | | 1 | | | 2202 |00:00:00.01 | 17 | | | |
| 5 | INDEX FAST FULL SCAN| TABLES_LIST_NDX | 1 | 2202 | 5 (0)| 2202 |00:00:00.01 | 17 | | | |
|- * 6 | INDEX RANGE SCAN | INDEXES_LIST_NDX_1 | 0 | 2918 | 6 (0)| 0 |00:00:00.01 | 0 | | | |
| 7 | INDEX FAST FULL SCAN | INDEXES_LIST_NDX_1 | 1 | 2918 | 6 (0)| 2918 |00:00:00.01 | 20 | | | |
---------------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - access("OWNER"="OWNER" AND "TABLE_NAME"="TABLE_NAME")
6 - access("OWNER"="OWNER" AND "TABLE_NAME"="TABLE_NAME")

Note
-----
- this is an adaptive plan (rows marked '-' are inactive)


30 rows selected.

SQL>


Note : The two SQL statements, although functionally the same, have different SQL_IDs because of the presence / absence of the GATHER_PLAN_STATISTICS Hint.
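
You can confirm the two distinct SQL_IDs (and that they share the same Plan Hash Value) with a query against V$SQL -- a sketch, where the LIKE pattern is just one way of locating the two statements :

SQL> select sql_id, child_number, plan_hash_value, executions
  2  from v$sql
  3  where sql_text like '%from tables_list t%'
  4  and sql_text not like '%v$sql%'
  5  order by sql_id, child_number
  6  /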

A Fresh Look at AWR Reports -- 1 : CPU Usage


I have covered AWR Reports (from 10g to 12c, including MultiTenant) in the past.  You could search for "AWR" in the "Search this Blog" box in the panel on the right or by specifying :
site:hemantoracledba.blogspot.com AWR
in the Google Search Window.

In this post I present a comparison between OS and Oracle CPU Statistics.

AWR Header

So, I have an AWR Report for 19c on Windows for a duration of 120 minutes with 12 CPUs.  That is a total of 86,400 available CPU seconds (120 minutes X 12 CPUs X 60 seconds per minute).



Load Profile


The Load Profile shows 0.4 Oracle CPU seconds per Elapsed Second.  This means that, on average, I am consuming less than 1 CPU.


Listing of Waits and Wait Classes

Oracle Database CPU time is 13.5% of total Oracle Time.  This is 1,918 seconds.
Note very carefully that this does NOT mean that the Oracle Database Processes account for 13.5% of Server CPU Usage.



O.S. and Oracle Instance CPU
This shows that the Server reports 24% CPU Usage but Oracle used only 2.4% of the total CPU available (approximately 2.4% of 86,400 seconds is 2,073 seconds, close to Oracle's DB CPU figure).
This also translates to about 9% of the total actual CPU "busyness" on the Server (roughly 1,918 seconds of DB CPU out of the approximately 20,700 seconds the Server was actually busy, i.e. 24% of 86,400 seconds).

So, if Oracle consumed only 9% of the actual CPU Usage on the server, I would need to look for other processes on the same server that would have been consuming CPU.  

This could be other Oracle Database Instances (my report is for database instance "CDB1" -- was there / were there one or more other Database Instances active on the same server ?).  This could also be non-Oracle programs, Application Server software,  other system / monitoring tools running (I've seen monitoring tools alone consuming 20% of the CPU on a server !).

If this server is hosting only this 1 Database Instance (CDB1), it is excessively provisioned (12 CPUs) for the actual load.  Of course, the load that I have generated here is I/O intensive, but it means that I could have actually reduced the number of CPUs to, say, 4 CPUs and paid much less in Hardware, Oracle Licensing and Oracle Support costs.  (4 CPUs would have allowed for the OS and the other non-Oracle processes that seem to be accounting for 24% user-time of 12 CPUs).




A Fresh Look at AWR Reports -- 2 : Redo Size


 In my previous blog post, I showed how to interpret CPU Usage from an AWR report.

Here, I take the same 2-hour AWR report and help interpret Redo Size -- i.e. the volume of Redo Generated and Written to Disk.

Here's the header which indicates that this AWR report is for a duration of 2 hours on 19c on Windows :

AWR Header


These are the first few lines of the Load Profile :

First few lines of Load Profile

This indicates that the Redo Size (total Redo Generated) is just about 10GB (multiply the Redo Size in bytes per second by the total elapsed time of 120.71 minutes, converted to seconds).

I can confirm that from the Key Instance Activity Stats section :

Instance Activity Statistics

So, did my database write about 10GB (about 9.8GB actually, if you divide by 1024) of Redo to disk ?   Is that true ?

Let me go to the IOStat(s) section.  (Please ignore the "average service time" in this table because I've run this test on a PC with a standard Hard-Disk -- not an Enterprise SAN !)

IO Statistics by FileType

What do I see here ?  The total volume written to Log Files (i.e. Online Redo Logs) is 19GB !

Why is that so ?

Because, in my database, every Redo Log Group is duplexed -- it has 2 members.  So, Redo of approximately 9.8GB actually translates to 19GB (or close to 20GB) of Physical Writes to Online Redo Log files.
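
You can verify the duplexing (the number of members in each Online Redo Log Group) with a quick query -- a sketch against the standard V$LOG and V$LOGFILE views :

SQL> select l.group#, l.members, l.bytes/1048576 as size_mb, f.member
  2  from v$log l join v$logfile f on l.group# = f.group#
  3  order by l.group#, f.member
  4  /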

Note that there are 10GB of Reads from the Online Redo Log files.  That is the Log Archiver Process(es) reading the Online Redo Logs to write them out to the Archive Logs -- which you can see as 10GB of Writes to Archive Log !

Thus, the initial "Redo Size" of 10GB is actually 30GB of Physical Writes to Disk (20GB for Duplexed Online Redo Logs  PLUS 10GB of Archive Logs).

If I had defined an additional ArchiveLogDestination on my server (as I have done in previous implementations where dedicated disks or NFS mount points were assigned), there would have been another 10GB of Physical Writes.

For every Standby Database (note how in previous blog posts here and here, I have demonstrated 3 separate Standby Databases for the Primary Database), 10GB of Redo has to be transferred over the Network and then written to the Standby Redo Logs at each Standby Site/Server.

So, my 10GB of Redo could effectively translate to anywhere from 30GB to 70GB of Physical Writes of Redo, plus 10GB to 30GB of transfer over the network !!

When you look at the SAN or underlying storage, you must consider how all those writes get through.  Does the SAN have enough Write-Through Cache ?  Are those Redo Logs and the one or more ArchiveLogDestination(s) on the Primary Server  actually on separate Disks  -- you might have separate FileSystem MountPoints or ASM DiskGroups but how do they translate down to the Disks in the SAN ?


A Fresh Look at AWR Reports -- 3 : I/O Profile


 After viewing CPU Usage and Redo Size statistics, here's a look at the I/O Profile.

Here's the header which indicates that this AWR report is for a duration of 2 hours on 19c on Windows :

AWR Header


Instead of the Load Profile, this time I will skip directly to the I/O Profile section :

I/O Profile

In the above table, "Database" seems to include both "Datafile" and "Tempfile".

Also, for some correlation, the IOStat section :

IOStat by Filetype Summary

As I've already mentioned in my previous 2 posts, the load that I generated was more I/O Intensive than CPU Intensive.
Here we can see a break-down of the I/O Profile.  I am more interested in the *Writes*.

From the previous post, we've already seen that 20GB of writes were for Duplexed Online Redo Logs and 10GB for ArchiveLogs.
But what about the rate per second ?

Online Redo Log file writes were 2.7MB/second.

Datafile + Tempfile writes were 3.3MB/second.  Note that Tempfile I/O was higher than Datafile I/O.

Although Datafile requests per second were higher, Datafile writes by DBWR may well be in smaller batches (i.e. fewer blocks at each write) -- more I/O calls (85.59), fewer total bytes written (1.388M).  This is in the nature of DBWR behaviour, as DBWR periodically "collects" dirty buffers to flush to disk.

Writes to Tempfile for Sort/GroupBy operations (10.59 requests for 1.89M) can be more blocks per call.  This is because such operations can consist of a large number of blocks in each session.  The writes are from the PGA, not the Buffer Cache in the SGA.

So, what could have been happening here, that Tempfile writes were so high, relative to Datafile writes ?  Sort Operations, Group By Queries, Create / Rebuild Index operations.
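
(To see, at any point in time, which sessions are actually consuming Temp space, a sketch using the standard V$TEMPSEG_USAGE and V$SESSION views :

SQL> select s.sid, s.username, u.tablespace, u.segtype, u.blocks
  2  from v$tempseg_usage u join v$session s on u.session_addr = s.saddr
  3  order by u.blocks desc
  4  /

This shows current usage only, of course, not what happened during the AWR interval.)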

Note : Create / Rebuild Index operations are unlikely to all be visible in the SQL Statements section of the AWR report because, as DDL statements, they get invalidated and aged out of the Shared Pool quickly, before each AWR Snapshot run.  Yet, their I/O profile is visible in the IO Sections of the AWR Report.
Create/Rebuild Index operations do not generate Redo for their Tempfile writes but *do* generate Redo for their Datafile writes if they are not executed as NoLogging operations.









A Fresh Look at AWR Reports -- 4 : SQL*Net Waits


 After the first three posts on CPU Usage, Redo Size and I/O Profile, I present my interpretation of SQL*Net waits.

From the same AWR Report as the first three reviews, I have extracted the specific SQL*Net waits from the Foreground Wait Events table:

SQL*Net Waits from the Foreground Waits Table

"SQL*Net more data from client" waits occur when the server process hasn't received the required data (e.g. an INSERT statement data) within 1 network packet and has to wait to receive more more.  "SQL*Net more data to client" is the reverse -- the client has requested, say a large row or dataset that doesn't fit into 1 network packet.  The network packet size is determined essentially by two parameters "Session Data Unit" (aka "SDU") and "Transport Data Unit" (aka "TDU").  When sending rows out of the server, it can automatically also do compression also. ARRAYSIZE (or Fetch Size, depending on your client code, in java) can also achieve some compression. TDU is obsolete in all recent versions but SDU can still be tuned.  For guidance on tuning SDU,  section 14.2 of the Tuning Guide.

"SQL*Net message to client" doesn't really show time taken to send the message (e.g. sql select statement result or execution of commit) to the client.  It is actually the time taken by the Server Process to put the results on to the Network Layer on the DB Server.  Beyond that, the Oracle Server Process does *not* how long it takes for the data to reach the client (whether a user desktop or an application server / middle-tier).  The actual time would depend on network latency and bandwidth.  So, you can interpret the number of occurrences of the Wait Event but NOT the time.

"SQL*Net break/reset" would be presented when an error occurs or an unhandled exception is raised. A high number of such waits indicate repeated occurrences of errors in SQL calls.

"SQL*Net message from client" may or may not be a problem (and that is why Oracle does not include it in % DB time).  There are different manifestations of this event :

1. The database server has sent a row or a set of rows (depending on arraysize or fetchsize) and is waiting for the client / application server to acknowledge and request the next set of rows.  The client could be a user who is running, say, SQL*Developer, and has seen the first 50 rows of a 1000 row result set but has walked away from his desk to get some coffee and has not yet requested the next set of 50 rows.  (Yes, it is true that SQL*Developer and most clients do NOT fetch and present all the rows to the user immediately; this is done in batches -- the batch size is configurable in most client or java programs).  The user may come back 5 minutes or 30 minutes later and ask for the next 50 rows.  Till then, the Oracle Server Process is "waiting" on this event.

2. It could also be an application server / middle tier that is formatting / sorting the rows or is writing the results to disk or is joining this set of rows with another data set (even from another source -- e.g. an ODBC connection, an Excel file etc) and is taking some milliseconds to do so.  Till then, the Oracle Server Process may be "waiting" on this event.

3. The database server has sent the results of a query (or the success of execution of a DML) to the client / application server and is waiting for the next SQL call.  The next SQL call could be a COMMIT (or ROLLBACK) or any INSERT/UPDATE/other DML statement.  It may come 1 second later or it may come 30 minutes later or it may come 10 hours later.  As long as the client / application server has maintained a session on the Database, the Oracle Server Process will be "waiting" on this event.



Other than the above set of "Wait" Events relating to a client / application server, exactly the same Events may occur relating to a Database Link :  "SQL*Net more data from/to dblink", "SQL*Net message from/to dblink" and "SQL*Net break/reset to dblink".  They don't appear in my AWR because my example load does not include Database Links to / from other databases.  But you need to interpret them as well.  For example, if you have Database Links for periodic refreshes of Materialized Views between Databases, you may well see such "SQL*Net message" and "SQL*Net more data" waits relating to "dblink"s.

