Tuesday, April 8, 2014

Export of partitioned tables from 10g to 11g resulted in huge increase in size


Observation :

Performed a complete export of a 10g database and imported it into an 11g database using the Data Pump utility. Some of the tablespaces have a default initial extent defined at the database level. Because of this, the objects in these tablespaces grew hugely during import.
 
e.g.: Tablespace TEST in 10g was created with the database default storage extents, and any object created in this tablespace without a storage clause is allocated 64 KB. When such an object is imported into an 11g tablespace, it is allocated a default of 8 MB even though it contains no data.
 

Cause :

                10g

                The custom tablespaces in 10g are created with the database default initial extent size (64 KB by default in 10g), whereas the standard tablespaces are created with their own uniform initial extent of 1 MB. Each subpartition in the custom tablespaces is created with a default initial extent of 64 KB even when it contains no data.

                11g

                From 11.2.0.2 onwards, the default initial extent size for partitioned segments is 8 MB, to improve performance. Hence, during import, every empty subpartition created in the custom tablespaces picked up the default value of 8 MB, resulting in a huge increase in database size.
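The size difference can be seen by checking the space actually allocated for an empty (sub)partition on both sides. This is an illustrative sketch; the table name is a placeholder, and it assumes DBA access:

```sql
-- Illustrative check (TEST_PART_TAB is a hypothetical table name):
-- compare the space allocated for empty partitions on the 10g source
-- vs the 11.2.0.2+ target.
SELECT segment_name,
       partition_name,
       tablespace_name,
       bytes/1024 AS size_kb
FROM   dba_segments
WHERE  segment_name = 'TEST_PART_TAB'
AND    segment_type LIKE 'TABLE%PARTITION%';
-- On 10g an empty (sub)partition typically shows 64 KB;
-- on 11.2.0.2+ the same empty (sub)partition shows 8192 KB (8 MB).
```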

 

 

Solution:

                To prevent this default allocation, we need to set the hidden parameter _partition_large_extents to FALSE explicitly and then initiate the import.
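As a sketch (per Doc ID 1295484.1; test in non-production first), the hidden parameter can be set before running the import. Hidden parameters must be double-quoted, and since they do not appear in V$PARAMETER, verification needs the X$ views as SYS:

```sql
-- Set the hidden parameter before starting impdp
ALTER SYSTEM SET "_partition_large_extents" = FALSE SCOPE = BOTH;

-- Verify the setting (run as SYS; hidden parameters are not
-- visible in V$PARAMETER):
SELECT a.ksppinm AS name, b.ksppstvl AS value
FROM   x$ksppi a, x$ksppcv b
WHERE  a.indx = b.indx
AND    a.ksppinm = '_partition_large_extents';
```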

 

Initial Extent Size of a Partition Changed to 8MB from 64KB After Upgrade to 11.2.0.2 or Later (Doc ID 1295484.1) explains this default allocation.

 

I tested the scenario by creating a test partitioned table with and without the init parameter, and the results are as expected. I will include this step in my import process document.

1)      Created an empty table with partitions; it allocated 8 MB (init parameter _partition_large_extents not defined in the pfile).

2)      Set init parameter _partition_large_extents to FALSE and created an empty table with partitions in a custom tablespace; it allocated only 64 KB.
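The two test steps above can be sketched as follows. Table and tablespace names are illustrative, and this assumes deferred_segment_creation is FALSE (otherwise an empty partition allocates no segment at all and the comparison cannot be made):

```sql
-- Case 1: parameter not set -- empty partition picks up 8 MB on 11.2.0.2+
CREATE TABLE t_part_default (id NUMBER)
PARTITION BY RANGE (id)
(PARTITION p1 VALUES LESS THAN (100))
TABLESPACE test;

-- Case 2: after ALTER SYSTEM SET "_partition_large_extents" = FALSE
CREATE TABLE t_part_small (id NUMBER)
PARTITION BY RANGE (id)
(PARTITION p1 VALUES LESS THAN (100))
TABLESPACE test;

-- Compare the space allocated for the two empty partitions
SELECT segment_name, bytes/1024 AS size_kb
FROM   user_segments
WHERE  segment_name IN ('T_PART_DEFAULT', 'T_PART_SMALL');
```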

Friday, February 14, 2014

HTTP-500 error in R12 instance after clone with ORA-01578

Issue:

Performed an Oracle EBS R12 application clone from PROD using an RMAN backup. The cloned instance was configured and the services were brought up. When trying to launch the application, we get an HTTP-500 Internal Server Error.
Checked the application.log file under the $LOG_HOME/ora/10.1.3/j2ee/oacore/oacore_default_group_1/ directory and found the errors below.

Caused by: oracle.apps.jtf.base.resources.FrameworkException: Failure in CacheLoader: oracle.apps.jtf.base.resources.FrameworkException
Caused by: oracle.apps.jtf.base.resources.FrameworkException: ORA-01578: ORACLE data block corrupted (file # 131, block # 59236)
        ... 31 more
ORA-01110: data file 131: '/u01/oradata/UAT/apps_ts_tx_data_34.dbf'
ORA-26040: Data block was loaded using the NOLOGGING option


Cause:

As per Metalink note 781413.1, the issue is that the data in APPLSYS.WF_LOCAL_USER_ROLES is not synchronized.
A query against table WF_LOCAL_USER_ROLES throws the block corruption error.
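A hedged sketch of how to confirm the symptom: a full scan of the table hits the corrupt block, and after an RMAN validate, blocks invalidated by NOLOGGING operations (the ORA-26040 case seen here) are flagged in V$DATABASE_BLOCK_CORRUPTION:

```sql
-- Reproduce the error: a full scan reaches the corrupt block
SELECT /*+ FULL(r) */ COUNT(*)
FROM   APPLSYS.WF_LOCAL_USER_ROLES r;
-- expected: ORA-01578: ORACLE data block corrupted (file # 131, block # 59236)

-- After RMAN> VALIDATE DATAFILE 131; the affected blocks are listed,
-- with CORRUPTION_TYPE = 'NOLOGGING' for this scenario:
SELECT file#, block#, blocks, corruption_type
FROM   v$database_block_corruption;
```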

Solution:

Synchronize the workflow tables from the back end as below:

sqlplus apps/<apps password>

set serveroutput on size 100000;
declare
begin
  -- Rebuild the workflow local tables for all originating systems
  WF_LOCAL_SYNCH.BULKSYNCHRONIZATION(
    P_ORIG_SYSTEM        => 'ALL',
    P_PARALLEL_PROCESSES => null,
    P_LOGGING            => null,
    P_RAISEERRORS        => TRUE);
exception
  when others then
    dbms_output.put_line('sqlerrm = ' || sqlerrm);
end;
/


Saturday, January 4, 2014

An error occurred while attempting to establish an Applications File Server connection OPP Files


I did an R12 upgrade from 11i recently, and as part of the upgrade we also had a server migration. So, as part of this activity, we copied all the old concurrent log & out files from the old 11i server to the new R12 server.

After copying all the files, we updated the fnd_concurrent_requests table so that the old log and out files could be opened from the R12 applications.

Users are now able to open their 11i standard log & out files, but they are not able to open the concurrent PDF output generated by the Output Post Processor. When users try to launch the PDF out files, they get the error below.



An error occurred while attempting to establish an Applications File Server connection with the node FNDFS_<oldserver>. There may be a network configuration problem, or the TNS listener on node FNDFS_<oldserver> may not be running. Please contact your system administrator.




Solution:

After investigation, I identified that the OPP stores file generation details such as file type, node name, and file name in the table FND_CONC_REQ_OUTPUTS instead of FND_CONCURRENT_REQUESTS. After updating this table manually as below, users are able to open the OPP PDF output files.

-- Repoint the stored output file paths from the old 11i location
-- to the new R12 location
update FND_CONC_REQ_OUTPUTS
set    FILE_NAME = replace(FILE_NAME,
         '/u01/oracle/testcomn/admin/out/PROD_test',
         '/u01/oracle/test/inst/apps/PROD_test/logs/appl/conc/out/PROD_tes');

-- Repoint the node name to the new server
update FND_CONC_REQ_OUTPUTS
set    FILE_NODE_NAME = '<NewServer>'
where  FILE_NODE_NAME = '<OldServer>';

commit;
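Before running the updates, it may help to preview how many rows will be touched. This is a sketch using the same placeholders as above:

```sql
-- Preview the OPP output rows that still point at the old node
SELECT COUNT(*)
FROM   fnd_conc_req_outputs
WHERE  file_node_name = '<OldServer>';
```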

...