Wednesday, December 22, 2010

Database Performance Tuning Scripts

Performance Tuning Scripts:

Below are some scripts useful for performance tuning.

Prompt
Prompt Cache hit ratio
prompt
select 1-(phy.value / (cur.value + con.value)) "Cache Hit Ratio",
       round((1-(phy.value / (cur.value + con.value)))*100,2) "% Ratio"
from v$sysstat cur, v$sysstat con, v$sysstat phy
where cur.name = 'db block gets'
  and con.name = 'consistent gets'
  and phy.name = 'physical reads';
/
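The arithmetic behind this ratio is simple; as an illustration, a small Python sketch with made-up counter values (not part of the original script):

```python
def cache_hit_ratio(physical_reads, db_block_gets, consistent_gets):
    """Buffer cache hit ratio: fraction of block reads served from memory
    rather than disk (mirrors the v$sysstat query above)."""
    logical_reads = db_block_gets + consistent_gets
    return 1 - (physical_reads / logical_reads)

# Made-up counter values for illustration
ratio = cache_hit_ratio(physical_reads=5_000,
                        db_block_gets=40_000,
                        consistent_gets=60_000)
print(round(ratio * 100, 2))  # the "% Ratio" column form
```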

Prompt
Prompt Another Buffer Cache hit ratio Calculation
prompt
column "logical_reads" format 99,999,999,999
column "phys_reads" format 999,999,999
column "phy_writes" format 999,999,999
select a.value + b.value "logical_reads", c.value "phys_reads",
       round(100 * ((a.value+b.value)-c.value) / (a.value+b.value)) "BUFFER HIT RATIO"
from v$sysstat a, v$sysstat b, v$sysstat c
where a.statistic# = 38 and b.statistic# = 39 and c.statistic# = 40;
-- Note: statistic# values can differ between Oracle versions; joining on name is safer.
/

prompt
prompt Data Dictionary Hit Ratio should be over 90 percent
prompt
select sum(gets) "Data Dict. Gets",
       sum(getmisses) "Data Dict. Cache Misses",
       round((1-(sum(getmisses)/sum(gets)))*100) "DATA DICT CACHE HIT RATIO",
       round(sum(getmisses)*100/sum(gets)) "% MISSED"
from v$rowcache;
/

prompt
prompt Library Cache Miss Ratio
prompt
select sum(pins) "executions",
       sum(reloads) "Cache Misses",
       round((1-(sum(reloads)/sum(pins)))*100) "LIBRARY CACHE HIT RATIO",
       round(sum(reloads)*100/sum(pins)) "% Missed"
from v$librarycache;
/

prompt
prompt More Library Cache stats
prompt
select namespace,
       trunc(gethitratio*100) "Hit Ratio",
       trunc(pinhitratio*100) "Pin Hit Ratio",
       reloads "Reloads"
from v$librarycache;
/

prompt
prompt Another Library Cache Calculation, total reloads should be as close to 0 as possible.
prompt
column libcache format 99.99 heading 'Percentage' jus cen
select sum(pins) "Total Pins", sum(reloads) "Total Reloads",
       sum(reloads)/sum(pins)*100 libcache
from v$librarycache;
/

prompt
prompt Redo Log Buffer should be as close to 0 as possible
prompt
select substr(name,1,30),value from v$sysstat where name ='redo log space requests';
/

prompt
prompt Redo Log Contention, all ratios less than 1
prompt
SET feedback OFF
COLUMN name FORMAT a15
COLUMN gets FORMAT 99999999
COLUMN misses FORMAT 999999
COLUMN immediate_gets FORMAT 99999999 HEADING 'IMM_GETS'
COLUMN immediate_misses FORMAT 99999999 HEADING 'IMM_MISSES'
PROMPT Examining Contention for Redo Log Buffer Latches...
PROMPT ----------------------------------------------------
SELECT name, gets, misses, immediate_gets, immediate_misses,
       Decode(gets,0,0,misses/gets*100) ratio1,
       Decode(immediate_gets+immediate_misses,0,0,
              immediate_misses/(immediate_gets+immediate_misses)*100) ratio2
FROM v$latch
WHERE name IN ('redo allocation', 'redo copy');
/
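The Decode() calls in that query guard against dividing by zero for latches that were never requested; the same guard in Python (figures are made up):

```python
def latch_miss_pct(gets, misses):
    """Willing-to-wait miss percentage; 0 when the latch was never
    requested (the Decode(gets,0,0,...) guard in the query above)."""
    return 0 if gets == 0 else misses / gets * 100

def latch_imm_miss_pct(immediate_gets, immediate_misses):
    """Immediate-mode miss percentage with the same zero guard."""
    total = immediate_gets + immediate_misses
    return 0 if total == 0 else immediate_misses / total * 100

print(latch_miss_pct(200_000, 150))  # made-up redo allocation figures
print(latch_imm_miss_pct(0, 0))      # never requested -> 0, not an error
```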

prompt
prompt Disk Vs. Memory Sorts. Try to keep the disk/memory ratio to less than .10 by increasing the sort_area_size
prompt
SET HEADING OFF
SET FEEDBACK OFF
COLUMN name FORMAT a30
COLUMN value FORMAT 99999990
SELECT name, value FROM v$sysstat WHERE name IN ('sorts (memory)', 'sorts (disk)');
/
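The disk/memory check described in the note above can be expressed directly; a Python sketch with made-up v$sysstat values:

```python
def disk_sort_ratio(memory_sorts, disk_sorts):
    """Ratio of sorts spilled to disk vs done in memory; the note above
    suggests keeping this under 0.10 (by raising sort_area_size)."""
    return disk_sorts / memory_sorts

mem_sorts, dsk_sorts = 120_000, 800  # made-up values
ratio = disk_sort_ratio(mem_sorts, dsk_sorts)
print(f"ratio={ratio:.4f} ok={ratio < 0.10}")
```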

prompt
prompt Initialization Parameters
prompt
select substr(name,1,35) "Parameter" ,substr(value,1,35) "Value" from v$parameter order by name asc;
/

Monday, December 20, 2010

Database Relink issue

10.2.0.5 database install, Relink issue

When attempting to install the 10.2.0.5.0 database on 64-bit Linux, the relink error below was encountered.

The cause was the missing prerequisite RPM "compat-libstdc++-33-3.2.3-61 (x86_64)". The issue was solved by installing the missing RPM, after which the 10.2.0.5.0 software installation completed.

ERROR

-------

INFO: /usr/bin/ld: warning: libstdc++.so.5, needed by /u01/oracle/product/102/ctx/lib//libsc_fa.so, not found (try using -rpath or -rpath-link)
/u01/oracle/product/102/ctx/lib/libwv_core.so: undefined reference to `__cxa_pure_virtual@CXXABI_1.2'
/u01/oracle/product/102/ctx/lib//libsc_fa.so: undefined reference to `__gxx_personality_v0@CXXABI_1.2'
/u01/oracle/product/102/ctx/lib/libwv_core.so: undefined reference to `vtable for __cxxabiv1::__class_type_info@GLIBCPP_3.2'
/u01/oracle/product/102/ctx/lib/libsc_ch.so: undefined reference to `__gxx_personality_v0'
/u01/oracle/product/102/ctx/lib//libsc_fa.so: undefined reference to `_Unwind_Resume'
/u01/oracle/product/102/ctx/lib/libwv_core.so: undefined reference to `vtable for __cxxabiv1::__si_class_type_info@GLIBCPP_3.2'
/u01/oracle/product/102/ctx/lib/libwv_core.so: undefined reference to `std::ios_base::Init::Init()@GLIBCPP_3.2'
/u01/oracle/product/102/ctx/lib/libwv_core.so: undefined reference to `std::ios_base::Init::~Init()@GLIBCPP_3.2'
/u01/oracle/product/102/ctx/lib/libwv_core.so: undefined reference to `operator delete(void*)@GLIBCPP_3.2'
collect2: ld returned 1 exit status
INFO: make: *** [ctxhx] Error 1

Database upgrade bit conversion issue

DB upgrade Bit conversion Issue from 10.2.0.3 to 10.2.0.5
---------------------------------------------------------------------


We are upgrading the database from 10.2.0.3 to 10.2.0.5. We applied patch 8202632 to the Oracle home, and it was successful.

We then opened the database in upgrade mode and ran the catupgrd script, which failed with the error below.

Error:

create or replace package body DBMS_ASSERT wrapped
*
ERROR at line 1:
ORA-06544: PL/SQL: internal error, arguments: [56319], [], [], [], [], [], [],
[]
ORA-06553: PLS-801: internal error [56319]

No errors.
CREATE OR REPLACE FUNCTION version_script
*
ERROR at line 1:
ORA-06544: PL/SQL: internal error, arguments: [56319], [], [], [], [], [], [],
[]
ORA-06553: PLS-801: internal error [56319]

Solution:
---------
The issue here is that the 10.2.0.3 database is 32-bit, while the 10.2.0.5 home we are upgrading to is 64-bit.

We assumed catupgrd.sql would also convert the word size during the upgrade (catupgrd.sql calls utlip.sql),

but converting a 32-bit database to 64-bit is not automatic in the 10.2.0.5 upgrade.

To fix this issue, we performed the following:
------------------------------------
1. Change catupgrd.sql script to always execute utlip.sql
cd $ORACLE_HOME/rdbms/admin
vi catupgrd.sql and change:
==> @@&utlip_file
to:
==> @@utlip.sql

2. Rerun upgrade with echo on
SQL> spool /tmp/upgrade.log
SQL> set echo on
SQL> @?/rdbms/admin/catupgrd.sql

Oracle Application Database (10gR2) - Migration to Linux

Migrating Oracle E-Business Suite 11i Database 10g Release 2 (10GR2) to Linux

The following tasks provide step-by-step instructions for migrating an Oracle Apps database to Linux from any operating system.

Section I: Prepare the source system for Export:-
1) Apply the Applications consolidated export/import utility patch:
Apply patch 4872830 to the source administration server node.

2) Apply the latest Applications database preparation scripts patch:
Apply patch 7225862 to every application tier server node in the source system.

3) Create a working directory:
$ mkdir /dd02/backup/expimp

4) Generate the target database instance creation script aucrdb.sql:
$ sqlplus system/manager @$AU_TOP/patch/115/sql/auclondb.sql 10

5) Record Advanced Queue settings:
Copy the auque1.sql script from the $AU_TOP/patch/115/sql directory on the source administration server node to the working directory on the source database server node and run it as sysdba.
It generates auque2.sql.
$ sqlplus /nolog
SQL> connect / as sysdba;
SQL> @auque1.sql

6) Remove the rebuild index parameter from spatial indexes:
Use SQL*Plus to connect to the source database as sysdba and run the following command:
SQL>select INDEX_NAME,PARAMETERS from dba_indexes where index_type='DOMAIN' and upper(parameters) like '%REBUILD%';
To remove the rebuild index parameter, rebuild each index listed, keeping its parameters but omitting the REBUILD clause:

SQL> alter index <owner>.<index_name> rebuild parameters ('<parameters>');
Eg:
alter index MST.MST_MD_ADM_BNDS_N1 rebuild parameters ('sdo_indx_dims=2 sdo_rtr_pctfree=10 tablespace=APPS_TS_TX_IDX');
alter index MST.MST_MD_HYDROS_N1 rebuild parameters ('sdo_indx_dims=2 sdo_rtr_pctfree=10 tablespace=APPS_TS_TX_IDX');
alter index HR.HR_LOCATIONS_SPT rebuild parameters ('sdo_indx_dims=2 sdo_rtr_pctfree=10 tablespace=APPS_TS_TX_IDX');
alter index HR.PER_ADDRESSES_SPT rebuild parameters ('sdo_indx_dims=2 sdo_rtr_pctfree=10 tablespace=APPS_TS_TX_IDX');

Section II: Export the source Applications database instance:

0) Create a directory named dmpdir:
$ sqlplus system/
SQL> create directory dmpdir as '/u01/backup/expimp';

1) Create the export parameter file:
Create the file auexpdp.dat with the following values, changing the log file name and date as needed.

$cat auexpdp.dat
directory=dmpdir
dumpfile=fullexp%U.dmp,fullexp2_%U.dmp,fullexp3_%U.dmp,fullexp4_%U.dmp
filesize=2G
full=y
exclude=SCHEMA:"='MDDATA'"
exclude=SCHEMA:"='OLAPSYS'"
exclude=SCHEMA:"='ORDSYS'"
exclude=SCHEMA:"='DMSYS'"
exclude=SCHEMA:"='OUTLN'"
exclude=SCHEMA:"='ORDPLUGINS'"
#transform=oid:n
logfile=expdpapps_01Feb09.log
JOB_NAME=expfull
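For repeated migrations, a parameter file like the one above can be generated rather than hand-edited. A hypothetical Python helper (the schema exclusion list and file names are the ones used in this post):

```python
# Schemas excluded from the full export, as listed in auexpdp.dat above
EXCLUDED = ["MDDATA", "OLAPSYS", "ORDSYS", "DMSYS", "OUTLN", "ORDPLUGINS"]

def build_parfile(logfile, dumpfile_base="fullexp", pieces=4):
    """Return the text of a Data Pump export parameter file matching the
    layout shown above."""
    lines = ["directory=dmpdir"]
    dumps = [f"{dumpfile_base}%U.dmp"] + [
        f"{dumpfile_base}{i}_%U.dmp" for i in range(2, pieces + 1)
    ]
    lines.append("dumpfile=" + ",".join(dumps))
    lines += ["filesize=2G", "full=y"]
    lines += [f'exclude=SCHEMA:"=\'{s}\'"' for s in EXCLUDED]
    lines += [f"logfile={logfile}", "JOB_NAME=expfull"]
    return "\n".join(lines) + "\n"

print(build_parfile("expdpapps_01Feb09.log"))
```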

2) Shut down all the services:
Shut down the Apps services, database, and listener, then open the database in restricted mode.
SQL>shu immediate;
SQL>host lsnrctl stop
SQL>startup restrict;

3) Back up AZ table data (conditional):
a) If you are using AZ.H, upgrade to AZ.H.DELTA.1. See document 403092.1.
b) 10.2 Data Pump does not allow tables with XML type columns to be migrated. Perform step 2 of document 402785.1.

4) Grant privilege to the source SYSTEM schema:
a) As sysdba, run the following SQL:
SQL> grant EXEMPT ACCESS POLICY to system;

5) Export the Applications database instance (Full DB):
expdp system/ parfile=auexpdp.dat
The export typically runs for several hours. Monitor the export log file for any errors.

6) Revoke privilege from the source SYSTEM schema:
SQL> revoke EXEMPT ACCESS POLICY from system;

Section III: Prepare a target database instance

1) Create target Oracle 10g (10.2.0.2) Oracle home (conditional):
If the target instance Oracle home does not exist and it is a new build, perform the following steps to install a new Oracle home.
If the Oracle home already exists, skip this step and proceed.
$ cd /stage/StageR12/startCD/Disk1/rapidwiz

$ rapidwiz -techstack
Choose the "10gR2 RDBMS" option in the techstack components window and provide the details for the new Oracle home, such as SID, port, and Oracle home location.

2) Create a working directory and copy files from source:
$ mkdir /u01/backup/expimp

Copy all the files generated in the source working directory to the target, i.e. from /dd02/backup/expimp to /u01/backup/expimp.

3) Create the target database instance:
Update the aucrdb.sql script generated on the source database server node with any necessary changes to the directory structures for the log files, data files, or tablespaces, reflecting the layout of the target database server node.
Note: This is a very important step. Before running the script, ensure all file locations are correct.
Run this from VNC:
$ sqlplus /nolog
SQL> connect / as sysdba;
SQL> spool aucrdb.log;
SQL> startup nomount;
SQL> @aucrdb.sql
SQL> exit;

4) Copy database preparation scripts to target Oracle home:
Copy the following files from the $APPL_TOP/admin directory of the source administration server node to the working directory in the target database server node: addb1020.sql, adsy1020.sql, adjv1020.sql, and admsc1020.sql

5) Set up the SYS schema:
Run this from VNC; it runs for a while:
$ sqlplus "/ as sysdba" @addb1020.sql

6) Set up the SYSTEM schema:
If the SYSTEM password is not known, go ahead and reset it:
SQL>alter user system identified by manager;
$ sqlplus system/ @adsy1020.sql

7) Install Java Virtual Machine
$ sqlplus system/ @adjv1020.sql
Attention: This script can be run only once in a given database instance, because the scripts it calls are not rerunnable.

8) Install other required components:
sqlplus system/ @admsc1020.sql FALSE SYSAUX TEMP

9) Run the adpostcrdb.sql script:
Copy the adpostcrdb.sql script, generated in Section 1, from the source administration server node to the target database server node. On the target database server node, use SQL*Plus to connect to the database instance as SYSTEM and run the following command.
$ sqlplus system/ @adpostcrdb.sql

10) Disable automatic gathering of statistics:
Copy $APPL_TOP/admin/adstats.sql from the source administration server node to the target database server node.

$ sqlplus "/ as sysdba"
SQL> shutdown normal;
SQL> startup restrict;
SQL> @adstats.sql
SQL> exit;

11) Back up the blank target database instance:
$ nohup cp -R /u01/oradata /u01/backup &

Section IV: Import the Release 12 database instance:

1) Pre Import checks:

A) Ensure the directory dmpdir has been created as /u01/backup/expimp.

B) Ensure the database is running in NOARCHIVELOG mode:

SQL> archive log list
Database log mode No Archive Mode
Automatic archival Disabled
Archive destination /u01/arch
Oldest online log sequence 130
Current log sequence 133

C) Resize the redo log files to 400M:

SQL> SELECT a.group#, a.member, b.bytes FROM v$logfile a, v$log b WHERE a.group# = b.group#;
select group#, status from v$log;
alter system switch logfile;
alter database drop logfile group 2;
alter database add logfile group 2 ('/u01/oradata/data03/log02a.dbf','/u01/oradata/data03/log02b.dbf') size 400m reuse;
alter database drop logfile group 3;
alter database add logfile group 3 ('/u01/oradata/data03/log03a.dbf','/u01/oradata/data03/log03b.dbf') size 400m reuse;
alter database drop logfile group 4;
alter database add logfile group 4 ('/u01/oradata/data03/log04a.dbf','/u01/oradata/data03/log04b.dbf') size 400m reuse;
alter database drop logfile group 1;
alter database add logfile group 1 ('/u01/oradata/data03/log01a.dbf','/u01/oradata/data03/log01b.dbf') size 400m reuse;
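Since each group gets the same drop/add pair, the statements above can be generated; a small sketch (the paths mirror this example and are assumptions for your layout):

```python
def resize_redo_statements(groups, members_per_group=2, size="400m",
                           path="/u01/oradata/data03"):
    """Emit drop/add statement pairs that rebuild each redo log group at
    the new size, two members per group as in the example above."""
    stmts = []
    for g in groups:
        members = ",".join(f"'{path}/log{g:02d}{chr(ord('a') + m)}.dbf'"
                           for m in range(members_per_group))
        stmts.append(f"alter database drop logfile group {g};")
        stmts.append(f"alter database add logfile group {g} ({members}) "
                     f"size {size} reuse;")
    return stmts

# Same order as above: current group (1) is dropped last
for s in resize_redo_statements([2, 3, 4, 1]):
    print(s)
```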
D) Create the import dump directory:
$ sqlplus system/
SQL> create directory dmpdir as '/u01/backup/expimp';

2) Import the Applications database instance:

Create the parameter file auimpdp.dat with the following parameters, modifying the log file name as appropriate.
$ cat auimpdp.dat
directory=dmpdir
dumpfile=fullexp%U.dmp,fullexp2_%U.dmp,fullexp3_%U.dmp,fullexp4_%U.dmp
full=y
transform=oid:n
logfile=FULL_import_Sep_15.log
Execute the full import from VNC.
Set the NLS_LANG parameter to match the source instance.
Eg: export NLS_LANG=American_America.UTF8
$ nohup impdp system/ parfile=auimpdp.dat &
Items to take care of while the import is running:
a) Note down the import start time.

b) The import runs for several hours, depending on the data volume and server performance.

c) Items to monitor while the import is running:
- Continuously monitor the import log file for any errors.
- Monitor the alert log file; if a tablespace runs out of space, it will be reported there.

d) Import progress check queries.

SQL> select object_name,owner, TO_CHAR((MAX(LAST_DDL_TIME)),'DD-MON-YYYY HH24:MI:SS') LAST_DDL, TO_CHAR(sysdate,'DD-MON-YYYY HH24:MI:SS')sys_date from dba_objects where last_ddl_time=(select max(last_ddl_time) from dba_objects) group by object_name,owner;

e) Query to identify tablespaces filled more than 80%:
SQL> select a.TABLESPACE_NAME, a.MAX, b.USED, round((b.USED/a.MAX)*100,2) PCT
     from (select TABLESPACE_NAME, sum(MAXBYTES/1024/1024) MAX from dba_data_files group by TABLESPACE_NAME) a,
          (select tablespace_name, sum(bytes/1024/1024) USED from dba_segments group by tablespace_name) b
     where a.TABLESPACE_NAME = b.TABLESPACE_NAME and b.USED > a.MAX*(85/100);

f) Check the errors:
egrep -n '^ERROR|ORA-|PLS-|SP2-|IMP-' FULL_import_Sep_15.log > imp_err.txt
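A rough Python equivalent of that egrep, for filtering the log on a box without egrep handy (the pattern is reconstructed from the command above):

```python
import re

# Lines starting with ERROR, or containing common Oracle error prefixes
ERROR_PAT = re.compile(r"^ERROR|ORA-|PLS-|SP2-|IMP-")

def scan_import_log(lines):
    """Return (line_number, text) pairs for suspicious log lines,
    the way 'egrep -n' does."""
    return [(n, line) for n, line in enumerate(lines, 1)
            if ERROR_PAT.search(line)]

sample = [
    "Processing object type DATABASE_EXPORT/SCHEMA/TABLE/TABLE",
    "ORA-39082: Object type ALTER_PROCEDURE created with compilation warnings",
    'Job "SYSTEM"."SYS_IMPORT_FULL_01" completed',
]
print(scan_import_log(sample))
```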

3) Revoke privilege from target system schema:
SQL> revoke EXEMPT ACCESS POLICY from system;
Section V: Post Import Steps:

1) Reset Advanced Queues:
Run the auque2.sql script generated on the source database to restore the Advanced Queue settings that were lost during the export/import process.

$ sqlplus /nolog
SQL> connect / as sysdba;
SQL> @auque2.sql

2) Run adgrants.sql:
Copy $APPL_TOP/admin/adgrants.sql from the administration server node to the working directory in the database server node and execute it.
$ sqlplus "/ as sysdba" @adgrants.sql APPLSYS

3) Grant create procedure privilege on CTXSYS:
Copy $AD_TOP/patch/115/sql/adctxprv.sql from the administration server node to the database server node and run it as apps.
sqlplus apps/ @adctxprv.sql CTXSYS

4) Gather statistics for SYS schema:
Copy $APPL_TOP/admin/adstats.sql from the administration server node to the working directory in the database server node and execute it.
$ sqlplus "/ as sysdba"
SQL> shutdown normal;
SQL> startup restrict;
SQL> @adstats.sql
SQL> shutdown normal;
SQL> startup;
SQL> exit;

5) Drop and recreate the database links:
Drop and recreate the DB links APPS_TO_APPS and EDW_APPS_TO_WH

6)Restore AZ table data (conditional)
If you performed the step to back up the AZ table data, Perform steps 4 and 5 of document 402785.1 on OracleMetaLink to restore the AZ table data.

7)Create OWA_MATCH package (conditional)
If you are using iAS 1.0.2, perform the steps in document 312165.1 to create SYS.OWA_MATCH on the target database.

8)Create ConText and Spatial objects
Certain ConText and Spatial objects are not preserved by the import process. Run the following.
$ perl $AU_TOP/patch/115/driver/dpost_imp.pl dpost_imp.drv
Once the driver file has been generated, use AutoPatch to apply it on the target administration server node.

9) Compile invalid objects
$ sqlplus "/ as sysdba" @$ORACLE_HOME/rdbms/admin/utlrp.sql

10) Maintain Applications database objects
Run AD Administration on the target administration server node. From the Maintain Applications Database Objects menu, perform the following tasks:
a. Compile flexfield data in AOL tables
b. Recreate grants and synonyms for APPS schema

11) Start Applications server processes
Start all the server processes on the target Applications system. You can allow users to access the system at this time.

12) Create DQM indexes
Create DQM indexes by following these steps:
a) Log on to Oracle Applications with the "Trading Community Manager" responsibility
b) Click Control > Request > Run
c) Select "Single Request" option
d) Enter "DQM Staging Program" name
e) Enter the following parameters:
Number of Parallel Staging Workers: 4
Staging Command: CREATE_INDEXES
Continue Previous Execution: NO
Index Creation: SERIAL
f) Click "Submit"


Database Links

Database Links:
---------------

A database link allows you to access data in a remote database as if it were a local table's data.
The connection can be made to a database hosted on the same machine or remotely on another
server. When you query data in a different database, you are essentially using a distributed query. It is also possible to execute distributed transactions (insert,
update, and delete) through database links. The authentication used by a database link is either the link owner's credentials or a hard-coded username
and password, depending on the way the link was created.

A database link in Oracle can be either private (for use only by the link's owner) or public (accessible to any user with the correct privileges). Connectivity is established via SQL*Net, using an alias defined in the local database server's tnsnames.ora.


Example:
----------

export TNS_ADMIN=$ORACLE_HOME/network/admin

cd $TNS_ADMIN

vi tnsnames.ora

-- add the entry for the target database that needs to be accessed
e.g.:

TEST =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = hyderabad.oracle.com)(PORT = 1521))
)
(CONNECT_DATA =
(SERVICE_NAME = TEST)
)
)


tnsping TEST -- it should work; if not, check the listener on the target server.

sqlplus '/as sysdba'

SQL>CREATE DATABASE LINK LINK_TEST CONNECT TO SCOTT IDENTIFIED BY TIGER USING 'TEST';

SQL>SELECT * FROM DBA_DB_LINKS;
SQL>SELECT * FROM ALL_DB_LINKS;

SQL>select * from BONUS@LINK_TEST; -- it should return results from the target DB

SQL>DROP DATABASE LINK LINK_TEST; -- To Drop Database Link

To Create a Public Database Link


A PUBLIC database link is a DB link which can be accessed by all users in the database.

Syntax to create a PUBLIC database link:

SQL>CREATE PUBLIC DATABASE LINK LINK_TEST CONNECT TO SCOTT IDENTIFIED BY TIGER USING 'TEST';

Syntax to drop a PUBLIC database link:

SQL>Drop PUBLIC DATABASE LINK LINK_TEST;


Transportable Tablespaces

Transportable TS on the same Platform
--------------------------------------

Source
------

SQL>select tablespace_name,status from dba_tablespaces;


SQL>alter tablespace USERDATA read only;


SQL> exec dbms_tts.transport_set_check('USERDATA');

PL/SQL procedure successfully completed.


SQL> select * from transport_set_violations;

no rows selected

$exp file=userdata.dmp transport_tablespace=Y tablespaces=USERDATA

(at the username prompt, enter: / as sysdba)


$cp /prod/lab/oradata/userdata01.dbf /prod/rk/oradata

$cp /prod/lab/oradata/userdata02.dbf /prod/rk/oradata

SQL> alter tablespace userdata read write;

SQL> select username,default_tablespace from dba_users;

USER2 USERDATA
USER1 USERDATA

Target
-------

grant connect,resource to user2 identified by user2;
grant connect,resource to user1 identified by user1;

$imp file=userdata.dmp transport_tablespace=Y tablespaces=USERDATA datafiles=('/prod/rk/oradata/userdata01.dbf','/prod/rk/oradata/userdata02.dbf')

/ as sysdba

SQL>select tablespace_name from dba_tablespaces;


SQL>alter tablespace USERDATA read write;

SQL> alter user user1 default tablespace USERDATA;

SQL> alter user user2 default tablespace USERDATA;

SQL>conn user1/user1

SQL>select * from tab;



Transportable Tablespaces across Platforms
--------------------------------------------

From HP Tru64 to Linux


source
------

SQL>select * from v$transportable_platform;

-- The endian format of the source and target platforms should match --

SQL> select tablespace_name,status from dba_tablespaces;


SQL> alter tablespace USERDATA read only;


SQL> create directory tts as '/prod/lab/tts';


$mkdir -p /prod/lab/tts


select * from dba_directories;


SQL> exec dbms_tts.transport_set_check('USERDATA');

PL/SQL procedure successfully completed.

SQL> select * from transport_set_violations;

no rows selected


$ expdp directory=tts dumpfile=userdata.dmp transport_tablespaces=USERDATA


/ as sysdba


cd /prod/lab/tts/
ls


cp /prod/lab/oradata/userdata01.dbf /prod/lab/tts
cp /prod/lab/oradata/userdata02.dbf /prod/lab/tts


SQL>alter tablespace userdata read write;

SQL> select username,default_tablespace from dba_users;

Target
------

$cp -R /prod/lab/tts /prod/rk/tts

SQL>create directory tts as '/prod/rk/tts';

SQL>grant connect,resource to user2 identified by user2;
SQL>grant connect,resource to user1 identified by user1;

$impdp directory=tts dumpfile=userdata.dmp transport_datafiles=('/prod/rk/tts/userdata01.dbf','/prod/rk/tts/userdata02.dbf')

/as sysdba

SQL>select tablespace_name from dba_tablespaces;


SQL>alter tablespace USERDATA read write;

SQL> alter user user1 default tablespace USERDATA;

SQL> alter user user2 default tablespace USERDATA;

SQL>conn user1/user1

SQL>select * from tab;



Oracle Database Recovery Scenarios


The following outlines a few Oracle Database recovery scenarios for different failure cases.

Scenario 1 - Loss of control, redo, and data files
-------------------------------------

sqlplus '/as sysdba'

SQL>startup;
SQL>archive log list; - DB must be running in archive log mode

SQL>conn user1/user1
SQL>select * from emp;
SQL>insert into emp select * from emp;
/
/
/
/
SQL> commit;
SQL>alter system switch logfile;
SQL>exit;

$cd /prod/rk/oradata
$rm -f *

$sqlplus '/as sysdba'

SQL>select name from v$database;

-- You will see Error Here

SQL>shut abort;
SQL>exit;

$cd /prod/rk/backup/cold

$cp * /prod/rk/oradata

$sqlplus '/as sysdba'
SQL>startup mount;

SQL>alter database recover automatic using backup controlfile until cancel;

--
--
--
--
SQL>recover cancel;

SQL>alter database open resetlogs; -- The arch log seq number will be set to 1.

SQL>select name from v$Database;



Scenario 2 - Loss of Non-SYSTEM datafile
----------------------------------------



sqlplus '/as sysdba'

SQL>startup;

SQL>conn user1/user1

SQL>insert into emp select * from emp;
/
/
/
SQL>commit;
SQL>select USERNAME,DEFAULT_TABLESPACE from dba_users;
SQL> select file_name from dba_data_files where tablespace_name='TS1';

SQL>exit;


$cd /u03/praveen/oradata
$ rm ts1a.dbf


$sqlplus user1/user1

SQL>select * from emp;

-- You will see Error Here

SQL>alter database datafile '/u03/praveen/oradata/ts1a.dbf' offline;

SQL>exit;

$cp /u03/praveen/backup/cold/ts1a.dbf /u03/praveen/oradata

sqlplus '/as sysdba'

SQL>recover datafile '/u03/praveen/oradata/ts1a.dbf';

SQL>alter database datafile '/u03/praveen/oradata/ts1a.dbf' online;

SQL>conn user1/user1

SQL>select * from emp;


Scenario 3 - Loss of SYSTEM datafile
------------------------------------
sqlplus '/as sysdba'

SQL>startup;

SQL>conn user1/user1

SQL>insert into emp select * from emp;
/
/
/
SQL>commit;


$cd /u03/praveen/oradata
$ rm system01.dbf


$sqlplus user1/user1

SQL>select * from emp;

-- You will see Error Here

SQL>shu abort;


$cp /u03/praveen/backup/cold/system01.dbf /u03/praveen/oradata

sqlplus '/as sysdba'

SQL>startup mount;

SQL>recover database;

SQL>alter database open;

SQL>select name from v$database;

SQL>select * from emp;



Scenario 4 - Point in Time Recovery
-------------------------------------

SQL>conn user1/user1

SQL>select count(*) from emp;

SQL>select count(*) from salgrade;

SQL>insert into emp select * from emp;
SQL>/
SQL>commit;

SQL>set time on

SQL>select count(*) from emp; - Note down the records

SQL>drop table emp purge;

SQL>insert into salgrade select * from salgrade;
SQL>/
SQL>/
SQL>/
SQL>commit;


SQL>conn /as sysdba

SQL>alter system switch logfile;
SQL>/

SQL>shu immediate;

$cd /prod/lab/oradata

$rm -rf *

$cp /prod/lab/backup/cold/* /prod/lab/oradata

$sqlplus '/as sysdba'

SQL>startup mount;

SQL>recover database using backup controlfile until time '2010-09-16:16:18:49';

-- AUTO --


Scenario 5 - Loss of Log files
-------------------------------------

conn user1/user1

SQL>insert into emp select * from emp;
/
/
SQL>commit;
SQL>exit;

$cd /prod/lab/oradata/

$rm redo1a.log redo2a.log

sqlplus user1/user1
SQL>insert into emp select * from emp;
SQL>/
/
/
/


-- DB will Hang Here -- Open alert log and check

From other session

sqlplus '/as sysdba'
SQL>shu abort;

$cd /prod/lab/backup/cold

cp *.ctl *.dbf /prod/lab/oradata

sqlplus '/as sysdba'

SQL>startup mount;

SQL>recover database using backup controlfile until cancel;

SQL>alter database open resetlogs;



Scenario 6: Loss of Undo Datafile
-------------------------------------

sqlplus user1/user1

SQL>insert into emp select * from emp;
/
/
/
SQL>commit;
SQL>exit;

$cd /prod/lab/oradata
rm labundo1.dbf

sqlplus user1/user1

SQL>insert into emp select * from emp;
/
/

-- Error -- Undo TBS missing

SQL>conn /as sysdba
SQL>shu abort
SQL>exit;

$cd /prod/lab/backup/cold

cp labundo1.dbf /prod/lab/oradata

sqlplus '/as sysdba'

SQL>startup mount;
SQL>select * from v$recover_file;
SQL>recover datafile '/prod/lab/oradata/labundo1.dbf'
SQL>alter database open;
SQL>select * from v$recover_file;


Scenario 7: Loss of 1 Control File when Control Files are Multiplexed
--------------------------------------------------------------------------


sqlplus '/as sysdba'

SQL>show parameter control

-- Ensure you have 2 control files --

SQL>conn user1/user1

SQL>insert into emp select * from emp;
/
/
/
SQL>commit;
SQL>exit;

$cd /prod/lab/oradata
$rm cntrl01.ctl

sqlplus user1/user1

SQL>alter database datafile 4 offline;

-- Error Control File Missing --

SQL>shu abort
SQL>exit;

$cd /prod/lab/oradata
$cp cntrl02.ctl cntrl01.ctl

sqlplus '/as sysdba'

SQL>startup;

SQL>select * from user1.emp;



Scenario 8: Loss of all control Files
-------------------------------------

SQL>conn user1/user1

SQL>insert into emp select * from emp;
/
/
/
SQL>commit;
SQL>exit;

$cd /prod/lab/oradata
$rm cntrl01.ctl

sqlplus '/as sysdba'

SQL>alter database datafile 4 offline;

-- Error Control File Missing --

SQL>shu abort
SQL>exit;

$cd /prod/lab/oradata
$cp /prod/lab/backup/cold/cntrl01.ctl .

sqlplus '/as sysdba'
SQL>startup mount;
SQL>recover database using backup controlfile until cancel;

SQL>alter database open resetlogs;

SQL>select * from v$recover_file;




Scenario 9: Loss of control Files with no Backup
--------------------------------------------------------------------------

SQL>conn user1/user1

SQL>insert into emp select * from emp;
/
/
/
SQL>commit;

SQL>exit;

sqlplus '/as sysdba'

SQL>alter database backup controlfile to trace as '/prod/lab/crt_ctl.sql';

SQL>exit;

$cd /prod/lab/oradata
$rm cntrl01.ctl

sqlplus '/as sysdba'

SQL>alter database datafile 4 offline;

-- Error Control File Missing --

SQL>shu abort
SQL>exit;

$ vi /prod/lab/crt_ctl.sql

-- Edit the script: keep only the first (NORESETLOGS) section --

:wq!

sqlplus '/as sysdba'

SQL>@/prod/lab/crt_ctl.sql

SQL>alter database open;

-- If you face any error --
-- SQL>recover database;
-- SQL>alter database open;


SQL>select count(*) from user1.emp;


SQL>alter database open resetlogs;

SQL>select * from v$recover_file;


Scenario 10: Loss of a datafile with no backup
-------------------------------------

sqlplus user1/user1


SQL>insert into emp select * from emp;
/
/
/
SQL>commit;

SQL>conn /as sysdba

SQL>alter tablespace userdata add datafile '/prod/lab/oradata/userdata02.dbf' size 500M;

SQL>conn user1/user1

SQL>insert into emp select * from emp;
/
/
SQL>commit;

SQL>exit;

$cd /prod/lab/oradata
$rm userdata02.dbf

sqlplus user1/user1

SQL>insert into emp select * from emp;
/
/
/

-- Error - Datafile Missing --

SQL>conn /as sysdba

SQL>shu abort;

sqlplus '/as sysdba'

SQL>startup mount;

SQL>alter database create datafile '/prod/lab/oradata/userdata02.dbf';

SQL>recover datafile '/prod/lab/oradata/userdata02.dbf';

SQL>alter database open;


Scenario 11: How to recover from Hot Backup
------------------------------------------------------
sqlplus '/as sysdba'

SQL>select * from v$backup;

conn user1/user1

SQL>insert into salgrade select * from salgrade;

SQL>commit;

SQL>conn /as sysdba

SQL>alter tablespace userdata begin backup; -- Taking a hot backup of tablespace USERDATA

SQL>conn user1/user1

SQL>insert into salgrade select * from salgrade;

SQL>commit;

SQL>select count(*) from salgrade;

SQL>exit;

$cd /prod/lab/backup

$mkdir hot

$cd /prod/lab/oradata

$cp userdata01.dbf userdata02.dbf /prod/lab/backup/hot

$sqlplus '/as sysdba'

SQL>select * from v$backup;

SQL>alter tablespace userdata end backup;

SQL>alter system switch logfile;
/
/


SQL>conn user1/user1

SQL>insert into salgrade select * from salgrade;
/
/

SQL>commit;

SQL>select count(*) from salgrade;

SQL>exit;

$cd /prod/lab/oradata

$rm userdata01.dbf

sqlplus user1/user1

SQL>insert into salgrade select * from salgrade;
/
/
/
-- Error userdata datafile missing --

SQL>conn /as sysdba

SQL>alter database datafile '/prod/lab/oradata/userdata01.dbf' offline;

SQL>exit;

$cp ../backup/hot/userdata01.dbf .

sqlplus '/as sysdba'

SQL>recover datafile '/prod/lab/oradata/userdata01.dbf';
- AUTO -

SQL>alter database datafile '/prod/lab/oradata/userdata01.dbf' online;

SQL>select * from v$recover_file;

SQL>select count(*) from user1.salgrade;

SQL>exit;



What is FNDLOAD

What is FNDLOAD
==============


The Generic Loader is a concurrent program named FNDLOAD. The concurrent executable takes the following parameters:
FNDLOAD apps/pwd 0 Y mode configfile datafile entity [ param ... ]

where

<apps/pwd> The APPS schema and password in the form
username/password[@connect_string]. If connect_string is omitted, it is taken in a
platform-specific manner from the environment using the name TWO_TASK.

<0 Y> Concurrent program flags.

<mode> UPLOAD or DOWNLOAD. UPLOAD causes the datafile to be uploaded to the
database. DOWNLOAD causes the loader to fetch rows and write them to the datafile.

<configfile> The configuration file to use (usually with a suffix of .lct, but not enforced or
supplied by the loader).

<datafile> The data file to write (usually with a suffix of .ldt, but not enforced or supplied by
the loader). If the data file already exists, it will be overwritten.

<entity> The entity(ies) to upload or download. When uploading, you should always upload
all entities, so specify a "-" to upload all entities.

<[param]> Zero or more additional parameters used to provide bind values in the access
SQL (both UPLOAD and DOWNLOAD). Each parameter is in the form
NAME=VALUE. NAME should not conflict with an attribute name for the
entities being loaded.



Syntax
=======

An example of downloading is:
----------------------------

FNDLOAD apps/apps@TEST 0 Y DOWNLOAD testcfg.lct out.ldt FND_APPLICATION_TL APPSNAME=FND

This command does the following:

connects to apps/apps@TEST
downloads data using the configuration file testcfg.lct
writes the data to the data file out.ldt
downloads the FND_APPLICATION_TL entity with the APPSNAME parameter set to the value FND


An example of uploading is:
--------------------------

FNDLOAD apps/apps@custdb 0 Y UPLOAD fndapp.lct fnd_exmpl.ldt

This command does the following:
connects to apps/apps@custdb
uploads data using the configuration file fndapp.lct from the data file fnd_exmpl.ldt
The contents of the entire data file are uploaded.


FNDLOAD Load Options
---------------------

Application level OR single entity level download
Entire OR Partial upload of a data file
Custom mode force update
Support for NLS uploads
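As an illustration of the custom-mode force update option, a CUSTOM_MODE=FORCE parameter is commonly appended to an upload command. This is a sketch against the concurrent-program configuration file used in the examples below; verify the option against your release before relying on it:

```
FNDLOAD apps/apps@db 0 Y UPLOAD $FND_TOP/patch/115/import/afcpprog.lct file_name.ldt CUSTOM_MODE=FORCE
```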



AOL ENTITIES
-------------

FNDLOAD can be used for the following Application Object Library entities.


Concurrent program definitions
Request groups
Request Sets
Lookup types and lookup values
Profile options and profile option values
Flexfields setup data
Flexfield Value Sets
Descriptive Flexfields
Key Flexfields
Flexfield Value Security Rules
Flexfields Value Hierarchies (Rollup Groups)
Flexfields Values
Attachments definitions
Messages
Menus
Responsibilities
Security information
Etc..



Examples:
========

A)Concurrent Program Definitions:
---------------------------------

The concurrent program configuration file afcpprog.lct downloads and uploads concurrent program definitions. It takes the program name and the application name as parameters.


Download
-----------
FNDLOAD apps/apps@db 0 Y DOWNLOAD $FND_TOP/patch/115/import/afcpprog.lct file_name.ldt PROGRAM APPLICATION_SHORT_NAME="prod" CONCURRENT_PROGRAM_NAME="concurrent name"

Upload
-----------
FNDLOAD apps/apps@db 0 Y UPLOAD $FND_TOP/patch/115/import/afcpprog.lct file_name.ldt

B)Request Groups Configuration File:
------------------------------------

Use the file afcpreqg.lct for loading request group data.


Download
---------

FNDLOAD apps/apps@db 0 Y DOWNLOAD $FND_TOP/patch/115/import/afcpreqg.lct file_name.ldt REQUEST_GROUP REQUEST_GROUP_NAME="request group" APPLICATION_SHORT_NAME="prod"

Upload
------
FNDLOAD apps/apps@db 0 Y UPLOAD $FND_TOP/patch/115/import/afcpreqg.lct file_name.ldt

C)Request Sets Configuration File
----------------------------------

Use the file afcprset.lct for loading request set data.

Download
---------
FNDLOAD apps/apps@db 0 Y DOWNLOAD $FND_TOP/patch/115/import/afcprset.lct file_name.ldt REQ_SET APPLICATION_SHORT_NAME="prod" REQUEST_SET_NAME="request set"

Upload
---------
FNDLOAD apps/apps@db 0 Y UPLOAD $FND_TOP/patch/115/import/afcprset.lct file_name.ldt


D)Lookups Configuration File
----------------------------

Use the file aflvmlu.lct for loading lookup types and lookup values.

Download
---------
FNDLOAD apps/apps@db 0 Y DOWNLOAD $FND_TOP/patch/115/import/aflvmlu.lct file_name.ldt FND_LOOKUP_TYPE APPLICATION_SHORT_NAME="prod" LOOKUP_TYPE="lookup name"

Upload
-----
FNDLOAD apps/apps@db 0 Y UPLOAD $FND_TOP/patch/115/import/aflvmlu.lct file_name.ldt

E)Profile Options and Profile Values Configuration File:
-------------------------------------------------------
Use the file afscprof.lct for loading profile options and profile values.

Download
---------
FNDLOAD apps/apps@db 0 Y DOWNLOAD $FND_TOP/patch/115/import/afscprof.lct file_name.ldt PROFILE PROFILE_NAME="profile option" APPLICATION_SHORT_NAME="prod"

Upload
---------

FNDLOAD apps/apps@db 0 Y UPLOAD $FND_TOP/patch/115/import/afscprof.lct file_name.ldt



F)Flexfield Value Sets:
----------------------
The entity VALUE_SET includes the table details of table-validated value sets and the user-exit details of special/pair-validated value sets. It does not include the values, security rules, rollup groups, or value hierarchies for the value set; these entities should be downloaded and uploaded separately.
The key for this entity is FLEX_VALUE_SET_NAME.

Download
---------
FNDLOAD apps/apps@db 0 Y DOWNLOAD $FND_TOP/patch/115/import/afffload.lct file_name.ldt VALUE_SET FLEX_VALUE_SET_NAME="value set name"

Upload
---------

FNDLOAD apps/apps@db 0 Y UPLOAD $FND_TOP/patch/115/import/afffload.lct file_name.ldt

G)Responsibilities Configuration File:
-------------------------------------
Use the file afscursp.lct for downloading and uploading responsibility definitions.

Download
---------

FNDLOAD apps/apps@db 0 Y DOWNLOAD $FND_TOP/patch/115/import/afscursp.lct file_name.ldt FND_RESPONSIBILITY RESP_KEY="responsibility"

Upload
---------

FNDLOAD apps/apps@db 0 Y UPLOAD $FND_TOP/patch/115/import/afscursp.lct file_name.ldt


Download Scripts:
-----------------

Concurrent Program

FNDLOAD apps/apps@db 0 Y DOWNLOAD $FND_TOP/patch/115/import/afcpprog.lct file_name.ldt PROGRAM APPLICATION_SHORT_NAME="prod" CONCURRENT_PROGRAM_NAME="concurrent name"

Request Groups

FNDLOAD apps/apps@db 0 Y DOWNLOAD $FND_TOP/patch/115/import/afcpreqg.lct file_name.ldt REQUEST_GROUP REQUEST_GROUP_NAME="request group" APPLICATION_SHORT_NAME="prod"

Request Sets

FNDLOAD apps/apps@db 0 Y DOWNLOAD $FND_TOP/patch/115/import/afcprset.lct file_name.ldt REQ_SET APPLICATION_SHORT_NAME="prod" REQUEST_SET_NAME="request set"

Lookups

FNDLOAD apps/apps@db 0 Y DOWNLOAD $FND_TOP/patch/115/import/aflvmlu.lct file_name.ldt FND_LOOKUP_TYPE APPLICATION_SHORT_NAME="prod" LOOKUP_TYPE="lookup name"

Profile Options

FNDLOAD apps/apps@db 0 Y DOWNLOAD $FND_TOP/patch/115/import/afscprof.lct file_name.ldt PROFILE PROFILE_NAME="profile option" APPLICATION_SHORT_NAME="prod"

Value Sets

FNDLOAD apps/apps@db 0 Y DOWNLOAD $FND_TOP/patch/115/import/afffload.lct file_name.ldt VALUE_SET FLEX_VALUE_SET_NAME="value set name"

Descriptive Flexfield

FNDLOAD apps/apps@db 0 Y DOWNLOAD $FND_TOP/patch/115/import/afffload.lct file_name.ldt DESC_FLEX P_LEVEL="COL_ALL:REF_ALL:CTX_ONE:SEG_ALL" APPLICATION_SHORT_NAME="prod" DESCRIPTIVE_FLEXFIELD_NAME="desc flex name" P_CONTEXT_CODE="context name"

Key Flexfield

FNDLOAD apps/apps@db 0 Y DOWNLOAD $FND_TOP/patch/115/import/afffload.lct file_name.ldt KEY_FLEX P_LEVEL="COL_ALL:FQL_ALL:SQL_ALL:STR_ONE:WFP_ALL:SHA_ALL:CVR_ALL:SEG_ALL" APPLICATION_SHORT_NAME="prod" ID_FLEX_CODE="key flex code" P_STRUCTURE_CODE="structure name"

Value Sets with values

FNDLOAD apps/apps@db 0 Y DOWNLOAD $FND_TOP/patch/115/import/afffload.lct file_name.ldt VALUE_SET_VALUE FLEX_VALUE_SET_NAME="value set name"

Messages

FNDLOAD apps/apps@db 0 Y DOWNLOAD $FND_TOP/patch/115/import/afmdmsg.lct file_name.ldt FND_NEW_MESSAGES APPLICATION_SHORT_NAME="prod" MESSAGE_NAME="message name"

Menus

FNDLOAD apps/apps@db 0 Y DOWNLOAD $FND_TOP/patch/115/import/afsload.lct file_name.ldt MENU MENU_NAME="menu_name"

Responsibilities

FNDLOAD apps/apps@db 0 Y DOWNLOAD $FND_TOP/patch/115/import/afscursp.lct file_name.ldt FND_RESPONSIBILITY RESP_KEY="responsibility"

Printer Styles

FNDLOAD apps/apps@db 0 Y DOWNLOAD $FND_TOP/patch/115/import/afcppstl.lct file_name.ldt STYLE PRINTER_STYLE_NAME="printer style name"

Spread Table

FNDLOAD apps/apps@db 0 Y DOWNLOAD $FND_TOP/patch/115/import/jtfgrid.lct IEX_CASE.ldt JTF_GRID_DATASOURCES GRID_DATASOURCE_NAME="IEX_CASE"



Upload Scripts:
----------------

Concurrent Program

FNDLOAD apps/apps@db 0 Y UPLOAD $FND_TOP/patch/115/import/afcpprog.lct file_name.ldt

Request Groups

FNDLOAD apps/apps@db 0 Y UPLOAD $FND_TOP/patch/115/import/afcpreqg.lct file_name.ldt

Request Sets

FNDLOAD apps/apps@db 0 Y UPLOAD $FND_TOP/patch/115/import/afcprset.lct file_name.ldt

Lookups

FNDLOAD apps/apps@db 0 Y UPLOAD $FND_TOP/patch/115/import/aflvmlu.lct file_name.ldt

Profile Options

FNDLOAD apps/apps@db 0 Y UPLOAD $FND_TOP/patch/115/import/afscprof.lct file_name.ldt

Value Sets

FNDLOAD apps/apps@db 0 Y UPLOAD $FND_TOP/patch/115/import/afffload.lct file_name.ldt

Descriptive Flexfield

FNDLOAD apps/apps@db 0 Y UPLOAD $FND_TOP/patch/115/import/afffload.lct file_name.ldt

Key Flexfield

FNDLOAD apps/apps@db 0 Y UPLOAD $FND_TOP/patch/115/import/afffload.lct file_name.ldt

Value Sets with values

FNDLOAD apps/apps@db 0 Y UPLOAD $FND_TOP/patch/115/import/afffload.lct file_name.ldt

Messages

FNDLOAD apps/apps@db 0 Y UPLOAD $FND_TOP/patch/115/import/afmdmsg.lct file_name.ldt

Menus

FNDLOAD apps/apps@db 0 Y UPLOAD $FND_TOP/patch/115/import/afsload.lct file_name.ldt

Responsibilities

FNDLOAD apps/apps@db 0 Y UPLOAD $FND_TOP/patch/115/import/afscursp.lct file_name.ldt

Printer Styles

FNDLOAD apps/apps@db 0 Y UPLOAD $FND_TOP/patch/115/import/afcppstl.lct file_name.ldt

Spread Table

FNDLOAD apps/apps@db 0 Y UPLOAD $FND_TOP/patch/115/import/jtfgrid.lct IEX_CASE.ldt


FRM-92101 Error
===============

Error: "FRM-92101, found the following error in FormsGroup.0.stdout - Forms session <84> aborted: unable to communicate with runtime process."

Workaround: in formsweb.cfg, modify

serverURL=/forms/formservlet

to

serverURL=/forms/formservlet?ifip=100.100.100.100

This cannot be used as a permanent fix, because after this change all connections coming to Forms will be treated as if they are coming from the IP address 100.100.100.100.

The log files and the trace files will also show the client IP address as 100.100.100.100, irrespective of the actual client IP address.

So the current workaround can be used as a temporary fix until the issue is permanently resolved by applying the required patch.


MWA - Cannot establish telnet session to MWA server
===================================================

Check whether you are able to telnet to the box from your local machine's command prompt.

Example:
From the local machine CMD prompt:
C:\>telnet test.oracle.com 30000

If you are not able to connect, the MWA server is down.

You have to check all the telnet ports on which the MWA server is running.

For MWA and Dispatcher port information:
========================================

cd $MWA_TOP/secure
file: "mwa.cfg"
grep for "mwa.TelnetPortNumber" and "mwa.Dispatcher"
e.g.:
mwa.TelnetPortNumber=30000,30002,30004
mwa.Dispatcher=test.oracle.com:30030

From an SSH session on the box, check whether the ports are listening.

Example:
$ netstat -an | grep 30000

The log details are available in the nohup.out file.
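The single-port check above can be generalized: the list of telnet ports can be parsed straight out of mwa.cfg and looped over. A minimal sketch follows; it writes a sample mwa.cfg to a temp file purely for illustration, so on a real server point cfg at $MWA_TOP/secure/mwa.cfg and uncomment the netstat line:

```shell
# Sketch: pull the configured MWA telnet ports from mwa.cfg and check each.
# A sample config is written to a temp file for illustration only.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
mwa.TelnetPortNumber=30000,30002,30004
mwa.Dispatcher=test.oracle.com:30030
EOF

# Grab the comma-separated port list and split it on commas.
ports=$(grep '^mwa.TelnetPortNumber' "$cfg" | cut -d= -f2 | tr ',' ' ')
for p in $ports; do
    echo "checking port $p"
    # On the real server: netstat -an | grep ":$p " | grep -i listen
done
rm -f "$cfg"
```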


Start MWA server
=================

MWA start/stop scripts are available in $MWA_TOP/bin directory

Start the MWA server with the following commands:

cd $MWA_TOP/bin

$nohup ./mwactl.sh start 30000 1> nohup.out 2>&1 &

$nohup ./mwactl.sh start 30002 1> nohup.out 2>&1 &

$nohup ./mwactl.sh start 30004 1> nohup.out 2>&1 &

$nohup ./mwactl.sh start_dispatcher 1> nohup.out 2>&1 &

Execute all four commands above.


Sysadmin User Responsibilities Missing
======================================

ISSUE: In Oracle Applications, when logging in as SYSADMIN, all of the sysadmin user's responsibilities are missing.

To fix this issue, perform the steps below.

1.) Execute the following select:

SELECT *
FROM Fnd_Responsibility_vl
WHERE Application_Id = 1
AND Responsibility_Name = 'System Administrator'
AND Start_Date <= Sysdate
AND ( End_Date is NULL OR End_Date > Sysdate );

2.) Execute the following select:

SELECT count(*) FROM Fnd_User_Resp_Groups;

3.a) Create a backup table:

CREATE TABLE wf_role_hierarchies_copy AS SELECT * FROM wf_role_hierarchies;

3.b) Truncate the existing table:

TRUNCATE TABLE applsys.wf_role_hierarchies;

3.c) Run affurgol.sql with the FORCE option:

sqlplus apps/xxxx @$FND_TOP/patch/115/sql/affurgol.sql FORCE

3.d) Restore the data from the backup table:

INSERT INTO wf_role_hierarchies SELECT * FROM wf_role_hierarchies_copy;

4.) Verify the output of the following SQL statement at this point:

SELECT count(*) FROM Fnd_User_Resp_Groups;



Pasta to PDF printing Setup
===========================

Pasta Introduction:
------------------
Pasta technology provides multilingual support in Oracle E-Business Suite, particularly in printing (Pasta), viewing a report online (Vpasta), and generating bitmap reports (IX Library). Pasta works on UTF8 as well as the other character sets that Oracle supports.

Pasta is part of the E-Business Suite Application Object Library (FND). Pasta enables multilingual handling by taking advantage of the PostScript language. Pasta converts input content from text format to PostScript format to add the multilingual support. During the PostScript conversion, Pasta includes all the necessary information in addition to the language handling, such as paper size, font size, and so on. After converting the input, Pasta sends the content to the preprocess command, if one is set in the preprocess option of the Pasta configuration file (pasta.cfg), before printing. The content is then sent to the printer command that is specified in the printCommand option in the configuration file.


Prerequisites:
-------------

The printer should be set up at the operating system level. Use the lpstat -a command to check whether the printer is configured at the OS level.

Configuration:
--------------


Pasta patch 3325651 is provided as a standard AD patch of Oracle Applications, and it needs to be applied on the concurrent processing node.

Setting up your system to use Pasta is much simpler than the standard Oracle Applications printer setup procedure. The Printer Type, Printer Driver, and SRW driver files are provided. The only setup required to begin printing is the registration of the printer with Oracle E-Business Suite.

Pasta comes with default Printer type:

Responsibility: System Administrator Menu Path: Install->Printer->Type

--PASTA Universal Printer Type. We can use the same printer type or configure a new printer type.

Associate the printer type with different printers as required.

Responsibility: System Administrator Menu Path: Install->Printer->Register

Pasta configuration files are located in the $FND_TOP/resource directory.

Pasta looks for a printer-specific file named pasta_<printer name>.cfg in the above directory. If the file is not present, it will look for the file pasta.cfg.

You can change the file that is used as the default configuration file for Pasta by using the -F command line parameter.

For example, suppose you create a PCL-specific configuration file named pcl.cfg. Set the FNDPSTAX command line option as follows: -Fpcl.cfg

Pasta will look for pcl_<printer name>.cfg first, and if it is not found, Pasta will use pcl.cfg as the default.

These files must be placed under the $FND_TOP/resource directory.

The -F command line parameter can be set in the Arguments field of the Printer Drivers window.


pasta_pdf.cfg is the file used by Pasta to print PDF output that has been converted to PostScript by a third-party utility. The driver name is PASTA_PDF.

Ex:

In pasta_pdf.cfg: preprocess=pdf2ps {infile} {outfile}


The configuration file pasta.cfg governs many printing options. The file is a simple ASCII text file.

Some important options:

1. OutputFormat = ps/text (ps is for printers that support the PostScript format; text is for printers that do not support PostScript)

2. Preprocess = (a filter command to run before printing, as in the pdf2ps example above)

3. PrintCommand = lp (the print command for the UNIX platform)

4. Pagewidth

5. Pageheight

There are a lot of other options that can be configured, such as font name and font size. Please refer to the PASTA3.pdf file uploaded for details.
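Putting the options above together, a pasta.cfg can contain entries like the following sketch. The values are illustrative only, not shipped defaults; check the pasta.cfg delivered under $FND_TOP/resource for the exact option spellings on your release:

```
printCommand=lp -c -d{printername}
OutputFormat=ps
preprocess=pdf2ps {infile} {outfile}
```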

With reference to the work handled by the ohsconfig team, we receive TARs related to Pasta configuration, troubleshooting, and changing parameters in the Pasta configuration files (pasta.cfg, etc.).

For most changes made to the configuration files, it is necessary to bounce the concurrent managers for the changes to take effect.