Friday, June 10, 2011

Purging and performance-related Concurrent Requests for 11i and R12

The seeded purge programs below help remove unwanted data and improve the performance of Oracle Applications.
These requests are applicable to both 11i and R12.

Note: These purge programs delete data permanently, so if you have any other business requirements, please be cautious.

1)The "WORKFLOW CONTROL QUEUE CLEANUP" concurrent program should be scheduled to run at least every 12 HOURS.
2)A workflow purge request set with the following parameters:

For instances using Oracle Applications Release 11.5.8 or later, a concurrent request set consisting of the "PURGE OBSOLETE WORKFLOW RUNTIME DATA" and "COMPLETE DEFUNCT HR WORKFLOW PROCESSES" concurrent programs should be scheduled to run daily with the following parameters:
    "Purge Obsolete Workflow Runtime Data":
      Item Type: NULL
      Item Key: NULL
      Age: 7
      Persistence Type: select meaning
                        from fnd_lookups
                        where lookup_type = 'FND_WF_PERSISTENCE_TYPE'
                        and lookup_code = 'TEMP';

    "Complete Defunct HR Workflow Processes":
      Item Type: HR
      Age: 7
      Transaction Status: ALL

In addition, the stages of the request set should be linked such that the "Complete Defunct HR Workflow Processes" concurrent program only runs if the "Purge Obsolete Workflow Runtime Data" concurrent program returns Success or Warning.

3)The "GATHER SCHEMA STATISTICS" concurrent program should be scheduled to run at least every 7 DAYS.
4) For instances using Oracle Applications Release 11.5.8 or later, if PO is licensed, then the "PURGE OBSOLETE WORKFLOW RUNTIME DATA" concurrent program should be scheduled to run every week with the following parameters:

    Item Type: PO Approval
5)The program "PURGE CONCURRENT REQUEST AND/OR MANAGER DATA" should be scheduled to run within the next 24 hours with the following parameter(s): Entity=ALL, Mode=Age, Mode Value=30
6)The "PURGE OBSOLETE GENERIC FILE MANAGER DATA" concurrent program should be scheduled to run at least once a month with the following parameters:

     Program Name=NULL
     Program Tag=NULL
7)The program "PURGE SIGNON AUDIT DATA" should be scheduled to run within the next 30 days with the following parameter value:

      Audit date: <30 days prior to the scheduled run date>
In addition, the "Increment date parameters each run" scheduling option should be selected for the scheduled run of the program.
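Before scheduling these purges, it can help to gauge how much data they will actually remove. A rough sketch using the standard FND/WF tables (verify the table and column names against your own instance before relying on them):

```sql
-- Completed workflow items (end_date populated) that are purge candidates
SELECT item_type, COUNT(*)
  FROM wf_items
 WHERE end_date IS NOT NULL
 GROUP BY item_type;

-- Concurrent requests older than 30 days (candidates for
-- "Purge Concurrent Request and/or Manager Data" with Mode Value=30)
SELECT COUNT(*)
  FROM fnd_concurrent_requests
 WHERE request_date < SYSDATE - 30;

-- Sign-on audit rows older than 30 days (candidates for
-- "Purge Signon Audit Data")
SELECT COUNT(*)
  FROM fnd_logins
 WHERE start_time < SYSDATE - 30;
```

Re-running these counts after the first purge cycle gives a quick sanity check that the scheduled requests are keeping the volumes under control.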

Best practices to reduce patching downtime (execution point of view)

1)Use the hotpatch option:
Wherever possible, apply patches in hotpatch mode (ihelp patches and small one-off patches can be applied this way). In hotpatch mode no downtime is required; the patch can be applied while the services are running.

Note: Hotpatch mode is not generally recommended/supported for patching; it should be used only in exceptional situations.

2)Use adpatch options:
adpatch provides multiple options that can be used to reduce downtime, e.g. no compile DB, no compile JSP, etc.

These activities can be done later, while the services are running.
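As a sketch, the deferred-compilation options can be passed on the adpatch command line like this (the option names are the standard AD ones; check your AD version's documentation before using them):

```shell
# Skip compiling database objects and JSPs during the downtime window
adpatch options=nocompiledb,nocompilejsp

# Later, with services running, compile the deferred pieces via adadmin:
#   Compile/Reload Applications Database Entities -> Compile APPS schema
#   Maintain Applications Files -> Compile JSP files
```

The trade-off is that invalid objects and uncompiled JSPs exist until the adadmin steps are run, so schedule them soon after the services come back up.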

3)Merge patches:
Merging patches is a very good option and reduces downtime significantly when there are multiple patches to apply.
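Patches are merged with the admrgpch utility. A minimal sketch (the directory paths and merge name here are illustrative, not real values):

```shell
# Merge all patches unzipped under /patches/source into a single
# patch in /patches/merged, with a driver named for the merge
admrgpch -s /patches/source -d /patches/merged -merge_name merged_patches

# Apply the single merged driver with adpatch as usual
```

One merged driver means one pass through the AD phases instead of one per patch, which is where most of the time saving comes from.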

4)Increase the batch size during patching:
Increasing the batch size to a higher value (e.g. 10000) can reduce the patch time by around 10%.

5)Distributed AD can be used in multi-node systems:
In multi-node systems we can use the Distributed AD feature: patch workers are spread across all the application nodes, effectively utilizing the OS resources of the other nodes as well.
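A rough sketch of how Distributed AD is driven (worker counts are illustrative; this assumes a shared APPL_TOP across the nodes, which Distributed AD requires):

```shell
# On the primary node: start the session, reserving some workers locally
adpatch workers=12 localworkers=4

# On each additional application node: attach the remaining workers
adctrl distributed=y
```

adctrl then prompts for the worker numbers that node should run, so the 12 workers end up spread across all the nodes instead of competing for one node's CPU.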

6)Increase the OS resources (where possible), which helps increase the number of parallel workers for the patch:
Well-sized OS resources such as CPU, memory, etc. allow more parallel patch workers, reducing the patch runtime and thereby the downtime.

7)Staged APPL_TOP:
A staged Applications system represents an exact copy of your Production system, including all APPL_TOPs as well as a copy of the Production database. Patches are applied to this staged system while your Production system remains up. When all patches have been successfully applied to the staged system, the reduced downtime for the Production system can begin.

Please Note:

i) A staged APPL_TOP reduces only the time required to apply the patch on the application node (the database portion of the patch requires downtime anyway).

ii) Usually the database portion of the patch takes 60-70% of the patching time, so this method can only reduce the patch time by 30-40%.

iii) A staged APPL_TOP is only useful for bulk patching or upgrades. This approach is not a solution for small, day-to-day patching, as it involves a lot of preparation (cloning the Production system to a target) and syncing the APPL_TOP back to Production, and it is a complex process.