Look Apps

Oracle Applications DBA APPSLab

 
    APPS R12 SCRIPTS & LOG FILE LOCATIONS
    Saturday, August 30, 2008
    Log files are useful in troubleshooting issues in Oracle Applications. Here is a list of log file locations in Oracle Applications for startup/shutdown, cloning, patching, the DB and Apps listeners, and various other components in Apps R12/12i:

    A. Startup/Shutdown Log files for Application Tier in R12

    The Instance Top ($INST_TOP) is a new TOP added in R12.

    –Startup/Shutdown error message text files like adapcctl.txt, adcmctl.txt…
    $INST_TOP/apps/$CONTEXT_NAME/logs/appl/admin/log

    –Startup/Shutdown error message related to tech stack (10.1.2, 10.1.3 forms/reports/web)
    $INST_TOP/apps/$CONTEXT_NAME/logs/ora/ (10.1.2 & 10.1.3)
    $INST_TOP/apps/$CONTEXT_NAME/logs/ora/10.1.3/Apache/error_log[timestamp]
    $INST_TOP/apps/$CONTEXT_NAME/logs/ora/10.1.3/opmn/ (OC4J~…, oa*, opmn.log)
    $INST_TOP/apps/$CONTEXT_NAME/logs/ora/10.1.2/network/ (listener log)
    $INST_TOP/apps/$CONTEXT_NAME/logs/appl/conc/log (CM log files)
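As a quick first check during troubleshooting, the startup/shutdown locations above can be scanned for recently modified files. A minimal sketch; it assumes $INST_TOP and $CONTEXT_NAME are set by your instance environment file, and falls back to a demo path (with a demo log file) only so the sketch runs standalone:

```shell
# Scan the R12 log base for files modified in the last hour.
LOGBASE="${INST_TOP:-/tmp/demo_inst}/apps/${CONTEXT_NAME:-DEMO}/logs"
mkdir -p "$LOGBASE/appl/admin/log"
touch "$LOGBASE/appl/admin/log/adapcctl.txt"   # demo file so the sketch runs standalone
recent=$(find "$LOGBASE" -type f -mmin -60 2>/dev/null)
echo "$recent"
```

On a real instance, drop the mkdir/touch lines and simply run the find against your $INST_TOP.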

    B. Log files related to cloning in R12

    Preclone log files in source instance
    i) Database Tier - /$ORACLE_HOME/appsutil/log/$CONTEXT_NAME/(StageDBTier_MMDDHHMM.log)

    ii) Application Tier - $INST_TOP/apps/$CONTEXT_NAME/admin/log/ (StageAppsTier_MMDDHHMM.log)

    Clone log files in target instance

    Database Tier - $ORACLE_HOME/appsutil/log/$CONTEXT_NAME/ApplyDBTier_
    posted by Lokesh Babu @ 11:10 AM   0 comments
    Copy and Rename an Oracle Database - (without export/import)


    You may find it necessary to duplicate (clone) an Oracle database. One method is to use export/import. That method can work fine, but what if your database is too big? Another method is to make a copy of the current database and rename it. This article explains the steps necessary to copy and rename a database, assuming the original database is called PROD and you want to create a TEST duplicate database.


    1.) Copy production database files and init.ora
    The first step is to locate and copy all database files to their new location. You can use the view V$DATAFILE in the PROD database to locate these files. Before running the query from V$DATAFILE, ensure that you are connected to the PROD database by selecting from V$DATABASE:
    SQL> select name from v$database;

    NAME
    ---------------------------------------
    PROD


    SQL> select name from v$datafile;

    NAME
    ---------------------------------------
    /u08/app/oradata/PROD/system01.dbf
    /u06/app/oradata/PROD/rbs01.dbf
    /u07/app/oradata/PROD/temp01.dbf
    /u10/app/oradata/PROD/userd01.dbf
    /u09/app/oradata/PROD/userx01.dbf

    After recording these files, shut down the PROD database and perform an operating system copy of all database files to another location and/or machine. In this example, I will copy all datafiles to a new location as shown in the following table:

    Old Location = New Location
    ============ ============
    /u08/app/oradata/PROD/system01.dbf = /u08/app/oradata/TEST/system01.dbf
    /u06/app/oradata/PROD/rbs01.dbf = /u06/app/oradata/TEST/rbs01.dbf
    /u07/app/oradata/PROD/temp01.dbf = /u07/app/oradata/TEST/temp01.dbf
    /u10/app/oradata/PROD/userd01.dbf = /u10/app/oradata/TEST/userd01.dbf
    /u09/app/oradata/PROD/userx01.dbf = /u09/app/oradata/TEST/userx01.dbf
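The copy itself can be scripted from the table above by rewriting the PROD path component to TEST. A sketch that only generates the copy commands for review (run the resulting script with the database shut down, after creating the TEST directories):

```shell
# Generate "cp old new" commands, rewriting /PROD/ to /TEST/ in each path.
for f in /u08/app/oradata/PROD/system01.dbf \
         /u06/app/oradata/PROD/rbs01.dbf \
         /u07/app/oradata/PROD/temp01.dbf \
         /u10/app/oradata/PROD/userd01.dbf \
         /u09/app/oradata/PROD/userx01.dbf; do
    echo "cp $f $(echo "$f" | sed 's|/PROD/|/TEST/|')"
done > copy_datafiles.sh
```

Review copy_datafiles.sh, then run it with sh once you are satisfied the paths are right.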



    After copying all files to their new location, start up the PROD database.

    From the production database, get a copy of the initPROD.ora file and copy it to initTEST.ora. In the initTEST.ora file, change the value of "db_name" from PROD to TEST. Keep in mind that you may also need to change:

    • audit_file_dest
    • background_dump_dest
    • control_files
    • core_dump_dest
    • log_archive_dest
    • user_dump_dest

    If the TEST database is going to be on a different machine, copy the initTEST.ora file to that machine in the proper directory.
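The bulk of the initTEST.ora edit can be done with sed. A toy sketch; the two-line initPROD.ora created here is fabricated only to make the example self-contained, and a blanket PROD-to-TEST substitution should still be reviewed by hand, since parameters like log_archive_dest may need more than a name swap:

```shell
# Toy initPROD.ora so the sketch runs standalone; use the real file in practice.
cat > initPROD.ora <<'EOF'
db_name = PROD
control_files = (/u08/app/oradata/PROD/control01.ctl)
EOF
# Rewrite every PROD reference to TEST, writing the result to initTEST.ora.
sed 's/PROD/TEST/g' initPROD.ora > initTEST.ora
```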


    2.) Create the script that will re-create the controlfile


    Using SVRMGR on the PROD database, create a script that will be able to re-create the controlfile for the database.

    PROD on testdb: svrmgrl
    SVRMGR> connect internal
    Connected.
    SVRMGR> alter database backup controlfile to trace;
    Statement processed.

    The above statement writes a text copy of the controlfile to a trace file in the USER_DUMP_DEST directory. Search for the newest trace file in this directory; on UNIX you can use the "ls -lt" command. Once you find the correct trace file, rename it to cr_control.sql and edit it as follows:
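Picking out the newest trace file can be scripted. A sketch; the udump path and trace file name here are demo values created so the sketch runs standalone (on a real system, point UDUMP at the database's user_dump_dest and skip the mkdir/touch):

```shell
# Copy the newest .trc file in udump to cr_control.sql for editing.
UDUMP="${USER_DUMP_DEST:-./udump_demo}"
mkdir -p "$UDUMP"
touch "$UDUMP/prod_ora_1234.trc"   # demo trace file, name is fabricated
newest=$(ls -t "$UDUMP"/*.trc | head -1)
cp "$newest" cr_control.sql
```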


    • Remove everything up to the "STARTUP NOMOUNT" statement and everything after the semicolon at the end of the "CREATE CONTROLFILE" statement.
    • Edit the line starting with "CREATE CONTROLFILE" and replace the word "REUSE" with the word "SET" right before the keyword DATABASE.
    • On the same line, modify the database name changing it from PROD to TEST.
    • On the same line, change the keyword NORESETLOGS to RESETLOGS.
    Your script should now read:


    Edited file cr_control.sql
    ===========================

    STARTUP NOMOUNT
    CREATE CONTROLFILE SET DATABASE "TEST" RESETLOGS NOARCHIVELOG
    MAXLOGFILES 32
    MAXLOGMEMBERS 5
    MAXDATAFILES 600
    MAXINSTANCES 10
    MAXLOGHISTORY 1000
    LOGFILE
    GROUP 1 (
    '/u03/app/oradata/TEST/redo_g01a.log',
    '/u04/app/oradata/TEST/redo_g01b.log',
    '/u05/app/oradata/TEST/redo_g01c.log'
    ) SIZE 200K,
    GROUP 2 (
    '/u03/app/oradata/TEST/redo_g02a.log',
    '/u04/app/oradata/TEST/redo_g02b.log',
    '/u05/app/oradata/TEST/redo_g02c.log'
    ) SIZE 200K,
    GROUP 3 (
    '/u03/app/oradata/TEST/redo_g03a.log',
    '/u04/app/oradata/TEST/redo_g03b.log',
    '/u05/app/oradata/TEST/redo_g03c.log'
    ) SIZE 200K
    DATAFILE
    '/u08/app/oradata/TEST/system01.dbf',
    '/u06/app/oradata/TEST/rbs01.dbf',
    '/u07/app/oradata/TEST/temp01.dbf',
    '/u10/app/oradata/TEST/userd01.dbf',
    '/u09/app/oradata/TEST/userx01.dbf'
    ;

    If the TEST database is on a different machine move this file to that machine.


    3.) Create the new controlfile for TEST


    Make sure that your Oracle environment variable "ORACLE_SID" is set to TEST. (i.e. export ORACLE_SID=TEST).
    Now use SVRMGR and the CREATE CONTROLFILE script (cr_control.sql) to create your controlfile for TEST:
    TEST on testdb: svrmgrl
    SVRMGR> connect internal
    Connected to an idle instance.
    SVRMGR> @cr_control
    ORACLE instance started.
    Total System Global Area 32798752 bytes
    Fixed Size 39816 bytes
    Variable Size 22600856 bytes
    Database Buffers 9994240 bytes
    Redo Buffers 163840 bytes
    Statement processed.
    SVRMGR>

    NOTE: Stay logged into SVRMGR and proceed to the next step.


    4.) Open the TEST database


    Before opening the TEST database, you will need to perform incomplete recovery. After recovery you can open the database using the RESETLOGS option as shown below:
    SVRMGR> alter database recover database until cancel using backup controlfile;
    SVRMGR> alter database recover cancel;
    Statement processed.
    SVRMGR> alter database open resetlogs;
    Statement processed.
    SVRMGR>
    You can verify that the database was renamed to TEST by querying from V$DATABASE:
    SVRMGR> select name from v$database;
    NAME
    ---------
    TEST
    1 row selected.
    SVRMGR>

    MANUALLY CLONE DATABASE

    To clone a database manually, we first use the operating system (OS) to copy all of the source database files to the target location.

    If we are on the same server, we need to change the name of the database; if we are on a different server, we can keep the database name as it is.

    We should first backup the source database control file to trace using the statement

    SQL>ALTER DATABASE BACKUP CONTROLFILE TO TRACE;

    Using the trace file contents create a new control file that will help us create the new clone database.

    Steps to Manually Clone a Database :

    We follow the steps given below assuming that our source database is the production database named "prod" and our destination (target) database is the database named "test".

    1) Copy the prod database files to the target location

    2) Prepare a text file for the creation of a control file for the new database as follows :

    SQL> ALTER DATABASE BACKUP CONTROLFILE TO TRACE;

    3) On the target location, create all the directories for the various files

    4) Copy the following four sets of files from the production(source) database to the target database :

    parameter files
    control files
    data files
    redo log files

    5) In all the clone database files, change the database name to test

    6) Run the CREATE CONTROLFILE statement, which was prepared from the output of the ALTER DATABASE BACKUP CONTROLFILE TO TRACE statement

    7) Create the controlfile for the test database using the following statement:

    SQL> CREATE CONTROLFILE REUSE SET DATABASE "TEST" RESETLOGS
    NOARCHIVELOG;

    We'll now have a new database called test with a new controlfile pointing to the copied (target) version of the production database.

    8) Once we get the prompt back from the previous command, run this command :

    SQL> ALTER DATABASE OPEN RESETLOGS;

    9) Finally, change the global name of the database we just created by running the following command:

    SQL> UPDATE global_name SET global_name='test.world';

    10G DATAPUMP
    Database export/import - Using data pump

    Oracle Data Pump is the replacement for the original Export/Import utilities. This utility is available from Oracle Database 10g onwards. Oracle Data Pump enables very high-speed movement of data and metadata from one database to another.
    The Data Pump Export and Import utilities have a similar look and feel to the original utilities, but they are much more efficient and give you greater control and management of import and export jobs.


    Directory Objects
    =================
    Description of feature: Data Pump differs from original Export and Import in that all jobs run primarily on the server using server processes. These server processes access files for the data pump jobs using directory objects that identify the location of the files. The directory objects enforce a security model that can be used by DBAs to control access to these files.

    Interactive Command – Line Mode
    ===================
    Description of feature: Besides the regular operating system command-line mode, there is now a powerful interactive command-line mode which allows the user to monitor and control Data Pump export and import operations.

    Definition: Oracle 10g's new Data Pump utility is designed as the eventual replacement for the original Oracle Import and Export utilities. It provides a suite of tools for extracting, transforming, and loading data within an Oracle database.


    User Procedures:
    ================

    In order to use the Data Pump feature, you must perform the following steps:
    • Change over from original Export/Import to Oracle Data Pump (set up a directory object).
    • Choose among the different modes of Export/Import using Data Pump.

    Each of these steps is described in more detail below.

    STEP1:
    ======
    If you want to export to a file, the first thing you must do is create a database DIRECTORY object for the output directory and grant access to the users who will be doing exports and imports.

    Why Directory Objects?
    They are needed to ensure data security and integrity. Otherwise, users would be able to read data that they should not have access to and perform unwarranted operations on the server.
    As an example, we will use the SCOTT user account and create a directory object it can access. (Steps for using Data Pump:)

    Connect to SQL*Plus as the SYS user:

    $sqlplus '/as sysdba'

    Make sure that the SCOTT user account is unlocked by connecting to SQL*Plus as SCOTT.

    Assign create directory privilege to Scott,

    GRANT CREATE ANY DIRECTORY TO Scott;
    CREATE OR REPLACE DIRECTORY test_dir AS
    '/u01/logicalbkp';

    Assign read and write privileges to SCOTT on the particular directory where we need to take the backup:

    GRANT READ, WRITE ON DIRECTORY test_dir TO Scott;
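The three statements above can be collected into one script to run as SYSDBA. A sketch using the example user and path from the text (the script file name is my own choice):

```shell
# Write the Data Pump directory setup into a script for sqlplus.
cat > setup_dpdir.sql <<'EOF'
GRANT CREATE ANY DIRECTORY TO scott;
CREATE OR REPLACE DIRECTORY test_dir AS '/u01/logicalbkp';
GRANT READ, WRITE ON DIRECTORY test_dir TO scott;
EOF
# Run it with: sqlplus '/ as sysdba' @setup_dpdir.sql
```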


    STEP2:
    ======
    Full Export/Import Mode: You can invoke the Data Pump export using a command line. Export/Import parameters can be specified directly in the command line. A full export is specified using the FULL parameter. In a full database export, the entire database is unloaded. This mode requires that you have the EXP_FULL_DATABASE role. Shown below is an example:

    expdp system/password full=Y directory=TEST_DIR dumpfile=DB10G.dmp logfile=expdpDB10G.log




    impdp system/password full=Y directory=TEST_DIR dumpfile=DB10G.dmp logfile=impdpDB10G.log
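In practice, it helps to stamp the dump and log file names so nightly runs do not overwrite each other. A sketch wrapping the full export above; credentials and the directory are the example values from the text, and the command is echoed for review rather than executed:

```shell
# Build a date-stamped full export command for review.
STAMP=$(date +%Y%m%d)
CMD="expdp system/password full=Y directory=TEST_DIR dumpfile=DB10G_${STAMP}.dmp logfile=expdpDB10G_${STAMP}.log"
echo "$CMD"    # review, then run with: eval "$CMD"
```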

    Schema Export/Import Mode: The schema export mode is invoked using the SCHEMAS parameter. If you do not have the EXP_FULL_DATABASE role, you can only export your own schema. If you have the EXP_FULL_DATABASE role, you can export several schemas in one go. Optionally, you can include the system privilege grants as well.

    expdp scott/tiger schemas=SCOTT directory=TEST_DIR dumpfile=SCOTT.dmp logfile=expdpSCOTT.log

    impdp scott/tiger schemas=SCOTT directory=TEST_DIR dumpfile=SCOTT.dmp
    logfile=impdpSCOTT.log

    Table Export/Import Mode: This export mode is specified using the TABLES parameter. In this mode, only the specified tables, partitions and their dependents are exported. If you do not have the EXP_FULL_DATABASE role, you can export only tables in your own schema. You can only specify tables in the same schema.

    expdp scott/tiger tables=EMP,DEPT directory=TEST_DIR
    dumpfile=EMP_DEPT.dmp logfile=expdpEMP_DEPT.log



    impdp scott/tiger tables=EMP,DEPT directory=TEST_DIR
    dumpfile=EMP_DEPT.dmp logfile=impdpEMP_DEPT.log


    TABLE_EXISTS_ACTION=APPEND parameter allows data to be imported into existing tables.


    Data pump performance can be improved by using the PARALLEL parameter. This should be used in conjunction with the "%U" wildcard in the DUMPFILE parameter to allow multiple dumpfiles to be created or read:

    expdp scott/tiger schemas=SCOTT directory=TEST_DIR
    parallel=4 dumpfile=SCOTT_%U.dmp logfile=expdpSCOTT.log



    The DBA_DATAPUMP_JOBS view can be used to monitor the current jobs



    The INCLUDE and EXCLUDE parameters can be used to limit the export/import to specific objects. When the INCLUDE parameter is used, only those objects specified by it will be included in the export. When the EXCLUDE parameter is used all objects except those specified by it will be included in the export.

    expdp scott/tiger schemas=SCOTT include=TABLE:"IN ('EMP', 'DEPT')" directory=TEST_DIR dumpfile=SCOTT.dmp logfile=expdpSCOTT.log


    expdp scott/tiger schemas=SCOTT exclude=TABLE:"= 'BONUS'" directory=TEST_DIR dumpfile=SCOTT.dmp logfile=expdpSCOTT.log




    All data pump actions are performed by multiple jobs (server processes, not DBMS_JOB jobs). These jobs are controlled by a master control process which uses Advanced Queuing. At runtime an advanced queue table, named after the job name, is created and used by the master control process.

    The table is dropped on completion of the data pump job. The job and the advanced queue can be named using the JOB_NAME parameter. Cancelling the client process does not stop the associated data pump job. Issuing "ctrl+c" on the client during a job stops the client output and presents a command prompt. Typing "status" at this prompt allows you to monitor the current job.

    INSTALLING 10G DEVELOPER SUITE ON LINUX MACHINE:
    ===========================================

    ADD GROUP & USERS:
    ------------------
    groupadd dba
    useradd -g dba oracle

    set /etc/sysctl.conf
    --------------------
    kernel.shmall = 2097152
    kernel.shmmax = 4294967295
    kernel.shmmni = 4096
    # semaphores: semmsl, semmns, semopm, semmni
    kernel.sem = 256 32000 100 142
    fs.file-max = 206173
    net.ipv4.ip_local_port_range= 1024 65000
    kernel.msgmnb = 65535
    kernel.msgmni = 2878

    INSTALL DEVELOPER SUITE:
    ------------------------
    cd dev1
    ./runinstaller

    when prompted, supply the Disk 2 stage location
    example: /u01/dev2/stage
    -------

    SET BASH_PROFILE:
    ----------------
    export ORACLE_HOME=/u01/OraHome_1;
    PATH=$PATH:$HOME/bin:$ORACLE_HOME/bin

    DOWNLOAD j2sdk1.4.2_06
    ----------------------

    create soft links under the (/usr/lib/mozilla/plugins)
    example: ln -s /usr/java/j2sdk1.4.2_06/jre/plugin/i386/ns610-gcc32/libjavaplugin_oji.so libjavaplugin_oji.so
    -------

    su - oracle
    run frmbld.sh
    ---

    known issues: frmbld.sh: line 105: 4793 Segmentation fault $ORACLE_HOME/bin/frmbld $*
    -------------

    solution: add noexec=off to the kernel boot line in the /boot/grub/grub.conf file.
    ---------

    Multi Node to Single Node Cloning

    With Oracle Applications 11.5.10 the ability to clone from a multi node to a single node system using rapid clone has now been certified.

    This is accomplished by the Shared APPL_TOP and Merging APPL_TOP features.

    In this post I am listing the steps to perform a multi node to single node cloning from scratch. These are the steps I have used and found to be successful; for a more elaborate explanation you may want to refer to the Metalink note Sharing the Application Tier File System in Oracle E-Business Suite 11i (233428.1).

    Source System: the multi node system to be cloned
    Target System: the single node system to be created
    Applications Version: 11.5.10
    Operating System: Linux ES 4

    The following Steps are required to be performed on the source system

    1. Apply application tier patches using adpatch

    Apply patch 4038964
    Apply patch 4175764

    Both the above patches are included in Consolidated Update 2 (CU2), so if you are on 11.5.10.2 or later you can ignore this step.

    2. Maintain snapshot information

    Log in as the applications user on each application tier node and run 'Maintain Snapshot Information' by using adadmin.

    3. Merge existing APPL_TOPs

    Log in to the primary node of your application tier as the applications user and run:

    $ cd [COMMON_TOP]/admin/scripts/[CONTEXT_NAME]
    $ perl adpreclone.pl appsTier merge

    This will prompt you with options to merge the secondary nodes.

    Now log in as the applications user to each of the secondary application tier nodes being merged and run:

    $ cd [COMMON_TOP]/admin/scripts/[CONTEXT_NAME]
    $ perl adpreclone.pl appltop merge

    4. Prepare the source system database tier.

    Log on to the database tier of the source system as the oracle user and run the following commands:

    $ cd [RDBMS ORACLE_HOME]/appsutil/scripts/[CONTEXT_NAME]
    $ perl adpreclone.pl dbTier

    With this all the pre clone tasks on the source system have been completed.

    The next set of tasks are required to be carried out on the target system that is the system on which you wish to place the merged single node instance.

    1. Create OS user accounts

    Create an OS user account for your applications

    $ useradd -g dba -d [home_directory] -p password username

    Similarly, create an OS user account for your database

    2. Modify the oraInventory location file to reflect the new location

    $ vi /etc/oraInst.loc

    3. Copy the following application tier directories

    Copy the following application tier directories from the primary node of your source system to the target application tier node, retaining the original directory structure:

    – [APPL_TOP]
    – [OA_HTML]
    – [OA_JAVA]
    – [COMMON_TOP/util]
    – [COMMON_TOP/clone]
    – [806 ORACLE_HOME]
    – [iAS ORACLE_HOME]

    4. Copy the required files for merging

    Log in as the applications user to each of the secondary source nodes and recursively copy:

    directory [COMMON_TOP]/clone/appl
    - to -
    directory [COMMON_TOP]/clone/appl on the target system node

    Before proceeding with the next steps, you must shut down your Oracle Applications services and the database on the source system.

    5. Copy the database tier file system

    Log on to the source system as the database user

    Copy the database (DBF) files from the source to the target system
    Copy the source database ORACLE_HOME to the target system

    After this you can start up the database and applications services on your source system and release it for use.

    6. Configure the target system database server

    Log on to the target system as the database user and type the following commands to configure and start the database:

    $ cd [RDBMS ORACLE_HOME]/appsutil/clone/bin
    $ perl adcfgclone.pl dbTier

    This will prompt for the new ORACLE_SID, ORACLE_HOME, Port Pool, JAVA_TOP and DATA_TOP; give the appropriate values matching your target system.

    Once successful, this should start your database and listener.

    7. Configure the application tier server nodes

    The database and its listener should remain up before executing the next set of commands.
    Log in to the merged APPL_TOP node as the applications user and execute the following commands:

    $ cd [COMMON_TOP]/clone/bin
    $ perl adcfgclone.pl appsTier

    This will prompt you for a new port pool for the application tier services as well as the new APPL_TOP, COMMON_TOP, ORACLE_HOME and IAS_TOP.

    Successful completion of this task will bring up your application tier services on the target or the cloned node.

    8. Post Clone Tasks

    Log in to the target system application tier node as the APPLMGR user.
    Run the following tasks in adadmin for all products:
    o generate JAR files
    o generate message files
    o relink executables
    o copy files to destination

    9. Clean up of the target system

    Remove the temporary directory [COMMON_TOP]/clone/appl to reduce disk space usage.
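A sketch of that cleanup; guarding the rm so an unset COMMON_TOP cannot expand to a dangerous path (the demo fallback and mkdir exist only so the sketch runs standalone, since on a real system the directory already exists):

```shell
# Remove the temporary clone staging area under COMMON_TOP.
CLONE_APPL="${COMMON_TOP:-/tmp/demo_common_top}/clone/appl"
mkdir -p "$CLONE_APPL"    # demo directory so the sketch runs standalone
rm -rf "$CLONE_APPL"
```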

    Note: The version of Oracle Applications used is 11.5.10 and the operating system is Linux ES 4

    Wednesday, July 16, 2008
    CREATE CUSTOM TOP IN R12



    SCHEMA NAME : XXRLT
    TOP NAME : XXRLT_TOP
    Application : XXRLT Custom Application
    Data Group : XXRLTGroup
    Request Group : XXRLT Request Group
    Menu : XXRLT_CUSTOM_MENU
    Responsibility : XXRLT Custom

    1) Make the directory structure for your custom application files.

    cd $APPL_TOP
    mkdir xxrlt
    mkdir xxrlt/12.0.0
    mkdir xxrlt/12.0.0/admin
    mkdir xxrlt/12.0.0/admin/sql
    mkdir xxrlt/12.0.0/admin/odf
    mkdir xxrlt/12.0.0/sql
    mkdir xxrlt/12.0.0/bin
    mkdir xxrlt/12.0.0/reports
    mkdir xxrlt/12.0.0/reports/US
    mkdir xxrlt/12.0.0/forms
    mkdir xxrlt/12.0.0/forms/US
    mkdir xxrlt/12.0.0/lib
    mkdir xxrlt/12.0.0/out
    mkdir xxrlt/12.0.0/log
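The directory tree above can also be created in one loop with mkdir -p, which builds the intermediate directories automatically. A sketch; it assumes $APPL_TOP is set by your environment file, falling back to a scratch path only so the sketch runs standalone:

```shell
# Build the custom top directory tree in one pass.
XXRLT_BASE="${APPL_TOP:-/tmp/demo_appl_top}/xxrlt/12.0.0"
for d in admin/sql admin/odf sql bin reports/US forms/US lib out log; do
    mkdir -p "$XXRLT_BASE/$d"
done
```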



    2) Add the custom module into the environment

    cd $APPL_TOP

    vi customPROD_app01.env
    XXRLT_TOP=/sandb1/oracle/PROD/apps/apps_st/appl/xxrlt/12.0.0
    export XXRLT_TOP
    Source the environment file ( /sandb1/oracle/PROD/apps/apps_st/appl/APPSPROD_pkgprod.env )

    Make an entry in the context file

    /sandb1/oracle/PROD/inst/apps/PROD_pkgprod/appl/admin/PROD_pkgprod.xml

    /sandb1/oracle/PROD/apps/apps_st/appl/xxrlt/12.0.0

    cd $INST_TOP/admin/install
    sh adgentopfile.sh

    Open a new session, source the environment file, stop the middle tier services, and run autoconfig.
    Open another session, source the environment file, check for the custom top in topfile.txt under $APPL_TOP/admin, and start the middle tier services.

    3) create tablespace XXRLT datafile '/sandb1/oracle/PROD/db/apps_st/data/xxrlt01.dbf' size 500M;

    4) create user xxrlt identified by xxrlt
    default tablespace xxrlt
    temporary tablespace temp1
    quota unlimited on xxrlt;
    grant connect, resource to xxrlt;
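Steps 3 and 4 can be combined into one script to run from SQL*Plus. A sketch using the example datafile path and credentials from the text (the script file name is my own choice):

```shell
# Collect the tablespace and schema creation into one script for sqlplus.
cat > create_xxrlt_schema.sql <<'EOF'
CREATE TABLESPACE xxrlt
  DATAFILE '/sandb1/oracle/PROD/db/apps_st/data/xxrlt01.dbf' SIZE 500M;
CREATE USER xxrlt IDENTIFIED BY xxrlt
  DEFAULT TABLESPACE xxrlt
  TEMPORARY TABLESPACE temp1
  QUOTA UNLIMITED ON xxrlt;
GRANT connect, resource TO xxrlt;
EOF
```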

    5) Register your Oracle Schema.
    Login to Applications with System Administrator responsibility
    Navigate to Application-->Register
    Application = XXRLT Custom Application
    Short Name = XXRLT
    Basepath = XXRLT_TOP
    Description = XXRLT Custom Application

    6) Register Oracle User
    Navigate to Security-->Oracle-->Register
    Database User Name = XXRLT
    Password = XXRLT
    Privilege = Enabled
    Install Group = 0
    Description = XXRLT Custom Application User

    7) Add Application to a Data Group
    Navigate to Security-->Oracle-->DataGroup
    Data Group = XXRLTGroup
    Description = XXRLT Custom Data Group
    Click on "Copy Applications from" and pick Standard data Group, then add the following entry.
    Application = XXRLT Custom
    Oracle ID = APPS
    Description = XXRLT Custom Application



    8) Create custom request group
    This will act as a placeholder for any custom reports we wish to make available for the Custom Responsibility (which is defined at a later stage)
    Navigate to Security-->Responsibility-->Request
    Group = XXRLT Request Group
    Application = XXRLT Custom
    Code = XXRLT
    Description = XXRLT Custom Requests
    We will not define any requests to add to the group at this stage, but you can add some now if required.

    9) Create custom menu
    This will act as a placeholder for any menu items we wish to make available for the Custom Responsibility (which is defined at a later stage). We will create two menus, one for Core Applications and one for Self Service.
    Navigate to Application-->Menu
    Menu = XXRLT_CUSTOM_MENU
    User Menu Name = XXRLT Custom Application
    Menu Type =
    Description = XXRLT Custom Application Menu
    Seq = 100
    Prompt = View Requests
    Submenu =
    Function = View All Concurrent Requests
    Description = View Requests

    Seq = 110
    Prompt = Run Requests
    Submenu =
    Function = Requests: Submit
    Description = Submit Requests

    Menu = XXRLT_CUSTOM_MENU_SSWA
    User Menu Name = XXRLT Custom Application SSWA
    Menu Type =
    Description = XXRLT Custom Application Menu for SSWA


    10) Create new responsibility. One for Core Applications and One for Self Service (SSWA)
    Navigate to Security-->Responsibility-->Define
    Responsibility Name = XXRLT Custom
    Application = XXRLT Custom
    Responsibility Key = XXRLTCUSTOM
    Description = XXRLT Custom Responsibility
    Available From = Oracle Applications
    Data Group Name = XXRLTGroup
    Data Group Application = XXRLT Custom
    Menu = XXRLT Custom Application
    Request Group Name = XXRLT Request Group

    Responsibility Name = XXRLT Custom SSWA
    Application = XXRLT Custom
    Responsibility Key = XXRLTCUSTOMSSWA
    Description = XXRLT Custom Responsibility SSWA
    Available From = Oracle Self Service Web Applications
    Data Group Name = XXRLTGroup
    Data Group Application = XXRLT Custom
    Menu = XXRLT Custom Application SSWA
    Request Group Name = XXRLT Request Group

    11) Add responsibility to user
    Navigate to Security-->User-->Define
    Add XXRLT Custom responsibility to users as required.

    12) Other considerations
    You are now ready to create your database Objects, custom Reports, Forms, Packages, etc

    Create the source code files in the XXRLT_TOP directory appropriate for the type of object. For example, forms would be located in $XXRLT_TOP/forms/US and package source code in $XXRLT_TOP/admin/sql.

    Database objects, such as tables, indexes and sequences, should be created in the XXRLT schema, and then you need to:
    a) Grant all privileges on each custom data object to the APPS schema.
    For example : logged in as XXRLT user
    grant all privileges on myTable to apps;

    b) Create a synonym in APPS for each custom data object
    For example : logged in as APPS user
    create synonym myTable for XXRLT.myTable;
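When there are several custom objects, the grant and synonym statements can be generated in a loop. A sketch with hypothetical object names (run the grants as XXRLT and the synonyms as APPS, as described above):

```shell
# Generate grant and synonym statements for a list of custom objects.
# XXRLT_HEADERS and XXRLT_LINES are hypothetical example names.
for obj in XXRLT_HEADERS XXRLT_LINES; do
    echo "grant all privileges on $obj to apps;"
    echo "create synonym $obj for XXRLT.$obj;"
done > xxrlt_grants.sql
```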

    13) Login to sysadmin, Application Developer Responsibility

    Application > Form ( Register the form )
    Application > Function ( Add the form to a function )
    Application > Menu ( Attach the function to a menu )

    The menu that is added to a particular responsibility is then assigned to a specific user:
    Security > User > Define
