AWS Oracle RDS Virtualization with Delphix

By Jatinder Luthra posted 08-14-2020 01:51:46 PM


Architecture

    (architecture diagram)

AWS RDS Setup

  1. Create a Standard S3 bucket with a backup directory.
    In this case, we created a bucket rdsdemo with a folder rds_backup.
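    If you prefer the AWS CLI, a minimal sketch (bucket name and folder from this walkthrough; the region is an example):
    # create the bucket and an empty "folder" object for the backups
    aws s3 mb s3://rdsdemo --region us-east-1
    aws s3api put-object --bucket rdsdemo --key rds_backup/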
  2. Create an IAM policy for S3 bucket access from RDS, with List, Read, and Write permissions.
  3. Create an IAM role and attach the above policy to it.
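    Both can also be created from the CLI. A sketch, assuming hypothetical policy and role names (the S3 actions follow the AWS documentation for RDS S3 integration):
    # policy granting List, Read and Write on the backup bucket
    aws iam create-policy --policy-name rds-s3-backup-policy --policy-document '{
      "Version": "2012-10-17",
      "Statement": [
        { "Effect": "Allow",
          "Action": ["s3:ListBucket", "s3:GetBucketLocation"],
          "Resource": "arn:aws:s3:::rdsdemo" },
        { "Effect": "Allow",
          "Action": ["s3:GetObject", "s3:PutObject",
                     "s3:ListMultipartUploadParts", "s3:AbortMultipartUpload"],
          "Resource": "arn:aws:s3:::rdsdemo/*" }
      ]
    }'

    # role that RDS can assume, with the policy attached
    aws iam create-role --role-name rds-s3-backup-role --assume-role-policy-document '{
      "Version": "2012-10-17",
      "Statement": [
        { "Effect": "Allow",
          "Principal": { "Service": "rds.amazonaws.com" },
          "Action": "sts:AssumeRole" }
      ]
    }'
    aws iam attach-role-policy --role-name rds-s3-backup-role \
        --policy-arn arn:aws:iam::<account-id>:policy/rds-s3-backup-policy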

  4. Create an Option group for the RDS engine and version.

  5. Once the option group is created, click on Add Option for the group.

  6. Add S3_INTEGRATION from the Option name dropdown, choose Version 1.0, check Yes for Apply immediately, and hit Add Option.
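    Steps 4-6 can also be scripted. A sketch, assuming a hypothetical option group name and Oracle EE 19 (match your own engine and version):
    aws rds create-option-group \
        --option-group-name oracle-s3-integration \
        --engine-name oracle-ee \
        --major-engine-version 19 \
        --option-group-description "S3 integration for RDS Oracle"

    aws rds add-option-to-option-group \
        --option-group-name oracle-s3-integration \
        --options "OptionName=S3_INTEGRATION,OptionVersion=1.0" \
        --apply-immediately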

  7. Now use this option group with the RDS database.

  8. Create the Oracle RDS database. Make sure the RDS version is equal to or greater than 12c (2019).
    NOTE: Make sure you have backups enabled for the RDS database, which is the only way to enable Archive Log Mode on RDS.
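    For reference, a minimal CLI sketch of the database creation (identifier, class, and sizes are examples; note the non-zero backup retention, which enables backups and hence Archive Log Mode):
    aws rds create-db-instance \
        --db-instance-identifier <rds-instance-id> \
        --engine oracle-ee \
        --engine-version <12c-or-later-version> \
        --license-model bring-your-own-license \
        --db-instance-class db.m5.large \
        --allocated-storage 100 \
        --master-username admin \
        --master-user-password '<password>' \
        --option-group-name oracle-s3-integration \
        --backup-retention-period 7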

  9. Once the RDS database is created, add the IAM role created in Step 3, with the S3_INTEGRATION feature, under Manage IAM roles for the RDS database.
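    The role can also be attached from the CLI. A sketch, reusing the example role name from Step 3 and a placeholder instance identifier:
    aws rds add-role-to-db-instance \
        --db-instance-identifier <rds-instance-id> \
        --role-arn arn:aws:iam::<account-id>:role/rds-s3-backup-role \
        --feature-name S3_INTEGRATION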

  10. Connect to the RDS database from a client using its endpoint.
  11. Set the Archive Log retention (here, 48 hours).
    begin
        rdsadmin.rdsadmin_util.set_configuration(
            name  => 'archivelog retention hours',
            value => '48');
    end;
    /
  12. Enable Force Logging.
    exec rdsadmin.rdsadmin_util.force_logging(p_enable => true);
  13. Enable Block Change Tracking.
    exec rdsadmin.rdsadmin_rman_util.enable_block_change_tracking;

  14. Take an Incremental Level 0 backup of the RDS database, with control file and archive logs, in DATA_PUMP_DIR.
    set SERVEROUTPUT ON
    begin
        RDSADMIN.RDSADMIN_RMAN_UTIL.BACKUP_DATABASE_INCREMENTAL(
            P_OWNER                => 'SYS',
            P_DIRECTORY_NAME       => 'DATA_PUMP_DIR',
            P_LEVEL                => 0,
            P_INCLUDE_ARCHIVE_LOGS => TRUE,
            P_INCLUDE_CONTROLFILE  => TRUE,
            P_OPTIMIZE             => TRUE,
            P_RMAN_TO_DBMS_OUTPUT  => TRUE);
    end;
    /
    commit;
  15. List the files in DATA_PUMP_DIR to verify new backup files.
    select * from table(RDSADMIN.RDS_FILE_UTIL.LISTDIR('DATA_PUMP_DIR')) order by mtime desc;

    Notice the control file backup has -c-<db_id>- in its name.
  16. Transfer the files in DATA_PUMP_DIR to the S3 bucket.
    SELECT rdsadmin.rdsadmin_s3_tasks.upload_to_s3(
    p_bucket_name    =>  '<S3-bucketName>',
    p_s3_prefix      =>  '<backupDir_inBucket>/',
    p_directory_name =>  'DATA_PUMP_DIR')
    AS TASK_ID FROM DUAL;

    This command will generate a TASK ID, which we will use to track the upload status to S3.
  17. Check the status, and wait for the task to finish successfully.
    SELECT text FROM table(rdsadmin.rds_file_util.read_text_file('BDUMP','dbtask-<TASKID>.log'));

  18. Verify the backups are present in the S3 bucket directory.
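    Equivalently, from the AWS CLI (bucket and folder from Step 1):
    aws s3 ls s3://rdsdemo/rds_backup/ --human-readable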

Mount S3 bucket on Staging Server


The S3 bucket can be mounted on the staging server either with AWS Storage Gateway or with the s3fs utility.

In this case, we will use s3fs.

  1. Install s3fs on the Staging Server.
    yum install epel-release
    yum install s3fs-fuse
  2. Create an AWS credentials file on the staging server.
    echo ACCESS_KEY_ID:SECRET_ACCESS_KEY > /etc/passwd-s3fs
  3. Give ownership of the file to the oracle or delphix OS user on the Staging Server.
    chown oracle:oinstall /etc/passwd-s3fs
    chmod 600 /etc/passwd-s3fs
  4. Switch to the OS user, perform the mount, and verify the backups are present under the mount directory.
    s3fs <bucketName> <mountPath> -o passwd_file=/etc/passwd-s3fs
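    Putting it together, a short sketch with example names (bucket rdsdemo from the RDS setup; the mount point is hypothetical):
    # as the oracle (or delphix) OS user
    mkdir -p /mnt/rds_backups
    s3fs rdsdemo /mnt/rds_backups -o passwd_file=/etc/passwd-s3fs
    ls -l /mnt/rds_backups/rds_backup   # the uploaded backup pieces should be listed here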

Upload OBI Plugin in Delphix Engine

    (screenshot: uploading the OBI plugin)


Add Staging Environment in Delphix Engine

  1. Add the Staging Environment.
  2. Once the environment is added, verify that an entry for the OBI plugin is present under the list of Databases in the environment.

Add dSource in Delphix Engine

  1. Get the DBName and DBID of the RDS database.
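    If you have a SQL*Plus client handy, a quick way to pull both (endpoint and service name are placeholders):
    # prompts for the RDS master user password
    echo 'select name, dbid from v$database;' | sqlplus -S admin@//<rds-endpoint>:1521/<db-service-name>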
  2. Under the Staging Environment's OBI plugin, click on the plus sign (+) to add a database.
  3. In the pop-up window, provide the source database name from the first step, and set the staging database unique name and unique identifier to whatever names you want.
  4. Click on Add dSource.
  5. In Add dSource window, provide the below details:
    DSource Type: Full
    Mount Path: <mountPath>/<newMountDirectoryForStagingDB>
    Instance Name: <StagingDBInstance>
    Source Database DBID: <RDS DBID extracted in 1st step>
    Database disk backup location: <NFS-Mounted-Path-S3-Bucket>
    Source Database Controlfile disk backup piece name (full path): <NFS-Mounted-Path-S3-Bucket>/<controlFilebackupName>

    NOTE: Refer to AWS RDS Setup, Step 15 for the control file backup name.
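    For illustration, with the s3fs mount from the previous section and hypothetical names and DBID, the fields might look like:
    DSource Type: Full
    Mount Path: /mnt/oracle_stg/orclstg
    Instance Name: orclstg
    Source Database DBID: 1234567890
    Database disk backup location: /mnt/rds_backups/rds_backup
    Source Database Controlfile disk backup piece name (full path): /mnt/rds_backups/rds_backup/<controlFilebackupName>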
  6. Click Next.
  7. Provide the dSource name and dSource group. Click Next.
  8. Choose the Staging Environment from the dropdown, along with the OS user. Click Next.

  9. Review Policies and Hooks (as needed).
  10. Review Summary and Submit.

  11. Wait for the dSource creation to finish.
  12. The dSource mount will appear on the staging server.
  13. Check the Staging Server to find the staging database in MOUNTED state.
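    A quick check on the staging server (the instance name is the one chosen when adding the dSource):
    # as the oracle/delphix OS user, with ORACLE_HOME set
    ps -ef | grep -i pmon                   # the staging instance's pmon process should be running
    export ORACLE_SID=<StagingDBInstance>
    echo 'select instance_name, status from v$instance;' | sqlplus -S / as sysdba
    # expect STATUS = MOUNTED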

Provision Virtual Database

  1. Choose a snapshot of the RDS dSource and click on the provision icon.
  2. Follow the provisioning steps.


Incremental Backup of RDS and dSource Sync

  1. Take an Incremental Level 1 backup of the RDS database, with control file and archive logs, in DATA_PUMP_DIR.
    set SERVEROUTPUT ON
    begin
        RDSADMIN.RDSADMIN_RMAN_UTIL.BACKUP_DATABASE_INCREMENTAL(
            P_OWNER                => 'SYS',
            P_DIRECTORY_NAME       => 'DATA_PUMP_DIR',
            P_LEVEL                => 1,
            P_INCLUDE_ARCHIVE_LOGS => TRUE,
            P_INCLUDE_CONTROLFILE  => TRUE,
            P_OPTIMIZE             => TRUE,
            P_RMAN_TO_DBMS_OUTPUT  => TRUE);
    end;
    /
    commit;
  2. List the files in DATA_PUMP_DIR to verify new backup files.
    select * from table(RDSADMIN.RDS_FILE_UTIL.LISTDIR('DATA_PUMP_DIR')) order by mtime desc;

    Notice the new backup files, along with the control file backup which has -c-<db_id>- in its name.
  3. Transfer the files in DATA_PUMP_DIR to the S3 bucket.
    SELECT rdsadmin.rdsadmin_s3_tasks.upload_to_s3(
    p_bucket_name    =>  '<S3-bucketName>',
    p_s3_prefix      =>  '<backupDir_inBucket>/',
    p_directory_name =>  'DATA_PUMP_DIR')
    AS TASK_ID FROM DUAL;

    This command will generate a TASK ID, which we will use to track the upload status to S3.
  4. Check the status, and wait for the task to finish successfully.
    SELECT text FROM table(rdsadmin.rds_file_util.read_text_file('BDUMP','dbtask-<TASKID>.log'));
  5. Verify the new incremental backups are present in the S3 bucket directory.
  6. Verify the new backups also appear under the mount path.
  7. Now take a dSource Snapshot.