- Create a standard S3 bucket with a backup directory.
In this case, we created a bucket named rdsdemo with a folder named rds_backup.
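If you prefer to script this step, here is a minimal boto3 sketch; the bucket name and prefix are the example values above, and the region is an assumption to adjust for your environment.

import boto3

# Region is an assumption; change it to match your account setup.
s3 = boto3.client("s3", region_name="us-east-1")

# Create the bucket. Outside us-east-1 you must also pass
# CreateBucketConfiguration={"LocationConstraint": "<region>"}.
s3.create_bucket(Bucket="rdsdemo")

# S3 has no real folders; an empty object with a trailing slash makes
# the rds_backup/ prefix show up as a folder in the console.
s3.put_object(Bucket="rdsdemo", Key="rds_backup/")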
- Create an IAM policy that grants RDS List, Read, and Write access to the S3 bucket.
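A sketch of such a policy created with boto3; the policy name is hypothetical, and the statements mirror the commonly documented List/Read/Write set for RDS S3 integration against the example bucket.

import json
import boto3

iam = boto3.client("iam")

# List/Read/Write permissions on the backup bucket and its objects.
policy_doc = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:ListBucket", "s3:GetBucketLocation"],
            "Resource": "arn:aws:s3:::rdsdemo",
        },
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject",
                       "s3:ListMultipartUploadParts", "s3:AbortMultipartUpload"],
            "Resource": "arn:aws:s3:::rdsdemo/*",
        },
    ],
}

resp = iam.create_policy(
    PolicyName="rds-s3-integration-policy",   # hypothetical name
    PolicyDocument=json.dumps(policy_doc),
)
print(resp["Policy"]["Arn"])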
- Create an IAM role and attach the above policy to it.
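A boto3 sketch of this step; the role name is hypothetical, and the trust policy allows the RDS service to assume the role.

import json
import boto3

iam = boto3.client("iam")

# Trust policy so the RDS service can assume the role.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"Service": "rds.amazonaws.com"},
            "Action": "sts:AssumeRole",
        }
    ],
}

iam.create_role(
    RoleName="rds-s3-integration-role",        # hypothetical name
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)

# Attach the S3 access policy created in the previous step.
iam.attach_role_policy(
    RoleName="rds-s3-integration-role",
    PolicyArn="arn:aws:iam::<account-id>:policy/rds-s3-integration-policy",
)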
- Create an option group for the RDS engine and version.
- Once the option group is created, click Add option for the group.
- Select S3_INTEGRATION from the Option name dropdown, choose Version 1.0, select Yes for Apply immediately, and click Add option.
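The same option group setup, sketched with boto3; the option group name, engine name, and major version are assumptions to adjust for your instance.

import boto3

rds = boto3.client("rds")

# Option group for the target engine and major version (assumed oracle-ee 19).
rds.create_option_group(
    OptionGroupName="oracle-s3-integration",
    EngineName="oracle-ee",
    MajorEngineVersion="19",
    OptionGroupDescription="S3 integration for RMAN backups",
)

# Equivalent of "Add option" in the console: add S3_INTEGRATION 1.0 immediately.
rds.modify_option_group(
    OptionGroupName="oracle-s3-integration",
    OptionsToInclude=[{"OptionName": "S3_INTEGRATION", "OptionVersion": "1.0"}],
    ApplyImmediately=True,
)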
- Now use this option group with the RDS database.
- Create the Oracle RDS database. Make sure the RDS version is equal to or greater than 12c (2019).
NOTE: Make sure automated backups are enabled for the RDS database; this is the only way to enable archive log mode on RDS.
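A minimal boto3 sketch of the create call; every identifier, instance class, and size here is a placeholder, and a BackupRetentionPeriod greater than 0 keeps automated backups (and therefore archive log mode) enabled.

import boto3

rds = boto3.client("rds")

rds.create_db_instance(
    DBInstanceIdentifier="oracle-rds-demo",           # placeholder
    Engine="oracle-ee",
    EngineVersion="<supported-12c-or-later-version>",
    DBInstanceClass="db.m5.large",
    MasterUsername="admin",
    MasterUserPassword="<master-password>",
    AllocatedStorage=100,
    OptionGroupName="oracle-s3-integration",          # option group created above
    BackupRetentionPeriod=7,                          # > 0 so automated backups stay on
)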
- Once the RDS database is created, add the IAM role created in Step 3, with the S3_INTEGRATION feature, under Manage IAM roles for the RDS database.
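This console action corresponds to a single API call; a boto3 sketch using the placeholder role and instance names from earlier:

import boto3

rds = boto3.client("rds")

# Associate the integration role with the instance for the S3_INTEGRATION feature.
rds.add_role_to_db_instance(
    DBInstanceIdentifier="oracle-rds-demo",
    RoleArn="arn:aws:iam::<account-id>:role/rds-s3-integration-role",
    FeatureName="S3_INTEGRATION",
)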
- Connect to the RDS database from a client using the instance endpoint.
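If you want to look up the endpoint programmatically instead of copying it from the console, a boto3 sketch (the instance identifier is the placeholder used above):

import boto3

rds = boto3.client("rds")

# Fetch the endpoint host and port to use in the client connect string.
info = rds.describe_db_instances(DBInstanceIdentifier="oracle-rds-demo")
endpoint = info["DBInstances"][0]["Endpoint"]
print(endpoint["Address"], endpoint["Port"])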
- Set the archive log retention window (in hours).
begin
  rdsadmin.rdsadmin_util.set_configuration(
    name  => 'archivelog retention hours',
    value => '48');
end;
/
- Enable Force Logging.
exec rdsadmin.rdsadmin_util.force_logging(p_enable => true);
- Enable Block Change Tracking.
exec rdsadmin.rdsadmin_rman_util.enable_block_change_tracking;
- Take an incremental level 0 backup of the RDS database, including the control file and archive logs, in DATA_PUMP_DIR.
set SERVEROUTPUT ON
begin
  RDSADMIN.RDSADMIN_RMAN_UTIL.BACKUP_DATABASE_INCREMENTAL(
    P_OWNER                => 'SYS',
    P_DIRECTORY_NAME       => 'DATA_PUMP_DIR',
    P_LEVEL                => 0,
    P_INCLUDE_ARCHIVE_LOGS => TRUE,
    P_INCLUDE_CONTROLFILE  => TRUE,
    P_OPTIMIZE             => TRUE,
    P_RMAN_TO_DBMS_OUTPUT  => TRUE);
end;
/
commit;
- List the files in DATA_PUMP_DIR to verify new backup files.
select * from table(RDSADMIN.RDS_FILE_UTIL.LISTDIR('DATA_PUMP_DIR')) order by mtime desc;
Notice that the control file backup has -c-<db_id>- in its name.
- Transfer the files in DATA_PUMP_DIR to the S3 bucket.
SELECT rdsadmin.rdsadmin_s3_tasks.upload_to_s3(
p_bucket_name => '<S3-bucketName>',
p_s3_prefix => '<backupDir_inBucket>/',
p_directory_name => 'DATA_PUMP_DIR')
AS TASK_ID FROM DUAL;
This query returns a task ID, which we will use to track the status of the upload to S3.
- Check the status, and wait for the task to finish successfully.
SELECT text FROM table(rdsadmin.rds_file_util.read_text_file('BDUMP','dbtask-<TASKID>.log'));
- Verify that the backups are present in the S3 bucket directory.
In this case, we will use s3fs to mount the bucket locally.
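If you would rather not mount the bucket, the same check can be done with boto3; the bucket and prefix are the example values from the first step.

import boto3

s3 = boto3.client("s3")

# List everything under the backup prefix with size and last-modified time.
resp = s3.list_objects_v2(Bucket="rdsdemo", Prefix="rds_backup/")
for obj in resp.get("Contents", []):
    print(obj["Key"], obj["Size"], obj["LastModified"])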
- Take an incremental level 1 backup of the RDS database, including the control file and archive logs, in DATA_PUMP_DIR.
set SERVEROUTPUT ON
begin
  RDSADMIN.RDSADMIN_RMAN_UTIL.BACKUP_DATABASE_INCREMENTAL(
    P_OWNER                => 'SYS',
    P_DIRECTORY_NAME       => 'DATA_PUMP_DIR',
    P_LEVEL                => 1,
    P_INCLUDE_ARCHIVE_LOGS => TRUE,
    P_INCLUDE_CONTROLFILE  => TRUE,
    P_OPTIMIZE             => TRUE,
    P_RMAN_TO_DBMS_OUTPUT  => TRUE);
end;
/
commit;
- List the files in DATA_PUMP_DIR to verify new backup files.
select * from table(RDSADMIN.RDS_FILE_UTIL.LISTDIR('DATA_PUMP_DIR')) order by mtime desc;
Notice the new backup files, along with the control file backup, which has -c-<db_id>- in its name.
- Transfer the files in DATA_PUMP_DIR to the S3 bucket.
SELECT rdsadmin.rdsadmin_s3_tasks.upload_to_s3(
p_bucket_name => '<S3-bucketName>',
p_s3_prefix => '<backupDir_inBucket>/',
p_directory_name => 'DATA_PUMP_DIR')
AS TASK_ID FROM DUAL;
This query returns a task ID, which we will use to track the status of the upload to S3.
- Check the status, and wait for the task to finish successfully.
SELECT text FROM table(rdsadmin.rds_file_util.read_text_file('BDUMP','dbtask-<TASKID>.log'));
- Verify that the new incremental backups are present in the S3 bucket directory.
- Verify that the new backups also appear under the s3fs mount path.
- Now take a dSource Snapshot.