
Some interesting facts about Oracle HCC (Hybrid Columnar Compression)

By Diego Loureda posted 04-14-2016 05:39:57 PM

  

These tests were performed on an Exadata X2-2 Full Rack.

The Delphix Engine (version 4.2.3.1) was configured with 10 vCPUs and 110 GB of RAM.

 

NOTE: HCC is supported on Exadata, ZFS Storage Appliance (ZS*), Pillar Axiom and Flash (FS*) storage.
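
For reference, HCC is declared per segment with the COMPRESS FOR clause. A minimal sketch of the four levels (the table names here are just placeholders, not from our tests):

SQL> -- warehouse compression, optimized for query performance
SQL> create table fact_qlow  compress for query low    as select * from some_fact;
SQL> create table fact_qhigh compress for query high   as select * from some_fact;
SQL> -- archive compression, optimized for space savings
SQL> create table fact_alow  compress for archive low  as select * from some_fact;
SQL> create table fact_ahigh compress for archive high as select * from some_fact;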

 

We performed 2 tests:

 

•   TEST 1: Check size and status of HCC compressed segments after virtualization in Delphix

 

•   TEST 2: HCC segment status after virtual to physical

 

Here are the details and conclusions of both tests:

 

TEST 1: Check size and status of HCC compressed segments after virtualization in Delphix

 

For this test we used a real database (bizd), ~690 GB in size, running on an Exadata X2-2 Full Rack. We imported a fact table from a data warehouse, BMARK_COMP_ATTRIBUTION_FACT_2 (~107 GB):

 

SQL> select sum(bytes)/1024/1024

from dba_segments

where owner='501677820'

and segment_name = 'BMARK_COMP_ATTRIBUTION_FACT_2'; 

 

SUM(BYTES)/1024/1024

--------------------

              109167

 

We compressed this table for QUERY HIGH (the ALTER only marks the table for compression; the MOVE actually rebuilds and compresses the existing rows):

 

SQL> alter table BMARK_COMP_ATTRIBUTION_FACT_2 compress for query high;

Table altered.

SQL>  alter table  BMARK_COMP_ATTRIBUTION_FACT_2 move nologging parallel 32;

Table altered.

SQL>
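
As a side note, before compressing a table this large you can estimate the expected ratio with DBMS_COMPRESSION. A hedged sketch against the 11.2 package (the scratch tablespace name below is an assumption; any tablespace with free space works):

set serveroutput on
declare
  l_blkcnt_cmp   pls_integer;
  l_blkcnt_uncmp pls_integer;
  l_row_cmp      pls_integer;
  l_row_uncmp    pls_integer;
  l_cmp_ratio    number;
  l_comptype_str varchar2(100);
begin
  dbms_compression.get_compression_ratio(
    'USERS',                              -- scratch tablespace (assumption)
    '501677820',                          -- owner
    'BMARK_COMP_ATTRIBUTION_FACT_2',      -- table
    null,                                 -- partition
    dbms_compression.comp_for_query_high, -- target compression level
    l_blkcnt_cmp, l_blkcnt_uncmp,
    l_row_cmp, l_row_uncmp,
    l_cmp_ratio, l_comptype_str);
  dbms_output.put_line('Estimated ratio: ' || round(l_cmp_ratio, 2) ||
                       ' (' || l_comptype_str || ')');
end;
/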

 

After compression, the size was reduced to ~36 GB:

 

SQL> select sum(bytes)/1024/1024

from dba_segments

where owner='501677820'

and segment_name = 'BMARK_COMP_ATTRIBUTION_FACT_2';

 

SUM(BYTES)/1024/1024

--------------------

             37000.5

SQL> 
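
If you want to confirm that the rows really are HCC-compressed (and not just flagged at the segment level), DBMS_COMPRESSION can map a sample ROWID back to its actual compression type. A hedged sketch, comparing against the package constant rather than a hard-coded value:

set serveroutput on
declare
  l_rowid rowid;
  l_type  number;
begin
  -- grab one row's ROWID as a sample
  select rowid into l_rowid
  from   "501677820".BMARK_COMP_ATTRIBUTION_FACT_2
  where  rownum = 1;

  l_type := dbms_compression.get_compression_type(
              '501677820', 'BMARK_COMP_ATTRIBUTION_FACT_2', l_rowid);

  if l_type = dbms_compression.comp_for_query_high then
    dbms_output.put_line('Row is HCC QUERY HIGH compressed');
  else
    dbms_output.put_line('Compression type code: ' || l_type);
  end if;
end;
/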

 

After this compression the database size reported in Exadata is still ~690 GB, because the datafiles do not shrink automatically when a segment is moved:

 

SQL> select (a.data_size + b.temp_size + c.redo_size + d.controlfile_size)/1024 "total_size in GB"
     from (select sum(bytes)/1024/1024 data_size from dba_data_files) a,
          (select nvl(sum(bytes),0)/1024/1024 temp_size from dba_temp_files) b,
          (select sum(bytes)/1024/1024 redo_size from sys.v_$log) c,
          (select sum(block_size*file_size_blks)/1024/1024 controlfile_size from v$controlfile) d;

 

total_size in GB

----------------

690.78405

SQL>

 

We created a dSource from this database and got a Delphix compression ratio of 2.6:1, with a final size of ~215.05 GB.

 

We then provisioned a VDB called BIZDHCC and tried querying the HCC-compressed data:

 

 SQL> alter session set current_schema="501677820";

Session altered.

SQL> select count(*) from BMARK_COMP_ATTRIBUTION_FACT_2;

select count(*) from BMARK_COMP_ATTRIBUTION_FACT_2

*

ERROR at line 1:

ORA-12801: error signaled in parallel query server P007, instance dm02db08:BIZDHCC1 (1)

ORA-64307: hybrid columnar compression is not supported for tablespaces on this storage type

 

The segment is still the same size as it was in Exadata:

 

SQL> select sum(bytes)/1024/1024

from dba_segments

where owner='501677820'

and segment_name = 'BMARK_COMP_ATTRIBUTION_FACT_2';


SUM(BYTES)/1024/1024

--------------------

             37000.5

SQL>

 

We uncompressed the table to see whether we could then query it outside of Exadata:

 

SQL> alter table BMARK_COMP_ATTRIBUTION_FACT_2 nocompress;

Table altered.

SQL> alter table BMARK_COMP_ATTRIBUTION_FACT_2 move nologging parallel 16;

Table altered.

SQL> select sum(bytes)/1024/1024

  2  from dba_segments

  3  where owner='501677820'

  4  and segment_name = 'BMARK_COMP_ATTRIBUTION_FACT_2';

 SUM(BYTES)/1024/1024

--------------------

          98136.8125

 

SQL> select count(*) from BMARK_COMP_ATTRIBUTION_FACT_2;

   COUNT(*)

---------

345124207

 

After uncompressing, the segment is roughly the same size as it was in Exadata, and the data is available for query again.

 

TEST 1 conclusions:

     
  1. We can create a dSource from a database with HCC segments on it
  2. Delphix won’t require extra space to “store” HCC segments
  3. To query HCC data in Delphix you have to uncompress it (see the sketch below for generating the statements), or find a way to return the expected answer to the SNMP call that Oracle makes to the storage on startup (I don't think this is legal)
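
If you take the uncompress route, the statements can be generated straight from the data dictionary. A minimal sketch (the PARALLEL degree is an assumption, partitioned tables would need the same treatment via DBA_TAB_PARTITIONS, and remember every MOVE rewrites the segment at its uncompressed size):

SQL> select 'alter table "' || owner || '"."' || table_name || '" nocompress; ' ||
            'alter table "' || owner || '"."' || table_name || '" move nologging parallel 16;'
     from dba_tables
     where compress_for in ('QUERY LOW','QUERY HIGH','ARCHIVE LOW','ARCHIVE HIGH');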

 

 

TEST 2: HCC segment status after virtual to physical

 

Using the same dSource from TEST 1, we provisioned a VDB (BIZDHCC1) on an Exadata node. This VDB contains the HCC-compressed table:

 

SQL> select table_name , compression , compress_for from dba_tables where owner='501677820';

 

TABLE_NAME                     COMPRESS COMPRESS_FOR

------------------------------ -------- ------------

BMARK_COMP_ATTRIBUTION_FACT_2  ENABLED  ARCHIVE HIGH

 

Provisioning the VDB onto an Exadata compute node is not enough to be able to use HCC: on startup Oracle makes an SNMP call to the storage to check whether HCC is available, and since the VDB datafiles live on Delphix storage rather than on Exadata cells, the check fails:

 

SQL> select * from "501677820".BMARK_COMP_ATTRIBUTION_FACT_2;

select * from "501677820".BMARK_COMP_ATTRIBUTION_FACT_2

*

ERROR at line 1:

ORA-12801: error signaled in parallel query server P011, instance dm02db08.geam.corporate.ge.com:BIZDHCC1 (1)

ORA-64307: hybrid columnar compression is not supported for tablespaces on this storage type

 

Now we did a v2ASM (virtual-to-ASM) move to put all the data back onto Exadata storage, using the move-to-asm.sh script from the Delphix toolkit. Two preparation notes: the default temporary tablespace on this VDB was a bigfile tablespace, and we had to switch it to a smallfile one first, because otherwise the process breaks when moving the tempfiles into ASM; and you need to export ORACLE_SID, ORACLE_HOME and CRS_HOME before running the script.
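
A hedged sketch of that preparation (the tablespace names and sizing assume OMF is configured, and the CRS_HOME path is an assumption; only ORACLE_SID and ORACLE_HOME are taken from our environment):

SQL> -- replace the bigfile default temp tablespace with a smallfile one
SQL> create smallfile temporary tablespace TEMP_SF tempfile size 32g;
SQL> alter database default temporary tablespace TEMP_SF;
SQL> drop tablespace TEMP including contents and datafiles;  -- TEMP = assumed old name

[oracle@dm02db08 ~]$ export ORACLE_SID=BIZDHCC1
[oracle@dm02db08 ~]$ export ORACLE_HOME=/u01/app/oracle/product/11.2.0.3
[oracle@dm02db08 ~]$ export CRS_HOME=/u01/app/11.2.0.3/grid   # assumed Grid home

Here is the output of the process: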

 

[oracle@dm02db08 BIZDHCC1]$ sh ./move-to-asm.sh -parallel 16 +RECO

============================================================

Virtual-to-ASM script (move-to-asm.sh v1.6) for Delphix 4.x

Copyright (c) 2013, 2014 by Delphix. All rights reserved.

============================================================

 

Moving database BIZDHCC to ASM: started at Wed Jul  8 15:21:59 EDT 2015

Verify the following settings...

    db_unique_name => BIZDHCC

    ORACLE_SID => BIZDHCC1

    ORACLE_HOME => /u01/app/oracle/product/11.2.0.3

    Datafile diskgroup => +RECO

    RMAN Channels => 16

 

Do you want to proceed (Y/[N]) => Y

Generate script to move tempfiles to ASM

Generate script to drop old tempfiles

Generate script to drop offline tablespaces

Generate script to make read-only tablespaces read-write

Make read-only tablespaces read-write

Remove offline tablespaces

Updating server parameter file with ASM locations

Move spfile to ASM

Move datafiles to ASM: started at Wed Jul  8 15:22:44 EDT 2015

Move datafiles to ASM: completed at Wed Jul  8 15:46:49 EDT 2015

Startup database with updated parameters

Move tempfiles into ASM

Move Online logs

Restore any read-only tablespaces

Remove old tempfiles

Shutdown database

Starting up RAC database BIZDHCC...

Database BIZDHCC moved to ASM: completed at Wed Jul  8 15:48:06 EDT 2015

 

Final Steps to complete the move to ASM:

  1) Delete VDB on Delphix.

  2) Modify initialization parameters to match source and restart.

     Source parameters are restored at ./source_initBIZDHCC1.ora

 

After this process, the HCC segment is available for query again:

 

SQL> select count(*) from BMARK_COMP_ATTRIBUTION_FACT_2;

   COUNT(*)

---------

345124207

 

 

TEST 2 conclusions:

     
  1. Using Exadata as a target host for a VDB is not enough to be able to use HCC: Oracle makes an SNMP call to the storage to see whether HCC is available, and only Exadata, ZS* and Flash FS* arrays pass that check.
  2. We can ingest HCC data into Delphix, and if we rehydrate it via virtual-to-ASM onto certified storage, the data becomes accessible again. For this particular client that is good enough: they do only functional testing with Delphix, and when they want to test performance they go virtual-to-physical. They understand the process and are OK with seeing the ORA error when querying HCC data, because they know it will be readable again after virtual-to-physical. It is also worth mentioning that this type of compression is normally applied only to rarely accessed data, which is typically not touched in purely functional testing.

 



#Tip

Comments

04-14-2016 06:07:46 PM

Hi, interesting case. I hit the same issue with one of my X5-2 half-rack Exadata deployments and made it easy by implementing a hook to uncompress the HCC tables on the configure, refresh and rewind events. This way everything is transparent for the customer, who never faces the ORA-64307 error. Regards, Mouhssine

04-14-2016 06:17:41 PM

Hello Mouhssine,

That is exactly right. I have some clients that did that, and also some that decided to leave the data compressed in Delphix. By definition they only use HCC on rarely accessed segments, so they don't care too much if they see that ORA error.

Thanks
Diego
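
For anyone who wants to implement that hook approach, here is a hedged sketch (the SID, paths and parallel degree are assumptions; partitioned tables would need the same loop over DBA_TAB_PARTITIONS, and any indexes on the moved tables must be rebuilt afterwards):

#!/bin/sh
# Delphix hook sketch (configure clone / post-refresh / post-rewind):
# uncompress all HCC tables so queries never hit ORA-64307.
export ORACLE_SID=BIZDHCC1
export ORACLE_HOME=/u01/app/oracle/product/11.2.0.3

$ORACLE_HOME/bin/sqlplus -s / as sysdba <<'EOF'
begin
  for t in (select owner, table_name
            from   dba_tables
            where  compress_for in
                   ('QUERY LOW','QUERY HIGH','ARCHIVE LOW','ARCHIVE HIGH'))
  loop
    execute immediate 'alter table "' || t.owner || '"."' ||
                      t.table_name || '" nocompress';
    -- MOVE rewrites the segment uncompressed and invalidates its indexes
    execute immediate 'alter table "' || t.owner || '"."' ||
                      t.table_name || '" move nologging parallel 16';
  end loop;
end;
/
exit
EOF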