Delphix Products

  • 1.  If an Oracle DB has insufficient referential integrity, does it adversely affect VDB caching?

    Posted 09-13-2017 04:32:00 PM
    Do insufficient foreign keys affect VDB caching efficiency?
    #Virtualization


  • 2.  RE: If an Oracle DB has insufficient referential integrity, does it adversely affect VDB caching?

    Posted 09-13-2017 04:57:00 PM
    Rob,

    When you say "insufficient foreign keys", do you mean "missing foreign key constraints"?

    If so, then the impact of the lack of RI constraints falls primarily on the integrity and accuracy of the data, not on the performance of retrieval or modification.

    Having said that, there are certain operations where the presence of FK constraints also requires indexing, and that indexing generally improves performance by reducing the volume of I/O required to retrieve or modify.  Reducing the volume of I/O generated positively affects the efficiency of any I/O subsystem, including the ARC caching in the Delphix virtualization engine.

    I'm not sure if I understood or answered appropriately; please let me know.

    Thanks!

    -Tim


  • 3.  RE: If an Oracle DB has insufficient referential integrity, does it adversely affect VDB caching?

    Posted 09-13-2017 05:47:00 PM
    Thanks very much, Tim.  You did understand the gist of the question.  I know that sometimes we should even compress indexes to help buffering as well.  But I was wondering if there are any direct tuning principles to follow to keep VDB cache performance optimized.
    Appreciate your insights,
    Rob


  • 4.  RE: If an Oracle DB has insufficient referential integrity, does it adversely affect VDB caching?
    Best Answer

    Posted 09-13-2017 06:23:00 PM
    Rob,

    Over the past 25 years, the relevant principles that I have followed are...

    1. reduce the volume of I/Os, then next...
    2. reduce the latency of I/Os

    Step 1 translates to "make it more efficient" by generating fewer I/Os.  Step 2 recognizes that there is only so far you can go with step 1, so when step 1 is exhausted, then (and only then) go on to step 2.

    Caching falls into step 2.  So does more memory, faster CPU, etc.

    If you take care of step 1, you rarely have to worry about step 2.

    Often, people try to reverse the order and perform step 2 first, but putting inefficient processes on faster hardware (a.k.a. "KIWI" or "kill it with iron") is only a temporary solution, never permanent, if it solves anything at all.  Expensive and disappointing.

    Also, remember that indexing isn't always the right solution;  there are numerous situations where a full scan is far more efficient.

    Reducing the number of I/Os and the latency of I/Os requires that you're able to measure both.  You can't improve that which you cannot measure.
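A minimal sketch of measuring both dimensions at once (the `MeasuredReader` wrapper and its `read_fn` callback are hypothetical illustrations, not a real Delphix or Oracle interface): wrap the read path so every call is counted (volume) and timed (latency).

```python
import time

class MeasuredReader:
    """Hypothetical wrapper: counts reads (volume) and sums their latency."""

    def __init__(self, read_fn):
        self.read_fn = read_fn
        self.calls = 0      # I/O volume
        self.elapsed = 0.0  # cumulative latency in seconds

    def read(self, key):
        start = time.perf_counter()
        try:
            return self.read_fn(key)
        finally:
            self.calls += 1
            self.elapsed += time.perf_counter() - start

reader = MeasuredReader(lambda key: f"block-{key}")
for key in (1, 2, 3):
    reader.read(key)

print(reader.calls)             # 3
print(f"{reader.elapsed:.6f}")  # total seconds spent in reads
```

With both numbers in hand, you can tell whether a change actually removed I/Os (step 1) or merely made each one cheaper (step 2).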

    Sorry, a bit preachy and perhaps a bit high-level, but that's how I approach this stuff, and I did it for a living for a number of years.

    Hope this helps...

    -Tim