
Are you a Digital Leader or Digital Laggard? Here's How to Accelerate Your Digital Agenda Using Hybrid Cloud AppDev

By Alberto Sigismondi posted 04-14-2020 09:00:00 AM

  
The investment you make for digital today will secure your business for tomorrow.
Learn how DataOps can help teams leverage a hybrid cloud strategy to deliver breakthrough customer experiences.

This article was originally published on the Delphix website on March 23, 2020. Co-authored by @Alberto Sigismondi and @Lenore Adam.

The global onset of COVID-19 is putting more pressure than ever on CIOs to transform the customer experience and build the business agility needed not only to adapt but to thrive through the crisis. Increasingly, cloud services play a critical role in helping companies accelerate innovation.

But beyond the hype, companies are still in the early days of cloud adoption. Recent 451 Research findings show that 58 percent of enterprises are moving toward a hybrid IT environment that leverages both on-premises and public cloud infrastructure in an integrated fashion.

The main reason is that most enterprises still have legacy applications on-premises that support their core business, and these will migrate to the cloud slowly, if ever. Data sovereignty laws are also now part of the equation: data is subject to the laws of the country in which it is collected.

Why Hybrid Cloud Matters: Cloud-based AppDev Creates the Foundation for Accelerating Cloud Adoption

Moving development workflows to the cloud increases flexibility and lowers the costs associated with test environments, which often outnumber production by a four-to-one ratio. DevOps teams can optimize around automated and repeatable software delivery processes because infrastructure is now code. But what about the data that feeds your software development lifecycle?
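As a minimal sketch of what "infrastructure is now code" looks like for a test environment, here is a Python example using boto3; the instance identifier, class, and credentials are illustrative placeholders, not values from this article:

```python
# Sketch: provision (and later tear down) a disposable cloud test database
# entirely from code. All identifiers and credentials are placeholders;
# assumes AWS credentials are already configured for boto3.
import boto3

rds = boto3.client("rds", region_name="us-east-1")

rds.create_db_instance(
    DBInstanceIdentifier="appdev-test-001",      # placeholder name
    DBInstanceClass="db.t3.medium",
    Engine="postgres",
    AllocatedStorage=100,                        # GiB
    MasterUsername="testadmin",
    MasterUserPassword="change-me",              # use a secrets manager in practice
    Tags=[{"Key": "env", "Value": "test"}],
)

# Wait until the instance is usable, run the test suite against it, then
# tear it down so unused test capacity is never left running.
rds.get_waiter("db_instance_available").wait(DBInstanceIdentifier="appdev-test-001")
rds.delete_db_instance(DBInstanceIdentifier="appdev-test-001", SkipFinalSnapshot=True)
```

Because the environment is declared in code, it can be created and destroyed on every pipeline run. The data that fills it, however, enjoys no such repeatability.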

Data is the Blind Spot in DevOps 

Feeding and refreshing test data for cloud-based test environments from an on-prem production instance is harder than you might think. Without the ability to automate data delivery as part of the DevOps workflow in the cloud, data becomes an agility bottleneck that drives bad behavior and slows the delivery pipeline.

Here are five reasons why:

  1. Legacy databases are not designed for data agility. A tightly coupled application and legacy database hosted on-prem are not architected to support the agility required by a fast-moving appdev team in the cloud. In the past, database backups or snapshot replications used in lower-level environments were adequate for a development pipeline that produced new product functionality a few times a year. These copies were fine sitting on tape backup or another on-prem storage device, where they could be accessed locally via a SAN, given a month or two of lead time. Today, a copy of a database with a high rate of change quickly becomes stale, and that kind of slow provisioning no longer works for an automated CI/CD toolchain.
     
  2. On-prem to cloud network bottlenecks. Transporting a monster dataset to the cloud is where the infrastructure strains. Think of the sheer size of an enterprise database: it can easily take a couple of weeks to refresh and transfer a 25TB on-prem database for a test environment in the cloud (see the back-of-the-envelope calculation after this list). Not all cloud regions have the same network performance, which may impact some geographically dispersed application teams more than others.
     
  3. Data restores can't keep pace with the velocity of agile dev/test. With development teams now working in smaller batch sizes, they continuously test code changes to iterate rapidly. When a test destroys a dataset, it needs to be quickly returned to its original state so testing can resume immediately and the two-week sprint deadline can still be met. But manual touchpoints and handoffs create major wait states for DevOps workflows that are meant to leverage the flexibility of the cloud. Appdev teams are forced to use subsetting or synthetic data to maintain release velocity, limiting test coverage and letting data-related defects slip further along in the SDLC.
     
  4. Improving MTTR requires both speed and precision. If a defect escapes into production and a support ticket opens up, SREs or dev teams need to quickly spin up support environments in the cloud for triage. What was the test configuration used when this code was implemented? While code is version controlled, data usually is not, making it difficult to trace the state of the test database back to specific application changes (a lightweight way to record that mapping is sketched after this list).
     
  5. Manual anonymization processes and error-prone scripts. Old, manual processes and brittle scripts for anonymization are often the only tools in the tool chest for an enterprise data estate that spans a wide range of database types. While in-transit encryption protects network transfers from on-prem to the cloud, PII must be obfuscated before the data is distributed, which removes any value the data has to a hacker (a minimal masking sketch follows this list). Enterprise databases get distributed across multiple organizations and roles, and poor data masking practices put the organization and its customers at risk.
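To make the network bottleneck in point 2 concrete, here is a back-of-the-envelope calculation; the link speed and effective utilization are illustrative assumptions, not figures from the article:

```python
# Rough transfer time for a 25 TB database over an on-prem-to-cloud link.
# The 1 Gbps link and 60% effective utilization are assumptions for illustration.
dataset_bytes = 25 * 10**12              # 25 TB
link_bps = 1 * 10**9                     # 1 Gbps dedicated link (assumed)
utilization = 0.6                        # effective throughput after overhead (assumed)

seconds = dataset_bytes * 8 / (link_bps * utilization)
print(f"{seconds / 86400:.1f} days")     # ~3.9 days for the raw network copy alone
```

And that is only the network copy; add export, import, and validation on either end, and a multi-week refresh cycle is easy to believe.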
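Point 4 is ultimately about versioning data the way code is versioned. A lightweight approximation, sketched below with an illustrative JSON ledger (not a real product API), is to record which data snapshot each build was tested against, keyed by Git commit:

```python
# Sketch: map each Git commit to the test-data snapshot it was validated against,
# so a production defect can be traced back to the exact test data state.
# The snapshot ID format and JSON ledger are illustrative, not a real product API.
import json
import subprocess
from datetime import datetime, timezone

def record_test_data_version(snapshot_id: str, ledger_path: str = "data-versions.json") -> None:
    commit = subprocess.check_output(["git", "rev-parse", "HEAD"], text=True).strip()
    try:
        with open(ledger_path) as f:
            ledger = json.load(f)
    except FileNotFoundError:
        ledger = {}
    ledger[commit] = {
        "snapshot": snapshot_id,
        "recorded_at": datetime.now(timezone.utc).isoformat(),
    }
    with open(ledger_path, "w") as f:
        json.dump(ledger, f, indent=2)

# In CI, after provisioning test data:
# record_test_data_version("orders-db-2020-03-20T04:00Z")
```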
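And for point 5, the core idea behind masking is deterministic, irreversible obfuscation of PII before data leaves the on-prem environment. A minimal sketch, with illustrative column names and a keyed-hash scheme (production masking also has to preserve formats and referential integrity across databases):

```python
# Minimal deterministic masking: the same input always yields the same pseudonym
# (so joins still work), but the mapping is irreversible without the key.
import hashlib
import hmac

MASKING_KEY = b"rotate-me"  # illustrative; keep real keys in a secrets manager

def mask(value: str) -> str:
    return hmac.new(MASKING_KEY, value.encode(), hashlib.sha256).hexdigest()[:12]

row = {"customer": "Jane Doe", "email": "jane@example.com", "balance": 1042.17}
masked = {
    "customer": mask(row["customer"]),
    "email": f'{mask(row["email"])}@masked.example',
    "balance": row["balance"],  # non-PII columns pass through unchanged
}
print(masked)
```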

Modernizing the Data Pipeline with DataOps

It used to be that the major consumers of copied data were engineering teams. But when data is the 'new oil' in a digital economy, many more data consumers within an organization are looking to leverage production-quality datasets for purposes such as business analytics and machine learning. Customizing datasets for different formats, different obfuscation requirements, and different timelines compounds the data bottleneck.

Leveraging Delphix to implement DataOps best practices when establishing a hybrid model for appdev can help address these challenges, so data becomes portable, accessible, and secure. 

The Delphix platform creates virtualized data clones that are lightweight and portable. Only incremental changes are needed to keep a virtual database in the cloud synced with production, which removes the strain on the network and ensures test data remains relevant and comprehensive. The ability to provision customized datasets can be integrated into the automated CI/CD toolchain, and flexible virtual clones can be refreshed from any point in time, so data becomes as agile as code. The integrated masking technology automatically identifies and anonymizes sensitive data prior to migrating to the cloud, and a policy-driven approach governs who has access to what data and for how long. These capabilities also apply when moving data from one cloud to another, which helps mitigate the much-feared cloud vendor lock-in.
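To picture what that CI/CD integration could look like in practice, a pipeline stage might request a fresh, masked virtual clone before integration tests run. The endpoint, payload fields, and token below are hypothetical placeholders, not the actual Delphix API:

```python
# Hypothetical sketch: a CI stage requests a fresh, masked virtual data clone
# before integration tests run. Endpoint, payload, and token are placeholders,
# not a real Delphix API.
import os
import requests

resp = requests.post(
    "https://dataops.example.com/api/clones",        # hypothetical endpoint
    headers={"Authorization": f"Bearer {os.environ['DATAOPS_TOKEN']}"},
    json={
        "source": "orders-prod",                     # production dataset to virtualize
        "point_in_time": "2020-03-20T04:00:00Z",     # refresh from any point in time
        "masked": True,                              # anonymize before it reaches the cloud
    },
    timeout=60,
)
resp.raise_for_status()
print("Test database ready at", resp.json()["connection_string"])
```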

This global health crisis is forcing companies around the world to rethink their business models for the future, fast-tracking cloud strategies to stay ahead of the competition and manage through unprecedented challenges. In this new digital reality, organizations can't afford to ignore data as a vital part of the application development lifecycle if IT is to evolve and adapt.

