5 Ways Test Data Holds Back DevOps and CI/CD

By Ugo Pollio posted 12-19-2023 09:00:25 AM

  
Provisioning, maintaining, and using test data often impedes software developers and testers, frustrating valuable talent, delaying releases, and yielding poor-quality software.
This article was originally published on the Delphix website here on November 9, 2023.

With the adoption of DevOps and continuous integration and delivery (CI/CD) practices, test data management (TDM) practices are under pressure to evolve. In Gartner’s report, “Hype Cycle™ for Agile and DevOps, 2023” (27 July 2023, ID G00792717), the firm reports that traditional methods of test data management are becoming problematic for many businesses:

“As organizations shift to DevOps and the pace of development increases, this traditional approach is increasingly at odds with requirements for efficiency, privacy and security, and even the increased complexity of modern applications. This opens organizations to a variety of legal, security and operational risks.”

As a result, TDM has rapidly evolved in many organizations and its role in enterprise software development has grown. Indeed, according to the DevOps Research and Assessment (DORA) program, TDM is an essential capability of DevOps and “one of a set of capabilities that drive higher software delivery and organizational performance.” DORA goes on to say that successful DevOps teams apply three core TDM principles:

  1. “Adequate test data is available to run full automated test suites.

  2. Test data for automated test suites can be acquired on demand.

  3. Test data does not limit or constrain the automated tests that teams can run.”

To understand how your own TDM practices might cause concern, let’s explore the drawbacks of traditional TDM practices and what teams should look for in a better solution. 

Five Drawbacks of Traditional Test Data Management

In our experience, there are (at least) five ways that traditional TDM practices hold development teams back. These drawbacks significantly impact software quality, developer productivity, compliance and security risks, and other outcomes crucial to the business.

1. Delaying the setup of test environments

Traditional TDM processes are often manual or poorly automated. They begin with copying data from production databases into non-production environments, and they may include steps to secure sensitive data. As data volumes have grown, so has the time these processes require. Development pipelines are often halted for days or weeks while test data is provisioned and test environments are set up. DevOps and CI/CD practices are incompatible with such delays. 

To accelerate provisioning, teams need to automate TDM steps and integrate them with existing toolchains such as GitHub, Jenkins, Terraform, and others. Automation eliminates drudgery and manual labor, lets teams centralize TDM tools and processes, and empowers developers with self-service TDM capabilities. In other words, TDM becomes a service to your developers, who can request new or refreshed test data through the same systems they use to obtain their development environments.
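
As a rough illustration of what TDM-as-a-service can look like from inside a pipeline, the Python sketch below has a CI job call a hypothetical TDM REST endpoint to request a masked copy of a source dataset and wait until it is ready. The API URL, payload fields, and TDM_TOKEN variable are assumptions for the sake of the example, not any specific vendor's interface.

```python
# Minimal sketch of a self-service test data request from a CI job.
# The endpoint, payload schema, and TDM_TOKEN variable are hypothetical --
# substitute your TDM tool's actual API.
import os
import time
import requests

TDM_API = "https://tdm.example.com/api"   # hypothetical TDM service
TOKEN = os.environ["TDM_TOKEN"]           # injected by the CI system
HEADERS = {"Authorization": f"Bearer {TOKEN}"}

def provision_test_data(source: str, environment: str, masked: bool = True) -> str:
    """Request a fresh (masked) copy of a source dataset for a test environment."""
    resp = requests.post(
        f"{TDM_API}/provision",
        json={"source": source, "target_env": environment, "masked": masked},
        headers=HEADERS,
        timeout=30,
    )
    resp.raise_for_status()
    job_id = resp.json()["job_id"]

    # Poll until the dataset is ready so the pipeline can move on to testing.
    while True:
        status = requests.get(f"{TDM_API}/jobs/{job_id}", headers=HEADERS, timeout=30).json()
        if status["state"] == "ready":
            return status["connection_string"]
        if status["state"] == "failed":
            raise RuntimeError(f"Test data provisioning failed: {status.get('error')}")
        time.sleep(10)

if __name__ == "__main__":
    # e.g. called from a Jenkins or GitHub Actions step before the test stage
    conn = provision_test_data(source="orders_prod", environment="ci-build-1234")
    print(f"Test database ready at {conn}")
```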

According to an IDC study of Delphix customers across multiple verticals, teams that automate TDM with Delphix cut the time to provision or refresh their development environments by 93 percent and 91 percent, respectively.

2. Running afoul of compliance mandates

Data breaches involving non-production environments are on the rise. After one such breach at Uber, the US Federal Trade Commission issued guidance for businesses to secure non-production environments. The agency cited the challenge created by test data, saying that “such data could include consumer personal information, including sensitive information,” which “is like storing gold in the bank while it is being constructed – it could be risky.”

Sensitive data, no matter where it resides, must be protected to comply with various laws (e.g., GDPR, HIPAA, GLBA) and industry standards (e.g., ISO 27001:2022, PCI DSS) and to mitigate legal and financial risks. One of the most effective ways to protect sensitive data is to eliminate it from non-production environments altogether. Data masking and synthetic data generation are two methods for doing so.

Using data masking to eliminate sensitive data from non-production environments is standard practice at Worldpay, as shared by Arvind Anandam, Director of Platform Engineering at Worldpay: 

“We've got one production environment, but we've got about anywhere between 12 to 20 non-production environments. We do have a very robust masking process that we have put in place. We do a lot of checks to ensure that no production data accidentally slips through into non-production. Masking is the center of everything that we do.”

While masking addresses the risk of sensitive data in non-production environments, it creates yet another challenge: the need for referential integrity across multiple masked test datasets. For example, you may have three databases with sensitive data that are used by an integrated system. Independently masking those databases in your non-production environments may destroy the links between them, in turn breaking your system integration testing. To deal with this situation, seek a masking solution that preserves referential integrity across disparate data sources.
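
One common technique for preserving referential integrity is deterministic masking: the same input value always produces the same masked value, so keys still match across databases that are masked independently. Here is a minimal Python sketch of the idea; the column names and salt handling are illustrative only, and a production masking tool handles far more (formats, uniqueness guarantees, many data types).

```python
# Sketch of deterministic masking: the same customer ID always masks to the
# same value, so joins between separately masked databases still work.
import hashlib

SALT = b"rotate-and-store-this-secret-outside-source-control"

def mask_id(value: str) -> str:
    """Deterministically mask an identifier while keeping it join-able."""
    digest = hashlib.sha256(SALT + value.encode("utf-8")).hexdigest()
    return f"CUST-{digest[:12]}"

# Two "databases" that reference the same customer key.
crm_rows = [{"customer_id": "10042", "name": "Alice Smith"}]
billing_rows = [{"customer_id": "10042", "amount": 99.90}]

masked_crm = [{**r, "customer_id": mask_id(r["customer_id"]), "name": "REDACTED"} for r in crm_rows]
masked_billing = [{**r, "customer_id": mask_id(r["customer_id"])} for r in billing_rows]

# The masked key is identical in both datasets, so integration tests that join
# CRM and billing data still find matching rows.
assert masked_crm[0]["customer_id"] == masked_billing[0]["customer_id"]
print(masked_crm[0]["customer_id"])
```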

3. Compromising the quality of testing

Due to the effort and complexity of providing test data to developers, many enterprises do so very infrequently. It is not uncommon for teams to refresh test data only once or twice per year. Some teams instead rely on partial sets of data, but those too are often refreshed infrequently. Incomplete or outdated data often leads to poor test results. Both false positives (defects that aren’t real) and false negatives (real defects that are missed) are common outcomes of poor test data. 

With the proper technology, teams can easily establish weekly, daily, or even near-real-time updates of test data that are created from full production datasets. Important capabilities of such a solution include automated data synchronization, sensitive data discovery and automated masking. The solution must also synchronize schema changes so that application code can be tested against current database designs. 

For those who prefer synthetic data, care must be taken to generate realistic datasets. Developers and testers need data that will expose system defects during testing. Production data often includes errors, unexpected variations, and inconsistencies that are useful for catching software defects, validating routines that handle bad data (i.e., catching and correcting data defects), and testing fraud detection routines and other data-dependent algorithms.
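
If you do go the synthetic route, it helps to deliberately inject the kinds of imperfections real production data contains. The toy generator below, with made-up field names and error rates, salts otherwise clean records with missing values, malformed emails, and duplicates so that validation and error-handling paths actually get exercised.

```python
# Toy synthetic-data generator that injects realistic imperfections
# (missing values, malformed emails, duplicates) so that validation and
# error-handling code paths are exercised during testing.
# Field names and error rates are illustrative assumptions.
import random

random.seed(42)  # reproducible test data

def clean_record(i: int) -> dict:
    return {"id": i, "email": f"user{i}@example.com", "age": random.randint(18, 90)}

def synthesize(n: int, error_rate: float = 0.1) -> list[dict]:
    rows = []
    for i in range(n):
        row = clean_record(i)
        if random.random() < error_rate:
            # Pick one kind of realistic defect to inject.
            defect = random.choice(["missing_email", "bad_email", "negative_age", "duplicate"])
            if defect == "missing_email":
                row["email"] = None
            elif defect == "bad_email":
                row["email"] = "not-an-email"
            elif defect == "negative_age":
                row["age"] = -1
            elif defect == "duplicate":
                rows.append(dict(row))  # append an exact duplicate as well
        rows.append(row)
    return rows

if __name__ == "__main__":
    data = synthesize(1000)
    bad = [r for r in data if not r["email"] or "@" not in str(r["email"]) or r["age"] < 0]
    print(f"{len(data)} rows generated, {len(bad)} intentionally imperfect")
```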

4. Eating into the application development budget

In our experience, the software, services and hardware — whether on-premises or in the public cloud — used for non-production data and applications often exceed those used for production systems. The majority of these non-production systems serve development and testing purposes. Just think of all of your unit, user acceptance and system integration test environments — how much infrastructure do they consume? This allocation of resources to non-production environments not only increases the fully burdened cost of your applications, it also robs your business of resources that could be better spent on innovation and productivity. 

An infrastructure-as-code approach takes advantage of more recent technical advances, such as public cloud elasticity, containerization, and orchestration, to both expedite the creation of development environments and reduce their cost. It exploits the ephemeral nature of development environments (i.e., the fact that they’re not needed 100% of the time and can be “torn down” when not in use) to get more out of infrastructure dollars.

Similarly, teams should be utilizing a “data-as-code” approach that provides test datasets on demand and destroys them when not needed. Technologies such as database virtualization and APIs allow teams to spin up virtual test datasets directly from their pipelines and destroy them as soon as software is delivered. These virtual datasets consume a trivial amount of storage and can be provisioned in minutes instead of days, weeks or months. 
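
To make the data-as-code idea concrete, here is a rough sketch of how an ephemeral virtual dataset might be wrapped in a pipeline step: a context manager provisions a virtual copy when the tests start and destroys it when they finish. The VirtualDataService client and its methods are hypothetical placeholders for whatever API your database virtualization tool exposes.

```python
# Sketch of a "data as code" pattern: provision a virtual test dataset when a
# pipeline stage starts and destroy it when the stage ends. VirtualDataService
# and its methods are hypothetical stand-ins for a virtualization tool's API.
import contextlib
import uuid

class VirtualDataService:
    """Placeholder client; a real implementation would call a REST API."""

    def provision(self, source: str) -> str:
        vdb_name = f"{source}-vdb-{uuid.uuid4().hex[:8]}"
        print(f"provisioned virtual dataset {vdb_name} (storage cost: a few MB)")
        return vdb_name

    def destroy(self, vdb_name: str) -> None:
        print(f"destroyed virtual dataset {vdb_name}")

@contextlib.contextmanager
def ephemeral_test_data(service: VirtualDataService, source: str):
    """Yield a short-lived virtual dataset and always clean it up afterwards."""
    vdb = service.provision(source)
    try:
        yield vdb
    finally:
        service.destroy(vdb)  # no idle copies left consuming storage

if __name__ == "__main__":
    svc = VirtualDataService()
    with ephemeral_test_data(svc, "orders_prod") as vdb:
        # Run the test suite against the virtual copy here.
        print(f"running integration tests against {vdb}")
```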

Together, these capabilities lead to substantial savings. According to the IDC study, Delphix’s customers were able to reduce the physical storage used for test data by 82 percent and the physical servers by 32 percent. They also increased IT staff productivity across operations, security management, compliance and other functions.

5. Inhibiting scalability and flexibility

Test data serves many different use cases. For one, it helps developers build and test their own code prior to any user acceptance or system integration testing. It also helps quality engineers test software during each phase of testing, including security testing. For these use cases, engineers often run hundreds or thousands of automated tests, which are best served by having the same data during each run. And since tests are run both in parallel (many tests at once) and sequentially, duplicate test data sets need to be available, and they must be rewound to their starting state for subsequent sequential tests. Unfortunately, many TDM approaches either do not provide these capabilities or make them very difficult to manage.

Luckily, automation can help. In particular, your TDM solution should be able to bookmark and rewind test data to a particular point in time. Similarly, it should be able to branch test data sets to run tests in parallel. If these automations synchronize schema changes, they can support tests of different versions of software, which often rely on specific versions of database schemas. With bookmarks, rewind, branching and support for schema changes, you can automate TDM for even the most complex testing scenarios.
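
As a sketch of how those primitives might be driven from an automated test harness, the snippet below bookmarks a dataset before a destructive run, rewinds it afterwards so the next sequential run starts from the same known state, and branches a copy for a parallel run. The TestDataClient class and its bookmark/rewind/branch methods are hypothetical stand-ins for whatever your TDM tool actually provides.

```python
# Sketch of bookmark/rewind/branch operations driven from a test harness.
# TestDataClient and its methods are hypothetical placeholders; the point is
# the workflow, not the specific calls.
import copy

class TestDataClient:
    """In-memory stand-in: real tools snapshot whole databases, not dicts."""

    def __init__(self, dataset: dict):
        self.dataset = dataset
        self._bookmarks: dict[str, dict] = {}

    def bookmark(self, name: str) -> None:
        self._bookmarks[name] = copy.deepcopy(self.dataset)   # save known-good state

    def rewind(self, name: str) -> None:
        self.dataset = copy.deepcopy(self._bookmarks[name])   # return to that state

    def branch(self, name: str) -> "TestDataClient":
        return TestDataClient(copy.deepcopy(self._bookmarks[name]))  # parallel copy

if __name__ == "__main__":
    client = TestDataClient({"orders": [{"id": 1, "status": "open"}]})
    client.bookmark("before-regression")

    # A destructive test mutates the data...
    client.dataset["orders"][0]["status"] = "cancelled"

    # ...and rewinding restores the starting state for the next sequential run.
    client.rewind("before-regression")
    assert client.dataset["orders"][0]["status"] == "open"

    # Branching gives a second, identical copy for a parallel test run.
    parallel = client.branch("before-regression")
    assert parallel.dataset == client.dataset
```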

Finally, the ability to refresh data on demand allows teams to test with the latest data and schemas. Having fresh data improves the effectiveness of software testing at each stage and shifts quality left, resulting in fewer defects making their way into production. Per the IDC study, Delphix’s customers reduced the number of errors per application by 73 percent while reducing the time developers spent on defect reduction.

A Practical Example from a Telco Customer

Recently, I’ve been working with a large media and telecommunications customer to innovate how they test their software. Our primary goal is to improve their testing efficiency and reduce defects. Like others in their industry, they are under constant pressure to deliver a great customer experience (CX), making every customer journey smooth, fast, and personalized. They also maintain a constant focus on data security and privacy due to the high volume of sensitive personal information they possess and process each day for millions of customers.

Their time-to-market and a great CX depend on their application quality, but their testing was slow and less effective than needed. Their regression tests took one week to run and could be performed only every four or five weeks, during which everyone else in that integrated environment had to stop testing; resolution planning, defect fixing, and related work followed. Altogether, the process took about a month and a half. In the best case (i.e., no major defects found), they were able to release every two months. In the worst case, a major defect found during regression testing meant bug fixing on both the current release and the next one, slowing the cadence to four or more months.

In one of our brainstorming sessions, they found a way to move from a monthly regression cadence to a daily one, largely thanks to the ability to bookmark and version their test datasets. As a result, their testing teams work during the day on the next release, while their quality assurance (QA) teams run automated regression tests overnight. Those automated tests modify the data every night; when the testing teams are back in the morning, the data has already been rewound to the previous day’s state, so they can begin with a known, predictable dataset. It’s as if nothing happened, even though thorough regression testing was performed overnight!

With this approach, they reduced batch sizes, dramatically improved quality, and accelerated time-to-market by 4x. They also found that they improved collaboration between teams and team members, had fewer conflicts with their shared data environments, and improved the productivity of their squads.

Conclusion

Traditional TDM practices hinder the efficiency and effectiveness of software development teams. The time-consuming provisioning of test data, the risk of exposing sensitive information, infrequent or incomplete data sets, high management costs, and the lack of flexibility for different testing scenarios all contribute to these challenges.

However, by embracing modern approaches and leveraging advanced technologies like Delphix, organizations can overcome these limitations and achieve improved outcomes. Automation, toolset integration, self-service capabilities, data masking, real-time data updates, and data-as-code approaches all play a role in optimizing test data management.

By addressing these issues and implementing more robust test data management practices, businesses can enhance software quality, boost developer productivity, ensure compliance, mitigate security risks, and reduce costs. Ultimately, effective test data management becomes a strategic asset that enables high-performance software delivery and empowers organizations to stay competitive in today's fast-paced digital landscape.
