Blog

Beyond Compliance: Why 2026 Is the Year of Continuous Security Testing


If 2025 was the year of compliance—dominated by the EU’s “regulatory tsunami” of DORA and the localization of NIS2—then 2026 is the year of execution.

For many organizations, the previous twelve months were spent navigating governance frameworks. However, the regulatory pressure has not gone away. Instead, it has shifted focus.
In the EU, regulation continues to shape the cyber security field. The Cyber Resilience Act (CRA) mandates vulnerability reporting for applicable digital products from September 11th, 2026, and will eventually cover products placed on the market after December 11th, 2027.

Preparation for the CRA, as well as for the follow-up waves of NIS2 and DORA, is likely to shift the focus from governance to the more technical aspects of information security. The CRA, for example, mandates that vendors apply effective and regular security tests and reviews to products with digital elements. DORA contains a very similar requirement, and NIS2 states essentially the same.

Therefore, let us look at how certain aspects of product security, application security (AppSec) and cloud security might evolve in 2026. In this post and the subsequent series of blog posts, I'll discuss this from three viewpoints:

1. Integrating security testing tightly into the software development lifecycle (SDLC) and application delivery processes, and why continuous application security testing makes sense in so many ways.
2. What the current state of AI means for application security and pentesting.
3. How both of these affect the cloud environments hosting cloud-native applications, which may support digital products as defined by the CRA.

In this first installment, we examine why integrating security testing directly into the Software Development Lifecycle (SDLC) is important for all software and application providers, not just for those mandated by CRA.

Continuous application security testing

Our team at 2NS Cybersecurity has long advocated for the benefits of implementing DevSecOps practices (Leppänen et al., 2022). Fundamentally, DevSecOps is the tight integration of security gates into the SDLC, moving security from a “blocker” to an enabler.
Recent empirical evidence supports this shift. Feio et al. (2024) demonstrated that integrated, continuous security testing within CI/CD pipelines significantly improves the Mean Time to Discovery (MTTD) of vulnerabilities: development teams were able to identify and remediate critical flaws during the development phase rather than post-deployment. This reduction in MTTD directly correlates with a reduced Mean Time to Remediate (MTTR), a metric that translates directly into financial savings.
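The "security gate" idea above can be sketched as a pipeline step that fails the build when high-severity findings are present. This is a minimal illustration only; the JSON report format and finding fields are assumptions for the example, not any specific scanner's output.

```python
# Minimal sketch of a CI "security gate": parse a scanner report and
# block the build when high-severity findings are present. The report
# format here is a made-up assumption, not a real tool's output schema.
import json

def gate(report_json: str, fail_on: str = "high") -> int:
    """Return a CI exit code: 0 = pass the gate, 1 = block the build."""
    findings = json.loads(report_json)
    blocking = [f for f in findings if f["severity"] == fail_on]
    for f in blocking:
        print(f"BLOCKED by {f['id']} ({f['severity']})")
    return 1 if blocking else 0

# Example report, as a DAST step earlier in the pipeline might emit it.
report = json.dumps([
    {"id": "XSS-01", "severity": "high"},
    {"id": "INFO-07", "severity": "low"},
])
print("exit code:", gate(report))
```

In a real pipeline, the non-zero return value is what turns security from an advisory report into an enforced gate: the build simply does not proceed until the blocking finding is remediated or formally accepted.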

Reduced time to remediate translates into euros saved, and in a very straightforward manner.

The cost of fixing a vulnerability is not static: it grows exponentially the longer the bug survives. Quality gurus Labovitz and Chang outlined the 1-10-100 rule, which holds that fixing a defect at a later stage of the SDLC can cost up to 10x more than fixing it during the preceding phase. Studies, e.g. by NIST (National Institute of Standards & Technology), have found this to be a fair approximation (Tassey, 2002). Even when pre-release security testing catches issues, the cost differential between fixing a bug shortly after commit versus just before release is substantial.
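The arithmetic behind the 1-10-100 rule is worth making explicit: each later phase multiplies the remediation cost by roughly ten. A small sketch, where the baseline cost figure is an invented assumption purely for illustration:

```python
# Illustrative only: the 1-10-100 rule says remediation cost grows
# roughly 10x with each later SDLC phase in which the defect is found.
BASE_COST_EUR = 100  # assumed cost of fixing a flaw found at development time

PHASES = ["development", "testing", "production"]

def remediation_cost(phase: str, base: float = BASE_COST_EUR) -> float:
    """Estimated cost of fixing a defect discovered in the given phase."""
    return base * 10 ** PHASES.index(phase)

for phase in PHASES:
    print(f"{phase}: ~{remediation_cost(phase):.0f} EUR")
# → development: ~100 EUR, testing: ~1000 EUR, production: ~10000 EUR
```

Whatever the true multiplier is for a given organization, the shape of the curve is the point: shaving even one phase off the discovery time pays for itself many times over.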

Driving down the time to discover calls for tighter integration of security testing and software development. That means either lining up penetration testers to test each code change or increasing the use of automation. Considering the cost, many organizations over-pivot to automation, relying entirely on DAST (Dynamic Application Security Testing) scanners to police their pipelines.

This approach often results in "alert fatigue", and worse, in missed vulnerabilities. As originally demonstrated by Alavi et al. (2018), automated tools suffered from high false-negative rates, consistently missing critical logic flaws that human testers found easily. That result still stands, even though DAST tools have advanced significantly, as demonstrated by recent studies (Moreno et al., 2025).

We at 2NS argue that the gold standard for AppSec in 2026 is a hybrid model: combining the speed of advanced DAST automation with the lateral thinking of human penetration testers.
This model, often referred to as Penetration Testing as a Service (PTaaS) or Continuous Application Security Testing, provides regular access to human expertise. It ensures that while automation handles the low-hanging fruit (e.g., injection flaws) and the testing frequency, human experts are freed to probe for the complex business logic vulnerabilities that scanners miss.
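The division of labour in the hybrid model can be sketched as a simple triage step: findings in classes a scanner handles reliably are processed automatically, while everything else is queued for a human penetration tester. The category names and the routing function below are illustrative assumptions, not a real product API.

```python
# A minimal sketch of hybrid triage: automation handles well-understood
# finding classes on every pipeline run, while anything outside that set
# is queued for a human penetration tester. Categories are assumptions.
from dataclasses import dataclass

# Finding classes we assume a DAST scanner can reliably detect on its own.
AUTOMATABLE = {"sql_injection", "xss", "missing_security_header"}

@dataclass
class Finding:
    category: str
    endpoint: str

def route(findings: list[Finding]) -> dict[str, list[Finding]]:
    """Split findings into auto-handled vs human-review queues."""
    routed: dict[str, list[Finding]] = {"automated": [], "human_review": []}
    for f in findings:
        key = "automated" if f.category in AUTOMATABLE else "human_review"
        routed[key].append(f)
    return routed

results = route([
    Finding("xss", "/search"),
    Finding("business_logic", "/checkout"),  # scanners typically miss these
])
print(len(results["automated"]), "automated,",
      len(results["human_review"]), "for human review")
```

The point of the sketch is the default: anything automation cannot confidently classify goes to a human, rather than being silently dropped.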
We witnessed tremendous interest in this hybrid approach throughout last year, and we anticipate it will be a defining trend for 2026. Consequently, our offensive security team is focusing heavily on rolling out continuous security testing solutions that keep the “human in the loop” while responding to faster delivery cycles with automation. Stay tuned for the next post in this series, where we will dive into the current state of AI and what it means for the future of application security and pentesting.

Juha Eskelin

Head of Security Operations

2NS Cybersecurity

Interested in knowing more?

Contact us

References

  • Alavi, F., Islam, S., & Jahankhani, H. (2018). Analyzing the False Negative Rate of Automated Web Application Scanners. In Wireless and Satellite Systems (pp. 143–152). Springer.
  • Feio, C., Santos, N., Escravana, N., & Pacheco, B. (2024). An Empirical Study of DevSecOps Focused on Continuous Security Testing. In 2024 IEEE European Symposium on Security and Privacy Workshops (EuroS&PW) (pp. 610–617). IEEE. https://doi.org/10.1109/EuroSPW61312.2024.00074
  • Leppänen, T., Honkaranta, A., & Costin, A. (2022). Trends for the DevOps Security: A Systematic Literature Review. In B. Shishkov (Ed.), Business Modeling and Software Design: 12th International Symposium, BMSD 2022, Fribourg, Switzerland, June 27–29, 2022, Proceedings (pp. 200–217). Springer International Publishing. Lecture Notes in Business Information Processing, 453. https://doi.org/10.1007/978-3-031-11510-3_12
  • Moreno, J., et al. (2025). Comparative evaluation of approaches & tools for effective security testing of Web applications. Journal of Information Security and Applications.
  • Tassey, G. (2002). The Economic Impacts of Inadequate Infrastructure for Software Testing. National Institute of Standards and Technology (NIST).