Application security is a developer problem

March 7, 2026 · 7 min read

Why every developer - not just the AppSec team - is responsible for writing secure code, and what it actually takes to build that skill.



There’s a convenient fiction in the software industry: security is someone else’s job. You write the feature, the AppSec team reviews it, the pen testers hammer it, and some firewall catches whatever slips through. Layered defense. Problem solved.

Except it isn’t. Not even close.

The average time between a vulnerability being introduced and being discovered in production is measured in months, sometimes years. By then, it’s baked into five downstream services, three forks of your code, and a Docker image running on infrastructure you forgot existed. The firewall doesn’t know your JWT validation logic is broken. The pen tester finds what they have time to find. The AppSec engineer triaging 400 SAST findings a week can’t save you from yourself.
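To make the JWT example concrete, here is a minimal sketch (stdlib only, function names are illustrative) of the most common validation bug: decoding the claims without ever checking the signature. The broken version accepts any token an attacker cares to forge; the fixed version recomputes the HMAC and compares it in constant time.

```python
import base64
import hashlib
import hmac
import json

def b64url_decode(part: str) -> bytes:
    # JWT segments are base64url without padding; restore it before decoding
    return base64.urlsafe_b64decode(part + "=" * (-len(part) % 4))

def broken_decode(token: str) -> dict:
    # The classic bug: read the claims, never check the signature.
    # Anyone who can edit the payload can impersonate anyone.
    _, payload_b64, _ = token.split(".")
    return json.loads(b64url_decode(payload_b64))

def verified_decode(token: str, secret: bytes) -> dict:
    # Recompute HMAC-SHA256 over "header.payload" and compare in
    # constant time before trusting a single claim.
    header_b64, payload_b64, sig_b64 = token.split(".")
    expected = hmac.new(
        secret, f"{header_b64}.{payload_b64}".encode(), hashlib.sha256
    ).digest()
    if not hmac.compare_digest(expected, b64url_decode(sig_b64)):
        raise ValueError("invalid signature")
    return json.loads(b64url_decode(payload_b64))
```

In production you would use a maintained JWT library rather than this sketch, but the failure mode is the same either way: a decode path that never touches the signature.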

The only person who can write secure code is the person writing the code.


What AppSec actually is

Application security is the practice of building software that behaves correctly even when someone is actively trying to make it behave incorrectly. That framing matters. It’s not about compliance checkboxes or vulnerability scanners giving you a green light. It’s about adversarial correctness.

The threat model is simple: assume a motivated attacker with full knowledge of your codebase, your dependencies, and your infrastructure. What breaks? Where does trust get violated? What assumptions did you make that were never actually enforced?

The OWASP Top 10 exists because the same categories of vulnerabilities keep appearing year after year in production systems built by competent engineers. Injection. Broken authentication. Insecure deserialization. These aren’t exotic edge cases. They’re the result of developers who understand how code works but haven’t internalized how code gets abused.


The jagged competency problem

Here’s what makes AppSec genuinely hard: developer security knowledge is almost always jagged. A senior engineer who writes airtight SQL queries will turn around and implement a custom session token scheme using Math.random(). A developer who religiously validates input on the frontend assumes the backend is safe by proximity. Someone who can define SSRF has never once considered where their own HTTP client makes requests.
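The session token mistake deserves a concrete illustration. In Python terms (function names mine), a `Math.random()`-style scheme uses the `random` module, whose Mersenne Twister state can be fully reconstructed from a handful of observed outputs; the `secrets` module draws from the OS CSPRNG and exists for exactly this job:

```python
import random
import secrets

def weak_token() -> str:
    # Equivalent to a Math.random() scheme: the Mersenne Twister was
    # never designed for security, and its internal state can be
    # recovered from observed outputs, making future tokens predictable.
    return "".join(random.choice("0123456789abcdef") for _ in range(32))

def strong_token() -> str:
    # 32 bytes from the OS CSPRNG, URL-safe encoded: unguessable.
    return secrets.token_urlsafe(32)
```

The two functions look interchangeable in a code review, which is the point: jagged knowledge means the difference is invisible unless you already know to look for it.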

Security knowledge doesn’t accumulate uniformly with experience. It accumulates based on what you’ve been burned by, what you’ve read, and whether anyone in your organization has ever pushed back on your threat model. Most developers go years without meaningful security feedback. The result is high competence in the lanes they operate in and total blind spots everywhere else.

This is why security reviews catch so little. The reviewer is pattern-matching against known vulnerability classes. The developer introduced something subtler, something that only becomes a vulnerability three abstraction layers up when a different component makes a different assumption about data that was supposed to be sanitized.


The stack you actually need to understand

AppSec isn’t one thing. It’s a stack of concerns, and most developers are only aware of the bottom two or three layers:

1. Input validation and output encoding. The classics. SQL injection, XSS, command injection. If you’re not doing this right, stop reading and fix that first.

2. Authentication and authorization. Not just “is the user logged in” but “does this user have the right to do this specific thing, on this specific resource, in this specific context.” Broken access control is consistently the top OWASP finding because authorization logic is usually ad hoc, scattered across the codebase, and never systematically tested.

3. Cryptography. You should almost never be implementing cryptography. You should be selecting the right library, the right algorithm, and using it correctly. The failure mode here isn’t usually “bad math,” it’s using ECB mode, reusing IVs, rolling your own key derivation, or trusting a token you issued yourself without verifying its signature.

4. Dependency management. Every package you import is attack surface you don’t control. Supply chain attacks have moved from theoretical to routine. Know what’s in your dependency tree, keep it updated, and understand what you’re actually importing.

5. Secrets management. Credentials don’t belong in code, environment files committed to version control, or build logs. This is solved. Use a secrets manager. The number of production databases compromised by credentials in a public GitHub repo is genuinely embarrassing as an industry.

6. Security architecture. This is where it gets interesting and where most engineers stop. How does trust flow through your system? Where are the privilege boundaries? What’s the blast radius if one component is compromised? These questions get harder as systems grow and rarely get asked at design time.
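Layer 1 is worth a concrete sketch, since it’s the one the list says to fix first. Using sqlite3 and an in-memory table (table and function names are illustrative), the difference between string interpolation and parameter binding is the difference between input becoming part of the SQL grammar and input staying data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0), ('root', 1)")

def find_user_vulnerable(name: str) -> list:
    # String interpolation: the input is spliced into the SQL text,
    # so a crafted value can rewrite the query itself.
    return conn.execute(
        f"SELECT name FROM users WHERE name = '{name}'"
    ).fetchall()

def find_user_safe(name: str) -> list:
    # Parameter binding: the driver passes the value out-of-band,
    # so it can never be interpreted as SQL.
    return conn.execute(
        "SELECT name FROM users WHERE name = ?", (name,)
    ).fetchall()

payload = "' OR '1'='1"
# The vulnerable version returns every row for this payload;
# the safe version returns nothing.
```

The same bind-don’t-interpolate rule generalizes: output encoding for HTML contexts and argument arrays instead of shell strings are the same idea applied to XSS and command injection.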


What actually changes your security posture

Tooling helps, but it’s not the answer people want it to be. SAST tools catch a meaningful subset of common vulnerability patterns and produce a lot of noise. DAST gives you runtime behavior but requires a running application and test coverage. SCA flags known vulnerable dependencies. None of these tools think about your business logic, your trust model, or the interaction between components.

Threat modeling does. It’s underused because it requires sitting down and thinking adversarially before writing a line of code, which feels unproductive. It isn’t. An hour of threat modeling early in a feature’s lifecycle is worth days of remediation after deployment. The STRIDE model (Spoofing, Tampering, Repudiation, Information Disclosure, Denial of Service, Elevation of Privilege) gives you a structured way to enumerate what can go wrong. Use it.

Code review with a security lens is the other high-leverage activity. Not “does this code do what the ticket says” but “what can an attacker do with this API endpoint, this data flow, this trust assumption.” It requires a different mode of reading code and it’s a skill that has to be developed deliberately.

Certifications like eJPT and eventually OSCP matter more than most developers think, not because the offensive techniques are directly applicable to defensive engineering, but because they force you to think like an attacker. You can’t fully appreciate why SSRF is a critical vulnerability until you’ve used it to pivot into an internal network. You can’t understand why deserialization is dangerous until you’ve popped a shell through a gadget chain. The attacker’s perspective is a prerequisite for taking the defender’s job seriously.
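Since SSRF comes up twice in this piece, here is a hedged sketch (stdlib only, function name mine) of the defensive side: before a “fetch this URL” feature makes a request, resolve the host and refuse anything pointing at loopback, private, or link-local addresses, with 169.254.169.254 being the classic cloud metadata target an attacker pivots through:

```python
import ipaddress
import socket
from urllib.parse import urlparse

def is_safe_url(url: str) -> bool:
    # Reject URLs that would let a user-supplied fetch reach internal
    # infrastructure: non-HTTP schemes, unresolvable hosts, and any
    # address in a loopback, private, or link-local range.
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https") or not parsed.hostname:
        return False
    try:
        infos = socket.getaddrinfo(parsed.hostname, None)
    except socket.gaierror:
        return False
    for info in infos:
        addr = ipaddress.ip_address(info[4][0])
        if addr.is_loopback or addr.is_private or addr.is_link_local:
            return False
    return True
```

This is a sketch, not a complete defense: a resolve-then-fetch gap still leaves room for DNS rebinding, which is why real deployments also pin the resolved address for the actual request or route outbound fetches through an egress proxy.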


Where this is going

AI-assisted development is making this harder. Copilot and its successors generate syntactically correct, often insecure code at high velocity. The developer’s job is increasingly review and integration rather than line-by-line authorship, which means security review skills matter more, not less. You’re auditing more code than you’re writing.

The regulatory environment is also tightening. The EU’s Cyber Resilience Act and various US federal mandates are moving toward software liability. “We didn’t know” is becoming a harder defense. Organizations that have treated AppSec as optional overhead are going to find it mandatory and expensive.

The developers who take this seriously now are building a skill set that becomes more valuable as the gap between what organizations need and what the industry provides widens. AppSec talent is scarce. Engineers who can write secure code and articulate security tradeoffs to non-technical stakeholders are scarcer.

The investment is not complicated. Learn the OWASP Top 10 until you could explain any item from first principles. Work through hands-on labs (HackTheBox, PortSwigger Web Security Academy, DVWA). Build threat modeling into your design process. Read your dependencies’ changelogs. Think adversarially about your own code before someone else does.

The security engineer isn’t coming to save you. You’re it.