2019-11-08, 11:30–11:55, Blue Day
Developers often do not know the common pitfalls of the framework they are using. At the same time, most popular frameworks ship with easy ways to shoot your application’s security in the foot. Developer education alone will fail if even one mistake slips through, because a single mistake can expose a dangerous vulnerability. In this talk, we’ll show how you can dramatically reduce the chance developers shoot themselves in the foot by giving them safer versions of their common tools, so your company can ship more secure code.
We will write wrapper classes and safe versions of common tools to eliminate XSS vectors, open redirects, XXE, SSRF, LFI, and other dangerous bugs in your codebase. After that we’ll show simple steps to educate developers and gain traction in your organization. Then we’ll show how easy it is to integrate SAST tools in your CI/CD pipeline to ensure your developers use your safe tools rather than the footguns built into common frameworks.
This session is ideal for security engineers interested in eliminating entire classes of security bugs inside their code base.
As a professional working in AppSec, I run into developers making the same mistake over and over again all too often. This is rarely solved by developer education alone. For example, in Java and older versions of .Net Framework, XML parsing has DTDs enabled by default, which relies on the developer to use the parser safely instead of it being safe by default. Luckily, newer versions of .Net Framework are safe from XXE by default.
We can follow the same successful pattern Microsoft used for XXE in .Net from 4.6.1 onward. The most effective solution I have found is to create a ‘safe-by-default’ version of the tool or class the developers are using. Then the training simplifies to ‘use this, not this,’ which usually gets better traction among developers who are always in a time crunch. It also concentrates the dangerous uses of code in one place, which makes the AppSec team more efficient.
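To make the pattern concrete, here is a minimal sketch of what such a safe-by-default wrapper could look like in Java (the class name `SafeXml` and the exact hardening flags are my illustration of the idea, not the speaker's code):

```java
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.xml.sax.InputSource;
import java.io.StringReader;

// Hypothetical 'safe-by-default' wrapper: the one XML entry point
// developers are trained to use instead of DocumentBuilderFactory.
public final class SafeXml {
    private SafeXml() {}

    public static Document parse(String xml) throws Exception {
        DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
        // Rejecting DOCTYPE declarations outright blocks classic XXE
        // and entity-expansion (billion laughs) payloads.
        dbf.setFeature("http://apache.org/xml/features/disallow-doctype-decl", true);
        dbf.setXIncludeAware(false);
        dbf.setExpandEntityReferences(false);
        DocumentBuilder db = dbf.newDocumentBuilder();
        return db.parse(new InputSource(new StringReader(xml)));
    }
}
```

The training message then collapses to one line: call `SafeXml.parse`, never `DocumentBuilderFactory` directly.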
This methodology and its implementation have been effective in my experience, and they would be useful for the attendees at DeepSec. I’ll go over how to create safe versions of common tools, how to educate developers effectively, and how to detect misuse of the new safe versions via static analysis with high accuracy.
Here is a tentative outline of the talk:
1. Introduction to the concept of safe classes by showing how it appears already in the industry
- Show how .Net became safer from XXE
- Show how we went from XSS everywhere with common templating languages, to safe by default in frameworks like Angular and React
2. Show how to write and maintain a safe version of a common tool
- Give my examples of 'RedirectSafe' and 'RegexSafe' to protect against open redirects and ReDoS respectively
- Demonstrate on stage writing a wrapper class around a common tool that invites mistakes (tentatively a wrapper around BouncyCastle's .ToRSA function to ensure private keys for JWTs are ephemeral by default)
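The 'RedirectSafe' idea mentioned above could be sketched roughly as follows (the class name comes from the outline, but the host allowlist and every implementation detail here are my assumptions, not the speaker's actual code):

```java
import java.net.URI;
import java.util.Set;

// Hypothetical sketch of a 'RedirectSafe' helper: site-relative paths
// are always allowed, absolute URLs only when their host is on an
// explicit allowlist. Everything else (other hosts, other schemes,
// protocol-relative "//host" tricks, garbage input) is rejected.
public final class RedirectSafe {
    private static final Set<String> ALLOWED_HOSTS =
        Set.of("example.com", "login.example.com"); // assumed allowlist

    public static boolean isSafeTarget(String target) {
        try {
            URI uri = new URI(target);
            if (!uri.isAbsolute() && uri.getHost() == null && !target.startsWith("//")) {
                return true; // site-relative path such as "/account"
            }
            String scheme = uri.getScheme();
            if (!"https".equals(scheme)) return false; // also rejects javascript: etc.
            return uri.getHost() != null && ALLOWED_HOSTS.contains(uri.getHost());
        } catch (Exception e) {
            return false; // unparseable input is never a safe redirect
        }
    }
}
```

Because rejection is the default, a developer who forgets an edge case gets a blocked redirect, not an open one.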
3. Show how to educate developers effectively around this
- Give a free template around documentation that anyone can use
- Explain the key principles of training around this:
+ Use bullet points/numbered lists
+ Keep it impactful
+ Show what not to do
+ Show what to do
+ Show how to use it the unsafe way if needed (and provide context)
4. Show how to integrate with SAST tools to detect non-compliance
- Show the quick way with some Regex and DevSkim
- Show a simple static analysis approach using Roslyn (my example will be in .Net)
- Show an example of integrating this with GitHub hooks to scan on pull requests.
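As a toy illustration of the "quick way" with regexes: a scanner that flags any use of a banned API outside the one sanctioned wrapper file. The banned pattern, the exempted file name, and the class name are all assumptions for illustration; a real deployment would encode this as DevSkim rules or a Roslyn analyzer, as the outline describes.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Pattern;

public class BannedApiScanner {
    // Hypothetical rule: raw XML parser construction is banned
    // everywhere except inside the safe wrapper itself.
    private static final Pattern BANNED =
        Pattern.compile("DocumentBuilderFactory\\.newInstance");

    /** Returns 1-based line numbers of findings in a single source file. */
    public static List<Integer> scan(String fileName, String source) {
        List<Integer> findings = new ArrayList<>();
        if (fileName.endsWith("SafeXml.java")) {
            return findings; // the one sanctioned call site is exempt
        }
        String[] lines = source.split("\n", -1);
        for (int i = 0; i < lines.length; i++) {
            if (BANNED.matcher(lines[i]).find()) {
                findings.add(i + 1);
            }
        }
        return findings;
    }
}
```

Hooked into CI on pull requests, a non-empty findings list fails the build and points the developer at the safe wrapper instead.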
5. Recap the methodology end to end
- Find the common mistakes in your organization
- Make safe wrapper classes of the root cause
- Train developers to use it instead of the unsafe-by-default classes
- Enforce the rules with static analysis