Coding for Security – Some Developer Pitfalls


Over the next few weeks, I will be releasing some teaser information from my talk, “Secure Coding: What ‘Bad Guy’ Wants You To Do”. It’s an hour-long talk I am available to give to any organization. Please contact me to book a time.

Background… This talk takes a less traditional approach to talking about secure coding and development. It does not go into the language-specific, nit-noid details of buffer overflow (et al.) prevention. Rather, it presents the top five coarse coding meta vectors that an attacker like me takes to exploit code. These are the key entry points leveraged to gain that first “exploit” in the attack chain. You can find more information on the talk here.

By presenting concepts at this more “meta” level, developers get higher-level, language-agnostic ideas that are easier to remember and can guide their secure coding considerations.

One of the key threats to secure coding is what I call “Developer Mindset” pitfalls. Having been a developer, I know these happen daily. I have fallen for them myself, and I have watched others do the same. Sometimes repeatedly. These pitfalls are the initial blocks that keep the developer from embracing the meta vectors, much less actually instituting them.

Here we go…

Pitfall #1 – I am a developer and I know security

Ok, I am a brain surgeon, let me crack your skull open and scoop out some of that stuff in there. You will let me, right? Bad guys shoot, move, and communicate totally differently than you do. You can’t truly “know security” until you have lived on the other side of it – as the Bad Guy. Technical competence and geek swag don’t mean you know them. Having a CEH, CISSP, or the like doesn’t make you one of them. It takes folks like the guys at Tek Security Group, who have lived as the Bad Guy, to “know security”. That said, your brilliance as a developer can surely be a force multiplier in securing your technology.

Pitfall #2 – This is a minor issue that couldn’t possibly be exploited

Someone has said that, at some point in time, about every major exploit you have heard of. The big, horrendous exploits normally get fixed ASAP. It is the “minor” ones that we leave hanging around. (BAD GUY FU Rule #6) Given time, any minor exploit can be leveraged in some major way. We should remove “minor” and every other ordinal ranking of exploits from our vocabulary and simply remove the risks. Never underestimate the resourcefulness of the Bad Guy.

Pitfall #3 – The solution is insecure, but it’s better than nothing

Every piece of code you write or add to your application should be seen as a basis for an exploit. Read that again… Every single piece of code you write… If you take that mindset, then any insecure code you add is an exploit waiting to happen. Known insecure solutions are NEVER a solution. The next time someone brings this “strategy” up, ask them to go get approval from their manager to add an unexploited exploit to the code base. That normally stops this bad thinking.
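To make that concrete, here is a purely hypothetical sketch (my own example, not code from any real project) of a “better than nothing” solution: a homegrown XOR scrambler used to “protect” a stored credential. It is trivially reversible, which makes it exactly the kind of unexploited exploit I am talking about.

```python
# Hypothetical example: a "better than nothing" homegrown scrambler.
# XOR with a fixed key is trivially reversible -- anyone who reads the
# code (or a few samples of its output) recovers the secret. This is an
# unexploited exploit sitting in the code base, not a protection.

def scramble(secret: str, key: int = 0x5A) -> bytes:
    return bytes(b ^ key for b in secret.encode())

def unscramble(blob: bytes, key: int = 0x5A) -> str:
    return bytes(b ^ key for b in blob).decode()

stored = scramble("db-password-123")   # looks "protected" on disk...
print(unscramble(stored))              # ...but anyone with the code gets it back
```

The moment that scrambler ships, the team has accepted a known-broken control and will likely forget it is there.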

Pitfall #4 – We can hide this issue or protect it from being exploited

See #3. This mindset boils down to: a) adding or keeping an unexploited exploit, and b) adding more (potentially exploitable) attack surface to hide it. I like this… It’s a two-fer as a bad guy. I can exploit you, then if/when you find it, exploit you again. Can anyone say “winner”? Thinking you can hide an exploit from a person whose job is to do nothing but find your weaknesses is like thinking you can put out a massive house fire with a teaspoon.
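Here is a minimal, hypothetical sketch (names and values are mine) of what “hiding” an issue usually looks like in code: the flaw stays in the code base, and an extra layer of obscurity is stacked on top of it for the Bad Guy to peel off.

```python
# Hypothetical sketch: "hiding" a flaw instead of removing it.
# The dangerous reset function is still in the code base; it is merely
# gated behind an undocumented magic token -- extra attack surface
# layered on top of the original exploit.

UNDOCUMENTED_TOKEN = "s3cr3t-debug"    # obscurity, not security

def reset_all_user_passwords():
    # the original dangerous behavior, left in place rather than removed
    ...

def handle_request(params: dict):
    if params.get("debug_token") == UNDOCUMENTED_TOKEN:
        reset_all_user_passwords()     # two-fer: find the token, own the app
```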

Pitfall #5 – Assuming a single mode, or a single technology

Since the emergence of the web, this can never, ever (I repeat for the hearing impaired: E V E R) be assumed. This mindset is probably the worst of the bunch. It leads to many of the worst security breaches. The days of a single way in/out, a single interface, a single… mode are gone. The days of a single technology (code, OS, network) model are equally gone. For example, I have seen many mobile apps that don’t take into consideration that their backend may pull the inputted data back out in a web app. Can anyone spell XSS, CSRF, or any of those?
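To make that mobile example concrete, here is a hypothetical sketch (my own names, not from any real app) of the multi-modal path: input accepted from a mobile client is later rendered by a web app, and unless it is escaped at that second interface, a stored XSS rides right along.

```python
# Hypothetical sketch of the multi-modal pitfall: data accepted from a
# mobile client is later rendered by a web app. If the web side trusts
# it as "already validated by the mobile app," stored XSS rides along.

import html

comments = []  # stands in for the shared backend store

def mobile_api_post_comment(text: str):
    comments.append(text)              # the mobile client "validated" it... maybe

def web_render_comments() -> str:
    # escaping at the point of output is what actually stops the XSS
    return "".join(f"<p>{html.escape(c)}</p>" for c in comments)

mobile_api_post_comment("<script>steal(document.cookie)</script>")
print(web_render_comments())           # rendered inert, not executed
```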

Pitfall #6 – I control the deployment environment

When I do development now, I keep a mental image of the deployment environment as the “wild wild west”. It’s a good image to keep. It reminds me that no matter what I am told, or how “controlled” the deployment or target environment is supposed to be, the reality is I don’t control it. Furthermore, once an attacker gets an ancillary foothold in that environment, all bets are off. Never assume the user, the environment, or anything else is fixed or controlled. Users non-maliciously do stupid things. Installers unknowingly break or misconfigure things. Admins and operations ignore minor warnings and errors. It’s the wild, wild west.
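One way that mindset shows up in code is sketched below (a hypothetical example of mine, not a prescription): treat every value the deployment environment hands you as untrusted input and validate it, rather than assuming an admin or installer set it correctly.

```python
# Hypothetical sketch: treat the deployment environment as untrusted.
# Validate what it hands you instead of assuming it was set correctly.

import os

ALLOWED_UPLOAD_ROOTS = ("/srv/uploads", "/var/data/uploads")

def upload_root() -> str:
    path = os.environ.get("UPLOAD_ROOT", ALLOWED_UPLOAD_ROOTS[0])
    real = os.path.realpath(path)
    # refuse surprise values ("/", a stray mount, a typo from an installer...)
    if not any(real == root or real.startswith(root + os.sep)
               for root in ALLOWED_UPLOAD_ROOTS):
        raise RuntimeError(f"Refusing unexpected UPLOAD_ROOT: {real!r}")
    return real
```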

For more information on this series, or to make a request for the author to speak, please contact him at tektengu@teksecgrp.com
