Joy Clark: Hello everyone, and welcome to the CaSE Podcast. This is Joy Clark. Today on the CaSE Podcast I have a guest, Peter Chestna, who's here to talk to us about security. Welcome to the show!
Peter Chestna: Thanks, Joy. It's a pleasure to be here.
Joy Clark: Can you briefly introduce yourself?
Peter Chestna: Sure. I've been a software developer for over 25 years; I've been at Veracode (a CA company) since 2006. I go out and speak a lot about application security to both a security audience and a developer audience.
Joy Clark: Our podcast is called the CaSE Podcast, which stands for Conversations About Software Engineering, so for this podcast what I'd like to focus on is how developers can help make applications more secure. Maybe you can talk something about that...?
Peter Chestna: Sure, absolutely. The first step, and the one thing I talk about more than anything else, has nothing to do with technology, it doesn't have to do with process; it has to do with accountability. Unless we as software engineers are given security as a goal, we never will take care of it. If your only goal is "I ship software fast", then that's the only goal you're going to be concerned with. Higher-level management needs to say "We are going to take accountability. We will be measured and reported on how we do", and that will start to change those behaviors. So you need to start there. If you don't, then you're just pushing the rock uphill.
Joy Clark: The talk you were giving today was called "AppSec In The DevOps World." Maybe you can talk a little bit about the role that application security plays with DevOps.
Peter Chestna: Absolutely. The thing that is obvious is that developers weren't trained in application security. If they get any security training at all, it's probably around network security or something else, and maybe a small smattering about, as you said earlier today, buffer overflows. But the larger problem of the OWASP Top Ten - the kinds of concerns and the reasons that companies are breached today - isn't really taught, so the security team needs to be responsible for training the developers, helping them know what they don't know. That's where it starts - it needs to start with the training.
Joy Clark: As far as the OWASP Top Ten goes, could you briefly explain what that is?
Peter Chestna: Sure. OWASP stands for the Open Web Application Security Project, and the Top Ten is its list of the kinds of things that cause web applications to be breached - the most prevalent things that we see in the industry. If you look at that list over time, it hasn't changed a whole lot. There are some very simple things that ten years ago we thought would be solved problems by now, like SQL injection, where developers are being lazy, concatenating strings to create their SQL queries, so they are susceptible to SQL injection... Until they get to the point where they take the proper preventative measure of using prepared statements, this problem continues.
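The contrast between string concatenation and prepared statements can be shown in a few lines. The transcript names no language, so this is a minimal Python sketch using the standard library's sqlite3 module; the table and data are made up for illustration, but every major database driver offers the same placeholder mechanism:

```python
import sqlite3

# In-memory database with one table, purely for demonstration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0), ('bob', 1)")

user_input = "alice' OR '1'='1"  # a classic injection payload

# Vulnerable: concatenating strings to build the query. The payload's
# quote closes the string literal and the OR clause matches every row.
vulnerable = conn.execute(
    "SELECT name FROM users WHERE name = '" + user_input + "'"
).fetchall()

# Safe: a prepared (parameterized) statement. The driver treats the
# input strictly as data, never as SQL, so no row matches the payload.
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (user_input,)
).fetchall()

print(vulnerable)  # every user leaks out
print(safe)        # nothing matches the literal string
```

The fix costs nothing at runtime; the placeholder version is also easier for the database to cache and reuse.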
Peter Chestna: If you look at that OWASP Top Ten - SQL injection, cross-site scripting, use of insecure components. One of the things I brought up in my talk was the Equifax breach, which was caused by using an insecure third-party component.
Joy Clark: We talked about the SQL injection, cross-site scripting... What is cross-site scripting?
Peter Chestna: It's displaying malicious content: it could cause scripts to be run, or it could redirect you to a different site without your knowledge. If you take input from a user, you need to sanitize it and then present it back to the user in a way that can't be abused. If you're displaying HTML and I can inject an alert, or a redirect, or a link to another website that causes my content to be loaded, then you fall victim to whatever it is that I did. So you need to be cautious about what you take in and what you display.
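The sanitize-before-display step Peter describes is usually just output encoding. A minimal Python sketch using the standard library's html.escape (the attacker's "name" here is an invented example; in a real application your template engine would normally do this for you):

```python
import html

# Untrusted input: an attacker-supplied value containing a script tag.
user_input = '<script>alert("stolen cookie")</script>'

# Unsafe: interpolating raw input into the page lets the script run
# in every visitor's browser.
unsafe_page = "<p>Hello, " + user_input + "</p>"

# Safe: escape the input so the browser renders it as inert text.
safe_page = "<p>Hello, " + html.escape(user_input) + "</p>"
print(safe_page)
```

Escaping at the point of output is what lets you still accept and store arbitrary user text - the data is untouched, only its HTML rendering is neutralized.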
Joy Clark: We'll definitely link to the OWASP Top Ten in the show notes. Are there any other vulnerabilities on that list that are worth mentioning? Probably all of them, right?
Peter Chestna: True. The biggest one for me is the insecure components. This is a problem that we're mostly blind to. As a software engineer you find the component that does what you need, you insert it into your code and then you forget about it forever. Unless you have to incorporate some new functionality, you never really look at it again... So it's not that third-party code is less secure than first-party code - it's just never looked at again. You are constantly going over your first-party code, adding new features and functionality; as a by-product, you're refreshing that code as you go. Meanwhile, the open source vulnerability that caused the Equifax breach could have been in the code for as much as nine years.
Peter Chestna: If nine years ago I incorporated Struts into my application and I haven't upgraded until now, now that it's a security concern, I have to go through those nine years of upgrades, or rewrite the component, or something, to get up to the secure version. As developers, we can't get lazy with the technical debt that we introduce into the application; we at least need to make the conscious decision whether or not we're going to spend time on the upgrades as we go through the software lifecycle.
Joy Clark: So that's how we can protect against using insecure components, by upgrading regularly?
Peter Chestna: Well, it won't necessarily prevent it, but it makes it easier to get to the newer version. Maintainers patch forward; they won't go back nine years and patch all of the previous versions. You can't find a nine-year-old patched version of that component - you have to jump to the latest one. Paying the cost incrementally, moving your codebase to new versions as they come out, lets you adopt newer components quickly, versus facing a multi-month or multi-year effort to do the upgrade all at once.
Joy Clark: Are there any other items on the top ten list that you'd like to talk about?
Peter Chestna: I'd rather start small, because you can become overwhelmed with the things that you need to worry about. There are tools out there; certainly, as a vendor, we produce one, but there are lots of tools in the industry that allow you to do what we call 'shift left', which is not a new term in the industry... It's the idea that you want to inspect for quality as early as possible, so when you're writing your code, that's the time to test it for security concerns. You can find issues then and fix them easily. Otherwise, if you're on an agile project releasing once a month, you may have written something on day one that you discover has a security problem on day 25; what are all of the software changes you're now going to need to make as a result of fixing that one thing? That cost has escalated.
Peter Chestna: That cost curve around fixing a bug is no different in security, so trying to find it as early as possible and take care of it... And really, you should be driven by the threat concerns and the security posture that your company is asking you to do, and not necessarily OWASP Top Ten, although that's one that you wanna think about. It might not be the only one, you might have to do PCI if you are responsible for dealing with payment cards, or in healthcare, dealing with PII. Those concerns really should be driven by the company that you're with and the kind of application that you write.
Joy Clark: You did a talk about application security, and one thing that's not always clear to me is what exactly is included in application security? When I think of application security, I think of web security, but it seems like there would be more things that are included in application security. Can you maybe try to define it? It seems like a very broad umbrella.
Peter Chestna: It is. So you think about the application that you deploy - well, that deployment sits on top of some kind of an application container, be that Tomcat or JBoss or a .NET framework or what have you... So that itself is other software that could be vulnerable. You then sit on top of a machine that is networked to other machines even if that's in a container, so you have to think about networking security and how can you protect the application or the database that it's connected to from becoming vulnerable by someone getting access and getting onto that machine.
Peter Chestna: Take that Struts vulnerability - when it's exploited, the attacker runs in the context of the account running the application container, which usually has a lot of power. Once you get there, you can hop around the machine, you can open up files, you can do lots of malicious things. So the problem expands from the first-party application you deployed to all of those things: to the perimeter, to the firewalls... If someone can get onto those machines, that's a bad thing. Some of this can be found through static analysis, or by looking at those open source components through software composition analysis - those are inside-out technologies. Dynamic analysis is outside-in: I look at the website as it's running, and I might find configuration issues. One of the things that can happen with these containers, if you don't configure them properly, is that they leak information. If I can tell through security headers or returned HTML or what have you that you're running on Tomcat such-and-such, then I might know that there are vulnerabilities against that version.
Peter Chestna: Those are things you need to look for to say "I can't give anyone any information, because once they have that information, they can use it against me." It does grow quite a bit, but the focus for a development team should be more on the application itself and protecting from some easy stuff; SQL injection is very easy to protect against, but can be very harmful to your company.
Joy Clark: If you're running with an embedded server like Jetty, how does that differ as far as security goes from when you create -- I don't know exactly the name for all of it, but when you're running on Tomcat, or something?
Peter Chestna: The application itself can still be vulnerable; it doesn't matter what container you're running in, and every environment tends to be different, whether it's cloud or whether it's on-premise, whether it's a container that's running Tomcat or whether you're running on a physical machine that's running Tomcat, or a virtual machine - all of those things are becoming software. Even now, if you look at containers and VMs running in those virtualized environments, they are actually just -- even your network now becomes code that's running; you're not sharing a physical line, you are part of a virtual line. It's all those layers of software that need to be secured and thought about, so it becomes a very hard problem to solve. For development teams, the focus should be on their first-party stuff and the libraries that they bring into it.
Joy Clark: What is your feeling about everyone putting everything in Docker, and just shoving stuff around everywhere? I compile Docker images, I have no idea what I'm doing, and then I'm like "Okay, just run this Docker image somewhere..."
Peter Chestna: That's an interesting question. The questions around security become "Where did you get that image? What's inside of that image? Did you start with something that was clean?" In the case of security, you talk about having the lowest-level privileges that you need to do anything. So do you have the lowest level software on there that you need to do everything, or is there additional software that's come along for the ride that you didn't know about and didn't intend? Can you find the most stripped down version of a container that you need, and then only add in those components that you need?
Peter Chestna: The problem with this is that we as developers are now taking over realms that used to be the realm of IT or systems engineering, or security. Security would typically provide a hardening document for an operating system; if I was going to run Windows 10, the security team would provide a document that says "Turn off this service, uninstall these components, configure it this way", and then based on that you would have a hardened image that you would use to install your applications. Now that we're using containers, developers are just choosing - sometimes without thinking - what container they're gonna run in; who knows how it's configured, who knows if it's secure or not? And then we deploy it and now we've deployed all of those vulnerabilities.
Peter Chestna: Those also have open source in them, so if you think about writing an application and it's running in production and now "I'm done, I've moved on to something else" - if there's a vulnerability in that component, do you even know that you shipped that component? If you did know, how do you patch it? You're essentially rebuilding your application without changing your application, but you're changing the environment it's running in and redeploying that container. So there's a lot of things that you have to think about as we get more into the infrastructure as code and building on containers and these kinds of things...
Peter Chestna: This is what I call a full-spectrum engineer, where it's not important to me that you know every language that there is. Full-stack engineers are kind of the thing of the present and a little of the past; what we're now moving to is multi-discipline engineer, where you need to understand operations, you need to understand security, you need to understand systems and engineering because you're picking out these containers, all of those choices that you need to make now fall across disciplines that you weren't trained in and weren't usually your responsibility, so how do you account for that?
Joy Clark: I don't know... [laughs]
Peter Chestna: The question I often get asked is, you know, this waterfront is now so large, because you're also dealing with quality now, right? Because you're writing unit tests, you're writing integration tests and those kinds of things. That wasn't the realm of the developer 20 years ago. So as we grow these things, how do we train on them? How do we make sure that we know the things that are most important? That's really where the security team and the IT and systems engineering team need to help us and train us.
Peter Chestna: Then the question becomes "How much time do you have to spend on those?", because I couldn't be exhaustive and learn everything about security. I can learn so much that can be useful, but then the next part might be I fall down in containers or I fall down in operations.
Joy Clark: Yes, I find that to be one of the things that are a bit frustrating to me, because I find security fascinating and I always want to learn more, but then I don't have the time because I program all the time. I don't have the time to spend looking at what's the new next thing; vulnerabilities are being discovered all the time. When I use a Docker image, how do I even know if the Docker image is vulnerable or not? I don't know...
Peter Chestna: Eventually, the industry will catch up with these things. Right now you need to leverage the teams that typically do this stuff in your companies, to have them help you understand what choices you're making and what the consequences of those are.
Peter Chestna: This is really about creating that bill of materials. It's like buying a candy bar that has a list of ingredients: if you ship an application, whether that be on its own into a production environment or in a container into an environment, what is on that machine? What is in that image? If you don't have that list, then when you find out there's a vulnerability, how do you even know if it's something you should be concerned with?
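The ingredient-list idea can be made concrete with a tiny sketch: parse a dependency manifest into a bill of materials, then check it against a known-vulnerable set. Everything here is invented for illustration - the manifest format, the component names, and the vulnerable versions; real tools match against advisory databases like the NVD:

```python
# Hypothetical known-bad (name, version) pairs; a real tool would pull
# these from a vulnerability advisory feed.
KNOWN_VULNERABLE = {
    ("struts2-core", "2.3.31"),
    ("commons-collections", "3.2.1"),
}

# A made-up pinned manifest in 'name==version' form.
manifest = """\
struts2-core==2.3.31
commons-io==2.6
commons-collections==3.2.1
"""

def bill_of_materials(text):
    """Parse 'name==version' lines into (name, version) pairs."""
    pairs = []
    for line in text.splitlines():
        line = line.strip()
        if line and "==" in line:
            name, version = line.split("==", 1)
            pairs.append((name, version))
    return pairs

bom = bill_of_materials(manifest)
flagged = [dep for dep in bom if dep in KNOWN_VULNERABLE]
print(flagged)
```

The point is not the matching logic, which is trivial, but that the matching is impossible unless the manifest exists at all - which is exactly the gap Peter describes.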
Joy Clark: We talked at some point in time about static analysis. Could you tell us what that is?
Peter Chestna: Sure, absolutely. Static analysis is an inside-out kind of technology... Not unlike running lint or running unit tests and looking at code coverage - these are things that can be run while the code isn't running. They can look at either source code, which a lot of companies do, or in our case, we use binary static analysis, so we take the deployed artifact. If that's a .war file or an .ear file or some kind of executable, we'll take that and look at it. We'll look at that from the inside out, take a look at the components that are shipped with it, any known vulnerabilities with those; we'll find vulnerabilities inside of your code by looking at where data comes in and where it's utilized.
Peter Chestna: We look at what we call 'taint to sink'. If there's an input that comes in from a web page, or you read from a document, or read from a database, those inputs aren't necessarily trusted, and we need to figure out where that data came from, who can manipulate it and what effect it has later on down the line. Such that when you, say, run an exec in your operating system, what command are you running? Is it something that is a hardcoded part of your application that knows exactly what it's doing, or was it based on some user input that now maybe makes you vulnerable? Does that make sense?
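The taint-to-sink idea can be sketched at runtime with a marker type: data from an untrusted source carries a "tainted" flag, and a sensitive sink refuses it until it has been sanitized. This is an invented miniature, not how a static analyzer works internally (analyzers track the flow without running the code), but the source/sink/sanitizer roles are the same:

```python
import shlex

class Tainted(str):
    """Marker type for data that came from an untrusted source."""

def read_request_param():
    # Stand-in for reading a parameter from a web request.
    return Tainted("report.txt; rm -rf /")

def build_command(arg):
    """The 'sink': builds a shell command from its argument."""
    if isinstance(arg, Tainted):
        # The check an analyzer performs: untrusted data reached an
        # exec-style sink without passing through a sanitizer.
        raise ValueError("tainted input reached an exec sink")
    return "cat " + arg

user_value = read_request_param()

# Untrusted data flowing straight to the sink is flagged.
try:
    build_command(user_value)
except ValueError as err:
    print(err)

# Quoting the value sanitizes it; the result is an ordinary string,
# so the sink accepts it and the ';' can no longer split the command.
sanitized = str(shlex.quote(user_value))
print(build_command(sanitized))
```

Hardcoded commands never trip the check because they never carry the taint marker - which is exactly the distinction Peter draws between "a hardcoded part of your application" and "based on some user input".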
Joy Clark: Yes. So can we do that with Docker files, too?
Peter Chestna: Some companies can. We're working towards being able to introspect on a Docker container. You could certainly take a look at that bill of materials. Like I said, there are libraries and other things inside of that that may or may not have known vulnerabilities. There are companies working towards that effort, and we are as well.
Joy Clark: In your talk today you were really focusing on DevOps and how DevOps and application security need to go together. How can we make that work? How can we integrate security so tightly into our development process?
Peter Chestna: That's a great question. Think about the definition of done in agile - most DevOps runs on top of agile, whether that be Kanban or Scrum, or Scrumban, or whatever flavor you happen to run... When you pick up a story to go and write it, there should be some definition of done that you follow, and the first step is understanding that security should be on that list.
Peter Chestna: Whatever tools you choose to buy or bring in as open source, those should be run prior to checking. The whole idea in DevOps is to fail fast, to inspect quality as soon as possible. If you can do that inside of your development cycle prior to getting it into the source codebase, then your CI/CD pipelines should always be green. They should flow right to production, or right to whatever the endpoint is to that process. That process should always work.
Peter Chestna: Those are what we would call assurance scans, even though you should run unit tests before your check-in; you run them afterwards as well. You always run them in your CI process because you need to validate that those were actually run, and how many times does a developer have used "Oh, this code change won't hurt anybody, I'm just gonna check it in; I don't have time to run the unit tests." Then you find out that you actually broke something. It's those kinds of things that we need to train out by having the things in the pipeline as an assurance, but what happens prior to check-in is the most important part, and building that discipline into the team to take the accountability for the outcomes, provide the training and tools that they need to be able to run it in a quick way, such that it doesn't get in their way.
Peter Chestna: That's really the key - the tools only in the last two years have been fast enough to be able to run in your environment such that you can do those checks prior to check-in. So this whole idea of shifting security left is the result of those improvements.
Joy Clark: When you say it like that, it sounds like DevOps does not only include continuous integration and operations; it sounds like it includes something else...
Peter Chestna: Yes. A DevOps team - and again, I hear a lot of different flavors of this - needs to have everything it needs to build and ship its product; so if security is part of that, then you need some security responsibility on the team. That typically takes the form of something like a Security Champions program, where some members of your team know a little bit more about security... Because again, not everyone can know everything, so you need to train up certain people. Just as you might have people more skilled in quality, you'll probably also have people more skilled in operations, or monitoring, or logging, or what have you.
Peter Chestna: You want to do the same thing with security and build that as a muscle inside of the team, such that you don't need outside-in influence, you don't need people to come in and tell you what to do; you as a team figure out how to do it well... And there will be failures along the way. Failure is okay in DevOps. You just want to find the quickest place that you could have fixed that mistake, build in a new test and then go forward.
Joy Clark: How is working together with the security people? Does that sometimes -- I guess you'd have to really make sure you're communicating well. Do the security people probably want the developers to know something about security? Or do they think that they're the only ones who can do security?
Peter Chestna: It depends on the team. Some security people think developers are idiots. You and I know that's not true. It's more that we don't know what we don't know; we were never trained in this. Developers are eager learners, and they take a lot of pride in their work, so when you give them the responsibility and the tools to measure, they end up with better outcomes. The security team needs to work with us, not against us. Security can't be done at the end, the way it's typically done with pentesting and so on.
Peter Chestna: If you're releasing multiple times a day, it really needs to be baked into the way you write software, the way you test software and the way you release software, and it can't have people in the middle of it. But it's all about building the right culture.
Joy Clark: Great. Do you have any other strategies we can use for integrating application security into DevOps?
Peter Chestna: Again, I don't want to overwhelm people. It's really about driving that accountability model, it's about buying the right tools, about taking accountability for your own work prior to check-in. If you can get to that point... Usually, on a maturity scale you look at "I don't know anything about the security of my application" to "I've scanned it once" to "I continue to scan it", in which case now I'm being reactive - so I check things and I fix things and I check things and I fix things, to the next stage, which is where you can get to "I'm now measuring for new things. I know that we're fixing things, but I want to make sure that I'm not introducing new stuff onto the pile, so I'm not creating new technical debt as a result of my check-in."
Peter Chestna: Once you start doing those things prior to your check-in, you're now at that fourth level of maturity where I'm being proactive; I'm making those tests and running those tests before I check-in so I don't increase the security debt inside of the application.
Joy Clark: Okay. My last final wrap-up question would be if you could give any resources that we could use to improve our knowledge of security?
Peter Chestna: The OWASP website is a great one to start with, certainly the Veracode website has a lot of detailed information on it as well. If you look at the PCI Standards, for instance, it's another great place. The SANS Top 25 is another list of known vulnerability types... So I would start with that area, and then start looking at how you can incorporate some tooling into your work stream.
Joy Clark: Okay. Well, thank you so much for taking the time to let me ask you a whole bunch of questions.
Peter Chestna: No problem, it was a pleasure. Thanks, Joy.
Joy Clark: You're welcome. To all our listeners, until next time.