CSSLP Tutorial: Module 03, Part 03 – Risks and Controls

Risks and Controls

This section builds on the design aspects covered in previous sections and discusses the risks associated with the design process, along with the controls we should consider. Risks inherent in designing software or systems include code reuse and the age-old question of open versus closed source. When designing, we also want to evaluate controls for just enough security, cost-benefit analysis, and psychological acceptability on the part of the users of the software we’ve created. We want to be careful not to impose so much security that our users become frustrated.

When designing software or systems, there are risks inherent in the design itself. One of those risks is code reuse. Consider that when developing a new operating system, millions of lines of code go into making that system usable; Windows Vista, for example, has over 60 million lines of code. If we tried to write everything from scratch every single time, it would become too cumbersome, too complex, and too time-consuming. So it’s a much easier process to simply copy and paste lines of code and take advantage of what has already been written and is already out there.

 

However, there can be flaws or bugs in reused code, depending on how well it was written and whether it really fits our particular environment. Many people use the terms flaw and bug to mean much the same thing, and colloquially they are interchangeable; however, there are subtle differences worth stating here. A flaw is something inherently flawed or vulnerable in the code itself. A bug, on the other hand, has more to do with how the code is implemented. Any time you implement something in a vulnerable environment, say by putting a service out in the organization’s DMZ (demilitarized zone) with no other protective mechanisms, we look at that as a bug: something that has been improperly implemented.
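To make that distinction concrete, here is a minimal, hypothetical Python sketch (the function and table names are invented for illustration). The first helper carries a flaw: building SQL by string concatenation makes it injectable no matter where it runs. The second removes the flaw with a parameterized query; deploying even that fixed version on an unprotected host would instead be an implementation problem, a bug in the sense used above.

    import sqlite3

    def find_user_flawed(conn, username):
        # Flaw: the vulnerability is inherent in the code itself.
        # Concatenating untrusted input into SQL permits injection
        # regardless of the environment it is deployed into.
        query = "SELECT id, name FROM users WHERE name = '" + username + "'"
        return conn.execute(query).fetchall()

    def find_user_fixed(conn, username):
        # Parameterized query: the driver handles quoting, so the
        # inherent flaw is gone; any remaining exposure would come
        # from how and where the code is deployed (a bug, per above).
        return conn.execute(
            "SELECT id, name FROM users WHERE name = ?", (username,)
        ).fetchall()

Copying the flawed helper into a new project is exactly the code-reuse risk described above: the vulnerability travels with the code.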

Other considerations for our design include the question of whether to use an open or a closed source model for our design process. Each type has advantages as well as costs, and proponents of each argue their philosophy; however, it doesn’t really matter which is chosen, as long as the code is well written and remains relatively clear of vulnerabilities. The benefit of open source is that code out in the open is available for peer review, so others can modify it, make suggestions, and offer different lines of code to help improve the security of the overall final product. It’s better to have more eyes on a single project to check for efficiency and to see whether it actually does what it’s supposed to do.

However, those in the closed source community see that openness as a way for other users to end up breaking your code and weakening your design. Consider OpenSSL: it was open and was assumed to have been peer reviewed, so lots of people copied, pasted, and used it, but it didn’t really do the job it was supposed to. Because it was poorly written to begin with and then poorly reviewed, it ended up being severely flawed (the Heartbleed vulnerability is the best-known example). Just because something has been written as open source doesn’t guarantee that it will become more secure and stronger as a piece of code. The closed source argument of “if you can’t see it, you can’t break it” isn’t entirely true either. It’s like disabling your Wi-Fi SSID broadcast: just because you can’t see my network by its name doesn’t automatically mean you can’t attack my system or use my network.

There’s a lot more to attacking a system than what appears on the surface. So regardless of whether the code is open or closed source, what matters is how well written it is and how well it fits the environment for which it was written. As a side note, those in government tend to work with closed source, especially for encryption algorithms. To crack cryptography, you’d need to know the algorithm and the key; why would I hand you one of those pieces of information and give you that much faster access to my system? That is what a lot of folks in government opt to do. The cryptographic community, for the most part, believes in what is referred to as Kerckhoffs’s principle: you have an algorithm and a key, and you let one of them be open, because you don’t have to protect them both. Usually the algorithm is publicly known and the key is kept secret. Most people in security design prefer open design: it’s easier for code and peer review, and it tends to be an easier way to check the strength of the code. Again, it doesn’t matter which type of source is used; what really matters is how well it has been written.
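As a concrete sketch of Kerckhoffs’s principle (this example is mine, not from the tutorial), the Python snippet below uses the third-party cryptography package. The Fernet scheme it implements is completely public, yet the message stays protected so long as only the key is kept secret.

    from cryptography.fernet import Fernet

    # The algorithm (AES-CBC plus an HMAC, wrapped as Fernet) is public
    # knowledge. Per Kerckhoffs's principle, security rests entirely on
    # keeping the key secret, not on hiding how the cipher works.
    key = Fernet.generate_key()        # the only secret in the system
    cipher = Fernet(key)

    token = cipher.encrypt(b"design documents")
    print(token)                       # safe to expose without the key
    print(cipher.decrypt(token))       # only the key holder recovers this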

Speaking of well-written security code, we also want to discuss the controls that are in place and how to evaluate them for efficiency. We essentially want just enough controls and no more; this was one of the basic tenets of security discussed in a previous section. Not only are we looking at efficiency, but we also want to use cost-benefit analysis, which helps in deciding how much security to put in place, of what kinds, and to what extent. We also want to consider psychological acceptability. When I go to a military base for a training workshop and have to wait longer than 30 minutes at the gate, I get frustrated at the level of security, because I’ve come to the base to do a job and getting in is taking longer than expected, perhaps longer than the job itself. We have to consider the same psychology in our users. If we make them go through 17 or 18 screens before they can reach the location they are looking for, they will leave and go somewhere else, or they will become so frustrated that they bypass security altogether. If the security is too cumbersome, the users will find shortcuts.
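One common way to frame that cost-benefit decision (not spelled out in this tutorial, so treat the method and the figures as illustrative assumptions) is the standard annualized loss expectancy formula, ALE = SLE x ARO. The small Python sketch below compares a control’s annual cost against the risk it removes.

    # Cost-benefit sketch for a proposed control (hypothetical numbers).
    # ALE (annualized loss expectancy) = SLE (single loss expectancy)
    # * ARO (annualized rate of occurrence). A control earns its place
    # when the reduction in ALE exceeds the control's annual cost.

    sle = 50_000           # expected loss per incident, in dollars
    aro_before = 0.4       # incidents per year without the control
    aro_after = 0.1        # incidents per year with the control
    control_cost = 8_000   # annual cost of the control

    ale_before = sle * aro_before            # 20,000
    ale_after = sle * aro_after              # 5,000
    annual_benefit = ale_before - ale_after  # 15,000

    # Positive means the control pays for itself ("just enough" security);
    # negative means it costs more than the risk it removes.
    print(annual_benefit - control_cost)     # 7,000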

There was a training session where the morning was spent on the miracle of the right-click of the mouse. This was a group of people who were not especially technologically savvy; they were fairly new to computing systems and not hackers in any sense. Yet by lunch, one person who had genuinely struggled with when to use the right mouse button as opposed to the left button or the keyboard was nonetheless able to navigate around their institution’s proxy server and trick their organization’s security into letting them reach a website that was being blocked. Once things become too cumbersome for our users, they will use shortcuts. We want to make sure the controls and the security we put into place have that psychological acceptability.

Other considerations in the design process involve three further aspects that we have discussed in previous sections but will not detail here. The first is the CIA triad, which stands for confidentiality, integrity, and availability. The second is triple A, which stands for authentication, authorization, and accountability. The third is the set of general security tenets such as just enough security, the principle of least privilege, and so on.