This lesson covers secure software design. We are going to look at the secure technologies that are available and at the elements of software that can be compromised. First there is authentication and identity management. The authentication process is particularly important across country, state, and local boundaries, and it ties closely into federated trusts and services. Authentication gives us a way to support identification, and there are a number of ways to accomplish it. Identity and access management looks at the services, policies, and procedures we use to manage digital identity. Security controls help us manage that digital identity, especially where legal compliance is concerned. After security control measures comes a discussion of implementing secure mechanisms to control the flow of traffic for the applications you are working with. The concepts of virtualization and the TCB (trusted computing base) both help improve security as well. Associated with the TCB are security models that can add to our understanding of previously learned models like Biba and Clark-Wilson. Other things to think about are privilege management and the principle of least privilege.
So, the first idea associated with secure software design is to discuss the elements involved in compromising software. One of the first elements is authentication, followed by identity management. Authentication is especially important across international boundaries: someone can claim to be a certain person, and authentication supports (or refutes) that claim. Authentication can happen through passwords, biometrics, smartcards, and so on. Whenever we talk about identity management, we are looking at all the services, policies, and procedures that help us establish, maintain, and authenticate digital identity. Identity management is concerned with how an account gets created, how it is managed, and how its various elements are controlled, monitored, and updated.
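To make the password factor concrete, here is a minimal sketch of how an application might verify a knowledge-based credential without ever storing the plaintext password. The function names are illustrative, not from any particular framework:

```python
import hashlib
import hmac
import os

def hash_password(password: str, salt: bytes | None = None) -> tuple[bytes, bytes]:
    """Derive a salted hash so the plaintext password is never stored."""
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, digest

def verify_password(password: str, salt: bytes, stored: bytes) -> bool:
    """Re-derive the hash and compare in constant time to resist timing attacks."""
    _, candidate = hash_password(password, salt)
    return hmac.compare_digest(candidate, stored)

# Enrollment: store only (salt, digest), never the password itself.
salt, digest = hash_password("correct horse battery staple")
assert verify_password("correct horse battery staple", salt, digest)
assert not verify_password("wrong guess", salt, digest)
```

The same pattern, establishing identity by proving something rather than storing it, carries over to the other factors (biometrics, smartcards) discussed below.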
In addition to our own safety, security, and control, there is also the Sarbanes-Oxley Act (SOX), which is very accountability oriented and imposes auditing processes; verifying information is essential to maintaining legal compliance with this act. When we talked about cross-site scripting and CSRF attacks in the previous section, we noted that cross-site request forgery takes advantage of a pre-established session, and that the session information might be stored as a cookie. So any information such as credentials, passwords, and session identifiers needs to be protected. We want to make sure privilege escalation doesn't happen, and we want to maintain good practices that prevent unauthorized access. There are also multifactor authentication processes that can take place, where multiple methods are used to authenticate for one session. For example, there may be the use of biometrics and certificates, or certificates and smartcards. There are many possible combinations; what makes authentication multifactor is that more than one method of authentication is required.
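One common defense against the CSRF scenario described above is a per-session anti-forgery token. Here is a minimal sketch, assuming a hypothetical in-memory session store; the names are illustrative:

```python
import hmac
import secrets

sessions: dict[str, str] = {}  # session_id -> csrf_token (hypothetical store)

def start_session() -> tuple[str, str]:
    """Issue a session ID plus a random anti-forgery token tied to it."""
    session_id = secrets.token_urlsafe(32)
    csrf_token = secrets.token_urlsafe(32)
    sessions[session_id] = csrf_token
    return session_id, csrf_token  # token is embedded in the page, not the cookie

def is_valid_request(session_id: str, submitted_token: str) -> bool:
    """Reject state-changing requests whose token doesn't match the session's."""
    expected = sessions.get(session_id)
    return expected is not None and hmac.compare_digest(expected, submitted_token)
```

Because a forged cross-site request carries the victim's cookie but cannot read the token embedded in the legitimate page, the check fails and the request is rejected.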
The idea of single sign-on, where there is just one set of credentials, is quickly becoming outdated. If a legitimate user can access every application and program with just one sign-on, then all an attacker needs is one password, and they can access everything in the domain. Instead, there is a trend toward what we might call a "super sign-on": not just a single sign-on, but authentication drawing on multiple sources, such as Facebook, Twitter, and the normal credentials, to let the user in. The idea is to forward authentication information across federated trusts, which expands the concepts of identity and access management as well as the role of credential management.
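As a simplified sketch of how a federated trust can work (glossing over real protocols such as SAML or OpenID Connect), a relying application can accept a signed assertion from an identity provider it trusts instead of holding passwords itself. The shared key, claims, and helper names here are all hypothetical:

```python
import base64
import hashlib
import hmac
import json

IDP_SHARED_KEY = b"demo-key-shared-with-identity-provider"  # hypothetical trust anchor

def idp_issue_token(username: str) -> str:
    """The identity provider signs an assertion about who the user is."""
    claims = json.dumps({"sub": username, "idp": "example-idp"}).encode()
    sig = hmac.new(IDP_SHARED_KEY, claims, hashlib.sha256).digest()
    return (base64.urlsafe_b64encode(claims).decode()
            + "." + base64.urlsafe_b64encode(sig).decode())

def relying_party_verify(token: str) -> dict | None:
    """The application trusts the IdP's signature rather than checking a password."""
    claims_b64, sig_b64 = token.split(".")
    claims = base64.urlsafe_b64decode(claims_b64)
    expected = hmac.new(IDP_SHARED_KEY, claims, hashlib.sha256).digest()
    if hmac.compare_digest(expected, base64.urlsafe_b64decode(sig_b64)):
        return json.loads(claims)
    return None

token = idp_issue_token("alice")
print(relying_party_verify(token))  # {'sub': 'alice', 'idp': 'example-idp'}
```

The key point is that the authentication decision is forwarded: the relying party verifies the provider's signature, not the user's credentials.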
In addition to authentication and digital identity or access management, there are other security mechanisms that can control the flow of traffic. Whether requests run through a proxy server and firewall or through "middleware," inspection of traffic occurs. Middleware sits between the interface that receives a request from an untrusted entity and the actual trusted back-end resources, inspecting the request along the way; the middleware is, literally, in the middle. In addition to inspection, logging is just as essential for tracking activity. Logging can be very resource intensive, but it provides enormous amounts of information, especially as it relates to security. Data Loss Prevention (DLP) systems are on the lookout for data loss, such as losing clients' credit card numbers or personally identifiable and financial information. Such data was being siphoned away from networks, and DLP systems were developed in response. A DLP mechanism is designed to look specifically for sensitive types of information and to determine whether they are staying where they belong or being siphoned off. The DLP should be able to detect and track that movement, helping reduce the loss of sensitive information.
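A core piece of any DLP system is pattern matching against outbound traffic. Here is a toy sketch of detecting likely credit card numbers, using the Luhn checksum to weed out random digit strings; real DLP products are far more sophisticated, and all names here are illustrative:

```python
import re

CARD_PATTERN = re.compile(r"\b(?:\d[ -]?){13,16}\b")  # rough candidate match

def luhn_ok(number: str) -> bool:
    """Luhn checksum: filters out digit strings that aren't valid card numbers."""
    digits = [int(d) for d in reversed(number)]
    total = sum(digits[0::2]) + sum(sum(divmod(d * 2, 10)) for d in digits[1::2])
    return total % 10 == 0

def scan_outbound(payload: str) -> list[str]:
    """Flag likely card numbers in traffic leaving the network."""
    hits = []
    for match in CARD_PATTERN.finditer(payload):
        digits = re.sub(r"[ -]", "", match.group())
        if luhn_ok(digits):
            hits.append(digits)
    return hits

print(scan_outbound("order ref 4111 1111 1111 1111 shipped"))  # test number flagged
```

In practice the scanner would sit at the network egress point (often inside that same middleware layer) and alert or block rather than print.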
Virtualization has been around for a long time. The concept is to go from many physical servers to a smaller number of physical machines, increasing the number of virtual servers running within each physical machine. Historically, some applications conflicted with each other and so had to run on different physical machines. As application programming improved, more of the servers could go virtual and fewer physical machines were needed. This helps reduce our attack surface because there are fewer physical attack points, and consolidating servers also lets us provide better service to our clients. Virtualization provides isolation for applications that would not normally work well together: applications that don't play nice together can each be isolated in their own virtual machine.
The idea of trusted computing is the understanding that there are elements on our system that should be beyond reproach. These are the elements we call the trusted computing base (TCB). When we think about our memory and our operating system kernel, those should be trusted and should absolutely enforce the security policy of the system. The two main elements of the TCB are the reference monitor and the security kernel. The reference monitor is the set of rules that govern how a subject can access an object; the security kernel is the actual software (or even hardware) mechanism that enforces the rules of the reference monitor. As another element of trusted computing, many systems are built with a chip already installed on board called the TPM (Trusted Platform Module), which stores an encryption key, for example the key for your hard drive.
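To see how a reference monitor mediates access, here is a toy sketch. The subjects, objects, and policy are made up, and a real reference monitor lives in the kernel rather than in application code:

```python
from enum import Enum

class Right(Enum):
    READ = "read"
    WRITE = "write"

# The reference monitor's rules: which subject may do what to which object.
POLICY: dict[tuple[str, str], set[Right]] = {
    ("alice", "payroll.db"): {Right.READ, Right.WRITE},
    ("bob", "payroll.db"): {Right.READ},
}

def access(subject: str, obj: str, right: Right) -> bool:
    """The 'security kernel': every access is checked; none bypasses the monitor."""
    allowed = right in POLICY.get((subject, obj), set())
    print(f"AUDIT: {subject} {right.value} {obj} -> {'ALLOW' if allowed else 'DENY'}")
    return allowed

access("bob", "payroll.db", Right.WRITE)    # DENY: bob has read-only rights
access("alice", "payroll.db", Right.WRITE)  # ALLOW
```

The separation mirrors the definitions above: POLICY plays the role of the reference monitor's rules, while the access() check plays the role of the security kernel that enforces them.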
Other issues that revolve around trusted computing have to do with the Secure State Model. This model says that if a system starts securely, provides all of its functions securely, and shuts down securely (even in the event of a failure), then it is a secure system. If the system doesn't behave securely in any of these states, then the system isn't secure at all.
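A small illustration of the fail-securely idea (a sketch of the principle, not a formal model): a resource guard that begins locked, operates only while explicitly unlocked, and returns to the locked state on any fault or shutdown.

```python
class SecureVault:
    """Illustrates secure state: locked at startup, during faults, and at shutdown."""

    def __init__(self) -> None:
        self.locked = True  # secure start-up state

    def unlock(self, authenticated: bool) -> None:
        if not authenticated:
            raise PermissionError("authentication required")
        self.locked = False

    def read_secret(self) -> str:
        if self.locked:
            raise PermissionError("vault is locked")  # default deny
        return "secret data"

    def fail(self) -> None:
        self.locked = True  # any fault returns us to the secure (locked) state

    def shutdown(self) -> None:
        self.locked = True  # secure shutdown state

vault = SecureVault()
vault.unlock(authenticated=True)
print(vault.read_secret())  # allowed only while securely unlocked
vault.fail()
# vault.read_secret() here would raise: the failure left the system secure.
```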
Another issue revolving around trusted computing has to do with compromise of the system BIOS during the startup process, before the security mechanisms have been loaded. This is one of the ways rootkits get installed on systems: by loading before the security does. When you have a rootkit, you typically have to wipe the system, reinstall the OS, and restore the data from backup, because of how deeply the rootkit embeds itself.
Other aspects of this whole discussion involve the principle of least privilege, where we reduce the opportunity for privilege escalation and maintain integrity, both in our own practices and in the database or application we are developing. We have already talked about validation, code injection, and polyinstantiation as some of the ways we guarantee the integrity of a database. So, as a wrap-up of this section, remind yourself of these secure software design concepts. The next element, in the next section associated with module 4, is creating secure code and working on actual coding based on the principles we've discussed up to this point.
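As a final pointer toward the coding work ahead, here is a minimal sketch combining input validation with a parameterized query, one of the standard defenses against the code injection mentioned above. It uses Python's built-in sqlite3 module; the table and data are made up for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def find_user(name: str):
    if not name.isalnum():  # validate before the input ever reaches SQL
        raise ValueError("invalid username")
    # Parameterized query: user input is bound as data, never spliced into SQL.
    return conn.execute("SELECT * FROM users WHERE name = ?", (name,)).fetchall()

print(find_user("alice"))           # [('alice', 'admin')]
try:
    find_user("x' OR '1'='1")       # classic injection payload
except ValueError as exc:
    print("blocked:", exc)
```

Validation rejects malformed input early, and parameter binding ensures that even input which slips past validation is treated as data rather than executable SQL.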