Monday, May 21, 2012

ActiveX Exploit

Samuel Warren
IS 469 - Information Security Capstone
Dan Morrill
City University
May 17, 2012

ActiveX Exploit

Executive Summary

The ActiveX buffer overflow exploit is a critical hole in the Microsoft Office suite. The worst part of this exploit is that once the buffer overflow is triggered, the victim computer's command controls are accessible and the attacker can do just about anything. However, thanks to Microsoft's diligent patching, a working fix is already in place, even though the flaw was only disclosed in early April 2012.

Introduction

Criminal hackers are always looking for a weakness to exploit. To that end, they poke and prod different systems to see how they work. These hackers become subject matter experts in various coding languages and taxonomies. Equal parts education and criminality combine into a vivid tapestry of what happens when information is used for the wrong purposes. In April 2012, a new exploit was published on exploit-db.com that uses the ActiveX framework to gain control of the victim computer via Microsoft Office 2003-2010, with the exception of 64-bit editions (TechCenter, 2012). The following analysis will show the potential, risks, and rewards of exploiting this loophole in the ActiveX framework.

What does the code do?

The code primarily causes a buffer overflow in Microsoft Office 2003-2010. It then attaches to the Windows common controls library, MSCOMCTL.OCX, via its ListView control (Exploits-db, 2012) and could be used to execute command-level invocations on the victim system.

Potential for a weapon

The potential to use this exploit as a weapon is high according to Microsoft's Security TechCenter, which rates it as "Critical" (TechCenter, 2012) in all affected Office products due to its potentially limitless uses. For example, if a user unintentionally activates this code, it could introduce a virus that harvests any and all personal information, exports it to a file, and sends that file off to the attacker.
Another major point is that the potential victim pool is so big and diverse that at least a few people will probably be affected, if not thousands. All told, roughly half a billion people use Office: "Yet for all the ways consumers use it at home, there are many more time-saving solutions to be found in the world's most ubiquitous desktop software" (Schultz, 2009). Because the code cannot self-initialize, the exploit being in the ActiveX framework, the user must be tricked, or socially coerced, into clicking the link and accepting the ActiveX prompts. Once the user does that, however, the potential is effectively limitless.

Risks and Rewards

The major risk in weaponizing this exploit is Microsoft's patch frequency. Microsoft releases a new patch to fix bugs, loopholes, and exploits as soon as it can identify a fix. That being the case, the exploit has a potentially short lifespan, depending on how quickly end users install the patches. However, should the victim be coerced into starting this overflow process, the attacker would have access to do just about anything to the victim's machine, including self-propagation of the code to the other 499,999,999 users.

References

Exploits-db. (2012, April). Exploit 18780. Retrieved from exploit-db: http://www.exploit-db.com/exploits/18780/
Schultz, M. (2009, January 8). Microsoft Office is right at home. Retrieved from Microsoft: http://www.microsoft.com/en-us/news/features/2009/jan09/01-08cesofficeqaschultz.aspx
TechCenter, M. S. (2012, April 10). Microsoft Security Bulletin MS12-027 - Critical. Retrieved from Security TechCenter: http://technet.microsoft.com/en-us/security/bulletin/ms12-027
Scaling Internationally: Understanding International Implications

Samuel Warren
IS 469 - Information Security Capstone
Dan Morrill
City University
May 17, 2012

Scaling Internationally: Understanding International Implications

Executive Summary

International business is fraught with complexity. Just as no two people are the same, no two countries are the same, despite the commonality shared by countrymen. This poses serious implications when considering changing a security policy to incorporate international addendums. To make one all-inclusive policy, one must pay close attention to the differences in thinking and communication between the host country and any countries that will use the new policy.

Introduction

Preparing a security policy that works well within the scope of a national company can present many challenges. Expanding it to a larger platform, namely the international stage, makes it far more complex and potentially risky. When designing and deploying a security policy that can be scaled internationally, there are many things to take into account. Some of the things that make cultures unique and beautiful can also cause the most difficulty and confusion in the practical business application of a security policy. A whole host of scenarios need to be accounted for in a strong security policy, such as internet coding standards, database protection standards, network infrastructure and physical security, protection of company assets, and data loss and recoverability, among others. Scaling a strong intra-national policy outward internationally means the dimensions provided for in the policy become far more complex. There are two major areas of complexity that are typically not fully covered because of their sheer size and implications: communication and thinking processes change from country to country. A simple example is the United States and the United Kingdom.
While both speak English, they think and speak quite differently.

Communication Differences

One of the first things human children learn is how to communicate. All children across the world learn to communicate in a manner that translates across all barriers, cultures, and countries: crying. There are different intonations for different needs, but even those are cross-cultural. When a baby cries because it is hungry, that same cry is understood whether the baby's parent or guardian speaks Chinese, German, or English. However, as the baby grows and develops, it learns to speak the native language of its forebears, including all the cultural and family idiosyncrasies. Each of those plays a key part in how the individual communicates as an adult and feeds directly into the culture as a whole. Communication is one of the most important things to consider when taking a security policy to the international stage. There are many different laws from country to country that need to be deciphered; however, they are relatively easy compared to the challenge of communicating cross-culturally. A prime example can be seen in the concept of negotiations: "Americans see the goal of negotiations as to produce a binding contract which creates specific rights and obligations. Japanese see the goal of negotiations as to create a relationship between the two parties; the written contract is simply an expression of that relationship" (Salacuse, 1991). This is a crucial difference because Americans view the end of negotiations as a milestone and typically hand off control to another team.
The Japanese want to be directly involved from beginning to end and work through the relationship, as opposed to being handed off to another group that will "handle the next phases." The aforementioned example encapsulates the fundamental differences that need to be addressed before expanding the policy to include international rules, policies, and standards. Before work can begin on an international policy, a full discovery deep-dive must be performed to determine which countries will have an impact on the policy and what requirement gaps exist. Then at least one security professional in each country, who knows the requirements and language of that country, can customize the core policy into a form that works for the outlined countries and work through conceptual differences in language and culture.

Thinking Differences

Another major difference that needs to be accounted for in security policy expansion is the difference in thinking from country to country and culture to culture. This difference in thinking deserves attention from all angles. The problem is that when considering international policy, there is most often not enough emphasis placed on understanding cultural thought patterns and how critical thinking is approached. From the security policy standpoint, when a problem is broached, how does the security staff respond? The way a person in the United States thinks through a problem is completely different from the way a person in Germany or Japan does.
Through a set of experiments, Peng and Nisbett (1999) demonstrated that: 1) Chinese students preferred proverbs which contained apparent contradictions more than did their European-American counterparts; 2) Chinese students were less likely to take sides in real-life social conflicts but more likely to choose a compromising resolution strategy than the European-American students; 3) Chinese students preferred arguments based on the principle of holism while American students preferred arguments that relied on the law of noncontradiction; and 4) American students showed more polarized opinions after reading two seemingly contradictory accounts of the same issue whereas Chinese students would seek an account which could accommodate both sides of the issue. (Miu-Chi Lun, 2010)

The interesting thing about this discovery, as discussed by Miu-Chi Lun, is the major difference in problem solving and critical thinking. The fact that the Westerners chose polarization over neutral compromise shows that, at a root level, the cultures, and with them the cultural thinking, are different. This affects security policy expansion in the long-term execution of the policy: the way engineers or specialists choose to solve problems related to the security of their systems may be fundamentally different. In security, there is a fine line between compromise and taking a polarized stand. This means the policy will need to define much more explicitly what is and is not flexible by country. Another major concern is how criminal hacking differs because of this change in thinking. For example, an English-speaking hacker who writes code to create and execute a virus will take a different approach and use different syntax than one who speaks, say, Korean. The one redeeming factor is that the code always uses the same taxonomy; the way it is implemented and propagated, however, is completely different.

References

Miu-Chi Lun, V. (2010). Examining the influence of culture on critical thinking in higher education. Retrieved from Victoria University of Wellington Research Archive: http://researcharchive.vuw.ac.nz/bitstream/handle/10063/1211/thesis.pdf?sequence=1
Salacuse, J. (1991). Making deals in strange places: A beginner's guide to international business negotiations. Retrieved from University of Colorado, Conflict Research Consortium: http://www.colorado.edu/conflict/peace/example/sala7533.htm

Tuesday, May 15, 2012

Disaster Recovery Plan

Samuel Warren
IS 308 - Internet Technologies
Lawrence Masters
City University
May 2, 2012

Disaster Recovery Plan

Executive Summary

Disasters come in all shapes and sizes. While an organization could go bankrupt trying to plan for everything, Sambergraphix must focus on what makes logical sense for disaster recovery and business continuity management. The goal should be to retrofit the company's web presence, as outlined herein, and create a regular cycle of data backups with multiple redundant locations, so that information is readily accessible in the event of a natural disaster or other major issue. Creating a strong Storage Area Network (SAN) with direct connections to cloud-based backup providers would ensure the best flexibility for data continuity and ease of architecture. In the event of a web server or proxy server failure, additional servers are available to handle the extra load until the network technicians can fix or replace the compromised hardware. Along with the proposed technical solution, creating a task force to evaluate Sambergraphix's response to disasters and other business continuity tasks will ensure the company remains viable in the event of a major disaster or cyber attack.

Introduction

In late 2000, the ability of Sambergraphix to serve content to the web was tested by an earthquake that practically destroyed the onsite data center. The web servers that served content took a major physical jarring. Combine that with exclusive Sambergraphix content being requested by news outlets across the United States, and the result was a serious spike in web traffic and Application Programming Interface (API) calls that the physically shaken servers could not handle.
Although Sambergraphix is still considered a small business, the quality of the content and its relevance to the earthquake response made the server and its proxy crash every time they were brought back online. As a short-term solution, the IT group purchased a replacement web server and a replacement proxy server to restore the web presence. Fast forward 12 years to the present: while Sambergraphix is the same size and sees the same typical web traffic, the need for a disaster recovery plan and business continuity for the web presence is undeniable. This proposal makes a strong recommendation to the IT staff, management, and CEO of Sambergraphix for a solution for recovery and continuity.

Current Environment

The server diagram illustrates the current environment. As noted, the replacement servers (PROX1 and WEB1), as well as the POP3 email server, sit behind the firewall, with both PROX1 and WEB1 connecting directly to the database tier for storage of web user information and content. Users and information come in through the firewall and are routed via Access Control Lists (ACLs) to the appropriate server for processing after passing through a proxy. However, due to the need for periodic maintenance on WEB1 and the POP3 server, it is necessary to add more hardware to help balance the load so the web presence is uninterrupted. At present, should anything from "Acts of God" to cyber-crime hit the servers, the website will go down until the issue is resolved. The following proposal outlines ways to protect information, as well as prepare for any possible disaster.

Solution Proposal

One detail to note from the beginning of this proposal: adding hardware and additional storage backup cycles will cost a fair amount of money. However, in the event of an emergency, they can reduce downtime considerably.
As demonstrated in the proposed solution diagram, adding additional web servers and proxy servers will enable fair load balancing in the event of large traffic spikes. Additionally, by adding a Storage Area Network (SAN) that is housed in a different location and connected by a secure VPN tunnel, Sambergraphix's web presence will be able to handle anything from an earthquake to a security breach with minimal data loss. Also, by adding a cloud-based backup service spread across multiple locations, the precious data remains stored in multiple places even if the SAN becomes corrupted or destroyed. It is also recommended that all email services be hosted externally so that, in the event of another local emergency, employees still have the capability to communicate via corporate email.

Implementation Recommendation

During the implementation phase of this solution, it is recommended that all hardware be purchased ahead of time and set up behind the firewall as additional servers. The network technicians can then connect all the pieces together, ensure they are configured correctly, and begin transferring data off the database tier on WEB1 into SAN1 (see above diagram). As soon as all the information has transferred into the new SAN and the backups have run at least two cycles, the final decision will be made to change the flow of data to the new proxy servers and web servers. Finally, along with the network changes, there is a clear need for a Disaster Recovery and Business Continuity task force. The role of the task force should be to prepare scenarios to test the quality of the current architecture and the business continuity plans. The goal should be to test the equipment, as well as the people, in various low to extreme scenarios and to perform a comprehensive gap analysis to determine how to improve business continuity management.
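One concrete way to confirm that backup cycles have actually completed correctly is to compare file digests across the redundant locations. The sketch below is a minimal illustration of that check, assuming hypothetical local paths standing in for the SAN and a cloud mirror; it is not Sambergraphix's actual tooling.

```python
import hashlib
from pathlib import Path

def digest(path: Path) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_backup(primary: Path, replicas: list[Path]) -> dict:
    """Compare each replica's digest against the primary copy,
    flagging any location whose copy has drifted or corrupted."""
    expected = digest(primary)
    return {str(r): digest(r) == expected for r in replicas}
```

Running a check like this after each backup cycle gives the task force an objective pass/fail record per location, rather than relying on the backup software's own success report.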
According to Chris Ollington's "Secure Online Business Handbook," there is a fine line between what makes sense for Sambergraphix and what does not:

A trade-off needs to be achieved between creating an effective fit-for-purpose capability and relying on untrained and untried individuals and hoping they will cope in an emergency. The spanning of the gap between the plan and those who carry it out can be achieved by either formal tuition and/or simulations. The well-known maxim that a team is only as strong as its weakest link is worth remembering here. (2004)

That said, the task force should not seek to mitigate tornado response, because the likelihood of such an occurrence in this region is minimal at best; earthquakes, however, should definitely be considered. The task force should be a representative group from all major departments, and each member should be responsible for informing their area of the business about continuity management topics and information. Also, the director of the business continuity task force should provide direct guidance, training, and reporting to the president's team and the director of Human Resources.

References

Ollington, C. (Ed.). (2004). The secure online business handbook: E-commerce, IT functionality & business continuity (2nd ed.). [Books24x7 version] Available from http://common.books24x7.com.proxy.cityu.edu/toc.aspx?bookid=9923
Securing APIs

Samuel Warren
IS 469 - Information Security Capstone
Dan Morrill
City University
April 27, 2012

Securing APIs

Executive Summary

There is increasing demand on the enterprise API used by our organization. With this in mind, our security needs to be updated to ensure no sensitive data is released to the public. One change is recommended to more securely protect our API.

Introduction

Application Programming Interfaces (APIs) have been around for quite some time. When web applications were created, so, by necessity, were APIs. While API usage has changed recently, the core idea is the same: APIs are the access point through which developers work with an application. With the current push to open projects to larger groups of contributors, API security has come under heavier fire. Before the demand for application sharing, API security was not a high priority, simply because most application support was done inside the same relative zone on the network, typically a secured private network. However, with the increase in demand, every aspect of each API is put under more scrutiny than its predecessors. With that in mind, the following is a recommended change to the current enterprise API.

Recommended Changes to the API

Because of the demand on all API features and the amount of legitimate traffic coming to the application tier, the proposed solution should meet both current and future API traffic. The recommendation is to create a custom checksum that is a hash of the session data and a private key. When the client sends the packet to the server, the server takes the payload, computes its own version of the same checksum, and verifies that it matches.
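The checksum scheme described above can be sketched in a few lines. This is a minimal illustration, assuming a shared secret already distributed out of band; the function and parameter names are hypothetical, not those of our actual API.

```python
import hashlib
import hmac

# Assumption: the key was exchanged out of band and is never sent on the wire.
SECRET_KEY = b"shared-secret-distributed-out-of-band"

def sign(session_data: bytes, key: bytes = SECRET_KEY) -> str:
    """Client side: compute an HMAC-SHA256 checksum of the session data."""
    return hmac.new(key, session_data, hashlib.sha256).hexdigest()

def verify(session_data: bytes, checksum: str, key: bytes = SECRET_KEY) -> bool:
    """Server side: recompute the checksum and compare in constant time."""
    expected = sign(session_data, key)
    return hmac.compare_digest(expected, checksum)
```

Note the use of `hmac.compare_digest` rather than `==`: a plain string comparison can leak timing information that helps an attacker forge checksums byte by byte.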
Kalla (2011) illustrates the idea:

You realize that in real-life, this is basically like someone coming up to you and saying: "Jimmy told me to tell you to give the money to Johnny Two-toes," but you have no idea who this guy is, so you hold out your hand and test him to see if he knows the secret handshake. (Kalla, 2011)

If our systems can compute a secure, hashed match to the provided checksum, the internal application tier can grant trust to the requestor. This is known as HMAC authentication (Kalla, 2011) and does not require any direct transmission of a password or username over unencrypted channels. For more information about securing APIs, please consult the References section.

References

Kalla, R. (2011, April 26). Designing a secure REST (web) API without OAuth [White paper]. Retrieved April 27, 2012, from The Buzz Media website: http://www.thebuzzmedia.com/designing-a-secure-rest-api-without-oauth-authentication/

Stormy Clouds are Rolling In: A Look at Cloud Technology

Stormy Clouds are Rolling In: A Look at Cloud Technology Samuel Warren IS469-Information Security Capstone Dan Morrill City University May 3, 2012 

Stormy Clouds are Rolling In: A Look at Cloud Technology


Executive Summary

Without a doubt, Cloud technology is here to stay. The Cloud as a concept is the idea of separating out specific services and having a third party that specializes in that area host an organization's needs. One of the major issues with the Cloud is the lack of commonality among the various service providers. It is acceptable to have differences between Cloud storage and Cloud web servers; however, differences have been shown to exist even between Cloud storage providers. This poses a major security concern because security engineers and analysts are not able to identify and mitigate attack vectors. There is also a real need to bring discipline to the people and processes around Cloud usage in each individual organization.

Introduction

At technology conferences every year, thousands file into venues to learn about the newest gadgets and "killer apps" being displayed and presented. Among them in the last three years has been a concept, rather than a specific piece of software or technology. Dubbed "The Cloud," it is specifically the idea of taking a system that is directly integrated and exporting it to a company that can provide a stronger support and maintenance plan for it. The classic example: removing the server tier and having it hosted at a server provider, such as Rackspace, which can create, monitor, and maintain the server. There are a couple of huge advantages to this idea, the main one being that a customer of a service such as Rackspace does not have to find, hire, and train a server administrator. The benefits of hosting some IT services in the Cloud are clear. However, there are security risks as well.

Issues & Challenges

The biggest security issue with the Cloud is that there is not enough of a standardized approach across all Cloud providers. Because of that, security experts are not able to fully understand how to protect data that goes into the Cloud. With a fluid concept such as cloud computing, that simply does not work: "There is not one single model, nor is there one single architecture that you can point to and say definitively, 'We are doing that, ergo we are doing cloud computing'" (MacVittie, 2008). One of the major goals of information security is to locate and mitigate vectors of attack. However, attempting to do so in the Cloud could be equated to trying to pin Jello to the wall: while physically possible, its effectiveness is so minuscule that there is little point in attempting it. The biggest challenge is the inability to clearly define attack vectors because of ambiguity about how the service works. That ambiguity further reinforces the notion that the Cloud should not be used because it is not well known. However, fear of this type can easily be dispelled with knowledge and internal changes that bring more discipline. What MacVittie highlights is a fundamental problem that is solvable, but it requires discipline in the technology, the people implementing and using the technology, and the policies related to the Cloud.

Three-Fold Change

When discussing how to bring change, discipline, and structure to an unstructured landscape, there are many things to consider. With that in mind, the best possible way to bring about adoption and long-term change is not simply to inject change into one area, like technology, but to bring change to the people and the policies relating to that technology. With the Cloud's lack of standardization (MacVittie, 2008), there is a real need to be structured and disciplined about how the enterprise deals with Cloud solutions. For example, if the Cloud application is storage, the business should be asking the provider what its specific security policy is, how often the information is available, and when the storage devices come down for maintenance. Then the business's IT group needs to spend time doing a gap analysis to determine how to structure the data they are passing to the Cloud. Think of it like training in the military. During boot camp, a person is stripped down to their basic functions and taught from the ground up how to survive, fight, and work together as a unit with others. That way, when the soldiers go to a theater of war and the chaos starts, they are able to survive and function effectively as a unit. It is very easy to simply inject a tool into a group; in fact, there are organizations around the world that do so regularly. The problem becomes apparent when the tool is not utilized and the people who asked for it are the only ones using it. Instead, if a group that owns, say, web publishing brings in a new tool, that group should not just add the new tool and turn off old tools. They should look for ways to make the tool the most effective tool for web publishing across the organization. They should bring governance and process change to groups that have outdated processes.
They should, within their scope of bilateral power, move to have the organization adopt this as the only tool for web publishing at the enterprise level, then constantly find ways to support the teams using the tool. When a tool, or a process, or a person is the only thing changing, rather than all three, there is a real potential for failure. But if there is a three-fold change, the benefits multiply. A 2009 article discusses them: "Customers benefit from advanced encryption that only they are able to decode, ensuring that Mimecast acts only as the custodian, rather than the controller of the data, offering companies concerned about privacy another layer of protection" (Binning). While Binning is specifically describing the relationship between Mimecast and its customers for email management services in the Cloud, it is a pertinent example of the dual benefits achieved when a Cloud company provides a strong service and the people utilizing the service are able to change their policies and personal biases. There is a real bias among IT management toward keeping all information "in-house" because they can physically see and "touch" the data, but if they are disciplined about how they approach Cloud services and grant neither undue credence nor undue cynicism, they will not forfeit the benefits to be gained.

References

Binning, D. (2009, April 24). Top five cloud computing security issues. Retrieved May 3, 2012, from ComputerWeekly.com website: http://www.computerweekly.com/news/2240089111/Top-five-cloud-computing-security-issues
MacVittie, L. (2008, December). Defining cloud computing. Retrieved May 3, 2012, from ComputerWeekly.com website: http://www.computerweekly.com/opinion/Defining-cloud-computing

Wednesday, May 2, 2012

Scanning a Content Management System











Scanning a Content Management System
Samuel Warren
IS468 – Tools and Techniques
Matthew Pennington
March 10, 2012


Scanning a Content Management System

Executive Summary

            The need to scan systems is undeniable. As the Internet blossomed and grew, the vulnerabilities associated with Internet technologies increased exponentially. While there are many applications that need to be scanned, the Content Management System (CMS) is a newer class of application that has grown by leaps and bounds. This explosion has brought with it a host of problems. Protecting the CMS is a task that can be done internally, using tools like Nessus and SQLMap, or externally, by using a service such as WhiteHat Security. Ensuring that the developers of the website comply with the standards and processes set up by the organization, as well as maintaining Sarbanes-Oxley and PCI compliance (if applicable), should be at the forefront of all compliance-related efforts.

Introduction

In every great Dark Ages legend, there is a story about a knight. Usually that knight wears a suit of armor, typically made of steel and chainmail. There were positives and negatives to wearing a suit of armor. The positives included protection from arrows, swords, and other lightweight weapons of war; when the soldier fought, he had a higher likelihood of coming home in one piece. The biggest negative was the lack of mobility: in open combat, the weight of all the armor could slow the knight to the point that he was easily overtaken. However, the sheer number of attackers needed to defeat a knight, combined with his training and agility even while wearing armor, made the knight one of the most feared tools in any commander's war chest. In the war on cyber-crime, the organization has a new knight. Corporate security, both physical and virtual, creates the new armor to protect the corporate entity. Unlike past suits of armor, the protection provided is not only about blocking attacks, but also about managing potential weaknesses in the armor.
The major issue with the suit of armor is clearly explained in the 1991 film “Robin Hood: Prince of Thieves.” When pressed, Kevin Costner’s character, Robin of Locksley, says, “They’ve got armor Bull? Even this boy can be taught to find the jinx in every suit of armor” (Reynolds). Armor for the organization has “jinxes” in it. The jinxes come in all shapes and sizes, from Cross-Site Scripting attacks to Man-in-the-Middle, to pure and simple espionage. While there is no fool-proof way to completely secure an organization’s information and remain business effective, there is a need to find a way to mitigate the holes.

The Need to Scan and Protect Content Management Systems (CMS)

When the Internet was first developed, its capability was very limited; simple text was the name of the game. In those early days, developing what is now called a "web page" was complicated: a special language was designed specifically for coding web pages, and adding a page was especially time consuming and complex to get right. As capabilities changed and bandwidth increased, the Internet began to grow and transform at a tremendously rapid rate.
The Internet continues to evolve, enabling people across the globe to communicate and send huge files in real time. The lack of a central authority controlling it helps the Internet to flourish rapidly, aided in great part by technological advancements. (Fuller, 2011)
Unfortunately, the same thing that helped the Internet grow also became the primary enabling factor for would-be criminal hackers. Hacking is nothing new; the first hackers were those who wanted to improve their device or system to make it work more efficiently or give them more control. Nevertheless, criminal hacking has proliferated in the late 20th and early 21st centuries, due primarily to the lack of centralized control and authority. As the technologies expanded, so, too, did the methods of the criminal hackers. All the new systems created to make publishing to the Internet easier also created additional loopholes. A great example is the Cross-Site Request Forgery (CSRF) attack; it uses the victim's own browser to exploit the trust that recently visited websites place in that browser's authenticated session.
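A common mitigation for the CSRF loophole just described is a per-session anti-forgery token that the server embeds in its own forms and checks on every submission. The sketch below is a minimal, framework-free illustration; the function names are hypothetical, and real web frameworks ship their own version of this mechanism.

```python
import hmac
import secrets

def issue_csrf_token() -> str:
    """Generate an unguessable per-session token; the server stores it
    in the session and embeds it in every form it renders."""
    return secrets.token_hex(32)

def is_valid_submission(session_token: str, submitted_token: str) -> bool:
    """Reject any form post whose token does not match the session's,
    using a constant-time comparison. A forged cross-site request
    cannot read the victim's page, so it cannot learn the token."""
    return hmac.compare_digest(session_token, submitted_token)
```

The defense works because the attacker's page can make the victim's browser send a request, but cannot read the token out of the legitimate page it would need to include with that request.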
            One of the more recent technologies is the Web Content Management System (CMS or WCMS). Starting to grow in the mid-1990s (Laminack, 2008), these systems spread very quickly to meet the challenges, and the demand for flexibility, of the Internet at the time.
This allowed people to upload photos, write stories, and made web pages much more interesting. In those days, everyone wrote their own. This was the dawn of the custom CMS. Then some people started commercializing their CMSs and building businesses that sold and supported CMSs. (Laminack, 2008)
Today there are literally thousands of CMSs on the market, everything from proprietary systems developed by large organizations such as Microsoft to open-source, community-driven CMSs like Drupal or Joomla. Each of these CMSs has benefits and drawbacks. None of them is completely secure. Their vulnerabilities are as different as their coding languages, but the need to secure them is very real. As CMS platforms become more robust, allowing for form creation, forums, or live chat, the vulnerabilities of this middle-layer application, and ultimately of the databases it feeds, need to be scanned for and protected against with increasing earnestness. As mentioned previously, there is no 100% foolproof way to protect any system while remaining vigorous and flexible, although one can limit the possible holes (attack vectors). By completely protecting what can be protected, security professionals are free to focus only on those areas that are known vulnerabilities. Therefore, a two-front counter-assault should be launched to find and eliminate as much vulnerability as possible. One front should be tool-oriented; the second should focus on changing the processes and standards used to publish and extend CMS platforms beyond their base (vanilla) install.

Front One: Recommended tools

            There are several recommended tools that can be used to protect a system, as well as trusted partner companies that will scan the system for the organization and report back the results.
Nessus Scanner by Tenable
Nessus is a network and application scanning tool that enables the system administrator to scan the network for vulnerabilities in three severity classifications, as well as see the number of open network ports.
Nessus 5.0 features high-speed discovery, configuration auditing, asset profiling, sensitive data discovery, patch management integration, and vulnerability analysis of your security posture with features that enhance usability, effectiveness, efficiency, and communication with all parts of your organization. (Tenable Network Security)
This tool is full-featured and easy to use. After a network scan, it gives a comprehensive report on what the vulnerabilities are, how each is classified (high, medium, or low severity), and how to mitigate it where a fix is available. Another very handy feature: Nessus will provide a list of open ports. Unneeded open ports are nearly as dangerous as having no security whatsoever; they give direct access into the network and provide attackers with a staging point for further application attacks. Using Nessus in a CMS environment is simple: scan the host IP of the CMS platform, and it will report the vulnerabilities of all applications associated with that host.
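The port enumeration Nessus performs boils down to attempting connections and recording which ones succeed. The minimal sketch below is not the Nessus API, just a toy TCP connect check of the kind any port scanner runs under the hood; the host and ports are placeholders.

```python
import socket

def port_is_open(host, port, timeout=1.0):
    """Attempt a TCP connection; success means something is listening."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Refused, timed out, or unreachable: treat as not open.
        return False

# Check a few well-known ports on the local machine.
for port in (22, 80, 443):
    state = "open" if port_is_open("127.0.0.1", port) else "closed/filtered"
    print(f"port {port}: {state}")
```

A real scanner adds service fingerprinting and vulnerability lookups on top of this basic check, which is why a full product like Nessus is worth the investment.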
SQLMap
SQLMap is an open-source tool used to find SQL injection points. It enables the system administrator to run a few simple commands and completely exploit an SQL injection point. According to SQLMap's site,
It comes with a powerful detection engine, many niche features for the ultimate penetration tester and a broad range of switches lasting from database fingerprinting, over data fetching from the database, to accessing the underlying file system and executing commands on the operating system via out-of-band connections. (SourceForge)
SQLMap provides a platform for ad hoc scanning and, depending on the vulnerability, will give varying levels of injection into the database and varying levels of access to information. This tool is especially helpful if the database(s) behind the CMS platform run on SQL.
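The flaw SQLMap hunts for comes from building queries by string concatenation instead of parameter binding. A minimal sketch of both patterns, using an in-memory SQLite table with hypothetical data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def vulnerable_lookup(name):
    # Attacker-controlled input is spliced straight into the SQL text.
    query = f"SELECT secret FROM users WHERE name = '{name}'"
    return conn.execute(query).fetchall()

def safe_lookup(name):
    # Parameter binding keeps the input as data, never as SQL.
    return conn.execute(
        "SELECT secret FROM users WHERE name = ?", (name,)
    ).fetchall()

payload = "x' OR '1'='1"
print(vulnerable_lookup(payload))  # injection dumps every row
print(safe_lookup(payload))        # returns nothing
```

The classic `' OR '1'='1` payload turns the vulnerable query's WHERE clause into a tautology, which is exactly the kind of injection point SQLMap automates finding and exploiting.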
Scanning as a Service
            Another option is to use a third party to scan the CMS platform for vulnerabilities. One such service provider is WhiteHat Security.
Founded in 2001 by Jeremiah Grossman–a former Yahoo! information security officer–WhiteHat combines a revolutionary, cloud-based technology platform with a team of leading security experts to help customers in the toughest, most regulated industries, including e-commerce, financial services, information technology, healthcare and more. (WhiteHat Security, 2012)  
WhiteHat sets up a schedule for when to run the scan, then gathers the results and provides a comprehensive report of vulnerabilities that need to be corrected within a specific timeline. The service also provides scanning to maintain PCI compliance, so if the organization has the finances to spend on WhiteHat's services, it gains a powerful ally to help fix the "jinxes" in its armor.

Front Two: Processes and standards

            Scanning the CMS for vulnerabilities is just the beginning of correcting the problem. If the processes and standards used in the organization cause the vulnerabilities to reappear, the tools are wasted resources. From managers down to web developers, there is a real need to follow standards that remove as many of the vulnerabilities as possible. By following such standards and taking the time to practice smart coding, an organization can greatly limit the vulnerabilities it produces. With respect to the aforementioned Content Management Systems, this means building new templates, modules, or plug-ins that follow the standards set forth. A good place to start is the World Wide Web Consortium's (W3C) standards for web development.
Most W3C work revolves around the standardization of Web technologies. To accomplish this work, W3C follows processes that promote the development of high-quality standards based on the consensus of the community. W3C processes promote fairness, responsiveness, and progress, all facets of the W3C mission. (World Wide Web Consortium)
By building standards that make sense for the organization, web developers gain the leverage to push back on other groups whose web-related requests do not meet those standards. There should also be an appeal process in place so that current standards can be revised as technology changes.

The Continuing Impact of Sarbanes-Oxley

            For a publicly traded organization, one major element to consider is the impact of the Sarbanes-Oxley Act (SOX). Sections 302 and 404 in particular have direct implications for web development.
These sections require that the CEO and CFO of an organization certify and assert to stakeholders that the financial statements of the company and all supplemental disclosures are truthful and reliable, and that management has taken appropriate steps and implemented controls to consistently produce reliable financial information to its stakeholders (Section 302). The company’s external auditor must report on the reliability of management's assessment of internal control (Section 404). (Imperva, 2009)
Where this comes in, especially for web-based organizations, is in ensuring that the financials are clearly shown on the website and easily accessible. Add to that section 404, which requires the external auditor's report to appear on the web as well. Following the aforementioned standards will make any audit less painful. Creating processes and policies around a set of standards, and regularly evaluating any changes to SOX, will secure organizational compliance. As the Internet continues to evolve and grow, the laws governing certain types of transactions will change. It is therefore critical that the organization keep pace with those changes in order to secure its place on the Internet.

References
Fuller, M. (2011, June 09). The Evolution of the Internet and its Meteoric Rise. Retrieved from Techwench.com: http://www.techwench.com/the-evolution-of-the-internet-and-its-meteoric-rise/
Imperva. (2009). Implementing Sarbanes-Oxley Audit Requirements. Retrieved from Imperva.com: http://www.imperva.com/docs/WP_SOX_compliance.pdf
Laminack, B. (2008, November 14). A Brief History of Content Management Systems. Retrieved from Brent Laminack’s Personal Site: http://laminack.com/2008/11/14/a-brief-history-of-content-management-systems/
Reynolds, K. (Director). (1991). Robin Hood: Prince of Thieves [Motion Picture].
SourceForge. (n.d.). sqlmap automatic SQL injection and database takeover tool. Retrieved from sourceforge.net: http://sqlmap.sourceforge.net/
Tenable Network Security. (n.d.). Nessus 5.0 is here. Retrieved from tenable.com: http://www.tenable.com/products/nessus?_kk=nessus%20scanner&_kt=9b03f8a7-8e0a-4eb4-bcaf-fc0d14045e85&gclid=CLH3g5uM3a4CFQcFRQodTBjqWw
WhiteHat Security. (2012). About WhiteHat Security. Retrieved from Whitehatsec.com: https://www.whitehatsec.com/abt/abt.html
World Wide Web Consortium. (n.d.). About W3C Standards. Retrieved from W3C.org: http://www.w3.org/standards/about.html


Tuesday, May 1, 2012

Policy and the Manager

Policy and the Manager
Samuel Warren
IS464 – Policy
Ryan Gunhold
City University
March 7, 2012


Executive Summary


Policy affects every area of a corporation. It can cause drastic cultural change, or it can be a few spoken words that do nothing to affect how the business runs. In either case, the managers of the corporation's various teams are crucial to a policy's influence. Managers are the key communicators, the chief cheerleaders, and the major drivers of many of the policies that executives envision. While it could be done without their assistance, implementing policy without the help of the corporation's management team would be tantamount to attempting to escape quicksand. There are three major phases in a policy's lifecycle: creation, adoption, and infusion. During the creation phase, the policy is planned out, pushed out, and policed; during the adoption phase, managers heavily assist with the communication and approach each policy needs. During the infusion phase, the policy is compared directly to the culture of the corporation and evaluated for success. If necessary, the policy may change; however, the culture may need modification as well to continue positive growth and innovation.

Introduction to Policy


Corporations are living entities; they take in, expel, and reproduce. Policy within the corporation is the brain activity, sending jolts of electricity to the various parts of the corporate body. The body reacts according to the signals received, just as different sub-divisions within a corporate structure react when policy is introduced. While a policy is intended to convey a specific set of ideas and protocols for handling specific situations, it can quickly be misinterpreted if there is a problem with the sending mechanism. A policy goes through several phases during its life. First comes the creation phase, where the policy is developed, deployed, and its enforcement plans are laid out. Next comes the adoption phase, which is all about socializing the policy. Finally, the policy goes through a longer, more thorough infusion phase, where it is fully integrated and accepted by every person it impacts.

Creation


            When a policy is produced, it goes through three stages within the creation phase. The policy is first developed. In this stage, every angle of the policy is considered and analyzed. It is crucial to understand how the policy would affect the corporation and to determine ways to mitigate any potential problems. For example, if a corporation has a policy of only hiring men for certain positions because of the physically demanding nature of the job, it must determine whether that policy has any drawbacks. What happens if a woman comes along who is physically capable of completing the job duties and is not hired? Unless the organization has a legitimate reason for not hiring her, the policy would violate equal employment opportunity law, and the corporation could face legal action.
            During the development stage, one must also consider whether the policy is strictly national or international in scope. If it is an international policy, does it conflict with other policies already in place in that space? How does it affect the workforce of other countries? Most often, adding an international policy increases the time and money needed to implement it. Understanding that every country involved is different is only the beginning; the policy must also be modified, where appropriate, to accommodate social, political, and environmental differences from country to country.
The next stage is determining how the policy can be deployed. What are the costs associated with implementation? How long would it take to deploy the policy? Some policies need no implementation beyond releasing the policy and making people aware of it; a dress code policy is one example. Others require money, technology, and human interaction to implement. For example, if a policy states that all employees must use a laptop running Windows 7 as their main computer, money is required for each employee's laptop, Windows 7 license, and any other licenses needed to run the business. Again, from an international perspective, deployment can be a complexity that expands the timeline. If the aforementioned policy is in place and an employee works in rural Nepal, that employee may not have access to a laptop with Windows 7. In such cases, the organization should either add a mitigating phrase to the policy, such as "where applicable," or furnish noncompliant individuals with a Windows 7-powered laptop.
            The final stage of the creation phase is policy enforcement. Arguably, this is the most important of the three stages. An executive can create policies all day, but if none of them are enforced, the corporate body is free to do whatever it wants; leadership loses control of what the corporate body does. Enforcement needs to be consistent and fair. There also needs to be an appeal process that triggers a review and may result in changes to the policy. While appeals happen after the policy is enforced, it is important to note that the mechanism that allows for them is created as part of this phase.
            Managers play a key role in this and the subsequent phases. While executives are the creators of the policy, the managers of the various teams assist in deployment and enforcement efforts. Managers are the main source of communication, changes, and appeals for the policy. Enforcement, specifically, is where the most time is spent. Making sure an employee adheres to policy is among the most common tasks in any leadership role. Because of this, the manager must know all the policies that relate to their group as well as those general to the corporation. In "The Servant Leader: Unleashing the Power of Your People," Neuschel discusses the differences between management and leadership and how best to utilize one's employees. This is especially relevant in an international corporation: because of differences in culture, it is important to truly empower the employee to follow the policy, or to exempt them from it. In chapter six, Neuschel (2005) describes the way a manager should lead: "The manager/leader must meet and effectively deal with the immediate day-to-day issues. No organization can long survive if today's issues are not dealt with immediately, competently, and vigorously" (p. 42). He goes on to say:
There is a compelling need to set up a framework within which responsibilities can be delegated on an orderly basis. By operating within such a framework, managers down the line can share in the decision-making in a reasonably orderly way. This permits, even encourages the multiplying power of submanagers— subleaders—in sharing the decision-making and, in the process, adding to their growth. (2005, p. 42)
Managers should encourage, guide, and empower their employees to follow the policies. By creating a development path and clear, attainable goals for the employee to achieve, the manager can work to ensure that, at every level, the employee is equipped to follow policy.

Adoption


The next major phase in the lifecycle of a policy is adoption. As in the previous phase, managers play a crucial role here. Wide adoption and socialization of a policy is as much a function of management as signing timecards. Managers carry the voice of the executives down to the level of their group. At this stage, the policy team should create a unique approach for each new policy.
Linguistic information is a powerful means to convey social information in our culture. When we lack first-person experience regarding a situation or another person, we often turn to information provided by others to form an opinion. (Ruz, 2011)
Some policies can be shared and spread via email or a corporate intranet, while other policies require a different approach entirely. Policies such as those concerning sexual harassment may require face-to-face time with a subject matter expert in the field, so that personal opinions do not conflict with the intended meaning of the policy.
The logistics of getting employees to that training should fall on human resources, the manager, or both. Human resources should explain how employees can and should be involved; however, it ultimately falls to the manager to ensure the employee actually attends the training and understands its importance.
Another critical stage of the adoption phase is communication. An actionable communication plan to disseminate the policy and gain employee acceptance should be firmly established before the deployment stage of the creation phase. During the adoption phase, that plan becomes active, and the role of communication falls predominantly on management. The manager should be aware of employee behavior on a day-to-day basis in order to correct it when necessary. Constant communication between the managers and the policy creators is a much-needed channel that seldom exists. If communication can be maintained and widespread acceptance of the policy gained, the next phase will be more straightforward.

Infusion


            The final phase to be considered is infusion. Each corporation has a culture, whether positive or negative. The culture is how the corporation's employees act and react to each other and to outside influences: a sense of social belonging among the employees who work together day in and day out. The culture of a corporation should be considered very carefully during the infusion phase. If one were to inject elements that directly compete with the corporate culture at work at the time, the policy would be doomed to fail. For example, if a dress code policy states that each employee must wear a suit while the culture of the company is relaxed and prefers jeans, the policy may take more time to take hold within the groups. Culture is a complex concept, a combination of many factors including environment and socio-economic position. Adapting a policy to the culture should be considered as an alternative wherever it is appropriate. However, some policies necessitate cultural change. Psychologists often adapt their treatments to suit the culture of their patient; while it is a different field entirely, the following passage explains the need to adapt and change:
When deciding when and how to culturally adapt treatments, psychotherapists should recognize the tension between population fit and treatment fidelity. If a traditional intervention such as cognitive therapy is adapted in content and format with an Asian American client by infusing the Buddhist principle of mindfulness, for example, then there comes a point at which the causal explanations of cognitive therapy may no longer predominate in the adapted treatment (therapy may facilitate meditative relaxation/awareness over the explicit refutation of irrational thoughts). (Smith, Rodríguez, & Bernal, 2011)
As described, it is very important to understand the balance between how the culture works and how the culture ought to be modified. Change is inevitable, but how one reacts to change is the key factor to understand when it comes to infusion. If culture does not change, then outdated policy, poor standards, and other negative elements will continue to thrive and cause a net problem for the corporation. For example, the culture of the nineteenth-century United States allowed for slavery, and that culture needed to change. If change had not been pushed and enforced at every level, the inequality of the Old World would have continued in the New. While slavery and racism were not solely Old World issues, the core attitudes behind them needed to change to bring equality and unity to all citizens of the United States.
During the infusion phase, gaps between the policy, standards, and practices are identified. If a policy states that all website development must pass W3C validation, then the developers must learn to code in ways that make the scan succeed. Another major point to consider with infusion is the amount of time and expense it will take to bring a team into compliance with a policy. Some policies may require a long-term investment; others may require changes to IT controls. Here again the manager plays a huge role: managers are the individuals who suggest and drive the necessary changes, and they directly hold a policy's success or failure in their hands.
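A validation policy like the one above is easiest to enforce when the check runs automatically before content is published. The toy sketch below is hypothetical, not the W3C validator (a real pipeline would submit pages to that service); it only flags mismatched HTML tags, to show the shape of such a pre-publish gate.

```python
from html.parser import HTMLParser

# Tags that never take a closing tag in HTML.
VOID_TAGS = {"br", "img", "hr", "meta", "link", "input"}

class TagBalanceChecker(HTMLParser):
    """Collects mismatched open/close tags as it parses."""

    def __init__(self):
        super().__init__()
        self.stack = []
        self.errors = []

    def handle_starttag(self, tag, attrs):
        if tag not in VOID_TAGS:
            self.stack.append(tag)

    def handle_endtag(self, tag):
        if self.stack and self.stack[-1] == tag:
            self.stack.pop()
        else:
            self.errors.append(f"unexpected </{tag}>")

def check(html):
    """Return a list of tag-balance problems; empty means the check passes."""
    checker = TagBalanceChecker()
    checker.feed(html)
    checker.errors.extend(f"unclosed <{t}>" for t in checker.stack)
    return checker.errors

print(check("<div><p>ok</p></div>"))   # [] — passes the gate
print(check("<div><p>broken</div>"))   # mismatches block publication
```

Wiring even a simple check like this into the CMS publish step turns the policy from a document into an enforced practice, which is exactly the gap the infusion phase is meant to close.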

References


Neuschel, R. P. (2005). The Servant Leader: Unleashing the Power of Your People. Northwestern University Press.
Ruz, M., Moser, A., & Webster, K. (2011). Social Expectations Bias Decision-Making in Uncertain Inter-Personal Situations. PLoS ONE, 6(2), e15762. doi:10.1371/journal.pone.0015762
Smith, T. B., Rodríguez, M., & Bernal, G. (2011). Culture. Journal Of Clinical Psychology, 67(2), 166-175. doi:10.1002/jclp.20757