Thursday, December 13, 2012

SQL vs. NoSQL

SQL vs. NoSQL
Samuel Warren
CS416: Database Management
Professor Noel Broman
December 10, 2012

Executive Summary

Since 2004, there has been a debate raging between using relational SQL databases and using databases without the SQL interfaces of times past, called "NoSQL." This debate is not one even Google has settled. In a 2012 video on YouTube, Google developers presented a debate between SQL and NoSQL. The debate reached a dead tie, with the developers agreeing that a pairing of the two would be a likely solution, at least in the short term.

Introduction

Arguably, one of the greatest resources available to any database administrator is Structured Query Language (SQL). However fast and powerful it may be, there is a contender for the throne of greatness in this field. "Not only SQL" (NoSQL) is a movement to do away with relational databases altogether. The first usage of the term, in the modern context, was in 1998 by Carlos Strozzi; "Ironically it's relational database just one without a SQL interface. As such it is not actually a part of the whole NoSQL movement we see today" (Haugen, 2010). Haugen goes on to share that in 2009 Eric Evans, who was at the time employed at a hosting company called "Rackspace," used the term to refer to a more recent uprising of non-relational databases. Strong pros and equally resilient cons have been presented for both SQL and NoSQL in the debates.

SQL

SQL uses "tables" and "columns" to store the data that is input. This is a huge advantage because it gives each piece of data a never-changing location that can be referenced, provided one labels and links back correctly. Getting the data into the database is not terribly challenging. Removing data is not difficult either, if one can determine the correct syntax and taxonomy of SQL. SQL is a simple query language that is highly repeatable and flexible. The inconvenience of SQL is the convoluted nature of linking so many different data types together to get one or two specific pieces of data. When compared to NoSQL, however, the ease of breaking down complex problems becomes a boon. Let's say that you want to compute the average age of people living in each city. In Cloud SQL [a specific product by Google], it's as easy as this. All you have to do is select the average age and group by city. (Google Developers, 2012) The query shown by the presenter was clear, easy to read, and syntactically the same as every other SQL query used by every database administrator, or analyst, working with SQL. This serves to illustrate the muscle of SQL queries and further demonstrates their ease of use. SQL is a hands-down winner in comparison to NoSQL with respect to queries. When discussing the trade-offs between the two, one of the major reasons SQL has managed to thrive is that it has been refined to the extent that anyone can learn a few commands and begin writing complex query strings. Of NoSQL-based systems: "They're not polished, and comfortable to use. They have new interfaces, and new models of working, that need learning" (Snell-Pym, 2010). While one can quickly pick up a SQL-based system and begin extracting information, it is not as easy to do so with a NoSQL-based system. According to Kahn, "[A] user can access data from the database without knowing the structure of the database table" (2011). That kind of structure is invaluable for resource managers needing to find staff who can handle a relational database with the power of SQL.
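To make the query comparison concrete, here is a minimal sketch of the kind of statements described above. The schema and names (a people table joined to a cities table on city_id) are hypothetical and assumed only for illustration; the post does not show the actual query or schema from the Google I/O session.

```sql
-- Hypothetical schema, for illustration only (not the schema from the talk).
CREATE TABLE cities (
    city_id   INTEGER PRIMARY KEY,
    city_name TEXT NOT NULL
);

CREATE TABLE people (
    person_id INTEGER PRIMARY KEY,
    full_name TEXT NOT NULL,
    age       INTEGER,
    city_id   INTEGER REFERENCES cities(city_id)  -- relational key linking the tables
);

-- Linking tables back together is where the "convoluted" feeling comes from:
-- a JOIN is needed just to pair each person with a city name.
SELECT p.full_name, c.city_name
FROM people AS p
JOIN cities AS c ON c.city_id = p.city_id;

-- The aggregate described above (average age of people in each city)
-- is a single GROUP BY on top of that JOIN.
SELECT c.city_name, AVG(p.age) AS average_age
FROM people AS p
JOIN cities AS c ON c.city_id = p.city_id
GROUP BY c.city_name;
```

In a typical NoSQL document or key-value store, the same question would usually be answered either by iterating over the stored entities in application code or by maintaining a precomputed aggregate, which is the trade-off taken up in the next section.
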
NoSQL

On the other side of the playing field, so to speak, is the non-relational model led by several open-source NoSQL contenders. Most agree that the "no" stands for "not only"—an admission that the goal is not to reject SQL but, rather, to compensate for the technical limitations shared by the majority of relational database implementations. In fact, NoSQL is more a rejection of a particular software and hardware architecture for databases than of any single technology, language, or product. (Burd, 2011) This rejection of some of the technical limitations has revealed highly desirable features in the process. The most notable feature is the ability to quickly scale the database in the event of extreme transaction volumes. Burd goes on to explain that with traditional SQL, as transactions between the servers and the databases increase to a frenzied pace and the queries become larger and larger, the only real response is to put more hardware and storage into the path of the database. Although each of these techniques extended the functionality of existing relational technologies, none fundamentally addressed the core limitations, and they all introduced additional overhead and technical tradeoffs [sic]. In other words, these were good band-aids but not cures. (Burd, 2011) NoSQL enables much quicker information discovery, because the data lives within what the Google Developers called "entities" (2012). The customary relational database, by contrast, uses different tables and has to look up the data within those tables using relational keys. The tables are then linked together using what SQL calls "JOIN" functions from within an individual's query string. A decrease in performance is easy to observe unless the database is on a mature enough system that is well laid out. As the business model evolves concepts and data models often struggle to evolve and keep pace with changes. The result is often a data structure that is filled with archaic language and patched and adapted data. As anyone who has had to explain that the value in a column has a different meaning depending on whether it is less than or greater than 100 or that "bakeries" are actually "warehouses" due to historical accident knows that the weight of history in the data model can be a serious drag in maintaining a system or incorporating new business ideas. (Rees) Rees illustrates a common problem among all systems: change. As data changes, the current and dominant relational model may become extinct. However, SQL may not be up to the task of continuing to store and serve data in its current fashion. As new as it is, NoSQL may quickly become the standard that SQL is today. With such flexibility, NoSQL only needs more companies, like Google, to accept it and learn how to work with both SQL and NoSQL alike in the interim.

References

Burd, G. (2011, October). NoSQL [PDF]. Retrieved December 12, 2012, from http://static.usenix.org/publications/login/2011-10/openpdfs/Burd.pdf
Google Developers. (2012, June 29). Google I/O 2012 - SQL vs NoSQL [Video file]. Retrieved December 12, 2012, from http://www.youtube.com/watch?v=rRoy6I4gKWU
Haugen, K. (2010, March 16). A brief history of NoSQL [Blog post]. Retrieved December 12, 2012, from http://blog.knuthaugen.no/2010/03/a-brief-history-of-nosql.html
Kahn, A. (2011, November 8). Difference between SQL and NoSQL: Comparision.
Retrieved December 12, 2012, from http://www.thewindowsclub.com/difference-sql-nosql-comparision
Rees, R. (n.d.). NoSQL comparison. Retrieved December 12, 2012, from http://www.thoughtworks.com/articles/nosql-comparison
Snell-Pym, A. (2010). NoSQL vs SQL, why not both? Retrieved December 12, 2012, from http://www.cloudbook.net/resources/stories/nosql-vs-sql-why-not-both

Wednesday, November 28, 2012

MasterControl: A QMS to answer 21 CFR

MasterControl: A QMS to answer 21 CFR
Samuel Warren
IS472: IT Compliance
Professor Steve O'Brien
November 20, 2012

Executive Summary

Regulating and maintaining compliance in the biotechnology, pharmaceutical, and genetic engineering fields is quite a task. In order to maintain compliance with the FDA-required 21 CFR, many companies are choosing to turn to quality management systems to deal with compliance. The beauty of this is that they do not have to figure out how to comply with the regulations; they simply have to learn the proper way to interact with the software. MasterControl Inc. makes one such product; with its features and integration, the MasterControl suite is one of the most robust QMS platforms that have been created. If utilized correctly, it will enable companies to do more research and spend less time trying to maintain compliance.

Introduction

What is quality? Is it an end state? Is it a process? Quality is all of the above. It is an end state, a process, and even a descriptor. When discussing quality with individuals, a somewhat vague, generalized answer most often floats to the forefront. That answer oftentimes describes the characteristics of a product or service as "reliable" or "stable." While both definitions are generally acceptable, defining quality can be much broader and involve a fair amount of compliance with regulatory demands. The benefits of the Quality Management System (QMS) described herein will be fully explained, along with the ways it contributes to compliance requirements.

What is MasterControl?

MasterControl is a product suite created by MasterControl Inc. with the purpose of aiding in quality management, with specific regard to FDA and other regulatory compliance issues. With a host of offerings, ranging from Quality Management to Training Management, it is meant to provide a managed answer to how to achieve and keep an organization in compliance with 21 CFR and other FDA-required compliance fields. According to MasterControl's "Software Control Solutions Overview": While market globalization has vastly increased the profit potential for manufacturers and other businesses, it has also intensified competition and the pressure to produce faster and at a lower cost. The situation is doubly challenging in a regulated environment (FDA, EMEA, etc.), where companies must contend not only with cutthroat competition, but also stringent regulatory requirements. (2010, p. 1) With such a highly competitive field and such risky potential failures, it is imperative that organizations do everything they can to grease the skids and provide easier access to auditors and regulators in order to avoid being considered either noncompliant or uncooperative. The MasterControl suite provides governed and trusted software and systems to help ensure the organization faces the fewest possible technology problems; it also frees companies to engage in more research and discovery.

How to Optimize MasterControl

It is essential for all systems to be optimized, and QMSs are not exempt from that necessity. Without optimization, users are unable to utilize the system to its fullest potential. When approaching the optimization of MasterControl, there are several significant areas to contemplate. One recommendation is to eliminate the muddled mix of digital and analog.
Maintaining both is too costly: the company invests in computer systems while still paying for paper, ink to print and copy, and maintenance on the devices, with costs that depend on the size of the company. Another hidden cost is the time investment for audits or inspections. A routine GMP inspection typically lasts a week, but sometimes they can last up to five weeks. The investigator noted that within this context, an electronic record-keeping system could make all the difference in speeding up the inspection process. ("Six," 2010, p. 2) While a week is not a long time, the auditor consumes internal resources during that time and, in some cases, may stop work altogether. The end cost could be much higher than anticipated if the inspection or audit lasts longer. Another major way to optimize the QMS is to use different software and processes that connect together well. MasterControl provides such a large suite of software, all of which is interconnected, and all of which is fully digital. It has the ability to integrate with electronic repositories that are good for storing SOPs, engineering drawings, and other documents, but are incapable of controlling quality processes like training and CAPA. MasterControl allows companies to leverage their existing repositories by integrating them with robust MasterControl applications without expensive custom coding. ("Six," 2010, p. 3) By having and maintaining connections to the electronic repositories, the MasterControl suite is able to have a wider digital reach and reduce the potential disconnection points where people and outdated processes connect to the system, thereby limiting the risk of system failure.

How Does MasterControl Enhance Compliance Efforts?

A major area in which MasterControl excels is aiding in compliance efforts. With such tightly controlled fields, manually verifying compliance would be time consuming and potentially very expensive. By using a system like MasterControl's suite, five areas are accounted for: "system standard operating procedures, user authentication, access security, audit trails, and record retention" ("5 Ways," 2010, pp. 1-4). All of these areas are vital to maintaining a compliant lab, or business overall. The whitepaper written by MasterControl Inc. provides quite a bit of detail for each item. For example, regarding user authentication, it describes the following MasterControl software features: MasterControl has numerous levels of security to ensure authenticity of each user in the system. The software tracks every signature combination and does not allow duplication or reassignment of the user ID and signature combination. Each user establishes a signature password upon first log in. He or she first logs into MasterControl with a user ID and a password just to gain access. To sign off on any document, the user must use a different "approval" password. All user IDs and passwords are encrypted and are not available to anyone in the system. ("5 Ways," 2010, p. 3) The aforementioned security levels help to define and regulate how users interact with the QMS. They also provide a robust system control scheme enabling direct fulfillment of the 21 CFR regulations for this area. While there are many more features of MasterControl's products, this particular area serves as a pointed reminder of just how much detail was actually placed into the MasterControl software.

References

5 Ways MasterControl helps ensure system compliance with 21 CFR Part 11. (2010). MasterControl Inc.
Retrieved November 19, 2012, from http://www.mastercontrol.com/resource/index.html#wp
Six ways to optimize your quality management system and ensure FDA and ISO compliance. (2010). MasterControl Inc. Retrieved November 19, 2012, from http://www.mastercontrol.com/resource/index.html#wp
Software control solutions overview. (2010). MasterControl Inc. Retrieved November 19, 2012, from http://www.mastercontrol.com/resource/index.html#wp

Failure to Communicate Case Study Review

Failure to Communicate Case Study Review
Samuel Warren
IS472: IT Compliance
Professor Steve O'Brien
November 26, 2012

Executive Summary

Almost every information security analyst is thought to be slightly paranoid, in part due to their willingness to see potential problems everywhere. While not all of them are actually paranoid, there is a clear need to understand and train staff on potential threats when it comes to information. Flayton Electronics, a fictional mid-sized company with a small web presence, discovered a major problem: a large number of their customers had compromised payment accounts. There is no easy or foolproof way to completely prevent data loss; however, communication and business continuity steps provide a way to keep any breach from getting too far out of hand.

Introduction

As long as data has existed, there has been communication, information transfer, and data fraud. How each is approached is vastly different, yet all share details requiring care and knowledge to navigate. Within the realm of data fraud, there are numerous required responses to be considered, not including the organizational response and reputation effects. The following review discusses the fictional company "Flayton's Electronics," the major data loss they faced, and their response to the situation.

Problem Overview—Flayton's Electronics

Flayton's CEO was informed of an alarming discovery by their principal banking institution. The bank reported that a large number of Flayton's customers had had their cards compromised. Initially, the bank reported that 15% of a random sample of 10,000 compromised accounts had purchased at Flayton's at one point or another. As they investigated further, they discovered there were two possible culprits and a disabled firewall. How the firewall stayed disabled was not a mystery; their Chief Information Officer (CIO) was constantly juggling new technology projects and seemed too busy with those to notice a downed firewall. That kind of innovation, while it yields results, also brings a level of risk from oversight. Another major problem the Flayton team had was deciding if and when to communicate the breach to their customers. At the time of discussion, they were unsure how the breach occurred, whether it was a deliberate breach by former employees or a breach by hackers sitting in their car with a laptop near the headquarters. With such minimal information, a certain amount of time was necessary, but instead of being proactive and researching the breach themselves, the Flayton team seemed to be avoiding the issue and trying to find a way out without having to communicate and deal directly with the affected customers.

How to Handle the Situation

Innovation is a great tool to have in any organization. However, innovation with improper execution does far more damage than not innovating at all. There is a level of research necessary prior to launching any technology project dealing with customer data, internal employee data, supply chain data, or any other confidential data. Conducting thorough investigations into all possible changes affecting the data, providing business continuity exercises, and keeping consistent communication between the CIO and different department heads will help to ensure this type of problem is discovered and dealt with sooner rather than later. One common fallacy is that silver bullet technology can save the day.
I've seen organizations spend hundreds of millions of dollars on security safeguards that were penetrated by a knowledgeable person with a handheld device. For example, Motorola proved to one of its customers, who had invested heavily in some of the best protection technology available, that we could access their core business systems using just a smartphone and the Internet. (McNulty, 2007) This fallacy was evident in the minds of the CEO and the CIO, as they believed being PCI compliant would protect them and prevent problems from happening. However, being PCI compliant is just one of the first steps in a number of proper security practices that need to happen within any organization. Another major point to consider, one that would help prevent this problem in the future, is to have the information security team perform regular security audits on the technology and the processes in the organization to determine whether there are any potential threat vectors. While hacking by external attackers is still the number one threat, an article in CSO describes internal attackers as a close second (Carr, 2003). Keeping that in mind, there are many ways attackers could gain access to confidential information without actually being physically inside the internal network. Above all possible hardware and software solutions, the key to this and other organizations' problems is to hire, educate, and train staff to be knowledgeable about all the potential ways data can be acquired. Then, keeping staff and leadership validated through security and background checks can provide additional defense against disgruntled employees. Simple steps like changing passwords to the systems or removing access for separated employees can go a long way to ensure no separated employee can intrude and steal information.

References

Carr, K. (2003, August 3). Numbers: Internal threats vs. external threats. Retrieved November 27, 2012, from CSO Security and Risk website: http://www.csoonline.com/article/218405/numbers-internal-threats-vs.-external-threats
McNulty, E., Lee, J. E., Boni, B., Coghlan, J., & Foley, J. (2007). Boss, I think someone stole our customer data. Harvard Business Review, 85(9), 37-50.

Thursday, November 15, 2012

Comparing Nations: A look at health information privacy

Comparing Nations: A Look at Health Information Privacy
Samuel Warren
IS472: IT Compliance
Professor Steve O'Brien
November 9, 2012

Executive Summary

There are many factors to consider when discussing the parallels and variations between Canada and the United States of America with respect to healthcare. Electronic protected health information (ePHI) is the driving force behind the HITECH expansion of the US Health Insurance Portability and Accountability Act (HIPAA). Both frameworks address standards and penalties, and both intervene in the ecosystem of the healthcare industries they serve. The differences highlighted are mostly related to costs and the context in which each health care system operates.

Introduction

There is a recent trend toward making more and more information available to the owners of said information. Due to this fact, serious strides are being made toward protecting that information from falling into the wrong hands. The Health Insurance Portability and Accountability Act (HIPAA), administered by the United States Department of Health and Human Services (HHS), was passed in 1996 with the goals of "making health care delivery more efficient and increasing the number of Americans with health insurance coverage" (National Academy of Sciences, 2009). After it was passed into law, its implementing rules were put to public scrutiny in 1999, and because of the volume of comments received, they went through several revisions (National Academy of Sciences, 2009). By way of comparison, Canada will be examined to discern the similarities and the differences.

Similarities

At heart, the two countries share many similarities from a privacy standpoint. They both desire to protect the personal health information they store and transmit. Both frameworks are also built with forethought towards electronic access. The "HITECH" portion of HIPAA provides incentives to move further into the realm of electronic records and expands the scope of HIPAA beyond the original legislation. The HITECH Act is transformational legislation that anticipates a massive expansion in the exchange of electronic protected health information (ePHI). The HITECH Act widens the scope of privacy and security protections available under HIPAA; increases potential legal liability for non-compliance; and provides more enforcement of HIPAA rules. (Leyva & Leyva) While the exchange of electronic information is already taking place in many health care providers' offices, there is an additional need to be more forward thinking and aware of potential trends in data management within the scope of protected health information. Canada's Health Information Protection Act (HIPA), enacted in the province of Saskatchewan, has built in some terminology to assist in this change. In the event that a comprehensive electronic health record is created, The Health Information Protection Amendment Act ensures that patients will have the power to block access to their personal health information once that system is in place. (Gooliaff Beaupre, 2009) The idea of making a data warehouse of ePHI controlled and secured by HIPA experts may be appealing, but the Ministry of Health also has its eye on keeping its customers happy. Another similarity between the two laws is the great pains each act takes to detail punishment for not meeting compliance. HIPAA violations, depending on the situation, can carry a punishment of up to 10 years' imprisonment and a $250,000 USD fine.
These acts are very similar across many areas; where they differ is what brings a certain amount of clarity to these two prominent health care systems.

Differences

One of the biggest differences noted is cost. While the US has some of the highest costs of any country, Canada's problems with cost are equally serious. All care is "free" for insured services—those provided by physicians and hospitals. No premiums, deductibles or co-payments are imposed. (Other services such as dental care and prescription drugs must be paid for either through private insurance or out-of-pocket.) When no one is faced with any charge for services, demand is unrestrained and costs surge. (O'Neill & O'Neill, 2007, p. 2) The costs themselves are not directly related to HIPA's equivalent of HITECH, but one must also consider the staggering costs of Information Technology (IT). Whether it is software that is HIPA/HIPAA compliant, hardware that stores the ePHI, or networking equipment that transfers it, costs rise for each company in direct proportion to how compliant the healthcare provider has to be to prevent inadvertent data loss. Another major point to consider is the artificial demand created by the decreased cost of health insurance. In 1966, Canada implemented a single-payer health care system, which is also known as Medicare. Since then, as a country, Canadians have made a conscious decision to hold down costs. One of the ways they do that is by limiting supply, mostly for elective things, which can create wait times. Their outcomes are otherwise comparable to ours. (Carroll, 2012) This is significant in and of itself because of how frequently the system, especially its IT-related portions, is utilized. As a result, both the US and Canadian systems are constantly in need of change. However, Canada's IT infrastructure is much more difficult to keep expanding, because of the low cost to patients and the driving need to help them without undue additional waits caused by updating IT systems. If the costs were comparable, it is reasonable to assume the US would also have increased service times, lower costs, and similar demand for healthcare as its neighbor to the north. But would the costs allow ePHI to be considered? Will the costs associated with the necessary technology decrease at a reasonable pace? Time will tell, but for Canada, spending should primarily focus on the protection of ePHI and the infrastructure used to transport it.

References

Carroll, A. (2012, April 16). 5 myths about Canada's health care system: The truth may surprise you about international health care. Retrieved November 9, 2012, from AARP website: http://www.aarp.org/politics-society/government-elections/info-03-2012/myths-canada-health-care.html
Gooliaff Beaupre, V. (2003, May 8). Confidentiality of health information better protected. Retrieved November 9, 2012, from The Government of Saskatchewan website: http://www.gov.sk.ca/news?newsId=79cc2a04-d0f5-4dc1-a145-e1bb5c067e17
Institute of Medicine (US) Committee on Health Research and the Privacy of Health Information: The HIPAA Privacy Rule; Nass SJ, Levit LA, Gostin LO, editors. Beyond the HIPAA Privacy Rule: Enhancing privacy, improving health through research. Washington (DC): National Academies Press (US); 2009. 1, Introduction. Available from: http://www.ncbi.nlm.nih.gov/books/NBK9576/
Leyva, C., & Leyva, D. (n.d.). HIPAA Survival Guide.
Retrieved November 9, 2012, from HIPAA Survival Guide website: http://www.hipaasurvivalguide.com/
O'Neill, D. M., & O'Neill, J. E. (2007, September). Health status, health care and inequality: Canada vs. the U.S. [PDF]. Retrieved from http://www.nber.org/papers/w13429.pdf
Penalties under HIPAA. (n.d.). Retrieved November 9, 2012, from UC Davis Health System website: http://www.ucdmc.ucdavis.edu/compliance/guidance/privacy/penalties.html

Sunday, October 28, 2012

SOX case study

SOX Case Study

Executive Summary

The Sarbanes-Oxley Act (SOX) has created quite a tumultuous time for businesses in its brief existence. While the cost of compliance with SOX was initially rather expensive, the continued cost of compliance threatens to put many publicly traded companies out of business (Sneller & Langendijk, 2007, p. 102). The biggest argument lies squarely in the realm of cost-benefits. Many in the European Commission believe SOX's broad-sweeping and international implications are, at best, not enough benefit and, at worst, an arrogant stance by the U.S. Government. While it is true that SOX can reach internationally to companies wishing to trade on the New York Stock Exchange (NYSE) (Sneller & Langendijk, 2007, p. 102), it is also true that they could choose not to trade on the NYSE and thus avoid SOX requirements. However, that would severely limit their ability to do business internationally with any sort of success. Many of the issues discussed in the case study by Sneller and Langendijk point to the physical costs and man-hour costs of auditing and complying with SOX. But there are quite a few issues to consider beyond man-hours.

Introduction

Since its creation, the Sarbanes-Oxley Act (SOX) has been at the center of corporate scrutiny, especially the hotly contested Section 404. Contested or not, SOX has been enacted into law, so the role of businesses should be to do their best to comply with the law as it applies to their business. Sneller and Langendijk speak mostly of the costs associated with SOX compliance in their 2007 case study. However, there are more factors and issues they admittedly did not consider. Herein lie a few more issues and key factors to contemplate when discussing SOX compliance.

Issues

While the biggest issue discussed was the price of head count, there are a few other factors to consider regarding costs. Simply looking at salary costs gives a moderate glance at the total cost of SOX, but it only gets into the neighborhood of the true cost. When studying total cost, one has to look for the "fully-loaded" cost. That is the cost of all the various pieces, components, tools, and other non-labor costs added together to give the total "fully-loaded cost." Unfortunately, that cost is not a fixed cost, nor is it anywhere near acceptable for many companies. Smaller players in the market have also protested at being forced to pay disproportionately high compliance costs because of past scandals involving the big boys. Some public companies even took the bold decision to voluntarily delist from the NYSE because the cost of SOX compliance was deemed too expensive. (Rodgers) One approach many companies use to show achievement is listing their company on the New York Stock Exchange (NYSE). However, because the cost of SOX directly impacts all companies that list on the NYSE, many either go public on the London Stock Exchange or do not list at all (Sneller & Langendijk, 2007, p. 102). This is a major issue in and of itself; however, if one takes into consideration the impact on a company's momentum and morale, there is a more realistic view of the widespread and far-reaching costs associated with SOX compliance. Another major concern is the potential conflict of interest involved in how costs are estimated. The Securities and Exchange Commission (SEC) is the primary focus of the investigation done by Sneller and Langendijk. According to page 102 of their 2007 case study, the SEC is responsible for making the estimates.
However, what is not clear is whether those estimates were made with a clear understanding of the actual cost at the time, or whether they were made using the SEC's "best guess." The potential for conflict of interest arises when one considers that the SEC is not only the primary consumer of this information, but also the primary driving force and enforcer of this law. What is concerning is the seeming lack of understanding of what it would actually cost to fully comply with SOX, especially Section 404. The final major issue with SOX compliance is that the SEC does not factor into its estimates the additional costs associated with compliance with other required standards. For example, the Payment Card Industry Data Security Standard (PCI DSS) or the Health Insurance Portability and Accountability Act (HIPAA) also take a major toll on the cost of doing business. If a publicly traded company were also to accept the burden of one of these other required compliance frameworks, the costs would skyrocket. Addressing these multiple compliance initiatives strains IT resources and creates redundancies in business processes within an organization. Furthermore, the high degree of specialization among security and compliance vendors exacerbates the challenge of finding a solution that works across multiple mandates. (Shulman, 2006) Most of the solutions listed require significantly more time and more money in the long run. The main reason for the additional money investment is simple: companies most often do not consider the compliance requirements prior to going public.

How to Make Compliance Work

The easiest way to make compliance work is one of two things: either to loosen the burden of compliance requirements, especially those related to SOX and HIPAA, or to do more up-front planning prior to entering the public trade arena. There are complications with either solution. For example, up-front planning only really helps those that are not already publicly traded. Additionally, with the aftermath of Enron, WorldCom, and some of the major banks having issues, there is no foreseeable future in which the government would loosen the requirements of SOX. So where does that leave the world of business? For larger businesses, it leaves a bad taste in the mouths of their boards and a dent in the year-over-year revenue stream. For smaller businesses, there is a very significant chance they will fail, either from not being able to go "public" or from noncompliance penalties. In either case, large, small, or somewhere in between, businesses need to use the democratic process and lobby to find a solution that does not require such stringent and costly requirements. The best possible solution would also restore broken trust between large, publicly traded companies and the U.S. Government, whose responsibility is to protect and represent the private citizen. In addition, larger companies should set an example of proper compliance and provide tips to smaller companies that may not know where to begin. While there are some serious issues with compliance, especially in the realm of SOX, the only company that is a victim is the one that does nothing.

References

Rodgers, J. (n.d.). Counting the cost of compliance [White paper]. Retrieved October 27, 2012, from Business Management website: http://www.busmanagement.com/article/Counting-the-cost-of-compliance/
Shulman, A. (2006, December 18). PCI, HIPAA, SOX: Is compliance the tail wagging the dog?
Retrieved October 27, 2012, from E-Commerce Times website: http://www.ecommercetimes.com/story/54759.html
Sneller, L., & Langendijk, H. (2007). Sarbanes Oxley Section 404 costs of compliance: A case study. Corporate Governance: An International Review, 15(2), 101-111. doi:10.1111/j.1467-8683.2007.00547.x

Wednesday, October 24, 2012

Compliance Criteria for RetireYourWay Hypothetical Case Study

RetireYourWay Compliance

Executive Summary

There are many compliance-related issues that must be considered by any company planning on doing business. While the punitive measures for noncompliance differ, as do the drivers of the business, each framework requires a fair amount of work. Whether it is PCI, HIPAA, SOX, or OPPA, the main issue to consider is information management. One has to review all the rules and weigh the risks of noncompliance. It may cost quite a bit to stay in compliance with PCI, but the consequences of noncompliance, such as the loss of the ability to accept payment cards, can cost far more.

Introduction

When organizations begin to operate, there are a number of issues the company's leadership needs to consider with respect to compliance. Depending on the type of business, numerous hurdles may be faced on the road to success. The business, its location, and its clientele are all important factors that need to be weighed against the relevant compliance frameworks. When RetireYourWay decided to enter business, their goals may have been simple; however, a number of compliance-related changes to how RetireYourWay does business are needed in order to maintain their high level of success.

Payment Card Industry

Whenever payments are accepted via debit or credit cards, the Payment Card Industry (PCI) standard must be addressed. PCI was originally created in 2006 (ControlScan). PCI applies to any company that accepts, transmits, or stores cardholder information from American Express, VISA, MasterCard, Discover, and JCB. This framework has everything to do with financial drivers for RetireYourWay. They use a special credit card that rounds the purchase amount to the nearest $0.50 and provides a 1% award. The cards are then used to withdraw money, with rewards given again for withdrawing. Whether RetireYourWay processes the cards themselves or chooses to use a third-party vendor, there are stiff consequences for noncompliance. The payment brands may, at their discretion, fine an acquiring bank $5,000 to $100,000 per month for PCI compliance violations. The banks will most likely pass this fine on downstream till it eventually hits the merchant. Furthermore, the bank will also most likely either terminate your relationship or increase transaction fees. (ControlScan) One way to avoid this potential repercussion is to have their systems scanned by an approved vendor and then make any adjustments accordingly. For example, if they have web transactions, using a vendor like WhiteHat to scan the web site will provide expert advice on what needs to be accounted for to remain in compliance.

Sarbanes-Oxley Act

By the sheer fact that RetireYourWay is a publicly traded company, they also fall under the compliance framework of the Sarbanes-Oxley Act (SOX). Of all the frameworks, this is by far the most challenging to set up correctly if not started toward the beginning. SOX is a regulatory process whose goal is simply to provide transparency within the leadership of the organization. SOX primarily uses audits to achieve the awareness it demands. A traditional IT audit typically focuses on component, subsystem and sometimes on the system level auditable issues of the environment being audited with a strong bias towards security matters. Sarbanes IT audits add an entire layer of governance, financial, and controls matters to the audit process.
The literature documents that a Sarbanes IT audit would rarely delve deeper than the system level since the primary objective of the Sarbanes audit is to assure the CEO, CFO, and Audit Committee that the financial information that is in the IT systems and being reported to the SEC is accurate and reliable. (Seider, 2004) When a company does not foster a sense of transparency by nature, or lacks an understanding of the need for it, it faces quite a bit more potential hassle. According to Seider (2004), the point is to ensure the Securities and Exchange Commission (SEC) receives all the correct data; in the event that does not happen, the aforementioned executives are squarely in the bull's-eye. All fines and punitive measures would fall directly on them.

Health Insurance Portability and Accountability Act

Because RetireYourWay has its own onsite health and fitness facilities, they also fall under the Health Insurance Portability and Accountability Act (HIPAA). HIPAA requires specific procedures and notifications for medical files. Even if they are only internal employee health files, RetireYourWay is still required to operate under this framework. One point to consider is whether or not they transmit employee health data electronically. If so, they are definitely required to comply with HIPAA. The Privacy and Security Rules apply only to covered entities. Individuals, organizations, and agencies that meet the definition of a covered entity under HIPAA must comply with the Rules' requirements to protect the privacy and security of health information and must provide individuals with certain rights with respect to their health information. (HHS.gov) If RetireYourWay does not provide their employees access to their records online, then they need not comply with HIPAA regulations (HHS.gov). However, there is a strong possibility, given the size of the organization, that they will offer online access to their staff for self-service of medical and fitness records.

California Online Privacy Protection Act

One other major framework to comply with is the California Online Privacy Protection Act of 2003 (OPPA). This regulatory act requires companies that either operate in, or have consumers in, California to post privacy notices on their websites. OPPA's reach extends beyond California's borders to require any person or company in the United States (and conceivably the world) that operates a Web site that collects personally identifiable information from California consumers to post a conspicuous privacy policy on its Web site stating what information is collected and with whom it is shared, and to comply with such policy. (Cooley, 2004) What this means is that the company must publish its privacy policy on its website and must, among other things, notify its California-resident employees in the event of a data breach. A notable aspect of this framework is that it deals with a geographic region and a particular demographic of people. Because this rule applies to California residents rather than to the individual organization, the law is much more powerful than other regulatory compliance laws.

References

ControlScan. (n.d.). PCI FAQs and myths. Retrieved October 19, 2012, from PCI Compliance Guide: Guide to Data Security Standards website: http://www.pcicomplianceguide.org/pcifaqs.php#8
Cooley LLP. (2004, June 29). California Online Privacy Protection Act of 2003. Retrieved October 20, 2012, from Cooley LLP website: http://www.cooley.com/57676
Department of Health and Human Services. (n.d.).
Understanding health information privacy. Retrieved October 20, 2012, from Department of Health and Human Services website: http://www.hhs.gov/ocr/privacy/hipaa/understanding/index.html
Seider, D. (2004). Sarbanes-Oxley information technology compliance audit [PDF]. Retrieved from http://www.sans.org/reading_room/whitepapers/auditing/sarbanes-oxley-information-technology-compliance-audit_1624

Monday, June 11, 2012

Lifelong Learning Matrix
Samuel Warren
IS469 – Information Security Capstone
Dan Morrill
City University of Seattle
June 7, 2012

Executive Summary

Whether one plans to become a Chief Information Officer, an Information Security professional, or an Information Auditor, there is a strong likelihood that a certification exists that will greatly enhance one's hiring potential. Whether one wants to earn ISACA's Certified Information Security Manager (CISM) certification or something else, understanding the benefits of attaining the certification, and the potential drawbacks of not attaining it, is vital.

Introduction

The need for ongoing education in any field is crucial for continued innovation. While it is necessary in all fields, the need for it in Information Security is more critical and is directly linked to how well the field of Information Security overall does at protecting its organizations. By creating a Learning Matrix, like the one described within, the security professional can create visibility into the required tasks in the short, medium, and long terms. According to the ISACA website, for example, the CISM-certified professional:

• Demonstrates your understanding of the relationship between an information security program and broader business goals and objectives
• Distinguishes you as having not only information security expertise, but also knowledge and experience in the development and management of an information security program
• Puts you in an elite peer network
• Is considered essential to ongoing education, career progression and value delivery to enterprises. (2012)

While this will not be the sum total of all learning one should achieve in a lifetime, this certification, along with the CISSP, is one of the certifications most in demand among hiring managers in the Information Security realm.

The Matrix

The created matrix (attached) describes five columns of goals and three rows of time frames. The major goals were chosen based on personal preference. They consist of Certified Information Systems Security Professional (CISSP) certification, Certified Information Security Manager (CISM) certification, getting a job as an Information Security professional, becoming a Chief Information Officer (CIO), and Administration of the Learning Matrix. The intersection of each column and row contains the required tasks associated with the major goal. For example, in the "Near-Term" row for CISSP certification, finding a job that works with a majority of the CISSP domains is a task. One of the requirements for CISSP certification is a minimum of five years of experience working in the domains provided on their site (International Information Systems Security Certification Consortium, 2012). That is just the starting requirement; there is also an examination that must be passed and fees to be paid to gain certification with this organization. Another example is found in the "Mid-Term" section of the "Administration" goal. In that cell, there are tasks to do a yearly look-back and add any additional goals to the matrix as necessary. The purpose of the "Administration of the Matrix" column is to create a way to adjust the matrix and goals as needed to accommodate changes in certifications and in the goals of the matrix owner.

Measuring Success

The aforementioned "Administration of the Matrix" column is used as a way to create some buffered time to allow for reflection on how successful the creator of the matrix has been in the major goals and tasks.
It is extremely important to take time to evaluate growth, successes, and failures in the goals and tasks so one can have a keen understanding of where he or she is in the learning process at that juncture. It is also crucial to keep the goals as stable as possible so they do not become impossible to reach. One should make the goals specific, measurable, and attainable so he or she can feel the accomplishment of completing major goals.

References

International Information Systems Security Certification Consortium. (2012). Certified Information Systems Security Professional. Retrieved from ISC2.org: https://www.isc2.org/cissp/default.aspx
ISACA. (2012). Certified Information Security Manager. Retrieved from ISACA.org: http://www.isaca.org/Certification/CISM-Certified-Information-Security-Manager/Pages/default.aspx

Virtualization is the Key
Samuel Warren
IS308 – Internet Technologies
Lawrence Masters
City University
June 10, 2012

Executive Summary

Virtualization is the key to solving several problems. The problems inherent in technology change are many, not the least of which is what to do with outdated systems. Such systems may still be needed but are no longer supported by their original manufacturer. Virtualization gives IT teams the capability to maintain legacy systems without much overhead cost. However, like everything, there are problems that could pose a serious threat to virtualization becoming a must-have in every organization.

Introduction

With innovation being the lifeblood of technology, it is no wonder new advancements are made and discovered every day. At a pace like that, one can rest assured there will be a number of potential failures and, conversely, a number of winners. Typically, the balance of winners and losers in technology is skewed toward the "loser" category. Virtualization is a newer concept requiring a little abstract thinking, because it deals with the idea that you do not need a dedicated physical device in order to be operationally effective. Some basic questions regarding virtualization are answered herein.

What is Virtualization?

Virtualization is the idea that, instead of dedicating physical hardware to each system (web servers, for example), one runs software versions of those systems as the driving force behind them. There are some advantages to this, the biggest of which is that one can run multiple different systems on one set of hardware. Each virtual machine can interact independently with other devices, applications, data and users as though it were a separate physical resource. Different virtual machines can run different operating systems and multiple applications while sharing the resources of a single physical computer. And, because each virtual machine is isolated from other virtualized machines, if one crashes, it doesn't affect the others. (McCabe, 2009) Virtualization may have started with web servers (McCabe, 2009), but it has expanded to other networked devices, server application tiers, and desktop software. Arguably, virtualization is one of the fastest growing areas in technology this decade.

Why Use Virtual Machines?

There are many reasons why one should consider virtualization. However, anyone interested in using virtualization must look into what benefits virtualized systems could provide. Virtual machines can be used to consolidate the workloads of several under-utilized servers to fewer machines, perhaps a single machine (server consolidation). Related benefits (perceived or real, but often cited by vendors) are savings on hardware, environmental costs, management, and administration of the server infrastructure. (Singh, 2004) Along with that reason, Singh highlights that the needs of outdated software can often be fulfilled generously, without any sort of system conflict, by running it on separate partitions of the same core system. He also highlights the potential for testing with virtual machines. "Virtual machines can isolate what they run, so they provide fault and error containment. You can inject faults proactively into software to study its subsequent behavior" (Singh, 2004). For these and many other reasons, virtualization is a boon to those in the IT realm, because it provides ease of use and quick deletion in the event of problems.

Why Not Use Virtual Machines?
However you slice it, there are always negatives to any IT concept. With virtualization, all of the positives may serve to illustrate the holes in the traditional one-machine-for-one-system architecture. With the need for so many different systems at once, it may be tempting for system administrators to let virtual machines become a catchall for their woes. Using virtual machines does have some major drawbacks. One of the most problematic issues is what Pietroforte (2008) calls "magnified physical failures." He uses the example of multiple servers running on one physical system (Pietroforte, 2008) and describes the hypothetical scenario in which the hardware supporting those servers fails. If that happens, then all the servers on that one physical device are potentially lost. However, the answer to that is to plan well. Thus, if your virtual infrastructure is well planned, physical failures may be less problematic. However, this means that you have to invest in redundant hardware, which more or less eliminates one of the alleged advantages of server virtualization. (Pietroforte, 2008) Add to the potential hardware failure the increased need for hardware, because the virtual systems use so much more of the host's resources, and you have a recipe for disaster. Technology fails frequently; that alone is reason enough to very carefully consider the choices made in relation to whether or not to virtualize. What IT managers must do is avoid giving in to the popularity of any given technology and make sure to do a full analysis of the pros, cons, and gaps associated with a technology set prior to choosing to implement it.

References

McCabe, L. (2009, May 7). What is virtualization, and why should you care? Retrieved from Small Business Computing.com: http://www.smallbusinesscomputing.com/testdrive/article.php/3819231/What-is-Virtualization-and-Why-Should-You-Care.htm
Pietroforte, M. (2008, July 3). Seven disadvantages of server virtualization. Retrieved from 4sysops: http://4sysops.com/archives/seven-disadvantages-of-server-virtualization/
Singh, A. (2004, January). An introduction to virtualization. Retrieved from Kernelthread.com: http://www.kernelthread.com/publications/virtualization/

Friday, June 8, 2012

Training as an Incentive

Training as an Incentive
Samuel Warren
IS469 - Information Security Capstone
Dan Morrill
City University
June 6, 2012

Executive Summary

In the busy environment of corporate America, there is a real need to understand and foster ongoing training. When employers approach employees about training, what attitude is displayed? If it is an attitude of fear, cynicism, or disgust, there may be an issue with how the employer treats training. By implementing some simple incentives and creating a safe environment, employers can radically change the atmosphere of their company.

Introduction

Without a doubt, the biggest need in any IT group is ongoing training. Depending on the field, the training could be extremely expensive or low to no cost. Either way, the need is indelibly linked to the success of the individual employee and the organization at large. If one wants to have a successful company, one must invest in the people working in the company. With that said, some key ways to ensure the investment makes an impact are outlined herein.

Incentives to Grow

The old adage, "You can lead a horse to water, but you cannot make him drink," is relevant with regard to employee growth. One of the major issues employers need to understand is that the paycheck alone is often not enough for employees to be satisfied with their jobs. While some may get a certain level of satisfaction from their salary, others have a driving desire to grow and expand beyond what they do every day. Employers need to recognize that and do something about it. Depending on the company and the size of the budget, this can be as simple as giving a token gift, like an employee of the month award or an award for excellence. The key is to know what counts as an incentive for the employees. For example, if it is coffee, then simply giving a gift card to the local coffee house may be just enough encouragement to create a desire to grow. Then simply set a goal of accomplishment anyone in the group can meet and challenge them to grow. This will enable them to utilize newly learned tools and techniques in the workplace.

Employee Initiative

While employers can create incentives, there are also those employees who simply will not attempt to grow. In those cases, the employer should focus their attention on coaching the employee to understand the need for growth. A stagnant employee leads to a deficiency of creative juices and a loss of daily positive productivity. Career management through proper training provides employees with vision, opportunities, increased individual creativity and a renewed sense of energy and purpose. (K Alliance) However, at some point, the employee needs to take some initiative. The employee should feel empowered to come to their manager, outline a plan of growth, and suggest an incentive without feeling unheard or unwanted. Employers need to understand they are there to serve the employees, not the other way around. That said, they need to be receptive to the more vocal employees and try to draw information out of the less vocal ones. Another way to further garner employee initiative is through mutual inclusion. This is the concept of taking the group of people who are constantly fighting change and including them first. If an employee has been a vocal opponent of previous projects, by including them early and actually using their feedback, the employer can create an overall attitude of innovation and growth.
By harnessing the opponents, employers are able not only to manage negative feedback or lack of growth, but also to entice those vocal opponents into growing and sharing that growth.

Share the Praise, Shoulder the Blame

Finally, all employers should have a basic level of understanding of team leadership dynamics. By that, a manager should be trained to understand that the team is only as good as its weakest link. Managers should then work with their weakest human link and focus their attention on growing that person. They also need to create a safe environment in which to fail and learn. By taking all the blame and freely sharing the praise, managers show the employees they are a part of the team and that they can fail without fear of someone pointing them out specifically and making them the target of public ridicule.

References

K Alliance. (n.d.). Training is the bridge to employee growth, work force maturity and sustained productivity. Retrieved from K Alliance: Press Play for Success: http://www.kalliance.com/articles/training-is-the-bridge-to-employee-growth.htm

Monday, May 21, 2012

ActiveX Exploit Samuel Warren IS 469- Information Security Capstone Dan Morrill City University May 17, 2012

ActiveX Exploit

Executive Summary

The ActiveX buffer overflow exploit is a critical hole in the Microsoft Office suite. The worst part of this exploit is that once the buffer overflow is triggered, the victim computer's command controls are accessible and the attacker can do just about anything. However, thanks to Microsoft's disciplined patching, a working fix is already in place, even though the vulnerability was only discovered in early April 2012.

Introduction

Criminal hackers are always looking for a weakness to exploit. To that end, they poke and prod different systems to see how they work. These hackers become subject matter experts in various coding languages and taxonomies; equal parts education and criminality combine to show what happens when knowledge is used for the wrong purposes. In April 2012, a new exploit was published on exploit-db.com that uses the ActiveX framework to gain control of the victim computer via Microsoft Office 2003-2010, with the exception of 64-bit editions (TechCenter, 2012). The following analysis will show the potential, risks, and rewards of exploiting this loophole in ActiveX.

What does the code do?

The code primarily causes a buffer overflow in Microsoft Office 2003-2010. It attaches to the Windows common controls library, MSCOMCTL.OCX, specifically the MSCOMCTL.ListView control (Exploits-db, 2012), and can then be used to execute command-level invocations on the victim system.

Potential for a weapon

The potential to use this exploit as a weapon is high according to Microsoft's Security TechCenter, which rates it as "Critical" (TechCenter, 2012) in all affected Office products due to its potentially limitless uses. For example, if a user unintentionally activates this code, it could introduce a virus that collects personal information, exports it to a file, and sends that file off to the attacker. Another major point is that the potential victim pool is so big and diverse that at least a few people will probably be affected, if not thousands.

All told, roughly half a billion people use Office. Yet for all the ways consumers use it at home, there are many more time-saving solutions to be found in the world's most ubiquitous desktop software. (Schultz, 2009)

Because the code cannot self-initialize, the exploit being in the ActiveX framework, the user must be tricked or socially coerced into clicking the link and accepting the ActiveX prompts. Once the user does that, however, the potential is effectively limitless.

Risk and Rewards

The major risk of weaponizing this exploit is Microsoft's patch frequency. Microsoft releases a new patch to fix bugs, loopholes, and exploits as soon as it can identify a fix. That being the case, the exploit has a potentially short lifespan, depending on how quickly end users install the patches (a minimal patch-verification sketch follows the references below). However, should the victim be coerced into starting this overflow process, the attacker would have access to do just about anything to the victim's machine, including self-propagation of the code to the other 499,999,999 users.

References

Exploits-db. (2012, April). Exploit 18780. Retrieved from exploits-db: http://www.exploit-db.com/exploits/18780/

Schultz, M. (2009, January 8). Microsoft Office is Right at Home. Retrieved from Microsoft: http://www.microsoft.com/en-us/news/features/2009/jan09/01-08cesofficeqaschultz.aspx

TechCenter, M. S. (2012, April 10). Microsoft Security Bulletin MS12-027 - Critical. Retrieved from Security TechCenter: http://technet.microsoft.com/en-us/security/bulletin/ms12-027
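To make the patch-verification point concrete, below is a minimal sketch, assuming a Windows host and Python's standard winreg module, of how an administrator might confirm that the documented "kill bit" compatibility flag is set for an ActiveX control. The CLSID shown is a placeholder rather than the identifier of the affected MSCOMCTL control; it would need to be replaced with the CLSID listed in the vendor advisory.

```python
# Minimal sketch: check whether the ActiveX "kill bit" is set for a control.
# Assumes a Windows host; the CLSID below is a PLACEHOLDER, not the real
# identifier of the affected MSCOMCTL control -- take it from the advisory.
import winreg

KILL_BIT = 0x00000400  # documented "kill bit" compatibility flag
CLSID = "{00000000-0000-0000-0000-000000000000}"  # placeholder CLSID

def kill_bit_set(clsid: str) -> bool:
    """Return True if the kill bit is set for the given control CLSID."""
    key_path = (r"SOFTWARE\Microsoft\Internet Explorer"
                r"\ActiveX Compatibility\%s" % clsid)
    try:
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, key_path) as key:
            flags, _ = winreg.QueryValueEx(key, "Compatibility Flags")
            return bool(flags & KILL_BIT)
    except OSError:
        # No compatibility entry means no kill bit has been applied.
        return False

if __name__ == "__main__":
    status = "set" if kill_bit_set(CLSID) else "NOT set"
    print("Kill bit %s for %s" % (status, CLSID))
```

A script like this could be pushed to a sample of workstations to estimate how quickly end users are actually receiving the fix.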
Scaling Internationally: Understanding International Implications Samuel Warren IS 469- Information Security Capstone Dan Morrill City University May 17, 2012

Scaling Internationally: Understanding International Implications

Executive Summary

International business is fraught with complexity. Because no two people are the same, no two countries are the same, despite the commonality shared by countrymen. This has serious implications when considering changing a security policy to incorporate international addendums. To make one all-inclusive policy, one must scrutinize the differences in thinking and communication between the host country and any countries that will use the new policy.

Introduction

Preparing a security policy that works well within the scope of a national company can present many challenges. Expanding it to a larger platform, namely the international stage, makes it far more complex and potentially risky. When designing and deploying a security policy that can be scaled internationally, there are many things to take into account. Some of the things that make cultures unique and beautiful can also cause the most difficulty and confusion in the practical business application of the security policy. A whole host of different scenarios need to be accounted for in a strong security policy, such as Internet coding standards, database protection standards, network infrastructure, physical security, protection of company assets, and data loss and recoverability, among others. Going from a strong intra-national policy and scaling it outward internationally means the dimensions provided for in the policy become far more complex. There are two major areas of complexity that are not typically covered in full because of their sheer size and implications: communication and thinking processes change from country to country. A simple example is the United States and the United Kingdom; while both speak English, they think and speak quite differently.

Communication Differences

One of the first things human children learn is how to communicate. All children across the world learn to communicate in a manner that translates across all barriers, cultures, and countries: crying. There are different intonations for different needs; however, even those are cross-cultural. When a baby cries because it is hungry, that same cry is understood whether the baby's parent or guardian is Chinese-speaking, German-speaking, or English-speaking. However, as the baby grows and develops, it learns to speak in the native language of its forebears, including all the cultural and family idiosyncrasies. Each of those plays a key part in how the individual communicates as an adult and feeds directly into the culture as a whole. Communication is one of the most important things to consider while taking a security policy to the international stage. There are many different laws from country to country that need to be deciphered; however, they are relatively easy compared to the challenge of communicating cross-culturally. A prime example can be seen in the concept of negotiations.

Americans see the goal of negotiations as to produce a binding contract which creates specific rights and obligations. Japanese see the goal of negotiations as to create a relationship between the two parties; the written contract is simply an expression of that relationship.
(Salacuse, 1991)

This is a crucial difference: Americans view the end of negotiations as a milestone and typically hand off control to another team, while the Japanese want to be directly involved from beginning to end and work through the relationship rather than being handed off to another group that will "handle the next phases." The aforementioned example encapsulates the fundamental differences that need to be addressed before expanding the policy to include international rules, policies, and standards. Before work can begin on an international policy, a full discovery deep-dive must be performed to determine which countries will have an impact on the policy and what requirement gaps exist. Then at least one security professional in each country, who knows that country's requirements and language, can customize the core policy into language that works for the outlined countries and work through conceptual differences in language and culture.

Thinking Differences

Another major difference that needs to be accounted for in security policy expansion is the difference in thinking from country to country and culture to culture. This difference deserves attention from all angles. The problem is that, when considering international policy, there is most often not enough emphasis placed on understanding cultural thought patterns and how critical thinking is approached. From the security policy standpoint, when a problem is broached, how does the security staff respond? The way a person in the United States thinks through a problem is completely different from the way a person in Germany or Japan does.

Through a set of experiments, Peng and Nisbett (1999) demonstrated that: 1) Chinese students preferred proverbs which contained apparent contradictions more than did their European-American counterparts; 2) Chinese students were less likely to take side in real-life social conflicts but more likely to choose a compromising resolution strategy than the European-American students; 3) Chinese students preferred arguments which based on the principle of holism while American students preferred arguments that relied on the law of noncontradiction; and 4) American students showed more polarized opinion after reading two seemingly contradictory accounts of the same issue whereas Chinese students would seek for an account which could accommodate both sides of the issue. (Miu-Chi Lun, 2010)

The interesting thing about this finding, as discussed by Miu-Chi Lun, is the major difference in problem solving and critical thinking. The fact that the Western students chose polarization over neutral compromise shows that, at a root level, the cultures, and with them the cultural thinking, are different. This affects security policy expansion in the long-term execution of the policy: the way the engineers or specialists choose to solve problems related to the security of their systems may be fundamentally different. In security, there is a fine line between compromise and taking a polarized stand, which means the policy will need to define much more clearly what is and is not flexible by country. Another major concern is how criminal hacking differs because of this change in thinking. For example, an English-speaking hacker writing code to create and execute a virus will take a different approach and use different syntax than one who speaks, say, Korean. The one redeeming factor is that the underlying code always uses the same taxonomy, but the way it is implemented and propagated is completely different.

References

Miu-Chi Lun, V. (2010). Examining the Influence of Culture on Critical Thinking in Higher Education. Retrieved from Victoria University of Wellington Research Archive: http://researcharchive.vuw.ac.nz/bitstream/handle/10063/1211/thesis.pdf?sequence=1

Salacuse, J. (1991). Making Deals in Strange Places: A Beginner's Guide to International Business Negotiations. Retrieved from University of Colorado, Conflict Research Consortium: http://www.colorado.edu/conflict/peace/example/sala7533.htm

Tuesday, May 15, 2012

Disaster Recovery Plan Samuel Warren IS308-Internet Technologies Lawrence Masters City University May 2, 2012

Disaster Recovery Plan

Executive Summary

Disasters come in all shapes and sizes. While an organization could go bankrupt trying to plan for everything, Sambergraphix must focus on what makes logical sense for disaster recovery and business continuity management. The goal of the group should be to retrofit the web presence, as outlined herein, and create a regular cycle of data backups with multiple redundant locations, so that information is readily accessible in the event of a natural disaster or other major issue. Creating a strong Storage Area Network (SAN) with direct connections to cloud-based backup providers would ensure the best flexibility of data continuity and ease of architecture. In the event of a web server or proxy server failure, additional servers handle the extra load until the network technicians can fix or replace the compromised hardware. Along with the proposed technical solution, creating a task force to evaluate Sambergraphix's response to disasters and other business continuity tasks will help ensure the company remains viable in the event of a major disaster or cyber attack.

Introduction

In late 2000, the ability of Sambergraphix to serve content to the web was tested by an earthquake that practically decimated the onsite data center. Our web servers took a major physical jarring. Combine that with Sambergraphix's exclusive content being requested by news outlets across the United States, and the result was a serious spike in web traffic and Application Programming Interface (API) calls that our physically shaken servers could not handle. Although Sambergraphix is still considered a small business, the quality of the content and its relevance to the earthquake response made the server and its proxy crash every time they were brought back online. As a short-term solution, the IT group purchased a replacement server and a replacement proxy server to restore the web presence. Fast forward 12 years to the present: while Sambergraphix is the same size and sees the same typical web traffic, the need for a disaster recovery plan and business continuity for our web presence is undeniable. This proposal makes a strong recommendation to the IT staff, management, and CEO of Sambergraphix for a solution to said recovery and continuity plans.

Current Environment

The server diagram illustrates the current environment. As noted, the replacement servers (PROX1 and WEB1), as well as the POP3 email server, sit behind the firewall, with both PROX1 and WEB1 connecting directly to the database tier for storage of web user information and content. Users and information come in through the firewall and are routed via Access Control Lists (ACLs) to the appropriate server for processing after passing through a proxy. However, due to the need for periodic maintenance on WEB1 and the POP3 server, it is necessary to add more hardware to help balance the load so the web presence is uninterrupted. At present, should anything from "Acts of God" to cyber-crime hit the servers, the website will go down until the issue is resolved. The following proposal outlines some ways to protect information, as well as prepare for any possible disaster.
Solution Proposal

One detail to note from the beginning of this proposal: adding hardware and additional storage backup cycles will cost a fair amount of money. However, in the event of an emergency, they can reduce downtime considerably. As demonstrated in the proposed solution diagram, adding additional web servers and proxy servers will enable a fair load balance in the event of large traffic spikes. Additionally, by adding a Storage Area Network (SAN) that is housed in a different location and connected by a secure VPN tunnel, Sambergraphix's web presence will be able to handle anything from an earthquake to a breach in security with minimal data loss. Also, by adding a cloud-based backup service spread across multiple locations, the data remains stored in several places even if the SAN becomes corrupted or destroyed. It is also recommended that all email services be hosted externally so that, in the event of another local emergency, employees still have the capability to communicate via corporate email.

Implementation Recommendation

During the implementation phase of this solution, it is recommended that all hardware be purchased ahead of time and set up behind the firewall as additional servers. The network technicians can then connect all the pieces together, ensure they are configured correctly, and begin transferring data off the database tier on WEB1 into SAN1 (see above diagram). As soon as all the information has transferred into the new SAN and the backups have run at least two cycles (a minimal backup-verification sketch follows the references below), the final decision will be made to change the flow of data to the new proxy servers and web servers. Finally, along with the network changes, there is a clear need to create a Disaster Recovery and Business Continuity task force. The role of the task force should be to prepare scenarios for testing the quality of the current architecture and the business continuity plans. The goal should be to test the equipment, as well as the people, in scenarios ranging from low to extreme and to perform a comprehensive gap analysis that determines how to improve business continuity management. According to Chris Ollington's "Secure Online Business Handbook," there is a fine line between what makes sense for Sambergraphix and what does not.

A trade-off needs to be achieved between creating an effective fit-for-purpose capability and relying on untrained and untried individuals and hoping they will cope in an emergency. The spanning of the gap between the plan and those who carry it out can be achieved by either formal tuition and/or simulations. The well-known maxim that a team is only as strong as its weakest link is worth remembering here. (2004)

Having said that, the task force should not seek to mitigate tornado response, because the likelihood of that occurrence is minimal at best in this region; earthquakes, however, should definitely be considered. The task force should be a representative group from all major departments, and each member should be responsible for keeping their area of the business informed on business continuity management topics and information. Also, the director of the business continuity task force should provide direct guidance, training, and reporting to the president's team and the director of Human Resources.

References

Ollington, C. (Ed.). (2004). The secure online business handbook: E-commerce, IT functionality & business continuity (2nd ed.). [Books24x7 version] Available from http://common.books24x7.com.proxy.cityu.edu/toc.aspx?bookid=9923.
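To illustrate the "at least two backup cycles" checkpoint in the implementation recommendation, here is a minimal Python sketch. The backup locations, the daily cycle length, and the "*.done" marker-file convention are illustrative assumptions, not part of Sambergraphix's actual environment.

```python
# Minimal sketch: verify that each backup location has completed at least two
# recent backup cycles before cutting traffic over to the new SAN.
# Paths and the "*.done" marker convention are assumptions for illustration.
import glob
import os
import time

BACKUP_LOCATIONS = ["/mnt/san1/backups", "/mnt/cloud_a/backups"]  # hypothetical
CYCLE_SECONDS = 24 * 60 * 60        # assume one backup cycle per day
REQUIRED_CYCLES = 2                 # two completed cycles before cutover

def recent_cycles(path: str) -> int:
    """Count completed backup markers newer than the verification window."""
    cutoff = time.time() - REQUIRED_CYCLES * CYCLE_SECONDS
    markers = glob.glob(os.path.join(path, "*.done"))
    return sum(1 for m in markers if os.path.getmtime(m) >= cutoff)

def ready_for_cutover() -> bool:
    return all(recent_cycles(loc) >= REQUIRED_CYCLES for loc in BACKUP_LOCATIONS)

if __name__ == "__main__":
    print("Cutover approved" if ready_for_cutover() else "Hold: backups incomplete")
```

A check of this kind gives the task force an objective gate rather than relying on a technician's memory of when the last backup ran.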
Securing APIs Samuel Warren IS469-Information Security Capstone Dan Morrill City University April 27, 2012

Securing APIs

Executive Summary

There is an increasing demand on the enterprise API used by our organization. With this in mind, our security needs to be updated to ensure no sensitive data is released to the public. One change is recommended to more securely protect our API.

Introduction

Application Programming Interfaces (APIs) have been around for quite some time. When web applications were created, by necessity, so were APIs. While API usage has changed recently, the core idea is the same: the API is the access point developers use to work with the application. With the current push to involve a wider group of people in making projects better, API security has come under heavier fire. Prior to the demand for application sharing, API security was not a high priority, simply because most application support was done inside the same relative zone on the network, which was typically a secured private network. With the increase in demand, however, every aspect of each API is being put under more scrutiny than its predecessors. With that in mind, the following is a recommended change to the current enterprise API.

Recommended Changes to API

Because of the demand on all API features and the amount of legitimate traffic coming to the application tier, the proposed solution should meet both current and future API traffic. The recommendation is to create a custom checksum: a hash of the session data combined with a private key. When the client sends the request to the server, the server takes the payload, compiles its own version of the same checksum, and verifies that it matches (a minimal sketch follows the references below).

You realize that in real-life, this is basically like someone coming up to you and saying: "Jimmy told me to tell you to give the money to Johnny Two-toes," but you have no idea who this guy is, so you hold out your hand and test him to see if he knows the secret handshake. (Kalla, 2011)

If our systems can compute a secure, hashed match to the provided checksum, the internal application tier can grant trust to the requestor. This is called "HMAC" (Kalla, 2011) authentication and does not require sending a password or username over unencrypted channels. For more information about securing APIs, please make use of the References section.

References

Kalla, R. (2011, April 26). Designing a Secure REST (Web) API without OAuth [White Paper]. Retrieved April 27, 2012, from The Buzz Media; Video Games, Movies, and Technology website: http://www.thebuzzmedia.com/designing-a-secure-rest-api-without-oauth-authentication/
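To make the recommended change concrete, the following is a minimal sketch of HMAC-style request signing and verification using Python's standard hmac and hashlib modules. The canonical-string layout, field names, and shared secret shown here are illustrative assumptions rather than the organization's actual API contract; in practice the secret would be provisioned out of band and never transmitted with the request.

```python
# Minimal sketch of HMAC request signing/verification (illustrative only).
# The canonical string format and field names are assumptions, not the
# organization's actual API contract.
import hashlib
import hmac

SHARED_SECRET = b"example-private-key"  # hypothetical key, provisioned out of band

def sign(method: str, path: str, body: str, session_id: str) -> str:
    """Client side: hash the request details and session data with the key."""
    canonical = "\n".join([method, path, body, session_id]).encode("utf-8")
    return hmac.new(SHARED_SECRET, canonical, hashlib.sha256).hexdigest()

def verify(method: str, path: str, body: str, session_id: str,
           provided_signature: str) -> bool:
    """Server side: recompute the checksum and compare in constant time."""
    expected = sign(method, path, body, session_id)
    return hmac.compare_digest(expected, provided_signature)

if __name__ == "__main__":
    sig = sign("POST", "/api/v1/orders", '{"item": 42}', "session-123")
    print("request accepted:", verify("POST", "/api/v1/orders",
                                      '{"item": 42}', "session-123", sig))
```

Because only the signature travels with the request, an eavesdropper on an unencrypted channel never sees the private key itself, which is the property the recommendation relies on.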

Stormy Clouds are Rolling In: A Look at Cloud Technology

Stormy Clouds are Rolling In: A Look at Cloud Technology Samuel Warren IS469-Information Security Capstone Dan Morrill City University May 3, 2012 

Stormy Clouds are Rolling In: A Look at Cloud Technology


Executive Summary

Without a doubt, Cloud technology is here to stay. The Cloud as a concept is the idea of separating out specific services and utilizing a third party that specializes in that area to host an organization's needs. One of the major issues with the Cloud is the lack of commonality among the various service providers. It is acceptable to have differences between Cloud storage and Cloud web servers; however, differences have been shown to exist even between Cloud storage providers. This poses a major security concern because security engineers and analysts are not able to identify and mitigate attack vectors. There is also a real need to bring discipline to the people and processes around Cloud usage in each individual organization.

Introduction

At technology conferences every year, thousands file into a venue to learn about the newest gadgets and "killer apps" being displayed and presented. Among them in the last three years has been a concept, rather than a specific piece of software or technology. Dubbed "the Cloud," it is specifically the idea of taking a system that is directly integrated and exporting it to a company that can provide a stronger support and maintenance plan for it. The classic example is removing the server tier and having it hosted at a server provider, such as Rackspace, which can create, monitor, and maintain the server. There are a couple of huge advantages to this idea, the main one being that a customer of services such as Rackspace does not have to find, hire, and train a server administrator. The benefits of hosting some IT services in the Cloud are clear. However, there are security risks as well.

Issue & Challenges

The biggest security issue with the Cloud is that there is not enough of a standardized approach across all Cloud providers. Because of that, security experts are not able to fully understand how to protect data that goes into the Cloud.

With a fluid concept such as cloud computing, that simply does not work. There is not one single model, nor is there one single architecture that you can point to and say definitively, "We are doing that, ergo we are doing cloud computing." (MacVittie, 2008)

One of the major goals of information security is to locate and mitigate vectors of attack. However, attempting to do so in the Cloud can be equated to trying to pin Jell-O to the wall: while it is physically possible, the effectiveness is so minuscule that there is little point in attempting it. The biggest challenge is the inability to clearly define attack vectors because of ambiguity about how the service works. That ambiguity further reinforces the notion that the Cloud should not be used because it is not well known. However, fear of this type can be dispelled with knowledge and internal changes that bring more discipline. What MacVittie highlights is a fundamental problem that is solvable, but it requires discipline in the technology, in the people implementing and using the technology, and in the policies related to the Cloud.

Three-Fold Change

When discussing how to bring change, discipline, and structure to an unstructured landscape, there are many things to consider. With that in mind, the best possible way to bring adoption and long-term change is not to simply inject change into one area, like technology, but to bring change to the people and the policies relating to that technology as well. With the Cloud's lack of standardization (MacVittie, 2008), there is a real need to be structured and disciplined about how the enterprise deals with Cloud solutions. For example, if the Cloud application is storage, the business should be asking the provider what its specific security policy is, how often the information is available, and when the storage devices come down for maintenance. Then the business's IT group needs to spend time doing a gap analysis to determine how to structure the data being passed to the Cloud. Think of it like training in the military. During boot camp, a person is stripped down to their basic functions and taught from the ground up how to survive, fight, and work together as a unit with others. That way, when the soldiers go to a theater of war and the chaos starts, they are able to survive and function effectively as a unit. It is very easy to simply inject a tool into a group; in fact, organizations around the world do so regularly. The problem becomes apparent when the tool goes under-utilized and the people who asked for it are the only ones using it. Instead, if a group that owns, say, web publishing brings in a new tool, that group should not just add the new tool and turn off the old ones. They should look for ways to make it the most effective tool for web publishing across the organization. They should bring governance and process change to groups that have outdated processes. They should, within their scope of bi-lateral power, move to have the organization adopt this as the only tool for web publishing at the enterprise level, then constantly find ways to support the teams using the tool. When a tool, or a process, or a person is the only thing changing, rather than all three, there is a real potential for failure. But if there is a three-fold change, there are more benefits. A 2009 article discusses those benefits:

Customers benefit from advanced encryption that only they are able to decode, ensuring that Mimecast acts only as the custodian, rather than the controller of the data, offering companies concerned about privacy another layer of protection. (Binning)

While Binning is specifically describing the relationship between Mimecast and its customers for Cloud-based email management services, it is a pertinent example of the dual benefit achieved when a Cloud company provides a strong service and the people utilizing the service are able to change their policies and personal biases (a minimal client-side encryption sketch follows the references below). There is a real bias among IT management toward keeping all information "in-house" because they can physically see and "touch" the data, but if they are disciplined about how they approach Cloud services, giving them neither undue credence nor undue cynicism, they will not defeat the benefits gained.

References

Binning, D. (2009, April 24). News. In Top Five Cloud Computing Security Issues. Retrieved May 3, 2012, from ComputerWeekly.com website: http://www.computerweekly.com/news/2240089111/Top-five-cloud-computing-security-issues

MacVittie, L. (2008, December). Web Software. In Defining Cloud Computing. Retrieved May 3, 2012, from Computerweekly.com website: http://www.computerweekly.com/opinion/Defining-cloud-computing
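To illustrate the "custodian, rather than the controller" idea in the Binning quotation, here is a minimal sketch of client-side encryption performed before data ever leaves the organization, using Python and the third-party cryptography package. The key-file name and round-trip example are illustrative assumptions; real key management would follow the organization's own policy.

```python
# Minimal sketch: encrypt data locally before handing it to a Cloud provider,
# so the provider stores ciphertext only ("custodian, not controller").
# Requires the third-party "cryptography" package; file names are illustrative.
from cryptography.fernet import Fernet

def load_or_create_key(path: str = "local.key") -> bytes:
    """Keep the key on premises; the Cloud provider never sees it."""
    try:
        with open(path, "rb") as f:
            return f.read()
    except FileNotFoundError:
        key = Fernet.generate_key()
        with open(path, "wb") as f:
            f.write(key)
        return key

def encrypt_for_upload(plaintext: bytes, key: bytes) -> bytes:
    return Fernet(key).encrypt(plaintext)

def decrypt_after_download(ciphertext: bytes, key: bytes) -> bytes:
    return Fernet(key).decrypt(ciphertext)

if __name__ == "__main__":
    key = load_or_create_key()
    blob = encrypt_for_upload(b"quarterly financials", key)
    # "blob" is what would actually be sent to the Cloud storage provider.
    assert decrypt_after_download(blob, key) == b"quarterly financials"
    print("round trip ok; provider only ever holds ciphertext")
```

The design point is that the discipline lives with the customer: even if the provider's controls are opaque, the data handed over is unreadable without the locally held key.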

Wednesday, May 2, 2012

Scanning a Content Management System

Scanning a Content Management System
Samuel Warren
IS468 – Tools and Techniques
Matthew Pennington
March 10, 2012


Scanning a Content Management System

Executive Summary

The need to scan systems is undeniable. As the Internet blossomed and grew, the vulnerabilities associated with Internet technologies increased exponentially. While there are a number of applications that need to be scanned, the Content Management System (CMS) is a newer type of application that has grown by leaps and bounds, and this explosion has brought with it a host of problems. Protecting the CMS is a task that can be done internally, using tools like Nessus and SQLMap, or externally, by using a service such as WhiteHat Security. Ensuring that the developers of the website comply with the standards and processes set up by the organization, as well as with Sarbanes-Oxley and PCI (if applicable), should be at the forefront of all compliance-related efforts.

Introduction

In every great Dark Ages legend, there is a story about a knight. Usually that knight wears a suit of armor, typically made of steel and chainmail. There were positives and negatives to wearing a suit of armor. The positives included protection from arrows, swords, and other lightweight weapons of war; when the soldier fought, he had a higher likelihood of coming home in one piece. The biggest negative was the lack of mobility: in open combat, the weight of all the armor slowed the knight to the point that he became easily overtaken. However, the sheer number of attackers needed to defeat a knight, combined with his training and his ability to stay agile while wearing armor, made the knight one of the most feared tools in any commander's war chest. In the war on cyber-crime, the organization has a new knight. Corporate security, both physical and virtual, creates the new armor to protect the corporate entity. Unlike past suits of armor, the protection provided is not only about blocking attacks, but also about managing potential weaknesses in the armor.
The major issue with the suit of armor is clearly explained in the 1991 film "Robin Hood: Prince of Thieves." When pressed, Kevin Costner's character, Robin of Locksley, says, "They've got armor, Bull? Even this boy can be taught to find the jinx in every suit of armor" (Reynolds). Armor for the organization has "jinxes" in it. The jinxes come in all shapes and sizes, from Cross-Site Scripting attacks to Man-in-the-Middle attacks to pure and simple espionage. While there is no fool-proof way to completely secure an organization's information and remain business-effective, there is a need to find a way to mitigate the holes.

The Need to Scan and Protect Content Management Systems (CMS)

When the Internet was first developed, its capability was very limited; simple text was the name of the game. Even as capabilities changed and bandwidth increased, developing what is now called a "web page" was complicated. A special language was specifically designed in order to code a web page, and adding a page was especially time-consuming and complex to get right. As that changed, the Internet began to grow and transform at a tremendously rapid rate.
The Internet continues to evolve, enabling people across the globe to communicate and send huge files in real time. The lack of a central authority controlling it helps the Internet to flourish rapidly, aided in great part by technological advancements. (Fuller, 2011)
Unfortunately, the same thing that helped the Internet grow also became the primary enabling factor for would-be criminal hackers. Hacking is nothing new; the first hackers were those who wanted to improve their device or system to make it work more efficiently or give them more control. Nevertheless, criminal hacking has proliferated in the late 20th and early 21st centuries due primarily to the lack of centralized control and authority. As the technologies expanded, so too did the methods of the criminal hackers. All the new systems that were created to make publishing to the Internet easier also created additional loopholes. A great example is the Cross-Site Request Forgery (CSRF) attack: it uses the victim's browser to exploit the trust that a recently visited website places in that browser's authenticated session (see the sketch below).
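As a brief illustration of that trust relationship, here is a minimal sketch of the common synchronizer-token defense against CSRF, written with Python and Flask. The framework choice, route names, and secret value are assumptions for illustration only, not a prescription for any particular CMS.

```python
# Minimal sketch of a synchronizer-token CSRF defense (illustrative only).
# Flask and the route names here are assumptions for the sake of example.
import secrets
from flask import Flask, abort, request, session

app = Flask(__name__)
app.secret_key = "replace-with-a-real-secret"  # hypothetical configuration

@app.route("/form")
def show_form():
    # Issue a per-session token that a forged cross-site request cannot know.
    token = secrets.token_hex(16)
    session["csrf_token"] = token
    return (
        '<form method="post" action="/transfer">'
        f'<input type="hidden" name="csrf_token" value="{token}">'
        '<button>Send</button></form>'
    )

@app.route("/transfer", methods=["POST"])
def transfer():
    # Reject requests that ride on the session cookie but lack the token.
    if request.form.get("csrf_token") != session.get("csrf_token"):
        abort(403)
    return "transfer accepted"

if __name__ == "__main__":
    app.run()
```

A forged request from another site carries the victim's cookie automatically but cannot supply the hidden token, which is exactly the gap the attack depends on.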
One of the more recent technologies is the Web Content Management System (CMS or WCMS). Starting to grow in the mid-1990s (Laminack, 2008), these systems evolved very quickly to adapt to the challenges and much-needed flexibility of the Internet at the time.
This allowed people to upload photos, write stories, and made web pages much more interesting. In those days, everyone wrote their own. This was the dawn of the custom CMS. Then some people started commercializing their CMSs and building businesses that sold and supported CMSs. (Laminack, 2008)
Today, there are literally thousands of CMSs on the market, ranging from proprietary systems developed by large organizations, such as Microsoft, to open-source, community-driven CMSs like Drupal or Joomla. Each of these CMSs has benefits and drawbacks. None of them is completely secure, and their vulnerabilities are as different as their coding languages. However, the need to secure them is very real. As CMS platforms become more robust, allowing for form creation, forums, or live chat, the vulnerabilities of this middle-layer application, and ultimately of the databases it feeds into, need to be scanned and protected with increasing earnestness. As mentioned previously, there is no 100% foolproof way to protect any system while remaining vigorous and flexible, although one can limit the possible holes (vectors). By protecting completely what can be protected, the security professionals are then able to focus only on those areas that are known vulnerabilities. Therefore, a two-front counter-assault should be launched to find and eliminate as much vulnerability as possible. One front should be tool-oriented; the second should be focused on changing the processes and standards being used to publish and codify CMS platforms beyond their base (vanilla) install.

Front One: Recommended tools

There are several recommended tools that can be used to protect a system, as well as trusted partner companies that will scan the system for the organization and report back the results.
Nessus Scanner by Tenable
Nessus is a network and application-scanning tool that enables the System Administrator to scan their network for vulnerabilities in three separate classifications, as well as see the number of open network ports.
Nessus 5.0 features high-speed discovery, configuration auditing, asset profiling, sensitive data discovery, patch management integration, and vulnerability analysis of your security posture with features that enhance usability, effectiveness, efficiency, and communication with all parts of your organization. (Tenable Network Security)
This tool is full-featured and easy to use. After a network scan, it will give a comprehensive report on what the vulnerabilities are, how they are classified (high, medium, or low), and how to mitigate them, where mitigation guidance is available. Another very handy feature is that Nessus will provide a list of open ports. Unneeded open ports can be as dangerous as having no security whatsoever: they give direct access into the network and provide attackers with a staging point for further application attacks. Using Nessus in a CMS environment is simple: just scan the host IP of the CMS platform and it will report all the vulnerabilities of the applications associated with that host IP (a hedged sketch of triaging exported scan results follows below).
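Once a scan completes, the findings still have to be triaged. The following is a minimal Python sketch that summarizes an exported scan report by severity; the XML element and attribute names are assumptions about the export format for illustration and would need to be matched to the scanner's actual report schema.

```python
# Minimal sketch: summarize an exported vulnerability scan report by severity.
# The element/attribute names (ReportHost, ReportItem, severity) are assumptions
# about the export format; adjust them to the actual schema of the scanner.
import xml.etree.ElementTree as ET
from collections import Counter

SEVERITY_LABELS = {"0": "info", "1": "low", "2": "medium", "3": "high", "4": "critical"}

def summarize(report_path: str) -> Counter:
    counts = Counter()
    tree = ET.parse(report_path)
    for host in tree.iter("ReportHost"):
        for item in host.iter("ReportItem"):
            label = SEVERITY_LABELS.get(item.get("severity", "0"), "unknown")
            counts[label] += 1
    return counts

if __name__ == "__main__":
    for severity, count in summarize("cms_scan.nessus").items():  # hypothetical file
        print(f"{severity}: {count}")
```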
SQLMap
SQLMap is an open-source tool used to find SQL injection points. It enables the system administrator to run a few simple commands and completely exploit a SQL injection point. According to SQLMap's site,
It comes with a powerful detection engine, many niche features for the ultimate penetration tester and a broad range of switches lasting from database fingerprinting, over data fetching from the database, to accessing the underlying file system and executing commands on the operating system via out-of-band connections. (SourceForge)
SQLMap provides a platform for ad hoc scanning and, depending on the vulnerability, will give varied levels of injection into the database and varied levels of access to information. This tool is especially helpful if the database(s) behind the CMS platform use SQL (a hedged sketch of the kind of probe such a tool automates follows below).
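For context on what a tool like SQLMap automates at scale, here is a minimal, hedged sketch of a single boolean-based injection probe using Python's requests library. The target URL and parameter name are placeholders, and such a probe should only ever be pointed at systems one is authorized to test.

```python
# Minimal sketch of a single boolean-based SQL injection probe -- the kind of
# check tools like SQLMap automate at scale. The URL and parameter are
# placeholders; only run this against systems you are authorized to test.
import requests

TARGET = "http://cms.example.internal/article"  # hypothetical endpoint
PARAM = "id"

def looks_injectable(base_value: str) -> bool:
    """Compare responses to an always-true and an always-false condition."""
    true_resp = requests.get(TARGET, params={PARAM: base_value + "' AND '1'='1"})
    false_resp = requests.get(TARGET, params={PARAM: base_value + "' AND '1'='2"})
    # A differing response to logically different payloads hints at injection.
    return true_resp.text != false_resp.text

if __name__ == "__main__":
    print("possible injection point" if looks_injectable("42") else "no obvious signal")
```

The value of the dedicated tool is that it runs hundreds of such payload variations, fingerprints the database, and escalates access automatically, which is exactly why unscanned CMS parameters are so attractive to attackers.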
Scanning as a Service
            Another option is to use a 3rd party to scan the CMS platform for vulnerabilities. One such service provider is WhiteHat Security.
Founded in 2001 by Jeremiah Grossman–a former Yahoo! information security officer–WhiteHat combines a revolutionary, cloud-based technology platform with a team of leading security experts to help customers in the toughest, most regulated industries, including e-commerce, financial services, information technology, healthcare and more. (WhiteHat Security, 2012)  
WhiteHat sets up a schedule for when to run the scan, then gathers the results and provides a comprehensive report of vulnerabilities that need to be corrected within a specific timeline. The best part of this service is that it also provides scanning to maintain PCI compliance, so if the organization has the finances to spend on WhiteHat's services, it has a powerful ally to help fix the "jinxes" in its armor.

Front Two:  Processes and standards

Scanning the CMS for vulnerabilities is just the beginning of correcting the problem. If the processes and standards used in the organization cause the vulnerabilities to relapse, then the tools are wasted resources. From managers down to the web developers, there is a real need to follow standards that remove as many of the vulnerabilities as possible. By following such standards and taking the time to practice smart coding, an organization can greatly limit the vulnerabilities it produces. With respect to the aforementioned Content Management System, this involves creating new templates, modules, or plug-ins that follow the standards set forth. A good place to start is the World Wide Web Consortium's (W3C) standards for web development.
Most W3C work revolves around the standardization of Web technologies. To accomplish this work, W3C follows processes that promote the development of high-quality standards based on the consensus of the community. W3C processes promote fairness, responsiveness, and progress, all facets of the W3C mission. (World Wide Web Consortium)
By building standards that make sense for the organization, the web developers gain the leverage to push back on other groups who ask for Internet-related features that do not meet the standards. There should also be an appeal process in place to allow current standards to be revised as technology changes.

The Continuing Impact of Sarbanes-Oxley

For a publicly traded organization, one major element to consider is the impact of the Sarbanes-Oxley Act (SOX). Specifically, sections 302 and 404 have direct implications for how financial information is published and controlled on the web.
These sections require that the CEO and CFO of an organization certify and assert to stakeholders that the financial statements of the company and all supplemental disclosures are truthful and reliable, and that management has taken appropriate steps and implemented controls to consistently produce reliable financial information to its stakeholders (Section 302). The company’s external auditor must report on the reliability of management's assessment of internal control (Section 404). (Imperva, 2009)
Where this comes in, especially for web-based organizations, is in ensuring the financials are clearly shown on the website and easily accessible. Add to that the requirements of section 404, which call for the external audit results to also appear on the web. Following the aforementioned standards will make any audit less painful. Creating processes and policies around a set of standards, as well as regularly evaluating any changes to SOX, will help secure organizational compliance. As the Internet continues to evolve and grow, the laws governing certain types of transactions will change. It is, therefore, critical that the organization keep pace with the changes in order to secure its place on the Internet.

References
Fuller, M. (2011, June 09). The Evolution of the Internet and its Meteoric Rise. Retrieved from Techwrench.com: http://www.techwench.com/the-evolution-of-the-internet-and-its-meteoric-rise/
Imperva. (2009). Implementing Sarbanes-Oxley Audit Requirements. Retrieved from Imperva.com: http://www.imperva.com/docs/WP_SOX_compliance.pdf
Laminack, B. (2008, November 14). A Brief History of Content Management Systems. Retrieved from Brent Laminack’s Personal Site: http://laminack.com/2008/11/14/a-brief-history-of-content-management-systems/
Reynolds, K. (Director). (1991). Robin Hood Prince of Thieves [Motion Picture].
SourceForge. (n.d.). sqlmap automatic SQL injection and database takeover tool. Retrieved from sourceforge.net: http://sqlmap.sourceforge.net/
Tenable Network Security. (n.d.). Nessus 5.0 is here. Retrieved from tenable.com: http://www.tenable.com/products/nessus
WhiteHat Security. (2012). About WhiteHat Security. Retrieved from Whitehatsec.com: https://www.whitehatsec.com/abt/abt.html
World Wide Web Consortium. (n.d.). About W3C Standards. Retrieved from W3C.org: http://www.w3.org/standards/about.html