Inside the mind of a non sequitur
Thursday, April 10, 2014
HeartBleed Public Service Announcement...
Hi guys,
Many of you know that I studied Information Security in school. I wanted to pass this along so you are aware and have some additional confirmation if you get an email from an affected company.
A newly discovered bug affects nearly two-thirds of the internet. It has been nicknamed "Heartbleed."
The bug affects a HIGHLY USED open source security library called OpenSSL. It can render a site's security vulnerable to attack, leaving users of that site potentially exposed to data theft.
IF YOU GET AN EMAIL FROM:
Banking institutions
Shopping sites (Amazon, eBay, etc.)
Video or TV streaming sites that accept banking information
THAT YOU HAVE SIGNED UP FOR, please follow the instructions in their email.
I know Amazon, Google, and Netflix have been affected, but they have not finished fixing their sites yet, so you may want to wait until you hear from them.
You can read more about the bug here: http://heartbleed.com/
You can test the site that you are using here: http://filippo.io/Heartbleed/
If you have any questions, please feel free to call me... 2537974789, email me, or talk to me in person.
Thursday, January 16, 2014
The hidden costs in Agile: Frenetic Pacing breaks the team mentality.
Introduction
Agile methodology for project management is a foregone conclusion these days. The question is not "if" you are doing Agile, but which flavor or what tweaks you have made. The desire to get software out the door faster and faster has a huge marketing appeal. You can get a fully featured product by the end of a few sprints. All for the low cost of $19.95. There are costs associated with that, though. Even if the teams are aligned perfectly going into a project, and everyone is perfectly aware of Agile and how to sprint, there are things that come up. Ghosts in the system. There is a cost to speed; the question is whether your team can handle it.
Speed costs balance
Balance is something that every tech organization should strive to maintain. To help understand this, I am going to use the analogy of a football team's offensive schemes. A good balance of running the ball, passing the ball, and trick plays can throw off the defense's attempts to get after the quarterback and make a big play. Conversely, too much speed can cost the team its effectiveness. There is a saying in every sport that involves catching a ball: "Catch it, THEN make the play." This speaks to the tendency all humans have to rush and think too many steps ahead. But if one plans well enough, makes the play simple enough, and can find the right receiver to pass to, the play is so effective that your offense becomes very hard to beat. There is a play style in football called the "hurry-up offense." The idea behind it is to have your offensive players line up as quickly as possible before the defense can get mentally set, read your lineups, and catch what you are doing. It can be effective for a couple of plays, until the defense figures out that you are sticking to some much simpler plays.
With Agile development, there is balance too. Finding the right blend of process, people, and flexibility ultimately gains a much quicker push to market. Some flavors of Agile boast a 300% increase in efficiency and speed to market. However, all that speed comes with a cost too. The lack of balance means that something will always lose. A healthy team should be allowing patches, technical debt, and new features in each release. However, when the team is too fast, something suffers. Depending on where the IT team sits, or who screams the loudest, you may have new features being developed but left unsupported; you may have support for old products that are beyond their shelf life; or you may have too much debt to fix without any new features to wet the whistles of your supporters. In any case, the point here is that too much speed costs you your ability to balance your releases.
Speed costs communication
As alluded to in the previous section, speed also has a negative impact on communication. When scrum teams are constantly changing, constantly pushing into new features, for example, communication suffers. Going back to football, the nifty arm bands that many quarterbacks wear are not a fashion statement. Each is a list of quick plays the quarterback can use in the event the defense has spied out the plan. Calling an audible mid-snap count can cost quite a bit, but occasionally it pays off. However, when priorities are constantly changing, the communication both to the customers and to the internal teams must be perfect. Not great, not good, perfect. The reason is that if one hints that the teams are not going to focus on a specific set of work, but communicates it in the wrong way, it can cause frustration, consternation, and in some cases loss of political support. Think of it like this: The ball is snapped, but the defense has your play scouted. No one is open, they are running up into your face, you scramble back and forth and finally see a glimmer. You have only a gesture to point out that you notice the opening. Is the receiver looking? Is a linebacker or a safety looking? You make the decision to point in the direction you are going to throw, throw the ball, and for what seems like an eternity you wait. When out of the corner of your eye you see the defense come across the field far faster than you anticipated, you know it's going to be intercepted, but you can't do anything. What do you do?
This is the same sort of complexity that happens in team communication as a byproduct of too much speed. When dealing with this, face to face is always preferable to email or instant message, simply because it is much harder to convey tone in email or instant message.
Speed costs quality
Finally, speed costs quality. The poignant example I have is my own personal experience. As teams rush to complete, their focus is more about finishing as much as possible, not finishing well. That does not speak to their personal scruples as much as it does the environment around them. They start spinning plates for each thing; then imagine all the additional regulations, requirements, and conversations that come up during the course of a week around those things. It's like spinning additional plates ON TOP OF the original plates and expecting nothing to fall. Most often, the speed to delivery means a loss in overall quality, with campaign promises and lobbying for getting things done "later."
If we are honest, those campaign promises are all but hollow coming out of the mouths of the ones uttering them. It is not the developers that become the problem, no, not the developers, or the QA team, or even the P.M. crew that pushes the pace. It is the customer and the leadership. My personal belief is that leadership should embrace the philosophy of "You can't kick my dog, but I can." Leaders should be able to lead their staff and defend them against all the environmental pressure that causes the "speed up everything" mentality. However, when there is a break in that defense, or worse, the leadership is pushing equally hard, speed becomes a threat that must be put down like a rabid animal.
Conclusion
So how do we quell this fundamental problem? Does it not seem like the unstoppable force meets the immovable object? There is a beautiful moment, if we are honest with our teams, where, as work comes to a grinding halt, a level of truthfulness and transparency develops. A level that screams equally across the aisle and allows for a crucial decision to be made. Obviously, you get where I am trying to point you: the decision should be to maintain a healthy speed. But I have seen all too many times a push to keep the reckless speed going, as if to stand in the face of an adversary and spit fully in defiance. The problem is the adversary is us, the team that enabled the behavior, the team that supported the initiative to speed up, the team that is now as fragile as a bubble. What can be learned? Ultimately, when dealing with speed, use caution, put your seat belts on, and watch out for other drivers. That is to say: be careful, put into place some guidelines that can protect your teams and bring the much-needed balance back into the limelight, and do not be so myopic that you cannot anticipate crossfire from other divisions of your organization. If you can do those three things, you can wrangle speed for a little while. But always, ALWAYS seek balance.
Wednesday, May 15, 2013
Die a Hero, or Live a Villain
Introduction
Among business men and women, there has been an unspoken code. A law that governs behavior and guides choices. From time to time, people brush up against this unspoken code and draw their own conclusions. I have done the same, but until now I have waited and watched, trying not to allow myself to be persuaded to one side or the other until I could see both sides. Some of the best leadership writers in the world speak of the need to lead people from the front. That is to say, people are led best when inspired and brought alongside leadership. The dangers are apparently clear, however, and are best captured by Aaron Eckhart's character, Harvey Dent, in "The Dark Knight": "Fine! You either die a hero, or you live long enough to see yourself become the villain." This dichotomy is what I wish to focus on today. Why can't leaders lead and stay in charge? Why must they die to become a hero (whether literally or figuratively)? What could cause them to fall far enough to become a villain?
The Black and White Knights
Christopher Nolan's sweeping narrative of Bruce Wayne's inner struggle and leadership as Batman is the platform I want to use to share my opinions about the aforementioned leadership struggle. We see Bruce become a leader in fighting injustice in Gotham City in "Batman Begins," and we learn how his leadership is short-lived because he chooses to do things in an "unconventional" and "uncomfortable" way to get the job done. In fact, there is only one rule he has: he will not kill. This kind of stance makes it apparent that he will not sacrifice justice just to give criminals due process. He will not allow even the unjust justice system to come between him and his goal. What that looks like as a leader is becoming so obsessed with a goal, an ideal, a standard, a process, that we don't care what it takes to get there, or how long; we will get there. We don't care who aligns with us, be it people with higher moral standards or those who share our own. We cannot, however, allow those who compromise our goal to become our allies, and certainly not our friends. This is where Bruce made a mistake. I believe, deep down, Bruce wanted Harvey Dent to take up the leadership mantle of fighting injustice before he could fully understand what made Dent tick. In fact, we see the Joker push Dent just slightly and cause his world to come crashing down around his ears.
When leaders allow themselves to work too closely with people they don't understand, they expose themselves to risk. Don't get me wrong, there are times when you can misjudge someone, but a fool is one who does not learn from mistakes. What I am talking about directly, though, is when people purposefully allow themselves to trust someone more than reason justifies. Allowing someone to get so close that they can manipulate and twist things in your view is allowing someone to get too close. These "Black Knights" fight for a cause, but are so tainted that they cease holding on to what they once believed in order to win approval from those who are meant to follow them. This danger exists even among peers.
Why can't leaders stay in charge? Are all of us doomed to "die a hero" or "become a villain"? These questions are hard. There are, admittedly, many reasons a leader may leave. However, leaders must understand that there should be precious few people who directly influence their decisions. If we were to look for a model, we could look at Jesus Christ. He led 12 men from the front, he had an accountability group, and he did die a hero and inspired an entirely new way of thinking. But he also allowed only three people to be closest to him, and they still had no direct ability to influence his mission. He did not allow Peter, James, or John to change his mission. Had he, Peter would have had Jesus fighting in armed combat with the Romans, or James and John would have had him fishing for fish instead of men. With this in mind, the "White Knight" is the antithesis of the "Black Knight." Sure, that leader may have friends, but he does not allow those friends to change his perspective without knowing how it relates to his mission.
Summary
This short post was meant to illustrate the dangers of allowing someone too much influence. Keeping that in mind, there is a leadership structure for a reason. They are not your friend, they are not your buddy, they are your employee. They are there to take orders and leadership from you, not give you orders. The goal is to discover your mission and stick to it, allowing people to speak to it, but understanding how their opinions play into the mission. It is not easy, but it is of paramount concern to any leader.
Wednesday, February 13, 2013
"We're listing Starboard Cap'n:" a look at using lists to control data integrity.
introduction
We live in an age where any individual with the right tool can forge credentials through a Facebook token and gain access to banking information from their target. Unlike days of old, where one had to be familiar with a specific technology or programming language to mount an attack, one can now use a simple Google search to find any number of tools that will enable the novice (aka script kiddie) to mount a precision attack against any number of sites. The worst part is that the tools are free and open source. While most of this is no surprise to any forward-thinking InfoSec analyst, one trend we are seeing more of is user-generated content: people being able to add their two cents to any page on the web. Wikipedia, Facebook, Twitter, blogs, news pages with comments, YouTube, Instagram, Pinterest; all fundamentally use input from users to guide, drive, and traffic information. However, one alarming note is the general lack of consistent standardization for field validation within sites themselves, let alone across multiple sites and corporations. If one pauses to consider the kinds of internal turmoil that many web properties go through, one can quickly understand HOW this could happen. But it isn't until you go farther down, to the individual developer, page, and field, that it starts becoming crystal clear.
a quick note on standards
When a developer starts designing a page for a project, unless there are standards driving that development, it is bound to look like a patchwork quilt. One of the most dangerous areas to be that spotty is security. With that much inconsistency between developers within an organization, it is no wonder that, looking at web and application development overall, one can see vast differences, even in similar code. While the creativity of the developer is not the cause of this, it is important to point out that developing blindly, without understanding the code base one is pushing a project into, is dangerous. From there, one has to understand what the most likely attacker profile will be. If one can understand that, one can greatly reduce the number of vectors by which an attacker will likely gain access.
White, Black, and Red listing
White and black lists are nothing new. The idea of white-listing a server to keep untrusted traffic out is a good concept. However, there is a huge hole in that concept. What happens if the traffic is NOT malicious, but just not on the white list? Let's take the example of Minecraft. In Minecraft multi-player, there are servers set up by various organizations. White-listing is an out-of-the-box feature of the Minecraft multi-player server, and these organizations use white-lists to deter and keep off players that are not trusted or well known. For those that don't use white-listing, there are other systems that monitor and detect actions by player, so that one does not have to read the server logs to find out if players have been up to inappropriate activity. When a player does something that is deemed "inappropriate use" of the world in Minecraft, they are warned, booted from the server, or banned. If white-listing covers acceptable players and banning is a black-list, then booting and warning players would be a separate list altogether. Let's bring that back to the world of field validation that we discussed briefly with regards to user-generated content. If one implements a white-list of acceptable values for a field, one also must add a black-list to define what IS NOT appropriate. The black-list is arguably more important than the white list because it defines what is absolutely not allowed in the field.
Ultimately white-listing is no different from or better than black-listing because it is impossible for either humans or computer systems to distinguish good software from bad software. -Simon Crosby, February 2013, taken from his blog
The difference posed here is knowing your systems, and the information most likely to be targeted. If one can understand that, one can make a better white and black list.
In order to validate the integrity of the input we need to ensure it matches the pattern we expect. Blacklists looking for patterns such as we injected earlier on are hard work both because the list of potentially malicious input is huge and because it changes as new exploit techniques are discovered. Validating all input against whitelists is both far more secure and much easier to implement. In the case above, we only expected a positive integer and anything outside that pattern should have been immediate cause for concern. Fortunately this is a simple pattern that can be easily validated against a regular expression. -Troy Hunt, May 2010, taken from his website
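Hunt's positive-integer case is straightforward to make concrete. Below is a minimal sketch in Python, assuming a field that should only ever contain a positive integer; the blacklist patterns are illustrative assumptions, since no black-list is ever exhaustive.

```python
import re

# Whitelist: accept only the pattern we expect (here, a positive integer).
WHITELIST = re.compile(r"^[1-9][0-9]*$")

# Blacklist: a few known-hostile fragments (illustrative, never exhaustive).
BLACKLIST = [
    re.compile(r"(--|;)"),            # SQL comment / statement separators
    re.compile(r"<\s*script", re.I),  # script-tag injection
]

def validate_field(value: str) -> bool:
    """Pass only input that matches the whitelist and trips no blacklist rule."""
    if not WHITELIST.match(value):
        return False
    return not any(rule.search(value) for rule in BLACKLIST)

print(validate_field("42"))                # True
print(validate_field("42; DROP TABLE x"))  # False (fails both lists)
```

Note where the real work happens: the whitelist carries the load, and the blacklist only catches a few known-bad fragments, which is exactly Hunt's point.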
the red-list
The next concept I would like to put out is the idea of using another list (what I call the "Red List"). Red-listing is for input that is not necessarily malicious, but is suspect because of the context, type, or time at which the information is presented. An example can be found back with the Minecraft players. Let's say that a player is white-listed and playing on the server, but he starts gathering the specific materials needed to make and use TNT (sand, gunpowder), which is not allowed. Because the player is not doing anything inappropriate on the server, he will draw no attention from the server moderators. However, he would be red-listed: brought to a higher level of awareness, yet allowed to continue. The server moderators may then either pro-actively intervene (by asking the player what they are doing) or wait until the player has created the inappropriate item and warn/boot the player. Red-listing is nothing more than a clearing house for content that could be malicious, but is not overtly so. Just a thought.
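As a sketch of how the three tiers might fit together in code, here is a toy Python classifier; the rule sets are hypothetical, chosen only to echo the Minecraft example, and a real red-list would feed a review queue rather than a print statement.

```python
import re

# Hypothetical, illustrative rule sets for the three tiers.
BLACK_RULES = [re.compile(r"<\s*script", re.I)]           # reject outright
RED_RULES   = [re.compile(r"\b(tnt|gunpowder)\b", re.I)]  # allow, but flag

def classify(value: str) -> str:
    """Three-way triage: blocked, flagged for review, or allowed."""
    if any(rule.search(value) for rule in BLACK_RULES):
        return "blocked"
    if any(rule.search(value) for rule in RED_RULES):
        return "flagged"   # raised awareness: log it for a moderator
    return "allowed"

print(classify("trading sand for gunpowder"))  # flagged
print(classify("building a house"))            # allowed
```

Thursday, December 13, 2012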
SQL vs. NoSQL
Samuel Warren
CS416: Database Management
Professor Noel Broman
December 10, 2012
SQL vs. NoSQL
Executive Summary
Since 2004, a debate has raged between using traditional relational databases and using the non-relational alternatives collectively called "NoSQL." This debate is not one even Google has settled. In a 2012 video on YouTube, Google developers presented a debate between SQL and NoSQL. The debate reached a dead tie, with the developers agreeing that a pairing between the two would be the likely solution, at least in the short term.
Introduction
Arguably, one of the greatest resources available to any database administrator is Structured Query Language (SQL). However fast and powerful it may be, there is a contender for the throne of greatness in this field. "Not only SQL" (NoSQL) is a movement to do away with relational databases altogether. The first usage of the term, in a modern context, was in 1998 by Carlos Strozzi; "Ironically it's relational database just one without a SQL interface. As such it is not actually a part of the whole NoSQL movement we see today" (Haugen, 2010). Haugen goes on to share that in 2009 Eric Evans, at the time employed at a hosting company called "Rackspace," used the term to refer to a more recent uprising of non-relational databases. Strong pros and equally resilient cons have been presented for both SQL and NoSQL in the debates.
SQL
SQL uses “tables” and “columns” to store the data that is input. There are huge pros to this because it gives each piece of data a never-changing location that can be referenced if one labels and links back correctly. Getting the data into the database is not terribly challenging. Removing data is not difficult either, if one can determine the correct syntax and taxonomy of SQL. SQL is a simple query language that is highly repeatable and flexible. The inconvenience of SQL is the convoluted nature of linking so many different data types together to get one or two specific pieces of data. When compared to NoSQL, however, the ease of breaking down complex problems becomes a boon.
Let’s say that you want to compute the average age of people living in each city. In Cloud SQL [a specific product by Google], it’s as easy as this. All you have to do is select the average age and group by city. (Google Developers, 2012)
The query shown by the presenter was clear, easy to read, and syntactically the same as every other SQL query used by every database administrator, or analyst, working with SQL. This serves to illustrate the muscle of SQL queries and further demonstrates ease of use. It is a hands-down winner in comparison to NoSQL with respect to queries.
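The presenter's query is easy to reproduce. Here is a runnable sketch using Python's built-in sqlite3 module, with a hypothetical people table standing in for the Cloud SQL data:

```python
import sqlite3

# A hypothetical people table, matching the presenter's example.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE people (name TEXT, city TEXT, age INTEGER)")
conn.executemany(
    "INSERT INTO people VALUES (?, ?, ?)",
    [("Ann", "Seattle", 30), ("Bo", "Seattle", 40), ("Cy", "Tacoma", 25)],
)

# The whole computation is one declarative statement: select the
# average age and group by city.
for city, avg_age in conn.execute(
    "SELECT city, AVG(age) FROM people GROUP BY city ORDER BY city"
):
    print(city, avg_age)  # Seattle 35.0, then Tacoma 25.0
```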
When discussing the trade-offs between the two, one of the major reasons SQL has managed to thrive is that it has been refined to the extent that anyone can learn a few commands and begin writing complex query strings. Of NoSQL-based systems: "They're not polished, and comfortable to use. They have new interfaces, and new models of working, that need learning" (Snell-Pym, 2010). While one can quickly pick up a SQL-based system and begin extracting information, it is not as easy to do so with a NoSQL-based system. According to Kahn, "[A] user can access data from the database without knowing the structure of the database table" (2011). That kind of structure is invaluable for resource managers needing to find staff who can handle a relational database with the power of SQL.
NoSQL
On the other side of the playing field, so to speak, is the non-relational model, led by several open-source NoSQL contenders.
Most agree that the "no" stands for "not only"—an admission that the goal is not to reject SQL but, rather, to compensate for the technical limitations shared by the majority of relational database implementations. In fact, NoSQL is more a rejection of a particular software and hardware architecture for databases than of any single technology, language, or product. (Burd, 2011)
This rejection of some of the technical limitations has revealed highly desirable features in the process. The most notable feature is the ability to quickly scale the database in the event of extreme transaction volume. Burd goes on to explain that with traditional SQL, as transactions between the servers and the databases increase to a frenzied pace and the queries become larger and larger, the only real response is to put more hardware and storage into the path of the database.
Although each of these techniques extended the functionality of existing relational technologies, none fundamentally addressed the core limitations, and they all introduced additional overhead and technical tradeoffs[sic]. In other words, these were good band-aids but not cures. (Burd, 2011)
NoSQL enables much quicker information discovery because the data lives within what the Google Developers called "entities" (2012), whereas the customary relational database uses different tables and has to look up the data within those tables using relational keys. The tables are then linked together using what SQL calls "JOIN" functions within an individual's query string. It is easy to observe a decrease in performance unless the database lives on a mature, well-laid-out system.
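The contrast is easiest to see side by side. The following Python sketch uses a hypothetical customers/orders schema: the relational half reassembles the data at query time with a JOIN over the relational key, while the entity half stores the same data together and reads it back in one step.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL);
    INSERT INTO customers VALUES (1, 'Ann');
    INSERT INTO orders VALUES (10, 1, 9.99);
""")

# Relational: the data is split across tables and reassembled at
# query time by a JOIN over the relational key.
row = conn.execute(
    "SELECT c.name, o.total FROM orders o JOIN customers c ON c.id = o.customer_id"
).fetchone()
print(row)  # ('Ann', 9.99)

# Document-style "entity": the same data stored together and fetched
# in one read, with no join needed.
entity = {"name": "Ann", "orders": [{"id": 10, "total": 9.99}]}
print(entity["name"], entity["orders"][0]["total"])  # Ann 9.99
```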
As the business model evolves concepts and data models often struggle to evolve and keep pace with changes. The result is often a data structure that is filled with archaic language and patched and adapted data. As anyone who has had to explain that the value in a column has a different meaning depending on whether it is less than or greater than 100 or that "bakeries" are actually "warehouses" due to historical accident knows that the weight of history in the data model can be a serious drag in maintaining a system or incorporating new business ideas. (Rees)
Rees illustrates a common problem among all systems: change. As data changes, the current and dominant relational model may become extinct, and SQL may not be up to the task of continuing to store and serve data in its current fashion. New as it is, NoSQL may quickly become the standard that SQL is today. With such flexibility, NoSQL only needs more companies, like Google, to accept it and learn how to work with both SQL and NoSQL alike in the interim.
References
Burd, G. (2011, October). NoSQL [PDF]. Retrieved December 12, 2012 from the World Wide Web: http://static.usenix.org/publications/login/2011-10/openpdfs/Burd.pdf
Google Developers. (2012, June 29). Google I/O 2012 - SQL vs NoSQL [Video file]. Retrieved December 12, 2012 from the World Wide Web: http://www.youtube.com/watch?v=rRoy6I4gKWU
Haugen, K. (2010, March 16). A brief history of NoSQL [Blog post]. Retrieved December 12, 2012 from the World Wide Web: http://blog.knuthaugen.no/2010/03/a-brief-history-of-nosql.html
Kahn, A. (2011, November 8). Difference between SQL and NoSQL: Comparision. Retrieved December 12, 2012 from the World Wide Web: http://www.thewindowsclub.com/difference-sql-nosql-comparision
Rees, R. (n.d.). NoSQL comparison. Retrieved December 12, 2012 from the World Wide Web: http://www.thoughtworks.com/articles/nosql-comparison
Snell-Pym, A. (2010). NoSQL vs SQL, why not both? Retrieved December 12, 2012 from the World Wide Web: http://www.cloudbook.net/resources/stories/nosql-vs-sql-why-not-both
Wednesday, November 28, 2012
MasterControl: A QMS to answer 21 CFR
Samuel Warren
IS472: IT Compliance
Professor Steve O’Brien
November 20, 2012
MasterControl: A QMS to answer 21 CFR
Executive Summary
Regulating and maintaining compliance in the biotechnology, pharmaceutical, and genetic engineering fields is quite a task. In order to maintain compliance with the FDA-required 21 CFR, many companies are choosing to turn to quality management systems. The beauty of this is they do not have to figure out how to comply with the regulations; they simply have to learn the proper way to interact with the software. Master Control makes one such product; with its features and integration, the MasterControl suite is one of the most robust QMS platforms yet created. If utilized correctly, it will enable companies to do more research and spend less time trying to maintain compliance.
Introduction
What is quality? Is it an end state? Is it a process? Quality is all of the above: an end state, a process, and even a descriptor. When discussing quality with individuals, a somewhat vague, generalized answer most often floats to the forefront, one describing the characteristics of a product or service as "reliable" or "stable." While both definitions are generally acceptable, defining quality can be much larger and involve a fair amount of compliance with regulatory demands. The Quality Management System (QMS) described herein will have its benefits fully explained, along with the ways it contributes to those compliance requirements.
What is MasterControl?
MasterControl is a product suite created by Master Control Incorporated for the purpose of aiding in quality management services, with specific regard to FDA and other regulatory compliance issues. With a host of offerings, ranging from Quality Management to Training Management, it is meant to provide a managed answer for achieving and keeping an organization in compliance with 21 CFR and other FDA-required compliance fields. According to Master Control's "Software Control Solutions Overview":
While market globalization has vastly increased the profit potential for manufacturers and other businesses, it has also intensified competition and the pressure to produce faster and at a lower cost. The situation is doubly challenging in a regulated environment (FDA, EMEA, etc.), where companies must contend not only with cutthroat competition, but also stringent regulatory requirements. (2010, p. 1)
With such a highly competitive field and risky potential failures, it is imperative that organizations do everything they can to grease the skids and provide easier access to auditors and regulators, in order to avoid being considered either noncompliant or non-cooperative. The MasterControl suite provides governed and trusted software and systems to help ensure the organization faces the fewest possible technology problems; it also frees companies to engage in more research and discovery.
How to Optimize MasterControl
It is essential for all systems to be optimized, and QMSs are not exempt from that necessity. Without optimization, users are unable to use the system to its fullest potential. When approaching the optimization of MasterControl, there are several significant areas to contemplate. One recommendation is to eliminate the muddled mix of digital and analog: it is too costly for the company to invest in computer systems while still paying for paper, ink to print and copy, and maintenance on those devices, depending on the size of the company. Another hidden cost is the time investment for audits or inspections.
A routine GMP inspection typically lasts a week, but sometimes they can last up to five weeks. The investigator noted that within this context, an electronic record-keeping system could make all the difference in speeding up the inspection process. (“Six,” 2010, p. 2)
While a week is not a long time for the auditor, the inspection consumes internal resources and, in some cases, may stop work altogether. The end cost could be much higher than anticipated if the inspection or audit lasts longer.
Another major way to optimize the QMS is to use different software and processes that connect together well. MasterControl provides a large suite of software, all of which is interconnected and fully digital.
It has the ability to integrate with electronic repositories that are good for storing SOPs, engineering drawings, and other documents, but are incapable of controlling quality processes like training and CAPA. MasterControl allows companies to leverage their existing repositories by integrating them with robust MasterControl applications without expensive custom coding. (“Six,” 2010, p. 3)
By having and maintaining connections to the electronic repositories, the MasterControl suite is able to have a wider reach digitally and reduce the potential disconnection points where the people and outdated processes connect to the system, thereby limiting the risk of system failure.
How Does MasterControl Enhance Compliance Efforts?
A major area MasterControl excels in is aiding in compliance efforts. With such tightly controlled fields, manually verifying compliance would be time consuming and potentially very expensive. By using a system like MasterControl’s suite, there are five areas accounted for: “system standard operating procedures, user authentication, access security, audit trails, and record retention” (“5 Ways,” 2010, pp. 1-4). All the areas are vital to maintaining a compliant lab, or business overall. The whitepaper written by Master Control Inc. provides quite a bit of detail for each item. For example, in user authentication, they describe the following MasterControl software features:
MasterControl has numerous levels of security to ensure authenticity of each user in the system. The software tracks every signature combination and does not allow duplication or reassignment of the user ID and signature combination. Each user establishes a signature password upon first log in. He or she first logs into MasterControl with a user ID and a password just to gain access. To sign off on any document, the user must use a different “approval” password. All user IDs and passwords are encrypted and are not available to anyone in the system. (“5 Ways,” 2010, p. 3).
The aforementioned security levels help to define and regulate how users interact with the QMS. However, they also provide a robust system control scheme enabling direct fulfillment of 21 CFR regulations for said area. While there are many more features of MasterControl’s products, this particular area serves as a poignant reminder of just how much detail was actually placed into MasterControl software.
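As a toy sketch of that dual-credential idea (a login password for access and a separate approval password for document sign-off): this is not MasterControl's actual implementation, and every name and parameter below is hypothetical.

```python
import hashlib, hmac, os

def _digest(password: str, salt: bytes) -> bytes:
    # PBKDF2 so neither secret is stored or compared in the clear.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

class UserCredentials:
    """Two independent secrets: one grants access, one signs off on documents."""

    def __init__(self, login_password: str, approval_password: str):
        self.salt = os.urandom(16)
        self.login_hash = _digest(login_password, self.salt)
        self.approval_hash = _digest(approval_password, self.salt)

    def can_log_in(self, password: str) -> bool:
        return hmac.compare_digest(self.login_hash, _digest(password, self.salt))

    def can_approve(self, password: str) -> bool:
        return hmac.compare_digest(self.approval_hash, _digest(password, self.salt))

user = UserCredentials("login-secret", "approval-secret")
print(user.can_log_in("login-secret"))   # True: grants access only
print(user.can_approve("login-secret"))  # False: sign-off needs the second secret
```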
References
5 Ways MasterControl helps ensure system compliance with 21 CFR Part 11. (2010). MasterControl Inc. Retrieved November 19, 2012 from the World Wide Web: http://www.mastercontrol.com/resource/index.html#wp
Six ways to optimize your quality management system and ensure FDA and ISO compliance. (2010). MasterControl Inc. Retrieved November 19, 2012 from the World Wide Web: http://www.mastercontrol.com/resource/index.html#wp
Software control solutions overview. (2010). MasterControl Inc. Retrieved November 19, 2012 from the World Wide Web: http://www.mastercontrol.com/resource/index.html#wp
Failure to Communicate Case Study Review
Samuel Warren
IS472: IT Compliance
Professor Steve O’Brien
November 26, 2012
Failure to Communicate Case Study Review
Executive Summary
Almost every Information Security analyst is thought to be slightly paranoid, in part due to their willingness to see potential problems everywhere. While not all of them are actually paranoid, there is a clear need to understand and train staff on potential threats when it comes to information. Flayton Electronics, a fictional mid-sized company with a small web presence, discovered a major problem: a large number of their customers had compromised payment accounts. There is no easy or fool-proof way to completely prevent data loss; however, communication and business continuity steps provide a way to keep any breach from getting too far out of hand.
Introduction
As long as data has existed, there have been communication, information transfer, and data fraud. How each is approached is vastly different, yet all share details requiring care and knowledge to navigate. Within the realm of data fraud, there are numerous required responses to consider, not including the organizational response and reputation effects. The following review discusses the fictional company "Flayton's Electronics," the major data loss it faced, and its response to the situation.
Problem Overview—Flayton’s Electronics
Flayton’s CEO was informed of an alarming discovery by their principle banking institution. The bank reported a large number of Flayton’s customers had their cards compromised. Originally, they reported that 15% of a randomly sampled 10,000 accounts that were compromised had purchased at Flayton’s at one point or another. As they investigated further, they discovered there were two possible culprits and a disabled firewall. How the firewall stayed disabled was not a mystery; their Chief Information Officer (CIO) was constantly juggling new technology projects and seemed too busy with those to notice a downed firewall. That kind of innovation, while it yields results, also brings a level of risk of any oversight. Another major problem the Flayton team had was if and when to communicate the breach to their customers. At the time of discussion, they were unsure how the breach occurred, if it was a deliberate breach by former employees or a breach by hackers sitting in their car with a laptop near the headquarters. With such minimal information, there was a certain amount of time necessary, but instead of being proactive and researching the breach themselves, the Flayton team seemed to be avoiding the issue and trying to find a way out without having to communicate and deal directly with the affected customers.
How to Handle the Situation
Innovation is a great tool to have in any organization. However, innovation with improper execution does far more damage than not innovating at all. A level of research is necessary prior to launching any technology project dealing with customer data, internal employee data, supply chain data, or any other confidential data. Conducting thorough investigations into all possible changes affecting the data, running business continuity exercises, and keeping consistent communication between the CIO and the different department heads will help ensure this type of problem is discovered and dealt with sooner rather than later.
One common fallacy is that silver bullet technology can save the day. I've seen organizations spend hundreds of millions of dollars on security safeguards that were penetrated by a knowledgeable person with a handheld device. For example, Motorola proved to one of its customers, who had invested heavily in some of the best protection technology available, that we could access their core business systems using just a smartphone and the Internet. (McNulty, 2007)
This fallacy was evident in the minds of the CEO and the CIO, as they believed being PCI compliant would protect them and prevent problems from happening. However, being PCI compliant is just a first step among the many proper security practices that need to happen within any organization.
Another major point that would help prevent this problem in the future is to have the Information Security team perform regular security audits on the technology and the processes in the organization, to determine whether there are any potential threat vectors. While hacking by external attackers is still the number one threat, an article in CSO describes internal attackers as a close second. Keeping that in mind, there are many ways attackers could gain access to confidential information without actually being physically inside the internal network.
Above all possible hardware and software solutions, the key to this and other organizations' problems is to hire, educate, and train staff to be knowledgeable of all the potential ways data can be acquired. Then, keeping staff and leadership vetted through security and background checks can provide additional defense against disgruntled employees. Simple steps like changing system passwords or removing access for separated employees can go a long way to ensure no separated employee can intrude and steal information.
References
Carr, K. (2003, August 3). Numbers: Internal Threats vs. External Threats. Retrieved November 27, 2012, from CSO Security and Risk website: http://www.csoonline.com/article/218405/numbers-internal-threats-vs.-external-threats
McNulty, E., Lee, J. E., Boni, B., Coghlan, J., & Foley, J. (2007). Boss, I Think Someone Stole Our Customer Data. Harvard Business Review, 85(9), 37-50.