OWASP Austin threw a Crypto Party this week. We had five presentations on different topics, including VPNs, TOR, disk encryption, and secure voice calls. I had the task of explaining PGP to folks in 10 minutes, and decided it would be more useful to explain the concepts rather than giving a demo. (I have run into too many people who don't understand the key management aspect of PGP.)
So, for what it's worth, here are my slides:
(Turns out Slideshare doesn't provide the Powerpoint transitions/animations that I spent a lot of time on. Too bad. Ping me and I'll send you the original...)
I gave a talk at LASCON 2014 the other day titled "Multi-Factor Authentication -- Weeding out the Snake Oil". Rather than providing a catalog of selection criteria, this turned into a review of scenarios where throwing additional authentication factors at a problem might or might not make sense. Combined with examples of different solutions currently available, we discussed the types of threats to different environments where different multi-factor solutions might actually be able to help lower your risk.
The resulting message, as is often the case, was this: Even though I keep reminding my less tech-savvy friends that it is really a good idea to enable two-factor authentication for their "free" email accounts no matter what, it's not a one-size-fits-all solution. You need to understand what risks you are trying to control in order to determine whether multi-factor authentication, or a particular solution, is able to help you with that. Just buying an arbitrary solution that carries a label matching the current buzzwords does not (typically) solve your issues around user authentication and data security.
The slides are on SlideShare:
This doesn't have too much to do with enterprise security, but since I did the research I figured I might as well write it down. I haven’t worked as a PCI assessor for the past three-or-so years (and obviously never in Europe ;-)), but some more technical questions about EMV cards were floating around in my head. Mainly I was wondering how crypto is used to facilitate the functions introduced by the EMV specifications.
There are plenty of resources available online, so this is more or less just a summary of what I was curious about. I will spare you the usual basics introduced and discussed in the many news articles generated by the current push to introduce chip cards into the US market, and by the motivating breaches that generated attention beyond the security community.
The end goal of using EMV technology is, as usual, to authorize transactions that are initiated by the cardholder at a point-of-sale terminal, i.e. during card-present transactions (online and offline). The technology in the embedded chip (ICC) is supposed to reduce the potential for fraud, in particular fraud resulting from theft of cardholder data (such as the card number and verification data).
Chip cards do not offer benefits for e-commerce (card-not-present) transactions unless the cardholder is supplied with a physical card reader for their PC that is then used to access the chip during the Internet transaction, turning it into sort of a card-present transaction. Obviously, if the cardholder enters the PAN (card number) and verification data by hand into a web browser, it is subject to the usual threats of exposing those on their way from a local device to the acquirer through the interwebs.
From a historic point of view, a lot of the basic functionality in the EMV specifications emphasizes the increased security for offline transactions, i.e. card-present transactions where the card terminal at the merchant location doesn’t “dial up” to the acquirer for an online authorization, because that’s slow and costs money. Rather, card-present fraud is significantly reduced by a number of measures that allow the local terminal / point-of-sale system (POS) to establish trust in the card (and cardholder), and allow the POS (and the card!) to authorize transactions that meet certain criteria. These days, avoiding online authorizations isn’t quite as much of an issue in many of the interconnected places of the world anymore, but it explains why ― for example ― the basic card authentication mechanisms are all designed to work offline. (It used to be the norm that merchants, in particular smaller ones, would collect their transactions offline during the day and then forward them to their acquirer in batches.)
So let’s look at the mechanisms introduced by EMV for card authentication, cardholder verification, and transaction authorization. And how they are implemented from a cryptographic (key management) point of view, which is what I was wondering about in the first place. (There are other aspects we won’t look at ― scripts for updating chips remotely, etc.)
Disclaimer: I synthesized most of this information from the EMV Specifications and other information publicly available on EMV’s website. I’m not privy to any card manufacturer’s, brand-, or issuer-specific information, and beyond the basic EMV specifications those institutions certainly have options available to them to implement alternative / additional mechanisms in the applications they put on the cards that might not completely match the picture I’m painting below. ;-)
Most of the card authentication means built into the EMV specs deal with authentication of the card in card-present, offline transactions. In other words, the (trusted, tested, certified, approved, etc. :-)) terminal (and not the issuer) authenticates the card. In online transactions, the authentication of the card is built directly into the messages exchanged by the card, terminal, and issuer (etc.) for authorization, and whether or not any of the offline card authentication mechanisms are executed in this case is sort of left to the particular environment. (I believe most networks in the US might require it regardless?)
Three authentication types are defined:

- SDA: Static Data Authentication
- DDA: Dynamic Data Authentication
- CDA: Combined Data Authentication (combined DDA and application cryptogram generation)
SDA authenticates static data put onto the card by the issuer, which is not unique to individual cards but only to a specific issuer’s application on the card. DDA and CDA authenticate ICC-resident data, data provided by the terminal, and data generated by the card itself, thereby authenticating the individual card in a unique transaction context.
Which of these authentication mechanisms is performed basically depends on the least common denominator between card and terminal. In order of priority, the preference (if supported by both) is specified as CDA - DDA - SDA.
Functionality on ICC cards is organized into applications. Typically, you will find one application on an EMV card ― the card-type-specific application for a particular brand. (For example, a VISA credit card application.) Theoretically, the EMV specs allow for multiple applications on a card. For example, both a VISA credit card and a MasterCard credit card application on the same card, with the cardholder being able to choose (through the terminal) which one they want to use in a particular situation. Or maybe a credit card application and an application that generates one-time passwords for general authentication use in computing environments ― wouldn’t that be handy? But I’m not sure that the personalization of cards with more than one application is practiced much these days. (Of course, this becomes more interesting once we start talking about mobile applications and NFC-enabled smartphones ― loading banking applications onto cards that are also used as SIM-type cards in smart phones, etc.)
Applications (particularly, some static application data) are signed with an issuer’s key, for which a certification authority operated by the payment system has issued a certificate. Card ICCs then provide to a terminal the certificate for that issuer’s key, as well as the signature for the static application data that was stored on the card with the application during personalization. Terminals contain the public keys of the CAs and are thus able to validate both the issuer’s certificate and the signed application data provided by a card. This process is referred to as SDA.
In DDA (and CDA), the ICC has its own keypair that allows it to sign dynamic requests from the terminal, rather than just presenting the signature for parts of the application data that were signed by the issuer. (Static application data, in this case, is included in the certificate for the ICC’s keypair.) Requests to be signed include a list of data elements selected by the terminal, as well as an “Unpredictable Number” generated by the terminal.
DDA is performed before any transactions are processed; CDA is executed as part of messages exchanged by the terminal and ICC to process transactions, thus assuring that the individual ICC responses have been generated by the ICC in question.
The algorithms specified for all of this (CA keys, certificates, signatures, and their validation) are RSA with keys up to 248 bytes (not bits!) in length, and SHA-1.
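As a toy illustration of these offline data authentication signatures, here is a Python sketch using textbook-sized RSA numbers and SHA-1. The key values, the static data string, and the plain hash-and-sign scheme are simplifications for illustration; the real EMV signature format (message recovery) and key lengths differ.

```python
import hashlib

# Toy RSA parameters -- real EMV CA/issuer keys are up to 248 bytes long;
# these tiny textbook values are for illustration only.
p, q = 61, 53
n = p * q            # modulus (public)
e = 17               # public exponent
d = 2753             # private exponent (known only to the signer)

def sign(static_data: bytes) -> int:
    """Issuer side: sign a digest of the static application data."""
    digest = int.from_bytes(hashlib.sha1(static_data).digest(), "big") % n
    return pow(digest, d, n)

def verify(static_data: bytes, signature: int) -> bool:
    """Terminal side: recover the digest from the signature and compare."""
    digest = int.from_bytes(hashlib.sha1(static_data).digest(), "big") % n
    return pow(signature, e, n) == digest

record = b"PAN=4111111111111111;EXP=2512"  # hypothetical static data
sig = sign(record)
assert verify(record, sig)
```

In real SDA the terminal first validates the issuer certificate against an embedded CA public key, then uses the recovered issuer key the same way to check the signed static data.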
Cardholder Verification Methods
OK, we have authenticated the card, but how do we establish that the human presenting the card to the terminal is in fact the authorized user of that card? Traditionally, in the US, this happens by signature (shudder) for credit transactions, and by PIN for debit/ATM transactions.
This is likely not going to change much as a result of the introduction of EMV cards into the US market. The card brands have no current requirements to move credit card transactions to PIN authentication, although some brands offer incentives for supporting PINs in addition to signatures. (Although this news article claims that PINs might actually be the future.) According to Computerworld, in about two dozen countries a PIN isn’t required.
The four cardholder verification methods (CVM) defined by the EMV specs are:

- Offline PIN verification (plaintext or enciphered)
- Online PIN verification
- Signature
- No CVM required
Combinations of these can also be used.
For (the encrypted version of) offline PIN verification, an ICC either owns a PIN Encipherment public key pair, or uses the key pair it owns for offline dynamic data authentication. The terminal obtains the PIN from the cardholder and, together with a random challenge provided by the ICC in order to prevent replay attacks, encrypts it with the ICC’s public key before passing it on for verification to the ICC.
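A sketch of how a terminal might assemble the data to encipher, assuming the ISO 9564-1 format-2 PIN block and an EMV-style layout (header byte, PIN block, the ICC's challenge, random padding up to the key length); the helper names and the 128-byte key length are illustrative, and the actual RSA encryption with the ICC public key is omitted:

```python
import os

def iso9564_format2_pin_block(pin: str) -> bytes:
    """ISO 9564-1 format-2 PIN block: '2', PIN length, PIN digits, 'F' padding."""
    assert 4 <= len(pin) <= 12 and pin.isdigit()
    return bytes.fromhex(f"2{len(pin):X}{pin}".ljust(16, "F"))

def enciphered_pin_plaintext(pin: str, icc_challenge: bytes, key_len: int) -> bytes:
    """Data block the terminal would RSA-encrypt with the ICC public key:
    header byte, PIN block, the ICC's 8-byte challenge, random padding."""
    block = b"\x7f" + iso9564_format2_pin_block(pin) + icc_challenge
    return block + os.urandom(key_len - len(block))  # pad to the key length

plaintext = enciphered_pin_plaintext("1234", os.urandom(8), key_len=128)
```

The ICC decrypts the block with its private key, checks that the challenge matches the one it issued (replay protection), and compares the PIN.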
For online PINs, the best I could deduce is that the messages formatted according to ISO 9564-1 are used for sending the PIN entered by the cardholder from the terminal to the issuer as part of authorization requests. For encryption of online messages, see below. The PCI's PIN Security Requirements require the use of TDEA or an algorithm that’s at least comparable in strength for PIN encryption.
For the authorization (or declination) of offline transactions, EMV specifies a sophisticated risk management program that both the ICC and the terminal are involved in. In particular, this includes:

- floor limit checking (forcing transactions above a threshold amount online),
- random transaction selection for online processing, and
- velocity checking (limiting the number of consecutive offline transactions).
For online (and offline) transactions, and all other communication involved in transaction authorizations, the EMV specs define message formats. In the case of online processing, the most interesting ones are the ARQC (authorization request cryptogram) and ARPC (authorization response cryptogram). The former is generated by the ICC for inclusion into an authorization request that the terminal sends to the network. (The terminal also includes the PIN entered by the cardholder in case of online PIN verification.) The latter is the issuer’s response, which is processed by the terminal. The ARQC contains a card authentication method (CAM) allowing the issuer to validate the card’s authenticity. The issuer's response, apart from an authorization decision, can include issuer authentication data that is passed on to the ICC by the terminal to allow the ICC to authenticate the issuer response.
When it comes to specifying key management and encryption methods for online processing, the EMV specs get a bit fuzzy. (Actually, they are probably perfectly clear if you have all the information necessary and the time to study it.) Basically, specifying this seems to be up to the payment system (i.e., the brand/network). What the specs suggest is that the ICC owns some sort of master key, which is shared with the issuer and used to derive session keys based on an application transaction counter. (The EMV Card Personalization Specification, then, specifies at least three 112-bit “master” keys for different purposes to be loaded onto the card, derived from a personalization master key.) Session keys can then be used to generate MACs over transaction request data, encrypt/decrypt data, etc. The generally recommended algorithm seems to be triple-DES for session data, with EMV allowing AES as an option.
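Conceptually, the derive-then-MAC flow might be sketched like this, with HMAC-SHA256 standing in for the brand-specific 3DES-based derivation and the application cryptogram MAC; the key sizes, data elements, and function names here are illustrative, not the actual EMV algorithms:

```python
import hmac, hashlib, struct

def derive_session_key(icc_master_key: bytes, atc: int) -> bytes:
    # Real EMV schemes derive triple-DES session keys from the ICC master key
    # and the 2-byte Application Transaction Counter (ATC); HMAC-SHA256 is a
    # stand-in for the brand-specific derivation here.
    return hmac.new(icc_master_key, struct.pack(">H", atc), hashlib.sha256).digest()[:16]

def generate_arqc(session_key: bytes, transaction_data: bytes) -> bytes:
    # The ARQC is a MAC over selected transaction data (amount, currency,
    # the terminal's unpredictable number, the ATC, ...), truncated to
    # 8 bytes like the 3DES-based application cryptogram.
    return hmac.new(session_key, transaction_data, hashlib.sha256).digest()[:8]

master = bytes(16)                      # shared by ICC and issuer
atc = 42                                # increments with every transaction
sk = derive_session_key(master, atc)
arqc = generate_arqc(sk, b"amount=1000;currency=840;un=1a2b3c4d")

# Issuer side: same master key + ATC -> same session key -> same cryptogram.
assert generate_arqc(derive_session_key(master, atc),
                     b"amount=1000;currency=840;un=1a2b3c4d") == arqc
```

Because the issuer shares the master key and can track the ATC, it can independently recompute the session key and validate (or generate) cryptograms without any key material ever crossing the wire.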
Other Random Questions
Other questions I was wondering about...
So how is use of the chip enforced for cards that have both a chip and a magnetic stripe?
Cards have different service codes depending on whether they have a chip or not, defined by ISO/IEC 7813. If a card is swiped at a terminal, the terminal can recognize the service code encoded on the magnetic tracks and prompt the user/attendant to use the chip reader instead. Unless I am missing something, this also means that once I have obtained the cardholder data I could create a fake card with an altered service code that would not trigger the terminal’s attention? (But probably the issuer’s fraud system’s, in an online transaction?)
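A minimal sketch of that service-code check, assuming the common reading of ISO/IEC 7813 that a first service-code digit of 2 or 6 indicates a chip is present (the example codes are hypothetical):

```python
def chip_expected(service_code: str) -> bool:
    """Per ISO/IEC 7813, a first service-code digit of 2 or 6 indicates
    that an integrated circuit (chip) is present on the card."""
    return service_code[0] in ("2", "6")

# A terminal reading these track-2 service codes after a swipe:
print(chip_expected("201"))   # chip card: prompt the user to insert instead
print(chip_expected("101"))   # magstripe-only card: swipe is acceptable
```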
2014-06-10: Editorial beautifications.
I gave a presentation to the Central Texas chapter of the ISSA last Thursday, entitled "Comparing NIST's Cybersecurity Framework with Best Practice". When I sat down to put the actual slides together, I struggled with defining what "best practice" actually means.
I believe the term has different connotations for different people. To me, it typically signifies practices that are commonly accepted as the right thing to do (an "industry standard"?), comprehensive, can be benchmarked against (both for internal performance and audit purposes), etc. But does best practice encompass the cutting edge of what might be possible in a certain area of expertise?
I have always considered ISO/IEC 27001 a best practice standard for the elements necessary to run an information security program, so that's what I chose to compare the Cybersecurity Framework to in my presentation. (See also my earlier blog post.) But I would argue that even if you are compliant with a best practice standard, you aren't typically done formulating your security management program. It's a start. It's much better than nothing. But:
When we are talking about compliance in the context of information security, the standards (or frameworks, or regulation) in question are typically bodies of work that contain — sometimes amongst other requirements — a catalog of security controls. Be that the controls in HIPAA's security and privacy rules, the Framework Core in the Cybersecurity Framework, Appendix A in ISO/IEC 27001, ...
Even if you managed to come up with a catalog of any possible control that might apply to any possible operational environment, how do you know which controls are the most significant ones for mitigating a particular organization's risks? Where should your priorities be? And how much effort should you invest into implementing any particular control?
These are questions answered by risk assessment and risk management activities. Risk is particular to individual organizations and their particular infrastructures, business objectives, operational environments, ... Almost all security compliance frameworks contain “controls” requiring us to perform risk assessments and manage controls based on their outcomes.
But it is easy to meet that requirement on paper by documenting a superficial risk assessment. Actively managing technology/IT risk in consideration of other business risks and overarching organizational objectives and exposure takes more than that. Which is what I tried to visualize with this little pyramid:
The complete slide deck is available on Slideshare.
I gave a talk at the BSides Austin conference yesterday. We looked at a number of authentication factors and did some threat modeling, with the attendees helping me to estimate the attack potential necessary to exploit certain vulnerabilities by voting on pre-defined adversary types. This included taking a look at how Steve Gibson's SQRL scheme works, possibly the most interesting part of the talk to many. ;-)
The slides are available on Google Drive.
To those of you who actually attended and participated in the online survey, thanks for humoring me!! Next time I include audience polls in a presentation, I shall figure out how to present the results in real time. But for now, below are the results of yesterday's poll. (Including those for the backup slides we didn't get to but that a fair number of you filled out anyway.)
Those of you just tuning in, please don't take these results out of context. We made a number of assumptions during the talk that aren't properly represented below. (But you can find most of them in the slides.) Also, these votes are best guesses by a small number of InfoSec professionals that aren't necessarily all experts in those particular exploit types. (Although some might be. ;-))
There were some surprising results (to me), maybe based on misunderstandings, but maybe you guys just know better than I do. The two that stood out to me most were:
The graphs below represent number of votes (on the y axis) for a given threat model from the slides (on the x axis).
(1) Passwords
Type of adversary necessary to succeed in (x axis):
1: social engineering (user)
2: social engineering (password reset)
3: theft of database (brute force guessing)
4: theft of database (improperly hashed)
5: keyboard logger
(2) Shared Secret- & Time-Based One-Time Passwords (soft & hardware)
Type of adversary necessary to succeed in (x axis):
1: weak RNG exploitation
2: cryptanalysis reduces brute force effort
3: (crypt)analysis of proprietary algorithms
4: extracting master key from HSM
(3) OTP tokens continued...
Type of adversary necessary to succeed in (x axis):
1: extract key from phone (remotely)
2: generate future time-based values
3: physical reversal of hardware
4: non-invasive analysis of hardware
5: malware-channeling of OTPs
6: Write-in: extract key from phone (physical access)
(4) Steve Gibson's SQRL
There weren't any threat vectors we hadn't voted on already.
1: Exploiting the smart phone OS to extract master key from memory -- see (3)(1) above...
2: Weakening the particular elliptic curve or ECC in general -- see (2)(2) above...
(Backup A) SMS / text message-based OTPs
Type of adversary necessary to succeed in (x axis):
1: base station spoofing
2: core network wiretap
3: sales agent social engineering
4: SIM cloning
5: lawful interception
6: phone / SIM theft
(Backup B) Public Key Tokens
Type of adversary necessary to succeed in (x axis):
1: malware on client OS
2: reader firmware replacement
This week, NIST published Version 1.0 of its Framework for Improving Critical Infrastructure Cybersecurity (aka Cybersecurity Framework). I reviewed the last draft for the framework here on the blog a while ago, and also sent some minor comments back to NIST. (Along with the major one to not try and reinvent the wheel. ;-))
Now that Version 1.0 is out there, I decided to spend a bit of time analysing how it compares to the approach of ISO/IEC 27001 (in short: 27001). 27001 provides industry-agnostic requirements for information security management, has been around for a while, and continues to gain traction. Read on if you are already using 27001 for managing information security in a critical infrastructure context, are contemplating it, or are just curious.
Cybersecurity Framework 1.0
Since its last draft, the Cybersecurity Framework (CSF) has seen a lot of polishing, and I mean this in a positive sense: Inconsistencies have been sorted out, language has been improved, etc.
The Executive Summary has some points that are worth stressing, notably that the framework is a voluntary one, and that it doesn’t replace proper risk management:
The Framework is not a one-size-fits-all approach to managing cybersecurity risk for critical infrastructure. Organizations will continue to have unique risks – different threats, different vulnerabilities, different risk tolerances – and how they implement the practices in the Framework will vary. Organizations can determine activities that are important to critical service delivery and can prioritize investments to maximize the impact of each dollar spent. ...
Along with the framework, NIST also published a roadmap outlining where it plans to take the framework from here, and US-CERT has bundled many of its cybersecurity tools and initiatives into a new Critical Infrastructure Cyber Community Voluntary Program. Good stuff!
Mapping 27001 Requirements and Controls to CSF Subcategories
One of my comments on the draft was that it provided “informative references” to (amongst others) the previous version of ISO/IEC 27001:2005, rather than to the recently published ISO/IEC 27001:2013. I was happy to see that this has been addressed in Version 1.0.
I took the liberty of reverse-engineering the mapping of CSF Subcategories to 27001 control objectives provided in the “Informative References” column of the CSF’s Framework Core. I also added 27001’s control objective statements (but not the text describing the control) for easier reference, and the “other” requirements from 27001 in the standard’s clauses 4 through 10.
The resulting Excel spreadsheet can be used to sort requirements one way or the other, and is hopefully of use to some of you:
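Reversing such a mapping is only a few lines of code; the sketch below inverts a partial, illustrative excerpt of the CSF-to-27001 informative references (not the full mapping from the spreadsheet) so it can be sorted by 27001 control instead:

```python
from collections import defaultdict

# Illustrative excerpt of the CSF Framework Core's "Informative References"
# column; entry contents are examples, not the complete mapping.
csf_to_27001 = {
    "ID.AM-1": ["A.8.1.1", "A.8.1.2"],
    "ID.AM-2": ["A.8.1.1", "A.8.1.2"],
    "PR.AC-1": ["A.9.2.1", "A.9.2.2", "A.9.4.2"],
}

# Reverse the mapping to sort "the other way":
# 27001 control objective -> CSF Subcategories.
iso_to_csf = defaultdict(list)
for subcat, controls in csf_to_27001.items():
    for control in controls:
        iso_to_csf[control].append(subcat)

print(dict(iso_to_csf))
```

Controls that never show up as a key in the reversed dictionary are exactly the 27001 control objectives with no corresponding CSF Subcategory.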
What follows is a bit of analysis:
24 CSF Subcategories Do Not Map to Any 27001 Control Objectives
However, ISO/IEC 27001 does not just provide a list of controls in its Annex A, just as the CSF does not simply provide a list of requirements in its Framework Core in Appendix A. Clauses 4 to 10 in 27001 constitute actual requirements for an organization’s information security management system in addition to the list of controls in the annex. I added a mapping of those requirements (on the lowest level of available section headers) to applicable CSF subcategories in my spreadsheet.
As a result, the remaining major areas where the CSF provides more detailed requirements than 27001 can be summarized as follows:
(When looking at the detail of requirements, though, it should be noted that ISO/IEC 27002:2013 is available to break down the normative (but short) controls from 27001’s Annex A into more detailed best-practice control activities.)
19 Control Objectives Have No Corresponding Subcategory
27001 contains a number of control objectives that are more specific than what the CSF offers, such as the requirements for a mobile device policy, acceptable use standards, regular review of access rights, provisions to be put into supplier agreements, etc. Many of them could arguably be mapped to higher level Subcategories in the CSF's Framework Core, but are not specifically implied by any Subcategories. Curiously, this contains requirements for the documentation of how cryptographic controls are used (A.10.1.1) and how keys are managed (A.10.1.2) -- a topic that is not addressed in the CSF at all.
Requirements from clauses 4 to 10 in 27001 that have no corresponding requirements in the CSF’s Framework Core mostly relate to aspects of running a well-documented (security) management system, including requirements for competent resources, clear objectives, documentation, internal audits, management reviews, and continual improvement. Some of this is, however, addressed by the four Framework Implementation Tiers loosely defined in CSF's section 2.2 to measure an organization’s maturity in implementing the CSF categories. (As far as 27001 is concerned, besides setting a minimum baseline for a functioning management system, process maturity is addressed in other standards and is out of scope.)
The “gap analysis” of correspondence between 27001 and the CSF performed above could easily be extended to review cross-references with COBIT 5, NIST SP 800-53, etc. (Maybe the Cloud Security Alliance’s Cloud Control Matrix will pick up another column for the CSF Subcategories now?)
What the analysis really shows is mainly this: Now we have yet another set of security controls to deal with (at our disposal, is what I meant to say ;-)), mostly overlapping with those suggested by already existing standards, and with only minor specifics that make it special for critical infrastructure cybersecurity. (Although there is certainly one major advantage that the CSF has over 27001: It’s free!)
Unsurprisingly, the recommended approach for managing information security thus stays the same: Develop a risk and security management system that addresses your organization’s particular risks. Create mappings from your processes to whatever standards and frameworks you desire (or have) to demonstrate compliance to. Circle back and address any compliance/control gaps in your system.
(For 27001 and the CSF, the tools suggested by the standards for providing conformance statements are actually somewhat compatible: The CSF introduces, on a superficial level, the concept of a Framework Profile, suggesting that an organization create Current and Target Profiles outlining which of the framework’s control Categories and Subcategories have been selected based on “business drivers and a risk assessment” (pg. 5). This is somewhat similar to the concept of a Statement of Applicability (SoA) required by 27001. Neither of the standards provides a detailed outline or template for these statements, 27001 being a bit more specific about minimum contents. This presents an opportunity to align demonstration of conformance with control statements from both standards by generating a joint SoA & Current Profile. Somebody should go ahead and create a template!)
2014-03-06: Fixed a hyperlink.
Nearly all compliance regulation and standards addressing aspects of information security (aka cyber security ;-)) contain a mandate for IT risk assessment (or analysis) and management.
Most often, these requirements are fairly generic. They do not prescribe a specific methodology for risk assessments, and they do not mandate a specific depth (level of detail) for performing them, either. This is both (generally) appropriate and (sometimes) problematic:
As a result, it is possible for inexperienced organizations to implement a superficial risk assessment process that meets the letter of the compliance requirements but otherwise doesn’t do the organization any good. This is why information security professionals like to point out on a weekly basis that compliance with, say, the PCI Data Security Standard doesn’t mean that a company (or their data) is secure, but that a good security program usually implies compliance without much additional effort.
This blog entry will help you recognize that risk assessments can (and should) be performed at different levels of abstraction. And I will provide some guidance on how to determine how much might be enough. (The question of which methodology to use is a different one and not discussed here. There are a number of decent ones out there.) Needless to say, the following discussion is based on the assumption that you actually want to do the right thing, rather than trying to get by with a minimum "check-box" attitude.
First, let’s look at the requirement for risk assessments in a few common standards and regulations:
HIPAA, in 45 C.F.R. § 164.308 (a)(1)(ii)(A), tells entities involved in healthcare:
Risk analysis (Required).
PCI DSS v3.0, requirement 12.2, tells the payment card industry:
Implement a risk-assessment process that:
Similar requirements can be found in best practice frameworks.
ISO/IEC 27001:2013, in laying out requirements for information security management systems, states in section 6.1.2:
The organization shall define and apply an information security risk assessment process that:
And NIST’s Preliminary Cybersecurity Framework advises the critical infrastructure industry in category ID.RA to consider that:
The organization understands the cybersecurity risk to organizational operations (including mission, functions, image, or reputation), organizational assets, and individuals.
Subcategories further spell out the need to identify and document vulnerabilities and threats for identified assets, analyze potential impacts, and identify risk responses.
Abstraction Levels of Risk Assessments
OK, so we have to perform risk assessments. But how much assessment is enough?
Risk assessments can obviously be performed at varying degrees of abstraction, each addressing (if so desired) the oft-required aspects of assets, threats, and vulnerabilities. (Implying that, eventually, you will also want to weigh the value of an asset and the perceived or measured likelihood or frequency of a threat actually occurring, in order to be able to qualify or quantify your risks in a meaningful way.)
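That weighing step can be sketched as a minimal qualitative scoring exercise; the register entries, the 1-5 scales, and the multiplicative score below are all hypothetical illustrations, not a recommended methodology:

```python
# A minimal qualitative risk register sketch: score = likelihood x impact,
# both on a 1-5 scale. All entries are made-up examples.
risks = [
    {"asset": "customer database", "threat": "SQL injection", "likelihood": 4, "impact": 5},
    {"asset": "cafeteria wiki",    "threat": "defacement",    "likelihood": 3, "impact": 1},
    {"asset": "payroll system",    "threat": "ransomware",    "likelihood": 2, "impact": 4},
]

for r in risks:
    r["score"] = r["likelihood"] * r["impact"]

# Sort so the biggest risks surface first for management review.
for r in sorted(risks, key=lambda r: r["score"], reverse=True):
    print(f'{r["score"]:>2}  {r["asset"]}: {r["threat"]}')
```

Even a toy like this makes the point of the pyramid: the scores only mean something if the likelihood and impact inputs come from people who understand both the technology and the business value at stake.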
For example, let’s consider this high-level analysis, fit to be understood by a VP or at a board level (adjust this view of corporate hierarchies accordingly to your organization size ;-)):
Thus, when we are supposed to perform a risk assessment to achieve compliance with a respective requirement, we often seem to have two options:
Without a high-level understanding of major risks, your board or VP won’t be able to provide the necessary acknowledgement, direction, and support to manage them in alignment with corporate objectives and to have your back; without breaking things down in sufficient detail for your IT department to understand the many different vectors through which a high-level risk might materialize, you might under- or overestimate the likelihood for things to happen or they might miss the point of which countermeasures will actually help to address things on an implementation level.
Regardless of whether we are instituting risk management based on compliance needs or simply as the foundation for proper information security management in an organization, the question remains “How much is enough?”. Do I need to engage every individual sysadmin in reviewing what they are responsible for and how it might affect our risk posture? Architects? Department heads? Or do I just sit in my office and try to make up things myself?
As so often in life (and information security), the answer is: It depends.
The Office for Civil Rights’ Guidance on Risk Analysis for HIPAA puts this fairly aptly by pointing out that “methods will vary dependent on the size, complexity, and capabilities of the organization”.
I can think of two main factors when trying to gauge how much effort you should put into performing and documenting your regular risk assessments: Organization size and the value of the organization’s assets. As always, common sense is an important aspect, too.
Size here might be influenced by the number of employees, complexity of operations, etc...
If the systems administrator is at the same time the Director of IT, he or she obviously has multiple levels of abstraction to consider (and a lot more diverse tasks on their plate, anyway). Does a smaller organization size mean that a flaw in your Apache web server isn’t as relevant as for a large company? That obviously depends on the type and value of information assets that could be affected if the flaw gets exploited by an attacker (for example, a static website vs. an underlying SQL database with 100,000 credit card numbers), and by the potential motivation of more or less resourceful attackers to exploit it.
A smaller organization does not typically mean that you can just overlook the lower-level threats, but the systems you have to deal with are likely less complex and fewer in number, and maybe the (aggregated) value of the assets you are trying to protect is lower, making it less likely that highly sophisticated attackers will waste their hard-earned zero-day exploits on you. The communication paths between the techies and whoever is at the helm of accepting the overall (IT and other organizational) risks are likely shorter, requiring less formal documentation and reporting structures. (I do NOT mean to imply that you can completely omit documenting your risks, just that having fewer stakeholders means that a simpler reporting structure might very well be sufficient.)
By contrast, a large enterprise with many different departments contributing to operations, production, IT, etc. that all depend on IT performing as expected and information being available and uncompromised probably needs to look at risks at multiple, hierarchical levels of abstraction, with a well thought-out aggregation of overall risks that allows the top of the food chain to prioritize which ones are acceptable and which ones need further treatment.
A system that protects assets worth $10 billion, or the IP on whose confidentiality the existence of your company hinges, might require more dedication in terms of assessing the risks associated with its individual technical components than the one that hosts the wiki documenting how to operate the microwave in the cafeteria. Prioritizing efforts based on asset values works both from a governance perspective when looking at overall company assets and the departments that contribute to protecting them, and from organizational and technical perspectives when trying to determine, for example, how the availability of individual systems would affect the overall capability of the organization to run its business in a more or less orderly fashion.
Defining An Approach
In general, a top-to-bottom approach might not be the worst idea when starting on a blank page. Understand what your major assets are, in terms of information and IT services that the organization depends on, and what they are worth to the organization. Then come up with types of events that might cause them harm. Since we are primarily talking about information and IT risks here, think about how IT is involved in protecting and enabling your assets and processes. This is where you might start polling more technical subject matter experts in your organization who can actually help you understand how likely certain things are to happen, and what else could go wrong with your IT. Circle back, maybe through multiple iterations, until you are confident that you have an understanding of your risks that is clear and complete enough to start managing them.
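To make the prioritization step concrete, here is a minimal sketch of the kind of qualitative ranking that often falls out of such an exercise. The asset names, scores, and the simple likelihood-times-impact formula are invented for illustration; real methodologies vary widely.

```python
# Hypothetical, simplified illustration: rank risks by likelihood x impact.
# Asset names and scores below are made up for the example.
assets = [
    {"asset": "CRM database", "impact": 9, "likelihood": 6},
    {"asset": "Public website", "impact": 4, "likelihood": 7},
    {"asset": "Cafeteria wiki", "impact": 1, "likelihood": 5},
]

def risk_score(entry):
    """Naive qualitative score: impact (1-10) times likelihood (1-10)."""
    return entry["impact"] * entry["likelihood"]

# Rank assets so the riskiest ones get assessment attention (and documentation) first.
for entry in sorted(assets, key=risk_score, reverse=True):
    print(f'{entry["asset"]}: {risk_score(entry)}')
```

The point is not the arithmetic, but that even a crude ranking gives you a defensible order in which to spend your limited assessment effort.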
An exercise that might help to get things started is to perform a business impact analysis that focuses on events that could cause potentially critical disruptions to the business. Those can then be taken and looked at from a more formal risk assessment perspective.
Whatever you do, settle early on a methodology and documentation framework that meets your needs. Otherwise, things will end in chaos!
COBIT, in particular COBIT 5 for Risk, also has some advice on “how much is enough”: It advertises the technique of using “risk scenarios” for performing assessments, and advises that the number of scenarios “should be representative and reflect business reality and complexity”, further explaining that:
Risk management helps to deal with the enormous complexity of today’s IT environments by prioritising potential action according to its value in reducing risk. Risk management is about reducing complexity, not generating it [...]. However, the retained number of scenarios still needs to accurately reflect business reality and complexity.
In addition to figuring out the “depth” of your risk assessment activities, determining an appropriate organizational scope will also help with defining how much effort to spend. If parts of an organization can be identified whose operations do not have an impact, or less of a potential impact, on the risk to assets under consideration, then it may be appropriate to exclude them from initial risk assessment efforts.
Lastly, keep in mind that risk assessments aren’t a one-time thing. You need to circle back on a regular basis and consider new technologies and evolving threat landscapes. Ideally you will also collect metrics allowing you to qualify the understanding of your risks and the effectiveness of your mitigation measures.
The purpose of performing risk assessments is to enable informed decisions about managing the risks that an enterprise faces. Consequently, the key to determining how much effort to spend on risk assessments is to figure out how much information is needed in order to allow for informed risk management and governance.
Regardless of an organization’s size, this requires a certain amount of understanding of the subject matter at hand. These days, information (and IT) assets are often of significant value to a business, if not in and of themselves then through their ability to affect all other aspects of operations. Unless a dedicated information security manager is at hand, it is likely the job of the CIO (or equivalent) position to obtain a reasonable understanding of the associated risks in order to a) communicate them to the governance level of the organization and obtain direction on priorities and risk appetite/tolerance, and b) direct staff to both assess and treat risks at the technological level.
It may well be appropriate for a three-person shop to not spend much time on formally assessing IT risks and rather concentrate its efforts on developing the technology that earns the company money. But not giving any thought at all to what could go wrong might be catastrophic to the business if an unanticipated threat materializes.
In larger organizations, clearly defining risk ownership, metrics, and reporting relationships throughout the company hierarchy will help to ensure proper assessment and management of risks.
In either situation, performing a risk assessment on paper just to satisfy a compliance requirement and without actually gaining a clearer understanding of the organization’s exposure and an opportunity to address it (maybe even in a way that actually adds value to your business) seems like a waste of time.
I wrote a white paper on risk-based authentication: Download it here as a PDF.
At conferences and trade fairs, I have run into the term risk-based authentication a lot recently. There is really nothing new to implementing authentication measures that are commensurate in effectiveness with the value of the information to be protected. (Iris scanners to access your email, anyone?) What is new is adding dynamically determined amounts of authentication to systems serving the mass market. These systems are exposed to frequent, wide-reaching subversion attempts by attackers seeking to gain access to the personal information held in users' accounts. The realization that the average mass-market user cannot be expected to protect their account effectively with strong passwords and avoidance of malware on their personal devices leads to the need to protect access to servers with additional measures that inconvenience users as little as possible.
Have you ever noticed that your web banking interface, on what might seem like random occasions, starts asking you those security questions on file as part of the authentication process, while at other times it just lets you in with your user ID and password? Likewise, your favorite social network may prompt you to re-authenticate yourself even from a trusted browser session if that session was initiated in North America and suddenly resumes from an IP address believed to belong to an organization in Asia. Those are examples of risk-based authentication -- factoring in the context of an authentication attempt (geographical origin, time of day, etc.) and comparing it to a profile of expected parameters.
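Conceptually, these schemes boil down to scoring the context of a login attempt against a stored profile and demanding step-up authentication when the score crosses a threshold. The following sketch is purely illustrative; the signals, weights, and threshold are invented, and real products use far richer models.

```python
# Hypothetical sketch of risk-based authentication: compare the context of a
# login attempt to a stored profile and require step-up authentication above
# a threshold. Signals, weights, and threshold are made up for illustration.
EXPECTED_PROFILE = {
    "country": "US",
    "device_id": "browser-abc123",
    "usual_hours": range(7, 23),
}

def assess_login(context, profile=EXPECTED_PROFILE, threshold=40):
    score = 0
    if context["country"] != profile["country"]:
        score += 50   # geographic anomaly weighs heavily
    if context["device_id"] != profile["device_id"]:
        score += 30   # unrecognized browser/device
    if context["hour"] not in profile["usual_hours"]:
        score += 10   # unusual time of day
    # Below the threshold, user ID and password suffice; above it, ask the
    # security questions (or send an OTP, etc.).
    return "step-up" if score >= threshold else "allow"

print(assess_login({"country": "US", "device_id": "browser-abc123", "hour": 14}))  # allow
print(assess_login({"country": "CN", "device_id": "browser-abc123", "hour": 3}))   # step-up
```

Note how no single weak signal (an unusual hour, say) triggers step-up on its own, but combinations of anomalies do; that is what keeps the inconvenience to legitimate users low.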
Using the word risk in risk-based authentication is not completely inappropriate; we are dealing with mechanisms to treat authentication attempts that occur under circumstances indicating that a possible fraud attempt might be underway, i.e. we are perceiving an increased risk for unauthorized access to an account. But I find it somewhat unfortunate. It is possible to employ risk-based authentication solutions without ever properly looking at an organization's actual risk. The solutions do not perform your due-diligence risk assessment for you. It is simply a fraud detection and reaction mechanism, similar in concept to what payment card issuers do in order to detect potentially fraudulent (risky ;-)) transactions.
I ended up going down the rabbit hole and writing up a (solution and vendor agnostic) white paper to not only dissect the (fairly obvious) mechanisms involved in these authentication schemes, but also provide some advice on how (and to what extent) they can contribute to addressing risk in an organizational context, and how they compare to traditional two-factor authentication. Hopefully, this will help put risk-based authentication solutions and their potential value into some useful context.
Feedback is appreciated, as always!
Also, new posts on this page will be announced in our email newsletter from now on. Subscribe on this page!
Bring Your Own Device (BYOD) is a hot topic in security management circles these days. This blog post provides some guidance on implementing a framework that allows users to access corporate assets from their privately owned devices.
In writing this, we assume that your organization does not already use third party solutions for managing user-owned end points, such as mobile device management (MDM) or mobile application management (MAM) services. Those may (or may not) replace some of the activities suggested here and alleviate some of the reliance on users, for example when it comes to initiating a remote wipe of your organization’s data on the device in case it is lost.
BYOD may not be limited to smart phones and tablets. You might be contemplating letting users bring their private laptops to work as well. This blog entry focuses primarily on mobile devices, but apart from the examples provided, the principles outlined here apply to full-fledged PCs as well.
What Are the Risks?
The management of user-owned devices in your IT ecosystem should be informed by an assessment of the threats to organizational assets facilitated by the presence of those devices, and the likelihood and magnitude of potential damage that might be caused if those threats materialize. If your organization is already dealing with company-issued devices, a risk assessment for user-owned devices will likely look similar, with some nuances and adapted countermeasures based on the fact that the organization does not have complete control (or governance) over these devices. This fact may also lead to a different set of residual risks. The results of this kind of analysis will allow you to make informed decisions about the level of security measures you may want to implement.
An example of a question that a risk assessment would help answer in a more qualified fashion than just operating based on gut feeling would be: Are you happy to rely on the password/PIN barrier implemented by the operating system of a (particular type of) device in order to protect organizational assets, or do you need to take into account that your organization’s adversaries might have the resources and motivation to circumvent that protection, warranting the need for additional protection measures?
How To Get a Grip on User-Owned Mobile Devices?
You will need to augment your security management system with policies and procedures to clearly spell out under which circumstances and for which purpose employees are allowed to use their own mobile devices to access your systems and networks, and what is expected of them in return for that privilege.
Since the company does not own the device, it is important to assert the organization’s authority over certain aspects of its management in order to address situations like an employee leaving the company, losing their device, or your monitoring efforts indicating that their device may have been compromised. For example, you cannot just initiate a remote wipe of the device in case it is lost, because you don’t have access to the user’s credentials required for this. (And for reasons of privacy protection and liability, you probably don’t want to have that access, either.) This requires clear upfront communication about expectations, and – to the extent possible – some sort of legally binding consent.
As discussed above, the configuration settings an organization prescribes for user-owned devices and the incident response procedures involving those devices should be based on threat modeling and risk management.
Clear Communication is Key
Depending on the outcome of your risk assessment, expectations that should be addressed with your users and that they should sign off on might include things like:
Also note that getting into legal arguments over a user's device is probably not going to help you resolve a security incident involving that device in a short-term fashion. Educating users upfront about the potential need for their cooperation seems like a more efficient use of resources.
More Thoughts on Best Practice
A likely result to come out of your analysis is that you probably want to limit the amount of organizational data on the user’s device as far as possible, with or without making tradeoffs to usability based on the risks you are looking at. It is easy to disable VPN access to your network for a lost device. It is much harder to purge copies of trade secrets from a device that is lost, even more so if you didn’t own that device in the first place. Do your users really need access to all of that CRM data while their device is not connected to the CRM server?
This leads to a related, crucial task: creating an inventory of the (types of) corporate data that might be located on a user’s device. Email, company directories, calendar entries, documents with sensitive information, etc. For one, if you don’t know which of your information assets might end up on those devices, your risk assessment might end up being incomplete. It will also be harder to remind users which types of data they have to delete from their device when leaving the organization, to assess the potential damage of a data compromise on someone's device (how many credit card numbers!?! ;-)), etc.
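Such an inventory doesn't need to be fancy to be useful. As a purely hypothetical sketch (the data types and sensitivity labels are made up), even a simple mapping lets you check whether your risk assessment has covered everything that might live on those devices:

```python
# Hypothetical inventory of corporate data types that may reside on
# user-owned devices, with made-up sensitivity labels.
DEVICE_DATA_INVENTORY = {
    "email": "internal",
    "company directory": "internal",
    "calendar entries": "internal",
    "sales documents": "confidential",
}

def uncovered_assets(risk_assessment_scope, inventory=DEVICE_DATA_INVENTORY):
    """Return data types on devices that the risk assessment has not covered."""
    return sorted(set(inventory) - set(risk_assessment_scope))

# Which device-resident data types did our assessment miss?
print(uncovered_assets(["email", "calendar entries"]))
# ['company directory', 'sales documents']
```

The same inventory doubles as an offboarding checklist of what a departing user needs to delete.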
It should also be best practice to keep mobile devices from connecting directly to your internal network. Provide a separate wireless network and require devices to establish a VPN connection if they want to access internal resources. This allows more fine-grained control over the resources that can be accessed by those devices, and keeps unauthorized devices and/or users out of the corporate network.
It is important that BYOD policies and procedures are based on proper risk management. You can implement best practice policies found on the Interwebs (and this blog) as much as you like, but some of them might be overkill depending on your particular situation, and the summary of all of them might still not address a particular threat that only your organization is exposed to. Or maybe you don’t trust end points at all, and there’s not much you have to care about here anyway. ;-)
Clear communication with users about responsibilities imposed on them in return for letting them access organizational assets with their privately owned devices is key. Users need to buy into the organization having a stake in their beloved smart phone or tablet in order for you to be able to protect organizational assets properly.
Sorting all of these things out in advance is most certainly an advantage over having to deal with them in hindsight while trying to deal with a breach that may have occurred through a user-owned device on your network.
Further Reading
cio.com: How BYOD Puts Everyone at Legal Risk
2013-11-22: Edited to add the "Further Reading" section.
The National Institute of Standards and Technology (NIST) recently released an official draft of its Cybersecurity Framework for America’s critical infrastructure, i.e. the infrastructure that contributes to keeping the US and its economy running. Think power generation and distribution, transportation, communication networks, …
The framework is a direct result of President Obama’s Executive Order 13636, which stipulates:
The Cybersecurity Framework shall include a set of standards, methodologies, procedures, and processes that align policy, business, and technological approaches to address cyber risks.

On a high level, the Cybersecurity Framework (CSF) defines five functions for an organization’s security management: Identify, Protect, Detect, Respond, and Recover. It then specifies a number of controls for organizations to consider, organized into categories and subcategories, with cross-references to existing best practice standards. And it lays out different maturity “tiers” for information security management systems.
Other standards exist that have similar aims, and have been readily available to private industry. Notably, ISO/IEC 27001 specifies a fairly comprehensive set of security requirements and guidelines and has been around for almost a decade. COBIT also comes in an information security (pardon me, cybersecurity) flavor. And there are more. What they have in common is that — to date — they are used mostly (and if at all) by organizations who have the dedication, resources, and maturity to run a sophisticated security operation.
Like ISO/IEC 27001 and COBIT, and unlike a compliance standard like the PCI Data Security Standard, NIST’s Cybersecurity Framework provides a set of guidance references that is designed to give an organization freedom to implement specific controls in a way that meets that organization’s risk exposure and business environment. So what is the difference between the framework and existing standards?
Focus on Critical Infrastructure
While born out of a specific initiative to better protect the critical infrastructure in the US, the CSF is largely (and thankfully) industry-agnostic. There are obviously references to industrial control systems in the framework, and a few of the recommended control “subcategories” address particular needs of the critical infrastructure sectors. But overall, the framework presented can serve organizations in sectors other than the critical infrastructure ones as a useful guide as well.
Thanks to the executive order spelling this need out explicitly, the Cybersecurity Framework contains an appendix with privacy considerations related to the individual categories that the framework defines for security management. Given that a lot of utilities process large amounts of personal information in order to bill their customers for their services and vet their creditworthiness, and also just in general, this is a good idea. I wonder why it wasn’t possible to simply integrate these with the overall set of controls, though.
Executive Focus and Maturity Levels
The CSF makes an effort to provide practical guidance to organizations on governance for their cybersecurity efforts, and on how to communicate risks to executive management. Given that a fair number of the targeted organizations are probably lacking in that area or are likely not to have very mature security organizations and pretty much starting from scratch, that’s useful.
The framework also defines maturity levels, referred to as “Framework Implementation Tiers”, in an effort to give organizations a (very) basic understanding of where they are and where they might want to be with their security management efforts. Something to compare each other to, if you will.
Emphasis on External Cooperation
Both the implementation tiers and some of the functional categories underscore the need to not only understand an organization’s dependencies on external third parties, but to actively communicate with “partners” and share information about risks and concrete events that might be threatening the industry.
The need for this kind of (threat) information sharing is also stressed in the executive order. It’s good to put emphasis on it, but I am curious to see how successful this will be. While it certainly increases the resilience of the individual organizations participating in the sharing of security intelligence and, as a result, of the whole industry, it’s not necessarily a given between industry peers (though not unheard of) to share this or any other kind of information. Not just for competitive, but also for liability reasons.
It certainly seems that NIST has had a decent amount of participation and input from various stakeholders in defining the CSF. In my opinion, the document needs some work on making its language consistent, is a bit rough on maintaining a uniform level of detail when it comes to defining its “cybersecurity activities”, etc. — things that are acceptable in a draft.
I am, however, not at all convinced that re-inventing definitions of security controls instead of simply integrating perfectly adequate existing ones by reference (like those from ISO/IEC 27001 and 27002) is the most efficient use of resources. I don’t see an improvement in the newly defined controls or framework language that would make it easier for the targeted consumers to understand what they are supposed to do, compared to existing standards.
In fact, when I was searching the web for additional cross-references between the preliminary framework’s controls and existing standards, I ran into an alternate proposal by Phil Agcaoili that does exactly this: integrate existing standards by reference. Which seems to make more sense to me.
Regardless: Like any other standard attempting to provide a comprehensive list of best practice controls, the CSF relies on an organization’s risk management processes to prioritize the pre-defined controls and identify additional ones that might be organization-specific. (Agcaoili’s framework proposal goes further here than NIST’s, too, providing some prioritization guidance based on breach surveys, and concrete examples on how to visualize risk.)
But whether or not (yet another) framework is available that identifies best practice controls, managing risk and an information security organization still requires effort and expertise. The framework might make it easier for smaller organizations to get started on this, but it can’t deliver a ready-to-operate system. For larger organizations and the industry in general, it might serve as a common-language tool to communicate requirements to suppliers or industry sectors, and to report on their own state of information security management.
The Cybersecurity Framework is part of a larger initiative to motivate the private industry that operates critical infrastructure in the US to enhance its security posture, both individually and in cooperation with peers, government agencies, and industry-specific interest groups. I predict that in the end, and regardless of its final form, whether the framework will be a success will depend largely on the incentives that the government (or natural selection) can come up with to motivate organizations to implement it.
This post first appeared on Medium.
David Ochel is a technology risk and information security management professional.