Townsend Security Data Privacy Blog

FIELDPROC Encryption and Backup Protection

Posted by Patrick Townsend on Jul 31, 2015 2:18:00 PM

IBM introduced Field Procedures (FIELDPROC, or FieldProc) on the IBM i (AS/400, iSeries) platform in V7R1 of the operating system. It is a strategically important implementation and is a permanent part of the DB2 for IBM i database going forward. The FieldProc implementation is an event-driven exit point directly in the DB2 database and is invoked for most of the standard CRUD operations (but not delete). While the FieldProc implementation can be used for many things, IBM i customers primarily use it as a mechanism to automatically encrypt and decrypt data at the column level. It is now a widely adopted and deployed method for data protection on the IBM i and IBM System z Mainframe editions of the DB2 database.

While the benefits of the data-at-rest protection offered by FieldProc encryption are clear, our customers often ask us if FieldProc encryption will also protect their backups. It is a good question, because there are times when making a copy of a file protected with FieldProc encryption causes the data to be decrypted during the copy. So does DB2 data remain protected with normal IBM i backups?

Fortunately, the answer is Yes - your backups will be protected with FieldProc encryption when you use any of the normal SAVE commands on the IBM i platform including commands like Save Object (SAVOBJ), Save Library (SAVLIB), Save Save File Data (SAVSAVFDTA), Save Changed Objects (SAVCHGOBJ), and the various IBM Backup Recovery and Media Services (BRMS) commands.

While it is rare, I have seen some uses of the Copy File (CPYF) command to copy data to backup tapes or files. In this case your data will be automatically decrypted during the copy operation and will NOT be protected in the backup image. To save data in encrypted format ALWAYS use one of the IBM save commands, the IBM BRMS application, or any third party backup solution that uses the IBM SAVE commands.
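
For clarity, here is a minimal CL sketch of the difference (the library, file, and tape device names are illustrative):

/* Saved with a SAVE command: the data stays FieldProc-encrypted on the media */
SAVOBJ OBJ(CUSTOMER) LIB(PRODLIB) OBJTYPE(*FILE) DEV(TAP01)

/* Copied with CPYF: FieldProc decrypts the data during the copy */
CPYF FROMFILE(PRODLIB/CUSTOMER) TOFILE(BACKUPLIB/CUSTOMER) MBROPT(*REPLACE) CRTFILE(*YES)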

Another related question we often get is how to verify that the data is actually encrypted on the backup image. This is a good question because security auditors often want an independent verification of the encrypted status of the data. One way to verify the encrypted status of the data is to use the IBM Dump Tape (DMPTAP) command to dump the contents of the tape after a save operation. Try saving the file without FieldProc encryption, then save it with FieldProc encryption enabled. The Dump Tape command will show the contents of the data and you can easily see unencrypted values or encrypted values in the dump reports. Note that you may need to turn off save compression in order to view the data with this method.
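
Here is a minimal sketch of that verification, assuming a tape device named TAP01 (prompt the commands with F4 to adjust the parameters for your environment):

/* Save without data compression so the dump report is readable */
SAVOBJ OBJ(CUSTOMER) LIB(PRODLIB) OBJTYPE(*FILE) DEV(TAP01) DTACPR(*NO)

/* Dump the tape and review the spooled report for plaintext or ciphertext */
DMPTAP DEV(TAP01)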

Another way to verify the encrypted status of data is to use the same procedure, but save the file or table to a save file (SAVF). You can then use FTP to transfer the file to your PC in binary mode and use a file viewer to review the contents. Unfortunately, you can’t use the Display Physical File Member (DSPPFM) command as it does not display save files. On your PC you might like to use a utility like UltraEdit as it can view data in the EBCDIC character format. You can easily determine that your data is encrypted in the save file.
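
A minimal sketch of that approach, again with illustrative names:

/* Create a save file and save the FieldProc-protected file into it */
CRTSAVF FILE(QGPL/VERIFYSAVF)
SAVOBJ OBJ(CUSTOMER) LIB(PRODLIB) OBJTYPE(*FILE) DEV(*SAVF) SAVF(QGPL/VERIFYSAVF)

/* From your PC: FTP to the IBM i, enter BINARY mode, then GET QGPL/VERIFYSAVF */
/* and open the downloaded file in a viewer that can display EBCDIC           */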

Either of these techniques can be used to verify the encrypted status of your files when saved with FieldProc active. You can rest assured that your data is protected on backup tapes and images and that the encryption key is not stored with the data!

Townsend Security provides a FieldProc implementation in our Alliance AES/400 solution. It integrates seamlessly with our Alliance Key Manager solution which manages encryption keys through the entire key life cycle. The Alliance AES/400 solution is the only IBM i FieldProc encryption solution that is NIST validated for the AES encryption library, and which combines this level of encryption with a NIST validated encryption key management solution, giving you provable compliance with industry standards.

Backup data protection is a great added benefit to FieldProc encryption on the IBM i platform. I hope this discussion helps resolve any question you have about FieldProc encryption and backup protection.

Patrick

Topics: IBM i, FIELDPROC

Which IBM i User Gets to My Log Collection Server and SIEM?

Posted by Patrick Townsend on Jul 10, 2015 10:44:00 AM

IBM i (iSeries, AS/400) users are often confused about user names in the IBM security audit journal QAUDJRN, and how they are reported to their log collection server or SIEM solution by Alliance LogAgent. To understand this it is important to know that every batch or interactive job on the IBM i platform actually has two user names: a job user name and a current user name (sometimes called the effective user). These two user names are often the same, but there are many times when they are different. Let’s take a look at some examples.

The IBM FTP server runs under the IBM user name QTCP as it waits for a connection from an FTP client. The user name QTCP is provided by IBM and is used for a number of network services. When an FTP client connects to the IBM FTP server and logs in, the job user remains QTCP but the current user is now the name of the actual user who logged in to the FTP session. If a user named BILL logged in you would then have these two user names:

Job user name: QTCP
Current user name: BILL

Both of these user names are recorded in the IBM security audit journal. You can see this information when you use the Display Audit Journal Entries (DSPAUDJRNE) command. Try selecting the job start event “JS” and you will see this in the output:

Entry Type Effective User Job Type Job SubType Job Name Job User Job Number Job Name Job User
JS M BILL B QTFTP00548 QTCP 803244 QTFTP00548 QTCP
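
If you prefer to work from a command line, here is a minimal sketch (the JS entry type is shown; other entry types work the same way):

/* List job start (JS) entries formatted by the audit journal reporting command */
DSPAUDJRNE ENTTYP(JS)

/* Or display the raw journal entries directly from QAUDJRN */
DSPJRN JRN(QSYS/QAUDJRN) ENTTYP(JS)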

But there is a big difference in the capabilities and security risk between these two users. The user QTCP is an IBM supplied user with no ability to log into the system, and the user BILL is an actual user whose authorities and capabilities are in effect. If BILL is a highly privileged user he will have the ability to do a lot of damage and may even be able to retrieve any database file on the system.

Monitoring both user names in your SIEM solution and retaining the history of the activity on these two users is critical for your security strategy on the IBM i.

In Alliance LogAgent we collect and report both of these names when sending information to your log collection server and SIEM solution in the Syslog format. When you look at these events you will see something like this:

user_name=”QTCP”
effective_user=”BILL”

If you are using the Common Event Format (CEF) that is preferred by HP ArcSight’s SIEM solution, you will see information like this:

suser=QTCP
eff_user=BILL

If you are using the new IBM QRadar log event extended format (LEEF), Alliance LogAgent will send the information like this:

user=QTCP
usrName=BILL

The “usrName” keyword is predefined in IBM QRadar and is the user credential that is monitored for anomalies and suspicious behavior, so it is important that the effective user be supplied in this case.

Both user names contain important security information, and both should be reported to your SIEM solution for active monitoring. Alliance LogAgent always sends both user names to make your monitoring and security strategy more effective.

Topics: System Logging, IBM i, Alliance LogAgent

How Many Encryption Keys Should I Create to Protect My Data?

Posted by Patrick Townsend on Jul 1, 2015 10:30:00 AM

As a security architect, security administrator or database administrator, one of the first big questions you face with the encryption of data at rest is how to organize, plan, and implement encryption keys to protect that data. Should you use one key for everything? Or, should you use a different key for each application? Or, perhaps you should use a different key for every table and column? Or, should you use a different key for each department? It is hard to find good security best practice guidance on this topic, so let’s put some focus around this question and see if we can come up with some general principles and guidance.

First, I like to start by identifying any applications or databases that contain highly sensitive information such as credit card numbers, social security numbers, or other personally identifiable information. These sources will be the high-value targets of cybercriminals, so you will want to protect them with your best security. For each of these applications and databases, assign encryption keys that are not used by any other application or database, and carefully monitor the use of these keys. Your encryption key management solution should help you with monitoring key usage. The objective is to protect the highly sensitive data and the related encryption keys from unauthorized access. If you have multiple sensitive applications and databases, assign each its own unique key.

Second, identify all of your major applications that are used across a broad set of departments within your company. Since these applications span multiple departments and will have a broad set of users with different needs, you should assign each of these applications its own specific encryption keys. In the event one application or database is compromised, it will not affect all of the other applications and databases.

Third, the remaining applications and databases are probably those that are used by one specific department within your organization. You will probably find that most departments in the organization have a number of specialized applications that help them get their work done. In terms of raw numbers, this might be the largest category of applications. Assign each department its own set of encryption keys that are not used by other departments. You may find that you need to sub-divide the department and assign keys for each sub-group, but the goal is to use encryption keys for the department that are not shared with other departments.

Lastly, cloud implementations are a special category and should always have separate keys. In the event that a Cloud Service Provider experiences a security breach, you will want to be sure that your internal IT systems are not affected. Assign specific encryption keys for your cloud applications and do not share the keys with internal, non-cloud applications.

Over the years I’ve occasionally seen organizations create and use a very large number of keys. In one case a unique key was used for every column and row in a table. In another case a different key was used for every credit card transaction. Large numbers of keys present management problems and probably lower overall security. Keep the number of encryption keys to a manageable level.

The above guidelines should help you protect your sensitive data and easily manage your encryption keys. Here is a summary table of the above guidelines:

Highly sensitive data and applications: Assign and use unique, non-shared encryption keys. Do not share keys across application and database boundaries. Carefully monitor encryption key usage.
Broadly used applications and databases: Assign and use unique, non-shared encryption keys. Do not share keys across application and database boundaries.
Departmental applications and data: Assign and use departmental encryption keys. Do not share keys among departments.
Cloud applications: Assign and use unique encryption keys. Do not share encryption keys with non-cloud IT applications.

There are always exceptions to general rules about how to deploy encryption keys for the best security. The above comments may not be appropriate for your organization, and you should always adjust your approach to your specific implementation. Hopefully the above will be helpful as you start your encryption project.

Topics: Best Practices, Encryption Key Management

Configuring the IBM i to Collect Security Events

Posted by Patrick Townsend on Jun 23, 2015 8:09:00 AM

Our Alliance LogAgent customers often ask us which IBM i security events we transmit from the IBM security audit journal QAUDJRN to their log collection server or SIEM solution. There are several factors that affect which security events get collected by the IBM i operating system, and even which events are collected by Alliance LogAgent for transmission to your SIEM server. Let’s take a look at these:

When your new IBM i server is delivered it is not configured to collect any security events. You must create the QAUDJRN journal and the journal receiver as a first step. Then you must change some system values in order to activate security event collection. This is the first step in answering the question about which security events Alliance LogAgent transmits. It can only transmit the events you enable, and you set these with the system values.

The first system value you must set is QAUDCTL. When you receive your new IBM i platform this system value is set to *NONE meaning that no security events are collected. You should probably change this to:

*AUDLVL
*OBJAUD
*NOQTEMP
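
A minimal CL sketch of this initial setup follows (the receiver library, name, and threshold are examples; the audit journal itself must be QSYS/QAUDJRN):

/* Create the journal receiver and the security audit journal */
CRTJRNRCV JRNRCV(QGPL/AUDRCV0001) THRESHOLD(100000) TEXT('Audit journal receiver')
CRTJRN JRN(QSYS/QAUDJRN) JRNRCV(QGPL/AUDRCV0001) MNGRCV(*SYSTEM) DLTRCV(*NO)

/* Turn on auditing (each value in the list is shown padded to 10 characters) */
CHGSYSVAL SYSVAL(QAUDCTL) VALUE('*AUDLVL   *OBJAUD   *NOQTEMP')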

You now need to set the QAUDLVL and QAUDLVL2 system values to specify the type of events you want to collect. On a new IBM i server these system values are blank. IBM makes it easy to collect the security events through a special system value named *SECURITY. If you set the QAUDLVL system value to *SECURITY you will collect only the security-related events on the IBM platform. Of course, there are other events that you might like to collect. Press the F1 help key to view a complete list of events. If they won’t all fit in the QAUDLVL system value just add them to the QAUDLVL2 system value and specify *AUDLVL2 in the list.
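
A minimal sketch, using *SECURITY plus a couple of commonly added event categories as examples (again, each value in the list is shown padded to 10 characters):

/* Collect the security-related events plus anything listed in QAUDLVL2 */
CHGSYSVAL SYSVAL(QAUDLVL) VALUE('*SECURITY *AUDLVL2')

/* Overflow list for any additional event categories you decide to collect */
CHGSYSVAL SYSVAL(QAUDLVL2) VALUE('*AUTFAIL  *SAVRST')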

You can now use the Change User Audit (CHGUSRAUD) command to audit users. I would suggest you turn on full user auditing for any security administrator, any user with All Object (*ALLOBJ) authority, and any user with audit (*AUDIT) authority.
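
A minimal sketch for one privileged profile (the profile name and the specific audit values are examples; review the full list of AUDLVL values on your release):

/* Turn on action and object auditing for a security administrator profile */
CHGUSRAUD USRPRF(SECADMIN) OBJAUD(*ALL) AUDLVL(*CMD *CREATE *DELETE *OBJMGT *SECURITY *SERVICE *SPLFDTA *SYSMGT)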

You can also turn on object level logging with the Change Object Auditing (CHGOBJAUD) command. Be sure to specify all libraries and files that contain sensitive data. Do the same thing for IFS directories using the Change Audit (CHGAUD) command.
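
A minimal sketch with illustrative library, file, and directory names:

/* Audit all access to a sensitive database file */
CHGOBJAUD OBJ(PRODLIB/CUSTOMER) OBJTYPE(*FILE) OBJAUD(*ALL)

/* Audit a sensitive IFS directory and everything beneath it */
CHGAUD OBJ('/payments') OBJAUD(*ALL) SUBTREE(*ALL)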

You’ve completed the first step in configuring security event collection. Alliance LogAgent can only report what you configure the system to collect and this first step defines those events.

Alliance LogAgent can also be configured to filter security events. The default is to report all of the events collected in the system audit journal QAUDJRN, but you can narrow these to a defined set of events. In the Alliance LogAgent configuration menu you will see an option to Work With Security Types. This will list all of the event types collected in the QAUDJRN journal. You can use function key F13 to set group patterns, or change each event. The F13 option is nice because it has a *SECURITY option that will let you set all security events on for reporting. Or, you can edit an individual security event to change its reporting status. For example, to turn off reporting of Spool File actions, edit the SF event and change the reporting option to No:

Send to log server . . . . . . . 2     1=Yes, 2=No

When you make this change Alliance LogAgent will no longer send spool file action information to your SIEM solution.

It is not wise to turn off the reporting of security events in Alliance LogAgent! You will always want to collect and report these events.

Setting the system values and configuring Alliance LogAgent security events are the primary ways you determine which events are transmitted to your log collection server. There are additional filtering options in Alliance LogAgent to include or exclude objects, IFS files and libraries and these can help you further refine the events that are transmitted.

Topics: System Logging, IBM i, Alliance LogAgent

How Much Data Does Alliance LogAgent Send to My SIEM?

Posted by Patrick Townsend on Jun 19, 2015 8:27:00 AM

Our customers often ask how they can manage the amount of data that Alliance LogAgent sends to their SIEM active monitoring solution. It’s an important question because most SIEM solutions license their software based on the number of Events Per Second (EPS) or by the number of Gigabytes per day (GBD). So managing the volume of data has an important cost benefit as long as you don’t undermine the effectiveness of the security monitoring!

There are some things Alliance LogAgent inherently does to help with the volume of data, and there are some things you can do, too. Let’s look at both of these areas.

First, Alliance LogAgent reduces the amount of data sent from the IBM security audit journal QAUDJRN by extracting only the information that has relevance to security from each journal entry. Each journal entry has a 610-byte header, and most of the information in the header has no security relevance. The actual event information that follows can be several hundred bytes in length. The average journal entry is about 1,500 bytes in length. Alliance LogAgent extracts and formats the important information into one of the Syslog formats. The result is an event with an average size of 380 bytes.

That is a 75% reduction in the amount of data sent to your SIEM solution!

Alliance LogAgent also gives you the ability to meter the number of transactions per second that you are sending. The IBM i server can generate a large number of events, and throttling the transactions with this configuration option can help you reduce and control SIEM costs. It can also help minimize the impact on your network capacity. This is a great option if your SIEM solution is licensed based on the number of Events Per Second (EPS).

In the second category are things you can do to minimize the number of events that are processed using various Alliance LogAgent configuration settings. Let’s take them one at a time:

Selectively send journal entry types
The IBM security audit journal QAUDJRN collects security events and general system information. Some of the general system information may have no security relevance, and Alliance LogAgent allows you to suppress the transmission of these events. For example, the security audit journal may have information about printed reports (journal entry type SF for spool files) that have been produced on your system. If this information is not needed for security monitoring, you can turn off the event reporting in Alliance LogAgent. From the configuration menu take the option to Work With Security Types and change the Send To Log Server option to No:

Send to log server . . . . . . . 2 1=Yes, 2=No

Hint: You can also use function key F13 to select all IBM Security (*SECURITY) level events for reporting, and turn all other events off.

Filter library objects
You may have many libraries on your IBM i server that are not used for production data or which do not contain any information that has security relevance. From the configuration menu you can create an object exclusion list to exclude individual libraries, or you can exclude all libraries and objects. If you take the latter approach be sure to define libraries in the inclusion list that you want to monitor and report. By excluding non-relevant libraries and objects you can minimize the number of events that are transmitted.

Filter IFS objects
Like library exclusion and inclusion you can define IFS file system filters. From the configuration menu you will see options for IFS exclusion and inclusion rules. You can even exclude all IFS directories (exclude the “/” root directory) and then add in the IFS directories you want to include. IFS filtering lets you define individual files or entire directories and subdirectories. The “/tmp” directory is a working directory and you may wish to exclude events from that directory if there are no relevant security-related events there.

Filter users
Alliance LogAgent also gives you the ability to filter certain users from reporting. You should use caution when implementing this type of filtering, and never filter highly privileged users. Alliance LogAgent provides a list of IBM user profiles that you might consider for exclusion, but you should review these with your IBM i security administrator before filtering these users. You can also add your own users to this list.

Filter QHST messages
The QHST message files contain important logon and logoff event information along with other messages that may not be as important. Alliance LogAgent lets you filter QHST messages to only include logon and logoff events if you wish.

Filter system values
Some of the IBM i system values have a low security value and can be suppressed by Alliance LogAgent. Alliance LogAgent provides a list of system values for your consideration, and you can disable reporting of changes to them if you decide they do not have security relevance. You can also add your own system values to the filter list.

These data reduction, metering, and filtering options give you a lot of control over the amount of information that Alliance LogAgent sends to your log collection server and SIEM solution. These can help you control costs and minimize the impact on your network. The original information remains in your IBM security audit journal and system history messages file if needed for research or forensics.

Topics: System Logging, IBM i, Alliance LogAgent

Data Protection in the Cloud & PCI DSS - Logs and Log Monitoring (Part 3)

Posted by Patrick Townsend on Mar 18, 2015 9:16:00 AM

This is the third part in our series looking at recent announcements by Amazon, Microsoft and other cloud service providers regarding new encryption and key management services. Let’s talk about log collection and active monitoring as a security best practice, and as a requirement to meet PCI DSS security requirements. Since the PCI DSS guidelines implement common security best practices, they are a good starting point for evaluating the security of any application and platform that processes sensitive data. Following the practice of the first part of this series we will use the PCI document “PCI DSS Cloud Computing Guidelines, Version 2.0” as our reference point, and add in some other sources of security best practices. Even if you don’t have to meet PCI data security requirements, this should be helpful when evaluating your security posture in the cloud.

Collecting system logs and actively monitoring them is a core component of every cyber security recommendation. Cybercriminals often gain access to IT systems and go undetected for weeks or months. This gives them the ability to work on compromising systems and stealing data over time. Active monitoring is important in the attempt to detect and thwart this compromise.

Here is what PCI says about active monitoring in Section 10 of the PCI DSS (emphasis added):

Review logs and security events for all system components to identify anomalies or suspicious activity.

Many breaches occur over days or months before being detected. Checking logs daily minimizes the amount of time and exposure of a potential breach. Regular log reviews by personnel or automated means can identify and proactively address unauthorized access to the cardholder data environment. The log review process does not have to be manual. The use of log harvesting, parsing, and alerting tools can help facilitate the process by identifying log events that need to be reviewed.

In recognition of the importance of ongoing, active monitoring, the National Institute of Standards and Technology (NIST) provides this guidance in Special Publication 800-137, “Information Security Continuous Monitoring (ISCM)”:

The Risk Management Framework (RMF) developed by NIST, describes a disciplined and structured process that integrates information security and risk management activities into the system development life cycle. Ongoing monitoring is a critical part of that risk management process. In addition, an organization’s overall security architecture and accompanying security program are monitored to ensure that organization-wide operations remain within an acceptable level of risk, despite any changes that occur. Timely, relevant, and accurate information is vital, particularly when resources are limited and agencies must prioritize their efforts.

And active monitoring is a component of the SANS Top 20 security recommendations:

Collect, manage, and analyze audit logs of events that could help detect, understand, or recover from an attack.

Deficiencies in security logging and analysis allow attackers to hide their location, malicious software, and activities on victim machines. Even if the victims know that their systems have been compromised, without protected and complete logging records they are blind to the details of the attack and to subsequent actions taken by the attackers. Without solid audit logs, an attack may go unnoticed indefinitely and the particular damages done may be irreversible.

Because of poor or nonexistent log analysis processes, attackers sometimes control victim machines for months or years without anyone in the target organization knowing, even though the evidence of the attack has been recorded in unexamined log files.

Deploy a SIEM (Security Incident and Event Management) or log analytic tools for log aggregation and consolidation from multiple machines and for log correlation and analysis.

This is why actively collecting and monitoring system and application logs is critical for your security strategy.

Implementing this critical security control in a cloud environment presents some special challenges. Here is what the PCI cloud guidance says:

Additionally, the ability to maintain an accurate and complete audit trail may require logs from all levels of the infrastructure, requiring involvement from both the CSP and the client. For example, the CSP could manage system-level, operating-system, and hypervisor logs, while the client configures logging for their own VMs and applications. In this scenario, the ability to associate various log files into meaningful events would require correlation of client-controlled logs and those controlled by the CSP.

It is not enough to collect logs from a few selected points in your cloud application environment. You need to collect all of the logs from all of the components that you deploy and use in your cloud application. This is because the effectiveness of active monitoring depends on the correlation of events across your entire application, database, and network, and this includes the cloud provider’s systems and infrastructure. Here is what ISACA says about security event correlation:

Correlation of event data is critical to uncover security breaches because security incidents are made up of a series of events that occur at various touch points throughout a network--a many-to-one process. Unlike network management, which typically is exception-based or a one-to-one process, security management is far more complex. An attack typically touches a network at multiple points and leaves marks or breadcrumbs at each. By finding and following that breadcrumb trail, a security analyst can detect and hopefully prevent the attack.

Your encryption key management system is one of those critical system components that must be monitored and whose events should be aggregated into a unified view. Key management logs would include encryption key establishment and configuration, encryption key access and use, and operating system logs of every component of the key management service. You should be able to collect and monitor logs from all parts of your applications and cloud platform.

Unfortunately, current key management services from cloud providers only provide a very limited level of access to critical component logs. You might have access to a limited audit trail of your own access to encryption keys, but no access to the key service system logs, HSM access logs, HSM audit logs, or HSM operating system logs. Without access to the logs in these components it is not possible for you to implement an effective log collection and active monitoring strategy. You are working in the dark, and without full access to all logs on all components of your cloud key management service you can’t comply with security best practices for log collection, correlation, and active monitoring.

Since key management systems are always in scope for a PCI audit and are extensions of your application environment, it is difficult to see how these new cloud key management services can meet PCI DSS requirements for log collection and monitoring as currently implemented.

Does this mean you can’t implement security best practices for key management in the cloud? I don’t think so. There are multiple vendors, including us (see below), who offer cloud key management solutions that provide full access to key management, configuration, key usage, application, and operating system logs.  You can deploy a key management service that fully supports security best practices for log collection and monitoring.

In part 4 of this series we’ll look at the topic of key custody and multi-tenancy and how it affects the security of your key management solution in the cloud.

Patrick


Resources

Alliance Key Manager for AWS

Alliance Key Manager for Azure

Alliance Key Manager for VMware and vCloud

Alliance Key Manager for Drupal

Topics: PCI DSS, Amazon Web Services (AWS), logging, cloud, Microsoft Azure

Data Protection in the Cloud & PCI DSS - Segmentation (Part 2)

Posted by Patrick Townsend on Mar 9, 2015 2:53:00 PM

This is the second part in our series looking at recent announcements by Amazon, Microsoft and others regarding new encryption and key management services. Let’s talk about the concept of segmentation as a security best practice, and as a strong recommendation by PCI DSS security standards. Since the PCI DSS guidelines implement common security best practices they are a good jumping off point for evaluating the security of any application and platform. Following the practice of the first part of this series we will use the PCI document “PCI DSS Cloud Computing Guidelines, Version 2.0” as our reference point. Even if you don’t have to meet PCI data security requirements, this should be helpful when evaluating your security posture in the cloud.

Segmentation as a security concept is very simple and very fundamental. Better security can be achieved by not mixing trusted and untrusted applications, data, and networks. This concept of trusted and untrusted applications extends to the value of the data assets – when applications process highly sensitive and valuable data assets they need to be separated into trusted and secure environments. We expend more effort and resources to protect what is valuable from criminals. Conversely, when there are no valuable data assets in an environment there is no need to take the same level of effort to secure them.

This is the core reason that PCI DSS recommends segmentation of applications that process payments from non-payment applications. Here is what PCI says about non-cloud applications:

Outside of a cloud environment, individual client environments would normally be physically, organizationally, and administratively separate from each other.

So, how do the PCI DSS security requirements relate to cloud platforms? Here is what PCI says (emphasis added):

Segmentation on a cloud-computing infrastructure must provide an equivalent level of isolation as that achievable through physical network separation. Mechanisms to ensure appropriate isolation may be required at the network, operating system, and application layers; and most importantly, there should be guaranteed isolation of data that is stored.

Proper segmentation is difficult to achieve even when you have complete control over all aspects of your environment. When you add the inherently shared and multi-tenant architecture of cloud platforms this becomes a high hurdle to get over. Here is what PCI says about this challenge:

Client environments must be isolated from each other such that they can be considered separately managed entities with no connectivity between them. Any systems or components shared by the client environments, including the hypervisor and underlying systems, must not provide an access path between environments. Any shared infrastructure used to house an in-scope client environment would be in scope for that client’s PCI DSS assessment.

This brings us exactly to the concern about new cloud key management services in Azure and AWS. These new services are inherently multi-tenant, from the key management service layer down to the hardware security modules (HSMs) that provide the ultimate security for encryption keys. You have no idea who you are sharing the service with.

The PCI guidance tells us what this segmentation looks like in a cloud environment:

A segmented cloud environment exists when the CSP enforces isolation between client environments. Examples of how segmentation may be provided in shared cloud environments include, but are not limited to: 

  • Traditional Application Service Provider (ASP) model, where physically separate servers are provided for each client’s cardholder data environment.
  • Virtualized servers that are individually dedicated to a particular client, including any virtualized disks such as SAN, NAS or virtual database servers.
  • Environments where clients run their applications in separate logical partitions using separate database management system images and do not share disk storage or other resources.

There is no cloud service provider implementation of key management services that meets these basic requirements.

The PCI DSS guidance takes a pretty strong view about inadequate segmentation in cloud environments:

If adequate segmentation is not in place or cannot be verified, the entire cloud environment would be in-scope for any one client’s assessment. Examples of “non-segmented” cloud environments include but are not limited to:

  • Environments where organizations use the same application image on the same server and are only separated by the access control system of the operating system or the application.
  • Environments where organizations use different images of an application on the same server and are only separated by the access control system of the operating system or the application.
  • Environments where organizations’ data is stored in the same instance of the database management system’s data store.

Since key management systems are always in scope for a PCI audit, are extensions of your application environment, and depend entirely on the access control system of the cloud provider, it is difficult to see how these new cloud key management services can meet PCI DSS requirements as currently implemented.

Here’s the last comment by PCI on segmentation in cloud environments:

Without adequate segmentation, all clients of the shared infrastructure, as well as the CSP, would need to be verified as being PCI DSS compliant in order for any one client to be assured of the compliance of the environment. This will likely make compliance validation unachievable for the CSP or any of their clients.

Does this mean you can’t implement security best practices for key management in the cloud? I don’t think so. There are multiple vendors, including us (see below), who offer cloud key management solutions that we believe can be effectively isolated and segmented on cloud platforms, or even hosted outside of the cloud.

In part 3 of this series we’ll look at the topic of logging and active monitoring and how it affects the security of your key management solution in the cloud.

Patrick


Resources

Alliance Key Manager for AWS

Alliance Key Manager for Azure

Alliance Key Manager for VMware and vCloud

Alliance Key Manager for Drupal

Alliance Key Manager for IBM Power Systems

Alliance Key Manager Cloud HSM

Topics: PCI DSS, Encryption Key Management, cloud

Data Protection in the Cloud & PCI DSS - Encryption and Key Management (Part 1)

Posted by Patrick Townsend on Mar 4, 2015 7:51:00 AM

Public and private organizations of all sizes are rapidly adopting cloud platforms and services as a way of controlling costs and simplifying administrative tasks. One of the most urgent concerns is addressing the new security challenges inherent in cloud platforms, and meeting various compliance regulations. Data protection, especially encryption and encryption key management, are central to those security concerns.

The recent announcements by Amazon, Microsoft and others regarding new encryption and key management services make it more urgent to understand the security and compliance implications of these cloud security services.

There are a number of sources for security best practices and regulatory rules for data encryption including the National Institute of Standards and Technology (NIST), the Cloud Security Alliance (CSA), the Payment Card Industry Security Standards Council (PCI SSC), the EU Data Protection Directive, and others.

Because so many organizations fall under the PCI Data Security Standards (PCI DSS) regulations, and because the PCI guidelines are mature and often referenced by security auditors, we will use the PCI recommendations and guidance as the basis for our discussion in this multi-part series.

For securing information in the cloud, the PCI Security Standards Council published the document “PCI DSS Cloud Computing Guidelines, Version 2.0” in February of 2013. This is the current guidance for PCI DSS compliance and provides recommendations and guidance for any organization that needs to meet PCI data security requirements. It is also a common benchmark for organizations who do not need to meet PCI DSS standards, but who want to meet security best practices for protecting sensitive data in the cloud. 

Disclaimer: Townsend Security, Inc. is not a Qualified Security Auditor (QSA) and the opinions in this article are not intended to provide assurance of PCI DSS compliance. For PCI DSS compliance validation, please refer to an approved QSA auditor or request a referral from Townsend Security.

First things first: Let’s tackle the most fundamental question - Who is responsible for data security in the cloud?

This one is easy to answer. You are! Not your Cloud Service Provider (CSP), not your QSA auditor, and no one else. You are ultimately responsible for ensuring that you meet security best practices and PCI DSS guidance on the cloud platform.

This can be confusing when you are new to cloud computing. Cloud Service Providers often make this more confusing by claiming to be PCI compliant, and you might infer that you are PCI compliant as a result of moving to their cloud. This is wrong. The PCI SSC makes it clear that you bear full and ultimate responsibility for ensuring PCI DSS compliance. No major cloud service provider can make you PCI compliant just by implementing on their cloud platform. You will have to work with your CSP to ensure compliance, and that is your responsibility under these standards.

Now let’s look at the PCI cloud guidance in a bit more detail. Here is what they say in the PCI DSS cloud guidance (emphasis added):

Much stock is placed in the statement “I am PCI compliant”, but what does this actually mean for the different parties involved?

Use of a PCI DSS compliant CSP does not result in PCI DSS compliance for the clients. The client must still ensure they are using the service in a compliant manner, and is also ultimately responsible for the security of their CHD—outsourcing daily management of a subset of PCI DSS requirements does not remove the client’s responsibility to ensure CHD is properly secured and that PCI DSS controls are met. The client therefore must work with the CSP to ensure that evidence is provided to verify that PCI DSS controls are maintained on an ongoing basis—an Attestation of Compliance (AOC) reflects a single point in time only; compliance requires ongoing monitoring and validation that controls are in place and working effectively.

Regarding the applicability of one party’s compliance to the other, consider the following:
a) If a CSP is compliant, this does not mean that their clients are.
b) If a CSP’s clients are compliant, this does not mean that the CSP is.
c) If a CSP and the client are compliant, this does not mean that any other clients are.

The CSP should ensure that any service offered as being “PCI compliant” is accompanied by a clear and unambiguous explanation, supported by appropriate evidence, of which aspects of the service have been validated as compliant and which have not.

Great, now you know that you are responsible for PCI DSS compliance and that you have to work with your cloud service provider to ensure that their services and components are PCI DSS compliant and that you have deployed them in a compliant way.

Are the new cloud key management services PCI DSS compliant?

No.

At the time I am writing this (February 2015) there has been no claim of PCI DSS compliance by Microsoft for the new Azure Key Vault service, nor by Amazon for the new AWS Key Management Service, and there is no Attestation of Compliance (AOC) available for either service.

So what should you do?

In this case the PCI cloud guidance is very clear (emphasis added):

CSPs that have not undergone a PCI DSS compliance assessment will need to be included in their client’s assessment. The CSP will need to agree to provide the client’s assessor with access to their environment in order for the client to complete their assessment. The client’s assessor may require onsite access and detailed information from the CSP, including but not limited to:

  • Access to systems, facilities, and appropriate personnel for on-site reviews, interviews, physical walk-throughs, etc.
  • Policies and procedures, process documentation, configuration standards, training records, incident response plans, etc.
  • Evidence (such as configurations, screen shots, process reviews, etc.) to show that all applicable PCI DSS requirements are being met for the in-scope system components
  • Appropriate contract language, if applicable

More from the PCI cloud guidance (note that cloud encryption key management is a Security-as-a-Service):

Security as a Service, or SecaaS, is sometimes used to describe the delivery of security services using a SaaS-based delivery model. SecaaS solutions not directly involved in storing, processing, or transmitting CHD may still be an integral part of the security of the CDE. As an example, a SaaS-based anti-malware solution may be used to update anti-malware signatures on the client’s systems via a cloud-delivery model. In this example, the SecaaS offering is delivering a PCI DSS control to the client’s environment, and the SecaaS functionality will need to be reviewed to verify that it is meeting the applicable requirements.

This means that you, or your security auditor, will have to perform the full PCI data security assessment of the cloud service provider’s encryption and key management service and that the CSP will have to grant you full access to their facilities, staff, and procedures to accomplish this.

For both practical and logistical reasons that is not likely to happen. Most CSPs do not allow their customers unfettered access to their staff and facilities. Even if you could negotiate this, it may not be within your budget to hire a qualified auditor to perform the extensive initial and ongoing reviews that this requires.

The only reasonable conclusion you can draw is that the new encryption and key management services are not PCI DSS compliant at the present time, and you are not likely to achieve compliance through your own efforts.

In the next part of this series we will look at other security and compliance concerns about these new cloud key management services. We still have some distance to go on this topic, but if cloud security is a concern I think you will find this helpful!

Topics: Encryption, Encryption Key Management, Cloud Security

Anthem Data Breach - We Are Taking the Wrong Lesson About Encryption

Posted by Patrick Townsend on Feb 16, 2015 4:02:00 PM

We are taking the wrong lesson about encryption from the Anthem data breach. Several “experts” are weighing in with the opinion that encryption would not have prevented the breach, and even that Anthem should not bother implementing encryption to protect patient information! We don’t have a lot of information about the breach, but apparently the credentials of one or more system administrators were acquired by the attackers and used to access servers with sensitive patient data. So, if the attackers have the credentials of privileged users, it’s game over, right?

Well, hold on Cowboy, you are taking the wrong lesson from this data breach!

Let’s start from the top. We have to use ALL of the tools at hand to deploy a defense-in-depth approach to protect our data. This means we need firewalls, intrusion detection, active monitoring, data leak prevention, anti-virus, two-factor authentication, and everything else available to our security team to protect that information. Further, it would be irresponsible not to consider encryption an essential component of a defense-in-depth strategy.

I am sure that Anthem already has a large number of these tools and defenses deployed in their environment. Should they just unplug them all and throw up their hands? Is surrender the best approach given the intelligence and persistence of dedicated attackers? 

Of course not, surrender should not even be in our vocabulary!

Encryption and related encryption key management tools are critical for any company that wants to protect the sensitive information of their customers (or patients, in the case of Anthem), employees, and business partners. It’s mandated by many compliance regulations such as PCI DSS, which requires merchants and payment processors to encrypt credit card account numbers. It’s highly recommended for protecting patient information by anyone who, like Anthem, is a Covered Entity under HIPAA regulations (any bets on how soon that will move from “recommended” to “required” status?). All serious security professionals know that encryption is a critical security component and recommend it as part of an organization’s security strategy.

Does this mean encryption is the perfect defense? Of course not. Given an attacker with sufficient authorization to sensitive data, even encryption may not be able to prevent a breach.

Encryption raises the bar for the attacker. It narrows the attack surface and makes an attack more difficult. Unlike the situation at Anthem, in many cases an attacker compromises a non-privileged account and steals the database full of sensitive information. If the sensitive data is not encrypted, the data is lost. If the data is encrypted and you've protected the encryption key, the data is safe. Effective defenses involve a layered approach and constant vigilance. If we use all of our tools effectively, including encryption, we have a very good chance of detecting an attack early and thwarting it.

A few months ago Adobe suffered a breach and lost millions of records. But the most sensitive data was encrypted. That story basically went away in a couple of days. Target and Sony also suffered large data breaches – do you think they wish they had been encrypting their data? You bet they do! Their stories never seem to go away.

Delay, hopelessness, and surrender are not going to help and are not justified.

This is the lesson we need to learn about encryption.

Patrick

Topics: Encryption, Data Breach

PGP on IBM System z Mainframes

Posted by Patrick Townsend on Feb 10, 2015 7:38:00 AM

With the new z13 model, IBM announced another round of enhancements and improvements to the venerable IBM System z Mainframe. Focusing on mobile and social media integration, IBM is yet again modernizing and extending this high-end enterprise server.

While the IBM System z Mainframe has a well-earned reputation for security, how do Mainframe customers protect their data as they move towards more open, internet-based mobile and social media integration?

Pretty Good Privacy (PGP) is one path to provable and defensible security, and PGP Command Line is the de facto standard for enterprise customers.

PGP is one of the most commonly accepted and widely deployed whole file encryption technologies that has stood the test of time. It works on all of the major operating system platforms and makes it easy to deploy strong encryption to protect data assets. And it runs on the IBM System z Mainframe!

For about a decade we at Townsend Security have been bringing PGP encryption to Mainframe customers to help them solve some of the most difficult problems with encryption. As partners with Symantec we provide IBM enterprise customers running IBM System z and IBM i (AS/400, iSeries) with the same strong encryption solution that runs on Windows, Linux, Mac, Unix, and other platforms.

Incorporating the OpenPGP standard, PGP Command Line from Townsend Security, backed by Symantec, is compatible with a variety of open source PGP encryption solutions while adding features to warm the hearts of IBM Mainframe customers. And this is the same PGP whose underlying PGP SDK has been through multiple FIPS 140-2 validations and is FIPS 140-2 compliant today.

While retaining the core functions of PGP and the standards-based approach to encryption, we’ve been busy extending PGP specifically for the IBM Mainframe customer. Here are just a few of the things we’ve done with PGP to embrace the IBM Mainframe architecture:

  • Native z/OS Batch operation
  • Support for USS operation
  • Text mode enhancements for z/OS datasets
  • Integrated EBCDIC to ASCII conversion using built-in IBM facilities
  • Simplified IBM System z machine and partition licensing
  • Support for self-decrypting archives targeting Windows, Mac, and Linux!
  • A rich set of working JCL samples
  • Free evaluation on your own IBM Mainframe

IBM Mainframe customers never have to transfer data to a Windows or Linux server to perform encryption, exposing data to loss on those platforms in the process. With full cross-platform support you can encrypt and decrypt data on the IBM Mainframe regardless of its origination or destination.

PGP Command Line is the gold standard for whole file encryption, and you don’t have to settle for less.

Patrick

Topics: IBM z, Mainframe, PGP