Feel free to call us toll free at +1.800.357.1019.

If you are in the area you can reach us at +1.360.359.4400.

Standard support
6:30am - 4:00pm PST, Monday - Friday, Free

Premium support
If you own Townsend Security 24x7 support and have a production down issue outside normal business hours, please call +1.800.349.0711 and the on-call person will be notified.

International customers, please dial +1.757.278.1926.

Townsend Security Data Privacy Blog

Case Study: Citizens Security Life Insurance

Posted by Luke Probasco on Mar 13, 2017 10:54:24 AM

Compliance Made Easy - Protecting Private Information with Alliance AES/400 Encryption for IBM i and Alliance Key Manager for VMware


“Townsend Security was extremely easy to work with - from the sales process to deploying our proof of concept to post-sales support.”

- Adam Bell, Senior Director of IT

 
Citizens Security Life Insurance

Citizens Security Life Insurance Company is a life and health insurance carrier. The company offers group benefits including dental and vision coverage, and individual ancillary insurance products. The company was founded in 1965 and is headquartered in Louisville, Kentucky.

The Challenge: Protect ePHI & PII on the IBM i

In order to meet growing partner requirements and pass a data security audit for protecting electronic Protected Health Information (ePHI) and Personally Identifiable Information (PII), Citizens Security Life Insurance (CSLI) needed to deploy an encryption solution on the IBM i. The solution needed to be easy to implement with excellent performance.

While FIELDPROC on the IBM i makes it very easy to encrypt data without application changes, CSLI also understood that for encrypted data to truly be secure, they would need to store and manage encryption keys with an external key manager.

By using a VMware-based encryption key manager, the company could meet encryption and key management best practices for separating encryption keys from the data they protect.

The Solutions

Alliance AES/400 Encryption

“The performance we are seeing with Alliance AES/400 encryption is excellent,” said Adam Bell, Senior Director of IT, Citizens Security Life Insurance. “The solution was easy to integrate and completely met our expectations.”

Alliance AES/400 FIELDPROC encryption is NIST-compliant and optimized for performance. The solution is up to 100x faster than equivalent IBM APIs on the IBM i platform.

With Alliance AES/400, businesses can encrypt and decrypt fields that store data such as credit card numbers, social security numbers, account numbers, ePHI, and other PII instantly without application changes.

Alliance Key Manager for VMware

“Alliance Key Manager for VMware was very easy to implement and the resources Townsend Security provided made deployment a smooth process,” continued Bell. By deploying Alliance Key Manager for VMware, CSLI was able to meet their business needs with a solution that could not only deploy quickly, but was also easy to set up and configure.

Alliance Key Manager for VMware leverages the same FIPS 140-2 compliant technology found in Townsend Security’s hardware security module (HSM) and in use by over 3,000 customers. The solution brings a proven and mature encryption key management solution to VMware environments, with a lower total cost of ownership. Additionally, the key manager has been validated to meet PCI DSS in VMware environments.

Integration with the IBM i Platform

An encryption strategy is only as good as the key management strategy, and it can be difficult to get key management right. For companies doing encryption, the most common cause of an audit failure is an improper implementation of key management. The seamless integration between Alliance AES/400 and the external Alliance Key Manager for VMware allowed CSLI to pass their data security audit with flying colors.

“The relationship we developed with Townsend Security enabled us to have a painless sales and support process, and in turn, enabled us to easily pass our data security audit,” finished Bell.

Meeting HIPAA and protecting ePHI with encryption and key management.

 

Topics: Alliance Key Manager, Alliance AES/400, Case Study

The Future of Active Security Monitoring on the IBM i

Posted by Luke Probasco on Jan 24, 2017 8:19:21 AM

Active monitoring is one of the most effective security controls an enterprise can deploy. In fact, a large majority of security breaches occur on systems that have been compromised days, weeks, or even months before sensitive data is lost. A recent Verizon Data Breach Investigations Report indicates that a full 84 percent of all breaches were detected in system logs. By actively collecting security logs in real-time, organizations can not only monitor security events, but also prevent a data breach before it starts. I recently sat down with Patrick Townsend to discuss log collection and active monitoring on the IBM i.

Hi Patrick, can you give our readers an overview on the importance of collecting and monitoring security logs on the IBM i?

One of the most effective things that you can do to prevent a data breach is to deploy an active monitoring solution, sometimes also known as system logging. You’ll find active monitoring at the top of all cyber-security lists of things to do – because it is effective. Active monitoring is key to a strong security posture, for anybody.

Today, we all know that there is no longer a true perimeter and that our systems are at risk. Luckily, active monitoring can help. Here are some key principles that organizations need to understand. First, an active monitoring solution needs to involve a log collection server or SIEM solution (IBM Security QRadar, Splunk, LogRhythm, etc.) to collect security events across the entire enterprise and actively detect threats. Second, there needs to be real-time collection and monitoring of security events. Rather than scooping up the security events once or twice a day, it is imperative to collect these events in real-time. When you collect logs across the entire enterprise, a SIEM can provide a lot of intelligence to identify patterns and anomalies – which will identify a potential attack. The final critical components are good reporting, query, and forensics tools. SIEM solutions also give you the ability to quickly run reports and analyze suspect data. This is important for two reasons. If you are having an attack you need to identify quickly where the attack is originating and how it is happening. This is essential in order to know how to remediate it. If you aren’t able to pinpoint the problem, it is very likely that you are going to be attacked by the same methods again.
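As a rough illustration of the real-time principle Patrick describes, the short Python sketch below forwards each security event to a syslog collector the moment it occurs, rather than batching events for later pickup. The collector address and the event fields are illustrative assumptions, not Alliance LogAgent's actual output.

    # Minimal sketch: push each security event to the SIEM's syslog
    # collector as it happens. The collector address and event fields
    # are assumptions for illustration.
    import logging
    import logging.handlers

    # Most SIEMs (QRadar, Splunk, LogRhythm) accept syslog on UDP/514.
    handler = logging.handlers.SysLogHandler(address=("192.0.2.10", 514))
    logger = logging.getLogger("ibmi-security-events")
    logger.setLevel(logging.INFO)
    logger.addHandler(handler)

    def emit_event(event: dict) -> None:
        """Send one security event as key=value pairs in real time."""
        logger.info(" ".join(f"{k}={v}" for k, v in event.items()))

    emit_event({"source": "QAUDJRN", "entry": "PW", "user": "QSECOFR",
                "result": "invalid_password"})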

Switching gears, the serious points for an IBM i customer revolve around the fact that the IBM i is a critical back-office processor for most customers and runs multiple applications. Too often the IBM i is an island within an organization, but it is important that it is fully integrated into your enterprise’s overall infrastructure security strategy.

Also, it is generally true that a cyber-attack almost never starts on an IBM i server.  They typically start on a compromised user PC or someplace in the organization.  From there, a hacker spends a fair amount of time probing around the IBM i finding any weak points.  We shouldn’t be naïve – hackers know about IBM i servers.  They know what to look for, they know the user IDs, they know how to compromise these systems – they are very good at it.

IBM introduced some new security event sources in V7R3.  Can you talk a bit about those? And what events should an IBM i customer be collecting?

Every release of the IBM i server has had new security events and fields to collect and monitor. At Townsend Security we work very hard to stay ahead of these releases so that our customers are well positioned to handle new information and use it for protection. A couple of examples include IPv6 address support and new fields in existing events. Regarding the recent V7R3 release, new sources include:

  • QAUDLVL (Auditing level) system value
  • *NETSECURE (to audit secure network connections)
  • *NETTELSVR (to audit Telnet connections)
  • *NETUDP (to audit UDP connections)

To address the second part of your question, when you deploy an active monitoring solution on the IBM i, you are certainly going to want to collect events from QAUDJRN, QHST, QSYSOPR, as well as exit points.  Interestingly, the QAUDJRN security audit journal does not exist when you first install a new IBM i server. You must create the journal receivers and the journal to start the process of security event collection.

Aside from the new log sources that IBM introduced in V7R3, for someone who maybe deployed a logging solution a few years ago, what should they be aware of now?

First, let’s take a look at how compliance regulations have been evolving.  We now know that most attacks work on the basis of privilege escalation.  For example, an attacker gets access to our systems and then eventually gets sufficient authority to steal data. Because of this, we are seeing that it is more important to identify when an administrative level or highly privileged user logs in to our system.  This is an example of how a logging solution needs to evolve to meet current compliance requirements. Businesses are now required to log and monitor that activity.

Unfortunately, this can be particularly hard on the IBM i.  On first look, an IBM i account may appear to have normal user privileges, but may in fact inherit higher privileges through a Group Profile or Supplemental Group Profile. It is important to detect these elevated privileges in real time and provide the security administrator with an easy-to-use report to identify the source of elevated privileges. This is an excellent example of how logging solutions need to evolve with the ways security events are monitored.  We recently tackled this in the latest release of our Alliance LogAgent.

Where do you see the future of logging on the IBM i?

Let me dust off my crystal ball!  First off, File Integrity Monitoring (FIM) will become more important.  To maintain a strong posture, security administrators need to know who is accessing sensitive data and system values on the IBM i.  We’re also going to see more requirements around File Integrity Monitoring across the regulatory compliance environments.  Why?  Because, as we discussed earlier, cyber-attackers escalate privileges, access sensitive data, and change security configurations in order to get the work done that they want to do.  Again, this is why we are seeing increased requirements in regulations like the Payment Card Industry Data Security Standard (PCI DSS) and new financial services regulations.

Another interesting prediction:  It won’t be unheard of for organizations to use multiple SIEM solutions. We are starting to see businesses use one SIEM for traditional security monitoring and another to monitor operational data.  Operational data, you ask?  Sure.  Logging solutions can easily allow administrators to answer operational questions like: How full are my disks?  Do I have any critical hardware errors?  Second, they can benefit from deploying a SIEM to monitor application data.  Sales teams, for example, can track inventory status, trending products, etc.  The benefits of file monitoring don’t have to be exclusive to security.

In the near future, we will also see a pickup of integration with Artificial Intelligence (AI), also commonly referred to as cognitive computing.  IBM has the Watson platform, and there are others, which I believe will be used to enhance security.  We are already seeing initial efforts in this respect.  Harnessing that AI capability with security makes total sense.  

Finally, as we are seeing, everything not bolted down is going to the cloud.  We will definitely see an evolution of new cloud services around security and logging.  It may take a little time for vendors to start leveraging that, but I believe it is definitely in the works.

To hear this interview in its entirety, download our podcast “The Future of Security Logging on the IBM i” and hear Patrick Townsend, founder and CEO of Townsend Security, further discuss log collection and monitoring on the IBM i, new log sources in V7R3, and the future of security logging on the IBM i.


Topics: System Logging, Alliance LogAgent

Townsend Security Announces Major Update to Alliance LogAgent for IBM i

Posted by Luke Probasco on Nov 29, 2016 12:01:00 AM

New features include full reporting of administrative users, including authority the user adopts through Group Profiles and Supplemental Group Profiles.

Townsend Security today announced a significant update to its existing Alliance LogAgent for IBM i (AS/400, iSeries) solution, allowing full reporting of administrative users, which includes authority the user adopts through Group Profiles and Supplemental Group Profiles. Alliance LogAgent is a Security Information and Event Management (SIEM) integration solution that collects, formats, and transmits security information in real-time to any SIEM or log collection server.

When the new configuration options are enabled, Alliance LogAgent will tag all significant security events as performed by the administrative level user. This enhancement will help security administrators easily identify which users have elevated privileges and enable SIEM solutions to quickly identify and alert on operations. In addition to the new administrative user reporting, Alliance LogAgent now provides an easy-to-use local assessment report that identifies privileged users. This will reduce the overhead of inspecting and adjusting privileges of IBM i users. 

Alliance LogAgent is compatible with all SIEM solutions that accept Syslog messages, IBM QRadar Log Event Extended Format (LEEF), or the HP ArcSight Common Event Format (CEF). The new administrative field reporting will make it easy for SIEM administrators to create dashboards, compliance reports, and alerts based on reported fields. When administrator privileges are detected, Alliance LogAgent adds the following field to the security message:

            admin_user=yes

For IBM QRadar the new field is:

            adminUser=yes

By providing a normalized field in the security events sent to the SIEM monitoring platform, the SIEM’s query and forensic tools can be used more effectively.
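To make the normalized flag concrete, here is a hedged Python sketch of how a LEEF 1.0 event carrying the adminUser flag might be assembled. The vendor, product, event ID, and the usrName attribute are illustrative assumptions; only the adminUser field comes from the announcement above.

    # Illustrative LEEF 1.0 event with the normalized adminUser flag.
    # Everything except adminUser=yes is an assumed placeholder.
    def build_leef_event(user: str, event_id: str, is_admin: bool) -> str:
        header = f"LEEF:1.0|Townsend|LogAgent|1.0|{event_id}|"
        attrs = {"usrName": user, "adminUser": "yes" if is_admin else "no"}
        # LEEF 1.0 attributes are tab-delimited key=value pairs.
        return header + "\t".join(f"{k}={v}" for k, v in attrs.items())

    print(build_leef_event("JSMITH", "PW_FAIL", is_admin=True))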

“Many IBM i customers have struggled with identifying who on their system has elevated privileges. It is crucial to identify and strictly control these users as cyber criminals often use privilege escalation to enable the exfiltration of sensitive data,” said Patrick Townsend, CEO of Townsend Security. “On first look an IBM i account may appear to have normal user privileges, but may in fact inherit higher privileges through a Group Profile or Supplemental Group Profile. Alliance LogAgent now detects these elevated privileges in real time, and provides the security administrator with an easy-to-use report to identify the source of elevated privileges. We think this is a crucial enhancement that will help IBM i customers better secure their platforms.”

Alliance LogAgent is in use with a wide variety of SIEM solutions including LogRhythm, SecureWorks, NTT Solutionary, IBM QRadar, Alert Logic, AlienVault, McAfee SIEM, Splunk, SolarWinds, and many others. In addition to collecting the IBM i security audit journal information, Alliance LogAgent collects system history messages, operator messages, exit point information, system statistics, and a variety of open source application logs in Unix/Linux format.

The solution is licensed on a per logical partition (LPAR) basis, with perpetual and subscription licensing options available. Existing Alliance LogAgent customers on a current maintenance contract can upgrade to the new version at no charge.


Topics: Alliance LogAgent, Press Release

IBM i FieldProc Architecture and Implementation

Posted by Luke Probasco on Oct 19, 2016 10:33:47 AM

Excerpt from the eBook "IBM i Encryption with FieldProc - Protecting Data at Rest."


FieldProc Architecture

FieldProc is a type of column-level exit point that is implemented directly in the DB2 database. As is typical with any of the other IBM exit points, IBM provides the architecture for the exit point to invoke a user application, but IBM does not provide that application. Customers or vendors can create a FieldProc application based on the documented architecture of the exit point. Townsend Security is one vendor who provides such software.

A note on terminology:
We will use the term “column” when referring to a field in a file, and we will use the term “table” to refer to a physical file. For the purposes of discussing FieldProc these terms are interchangeable, and we will use the more modern terms “column” and “table”. In a similar way we will use the term “index” to refer to a column that is either a primary or secondary key. A secondary key would include a key defined in a logical file.

The exit point architecture is very simple. There are only two commands and three functions that are supported. The two commands are:

  • Start FieldProc
  • End FieldProc

The three functions that are handled by a FieldProc program are:

  • Initialization
  • Encode (Encrypt)
  • Decode (Decrypt)

When FieldProc is started on a column the FieldProc program is called for Initialization, and then called for each row in the table to provide for the encryption of the column. When FieldProc is ended the FieldProc program is called for each row to decrypt the data. All other normal read, change, and insert operations call the FieldProc program to provide for encryption or decryption as needed. FieldProc is not invoked for a delete operation on a row.
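The call pattern above can be modeled in a few lines. The Python below is a conceptual model of the three FieldProc functions, not an actual exit program (a real FieldProc program is an ILE program called by DB2); Fernet is used purely as a stand-in cipher.

    # Conceptual model of the three FieldProc functions. A real FieldProc
    # exit program is an ILE program called by DB2; Fernet stands in for
    # the AES library a production solution would use.
    from cryptography.fernet import Fernet

    class FieldProcModel:
        def init(self):
            # Function 1: initialization - validate setup, obtain the key.
            self.cipher = Fernet(Fernet.generate_key())

        def encode(self, plaintext: str) -> bytes:
            # Function 2: encode (encrypt) - called on writes, and once per
            # existing row when FieldProc is started on a column.
            return self.cipher.encrypt(plaintext.encode())

        def decode(self, ciphertext: bytes) -> str:
            # Function 3: decode (decrypt) - called on reads, and once per
            # row when FieldProc is ended.
            return self.cipher.decrypt(ciphertext).decode()

    fp = FieldProcModel()
    fp.init()
    token = fp.encode("123-45-6789")
    assert fp.decode(token) == "123-45-6789"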

FieldProc Implementation

FieldProc is a SQL feature and is implemented through SQL commands. That is, FieldProc is not implemented through DDS, but only through SQL. Through SQL you create or alter a table to have the FIELDPROC attribute like this:

            ALTER TABLE employee ALTER COLUMN salary SET FIELDPROC Userpgm

Likewise you can remove a FieldProc attribute using the SQL DROP FIELDPROC clause like this:

            ALTER TABLE employee ALTER COLUMN salary DROP FIELDPROC

As noted above when you start FieldProc with the CREATE TABLE or ALTER TABLE commands, the effect is immediate. Starting FieldProc causes the FieldProc program to be called for each row in the table to perform encryption. Dropping the FieldProc attribute of a column causes the FieldProc program to be called for each row to decrypt the data.

Certain operations will cause FieldProc to be invoked even if the column data is not being used. For example, a legacy RPG program might read a file from beginning to end but not use a particular column that is under FieldProc control. Even though the column is not used in the RPG program FieldProc will be invoked to decrypt the value of the column for each row.

Note that a native SQL SELECT statement that does not include the encrypted column will NOT invoke FieldProc for decryption. Some work management operations are difficult with the current IBM implementation of FieldProc. For example, save and restore operations when the restore library is different than the save library can be problematic. Additionally, change management operations where there are changes to column attributes can be difficult to manage and require decrypting the database, applying the change, and then re-encrypting the database. These limitations are especially true in legacy RPG applications.

Supported Field (Column) Types

Most column types are supported for FieldProc, including character, numeric, date, time, and timestamp fields. You can use FieldProc on null-capable fields and double-byte character fields. Some types of derived and counter fields do not support FieldProc, but IBM i customers will find that all normal application fields are supported. With FieldProc there is no need to change the field size or attribute, nor is there any need to change or re-compile applications.

Legacy DDS Files

Many IBM i customers have the impression that legacy DDS files will not support FieldProc encryption. This is not true! You can readily implement FieldProc encryption on files created with DDS and it is not necessary to convert them to DDL and SQL tables. As IBM notes, there are some advantages to doing so, but it is not necessary and the large majority of FieldProc users continue to use DDS files. As noted above, you can only start and stop FieldProc using SQL commands, but these SQL commands work fine on DDS-created files.

Encrypting Multiple Fields in One File

DB2 FieldProc control can be started on multiple columns in one database table. There is no practical limit on the number of columns you can select for FieldProc implementation. Of course, there are performance implications for encrypting multiple columns as the FieldProc program will be called independently for each column under FieldProc control. But many IBM i customers place multiple columns in a table under FieldProc encryption control. In some cases customers place hundreds of columns under FieldProc control. FieldProc’s support for multiple columns includes columns that are Primary or Secondary keys to the data. Please see the following sections for a discussion of limitations related to encrypted indexes and legacy RPG applications.


Topics: IBM i, FIELDPROC

Encrypted Indexes - Overcoming Limitation with FieldProc Encryption on the IBM i

Posted by Luke Probasco on Aug 30, 2016 11:00:00 AM

Excerpt from the eBook "IBM i Encryption with FieldProc - Protecting Data at Rest."


Encrypted Indexes

The DB2 FieldProc implementation fully supports the encryption of columns which are indexes (keys) to the data in a native SQL context – this includes RPG SQL applications. However, there are some severe limitations with legacy RPG applications around encrypted index order. If you are like most IBM i users, it is important to understand these limitations before implementing DB2 encryption in RPG.


Legacy RPG applications use record-oriented operations and not the set-oriented operations that are typical of SQL. Many record-oriented operations on encrypted indexes in RPG will work as expected. For example, a CHAIN operation on an encrypted index to retrieve a record from a DB2 table will work. If the record exists it will be retrieved and FieldProc will decrypt the value. However, many range and data ordering operations will not work as expected with legacy RPG programs, leaving you with empty reports and subfiles. Consider the following logic:

SETLL (position to a particular location in the index for a file)
WHILE (some condition)
    READ (read the next record by the index)
END WHILE

The SETLL (Set lower limit) operation will probably work if the particular index value exists. However, the program logic will then read the next records based on that position in the file. IBM i developers are surprised to learn that they will be reading the next record based on the ENCRYPTED value, and not on the decrypted value which is what they might expect. The result is often empty subfiles and printed reports. This is very common logic in applications where indexes are encrypted. Note that your RPG program will not get a file I/O error, it just won’t produce the results you expect.
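The effect is easy to demonstrate. The Python sketch below sorts a few values by their ciphertext, which is the order an encrypted index delivers; deterministic AES-ECB is used only so the comparison is repeatable, not as a recommended mode.

    # Demonstration: rows read in key order follow the ENCRYPTED byte
    # order, which rarely matches the plaintext order. ECB is used only
    # to get deterministic ciphertext for the comparison.
    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    key = os.urandom(32)

    def encrypt(value: str) -> bytes:
        enc = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
        return enc.update(value.ljust(16).encode()) + enc.finalize()

    ssns = ["111-22-3333", "222-33-4444", "333-44-5555"]
    print(sorted(ssns))               # the order the RPG logic expects
    print(sorted(ssns, key=encrypt))  # the order the encrypted index delivers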

In simpler applications this side-effect of encrypted indexes is not significant, or easily programmed around. However, in some applications where sensitive data is encrypted across a large number of tables (think social security number in banking applications) this can be a significant limitation.

Don’t despair, keep reading.

Overcoming Encrypted Index Limitations in RPG

The limitations of encrypted indexes in legacy RPG applications often lead IBM i customers to abandon their encryption projects. The prospect of converting a large number of legacy RPG applications to newer SQL interfaces can be daunting. Their legacy RPG applications contain a lot of valuable business logic, and the effort to make the conversion can be quite large.

Wouldn’t it be great if you could wave a magic wand and make legacy RPG applications use SQL without any changes?

IBM opened a path to this type of solution with Open Access for RPG, or OAR. OAR allows for the substitution of traditional file I/O operations with user-written “Handlers”. In essence, you can replace the native file I/O operations of RPG with your own code. No change to program file handling or business logic! The OAR capability spawned a number of user interface modernization products, and other solutions that take advantage of this. Imagine if your RPG screen handling I/O with Execute Format (EXFMT) could be replaced with a web-based GUI library. Instant modernization of the UI! There are now many solutions that leverage OAR for UI modernization.

Join Logical Files with DDS

One limitation of logical files created with DDS specifications involves join logical files. You will not be able to create a DDS join logical file where the join involves a field encrypted with FieldProc. You will get an error about invalid data type usage. This is an IBM limitation and there is no known workaround for this issue. Note that this limitation only applies to DDS join logical files and does not apply to SQL joins using encrypted indexes. Most IBM i customers will need to change their RPG or COBOL program logic to avoid the use of DDS join logical files which use encrypted indexes.

Application Changes

Legacy RPG applications that use encrypted indexes often need re-design and re-programming to avoid the problems of encrypted indexes. You can avoid these issues if you are using an Open Access for RPG (OAR) solution that maps the legacy RPG record-based file operations to native SQL and the SQL Query Engine (See the note above about Townsend Security’s OAR/SQL offering).

If you need to re-design your RPG applications to avoid encrypted indexes, consider putting all of your sensitive data in a table that is indexed by some non-sensitive value such as a sequence number or customer number, and use FieldProc to encrypt that table. There are many approaches to application re-design and you should be able to find a method that works for you.

Change Management

IBM i customers who have deployed change management solutions can encounter challenges with FieldProc implementations. While most of these challenges are surmountable, it will take effort on the part of the systems management team to integrate FieldProc controls into their change management strategy. Unfortunately the implementation of FieldProc does not provide much in the way of support for change management systems. The activation of FieldProc changes an attribute on the column in the table, but change management systems generally are not aware of this attribute. It can be challenging to promote table and column level changes and properly retain the FieldProc attribute of the data. Look for a FieldProc implementation that provides a command-level interface to stop, start, and change FieldProc definitions. These commands can help you integrate FieldProc encryption into your change management strategy.


Topics: IBM i, FIELDPROC

IBM i, PCI DSS 3.2, and Multi-Factor Authentication

Posted by Luke Probasco on Aug 16, 2016 10:48:05 AM

With the recent update to the Payment Card Industry Data Security Standard (PCI DSS) regarding multi-factor authentication (also known as Two Factor Authentication or 2FA), IBM i administrators are finding themselves faced with the requirement of deploying an authentication solution within their cardholder data environments (CDE). Prior to version 3.2 of PCI DSS, remote users were required to use two factor authentication for access to all systems processing, transmitting, or storing credit card data. With version 3.2 this is now extended to include ALL local users performing administrative functions in the CDE.

Here is an excerpt from section 8.3: (emphasis added)

8.3 Secure all individual non-console administrative access and all remote access to the cardholder data environment (CDE) using multi-factor authentication.

IBM i, PCI DSS, & Multi-Factor Authentication I recently was able to sit down with Patrick Townsend, Founder & CEO of Townsend Security, and talk with him about PCI DSS 3.2, what it means for IBM i users, and what IBM i users can do to meet the latest version of PCI DSS.

Thanks for taking some time to sit down with me, Patrick. Can you recap the new PCI-DSS version 3.2 multi-factor authentication requirement? This new requirement seems to be generating a lot of concern.

Well, I think the biggest change in PCI DSS 3.2 is the requirement for multi-factor authentication for all administrators in the cardholder data environment (CDE).  Prior to 3.2, remote users like contractors and third party administrators, had to use multi-factor authentication to login to the network.  This update extends the requirement of multi-factor authentication for ALL local, non-console users.  We are seeing businesses deploy multi-factor authentication at the following levels:

  • Network Level – When you first access the network
  • System Level – When you access a server or any system within the CDE
  • Application Level – Within your payment application

The requirement for expanded multi-factor authentication is a big change and is going to be disruptive for many merchants and processors to implement.

Yeah, sounds like this is going to be a big challenge.  What does this mean for your IBM i customers?

There are definitely some aspects of this PCI-DSS update that will be a bigger challenge on the IBM i compared to Windows or Linux. First, we tend to run more applications on the IBM i. In a Windows or Linux environment you might have one application per server. On the IBM i platform, it is not uncommon to run dozens of applications. What this means is that you have more users with administrative privileges to authenticate – on average there can be 60 or more, and they can sometimes be a challenge to identify! When merchants and processors look at their IBM i platforms, they will be surprised at the number of administrators they will discover.

Additionally, the IBM i typically has many network services exposed (FTP, ODBC, Operations Navigator, etc).  The challenge of identifying all the entry points is greater for an IBM i customer.

You say it is harder to identify an administrative user, why is that?

On the IBM i platform, there are some really easy and some really difficult administrative users to identify.  For example, it is really easy to find users with QSECOFR (similar to a Windows Administrator or Linux Root User) privileges.  But it starts to get a little more difficult when you need to identify users, for example, who have all object (*ALLOBJ) authority.  These users have almost the equivalent authority of QSECOFR.  Finding, identifying, and properly inventorying users as administrators can be a challenge.

Additionally, with a user profile, there is the notion of a group profile.  It is possible for a standard user, who may not be an administrator, to have an administrative group profile.  To make it even more complex, there are supplemental groups that can also adopt elevated authority.  Just pause for a minute and think of the complex nature of user profiles and how people implement them on the IBM i platform.  And don’t forget, you may have users on your system who are not highly privileged directly through their user profile, but may be performing administrative tasks related to the CDE.  Identifying everyone with administrative access is a big challenge.
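To illustrate the inheritance problem Patrick describes, here is a small Python sketch that resolves a user's effective special authorities as the union of the profile's own authorities, its group profile's, and its supplemental groups'. The profile data is invented for the example.

    # Sketch: effective authority on IBM i is the union of a profile's own
    # special authorities plus those of its group and supplemental groups.
    # The profile data below is invented for illustration.
    profiles = {
        "JSMITH": {"special_auth": set(), "group": "ACCTG",
                   "supplemental": ["OPSGRP"]},
        "ACCTG":  {"special_auth": set(), "group": None, "supplemental": []},
        "OPSGRP": {"special_auth": {"*ALLOBJ"}, "group": None,
                   "supplemental": []},
    }

    def effective_authority(user: str) -> set:
        p = profiles[user]
        auth = set(p["special_auth"])
        groups = ([p["group"]] if p["group"] else []) + p["supplemental"]
        for g in groups:
            auth |= profiles[g]["special_auth"]
        return auth

    # JSMITH looks like a normal user but inherits *ALLOBJ through a
    # supplemental group - exactly the case a 2FA enrollment list must catch.
    print(effective_authority("JSMITH"))   # {'*ALLOBJ'}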

Townsend Security has a multi-factor authentication solution for the IBM i.  How are you helping customers deal with identifying administrators?

From the beginning, we realized this would be a problem and we have taken some additional steps, specifically related to PCI DSS 3.2 to help IBM i customers identify administrators.  We made it possible to build a list of all users who have administrative access and then require them to use multi-factor authentication when logging on.  We have done a lot to help the IBM i security administrator identify highly privileged users and enroll them into a two factor authentication solution, and then periodically monitor/update/audit the list.

What are some of the other multi-factor authentication challenges that IBM i customers face?

Some of them are pretty obvious.  If you don’t have a multi-factor authentication solution in place, there is the effort of evaluating and deploying something on the IBM i server.  You’ll find users who may already have a multi-factor authentication solution in place for their Windows or Linux environments, but haven’t extended it to their IBM i.  Even if they aren’t processing credit card numbers on the IBM i, if it is in the CDE, it still falls within the scope of PCI DSS.

Aside from deploying a solution, there is going to be administrative work involved. For example, managing the new software, developing new procedures, and putting governance around multi-factor authentication. Further, if you adopt a hardware-based solution with key FOBs, you have to have processes in place to distribute and replace them, as well as manage the back-end hardware. It has been really great seeing organizations move to mobile-based authentication solutions based on SMS text messages where there isn’t any hardware or FOBs to manage. Townsend Security’s Alliance Two Factor Authentication went that route.

Let’s get back to PCI DSS.  As they have done in the past, they announced the updated requirements, but businesses still have a period of time to get compliant.  When does PCI DSS 3.2 actually go into effect?

The PCI SSC always gives merchants and processors time to implement new requirements. The actual deadline to meet compliance is February 1, 2018. However, what we are finding is that most merchants are moving rapidly to adopt the requirements now. When an organization has an upcoming audit or Self Assessment Questionnaire (SAQ) scheduled, they generally will want to meet the compliance requirements for PCI DSS 3.2. It is never a good idea to wait until the last minute to try and deploy new technology in order to meet compliance.

You mentioned earlier that you have a multi-factor authentication solution.  Tell me a little bit about it.

Sure. Alliance Two Factor Authentication is a mature, cost-effective solution that delivers PINs to your mobile device (or voice phone), rather than through an expensive key FOB system. IBM i customers can significantly improve the security of their IBM i systems through implementation of proven two factor authentication.  Our solution is based on a non-hardware, non-disruptive approach.  Additionally, we audit every successful or failed authentication attempt and log it in the security audit journal (QAUDJRN).  One thing that IBM i customers might also be interested in, is in some cases, we can even extend their existing multi-factor authentication solution to the IBM i with Alliance Two Factor Authentication.  Then they can benefit from the auditing and services that we supply for the IBM i platform.  Our goal was to create a solution that was very cost-effective, rapid to deploy, meets compliance regulations, and doesn’t tip over the IT budget.
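The mechanics of an SMS-style PIN flow are simple to sketch. The Python below is a generic illustration of the approach described above (generate a short-lived random PIN, deliver it out of band, compare in constant time); it is not Alliance Two Factor Authentication's actual implementation.

    # Generic sketch of SMS-style PIN verification: a short-lived random
    # PIN, delivered out of band, compared in constant time.
    import hmac
    import secrets
    import time

    def issue_pin() -> tuple:
        pin = f"{secrets.randbelow(1_000_000):06d}"    # 6-digit random PIN
        return pin, time.monotonic() + 300             # valid for 5 minutes

    def verify_pin(submitted: str, issued: str, expires_at: float) -> bool:
        if time.monotonic() > expires_at:
            return False
        return hmac.compare_digest(submitted, issued)  # constant-time compare

    pin, expiry = issue_pin()
    # ...deliver `pin` to the user's phone via SMS or voice, then:
    assert verify_pin(pin, pin, expiry)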

Download a podcast of my complete conversation here and learn more about what PCI DSS 3.2 means for IBM i administrators, how to identify administrative users, challenges IBM i customers are facing, and how Townsend Security is helping organizations meet PCI DSS 3.2.

Podcast: IBM i, PCI DSS, and Multi-Factor Authentication

 

Topics: 2FA, IBM i, PCI DSS

Encryption & Key Management for the IBM i

Posted by Luke Probasco on Jul 25, 2016 10:38:04 AM

Excerpt from the eBook "IBM i Encryption with FieldProc - Protecting Data at Rest."


Encryption in FieldProc

It goes without saying that your FieldProc application will need to use an encryption library to perform encryption and decryption operations. IBM provides an encryption software library as a native part of the IBM i operating system. It is available to any customer or vendor who needs to implement encryption and decryption in their FieldProc programs.

Unfortunately the native IBM encryption library is very slow. This might not be noticeable when encrypting or decrypting a small amount of data. But batch operations can be negatively impacted. The advent of AES encryption on the POWER8 processor has done little to mitigate the performance issue with encryption. IBM i customers and third party vendors of FieldProc solutions should use caution when implementing FieldProc using the native IBM i AES software libraries. They are undoubtedly accurate implementations of AES encryption, but suffer on the performance front.

Key Management

An encryption strategy is only as good as the key management strategy, and it is difficult to get key management right. For companies doing encryption, the most common cause of an audit failure is an improper implementation of key management. Here are a few core concepts that govern a good key management strategy:

  • Encryption keys are not stored on the same system as the sensitive data they protect.
  • Security administrators of the key management solution should have no access to the sensitive data, and database administrators should have no access to encryption key management (Separation of Duties).
  • On the IBM i system this means that security administrators such as QSECOFR and any user with All Object (*ALLOBJ) should not have access to data encryption keys or key encryption keys.
  • More than one security administrator should authenticate before accessing and managing keys (Dual Control).
  • All access to encryption keys should be logged and audited. This includes use of encryption keys as well as management of keys.
  • Encryption keys should be mirrored/backed up in real time to match the organization’s standards for system availability.

Encryption Key Caching

Encryption keys are often used frequently when batch operations are performed on sensitive data. It is not unusual that a batch program would need to perform millions or tens of millions of encryption and decryption operations. While the retrieval of an encryption key from the key server may be very efficient, performance may suffer when keys need to be retrieved many times. This can be addressed through encryption key caching in the local environment.

Secure key caching should be performed in separate program modules such as a service program and should not be cached in user programs where they are more subject to discovery and loss. Any module caching an encryption key should have debugging options disabled and visibility removed. Secure key caching is critical for system performance and care should be taken to protect storage.
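As a concrete illustration of key caching, here is a minimal Python sketch. The retrieve_key() call to the external key manager is a hypothetical placeholder, and the five-minute lifetime is an arbitrary example; a production implementation would also protect the cached key material in memory as described above.

    # Minimal key-cache sketch: a key is fetched from the key server only
    # on a cache miss or after its lifetime expires, so millions of batch
    # operations do not each pay for a network round trip.
    import time

    CACHE_TTL_SECONDS = 300          # arbitrary example lifetime
    _cache = {}                      # key_id -> (key_bytes, fetched_at)

    def retrieve_key(key_id: str) -> bytes:
        """Hypothetical placeholder for a real key-manager retrieval."""
        raise NotImplementedError

    def get_key(key_id: str) -> bytes:
        key, fetched_at = _cache.get(key_id, (None, 0.0))
        if key is None or time.monotonic() - fetched_at > CACHE_TTL_SECONDS:
            key = retrieve_key(key_id)               # only on miss or expiry
            _cache[key_id] = (key, time.monotonic())
        return key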

Encryption Key Rotation

Periodically changing the encryption keys (sometimes called “key rotation” or “key rollover”) is important to the overall security of your protected data. Both data encryption keys (DEK) and key encryption keys (KEK) should be changed at appropriate intervals. The appropriate interval for changing keys depends on a number of variables including the amount of data the key protects and the sensitivity of that data, as well as other factors. This interval is called the cryptoperiod of the key and is defined by NIST in Special Publication 800-57 “Key Management Best Practices”. For most IBM i customers rotation of data encryption keys should occur once a year and rotation of the key encryption keys should occur no less than once every two years.
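A cryptoperiod check reduces to a date comparison. The Python sketch below encodes the intervals suggested above (a year for DEKs, two years for KEKs); the key metadata structure is an assumption for illustration.

    # Cryptoperiod check using the intervals suggested above.
    from datetime import datetime, timedelta

    CRYPTOPERIODS = {"DEK": timedelta(days=365), "KEK": timedelta(days=730)}

    def rotation_due(key_type: str, created: datetime) -> bool:
        return datetime.utcnow() - created >= CRYPTOPERIODS[key_type]

    print(rotation_due("DEK", datetime(2016, 1, 1)))  # True: past its cryptoperiod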


Topics: Encryption, IBM i, FIELDPROC

Encryption & Key Management for SQL Server

Posted by Luke Probasco on Jul 22, 2016 3:27:11 PM

Excerpt from the eBook "Encryption & Key Management for Microsoft SQL Server."


Microsoft SQL Server has become a ubiquitous storage mechanism for all types of digital assets. Protecting these data assets in SQL Server is a top priority for business executives, security specialists, and IT professionals.  The loss of sensitive data can be devastating to the organization and in some cases represents a catastrophic loss. There is no alternative to a digital existence and cybercriminals, political activists, and state actors have become more and more adept at stealing this information.  To properly protect this information, businesses are turning to encryption and key management.

Encryption

Encryption in the broadest sense means obscuring information to make it inaccessible to unauthorized users. But here we will use the term in its more precise and common sense – the use of well-accepted encryption algorithms based on mathematical proofs which have been embodied and approved as international standards.

Many approaches to encryption do not meet minimal requirements for security and compliance. Our definition of encryption excludes:

  • Homegrown methods developed by even experienced and talented programmers.
  • Emerging encryption methods that are not yet widely accepted.
  • Encryption methods that are widely accepted as secure, but which have not been adopted by standards organizations.
  • Data substitution and masking methods not based on encryption.

Examples of encryption methods that do meet our criteria include the Advanced Encryption Standard (AES), sometimes known as Rijndael, the Triple Data Encryption Standard (3DES), RSA, and Elliptic Curve encryption methods.

In the context of protecting data in a SQL Server database, the most common encryption method protecting whole databases or an individual column in a table is AES. All key sizes of AES (128-bit, 192-bit, and 256-bit) are considered secure and are appropriate for protecting digital assets. Many organizations choose 256-bit AES for this purpose due to the larger key size and stronger security.
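For a sense of what column-level AES protection looks like in practice, here is a hedged Python sketch that encrypts a single value with 256-bit AES-GCM before it is written to the database. Key handling is reduced to a local variable; in a real deployment the key would come from an external key manager, as discussed below.

    # Sketch: protecting one column value with 256-bit AES-GCM. In
    # production the key comes from an external key manager, never a
    # local variable.
    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    key = AESGCM.generate_key(bit_length=256)   # 256-bit AES key
    aesgcm = AESGCM(key)

    def encrypt_column_value(plaintext: str) -> bytes:
        nonce = os.urandom(12)                  # unique nonce per value
        return nonce + aesgcm.encrypt(nonce, plaintext.encode(), None)

    def decrypt_column_value(blob: bytes) -> str:
        nonce, ciphertext = blob[:12], blob[12:]
        return aesgcm.decrypt(nonce, ciphertext, None).decode()

    blob = encrypt_column_value("4111-1111-1111-1111")
    assert decrypt_column_value(blob) == "4111-1111-1111-1111"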

One major additional benefit of using an industry standard such as AES is that it meets many compliance requirements or recommendations for the use of industry standard encryption. This includes the PCI Data Security Standard (PCI-DSS), HIPAA, FFIEC, and the EU General Data Protection Regulation (EU GDPR).

Key Management

It is not possible to discuss an encryption strategy without discussing the protection of encryption keys. An encryption strategy is only as good as the method used to protect the encryption keys. Encryption algorithms such as AES and Triple DES are public and readily available to any attacker. The protection of the encryption key is the core to the security of the encrypted data. This is why security professionals consider the loss of the encryption key as equivalent to the loss of the digital assets. Once an attacker has the encryption key it is trivial to decrypt and steal the data.

Generating strong encryption keys and protecting them is harder than it might at first appear. The generation of strong encryption keys depends on the use of random number generation schemes, and modern computers do not excel at doing things randomly. Specialized software routines are needed to generate strong encryption keys. Encryption keys must also be securely stored away from the data they protect, and yet must be readily available to users and applications that are authorized to access the sensitive data. Authenticating that a user or application is authorized to an encryption key is a large focus of key management systems.
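The random-source point can be shown in a few lines of Python: the secrets module draws from the operating system's cryptographically secure generator, while the general-purpose random module is a deterministic PRNG that must never be used for key material.

    import random
    import secrets

    strong_key = secrets.token_bytes(32)   # 256-bit key from the OS CSPRNG

    random.seed(42)                        # deterministic by design
    weak_key = bytes(random.getrandbits(8) for _ in range(32))  # never do this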

Over the years standards and best practices have emerged for encryption key management and these have been embodied in specialized security applications called Key Management Systems (KMS), or Enterprise Key Management (EKM) systems. The National Institute of Standards and Technology (NIST) has taken a lead in this area with the creation of Special Publication 800-57 entitled “Recommendation for Key Management”. In addition to this important NIST guidance, the organization publishes the Federal Information Processing Standard (FIPS) 140-2 “Security Requirements for Cryptographic Modules”. To serve the needs of organizations needing independent certification that a key management application meets this standard, NIST provides a validation program for FIPS 140-2 compliant systems. All professional key management systems have been validated to FIPS 140-2.

When protecting sensitive SQL Server data with encryption, look for these core principles of key management:

  • Encryption keys are stored away from the data they protect, usually on specially designed security devices or dedicated virtual servers.
  • Encryption keys are managed by individuals who do not have access to the data stored in the SQL Server database (Separation of Duties).
  • Encryption key management requires more than one security administrator to authenticate before performing any critical work on keys (Dual control).
  • Key retrieval requests from users and applications are authenticated using industry standard methods.
  • Encryption management and key usage are logged in real time and logs are stored on secure log collection servers.
  • Encryption key management systems have been validated to FIPS 140-2 and the Key Management Interoperability Protocol (KMIP).

These are just a few of the core requirements for deploying a professional key management solution to protect your SQL Server data.


 

Topics: Encryption, SQL Server

Encryption Performance on the IBM i

Posted by Luke Probasco on Jul 12, 2016 11:11:00 AM

Excerpt from the eBook "IBM i Encryption with FieldProc - Protecting Data at Rest."


The performance of an encryption solution is one of the biggest concerns that an IBM i customer has when implementing FieldProc. There are many factors that can affect performance of a FieldProc application and it is wise to pay special attention to performance as you prepare to implement a solution. Let’s look at several factors that can affect performance.

AES 128-bit and AES 256-bit Performance

All key sizes for AES encryption (128-bit, 192-bit, and 256-bit) are considered secure for protecting all commercial sensitive data. Customers naturally wonder if the smaller key size of 128-bit encryption means better encryption performance when compared to 256-bit AES keys. In fact, the performance difference is much smaller than one might imagine. The smaller 128-bit key size uses 10 rounds during AES encryption, but 256-bit AES keys use 14 rounds. The IBM i RISC processor optimizes cryptographic operations so the performance penalty for 256-bit AES encryption is very small. Considering that cybersecurity guidelines now recommend the use of 256-bit AES for protecting data in long-term storage, and for protecting against advances in quantum computing, you should always consider using 256-bit AES encryption for protecting IBM i sensitive data.
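You can verify the small 128-bit versus 256-bit gap on your own hardware with a rough benchmark like the Python sketch below. It measures the 'cryptography' package on whatever machine runs it, not the IBM i APIs, so treat the numbers as illustrative of the round-count difference only.

    # Rough benchmark: AES-256 (14 rounds) versus AES-128 (10 rounds) on
    # small, field-sized blocks. Measures a local library, not IBM i APIs.
    import os
    import time
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    def bench(key_bits: int, iterations: int = 50_000) -> float:
        aes = AESGCM(AESGCM.generate_key(bit_length=key_bits))
        nonce, data = os.urandom(12), b"123-45-6789"   # small field-size block
        start = time.perf_counter()
        for _ in range(iterations):
            aes.encrypt(nonce, data, None)
        return time.perf_counter() - start

    print(f"AES-128: {bench(128):.2f}s   AES-256: {bench(256):.2f}s")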

AES Encryption Software Performance

Encryption software libraries can vary greatly in performance even when using exactly the same methods and key sizes. Unfortunately the native IBM i AES encryption software library and APIs have a low performance profile and this can have a negative impact on your IBM i applications. Fortunately, there are high performance AES encryption libraries available from third parties that have a much better performance profile. The difference between the native IBM encryption libraries and third party libraries can be more than 100 times in processing speed. Use care when deploying an encryption solution to ensure that the performance meets your minimum needs for processing time.

POWER8 On-Chip AES Encryption Performance

The newer POWER8 processor from IBM provides a hardware implementation of AES encryption to improve the performance of encryption. All new models of IBM i servers built with the POWER8 processor include this hardware implementation of AES encryption. In concept this is similar to the Intel processors, which provide AES encryption through the AES-NI implementation.

While encryption performance has improved with the POWER8 processor, it is not a large improvement. Encryption speeds using the native IBM i APIs are about double the speed of the previous versions of the Power processor, but still greatly lag in performance compared to third party encryption libraries. It should be noted that this performance lag disappears when the IBM APIs are used to encrypt very large blocks of information. For example, the AES implementation on the POWER8 processor can encrypt a megabyte of information at incredible speeds. Unfortunately this does not apply to the smaller blocks of data (credit card numbers, social security numbers, etc.) that are typically stored in DB2 database tables. It is likely that IBM i customers using iASP encryption will see a major performance benefit with the new POWER8 systems.

FieldProc Program Performance

When FieldProc invokes a program to perform encryption or decryption it makes a dynamic call to a program executable. There is always a certain amount of overhead when making a dynamic call to an external program and this is true in a FieldProc context, too. The more columns in a table that are under FieldProc control, the more dynamic calls you will have to the FieldProc program, as the FieldProc program is invoked independently for each column. It is important that the FieldProc program be properly optimized for performance. Some key performance measures include:

  • Minimized file I/O operations by the FieldProc program.
  • Program modules are optimized on compilation.
  • The program executable is optimized on compilation.
  • Memory management is optimized for performance.
  • Visibility of program objects is removed to improve performance.
  • The FieldProc program is optimized for multi-threaded operation.
  • Audit logging is optimized for performance. 

It is important that the FieldProc application be properly optimized as it may be invoked many times during a typical interactive or batch request.

FieldProc for Multiple Columns in a Table

It is likely that you will need to protect multiple columns in a table. For example, in a medical setting a table might contain the following information for a patient:

  • Name
  • Date of birth
  • Address
  • Social security number
  • Email address
  • Etc.

In this case you will be protecting multiple columns in a single table. This is fully supported by FieldProc and is very common in FieldProc implementations. When FieldProc programs are properly optimized for performance you should find that the extra performance impacts per column are not excessive. Thanks to the re-entrant model of the IBM i operating system, the protection of multiple columns has a smaller performance impact. Encrypting two columns does NOT have twice the performance impact of encrypting one column.

Key Management Performance Impacts

As mentioned above, your FieldProc encryption solution should implement good encryption key management practices. It is important that the key management interface impose little performance penalty. This means that the FieldProc application should use intelligent and secure key caching to minimize the number of key retrieval operations that must be performed. Additionally, keys should not be exposed in user applications, but should be protected in separate modules. When implemented properly, good key management will impose very little performance impact.

Audit Logging Performance

One common feature of FieldProc applications is the ability to collect audit information about user activity. This might include collecting information about which users accessed decrypted information, etc. Audit logging will always exact a performance price. If you need to collect audit logs of FieldProc activity, be sure to measure the performance impact of audit log collection in your own environment.


Topics: Encryption, FIELDPROC

Key Management: The Hardest Part of Encryption

Posted by Luke Probasco on Jun 3, 2016 7:19:00 AM

Excerpt from the eBook "2016 Encryption Key Management: Industry Perspectives and Trends." 


While organizations are now committed to implementing encryption, they are still struggling with getting encryption key management right. With all major operating systems, cloud platforms, and virtualization products now supporting encryption, it is relatively easy to make the decision to activate encryption to protect sensitive data. But an encryption strategy is only as good as the method used to protect encryption keys. Most audit failures for customers already using encryption involve the improper storage and protection of encryption keys.

Ignorance and fear are the driving reasons for this core security failure. Many IT professionals are still not versed in best practices for encryption key management, and IT managers fear that the loss of encryption keys or the failure of access to a key manager will render their data unusable. This leads to insecure storage of encryption keys in application code, unprotected files on the data server, and poor protection of locally stored keys.

Most encryption key management solutions have evolved over the last decade to provide unparalleled reliability and redundancy. This has largely removed the risk of key loss in critical business databases and applications. But the concern persists and inhibits the adoption of defensible key management strategies.

Take Aways

  • Protect encryption keys with single-purpose key management security solutions.
  • Never store encryption keys on the same server that houses sensitive data.
  • Only deploy encryption key management solutions that are based on FIPS 140-2 compliant technology.
  • Only deploy encryption key management solutions that implement the KMIP industry standard for interoperability.
  • Avoid cloud service provider key management services where key management and key custody are not fully under your control.

Cloud Migration and Key Management Challenges

Cloud migration continues to be a high priority for organizations large and small. The benefits of migrating to the cloud are clear. Reduction in cost for computing power and storage, leverage of converged infrastructure, reduction of IT administrative costs, on-demand scalability, and many other benefits will continue the rapid migration to cloud platforms. As cloud platforms such as Amazon Web Services (AWS), Microsoft Azure, Google App and Compute Engine, and IBM SoftLayer mature we can expect the pace of cloud migration to accelerate.

While cloud service providers are providing some encryption key management capabilities, this area will continue to be a challenge. The question of who has control of the encryption keys (key custody) and the shared resources of multi-tenant cloud service providers will continue to be headaches for organizations migrating to the cloud. The ability to exclusively manage encryption keys and deny access to the cloud service provider or any other third-party will be crucial to a good cloud key management strategy and end-customer trust. The attempt by governments and law enforcement agencies to access encrypted data through access to encryption keys will make this issue far more difficult moving forward.

Unfortunately, most cloud service providers have not adopted common industry standards for encryption key management. This makes it hard for customers to migrate from one cloud platform to another, resulting in cloud service provider lock-in. Given the rapid evolution and relative infancy of cloud computing, customers will have to work hard to avoid this lock-in, especially in the area of encryption key management. This is unlikely to change in the near future.

Take Aways

  • Avoid hardware-only encryption key management solutions prior to cloud migration. Make sure your key management vendor has a clear strategy for cloud migration.
  • Ensure that your encryption key management solution runs natively in cloud, virtual and hardware platforms.
  • Ensure that your encryption key management solution provides you with exclusive management of and access to encryption keys. Neither your cloud service provider nor your encryption key management vendor should have administrative or management access to keys. Backdoor access through common keys or key management code is unacceptable.
  • Avoid cloud service provider lock-in to proprietary key management services. The cloud is still in its infancy and retaining your ability to choose and migrate between cloud platforms is important.

Topics: Encryption, Encryption Key Management

