Townsend Security Data Privacy Blog

Creating the IBM Security Audit Journal QAUDJRN

Posted by Patrick Townsend on Sep 28, 2016 8:49:33 AM

Excerpt from the eBook "IBM i Security: Event Logging & Active Monitoring - A Step By Step Guide."


IBM i Security: Event Logging & Active Monitoring

The QAUDJRN security audit journal does not exist when you first install a new IBM i server. You must create the journal receivers and the journal to start the process of security event collection. As is the case with any journal on the IBM i server you must first create a journal receiver and then create the QAUDJRN journal and associate it with the journal receiver.

The first step is to create a journal receiver, which will be the actual container of security events. You can create the journal receiver in a user library or in the IBM general-purpose library QGPL. Creating a dedicated library to hold the journal receivers is recommended, as this allows more flexible system management. You should use a sequence number in the last four or five characters of the journal receiver name; this allows the IBM i operating system to automatically create new receivers. You can use this command to create the journal receiver:

   CRTJRNRCV JRNRCV(MYLIB/AUDRCV0001)
   THRESHOLD(1000000) AUT(*EXCLUDE)
   TEXT('Auditing Journal Receiver')

Now that we have created the first journal receiver, we can create the QAUDJRN journal. The QAUDJRN journal is always created in the operating system library QSYS. You can use this command to create the journal and associate it with the first journal receiver:

   CRTJRN JRN(QSYS/QAUDJRN)
   JRNRCV(MYLIB/AUDRCV0001)
   MNGRCV(*SYSTEM) DLTRCV(*NO)
   RCVSIZOPT(*RMVINTENT *MAXOPT3)
   AUT(*EXCLUDE) TEXT('Auditing Journal')
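Creating the journal does not by itself start event collection; auditing is controlled by system values. As a starting point, a configuration along these lines is common (the QAUDLVL values shown are examples only; choose the audit level values that match your security policy):

   CHGSYSVAL SYSVAL(QAUDCTL) VALUE('*AUDLVL *OBJAUD')
   CHGSYSVAL SYSVAL(QAUDLVL) VALUE('*AUTFAIL *SECURITY *SERVICE *SYSMGT')

Once QAUDCTL is set to something other than *NONE, the operating system begins writing audit entries to the QAUDJRN journal.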

Topics: System Logging, IBM i

IBM i Security: Event Logging & Active Monitoring

Posted by Patrick Townsend on Sep 20, 2016 9:40:55 AM

Excerpt from the eBook "IBM i Security: Event Logging & Active Monitoring - A Step By Step Guide."


Active monitoring and log collection are at the top of the list of effective security controls. IBM i (AS/400, iSeries) users have to solve some special challenges to implement this critical security control. The IBM i server is a rich source of security information. Unlike many Windows and Linux server deployments, the IBM i platform can host a complex mix of back-office applications, web applications, and open source applications and services. For convenience we will address these various log sources in terms of primary sources and secondary sources. Primary sources should be given priority in your implementation process, as the most critical events are likely to be found in primary event sources.

Primary Sources

IBM Security Audit Journal QAUDJRN
When configured properly, the IBM i server records a wide variety of security events in the security journal QAUDJRN. Events such as password failures, network authentication failures, failed SSL and TLS network negotiations, and application and database authority failures go straight into the security audit journal QAUDJRN. This should naturally be the first focus of your event monitoring strategy. We discuss the setup and configuration of the QAUDJRN journal below.
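Once auditing is active, you can review the collected entries with the Display Journal command. As a simple illustration, this displays password failure entries (journal entry type PW):

   DSPJRN JRN(QSYS/QAUDJRN) ENTTYP(PW)

Each audit event has its own two-character journal entry type, so the same command can be used to review authority failures (AF) and other event types.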

System History File QHST
The system history message file QHST (actually a collection of files that start with the name QHST) is also a repository of important security information. All job start, job termination, and abnormal job termination events are recorded in the QHST files. Additionally, important operational messages are written to QHST files and should be collected and monitored.
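You can review QHST messages with the Display Log command. The period values below are placeholders; substitute the time and date range you want to examine (the date format follows your job's date format):

   DSPLOG LOG(QHST) PERIOD(('0000' '090116') ('2359' '093016'))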

Exit Points
The IBM i operating system provides a number of “hooks” for network and other services on the IBM i platform. These hooks are referred to as exit points. IBM i customers or vendors can write applications that collect information and register them for the exit points. Once registered, an exit point monitoring application can collect information on a wide variety of activities. These include information on inbound and outbound FTP sessions, TCP sockets activity, network authentication, administrative commands through Operations Navigator, and many other core operating system services. A good active monitoring strategy should collect information from exit points and pass it to the centralized log collection server.
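As a sketch of how exit point registration works, the commands below register a user-written program against the FTP server request validation exit point; the program name (MYLIB/FTPEXIT) is purely illustrative, while the exit point and format names are the IBM-defined values for that exit. WRKREGINF lets you browse the available exit points and their formats:

   WRKREGINF EXITPNT(QIBM_QTMF_SERVER_REQ)
   ADDEXITPGM EXITPNT(QIBM_QTMF_SERVER_REQ) FORMAT(VLRQ0100)
     PGMNBR(1) PGM(MYLIB/FTPEXIT)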

Secondary Sources

Web Applications
Most IBM i customers do not deploy the server as a primary web server, but selected web applications and services are often incorporated into a customer’s primary web deployment. IBM i supports a rich set of web server platforms including IBM WebSphere, PHP, Perl, Python, and traditional HTML/CGI services. When web applications are deployed on an IBM i server, they are a target for traditional attacks. Web logs should be included in your event collection and monitoring strategy.

Open Source Applications
For some years IBM has been bringing a wide variety of open source applications to the IBM i platform. This includes applications like OpenSSH with Secure Copy (SCP), Secure FTP (SFTP), and a remote command shell. Many other application frameworks have been added, like Perl, PHP, and Python. All of these applications provide entry points and potential points of compromise for attackers. If you deploy these types of solutions you should enable and collect their logs.

System and Application Messages in QSYSMSG and QSYSOPR
Many customer and vendor applications write important operational and security messages to the system operator message queue QSYSOPR and to the optional system message queue QSYSMSG.
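Note that the QSYSMSG message queue is optional and does not exist by default; if you create it in the QSYS library, the operating system will route certain critical messages to it in addition to QSYSOPR:

   CRTMSGQ MSGQ(QSYS/QSYSMSG) TEXT('Optional system message queue')

Both message queues should be included in your monitoring strategy.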


Topics: System Logging, IBM i

Privately Held Companies Think They Don’t Need to Worry About Security and Compliance

Posted by Patrick Townsend on Sep 13, 2016 9:37:33 AM

In my role at Townsend Security I engage with companies both large and small and have many conversations about data breaches and security. One pattern that I see a great deal is the belief by privately held companies that they are not subject to compliance regulations, are not a target of cybercriminals, and don’t have much to worry about. The feeling seems to be “Gee, we are not publicly traded so we don’t have to do all that security stuff. It’s hard and expensive and it doesn’t really apply to us. Besides, we are too small for hackers to bother with.” I’m sure I’ve heard exactly these comments a number of times.

For the sake of these small businesses who represent the real backbone of our economy, we need to dispel these myths. Let’s look at a few of them:

We Don’t Need to Meet PCI Compliance
Anyone who takes credit or debit cards in their business must comply with the PCI Data Security Standard (PCI-DSS). When you signed your merchant agreement you agreed to be bound by the data security standards for payment cards. This is not a government regulation - it is an agreement between you and your payment provider, and all providers require this compliance. It doesn’t matter if you are small or large, privately held or publicly traded on the NYSE. You have to meet PCI compliance for security. Failure to comply with PCI-DSS standards can put you in jeopardy of losing the ability to accept credit card payments. This can be devastating for any company.

I Only Store a Small Number of Credit Cards, PCI Doesn’t Apply to Me
This is an incorrect understanding of the PCI Data Security Standard. Processing a small number of payment transactions, or a low dollar amount, may put you in a lower category for PCI compliance, but the PCI-DSS still applies to you. In the event of a data breach you may find that you have new on-site PCI audit requirements and additional liabilities, and you may have to pay a higher rate to authorize payment transactions. These effects can really hurt your business. Rest assured that if you take credit card payments the PCI-DSS requirements apply to you.

We Don’t Need to Meet HIPAA Compliance and Reporting
If you are a medical organization defined as a Covered Entity under HIPAA regulations you need to meet HIPAA data security requirements. This includes both privately held and publicly traded organizations. There is no exclusion for privately held organizations, whether you are a country doctor with a small practice or a large HMO or hospital chain. A quick review of the compliance actions by OCR shows that privately held organizations are just as likely to be fined as larger organizations. And penalties often do not stop at monetary fines; there are often requirements for years of on-site security audits and assessments that are very costly.

We are Not Big Enough to be a Target
Many executives and members of the IT organization at privately held companies believe that they are not a target of hackers and cybercriminals simply because they are privately held and small. This falls into the category of wishful thinking. In fact, as the FBI reports, smaller private companies are increasingly the focus of cybercriminals and are experiencing significant financial losses. Here is what John Iannarelli, an FBI consultant, says:

"Time and again I have heard small business owners say they have nothing to worry about because they are too small to interest cybercriminals. Instead, small businesses are exactly who the criminals are targeting for two primary reasons. In the criminal's mind, why go after large companies directly, when easier access can be attained through small business vendor relationships. Secondly, since small businesses have less financial and IT resources, criminals know they are less 'compromise ready' and tend to be less resilient."

We Don’t have to Report a Data Breach
Many smaller privately held companies are under the mistaken impression that compliance laws do not require them to report a data breach. This is incorrect - state and federal data breach notification laws apply to small businesses and privately held companies alike. Being small or privately held does not exclude you from data breach notification.

Compliance Regulations Don’t Apply to Us
It is true that if you are a private company the Sarbanes-Oxley compliance regulations do not apply to you; those regulations only apply to publicly traded companies. And the Federal Information Security Management Act (FISMA) does not apply to you because it only applies to US federal government agencies. You are, however, covered by state privacy laws, the PCI Data Security Standards (if you accept credit or debit cards), HIPAA if you are a Covered Entity in the medical segment or if you process patient data for a Covered Entity, and Federal Trade Commission (FTC) rules.

We are Not Under FTC Rules About Data Privacy
The Federal Trade Commission has a broad charter to protect consumers and this includes enforcement for consumer privacy. From large companies like Google, Facebook and Twitter to small companies like an auto dealership, the FTC has taken enforcement actions that include private companies. The enforcement actions can include fines, mandatory audits that last for many years, and other penalties. Private companies do fall under FTC jurisdiction. You can read more about the FTC and data privacy here.

In summary, smaller privately held companies are targets of cybercriminals, are less prepared to deal with an attack, and have fewer resources to recover from one. No competent executive in a privately held company would forgo general liability insurance for their business, yet a cyber attack is now a far more likely source of loss. It is not good business sense to ignore real threats.

Some last thoughts from the FBI:

The criminal then either creates another account or directly initiates a funds transfer masquerading as the legitimate user. The stolen funds are often then transferred overseas. Victims of this type of scheme have included small and medium-sized business, local governments, school districts, and health care service providers.

The potential economic consequences are severe. The sting of a cyber crime is not felt equally across the board. A small company may not be able to survive even one significant cyber attack.

From a Washington Post interview with the FBI:

Federal agents notified more than 3,000 U.S. companies last year that their computer systems had been hacked, White House officials have told industry executives, marking the first time the government has revealed how often it tipped off the private sector to cyberintrusions.

The alerts went to firms large and small, from local banks to major defense contractors to national retailers such as Target, which suffered a breach last fall that led to the theft of tens of millions of Americans’ credit card and personal data, according to government and industry officials.

“Three thousand companies is astounding,” said James A. Lewis, a senior fellow and cyberpolicy expert at the Center for Strategic and International Studies. “The problem is as big or bigger than we thought.”

Others notified by the Secret Service last year included a major U.S. media organization, a large U.S. bank, a major software provider, and numerous small and medium-size retailers, restaurants and hotels, officials said.

“Within hours of us coming up with information that we can provide, we would go to a victim,’’ said Edward Lowery, Secret Service special agent in charge of the criminal investigative division. “The reaction would be just like when you’re told you’re the victim of any crime. There’s disbelief, there’s anger, all those stages, because there’s a lot at stake here.”

Yes, Indeed. There is a lot at stake.

Download the Whitepaper: Meet the Challenges of PCI Compliance

Topics: Compliance, PCI DSS

How Do I Find and Start Alliance Key Manager for Encryption Key Management in AWS?

Posted by Patrick Townsend on Sep 6, 2016 10:52:19 AM

For Amazon Web Services (AWS) users, encryption and key management has never been easier.  Townsend Security's Alliance Key Manager uses the same FIPS 140-2 compliant key management technology found in the company's HSM and in use by over 3,000 customers worldwide. In the AWS Marketplace, there are two entries for Alliance Key Manager – one is for the fee-based implementation and one is for the Bring Your Own License (BYOL) implementation. Both are identical in their key management functionality. If you only need one or two instances of Alliance Key Manager you can use the fee-based entry in the marketplace. If you are going to use more than a couple of instances of the key manager you may want to use the Bring Your Own License entry to launch the key manager. There are discounts available for multiple instances of Alliance Key Manager and the BYOL version may be less expensive.

If you are logged into your AWS account you can launch Alliance Key Manager directly from the marketplace. Both licensing models support a free 30-day license to use the key manager.

Before launching, you should determine if you want to run the key manager in the public AWS cloud, or if you want to run the key manager in a virtual private cloud (VPC).  The AWS virtual private cloud platform provides more isolation from other cloud customers and therefore a bit more security, if that is desired.

As you launch Alliance Key Manager in the AWS cloud you will need to select a region in which to run the key manager. Alliance Key Manager supports all of the AWS regions and you can run it anywhere. Your choice of regions may reflect your estimate of where you will have the greatest demand, or where you want critical key material to reside.

Once your AWS instance of Alliance Key Manager has been launched you can open an SSH session to the key manager to perform initial set up. You will change your password, create a set of server and client PKI certificates, indicate whether this instance of the key server is a primary or secondary mirror server, and create some initial unique encryption keys. After answering these questions you will have a fully functional, dedicated EC2 instance of Alliance Key Manager ready to use.

Alliance Key Manager comes with a full suite of software development kits (SDKs) and documentation, but the marketplace listing is limited to three documents. After you launch your AWS instance of the key manager, please contact Townsend Security to register and get access to the supplemental AKM documentation. Unless you register at the Townsend Security web site it will not be possible to contact you and send you the documentation. There is no charge for access to the documentation.

The AWS license comes with customer support at the Basic level. This provides technical support and software updates via email during business hours. A Premium Support option is available that provides telephone and web support and includes 24/7/365 support for business interruption issues. Please visit the Townsend Security web site for more information about the Premium Support option and to register your instance of Alliance Key Manager for AWS.

At Townsend Security we want to provide you with a positive experience with our key management products and provide you the support you deserve. When you run our Alliance Key Manager in AWS we won’t know who you are because Amazon does not report that information. By registering at the Townsend Security web site you get access to documentation, SDKs and free support. And we can keep you up to date on the latest security patches and enhancements!

You can find more information about Alliance Key Manager in AWS here.

How to Meet Best Practices for Protecting Information in AWS by Stephen Wynkoop

 

Topics: Alliance Key Manager, Amazon Web Services (AWS)

Encrypted Indexes - Overcoming Limitations with FieldProc Encryption on the IBM i

Posted by Luke Probasco on Aug 30, 2016 11:00:00 AM

Excerpt from the eBook "IBM i Encryption with FieldProc - Protecting Data at Rest."


Encrypted Indexes

The DB2 FieldProc implementation fully supports the encryption of columns which are indexes (keys) to the data in a native SQL context - this includes RPG applications that use embedded SQL. However, there are some severe limitations with legacy RPG applications around encrypted index order. If you are like most IBM i users, it is important to understand these limitations before implementing DB2 encryption in RPG.


Legacy RPG applications use record-oriented operations, not the set-oriented operations that are typical of SQL. Many record-oriented operations on encrypted indexes in RPG will work as expected. For example, a CHAIN operation on an encrypted index to retrieve a record from a DB2 table will work. If the record exists it will be retrieved and FieldProc will decrypt the value. However, many range and data-ordering operations will not work as expected with legacy RPG programs, leaving you with empty reports and subfiles. Consider the following logic:

SETLL (position to a particular location in the index for a file)
WHILE (some condition)
    READ (read the next record by the index)
END WHILE

The SETLL (Set Lower Limit) operation will probably work if the particular index value exists. However, the program will then read the next records based on that position in the file. IBM i developers are surprised to learn that they will be reading the next record based on the ENCRYPTED value, not on the decrypted value they might expect. The result is often empty subfiles and printed reports. This is very common logic in applications where indexes are encrypted. Note that your RPG program will not get a file I/O error; it just won’t produce the results you expect.

In simpler applications this side-effect of encrypted indexes is not significant, or easily programmed around. However, in some applications where sensitive data is encrypted across a large number of tables (think social security number in banking applications) this can be a significant limitation.

Don’t despair, keep reading.

Overcoming Encrypted Index Limitations in RPG

The limitations of encrypted indexes in legacy RPG applications often lead IBM i customers to abandon their encryption projects. The prospect of converting a large number of legacy RPG applications to newer SQL interfaces can be daunting. Their legacy RPG applications contain a lot of valuable business logic, and the effort to make the conversion can be quite large.

Wouldn’t it be great if you could wave a magic wand and make legacy RPG applications use SQL without any changes?

IBM opened a path to this type of solution with Open Access for RPG, or OAR. OAR allows for the substitution of traditional file I/O operations with user-written “Handlers”. In essence, you can replace the native file I/O operations of RPG with your own code - no change to program file handling or business logic! The OAR capability spawned a number of user interface modernization products and other solutions that take advantage of it. Imagine if your RPG screen-handling I/O with Execute Format (EXFMT) could be replaced with a web-based GUI library. Instant modernization of the UI! There are now many solutions that leverage OAR for UI modernization.

Join Logical Files with DDS

One limitation of logical files created with DDS specifications involves join logical files. You will not be able to create a DDS join logical file where the join involves a field encrypted with FieldProc. You will get an error about invalid data type usage. This is an IBM limitation and there is no known workaround for this issue. Note that this limitation only applies to DDS join logical files and does not apply to SQL joins using encrypted indexes. Most IBM i customers will need to change their RPG or COBOL program logic to avoid the use of DDS join logical files which use encrypted indexes.

Application Changes

Legacy RPG applications that use encrypted indexes often need re-design and re-programming to avoid the problems of encrypted indexes. You can avoid these issues if you are using an Open Access for RPG (OAR) solution that maps the legacy RPG record-based file operations to native SQL and the SQL Query Engine (See the note above about Townsend Security’s OAR/SQL offering).

If you need to re-design your RPG applications to avoid encrypted indexes, consider putting all of your sensitive data in a table that is indexed by some non-sensitive value, such as a sequence number or customer number, and use FieldProc to encrypt that table. There are many approaches to application re-design and you should be able to find a method that works for you.

Change Management

IBM i customers who have deployed change management solutions can encounter challenges with FieldProc implementations. While most of these challenges are surmountable, it will take effort on the part of the systems management team to integrate FieldProc controls into their change management strategy. Unfortunately the implementation of FieldProc does not provide much in the way of support for change management systems. The activation of FieldProc changes an attribute on the column in the table, but change management systems generally are not aware of this attribute. It can be challenging to promote table and column level changes and properly retain the FieldProc attribute of the data. Look for a FieldProc implementation that provides a command-level interface to stop, start, and change FieldProc definitions. These commands can help you integrate FieldProc encryption into your change management strategy.

IBM i Encryption with FieldProc

Topics: IBM i, FIELDPROC

How Do I Encrypt Data and Manage Encryption Keys Using Java in Amazon Web Services (AWS)?

Posted by Patrick Townsend on Aug 22, 2016 10:51:12 AM

If you are a Java developer you probably know that the Java language has full native support for AES encryption. You don’t need any third-party SDKs or add-ins to use industry-standard, strong encryption. The standard Java APIs are based on industry standards and are very efficient. Don’t hesitate to use this built-in facility. You include it in your Java application like this:

import javax.crypto.Cipher;
import javax.crypto.spec.IvParameterSpec;
import javax.crypto.spec.SecretKeySpec;

Encryption key management is another story. To implement good encryption key management you will need to turn to an enterprise key management solution and their Java library to make this happen. Our Alliance Key Manager for AWS solution provides a Java SDK to help you with encryption key use. The Alliance Key Manager Java SDK lets you easily retrieve an encryption key for use in your application, or alternatively to send data to Alliance Key Manager on a secure connection where the encryption or decryption task can be performed directly on the key server. This encryption service is helpful in situations where you don’t want to expose the encryption key in your application or server environment.

Many developers use the Java Keystore (JKS/JCEKS) facility for storing encryption keys. The Java key store is more of a key storage facility than a key management facility, and it rarely meets compliance regulations for separating keys from the data they protect, providing for separation of duties, and dual control. If you are currently storing encryption keys in a JKS repository you may want to consider moving them to a true key management solution like Alliance Key Manager.

One of the advantages of the Alliance Key Manager SDK is the built-in high availability failover facility. With the SDK, in the event of a network or other failure your application automatically fails over to a secondary HA key server in real time. This means your application keeps running even though a network or system error prevents access to the primary key server.

The Java SDK for Alliance Key Manager includes all of the support needed to make a secure connection to the key server, retrieve an encryption key, access the encryption and decryption services on Alliance Key Manager, and perform other common functions. By using the SDK the Java developer can avoid writing all of the code needed to perform these tasks – the work needed to retrieve an encryption key is reduced to a few lines of code.  We think this is a big bonus for the Java developer and helps make their lives easier. And sample source code will really speed along the process.

Here is an extract of the sample source code showing the retrieval of an encryption key from Alliance Key Manager, an encryption of some plaintext, and the decryption of that ciphertext:

// Note: Full sample source available (this is just an extract)
import javax.crypto.Cipher;
import javax.crypto.spec.IvParameterSpec;
import javax.crypto.spec.SecretKeySpec;

import com.townsendsecurity.akmcore.AkmException;
import com.townsendsecurity.akmcore.AkmUtil;
import com.townsendsecurity.akmcore.AkmRequest;

import com.townsendsecurity.akmkeys.AkmKeyRequest;
import com.townsendsecurity.akmkeys.AkmSymKey;

// The AKM configuration file
String sCfgFile = "/path/jakmcfg.xml";

// Create a key request object initialized from the configuration file
AkmKeyRequest keyRQ = AkmKeyRequest.getInstance(sCfgFile);

// Define the key name (as defined on the key server) and the
// key instance (version) name
String sKey = "some-key";
String sInstance = "some-name";

// Retrieve the encryption key from Alliance Key Manager
AkmSymKey symkey = keyRQ.retrieveSymKey(sKey, sInstance);

// Create a context (EncryptDecryptCBC is a helper class from the full sample)
EncryptDecryptCBC cryptor = new EncryptDecryptCBC(symkey.getKeyBytes());

// Encrypt some plaintext
byte[] ciphertext = cryptor.encryptSymmetric(plaintext.getBytes());

// Decrypt the ciphertext
byte[] plainbuf = cryptor.decryptSymmetric(ciphertext);

There is no charge for the Java SDK and all Alliance Key Manager customers have access to the Java SDK and sample code. AWS customers must register on the Townsend Security web site to get access to the Java code. You can do that here.

Encryption Key Management Simplified eBook

Topics: Alliance Key Manager, Amazon Web Services (AWS), Encryption Key Management, Encryption

How Can I Manage and Monitor Alliance Key Manager in Amazon Web Services (AWS)?

Posted by Patrick Townsend on Aug 19, 2016 11:00:00 AM

Alliance Key Manager for AWS runs as a stand-alone EC2 virtual machine in the AWS cloud. This is an Infrastructure-as-a-Service (IaaS) implementation, which means that the solution includes the operating system (Linux) and the key manager application all within the AMI and active EC2 instance. There are several ways you can manage and monitor the key server.

The server components of Alliance Key Manager, such as the network interface, firewall, system logging, backup, and other operating system components, are managed through a secure web browser interface. The secure browser session does not provide for management of encryption keys, but lets you perform various common server management tasks.

The Alliance Key Manager Administrative Console is a PC application that provides a GUI interface for the secure management of encryption keys and key access policy. You can manage multiple key managers through a single console instance and you can check the status of the key manager.

The Linux operating system of Alliance Key Manager provides a number of other ways to manage and monitor the key server. You can set firewall rules to control which client systems are authorized to access the key server, and you can set up system log forwarding to your log collection server or Security Information and Event Management (SIEM) solution running either inside or outside of AWS. Actively monitoring your security systems like Alliance Key Manager is a security best practice. You can easily monitor Linux system logs, web logs, firewall logs, and other system logs and transmit them to your SIEM log collection server.

Alliance Key Manager also creates audit and diagnostic logs that you can forward with the native syslog-ng daemon within the key manager. Every user and administrative access to Alliance Key Manager is logged to the audit file, and errors are logged to the error log file. Both of these files should be forwarded to your SIEM log collection server.

Alliance Key Manager also implements a “NO-OP” command to provide for monitoring the up/down status of the key server. Your monitoring solution can periodically issue the No-Op command to the key server to determine the status of current key operations. The No-Op command is a lightweight transaction and you can safely monitor the status of the key server every few seconds if desired.

Many of our customers ask us if they can install third-party management and monitoring applications. In the past we’ve been restrictive about the installation of third party components, but we came to realize how important they are to many AWS customers. We now allow you to install these applications with the caveat that you must be responsible for their secure installation and deployment. Our customers are now installing applications like Chef and Nagios for active monitoring.

You should also be aware that Amazon provides a number of monitoring tools that you can use with any EC2 instance. One of the most common is Amazon CloudWatch. You can use CloudWatch to monitor the status of your Alliance Key Manager EC2 instance, which can help with early detection of potential problems.

Lastly, Alliance Key Manager is an API-driven enterprise key management solution. That is, all key management tasks that are performed from the Administrative Console can be performed from user applications or from the command line. In fact, the Administrative Console is built on these APIs. You can create your own applications that drive these functions without user intervention if you need to. This facility is very helpful for our partners who need to embed automated key management into their own AWS solutions.

You can find more information about Alliance Key Manager for AWS here.

How to Meet Best Practices for Protecting Information in AWS by Stephen Wynkoop

Topics: Alliance Key Manager

IBM i, PCI DSS 3.2, and Multi-Factor Authentication

Posted by Luke Probasco on Aug 16, 2016 10:48:05 AM

With the recent update to the Payment Card Industry Data Security Standard (PCI DSS) regarding multi-factor authentication (also known as Two Factor Authentication or 2FA), IBM i administrators are finding themselves faced with the requirement of deploying an authentication solution within their cardholder data environments (CDE). Prior to version 3.2 of PCI DSS, remote users were required to use two factor authentication for access to all systems processing, transmitting, or storing credit card data. With version 3.2 this is now extended to include ALL local users performing administrative functions in the CDE.

Here is an excerpt from section 8.3: (emphasis added)

8.3 Secure all individual non-console administrative access and all remote access to the cardholder data environment (CDE) using multi-factor authentication.

I recently sat down with Patrick Townsend, Founder & CEO of Townsend Security, to talk about PCI DSS 3.2, what it means for IBM i users, and what they can do to meet the latest version of the standard.

Thanks for taking some time to sit down with me, Patrick. Can you recap the new PCI-DSS version 3.2 multi-factor authentication requirement? This new requirement seems to be generating a lot of concern.

Well, I think the biggest change in PCI DSS 3.2 is the requirement for multi-factor authentication for all administrators in the cardholder data environment (CDE).  Prior to 3.2, remote users such as contractors and third-party administrators had to use multi-factor authentication to log in to the network.  This update extends the requirement to ALL local, non-console administrative users.  We are seeing businesses deploy multi-factor authentication at the following levels:

  •      Network Level - When you first access the network
  •      System Level – When you access a server or any system within the CDE
  •      Application Level – Within your payment application

The requirement for expanded multi-factor authentication is a big change and is going to be disruptive for many merchants and processors to implement.

Yeah, sounds like this is going to be a big challenge.  What does this mean for your IBM i customers?

There are definitely some aspects of this PCI DSS update that will be a bigger challenge on the IBM i compared to Windows or Linux.  First, we tend to run more applications on the IBM i.  In a Windows or Linux environment you might have one application per server; on the IBM i platform, it is not uncommon to run dozens of applications.  What this means is that you have more users with administrative privileges to authenticate: on average 60 or more, and they can sometimes be a challenge to identify!  When merchants and processors look at their IBM i platforms, they will be surprised at the number of administrators they discover.

Additionally, the IBM i typically has many network services exposed (FTP, ODBC, Operations Navigator, etc).  The challenge of identifying all the entry points is greater for an IBM i customer.

You say it is harder to identify administrative users. Why is that?

On the IBM i platform, there are some really easy and some really difficult administrative users to identify.  For example, it is really easy to find users with QSECOFR (similar to a Windows Administrator or Linux Root User) privileges.  But it starts to get a little more difficult when you need to identify users, for example, who have all object (*ALLOBJ) authority.  These users have almost the equivalent authority of QSECOFR.  Finding, identifying, and properly inventorying users as administrators can be a challenge.

Additionally, with a user profile, there is the notion of a group profile.  It is possible for a standard user, who may not be an administrator, to have an administrative group profile.  To make it even more complex, there are supplemental groups that can also adopt elevated authority.  Just pause for a minute and think of the complex nature of user profiles and how people implement them on the IBM i platform.  And don’t forget, you may have users on your system who are not highly privileged directly through their user profile, but may be performing administrative tasks related to the CDE.  Identifying everyone with administrative access is a big challenge.
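To make the group-profile wrinkle concrete, here is a minimal sketch of the inventory logic: a user counts as an administrator if the user profile, its group profile, or any supplemental group carries a privileged special authority such as *ALLOBJ or *SECADM. The profile dictionary below is invented for illustration; on a real system you would build it from the user profiles themselves (for example, via the QSYS2.USER_INFO SQL service):

```python
# Special authorities that confer administrative power on IBM i.
PRIVILEGED = {"*ALLOBJ", "*SECADM", "*AUDIT", "*SERVICE", "*SPLCTL"}

def is_admin(name: str, profiles: dict) -> bool:
    """True if the user, its group profile, or any supplemental group
    carries a privileged special authority."""
    user = profiles[name]
    related = [name]
    if user.get("group"):
        related.append(user["group"])
    related += user.get("supplemental_groups", [])
    return any(
        set(profiles.get(n, {}).get("special_authorities", [])) & PRIVILEGED
        for n in related
    )

# Invented example data: JSMITH looks ordinary but inherits *ALLOBJ
# through the PAYGRP group profile.
profiles = {
    "PAYGRP": {"special_authorities": ["*ALLOBJ"]},
    "JSMITH": {"special_authorities": [], "group": "PAYGRP"},
    "ABROWN": {"special_authorities": []},
}
```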

Townsend Security has a multi-factor authentication solution for the IBM i.  How are you helping customers deal with identifying administrators?

From the beginning, we realized this would be a problem and we have taken some additional steps, specifically related to PCI DSS 3.2 to help IBM i customers identify administrators.  We made it possible to build a list of all users who have administrative access and then require them to use multi-factor authentication when logging on.  We have done a lot to help the IBM i security administrator identify highly privileged users and enroll them into a two factor authentication solution, and then periodically monitor/update/audit the list.

What are some of the other multi-factor authentication challenges that IBM i customers face?

Some of them are pretty obvious.  If you don’t have a multi-factor authentication solution in place, there is the effort of evaluating and deploying something on the IBM i server.  You’ll find users who may already have a multi-factor authentication solution in place for their Windows or Linux environments, but haven’t extended it to their IBM i.  Even if they aren’t processing credit card numbers on the IBM i, if it is in the CDE, it still falls within the scope of PCI DSS.

Aside from deploying a solution, there is going to be administrative work involved.  For example, managing the new software, developing new procedures, and putting governance around multi-factor authentication.  Further, if you adopt a hardware-based solution with key FOBs, you have to have processes in place to distribute and replace them, as well as manage the back-end hardware.  It has been really great seeing organizations move to mobile-based authentication solutions based on SMS text messages, where there isn’t any hardware or FOBs to manage.  Townsend Security’s Alliance Two Factor Authentication went that route.

Let’s get back to PCI DSS.  As they have done in the past, they announced the updated requirements, but businesses still have a period of time to get compliant.  When does PCI DSS 3.2 actually go into effect?

The PCI SSC always gives merchants and processors time to implement new requirements.  The actual deadline to meet compliance is February 1, 2018.  However, what we are finding is that most merchants are moving rapidly to adopt the requirements now.  When an organization has an upcoming audit or Self Assessment Questionnaire (SAQ) scheduled, they generally will want to meet the compliance requirements for PCI DSS 3.2.  It is never a good idea to wait until the last minute to try and deploy new technology in order to meet compliance.

You mentioned earlier that you have a multi-factor authentication solution.  Tell me a little bit about it.

Sure. Alliance Two Factor Authentication is a mature, cost-effective solution that delivers PINs to your mobile device (or voice phone), rather than through an expensive key FOB system. IBM i customers can significantly improve the security of their IBM i systems through implementation of proven two factor authentication.  Our solution is based on a non-hardware, non-disruptive approach.  Additionally, we audit every successful or failed authentication attempt and log it in the security audit journal (QAUDJRN).  IBM i customers might also be interested to know that, in some cases, we can even extend their existing multi-factor authentication solution to the IBM i with Alliance Two Factor Authentication.  Then they can benefit from the auditing and services that we supply for the IBM i platform.  Our goal was to create a solution that is very cost-effective, rapid to deploy, meets compliance regulations, and doesn’t tip over the IT budget.
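The general shape of an SMS-delivered PIN scheme is straightforward: generate a short-lived random PIN, deliver it out of band, and verify it within a time window using a constant-time comparison. The sketch below illustrates that pattern only; the PIN length and expiry window are assumptions, and this is not the actual Alliance Two Factor Authentication implementation:

```python
import secrets
import time

PIN_TTL = 300  # seconds a delivered PIN remains valid (assumed window)

def issue_pin() -> tuple[str, float]:
    """Generate a 6-digit one-time PIN and its expiry timestamp."""
    pin = f"{secrets.randbelow(1_000_000):06d}"
    return pin, time.time() + PIN_TTL

def verify_pin(entered: str, issued: str, expires: float) -> bool:
    """Reject expired PINs; compare entered vs. issued in constant time."""
    if time.time() >= expires:
        return False
    return secrets.compare_digest(entered, issued)
```

In practice the issued PIN and expiry would be stored server-side and the PIN sent to the user's phone via an SMS gateway.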

Download a podcast of my complete conversation here and learn more about what PCI DSS 3.2 means for IBM i administrators, how to identify administrative users, challenges IBM i customers are facing, and how Townsend Security is helping organizations meet PCI DSS 3.2.

Podcast: IBM i, PCI DSS, and Multi-Factor Authentication

 

Topics: 2FA, IBM i, PCI DSS

How Can I Be Sure I Never Lose My Encryption Keys in Amazon Web Services (AWS)?

Posted by Patrick Townsend on Aug 12, 2016 11:00:00 AM

As organizations move to the cloud, the topics of encryption and key management are top concerns.  "How can I be sure that I never lose my encryption keys?" is one that we hear a lot.  With Alliance Key Manager (AKM), Townsend Security's FIPS 140-2 compliant encryption key manager, you never have to worry about that! There are several layers of protection that help put this worry to rest. Let’s take a look at them in order.

Backup and Restore

The first layer of protection is that Alliance Key Manager gives you a complete backup and restore facility, including both a manual and an automated facility. At any time you can run the manual backup operation to back up your key database, certificates, configurations, and access control definitions. This backup can be sent to your own secure server, either in the AWS cloud or in your own data center. You can also create the backup image and download it directly to your own server for safekeeping.

Alliance Key Manager also supports automatic backup to a secure server on a schedule you define: daily, weekly, monthly, or any other interval you choose. Secure off-line backup is the first layer of protection.

High Availability

Most of our customers in AWS will deploy a second instance of Alliance Key Manager as a high availability failover key server. You can deploy the HA instance of the key server in a different region, or even completely outside of the AWS cloud. Once you deploy the secondary HA instance of the AKM key server you can start mirroring your data keys from the primary production instance of the key server to this secondary HA instance of the key server. Keys and access policies are securely mirrored in real time and the mirror architecture is active-active. This means that if you fail over to the secondary key server, create keys or make changes to key access policies, these will be mirrored back to the production key server in real time. Key mirroring provides a second layer of protection from key loss.

For customers concerned about protection from failures of the AWS cloud platform itself, you can mirror encryption keys to a key server outside of the AWS cloud. That secondary mirror key server can be located in your data center, in another cloud service provider platform, or in a hardware security module (cloud HSM) in a hosting center. Note that there is no limit to the number of backup mirror key servers that you can configure. Alliance Key Manager supports a many-to-many architecture for key mirroring.
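On the client side, the usual way to take advantage of a mirrored key-server pair is simple failover: try the production server first, then each backup in turn. A minimal sketch, assuming the caller supplies the list of servers and a key operation that raises ConnectionError when a server is unreachable:

```python
def with_failover(servers, operation):
    """Run a key operation against each server in order, returning the
    first successful result; raise only if every server is unreachable."""
    last_error = None
    for server in servers:
        try:
            return operation(server)
        except ConnectionError as err:
            last_error = err  # remember the failure, try the next mirror
    raise RuntimeError("all key servers unreachable") from last_error
```

Because mirroring is active-active, any key created through the secondary during an outage flows back to the primary when it returns.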

Export Encryption Keys

A third layer of protection is provided by the key export facility of Alliance Key Manager. You can securely export individual encryption keys to your own internal systems. The key export facility also provides you with the ability to share an encryption key with another user or organization.

Separation of Duties & Dual Control

Using Separation of Duties and Dual Control can provide a fourth layer of protection for encryption keys. This level of protection is especially helpful against insider threats. You can create a separate AWS account for use by your security administrators to create and manage encryption keys. These key management administrators would have no access to the normal AWS instances where you store sensitive data, and your normal AWS administrators would have no access to the key management account. By activating Dual Control in Alliance Key Manager, at least two security administrators must authenticate to the server before changes can be made to encryption keys or keys can be deleted.

Stand-alone Instance

Lastly, Alliance Key Manager runs as a stand-alone EC2 instance in the AWS cloud. You are automatically taking advantage of the security, resilience and recoverability provided by Amazon. Always use good AWS account security and management practices to help protect your sensitive data and encryption keys!

It may theoretically be possible to lose an encryption key, but you are going to have to work very hard to do so! Alliance Key Manager takes the fear of key loss out of your encryption strategy in AWS.

You can find more information about Alliance Key Manager for AWS here.


Topics: Amazon Web Services (AWS), Encryption Key Management

Managed FTP Services on the IBM i – Look for These 8 Features

Posted by Patrick Townsend on Aug 8, 2016 1:03:03 PM

In a previous blog I talked about the security features that you should find in a Managed FTP solution. Of course, we look for the security components first, as we want to be very sure that our data is protected in transit and at rest when it arrives at its destination. But with the high volume of FTP transfer activity in the modern organization, we also want to find a number of automation and management features in our Managed FTP solution. That’s the focus of today’s blog.

Here are the eight main elements of a Managed FTP solution for the IBM i (iSeries, AS/400) platform:

  1. Automation

  2. Scheduling

  3. Application integration

  4. Diagnostic logging

  5. Notification and Exception handling

  6. Resource management

  7. File system support (DB2, IFS, etc.)

  8. Commands and APIs

Let’s take these areas one at a time.

Automation: By its nature FTP is a manual process. This is one of the original protocols of the Internet and it was designed as a command line facility. But our modern IT systems need a solution that is hands-off and lights-out. A good Managed FTP solution should allow you to fully automate both inbound and outbound file transfers. And because our IBM i servers are often located inside the firewall, we need to be able to detect and pull files that are available on remote and external servers. We sometimes call this the automatic scan of remote servers and it is a critical automation component. Your Managed FTP solution should allow you to automate every aspect of sending and receiving files, including encryption of files you are sending and decryption of files that you receive.
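The "automatic scan of remote servers" pattern boils down to: list the remote directory, compare against the files already processed, and pull only the new ones. A minimal sketch using Python's standard ftplib; the host, credentials, and directory are placeholders:

```python
from ftplib import FTP

def select_new(remote_names, processed):
    """Return the remote files we have not yet pulled, in name order."""
    return sorted(set(remote_names) - set(processed))

def scan_and_pull(host, user, password, directory, processed):
    """Poll a remote FTP server and download any files not seen before.

    Hypothetical host/credentials; a production job would also decrypt
    received files and record each transfer in its logs.
    """
    with FTP(host) as ftp:
        ftp.login(user, password)
        ftp.cwd(directory)
        for name in select_new(ftp.nlst(), processed):
            with open(name, "wb") as out:
                ftp.retrbinary(f"RETR {name}", out.write)
            processed.add(name)
```

A scheduler would call scan_and_pull on a timer, persisting the processed set between runs.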

Scheduling: Many file transfers have to happen at a certain time of day. This means that your Managed FTP solution should provide for intelligent scheduling of file transfers. Scheduled transfers might happen hourly, once a day, once a week, or once a month. But the scheduling facility should accommodate your transfer needs. Additionally, the ability to schedule a transfer through a third party scheduling application should be fully supported.

Application integration: When you receive a file via FTP it should be possible to automatically decrypt the file and automatically process it into your applications. This level of automation removes the need for human intervention and provides data in a timely fashion to your applications and ultimately to your users. Look for your Managed FTP solution to provide callable exit points, library and IFS directory scan facilities, and plenty of sample programs that you can use to start your automation projects.

Diagnostic logging: It is easy to underestimate the importance of built-in diagnostic logging in a Managed FTP solution. When you are processing many files every day, and when you are processing time critical files (think payroll files), you have to be able to identify the cause of a transfer problem very quickly. A diagnostic log should be available for every transfer and should clearly identify the causes of failures. FTP sessions can fail for a wide variety of reasons including network outages, password changes, remote configuration changes, expired certificates and keys, and many other issues. The presence of diagnostic logging means the difference between a long night hunched over a terminal or a leisurely trip to the pub!

Exception handling: A good Managed FTP solution will tell you when things go wrong. From my point of view this is both a good thing AND a bad thing. We have customers who run our solutions for years and forget that they are there! But this is what you want. A Managed FTP solution should tell you when a transfer failed and give you some clues on the resolution. In our Managed FTP solution notifications are done by email and you have a lot of choices – you can get notified on failure, notified on successful transfer, or notified on all activity. But it is the ability to get notified on failure that is so critical.  Exception handling should also include automatically retrying a failed transfer operation. Look for the ability of your Managed FTP solution to retry a transfer at least three times before reporting a problem!
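The retry-then-notify behavior described above can be sketched in a few lines: attempt the transfer, back off and retry on failure, and only report a problem after the final attempt. A minimal illustration, not the Alliance FTP Manager implementation:

```python
import time

def transfer_with_retry(transfer, retries=3, delay=5, notify=print):
    """Attempt a transfer up to `retries` times before reporting failure."""
    for attempt in range(1, retries + 1):
        try:
            return transfer()
        except OSError as err:
            if attempt == retries:
                notify(f"transfer failed after {retries} attempts: {err}")
                raise
            time.sleep(delay)  # back off before the next try
```

In a production job, `notify` would send the failure email rather than print it.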

Resource management: We don’t think of FTP as a CPU or disk intensive operation, and that is generally true. But imagine what it might be like to transfer several thousand files a day!  Those small individual file transfers start to add up in terms of resource utilization pretty fast.  Your IBM i Managed FTP solution should allow you to manage job priorities, schedule transfers during off hours of light usage, manage CPU time slice and pool allocations, and many other aspects of resource management.

File system support: As IBM i users we have a lot of data stored in DB2 files and tables. But we also may have a lot of information stored in the Integrated File System (IFS). A Managed FTP solution should support these file systems for both inbound and outbound transfers. Also consider those special file system requirements. Can you manage file transfers in a Windows network shared folder? Or a Linux/Unix NFS mounted volume? Or a remote IBM i server reached through the QFileSvr.400 file system? These can be important features for an IBM i solution.

Commands and APIs: Last but not least, there are always things we can’t do with the ready-to-use features of a Managed FTP solution. We will want access to IBM i commands and APIs to help us handle those special situations. In our Alliance FTP Manager solution we give you access to every single FTP operation directly from your RPG and CL applications. You can perform every aspect of an FTP session under program control, and know whether it succeeded or failed, and why. And of course, command interfaces make it easy to put or get a single file. You might not initially miss the rich set of APIs, but the day will come when you need them!

In this blog I’ve tried to give you a feel for the basic set of features that you should find in a Managed FTP solution. You can learn more about our Alliance FTP Manager solution for the IBM i platform here.

Patrick

Secure Managed File Transfer for IBM i

Topics: Managed File Transfer, Secure Managed File Transfer, FTP Manager for IBM i