
Townsend Security Data Privacy Blog


Securing FieldShield Encryption Keys with Alliance Key Manager

Posted by Paul Taylor on Dec 13, 2019 10:28:00 AM

The article below originally appeared on IRI's blog and is being re-published here to show Townsend Security's blog readers how Alliance Key Manager integrates with IRI FieldShield.

In a previous article, we detailed a method for securing the encryption keys (passphrases) used in IRI FieldShield data masking jobs through the Azure Key Vault. There is now another, even more robust option for encryption key management available, thanks to API-level integration between FieldShield and the Alliance Key Manager (AKM) platform from Townsend Security.

AKM provides the security of authenticated access to FieldShield passphrases from any of five different server options (listed below). These options ensure that only authorized users can access the AKM key server and obtain the keys to decrypt FieldShield-encrypted field data (column values).

But beyond authentication, AKM provides a complete encryption key management solution which includes: key server setup and configuration, key lifecycle administration, secure key storage, key import/export, key access control, server mirroring, and backup/restore. AKM also supports compliance audit logging of all server, key access and configuration functions.

How AKM Works with FieldShield

AKM is leveraged directly in FieldShield data masking jobs through field syntax that specifies the use of AKM. This syntax is “AKM:KeyName”, where “AKM:” invokes the use of Alliance Key Manager, and “KeyName” is the name of a key created in AKM (the example name used here, though it could be anything) from which the needed value will be retrieved.

In a FieldShield decryption job, key retrieval from AKM is performed via a secure TLS connection to the AKM server. Both the client (FieldShield user) and server (AKM) end-points are authenticated via TLS.
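
Before running jobs, it can help to confirm that this mutually authenticated connection works. A minimal check with the standard openssl s_client tool might look like this (the address and certificate file names are taken from the examples later in this article; substitute your own):

# Verify the mutual TLS connection to the AKM key retrieval port
# (address and certificate paths are illustrative)
openssl s_client -connect 192.168.56.20:6000 \
  -cert AKMClientCertificate.pem \
  -key AKMClientPrivateKey.pem \
  -CAfile AKMRootCACertificate.pem

A successful handshake prints the server certificate chain; a failure here usually means the certificates or the CA trust chain are not set up correctly.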

AKM can be deployed in: 1) VMware; 2) a cloud server in Microsoft Azure; 3) a cloud server in Amazon Web Services; 4) a privately managed Hardware Security Module (HSM); or 5) a dedicated cloud HSM.

Setting Up

Prerequisites for using AKM to manage encryption key passphrases in FieldShield are:

  • A compatible Linux OS (a Windows version is planned)
  • A licensed IRI FieldShield installation for Linux under /usr/local/cosort
  • An AKM instance with connectivity to the Linux OS
  • A .conf file configured with the proper details to connect to AKM from the Linux OS
  • The Alliance Key Manager Linux SDK

To run FieldShield, obtain and install license keys from IRI. To run AKM, obtain a license from Townsend Security.

You will need to create a configuration (.conf) file to provide the connection information for AKM. The file includes the locations of certificates, logging options, and AKM connection properties.

The configuration file must be specified correctly, placed in the /usr/local/cosort/etc directory, and named keyclient.conf in order for key retrieval to succeed. Once that is done, AKM will be accessible and will work properly from any of the five deployment methods listed above.
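
For example, assuming the sample file shipped with the SDK is in your current directory (the SDK layout may vary), copying it into place could look like:

# Place the configuration file where FieldShield expects it
# (source path is illustrative)
cp keyclient.conf /usr/local/cosort/etc/keyclient.conf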

You will also need to download the AKM Linux SDK. It contains the packages used to install the Linux libraries for AKM key retrieval used in FieldShield, and a sample keyclient.conf file (shown later).

[Schematic: FieldShield-AKM integration]

The AKM Linux SDK

FieldShield makes use of shared libraries provided by Townsend Security to integrate with AKM. More specifically, FieldShield uses the Linux C SDK, which provides tools for integrating C applications with AKM in Linux.

There are Debian (or RPM, depending on your Linux distribution) packages within the packages directory of the Linux SDK that must be installed on your system for the FieldShield-AKM integration to work. Confirm that the shared object library (.so file) is in the /usr/lib directory (or put it there).
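
The exact package file names vary by SDK version and platform, but as a sketch (the package names here are illustrative), installation might look like this:

# Debian/Ubuntu systems
sudo dpkg -i packages/akmclient_amd64.deb

# RHEL/CentOS systems
sudo rpm -ivh packages/akmclient.x86_64.rpm

# Confirm the shared object library landed in /usr/lib
ls /usr/lib | grep -i akm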

The AKM Linux SDK contains packages for the following Linux platforms:

  • RHEL/CentOS 4, 5, 6, 7
  • SLE 11 SP2, SP3, SP4
  • Ubuntu 12.04, 14.04, 16.04

The Ubuntu 16.04 package in the AKM Linux SDK was tested and confirmed to work on Ubuntu 18.04.

Configuring AKM for FieldShield Use

AKM can be deployed in a variety of ways, including through cloud computing providers and local virtual machines. To set up AKM initially, follow the instructions in the documentation and log in to the administrative menu to initialize AKM and to create and manage certificates for user authentication.


The AKM instance runs a key server on port 6001, key retrieval on port 6000, and a web interface on port 3886. This information must be put into the .conf file so that FieldShield can find the AKM server and retrieve the key at decryption time.

After logging in to AKM, the IP address of the AKM instance can be found by typing ifconfig.


Again, the default port is 6000 for AKM key retrieval. This should be written in the .conf file like this:

[ip]
KeyStoreIpPort=IP:Port

where IP is the IP address of the AKM server, and Port is the port number used for key retrieval. For example:

[ip]
KeyStoreIpPort=192.168.56.20:6000

A complete .conf file could look something like this:

 

; Configuration file for Universal Key Retrieval API
[log]
Syslog=2 ; syslog output enabled
StdErr=2 ; stderr output enabled
[ip]
KeyStoreIpPort=192.168.56.103:6000
ConnectTimeoutSecs=5 ; timeout value in seconds
ConnectTimeoutMSecs=0 ; timeout value in milliseconds
[cert]
VerifyDepth=1 ; certificate verify depth
TrustedCACertDir=/home/devon/Downloads/AKMPrimary_user_20191021/PEM ; CA Signed Cert directory
TrustedCACert=/home/devon/Downloads/AKMPrimary_user_20191021/PEM/AKMRootCACertificate.pem ; CA Signed Cert (root cert)
ClientPrivKey=/home/devon/Downloads/AKMPrimary_user_20191021/PEM/AKMClientPrivateKey.pem ; Client Private key
ClientSignedCert=/home/devon/Downloads/AKMPrimary_user_20191021/PEM/AKMClientCertificate.pem ; Client Signed certificate

AKM Web Interface (webmin)

The AKM Server web interface (or webmin) monitors AKM performance and login or access attempts, and allows access to the AKM file browser. Many settings can be modified through a secure web interface:

Dashboard menu in the AKM ‘webmin’ web interface

From the file manager in the web interface, full file system access to AKM is available. In the /home/admin/downloads directory, all certificates and private keys should be available in zipped folders.

The certificates and private key should be in the .pem format and stored in the pem folder within the zip folder with the name of the user (rather than the admin1 or admin2 folders). The date value is the day of the month that the folder was created during initialization of the AKM server.

There is also the ability to access logs from AKM, set logging options and IP access control for the web interface, start/stop AKM, enable two-factor authentication for the web interface, check running processes in AKM, and more, all from within webmin.

Creating and Using FieldShield Keys

AKM provides options for creating, securing, and managing encryption keys through the AKM Administrative (Admin) Console app for Windows. Consult the AKM Crypto Officer documentation for current information on creating keys through the AKM Admin console app.

FieldShield only supports 256-bit symmetric keys from AKM, known as AES256 keys. This provides the best combination of security and performance.

Other than the key size, select the rest of the options in the AKM Admin Console as desired and click the Submit button to generate an encryption key.

Alternatively, when initializing AKM, a set of encryption keys can be automatically generated. A prompt appears at AKM initialization asking if an initial set of encryption keys should be generated or not.

The encryption keys you create in AKM at initialization, or through the AKM Admin Console application, will serve as passphrase values in FieldShield target /FIELD specifications that encrypt or decrypt values at the field or column level. For example, this statement:

/FIELD=(Encrypted_CCN=enc_aes256_fp_alphanum(CCN, AKM:AES256), TYPE=NUMERIC, POSITION=12, SEPARATOR="|", ODEF=CCAcctNum)

will encrypt the CCAcctNum in the 12th column of the source database table with 256-bit AES alphanumeric format-preserving encryption using the key created inside AKM under the name AES256.

What’s actually happening? FieldShield will use a base64-encoded stream of characters (a key value) retrieved from AKM and associated with that AKM key name. That stream is then used by FieldShield as a new passphrase value.

It’s that new passphrase value that is then used by FieldShield (as it was before AKM) to derive the actual encrypt/decrypt key used at FieldShield runtime. In other words, AKM involves a double derivation.

If you want to use a different AKM key name in another /FIELD statement to differentiate your encrypt/decrypt keys, use the AKM Admin Console to create another key under a different name. Reflect that new name into your FieldShield job script in the appropriate /FIELD statement.

To decrypt in this case, a corresponding decryption statement in a subsequent FieldShield job script would need to specify the dec_aes256_fp_alphanum function with the same passphrase to restore the original CCAcctNum value. This method will work with any FieldShield-included encryption or decryption algorithm.
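
As a sketch, the decryption counterpart of the encryption statement shown earlier might look like this (the field layout is carried over from the example above; adjust names and positions to your own job):

/FIELD=(CCAcctNum=dec_aes256_fp_alphanum(Encrypted_CCN, AKM:AES256), TYPE=NUMERIC, POSITION=12, SEPARATOR="|")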

Example Operation

Here is a look at the FieldShield encrypt (left) and decrypt (right) job scripts used:

[Screenshot: FieldShield encrypt and decrypt job scripts]

Note the syntax for specifying AKM use, which is “AKM:KeyName”. Make sure that the key name is properly spelled. Key names that do not exist on the connected AKM instance will result in a Tcpconnect error. 

FieldShield will attempt to retrieve the key from AKM 5 times, each with a timeout of 5 seconds, as specified in the default .conf file. If the key ultimately cannot be retrieved, the job will not run.

[Screenshots: the example data before and after FieldShield encryption and decryption with the AKM-managed key]

The bottom line: Using AKM to store the passphrases used for decrypting data in FieldShield dramatically enhances encryption key security and industry compliance levels for data masking operations. Through key authentication and secure key management facilities, AKM can help FieldShield users close off more potential gaps in enterprise data security.

Topics: Alliance Key Manager, IRI FieldShield

Why You Should be Continuously Delivering Drupal Updates - All the Time

Posted by Paul Taylor on May 9, 2016 7:35:00 AM

This is a special blog post written for Townsend Security by the Drupal Drop Guard team.


While developing a system to automate Drupal updates and using that technology to fulfill our Drupal support contracts, we ran into many issues and questions about the workflows that integrate the update process into our overall development and deployment cycles. In this blog post, we’ll outline the best practices for handling different update types with different deployment processes – as well as the results thereof.

The general deployment workflow
Most professional Drupal developers work in a dev-stage-live environment. Using feature branches has become a valuable best practice for deploying new features and hotfixes separately from the other features developed in the dev branch. Feature branches foster continuous delivery, although they do require additional infrastructure to test feature branches in separate instances. Let us sum up the development activity in the different branches.

[Diagram: Drop Guard workflow]

Dev
This is where the development of new features happens and where the development team commits their code (or to a derived feature branch). When using feature branches, the dev branch is considered stable; features can be deployed forward separately. Nevertheless, the dev branch is there to test the integration of your locally developed changes with the code contributions of other developers, even if the current code of the dev branch hasn’t passed quality assurance. Before going live, the dev branch is merged into the stage branch to be ready for quality assurance.

Stage
The stage branch is where code that’s about to be released (merged to the master branch and deployed to the live site) is thoroughly tested; it’s where the quality assurance happens. If the stage branch is bug-free, it will be merged into the master branch, which is the code base for the live site. The stage branch is the branch where customer acceptance happens.

Master
The master branch contains the code base that serves the live site. No active changes happen here except hotfixes.

Hotfix branches
Hotfixes are changes applied to different environments without passing through the whole dev-stage-live development cycle. Hotfixes are handled in the same way as feature branches, but with one difference: whereas feature branches start from the HEAD of the dev branch, a hotfix branch starts from the branch of the environment that requires the hotfix. In terms of security, a highly critical security update simply comes too late if it needs to go through the complete development cycle from dev to live. The same applies if there’s a bug on the live server that needs to be fixed immediately. Hotfix branches need to be merged back to the branches from which they were derived and to all preceding branches (e.g. if the hotfix branch was created from the master branch, it needs to be merged back to master to bring all commits to the live site, and then merged back to the stage and dev branches as well, so that all code changes are available to the development team).
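
Expressed as a sketch in Git commands (branch and commit names are illustrative):

# Branch the hotfix from the environment that needs it (here: master)
git checkout -b hotfix/security-update master
# ...apply the fix, then commit...
git commit -am "Apply critical security update"
# Merge back to master to bring the fix to the live site
git checkout master
git merge hotfix/security-update
# Then merge back to stage and dev so every branch carries the change
git checkout stage
git merge hotfix/security-update
git checkout dev
git merge hotfix/security-update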

Where to commit Drupal updates in the development workflow?
To answer this question we need to consider the different types of updates: security updates (classified by their criticality) and non-security updates (bug fixes and new features).

If we group them by priority we can derive the branches to which they need to be committed and also the duration of a deployment cycle. If you work in a continuous delivery environment, where you ship code continuously, the best way is to use feature branches derived from the dev branch.


Low (<=1 month):
- Bug fix updates
- Feature updates

These updates should be committed by the development team and analysed for side effects. It’s still important to process these low-priority updates, because high-priority updates assume all previous code changes from earlier updates; you might miss some important quality assurance if a high-priority update arrives for a module that hasn’t been updated for a long time.

Medium (<5 days):
- Security updates that are not critical and not highly critical

These updates should be applied in due time, as they’re related to the site's security. Since they’re not highly critical, we might decide to commit them on the stage branch and send a notification to the project lead, the quality assurance team or directly to your customer (depending on your SLA). Then, as soon as they’ve confirmed that the site works correctly, these updates will be merged to the master branch and back to stage and dev.

High (<4 hours):
- Critical and highly critical security updates

For critical and highly critical security updates we follow a "security first" strategy, ensuring that all critical security updates are applied immediately and as quickly as possible to keep the site secure. If there are bugs, we’ll fix them later! This strategy instructs us to apply updates directly to the master branch. Once the live site has been updated with the code from the master branch, we merge the updates back to the stage and dev branch. This is how we protected all our sites from Drupalgeddon in less than two hours!

Updates automation options
There are only a few ways to ensure that updates will be applied just in time and when they're really needed, depending on the type of update. Each has positive and negative sides, and it’s up to you to choose what suits you best:

  1. Monitoring for updates manually or via one of the available services or custom scripts, and, once a security update is detected, processing it according to the workflow defined in your organization. This approach works in most cases, but it requires someone to be ready to take action 24/7;
  2. Building a completely custom solution, which will not only detect updates, but also take care of applying them when it’s time. The only obvious drawback of this is that you have to spend a lot of time building and maintaining your custom tool.
  3. Using an update automation service, such as Drop Guard, which will integrate seamlessly into your workflow and process updates in exactly the way you want. You don’t have to worry about being alerted all the time, or about spending too much time building your own solution, but be prepared to spend a few dollars on the 3rd-party solution.

Requirements for automation
If you want to automate your Drupal security updates with the Drop Guard service, all you need is the following:

  • Code deployment with Git
  • Trigger the update of an instance by URL using e.g. Travis CI, Jenkins CI, DeployHQ or other services to manage your deployment, or alternatively execute SSH commands from the Drop Guard server (see the sketch below).
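
For instance, triggering a deployment job through Jenkins' remote build URL can be as simple as this (the server URL, job name and token are hypothetical):

# Trigger a Jenkins deployment job remotely by URL
curl -X POST "https://ci.example.com/job/drupal-update-deploy/build?token=SECRET_TOKEN"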

Also to keep in mind:

  • Know what patches you’ve applied and don't forget to re-apply them during the update process (Drop Guard helps with its automated patch detection feature)
  • Automated tests reduce the time you spend on quality assurance

Conclusion
Where to commit an update depends on its priority and on the speed with which it needs to be deployed to the live site. Update continuously to ensure the ongoing quality and security of your project and to keep it future-proof. Feature and bug fix updates are less critical but also important to apply in due time.

There are many ways of ensuring continuous security for your website, and it’s up to you whether to go with a completely manual process, try to automate some things, or opt for a fully automated solution such as Drop Guard.

Topics: security, Drupal

Mainframe Myth-Busting: File Integrity Monitoring is Only for Windows/UNIX Security Systems

Posted by Paul Taylor on Feb 16, 2016 8:41:00 AM

"This article was originally posted on CorreLog's blog. CorreLog is a high performance correlation, search and log management company and Townsend Security partner."


That’s the thing about myths: they’re only partly true.

Yes, File Integrity Monitoring (FIM) has been part of the distributed computing landscape for a few years now. And yes, real-time enterprise security monitoring is harder to accomplish in a mainframe environment. But as attacks become more sophisticated, FIM needs to be a key component of the entire network, including your mainframe.

There’s a well-known software vendor that has an antivirus “sandbox” used to explode viruses, much like a police bomb squad would do with a suspicious package at a crime scene. After the suspicious package is exploded, the software vendor adds the footprint to its database; the next time that package comes through the network, if it was clean the first time, it gets let through; if not, it’s blocked.

The tricky part of this story is that hackers are now smart enough to detect when they are about to be put into one of these sandboxes. When the A/V program starts to sandbox the suspected virus, the virus goes into a cloaking mode to evade the sandbox. The A/V tool gives the executable a passing grade and there you have it: the virus enters the network.


Chances are you won’t have to worry about mainframe viruses anytime soon (though anything is possible these days). The point of the story here is the sophistication at which hackers are attempting to compromise corporate and government IP. For it is a much faster path to market to steal technology than it is to develop it. The same could be said for nation-state attackers who lack the subject matter expertise to develop their own IP, be it leading-edge technologies or schematics for nuclear reactors.

Having a Security Information & Event Management (SIEM) system with Data Loss Prevention (DLP), supported with antivirus detection and Identity/Access Management Systems (IAMS), gives you a fighting chance. Having a means for File Integrity Monitoring, especially on z/OS where the most strategic global banking and government data resides, further fortifies your security strategy and arms your information security team with another data point to determine the level of risk to your data.

FIM protocols are well established on the distributed side, but if you were to ask a mainframe sysprog (system programmer) if they have some type of FIM protocol in place for z/OS, they would look at you like you were speaking a foreign language. FIM on mainframe, or MFIM, must be addressed, at a minimum, to facilitate the Payment Card Industry Data Security Standard (PCI DSS) requirements 10.5.5; 10.6.1; 11.5; and 12.10.5. The standard and corresponding requirements can be found here, and we should note that this blog is not a review of the requirements. The takeaway in today’s blog is to understand that FIM is important to the Payment Card Industry Security Standards Council (and HIPAA, and FISMA, and IRS Pub. 1075, and others). PCI DSS lists FIM in four different requirements and it does not say “do this just on your Windows and UNIX systems.” The requirement says do this for all systems in your datacenter.

The key to MFIM is to look at the mainframe counterparts to the Microsoft Windows install folder. One of these, SYS1.PARMLIB, or the PARMLIB concatenation, is the most important set of datasets on z/OS, listing system parameter values used by nearly every component of z/OS. You can’t just take a checksum "snapshot" of these files, as a SIEM would do with distributed systems, because they’re simply too big for that to be a practical approach.

The details of tracking mainframe event messages are far too many to get into here. Essentially, you need a way of connecting the mainframe and your distributed SIEM system for notifications in real time, and you need a software tool that will convert mainframe events to a distributed event log format — the RFC 3164 syslog protocol — so your SIEM system can interpret the data as actionable information.
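
For reference, an RFC 3164-style message carries a priority value, timestamp, host name and tag in front of the free-form event text. A converted mainframe event might look something like this (the content is purely illustrative):

<13>Feb 16 08:41:00 MVSPROD CorreLog: SMF TYPE=80 USER=IBMUSER ACCESS VIOLATION ON SYS1.PARMLIB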

You can learn more about MFIM and its relevance to PCI DSS from CorreLog’s complimentary whitepaper titled “InfoSec Myths Debunked: FIM is only for Windows/UNIX.”

Topics: IBM z, File Integrity Monitoring (FIM), Mainframe

Limiting Encryption Key Access on Alliance Key Manager

Posted by Paul Taylor on Oct 18, 2012 10:59:00 AM


I am often asked about how one can restrict access to Alliance Key Manager (AKM), our encryption key manager.  There are a few different options available here in relation to locking down and controlling who has access to which keys.  This often is a concern for bigger organizations that have multiple departments authenticating to the encryption key manager and performing key retrieval operations, but I’ve known smaller companies as well that take advantage of the granularity AKM provides in this area.

One way you can restrict access to Alliance Key Manager is by restricting keys to specific users or groups of users. User and group access can be defined at a system level, or at the level of each key. When you create a key you can define the restrictions on user and group access.

Since all connections to AKM are mutually authenticated over a TLS session, you as a client (key requestor) must present an X.509 digital certificate to AKM that is signed by a trusted Certificate Authority (which needs to be known to the key server).  Within your client certificate are multiple fields of user data collectively known as the Distinguished Name (DN). Further, within the DN you'll define fields with information regarding who you are, what organization you are with, and where you're located. There are two fields in particular that the AKM server will look at to determine your Group or User privileges. These are the Common Name (CN) field and the Organization Unit (OU) field. We look at the common name to determine user access and the organization unit to grant group authority.

Let's look at an example.  There is an AES encryption key on an AKM server used to protect employees' personal data. It is restricted so that only members of the Human Resources group can use that key. So any individual with "Human Resources" defined as their OU can successfully request that key; all others are turned away. This is Group Restricted Access.

To further this example, the director of Human Resources, Sam, needs access to a specific key only he can use. There would then exist an encryption key on AKM that has group and user policy defined as "Sam / Human Resources" and Sam's X509 digital certificate would have the CN of "Sam" and the OU of "Human Resources." This would ensure only he is allowed to access that key. This is strict group and user control of key usage and deters other "Sams" in the company from getting the key, as well as other individuals within the "Human Resources" department.
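
As a sketch, a client certificate request carrying that DN could be generated with OpenSSL as follows (the key size and file names are illustrative, and the resulting request must still be signed by a Certificate Authority trusted by the AKM server):

# Generate a private key and a certificate signing request whose DN
# carries the user (CN) and group (OU) that AKM will evaluate
openssl req -new -newkey rsa:2048 -nodes \
  -keyout sam_key.pem -out sam_req.csr \
  -subj "/C=US/O=Example Corp/OU=Human Resources/CN=Sam"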

There are a few other ways to restrict access. You can specify just specific users who can access keys and ignore the group altogether. This would require defining a user table within AKM and tying specific keys to it. Then any user with the appropriate CN can authenticate and use those keys. The same can be done for groups as mentioned previously or any combination of group or user status as defined by the group or user table laid out on AKM.

And lastly, you can allow anyone with an authenticated X.509 digital certificate who can connect to the key server to successfully request a key. This method ignores the CN and OU altogether and is the least restrictive level of key access. However, it still locks down key control, as only authenticated clients with proper certificates can gain access to encryption keys.

For more information on the importance of encryption key management, download our white paper "Key Management in the Multi-Platform Environment" and learn how to overcome the challenges of deploying encryption key management in business applications.


Topics: Alliance Key Manager, Encryption Key Management

Alliance Key Manager (AKM) at a Glance: 3 Major Components

Posted by Paul Taylor on Oct 3, 2012 9:06:00 AM

The task of deploying encryption key management into your infrastructure to meet security and compliance best practices can be overwhelming at first.  To help give you a 'bird's eye view' of the core components of our Alliance Key Manager (AKM), our encryption key management HSM, I want to break down its three major components.  Having this understanding in your back pocket as you roll out AKM can help smooth out the process.

First up, your security team can utilize our AKM Java GUI console to create and manage AES encryption keys for use in your applications.  This is a program that you install on a Windows machine that communicates directly with the key server via a secure TLS session.  Here, keys can be created, expired, revoked, rolled or even deleted – requirements of PCI DSS and other compliance regulations.  You can also define a key access policy for each key that is created, specifying what groups or individuals can request and use it.  Alternatively, you can also use our Linux command line facility to completely automate encryption key management through scripting calls.

The second component focuses on your application that's doing encryption and requires access to an external key manager.  You’ll need to make some minor coding changes to your application layer to enable it to make API calls to our shared library that handles the key retrieval portion.  To help you succeed here, we offer sample code in a variety of programming languages for your development team to work with.  All of these samples can be found on the AKM product CD.

If you need Extensible Key Management (EKM) for Microsoft SQL Server 2008 Enterprise Edition and above you can take advantage of Transparent Data Encryption (TDE) or Cell Level Encryption.  We see many organizations use TDE and EKM because they can easily implement encryption without changing any of their applications - and can be deployed relatively quickly.

Finally you have the ability to physically manage the key server appliance itself.  By using a web browser directed at the IP address of the appliance on your network you can create system and database backups, define mirrored servers, and enable Syslog to meet PCI-DSS and other compliance requirements.

Download our “Encryption Key Management Simplified” resources kit to find more information on meeting PCI DSS and HIPAA, encryption key management best practices, and more.


Topics: Alliance Key Manager, Encryption Key Management

Oracle, SQL Server, and Encryption Key Management

Posted by Paul Taylor on Aug 22, 2012 10:59:00 AM

I often speak with organizations that need to employ encryption and external key management for multiple relational databases they are using to store encrypted data.  Often this is a combination of Oracle and Microsoft SQL Server databases.   

Transparent Data Encryption (TDE) is used within both the Microsoft SQL Server and Oracle Database universes to provide encryption services at the tablespace level.  Many companies employ TDE and external encryption key management to meet the concept of "Separation of Duties" as required by PCI DSS and other compliance regulations.  Also, TDE is often easier to implement than column level encryption that may require programming changes to your application layer.  

In Microsoft's SQL Server Enterprise edition 2008/2012 you have access to Extensible Key Management (EKM).  When EKM is enabled, SQL Server users can use encryption keys stored on external key managers, as opposed to accessing local key stores, which doesn't line up with compliance requirements.  Also, another benefit of using EKM is that you can easily take advantage of TDE as your database encryption approach.  

If you're running versions of Microsoft SQL server that don’t support EKM, don't worry.  You can still take advantage of the added features and security of using an external key manager with our encryption key management HSM, Alliance Key Manager (AKM).  AKM fully supports the entire Microsoft SQL Server product line.  You’ll just have to make some programming changes to your application code to perform the necessary API calls to the key manager and you'll be set up to do key retrieval.   To help you with the process, we provide sample code and the .NET key retrieval assemblies to add to your project.  Additionally, we have C# and VB.NET sample code that shows how to retrieve a key from the key server.

Much like Microsoft SQL Server, in the land of Oracle you need to be running Oracle Enterprise Edition with the Advanced Security option.  This can often be a pricey upgrade and I find that quite a few organizations would rather do column level encryption due to this fact.  AKM fully supports the path to column level encryption within the Oracle 10g and 11g environments.  Again your approach will include making coding changes to your application layer to perform key retrieval from AKM.  To help you with this on the Oracle front we provide some PL/SQL sample code for you to work from.

For more information on the importance of encryption key management, download our white paper "Key Management in the Multi-Platform Environment" and learn how to overcome the challenges of deploying encryption key management in business applications.


Topics: Oracle, Encryption Key Management, SQL Server

3 Steps to Setting Up An Encryption Key Management HSM

Posted by Paul Taylor on Jul 23, 2012 11:18:00 AM

So you've decided to purchase an encryption key management HSM to help you pass a QSA audit and meet PCI DSS compliance.  Unfortunately just showing the auditor your paid receipt and key manager is not enough to satisfy requirements.  You have to actually be using them in a production environment.  Fortunately this is a fairly simple process to get started with Alliance Key Manager, our encryption key management HSM.  

Once the appliances are assigned IP addresses and reachable on your network, there are three fundamental tasks that you should complete prior to going into production.

First you'll want to set up and configure mirroring to your H/A failover server.  This is as easy as toggling on outgoing mirroring in the AKM.conf file of your primary server.  Next you'll want to have one of your Security Admins log into the Java-based AKM Admin console for the production server and point it towards the failover server that will be receiving all the mirrored commands.  The final step to complete mirroring requires logging into the failover server and defining the incoming mirror details in the AKM.conf file for that appliance.  You'll also want to be aware of any firewalls in your network that could inhibit traffic and add exceptions accordingly.

The second part of deploying an encryption key management appliance involves defining the collection of system logs for audit purposes, meeting section 10 of PCI DSS.  Alliance Key Manager supports transferring system logs via syslog-ng to a log collection server that is running a SIEM solution.  This is configured in the standard syslog manner by defining a log source, destination, and path.
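
A minimal syslog-ng fragment along those lines might look like this (the log file path and collection server address are assumptions; consult the AKM documentation for the actual source definitions):

# Forward audit logs to a remote SIEM collection server
source s_akm { file("/var/log/akm/audit.log"); };
destination d_siem { network("10.0.0.50" port(514) transport("tcp")); };
log { source(s_akm); destination(d_siem); };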

The final, and perhaps most surprisingly overlooked, step of appliance setup is the creation of system backups.  Within Alliance Key Manager you will create two different types of backups from the outset.  The first is a backup of your key encryption keys and configuration settings.  This backup really only needs to be run once, during the setup of the device, as there won't normally be changes to these settings going forward.  The second backup is of the primary key management database, which contains all the data encryption keys used by key retrieval clients.

During the backup process you'll be asked where you want these backups stored and to define a backup destination.  Your choices include a local directory on the key server itself or sending them to an FTP server using SSL or SSH.  We recommend sending your backups to a secure FTP server off the appliance: in the event of a hardware failure, when you can't reach the local backup directory, you'll still have access to these crucial images elsewhere for restore purposes.

To make life easier on your network team we provide a scheduling facility that allows you to automatically create and transmit these backups at any specified time of your choosing.


Tackling these three tasks while setting up your Alliance Key Manager HSM will put you well on your way to passing that QSA audit.  The deployment team at Townsend Security can help you breeze through these steps, as well as provide documentation that covers these items in further detail.

For more information on the importance of encryption key management, download our white paper "Key Management in the Multi-Platform Environment" and learn how to overcome the challenges of deploying encryption key management in business applications.


Topics: Alliance Key Manager, Encryption Key Management

What are HIPAA Encryption Best Practices?

Posted by Paul Taylor on Jul 10, 2012 8:02:00 AM

The Health Insurance Portability and Accountability Act (HIPAA) of 1996 establishes and governs national standards for electronic health care transactions.  According to the website of the U.S. Department of Health and Human Services: 

The HIPAA Privacy Rule provides federal protections for personal health information held by covered entities and gives patients an array of rights with respect to that information.... The Security Rule specifies a series of administrative, physical, and technical safeguards for covered entities to use to assure the confidentiality, integrity, and availability of electronic protected health information.  www.hhs.gov

The protections under HIPAA have been expanded by the Health Information Technology for Economic and Clinical Health Act (HITECH).  Again, according to the Department of Health and Human Services:

HITECH requires healthcare organizations to take more responsibility for protecting  patient records and health information. The Act widens the scope of privacy and security protections available under HIPAA, increases potential legal liability for non-compliance and provides more enforcement of HIPAA rules. The HITECH Act seeks to streamline healthcare and reduce costs through the use of health information technology, including the adoption of electronic health records.

HITECH defines a data breach of protected health information (PHI) as any unauthorized use, access or disclosure of PHI that violates the HIPAA Privacy Rule and poses significant financial, reputational or other harmful risks to an individual.

Should SMBs be concerned about a data breach of PHI?  A recent study found that only 5 percent of data breaches are caused by malicious cyber attacks, while almost 55 percent are linked to human error. 

To determine whether a PHI data breach has occurred, HHS looks at various factors, some within your control, some not.  A key question the Department will ask in the event of a data breach is:  Was the PHI safeguarded by encryption?

What level of HIPAA encryption is recommended?  What are the HIPAA encryption best practices?  The key, as the Practice Management Center of the American Medical Association points out, is to "...render electronic personal health information (ePHI) unusable, unreadable or indecipherable to unauthorized individuals...".  If you follow the specific technologies/methodologies prescribed, you increase the likelihood of being relieved of the potentially burdensome and expensive notification requirements established by the HITECH for a data breach.

Best practices for HIPAA encryption include:

  1. Ensuring your encryption is certified by the National Institute of Standards and Technology (NIST). 
  2. Using an encryption key management appliance that is FIPS 140-2 certified. Federal Information Processing Standards (FIPS) are standards for federal computer systems issued by the National Institute of Standards and Technology (NIST); FIPS 140-2 defines the security requirements for cryptographic modules, including key management appliances. 
  3. Encrypting any and all systems and individual files containing ePHI including medical records (and related personnel records), scanned images, your practice management systems and any emails that contain ePHI.
  4. Encrypting data that is published on the Internet.  
  5. Encrypting data on your computers, including all laptops.
  6. Encrypting data that leaves your premises.
  7. Encrypting all sessions during which your data was accessed remotely.  This last one requires diligent supervision to ensure that it is followed every single time.  It should become a habit, something each staff member with offsite access does as a matter of course. 

HIPAA encryption protects not only the personal health information of employees and patients from unauthorized disclosure and use, it protects SMBs from the potentially significant costs (i.e., financial, administrative and via damage to the organization's reputation) that result from such disclosure. 

View our webcast “Protecting PHI and Managing Risk – HIPAA/HITECH Compliance” to learn how your organization can manage their risk of a data breach and achieve breach notification safe harbor status.


Topics: Encryption, Best Practices, HIPAA

Meeting PCI-DSS Requirements for Encryption Key Management: Part II

Posted by Paul Taylor on Jul 5, 2012 7:49:00 AM

In part one of Meeting PCI-DSS Requirements for Encryption Key Management I discussed Separation of Duties and Dual Control, two critical components necessary for meeting Section 3 of PCI DSS for encryption key management compliance.  Equally important to meeting Section 3 are the notions of Split Knowledge, Audit Trail Logging, and Strong Key Usage and Protection.

Section 3.6.1 of PCI DSS v2.0 states that your encryption solution must generate strong keys as defined by PCI DSS and PA-DSS using "strong cryptography."  On our Alliance Key Manager (AKM) all data encryption keys, key encryption keys, and authentication keys are generated using a NIST approved and certified cryptographically secure random number generator.  This meets NIST requirements for strong encryption key creation and establishment.  Furthermore, regarding the key encryption keys and authentication keys that protect your data encryption key database: the key encryption keys are themselves protected by a 2048-bit RSA key.  Since AKM is FIPS 140-2 certified and meets NIST requirements, you're covered on how keys are stored in a protected format, detection of key corruption, and the separation of data encryption keys from your key encryption keys.

Split Knowledge can also play a crucial role in protecting your data encryption keys.  Parts of the security standard state that you shouldn’t export a key in the clear from the AKM database and that the key needs to be protected.  For this to occur, your Admin would first have to connect to the key server over a secure TLS connection with the proper credentials and authenticate to the server.  Once the connection is established, the admin is free to export or import symmetric keys; however, upon export they will be required to protect the symmetric key with an RSA key. No manual establishment of keys in the clear is supported. By default this is out-of-the-box functionality; we ensure this requirement by setting a configuration option for PCI-DSS mode.

Finally, there is the important item of collecting your system logs and transmitting them over your network to a waiting log collection server.  This waiting log server would ideally be running a SIEM product that monitors and analyzes log messages looking for malicious activity or critical errors. Specifically, AKM writes logs to four different log files: audit, error, backup, and trace (when enabled).  The key manager comes with syslog-ng built in and ready to be configured.  You simply select your sources and define the destination of the collection server to begin transmission of your log files.  You can configure your SIEM product to send out alerts when certain events or errors occur that you want to be on the lookout for.

Want to learn more?  You can view a pre-recorded webinar titled "Encryption Key Management Simplified" and learn how encryption key management can be easy, why encryption key management is important, and what the barriers are to good encryption key management.


Topics: Compliance, Split Knowledge, PCI DSS, Encryption Key Management

Meeting PCI-DSS Requirements for Encryption Key Management: Part 1

Posted by Paul Taylor on Jun 27, 2012 12:53:00 PM


There are a few major components of PCI-DSS that need to be addressed when implementing an external key manager into your data encryption equation.  Separation of duties, for starters, simply states that those who have access to the sensitive data, such as card holder details or credit card numbers, cannot also have access to the encryption keys that protect them.  Conversely, the same can be said for the individuals that are responsible for managing data encryption keys -- they should not have access to the sensitive data that the keys they create are used to protect.  Quite simply, separation of duties is the concept of dividing critical data protection processes between different individuals. This helps reduce the opportunity and likelihood of fraud when processing sensitive data.

I often talk with companies who've until recently considered encryption key management an afterthought to their security infrastructure.  Often they would store encryption keys on USB sticks or locally, alongside the encrypted data.  This approach allows individuals within the organization access to both the keys and the data, directly conflicting with separation of duties.  Utilizing an external encryption key manager to house your encryption keys, and implementing a policy where your security team are the only ones managing those keys while your DBAs and users are the only individuals accessing the data, will help move you in the direction of PCI compliance.

But of course there are other pieces to PCI that one should be aware of when it comes to proper encryption key management.  While separation of duties is good practice, there is an additional level of security that can be implemented on the encryption key management side called dual control.  Dual control is a process that requires the involvement of two or more individuals to complete a specified task, such as creating a key, changing its attributes, revoking its status, or removing an encryption key from use forever.  Think of dual control as requiring two individuals with two different keys to unlock the launch codes for a nuclear missile.  You certainly wouldn’t want all that responsibility resting on the shoulders of just one person with no oversight in place.  The same can be said for the management of your encryption keys.

To implement dual control on Alliance Key Manager (AKM), our encryption key management HSM, you'd first activate it in the AKM configuration file of the hardware appliance.  Then the two Security Admins responsible for key management would install our Java-based admin console into their work environments and configure them to communicate with the key manager over a secure TLS connection.  Once this is established, the first Security Admin would authenticate to the key server and set an 'Authorized Administrator' time period.  This allows the first Admin to specify a window of time (in minutes) during which the other Admin can log onto the key manager and perform their duties.  Taking this approach to key creation and management adds an additional layer of security to your encryption key environment.

In Part II of 'Meeting PCI-DSS Requirements for Key Management'  I will discuss the importance of capturing your audit logs and transporting them to a collection server off the key manager device as well as dig into the concept of split knowledge and how AKM meets that requirement. Until then, download our white paper on encryption key management requirements for PCI.


Topics: Compliance, Separation of Duties, PCI DSS, Encryption Key Management, Dual Control
