
Townsend Security Data Privacy Blog

Microsoft SQL Server Automatic Encryption - Cell Level Encryption

Posted by Patrick Townsend on Feb 21, 2017 9:11:00 AM

In this third part of the series on Microsoft SQL Server encryption we look at Cell Level Encryption, or CLE, which is Microsoft's terminology for column level encryption. With CLE, the manner and timing of SQL Server's calls to the EKM Provider software are quite different from Transparent Data Encryption. It is important to understand these differences in order to know when to use CLE and when to use TDE. Let's look at some aspects of the CLE implementation:

Encrypted Columns
Cell Level Encryption is implemented at the column level in a SQL Server table. Only the columns you specify for encryption are protected with strong encryption. You can specify more than one column for CLE in your tables, but take care to avoid the performance impacts of encrypting multiple columns (see below).

With Cell Level Encryption you may be able to minimize some of the encryption performance impacts on your SQL Server database. Because the EKM Provider is only called when the column must be encrypted or decrypted, you can reduce the encryption overhead with careful implementation of your database application code. If a SQL query does not reference an encrypted column, the EKM Provider will not be invoked to perform decryption. As an example, if you place the column Credit_Card under CLE encryption control, this query will not invoke the EKM Provider for decryption because the credit card number is not returned in the query result:

SELECT Customer_Number, Customer_Name, Customer_Address FROM Orders ORDER BY Customer_Name;

You can see that judicious use of SQL queries may reduce the need to encrypt and decrypt column data.
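
By contrast, a query that returns the protected column does invoke the EKM Provider. Here is a hypothetical counterpart to the query above; it assumes that a CLE symmetric key has already been opened for the session and that Credit_Card is a varbinary column holding EncryptByKey output:

-- Returning Credit_Card forces a DecryptByKey call for every row in the result.
SELECT Customer_Number, Customer_Name,
       CONVERT(varchar(19), DecryptByKey(Credit_Card)) AS Credit_Card_Clear
FROM Orders
ORDER BY Customer_Name;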

SQL Application Changes
Unlike Transparent Data Encryption, Cell Level Encryption requires changes to your SQL statements. The SQL Server functions EncryptByKey and DecryptByKey are used in SQL statements to protect and recover column data. Here is an example of a SQL statement that encrypts a value with a CLE key:

SELECT EncryptByKey(Key_GUID('my_key'), 'Hello World');

Implementing CLE encryption in your SQL Server database requires modifications to your applications, but may be well worth the additional work.
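
As a sketch of what those application changes look like in practice, the following round trip stores a card number encrypted with a CLE key and reads it back. The table, column, and key names are illustrative only, and the example assumes the symmetric key my_key has been opened for the session:

-- Store the ciphertext produced by EncryptByKey in a varbinary column.
INSERT INTO Orders (Customer_Number, Credit_Card)
VALUES (1001, EncryptByKey(Key_GUID('my_key'), '4111111111111111'));

-- Read it back; DecryptByKey uses the key that produced the ciphertext.
SELECT Customer_Number,
       CONVERT(varchar(19), DecryptByKey(Credit_Card)) AS Credit_Card_Clear
FROM Orders
WHERE Customer_Number = 1001;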

Encryption and Key Retrieval
The EKM Provider software is called for each column value to perform encryption and decryption. This means a larger number of calls to the EKM Provider compared to Transparent Data Encryption. Because the number of calls to the EKM Provider may be quite large it is important that the encryption and key management functions of the EKM Provider are highly optimized for performance (see the next section).

The EKM Provider software from your key management vendor is responsible for performing encryption of the data. From a compliance point of view it is important to understand the encryption algorithm used to protect data. Be sure that the EKM Provider software uses a standard like the Advanced Encryption Standard (AES) or other industry recognized standard for encryption. It is common to use 128-bit or 256-bit AES for protecting data at rest. Avoid EKM Providers which implement non-standard encryption algorithms.

Encryption Key Caching
When deploying CLE it is important that the EKM Provider software optimize both encryption and key management. The number of calls to the EKM Provider software can be quite high. Good EKM Providers will securely cache the symmetric key in the SQL Server context rather than retrieve the key on each call. Retrieving an encryption key from a key server takes precious time, and multiple calls to retrieve a key can have severe performance impacts. Secure key caching is important for CLE performance. The Microsoft Windows Data Protection Application Programming Interface (DPAPI) is commonly used to protect cached keys.

Performance Considerations
When properly implemented Cell Level Encryption can reduce the performance impact of encryption on your SQL Server database. For very large tables with a small number of columns under encryption control, the performance savings can be substantial. This is especially true if the column is used less frequently in your applications.

CLE Vendor Note
Note that each vendor of EKM Provider software implements encryption and key management differently. Some EKM Providers only implement Transparent Data Encryption (TDE). If you suspect you will need Cell Level Encryption be sure that your key management support includes this capability.

In the next part of this series we will look at encryption key management in SQL Server.

Patrick


Topics: SQL Server, Cell Level Encryption, SQL Server encryption

Microsoft SQL Server Automatic Encryption - Transparent Data Encryption

Posted by Patrick Townsend on Feb 14, 2017 8:33:00 AM

In this second part of the series on Microsoft SQL Server encryption I want to focus on Transparent Data Encryption, or TDE. Most Microsoft customers who implement encryption in SQL Server use Transparent Data Encryption as it is the easiest to deploy. No code changes are required and enabling encryption requires just a few commands from the SQL Server console. Let's look at some of the characteristics of the TDE implementation.

Database Encryption
TDE involves the encryption of the entire database space in SQL Server. There is no need, and no ability, to select which tables or views are encrypted; all tables and views in a database are encrypted at rest (on disk). When data is read from disk (or any non-volatile storage) SQL Server decrypts the entire block, making the data visible to the database engine. When data is inserted or updated SQL Server encrypts the entire block written to disk.

With TDE all of the data in your database is encrypted. This means that non-sensitive data is encrypted as well as sensitive data. There are advantages and disadvantages to this approach - you expend computing resources to encrypt data that may not be sensitive, but you also avoid mistakes in identifying sensitive data. By encrypting everything at rest you are also protected from expansion of regulatory rules about sensitive data protection.


Protection of the Symmetric Key
When you enable Transparent Data Encryption on your SQL Server database the database generates a symmetric encryption key and protects it using the EKM Provider software from your key management vendor. The EKM Provider software sends the symmetric key to the key server where it is encrypted with an asymmetric key. The encrypted database key is then stored locally on disk in the SQL Server context.

When you start a SQL Server instance the SQL Server database calls the EKM Provider software to decrypt the database symmetric key so that it can be used for encryption and decryption operations. The decrypted database key is stored in protected memory space and used by the database. The encrypted version of the database key remains on disk. In the event the system terminates abnormally, the only version of the database key is the encrypted version on disk.
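
For reference, here is a minimal sketch of the statements that establish this key hierarchy once the EKM Provider is registered. The key and database names are hypothetical, and your key management vendor's documentation will differ in the details (a credential for the key server is typically required as well):

-- Assumes an asymmetric key my_tde_kek has already been created in master
-- from the EKM provider.
USE MyDatabase;
CREATE DATABASE ENCRYPTION KEY
    WITH ALGORITHM = AES_256
    ENCRYPTION BY SERVER ASYMMETRIC KEY my_tde_kek;

-- Turn on TDE; SQL Server encrypts the database in the background.
ALTER DATABASE MyDatabase SET ENCRYPTION ON;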

Starting the SQL Server Instance
During normal operation of SQL Server there is no invocation of the EKM Provider software and therefore no communication with an external key manager. Every normal restart of the SQL Server database instance will cause the EKM Provider software to be called to unlock the database key on the key server.

It should be noted that it is the responsibility of the EKM Provider software to handle network or key server failure conditions. SQL Server itself has no visibility on the connection to an encryption key management solution. If the EKM Provider software is unable to retrieve an encryption key, the SQL Server start request will fail. We will discuss business continuity issues in more detail later in this series.

Protecting Database Logs
SQL Server logs may contain sensitive data and therefore must also be encrypted. Transparent Data Encryption addresses this by fully encrypting database logs along with the database itself. It is important to remember that encryption of the logs will only start after TDE is activated AND after you stop and restart the database log. If you neglect to restart logging, sensitive data may be exposed in the SQL Server log files.

Table and Index Scanning
Certain SQL operations on indexes require that the SQL Server database have visibility on the entire index of a column. An example of a SELECT statement would be something like this:

SELECT Customer_Name, Customer_Address FROM Orders WHERE Credit_Card='4111111111111111';

To satisfy this SQL query the database must inspect every row in the table Orders. With TDE this means that the column Credit_Card must be decrypted in every row. Similar operations with the ORDER BY clause can cause table or index scans.

Performance Considerations
Transparent Data Encryption is highly optimized for encryption and decryption tasks and will perform well for the majority of database implementations. Microsoft estimates the performance impact of TDE at 2% to 4%, and we find this accurate for most of our customers. However, Microsoft SQL Server customers with very large SQL Server databases should use caution when implementing TDE. Be sure that you fully understand the impact of TDE on your application's use of large tables. It is always recommended that you perform a proof-of-concept project on very large databases to fully assess the performance impact of encryption.

In the next part of this series we will look at the other option for SQL Server encryption - Cell Level Encryption, also called column level encryption.

Patrick


Topics: SQL Server, Transparent Data Encryption (TDE)

Microsoft SQL Server Encryption - An Introduction

Posted by Patrick Townsend on Feb 6, 2017 2:33:16 PM

In 2008 the Payment Card Industry Data Security Standard (PCI-DSS) was gaining serious traction and Microsoft released SQL Server 2008 with built-in support for encryption. This was no coincidence. In addition to the PCI standard which mandated encryption of credit card numbers, numerous states in the US had also adopted data breach notification laws with strong recommendations for encryption. The compliance environment was changing dramatically and the SQL Server group at Microsoft provided a path to meet those new compliance regulations. This was a prescient and crucially important enhancement for Microsoft customers - the security threats have increased over time and compliance regulations have become more stringent.

In this multi-part series I want to talk about how Microsoft implemented encryption in SQL Server, how you can leverage this capability to achieve better security and compliance, and the critical issues involved in getting encryption right with SQL Server. I hope you will find this series helpful as you negotiate your SQL Server encryption projects.

Architecture
Many Microsoft applications and services implement a "Provider" interface. This is the term that Microsoft uses to describe a standardized, pluggable architecture through which third party software companies can integrate with and extend the capabilities of Microsoft solutions. With Provider architectures Microsoft enables a method for third parties to register their software with the Microsoft application, and the Microsoft application will then call that software as needed. The third party software must obey rules about the data interface and behavior of their applications. Done correctly, a Provider interface is a powerful way to extend Microsoft applications.

Starting with SQL Server 2008 the database implements a Provider interface for encryption and key management. This is named the “Extensible Key Management” Provider interface, or the “EKM Provider”. EKM Provider software performs encryption and key management tasks as an extension to the SQL Server database. The EKM Provider architecture opened the door for third party key management vendors to extend encryption to include proper encryption key management.

From a high level point of view the EKM architecture looks like this:

[Figure: SQL Server EKM architecture]

Every version of SQL Server since 2008 has fully implemented the EKM Provider architecture. This has provided a stable and predictable interface for Microsoft customers and encryption key management vendors.

EKM Architecture - Column and Database Encryption
The EKM Provider architecture supports two different methods of database encryption:

  • Cell Level Encryption
  • Transparent Data Encryption

Cell level encryption is also known as column level encryption. As its name implies, it encrypts data in a column in a table. When a new row is inserted into a table, or when a column in a row is updated, the SQL Server database calls the EKM Provider software to perform encryption. When a column is retrieved from the database through a SQL SELECT or other statement the EKM Provider software is called to perform decryption. The EKM Provider software is responsible for both encryption and key management activity. Implementing cell level encryption requires minor changes to the SQL column definition, as well as to the SQL statements that access the data.
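
To make that concrete, here is an illustrative table definition; the names are hypothetical, and the point is that the ciphertext produced by EncryptByKey is stored in a varbinary column sized for the encrypted value:

CREATE TABLE Orders (
    Customer_Number int NOT NULL,
    Customer_Name   varchar(100),
    Credit_Card     varbinary(160)  -- holds EncryptByKey output, not plain text
);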

Transparent Data Encryption, or TDE, provides encryption for the entire database and associated log files. All tables and views in the database are fully encrypted. Data is encrypted and decrypted as information is inserted, updated, and retrieved by users and applications. As its name implies, transparent data encryption requires no changes to applications, SQL definitions, or queries. The database works seamlessly after encryption is enabled.

Transparent Data Encryption is the easier of the two encryption methods to implement. In a later part of this series I will discuss when it makes sense to use TDE and when Cell Level Encryption is a better choice.

Activating the EKM Provider
After installing the EKM Provider software from a third party, the SQL Server database administrator uses the SQL Server management console to activate the EKM Provider and place the database or columns under encryption control. The activation of the EKM Provider software causes the database to be immediately encrypted and all further data operations on the database will invoke the EKM Provider software.
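
As a sketch, activation typically starts by enabling EKM at the instance level and then registering the vendor's provider library; the provider name and file path below are hypothetical:

-- Enable EKM support (an advanced option, available in Enterprise editions).
sp_configure 'show advanced options', 1;
RECONFIGURE;
sp_configure 'EKM provider enabled', 1;
RECONFIGURE;

-- Register the vendor's EKM provider DLL with SQL Server.
CREATE CRYPTOGRAPHIC PROVIDER MyEkmProvider
    FROM FILE = 'C:\Program Files\MyVendor\EkmProvider.dll';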

Microsoft EKM Provider for Locally Stored Encryption Keys
Recognizing that some SQL Server customers wanted to encrypt data but did not have the resources or time to implement a key management solution, Microsoft provided a built-in EKM Provider that performs encryption but stores the encryption keys locally in the SQL Server context. Understanding that this is not a security best practice, Microsoft recommends that customers use a proper encryption key management solution that separates the encryption keys from the SQL Server database. That is good advice: locally stored encryption keys can be recovered by cyber criminals, and external key management systems provide better security and compliance.

EKM Provider Software
EKM Provider software is usually provided by your encryption key management vendor. This means that the features and functions of the EKM Provider software can vary a great deal from one vendor to another. Be sure that you fully understand the architecture and capabilities of the EKM Provider before you deploy SQL Server encryption.

SQL Server Versions That Support EKM
EKM Provider support is available in all Enterprise editions of SQL Server including Data Warehouse and Business Intelligence editions. EKM provider support is not available in Standard, Web, or Express editions of SQL Server.

In the parts of this series that follow I will go into more detail on the EKM Provider interface, transparent data encryption, cell level encryption, business continuity, compliance, and other topics.

Patrick


Topics: Encryption, SQL Server

Three Core Concepts from "Zero Trust" to Implement Today

Posted by Ken Mafli on Feb 1, 2017 12:57:58 PM

 

“There are only two types of data that exist in your organization: data that someone wants to steal and everything else.”

Forrester Research

In 2013, Forrester released an outline of their proprietary “Zero Trust Model” of information security to the National Institute of Standards and Technology (NIST). Their model seeks to change “the way that organizations think about cybersecurity” and execute on higher levels of data security, all while “allowing for free interactions internally.”

But when looking to improve your organization's data security posture, it is good to start with what has changed. In the report, Forrester concluded that the old network security model was that of “an M&M, with a hard crunchy outside and a soft chewy center”: the idea of a hardened perimeter around the traditional, trusted datacenter. This old model is fraught with vulnerabilities because it is not equipped to handle new attack vectors introduced by IoT, workforce mobility, and data centers moving to the cloud. It is increasingly outmoded and weak.

In its place must come a data security model that takes into account the current network landscape and its vulnerabilities. Enter Zero Trust. It builds upon the notion of network segmentation and offers key updates, all under the banner: "never trust, always verify."

Below are the three main concepts of Zero Trust. Follow along as we break down the trusted/untrusted network model and build a new trust model in its place.

 

Assume All Traffic is a Threat

The first rule of “never trust, always verify” is that all traffic within the network should be considered a potential threat until you have verified “that the traffic is authorized … and secured.” Let’s look at these two components:

  • Authorized Traffic: Each end user should present valid (and up-to-date) login credentials (i.e., username and password) and should authenticate with multi-factor authentication for each session on the network. Usernames and passwords are not enough. Only multi-factor authentication can reduce the risk of a hacker obtaining and misusing stolen login credentials.
  • Secured Traffic: All communication, coming from inside or outside of the network, should be encrypted. It should always be assumed that someone is listening in. Using SSH or TLS and keeping abreast of their potential vulnerabilities is the only way to reduce the risk of exposure.

 

Give Minimal Privileges

The only way to minimize the risk of employees, contractors, or external bad actors misusing data is to limit each user or role to the least privileges possible. With this, it is a foregone conclusion that all sensitive data is already encrypted and that the ability to decrypt it is granted sparingly. We implement a minimal privileges policy so that “by default we help eliminate the human temptation for people to access restricted resources,” and so that a hacker who obtains a user's login credentials does not thereby gain access to the entire network.

The role-based access control (RBAC) model, first formalized by David Ferraiolo and Richard Kuhn in 1992 and then updated under a more unified approach by Ravi Sandhu, David Ferraiolo, and Richard Kuhn in 2000, is the standard today. Its ability to restrict system access to authorized roles and users makes it the ideal candidate for implementing this leg of Zero Trust. While Zero Trust does not explicitly endorse RBAC, it is the best game in town as of today. For a deeper dive, visit NIST's PDF of the model. A sketch of what this can look like in a database follows.
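
As an illustration only, using the SQL Server syntax seen earlier in this blog with hypothetical names, a role can be granted just the data a job function needs while access to sensitive columns is withheld:

-- Least privilege: the role can read order data but is denied the card column.
CREATE ROLE order_reader;
GRANT SELECT ON dbo.Orders TO order_reader;
DENY SELECT (Credit_Card) ON dbo.Orders TO order_reader;
ALTER ROLE order_reader ADD MEMBER some_user;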

 

Verify People are Doing the Right Thing

Once we have authenticated each user and restricted them to the least amount of data possible to adequately do their job, the last thing to do is “verify that they are doing the right thing” through logging and inspection.

Here is a short (and certainly not exhaustive) list of techniques used to inspect all events happening in your network.  

  • Real Time Event Collection: The first step is to collect and log all events, in real time.
  • Event Correlation: Next, analyze all of the events and narrow in on those that need greater scrutiny.
  • Anomaly Detection: In a related move, identify the events that do not conform to the expected pattern and investigate further.
  • Resolution Management: Every event that does not meet the expected pattern should be investigated and either classified as benign or deemed a possible threat and escalated for further investigation.

Note: There are many tools available that accomplish these tasks. Please refer to Gartner's Security Information and Event Management (SIEM) Magic Quadrant to find the tools that may interest you.

 

Final Thoughts

It's not a question of if, but when, a data breach will happen. Hackers grow more sophisticated in their attacks and threaten everything from intellectual property to financial information to your customers' Personally Identifiable Information (PII). The old model of the high, guarded perimeter around the trusted, internal network no longer functions as a secure model. Zero Trust offers a more comprehensive approach to today's data security needs. As you look to deploy this model, begin to seek out tools that will help you. Here is a short list of some of the tools to consider:

  • Log Collection Tools: Some platforms, like the IBM i, have proprietary formats that are difficult for SIEMs to read. Make sure your SIEM can fully collect all needed logs. If it cannot, find or build a tool that will properly capture and send the logs on to your SIEM.
  • SIEM Tools: As mentioned earlier in the article, there are many good SIEM tools out there to help you collect, analyze, and monitor all events on your network.
  • Encryption (data-in-flight): Fortunately, there are many open source protocols for secure communications like SSH and TLS.
  • Encryption (data-at-rest): Advanced Encryption Standard (AES) encryption is ubiquitous in most platforms' native encryption libraries. There are also a number of products that offer column-level to folder/file-level encryption.
  • Centralized Key Management: The encryption you deploy is only as good as the level of protection you give to the encryption keys. Therefore, robust encryption key management is a must.
  • User Access Management: Managing privileges, credentials, and multi-factor authentication can be a daunting task. The more you can automate this, the better.

In many cases, adopting this approach will not be about bolting a few products onto your existing data security framework but about completely renovating it. Don't let expediency force you to defend your data with only half measures. Take a deep dive into Zero Trust's approach and see where you may be vulnerable.

 


Topics: Data Security

The Future of Active Security Monitoring on the IBM i

Posted by Luke Probasco on Jan 24, 2017 8:19:21 AM

Active monitoring is one of the most effective security controls an enterprise can deploy. In fact, a large majority of security breaches occur on systems that have been compromised days, weeks, or even months before sensitive data is lost. A recent Verizon Data Breach Investigations Report indicates that a full 84 percent of all breaches were detected in system logs. By actively collecting security logs in real time, organizations can not only monitor security events, but also prevent a data breach before it starts. I recently sat down with Patrick Townsend to discuss log collection and active monitoring on the IBM i.

Hi Patrick, can you give our readers an overview on the importance of collecting and monitoring security logs on the IBM i?

One of the most effective things that you can do to prevent a data breach is to deploy an active monitoring solution, sometimes also known as system logging. You'll find active monitoring at the top of every cyber-security list of things to do because it is effective. Active monitoring is key to a strong security posture, for anybody.

Today, we all know that there is no longer a true perimeter and that our systems are at risk. Luckily, active monitoring can help. Here are some key principles that organizations need to understand. First, an active monitoring solution needs to involve a log collection server or SIEM solution (IBM Security QRadar, Splunk, LogRhythm, etc.) to collect security events across the entire enterprise and actively detect threats. Second, there needs to be real-time collection and monitoring of security events. Rather than scooping up the security events once or twice a day, it is imperative to collect these events in real time. When you collect logs across the entire enterprise, a SIEM can provide a lot of intelligence to identify patterns and anomalies that signal a potential attack. The final critical components are good reporting, query, and forensics tools. SIEM solutions also give you the ability to quickly run reports and analyze suspect data. This is important for two reasons. If you are having an attack you need to identify quickly where the attack is originating and how it is happening. This is essential in order to know how to remediate it. And if you aren't able to pinpoint the problem, it is very likely that you will be attacked by the same methods again.

Switching gears, the serious points for an IBM i customer revolve around the fact that the IBM i is a critical back-office processor for most customers and runs multiple applications.  Too often the IBM i is an island within an organization, but it is important that it is fully integrated in your enterprise’s entire infrastructure security strategy.

Also, it is generally true that a cyber-attack almost never starts on an IBM i server.  They typically start on a compromised user PC or someplace in the organization.  From there, a hacker spends a fair amount of time probing around the IBM i finding any weak points.  We shouldn’t be naïve – hackers know about IBM i servers.  They know what to look for, they know the user IDs, they know how to compromise these systems – they are very good at it.

IBM introduced some new security event sources in V7R3.  Can you talk a bit about those? And what events should an IBM i customer be collecting?

Every release of the IBM i server has had new security events and fields to collect and monitor. At Townsend Security we work very hard to stay ahead of these releases so that our customers are well positioned to handle new information and use it for protection. A couple of examples include IPv6 address support and new fields in existing events. Regarding the recent V7R3 release, new sources include:

  • QAUDLVL (Auditing level) system value
  • *NETSECURE (to audit secure network connections)
  • *NETTELSVR (to audit Telnet connections)
  • *NETUDP (to audit UDP connections)

To address the second part of your question, when you deploy an active monitoring solution on the IBM i, you are certainly going to want to collect events from QAUDJRN, QHST, QSYSOPR, as well as exit points.  Interestingly, the QAUDJRN security audit journal does not exist when you first install a new IBM i server. You must create the journal receivers and the journal to start the process of security event collection.

Aside from the new log sources that IBM introduced in V7R3, for someone who maybe deployed a logging solution a few years ago, what should they be aware of now?

First, let’s take a look at how compliance regulations have been evolving.  We now know that most attacks work on the basis of privilege escalation.  For example, an attacker gets access to our systems and then eventually gets sufficient authority to steal data. Because of this, we are seeing that it is more important to identify when an administrative level or highly privileged user logs in to our system.  This is an example of how a logging solution needs to evolve to meet current compliance requirements. Businesses are now required to log and monitor that activity.

Unfortunately, this can be particularly hard on the IBM i.  On first look, an IBM i account may appear to have normal user privileges, but may in fact inherit higher privileges through a Group Profile or Supplemental Group Profile. It is important to detect these elevated privileges in real time and provide the security administrator with an easy-to-use report to identify the source of elevated privileges. This is an excellent example of how logging solutions need to evolve with the ways security events are monitored.  We recently tackled this in the latest release of our Alliance LogAgent.

Where do you see the future of logging on the IBM i?

Let me dust off my crystal ball!  First off, File Integrity Monitoring (FIM) will become more important.  To maintain a strong posture, security administrators need to know who is accessing sensitive data and system values on the IBM i.  We’re also going to see more requirements around File Integrity Monitoring across the regulatory compliance environments.  Why?  Because, as we discussed earlier, cyber-attackers escalate privileges, access sensitive data, and change security configurations in order to get the work done that they want to do.  Again, this is why we are seeing increased requirements in regulations like the Payment Card Industry Data Security Standard (PCI DSS) and new financial services regulations.

Another interesting prediction:  It won’t be unheard of for organizations to use multiple SIEM solutions. We are starting to see businesses use one SIEM for traditional security monitoring and another to monitor operational data.  Operational data, you ask?  Sure.  Logging solutions can easily allow administrators to answer operational questions like: How full are my disks?  Do I have any critical hardware errors?  Second, they can benefit from deploying a SIEM to monitor application data.  Sales teams, for example, can track inventory status, trending products, etc.  The benefits of file monitoring don’t have to be exclusive to security.

In the near future, we will also see a pickup of integration with Artificial Intelligence (AI), also commonly referred to as cognitive computing.  IBM has the Watson platform, and there are others, which I believe will be used to enhance security.  We are already seeing initial efforts in this respect.  Harnessing that AI capability with security makes total sense.  

Finally, as we are seeing, everything not bolted down is going to the cloud.  We will definitely see an evolution of new cloud services around security and logging.  It may take a little time for vendors to start leveraging that, but I believe it is definitely in the works.

To hear this interview in its entirety, download our podcast "The Future of Security Logging on the IBM i" and hear Patrick Townsend, founder and CEO of Townsend Security, further discuss log collection and monitoring on the IBM i, new log sources in V7R3, and the future of security logging on the IBM i.


Topics: System Logging, Alliance LogAgent

Fixing the TDE Key Management Problem in Microsoft SQL Server

Posted by Patrick Townsend on Jan 10, 2017 7:31:56 AM

Many Microsoft SQL Server users have taken the first step to protect sensitive data such as Personally Identifiable Information (PII), Protected Health Information (PHI), Primary Account Numbers (PAN) and Non-Public Information (NPI) by encrypting their databases with Transparent Data Encryption (TDE). It is extremely easy to implement TDE encryption as it does not require program changes.

A common cause of audit failures might not be so obvious: the failure to properly protect the SQL Server key encryption key once you activate encryption in SQL Server. With Transparent Data Encryption you have the choice of storing the service master key within the SQL Server context itself, or protecting the master key with a key management system using the SQL Server Extensible Key Management (EKM) interface. Why is it important to do this?

It turns out that it is easy for cyber criminals to recover the SQL Server master key when it is stored within SQL Server itself. (Examples: https://blog.netspi.com/decrypting-mssql-credential-passwords/ and http://simonmcauliffe.com/technology/tde/#hardware)

Simon McAuliffe provides the clearest explanation I've seen of the insecurity of locally stored TDE keys in SQL Server. I don't agree with him on the question of whether a key manager improves security: given that there is no perfect security, I believe that you can get significant security advantages through a properly implemented key management interface.
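
If you want to check how your own databases are protected today, recent versions of SQL Server report the encryptor for each database encryption key through a standard dynamic management view; CERTIFICATE generally indicates locally stored protection, while ASYMMETRIC KEY indicates EKM protection:

SELECT db_name(database_id) AS database_name,
       encryptor_type
FROM sys.dm_database_encryption_keys;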

If your TDE keys are stored locally, don’t panic. It turns out to be very easy to migrate to a key management solution. Assuming you’ve installed our SQL Server EKM Provider called Key Connection on your SQL Server instance, here are the steps to migrate your Service Master Key to key management protection using our Alliance Key Manager solution. You don’t even need to bring down SQL server to do this (from the Alliance Key Manager Key Connection manual):

Protecting an existing TDE key with Alliance Key Manager

First create a new asymmetric key pair within the AKM Administrative Console using the “Create EKM Key” and the “Enable Key for EKM” commands.

Then return to SQL Server and call the following command to create the asymmetric key alias for the new KEK that you created on the AKM server:

use master;

create asymmetric key my_new_kek from provider KeyConnection with provider_key_name = 'NEW_TDE_KEK', creation_disposition = open_existing;

In this example, NEW_TDE_KEK is the name of the new key on AKM, and my_new_kek is the key alias.

Then use the ALTER DATABASE statement to re-encrypt the DEK with the new KEK alias assigned in the previous statement:

ALTER DATABASE ENCRYPTION KEY
ENCRYPTION BY SERVER ASYMMETRIC KEY my_new_kek;

Note that you do not have to take the database offline to perform this action.
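
You can confirm the database is still encrypted after the re-encryption with the standard dynamic management view shown below; an encryption_state of 3 means the database is encrypted:

SELECT db_name(database_id) AS database_name,
       encryption_state,
       key_algorithm,
       key_length
FROM sys.dm_database_encryption_keys;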

Of course, there are other steps that you should take to secure your environment, but I wanted to demonstrate how easy it is to make the change.

The SQL Server DBA and the network administrator will have lots of other considerations in relation to SQL Server encryption. This includes support for clustering and high availability, automatic failover to secondary key servers, adequate support for separation of duties (SOD) and compliance, and the security of the credentials needed to validate SQL Server to the key manager. All of these concerns need to be addressed in a key management deployment.

For SQL Server users who deploy within a VMware or cloud infrastructure (AWS, Azure), Alliance Key Manager can run natively in your environment, too. It does not require a hardware security module (HSM) to achieve good key management with SQL Server. You have lots of choices in how you deploy your key management solution.

It turns out not to be difficult at all to address your SQL Server encryption key insecurities!

Patrick


Topics: SQL Server, Transparent Data Encryption (TDE)

OpenSSH on the IBM i and Your Security

Posted by Patrick Townsend on Jan 3, 2017 7:45:48 AM

Lately I've seen some criticism of the OpenSSH implementation on the IBM i platform which seems to imply that a third-party implementation of the Secure Shell (SSH) file transfer application is better than the IBM no-charge licensed OpenSSH implementation. I disagree with that opinion and think there are good security and implementation reasons to stick with the IBM OpenSSH implementation.

Here are some reasons why I like OpenSSH:

OpenSSH is supported by a global open source community

Tatu Ylönen founded SSH Communications in 1995 and produced the first versions of an open source SSH implementation. Since 1999 the OpenSSH application has been maintained by the OpenBSD Project, which is funded by the OpenBSD Foundation and managed by Theo de Raadt. OpenSSH is available on a wide variety of operating systems including the IBM i, where it is deployed as a no-charge licensed product and maintained by IBM. OpenSSH continues to be actively developed, and new encryption algorithms have been added recently.

OpenSSH is widely used by large and small organizations

By some estimates the OpenSSH implementation of the SSH protocol and applications commands a 97 percent market share for SSH implementations. This means that OpenSSH is in wide use by large and small organizations to securely manage their eCommerce needs. This also means that OpenSSH receives a lot of scrutiny by compliance and security experts. Widely deployed solutions tend to get more scrutiny from security experts, and this is true for OpenSSH.

OpenSSH is secure

No application is immune to security challenges. However, OpenBSD, and the OpenSSH application in particular, have a stellar record for security. With security products, deep expertise and commitment matter. OpenSSH started with security as a leading goal of its developers, and it shows. Over the last few years there have been fewer than a dozen security issues; most were unlikely to be exploited, and all were patched rapidly through updates by IBM. The OpenBSD set of applications that includes OpenSSH has a great record on security. If you think the IBM i platform has a good security record, take a look at OpenSSH.

IBM provides technical support for OpenSSH

We have all developed a deep appreciation for IBM's commitment to security over the years. It is one of the great values of the IBM i platform. As new vulnerabilities are discovered you need a reliable and timely source of patches and enhancements, and IBM has stood behind this critical application. Security notifications are managed by IBM so that you know when you need to do an update. By making OpenSSH a no-charge licensed program, IBM gives IBM i customers patches through the normal PTF update process. Do you know any third-party IBM i vendor with an equal commitment to notification, maintenance and patching? IBM has earned our trust through this process.

OpenSSH is PCI compliant

PCI Qualified Security Assessors (QSAs) like Coalfire, TrustWave and others recognize that a properly patched implementation of OpenSSH meets PCI Data Security Standard (PCI DSS) requirements, and IBM also tracks OpenSSH for PCI compliance. This again reflects IBM's and OpenBSD's commitment to security. If you are using a third-party IBM i solution for SSH, how well is it tracked by the PCI audit community?

SSH is a complex protocol

Bruce Schneier said “Complexity is the enemy of security.” SSH is a complex protocol and this means that extra care needs to be taken in its development, deployment and maintenance. No third-party SSH solution rises to the level of care taken by the OpenSSH community and by IBM. Almost every business depends on secure file transfer for daily business operations. Deploying the most secure SSH solution is a critical security step.

OpenSSH does not use OpenSSL or Java JSSE

We’ve read a lot over the last few months about security issues in OpenSSL and Java. Many IBM i customers are confused about the relationship between OpenSSH and OpenSSL. In fact, OpenSSH does not use the OpenSSL library for communications. This means that OpenSSH was not subject to the HeartBleed and other OpenSSL vulnerabilities. We are all also now painfully aware of the security issues in Java. Most browsers no longer allow Java plugins for this reason. Third-party SSH products may or may not use OpenSSL or Java for communications. If you are running a third-party IBM i SSH solution, do you know if it uses OpenSSL or Java?

Third-party SSH solutions provide no significant advantage over OpenSSH

OpenSSH is a secure, reliable, and resilient implementation of SSH for secure data transfer that is backed by IBM and a worldwide community of users and developers. Our Alliance FTP Manager solution fully integrates with the IBM i OpenSSH application for secure, automated and managed file transfer. Our solution automates the OpenSSH transfer of hundreds of thousands of file transfers every day without compromising security.

My opinion? You probably don’t need more IT risk in your life. Stick with OpenSSH for your security needs. You will be in good company.

Patrick


Topics: IBM i, Secure Managed File Transfer

New York Department of Financial Services (NYDFS) and Encryption - 8 Things to Do Now

Posted by Patrick Townsend on Dec 12, 2016 10:27:38 AM

The New York Department of Financial Services (NYDFS) surprised the financial services industry by fast-tracking new cybersecurity regulations in September of 2016. Due to go into effect in January of 2017 with a one-year transition period, the regulation takes a very prescriptive approach to cybersecurity, including a mandate to encrypt data at rest. The financial sector is broadly defined to include banks, insurance companies, consumer lenders, money transmitters, and others. The law is formally known as 23 NYCRR 500 and you can get it here.

There isn’t much wiggle room on the requirement for encrypting sensitive data. You can use compensating controls if you can show that encryption is “infeasible”. But I am not sure how you would show that. All modern database systems used by financial applications support encryption. It would be hard to imagine a financial database where encryption would not be feasible. Don’t plan on that being an excuse to delay encrypting data at rest!

The time frame is short for implementing the encryption mandate. One year seems like a long time, but it is extremely aggressive given the development backlog I see in most banks.

Here are some things you should start doing right now:

1) Inventory All of Your Financial Systems

This seems like a no-brainer, but you might be surprised how many organizations have no formal inventory of their IT systems that contain financial data. This is a top-of-the-list item in any set of cybersecurity recommendations, so making or updating this inventory will have a lot of benefits.

2) Document Storage of All Sensitive Information (Non-Public Information, or NPI)

For each system in your inventory (see above) document every database and storage mechanism that stores NPI. For database systems identify all tables and columns that contain NPI. You will need this documentation to meet the NYDFS requirements, and it is a roadmap to meeting the encryption requirements.

3) Prioritize Your Encryption Projects

You won’t be able to do everything at once. Following all modern cybersecurity recommendations, prioritize the systems and applications that should be addressed using a risk model. Here are a few factors that can help you prioritize:

  • Sensitivity of data
  • Amount of data at risk
  • Exposure risk of the systems and data
  • Compliance risk
  • Operational impact of loss

It is OK to be practical about how you prioritize systems, but avoid assigning a high priority to a system just because it is the easiest to address. It is better to tackle the biggest risks first.

4) Establish Encryption Standards

Be careful which encryption algorithms you use to protect sensitive data. In the event of a loss you won’t want to be using home-grown or non-standard encryption. Protect data at rest with NIST compliant, 256-bit AES encryption. This will give you the most defensible encryption strategy and is readily available in all major operating systems such as Windows, Linux, and IBM enterprise systems.

5) Establish Key Management Standards

Protecting encryption keys is the most important part of your encryption strategy and the one area where many organizations fail. Encryption keys should be stored away from the encrypted financial data in a security device specifically designed for this task. There are a number of commercial key management systems to choose from. Be sure your system is FIPS 140-2 compliant and implements the industry standard Key Management Interoperability Protocol (KMIP).

Hint: Don’t fall into the project-killing trap of trying to find a key management system that can meet every key management need you have in the organization. The industry just isn’t there yet. Pick a small number of key management vendors with best-of-breed solutions.

With encryption standards well defined and an encryption key management strategy in hand you are ready to get started with your encryption projects.

6) Analyze Performance and Operational Impacts

Encryption will naturally involve some performance and operational impacts. Encryption is a CPU intensive task, so plan on doing some performance analysis of your application in real-world scenarios. If you don’t have test environments that support this analysis, get started now to create them. They will be invaluable as you move forward. Modern encryption is highly optimized, and you can implement encryption without degrading the user experience. Just be prepared to do this analysis before you go live.

There are also operational impacts when you start encrypting data. Your backups may take a bit more storage and take longer to execute, so be sure to analyze this as a part of your proof-of-concept. Encrypted data does not compress as well as unencrypted data, and this is the main cause of operational slow-downs. For most organizations this will not be a major impact, but be sure to test it before you deploy encryption.

7) Get Started

Oddly (to me at least) many organizations just fail to start their encryption projects even when they have done the initial planning. A lack of commitment by senior management, lack of IT resources, competing business objectives, and other barriers can delay a project. Don’t let your organization fall into this trap. Do your first project, get it into production, and analyze the project to determine how to do it better as you move forward.

Fortunately we have a lot of resources available to us today that were not available 10 years ago. Good encryption solutions are available and affordable for traditional on-premise environments, for VMware infrastructure, and for cloud applications.

You can meet the NYDFS requirements and timelines if you start now. But don’t put this one off.

Patrick

 

Resources:

New York Department of Financial Services:

http://www.dfs.ny.gov/legal/regulations/proposed/propdfs.htm

 

Harvard Law School analysis of NYDFS:

https://corpgov.law.harvard.edu/2016/09/24/nydfs-proposed-cybersecurity-regulation-for-financial-services-companies/


 

Topics: Compliance, Encryption

IBM i Security Architecture & Active Monitoring

Posted by Patrick Townsend on Dec 6, 2016 7:30:42 AM

Excerpt from the eBook "IBM i Security: Event Logging & Active Monitoring - A Step By Step Guide."


Active monitoring (sometimes referred to as continuous monitoring) is a critical security control for all organizations, and one of the most effective security controls you can deploy. The large majority of security breaches occur on systems that have been compromised days, weeks, or even months before sensitive data is lost. A recent Verizon Data Breach Investigations Report indicates that a full 84 percent of all breaches were detected in system logs. This is why the Center for Internet Security includes active monitoring as a Critical Security Control (Control number 6).

There are several elements of a truly effective Active Monitoring strategy:

Central Collection and Repository of All Events
Attackers almost never start with your core IBM i server directly. They attack a web application or infect a user PC and work their way into the IBM i server. By the time they show up on your IBM i server they have probably compromised a number of intermediate systems, and an opportunity to prevent the breach has been missed. A defensible active monitoring strategy has to collect events from across the entire organization. Collect all events across your entire IT infrastructure to gain the best early detection opportunities.

Real Time Event Collection
Data breaches are happening much faster than in the past. In some cases the loss of data happens just minutes after the initial breach. This means that you must collect security events in real time. Good active monitoring solutions are able to digest threat information in real time and give you the chance to deter an attack. Avoid batch event collection; you can collect IBM i security audit journal information in real time, and you should.

Event Correlation
Event correlation is key to an effective active monitoring solution. This is typically accomplished through the use of special software implemented in Security Information and Event Management (SIEM) solutions. Highly automated SIEM solutions have the ability to correlate events across a large number of systems and automatically identify potential problems. They do exactly what we want computer systems to do: handle large amounts of data and apply intelligent interpretation of the data.

Anomaly Detection
Anomaly detection is another aspect of active monitoring. That unusual system login at 3:00am on a Sunday morning would probably escape the attention of our human IT team members, but good active monitoring solutions can see that anomalous event and report on it.

Alerting and Resolution Management
When a problem is discovered we need to know about it as soon as possible. A good active monitoring solution will inform us through a variety of alerting channels. Emails, text, dashboards and other mechanisms can be deployed to bring attention to the problem - and we need to be able to track the resolution of the event! We are all processing too much information and it is easy to forget or misplace a problem. 

Forensic Tools
Forensic tools are critical to an active monitoring solution as they enable the rapid analysis of an attacker’s footprints in our system. The key tool is an effective and easy-to-use query application. Log data can include millions of events and be impossible to inspect without a good query tool. The ability to save queries and use them at a later time should also be a core feature of your forensic toolset.



Topics: System Logging, IBM i

Townsend Security Announces Major Update to Alliance LogAgent for IBM i

Posted by Luke Probasco on Nov 29, 2016 12:01:00 AM

New features include full reporting of administrative users, including authority the user adopts through Group Profiles and Supplemental Group Profiles.

Townsend Security today announced a significant update to its existing Alliance LogAgent for IBM i (AS/400, iSeries) solution, allowing full reporting of administrative users, which includes authority the user adopts through Group Profiles and Supplemental Group Profiles. Alliance LogAgent is a Security Information and Event Management (SIEM) integration solution that collects, formats, and transmits security information in real-time to any SIEM or log collection server.

When the new configuration options are enabled, Alliance LogAgent will tag all significant security events performed by administrative-level users. This enhancement will help security administrators easily identify which users have elevated privileges and enable SIEM solutions to quickly identify and alert on their activity. In addition to the new administrative user reporting, Alliance LogAgent now provides an easy-to-use local assessment report that identifies privileged users. This will reduce the overhead of inspecting and adjusting the privileges of IBM i users.

Alliance LogAgent is compatible with all SIEM solutions that accept Syslog messages, IBM QRadar Log Event Extended Format (LEEF), or the HP ArcSight Common Event Format (CEF). The new administrative field reporting will make it easy for SIEM administrators to create dashboards, compliance reports, and alerts based on the reported fields. When administrator privileges are detected, Alliance LogAgent adds the following field to the security message:

            admin_user=yes

For IBM QRadar the new field is:

            adminUser=yes

By providing a normalized field in the security events sent to the SIEM monitoring platform, the SIEM’s query and forensic tools can be used more effectively.

“Many IBM i customers have struggled with identifying who on their system has elevated privileges. It is crucial to identify and strictly control these users as cyber criminals often use privilege escalation to enable the exfiltration of sensitive data,” said Patrick Townsend, CEO of Townsend Security. “On first look an IBM i account may appear to have normal user privileges, but may in fact inherit higher privileges through a Group Profile or Supplemental Group Profile. Alliance LogAgent now detects these elevated privileges in real time, and provides the security administrator with an easy-to-use report to identify the source of elevated privileges. We think this is a crucial enhancement that will help IBM i customers better secure their platforms.”

Alliance LogAgent is in use with a wide variety of SIEM solutions including LogRhythm, SecureWorks, NTT Solutionary, IBM QRadar, Alert Logic, AlienVault, McAfee SIEM, Splunk, SolarWinds, and many others. In addition to collecting the IBM i security audit journal information Alliance LogAgent collects system history messages, operator messages, exit point information, system statistics, and a variety of open source application logs in Unix/Linux format.

The solution is licensed on a per logical partition (LPAR) basis, with perpetual and subscription licensing options available. Existing Alliance LogAgent customers on a current maintenance contract can upgrade to the new version at no charge.


Topics: Alliance LogAgent, Press Release

