Townsend Security Data Privacy Blog

Trying to Outfox the Other - A Brief Look at Cryptography and Cryptanalysis

Posted by Ken Mafli on Mar 31, 2017 10:35:55 AM

A few months ago I wrote a definitive guide to Cryptographic Key Management. In it I included a section, "A Brief History - the Need for Encryption Key Management." I wanted to expand upon the Classical Era of cryptography a bit, because the story of data security goes back millennia, and the twists and turns of this story can be felt even today.

Introduction

There has been a competition playing out through the centuries, from the highest corridors of power down to the shadiest back alleys. It is a struggle between those with a secret and those who want to uncover it. It is the story of cryptography and cryptanalysis.

As with every competition, each side is constantly trying to outfox the other. Peter Baofu described the competition this way: it is “the never ending cycle of replacing old broken designs” of cryptography and “new cryptanalytic techniques invented to crack the improved schemes.” In fact, “in order to create secure cryptography, you have to design against [all] possible cryptanalysis.” This means that both sides are locked in a never-ending arms race.

In his book, “The Future of Post-Human Mass Media,” Peter Baofu describes two main types of cryptanalysis: Classical and Modern Cryptanalysis. Let’s take a look at the Classical Period to see how this cat and mouse game has played out through the centuries:

The Classical Cat-and-Mouse Game

Classical Cryptography

One of the earliest forms of “secret writing” is the Substitution Cipher, where each letter of the message is systematically replaced by another set of predetermined letters. Its most famous form is the Caesar Cipher, used by Julius Caesar himself (1st century B.C.E.):

“each letter in the plaintext is 'shifted' a certain number of places down the alphabet. For example, with a shift of 1, A would be replaced by B, B would become C, and so on.”
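
To make the mechanics concrete, here is a minimal Python sketch of the Caesar shift; with a shift of 1 it produces the very ciphertext used in the frequency analysis example later in this post:

def caesar(text: str, shift: int) -> str:
    # Shift each letter a fixed number of places down the alphabet.
    out = []
    for ch in text:
        if ch.isalpha():
            base = ord('a') if ch.islower() else ord('A')
            out.append(chr((ord(ch) - base + shift) % 26 + base))
        else:
            out.append(ch)                       # leave spaces and punctuation alone
    return "".join(out)

print(caesar("meet me at the theater", 1))       # -> nffu nf bu uif uifbufs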

Another technique was Steganography, which literally means “covered writing”: the art of concealing a message in plain sight. Mehdi Khosrowpour recounts one of the first recorded instances (in the 5th century B.C.E.):

“Demaratus, a Greek who lived in Persia, smuggled a secret message to Sparta under the cover of wax.” It “was to warn Sparta that Xerxes, the King of Persia, was planning an invasion ... by using his great naval fleet. He knew it would be very difficult to send the message to Sparta without it being intercepted. Hence, he came up with the idea of using a wax tablet to hide the secret message. In order to hide the secret message, he removed all the wax from the tablet, leaving only the wood underneath. He then wrote the secret message into the wood and recovered the tablet with the wax.”

Classical Cryptanalytic Response

While steganography is only hard to crack if the hidden message goes undiscovered, substitution ciphers were meant to remain secret even if the message fell into enemy hands. The substitution cipher remained a fairly reliable means of securing messages, so long as the cipher itself was not revealed.

All that changed with the first recorded technique of cryptanalysis: Frequency Analysis. This technique “can be traced back to the 9th-century [C.E.], when the Arabian polymath Abu Yusef Yaqub ibn Ishaq Al-Kindi (also known as ‘Alkindus’ in Europe), proposed in A Manuscript on Deciphering Cryptographic Messages.” It comes from the observation that certain letters appear more often than others in a given language (the letter “E,” for example, occurs most often in English). There are also common letter pairings (like “TH” in English).

So, in the case of the Caesar Cipher, where the plaintext message is:

meet me at the theater

If each letter is shifted one place down the alphabet, it becomes:

nffu nf bu uif uifbufs

Frequency analysis would note that the most common letter in the ciphertext is “f” (which would suggest it is an “e”) and the only letter pairing is “ui” (which would suggest the “u” is “t” and the “i” is “h”). If we replace these portions of the ciphertext, we reveal:

_eet _e _t the the_te_

With these two facts of frequency analysis alone, we have more than half the message deciphered. With a few logical leaps we could decipher the remaining five letters. The simple substitution cipher was rendered useless.
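
The counting behind this attack is only a few lines of Python, using the ciphertext from the example above:

from collections import Counter

ciphertext = "nffu nf bu uif uifbufs"
letters = [c for c in ciphertext if c.isalpha()]

print(Counter(letters).most_common(2))           # [('f', 6), ('u', 5)] -> 'f' is likely 'e'

# Count adjacent letter pairs within each word to spot common digraphs like 'th'.
pairs = Counter(word[i:i + 2] for word in ciphertext.split()
                for i in range(len(word) - 1))
print(pairs["ui"])                               # 'ui' recurs -> likely 'th'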

The Classical Cryptography Counterattack

[Image: a polyalphabetic substitution cipher table, courtesy of the Library of Congress]

Over the centuries other ciphers were introduced like the Polyalphabetic Substitution Cipher where a repeating, offset key is used to encrypt the plaintext (see picture, courtesy of the Library of Congress). First perfected by Johannes Trithemius in 1518 (although other variants existed beforehand), the person encoding the message would switch alphabets for each letter of the message.

So, “meet me” would now become “lcbp gy,” a ciphertext that simple frequency analysis could not break, since the letter and pairing statistics of the underlying language are no longer recognizable.
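
Here is a minimal Python sketch of the idea: a repeating key selects a different shift for each letter. The key below is illustrative; the “lcbp gy” example above was produced with a different, unstated key:

def polyalphabetic(plaintext: str, key: str) -> str:
    # Each successive letter is enciphered with the next alphabet in the key cycle.
    out, i = [], 0
    for ch in plaintext.lower():
        if ch.isalpha():
            shift = ord(key[i % len(key)]) - ord('a')
            out.append(chr((ord(ch) - ord('a') + shift) % 26 + ord('a')))
            i += 1
        else:
            out.append(ch)
    return "".join(out)

print(polyalphabetic("meet me", "cipher"))       # -> 'omta qv'; the three e's encrypt differently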

Although this type of cryptography was eventually broken by the likes of Charles Babbage using modular arithmetic, the existence of his cryptanalytic techniques remained a military secret for some years.

Final Thoughts

Fascinatingly, it was the use of math to break a cipher that led to our current arms race in data security. The use of math and algorithms to break cryptography means you need longer keys to encrypt the data and prevent a brute force attack; which, in turn, means you need faster computers to break the encryption; which, in turn, means you need longer keys; etc.

Unlike today, however, it took centuries to break a cipher back then. Now, it is just decades. From the Hebern Electric Super Code Cipher Machine in the 1920s, to the Enigma Machine of the 1930s and 40s, to the Data Encryption Standard (DES) of the 1970s and 80s, each seemed invincible until enhanced cryptanalytic techniques or greater computing power toppled it. Our current cryptography is reliable and secure, but quantum computers loom on the near horizon, and quantum algorithms could break our current public key cryptography and render it insecure.

And so the arms race continues. Fortunately, NIST has already forecast this threat and called for replacements to our current standards, well before it becomes a crisis.


Topics: Encryption

Case Study: Citizens Security Life Insurance

Posted by Luke Probasco on Mar 13, 2017 10:54:24 AM

Compliance Made Easy - Protecting Private Information with Alliance AES/400 Encryption for IBM i and Alliance Key Manager for VMware


“Townsend Security was extremely easy to work with - from the sales process to deploying our proof of concept to post-sales support.”

- Adam Bell, Senior Director of IT

 
Citizens Security Life Insurance

Citizens Security Life Insurance Company is a life and health insurance carrier. The company offers group benefits including dental and vision coverage, and individual ancillary insurance products. The company was founded in 1965 and is headquartered in Louisville, Kentucky.

The Challenge: Protect ePHI & PII on the IBM i

In order to meet growing partner requirements and pass a data security audit for protecting electronic Protected Health Information (ePHI) and Personally Identifiable Information (PII), Citizens Security Life Insurance (CSLI) needed to deploy an encryption solution on the IBM i. The solution needed to be easy to implement with excellent performance.

While FIELDPROC on the IBM i makes it very easy to encrypt data without application changes, CSLI also understood that for encrypted data to truly be secure, they would need to store and manage encryption keys with an external key manager.

By using a VMware-based encryption key manager, the company could meet encryption and key management best practices for separating encryption keys from the data they protect.

The Solutions

Alliance AES/400 Encryption

“The performance we are seeing with Alliance AES/400 encryption is excellent,” said Adam Bell, Senior Director of IT, Citizens Security Life Insurance. “The solution was easy to integrate and completely met our expectations.”

Alliance AES/400 FIELDPROC encryption is NIST-compliant and optimized for performance. The solution is up to 100x faster than equivalent IBM APIs on the IBM i platform.

With Alliance AES/400, businesses can encrypt and decrypt fields that store data such as credit card numbers, social security numbers, account numbers, ePHI, and other PII instantly without application changes.

Alliance Key Manager for VMware

“Alliance Key Manager for VMware was very easy to implement, and the resources Townsend Security provided made deployment a smooth process,” continued Bell. By deploying Alliance Key Manager for VMware, CSLI was able to meet their business needs with a solution that could not only deploy quickly, but was also easy to set up and configure.

Alliance Key Manager for VMware leverages the same FIPS 140-2 compliant technology found in Townsend Security’s hardware security module (HSM) and in use by over 3,000 customers. The solution brings a proven and mature encryption key management solution to VMware environments, with a lower total cost of ownership. Additionally, the key manager has been validated to meet PCI DSS in VMware environments.

Integration with the IBM i Platform

An encryption strategy is only as good as the key management strategy, and it can be difficult to get key management right. For companies doing encryption, the most common cause of an audit failure is an improper implementation of key management. The seamless integration between Alliance AES/400 and the external Alliance Key Manager for VMware allowed CSLI to pass their data security audit with flying colors.

“The relationship we developed with Townsend Security enabled us to have a painless sales and support process, and in turn, enabled us to easily pass our data security audit,” finished Bell.


 

Topics: Alliance Key Manager, Alliance AES/400, Case Study

A Brief History of KMIP

Posted by Ken Mafli on Mar 6, 2017 1:31:39 PM

Key Management Interoperability Protocol (KMIP) is quickly becoming the industry standard for ensuring your product or software can communicate seamlessly with cryptographic key managers.  In fact, a study by the Ponemon Institute in 2013 reported on the state of encryption trends and found that “more than half of those surveyed said that the KMIP standard was important in cloud encryption compared with 42% last year.”  This is surprising since KMIP v1.0 had been ratified only three years earlier, on October 1st, 2010!

How Did it All Start?

The first meeting held to start discussing the new set of standards was on April 24th, 2009 in San Francisco, in conjunction with the RSA convention that year.  In attendance were representatives from RSA, HP, IBM, Thales, Brocade, and NetApp. Their initial scope was to “develop specifications for the interoperability of key management services with key management clients. The specifications will address anticipated customer requirements for key lifecycle management.”

But why was KMIP necessary to begin with?  The short answer: more and more organizations were deploying encryption in multiple environments.  But with encryption comes the need to properly manage the encryption keys. With encryption increasing across multiple enterprise applications it became harder to easily manage the keys from the different enterprise cryptographic applications.  Better standards were needed to create uniform interfaces for the centralized encryption key manager.

Companies soon saw the benefits of adopting KMIP.  Both large and small organizations need their key management to work every time and to scale as the organization grows.  And while other work was done to address this issue, like OASIS EKMI, IEEE P1619.3, and IETF KeyProv, KMIP was designed to have a broader scope than its predecessors and to give more comprehensive standards to the industry.


How Was KMIP Initially Received?

In 2010, KMIP debuted at RSA.  HP, IBM, and others demonstrated that their client programs using the KMIP version 1.0 protocol could “communicate securely with key management servers. The clients and servers [demonstrated] essential use cases such as generating cryptographic keys, locating existing keys, and retrieving, registering, and deleting keys.”

In 2011 at the RSA Conference major players like IBM, RSA, and HP demonstrated KMIP 1.0 compatibility with their client programs.  And again in 2012 and in 2013 even more companies like Thales, NetApp, and Townsend Security demonstrated KMIP compliance.  With all these prominent players becoming KMIP compatible, it was a major signal to the industry that KMIP was rapidly becoming the industry standard for interoperable communications for key managers.

How is KMIP Thought of Now?

Fast forward to 2014.  The Storage Networking Industry Association (SNIA) announced a testing program for KMIP conformance for its members.  In their words, “By introducing the KMIP Test Program for the industry, we’re helping to encourage not only the adoption of enterprise–class key management, but a means for vendors to test for conformance and provide an assurance of interoperability and a layer of trust to their customers.”

At OASIS’ Interoperability Showcase at RSA 2016, 16 companies, including Townsend Security, demonstrated KMIP compatibility.  And with the likes of VMware, Oracle, Quantum, and many others demonstrating KMIP compatibility, KMIP has become a dominant standard in key management interoperability.

Final Thoughts

Encryption is your last, best defense for data at rest.  But encryption is only as good as your key management.  If the key is exposed to hackers, the data is lost as well.  This is why key management standards like KMIP have already attracted considerable interest, and will continue to do so.  The ability to have a variety of vendor applications, platforms, and databases all able to communicate with a centralized key manager enhances the data security posture of the enterprise.  And this is what organizations should strive to achieve.

OASIS built the standard to address a broader scope of issues than what older industry standards addressed. But KMIP is still actively being matured by OASIS (we are on version 1.3), and we should expect to see further enhancements and revisions to the standard, as well as broader industry adoption.  This should give us confidence that KMIP, as a well-accepted, road-tested standard, will continue to grow in industry popularity in years to come.


Topics: Encryption Key Management

Hillary's email data breach taught us all the wrong lessons

Posted by Ken Mafli on Feb 28, 2017 9:11:00 AM

In an unprecedented October surprise, Wikileaks dumped thousands of emails onto the internet from the Democratic National Committee (DNC), most of them concerning Hillary Clinton’s presidential campaign.  Later, in defending this move, Wikileaks founder Julian Assange, in an interview with FOX News, “said a 14-year-old could have hacked into the emails of Hillary Clinton's campaign chairman,” reported the Daily Mail.  Assange later revealed in the interview that the password of John Podesta, Hillary’s campaign chairman, was 'password.'  PolitiFact has gone on to challenge that assertion, saying that “Podesta was using a Gmail account, and Google doesn’t allow users to make their passwords ‘password.’”

Whatever John Podesta’s password was, it has sparked a good deal of renewed interest in good password management.  And far be it from me to downplay this crucial bit of data security.  We still have a long way to go.  In fact, SplashData just completed their survey of over 5 million people’s passwords and found that over 10% of people still use easily guessable passwords like:

  • password
  • 123456
  • qwerty
  • passw0rd
  • Password1
  • zaq1zaq1

If you use any of these, stop it. Now.

But if that is all that we learn from the hack and subsequent data breach, we have missed the lesson.  As far back as June of 2016, it was widely reported, by the likes of Brian Krebs and Jeremy Kirk, that the DNC was vulnerable to attacks due to systemic weaknesses in cybersecurity.  In fact, in Jeremy Kirk’s article, it was noted that a press assistant emailed everyone a new password after a recent breach (a strong password at that: 'HHQTevgHQ@z&8b6').  The irony is that some of the email accounts had already been compromised.  The hackers needed only to open the email and use the new password.

Strong passwords alone are not enough to rebuff the efforts of hackers to gain entry, nor to render the data useless in case of a breach.  We need proven security measures in order to keep the data safe.

The data security measures below reflect specific things you can do to secure your data-at-rest in general. While there are more specific measures you can take for email servers, it is important to remember that organizations have sensitive data everywhere, not just in emails.  That being said, since even seemingly benign emails at the DNC can blow up into political controversy, the DNC probably needs to follow these along with more email-specific recommendations.  Follow along to find some of the best methods your organization should be using today to strengthen its data security posture.

Multi-Factor Authentication

As we have already mentioned, usernames and passwords, by themselves, are not enough to authenticate users.  Truly strong passwords are hard to manage and remember.  And once a system is compromised, login credentials can be scraped with keyloggers, malware, or other such attacks.

You need an external verification process.  You need multi-factor authentication (MFA). MFA has traditionally relied on verifying you in two of three ways:

  • Something that you know (i.e.: username, password, challenge questions/responses, one-time-use code, etc.)
  • Something that you have (i.e.: token, RFID cards or key fobs,  mobile phones, etc.)
  • Something that you are (biometrics)

Each of these methods has its advantages and drawbacks. For example:

  • Challenge Questions:
    • PRO: do not require any physical equipment on the user side
    • CON: do rely on the user’s memory, which can be fuzzy when it comes to precisely writing the correct response
    • CON: are vulnerable to deduction through inspection of social media accounts, etc.
    • CON: are “something you know” and so fall into the same category as login credentials, thereby not taking advantage of any other kind of authentication
  • Physical Equipment: (like RFID cards and tokens)
    • PRO: do not rely on a person’s memory
    • CON: can be stolen or lost
    • CON: require active device management from an administrator

One method of authentication that is gaining ground because of its ease of use is the time-based one-time password (TOTP), standardized as RFC 6238 by the Initiative for Open Authentication (OATH — not to be confused with OAuth, an open standard for authorization).  It does not rely on physical fobs (which can be lost) or an SMS text (which can be intercepted).  It relies, instead, on cryptographic code that generates time-specific one-time-use codes based on the user’s secret key and the current time. Since the code is computed simultaneously (and separately) on the user’s device (typically a mobile phone) and on an internal server, with no need for an internet connection, it greatly reduces both downtime caused by internet issues and the risk of hackers intercepting the one-time-use code.
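
As a rough illustration of how these codes work, here is a minimal RFC 6238 (TOTP) sketch in Python; the Base32 secret is a placeholder, and real deployments should use a vetted library:

import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    # HMAC the current 30-second time window with the shared secret (RFC 6238).
    key = base64.b32decode(secret_b32, casefold=True)
    counter = struct.pack(">Q", int(time.time()) // interval)
    digest = hmac.new(key, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))   # device and server compute the same code independently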

Encryption

Strong Advanced Encryption Standard (AES) encryption, as put forward by NIST, should be used to encrypt all sensitive customer and company data.  In 2001 NIST formally adopted the AES encryption algorithm.  Since then, it has been proven countless times to render the data useless in the event of a breach.  In fact, it would take the fastest supercomputer 375 x 10^50 years to brute force AES encryption by running through all permutations of an AES 256-bit encryption key.  In comparison, the Sun will reach its Red Giant stage in 54 x 10^8 years, engulfing Mercury, Venus, and possibly Earth.  In other words, the Earth will be incinerated by the then rapidly expanding Sun before a hacker could effectively crack AES encryption through brute force.

The good news: AES encryption comes standard in most databases’ native encryption libraries.  Along with those free versions, there are a number of commercial products that rely on AES encryption.  So finding a way to secure your data with AES encryption will be fairly easy.  That being said, it is important to understand the development time and performance cost of each solution. Native encryption libraries are generally free but take a bit of development time.  Commercial solutions take less time to deploy, but many are file/folder level encryption products that carry a performance penalty, because they take longer to encrypt/decrypt than column level encryption products.
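
To make this concrete, here is a minimal sketch of AES-256 encryption and decryption at the application level, assuming Python and the third-party cryptography package (AES-GCM is one of the standard authenticated modes):

# pip install cryptography
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)        # a 256-bit AES key
aesgcm = AESGCM(key)
nonce = os.urandom(12)                           # never reuse a nonce with the same key
ciphertext = aesgcm.encrypt(nonce, b"123-45-6789", None)
print(aesgcm.decrypt(nonce, ciphertext, None))   # -> b'123-45-6789'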

Centralized Encryption Key Management

As we mentioned, AES encryption is extremely difficult to brute force attack.  Its strength lies in its ability to encrypt the data with a very long key (typically 256-bit). But its strength is also its weakness: if your encryption key becomes known to a bad actor, your encrypted data becomes compromised.  That is why any encryption strategy worth its salt will include proper, centralized encryption key management.

When defending your encryption key with full lifecycle key management, consider these things:

  • The encryption keys should be logically or physically separated from the encrypted data.  This way, if the encrypted data is compromised, attackers will not be able to decipher it.
  • The encryption keys should only be generated with a cryptographically secure pseudo-random number generator (CSPRNG), as shown in the sketch after this list.
  • Restrict administrator and user access to the keys to the fewest personnel possible.
  • Create clear separation of duties to prevent improper use of the keys by database administrators.
  • Manage the full lifecycle of the keys, from creation through activation, expiration, archival, and deletion.
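
A minimal sketch of that CSPRNG requirement, assuming Python's standard library (the variable name is illustrative):

import secrets

aes_key = secrets.token_bytes(32)   # 32 bytes = a 256-bit AES key, drawn from the OS CSPRNG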

For a more comprehensive view of encryption key management, please view the Definitive Guide to Encryption Key Management.

Real Time Log Monitoring

Forrester, in 2013, promulgated the cybersecurity model of “Zero Trust.”  In it, they put forward the motto: “never trust, always verify.”  By this, they mean that all users should be authenticated, restricted to the least amount of data possible, and verified, through real-time monitoring, to be doing the right thing.  To that end, they advocate:

  • Real Time Event Collection, in which you collect and log all events, in real time.
  • Event Correlation, in which you analyze all events and narrow in on the ones that do not conform to expected patterns.
  • Resolution Management, in which you investigate all suspect behavior and classify it as either benign or a possible threat for further investigation.

There are many Security Information Event Management (SIEM) tools available that accomplish this.  For more information, refer to Gartner’s SIEM Magic Quadrant to find the tools that fit your needs.

Final Thoughts

Defending data-at-rest is a never-ending struggle of building robust defenses and continuously improving them.  It's not a question of if, but when, a data breach will happen.  And if the DNC data breaches taught us anything, it is that breaches can be embarrassing and costly.  Since hackers are only growing more sophisticated in their techniques, it is incumbent upon us to respond with ever-increasing agility and sophistication of our own.

The old models of the high, guarded perimeter with complex passwords to gain entry are just not enough.  We need a higher degree of authentication, sensitive data rendered useless, and constant real-time monitoring of all traffic.  Your data depends on it.


Topics: Data Security

SQL Server Column Level Encryption

Posted by Patrick Townsend on Feb 28, 2017 9:11:00 AM

Microsoft customers attempting to meet security best practices and compliance regulations, and to protect their organization’s digital assets, turn to encryption of sensitive data in Microsoft SQL Server databases. The easiest way to encrypt data in SQL Server is through Transparent Data Encryption (TDE), which is a supported feature in SQL Server Enterprise Edition. For a variety of reasons, TDE may not be the optimal solution. Microsoft customers using SQL Server Standard, Web, and Express Editions do not have access to the TDE feature. And even when using SQL Server Enterprise Edition, TDE may not be the best choice for very large databases.

Let’s look at some approaches to column level encryption in SQL Server. The following discussion assumes that you want to meet encryption key management best practices by storing encryption keys away from the protected data, and retain full and exclusive control of your encryption keys.

Column Level Encryption (aka Cell Level Encryption) 
Starting with the release of SQL Server 2008, all Enterprise editions of the database have supported the Extensible Key Management (EKM) architecture. The EKM architecture allows for two encryption options: Transparent Data Encryption (TDE) and Column Level Encryption (CLE). Cell Level Encryption is the term Microsoft uses for column level encryption. SQL Server Enterprise edition customers automatically have access to column level encryption through the EKM architecture.

Encryption Key Management solution providers can support both TDE and Column Level Encryption through their EKM Provider software. However, not all key management providers support both - some only support TDE encryption. If your key management vendor supports Cell Level Encryption this provides a path to column level encryption in SQL Server Enterprise editions.

Application Layer Encryption
Another approach to column level encryption that works well for SQL Server Standard, Web, and Express editions is to implement encryption and decryption at the application layer. This means that your application performs encryption on a column’s content before inserting or updating the database, and performs decryption on a column’s content after reading a value from the database. Almost all modern application languages support the industry standard AES encryption algorithm. Implementing encryption in languages such as C#, Java, Perl, Python, and other programming languages is now efficient and relatively painless.
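
A minimal sketch of the pattern, assuming Python and the third-party cryptography package; Python's built-in sqlite3 stands in for SQL Server so the example is self-contained, and the table, column, and key handling are illustrative (in production the key would come from your key manager):

# pip install cryptography  (sqlite3 is in the standard library)
import os
import sqlite3
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)       # illustrative; fetch from a key manager in production
aesgcm = AESGCM(key)

def encrypt_field(value: str) -> bytes:
    nonce = os.urandom(12)                      # fresh nonce per value, stored with the ciphertext
    return nonce + aesgcm.encrypt(nonce, value.encode(), None)

def decrypt_field(blob: bytes) -> str:
    return aesgcm.decrypt(blob[:12], blob[12:], None).decode()

# Encrypt before the INSERT, decrypt after the SELECT.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, ssn BLOB)")
db.execute("INSERT INTO customers (ssn) VALUES (?)", (encrypt_field("123-45-6789"),))
print(decrypt_field(db.execute("SELECT ssn FROM customers").fetchone()[0]))   # -> 123-45-6789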

The challenge that developers face when implementing encryption at the application layer is the proper protection of encryption keys. Security best practices and compliance regulations require a high level of protection of encryption keys. This is best accomplished through the use of an encryption key management system specifically designed to create, securely store, and manage strong encryption keys. For developers, the primary challenge in a SQL Server encryption project is integrating the application with the key manager. Many vendors of key management systems make this easier by providing Software Development Kits (SDKs) and sample code to help the developer accomplish this task easily.

SQL Views and Triggers with User Defined Functions (UDFs)
Another approach to column level encryption involves the use of SQL Views and Triggers. Leveraging the use of User Defined Functions (UDFs) the database administrator and application developer can implement column level encryption by creating SQL Views over existing tables, then implementing SQL Triggers to invoke user defined functions that retrieve encryption keys and perform encryption and decryption tasks. This approach has the advantage of minimizing the amount of application programming that is required, but does require analysis of the SQL database and the use of User Defined Functions. Database administrators and application developers may be able to leverage the SDKs provided by an encryption key management solution to make this process easier.

SQL Server Always Encrypted
One promising new technology recently implemented by Microsoft is SQL Server Always Encrypted. This feature is new with SQL Server 2016 and can work with any edition of SQL Server. It is a client-side architecture which means that column data is encrypted before it is sent to the database, and decrypted after it is retrieved from the database. While there are many constraints in how you can put and get data from SQL Server, it is a promising new technology that will help some customers protect data at the column level. You can expect to see support for Always Encrypted being announced by encryption key management vendors in the near future.

SQL Server in the Azure Cloud
As Microsoft customers and ISVs move to the Azure cloud they are taking their SQL Server applications with them. And it is very common that they take full implementations of SQL Server into their Azure virtual cloud instances. When SQL Server applications run in a virtual machine in Azure they support the same options for column level encryption as described above. This includes support for Cell Level Encryption through the EKM Provider architecture as well as application layer encryption. As in traditional IT infrastructure the challenge of encryption key management follows you into the Azure cloud. Azure customers should look to their encryption key management vendors to provide guidance on support for their key management solution and SDKs in Azure. Not all key management solutions run in Azure and Azure is not a supported platform for all vendor SDKs.

Azure SQL Database
In the Azure cloud Microsoft offers the SQL Server database as a cloud service. That is, Microsoft hosts the SQL Server database in the cloud and your applications can use this service rather than a full instance of SQL Server in your cloud instance. Unfortunately, Azure SQL Database only supports Transparent Data Encryption through the EKM Provider interface and does not yet support Cell Level Encryption. It also restricts encryption key management to only the Azure Key Vault facility requiring you to share key custody with Microsoft.

Column level encryption at the application layer is fully supported for Azure SQL Database. As in the traditional IT infrastructure your C#, Java, and other applications can encrypt and decrypt sensitive data above the database level. Again, check with your key management solution provider to insure that application level SDKs are supported in the Azure cloud.

AWS Cloud and SQL Server
The Amazon Web Services (AWS) implementation of cloud workloads parallels that of Microsoft Azure. You can deploy a full instance of SQL Server in an AWS EC2 instance and use the features of SQL Server as in traditional IT infrastructure. Amazon also offers a database service called Amazon Relational Database Service, or RDS. The RDS service offers multiple relational databases including SQL Server. As with Azure, there is no support for key management solutions other than the Amazon Key Management Service (KMS), requiring a shared implementation of key custody.

As you can see, there are many ways to implement column level encryption in SQL Server while following good encryption key management practices. I hope this helps you on your journey to more secure data in SQL Server.

Patrick


Topics: Encryption, SQL Server, Cell Level Encryption

Three Core Concepts from "Zero Trust" to Implement Today

Posted by Ken Mafli on Feb 1, 2017 12:57:58 PM

 

“There are only two types of data that exist in your organization: data that someone wants to steal and everything else.”

Forrester Research

In 2013, Forrester released an outline of their proprietary “Zero Trust Model” of information security to The National Institute of Standards and Technology (NIST).  Their model seeks to change “the way that organizations think about cybersecurity,” executing on higher levels of data security while “allowing for free interactions internally.”

But, when looking to strengthen your organization’s data security posture, it is good to start with what has changed.  In the report, Forrester concluded that the old network security model was that of “an M&M, with a hard crunchy outside and a soft chewy center.”  It is the idea of the hardened perimeter around the traditional, trusted datacenter.  This old model is fraught with vulnerabilities, because it is not equipped to handle new attack vectors introduced by IoT, workforce mobility, and data centers moving to the cloud. It is increasingly outmoded and weak.

In its place must come a data security model that takes into account the current network landscape and its vulnerabilities.  Enter Zero Trust.  It builds upon the notion of network segmentation and offers key updates, all under the banner: "never trust, always verify."

Below are the three main concepts of Zero Trust.  Follow along as we break down the trusted/untrusted network model and, in its place, rebuild a new trust model.

 

Assume All Traffic is a Threat

The first rule of “never trust, always verify” is that all traffic within the network should be considered a potential threat until you have verified “that the traffic is authorized … and secured.” Let’s look at these two components:

  • Authorized Traffic: Each end user should present valid (and up-to-date) login credentials (i.e., username and password) as well as authenticate themselves with multi-factor authentication for each session logging into the network.  Usernames and passwords are not enough.  Only multi-factor authentication can reduce the risk of a hacker obtaining and misusing stolen login credentials.
  • Secured Traffic: All communication, coming from inside or outside of the network, should be encrypted.  It should always be assumed that someone is listening in.  Using SSH or TLS and keeping abreast of their potential vulnerabilities is the only way to reduce the risk of exposure.

 

Give Minimal Privileges

The only way to minimize the risk of employees, contractors, or external bad actors misusing data is to limit the access each user/role is given to the least amount of privileges possible.  With this, it is a foregone conclusion that all sensitive data is already encrypted and that minimal privileges are given as to who can decrypt it.  We implement a minimal-privileges policy so that “by default we help eliminate the human temptation for people to access restricted resources,” and so that a hacker who obtains a user’s login credentials does not thereby gain access to the entire network.

The role-based access control (RBAC) model, first formalized by David Ferraiolo and Richard Kuhn in 1992 and then updated under a more unified approach by Ravi Sandhu, David Ferraiolo, and Richard Kuhn in 2000, is the standard today.  Its ability to restrict system access to authorized roles/users makes it the ideal candidate for implementing this leg of Zero Trust.  While Zero Trust does not explicitly endorse RBAC, it is the best game in town as of today.  For a deeper dive, visit NIST’s PDF of the model.
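
At its core, RBAC reduces to two mappings: users to roles, and roles to permissions. Here is a minimal sketch in Python; the role and permission names are illustrative:

# Role-based access control in miniature: users -> roles, roles -> permissions.
ROLE_PERMISSIONS = {
    "claims_clerk":   {"claims:read"},
    "claims_manager": {"claims:read", "claims:approve"},
}
USER_ROLES = {"alice": {"claims_clerk"}, "bob": {"claims_manager"}}

def is_authorized(user: str, permission: str) -> bool:
    # A user holds a permission only if one of their assigned roles grants it.
    return any(permission in ROLE_PERMISSIONS.get(role, set())
               for role in USER_ROLES.get(user, set()))

print(is_authorized("bob", "claims:approve"))    # True
print(is_authorized("alice", "claims:approve"))  # False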

 

Verify People are Doing the Right Thing

Once we have authenticated each user and restricted them to the least amount of data possible to adequately do their job, the last thing to do is “verify that they are doing the right thing” through logging and inspection.

Here is a short (and certainly not exhaustive) list of techniques used to inspect all events happening in your network.  

  • Real Time Event Collection: the first step is to collect and log all events, in real time.
  • Event Correlation: next, you need to analyze all of the events and narrow in on those that need greater scrutiny.
  • Anomaly Detection: in a related move, you will want to identify the events that do not conform to the expected pattern and investigate further.
  • Resolution Management: all events that do not meet the expected pattern should be investigated and either classified as benign or deemed a possible threat and escalated for further investigation.

Note: There are many tools available that accomplish these.  Please refer to Gartner’s Security Information Event Management (SIEM) Magic Quadrant to find the tools that may interest you.

 

Final Thoughts

It's not a question of if, but when, a data breach will happen. Hackers grow more sophisticated in their attacks and threaten everything from intellectual property to financial information to your customers’ Personally Identifiable Information (PII).  The old model of the high, guarded perimeter with the trusted, internal network no longer functions as a secure model.  Zero Trust offers a more comprehensive approach to today’s data security needs.  As you look to deploy this model, begin to seek out tools that will help you.  Here is a short list of some of the tools to consider:

  • Log Collection Tools: Some platforms, like the IBM i, have proprietary formats that are difficult for SIEMs to read.  Make sure your SIEM can fully collect all needed logs.  If it cannot, find or build a tool that will properly capture and send the logs on to your SIEM.
  • SIEM Tools: As mentioned earlier in the article, there are many good SIEM tools out there to help you collect, analyze, and monitor all events on your network.
  • Encryption (data-in-flight): Fortunately, there are many open source protocols for secure communications, like SSH and TLS.
  • Encryption (data-at-rest): Advanced Encryption Standard (AES) encryption is ubiquitous in most platforms’ native encryption libraries.  There are also a number of products that offer everything from column level to folder/file level encryption.
  • Centralized Key Management: The encryption you deploy is only as good as the level of protection you give to the encryption keys.  Therefore, robust encryption key management is a must.
  • User Access Management: Managing privileges, credentials, and multi-factor authentication can be a daunting task.  The more you can automate this, the better.

In many cases, adopting this approach will not be about bolting a few products onto your existing data security framework but about completely renovating it.  Don’t let expediency force you to defend your data with only half measures.  Take a deep dive into Zero Trust’s approach and see where you may be vulnerable.

 


Topics: Data Security

The Future of Active Security Monitoring on the IBM i

Posted by Luke Probasco on Jan 24, 2017 8:19:21 AM

Active monitoring is one of the most effective security controls an enterprise can deploy. In fact, a large majority of security breaches occur on systems that have been compromised days, weeks, or even months before sensitive data is lost. A recent Verizon Data Breach Investigations Report indicates that a full 84 percent of all breaches were detected in system logs.  By actively collecting security logs in real-time, organizations can not only monitor security events, but also prevent a data breach before it starts.  I recently sat down with Patrick Townsend to discuss log collection and active monitoring on the IBM i.

Hi Patrick, can you give our readers an overview on the importance of collecting and monitoring security logs on the IBM i?

One of the most effective things that you can do to prevent a data breach is to deploy an active monitoring solution, sometimes also known as system logging.  You’ll find active monitoring at the top of all cyber-security lists of things to do – because it is effective.  Active monitoring is key to a strong security posture, for anybody.

Today, we all know that there is no longer a true perimeter and that our systems are at risk.  Luckily, active monitoring can help.  Here are some key principles that organizations need to understand.  First, an active monitoring solution needs to involve a log collection server or SIEM solution (IBM Security QRadar, Splunk, LogRhythm, etc.) to collect security events across the entire enterprise and actively detect threats.  Second, there needs to be real-time collection and monitoring of security events.  Rather than scooping up the security events once or twice a day, it is imperative to collect these events in real-time. When you collect logs across the entire enterprise, a SIEM can provide a lot of intelligence to identify patterns and anomalies – which will identify a potential attack.  The final critical components are good reporting, query, and forensics tools.  SIEM solutions also give you the ability to quickly run reports and analyze suspect data.  This is important for two reasons.  If you are having an attack, you need to identify quickly where the attack is originating and how it is happening.  This is essential in order to know how to remediate it.  And if you aren’t able to pinpoint the problem, it is very likely that you are going to be attacked by the same methods again.

Switching gears, the serious points for an IBM i customer revolve around the fact that the IBM i is a critical back-office processor for most customers and runs multiple applications.  Too often the IBM i is an island within an organization, but it is important that it is fully integrated in your enterprise’s entire infrastructure security strategy.

Also, it is generally true that a cyber-attack almost never starts on an IBM i server.  They typically start on a compromised user PC or someplace in the organization.  From there, a hacker spends a fair amount of time probing around the IBM i finding any weak points.  We shouldn’t be naïve – hackers know about IBM i servers.  They know what to look for, they know the user IDs, they know how to compromise these systems – they are very good at it.

IBM introduced some new security event sources in V7R3.  Can you talk a bit about those? And what events should an IBM i customer be collecting?

Every release of the IBM i server has had new security events and fields to collect and monitor.  At Townsend Security we work very hard to stay ahead of these releases so that our customers are well positioned to handle new information and use it for protection.  A couple of examples include IPv6 address support and new fields in existing events.  Regarding the recent V7R3 release, new sources include:

  • New values for the QAUDLVL (Auditing level) system value, including:
    • *NETSECURE (to audit secure network connections)
    • *NETTELSVR (to audit Telnet connections)
    • *NETUDP (to audit UDP connections)

To address the second part of your question, when you deploy an active monitoring solution on the IBM i, you are certainly going to want to collect events from QAUDJRN, QHST, QSYSOPR, as well as exit points.  Interestingly, the QAUDJRN security audit journal does not exist when you first install a new IBM i server. You must create the journal receivers and the journal to start the process of security event collection.
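
For reference, creating the journal follows the two-step sequence IBM documents; the receiver name, library, and options below are typical illustrative choices, not requirements:

CRTJRNRCV JRNRCV(QGPL/AUDRCV0001) THRESHOLD(100000) AUT(*EXCLUDE) TEXT('Security audit journal receiver')
CRTJRN JRN(QSYS/QAUDJRN) JRNRCV(QGPL/AUDRCV0001) MNGRCV(*SYSTEM) DLTRCV(*NO) AUT(*EXCLUDE) TEXT('Security audit journal')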

Aside from the new log sources that IBM introduced in V7R3, for someone who maybe deployed a logging solution a few years ago, what should they be aware of now?

First, let’s take a look at how compliance regulations have been evolving.  We now know that most attacks work on the basis of privilege escalation.  For example, an attacker gets access to our systems and then eventually gets sufficient authority to steal data. Because of this, we are seeing that it is more important to identify when an administrative level or highly privileged user logs in to our system.  This is an example of how a logging solution needs to evolve to meet current compliance requirements. Businesses are now required to log and monitor that activity.

Unfortunately, this can be particularly hard on the IBM i.  On first look, an IBM i account may appear to have normal user privileges, but may in fact inherit higher privileges through a Group Profile or Supplemental Group Profile. It is important to detect these elevated privileges in real time and provide the security administrator with an easy-to-use report to identify the source of elevated privileges. This is an excellent example of how logging solutions need to evolve with the ways security events are monitored.  We recently tackled this in the latest release of our Alliance LogAgent.

Where do you see the future of logging on the IBM i?

Let me dust off my crystal ball!  First off, File Integrity Monitoring (FIM) will become more important.  To maintain a strong posture, security administrators need to know who is accessing sensitive data and system values on the IBM i.  We’re also going to see more requirements around File Integrity Monitoring across the regulatory compliance environments.  Why?  Because, as we discussed earlier, cyber-attackers escalate privileges, access sensitive data, and change security configurations in order to get the work done that they want to do.  Again, this is why we are seeing increased requirements in regulations like the Payment Card Industry Data Security Standard (PCI DSS) and new financial services regulations.

Another interesting prediction: it won’t be unheard of for organizations to use multiple SIEM solutions. We are starting to see businesses use one SIEM for traditional security monitoring and another to monitor operational data.  Operational data, you ask?  Sure.  Logging solutions can easily allow administrators to answer operational questions like: How full are my disks?  Do I have any critical hardware errors?  Businesses can also benefit from deploying a SIEM to monitor application data.  Sales teams, for example, can track inventory status, trending products, etc.  The benefits of log monitoring don’t have to be exclusive to security.

In the near future, we will also see a pickup of integration with Artificial Intelligence (AI), also commonly referred to as cognitive computing.  IBM has the Watson platform, and there are others, which I believe will be used to enhance security.  We are already seeing initial efforts in this respect.  Harnessing that AI capability with security makes total sense.  

Finally, as we are seeing, everything not bolted down is going to the cloud.  We will definitely see an evolution of new cloud services around security and logging.  It may take a little time for vendors to start leveraging that, but I believe it is definitely in the works.

To hear this interview in its entirety, download our podcast “The Future of Security Logging on the IBM i” and hear Patrick Townsend, founder and CEO of Townsend Security, further discuss log collection and monitoring on the IBM i, new log sources in V7R3, and the future of security logging on the IBM i.


Topics: System Logging, Alliance LogAgent

Fixing the TDE Key Management Problem in Microsoft SQL Server

Posted by Patrick Townsend on Jan 10, 2017 7:31:56 AM

Many Microsoft SQL Server users have taken the first step to protect sensitive data such as Personally Identifiable Information (PII), Protected Health Information (PHI), Primary Account Numbers (PAN) and Non-Public Information (NPI) by encrypting their databases with Transparent Data Encryption (TDE). It is extremely easy to implement TDE encryption as it does not require program changes.

A common cause of audit failures might not be so obvious: the failure to properly protect the SQL Server key encryption key once you activate encryption in SQL Server. With Transparent Data Encryption you have the choice of storing the service master key within the SQL Server context itself, or protecting the master key with a key management system using the SQL Server Extensible Key Management (EKM) interface. Why is it important to do this?

It turns out that it is easy for cyber criminals to recover the SQL Server master key when it is stored within SQL Server itself. (Examples: https://blog.netspi.com/decrypting-mssql-credential-passwords/ and https://simonmcauliffe.com/technology/tde/#hardware)

Simon McAuliffe provides the clearest explanation I’ve seen on the insecurity of locally stored TDE keys in SQL Server. However, I don’t agree with him on the question of whether a key manager improves security. Given that there is no perfect security, I believe that you can get significant security advantages through a properly implemented key management interface.

If your TDE keys are stored locally, don’t panic. It turns out to be very easy to migrate to a key management solution. Assuming you’ve installed our SQL Server EKM Provider called Key Connection on your SQL Server instance, here are the steps to migrate your Service Master Key to key management protection using our Alliance Key Manager solution. You don’t even need to bring down SQL server to do this (from the Alliance Key Manager Key Connection manual):

Protecting an existing TDE key with Alliance Key Manager

First create a new asymmetric key pair within the AKM Administrative Console using the “Create EKM Key” and the “Enable Key for EKM” commands.

Then return to SQL Server and call the following command to create the asymmetric key alias for the new KEK that you created on the AKM server:

use master;

create asymmetric key my_new_kek
from provider KeyConnection
with provider_key_name = 'NEW_TDE_KEK',
     creation_disposition = open_existing;

In this example, NEW_TDE_KEK is the name of the new key on AKM, and my_new_kek is the key alias.

Then use the ALTER DATABASE statement to re-encrypt the DEK with the new KEK alias assigned in the previous statement:

-- Run in the context of the encrypted database, not master.
ALTER DATABASE ENCRYPTION KEY
ENCRYPTION BY SERVER ASYMMETRIC KEY my_new_kek;

Note that you do not have to take the database offline to perform this action.

Of course, there are other steps that you should take to secure your environment, but I wanted to demonstrate how easy it is to make the change.

The SQL Server DBA and the network administrator will have lots of other considerations in relation to SQL Server encryption. This includes support for clustering and high availability, automatic failover to secondary key servers, adequate support for separation of duties (SOD) and compliance, and the security of the credentials needed to validate SQL Server to the key manager. All of these concerns need to be addressed in a key management deployment.

For SQL Server users who deploy within a VMware or cloud infrastructure (AWS, Azure), Alliance Key Manager can run natively in your environment, too. It does not require a hardware security module (HSM) to achieve good key management with SQL Server. You have lots of choices in how you deploy your key management solution.

It turns out not to be difficult at all to address your SQL Server encryption key insecurities!

Patrick


Topics: SQL Server, Transparent Data Encryption (TDE)

OpenSSH on the IBM i and Your Security

Posted by Patrick Townsend on Jan 3, 2017 7:45:48 AM

Lately I’ve seen some criticism of the OpenSSH implementation on the IBM i platform which seems to imply that a third-party implementation of the Secure Shell (SSH) file transfer application is better than the no-charge licensed OpenSSH implementation from IBM. I disagree with that opinion and think there are good security and implementation reasons to stick with the IBM OpenSSH implementation.

Here are some reasons why I like OpenSSH:

OpenSSH is supported by a global open source community

Tatu Ylönen founded SSH Communications Security in 1995 and produced the first versions of an open source SSH implementation. Since 1999 the OpenSSH application has been maintained by the OpenBSD Project, which is funded by the OpenBSD Foundation and managed by Theo de Raadt. OpenSSH is available on a wide variety of operating systems, including the IBM i, where it is deployed as a no-charge licensed product and maintained by IBM.  OpenSSH continues to be actively developed, and new encryption algorithms have been added recently.

OpenSSH is widely used by large and small organizations

By some estimates the OpenSSH implementation of the SSH protocol and applications commands a 97 percent market share for SSH implementations. This means that OpenSSH is in wide use by large and small organizations to securely manage their eCommerce needs. This also means that OpenSSH receives a lot of scrutiny by compliance and security experts. Widely deployed solutions tend to get more scrutiny from security experts, and this is true for OpenSSH.

OpenSSH is secure

No application is immune to security challenges. However, OpenBSD and the OpenSSH application in particular have a stellar record for security. With security products, deep expertise and commitment matter. OpenSSH started with security as a leading goal by its developers, and it shows. Over the last few years there have been fewer than a dozen security issues; most were unlikely to be exploited, and all were patched rapidly through updates by IBM. The OpenBSD set of applications that include OpenSSH have a great record on security. If you think the IBM i platform has a good security record, take a look at OpenSSH.

IBM provides technical support for OpenSSH

We have all developed a deep appreciation for IBM’s commitment to security over the years. It is one of the great values of the IBM i platform. As new vulnerabilities are discovered you need to have a reliable and timely source of patches and enhancements, and IBM has stood behind this critical application. Security notifications are managed by IBM so that you know when you need to do an update. By making OpenSSH a no-charge licensed program, IBM gives IBM i customers patches through the normal PTF update process. Do you know any third-party IBM i vendor with an equal commitment to notification, maintenance and patching? IBM has earned our trust through this process.

OpenSSH is PCI compliant

PCI Qualified Security Assessors (QSAs) like Coalfire, TrustWave and others recognize that a properly patched implementation of OpenSSH meets PCI Data Security Standards (PCI-DSS) compliance, and IBM also tracks OpenSSH for PCI compliance. This again reflects IBM’s and OpenBSD’s commitment to security. If you are using a third-party IBM i solution for SSH how well is it tracked by the PCI audit community?

SSH is a complex protocol

Bruce Schneier said “Complexity is the enemy of security.” SSH is a complex protocol and this means that extra care needs to be taken in its development, deployment and maintenance. No third-party SSH solution rises to the level of care taken by the OpenSSH community and by IBM. Almost every business depends on secure file transfer for daily business operations. Deploying the most secure SSH solution is a critical security step.

OpenSSH does not use OpenSSL or Java JSSE

We’ve read a lot over the last few months about security issues in OpenSSL and Java. Many IBM i customers are confused about the relationship between OpenSSH and OpenSSL. In fact, OpenSSH does not use OpenSSL’s TLS protocol implementation for communications. This means that OpenSSH was not subject to HeartBleed and other TLS-related OpenSSL vulnerabilities. We are all also now painfully aware of the security issues in Java. Most browsers no longer allow Java plugins for this reason. Third-party SSH products may or may not use OpenSSL or Java for communications. If you are running a third-party IBM i SSH solution, do you know if it uses OpenSSL or Java?

Third-party SSH solutions provide no significant advantage over OpenSSH

OpenSSH is a secure, reliable, and resilient implementation of SSH for secure data transfer that is backed by IBM and a worldwide community of users and developers. Our Alliance FTP Manager solution fully integrates with the IBM i OpenSSH application for secure, automated and managed file transfer. Our solution automates the OpenSSH transfer of hundreds of thousands of file transfers every day without compromising security.
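
For readers automating their own transfers, the IBM-supplied OpenSSH tools support unattended operation out of the box; here is a batch-mode sftp run (the host and file paths are illustrative):

# -b runs sftp non-interactively; "-" reads the batch script from stdin
sftp -b - transfer@partner.example.com <<'EOF'
put /outbound/payments.csv /inbound/payments.csv
EOF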

My opinion? You probably don’t need more IT risk in your life. Stick with OpenSSH for your security needs. You will be in good company.

Patrick


Topics: IBM i, Secure Managed File Transfer

New York Department of Financial Services (NYDFS) and Encryption - 8 Things to Do Now

Posted by Patrick Townsend on Dec 12, 2016 10:27:38 AM

The New York Department of Financial Services (NYDFS) surprised the financial services industry by fast tracking new cybersecurity regulations in September of 2016. Due to go into effect in January of 2017 with a one-year transition period, the regulation takes a very prescriptive approach to cybersecurity, which includes a mandate to encrypt data at rest. The financial sector is broadly defined as banks, insurance companies, consumer lenders, money transmitters, and others. The law is formally known as 23 NYCRR 500 and you can get it here.

There isn’t much wiggle room on the requirement for encrypting sensitive data. You can use compensating controls if you can show that encryption is “infeasible”. But I am not sure how you would show that. All modern database systems used by financial applications support encryption. It would be hard to imagine a financial database where encryption would not be feasible. Don’t plan on that being an excuse to delay encrypting data at rest!

The time frame is short for implementing the encryption mandate. One year seems like a long time, but it is extremely aggressive given the development backlog I see in most banks.

Here are some things you should start doing right now:

1) Inventory All of Your Financial Systems

This seems like a no-brainer, but you might be surprised how many organizations have no formal inventory of their IT systems that contain financial data. This is a top-of-the list item on any cybersecurity list of recommendations, so making or updating this list will have a lot of benefits.

2) Document Storage of All Sensitive Information (Non-Public Information, or NPI)

For each system in your inventory (see above) document every database and storage mechanism that stores NPI. For database systems identify all tables and columns that contain NPI. You will need this documentation to meet the NYDFS requirements, and it is a roadmap to meeting the encryption requirements.

3) Prioritize Your Encryption Projects

You won’t be able to do everything at once. Following all modern cybersecurity recommendations, prioritize the systems and applications that should be addressed using a risk model. Here are a few factors that can help you prioritize:

  • Sensitivity of data
  • Amount of data at risk
  • Exposure risk of the systems and data
  • Compliance risk
  • Operational impact of loss

It is OK to be practical about how you prioritize the systems, but avoid assigning a high priority to a system because it might be easiest. It is better to tackle the biggest risks first.

4) Establish Encryption Standards

Be careful which encryption algorithms you use to protect sensitive data. In the event of a loss you won’t want to be using home-grown or non-standard encryption. Protect data at rest with NIST compliant, 256-bit AES encryption. This will give you the most defensible encryption strategy and is readily available in all major operating systems such as Windows, Linux, and IBM enterprise systems.

5) Establish Key Management Standards

Protecting encryption keys is the most important part of your encryption strategy and the one area where many organizations fail. Encryption keys should be stored away from the encrypted financial data in a security device specifically designed for this task. There are a number of commercial key management systems to choose from. Be sure your system is FIPS 140-2 compliant and implements the industry standard Key Management Interoperability Protocol (KMIP).

Hint: Don’t fall into the project-killing trap of trying to find a key management system that can meet every key management need you have in the organization. The industry just isn’t there yet. Pick a small number of key management vendors with best-of-breed solutions.

With encryption standards well defined and an encryption key management strategy in hand you are ready to get started with your encryption projects.

6) Analyze Performance and Operational Impacts

Encryption will naturally involve some performance and operational impacts. Encryption is a CPU intensive task, so plan on doing some performance analysis of your application in real-world scenarios. If you don’t have test environments that support this analysis, get started now to create them. They will be invaluable as you move forward. Modern encryption is highly optimized, and you can implement encryption without degrading the user experience. Just be prepared to do this analysis before you go live.

There are also operational impacts when you start encrypting data. Your backups may take a bit more storage and take longer to execute. So be sure to analyze this as a part of your proof-of-concept. Encrypted data does not compress as well as unencrypted data and this is the main cause of operational slow-downs. For most organizations this will not be a major impact, but be sure to test this before you deploy encryption.

8) Get Started

Oddly (to me at least) many organizations just fail to start their encryption projects even when they have done the initial planning. A lack of commitment by senior management, lack of IT resources, competing business objectives, and other barriers can delay a project. Don’t let your organization fall into this trap. Do your first project, get it into production, and analyze the project to determine how to do it better as you move forward.

Fortunately we have a lot of resources available to us today that were not available 10 years ago. Good encryption solutions are available and affordable for traditional on-premise environments, for VMware infrastructure, and for cloud applications.

You can meet the NYDFS requirements and timelines if you start now. But don’t put this one off.

Patrick

 

Resources:

New York Department of Financial Services:

https://www.dfs.ny.gov/legal/regulations/proposed/propdfs.htm

 

Harvard Law School analysis of NYDFS:

https://corpgov.law.harvard.edu/2016/09/24/nydfs-proposed-cybersecurity-regulation-for-financial-services-companies/


 

Topics: Compliance, Encryption