Feel free to call us toll free at +1.800.357.1019.

If you are in the area you can reach us at +1.360.359.4400.

Standard support
6:30am - 4:00pm PST, Monday - Friday, Free

Premium support
If you own Townsend Security 24x7 support and
have a production down issue outside normal
business hours, please call +1.800.349.0711
and the on-call person will be notified.

International customers, please dial +1.757.278.1926.

Townsend Security Data Privacy Blog

Financial Services and Creating a Security Strategy

Posted by Patrick Townsend on May 9, 2017 9:04:17 AM

I recently spent the better part of an hour talking to a new IT director for a small financial services company. He was feeling overwhelmed at the scope of the work ahead of him and bemoaning the lack of any guidance on how to start; the set of tasks in front of him seemed gargantuan in both number and scope. I can understand that sense of panic when you realize that you are behind the curve and that your organization is facing real threats. I want to share with you some of the advice I gave this IT director (with a tip of the hat to all of the hard-working security professionals who've shared with me!).

It’s a Process, Not a Destination

The first error I see many IT managers make is that they look at security as a set of tasks to accomplish rather than a set of new IT processes. We technical folks really like to make a task list and check the items off. We have a sense of accomplishment when we get to the end of the list. It's done! Hallelujah!

Sorry, security is never done. It is important to realize that a security program means that many people throughout your organization are going to be doing things differently, and will be adjusting to new threats over time. For example, we used to think that the use of strong passwords was adequate to protect our access to corporate web services. But it isn't enough now. Now we have to use multi-factor authentication in addition to strong passwords. Why? The attacks on password-protected assets have become more sophisticated. We have to step up our game. And this is true across a number of security practice areas.

If you are successful, you will be changing how your organization OPERATES over time, not just completing a set of tasks.

Know Where Your Sensitive Data Is

It is very common that businesses do not actually know where their sensitive data resides in the organization, and where it goes outside of the organization. Businesses are always undergoing change to meet new objectives, counter emerging competitive threats, accommodate new technologies, and comply with new regulations. Managing a business is like fighting a war on many fronts – it is barely organized chaos!

It is understandable, then, that an IT organization may not have a clear map of its critical data. But you will need that map before you can start really protecting those assets. For example, you might have data extracts uploaded to your web site for customers, but not know that the upload process was put in place five years ago and its developer has since moved on. That sensitive data just gets uploaded every night and might not be properly protected.

It’s time to do some archeology.

Be sure you have an inventory of all of your critical applications along with the data they process. This is going to seem like a tedious job, but it is critical to everything you do. Make the map, then hold a celebration and invite your executive team.

In the process don’t forget the data feeds. Document every place that data enters your organization from the outside, and where you send data to outside services.

Find a Dynamic Security Framework

Now you need a plan! Fortunately you won’t have to figure out a plan on your own. There are several good sources of dynamic security planning guides that you can use as a starting point. A good plan will cover the essential security tasks, and will prioritize them by importance. A complete plan with prioritized tasks will help you focus your attention in the right areas!

There are several sources for security plans that you can access and use right away; the CIS Critical Security Controls is one good example.

The great thing about these security plans and frameworks is that you can get started with them very quickly. For example, the CIS Critical Security Controls is available as an Excel spreadsheet. You can make a copy and start working through the sections from top to bottom.

Do the Important Things First

We are sometimes tempted to do some of the easy things first in order to convey a level of accomplishment to our management team. I recommend that you try to resist this tendency as much as possible. Start with the most important items in your priority list and tackle those first. They often give you a lot of security benefit and many do not require a lot of investment or work. It is important to do the most effective and critical tasks first.

Get Your Management Buy-in

Security takes commitment, human resources, financial resources, and much more. You will need to get your management buy-in as quickly as possible. Start by sharing some stories from other companies in the financial services segment. We don’t necessarily want to scare our managers, but they need to have a realistic idea of the threat.

Educating your management team means explaining your need for budget resources. Some things can be done on the cheap, and you won’t want to overlook inexpensive steps to take that improve security. But some things are going to take some budget dollars to deploy. For example, continuous monitoring of system logs with a SIEM solution is one of the most effective security strategies you can deploy. But this will almost certainly mean the deployment of a commercial SIEM solution and this will require fiscal expenditures.

Any steps you take to educate your management team will be worth the effort.

Don’t Forget About Employee Education

Remember that you live in the security world, but the employees in your organization don't. They are not likely to be up to date on the latest threats. Educating employees on how to identify spam email messages has a lot of benefits. Find ways to work a simple security awareness exercise into employee schedules for a few minutes each week.

You’ve probably heard of Bug Bounties – how about providing some small rewards to employees who discover and report spam emails with potentially harmful content? It is amazing how effective programs like this can be.

Rinse and Repeat

Let’s go back to that first point. A security program is something that changes how you and your colleagues live your professional lives – it is not a set of checkboxes. Create an annual calendar of security tasks and review points. Make sure that this includes periodic reviews with the upper management team. If you are doing this right, you will be making periodic adjustments to the security program, and things that are important today may be eclipsed by new threats tomorrow. That’s not a particularly happy thought, but if you keep adjusting you will be in a safer position.

Finally, we make progress one step at a time. Once you start down this road it will get easier as you progress. Good luck with your new security programs!

Patrick


Topics: Data Security, Security Strategy

Press Release: Townsend Security Secures Nonpublic Personal Information (NPI) for Financial Services and Personally Identifiable Information (PII)

Posted by Luke Probasco on May 1, 2017 6:00:00 AM

By protecting data with encryption and key management solutions from Townsend Security, businesses can easily meet compliance requirements.

Townsend Security, a leading provider of encryption and key management solutions, today announced that Alliance Key Manager can help businesses in the finance industry meet new encryption requirements, including those found in the New York Department of Financial Services (NYDFS) cybersecurity regulation and the European Union General Data Privacy Regulation (GDPR), in addition to existing Gramm-Leach-Bliley Act (GLBA) and PCI Data Security Standard (PCI DSS) requirements.

By protecting nonpublic personal information (NPI) and personally identifiable information (PII) with NIST-compliant AES encryption and FIPS 140-2 key management found in Townsend Security’s Alliance Key Manager, businesses can protect private information including customer financial records, Social Security numbers, income, and account numbers. Organizations that experience a data breach where unencrypted data is lost can suffer fines reaching into the millions of dollars, as well as face indirect costs like brand damage and customer loss.

Fortunately, encryption and key management have gotten tremendously easier to deploy and are within reach of the most modest budgets. Customers worldwide have turned to Alliance Key Manager because it enables them to easily meet the most stringent requirements found in GLBA, PCI DSS, and HIPAA. The solution has been validated to meet PCI DSS in VMware, and is also available as a hardware security module (HSM) and in the cloud (AWS, Azure, vCloud).

“The finance industry is increasingly being held accountable for the security, confidentiality and integrity of non-public customer information,” said Patrick Townsend, founder and CEO, of Townsend Security. “Encryption, along with key management, is the best way to ensure that private information remains private – even in the event of a breach.”


Topics: Press Release

Encryption Requirements for Banks & Financial Services

Posted by Luke Probasco on Apr 25, 2017 8:33:00 AM

It should come as no surprise that the financial industry is among the most regulated in the world.  There are strong data security requirements for the banking and financial industries due to the sensitive and private data that they deal with.  While GLBA/FFIEC are specific to these industries, compliance regulations such as PCI DSS, SOX, and state privacy laws can also apply.  One thing they all have in common, though, is that encryption, along with proper key management, can mean the difference between a public breach notification and having a safe harbor.

What Data Needs Encryption?

Aside from the obvious personally identifiable information (PII) such as names, addresses, and Social Security numbers, the financial industry also regularly handles data that includes income, credit score, collection history, and family member PII and Non-public Personal Information (NPI).

The Gramm-Leach-Bliley Act (GLBA) specifically requires that institutions doing business in the US establish appropriate standards for protecting the security and confidentiality of customers’ NPI. The objectives are to:

  • Ensure the security and confidentiality of customer records and information
  • Protect against any anticipated threats or hazards to the security or integrity of such records
  • Protect against unauthorized access to information which could result in substantial harm or inconvenience to any customer

Additionally, the Federal Financial Institutions Examination Council (FFIEC), which is “empowered to prescribe uniform principles, standards, and report forms to promote uniformity in the supervision of financial institutions,” adds:

“Financial institutions should employ encryption to mitigate the risk of disclosure or alteration of sensitive information in storage and transit.”

Between FFIEC and GLBA, banks and financial institutions should encrypt:

  • Any sensitive information an individual gives you to get a financial product or service (such as name, address, income, Social Security number, or other information on an application)
  • Any information you get about an individual from a transaction involving your financial products or services (for example, the fact that an individual is your customer, account numbers, payment history, loan or deposit balances, and credit or debit card purchases)
  • Any information you get about an individual in connection with providing a financial product or service (for example, information from court records or from a consumer report)

Encrypting Private Data

Encryption is often considered the hardest part of securing private data.  The first step that banks and financial services can take is to deploy encryption based on industry-tested and accepted algorithms, along with strong key lengths.  Examples of industry-tested and accepted standards and algorithms for encryption include AES (128 bits and higher), TDES (minimum double-length keys), RSA (2048 bits and higher), ECC (160 bits and higher), and ElGamal (1024 bits and higher). See NIST Special Publication 800-57 for more information.
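To make this concrete, here is a minimal sketch of AES-256 encryption in an authenticated mode (AES-GCM), using the third-party Python cryptography package. The sample message and the in-memory key handling are illustrative only; in production the key would come from a key manager, as discussed below.

```python
# Minimal sketch of NIST-standard AES-256 encryption in an authenticated
# mode (GCM). Assumes the third-party "cryptography" package is installed.
# Key handling is simplified for illustration -- real keys belong in a key
# manager, not in program variables.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)   # 256-bit key, per NIST SP 800-57 guidance
nonce = os.urandom(12)                      # 96-bit nonce, unique per encryption
aesgcm = AESGCM(key)

# GCM both encrypts and authenticates the data, so tampering is detected on decrypt
ciphertext = aesgcm.encrypt(nonce, b"account number 4111-1111", None)
plaintext = aesgcm.decrypt(nonce, ciphertext, None)
```

Note that the nonce must never be reused with the same key; a fresh random nonce per message, stored alongside the ciphertext, is the usual practice.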

There are many levels within an organization’s stack at which encryption can be deployed, ranging from the operating system to the application and database level.  Choosing where to implement encryption has security implications.  Let’s focus on the two that are the most secure.

Encryption at the Database Level

Almost all commercial databases now support some type of encryption in the database itself.  Encryption at the database layer provides some distinct advantages:

  • Encryption is optimized for database performance
  • Encryption services are better integrated with other database access control services resulting in fewer security gaps
  • Encryption key management may be better integrated into the encryption implementation

Encryption at the Application Level

Application encryption involves the use of an encryption library and a key retrieval service.  Encryption at the application layer fundamentally means that you are encrypting data before inserting it into a database or other storage mechanism, and decrypting it after you retrieve the data.  It provides a very granular level of control of sensitive data and allows for the application of user access controls, program access controls, data masking, and other security controls.  Many feel that application layer encryption is the most secure way to protect data.
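As a sketch of the pattern, the snippet below encrypts a value at the application layer before inserting it into a database and decrypts it after retrieval. It assumes the third-party Python cryptography package; the table, column, and sample value are hypothetical.

```python
# Sketch of application-layer encryption: encrypt before INSERT, decrypt
# after SELECT. Assumes the third-party "cryptography" package; the table
# and column names are hypothetical examples.
import sqlite3
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # in production, retrieve this from a key manager
cipher = Fernet(key)

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, ssn BLOB)")

# Encrypt at the application layer, before the value ever reaches the database
conn.execute("INSERT INTO customers (ssn) VALUES (?)",
             (cipher.encrypt(b"123-45-6789"),))

# The database only ever sees ciphertext; decrypt after retrieval
row = conn.execute("SELECT ssn FROM customers WHERE id = 1").fetchone()
print(cipher.decrypt(row[0]))  # b'123-45-6789'
```

Because the database only ever stores ciphertext, even a direct dump of the table exposes nothing readable, which is the granular control described above.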

Encryption Key Management

Encryption is only as secure as your encryption keys.  The essential functions of a key management solution include storing the encryption keys separate from the data that they protect, as well as managing the encryption keys through the entire lifecycle including:

  • Generating keys for different cryptographic systems and different applications
  • Generating and obtaining public keys
  • Distributing keys to intended users, including how keys should be activated when received
  • Storing keys, including how authorized users obtain access to keys
  • Changing or updating keys, including rules on when and how keys should be changed
  • Addressing compromised keys
  • Archiving, revoking, and specifying how keys should be withdrawn or deactivated
  • Recovering keys that are lost or corrupted as part of business continuity management
  • Logging the auditing of key management-related activities
  • Instituting defined activation and deactivation dates, and limiting the usage period of keys

Just as with encryption, it is paramount that your key management solution meets industry standards.  Again, look to NIST and vendors who have a solution that is FIPS 140-2 compliant.  By adequately encrypting data to industry standards, the loss of encrypted data is not generally considered a breach, and is exempt from notification requirements.

FFIEC Guidance

The FFIEC provides guidance and oversight of GLBA for banks and financial organizations.  They publish the IT Examination Handbook, which provides guidance for the IT security controls that can or should be used to protect NPI under GLBA.  According to the Handbook, financial institutions should employ encryption to mitigate the risk of disclosure or alteration of sensitive information in storage and transit. Encryption implementations should include:

  • Encryption strength sufficient to protect the information from disclosure until such time as disclosure poses no material risk
  • Effective key management practices
  • Robust reliability

Fortunately, encryption and key management have gotten tremendously easier to deploy and are within reach of even the most modest budgets.  By protecting data with strong, standards-based encryption, organizations can meet the requirements of GLBA/FFIEC and protect their customers’ private data – even in the event of a breach.


Topics: GLBA/FFIEC

Splunk, Alliance LogAgent, and the LEEF data format

Posted by Patrick Townsend on Apr 18, 2017 7:09:08 AM

We have a lot of Enterprise customers deploying our Alliance LogAgent solution for the IBM i server to send security events to Splunk. On occasion a customer will deploy Alliance LogAgent and send data in the Log Event Extended Format (LEEF) to Splunk. The LEEF format is the preferred log data format for the IBM Security QRadar SIEM, so I’ve always found this a bit puzzling.

The light finally came on for me this week.

Security event information in syslog format (see RFC 3164) is largely unstructured data. And unstructured data is hard for SIEM solutions to understand. Here is an example from an Apache web server log:

[Wed Oct 11 14:32:52 2000] [error] [client 127.0.0.1] client denied by server configuration: /export/home/live/ap/htdocs/test

A SIEM administrator would have to do a fair amount of work configuring the SIEM to properly understand the importance of this message and take the proper action. If only the data were in some type of normalized format!

It turns out that the IBM Security QRadar LEEF format normalizes the system log information. A message like the above might look something like this in LEEF format:

date=20001011 time=143252 ipAddress=127.0.0.1 violation=client denied severity=5 path=/export/home/live/ap/htdocs/test

With field definitions like “date” and “time” the Splunk SIEM can easily digest the message and the Splunk query tools work great. It is easy to create reports and dashboards with this type of normalized data. The LEEF format is really good about this and Alliance LogAgent supports the LEEF definition.
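To see why this kind of normalized data is so easy to digest, here is a small sketch that parses the hypothetical key=value event above using nothing but the Python standard library.

```python
# Parsing a normalized key=value event (the hypothetical example from this
# post) into a dictionary -- the kind of work a SIEM does trivially with
# structured data, versus the custom rules unstructured syslog requires.
event = ("date=20001011 time=143252 ipAddress=127.0.0.1 "
         "violation=client denied severity=5 path=/export/home/live/ap/htdocs/test")

fields = {}
key = None
for token in event.split(" "):
    if "=" in token:
        key, _, value = token.partition("=")
        fields[key] = value
    else:
        # A bare token continues the previous value (e.g. "client denied")
        fields[key] += " " + token

print(fields["ipAddress"], fields["severity"])  # 127.0.0.1 5
```

Every field is immediately addressable by name, which is exactly what makes reports and dashboards easy to build on normalized data.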

What most Splunk administrators do not realize is that our Alliance LogAgent solution already normalizes all IBM i security events in exactly this fashion; it is the default data format for security events.

When we started the development of Alliance LogAgent more than 10 years ago we understood at the outset that system log data would be hard for a SIEM to parse. So from the first release of our solution we provided data in this normalized format. Whether you are using Splunk, LogRhythm, Alert Logic, or any other SIEM we make it really easy for the SIEM to digest and act on the information. And forensics, queries, and dashboards are easy to create.

So, Splunk users - listen up! The default system log format in Alliance LogAgent is exactly what you need to make Splunk work really well. You can use the LEEF format if you really want to, but you have all of the benefits of normalized data with the default format.

Here at Townsend Security we are vendor neutral when it comes to SIEM solutions. Our customers deploy a wide range of solutions including Splunk, IBM QRadar, LogRhythm, Alert Logic, SolarWinds, McAfee, and lots more. And they can move from one SIEM to another without changing their Alliance LogAgent configurations. We believe that actively monitoring system logs in real time is one of the most important security steps you can take. Early detection of a problem is so much better than trying to remediate a breach after the fact.

Patrick


Topics: Alliance LogAgent, Splunk

Trying to Outfox the Other - A Brief Look at Cryptography and Cryptanalysis

Posted by Ken Mafli on Mar 31, 2017 10:35:55 AM

A few months ago I wrote a definitive guide to Cryptographic Key Management. In it I wrote a section: A Brief History - the Need for Encryption Key Management. I wanted to expand upon the Classical Era of cryptography a bit because the story of data security goes back for millennia, and the twists and turns of this story can be felt even today.

Introduction

There has been a competition playing out through the centuries all the way from the highest corridors of power down to the shadiest back alleys. It is a struggle between those with a secret and those who want to uncover it. It is the story of cryptography and cryptanalysis.

As with every competition, each side is constantly trying to outfox the other. Peter Baofu described the competition this way: it is “the never ending cycle of replacing old broken designs” of cryptography and “new cryptanalytic techniques invented to crack the improved schemes.” In fact, “in order to create secure cryptography, you have to design against [all] possible cryptanalysis.” This means that both sides are in a never-ending arms race.

In his book, “The Future of Post-Human Mass Media,” Peter Baofu describes two main types of cryptanalysis: Classical and Modern Cryptanalysis. Let’s take a look at the Classical Period to see how this cat and mouse game has played out through the centuries:

The Classical Cat-and-Mouse Game

Classical Cryptography

One of the earliest forms of “secret writing” is the Substitution Cipher, where each letter of the message is systematically replaced by another set of predetermined letters. Its most famous form is the Caesar Cipher, used by Julius Caesar himself (1st century B.C.E.), in which:

“each letter in the plaintext is 'shifted' a certain number of places down the alphabet. For example, with a shift of 1, A would be replaced by B, B would become C, and so on.”

Another technique was Steganography, which literally means “covered writing” and is the art of concealing a message in plain sight. Mehdi Khosrowpour recounts one of the first recorded instances (in the 5th century B.C.E.):

“Demaratus, a Greek who lived in Persia, smuggled a secret message to Sparta under the cover of wax. It was to warn Sparta that Xerxes, the King of Persia, was planning an invasion ... by using his great naval fleet. He knew it would be very difficult to send the message to Sparta without it being intercepted. Hence, he came up with the idea of using a wax tablet to hide the secret message. In order to hide the secret message, he removed all the wax from the tablet, leaving only the wood underneath. He then wrote the secret message into the wood and re-covered the tablet with the wax.”

Classical Cryptanalytic Response

While steganography is only secure as long as the message goes undiscovered, substitution ciphers were meant to remain secret even if the message fell into enemy hands. They remained a fairly reliable means of securing messages, so long as the cipher was not revealed.

All that changed with the first recorded technique of cryptanalysis: Frequency Analysis. This technique “can be traced back to the 9th-century [C.E.], when the Arabian polymath Abu Yusef Yaqub ibn Ishaq Al-Kindi (also known as ‘Alkindus’ in Europe), proposed in A Manuscript on Deciphering Cryptographic Messages.” It comes from the observation that certain letters appear more often than others in a given language (the letter “E,” for example, occurs most often in English). There are also common letter pairings (like “TH” in English).

So, in the case of the Caesar Cipher, where the plaintext message is:

meet me at the theater

If each letter is shifted one place down the alphabet, it becomes:

nffu nf bu uif uifbufs

Frequency analysis would note that the most common letter in the ciphertext is “f” (which would suggest it is an “e”) and the only letter pairing is “ui” (which would suggest the “u” is “t” and the “i” is “h”). If we replace these portions of the ciphertext we reveal:

_eet _e _t the the_te_

With these two facts of frequency analysis alone, we have more than half the message deciphered. With a few logical leaps we could decipher the remaining letters. The simple substitution cipher was rendered useless.
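The whole attack is small enough to sketch in a few lines of Python: encrypt the plaintext with a shift of one, then count letter frequencies in the ciphertext.

```python
# The Caesar cipher and the frequency count used to break it.
from collections import Counter
import string

def caesar(text: str, shift: int) -> str:
    # Map each lowercase letter to the letter `shift` places down the alphabet
    table = {c: chr((ord(c) - 97 + shift) % 26 + 97) for c in string.ascii_lowercase}
    return "".join(table.get(c, c) for c in text)

ciphertext = caesar("meet me at the theater", 1)
print(ciphertext)             # nffu nf bu uif uifbufs

# Frequency analysis: the most common ciphertext letter likely stands for "e"
counts = Counter(c for c in ciphertext if c.isalpha())
print(counts.most_common(2))  # [('f', 6), ('u', 5)]
```

The counts point straight at the break described above: “f” is the most frequent letter, suggesting it stands for “e,” and the attack unfolds from there.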

The Classical Cryptography Counterattack


Over the centuries other ciphers were introduced like the Polyalphabetic Substitution Cipher where a repeating, offset key is used to encrypt the plaintext (see picture, courtesy of the Library of Congress). First perfected by Johannes Trithemius in 1518 (although other variants existed beforehand), the person encoding the message would switch alphabets for each letter of the message.

So, “meet me” would now become: “lcbp gy,” a ciphertext that simple frequency analysis could not break, since the letter and pairing statistics of the language no longer show through.

Although, in time, this type of cryptography was broken by the likes of Charles Babbage using modular arithmetic, the existence of his cryptanalytic techniques remained a military secret for some years.
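The core idea can be sketched with a progressive shift that grows by one for each letter encrypted. This is a simplification in the spirit of the Trithemius tableau, not the exact cipher that produced the example above.

```python
# A progressive-shift polyalphabetic cipher sketch: the shift increases by
# one for every letter, so repeated plaintext letters map to different
# ciphertext letters and simple frequency analysis breaks down.
def progressive_shift(text: str) -> str:
    out, shift = [], 0
    for c in text:
        if c.isalpha():
            out.append(chr((ord(c) - 97 + shift) % 26 + 97))
            shift += 1
        else:
            out.append(c)
    return "".join(out)

print(progressive_shift("meet me"))  # mfgw qj -- the two e's encrypt differently
```

Notice that the repeated “e” in the plaintext becomes “f,” “g,” and “j” in turn, which is exactly what defeats a single letter-frequency table.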

Final Thoughts

Fascinatingly, it was the use of math to break a cipher that led to our current arms race in data security. The use of math and algorithms to break cryptography means you need longer keys to encrypt the data and prevent a brute force attack; which, in turn, means you need faster computers to break the encryption; which, in turn, means you need longer keys; etc.

Unlike today, however, it took centuries to break a cipher back then. Now, it is just decades. From the Hebern Electric Super Code Cipher Machine in the 1920s, to the Enigma Machine of the 1930s and 40s, to the Data Encryption Standard (DES) of the 1970s and 80s, each seemed invincible until enhanced cryptanalytic techniques or greater computing power toppled it. Our current cryptography is reliable and secure, but quantum computers loom on the near horizon, and their non-binary logic could brute-force our current public key cryptography and render it insecure.

And so the arms race continues. Fortunately, NIST has already forecast this threat and called for replacements for our current standards, well before it becomes a crisis.


Topics: Encryption

Case Study: Citizens Security Life Insurance

Posted by Luke Probasco on Mar 13, 2017 10:54:24 AM

Compliance Made Easy - Protecting Private Information with Alliance AES/400 Encryption for IBM i and Alliance Key Manager for VMware


“Townsend Security was extremely easy to work with - from the sales process to deploying our proof of concept to post-sales support.”

- Adam Bell, Senior Director of IT

 
Citizens Security Life Insurance

Citizens Security Life Insurance Company is a life and health insurance carrier. The company offers group benefits including dental and vision coverage, and individual ancillary insurance products. The company was founded in 1965 and is headquartered in Louisville, Kentucky.

The Challenge: Protect ePHI & PII on the IBM i

In order to meet growing partner requirements and pass a data security audit for protecting electronic Protected Health Information (ePHI) and Personally Identifiable Information (PII), Citizens Security Life Insurance (CSLI) needed to deploy an encryption solution on the IBM i. The solution needed to be easy to implement with excellent performance.

While FIELDPROC on the IBM i makes it very easy to encrypt data without application changes, CSLI also understood that for encrypted data to truly be secure, they would need to store and manage encryption keys with an external key manager.

By using a VMware-based encryption key manager, the company could meet encryption and key management best practices for separating encryption keys from the data they protect.

The Solutions

Alliance AES/400 Encryption

“The performance we are seeing with Alliance AES/400 encryption is excellent,” said Adam Bell, Senior Director of IT, Citizens Security Life Insurance. “The solution was easy to integrate and completely met our expectations.”

Alliance AES/400 FIELDPROC encryption is NIST-compliant and optimized for performance. The solution is up to 100x faster than equivalent IBM APIs on the IBM i platform.

With Alliance AES/400, businesses can encrypt and decrypt fields that store data such as credit card numbers, social security numbers, account numbers, ePHI, and other PII instantly without application changes.

Alliance Key Manager for VMware

“Alliance Key Manager for VMware was very easy to implement, and the resources Townsend Security provided made deployment a smooth process,” continued Bell. By deploying Alliance Key Manager for VMware, CSLI was able to meet their business needs with a solution that could not only deploy quickly, but was also easy to set up and configure.

Alliance Key Manager for VMware leverages the same FIPS 140-2 compliant technology found in Townsend Security’s hardware security module (HSM) and in use by over 3,000 customers. The solution brings a proven and mature encryption key management solution to VMware environments, with a lower total cost of ownership. Additionally, the key manager has been validated to meet PCI DSS in VMware environments.

Integration with the IBM i Platform

An encryption strategy is only as good as the key management strategy, and it can be difficult to get key management right. For companies doing encryption, the most common cause of an audit failure is an improper implementation of key management. The seamless integration between Alliance AES/400 and the external Alliance Key Manager for VMware allowed CSLI to pass their data security audit with flying colors.

“The relationship we developed with Townsend Security enabled us to have a painless sales and support process, and in turn, enabled us to easily pass our data security audit,” finished Bell.


 

Topics: Alliance Key Manager, Alliance AES/400, Case Study

A Brief History of KMIP

Posted by Ken Mafli on Mar 6, 2017 1:31:39 PM

Key Management Interoperability Protocol (KMIP) is quickly becoming the industry standard for ensuring your product or software can communicate seamlessly with cryptographic key managers.  In fact, a study by the Ponemon Institute in 2013 reported on the state of encryption trends and found that “more than half of those surveyed said that the KMIP standard was important in cloud encryption compared with 42% last year.”  This is surprising since KMIP v1.0 was first ratified three short years earlier, on October 1st, 2010!

How Did it All Start?

The first meeting held to start discussing the new set of standards was on April 24th, 2009 in San Francisco, in conjunction with the RSA convention that year.  In attendance were representatives from RSA, HP, IBM, Thales, Brocade, and NetApp. Their initial scope was to “develop specifications for the interoperability of key management services with key management clients. The specifications will address anticipated customer requirements for key lifecycle management.”

But why was KMIP necessary to begin with?  The short answer: more and more organizations were deploying encryption in multiple environments.  But with encryption comes the need to properly manage the encryption keys. With encryption increasing across multiple enterprise applications it became harder to easily manage the keys from the different enterprise cryptographic applications.  Better standards were needed to create uniform interfaces for the centralized encryption key manager.

Companies soon saw the benefits of adopting KMIP.  Both large and small organizations need their key management to work every time and need it to scale as their organization grows.  And while other work was done to address this issue, like OASIS EKMI, IEEE P1619.3, and IETF Keyprov, KMIP was designed to have a broader scope than its predecessors and give more comprehensive standards for the industry.


How Was KMIP Initially Received?

In 2010, KMIP debuted at RSA.  HP, IBM, and others demonstrated that their client programs using the KMIP version 1.0 protocol could “communicate securely with key management servers. The clients and servers [demonstrated] essential use cases such as generating cryptographic keys, locating existing keys, and retrieving, registering, and deleting keys.”

In 2011 at the RSA Conference, major players like IBM, RSA, and HP demonstrated KMIP 1.0 compatibility with their client programs.  And again in 2012 and 2013, even more companies, like Thales, NetApp, and Townsend Security, demonstrated KMIP compliance.  With all these prominent players becoming KMIP compatible, it was a major signal to the industry that KMIP was rapidly becoming the standard for interoperable key manager communications.

How is KMIP Thought of Now?

Fast forward to 2014.  The Storage Networking Industry Association (SNIA) announced a KMIP conformance testing program for its members.  In their words, “By introducing the KMIP Test Program for the industry, we’re helping to encourage not only the adoption of enterprise-class key management, but a means for vendors to test for conformance and provide an assurance of interoperability and a layer of trust to their customers.”

At OASIS’ Interoperability Showcase at RSA 2016, 16 companies, including Townsend Security, demonstrated KMIP compatibility.  With the likes of VMware, Oracle, and Quantum among them, KMIP has become a dominant standard in key management interoperability.

Final Thoughts

Encryption is your last, best defense for data at rest.  But encryption is only as good as your key management.  If the key is exposed to hackers, the data is lost as well.  This is why key management standards like KMIP have already attracted considerable interest, and will continue to do so.  The ability to have a variety of vendor applications, platforms, and databases all able to communicate with a centralized key manager enhances the data security posture of the enterprise.  And this is what organizations should strive to achieve.

OASIS built the standard to address a broader scope of issues than older industry standards addressed. But KMIP is still actively being matured by OASIS (we are on version 1.3), and we should expect further enhancements and revisions to the standard, as well as broader industry adoption.  This should give us confidence that KMIP, as a well-accepted, road-tested standard, will continue to grow in industry popularity in the years to come.


Topics: Encryption Key Management

Hillary's email data breach taught us all the wrong lessons

Posted by Ken Mafli on Feb 28, 2017 9:11:00 AM

In an unprecedented October surprise, Wikileaks dumped thousands of emails onto the internet from the Democratic National Committee (DNC), most of them concerning Hillary Clinton’s presidential campaign.  Later, in defending this move, Wikileaks founder Julian Assange, in an interview with FOX News, “said a 14-year-old could have hacked into the emails of Hillary Clinton's campaign chairman,” reported the Daily Mail.  Assange later revealed in the interview that the password of John Podesta, Hillary’s campaign chairman, was 'password.'  Politifact has gone on to challenge that assertion, saying that “Podesta was using a Gmail account, and Google doesn’t allow users to make their passwords ‘password.’”

Whatever John Podesta’s password was, the incident has sparked a good deal of renewed interest in good password management.  And far be it from me to downplay this crucial bit of data security.  We still have a long way to go.  In fact, SplashData just completed a survey of over 5 million people’s passwords and found that over 10% of people still use the most easily guessable passwords, like:

  • password
  • 123456
  • qwerty
  • passw0rd
  • Password1
  • zaq1zaq1

If you use any of these, stop it. Now.

But if that is all we learn from the hack and subsequent data breach, we have missed the lesson.  As far back as June of 2016, it was widely reported, by the likes of Brian Krebs and Jeremy Kirk, that the DNC was vulnerable to attack due to systemic weaknesses in its cybersecurity.  In fact, Jeremy Kirk’s article noted that a press assistant emailed everyone a new password after a recent breach (a strong password at that: 'HHQTevgHQ@z&8b6').  The irony is that some of the email accounts had already been compromised.  The hackers needed only to open the email and use the new password.

Strong passwords alone are not enough to rebuff the efforts of hackers to gain entry, nor to render the data useless in case of a breach.  We need proven security measures to keep the data safe.

The data security measures below reflect specific things you can do to secure your data-at-rest in general. While there are more specific measures you can take for email servers, it is important to remember that organizations have sensitive data everywhere, not just in email.  That being said, since even seemingly benign emails at the DNC blew up into political controversy, the DNC should probably follow these along with more email-specific recommendations.  Follow along to find some of the best methods your organization should be using today to improve its data security posture.

Multi-Factor Authentication

As we have already mentioned, usernames and passwords, by themselves, are not enough to authenticate users.  Truly strong passwords are hard to manage and remember.  And once a system is compromised, login credentials can be scraped with keyloggers, malware, or other such attacks.

You need an external verification process.  You need multi-factor authentication (MFA). MFA has traditionally relied on verifying you by two of three factors:

  • Something that you know (i.e.: username, password, challenge questions/responses, one-time-use code, etc.)
  • Something that you have (i.e.: token, RFID cards or key fobs,  mobile phones, etc.)
  • Something that you are (biometrics)

Each of these methods have their advantages and drawbacks. For example:

  • Challenge Questions:
    • PRO: do not require any physical equipment on the user side
    • CON: do rely on the user’s memory, which can be fuzzy when it comes to precisely writing the correct response
    • CON: are vulnerable to deduction through inspection of social media accounts, etc.
    • CON: are “something you know” and so fall into the same category as login credentials, thereby not taking advantage of any other kind of authentication
  • Physical Equipment: (like RFID cards and tokens)
    • PRO: do not rely on a person’s memory
    • CON: can be stolen or lost
    • CON: require active device management from an administrator

One method of authentication that is gaining ground because of its ease of use is the time-based one-time password (TOTP, an open standard from the Initiative for Open Authentication).  It does not rely on a physical fob (which can be lost) or an SMS text (which can be intercepted).  Instead, a cryptographic algorithm generates a time-specific one-time-use code from the user’s secret key and the current time. Since the code is computed simultaneously (and separately) on the user’s device (typically a mobile phone) and on an internal server, with no need for an internet connection, it greatly reduces downtime due to connectivity issues and leaves hackers no one-time-use code to intercept.
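That time-based code generation can be sketched with Python’s standard library. This follows RFC 6238 (HMAC-SHA1 variant) as an illustration only, not any particular vendor’s implementation:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, at=None, digits=6, step=30):
    """Time-based one-time password (RFC 6238, HMAC-SHA1)."""
    key = base64.b32decode(secret_b32.upper())
    # Both client and server derive the same counter from the clock.
    counter = int(time.time() if at is None else at) // step
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                        # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890" at T=59
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", at=59, digits=8))  # 94287082
```

Because the shared secret never travels with the code, an attacker who intercepts one code gains nothing after the 30-second window closes.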

Encryption

Strong, Advanced Encryption Standard (AES) encryption, as put forward by NIST, should be used to encrypt all sensitive customer and company data.  NIST formally adopted the AES encryption algorithm in 2001.  Since then, it has been proven countless times to render data useless in the event of a breach.  In fact, it would take the fastest supercomputer 375 x 10^50 years to brute force AES encryption by running through all permutations of an AES 256-bit encryption key.  In comparison, the Sun will reach its Red Giant stage in 54 x 10^8 years, engulfing Mercury, Venus, and possibly Earth.  In other words, the Earth will be incinerated by the then rapidly expanding Sun before a hacker could effectively crack AES encryption through brute force.

The good news: AES encryption comes standard in most databases’ native encryption libraries.  Along with those free versions, there are a number of commercial products that rely on AES encryption.  So finding a way to secure your data with AES encryption will be fairly easy.  That being said, it is important to understand the development time and performance cost of each solution. Native encryption libraries are generally free but take a bit of development time.  Commercial solutions take less time to deploy, but many are file/folder-level encryption products and carry a performance hit because they take longer to encrypt/decrypt than column-level encryption products.
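As a concrete illustration, here is a minimal sketch of AES-256-GCM encryption using the third-party Python `cryptography` package (an assumption for illustration; your database’s native library or a commercial product would expose its own interface, and in production the key would come from a key manager, not local code):

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Key handling is deliberately simplified: a real deployment retrieves
# this key from a centralized key manager rather than generating it inline.
key = AESGCM.generate_key(bit_length=256)    # 256-bit AES key
aesgcm = AESGCM(key)

nonce = os.urandom(12)                       # must be unique per encryption
plaintext = b"4111-1111-1111-1111"           # sample sensitive value
ciphertext = aesgcm.encrypt(nonce, plaintext, None)

# Only a holder of the key can recover the plaintext.
assert aesgcm.decrypt(nonce, ciphertext, None) == plaintext
```

GCM mode also authenticates the ciphertext, so tampering is detected at decryption time rather than silently producing garbage.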

Centralized Encryption Key Management

As we mentioned, AES encryption is extremely difficult to attack by brute force.  Its strength lies in its ability to encrypt the data with a very long key (typically 256-bit). But its strength is also its weakness.  If your encryption key becomes known to a bad actor, your encrypted data becomes compromised.  That is why any encryption strategy worth its salt will include proper, centralized encryption key management.

When defending your encryption key with full lifecycle key management, consider these things:

  • The encryption keys should be logically or physically separated from the encrypted data.  This way, if the encrypted data is compromised, an attacker still will not be able to decipher it.
  • The encryption keys should only be generated with a cryptographically secure pseudo-random number generator (CSPRNG).
  • Restrict administrator and user access to the keys to the least amount of personnel possible.
  • Create clear separation of duties to prevent improper use of the keys by database administrators.
  • Manage the full lifecycle of the keys from key creation, activation, expiration, archive, and deletion.
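The CSPRNG requirement in the second bullet, for example, is easy to get right with Python’s `secrets` module, which draws from the operating system’s secure random source (a sketch, not a substitute for a key manager’s own generation):

```python
import secrets

# 32 bytes = a 256-bit AES key. The secrets module uses the OS CSPRNG;
# the general-purpose random module is NOT cryptographically secure.
key = secrets.token_bytes(32)
hex_key = key.hex()   # hex form, e.g. for transport to a key store

print(len(key))       # 32
```

A dedicated key manager performs this generation internally and then enforces the access, separation-of-duties, and lifecycle rules listed above.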

For a more comprehensive view of encryption key management, please view the Definitive Guide to Encryption Key Management.

Real Time Log Monitoring

Forrester, in 2013, promulgated the cybersecurity model of “Zero Trust.”  In it, they put forward the motto: “never trust, always verify.”  By this, they mean that all users should be authenticated, restricted to the least amount of data possible, and verified to be doing the right thing through real-time monitoring.  To that end, they advocate:

  • Real Time Event Collection in which you collect and log all events, in real time.
  • Event Correlation in which you analyze all events and narrow in on the ones that do not conform to expected patterns.
  • Resolution Management in which you investigate all suspect behavior and classify it as either benign or a possible threat warranting further investigation.

There are many Security Information Event Management (SIEM) tools available that accomplish this.  For more information, refer to Gartner’s SIEM Magic Quadrant to find the tools that fit your needs.

Final Thoughts

Defending data-at-rest is a never-ending cycle of building robust defenses and continuously improving them.  It's not a question of if, but when, a data breach will happen.  And if the DNC data breaches taught us anything, it is that breaches can be embarrassing and costly.  Since hackers are only growing more sophisticated in their techniques, it is incumbent upon us to respond with ever-increasing agility and sophistication of our own.

The old model of the high, guarded perimeter with complex passwords to gain entry is just not enough.  We need a higher degree of authentication, sensitive data rendered useless to thieves, and constant real-time monitoring of all traffic.  Your data depends on it.


Topics: Data Security

SQL Server Column Level Encryption

Posted by Patrick Townsend on Feb 28, 2017 9:11:00 AM

Microsoft customers attempting to meet security best practices and compliance regulations, and to protect their organization’s digital assets, turn to encryption of sensitive data in Microsoft SQL Server databases. The easiest way to encrypt data in SQL Server is through Transparent Data Encryption (TDE), which is a supported feature in SQL Server Enterprise Edition. For a variety of reasons, TDE may not be the optimal solution. Microsoft customers using SQL Server Standard, Web, and Express Editions do not have access to the TDE feature. And even when using SQL Server Enterprise Edition, TDE may not be the best choice for very large databases.

Let’s look at some approaches to column level encryption in SQL Server. The following discussion assumes that you want to meet encryption key management best practices by storing encryption keys away from the protected data and retaining full and exclusive control of your encryption keys.

Column Level Encryption (aka Cell Level Encryption) 
Starting with the release of SQL Server 2008, all Enterprise editions of the database have supported the Extensible Key Management (EKM) architecture. The EKM architecture allows for two encryption options: Transparent Data Encryption (TDE) and Column Level Encryption (CLE). Cell Level Encryption is the term Microsoft uses for column level encryption. SQL Server Enterprise edition customers automatically have access to column level encryption through the EKM architecture.

Encryption Key Management solution providers can support both TDE and Column Level Encryption through their EKM Provider software. However, not all key management providers support both - some only support TDE encryption. If your key management vendor supports Cell Level Encryption this provides a path to column level encryption in SQL Server Enterprise editions.

Application Layer Encryption
Another approach to column level encryption that works well for SQL Server Standard, Web, and Express editions is to implement encryption and decryption at the application layer. This means that your application performs encryption on a column’s content before inserting or updating the database, and performs decryption on a column’s content after reading a value from the database. Almost all modern application languages support the industry standard AES encryption algorithm. Implementing encryption in languages such as C#, Java, Perl, Python, and other programming languages is now efficient and relatively painless.
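As a sketch of the application-layer pattern (with Python and sqlite3 standing in for your application language and SQL Server, and the third-party `cryptography` package supplying AES-GCM; the table, column, and helper names are illustrative, and in production the key would be fetched from a key manager):

```python
import os
import sqlite3
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)   # in practice: from a key manager
aead = AESGCM(key)

def encrypt_value(plaintext):
    """Encrypt a column value before INSERT/UPDATE; prepend the nonce."""
    nonce = os.urandom(12)
    return nonce + aead.encrypt(nonce, plaintext.encode(), None)

def decrypt_value(blob):
    """Decrypt a column value after SELECT."""
    return aead.decrypt(blob[:12], blob[12:], None).decode()

db = sqlite3.connect(":memory:")            # stand-in for SQL Server
db.execute("CREATE TABLE customers (id INTEGER, ssn BLOB)")
db.execute("INSERT INTO customers VALUES (?, ?)",
           (1, encrypt_value("078-05-1120")))

(blob,) = db.execute("SELECT ssn FROM customers WHERE id = 1").fetchone()
print(decrypt_value(blob))                  # 078-05-1120
```

The database itself only ever sees ciphertext in the protected column, which is what keeps the approach edition-independent.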

The challenge that developers face when implementing encryption at the application layer is the proper protection of encryption keys. Security best practices and compliance regulations require a high level of protection of encryption keys. This is best accomplished through the use of an encryption key management system specifically designed to create, securely store, and manage strong encryption keys. For developers, the primary challenge in a SQL Server encryption project is integrating the application with the key manager. Many vendors of key management systems make this easier by providing Software Development Kits (SDKs) and sample code to help the developer accomplish this task easily.

SQL Views and Triggers with User Defined Functions (UDFs)
Another approach to column level encryption involves the use of SQL Views and Triggers. Leveraging the use of User Defined Functions (UDFs) the database administrator and application developer can implement column level encryption by creating SQL Views over existing tables, then implementing SQL Triggers to invoke user defined functions that retrieve encryption keys and perform encryption and decryption tasks. This approach has the advantage of minimizing the amount of application programming that is required, but does require analysis of the SQL database and the use of User Defined Functions. Database administrators and application developers may be able to leverage the SDKs provided by an encryption key management solution to make this process easier.

SQL Server Always Encrypted
One promising new technology recently implemented by Microsoft is SQL Server Always Encrypted. This feature is new with SQL Server 2016 and can work with any edition of SQL Server. It is a client-side architecture which means that column data is encrypted before it is sent to the database, and decrypted after it is retrieved from the database. While there are many constraints in how you can put and get data from SQL Server, it is a promising new technology that will help some customers protect data at the column level. You can expect to see support for Always Encrypted being announced by encryption key management vendors in the near future.

SQL Server in the Azure Cloud
As Microsoft customers and ISVs move to the Azure cloud they are taking their SQL Server applications with them. And it is very common that they take full implementations of SQL Server into their Azure virtual cloud instances. When SQL Server applications run in a virtual machine in Azure they support the same options for column level encryption as described above. This includes support for Cell Level Encryption through the EKM Provider architecture as well as application layer encryption. As in traditional IT infrastructure the challenge of encryption key management follows you into the Azure cloud. Azure customers should look to their encryption key management vendors to provide guidance on support for their key management solution and SDKs in Azure. Not all key management solutions run in Azure and Azure is not a supported platform for all vendor SDKs.

Azure SQL Database
In the Azure cloud, Microsoft offers the SQL Server database as a cloud service. That is, Microsoft hosts the SQL Server database in the cloud, and your applications can use this service rather than a full instance of SQL Server in your cloud instance. Unfortunately, Azure SQL Database only supports Transparent Data Encryption through the EKM Provider interface and does not yet support Cell Level Encryption. It also restricts encryption key management to the Azure Key Vault facility, requiring you to share key custody with Microsoft.

Column level encryption at the application layer is fully supported for Azure SQL Database. As in traditional IT infrastructure, your C#, Java, and other applications can encrypt and decrypt sensitive data above the database level. Again, check with your key management solution provider to ensure that application-level SDKs are supported in the Azure cloud.

AWS Cloud and SQL Server
The Amazon Web Services (AWS) implementation of cloud workloads parallels that of Microsoft Azure. You can deploy a full instance of SQL Server in an AWS EC2 instance and use the features of SQL Server as in traditional IT infrastructure. Amazon also offers a database service called Amazon Relational Database Service, or RDS. The RDS service offers multiple relational databases, including SQL Server. As with Azure, there is no support for key management solutions other than the Amazon Key Management Service (KMS), requiring a shared implementation of key custody.

As you can see, there are many ways to implement column level encryption in SQL Server while following good encryption key management practices. I hope this helps you on your journey to more secure data in SQL Server.

Patrick


Topics: Encryption, SQL Server, Cell Level Encryption

Three Core Concepts from "Zero Trust" to Implement Today

Posted by Ken Mafli on Feb 1, 2017 12:57:58 PM

 

“There are only two types of data that exist in your organization: data that someone wants to steal and everything else.”

Forrester Research

In 2013, Forrester released an outline of their proprietary “Zero Trust Model” of information security to the National Institute of Standards and Technology (NIST).  Their model seeks to change “the way that organizations think about cybersecurity” and execute on higher levels of data security, all while “allowing for free interactions internally.”

But when looking to better secure your organization’s data, it is good to start with what has changed.  In the report, Forrester concluded that the old network security model was that of “an M&M, with a hard crunchy outside and a soft chewy center”: the idea of a hardened perimeter around the traditional, trusted datacenter.  This old model is fraught with vulnerabilities, as it is not equipped to handle the new attack vectors introduced by IoT, workforce mobility, and datacenters moving to the cloud. It is increasingly outmoded and weak.

In its place must come a data security model that takes into account the current network landscape and its vulnerabilities.  Enter Zero Trust.  It builds upon the notion of network segmentation and offers key updates, all under the banner: "never trust, always verify."

Below are the three main concepts of Zero Trust.  Follow along as we break down the trusted/untrusted network model and rebuild a new trust model in its place.

 

Assume All Traffic is a Threat

The first rule of “never trust, always verify” is that all traffic within the network should be considered a potential threat until you have verified “that the traffic is authorized … and secured.” Let’s look at these two components:

  • Authorized Traffic: Each end user should present valid (and up-to-date) login credentials (i.e. username and password) and also authenticate with multi-factor authentication for each session logging into the network.  Usernames and passwords are not enough.  Only multi-factor authentication can reduce the risk of a hacker obtaining and misusing stolen login credentials.
  • Secured Traffic: All communication, coming from inside or outside of the network, should be encrypted.  It should always be assumed that someone is listening in.  Using SSH or TLS, and keeping abreast of their potential vulnerabilities, is the only way to reduce the risk of exposure.
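For TLS, for example, Python’s standard `ssl` module enforces certificate and hostname verification by default and can be pinned to modern protocol versions. A minimal sketch of building such a client context (the hostname in the comment is illustrative):

```python
import ssl

# A default client context already enforces "never trust, always verify":
# it requires a valid certificate chain and a matching hostname.
ctx = ssl.create_default_context()
assert ctx.verify_mode == ssl.CERT_REQUIRED
assert ctx.check_hostname

# Refuse legacy protocol versions with known weaknesses.
ctx.minimum_version = ssl.TLSVersion.TLSv1_2

# ctx.wrap_socket(sock, server_hostname="example.com") would then perform
# a fully verified TLS handshake for data in flight.
```

The point is that verification is the default here; weakening it (e.g. disabling hostname checks) must be an explicit, auditable act.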

 

Give Minimal Privileges

The only way to minimize the risk of employees, contractors, or external bad actors misusing data is to limit each user/role to the least amount of privilege possible.  With this, it is a foregone conclusion that all sensitive data is already encrypted and that minimal privileges are given as to who can decrypt it.  We implement a minimal-privileges policy so that “by default we help eliminate the human temptation for people to access restricted resources,” and so that a hacker who obtains one user’s login credentials does not thereby gain access to the entire network.

The role-based access control (RBAC) model, first formalized by David Ferraiolo and Richard Kuhn in 1992 and then updated under a more unified approach by Ravi Sandhu, David Ferraiolo, and Richard Kuhn in 2000, is the standard today.  Its ability to restrict system access to authorized roles/users makes it the ideal candidate for implementing this leg of Zero Trust.  While Zero Trust does not explicitly endorse RBAC, it is the best game in town as of today.  For a deeper dive, visit NIST’s PDF of the model.
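The core RBAC idea is a level of indirection: users map to roles, and only roles map to permissions. A toy sketch (all role, user, and permission names here are hypothetical):

```python
# Permissions attach to roles; users acquire permissions only through
# role membership, never directly.
ROLE_PERMISSIONS = {
    "dba":     {"db.read", "db.write", "db.admin"},
    "analyst": {"db.read"},
    "auditor": {"log.read"},
}

USER_ROLES = {"alice": {"analyst"}, "bob": {"dba"}, "carol": {"auditor"}}

def is_authorized(user, permission):
    """Grant access only if one of the user's roles carries the permission."""
    return any(permission in ROLE_PERMISSIONS[role]
               for role in USER_ROLES.get(user, ()))

assert is_authorized("alice", "db.read")
assert not is_authorized("alice", "db.write")   # least privilege in action
```

Revoking a role instantly revokes every permission it carried, which is what makes RBAC administratively tractable at scale.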

 

Verify People are Doing the Right Thing

Once we have authenticated each user and restricted them to the least amount of data possible to adequately do their job, the last thing to do is “verify that they are doing the right thing” through logging and inspection.

Here is a short (and certainly not exhaustive) list of techniques used to inspect all events happening in your network.  

  • Real Time Event Collection: the first step is to collect and log all events, in real time.
  • Event Correlation: Next, you analyze all of the events and narrow in on the events that need greater scrutiny.
  • Anomaly Detection: In a related move, you will want to identify the events that do not conform to the expected pattern and investigate further.
  • Resolution Management: All events that do not meet the expected pattern should be investigated and either classified as benign or deemed a possible threat and flagged for further investigation.
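The collect-correlate-resolve loop above can be sketched as a toy event-correlation pass. The event records and the three-failure threshold are illustrative, not taken from any particular SIEM:

```python
from collections import Counter

# Collected events (in a real pipeline these stream in continuously).
events = [
    {"src": "10.0.0.5", "action": "login", "ok": False},
    {"src": "10.0.0.5", "action": "login", "ok": False},
    {"src": "10.0.0.5", "action": "login", "ok": False},
    {"src": "10.0.0.9", "action": "login", "ok": True},
]

# Correlation: count failed logins per source address.
failures = Counter(e["src"] for e in events
                   if e["action"] == "login" and not e["ok"])

# Anomaly detection: flag sources exceeding a simple threshold rule.
suspects = [src for src, n in failures.items() if n >= 3]
print(suspects)   # ['10.0.0.5']
```

Each flagged source would then enter resolution management to be classified as benign or escalated.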

Note: There are many tools available that accomplish these.  Please refer to Gartner’s Security Information Event Management (SIEM) Magic Quadrant to find the tools that may interest you.

 

Final Thoughts

It's not a question of if, but when, a data breach will happen. Hackers grow more sophisticated in their attacks and threaten everything from intellectual property to financial information to your customers’ Personally Identifiable Information (PII).  The old model of the high, guarded perimeter around a trusted internal network no longer functions as a secure model.  Zero Trust offers a more comprehensive approach to today’s data security needs.  As you look to deploy this model, seek out tools that will help you.  Here is a short list of tools to consider:

  • Log Collection Tools: Some platforms, like the IBM i, have proprietary log formats that are difficult for SIEMs to read.  Make sure your SIEM can fully collect all needed logs.  If it cannot, find or build a tool that will properly capture and forward the logs to your SIEM.
  • SIEM Tools:  As mentioned earlier in the article, there are many good SIEM tools out there to help you collect, analyze, and monitor all events on your network.
  • Encryption (data-in-flight): Fortunately, there are many open source protocols for secure communications like SSH and TLS.
  • Encryption (data-at-rest): Advanced Encryption Standard (AES) encryption is ubiquitous in most platform’s native encryption libraries.  There are also a number of products that offer column level to folder/file level encryption.
  • Centralized Key Management: The encryption you deploy is only as good as the level of protection you give to the encryption keys.  Therefore, robust encryption key management is a must.
  • User Access Management: Managing privileges, credentials, and multi-factor authentication can be a daunting task.  The more you can automate this, the better.

In many cases, adopting this approach will not be about bolting a few products onto your existing data security framework but about completely renovating it.  Don’t let expediency force you to defend your data with only half measures.  Take a deep dive into Zero Trust’s approach and see where you may be vulnerable.

 


Topics: Data Security

 

 
