Townsend Security Data Privacy Blog

Patrick Townsend

Recent Posts

Hosting and Cloud Provider PCI Compliance Confusion – No Magic Bullet

Posted by Patrick Townsend on Jun 15, 2012 1:47:00 PM


Customers moving to a hosting provider or cloud provider are often confused about PCI DSS compliance regulations, and what their responsibilities are in that environment. Some companies feel that they can avoid compliance concerns by moving to a cloud service. Some feel that they are no longer under compliance regulations at all in that environment. I heard this comment just this week:

“I don’t need to worry about compliance because my hosting provider says they are PCI compliant.”

This is dangerously wrong.  Let’s sort this out.

First, hosting providers who say they are PCI compliant are usually talking about their own systems, not about yours. Their credit card payment application is PCI compliant, they run the required vulnerability assessments on their payment processing applications, they collect system logs, and so forth. All of these things are required for the hosting or cloud provider’s own systems to be PCI compliant. They aren’t talking about your applications and data.

This does not make you automatically PCI compliant when you use their platforms or applications. You still bear the responsibility for meeting PCI compliance in your applications. Regardless of the hosting or cloud implementation (Infrastructure-as-a-Service, Platform-as-a-Service, Software-as-a-Service, or a hybrid approach), you are always responsible for PCI compliance of your data.

What does the PCI Security Standards Council (PCI SSC) say about cloud environments?

The hosted entity (you) should be fully aware of any and all aspects of the cloud service, including specific system components and security controls, which are not covered by the provider and are therefore the entity’s responsibility to manage and assess.

And,

These challenges may make it impossible for some cloud-based services to operate in a PCI DSS compliant manner. Consequently, the burden for providing proof of PCI DSS compliance for a cloud-based service falls heavily on the cloud provider, and such proof should be accepted (by you) only based on rigorous evidence of adequate controls.

As with all hosted services in scope for PCI DSS, the hosted entity (you) should request sufficient assurance from their cloud provider that the scope of the provider’s PCI DSS review is sufficient, and that all controls relevant to the hosted entity’s environment have been assessed and determined to be PCI DSS compliant.

Simply put, you are responsible for understanding which parts of PCI compliance a cloud vendor can help you with, and which parts they can’t.

There is no cloud implementation that relieves you of the responsibility of protecting your data. See section 4.3 in this PCI guidance.

What does this mean from a practical point of view?

This means that you must meet all of the PCI DSS requirements for your cloud implementation. You may be able to take advantage of some PCI compliant services provided by the hosting or cloud vendor, but you must have the cloud vendor provide you with guidance, documentation, and certification.  You are not off the hook for responsibility in these areas.

Please note the chart on page 23 of the PCI cloud guidance. There is no hosting or cloud implementation that covers your data. You are always responsible for protecting your customers’ cardholder data. This means complying with PCI DSS Section 3 requirements to encrypt the data and protect the encryption keys.

There is no magic bullet. You have to do this work.

Living through a data breach is no fun, and I would not wish this experience on anyone. In hosted and cloud environments, ignorance is not bliss.

Stay safe. For more information, download our white paper "Meet the Challenges of PCI Compliance" and learn more about protecting sensitive data to meet PCI compliance requirements.


Patrick

Topics: Hosting, PCI DSS, cloud, PCI

How LinkedIn Could Have Avoided a Breach – And Things You Should Do

Posted by Patrick Townsend on Jun 11, 2012 10:29:00 AM

The loss of passwords by LinkedIn, eHarmony, and Last.FM should be a wakeup call for CIOs, security auditors, and IT security professionals everywhere. Let’s take a look at what probably happened, what you can do, and why you need to look beyond passwords on your own systems.

One-way hashes are used in many places in your applications for data protection, data verification, and integrity checking. Hashing is one of the fundamental building blocks of security. There are standards for hash algorithms, and conscientious security professionals will make sure a system uses industry standard hash methods such as those recommended by the National Institute of Standards and Technology (NIST). The Secure Hash Algorithm (SHA) is one of those standards and is readily available in a variety of open source and commercial applications (we have one). It is a common practice to save passwords as hashes rather than saving the password in the clear or encrypted. (You could, of course, protect passwords with encryption and good key management, but that is a discussion for another day.)

LinkedIn was using SHA-1 for their password hash. So what went wrong?

First, SHA-1 is no longer recommended for use in security systems. It has been replaced by a new family of stronger and more secure SHA methods with names like SHA-256, SHA-512, and so forth. These newer hash methods provide better protection from the type of attack that LinkedIn experienced. We use SHA-256 or stronger methods in all of our applications. So using an older, weaker algorithm that is no longer recommended was the first problem.

Second, it is considered a security best practice to use a Salt value with any data that you are protecting with a hash method. What is Salt? In the context of hashes, a Salt value is just some additional data that you add to the sensitive data you want to protect (a password in this case) to make it harder for an attacker to use a brute force attack to recover information. LinkedIn was not using a Salt value with its passwords, so the attackers easily recovered them – the simpler the password, the faster it fell. (More on Salt in a moment.)
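
To make the attack concrete, here is a minimal Python sketch – my illustration, not LinkedIn’s actual code – of why unsalted hashes fall so quickly. An attacker precomputes the hashes of common passwords once, then reverses any leaked digest with a simple lookup:

    import hashlib

    # The attacker precomputes digests for common passwords (real attacks
    # use wordlists with millions of entries, plus rainbow tables).
    common_passwords = ["123456", "password", "linkedin", "Password1"]
    lookup = {hashlib.sha1(p.encode()).hexdigest(): p for p in common_passwords}

    # A leaked, unsalted SHA-1 password hash...
    leaked = hashlib.sha1(b"linkedin").hexdigest()

    # ...is recovered instantly with a dictionary lookup.
    print(lookup.get(leaked))  # -> linkedin

A Salt value defeats this precomputation, because the attacker would need a separate lookup table for every possible Salt value.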

LinkedIn has apparently taken some steps to better protect their passwords. Is it enough? Let’s look at what should be done. This will help you look at your own Web and IT systems and understand where you have weaknesses.

These are all common recommendations from the security community:

Recommendation 1 – Use industry standard, strong hash methods

You should be using SHA-256 or SHA-512 for this type of data protection.  Do not use weaker versions of the SHA hash method, and do not use older methods like MD5. Do not be swayed by arguments that hash methods consume too much CPU power – just ask LinkedIn if that is their concern right now!

Recommendation 2 – Use NIST certified hash software libraries

If you use a hash method to protect sensitive data, you should use a NIST-certified software library. Why? Because it is terribly easy to make mistakes in the software implementation of a SHA hash method. NIST certification is not a guarantee, but in my mind it is a minimum requirement that you should expect. I find it curious that most people would not consider buying a used car without a CARFAX report, but completely ignore NIST certification when deploying hash software to protect sensitive data. A lot more is at stake, and you don’t even have to pay to verify certification!

Recommendation 3 – Always use Salt with your hash

Always use a Salt value when creating a hash of sensitive data. This is particularly important if the sensitive data is short like a password, social security number, or credit card. A Salt value can make it much more difficult to attack the hashed value and recover the original data.

Recommendation 4 – Use a strong Salt value

Never use a weak Salt value when creating a hash. For example, don’t use a birth date, name, or other information that might be easy to guess, or discover from other sources (attackers are great data aggregators!). I recommend using a random number generated by a cryptographically secure software library or HSM. It should be at least 4 bytes in length, and preferably 8 bytes or longer.
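
Here is a minimal sketch of Recommendations 3 and 4 together, using Python’s cryptographically secure secrets module to generate the Salt. The 16-byte Salt length and the SHA-256 choice are my illustration – they comfortably exceed the minimums above:

    import hashlib
    import secrets

    def hash_password(password: str):
        # Recommendation 4: a random Salt from a cryptographically
        # secure source, well beyond the 8-byte minimum suggested above.
        salt = secrets.token_bytes(16)
        # Recommendation 3: hash the Salt together with the password so
        # identical passwords produce different digests.
        digest = hashlib.sha256(salt + password.encode()).digest()
        return salt, digest

    salt, digest = hash_password("correct horse battery staple")

    # Verification recomputes the digest with the stored Salt.
    assert hashlib.sha256(salt + b"correct horse battery staple").digest() == digest

(Purpose-built password hashing schemes add key stretching on top of this, but the salting principle is the same.)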

Recommendation 5 – Protect the Salt value

Protect the Salt value as you would any sensitive cryptographic material. Never store the Salt in the clear on the same system with the sensitive data. Consider protecting the Salt value with a strong encryption key stored on a key management system that is itself NIST certified to the FIPS 140-2 standard.
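
As an illustration of that last point, here is a hedged sketch using the pyca/cryptography package to encrypt a Salt value with AES-GCM. The key handling is simplified for brevity – in practice the key would come from an external key manager, not be generated inside the application:

    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    # In production, retrieve this key from an external, FIPS 140-2
    # certified key manager -- never store it beside the data.
    key = AESGCM.generate_key(bit_length=256)

    salt = os.urandom(16)

    # Encrypt the Salt before storing it; AES-GCM also authenticates it.
    aesgcm = AESGCM(key)
    nonce = os.urandom(12)
    protected_salt = nonce + aesgcm.encrypt(nonce, salt, None)

    # Decrypt the Salt when it is needed to verify a password hash.
    recovered = aesgcm.decrypt(protected_salt[:12], protected_salt[12:], None)
    assert recovered == salt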

Are your systems at risk?

You are probably using hash methods in many places in your own applications. Here are some thoughts on where you can start looking to uncover potential problems with hash implementations:

  • Passwords (obviously)
  • Encryption key management
  • System logs
  • Tokenization solutions
  • VPNs
  • Web and web service applications
  • Messaging and IPC mechanisms

Hopefully this will give you some ideas on what questions to ask, what to look for, and where to look for possible problems on your own systems. You don’t want to be the next LinkedIn, eHarmony, or Last.FM. They are not having fun right now! 

Download our podcast "How LinkedIn Could Have Avoided a Breach" to hear even more about my take on this breach and ways you can keep this from happening to your organization.

Patrick


Topics: NIST, Data Privacy, Data Breach, password

Chris Evans – Security Blogger

Posted by Patrick Townsend on May 3, 2012 7:42:00 AM

I am on a new kick to share some security resources with you that I’ve found valuable over the years. I am not following any particular order or ranking people and resources by importance: I’m just going to do this as the mood strikes me.

Let me introduce you to Chris Evans and his blog.

Chris works for Google; he’s a software and security geek, and an independent sort. A lot of his work is technically deep, which is great for those of us who enjoy that sort of thing. But I also really like his world view.

Chris has a hacker’s mentality (in the good sense) and his values are lined up with making the world a better and safer place. He doesn’t avoid talking about his own mistakes, and believes that more information about security problems makes the world safer as it gives people the information they need to protect themselves, and it helps developers make their solutions better.  He also provides a lot of just plain good advice that anyone can use.

One example is a recent post on web browser security. It has some technical depth, but it also gives you a way to think about web browser security, and why some web browsers are better than others.

He also makes an interesting statement about browser security that I think has corollaries that apply to anyone writing software that needs to be safe. Chris says:

“The security of a given browser is dominated by how much effort it puts into other people’s problems.”

For those of us who write business applications and security software, I would put it this way:

"In addition to everything else you do to make your solution more secure, you have to include other people’s problems in the scope of your thinking, including the unexpected ways they might use your solution."

Enjoy.

Patrick

Topics: security, Data Privacy

Commercial PGP Command Line and Our Symantec Partnership

Posted by Patrick Townsend on Apr 25, 2012 5:30:00 PM

Really successful technology partnerships are hard to achieve and therefore rare. There are so many potential pitfalls in this type of partnership: conflicting goals, changing market conditions, and on and on. That’s why I am particularly pleased with our partnership with Symantec on the IBM Enterprise platform versions of PGP encryption. This technology partnership now spans more than a decade and several mergers and acquisitions. The level of trust and integration between Townsend Security and Symantec has only gotten better over time, and our IBM i (AS/400, iSeries) customers and IBM System z Mainframe customers have benefited.

One thing that has confused our customers is where they should go to get information and to license PGP Command Line for the IBM Enterprise platforms.

It can be hard to navigate the Symantec web site to locate the PGP Command Line products. And calling Symantec’s 800 number can be downright disorienting. Symantec provides a large number of security and system management products, and the PGP products can get lost in the crowd. Of course, you can always go to the old PGP web site, and it will re-direct you to the Symantec site. That helps, but not many people know about that little short-cut.

Here is a better idea – you can just go directly to the Townsend Security web site and you will be starting in the right place. Just select the PGP option under products.

IBM System z customers will be glad to know that we’ve partnered with Software Diversified Systems (SDS) to provide sales management and customer support that meets the Mainframe customer’s expectations of knowledge and experience with that platform. Just select the PGP Command Line product under their Products link. SDS and their worldwide partner network have really provided the Mainframe experience and depth of knowledge that customers expect. That’s also been a great partnership.

If you are an IBM Enterprise platform customer, save yourself some time and trouble. Go straight to Townsend Security or SDS for your PGP Command Line encryption solutions.

Patrick

Topics: PGP

Why Did I Fail a Security Audit on My IBM i (AS/400, iSeries)?

Posted by Patrick Townsend on Apr 13, 2012 10:14:00 AM


As security auditors get more educated on the IBM i platform, more customers are having the experience of failing a security audit around encryption key management. CIOs, IT Managers, and System Administrators want to know why this is happening to them now. They ask: why was our approach OK two years ago, and why is it not OK now?

I think I can answer that.

My job brings me into conversations with a lot of companies undergoing security audits under a broad range of regulations including PCI DSS, SOX, GLBA/FFIEC, FERPA, and many others. Security and compliance auditors look to industry standards and best practices for guidance on what their clients should be doing in the area of key management. In the US this inevitably brings them into contact with the National Institute of Standards and Technology (NIST), an agency within the US Department of Commerce. NIST provides a wide set of standards and best practices guidance in the area of encryption and key management.

As you become familiar with the broader set of data security regulations, you start to realize that the one common source they have is NIST. Even if not directly referenced in the regulations, the concepts are largely drawn from work done by NIST, and that is why there are a set of common attributes that auditors look for in a key management implementation.

So, auditors now look for key management implementations based on NIST best practices and standards. Key management best practices can be found in the NIST Special Publication 800-57 (three parts).

One of those best practices is Separation of Duties. This best practice says that the people who manage encryption keys should not be the same people who manage and have access to sensitive data such as credit card numbers, social security numbers, patient data, and so forth. It makes sense – you want as few people as possible with access to sensitive data, and you only want people who have a real need to access sensitive data to do so. The same is true with encryption keys that protect that sensitive data.

On the IBM i platform the security officer and anyone with All Object (*ALLOBJ) authority can access any database file at any time, and can access any locally stored encryption key at any time, regardless of the protections you try to put in place. This is not really a limitation or weakness of the IBM i platform; the same condition exists on other operating systems and platforms, too. No matter what you do, you can’t achieve a defensible level of Separation of Duties if you store encryption keys on the IBM i platform. You can try to mitigate this situation through system logging and similar controls, but you can’t eliminate it.

Auditors have learned this about the IBM i platform.

Separation of Duties is only one problem area with the local storage of keys. You also have to contend with Dual Control, Split Knowledge, key lifecycle management, and a broader set of key management best practices, most of which are difficult or impossible to achieve when encryption keys are stored locally.

And that’s the main reason IBM i customers are failing security audits around encryption key management. Download our Encryption Key Management Requirements for PCI white paper to learn more about how you can pass your next key management audit with flying colors.

Patrick


Topics: IBM i, Best Practices, Encryption Key Management

IBM i Security Audit Journal QAUDJRN – Are You Logging Everything?

Posted by Patrick Townsend on Apr 5, 2012 8:49:00 AM

We’ve had an upsurge in interest recently in our Alliance LogAgent solution for the IBM i (AS/400) platform. This solution sends security events from the IBM i in real time to log collection servers and SIEM solutions. As I’ve talked to IBM i customers, I am beginning to appreciate how difficult it is to get IBM i security information into a usable format so that events can be collected and monitored. The challenges are big:

  • Data format – IBM security events are in internal IBM format, not syslog format.
  • Multiple sources – Security events get collected in a variety of locations, almost always in an internal and proprietary IBM format.
  • Timeliness – Tools are lacking to collect security events in real-time, increasing the security exposure.
  • Communications – There are no native syslog UDP, TCP or SSL TCP communications facilities.
  • Data completeness – While it is possible to print security information using IBM tools, critical information is missing from reports.

Here is a really good example of this last point. I can use the Display Audit Journal Entry command (DSPAUDJRNE) to print a report of user ID and password failures. Here is a bit of what that report looks like:

[Screenshot: DSPAUDJRNE report output]

Can you imagine a SIEM solution or a poor network administrator trying to get useful information from this? Fields are not easily identified and extracted, and most SIEM query tools would have a really hard time extracting the meaning from this report. There are user ID and password failures here, but it is hard to parse them out.

And one of the most important pieces of information is missing. Can you see what it is?

Right, the IP address of the originator of the error. SIEM solutions are good at correlating events if they know where they are coming from. The IP address is critical for accomplishing this. This report could probably tell you when you are under attack, but not where it is coming from and certainly not in real-time.

Our Alliance LogAgent solution solves all of these problems. Events are extracted from all of the relevant sources, in real time, converted to standard syslog format, and communicated using your choice of UDP, TCP, or secure TLS communications to your log server. And, Yes, the IP address is in the event! Here is an example of a PW event as it is processed by Alliance LogAgent:

<118>Sep 20 15:47:11 S10125BA QAUDJRN:[PW@0 event="PW-Invalid user or password" event_type="Q-Signon failed profile disabled" user_profile="QTCP" device="*N" jrn_seq="002273092" timestamp="20120120154711021000" job_name="QTLPD00145" user_name="QTCP" job_number="630743" ip_addr="10.0.1.205" port="15427"]

This is caviar to your SIEM solution!  Real time alerts, event queries, and forensics become a snap when you get the right data into your SIEM solution. And real time system monitoring is one of the top recommendations by security professionals to keep your IBM i (AS/400) safe.
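
To see why this structured format matters, here is a small Python sketch – my own illustration, not part of the product – that pulls the fields out of an event like the one above with a single regular expression:

    import re

    event = ('<118>Sep 20 15:47:11 S10125BA QAUDJRN:[PW@0 '
             'event="PW-Invalid user or password" user_profile="QTCP" '
             'ip_addr="10.0.1.205" port="15427"]')

    # key="value" pairs parse with one regular expression -- no fixed
    # column offsets, no proprietary journal entry layouts.
    fields = dict(re.findall(r'(\w+)="([^"]*)"', event))

    print(fields["ip_addr"])  # -> 10.0.1.205

This is the kind of extraction SIEM tools do out of the box – and exactly what they can’t do with a spooled DSPAUDJRNE report.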

I’m proud of our system logging solution for the IBM i platform. Our customers have deployed the solution in under an hour from the time they download it from our web site.

Patrick


Topics: IBM i, Alliance LogAgent, logging

Driving a Taxi and Assessing Your Security Posture

Posted by Patrick Townsend on Mar 20, 2012 8:14:00 AM

Some years ago, during an “in between” period of my life, I drove a taxi in Houston, Texas. It was one of those enriching life experiences (this means it left scars), and a recent security newsletter from Bruce Schneier had me thinking about it again.

All of us drivers loved to take a customer to Gilley’s, a famous honky-tonk out in Pasadena.  Gilley’s was a huge place with live country music, line dancing, a mechanical bull, a real rodeo arena, and lots of Texans (most with quite a few long necks behind them). It ran well into the early morning hours and was always busy. It was a good distance from downtown Houston or the Houston airport and a ride to or from Gilley’s was going to be a good fare and usually a good tip.

Here’s the security angle – Gilley’s could be a bit dangerous starting from about 10 or 11 at night. There was a whole lot of drinking going on (I know you will be surprised by that), and some roughneck or cowboy or soldier was going to take an unfortunate interest in someone else’s girlfriend. Or maybe someone liked the wrong football team. Or whatever – there was no shortage of things that could cause a fight. A shooting or brawl was not that uncommon at Gilley’s. Every driver I knew carried some type of “protection” under the seat. Mine was a short tire iron; some carried serious heat. But you never wanted to be in a position of actually having to defend yourself – you were probably going to get some serious hurt on you.

Every night when you were driving a taxi you had to make a decision about taking a late night run to Gilley’s. A lot of drivers just wouldn’t go out there after 11pm. Some drew the line at 1am, or wouldn’t go out there when the place was closing. But if you’d had a bad day, that run might help you get profitable before sunrise. So, you were always making a security assessment – how much risk were you willing to bear?

Now here is what I was thinking about: When I think of Pasadena, Texas, my impression is still tinged with that original experience. For all I know, Pasadena may have changed into a yuppie paradise with 5-star restaurants and day spas. I’ve seen other neighborhoods transform (good or bad) over time. South of Market in San Francisco now has a Whole Foods, and China Basin is definitely not as dangerous. So things change over time. And a person’s personal security posture will change, too, if there is adequate information about the neighborhood.

Now let’s bring these chickens home to roost.

Things have changed in the world of IT. We used to feel safe behind our firewalls and DLP systems and anti-virus software. We carefully avoided upgrading our operating systems and software to avoid buggy releases. This made complete sense at the time.

But now the attacks come in from infected PDF files and infected web sites. A USB thumb drive can carry the danger. Systems that we thought were relatively safe, like Macs, mobile phones, or IBM Mainframes and AS/400s, are now as much at risk as anything outside the firewall. Criminals now routinely use weaknesses in unpatched systems to steal sensitive data. The threat landscape has changed. We need to change, too.

So, when you think about that OS or software upgrade, you should give more weight to staying current, and perhaps a little less weight to avoiding some bugs. I know the risks of doing software upgrades, and that you have to make a judgment call. But out of date software is honey to the bad guys. It’s time to re-think your security posture – the neighborhood is not the same.

Patrick

No, I’m not from Texas (Hat tip to Lyle Lovett)

Learn how we have made encryption and key management easier and more affordable than ever with Alliance Key Manager.


Topics: Encryption, Data Privacy

Securing Data in Microsoft SharePoint 2010

Posted by Patrick Townsend on Mar 6, 2012 1:05:00 PM

“I’m scared to death about what my users are putting into SharePoint!”

This is what a Database Administrator said to me recently when I attended a SQL Saturday event on the Microsoft campus in Redmond, Washington. And I’m hearing that a lot from IT directors and CIOs in the financial and medical sectors. Microsoft SharePoint is a wonderful collaboration tool, and it supports a number of versions and deployment options. These options run the gamut from free versions that ship with Windows Server, to versions tailored to the Microsoft Office suite of applications, to web portals. And an industry has grown up around installing, customizing, and hosting SharePoint.

But IT managers are sweating about the risk of data loss. And they have reason to be afraid.

We know that users are creative about circumventing written policies about data security. Ever look at an audit of user passwords? It’s a good bet that “Password1” is the most common password on your network. It has upper and lower case letters, and at least one number. And even good employees can accidentally violate security policy. We ask a lot of our colleagues, and security is often not at the top of their minds. So how likely is it that users are following your security policy requirement NOT to store sensitive data in SharePoint?

Somewhere close to zero.

And that’s why IT managers have good reason to be concerned. And that’s one reason why the uptake of SharePoint collaboration runs into resistance in the financial and medical segments.

Fortunately, Microsoft added some important security features to SharePoint 2010. One of those is support for Transparent Data Encryption (TDE) when you use SQL Server 2008 as the storage mechanism for SharePoint. The great thing about TDE is that it is easy to implement. You get good encryption performance, separated key management, and a high level of automation. Your IT staff can deliver it with a minimum of fuss and delay.

Will encryption with TDE solve all of the SharePoint security concerns? No. But it will protect you from data loss in the event of a lost backup or hard drive, and a server breach that just steals a copy of the database or log files won’t compromise your data. That’s one big step in the right direction.
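
If you want to know where you stand today, here is a minimal sketch – assuming the pyodbc driver and a reachable SQL Server instance; the connection details are placeholders – that reports which databases actually have TDE enabled:

    import pyodbc

    # Placeholder connection string -- substitute your server and credentials.
    conn = pyodbc.connect(
        "DRIVER={SQL Server};"
        "SERVER=your-sql-server;DATABASE=master;Trusted_Connection=yes;"
    )

    # sys.databases.is_encrypted reports whether TDE is active per database.
    for name, is_encrypted in conn.execute(
        "SELECT name, is_encrypted FROM sys.databases"
    ):
        print(name, "TDE enabled" if is_encrypted else "not encrypted")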

Take a look at our encryption key management solution built for Microsoft SQL Server. You can start to build the confidence you and your management team need to move forward with SharePoint collaboration, at a reasonable cost and in a reasonable time frame.

For even more information, view our webinar “Encryption Key Management with Microsoft SQL Server.”  See how easy it can be to implement strong key management and hear what hundreds of attendees learned at PASS last week.

Patrick


Topics: Alliance Key Manager, SQL, SharePoint

Skip V6R1 on IBM i and Upgrade to V7R1 - A Security Note

Posted by Patrick Townsend on Mar 1, 2012 9:17:00 AM

Everyone in the IBM i (AS/400, iSeries) world with responsibility for these large servers knows that IBM will soon announce the next release of the IBM i operating system, and that version V5R4 will go off of support a short time after that. While the date of the next release and the sunset date for V5R4 have not been announced, IBM has a fairly predictable pattern of new OS releases and support schedules. You can read Timothy Prickett Morgan’s thoughts in an article he wrote titled "The Carrot: i5/OS V5R4 Gets Execution Stay Until May."

So right now IBM shops running V5R4 are busy planning their upgrades. Many are planning to move just one version ahead to V6R1.

News Update! IBM just announced the support end date for V5R4. It’s September 30, 2013. You can read it here.

Upgrading your IBM i (AS/400) to V6R1 instead of V7R1 is a bad idea. Here’s why:

In V7R1 IBM provided a new automatic encryption facility in DB2/400 called FIELDPROC (That’s short for “Field Procedure”). This new facility gives IBM i customers their first shot at making encryption of sensitive data really easy to do. With the right software support you can implement column level encryption without any programming. The earlier trigger and SQL View options were very unsatisfactory, and the new FIELDPROC is strategically important for customers who need to protect sensitive data.

Another key feature in V7R1 is a new version of the Secure Shell sFTP application. This is rapidly becoming the file transfer method of choice. And IBM provides version 4.7 in V7R1. If you are doing a substantial amount of file transfers with sFTP, or you plan to do so, you will want all of the latest security patches in OpenSSH.

I know that an operating system upgrade is a lot of work, and that’s why IBM i shops are reluctant to do it very often. And when they do an upgrade, they stay there as long as possible. But FIELDPROC is only available in V7R1; it is not patched back to V6R1. And the latest version of OpenSSH is provided in the V7R1 distribution.

So I think you should skip V6R1 and go directly to V7R1. You won’t want to be locked in to a version of the OS without important security features. And the jump from V5R4 directly to V7R1 is a fully supported path by IBM. I hope I’ve convinced you to consider this important security option as you look at your OS upgrades this year. 

Download our podcast on "The Benefits of FIELDPROC Encryption" to learn more about FIELDPROC capabilities and the benefits of transparent encryption.  Additionally, we have a podcast titled "FIELDPROC Performance - Speed Matters" for those who are wondering how it will impact their systems.

Patrick

Are you going to COMMON in Anaheim? I will be doing four sessions on security on the IBM i. Be sure to stop by the booth and say Hello!


Topics: IBM i, V7R1, FIELDPROC

RSA Key Vulnerability and Random Number Generation

Posted by Patrick Townsend on Feb 16, 2012 8:18:00 AM

The last few days have seen a number of new reports about a security vulnerability in RSA public/private keys in use on the Internet. The vulnerability has to do with duplicate keys, and not with any weakness in the cryptographic algorithm itself. But it is disturbing information because public/private key encryption is crucial to the security of web sites and a number of other secure applications and services.

The news is based on the work of academic researchers and you can read the paper here.

While the researchers did not identify the cause of the duplicate keys, a number of us are guessing that the problem lies in random number generation. It is really easy to create bad random number generators, and hard to get it right and prove that you have it right. So any sloppiness in your engineering processes related to RNG can lead to this type of problem. There are not a lot of applications in use that create RSA public/private keys and X509 certificates, so the problem may be limited to a small number of these applications. But at this time there is no indication of which RSA key generation routines may be at fault.

How bad is this problem and should you worry about it?

At this point I don’t think there are any known attacks or breaches based on this vulnerability. It may have happened, but it hasn’t been reported yet. However, while the original researchers did not disclose their methods for identifying the duplicate RSA keys, it doesn’t seem hard to think of ways to do this. That being said, I am not sure how easy it would be to mount an effective attack even if you know about the duplicate keys. A lot is still unknown about this vulnerability.
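
One technique that has been widely discussed in connection with this research – I’m not claiming it is the method the researchers used – is to compute pairwise GCDs of the public moduli. If a weak RNG caused two keys to share a prime factor, the shared factor breaks both keys instantly. A toy Python sketch with deliberately tiny primes:

    from math import gcd

    # Toy moduli: the prime p is shared because a weak RNG produced it twice.
    p, q1, q2 = 61, 53, 59
    n1, n2 = p * q1, p * q2  # two "different" public keys

    shared = gcd(n1, n2)
    if shared > 1:
        # Both moduli factor immediately; both private keys are recoverable.
        print("shared prime", shared)
        print("n1 =", shared, "*", n1 // shared)
        print("n2 =", shared, "*", n2 // shared)

Real moduli are 1024 or 2048 bits, but the GCD computation is still fast, and the research community has shown how to batch it efficiently across millions of collected keys.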

I think Bruce Schneier has a good take on this issue. If you are concerned about this potential problem, you can read his comments here.

Are there bad random number generators in the wild?

You bet. Some years ago we found one on the IBM i (AS/400, iSeries) platform. In the early days of the IBM i platform you could use a system API named CEERAN0 to generate random numbers. We were shocked to learn how poor this RNG is. It would start generating collisions within about 30,000 cycles. That is really bad. It turns out that IBM also provides a cryptographically secure RNG, but the older one still exists and we’ve seen it used in vendor and customer applications.
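
A simple empirical smoke test – counting how many draws a generator makes before its first duplicate – is one way to catch a generator that bad. This Python sketch is my own illustration, not a substitute for the NIST test suites discussed below:

    import secrets

    def draws_until_collision(rand_u32):
        """Count draws from a 32-bit generator until a value repeats."""
        seen = set()
        count = 0
        while True:
            value = rand_u32()
            count += 1
            if value in seen:
                return count
            seen.add(value)

    # A good 32-bit generator should average on the order of 80,000 draws
    # before a birthday collision; consistently far fewer is a red flag.
    print(draws_until_collision(lambda: secrets.randbits(32)))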

So, the obvious question is: how can you know if a random number generator is good? One place to start is with the National Institute of Standards and Technology (NIST). NIST publishes guidelines on proper RNG methods. You can read the NIST recommendations for certifying random number generators here (warning: heavy lifting ahead).

NIST also has a certification program for random number generators, and vendors like us can submit our work to independent labs that perform NIST testing. All of our cryptographic solutions have been through this testing. It is also important to note that encryption key management systems that undergo FIPS 140-2 certification also go through full RNG testing. Our Alliance Key Manager is FIPS 140-2 certified, and the RNG routines were NIST certified as a part of that process. We’ve also certified our RNG implementations on a variety of platforms including the IBM i. The list of vendors who have completed certification is here.

While I don’t think NIST certifications are a perfect indicator of good cryptographic implementations, I certainly wouldn’t accept any encryption key management or cryptographic solution that had not been through an independent certification process.

Proper random number generation is crucial to secure cryptographic systems. You can’t leave RNG to chance (sorry about that).

When more information is available on the RSA vulnerability I’ll give you an update. 

For more information on the importance of encryption key management, download our white paper "Key Management in the Multi-Platform Environment" and learn how to overcome the challenges of deploying encryption key management in business applications.

Stay safe.

Patrick


Topics: Encryption, encryption key, RSA