Standards

IPv6

IPv6 is the networking protocol that is rapidly replacing IPv4 in the wake of widespread IPv4 address exhaustion.  IPv6 is defined in 1998’s RFC 2460.

IPv6 addresses are written in “colon notation” like “fe80:1343:4143:5642:6356:3452:5343:01a4” rather than the “dot notation” used by IPv4 addresses such as “11.22.33.44”.  IPv6 DNS entries are handled through “AAAA” records rather than the “A” records used under IPv4.
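
As a minimal sketch (the hostname below is a placeholder, not a real endpoint), Python’s standard socket library can resolve both record types and show which address family each result belongs to:

    import socket

    # Resolve both A (IPv4) and AAAA (IPv6) records for a placeholder host.
    for family, _type, _proto, _canon, sockaddr in socket.getaddrinfo(
            "ftp.example.com", 21, proto=socket.IPPROTO_TCP):
        label = "AAAA (IPv6)" if family == socket.AF_INET6 else "A (IPv4)"
        print(label, sockaddr[0])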

BEST PRACTICES: All FTP technology should now support an RFC 2428 implementation of IPv6 and the EPSV (and EPRT) commands under both IPv4 and IPv6.  Until IPv4 is entirely retired, the use of technology that supports both IPv4 and IPv6 implementations of FTP is preferred.  Avoid using FTP over connections that automatically switch from IPv6 to IPv4 or vice versa.

FTP

FTP (“File Transfer Protocol”) is the granddaddy of all modern TCP-based file transfer protocols.

Regardless of your personal feelings or experiences with this venerable and expansive protocol, you must be fluent in FTP to be effective in any modern file transfer situation because all other protocols are compared against it. You should also know what FTP/S is and how it works, as the combination of FTP and SSL/TLS will power many secure file transfers for years to come.

History

First Generation (1971-1980)

The original specification for FTP (RFC 114) was published in 1971 by Abhay Bhushan of MIT. This standard laid down many concepts and conventions that survive to this day, including:

  • ASCII vs. “binary” transfers
  • Username authentication (passwords were “elaborate” and “not suggested” at this stage)
  • “Retrieve”, “Store”, “Append”, “Delete” and “Rename” commands
  • Partial and resumable file transfer
  • A protocol “designed to be extendable”
  • Two separate channels: one for “control information”, the other for “data”
  • Unresolved character translation and blocking factor issues

Second Generation (1980-1997)

The second generation of FTP (RFC 765) was rolled out in 1980 by Jon Postel of ISI. This standard retired RFC 114 and introduced more concepts and conventions that survive to this day, including:

  • A formal architecture for separate client/server functions and two separate channels
  • A minimum implementation required for all servers
  • Site-to-site transfers (a.k.a. FXP)
  • Using “TYPE” command arguments like “A” or “E” to ask the server to perform its best character and blocking translation effort
  • Password authentication and the convention of client-side masking of typed passwords
  • The new “ACCT” command to indicate alternate user context during authentication
  • New “LIST”, “NLST” and “CWD” commands that allowed end users to experience a familiar directory-based interface
  • New “SITE” and “HELP” commands that allowed end users to ask the server what it was capable of doing and how to access that functionality
  • Revised “RETR”, “STOR”, “APPE”, “DELE”, “RNFR/RNTO” and “REST” commands to replace the even shorter commands from the previous version.
  • A new “NOOP” command to simply get a response from the remote server.
  • Using “I” to indicate binary data (“I” for “Image”)
  • Passive (a.k.a. “firewall friendly”) transfer mode
  • The 3-digits-followed-by-text command response convention (a minimal parsing sketch follows this list). The meaning of the first digit of the 3-digit command responses was:
    • 1xx: Preliminary positive reply; user should expect another Final reply.
    • 2xx: Final positive reply: whatever was just tried worked. The user may try another command.
    • 3xx: Intermediate positive reply: the last command the user entered in a multi-command chain worked and the user should enter the next expected command in the chain. (Very common during authentication, e.g., “331 User name okay, need password”)
    • 4xx: Transient negative reply: the last command the user entered did not work, but the user is encouraged to try it again or start the command sequence over. (Very common during data transfer blocked by closed ports, e.g., “425 Can’t open data connection”)
    • 5xx: Final negative reply: whatever was just tried did not work.
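
As a minimal sketch of this convention (the sample replies come from the examples above), the first digit alone determines the reply class:

    # Map the first digit of a 3-digit FTP reply to its reply class.
    REPLY_CLASSES = {
        "1": "preliminary positive (another reply will follow)",
        "2": "final positive (command succeeded)",
        "3": "intermediate positive (send the next command in the chain)",
        "4": "transient negative (try again or restart the sequence)",
        "5": "final negative (command failed)",
    }

    def classify_reply(line):
        """Classify a raw reply line such as '331 User name okay, need password'."""
        return REPLY_CLASSES.get(line[:1], "unknown")

    print(classify_reply("331 User name okay, need password"))
    print(classify_reply("425 Can't open data connection"))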

A new, second-generation specification (RFC 959) was rolled out in 1985 by Jon Postel and Joyce Reynolds of ISI. This standard retired RFC 765 but introduced just a handful of new concepts and conventions. The most significant addition was:

  • New “CDUP”, “RMD/MKD” and “PWD” commands to further enhance the end user’s remote directory experience

At this point the FTP protocol had been battle-tested on almost every IP-addressable operating system ever written and was ready for the Internet to expand beyond academia. Vendor-specific or non-RFC conventions filled out any niches not covered by RFC 959 but the core protocol remained unchanged for over ten years.

From 1991 through 1994, FTP arguably wore the crown as the “coolest protocol on campus” as the Gopher system let users find and share, over high-speed TCP links, files and content that had previously been isolated on scattered dial-up systems.

FTP under RFC 959 even survived the invention of port-filtering and proxy-based firewalls. In part, this was due to the widely-implemented PASV command sequence, which let clients originate data connections to the server so that both channels passed outbound through the client’s firewall. In part this was also because firewall and even router vendors took special care to work around the multi-port, session-aware characteristics of this most popular of protocols, even when implementing NAT.

Third Generation (1997-current)

The third and current generation of FTP was a reaction to two technologies that RFC 959 did not address: SSL/TLS and IPv6. However, RFC 959 was not replaced in this generation: it has instead been augmented by three key RFCs that address these modern technology challenges.

Third Generation (1997-current): FTPS

The first technology RFC 959 did not address was the use of SSL (later TLS) to secure TCP streams. When SSL was combined with FTP, it created the new FTPS protocol. Several years earlier Netscape had created “HTTPS” by marrying SSL to HTTP; the “trailing S” on “FTPS” preserved this convention.

The effort to marry SSL with FTP led to RFC 2228 (written in part at Bellcore), and a three-way schism in FTPS implementation broke out almost immediately. This schism persists in the industry today but has consolidated down to just two main implementations: “implicit” and “explicit” FTPS. (There were originally two competing implementations of “explicit” FTPS, but one won.)

The difference between the surviving “implicit” and “explicit” FTPS modes is this:

Implicit FTPS requires the FTP control channel to negotiate and complete an SSL/TLS session before any commands are passed. Thus, any FTP commands that pass over this kind of command channel are implicitly protected with SSL/TLS.

Many practitioners regard this as the more secure and reliable version of FTPS because no FTP commands are ever sent in the clear and intervening routers thus have no chance of misinterpreting the cleartext commands used in explicit mode.

Explicit FTPS requires the FTP control channel to connect using a cleartext TCP channel and then optionally encrypts the command channel using explicit FTP commands such as “AUTH”, “PROT” and “PBSZ”.

Explicit FTPS is the “official” version of FTPS because it is the only one specified in RFC 4217 by Paul Ford-Hutchinson of IBM. There is no RFC covering “implicit mode” and there will likely never be one. However, explicit mode’s cleartext commands are prone to munging by intermediate systems and to credential leakage through the occasional “USER” command sent over the initial cleartext channel.
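
As a minimal sketch of the explicit sequence (host and credentials are placeholders), Python’s standard ftplib upgrades a cleartext control channel with AUTH TLS and then protects the data channel with PBSZ/PROT:

    from ftplib import FTP_TLS

    # Explicit FTPS: the control channel starts in cleartext, then upgrades.
    ftps = FTP_TLS("ftp.example.com")   # plain TCP connection to port 21
    ftps.auth()                         # sends AUTH TLS before any credentials
    ftps.login("user", "password")      # credentials now travel encrypted
    ftps.prot_p()                       # sends PBSZ 0 and PROT P for the data channel
    ftps.retrlines("LIST")              # directory listing over the protected channel
    ftps.quit()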

Almost all commercial FTP vendors and many viable noncommercial or open source FTP packages have already implemented RFC 2228; most of these same vendors and packages have also implemented RFC 4217.

Third Generation (1997-current): IPv6

The second technology RFC 959 did not address was IPv6. There are many structures in the 1971-1985 RFCs that absolutely require IPv4 addresses. RFC 2428, by Mark Allman, Shawn Ostermann and Craig Metz, answered this challenge. This specification also took early and awful experiences with FTPS through firewalls and NAT’ed networks into account by introducing the clever little EPSV command.

However, the EPSV command did not, by itself, address the massive problems the file transfer community was having getting legacy clients and servers (i.e., those that didn’t implement RFC 2428) to work through a still-immature collection of firewalls in the early 2000s. To address this need, a community convention around server-defined ranges of ports grew up around 2003. With this convention in place, a server administrator could open port 21 and a “small range of data ports” in the firewall to limit the exposure that would otherwise come from opening all high ports to the FTP server.

Despite the fact that RFC 2428 is now (as of 2011) thirteen years old, support for IPv6 and the EPSV command in modern FTP is still uncommon. The slow adoption of IPv6 and FTP’s identity as a protocol “designed to be extendable” have allowed server administrators and FTP developers to avoid the remaining issue by clinging to IPv4 and implementing firewall and NAT workarounds for over a decade.  Nonetheless, support for IPv6 and EPSV remains a formidable defense against the inevitable future; it would be wise to treat file transfer vendors and open source projects who fail to implement or show no interest in RFC 2428 with suspicion.
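
One quick way to observe EPSV, assuming a reachable IPv6-capable FTP server (the hostname below is a placeholder): CPython’s ftplib automatically switches from PASV to EPSV whenever the control connection is IPv6.

    from ftplib import FTP

    # With an IPv6 control connection, ftplib issues EPSV (per RFC 2428)
    # instead of PASV for each data connection.
    ftp = FTP()
    ftp.connect("ftp6.example.com", 21)   # assumes an AAAA record / IPv6 route
    ftp.set_debuglevel(1)                 # echoes the EPSV exchange to stdout
    ftp.login()                           # anonymous login
    ftp.retrlines("NLST")
    ftp.quit()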

BEST PRACTICES: A minimally interoperable FTP implementation will completely conform to RFC 959 (FTP’s 1985 revision).  All credible file transfer vendors and projects have already implemented RFC 2228 (adding SSL/TLS), and should offer client certificate authentication support as part of their current implementation.   Any vendor or project serious about long term support for their FTP software has already implemented RFC 2428 (IPv6 and EPSV) or has it on their development schedule.  Forward-looking vendors and projects are already considering various FTP IETF drafts that standardize integrity checks and more directory commands. 

ANSI X.9

ANSI X.9 (or “ANSI/X.9”) is a group of standards commonly used with bulk data transmissions in item processing and Fed transfers.

An example of an ANSI X.9 standard is “ANSI X9.100-182-2011”, which covers how XML can be used to deliver bulk data and images.

Published ANSI standards may include some technical artifacts such as XML XSD documents, but typically rely on specific maps set up in specific transformation engines to completely integrate with backend systems.

BIC

A Bank Identifier Code (BIC) is an 8- or 11-character ISO code used in SWIFT transactions to identify a particular financial institution.  (BICs are also called “SWIFT addresses” or “SWIFT codes”.)  The format of the BIC is determined by ISO 9362, which now provides for unique identification codes for both financial and non-financial organizations.

As per ISO 9362:2009, the first 8 characters are used for the organization code (first 4 characters), organization country (next 2 characters) and “location” code (next 2 characters).  In 11-character BICs, the last three characters are optional and denote a particular branch.  (Leaving off the last three characters or denoting them with “XXX” indicates the primary office location.)  If the 8th character in the BIC sequence is a “0”, then the BIC is a test BIC.
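
A minimal parsing sketch of that layout (the field names are informal and the sample values are illustrative):

    import re

    # ISO 9362 layout: 4-char organization, 2-char country, 2-char location,
    # optional 3-char branch.
    BIC_RE = re.compile(r"^(?P<org>[A-Z]{4})(?P<country>[A-Z]{2})"
                        r"(?P<location>[A-Z0-9]{2})(?P<branch>[A-Z0-9]{3})?$")

    def parse_bic(bic):
        m = BIC_RE.match(bic.upper())
        if not m:
            raise ValueError("not an 8- or 11-character BIC")
        fields = m.groupdict()
        fields["is_test"] = bic[7] == "0"   # 8th character "0" marks a test BIC
        fields["is_primary"] = fields["branch"] in (None, "XXX")
        return fields

    print(parse_bic("DEUTDEFF"))      # 8 characters: primary office
    print(parse_bic("DEUTDEFF500"))   # 11 characters: a specific branch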

European Payments Council

The European Payments Council (“EPC”) coordinates European inter-banking technology and protocols, particularly in relation to payments.  In 2011 the EPC reported that its members processed 71.5 billion electronic payment transactions.

The EPC assumed all the former duties of the European Committee for Banking Standards (“ECBS”) in 2006.  It is now the major driver behind the Single Euro Payments Area (SEPA) initiative.

The official site of the EPC is http://www.europeanpaymentscouncil.eu/

ECBS

The European Committee for Banking Standards (“ECBS”) was a standards body that focused on European banking technology and infrastructure.  It was formed in 1992 and disbanded in 2006; it has since been replaced by the European Payments Council.

It is still common to see references to the ECBS in GSIT and PeSIT documentation.

PeSIT

PeSIT is an open file transfer protocol often associated with Axway.

Like Sterling Commerce’s proprietary NDM file transfer protocol, PeSIT has now been written into the standard communication specifications of several industry consortiums and government exchanges, thus ensuring a high degree of long-term dependence on Axway technology. PeSIT is required far more often in Europe than in the United States due to Axway’s French roots and home turf advantage.

Also like NDM, PeSIT is normally used in internal (or trusted WAN) transfers; other protocols are typically used for transfers across the Internet.

PeSIT stands for “Protocole d’Echanges pour un Systeme Interbancaire de Telecompensation” and was designed in the mid-1980s as a specialized replacement for FTP to support European interbank transactions.  One of the primary features of the protocol was “checkpoint restart”.

BEST PRACTICE: PeSIT, NDM and other protocols should be avoided in new deployments unless specifically mandated by industry rule or statute.   The interesting capabilities these protocols offer (e.g., checkpoint restart, integrity checks, etc.) are now also available in vendor-neutral protocols, and from vendors who allow wider and less expensive licensing of connectivity technology than Axway and Sterling Commerce (now IBM).

LDAPS

LDAPS refers to LDAP connections secured with SSL, typically over TCP port 636.

See “LDAP” for more information.

LDAP

LDAP (“Lightweight Directory Access Protocol”) is a directory protocol commonly used for external authentication; it can provide rich details about authenticated users, including email address, group membership and client certificates.

LDAP connections use TCP port 389 but can (and should) be secured with SSL.  When LDAP is secured in this manner, it typically uses TCP port 636 and is often referred to as “LDAPS”.
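
A minimal sketch of an LDAPS bind and user lookup, assuming the third-party ldap3 package; the server name, bind DN, password and search base are all placeholders:

    from ldap3 import Server, Connection

    # LDAPS: same directory operations as LDAP, but wrapped in SSL on port 636.
    server = Server("ldap.example.com", port=636, use_ssl=True)
    conn = Connection(server,
                      user="cn=svc-filetransfer,ou=service,dc=example,dc=com",
                      password="secret", auto_bind=True)
    conn.search("dc=example,dc=com", "(sAMAccountName=jsmith)",
                attributes=["mail", "memberOf"])
    for entry in conn.entries:
        print(entry.mail, entry.memberOf)   # email address and group membership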

BEST PRACTICE: Use the SSL secured version of LDAP whenever possible; the information these data streams contain should be treated like passwords in transit.   Store as much information about the user in LDAP as your file transfer technology will permit; this will improve your ability to retain centralized control of that data and allow you to easily switch to different file transfer technology if your needs change.

RADIUS

RADIUS is an authentication protocol that supports the use of a username, a password and sometimes one extra credential such as a hardware token PIN.

In file transfer applications, RADIUS sign on information can be collected by web-based, FTP-based or other file transfer prompts and then tried against trusted RADIUS servers.  When a file transfer application gets a positive acknowledgement from a RADIUS server, it will typically need to look up additional information about the authenticated user from its internal user database or other external authentication sources (frequently LDAP servers such as Active Directory).

AS2 Optional Profiles

AS2 optional profiles (also “optional AS2 profiles”) are features built into the AS2 protocol but not used by every Drummond certified vendor.  The Drummond Group does, however, validate seven different optional profiles (nine counting variants), and these are briefly covered below.

Certificate Exchange Messaging (CEM) – A standard way of exchanging certificates and information about how to use them.

Multiple Attachments (MA) – Simply the ability to transmit multiple files in a single AS2 transmission.

FileName preservation (FN) – Adds metadata to AS2 transmissions to preserve original filenames.  “FN-MA” covers AS2 transmissions without MDNs and “FN-MDN” covers transmissions with MDNs.

Reliability – Provides an application standard around retry, IDs and related matters to prevent double posts.

AS2 Restart – Allows larger files, including those over 500MB, to be sent over AS2.

Chunked Transfer Encoding (CTE) – Permits transmission of data sets that are still being generated when transmission starts.

BEST PRACTICES: The most useful AS2 optional profiles for file transfer are usually MA (multiple attachments) and FN (filename preservation).  Your AS2 software should support all of these.  If you transmit files larger than a few megabytes with AS2, then AS2 restart is also a must.  Other options may be useful on a case-by-case basis.

SHA-2

SHA-2 (“Secure Hash Algorithm #2”) is the most secure hash algorithm NIST currently supports in its FIPS validated cryptography implementations.  SHA-2 is really a collection of four hashes (SHA-224, SHA-256, SHA-384 and SHA-512), all of which are stronger than SHA-1.

Complete SHA-2 implementations in file transfer are still uncommon but becoming more common as time passes.
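
A minimal sketch of a post-transfer integrity check using Python’s standard hashlib, which implements all four SHA-2 hashes (the file path is a placeholder):

    import hashlib

    def sha2_digest(path, algorithm="sha256", chunk_size=64 * 1024):
        """Stream a file through a SHA-2 hash for a post-transfer integrity check."""
        h = hashlib.new(algorithm)   # any of sha224/sha256/sha384/sha512
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(chunk_size), b""):
                h.update(chunk)
        return h.hexdigest()

    print(sha2_digest("payload.bin"))   # compare against the sender's digest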

A “SHA-3” contest to find a future replacement for SHA-2 is currently underway and expected to conclude in 2012.

BEST PRACTICE: Modern file transfer deployments should use FIPS validated SHA-1 or SHA-2 implementations for integrity checks.

SHA-3

SHA-3 refers to the new hash algorithm NIST will choose to someday replace SHA-2.   A contest to select the new hash is scheduled to conclude in 2012.

SHA-384

SHA-384 is the 384-bit component of the “SHA-2” data integrity check standard (a.k.a. “hash”).  It is not a unique hash algorithm within the SHA-2 standard but is instead a truncated version of SHA-512.

See “SHA-2” for more information.

SHA-224

SHA-224 is the 224-bit component of the “SHA-2” data integrity check standard (a.k.a. “hash”).  It is not a unique hash algorithm within the SHA-2 standard but is instead a truncated version of SHA-256.

See “SHA-2” for more information.

SHA-512

SHA-512 is the 512-bit component of the “SHA-2” data integrity check standard (a.k.a. “hash”).  Like SHA-256, it is one of two unique algorithms that make up a SHA-2 hash, but SHA-512 is optimized for 64-bit calculations rather than 32-bit calculations.

See “SHA-2” for more information.

SHA-256

SHA-256 is the 256-bit component of the “SHA-2” data integrity check standard (a.k.a. “hash”).  Like SHA-512, it is one of two unique algorithms that make up a SHA-2 hash, but SHA-256 is optimized for 32-bit calculations rather than 64-bit calculations.

See “SHA-2” for more information.

MD4

MD4 (“Message Digest [algorithm] #4”) is best known as the data integrity check standard (a.k.a. “hash”) that inspired modern hashes such as MD5, SHA-1 and SHA-2.  MD4 codes are 128-bit numbers and are usually represented in hexadecimal format (e.g., “9508bd6aab48eedec9845415bedfd3ce”).

Use of MD4 in modern file transfer applications is quite rare, but MD4 can be found in rsync applications.  A variant of MD4 is also used to tag files in eDonkey/eMule P2P applications.

Although MD4 is considered a “cryptographic quality” integrity check (as specified in RFC 1320), it is not considered a secure hash today because it is possible for an attacker to create bad data that bears the same MD4 code as a set of good data.  For this reason, NIST does not allow the use of MD4 in key U.S. Federal Government applications.

BEST PRACTICE: Modern file transfer deployments should use FIPS validated SHA-1 or SHA-2 implementations for integrity checks instead of MD4.

MD5

MD5 (“Message Digest [algorithm] #5”) is the most common data integrity check standard (a.k.a. “hash”) used throughout the world today.  MD5 codes are 128-bit numbers and are usually represented in hexadecimal format (e.g., “9508bd6aab48eedec9845415bedfd3ce”).

MD5 was created in 1991 as a replacement for MD4, and its popularity exploded alongside that of the Internet itself.  MD5’s use carried over into file transfer software and remains common today (e.g., FTP’s unofficial “XMD5” command).

Although MD5 is considered a “cryptographic quality” integrity check (as specified in RFC 1321), it is not considered a secure hash today because it is possible for an attacker to create bad data that bears the same MD5 code as a set of good data.  For this reason, NIST has now banned the use of MD5 in key U.S. Federal Government applications.

BEST PRACTICE: Modern file transfer deployments should use FIPS validated SHA-1 or SHA-2 implementations for integrity checks instead of MD5.  However, FTP software that supports the XMD5 command can be useful to provide backwards compatibility during migration to stronger hashes.

CRC

CRC (“cyclic redundancy check”) is an early data integrity check standard (a.k.a. “hash”).  Most CRC codes are 32-bit numbers and are usually represented in hexadecimal format (e.g., “567890AB”).

CRC was commonly used with modem-based data transfer systems because it was cheap to calculate and fast on early computers.   Its use carried over into FTP software and some vendors still support CRC in their applications today (e.g., FTP’s unofficial “XCRC” command).

However, CRC is not considered a “cryptographic quality” integrity check because it is trivial for an attacker to create bad data that bears the same CRC code as a set of good data.

BEST PRACTICE: Modern file transfer deployments should use FIPS validated SHA-1 or SHA-2 implementations for integrity checks instead of CRC.  However, FTP software that supports the XCRC command can be used to supplement stronger integrity checks, particularly over unreliable connections.  (i.e., If your application calculates a bad CRC code for a particular transfer, you can avoid the effort of calculating a more expensive SHA-1 or SHA-2 hash.)
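
A minimal sketch of that screening idea (the file path and expected values are placeholders): run the cheap CRC-32 first and spend time on the stronger hash only when the CRC matches.

    import hashlib
    import zlib

    def _crc32(path):
        crc = 0
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(64 * 1024), b""):
                crc = zlib.crc32(chunk, crc)
        return crc

    def verify_transfer(path, expected_crc, expected_sha256):
        """Cheap CRC-32 screen first; stronger hash only if the CRC matches."""
        if _crc32(path) != expected_crc:
            return False   # bad transfer caught cheaply
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(64 * 1024), b""):
                h.update(chunk)
        return h.hexdigest() == expected_sha256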

B2B

B2B (“business to business”) is a market definition (a “space”) that covers technology and services that allow file and other transmissions to be performed between businesses (i.e., not between consumers).  B2B covers a lot of conceptual ground, from simple “file transfer” and “secure file transfer” to more sophisticated “managed file transfer” and up through traditional EDI.  In addition to overlapping with the venerable EDI space, many analysts now see that the B2B space overlaps with the EAI (“enterprise application integration”) space, the “Community Management” space (e.g., effortless provisioning) and even the hot new cloud services space.

If you are an IT administrator or someone else charged with getting a file transfer system to work, the presence of the “B2B” term in your project description should tell you to expect to use the FTP/S and SSH (SFTP) protocols, although you may also see AS2, proprietary protocols or some use of HTTP/S (especially if web services are available).

If you are a buyer of file transfer technology and services, the presence of the “B2B” term in your project requirements will limit the field of vendors who can win the business, attract the attention of traditional IT analyst firms (e.g., IDC, Forrester and Gartner) hoping to steer the business to their clients and may kick decision making further up your corporate hierarchy.  (These effects may be good or bad for your particular project – after having participated in hundreds of file transfer RFPs, the people of File Transfer Consulting may help your project thread the appropriate needle in your organization.)

DES

DES (“Data Encryption Standard”) is an open encryption standard that offers weak encryption at 56-bit strength.  DES used to be considered strong encryption, but the world’s fastest computers can now break DES in near real time.  A cryptographically valid improvement on DES is 3DES (“Triple DES”), a strong encryption standard that is still in use.

DES was one of the first open encryption standards designed for widespread use in computing environments, and it was submitted by IBM in 1975 based on previous work on an algorithm called Lucifer.  It was also the first encryption algorithm to be specified by a “FIPS publication”: FIPS 46 (and subsequent FIPS 46-1, 46-2 and 46-3 revisions).

See the Wikipedia entry for DES if you are interested in the technical mechanics behind DES.

TLS

TLS (“Transport Layer Security”) is the modern version of SSL and is used to secure TCP sockets.  TLS is specified in RFC 2246 (version 1.0), RFC 4346 (version 1.1) and RFC 5246 (version 1.2).  When people talk about connections “secured with SSL”, today TLS is the technology that’s really used instead of older editions of SSL.

See “SSL” for more discussion about how SSL/TLS is used in practice.

See the Wikipedia entry for TLS if you are interested in the technical mechanics behind TLS.

BEST PRACTICE: All modern file transfer clients and file transfer servers should support TLS 1.0 today.  Most clients and servers support TLS 1.1 today, but TLS 1.1 support will probably not be required unless major issues appear in TLS 1.0.  Some clients and servers support TLS 1.2 today, but its presence is a minor concern at this point.  All file transfer software should use FIPS validated cryptography to provide TLS services across file transfer protocols such as HTTPS, FTPS, AS1, AS2, AS3 or email protocols secured with TLS.

SSL

SSL (“Secure Sockets Layer”) was the first widely-deployed technology used to secure TCP sockets.  Its use in HTTPS (HTTP over SSL) allowed the modern age of “ecommerce” to take off on the world wide web and it has also been incorporated into common file transfer protocols such as FTPS (FTP over SSL) and AS2.

In modern usage, “protected by SSL”, “secured by SSL” or “encrypted by SSL” really means “protected by TLS or SSL version 3, whichever gets negotiated first.”  By 2010, clients and servers knew how to negotiate TLS sessions and would attempt to do so before trying SSL version 3 (an older protocol).  Version 2 of SSL is considered insecure at this point; some clients and servers will attempt to negotiate it if attempts to negotiate TLS or SSL version 3 fail, but it is rare that negotiation falls through to this level.

SSL depends on the use of X.509 certificates to authenticate the identity of a particular server.  The X.509 certificate deployed on an SSL/TLS server is popularly referred to as the “server certificate”.  If the name (CN) on the server certificate does not match the DNS name of a server, clients will refuse to complete the SSL/TLS connection unless the end user elects to ignore the certificate name mismatch.  (This is a common option on web browsers and FTP clients.)  If the server certificate is expired, clients will almost always refuse to complete the SSL/TLS connection.  (Ignoring certificate expiration is usually not an option available to end users.)  Finally, if the server certificate is “self signed” or is signed by a CA that the client does not trust, then most clients will refuse the connection.  (Browsers and FTP clients usually have the option to either ignore the CA chain or import and trust the CA chain to complete the negotiation.)

Optional X.509 client certificates may also be used to authenticate the identity of a particular user or device.  When used, this certificate is simply referred to as a “client certificate.”  File transfer servers either manually map individual client certificates to user codes or use LDAP or Active Directory mappings to accomplish the same goal.  File transfer servers rarely have the ability to ignore expired certificates, often have the ability to import a third-party CA certificate chain, and often have the ability to permit “self-signed” certificates.

Whether or not client certificates are in use, SSL/TLS provides point-to-point encryption and tamper protection during transmission.  As such, SSL/TLS provides sufficient protection of “data in motion”, though it provides no protection to “data at rest.”

SSL/TLS connections are set up through a formal “handshake” procedure.  First, a connecting client presents a list of supported encryption algorithms and hashes to a server.  The server picks a set of these (the combination of algorithms and hashes is called a “ciphersuite”) and sends the public piece of its X.509 server certificate to the client so the client can authenticate the identity of the server.  The client then either sends the public piece of its X.509 client certificate to the server to authenticate the identity of the client or generates temporary, session-specific key material instead.  In either case, key exchange now occurs.

When you hear numbers like “1024-bit”, “2048-bit” or even “4096-bit” in relation to SSL, these numbers are referring to the bit length of the keys in the X.509 certificates used to negotiate the SSL/TLS session.  These numbers are large because the exchange of keys in asymmetric public-key cryptography requires the use of very large keys, and not every 1024-bit (etc.) number is a viable key.

When you hear numbers like “80-bit”, “128-bit”, “168-bit” or even “256-bit” in relation to SSL, these numbers are referring to the bit length of the shared encryption key that is negotiated during the handshake procedure.  This number is dependent on the algorithm available on clients and servers.  For example, the AES algorithm comes in 128-bit, 192-bit and 256-bit editions, so these are all possible values for software that supports AES.

It is uncommon to refer to hash bit lengths in conversations about SSL; instead the hash is referred to by name, typically MD5 or SHA-1.
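
A minimal sketch that completes a handshake and reports what was negotiated, using Python’s standard ssl module (the host is a placeholder for any SSL/TLS-secured endpoint):

    import socket
    import ssl

    context = ssl.create_default_context()   # loads the platform's trusted CA list
    with socket.create_connection(("www.example.com", 443)) as sock:
        with context.wrap_socket(sock, server_hostname="www.example.com") as tls:
            name, _proto, secret_bits = tls.cipher()
            print("protocol:", tls.version())   # negotiated TLS/SSL version
            print("ciphersuite:", name)         # negotiated algorithm/hash set
            print("symmetric key strength:", secret_bits, "bits")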

The three most important implementations of TLS/SSL today are Microsoft’s Cryptographic Providers, Oracle’s Java Secure Socket Extension (“JSSE”) and OpenSSL.  All of these libraries have been separately FIPS validated and all may be incorporated into file transfer software at no charge to the developer.

BEST PRACTICE: The SSL/TLS implementation in your file transfer clients and servers should be 100% FIPS validated today. More specifically, the following file transfer protocols should be using FIPS validated SSL/TLS code today: HTTPS, FTPS, AS1, AS2, AS3 and email protocols secured by SSL/TLS.  Modern file transfer software supports the optional use of client certificates on HTTPS, FTPS, AS1, AS2 and AS3 and allows administrators to deny the use of SSL version 2.  If you plan to use a lot of client certificates to provide strong authentication capabilities, research the level of integration between your file transfer software, your enterprise authentication technology (e.g., Active Directory) and your PKI infrastructure.

Triple DES

3DES (also “Triple DES”) is an open encryption standard that offers strong encryption at 112-bit and 168-bit strengths.

3DES is a symmetric encryption algorithm often used today to secure data in motion in both SSH and SSL/TLS.  (After asymmetric key exchange is used to perform the handshake in an SSH or SSL/TLS session, data is actually transmitted using a symmetric algorithm such as 3DES.)

3DES is also often used today to secure data at rest in SMIME, PGP, AS2, strong Zip encryption and many vendor-specific implementations.  (After asymmetric key exchange is used to unlock a key on data at rest, data is actually read or written using a symmetric algorithm such as 3DES.)

NIST’s AES competition was held to find a faster and stronger replacement for 3DES.  However, 3DES has not yet been phased out and is expected to remain approved through 2030 for sensitive government information.  (Only the 168-bit version is currently allowed; permitted use of the 112-bit version ceased January 1, 2011.)  NIST validates specific implementations of 3DES under FIPS 140-2, and several hundred unique implementations have now been validated under that program.  The 3DES algorithm itself is specified in FIPS 46-3.

See the Wikipedia entry for 3DES if you are interested in the technical mechanics behind 3DES.

BEST PRACTICE: All modern file transfer clients and file transfer servers should support FIPS-validated AES, FIPS-validated 3DES or both.  (AES is faster, may have more longevity and offers higher bit rates; 3DES offers better backwards compatibility.)

AES

AES (“Advanced Encryption Standard”) is an open encryption standard that offers fast encryption at 128-bit, 192-bit and 256-bit strengths.

AES is a symmetric encryption algorithm often used today to secure data in motion in both SSH and SSL/TLS.  (After asymmetric key exchange is used to perform the handshake in an SSH or SSL/TLS session, data is actually transmitted using a symmetric algorithm such as AES.)

AES is also often used today to secure data at rest in SMIME, PGP, AS2, strong Zip encryption and many vendor-specific implementations.  (After asymmetric key exchange is used to unlock a key on data at rest, data is actually read or written using a symmetric algorithm such as AES.)

Rijndael is what AES was called before 2001.  In that year, NIST selected Rijndael as the new AES algorithm and Rijndael became known as AES.  NIST validates specific implementations of AES under FIPS 140-2, and several hundred unique implementations have now been validated under that program.

See the Wikipedia entry for AES if you are interested in the technical mechanics behind AES.

BEST PRACTICE: All modern file transfer clients and file transfer servers should support FIPS-validated AES, FIPS-validated 3DES or both.  (AES is faster, may have more longevity and offers higher bit rates; 3DES offers better backwards compatibility.)

X.509 Certificate

An X.509 certificate is a high-security credential used to encrypt, sign and authenticate transmissions, files and other data.  X.509 certificates secure SSL/TLS channels, authenticate SSL/TLS servers (and sometimes clients), encrypt/sign SMIME, AS1, AS2, AS3 and some “secure zip” payloads, and provide non-repudiation to the AS1, AS2 and AS3 protocols.

The relative strength of various certificates is often compared through their “bit length” (e.g., 1024, 2048 or 4096 bits); longer is stronger.  Certificates are involved in asymmetric cryptography, so their bit lengths are often 8-10x longer (or more) than the bit lengths of the symmetric keys they are used to negotiate (usually 128 or 256 bits).

Each certificate either is signed by another certificate or is “self signed”.  Certificates that are signed by other certificates are said to “chain up” to an ultimate “certificate authority” (“CA”) certificate.   CA certificates are self-signed and typically have a very large bit length.  (If a CA certificate was ever cracked, it would put thousands if not millions of child certificates at risk.)  CAs can be public commercial entities (such as Verisign or GoDaddy) or private organizations.

Most web browsers and many file transfer clients ship with a built-in collection of pre-trusted CA certificates.  Any certificate signed by a certificate in these CA chains will be accepted by the clients without protest, and the official CA list may be adjusted to suit a particular installation.

In addition, many web browsers also ship with a built-in and hard-coded collection of pre-trusted “Extended Validation” CA certificates.  Any web site presenting a certificate signed by an extended validation certificate will cause a green bar or other extra visual cue to appear in the browser’s address bar.  It is not typically possible to edit the list of extended validation certificates, but it is usually possible to remove the related CA cert from the list of trusted CAs to prevent these SSL/TLS connections from being negotiated.

X.509 certificates are stored by application software and operating systems in a variety of different places.  Microsoft’s Cryptographic Providers make use of a certificate store built into the Microsoft operating systems.  OpenSSL and Java Secure Sockets Extension (JSSE) often make use of certificate files.

Certificates may be stored on disk or may be burned into hardware devices.  Hardware devices often tie into participating operating systems (such as hardware tokens that integrate with Microsoft’s Certificate Store) or application software.

The most widespread use of X.509 hardware tokens may be in the U.S. Department of Defense (DoD) CAC card implementation.  This implementation uses X.509 certificates baked into hardware tokens to identify and authenticate individuals through a distributed Active Directory implementation.  CAC card certs are signed by a U.S. government certificate and bear a special attribute (Subject Alternative Name, a.k.a. “SAN”) that is used to map individual certificates to records in the DoD’s Active Directory server via “userPrincipalName” AD elements.

See the Wikipedia entry for X.509 if you are interested in the technical mechanics behind X.509.

FDIC

The FDIC (“Federal Deposit Insurance Corporation”) directly examines and supervises more than 4,900 United States banks for operational safety and soundness.  (As of January 2011, there were just under 10,000 banks in the United States; about half are chartered by the federal government.)

As part of its bank examinations, the FDIC often inspects the selection and implementation of file transfer technology (as part of its IT evaluation) and business processes that involve file transfer technology (as part of its overall risk assessment).

The FDIC’s official web site is www.fdic.gov.

See also: “FFIEC” (umbrella regulation, including state chartered banks), “the Fed” (U.S. central bank), “NCUA” (credit unions), “OCC” (national and foreign banks) and “OTS” (savings and loans).

Check 21

“Check 21” is the common name for the United States’ Check Clearing for the 21st Century Act, a federal law enacted in 2003 that enabled banks to phase out paper check handling by allowing electronic check images (especially TIFF-formatted files) to serve all the same legal roles as original paper checks.

Check 21’s effect on the file transfer industry has been to greatly increase the size of files transmitted through banks, as check images are frequently large and non-compressible.

FFIEC

The FFIEC (“Federal Financial Institutions Examination Council”) is a United States government regulatory body that ensures that principles, standards, and report forms are uniform across the most important financial regulatory agencies in the country.

The agencies involved include the Federal Reserve (“the Fed”), the Federal Deposit Insurance Corporation (FDIC), the National Credit Union Administration (NCUA), the Office of the Comptroller of the Currency (OCC), and the Office of Thrift Supervision (OTS).  Since 2006, a State Liaison Committee (SLC) has also been involved; the SLC’s membership includes the Conference of State Bank Supervisors, the American Council of State Savings Supervisors, and the National Association of State Credit Union Supervisors.

The FFIEC’s official web site is www.ffiec.gov.

PCI Security Standards Council

The PCI Security Standards Council is the vendor-independent consortium behind the PCI (“Payment Card Industry”) standards.

PCI DSS

The “PCI DSS” (PCI Data Security Standard) is a credit card industry security standard.

It is currently on version 2.0.

PCI Council

The “PCI Council” is a short name for “PCI Security Standards Council”, the vendor-independent consortium behind PCI (“Payment Card Industry”) standards.

PCI

PCI stands for “Payment Card Industry”.  In file transfer, “PCI compliance” frequently refers to a deployed system’s ability to adhere to the standard outlined in “PCI DSS” – a security regulation for the credit card industry.  “PCI certification” is achieved when a PCI compliant system is audited by a PCI Council-approved firm and that third-party firm agrees that it is in compliance.

NIST

NIST (“National Institute of Standards and Technology”) is a United States based standards body whose influence on the file transfer industry is felt most heavily through its FIPS 140-2 encryption and hashing standard.  It is also the keeper of many other security standards which must be met if file transfer technology is used in or to connect with the federal government.

FIPS 140-2

FIPS 140-2 is the most commonly referenced cryptography standard published by NIST.  “FIPS 140-2 cryptography” is a phrase used to indicate that NIST has tested a particular cryptography implementation and found that it meets FIPS 140-2 requirements.

Among other things, FIPS 140-2 specifies which encryption algorithms (AES and Triple DES), minimum bit lengths, hash algorithms (SHA-1 and SHA-2) and key negotiation standards are allowed in U.S. federal applications.  (Canada also uses this standard for its federal standard – that is why the official FIPS 140-2 validation symbol is a garish mashup of the American blue/white stars and the Canadian red/white maple leaf.)

Almost all modern cryptographic modules, whether built in hardware or software, have been FIPS 140-2 validated.  High quality software implementations are also an integrated component of most modern computing platforms, including operating systems from Microsoft, Java runtime environments from Oracle and the ubiquitous OpenSSL library.

Almost all file transfer applications that claim “FIPS validated cryptography” make use of one or more FIPS validated cryptographic libraries, but are not themselves entirely qualified under FIPS.  This is not, by itself, a security problem: FIPS 140 has a narrow focus and other validation programs are available to cover entire applications.

FIPS 140-2 will soon be replaced by FIPS 140-3, but implementations validated under FIPS 140-2 will likely be allowed for some period of time.

Drummond Certified

In the file transfer industry, “Drummond Certified” typically indicates that the AS2 implementation in a particular software package has been tested and approved by the Drummond Group.

Most file transfer protocols follow RFCs, and AS2 is no exception.  (AS2 is specified in RFC 4130, and the “MDNs” AS2 relies on are specified in RFC 3798).  However, the AS2 protocol and Drummond certification are closely tied together like no other file transfer protocol or certification because of Wal-Mart, the world’s largest retailer.  In 2002 Wal-Mart announced that it would be standardizing partner communications on the AS2 standard, and that companies that wished to connect to it must have their AS2 software validated by Drummond.  As Wal-Mart and its massive supply chain led, so followed the rest of the industry.

There are two levels of tests in Drummond certification.  Interoperability is the basic level against which all products must test and pass.  There is also a second level of “optional profile” tests which check optional but frequently desirable features such as AS2 Restart.  There are also minor implementation differences, such as certificate import/export compatibility, that, combined with optional AS2 profiles, allow for significant differences between Drummond certified implementations, though the core protocol and basic options are generally safe between tested products.

Not every product that claims its AS2 implementation is Drummond certified will itself be entirely Drummond certified.  Some software, such as Ipswitch’s MOVEit and MessageWay software and Globalscape’s EFT software, makes use of third-party Drummond certified libraries such as /n software’s IP*Works! EDI Engine.  In those cases, look for the name of the library your file transfer vendor uses instead of the file transfer vendor’s product on Drummond’s official list.

Drummond Group

The Drummond Group is a privately held test laboratory that is best known in the file transfer industry as the official certification behind the AS2 standard.  See “Drummond Certified” for more information about the AS2 certification.

The Drummond Group also offers AS1 and ebXML validation, quality assurance and other related services.

Validation

Software, systems and processes that are “validated” against a standard are typically better than those merely in “compliance” with a standard.  Validation means that a third-party agency such as NIST or the PCI Council has reviewed and tested the claim of fidelity to a standard and found it to be true.  Validating agencies will usually either publish a public list of all validated implementations or will be happy to confirm any stated claim.

A common example of validation in the file transfer industry is “FIPS validation”.  Under this standard, NIST tests various vendors’ cryptography implementations, issues a validation certificate for each that passes and lists all implementations that have passed in a public web page on the NIST site.

Validation is roughly equivalent to “certification”.

Compliance

“Compliance” to a standard is weaker than “validation” or “certification” against a standard.  Compliance indicates that a vendor recognizes a particular standard and has chosen to make design decisions that encompass most, if not all, of the standard.

When a vendor has implemented all of the required standard, that vendor will frequently upgrade their statement to “completely compliant” or “guaranteed compliant.”

A common example of compliance in the file transfer industry is a claim to “SFTP interoperability.”  Today, there is no universally-recognized third-party laboratory that will test, validate and stand behind these claims, but there are hundreds of vendors who claim that their products are compliant with various known SFTP standards, such as compatibility with OpenSSH and/or fidelity to RFC 4250.

Another common example of compliance in the file transfer industry is “FIPS compliance”.  This slippery phrase often indicates that the cryptography used in a particular solution implements some or all of the algorithms specified in FIPS 140-2 (e.g., AES) but that the underlying cryptography component has not been validated by a third party.

Certification (Training)

Individuals working in the file transfer industry frequently have earned one or more certifications through training and testing.  These certifications generally fall into one of three categories:

File Transfer Security Certification: (ISC)2 and SANS certified individuals have a good understanding of security from a vendor-neutral point of view.  (CISSP is an (ISC)2 certification; CCSK is a newer vendor-neutral certification from the Cloud Security Alliance.)

File Transfer Workflow Certification: Surprisingly little exists today for certified training in the area of workflow design, implementation or operations.  (Quick plug: this is one area where File Transfer Consulting can help.) PMP certified project managers and Six Sigma trained personnel may fall in this category today.

File Transfer Vendor Certification: Cisco Firewall Specialists and Sterling Integrator Mapping Specialists are examples of vendor-specific certification programs for individuals.

Certification (Software and Systems)

Certification of software and systems against a standard is better than having software and systems merely in “compliance” with a standard.  Certification means that a third-party agency such as NIST or the PCI Council has reviewed and tested the claim of fidelity to a standard and found it to be true.  Certifying agencies will usually either publish a public list of all certified implementations or will be happy to confirm any stated claim.

A common example of certification in the file transfer industry is “AS2 certification”.  Under this standard, the Drummond Group tests various vendors’ AS2 implementations, issues a certification for each that passes and lists all implementations that have passed in a public web page on the Drummond Group site.

Certification is roughly equivalent to “validation“.

AS2

AS2 (“Applicability Statement 2”) is an SMIME-based transfer protocol that uses HTTP/S to transmit files with end-to-end encryption and guaranteed delivery/non-repudiation (when MDNs are in use).

The two main reasons that AS2-based transmission systems are unpopular unless specifically requested by particular partners are complexity and cost.

In terms of complexity, AS2 configurations can involve up to four different X.509 certificates on each side of a transfer, plus hostnames, usernames, passwords, URLs, MDN delivery options, timeouts and other variables.  Configuration and testing of each new partner can be a full or multi-day affair, where simpler protocols such as FTP may require hours or minutes.  To hide as much of the configuration complexity as possible from administrators, some AS2 products (such as Cleo’s Lexicom) come with dozens or hundreds of preconfigured partner profiles, but knowledge of the underlying options is still often necessary to troubleshoot and deal with periodic updates of partner credentials or workflows.

In terms of cost, AS2 products that can connect to multiple trading partners are rarely available for less than ten thousand dollars, and the ones that ship with a well-developed list of partner profiles cost much more than that.  One factor that drives up this cost is that any marketable AS2 product will be “Drummond Certified”.  The cost of high-end AS2 products is also driven up by the fact that compiling and keeping up an extensive library of partner profiles is an expensive endeavor in its own right.  Implementing AS2 securely across a multiple-zone network also tends to drive up costs because intermediate AS2 gateways are often required to prevent direct Internet- or partner-based access to key internal systems.

Another factor working against voluntary AS2-based implementations is transfer speed.  The use of HTTP-based encoding and the requirement that MDNs are only compared after the complete file has been delivered often tips the operational balance in favor of other technology.

AS3 was developed, in part, to cope with AS2’s slow HTTP-based encoding, but other modifications (“optional profiles”) to the AS2 protocol have also been introduced to address other limitations.  For example, the optional “AS2 Restart” feature was introduced industry-wide to cope with large files whose delivery was heretofore dependent on long-lasting, unbroken HTTP streams.

Nonetheless, AS2 is considered to be the most successful and most widely adopted of any vendor-independent file transfer protocol that builds both transmission security and guaranteed delivery into the core protocol.

See also “AS1” for the email-based variant, “AS3” for the FTP-based variant and “AS2 optional profiles” for additional information about available AS2 features.
