FDIC

The FDIC (“Federal Deposit Insurance Corporation”) directly examines and supervises more than 4,900 United States banks for operational safety and soundness.  (As of January 2011, there were just under 10,000 banks in the United States; about half are chartered by the federal government.)

As part of its bank examinations, the FDIC often inspects the selection and implementation of file transfer technology (as part of its IT evaluation) and business processes that involve file transfer technology (as part of its overall risk assessment).

The FDIC’s official web site is www.fdic.gov.

See also: “FFIEC” (umbrella regulation, including state chartered banks), “the Fed” (U.S. central bank), “NCUA” (credit unions), “OCC” (national and foreign banks) and “OTS” (savings and loans).

Federal Reserve

The Federal Reserve (also “the Fed”) is the central bank of the United States.  It behaves like a regulatory agency in some areas, but its main role in the file transfer industry is as the primary clearinghouse for interbank transactions batched up in files.  Nearly every bank or bank service center has a file transfer connection to the Fed.

As of January 2011 there were exactly three approved ways to conduct file transfer with the Federal Reserve.  These were:

  • Perform interactive file transfer through a browser-based web application.  This has serious disadvantages for data centers that try to automate key business processes, as the Fed’s interactive interface has proven resistant to screen-scraping automation and no scriptable web services are supported.
  • Perform automated file transfers through IBM’s Sterling Commerce software.  This is the Fed’s preferred method for high volumes.  Since Sterling Commerce software is among the most expensive in the file transfer industry, many institutions maintain only a small number of Sterling Commerce Connect:Direct installations for their Fed connections and use other automation software to drive Fed transfers through Perl scripts, Java applications or other methods.
  • Perform automated file transfers through Axway’s Tumbleweed software.  This is the Fed’s preferred method for medium volumes.  As with Connect:Direct, Tumbleweed Fed connections are often minimized and scripted by third-party software to reduce the overall cost of a Fed-connected file transfer installation.

The Federal Reserve’s official web site is www.federalreserve.gov.

See also: “FFIEC” (umbrella regulation, including state chartered banks), “FDIC” (federally chartered banks), “NCUA” (credit unions), “OCC” (national and foreign banks) and “OTS” (savings and loans).

FFIEC

The FFIEC (“Federal Financial Institutions Examination Council”) is a United States government regulatory body that ensures that principles, standards, and report forms are uniform across the most important financial regulatory agencies in the country.

The agencies involved include the Federal Reserve (“the Fed”), the Federal Deposit Insurance Corporation (FDIC), the National Credit Union Administration (NCUA), the Office of the Comptroller of the Currency (OCC), and the Office of Thrift Supervision (OTS).  Since 2006, a State Liaison Committee (SLC) has also been involved; the SLC’s membership includes the Conference of State Bank Supervisors, the American Council of State Savings Supervisors, and the National Association of State Credit Union Supervisors.

The FFIEC’s official web site is www.ffiec.gov.

File Transfer Consulting

File Transfer Consulting (FTC) is a vendor-independent firm that concentrates on managed file transfer strategy, custom implementations, add-ons and thought leadership.

FTC’s official site is http://www.filetransferconsulting.com.

FIPS 140-2

FIPS 140-2 is the most commonly referenced cryptography standard published by NIST.  “FIPS 140-2 cryptography” is a phrase used to indicate that NIST has tested a particular cryptography implementation and found that it meets FIPS 140-2 requirements.

Among other things, FIPS 140-2 specifies which encryption algorithms (AES and Triple DES), minimum key lengths, hash algorithms (SHA-1 and SHA-2) and key negotiation standards are allowed in U.S. federal applications.  (Canada also uses FIPS 140-2 as a federal standard; that is why the official FIPS 140-2 validation symbol is a garish mashup of the American blue/white stars and the Canadian red/white maple leaf.)

Almost all modern cryptographic modules, whether built in hardware or software, have been FIPS 140-2 validated.  High quality software implementations are also an integrated component of most modern computing platforms, including operating systems from Microsoft, Java runtime environments from Oracle and the ubiquitous OpenSSL library.

Almost all file transfer applications that claim “FIPS validated cryptography” make use of one or more FIPS validated cryptographic libraries, but are not themselves entirely qualified under FIPS.  This is not, by itself, a security problem: FIPS 140 has a narrow focus and other validation programs are available to cover entire applications.

FIPS 140-2 will soon be replaced by FIPS 140-3, but implementations validated under FIPS 140-2 will likely be allowed for some period of time.

FIPS 140-3

FIPS 140-3 will soon replace FIPS 140-2 as the standard NIST uses to validate cryptographic libraries. The standard is still in draft status, but could be issued in 2011.

FIPS 140-2 has four levels of security: most cryptographic software uses “Level 1” and most cryptographic hardware uses “Level 3”.  FIPS 140-3 expands that to five levels, but the minimum (and thus most common) levels for software and hardware will likely remain Levels 1 and 3, respectively.

FIPS Compliant

“FIPS compliant” is a slippery phrase that often indicates that the cryptography used in a particular solution implements some or all of the algorithms specified in FIPS 140-2 (e.g., AES) but that the underlying cryptography component has not been validated by NIST laboratories.  “FIPS validated” is a much stronger statement.

FIPS Validated

“FIPS validated” is a label that indicates that the cryptography used in a particular solution implements some or all of the algorithms specified in FIPS 140-2 (e.g., AES) and that the underlying cryptography component has been validated by NIST laboratories.  See “FIPS compliant” for a weaker statement.

Firefox

Mozilla’s Firefox is a free, open source web browser that offers a similar browsing experience across a wide variety of desktop operating systems, including Windows, Macintosh and some Linux variants.

As of December 2010, Firefox held about 30% of the desktop browser market, making it the #2 browser behind Internet Explorer.  Firefox uses an aggressive auto-update feature that ensures that most users are running the most recent major version of the browser within three months of release.

Firefox’s native FTP capabilities allow it to connect to FTP servers in passive mode and download specific files, either anonymously or as an authenticated user with a password.  Many advanced Firefox users install the free FireFTP extension to add the ability to upload files, connect using FTPS or SFTP, and browse local and remote directory structures.

Firefox’s official web site is www.mozilla.com/en-US/firefox/.

BEST PRACTICE: All credible file transfer applications that offer browser support for administrative, reporting or end user interfaces should support the Firefox web browser, and file transfer vendors should commit to supporting any new Firefox browser version within a few months of its release.   In addition, file transfer vendors that offer FTP, FTPS and/or SFTP support in their server products should support the FireFTP extension to Firefox.

FireFTP

Mime Čuvalo’s FireFTP is a free, full-featured, interactive FTP client that plugs into Mozilla Firefox as an add-on.

FireFTP supports FTP, FTPS and SFTP, and can remember a large number of connection profiles.  It also offers integrity checks using MD5 and SHA1, on-the-fly file compression (i.e., “MODE Z”), support for most Firefox platforms, multiple languages and IPv6.

FireFTP’s official site is fireftp.mozdev.org.

BEST PRACTICE: File transfer vendors that offer FTP, FTPS and/or SFTP support in their server products should support the FireFTP extension to Firefox.

Firewall Friendly

A file transfer protocol that is “firewall friendly” typically has most or all of the following attributes:

1) Uses a single port
2) Connects in to a server from the Internet
3) Uses TCP (so session-aware firewalls can inspect it)
4) Can be terminated or proxied by widely available proxy servers

For example:

Active-mode FTP is NOT firewall friendly because it violates #1 and #2.
Most WAN acceleration protocols are NOT firewall friendly because they violate #3 (most use UDP) and #4.
SSH’s SFTP is QUITE firewall friendly because it conforms to #1, #2 and #3.
HTTP/S is probably the MOST firewall friendly protocol because it conforms to #1, #2, #3 and #4.

As these examples suggest, the attribute file transfer protocols most often give up to enjoy firewall friendliness is transfer speed.
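
To make the checklist concrete, here is a minimal Python sketch (purely illustrative; the names and protocol profiles below simply restate the examples above as data, not authoritative measurements) that scores each protocol against the four attributes:

    # Illustrative sketch: score protocols against the four "firewall
    # friendly" attributes described above.  The profiles restate the
    # examples above; they are simplifications, not protocol research.
    ATTRIBUTES = ("single_port", "connects_inbound", "uses_tcp", "proxyable")

    PROFILES = {
        "Active-mode FTP":  dict(single_port=False, connects_inbound=False,
                                 uses_tcp=True, proxyable=False),
        "WAN acceleration": dict(single_port=True, connects_inbound=True,
                                 uses_tcp=False, proxyable=False),
        "SFTP (SSH)":       dict(single_port=True, connects_inbound=True,
                                 uses_tcp=True, proxyable=False),
        "HTTP/S":           dict(single_port=True, connects_inbound=True,
                                 uses_tcp=True, proxyable=True),
    }

    def friendliness(profile):
        # Count how many of the four attributes a protocol satisfies.
        return sum(profile[attr] for attr in ATTRIBUTES)

    for name, profile in sorted(PROFILES.items(),
                                key=lambda item: -friendliness(item[1])):
        print("%-18s %d/4" % (name, friendliness(profile)))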

When proprietary file transfer “gateways” are deployed in a DMZ network segment for use with specific internal file transfer servers, the “firewall friendliness” of the proprietary protocol used to link gateway and internal server consists of the following attributes instead:

1) Internal server MUST connect to DMZ-resident server (connections directly from the DMZ segment to the internal segment are NOT firewall friendly)
2) SHOULD use a single port (less important than #1)
3) SHOULD use TCP (less important than #2)

FTC

FTC is an abbreviation for “File Transfer Consulting”, a vendor-independent firm that concentrates on managed file transfer strategy, custom implementations and thought leadership.

FTP

FTP (“File Transfer Protocol”) is the granddaddy of all modern TCP-based file transfer protocols.

Regardless of your personal feelings about or experiences with this venerable and expansive protocol, you must be fluent in FTP to be effective in any modern file transfer situation because all other protocols are compared against it. You should also know what FTPS is and how it works, as the combination of FTP and SSL/TLS will power many secure file transfers for years to come.

History

First Generation (1971-1980)

The original specification for FTP (RFC 114) was published in 1971 by Abhay Bhushan of MIT. This standard set down many concepts and conventions that survive to this day, including:

  • ASCII vs. “binary” transfers
  • Username authentication (passwords were “elaborate” and “not suggested” at this stage)
  • “Retrieve”, “Store”, “Append”, “Delete” and “Rename” commands
  • Partial and resumable file transfer
  • A protocol “designed to be extendable”
  • Two separate channels: one for “control information”, the other for “data”
  • Unresolved character translation and blocking factor issues

Second Generation (1980-1997)

The second generation of FTP (RFC 765) was rolled out in 1980 by Jon Postel of ISI. This standard retired RFC 114 and introduced more concepts and conventions that survive to this day, including:

  • A formal architecture for separate client/server functions and two separate channels
  • A minimum implementation required for all servers
  • Site-to-site transfers (a.k.a. FXP)
  • Using “TYPE” command arguments like “A” or “E” to ask the server to perform its best character and blocking translation effort
  • Password authentication and the convention of client-side masking of typed passwords
  • The new “ACCT” command to indicate alternate user context during authentication
  • New “LIST”, “NLST” and “CWD” commands that allowed end users to experience a familiar directory-based interface
  • New “SITE” and “HELP” commands that allowed end users to ask the server what it was capable of doing and how to access that functionality
  • Revised “RETR”, “STOR”, “APPE”, “DELE”, “RNFR/RNTO” and “REST” commands to replace the even shorter commands from the previous version.
  • A new “NOOP” command to simply get a response from the remote server.
  • Using “I” to indicate binary data (“I” for “Image”)
  • Passive (a.k.a. “firewall friendly”) transfer mode
  • The 3-digits-followed-by-text command response convention (see the sketch after this list). The meaning of the first digit of the 3-digit command responses was:
    • 1xx: Preliminary positive reply; user should expect another, final reply.
    • 2xx: Final positive reply: whatever was just tried worked. The user may try another command.
    • 3xx: Intermediate positive reply: the last command the user entered in a multi-command chain worked and the user should enter the next expected command in the chain. (Very common during authentication, e.g., “331 User name okay, need password”)
    • 4xx: Transient negative reply: the last command the user entered in a multi-command chain did not work but the user is encouraged to try it again or start the command sequence over again. (Very common during data transfer blocked by closed ports, e.g., “425 Can’t open data connection”)
    • 5xx: Final negative reply: whatever was just tried did not work.
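
The first-digit convention is easy to see in code.  The following minimal, self-contained Python sketch (the sample replies are the ones quoted above) classifies a raw control-channel reply by its first digit:

    # Minimal sketch: interpret the first digit of a 3-digit FTP reply,
    # following the RFC 765/959 convention described above.
    REPLY_CLASSES = {
        "1": "preliminary positive; expect another, final reply",
        "2": "final positive; the command worked",
        "3": "intermediate positive; send the next command in the chain",
        "4": "transient negative; the command failed but may be retried",
        "5": "final negative; the command failed",
    }

    def classify_reply(line):
        # Classify a raw FTP reply such as "331 User name okay".
        code, _, text = line.partition(" ")
        if len(code) != 3 or not code.isdigit() or code[0] not in REPLY_CLASSES:
            raise ValueError("not a valid FTP reply: %r" % line)
        return "%s (%s): %s" % (code, REPLY_CLASSES[code[0]], text)

    print(classify_reply("331 User name okay, need password."))
    print(classify_reply("425 Can't open data connection."))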

A new, second-generation specification (RFC 959) was rolled out in 1985 by Jon Postel and Joyce Reynolds of ISI. This standard retired RFC 765 but introduced just a handful of new concepts and conventions. The most significant addition was:

  • New “CDUP”, “RMD/MKD” and “PWD” commands to further enhance the end user’s remote directory experience

At this point the FTP protocol had been battle-tested on almost every IP-addressable operating system ever written and was ready for the Internet to expand beyond academia. Vendor-specific or non-RFC conventions filled out any niches not covered by RFC 959 but the core protocol remained unchanged for over ten years.

From 1991 through 1994, FTP arguably wore the crown as the “coolest protocol on campus” as the Gopher system permitted users to find and share, over high-speed TCP links, files and content that had previously been isolated on scattered dial-up systems.

FTP under RFC 959 even survived the invention of port-filtering and proxy-based firewalls. In part, this was due to the widely implemented PASV command sequence, which allowed clients to originate data connections in to the server. In part this was also because firewall and even router vendors took special care to work around the multi-port, session-aware characteristics of this most popular of protocols, even when implementing NAT.

Third Generation (1997-current)

The third and current generation of FTP was a reaction to two technologies that RFC 959 did not address: SSL/TLS and IPv6. However, RFC 959 was not replaced in this generation: it has instead been augmented by three key RFCs that address these modern technology challenges.

Third Generation (1997-current): FTPS

The first technology RFC 959 did not address was the use of SSL (later TLS) to secure TCP streams. When SSL was combined with FTP, it created the new FTPS protocol. Several years earlier Netscape had created “HTTPS” by marrying SSL to HTTP; the “trailing S” on “FTPS” preserved this convention.

The effort to marry SSL with FTP led to RFC 2228, written by authors at Cygnus Solutions and Bellcore, and a three-way schism in FTPS implementation broke out almost immediately. This schism remains in the industry today but has consolidated down to just two main implementations: “implicit” and “explicit” FTPS. (There were originally two competing implementations of “explicit” FTPS, but one won.)

The difference between the surviving “implicit” and “explicit” FTPS modes is this:

Implicit FTPS requires the FTP control channel to negotiate and complete an SSL/TLS session before any commands are passed. Thus, any FTP commands that pass over this kind of command channel are implicitly protected with SSL/TLS.

Many practitioners regard this as the more secure and reliable version of FTPS because no FTP commands are ever sent in the clear and intervening routers thus have no chance of misinterpreting the cleartext commands used in explicit mode.

Explicit FTPS requires the FTP control channel to connect using a cleartext TCP channel, and then optionally upgrades the command channel to SSL/TLS using explicit FTP commands such as “AUTH”, “PROT” and “PBSZ”.

Explicit FTPS is the “official” version of FTPS because it is the only one specified in RFC 4217 by Paul Ford-Hutchinson of IBM. There is no RFC covering “implicit mode” and there will likely never be an “implicit mode” RFC. However, explicit mode’s cleartext commands are prone to munging by intermediate systems, and credentials can leak through the occasional “USER” command sent over the initial cleartext channel.
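
The explicit-mode command sequence is easy to observe with Python’s standard ftplib module, which implements explicit FTPS; in the following minimal sketch the host and credentials are placeholders:

    # Explicit FTPS with Python's standard ftplib.  FTP_TLS opens a
    # cleartext control connection, upgrades it with AUTH TLS, then
    # secures the data channel with PBSZ 0 / PROT P (the explicit-mode
    # commands described above).  Host and credentials are placeholders.
    from ftplib import FTP_TLS

    ftps = FTP_TLS("ftp.example.com")  # plain TCP connection to port 21
    ftps.login("user", "password")     # login() issues AUTH TLS first, so
                                       # the credentials travel over TLS
    ftps.prot_p()                      # PBSZ 0 + PROT P secure the data channel
    print(ftps.nlst())                 # listing flows over the protected channel
    ftps.quit()

Because login() upgrades the control channel before sending “USER”, a client written this way avoids the credential leakage described above.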

Almost all commercial FTP vendors and many viable noncommercial or open source FTP packages have already implemented RFC 2228; most of these same vendors and packages have also implemented RFC 4217.

Third Generation (1997-current): IPv6

The second technology RFC 959 did not address was IPv6. There are many structures in the 1971-1985 RFCs that absolutely require IPv4 addresses. RFC 2428, by Mark Allman, Shawn Ostermann and Craig Metz, answered this challenge. This specification also took early and awful experiences with FTPS through firewalls and NAT’ed networks into account by introducing the clever little EPSV command.

However, the EPSV command did not, by itself, address the massive problems the file transfer community was having getting legacy clients and servers (i.e., those that didn’t implement RFC 2428) to work through a still immature collection of firewalls in the early 2000s. To address this need, a community convention around server-defined ranges of ports grew up around 2003. With this convention in place, a server administrator could open port 21 and a “small range of data ports” in the firewall, limiting the exposure that would otherwise come from opening all high ports to the FTP server.
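
As a concrete illustration of that convention, the following sketch uses the open-source pyftpdlib package (an assumption for this example; any FTP server with a configurable passive port range works the same way) to pin passive data connections to a small range that matches a firewall rule:

    # Sketch: an FTP server that advertises only a small, firewalled
    # range of passive data ports.  Uses the third-party pyftpdlib
    # package; the user, path and port range are placeholder values.
    from pyftpdlib.authorizers import DummyAuthorizer
    from pyftpdlib.handlers import FTPHandler
    from pyftpdlib.servers import FTPServer

    authorizer = DummyAuthorizer()
    authorizer.add_user("user", "password", "/srv/ftp", perm="elradfmw")

    handler = FTPHandler
    handler.authorizer = authorizer
    handler.passive_ports = range(60000, 60100)  # open only 60000-60099 inbound

    FTPServer(("0.0.0.0", 21), handler).serve_forever()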

Despite the fact that RFC 2428 is now (as of 2011) thirteen years old, support for IPv6 and the EPSV command in modern FTP is still uncommon. The slow adoption of IPv6 and FTP’s identity as a protocol “designed to be extendable” have allowed server administrators and FTP developers to avoid the remaining issue by clinging to IPv4 and implementing firewall and NAT workarounds for over a decade.  Nonetheless, support for IPv6 and EPSV remains a formidable defense against the inevitable future; it would be wise to treat file transfer vendors and open source projects that fail to implement or show no interest in RFC 2428 with suspicion.

BEST PRACTICES: A minimally interoperable FTP implementation will completely conform to RFC 959 (FTP’s 1985 revision).  All credible file transfer vendors and projects have already implemented RFC 2228 (adding SSL/TLS), and should offer client certificate authentication support as part of their current implementation.   Any vendor or project serious about long term support for their FTP software has already implemented RFC 2428 (IPv6 and EPSV) or has it on their development schedule.  Forward-looking vendors and projects are already considering various FTP IETF drafts that standardize integrity checks and more directory commands. 


FTP with PGP

The term “FTP with PGP” describes a workflow that combines the strong end-to-end encryption, integrity and signing of PGP with the FTP transfer protocol.  While FTPS can and often should be used to protect your FTP credentials, the underlying protocol in FTP with PGP workflows is often just plain old FTP.

BEST PRACTICE: (If you like FTP with PGP.) FTP with PGP is fine as long as care is taken to protect the FTP username and password credentials while they are in transit.  The easiest, most reliable and most interoperable way to protect FTP credentials is to use FTPS instead of non-secure FTP.
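
For example, a minimal FTP with PGP workflow in Python might pair the third-party python-gnupg package with the standard library’s FTPS support; the file name, recipient key, host and credentials below are all placeholder assumptions:

    # Sketch of an "FTP with PGP" workflow: PGP protects the payload end
    # to end, while FTPS protects the FTP credentials in transit.  Uses
    # the third-party python-gnupg package; all names are placeholders.
    import gnupg
    from ftplib import FTP_TLS

    gpg = gnupg.GPG()

    # 1. Encrypt the payload to the recipient's public key.
    with open("payroll.csv", "rb") as f:
        gpg.encrypt_file(f, recipients=["partner@example.com"],
                         output="payroll.csv.pgp")

    # 2. Move the ciphertext; FTPS keeps the credentials off the wire.
    ftps = FTP_TLS("ftp.example.com")
    ftps.login("user", "password")
    ftps.prot_p()
    with open("payroll.csv.pgp", "rb") as f:
        ftps.storbinary("STOR payroll.csv.pgp", f)
    ftps.quit()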

BEST PRACTICE: (If you want an alternative to FTP with PGP.) The AS1, AS2 and AS3 protocols all provide the same benefits as FTP with PGP, plus the benefit of a signed receipt to provide non-repudiation.  Several vendors also have their own ways to provide the same benefits as FTP with PGP without onerous key exchange, without a separate encrypt-in-transit step or with streaming encryption; ask your file transfer vendors what they offer as an alternative to FTP with PGP.
