IPv6 is the name of the networking protocol that is rapidly replacing IPv4 in the wake of widespread IPv4 address exhaustion.  IPv6 is defined in 1998’s RFC 2460.

IPv6 addresses are written in “colon notation” like “fe80:1343:4143:5642:6356:3452:5343:01a4” rather than the “dot notation” used by IPv4 addresses such as “192.0.2.1”.  IPv6 DNS entries are handled through “AAAA” records rather than the “A” records used under IPv4.
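The two notations can be explored with Python’s standard-library ipaddress module (a quick illustration; the IPv6 address is the example above and 192.0.2.1 is a reserved documentation address):

```python
import ipaddress

# The IPv6 address from the example above, in full "colon notation"
v6 = ipaddress.IPv6Address("fe80:1343:4143:5642:6356:3452:5343:01a4")

# IPv6 also has a compressed form: leading zeros in each group are
# dropped (01a4 -> 1a4) and runs of zero groups collapse to "::".
print(v6.compressed)   # fe80:1343:4143:5642:6356:3452:5343:1a4
print(v6.exploded)     # fe80:1343:4143:5642:6356:3452:5343:01a4

# An IPv4 address in "dot notation"
v4 = ipaddress.IPv4Address("192.0.2.1")
print(v4.version, v6.version)  # 4 6
```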

BEST PRACTICES: All FTP technology should now support an RFC 2428 implementation of IPv6 and the EPSV (and EPRT) commands under both IPv4 and IPv6.  Until IPv4 is entirely retired, the use of technology that supports both IPv4 and IPv6 implementations of FTP is preferred. Avoid using FTP over connections that automatically switch from IPv6 to IPv4 or vice versa.


FTP (“File Transfer Protocol”) is the granddaddy of all modern TCP-based file transfer protocols.

Regardless of your personal feelings or experiences with this venerable and expansive protocol, you must be fluent in FTP to be effective in any modern file transfer situation because all other protocols are compared against it. You should also know what FTP/S is and how it works, as the combination of FTP and SSL/TLS will power many secure file transfers for years to come.


First Generation (1971-1980)

The original specification for FTP (RFC 114) was published in 1971 by Abhay Bhushan of MIT. This standard introduced many concepts and conventions that survive to this day, including:

  • ASCII vs. “binary” transfers
  • Username authentication (passwords were “elaborate” and “not suggested” at this stage)
  • “Retrieve”, “Store”, “Append”, “Delete” and “Rename” commands
  • Partial and resumable file transfer
  • A protocol “designed to be extendable”
  • Two separate channels: one for “control information”, the other for “data”
  • Unresolved character translation and blocking factor issues

Second Generation (1980-1997)

The second generation of FTP (RFC 765) was rolled out in 1980 by Jon Postel of ISI. This standard retired RFC 114 and introduced more concepts and conventions that survive to this day, including:

  • A formal architecture for separate client/server functions and two separate channels
  • A minimum implementation required for all servers
  • Site-to-site transfers (a.k.a. FXP)
  • Using “TYPE” command arguments like “A” or “E” to ask the server to perform its best character and blocking translation effort
  • Password authentication and the convention of client-side masking of typed passwords
  • The new “ACCT” command to indicate alternate user context during authentication
  • New “LIST”, “NLST” and “CWD” commands that allowed end users to experience a familiar directory-based interface
  • New “SITE” and “HELP” commands that allowed end users to ask the server what it was capable of doing and how to access that functionality
  • Revised “RETR”, “STOR”, “APPE”, “DELE”, “RNFR/RNTO” and “REST” commands to replace the even shorter commands from the previous version.
  • A new “NOOP” command to simply get a response from the remote server.
  • Using “I” to indicate binary data (“I” for “Image”)
  • Passive (a.k.a. “firewall friendly”) transfer mode
  • The 3-digits-followed-by-text command response convention. The meaning of the first digit of the 3-digit command responses was:
    • 1xx: Preliminary positive reply; user should expect another Final reply.
    • 2xx: Final positive reply: whatever was just tried worked. The user may try another command.
    • 3xx: Intermediate positive reply: the last command the user entered in a multi-command chain worked and the user should enter the next expected command in the chain. (Very common during authentication, e.g., “331 User name okay, need password”)
    • 4xx: Transient negative reply: the last command the user entered in a multi-command chain did not work but the user is encouraged to try it again or start the command sequence over again. (Very common during data transfer blocked by closed ports, e.g., “425 Can’t open data connection”)
    • 5xx: Final negative reply: whatever was just tried did not work.
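The first-digit convention above is easy to mechanize. The following is a small illustrative sketch (not taken from any FTP library) that classifies a raw reply line by that digit:

```python
# Reply classes keyed by the first digit of the 3-digit code,
# following the RFC 765/959 convention listed above.
REPLY_CLASSES = {
    "1": "preliminary positive",   # expect another, final reply
    "2": "final positive",
    "3": "intermediate positive",  # send the next command in the chain
    "4": "transient negative",     # worth retrying
    "5": "final negative",
}

def classify_reply(line: str) -> str:
    """Classify a raw FTP reply line such as '331 User name okay'."""
    code = line[:3]
    if len(code) != 3 or not code.isdigit():
        raise ValueError(f"not an FTP reply: {line!r}")
    return REPLY_CLASSES.get(code[0], "unknown")

print(classify_reply("331 User name okay, need password"))  # intermediate positive
print(classify_reply("425 Can't open data connection"))     # transient negative
```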

A new, second-generation specification (RFC 959) was rolled out in 1985 by Jon Postel and Joyce Reynolds of ISI. This standard retired RFC 765 but introduced just a handful of new concepts and conventions. The most significant addition was:

  • New “CDUP”, “RMD/MKD” and “PWD” commands to further enhance the end user’s remote directory experience

At this point the FTP protocol had been battle-tested on almost every IP-addressable operating system ever written and was ready for the Internet to expand beyond academia. Vendor-specific or non-RFC conventions filled out any niches not covered by RFC 959 but the core protocol remained unchanged for over ten years.

From 1991 through 1994, FTP arguably wore the crown as the “coolest protocol on campus” as the Gopher system permitted users to find and share files and content over high-speed TCP links that were previously isolated on scattered dial-up systems.

FTP under RFC 959 even survived the invention of port-filtering and proxy-based firewalls. In part, this was due to the widely-implemented PASV command sequence which permitted data connections to become inbound connections. In part this was also because firewall and even router vendors took special care to work around the multi-port, session-aware characteristics of this most popular of protocols, even when implementing NAT.

Third Generation (1997-current)

The third and current generation of FTP was a reaction to two technologies that RFC 959 did not address: SSL/TLS and IPv6. However, RFC 959 was not replaced in this generation: it has instead been augmented by three key RFCs that address these modern technology challenges.

Third Generation (1997-current): FTPS

The first technology RFC 959 did not address was the use of SSL (later TLS) to secure TCP streams. When SSL was combined with FTP, it created the new FTPS protocol. Several years earlier Netscape had created “HTTPS” by marrying SSL to HTTP; the “trailing S” on “FTPS” preserved this convention.

The effort to marry SSL with FTP led to RFC 2228 by Bellcore and a three-way schism in FTPS implementation broke out almost immediately. This schism remains in the industry today but has consolidated down to just two main implementations: “implicit” and “explicit” FTPS. (There were originally two competing implementations of “explicit” FTPS, but one won.)

The difference between the surviving “implicit” and “explicit” FTPS modes is this:

Implicit FTPS requires the FTP control channel to negotiate and complete an SSL/TLS session before any commands are passed. Thus, any FTP commands that pass over this kind of command channel are implicitly protected with SSL/TLS.

Many practitioners regard this as the more secure and reliable version of FTPS because no FTP commands are ever sent in the clear and intervening routers thus have no chance of misinterpreting the cleartext commands used in explicit mode.

Explicit FTPS requires the FTP control channel to connect using a cleartext TCP channel and then optionally encrypts the command channel using explicit FTP commands such as “AUTH”, “PROT” and “PBSZ”.

Explicit FTPS is the “official” version of FTPS because it is the only one specified in RFC 4217 by Paul Ford-Hutchinson of IBM. There is no RFC covering “implicit mode” and there will likely never be an “implicit mode” RFC. However, explicit mode’s cleartext commands are prone to munging by intermediate systems or credential leakage through the occasional “USER” command sent over the initial cleartext channel.
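Explicit mode’s command sequence can be seen in Python’s standard-library ftplib, whose FTP_TLS class implements RFC 4217. The sketch below only defines a helper; the host and credentials in the usage comment are hypothetical:

```python
import ftplib

def explicit_ftps_session(host: str, user: str, password: str) -> ftplib.FTP_TLS:
    """Open an explicit-mode FTPS session (a sketch; host and credentials
    are hypothetical).  The control channel starts as cleartext TCP, then
    "AUTH TLS" upgrades it before any credentials are sent."""
    ftps = ftplib.FTP_TLS(host)   # plain TCP connect to port 21
    ftps.auth()                   # "AUTH TLS": upgrade the control channel
    ftps.login(user, password)    # credentials now travel encrypted
    ftps.prot_p()                 # "PBSZ 0" + "PROT P": encrypt the data channel too
    return ftps

# Usage (against a real server):
# ftps = explicit_ftps_session("ftp.example.com", "alice", "secret")
# ftps.retrlines("LIST")
# ftps.quit()
```

Note that without the final prot_p() call, only the command channel is protected and file data would still travel in the clear.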

Almost all commercial FTP vendors and many viable noncommercial or open source FTP packages have already implemented RFC 2228; most of these same vendors and packages have also implemented RFC 4217.

Third Generation (1997-current): IPv6

The second technology RFC 959 did not address was IPv6. There are many structures in the 1971-1985 RFCs that absolutely require IPv4 addresses. RFC 2428 by Mark Allman, Shawn Ostermann and Craig Metz answered this challenge. This specification also took early and awful experiences with FTPS through firewalls and NAT’ed networks into account by introducing the clever little EPSV command.

However, the EPSV command did not, by itself, address the massive problems the file transfer community was having getting legacy clients and servers (i.e., those that didn’t implement RFC 2428) to work through a still immature collection of firewalls in the early 2000s. To address this need, a community convention around server-defined ranges of ports grew up around 2003. With this convention in place, a server administrator could open port 21 and a “small range of data ports” in their firewall to limit the exposure they would otherwise have taken opening up all high ports to the FTP server.
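The address-family-neutral format RFC 2428 introduced is simple: EPRT carries the family (1 for IPv4, 2 for IPv6), address and port in one delimiter-separated argument, while the EPSV reply carries only a port, since the data connection reuses the control connection’s address. A small illustrative sketch:

```python
import re

def build_eprt(family: int, address: str, port: int) -> str:
    """Build an RFC 2428 EPRT command: family 1 = IPv4, 2 = IPv6."""
    return f"EPRT |{family}|{address}|{port}|"

def parse_epsv_reply(reply: str) -> int:
    """Extract the data port from an EPSV reply such as
    '229 Entering Extended Passive Mode (|||6446|)'."""
    match = re.search(r"\(\|\|\|(\d+)\|\)", reply)
    if match is None:
        raise ValueError(f"not an EPSV reply: {reply!r}")
    return int(match.group(1))

print(build_eprt(2, "fe80::1", 6275))  # EPRT |2|fe80::1|6275|
print(parse_epsv_reply("229 Entering Extended Passive Mode (|||6446|)"))  # 6446
```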

Despite the fact that RFC 2428 is now (as of 2011) thirteen years old, support for IPv6 and the EPSV command in modern FTP is still uncommon. The slow adoption of IPv6 and FTP’s identity as a protocol “designed to be extendable” have allowed server administrators and FTP developers to avoid the remaining issue by clinging to IPv4 and implementing firewall and NAT workarounds for over a decade.  Nonetheless, support for IPv6 and EPSV remains a formidable defense against the inevitable future; it would be wise to treat file transfer vendors and open source projects who fail to implement or show no interest in RFC 2428 with suspicion.

BEST PRACTICES: A minimally interoperable FTP implementation will completely conform to RFC 959 (FTP’s 1985 revision).  All credible file transfer vendors and projects have already implemented RFC 2228 (adding SSL/TLS), and should offer client certificate authentication support as part of their current implementation.   Any vendor or project serious about long term support for their FTP software has already implemented RFC 2428 (IPv6 and EPSV) or has it on their development schedule.  Forward-looking vendors and projects are already considering various FTP IETF drafts that standardize integrity checks and more directory commands. 



PeSIT is an open file transfer protocol often associated with Axway.

Like Sterling Commerce’s proprietary NDM file transfer protocol, PeSIT has now been written into the standard communication specifications of several industry consortiums and government exchanges, thus ensuring a high degree of long-term dependence on Axway technology. PeSIT is required far more often in Europe than in the United States due to Axway’s French roots and home turf advantage.

Also like NDM, PeSIT is normally used in internal (or trusted WAN) transfers; other protocols are typically used for transfers across the Internet.

PeSIT stands for “Protocole d’Echanges pour un Système Interbancaire de Télécompensation” and was designed as a specialized replacement for FTP to support European interbank transactions in the mid-1980s.  One of the primary features of the protocol was “checkpoint restart”.

BEST PRACTICE: PeSIT, NDM and other protocols should be avoided in new deployments unless specifically mandated by industry rule or statute.   The interesting capabilities these protocols offer (e.g., checkpoint restart, integrity checks, etc.) are now also available in vendor-neutral protocols, and from vendors who allow wider and less expensive licensing of connectivity technology than Axway and Sterling Commerce (now IBM).

FTP with PGP

The term “FTP with PGP” describes a workflow that combines the strong end-to-end encryption, integrity and signing of PGP with the FTP transfer protocol.  While FTPS can and often should be used to protect your FTP credentials, the underlying protocol in FTP with PGP workflows is often just plain old FTP.

BEST PRACTICE: (If you like FTP with PGP.) FTP with PGP is fine as long as care is taken to protect the FTP username and password credentials while they are in transit.  The easiest, most reliable and most interoperable way to protect FTP credentials is to use FTPS instead of non-secure FTP.

BEST PRACTICE: (If you want an alternative to FTP with PGP.) The AS1, AS2 and AS3 protocols all provide the same benefits as FTP with PGP, plus the benefit of a signed receipt to provide non-repudiation.  Several vendors also have their own way to provide the same benefits of FTP with PGP without onerous key exchange, without a separate encrypt-in-transit step or with streaming encryption; ask your file transfer vendors what they offer as an alternative to FTP with PGP.


LDAPS refers to LDAP connections secured with SSL, typically over TCP port 636.

See “LDAP” for more information.

Active Directory

Microsoft Active Directory (AD) is a type of external authentication that can provide rich details about authenticated users, including email address, group membership and client certificates.

AD is essentially an extended version of LDAP optimized for Windows environments, but AD is only available from Microsoft.  Like plain LDAP, AD connections use TCP port 389 but can (and should) be secured with SSL.  When AD (LDAP) is secured in this manner, it typically uses TCP port 636 and is often referred to as “LDAPS”.

BEST PRACTICE: Use SSL secured connections to AD whenever possible; the information these data streams contain should be treated like passwords in transit.   Store as much information about the user in AD as your file transfer technology will permit; this will improve your ability to retain centralized control of that data and allow you to easily switch to different file transfer technology if your needs change.


LDAP is a type of external authentication that can provide rich details about authenticated users, including email address, group membership and client certificates.

LDAP connections use TCP port 389 but can (and should) be secured with SSL.  When LDAP is secured in this manner, it typically uses TCP port 636 and is often referred to as “LDAPS”.

BEST PRACTICE: Use the SSL secured version of LDAP whenever possible; the information these data streams contain should be treated like passwords in transit.   Store as much information about the user in LDAP as your file transfer technology will permit; this will improve your ability to retain centralized control of that data and allow you to easily switch to different file transfer technology if your needs change.


SMTP is an email protocol used to push messages and attachments from server to server.  Many technologies have been used to secure SMTP over the years, but the best technologies available today use SSL (version 3) or TLS to secure the entire SMTP connection.

SMTP typically uses TCP port 25 to move unsecured traffic and often uses TCP port 465 to move secured traffic.  Use of alternate ports with SMTP is extremely common to reduce connections from spammers who try to exploit servers listening on the most common ports.

BEST PRACTICES: SMTP should always be secured using SSL (version 3) or TLS.  If your file transfer deployment uses email notifications, then email sent through SMTP should either always be accepted within a few seconds or should be automatically queued up in the file transfer application in case of delay.  If SMTP services are unreliable and no queue mechanism exists in your file transfer solution, then a standalone mail server relay should be implemented to ensure that timed-out notifications do not cause unnecessary delay or failure of file transfer activities.
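The queue-on-failure behavior recommended above can be sketched in a few lines. The class and the flaky sender below are illustrative stand-ins (a real deployment would send via SMTP over SSL/TLS, e.g. smtplib with starttls()), but the retry logic is the point:

```python
from collections import deque

class NotificationQueue:
    """Sketch: queue email notifications so a slow or down SMTP server
    cannot stall file transfer activity.  `send` is a stand-in for a
    real SMTP send call."""

    def __init__(self, send):
        self.send = send
        self.pending = deque()

    def notify(self, message):
        self.pending.append(message)
        self.flush()

    def flush(self):
        while self.pending:
            message = self.pending[0]
            try:
                self.send(message)
            except OSError:
                # SMTP unavailable: keep the message queued and move on,
                # so the file transfer itself is never delayed or failed
                break
            self.pending.popleft()

# Usage with a hypothetical sender that fails once, then recovers:
attempts = []
def flaky_send(msg):
    attempts.append(msg)
    if len(attempts) == 1:
        raise OSError("SMTP timeout")

q = NotificationQueue(flaky_send)
q.notify("transfer #1 complete")   # send fails, message stays queued
q.flush()                          # retry succeeds, queue drains
print(len(q.pending))              # 0
```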


RADIUS is an authentication protocol that supports the use of a username, a password and sometimes one extra credential such as a hardware token PIN.

In file transfer applications, RADIUS sign on information can be collected by web-based, FTP-based or other file transfer prompts and then tried against trusted RADIUS servers.  When a file transfer application gets a positive acknowledgement from a RADIUS server, it will typically need to look up additional information about the authenticated user from its internal user database or other external authentication sources (frequently LDAP servers such as Active Directory).

AS2 Optional Profiles

AS2 optional profiles (also “optional AS2 profiles”) are features built into the AS2 protocol but not used by every Drummond certified vendor.  However, the Drummond Group does validate seven different optional profiles (nine total) and these are briefly covered below.

Certificate Exchange Messaging (CEM) – A standard way of exchanging certificates and information about how to use them.

Multiple Attachments (MA) – Simply the ability to transmit multiple files in a single AS2 transmission.

FileName preservation (FN) – Adds metadata to AS2 transmissions to preserve original filenames.  “FN-MA” covers AS2 transmissions without MDNs and “FN-MDN” covers transmissions with MDNs.

Reliability – Provides an application standard around retry, IDs and related matters to prevent double posts.

AS2 Restart – Allows larger files, including those over 500MB, to be sent over AS2.

Chunked Transfer Encoding (CTE) – Permits transmission of data sets that are still being generated when transmission starts.

BEST PRACTICES: The most useful AS2 optional profiles for file transfer are usually MA (multiple attachments) and FN (filename preservation).  Your AS2 software should support all of these.  If you transmit files larger than a few megabytes with AS2, then AS2 restart is also a must.  Other options may be useful on a case-by-case basis.


SHA-1 (“Secure Hash Algorithm #1”, also “SHA1”) is the second most common data integrity check standard (a.k.a. “hash”) used throughout the world today.  SHA-1 codes are 160-bit numbers and are usually represented in hexadecimal format (e.g., “de9f2c7f d25e1b3a fad3e85a 0bd17d9b 100db4b3”).

SHA-1 is the least secure hash algorithm NIST currently supports in its FIPS validated cryptography implementations.   However, SHA-1 is faster than SHA-2 (its successor) and is commonly available in file transfer technology today (e.g., FTP’s unofficial “XSHA1” command).
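Computing a SHA-1 integrity check is a one-liner with Python’s standard-library hashlib (note that a stock hashlib build is not itself FIPS validated). The input "abc" is a well-known published test vector:

```python
import hashlib

# SHA-1 produces a 160-bit (40 hex digit) digest
digest = hashlib.sha1(b"abc").hexdigest()

# Group the hex digits the way the example above does (8 per group)
grouped = " ".join(digest[i:i + 8] for i in range(0, len(digest), 8))
print(grouped)  # a9993e36 4706816a ba3e2571 7850c26c 9cd0d89d
```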

BEST PRACTICE: Modern file transfer deployments should use FIPS validated SHA-1 or SHA-2 implementations for integrity checks.  Some SHA-1 usage is already prohibited in high security environments.  If you are working with a U.S. Federal Government application, read NIST’s “Transitioning of Cryptographic Algorithms and Key Sizes” to see if your SHA-1 implementation is still allowed.


TLS (“Transport Layer Security”) is the modern version of SSL and is used to secure TCP sockets.  TLS is specified in RFC 2246 (version 1.0), RFC 4346 (version 1.1) and RFC 5246 (version 1.2).  When people talk about connections “secured with SSL”, today TLS is the technology that’s really used instead of older editions of SSL.

See “SSL” for more discussion about how SSL/TLS is used in practice.

See the Wikipedia entry for TLS if you are interested in the technical mechanics behind TLS.

BEST PRACTICE: All modern file transfer clients and file transfer servers should support TLS 1.0 today.  Most clients and servers support TLS 1.1 today, but TLS 1.1 support will probably not be required unless major issues appear in TLS 1.0.  Some clients and servers support TLS 1.2 today, but TLS 1.2 support is a trivial concern at this point.  All file transfer software should use FIPS validated cryptography to provide TLS services across file transfer protocols such as HTTPS, FTPS, AS1, AS2, AS3 or email protocols secured with TLS.


SSL (“Secure Sockets Layer”) was the first widely-deployed technology used to secure TCP sockets.  Its use in HTTPS (HTTP over SSL) allowed the modern age of “ecommerce” to take off on the world wide web and it has also been incorporated into common file transfer protocols such as FTPS (FTP over SSL) and AS2.

In modern usage, “protected by SSL”, “secured by SSL” or “encrypted by SSL” really means “protected by TLS or SSL version 3, whichever gets negotiated first.”  By 2010, most clients and servers knew how to negotiate TLS sessions and would attempt to do so before trying SSL version 3 (an older protocol).  Version 2 of SSL is considered insecure at this point; some clients and servers will attempt to negotiate this protocol if attempts to negotiate TLS or SSL version 3 fail, but it is rare that negotiation falls through to this level.

SSL depends on the use of X.509 certificates to authenticate the identity of a particular server.  The X.509 certificate deployed on an SSL/TLS server is popularly referred to as the “server certificate”.  If the name (CN) on the server certificate does not match the DNS name of a server, clients will refuse to complete the SSL/TLS connection unless the end user elects to ignore the certificate name mismatch.  (This is a common option on web browsers and FTP clients.)  If the server certificate is expired, clients will almost always refuse to complete the SSL/TLS connection.  (Ignoring certificate expiration is usually not an option available to end users.)  Finally, if the server certificate is “self signed” or is signed by a CA that the client does not trust, then most clients will refuse the connection.  (Browsers and FTP clients usually have the option to either ignore the CA chain or import and trust the CA chain to complete the negotiation.)
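These client-side checks are visible in Python’s standard-library ssl module, whose default context enforces exactly the behavior described above (trusted CA chain required, hostname must match), and which only lets you ignore them by explicitly turning them off:

```python
import ssl

# The default client context refuses untrusted CAs and name mismatches
ctx = ssl.create_default_context()
print(ctx.check_hostname)                     # True  -> refuse on CN/DNS mismatch
print(ctx.verify_mode == ssl.CERT_REQUIRED)   # True  -> refuse untrusted certs

# The "end user elects to ignore" case requires disabling both checks,
# and in this order (hostname checking cannot be on with CERT_NONE):
insecure = ssl.create_default_context()
insecure.check_hostname = False
insecure.verify_mode = ssl.CERT_NONE
```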

Optional X.509 client certificates may also be used to authenticate the identity of a particular user or device.  When used, this certificate is simply referred to as a “client certificate.”  File transfer servers either manually map individual client certificates to user codes or use LDAP or Active Directory mappings to accomplish the same goal.  File transfer servers rarely have the ability to ignore expired certificates, often have the ability to import a third-party CA certificate chain, and often have the ability to permit “self-signed” certificates.

Whether or not client certificates are in use, SSL/TLS provides point-to-point encryption and tamper protection during transmission.  As such, SSL/TLS provides sufficient protection of “data in motion”, though it provides no protection to “data at rest.”

SSL/TLS connections are set up through a formal “handshake” procedure.  First, a connecting client presents a list of supported encryption algorithms and hashes to a server.  The server picks a set of these (the combination of algorithms and hashes is called a “ciphersuite”) and sends the public piece of its X.509 server certificate to the client so the client can authenticate the identity of the server.  The client then either sends the public piece of its X.509 client certificate to the server to authenticate the identity of the client or mocks up and sends temporary, session-specific information of a similar bent to the server.  In either case, key exchange now occurs.

When you hear numbers like “1024-bit”, “2048-bit” or even “4096-bit” in relation to SSL, these numbers are referring to the bit length of the keys in the X.509 certificates used to negotiate the SSL/TLS session.  These numbers are large because the exchange of keys in asymmetric public-key cryptography requires the use of very large keys, and not every 1024-bit (etc.) number is a viable key.

When you hear numbers like “80-bit”, “128-bit”, “168-bit” or even “256-bit” in relation to SSL, these numbers are referring to the bit length of the shared encryption key that is negotiated during the handshake procedure.  This number is dependent on the algorithm available on clients and servers.  For example, the AES algorithm comes in 128-bit, 192-bit and 256-bit editions, so these are all possible values for software that supports AES.

It is uncommon to refer to hash bit lengths in conversations about SSL; instead, the hash is referred to by name – typically MD5 or SHA-1.

The three most important implementations of TLS/SSL today are Microsoft’s Cryptographic Providers, Oracle’s Java Secure Socket Extension (“JSSE”) and OpenSSL.  All of these libraries have been separately FIPS validated and all may be incorporated into file transfer software at no charge to the developer.

BEST PRACTICE: The SSL/TLS implementation in your file transfer clients and servers should be 100% FIPS validated today. More specifically, the following file transfer protocols should be using FIPS validated SSL/TLS code today: HTTPS, FTPS, AS1, AS2, AS3 and email protocols secured by SSL/TLS.  Modern file transfer software supports the optional use of client certificates on HTTPS, FTPS, AS1, AS2 and AS3 and allows administrators to deny the use of SSL version 2.  If you plan to use a lot of client certificates to provide strong authentication capabilities, research the level of integration between your file transfer software, your enterprise authentication technology (e.g., Active Directory) and your PKI infrastructure.

X.509 Certificate

An X.509 certificate is a high-security credential used to encrypt, sign and authenticate transmissions, files and other data.  X.509 certificates secure SSL/TLS channels, authenticate SSL/TLS servers (and sometimes clients), encrypt/sign SMIME, AS1, AS2, AS3 and some “secure zip” payloads, and provide non-repudiation to the AS1, AS2 and AS3 protocols.

The relative strength of various certificates is often compared through their “bit length” (e.g., 1024, 2048 or 4096 bits) and longer=stronger.  Certificates are involved in asymmetric cryptography so their bit lengths are often 8-10x longer (or more) than the bit length of the symmetric keys they are used to negotiate (usually 128 or 256 bits).

Each certificate either is signed by another certificate or is “self signed”.  Certificates that are signed by other certificates are said to “chain up” to an ultimate “certificate authority” (“CA”) certificate.   CA certificates are self-signed and typically have a very large bit length.  (If a CA certificate was ever cracked, it would put thousands if not millions of child certificates at risk.)  CAs can be public commercial entities (such as Verisign or GoDaddy) or private organizations.

Most web browsers and many file transfer clients ship with a built-in collection of pre-trusted CA certificates.  Any certificate signed by a certificate in these CA chains will be accepted by the clients without protest, and the official CA list may be adjusted to suit a particular installation.

In addition, many web browsers also ship with a built-in and hard-coded collection of pre-trusted “Extended Validation” CA certificates.  Any web site presenting a certificate signed by an extended validation certificate will cause a green bar or other extra visual cue to appear in the browser’s address bar.  It is not typically possible to edit the list of extended validation certificates, but it is usually possible to remove the related CA cert from the list of trusted CAs to prevent these SSL/TLS connections from being negotiated.

X.509 certificates are stored by application software and operating systems in a variety of different places.  Microsoft’s Cryptographic Providers make use of a certificate store built into the Microsoft operating systems.  OpenSSL and Java Secure Sockets Extension (JSSE) often make use of certificate files.

Certificates may be stored on disk or may be burned into hardware devices.  Hardware devices often tie into participating operating systems (such as hardware tokens that integrate with Microsoft’s Certificate Store) or application software.

The most widespread use of X.509 hardware tokens may be in the U.S. Department of Defense (DoD) CAC card implementation.  This implementation uses X.509 certificates baked into hardware tokens to identify and authenticate individuals through a distributed Active Directory implementation.  CAC card certs are signed by a U.S. government certificate and bear a special attribute (Subject Alternative Name, a.k.a. “SAN”) that is used to map individual certificates to records in the DoD’s Active Directory server via “userPrincipalName” AD elements.

See the Wikipedia entry for X.509 if you are interested in the technical mechanics behind X.509.

Firewall Friendly

A file transfer protocol that is “firewall friendly” typically has most or all of the following attributes:

1) Uses a single port
2) Connects in to a server from the Internet
3) Uses TCP (so session-aware firewalls can inspect it)
4) Can be terminated or proxied by widely available proxy servers

For example:

Active-mode FTP is NOT firewall friendly because it violates #1 and #2.
Most WAN acceleration protocols are NOT firewall friendly because they violate #3 (most use UDP) and #4.
SSH’s SFTP is QUITE firewall friendly because it conforms to #1, #2 and #3.
HTTP/S is probably the MOST firewall friendly protocol because it conforms to #1, 2, 3 and 4.

As these examples suggest, the attribute file transfer protocols most often give up to enjoy firewall friendliness is transfer speed.
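The four-point checklist and the examples above can be sketched as a simple score. The attribute values below are my reading of the examples in the text (the text only states which criteria each protocol violates, so the remaining values are assumptions):

```python
# The four "firewall friendly" attributes from the checklist above
CRITERIA = ("single_port", "connects_inbound", "uses_tcp", "proxyable")

def friendliness(protocol: dict) -> int:
    """Count how many of the four attributes a protocol satisfies."""
    return sum(protocol[c] for c in CRITERIA)

protocols = {
    # Active-mode FTP violates #1 and #2
    "Active-mode FTP":  dict(single_port=False, connects_inbound=False, uses_tcp=True,  proxyable=True),
    # Most WAN acceleration protocols violate #3 (UDP) and #4
    "WAN acceleration": dict(single_port=True,  connects_inbound=True,  uses_tcp=False, proxyable=False),
    # SFTP conforms to #1, #2 and #3
    "SFTP":             dict(single_port=True,  connects_inbound=True,  uses_tcp=True,  proxyable=False),
    # HTTP/S conforms to all four
    "HTTP/S":           dict(single_port=True,  connects_inbound=True,  uses_tcp=True,  proxyable=True),
}

for name, attrs in sorted(protocols.items(), key=lambda kv: -friendliness(kv[1])):
    print(f"{name}: {friendliness(attrs)}/4")
```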

When proprietary file transfer “gateways” are deployed in a DMZ network segment for use with specific internal file transfer servers, the “firewall friendliness” of the proprietary protocol used to link gateway and internal server consists of the following attributes instead:

1) Internal server MUST connect to DMZ-resident server (connections directly from the DMZ segment to the internal segment are NOT firewall friendly)
2) SHOULD use a single port (less important than #1)
3) SHOULD use TCP (less important than #2)


An MDN (“Message Disposition Notification”) is the method used by the AS1, AS2 and AS3 protocols (the “AS protocols”) to return a strongly authenticated and signed success or failure message back to the senders of the original file.  Technically, MDNs are an optional piece of any AS protocol, but MDNs’ critical role as the provider of the “guaranteed delivery” capability in all of the AS protocols means that MDNs are usually used.

Depending on the protocol used and options selected, the MDN will be returned to the sender in one of the following ways:

Via the same HTTP/S stream used to post the original file: AS2 senders may request that MDNs are sent this way.  This type of transfer is popularly called “AS2 with synchronous MDNs” (or “AS2 sync” for short).  When small files are involved, this type of transfer is the fastest AS protocol transfer currently available.

Via a separate HTTP/S stream back to the sender’s server: AS2 senders may request that MDNs are sent this way.  This type of transfer is popularly called “AS2 with asynchronous MDNs” (or “AS2 async” for short).  This type of transmission is slightly more resilient to network hiccups and long processing turnaround times of large files than “AS2 sync” transmissions.

Via email: All AS1 MDNs are returned this way.  AS2 async senders may also request that MDNs are sent this way.

Via FTP: All AS3 MDNs are returned this way.

Full MDNs (the signed responses) are sometimes retained by the sender and/or recipient as irrefutable proof of guaranteed delivery.  The use of X.509 certificates to authenticate and sign both the original file transmission and the MDN receipt often allows MDNs to rise to the level of legally binding nonrepudiation in many jurisdictions.


AS3 (“Applicability Statement 3”) is an SMIME-based transfer protocol that uses FTP/S to transmit files with end-to-end encryption and guaranteed delivery/non-repudiation (when MDNs are in use).

AS3 is an unpopular variant of the AS2 protocol.  Many vendors successfully sell software that supports AS2 but not AS1 or AS3.  However, AS3’s design as an FTP-based protocol allows many companies to implement it with minimal file transfer technology investments at their perimeters; they simply need to implement AS3 internally and make sure it can access a plain old FTP/S server exposed to the Internet.

See also “AS1” for the email-based variant and “AS2” for the HTTP/S-based variant.


AS2 (“Applicability Statement 2”) is an SMIME-based transfer protocol that uses HTTP/S to transmit files with end-to-end encryption and guaranteed delivery/non-repudiation (when MDNs are in use).

The two main reasons that AS2-based transmission systems are unpopular unless specifically requested by particular partners are complexity and cost.

In terms of complexity, AS2 configurations can involve up to four different X.509 certificates on each side of a transfer, plus hostnames, usernames, passwords, URLs, MDN delivery options, timeouts and other variables.  Configuration and testing of each new partner can be a full or multi-day affair, where simpler protocols such as FTP may require hours or minutes.  To hide as much of the configuration complexity as possible from administrators, some AS2 products (such as Cleo’s Lexicom) come with dozens or hundreds of preconfigured partner profiles, but knowledge of the underlying options is still often necessary to troubleshoot and deal with periodic updates of partner credentials or workflows.

In terms of cost, AS2 products that can connect to multiple trading partners are rarely available for less than ten thousand dollars, and the ones that ship with a well-developed list of partner profiles cost much more than that.  One factor that drives up this cost is that any marketable AS2 product will be “Drummond Certified”.  The cost of high-end AS2 products is also driven up by the fact that compiling and keeping up an extensive library of partner profiles is an expensive endeavor in its own right.  Implementing AS2 securely across a multiple-zone network also tends to drive up costs because intermediate AS2 gateways are often required to prevent direct Internet- or partner-based access to key internal systems.

Another factor working against voluntary AS2-based implementations is transfer speed.  The use of HTTP-based encoding and the requirement that MDNs are only compared after the complete file has been delivered often tips the operational balance in favor of other technology.

AS3 was developed, in part, to cope with AS2’s slow HTTP-based encoding, but other modifications (“optional profiles”) to the AS2 protocol have also been introduced to address other limitations.  For example, the optional “AS2 Restart” feature was introduced industry-wide to cope with large files whose delivery was heretofore dependent on long-lasting, unbroken HTTP streams.

Nonetheless, AS2 is considered to be the most successful and most widely adopted of any vendor-independent file transfer protocol that builds both transmission security and guaranteed delivery into the core protocol.

See also “AS1” for the email-based variant, “AS3” for the FTP-based variant and “AS2 optional profiles” for additional information about available AS2 features.


AS1 (“Applicability Statement 1”) is an SMIME-based transfer protocol that uses plain old email protocols (such as SMTP and POP3) to transmit files with end-to-end encryption and guaranteed delivery/non-repudiation (when MDNs are in use).

End-to-end encryption is accomplished through the use of asymmetric encryption keyed with the public and private parts of properly exchanged X.509 certificates.  Guaranteed delivery is accomplished through the use of strong authentication and signing, also through the use of the public and private parts of properly exchanged X.509 certificates.

AS1 is an unpopular variant of the AS2 protocol, at least for new implementations.  Many vendors successfully sell software that supports AS2 but not AS1 or AS3.  However, AS1’s design as an email-based protocol allows many companies to implement it without investing in extra file transfer technology at their perimeters; they simply need to implement AS1 internally and make sure it can access email.

See also “AS2” for the HTTP-based variant and “AS3” for the FTP/S-based variant.
