
Replacing TCP and UDP

Google has a TCP/SSL replacement, QUIC, which avoids round tripping and renegotiation by integrating the security layer with the reliability layer, and by supporting multiple asynchronous streams within a connection.

Layering a new peer-to-peer packet network over the Internet is simply what the Internet is designed for. UDP is broken in a few ways, but not in ways that cannot be fixed. It is simply a matter of time before a new virtual packet layer is deployed - probably one in which authentication and encryption are inherent.

For authentication and encryption to be inherent, the network needs to connect between public keys, and so needs to be based on Zooko's triangle. It also needs to penetrate firewalls, and to do protocol negotiation with an unlimited number of possible protocols - avoiding the Internet names and numbers authority.

There is an excellent and interesting pure layer replacement for TCP, but, being a pure layer replacement, it cannot provide security against man-in-the-middle attacks.

As Ian Grigg put it: “Good protocols divide into two parts, the first of which says to the second, trust this key completely!”

This might well be the basis of a better problem factorization than the layer factorization – divide the task by the way trust is embodied, rather than on the basis of layered communication.

Trust is an application level issue, not a communication layer issue, but neither do we want each application to roll its own trust cryptography – which at present web servers are forced to do. (Insert my standard rant against SSL/TLS). 

Most web servers are vulnerable to attacks akin to the session cookie fixation attack, because each web site reinvents session cookie handling, and even experts in cryptography are apt to get it wrong.

The correct procedure is to generate and issue a strongly unguessable random https-only cookie on successful login, representing the fact that the possessor of this cookie has proven his association with a particular database record; but very few people, including very few experts in cryptography, actually do it this way. Association between a client request and a database record needs to be part of the security system. It should not be something each web developer is expected to build on top of the security system.
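
As a concrete illustration, here is a minimal sketch of that procedure, assuming hypothetical "db" and "response" objects standing in for whatever database handle and HTTP response abstraction the web framework provides:

```python
# Minimal sketch of the correct login-cookie procedure described above.
# On successful login, issue a strongly unguessable random token as an
# https-only cookie, and store a hash of it against the user's record,
# so that a stolen database does not yield live sessions.
# "db" and "response" are hypothetical stand-ins for the framework's
# database handle and HTTP response object.

import hashlib
import secrets

def issue_session_cookie(db, response, user_id):
    token = secrets.token_urlsafe(32)  # 256 bits of randomness
    db.save_session(user_id, hashlib.sha256(token.encode()).hexdigest())
    response.set_cookie(
        "session", token,
        secure=True,    # sent only over https, never plain http
        httponly=True,  # not readable by page scripts
    )

def lookup_session(db, token):
    # Possession of the token proves association with a database record.
    return db.find_session(hashlib.sha256(token.encode()).hexdigest())
```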

TCP constructs a reliable pipelined stream connection out of unreliable packet delivery.

There are a bunch of problems with TCP. No provision was made for protocol negotiation, so any upgrade has to be fully backwards compatible. A number of fixes have been made: for example, the long fat pipe problem has been fixed by window size negotiation, which is semi-incompatible and leads to flaky behavior with old style routers, but the transaction problem remains intolerable. The transaction problem has been reduced by protocol level workarounds, such as “keep-alive” for http, but these are not entirely satisfactory. The fix for syn flooding works, but causes some minor unnecessary degradation of performance under syn flood attacks, because the syn cookie is limited to 48 bits – it needs to be 128 bits, both to deal with the syn flood attack and to prevent TCP hijacking.
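
To make the 48-bit limitation concrete, here is a sketch of what a 128-bit stateless cookie might look like – an HMAC over the connection four-tuple under a key known only to the server. This illustrates the idea; it is not the actual TCP syn cookie format:

```python
# Sketch of a 128-bit stateless connection cookie: an HMAC over the
# connection four-tuple, keyed by a secret known only to the server.
# 128 bits is enough that an attacker can neither guess a valid cookie
# under a syn flood nor forge one to hijack an established connection.
# This illustrates the idea; it is not the real TCP syn cookie format.

import hashlib
import hmac
import os

SERVER_KEY = os.urandom(32)  # in practice, rotated every few minutes

def make_cookie(src_ip: str, src_port: int, dst_ip: str, dst_port: int) -> bytes:
    msg = f"{src_ip}:{src_port}->{dst_ip}:{dst_port}".encode()
    return hmac.new(SERVER_KEY, msg, hashlib.sha256).digest()[:16]  # 128 bits

def check_cookie(cookie: bytes, src_ip: str, src_port: int,
                 dst_ip: str, dst_port: int) -> bool:
    return hmac.compare_digest(cookie, make_cookie(src_ip, src_port,
                                                   dst_ip, dst_port))
```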

TCP is inefficient over wireless, because interference problems are rather different from those provided for in the TCP model. This problem is pretty much insoluble because of the lack of protocol negotiation.

There are cases intermediate between TCP and UDP, which require different balances of timeliness, reliability, streaming, and record boundary distinction. DCCP and SCTP have been introduced to deal with these intermediate cases: SCTP for when one has many independent transactions running over a single connection, and DCCP for data where time sensitivity matters more than reliability, such as voice over IP. SCTP would have been better for http and https than TCP is, though it is a bit difficult to change now. Problems such as a password-authenticated key agreement transaction with a banking site require something that resembles encrypted SCTP, analogous to the way that TLS is encrypted TCP, but nothing like that exists as yet. Standards exist for encrypted DCCP, though I think the standards are unsatisfactory, and I suspect that each vendor will implement his own incompatible version, each of which will claim to conform to the standard.

But a new threat has arrived:  TCP man in the middle forgery. 

Connection providers, such as Comcast, frequently sell more bandwidth than they can deliver.  To curtail customer demands, they forge connection shutdown packets (reset packets), to make it appear that the nodes are misbehaving, when in fact it is the connection between nodes, the connection that Comcast provides, that is misbehaving. Similarly, the great firewall of China forges reset packets when Chinese connect to web sites that contain information that the Chinese government does not approve of. Not only does the Chinese government censor, but it is able to use a mechanism that conceals the fact of censorship. 

The solution to all these problems is to have protocol negotiation, standard encryption, and flow control inside the encryption. 

A problem with the OSI Layer model is that as one piles one layer on top of another, one is apt to get redundant round trips. 

According to Google research, an additional 400 milliseconds of delay reduces usage by 0.76% – which scales to roughly two percent per second of delay.

Redundant round trips become an ever more serious problem as bandwidths and processor speeds increase, but round trip times remain constant – indeed, they increase as we become increasingly global and increasingly rely on space based communications.

Used to be that the biggest problem with encryption was the asymmetric encryption calculations – the PKI model has lots and lots of redundant and excessive asymmetric encryptions. It also has lots and lots of redundant round trips. Now that we can use the NVIDIA GPU with CUDA as a very high speed cheap massively parallel cryptographic coprocessor, excessive PKI calculations should become less of a problem, but excess round trips are an ever increasing problem. 

Any significant authentication and encryption overhead will result in people being too clever by half, and only using encryption and authentication where it is needed, with the result that they invariably screw up and fail to use it where it is needed – for example the login on the http page. So we have to lower the cost of encrypted authenticated communications, so that people can simply encrypt and authenticate everything without needing to think about it. 

To get stuff right, we have to ditch the OSI layer model – but simply ditching it without replacement will result in problems. It exists for a reason, and we have to replace it with something else. I am working on an idea for a replacement, a protocol compiler, which provides compile time protocol layering in place of OSI's run time protocol layering, as discussed in “Generic Client Server Program”. I hope to publish it in due course, but I am not going to report that idea in this posting.

What we need is a packet protocol that establishes an encrypted connection on top of unreliable packets, with minimal round trips, and without increasing fragility to DoS.

To establish a connection, we need to set a bunch of values specific to this particular channel, and also create a shared secret that eavesdroppers and active attackers cannot discover. 

The client is the party that initiates the communication; the server is the party that responds.

I assume a mode that provides both authentication and encryption – if a packet decrypts into a valid message, this shows it originated from an entity possessing the shared secret. This does not provide signing – the recipient cannot prove to a third party that he received it, rather than making it up. 
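
That mode is what a modern AEAD (authenticated encryption with associated data) cipher provides. A sketch using ChaCha20-Poly1305 from the third-party Python cryptography package, on the assumption that the 256-bit hash output serving as the shared secret is used directly as the key:

```python
# Sketch of the authenticate-and-encrypt mode assumed above, using the
# ChaCha20-Poly1305 AEAD from the third-party "cryptography" package.
# If a packet decrypts at all, it came from a holder of the shared
# secret; there is no signature, so the recipient cannot prove that
# to a third party.

import os
from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305

def seal(shared_secret: bytes, payload: bytes) -> bytes:
    nonce = os.urandom(12)  # must never repeat under the same key
    return nonce + ChaCha20Poly1305(shared_secret).encrypt(nonce, payload, None)

def open_packet(shared_secret: bytes, packet: bytes) -> bytes:
    # Raises InvalidTag if the packet was forged or tampered with.
    nonce, ciphertext = packet[:12], packet[12:]
    return ChaCha20Poly1305(shared_secret).decrypt(nonce, ciphertext, None)
```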

The client typically uses a transient public key. If it has a permanent relationship with the server, it uses a durable public key representing that relationship, one key per relationship. This public key is not in fact public, but is a shared secret between the server and the client. The corresponding durable secret key is not necessarily stored on the client for an unlimited time. It may instead be generated, at the cost of an extra round trip, from a durable salt stored on the server and a short password that is not a shared secret, but a truly private secret known only to the client. Such a password is subject to dictionary attacks by the server, or by anyone who manages to steal the server login database, but not to dictionary attacks by eavesdroppers or by active adversaries who interfere with messages. Hence the need for the client's durable public key to remain secret.
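
A sketch of how that regeneration might work, using scrypt as the deliberately expensive password-to-key derivation; the group parameters here are toy placeholders, not a recommendation:

```python
# Sketch of regenerating the client's durable key pair from a short
# private password plus a durable salt fetched from the server (the
# extra round trip mentioned above). scrypt makes dictionary attacks
# by the server, or by a thief of its login database, expensive.
# "prime" and "g" are toy placeholders; a real deployment would use a
# standard group, such as an elliptic curve.

import hashlib

prime = 2**127 - 1
g = 3

def durable_keypair(password: str, salt: bytes):
    c = int.from_bytes(
        hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1),
        "big") % prime
    C = pow(g, c, prime)  # C is a secret shared with the server, not public
    return c, C
```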

If the client wishes to prove rightful possession of a certain reputation to a third party, it uses transient cookies issued by a reputation server. Servers, however, generally have distributed reputation attached to their long lived public keys – distributed reputation being held by clients, not by some reputation server. 

For the moment I ignore the hard question of server key distribution, glibly invoking Zooko's triangle without proposing an implementation of the other two points and three sides of the triangle or a solution to the problem of managing distributed reputations in Zooko's triangle.  (Be warned that whenever people charge ahead without solving the key distribution problem, the result is a disaster.)

If the client wishes to login – that is, wishes the server to recall a durable relationship:

The client generates a random ephemeral private key x and the corresponding ephemeral public key X = g^x, recollects the relationship-specific durable private and public keys c and C = g^c, and recollects the server's public key S = g^s, where s is the server's durable private key.

Client --> Server:

The client's network address and the port on which the client will receive packets, the protocol identifier, protocol version and variant numbers, X, a client identifier that enables the server to recall the public key corresponding to the durable relationship, and the client time at which the message was sent.
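
The text does not fix a wire format, but a sketch with hypothetical field sizes may make the packet concrete; the identifier and sizes below are illustrative assumptions only:

```python
# Sketch of the first, unencrypted client packet. All field sizes and
# the protocol identifier are hypothetical; the text above does not fix
# a wire format. X is the ephemeral public key; client_id lets the
# server recall the durable public key C for this relationship.

import struct
import time

PROTOCOL_ID = 0xC0DE  # hypothetical protocol identifier
VERSION, VARIANT = 1, 0

def client_hello(client_addr: bytes, client_port: int,
                 X: int, client_id: bytes) -> bytes:
    return struct.pack(
        ">4sHHBB32s16sd",
        client_addr,            # IPv4 address, 4 bytes
        client_port,            # port on which client will receive packets
        PROTOCOL_ID,
        VERSION, VARIANT,
        X.to_bytes(32, "big"),  # ephemeral public key
        client_id,              # 16-byte relationship identifier
        time.time(),            # client time the message was sent
    )
```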

If the requested protocol is not OK, we go into protocol negotiation. Assuming it is OK, which it probably will be, the server assigns a port number that the client is to use in sending it packets. An 8 bit port number is too small: the port number has to be variable size, set according to the server's circumstances, up to a maximum of 128 bits.

The server generates a random ephemeral private key y, the ephemeral public key Y = g^y·C, and a random number v.

The server does not generate a new ephemeral private key for every connection attempt; it generates a new ephemeral private key at most once every few seconds. It does, however, generate a new random number v for every connection attempt.

The port address has to contain enough bits that DoS cannot cause the server to rapidly rotate through all free ports. Server encrypts the port number using a symmetric key known only to itself, together with the network address information and other connection setup material, v, and sufficient information to identify which value of y and Y, out of several recent values, it is using for this connection attempt.

Let us call this block of encrypted information Q. This value will be sent to client, and then back to server, unchanged. Its function is to avoid the necessity for the server to allocate memory for a client that has not yet validated. Instead the state information is sent back and forth. To save space, v could be a hash of Q.
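
A sketch of Q, reusing the AEAD primitive from earlier: the server seals the pending connection state under a key known only to itself, and derives v as a hash of Q, as suggested above. The state fields chosen here are illustrative:

```python
# Sketch of the stateless setup blob Q. The server seals the pending
# connection state under a symmetric key known only to itself, sends it
# out, and gets it back unchanged, so it allocates no memory for clients
# that have not yet validated. v is derived as a hash of Q to save space.
# The state fields packed here are illustrative.

import hashlib
import os
from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305

SETUP_KEY = os.urandom(32)  # known only to the server

def make_Q(client_addr: bytes, port: int, y_index: int):
    # port fits in 16 bytes, matching the up-to-128-bit port numbers above.
    state = client_addr + port.to_bytes(16, "big") + y_index.to_bytes(1, "big")
    nonce = os.urandom(12)
    Q = nonce + ChaCha20Poly1305(SETUP_KEY).encrypt(nonce, state, None)
    v = int.from_bytes(hashlib.sha256(Q).digest()[:16], "big")  # v = hash(Q)
    return Q, v

def open_Q(Q: bytes) -> bytes:
    # Raises InvalidTag unless Q really was produced by this server.
    return ChaCha20Poly1305(SETUP_KEY).decrypt(Q[:12], Q[12:], None)
```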

It does no harm to use the same value of y with several clients, provided that each client uses its own X – it is only a problem if y stays unchanged for days. We limit the frequency at which y is changed to be such that the CPU cannot be overloaded. It would do harm to use the same value of v with several clients, for if one client knows in advance what the value of v is going to be, it can cook X to fake being another client.

Server --> Client:

Q, Y, v, the assigned port number on which the server will receive packets, the server time at which the previous message from the client was received, the server time at which this message was sent, the various bits of channel setup information, such as the window shift, needed for flow control of the channel, and a request for proof of work on Q.

The proof of work is trivial or non-existent if the server is not under load, but is increased as the server's load approaches the maximum it is capable of, in order to throttle demand.
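
A hashcash-style sketch of such a proof of work over Q, where the server raises the difficulty as its load rises:

```python
# Sketch of a hashcash-style proof of work over Q. The server chooses
# the difficulty (leading zero bits required in the hash) according to
# its load: zero when idle, rising toward capacity to throttle demand.
# Each extra bit of difficulty doubles the client's expected work.

import hashlib

def check_work(Q: bytes, difficulty: int, nonce: int) -> bool:
    digest = hashlib.sha256(Q + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") >> (256 - difficulty) == 0

def solve_work(Q: bytes, difficulty: int) -> int:
    nonce = 0
    while not check_work(Q, difficulty, nonce):
        nonce += 1
    return nonce
```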

The client does the proof of work, and generates a random number u. u is generated after the server has committed itself to Y, because we do not want the server to know u until after it has committed to a particular value of y. Similarly, we did not want the client to know v until after it had committed itself to a particular value of x.

The client computes the shared secret as the hash of ((Y/C)·S^u)^(x+cv).

Now Client encrypts and authenticates the first packet of actual information, the payload to be transmitted in this encrypted and authenticated conversation, preceding it with the random number u.

Client --> Server:

u, the server-encrypted setup information Q as received, the proof of work, and the payload encrypted by the shared secret, together with the client time that the previous message from the server was received and the client time that this message to the server was sent, also encrypted by the shared secret.

The server checks the proof of work, decrypts the server-encrypted setup information to make sure that it is validly formatted, and therefore that it originated from the server itself, and then creates an entry in its hash table for this connection. It computes the shared secret as the hash of (X·C^v)^(y+su).

This will agree with the client-side shared secret, for both are equal to g^((y+su)(x+cv)).
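
The algebra can be checked numerically. Here is a sketch in a toy multiplicative group; a real implementation would use an elliptic curve group, and would hash the result to produce the symmetric key:

```python
# Numerical check of the key agreement above, in a toy multiplicative
# group (a real implementation would use an elliptic curve). Both sides
# arrive at g^((y+su)(x+cv)) although x, c, y, and s never cross the wire.

import hashlib
import secrets

prime = 2**127 - 1  # toy modulus
g = 3

def rand() -> int:
    return secrets.randbelow(prime - 2) + 1

s, c = rand(), rand()                        # server and client durable keys
S, C = pow(g, s, prime), pow(g, c, prime)

x, y, v, u = rand(), rand(), rand(), rand()  # per-connection randomness
X = pow(g, x, prime)
Y = pow(g, y, prime) * C % prime             # Y = g^y * C

# Client side: ((Y/C) * S^u)^(x + cv), with Y/C computed by modular inverse.
g_y = Y * pow(C, prime - 2, prime) % prime
client_secret = pow(g_y * pow(S, u, prime) % prime, x + c * v, prime)

# Server side: (X * C^v)^(y + su).
server_secret = pow(X * pow(C, v, prime) % prime, y + s * u, prime)

assert client_secret == server_secret        # both equal g^((y+su)(x+cv))
shared_key = hashlib.sha256(client_secret.to_bytes(16, "big")).digest()
```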

You will notice that the server only allocates memory and does heavy computation *after* the client has successfully performed proof of work and shown that it is indeed capable of receiving data sent to the advertised network address.

Now we have a shared secret, protocol negotiated, client logged in, in one round trip plus the third one way trip carrying the actual data – the same number of round trips as when setting up an unencrypted unauthenticated TCP connection.

You will notice that there is no explicit step checking that both parties have the same shared secret. This is because each packet sent is authenticated by the shared secret, so if they do not have the same secret, nothing will authenticate.

Let us suppose instead that the client is *not* going to login – that the client is a random anonymous client connecting to a known server with a widely known public key.

In that case, the protocol is the same, except that c is always zero (so that C = 1) and v is irrelevant.
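
Continuing the toy-group sketch above, the anonymous case collapses to a plain Diffie-Hellman style agreement against the server's durable key:

```python
# The anonymous case of the sketch above: c = 0, so C = g^0 = 1, the
# blinding of Y disappears, and v no longer enters the computation.
# Reuses prime, g, s, S, and rand() from the earlier sketch.

x, y, u = rand(), rand(), rand()
X, Y = pow(g, x, prime), pow(g, y, prime)   # C = 1, so Y = g^y

client_secret = pow(Y * pow(S, u, prime) % prime, x, prime)
server_secret = pow(X, y + s * u, prime)
assert client_secret == server_secret       # both equal g^((y+su)x)
```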

These documents are licensed under the Creative Commons Attribution-Share Alike 3.0 License