Stanford Seminar - The TLS 1.3 Protocol

Mar 19, 2024
So I guess two preparatory notes. One, I talk very fast, so I'll try to slow down, but feel free to say hi if I'm going too fast, or interrupt me and ask questions. Second, I just want to level set a little bit here: this talk is about communication security that involves cryptography in the same way that plumbing involves hydrodynamics. We use it, but we don't really understand it; we treat it as a building block. You don't need to understand much about cryptography for this to make sense.
You don't need to understand the cryptography deeply, but you do need to understand what the basic pieces do. To set a level: how many people recognize the word RSA? Okay. Diffie-Hellman? Signatures? MACs? Fantastic. I'm going to assume you know those things. Seriously, as I was telling Professor Boneh just two minutes ago, the striking thing about what has gone wrong over the years with TLS is how little of it depends on breaking cryptographic primitives and how much of it depends on understanding the logic of how systems are put together and what the building blocks actually do — although there have been parts of the former too. So here's my timeline of what I plan to talk about.

I'm going to go over the background of TLS and give a brief introduction to how TLS works — the current version of TLS — and then talk about some of the problems that we've actually encountered. I'm actually going to do that interstitially: I'll present a feature and then talk about some of the issues we've had with it. The idea here is, A, to motivate why we're doing something new, and B, to give you an idea of the kinds of things that have been going wrong and the rationale for how we're trying to do better this time. Then I want to talk about the goals of TLS 1.3, which is the version we're currently working on, show you what the protocol looks like, and finally spend a slide or two on where we are and how we plan to move forward. So, what is TLS?
As I was writing this, I was distressed to realize how long I've been working on this technology — over 20 years now, which is really kind of depressing on some level. I was in no way the original designer. The version we're using now was basically designed by Netscape in the prehistory of the web, when, as suggested above, they needed something to do security and they needed it very quickly. The basic design criterion was that we would like people to be able to buy things over the Internet and put their credit cards in the browser; that was the main thing people cared about.
The idea people had in their heads was that this was going to be like Unix sockets, but secure, and at the time it was called Secure Sockets Layer. There were several very early versions of SSL, starting with SSLv1, which never really came out of the lab, then SSLv2, which was relatively widely implemented but had a number of problems. What basically replaced SSLv2 was SSL version 3, which is very similar to TLS now. That was designed in a relatively short time by some Netscape people to fix some perceived problems with version 2; it got very wide implementation and was then standardized by the IETF with the usual kind of modest modifications in the service of both improvement and not-invented-here. So TLS goes back to about 1998 at this point, and SSL version 3 goes back to about '95. To give credit to the people who did this work: they said, let's build something with security over TCP, and it's proven to be an incredibly powerful primitive, because a lot of things go over TCP and just having a secure channel abstraction is very powerful. It's now used for almost anything you can think of. The initial designs were for HTTP, but it's used for VPNs, it's used for email, both sending and retrieving, it's used for Internet of Things communication, it's even used for voice and video security. There's now a version of TLS called Datagram TLS, which is basically the same concept for UDP — that's part of what's sometimes used for voice and video, and also for IoT. So as I said, Netscape were the original people who did this, but the Internet Engineering Task Force contributed and then picked it up, and since then there have basically been three versions of TLS: 1.0, which I'd say was the first standardized version; 1.1, which was a trivial change; and 1.2, which was done in response to the hash-function work.
Almost eight years ago we broke MD5 and then SHA-1, and everyone was terrified that all our hash functions were going to be destroyed. When SSLv3 shipped, it basically assumed you had MD5 and SHA-1, and what we wanted was to make it work with SHA-2 and SHA-3 and whatever came next. That turned out to be a lot harder than I would have thought, because things are more fragile than you think. As I said, TLS and SSL present this secure channel abstraction, and the basic model that almost everyone operates with is this: the client knows who it wants to talk to.
The client has a domain name for the server, and the server doesn't know who the client is. The client says: I want to connect to this website. Although in principle there are TLS mechanisms for the client to authenticate to the server, in practice they're used comparatively rarely. The basic mode used by almost all web browsers is that the client connects to the server and the client doesn't authenticate at all; if the client wants to authenticate, it does so by typing a password into some web page, which is then sent over the secure channel. The reason this is supposed to be secure is that the browser verified you were talking to Gmail, and therefore everything you're typing is going to Gmail.
Once this is done, all data is encrypted and authenticated. The guarantee from the client's perspective is supposed to be: everything I send goes only to the server, and everything I receive from the server was secured in transit and actually came from the server rather than someone else. The guarantee from the server's perspective is a little fuzzier: the same person who sent me the initial block of data is now sending me the last block of data, but I don't really know who they are. So if there's a password, the hope is that the password I receive and the request to read the email came from the same guy.
That's the guarantee we're supposed to get, but again, you don't know who it is if you don't authenticate them. TLS has this kind of structure: when SSL and TLS were designed, a number of secure protocols were being designed — SSH and IPsec and a bunch of other things — and after a bunch of false starts they all converged on the same basic structure, which is a handshake protocol that's generally pretty sophisticated. The job of the handshake protocol is to negotiate the key material — you want to establish a shared key between the client and the server — and to negotiate the algorithms, the modes, the parameters: typically, which key exchange you're using, which encryption algorithm you're using, that kind of thing. And you want to authenticate one or both sides; as I said, in TLS that largely means the server, but in IPsec or SSH it could mean both the client and the server. The vast majority of the complexity in TLS is in the handshake. Once the handshake is done, there's also a record protocol that actually transports the data from the client to the server and back, which is comparatively simple, although we've seen problems with it not providing the advertised security guarantees, and I'll have some material on that later. Basically all of these systems look alike: the initial handshake uses public-key cryptography — Diffie-Hellman, elliptic curve Diffie-Hellman, RSA, something like that — but that only negotiates a key, and the record protocol uses some kind of symmetric cryptography, these days typically AES or ChaCha, something like that. Okay, this slide doesn't need to be there, in case people didn't know what I was talking about. So this slide here shows the skeleton of an essentially basic TLS 1.2 handshake, which is almost identical to the SSLv3 handshake.
At this level of abstraction, the SSLv3 handshake is identical. This is what people did almost all the time for a long time — basically the first five years, maybe three. The client opens with this ClientHello message; the most important thing for now is that it contains a nonce, a random element. A common problem with cryptographic protocols is replay attacks, and the client's nonce serves to guarantee to the client that whatever the server sends is fresh. The server responds with its own nonce and with a certificate, and the certificate binds, in this case, its RSA public key to its identity. At this point the client can't validate that it's actually talking to the server, but it can at least validate that there is some credential binding an RSA public key to the claimed identity. Note that up to the point where the server has received this first message, nothing has actually required knowing the private key, so all the client knows is that there is someone claiming this identity. As I said, the client then generates a random value — what gets turned into the master secret, which is basically a 48-byte random secret; 48 bytes as a string, not 48 bits, that would be bad — encrypts it under the server's RSA public key and sends it to the server, and then follows up with a MAC over the whole handshake, keyed by the master secret. That's called the Finished message. The guarantee the client has at this point is that only the real server should be able to decrypt this message, and therefore when the server responds with its own MAC over the handshake, the client knows it's now talking to the true server — at least theoretically — because no one else could have reconstructed the key material generated from the master secret.
Also, the convention I'm using here is that anything in italics is encrypted, so these messages are also encrypted. As a practical matter, the server couldn't even have read the client's Finished message, and couldn't have sent a valid Finished message of its own, without that key material. Historically, until recently, TLS has used composite encryption modes, by which I mean you would use AES and you would use HMAC-SHA-1 and put them together. TLS is now moving very aggressively toward authenticated encryption with additional data (AEAD) modes, which have better security properties, at least as implemented in practice. And you can see that most of the weight of the protocol is in the handshake — several times as much machinery as in the application-data path, which is probably about right in terms of what the protocol is like. Once you've done all this, you just start sending application data back and forth.
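The derivation I just described — a master secret diversified by the two nonces, and a Finished MAC over the handshake transcript — can be sketched concretely. This is a simplified model in the style of the TLS 1.2 PRF from RFC 5246 (HMAC-SHA256 based); all the input values here are dummy placeholders, not real protocol data:

```python
import hmac
import hashlib

def p_sha256(secret: bytes, seed: bytes, length: int) -> bytes:
    """P_hash from RFC 5246: expand `secret` into `length` bytes with HMAC-SHA256."""
    out, a = b"", seed
    while len(out) < length:
        a = hmac.new(secret, a, hashlib.sha256).digest()            # A(i)
        out += hmac.new(secret, a + seed, hashlib.sha256).digest()  # HMAC(secret, A(i) + seed)
    return out[:length]

def prf(secret: bytes, label: bytes, seed: bytes, length: int) -> bytes:
    return p_sha256(secret, label + seed, length)

# The client's random value, sent encrypted under the server's RSA key,
# is expanded with both nonces into the 48-byte master secret.
pre_master = b"\x03\x03" + b"\x11" * 46          # dummy pre-master secret
client_random, server_random = b"C" * 32, b"S" * 32
master = prf(pre_master, b"master secret", client_random + server_random, 48)

# The Finished message is a MAC over a hash of the handshake transcript,
# keyed by the master secret.
transcript_hash = hashlib.sha256(b"...all handshake messages...").digest()
verify_data = prf(master, b"client finished", transcript_hash, 12)
```

Because both nonces feed the derivation, a peer that replays stale messages cannot arrive at the same master secret as a fresh session — that's the anti-replay role of the nonces described above.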
Would that life were that simple — but life is more complicated, because the client and the server have various capabilities to negotiate. One thing that actually worked quite well: when TLS and SSL were first implemented, people had DES and Triple DES, and now we have AES and ChaCha and things like that, and we made that transition pretty smoothly without breaking anyone, which is actually pretty good. We've also gone through several versions of the protocol without breaking anyone too much, which is also pretty good. So there's more stuff here: the client sends its version numbers for version negotiation, and the client offers a bunch of cipher suites. What's in a cipher suite is basically the cryptographic parameters you want to use: a key exchange algorithm, a signature algorithm, a bulk encryption algorithm, and a MAC — although these days those last two become an AEAD algorithm. It's also possible to negotiate compression; that turned out to be a bad plan and we're going to eliminate it.
I'll talk about that very briefly. At some point after SSLv3, someone decided to be nice so that the client could send arbitrary things in its ClientHello, in case you want to say things the original grammar didn't allow: basically, you can send a bunch of extensions, and the extensions are just type-value pairs. I'll come back to that in a moment. The basic model that TLS has always followed, going back to SSL version 3, is that the client offers a bunch of stuff and the server picks from the list. So for pretty much everything you saw: the client offers a list, and the server picks one — it chooses a cipher suite, a compression method, and then the extensions that it actually echoes.
By the way, I haven't talked about the session ID; I'll get to that in a minute. This slide is just redundant: the cipher suite contains basically all the parameters you want to use. There's been a big split in protocol design taste, by the way, on whether you should do an individual selection for each of these parameters or have a suite containing them all, and both end up being terrible. With individual selection you can't really mix and match, because it turns out a lot of things aren't compatible with other things — you have a hardware security module that likes to sign things with one algorithm, but then you have to use another algorithm for the key exchange or for hashing. Conversely, if you make suites, you get combinatorial explosion. So neither works very well: IKE and IPsec made the Chinese menu, and TLS does combinatorial explosion. Those are the two main options. So with that, I want to talk a little bit about how things have gone wrong over the years, and I'm going to give a couple of examples of TLS features that turned out to not be as impressive as they could have been.
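The offer-and-pick pattern can be sketched in a few lines. This is a toy model, not a real TLS implementation: the suite names are real IANA-registered names, but the selection policy shown (honoring the client's preference order) is just one common choice — servers may equally impose their own order.

```python
# Toy model of the offer/pick pattern: the client offers an ordered list of
# cipher suites; the server picks one it also supports.
def negotiate(client_offer, server_supported):
    for suite in client_offer:  # honor the client's preference order
        if suite in server_supported:
            return suite
    raise ValueError("handshake_failure: no common cipher suite")

client_offer = [
    "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256",  # key exchange + signature + AEAD + hash
    "TLS_RSA_WITH_AES_128_CBC_SHA",           # older composite-mode suite
]
server_supported = {"TLS_RSA_WITH_AES_128_CBC_SHA"}
print(negotiate(client_offer, server_supported))  # → TLS_RSA_WITH_AES_128_CBC_SHA
```

The combinatorial-explosion complaint falls out directly: every useful combination of key exchange, signature, cipher, and hash needs its own registered name in that list.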
TLS had what on the surface looked like a very elegant mechanism for renegotiating. The idea is that I set up a TLS connection with you and for some reason I'm unhappy with the parameters. There are many reasons I could be unhappy; one is that I've been encrypting for the last month with the same keys and I might think it's time for new keys. This is a much bigger problem with some ciphers than others; even with AES you can only encrypt maybe 2^32 blocks before things start to break down. So TLS had, as I said, what seemed superficially to be an elegant mechanism: since everything is already encrypted, you could just take the initial handshake and then renegotiate by offering a new handshake over the encrypted connection, and the client and server would pick up where they left off. It doesn't seem like a bad idea, and it works pretty well. Sometimes there's a bit of confusion about the status of data that's in flight between some of these handshake messages, which made people a little sad — in fact there's a discussion going on right now on the TLS mailing list about how to deal with that — but that's not the part that made people really sad. If you look at the dates, you'll notice that TLS was designed around 1994, and this problem was discovered in 2010. As I was saying before, the interesting thing is that you don't need to know anything about cryptography to understand the problem; you just have to see it, and it took that long for someone to finally see it. The problem is actually quite simple.
Remember I said we simply do a new handshake over the old connection? Well, there is nothing connecting the old connection and the new one, so it's possible for the parties to be confused about how many connections there have been. The basic attack looks like this. We always assume the network attacker can do whatever he wants with the packets. The attacker makes his own, absolutely normal TLS connection to the server — remember that for HTTPS the attacker generally isn't authenticated — and then sends his own message over the channel, say a GET (I wish that error weren't bidirectional, but there it is): something that looks like the beginning of an HTTP request but not the end of one. So now the server is warmed up, reading this, and ready to keep reading. Then the attacker intercepts the client's TLS connection and just tunnels it over his own encrypted channel to the server. At this point the client and the server have an encrypted connection that the attacker can't read, but that's fine, because he doesn't need to read it. The first thing the client does is send its own HTTP request on the channel — I'm assuming the web here — and somewhere in that HTTP request is its cookie. If you do this exactly right, you get the right number of CRLFs and the whole thing looks like one big request with one badly formed header line, which is where the client's request line ended up. What the attacker has managed to do is splice his own request onto the client's authentication credentials.
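The splicing trick can be illustrated with plain bytes. This is a sketch with a hypothetical host, path, and cookie value, just to show how the attacker's unfinished header line swallows the victim's request line:

```python
# The attacker sends the start of an HTTP request with no terminating blank
# line, then splices the victim's renegotiated connection in behind it.
attacker_prefix = (
    b"GET /buy?item=pony HTTP/1.1\r\n"
    b"Host: victim.example\r\n"     # hypothetical host, for illustration
    b"X-Ignore-This: "              # note: no CRLF -- the header is unfinished
)
victim_request = (
    b"GET /index.html HTTP/1.1\r\n"
    b"Host: victim.example\r\n"
    b"Cookie: session=SECRET\r\n\r\n"
)
spliced = attacker_prefix + victim_request
# The server parses ONE request: the attacker's request line, with the
# victim's request line buried inside the unfinished header, and the
# victim's cookie attached as a normal header.
print(spliced.split(b"\r\n")[0])  # → b'GET /buy?item=pony HTTP/1.1'
```

The server sees the attacker's chosen request line authenticated by the victim's cookie; the victim's own GET line is just a garbage header value.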
I wish I had used a POST here, but the point is: if that request has any side effects, the attacker has basically convinced the client to do something for him that it otherwise wouldn't have done. This is bad: the attacker injects an arbitrary prefix into the client's connection. One thing you'll find, by the way, is that many of these attacks don't look very serious in traditional network environments, because if this is, say, a Telnet connection or an IMAP connection, it's harder to convince the client to generate the data you want it to generate. But with web connections it's usually very easy, because the attacker has enormous amounts of control over what the client actually does: the traditional move is to create your own web page that points at the victim's web site, and now you can synthesize all kinds of requests and make the client issue them for you. So the advent of the web has made this threat much more serious, and if you look at a number of the attacks we've seen lately on TLS, they basically rely on the attacker having pretty tight control of your web browser — which is fine from the attacker's perspective, because you can get that kind of control pretty easily — but they really only work because of that environment. So: this is bad for the client, and the server can't tell. The server thinks it's a renegotiation, and the server may even have wanted the renegotiation — let me do this first and I'll come back to that. The first idea we had when this came up was to turn off renegotiation: we're just not going to let anyone renegotiate.
That's not the worst idea in the world, but it turns out there are reasons you need renegotiation, and the more important of the first two is hiding the client certificate. I don't think I have a slide showing client authentication, but in TLS 1.2 the client certificate is sent in the clear, not encrypted, and that means any network attacker can look at the client certificate and see who the client is, which obviously has serious privacy problems. So the nice thing about this renegotiation technique is that if the second handshake is legitimate, the client certificate is encrypted, because it's carried inside the first connection. That would be nice. There's another reason people often want to renegotiate.
You often have setups where only part of a website is protected: you have a site that's authenticated using client authentication, and the client will browse the site for a while and then eventually hit some protected resource, and then — and only then — does the server want to interrogate the client for its certificate. This is compounded by the fact that the UIs in browsers are really bad with client certificates, so asking the client for a certificate is incredibly invasive, and servers avoid it at all costs. The point is that if you have one of these multi-level sites, what you want to do is let the client browse for a while and only then force a renegotiation. This is going to be a problem later; we'll get to it. So the fix for this, once you know about it, doesn't seem so bad. A lot of the situations you find are like this: we had this really quite complicated device, we found things we didn't like, and we asked, what is the minimum change we can make to fix this particular problem — and maybe some problems related to it — without completely redesigning everything, because that turns out to be very expensive, as we're seeing now. The minimal fix here — maybe a little bigger than minimal, but modestly minimal — is to add a new extension to the client and server hellos, and the way it works is basically this: you echo the Finished MACs from the previous handshake into the new handshake. What happens then, in the attack case here, is that the server thinks it has a MAC from the first handshake but the client doesn't, so the handshake fails. That seems to work pretty well, although it turns out for other reasons to have problems a bit later. And again, it took about four years to figure this out. Right — I wish I'd done this in the opposite order.
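The fix can be sketched as a simple check. This is a toy model in the spirit of RFC 5746, not the real wire format: each side remembers the previous handshake's Finished verify_data, and a renegotiation whose echoed value doesn't match is rejected — a splicing attacker can't produce the value the honest peer expects.

```python
import hmac

# Toy check: on renegotiation, the peer must echo the Finished verify_data
# from the previous handshake on THIS connection; a spliced handshake from a
# different connection carries the wrong value (or none) and is rejected.
def check_renegotiation(previous_verify_data: bytes, echoed: bytes) -> bool:
    # constant-time comparison, as you'd want for any MAC-like value
    return hmac.compare_digest(previous_verify_data, echoed)

honest = bytes.fromhex("aabbccddeeff00112233aabb")    # verify_data from handshake #1
assert check_renegotiation(honest, honest)            # legitimate renegotiation
assert not check_renegotiation(honest, b"\x00" * 12)  # spliced handshake fails
```

The key property is that the echoed value is carried inside the new (encrypted, MAC-protected) handshake, so the attacker can't patch it up in transit.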
What I was talking about before: the client is browsing along fine, asks for some secure resource, and then the server says, "oh, now I need your certificate," and asks for it. The way the server does that, by the way, is to send this HelloRequest message that says: please start a new handshake with me. (Wow — that really should have been a backslash, shouldn't it?) Okay. So to understand how this fix didn't produce exactly the desired result, you need to understand one more feature of TLS, something called session resumption. When these things were designed, public-key operations — I mean RSA — were incredibly expensive and were the bottleneck in building any kind of secure server. So when the system was built, they designed a mechanism that would let you amortize one public-key exchange over multiple connections. This is called resumption, and the basic idea is really simple: on your initial connection — whether you were using RSA or Diffie-Hellman, though back then it was RSA — the server gives you an identifier, and when you come back you say, "I know about this identifier," and the server looks it up in a table and says, great, I have the key. So it looks like this — red is new. The client connects — this is the new handshake — and says: here is my session ID. The server says: fantastic, here is the session ID. (I wish there weren't a certificate on this slide, because it doesn't belong there.) At this point you're encrypted and you're good to go. Oh, this is very embarrassing — this is the slide I wanted. So, to be correct: the initial handshake looks like this.
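Server-side, resumption amounts to a table lookup. A toy sketch, with random placeholder values standing in for the real session ID and master secret:

```python
import os

# Toy server-side session cache: the expensive public-key exchange happens
# once, and a later connection presenting the same session ID reuses the
# stored secret with no public-key crypto at all.
session_cache = {}

def full_handshake():
    """Stand-in for the expensive public-key exchange.
    Returns (session_id, master_secret) and caches the mapping."""
    session_id, master_secret = os.urandom(32), os.urandom(48)
    session_cache[session_id] = master_secret
    return session_id, master_secret

def resume(session_id):
    """Cheap path: just a table lookup. None means: do a full handshake."""
    return session_cache.get(session_id)

sid, secret = full_handshake()
assert resume(sid) == secret         # resumption reuses the negotiated secret
assert resume(b"\x00" * 32) is None  # unknown ID forces a full handshake
```

The offload work mentioned below (session tickets) moves this table to the client by handing it the state, encrypted under a server key — same idea, no server-side database.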
I thought I only had one slide. The initial handshake looks like this, with just the new session ID saying: this is what it is. Then the resumption looks like this: there's no public-key crypto at all, so obviously this is a big performance improvement. If you go back and look at the work that was done when TLS first came out, you'll see a lot of measurements basically showing: this is what your web server will do without resumption, this is what it will do with resumption, and it's just a no-brainer — there's no way anyone would not run this. In fact, a ton of subsequent work — partly done here at Stanford with Professor Boneh and Hovav Shacham — went into offload: figuring out how to eliminate the need for the server to store the session ID database by offloading the data to the client, because this kind of optimization was very attractive. However, it turns out this doesn't necessarily make the world as much better as you'd expect. Some very good work came out of Inria in 2014 on something called the triple handshake. The background here is that the Inria folks — Karthik Bhargavan and some other people — have created a verified TLS implementation called miTLS, which is written in F#, and they actually have a formal proof of it. Somewhere along the way, working on the proof, they found this. The first piece, which is simple and has been known for a while, is that it's possible for an attacker to force two TLS connections to have the same key material. The way this works is that if you're doing RSA, and you're connecting to the attacker, you simply give the attacker the key material you want to use, and the attacker can make his own connection to the server and give the server the same key material. The only diversification in the handshake is the random nonces, but the attacker can simply replay them, and so you end up with the same key material.
This has been known for a long time. Interestingly, what wasn't known is that it's also possible to do this with Diffie-Hellman if you're not careful — that was something new — but a lot of people use RSA, so that wasn't the big result. So these two connections have the same secret, but that by itself isn't so serious, because look: you're connecting to the attacker, the attacker sees your data and he has his own connection to the server, so he could send whatever he wanted anyway. Where this gets serious is when you combine it with resumption. The client disconnects and makes a new connection — a connection it thinks is with the attacker — but the attacker forwards the resumption to the server, and because the keys are the same and the session IDs are the same, the client now has a connection to the server, a connection whose keys the attacker also knows. What the attacker does next is exactly the same thing I just showed you with the renegotiation attack: he sends his own request for the secure resource, because he knows the channel keys, and then gets out of the way and lets the server request renegotiation. At this point the server and the client renegotiate, and in that renegotiation the client sends a certificate and signs the handshake, and the client authenticates to the server.
The client believes it's authenticating to the attacker, but in reality it's authenticating to the server. So this is not good, I think we can agree. That was unfortunate. This basically resurrects the renegotiation attack: as Karthik points out, the attacker can't control the whole transmission, but once again he gets the client to do something on his behalf with a server. The fix for this — and you can see a kind of slow walk toward the TLS 1.3 design here — addresses the real problem. You might think the problem is resumption, but it isn't; the problem is the unknown key-share in the first handshake, and the fix is to make it impossible for two TLS handshakes with different peers to end up with the same key material. The fix that Bhargavan et al. propose is to take the server's certificate — in fact, the entire handshake transcript up to the point of the key exchange — and digest it into the key material. That ties the master secret to the server's identity, so you can't have two connections with different identities and the same keys — as long as your random numbers are fresh, and unless you break the hash, of course. And because the resumed handshake inherits the keying material from the initial handshake, it inherits the certificate binding too, and the link continues. Rolling this out is slow, because you have to change all the browsers and all the servers. Yeah, right — so, the state of play.
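The session-hash fix can be sketched in a few lines. This is a toy model in the spirit of RFC 7627, not the real derivation (which runs the transcript hash through the TLS PRF): the point is just that digesting the transcript — including the server's certificate — into the master secret makes connections to different identities yield different keys, even when the attacker replays the nonces and pre-master secret.

```python
import hashlib
import hmac

def derive_master(pre_master: bytes, transcript: bytes) -> bytes:
    """Toy extended-master-secret derivation: bind the master secret to a
    hash of the handshake transcript (which includes the server's cert)."""
    session_hash = hashlib.sha256(transcript).digest()
    return hmac.new(pre_master, b"extended master secret" + session_hash,
                    hashlib.sha256).digest()

# Same pre-master secret, same nonces -- but the transcripts differ because
# the certificates differ, so the derived master secrets differ.
pre_master = b"\x11" * 48
to_server   = derive_master(pre_master, b"hellos + nonces + SERVER certificate")
to_attacker = derive_master(pre_master, b"hellos + nonces + ATTACKER certificate")
assert to_server != to_attacker  # different identities -> different keys
```

That inequality is exactly what breaks the triple handshake: the resumed session inherits a master secret already bound to one identity, so the attacker's parallel connection can no longer share it.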
Yes, the situation is actually somewhat depressing: the fix for renegotiation has only recently been widely deployed, and the fix for the triple handshake is literally in an alpha version of Firefox right now. So yeah, it's not great, and it takes a long time to fix these things. The worst part — and I'll get to this in a second — is that it's not clear what you do as a client when you encounter someone who doesn't support these things. The way this is negotiated is that you have an extension that says: please do the session hash. What do you do when you run into a peer that doesn't support that extension?
That's problematic. Question from the audience: what exploits have been based on this? I can't say if it's good or bad, but one of the strange things about working in this field is that you get these very, very good papers showing that the protocols are broken in one way or another — maybe badly, maybe not so badly, but broken — and then you never hear about anyone exploiting them. So what do we know? I don't know. I mean, someone will demonstrate it — I don't want to say it doesn't work; someone will show you a demo — but you never hear about it happening in the wild.
You never hear: "I was on Amazon and someone bought stuff with my credit card number." I don't really have an answer for this; my simplistic answer is that everything else is in a much worse situation, so this is not the easiest path for an attacker. Question from the audience: could intelligence agencies use this stuff? Neither of these attacks is a fantastic fit for the kinds of things you'd probably want to do — I'm not an intelligence agency, but for the kinds of things you'd probably want to do if you were — because this is primarily about authentication rather than confidentiality.
Some of the attacks published in recent years against the confidentiality of TLS are things you could imagine an intelligence agency deploying — for example, the Logjam attack that came out earlier this year, or alternatively maybe the Lucky 13 work that Kenny Paterson did. Some of those are genuine threats to confidentiality, so those are things you can imagine an intelligence agency deploying. In many cases, though, the problem is that they are active attacks. The good news — and I should be clear about this — is that almost all of these attacks are active attacks. That doesn't mean you can't deploy them in the wild, but it means that if you're going to deploy them at scale, you may have side effects that people notice. It's not like we have a bunch of passive attacks that are really easy to mount. Again, I'm not saying it's not possible for people to run these in the field, just that it's not trivial. The other thing worth pointing out, of course, is that our defenses aren't very good at detecting things, so it's possible these attacks are happening all the time and we just have no idea — and of course intelligence agencies are much better positioned, because they have access to the infrastructure, or may have other means.
To give an example of one where it might be difficult to mount an active attack: the Lucky 13 attack from Kenny Paterson's group relied on the client sending the same data to the server over and over again. You could mount an attack like that, but the consequence would be that the user's machine generates a lot of extra traffic, and they might notice under some circumstances. So some of the attacks are more detectable than others. [Dave:] Two different questions. The first: when does the specification stabilize —
— you said you're now starting to get a reasonable amount of deployment? [Answer:] In calendar time, roughly one to two years. [Dave:] The second question: you referenced client authentication as well as server authentication, and my impression has been that TLS has been used for client authentication only a comparatively small amount. Is that starting to change — how, when, and by whom? [Answer:] There are a couple of settings in which client authentication is widely deployed. First of all, you're absolutely right: the vast majority of TLS is one-way, server-only authentication. The settings where client authentication is widely deployed are enterprise setups — companies do this quite often — and it's also starting to be deployed more in Web/PSTN-style systems and things like that where you want mutual authentication; they aren't really classic client-server settings, they're just settings that happen to be client-server. But it's absolutely true that this is a smaller set of things.
It's worth noting that the renegotiation attack works against classic client authentication, even with passwords, which is incredibly common — though it mainly matters with certificate authentication, because in the password case the attacker could have just convinced you to hand over your password anyway; he didn't need the attack. The point is that the client thinks it's talking to the attacker, not the victim — but the same applies to, say, Amazon. [Audience:] Eric, since we're slowing down anyway — [Answer:] No, please, go ahead. [Audience:] Let me comment on Beurdouche's paper from May, which showed that more than half a dozen popular implementations are seriously flawed as a result of client-side and server-side state machines with all these paths that aren't supposed to exist. It sounds like you're focusing on the spec — what could you do in 1.3 to minimize the large number of paths that shouldn't be there?
[Answer:] Thank you for including that. Yes — we're removing a lot of things. TLS 1.3 has become more complicated — well, maybe not in terms of the protocol, but in terms of the size of the effort — than we initially imagined, but the goals are still the same. The first goal was to clean things up, by which I mean removing a bunch of stuff that is either unsafe or not widely used, or both — there's a matrix of how unsafe something is versus how widely it's used — to try to improve the security of the system. And I think that goes two ways: one is to try to improve the specification, and the other is to simplify things enough that there's a better chance of producing an implementation that is secure.
I'm not sure I want to attest to how we did on the second part, but we're certainly trying. As I said before, the privacy properties of TLS 1.2 are actually not fantastic, and people's opinion of how much should be encrypted has become more aggressive over the last 20 years. The other thing that's relevant is that TLS performance used to be concentrated in the CPU, and CPUs have become very, very fast, and cryptographic algorithms have become much faster in comparison — especially with the move to elliptic curve cryptography. But unfortunately the speed of light has not become faster, and despite a lot of effort we haven't made much progress on that. So minimizing the number of round trips has become a very important property, and there's been a lot of work on it over the years, some of it here. And finally — the unfortunate part — we have a lot of important use cases that we can't rule out. We would like a world where people stop using TLS 1.2 and start using TLS 1.3, and if we take a large set of use cases that people really depend on and abandon them, then we'll have a forked world where both 1.2 and 1.3 exist and we're constantly patching 1.2.
There has to be a balance — and this is where the complexity unfortunately comes in — between keeping all the use cases people really care about, so they can convert, and at the same time eliminating everything dangerous that we wish they didn't use. [Audience:] In SSL, at least, essentially all the compute was pushed onto the servers, and there were DoS attacks that took advantage of that. Has that changed — are you planning to move more compute to the client? [Answer:] Elliptic curve cryptography inherently pushes more of the computation onto the client, just because of how the math works out — it's not a deliberate design decision, it's just the way it turns out. There's been a lot of discussion of other anti-DoS measures, mainly ones involving client puzzles and things like that, and it's hard to know how they'd actually work out. The bottom line is that with most of these systems, unless you do something explicit, it's pretty easy for the client to generate garbage and force the server to do something or other — and that's true of almost any protocol that doesn't have an explicit puzzle-solving step. The difficulty with puzzles is that they tend to have very unpleasant differential impacts on slow devices versus fast devices, and the other problem is that they tend to add round trips. When I show you the TLS 1.3 handshake, it becomes obvious that the server is being asked to do something expensive on the first message, and there's no real way for it to verify that the client has done work beforehand. So the only way for it to reject the client is to force the client to eat an extra round trip to solve some puzzle, and that costs exactly the round-trip performance we want. But yes, we've talked about it.
The other thing, frankly, is that the degree to which cryptography has become fast has really reduced people's concern about the computational cost. Adam Langley gave a very nice talk a few years ago about deploying TLS at Google and how little impact it had on their infrastructure, so people have gotten bored with the whole CPU-performance question. I'm not saying it doesn't matter at all — there are still places where it matters — but it's a much smaller impact than back when the compute cost was dominant. So, we have a bunch of things we removed, and I'll go through them reasonably quickly. The biggest one — the one that really hurts people — is static RSA. For a long time TLS had exactly the property you're talking about: it was largely static RSA. DHE and ECDHE existed, but people didn't use them because they were too expensive for the server, and static RSA had terrible PFS properties — by which I mean none: if the server's private key was revealed, it was game over. And it was also complicating the handshake, because you had to support both DHE and RSA modes. So we just removed it completely.
The other thing we removed is TLS's support for what we were calling custom Diffie-Hellman groups. What I mean by that is that TLS allowed the server to basically make up a Diffie-Hellman group and just tell the client to use it, instead of using one of the known groups like the NIST groups. And — this is definitely true — this was the only way to run finite-field Diffie-Hellman: you could certainly use a group someone else had generated, but there was no way to say "use group 32"; you actually had to send the client the whole group. With EC it was also possible to use custom groups, but nobody did. So, partly because we want people to move to EC, partly because we wanted the handshake to be smaller, and partly because the triple handshake attack against Diffie-Hellman actually used a carefully constructed bad group, we decided it would be a good idea to not allow servers to specify their own groups. TLS 1.3 will only let you use a small set of defined groups. There's one other reason this is attractive.
That other reason I'll get to in a second. We actually did a survey and found only two sites that specify custom groups for EC. Yes — break it, and you're going to break two sites; those people might have a bad day. So, the concern that comes back is: if you have a small number of groups, doesn't that make you an attractive target for precomputation attacks? We decided to do this anyway, for two basic reasons. The first is that we'd like to push finite-field users to much stronger groups — we didn't specify any finite-field group smaller than 2048 bits — which makes those attacks much harder to mount. The second is that we want to push people toward EC, and we don't think it's a good idea for people to generate their own EC groups.
[Audience:] Is that right? Okay, thank you — I didn't know. [Answer:] So we want people to use a very small number of EC groups. TLS 1.3 basically recommends a total of four EC groups: P-256, P-384, and the two new curves specified by CFRG — Curve25519 and the Goldilocks curve, Curve448. We don't want to encourage people to invent their own EC groups. We also removed compression, which I said was harmful, and we removed renegotiation — though then we had to add a special accommodation, which we'll talk about in a minute. There has been a series of increasingly dangerous attacks on the composite encryption modes, in part because TLS used a strange composite-mode construction — MAC-then-encrypt rather than encrypt-then-MAC — so to simplify the system and avoid worries about this, we removed them completely; all we specify now is AEAD, which in practice means AES-GCM or ChaCha20-Poly1305. And we've also reworked resumption a little. I don't think I need to spend much time rationalizing this one: there's a good paper on this.
From the authors of BEAST came the CRIME attack, which shows how to exploit compression. The general property — and this is a great example, as I was saying, of where the web environment is particularly dangerous — is that if the attacker controls some of the data on the wire and doesn't know some other data on the wire, he can use compression to determine what that other data is. The basic idea is that you have some target data — a cookie or a password — and some data the attacker controls, and you want to see how much redundancy there is between the attacker's data and the cookie or password. You look at the size of the output ciphertext: if you get a lot of compression, you have a good indication that there's a lot of redundancy, and if you get bad compression, your guess is probably wrong. You just keep iterating over and over until you've actually extracted the cookie or the password. And as I said, this is an example of where the web environment is particularly dangerous, because the attacker can run JavaScript in the client and make the client do this over and over again with different plaintexts. Nobody really knows how to do this safely in a generic way — it's a big open research problem; if anyone has an answer for how to use compression and encryption together safely, I'd love to hear it. Since we don't know how to do it safely, and the goal of TLS is to provide generic mechanisms, we just removed compression entirely.
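To make the length side channel concrete, here is a toy sketch in Python, with `zlib` standing in for TLS-level compression. The cookie value and the `guess_next_byte` helper are made up for illustration; a real CRIME attack works the same way but observes TLS record lengths on the wire.

```python
import zlib

SECRET = b"Cookie: session=7f3a9c"  # target data, unknown to the attacker

def ciphertext_length(attacker_data: bytes) -> int:
    # A stream cipher hides content but not length, so what's visible on
    # the wire is essentially len(compress(attacker_data || secret)).
    return len(zlib.compress(attacker_data + SECRET))

def guess_next_byte(known_prefix: bytes) -> bytes:
    # The candidate that matches the secret extends the compressor's
    # back-reference by one byte, so its output is slightly shorter.
    return min((bytes([c]) for c in range(32, 127)),
               key=lambda ch: ciphertext_length(known_prefix + ch))

# The attacker knows the cookie's fixed prefix and probes the next byte;
# iterating this recovers the secret one byte at a time.
print(guess_next_byte(b"Cookie: session="))
```

Running the probe repeatedly, appending each recovered byte to the known prefix, walks through the whole secret — which is why no one ships compression under encryption anymore.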
HTTP/2 recently standardized its own compression mechanism for headers, HPACK, but it's incredibly limited. It doesn't do a very good job of compression compared to the state of the art, but it leaks only a very small amount of information about what's inside: it only matches exact repeats. So if you send the exact same header over and over again it works, but if headers differ by one character it doesn't help at all. It doesn't compress as well, but on the other hand it makes this kind of attack much harder.
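A sketch of why exact-match-only compression blunts the attack — this is a toy model in the spirit of HPACK's indexing, not the real HPACK encoding:

```python
class ExactMatchCompressor:
    """Toy HPACK-flavored compressor: a header is replaced by a small
    index only when the exact name:value pair was sent before; near
    misses get no help, which is what blunts CRIME-style probing."""
    def __init__(self):
        self.table: dict[str, int] = {}

    def encode(self, header: str) -> bytes:
        if header in self.table:                  # exact repeat -> 1-byte index
            return bytes([self.table[header]])
        self.table[header] = len(self.table) + 1  # remember it for next time
        return b"\x00" + header.encode()          # literal: full header on the wire

c = ExactMatchCompressor()
print(len(c.encode("cookie: session=7f3a9c")))  # full literal the first time
print(len(c.encode("cookie: session=7f3a9c")))  # 1 byte: exact repeat
print(len(c.encode("cookie: session=7f3a9d")))  # near miss still costs full size
```

An attacker probing byte by byte never sees a partial-match saving, so the length side channel collapses to "exact duplicate or not."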
This keeps coming up — about two months ago someone asked us to put compression back in and got yelled at. In a perhaps weak attempt to discourage compression even in 1.2, TLS 1.3 implementations are forbidden from offering compression even when they negotiate 1.2, and they have to abort if someone offers it to them. So we hope people stop using this in 1.2 as well. [Audience:] How much of the deployed base depends on compression? [Answer:] People have basically already started turning it off in browsers. I work for Mozilla, so we think a lot about browsers, but browsers are also the dominant cost — almost all the bandwidth and CPU spent on TLS is in browsers. It's not that other people don't use TLS; it's just that browsers are so big. And people have already started disabling this in browsers because they're terrified of these attacks, so across the whole web the impact is modest. There will be some small subpopulations that use compression, and those people, unfortunately, can't be accommodated. I don't think I need to explain why we removed RC4, and I don't think I need to explain this one either.
Oh — we also removed point format negotiation. TLS used to allow you to negotiate point formats, so you could have uncompressed points or compressed points, with two different kinds of point compression depending on what kind of group you had. TLS 1.3 removes that completely, so now you can only have uncompressed points, which is what everyone actually uses. The reasoning: TLS implementations already essentially only supported uncompressed points, which was the mandatory format, so there wasn't much hope people were going to move to compressed points — and moving would mean doing both compressed and uncompressed, because you always have to support uncompressed. So we figured we might as well keep only the uncompressed form; that was easier. And for the new groups, 25519 and 448, points naturally have only one coordinate, so the uncompressed form is already the short form. We just allow one form per group. If it turned out people were really bent out of shape that P-256 took up twice as much space, we could define compressed P-256 and just call it the uncompressed format. Nobody seems to care about this. So — the basic design insight that people have learned over the past, I don't know, 20 years about how to make things faster is to stop being so ignorant about what the server knows. If you look at 1.2, basically the client —
the client doesn't assume anything about the server's capabilities, and that means you basically lose the first round trip: the client's first message is essentially an advertisement, you learn nothing from it, and you spend that first exchange largely learning the server's parameters. There's a long line of work — going back to Fast-Track and then Snap Start and things like that — on how to optimize these exchanges if the client somehow knows the server's parameters. There are two kinds of knowledge a client might have — specific knowledge and general knowledge — and TLS 1.3 takes advantage of both. To improve the general-knowledge case, we've narrowly restricted the range of options servers can actually have: there are only a very small number of Diffie-Hellman groups, and only finite-field or elliptic curve Diffie-Hellman. This means the client can make a pretty good guess at what the server supports — in particular, if the client generates a Diffie-Hellman share in one of the two basic popular groups, the probability the server will accept it is extraordinarily high. The two groups most likely to work are P-256 and 25519. So the client can, in its initial handshake offer, send the server a Diffie-Hellman share — in fact, it can offer more than one —
but I've just drawn one here. Say it picks P-256 and offers the server a Diffie-Hellman share. This means the server can start encrypting immediately: the server generates its own share and can encrypt under the shared key right away. That cuts a full round trip out of the handshake, and it also has the property that basically everything the server sends beyond its initial Diffie-Hellman share is encrypted — which is obviously attractive, because part of our goal is to encrypt more of the handshake. [Audience:] So the share you send is in a group you think everyone supports? [Answer:] Right — based on general knowledge, probably P-256 or 25519, or whatever the server last used with you, if you remember. And if you guess wrong, there's a way to deal with it, so it doesn't just fail. Obviously, if you have no information at all about the server's capabilities, there's no way around that.
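The shape of that exchange can be sketched with a toy discrete-log group — made-up parameters and no real TLS wire format, just the flow: the client guesses the group, includes a share in its first flight, and the server can derive the traffic key before it says a word.

```python
import hashlib
import secrets

# Toy discrete-log group -- illustration only, nothing like a real TLS group.
P = 2**127 - 1   # a Mersenne prime; fine for a sketch, useless for security
G = 3

def keypair():
    priv = secrets.randbelow(P - 2) + 1
    return priv, pow(G, priv, P)

def derive_key(peer_pub: int, priv: int) -> bytes:
    shared = pow(peer_pub, priv, P)
    return hashlib.sha256(shared.to_bytes(16, "big")).digest()

# Client flight 1: guess the server's group, include a share in the hello.
c_priv, c_pub = keypair()

# Server flight 1: the guess was right, so it derives the key immediately
# and can already encrypt its certificate and the rest of the handshake.
s_priv, s_pub = keypair()
server_key = derive_key(c_pub, s_priv)

# Client: one round trip total before it can send application data.
client_key = derive_key(s_pub, c_priv)
assert client_key == server_key
```

Compare this with 1.2, where the client's share only arrives in its second flight, so nothing the server sends in its first flight can be encrypted.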
A reorganization of the 1.2 handshake to a large extent, um, but basically making the client speak first, so it used to be like this in 1.2 in Diffy Helman, the client, the server sent his to help and share in his first message, but the client's to help and share didn't come until his third message and now you basically moved everything up in one to remember the goal, the goal that we were. looking for performance was that the client would experience a back and forth exchange and the server would experience fine. We didn't say the server, but the server now has zero back and forth basically because the server can start talking right away. the server doesn't know who it's talking to of course, but there are plenty of times where the server really wants to say something even though it doesn't know who it's talking to, so a good example would be if your hp2 has a configuration framework that basically it says these are the properties that I like you to use for this connection, the server can send them right away, the other thing the server can do, optimistically, we're still working out the details, HTTP has something called server. press where the server can say in case you want to make this request later, here's the data, so it's quite plausible that the client comes in and looks this naive, the client probably wants index.html or wants favicon. ico and so you could probably hit any of them at this point, so you say you say some execution trips, um, so, and the client basically gets, the client sends his finished Mac at the end and, uh, and he can support his application at the same time.
So the client basically experiences a one-round-trip handshake at this point. As I said, this is much easier to reason about assuming Diffie-Hellman, because then we don't have to think about the RSA key-transport case. [Audience:] The server's public key is carried in this, but not the client's? [Answer:] Right — but you may very well want client authentication too, and client authentication looks the way you'd expect: in the server's first flight, the server sends a certificate request that basically says "please send me a certificate," and then the client sends its certificate and a CertificateVerify — that is, a signature over the handshake. So it's still the same one-round-trip exchange, but notice that the client's certificate is encrypted. The privacy properties here are what you'd hope for: the server's certificate is encrypted against passive attackers — which has some value in a non-web context, maybe not so much on the web, since you can just go fetch the certificate yourself — and the client's certificate is encrypted against active attackers, because the client validates the server's signature before sending its own certificate. So we've already eliminated one of the reasons for renegotiation — protecting the client's certificate — while improving performance a bit.
It's worth noting that this is essentially SIGMA — Krawczyk's protocol. It took us quite a while to figure that out, by the way, because the messages were in a funny order — in particular, the certificate request used to come between the certificate and the signature — and at some point Karthik was looking at this and said "this is just SIGMA," and we started drawing diagrams. So: the client can generally make a pretty good guess about the server's groups, especially since there are only a few, but the client can be wrong. The most common case will be that the client offers P-256 but the server wants a higher security level. It probably doesn't make much sense to distinguish between two groups of equivalent strength — if there are two groups of equivalent strength and the client offers one, the server should probably just take it — but if the server wants a stronger security guarantee, it basically says:
"Look, I see what you gave me, but I wish you'd given me P-384." So it comes back asking for P-384, and the client sends a new hello with that share. This is unfortunate, since it burns a round trip, but it shouldn't happen very often — especially because clients are acquiring more and more server-specific knowledge through things like HSTS and HPKP and cookies. On the web it's very common for the client to have talked to the server before and to know the server's properties, so the client can remember things. [In response to a question about why the server echoes what the client sent:] Yes — this is for protocol-engineering reasons.
I'll explain it in a second. In TLS proper it doesn't matter much, but in DTLS it's very nice to have a situation where the server can bounce you without keeping any state. You were asking earlier about DoS attacks: a very common anti-DoS measure is for the server to say, "here's a random value; come back and show me you have it," which proves you're at the IP address you claim, and so on. For protocols running over TCP that doesn't help much, because TCP has its own mechanism, but if you're doing TCP Fast Open, or DTLS over UDP, you'd like to be able to do this rejection mechanism without the server having to keep any state. I know that doesn't sound like an answer to the question, but I'm getting there. The thing is, we wanted to harmonize these mechanisms — one mechanism, so that any time the server says "go away and come back," it works the same way — and we didn't want any more protocol messages, and so the server has to find some way to carry the right state forward. And I forgot to mention downgrade: we're very worried about downgrade attacks on this exchange, because what happens if I offer P-256 and the server comes back with, you know, P-160? We don't want to allow downgrade attacks, and the standard downgrade defense in TLS — MACing the entire handshake transcript — has to be effective over this whole exchange too. So it turns out, for protocol-engineering reasons that I'm happy to go into in more detail offline, that if the server basically just appends to what you sent, it's easier to keep the handshake-hash state. That may not sound very convincing — I'd love to take it up offline. The basic observation is that the client and the server can just look at the handshake and say "the last thing in the list must be what was appended," and then roll back and check the rest of the history. [Audience:] So the client initiates the hash first? [Answer:] The server doesn't have to keep it — that's what I'm saying. We actually have two mechanisms under discussion: this one, and offloading the state into a cookie. We'd like to keep both, and having the list makes that easier. [Audience:] They're not encrypted? [Answer:] Right, they can't be — you don't have keys yet — so it's the same random in both messages?
Yes — it's the same message; it contains the same thing. [Audience:] Could it be encrypted? [Answer:] It could be encrypted under a key known only to the server, but then the server has to store a key — let's pick that up offline. Okay. So that's how you get to one round trip. There's no way to get to zero round trips without doing a bunch of caching — and in particular, as far as I know, there's no way to get zero round trips without the client having talked to the server beforehand. If anyone knows a way, that would be great; please point it out. I mean, there are ways if you assume side channels — keys in the DNS, or something like that — but in general you can't. So the basic observation — and this goes back at least to Snap Start from Adam Langley, and in fact even further, to Fast-Track — is that the client can cache the server's Diffie-Hellman parameters and use them for a future handshake, and then send application data in its first flight, basically piggybacked on the initial ClientHello. But it obviously requires the server to have prepared the client beforehand. So what does this look like?
I didn't show the preparation handshake, but it's actually very simple. The client sends its hello with the server-configuration identifier — identifying the server's semi-static Diffie-Hellman key — and piggybacks the application data there. I wish my arrows were farther apart so this were more obvious, but the application data goes with the first arrow, not the second. So everything works fine, but it has a number of unfortunate drawbacks that nobody knows how to fix. The first drawback is that you don't get PFS for this first flight of data — and that's obvious: you can't get PFS, because you're using a key the server has to hold on to.
I'll get to that on the next slide. The PFS thing, people are willing to tolerate — and I should say up front that I'm going to say a lot of bad things about this mechanism, but the mechanism is very valuable, because milliseconds are money. Anyway: anti-replay. The standard anti-replay mechanism in TLS — the one everyone uses — is that each side contributes a random, or at least fresh, value in its handshake, you mix that into the keying material, and so the keys are necessarily fresh. Obviously that can't protect 0-RTT data, because the client has to talk first, so the server hasn't had a chance to send it anything. The data the client sends in that first flight has no replay protection; once you get past the first flight you're fine, but the first flight is problematic. People thought they knew how to solve this. Everyone understood that solving it required server-side state — there's no way around that — and a bunch of mechanisms were suggested, and people thought they had an answer. The answer we were planning to use, borrowed from Snap Start and also used in QUIC, is that the server maintains a list of client nonces —
every nonce any client has ever sent it, indexed by some server-provided context token so it doesn't have to keep them forever, and the client provides a timestamp. Basically, the server looks up the context token, checks that the timestamp falls within the window of things it knows about, and then checks for a duplicate; if there's no duplicate, the flight is fresh. That totally sounds like it's going to work. It doesn't. It works fine if you have a globally consistent store; it doesn't work if you have a store that can become inconsistent. The story of how we figured this out is actually kind of entertaining.
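A sketch of the strike-register idea just described — field names are illustrative, and a real deployment would also have to bound memory and keep the store consistent across servers:

```python
import time

class StrikeRegister:
    """Toy anti-replay store: remember every client nonce seen inside a
    time window; reject duplicates and anything outside the window."""
    def __init__(self, window_seconds: float):
        self.window = window_seconds
        self.seen: dict[bytes, float] = {}   # nonce -> client timestamp

    def accept(self, nonce: bytes, client_time: float) -> bool:
        now = time.time()
        # Outside the window we can't distinguish fresh from replayed:
        # reject (a real server would fall back to a full 1-RTT handshake).
        if abs(now - client_time) > self.window:
            return False
        # Expire old entries so the store stays bounded.
        cutoff = now - self.window
        self.seen = {n: t for n, t in self.seen.items() if t >= cutoff}
        if nonce in self.seen:
            return False                     # duplicate: replay
        self.seen[nonce] = client_time
        return True

reg = StrikeRegister(window_seconds=60)
print(reg.accept(b"nonce-1", time.time()))   # True: first sighting
print(reg.accept(b"nonce-1", time.time()))   # False: replay detected
```

Everything hinges on `seen` being consistent: a server that restarts, or a second data center with its own empty `seen`, will happily accept the duplicate — which is exactly the failure mode the story that follows walks through.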
I'd been giving talks about how we were going to do this, saying we were going to steal the Snap Start mechanism. So we're in the meeting room and I say we're going to use the Snap Start mechanism, and Daniel Kahn Gillmor asks how the mechanism works, and I start explaining it, and he says, "what happens in this case?" — and Langley and I look at each other and go: oh, this is not good. So, the basics. I'll give you an artificial example because it's easy to understand; the non-artificial example involves multiple data centers.
Here's the example. One thing I should mention first: the desired application interface is that 0-RTT data should be transparent to the application. What I mean is: what happens when you connect to the server and the server has forgotten all its replay state? What you want is to take the data you sent at 0-RTT and resend it at 1-RTT without involving the application — because otherwise your GET was lost, and since what you wanted was the home page, that doesn't work for you. That's fine if there are no side effects. Now imagine you have side effects. The client connects to the server and sends its ClientHello with its 0-RTT data — say, a purchase. The client thinks it's talking to the server, under the server's credentials, but the attacker, being a network attacker, intercepts the message and forwards it to the server. The server processes the purchase and returns an acknowledgment saying everything is good — and the attacker eats the acknowledgment instead of delivering it to the client. Then somehow the attacker manages to get the server to restart, and the server has now lost all its state, including the anti-replay state. That's okay, because we have this handy recovery mechanism where the server says: listen —
"I don't have the replay state for your connection, so resend this message, incorporating my fresh random value." So the attacker forces a restart and then replays the exact same messages to the server; the server says, "no, I can't do your 0-RTT — here's a fresh nonce," and the client, as far as the server can tell, retransmits its request. At which point the server has processed the purchase twice. That's bad. And it's easy to convince yourself, if you think about it, that with multiple data centers you can do the same thing — unless you're willing to not only replicate the replay store between data centers, but also to —
basically say that if someone arrives at the wrong data center — the one that doesn't have the master copy of the store — you stall their connection until you've checked the master copy. And that's just not practical. There are plenty of other failure modes that operationally cause exactly this: you fail over and ask the client to resend. So, is it worth worrying about? Well — when this was discovered, that's exactly the question we asked. One thing to recognize is that in many places in the browser stack — forget all this 0-RTT business — if you send your POST request and the first attempt fails, the browser just sends it again, because it wants things to happen. And that's certainly true for GETs.
By the way, anything that is ostensibly idempotent, the browser will be perfectly happy to retransmit and even things that are not idempotent under some circumstances will also retransmit. This happens all the way up the stack and by the way, even if the browser doesn't retransmit the next thing that happens is the user what the heck and presses the R button again, so this isn't great, but it's not like it's not clear what it is so bad. It's actually um, I mean, it's not good, don't get me wrong, um, so where did things end up? um, likeI mean, the real problem is these multiple distributed data centers and um, the final resolution was basically, there was a very sophisticated mechanism when Crick and squeeze to try to solve this problem and the resolution that the working group shows us after a long discussion and also, when you talk to Google, which they go fast, I think they don't even try, they basically say that there is a special mechanism that sends your rtt data they only send things with a dant there and they don't, and if you don't like it's your responsibility and obviously it's not safe to have it enabled by default, it has to be a special API, everyone agrees on that. but it's simply too valuable a feature not to have, is putting the server's IP address into the client.
randomness, so that multiple data centers know when they're seeing a replay — that was already proposed. And let me say: if you have a solution to this problem, I'm very happy to hear it. We thought about it quite a bit, but that doesn't mean we've explored everything. I will say that with configuration IDs scoped to a data center, that's how this works at all. So, TLS: one thing is that TLS has long supported a pre-shared key mode, which was largely used for IoT-type applications where people couldn't afford public-key crypto at all, or for applications where they could afford public-key crypto but couldn't pay for the certificates or couldn't deploy them easily. So you have a password, or even a long key, whose job is to initiate the connection for you; sometimes it's used with Diffie-Hellman, sometimes without. At some point it became clear that PSK and resumption were actually very similar: you basically have a long-term symmetric key, and all you need to do is restart the connection with it. So TLS 1.3 basically removes resumption entirely and replaces it with just PSK, plus a mechanism for the server to send you a label for the existing master secret. Basically, the way this works is exactly like normal TLS 1.2 ticket resumption: the server says, here is a label, please attach to this label the key that we just negotiated, and then you can use that label for PSK negotiation. So you get the nice advantage of merging some things. So — I like to show this diagram. It looks more complicated, but it's actually less complicated; this is actually TLS.
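The ticket mechanism just described — the server hands back a label, and both sides attach the freshly negotiated key to it for later PSK negotiation — can be sketched roughly as follows. This is an illustrative sketch, not the actual TLS 1.3 derivation: the HKDF labels, the key lengths, and the `derive_resumption_psk` helper are all invented for the example.

```python
import hmac, hashlib

# Minimal HKDF (RFC 5869) over SHA-256, for illustration only.
def hkdf_extract(salt: bytes, ikm: bytes) -> bytes:
    return hmac.new(salt, ikm, hashlib.sha256).digest()

def hkdf_expand(prk: bytes, info: bytes, length: int) -> bytes:
    out, block, counter = b"", b"", 1
    while len(out) < length:
        block = hmac.new(prk, block + info + bytes([counter]),
                         hashlib.sha256).digest()
        out += block
        counter += 1
    return out[:length]

# Sketch of ticket-based resumption: the server's ticket is an opaque label,
# and both peers derive the PSK for that ticket from the connection's master
# secret plus a per-ticket nonce.  Labels are illustrative, not TLS 1.3's.
def derive_resumption_psk(master_secret: bytes, ticket_nonce: bytes) -> bytes:
    prk = hkdf_extract(b"\x00" * 32, master_secret)
    return hkdf_expand(prk, b"resumption psk" + ticket_nonce, 32)

server_psk = derive_resumption_psk(b"m" * 32, b"\x00\x01")
client_psk = derive_resumption_psk(b"m" * 32, b"\x00\x01")
assert server_psk == client_psk  # both ends derive the same PSK for the ticket
```

Because each ticket carries its own nonce, two tickets issued on the same connection yield different PSKs even though they hang off the same master secret.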
Everyone likes to take each individual thing and give it its own message, so a lot of these messages only have one small thing in them, but they're vertically massive. This is the unified flow for all of TLS 1.3: what's on the top is the ClientHello with the 0-RTT data, what's on the right is the server's flight, and what's on the left is the client's second flight. And as you can see, we decorate things with the sigma-style signature annotations that show authentication — though I think that one is incorrect. When you look at a diagram like that and stare at it long enough, you can convince yourself that we can have a unified authentication logic instead of a bunch of different authentication logics bolted together. This observation, I think, probably starts with Hugo Krawczyk and Hoeteck Wee, who suggested this thing called OPTLS, which is basically a generalization of this kind of 0-RTT framework. Basically, the idea is that you have two input keys: one is the long-term input key, which corresponds to the static Diffie-Hellman exchange you would do for 0-RTT, and one is the short-term ephemeral key, which corresponds to the ephemeral Diffie-Hellman exchange. Then you feed them both into the same key derivation schedule, and that gives you all the keying material. Now, in different modes you might have different settings for what these keys are — in PSK mode, for example, both keys are pre-shared keys — but there is a single authentication logic for all of them. I think Hugo and Hoeteck published a draft of their OPTLS paper on ePrint recently. So you get a key derivation schedule that looks like this, with the ephemeral secret and the static secret and a bunch of HKDF invocations, and it produces these keys at the end. So — I guess, yeah, I have a couple more things to cover.
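The two-input-key idea can be sketched like this: a static (long-term or pre-shared) secret and an ephemeral secret feed one unified HKDF-based schedule, with the early keys depending only on the static secret. The structure loosely follows the OPTLS shape described above; the labels and the `key_schedule` helper are invented for illustration and are not the exact TLS 1.3 schedule.

```python
import hmac, hashlib

# Minimal HKDF (RFC 5869) over SHA-256, for illustration only.
def hkdf_extract(salt: bytes, ikm: bytes) -> bytes:
    return hmac.new(salt, ikm, hashlib.sha256).digest()

def hkdf_expand(prk: bytes, info: bytes, length: int) -> bytes:
    out, block, counter = b"", b"", 1
    while len(out) < length:
        block = hmac.new(prk, block + info + bytes([counter]),
                         hashlib.sha256).digest()
        out += block
        counter += 1
    return out[:length]

# OPTLS-style sketch: two input secrets -- a static secret (static DH or a
# PSK) and an ephemeral secret (ephemeral DH) -- feed one unified schedule.
# The early (0-RTT) keys depend only on the static secret, since the
# ephemeral exchange hasn't happened yet when 0-RTT data is sent.
def key_schedule(static_secret: bytes, ephemeral_secret: bytes) -> dict:
    early = hkdf_extract(b"\x00" * 32, static_secret)
    master = hkdf_extract(early, ephemeral_secret)  # mixes in ephemeral DH
    return {
        "early_data_key": hkdf_expand(early, b"early data", 32),
        "handshake_key": hkdf_expand(master, b"handshake", 32),
        "application_key": hkdf_expand(master, b"application", 32),
    }
```

The point of the shape is that every mode (full handshake, PSK, 0-RTT) runs the same derivation, only with different values plugged into the two input slots.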
I just said we killed renegotiation, which made everyone sad, and we really thought we were going to get away with it. But then Microsoft came out and said: we have all these applications where people want to do client authentication after the handshake finishes — exactly the setup I told you about, where you have a website that is partially secure and partially insecure — and we can't cut over, because it's a huge burden for us to move to TLS 1.3 without this. In fact, it's such a burden that HTTP/2's ban on renegotiation with TLS 1.2 — they're having to put it back in, because it was so painful not to have this feature. So we thought a lot about this, and again we tried to simplify the logic as much as we could, and the basic plan that becomes immediately evident is that the server can send a certificate request at any point after the handshake and get back nothing more than this authentication block — i.e., the certificate, the signature, and the MAC — which, if you remember the sigma paper, is exactly what you use for authentication in sigma. The way this works is that the signature signs the transcript of the handshake plus the certificate, and the MAC covers everything there plus the signature. That's not the only way to do the sigma composition, but it's one of them. This piece was still in a somewhat complicated state of development and hadn't made it into the specification yet, and halfway through this process, while people were looking at it, this team out of Royal Holloway and Oxford asked the question: what happens if you combine pre-shared keys and post-handshake client authentication? This is something we always wanted to work, but we weren't quite sure how to make it work, so I hadn't included it in the spec yet — but it obviously ought to work, because if you have resumed sessions, you want to be able to add client authentication afterwards. Again, it's not in the spec, but it's something we always had in our heads. So if you take the naive version of this and do what I just described — add the certificate request and the auth block, and do it with PSK — you run into some problems. This comes from a really pretty good analysis that these folks did with a formal tool called Tamarin. There's a setup phase where the client basically connects to the attacker, and the attacker connects to the server, and they each do a handshake; the server gives the attacker a key label — the session ticket — and then the attacker gives the client exactly the same key label, but for the client's own keys. There are two different keys on the left and the right, because we already solved that problem: the keys are different, but the key label is the same, and there's nothing to stop the attacker from doing that. Then what happens is you connect to the attacker and do a new resumed handshake, and the attacker connects to the server and does a new resumed handshake — and again, these are separate keys, because we already solved that problem. The server requests client authentication, and the attacker forwards the client's authentication signature straight through the system — remember, the attacker holds both connections, even though the keys are different. And for structural reasons, the signatures in the then-current version of TLS 1.3 did not cover the handshake MACs; the reason they didn't was that we were trying to make the 0-RTT / 1-RTT composition logic work better, and it turned out to be a pain to do otherwise. So when we sat down and designed this post-handshake authentication for TLS 1.3, we put the MAC coverage back, because we found a way to do it that actually made the system even cleaner. But if you don't — if you're not signing anything that is on the derivation path from the master key — then the attacker can just forward the authentication signature, and the attack goes through; covering the MAC is what prevents it. So we were very happy to see this result, for two reasons: one, because it reinforced that the direction we were planning to take was the right direction, and two, because it meant that people were actually doing really substantial formal analysis of TLS 1.3 — to find this, I think they have a pretty complete model at this point in the Tamarin prover. So thank you very much to this group for finding this; a really nice catch. And like I say, it reinforced the direction that we decided to go in. Okay.
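As a rough sketch of the sigma-style composition described above — signature over the handshake transcript plus certificate, MAC over all of that plus the signature — consider the following. The signature is stubbed out with HMAC under a stand-in "signing key" purely so the example is self-contained and runnable; a real implementation signs with the certificate's private key, and the field layout here is invented, not the actual TLS 1.3 encoding.

```python
import hmac, hashlib

# Sigma-style post-handshake authentication block (sketch):
#   signature covers hash(transcript || certificate)
#   MAC covers transcript || certificate || signature
# Because the transcript (and hence the handshake MAC material) is bound in,
# the block cannot be cut-and-pasted into a different connection.
def make_auth_block(transcript: bytes, certificate: bytes,
                    signing_key: bytes, mac_key: bytes) -> dict:
    to_sign = hashlib.sha256(transcript + certificate).digest()
    # Stand-in for a real public-key signature over `to_sign`:
    signature = hmac.new(signing_key, to_sign, hashlib.sha256).digest()
    mac = hmac.new(mac_key, transcript + certificate + signature,
                   hashlib.sha256).digest()
    return {"certificate": certificate, "signature": signature, "mac": mac}
```

Note that a block produced for one transcript verifies only against that transcript: replaying it on a connection with a different transcript (the Royal Holloway/Oxford attack scenario) fails, because both the signature input and the MAC input change.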
So the last thing I want to talk about is defenses against traffic analysis. TLS 1.2 has extremely limited traffic analysis defenses — basically none. The content type, i.e., the type of message you are sending, whether it's handshake, alert, or application data, is in the clear, and there is a provision for padding, but only for block ciphers and only a very small amount, up to 255 octets, and in practice people tend to use minimal padding. There is no padding at all in stream-cipher or AEAD modes, so given the cipher in use, you can immediately estimate how long the actual message must have been.

Like I said, people's opinions on what you should do here have changed since TLS was first designed, so there's a lot of pressure to do something, and the two main things that happened were: first, encrypt the content type, so you can't see what type of message it is — so, for example, in this exchange here you wouldn't be able to tell there's a post-handshake authentication exchange going on unless you do packet-size analysis. And to frustrate packet-size analysis, we're allowing arbitrary amounts of padding. Much like compression, no one knows how to do generic padding that works: we know how to add the padding, we don't know how big the padding should be. So if the application wants padding, all we provide is a mechanism. The design here is clever — it's not mine, it's by a guy called Martin Thomson, in cooperation with Adam Langley. The old TLS payload looks like this; the new TLS payload has the exact same header, because everyone is terrified that if we change this header, all these middleboxes that expect the length in a particular place will choke. So we're not going to mess with the header. But what we did is: we fix the external content type, and the internal content type — the actual content type — goes at the end. And the reason it goes at the end is twofold: first, because it means that when you decrypt you don't have to memcpy everything over by one, and likewise when you encrypt; and second, because if we pad the message with zeros and promise that the content type will never be zero, then you can strip the padding basically by starting at the right and counting to the left. And that's really nice, because it means the receiver doesn't have to know whether padding is in use. So, first of all, you can have no padding at all, and it has no impact on the record size. All the other structures we considered had a padding-length field, and that always took a byte, and that annoyed people, because they said: we don't want to burn a byte on padding if we don't want padding — which hardly anyone does. So this lets you do it pretty cheaply, which is pretty clever, and I can say it's clever because it wasn't me. Yeah — I had one more thing: the other kind of privacy property, and this is more work in progress.
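The padding trick just described can be sketched in a few lines: the real content type goes at the end of the plaintext, followed by any number of zero bytes, and since a content type is never zero, the receiver strips padding by scanning from the right for the first nonzero byte. This is a simplified sketch of the record-layer idea, not the exact TLS 1.3 record format (the real inner plaintext is then encrypted under the AEAD).

```python
# Illustrative content-type values (TLS uses 22 for handshake, 23 for
# application data).
HANDSHAKE, APPLICATION_DATA = 22, 23

def pad_record(payload: bytes, content_type: int, pad_len: int = 0) -> bytes:
    """Build the inner plaintext: payload || content_type || zero padding."""
    assert content_type != 0  # a zero content type would be indistinguishable from padding
    return payload + bytes([content_type]) + b"\x00" * pad_len

def unpad_record(inner: bytes) -> tuple:
    """Recover (payload, content_type) by scanning right-to-left past padding."""
    i = len(inner) - 1
    while i >= 0 and inner[i] == 0:
        i -= 1
    if i < 0:
        raise ValueError("malformed record: all zeros, no content type")
    return inner[:i], inner[i]

assert unpad_record(pad_record(b"hello", APPLICATION_DATA, 16)) == (b"hello", APPLICATION_DATA)
```

Note the receiver-side code is identical whether or not the sender padded, which is exactly the property that let the working group avoid a dedicated padding-length byte.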
This is about the server name. Let's say you want to be like Amazon and host a stack of servers all on the same piece of hardware, where each server has its own certificate — say you have a harmless site and a hidden site, with different certificates. Obviously, when the client connects, you have to prove you're the right guy, and if you're the wrong guy, the client will hopefully disconnect. The old-school way is to give each server its own IP address, and that's really not working very well with this whole IP address exhaustion thing. Believe it or not, that's actually what people in general have had to do, and the reason is that there were two very unfortunate clients that didn't do the right thing. The right thing is this thing called Server Name Indication (SNI), which is an extension in the ClientHello that says: I would like to talk to fubar.com. Unfortunately, Android 2.2 and Windows XP shipped with no Server Name Indication, and what that means is that if you want to run a system accessible to everyone on the Internet, you have to have your own IP address for everything — and even now those clients are a nontrivial fraction of traffic. But there are now starting to be hosting services that say: look, we're going to offer you two tiers — you can have the own-IP tier, which is expensive, or the SNI-only tier, if you don't mind losing Android 2.2, and that's cheap. So people are trying to move toward this, and SNI is a required feature for TLS 1.3 — it never was before. Unfortunately, there's still about 9% of traffic, according to some statistics, from clients without SNI, which is really bad. But the good news is that if you're building something like, say, Google Docs, where you're the only site on the box, you can basically say: I don't care about XP — and XP browsing is actually largely dominated by Firefox or Chrome at this point, and those do SNI just fine. And frankly, if you're using that old version of IE, you're going to have a bad day no matter what — yeah, you probably shouldn't bother with TLS 1.3, you've got enough other problems. Exactly right.

So the unfortunate thing is that SNI leaks the server's identity: we went to all this trouble to encrypt the server certificate, and now the identity is leaked anyway, which really sucks. It would be nice to hide the SNI, and the application people keep pointing to is anti-censorship. It's quite common, for example, to have sites that are harmless co-located with sites that are not harmless — say you have your censored site on GitHub Pages; you're sitting alongside many things no censor cares about, and the censor doesn't want to block GitHub entirely, or it's on Google App Engine and they don't want to block App Engine entirely. But if the censor can look at the SNI, it can say: this is the one I want to censor. So it would be very good to figure out how to encrypt that. The working group is still struggling with this feature. The difficulty is that the only design that makes general sense is to put that information somewhere in the zero-RTT data, and basically say: if you can't use your 0-RTT, you're falling back to something the censor can see, and that's life. The typical thing would be to have a fake SNI in the public header, and somewhere in the encrypted data you'd have the real SNI. And if you connect to someone who has forgotten the key material they use to decrypt the 0-RTT data, you basically get whatever certificate, and it's probably wrong, and you choke, and you have to live with it. But there are many settings where that's plausible — particularly if you know there's a cover site: you connect to the cover site, and then somehow you get their fresh 0-RTT key and you reconnect. So the general idea seems simple; the details are a little complicated, and the details largely have to do with exactly how much violence we're willing to do to the rest of TLS to accommodate this feature. There's a three-way tradeoff: how much we change TLS, how good a job it does, and how much extra effort it takes to do encrypted SNI over unencrypted SNI. There are some very nice designs that make basically no changes to TLS and work fine, but they have the implication — which I can draw for you later — that it's a lot more work to do encrypted SNI than unencrypted SNI. And part of the value proposition is that it would be nice to have a situation where sites would encrypt the SNI automatically if they could, and it's not clear that we can get there. So we're struggling with that tension. This is my last slide — so, perfect timing. Where are things now?
This is an IETF effort, like I said. We're currently at draft 10, and almost every major issue has at least one tentative resolution, with the exception of this encrypted SNI issue — we're really hoping to get that encryption without changing much else, so it doesn't have a huge impact. I'm hoping to have a draft — the time I'm spending here is time I'm not writing — but I hope to have, by the end of the month, basically all the things we have resolutions for in the draft, so we'll have a pretty complete draft. We're already starting to see formal models emerging, which is great, and there's this workshop coming up in February, co-located with NDSS, with the program chairs being, I think, Matt Green and Kenny Paterson. We basically get a final read on what people think about this protocol before we deploy it, hopefully, which would be nice — we're trying, this time, to do a lot of analysis before implementation. And implementations are starting to emerge: I'm working on the Firefox implementation, and other people are working on implementations in other engines. We'll probably be among the first; we hope to have it in some kind of alpha version of Firefox at least by the end of the year, although it probably won't be on by default. And we expect there to be an IETF last call sometime in Q1 — late Q1, maybe Q2 — with the standard published later in 2016. Yeah — they'll probably follow us; we've spoken to them, but we don't have a date yet. If you want to follow this work — penultimate slide, I apologize — there's an IETF mailing list, obviously, and we're doing this new thing where we do almost all of our work on GitHub, so you can go to GitHub and see all the proposed changes and the current draft status. And if you find things, small or big, send us a pull request instead of having to write us a long email — so, contribute. Now I'm done.

Audience: The first thing you have to understand is that I have no idea what I'm talking about, but the FIDO Alliance has protocols for client authentication that seem like they would fit very well here — have you looked at what they have?
Speaker: In fact, we're just meeting with FIDO today. So basically there are two current modalities for doing client authentication — sorry, I mean there are many modes of authentication with reusable tokens of various types, but the two basic modes that do public-key-based client authentication are one done at the TLS layer and a second done at the application layer. The application-layer ones are FIDO and this thing called token binding, which comes out of much of the same community. Token binding is about strengthening cookies, and FIDO is about using tokens, but they're kind of the same: in each, there's something the client signs. In token binding, it's extremely similar to post-handshake client authentication — i.e., you're signing some things that were derived from the TLS connection.

Audience: With FIDO, you sign the domain instead, right?

Speaker: That is correct, yes — but it's not tied to the TLS endpoint unless the relying party has specified that in the challenge. I don't think it's tied to the connection. Token binding, on the other hand, signs something derived from the TLS connection and therefore won't be portable between TLS connections. The mental test you need to apply is: imagine the client's certificate validation library is completely broken — what's the impact on the system? The design criterion you'd want is that even if the client's certificate validation library were totally destroyed, that wouldn't allow anyone to impersonate the client. FIDO doesn't have that property, except with token binding: as I understand it, if you're signing the server's certificate, then even if you don't fully validate the certificate, the server will still verify the signature and the connection will fail anyway. If that's not true, I'd be interested to know, because it should be.

Audience: In HTTP/2 all traffic is encrypted, so I was wondering: when you say you're removing compression from TLS 1.3, does that mean even an HTML web page is not compressed when sent from the server?

Speaker: No. TLS had a native compression mechanism that you could negotiate in TLS and that was transparent to the application — that's what we're removing. If an application has its own compression, that will still work like it always worked. It may or may not be okay: some of the same attacks that were used against TLS compression, like the BREACH attack, still apply — if you have data from two separate sources being compressed by the same compression algorithm, you're at risk. So imagine you staple a compression layer on top of TLS, between TLS and HTTP: you'd have the same issues that you have with TLS compression. So I'm not saying it's okay to do this — I'm saying we're not breaking you; you're just no worse off than before we did anything. And like I say, there are at least two HTTP compression mechanisms that are still active: one is the header compression mechanism that was specifically designed for HTTP/2, and the other is the set of payload compression mechanisms like gzip, which are still functional. Yeah — HTTP/2 doesn't really know much about this; it just treats TLS as the substrate. Thank you very much.
