DNS Working Group
Wednesday, 22nd May 2024
At 9 a.m.:
DORIS HAUSER: Good morning and welcome to the DNS Working Group. You can see the agenda already on the screens. First we have a small announcement: for everyone who wants to vote at the General Meeting today, please note that you have to register to get your sticker to be able to vote, and it is also already mandatory to have two‑factor authentication on your account, so if you haven't set that up, please do so in time, just so you don't get into a bit of a time hassle later today.
Yes, so the first small point on our agenda is the Chair selection. It was the end of Moritz's first term and he got re‑elected, so that's apparently a good sign for him.
(Applause)
I hope you are already happy about that ‑‑ I am.
Moritz, are you here already?
MORITZ MULLER: Yes, I am. Yes, thank you, I also take this as a good sign, I am sorry that I couldn't be here with you in Krakow, my wife is expecting a baby, so I prefer to stay at home, but I am looking forward to the next RIPE meeting.
DORIS HAUSER: Best of all reasons, congratulations.
MORITZ MULLER: Thanks.
DORIS HAUSER: Okay. Well, then, let's get on with our first speaker, that is Peter Thomassen, with DNSSEC bootstrapping in Knot DNS and PowerDNS.
PETER THOMASSEN: Hello. I assume you can hear me, I don't have a share button, which is fine. I do, actually, okay. So one second. Here we go. Can you see this? I suppose you can see this, otherwise you will complain, probably.
DORIS HAUSER: Unfortunately not ‑‑ oh, no, that's the wrong slide, wait a second.
PETER THOMASSEN: Moritz says it looks right to him.
DORIS HAUSER: Now it's fine, okay.
PETER THOMASSEN: So I will give an update on the implementation of the DNSSEC bootstrapping protocol, and before I go into technicalities I will first explain what it is very quickly, some of you probably have heard of it so I will keep it short.
If you would like to enable DNSSEC for a delegation, you have to put DS records into the parent zone; they are a hash of the validation key from the child, and the way that usually works is that the registrant has to talk to the DNS operator ‑‑ hey, give me the parameters ‑‑ and forward them to the parent, either through the registrar or the registry. People often don't know about it, it is quite a slow process, it's easy to make mistakes, there are different formats, and the interfaces differ by registrar; it's quite a process if you are not a very technical person. So a much better thing would be if this was automated, without a human being involved, and there is actually RFC 8078 for that: the DNS operator puts CDS records in the child zone and the parent can discover them, so the human doesn't have to do any copying of cryptographic strings and things like that. But unfortunately there hasn't been authentication for that so far, because usually CDS records are signed by the DNSSEC key that you use for the child; that's fine for updates if you already have a chain of trust, but if you provision it for the first time you don't, so this is the problem we are trying to solve.
So, bootstrapping from insecure, as I just mentioned: if you have a delegation, example.com for example, and the DNS operator puts in the CDS records, the parent can look at them and provision the DS record set, but this step here doesn't really have authentication, right?
So, what to do about it without having to trust things on first use? The idea of the protocol is that we know the DNS operator has name servers, so they have a host name, for example ns1.provider.net, and if we add the constraint that DNS operators supporting this protocol should sign their name server host names' zones with DNSSEC, then you can use that name server host name to publish stuff and make announcements about the zones that you host ‑‑ like publicly authenticated signals. The records that you publish in the child zone, you can also publish as an identical copy at a sub‑domain of the name server host name: if the child zone is example.com, the copy's owner name starts with _dsboot (for DNSSEC bootstrapping), then example.com, then _signal, and then the name of the name server host.
So this copy can be signed with the DNSSEC key that the DNS operator has for their own domain, and parents looking at this can go and validate that copy using DNSSEC, so they can verify that the CDS record in the child zone is actually the one that the DNS operator endorses, and that way we kind of transfer trust from the DNS operator to the child domain.
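To make that concrete, here is a rough sketch of the record pair involved ‑‑ the names follow the scheme just described, and the key digest is made up:

    ; in the (not yet trusted) child zone:
    example.com.  IN CDS  12345 13 2 3F8A14C0...

    ; identical, DNSSEC-validatable copy under the operator's own signed domain:
    _dsboot.example.com._signal.ns1.provider.net.  IN CDS  12345 13 2 3F8A14C0...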
So this uses an established chain of trust, it's kind of a detour and it's authenticated and immediate, you don't have to go to different portals and login and it's resilient against attackers and in that sense, extends the existing RFC.
How about the status of the protocol and implementations?
The protocol has passed IETF last call and is about to finish review; there is one minor issue to be resolved and then it will proceed to get published as an RFC. There are implementations on the parent side, for the CDS processing part, at SWITCH, for example, for .ch and .li, and other TLDs are working towards deployment, although there are intricacies with ICANN regulations, so that will probably take a little bit.
On the child side there are implementations at various operators like Cloudflare; those implementations are, I think, proprietary. At deSEC we use the implementations I will be talking about now, which were developed earlier this year.
For example, if you use Knot DNS there is now a module called authsignal, for authenticated signals from the DNS operator, which is able to produce the CDS and CDNSKEY records to be published. If you have the domain in your Knot DNS configuration, you will have a configuration file like this: it has a zone block, then the domain example.com with DNSSEC signing on, and you might have some other domains. So this is what you already have, and the new thing is the blue code here: you add another domain, which is _signal.ns1.provider.net, and you have to turn on two modules. One is the online signing module, because responses are generated dynamically at query time, and the online signing module needs a specific configuration ‑‑ that's the part up here, something you don't need to touch, it just needs to be there ‑‑ and then the authsignal module needs to be enabled as well, and this takes care of responding properly to queries like the one we had earlier on the left‑hand side of the diagram. So that's all you have to do for Knot DNS, and this is in the stable release that we currently use.
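For reference, a minimal sketch of what such a Knot DNS configuration can look like ‑‑ this follows the shape described above, but check the Knot documentation for the exact module syntax:

    # online-signing instance used by the signalling zone; leave as-is
    mod-onlinesign:
      - id: authsignal
        nsec-bitmap: [CDS, CDNSKEY]

    zone:
        # the domains you already host
      - domain: example.com
        dnssec-signing: on
        # new: the signalling zone under the name server host name
      - domain: _signal.ns1.provider.net
        module: [mod-authsignal, mod-onlinesign/authsignal]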
For PowerDNS, it is not yet merged; an implementation is available and expected, I think, for PowerDNS 5, and there are still decisions to be made on how exactly to do it. Both options are pretty simple and I will show them.
First of all you have to create the zone, and that's like creating any other zone; for simplicity I used an environment variable here to capture the name server host name, because it will appear in other commands too.
Now, the first approach of how this is implemented is that you have to secure the zone, then you have to set specific NSEC3 configuration and put it in narrow mode, which is essentially what PowerDNS needs for the way it works with online signing. Then you have to call rectify‑zone, which is another step, and the actual new thing is set‑meta signaling‑zone 1. You can see, of course, that those four commands kind of go together, because the last one doesn't make sense without the first three, so the second approach is essentially just encapsulating this in one command. This will be decided later; that is how the PowerDNS implementation will work. If you feel like experimenting, you can take the code from the pull request that's linked here.
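As a sketch, the first approach described above would look something like this; the pdnsutil subcommands are the standard ones, but the exact metadata name comes from the pending pull request, so treat that part as illustrative:

    # the name server host name appears in every command
    SIGNAL_ZONE=_signal.ns1.provider.net

    pdnsutil create-zone $SIGNAL_ZONE
    pdnsutil secure-zone $SIGNAL_ZONE
    pdnsutil set-nsec3 $SIGNAL_ZONE '1 0 0 -' narrow   # narrow mode for online signing
    pdnsutil rectify-zone $SIGNAL_ZONE
    pdnsutil set-meta $SIGNAL_ZONE SIGNALING-ZONE 1    # metadata name may differ

The second approach would collapse these four steps into a single command.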
There is also one other implementation that I didn't mention earlier, which is for set‑ups that can't do online synthesis ‑‑ which don't have any keys on the secondaries, or that use a name server like, I think, NSD which doesn't do online signing. In this case it is useful if one is able to generate a signalling zone, with these copied CDS records, from a list of the zones that you manage, and so we wrote software that can do this in a generic way; it will be released later in June. It accepts, on standard input, NS records ‑‑ with owner name and name server target ‑‑ and also CDS and CDNSKEY records; from those you can infer what the signalling zones are, put in the corresponding copies of the CDS records, and construct the whole thing, which is what the programme does.
You can also override things, like hard‑coding the name server host names on the command line, or you can specify parameters to write the output to a file, or you can have it read in an existing signalling zone for updating, instead of creating a new one. Here is an example of how you can use it: you call the generator with the read flag, to read an existing zone file that is expected under that name in the current directory, and the write flag, to have it updated on disk; you specify the name servers under which the signalling zones live; and from standard input we take the CDS records, and that causes the generation of these records in the output, essentially.
You can also update the signalling zone file to delete CDS records that you no longer need, because you don't have the customer anymore or the zone has already been bootstrapped, in which case you can put in the NS records only for the child domain ‑‑ deleted.test here ‑‑ without putting in CDS records, and the software will remove the corresponding node from the output file.
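As an illustration of the transformation (record data made up): given NS and CDS records on standard input, the generator emits the corresponding signalling records, one set per name server target:

    ; standard input: delegations you host, plus their CDS records
    example.com.   IN NS   ns1.provider.net.
    example.com.   IN NS   ns2.provider.net.
    example.com.   IN CDS  12345 13 2 3F8A14C0...

    ; generated signalling zone content:
    _dsboot.example.com._signal.ns1.provider.net. IN CDS 12345 13 2 3F8A14C0...
    _dsboot.example.com._signal.ns2.provider.net. IN CDS 12345 13 2 3F8A14C0...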
This can be plugged into signing pipelines or whatever people have, if they don't want to use online synthesis.
Now the last slide: upcoming developments are to improve the efficiency of the whole thing ‑‑ not specifically of bootstrapping, but of CDS processing as a whole. It's usually implemented by parents as a daily scan, which generates lots of traffic even though things rarely change, so it's a lot of casting nets for very few fish, and the timing is uncertain: if the parent runs the scan every night at 3 a.m. and the record is published a minute earlier, it becomes effective almost immediately, but a minute later and it takes almost 24 hours. A better way would be if the child operator could send a notification to the parent, and for that, others and myself have been working on the generalised notify draft, which has been adopted by the IETF. It reuses the NOTIFY messages in DNS that are used for replication coordination; they have a field for a record type, and usually NOTIFYs are of type SOA, but you can put in different types, so the idea is: let's have another NOTIFY of type CDS, or DS, or something, and send that to the parent. The main question is where exactly to send it, because you don't know who does the CDS processing ‑‑ it could be the registry or the registrar ‑‑ so the draft proposes a DSYNC record, which is a new type, added in the parent zone under a sub‑zone called _signal: you essentially take the name you are interested in and insert the _signal label after the first label, and that is where you can expect the DSYNC records to be. It has the record type it relates to, for example CDS, a scheme ‑‑ scheme one is NOTIFY ‑‑ a port number, and the host name to which to send the NOTIFY packet. This can be set up as a wildcard record if the registry wants to receive all the NOTIFYs, or it can be done in a child‑specific way if the registry wants to somehow hand that off to the registrar.
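As described, a registry that wants NOTIFYs for CDS changes might publish something like this ‑‑ the owner‑name scheme is as explained above, while the port and target host are made up for illustration:

    ; child-specific: take example.com and insert _signal after the first label
    example._signal.com.  IN DSYNC  CDS 1 5300 cds-scanner.registry.example.

    ; or a wildcard, if the registry wants the NOTIFYs for all children
    *._signal.com.        IN DSYNC  CDS 1 5300 cds-scanner.registry.example.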
Yeah, so we plan to implement this in a few scenarios at the upcoming IETF in Vancouver and then see if we can improve the DNSSEC automation landscape. That's all I got.
The bootstrapping implementations have been funded by the NLnet Foundation, so I would like to thank them ‑‑ and otherwise my employer ‑‑ and I am happy to take any questions.
(Applause)
DORIS HAUSER: Do we have any questions in the room? Are there any online? No, not right now.
Okay ‑‑ yeah, then, thank you, let's move on with the agenda.
Next up is Dave Lawrence with DELEG update.
DAVE LAWRENCE: Good morning, everyone. It's so good to see you here. It works. So, many of you are already familiar with what has been going on to update the way that DNS delegations work, but for some of you I wanted to give a quick background. Back at the IETF in Prague last November ‑‑ wrong conference, I know ‑‑ Petr wanted to have a brainstorming session at the hackathon to improve delegations. What we really wanted to do for the purpose of the brainstorming was identify where the pain points were with the existing DNS protocol and how we might be able to advance on improving that situation. One of the things we quickly realised was that probably the best way to facilitate that was being able to signal between a resolver and the authoritative server when each was able to support a new feature of the protocol, different from the way things are currently done. Right now we have to rely on fallback behaviours, and we wanted to make it more frictionless to identify how the resolution process should happen.
The other thing we realised, besides enabling new features, was that it would help with the several different forms of encrypted DNS, which right now, in order to get to them, require some pre‑configured list that falls outside the normal DNS protocol; we thought this could help with using that more seamlessly. And so DELEG was born ‑‑ short for delegations, of course ‑‑ which was supposed to be an extensible parent‑side record that could specify the attributes of the different name servers that a zone was being delegated to.
Our initial effort actually was based on a draft that Tim April of Akamai had written right after the pandemic started; he called it NS2 at the time, and essentially it has much the same core ideas as what we pursued with DELEG, so Petr, Ralf and I signed on as co‑authors for Tim's draft, because Tim is not as actively involved in the IETF right now. One of the things that was important was getting contributions from a large number of experts from across the industry; one of the frequent criticisms of the IETF standards process is that it is kind of ivory‑towerish, that it is not well represented by all aspects of the industry but by people, typically implementers, who are deeply involved with the DNS protocol but not so much with what running an operation or a registry is really like. So we had many, many people participating in that hackathon, and some of the ones that also dug in with us in trying to get the DELEG effort off the ground include the names here ‑‑ Roy, David and Shumon ‑‑ and there were many more contributions, some from others of you in this room; I can look out and see right off the bat four or five people who I know also commented, and all that feedback was very, very much appreciated. I do want to call out Roy and Shumon for having done some important testing, which was: if we added this new record in the parent, would it be true that old resolvers, which have no idea what is going on with this, would not be impacted by the presence of this new information? And that is largely a true statement, but we don't need to go into the details right now.
When we socialised it beyond that hackathon group, we were looking for people ‑‑ anybody: ICANN, web implementers ‑‑ who would say, this is a horrible idea, please don't do that. By and large the feedback was positive; that's not to say people didn't have some concerns, but we received far more "encouraging" than "hold off", and that gave us the incentive to continue to pursue it. In fact, at the last RIPE meeting, back at the end of November, I socialised the idea here to the RIPE community. It was definitely aspirational; we kept away from saying, well, this is the way it's going to be, and hopefully that talk indicated this is the way we would like it to be. We have also talked to the operators at NANOG, and we got more feedback about the idea of improving delegations than people saying no, just leave everything the way it is. We did publish the first draft in January, which is in the record at the IETF, and the intent at the time was that perhaps the DNSOP Working Group could adopt this existing work and then, for any future protocol development that relied on DELEG, we could spin that off into another group.
So, for those of you who are not aware, the basic idea of the way the record looks: this is a parent‑side record that does get DNSSEC signed, rather than relying on a transition into the sub‑zone to continue the chain of trust. The mnemonic we have chosen is DELEG, and it looks like a service binding (SVCB) record where you can define different attributes of the name server you are talking to. Here is a simplified example which shows what a typical use case with NS records and glue records would look like. There were several other ideas advanced, including being able to specify an ALPN, which defines the transport, and similarly we also realised that this could solve one of the big issues that operators have with maintaining DNSSEC at the parent, in conjunction with work like Peter is doing; in particular, we would be able to specify a service binding record in the child that identifies all the attributes of the name server. All the details of this are not that important, in part because things are in a state of flux and, based upon what I am about to say next, something might emerge that looks very different from the way we initially proposed it.
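For a flavour of it, the parent side of the original proposal looked roughly like this ‑‑ syntax approximated from the early draft, record data made up, and all of it subject to change in the new Working Group:

    ; classic delegation with glue:
    example.com.       IN NS     ns1.provider.net.
    ns1.provider.net.  IN A      192.0.2.53

    ; proposed: a DNSSEC-signed, SVCB-like, parent-side record carrying
    ; name server attributes such as transport (ALPN) and address hints
    example.com.       IN DELEG  1 ns1.provider.net. alpn=dot ipv4hint=192.0.2.53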
So, the next step, after submitting the draft, was to facilitate further interest in the IETF, and a mailing list was set up. I want to just point out, for those of you who know that my personal domain name is dd.org: that is completely unrelated, just a coincidence, not indicating specific advocacy for this one approach. That mailing list was spun up to start diving deeper into it, and the other step was a birds of a feather session at the IETF; a BoF is a meeting where like‑minded people come together to decide if a problem is worth working on and, if it is, how it should be pursued and which Working Group should take it. We did convene that BoF at the March IETF, which was held in Brisbane, and the two big questions were: should we work on it, and if we did, should it be in a separate Working Group? The consensus from the participants was pretty much yes and yes to both questions: we should work on this, and it should be in a new Working Group. Notably, that means that the new Working Group, which is not yet chartered but is anticipated to be soon, is also called DELEG, so that's now two things called DELEG; when you are talking about it, hopefully context makes clear which one you mean, and whatever emerges from the DELEG Working Group might not look like what the original proposal looked like. Work did continue in March and April, focusing on what the proposed charter for the new Working Group would be, and the Working Group proposal is actually out and live now; the charter being proposed to the IESG, who must formally approve any new Working Groups, was just finalised a week ago. It specifies a few different things: notably, the first of the expected documents to come out of the Working Group is supposed to be a requirements document that identifies really clearly what we are trying to do, without listing the possible solutions, and then there are three broad areas of consensus about what could be addressed, for which we should identify solutions to put forward as drafts and eventually standards.
The new group is proposed for the Internet area. For those of you familiar with the IETF, it's broken into several areas which cover Internet technology; the Internet area is for the fundamental protocols that affect the Internet as a whole, and that's where most of the DNS Working Groups are. Notably, it is not where DNSOP is, because that's in the operations area, but the ADD group I co‑chair is in the Internet area. Warren Kumari is proposed to be the responsible area director for the group; the other is Erik, who worked with Warren on getting this proposal off the ground, and one of them has to take responsibility for handling the Working Group. The proposed Chairs are Brian, and Duane from Verisign, who many of you are very familiar with; a lot of people applied, which made it really hard to identify specifically the people they wanted to home in on ‑‑ there was a balancing act going on there ‑‑ and I think they made a great choice. One of the new things Warren wants to try is an explicit review of whether the Working Group Chairs should continue, essentially every two years, so this will be an interesting new experience for the Working Group. The proposal is on the IESG telechat, where they get together to handle current business, and that should be a week from now. It is very likely that approval will happen, probably with a few more charter tweaks, because that is what the IESG does.
So, where that leaves us now is, in anticipation of having a new Working Group created, the DELEG draft authors ‑‑ including Petr, myself, Ralf and Tim ‑‑ are continuing to work on the original draft. We do expect to have an 01 version of the draft out in the datatracker before the Vancouver IETF: a little bit of shuffling, responding to some feedback about areas that could be made more clear, and so on. If you would like to contribute, it is available on GitHub. At that point we will be seeking adoption by the new Working Group, but probably not any time before the Dublin IETF in November, in part because of this issue I mentioned, where the charter is pretty clear on saying we really should have a requirements document before we start having a solutions document.
And speaking of which, the requirements document needs writing. I am about to send a message out to the DD list saying, hey, by the way, has anybody started on this? Because I am unaware of it, but just because I am unaware doesn't mean it hasn't happened. I am interested in contributing to such a document, so look out for a message on how we could possibly get started on this if it hasn't already been. One of the things that has come up in DNSOP and the DELEG BoF is that there are some competing ideas about how we could advance this core idea of improving delegations, like hacking the existing DNS records. I am not in favour of the other approaches ‑‑ we like the approach we like ‑‑ but it should be noted they are out there. And one of the interesting things about this is that whatever we come out with, even if it is not our original DELEG proposal but some other DNS hack, we will have added friction to the system. One of the great hallmarks of the Internet has been the relatively frictionless ease with which things can be innovated, and the IETF adds some friction, but it generally adds friction to create a better end product: there are several things that have been implemented outside the IETF, then brought to the IETF and improved by the process or, unfortunately, the IETF had to document: here is what exists, and here is why it has engineering faults, but you are going to have to live with them.
So, I don't know whether any of these other ideas are going to come to fruition as drafts themselves, but we will be looking out for that possibility, and it will probably come up again and again while we start honing in on just what the requirements are and what solutions we can do. Somebody submitted an EPP extension draft to the registry extensions Working Group for facilitating the communication of DELEG from registrars to their registries, but at the moment, because we really don't know exactly where DELEG is going, that will probably have to go on the back‑burner until something more concrete starts emerging.
Finally, I will blame Willem for the whole DELEG pun, because I think most of our Dutch colleagues tend to pronounce it "DELEG" ‑‑ and don't do a Google image search for it, because you get a lot of undesirable responses.
Any questions?
JIM REID: Just a random member of the DNS Working Group here. Nice presentation, and you summarised the situation fairly well. Two comments, just for the benefit of everybody here: it's not necessary to always go fully through the IETF process and have a Working Group‑forming BoF before a Working Group is formed, and the opinion of the IETF leadership was that the BoF in Brisbane was so successful and constructive that essentially all the bumps have been taken care of, and we can go straight to forming the Working Group once one or two things are sorted out with the charter.
I think the other concern we are going to have, when the Working Group meets for the first time in Vancouver, is figuring out how we are going to timetable all the work, because in some cases there are overlapping or competing ideas, and I think it's going to be a big challenge for the Working Group Chairs in particular how they coordinate and trade off between these competing ideas of where they see the future of DELEG going. That's going to be an interesting challenge, but this is a very exciting idea. I am a bit concerned that we might unearth more problems about zone cuts ‑‑ I think we might go through all that yet again ‑‑ but we will see how that goes as things progress.
DAVE LAWRENCE: Thank you, Jim. One of the things I did want to mention: besides giving your opinion on GitHub, if you are an operator, providing that constant impetus for moving forward is tremendously valuable ‑‑ it doesn't have to be any kind of weekly thing, but showing up every once in a while to say, I like this and would like to see it implemented, or implementing it yourself. It is likely to see at least useful adoption on relatively short time frames, even though we would expect the old method of delegations to be around for the rest of our lifetimes.
JIM REID: Don't do stuff on GitHub, we have our mailing list.
DAVE LAWRENCE: For our document in particular, right; for discussions, I 100 percent agree.
JIM REID: This is potentially going to have significant repercussions across the whole of the DNS infrastructure, and we in the IETF are going to need a much more considered approach towards deployment and implementation. Compare that with DoH, which was done and dusted in one meeting at the IETF, and the implications of that later on created all sorts of stuff. We have to be careful that we either try to avoid those problems in the future or at least have a better understanding of what we are getting into beforehand.
DAVE LAWRENCE: I would respond a little bit only I am over time.
(Applause)
WILLEM TOOROP: Next is Martin Pels from the RIPE NCC, who will give an update.
MARTIN PELS: Hi, I would like to talk about what we have been doing since RIPE 87.
So, first off, we have built some things. My colleague Anand spoke last time about our plans for Tokyo; we had quite a bit of delay in getting hardware over to Asia, but I'm happy to report that the site finally went live last month. We are in an NTT building in Tokyo, we are peering at JPNAP and other exchanges there, and the site is now carrying 10% of our AuthDNS traffic, so we are quite pleased with that.
Then, some other expansions: we added a few hosted DNS instances in Venezuela, the United States and Indonesia. And we have been very busy with upgrades: we were running CentOS 7 on all of our DNS servers, and that goes end of life around July of this year, so we have upgraded all of our AuthDNS servers to Oracle Linux 9 and migrated from Ansible to SaltStack. We still have a few back end systems to take care of, but the DNS servers themselves are all on Oracle Linux 9 now.
Then, sadly, since RIPE 87 we also had an outage. To give a little bit of background on that: we run a set‑up with a number of core sites where we have hardware routers and servers, but we also have several hosted DNS instances, which are single‑server deployments, either on VMs or on physical hardware. Each server obviously runs a DNS server application, but it also runs a routing daemon that talks BGP to an ISP or to participants at an IXP, and the routing daemon has its own routing table with the Anycast prefixes that we advertise to the world, the connected peering LANs, and the routes received from the BGP peers.
So, we nicely migrated everything to SaltStack. This went quite smoothly, no issues there, but what we failed to do was configure SaltStack to pin a specific routing daemon package version. So what happened is, there was a new upstream release of the routing daemon software that we use; our boxes automatically downloaded it and started using it, and this release introduced a breaking change where the connected routes were not imported into the routing table. So the BGP routes we received could not be installed, because there was no next‑hop. We were still advertising the Anycast prefixes, so we were getting traffic in, but we could not send any traffic out, because the box didn't know where to send it.
So, this led to time‑outs for resolvers talking to eight of our 19 AuthDNS sites. We run multiple different set‑ups, so it did not affect all of our PoPs, but depending on where you were, you got time‑outs.
We did a couple of things to deal with this. Obviously, we fixed the release pinning. We also made some changes to our alerting: normally in a case like this we would only receive alerts by e‑mail; we have made changes so that in these specific cases we also get SMS alerts. And we are investigating whether we can do automated withdrawal of prefixes if there is a problem with a particular hosted box. We are being very careful with this, because we don't want to get into a situation where we automatically withdraw our Anycast prefixes on all of the instances and turn off the entire service, but that's something we will look into for individual cases.
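For what it's worth, the pinning fix in SaltStack looks roughly like this ‑‑ the package name and version are hypothetical; the point is the explicit version plus hold:

    # pin the routing daemon so new upstream releases are not installed automatically
    bird2:
      pkg.installed:
        - version: 2.14-1
        - hold: True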
Then, I wanted to talk a bit more about the service side of what we run. Our AuthDNS set‑up is composed of five different services. We have pri.authdns, which is the Reverse‑DNS for all of the IP space that is handed out by the RIPE NCC; we do the reverse delegations for that. Then we have rir.authdns, which is the secondary DNS for the Reverse‑DNS delegations of the other RIRs; this is a full‑mesh set‑up, so each RIR is secondary for the Reverse‑DNS delegations of all the other RIRs.
Then we have the service that runs our own zones, like ripe.net, and the last two are the ccTLD service and NS.ripe.net, about which I want to talk a bit more.
So, about the ccTLDs, this service is defined in the RIPE document, RIPE‑663. It is meant for smaller ccTLDs that are not able to support their own full infrastructure, and we offer a service to them to be secondary for their zones.
In the RIPE document it is defined that this is only based on agreements with a fixed lifetime, so that we can evaluate and re‑evaluate whether the ccTLDs still adhere to the criteria we have for this. And these criteria are: there should be fewer than 10,000 delegations in the zone; they should not have more than three other secondary name server providers; and they should not have any commercial name server providers, because we figure that if they are able to pay a commercial party to handle their secondary DNS, then they don't need our support anymore and can be on their own.
So, when this document was written in 2016 we had 77 ccTLDs on the platform, and this has slowly decreased; we are now at 30, and the list is on the slide. We expect this to slowly go down further as more and more ccTLDs mature.
Then, the last service I wanted to talk about is NS.ripe.net. How this works: this is a service for larger LIRs, with a /16 of IPv4 space or more, and what they can do is, in their domain objects, configure NS.ripe.net as a secondary and have us serve their zone.
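In RIPE database terms, that meant a reverse domain object along these lines (values made up):

    domain:   0.192.in-addr.arpa
    descr:    Example LIR reverse zone
    nserver:  ns1.example.net
    nserver:  NS.ripe.net   # the RIPE NCC fetches the zone from the primary and serves it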
Now, this service has been offered for quite a while, but we see a lot of problems with it. First of all, it's unfair to smaller LIRs: as I mentioned, you need a /16 of IPv4 to be able to apply for this, and not many LIRs have that.
The second issue is that it's a service that used to be something special but, these days, anyone can run ‑‑ and many do run ‑‑ DNS services for others, so we are actually competing with our members here.
And then there are various technical issues that we see. The biggest one is provisioning failures, where people configure us as secondary, we then try to fetch the zones from the primary, and that fails. We see this for about half of the zones configured on this platform, so there is a lot of brokenness.
Due to time, I won't go into all the other technical problems that we have with it; some of them are easier to fix than others, but addressing them would take a lot of our time, and it would still leave the non‑technical issues: that it is not fair to smaller LIRs, and that we would be competing with our members.
Instead, we propose to shut the service down. My colleague Anand wrote an article about this in more detail. In essence, the timeline is that we want to stop accepting updates and new additions to the service from July 1st. Then, for the rest of the year, we will reach out to the existing users of the service and ask them to move away, and in January 2025 we plan to turn it off.
So, on the list, we actually announced the end of service date as December 31st. We got some comments about that, that it's in the middle of the holiday period so please don't do that. Our hope is that, by December 31st, everybody has moved away so we can actually turn off the service sooner. If it turns out that at the end of the year, there are still some zones left, we don't mind waiting a couple of days and turning it off in January, if that makes people feel better.
So, if you have any comments about this, find us after the session, because we are out of time, or send a mail to the list and let us know. If you really, really want us to keep this, come up with a good reason, because otherwise we will shut it down. Thanks, that's all.
(Applause)
WILLEM TOOROP: The next speaker is Sandoche Balakrichenan, who will talk about energy consumption of DNS, measuring it.
SANDOCHE BALAKRICHENAN: Good morning. So, today I am going to talk about studying DNS energy consumption. Actually, I submitted this material for a lightning talk, but thanks to the Chairs I have been given 15 minutes, so I also have some preliminary results here.
What is the motivation for us to do this study? One is corporate social responsibility: the company that I work for, AFNIC, manages the Internet name space for France, so we have an engagement with the French government which mandates us to study our carbon footprint and, if possible, reduce it.
The second one is technical. I have two references here. One is from APNIC: we see more and more encrypted traffic in the DNS, about 23% when I looked on 16th May. The second reference is the research literature, which demonstrates that when you do encryption and decryption, energy consumption increases; this particular study demonstrates that when you do an SSL connection, the consumption goes up. So from a technical point of view we wanted to do these measurements, and if you look, there are many studies on web traffic but none on DNS, so we thought this is a research area that would be interesting to study.
So, AFNIC, the company that I work for, as part of its corporate social responsibility effort, has been quantifying its carbon footprint, and it has been working with its counterparts in Europe on a framework. This framework is based on ISO 14064, which classifies emissions into three scopes: scope 1, 2 and 3. To explain it in a simple manner: if I want to drink a coffee, I need a kettle and a cup to drink it, so the energy taken to manufacture the kettle and the cup comes under scope 1. If you relate that to the DNS world, the energy needed for the hardware ‑‑ the servers, the cables, etc. ‑‑ comes under the scope 1 category.
The next one: if I want to drink the coffee, I need to connect the kettle to the electricity and make the coffee; if I relate that to the DNS world, it is how much energy is needed to do a DNS resolution. And the third one is the other supply chain activities: for example, the office we have, the employees commuting between home and office, me travelling here to RIPE ‑‑ all this comes under scope 3. What AFNIC has been doing is quantifying scope 3 from 2018 to 2023, and as you can see, we were able to reduce the carbon footprint; for 2023, our scope 3 carbon footprint is around 625 tonnes of CO2 equivalent. We divide that by the number of domains we have, 4.3 million domain names, and it comes to 147 grams ‑‑ this is the carbon footprint of hosting one domain name for a year. In layman's terms, it's like travelling by car for 600 metres.
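The arithmetic behind that figure is straightforward:

    625 tCO2e  =  625,000,000 g CO2e
    625,000,000 g / 4,300,000 domains  ≈  145 g CO2e per domain per year

(the quoted 147 g presumably comes from the unrounded input figures).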
So what we want to focus on here is scope 2: how much energy is taken to do a DNS resolution.
So, there are some preliminary measurements that we did. We wanted to measure the energy consumption at the authoritative server and at the resolver, we wanted to identify the tools and architectures needed for this, because it is a new thing for us, and decide on the metrics to measure.
So, as I said earlier, we focus on scope 2: it's the consumption for DNS resolution that we wanted to study, and the energy is measured in watts or kilowatts and, over a period of time, kilowatt‑hours.
Where to measure? As a TLD we have access to the authoritative servers, so we could do the measurements there from a TLD perspective, and we wanted to measure at the resolver side too; we don't operate a resolver, so we created a laboratory environment to study that.
So if you see here, at the authoritative server level we connected hardware watt meters in four data centres, and these measure the energy consumption per second; we have a collector where we gather this data ‑‑ from the watt meter we send the data to an exporter, so that we can graph it. If you look, our servers are using 37.8 kilowatt‑hours per day.
For the resolver measurements we had a resolver running BIND, and what we did was chain the tools: we had a software probe called Scaphandre to measure the consumption, and we used a traffic generator to generate DNS traffic at a given number of queries per second.
So this is an initial measurement that we did: we generated different types of DNS traffic, from UDP to DoH to DoT. If you look at this graph, what we can see is that it's linear up to 750 queries per second, and after that we see a jump for TCP and DoT; but it is not understandable for us why TCP is more than DoH, so we need to dig deeper here. As I said earlier, these are preliminary results; we don't know what caused this, but we will do further measurements to see whether it comes back again.
So, the work in progress.
As we saw earlier, we wanted to study the different flavours of DNS resolution, from plain UDP to encrypted traffic, and we know that it's quite impossible to completely measure a DNS resolution, because of how resolution fans out. For that reason we wanted a mathematical model to estimate the number of packets and the packet size for these different types of DNS traffic, to benchmark the mathematical model against the real measurements we have, and to refine it so that we have an accepted interval. Finally, we want to convert the measurements into understandable terms, that is CO2 equivalent, as we did earlier for the scope 3 activities, for the DNS.
So this is a slide that I took from a deep‑dive presentation, just to demonstrate that it's quite hard to completely quantify a DNS resolution, because for one single DNS query you have a number of queries being sent over the wire. So we understand that it is difficult to quantify completely, and we wanted to do it in a simple manner, taking a lot of assumptions, so that this mathematical model can evolve.
The first thing, as I said earlier, is to estimate the number of packets for the different types of traffic. So, a simple mathematical model: for UDP with cache we have two packets, and without cache we have eight packets, because it has to go to the root, the TLD and the SLD.
SPEAKER: [[Inaudible]] to the TLD.
SANDOCHE BALAKRICHENAN: These are the assumptions we have. For TCP with cache, we have a connection phase of three packets and a closing phase of four ‑‑ seven packets of overhead ‑‑ and since we assume that the ACK is not piggy‑backed, the query and response take two packets each, so the estimate with cache is 11 packets.
If we go to DoH and DoT ‑‑ I have the DoH example here ‑‑ with TLS 1.3, TLS adds four more packets, so with cache and without cache the calculation gives 23 and 29 packets. So this is an estimation of the number of packets, but we also have to look at the packet size, because that also plays a part in energy consumption; for example, for a TLS connection we have X.509 certificates ‑‑ RSA, ECDSA, etc. ‑‑ so that also has to be taken into account.
Just for estimating the carbon footprint of a UDP resolution, we again take some assumptions; I only have the UDP example here. We take a packet size of 512 bytes, so for two packets that is 1,024 bytes, and to get to an understandable unit we convert it to gigabytes. From the sustainable web design framework, the energy consumed is the bytes transferred multiplied by 0.81 kilowatt‑hours per gigabyte; we take this assumption, multiply by the number of resolutions, and come to a value in kilowatt‑hours ‑‑ I am not going to tell you the value ‑‑ and then, to convert that to CO2 equivalent, also from the sustainable web design framework, we multiply by 442 grams per kilowatt‑hour. Comparable with the scope 3 estimations earlier, it comes to 1.4 kilometres by car. These are all assumptions that we have. This is a study that we have just started; we have forthcoming programmes focusing on this topic, and as we don't operate DNS resolvers, we would like to collaborate with other parties.
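For a rough sense of scale, plugging the stated assumptions (two 512‑byte packets, 0.81 kWh/GB, 442 g CO2e/kWh) into that chain gives:

    2 packets x 512 bytes = 1,024 bytes  ≈  1.0e-6 GB
    1.0e-6 GB x 0.81 kWh/GB             ≈  8.3e-7 kWh
    8.3e-7 kWh x 442 g CO2e/kWh         ≈  3.7e-4 g CO2e per cached UDP resolution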
We would also love your feedback. Thank you, I hope I am in time. Two minutes, thank you. Questions?
SPEAKER: Lars‑Johan Liman from Netnod. Thank you, I think this is important work, and we as a community ‑‑ and in this case you here ‑‑ need to start to work on this, but it is an enormous field and there are such a lot of variables in here that it strikes me as very difficult to reach any useful results at an early stage; but if we don't start to attack it and work on the problem, we will never get any appreciation of it at all.
But there were some assumptions in your presentation that I felt were quite far from what I would assume is reality. So, as long as you make it clear, when you present any results of this, that they are based on a very strict and limited set of assumptions, I think this is valuable work, because it will give others a reason to work on other parts of this, and slowly and carefully we can start to build a better picture of the energy consumption.
SANDOCHE BALAKRICHENAN: I do agree with you; it's something that has just started, so there are a lot of assumptions here that are not correct ‑‑ I completely agree ‑‑ but I think it's the first step; we will dig further and gain some maturity.
AUDIENCE SPEAKER: If no one begins we will never reach the goal so thank you.
SPEAKER: Hello, I am Alex Lemer from France, and it's more a remark than a question. This kind of evaluation of CO2 equivalence is interesting, yeah, but it doesn't take into account that when you design a system, you have bought hardware that is sized for a certain capacity of infrastructure, and by doing this kind of economic analysis on the small part from each query, you don't take into account that the hardware you bought has the capacity to do more; when you reduce the footprint of one query, you don't reduce the footprint of the whole system. In my opinion, this kind of per‑query, per‑usage evaluation is not the best way. We should evaluate the system as a black box ‑‑ which hardware do I need, which software do I need ‑‑ and limit the quantity at the macroscopic scale, instead of the microscopic side of reducing each query.
SANDOCHE BALAKRICHENAN: I do agree with you: the consumption depends on the hardware, and also on what type of resources are used ‑‑ whether the energy comes from wind or from fossil fuels ‑‑ so all this has to be taken into account, but that comes under the scope 1 category.
Under the scope 2 category, what we would like to look at ‑‑ and these are just assumptions here also ‑‑ is, for example, whether things like QNAME minimisation in the DNS could be used to reduce how much traffic goes out; and also, because most of the servers are over‑provisioned today, whether we can reduce that so that it reduces the carbon footprint. These are the objectives that we have; as I said, hypotheses, and we have to dig, so I agree with you.
WILLEM TOOROP: I am sorry, I have to cut the queue because we have so little time, thank you, Sandoche, that was wonderful.
Next up are Shane Kerr and Dave Knight. Shane has been working with a whole bunch of other people on the resolver best practices ‑‑ or recommendations ‑‑ from the RIPE community, and this is, furthermore, a panel discussion, I guess, on how people are doing those best practices in practice, and comparing. Shane.
SHANE KERR: We are going to do a discussion here, the slides are very light, we will get right into it.
The idea is that the RIPE community built a task force to come up with recommendations for DNS resolvers, and we published that document officially on May 1st. What this discussion is not going to be is a detailed analysis of that document, or a review of the history of it, or anything like that. What we want to do today is talk to a couple of people who work in organisations that run large‑scale DNS resolvers and do a reality check: we wrote these recommendations based on what we thought were good ideas, so what are we actually doing in practice, how does it work, and what is interesting about the comparison? So hopefully it's interesting.
This is kind of what we are going to talk about here.
So, who are we? My name is Shane Kerr. I currently work for IBM, where I came across as part of the NS1 acquisition ‑‑ we do manage NS1 ‑‑ and I was the Chair of the task force which created this document, the recommendations for DNS resolvers. I will let my colleagues here introduce themselves.
BABAK FARROKHI: I am the director of operations at Quad9, we run one of the largest DNS resolvers around the world, operating in 119 countries in 26 locations and yeah, this is why we are here to discuss the best practices document, so I will hand over to Dave.
DAVE KNIGHT: I work on the UltraDNS recursive side; we operate a public resolver, so I can talk about that, but we also operate our internal infrastructure resolvers, and we use the resolvers of others when in a Cloud environment. When I looked at this document I tried to think about it in terms of all of those things. For our public stuff, we have a completely open public resolver, but we also have white‑labelled resolvers for paying customers, which are accessible only to them, so we have quite a range of different uses and implementations of resolver stuff to talk about.
SHANE KERR: Great, thank you, both. Right, so we are just ‑‑ we just went through the document together, we both ‑‑ we had all read it previously and went through it yesterday and picked out a few things we find interesting.
So the idea is, we are going to talk about this and just mention how we do things, what we found interesting about the recommendation and if any of you in the audience have anything you want to say, hop in, we don't have a whole lot of time here but our goal is to find interesting things and dig into it, if we run out of time we run out of time I think it will be great anyway.
First thing: we had a bit of discussion in the document about multiple implementations. The idea is that if you run a single type of software, you are vulnerable to zero‑days; you can get around that by running more than one implementation, but then you have to maintain different versions, and that also introduces chances for instability and problems. So, I don't know ‑‑ I think, Dave, you were the one that wanted to talk about this most.
DAVE KNIGHT: This is interesting, because the document suggests it is a good idea to have multiple implementations, but in our experience the kind of vulnerability which renders a single implementation unviable ‑‑ to the point where you would definitely want to switch to another ‑‑ is reasonably rare. So there is a kind of cost‑benefit analysis that you need to do there, because obviously running multiple implementations introduces operational complexity and makes things harder to debug, or more surprising. And then you have to think about how you run this: are you keeping the second implementation as a backup, or running active‑active? One is more complicated, and the other risks that the implementation you kept as a backup misses configuration changes, so when you bring it in it doesn't work the same. What I would suggest is that you carefully consider this before doing it.
BABAK FARROKHI: I have to slightly disagree with you, because at Quad9, security is our top priority, right? This is actually one of the trade‑offs that we are aware of and willing to take, and this is why we have three different implementations: just in case we run into a security issue, we don't want to take any chances, we want to be able to turn that one off and let the other resolver software handle the load. That's why.
SHANE KERR: You also mentioned something yesterday that I found quite interesting: one of your implementations causes more operational issues, and that's the kind of thing you would only discover by running multiple implementations ‑‑ you can pull one software implementation out of service and either replace it with something else or just leave it out, and that's something you can only do when you are running multiple implementations.
Jim?
JIM REID: Jim Reid. Sorry, I have to disagree with what you were saying, Dave. I think it's important, when you are running key services for a large installed base, or for paying customers who have got very demanding requirements, that you have multiple implementations in place, and probably a set of circumstances where you have different sets of name servers providing the service, running different implementations at the same time, and you are switching between these as a matter of routine. Then, if there's some kind of catastrophic packet of death, you can switch all the implementations away from the exposed platform onto something else without figuring out: what configuration file, or what magic button do I have to push, to make this happen? If you are doing these things as a matter of routine, it will all click into place if and when something goes wrong. While you are right to say there's a trade‑off around the cost of these multiple implementations, I think a lot of people look at this primarily from the point of view of running the DNS service itself, without a full understanding of the reputational risk to the business. Take UltraDNS ‑‑ great platform as it is, but if it was to fail for some reason, the public perception is "UltraDNS is crap", and that's a big business risk for you.
DAVE KNIGHT: Sure. I guess, to respond: we were thinking about that in terms of this document, and the document isn't telling people how to run a big global Anycast service. People who run a small enterprise network might read and benefit from this document, and their needs are going to be at least a little bit different from the likes of us. I am not saying don't do it ‑‑ think about the bigger picture before you make a decision.
JIM REID: Exactly. I think the question that has to be put to people is: if your DNS goes down, what's the reputational cost? What's the actual financial cost of that? It's not just in terms of, I can't send e‑mails or watch videos on Google.
DAVE KNIGHT: Yeah, thank you.
SPEAKER: ISC. I have seen a deployment or two, because we provide support for DNS, and I think the point about having competent people who can handle multiple implementations is a very important one, because if the operational staff is confused, multiple implementations are not going to help: if they have trouble configuring one, they will have much bigger trouble troubleshooting two or three, or you name it. So I think it's about balance, and I mean, I agree with everything which was said; it makes sense on paper, that's perfect, but it needs to take reality into account, because different organisations work differently and, yeah, I have a list of SIM cards I am never going to buy, so that's that.
SHANE KERR: All right, cool. Next topic. In the document we have a breakdown of different ways you can deploy your service, and we kind of present this contrast: one option is doing things on bare metal, where you run all your own stuff, and the other option is putting things in the Cloud ‑‑ that was kind of how we presented it. Dave, maybe you wanted to talk about your thoughts on that.
DAVE KNIGHT: Yeah. Really, just reading that in the document, it seemed to paint this picture of: you do things on bare metal, that's how you get performance, or you do it in the Cloud. For ourselves, we run all of our resolver workloads in virtual machines, always where we can control the hypervisor and the hardware; the performance difference between running on that and on bare metal is negligible ‑‑ we can attach the NIC directly with SR‑IOV ‑‑ so there's no real difference for us. That was something to point out: the document makes it look like there's a strict separation, and really there isn't.
And one other thing I was going to mention: where we do things in the Cloud, we use a Cloud that specialises in delivery of IP Anycast and high‑volume UDP services. Perhaps this is out of date, but we don't imagine we can run our services effectively in a big public Cloud. So we have deliberately chosen Cloud environments which are well tuned to the kind of things we are doing with DNS, and we don't run things in one of the big five clouds.
BABAK FARROKHI: I would like to add that Cloud is only available in some hotspots; it is not available everywhere, and one of our missions is to be present in as many countries as possible, and Cloud is not in those countries. Also, for security reasons, we would like to own the hardware and have more control, so that's why we prefer to run on our own VMs, on our own metal.
SHANE KERR: Great, clouds are bad, good.
Next topic: High availability. I don't even remember what we were discussing here. Sorry.
DAVE KNIGHT: Yeah, this is why I have the notes ‑‑ now I can remember. Yes, I think there wasn't anything terribly contentious there. With this document, we are not standing up here trying to pick apart faults in it; there's a lot in there to like, it's a good document, and if anything we kind of need to apologise for not having made any of these comments during the formative stage of the document rather than after it was published. So this was one of the ones where we just wanted to acknowledge how we actually do it, because in that high availability section it suggests various different options, and I think for both of us, we do it with IP Anycast, and it's Anycast all the way down to the server. I had other notes, but I don't think they are really relevant.
SHANE KERR: I remember now: there was a distinction in how, at each site, you have the different resolvers do high availability ‑‑ I think you do BGP within the site and ‑‑
DAVE KNIGHT: That's right. We use IP Anycast on v4 and v6, and that goes all the way to the server, so each individual workload is running a BGP speaker which advertises service prefixes from the server; those are then aggregated in a router, with covering prefixes advertised to the Internet. I don't know.
BABAK FARROKHI: I would like to ‑‑ our stack is that we run multiple metals in our hotspot locations, and we run a load balancer in front.
SHANE KERR: For me that was a key difference: having a load balancer versus not having a load balancer. Yeah.
DAVE KNIGHT: One last note I had on that: one thing we do do, when we use a Cloud environment, because that's further from our control, is use covering prefixes for the services that we advertise there, so that in the event that there's any problem with the command and control of that Cloud, we can unilaterally start advertising more specifics and pull all that traffic back into our own network or DDoS mitigation network without needing to touch the command and control for the Cloud. So it's kind of a little security mechanism that we built into the routing, which protects us from any failure of a third‑party operator.
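To illustrate the mechanism being described ‑‑ only as a sketch, with documentation prefixes standing in for real address space ‑‑ the protection relies on longest‑prefix matching in BGP: a more‑specific announcement always beats the covering prefix, so the operator can reclaim traffic without the Cloud's cooperation:

```python
import ipaddress

# Hypothetical covering prefix normally announced via the Cloud
# provider (TEST-NET-3, for illustration only).
covering = ipaddress.ip_network("203.0.113.0/24")

# In an emergency, originate the more-specific halves from the
# operator's own routers; longest-prefix match means these win over
# the Cloud's /24, pulling all the traffic back.
for more_specific in covering.subnets(new_prefix=25):
    print(f"advertise {more_specific} from our own network")
```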
SHANE KERR: Cool. All right. In the next section, we have recommendations around ECS. This is EDNS‑Client‑Subnet, and the way it works is that a resolver, when it sends a query on behalf of an end user, can include information about the end user's IP address. In principle, the idea is that it lets the authoritative server provide a better answer, for example with geolocation, directing to different services and things like that. It does have privacy implications and operational implications, so we didn't make a strong recommendation one way or the other, I believe, in the document, but it was important to mention it, and I think both of you were interested in it. I don't know, do either of you support ‑‑
DAVE KNIGHT: We don't do ECS at all, and my one comment on the document is maybe it could have said a little more to make you think about the privacy implications of it, but we don't do it.
BABAK FARROKHI: It's a double‑edged sword; some people like it and others don't. If you use 9.9.9.9, ECS is disabled by default, but if you want ECS, you can use 9.9.9.11, so we do both.
SHANE KERR: The more widely distributed your edge is, the less useful ECS becomes, because you are much more likely to get to a resolver close to you, which is more likely to get good results anyway.
AUDIENCE SPEAKER: On the other end of the spectrum, if you are concentrated in one place, you don't need ECS because it has no impact. I think that's something we have trouble explaining to less experienced operators, because they just see ECS and say we want this feature because we want it, and that's it, and most of the time it's unneeded, because not a lot of services are distributed all over the place.
SHANE KERR: Yeah. Although if you are running your resolvers in big clouds and there are only like ten locations in the world, maybe that's the sweet spot, I don't know. Anyway.
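For readers who want to see what ECS looks like on the wire, a minimal client‑side sketch using the dnspython library ‑‑ the query name and the client address (a documentation‑range address) are illustrative, and 9.9.9.11 is the ECS‑enabled Quad9 service mentioned above:

```python
import dns.edns
import dns.message
import dns.query

# Attach an EDNS-Client-Subnet option advertising a /24 around a
# (documentation-range) client address; the resolver may use it to
# pick a geographically appropriate answer.
ecs = dns.edns.ECSOption("192.0.2.1", srclen=24)
query = dns.message.make_query("www.example.com", "A",
                               use_edns=0, options=[ecs])

response = dns.query.udp(query, "9.9.9.11", timeout=3)
print(response.answer)
```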
The next thing is, we have a little bit of text around RPKI, so this is router ‑‑ route origin authentication‑ish stuff; Dave was chastising me yesterday for what I was calling it ‑‑ it's questionable how much benefit this provides in the DNS context, but maybe we can talk about how you guys implement this.
BABAK FARROKHI: We sign but we don't validate.
SHANE KERR: So it's two parts, validation and publishing, yeah.
BABAK FARROKHI: But that's on the road map. We see huge value in validation, because we don't want to run into this BGP trouble, and that's something that is planned for before the end of this year.
DAVE KNIGHT: We are in the process of publishing signed ROAs for all our address space; all of our v4 resolver stuff is done now, and we are now doing the v6 ones, so we should have that finished in the next six months.
SHANE KERR: Shame on you for doing v4 first.
DAVE KNIGHT: There are reasons for this order of events. And we recommend strongly that you should do this. I had another note on this. That was it: we don't do validation, but that's because we are largely a transit‑centric provider and our transit providers are mostly doing this already, so it's not necessary for us to do it as well.
AUDIENCE SPEAKER: One of the things is that it's always the DNS, but some of the security problems that the DNS has been blamed for have actually been route hijacks, so it is significant in that context. I mean, one of the hard sells on DNSSEC has been: oh, you are adding more complexity to your system for limited protections; well, route hijacks are one of the places you get your protection, so even if DNSSEC can't answer for it, good ‑‑ I am still very much in favour of route authentication.
SHANE KERR: We have some language about how you handle negative trust anchors. They are for when you are using DNSSEC and you are trying to resolve an answer, and the people maintaining the authoritative servers have messed it up somehow ‑‑ they haven't re‑signed their zone, or they put the wrong key in the parent ‑‑ and you have high confidence that it's actually just operator error; you can configure your resolver to say: turn off DNSSEC validation for this zone. It's not a great solution, but it's also needed in some cases, although maybe less now than it used to be. So, maybe you want to talk ‑‑
DAVE KNIGHT: My response is very quick, which is: we looked around ‑‑ we could deploy an NTA, but we don't know that we ever have.
BABAK FARROKHI: We did that quite a lot. We had a recent incident with one of the ccTLD operators having some trouble with their DNSKEY, and we had to deploy an NTA immediately, after confirmation with them, because the whole country could not open their websites, like the banks. So that was one of the use cases, but we should also mention that we removed it quickly after they were back online; that's one of the important things.
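Before deploying an NTA, an operator typically confirms that the failure really is a validation problem rather than an outage. A minimal sketch of that check using the dnspython library ‑‑ the zone name is a placeholder, not a real incident ‑‑ is to query once normally and once with the CD (checking disabled) bit set:

```python
import dns.flags
import dns.message
import dns.query

# "broken.example" stands in for a zone whose DNSSEC signing has
# gone wrong on the authoritative side.
QNAME = "broken.example"

def probe(resolver_ip, checking_disabled):
    q = dns.message.make_query(QNAME, "A", want_dnssec=True)
    if checking_disabled:
        q.flags |= dns.flags.CD  # ask the resolver to skip validation
    return dns.query.udp(q, resolver_ip, timeout=3).rcode()

# SERVFAIL (2) with validation on but NOERROR (0) with CD set means
# the data is reachable and only validation fails -- the classic
# operator-error signal where an NTA may be justified.
print(probe("9.9.9.9", checking_disabled=False))
print(probe("9.9.9.9", checking_disabled=True))
```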
SHANE KERR: Yeah. I guess that's it. So, the next thing is, we also have text in the document about encrypted transport, and this is encryption from the client, the end user, to the resolver, and our recommendation was: you should enable at least one. We didn't specify which, because that's going to depend on what you think your end users are going to have, and things like that, but the three basic approaches available today are: DoT, which is DNS over TLS, a sort of plain encrypted channel; DoH, DNS over HTTPS, which adds a bunch of unnecessary overhead; and DoQ, which is the great future that's going to save us all, DNS over QUIC, but that's not very widely deployed.
So those were our recommendations.
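As an illustration, a minimal DoT query with the dnspython library: the same DNS message as over UDP, just carried in a TLS session on port 853. The query name is illustrative; 9.9.9.9 and dns.quad9.net are used simply because Quad9 is on the panel, and any DoT‑capable resolver works the same way (dns.query.https offers the equivalent for DoH, given an HTTP client library):

```python
import dns.message
import dns.query

query = dns.message.make_query("www.example.com", "A")

# DNS over TLS: port 853, with the resolver's published TLS
# authentication name used to verify the certificate.
response = dns.query.tls(query, "9.9.9.9", port=853,
                         server_hostname="dns.quad9.net")
print(response.answer)
```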
I know Quad9 implements two of them now ‑‑ or just one?
BABAK FARROKHI: Two plus one; we also do DNSCrypt. To encourage you to implement these: we are seeing a constant increase in the traffic, at a scary pace ‑‑ scary for us as operators, because it is more hardware to encrypt and decrypt ‑‑ but that's great for users, and we are seeing maybe a few percent increase every month.
SHANE KERR: Oh, wow.
BABAK FARROKHI: And that's huge.
DAVE KNIGHT: We have DoT and DoH on some of our resolver addresses ‑‑ I think all of those which are in commercial relationships; the free public open service doesn't have it yet. That's something we are working towards.
SHANE KERR: Okay. Are either of you going to be looking at the QUIC stuff?
DAVE KNIGHT: Yeah.
SHANE KERR: Maybe we will have another report in a year or so. This one is: we make a recommendation that you support NSID, especially if you use Anycast, because otherwise you send a query and it comes back and you don't know where from, and I think both of you have strong feelings about this, right?
DAVE KNIGHT: I certainly do. I think that whenever you run an otherwise hard‑to‑identify service, particularly Anycast things, on the Internet, it is so helpful for debugging to make it easy for users to identify it. You know, not just when they are trying to solve a problem, but also, for an Anycast service, the user wants to know: is this end point reasonably geographically close to where I am? With that in mind, I think the document suggests that it's typical that people often use airport identification codes to identify a node, which I would suggest are not great, because they are often not human readable, for the case where a person using it wants to see if this is near them, but ‑‑
SHANE KERR: Dave was in Canada, and they broke the codes by putting a Y in front of every one.
DAVE KNIGHT: We use UN/LOCODEs, which have a country code and a more meaningful name for a city after it, and also, not all locations have airports; we have deployed Anycast nodes in a place where there wasn't an airport in the country that we could name it after. And another thing to note on the identification stuff: you know, security people will often get upset ‑‑ we can't put the name of the server somewhere published, because that's bad. But what's in the identifier doesn't have to be a domain name; it's just a string. As long as there's meaningful information in there, it's useful; you should definitely always do it.
BABAK FARROKHI: It depends on the scale, right? When you are operating more than a thousand resolvers in 100 countries and a user asks you about a performance issue, the first question is: which one? Where are you looking at? Which resolver are you hitting? They don't know, so you have to ask for NSID, and we put the hostname in there, because that's helpful, and we don't see any security threat.
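To see which Anycast instance answered, a client asks for NSID by sending an empty NSID EDNS option (option code 3). A minimal sketch with the dnspython library ‑‑ the query name is illustrative, and the payload access is written defensively because dnspython surfaces options it has no dedicated class for as GenericOption:

```python
import dns.edns
import dns.message
import dns.query

# An empty NSID option in the query asks the server to identify
# itself in the response.
query = dns.message.make_query(
    "www.example.com", "A", use_edns=0,
    options=[dns.edns.GenericOption(dns.edns.NSID, b"")])
response = dns.query.udp(query, "9.9.9.9", timeout=3)

for opt in response.options:
    if opt.otype == dns.edns.NSID:
        # The NSID payload is an opaque string chosen by the
        # operator, often a node or host name.
        data = getattr(opt, "data", b"")
        print("answered by:", data.decode(errors="replace"))
```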
SHANE KERR: Cool. We are running out of time, I think ‑‑ we had a lot more things we wanted to talk about. I will pick just one at random, maybe to stir up a little controversy as we go into the break here. We have text discussing DNS cookies; they are kind of a lightweight way of getting some assurance that you are talking to the same user, to prevent DNS reflection and amplification attacks. So, at my organisation, on the authoritative side, we don't support it. I don't know ‑‑ I guess none of us support DNS cookies ‑‑
BABAK FARROKHI: Well, we do.
SHANE KERR: Oh
BABAK FARROKHI: It's fun; it's one of those things that some people like and some people don't, so if you like DNS cookies, just use 9.9.9.11; we support it there, but not on any other resolvers. Give everyone the choice.
SHANE KERR: Okay, yeah, yeah.
DAVE KNIGHT: We don't currently support DNS cookies. We had a look, and it seems reasonably easy to enable, but we have yet to find the motivation to get it done.
ANDRE: If you don't support cookies, make sure you don't break if you get cookies.
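For reference, a minimal sketch of a client sending an RFC 7873 DNS cookie with the dnspython library ‑‑ the query name is illustrative, and 9.9.9.11 is used because it is the Quad9 service that supports cookies, per the discussion. A cookie‑aware server echoes the 8‑byte client cookie back with its own server cookie appended; a server that ignores the option should still answer normally, which is Andre's point above:

```python
import os

import dns.edns
import dns.message
import dns.query

# RFC 7873 client cookie: exactly 8 random bytes, sent as EDNS
# option 10 (COOKIE).
client_cookie = os.urandom(8)
query = dns.message.make_query(
    "www.example.com", "A", use_edns=0,
    options=[dns.edns.GenericOption(dns.edns.COOKIE, client_cookie)])

response = dns.query.udp(query, "9.9.9.11", timeout=3)
print("cookie option present in reply:",
      any(opt.otype == dns.edns.COOKIE for opt in response.options))
```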
SHANE KERR: That's fair. All right, well, we are out of time. Thank you, everyone; we are around for a few days, so you can come and talk to us.
WILLEM TOOROP: That was very interesting; maybe we can do something like this again at the next RIPE meeting. Thank you all for coming, thanks, everybody ‑‑ that worked well for us ‑‑ and see you next time. Niall, you want ‑‑
NIALL O'REILLY: One last remark with my RIPE vice‑chair hat on, in introducing that panel session, Willem, I think you mentioned that Shane had led the task force for the RIPE NCC. It was a RIPE community task force.
WILLEM TOOROP: Absolutely, yeah, thanks for correcting me. Thank you, all.
LIVE CAPTIONING BY AOIFE DOWNES, RPR
DUBLIN, IRELAND