On 15-Dec-15 17:48, Mark Andrews via RT wrote:
> On 16/12/2015, at 1:31 AM, "Timothe Litt via RT" wrote:
>
>> Tue Dec 15 14:31:56 2015: Request 41298 was acted upon.
>>
>> Transaction: Ticket created by litt@acm.org
>>       Queue: bind9
>>     Subject: Special use zone handling
>>       Owner: Nobody
>>  Requestors: litt@acm.org
>>      Status: new
>>      Ticket
>> -----------------------------------------------------------------------
>>
>> Currently bind supports automatic empty zones (only) for reverse
>> address zones in private IPv4 and reserved IPv6 spaces.  It doesn't
>> do other special-use zone handling specified in several RFCs.
>>
>> http://www.iana.org/assignments/special-use-domain-names/special-use-domain-names.xhtml
>>
>> The other "special-use" zones (as of today) are:
>>
>>   example.
>>   example.com.
>>   example.net.
>>   example.org.
>>   invalid.
>>   local.
>>   localhost.
>>   onion.
>>   test.
>>
>> It seems to me that most of the missing special handling can be
>> implemented by adding automatic empty zones.
>
> Actually they can't for the names in the root zone.  Queries for these
> names that make it to the DNS still need to have negative responses
> that can be handed to a validator and not get bogus out the other end.
> The automatic empty zones do not achieve that.
>
> The simplest way to not send traffic to the root servers is to slave
> the root zone.  This doesn't help with example.{com,net,org}.
>
> Mark

Sigh, DNSSEC...

It's not explicit in the RFCs, but I interpret them to imply that
validators should be short-circuiting the special-use zones just as
resolvers should.  That is: unless configured with a local trust
anchor/NTA for a zone, a validator can always reply "validation
success, NXDOMAIN" without consulting the root for all the special-use
zones except *example*.

If a zone is locally configured, the validator must have a local trust
anchor (or NTA), since the root's signatures are not meaningful.  (This
is true today.)  In this case, there is no root traffic, as validation
starts with the anchor.

*example* is actually signed by the root for the stub webpages provided
by ICANN (TXT, A, AAAA, NS, SOA).  If it is overridden for local use
(e.g. to test documentation), a local trust anchor (or NTA) must be
provided to the validator.  Otherwise, validation proceeds normally
(and does consult the root).

test and local are provably NXDOMAIN in the root - so for local use,
there must be a local trust anchor (or NTA).  A validator can
short-circuit if neither is provided.

invalid and onion can't be used locally, but they are NXDOMAIN in the
root, so they are always short-circuitable by a validator.

That leaves localhost.  It is NXDOMAIN in the root.  If it is handled
per RFC 6761 (e.g. just the A/AAAA records), the validator can
short-circuit, as the data is "defined by protocol".  If a real zone
file is provided, the validator needs a trust anchor.

There is an oddity in that one of the RFCs (7686, 2.7 + 1 para) says
that ICANN may "reserve .onion by entering it into the root zone
database", the special handling notwithstanding.  This seems
ill-considered, as existing resolvers wouldn't ignore it.  But even in
this case, it's 'virtually NXDOMAIN' in the root by spec.

I think these observations are implementable regardless of whether the
validator is internal to named or completely independent.

Bottom line: validators should be short-circuiting special-use zones in
the same way as resolvers.  They require a local (negative) trust
anchor if a private version of a zone is configured.  They shouldn't
generate root traffic for private zones (but should for the public
*example* zones).  Automatic zones can provide the authoritative data
(NXDOMAIN) without consulting the root, even for the names in the root.
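For concreteness, here is a rough named.conf sketch of the sort of
configuration I have in mind.  It is untested, the zone-file names are
made up, and it only uses the existing zone/empty-zone machinery:

    // Existing behaviour: named's automatic empty zones for the
    // RFC 1918 and reserved IPv6 reverse trees, with the existing
    // per-zone opt-out knob.
    options {
        empty-zones-enable yes;
        // disable-empty-zone "10.in-addr.arpa";
    };

    // What I'm suggesting, expressed as explicit zones: serve the
    // special-use names locally so queries (and validation) never
    // reach the root.  "empty.db" would hold just an SOA and NS.
    zone "onion" {
        type master;
        file "empty.db";    // NXDOMAIN for everything below the apex
    };

    zone "invalid" {
        type master;
        file "empty.db";
    };

    // A privately used zone (e.g. "test") is served from local data;
    // the validator then needs a local trust anchor or NTA for it,
    // since the root's signed denial of existence no longer applies.
    zone "test" {
        type master;
        file "test.db";
    };

(Explicit zones here are only for illustration; adding these names to
the automatic empty-zone list would presumably look the same to a
client.)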
Or am I missing something?

Timothe Litt
ACM Distinguished Engineer
--------------------------
This communication may not represent the ACM or my
employer's views, if any, on the matters discussed.