[ga] Re: [ga-roots] Operational Stability of the Internet
On 18 May 2001, at 12:34, Craig Simon wrote:
> I'll give it a shot.
>
> First we define stability for this case: Let's say it means providing
> strong guarantees that a TLD, once made accessible to the Internet-using
> public via the legacy DNS, will not become inaccessible.
And when a currently proposed entry of a duplicate TLD causes
technical havoc in the DNS, should it remain? I agree that once a TLD
is in the root, any root, it should remain. However, the legacy root
should not enter a duplicate in any case. If one is entered, it should
be understood that it may not work out. We are going to see a real mess
here when email is delivered to unintended recipients and nameservers
with the same hostname but different IP addresses are queried...
In the current Paroot zone generator, for instance, if a "new" TLD is
entered and it already exists, it will be automatically rejected by the
database. The same should be true for the legacy root.
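
To illustrate the idea (Paroot's actual generator isn't shown here, and
the names below are made up), here is a minimal Python sketch of that
uniqueness check, mirroring a database constraint that rejects an
already-delegated label:

class DuplicateTLDError(Exception):
    """Raised when a label already exists in the zone."""

def add_tld(zone, label, nameservers):
    """Add a TLD delegation, rejecting duplicates outright."""
    key = label.strip(".").lower()   # DNS labels compare case-insensitively
    if key in zone:
        # Reject rather than shadow the incumbent entry.
        raise DuplicateTLDError("TLD %r already delegated" % key)
    zone[key] = nameservers

zone = {"web": ["ns1.example.net"]}
add_tld(zone, "shop", ["ns1.example.org"])       # accepted
try:
    add_tld(zone, "WEB", ["ns2.example.com"])    # duplicate: rejected
except DuplicateTLDError as e:
    print(e)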
>
> This definition says nothing (yet) about guaranteeing the TLD registry's
> treatment of SLDs within the TLD, or guaranteeing that web content and
> email addresses made available via subordinate SLDs and 3LDs, etc. must
> remain accessible over time.
Nor should it, really. You are then talking about the business model,
aren't you? For a TLD to be operational it should have the capability
of delegating SLDs, of course, but establishing email addresses, etc. is
up to whoever provides those services for the SLD. Some registries
would offer only SLDs or 3LDs. Any registry that delegates domain names
should bear responsibility for their accessibility, but that should be
based on its charter. A registry cannot bear responsibility for the
subdomains of a delegated domain. That delegation implies a transfer of
responsibility for that node, based upon the registry's user agreement.
An entire node could be made "private" by using non-routable addresses
at all levels below it, as sketched below.
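
For illustration only, a small Python sketch of that last point, using
the standard-library ipaddress module and hypothetical zone data: a
node is "private" when every address at or below it falls in
non-routable (RFC 1918) space:

import ipaddress

# Hypothetical zone data: names at and below the delegated node "corp.example".
zone = {
    "corp.example":      "10.0.0.1",
    "mail.corp.example": "10.0.0.25",
    "www.corp.example":  "10.0.0.80",
}

def subtree_is_private(zone, node):
    """True if every address at or below `node` is non-routable."""
    suffix = "." + node
    return all(
        ipaddress.ip_address(addr).is_private
        for name, addr in zone.items()
        if name == node or name.endswith(suffix)
    )

print(subtree_is_private(zone, "corp.example"))  # True: the whole node is private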
>
> Therefore, to say we have a bare minimum of stability in the DNS, a TLD
> once published in the root must stay published in the root (barring
> exceptional circumstances like .NATO, perhaps), and the TLD operator must
> meet some minimum bandwidth and uptime performance requirements to answer
> valid DNS queries.
That is a given, IMO. It is not operational if it can't respond to
queries. Bandwidth, however, could easily be proportional to the
expectations of the TLD operation. Some TLDs would be quite limited,
while others would see enormous query volumes. The requirement should
be scalable.
>
> So, adding one or several new TLDs to the root shouldn't be much of a
> problem. In fact dozens of ccTLDs were added each year during the 90's up
> till about '97. But nightmare scenarios do pop up when opening the TLD
> floodgates on an FCFS basis is contemplated. No one disputes that the root
> can handle tens of thousands of new TLDs from a technical standpoint, but
> vetting them would turn out to be a significant administrative burden for
> the designated gatekeeper.
>
> My initial presupposition is that TLD integrity has to be held to a higher
> standard than SLD integrity.
With the gTLDs, this is true. However, the ccTLDs delegate SLDs that
are then utilized in the same way as the gTLDs, e.g., .com.au, .co.uk,
etc. Those registries are held to the same standard as a TLD. The
context of the TLD/SLD is important in this case (see the sketch below).
However, it is entirely up to the TLD holder to delegate this
responsibility - not the root operator.
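
As a rough illustration (the suffix list here is a tiny hypothetical
sample, not an authoritative registry list), the registry-controlled
suffix of a name is not always one label deep:

# Tiny hypothetical sample; a real registry list would be far longer.
REGISTRY_SUFFIXES = {"com", "net", "uk", "co.uk", "au", "com.au"}

def registry_suffix(domain):
    """Return the longest registry-controlled suffix of `domain`."""
    labels = domain.lower().strip(".").split(".")
    for i in range(len(labels)):
        candidate = ".".join(labels[i:])
        if candidate in REGISTRY_SUFFIXES:
            return candidate
    return ""

print(registry_suffix("example.com.au"))  # com.au -- the SLD acts as the registry
print(registry_suffix("example.com"))     # com    -- the TLD is the registry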
> If you want to challenge that prevailing
> design choice, and say "Let them sink or swim without vetting," then
> opening the floodgates means that: 1) new technologies would have to be
> constructed for automating entries, modifications, etc. in the root (bumpy
> on the fly perhaps, but do-able); 2) we may, to our dismay, discover that the
> root has an upper size limit after all, and; 3) all the TM, cybersquatting,
> whois/privacy and other disputes that have plagued this community at the
> SLD level will be replicated at the TLD level.
Number one is quite do-able. Number two is highly unlikely, given that
there are so many TLDs already in the roots. Number three's problem
would be drastically reduced with several hundred TLDs available. It
was artificially created by the TM lobby in its drive to control the
Internet for those special interests.
FCFS is fair, but it should be mitigated with operational standards, IMO.
Proof of operability is essential; a registry has a responsibility to
its registrants to ensure that the TLD stays operational (a probe
sketch follows below). However, it is also a market issue to a large
degree. Small companies can succeed or fail; one should not have to be
a major corporation to participate in the industry.
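
As a sketch of what "proof of operability" could mean in practice,
assuming the third-party dnspython package and made-up server
addresses: ask each listed nameserver directly for the zone's SOA
record and see which ones answer within a timeout:

import dns.message    # dnspython (third-party); an assumption, not from Paroot
import dns.query
import dns.rcode
import dns.rdatatype

def probe(zone, server_ips, timeout=3.0):
    """Map each nameserver IP to True if it answers an SOA query for `zone`."""
    query = dns.message.make_query(zone, dns.rdatatype.SOA)
    results = {}
    for ip in server_ips:
        try:
            reply = dns.query.udp(query, ip, timeout=timeout)
            results[ip] = reply.rcode() == dns.rcode.NOERROR
        except Exception:
            results[ip] = False   # timeout or network error: not answering
    return results

# Hypothetical TLD and nameserver addresses, for illustration only.
print(probe("example.", ["192.0.2.1", "192.0.2.2"]))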
Hopefully, the TLDA will be helpful in establishing some operational
definitions that its members (TLD holders) will strive to achieve.
Leah
>
> In other words, one by itself isn't the problem. The first drop of a flood
> is.
>
> Craig Simon
>
> "Tim Langdell, PhD" wrote:
> >
> > "Stability of the Internet"
> >
> > Can anyone out there tell me what is meant when someone talks of new
> > domain names (and let's say that this means new TLDs for the sake of
> > argument) affecting the "stability" of the Internet? Can anyone give me
> > even one example of how the introduction of a new TLD could affect such
> > "stability" in the slightest way?
> >
> > Tim
>
--
This message was passed to you via the ga-full@dnso.org list.
Send mail to majordomo@dnso.org to unsubscribe
("unsubscribe ga-full" in the body of the message).
Archives at http://www.dnso.org/archives.html