Re: [wg-c] voting on TLDs
On Mon, Mar 06, 2000 at 08:45:28AM -0800, Dave Crocker wrote:
>
> The concern for stability has been present from the start of discussions
> about gTLD expansion, roughly five years ago. It has covered:
>
> 1. Technical and operational impact on the root
>
> 2. Administrative and operational capabilities of registries
>
> 3. Disruption due to legal distraction from the trademark community.
>
> A significant problem coming from any one of these 3 different directions
> will render the DNS unstable. The record of listing and discussing these 3
> categories of concern is massive and public.
>
> The portion of Paul Vixie's opinion about the first concern, technical
> issues, attends to an entirely reasonable basis for believing that the
> purely technical limit to the right is quite high. Other senior technical
> commentators focus quite heavily on conservative operations practice when
> scaling a service. They conclude that one, or a few, hundred names is a
> reasonable near-term limit.
For context, see http://www.dnso.org/wgroups/wg-c/Arc01/msg00191.html ;
from http://www.dnso.org/wgroups/wg-c/Arc01/msg00192.html , I quote:
"A million names under "." isn't fundamentally harder to write code or
operate computers for than are a million names under "COM"."
This was Paul's response to Eric Brunner's direct question on the
matter of adding names and stability. That eliminates concern #1.
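To make Paul's point concrete, here's a minimal Python sketch (my
illustration, not his code -- the zone contents are invented) showing
that an authoritative lookup is the same per-label probe no matter what
the zone apex is. A million delegations under "." cost no more per query
than a million under COM:

import random
import string

def make_zone(n, seed=42):
    # Toy zone: n delegated labels, each mapped to a nameserver name.
    # (Random labels may collide, so the zone holds roughly n entries.)
    rand = random.Random(seed)
    return {"".join(rand.choices(string.ascii_lowercase, k=10)): "ns.tld."
            for _ in range(n)}

million_names = make_zone(1_000_000)  # "a million names"; apex irrelevant

def lookup(zone, label):
    # Identical code path whether the zone apex is "." or "COM": the
    # apex name appears nowhere in the per-query work.
    return zone.get(label)

some_label = next(iter(million_names))
print(lookup(million_names, some_label))  # "ns.tld." -- one probe either way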
Concern #3 is never going to go away, because the TM/IP community will
always feel infringed upon. It's what they do for a living. As long
as character strings exist, the boogeyman of infringement within those
strings will be seen. Nothing can be done to eliminate #3, unless the
lawyers themselves are eliminated. The merits of that approach are best
left for a different conversation. :) This concern is a red herring.
Which leaves us with concern #2. NSI has provided AMPLE evidence of
administrative and operational failure, and I will continue to assert
that the Internet has NOT come crashing down around our ears. They have
at times been the very model of gross incompetence, and have done things
many would not even think to test in a controlled scenario.
Yet, somehow, the net continues to exist.
I therefore insist that concern #2 has been tested. It has been tested in
the most severe case -- the case in which the registry is a single point
of failure. We are proposing to add more registries and more TLDs. If
the Internet didn't curl up and die with NSI mismanaging the only
registry in existence, I have great confidence that the Internet will
continue to route around problems at the technical level, and that with
new registries and open competition, the customer base will do the same.
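For the skeptics, a toy sketch of what "route around problems" means at
the technical level (hypothetical server names, contrived failure): a
resolver knows several nameservers for a zone and simply asks the next
one when the first falls over. A failed operator is only fatal when it
is the sole entry on the list:

def query(server, name):
    # Stand-in for a real DNS query; pretend the first server is down.
    if server == "ns1.registry.example":
        raise TimeoutError(server)
    return "%s answered by %s" % (name, server)

def resolve(name, servers):
    # Try each listed server in turn; one dead server is survivable as
    # long as the nameserver set has working members.
    for server in servers:
        try:
            return query(server, name)
        except TimeoutError:
            continue  # route around the failure
    raise RuntimeError("all servers failed: a true single point of failure")

servers = ["ns1.registry.example", "ns2.registry.example",
           "ns3.registry.example"]
print(resolve("example.tld.", servers))  # ns2 answers despite the ns1 outage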
Yes, the SRS has had problems, and will continue to have them. However,
as much as you want to insist that some catastrophic problem may be
lurking around the corner, and to make hand-waving proposals, I will
continue to point my finger at running code.
You're familiar with the "running code v. proposals" rule of thumb, I'm
sure.
>
>
> > > Indeed, please DO look at NSI. Their history ain't nearly as wonderful as
> > > you seem to believe.
> >
> >I think that is exactly what he meant, the net has not
> >destabilized. There are
>
> Except for ignoring NSI's very long learning curve, which included messing
> up individual registrations randomly and seriously, corrupting the whois
> database, and corrupting the root, I suppose you are right...
>
Hm. Nope. Net's still working. See above, where I expand on this counter
to your statement.
>
> >currently over 240 registries operating, and with various types of management
> >models and with a lot of variance in their operating structure. There have
> >been problems that have resulted in entire TLDs not being able to be resolved
> >for several hours. But the net has not destabilized. Indeed, they were minor
>
> Service outages of "several hours" for end-users does not constitute an
> instability?
>
When was the period of time during which the Internet was unusable? I
must've been logged off that day. Perhaps they should coordinate with
the yearly cleaning effort that occurs every April 1.
If you want to go after those responsible for service outages of several
hours, then I recommend you petition ICANN to institute mandatory minimum
QoS levels for all ISPs worldwide. End users do not rely on the roots
directly. Any ISP that can't notice and correct a DNS problem of such
duration will be routed around by its customer base in both the short
and the long term. And this works all the way up the line to the
registry level.
Mmmmmm, bottom-up processes.
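And since someone will ask why end users don't feel these outages, a toy
cache sketch (invented names; the two-day TTL is my assumption, though
TTLs that long are common): recursive resolvers answer from cache until
the TTL expires, so an authoritative server can be dark for several
hours before most users could even notice:

import time

class TtlCache:
    def __init__(self):
        self.store = {}  # name -> (answer, expiry timestamp)

    def put(self, name, answer, ttl):
        self.store[name] = (answer, time.time() + ttl)

    def get(self, name):
        answer, expiry = self.store.get(name, (None, 0.0))
        return answer if time.time() < expiry else None

cache = TtlCache()
# 192.0.2.1 is a documentation address; the TTL here is two days.
cache.put("example.tld.", "192.0.2.1", ttl=172800)

# The authoritative servers go dark for "several hours"...
# ...and every user behind this cache still gets an answer.
print(cache.get("example.tld."))  # 192.0.2.1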
--
Mark C. Langston
mark@bitshift.org
Systems & Network Admin
San Jose, CA