Q&A: Paul Vixie

Your browser found this site in no small part through the efforts of Paul Vixie. Vixie is the head architect of BIND, the most popular implementation of DNS. He's also president and founder of the Internet Software Consortium, the home of BIND, INN, and the DHCP server. As if that weren't enough, Vixie leads the fight against spam with the Mail Abuse Prevention System (MAPS), which includes the renowned Realtime Blackhole List, through which countless spammers (and one or two ISPs) have learned the power of the Internet community to say no to network abuse. He took a break from his extremely busy schedule to answer our questions about all that and more.

How would you characterize the key security problems faced by sysadmins, especially in dealing with email and the Internet? What are their key weapons in fighting theft of service and other network abuse?

"Security problems faced by sysadmins" is a large topic, of which theft of service is a relatively minor part. Of that part, it's safe to say that the original design of virtually all Internet technology took no account of human nature - because the subset of humanity who used the early Internet had been preselected by their employers and schools and research labs and whatnot to weed out rudeness. Now that the Internet has the full spectrum of humanity as users, the technology is showing its weakness: it was designed to be used by friendly, smart people. Spammers, as an example of a class, are neither friendly nor smart.

The chief problem faced by sysadmins in this area is that the fundamental design philosophy of the Internet does not support the kind of users they now have to contend with. Sometimes this is fixable in the implementations, but more often it's a fundamental protocol problem or even a fundamental philosophic problem. In my daily life, I am a closed person: anyone who I don't know, and who can't find a way to be introduced to me by someone I do know, cannot form a relationship with me. On the Internet it used to be the case that systems, and users, could be considerably more open: anyone who wanted to talk to them probably had a mutually beneficial reason. That time is past, but the technical environment hasn't changed. Sysadmins have to be prepared to do a lot of integration and even some development in order to maintain a robust infrastructure in the face of this technical disparity.

How successful have the various elements of the Mail Abuse Prevention System been in stemming spam and other network abuse? The RBL seems to be a huge (if occasionally controversial) success. What about other aspects of MAPS? For instance, has the MAPS Transport Security Initiative's anti-relay campaign been effective in reaching sysadmins and vendors? How about the RSS?

All of the MAPS projects have been at least moderately successful. The TSI succeeded more because it was an advertising forum for cooperative transport vendors than because it offered recipes on how to secure existing transports. The RSS actually blocks more spam, for most users, than the RBL. But the RBL was not designed to stop spam - it was a way to educate network owners. As a nonprofit, MAPS can't put any wood behind dull arrows - every project has to show its success before it can even become publicly known.

In quantitative terms, how pervasive is third-party relaying today?

In 1994 we counted 40,000 MX hosts in all of COM. There were only 400,000 domains at that time, but a lot of them shared MX relays. Of those, 60 percent were open relays.

The numbers are too high today to sweep them all, but of 3.5 million domains under COM I'll bet there are still less than 100,000 MX relays. I'm guessing that less than 20 percent are open relays (since most of the new ones are secure, and many of the old ones are being upgraded as each is abused).

The problem is much worse for non-COM servers, especially internationals in Asia and Eastern Europe and South America - pretty much any secondary market where US companies dump their old equipment when they upgrade. (Note that these same countries are going to have a lot more Y2K problems than we do, too, since they're buying our old PBXs and cell phone switches at scrap prices.)

The MAPS Relay Spam Stopper is designed to identify email relays that have been used by spammers. Can you describe the project in more detail? For instance, how do you verify that a server is an open relay?

When the RSS receives spam, it sends a test message through the most recent relay point (literally, the IP address we got the mail from, which might not be the place it originated). If it relays our test message, then we know it is an open relay. If it doesn't, then we parse the Received: headers looking for other relay input addresses whose output address might be the same as the TCP peer address used to relay the original spam to us.

It's important to note that we never probe servers on speculation. Only when we receive spam from a host do we check to see if it is an open relay. Other relay-blocking services on the Net are known to probe entire address spaces looking for possible open mail relays, but we at MAPS consider that probing to be, itself, a kind of network abuse.
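[Editor's note: as a rough sketch of the test Vixie describes - not MAPS's actual tooling - the check amounts to connecting to the suspect address, asking it to accept mail for a recipient it has no business relaying to, and, failing that, pulling candidate addresses out of the Received: headers. The hostname and both helper functions below are illustrative assumptions.]

    import re
    import smtplib

    def is_open_relay(suspect_ip, probe_from, probe_to, timeout=30):
        """Return True if suspect_ip accepts a message it would have to relay off-site."""
        try:
            smtp = smtplib.SMTP(suspect_ip, 25, timeout=timeout)
        except (OSError, smtplib.SMTPException):
            return False                       # can't connect: nothing to test
        try:
            smtp.helo("relay-test.example.org")    # placeholder test host
            code, _ = smtp.mail(probe_from)
            if code != 250:
                return False
            code, _ = smtp.rcpt(probe_to)          # recipient is not local to the suspect
            return code in (250, 251)              # host, so acceptance implies relaying
        except (OSError, smtplib.SMTPException):
            return False
        finally:
            smtp.close()

    def relay_candidates(received_headers):
        """Pull bracketed IPv4 addresses out of Received: headers, newest first."""
        ips = []
        for header in received_headers:
            ips += re.findall(r"\[(\d{1,3}(?:\.\d{1,3}){3})\]", header)
        return ips

[The second helper mirrors the Received:-header walk Vixie mentions: each address it yields is another relay input point whose output side may match the TCP peer that delivered the original spam.]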

How can people use the RSS? What should they not do?

Mail transport administrators can use the RSS by configuring their mailers to automatically reject all mail from IP addresses we currently identify as being "open relays." Note that this will cause some valid mail to be rejected, and individual mail transport administrators need to decide for themselves whether this is a reasonable risk tradeoff. I think it is, and so I have configured my mail transport to block all mail that's coming from IP addresses on the RSS (and the RBL, and the DUL). But I would never recommend that anyone else use this form of blocking unless it fits with their own operations philosophy.
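[Editor's note: for illustration, the lookup a mail transport performs against a DNS-based list such as the RSS, RBL, or DUL can be sketched as below. The zone name is a placeholder, not necessarily one MAPS publishes, and a real mailer does this check inside the SMTP server rather than in a separate script.]

    import socket

    def is_listed(client_ip, zone="blackholes.example.net"):
        """Return True if client_ip has an A record in the given DNS-based blocking list."""
        reversed_octets = ".".join(reversed(client_ip.split(".")))
        try:
            socket.gethostbyname(f"{reversed_octets}.{zone}")   # any answer means "listed"
            return True
        except socket.gaierror:
            return False            # no record: the address is not on the list

[A mailer configured this way rejects the connection, or the MAIL command, when is_listed() returns True - accepting the tradeoff Vixie describes, namely that some legitimate mail passing through listed hosts will be lost.]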

You've written about how software engineering differs from the "one smart programmer" approach to creating Open Source software. What are the key advantages of applying an engineering approach to Open Source projects? What main obstacles do you see?

I think I answered this better in the chapter I wrote for O'Reilly than I could do here. It's on the Web if you want to go look at it.

Software engineering doesn't come cheap (especially when you figure in things like commercial-grade development tools and rigorous QA), but funding Open Source development at that level can be tricky. What funding models do you see that successfully combine rigorous methodology with a commitment to Open Source development?

Alas, most open source projects will never seek methodology of any kind. Those that do will seek recurring funding such that the revenue level corresponds to the level of public use of their software. Selling support contracts is one way to create that kind of feedback loop. Grants, so far, have only been useful for major additions of functionality rather than the long, difficult road of regular maintenance releases.

How would you sum up the reasons for upgrading to the current version of BIND?

There are bad security problems in pre-8.2.2. Nobody should run those.

With DNS "undergoing its inevitable rototilling," you're slated to speak at LISA '99 about what to expect from BIND-9 and EDNS. Can you give us a preview of what you'll be discussing?

Nope. If you want to know, you should be there. (I haven't even made my slides yet.)

Tim O'Reilly speaks about how Open Source software gives weight to open standards (Apache and HTTP, sendmail and SMTP) and keeps them from becoming pawns in proprietary skirmishes. What practical steps can we take to strengthen and extend this relationship between open standards and Open Source?

The open source implementations have to be well enough funded to track the standards and meet customer needs. The high wall we had to get over was in getting companies to stop believing that they had to "own" all of the intellectual property that went into their products. The way to compete in the modern era is to take the same starting point other companies take, but to do a better job at it. This includes folding one's enhancements back into the freely redistributable base product, since that is the principal way a company can influence the field in the direction of its own strategic vision. If your vision is wrong, the code will eventually be dropped. But if your vision is right and no one but you has the code, you lose anyway.

Brian Carpenter, chair of the IAB, recently sent a technical comment to icann.org citing the critical importance of the unique root of the public DNS and arguing against its displacement by directory systems using multiple public DNS roots - systems designed to help users resolve vague or ambiguous references. How pervasive are these technologies, and how serious is the potential threat they pose to the stability of the Internet?

About a half dozen different bands of net.kooks all want to be the new IANA. Each one claims a right of succession and each one speaks of the Internet's population as some kind of new-fangled proletariat. And each one completely misunderstands the DNS. DNS is a (1) distributed (2) coherent (3) reliable (4) autonomous (5) database. It is not a mapping service, and it encodes fact, not policy. There can only be one set of root name servers per NAT cloud, and only one set of root name servers on the common global Internet.

It's possible that detailed protocol revisions to the DNS would make it possible for multiple root server sets to cooperate. But in the protocol as it is defined, and in the implementations of the current protocol, there is no provision whatsoever to avoid explosive mixtures of incompatible data from multiple sources of authority. And DNS security, a badly needed amendment to the base DNS protocols, will make this even more true.

The focus on Linux has been quite successful in promoting the Open Source model to the mainstream, but that pitch has largely overlooked the other interrelated standards and technologies that underlie the Internet/Open Source revolution. Is it appropriate at some point to begin presenting Open Source as a balanced ecosystem?

It can't be an ecosystem. Open Source developers are Lone Ranger types, and the reason they're working in the area at all is usually so they can be alone on a frontier. I think what you'll see is that Linux will continue to fragment and eventually lose ABI compatibility between the major Linux distributors.

The only way to make one of these things cohesive is to make the parties interdependent. Unfortunately for "the cause," there is no proximate barrier to entry - a new Open Source project just takes a FreeBSD machine and a 28K modem. The fact that it can't scale to millions of users, and thus ultimately has to lose in competition with closed-source commercial equivalents, is not proximately apparent to each developer. Thus they feel no interdependence.

The ISC and sendmail clearly have many concerns in common. To what extent do the BIND and sendmail development teams consult or collaborate?

We have tried - ever since 1992 or so, when Eric came back and started flogging sendmail again - to coordinate our releases so that the BIND libraries sendmail depends on would not drift in stupid ways. Eric has yelled at me more than once for creating compiler warnings in sendmail for no good cause.
