<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
<html>
<head>
<meta http-equiv="content-type" content="text/html;charset=iso-8859-1">
<meta name="generator" content="HTML Tidy, see www.w3.org">
<title>Automatic Server Discovery Schemes</title>
<link href="scripts/style.css" type="text/css" rel="stylesheet">
</head>
<body>
<h3>Automatic Server Discovery Schemes</h3>
<img src="pic/alice51.gif" alt="gif" align="left"><a href="http://www.eecis.udel.edu/%7emills/pictures.html">from <i>Alice's Adventures in Wonderland</i>, Lewis Carroll</a>
<p>Make sure who your friends are.</p>
<p>Last update:
<!-- #BeginDate format:En2m -->10-Mar-2014 05:04<!-- #EndDate -->
UTC</p>
<br clear="left">
<h4>Related Links</h4>
<script type="text/javascript" language="javascript" src="scripts/hand.txt"></script>
<script type="text/javascript" language="javascript" src="scripts/config.txt"></script>
<h4>Table of Contents</h4>
<ul>
<li class="inline"><a href="#assoc">Association Management</a></li>
<li class="inline"><a href="#bcst">Broadcast/Multicast Scheme</a></li>
<li class="inline"><a href="#mcst">Manycast Scheme</a></li>
<li class="inline"><a href="#pool">Server Pool Scheme</a></li>
</ul>
<hr>
<h4 id="modes">Introduction</h4>
<p>This page describes the three automatic server discovery schemes provided in NTPv4: broadcast/multicast, manycast, and server pool. The broadcast/multicast and manycast schemes utilize the ubiquitous broadcast or one-to-many paradigm native to IPv4 and IPv6. The server pool scheme uses DNS to resolve addresses of multiple volunteer servers scattered throughout the world.</p>
<p> All three schemes work in much the same way and might be described as <i>grab-n'-prune.</i> Through one means or another they grab a number of associations either directly or indirectly from the configuration file, order them from best to worst according to the NTP mitigation algorithms, and prune the surplus associations.</p>
<h4 id="assoc">Association Management</h4>
<p>All schemes use an iterated process to discover new preemptable client associations as long as the total number of client associations is less than the <tt>maxclock</tt> option of the <tt>tos</tt> command. The <tt>maxclock</tt> default is 10, but in typical configurations it should be changed to a lower number, usually two greater than the <tt>minclock</tt> option of the same command.</p>
<p>All schemes use a stratum filter to select just those servers with stratum considered useful. This avoids large numbers of clients ganging up on a small number of low-stratum servers and excludes servers below or above specified stratum levels. By default, servers of all strata are acceptable; however, the <tt>tos</tt> command can be used to restrict the acceptable range from the <tt>floor</tt> option, inclusive, to the <tt>ceiling</tt> option, exclusive. Potential servers operating at the same stratum as the client will be avoided unless the <tt>cohort</tt> option is present. Additional filters can be supplied using the methods described on the <a href="authentic.html">Authentication Support</a> page.</p>
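<p>For illustration, the following is a minimal sketch of a <tt>tos</tt> line combining these options; the numbers are placeholders chosen for the example, not recommendations.</p>
<pre>
# Keep between 3 and 5 survivors; accept only servers at stratum 2 through 5
# (floor inclusive, ceiling exclusive) and allow servers at the same stratum
# as this host (cohort).
tos minclock 3 maxclock 5 floor 2 ceiling 6 cohort 1
</pre>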
<p>The pruning process uses a set of unreach counters, one for each association created by the configuration or discovery processes. At each poll interval, the counter is increased by one. If an acceptable packet arrives for a persistent (configured) or ephemeral (broadcast/multicast) association, the counter is set to zero. If an acceptable packet arrives for a preemptable (manycast, pool) association and survives the selection and clustering algorithms, the counter is set to zero. If the counter reaches an arbitrary threshold of 10, the association becomes a candidate for pruning.</p>
<p>The pruning algorithm is very simple. If an ephemeral or preemptable association becomes a candidate for pruning, it is immediately demobilized. If a persistent association becomes a candidate for pruning, it is not demobilized, but its poll interval is set at the maximum. The pruning algorithm design avoids needless discovery/prune cycles for associations that wander in and out of the survivor list, but otherwise have similar characteristics. </p>
<p>Following is a summary of each scheme. Note that the options referred to below belong to the commands described on the <a href="confopt.html">Configuration Options</a> page. See that page for applicability and defaults.</p>
<h4 id="bcst">Broadcast/Multicast Scheme</h4>
<p>A broadcast server generates messages continuously at intervals of 64 s and with a time-to-live of 127 by default. These defaults can be overridden by the <tt>minpoll</tt> and <tt>ttl</tt> options, respectively. Not all kernels support the <tt>ttl</tt> option. A broadcast client responds to the first message received by waiting a randomized interval to avoid implosion at the server. It then polls the server in client/server mode using the <tt>iburst</tt> option in order to quickly authenticate the server, calibrate the propagation delay and set the client clock. This normally results in a volley of six client/server exchanges at 2-s intervals during which both the synchronization and cryptographic protocols run concurrently.</p>
<p>Following the volley, the client reverts to listen-only mode and sends no further messages. If for some reason the broadcast server does not respond to these messages, the client will cease transmission and continue in listen-only mode with a default propagation delay. The volley can be avoided by using the <tt>broadcastdelay</tt> command with a nonzero argument.</p>
<p>A server is configured in broadcast mode using the <tt>broadcast</tt> command and specifying the broadcast address of a local interface. If two or more local interfaces are installed with different broadcast addresses, a <tt>broadcast</tt> command is needed for each address. This provides a way to limit exposure in a firewall, for example. A broadcast client is configured using the <tt>broadcastclient</tt> command. </p>
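<p>As a sketch only, with a placeholder broadcast address, the two sides might be configured as follows; the <tt>minpoll</tt> and <tt>ttl</tt> values simply restate the defaults mentioned above, and the <tt>broadcastdelay</tt> value is an arbitrary example.</p>
<pre>
# On the broadcast server: send to the broadcast address of a local interface.
broadcast 192.168.1.255 minpoll 6 ttl 127

# On the broadcast client: listen for broadcasts.
broadcastclient
# Optionally skip the initial calibration volley by supplying a fixed
# propagation delay in seconds.
broadcastdelay 0.008
</pre>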
<p>NTP multicast mode can be used to extend the scope using IPv4 or IPv6 multicast with a defined span. The IANA has assigned IPv4 multicast address 224.0.1.1 and IPv6 address FF05::101 (site local) to NTP, but these addresses should be used only where the multicast span can be reliably constrained to protect neighbor networks. In general, administratively scoped IPv4 group addresses should be used, as described in RFC-2365, or GLOP group addresses, as described in RFC-2770.</p>
<p>A multicast server is configured using the <tt>broadcast</tt> command, but specifying a multicast address instead of a broadcast address. A multicast client is configured using the <tt>multicastclient</tt> command specifying a list of one or more multicast addresses. Note that there is a subtle distinction between the IPv4 and IPv6 address families. The IPv4 broadcast or multicast mode is determined by the IPv4 address class. For IPv6 the same distinction can be made using the link-local prefix FF02 for each interface and site-local prefix FF05 for all interfaces.</p>
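<p>A corresponding multicast sketch, using an administratively scoped IPv4 group address as recommended above; the address is only a placeholder.</p>
<pre>
# On the multicast server: send to an administratively scoped group (RFC-2365).
broadcast 239.192.0.1 ttl 1

# On the multicast client: listen on the same group address.
multicastclient 239.192.0.1
</pre>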
<p>It is possible and frequently useful to configure a host as both broadcast client and broadcast server. A number of hosts configured this way and sharing a common broadcast address will automatically organize themselves in an optimum configuration based on stratum and synchronization distance.</p>
<p>Since an intruder can impersonate a broadcast server and inject false time values, broadcast mode should always be cryptographically authenticated. By default, a broadcast association will not be mobilized unless cryptographically authenticated. If necessary, the <tt>auth</tt> option of the <tt>disable</tt> command will disable this feature. The feature can be selectively enabled using the <tt>notrust</tt> option of the <tt>restrict</tt> command.</p>
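<p>The following sketch shows how these two knobs fit together; the subnet is a placeholder.</p>
<pre>
# Drop the requirement that broadcast associations be authenticated...
disable auth
# ...but selectively restore it for sources on one subnet.
restrict 192.168.1.0 mask 255.255.255.0 notrust
</pre>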
<p>With symmetric key cryptography each broadcast server can use the same or different keys. In one scenario on a broadcast LAN, one set of broadcast clients and servers shares one key while another set shares a different key. Only the clients with a matching key will respond to a server broadcast. Further information is on the <a href="authentic.html">Authentication Support</a> page.</p>
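<p>A minimal symmetric-key sketch, assuming a shared key number 1 defined in a keys file at a hypothetical path:</p>
<pre>
# On both server and clients: load the keys file and trust key 1.
keys /etc/ntp.keys
trustedkey 1

# On the server: authenticate broadcasts with key 1.
broadcast 192.168.1.255 key 1

# On the clients: validate received broadcasts against the trusted keys.
broadcastclient
</pre>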
<p>Public key cryptography can be used with some restrictions. If multiple servers belonging to different secure groups share the same broadcast LAN, the clients on that LAN must have the client keys for all of them. This scenario is illustrated in the example on the <a href="autokey.html">Autokey Public Key Authentication</a> page.</p>
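<p>With Autokey, a broadcast server and its clients might be configured roughly as follows; this is only a sketch and assumes the Autokey host keys and certificates have already been generated as described on that page. The passwords and address are placeholders.</p>
<pre>
# On the server: protect broadcasts with the Autokey protocol.
crypto pw serverpass
broadcast 192.168.1.255 autokey

# On each client:
crypto pw clientpass
broadcastclient
</pre>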
<h4 id="mcst">Manycast Scheme</h4>
<p>Manycast is an automatic server discovery and configuration paradigm new to NTPv4. It is intended as a means for a client to troll the nearby network neighborhood to find cooperating servers, validate them using cryptographic means and evaluate their time values with respect to other servers that might be lurking in the vicinity. It uses the grab-n'-prune paradigm described above, with the additional feature that active means are used to grab additional servers should the number of associations fall below the <tt>maxclock</tt> option of the <tt>tos</tt> command.</p>
<p>The manycast paradigm is not the anycast paradigm described in RFC-1546, which is designed to find a single server from a clique of servers providing the same service. The manycast paradigm is designed to find a plurality of redundant servers satisfying defined optimality criteria.</p>
<p>A manycast client is configured using the <tt>manycastclient</tt> configuration command, which is similar to the <tt>server</tt> configuration command. It sends ordinary client mode messages, but to a broadcast address rather than a unicast address, and sends only if fewer than <tt>maxclock</tt> associations remain and then only at the minimum feasible rate and minimum feasible time-to-live (TTL) hops. The polling strategy is designed to reduce as much as possible the volume of broadcast messages and the effects of implosion due to near-simultaneous arrival of manycast server messages. There can be as many manycast client associations as different addresses, each one serving as a template for future unicast client/server associations.</p>
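<p>As a sketch, a manycast client needs only a single line; the group address and TTL are placeholders.</p>
<pre>
# Solicit servers on an administratively scoped group; each responder that
# survives the mitigation algorithms becomes an ordinary client association.
manycastclient 239.192.0.2 ttl 3
</pre>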
<p>A manycast server is configured using the <tt>manycastserver</tt> command, which listens on the specified broadcast address for manycast client messages. If a manycast server is in scope of the current TTL and is itself synchronized to a valid source and operating at a stratum level equal to or lower than the manycast client, it replies with an ordinary unicast server message.</p>
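<p>The matching server side is a single line as well, using the same placeholder group address.</p>
<pre>
# Listen for manycast client solicitations on the group address.
manycastserver 239.192.0.2
</pre>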
<p>The manycast client receiving this message mobilizes a preemptable client association according to the matching manycast client template. This requires the server to be cryptographically authenticated and the server stratum to be less than or equal to the client stratum. </p>
<p>It is possible and frequently useful to configure a host as both manycast client and manycast server. A number of hosts configured this way and sharing a common multicast group address will automatically organize themselves in an optimum configuration based on stratum and synchronization distance.</p>
<p>The use of cryptographic authentication is always a good idea in any server discovery scheme. Both symmetric key and public key cryptography can be used in the same scenarios as described above for the broadcast/multicast scheme.</p>
<h4 id="pool">Server Pool Scheme</h4>
<p>The idea of targeting servers on a random basis to distribute and balance the load is not a new one; however, the NTP pool scheme puts this on steroids. At present, several thousand operators around the globe have volunteered their servers for public access. In general, NTP is a lightweight service and servers used for other purposes don't mind an additional small load. The trick is to randomize over the population and minimize the load on any one server while retaining the advantages of multiple servers using the NTP mitigation algorithms.</p>
<p>To support this service, custom DNS software is used by pool.ntp.org and its subdomains
to discover a random selection of participating servers in response to a DNS query.
The client receiving this list mobilizes some or all of them, similar to the
manycast discovery scheme, and prunes the excess. Unlike <tt>manycastclient</tt>,
cryptographic authentication is not required. The pool scheme solicits a single
server at a time, compared to <tt>manycastclient</tt> which solicits all servers
within a multicast TTL range simultaneously. Otherwise, the pool server discovery
scheme operates as manycast does.</p>
<p>The pool scheme is configured using one or more <tt>pool</tt> commands with DNS names
indicating the pool from which to draw. The <tt>pool</tt> command can be used more
than once; duplicate servers are detected and discarded. In principle, it is
possible to use a configuration file containing a single line <tt>pool pool.ntp.org</tt>.
The <a href="http://www.pool.ntp.org/en/use.html">NTP Pool Project</a> offers
instructions on using the pool with the <tt>server</tt> command, which is suboptimal
but works with older versions of <tt>ntpd</tt> predating the <tt>pool</tt> command.
With recent <tt>ntpd</tt>, consider replacing the multiple <tt>server</tt> commands
in their example with a single <tt>pool</tt> command.</p>
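<p>A minimal sketch of such a configuration, using the public pool name mentioned above; the <tt>tos</tt> values are placeholders.</p>
<pre>
# Draw servers from the public pool; duplicates are detected and discarded.
pool pool.ntp.org iburst
# Optionally bound the number of associations kept.
tos minclock 3 maxclock 5
</pre>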
<hr>
<script type="text/javascript" language="javascript" src="scripts/footer.txt"></script>
</body>
</html>