Synchronizing Time over the Net

Synchronizing time means ensuring that everyone follows the right time, whether on their watches, computer clocks, or navigation systems. The need to synchronize time is crucial in today’s world because programs running on different computers and communicating with each other need to follow a uniform time system. Otherwise, situations can arise like receiving an e-mail five minutes before the time it was sent. The Kerberos authentication system, too, which protects against attacks by time-stamping transactions and refusing to accept requests that seem to be ‘repeats’ of previously accepted requests, requires time synchronization.

You can use various methods, offering different levels of accuracy, to synchronize time. These range from manually setting your watch against the time announced on TV or radio to the automatic synchronization of navigation systems using radio signals tied to atomic clocks.

Here’s a brief history of time to get started.

Internet time

Introduced by watchmaker Swatch in 1998, this standard aims to give people all over the world a uniform Internet time, irrespective of what the geographical time is. The reference for this is the Biel Mean Time (BMT) meridian at Biel, Switzerland.

Internet time is measured in .beats, and one Swatch .beat equals 1 minute, 26.4 seconds. A day is divided into 1,000 .beats, and a day in Internet time begins at midnight BMT (denoted as @000 Swatch .beats). So, noon would be @500 Swatch .beats. This time is the same all over the world. Swatch’s site, www.swatch.com, offers a converter to convert your local time into Internet time.
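Since a .beat is just a fixed fraction of a day measured from the BMT meridian (UTC+1), the conversion is simple arithmetic. Here is a minimal sketch in Python; the function name swatch_beats is our own, and it assumes BMT is a fixed UTC+1 offset with no daylight saving.

```python
from datetime import datetime, timezone, timedelta

def swatch_beats(dt: datetime) -> float:
    """Convert an aware datetime to Swatch .beats (0-999.999)."""
    # Biel Mean Time (BMT) is taken here as a fixed UTC+1 offset.
    bmt = dt.astimezone(timezone(timedelta(hours=1)))
    seconds = bmt.hour * 3600 + bmt.minute * 60 + bmt.second
    return seconds / 86.4  # one .beat = 86.4 s (86,400 s in a day / 1,000)

print(f"@{swatch_beats(datetime.now(timezone.utc)):07.3f}")  # e.g. @500.000 at noon BMT
```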

This concept, however, hasn’t taken off.

Greenwich Mean Time

In the beginning, time was measured using the movement of the Sun or the Moon, with devices like sundials and shadow clocks. The first ‘standardized’ way of measuring time, Greenwich Mean Time (GMT), also called Universal Time (UT), was developed in the 1840s. It was extended all over the world in the 1880s.

The world was divided into 24 time zones using north-south lines called meridians, with the meridian at Greenwich, UK, at zero degrees longitude and the other meridians at every 15 degrees east and west of it. However, since most countries didn’t want multiple time zones within their borders, the meridians were made to veer from straight lines. The International Date Line was drawn to follow the 180-degree meridian in the Pacific Ocean.

The mean solar time of the Greenwich meridian was the reference time, and the mean solar time of the central meridian of each time zone was to be followed by all countries in that zone. Initially, the reference time was determined using astronomical calculations based on the Earth’s rotation and revolution. However, since this was not very accurate (the Earth’s motion fluctuates by a few thousandths of a second each day), a time scale called Coordinated Universal Time (UTC), based on atomic clocks, was introduced in 1972.

Atomic time

Atomic time standards are based on the resonance of atoms, that is, the frequency at which an atom absorbs and emits electromagnetic radiation. The advantage is that this frequency is stable and remains constant over time for every atom of a particular chemical element, though it differs across elements. In 1967, the frequency of the cesium atom was used to define the second: one second equals 9,192,631,770 oscillations of the cesium atom’s resonant frequency.

UTC is based on this standard and uses atomic clocks. However, some adjustments are occasionally required. When the difference between time kept by UTC and the one based on the Earth’s motion approaches one second, a one-second adjustment, called a leap second, is made in UTC.

Time synchronization over the Net

This is done using a client-server model. One or more servers, called time servers, synchronize time with a reference clock and are then connected to more servers or clients, so that the latter can synchronize their time with them.

A reference clock is a device that gives the current time accurately. Cesium clocks are prime candidates, but are expensive. Time servers, therefore, also use receivers that pick up terrestrial time signals broadcast by standards agencies.

Various protocols are used for time synchronization over the Net, the notable ones being Daytime Protocol, Time Protocol, and Network Time Protocol.

Daytime Protocol

This protocol is used by small computers running MS-DOS and similar OSs, and runs over both TCP/IP and UDP/IP. The server listens on port 13 and responds to requests by sending the date and time as an ASCII character string, which the protocol specifies should be just one line. A popular syntax is: Weekday, Month Date, Year Time-Zone.
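A Daytime query needs nothing more than opening the connection and reading the reply. The sketch below uses TCP and the public NIST server time.nist.gov for illustration; any server that answers on port 13 would do.

```python
import socket

# Daytime Protocol (RFC 867) client: connect to TCP port 13 and read one line.
with socket.create_connection(("time.nist.gov", 13), timeout=5) as s:
    reply = s.recv(256).decode("ascii").strip()

print(reply)  # a single human-readable line giving date, time, and zone
```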

Time Protocol

This protocol provides a site-independent, machine-readable date and time, and can run on top of both TCP/IP and UDP/IP. The time server listens for requests from clients on port 37 and returns a 32-bit unformatted binary number that gives the time in seconds since midnight, January 1, 1900 GMT. This binary number can represent time for 136 years (the base year will serve till 2036). Conversion to local time is done by software on the client end.
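Reading this 32-bit value and converting it to a conventional date takes only a few lines. The sketch below, again pointing at time.nist.gov for illustration, subtracts the 2,208,988,800 seconds between the protocol’s 1900 base and the Unix epoch of 1970.

```python
import socket
import struct
from datetime import datetime, timezone

SECONDS_1900_TO_1970 = 2_208_988_800  # offset between the two epochs

# Time Protocol (RFC 868) client: TCP port 37; the reply is a 32-bit
# big-endian integer counting seconds since midnight, January 1, 1900.
with socket.create_connection(("time.nist.gov", 37), timeout=5) as s:
    data = s.recv(4)

seconds_since_1900 = struct.unpack("!I", data)[0]
print(datetime.fromtimestamp(seconds_since_1900 - SECONDS_1900_TO_1970, timezone.utc))
```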

Network Time Protocol (NTP)

How computer clocks work

Any clock needs two mechanisms, one for measuring equal increments in time, and another for adding these increments and telling you how much time has passed and what time it is now.

A computer clock stores time as a number of bits, and incrementing this number makes time move on. The more bits the time is stored in, the better, because increasing the bits can either widen the range of the time value or increase the resolution (the smallest possible increment the clock model allows) while keeping the range the same. In general, 64 bits give you nanosecond resolution and a more than sufficient range of time. Most clocks count on a time scale of seconds because this makes them faster to update.
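A quick back-of-the-envelope check of that 64-bit claim: counting whole nanoseconds in 64 bits still covers several centuries.

```python
# Range of a 64-bit counter that ticks once per nanosecond.
NS_PER_YEAR = 365.25 * 24 * 3600 * 1e9

print(2**64 / NS_PER_YEAR)  # ~584.5 years of range at 1 ns resolution
```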

Computer clocks are subject to errors that affect their accuracy. Clock hardware is never perfectly accurate, so the frequency that makes time increase deviates from its nominal value. This is called frequency error and is measured in PPM (parts per million). Moreover, this frequency varies over time, mostly due to changes in temperature, air pressure, or magnetic fields, making the clock drift by up to one second a day.
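The arithmetic behind that figure: a frequency error of f PPM accumulates 86,400 × f / 10^6 seconds of drift per day, so roughly 12 PPM already amounts to a second a day.

```python
def drift_per_day(ppm: float) -> float:
    """Seconds of drift accumulated per day for a given frequency error."""
    return 86_400 * ppm / 1e6

print(drift_per_day(11.6))  # ~1.0 s/day, so ~12 PPM drifts about a second a day
```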

While synchronizing, the final accuracy depends on the time source being used and on the network connection; slow connections and network delays compromise accuracy. In general, accuracy over the Internet ranges between 5 and 100 ms.

NTP, the most widely used of these protocols, is primarily used by large computers or networks. It runs automatically and synchronizes time continuously. It carries UTC time, and synchronization accuracy can be very high, up to 1 ms. Some servers also use a simpler version of this protocol, called Simple Network Time Protocol (SNTP).

An NTP primary server, also called a stratum 1 server, runs NTP software and connects to a reference clock (called stratum 0). It also connects to several other computers (called stratum 2), which in turn connect to several more. So, stratum 2 computers are clients of the stratum 1 server and query it automatically to synchronize their OS clocks. They are also servers to other computers (called stratum 3), which query them to synchronize time. Strata can go up to 16, but accuracy lowers as you move down from stratum 1. Each server in a stratum can support hundreds of clients (some or all of which can be servers to other clients in turn), so the number of computers that can be synchronized from a stratum 1 server is virtually unlimited. This also means that the system is highly scalable.

Each client connects to more than one server, and the NTP software on the client machine monitors the stability and accuracy of all configured servers to decide the best available source for time synchronization. This makes the protocol fault tolerant.
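As a rough sketch of that selection, the snippet below queries several public pool servers with the third-party ntplib library and prefers the one with the lowest round-trip delay; real NTP uses considerably more elaborate selection and clustering algorithms, and the host names here are just illustrative.

```python
import ntplib  # third-party library: pip install ntplib

client = ntplib.NTPClient()
responses = []
for host in ("0.pool.ntp.org", "1.pool.ntp.org", "2.pool.ntp.org"):
    try:
        responses.append((host, client.request(host, version=3)))
    except ntplib.NTPException:
        pass  # an unreachable or misbehaving server is simply skipped

# Prefer the response with the smallest network delay (a crude quality proxy).
host, best = min(responses, key=lambda item: item[1].delay)
print(f"best source: {host}, stratum {best.stratum}, offset {best.offset:+.4f} s")
```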

An NTP server listens for requests from clients on UDP port 123. In its request packet, the client sends its own local time. The server then stores its own estimate of the current time in the packet and returns it. The client logs the time at which it received the packet, so it can estimate how long the packet took to travel. To estimate the current time correctly, this round-trip time is taken into account: the shorter it is, the more accurate the estimate. The client NTP software tests the quality of several such packet exchanges, and if the server is considered valid, the protocol accepts the current time it gives.
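The same exchange can be done by hand with one UDP packet. This minimal SNTP-style sketch (Python standard library only, pool.ntp.org as an illustrative server) records the four timestamps and derives the round-trip delay and clock offset exactly as described above.

```python
import socket
import struct
import time

SECONDS_1900_TO_1970 = 2_208_988_800  # NTP timestamps count from 1900

# 48-byte NTP packet: first byte 0x1b = leap 0, version 3, mode 3 (client).
packet = b"\x1b" + 47 * b"\0"

with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
    s.settimeout(5)
    t1 = time.time()                          # client transmit time
    s.sendto(packet, ("pool.ntp.org", 123))
    data, _ = s.recvfrom(48)
    t4 = time.time()                          # client receive time

# Server receive (t2) and transmit (t3) timestamps, whole seconds only.
t2 = struct.unpack("!I", data[32:36])[0] - SECONDS_1900_TO_1970
t3 = struct.unpack("!I", data[40:44])[0] - SECONDS_1900_TO_1970

delay = (t4 - t1) - (t3 - t2)                 # time spent on the network
offset = ((t2 - t1) + (t3 - t4)) / 2          # estimated local clock error
print(f"round-trip delay {delay:.4f} s, clock offset {offset:+.4f} s")
```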

The story of time, and of human endeavors to study, quantify, and control it, is almost as unending as time itself. We hope this brief journey through time has been interesting for you.

Pragya Madan
