In a passive optical network (PON), a number of optical network units (ONUs) are placed in a corresponding number of offices or homes, and are coupled by passive devices to a single optical line terminal (OLT), which may be placed, for example, in a central office of a telephony service provider. Such a passive optical network (PON) may be configured as a single medium that is shared among multiple optical network units (ONUs). The optical line terminal (OLT) may communicate (in the downstream direction) with the multiple optical network units (ONUs) by broadcasting Ethernet packets, as illustrated in FIG. 1A. Each optical network unit (ONU) extracts packets addressed to itself based on the media-access-control (MAC) address in the normal manner.
Transmission of the Ethernet packets (in the upstream direction) from multiple optical network units (ONUs) to the optical line terminal (OLT) must be coordinated, to avoid collisions (e.g. in case transmissions by two or more optical network units (ONUs) overlap partially) on the shared medium. In some prior art systems, each optical network unit (ONU) is allocated an identical fraction of the total bandwidth (in a time-division-multiplexed channel), and the optical network units (ONUs) synchronize their transmissions to avoid collisions. For example, as noted in an article entitled “Design and Analysis of an Access Network based on PON Technology” by Glen Kramer and Biswanath Mukherjee, each of N (e.g. 16) optical network units (ONUs) is assigned a time slot, and each optical network unit (ONU) may transmit any number of packets that may fit within the allocated time slot, as illustrated in FIG. 1B. If a packet cannot be completely transmitted within a current time slot, it is transmitted in the next slot. Although there are no collisions and no packet fragmentation, such a fixed round-robin slot allocation cannot handle bursty traffic situations.
A dynamic scheme that reduces the timeslot size when there is no data would allow the excess bandwidth to be used by other ONUs, as noted in an article entitled “Interleaved Polling with Adaptive Cycle Time (IPACT): A Dynamic Bandwidth Distribution Scheme in an Optical Access Network” by Glen Kramer, Biswanath Mukherjee and Gerry Pesavento. As noted therein, if the OLT is to make an accurate timeslot assignment it should know exactly how many bytes are waiting in a given ONU. One approach could be to relieve the OLT from timeslot assignment altogether and leave it to the ONUs. In such a scheme, every ONU, before sending its data, will send a special message announcing how many bytes it is about to send. The ONU that is scheduled next (say, in round-robin fashion) will monitor the transmission of the previous ONU and will time its transmission such that it arrives at the OLT right after the transmission from the previous ONU.
Another approach is to perform simple hub-polling wherein the OLT polls each ONU as to the amount of bandwidth needed by the ONU (which may be the same as its current queue length). However, as noted in the just-described article, simple hub polling has a disadvantage of increasing the polling cycle due to accumulation of walk times (switchover times). In access networks, this problem is even more pronounced than in LANs, as the propagation delays in the access network are much higher. For example, in a PON with 16 ONUs each having 200 μs round-trip delay (ONU and OLT are 20 km apart) the accumulated walk time is 3.2 ms. This is how much time would be wasted in every cycle in the hub polling.
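The walk-time figure quoted above follows directly from the cited numbers. A minimal sketch of that arithmetic (constant names are illustrative, not from the article):

```python
# Accumulated walk time in simple hub polling, using the figures quoted
# above: 16 ONUs, each with a 200 microsecond round-trip delay (20 km).
NUM_ONUS = 16
ROUND_TRIP_US = 200  # round-trip delay per ONU, in microseconds

accumulated_walk_time_ms = NUM_ONUS * ROUND_TRIP_US / 1000
print(accumulated_walk_time_ms)  # 3.2 ms wasted in every polling cycle
```
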
The just-described article proposes an interleaved polling approach where the next ONU is polled before the transmission from the previous one has arrived. This scheme provides statistical multiplexing for ONUs and results in efficient upstream channel utilization. A three ONU example is described as follows in reference to FIGS. 2A–2D. Let us imagine that at some moment of time t0 the OLT knows exactly how many bytes are waiting in each ONU's buffer and the Round-Trip Time (RTT) to each ONU. The OLT keeps this data in a polling table shown in FIG. 2A. At time t0, the OLT sends a control message (Grant) to ONU1, allowing it to send 6000 bytes (see FIG. 2A). Since, in the downstream direction, the OLT broadcasts data to all ONUs, the Grant should contain the ID of the destination ONU, as well as the size of the granted window (in bytes). Upon receiving the Grant from the OLT, ONU1 starts sending its data up to the size of the granted window (FIG. 2B). In our example—up to 6000 bytes. At the same time, the ONU keeps receiving new data packets from its user. At the end of its transmission window, ONU1 will generate its own control message (Request). The Request sent by ONU1 tells the OLT how many bytes were in ONU1's buffer at the moment when the Request was generated. In our case there were 550 bytes.
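The polling table and Grant message described above can be sketched as follows. Only ONU1's 6000-byte window comes from the example; the other table entries and all identifier names are illustrative assumptions:

```python
# Hypothetical sketch of the OLT's polling table (cf. FIG. 2A): the RTT
# to each ONU and the number of bytes reported waiting in its buffer.
# ONU1's 6000-byte entry is from the example; other values are invented.
polling_table = {
    1: {"rtt_us": 200, "bytes_waiting": 6000},
    2: {"rtt_us": 170, "bytes_waiting": 3200},
    3: {"rtt_us": 120, "bytes_waiting": 1800},
}

def make_grant(onu_id):
    """A Grant names the destination ONU and the granted window in bytes,
    since downstream traffic is broadcast to all ONUs."""
    return {"onu_id": onu_id,
            "window_bytes": polling_table[onu_id]["bytes_waiting"]}

print(make_grant(1))  # {'onu_id': 1, 'window_bytes': 6000}
```
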
Even before the OLT receives a reply from ONU1, it knows when the last bit of ONU1's transmission will arrive. The OLT calculates this as follows: (a) the first bit will arrive exactly after the RTT time. The RTT in our calculation includes the actual round-trip time, Grant processing time, Request generating time, and a preamble for the OLT to perform bit- and byte-alignment on received data, i.e., it is exactly the time interval between sending a Grant to an ONU and receiving data from the same ONU. (b) since the OLT knows how many bytes (or bits) it has authorized ONU1 to send, it knows when the last bit from ONU1 will arrive. Then, knowing the RTT for ONU2, the OLT can schedule a Grant to ONU2 such that the first bit from ONU2 will arrive with a small guard interval after the last bit from ONU1 (FIG. 2B). The guard intervals provide protection for fluctuations of round-trip time and control message processing time of various ONUs. Additionally, the OLT receiver needs some time to readjust its sensitivity due to the fact that signals from different ONUs may have different power levels because ONUs are located at different distances from the OLT (near-far problem).
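The timing rule in steps (a) and (b) can be sketched as follows. The line rate and guard interval are illustrative assumptions (the article does not fix them here), and all times are in microseconds:

```python
# Hedged sketch of the interleaved Grant scheduling rule described above.
LINE_RATE_BPS = 1_000_000_000  # assumed 1 Gb/s upstream channel
GUARD_US = 5                   # assumed guard interval between bursts

def transmission_time_us(window_bytes):
    """Time to transmit the granted window at the assumed line rate."""
    return window_bytes * 8 / LINE_RATE_BPS * 1e6

def next_grant_send_time(grant1_sent_at_us, rtt1_us, window1_bytes, rtt2_us):
    """When to send ONU2's Grant so its first bit arrives one guard
    interval after ONU1's last bit: ONU1's last bit lands at
    (grant time + RTT1 + window transmission time), and ONU2's first
    bit arrives RTT2 after its own Grant is sent."""
    last_bit_onu1 = grant1_sent_at_us + rtt1_us + transmission_time_us(window1_bytes)
    return last_bit_onu1 + GUARD_US - rtt2_us

# With the 6000-byte grant from the example and assumed RTTs of 200 us
# (ONU1) and 170 us (ONU2):
print(next_grant_send_time(0, 200, 6000, 170))  # 83.0
```
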
After some time, the data from ONU1 arrives. At the end of the transmission from ONU1, there is a new Request that contains information of how many bytes were in ONU1's buffer just prior to the Request transmission. The OLT will use this information to update its polling table (see FIG. 2C). By keeping track of times when Grants are sent out and data is received, the OLT constantly updates the RTT entries for the corresponding ONUs. Similarly, the OLT can calculate the time when the last bit from ONU2 will arrive. Hence, it will know when to send the Grant to ONU3 so that its data is tailed to the end of ONU2's data. After some more time, the data from ONU2 will arrive. The OLT will again update its table, this time the entry for ONU2 (see FIG. 2D).
If an ONU emptied its buffer completely, it will report 0 bytes back to the OLT. Correspondingly, in the next cycle the ONU will be granted 0 bytes, i.e., it will be allowed to send a new Request, but no data. Note that the OLT's receive channel is almost 100% utilized (Requests and guard times consume a small amount of bandwidth). Idle ONUs (without data to send) are not given transmission windows. That leads to a shortened cycle time, which in turn results in more frequent polling of active ONUs. As is clear from the description above, there is no need to synchronize the ONUs to a common reference clock (as traditionally done in TDMA schemes). Every ONU executes the same procedure driven by the Grant messages received from the OLT. The entire scheduling and bandwidth allocation algorithm is located in the OLT.
U.S. Pat. No. 6,324,184 granted to Hou et al. on Nov. 27, 2001 (that is incorporated by reference herein in its entirety) discloses a method and apparatus for dynamically allocating bandwidth among a number of subscriber units in an upstream channel of a communication network, such as a multichannel hybrid fiber coax (HFC) cable television system. Specifically, U.S. Pat. No. 6,324,184 illustrates, in FIG. 3, a time division multiple access (TDMA) frame structure used therein. A transport stream, shown generally at 300, includes first, second, and third superframes, denoted by reference numerals 310, 350 and 380, respectively. Each superframe is shown as being comprised of a number NF of frames, although the number of frames need not be the same in each superframe on different channels. In particular, the first superframe 310 includes frames 320, 330 . . . 340, the second superframe 350 includes frames 360, 362 . . . 364, and the third superframe 380 includes frames 390, 392 . . . 394. Furthermore, each frame is shown including a number NS of slots, although the number of slots need not be the same in each frame. For example, the first frame 320 of superframe 310 includes slots 322, 324, 326 and 328. Moreover, the size of each superframe, frame or slot may vary.
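The nested superframe/frame/slot hierarchy just described can be sketched as a simple data structure. The function name and the particular counts below are illustrative assumptions; as the patent notes, the counts may differ between superframes and frames:

```python
# Illustrative sketch of the TDMA hierarchy of FIG. 3 of U.S. Pat. No.
# 6,324,184: a transport stream of superframes, each holding NF frames,
# each frame holding NS slots.
def build_superframe(sf_id, num_frames, slots_per_frame):
    """Build one superframe; counts need not match other superframes."""
    return {
        "id": sf_id,
        "frames": [
            {"id": frame_idx, "slots": list(range(slots_per_frame))}
            for frame_idx in range(num_frames)
        ],
    }

# A transport stream of three superframes, labeled as in FIG. 3
# (310, 350, 380); 4 frames of 4 slots each are assumed values.
transport_stream = [
    build_superframe(sf_id, num_frames=4, slots_per_frame=4)
    for sf_id in (310, 350, 380)
]
print(len(transport_stream))  # 3
```
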
U.S. Pat. No. 5,930,262 granted to Sierens, et al. on Jul. 27, 1999 (that is also incorporated by reference herein in its entirety) discloses a method for TDMA management, in which a central station is enabled to transmit downstream frames to the terminal stations to allow the terminal stations to transfer upstream frames to the central station in time slots assigned thereto by way of access grant information included in the downstream frames. The downstream frame is a superframe having a matrix structure with rows and columns, and a first portion and a second portion of the matrix structure are an overhead portion and an information portion, respectively. The overhead portion includes the access grant information and the size of the overhead portion is flexibly adaptable. The central station and the terminal stations are adapted to send and to interpret the superframe.
U.S. Pat. No. 5,930,262 states that the first byte of such a frame is a predetermined synchronization byte S. Bytes 2 to 188 can be used for user data, followed by 16 bytes for error correcting code. Using the frame as a basic block (row), a superframe is constructed of 8 consecutive frames. The superframe is divided into columns containing dedicated blocks. The column containing the first byte of every frame contains a synchronization byte S as mentioned earlier. The next 8 columns form a TDMA Control Block TCB which contains 1 bit for superframe synchronization S', a second bit for specifying a counter C for slot synchronization of the TDMA and, per row, a maximum of 4 Transmit Enable Addresses TEA1–TEA4 for specification of the terminal station allowed to send information in a corresponding timeslot of the upstream channel.
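The byte budget of that frame layout follows from the figures quoted above. A minimal sketch (constant names are illustrative):

```python
# Frame layout per U.S. Pat. No. 5,930,262 as described above: 1 sync
# byte, bytes 2 through 188 for user data, then 16 bytes of error
# correcting code; 8 such frames form a superframe.
SYNC_BYTES = 1
USER_DATA_BYTES = 187   # bytes 2 through 188 inclusive
ECC_BYTES = 16
FRAMES_PER_SUPERFRAME = 8

frame_bytes = SYNC_BYTES + USER_DATA_BYTES + ECC_BYTES
print(frame_bytes)                           # 204 bytes per frame
print(frame_bytes * FRAMES_PER_SUPERFRAME)   # 1632 bytes per superframe
```
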
According to U.S. Pat. No. 5,930,262, the TEAs listed in the downstream frame indicate which terminal station may, upon the consecutive zero crossing of its counter, transmit an upstream burst. If 4 upstream bursts have to start during a specific downstream frame, then 4 TEAs will be required in the corresponding row of the TCB part of the downstream frame. If only 3 upstream bursts have to start, the fourth TEA is assigned a zero value by the central station. Typically, a row of the TCB controls the bursts starting transmission during the next frame. It cannot control transmission during the current frame, since some latency is required for processing the TEA in the terminal station. It should be noted that use could be made of special code TEAs as a result of which any terminal station would be allowed to transfer upstream information, thereby realizing a combination of TDMA and of the Aloha or contention principle. Acknowledgements could then be broadcast in operation and maintenance messages.
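The TEA assignment rule above (up to 4 bursts per row, unused positions zeroed) can be sketched as follows; the function name and station IDs are illustrative, not from the patent:

```python
# Hypothetical sketch of filling one TCB row with Transmit Enable
# Addresses (TEAs): at most 4 upstream bursts may start per downstream
# frame, and unused TEA positions are assigned a zero value.
MAX_TEAS_PER_ROW = 4

def build_tcb_row(station_ids):
    """Place the granted stations' TEAs in the row, zero-filling the
    remainder, as the central station does for unused positions."""
    if len(station_ids) > MAX_TEAS_PER_ROW:
        raise ValueError("at most 4 upstream bursts may start per frame")
    return list(station_ids) + [0] * (MAX_TEAS_PER_ROW - len(station_ids))

# Three bursts scheduled in this frame; the fourth TEA is zero.
print(build_tcb_row([17, 5, 9]))  # [17, 5, 9, 0]
```
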
See also a presentation entitled “Ethernet PON (EPON) TDMA Interface in PHY Layer and other considerations” by J. C. Kuo and Glen Kramer, IEEE 802.3 Ethernet in the First Mile (EFM) Study Group, Portland, Oreg., March 2001.