The communications industry is on the cusp of a revolution characterized by three driving forces that will forever change the communications landscape. First, deregulation has opened the local loop to competition, launching a whole new class of carriers that are spending billions to build out their networks and develop innovative new services. Second, the rapid decline in the cost of fiber optics and Ethernet equipment has made them an attractive option for access loop deployment. Third, the Internet has precipitated robust demand for broadband services, leading to an explosive growth in Internet Protocol (IP) data traffic while, at the same time, putting enormous pressure on carriers to upgrade their existing networks.
These drivers are, in turn, promoting several key market trends. In particular, the deployment of fiber optics is extending from the telecommunications backbone to Wide-Area Networks (WANs), Metropolitan-Area Networks (MANs), and the local loop. Concurrently, Ethernet is expanding its pervasiveness from Local-Area Networks (LANs) to the MAN and the WAN as an uncontested standard.
The confluence of these factors is leading to a fundamental paradigm shift in the communications industry, a shift that will ultimately lead to widespread adoption of a new optical IP Ethernet architecture that combines the best of fiber optic and Ethernet technologies. This architecture is poised to become the dominant means of delivering bundled data, video, and voice services on a single platform.
Passive Optical Networks (PONs) address the “last mile” of communications infrastructure between a Service Provider's Central Office (CO), Head End (HE), or Point of Presence (POP) and business or residential customer locations. Also known as the “access network” or “local loop”, this last mile consists predominantly—in residential areas—of copper telephone wires or coaxial cable television (CATV) cables. In metropolitan areas—where there is a high concentration of business customers—the access network often includes high-capacity synchronous optical network (SONET) rings, optical T3 lines, and copper-based T1 lines.
Historically, only large enterprises could afford to pay the substantial costs associated with leasing T3 (45 Mbps) or optical carrier (OC)-3 (155 Mbps) connections. And while digital subscriber line (DSL) and coaxial cable television (CATV) technologies offer a more affordable interim solution for data, they are hampered by their relatively limited bandwidth and reliability.
Yet even as access network improvements have remained at a relative bandwidth standstill, bandwidth has been increasing dramatically on long haul networks through the use of wavelength division multiplexing (WDM) and other technologies. Additionally, WDM technologies have penetrated metropolitan-area networks, thereby boosting their capacities dramatically. At the same time, enterprise local-area networks have moved from 10 Mbps to 100 Mbps, and soon many will utilize gigabit (1000 Mbps) Ethernet technologies. The end result is a gulf between the capacity of metro networks on one side, and end-user needs and networks on the other, with a last-mile “bottleneck” in between. Passive optical networks—and in particular EPONs—promise to break this last-mile bottleneck.
The economics of EPONs are compelling. Optical fiber is the most effective medium for transporting data, video, and voice traffic, and it offers virtually unlimited bandwidth. But the cost of deploying fiber in a “point-to-point” arrangement from every customer location to a CO, installing active components at each endpoint, and managing the fiber connections within the CO is prohibitive. EPONs address these shortcomings of point-to-point fiber solutions by using a point-to-multipoint topology instead; by eliminating active electronic components such as regenerators, amplifiers, and lasers from the outside plant; and by reducing the number of lasers needed at the CO.
Unlike point-to-point fiber-optic technology, which is typically optimized for metro and long haul applications, EPONs are designed to address the demands of the access network. And because they are simpler, more efficient, and less costly than alternative access solutions, EPONs finally make it cost-effective for service providers to extend optical fiber into the last mile.
Accordingly, EPONs are being widely recognized as the access technology of choice for next-generation, high-speed, low-cost access network architectures. EPONs exhibit a shared, single-fiber, point-to-multipoint passive optical topology while employing gigabit Ethernet protocols to deliver up to 1 Gbps of packetized services that are well suited to carrying voice, video, and data traffic between a customer premises and a CO. Adding to their attractiveness, EPONs have recently been ratified by the Institute of Electrical and Electronics Engineers (IEEE) Ethernet in the First Mile (EFM) task force in the IEEE 802.3ah specification.
With reference to FIG. 1, there is shown a typical EPON as part of overall network architecture 100. In particular, an EPON 110 is shown implemented as a “tree” topology between a service provider's CO 120 and customer premises 130[1] . . . 130[N], where a single trunk or “feeder” fiber 160 is split into a number of “distribution” fibers 170[1] . . . 170[N] through the effect of 1×N passive optical splitters 180.
As can be further observed with reference to this FIG. 1, the trunk fiber 160 is terminated at the CO 120 at Optical Line Terminator (OLT) device 190 and split into the number of distribution fibers 170[1] . . . 170[N] which are each either further split or terminated at an Optical Network Unit (ONU) 150[1] . . . 150[N] located at a respective customer premises 130[1] . . . 130[N]. As can be determined with reference to this FIG. 1, in the downstream direction (from the OLT to the ONUs) the EPON is a point-to-multipoint network, while in the upstream direction (from the ONUs to the OLT), the EPON is a multipoint-to-point network.
The process of sending data downstream from an OLT to an ONU on an EPON shared network topology is somewhat different from the process of sending data upstream from an ONU to the OLT. More specifically, in the downstream direction the EPON provides a broadcast medium which transmits every Ethernet frame (packet) simultaneously to all ONUs. Each individual ONU then extracts only the packets destined for it and ignores the others. Downstream bandwidth sharing among traffic belonging to the various ONUs is therefore simple, and is dictated by an egress scheduling policy implemented at the OLT's EPON interface.
In the upstream direction however, only one ONU can transmit to the OLT at a given time to prevent collisions, since the trunk fiber is shared by all of the individual ONUs. To allow sharing of the upstream bandwidth among the various ONUs, and to prevent collisions of packets originating from different ONUs, an EPON media access control protocol based upon time-division multiplexing has been developed and generally adopted by the IEEE 802.3ah Task Force. This protocol, named the Multi-Point Control Protocol (MPCP), allows the OLT to arbitrate between various ONUs requesting upstream transmission over the shared medium by assigning exclusive timeslots to individual ONUs. Accordingly, each ONU can transmit packets upstream only during its assigned timeslot(s).
In performing this arbitration, MPCP utilizes two Ethernet control messages, namely a GATE message and a REPORT message. The GATE message (sent by an OLT to an ONU) assigns a transmission timeslot window to the ONU. The GATE message specifies transmission start and end times during which the ONU can transmit queued customer traffic upstream to the OLT. The REPORT message (sent by an ONU to the OLT) is used by an ONU to report bandwidth requirements for upstream transmission of its traffic. The REPORT message contains queue occupancy information, which can aid the OLT in allocating appropriate bandwidth to the ONU. A diagram depicting the relationships among the various components and protocols associated with upstream transmission in EPONs is shown in FIG. 2.
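The REPORT/GATE exchange can be sketched in code. The record layouts and the `schedule` helper below are illustrative only (field names are hypothetical, not the exact IEEE 802.3ah frame format), assuming a 1 Gbps line rate at which one 16 ns time quantum carries 2 bytes:

```python
from dataclasses import dataclass

@dataclass
class Report:
    """ONU -> OLT: bytes queued for upstream transmission."""
    onu_id: int
    queued_bytes: int

@dataclass
class Gate:
    """OLT -> ONU: an exclusive upstream transmission window."""
    onu_id: int
    start_tq: int    # window start, in 16 ns time quanta
    length_tq: int   # window length, in 16 ns time quanta

def schedule(reports, start_tq=0, guard_tq=64):
    """Assign back-to-back, non-overlapping windows (a naive policy that
    grants exactly what each ONU reported, separated by a guard band)."""
    gates, t = [], start_tq
    for r in reports:
        length_tq = -(-r.queued_bytes // 2)  # bytes -> TQ, rounded up
        gates.append(Gate(r.onu_id, t, length_tq))
        t = t + length_tq + guard_tq
    return gates
```

Because the windows never overlap, no two ONUs transmit at once and upstream collisions are avoided by construction.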
Additionally, the GATE and REPORT messages also provide mechanisms for global time-synchronization between the OLT and ONUs, to ensure accurate, collision-free operation of the TDM-based bandwidth arbitration. According to the IEEE 802.3ah standard, MPCP timings are measured in time-quantum units of 16 nanoseconds; consequently, GATE transmission grants and REPORT queue lengths are specified in these 16 nanosecond time-quantum units.
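The time-quantum arithmetic follows directly: at the 1 Gbps EPON line rate one bit occupies 1 ns, so a 16 ns quantum carries exactly 2 bytes. A minimal conversion sketch (helper names are ours):

```python
TQ_NS = 16          # one MPCP time quantum, per IEEE 802.3ah
BYTES_PER_TQ = 2    # at 1 Gbps: 16 ns x 0.125 byte/ns

def bytes_to_tq(n_bytes: int) -> int:
    """Time quanta needed to transmit n_bytes (a partial TQ counts whole)."""
    return -(-n_bytes // BYTES_PER_TQ)

def tq_to_ns(tq: int) -> int:
    """Duration of tq time quanta in nanoseconds."""
    return tq * TQ_NS
```

For example, a maximum-size 1500-byte Ethernet frame occupies 750 TQ, i.e. 12 microseconds on the wire.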
MPCP therefore provides a mechanism for the OLT to arbitrate the upstream bandwidth by dynamically allocating non-overlapping transmission grants to the ONUs, based upon the ONU REPORT messages received and an allocation policy configured at the OLT. However, since MPCP does not specify a particular policy which the OLT uses to allocate bandwidth among the ONUs, this policy choice is left to the specific OLT implementation. Such a dynamic bandwidth allocation policy, which provides the foundation upon which an OLT constructs and sends GATE messages to the ONUs, is appropriately named the Dynamic Bandwidth Allocation (DBA) algorithm.
These DBA algorithms or schemes must account for the potentially bursty nature of traffic and adapt to instantaneous ONU bandwidth requirements while performing statistical multiplexing. Their design and/or selection has a profound impact on the fairness, delay, jitter and other characteristics of upstream bandwidth allocation in an EPON.
As is known by those skilled in the art, DBA schemes employed by an OLT work in conjunction with the MPCP protocol to assign transmission grant schedules for active ONUs. In essence, the DBA is responsible for inter-ONU traffic scheduling over an upstream EPON channel.
DBA Design Criteria
A number of DBA schemes have been investigated and reported in the literature. Despite their differences, however, each can be compared and evaluated according to a number of objective criteria. Designing and/or choosing an appropriate DBA for implementation within an OLT requires a careful evaluation of each of these criteria.
Fairness/Efficiency in Bandwidth Allocation
Since DBA schemes dictate the policy for allocating transmission grant schedules to active ONUs, fairness in bandwidth allocation among various ONUs is a critically important property of the DBA. In particular, the DBA must fairly share the available bandwidth among contending ONUs based upon current demand and any Service Level Agreement (SLA) that exists between service customers served by a particular ONU and the service provider. In addition to fairness, the bandwidth allocation scheme must also exhibit high efficiency, such that the upstream link utilization is high, and transmission grants are not wasted. As is known, a potential source of wasted upstream bandwidth is frame delineation, wherein an ONU may not completely fill up allocated transmission grants due, in part, to the variably-sized Ethernet frames and infeasibility of frame fragmentation.
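The frame-delineation effect is easy to see in a sketch: because Ethernet frames cannot be fragmented, a grant goes partially unused whenever the head-of-line frame does not fit whole. (This greedy packing loop is illustrative, not drawn from any particular implementation.)

```python
def fill_grant(frame_sizes, grant_bytes):
    """Fit whole frames into a grant, in FIFO order, without fragmentation.
    Returns (frames_sent, unused_bytes): the unused remainder is the
    bandwidth wasted to frame delineation."""
    sent, used = [], 0
    for size in frame_sizes:
        if used + size > grant_bytes:
            break  # head-of-line frame does not fit whole; stop here
        sent.append(size)
        used += size
    return sent, grant_bytes - used
```

For example, three queued 1500-byte frames offered a 4000-byte grant leave 1000 bytes of the grant unused.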
Work Conservation Property
Bandwidth allocation schemes may be either “work conserving” or “non-work conserving”. Work-conserving schemes ensure that bandwidth is never wasted so long as at least one ONU has data to transmit. As such, an individual ONU may acquire as much bandwidth as it needs, provided that the bandwidth demands from all other ONUs on the EPON are met. In contrast, non-work-conserving schemes do not permit an individual ONU to exceed, for example, a maximum bandwidth allocation as provided by a Service Level Agreement associated with that individual ONU. Consequently, an individual ONU that has reached its maximum bandwidth allocation as per its associated SLA will not be granted more than that maximum amount—even if free bandwidth exists. As a result, EPONs that utilize a non-work-conserving scheme may have periods of time when no data is transmitted, even though individual ONUs (which have reached their maximum allocation) have unfulfilled demands. Despite this apparent drawback, non-work-conserving schemes may nevertheless be employed, as they tend to reduce potential congestion in the core network while exhibiting tighter jitter bounds.
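The distinction reduces to whether an SLA cap is applied during allocation. The sketch below (a deliberately simplified sequential policy of our own) grants each ONU its demand from the remaining capacity; passing `sla_caps` makes it non-work-conserving, so capacity can go unused even while demand remains:

```python
def allocate(demands, capacity, sla_caps=None):
    """Grant bandwidth to ONUs in id order from a shared capacity pool.
    demands / sla_caps: {onu_id: bytes}.  When sla_caps is given, each
    grant is clipped to the ONU's SLA maximum (non-work-conserving)."""
    grants, remaining = {}, capacity
    for onu in sorted(demands):
        g = min(demands[onu], remaining)
        if sla_caps is not None:
            g = min(g, sla_caps[onu])   # enforce the SLA ceiling
        grants[onu] = g
        remaining -= g
    return grants
```

With demands of {1: 600, 2: 300}, a capacity of 1000, and SLA caps of 400 each, ONU 1 is held to 400 and 300 bytes of capacity go unserved despite ONU 1's unmet demand.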
Delay and Jitter Bounds
Whatever specific DBA scheme is implemented, it should result in both small delay and delay-jitter bounds. This necessitates that an ONU containing queued packets waiting for transmission receives transmission grants from the OLT as quickly and as regularly as possible.
For applications involving real-time voice and video traffic, minimizing the packet transmission delay (defined as the time period between when a packet is queued up at an ONU to when it is subsequently transmitted to an OLT) is especially critical as it largely determines end-to-end application performance. In addition, a long packet transmission delay has the added detriment of requiring additional buffering at the ONU in the upstream direction to prevent dropped packets.
The delay-jitter bound refers to the variation in the delay of packets as they travel over an EPON. For applications such as streaming video and voice, a small jitter bound is important as it minimizes the playout buffer needed at the destination and thereby ensures smooth, uninterrupted voice/video playback.
Since a DBA scheme is implemented and run at the OLT, it does not have direct access to packet queue information; instead it relies on MPCP REPORT messages to provide this information, and MPCP GATE messages to signal the start of packet transmission. The latency in sending REPORT and GATE messages over the EPON introduces an additional delay and jitter factor for packets traversing upstream, since the DBA might not enable transmission of queued packets at an ONU until a round of REPORT and GATE messages has been exchanged.
Implementation Complexity
A DBA scheme needs to be simple and fast enough to process REPORT messages and send out GATE messages as quickly as possible in order to meet low-delay, real-time requirements. Consequently, the implementation complexity of a specific DBA scheme, coupled with its ability to scale to a large number of ONUs while still preserving its real-time capabilities, is of paramount concern.
DBA Classification
A number of DBA schemes have been proposed in the literature. (See, for example, M. P. McGarry, M. Maier, and M. Reisslein, “Ethernet PONs: A Survey of Dynamic Bandwidth Allocation (DBA) algorithms”, IEEE Communications Magazine, 2004). These schemes may be broadly classified into three major groups, namely: Static SLA-Based Bandwidth Allocation; Demand-Based Bandwidth Allocation; and Demand+SLA-Based Bandwidth Allocation.
Static SLA-Based Bandwidth Allocation
Static SLA-based bandwidth allocation schemes are arguably the simplest of the bandwidth allocation schemes that can be implemented at the OLT. With these static SLA-based bandwidth allocation schemes, the transmission grant schedule generated by the DBA is fixed (based upon parameters such as the ONU SLA, etc.), and repeated continuously to provide a TDM-like service to each of the ONUs. These schemes run in a cyclic fashion, with a fixed cycle length. Consequently, in every cycle, the DBA assigns a fixed transmission grant to each ONU in a round-robin manner. The transmission grant schedule within a particular cycle is based solely upon static ONU parameters such as those defined by an SLA. As a result, it does not take into account dynamic factors such as ONU demands carried in REPORT messages.
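A static SLA-based cycle is therefore trivial to generate. The sketch below (our own illustration) emits the same fixed, round-robin grant schedule every cycle, ignoring REPORT messages entirely:

```python
def static_cycle(sla_grants, cycle_start_tq=0):
    """One fixed, TDM-like cycle.  sla_grants: [(onu_id, grant_tq), ...]
    taken from each ONU's SLA.  Returns the schedule as a list of
    (onu_id, start_tq, length_tq) tuples, plus the fixed cycle length."""
    schedule, t = [], cycle_start_tq
    for onu_id, grant_tq in sla_grants:
        schedule.append((onu_id, t, grant_tq))
        t += grant_tq
    return schedule, t - cycle_start_tq  # cycle length never varies
```

Every cycle is identical regardless of demand, which is precisely why bursty traffic is served poorly.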
While these static SLA-based bandwidth allocation schemes—because of their TDM-like operation—provide low delay and jitter bounds for ONU traffic, they are primarily suited to constant-bit-rate traffic, since they cannot provide statistical multiplexing of the traffic. When ONU traffic patterns are bursty and dynamic, static SLA-based bandwidth allocation schemes lead to low bandwidth utilization and general unfairness.
Demand-Based Bandwidth Allocation
Demand-based bandwidth allocation schemes are characterized by their consideration of demands from ONUs while allocating bandwidth, without regard to any ONU-specific service level agreement. (See, for example, G. Kramer, B. Mukherjee, and G. Pesavento, “IPACT: A Dynamic Protocol for an Ethernet PON”, IEEE Communications Magazine, February 2002; and H. Miyoshi, T. Inoue, and K. Yanashita, “D-CRED: Efficient Dynamic Bandwidth Allocation Algorithm in Ethernet Passive Optical Networks”, Journal of Optical Networking, August 2002.) Such demand-based bandwidth allocation schemes are typically polling-based. As such, the OLT polls each ONU in a round-robin manner and—based on the queue lengths in the received REPORT message(s)—issues transmission grants to ONUs as required.
Additionally, the cycle length for each round-robin iteration is not fixed when a demand-based bandwidth allocation scheme is employed; rather, it varies depending upon ONU demands in the previous iteration. Consequently, these demand-based bandwidth allocation schemes achieve high bandwidth utilization but cannot provide SLA-based fairness or service differentiation among ONUs, nor can they provide low delay and jitter bounds due to the variable cycle-lengths involved.
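One round-robin iteration of such a scheme, in the spirit of IPACT's "limited service" discipline (though the code is our simplification), grants min(demand, per-cycle limit) to each ONU, which is why the cycle length floats with aggregate demand:

```python
def polling_cycle(reports, limit_bytes, guard_bytes=8):
    """One demand-based iteration.  reports: [(onu_id, demand_bytes), ...].
    Each ONU is granted its reported demand, capped at limit_bytes; the
    resulting cycle length varies with total demand rather than being
    fixed, as in static SLA-based schemes."""
    grants, cycle_len = {}, 0
    for onu_id, demand in reports:
        g = min(demand, limit_bytes)
        grants[onu_id] = g
        cycle_len += g + guard_bytes  # grant plus inter-ONU guard band
    return grants, cycle_len
```

The per-cycle limit prevents one heavily loaded ONU from monopolizing the channel, but no SLA differentiation is applied.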
Demand+SLA-Based Bandwidth Allocation
Demand+SLA-based bandwidth allocation schemes take into account both ONU SLAs and ONU demands (as reported in their REPORT messages) while creating transmission grant schedules. Examples include the scheme described by M. Ma, Y. Zhu and T. Cheng in a paper entitled “A Bandwidth Guaranteed Polling MAC Protocol for Ethernet Passive Optical Networks”, presented at IEEE INFOCOM, 2003; and that described by C. Assi, Y. Ye, S. Dixit, and M. Ali in “Dynamic Bandwidth Allocation For Quality-Of-Service Over Ethernet PONs”, which appeared in IEEE Journal on Selected Areas in Communications, November 2003. In each cycle—according to these Demand+SLA-based bandwidth allocation schemes—the transmission grant schedule is created from the most recent REPORT message(s) received from registered ONUs and any configured SLA parameters for these ONUs, using an appropriate fairness model (such as max-min fairness or proportional fairness).
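As one concrete fairness model, max-min fairness can be computed by iterative water-filling: every unsatisfied ONU receives an equal share each round, and ONUs needing less return the surplus to the pool. (A generic sketch, not the algorithm of either cited paper.)

```python
def max_min_share(demands, capacity):
    """Max-min fair split of `capacity` among ONUs.
    demands: {onu_id: requested_bytes}.  Returns {onu_id: allocation}."""
    alloc = {onu: 0.0 for onu in demands}
    active = set(demands)
    while active and capacity > 1e-9:
        share = capacity / len(active)       # equal share this round
        for onu in sorted(active):
            give = min(share, demands[onu] - alloc[onu])
            alloc[onu] += give
            capacity -= give                 # surplus stays in the pool
        # drop ONUs whose demand is now fully satisfied
        active = {o for o in active if demands[o] - alloc[o] > 1e-9}
    return alloc
```

With demands of 100, 300 and 600 and a capacity of 600, the small demand is fully met and the remaining 500 is split evenly, 250 each, between the other two ONUs.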
In addition, certain DBA schemes can be further categorized based upon whether they handle intra-ONU scheduling in addition to inter-ONU scheduling. Up until this point, most DBA schemes consider inter-ONU scheduling only, that is, determining which ONU should transmit traffic at a particular time.
However, Quality-of-Service (QoS) considerations typically require that individual ONUs queue traffic into multiple priority queues—one for each class of service supported. How an individual ONU selects packets from among its multiple priority queues for transmission within its assigned grant window is typically determined locally by the individual ONU. This local, priority-queue selection determination is generally referred to as intra-ONU scheduling.
Accordingly, some DBA schemes have been proposed that consider both intra-ONU and inter-ONU scheduling. These “hybrid” schemes, such as that described by G. Kramer, B. Mukherjee, S. Dixit, Y. Ye, and R. Hirth in a paper entitled “Supporting Differentiated Classes of Service in Ethernet Passive Optical Networks”, which appeared in Journal of Optical Networking, in August 2002, operate by inspecting separately reported queue lengths contained in MPCP REPORT messages sent from an ONU to the OLT. The DBA is therefore able to determine separate bandwidth demands for each QoS class supported within an individual ONU. If necessary, the DBA can allocate multiple transmission grants per ONU in a given cycle—one for each of its (the ONU's) priority queues—in contrast to issuing a single transmission grant to the ONU and letting the ONU schedule its priority queues locally.
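Such a hybrid scheme can be sketched as follows: each (ONU, priority) queue is reported separately and receives its own grant, with the OLT ordering grants by priority class across all ONUs. (Illustrative only; the cited scheme differs in its details.)

```python
def hybrid_grants(queue_reports, start_tq=0):
    """queue_reports: [(onu_id, priority, queued_bytes), ...] with a
    separate entry per priority queue (0 = highest priority).
    Returns one grant per queue as (onu_id, priority, start_tq, length_tq),
    scheduling the higher-priority queues of every ONU first."""
    grants, t = [], start_tq
    for onu_id, prio, nbytes in sorted(queue_reports, key=lambda r: r[1]):
        length_tq = -(-nbytes // 2)   # bytes -> 16 ns TQ at 1 Gbps
        grants.append((onu_id, prio, t, length_tq))
        t += length_tq
    return grants
```

Note that an ONU may receive several disjoint windows per cycle, one per queue, rather than a single grant it schedules internally.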
While such hybrid schemes do advantageously provide intra-ONU and inter-ONU scheduling under the control of a single, centralized scheme while simultaneously enabling fine-grained QoS service differentiation, they nevertheless suffer from an inability to scale and are prohibitively complex. In addition, these prior art hybrid schemes are not well supported by standards-based protocols, i.e., the MPCP GATE mechanism, and are therefore unsuitable for widespread adoption.
Consequently, methods and apparatus that improve the upstream transmission characteristics of an EPON—and in particular those incorporating and satisfying the DBA design criteria described previously—would represent a significant advance in the art. Such a method and apparatus is the subject of the present invention.