Fiber To The X (FTTX) is conventionally known as a configuration of an optical communication network. FTTX is a network configuration system that connects a building of a user and a station facility of a communication carrier, and various systems exist depending on how the optical fibers are laid. FIG. 16 is an explanatory diagram for explaining an example of a configuration of FTTX.
According to the FTTX system example depicted in FIG. 16, a building of a user and a station facility of a communication carrier are coupled by optical fiber. The FTTX system includes optical network terminals (ONTs) 1 to 3 as terminal devices on the user side, and an optical line terminal (OLT) 5 as a terminal device on the communication carrier side. Moreover, the FTTX system includes a star coupler 4 that multiplexes or demultiplexes optical signals transmitted and received between the ONTs 1 to 3 and the OLT 5.
For example, the ONTs 1 to 3 are placed in the respective buildings of users, and transmit and receive data to and from the OLT 5 via the star coupler 4. The OLT 5 transmits transmission data to the ONTs 1 to 3 as continuous signals in the 1.49-micrometer band. Moreover, the OLT 5 receives, as reception data, time-division multiplexed burst signals in the 1.31-micrometer band that are respectively transmitted from the ONTs 1 to 3.
FIG. 17 is an explanatory diagram for explaining a burst signal receiver. A burst signal receiver 10 is placed inside the OLT 5 depicted in FIG. 16, and is coupled to a retiming block 20 that performs clock extraction from a received signal, as depicted in FIG. 17. Moreover, as depicted in FIG. 17, the burst signal receiver 10 includes a light receiving element 11, a preamplifier 12, an amplitude detecting circuit 13, a threshold creating circuit 14, a comparator 15, an amplifier 16, and an output buffer 17.
The light receiving element 11 receives an optical signal and converts it into a current signal. The preamplifier 12 then converts the current signal from the light receiving element 11 into a voltage signal at a certain level. The amplitude detecting circuit 13 detects an amplitude of the voltage signal output by the preamplifier 12; and the threshold creating circuit 14 creates, based on the amplitude detected by the amplitude detecting circuit 13, a code identifying threshold, that is, a threshold for identifying a code of the input optical signal.
The comparator 15 compares the upper-end level of the amplitude of the optical signal detected by the amplitude detecting circuit 13 with a certain predetermined threshold, and outputs the comparison result to the output buffer 17. The amplifier 16 encodes the voltage signal from the preamplifier 12 by using the code identifying threshold created by the threshold creating circuit 14. The output buffer 17 determines from the comparison result of the comparator 15 whether the optical signal is a burst signal, and outputs the signal input from the amplifier 16 to the retiming block 20 only if the optical signal is a burst signal.
Specifically, in order to avoid outputting noise as a signal from the burst signal receiver 10, the output buffer 17 outputs the signal input from the amplifier 16 only when the upper-end level of the amplitude of the burst signal is equal to or higher than the certain threshold. For examples of such conventional technologies, refer to Japanese Examined Utility Model Application Publication No. 08-4755 and Japanese Laid-open Patent Publication No. 2004-260230.
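The gating behavior of the output buffer 17 described above can be sketched as follows. This is a minimal illustration, not the actual circuit: the threshold value and the sample values are assumptions chosen for the example.

```python
# Sketch of the output-buffer gating described above (all values illustrative).
# A signal is forwarded to the retiming block only when the upper-end level
# of its amplitude meets or exceeds a fixed burst-detection threshold.

BURST_DETECT_THRESHOLD = 0.5  # assumed threshold in volts (hypothetical)

def gate_output(samples):
    """Forward the amplified samples only if the upper-end level of the
    amplitude reaches the burst-detection threshold; otherwise output nothing."""
    upper_end_level = max(samples)
    if upper_end_level >= BURST_DETECT_THRESHOLD:
        return samples  # treated as a burst signal
    return []           # treated as noise; nothing is output

# A low-level input (noise) is suppressed; a burst-level input passes.
print(gate_output([0.1, 0.2, 0.1]))   # []
print(gate_output([0.2, 0.9, 0.3]))   # [0.2, 0.9, 0.3]
```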
However, the conventional technology described above has a problem: when the transmission rate of the optical signal is increased using an existing circuit, the detection precision of the burst signal in the burst signal receiver deteriorates, and this deterioration cannot be remedied without degrading the quality of the detected signal. The following description explains the deterioration in burst-signal detection precision that arises when shifting the transmission rate of the optical signal from 1.2 Gigabit per second (Gbps) to 10 Gbps using an existing circuit, and then explains the problems of the conventional technology.
To begin with, when the transmission rate of the optical signal is increased, the signal amplitude of a burst signal decreases. FIG. 18 is an explanatory diagram for explaining the transimpedance ratio of a Trans Impedance Amplifier (TIA) with respect to each transmission rate. FIG. 18 is a table in which a transmission rate is associated with a minimum reception-level requirement (IEEE), a minimum reception-level requirement (ITU-T), a BER requirement, and a TIA transimpedance ratio.
“Transmission rate” indicates the number of bits to be transmitted per second; and “minimum reception-level requirement (IEEE)” indicates an attenuation allowance of the optical signal in a communication channel in accordance with Institute of Electrical and Electronics Engineers (IEEE) standards. Moreover, “minimum reception-level requirement (ITU-T)” indicates an attenuation allowance of the optical signal in a communication channel in accordance with International Telecommunication Union Telecommunication Standardization Sector (ITU-T) standards. Furthermore, “bit error rate (BER) requirement” indicates an allowable range of bit errors included in received data; and “TIA transimpedance ratio” indicates the current-to-voltage conversion ratio of the TIA included in the preamplifier 12.
As depicted in FIG. 18, at “transmission rate: 1.2 G”, the other items are “minimum reception-level requirement (IEEE): −29.5 dBm, minimum reception-level requirement (ITU-T): −28.0 dBm, BER requirement: <10⁻¹², and TIA transimpedance ratio: 1”. At “transmission rate: 2.5 G”, the other items are “minimum reception-level requirement (IEEE): −, minimum reception-level requirement (ITU-T): −28.0 dBm, BER requirement: <10⁻¹², and TIA transimpedance ratio: 1/2”. Accordingly, an optical burst signal needs to be identified at a reception level down to −28.0 dBm.
On the other hand, when “transmission rate” is shifted to “10 G”, the other items are “minimum reception-level requirement (IEEE): −28.0 dBm, minimum reception-level requirement (ITU-T): −28.0 dBm, BER requirement: <10⁻³, and TIA transimpedance ratio: 1/8”. In other words, when the transmission rate is shifted to 10 G while the attenuation allowance of the optical signal in the existing communication channel remains the same, an optical burst signal still needs to be identified at a reception level down to −28.0 dBm even though the TIA transimpedance ratio decreases.
FIG. 19 is an explanatory diagram for explaining a BER property. The vertical axis of FIG. 19 denotes BER, and the horizontal axis denotes minimum reception-level requirement (dBm). As depicted with a broken line in FIG. 19, the BER property of “1.2 G/2.5 G BER” is indicated by a straight line connecting “minimum reception-level requirement: −28, BER: 1E-12” and “minimum reception-level requirement: −34, BER: 1E-3”.
On the other hand, as depicted with a solid line in FIG. 19, the BER property of “10 G BER” is indicated by a straight line connecting “minimum reception-level requirement: −22, BER: 1E-12” and “minimum reception-level requirement: −28, BER: 1E-3”. In other words, when the transmission rate is shifted to “10 G”, the burst-signal receiver has to detect optical signals at a level 6 dB lower than the minimum reception-level requirement of −22 dBm that corresponds to “BER requirement: 1E-12”.
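Because each BER property in FIG. 19 is a straight line on a log(BER)-versus-dBm plot, the BER at intermediate reception levels can be obtained by linear interpolation of log10(BER). The sketch below does this for the quoted 10 G endpoints; the interpolation itself is an illustration, not a model stated in the document.

```python
import math

# The BER curves in FIG. 19 are straight lines on a log10(BER)-vs-dBm plot.
# This sketch interpolates log10(BER) linearly between the two endpoints of
# a line, given as (reception level in dBm, BER) pairs.

def ber_at(level_dbm, p1, p2):
    """Linearly interpolate log10(BER) between two (dBm, BER) points."""
    (x1, b1), (x2, b2) = p1, p2
    y1, y2 = math.log10(b1), math.log10(b2)
    y = y1 + (y2 - y1) * (level_dbm - x1) / (x2 - x1)
    return 10.0 ** y

# Endpoints of the 10 G line quoted above.
line_10g = ((-22.0, 1e-12), (-28.0, 1e-3))

# At the endpoints the interpolation reproduces the quoted BER values.
print(ber_at(-22.0, *line_10g))  # ~1e-12
print(ber_at(-28.0, *line_10g))  # ~1e-3
```

Midway between the endpoints (−25 dBm) the interpolated BER is about 10⁻⁷·⁵, illustrating how quickly the error rate grows as the reception level drops.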
As the transmission rate shifts, the frequency property of the TIA is set optimally. As a frequency property of a TIA, the cutoff frequency fc, which is the marginal frequency at which gain reduction begins, is explained below. FIG. 20 is an explanatory diagram for explaining the cutoff frequency of a TIA. The cutoff frequency of a TIA is obtained by Expression (1) and Expression (2) using the feedback resistor Rf, the open gain G, the parasitic capacitance Cpd of the light receiving element, and the parasitic capacitance Ci of the TIA input, depicted in FIG. 20. Ri denotes the input impedance.

Ri = Rf/G  (1)

fc = 1/(2*π*Ri*(Cpd+Ci))  (2)
As expressed in Expression (2), the cutoff frequency depends on the input impedance Ri, and when shifting the transmission rate from “1.2 G” to “10 G”, the TIA needs approximately eight times the bandwidth, so Ri has to be reduced to approximately 1/8. Because the transimpedance is set by Rf, and Ri = Rf/G at a fixed open gain G, reducing Ri to 1/8 reduces Rf to 1/8 as well. In other words, the output amplitude of the voltage signal output from the TIA falls to approximately 1/8, and the detection precision of the burst signal deteriorates.
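The relation between Expressions (1) and (2) and the 1/8 amplitude penalty can be checked numerically. The component values below (Rf, G, Cpd, Ci) are illustrative assumptions, not values given in the document.

```python
import math

# Numerical check of Expressions (1) and (2) with illustrative component
# values. Rf, G, Cpd, and Ci are assumptions chosen for the example.

def cutoff_frequency(rf, g, cpd, ci):
    ri = rf / g                                      # Expression (1): Ri = Rf/G
    return 1.0 / (2.0 * math.pi * ri * (cpd + ci))   # Expression (2)

rf, g = 10e3, 100.0          # feedback resistor 10 kΩ, open gain 100 (assumed)
cpd, ci = 0.5e-12, 0.5e-12   # parasitic capacitances, 0.5 pF each (assumed)

fc_base = cutoff_frequency(rf, g, cpd, ci)
# To widen the band ~8x (1.2 G -> 10 G), Ri must shrink to 1/8, i.e. Rf/8
# at fixed G. The same Rf reduction cuts the TIA output amplitude to ~1/8.
fc_wide = cutoff_frequency(rf / 8.0, g, cpd, ci)
print(fc_wide / fc_base)   # ~8
```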
Moreover, when the transmission rate of the optical signal is increased, the output level from the TIA decreases because of a tail current or the transient response characteristics of the light receiving element. Specifically, when there is a level difference of about 20 dB between consecutive time-division multiplexed burst signals, the output level from the TIA decreases due to a tail current or the transient response characteristics of the light receiving element. FIG. 21 is an explanatory diagram for explaining deterioration in the amplitude of a burst signal. “Burst signal” depicted in FIG. 21 denotes a burst signal input into a light receiving element; “APD multiplication factor change” denotes the change in the multiplication factor of the burst signal by the avalanche photodiode (APD), which is the light receiving element; and “TIA output” denotes the voltage signal output from the TIA.
As depicted in FIG. 21, the burst signals input into the APD are consecutive burst signals (n and n+1) attenuated to −8 dBm and −28 dBm, respectively, and the guard time (GT) between the burst signals is tens of nanoseconds. In this case, as depicted in FIG. 21, when shifting from the n-th burst signal to the (n+1)th burst signal, a delay occurs in the recovery of the multiplication factor of the APD, and a multiplication-factor recovery time of a few microseconds is needed. Until the desired multiplication factor for the (n+1)th burst signal is reached, the multiplication factor of the APD keeps changing, and the lack of the required multiplication factor produces amplitude deterioration in the voltage signal output from the TIA; consequently, the detection precision of the burst signal deteriorates. The limiter level shown in FIG. 21 is the maximum output of the TIA.
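The recovery delay can be illustrated with a simple first-order model. This is a hypothetical sketch: the starting value, target value, and time constant are assumptions, and the real APD recovery behavior in FIG. 21 is not necessarily exponential.

```python
import math

# Hypothetical first-order model of the APD multiplication-factor recovery
# described above: after the strong (-8 dBm) burst n ends, the multiplication
# factor climbs back toward a target value with a time constant of about a
# microsecond. All numerical values are assumptions for illustration.

def multiplication_factor(t_us, m_start=2.0, m_target=10.0, tau_us=1.0):
    """Multiplication factor t_us microseconds into burst n+1."""
    return m_target - (m_target - m_start) * math.exp(-t_us / tau_us)

# Early in burst n+1 the factor is still far below target, so the TIA
# output amplitude for the weak (-28 dBm) burst is correspondingly reduced.
print(multiplication_factor(0.0))        # 2.0
print(multiplication_factor(5.0) > 9.9)  # True (nearly recovered after ~5 us)
```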
Furthermore, when the transmission rate of the optical signal is increased, detection errors of the burst signal increase. Specifically, due to the change in BER that arises when the transmission rate is increased, the S/N ratio, which is the ratio between signal and noise, decreases, and the detection error of the burst signal increases. FIG. 22 is a schematic diagram for explaining the burst-signal detection error with respect to each transmission rate. FIG. 22 is a table in which a transmission rate is associated with a minimum reception-level requirement (ITU-T), a BER requirement, an S/N ratio, and a burst-signal detection error.
As depicted in FIG. 22, at “transmission rate: 1.2 G” with “minimum reception-level requirement (ITU-T): −28.0 dBm, BER requirement: <10⁻¹²”, the S/N ratio is 14.2 and the burst-signal detection error is 7.0%. On the other hand, at “transmission rate: 10 G” with “minimum reception-level requirement (ITU-T): −28.0 dBm, BER requirement: <10⁻³”, the S/N ratio is 4.8 and the burst-signal detection error is 20.8%. Accordingly, when the transmission rate of the optical signal is increased, the detection error of the burst signal increases, and the detection precision deteriorates.
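The general link between S/N and error rate can be illustrated with the standard Gaussian-noise approximation BER = 0.5·erfc(Q/√2), where Q is the decision Q-factor. This is a textbook relation offered for intuition; it is not the exact model behind the figures in FIG. 22.

```python
import math

# Standard Gaussian-noise approximation relating the decision Q-factor to
# BER. This is a general textbook relation, not the document's exact model.

def ber_from_q(q):
    return 0.5 * math.erfc(q / math.sqrt(2.0))

# Q ~ 7 corresponds to BER ~ 1e-12, while Q ~ 3.1 already corresponds to
# BER ~ 1e-3: a lower S/N at 10 G permits a far higher error rate, which
# likewise makes amplitude-based burst detection less precise.
print(ber_from_q(7.0))   # ~1.3e-12
print(ber_from_q(3.1))   # ~1e-3
```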
As described above, when the transmission rate of the optical signal is increased, the detection precision of the burst signal deteriorates due to various factors. A case of solving the above problem by inserting an amplifier at a stage preceding the burst signal detector and amplifying the amplitude of the burst signal is explained below. FIG. 23 is an explanatory diagram for explaining burst-signal detection according to a conventional technology.
“Burst signal” depicted in FIG. 23 denotes a signal input into the burst-signal receiver. Moreover, “amplitude detection level and code identifying threshold” denote an amplitude detection level for detecting an input signal, and a code identifying threshold for identifying a code of a burst signal and removing a noise. Moreover, “amplitude detection level and code identifying threshold (after amplification of prebias area)” indicate an amplitude detection level and a code identifying threshold after the prebias area of the burst signal is amplified by the amplifier.
As depicted in FIG. 23, a burst signal includes a data area in which data is converted into a signal, and a prebias area that comes prior to the data area. When a signal is input, the burst signal receiver detects the upper-end level of the signal amplitude as the amplitude detection level, and distinguishes whether the input signal is a burst signal. Moreover, the burst signal receiver creates a code identifying threshold equivalent to half of the detected amplitude detection level.
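The amplitude detection and threshold creation described above can be sketched as follows. The sample values and the burst-detection level are assumptions chosen for illustration.

```python
# Sketch of the amplitude detection and threshold creation described above.
# Signal values and the burst-detection level are illustrative assumptions.

def detect_burst(samples, burst_detect_level):
    """Return (is_burst, code_identifying_threshold).

    The upper-end level of the amplitude is taken as the amplitude
    detection level; the code identifying threshold is half of it."""
    amplitude_detection_level = max(samples)
    is_burst = amplitude_detection_level >= burst_detect_level
    return is_burst, amplitude_detection_level / 2.0

# A prebias-plus-data burst whose peak reaches 0.8 V, checked against an
# assumed burst-detection level of 0.3 V:
print(detect_burst([0.1, 0.4, 0.8, 0.6], 0.3))  # (True, 0.4)
```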
Here, suppose the amplitude of the burst signal is amplified by an amplifier in order to solve the above-described problem. In that case, although the amplitude detection level for detecting the burst signal becomes higher, the noise is amplified at the same time, and the quality of the detected signal deteriorates.
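This trade-off follows directly from the fact that a linear amplifier scales signal and noise alike, so the signal-to-noise ratio is unchanged. A minimal numeric sketch, with all amplitude and gain values assumed for illustration:

```python
# Numeric sketch of the closing point: amplifying the burst raises the
# amplitude detection level, but the noise floor is amplified by the same
# factor, so the signal-to-noise ratio does not improve. All values are
# illustrative assumptions.

signal_amplitude = 8.0   # assumed burst amplitude before amplification
noise_amplitude = 2.0    # assumed noise amplitude before amplification
gain = 8.0               # assumed amplifier gain

snr_before = signal_amplitude / noise_amplitude
snr_after = (signal_amplitude * gain) / (noise_amplitude * gain)
print(snr_before, snr_after)  # 4.0 4.0
```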