
Fronthaul Design In Cloud Radio Access Networks: A Survey

  • B. Downlink Compression:

    Downlink compression techniques are mainly of two types: point-to-point compression and multiterminal compression. Point-to-point compression at the downlink side is shown in Figure 6, where the user messages destined for different RRHs are encoded separately. Joint precoding is then applied over the encoded signals. These jointly precoded signals are compressed individually and transmitted over separate fronthaul links, and the compressed signals are decompressed at the RRH side before transmission to the UEs. In multiterminal compression, as shown in Figure 7, the compression is done jointly after the encoding of the UE messages and the joint precoding. Joint compression is a network-aware compression technique. Multivariate joint compression is preferred over point-to-point compression, since point-to-point compression is not optimal from the network information-theoretic point of view.

    Instead of separate compression after joint precoding, the joint design of precoding and multiterminal compression is considered in [59]. As a network information-theoretic technique, joint compression was used in this case, which shapes the additive quantization noise observed at the UEs. An optimization problem was formulated to maximize the weighted sum-rate under fronthaul capacity and power constraints. The problem is solved through an iterative algorithm, and the numerical results show that a stationary point is reached. Even though the proposed system model yields better performance than the traditional separate precoding and compression design, it has its limitations: the model considered only a single cluster, so intercluster interference is not taken into account. A multicluster scenario with joint design of precoding and compression is presented in [60], where an optimization problem of the weighted sum-rate across all the clusters is formulated. The numerical and simulation results showed a compression gain over the traditional approach. In contrast to the pure compression strategies proposed in [59] and [60], a hybrid compression scheme is proposed in [61]. In this hybrid scheme, the messages of some UEs are sent directly through the RRHs within reach of those UEs, so the remaining fronthaul capacity can be shared for the transmission of jointly precoded and compressed signals to the other UEs. In this way, optimum utilization of the fronthaul can be achieved. The results showed that the hybrid compression scheme is a useful alternative technique.

    The works in [59]-[61] considered static channels, which is a limitation. A block-ergodic fading channel model is considered for downlink compression in [62]. A cluster of RRHs is considered, where the RRHs have multiple antennas to serve the UEs. Unlike the previous works, the joint design of precoding and compression at the BBU pool is not assumed; instead, the functional split between the precoding and compression units is studied. Two architectures are proposed as part of the study, namely Compression After Precoding (CAP) and Compression Before Precoding (CBP). In CAP, the BBU pool does all the baseband processing, such as joint precoding and joint compression. In the CBP architecture, the BBU pool forwards the compressed messages and the precoding matrices, and the RRH performs channel encoding and precoding on the received matrices. The problem is formulated as an ergodic capacity optimization. The results show that the CAP strategy is better at interference management, whereas CBP requires lower fronthaul capacity.

  • C. Point-to-Point Compression

    LTE was originally introduced to improve system performance. With the help of OFDM and MIMO, LTE has become a key player in cellular technology, and cell throughput has increased enormously from 3G to 4G. Later, LTE-A introduced CoMP, carrier aggregation, and enhanced MIMO. The introduction of these technologies into LTE-A raises the required bandwidth to five times that of LTE. To support these high data rates, a huge amount of fiber is required in LTE/LTE-A based C-RAN for the connection between the BBU and the RRHs. Hence, IQ sample compression techniques are required to reduce the amount of fiber needed.

    A low-latency baseband signal compression algorithm for CPRI transmission in the LTE system is proposed in [63]. The IQ samples are complex values with a certain bit width. Based on the characteristics of the LTE signal, the amount of transmitted data can be reduced by removing the redundant spectrum bandwidth and by bit-width compression. Removing the redundant spectrum bandwidth means reducing the number of signals. The bit-width compression is done by combining block scaling with a non-uniform (non-linear) quantizer, a combination that also minimizes the quantization error. Both the reduction of the number of signals and the bit-width compression are done before CPRI framing. Results are reported for low and high compression ratios: for low compression ratios, the EVM is observed to be less than 1%, while for high compression ratios the EVM deteriorates. Hence, the loss of information is negligible for low compression ratios. The performance is observed to be nearly ideal down to 11 bits. The proposed compression scheme is lossy and achieves a 1/2 compression ratio.
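    The block-scaling-plus-non-uniform-quantization step can be sketched as follows; the block length, bit width, and the use of a mu-law companding curve are illustrative assumptions, not the exact parameters of [63]:

```python
import math

def block_scale_quantize(samples, block_len=32, bits=6, mu=255.0):
    """Per-block scaling followed by mu-law (non-uniform) quantization.

    Each block is normalized by its peak value (the scaling factor that
    must accompany the block), companded, and uniformly coded."""
    levels = (1 << bits) - 1
    out = []
    for start in range(0, len(samples), block_len):
        block = samples[start:start + block_len]
        scale = max(abs(s) for s in block) or 1.0   # block scaling factor
        q = []
        for s in block:
            x = s / scale                            # normalize to [-1, 1]
            # mu-law companding expands resolution near zero
            y = math.copysign(math.log1p(mu * abs(x)) / math.log1p(mu), x)
            q.append(round((y + 1) / 2 * levels))    # uniform code on companded value
        out.append((scale, q))
    return out

def dequantize(blocks, bits=6, mu=255.0):
    """Invert the uniform code, the companding, and the block scaling."""
    levels = (1 << bits) - 1
    rec = []
    for scale, q in blocks:
        for code in q:
            y = code / levels * 2 - 1
            x = math.copysign(math.expm1(abs(y) * math.log1p(mu)) / mu, y)
            rec.append(x * scale)
    return rec
```

Because the scaling factor travels with each block, this kind of scheme carries exactly the per-block overhead that later works try to avoid.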

    Even though the compression ratio is excellent in lossy compression algorithms, the reconstructed signal after decompression is distorted compared to the original signal before compression, so the signal quality decreases. The signal distortion depends on the compression ratio and the compression algorithm used. The compression algorithm in [63] involves transmission overhead, since the scaling factor used for block scaling must be transmitted to the decompression node. Thus, there is a need for a low-complexity compression algorithm that does not involve any transfer of overhead. A simple IQ data compression scheme that can run on low-performance processors at either the BBU or the RRH is proposed in [64]. The scheme combines IQ bit-width reduction with a common lossless audio compression scheme, the FLAC algorithm. The proposed IQ compression scheme thus has both a lossy and a lossless stage, with a 1/2 compression ratio. In lossless compression algorithms, even though the compression ratio is lower, the original signal is reconstructed fully. The proposed compression scheme is observed to meet the performance requirements of the system. Hence, the number of optical fronthaul links can be reduced by half.
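    A minimal sketch of such a two-stage scheme, assuming a simple peak-normalized bit-width reduction and using zlib as a stand-in for the FLAC entropy coder of [64]:

```python
import struct
import zlib

def compress_iq(samples, bits=10):
    """Lossy stage: reduce each sample to a `bits`-wide signed code.
    Lossless stage: generic entropy coding (zlib here; [64] uses FLAC)."""
    levels = (1 << (bits - 1)) - 1
    peak = max(abs(s) for s in samples) or 1.0
    codes = [round(s / peak * levels) for s in samples]  # bit-width reduction
    packed = struct.pack(f"{len(codes)}h", *codes)       # 16-bit container per code
    return peak, zlib.compress(packed)

def decompress_iq(peak, blob, bits=10):
    """Invert the entropy coding, then rescale the codes."""
    levels = (1 << (bits - 1)) - 1
    raw = zlib.decompress(blob)
    codes = struct.unpack(f"{len(raw) // 2}h", raw)
    return [c / levels * peak for c in codes]
```

The bit-width stage bounds the reconstruction error, while the lossless stage shrinks the bitstream without adding any distortion.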

    In LTE/LTE-A based Cloud-RAN architectures, RRH nodes cooperate to perform advanced techniques like CoMP or coordinated scheduling. To exploit the redundancy in the information shared among the RRH nodes, a fronthaul compression technique for distributed LTE/LTE-A C-RAN architectures is proposed in [65]. To use the baseband signal redundancy in both the time and frequency domains, two network nodes are introduced at the two ends of the fronthaul link. In the frequency domain, the unused subcarriers are exploited.

    In the downlink precoding information, both time and frequency redundancy are exploited. The compression ratio depends on the resource-block occupancy in each cell. Hence, when multiple cell networks are aggregated, statistical multiplexing of the users' data reduces the aggregate data-rate requirement of the fronthaul links, so this compression scheme avoids utilizing all the links under light load. The proposed lossless compression scheme achieves a high compression ratio of 30:1 at low cell loads for LTE 2×2 MIMO cells. At 50% cell load, a compression ratio of 6:1 is achieved, and even at high cell loads a ratio of 3:1 is achieved. Despite the high compression ratios, the disadvantage of this scheme is that all the modulation and demodulation functionality is moved to the RRH; the BBU only supplies the bits needed as inputs for the QAM constellation, which places a burden on the RRH.

    A low-complexity, time-domain compression scheme for the LTE uplink and downlink is proposed in [66], which exploits redundancies in the temporal and spectral characteristics of the LTE signal. The scheme combines rescaling, non-uniform quantization, noise-shaping error feedback, and resampling. It achieves a compression ratio of 5 for the 15-bit representation of complex baseband values, and the method is validated using LTE link-level simulation.
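    The noise-shaping error-feedback element can be illustrated with a first-order loop in which each sample's quantization error is subtracted from the next input, pushing the error spectrum away from the signal band; the uniform quantizer below is a simplification of the full scheme in [66]:

```python
def noise_shaping_quantize(samples, bits=5):
    """First-order noise-shaping quantizer.

    The error made on each sample is fed back and subtracted from the
    next input, so low-frequency quantization noise partially cancels."""
    levels = (1 << bits) - 1
    err = 0.0
    codes = []
    for s in samples:
        x = min(1.0, max(-1.0, s - err))     # feed back the previous error
        code = round((x + 1) / 2 * levels)   # uniform quantization on [-1, 1]
        rec = code / levels * 2 - 1          # local reconstruction
        err = rec - x                        # error carried to the next sample
        codes.append(code)
    return codes
```

In the actual scheme this loop sits between the rescaling and resampling stages, with a non-uniform rather than uniform quantizer.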

    The compression algorithms in [63]-[67] quantize the sample values of the baseband signal directly, without exploiting the characteristics of those values. In view of this, and to reduce the transmission bandwidth, a low-latency CPRI compression algorithm for the LTE downlink baseband signal is presented in [68]. At high SNR, the modulated LTE downlink signal is distributed around the constellation points in the spectrum, so the codebook space and code length can be reduced if the constellation points of the modulated signal are clustered into a number of clustering centers. In this algorithm, clustering and coding are done in the frequency domain: the redundant bandwidth is removed from the I/Q data in the frequency domain, the points on the constellation are clustered and quantized, and finally the appropriate modulation scheme is selected adaptively. The proposed compression scheme is analyzed through both MATLAB simulation and FPGA-based performance analysis. In low-throughput scenarios, such as QPSK modulation, the data compression is observed to be 4%; in high-throughput scenarios, such as 64-QAM modulation, it is observed to be 15%. In all the testing scenarios, the EVM is observed to be less than 0.025%.
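    The clustering step can be illustrated with plain k-means over received constellation points, so that each point is represented by a short cluster index instead of full I/Q values; k-means itself is an assumption here, as [68] does not prescribe a specific clustering algorithm:

```python
import random

def cluster_constellation(points, k=4, iters=20, seed=0):
    """Cluster 2-D constellation points (I, Q) into k centers via k-means.

    After clustering, each point can be sent as a log2(k)-bit index into
    the list of centers rather than as two full-width IQ values."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)          # simple initialization
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            # assign each point to its nearest center (squared distance)
            i = min(range(k),
                    key=lambda j: (p[0] - centers[j][0]) ** 2
                                + (p[1] - centers[j][1]) ** 2)
            groups[i].append(p)
        # move each center to the centroid of its group
        centers = [
            (sum(p[0] for p in g) / len(g), sum(p[1] for p in g) / len(g))
            if g else c
            for g, c in zip(groups, centers)
        ]
    return centers
```

At high SNR the received points sit tightly around the ideal constellation, so a small number of centers captures them with very low EVM.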

    Since a considerable amount of redundancy can be removed by time-domain compression schemes, such a scheme is proposed in [69], using resampling, block scaling, and quantization for fronthaul compression. Compression factors of 3.9, 3.4, and 2.9, with EVMs of 4.6%, 2.3%, and 1.15%, are achieved for LTE downlink signals with 5, 6, and 7 quantized bits per sample, respectively.

    A low-complexity compression scheme for the LTE uplink and downlink is proposed in [70], which exploits the temporal and spatial characteristics of the baseband signal. The scheme combines signal-sampling reduction, Digital Automatic Gain Control (DAGC) compression, Lloyd-Max quantization, and cyclic prefix replacement. Here, the objective of the DAGC is to reduce the dynamic range of the baseband signal, normalize the power of each symbol, and select the average power reference based on the best demodulation range. DAGC is easy to implement in digital logic, but alone it cannot achieve high compression ratios. Lloyd-Max quantization is a non-linear quantization technique that uses the statistical characteristics of the baseband signal to minimize the average noise power. Resampling leaves the low-frequency components of the recovered OFDM signal unchanged, whereas distortion is introduced in the high-frequency components, such as the start and end of the OFDM symbol. In view of this, the cyclic prefix is replaced at the start and end of the OFDM symbol. The scheme achieves a compression ratio of 3.56, a latency of 3.4 μs, and an EVM of approximately 1.89%. The proposed compression architecture is also implemented on an FPGA board.
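    The Lloyd-Max iteration itself can be sketched as follows: decision thresholds are placed midway between reconstruction levels, and each level is then moved to the centroid (mean) of its region, which minimizes the mean-squared quantization error over the training samples:

```python
def lloyd_max(samples, bits=4, iters=50):
    """Train a Lloyd-Max (minimum-MSE) scalar quantizer on `samples`.

    Alternates two steps: (1) thresholds go midway between adjacent
    reconstruction levels; (2) each level moves to the mean of the
    samples falling in its region."""
    n = 1 << bits
    lo, hi = min(samples), max(samples)
    # start from a uniform quantizer over the sample range
    levels = [lo + (i + 0.5) * (hi - lo) / n for i in range(n)]
    for _ in range(iters):
        thresholds = [(levels[i] + levels[i + 1]) / 2 for i in range(n - 1)]
        buckets = [[] for _ in range(n)]
        for s in samples:
            i = sum(s > t for t in thresholds)   # region index of sample s
            buckets[i].append(s)
        # centroid step; empty regions keep their old level
        levels = [sum(b) / len(b) if b else l for b, l in zip(buckets, levels)]
    return levels
```

Because baseband sample amplitudes are far from uniformly distributed, the trained levels crowd where the signal density is high, which is exactly why this non-linear quantizer beats a uniform one at equal bit width.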

    A linear predictive coding (LPC) based compression scheme for the LTE downlink is proposed in [71], based on linear prediction and Huffman coding. Since a large amount of redundancy can be removed by time-domain compression schemes, this scheme also focuses on time-domain compression. Instead of using resampling to reduce the LTE oversampling overhead, the time-domain linear predictive coding takes the frame-based LTE structure into account; here, LPC is adapted to OFDM. The Huffman encoder at the BBU compresses the I/Q samples, and the decoder at the RRH decompresses them. The filter implementing this compression scheme, called the LPC filter, has fewer taps than the filters used in resampling techniques, and the computation cost of encoding and decoding is also lower. The technique also satisfies strict restrictions on power consumption and latency. The proposed scheme achieves an EVM of 0.9% and 2.1% at compression ratios of 3.3:1 and 4:1, respectively.
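    The prediction-plus-entropy-coding idea can be illustrated with a one-tap predictor and a Huffman code-length count; the single tap is a deliberate simplification of the multi-tap, OFDM-adapted LPC filter in [71]:

```python
import heapq
from collections import Counter

def lpc_residuals(codes, a=1):
    """One-tap linear prediction: each sample is predicted from the
    previous one and only the small, peaky-distributed residual remains
    to be entropy-coded."""
    prev = 0
    res = []
    for c in codes:
        res.append(c - a * prev)
        prev = c
    return res

def huffman_lengths(symbols):
    """Build a Huffman tree and return the code length for each symbol."""
    freq = Counter(symbols)
    if len(freq) == 1:
        return {next(iter(freq)): 1}
    # heap entries: (count, unique id to break ties, member symbols)
    heap = [(n, i, [s]) for i, (s, n) in enumerate(freq.items())]
    heapq.heapify(heap)
    depth = {s: 0 for s in freq}
    uid = len(heap)
    while len(heap) > 1:
        n1, _, s1 = heapq.heappop(heap)
        n2, _, s2 = heapq.heappop(heap)
        for s in s1 + s2:
            depth[s] += 1                    # merged symbols gain one bit
        heapq.heappush(heap, (n1 + n2, uid, s1 + s2))
        uid += 1
    return depth

def coded_bits(symbols):
    """Total bits needed to Huffman-code the symbol stream."""
    lengths = huffman_lengths(symbols)
    freq = Counter(symbols)
    return sum(freq[s] * lengths[s] for s in freq)
```

On a slowly varying sample stream the residual alphabet is far smaller than the raw sample alphabet, so the Huffman stage needs many fewer bits, which is the source of the compression gain.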

    A combination of resampling and vector quantization is adopted in [72] for fronthaul compression. Compression factors of 5.5, 4.5, and 3.8, with EVMs of 4.2%, 2.1%, and 1%, are achieved for LTE downlink signals with 5, 6, and 7 quantized bits per sample, respectively.

    Most of the proposed compression schemes are not applicable to multipoint-to-multipoint operation over packetized networks, since point-to-point compression schemes mostly target a dedicated link. In multipoint-to-multipoint operation, the fronthaul traffic has to coexist with other variable-rate traffic, which may cause network congestion, and in a distributed network even the addition of one RRH changes the previously attained compression ratio. Hence, a rate-adaptive fronthaul network should be realized with a tunable fronthaul compression scheme. The linear predictive coding based compression scheme proposed in [71] is extended in [73] to realize such a network. As part of the extension, LPC is adapted to OFDM with adjustable scaling. The compression technique uses both the number of quantization bits and a loading factor, a continuous variable, to fine-tune the compression ratio. Adjustable gains are introduced to perform the LPC-based compression and are combined with a fixed quantizer and a fixed Huffman dictionary. Depending on the fronthaul capacity, the adjustable gain can be used to finely regulate the trade-off between EVM and achievable compression ratio. Compression through adjustable gains is thus an alternative to schemes that vary the discrete number of quantizer bits.
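    The adjustable-gain idea can be sketched as a continuous loading factor applied before a fixed quantizer; the uniform quantizer below is an illustrative stand-in for the fixed quantizer-plus-Huffman stage of [73]:

```python
def tunable_compress(samples, gain, bits=6):
    """Scale by a continuous loading factor, then apply a FIXED quantizer.

    The gain, not the quantizer, is the tuning knob: raising it spends
    more of the fixed code range on the signal (finer effective steps,
    more clipping risk); lowering it does the opposite."""
    levels = (1 << bits) - 1
    out = []
    for s in samples:
        x = max(-1.0, min(1.0, s * gain))    # loading factor, may clip
        out.append(round((x + 1) / 2 * levels))
    return out

def tunable_decompress(codes, gain, bits=6):
    """Invert the fixed quantizer, then undo the loading factor."""
    levels = (1 << bits) - 1
    return [(c / levels * 2 - 1) / gain for c in codes]
```

Because only a scalar gain changes between operating points, the quantizer and Huffman dictionary stay fixed, which is what makes continuous rate adaptation cheap in this scheme.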