Partial joint processing with efficient backhauling using particle swarm optimization
Tilak Rajesh Lakshmana^{1},
Carmen Botella^{2} and
Tommy Svensson^{1}
DOI: 10.1186/1687-1499-2012-182
© Lakshmana et al.; licensee Springer. 2012
Received: 30 June 2011
Accepted: 29 May 2012
Published: 29 May 2012
Abstract
In cellular communication systems with a frequency reuse factor of one, user terminals (UTs) at the cell-edge are prone to inter-cell interference. Joint processing is one of the coordinated multipoint transmission techniques proposed to mitigate this interference. In the case of centralized joint processing, the channel state information fed back by the users needs to be available at the central coordination node for precoding. The precoding weights (along with the user data) need to be available at the corresponding base stations to serve the UTs. Both increase the backhaul traffic. In this article, partial joint processing (PJP) is considered as a general framework that allows reducing the amount of required feedback. However, it is difficult to achieve a corresponding reduction on the backhaul related to the precoding weights when a linear zero forcing beamforming technique is used. In this work, particle swarm optimization is proposed as a tool to design the precoding weights under the feedback and backhaul constraints related to PJP. The precoder obtained with the objective of weighted interference minimization allows some multi-user interference in the system, and it is shown to improve the sum rate by 66% compared to a conventional zero forcing approach for those users experiencing a low signal to interference plus noise ratio.
Keywords
coordinated multipoint; joint processing; particle swarm optimization; precoding; stochastic optimization

1 Introduction
Future cellular communication systems tend to be spectrally efficient with a frequency reuse factor of one. The aggressive reuse of frequency resources causes interference between cells, especially at the cell-edge. Therefore, the user experience is affected and the performance of such systems is interference limited. To overcome this problem, coordinated multipoint (CoMP) transmission/reception has been proposed [1]. Joint processing (JP) is one of the techniques that falls within the framework of CoMP transmission. In the downlink, JP involves the coordination of base stations (BSs) such that interfering signals are treated as useful signals when transmitting to a user terminal (UT). Note that this technique was previously referred to as network coordination [2].
One of the approaches to alleviate the complexity requirements of JP is to arrange the BSs in clusters [3]. The BSs involved in JP within a cluster control the intra-cluster interference, while the BSs belonging to neighboring clusters give rise to inter-cluster interference. In a static clustering approach the cooperating set of BSs does not change with time, but this can create unfairness for UTs on the cluster edge. Hence, dynamic clustering helps in maintaining fairness among UTs. An example of dynamic clustering could be a family of clusters operating in a round robin fashion, where each cell takes its turn to be at the cluster boundary. Clustering techniques can also be divided into user-centric or network-centric, depending on whether the clustering decision takes into account the UT-determined channel conditions. Since centralized joint processing (CJP) implies full cooperation, it requires extensive feedback and backhaul resources in the cooperative cluster. In order to bring JP closer to realistic scenarios, one can further reduce the complexity for a given cluster through suboptimal approaches.
Several such approaches have been considered in the literature to reduce the requirements of CJP, such as limited feedback [7, 8] and limited backhauling [5, 6, 9, 10]. Partial joint processing (PJP) is a general framework aiming to reduce the complexity requirements of CJP, basically the feedback and backhaul load. In the particular PJP approach considered in this article, a central coordination node (CCN) or the serving BS instructs the UTs to report the channel state information (CSI) of the links in the cluster of BSs whose channel gain falls within an active set threshold or window, relative to their best link (usually the serving BS) [7]. This is summarized in Algorithm 1. Note that a similar approach is used in [8]. PJP can be regarded as user-centric clustering when it is implemented over a static cluster, since overlapping subclusters or active sets of BSs are dynamically formed. Note that CJP is a particular case of PJP when the threshold tends to infinity.
Algorithm 1: Active set thresholding for limited feedback based on[7]
1: Choose: threshold = 10 dB
2: for each UT do
3: Measure the channel gain from all BSs
4: bestLink = max{channel strength from all BSs}
5: if (bestLink − otherLink) ≤ threshold then
6: UT feeds back the CSI of otherLink
7: CCN marks this link as active
8: else
9: Feedback load reduction:
10: UT does not feed back the otherLink
11: CCN marks this link as inactive
12: end if
13: UT feeds back the bestLink
14: CCN marks this link as active
15: end for
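As a rough illustration of Algorithm 1, the active set thresholding can be sketched in Python with numpy; the function name, the gain values, and the matrix layout below are our own illustrative assumptions, not from the article:

```python
import numpy as np

def active_set(gains_db, threshold_db=10.0):
    """Return the binary activity matrix (M UTs x K BSs): 1 = active link.

    A link is marked active if its gain lies within `threshold_db` of the
    UT's best link; the best link itself is always fed back (step 13).
    """
    gains_db = np.asarray(gains_db, dtype=float)
    best = gains_db.max(axis=1, keepdims=True)      # bestLink per UT
    return ((best - gains_db) <= threshold_db).astype(int)

# Three UTs, three BSs; each UT's serving BS is its strongest link.
gains = np.array([[-60.0, -75.0, -90.0],
                  [-95.0, -58.0, -66.0],
                  [-80.0, -72.0, -55.0]])
print(active_set(gains))    # UT2 also keeps BS3 (8 dB inside the window)
```

The resulting binary matrix is exactly the active/inactive bookkeeping that the CCN uses to form the sparse aggregated channel matrix.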
In PJP, the links whose CSI is reported by the UTs to the CCN are marked as active, and those not reported are marked as inactive. Based on this, the CCN forms an aggregated channel matrix for interference control, where the coefficients of the inactive links are set to zero. In this article, the CCN identifies the BSs that fall outside the threshold window for a given UT based on the links for which the UT has not reported CSI. It is assumed that the obtained CSI is error free; the protocol aspects of this communication need to be addressed in more detail in a real system implementation. As a result, the aggregated channel matrix is now sparse. Linear techniques such as zero forcing (ZF) can invert the aggregated channel matrix to remove interference, but these techniques fail to invert a sparse aggregated channel matrix and at the same time reduce the backhaul load such that only the BSs in the active set of a UT receive the precoding weights [9].
The question thus arises: in the PJP framework, can the gains achieved with CSI feedback load reduction translate to an equivalent backhaul load reduction, in the sense that the number of CSI coefficients constituting the feedback load (assuming a single tap channel for simplicity) is the same as the number of precoding weights in the backhaul? Figure 1 illustrates this notion. Particle swarm optimization (PSO) is proposed in this article as a tool to obtain a solution that fits this requirement, since it can find the precoding weights without actually inverting the sparse aggregated channel matrix.
1.1 State of the art techniques
Precoding design for clustered scenarios under JP is a recent problem. In [11], a large network is divided into a number of disjoint clusters of BSs. Linear precoding is carried out within these clusters to suppress intra-cluster interference as well as inter-cluster interference. For the case of overlapping clusters, a soft interference nulling (SIN) precoding technique is proposed in [12]. For SIN, the complete CSI is available at all BSs and the user data is made available only to the BSs in the coordination cluster. Hence, the BSs can jointly encode the message for transmission. Moreover, in [12], multiple spatial streams are allowed up to the total number of transmit antennas in the coordination clusters. As the exhaustive search for the best clustering combination has a very high complexity, two simple clustering algorithms are proposed in [12]: (a) nearest bases clustering and (b) nearest interferers clustering. The SIN iterative precoder optimization algorithm does not remove the interference completely, but performs better than or equal to any linear interference-free precoding scheme [[12], Proposition 1]. SIN precoding relaxes the restriction of having zero interference; thanks to this relaxation, SIN precoding works even when the number of transmit antennas is less than the total number of receive antennas within a coordination cluster. It should be noted that SIN achieves backhaul reduction in the sense that the precoded weights and user data are made available only at the BSs where needed, but it does not provide feedback load reduction.
For JP, as long as the aggregated channel matrix at the CCN is well conditioned for inversion, linear ZF beamforming (BF) techniques can be used for interference control. It has been shown in [9] that when using techniques that achieve CSI feedback reduction, such as active set thresholding in PJP, this reduction does not translate to an equivalent backhaul load reduction with linear ZF BF. When calculating the ZF BF based on the sparse aggregated channel matrix, a link that has been defined as inactive may be mapped to a nonzero BF weight. This causes unnecessary backhauling, since the UT has reported that link as inactive and that BS is already outside the active set of that UT. Instead, the BS could use this resource to serve another UT. An intuitive approach could be that the CCN resorts to nulling the BF weights where the links are expected to be inactive; this is a suboptimal solution. In [13], a partial ZF precoding design is proposed based on [14] to remove the interference in a PJP scenario. This solution performs better than the linear ZF BF with the weight nulling assumption, and works even for a sparse aggregated channel matrix at the CCN, but it does not achieve an equivalent backhaul load reduction. On the other hand, there is no linear technique in the literature that can invert the sparse aggregated channel matrix and preserve the zeros in the transposed version of the inverse when the aggregated channel matrix is not diagonal or block-diagonal.
To the best of our knowledge, the problem of backhaul load reduction equivalent to feedback load reduction has only been addressed in [9], where two solutions are proposed: one based on scheduling (a medium access control (MAC) layer approach) and a second based on ZF precoding (a PHY layer approach). The limitations of these approaches are discussed in Section 2.2.
1.2 Contributions
The active set thresholding technique in PJP (limited feedback of CSI) is used to achieve the feedback load reduction, and these gains need to be preserved with an equivalent backhaul load reduction (limited backhauling of precoding weights). To achieve this, a stochastic optimization algorithm such as PSO is proposed for the precoding design. PSO has been shown to obtain the optimal linear precoding vector aimed at maximizing the system capacity in a multi-user multiple-input multiple-output (MU-MIMO) system [15]. The main distinguishing factor of our article compared to [15] is that the PSO is used for designing the precoder in a multicell setting with PJP. PSO has also been proposed as a tool for a scheduling strategy in a MU-MIMO system [16]. Recently, a multi-objective PSO has been proposed for accurate initialization of the channel estimates in a MIMO-OFDM iterative receiver [17]. Drawing inspiration from [15–17], and combining a state of the art PSO implementation with expandable parallel computing power at the CCN, a PSO based precoder should be feasible for the scenario under consideration. Two objective functions are considered:
(1) Weighted interference minimization: minimize the interference for the UTs and improve the UT experiencing the minimum signal to interference plus noise ratio (SINR).
(2) Sum rate maximization.
In addition, to fairly compare the linear ZF-based precoder and the proposed PSO-based precoder, the use of perturbation theory and Gershgorin's discs is introduced. These discs can be used to obtain a quick graphical snapshot of the intra-cluster interference remaining in the system. The sum rate bounds under a constrained backhaul and imperfect channel knowledge are important [18], and they are part of our future work.
The article is organized as follows. The system model and the limitations of the state of the art linear solutions are discussed in Section 2. PSO as a tool for precoder design is presented in Section 3. In this section, the objective function, the termination criteria, the convergence of PSO and the complexity in terms of the big $\mathcal{O}$ notation are analyzed. An interesting connection is made between the signal to interference ratio (SIR) and Gershgorin's discs in Section 4. The simulation results are presented in Section 5 and the conclusions are drawn in Section 6.
Notation: Boldface uppercase letters, boldface lowercase letters and italics such as X, x and x denote matrices, vectors and scalars, respectively. ℂ^{m×n} denotes the set of complex valued matrices of size m×n. (·)^{H} is the conjugate transpose of a matrix. ‖·‖_{F} is the Frobenius norm; diag(A) and OffDiag(A) are the diagonal and off-diagonal elements of the matrix A. Block diagonalizing the matrices A and B is denoted as blockdiag(A, B). The element in the i-th row and the j-th column of a matrix A is represented as A(i, j). All the elements of the i-th row of a matrix A are denoted A(i, :), and those of the j-th column A(:, j). vec(A) is the vector of stacked columns of matrix A. ℜ{A(i, j)} and ℑ{A(i, j)} are the real and imaginary parts of A(i, j), respectively. H and $\tilde{\mathbf{H}}$ denote the aggregated channel matrix at the CCN due to full CSI feedback and the sparse aggregated channel matrix at the CCN due to limited CSI feedback, respectively. W and $\tilde{\mathbf{W}}$ denote the BF matrix and sparse BF matrix, respectively. The BF matrix with power allocation forms the precoding matrix, $\overline{\mathbf{W}}$.
2 System model
The aggregated channel matrix available at the CCN is $\mathbf{H}\in {\u2102}^{M\times K{N}_{\text{T}}}$, and it is of the form $\mathbf{H}={\left[{\mathbf{h}}_{1}^{T}\phantom{\rule{0.3em}{0ex}}{\mathbf{h}}_{2}^{T}\phantom{\rule{0.3em}{0ex}}...\phantom{\rule{0.3em}{0ex}}{\mathbf{h}}_{M}^{T}\right]}^{T}$, where ${\mathbf{h}}_{m}\in {\u2102}^{1\times K{N}_{\text{T}}}$ is the channel from all the BSs to the m-th UT in the cluster. The precoding matrix $\overline{\mathbf{W}}$ is obtained from the aggregated BF matrix $\mathbf{W}\in {\u2102}^{K{N}_{\text{T}}\times M}$ after power allocation. The BF matrix is of the form W = [w_{1} w_{2} ... w_{M}], where ${\mathbf{w}}_{m}\in {\u2102}^{K{N}_{\text{T}}\times 1}$ is the BF vector for the m-th UT. The transmitted symbols to the M UTs are x ∈ ℂ^{M×1}. The receiver noise n at the UTs is spatially and temporally white with variance σ^{2}, and it is uncorrelated with the transmitted symbols.
2.1 Linear beamforming
As stated in Section 1, the link for which the CSI is reported to the CCN is marked as an active link and the unreported CSI is marked as an inactive link. These active and inactive links can be represented with a binary matrix of size M × K. The (m, k)th element in this matrix corresponds to the link between the m th UT and the k th BS. An active link is represented with '1' and an inactive link is represented with '0'.
In Equation (2), the linear ZF BF completely removes the interference by inverting the aggregated channel matrix H. With small active set thresholds, there are few active links, forming a sparse aggregated channel matrix $\tilde{\mathbf{H}}$ at the CCN. Even if the sparse aggregated channel matrix $\tilde{\mathbf{H}}$ is invertible, the BF matrix $\tilde{\mathbf{W}}$ thus formed may not have zeros at the places where they are needed. If each BS has N_{T} antennas, then the pseudoinverse could generate BF weights for some of the N_{T} antennas and not for the BS as a whole. Moreover, a UT might receive its data from a BS outside its active set. These effects of ZF are highly undesirable, as they result in extra and unnecessary backhaul load on the cluster, as well as unnecessary transmissions on these links. The ZF solution over a sparse aggregated channel matrix without any scheduling constraint cannot achieve an equivalent reduction in backhaul load.
In this article, the following ZF scenarios are considered, where the ZF is performed using the pseudoinverse as in Equation (2) on the aggregated channel matrix at the CCN. The main focus is on ZF with limited feedback (LFB) and limited backhauling (LBH), where the gains of feedback load reduction need to be preserved as backhaul load reduction. This is denoted ZF:LFB + LBH. The LFB is achieved with the active set thresholding technique. The LBH with ZF is achieved with the intuitive approach of nulling the BF coefficients based on the inactive links in the binary matrix. When the UT is allowed to feed back all the CSI (full feedback, FFB) and full backhauling (FBH) is allowed, the scenario is denoted ZF:FFB + FBH. This scenario is considered to show the upper bound of the ZF technique, as in the case of CJP. The scenario of ZF with FFB and LBH is considered to have a similar configuration as the SIN precoding technique [12]; it is denoted ZF:FFB + LBH. Finally, the scenario of ZF with LFB and FBH, similar to that considered in [9], allows the ZF to place precoded weights at BSs where they are not desired and allows FBH. This is denoted ZF:LFB + FBH. It should be noted that this approach does achieve some backhaul reduction, but not necessarily equivalent to the feedback load reduction.
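To make the ZF:LFB + LBH baseline concrete, the following sketch (our own simplification, assuming single-antenna BSs so that the M × K binary activity matrix applies entry-wise) computes the ZF right inverse and then applies the intuitive weight nulling; note how the nulling reintroduces multi-user interference:

```python
import numpy as np

def zf_with_nulling(H_sparse, active):
    """ZF BF on the (sparse) aggregated channel matrix (M x K), followed by
    nulling of the weights on inactive links (the suboptimal LBH approach)."""
    # Right inverse: W = H^H (H H^H)^{-1}, valid when H H^H is full rank.
    W = H_sparse.conj().T @ np.linalg.inv(H_sparse @ H_sparse.conj().T)
    return W * active.T     # zero the BF weights of inactive links

rng = np.random.default_rng(0)
active = np.array([[1, 1, 0],       # UT1 hears BS1, BS2
                   [0, 1, 1],       # UT2 hears BS2, BS3
                   [0, 0, 1]])      # UT3 hears BS3 only
H = (rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))) * active
W = zf_with_nulling(H, active)
# Without nulling, H @ W would be the identity; with nulling it is not:
print(np.round(np.abs(H @ W), 3))
```

With the full activity matrix (all ones), the nulling step is a no-op and the product H @ W recovers the identity, i.e., the ZF:FFB + FBH upper bound.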
2.2 Limitations of the state of the art
The following subsections capture the limitations with the state of the art solutions.
2.2.1 The invertibility of the aggregated channel matrix
To maintain orthogonality between the UTs, as highlighted earlier, the condition K · N_{T} ≥ M needs to be satisfied. Due to this, the number of columns of the matrix H is always greater than or equal to the number of rows, and the only way to invert the aggregated channel matrix is by using the right inverse as shown in Equation (2). The invertibility of the linear ZF BF is limited by the ability to compute (HH^{H})^{-1}; in other words, the rank of HH^{H} should be equal to the number of UTs, i.e., the UT channels must be linearly independent.
In the PJP framework, the active set threshold can be increased such that the UTs feed back the CSI of any additional BSs that fall within this window, thereby increasing the chances of inverting the aggregated channel matrix, as proposed in [13]. In the worst case, the UTs would need to feed back the complete CSI from all the BSs, as in the case of CJP. The CCN can then invert the aggregated channel matrix to obtain the BF weights, but at the expense of an increased feedback load.
2.2.2 Required nulls in beamformer
As stated before, to the best of our knowledge, to overcome the invertibility issue of the aggregated channel matrix and the required nulls in the BF, a MAC layer and a PHY layer approach are proposed in [9]. These approaches are analyzed in the remainder of this section.
In the scheduling MAC layer approach, BS subgroups are formed such that the transmissions to the UTs in each time slot are disjoint, with each BS transmitting in only one subgroup. These disjoint sets give rise to a sparse aggregated channel matrix at the CCN, which presents a block-diagonal form. Note that the scheduling approach can be mapped to a disjoint clustering solution. This approach solves the problem of equivalent backhaul load reduction, as the inverse of a block-diagonal matrix is block-diagonal itself, thereby retaining the zeros or nulls in the BF weights where needed. In a given time slot, if collocated UTs prefer service from the same set of BSs, then the MAC layer approach can only serve the UTs in a time division multiplexing fashion, as disjoint BS sets need to be selected for transmission. To guarantee fairness, such UTs will have to wait a long time to be served.
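The key property exploited by the MAC layer approach, namely that the inverse of a block-diagonal matrix is itself block-diagonal (so the required nulls survive the inversion), is easy to verify numerically; the block sizes below are arbitrary illustrations:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((2, 2))     # subgroup 1: 2 UTs / 2 BS antennas
B = rng.standard_normal((3, 3))     # subgroup 2: 3 UTs / 3 BS antennas
Z = np.zeros((2, 3))
H = np.block([[A, Z], [Z.T, B]])    # block-diagonal aggregated channel

W = np.linalg.inv(H)
# The off-diagonal blocks of the inverse stay (numerically) zero, i.e.,
# no BF weight appears on a link outside a UT's own subgroup.
print(np.allclose(W[:2, 2:], 0) and np.allclose(W[2:, :2], 0))
```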
where ${\tilde{\mathbf{H}}}_{\text{el}}$ is obtained by processing the sparse aggregated channel matrix $\tilde{\mathbf{H}}$ in the CCN, eliminating the columns corresponding to the zeros of $\text{vec}\left(\tilde{\mathbf{W}}\right)$. These zeros correspond to the nulls expected in the BF. I_{K} is the identity matrix of size K × K, where K = 3, and w_{el} contains the vectorized nonzero BF weights that need to be remapped to form the final BF matrix $\tilde{\mathbf{W}}$.
An example of a sparse aggregated channel matrix giving rise to an overdetermined system when PHY layer precoding is applied:

$\tilde{\mathbf{H}}$ | BS_{1} | BS_{2} | BS_{3}
UT_{1} | h_{11} | h_{12} | 0
UT_{2} | 0 | h_{22} | h_{23}
UT_{3} | 0 | 0 | h_{33}
The PHY layer solution in [9] overlooks the fact that it is infeasible without scheduling the UTs, as the inverse in ${\left({\tilde{\mathbf{H}}}_{\text{el}}{\tilde{\mathbf{H}}}_{\text{el}}^{H}\right)}^{-1}$ may not exist. In short, the PHY layer solution needs some scheduling constraints to obtain the BF weights.
Due to the limitations of this closed-form solution in Equation (6), a proper comparison of the proposed PSO with this PHY layer solution is generally not possible. Hence, the PHY layer solution of [9] is not considered in the simulations here. However, the interested reader can refer to [21], where the comparison is performed for the cases where [9] is feasible. In the subsequent section, PSO is presented as a tool for precoder design achieving a backhaul load reduction equivalent to the feedback load reduction in the PJP framework.
3 PSO for precoding in the PJP framework
PSO was inspired by the movement of a swarm, such as a shoal of fish or a flock of birds, seeking food or escaping from enemies by splitting up into groups. There is no apparent leader of the swarm other than the social interactions between the bird-like objects (or boids). The coherent movement of these boids is modeled based on their social interactions with their neighbors. The algorithm simulating these social aspects was simplified in [22], where it was found to perform optimization. In this article, a basic PSO algorithm [23] with inertia weight and velocity restriction is implemented; it is capable of finding a stable solution based on a given objective function.
Classical optimization methods are especially preferred when the optimization problem is known to be convex, but this is not the case here. Numerical methods such as Newton's method are not feasible, as the objective function is non-differentiable. Other classical techniques could fail, but PSO always finds an equilibrium/stable solution. PSO was chosen over other evolutionary algorithms as it requires very few parameters to configure, it is easier to understand, it involves less computational bookkeeping, and it fits well the goal of reducing the backhaul load. In [23], PSO is viewed as a paradigm within the field of swarm intelligence, and the performance measures of basic PSO are highlighted. This reference also provides detailed differences between PSO and other evolutionary algorithms.
In this article, each bird in the swarm carries the real and imaginary parts of the nonzero elements of the BF matrix, i.e., the i-th member of the swarm is the i-th particle, carrying all the (n = 2 · K · N_{T} · M) BF coefficients. The factor 2 is due to PSO treating the real and the imaginary parts of the complex BF coefficients as separate dimensions of the search space. Hence, the particle having the best n values needs to be found for a given objective function. For example, an infinite threshold would yield n = 2 · K · N_{T} · M nonzero CSI coefficients in the aggregated channel matrix of size [M × K · N_{T}]. With an active set threshold of 0 dB, only the best link (or reference link) would be fed back by each UT, yielding n = 2 · 1 · N_{T} · M. The real and the imaginary parts of the nonzero BF matrix, $\tilde{\mathbf{W}}$, are mapped to a particle. This mapping, during initialization, is only for illustrating how the BF is translated to a particle; these steps can be omitted in the actual implementation. The position, X(i, j), and the velocity, V(i, j), of the i-th particle with the j-th BF coefficient are stochastically initialized as X(i, j) = x_{min} + r · (x_{max} − x_{min}) and $\mathbf{V}\left(i,j\right)=\frac{1}{\mathrm{\Delta}t}\left(-\frac{\left({x}_{\text{max}}-{x}_{\text{min}}\right)}{2}+s\phantom{\rule{0.3em}{0ex}}\cdot \left({x}_{\text{max}}-{x}_{\text{min}}\right)\right)$, respectively. Here r and s are random numbers picked from a uniform distribution in the interval [0, 1], and x_{max} is the maximum value that a BF coefficient is initialized with. This does not mean that the position of the particle cannot exceed this value, i.e., the particles in the PSO can actually go beyond these limits. The same holds for the velocity of the particle, but it is restricted by a maximum velocity, v_{max}, so that the particle does not diverge. Δt is the time step length. The total number of particles is Q.
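The stochastic initialization just described can be sketched as follows; the minus sign in the velocity term (which centers the initial velocities around zero) follows standard inertia-weight PSO formulations, and all variable names are our own:

```python
import numpy as np

def init_swarm(Q, n, x_max, dt=1.0, rng=None):
    """Initialize Q particles, each carrying n real-valued BF coefficients."""
    rng = np.random.default_rng() if rng is None else rng
    x_min = -x_max
    r = rng.uniform(size=(Q, n))
    s = rng.uniform(size=(Q, n))
    X = x_min + r * (x_max - x_min)                     # positions
    V = (1.0 / dt) * (-(x_max - x_min) / 2.0
                      + s * (x_max - x_min))            # velocities
    return X, V

X, V = init_swarm(Q=30, n=2 * 3 * 1 * 3, x_max=1.0)     # e.g. K=3, NT=1, M=3
```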
Recall that each particle is indexed using the variable i, where each particle is carrying n BF coefficients. These coefficients are indexed using the variable j.
The variables p and q are random numbers drawn from a uniform distribution in the interval [0, 1]. The terms involving c_{1} and c_{2} are called the cognitive component and the social component, respectively. The cognitive component determines how much a given particle should rely on itself, or believe in its previous memory, while the social component determines how much a given particle should rely on its neighbors. The cognitive and social constant factors, c_{1} and c_{2}, are set equal to 2, as highlighted in [22]. An inertia weight, w, is used to bias the current velocity based on its previous value, such that when the inertia weight is initially greater than 1, the particles are biased to explore the search space. When the inertia weight decays to a value less than 1, the cognitive and social components are given more attention [24]. The decay of the inertia weight is governed by a constant decay factor β, such that w ← w · β.
The pseudocode of PSO described above is summarized in Algorithm 2.
Algorithm 2: Pseudocode for obtaining the BF via PSO. Steps 3 to 5 are only shown for illustration and can be omitted in an actual implementation
1: Initialization:
2: Determine the number of nonzero coefficients n needed in the BF matrix, $\tilde{\mathbf{W}}$
3: Map the BF to the particle:
4: $\mathbf{X}\left(i,j\right)\leftarrow \Re \left\{\tilde{\mathbf{W}}\left(l,m\right)\right\},l\in \left\{1,...,K{N}_{\text{T}}\right\},m\in \left\{1,...,M\right\}$
5: $\mathbf{X}\left(i,j+1\right)\leftarrow \Im \left\{\tilde{\mathbf{W}}\left(l,m\right)\right\}$
6: Stochastically initialize particles with BF coefficients:
7: ${x}_{\text{max}}=1/\text{max}\left|\tilde{\mathbf{H}}\left(i,j\right)\right|$
8: x_{min} = −x_{max}
9: Position: X(i, j) = x_{min} + r · (x_{max} − x_{min})
10: Velocity: $\mathbf{V}\left(i,j\right)=\frac{1}{\mathrm{\Delta}t}\left(-\frac{\left({x}_{\text{max}}-{x}_{\text{min}}\right)}{2}+s\cdot \left({x}_{\text{max}}-{x}_{\text{min}}\right)\right)$
11: while Termination Criterion do
12: for the i th particle in the swarm do
13: Demap the variables in a particle to form the BF matrix
14: $\tilde{\mathbf{W}}\left(l,m\right)\leftarrow \mathbf{X}\left(i,j\right)+\text{i}\cdot \mathbf{X}\left(i,j+1\right)$
15: Evaluate the objective function f(X(i, :))
16: Store:
17: if f(X(i,:)) < f^{ pb } (X(i,:)) then
18: Particles' Best: X^{ pb } (i,:)←X(i,:)
19: end if
20: if f(X(i,:)) < f^{ sb } (X(i,:)) then
21: Swarm's Best: x^{ sb } ← X(i,: )
22: ${\tilde{\mathbf{W}}}^{\mathbf{s}\mathbf{b}}\left(l,m\right)\leftarrow {\text{x}}^{sb}\left(j\right)+\text{i}\cdot {\text{x}}^{sb}\left(j+1\right)$
23: end if
24: end for
25: for Each particle in the swarm with BF coefficients do
26: Update:
27: Velocity: $\mathbf{V}\left(i,j\right)\leftarrow w\cdot \mathbf{V}\left(i,j\right)+{c}_{1}\cdot p\cdot \left(\frac{{\mathbf{X}}^{pb}\left(i,j\right)\mathbf{X}\left(i,j\right)}{\mathrm{\Delta}t}\right)+{c}_{2}\cdot q\cdot \frac{{\mathbf{X}}^{sb}\left(j\right)\mathbf{X}\left(i,j\right)}{\mathrm{\Delta}t}$
28: Restrict velocity: |V(i, j)| < v_{max}
29: Position: X(i, j) ← X(i, j) + V(i, j) · Δt
30: end for
31: w ← w · β
32: end while
33: return BF Weight Matrix, ${\tilde{\mathbf{W}}}^{\mathbf{s}\mathbf{b}}$
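Algorithm 2 can be condensed into a compact basic PSO in Python. This is a sketch of the core loop (inertia weight, velocity restriction, cognitive and social components) on a generic real-valued objective, omitting the complex BF (de)mapping and power adjustment; all parameter defaults are illustrative choices, not values from the article:

```python
import numpy as np

def pso_minimize(f, n, x_max, Q=30, iters=200, c1=2.0, c2=2.0,
                 w=1.4, beta=0.99, v_max=2.0, dt=1.0, seed=0):
    """Basic PSO with inertia weight and velocity restriction (minimization)."""
    rng = np.random.default_rng(seed)
    x_min = -x_max
    X = x_min + rng.uniform(size=(Q, n)) * (x_max - x_min)
    V = (1.0 / dt) * (-(x_max - x_min) / 2.0
                      + rng.uniform(size=(Q, n)) * (x_max - x_min))
    X_pb = X.copy()                                # particles' best positions
    f_pb = np.array([f(x) for x in X])             # particles' best values
    i_sb = int(np.argmin(f_pb))
    x_sb, f_sb = X[i_sb].copy(), f_pb[i_sb]        # swarm's best
    for _ in range(iters):
        for i in range(Q):
            fi = f(X[i])
            if fi < f_pb[i]:
                f_pb[i], X_pb[i] = fi, X[i].copy()
            if fi < f_sb:
                f_sb, x_sb = fi, X[i].copy()
        p = rng.uniform(size=(Q, n))
        q = rng.uniform(size=(Q, n))
        V = (w * V
             + c1 * p * (X_pb - X) / dt            # cognitive component
             + c2 * q * (x_sb - X) / dt)           # social component
        V = np.clip(V, -v_max, v_max)              # restrict velocity
        X = X + V * dt
        w *= beta                                  # inertia weight decay
    return x_sb, f_sb

# Sanity check on a smooth surrogate objective (the true objectives are those
# of Section 3.1): the minimum of ||x||^2 is at the origin.
x_best, f_best = pso_minimize(lambda x: float(x @ x), n=4, x_max=1.0)
```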
3.1 Objective function
The particle with the best BF coefficients is demapped to obtain the BF matrix, $\tilde{\mathbf{W}}$. The maximum transmit power at each BS is constrained to P_{max}, and power allocation based on [20] is applied as per Equation (3). This is referred to as power adjustment on the BF matrix, forming the precoding matrix, $\overline{\mathbf{W}}$. There are two ways in which this can be applied: either in every iteration of the PSO (in short, PwrAdj) or after obtaining the best particle from the PSO (in short, NoPwrAdj). Making sure that at least one BS is transmitting at maximum power in every iteration consumes more computational resources, whereas if this is done only after running the PSO algorithm, the normalization skews or disfigures the best precoding weights. Both cases of power normalization are considered in the objective functions below. It should be noted that for the NoPwrAdj case, the objective function is evaluated without any restriction on the BS transmit power. This means that it is possible to exceed the BS power constraint while evaluating the objective function; nevertheless, the final precoding weights after applying Equation (3) satisfy the BS power constraint. The flexibility of choosing an objective function gives another degree of freedom to the PSO-based precoder.
In this article, two different objective functions are considered for the PSO to optimize.
3.1.1 Weighted interference minimization
The goal of every particle is to minimize this multi-objective function iteratively. Finally, the swarm's best particle will contain the best BF that has managed to minimize Equation (9).
3.1.2 Sum rate maximization
The PSO presented in Algorithm 2 involves minimization of the objective function. Hence, to maximize the sum rate, the objective function is written as f(X(i,:)) := −R_{tot}. This means that prior to evaluating the objective function, the sum rate per cell as in Equation (5) needs to be calculated in every iteration.
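A sketch of this sign flip, with the per-user SINRs computed from the effective channel H W; this is our simplified stand-in for the rate expression of Equation (5), assuming single-antenna UTs:

```python
import numpy as np

def neg_sum_rate(H, W, sigma2=1.0):
    """Objective f := -R_tot, so that minimizing f maximizes the sum rate."""
    G = H @ W                                     # effective channel (M x M)
    signal = np.abs(np.diag(G)) ** 2              # intended links
    interference = np.sum(np.abs(G) ** 2, axis=1) - signal
    sinr = signal / (interference + sigma2)
    return -float(np.sum(np.log2(1.0 + sinr)))

# With H = W = I and sigma2 = 1, each user has SINR = 1, i.e. 1 bit each:
print(neg_sum_rate(np.eye(2), np.eye(2)))         # -2.0
```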
3.2 Termination criteria
(1) Maximum number of iterations has been exceeded.
(2) A solution fulfilling a target value is found.
(3) No improvement is observed over a number of iterations.
(4) The normalized swarm radius is close to zero.
In practice, any one of the above mentioned criteria can be used for termination. In this article, the third criterion is used for termination.
3.3 Convergence
With a basic PSO, the notion of convergence means that the swarm has moved towards an equilibrium state [23]. Lemma 14.2 in [23] shows that the basic PSO does not satisfy the convergence condition for global search. In our article, a basic PSO with variations such as velocity restriction and inertia weight has been used. Proving the optimality conditions of the PSO is not easy, but what can be said is that a stable solution can be achieved. Hence, suitable variations of the PSO algorithm need to be considered in future work, such as Random Particle PSO or Multi-start PSO, since they satisfy the convergence condition for global search and can be considered for global optimization [23].
3.4 Computation complexity analysis

• Block 1: The initialization of particles carrying the BF coefficients (steps 6 to 10) has a computational complexity of $\mathcal{O}\left(Qn\right)$, where Q is the number of particles, which is constant throughout the simulation, and n is the number of BF coefficients.

• Block 2: From steps 12 to 24, the complexity is $\mathcal{O}\left(Q\cdot \text{complexity of the objective function}\right)$. Demapping from the i-th particle to the BF matrix consumes $\mathcal{O}\left(n\right)$, which is independent of the objective function. The dimension of the BF matrix W can be represented in terms of n and M as $\left[\frac{n}{2M}\times M\right]$.
Objective function: Weighted interference minimization
The complexity of computing HW is $\mathcal{O}\left(Mn\right)$, the Frobenius norm costs $\mathcal{O}\left({M}^{2}\right)$, and the SINR of the m th user costs $\mathcal{O}\left(2\frac{n}{2M}M\right)$. Finding the minimum-SINR user requires the SINR of all M users, i.e., $\mathcal{O}\left(Mn\right)$. Therefore, the complexity of weighted interference minimization is $\mathcal{O}\left(Mn\right)+\mathcal{O}\left({M}^{2}\right)+\mathcal{O}\left(Mn\right)$, which simplifies to $\mathcal{O}\left(Mn\right)$.
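The cost structure described above can be illustrated with a short sketch. The article's exact objective is not reproduced here: the idea that the interference of the weakest (minimum-SINR) user is weighted more heavily follows the description in the text, but the specific weighting scheme below (`weak_user_weight`) is an assumption for illustration only:

```python
import numpy as np

def weighted_interference(H, W, noise_power=1.0, weak_user_weight=10.0):
    """Sum of squared off-diagonal (interference) entries of H @ W, with the
    row of the minimum-SINR user up-weighted so its interference is
    suppressed more aggressively. Weighting scheme is illustrative."""
    HW = H @ W
    M = HW.shape[0]
    sinr = np.empty(M)
    for m in range(M):  # O(Mn) SINR pass to locate the weakest user
        interf = sum(abs(HW[m, j]) ** 2 for j in range(M) if j != m)
        sinr[m] = abs(HW[m, m]) ** 2 / (interf + noise_power)
    weights = np.ones(M)
    weights[np.argmin(sinr)] = weak_user_weight
    cost = 0.0
    for m in range(M):  # weighted off-diagonal energy
        for j in range(M):
            if j != m:
                cost += weights[m] * abs(HW[m, j]) ** 2
    return cost
```

A perfectly diagonal HW yields zero cost, matching the complete interference removal case discussed in Section 4.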
Objective function: Sum rate maximization
The calculation of SINR and consequently the sum rate per cell yields $\mathcal{O}\left(Mn\right)$.
Therefore, considering the worst case objective function, the complexity of Block2 is $\mathcal{O}\left(QMn\right)$.


• Block 3: From steps 25 to 30, the time and space complexity can only grow with the number of BF coefficients. Hence, the computational complexity is $\mathcal{O}\left(Qn\right)$.
Finally, the overall complexity of the PSO is $\mathcal{O}\left(Qn+c\left(QMn+Qn\right)\right)$ and can be simplified to $\mathcal{O}\left(cMn\right)$, ignoring the constants and lower order terms.
In this article, we consider M single-antenna UTs and K BSs with N_{T} antennas each. As shown in Algorithm 2, the number of BF coefficients carried by a particle is n = 2·M · K · N_{T}. Therefore, the complexity of the PSO is $\mathcal{O}\left(c{M}^{2}K{N}_{T}\right)$. Assuming that orthogonality is maintained in the system, such that the number of UTs is M = K · N_{T}, this becomes $\mathcal{O}\left(c{M}^{3}\right)$. The complexity of ZF BF is essentially that of the pseudoinverse, which is of the order $\mathcal{O}\left({M}^{2}K{N}_{\text{T}}\right)$ and simplifies to $\mathcal{O}\left({M}^{3}\right)$ under the orthogonality constraint. A comparison between PSO and ZF in terms of execution time would not be fair, since only a basic PSO with simple variations is implemented in MATLAB, and ZF is bound to run faster. It should be noted, however, that the PSO always provides an equilibrium solution, while ZF might not; hence, a completely fair comparison is difficult.
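The particle-to-BF-matrix demapping counted in Block 2 can be sketched as follows. The split of the n = 2·M·K·N_T real coordinates into real and imaginary halves is an assumed convention for illustration; Algorithm 2's exact mapping is not reproduced here:

```python
import numpy as np

def particle_to_bf(x, M, K, N_T):
    """Map a particle position x (length 2*M*K*N_T, real-valued) to a
    complex BF matrix W of shape (K*N_T, M), i.e. [n/(2M) x M].
    The first half of x is taken as the real parts, the second half as
    the imaginary parts (assumed interleaving convention)."""
    n = 2 * M * K * N_T
    assert len(x) == n, "particle length must equal 2*M*K*N_T"
    half = n // 2
    re = np.asarray(x[:half], dtype=float).reshape(K * N_T, M)
    im = np.asarray(x[half:], dtype=float).reshape(K * N_T, M)
    return re + 1j * im  # O(n) demapping, as counted in Block 2
```

With LFB/LBH, entries of W corresponding to links outside the active set would simply be held at zero, shrinking the effective search space.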
4 Analysis of interference using Gershgorin's discs
In the case of ZF BF, the feasibility of the solution is determined by the ability to invert $\mathbf{H}{\mathbf{H}}^{H}$. In [25, 26], it is shown that any approach to improve the channel inversion must aim to reduce the effect of the largest eigenvalue. Another metric that has been used in the framework of ZF in a single-cell setup is the Frobenius norm of the channel H, since it is proportionally related to the link-level performance, as shown in [25]. Their proposed network coordinated BF algorithm combines both metrics such that the mean of the largest eigenvalue of ${\left(\mathbf{H}{\mathbf{H}}^{H}\right)}^{-1}$ should be small and the mean of the Frobenius norm of H should be large, so that the SINR of the UT is large and the bit error rate is improved, respectively.
In the case of PSO, analyzing the properties of the obtained precoder via ${\left(\mathbf{H}{\mathbf{H}}^{H}\right)}^{-1}$ is not meaningful. To evaluate the performance of a PSO-based precoder, $\mathbf{H}\overline{\mathbf{W}}$ is analyzed here. However, ${\left\Vert \mathbf{H}\overline{\mathbf{W}}\right\Vert}_{F}$ alone does not give insight into the properties of the precoder, as the off-diagonal elements are the residual intra-cluster interference remaining in the system. Interference is completely removed when the off-diagonal elements of $\mathbf{H}\overline{\mathbf{W}}$ are zero. However, note that complete removal of interference does not maximize the sum rate, and it is therefore suboptimal in that sense.
In the framework of perturbation theory [27], these off-diagonal elements can be seen as a perturbation of the diagonal elements of $\mathbf{H}\overline{\mathbf{W}}$. In this context, the Gershgorin circle theorem [27] can be used to analyze the behavior of the different precoding techniques. For a given square matrix A, each element in the main diagonal gives an estimate of an eigenvalue on the complex plane. For a given diagonal element, the sum of the absolute values of the off-diagonal elements in the corresponding row is the radius of the Gershgorin disc around this estimated eigenvalue; the circumference of this disc is called the Gershgorin circle. The theorem states that all the eigenvalues of A lie within the union of these discs, i.e., it describes how well the diagonal elements of a matrix approximate its eigenvalues. Hence, Gershgorin discs can be used here to visualize how the intra-cluster interference is removed by the PSO-based precoder and by the linear ZF BF, as shown in Section 5.3.
Formally, every eigenvalue $\lambda$ of $\mathbf{A}=\left[{a}_{ij}\right]$ satisfies, for at least one index i,
$$\left|\lambda - {a}_{ii}\right| \le \sum_{j\ne i}\left|{a}_{ij}\right|,$$
where the right-hand side of the inequality is the radius of the i th disc.
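The theorem is easy to check numerically. The sketch below computes the disc centers and radii of a square matrix (such as $\mathbf{H}\overline{\mathbf{W}}$) and verifies that every eigenvalue lies in the union of the discs:

```python
import numpy as np

def gershgorin_discs(A):
    """Return (centers, radii): center a_ii and radius equal to the sum of
    |a_ij| over the off-diagonal elements j != i of row i."""
    A = np.asarray(A)
    centers = np.diag(A)
    radii = np.sum(np.abs(A), axis=1) - np.abs(centers)
    return centers, radii

def eigs_in_disc_union(A):
    """Check that every eigenvalue of A lies in at least one Gershgorin disc,
    as the theorem guarantees."""
    centers, radii = gershgorin_discs(A)
    eigs = np.linalg.eigvals(np.asarray(A, dtype=complex))
    return all(any(abs(lam - c) <= r + 1e-9 for c, r in zip(centers, radii))
               for lam in eigs)
```

For a perfectly diagonal $\mathbf{H}\overline{\mathbf{W}}$, every radius is zero, which is the visual signature of complete interference removal in the plots of Section 5.3; the residual interference left by the PSO-based precoder shows up as non-zero radii.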
5 Simulation results
Simulation parameters

System parameters  Values
Number of BSs/UTs  3/6
Number of antennas at BS/UT  3/1
Shadow fading, ${\gamma}_{SF}$  $\mathcal{N}\left(0,\,8\,\text{dB}\right)$
Pathloss model, ${\gamma}_{PL}$ (d in km)  128.1 + 37.6 · log_{10}(d)
Rayleigh fast fading, Γ  $\mathcal{C}\mathcal{N}\left(0,\,1\right)$
BS antenna gain, G  9 dBi
Correlation between antennas at BS, ρ  0.5
Number of channel realizations  10^{4}
Max. BS Tx power with cell-edge SNR = 15 dB  0.0603 W (17.8 dBm)
Noise bandwidth  1 MHz
Noise figure  0 dB
Active set threshold for LFB  10 dB
PSO parameters

Parameters  Values
Number of particles, Q  30
Number of variables, n  Number of ℜ & ℑ BF coeff.
x_{max} = -x_{min}  $1/\max\left\{\left|\tilde{\mathbf{H}}\left(m,l\right)\right|\right\}$
Time step length, Δt  1
Max. velocity, v_{max}  (x_{max} - x_{min})/Δt
Cognitive factor, c_{1}  2
Social factor, c_{2}  2
Inertia weight, w  1.4 → 0.4
Constant decay factor, β  0.99
Max. number of iterations  500
Various precoding configurations
Nos.  Precoder  Feedback  Backhaul  Power constraint 

1  PSO:FFB + FBH + PwrAdj  Full  Full  Every iteration 
2  PSO:FFB + FBH + NoPwrAdj  Full  Full  After convergence 
3  PSO:LFB + LBH + PwrAdj  Limited  Limited  Every iteration 
4  PSO:FFB + LBH + PwrAdj  Full  Limited  Every iteration 
5  PSO:FFB + LBH + NoPwrAdj  Full  Limited  After convergence 
6  ZF:LFB + LBH  Limited  Limited  After ZF 
7  ZF:FFB + LBH  Full  Limited  After ZF 
8  ZF:FFB + FBH  Full  Full  After ZF 
5.1 Objective function: weighted interference minimization
Alternatively, a PSO with the objective of maximizing the minimum SINR among the UTs was simulated. In the case of LFB and LBH, a 2.1% relative increase in the average sum rate per cell was observed compared to weighted interference minimization, but at the cost of a 7.7% relative increase in BS power consumption and a 45% relative increase in interference. Since the interference is so strongly affected, weighted interference minimization is preferred.
Based on the analysis in Section 3.4 and on prior experience, the number of BF coefficients carried by a particle decreases with the sparsity of the aggregated channel matrix. Hence, with LBH, the PSO converges faster than with FBH. The comparison in Figure 9 could be unfair for the reason cited earlier, namely that the solution of the PSO with LBH is fed to one of the particles in the FBH case; if this is not done, the faster convergence of the PSO with LBH is observed (not shown here).
5.2 Objective function: sum rate maximization
5.3 Gershgorin's circles
It is interesting to note that for the PSO with LFB and LBH, the actual eigenvalues map closely to the estimated Gershgorin eigenvalues, unlike the ZF with LFB and LBH. From an interference point of view, having concentric circles helps contain the interference within the largest circle; ZF with LFB and LBH shows this attribute.
6 Conclusions
In this work, a particle swarm stochastic optimization algorithm has been proposed within a partial JP framework to design the precoding weights for efficient backhauling, achieving a backhaul reduction proportional to the reduction in CSI feedback. In this context, two objective functions have been considered: weighted interference minimization and sum rate maximization. In the proposed weighted interference minimization, the SINR of the weakest UT is iteratively improved in addition to the interference being minimized. Under the limited feedback and limited backhaul constraints, with weighted interference minimization as the objective function, the average sum rate per cell is improved by 66.53% with respect to a ZF precoder. The particle swarm based precoder allows some multiuser interference to remain in the system while still improving the sum rate, and it uses the BS transmit power more effectively.
With recent developments in swarm intelligence, the complexity and feasibility can be improved to achieve a faster and more robust particle swarm algorithm. There is potential for improving the PSO with global search capabilities, such as the Random Particle PSO, which should improve the already promising results presented in this article.
Abbreviations
BF: beamformer (beamforming)
bps: bits per second
BS(s): base station(s)
CCN: central coordination node
CDF: cumulative distribution function
CJP: centralized joint processing
CoMP: coordinated multipoint (transmission)
CSI: channel state information
FBH: full backhauling
FFB: full feedback
JP: joint processing
LBH: limited backhauling
LFB: limited feedback
MAC: medium access control
MU-MIMO: multi-user multiple-input multiple-output
NoPwrAdj: no power adjustment
PJP: partial joint processing
PHY: physical
PwrAdj: power adjustment
PSO: particle swarm optimization
SIR: signal to interference ratio
SINR: signal to interference plus noise ratio
SNR: signal to noise ratio
SIN: soft interference nulling
UT(s): user terminal(s)
ZF: zero forcing.
Declarations
Acknowledgements
The authors would like to thank the reviewers for their critical comments that greatly improved the article. This study has been partly supported by the Swedish Research Council, within the project 621-2009-4555 Dynamic Multipoint Wireless Transmission. This study has also been supported by the Swedish Agency for Innovation Systems (VINNOVA) and by the EU FP7 project INFSO-ICT-247223 ARTIST4G. C. Botella's work has been supported by the Spanish MEC Grants CONSOLIDER-INGENIO 2010 CSD2008-00010 "COMONSENS" and COSIMA TEC2010-19545-C04-01. The authors would also like to acknowledge the members of the VR meetings, in particular Jingya Li and Agisilaos Papadogiannis, for the valuable discussions. Thanks to Bhavishya Goel for the fruitful discussions on complexity. The computations were performed on C^{3}SE computing resources.
Authors’ Affiliations
References
1. 3GPP TR 36.814, 3rd Generation Partnership Project; Technical specification group radio access network; Further advancements for E-UTRA physical layer aspects (Release 9), 2010.
2. Karakayali MK, Foschini GJ, Valenzuela RA: Network coordination for spectrally efficient communications in cellular systems. IEEE Wirel Commun 2006, 13(4):56-61. doi:10.1109/MWC.2006.1678166
3. Gesbert D, Hanly S, Huang H, Shitz SS, Simeone O, Yu W: Multi-cell MIMO cooperative networks: a new look at interference. IEEE J Sel Areas Commun 2010, 28(9):1380-1408.
4. Papadogiannis A, Hardouin E, Gesbert D: Decentralising multicell cooperative processing: a novel robust framework. EURASIP J Wirel Commun Netw 2009, 1-10. Article ID 890685
5. Marsch P, Fettweis G: A framework for optimizing the downlink of distributed antenna systems under a constrained backhaul. Proc European Wireless Conference, Paris, France, 2007, 1-5.
6. Marsch P, Fettweis G: On multicell cooperative transmission in backhaul-constrained cellular systems. Ann Telecommun 2008, 63(5):253-269. doi:10.1007/s12243-008-0028-3
7. Botella C, Svensson T, Xu X, Zhang H: On the performance of joint processing schemes over the cluster area. Proc IEEE Vehicular Technology Conference, Taipei, Taiwan, 2010, 1-5.
8. Papadogiannis A, Bang H, Gesbert D, Hardouin E: Downlink overhead reduction for multi-cell cooperative processing enabled wireless networks. Proc IEEE Personal, Indoor and Mobile Radio Communications, Cannes, France, 2008, 1-5.
9. Papadogiannis A, Bang HJ, Gesbert D, Hardouin E: Efficient selective feedback design for multicell cooperative networks. IEEE Trans Veh Technol 2011, 60(1):196-205.
10. Boccardi F, Huang H, Alexiou A: Network MIMO with reduced backhaul requirements by MAC coordination. Proc IEEE Asilomar Conference on Signals, Systems and Computers, Pacific Grove, 2008, 1125-1129.
11. Zhang J, Chen R, Andrews JG, Ghosh A, Heath RW: Networked MIMO with clustered linear precoding. IEEE Trans Wirel Commun 2009, 8(4):1910-1921.
12. Ng CTK, Huang H: Linear precoding in cooperative MIMO cellular networks with limited coordination clusters. IEEE J Sel Areas Commun 2010, 28(9):1446-1454.
13. Lakshmana TR, Botella C, Svensson T, Xu X, Li J, Chen X: Partial joint processing for frequency selective channels. Proc IEEE Vehicular Technology Conference, Ottawa, Canada, 2010, 1-5.
14. Wei X, Weber T, Kuhne A, Klein A: Joint transmission with imperfect partial channel state information. Proc IEEE Vehicular Technology Conference, Barcelona, Spain, 2009, 1-5.
15. Fang S, Wu G, Li SQ: Optimal multiuser MIMO linear precoding with LMMSE receiver. EURASIP J Wirel Commun Netw 2009, 1-10. Article ID 197682
16. Hei Y, Li X, Yi K, Yang H: Novel scheduling strategy for downlink multiuser MIMO system: particle swarm optimization. Sci China F 2009, 52(12):2279-2289. doi:10.1007/s11432-009-0212-8
17. Knievel C, Hoeher PA, Tyrrell A, Auer G: Particle swarm enhanced graph-based channel estimation for MIMO-OFDM. Proc IEEE Vehicular Technology Conference, Budapest, Hungary, 2011, 1-5.
18. Marsch P, Fettweis G: On downlink network MIMO under a constrained backhaul and imperfect channel knowledge. Proc IEEE Global Telecommunications Conference, California, USA, 2009, 1-6.
19. Spencer Q, Swindlehurst A, Haardt M: Zero-forcing methods for downlink spatial multiplexing in multiuser MIMO channels. IEEE Trans Signal Process 2004, 52(2):461-471. doi:10.1109/TSP.2003.821107
20. Zhang H, Dai H: Cochannel interference mitigation and cooperative processing in downlink multicell multiuser MIMO networks. EURASIP J Wirel Commun Netw 2004, 2:222-235.
21. Lakshmana TR, Botella C, Svensson T: Partial joint processing with efficient backhauling in coordinated multipoint networks. Proc IEEE Vehicular Technology Conference, Yokohama, Japan, 2012, 1-5.
22. Kennedy J, Eberhart RC: Particle swarm optimization. Proc IEEE International Conference on Neural Networks, Perth, Australia, 1995, 1942-1948.
23. Engelbrecht AP: Fundamentals of Computational Swarm Intelligence. John Wiley; 2005:171-172.
24. Shi YH, Eberhart RC: Parameter selection in particle swarm optimization. The 7th Annual Conference on Evolutionary Programming, San Diego, USA, 1998, 591-601.
25. Chae CB, Kim SH, Heath RW: Network coordinated beamforming for cell-boundary users: linear and nonlinear approaches. IEEE J Sel Top Signal Process 2009, 3(6):1094-1105.
26. Peel CB, Hochwald BM, Swindlehurst AL: A vector-perturbation technique for near-capacity multiantenna multiuser communication, part I: channel inversion and regularization. IEEE Trans Commun 2005, 53(1):195-202. doi:10.1109/TCOMM.2004.840638
27. Horn RA, Johnson CR: Matrix Analysis. Cambridge University Press, Cambridge; 1985:344-347.
28. 3GPP TR 36.942, 3rd Generation Partnership Project; Technical specification group radio access network; Evolved universal terrestrial radio access; Radio frequency system scenarios (Release 10), 2011.
29. ARTIST4G D1.2, Innovative advanced signal processing algorithms for interference avoidance, ARTIST4G technical deliverable, 84. [https://ictartist4g.eu/projet/workpackages/wp1/documents/d1.2/d1.2.pdf]
Copyright
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.