
Joint source and relay design for two-hop amplify-and-forward relay networks with QoS constraints

Abstract

In this paper, we consider the joint design of the source precoding matrix and the relay precoding matrix in a two-hop multiple-input multiple-output (MIMO) relay network. The goal is to find a pair of matrices that minimizes the power consumption while meeting pre-selected quality of service (QoS) constraints, defined as the mean square error (MSE) of each data stream. Using majorization theory, we simplify the matrix-valued optimization problem into a scalar-valued one. We then propose a lower bound and an upper bound of the original problem, both in convex form. Specifically, the latter is solved by a multi-level water-filling algorithm that is much more efficient than directly applying the interior point method. Numerical examples corroborate the proposed studies and also demonstrate the tightness of both bounds to the original problem.

1 Introduction

Relay networks have recently attracted much attention because of their promising capability of achieving reliable communication and wide coverage for the next generation of wireless systems [1, 2]. Different types of relaying strategies, e.g., amplify and forward (AF), decode and forward (DF), and compress and forward (CF), were introduced in [3-5], respectively. DF and CF decode data before retransmission and are thus also known as regenerative strategies; an AF relay only amplifies the received data and is known as a non-regenerative strategy. The computational simplicity of the AF relay makes it highly attractive and a strong candidate for real-time applications. On the other hand, multiple-input multiple-output (MIMO) techniques [6] can enhance the data transmission rate by introducing spatial diversity gain. Therefore, combining relaying and MIMO becomes a natural way to further advance future wireless communication systems.

Most of the existing works focus on AF MIMO relay networks, where a certain performance criterion is optimized subject to power constraints at both the source and the relay. For example, the mutual information and the total mean square error (MSE) criteria are selected as objective functions in [7] and [8], respectively. The authors in [9] investigated the diversity-multiplexing tradeoff for MIMO relays. Moreover, there are some works on beamforming design for special types of AF MIMO relays; for instance, the authors in [10] considered codebook design for half-duplex AF MIMO relays, while the author in [11] considered beamforming design for a rather broad class of MIMO relays. Applying majorization theory [12], the authors in [13] proposed a unified framework covering most performance criteria. The extension of [13] to the multiple relay case was introduced in [14].

All of the methods mentioned above enhance the system performance by maximizing or minimizing a certain objective function subject to power constraints at some or all nodes. Although this model improves the system performance, it does not guarantee a certain quality of service (QoS) requirement for an individual user. The importance of considering QoS becomes more obvious in practical applications supporting several types of service, each with a different reliability requirement. One of the pioneering works considering QoS in MIMO point-to-point systems is [15], where the authors optimized the transmission power subject to predefined sets of QoS constraints, e.g., individual MSE, signal-to-noise ratio (SNR), and bit error rate. In [16], the authors investigated the optimization of the source beamforming and relay weighting matrices in order to minimize the total power subject to a given set of QoS constraints for the multi-input single-output broadcast channel. For relay networks, the AF MIMO relay network with QoS constraints has been investigated in [17]. Applying majorization theory, the author in [17] proposed a unified framework for the design of the optimal structure of the source precoding and relay amplifying matrices, and applied a successive geometric programming method to obtain the optimal power loading among data streams. Unfortunately, the computational complexity of the solution in [17] compromises its suitability for practical implementation. Under similar assumptions, the authors in [18] considered a simplified version of the problem in which only the relay power is minimized, and the minimization is executed over a convex lower bound of the objective function. In a more general setup, the authors in [19] studied the joint relay and source power minimization and applied majorization theory to reduce the problem to a scalar one; then, using QoS convex relaxation, an upper bound and a lower bound on the optimal result were presented.

In this paper, building upon the results in [19], we take a specific look into a dual-hop^a AF MIMO relay network. We first jointly design the source and the relay precoding matrices such that the overall transmission power is minimized subject to a given set of QoS constraints. Applying majorization theory, we reduce the original matrix-valued problem to a scalar-valued one and then propose two new convex optimization problems whose objective values serve as the lower bound and the upper bound of the original problem. While both new problems can be handled by existing convex optimization tools, e.g., CVX [20], we specifically design a multi-level water-filling (MLWF) algorithm to solve the upper bound problem, which further reduces the computational complexity. Compared with the successive geometric programming approach developed in [17], the MLWF algorithm does not require any optimization tool and is thus easier to implement in practical relay systems. Numerical results corroborate the proposed studies and clearly demonstrate the tightness of the proposed lower and upper bounds, especially in the low MSE region.

The rest of this paper is arranged as follows: Section 2 presents the system model and formulates the optimization problem in matrix form. In Section 3, the optimization is simplified to a scalar-valued problem using majorization theory. Two suboptimal problems whose objectives serve as the upper bound and the lower bound of the original problem are derived in the same section. In Section 4, the upper bound problem is solved by a multi-level water-filling algorithm coupled with a decomposition method. The simulation results are presented in Section 5, and conclusions are drawn in Section 6.

1.1 Notations

Vectors and matrices are denoted by boldface lowercase and uppercase letters, respectively; the transpose, complex conjugate, Hermitian, inverse, and pseudo-inverse of A are denoted by A^T, A^*, A^H, A^{-1}, and A^†, respectively; ∥a∥ denotes the Euclidean norm of the vector a; diag{a} is the diagonal matrix with diagonal elements given by a, while diag{A} is the vector with entries taken from the diagonal elements of A; I is the identity matrix; and E{·} is the statistical expectation. Moreover, basic notations and definitions of majorization theory can be found in Appendix 1.

2 System model

As shown in Figure 1, we consider a three-node multi-antenna relay network that consists of a source node, a relay node, and a destination node, equipped with M, N, and P antennas, respectively. We assume that the direct link between the source and the destination is weak and, therefore, can be neglected. Denote the baseband MIMO channel between the source and the relay as the N × M matrix H_s, and that between the relay and the destination as the P × N matrix H_r. We further assume that the channel state information is perfectly known at all nodes. Suppose that L data streams, denoted by x, are precoded by an M × L precoding matrix B at the source node. We require L ≤ min{M, N, P} so that the data streams can be detected with a linear method at the destination. With an N × N precoding matrix F at the relay, the received signal at the destination is as follows:

z = H_r F H_s B x + H_r F v_r + v_d,   (1)
Figure 1. System model for a three-node AF MIMO relay network.

where v_r and v_d are the additive white complex Gaussian noise at the relay and the destination, respectively, i.e., v_r ∼ CN(0, σ_r² I_N) and v_d ∼ CN(0, σ_d² I_P). Without loss of generality, we set σ_d² = σ_r² = 1. Since the correlation and power can be designed through B, the data streams from the source can be assumed independent from each other, i.e., E{x x^H} = I.

We consider the minimum mean square error detection at destination with a P × L decoding matrix T. The estimated data can be expressed as follows:

x̂ = T^H z,   (2)

with the error matrix

C = E{ (x̂ − x)(x̂ − x)^H }.   (3)

The MSE is defined as tr(C). For given B and F, one can easily find the optimal T from the standard approach [21]:^b

T = ( H_r F H_s B (H_r F H_s B)^H + H_r F (H_r F)^H + I )^{-1} H_r F H_s B,   (4)

and the corresponding error matrix is

C = ( I + (H_r F H_s B)^H ( H_r F (H_r F)^H + I )^{-1} H_r F H_s B )^{-1}.   (5)

As in [15], the QoS measure here is taken as the MSE of each individual data stream. Let us denote the QoS vector as ρ = [ρ_1, …, ρ_L]^T, i.e., the MSE of the i-th data stream is required to be smaller than ρ_i. Note that ρ_i < 1 is necessary to avoid trivial solutions.

From (5), the QoS constraints are stated as follows:

C_{ii} ≤ ρ_i  or  diag{C} ≤ ρ,   (6)

where ‘ ≤’ denotes the element-wise operation if used between vectors.

The average power consumed by the source is computed from

E{ tr( B x x^H B^H ) } = tr( B B^H ),   (7)

while that spent by the relay is

E{ tr( (F H_s B x + F v_r)(F H_s B x + F v_r)^H ) } = tr( F (H_s B B^H H_s^H + I) F^H ).
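To make the model concrete, the following Python/NumPy sketch evaluates the quantities above for randomly drawn channels and arbitrary (not yet optimized) precoders; all dimensions and values are illustrative assumptions, not data from the paper. It computes the MMSE receiver of (4), the per-stream MSEs, i.e., the diagonal of (5), and the source and relay powers of (7) and the expression above.

```python
import numpy as np

# Illustrative dimensions: M (source), N (relay), P (destination) antennas,
# L data streams with L <= min{M, N, P}.
M_ant, N_ant, P_ant, L = 4, 4, 4, 2
rng = np.random.default_rng(0)
crandn = lambda m, n: (rng.standard_normal((m, n))
                       + 1j * rng.standard_normal((m, n))) / np.sqrt(2)

Hs = crandn(N_ant, M_ant)      # source-to-relay channel H_s (N x M)
Hr = crandn(P_ant, N_ant)      # relay-to-destination channel H_r (P x N)
B = crandn(M_ant, L)           # source precoder B (M x L), arbitrary here
F = crandn(N_ant, N_ant)       # relay precoder F (N x N), arbitrary here

Meq = Hr @ F @ Hs @ B                                # equivalent channel
Rn = Hr @ F @ (Hr @ F).conj().T + np.eye(P_ant)      # noise covariance at destination
T = np.linalg.solve(Meq @ Meq.conj().T + Rn, Meq)    # MMSE receiver, Eq. (4)
C = np.linalg.inv(np.eye(L)
                  + Meq.conj().T @ np.linalg.solve(Rn, Meq))  # error matrix, Eq. (5)
print("per-stream MSEs:", np.real(np.diag(C)))       # the quantities bounded by rho

P_src = np.trace(B @ B.conj().T).real                # source power, Eq. (7)
P_rly = np.trace(F @ (Hs @ B @ (Hs @ B).conj().T
                      + np.eye(N_ant)) @ F.conj().T).real  # relay power
print("total power:", P_src + P_rly)
```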

3 Optimization problem

The goal is to find the optimal B and F to minimize the overall power spent by the source and relay and at the same time meet the QoS requirements.

Define

M = H_r F H_s B,   (8)
R = ( (H_r F)(H_r F)^H + I )^{-1}.   (9)

The optimization problem is then expressed as follows:

(P1): min_{B, F}  tr( B B^H ) + tr( F (H_s B B^H H_s^H + I) F^H )
subject to  diag{ (I + M^H R M)^{-1} } ≤ ρ.   (10)

Unfortunately, the problem is non-convex and cannot be solved in an efficient way.

3.1 Equivalent problem

Let us define a new optimization problem:

(P2): min_{B̃, F}  tr( B̃ B̃^H ) + tr( F (H_s B̃ B̃^H H_s^H + I) F^H )
subject to  diag{ (I + M̃^H R M̃)^{-1} } ⪰^w ρ,
M̃ = H_r F H_s B̃,
M̃^H R M̃ is diagonal,   (11)

where ⪰^w denotes the weak (super)majorization relation, whose details can be found in Appendix 1.

Theorem 1. Problems (P1) and (P2) are equivalent.

Proof. The idea is to prove that for each feasible point of (P1), there is a corresponding feasible point in (P2) that yields the same objective value and vice versa.

(P1) → (P2): For any feasible B in (P1), construct a new matrix B̃ = BQ, where Q is the unitary eigenmatrix of M^H R M. Then, M̃^H R M̃ is a diagonal matrix. It can be readily checked that the objective value of (P2) with (B̃, F) is the same as the objective value of (P1) with (B, F). Moreover, since I + M̃^H R M̃ is a diagonal matrix, we have

diag{ (I + M̃^H R M̃)^{-1} } = λ{ (I + M̃^H R M̃)^{-1} } = λ{ (I + M^H R M)^{-1} } = λ{Z},   (12)

where Z ≜ (I + M^H R M)^{-1}. From Z_{ii} ≤ ρ_i, one can conclude that diag{Z} ⪰^w ρ. We further know from Lemma 2 in Appendix 1 that λ{Z} ⪰^w diag{Z}. Therefore, λ{Z} ⪰^w ρ, and we reach the conclusion that for any feasible point (B, F) in (P1), there is always a corresponding feasible point (B̃, F) in (P2) that gives the same objective value.

(P2) → (P1): Define Z̃ = (I + M̃^H R M̃)^{-1} and assume that (B̃, F) is a feasible point of (P2). From (P2), we know that diag{Z̃} = λ{Z̃} ⪰^w ρ holds. From Lemma 3 in Appendix 1 (applied with a = ρ and b = λ{Z̃}), there exists a vector c satisfying both c ⪯ λ{Z̃} and c ≤ ρ. From Lemma 2, we know that for such a c there exists a Hermitian matrix W with diag{W} = c and λ{W} = λ{Z̃}; that is, W = Q^H Z̃ Q for some unitary Q. Define B = B̃Q. Then, diag{ (I + M^H R M)^{-1} } = diag{W} ≤ ρ. Moreover, the objective value of (P1) with (B, F) is the same as the objective value of (P2) with (B̃, F). □

3.2 Processing matrix variables

Based on Theorem 1, we can solve (P2) instead of (P1).

Define two new positive semi-definite matrices as well as their singular value decomposition (SVD) as follows:

X = H_s B̃ B̃^H H_s^H = U_X Λ_X U_X^H,   (13)
Y = H_r F (X + I) F^H H_r^H = U_Y Λ_Y U_Y^H,   (14)

where U_X and U_Y are N × L and P × L orthonormal matrices, respectively. Throughout this paper, we always sort the singular values and eigenvalues in increasing order. Note that it is still possible that Λ_X or Λ_Y contains some zero entries.

The matrices H_s B̃ and H_r F can be represented as follows:

H_s B̃ = U_X Λ_X^{1/2} V_X^H,   (15)
H_r F = U_Y Λ_Y^{1/2} V_Y^H (X + I)^{-1/2},   (16)

where V_X can be any L × L unitary matrix, and V_Y can be any N × L orthonormal matrix. These two matrices will be designed later to fulfill the optimality requirements.

Let

H_s = U_s Λ_s V_s^H  and  H_r = U_r Λ_r V_r^H   (17)

be the SVDs of H_s and H_r, respectively. We further partition the singular matrices as U_s ≜ [U_{s,2}, U_{s,1}] and U_r ≜ [U_{r,2}, U_{r,1}], where U_{s,1} and U_{r,1} consist of L columns of U_s and U_r, respectively.

One can rewrite (15) and (16) as follows:

[ Σ_s           0_{L×(M−L)}
  0_{(N−L)×L}   0_{(N−L)×(M−L)} ] V_s^H B̃ = U_s^H U_X Λ_X^{1/2} V_X^H,

[ Σ_r           0_{L×(N−L)}
  0_{(P−L)×L}   0_{(P−L)×(N−L)} ] V_r^H F = U_r^H U_Y Λ_Y^{1/2} V_Y^H (X + I)^{-1/2}.   (18)

There are three cases to be discussed:

  • If N = M = L, then B̃ can be re-expressed from (18) as

    B̃ = V_s Σ_s^{-1} U_{s,1}^H U_X Λ_X^{1/2} V_X^H.   (19)
  • If M > N = L, (18) can be simplified as

    [ Σ_s  0_{L×(M−L)} ] V_s^H B̃ = U_{s,1}^H U_X Λ_X^{1/2} V_X^H.   (20)

    The solution for B̃ is not unique, and the one which minimizes the objective function should be chosen. Basically, we need to solve

    min_{B̃} tr( B̃ B̃^H )  subject to  [ Σ_s  0_{L×(M−L)} ] V_s^H B̃ = U_{s,1}^H U_X Λ_X^{1/2} V_X^H.   (21)

    Note that the second term in the objective function is not included in (21) since it will be taken care of by F. From Lemma 9 in [14], the optimal structure of B̃ is as follows:

    B̃ = V_s [ Σ_s^{-1}  0_{L×(M−L)} ]^T U_{s,1}^H U_X Λ_X^{1/2} V_X^H.   (22)
  • If M > L and N > L, then (20) holds if and only if U_{s,2}^H U_X = 0_{(N−L)×L}. In this case, (22) is still the optimal structure for B̃. Please refer to [14] for the detailed derivation.

Following the same procedure, the optimal structure of F can be computed as follows:

F = V_r [ Σ_r^{-1}  0_{L×(N−L)} ]^T U_{r,1}^H U_Y Λ_Y^{1/2} V_Y^H (X + I)^{-1/2}.   (23)

We then proceed to the optimization over the new variables U_X, V_X, U_Y, V_Y, Λ_X, and Λ_Y. Substituting (22) and (23) into (P2) yields the new objective function

tr( Λ_s^† U_{s,1}^H U_X Λ_X U_X^H U_{s,1} Λ_s^† ) + tr( Λ_r^† U_{r,1}^H U_Y Λ_Y U_Y^H U_{r,1} Λ_r^† ),   (24)

where the pseudo-inverses of Λ_s and Λ_r are Λ_s^† = [ Σ_s^{-1}  0_{L×(N−L)} ]^T and Λ_r^† = [ Σ_r^{-1}  0_{L×(M−L)} ]^T, respectively.

We apply the inversion lemma [22],

( A_1 + A_2 A_3 A_4 )^{-1} = A_1^{-1} − A_1^{-1} A_2 ( A_3^{-1} + A_4 A_1^{-1} A_2 )^{-1} A_4 A_1^{-1},   (25)

to the constraint and obtain

( I + M̃^H R M̃ )^{-1} = I − M̃^H ( R^{-1} + M̃ M̃^H )^{-1} M̃ ≜ I − G,   (26)

where G is defined as the corresponding term. Substituting (22) and (23) into G yields

G = V_X Λ_X^{1/2} (Λ_X + I)^{-1/2} U_X^H V_Y Λ_Y^{1/2} (Λ_Y + I)^{-1} Λ_Y^{1/2} V_Y^H U_X (Λ_X + I)^{-1/2} Λ_X^{1/2} V_X^H.   (27)

Remark 1. Representing the problem in terms of these new variables decreases the coupling between the objective function and the constraints and thus facilitates the optimization procedure. For example, U_Y is only involved in the objective function, but not in the constraints. Hence, we can adjust U_Y to change the objective without affecting the constraints. Conversely, V_X and V_Y only affect the constraints, but not the objective function.

Remark 2. On the other hand, the diagonality constraint in (P2) can be satisfied by adjusting V_X alone, so that the other constraint and the objective function are not affected.

3.3 Simplification

In the following subsections, we derive the optimum structures for V_X, V_Y, U_X, and U_Y, respectively.

3.3.1 Optimal V_X and V_Y

The functionality of the optimal V_X is to make G diagonal. Moreover, if there is a V_Y such that all components in diag{I − G} are minimized simultaneously, then this V_Y must be optimal because it provides the largest possible freedom to optimize over the remaining variables, i.e., U_X, U_Y, Λ_X, and Λ_Y. Applying (9.H.2 in [12]) to G, we obtain

λ{G} ⪯_w diag{Ĝ},   (28)
(28)

where

Ĝ = Λ_X^{1/2} (Λ_X + I)^{-1/2} Λ_Y^{1/2} (Λ_Y + I)^{-1} Λ_Y^{1/2} (Λ_X + I)^{-1/2} Λ_X^{1/2}   (29)

is the product of the diagonal matrices in (27). According to the definitions of submajorization and supermajorization in Appendix 1, (28) can also be written as follows:

−λ{G} ⪯^w −diag{Ĝ}   (30)

and is equivalent to

1 − λ{G} ⪯^w 1 − diag{Ĝ},   (31)

where 1 is the all-one vector. Moreover, (31) can be re-expressed as follows:

λ{I − G} ⪯^w diag{I − Ĝ}.   (32)

Since one can always find a V_X that makes G diagonal, we have λ{I − G} = diag{I − G}. Moreover, from Lemma 2, we can rewrite (32) as follows:

diag{I − G} ⪯^w diag{I − Ĝ},   (33)

which indicates that diag{I G ̂ } is a simultaneous lower bound for all the elements in diag{IG}. Therefore, G= G ̂ must hold at the optimal point, and this can be achieved if V Y  = U X and V X  = I.

3.3.2 Optimal U_X and U_Y

With the optimal V_Y, the constraint is independent of both U_Y and U_X. Therefore, one can find the optimal structures of U_X and U_Y purely from

min_{U_X, U_Y}  tr( Λ_s^† U_{s,1}^H U_X Λ_X U_X^H U_{s,1} Λ_s^† ) + tr( Λ_r^† U_{r,1}^H U_Y Λ_Y U_Y^H U_{r,1} Λ_r^† ).   (34)

We need the following matrix inequality (9.H.1.h in [12]) to proceed: given two L × L positive semi-definite matrices A_1 and A_2 with eigenvalues λ_l(A_1) and λ_l(A_2) arranged in increasing order, it holds that

Σ_{l=1}^{L} λ_l(A_1) λ_{L−l+1}(A_2) ≤ tr(A_1 A_2) ≤ Σ_{l=1}^{L} λ_l(A_1) λ_l(A_2).   (35)

Then, the first part in (34) can be lower bounded by the following:

tr( Λ_s^† Λ_X Λ_s^† ) ≤ tr( Λ_s^† U_{s,1}^H U_X Λ_X U_X^H U_{s,1} Λ_s^† ).   (36)

Obviously, the minimum value is achieved when U_X = U_{s,1}, since the diagonal elements of (Λ_s^†)² are arranged in decreasing order while those of Λ_X are arranged in increasing order, which realizes the antitonic (minimal) matching in (35). A similar discussion holds for the second term in (34), namely,

tr( Λ_r^† Λ_Y Λ_r^† ) ≤ tr( Λ_r^† U_{r,1}^H U_Y Λ_Y U_Y^H U_{r,1} Λ_r^† ),   (37)

and the minimum is achieved when U_Y = U_{r,1}.

Substituting the optimal U X , U Y , V X , and V Y into the objective function and the constraints of (P2), we obtain the following:

min_{Λ_X, Λ_Y}  tr( Λ_s^† Λ_X Λ_s^† ) + tr( Λ_r^† Λ_Y Λ_r^† )
subject to  diag{ I − Λ_X Λ_Y (Λ_X + I)^{-1} (Λ_Y + I)^{-1} } ⪰^w ρ.   (38)

Define a_i and b_i as the i-th diagonal entries of (Λ_s^†)² and (Λ_r^†)², respectively. Further, define x_i and y_i as the i-th diagonal entries of Λ_X and Λ_Y, respectively. Then, problem (P2) is converted to a scalar form:

(P3): min_{x_i, y_i}  Σ_{i=1}^{L} ( a_i x_i + b_i y_i )   (39)
subject to  Σ_{i=1}^{k} (y_i + x_i + 1)/(y_i + x_i + y_i x_i + 1) ≤ Σ_{i=1}^{k} ρ_i,  k = 1, …, L,
x_i ≥ 0, y_i ≥ 0, ∀i.   (40)

Unfortunately, the constraint (40) is non-convex, and the problem cannot be solved efficiently. We then propose the following two convex bounds for each summand on the left-hand side of (40):

(y_i + x_i + 1)/(y_i + x_i + y_i x_i + 1) ≤ (y_i + x_i + 2)/(y_i + x_i + y_i x_i + 1) = 1/(x_i + 1) + 1/(y_i + 1),   (41)
(y_i + x_i + 1)/(y_i + x_i + y_i x_i + 1) ≥ (y_i + x_i)/(y_i + x_i + y_i x_i),   (42)

where the equality in (41) follows from y_i + x_i + y_i x_i + 1 = (x_i + 1)(y_i + 1), and both right-hand sides are convex in (x_i, y_i).

Replacing the corresponding term in (40) by the right-hand side (RHS) of (41) or (42) while keeping the same objective function, we obtain the upper bound or the lower bound problem of the original one, respectively: tightening the constraint with (41) yields an objective value no smaller than the optimum, while relaxing it with (42) yields one no larger. Both the lower bound and the upper bound problems can be solved by existing convex optimization tools based on the interior point method, e.g., CVX [20, 23].
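To illustrate, here is a small CVXPY sketch of the upper bound problem (the paper uses CVX under MATLAB; this Python analogue and the values of a, b, and ρ are our own illustrative assumptions):

```python
import numpy as np
import cvxpy as cp

L = 4
rng = np.random.default_rng(1)
a = rng.uniform(0.5, 2.0, L)      # i-th diagonal entries of (Lambda_s^dagger)^2
b = rng.uniform(0.5, 2.0, L)      # i-th diagonal entries of (Lambda_r^dagger)^2
rho = np.full(L, 0.4)             # per-stream MSE targets, rho_i < 1

x = cp.Variable(L, nonneg=True)
y = cp.Variable(L, nonneg=True)
# Upper bound constraint from (41): partial sums of 1/(x_i+1) + 1/(y_i+1)
lhs = cp.cumsum(cp.inv_pos(x + 1) + cp.inv_pos(y + 1))
prob = cp.Problem(cp.Minimize(a @ x + b @ y), [lhs <= np.cumsum(rho)])
prob.solve()
print(prob.value, x.value, y.value)
```

The lower bound problem is obtained in the same way by substituting the RHS of (42) for each summand.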

4 Algorithm for the upper bound problem

Minimizing the upper bound problem yields a power allocation that remains feasible for the original problem, since its constraint is tighter than (40); hence, the QoS requirements are still guaranteed. We therefore take a detailed look into the upper bound problem and target a more efficient solution. The upper bound problem is restated as follows:

min_{x_i, y_i}  Σ_{i=1}^{N} ( a_i x_i + b_i y_i )
subject to  Σ_{i=1}^{k} ( 1/(x_i + 1) + 1/(y_i + 1) ) ≤ Σ_{i=1}^{k} ρ_i,  k = 1, …, N,
x_i ≥ 0, y_i ≥ 0, ∀i.   (43)

It is hard to obtain a closed-form solution even with the Karush-Kuhn-Tucker (KKT) conditions. Observing the symmetry between x and y, we can apply the primal decomposition method [23] to break the problem down into two simpler subproblems.

4.1 The decomposition method

Problem (43) can be decomposed into two parallel subproblems together with a master problem connecting them as a bridge [23]. Defining a new auxiliary vector t = [t_1, t_2, …, t_N]^T, the subproblems are as follows:

Φ_x(t): min_{x_i} Σ_{i=1}^{N} a_i x_i  subject to  Σ_{i=1}^{k} 1/(x_i + 1) ≤ t_k,  k = 1, …, N,  x_i ≥ 0, ∀i,
Φ_y(t): min_{y_i} Σ_{i=1}^{N} b_i y_i  subject to  Σ_{i=1}^{k} 1/(y_i + 1) ≤ Σ_{i=1}^{k} ρ_i − t_k,  k = 1, …, N,  y_i ≥ 0, ∀i,

and the master problem is

min_t  Φ_x(t) + Φ_y(t).   (44)

In many cases, the subproblems do not have closed-form solutions. Therefore, the search for the optimum t in the master problem can be done iteratively with the subgradient method. Note that the convergence of the decomposed problem to the global optimal point is guaranteed since (43) is convex.
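The sketch below shows one possible master iteration under these assumptions: the subgradient of Φ_x(t) with respect to t_k is −μ_k and that of Φ_y(t) is +ν_k, where μ and ν are the optimal multipliers of the respective constraints, so a master subgradient is ν − μ. For illustration only, the subproblems are solved here with CVXPY (so that the duals are directly available) rather than with the MLWF routine developed in the next subsection; all numerical values are assumptions.

```python
import numpy as np
import cvxpy as cp

def solve_sub(c, bound):
    """min c^T z  s.t.  cumsum(1/(z_i+1)) <= bound, z >= 0; returns value and duals."""
    z = cp.Variable(len(c), nonneg=True)
    con = [cp.cumsum(cp.inv_pos(z + 1)) <= bound]
    prob = cp.Problem(cp.Minimize(c @ z), con)
    prob.solve()
    return prob.value, con[0].dual_value

N = 4
a = np.array([1.0, 1.2, 0.8, 1.5]); b = np.array([0.9, 1.1, 1.3, 0.7])
rho_cum = np.cumsum(np.full(N, 0.4))
t = 0.5 * rho_cum                       # feasible initial split of the budget
for it in range(200):
    fx, mu = solve_sub(a, t)            # Phi_x(t)
    fy, nu = solve_sub(b, rho_cum - t)  # Phi_y(t)
    g = nu - mu                         # subgradient of Phi_x + Phi_y at t
    t = np.clip(t - (0.1 / (it + 1)) * g, 1e-6, rho_cum - 1e-6)  # projected step
print("master objective:", fx + fy)
```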

4.2 Solving the subproblems

The Lagrangian corresponding to Φ_x(t) is as follows:

L = Σ_{k=1}^{N} a_k x_k + Σ_{k=1}^{N} μ_k ( Σ_{i=1}^{k} 1/(1 + x_i) − t_k ) − Σ_{k=1}^{N} γ_k x_k,   (45)
(45)

and the corresponding KKT conditions are the following:

a_k − Σ_{i=k}^{N} μ_i · 1/(1 + x_k)² − γ_k = 0,   (46)
μ_k ( Σ_{i=1}^{k} 1/(1 + x_i) − t_k ) = 0,   (47)
Σ_{i=1}^{k} 1/(1 + x_i) − t_k ≤ 0,   (48)
γ_k x_k = 0,   (49)
x_k ≥ 0,  μ_k ≥ 0,  γ_k ≥ 0.   (50)

For simplicity, we define μ̃_k = Σ_{i=k}^{N} μ_i. Multiplying both sides of (46) by x_k and combining with (49) gives the following:

x_k a_k = μ̃_k x_k / (1 + x_k)²  ⟹  x_k ( a_k (1 + x_k)² − μ̃_k ) = 0.   (51)

After a straightforward calculation, we obtain the following:

x_k = ( μ̃_k^{1/2} a_k^{−1/2} − 1 )^+ = { μ̃_k^{1/2} a_k^{−1/2} − 1, μ̃_k ≥ a_k;  0, μ̃_k < a_k }.   (52)

Based on (52), the KKT conditions are reduced to the following:

x_k = ( μ̃_k^{1/2} a_k^{−1/2} − 1 )^+,   (53)
μ̃_k ≤ μ̃_{k−1},   (54)
Σ_{i=1}^{k} 1/(1 + x_i) − t_k ≤ 0.   (55)

Next, we propose an efficient algorithm that finds a solution satisfying (53), (54), and (55) simultaneously.

Algorithm 1 Multi-level water-filling algorithm

The standard water-filling algorithm embedded in step 3 can be found in Appendix 2. For an explicit demonstration, we present the algorithm as a diagram in Figure 2.

Figure 2. Proposed multi-level water-filling algorithm.
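Since Algorithm 1 is given only as a flow diagram, the following Python routine is our hedged reconstruction of it: each segment of consecutive streams is solved by the standard water-filling of Appendix 2 (the water_fill helper sketched there), the partial-sum conditions (55) are checked, and the segment is shortened at the last violated index l_0 before the next level is computed. The choice of the last violated index is our reading of the inner loop; it preserves the decreasing-level property of Lemma 1 below.

```python
import numpy as np

def mlwf(a, rho):
    """Multi-level water-filling (reconstruction): minimize sum(a_i x_i)
    subject to sum_{i<=k} 1/(x_i+1) <= sum_{i<=k} rho_i for k = 1..N."""
    # water_fill(a, budget) is the standard routine sketched in Appendix 2.
    a = np.asarray(a, float); rho = np.asarray(rho, float)
    N = len(a); x = np.zeros(N)
    L = 0                                        # start of current segment (0-based)
    while L < N:
        H = N
        while True:                              # inner loop: steps 3 -> 4 -> 5 -> 3
            x[L:H], _ = water_fill(a[L:H], np.sum(rho[L:H]))
            gap = np.cumsum(1.0 / (x[:H] + 1.0)) - np.cumsum(rho[:H])
            bad = np.where(gap[L:H - 1] > 1e-9)[0]   # violated conditions in (55)
            if bad.size == 0:
                break                            # hypothesis verified for this level
            H = L + bad[-1] + 1                  # shorten the segment at k = l0
        L = H                                    # fix this water level, move on
    return x
```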

We next prove the optimality of the above MLWF algorithm. Since the problem is convex, the output of the algorithm is the optimal solution if and only if all of the KKT conditions are satisfied. The conditions (55) are simultaneously satisfied by step 4 of the algorithm. Condition (53) is satisfied by the nature of the water-filling algorithm, that is, the water levels are always non-negative. The MLWF algorithm satisfies (54) as well, as shown by the following lemma.

Lemma 1. Successive water levels achieved by the proposed MLWF algorithm are ordered decreasingly.

Proof. The algorithm has two loops: the inner loop 'steps 3 → 4 → 5 → 3' and the outer loop 'steps 3 → 4 → 5 → 6 → 3.' Each time the inner loop finishes, one water level μ̃(L, H) is obtained, and we proceed to compute the other water levels. In the inner loop, we first adopt the water level given by the standard water-filling algorithm. Our hypothesis is that this water level satisfies all of the conditions in (55). We then check the hypothesis by searching for a k = l_0 at which the corresponding condition in (55) is violated.

Assume l_0 with 1 ≤ L < l_0 < H ≤ N is the point at which the inner loop finishes, i.e., μ̃(L, l_0) satisfies all the conditions from L to l_0. At this point, Σ_{i=L}^{l_0} 1/(x_i + 1) = Σ_{i=L}^{l_0} ρ_i holds, due to the fact that the constraint is met with equality in the standard water-filling algorithm. Moreover, μ̃(L, H) < μ̃(L, l_0) holds since μ̃(L, H) is not large enough to satisfy (55) with k = l_0, though it satisfies all of the conditions from k = H to k = l_0 + 1. On the other hand, we have

μ̃(L, l) < μ̃(L, H),  l_0 + 1 ≤ l ≤ H,   (56)

since, based on our assumption, the conditions in (55) with indices greater than l_0 are satisfied by the water level μ̃(L, H). From (56) and μ̃(L, H) < μ̃(L, l_0), we conclude that

μ̃(L, l) < μ̃(L, l_0),  l_0 + 1 ≤ l ≤ H.   (57)

The search for the next water level (in the following inner loop) is between H = N and L = l_0 + 1. We then only need to prove that μ̃(l_0 + 1, l) < μ̃(L, l_0) for l_0 + 1 ≤ l ≤ H. First, we know that μ̃(L, l) satisfies

Σ_{i=L}^{l} 1/(x_i + 1) = Σ_{i=L}^{l} ρ_i.   (58)

From the last inner loop, we know that applying μ̃(L, l) to the condition Σ_{i=L}^{l_0} 1/(x_i + 1) ≤ Σ_{i=L}^{l_0} ρ_i does not work, i.e.,

Σ_{i=L}^{l_0} 1/(x_i + 1) > Σ_{i=L}^{l_0} ρ_i.   (59)

Therefore, using μ̃(L, l) in both (58) and (59) yields Σ_{i=l_0+1}^{l} 1/(x_i + 1) < Σ_{i=l_0+1}^{l} ρ_i. On the other hand, μ̃(l_0 + 1, l) gives Σ_{i=l_0+1}^{l} 1/(x_i + 1) = Σ_{i=l_0+1}^{l} ρ_i. Therefore, we infer μ̃(l_0 + 1, l) < μ̃(L, l). Together with (57), we conclude μ̃(l_0 + 1, l) < μ̃(L, l_0). □

Meanwhile, the subproblem Φ_y(t) can be solved with the same MLWF algorithm, which is not restated here.

Algorithm 2 The master algorithm

This algorithm is presented as a diagram in Figure 3.

Figure 3. Proposed master algorithm.

5 Simulation results

In this section, we numerically examine the performance of the proposed method. For all examples, we assume that the channel matrices H_r and H_s have independent and identically distributed Gaussian entries with zero mean and unit variance. We consider a 6 × 6 × 6 MIMO relay system, and results are averaged over 1,000 Monte Carlo runs. The upper bound problem is solved both by the proposed decomposition algorithm and by the CVX convex optimization toolbox [20], while the lower bound problem is solved only by CVX. Moreover, we also compare the proposed solutions with a suboptimal method and a trivial method, listed in the following:

  1. Diagonalization method. We reduce the problem to a scalar one by matching the precoding matrices to the SVDs of the channel matrices, i.e., by choosing the structures B = V_s Λ_B and F = V_r Λ_F U_s^H so that both hops are diagonalized. The optimization problem (10) is then simplified to the following:

    (P4): min_{x_i, y_i}  Σ_{i=1}^{L} ( a_i x_i + b_i y_i )
    subject to  (y_i + x_i + 1)/(y_i + x_i + y_i x_i + 1) ≤ ρ_i, ∀i,  x_i ≥ 0, y_i ≥ 0, ∀i,   (60)

    where a_i, b_i, x_i, and y_i are the i-th diagonal entries of (Λ_s^†)², (Λ_r^†)², Λ_B, and Λ_F, respectively. Note that the problem in (60) is a special case of (P3) in (39).

  2. Naive method. We consider the solution that satisfies all of the constraints in (43) with equality (as verified by the snippet below). In this method, x and y are given by the following:

    x = y = 2/ρ − 1 (element-wise).
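A quick check (with illustrative values) confirms that this choice meets every constraint in (43) with equality: with x_i = y_i = 2/ρ_i − 1, each summand 1/(x_i + 1) + 1/(y_i + 1) equals ρ_i/2 + ρ_i/2 = ρ_i, so all partial sums are tight.

```python
import numpy as np

rho = np.array([0.2, 0.3, 0.4])          # example QoS targets, rho_i < 1
x = y = 2.0 / rho - 1.0                  # the naive allocation
print(np.allclose(1/(x + 1) + 1/(y + 1), rho))   # True: constraints are tight
```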

In the first example, we take equal QoS requirements for all data streams. The total consumed power versus the MSE requirement is depicted in Figure 4 for the upper bound with the proposed method, the upper bound with CVX, the lower bound, the suboptimal diagonalization method with CVX, and the naive method. It can be observed that the performance of the decomposition method is exactly the same as that obtained from CVX. We can also observe that the lower and upper bounds are quite tight at high SNR, where they meet each other. The diagonalization method shows weaker performance than the proposed method, with roughly 2 dB more power consumption. The naive method has the poorest performance and requires roughly 3 dB more power than the proposed method.

Figure 4. Performance with equal QoS constraints.

The performance with unequal QoS constraints is presented in Figure 5, where the QoS vector is chosen as ρ = ρ_0 [1, 1.5, 2, 2.5, 3, 3]^T for a scalar ρ_0. One may notice that all curves are about 2 dB higher than those in Figure 4.

Figure 5. Performance with unequal QoS constraints, 3ρ_1 = 2ρ_2 = (3/2)ρ_3 = (6/5)ρ_4 = ρ_5 = ρ_6.

This is because the smallest QoS constraint drags down the overall performance and thus demands more power. Once again, we find that the upper bound and the lower bound are quite close to each other, especially at high SNR values. Meanwhile, the proposed method coincides with the upper bound from CVX. Moreover, the advantage of the proposed method over the suboptimal diagonalization method and the naive method is fairly explicit.

We also measure the average running time of the proposed method and of the upper bound solved by CVX, both measured in MATLAB. The results are shown in Table 1.

Table 1. Average time spent by each approach.

Remark 3. Note that resorting to simulation cannot serve as a rigorous complexity analysis. Nevertheless, the large difference in running time fairly indicates the much higher efficiency of the proposed decomposition method over the regular convex optimization tool.

The effect of the number of antennas at each node is also investigated. We set the QoS requirement for all data streams to 0.2 and vary the number of antennas from 2 to 9. The results are displayed in Figure 6.

Figure 6. Performance of the system with different numbers of antennas, with fixed identical QoS ρ = 0.2 for all data streams.

6 Conclusion

In this paper, we considered the joint design of the source precoding matrix and the relay precoding matrix in a two-hop AF MIMO relay network. We minimized the power consumption subject to a set of predefined QoS constraints on each data stream. Using matrix calculus and majorization theory, we simplified the original matrix-valued problem to a simpler scalar-valued one and proposed two bounding problems that are convex and can be solved efficiently. We specifically designed a primal decomposition method to solve the upper bound problem with less complexity than directly applying the interior point method. Numerical examples were provided to corroborate the proposed studies.

Appendix 1

Basics of majorization theory

Here, we briefly introduce the basics of majorization theory while the more comprehensive discussion can be found in [12].

Definition 1. For any vector x = [x_1, x_2, …, x_n] ∈ R^n, let

x_(1) ≤ ⋯ ≤ x_(n)

denote the elements of x in increasing order. Similarly, let

x_[1] ≥ ⋯ ≥ x_[n]

denote the elements of x in decreasing order.

Definition 2. For any x, y ∈ R^n, x is majorized by y if

Σ_{i=1}^{k} x_(i) ≥ Σ_{i=1}^{k} y_(i),  1 ≤ k ≤ n − 1,   (61)
Σ_{i=1}^{n} x_(i) = Σ_{i=1}^{n} y_(i),   (62)

and this is denoted by x ⪯ y or y ⪰ x.

Definition 3. [12] For any x, y ∈ R^n, x is weakly supermajorized by y if

Σ_{i=1}^{k} x_(i) ≥ Σ_{i=1}^{k} y_(i),  1 ≤ k ≤ n.   (63)

We denote this by x ⪯^w y (or equivalently y ⪰^w x). Also, x is weakly submajorized by y if

Σ_{i=1}^{k} x_[i] ≤ Σ_{i=1}^{k} y_[i],  1 ≤ k ≤ n.   (64)

We denote this by x ⪯_w y (or equivalently y ⪰_w x).

Lemma 2. (9.B.1 in [12]) Let M be an n × n Hermitian matrix, with the vector diag{M} denoting its diagonal elements and the vector λ{M} containing its eigenvalues. Then,

diaḡ{M} ⪯ diag{M} ⪯ λ{M},

where diaḡ{M} is the vector whose entries all equal the mean of diag{M}.

Conversely, given vectors a and b with a ⪯ b, there exists an n × n Hermitian matrix M whose diagonal elements are a and whose eigenvalues are b.

Lemma 3. (5.A.9 in [12]) For any a and b satisfying a ⪯^w b, there exists a vector x such that

x ⪯ b,  x ≤ a.

Appendix 2

Standard water-filling algorithm

A standard water-filling algorithm is used to solve the following convex problem, which appears in our MLWF algorithm:

min_{x_i}  Σ_{i=1}^{N} a_i x_i  subject to  Σ_{i=1}^{N} 1/(x_i + 1) ≤ ρ,  x_i ≥ 0, ∀i.   (65)

The water-filling algorithm that yields the optimum x i ’s is given by the following:

  • Input: the number of positive eigenvalues N, the inverse eigenvalues {a_i}_{i=1}^{N}, and a positive feasible budget ρ.

  • Output: the allocated powers {x_i}_{i=1}^{N} and the optimum water level μ.

  1. Sort all the eigenvalues in increasing order and set a_{N+1} = ∞. Set L = N.

  2. If a_L = a_{L+1}, then set L = L − 1. Set μ = a_N.

  3. If μ ≤ ( Σ_{i=1}^{L} a_i^{1/2} / (ρ − (N − L)) )², then the optimum water level is μ = ( Σ_{i=1}^{L} a_i^{1/2} / (ρ − (N − L)) )² with x_i = ( μ^{1/2} a_i^{−1/2} − 1 )^+; otherwise, go to the next step.

  4. Set L = L − 1 and μ = a_L, and go back to step 3.
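Under the reconstruction above (with the acceptance test in step 3 read as a consistency check on the candidate level), a compact implementation of this routine, used by the MLWF sketch in Section 4, could look as follows; it assumes ρ < N (guaranteed since each ρ_i < 1) and returns the allocation in the original ordering of the inputs:

```python
import numpy as np

def water_fill(a, rho):
    """Standard water-filling for (65): minimize sum(a_i x_i) subject to
    sum_i 1/(x_i + 1) <= rho, x_i >= 0 (a sketch of the steps above)."""
    a = np.asarray(a, dtype=float)
    s = np.sort(a)                                   # step 1: increasing order
    N = len(a)
    for L in range(N, 0, -1):                        # steps 2-4: shrink active set
        denom = rho - (N - L)                        # budget left for active streams
        if denom <= 0:
            continue
        mu = (np.sum(np.sqrt(s[:L])) / denom) ** 2   # candidate water level, step 3
        if mu >= s[L - 1] and (L == N or mu <= s[L]):    # KKT consistency of the level
            return np.maximum(np.sqrt(mu / a) - 1.0, 0.0), mu   # Eq. (52)
    raise ValueError("infeasible QoS budget rho")

x, mu = water_fill([1.0, 4.0], rho=1.6)
print(x, mu)   # approx [0.667, 0.0] and 2.778
```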

Endnotes

^a Dual-hop relay networks are of particular importance and have been the most frequently studied type in the past decades.

^b We assume perfect channel knowledge in this paper; the channel estimation can be performed by the method in [24].

References

  1. Laneman JN, Wornell GW: Distributed space-time block coded protocols for exploiting cooperative diversity in wireless networks. IEEE Trans. Inform. Theory 2003, 49: 2415-2425.

  2. Laneman JN, Tse DNC, Wornell GW: Cooperative diversity in wireless networks: efficient protocols and outage behavior. IEEE Trans. Inform. Theory 2004, 50: 3062-3080.

  3. van der Meulen EC: Three-terminal communication channels. Adv. Appl. Prob. 1971, 3: 120-154.

  4. Information transmission through a channel with relay, Technical Report B7-67, The Aloha System, University of Hawaii, Honolulu. 1976.

  5. Cover T, El Gamal A: Capacity theorems for the relay channel. IEEE Trans. Inform. Theory 1979, 25: 572-584.

  6. Telatar IE: Capacity of multi-antenna Gaussian channels. Eur. Trans. Telecom. 1999, 10: 585-595.

  7. Tang X, Hua Y: Optimal design of non-regenerative MIMO wireless relays. IEEE Trans. Wireless Commun. 2007, 6(4): 1398-1407.

  8. Hammerstrom I, Wittneben A: Power allocation schemes for amplify-and-forward MIMO-OFDM relay links. IEEE Trans. Wireless Commun. 2007, 6(8): 2798-2802.

  9. Yang S, Belfiore JC: Towards the optimal amplify-and-forward cooperative diversity scheme. IEEE Trans. Inform. Theory 2007, 53(9): 3114-3126.

  10. Khoshnevis B, Yu W, Adve R: Grassmannian beamforming for MIMO amplify-and-forward relaying. IEEE J. Sel. Areas Commun. 2008, 26: 1397-1407.

  11. Hua Y: An overview of beamforming and power allocation for MIMO relays. In Proceedings of the IEEE Military Communications Conference. San Jose, CA; 2010: 375-380.

  12. Marshall AW, Olkin I: Inequalities: Theory of Majorization and Its Applications. Academic Press, New York; 1979.

  13. Rong Y, Tang X, Hua Y: A unified framework for optimizing linear non-regenerative multicarrier MIMO relay communication systems. IEEE Trans. Signal Process. 2009, 57: 4837-4851.

  14. Rong Y, Hua Y: Optimality of diagonalization of multi-hop MIMO relays. IEEE Trans. Wireless Commun. 2009, 8: 6068-6077.

  15. Palomar DP, Lagunas MA, Cioffi JM: Optimum linear joint transmit-receive processing for MIMO channels with QoS constraints. IEEE Trans. Signal Process. 2004, 52(5): 1179-1197.

  16. Zhang R, Chai CC, Liang Y: Joint beamforming and power control for multiantenna relay broadcast channel with QoS constraints. IEEE Trans. Signal Process. 2009, 57(2): 726-737.

  17. Rong Y: Multi-hop non-regenerative MIMO relays - QoS considerations. IEEE Trans. Signal Process. 2011, 59(1): 290-303.

  18. Fu Y, Yang L, Zhu W, Liu C: Optimum linear design of two-hop MIMO relay networks with QoS requirements. IEEE Trans. Signal Process. 2011, 59(5): 2257-2269.

  19. Mohammadi J, Gao F, Rong Y: Design of amplify and forward MIMO relay networks with QoS constraint. In Proceedings of the IEEE Global Telecommunications Conference (GLOBECOM 2010). Miami, USA; 2010: 6-10.

  20. Grant M, Boyd S: CVX: Matlab software for disciplined convex programming, version 2.0 beta. 2012. http://cvxr.com/cvx

  21. Kay SM: Fundamentals of Statistical Signal Processing: Estimation Theory. Prentice-Hall, Englewood Cliffs; 1993.

  22. Petersen KB, Pedersen MS: The Matrix Cookbook. 2007.

  23. Boyd S, Vandenberghe L: Convex Optimization. Cambridge University Press, New York; 2004. Available online at http://www.stanford.edu/~boyd/cvxbook.html

  24. Gao F, Cui T, Nallanathan A: On channel estimation and optimal training design for amplify and forward relay networks. IEEE Trans. Wireless Commun. 2008, 7(5): 1907-1916.


Acknowledgements

This work was supported in part by the National Basic Research Program of China (973 Program) under grants 2013CB336600 and 2012CB316102, the National Natural Science Foundation of China under grant 61201187, and the Tsinghua University Initiative Scientific Research Program under grant 20121088074. Jafar Mohammadi’s work was also supported by Helmholtz Research School on Security Technologies. Y. Rong’s work was supported under Australian Research Council’s Discovery Projects funding scheme (project numbers DP110100736, DP110102076).

Author information

Corresponding author

Correspondence to Feifei Gao.

Additional information

Competing interests

The authors declare that they have no competing interests.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License ( https://creativecommons.org/licenses/by/2.0 ), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


About this article

Cite this article

Mohammadi, J., Gao, F., Rong, Y. et al. Joint source and relay design for two-hop amplify-and-forward relay networks with QoS constraints. J Wireless Com Network 2013, 108 (2013). https://doi.org/10.1186/1687-1499-2013-108
