
Diversity analysis, code design, and tight error rate lower bound for binary joint network-channel coding

Abstract

Joint network-channel codes (JNCC) can improve the performance of communication in wireless networks, by combining, at the physical layer, the channel codes and the network code as an overall error-correcting code. JNCC is increasingly proposed as an alternative to a standard layered construction, such as the OSI-model. The main performance metrics for JNCCs are scalability to larger networks and error rate. The diversity order is one of the most important parameters determining the error rate. The literature on JNCC is growing, but a rigorous diversity analysis is lacking, mainly because of the many degrees of freedom in wireless networks, which makes it very hard to prove general statements on the diversity order. In this article, we consider a network with slowly varying fading point-to-point links, where all sources also act as relays and additional non-source relays may be present. We propose a general structure for JNCCs to be applied in such a network. In the relay phase, each relay transmits a linear transform of a set of source codewords. Our main contributions are the proposition of an upper and a lower bound on the diversity order, a scalable code design and a new lower bound on the word error rate to assess the performance of the network code. The lower bound on the diversity order is only valid for JNCCs where the relays transform only two source codewords. We then validate this analysis with an example which compares the JNCC performance to that of a standard layered construction. Our numerical results suggest that as networks grow, it is difficult to perform significantly better than a standard layered construction, both at a fundamental level, expressed by the outage probability, and at a practical level, expressed by the word error rate.

1 Introduction

Point-to-point communication has revealed many of its secrets. Driven by new applications, research in wireless communication is now focusing more on the optimization of communication in wireless networks. For example, the joint operation of multiple network layers can be optimized, denoted as cross-layer design [1, 2], thereby leaving the classical layered architectures, such as the seven-layer open systems interconnect (OSI) model ([3], p. 20). Another example of network optimization is cooperative communication, where multiple nodes in the network cooperate to improve their error performance. Cooperation may occur in many forms at different layers, e.g., cooperative channel coding at the physical layer and network coding at the network layer. Network coding refers to the case where the intermediate nodes in the network are allowed to perform encoding operations over multiple received streams from different sources. In a standard layered construction, the decoding of the network code is performed at the network layer, after the point-to-point transmissions have been decoded at the physical layer. Channel coding refers to the case where nodes perform coding over one point-to-point wireless link only. Cooperative channel coding is achieved by letting one or more relays transmit redundant bits for one source at a time. Usually, channel coding and network coding are studied separately (e.g., [4–6] for cooperative channel coding and [7–11] for network coding).

Standard linear network coding consists of taking linear combinations of several source packets. In general, non-binary coefficients are used in the linear combinations. In JNCC, cooperative channel coding (e.g., decode and forward [12]) and cross-layer design are combined, by using the network code for decoding at the physical layer. The rationale behind JNCC is to improve the joint error rate performance (i.e., the average error rate performance over all users participating in the network) by letting the redundancy of the network code help to decode the noisy channel output [13]. In that case, a joint optimization of the network and channel code is useful. For example, one can opt to let the network and channel code be represented by one parity-check matrix of a binary code, referred to as joint network-channel coding (JNCC). Hence, the coefficients multiplying the packets in the case of standard linear network coding are replaced by matrices in the case of JNCC.

In most cases, the two most important performance metrics are $(R, P_e)$, where R is the information rate and $P_e$ is the error rate. Here, we consider a fixed information rate R, so that the aim is to minimize $P_e$ for a given point-to-point channel quality, expressed by γ, the signal-to-noise ratio (SNR) per symbol. Expressing the asymptotic (for large γ) error rate as $P_e = \frac{1}{g\,\gamma^{d}}$, where g and d are defined as the coding gain and the diversity order, respectively, improving the performance refers to maximizing first d and then g (because d has the larger impact). Next to minimizing the error rate, scalability of the code design (e.g., to larger networks) is also an important criterion often recurring in the literature. JNCC is increasingly proposed as an alternative to a standard layered construction, such as the OSI model. However, it must be verified that important metrics, such as the diversity order d and the scalability to large networks, are not negatively affected.

Binary JNCC has received much attention in recent years. Pioneering articles [14, 15] designed turbo codes and LDPC codes, respectively, for the multiple access relay channel (MARC) and for the two-way relay channel [16]. However, the code design was not immediately scalable to general large networks and did not contain the required structure to achieve full diversity. The study of Hausl et al. [14–16] was followed by the interesting study of Bao et al. [17], presenting a JNCC that is scalable to large networks. However, this JNCC was not structured to achieve full diversity and has weak points from a coding point of view [18]. A deficiency in the literature, for general networks with a number of sources and relays, is the lack of a detailed diversity analysis in the case that the sources can act as relays (which is for example the model assumed by [17]). The effect of the parameters of the JNCC on the diversity order is in general not known, because of the many degrees of freedom in such networks. Related to this, we mention [19, 20], where the authors designed a JNCC for the case where the sources cannot act as relays, but other nodes play the role of relay to communicate to one destination. As the source nodes are not allowed to act as relay nodes in this model, the diversity analysis in [19, 20] is different from ours.

In this article, we consider a JNCC where the network code forms an integral part of the overall error-correcting code that is used at the destination to decode the information from the sources. The rest of the article is organized as follows. In Section ‘Diversity analysis of JNCC’, we perform a diversity analysis, leading to an upper bound on the diversity order of any linear binary JNCC following our system model, and to a lower bound on the diversity order for a particular subset of linear binary JNCCs. The upper and lower bounds depend on the parameters of the JNCC and can be used to verify whether a particular JNCC has the potential to achieve full diversity on a certain network. Second, in Section ‘Practical JNCC for $n_{u_r}=2$’, a specific JNCC of the LDPC-type is proposed that achieves full diversity for a well identified set of wireless networks. The scalability of this specific JNCC to large networks is discussed. The coding gain g is not considered in the body of the article and the parameters of our proposed code may be further optimized by applying techniques such as in [19], to maximize g. To assess the performance of the proposed JNCC, we determine the outage probability, a well-known lower bound on the word error rate, in Section ‘Lower bound for the WER’. We also present a tighter word error rate lower bound in Section ‘Calculation of a tighter lower bound on WER’, which takes into account the particular structure of the JNCC. In Section ‘Numerical results’, the numerical results corroborate the established theory. We also briefly comment on the coding gain achieved by the proposed JNCC and conclusions are drawn for different classes of large networks.

The main contribution of this article is to indicate the effect of the parameters of the JNCC on the diversity order, for networks that fit our channel model. More specifically, we propose an upper and a lower bound on the diversity order, a scalable code design and a new lower bound on the word error rate that is tighter than the outage probability and thus better suited to assess the performance of the overall error-correcting code. The main contributions are summarized in the lemmas, propositions and corollaries. These can be a guide for any coding theorist designing JNCCs. Further, our numerical results suggest that as networks grow, it is difficult to perform significantly better than a standard layered construction, both at a fundamental level, expressed by the outage probability, and at a practical level, expressed by the word error rate. This conjecture is important, because one will now need to clearly motivate the use of JNCC instead of a standard layered construction, given the extra efforts that are required for JNCC.

This article extends the study published in [18] by also considering imperfect source-relay channels, by considerably extending the diversity analysis, by providing an achievability proof for the diversity order of the proposed JNCC, by clearly indicating the set of wireless networks where the proposed JNCC is diversity-optimal, by providing a tighter lower bound on the word error rate, and by providing more numerical results.

2 Joint network-channel coding

We first illustrate joint network-channel coding by means of a simple example. Consider two sources orthogonally broadcasting a vector of symbols, mapped from the binary vectors s1 and s2, respectively, to a relay and a destination. This channel is denoted as a multiple access relay channel (MARC) in the literature. Supposing that the relay is able to decode the received symbols, the relay computes a binary vector r1, which is mapped to symbols and transmitted to the destination. The relation between all bits is expressed by the JNCC, whose parity-check matrix has the following general form,

$$H = \left[\begin{array}{ccc} H_p & 0 & 0 \\ 0 & H_p & 0 \\ 0 & 0 & H_p \\ H_1^1 & H_2^1 & \tilde{H}_1 \end{array}\right] \qquad (1)$$

The matrix $H_p$ represents the parity-check matrix for the point-to-point channel code. Each of the binary vectors s1, s2, and r1 can be separately decoded using this code. The bottom part of H represents the generalized linear network code (GLNC), which we denote as $H_{\text{GLNC}} = [H_1^1\ \ H_2^1\ \ \tilde{H}_1]$. It expresses the relation between r1, s1, and s2. More specifically, we have

$$\tilde{H}_1 r_1 = H_1^1 s_1 + H_2^1 s_2. \qquad (2)$$

Note that the GLNC includes standard network codes used in an OSI communication model as a special case. In the latter case, the matrices $H_j^i$ and $\tilde{H}_i$ (considering more than one relay in general) are identity matrices or all-zero matrices, so that the network code simplifies to the relay packet being a linear combination of source packets, also expressed as XORing of packets or symbol-wise addition of packets.
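
To make this special case concrete, the following minimal Python sketch (purely illustrative; the packet length and contents are arbitrary) shows how a relay packet formed by XORing two source packets lets the destination recover an erased source packet:

```python
import numpy as np

rng = np.random.default_rng(0)
L = 8                              # toy packet length
s1 = rng.integers(0, 2, L)         # source packet of user 1
s2 = rng.integers(0, 2, L)         # source packet of user 2

# Standard binary network coding: the relay XORs the two packets,
# i.e., the matrices in H_GLNC are identity matrices.
r1 = s1 ^ s2

# Suppose the destination receives s2 and r1, but the block carrying s1 is erased.
s1_recovered = s2 ^ r1
assert np.array_equal(s1_recovered, s1)
```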

Ideally, the overall matrix H conforms to optimized degree distributions that specify the LDPC code. When the channels between sources and relay are perfect, we can drop the first three sets of rows and only keep the GLNC, represented by $H_{\text{GLNC}}$; in this case the information bits of the code are s1 and s2, and r1 contains the parity bits. This is still a JNCC as the redundancy in the network code is used to decode the received symbols on the physical layer at the destination. In [21, 22], it is proved that the matrices $H_p$ do not affect the diversity order in the case of the MARC.

3 System model

We consider wireless networks with $m_s$ sources directly communicating to a common destination (e.g., cellphones communicating to a base station). Two time-orthogonal phases are distinguished. In the source phase, the sources orthogonally broadcast their respective source packet. In the following relay phase, the relays orthogonally broadcast their respective packet. All considered sources overhear each other during the source phase, and act as relays in the relay phase. Other nodes, not acting as a source, might be present in the network (i.e., overhearing the sources) and also act as relays. Hence, we consider a total of $m_r$ relays, where $m_r \ge m_s$. This general network model, which is practically relevant as it fits many applications, is adopted in, e.g., [17]. Take for example any large network and consider a volume in space (cf. picocells or femtocells) where all nodes can overhear each other. These nodes form sub-networks that can be described by our proposed model. Note that in the literature, sometimes other models are assumed, such as the M-N-1 model [19, 20], where M sources are helped by N relays (the relays are nodes different from the sources) to communicate to one destination.

All devices have one antenna, are half-duplex and transmit orthogonally using BPSK modulation. The K information bits of each source are encoded via point-to-point channel codes into a systematic codeword, denoted as source codeword, of length L, expressed by the column vector $s_{u_s}$ for user $u_s$, $u_s \in [1,\ldots,m_s]$. The parity-check matrix of dimension $(L-K) \times L$ of this point-to-point codeword is denoted by $H_p$, which is the same for each user $u_s$, so that $H_p s_{u_s} = 0$ for all $u_s$. In the relay phase, each relay $u_r$, $u_r \in [1,\ldots,m_r]$, transmits a point-to-point codeword $r_{u_r}$ of length L to the destination, also satisfying $H_p r_{u_r} = 0$. Hence, all slots have equal duration, the coding rate of the point-to-point channels is $R_{c,p} = \frac{K}{L}$, and the overall coding rate is $R_c = \frac{m_s K}{(m_s + m_r)L} = R_{c,p}\frac{m_s}{m_s + m_r}$. We define the fraction of source transmissions in the total number of transmissions as the network coding rate $R_n = \frac{m_s}{m_s + m_r}$, so that $R_c = R_{c,p} R_n$. The overall codeword of length $(m_s + m_r)L$ is expressed by the column vector

$$x = \left[\,s_1^T\ \cdots\ s_{m_s}^T\ \ r_1^T\ \cdots\ r_{m_r}^T\,\right]^T. \qquad (3)$$

The destination declares a word error if it cannot perfectly retrieve all $m_s K$ information bits, and the overall word error rate is denoted by $P_{ew}$.

All relevant channels between different pairs of network nodes are assumed independent, memoryless, with real additive white Gaussian noise and multiplicative real fading (Rayleigh distributed with expected squared value equal to one). The fading coefficient of a wireless link is only known at the receiver side of that link. We consider a slow fading environment with a finite coherence time that is longer than the duration of the source phase and the relay phase, so that the fading gain between two network nodes takes the same value during both phases. We denote the fading gain from node u to the destination as $\alpha_u$, with $E[\alpha_u^2] = 1$. All point-to-point channels have the same average signal-to-noise ratio (SNR), denoted by γ. Differences in average SNR between the channels would not alter the diversity analysis, on the condition that the large-SNR behavior inherent to a diversity analysis refers to all SNRs being large. Denoting the received symbol vector at the destination in timeslot i as $y_i$, the channel equation is

$$\begin{aligned} y_{u_s} &= \alpha_{u_s}\, s'_{u_s} + n_{u_s}, && u_s = 1,\ldots,m_s\\ y_{m_s+u_r} &= \alpha_{u_r}\, r'_{u_r} + n_{m_s+u_r}, && u_r = 1,\ldots,m_r, \end{aligned} \qquad (4)$$

where $n_i \sim \mathcal{CN}(0, \frac{1}{\gamma} I)$ is the noise vector in timeslot i, $s'_{u_s} = 2 s_{u_s} - 1$ and $r'_{u_r} = 2 r_{u_r} - 1$ (BPSK modulation).

Hence, at the destination, each of the $m_s$ independent fading gains between the sources and the destination affects 2L bits (L bits in the source phase and L bits in the relay phase) and each of the $m_r - m_s$ fading gains between the non-source relays and the destination affects L bits, assuming that all $m_r$ relays could decode the messages received from the sources. Hence, from the point of view of the destination, the overall codeword is transmitted on a block fading (BF) channel with $m_r$ blocks, each affected by its own fading gain, where $m_s$ blocks have length 2L and $m_r - m_s$ blocks have length L. This notion will be essential in the subsequent diversity analysis (Section ‘Diversity analysis of JNCC’).

In the source phase, relay $u_r$ attempts to decode the received symbols from sources belonging to the decoding set $S(u_r)$. The users that are successfully decoded at relay $u_r$ are added to its retrieval set, denoted by $R(u_r)$, $R(u_r) \subseteq S(u_r)$, with cardinality $l_{u_r}$. Next, in the relay phase, relay $u_r$ transmits a relay packet, which is a linear transformation of $n_{u_r}$ source codewords originated by the sources from the transmission set $T(u_r) = \{u_1,\ldots,u_{n_{u_r}}\}$ of relay $u_r$, with $T(u_r) \subseteq R(u_r)$. If $l_{u_r} < n_{u_r}$, then relay $u_r$ does not transmit anything. In Section ‘Diversity analysis of JNCC’, we show that $n_{u_r}$ is an important parameter that strongly affects the diversity order.

For example, suppose user 3 attempts to decode the messages from users 1, 2, and 5, and succeeds in decoding the messages from users 1 and 5, from which a linear transformation is computed. Hence, $S(3) = \{1,2,5\}$, $R(3) = T(3) = \{1,5\}$, $l_3 = n_3 = 2$. Because the channel between a node and the destination remains constant during both source and relay phases, a relay has no interest in including its own source message in $S(u_r)$.

Using the transmission set for each relay, the GLNC in Equation (2) generalizes to

$$\tilde{H}_{u_r} r_{u_r} = \sum_{u_s \in T(u_r)} H_{u_s}^{u_r} s_{u_s}, \qquad (5)$$

where the matrices $\tilde{H}_{u_r}$ and $H_{u_s}^{u_r}$ are of dimension K × L. Hence, each transmitted relay codeword $r_{u_r}$ is a linear transformation of $n_{u_r}$ source codewords. The superscript $u_r$ in $H_{u_s}^{u_r}$ indicates that the vector $s_{u_s}$ is in general not transformed by the same matrix for all relays $u_r$ where $u_s \in T(u_r)$. The overall parity-check matrix H is thus expressed as

$$H = \begin{bmatrix} H_c \\ H_{\text{GLNC}} \end{bmatrix}, \qquad (6)$$

where H c is block diagonal with H p on its diagonal, representing the channel code, and

$$H_{\text{GLNC}} = \begin{bmatrix} H_1^1 & \cdots & H_{m_s}^1 & \tilde{H}_1 & 0 & \cdots & 0 \\ H_1^2 & \cdots & H_{m_s}^2 & 0 & \tilde{H}_2 & \cdots & 0 \\ \vdots & & \vdots & \vdots & & \ddots & \vdots \\ H_1^{m_r} & \cdots & H_{m_s}^{m_r} & 0 & 0 & \cdots & \tilde{H}_{m_r} \end{bmatrix} \qquad (7)$$

represents the GLNC.
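
To illustrate the structure of Equations (6) and (7), the sketch below assembles the overall parity-check matrix from a point-to-point matrix $H_p$ and per-relay matrices. The helper name, the toy dimensions and the random binary blocks standing in for $H_{u_s}^{u_r}$ and $\tilde{H}_{u_r}$ are our own assumptions for illustration; a concrete code design would prescribe these blocks.

```python
import numpy as np

def build_H(Hp, H_src, H_tilde, m_s, m_r, T):
    """Assemble H = [H_c; H_GLNC] as in Equations (6)-(7).

    Hp      : (L-K) x L point-to-point parity-check matrix (same for every node)
    H_src   : dict {(u_r, u_s): K x L matrix H_{u_s}^{u_r}} for u_s in T[u_r]
    H_tilde : dict {u_r: K x L matrix tilde-H_{u_r}}
    T       : dict {u_r: transmission set of relay u_r}
    """
    L = Hp.shape[1]
    K = H_tilde[1].shape[0]
    n_tx = m_s + m_r
    # H_c: block diagonal with Hp on the diagonal, one block per transmission.
    Hc = np.kron(np.eye(n_tx, dtype=int), Hp)
    # H_GLNC: one block row of K parity equations per relay, Equation (7).
    H_glnc = np.zeros((m_r * K, n_tx * L), dtype=int)
    for u_r in range(1, m_r + 1):
        row = (u_r - 1) * K
        for u_s in T[u_r]:
            H_glnc[row:row + K, (u_s - 1) * L:u_s * L] = H_src[(u_r, u_s)]
        col = (m_s + u_r - 1) * L
        H_glnc[row:row + K, col:col + L] = H_tilde[u_r]
    return np.vstack([Hc, H_glnc]) % 2

# Toy example: m_s = m_r = 3, each relay helps the other two sources.
rng = np.random.default_rng(1)
K, L, m_s, m_r = 2, 4, 3, 3
Hp = rng.integers(0, 2, (L - K, L))
T = {1: [2, 3], 2: [1, 3], 3: [1, 2]}
H_src = {(u_r, u_s): rng.integers(0, 2, (K, L)) for u_r in T for u_s in T[u_r]}
H_tilde = {u_r: rng.integers(0, 2, (K, L)) for u_r in T}
H = build_H(Hp, H_src, H_tilde, m_s, m_r, T)
print(H.shape)  # ((m_s+m_r)*(L-K) + m_r*K, (m_s+m_r)*L) = (18, 24)
```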

Table 1 provides an overview of the notation presented in the system model.

Table 1 Overview of notation for JNCC for larger networks

4 Diversity analysis of JNCC

Before passing to the actual diversity analysis, we provide the well-known formal definition of the diversity order ([23], Chap. 3).

Definition 1

The diversity order attained by a code C is defined as

$$d = -\lim_{\gamma \to \infty} \frac{\log P_{ew}}{\log \gamma},$$

where γ is the signal-to-noise ratio.

In other words, $P_{ew} \propto \gamma^{-d}$, where $\propto$ denotes ‘proportional to’.

In the proofs of propositions in this article, we will often use the diversity equivalence between a BF channel and a block binary erasure channel (block BEC), which was proved in [24, 25]. A block BEC channel is obtained by restricting the fading gains in our model to belong to the set $\{0, \infty\}$, so that a point-to-point channel is either erased or perfect. Denoting the erasure probability $\Pr[\alpha_{u_r} = 0]$ by ε, a diversity order d is achieved if $P_{ew} \propto \varepsilon^{d}$ for small ε [26]. A diversity order of d is thus achievable if there exists no combination of d − 1 erased point-to-point channels leading to a word error. On the other hand, a diversity order of d is not achievable if there exists at least one combination of d − 1 erased channels leading to a word error.

In this section, we present the relation between the diversity order d and the parameters $\{n_{u_r}, u_r = 1,\ldots,m_r\}$, as well as between d and the choice of $\{T(u_r), u_r = 1,\ldots,m_r\}$. This guides the code design; furthermore, the potential of a linear binary JNCC satisfying some conditions to achieve full diversity can be verified without performing Monte Carlo simulations.

We first prove that the diversity order is a function of only the network coding rate $R_n$ (Section ‘Diversity as a function of the network coding rate’). We then determine in Section ‘Space diversity by cooperation’ the relation between the diversity order d and the set $\{n_{u_r}, u_r = 1,\ldots,m_r\}$, for any linear binary JNCC expressed as in Equations (6) and (7). The set $\{n_{u_r}, u_r = 1,\ldots,m_r\}$ actually determines the maximal spatial diversity that can be achieved by cooperation, leading to an upper bound on the diversity order. In Section ‘A lower bound based on $\{T(u_r)\}$ for $n_{u_r}=2$’, we propose a lower bound on the diversity order in the case that $n_{u_r} = n = 2$, which depends on all transmission sets $\{T(u_r), u_r = 1,\ldots,m_r\}$. In Section ‘Diversity order with interuser failures’, we discuss how the diversity order is affected by interuser failures. Finally, in Section ‘Diversity order in a layered construction’, we briefly comment on the diversity order in a layered construction, such as the OSI model.

4.1 Diversity as a function of the network coding rate

We denote the maximum achievable diversity order by $d_{\max}$. We will determine $d_{\max}$ in this section and show that it only depends on the network coding rate $R_n = \frac{m_s}{m_s + m_r}$.

Proposition 1

Under ML decoding, the maximum diversity order dmax that can be achieved by any linear JNCC is

$$d_{\max} = \begin{cases} 1 + \left\lfloor \frac{m_r}{2} \right\rfloor, & \text{if } m_r \le 2 m_s \\[4pt] 1 + m_r - m_s, & \text{if } m_r > 2 m_s. \end{cases} \qquad (8)$$

Proof

See Appendix 1. □

Note that the maximal diversity order does not depend on L. It can actually be reformulated in the following way:

$$d_{\max} = \begin{cases} 1 + \left\lfloor \frac{(1 - R_n)(m_r + m_s)}{2} \right\rfloor, & \text{if } m_r \le 2 m_s \\[4pt] 1 + m_r - (m_s + m_r) R_n, & \text{if } m_r > 2 m_s, \end{cases} \qquad (9)$$

which for $m_r = m_s = m$ reduces to the maximum diversity order for a standard BF channel with m blocks and coding rate $R_n$ [27–29].

Hence, the maximum diversity order does not change when the point-to-point channel coding rate $R_{c,p}$ changes. This corresponds to our intuition, as the parity bits of the point-to-point codes only provide redundancy within one block forming a point-to-point codeword; hence these parity bits cannot combat erasures which affect the complete point-to-point codeword. Another consequence is that the maximal diversity order of JNCC cannot be larger than in a layered approach with the same network coding rate.

In the remainder of the article, full diversity refers to the diversity order being equal to the maximal diversity order, d = dmax, from (8).

4.2 Space diversity by cooperation

We denote the word error rate for each source $u_s$ by $P_{ew,u_s}$, which is the fraction of packets where at least one of the K information bits from source $u_s$ is erroneously decoded at the destination. Associated to $P_{ew,u_s}$, we define $d_{u_s}$, so that $P_{ew,u_s} \propto \frac{1}{\gamma^{d_{u_s}}}$ for large γ. We have that $\max_{u_s} P_{ew,u_s} \le P_{ew} \le \sum_{u_s} P_{ew,u_s}$. From Definition 1, it follows that

$$d = \min_{u_s} d_{u_s}. \qquad (10)$$

Denote $t_{u_s}$, $u_s \in \{1,\ldots,m_s\}$, as the number of times that source $u_s$ is included in the transmission set of a relay: $t_{u_s} = \sum_{u_r \neq u_s} \mathbb{1}(u_s \in T(u_r))$, where $\mathbb{1}(\cdot)$ is the indicator function, which equals one when its argument is true and zero otherwise. Some simple measures can be determined: $t_{\min} = \min_{u_s} t_{u_s}$ and $t_{av} = \frac{\sum_{u_r=1}^{m_r} n_{u_r}}{m_s}$. We will show that $d_{u_s}$ depends on $t_{u_s}$ and thus, by Equation (10), d depends on $t_{\min}$. We denote $1 + t_{\min}$ by $d_R$, which we call the space diversity order, as it is the minimal number of channels that convey a source message to the destination.
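
These quantities are easily computed from the transmission sets; a short sketch (helper name ours), shown here for a small example where each of three relays helps the other two sources:

```python
def space_diversity_metrics(T, m_s):
    """Compute t_{u_s}, t_min, t_av and the space diversity order d_R = 1 + t_min."""
    t = {u_s: sum(u_s in T_ur for T_ur in T.values()) for u_s in range(1, m_s + 1)}
    t_min = min(t.values())
    t_av = sum(len(T_ur) for T_ur in T.values()) / m_s
    return t, t_min, t_av, 1 + t_min

# Small example: m_s = m_r = 3, each relay helps the other two sources.
T = {1: {2, 3}, 2: {1, 3}, 3: {1, 2}}
print(space_diversity_metrics(T, 3))   # ({1: 2, 2: 2, 3: 2}, 2, 2.0, 3)
```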

Proposition 2

For any linear JNCC, applied in our system model, the diversity order d is upper bounded as

$$d \le d_R = 1 + t_{\min}.$$

Proof

We use the diversity equivalence between a BF channel and a block BEC [24, 25]. Assume that the channel between source $u_s$ and the destination is erased. Source $u_s$ is included in $t_{u_s}$ transmission sets. Assume that all $t_{u_s}$ channels between the relays that include source $u_s$ in their transmission set and the destination are also erased. Then the destination does not receive any information on source $u_s$, so that it can never retrieve its message. The probability of occurrence of this event is $\varepsilon^{1 + t_{u_s}}$, so that $P_{ew,u_s} \ge \varepsilon^{1 + t_{u_s}}$, hence $d_{u_s} \le 1 + t_{u_s}$. Using Equation (10), we obtain Proposition 2. □

Note that the proof of Proposition 2 is based on the assumption that relay $u_r$ only considers packets transmitted in the source phase for inclusion in $S(u_r)$. In the case that relay $u_r$ computes its relay packet also based on packets transmitted by other relays during the relay phase, the diversity order becomes more difficult to analyze.

In Corollary 1, we propose the conditions on $t_{\min}$ so that the space diversity order $d_R$ is not smaller than the maximum achievable diversity order.

Corollary 1

For any linear JNCC, applied in our system model, full diversity can be achieved only if $t_{\min} \ge q$, where

$$q = \begin{cases} \left\lfloor \frac{m_r}{2} \right\rfloor, & \text{if } m_r \le 2 m_s \\[4pt] m_r - m_s, & \text{if } m_r > 2 m_s. \end{cases}$$

Proof

The proof follows directly from Propositions 1 and 2. □

Given a GLNC, and thus a choice of $T(u_r)$, one can verify whether the condition in Corollary 1 holds. If it does not hold, full diversity cannot be achieved. To gain more insight for the code design, we consider the simplest case of a network code where the cardinality of the transmission set is constant ($n_{u_r} = n$).

Corollary 2

For any linear JNCC, applied in our system model, with constant $n_{u_r} = n$, full diversity can be achieved only if

$$n \ge \begin{cases} \left\lfloor \frac{m}{2} \right\rfloor, & \text{if } m_r = m_s = m \\[4pt] \left\lceil \frac{m_s}{2} \right\rceil, & \text{if } 2 m_s \ge m_r > m_s \\[4pt] \left\lceil m_s - \frac{m_s^2}{m_r} \right\rceil, & \text{if } m_r > 2 m_s. \end{cases} \qquad (11)$$

Proof

It always holds that $t_{\min} \le t_{av}$ and if $n_{u_r} = n$, then $t_{av} = \frac{m_r n}{m_s}$. From Corollary 1, full diversity can be achieved only if $\left\lfloor \frac{m_r n}{m_s} \right\rfloor \ge q$. Because $\left\lfloor \frac{m_r n}{m_s} \right\rfloor \le \frac{m_r n}{m_s}$, we have the necessary condition that $n \ge q \frac{m_s}{m_r}$. As n is an integer, this bound can be tightened, yielding $n \ge \left\lceil \frac{m_s}{m_r} q \right\rceil$. Filling in q from Corollary 1 yields Corollary 2. □

Table 2 illustrates Corollary 2, showing the set of networks in which a certain parameter n is diversity-optimal, meaning that the choice of n does not prevent the code from achieving full diversity. In Section ‘Practical JNCC for $n_{u_r}=2$’, we propose a JNCC for n = 2, where taking n = 2 is diversity-optimal in all networks corresponding to bold elements in Table 2.

Table 2 Minimal value n for a JNCC with constant $n_{u_r} = n$ to maintain its capability to achieve full diversity

4.3 A lower bound based on $\{T(u_r)\}$ for $n_{u_r}=2$

A certain relay does not help one source only, but a combination of sources, expressed by the transmission set $T(u_r)$ for each relay $u_r$. In this section, we provide a lower bound on the diversity order, based on the choice of $\{T(u_r), u_r = 1,\ldots,m_r\}$. If this lower bound and the upper bound from the previous section coincide, the exact diversity order of a JNCC can thus be determined, as will be illustrated in Section ‘Practical JNCC for $n_{u_r}=2$’.

Based on $T(u_r)$, $m_s$ and $m_r$, we construct the $(m_s + m_r) \times m_s$ coding matrix M, where

$$\begin{aligned} M_{u_s,\,u_s} &= 1 && \text{for } u_s = 1,\ldots,m_s\\ M_{u_r + m_s,\,u_s} &= 1 && \text{if } u_s \in T(u_r),\ \forall u_s, u_r\\ M_{i,\,u_s} &= 0 && \text{otherwise.} \end{aligned} \qquad (12)$$

The matrix M expresses the presence of a source codeword in each transmission, i.e., $M_{i,u_s} = 1$ if $s_{u_s}$ is considered in transmission i ($i = 1,\ldots,m_s$ and $i = m_s+1,\ldots,m_s+m_r$ correspond to the source and relay transmission phases, respectively). Therefore, the upper part of M is an identity matrix, as each source $u_s$ transmits its own codeword $s_{u_s}$ in the source phase. The matrix M represents what is often called the “coding header” or “the global coding coefficients” in the network coding literature (see e.g., [30]).

Consider a block BEC channel where e of the $m_r$ blocks have been erased. The indices of the fading gains corresponding to the erased blocks are collected in the set $E = \{E_1,\ldots,E_e\}$, $E_i \in \{1,\ldots,m_r\}$. Based on E, we construct $M_E$, which corresponds to the subset of transmissions that are not erased, i.e., all rows $E_i$ (if $E_i \le m_s$) and $m_s + E_i$, for $i = 1,\ldots,e$, in M are dropped. We denote the rank of $M_E$ as $r_{M_E}$. The set $\mathcal{M}(e)$ collects all possible matrices $M_E$ which can be constructed from M if $|E| = e$.

Consider an example for $m_s = m_r = 3$. Assume that $T(1) = \{2,3\}$, $T(2) = \{1,3\}$, and $T(3) = \{1,2\}$, so that

$$M = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \\ 0 & 1 & 1 \\ 1 & 0 & 1 \\ 1 & 1 & 0 \end{bmatrix} \qquad (13)$$

Next, assume that $E = \{1\}$. Hence, the channel between user 1 and the destination is erased, so that rows 1 and 4 from M are dropped:

$$M_E = \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 1 & 0 & 1 \\ 1 & 1 & 0 \end{bmatrix}, \qquad (14)$$

and $r_{M_E} = 3$. It can be verified that all matrices $M_E \in \mathcal{M}(1)$ have rank $r_{M_E} = 3$. However, there exist matrices $M_E \in \mathcal{M}(2)$ having rank $r_{M_E} < 3$.

We can now define a metric that depends on $\{T(u_r)\}$.

Definition 2

We define $d_M = e + 1$, where e is the maximal cardinality of E such that $r_{M_E} = m_s$ for each $M_E \in \mathcal{M}(e)$.

A simple computer program can compute $d_M$, given $T(u_r)$, $m_s$ and $m_r$.
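
A brute-force sketch of such a program is given below (helper names are ours, and the rank is computed over GF(2), consistent with the binary setting). It reproduces $d_M = 2$ for the example of Equations (13)–(14) and $d_M = 3$ for the cyclic construction of Equation (18) used later.

```python
from itertools import combinations
import numpy as np

def gf2_rank(A):
    """Rank of a binary matrix over GF(2), via Gaussian elimination."""
    A = np.array(A, dtype=np.uint8) % 2
    rows, cols = A.shape
    rank = 0
    for col in range(cols):
        pivot = next((r for r in range(rank, rows) if A[r, col]), None)
        if pivot is None:
            continue
        A[[rank, pivot]] = A[[pivot, rank]]
        for r in range(rows):
            if r != rank and A[r, col]:
                A[r] ^= A[rank]
        rank += 1
    return rank

def build_M(T, m_s, m_r):
    """Coding matrix M of Equation (12), built from the transmission sets T."""
    M = np.zeros((m_s + m_r, m_s), dtype=np.uint8)
    M[:m_s, :m_s] = np.eye(m_s, dtype=np.uint8)
    for u_r, T_ur in T.items():
        for u_s in T_ur:
            M[m_s + u_r - 1, u_s - 1] = 1
    return M

def metric_d_M(T, m_s, m_r):
    """d_M of Definition 2: one plus the largest e such that every M_E
    obtained by erasing e nodes still has rank m_s."""
    M = build_M(T, m_s, m_r)
    for e in range(1, m_r + 1):
        for E in combinations(range(1, m_r + 1), e):
            drop = {E_i - 1 for E_i in E if E_i <= m_s} | {m_s + E_i - 1 for E_i in E}
            keep = [i for i in range(m_s + m_r) if i not in drop]
            if gf2_rank(M[keep]) < m_s:
                return e        # e - 1 is the maximal cardinality, so d_M = e
    return m_r + 1

# Example of Equations (13)-(14): d_M = 2.
print(metric_d_M({1: {2, 3}, 2: {1, 3}, 3: {1, 2}}, 3, 3))
# Cyclic construction used later in Equation (18) (m_s = m_r = 5): d_M = 3.
print(metric_d_M({u_r: {u_r % 5 + 1, (u_r + 1) % 5 + 1} for u_r in range(1, 6)}, 5, 5))
```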

Lemma 1

In a JNCC following the form of Equation (6) with $m_s = m_r$ and constant $n_{u_r} = n = 2$, the metric $d_M$ is at most three.

Proof

If $m_s = m_r$ and n = 2, then the minimum column weight of M is smaller than or equal to three. Erasing the three rows where $M_{i,u_s} = 1$, for a certain $u_s$ corresponding to the minimum column weight, leads to $M_E$ having at least one zero column, and thus $r_{M_E} < m_s$. By Definition 2, $d_M < 4$. □

In the next proposition, we provide a lower bound on the diversity order under ML decoding or Belief Propagation (BP) decoding [31]. We denote

$$\check{H}_{u_s}^{u_r} = \begin{bmatrix} H_p \\ H_{u_s}^{u_r} \end{bmatrix}, \qquad \check{H}_{u_r} = \begin{bmatrix} H_p \\ \tilde{H}_{u_r} \end{bmatrix}, \qquad (15)$$

which are square matrices of dimension L.

Proposition 3

Using ML decoding, the diversity order of a JNCC following the form of Equation (6) with constant $n_{u_r} = n = 2$ is lower bounded as

$$d \ge d_M,$$

if the matrices $\check{H}_{u_s}^{u_r}$, $u_s \in T(u_r)$, $u_r \in \{1,\ldots,m_s\}$, have full rank.

Using BP decoding, the diversity order of a JNCC following the form of Equation (6) with constant $n_{u_r} = n = 2$ is lower bounded as

$$d \ge d_M,$$

if, for each $u_r$, the set of L equations

$$\check{H}_{u_r} r_{u_r} = \sum_{u_s \in T(u_r)} \check{H}_{u_s}^{u_r} s_{u_s}, \qquad (16)$$

can be solved with BP in the case of only one unknown source-codeword vector.

Proof

See Appendix 2. □

We can simplify the condition for BP decoding, stated in Proposition 3, when we assume that the parity bits of point-to-point codes do not have support in $H_{\text{GLNC}}$, or, said differently, when the $L - K$ rightmost columns of the matrices $\tilde{H}_{u_r}$ and $H_{u_s}^{u_r}$ are zero. In that case, one iteration in the backward substitution, mentioned in Appendix 2, corresponds to solving the K unknown information bits of $s_u$ via the set of K equations

$$H_u^{u_r} s_u = \sum_{u_s \in T(u_r),\, u_s \neq u} H_{u_s}^{u_r} s_{u_s} + \tilde{H}_{u_r} y_{m_s + u_r}. \qquad (17)$$

In Section ‘Practical JNCC for $n_{u_r}=2$’, we propose a JNCC where the parity bits of point-to-point codes do not have support in $H_{\text{GLNC}}$, so that we take (17) instead of (16) as the condition for BP decoding in the remainder of the article.

4.4 Diversity order with interuser failures

It is often easier to prove that a particular diversity order is achieved assuming perfect interuser channels (see for example Section ‘Practical JNCC for $n_{u_r}=2$’). Here, we discuss how this diversity order is affected by interuser failures.

Lemma 2

In the case of non-reciprocal interuser channels, any JNCC achieves the same diversity order with or without interuser channel failures.

Proof

See Appendix 3. □

In the case of reciprocal interuser channels, the achieved diversity order with interuser failures depends on the transmission sets $\{T(u_r), u_r = 1,\ldots,m_r\}$. We propose an algorithm to construct $\{T(u_r)\}$ in Section ‘Practical JNCC for $n_{u_r}=2$’ and we will then discuss the diversity order with reciprocal interuser channels.

4.5 Diversity order in a layered construction

In a layered construction, such as the standard OSI model, the destination first attempts to decode the point-to-point transmissions. If it cannot successfully retrieve the transmitted point-to-point codeword for a particular node-to-destination channel, then it declares a block erasure, where a block refers to one point-to-point codeword. Denoting this block erasure probability by ε, we have that $\varepsilon \propto \frac{1}{\gamma}$ ([23], Equation (3.157)). If, for example, e blocks of length L are erased, then the decoding corresponds to solving a set of equations with eL unknowns.

Standard linear network coding consists of taking linear combinations of several source packets. In general, non-binary coefficients are used in the linear combinations. Hence, packets are treated symbol-wise, which is shown to be capacity achieving for the layered construction [8]. A consequence of this symbol-wise treatment is that the effective block length of the network code reduces to $m_s + m_r$, and the set of equations that is available at the destination for decoding is expressed by the coding matrix $M_E$. At this block length, ML decoding (which is equivalent to Gaussian elimination at the network layer) has low complexity. Under ML decoding, a sufficient condition for successful decoding is $r_{M_E} = m_s$. Also, for ML decoding, the maximum number of erasures $e = d_M - 1$ (Definition 2), such that the condition $r_{M_E} = m_s$ is satisfied, is equal to the minimum distance of the non-binary code minus one. The minimum distance is, for a given coding rate, maximal for maximum distance separable (MDS) codes, so that $d_M$ is maximal for MDS codes as well. Also note that random linear network codes are MDS codes with high probability for a sufficiently large field size [32].

Table 3 provides an overview of the notation presented in this diversity analysis.

Table 3 Overview of notation introduced in the diversity analysis

Tables 1 and 3 indicate the complexity of the analysis of JNCC for large networks.

5 Practical JNCC for $n_{u_r}=2$

In the literature, a detailed diversity analysis is most often lacking. Codes were proposed and corresponding numerical results suggested that a certain diversity order was achieved on a specific network. It is sometimes not clear why this diversity order is achieved, and how it would vary if the network or some parameters change. In the previous section, we made a detailed diversity analysis of a JNCC following the form of Equation (6). However, the utility of, for example, Proposition 3 is limited to JNCCs following the form of Equation (6) with a constant $n_{u_r} = 2$, which suggests that it is very hard to rigorously prove diversity claims in general. However, the modest analysis made in Section ‘Diversity analysis of JNCC’ can be applied in some cases and we will show its utility through an example.

We consider networks with $m_s = m_r = m \ge 4$ and a JNCC following the form of Equation (6) with $n_{u_r} = n = 2$ for $u_r = 1,\ldots,m$. We will rigorously prove that a diversity order of three is achieved, using the propositions of Section ‘Diversity analysis of JNCC’. From Table 2, it can be seen that this JNCC is diversity-optimal for m = 4 and m = 5. In Section ‘Numerical results’, we provide numerical results for m = 5.

From Table 2, it is clear that restricting n to two is not diversity-optimal in larger networks. However, it also has some advantages. If n = 2, then every relay only needs to decode two users, and encoding is restricted to taking a linear transformation of only two source packets. Furthermore, taking n = 2 does not impose infeasible constraints on the number of sources in the vicinity of a relay in the case that spatial neighborhoods are taken into account. Next, the theoretical analysis is simpler in the case n = 2. Finally, taking n = 2 allows the reuse of strong codes designed for the multiple access relay channel, e.g., in [21, 22].

Besides the diversity order, we indicated in Section ‘Introduction’ that scalability is also very important. The JNCC proposed here is scalable to any large network without requiring a redesign of the code. This means that we provide an on-the-fly construction method. The latter is particularly important for self-regulating networks. As a node adds itself to the network, it can seamlessly integrate into the network. Together with the new symbols sent by the new node, a new JNCC code is formed which still possesses all desirable properties. Finally, note that due to the large block length of JNCC, ML decoding is too complex and low-complexity techniques, such as BP decoding, must be used.

Hence, two properties are claimed: scalability to large networks and a diversity order of three (which is full diversity in some cases) under BP decoding. The JNCC code is presented in two steps. First, we present the design of $\{T(u_r)\}$ and thus the coding matrix M. In a second step (Equation (20)), we specify the matrices $\tilde{H}_{u_r}$ and $H_{u_s}^{u_r}$, and we will prove that the scalability and the diversity order of three are achieved.

5.1 First step: design of $T(u_r)$

The transmission sets $\{T(u_r)\}$ have a large impact on the diversity order. For example, in [18], a random construction was studied (each relay chooses n = 2 sources at random) and it was shown that $E[t_{u_s}] = 2$, but $\text{Var}[t_{u_s}] = 2$ as well, so that most probably $t_{\min} < 2$ and $d_R < 3$ (Proposition 2). So we need a more intelligent construction.

We present an algorithm to determine $\{T(u_r)\}$, given $m_s$ and $m_r$, and we subsequently determine the corresponding metrics $t_{\min}$ and $d_M$. We define the function $f_{m_s}(x) = ((x-1) \bmod m_s) + 1$, which adapts the modulo operation to the range $1 \le f_{m_s}(x) \le m_s$.

Algorithm 1

Choose transmission set $T(u_r)$.

The transmission set $T(u_r)$ is expressed via the bottom part of M. An example of such a matrix M is given in Equation (18) for $m_s = m_r = 5$.

$$M = \begin{bmatrix} 1 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 1 \\ 0 & 1 & 1 & 0 & 0 \\ 0 & 0 & 1 & 1 & 0 \\ 0 & 0 & 0 & 1 & 1 \\ 1 & 0 & 0 & 0 & 1 \\ 1 & 1 & 0 & 0 & 0 \end{bmatrix} \qquad (18)$$

If a node is added as a source node, it adopts the largest source index, $m_s + 1$, and relay-only nodes, with indices larger than or equal to $m_s + 1$, increment their index by one. The function $f_{m_s}(x)$ is updated to the new $m_s$. Note that the algorithm corresponds to a deterministic cooperation strategy, which avoids extra signalling to the destination regarding the code design.
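
The body of Algorithm 1 is not reproduced above. The sketch below gives one construction that is consistent with the example of Equation (18) and with the relay indices used in the proof of Corollary 3 (each relay helps the next two sources in cyclic order); it should be read as a plausible reconstruction under that assumption, not as the authors' exact pseudocode.

```python
def f(x, m_s):
    """f_{m_s}(x) = ((x - 1) mod m_s) + 1, the modulo adapted to the range 1..m_s."""
    return (x - 1) % m_s + 1

def transmission_sets(m_s, m_r):
    """Cyclic choice of the transmission sets: relay u_r helps sources
    f(u_r + 1) and f(u_r + 2), which reproduces the bottom part of Equation (18)."""
    return {u_r: {f(u_r + 1, m_s), f(u_r + 2, m_s)} for u_r in range(1, m_r + 1)}

print(transmission_sets(5, 5))
# {1: {2, 3}, 2: {3, 4}, 3: {4, 5}, 4: {1, 5}, 5: {1, 2}}
```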

We first consider the case of perfect interuser channels and prove that Algorithm 1 yields d = 3 (Corollary 3). We then consider interuser failures and prove that the diversity order is not affected (Lemma 3).

Corollary 3

Having perfect links from sources to relays, a JNCC with $m_s = m_r$ and with transmission sets constructed via Algorithm 1 achieves a diversity order d = 3 using BP decoding, if, for each $u_r$, Equation (17) can be solved with BP in the case of only one unknown source-codeword vector.

Proof

Because the links between sources and relays are perfect, the relays will never stay silent. In the case that $m_r = m_s$ and $n_{u_r} = 2$, we have that $t_{\min} = t_{av} = 2$ and so $d_R = 3$.

Next, we show that $d_M = 3$ (and thus, according to Lemma 1, $d_M$ is maximized if n = 2). Consider $|E| = 2$. Without loss of generality, consider that $E = \{1,2\}$. Consider the set of equations $M_E z = c$. Variables $z_3,\ldots,z_{m_s}$ can be recovered via the top $m_s - 2$ rows of $M_E$. The two relays $u_1$ and $u_2$ having source $u_s$ in their transmission set ($T(u_1)$ and $T(u_2)$, respectively) are

$$u_1 = f_{m_s}(u_s - 1), \qquad u_2 = f_{m_s}(u_s - 2).$$

Hence, source 1 is included in T(m − 1) and T(m), and source 2 is included in T(m) and T(1). Thus, relay transmission m − 1 can be used to retrieve source 1 and relay transmission m can be used to retrieve source 2, as long as m ≥ 4. It follows that $M_E$ has full rank. The generalization to any set E satisfying $|E| = 2$ is straightforward. Therefore, we have that $d_M = 3$.

As $d_R = d_M = 3$, the proof follows immediately from Propositions 2 and 3. □

Next, it can be proved that a JNCC applied in our system model has a diversity order of three, if it has a diversity order of three when all interuser channels are perfect. This is proved in general for non-reciprocal interuser channels in Lemma 2, and here, we consider reciprocal interuser channels.

Lemma 3

A JNCC, with transmission sets constructed via Algorithm 1, achieves the same diversity order with or without interuser channel failures when $m_s > 4$ or when $m_s = m_r = m \le 4$.

Proof

See Appendix 4. □

For conciseness, we do not consider the other cases, where $m_r > m_s$ and $m_s \le 4$.

5.2 Second step: JNCC of LDPC-type

In the first step, we specified $\{T(u_r)\}$ and proved that $d_R = d_M = 3$ if $m_r = m_s = m > 3$. According to Corollary 3, a diversity order of three is achieved under BP decoding if, for each $u_r$, Equation (17) can be solved with BP in the case of only one unknown source-codeword vector. In the second step, we specify the submatrices $\tilde{H}_{u_r}$ and $H_{u_s}^{u_r}$, $\forall u_r, u_s$, to satisfy this condition, given that $\{T(u_r)\}$ is constructed according to Algorithm 1.

A simple solution is to replace the K leftmost columns in all K × L submatrices $\tilde{H}_{u_r}$ and $H_{u_s}^{u_r}$, $\forall u_r, u_s$, by identity matrices. In this case, the joint network-channel code essentially reduces to a layered solution: the source codewords are decoded at the relays and simply added according to Equation (5). If the network code is used at the physical layer, it has to deal with noise and a more advanced code might be required.

In the literature, a full-diversity, close-to-outage-performing JNCC for the multiple access relay channel (MARC) has been proposed [21, 22], which is a code in the form of Equation (1). These codes are such that the set of equations

$$H_1^1 s_1 + H_2^1 s_2 + \tilde{H}_1 r_1 = 0$$

can be decoded via BP if only one codeword vector s1, s2, or r1 is erased and the other codeword vectors are perfectly known. We denote this JNCC by MARC-JNCC. The matrix $H_{\text{GLNC,MARC}}$ of the MARC-JNCC is given by Equation (A.7) in [21]:

(19)

where $s_j = [{}^{1}i_j\ \ {}^{2}i_j\ \ p_j]$ is the codeword from source j, with $[{}^{1}i_j\ \ {}^{2}i_j]$ and $p_j$ denoting the information bits and the parity bits, respectively (j = 1,2); ${}^{1}i_j$ and ${}^{2}i_j$ each contain $\frac{K}{2}$ information bits. However, the parity bits $p_j$ are not involved in $H_{\text{GLNC,MARC}}$. The matrices $R_i$, with i = 1,2,3, are random matrices, chosen according to the required degree distributions of the LDPC code. To facilitate future notation, we denote

$$H_1 = \begin{bmatrix} I & R_1 & 0 \\ 0 & I & 0 \end{bmatrix}, \qquad H_1' = \begin{bmatrix} R_1 & I & 0 \\ I & 0 & 0 \end{bmatrix},$$
$$H_2 = \begin{bmatrix} 0 & I & 0 \\ I & R_2 & 0 \end{bmatrix}, \qquad H_2' = \begin{bmatrix} I & 0 & 0 \\ R_2 & I & 0 \end{bmatrix},$$

and $H_3 = R_3$, so that $H_{\text{GLNC,MARC}} = [\bar{H}_1\ \ \bar{H}_2\ \ H_3]$, where $\bar{H}_i = H_i$ or $H_i'$ (it will become clear hereunder which one has to be chosen at each relay). In $\bar{H}_1$ and $\bar{H}_2$, the first two block columns each consist of K/2 columns (corresponding to information bits) and the last block column consists of $L - K$ columns (corresponding to parity bits from the point-to-point codes). The zero block columns indicate that parity bits from point-to-point codes have no support in these matrices. Now replace all submatrices $\tilde{H}_{u_r}$, $H_{u_s}^{u_r}$ by these matrices, for each relay $u_r$, so that in each block column corresponding to information bits, we have a random matrix $R_i$; this is required to conform to any preferred degree distribution of the LDPC code. For example, $H_{\text{GLNC}}$ can be given by

(20)

Each set of rows and each set of columns in H will have at least one random matrix, so that any LDPC code degree distribution can be conformed to. We denote this JNCC by the SMARC-JNCC, where S stands for scalable.

Proposition 4

In a network following the system model proposed in Section ‘System model’ and using BP, the SMARC-JNCC achieves a diversity order d = 3.

Proof

Consider the set of K equations

$$H_3 r_{u_r} = \bar{H}_1 s_{u_1} + \bar{H}_2 s_{u_2}, \qquad \{u_1, u_2\} = T(u_r). \qquad (21)$$

In [21], it is proved that this set of K equations can be solved using the matrices proposed above. We provide another, simpler proof here. Consider a block BEC. Because $\bar{H}_1$ and $\bar{H}_2$ are upper or lower triangular, with ones on the diagonal, the unknown K information bits can be retrieved using backward substitution; hence, they can be retrieved with BP as well.

By Corollary 3 and Lemma 3, the SMARC-JNCC achieves a diversity order d = 3. □

Note that the information bits of a source need to be split into two parts: bits of the type ${}^{1}i$ and ${}^{2}i$. This allows the introduction of the matrices $R_1$ and $R_2$ in Equation (19), so that all information bits have a random matrix in their corresponding block column in the parity-check matrix. Now, the LDPC code can conform to any degree distribution.

6 Lower bound for the WER

To assess the performance of the SMARC-JNCC we need to compare it with the outage probability limit (Section “Calculation of the outage probability”). We show that the outage probability limit is not always tight and we propose a tighter lower bound, which is presented in Section “Calculation of a tighter lower bound on WER”.

6.1 Calculation of the outage probability

The outage probability limit is the probability that the instantaneous mutual information between the sources and sinks of the network is less than the transmitted rate. The outage probability is an achievable (using a random codebook) lower bound on the average WER of coded systems in the limit of large block length [27, 33, 34].

For a multi-user environment, two types of mutual information are considered. First, it is verified whether the sum rate, $R_c$ in this case, is smaller than the instantaneous mutual information between all the sources and the sink. Then, it is verified whether each individual source rate, $\frac{R_c}{m_s}$ in this case, is smaller than the instantaneous mutual information between the nodes transmitting information for this source and the destination. The outage probability for the MARC was determined in [21, 35] using the method described above.

The outage probability is

$$P_{\text{out}} = \Pr\left[E_{\text{out}}\right],$$

where $E_{\text{out}}$ denotes an outage event. Similarly to [21, 35], an outage event is given by

$$E_{\text{out}} = \left\{ R_c > \frac{\sum_{u_s=1}^{m_s} I(S_{u_s};D) + \sum_{u_r=1}^{m_r} B_{u_r}\, I(R_{u_r};D)}{m_s + m_r} \right\} \;\cup\; \bigcup_{u_s=1}^{m_s} \left\{ \frac{R_c}{m_s} > \frac{I(S_{u_s};D) + \sum_{j \mid u_s \in T(j)} B_j\, I(R_j;D)}{m_s + m_r} \right\},$$

where

$$B_j = \prod_{i \in T(j)} \mathbb{1}\left[ I(S_i; R_j) > R_{c,p} \right].$$

The terms $I(S_i;D)$, $I(R_i;D)$, and $I(S_i;R_j)$ are the instantaneous mutual informations of the corresponding point-to-point channels with input $x \in \{-1,1\}$ and received signal $y = \alpha_i x + w$ with $w \sim \mathcal{CN}(0,\frac{1}{\gamma})$, conditioned on the channel realization $\alpha_i$; they are determined by applying the formula for mutual information [36, 37]:

$$I(X;Y \mid \alpha_i) = 1 - E_{Y \mid \{x=1,\alpha_i\}}\left[ \log_2\left( 1 + \exp\left( -4 y \alpha_i \gamma \right) \right) \right],$$

where $E_{Y \mid \{x=1,\alpha_i\}}$ is the mathematical expectation over Y given x = 1 and $\alpha_i$.
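
This expectation has no simple closed form, but it is easily estimated by Monte Carlo sampling. The sketch below (our own helper, under the BPSK model of Section ‘System model’, where the real noise component has variance $\frac{1}{2\gamma}$) estimates $I(X;Y \mid \alpha_i)$ and can be plugged into the outage expressions above:

```python
import numpy as np

def bpsk_mutual_info(alpha, gamma, n_samples=200_000, seed=0):
    """Monte Carlo estimate of I(X;Y | alpha) for BPSK on y = alpha*x + w,
    where w is the real part of CN(0, 1/gamma) noise, i.e. variance 1/(2*gamma)."""
    rng = np.random.default_rng(seed)
    w = rng.normal(0.0, np.sqrt(1.0 / (2.0 * gamma)), n_samples)
    y = alpha * 1.0 + w                   # condition on x = +1
    llr = 4.0 * y * alpha * gamma         # channel LLR for BPSK
    return 1.0 - np.mean(np.logaddexp(0.0, -llr)) / np.log(2)

# Example usage: instantaneous mutual information at alpha = 1 for gamma = 4 (about 6 dB).
print(bpsk_mutual_info(1.0, 4.0))
```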

We now consider the outage probability of a layered construction, such as the standard OSI model, where the destination first decodes the point-to-point transmissions, declaring a block erasure if decoding is not successful. For the network code, we assume a maximum distance separable (MDS) code, which is outage-achieving over the (noiseless) block-erasure channel [26]. That is, any $m_s$ correctly received packets suffice for decoding. Accordingly, an outage event for the layered construction, denoted as $E_{\text{out},l}$, is given by

$$E_{\text{out},l} = \left\{ \sum_{u_s=1}^{m_s} E_{s,u_s} + \sum_{u_r=1}^{m_r} E_{r,u_r} > m_r \right\} \;\cup\; \bigcup_{u_s=1}^{m_s} \left\{ (1 - E_{s,u_s}) + \sum_{j \mid u_s \in T(j)} (1 - E_{r,j}) = 0 \right\},$$

where

$$E_{s,u_s} = \mathbb{1}\left[ I(S_{u_s}; D) < R_{c,p} \right]$$

and

$$E_{r,u_r} = 1 - B_{u_r}\, \mathbb{1}\left[ I(R_{u_r}; D) > R_{c,p} \right].$$

The outage probability for JNCC and a layered construction are compared in Figure 1 for $m_s = m_r = 5$, the coding matrix M given in Equation (18) and $R_{c,p} = 6/7$. The overall spectral efficiency is R = 3/7 bpcu, so that $E_b/N_0 = \frac{7\gamma}{3}$.

Figure 1. The outage probabilities of JNCC and a layered construction are compared. The spectral efficiency is R = 3/7 bpcu.

The main conclusion is that the difference between both outage probabilities is only 1 dB. Hence, on a fundamental level, the achievable coding gain by JNCC with respect to a standard layered construction is small for the adopted system model.

6.2 Calculation of a tighter lower bound on WER

According to information theory, the outage probability is achievable, where the proof relies on using random codebooks. However, the nature of the JNCC protocol deviates considerably from a random code. For example, the parity bits corresponding to the point-to-point codes are forced into a block-diagonal structure in $H_c$ (see Equation (6)), which is not taken into account in the outage probability limit. In fact, in Proposition 1, it was proved that the maximal diversity order does not depend on $R_c$ but on $R_n$, which is not taken into account in the outage probability limit either. Therefore, we argue that the outage probability limit is in general not achievable by a JNCC, which is illustrated by means of an example.

Consider a network with $m_s = m_r = 3$. The adopted point-to-point codes have coding rate $R_{c,p} = 0.5$, so that $R_c = 0.25$. We take $n_{u_r} = 2$ and adopt the coding matrix M given in Equation (13). Because of the small coding rate $R_c$, the outage probability achieves a diversity order of three (Figure 2). However, it follows from Proposition 1 that $d_{\max} = 2$. We therefore propose a new lower bound, which takes into account the point-to-point codes.

Figure 2. The conventional and tighter outage probabilities of JNCC are compared.

A bit node is essentially protected by two codes: a point-to-point code ($H_c$) and a network code ($H_{\text{GLNC}}$), which is illustrated by the factor graph representation [38] (Tanner notation [39] is adopted) of the decoder (Figure 3).

Figure 3. The depicted part of the factor graph (using Tanner notation) illustrates that a bit node (bit i in the figure) is essentially connected to two sets of check nodes, corresponding to $H_c$ and $H_{\text{GLNC}}$, respectively. A set of check nodes is denoted as CND, for check node decoder. The LLR value coming from the CND corresponding to $H_c$ is denoted as $L_c$. The LLR value corresponding to the channel observation is denoted as $L_{\text{obs},i}$.

Usually, both codes are characterized by separate degree distributions, denoted as $(\lambda_c(x), \rho_c(x))$ and $(\lambda_{\text{GLNC}}(x), \rho_{\text{GLNC}}(x))$ for $H_c$ and $H_{\text{GLNC}}$, respectively.

The new lower bound assumes a concatenated decoding scheme. At the destination, first the point-to-point codes are decoded and then soft information is passed to the network decoder. This is illustrated in Figure 4, where the soft information is denoted by the log-likelihood ratio (LLR) $L'_{\text{obs},i}$. Note that the bit node of bit i is duplicated to be able to clearly indicate $L'_{\text{obs},i}$. Applying the sum-product algorithm (SPA) on this factor graph or on the original factor graph (without node duplication) is equivalent. This follows immediately from the sum-product rule for variable nodes ([40], Section 4.4) and ([38], Equation (5)).
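
For reference, the variable-node rule in question states that the message passed onward equals the channel LLR plus all incoming check-node messages; in the notation of Figures 3 and 4 (the set $\mathcal{N}_c(i)$ of point-to-point check nodes attached to bit i is our shorthand),

$$L'_{\text{obs},i} = L_{\text{obs},i} + \sum_{c \in \mathcal{N}_c(i)} L_{c \to i},$$

which is exactly the sum of incoming LLR values shown in Figure 4.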

Figure 4. The bit node in Figure 3 can be duplicated with a single edge between both nodes, as shown in this figure. The LLR $L'_{\text{obs},i}$ is the sum of all incoming LLR values from the left, and contains the soft information which is passed to the network-code decoder in a concatenated coding scheme.

The LLR $L'_{\text{obs},i}$ can be viewed as a new channel observation, as it remains fixed during the iterative decoding of the network code ($H_{\text{GLNC}}$). The maximum rate that can be achieved by the network code is given by

$$\frac{1}{m_s + m_r}\left[ \sum_{u_s=1}^{m_s} I(S_{u_s}; L'_{\text{obs}}) + \sum_{u_r=1}^{m_r} B_{u_r}\, I(R_{u_r}; L'_{\text{obs}}) \right].$$

The terms $I(S_{u_s}; L'_{\text{obs}})$ and $I(R_{u_r}; L'_{\text{obs}})$ are the mutual informations between the channel input $x \in \{-1,1\}$ and the associated random variable $L'_{\text{obs}}$, conditioned on the channel realization $\alpha_u$, determined by applying the formula for mutual information [36, 37], i.e., $I(X; L'_{\text{obs}} \mid \alpha_u)$ is

$$1 - E_{L'_{\text{obs}} \mid \{x=1,\alpha_u\}}\left[ \log_2\left( 1 + \frac{p_{L'_{\text{obs}}}(l \mid x = -1, \alpha_u)}{p_{L'_{\text{obs}}}(l \mid x = 1, \alpha_u)} \right) \right],$$

The density of the random variable $L'_{\text{obs}}$ can be obtained by means of density evolution [41], given the degree distributions of the point-to-point code, or by means of Monte Carlo simulations, given the actual factor graph of the point-to-point code. Both approaches yield the same results in our simulations.

Similarly to the conventional case, an outage event, denoted as $E_{\text{out},2}$, is given by

$$E_{\text{out},2} = \left\{ R_n > \frac{\sum_{u_s=1}^{m_s} I(S_{u_s}; L'_{\text{obs}}) + \sum_{u_r=1}^{m_r} B_{u_r}\, I(R_{u_r}; L'_{\text{obs}})}{m_s + m_r} \right\} \;\cup\; \bigcup_{u_s=1}^{m_s} \left\{ \frac{R_n}{m_s} > \frac{I(S_{u_s}; L'_{\text{obs}}) + \sum_{j : u_s \in T(j)} B_j\, I(R_j; L'_{\text{obs}})}{m_s + m_r} \right\}.$$

Note that the network coding rate $R_n$ is used instead of the overall rate $R_c$, which corresponds to Proposition 1.

The tighter lower bound presented here is valid if the point-to-point codes are decoded first, followed by the network code, without iterating back to the point-to-point codes.

Let us now go back to the small network example with $m_s = m_r = 3$, considered in the beginning of this section. Figure 2 compares the conventional outage probability (Section ‘Calculation of the outage probability’) with the tighter lower bound proposed here. As mentioned before, the conventional outage probability has a larger diversity order than what is achievable, while the tighter lower bound only achieves a diversity order of two.

There is a 3 dB difference at an outage probability of $10^{-4}$. To assess the performance of the network code only, given a certain point-to-point code, the WER of the SMARC-JNCC should be compared with the tighter lower bound presented here. In the subsequent sections, we always include both lower bounds.

7 Numerical results

In this section, we provide numerical results for the SMARC-JNCC. We illustrate the proposed techniques on a network example with $m_s = m_r = 5$ (Figure 5). We use the same network example as in [17, 18] so that a comparison is possible.

Figure 5. The network example that will be used in this document is illustrated. The solid lines represent interuser channels, the dashed line is the channel to the destination. Only the channels from the perspective of user 1 are shown for clarity, but all other users see equivalent channels.

For simplicity, we assume non-reciprocal interuser channels in the simulation results. Note that in the case that $m_s > 4$ and Algorithm 1 is used to construct $\{T(u_r), u_r = 1,\ldots,m_r\}$, reciprocity is irrelevant for our proposed code, as it holds that $i \notin T(j)$ if $j \in T(i)$.

We compare the error rate performance of the SMARC-JNCC with the outage probability limit and the tighter lower bound, which are presented in Section ‘Lower bound for the WER’, and with standard network coding techniques (using identity matrices in $H_{\text{GLNC}}$) and a layered network construction (also using identity matrices in $H_{\text{GLNC}}$, and where, at the destination, the network code is only decoded after decoding all point-to-point codewords separately and taking a hard decision).

The point-to-point code used in the simulations is an irregular LDPC code [41] characterized by the standard polynomials λ(x) and ρ(x) [41]:

$$\lambda(x) = \sum_{i=2}^{d_b} \lambda_i x^{i-1}, \qquad \rho(x) = \sum_{i=2}^{d_c} \rho_i x^{i-1},$$

where λ(x) and ρ(x) are the left and right degree distributions from an edge perspective. The coefficients $\lambda_i$ and $\rho_i$ are the fractions of edges connected to a bit node and a check node, respectively, of degree i. The adopted point-to-point code is taken from [42], has coding rate $R_{c,p} = 6/7$ and conforms to the following degree distributions:

$$\lambda_2 = 0.173, \quad \lambda_3 = 0.223, \quad \lambda_4 = 0.095, \quad \lambda_5 = 0.51, \qquad \rho_{24} = 0.96, \quad \rho_{25} = 0.04.$$
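
As a quick sanity check (our own snippet), the design rate implied by these edge-perspective distributions, $1 - \frac{\sum_i \rho_i / i}{\sum_i \lambda_i / i}$, evaluates to approximately 6/7, consistent with the stated $R_{c,p}$ up to the rounding of the published coefficients:

```python
lam = {2: 0.173, 3: 0.223, 4: 0.095, 5: 0.51}
rho = {24: 0.96, 25: 0.04}

# Design rate of an LDPC ensemble from its edge-perspective degree distributions.
design_rate = 1 - sum(r / i for i, r in rho.items()) / sum(l / i for i, l in lam.items())
print(design_rate)   # ~0.855, close to 6/7 = 0.857
```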

7.1 Perfect source-relay links

We start by assessing the performance of $H_{\text{GLNC}}$, the bottom part of Equation (20), which determines the diversity order. Therefore, we assume perfect links between sources and relays. Hence, the channel model is the same as described in Section ‘System model’, with the exception of the interuser channels, which are assumed to be perfect (no fading and no noise). The parameters used for the simulation are K = L = 900 and $m_s = m_r = 5$ (so that N = 10K = 9000), where N is the block length of the overall codeword. The overall spectral efficiency is R = 0.5 bpcu, so that $E_b/N_0 = 2\gamma$.

Figure 6 shows that a diversity order of 3 is achieved for the SMARC-JNCC, which corroborates Corollary 3. It performs at 2.5 dB from the outage probability (because no point-to-point codes are considered, only the conventional outage probability is shown), which may be improved by optimizing the degree distributions. We also show a JNCC where all submatrices $\tilde{H}_{u_r}$, $H_{u_s}^{u_r}$, $\forall u_r, u_s$, are replaced by identity matrices, denoted as the I-JNCC. Finally, we show an I-JNCC with irregular $\{n_{u_r}\}$, with coding matrix M given by

$$M = \begin{bmatrix} 1 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 1 \\ 0 & 1 & 1 & 1 & 0 \\ 0 & 1 & 1 & 0 & 1 \\ 1 & 0 & 1 & 1 & 1 \\ 1 & 1 & 1 & 1 & 1 \\ 0 & 1 & 0 & 1 & 0 \end{bmatrix} \qquad (22)$$
Figure 6. The word error rate of the SMARC-JNCC is compared to that of the I-JNCC, assuming perfect source-relay channels.

It is clear that, even without optimizing the SMARC-JNCC, there is a benefit in terms of coding gain compared to the I-JNCC.

7.2 Rayleigh faded source-relay links

Now, we assess the performance of the complete parity-check matrix H of the SMARC-JNCC. We use the channel model as described in Section ‘System model’. Hence, all links have the same statistical model and the average SNR is the same for all channels. The parameters used for the simulation are K = 606, $R_{c,p} = 6/7$, L = 707 and $m_s = m_r = 5$ (so that N = 10L = 7070). The overall spectral efficiency is R = 3/7 bpcu, so that $E_b/N_0 = 7\gamma/3$. Because the simulation time would be very large if every point-to-point source-relay link had to be decoded separately, we made an approximation. The word error rate of the point-to-point code when transmitted on a channel with fading gain α is smaller than $10^{-4}$ when $\alpha^2\gamma = 5.5$ dB. Therefore, we assumed that a relay had correctly decoded the source codeword if $\alpha^2\gamma > 5.5$ dB and not otherwise. We also add the performance of the SMARC-JNCC from Section ‘Perfect source-relay links’, corresponding to perfect source-relay links and R = 0.5 bpcu, as a reference curve (note that the reference curve corresponds to a larger spectral efficiency, since the coding rate $R_c$ is larger than for the other curves, which slightly disadvantages the reference curve in terms of error performance).
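
A sketch of this decoding-success approximation (function name ours):

```python
import numpy as np

def relay_decodes(alpha, gamma, threshold_db=5.5):
    """Approximation used in the simulations: a relay is assumed to have decoded a
    source codeword correctly iff the instantaneous SNR alpha^2 * gamma of the
    source-relay link exceeds 5.5 dB, the point at which the point-to-point code's
    WER drops below 1e-4."""
    return 10.0 * np.log10(alpha**2 * gamma) > threshold_db
```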

Figure 7 shows that a diversity order of 3 is still achieved, which corroborates Proposition 4. In addition, two main conclusions can be drawn. First of all, the coding gain loss due to interuser failures is 6.5 dB, which is very large. Second, the benefit in terms of coding gain of the SMARC-JNCC compared to the I-JNCC is considerably smaller than in Section ‘Perfect source-relay links’, which corresponds to the small horizontal SNR gap between the outage probabilities of a layered and a joint construction. Also note that the tighter lower bound using density evolution is close to the conventional lower bound in this case (probably due to the larger coding rate R_{c,p}). Finally, the WER performance of a layered construction is shown, which coincides with that of the I-JNCC.

Figure 7. The word error rate of the SMARC-JNCC is compared to that of the I-JNCC and a layered construction, assuming Rayleigh faded source-relay channels. The reference curve is the performance of the SMARC-JNCC assuming perfect source-relay channels (Section ‘Perfect source-relay links’).

7.3 Gaussian source-relay links

We again test the complete parity-check matrix H of the SMARC-JNCC, now assuming that the source-relay links are Gaussian, i.e., affected by additive white Gaussian noise only, without fading; fading occurs on the source-destination and relay-destination links only. We assume that the average SNR is the same for all channels. The parameters used for the simulation are the same as in Section ‘Rayleigh faded source-relay links’.

Figure 8 shows that in the case of Gaussian interuser channels, the loss compared to perfect interuser channels is very small. Furthermore, the performance of the I-JNCC has improved considerably in comparison with Section ‘Perfect source-relay links’, where only HGLNC was used. The degree distributions that caused the poor coding gain of the I-JNCC in Section ‘Perfect source-relay links’ have changed considerably through the inclusion of the point-to-point codes, significantly improving the coding gain.

Figure 8. The word error rate of the SMARC-JNCC is compared to that of the I-JNCC, assuming Gaussian source-relay channels. The reference curve is the performance of the SMARC-JNCC assuming perfect source-relay channels (Section ‘Perfect source-relay links’).

8 Conclusion

We put forward a general form of joint network-channel codes (JNCCs) for a wireless communication network where sources also act as relay. We studied the influence of important JNCC parameters on the diversity order and proposed an upper and a lower bound on the diversity order. The lower bound is only valid for the case where the number of sources is equal to the number of relays, and where each relay only helps two sources.

We then proposed a practical JNCC that is scalable to large networks. Using the diversity analysis, we managed to rigorously prove its achieved diversity order, which is optimal in a well identified set of wireless networks. We verified the performance of a regular LDPC code via numerical simulations, which suggest that as networks grow, it is difficult to perform significantly better than a standard layered construction.

Appendix 1

Proof of Proposition 1

The maximal diversity order can be derived using the diversity equivalence between a block BEC and a BF channel [24, 25]. Assume a block BEC, so that a block s_{u_s} or r_{u_r} is either completely erased or perfectly known. Consider the case that e_1 blocks of length 2L and e_2 blocks of length L have been erased, where e = e_1 + e_2 is the total number of erasures, e_1 ≤ m_s and e_2 ≤ m_r − m_s. Hence, the number of unknown bits is equal to 2e_1 L + e_2 L. Considering the structure of H from (6), containing the block-diagonal matrix H_c, it follows that the 2e_1 L + e_2 L erased bits appear in only (2e_1 + e_2)(L − K) + m_r K of the available (m_s + m_r)L − m_s K parity equations, i.e., (2e_1 + e_2)(L − K) equations involving H_c and all m_r K equations involving HGLNC. Hence, the unknown bits can be retrieved only if there are sufficiently many linearly independent useful equations. This yields the necessary condition:

\[ m_r \geq 2 e_1 + e_2. \]
(23)

Denoting by e = e_1 + e_2 the total number of erased blocks, the largest value e_max of e for which (23) is satisfied for every split e = e_1 + e_2 with e_1 ≤ m_s and e_2 ≤ m_r − m_s is given by

\[ e_{\max} = \begin{cases} \left\lfloor \dfrac{m_r}{2} \right\rfloor, & m_r \leq 2 m_s, \\ m_r - m_s, & m_r > 2 m_s. \end{cases} \]
(24)

Hence, d_max = e_max + 1, yielding Proposition 1.
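The closed form (24) can be cross-checked by brute force over all admissible splits of e into e_1 and e_2; a small sketch (ours, assuming only the definitions above):

```python
# Brute-force cross-check (ours) of e_max in (24): the largest e such that
# every admissible split e = e1 + e2 (e1 <= m_s erased 2L-blocks,
# e2 <= m_r - m_s erased L-blocks) satisfies condition (23): m_r >= 2*e1 + e2.
def e_max_bruteforce(m_s: int, m_r: int) -> int:
    best = 0
    for e in range(m_r + 1):
        splits = [(e1, e - e1) for e1 in range(min(e, m_s) + 1)
                  if 0 <= e - e1 <= m_r - m_s]
        if splits and all(m_r >= 2 * e1 + e2 for e1, e2 in splits):
            best = e
        else:
            break  # once a value of e fails, all larger values fail as well
    return best

def e_max_closed_form(m_s: int, m_r: int) -> int:
    return m_r // 2 if m_r <= 2 * m_s else m_r - m_s

for m_s in range(1, 6):
    for m_r in range(m_s, 2 * m_s + 4):
        assert e_max_bruteforce(m_s, m_r) == e_max_closed_form(m_s, m_r), (m_s, m_r)
print("closed form (24) matches the brute-force search")
```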

Appendix 2

Proof of Proposition 3

Before we present the actual proof, we first propose two lemmas.

Lemma 4

Any binary a × b matrix S, with a ≥ b, in which all rows have weight 2 cannot have full rank b.

Proof

If a matrix has full rank b, there is no vector z ≠ 0 such that Sz = 0. However, if S has row weight 2, then S1 = 0, where 1 denotes the column vector with each entry equal to 1. □

Consider now a column vector of b unknown variables z and a set of constraints on these variables, stacked in S so that Sz = c, where c is a column vector of known constants. In general, solving Sz = c for z corresponds to performing Gaussian elimination on S. However, under some conditions, this simplifies to backward substitution.

Lemma 5

If a binary a × b matrix S, with a ≥ b, has full rank b and a maximal row weight of 2, Gaussian elimination simplifies to backward substitution.

Proof

Without loss of generality, we eliminate all redundant (linearly dependent) rows in S to obtain a square matrix of size b. By Lemma 4, S must contain at least one row of unit weight in order to have full rank; this row immediately determines one variable. Starting from this known variable, we can solve for a further variable in z at each step, as each row has weight at most 2.

Assume that this backward substitution procedure cannot be continued until all variables are known. That is, after successive decoding, there are k rows consisting of a combination $z_{i_k} + z_{j_k}$ where neither $z_{i_k}$ nor $z_{j_k}$ is known. We split the matrix S into two parts: $S_{\mathrm{unknown}} \in \{0,1\}^{k \times m_s}$ and $S_{\mathrm{known}} \in \{0,1\}^{(m_s - k) \times m_s}$. The former comprises the rows involving only unknown variables (note that the weight of each row of $S_{\mathrm{unknown}}$ is 2). The latter consists of the rows involving only known variables. If the number of unknown variables is equal to k, then the rank of $S_{\mathrm{unknown}}$ must be equal to k, which is impossible by Lemma 4. So the matrix S was not full rank, which contradicts our assumption. If the number of unknown variables is smaller than k, then there were redundant (linearly dependent) rows in $S_{\mathrm{known}}$, which again contradicts the assumptions. We conclude that the procedure only fails if S does not have full rank. □
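Lemma 5 translates directly into a peeling-style solver: repeatedly pick a row with a single unknown, solve it, and substitute. A minimal sketch (ours, over GF(2), with hypothetical variable names):

```python
# Sketch (ours) of the back-substitution argument of Lemma 5: repeatedly find a
# row with exactly one still-unknown variable, solve it, and substitute.
# S is a list of 0/1 rows, c the right-hand side, all arithmetic over GF(2).
def solve_weight2_gf2(S, c):
    n = len(S[0])
    z = [None] * n                       # unknown variables
    progress = True
    while progress and any(v is None for v in z):
        progress = False
        for row, rhs in zip(S, c):
            unknowns = [j for j in range(n) if row[j] and z[j] is None]
            if len(unknowns) == 1:       # exactly one unknown -> solve it
                j = unknowns[0]
                known = sum(row[k] * z[k] for k in range(n)
                            if row[k] and z[k] is not None) % 2
                z[j] = (rhs - known) % 2
                progress = True
    if any(v is None for v in z):
        raise ValueError("back-substitution stalled: S is not full rank (Lemma 5)")
    return z

# Example: z1 = 1, z1 + z2 = 0, z2 + z3 = 1  ->  z = [1, 1, 0]
print(solve_weight2_gf2([[1, 0, 0], [1, 1, 0], [0, 1, 1]], [1, 0, 1]))
```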

To prove Proposition 3, we use the diversity equivalence between a block BEC and the BF channel. In a block BEC, the channel Equation (4) simplifies to

\[
\begin{aligned}
y_{u_s} &= \varepsilon_{u_s}\, s_{u_s}, && u_s = 1, \ldots, m_s, \\
y_{m_s + u_r} &= \varepsilon_{m_s + u_r}\, r_{u_r}, && u_r = 1, \ldots, m_r,
\end{aligned}
\]
(25)

where ε_i = 0 when channel i is erased and ε_i = 1 otherwise. Hence, ε_i = 0 if i ∈ E and ε_i = 1 if i ∈ Ē, where Ē is the complement of E.

Source-codewords s_i can be retrieved directly from the transmissions in the source phase if ε_i = 1. Decoding the other source-codewords at the destination is performed through the parity-check matrix H (Equation (6)). We split H into two parts:

\[ H = \left[ H_{\mathrm{left}} \;\; H_{\mathrm{right}} \right], \]
(26)

where H_left and H_right have m_s L and m_r L columns, respectively. We also define $s = [s_1^T \cdots s_{m_s}^T]^T$ and $r = [r_1^T \cdots r_{m_r}^T]^T$. As Hx = 0, we have that

\[ H_{\mathrm{left}}\, s = H_{\mathrm{right}}\, r. \]
(27)

As we consider a block BEC, some transmissions are perfect. As in Appendix 1, consider the case that e_1 blocks of length 2L and e_2 blocks of length L have been erased, where e = e_1 + e_2 = |E| is the total number of erasures, e_1 ≤ m_s and e_2 ≤ m_r − m_s. Considering the structure of H from (6), containing the block-diagonal matrix H_c, it follows that the 2e_1 L + e_2 L erased bits appear in only (2e_1 + e_2)(L − K) + m_r K of the available (m_s + m_r)L − m_s K parity equations, i.e., (2e_1 + e_2)(L − K) equations involving H_c and all m_r K equations involving HGLNC. Next, (e_1 + e_2)K of the m_r K equations involving HGLNC cannot be used to solve for erased bits in s, as these equations always have at least two unknowns. The overall set of equations to decode s thus becomes

\[
\begin{aligned}
s_{u_s} &= y_{u_s}, && u_s \in \bar{E}, \\
H_p\, s_{u_s} &= 0, && u_s \in E, \\
\sum_{u_s \in T(u_r)} H_{u_s u_r}\, s_{u_s} &= H_{u_r}\, y_{m_s + u_r}, && u_r \in \bar{E},
\end{aligned}
\]
(28)

or, using the notation from (15),

\[
\begin{aligned}
s_{u_s} &= \tilde{y}_{u_s}, && u_s \in \bar{E}, \\
\sum_{u_s \in T(u_r)} \tilde{H}_{u_s u_r}\, s_{u_s} &= \tilde{H}_{u_r}\, \tilde{y}_{m_s + u_r}, && u_r \in \bar{E},
\end{aligned}
\]
(29)

where $\tilde{y}_i = \frac{1 + y_i}{2}$ (BPSK modulation). We can stack the coefficients of all elements in s in a matrix H_s. For example, if m_s = m_r = 3, E = {1}, T(2) = {1,3} and T(3) = {1,2}, then

(30)

It is now easy to see that M_E, as defined in Section ‘A lower bound based on {T(u_r)} for n_{u_r} = 2’, is closely related to H_s: $[M_E]_{i,j} = 1$ if $[H_s]_{(i-1)L+1 \ldots iL,\, (j-1)L+1 \ldots jL} \neq 0$ and $[M_E]_{i,j} = 0$ if $[H_s]_{(i-1)L+1 \ldots iL,\, (j-1)L+1 \ldots jL} = 0$.
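The mapping from H_s to M_E thus amounts to testing each L × L block for being non-zero; a minimal sketch (ours), assuming H_s is given as a dense 0/1 array:

```python
# Minimal sketch (ours) of the block-to-macroscopic mapping described above:
# [M_E]_{i,j} = 1 exactly when the (i,j)-th LxL block of H_s is nonzero.
def macroscopic_matrix(H_s, L):
    n_rows = len(H_s) // L
    n_cols = len(H_s[0]) // L
    return [[int(any(H_s[i * L + a][j * L + b]
                     for a in range(L) for b in range(L)))
             for j in range(n_cols)]
            for i in range(n_rows)]

# Toy example with L = 2: two block rows, two block columns.
H_s = [[1, 0, 0, 0],
       [0, 1, 0, 0],
       [1, 1, 0, 1],
       [0, 1, 1, 0]]
print(macroscopic_matrix(H_s, 2))   # [[1, 0], [1, 1]]
```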

If $|E| \leq d_M - 1$, then M_E has full rank, according to Definition 2. As established in Lemma 5, the set of equations represented by M_E can then be solved using backward substitution. This means that at each iteration, there is an equation with only one unknown. Consider a particular iteration and denote the index of the unknown by u. In H_s, this corresponds to an equation with an unknown source-codeword vector s_u of the type

\[
\begin{aligned}
H_p\, s_u &= 0, \\
H_{u u_r}\, s_u &= \sum_{u_s \in T(u_r),\, u_s \neq u} H_{u_s u_r}\, s_{u_s} + H_{u_r}\, y_{m_s + u_r},
\end{aligned}
\]
(31)

or of the type $s_u = y_u$.

Under ML decoding, we obtain what was claimed if the matrices $\tilde{H}_{u_s u_r}$, $u_s \in T(u_r)$, $u_r \in \{1, \ldots, m_r\}$, have full rank. Under BP decoding, we obtain what was claimed if, for each u_r, the set of L equations (31) can be solved with BP in the case of only one unknown source-codeword vector s_u.

Appendix 3

Proof of Lemma 2

A relay may not succeed in decoding the message from a source; we denote this event as a failure. There are $m^2 - m$ interuser channels, which all have a nonzero probability of failure. Hence, there exist $\sum_{i=0}^{m^2 - m} \binom{m^2 - m}{i}$ different cases, where each case corresponds to a combination of failures and successes. We denote the case where all interuser channels are successful as case 1.

Using Bayes’ law, the error rate can be split:

\[ P_{ew} = \sum_i P(\mathrm{case}\ i)\, P(ew \mid \mathrm{case}\ i). \]
(32)

Defining the diversity order corresponding to each case as $d_{c,i} = -\lim_{\gamma \to \infty} \frac{\log \left( P(\mathrm{case}\ i)\, P(ew \mid \mathrm{case}\ i) \right)}{\log \gamma}$, it follows that the overall diversity order is $d = \min_i d_{c,i}$.

The probability of f failures on independent interuser channels is proportional to $\frac{1}{\gamma^f}$ ([23], Equation (3.157)), so that for this case i,

\[ d_{c,i} = -\lim_{\gamma \to \infty} \frac{\log P(\mathrm{case}\ i)}{\log \gamma} - \lim_{\gamma \to \infty} \frac{\log P(ew \mid \mathrm{case}\ i)}{\log \gamma} \]
(33)
\[ = f - \lim_{\gamma \to \infty} \frac{\log P(ew \mid \mathrm{case}\ i)}{\log \gamma}. \]
(34)

The diversity order in the case of perfect interuser channels (f = 0) is d_{c,1}. That is, the error-correcting code can bear d_{c,1} − 1 erasures on node-destination links. Hence, d_{c,i} ≥ d_{c,1} if $P(ew \mid \mathrm{case}\ i) \leq c\, \gamma^{-(d_{c,1}-f)}$, i.e., if all information can still be retrieved at the destination given that f interuser channels and d_{c,1} − f − 1 node-destination channels are erased. Let us check whether this is true for all f.

A relay stays silent if it cannot decode all source codewords corresponding to its transmission set. If there are f interuser failures, there are at most f relays which stay silent in the relay phase. This corresponds to at most f additional node-destination erasures adding to the assumed d_{c,1} − f − 1 already erased node-destination channels, yielding a total of at most d_{c,1} − 1 erased node-destination channels, which can be supported by the code, by the definition of d_{c,1}.

Appendix 4

Proof of Lemma 3

In the case that m_s > 4 and Algorithm 1 is used to construct {T(u_r), u_r = 1, …, m_r}, reciprocity is irrelevant for our proposed code, as i ∉ T(j) if j ∈ T(i). Hence, if m_s > 4, the proof given in Appendix 3 is always valid.

Now consider the case that d_{c,1} = 2, which corresponds to m_s = m_r = m < 4 (see Proposition 1). In the case of f = 1 interuser channel failure, d_{c,i} is always larger than one, because P(ew|case i) ≤ c/γ, as at least one channel, the source-destination channel, needs to fail to lose the corresponding information bits.

Finally, consider the case that m_s = m_r = m = 4 and thus d_{c,1} = 3. Hence, in the case of no interuser failures, the code can support two node-destination failures, corresponding to four erased transmissions from two nodes, in the source phase and in the relay phase. Reciprocity is relevant, as i ∈ T(j) if j ∈ T(i) for (i,j) equal to (1,3) and (2,4). Because P(ew|case i) ≤ c/γ always holds, cases with f ≥ 2 achieve d_{c,i} ≥ 3, so we only have to consider the case that f = 1, denoted as case i in general. Hence, if the interuser channel between sources one and three, or between sources two and four, has been erased, relays one and three, or relays two and four, respectively, stay silent. Note that the transmission sets of the remaining active relays are disjoint when Algorithm 1 is used, and, because n = 2, they support all sources u_s = 1,…,4. If one node-destination channel is subsequently erased, which corresponds to at most two erased transmissions, the destination has to recover the information bits of the erased source-codeword. Because a relay u_r cannot have u_r in its own transmission set T(u_r), the erased relay codeword does not contain any information on the erased source-codeword, which implies that this information is in the remaining relay codeword. Hence, we have that P(ew|case i) ≤ c/γ², or, by (33), d_{c,i} ≥ 3. In other words, interuser failures do not decrease the diversity order.

Endnotes

a. Unless mentioned otherwise, we assume that channels are reciprocal, i.e., the channel from u1 to u2 is the same as the channel from u2 to u1.

b. In practice, increasing the SNR value can be achieved by increasing the transmission power of a node, so that both the SNR of the node-to-destination channels and that of the channels between non-destination nodes increase.

c. For conciseness, we do not formulate the equation for channels between non-destination nodes.

d. Note that a relay u is not allowed to consider relay codewords r_{u_r} for inclusion in S(u). As a consequence, the right part of HGLNC is diagonal in Equation (7). This restriction was not always applied in the literature (e.g., [17]), but it simplifies the theoretical analysis and code design.

e. A standard BF channel is a channel with B blocks of length L, where each block is affected by an independent fading gain. The maximal achievable diversity order on this channel is given by 1 + ⌊B(1 − R_c)⌋, where R_c is the coding rate [27–29].

f. The attentive reader will notice that the first two block rows in Equation (A.7) in [21] are not used here. These block rows are only necessary if a source is helped by one relay only and no point-to-point codes are available, which is not the case here.

g. The coding matrix expresses the transmission sets for each relay, which is required to determine the outage probability.

h. For a specific instance, the parity-check matrix can be graphically represented by a bipartite graph, denoted as a Tanner graph. This Tanner graph representation is equivalent to the factor graph, which can be used for decoding.

References

1. Shakkottai S, Rappaport TS, Karlsson PC: Cross-layer design for wireless networks. IEEE Commun. Mag 2003, 41(10):74-80. 10.1109/MCOM.2003.1235598

2. Srivastava V, Motani M: Cross-layer design: a survey and the road ahead. IEEE Commun. Mag 2005, 43(12):112-119.

3. Bertsekas D, Gallager R: Data Networks. Prentice Hall; 1992.

4. Duyck D, Boutros JJ, Moeneclaey M: Low-density graph codes for slow fading relay channels. IEEE Trans. Inf. Theory 2011, 57(7):4202-4218.

5. Hunter TE: Coded cooperation: a new framework for user cooperation in wireless systems. Ph.D. thesis, University of Texas at Dallas; 2004.

6. Laneman JN, Tse D, Wornell GW: Cooperative diversity in wireless networks: efficient protocols and outage behavior. IEEE Trans. Inf. Theory 2004, 50(12):3062-3080. 10.1109/TIT.2004.838089

7. Ahlswede R, Cai N, Li S-YR, Yeung RW: Network information flow. IEEE Trans. Inf. Theory 2000, 46(4):1204-1216. 10.1109/18.850663

8. Koetter R, Médard M: An algebraic approach to network coding. IEEE/ACM Trans. Netw 2003, 11(5):782-795. 10.1109/TNET.2003.818197

9. Li SYR, Yeung RW, Cai N: Linear network coding. IEEE Trans. Inf. Theory 2003, 49(2):371-381.

10. Rebelatto JK, Uchôa-Filho BF, Li Y, Vucetic B: Multi-user cooperative diversity through network coding based on classical coding theory. IEEE Trans. Sig. Process 2012, 60(2):916-926.

11. Xiao M, Skoglund M: M-user cooperative wireless communications based on non-binary network codes. In Proc. Inf. Theory Workshop (ITW). Volos, Greece; 2009:316-320.

12. Kramer G, Gastpar M, Gupta P: Cooperative strategies and capacity theorems for relay networks. IEEE Trans. Inf. Theory 2005, 51(9):3037-3063. 10.1109/TIT.2005.853304

13. Guo Z, Huang J, Wang B, Cui JH, Zhou S, Willett P: A practical joint network-channel coding scheme for reliable communication in wireless networks. In Proc. of the ACM intern. symp. on mob. ad hoc netw. and comp. New Orleans, Louisiana; 2009:279-288.

14. Hausl C, Dupraz P: Joint network-channel coding for the multiple-access relay channel. Proc. IEEE Commun. Soc. Sensor Ad Hoc Commun. Netw 2006, 3:817-822.

15. Hausl C, Schreckenbach F, Oikonomidis I, Bauch G: Iterative network and channel decoding on a Tanner graph. In Proc. Allerton Conf. on Commun., Control and Computing. Monticello, Illinois; 2005. http://scholar.google.com/citations?view_op=view_citation&hl=en&user=4GFQzXIAAAAJ&citation_for_view=4GFQzXIAAAAJ:d1gkVwhDpl0C

16. Hausl C, Hagenauer J: Iterative network and channel decoding for the two-way relay channel. In Proc. IEEE Int. Conf. on Comm. Istanbul, Turkey; 2006:1568-1573.

17. Bao X, Li JT: Generalized adaptive network coded cooperation (GANCC): a unified framework for network coding and channel coding. IEEE Trans. Commun 2011, 59(11):2934-2938.

18. Duyck D, Capirone D, Heindlmaier M, Moeneclaey M: Towards full-diversity joint network-channel coding for large networks. In Proc. of Europ. Wirel. Conf. Vienna, Austria; 2011:1-8.

19. Li J, Yuan J, Malaney R, Azmi MH, Xiao M: Network coded LDPC code design for a multi-source relaying system. IEEE Trans. Wirel. Comm 2011, 10(5):1538-1551.

20. Li J, Yuan J, Malaney R, Xiao M: Binary field network coding design for multiple-source multiple-relay networks. In IEEE Int. Conf. on Comm. Sydney, NSW, Australia; 2011:1-6.

21. Duyck D, Capirone D, Boutros JJ, Moeneclaey M: Analysis and construction of full-diversity joint network-LDPC codes for cooperative communications. Eur. J. Wirel. Commun. Netw 2010, Art. ID 805216. http://jwcn.eurasipjournals.com/content/2010/1/805216

22. Duyck D, Capirone D, Boutros JJ, Moeneclaey M: A full-diversity joint network-channel code construction for cooperative communications. In Proc. IEEE Intern. Symp. on Personal, Indoor and Mob. Radio Comm. (PIMRC). Tokyo, Japan; 2009:1282-1286.

23. Tse DNC, Viswanath P: Fundamentals of Wireless Communication. Cambridge University Press, Cambridge; 2005.

24. Boutros JJ: Controlled doping via high-order rootchecks in graph codes. Presented at: IEEE Communication Theory Workshop, Sitges, Catalonia, Spain; 2011. Available online from http://www.josephboutros.org/coding/root_LDPC_doping.pdf

25. Duyck D: Design of LDPC coded modulations for wireless fading channels. Ph.D. dissertation, Ghent University, Ghent, Belgium; in press (to be published in 2012).

26. Guillén i Fàbregas A: Coding in the block-erasure channel. IEEE Trans. Inf. Theory 2006, 52(11):5116-5121.

27. Guillén i Fàbregas A, Caire G: Coded modulation in the block-fading channel: coding theorems and code construction. IEEE Trans. Inf. Theory 2006, 52(1):91-114.

28. Knopp R, Humblet PA: On coding for block fading channels. IEEE Trans. Inf. Theory 2000, 46(1):189-205. 10.1109/18.817517

29. Malkamaki E, Leib H: Evaluating the performance of convolutional codes over block fading channels. IEEE Trans. Inf. Theory 1999, 45(5):1643-1646. 10.1109/18.771235

30. Chou PA, Wu Y, Jain K: Practical network coding. In Proc. Allerton Conf. on Communication, Control, and Computing. Illinois; 2003.

31. McEliece RJ, MacKay DJC, Cheng J-F: Turbo decoding as an instance of Pearl's “belief propagation” algorithm. IEEE J. Sel. Area Commun 1998, 16(2):140-152. 10.1109/49.661103

32. Ho T, Médard M, Koetter R, Karger DR, Effros M, Shi J, Leong B: A random linear network coding approach to multicast. IEEE Trans. Inf. Theory 2006, 52(10):4413-4430.

33. Biglieri E, Proakis J, Shamai S: Fading channels: information-theoretic and communications aspects. IEEE Trans. Inf. Theory 1998, 44(6):2619-2692. 10.1109/18.720551

34. Ozarow LH, Shamai S, Wyner AD: Information theoretic considerations for cellular mobile radio. IEEE Trans. Veh. Technol 1994, 43(2):359-379. 10.1109/25.293655

35. Hausl C: Joint network-channel coding for the multiple-access relay channel based on turbo codes. Eur. Trans. Telecommun 2009, 20(2):175-181. 10.1002/ett.1349

36. Cover TM, Thomas JA: Elements of Information Theory. Wiley, New York; 2006.

37. Ungerboeck G: Channel coding with multilevel/phase signals. IEEE Trans. Inf. Theory 1982, IT-28(1):55-67.

38. Kschischang F, Frey B, Loeliger H-A: Factor graphs and the sum-product algorithm. IEEE Trans. Inf. Theory 2001, 47(2):498-519. 10.1109/18.910572

39. Tanner M: A recursive approach to low complexity codes. IEEE Trans. Inf. Theory 1981, 27(5):533-547. 10.1109/TIT.1981.1056404

40. Wymeersch H: Iterative Receiver Design. Cambridge University Press, Cambridge; 2007.

41. Richardson TJ, Urbanke RL: Modern Coding Theory. Cambridge University Press, Cambridge; 2008.

42. Hamdani D, Safrianti E: Construction of short-length high-rates LDPC codes using difference families. Makara, Teknologi 2007, 11(1):25-29.


Acknowledgements

This study was supported by the European Commission in the framework of the FP7 Network of Excellence in Wireless COMmunications NEWCOM++ (contract no. 216715).

Author information

Corresponding author

Correspondence to Dieter Duyck.

Additional information

Competing interests

The authors declare that they have no competing interests.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

About this article

Cite this article

Duyck, D., Heindlmaier, M., Capirone, D. et al. Diversity analysis, code design, and tight error rate lower bound for binary joint network-channel coding. J Wireless Com Network 2012, 350 (2012). https://doi.org/10.1186/1687-1499-2012-350
