Joint network-channel codes (JNCC) can improve the performance of communication in wireless networks, by combining, at the physical layer, the channel codes and the network code as an overall error-correcting code. JNCC is increasingly proposed as an alternative to a standard layered construction, such as the OSI model. The main performance metrics for JNCCs are scalability to larger networks and error rate. The diversity order is one of the most important parameters determining the error rate. The literature on JNCC is growing, but a rigorous diversity analysis is lacking, mainly because of the many degrees of freedom in wireless networks, which makes it very hard to prove general statements on the diversity order. In this article, we consider a network with slowly varying fading point-to-point links, where all sources also act as relays and additional non-source relays may be present. We propose a general structure for JNCCs to be applied in such a network. In the relay phase, each relay transmits a linear transform of a set of source codewords. Our main contributions are the proposition of an upper and a lower bound on the diversity order, a scalable code design and a new lower bound on the word error rate to assess the performance of the network code. The lower bound on the diversity order is only valid for JNCCs where the relays transform only two source codewords. We then validate this analysis with an example which compares the JNCC performance to that of a standard layered construction. Our numerical results suggest that as networks grow, it is difficult to perform significantly better than a standard layered construction, both at a fundamental level, expressed by the outage probability, and at a practical level, expressed by the word error rate.

Point-to-point communication has revealed many of its secrets. Driven by new applications, research in wireless communication is now focusing more on the optimization of communication in wireless networks. For example, the joint operation of multiple network layers can be optimized, denoted as cross-layer design [1, 2], thereby leaving the classical layered architectures, such as the seven-layer open systems interconnect (OSI) model ([3], p. 20). Another example of network optimization is cooperative communication, where multiple nodes in the network cooperate to improve their error performance. Cooperation may occur in many forms at different layers, e.g., cooperative channel coding at the physical layer and network coding at the network layer. Network coding refers to the case where the intermediate nodes in the network are allowed to perform encoding operations over multiple received streams from different sources. In a standard layered construction, the decoding of the network code is performed at the network layer, after the point-to-point transmissions have been decoded at the physical layer. Channel coding refers to the case where nodes perform coding over one point-to-point wireless link only. Cooperative channel coding is achieved by letting one or more relays transmit redundant bits for one source at a time. Usually, channel coding and network coding are studied separately (e.g., [4–6] for cooperative channel coding and [7–11] for network coding).

Standard linear network coding consists of taking linear combinations of several source packets. In general, non-binary coefficients are used in the linear combinations. In JNCC, cooperative channel coding (e.g., decode and forward [12]) and cross-layer design are combined, by using the network code for decoding at the physical layer. The rationale behind JNCC is to improve the joint error rate performance (i.e., the average error rate performance over all users participating in the network) by letting the redundancy of the network code help to decode the noisy channel output [13]. In that case, a joint optimization of the network and channel code is useful. For example, one can opt to let the network and channel code be represented by one parity-check matrix of a binary code, referred to as joint network-channel coding (JNCC). Hence, the coefficients multiplying the packets in the case of standard linear network coding are replaced by matrices in the case of JNCC.

The two most important performance metrics are (R, P_{e}), where R is the information rate and P_{e} is the error rate. Here, we consider a fixed information rate R, so that the aim is to minimize P_{e} for a given point-to-point channel quality, expressed by γ, the signal-to-noise ratio (SNR) per symbol. Expressing the asymptotic (for large γ) error rate as ${P}_{e}=\frac{1}{g{\gamma}^{d}}$, where g and d are defined as the coding gain and the diversity order, respectively, improving the performance refers to maximizing first d and then g (because d has the larger impact). Next to minimizing the error rate, scalability of the code design (e.g., to larger networks) is also an important criterion often recurring in the literature. JNCC is increasingly proposed as an alternative to a standard layered construction, such as the OSI model. However, it must be verified that important metrics, such as the diversity order d and the scalability to large networks, are not negatively affected.
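Under this asymptotic model, the diversity order d sets the slope of the error-rate curve while the coding gain g only shifts it. A small numerical sketch (the g and d values are illustrative, not from the article) makes concrete why d has the larger impact at high SNR:

```python
# Sketch (illustrative values): compare two hypothetical codes under the
# asymptotic model P_e = 1 / (g * gamma^d).

def error_rate(gamma, g, d):
    """Asymptotic word error rate model P_e = 1 / (g * gamma^d)."""
    return 1.0 / (g * gamma ** d)

# Code A: large coding gain but diversity order 1.
# Code B: modest coding gain but diversity order 2.
g_a, d_a = 100.0, 1
g_b, d_b = 2.0, 2

for gamma_db in (10, 20, 30):
    gamma = 10 ** (gamma_db / 10)        # SNR per symbol, linear scale
    print(gamma_db, error_rate(gamma, g_a, d_a), error_rate(gamma, g_b, d_b))
# At 10 dB the high-gain code A is better, but from 20 dB on the
# higher-diversity code B wins: the slope d eventually dominates g.
```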

Binary JNCC has received much attention in recent years. Pioneering articles [14, 15] designed turbo codes and LDPC codes, respectively, for the multiple access relay channel (MARC) and for the two-way relay channel [16]. However, the code design was not immediately scalable to general large networks and did not contain the structure required to achieve full diversity. The study of Hausl et al. [14–16] was followed by the interesting study of Bao et al. [17], presenting a JNCC that is scalable to large networks. However, this JNCC was not structured to achieve full diversity and has weak points from a coding point of view [18]. A deficiency in the literature, for general networks with a number of sources and relays, is the lack of a detailed diversity analysis in the case that the sources can act as relays (which is, for example, the model assumed by [17]). The effect of the parameters of the JNCC on the diversity order is in general not known, because of the many degrees of freedom in such networks. Related to this, we mention [19, 20], where the authors designed a JNCC for the case where the sources cannot act as relays, but other nodes play the role of relay to communicate to one destination. As the source nodes are excluded from acting as relay nodes in this model, the diversity analysis in [19, 20] is different from ours.

In this article, we consider a JNCC where the network code forms an integral part of the overall error-correcting code that is used at the destination to decode the information from the sources. The rest of the article is organized as follows. In Section ‘Diversity analysis of JNCC’, we perform a diversity analysis, leading to an upper bound on the diversity order of any linear binary JNCC following our system model, and to a lower bound on the diversity order for a particular subset of linear binary JNCCs. The upper and lower bounds depend on the parameters of the JNCC and can be used to verify whether a particular JNCC has the potential to achieve full diversity on a certain network. Second, in Section ‘Practical JNCC for ${n}_{{u}_{r}}=2$’, a specific JNCC of the LDPC type is proposed that achieves full diversity for a well-identified set of wireless networks. The scalability of this specific JNCC to large networks is discussed. The coding gain g is not considered in the body of the article; the parameters of our proposed code may be further optimized by applying techniques such as in [19], to maximize g. To assess the performance of the proposed JNCC, we determine the outage probability, a well-known lower bound on the word error rate, in Section ‘Lower bound for the WER’. We also present a tighter word error rate lower bound in Section ‘Calculation of a tighter lower bound on WER’, which takes into account the particular structure of the JNCC. In Section ‘Numerical results’, the numerical results corroborate the established theory. We also briefly comment on the coding gain achieved by the proposed JNCC, and conclusions are drawn for different classes of large networks.

The main contribution of this article is to indicate the effect of the parameters of the JNCC on the diversity order, for networks that fit our channel model. More specifically, we propose an upper and a lower bound on the diversity order, a scalable code design and a new lower bound on the word error rate that is tighter than the outage probability and thus better suited to assess the performance of the overall error-correcting code. The main contributions are summarized in the lemmas, propositions and corollaries. These can be a guide for any coding theorist designing JNCCs. Further, our numerical results suggest that as networks grow, it is difficult to perform significantly better than a standard layered construction, both at a fundamental level, expressed by the outage probability, and at a practical level, expressed by the word error rate. This conjecture is important, because one will now need to clearly motivate the use of JNCC instead of a standard layered construction, given the extra efforts that are required for JNCC.

This article extends the study, published in [18], by also considering non-perfect source-relay channels, by considerably extending the diversity analysis, by providing an achievability proof for the diversity order of the proposed JNCC, by clearly indicating the set of wireless networks where the proposed JNCC is diversity-optimal, by providing a tighter lower bound on the word error rate, and by providing more numerical results.

2 Joint network-channel coding

We first illustrate joint network-channel coding by means of a simple example. Consider two sources orthogonally broadcasting a vector of symbols, mapped from the binary vectors s_{1} and s_{2}, respectively, to a relay and a destination. This channel is denoted as a multiple access relay channel (MARC) in the literature. Supposing that the relay is able to decode the received symbols, the relay computes a binary vector r_{1}, which is mapped to symbols and transmitted to the destination. The relation between all bits is expressed by the JNCC, whose parity-check matrix has the following general form,

$H=\left[\begin{array}{ccc}{H}_{p}& 0& 0\\ 0& {H}_{p}& 0\\ 0& 0& {H}_{p}\\ {H}_{1}^{1}& {H}_{2}^{1}& {H}_{1}\end{array}\right].$

(1)

The matrix H_{p} represents the parity-check matrix for the point-to-point channel code. Each of the binary vectors s_{1}, s_{2}, and r_{1} can be separately decoded using this code. The bottom part of H represents the generalized linear network code (GLNC), which we denote as ${H}_{\text{GLNC}}=\left[{H}_{1}^{1}\phantom{\rule{0.5em}{0ex}}{H}_{2}^{1}\phantom{\rule{0.5em}{0ex}}{H}_{1}\right]$. It expresses the relation between r_{1}, s_{1}, and s_{2}. More specifically, we have

${H}_{1}^{1}{\mathbf{s}}_{1}+{H}_{2}^{1}{\mathbf{s}}_{2}+{H}_{1}{\mathbf{r}}_{1}=\mathbf{0}.$

(2)

Note that GLNC includes standard network codes used in an OSI communication model as a special case. In the latter case, the matrices ${H}_{j}^{i}$ and H_{i} (considering more than one relay in general) are identity matrices or all-zero matrices, so that the network code simplifies to the relay packet being a linear combination of source packets, also expressed as XORing of packets or symbol-wise addition of packets.
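This XOR special case can be sketched as follows (a toy example with illustrative 8-bit packets; with identity matrices, the GLNC check reduces to s1 ⊕ s2 ⊕ r1 = 0):

```python
# Sketch of the identity-matrix special case of the GLNC: the relay packet
# is the symbol-wise XOR of the source packets. Packet contents are
# arbitrary illustration data, not from the article.

s1 = [1, 0, 1, 1, 0, 0, 1, 0]          # source packet 1 (bits)
s2 = [0, 1, 1, 0, 1, 0, 0, 1]          # source packet 2 (bits)

r1 = [a ^ b for a, b in zip(s1, s2)]   # network-coded relay packet

# With identity matrices, the check H1 s1 + H2 s2 + H r1 = 0 over GF(2)
# reduces to s1 XOR s2 XOR r1 = 0:
assert all((a ^ b ^ c) == 0 for a, b, c in zip(s1, s2, r1))

# If the destination received s1 and r1 but lost s2, it can recover s2:
s2_rec = [a ^ b for a, b in zip(r1, s1)]
assert s2_rec == s2
```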

Ideally, the overall matrix H conforms to optimized degree distributions that specify the LDPC code. When the channels between sources and relay are perfect, we can drop the first three sets of rows and only keep the GLNC, represented by H_{GLNC}; in this case the information bits of the code are s_{1} and s_{2}, and r_{1} contains the parity bits. This is still a JNCC, as the redundancy in the network code is used to decode the received symbols at the physical layer at the destination. In [21, 22], it is proved that the matrices H_{p} do not affect the diversity order in the case of the MARC.

3 System model

We consider wireless networks with m_{s} sources directly communicating to a common destination (e.g., cellphones communicating to a base station). Two time-orthogonal phases are distinguished. In the source phase, the sources orthogonally broadcast their respective source packets. In the following relay phase, the relays orthogonally broadcast their respective packets. All considered sources overhear each other during the source phase, and act as relays in the relay phase. Other nodes, not acting as a source, might be present in the network (i.e., overhearing the sources) and also act as relays. Hence, we consider a total of m_{r} relays, where m_{r} ≥ m_{s}. This general network model, which is practically relevant as it fits many applications, is adopted in, e.g., [17]. Take for example any large network and consider a volume in space (cf. picocells or femtocells) where all nodes can overhear each other. These nodes form sub-networks that fit our proposed model. Note that in the literature, other models are sometimes assumed, such as the M − N − 1 model [19, 20], where M sources are helped by N relays (the relays are nodes different from the sources) to communicate to one destination.

All devices have one antenna, are half-duplex and transmit orthogonally using BPSK modulation. The K information bits of each source are encoded via point-to-point channel codes into a systematic codeword, denoted as source codeword, of length L, expressed by the column vector ${\mathbf{s}}_{{u}_{s}}$ for user u_{s}, u_{s} ∈ [1,…,m_{s}]. The parity-check matrix of dimension (L − K) × L of this point-to-point codeword is denoted by H_{p}, which is the same for each user u_{s}, so that ${H}_{p}{\mathbf{s}}_{{u}_{s}}=\mathbf{0}$ for all u_{s}. In the relay phase, each relay u_{r}, u_{r} ∈ [1,…,m_{r}], transmits a point-to-point codeword ${\mathbf{r}}_{{u}_{r}}$ of length L to the destination, also satisfying ${H}_{p}{\mathbf{r}}_{{u}_{r}}=\mathbf{0}$. Hence, all slots have equal duration, the coding rate of the point-to-point channels is ${R}_{c,p}=\frac{K}{L}$, and the overall coding rate is ${R}_{c}=\frac{{m}_{s}K}{({m}_{s}+{m}_{r})L}={R}_{c,p}\frac{{m}_{s}}{{m}_{s}+{m}_{r}}$. We define the fraction of source transmissions in the total number of transmissions as the network coding rate ${R}_{n}=\frac{{m}_{s}}{{m}_{s}+{m}_{r}}$, so that R_{c} = R_{c,p}R_{n}. The overall codeword of length (m_{s} + m_{r})L is expressed by the column vector

${\left[{\mathbf{s}}_{1}^{T}\phantom{\rule{0.3em}{0ex}}\cdots \phantom{\rule{0.3em}{0ex}}{\mathbf{s}}_{{m}_{s}}^{T}\phantom{\rule{0.3em}{0ex}}{\mathbf{r}}_{1}^{T}\phantom{\rule{0.3em}{0ex}}\cdots \phantom{\rule{0.3em}{0ex}}{\mathbf{r}}_{{m}_{r}}^{T}\right]}^{T}.$

The destination declares a word error if it cannot perfectly retrieve all m_{s}K information bits, and the overall word error rate is denoted by P_{ew}.

All relevant channels between different^{a} pairs of network nodes are assumed independent, memoryless, with real additive white Gaussian noise and multiplicative real fading (Rayleigh distributed with expected squared value equal to one). The fading coefficient of a wireless link is only known at the receiver side of that link. We consider a slow fading environment with a finite coherence time that is longer than the duration of the source phase and the relay phase, so that the fading gain between two network nodes takes the same value during both phases. We denote the fading gain from node u to the destination as α_{u}, with $\mathbb{E}\left[{\alpha}_{u}^{2}\right]=1$. All point-to-point channels have the same average signal-to-noise ratio (SNR), denoted by γ. Differences in average SNR between the channels would not alter the diversity analysis, on the condition that the large-SNR behavior inherent to a diversity analysis refers to all^{b} SNRs being large. Denoting the received symbol vector at the destination^{c} in timeslot i as y_{i}, the channel equation is

${\mathbf{y}}_{i}={\alpha}_{u}{\mathbf{x}}_{i}^{\prime}+{\mathbf{n}}_{i},$

where ${\mathbf{x}}_{i}^{\prime}$ is the BPSK-modulated codeword transmitted by node u in timeslot i, ${\mathbf{n}}_{i}\sim \mathcal{C}\mathcal{N}(\mathbf{0},\frac{1}{\gamma}I)$ is the noise vector in timeslot i, ${\mathbf{s}}_{{u}_{s}}^{\prime}=2{\mathbf{s}}_{{u}_{s}}-1$ and ${\mathbf{r}}_{{u}_{r}}^{\prime}=2{\mathbf{r}}_{{u}_{r}}-1$ (BPSK modulation).
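A minimal simulation sketch of this channel model follows (the `transmit` helper and its parameters are hypothetical illustrations, not from the article); the fading gain is held constant over the codeword, as in the slow-fading assumption above:

```python
import random

# Sketch (illustrative): one realization of y_i = alpha_u * x_i' + n_i for
# a BPSK-modulated codeword on a Rayleigh-fading link with average SNR gamma.

def transmit(bits, gamma, rng):
    """Send one codeword over a slow-fading link: alpha is constant over
    the codeword, E[alpha^2] = 1 (Rayleigh), noise variance 1/gamma."""
    # Rayleigh with E[alpha^2] = 1: alpha^2 ~ Exp(1).
    alpha = rng.expovariate(1.0) ** 0.5
    sigma = (1.0 / gamma) ** 0.5
    x = [2 * b - 1 for b in bits]                     # BPSK mapping
    y = [alpha * xi + rng.gauss(0.0, sigma) for xi in x]
    return alpha, y

rng = random.Random(42)
alpha, y = transmit([1, 0, 1, 1], gamma=100.0, rng=rng)
# Hard decisions; at high SNR these typically recover the bits, but a deep
# fade (small alpha) can still cause errors, which is why diversity matters.
bits_hat = [1 if yi > 0 else 0 for yi in y]
```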

Hence, at the destination, each of the m_{s} independent fading gains between the sources and the destination affects 2L bits (L bits in the source phase and L bits in the relay phase), and each of the m_{r} − m_{s} fading gains between the non-source relays and the destination affects L bits, assuming that all m_{r} relays could decode the messages received from the sources. Thus, from the point of view of the destination, the overall codeword is transmitted on a block fading (BF) channel with m_{r} blocks, each affected by its own fading gain, where m_{s} blocks have length 2L and m_{r} − m_{s} blocks have length L. This notion will be essential in the subsequent diversity analysis (Section ‘Diversity analysis of JNCC’).

In the source phase, relay u_{r} attempts to decode the received symbols from sources belonging to the decoding set $\mathcal{S}\left({u}_{r}\right)$. The users that are successfully decoded at relay u_{r} are added to its retrieval set, denoted by $\mathcal{R}\left({u}_{r}\right)$, $\mathcal{R}\left({u}_{r}\right)\subset \mathcal{S}\left({u}_{r}\right)$, with cardinality ${l}_{{u}_{r}}$. Next, in the relay phase, relay u_{r} transmits a relay packet, which is a linear transformation of ${n}_{{u}_{r}}$ source codewords^{d} originated by the sources from the transmission set $\mathcal{T}\left({u}_{r}\right)=\{{u}_{1},\dots ,{u}_{{n}_{{u}_{r}}}\}$ of relay u_{r}, with $\mathcal{T}\left({u}_{r}\right)\subset \mathcal{R}\left({u}_{r}\right)$. If ${l}_{{u}_{r}}<{n}_{{u}_{r}}$, then relay u_{r} does not transmit anything. In Section ‘Diversity analysis of JNCC’, we show that ${n}_{{u}_{r}}$ is an important parameter that strongly affects the diversity order.

For example, user 3 attempts to decode the messages from users 1, 2, and 5, and succeeds in decoding the messages from users 1 and 5, from which a linear transformation is computed. Hence, $\mathcal{S}\left(3\right)=\{1,2,5\}$, $\mathcal{R}\left(3\right)=\mathcal{T}\left(3\right)=\{1,5\}$, l_{3} = n_{3} = 2. Because the channel between a node and the destination remains constant during both source and relay phases, a relay has no interest in including its own source message in $\mathcal{S}\left({u}_{r}\right)$.

Using the transmission set for each relay, the GLNC in Equation (2) generalizes to

${H}_{{u}_{r}}{\mathbf{r}}_{{u}_{r}}+\sum _{{u}_{s}\in \mathcal{T}\left({u}_{r}\right)}{H}_{{u}_{s}}^{{u}_{r}}{\mathbf{s}}_{{u}_{s}}=\mathbf{0},\phantom{\rule{1em}{0ex}}{u}_{r}=1,\dots ,{m}_{r},$

where the matrices ${H}_{{u}_{r}}$ and ${H}_{{u}_{s}}^{{u}_{r}}$ are of dimension K × L. Hence, each transmitted relay codeword ${\mathbf{r}}_{{u}_{r}}$ is a linear transformation of ${n}_{{u}_{r}}$ source codewords. The superscript u_{r} in ${H}_{{u}_{s}}^{{u}_{r}}$ indicates that the vector ${\mathbf{s}}_{{u}_{s}}$ is in general not transformed by the same matrix for all relays u_{r} where ${u}_{s}\in \mathcal{T}\left({u}_{r}\right)$. The overall parity-check matrix H is thus expressed as

In other words, P_{ew} ∝ γ^{−d}, where ∝ denotes proportional to.

In the proofs of propositions in this article, we will often use the diversity equivalence between a BF channel and a block binary erasure channel (block BEC), which was proved in [24, 25]. A block BEC channel is obtained by restricting the fading gains in our model to belong to the set {0,∞}, so that a point-to-point channel is either erased or perfect. Denoting the erasure probability $\text{Pr}\left[{\alpha}_{{u}_{r}}=0\right]$ by ε, a diversity order d is achieved if P_{ew} ∝ ε^{d} for small ε [26]. A diversity order of d is thus achievable if there exists no combination of d − 1 erased point-to-point channels leading to a word error. On the other hand, a diversity order of d is not achievable if there exists at least one combination of d − 1 erased channels leading to a word error.

4 Diversity analysis of JNCC

In this section, we present the relation between the diversity order d and the parameters $\{{n}_{{u}_{r}},{u}_{r}=1,\dots ,{m}_{r}\}$, as well as between d and the choice of $\{\mathcal{T}\left({u}_{r}\right),{u}_{r}=1,\dots ,{m}_{r}\}$. This guides the code design; furthermore, the potential of a linear binary JNCC satisfying some conditions to achieve full diversity can be verified without performing Monte Carlo simulations.

We first prove that the diversity order is a function of only the network coding rate R_{n} (Section ‘Diversity as a function of the network coding rate’). We then determine in Section ‘Space diversity by cooperation’ the relation between the diversity order d and the set $\{{n}_{{u}_{r}},{u}_{r}=1,\dots ,{m}_{r}\}$, for any linear binary JNCC expressed as in Equations (6) and (7). The set $\{{n}_{{u}_{r}},{u}_{r}=1,\dots ,{m}_{r}\}$ actually determines the maximal spatial diversity that can be achieved by cooperation, leading to an upper bound on the diversity order. In Section ‘A lower bound based on $\{\mathcal{T}\left({u}_{r}\right)\}$ for ${n}_{{u}_{r}}=2$’, we propose a lower bound on the diversity order in the case that ${n}_{{u}_{r}}=n=2$, which depends on all transmission sets $\{\mathcal{T}\left({u}_{r}\right),{u}_{r}=1,\dots ,{m}_{r}\}$. In Section ‘Diversity order with interuser failures’, we discuss how the diversity order is affected by interuser failures. Finally, in Section ‘Diversity order in a layered construction’, we briefly comment on the diversity order in a layered construction, such as the OSI model.

4.1 Diversity as a function of the network coding rate

We denote the maximum achievable diversity order by d_{max}. We will determine d_{max} in this section and show that it only depends on the network coding rate ${R}_{n}=\frac{{m}_{s}}{{m}_{s}+{m}_{r}}$.

Proposition 1

Under ML decoding, the maximum diversity order d_{max} that can be achieved by any linear JNCC is

which for m_{r} = m_{s} = m reduces to the maximum diversity order for a standard BF channel^{e} with m blocks and coding rate R_{n} [27–29].

Hence, the maximum diversity order does not change when the point-to-point channel coding rate R_{c,p} changes. This corresponds to our intuition: the parity bits of the point-to-point codes only provide redundancy within one block forming a point-to-point codeword, hence these parity bits cannot combat erasures which affect the complete point-to-point codeword. Another consequence is that the maximal diversity order of a JNCC cannot be larger than in a layered approach with the same network coding rate.

In the remainder of the article, full diversity refers to the diversity order being equal to the maximal diversity order, d = d_{max}, from (8).

4.2 Space diversity by cooperation

We denote the word error rate for each source u_{s} by ${P}_{\text{ew},{u}_{s}}$, which is the fraction of packets where at least 1 of the K information bits from source u_{s} is erroneously decoded at the destination. Associated to ${P}_{\text{ew},{u}_{s}}$, we define ${d}_{{u}_{s}}$, so that ${P}_{\text{ew},{u}_{s}}\propto \frac{1}{{\gamma}^{{d}_{{u}_{s}}}}$ for large γ. We have that $\underset{{u}_{s}}{max}\phantom{\rule{0.3em}{0ex}}{P}_{\text{ew},{u}_{s}}\le {P}_{\text{ew}}\le \sum _{{u}_{s}}{P}_{\text{ew},{u}_{s}}$. From Definition 1, it follows that

$d=\underset{{u}_{s}}{\text{min}}{d}_{{u}_{s}}.$

(10)

Denote ${t}_{{u}_{s}}$, u_{s} ∈ {1,…,m_{s}}, as the number of times that source u_{s} is included in the transmission set of a relay: ${t}_{{u}_{s}}=\sum _{{u}_{r}\ne {u}_{s}}\mathbb{1}({u}_{s}\in \mathcal{T}({u}_{r}))$, where $\mathbb{1}(.)$ is the indicator function, which equals one when its argument is true and zero otherwise. Some simple measures can be determined: ${t}_{\text{min}}=\underset{{u}_{s}}{min}\phantom{\rule{0.3em}{0ex}}{t}_{{u}_{s}}$ and ${t}_{\text{av}}=\frac{\sum _{{u}_{r}=1}^{{m}_{r}}{n}_{{u}_{r}}}{{m}_{s}}$. We will show that ${d}_{{u}_{s}}$ depends on ${t}_{{u}_{s}}$ and thus, by Equation (10), d depends on t_{min}. We denote 1 + t_{min} by d_{R}, which we call the space diversity order, as it is the minimal number of channels that convey a source message to the destination.
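These quantities can be computed directly from the transmission sets; a sketch (the helper function is hypothetical; the example sets are those of the 3-source example used later in the article):

```python
# Sketch: compute t_{u_s}, t_min, t_av and the space diversity order
# d_R = 1 + t_min from given transmission sets.

def space_diversity(m_s, transmission_sets):
    """transmission_sets: dict relay index -> set of helped sources T(u_r)."""
    t = {u_s: sum(1 for u_r, T in transmission_sets.items()
                  if u_r != u_s and u_s in T)
         for u_s in range(1, m_s + 1)}
    t_min = min(t.values())
    # t_av = (sum of n_{u_r}) / m_s, with n_{u_r} = |T(u_r)|
    t_av = sum(len(T) for T in transmission_sets.values()) / m_s
    return t, t_min, t_av, 1 + t_min

# m_s = m_r = 3, T(1) = {2,3}, T(2) = {1,3}, T(3) = {1,2}:
T = {1: {2, 3}, 2: {1, 3}, 3: {1, 2}}
t, t_min, t_av, d_R = space_diversity(3, T)
print(t, t_min, t_av, d_R)   # each source is helped by two relays, so d_R = 3
```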

Proposition 2

For any linear JNCC, applied in our system model, the diversity order d is upper bounded as

$d\le {d}_{R}=1+{t}_{\text{min}}.$

Proof

We use the diversity equivalence between a BF channel and block BEC [24, 25]. Assume that the channel between source u_{s} and the destination is erased. Source u_{s} is included in at most ${t}_{{u}_{s}}$ transmission sets. Assume that all ${t}_{{u}_{s}}$ channels between the relays that include source u_{s} in their transmission set and the destination are also erased. Then the destination does not receive any information on source u_{s}, so that it can never retrieve its message. The probability of occurrence of this event is ${\epsilon}^{1+{t}_{{u}_{s}}}$, so that ${P}_{\text{ew},{u}_{s}}\ge {\epsilon}^{1+{t}_{{u}_{s}}}$, hence ${d}_{{u}_{s}}\le 1+{t}_{{u}_{s}}$. Using Equation (10), we obtain Proposition 2. □
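The erasure event in this proof can be verified exactly by enumerating erasure patterns; a sketch (hypothetical helper, illustrative network parameters):

```python
from itertools import product

# Sketch of the cut event in the proof above: source u_s is unrecoverable
# whenever its own channel and the channels of all t_us relays helping it
# are erased. Enumerating all 2^m_r erasure patterns confirms that this
# event has probability eps^(1 + t_us).

def cut_probability(m_r, cut_channels, eps):
    """Exact probability that every channel in cut_channels is erased,
    each of the m_r channels erasing independently with probability eps."""
    total = 0.0
    for pattern in product([0, 1], repeat=m_r):      # 1 = erased
        if all(pattern[c - 1] == 1 for c in cut_channels):
            p = 1.0
            for bit in pattern:
                p *= eps if bit else (1.0 - eps)
            total += p
    return total

# Source 1 helped by relays 2 and 3 (t_us = 2) in a network with m_r = 4:
# the cut consists of channels {1, 2, 3}.
p = cut_probability(4, {1, 2, 3}, eps=0.1)
assert abs(p - 0.1 ** (1 + 2)) < 1e-12   # P = eps^(1 + t_us)
```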

Note that the proof of Proposition 2 is based on the assumption that relay u_{r} only considers packets transmitted in the source phase for inclusion in $\mathcal{S}\left({u}_{r}\right)$. In the case that relay u_{r} computes its relay packet also based on packets transmitted by other relays during the relay phase, the diversity order becomes more difficult to analyze.

In Corollary 1, we state the conditions on t_{min} under which the space diversity order d_{R} is not smaller than the maximum achievable diversity order.

Corollary 1

For any linear JNCC, applied in our system model, full diversity can be achieved only if t_{min} ≥ q, where

The proof follows directly from Propositions 1 and 2. □

Given a GLNC, and thus a choice of $\mathcal{T}\left({u}_{r}\right)$, one can verify whether the condition in Corollary 1 holds. If it does not hold, full diversity cannot be achieved. To get more insight for the code design, we consider the simplest case of a network code where the cardinality of the transmission set is constant (${n}_{{u}_{r}}=n$).

Corollary 2

For any linear JNCC, applied in our system model, with constant ${n}_{{u}_{r}}=n$, full diversity can be achieved only if

$n\ge \left\lceil \frac{{m}_{s}}{{m}_{r}}q\right\rceil .$

Proof

It always holds that ${t}_{\text{min}}\le \lfloor {t}_{\text{av}}\rfloor$, and if ${n}_{{u}_{r}}=n$, then ${t}_{\text{av}}=\frac{{m}_{r}n}{{m}_{s}}$. From Corollary 1, full diversity can be achieved only if $\lfloor \frac{{m}_{r}n}{{m}_{s}}\rfloor \ge q$. Because $\frac{{m}_{r}n}{{m}_{s}}\ge \lfloor \frac{{m}_{r}n}{{m}_{s}}\rfloor$, we have the necessary condition that $n\ge q\frac{{m}_{s}}{{m}_{r}}$. As n is an integer, this bound can be tightened, yielding $n\ge \lceil \frac{{m}_{s}}{{m}_{r}}q\rceil$. Filling in q from Corollary 1 yields Corollary 2. □

Table 2 illustrates Corollary 2, showing the set of networks in which a certain parameter n is diversity-optimal, which means that the choice of n does not prevent the code from achieving full diversity. In Section ‘Practical JNCC for ${n}_{{u}_{r}}=2$’, we propose a JNCC for n = 2, where taking n = 2 is diversity-optimal in all networks corresponding to bold elements in Table 2.

Table 2

Minimal value n for a JNCC with constant ${n}_{{u}_{r}}=n$ to maintain its capability to achieve full diversity

m_{r}∖m_{s} |  1 |  2 |  3 |  4 |  5 |  6 |  7
     1      |  0 |    |    |    |    |    |
     2      |  1 |  1 |    |    |    |    |
     3      |  1 |  1 |  1 |    |    |    |
     4      |  1 |  1 |  2 |  2 |    |    |
     5      |  1 |  2 |  2 |  2 |  2 |    |
     6      |  1 |  2 |  2 |  2 |  3 |  3 |
     7      |  1 |  2 |  2 |  2 |  3 |  3 |  3
     8      |  1 |  2 |  2 |  2 |  3 |  3 |  4

4.3 A lower bound based on $\{\mathcal{T}\left({u}_{r}\right)\}$ for ${n}_{{u}_{r}}=2$

A certain relay does not help one source only, but a combination of sources, expressed by the transmission set $\mathcal{T}\left({u}_{r}\right)$ for each relay u_{r}. In this section, we provide a lower bound on the diversity order, based on the choice of $\{\mathcal{T}\left({u}_{r}\right),{u}_{r}=1,\dots ,{m}_{r}\}$. If this lower bound and the upper bound in the previous section are tight, the exact diversity order of JNCCs can thus be determined, as will be illustrated in Section ‘Practical JNCC for ${n}_{{u}_{r}}=2$’.

Based on $\mathcal{T}\left({u}_{r}\right)$, m_{s} and m_{r}, we construct the (m_{s} + m_{r}) × m_{s} coding matrix M, where

The matrix M expresses the presence of a source codeword in each transmission, i.e., ${M}_{i,{u}_{s}}=1$ if ${\mathbf{s}}_{{u}_{s}}$ is considered in transmission i (i = 1,…,m_{s} and i = m_{s} + 1,…,m_{s} + m_{r} correspond to the source and relay transmission phases, respectively). Therefore, the upper part of M is an identity matrix, as each source u_{s} transmits its own codeword ${\mathbf{s}}_{{u}_{s}}$ in the source phase. The matrix M represents what is often called the “coding header” or “the global coding coefficients” in the network coding literature (see, e.g., [30]).
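Constructing M from the transmission sets is mechanical; a sketch (the helper function is hypothetical; the example values are those of the article's 3-source example):

```python
# Sketch: build the (m_s + m_r) x m_s coding matrix M from the transmission
# sets. Row i marks which source codewords appear in transmission i:
# identity rows for the source phase, T(u_r) rows for the relay phase.

def coding_matrix(m_s, m_r, transmission_sets):
    """transmission_sets: dict relay index (1..m_r) -> set T(u_r)."""
    M = []
    for u_s in range(1, m_s + 1):                     # source phase: identity
        M.append([1 if j == u_s else 0 for j in range(1, m_s + 1)])
    for u_r in range(1, m_r + 1):                     # relay phase
        T = transmission_sets[u_r]
        M.append([1 if j in T else 0 for j in range(1, m_s + 1)])
    return M

# m_s = m_r = 3 with T(1) = {2,3}, T(2) = {1,3}, T(3) = {1,2}:
M = coding_matrix(3, 3, {1: {2, 3}, 2: {1, 3}, 3: {1, 2}})
for row in M:
    print(row)
```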

Consider a block BEC channel where e of the m_{r} blocks have been erased. The indices of the fading gains corresponding to the erased blocks are collected in the set $\mathcal{E}=\{{\mathcal{E}}_{1},\dots ,{\mathcal{E}}_{e}\}$, with ${\mathcal{E}}_{i}\in \{1,\dots ,{m}_{r}\}$. Based on $\mathcal{E}$, we construct ${M}_{\mathcal{E}}$, which corresponds to the subset of transmissions that are not erased, i.e., all rows ${\mathcal{E}}_{i}$ (if ${\mathcal{E}}_{i}\le {m}_{s}$) and ${m}_{s}+{\mathcal{E}}_{i}$, for i = 1,…,e, in M are dropped. We denote the rank of ${M}_{\mathcal{E}}$ as ${r}_{{M}_{\mathcal{E}}}$. The set $\mathcal{M}\left(e\right)$ collects all possible matrices ${M}_{\mathcal{E}}$ which can be constructed from M if $\left|\mathcal{E}\right|=e$.

Consider an example for m_{s} = m_{r} = 3. Assume that $\mathcal{T}\left(1\right)=\{2,3\}$, $\mathcal{T}\left(2\right)=\{1,3\}$, and $\mathcal{T}\left(3\right)=\{1,2\}$, so that

$M=\left[\begin{array}{ccc}1& 0& 0\\ 0& 1& 0\\ 0& 0& 1\\ 0& 1& 1\\ 1& 0& 1\\ 1& 1& 0\end{array}\right].$

Next, assume that $\mathcal{E}=\left\{1\right\}$. Hence, the channel between user 1 and the destination is erased, so that rows 1 and 4 from M are dropped:

${M}_{\mathcal{E}}=\left[\begin{array}{ccc}0& 1& 0\\ 0& 0& 1\\ 1& 0& 1\\ 1& 1& 0\end{array}\right],$

and ${r}_{{M}_{\mathcal{E}}}=3$. It can be verified that all matrices ${M}_{\mathcal{E}}\in \mathcal{M}\left(1\right)$ have rank ${r}_{{M}_{\mathcal{E}}}=3$. However, there exist matrices ${M}_{\mathcal{E}}\in \mathcal{M}\left(2\right)$ having rank ${r}_{{M}_{\mathcal{E}}}<3$.

We can now define a metric that depends on $\{\mathcal{T}\left({u}_{r}\right)\}$.

Definition 2

We define d_{M} = e^{∗} + 1, where e^{∗} is the maximal cardinality of $\mathcal{E}$ such that ${r}_{{M}_{\mathcal{E}}}={m}_{s}$ for each ${M}_{\mathcal{E}}\in \mathcal{M}\left(e\right)$.

A simple computer program can compute d_{M}, given $\mathcal{T}\left({u}_{r}\right)$, m_{s} and m_{r}.
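Such a program can be sketched as follows (hypothetical helpers, not the authors' code), using Gaussian elimination over GF(2) to compute the rank of each surviving matrix ${M}_{\mathcal{E}}$:

```python
from itertools import combinations

# Sketch: compute d_M = e* + 1 (Definition 2) by checking, for growing
# erasure-set size e, whether every surviving matrix M_E keeps full
# column rank m_s over GF(2).

def gf2_rank(rows):
    """Rank over GF(2) via Gaussian elimination; rows is a list of 0/1 lists."""
    rows = [r[:] for r in rows]
    rank = 0
    n = len(rows[0]) if rows else 0
    for col in range(n):
        pivot = next((i for i in range(rank, len(rows)) if rows[i][col]), None)
        if pivot is None:
            continue
        rows[rank], rows[pivot] = rows[pivot], rows[rank]
        for i in range(len(rows)):
            if i != rank and rows[i][col]:
                rows[i] = [a ^ b for a, b in zip(rows[i], rows[rank])]
        rank += 1
    return rank

def d_M(M, m_s, m_r):
    """d_M = e* + 1, with e* the largest e such that every M_E in M(e)
    has rank m_s."""
    e_star = 0
    for e in range(1, m_r + 1):
        ok = True
        for E in combinations(range(1, m_r + 1), e):
            # Drop rows E_i (if E_i <= m_s) and m_s + E_i (0-based indices).
            drop = {i - 1 for i in E if i <= m_s} | {m_s + i - 1 for i in E}
            M_E = [row for i, row in enumerate(M) if i not in drop]
            if gf2_rank(M_E) < m_s:
                ok = False
                break
        if not ok:
            break
        e_star = e
    return e_star + 1

# Example from the article (m_s = m_r = 3):
M = [[1, 0, 0], [0, 1, 0], [0, 0, 1], [0, 1, 1], [1, 0, 1], [1, 1, 0]]
print(d_M(M, 3, 3))   # every single erasure survives, some pair does not
```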

Lemma 1

In a JNCC following the form of Equation (6) with m_{s} = m_{r} and constant ${n}_{{u}_{r}}=n=2$, the metric d_{M} is at most three.

Proof

If m_{s} = m_{r} and n = 2, then the minimum column weight of M is smaller than or equal to three. Erasing the three rows where ${M}_{i,{u}_{s}}=1$, for a certain u_{s} corresponding to the minimum column weight, leads to ${M}_{\mathcal{E}}$ having at least one zero column, and thus ${r}_{{M}_{\mathcal{E}}}<{m}_{s}$. By Definition 2, d_{M} < 4. □

In the next proposition, we provide a lower bound on the diversity order under ML decoding or Belief Propagation (BP) decoding [31]. We denote

Proposition 3

Using ML decoding, the diversity order of a JNCC following the form of Equation (6) with constant ${n}_{{u}_{r}}=n=2$ is lower bounded as

$d\ge {d}_{M},$

if the matrices ${H}_{{u}_{s}}^{{u}_{r}}$, ${u}_{s}\in \mathcal{T}\left({u}_{r}\right)$, ${u}_{r}\in \{1,\dots ,{m}_{s}\}$, have full rank.

Using BP decoding, the diversity order of a JNCC following the form of Equation (6) with constant ${n}_{{u}_{r}}=n=2$ is lower bounded as

can be solved with BP in the case of only one unknown source-codeword vector.

Proof

See Appendix 2. □

We can simplify the condition for BP decoding, stated in Proposition 3, when we assume that the parity bits of point-to-point codes do not have a support in H_{GLNC}, or, said differently, when the L − K rightmost columns of the matrices ${H}_{{u}_{r}}$ and ${H}_{{u}_{s}}^{{u}_{r}}$ are zero. In that case, one iteration in the backward substitution, mentioned in Appendix 2, corresponds to solving the K unknown information bits of ${\mathbf{s}}_{u}$ via the set of K equations

In Section ‘Practical JNCC for ${n}_{{u}_{r}}=2$’, we propose a JNCC where the parity bits of point-to-point codes do not have a support in H_{GLNC}, so that we take (17) instead of (16) as condition for BP decoding in the remainder of the article.

4.4 Diversity order with interuser failures

It is often easier to prove that a particular diversity order is achieved assuming perfect interuser channels (see for example in Section ‘Practical JNCC for ${n}_{{u}_{r}}=2$’). Here, we discuss how this diversity order is affected by interuser failures.

Lemma 2

In the case of non-reciprocal interuser channels, any JNCC achieves the same diversity order with or without interuser channel failures.

Proof

See Appendix 3. □

In the case of reciprocal interuser channels, the achieved diversity order with interuser failures depends on the transmission sets $\left\{\mathcal{T}\right({u}_{r}),{u}_{r}=1,\dots ,{m}_{r}\}$. We propose an algorithm to construct $\left\{\mathcal{T}\right({u}_{r}\left)\right\}$ in Section ‘Practical JNCC for ${n}_{{u}_{r}}=2$’ and we will then discuss the diversity order with reciprocal interuser channels.

4.5 Diversity order in a layered construction

In a layered construction, such as the standard OSI model, the destination first attempts to decode the point-to-point transmissions. If it cannot successfully retrieve the transmitted point-to-point codeword for a particular node-to-destination channel, it declares a block erasure, where a block refers to one point-to-point codeword. Denoting this block erasure probability by ε, we have that $\epsilon \propto \frac{1}{\gamma}$ ([23], Equation (3.157)). If, for example, e blocks of length L are erased, then the decoding corresponds to solving a set of equations with eL unknowns.

Standard linear network coding consists of taking linear combinations of several source packets. In general, non-binary coefficients are used in the linear combinations. Hence, packets are treated symbol-wise, which is shown to be capacity achieving for the layered construction [8]. A consequence of this symbol-wise treatment is that the effective block length of the network code reduces to m_{s} + m_{r}, and the set of equations available at the destination for decoding is expressed by the coding matrix ${M}_{\mathcal{E}}$. At this block length, ML decoding (which is equivalent to Gaussian elimination at the network layer) has low complexity. Under ML decoding, a sufficient condition for successful decoding is ${r}_{{M}_{\mathcal{E}}}={m}_{s}$. Also, for ML decoding, the maximum number of erasures e^{∗} = d_{M} − 1 (Definition 2), such that the condition ${r}_{{M}_{\mathcal{E}}}={m}_{s}$ is satisfied, is equal to the minimum distance of the non-binary code minus one. The minimum distance is, for a given coding rate, maximal for maximum distance separable (MDS) codes, so that d_{M} is maximal for MDS codes as well. Also note that random linear network codes are MDS codes with high probability for a sufficiently large field size [32].
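The "high probability" statement can be quantified: a uniformly random k × k matrix over GF(q) is invertible with probability $\prod_{i=1}^{k}\left(1-{q}^{-i}\right)$, which tends to 1 as the field size q grows. The short sketch below (the function name and the chosen values of k and q are ours, purely for illustration) evaluates this product:

```python
def prob_full_rank(k, q):
    """Probability that a uniformly random k x k matrix over GF(q) is
    invertible: row i must avoid the span of the previous i-1 rows, which
    happens with probability 1 - q**(i-1)/q**k = 1 - q**(-(k-i+1))."""
    p = 1.0
    for i in range(1, k + 1):
        p *= 1.0 - q ** -i
    return p
```

For k = 5, a random binary matrix (q = 2) is invertible only about 30% of the time, whereas over GF(256) the probability exceeds 0.99, which is why random linear network codes behave like MDS codes for large field sizes.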

Table 3 provides an overview of the notation presented in this diversity analysis.

Table 3

Overview of notation introduced in the diversity analysis

d_{max}: Maximum diversity order that can be achieved by any code $\mathcal{C}$ for a fixed m_{s} and m_{r}

t_{min}: Minimum number of times that a source is included in the transmission set of any relay

d_{R}: An upper bound on the diversity order d; d_{R} = 1 + t_{min}

n: Equal to ${n}_{{u}_{r}}$ in the case that ${n}_{{u}_{r}}$ is fixed by the protocol and thus constant

m: Represents m_{s} and m_{r} when m_{r} = m_{s}

M: Coding header indicating the presence of the source codewords in all transmissions; depends on $\left\{\mathcal{T}\left({u}_{r}\right)\right\}$

$\mathcal{E}$: Set collecting the indices of the blocks that are erased in the case of the block BEC

${M}_{\mathcal{E}}$: Reduced coding header obtained from M where all erased transmissions have been removed

$\mathcal{M}\left(e\right)$: Collection of all possible matrices ${M}_{\mathcal{E}}$ when $\left|\mathcal{E}\right|=e$

${r}_{{M}_{\mathcal{E}}}$: Rank of ${M}_{\mathcal{E}}$

d_{M}: In some cases, d_{M}, which depends on M, is an upper bound on the diversity order d

Tables 1 and 3 indicate the complexity of the analysis of JNCC for large networks.

5 Practical JNCC for ${n}_{{u}_{r}}=2$

In the literature, a detailed diversity analysis is most often lacking. Codes were proposed, and corresponding numerical results suggested that a certain diversity order was achieved on a specific network. It is sometimes not clear why this diversity order is achieved, or how it would vary if the network or some parameters change. In the previous section, we made a detailed diversity analysis of a JNCC following the form of Equation (6). However, the utility of, for example, Proposition 3 is limited to JNCCs following the form of Equation (6) with a constant ${n}_{{u}_{r}}=2$, which suggests that it is very hard to rigorously prove diversity claims in general. Nevertheless, the modest analysis made in Section ‘Diversity analysis of JNCC’ can be applied in some cases, and we will show its utility through an example.

We consider networks with m_{s} = m_{r} = m ≥ 4 and a JNCC following the form of Equation (6) with ${n}_{{u}_{r}}=n=2$ for u_{r} = 1,…,m. We will rigorously prove that a diversity order of three is achieved, using the propositions of Section ‘Diversity analysis of JNCC’. From Table 2, it can be seen that this JNCC is diversity-optimal for m = 4 and m = 5. In Section ‘Numerical results’, we provide numerical results for m = 5.

From Table 2, it is clear that restricting n to two is not diversity-optimal in larger networks. However, it also has some advantages. If n = 2, then every relay needs to decode only two users, and encoding is restricted to a linear transformation of only two source packets. Furthermore, taking n = 2 does not impose infeasible constraints on the number of sources in the vicinity of a relay in the case that spatial neighborhoods are taken into account. Next, the theoretical analysis is simpler in the case n = 2. Finally, taking n = 2 allows the reuse of strong codes designed for the multiple access relay channel, e.g., in [21, 22].

Besides the diversity order, we indicated in Section ‘Introduction’ that scalability is also very important. The JNCC proposed here is scalable to any large network without requiring a redesign of the code; that is, we provide an on-the-fly construction method. The latter is particularly important for self-regulating networks: as a node adds itself to the network, it can seamlessly integrate into the network. Together with the new symbols sent by the new node, a new JNCC is formed which still possesses all desirable properties. Finally, note that due to the large block length of JNCC, ML decoding is too complex, and low-complexity techniques, such as BP decoding, must be used.

Hence, two properties are claimed: scalability to large networks and a diversity order of three (which is full diversity in some cases) under BP decoding. The JNCC code is presented in two steps. First, we present the design of $\left\{\mathcal{T}\right({u}_{r}\left)\right\}$ and thus the coding matrix M. In a second step (Equation (20)), we specify the matrices ${H}_{{u}_{r}}$ and ${H}_{{u}_{s}}^{{u}_{r}}$ and we will prove that the scalability and the diversity order of three are achieved.

5.1 First step: design of $\mathcal{T}\left({u}_{r}\right)$

The transmission sets $\left\{\mathcal{T}\left({u}_{r}\right)\right\}$ have a large impact on the diversity order. For example, in [18], a random construction was studied (each relay chooses n = 2 sources at random) and it was shown that $\mathbb{E}\left[{t}_{{u}_{s}}\right]=2$, but $\text{Var}\left[{t}_{{u}_{s}}\right]=2$ as well, so that most probably t_{min} < 2 and d_{R} < 3 (Proposition 2). Hence, we need a more intelligent construction.

We present an algorithm to determine $\left\{\mathcal{T}\left({u}_{r}\right)\right\}$, given m_{s} and m_{r}, and we subsequently determine the corresponding metrics t_{min} and d_{M}. We define the function ${f}_{{m}_{s}}\left(x\right)=\left(\left(x-1\right)\phantom{\rule{0.3em}{0ex}}\text{mod}\phantom{\rule{0.3em}{0ex}}{m}_{s}\right)+1$, which adapts the modulo operation to the range $1\le {f}_{{m}_{s}}\left(x\right)\le {m}_{s}$.
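In Python, the index-wrapping function and a transmission-set construction in its spirit look as follows. Note that Algorithm 1 itself is not reproduced in this excerpt; the construction below is a hypothetical reconstruction, chosen to be consistent with the proof of Corollary 3 (there, source 1 appears in $\mathcal{T}\left(m-1\right)$ and $\mathcal{T}\left(m\right)$, and source 2 in $\mathcal{T}\left(m\right)$ and $\mathcal{T}\left(1\right)$):

```python
def f(x, m_s):
    """f_{m_s}(x) = ((x - 1) mod m_s) + 1: wraps x into the range 1..m_s."""
    return (x - 1) % m_s + 1

def transmission_sets(m_s, m_r):
    """Hypothetical reconstruction of Algorithm 1: relay u_r helps the two
    cyclically following sources f(u_r + 1) and f(u_r + 2).  For m_r = m_s,
    each source then occurs in exactly two transmission sets, so t_min = 2
    and d_R = 3 by Proposition 2."""
    return [(f(u_r + 1, m_s), f(u_r + 2, m_s)) for u_r in range(1, m_r + 1)]
```

For m_{s} = m_{r} = 5, this yields $\mathcal{T}\left(1\right)$ = {2,3}, …, $\mathcal{T}\left(5\right)$ = {1,2}, and every source is helped exactly twice.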

The transmission set $\mathcal{T}\left({u}_{r}\right)$ is expressed via the bottom part of M. An example of such a matrix M is given in Equation (18) for m_{s} = m_{r} = 5.

If a node is added as a source node, it adopts the largest source index, m_{s} + 1, and relay-only nodes, with indices larger than or equal to m_{s} + 1, increment their index by one. The function ${f}_{{m}_{s}}\left(x\right)$ is updated to the new m_{s}. Note that the algorithm corresponds to a deterministic cooperation strategy, which avoids extra signalling to the destination regarding the code design.

We first consider the case of perfect interuser channels and prove that Algorithm 1 yields d = 3 (Corollary 3). We then consider interuser failures and prove that the diversity order is not affected (Lemma 3).

Corollary 3

With perfect links from sources to relays, a JNCC with m_{s} = m_{r} and transmission sets constructed via Algorithm 1 achieves a diversity order d = 3 under BP decoding, provided that, for each u_{r}, Equation (17) can be solved with BP in the case of only one unknown source-codeword vector.

Proof

Because the links between sources and relays are perfect, the relays never stay silent. In the case that m_{r} = m_{s} and ${n}_{{u}_{r}}=2$, we have that t_{min} = t_{av} = 2, and so d_{R} = 3.

Next, we show that d_{M} = 3 (and thus, according to Lemma 1, d_{M} is maximized if n = 2). Consider $\left|\mathcal{E}\right|=2$. Without loss of generality, consider that $\mathcal{E}=\{1,2\}$. Consider the set of equations ${M}_{\mathcal{E}}\mathbf{z}=\mathbf{c}$. Variables ${z}_{3},\dots ,{z}_{{m}_{s}}$ can be recovered via the top m_{s} − 2 rows of ${M}_{\mathcal{E}}$. The two relays u_{1} and u_{2} having source u_{s} in their transmission set ($\mathcal{T}\left({u}_{1}\right)$ and $\mathcal{T}\left({u}_{2}\right)$, respectively) are

Hence, source 1 is included in $\mathcal{T}\left(m-1\right)$ and $\mathcal{T}\left(m\right)$, and source 2 is included in $\mathcal{T}\left(m\right)$ and $\mathcal{T}\left(1\right)$. Therefore, relay transmission m − 1 can be used to retrieve source 1, and relay transmission m can be used to retrieve source 2, as long as m ≥ 4. Hence, ${M}_{\mathcal{E}}$ has full rank. The generalization to any set $\mathcal{E}$ satisfying $\left|\mathcal{E}\right|=2$ is straightforward. Therefore, we have that d_{M} = 3.

As d_{R} = d_{M} = 3, the proof follows immediately from Propositions 2 and 3. □

Next, it can be proved that a JNCC applied in our system model has a diversity order of three, if it has a diversity order of three when all interuser channels are perfect. This is proved in general for non-reciprocal interuser channels in Lemma 2, and here, we consider reciprocal interuser channels.

Lemma 3

A JNCC, with transmission sets constructed via Algorithm 1, achieves the same diversity order with or without interuser channel failures when m_{s} > 4 or when m_{s} = m_{r} = m ≤ 4.

Proof

See Appendix 4. □

For conciseness, we do not consider the remaining cases, where m_{r} > m_{s} and m_{s} ≤ 4.

5.2 Second step: JNCC of LDPC-type

In the first step, we specified $\left\{\mathcal{T}\left({u}_{r}\right)\right\}$ and proved that d_{R} = d_{M} = 3 if m_{r} = m_{s} = m > 3. According to Corollary 3, a diversity order of three is achieved under BP decoding if, for each u_{r}, Equation (17) can be solved with BP in the case of only one unknown source-codeword vector. In the second step, we specify the submatrices ${H}_{{u}_{r}}$ and ${H}_{{u}_{s}}^{{u}_{r}}$, ∀u_{r},u_{s}, to satisfy this condition, given that $\left\{\mathcal{T}\left({u}_{r}\right)\right\}$ is constructed according to Algorithm 1.

A simple solution is to replace the K leftmost columns in all K × L submatrices ${H}_{{u}_{r}}$ and ${H}_{{u}_{s}}^{{u}_{r}}$, ∀u_{r},u_{s}, by identity matrices. In this case, joint network-channel coding essentially reduces to a layered solution: the source codewords are decoded at the relays and simply added according to Equation (5). If the network code is used at the physical layer, however, it has to deal with noise, and a more advanced code might be required.

In the literature, a full-diversity close-to-outage performing JNCC for the Multiple Access Relay Channel (MARC) has been proposed [21, 22], which is a code in the form of Equation (1). These codes are such that the set of equations

can be decoded via BP if only one coding vector s_{1}, s_{2} or r_{1} is erased and the other coding vectors are perfectly known. We denote this JNCC by the MARC-JNCC. The matrix H_{GLNC, MARC} of the MARC-JNCC is given by Equation (A.7) in [21]^{f}:

(19)

where s_{j} = [1i_{j} 2i_{j} p_{j}] is the codeword from source j, with [1i_{j} 2i_{j}] and p_{j} denoting the information bits and the parity bits, respectively (j = 1,2); 1i_{j} and 2i_{j} each contain $\frac{K}{2}$ information bits. However, the parity bits p_{j} are not involved in H_{GLNC, MARC}. The matrices R_{i}, with i = 1,2,3, are random matrices, chosen according to the required degree distributions of the LDPC code. To facilitate future notation, we denote

and H_{3} = R_{3}, so that ${H}_{\text{GLNC}}=\left[\stackrel{\u0304}{{H}_{1}}\phantom{\rule{0.3em}{0ex}}\stackrel{\u0304}{{H}_{2}}\phantom{\rule{0.3em}{0ex}}{H}_{3}\right]$, where $\stackrel{\u0304}{{H}_{i}}={H}_{i}$ or ${H}_{i}^{\prime}$ (it will become clear below which one has to be chosen at each relay). In $\stackrel{\u0304}{{H}_{1}}$ and $\stackrel{\u0304}{{H}_{2}}$, the first two block columns each consist of K/2 columns (corresponding to information bits) and the last block column consists of L − K columns (corresponding to parity bits from the point-to-point codes). The zero block columns indicate that parity bits from point-to-point codes have no support in these matrices. Now replace all submatrices ${H}_{{u}_{r}}$ and ${H}_{{u}_{s}}^{{u}_{r}}$ by these matrices, for each relay u_{r}, so that in each block column corresponding to information bits, we have a random matrix R_{i}; this is required to conform to any preferred degree distribution of the LDPC code. For example, H_{GLNC} can be given by

(20)

Each set of rows and each set of columns in H will have at least one random matrix, so that any LDPC code degree distribution can be conformed. We denote this JNCC by the SMARC-JNCC, where S stands for scalable.

Proposition 4

In a network following the system model proposed in Section ‘System model’ and using BP, the SMARC-JNCC achieves a diversity order d = 3.

Proof

In [21], it is proved that this set of K equations can be solved using the matrices proposed above. We provide another, simpler proof here. Consider a block BEC. Because $\stackrel{\u0304}{{H}_{1}}$ and $\stackrel{\u0304}{{H}_{2}}$ are upper- or lower-triangular, with ones on the diagonal, the K unknown information bits can be retrieved using backward substitution; hence they can be retrieved with BP as well.

By Corollary 3 and Lemma 3, the SMARC-JNCC achieves a diversity order d = 3. □

Note that the information bits of a source need to be split into two parts: bits of the type 1i and 2i. This allows the introduction of the matrices R_{1} and R_{2} in Equation (19), so that all information bits have a random matrix in their corresponding block column in the parity-check matrix. Hence, the LDPC code can conform any degree distribution.

6 Lower bound for the WER

To assess the performance of the SMARC-JNCC, we need to compare it with the outage probability limit (Section ‘Calculation of the outage probability’). We show that the outage probability limit is not always tight, and we propose a tighter lower bound in Section ‘Calculation of a tighter lower bound on WER’.

6.1 Calculation of the outage probability

The outage probability limit is the probability that the instantaneous mutual information between the sources and sinks of the network is less than the transmitted rate. The outage probability is an achievable (using a random codebook) lower bound of the average WER of coded systems in the limit of large block length [27, 33, 34].

For a multi-user environment, two types of mutual information are considered. First, it is verified whether the sum-rate, R_{c} in this case, is smaller than the instantaneous mutual information between all the sources and the sink. Then, it is verified whether each individual source rate, $\frac{{R}_{c}}{{m}_{s}}$ in this case, is smaller than the instantaneous mutual information between the nodes transmitting information for this source and the destination. The outage probability for the MARC was determined in [21, 35] using the method described above.

The terms I(S_{i};D), I(R_{i};D), and I(S_{i};R_{j}) are the instantaneous mutual informations of the corresponding point-to-point channels with input x ∈ {−1,1} and received signal y = α_{i}x + w with $w\sim \mathcal{C}\mathcal{N}(0,\frac{1}{\gamma})$, conditioned on the channel realization α_{i}; they are determined by applying the formula for mutual information [36, 37]:

where ${\mathbb{E}}_{Y\left|\right\{x=1,{\alpha}_{i}\}}$ is the mathematical expectation over Y given x = 1 and α_{i}.
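For the BPSK channels considered here, this conditional mutual information can be estimated by Monte Carlo simulation, using the standard identity for binary-input symmetric channels, $I(X;Y|\alpha )=1-{\mathbb{E}}_{Y|\{x=1,\alpha \}}\left[{\text{log}}_{2}\left(1+{e}^{-L\left(Y\right)}\right)\right]$, where $L\left(y\right)=4\gamma \alpha \phantom{\rule{0.3em}{0ex}}\text{Re}\left(y\right)$ is the channel LLR for $w\sim \mathcal{C}\mathcal{N}(0,\frac{1}{\gamma})$. The sketch below (function name, sample size, and seed are our own choices) implements this estimator:

```python
import math
import random

def bpsk_mutual_info(alpha, gamma, n_samples=100_000, seed=1):
    """Monte Carlo estimate of I(X; Y | alpha) for x in {-1, +1},
    y = alpha*x + w, w ~ CN(0, 1/gamma), via
    I = 1 - E_{Y | x=+1}[ log2(1 + exp(-L(Y))) ],  L(y) = 4*gamma*alpha*Re(y)."""
    rng = random.Random(seed)
    sigma = math.sqrt(1.0 / (2.0 * gamma))  # per-dimension noise std
    acc = 0.0
    for _ in range(n_samples):
        y_re = alpha + rng.gauss(0.0, sigma)  # only Re(y) enters the LLR
        L = 4.0 * gamma * alpha * y_re
        # numerically stable log(1 + exp(-L)), converted to base 2
        if L > 0:
            acc += math.log1p(math.exp(-L)) / math.log(2)
        else:
            acc += (-L + math.log1p(math.exp(L))) / math.log(2)
    return 1.0 - acc / n_samples
```

At high SNR the estimate approaches 1 bit per channel use; at very low SNR it approaches 0, as expected for a binary-input channel.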

We now consider the outage probability of a layered construction, such as the standard OSI model, where the destination first decodes the point-to-point transmissions, declaring a block erasure if decoding is not successful. For the network code, we assume a maximum distance separable (MDS) code, which is outage-achieving over the (noiseless) block-erasure channel [26]. That is, any m_{s} correctly received packets suffice for decoding. Accordingly, an outage event for the layered construction, denoted as ${\mathcal{E}}_{\text{out},l}$, is given by

The outage probabilities for JNCC and for a layered construction are compared in Figure 1 for m_{s} = m_{r} = 5, coding matrix^{g} M given in Equation (18), and R_{c,p} = 6/7. The overall spectral efficiency is R = 3/7 bpcu, so that ${E}_{b}/{N}_{0}=\frac{7\gamma}{3}$.

The main conclusion is that the difference between both outage probabilities is only 1 dB. Hence, on a fundamental level, the achievable coding gain by JNCC with respect to a standard layered construction is small for the adopted system model.

6.2 Calculation of a tighter lower bound on WER

According to information theory, the outage probability is achievable, where the proof relies on using random codebooks. However, the nature of the JNCC protocol deviates largely from a random code. For example, the parity bits corresponding to the point-to-point codes are forced into a block-diagonal structure in H_{c} (see Equation (6)), which is not taken into account in the outage probability limit. In fact, in Proposition 1, it was proved that the maximal diversity order does not depend on R_{c} but on R_{n}, which is also not taken into account in the outage probability limit. Therefore, we argue that the outage probability limit is in general not achievable by a JNCC, which we illustrate by means of an example.

Consider a network with m_{s} = m_{r} = 3. The adopted point-to-point codes have coding rate R_{c,p} = 0.5, so that R_{c} = 0.25. We take ${n}_{{u}_{r}}=2$ and adopt the coding matrix M given in Equation (13). Because of the small coding rate R_{c}, the outage probability achieves a diversity order of three (Figure 2). However, it follows from Proposition 1 that d_{max} = 2. We therefore propose a new lower bound, which takes the point-to-point codes into account.

A bit node is essentially protected by two codes: a point-to-point code (H_{c}) and a network code (H_{GLNC}), which is illustrated on the factor graph [38] representation (Tanner notation [39] is adopted)^{h} of the decoder (Figure 3).

Usually, both codes are characterized by separate degree distributions, denoted as (λ_{c}(x), ρ_{c}(x)) and (λ_{GLNC}(x), ρ_{GLNC}(x)) for H_{c} and H_{GLNC}, respectively.

The new lower bound assumes a concatenated decoding scheme. At the destination, the point-to-point codes are decoded first, and then soft information is passed to the network decoder. This is illustrated in Figure 4, where the soft information is denoted by the log-likelihood ratio (LLR) ${L}_{{\text{obs}}^{\prime},i}$. Note that the bit node of bit i is duplicated to be able to clearly indicate ${L}_{{\text{obs}}^{\prime},i}$. Applying the sum-product algorithm (SPA) on this factor graph or on the original factor graph (without node duplication) is equivalent. This follows immediately from the sum-product rule for variable nodes ([40], Section 4.4) and ([38], Equation (5)).

The LLR ${L}_{{\text{obs}}^{\prime},i}$ can be viewed as a new channel observation as it remains fixed during the iterative decoding of the network code (H_{GLNC}). The maximum rate that can be achieved by the network code is given by

The terms $I({S}_{u};{L}_{{\text{obs}}^{\prime}})$ and $I({R}_{u};{L}_{{\text{obs}}^{\prime}})$ are the mutual informations between the channel input x ∈ {−1,1} and the associated random variable ${L}_{{\text{obs}}^{\prime}}$, conditioned on the channel realization α_{u}, determined by applying the formula for mutual information [36, 37], i.e., $I(X;{L}_{{\text{obs}}^{\prime}}|{\alpha}_{u})$ is

The density of the random variable ${L}_{{\text{obs}}^{\prime}}$ can be obtained by means of density evolution [41], given the degree distributions of the point-to-point code, or by means of Monte Carlo simulations, given the actual factor graph of the point-to-point code. Both approaches yield the same results in our simulations.

Similarly to the conventional case, an outage event, denoted as ${\mathcal{E}}_{\text{out},2}$, is given by

Note that the network coding rate is used instead of the overall rate R_{c}, which corresponds to Proposition 1.

The lower bound presented here is valid if the point-to-point codes are decoded first, followed by the network code, without iterating back to the point-to-point codes.

Let us now return to the small network example with m_{s} = m_{r} = 3, considered at the beginning of this section. Figure 2 compares the conventional outage probability (Section ‘Calculation of the outage probability’) with the tighter lower bound proposed here. As mentioned before, the conventional outage probability has a larger diversity order than what is achievable, while the tighter lower bound only achieves a diversity order of two.

The difference is 3 dB at an outage probability of 10^{−4}. To assess the performance of the network code only, given a certain point-to-point code, the WER of the SMARC-JNCC should be compared with the tighter lower bound presented here. In the subsequent sections, we always include both lower bounds.

7 Numerical results

In this section, we provide numerical results for the SMARC-JNCC. We clarify the proposed techniques on an illustrative network example, where m_{s} = m_{r} = 5 (Figure 5). We use the same network example as in [17, 18], so that a comparison is possible.

For simplicity, we assume non-reciprocal interuser channels in the simulation results. Note that in the case that m_{s} > 4 and Algorithm 1 is used to construct $\left\{\mathcal{T}\left({u}_{r}\right),{u}_{r}=1,\dots ,{m}_{r}\right\}$, reciprocity is irrelevant for our proposed code, as $i\notin \mathcal{T}\left(j\right)$ whenever $j\in \mathcal{T}\left(i\right)$.

We compare the error rate performance of the SMARC-JNCC with the outage probability limit and the tighter lower bound, which are presented in Section ‘Lower bound for the WER’, and with standard network coding techniques (using identity matrices in H_{GLNC}) and a layered network construction (also using identity matrices in H_{GLNC}, and where, at the destination, the network code is only decoded after decoding all point-to-point codewords separately and taking a hard decision).

The point-to-point code used in the simulations is an irregular LDPC code [41] characterized by the standard polynomials λ(x) and ρ(x) [41]:

where λ(x) and ρ(x) are the left and right degree distributions from an edge perspective; the coefficients λ_{i} and ρ_{i} are the fractions of edges connected to bit nodes and check nodes, respectively, of degree i. The adopted point-to-point code is taken from [42], has coding rate R_{c,p} = 6/7, and conforms the following degree distributions:

7.1 Perfect source-relay links

We start by assessing the performance of H_{GLNC}, the bottom part of Equation (20), which determines the diversity order. Therefore, we assume perfect links between sources and relays. Hence, the channel model is the same as described in Section ‘System model’, with the exception of the interuser channels, which are assumed to be perfect (no fading and no noise). The parameters used for the simulation are K = L = 900 and m_{s} = m_{r} = 5 (so that N = 10K = 9000), where N is the block length of the overall codeword. The overall spectral efficiency is R = 0.5 bpcu, so that E_{b}/N_{0} = 2γ.

Figure 6 shows that a diversity order of three is achieved by the SMARC-JNCC, which corroborates Corollary 3. It performs at 2.5 dB from the outage probability (because no point-to-point codes are considered, only the conventional outage probability is shown), which may be improved by optimizing the degree distributions. We also show a JNCC where all submatrices ${H}_{{u}_{r}}$ and ${H}_{{u}_{s}}^{{u}_{r}}$, ∀u_{r},u_{s}, are replaced by identity matrices, denoted as the I-JNCC. Finally, we show an I-JNCC with irregular $\left\{{n}_{{u}_{r}}\right\}$, with coding matrix M given by

It is clear that, even without optimizing the SMARC-JNCC, there is a benefit in terms of coding gain compared to the I-JNCC.

7.2 Rayleigh faded source-relay links

Now, we assess the performance of the complete parity-check matrix H of the SMARC-JNCC. We use the channel model described in Section ‘System model’. Hence, all links have the same statistical model, and the average SNR is the same for all channels. The parameters used for the simulation are K = 606, R_{c,p} = 6/7, L = 707, and m_{s} = m_{r} = 5 (so that N = 10L = 7070). The overall spectral efficiency is R = 3/7 bpcu, so that E_{b}/N_{0} = 7γ/3. Because the simulation time would be very large if every point-to-point source-relay link had to be decoded separately, we made an approximation: the word error rate of the point-to-point code, when transmitted on a channel with fading gain α, is smaller than 10^{−4} when α^{2}γ = 5.5 dB. Therefore, we assumed that a relay correctly decoded the source codeword if α^{2}γ > 5.5 dB and not otherwise. We also add the performance of the SMARC-JNCC from Section ‘Perfect source-relay links’, corresponding to perfect source-relay links and R = 0.5 bpcu, as a reference curve (note that the reference curve corresponds to a larger spectral efficiency, as the coding rate R_{c} is larger than for the other curves, which slightly disadvantages the reference curve in terms of error performance).

Figure 7 shows that a diversity order of three is still achieved, which corroborates Proposition 4. In addition, two main conclusions can be drawn. First of all, the coding gain loss due to interuser failures is 6.5 dB, which is very large. Second, the benefit in terms of coding gain of the SMARC-JNCC compared to the I-JNCC is considerably decreased compared to Section ‘Perfect source-relay links’, which corresponds to the small horizontal SNR gap between the outage probabilities of a layered and a joint construction. Also note that the tighter lower bound, using density evolution, is close to the conventional lower bound in this case (probably due to the larger coding rate R_{c,p}). Finally, the WER performance of a layered construction is shown, which coincides with that of the I-JNCC.

7.3 Gaussian source-relay links

We test again the complete parity-check matrix H of the SMARC-JNCC, now assuming that the source-relay links are Gaussian, having additive white Gaussian noise only, without fading; fading occurs on the source-destination and relay-destination links only. We assume that the average SNR is the same for all channels. The parameters used for the simulation are the same as in Section ‘Rayleigh faded source-relay links’.

Figure 8 shows that in the case of Gaussian interuser channels, the loss compared to perfect interuser channels is very small. Furthermore, the performance of the I-JNCC has improved considerably in comparison with Section ‘Perfect source-relay links’, where only H_{GLNC} was used. The degree distributions that caused the poor coding gain of the I-JNCC in Section ‘Perfect source-relay links’ have changed considerably through the point-to-point codes, significantly improving the coding gain.

8 Conclusion

We put forward a general form of joint network-channel codes (JNCCs) for a wireless communication network where sources also act as relays. The influence of important JNCC parameters on the diversity order is studied, and an upper and a lower bound on the diversity order are proposed. The lower bound is only valid for the case where the number of sources is equal to the number of relays and each relay helps only two sources.

We then proposed a practical JNCC that is scalable to large networks. Using the diversity analysis, we managed to rigorously prove its achieved diversity order, which is optimal in a well identified set of wireless networks. We verified the performance of a regular LDPC code via numerical simulations, which suggest that as networks grow, it is difficult to perform significantly better than a standard layered construction.

Appendix 1

Proof of Proposition 1

The maximal diversity order can be derived using the diversity equivalence between a block BEC and a BF channel [24, 25]. Assume a block BEC, so that a block ${\mathbf{s}}_{{u}_{s}}$ or ${\mathbf{r}}_{{u}_{r}}$ is completely erased or perfectly known. Consider the case that e_{1} blocks of length 2L and e_{2} blocks of length L have been erased, where e = e_{1} + e_{2} is the total number of erasures, e_{1} ≤ m_{s} and e_{2} ≤ m_{r} − m_{s}. Hence, the number of unknown bits is equal to 2e_{1}L + e_{2}L. Considering the structure of H from (6), containing the block-diagonal matrix H_{c}, it follows that the 2e_{1}L + e_{2}L erased bits appear in only (2e_{1} + e_{2})(L − K) + m_{r}K of the available (m_{s} + m_{r})L − m_{s}K parity equations, i.e., (2e_{1} + e_{2})(L − K) equations involving H_{c} and all m_{r}K equations involving H_{GLNC}. Hence, the unknown bits can be retrieved only if there are sufficient linearly independent useful equations. This yields the necessary condition:

${m}_{r}\ge 2{e}_{1}+{e}_{2}.$

(23)

Denoting by e = e_{1} + e_{2} the total number of erased blocks, the largest value e_{max} of e for which e_{1} and e_{2} satisfy (23) for all e_{1} ≤ m_{s} and e_{2} ≤ m_{r} − m_{s} is given by
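The closed-form expression for e_{max} is not reproduced in this excerpt, but it can be recovered numerically from condition (23) by a brute-force scan (a sketch with a function name of our choosing; following the statement above, the condition must hold for every admissible split of e into e_{1} and e_{2}):

```python
def e_max(m_s, m_r):
    """Largest total number e of erased blocks such that every admissible
    split e = e1 + e2 (with e1 <= m_s and e2 <= m_r - m_s) satisfies the
    necessary condition (23): m_r >= 2*e1 + e2."""
    best = 0
    for e in range(m_r + 1):  # at most m_s + (m_r - m_s) = m_r blocks can be erased
        splits = [(e1, e - e1)
                  for e1 in range(min(e, m_s) + 1)
                  if 0 <= e - e1 <= m_r - m_s]
        if splits and all(2 * e1 + e2 <= m_r for e1, e2 in splits):
            best = e
    return best
```

As a sanity check, e_max(5, 5) = 2, consistent with a maximal diversity order of three for m = 5 (Table 2), and e_max(3, 3) = 1, consistent with d_{max} = 2 for the m_{s} = m_{r} = 3 example in Section ‘Calculation of a tighter lower bound on WER’.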

Before we present the actual proof, we first propose two lemmas.

Lemma 4

Any binary a × b matrix S, with a ≥ b, in which all rows have weight 2, cannot have full rank b.

Proof

If a matrix has full rank, there is no vector z ≠ 0 such that Sz = 0. However, if S has row weight 2, then S1 = 0, where 1 denotes the all-ones column vector. □

Consider now a column vector of b unknown variables z and a set of constraints on these variables, which are stacked in S so that Sz = c, where c is a column vector of known constants. In general, solving S for z corresponds to performing Gaussian elimination of S. However, under some conditions, this simplifies to backward substitution.

Lemma 5

If a binary a × b matrix S, a ≥ b, has full rank b and maximal row weight of 2, Gaussian elimination simplifies to backward substitution.

Proof

Without loss of generality, we eliminate all redundant (linearly dependent) rows in S to obtain a square matrix of size b. By Lemma 4, there must be at least one row in S with unit weight to have full rank. Starting from this known variable, we can solve for a further variable in z at each step as the row weight is smaller than or equal to 2.

Assume that this backward substitution procedure cannot be continued until all variables are known. That is, after successive decoding, there remain k rows, each consisting of a combination ${\mathbf{z}}_{{i}_{k}}+{\mathbf{z}}_{{j}_{k}}$ where neither ${\mathbf{z}}_{{i}_{k}}$ nor ${\mathbf{z}}_{{j}_{k}}$ is known. We split the matrix S into two parts: ${S}_{\text{unknown}}\in {\{0,1\}}^{k\times b}$, comprising the rows involving only unknown variables (note that each row of S_{unknown} has weight 2), and ${S}_{\text{known}}\in {\{0,1\}}^{(b-k)\times b}$, consisting of the rows involving only known variables. If the number of unknown variables is equal to k, then the rank of S_{unknown} must equal k, which is impossible by Lemma 4. Hence, the matrix S was not full rank, which contradicts our assumption. If the number of unknown variables is smaller than k, then there were redundant (linearly dependent) rows in S_{known}, which again contradicts the assumptions. We conclude that the procedure fails only if S does not have full rank. □
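The backward substitution of Lemma 5 can be sketched as follows: repeatedly pick an equation with exactly one remaining unknown and solve it. The helper below works over GF(2) with equations given as (index set, constant) pairs; all names are ours:

```python
def back_substitute(equations, nvars):
    """Solve a GF(2) system whose rows have weight at most 2 by backward
    substitution; returns None if the procedure stalls (rank deficiency)."""
    x = [None] * nvars
    while any(v is None for v in x):
        progress = False
        for idx, c in equations:
            unknown = [i for i in idx if x[i] is None]
            if len(unknown) == 1:  # exactly one unknown: solve it directly
                known_sum = sum(x[i] for i in idx if x[i] is not None) % 2
                x[unknown[0]] = c ^ known_sum
                progress = True
        if not progress:
            return None  # stalled: by Lemma 5, S was not full rank
    return x

# z0 = 1, z0 + z1 = 1, z1 + z2 = 0  ->  z = (1, 0, 0)
sol = back_substitute([({0}, 1), ({0, 1}, 1), ({1, 2}, 0)], 3)
```

A system in which every equation has weight 2 (e.g., z0+z1, z1+z2, z0+z2) never offers a single-unknown equation, so the procedure stalls, exactly as the combination of Lemmas 4 and 5 predicts.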

To prove Proposition 3, we use the diversity equivalence between a block BEC and the BF channel. In a block BEC, the channel Equation (4) simplifies to

where ε_{i} = 0 when the channel is erased and ε_{i} = 1 otherwise. Hence, ε_{i} = 0 if $i\in \mathcal{E}$ and ε_{i} = 1 if $i\in \stackrel{\u0304}{\mathcal{E}}$, where $\stackrel{\u0304}{\mathcal{E}}$ is the complement of $\mathcal{E}$.

Source codewords s_{i} can be retrieved from the transmissions in the source phase if ε_{i} = 1. Decoding the other source codewords at the destination is performed through the parity-check matrix H (Equation (6)). We split H into two parts:

where H_{left} and H_{right} have m_{s}L and m_{r}L columns, respectively. We also define $\mathbf{s}={[{\mathbf{s}}_{1}^{T}\dots {\mathbf{s}}_{{m}_{s}}^{T}]}^{T}$ and $\mathbf{r}={[{\mathbf{r}}_{1}^{T}\dots {\mathbf{r}}_{{m}_{r}}^{T}]}^{T}$. As Hx = 0, we have that

As we consider a block BEC, some transmissions are perfect. As in Appendix 1, consider the case that e_{1} blocks of length 2L and e_{2} blocks of length L have been erased, where $e={e}_{1}+{e}_{2}=\left|\mathcal{E}\right|$ is the total number of erasures, e_{1} ≤ m_{s} and e_{2} ≤ m_{r} − m_{s}. Considering the structure of H from (6), which contains the block-diagonal matrix H_{c}, it follows that the 2e_{1}L + e_{2}L erased bits appear in only (2e_{1} + e_{2})(L − K) + m_{r}K of the available (m_{s} + m_{r})L − m_{s}K parity equations, i.e., (2e_{1} + e_{2})(L − K) equations involving H_{c} and all m_{r}K equations involving H_{GLNC}. Next, (e_{1} + e_{2})K of the m_{r}K equations involving H_{GLNC} cannot be used to solve for erased bits in s, as these equations always have at least two unknowns. The overall set of equations to decode s thus becomes

where ${\mathbf{y}}_{i}^{\prime}=\frac{1+{\mathbf{y}}_{i}}{2}$ (BPSK modulation). We can stack the coefficients of all elements in s in a matrix H_{s}. For example, if m_{s} = m_{r} = 3, $\mathcal{E}=\left\{1\right\}$, $\mathcal{T}\left(2\right)=\{1,3\}$ and $\mathcal{T}\left(3\right)=\{1,2\}$, then

(30)

It is now easy to see that ${M}_{\mathcal{E}}$, as defined in Section ‘A lower bound based on $\left\{\mathcal{T}\right({u}_{r}\left)\right\}$ for ${n}_{{u}_{r}}=2$’, is closely related to H_{s}: ${\left[{M}_{\mathcal{E}}\right]}_{i,j}=1$ if [H_{s}]_{(i−1)L + 1…iL,(j−1)L + 1…jL} ≠ 0 and ${\left[{M}_{\mathcal{E}}\right]}_{i,j}=0$ if [H_{s}]_{(i−1)L + 1…iL,(j−1)L + 1…jL} = 0.

If $\left|\mathcal{E}\right|\le {d}_{M}-1$, then ${M}_{\mathcal{E}}$ has full rank, according to Definition 2. As established in Lemma 5, the set of equations represented by ${M}_{\mathcal{E}}$ can then be solved using backward substitution. This means that at each iteration, there is an equation with only one unknown. Consider a particular iteration and denote the index of the unknown by u. In H_{s}, this corresponds to an equation with an unknown source-codeword vector s_{u} of the type

or of the type ${\mathbf{s}}_{u}={\mathbf{y}}_{u}^{\prime}$.

Under ML decoding, we obtain what was claimed if the matrices ${\mathcal{\mathscr{H}}}_{{u}_{s}}^{{u}_{r}}$, ${u}_{s}\in \mathcal{T}\left({u}_{r}\right),{u}_{r}\in \{1,\dots ,{m}_{r}\}$, have full rank. Under BP decoding, we obtain what was claimed if, for each u_{r}, the set of L equations (30) can be solved with BP in the case of only one unknown source-codeword vector s_{u}.

Appendix 3

Proof of Lemma 2

A relay may not succeed in decoding the message from a source; we denote this event as a failure. There are m^{2} − m interuser channels, each of which may fail. Hence, there exist $\sum _{i=0}^{{m}^{2}-m}\left(\begin{array}{c}{m}^{2}-m\\ i\end{array}\right)$ different cases, where each case corresponds to a combination of failures and successes. We denote the case where all interuser channels are successful as case 1.
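The binomial sum above simply counts all subsets of the m² − m interuser channels that may fail, i.e., 2^{m²−m} cases in total. A quick check:

```python
from math import comb

def n_cases(m):
    """Number of failure/success patterns over the m^2 - m interuser channels."""
    n = m * m - m
    return sum(comb(n, i) for i in range(n + 1))  # equals 2**n by the binomial theorem
```

For m = 3, this gives 2^{6} = 64 cases to consider.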

Defining the diversity order corresponding to each case as ${d}_{c,i}=-\underset{\gamma \to \infty}{lim}\frac{logP\left(\text{case}\phantom{\rule{0.3em}{0ex}}i\right)P\left(\mathit{\text{ew}}\right|\text{case}\phantom{\rule{0.3em}{0ex}}i)}{log\gamma}$, it follows that the overall diversity order is d = min_{i} d_{c,i}.

The probability of f failures on independent interuser channels is proportional to $\frac{1}{{\gamma}^{f}}$ ([23], Equation (3.157)) so that for this case i,

The diversity order in the case of perfect interuser channels (f = 0) is d_{c,1}. That is, the error-correcting code can bear d_{c,1} − 1 erasures on node-destination links. Hence, d_{c,i} ≥ d_{c,1} only if $P\left(\mathit{\text{ew}}\right|\text{case}\phantom{\rule{0.3em}{0ex}}i)\le \frac{c}{{\gamma}^{{d}_{c,1}-f}}$, i.e., only if all information can still be retrieved at the destination, given that f interuser channels and d_{c,1} − f − 1 node-destination channels are erased. Let us check whether this is true for all f.

A relay stays silent if it cannot decode all source codewords corresponding to its transmission set. If there are f interuser failures, there are at most f relays which stay silent in the relay phase. This corresponds to at most f additional node-destination erasures adding to the assumed d_{c,1}−f−1 already erased node-destination channels, yielding a total of at most d_{c,1}−1 erased node-destination channels, which can be supported by the code, by the definition of d_{c,1}.

Appendix 4

Proof of Lemma 3

In the case that m_{s} > 4 and Algorithm 1 is used to construct $\left\{\mathcal{T}\right({u}_{r}),{u}_{r}=1,\dots ,{m}_{r}\}$, reciprocity is irrelevant for our proposed code, since $i\notin \mathcal{T}\left(j\right)$ whenever $j\in \mathcal{T}\left(i\right)$. Hence, if m_{s} > 4, the proof given in Appendix 3 is always valid.

Now consider the case that d_{c,1} = 2, which corresponds to m_{s} = m_{r} = m < 4 (see Proposition 1). In the case of f = 1 interuser channel failure, d_{c,i} is always larger than one, because $P\left(\mathit{\text{ew}}\right|\text{case}\phantom{\rule{0.3em}{0ex}}i)\le \frac{c}{\gamma}$, as at least one channel, the source-destination channel, needs to fail to lose the corresponding information bits.

Finally, consider the case that m_{s} = m_{r} = m = 4 and thus d_{c,1} = 3. Hence, in the case of no interuser failures, the code can support two node-destination failures, corresponding to four erased transmissions from two nodes, in the source phase and in the relay phase. Reciprocity is relevant, as $i\in \mathcal{T}\left(j\right)$ if $j\in \mathcal{T}\left(i\right)$ for (i,j) equal to (1,3) and (2,4). Because $P\left(\mathit{\text{ew}}\right|\text{case}\phantom{\rule{0.3em}{0ex}}i)\le \frac{c}{\gamma}$, we only have to consider the case that f = 1, denoted as case i in general. Hence, in the case that the interuser channel between sources 1 and 3 or between sources 2 and 4 has been erased, relays 1 and 3 or relays 2 and 4, respectively, stay silent. Note that the transmission sets of the remaining active relays are disjoint when Algorithm 1 is used, and because n = 2, they support all sources u_{s} = 1,…,4. If, in addition, one node-destination channel is erased, which corresponds to at most two transmissions, the destination has to recover the information bits from the erased source codeword. Because relay u_{r} cannot have u_{r} in its own transmission set $\mathcal{T}\left({u}_{r}\right)$, the erased relay codeword does not contain any information on the erased source codeword, which implies that the information is in the remaining relay codeword. Hence, we have that $P\left(\mathit{\text{ew}}\right|\text{case}\phantom{\rule{0.3em}{0ex}}i)\le \frac{c}{{\gamma}^{2}}$, or by (33), d_{c,i} ≥ 3. In other words, interuser failures do not decrease the diversity order.

Endnotes

^{a}Unless mentioned otherwise, we assume that channels are reciprocal, i.e., the channel from u_{1} to u_{2} is the same as the channel from u_{2} to u_{1}.

^{b}In practice, increasing the SNR value can be achieved by increasing the transmission power of a node, so that both the SNR of the node-to-destination channels and channels between non-destination nodes increase.

^{c}For conciseness, we do not formulate the equation for channels between non-destination nodes.

^{d}Note that relays u are not allowed to consider relay codewords ${\mathbf{r}}_{{u}_{r}}$ for inclusion in $\mathcal{S}\left(u\right)$. As a consequence, the right part of H_{GLNC} is diagonal in Equation (7). This restriction was not always applied in the literature (e.g., [17]), but it simplifies the theoretical analysis and code design.

^{e}A standard BF channel is a channel with B blocks of length L, where each block is affected by an independent fading gain. The maximal achievable diversity order on this channel is given by 1 + ⌊B(1 − R_{c})⌋, where R_{c} is the coding rate [27–29].

^{f}The attentive reader will notice that the first two block rows in Equation (A.7) in [21] are not used here. These block rows are only necessary if a source is helped by one relay only and no point-to-point codes are available, which is not the case here.

^{g}The coding matrix expresses the transmission sets for each relay, which is required to determine the outage probability.

^{h}For a specific instance, the parity-check matrix can be graphically represented by a bipartite graph, denoted as a Tanner graph. The Tanner graph representation is equivalent to the factor graph, which can be used for decoding.

Declarations

Acknowledgements

This study was supported by the European Commission in the framework of the FP7 Network of Excellence in Wireless COMmunications NEWCOM++ (contract no. 216715).

Authors’ Affiliations

(1)

Department of Telecommunications and Information Processing, Ghent University

Hunter TE: Coded cooperation: a new framework for user cooperation in wireless systems. Ph.D. thesis, University of Texas at Dallas; 2004.

Laneman JN, Tse D, Wornell GW: Cooperative diversity in wireless networks: efficient protocols and outage behavior. IEEE Trans. Inf. Theory 2004, 50(12):3062-3080. doi:10.1109/TIT.2004.838089

Ahlswede R, Cai N, Li S-YR, Yeung RW: Network information flow. IEEE Trans. Inf. Theory 2000, 46(4):1204-1216. doi:10.1109/18.850663

Koetter R, Médard M: An algebraic approach to network coding. IEEE/ACM Trans. Netw. 2003, 11(5):782-795. doi:10.1109/TNET.2003.818197

Li S-YR, Yeung RW, Cai N: Linear network coding. IEEE Trans. Inf. Theory 2003, 49(2):371-381.

Rebelatto JK, Uchôa-Filho BF, Li Y, Vucetic B: Multi-user cooperative diversity through network coding based on classical coding theory. IEEE Trans. Signal Process. 2012, 60(2):916-926.

Xiao M, Skoglund M: M-user cooperative wireless communications based on non-binary network codes. In Proc. Inf. Theory Workshop (ITW), Volos, Greece; 2009:316-320.

Kramer G, Gastpar M, Gupta P: Cooperative strategies and capacity theorems for relay networks. IEEE Trans. Inf. Theory 2005, 51(9):3037-3063. doi:10.1109/TIT.2005.853304

Guo Z, Huang J, Wang B, Cui JH, Zhou S, Willett P: A practical joint network-channel coding scheme for reliable communication in wireless networks. In Proc. ACM Int. Symp. on Mobile Ad Hoc Networking and Computing, New Orleans, Louisiana; 2009:279-288.

Hausl C, Dupraz P: Joint network-channel coding for the multiple-access relay channel. Proc. IEEE Commun. Soc. Sensor Ad Hoc Commun. Netw. 2006, 3:817-822.

Hausl C, Hagenauer J: Iterative network and channel decoding for the two-way relay channel. In Proc. IEEE Int. Conf. on Commun., Istanbul, Turkey; 2006:1568-1573.

Bao X, Li JT: Generalized adaptive network coded cooperation (GANCC): a unified framework for network coding and channel coding. IEEE Trans. Commun. 2011, 59(11):2934-2938.

Duyck D, Capirone D, Heindlmaier M, Moeneclaey M: Towards full-diversity joint network-channel coding for large networks. In Proc. European Wireless Conf., Vienna, Austria; 2011:1-8.

Li J, Yuan J, Malaney R, Azmi MH, Xiao M: Network coded LDPC code design for a multi-source relaying system. IEEE Trans. Wirel. Commun. 2011, 10(5):1538-1551.

Li J, Yuan J, Malaney R, Xiao M: Binary field network coding design for multiple-source multiple-relay networks. In Proc. IEEE Int. Conf. on Commun., Sydney, NSW, Australia; 2011:1-6.

Duyck D, Capirone D, Boutros JJ, Moeneclaey M: Analysis and construction of full-diversity joint network-LDPC codes for cooperative communications. EURASIP J. Wirel. Commun. Netw. 2010, Art. ID 805216. http://jwcn.eurasipjournals.com/content/2010/1/805216

Duyck D, Capirone D, Boutros JJ, Moeneclaey M: A full-diversity joint network-channel code construction for cooperative communications. In Proc. IEEE Int. Symp. on Personal, Indoor and Mobile Radio Commun. (PIMRC), Tokyo, Japan; 2009:1282-1286.

Tse DNC, Viswanath P: Fundamentals of Wireless Communication. Cambridge University Press, Cambridge; 2005.

Duyck D: Design of LDPC coded modulations for wireless fading channels. Ph.D. dissertation, Ghent University, Ghent, Belgium; in press (to be published in 2012).

Guillén i Fàbregas A: Coding in the block-erasure channel. IEEE Trans. Inf. Theory 2006, 52(11):5116-5121.

Guillén i Fàbregas A, Caire G: Coded modulation in the block-fading channel: coding theorems and code construction. IEEE Trans. Inf. Theory 2006, 52(1):91-114.

Knopp R, Humblet PA: On coding for block fading channels. IEEE Trans. Inf. Theory 2000, 46(1):189-205. doi:10.1109/18.817517

Malkamaki E, Leib H: Evaluating the performance of convolutional codes over block fading channels. IEEE Trans. Inf. Theory 1999, 45(5):1643-1646. doi:10.1109/18.771235

Chou PA, Wu Y, Jain K: Practical network coding. In Proc. Allerton Conf. on Communication, Control, and Computing, Illinois; 2003.

McEliece RJ, MacKay DJC, Cheng J-F: Turbo decoding as an instance of Pearl's "belief propagation" algorithm. IEEE J. Sel. Areas Commun. 1998, 16(2):140-152. doi:10.1109/49.661103

Ho T, Médard M, Koetter R, Karger DR, Effros M, Shi J, Leong B: A random linear network coding approach to multicast. IEEE Trans. Inf. Theory 2006, 52(10):4413-4430.

Biglieri E, Proakis J, Shamai S: Fading channels: information-theoretic and communications aspects. IEEE Trans. Inf. Theory 1998, 44(6):2619-2692. doi:10.1109/18.720551

Ozarow LH, Shamai S, Wyner AD: Information theoretic considerations for cellular mobile radio. IEEE Trans. Veh. Technol. 1994, 43(2):359-379. doi:10.1109/25.293655

Hausl C: Joint network-channel coding for the multiple-access relay channel based on turbo codes. Eur. Trans. Telecommun. 2009, 20(2):175-181. doi:10.1002/ett.1349

Cover TM, Thomas JA: Elements of Information Theory. Wiley, New York; 2006.

Ungerboeck G: Channel coding with multilevel/phase signals. IEEE Trans. Inf. Theory 1982, IT-28(1):55-67.

Kschischang F, Frey B, Loeliger H-A: Factor graphs and the sum-product algorithm. IEEE Trans. Inf. Theory 2001, 47(2):498-519. doi:10.1109/18.910572

Tanner M: A recursive approach to low complexity codes. IEEE Trans. Inf. Theory 1981, 27(5):533-547. doi:10.1109/TIT.1981.1056404

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.