Throughout the paper, we denote sets by capital letters, scalar variables by lowercase letters, vectors by bold lowercase letters, and matrices by bold capital letters. For a vector $\mathbf{x}$, we denote its $i$th component by $x_i$ and its transpose by $\mathbf{x}^{\mathrm{T}}$. We use the same capital letter for a set and for its cardinality.

Consider $N$ sensing nodes and $M$ sink nodes in the region of interest. Let $(\Omega, \mathcal{F}, P)$ be a probability space, where $\mathcal{F}$ is a $\sigma$-algebra of random events, $\Omega$ is a finite set of elementary outcomes, and $P$ is the corresponding probability measure. There are $K$ objective functions $f_{s,1}, \ldots, f_{s,K}$, defined on a subset $X$ of a Hilbert space. Let $\mathbf{f}_s = (f_{s,1}, \ldots, f_{s,K})^{\mathrm{T}}$, $s = 1, \ldots, N$. Then $\mathbf{f}_s$ is the objective vector function of sensing node $s$. Let $\mathbf{x}_s$ be the column vector of variables of sensing node $s$, $s = 1, \ldots, N$, and let $\mathbf{g}$ be a column vector function describing the constraints. We can formulate the primal problem (**PP**) as follows:
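A generic sketch of such a formulation, with assumed notation ($\mathbf{f}_s$, $\mathbf{x}_s$, the constraint function $\mathbf{g}$, the feasible set $X$, and the random element $\omega$ are placeholders, not the paper's original symbols), is:

```latex
\begin{aligned}
\max_{\mathbf{x}_1, \ldots, \mathbf{x}_N} \quad
  & \bigl( \mathbf{f}_1(\mathbf{x}_1)^{\mathrm{T}}, \ldots,
           \mathbf{f}_N(\mathbf{x}_N)^{\mathrm{T}} \bigr)^{\mathrm{T}} \\
\text{s.t.} \quad
  & \mathbf{g}(\mathbf{x}_1, \ldots, \mathbf{x}_N, \omega) \le \mathbf{0}, \\
  & \mathbf{x}_s \in X, \quad s = 1, \ldots, N .
\end{aligned}
```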

The objective function $\mathbf{f}_s$ may be coupled across nodes. In order to design a distributed algorithm, we introduce auxiliary variables $\mathbf{y}_s$ to decouple it. Assume that the node set associated with the coupled variables of $\mathbf{f}_s$ is $N_s$, and let $N_s(l)$ denote the node set associated with the coupled variables of the $l$th objective function of sensing node $s$. Then the decoupled primal problem (**DPP**) can be given by
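A sketch of such a decoupled formulation, with assumed notation ($\mathbf{y}_{sj}$ denotes node $s$'s local copy of the coupled variables of node $j$; the consistency constraints enforce agreement), is:

```latex
\begin{aligned}
\max_{\{\mathbf{x}_s\}, \{\mathbf{y}_s\}} \quad
  & \bigl( \tilde{\mathbf{f}}_1(\mathbf{x}_1, \mathbf{y}_1)^{\mathrm{T}}, \ldots,
           \tilde{\mathbf{f}}_N(\mathbf{x}_N, \mathbf{y}_N)^{\mathrm{T}} \bigr)^{\mathrm{T}} \\
\text{s.t.} \quad
  & \mathbf{y}_{sj} = \mathbf{x}_j, \quad j \in N_s, \; s = 1, \ldots, N, \\
  & \mathbf{g}(\mathbf{x}, \mathbf{y}, \omega) \le \mathbf{0}, \qquad
    \mathbf{x}_s \in X, \quad s = 1, \ldots, N .
\end{aligned}
```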

where $\tilde{f}_{s,l}$ is the $l$th decoupled objective function and $\mathbf{y}_s$ is the corresponding vector of auxiliary variables.

In our formulation, the objective functions are deterministic, exploiting the fact that each sensing node can obtain the required information from the network. The constraint set contains the random factors of the network, such as message exchange and environmental effects. If we knew the distribution $P(\omega)$, $\omega \in \Omega$, of these random factors, we could transform the problem into a deterministic one by taking expectations. However, in WSNs there is often no prior knowledge about the randomness arising from the network itself or from the environment. Therefore, we develop an algorithm that requires no such prior knowledge, which can be achieved by the stochastic quasigradient method [7].
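As a concrete illustration of the idea (not the paper's implementation), the following Python sketch runs a projected stochastic quasigradient iteration on a toy expected-cost problem; the objective, the sampling model, and the $1/t$ stepsize are assumptions made only for this example.

```python
import random

def stochastic_quasigradient(grad_sample, x0, project, num_iters=5000):
    """Projected stochastic quasigradient descent.

    grad_sample(x) returns a gradient estimate built from a single
    observed random state, so no prior knowledge of the underlying
    distribution is required.
    """
    x = x0
    for t in range(1, num_iters + 1):
        rho = 1.0 / t                              # diminishing stepsize
        x = project(x - rho * grad_sample(x))      # step, then project onto the feasible set
    return x

# Toy problem: minimize E[(x - w)^2] with w ~ Uniform[0, 1]; the minimizer is 0.5.
random.seed(0)
x_star = stochastic_quasigradient(
    grad_sample=lambda x: 2.0 * (x - random.random()),  # gradient of one sampled term
    x0=0.0,
    project=lambda x: min(max(x, 0.0), 1.0),            # feasible set [0, 1]
)
# x_star ends up close to 0.5 even though the distribution of w is never used explicitly
```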

To decompose the problem, we take the Lagrange dual approach. The Lagrange function [21] of (3) is given by
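A sketch of such a Lagrangian, with assumed multiplier notation ($\boldsymbol{\lambda}_{sj}$ pricing the consistency constraints $\mathbf{y}_{sj} = \mathbf{x}_j$ and $\boldsymbol{\mu}$ pricing the coupling constraints), is:

```latex
L(\mathbf{x}, \mathbf{y}; \boldsymbol{\lambda}, \boldsymbol{\mu})
  = \sum_{s=1}^{N} \tilde{\mathbf{f}}_s(\mathbf{x}_s, \mathbf{y}_s)
  + \sum_{s=1}^{N} \sum_{j \in N_s}
      \boldsymbol{\lambda}_{sj}^{\mathrm{T}} \bigl( \mathbf{x}_j - \mathbf{y}_{sj} \bigr)
  + \boldsymbol{\mu}^{\mathrm{T}} \mathbf{g}(\mathbf{x}, \mathbf{y}, \omega)
```

Here the first term is kept as the formal vector expression described in the text; a specific application would replace it with a concrete scalar objective.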

where $\mathbf{f}_s$ is the objective vector function of sensing node $s$; this is a formal expression that can be instantiated with different objective functions for different applications.

We call $\boldsymbol{\lambda}$ and $\boldsymbol{\mu}$ decoupled prices ($\boldsymbol{\lambda}$ is used to decouple the coupling of variables, and $\boldsymbol{\mu}$ is used to decouple the coupling of objective functions). Since (4) is separable, we exploit the decomposable structure of the Lagrangian and decompose the problem into $N$ subproblems. Maximization is carried out at each sensing node $s$, $s = 1, \ldots, N$, with knowledge of its local variables $(\mathbf{x}_s, \mathbf{y}_s)$ and the current state $\omega^{(t)}$, by solving the following optimization problem.

At iteration $t$, each sensing node $s$ updates its resource variables $\mathbf{x}_s$ and auxiliary variables $\mathbf{y}_s$ according to
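A sketch of this update, with assumed notation ($L_s$ the per-node term of the Lagrangian, superscript $(t)$ the iteration index), in the spirit of the update labeled (7) in the text, is:

```latex
\bigl( \mathbf{x}_s^{(t)}, \mathbf{y}_s^{(t)} \bigr)
  = \arg\max_{\mathbf{x}_s \in X, \; \mathbf{y}_s}
    L_s \bigl( \mathbf{x}_s, \mathbf{y}_s;
               \boldsymbol{\lambda}^{(t)}, \boldsymbol{\mu}^{(t)}, \omega^{(t)} \bigr)
```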

We proceed to solve the dual problem. Let $q(\boldsymbol{\lambda}, \boldsymbol{\mu})$ denote the dual function. Then the dual problem is given by
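With $q(\boldsymbol{\lambda}, \boldsymbol{\mu})$ the dual function obtained by maximizing the Lagrangian over the primal variables (assumed notation), a sketch of the dual problem is:

```latex
\min_{\boldsymbol{\lambda}, \, \boldsymbol{\mu} \ge \mathbf{0}}
  \; q(\boldsymbol{\lambda}, \boldsymbol{\mu})
```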

At iteration $t$, each sensing node can acquire the state of the random variables, $\omega^{(t)}$. The stochastic quasigradient method needs only this current state information of the system and uses it to form the stochastic subgradients of $q$ at iteration $t$. For the dual problem, the prices are updated according to
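A sketch of such price updates, with assumed notation ($\rho^{(t)}$ the stepsize, $\boldsymbol{\xi}^{(t)}$ and $\boldsymbol{\zeta}^{(t)}$ the stochastic quasigradients, $[\cdot]^{+}$ projection onto the nonnegative orthant), in the spirit of the updates labeled (9) and (10) in the text, is:

```latex
\boldsymbol{\lambda}^{(t+1)}
  = \bigl[ \boldsymbol{\lambda}^{(t)} - \rho^{(t)} \boldsymbol{\xi}^{(t)} \bigr]^{+},
\qquad
\boldsymbol{\mu}^{(t+1)}
  = \bigl[ \boldsymbol{\mu}^{(t)} - \rho^{(t)} \boldsymbol{\zeta}^{(t)} \bigr]^{+}
```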

where $\boldsymbol{\xi}^{(t)}$ and $\boldsymbol{\zeta}^{(t)}$ are the stochastic quasigradients of $q$ with respect to $\boldsymbol{\lambda}$ and $\boldsymbol{\mu}$, respectively.

where $\omega^{(t)}$ is the state of the random variables at iteration $t$.

We summarize our algorithm for the general formulation of the stochastic multiobjective optimization problem (ASMOP) in Algorithm 1.

**Algorithm 1:** ASMOP.

(1) Price update algorithm: At times $t = 1, 2, \ldots$, the decoupled prices are updated according to (9) and (10).

(2) Sensing node $s$'s algorithm: At time $t$, each sensing node $s$ updates its variables $(\mathbf{x}_s, \mathbf{y}_s)$ according to (7).
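To make the flow of Algorithm 1 concrete, the following self-contained Python sketch runs the price/node iteration on a toy single-constraint instance: two nodes share a randomly varying capacity, each node's local problem $\max_{0 < x_s \le 1} \log x_s - \lambda x_s$ has the closed-form solution $x_s = \min(1/\lambda, 1)$, and the price $\lambda$ takes a stochastic quasigradient step using only the currently observed capacity. The utilities, capacity model, and stepsize are assumptions for this example, not the paper's setup.

```python
import random

def asmop_demo(num_iters=20000, seed=1):
    """Toy ASMOP-style loop: local primal updates plus a stochastic
    quasigradient price update, driven only by the observed random state."""
    rng = random.Random(seed)
    lam = 1.0                                    # initial decoupled price
    x = [1.0, 1.0]
    for t in range(1, num_iters + 1):
        c = 1.0 + 0.2 * (rng.random() - 0.5)     # observed capacity, E[c] = 1
        # Node step: each node solves max log(x_s) - lam * x_s over (0, 1].
        x = [min(1.0 / lam, 1.0) for _ in range(2)]
        # Price step: quasigradient of the dual formed from the current state only.
        g = c - sum(x)
        lam = max(lam - (1.0 / t) * g, 1e-6)     # keep the price positive
    return lam, x

lam, x = asmop_demo()
# With E[c] = 1 split between two identical nodes, lam tends to 2 and each x_s to 0.5.
```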

To prove that the algorithm can converge to the optimal solution of the primal problem, we make the following assumptions.

Theorem 1.

If assumptions (1) and (2) hold, then from an arbitrary initial point $(\mathbf{x}^{(0)}, \boldsymbol{\lambda}^{(0)}, \boldsymbol{\mu}^{(0)})$, the sequence generated by (7), (9), and (10) converges, and every limit point of the sequence is primal-dual optimal.

Proof.

Let the sequences of iterates $\{\boldsymbol{\lambda}^{(t)}\}$ and $\{\boldsymbol{\mu}^{(t)}\}$ be generated by (9) and (10), respectively. Then, to guarantee the convergence of the algorithm, according to [7, 22], the stepsizes $\rho^{(t)}$ and the quasigradients $\boldsymbol{\xi}^{(t)}$ and $\boldsymbol{\zeta}^{(t)}$ should be chosen such that
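The standard stochastic quasigradient conditions of this type [7, 22], in assumed notation, read:

```latex
\rho^{(t)} \ge 0, \qquad
\sum_{t=1}^{\infty} \rho^{(t)} = \infty, \qquad
\sum_{t=1}^{\infty} \bigl( \rho^{(t)} \bigr)^{2} < \infty
```

together with the requirement that, conditioned on the history of the iteration, $\boldsymbol{\xi}^{(t)}$ and $\boldsymbol{\zeta}^{(t)}$ are unbiased estimates of subgradients of $q$ at $(\boldsymbol{\lambda}^{(t)}, \boldsymbol{\mu}^{(t)})$ with bounded second moments.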

It can be seen that the stepsizes $\rho^{(t)}$, $t = 1, 2, \ldots$, satisfy (14). From [22], we know that $\boldsymbol{\xi}^{(t)}$ and $\boldsymbol{\zeta}^{(t)}$ from (12) and (13) also satisfy (15) and (16).

From assumptions (1) and (2), the primal function is concave, and the dual function $q$ is convex in $\boldsymbol{\lambda}$ and $\boldsymbol{\mu}$ for a fixed $\omega$. From (7), (9), (10), (11), (12), and (13), we can conclude that the sequence converges to the optimal solution by solving the dual problem [22]. As the primal problem is a convex optimization problem, there is no duality gap between the primal and dual problems, so the sequence generated by the algorithm is primal-dual optimal.

Remarks 3.

Because of multipath routing, the primal problem may not be strictly concave even if each objective function is strictly concave. This may lead to oscillation of the sequences generated by the algorithm. There are several ways to cope with this problem; for example, we can first add augmented variables to the problem and adopt the first-order Lagrangian method to solve it [23].

The main difference of our proposed approach is that it draws on multiobjective optimization and provides potential interfaces for each layer. In this way, we can combine the advantages of layered architectures and cross-layer design; in other words, we can implement different algorithms in each module according to the specific application. In Figure 1, $\boldsymbol{\lambda}$ and $\boldsymbol{\mu}$ act as the interface variables between the different modules and the sensor nodes. Through $\boldsymbol{\lambda}$ and $\boldsymbol{\mu}$, the network architecture can be decomposed into different modules, each of which fulfills its functionality in a distributed manner. From (7), we can transform the multiple objectives of the whole network into multiple objectives of each sensor node, and optimizing the objective vector function of each sensor node achieves the globally optimal solution. It is therefore convenient to implement algorithms in each module to solve the objective vector function independently according to different requirements.