
Modeling interaction using trust and recommendation in ubiquitous computing environment

Abstract

To secure computing in a pervasive environment, an adaptive trust and recommendation based access control model (based on the human notion of trust) is proposed. The proposed model supports the calculation of direct as well as indirect trust based on recommendations. It autonomously handles both the situation in which the requesting entity has past experience with the service and that in which a stranger entity requests access without any prior interaction with the service. It encompasses the ability to reason about human cognitive behavior and can adjust in accordance with behavioral pattern changes. An X-bar control chart is used to handle malicious recommendations. The defense mechanisms the proposed model incorporates against attacks such as the bad mouthing attack, the oscillating behavior attack and the conflicting behavior attack are also demonstrated.

1 Introduction

The technological advances of the last few years have revolutionized the world of computing. Computer systems, once deemed isolated, dedicated systems, have been replaced by interactive handheld smart devices. The widespread availability of the Internet has encouraged the growth of open distributed environments. In such open environments, the desire for anywhere, anytime service access is bringing Mark Weiser's [1] vision of ubiquitous computing closer to reality. It envisions a densely networked world of smart and intelligent but invisible communication and computation devices interacting with each other for resource sharing and service provisioning. The lack of fixed infrastructure in a ubiquitous environment pushes computational services to become as mobile as their users.

The dynamism of the ubiquitous infrastructure means that entities offering services will be confronted with requests from entities they have never met before; mobile entities will need to obtain services within environments that are unfamiliar and possibly hostile [2]. This amplifies the severity of security problems, as entities offering and requesting services are continuously joining or leaving. Access to collaborative resources in a ubiquitous environment demands some way of authenticating an entity, as well as a way of determining the extent of access that entity may have to the shared resources. In traditional computing environments, access to resources is constrained either by secure authentication mechanisms or by physical network boundaries. In a ubiquitous environment, on the contrary, the interaction and mobility of entities impose stringent requirements on service providers regarding the type and level of access they will grant to their collaborators. Consider a scenario that shows the need for a new access control mechanism in open environments.

John is walking through a shopping mall. Suddenly he remembers that he has to send an important document to his colleague by e-mail. He does not have Internet service on his cell phone. He wants to request Internet service from a nearby device, but his cell phone is an unknown device that is not pre-configured as trustworthy to access services offered by other devices in the shopping mall. An access control model to handle inter-domain service access is therefore needed, one that can allow John to access services offered by other devices in the network.

To handle these issues, user access to resources should be based on trustworthiness rather than on traditional techniques that statically determine the access rights of entities. To meet the security concerns of a ubiquitous resource sharing environment, the model should be able to deal with devices and environments of unknown origin and should adapt to the dynamics of mobile and socially motivated computing models [3].

Trust is an elementary channel of socializing in the human world. It is a human cognitive function that spurs social interactions. Recently, many researchers have been working to imitate the way humans assess trust, as a channel for socializing, in new security concepts for ubiquitous computing [4–13]. Their emphasis is on computational capabilities, with less concern for a systematically designed trust infrastructure, resulting in ambiguous requirements for access control in ubiquitous environments. In this article, our focus is on developing a framework for access control in ubiquitous environments based on the real world characteristics of trust. Our vision is to allow mobile users to walk into any computing environment and access the required services. Users can request services in the environment using various handheld or wearable devices. The study asserts trust reasoning capabilities within the system, which help it determine the level of access an entity may or may not have to shared resources. It uses recommendations from trusted services as a means for trust to propagate between unknown entities. However, in open, dynamic ubiquitous environments, numerous malicious recommenders who give unfair recommendations to maximize their own gains can also exist. Incorporating mechanisms to avoid or reduce the influence of unfair recommendations is therefore a fundamental problem for trust models in ubiquitous environments. The framework uses an X-bar control chart to filter out unfair recommendations, assuming that recommendations provided by different recommenders follow the same probability distribution. The model is also sensitive to suspicious behavior and incorporates the concept of maximum achievable trust, which increases as the entity continues to behave positively and is decremented each time the entity shows alternating behavior. We believe the proposed model is the first to introduce adaptive policy based management to handle strategic malicious behavior and an X-bar control chart to handle malicious recommendations in order to provide an attack resistant model.

2 Related work

Security is a primary concern in any computing environment. This is particularly true for the ubiquitous computing environment, as it allows ad hoc interaction of known and unknown autonomous entities. Trust based on the human notion is applied to cope with the new security concerns of ubiquitous environments. Blaze et al. [14] first proposed the decentralized trust-management system PolicyMaker. Their trust model was based on credential verification and secure application policies to restrict access to resources and services. However, it did not use recommendations to choose a suitably reputed service. Moreover, its complex computational requirements made it infeasible for ubiquitous environments. Kagal et al. [15, 16] argue that large, open systems do not scale well with centralized security solutions. They instead propose a security solution (Centaurus) based on trust management, which involves developing a security policy and assigning credentials to entities. Centaurus depends heavily on the delegation of trust to third parties. A recent research trend [4–13, 17–28] is to build autonomous trust management as the fundamental building block of future security frameworks. The SECURE project [4, 5] presents a formal trust and risk framework to secure collaboration between ubiquitous computer systems. It demonstrates the trust life cycle in three stages: trust formation, trust evolution and trust exploitation.

Shand et al. [6] proposed a trust mechanism with a risk assessment model for resource sharing. The model also computed recommendations using a transitive combination of values; however, it suffered from the issue of long recommendation chains. He et al. [7] proposed a trust model based on cloud theory. They used expected value, entropy and hyper entropy to define a trust cloud representing a trust relationship. Ya-Jun et al. [8] presented a trust-based access control model that relies on trust negotiation to establish initial trust for authenticating strangers. It is an extension of role based access control and thus inherits its issues. Jameel et al. [9] proposed a trust model for ubiquitous systems based on vectors of trust values of different entities. The trust computation takes into account peer reputation, confidence, the history of past interactions and time based evaluation to calculate the trust value, and a method for detecting malicious recommendations is presented. The model takes into account the aggregate of events in order to compute the behavior pattern of an entity; the sequence of interaction outcomes has no effect on the evolution of trust, ignoring the relevance of events that occurred at different times. Deno and Sun [11] proposed a probabilistic trust management model for pervasive computing that treats the trust value as the probability with which a device performs satisfactory interactions with its neighbors. The problem with this model is that it cannot distinguish between one positive outcome out of 2 interactions and 100 positive outcomes out of 200 interactions, because in both cases the probability equals 0.5 [12]. Further, we point out that even if the trustee has had no past interaction with the trustor and the outcome of the maiden interaction is negative, the model assigns a trust value of 0.33. TRULLO [13] is a model for assigning initial trust values using singular value decomposition. It sets initial trust values based on properties of the user's past experiences. Ahamed and Sharmin [17] proposed a trust-based secure service discovery model for a truly pervasive environment. It is a hybrid model that makes service sharing decisions based on mutual trust. It associates a security level with each service, which allows the service manager to decide which services can be shared without explicit user input. However, this security level is not used to regulate the maximum trust value achievable by the user, i.e., it does not define the maximum trust a user with a given history can earn. Komarova and Riguidel [18] proposed an adjustable trust model for access control. All parameters of this model are defined by a policy set expressed in natural language. In this model the trust value grows and declines linearly after each interaction, which is not in accordance with human behavior. Almenarez et al. [19–22] proposed a mathematical evolutionary model known as the pervasive trust management model (PTM). PTM uses historical behavior, behavior feedback and the total number of interactions to calculate the new trust value. The model takes into account the aggregate of events while calculating trust. We have found that PTM does not provide protection against paradoxical behavior as claimed in [22]: an attacker can easily manipulate the system to gain high trust despite paradoxical behavior. The omnipresent formal trust model (FTM) [23, 24] presents a flexible trust model incorporating a behavioral model to handle interactions. However, it fails to handle situations where a malicious user launches a strategic attack, as the trust value is not modified in light of the old behavior pattern.

3 Definitions and theoretical framework

Here we present the definitions and framework for our model.

Pervasive environment: A pervasive environment is a framework or milieu in which autonomous entities (also referred to as pervasive devices) interact with each other. The interaction may be anonymous or one-way. Pervasive entities tend to trust each other for service sharing and congregate to emulate a social group.

Entity: Entity is a generic term for a subject that accesses services provided by a service provider; it can be a user, a pervasive device or a requesting service itself.

Service: Each service provides some functionality and can be accessed by entities or other services. Services are provided by different service providers. A pervasive environment has a set of services with their associated trust requirements and other security policies.

Service policy: The service policy defines the set of rules associated with a service. An entity must conform to the service policy in order to access that service.

Service interface: A service usually has different levels, each addressing a different set of users. Service interfaces define these levels within the same service.

Security level: Each service maintains a set of numerical values that define the security levels of its service interfaces. The security level defines the rate of reward/penalty after each interaction.

Trust: The concrete, mathematical definition of trust followed in this article is given by Diego Gambetta [29]:

Trust (or, symmetrically, distrust) is a particular level of the subjective probability with which an agent assesses that another agent or group of agents will perform a particular action, both before he can monitor such action (or independently of his capacity ever to be able to monitor it) and in a context in which it affects his own action.

When a service trusts some entity, it implicitly means that the past behavior of the entity was good enough, or at least not so detrimental, for the service to consider granting it access.

Trust representation: The level of trust a service can have in an entity is represented by a trust value. A higher trust value corresponds to a higher probability that an entity can be trusted. In our approach, 0 corresponds to a total absence of trust; this occurs only if the service completely distrusts an entity. Table 1 outlines the trust levels, their corresponding ranges of trust values and their meaning as used in our trust model.

4 Characteristics of our trust model

The proposed framework is designed for secure collaboration between known and unknown entities in an uncertain environment. The model calculates the trustworthiness of each entity, analyzes the entity's behavior pattern and provides service access decisions in compliance with security policies. The framework has the following characteristics, illustrated by a short sketch after the list:

  • Trust is a relationship established between an entity and a service for a specific service interface, representing the amount of trust a service has in an entity to authorize access to that particular service interface. The model incorporates the trust dimensions of subjectiveness, time and context. The notation {Entity E; Service S; ServiceInterface SI; Time t} is used to describe a trust relationship.

  • Discrete levels of trust are used in this model, referred to as the trust value.

    $0 \le T_i(E_i, S_i, SI_i, t_i) < 1$
  • There are three main sources to establish trust. Direct trust is computed on the basis of experiences the entity had with the requested service (Tdir). When the system has no personal interactions with the entity in question, a communicated opinion about the trustworthiness of an entity can be requested Trecom (indirect trust). Initially new entities joining a pervasive environment for the first time have neither evidence of past experiences nor any reference. In this case ignorance value (Tiv) is assigned which can be updated as additional information becomes available.

    $T_i(E_i, S_i, SI_i, t_i) \in \{\, T_{dir}(E_i, S_i, SI_i, t_i),\; T_{recom}(E_i, S_i, SI_i, t_i),\; T_{iv}(E_i, S_i, SI_i, t_i) \,\}$
  • Trust is service interface specific. An entity may have different trust values for same service but different interfaces.

    $T_i(E_i, S_i, SI_i, t_i) \ne T_i(E_i, S_i, SI_j, t_i), \quad i \ne j$
  • Trust is a time variant value, it decays with time given that entity has no new interaction. The trust an entity has acquired at time t in a perspective of a specific service might not be the same as the trust attributed to him in the same perspective, at time t + Δt

    $T_i(E_i, S_i, SI_i, t + \Delta t) < T_i(E_i, S_i, SI_i, t)$
  • Social trust affects the trust factor. An entity is more likely to be trusted if it is trusted by the peer services as compared to the other services located in other autonomous pervasive environments. The trust of other services provides a basis of assigning maiden trust value.

  • Trust value increases with good actions and decreases with bad actions.

    $I_{cur}^{+} \Rightarrow T_i(E_i, S_i, SI_i, t_i) \ge T_{i-1}(E_i, S_i, SI_i, t_i), \qquad I_{cur}^{-} \Rightarrow T_i(E_i, S_i, SI_i, t_i) \le T_{i-1}(E_i, S_i, SI_i, t_i)$
  • The model counters suspicious behavior by limiting the maximum achievable trust value. The model also monitors the entity for constant positive actions to increase the maximum achievable trust.

    $c_{n_i} > c_{n_{i-1}} \Rightarrow T_{max_i} < T_{max_{i-1}}, \qquad c_{p_i} > c_{p_{i-1}} \Rightarrow T_{max_i} > T_{max_{i-1}}$
  • Reward/penalty rates change with the behavior of entity. Penalty factor increases with the consecutive negative behavior. Reward factor increases with the consecutive positive behavior.

    $c_{n_i} > c_{n_{i-1}} \land I_{cur}^{-} \Rightarrow \Delta n_i \ge \Delta n_{i-1}, \qquad c_{p_i} > c_{p_{i-1}} \land I_{cur}^{+} \Rightarrow \Delta p_i \ge \Delta p_{i-1}$
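To make these characteristics concrete, here is a minimal sketch (our own illustration, not part of the published model) of a trust relationship record that enforces the bound 0 ≤ T < 1 and keys trust by entity, service, service interface and time:

```python
from dataclasses import dataclass

@dataclass
class TrustRelationship:
    """One trust relationship {Entity E; Service S; ServiceInterface SI; Time t}."""
    entity: str
    service: str
    interface: str   # trust is service interface specific
    value: float     # discrete trust value, 0 <= value < 1
    updated: float   # time the value was last revised

    def __post_init__(self):
        if not 0.0 <= self.value < 1.0:
            raise ValueError("trust value must lie in [0, 1)")

# The same entity may hold different trust values for two interfaces
# of the same service:
t_read  = TrustRelationship("E1", "FileService", "read",  0.6, updated=0.0)
t_write = TrustRelationship("E1", "FileService", "write", 0.2, updated=0.0)
```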

5 Architecture of our trust model

It is widely acknowledged that traditional security measures fail to provide the necessary flexibility for interactions between known and unknown entities in an uncertain environment. This leads us to design our trust based security architecture, based on the human notion of trust, to allow access to resources in an uncertain environment. In our model, we assume that all entities are autonomous and that some of them are mobile. Entities in our model try to access services; thus, we establish trust relationships between entities and services. Each service maintains a list of trustworthy and untrustworthy entities, the trust value associated with each, the time when the trust value was last revised and the number of interactions the entity has had with the service. An overview of our proposed trust-based security framework is shown in Figure 1. The framework consists of three main layers. The model allows a service requestor to access a particular service interface or shared resource in the network on the basis of its trust value maintained in the trust repository of each service. If no prior trust information is available, the recommendation evaluator module seeks recommendations from peer services located within the same pervasive environment or from trusted parties offering the same service in other autonomous pervasive environments. The recommended trust value computed by the indirect trust computation module forms the basis for a new trust relationship. Similarly, if no recommendation is available for the entity, the service can assign it an ignorance value based on the security level of the service interface the entity is requesting. The performance interpretation module is responsible for the evolution process. It evaluates the behavior pattern of an entity involved in an interaction according to its actions as additional evidence becomes available, and it is connected with the trust repository and the interaction monitoring module. Direct trust computation takes place after the culmination of an interaction, once observations are obtained from the interaction monitoring module. The basic function of the policy analyzer is to process a request and determine whether the requestor is permitted to perform the requested action under the policies defined for that service interface.

Figure 1. Proposed model.

5.1 Indirect trust computation

Indirect trust computation is of key importance when the requested service has no personal interactions with the entity in question on which to base a direct trust computation. Indirect trust computation (Algorithm 1) is carried out by the recommendation evaluator module. It seeks recommendations for further information when the amount of observation is insufficient for the service to establish the trustworthiness of the entity requesting the service. It requests recommendations, with respect to the entity in question, from peer services located within the same pervasive environment or from trusted parties in other autonomous pervasive environments. The recommendation evaluator module computes the recommended trust value of entity E_i for service S_i and service interface SI_i at time t, Trecom(E_i, S_i, SI_i, t), from the trust values recommended by peer services and by services in other autonomous pervasive environments, as

$T_{recom}(E_i, S_i, SI_i, t) = \alpha \, T_p(E_i, S_i, SI_i, t) + (1 - \alpha) \, T_o(E_i, S_i, SI_i, t)$

The weights given to the peer services' recommended trust value (T_p) and the recommended trust value from services in other autonomous pervasive environments (T_o) are α and (1 − α) respectively, where α is a positive constant that can be fine tuned to keep the trust value for an entity between 0 and 1. In both cases, the recommended trust value is computed as the average, over all recommenders, of the product of the recommended trust value and the confidence level (CL) on that trust value. The peer recommended trust value is thus

$T_p(E_i, S_i, SI_i, t) = \frac{\sum_{j=1}^{N_p} T_j(E_i, S_j, SI_k, t) \cdot CL_j}{N_p}$

where N_p is the total number of peer recommendations and SI_k represents any service interface. Similarly, if N_o represents the total number of recommendations from other autonomous environments, the recommended trust value from other autonomous pervasive environments is given as:

$T_o(E_i, S_i, SI_i, t) = \frac{\sum_{j=1}^{N_o} T_j(E_i, S_j, SI_k, t) \cdot CL_j}{N_o}$
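A minimal sketch of this aggregation follows; the names and the value α = 0.7 are our own illustrative choices, and each recommendation is paired with the confidence level CL on it, whose computation is defined next:

```python
def recommended_trust(peer_recs, other_recs, alpha=0.7):
    """T_recom = alpha * T_p + (1 - alpha) * T_o.

    peer_recs and other_recs are lists of (trust_value, confidence_level)
    pairs from peer services and from services in other autonomous
    pervasive environments; alpha (assumed 0.7 here) favors peers.
    """
    def weighted_average(recs):
        # Average of trust value * confidence level over all recommenders
        return sum(t * cl for t, cl in recs) / len(recs) if recs else 0.0

    t_p = weighted_average(peer_recs)    # T_p
    t_o = weighted_average(other_recs)   # T_o
    return alpha * t_p + (1 - alpha) * t_o

# Three peer recommenders, two recommenders from other environments:
print(recommended_trust([(0.8, 0.9), (0.7, 0.6), (0.9, 0.8)],
                        [(0.6, 0.5), (0.7, 0.4)]))
```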

Confidence level: CL measures the reliability of the recommending service. It defines how certain we are about the trust value recommended by the recommending service and is given by:

$CL = \eta \cdot \gamma \cdot SL, \qquad 0 \le CL \le 1$

where η is the normalized interaction value, γ is the time based experience and SL is the security level of the recommending service interface.

5.1.1 Time based experience

As much as change is about adapting to the new, it is about detaching from the old. Studies of human nature show that all relations exhibit a liability of newness in which the rate of decay slows over time [6]. The trust based access control mechanism is based on human notions and uses the history of interactions for trust value computation. If every experience is given the same weight in trust computation regardless of when the interaction happened, the computed trust value can lead to false results. Instead, services with old experiences of the entity in question should carry less weight in peer recommendation than those with new ones; older experience should decay with time and have less effect. Let t and t_c denote the time of the last interaction and the current time respectively; the decay function γ is then defined as:

$\gamma(t, t_c) = \alpha (1 - \beta)^{\Delta t}$

where Δt = t_c − t, and α and β are adjustable positive constants that can be tuned to define the rate of decay. Figure 2 depicts that, as the time since the last interaction grows, the impact of the recommended value in the trust calculation decreases.

Figure 2. Effect of time on recommendation.

5.1.2 Effect of experience

Recommended trust depends on the recommender's experience with the entity in question. An experience is the result of an interaction with an entity. The greater the number of interactions the recommending peer service has had with the entity in question, the greater the confidence level. Hence, the confidence level on the recommender is directly proportional to the number of interactions it has had with the entity. Since the trust value satisfies 0 ≤ T_i ≤ 1, we require a normalization function that maps the number of interactions into the range 0 to 1. The function we use to normalize the interaction value (n_t) is:

$\eta = \frac{n_t - n_{t_{\min}}}{n_{t_{\max}} - n_{t_{\min}}}, \qquad 0 \le \eta \le 1$

where $n_{t_{\max}}$ and $n_{t_{\min}}$ represent the maximum and minimum numbers of interactions, with $n_{t_{\min}} = 1$ and $1 \le n_{t_{\max}} \le \infty$. Assuming $n_{t_{\max}} = 50$, Figure 3 depicts that the recommended trust value is directly proportional to the number of interactions the recommending service has had with the entity.

Figure 3. Effect of experience on recommendation.
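Combining normalized experience η, time based experience γ and the recommender's security level SL, a sketch of the confidence level computation; the constants (α = 1, β = 0.05, $n_{t_{\max}}$ = 50) are illustrative, and SL is assumed here to be already scaled into [0, 1]:

```python
def confidence_level(n_t, dt, sl, alpha=1.0, beta=0.05,
                     n_min=1, n_max=50):
    """CL = eta * gamma * SL for a single recommender.

    n_t: number of interactions the recommender had with the entity
    dt:  time elapsed since the recommender's last interaction
    sl:  security level of the recommending service interface
    """
    eta = (n_t - n_min) / (n_max - n_min)         # normalized experience
    gamma = alpha * (1 - beta) ** dt              # time based experience
    return min(max(eta * gamma * sl, 0.0), 1.0)   # clamp to [0, 1]

print(confidence_level(n_t=30, dt=5, sl=0.8))     # recent, experienced
print(confidence_level(n_t=30, dt=100, sl=0.8))   # same recommender, stale
```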

5.1.3 Effect of sensitivity of recommender

The recommendation based trust calculation process, while evaluating the trust value given by a service, takes into account the sensitivity of the service interface offering the recommendation. For example, a file service has multiple interfaces depending on the type of functionality it provides, and each service interface has a security level associated with it. The sensitivity of the recommender depends on the service interface security level. Figure 4 shows that the recommended trust value depends on the sensitivity of the recommender (SL).

Figure 4. Effect of sensitivity of recommender on recommendation.

Judging the recommendation: Malicious entities often manipulate the indirect trust calculation by sending false recommendations to lower or raise the recommended trust value of the requesting entity. These malicious recommendations can strongly influence the access control mechanism. In our model, we propose a simple mechanism based on control charts to determine whether a recommendation is honest or malicious. Once the recommendation evaluator module collects all the recommendations, and prior to calculating the indirect trust, the recommendations undergo a filtration process based on X-bar charts. An X-bar chart consists of two limits, the upper control limit and the lower control limit, which define a region within which honest recommendations fall. Assuming the recommendations are normally distributed with known mean μ and known standard deviation σ, the upper control limit (UCL) and lower control limit (LCL) are

$UCL = \mu + \frac{3\sigma}{\sqrt{n}}$
(1)
$LCL = \mu - \frac{3\sigma}{\sqrt{n}}$
(2)

where n is the number of recommendations. The steps for judging recommendations by constructing the control chart are listed below, followed by a code sketch:

  1. Collect the recommendations for the entity in question.

  2. Calculate the mean and standard deviation of the recommendation sample.

  3. Calculate the control limits.

  4. Verify whether each recommendation lies within the control limits.

  5. Discard out-of-bound recommendations.
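A sketch of these steps under the stated normality assumption; the function name and the use of the population standard deviation are our choices:

```python
import math
import statistics

def filter_recommendations(recs):
    """Keep only recommendations inside the X-bar control limits."""
    n = len(recs)
    mu = statistics.mean(recs)
    sigma = statistics.pstdev(recs)          # population std. deviation
    ucl = mu + 3 * sigma / math.sqrt(n)      # Equation (1)
    lcl = mu - 3 * sigma / math.sqrt(n)      # Equation (2)
    return [r for r in recs if lcl < r < ucl]
```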

5.2 Direct trust computation

The performance interpretation module is responsible for direct trust computation (Algorithm 2). It evaluates the behavior pattern of the entity involved in an interaction according to its actions, as additional evidence becomes available. Each service maintains the following information for each entity, updated during trust evaluation:

  • Number of positive interactions with the entity, n_p

  • Number of negative interactions with the entity, n_n

  • Number of times the entity has oscillated between positive and negative behavior, OnOffcount

  • Number of continuous positive interactions with the entity, c_p

  • Number of continuous negative interactions with the entity, c_n

  • Maximum trust that the entity can achieve for a given set of interactions, Tmax

  • Whether the entity has ever been blacklisted, isBlackListed

  • Whether the entity has ever been distrusted, isDistrusted

Not all services residing in a pervasive environment need the same level of security. A weather service can be offered to an entity even with a low trust value, and with frequent positive interactions the entity can quickly become a trusted user of the service. But Internet

Algorithm 1 Recommendation

Require: Recommendations

Ensure: RecommendedTrust

1: if E_i isStranger = true then

2:    broadcast recommendation request

3:    while (isReply) do

4:       T_i = getRecommendation(E_i, S_i, SI_i, t_i)

5:       i++

6:    end while

7:    n = i   // n is the total number of recommendations received

8:    compute the mean μ and standard deviation σ of all recommendations

9:    compute UCL and LCL using Equations 1 and 2

10:    for each recommendation T_j, j = 1 … n do

11:       if T_j > LCL and T_j < UCL then

12:          $\eta_j = \frac{n_{t_j} - n_{t_{\min}}}{n_{t_{\max}} - n_{t_{\min}}}, \quad \Delta t_j = t_c - t_j, \quad \gamma_j = \alpha (1 - \beta)^{\Delta t_j}, \quad CL_j = \eta_j \cdot \gamma_j \cdot SL_j$

13:          if S_j ∈ PeerService then

14:             $T_p = \frac{T_p + (T_j \cdot CL_j)}{n_p + 1}$

15:          else if S_j ∈ AutonomousService then

16:             $T_o = \frac{T_o + (T_j \cdot CL_j)}{n_o + 1}$

17:          end if

18:       end if

19:    end for

20:    $T_{recom} = \alpha T_p + (1 - \alpha) T_o$

21: end if

22: return T_recom

services are more sensitive and require more positive behavior before an entity is declared completely trusted. Similarly, a negative interaction with the Internet service should decline trust at a higher rate than with the weather service. The model associates a security level (sl) value with each service to control the rate of reward/penalty after each interaction. Each service maintains a numerical value that signifies its security level.

Trust evaluation takes place after completing an interaction. If the entity has demonstrated positive behavior during an interaction, the number of positive interactions n_p is incremented. Otherwise, the interaction is considered negative and the number of negative interactions n_n is incremented. c_p and c_n count the numbers of continuous positive and negative interactions respectively. c_p is incremented when consecutive positive interactions are performed and is reset to 0 when the entity shows a change in behavior; c_n is incremented in a similar way on consecutive negative interactions. Depending on the outcome of the interaction, positive behavior is rewarded by increasing the service's trust in the entity and negative behavior is penalized by reducing it. The updated trust value is calculated from the previous trust value and the impact of the current interaction, in the form of a reward/penalty rate, using the following equation:

$T_i = \begin{cases} T_{i-1} + \Delta p & \text{for } I_{cur} = \text{positive interaction} \\ T_{i-1} - \Delta n & \text{for } I_{cur} = \text{negative interaction} \end{cases}$

where "Icur" indicates current interaction, T i and Ti-1indicate new trust value and previous trust value respectively. The reward Δp and penalty rate Δn for each type of behavior is dependent on

  • Total number of interactions of the entity, n_t

  • Total number of positive interactions of the entity, n_p

  • Total number of negative interactions of the entity, n_n

  • Counter for consecutive negative interactions of the entity, c_n

  • Counter for consecutive positive interactions of the entity, c_p

  • Security level sl, where 0.5 ≤ sl ≤ 3.

  • Slope rates σ_p for positive and σ_n for negative interactions, governed by the on/off counter. Each time the entity changes its behavior from positive to negative, σ_n is changed by

    $\sigma_{n_i} = 2 \sigma_{n_{i-1}}$

    and each time the entity changes its behavior from negative to positive, the slope rate σ_p is changed by

    $\sigma_{p_i} = \frac{\sigma_{p_{i-1}}}{2}$

    For positive behavior, the reward rate Δp is calculated as:

    $\Delta p = \alpha \, \frac{n_p}{n_t} \cdot 2^{\sigma_p \cdot c_p} \cdot sl$
    (3)

    where α is a constant whose value is 0.01. The reward rate increases with consecutive positive interactions. Similarly, for negative behavior the penalty rate Δn is calculated as:

    $\Delta n = \alpha \, \frac{n_n}{n_t} \cdot \frac{2^{\sigma_n \cdot c_n}}{sl}$
    (4)

The penalty rate Δn depends on the sensitivity of the relationship. According to the formula, a very trustworthy entity is not declared distrustful after just one or two bad interactions, but if negative behavior persists the entity is penalized rapidly. The penalty factor increases with consecutive negative behavior. In general, under negative behavior the trust value converges to 0, affirming an entity as completely distrustful, and under positive behavior it converges to 1, affirming an entity as completely trustful.
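A sketch of one trust evolution step; the exponent form $2^{\sigma c}$ is our reading of Equations 3 and 4, and all names and sample values are illustrative:

```python
ALPHA = 0.01  # constant from Equations 3 and 4

def reward_rate(n_p, n_t, sigma_p, c_p, sl):
    """Delta-p (Equation 3): grows exponentially with consecutive
    positive interactions; scaled by the security level sl."""
    return ALPHA * (n_p / n_t) * 2 ** (sigma_p * c_p) * sl

def penalty_rate(n_n, n_t, sigma_n, c_n, sl):
    """Delta-n (Equation 4): grows exponentially with consecutive
    negative interactions; sensitive services (small sl) punish harder."""
    return ALPHA * (n_n / n_t) * 2 ** (sigma_n * c_n) / sl

# A trusted entity (T = 0.8) of a sensitive service (sl = 0.5) turning
# hostile: the penalty accelerates with every consecutive bad action.
t, n_t = 0.8, 20
for c_n in range(1, 6):
    n_t += 1
    t = max(t - penalty_rate(n_n=c_n, n_t=n_t, sigma_n=1.0,
                             c_n=c_n, sl=0.5), 0.0)
    print(round(t, 3))   # 0.798, 0.791, 0.770, 0.717, 0.589
```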

5.2.1 Time based aging of trust value

Trust is time variant; it decays with the passage of time. When an entity sends a service access request message, the service first checks the trust repository for the entity's previous trust value to decide the level of access the requesting entity can have. The proposed model applies time based aging to that trust value: a trust value updated long ago decays with time and does not carry the same weight as one updated recently. Let t and t_c denote the time the trust value was last updated and the current time respectively; the same decay function γ as defined for computing time based experience is used:

$\gamma = \alpha (1 - \beta)^{\Delta t}$

Algorithm 2 updateTrustValue

Require: Entity E, SecurityLevel sl

Ensure: newTrustValue

1: if I_cur = positive then

2:    if I_last != positive then

3:       c_n = 0

       $\sigma_{p_i} = \frac{\sigma_{p_{i-1}}}{2}$

4:    else

5:       c_p = c_p + 1

6:    end if

7:    if c_p ≥ c_posTh then

8:       increment T_max

9:    end if

10:    n_p++

11:    Δp = calculate increment

12:    T_i = Min(T_{i-1} + Δp, T_max)

13: else

14:    if I_last != negative then

15:       c_p = 0

16:       OnOffcount++

17:       if OnOffcount ≥ OnOffcountTh then

18:          Distrust(E)

19:       end if

20:       if OnOffcount > 1 then

21:          decrement T_max

          $\sigma_{n_i} = 2 \sigma_{n_{i-1}}$

22:       end if

23:    else

24:       c_n++

25:    end if

26:    n_n++

27:    Δn = calculate increment

28:    T_i = Max(T_{i-1} − Δn, 0)

29:    if T_i ≤ 0 then

30:       Distrust(E)

31:    end if

32: end if

33: return T_i

where Δt = t_c − t, and α and β are adjustable positive constants that can be tuned to define the rate of decay. The impact of time based aging on the trust value is calculated as:

$T_i = T_i \cdot \gamma$
(5)
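A sketch of applying this aging when a request arrives; measuring Δt in days and the values of α and β are our assumptions:

```python
import time

def aged_trust(stored_value, last_update, now=None, alpha=1.0, beta=0.02):
    """Equation 5: decay a stored trust value by the elapsed time.

    last_update and now are UNIX timestamps; dt is measured in days.
    """
    now = time.time() if now is None else now
    dt = (now - last_update) / 86400.0   # seconds -> days
    gamma = alpha * (1 - beta) ** dt     # same decay function as before
    return stored_value * gamma

# A trust value of 0.8 left unrefreshed for 30 days decays to ~0.44
print(aged_trust(0.8, last_update=time.time() - 30 * 86400))
```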

5.3 Policy analyzer

Policies provide a more constrained means of adapting the behavior of entities [30]. The model proposes an adaptive approach to handling strategic malicious behavior through policy based management. It incorporates a set of rules for strategic attack detection, together with appropriate actions and controls to counter these attacks. The access policies used for trust computation are:

  1. Trust value symbolizes the level of trust a service has in an entity. The trust levels and their corresponding trust values are described in Table 1.

Table 1 Trust levels and their description

  2. The rate of reward/penalty is controlled by the service security level, which is defined by the service.

  3. An entity is distrusted when its current trust approaches 0.

  4. A distrusted entity is allowed to interact again after a forgiveness time.

  5. T_max of an entity is decremented and made equal to its current trust value each time the entity changes its behavior.

  6. T_max is incremented if the continuous positive behavior of an entity exceeds c_posTh.

  7. An entity is blacklisted if it is distrusted d times, where 1 ≤ d ≤ 3.

  8. An entity is distrusted when its OnOffcount approaches OnOffcountTh.

Algorithm 3 illustrates the working of the proposed model in the presence of the policy analyzer to provide access to a requesting entity. A sketch of how this policy set might be encoded follows.
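Every threshold value below is a deployment choice assumed for illustration, not fixed by the model:

```python
# Illustrative policy configuration for the policy analyzer.
POLICY = {
    "c_posTh": 5,                 # consecutive positives before T_max rises
    "OnOffcountTh": 3,            # oscillations tolerated before distrust
    "distrustThreshold": 3,       # d: distrust events before blacklisting
    "t_forgiveness": 7 * 86400,   # seconds before a distrusted entity
                                  # may interact again
}

def on_behavior_change(state):
    """Policies 5 and 8: collapse T_max to the current trust value on an
    oscillation, and distrust the entity when OnOffcount grows too large."""
    state["OnOffcount"] += 1
    state["T_max"] = state["T"]
    if state["OnOffcount"] >= POLICY["OnOffcountTh"]:
        state["isDistrusted"] = True
```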

6 Malicious attacks and defense solutions in proposed model

The open nature of the ubiquitous environment makes access control models designed for it vulnerable to attackers. These attackers are motivated by selfish intent either to gain illegitimate access to services or to manipulate the reputation of others for their own benefit. In this section, we investigate attacks against trust and reputation models and show how the proposed model protects against them in order to provide an attack resistant model.

6.1 Bad mouthing attack

In a bad mouthing attack, one or more entities provide dishonest recommendations either to elevate the trust values of malicious entities or to lower the trust values of honest entities. The proposed model uses several mechanisms to avoid and detect this attack. First, the model uses an X-bar control chart to filter out unfair recommendations, assuming that recommendations provided by different recommenders follow the same probability distribution. Let us take a data set of 15 recommendations: {0.2, 0.3, 0.25, 0.25, 0.3, 0.8, 0.8, 0.8, 0.9, 0.9, 0.7, 0.85, 0.75, 0.76, 0.9}. We assume that 30% of the recommenders are providing malicious recommendations (T < 0.5)

Algorithm 3 permitService

Require: Entity E, TrustValue T

Ensure: AccessLevel

1: if E_i isStranger = false then

2:    if (isBlacklisted(E_i) = true) then

3:       return denyAccess

4:    else if (isDistrusted(E_i) = true) then

       $\Delta t = t_c - t$

5:       if (Δt < t_forgiveness) then

6:          return denyAccess

7:       else

8:          T_dir = searchTrustValue(E_i)

9:       end if

10:    end if

11: else

12:    requestRecommendation()

13:    if (n_p + n_o ≠ 0) then

14:       T_i = compute T_recom

15:    else

16:       T_i = assign ignorance value

17:    end if

18: end if

    $l_i = mapTrustLevel(E_i, T_i)$

19: return grantAccess of level l_i

to lessen the trust value of the entity in question. Figure 5 shows how malicious recommendations are detected using the X-bar control chart. Recommendations that lie within the interval specified by LCL and UCL are considered honest; outliers are discarded. The proposed method was able to detect 100% of the malicious recommendations; however, 20% of valid recommendations were also discarded. Secondly, the proposed model also calculates the confidence level on the recommended trust value to diminish the effect of fake recommendations. This mechanism does not avoid the bad mouthing attack, but it can minimize its effects. The confidence level depends on the size of experience, the time of the last interaction and the sensitivity of the recommending entity. The size of experience measures the number of times two entities have interacted. We use the size of past experience to give more relevance to services that have known the entity in question for a long time. Accordingly, we assume that the trust level of an entity with more past experience has already converged to a steady trust value, and therefore its judgment should be more relevant than that of an entity with fewer interactions with the entity in question. The proposed model distinguishes between old and recent interactions, giving less weight to valid but old recommendations.
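As a worked illustration on this data set (our own computation; the exact limits depend on which standard deviation estimate is used):

```python
import math
import statistics

recs = [0.2, 0.3, 0.25, 0.25, 0.3, 0.8, 0.8, 0.8, 0.9, 0.9,
        0.7, 0.85, 0.75, 0.76, 0.9]

mu = statistics.mean(recs)                  # ~0.631
sigma = statistics.pstdev(recs)             # ~0.268
margin = 3 * sigma / math.sqrt(len(recs))   # ~0.208
ucl, lcl = mu + margin, mu - margin         # ~0.838 and ~0.423

# All five low recommendations fall below the LCL and are discarded:
malicious = [r for r in recs if not lcl < r < ucl and r < 0.5]
print(sorted(malicious))                    # [0.2, 0.25, 0.25, 0.3, 0.3]
```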

Figure 5. Detecting malicious recommendations.

Thirdly, the trust propagation chain is categorized: recommendations from peer services are given more weight in the recommended trust calculation than recommendations from services in other autonomous pervasive environments.

6.2 Oscillating attack

Malicious entities show oscillating behavior by alternately behaving well and badly, hoping that they will not be detected and that their trust value will continue to grow while they cause damage. Attackers attempt to exploit the dynamic properties of trust through inconsistent behavior. In the proposed model, the performance interpretation module judges the behavior of the entity and decreases the trust value in proportion to the number of negative actions and the on/off behavior count. The final trust value is always less than the initial trust value, as the maximum achievable trust is decremented each time an entity shows alternating behavior. Figure 6 shows that, even though the number of positive interactions is greater than the number of negative interactions, the trust value continues to decrease, and after judging the behavior the model declares the entity completely distrusted and blacklists it.

Figure 6. Strategic behavior attack.

6.3 Conflicting behavior attack

In this attack, malicious entities behave inconsistently in the user domain. They can manipulate the recommended trust value by performing differently with different services. For example, an attacker can always behave well with one service and badly with another. These two services will develop conflicting opinions about the malicious entity, and when some other service requests recommendations about the malicious entity from them, the recommendations will not agree with each other; the requesting service will assign low recommendation trust to a recommending service, believing that it has sent a dishonest recommendation. Moreover, most recommendation models consider the aggregate of recommendations, in which case the malicious entity can go undetected. The proposed model uses the confidence level on each recommendation to avoid this attack. The confidence level depends on the sensitivity of the service. Even if the malicious entity shows conflicting behavior with different services, the model gives more relevance to recommendations from services that are more sensitive than to those with an ordinary sensitivity level. Figure 7 depicts an entity that has shown conflicting behavior with different services, but whose recommendation is weighted on the basis of the sensitivity of the service with which it interacted. In this scenario, even though the malicious entity has shown good behavior with service G, since G's sensitivity level is low (SL = 3), the recommendation from this service is given the least weight.

Figure 7. Detecting conflicting behavior.

7 Verification of system correctness

In this section, we verify the most relevant objectives of the proposed model described above to demonstrate its novel theoretical concepts.

Proposition 1: The model observes the behavior of the entity and gives gradual increments/decrements initially and exponential increments/decrements subsequently, before completely trusting/distrusting the entity.

Proof: The increment/decrement factor for a given positive/negative interaction is calculated using Equations 3 and 4 respectively. Figure 8 shows the trust establishment of an entity with positive and negative behaviors. The model awards gradual increments/decrements initially; however, once continuity in behavior is established, the awards become exponential.

Figure 8. Growth/decline in trust value.

Proposition 2: The model judges the oscillating behavior of the entity by lowering the Tmax of the entity.

Proof: The algorithm keeps a history of the entity's oscillating behavior as OnOffcount. The algorithm alters Tmax using policy 5 each time the entity changes its behavior. Figure 9 shows how the proposed model reacts to the oscillating behavior of an entity by lowering Tmax.

Figure 9. Effect of oscillating behaviour on maximum achievable trust value.

Proposition 3: The model keeps a history of the distrustful behavior of an entity, and frequent distrustful behavior renders the entity blacklisted.

Proof: Distrustful behavior of the entity is logged as distrustCount. The entity is blacklisted if the count exceeds distrustThreshold. The procedure used by the model is shown in Algorithm 4:

Algorithm 4 Distrust(Entity E)

1: isDistrusted = true

2: distrustCount++

3: if distrustCount ≥ distrustThreshold then

4:    isBlacklisted = true

5:    // no future interaction with the entity

6: end if
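The same bookkeeping in runnable form, a direct transcription of Algorithm 4 (the per-entity state dictionary is our representation):

```python
def distrust(state, distrust_threshold=3):
    """Algorithm 4: log a distrust event and blacklist repeat offenders."""
    state["isDistrusted"] = True
    state["distrustCount"] += 1
    if state["distrustCount"] >= distrust_threshold:
        state["isBlacklisted"] = True   # no future interaction with entity

e = {"isDistrusted": False, "isBlacklisted": False, "distrustCount": 0}
for _ in range(3):
    distrust(e)
print(e["isBlacklisted"])   # True after the third distrust event
```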

Proposition 4: Reward/penalty rate after each interaction is controlled by service security level.

Proof: The slope of trust increment/decrement depends upon the security level of the requested service. A high security level demands prolonged positive interactions to achieve maximum trust, and vice versa. The most secure service has security level 0.5, whereas the least secure service may have a security level of 3. Figure 10 depicts the effect of the security level on the increment/decrement of the trust value.

Figure 10. Effect of security level on trust value.

Proposition 5: In the proposed model the trust value decays with time, assigning more weight to recent observations and less weight to previous observations.

Proof: In the model, each service keeps a time stamp of its latest interaction with every entity. Each time an entity makes a request, time based aging is computed from the current time and the time the trust value was last updated, using Equation 5. Figure 11 shows the decay of the trust value during an interval in which the entity did not have any new interaction with the service.

Figure 11. Time based aging.

8 Comparison with other models

An important property of a trust model in a pervasive environment is adaptivity: the capability to adjust in accordance with behavioral pattern changes. The proposed model is compared with the PTM [22], FTM [23] and Wang and Varadharajan [31] trust models by considering positive and negative interactions randomly (Figure 12). In the proposed model, when an entity shows constant positive behavior the increase factor grows exponentially, i.e., it is gradual in the beginning and then rises rapidly. A high trust value is achieved through long-term interactions with good behavior, and the increase depends on the number of constant positive actions. In the Wang and FTM models, an entity performing positively is rewarded rapidly at the beginning, but the trust earning rate gradually decreases. In PTM, positive behavior has an exponential form and then a logarithmic one. In all the models, as the entity continues to show constant positive behavior, trust converges to the maximum trust value, i.e., 1. In our proposed model, however, the maximum achievable trust value is adjustable and controlled by the adaptive policies.

Figure 12. Comparison with other models.

When an entity shows negative behavior, our proposed model attempts to judge whether the entity performed the negative action intentionally or unintentionally by analyzing its sequence of interactions. The penalty factor increases with consecutive negative behavior. Also, if an entity has a history of deceptive behavior, the rate of trust decline increases. The Wang, FTM and PTM models punish an entity in a similar way by quickly decreasing the trust value.

Finally, if an entity shows oscillating behavior by randomly mixing positive and negative actions, the model attempts to judge the behavior and decreases the trust value in proportion to the number of negative actions and the on/off behavior count. The final trust value is always less than the initial trust value, as the maximum achievable trust is decremented each time an entity shows alternating behavior. In the Wang model the trust value is inclined towards the last interaction value and does not consider historical behavior, whereas PTM takes into account the aggregate of events in order to compute the behavior pattern of an entity; the sequence of interaction outcomes has no effect on the evolution of trust, ignoring the significance of interactions taking place at different times.

9 Conclusion

In this article, an adaptive trust and recommendation based access control architecture for pervasive environments is proposed. The proposed model autonomously handles both the situation in which the requesting entity has past experience with the service and that in which a stranger entity requests access without any identity or past interaction with the service. The main contributions of the article are: (1) we define an adaptive trust evolution algorithm that dynamically adjusts the trust value according to the entity's behavior, minimizing human involvement in security management; (2) we introduce the concept of maximum achievable trust to regulate suspicious behavior; (3) an adaptive policy analyzer is incorporated for strategic attack detection, together with appropriate actions and controls to counter these attacks; (4) motivated by human nature, we use the confidence level to judge recommendations; and (5) in order to filter malicious recommendations, we introduce X-bar control charts. In addition, we have shown the effectiveness of our model by demonstrating it to be attack resistant against the bad mouthing attack, the oscillating behavior attack and the conflicting behavior attack. Our future research will focus on implementation of the proposed model on smart devices (laptops, PDAs and smart phones) to analyze its performance and the optimized utilization of resources.

References

  1. Weiser M: The Computer for the Twenty-First Century. Scientific American; 1991.


  2. Cahill V, Gray E, Seigneur J, Jensen C, Chen Y, Shand B, Dimmock N, Twigg A, Bacon J, Wagealla W, Terzis S, Nixon P, Serugendo G, Bryce C, Carbone M, Krukow K, Nielsen M: Using trust for secure collaboration in uncertain environments. IEEE Pervasive Computing Mobile and Ubiquitous Computing 2003, 2: 52-61.


  3. Robinson P, Vogt H, Wagealla W: Some Research Challenges in Pervasive Computing, Privacy, Security and Trust within the Context of Pervasive Computing. Volume 780. Springer, US; 2004:1-16.


  4. English C, Nixon P, Terzis S, McGettrick A, Lowe H: Security models for trusting network appliances. In 5th IEEE International Workshop on Networked Appliances. Liverpool; 2002:39-44.


  5. English C, Wagealla W, Nixon P, Terzis S, McGettrick A, Lowe H: Trusting collaboration in global computing. In 1st International Conference on trust management. Greece; 2003:136-149.


  6. Shand B, Dimmock N, Bacon J: Trust for ubiquitous, transparent collaboration. In 1st IEEE International Conference on Pervasive Computing and Communications. Texas, USA; 2003:153-160.


  7. He R, Niu J, Yuan M, Hu J: A novel cloud-based trust model for pervasive computing. In 4th International Conference on Computer and Information Technology. China; 2004:693-700.


  8. Ya-Jun G, Fan H, Ping-Guo Z, Rong L: An access control model for ubiquitous computing application. In 2nd International Conference on Mobile Technology, Applications and Systems. China; 2005:128-133.


  9. Jameel H, Hung LX, Kalim U, Sajjad A, Lee S, Lee Yk: A trust model for ubiquitous systems based on vectors of trust values. In 7th IEEE International Symposium on Multimedia. Irvine, USA; 2005:674-679.


  10. Bhatti R, Bertino E, Ghafoor A: A trust-based context-aware access control model for web-services. Distrib Parallel Databases 2005, 18(1):83-105. 10.1007/s10619-005-1075-7


  11. Deno MK, Sun T: Probabilistic trust management in pervasive computing. In IEEE/IFIP International Conference on Embedded and Ubiquitous Computing. China; 2008:610-615.


  12. Hang C, Wang Y, Singh MP: An adaptive probabilistic trust model and its evaluation. In 7th international joint conference on Autonomous agents and multiagent systems. Portugal; 2008:1485-1488.


  13. Quercia D, Hailes S, Capra L: TRULLO-local trust bootstrapping for ubiquitous devices. In 4th International Conference on Mobile and Ubiquitous Systems: Computing, Networking and Services. Pennsylvania, USA; 2007:1-9.


  14. Blaze M, Feigenbaum J, Lacy J: Decentralized trust management. In 17th IEEE Symposium on Security and Privacy. Oakland; 1996:164-173.


15. Kagal L, Finin T, Joshi A: Trust-based security in pervasive computing environments. IEEE Comput 2001, 34(12):154-157. 10.1109/2.970591


  16. Kagal L, Undercoffer J, Perich F, Joshi A, Finin T: Vigil: enforcing security in ubiquitous environments. Grace Hopper Celebration of Women in Computing 2002 2002.


  17. Ahamed SI, Sharmin M: A trust-based secure service discovery model for pervasive computing. Int J Commun 2008, 31(18):4281-4293.


  18. Komarova M, Riguidel M: Adjustable trust model for access control. In 5th international conference on Autonomic and Trusted Computing. Norway; 2008:429-443.


19. Almenarez F, Marin A, Campo C, Garcia C: PTM: a pervasive trust management model for dynamic open environments. 1st Workshop on Pervasive Security, Privacy and Trust in Conjunction with MobiQuitous 2004.


  20. Almenarez F, Marin A, Campo C, Garcia C: TrustAC: trust based access control for pervasive devices. In 2nd International Conference Security in Pervasive Computing. Germany; 2005:225-238.


  21. Almenarez F, Marin A, Diaz D, Sanchez J: Developing a model for trust management in pervasive devices. In 3rd IEEE International Workshop on Pervasive Computing and Communication Security. Pisa, Italy; 2006:267-272.


  22. Almenarez F, Marin A, Diaz D, Cortes A, Campo C, Garcia C: Trust management for multimedia P2P applications in autonomic networking. Adhoc Netw 2011, 9(4):687-690.


  23. Haque M, Ahamed SI: An omnipresent formal trust model (FTM) for pervasive computing environment. In 31st Annual International Computer Software and Applications Conference. Beijing, China; 2007:49-56.


  24. Ahamed SI, Haque M, Endadul M, Rahman F, Talukder N: Design, analysis, and deployment of omnipresent formal trust model (FTM) with trust bootstrapping for pervasive environments. J Syst Softw 2010, 83: 253-270. 10.1016/j.jss.2009.09.040


25. Gray E, O'Connell P, Jensen C, Weber S, Seigneur J, Yong C: Towards a Framework for Assessing Trust-Based Admission Control in Collaborative Ad Hoc Applications. Volume 66. Technical Report, Dept. of Computer Science, Trinity College Dublin; 2002.


26. Josang A: An algebra for assessing trust in certification chains. In Proceedings of the Network and Distributed Systems Security Symposium. San Diego, USA; 1999.


27. Abdul-Rahman A, Hailes S: Supporting trust in virtual communities. In 33rd Hawaii International Conference on System Sciences. Maui; 2000:1-9.


  28. Dimmock N, Belokosztolszki A, Eyers D, Bacon J, Moody K: Using trust and risk in role-based access control policies. In 9th ACM Symposium on Access control models and technologies. New York, USA; 2004:156-162.


  29. Gambetta D: Can We Trust Trust?. In, Trust: Making and Breaking Cooperative Relations. Basil Blackwell, Oxford; 1990.


  30. Sloman M, Lupu E: Security and management policy specification. IEEE Netw 2002, 16(2):10-19. 10.1109/65.993218


  31. Wang Y, Varadharajan V: Interaction trust evaluation in decentralized environments. In 5th International Conference on Electronic Commerce and Web Technologies. Spain; 2004:144-152.



Author information


Corresponding author

Correspondence to Abdul Ghafoor.

Additional information

Competing interests

The authors declare that they have no competing interests.


Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License ( https://creativecommons.org/licenses/by/2.0 ), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


About this article

Cite this article

Iltaf, N., Ghafoor, A. & Hussain, M. Modeling interaction using trust and recommendation in ubiquitous computing environment. J Wireless Com Network 2012, 119 (2012). https://doi.org/10.1186/1687-1499-2012-119
