
Tim Muller to research when to trust online reviews


EPSRC has awarded Tim Muller of our security research group an EPSRC First Grant to help in the battle against fake reviews online. The one-year project aims to identify when a rating system for online products or services is sufficiently robust to withstand manipulation, with the ultimate aim of increasing users’ trust in rating systems.

The project, entitled ‘Provably Secure Decisions Based on Potentially Malicious Trust Ratings’, will look at the factors that determine when ratings can be trusted, and at how probability calculations can help.

Whenever you design a system that uses ratings (for example, flagging a comment, or thumbs up/down), there will be attackers trying to manipulate those ratings. Several classes of such attacks exist in theory and in practice. A simple example is attackers using multiple accounts to give themselves good ratings. We might be able to detect attacks after they have occurred and block the attacking accounts, but nothing stops the attacker from creating more accounts and having them masquerade as honest users until the next attack. So the question is: can we use these ratings to make the right decision, even though such attacks will inevitably occur?

Defining ‘the right decision’ is not always trivial. Think of a movie recommendation system: is it meaningful to talk about the right movie to watch? If, on the other hand, you want to download an application whose ratings indicate whether or not it contains malware, then the right decision is obvious: you want to download it if, and only if, it is actually malware-free. We look at cases where the right decision is obvious, and the only difficulty is deducing it from the ratings that are given.

Now, if (and this is a big if!) honest ratings are perfectly accurate and a rating is more likely to be honest than malicious, then the majority of the ratings is expected to be correct. For example, say 70% of the raters are honest and we ask five people; then we expect three or four honest ratings. But we may get unlucky and receive only two honest ratings. What if we want to be 99% sure that we make the right decision? Then it is simply a matter of asking enough people to rate; namely 29.
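The arithmetic here is a binomial tail sum. The sketch below is purely illustrative (it assumes, as above, that raters are independent, each rater is honest with probability 0.7, honest ratings are perfectly accurate, and attackers always lie) and simply computes the chance that the majority vote comes out right:

    from math import comb

    def p_majority_honest(n, p_honest=0.7):
        """Probability that a strict majority of n independent raters is honest,
        i.e. that majority voting gives the right answer when attackers always lie."""
        need = n // 2 + 1  # smallest number of honest raters that forms a majority
        return sum(comb(n, k) * p_honest ** k * (1 - p_honest) ** (n - k)
                   for k in range(need, n + 1))

    print(p_majority_honest(5))  # roughly 0.84, so five raters fall well short of 99%

Raising the panel size drives this probability towards 1, which is why a large enough panel can meet any fixed confidence target.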

For that simple example, we implicitly assumed that attackers always lie. What if they don't? In that case the number of correct ratings can only go up, so the strategy of following the majority can only improve. Thus, we can say confidently that the probability of making the right choice is at least 99%, no matter what the attackers do. Therefore, asking 29 people to rate and following the majority vote is ‘1%-robust’. When asking fewer people to rate (27, the largest odd number below 29, avoiding ties), there exists a strategy for the attacker that pushes the probability of making the wrong decision above 1% (1.17% if the attacker always lies). Thus, asking 29 people is the minimum required for 1%-robustness; we call this optimality. Finally, stability refers to the fact that, as long as honesty is more likely than maliciousness, following the majority is the best option.
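The same calculation can be turned into a robustness check. The sketch below (again an illustration only: independent raters, a 70% chance of honesty, perfectly accurate honest ratings, and always-lying attackers as the worst case) computes the worst-case error for a given panel size and searches for the smallest odd panel meeting a target; the exact figures it reports depend on these modelling assumptions rather than on the project's own model.

    from math import comb

    def worst_case_error(n, p_honest=0.7):
        """Probability that majority voting goes wrong when every malicious rater lies,
        which is the worst an attacker can do in this one-shot setting."""
        need = n // 2 + 1          # dishonest raters needed to outvote the honest ones
        p_malicious = 1 - p_honest
        return sum(comb(n, k) * p_malicious ** k * p_honest ** (n - k)
                   for k in range(need, n + 1))

    def smallest_robust_panel(eps, p_honest=0.7):
        """Smallest odd panel size whose worst-case error is at most eps."""
        n = 1
        while worst_case_error(n, p_honest) > eps:
            n += 2                 # odd panel sizes only, to avoid ties
        return n

    print(worst_case_error(27))          # worst-case error with 27 raters
    print(smallest_robust_panel(0.01))   # minimum odd panel size for 1%-robustness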

When users interact multiple times, simple majority schemes cease to be optimal. Furthermore, ‘always lying’ is no longer the worst case; we must consider attackers who sometimes tell the truth. The upside of multiple interactions is that robustness becomes achievable even when honest users are outnumbered. Intuitively, this is because fake ratings have a non-zero probability of being identified as such, which decreases the posterior probability of honesty for attackers and increases it for honest users. Effectively, we slowly start trusting honest raters more. Finally, introducing the notion that honest users are not perfect, and may make mistakes, lowers the effectiveness of trust.
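One way to picture this trust-building effect is a simple Bayesian update once the ground truth behind a rating becomes known. The sketch below is purely illustrative: the prior of 0.7, the honest accuracy of 0.9 and the attacker accuracy of 0.2 are assumed values chosen for the example, not parameters from the project.

    def update_honesty(prior, rating_was_correct,
                       honest_accuracy=0.9, attacker_accuracy=0.2):
        """Posterior probability that a rater is honest, given whether their
        rating turned out to match the ground truth (illustrative parameters)."""
        p_obs_if_honest = honest_accuracy if rating_was_correct else 1 - honest_accuracy
        p_obs_if_attacker = attacker_accuracy if rating_was_correct else 1 - attacker_accuracy
        evidence = p_obs_if_honest * prior + p_obs_if_attacker * (1 - prior)
        return p_obs_if_honest * prior / evidence

    belief = 0.7                                # prior probability of honesty
    for correct in [True, True, False, True]:   # an example rating history
        belief = update_honesty(belief, correct)
        print(round(belief, 3))
    # Correct ratings push the belief up and incorrect ones push it down, so over
    # repeated interactions honest raters gradually earn more trust than attackers.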

Tim's goal during his EPSRC First Grant is to find relationships between all these parameters, and effectively identify when robustness of ratings can be achieved.