We consider the “Weighted Approximate-Rank Pairwise” (WARP) loss, as introduced in the WSABIE paper of Weston et al. (2011, see references), in the context of making recommendations using implicit feedback data, where it has been shown several times to perform excellently. For the sake of discussion, consider the problem of recommending items to users, given a model that assigns a score to every user-item pair and recommends the highest-scoring items.
WARP considers each observed user-item interaction $(u, i)$ in turn, and makes an update to the model parameters that pushes item $i$ higher up the ranked list of items for user $u$.
Minimising the rank?
Ideally we would like to make updates to the model parameters that minimised the rank of item $i$ for user $u$, i.e. the number of other items that the model scores at least as highly as $i$ for that user.
The problem with the rank is that, while it does depend on the model parameters, this dependence is not continuous (the rank being integer-valued!), so it is not possible to speak of gradients. What can be done instead? The approach of the authors is to derive a differentiable approximation to the logarithm of the rank, and to minimise this instead.
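To make the problem concrete, here is a minimal sketch of computing the exact rank of an observed item. The dot-product (matrix factorisation) scoring and the variable names are my own illustrative assumptions, not anything from the paper; the point is simply that the result is an integer, and stays constant under small perturbations of the parameters.

```python
import numpy as np

def exact_rank(user_vector, item_vectors, observed_item):
    """Number of other items scored at least as highly as the observed item.

    Dot-product (matrix factorisation) scoring is assumed purely for
    illustration; the result is an integer, so it has no useful gradient
    with respect to the model parameters.
    """
    scores = item_vectors @ user_vector           # score of every item for this user
    observed_score = scores[observed_item]
    others = np.delete(scores, observed_item)     # exclude the observed item itself
    return int(np.sum(others >= observed_score))

# Tiny example: one user vector, five item vectors, observed item 2.
rng = np.random.default_rng(0)
user = rng.normal(size=8)
items = rng.normal(size=(5, 8))
print(exact_rank(user, items, observed_item=2))
```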
Derivation: approximating the (log of the) rank
WARP has been shown several times to perform very well for implicit feedback recommendation. However, the derivation of the approximation of the log of the rank used in WARP, as outlined in the WSABIE paper, is nonsense. I can only think that the authors arrived at WARP in another way. Let’s look at it more closely. In the following:
- $f_u(i)$ is the score assigned by the model to item $i$ for user $u$;
- $\operatorname{rank}_u(i) = \sum_{j \neq i} \mathbb{1}\big[f_u(j) \geq f_u(i)\big]$ is the rank of item $i$ in the list of all items, ordered by descending score for user $u$;
- $L$ is some function that defines the error as a function of the rank. In the WSABIE paper, $L$ is approximately the natural logarithm (for the derivation below, however, it doesn’t matter what $L$ is).
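For reference, here is my paraphrase of the chain of manipulations from the WSABIE paper, rewritten in the notation above; the questionable step is marked with an asterisk (*):

$$
L\big(\operatorname{rank}_u(i)\big)
= L\big(\operatorname{rank}_u(i)\big) \cdot
  \frac{\sum_{j \neq i} \mathbb{1}\big[f_u(j) \geq f_u(i)\big]}{\operatorname{rank}_u(i)}
\;\overset{(*)}{\approx}\;
\sum_{j \neq i}
  \frac{L\big(\operatorname{rank}^1_u(i)\big)\,\big|1 - f_u(i) + f_u(j)\big|_+}{\operatorname{rank}^1_u(i)},
$$

where $\operatorname{rank}^1_u(i) = \sum_{j \neq i} \mathbb{1}\big[1 + f_u(j) > f_u(i)\big]$ is the margin-penalised rank and $|t|_+ = \max(t, 0)$ is the hinge loss.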
The most obvious problem with the derivation is the approximation marked with an asterisk (*). At this step, the authors approximate each indicator function $\mathbb{1}\big[f_u(j) \geq f_u(i)\big]$ by the hinge loss $\big|1 - f_u(i) + f_u(j)\big|_+$ (replacing the rank by the margin-penalised rank $\operatorname{rank}^1_u(i)$ along the way). The hinge is a standard convex surrogate for the indicator, but it is not close to it in any quantitative sense: the indicator only ever takes the values 0 and 1, while the hinge loss is unbounded above.
To appreciate why the derivation is nonsense, however, you have to notice that it has nothing to do with $L$: exactly the same manipulations go through for any function of the rank whatsoever, so they can not explain why (approximately) the logarithm of the rank, in particular, is a sensible thing to minimise.
Optimisation
WARP considers each observed interaction $(u, i)$ in turn and performs a gradient step on the hinge loss $\big|1 - f_u(i) + f_u(j)\big|_+$ for a single sampled negative item $j$, weighted by $L$ of an estimate of $\operatorname{rank}_u(i)$.
The naive approach to computing $\operatorname{rank}_u(i)$ would be to score every item in the catalogue for user $u$ at every update, which is far too expensive when the number of items is large. Instead, WARP samples negative items $j$ uniformly at random until it finds one that violates the margin, i.e. with $1 + f_u(j) > f_u(i)$; if $N$ samples were needed, the rank is estimated as $\lfloor (|\mathcal{I}| - 1) / N \rfloor$, where $|\mathcal{I}|$ is the total number of items.
However it is still the case that the updates become more expensive as training proceeds: the better the model ranks the observed items, the more samples are needed before a violating negative item is found.
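Here is a sketch of the sampling step. It is a simplified illustration under my own assumptions (dot-product scoring, $L(k) = \log(1 + k)$, and no actual parameter update shown); it is not the WSABIE or LightFM implementation.

```python
import numpy as np

def warp_sample(user_vector, item_vectors, observed_item, max_trials=100, rng=None):
    """Sample negative items until one violates the margin, as in WARP.

    Returns a violating item (or None) together with the loss weight
    L(estimated rank), where the rank is estimated as
    floor((num_items - 1) / num_trials).  Dot-product scoring and
    L(k) = log(1 + k) are illustrative assumptions, not the paper's exact choices.
    """
    rng = rng or np.random.default_rng()
    num_items = item_vectors.shape[0]
    observed_score = item_vectors[observed_item] @ user_vector

    trials = 0
    while trials < max_trials:
        j = int(rng.integers(num_items))
        if j == observed_item:
            continue                               # not a negative item; resample
        trials += 1
        if 1.0 + item_vectors[j] @ user_vector > observed_score:
            estimated_rank = (num_items - 1) // trials
            return j, np.log1p(estimated_rank)     # weight for the hinge-loss gradient step
    return None, 0.0                               # no violator found: skip this update

# One sampled update weight for a random user and observed item 3.
rng = np.random.default_rng(1)
user_vectors = rng.normal(size=(4, 8))
item_vectors = rng.normal(size=(50, 8))
print(warp_sample(user_vectors[0], item_vectors, observed_item=3, rng=rng))
```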
WARP optimises for item-to-user recommendations
With its negative-sampling technique, WARP optimises for recommending items to a user. In particular, the transposed problem of recommending users to items is not trained for. I wonder if some extra uplift could be obtained by training for both problems simultaneously.
Normalising for the total number of items
With the optimisation stated as above, the learning rate will need to be re-tuned for datasets that have different numbers of items, since the gradient weighting $L(\operatorname{rank}_u(i))$ grows with the size of the catalogue (the rank can be as large as $|\mathcal{I}| - 1$). A simple remedy is to normalise the weighting by its maximum possible value, i.e. to use
$$\frac{L(\operatorname{rank}_u(i))}{L(|\mathcal{I}| - 1)},$$
which ranges between 0 and 1.
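A tiny numerical illustration of this, assuming the harmonic-sum form $L(k) = \sum_{s=1}^{k} 1/s$ (approximately the natural logarithm): the raw weight grows with the catalogue size, while the normalised weight stays between 0 and 1.

```python
import numpy as np

def L(k):
    """Harmonic-sum rank loss L(k) = sum_{s=1..k} 1/s, approximately log(k)."""
    return float(np.sum(1.0 / np.arange(1, k + 1))) if k > 0 else 0.0

for num_items in (1_000, 1_000_000):
    rank = num_items // 10                      # an item ranked in the top 10%
    raw = L(rank)                               # grows with the size of the catalogue
    normalised = raw / L(num_items - 1)         # always between 0 and 1
    print(f"{num_items:>9} items: raw weight {raw:.2f}, normalised {normalised:.2f}")
```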
Implementations
There are two implementations of WARP for recommendation that I know of, both in Python:
- LightFM, written by Maciej Kula. Works well. Also implements BPR with uniform sampling and WARP k-OS (which I’ve not investigated yet). A minimal usage sketch follows this list.
- MREC, written by Levy and Jack at Mendeley, has a matrix factorisation recommender trained using WARP. I’ve not tried this one out yet.
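For what it’s worth, here is a minimal LightFM sketch showing the WARP loss being selected; the MovieLens fetcher and the hyperparameters are just for illustration.

```python
from lightfm import LightFM
from lightfm.datasets import fetch_movielens
from lightfm.evaluation import precision_at_k

# Small built-in dataset, purely for illustration.
data = fetch_movielens(min_rating=4.0)

# loss='warp' selects the WARP loss discussed above.
model = LightFM(no_components=32, loss='warp', learning_rate=0.05)
model.fit(data['train'], epochs=10, num_threads=2)

print('precision@10:', precision_at_k(model, data['test'], k=10).mean())
```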
References
Jason Weston, Samy Bengio and Nicolas Usunier, WSABIE: Scaling Up To Large Vocabulary Image Annotation, IJCAI 2011.