Before you begin...

I know some folks just want to dive straight into the code. If that's you, here you go:

markdouthwaite/xanthus: Neural recommendation models in Python, using TensorFlow 2.0 & Keras.

If you're more interested in figuring out what exactly all this 'Neural Recommender' stuff actually means, and maybe see how it can work in a basic example, you're in luck – read on!

What is a recommendation system?

At this point, recommendation systems are pretty much ubiquitous in the digital world. If you aren't using one to help manage your customers' journeys on your online store, service or app, then you're increasingly in the minority, and you're almost certainly losing out on a fair chunk of cash, one way or another. So what can a recommendation system do?

One way of thinking about these systems is as a specialised, passive form of search. When a customer arrives on a site with a recommendation system, the site can instantly show the customer personalised results for content they are most likely to enjoy, engage with and ultimately pay for in some fashion. This can increase revenues while also improving a customer's experience of the site, often creating a virtuous loop in which the customer is more likely to re-engage with the service or product down the line.

To trundle out some classic nuggets of wisdom from 'industry statistics': nearly five years ago, Netflix valued the contribution of its recommendation system to its bottom line at $1 billion. Given how much Netflix has grown in the last five years, it's probably safe to assume this figure has increased quite a bit too. The canonical example of the effect of recommendation systems on commercial performance, though, is Amazon, which has estimated that a full 35% of its purchases come through its recommendation services. Similar numbers are commonly cited by others using these types of systems. That brings us to the next question:

What is Xanthus?

Well, Xanthus isn't a recommendation system. That may come as a bit of a surprise given all the preamble, but stick with it. Xanthus is a software package built with the aim of translating state-of-the-art (SOTA) modelling techniques described in the academic literature into an easy-to-use recommender model framework. In other words, it provides the tools to build the models that sit at the heart of a recommendation system, but not the infrastructure, business logic and other bits and pieces that constitute 'the full thing'.

The next logical question is probably: why bother? First off, it's a bit of fun. Deep Learning frameworks and their communities have come a long way in the last few years, and in such a fast-moving domain it's easy to be left behind on the latest advancements if you take your eye off the ball for even a few months. I've been looking for an excuse to use TensorFlow 2.0 (one of the pre-eminent Deep Learning frameworks) in a personal project for a while now, and after a couple of years' experience building recommendation systems as part of my day job (using other Machine Learning (ML) techniques), trying my hand at translating SOTA academic work into something a little more tangible for a passing user seemed like a good way to 'stretch my legs' on these topics.

Digging a bit deeper

So why dig into Deep Learning? Neural recommendation systems have become the de facto standard for many of the big players in the world of search, recommendation and personalisation. Some of the most prolific users of the technology make quite extraordinary (but potentially accurate) claims about its effectiveness with respect to 'traditional' recommendation approaches.

To give you some idea of what 'traditional' means in this field, though: it's pretty much anything more than a few years old. For example, Netflix's esteemed first (and arguably most talked-about) recommendation system was based on these traditional techniques. It was first prototyped sometime in 2009-10, by some accounts only became fully operational in 2016, and this traditional ML approach is still worth billions of dollars to the company.

Despite the incredible success of this system, Amazon claims their latest neural recommendation approaches are twice as good as these traditional approaches. So frankly that is why. I want to see if these claims stack up on some benchmark datasets. If you check back in over the next few months, I'll be publishing the results of these experiments too.

Now then, justifications over, how about a little code?

Starting at the beginning

Let's take a look at how to get started with Xanthus. Arguably the canonical example for all recommendation problems is the problem of movie recommendations. Fortunately, there's a classic curated dataset from GroupLens that provides exactly what you'll need: the MovieLens dataset.

For the rest of the post, the 'small' MovieLens dataset of 100,000 movie ratings will be used. This dataset gives you an anonymous user identifier and a list of all the movies each of these users rated on the MovieLens service. If you're feeling adventurous, you could try the code below with the 1 million, 10 million or even 27 million ratings datasets. You might want to be prepared to go get a coffee or three while it runs, though.

Before you continue, you'll want to install Xanthus. The package is available either on GitHub or through PyPI. Practically, this means you'll need to open a new terminal and run:

pip install xanthus

Just like any other Python package. Now, open a new Jupyter notebook, and in the first cell, you can run:

from xanthus.datasets import download
download.movielens(version="latest-small", output_dir="data")

This will download and unzip a copy of MovieLens for your soon-to-exist Neural Recommender model. Next up, you'll want to load this data. Time for pandas to make an appearance:

import pandas as pd

ratings = pd.read_csv("data/ml-latest-small/ratings.csv")
movies = pd.read_csv("data/ml-latest-small/movies.csv")

Next, you'll need to get your data into a format ready to be used by Xanthus. To do this, you'll need to rename the columns to the names Xanthus expects. You can do this with:

ratings = ratings.rename(columns={"userId": "user", "movieId": "item"})

With this done, you might find it helpful to convert the item column (formerly movieId) in your data to correspond to more human-readable labels (i.e. the names of the movies!). You can do this with pandas as follows:

title_mapping = dict(zip(movies["movieId"], movies["title"]))
ratings.loc[:, "item"] = ratings["item"].apply(lambda _: title_mapping[_])
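As an aside, the same lookup can be written with pandas' Series.map, which is vectorised and arguably a touch more idiomatic. Here's a self-contained sketch with a tiny stand-in frame (one difference to note: map leaves NaN for any id missing from the mapping, where the apply/lambda version would raise a KeyError):

```python
import pandas as pd

# Tiny stand-in frames (not the real MovieLens data) to show Series.map.
movies = pd.DataFrame({"movieId": [1, 2], "title": ["Toy Story (1995)", "Jumanji (1995)"]})
ratings = pd.DataFrame({"user": [1, 1], "item": [1, 2], "rating": [4.0, 5.0]})

# Build the id -> title lookup and apply it in one vectorised pass.
title_mapping = dict(zip(movies["movieId"], movies["title"]))
ratings["item"] = ratings["item"].map(title_mapping)
```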

For reasons beyond the scope of this post, you might want to drop low ratings from your dataset (how mysterious!). This can be achieved with:

ratings = ratings[ratings["rating"] > 3.0]
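In case the mystery is nagging at you: one common reason for this filter, and likely the one here, is to treat the problem as implicit feedback, where ratings above 3.0 are kept as positive 'interactions' and everything else is discarded. A toy illustration with a stand-in frame:

```python
import pandas as pd

# Stand-in data (not the real MovieLens frame) showing the effect of the
# filter: only ratings strictly above 3.0 survive as positive interactions.
ratings = pd.DataFrame({
    "user": [1, 1, 2, 2],
    "item": ["a", "b", "a", "c"],
    "rating": [5.0, 2.0, 4.0, 3.0],
})
positives = ratings[ratings["rating"] > 3.0]  # keeps the 5.0 and 4.0 rows
```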

Now you're ready to get Xanthus in on the action!

Modelling time

As any good ML practitioner will tell you, you need to at least create some sort of train/test split in your data. This helps you evaluate your models on problems for which they haven't 'seen the answers', which in turn helps you figure out if you've actually made a good model that can generalise well to other problems, or a poor model that will only give sensible answers to problems for which the answers are already known (i.e. why bother in the first place!).

For recommendation problems, creating a train/test split has a few additional considerations too. Again, these are beyond the scope of this post, but if you're interested, make sure to check the Xanthus documentation for more information. Anyway, Xanthus provides some utilities to quickly create train/test splits using a 'leave one out' splitting policy. There's a bunch of material on this approach, but if you're interested, here's a quick overview from Wikipedia. You can create your new split datasets with:

from xanthus.utils import create_datasets
train_dataset, test_dataset = create_datasets(ratings, policy="leave_one_out")
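If you're wondering what a 'leave one out' policy actually does here, the idea is to hold out each user's most recent interaction for testing and train on the rest. A minimal pandas sketch of that idea (an illustration only, not Xanthus's implementation):

```python
import pandas as pd

# Illustrative leave-one-out split (not Xanthus's implementation): each
# user's most recent interaction is held out for testing; the rest trains.
interactions = pd.DataFrame({
    "user": [1, 1, 1, 2, 2],
    "item": ["a", "b", "c", "a", "c"],
    "timestamp": [10, 20, 30, 15, 25],
})

latest = interactions.groupby("user")["timestamp"].transform("max")
test = interactions[interactions["timestamp"] == latest]   # one row per user
train = interactions[interactions["timestamp"] != latest]  # everything else
```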

Now it's time to train a model! There are a few different model architectures provided by Xanthus (as described in the package's motivating paper). In order of complexity, these are the Generalized Matrix Factorization (GMF), Multilayer Perceptron (MLP) and Neural Matrix Factorization (NeuMF) architectures. You can import the 'baby' neural recommender model of the set, the GMF model, with:

from xanthus.models import GeneralizedMatrixFactorizationModel as GMFModel

model = GMFModel(
    fit_params=dict(epochs=10, batch_size=256),
    n_factors=16,
    negative_samples=4,
)
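If you're curious what the GMF architecture actually computes, here's a rough numpy sketch of its scoring step, as described in the Neural Collaborative Filtering paper (this is an intuition-building illustration with made-up weights, not Xanthus's internals): each user and item gets an n_factors-dimensional latent vector, and a user-item pair is scored by a weighted element-wise product squashed through a sigmoid.

```python
import numpy as np

# Illustrative GMF scoring (a sketch, not Xanthus code): in a trained model
# these arrays would be learned; here they're random for demonstration.
rng = np.random.default_rng(42)
n_users, n_items, n_factors = 5, 7, 16

user_factors = rng.normal(size=(n_users, n_factors))  # user embeddings
item_factors = rng.normal(size=(n_items, n_factors))  # item embeddings
output_weights = rng.normal(size=n_factors)           # output layer weights

def gmf_score(user_id: int, item_id: int) -> float:
    interaction = user_factors[user_id] * item_factors[item_id]  # element-wise product
    logit = interaction @ output_weights
    return float(1.0 / (1.0 + np.exp(-logit)))  # sigmoid -> score in (0, 1)

score = gmf_score(0, 3)
```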

For the purposes of this post, the exact meaning of these arguments is going to be skipped over here. Again, if you're interested there's a more detailed 'Getting Started' guide in the Xanthus docs. To train your new model, simply run:

model.fit(train_dataset)

You should see TensorFlow boot up, and the model should train for 10 iterations (epochs). You now have a trained (albeit very imperfect) neural recommender model. Now for the fun bit: seeing what it predicts. To do this, you'll need to make use of a couple of other Xanthus utilities:

from xanthus.evaluate import he_sampling

_, test_items, _ = test_dataset.to_components(shuffle=False)
users, items = he_sampling(test_dataset, train_dataset, n_samples=200)

Once again, a 'proper' evaluation of this recommender model is beyond the scope of this post, but suffice it to say: Xanthus provides some tools for that too. To generate some predictions, you can use the items sampled from the he_sampling function above. This essentially draws n_samples from the dataset that each user in the dataset has not interacted with, and appends a single item that the user has interacted with. In other words, it lets you recommend items to a user that a user has not yet seen/rated (if you exclude the 'viewed' element). To get the recommendations, you can run:

recommended = model.predict(users=users, items=items[:, 1:], n=3)
# 'encoder' is the dataset's encoder, used to map integer IDs back to user
# and item labels - see the Xanthus docs for how to get hold of it.
recommended_df = encoder.to_df(users, recommended)
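To build some intuition for what the sampling step is doing under the hood, here's a toy sketch of the He et al. evaluation protocol described above (the names and numbers are illustrative, and this is not Xanthus's actual he_sampling implementation):

```python
import random

# Toy sketch of the He et al. sampling protocol: for each user, pair one
# held-out positive item with n items the user has never interacted with.
random.seed(42)
all_items = set(range(100))
seen = {0: {1, 5, 9}, 1: {2, 3}}   # items each user has interacted with
held_out = {0: 9, 1: 3}            # the single held-out positive per user

def sample_candidates(user: int, n_samples: int = 10) -> list:
    # Draw negatives only from items the user has never seen.
    negatives = random.sample(sorted(all_items - seen[user]), n_samples)
    return [held_out[user]] + negatives  # positive first, negatives after

candidates = sample_candidates(0)
```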

Some example recommendations generated on my machine are as follows:

User | Movie 1                                               | Movie 2                           | Movie 3
38   | Star Trek: Generations (1994)                         | Jaws (1975)                       | Men in Black (a.k.a. MIB) (1997)
338  | Star Wars: Episode V - The Empire Strikes Back (1980) | Terminator 2: Judgment Day (1991) | Blade Runner (1982)
567  | Shutter Island (2010)                                 | Seven (a.k.a. Se7en) (1995)       | Goodfellas (1990)

Not too bad, eh? Getting truly great results from these models takes a little work, but if you're up for it, you should be able to reproduce SOTA results with Xanthus.

That's it for this post, thanks for reading! Make sure to follow (star) the Xanthus project on GitHub if you want to follow future updates!
