Somehow, it is that time again. Here's what happened on the blog (and elsewhere!) in September 2020.

This month I've been a little infatuated with Streamlit. I went from spending all of 30 minutes skimming the documentation and announcement to being buried in it for a few days straight. I wrote a short post on how to build and deploy Streamlit apps on Google Cloud:

Deploying Streamlit Apps to GCP
Streamlit is a minimal, modern data visualization framework that’s rapidly becoming the go-to data-app framework in the Python ecosystem. This post introduces Streamlit, and shows you how to securely and scalably deploy your Streamlit apps with Google App Engine.
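If you haven't come across Streamlit before, a whole 'data app' can be a handful of lines of Python. Here's a minimal sketch (my own toy example, not the one from the post) to give you a flavour:

```python
# app.py – a minimal Streamlit app (illustrative only)
import numpy as np
import pandas as pd
import streamlit as st

st.title("Hello, Streamlit")
st.write("A tiny demo: plot some random data.")

data = pd.DataFrame(np.random.randn(100, 2), columns=["a", "b"])
st.line_chart(data)
```

You run it locally with `streamlit run app.py`; getting it onto App Engine is then largely a matter of packaging this up with an entrypoint that runs that same command on the port App Engine expects – the post walks through the details.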

A couple of months ago I open-sourced Xanthus, a Deep Learning (DL) based recommendation model library. At the time, I promised I'd run some benchmarks to see how good it really was. The good news: it seems quite good! Here's the post:

How Good is Xanthus?
Xanthus is a Deep Learning (DL) library built on top of TensorFlow, using the Keras API to implement various neural recommendation model architectures. This post benchmarks the models implemented in Xanthus against some popular ‘classic’ matrix factorisation models.

News and articles from around the web

Time for some of my favourite articles from around the web this month. I think these capture some pretty interesting trends for ML-driven tech:

1. Learning how to learn: self-improving optimisers

One of the great early breakthroughs of the DL movement was establishing a viable approach to replacing hand-crafted input features (feature engineering) with learned functions that perform this task as part of a model. Put differently: this capability lets DL models figure out which inputs are useful, and how they can be transformed to help a model achieve good results. This has been particularly important in the speech and computer vision world, where feature engineering was one of the most time-consuming and 'brittle' tasks practitioners needed to perform.

So what's this paper about? Well, there's another bit of hand-tuning that is sometimes problematic for DL practitioners: tweaking the training of the model itself (in DL-speak: selecting and tuning the 'optimizer'). This paper outlines work on designing and learning models that learn how best to train other models (and themselves). This radically changes how models are trained and could dramatically reduce the barrier to entry (i.e. time, skill, maybe cost) for some applications of DL. Could this be as impactful as the early DL breakthroughs? Possibly!

Tasks, stability, architecture, and compute: Training more effective learned optimizers, and using them to train themselves
Much as replacing hand-designed features with learned functions has revolutionized how we solve perceptual tasks, we believe learned algorithms will transform how we train models. In this work we focus on general-purpose learned optimizers capable of training a wide variety of problems with no user-…
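To give a rough feel for the idea (this is my own toy sketch, emphatically not the paper's method): a hand-designed optimizer is a fixed update rule, whereas a learned optimizer replaces that rule with a parameterised function whose own parameters are meta-trained by measuring how well the unrolled training goes.

```python
import numpy as np

# A hand-designed optimizer: a fixed, human-chosen update rule.
def sgd_update(params, grads, lr=0.01):
    return params - lr * grads

# A "learned" optimizer: the update rule is itself a tiny parameterised model.
# Here theta weights two simple gradient features; the paper's learned
# optimizers are far richer neural networks with per-parameter state.
def learned_update(params, grads, theta):
    features = np.stack([grads, np.sign(grads)], axis=-1)  # shape (n, 2)
    return params + features @ theta                        # theta: shape (2,)

def inner_loss(params):
    return np.sum((params - 3.0) ** 2)  # a toy quadratic 'task'

# Meta-objective: unroll a few training steps with the learned rule, then
# score how good the final parameters are.
def unrolled_loss(theta, steps=20):
    params = np.zeros(5)
    for _ in range(steps):
        grads = 2.0 * (params - 3.0)  # gradient of the toy loss
        params = learned_update(params, grads, theta)
    return inner_loss(params)

# Crude meta-optimisation of theta by hill-climbing (purely illustrative).
best_theta, best = np.zeros(2), np.inf
for _ in range(200):
    candidate = best_theta + 0.05 * np.random.randn(2)
    score = unrolled_loss(candidate)
    if score < best:
        best_theta, best = candidate, score
print(best_theta, best)
```

The interesting (and hard) part the paper tackles is doing this at scale: meta-training across thousands of diverse tasks, keeping the unrolled optimisation stable, and producing an optimizer general enough to train models it has never seen – including itself.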

2. TensorFlow open-sources deep learning recommender tools

I'm a bit of a fan of recommendation systems – or perhaps more generally: the opportunity ML provides to change how people interact with machines by configuring the machines to their needs. Maybe we'll have less of this pesky programming nonsense in future – and maybe some alternatives to ubiquitous screens to boot.

Anyway, I've worked on a few recommendation systems professionally, tinkered with recommender models privately too, and try to keep up to speed with the field. Excitingly, the folks over at Google Brain have open-sourced a load of tools and models they've developed on top of TensorFlow 2. This is pretty big news: you can now use 'state-of-the-art' tooling and models in your own projects for free. Impressive stuff.

Introducing TensorFlow Recommenders
Introducing TensorFlow Recommenders, a library for building flexible and powerful recommender models.
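If you want a feel for what using the library looks like, here's a rough sketch of a minimal two-tower retrieval model in the style of the TFRS tutorials. The feature names, vocabularies, and (commented-out) data pipeline are placeholders you'd swap for your own data:

```python
import tensorflow as tf
import tensorflow_recommenders as tfrs

# Placeholder vocabularies – in practice, build these from your data.
user_ids = ["user_1", "user_2", "user_3"]
item_ids = ["item_1", "item_2", "item_3"]

# Two 'towers': one embeds users, the other embeds items.
user_model = tf.keras.Sequential([
    tf.keras.layers.StringLookup(vocabulary=user_ids),
    tf.keras.layers.Embedding(len(user_ids) + 1, 32),
])
item_model = tf.keras.Sequential([
    tf.keras.layers.StringLookup(vocabulary=item_ids),
    tf.keras.layers.Embedding(len(item_ids) + 1, 32),
])

class RetrievalModel(tfrs.Model):
    def __init__(self):
        super().__init__()
        self.user_model = user_model
        self.item_model = item_model
        candidates = tf.data.Dataset.from_tensor_slices(item_ids).batch(128)
        self.task = tfrs.tasks.Retrieval(
            metrics=tfrs.metrics.FactorizedTopK(candidates=candidates.map(item_model))
        )

    def compute_loss(self, features, training=False):
        user_embeddings = self.user_model(features["user_id"])
        item_embeddings = self.item_model(features["item_id"])
        return self.task(user_embeddings, item_embeddings)

model = RetrievalModel()
model.compile(optimizer=tf.keras.optimizers.Adagrad(0.1))
# model.fit(interactions, epochs=3)  # interactions: a tf.data.Dataset of
#                                    # {"user_id": ..., "item_id": ...} dicts
```

The two towers map users and items into a shared embedding space, and the retrieval task trains them so that observed user–item pairs end up close together – which is exactly the setup you need for fast candidate retrieval at serving time.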

3. NVIDIA buys Arm to (maybe) create a true 'AI-first' hardware company

NVIDIA are a large US computing company known for their graphics hardware (GPUs). Arm is a British computing company known for their ARM CPU designs. The latter has also developed a strong presence in the world of Edge Computing (typically low-power, on-device computing) for personal devices and the Internet of Things, while the former has become the de facto supplier of GPU hardware for various ML/AI applications in the cloud.

This acquisition would allow NVIDIA to span both 'The Edge' and 'The Cloud', letting them target emerging trends in on-device compute (e.g. giving advanced ML features to privacy-aware consumers by keeping private data on-device) while continuing to power enterprise-scale ML in the cloud. There are understandably some competition concerns, but if this were to go through it would be huge for the commercial world of AI.

NVIDIA to Acquire Arm for $40 Billion, Creating World’s Premier Computing Company for the Age of AI
NVIDIA and SoftBank Group Corp. today announced a definitive agreement under which NVIDIA will acquire Arm Limited from SBG and the SoftBank Vision Fund in a transaction valued at $40 billion.

4. Addressing the 'Causal Problem' in ML

There has been a lingering problem in ML – one only heightened by the current rush towards DL-focussed research – and that is the problem of causality. Statistical models (and by extension, most modern ML approaches) do not have a 'concept' of causality built into them. They are learned descriptions of the input data: they do not (currently) capture the cause-and-effect structure of a problem. Instead, they indicate that things are (likely to be) related, but not how (causally speaking). This limits the explainability and applicability of ML models in some use-cases.
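To make that concrete, here's a tiny simulation (my own illustration): ice cream sales and sunburn are both driven by sunshine, so a purely associational model will happily use one to predict the other – but intervening on ice cream sales changes nothing, which is exactly the distinction such a model can't express.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# A hidden common cause ('confounder'): hours of sunshine.
sunshine = rng.normal(size=n)
ice_cream_sales = 2.0 * sunshine + rng.normal(size=n)
sunburn = 1.5 * sunshine + rng.normal(size=n)

# Observationally, the two are strongly associated...
print(np.corrcoef(ice_cream_sales, sunburn)[0, 1])  # ~0.75

# ...but intervening on sales (a 'do' operation: we set sales ourselves,
# severing their dependence on sunshine) leaves sunburn untouched.
forced_sales = rng.normal(size=n)
sunburn_after = 1.5 * sunshine + rng.normal(size=n)
print(np.corrcoef(forced_sales, sunburn_after)[0, 1])  # ~0.0
```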

However, there's a vocal group of advocates – spearheaded by Judea Pearl, a highly respected if occasionally controversial (research-wise) figure in the Computer Science world – pushing to develop approaches to 'Causal Inference' (CI). This 'new world' of CI would require some pretty fundamental changes to how the ML world currently operates, but could unlock some serious potential if progress can be made. Here's a blog post from Judea Pearl on directions he'd like to pursue in this area.

5. The neuromorphic computing comeback continues

As the name suggests, neuromorphic computing aims to develop hardware and software that mimic the structure and learning capabilities of the brain more closely than existing ML hardware/software does. The benefits of such an approach are believed to be (as you might expect) improved performance and efficiency on certain types of task.

In particular, so-called 'spiking' neurons are observed in the brain, yet their dynamics have thus far proven unruly in the current generation of ML technology: it has been difficult to train models with these properties effectively. This paper introduces an approach that allows practitioners to implement and train spiking neural networks, (potentially) resolving a long-standing challenge in the field. This could see a shift in the predominant forms of DL in use, and potentially usher in new, more efficient ML hardware too.

EventProp: Backpropagation for Exact Gradients in Spiking Neural Networks
We derive the backpropagation algorithm for spiking neural networks composed of leaky integrate-and-fire neurons operating in continuous time. This algorithm, EventProp, computes the exact gradient of an arbitrary loss function of spike times and membrane potentials by backpropagating errors in time…
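For the curious: a leaky integrate-and-fire neuron – the building block the paper works with – is easy to simulate. The membrane potential leaks back towards rest, integrates the input current, and emits a discrete 'spike' (then resets) when it crosses a threshold. A rough sketch of the standard textbook dynamics (not the paper's code, and a crude discrete-time approximation of its continuous-time setting):

```python
import numpy as np

# Leaky integrate-and-fire neuron, simulated with simple Euler steps.
dt, steps = 1e-3, 1000        # 1 ms steps, 1 second of simulation
tau, v_rest = 20e-3, 0.0      # membrane time constant (20 ms), resting potential
v_thresh, v_reset = 1.0, 0.0  # spike threshold and post-spike reset value
current = 1.2                 # constant input current (arbitrary units)

v, spikes = v_rest, []
for step in range(steps):
    # Leak towards rest plus integration of the input current.
    v += dt * (-(v - v_rest) + current) / tau
    if v >= v_thresh:         # threshold crossing: emit a spike and reset
        spikes.append(step * dt)
        v = v_reset

print(f"{len(spikes)} spikes in 1 s, first at t = {spikes[0]:.3f} s")
```

The discrete, all-or-nothing nature of those spikes is precisely what makes ordinary backpropagation awkward here, and why an exact-gradient method like EventProp is a notable result.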

Other bits and pieces

And now for some miscellaneous pieces.

First up, here's an interesting piece from Benedict Evans on how Amazon's profits stack up:

Amazon’s profits, AWS and advertising — Benedict Evans
The bigger Amazon gets, the more it’s worth reading the accounts. Does AWS subsidise the whole thing? Is the revenue $250bn - or $450bn? And is that ad business just a footnote, or is it bringing in more cash than AWS?

Next, ever wonder how those nice-looking code snippets are made for Twitter posts (and elsewhere)? Snippets like this:

Snapshots like this are made with Carbon.

Well, this one is made using Carbon, a nice little website for rendering sharp, internet-friendly code snippets. I somehow didn't realise this, so I thought I'd share it in case anyone else had missed it too!

Carbon
Carbon is the easiest way to create and share beautiful images of your source code.

And that's it for this month, thanks for reading!

If you'd like to get this newsletter direct to your inbox, remember to sign up now!