Time rolls ever onwards, though perhaps a little more strangely in a year like this. Here's what happened in October. If you're interested in getting this newsletter direct to your inbox, feel free to sign up to the mailing list!

This month I focussed on starting a series of posts on fundamental concepts from Computer Science. I have a few articles in the works on this, including the basics of software testing, and some hints and tips on applying these ideas to Machine Learning (ML) software too. Stay tuned.

In the meantime, I've kicked off this thread of articles with two posts looking at a staple of the software professional's toolkit: Object Oriented Programming (OOP). A good understanding of the concepts of OOP can supercharge your ability to design software, and can help you interact with open source software too. Here they are:

Object-Oriented Programming: A Practical Introduction (Part 1)
Whether you’re a fan or not, OOP is a valuable tool in your programming toolkit. It’s also sometimes a little bewildering for new programmers (and some more experienced ones too). This post provides a (brief) practical introduction to OOP concepts.
Object-Oriented Programming: A Practical Introduction (Part 2)
In Part 1 of this mini-series, you saw how OOP concepts can be used to structure and manipulate code. In this part, you’ll see how these ideas are formally defined, and look at a couple of more advanced concepts too.

Now showing on Medium

Besides the above posts, if you're a Medium user, you can now find some of my posts over on Medium in the popular Towards Data Science publication (and soon, elsewhere!). This blog is still going to be my main channel for releasing content, however! You can follow me here:

Mark Douthwaite – Medium
Read writing from Mark Douthwaite on Medium. Applied AI specialist, computer scientist, software engineer. Read more at https://mark.douthwaite.io/.

And by way of example, here's my article introducing Serverless computing ideas:

A Brief Introduction to Serverless Computing
Over the last decade or two, cloud computing has come to dominate many of the skills and processes needed to develop ‘modern’ software. This is increasingly true for adjacent fields too, including…

News and articles from around the web

This month saw some pretty cool news in the world of ML and AI. You may notice a bit of a trend: I've included a couple of examples that capture where I think the long-term value of AI/ML technology to society is likely to come from (spoiler: it's not necessarily BI-like applications).

1. Changing the world, one video call at a time

It's not uncommon to read about how ML and AI rack up huge energy bills in order to train the latest-and-greatest models. Some articles go so far as to point out that training a single model can have the same carbon footprint as five average-sized cars over their lifetimes. There's some truth to that, and it's an important fact to be aware of.

However, it misses a key point. When used effectively, AI (and ML) can enable extremely 'high-leverage' software capabilities – perhaps higher than is possible with traditional software techniques. By enabling new approaches to old problems, it's possible to dramatically reduce the overall carbon footprint of AI/ML in one fell swoop, while also improving the quality of services delivered to those with less-mature digital infrastructure.

A great example of this is NVIDIA's work on image- and video-processing techniques that dramatically reduce the bandwidth requirements for high-quality video calls. Among other things, they're aiming to reduce the incidence of choppy video calls with dodgy sound quality and heavily pixelated faces. The technology is likely to reduce energy requirements too: imagine cutting the energy needed by billions of digital devices in a single step. Here's their announcement video:

2. How to market AI-first products

I am of the opinion that the language around AI is likely to change in future. At the moment, AI/ML is (relatively) new and exciting: hopping on the hype train can give businesses a material advantage (and can often generate a fair bit of business value to boot!). However, I think mature 'AI-first' software offerings are likely to drop references to AI pretty much altogether. This will certainly be true for the current wave of AI tech as it is assimilated into the more general software landscape.

At the end of the day: a good deal of the 'real' value of a product is in the experience. What does this product do that will excite users? That is where current-gen AI tech can really shine. When used carefully, it can enable experiences and productivity gains that'd be hard to imagine with traditional software.

We're in the early stages of this transition to more mature AI products, but there are already some standout examples: companies offering products that could only exist because of breakthroughs in AI technology, but which largely omit to mention that they're AI (or AI-enhanced) products at all. Instead, they let the product features do the talking. Perhaps my favourite of these (from the video alone) is Descript:

3. Putting things into perspective

With many practitioners arriving in the field with little grounding in its history and progress, it's easy to assume everything is pretty much brand new. Moreover, it's easy to lose sight of the long-term goals of the field, and maybe some of the more philosophical questions too. I think it's practically useful for practitioners (in any field) to understand their relative position in the story of that field: it lends perspective on how and why things have evolved the way they have, helps them frame their knowledge and language in light of the field's goals, and hints at where things might be going.

To that end, here's a section of a lecture given by Richard Feynman in the mid-1980s on the question 'Can machines think?'. I think this clip does a good job of answering these sorts of questions, and might help those who struggle to delineate the boundaries of AI and ML. Plus, Feynman was a great communicator, so it's usually worth listening to his lectures anyway!

4. A visualisation tool for exploring embeddings and dimensionality reduction techniques

I'm a big fan of visualisation tools. They're often great for communicating new concepts, and for strengthening my own intuitions too. One topic that's sometimes poorly understood by new entrants to the field of AI is the power and utility of well-constructed embeddings.

Embeddings are special vector representations of data (maybe words, documents, images or rows in tables) that can be created to store rich information about a domain or specific problem. They're commonly 'high-dimensional' vectors too, meaning that visualising them directly as a 2D or 3D plot can be a bit problematic.
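To make that a little more concrete, here's a toy sketch. The three-dimensional 'embeddings' below are made up by hand purely for illustration (real embeddings are learned from data, and typically have hundreds of dimensions), but they show the core idea: related items end up with similar vectors, so geometric similarity reflects semantic similarity.

```python
# Toy example: hand-picked 3D 'embeddings' for a few words. Real embeddings
# are learned from data and typically have hundreds of dimensions.
import numpy as np

embeddings = {
    "cat": np.array([0.9, 0.1, 0.0]),
    "dog": np.array([0.8, 0.2, 0.1]),
    "car": np.array([0.0, 0.9, 0.8]),
}

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine of the angle between two vectors: closer to 1.0 means 'more alike'."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine_similarity(embeddings["cat"], embeddings["dog"]))  # ~0.98: similar
print(cosine_similarity(embeddings["cat"], embeddings["car"]))  # ~0.08: dissimilar
```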

This tool lets you play around with a few dimensionality reduction approaches that – as the name suggests – allow you to take these high-dimensional vectors and render them in a plotting-friendly lower-dimensional space. The tool comes packaged with TensorBoard these days too. It's a great resource for getting a feel for what the various dimensionality reduction techniques do, and for how and why embeddings can be so useful in ML applications.

Embedding projector - visualization of high-dimensional data
Visualize high dimensional data.
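If you'd like to reproduce the basic workflow in code, here's a minimal sketch using PCA from scikit-learn. A random matrix stands in for real learned embeddings here; swap in your own vectors to get a meaningful picture.

```python
# A minimal sketch: project 64-dimensional 'embeddings' down to 2D with PCA.
# The random matrix is a stand-in for real learned embeddings.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(42)
embeddings = rng.normal(size=(1000, 64))  # 1,000 items, 64 dimensions each

pca = PCA(n_components=2)
points_2d = pca.fit_transform(embeddings)  # shape (1000, 2): ready to plot

print(points_2d.shape)
print(f"Variance explained: {pca.explained_variance_ratio_.sum():.1%}")
```

The projector above also supports non-linear techniques like t-SNE; PCA is just the simplest place to start.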

5. A creative way of developing an intuition for 'loss-landscapes'

Loss functions are central to how many ML and AI practitioners are taught to think about their models, and having an intuition for how to improve your latest model's performance is a valuable skill. For basic models, these 'loss landscapes' are relatively uninteresting to look at – basic Linear Regression has a nice bowl-shaped landscape, for example – but for more complex models, the landscape can provide some useful intuition on how you should configure your model.
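To make this concrete, here's a minimal sketch (NumPy only) that evaluates the mean-squared-error loss of a one-dimensional linear model over a grid of parameter values – exactly the bowl-shaped landscape mentioned above. Feed the result into a contour or surface plot and you'll see the bowl.

```python
# A minimal sketch: trace the MSE loss 'landscape' of a 1D linear model
# y = w*x + b over a grid of (w, b) values. For linear regression this
# surface is a convex bowl; deep networks are far more rugged.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=100)
y = 2.0 * x + 0.5 + rng.normal(scale=0.1, size=100)  # true w=2.0, b=0.5

w_grid, b_grid = np.meshgrid(np.linspace(-1, 5, 50), np.linspace(-2, 3, 50))
# Mean squared error at every grid point, via broadcasting over the data.
loss = ((w_grid[..., None] * x + b_grid[..., None] - y) ** 2).mean(axis=-1)

print(loss.shape)  # (50, 50): ready for a contour or surface plot
print(f"Minimum loss on the grid: {loss.min():.4f}")
```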

On that note: I came across the 'A.I. Loss Landscape' project which aims to visualise the loss landscapes of Deep Learning (DL) models. If you're interested in the technical details of how DL models 'learn', then this might be useful. If not, then there are some cool-looking visualisations you might enjoy anyway!

Loss Landscape | A.I deep learning explorations of morphology & dynamics
Explore the morphology and dynamics of deep learning optimization processes and gradient descent with the A.I Loss Landscape project.

Odds and ends

Time for the stragglers. First up: the Amazon Builders' Library, a great resource I came across this month. It's full of information on how to architect and implement cloud software (on AWS, of course). If you're interested in 'high-level' ideas around software architecture and have a bit of experience with AWS, you may find this interesting:

The Amazon Builders’ Library

Finally, I've migrated to Brave, a privacy-focussed web browser. It's faster and more energy-efficient than Chrome, and blocks cookies and trackers by default. I'd recommend checking it out if you haven't already!

Secure, Fast & Private Web Browser with Adblocker | Brave Browser
The Brave browser is a fast, private and secure web browser for PC, Mac and mobile. Download now to enjoy a faster ad-free browsing experience that saves data and battery life by blocking tracking software.

And that's it for this month, thanks for reading!

If you'd like to get this newsletter direct to your inbox, remember to sign up now!