Jonathan De Vita is a computer scientist who specialised in AI and coding as part of his studies at Lancaster University. This article will look at algorithms and a recent study which suggests that a little memory can outweigh a vast amount of time.
An algorithm is a procedure designed to solve a problem or perform a computation. Essentially, an algorithm is a precise set of instructions for carrying out specified step-by-step actions, whether through hardware- or software-based routines.
In computer science, computer programming and mathematics, the term ‘algorithm’ is generally used to describe a process that solves a recurrent problem. Algorithms are widely used across all areas of IT today, playing a major role in automated systems and serving as specifications for performing data processing.
Typically starting from an initial input and instructions describing a specific computation, algorithms are used for a variety of purposes, from sorting datasets to more complicated endeavours such as recommending user content on social media. They work by following a set of rules or instructions to solve a problem or complete a task, and may be expressed in programming languages, natural language, flowcharts, control tables or pseudocode.
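As a concrete illustration, here is a minimal Python sketch of one of the most familiar textbook algorithms, binary search; the function name and sample data are chosen purely for illustration:

```python
def binary_search(items, target):
    """Return the index of target in the sorted list items, or -1 if absent."""
    low, high = 0, len(items) - 1
    while low <= high:
        mid = (low + high) // 2  # inspect the middle element
        if items[mid] == target:
            return mid
        elif items[mid] < target:
            low = mid + 1   # discard the lower half
        else:
            high = mid - 1  # discard the upper half
    return -1

# Each comparison halves the remaining search range, so even a
# million-item sorted list needs at most about twenty steps.
print(binary_search([2, 3, 5, 7, 11, 13, 17], 11))  # -> 4
```

The precise, mechanical character of these steps is what makes an algorithm equally expressible as code, as a flowchart, or as plain-language instructions.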
With algorithms, and computation as a whole, time and memory are the two most fundamental resources. Every algorithm requires time to run, along with memory, also known as space, to store data while it is running. Until very recently, it was believed that the space needed to complete certain tasks grew roughly in proportion to the runtime. However, a recent study has turned this widely held belief on its head, with research from MIT suggesting that it could be possible to transform any algorithm, irrespective of its purpose, into a form that demands far less space.
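Stated in the notation of complexity theory, the headline result of that research (discussed in detail below) says that any computation running in time t can be simulated using only about the square root of that much memory:

```latex
% Williams (2025): any time-t(n) computation can be simulated
% in roughly square-root-of-t(n) space.
\mathsf{TIME}\big[t(n)\big] \subseteq \mathsf{SPACE}\Big[O\big(\sqrt{t(n)\log t(n)}\,\big)\Big]
```

For comparison, the best general-purpose simulation previously known, due to Hopcroft, Paul and Valiant in 1975, saved only a logarithmic factor, using space proportional to t/log t.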
In 2024, computer scientist Ryan Williams made a startling discovery about the relationship between memory and time in computing. His research provided compelling evidence that memory was far more powerful than he and his peers had believed up until that point, suggesting that a small amount of memory is as helpful as a lot of time in all conceivable computations. Since the notion seemed improbable, Williams assumed he had made a mistake and combed back through his proof to identify any errors. However, despite close scrutiny and hours of poring over his results, he could not find a single flaw.
Williams, a theoretical computer scientist at MIT, put it bluntly in a Wired interview: ‘I just thought I was losing my mind.’ Faced with such compelling evidence, however, he began to wonder for the first time whether memory really might be far more powerful than previously assumed.
Over the ensuing months, Williams fleshed out the details, scrutinising every component and soliciting feedback from trusted contemporaries. In February 2025, he published his paper online to widespread acclaim. Avi Wigderson, a theoretical computer scientist at the Institute for Advanced Study in Princeton, New Jersey, described Williams’ research as ‘amazing’ and ‘beautiful’ in a congratulatory email headed with the subject line ‘You blew my mind.’
Algorithm designers have long studied space-time trade-offs for specific tasks such as sorting data; in that setting, even a modest saving in space counts as a significant advance. Williams’ research centred on solving problems with extremely limited space, drawing on the tree evaluation problem pioneered by complexity theorist Stephen Cook and on a strikingly space-efficient algorithm for that problem devised by James Cook and Ian Mertz. His work has established a quantitative gap between the power of memory and the power of time, although, as Williams would be the first to admit, much remains to be done in defining their precise relationship.
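To make the idea of a space-time trade-off concrete, consider the classic task of detecting a duplicate in a list. The Python sketch below (a simple illustration of ours, not drawn from Williams’ paper) contrasts two correct solutions: one spends memory to save time, while the other spends time to save memory:

```python
def has_duplicate_fast(items):
    """O(n) time, O(n) extra space: remember every element seen so far."""
    seen = set()
    for x in items:
        if x in seen:
            return True
        seen.add(x)
    return False

def has_duplicate_small(items):
    """O(n^2) time, O(1) extra space: compare every pair directly."""
    n = len(items)
    for i in range(n):
        for j in range(i + 1, n):
            if items[i] == items[j]:
                return True
    return False

data = [4, 8, 15, 16, 23, 42, 15]
print(has_duplicate_fast(data), has_duplicate_small(data))  # True True
```

Williams’ result is far more general than any single example like this: it shows that for every algorithm, whatever its purpose, the time budget can be traded for a dramatically smaller space budget.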
