
Department of Computer Science and Technology


In Praise of Undergraduate Research

8 August 2019

In my last post I discussed the Janus automatic binary parallelisation tool that my postdoc, Kevin, has developed. At VEE earlier this year we had another paper on Janus, this time extending it to extract other forms of parallelism: automatic vectorisation for data-level parallelism and software prefetching for memory-level parallelism. We show how these schemes are applied to binaries in the context of Janus (with a neat trick for dealing with bounds-checking code when inserting prefetches to arrays) and evaluate them together. I’m not aware of any other work that tries to extract all three forms of parallelism at once. However, what I liked best about this paper was not the techniques, nor the results, but the fact that the two passes...
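To make the prefetching side concrete, here is a minimal hand-written C sketch of the idea (my illustration of software prefetching in general, not Janus's actual binary-level output; the function names and the DIST constant are invented for the example, and __builtin_prefetch is a GCC/Clang extension):

    #include <stddef.h>

    #define DIST 16  /* prefetch look-ahead distance; assumed here, tuned per machine */

    long sum_indirect(const long *data, const int *idx, size_t n)
    {
        long sum = 0;
        for (size_t i = 0; i < n; i++) {
            /* Guard the look-ahead so the prefetched address stays within
             * the array; spotting and reusing bounds checks like this when
             * rewriting existing binary code is the kind of problem the
             * trick mentioned above addresses. */
            if (i + DIST < n)
                __builtin_prefetch(&data[idx[i + DIST]], 0 /* read */, 3);
            sum += data[idx[i]];
        }
        return sum;
    }

The prefetch is advisory, so the transformation never changes the program's result; the only question is whether the guarded look-ahead pays for itself, which is why the distance needs tuning.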



Janus: Statically-Driven and Profile-Guided Automatic Dynamic Binary Parallelisation

18 February 2019

One of the themes of my research has been, and continues to be, the exploitation of parallelism in its many forms. I’ve looked into data-level parallelism by improving the performance of SLP by, for example, reducing the number of instructions that are vectorised, and (spoiler alert for a future publication) I have a PhD student working on speculative vectorisation. With Sam Ainsworth, formerly my PhD student, now a postdoc, I have published research that exploits memory-level parallelism within the compiler, within the architecture, and across both with a programmable prefetcher. We’ve also looked into taking advantage of parallelism for error detection. However, the first work I did in this area, and the kind of work...
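For readers unfamiliar with SLP (superword-level parallelism), the transformation packs groups of isomorphic scalar operations on adjacent data into single vector instructions. A hand-written before-and-after sketch using x86 SSE intrinsics (illustrative only, not the output of the compiler pass discussed here):

    #include <xmmintrin.h>  /* x86 SSE intrinsics */

    /* Before SLP: four isomorphic scalar adds on adjacent elements. */
    void add4_scalar(float *a, const float *b)
    {
        a[0] += b[0];
        a[1] += b[1];
        a[2] += b[2];
        a[3] += b[3];
    }

    /* After SLP: the four adds are packed into one vector add. */
    void add4_vector(float *a, const float *b)
    {
        __m128 va = _mm_loadu_ps(a);          /* pack the four adjacent loads */
        __m128 vb = _mm_loadu_ps(b);
        _mm_storeu_ps(a, _mm_add_ps(va, vb)); /* one instruction replaces four */
    }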



Minute Madness on Program Parallelisation

25 May 2016

Today was the annual Wheeler Lecture at the Computer Laboratory, and before the main event, a talk by Andrew Herbert, there was a Minute Madness where people from across the Lab, ranging from MPhil students through to professors, talked for one minute about their research with a single slide as a prop. My slide and something approximating the words I used are below.

“Hello! My group works on ways of making applications go faster, through a technique called program parallelisation.

If you look on the left of the slide, the red wavy arrow represents a regular sequential application with a single thread of execution within it. This means that instructions execute one...



Alias Analysis in HELIX

21 December 2015

One of the most important parts of our HELIX compiler is the data dependence analysis we run on the compiler’s IR to determine which instructions are independent of each other. You can read more about HELIX in general in our original CGO 2012 paper (click through my publications page to get free access to the ACM version).

HELIX’s initial data dependence pass is split into two phases, and it’s the memory alias analysis stage that is most interesting. This has the job of identifying the locations in memory that are read and written by each instruction so that we can respect all data dependences within the loops we parallelise. Since alias analysis is not precise, we need to...
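To see why precision matters, here is a toy C example (mine, not taken from HELIX): the two loop bodies are textually identical, but only the first can safely be parallelised, because only there can the compiler prove that the memory accesses are independent.

    /* With 'restrict', dst and src are guaranteed not to alias, so every
     * iteration is independent and the loop can be parallelised. */
    void scale_independent(float *restrict dst, const float *restrict src, int n)
    {
        for (int i = 0; i < n; i++)
            dst[i] = 2.0f * src[i];
    }

    /* Without alias information the compiler must assume dst may overlap
     * src: e.g. if dst == src + 1, then writing dst[i] overwrites
     * src[i + 1], which the next iteration reads, creating a
     * cross-iteration data dependence. A conservative analysis therefore
     * has to keep this loop sequential. */
    void scale_maybe_aliased(float *dst, const float *src, int n)
    {
        for (int i = 0; i < n; i++)
            dst[i] = 2.0f * src[i];
    }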