
Department of Computer Science and Technology

 

Energy and Environment Group (EEG)

The Energy and Environment Research Group applies computer science to address renewable energy integration, energy demand reduction, and the assessment and management of environmental impact (e.g. climate change, biodiversity loss, deforestation) from anthropogenic activities.

We operate in an interdisciplinary manner, collaborating with climate scientists, ecologists, engineers, lawyers, regulators, and economists, and conducting wide engagement with external partners to effect evidence-based outcomes.

Goal

Our primary goal is to have a measurable impact on tools and techniques for de-risking our future. To do so, we share recent advances at the intersection of computer science, energy, and environment through seminars, workshops, and scientific publications. We also help form collaborations between group members and coordinate interdisciplinary initiatives across University departments.

Membership

EEG members are, in the first instance, faculty members in the Department of Computer Science and Technology and their students. We also invite membership from postdocs, PhD students, lab visitors, and Master’s students, primarily from other departments, as appropriate.

Seminars

A list of talks for the current term can be found below; talks from prior terms are linked from this page. Seminar details can also be found on talks.cam. Recordings from the EEG seminar series are available to watch online (link). We thank the Institute of Computing for Climate Science for sponsoring this series.

Michaelmas 2023

Each entry below lists the date, title, speaker and affiliation, and abstract of the presentation.

September 22
Title: Fast Tagging of Pollinator Field Videos with Convolutional Tsetlin Machines
Speaker: Sachin Mathew, University of Cambridge
Abstract: Entomologists devote a large portion of their time to manually tagging video data from camera traps in order to conduct their research. This is an enormous sink of time, labor, and resources. Automation would greatly reduce the work required to complete this task and would free these researchers to allocate their resources elsewhere. Although the task is difficult because the insects are of comparable scale to the visual noise, the structure of these static-camera videos lends itself to interpretation by sufficiently robust machine learning models.
This work addresses the task of tagging location-specific events within insect camera traps, such as pollination events, at or near real-time speeds by implementing a pipeline of fast video regularization, background subtraction, and machine learning inference using the highly parallelizable and embeddable Convolutional Tsetlin Machine (CTM) architecture. The pipeline's regularization and background subtraction models are compared on event detection rate, pollination image detection rate, and pipeline iteration speed. Through this exploration, a subset of operations was found that tags individual pollination events with reasonable accuracy at fast rates, with outputs easily interpretable by CTMs for an even higher detection rate at the very high speeds achievable in embedded systems.
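
For readers curious about the event-detection stage described above, here is a minimal illustrative sketch (not the speaker's implementation) of how a static camera-trap video can be screened with OpenCV background subtraction before frames are handed to a classifier such as a CTM; the file name and pixel threshold are hypothetical.

```python
# Toy sketch (not the speaker's pipeline): flag candidate insect-activity frames
# in a static camera-trap video using OpenCV background subtraction. Flagged
# frames would then be passed to a classifier (e.g. a Convolutional Tsetlin
# Machine) in the full approach described in the abstract.
import cv2

VIDEO_PATH = "camera_trap.mp4"   # hypothetical input file
MIN_FOREGROUND_PIXELS = 500      # hypothetical activity threshold

cap = cv2.VideoCapture(VIDEO_PATH)
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16)

candidate_frames = []
frame_idx = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)   # crude regularization step
    mask = subtractor.apply(blurred)              # foreground (moving) pixels
    if cv2.countNonZero(mask) > MIN_FOREGROUND_PIXELS:
        candidate_frames.append(frame_idx)        # frame worth classifying
    frame_idx += 1
cap.release()

print(f"{len(candidate_frames)} candidate frames flagged for classification")
```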

September 29
Title: Computational Agroecology: Rethinking Agriculture with State Spaces
Speaker: Barath Raghavan, USC
Abstract: Agriculture has long been central to human civilization. Modern farming practices developed in a long stable period, but that era is now over due to climate change, pest and pathogen evolution, topsoil loss, and fossil fuel depletion. Computing has been used to optimize agricultural systems as-is, but seldom to rethink agriculture itself to meet these challenges. In this talk, I discuss a way of reconceptualizing agriculture using state spaces. I explore how this meta-systematic approach may allow us to design new productive, sustainable, multi-functional, and site-specific farming systems and to reconcile approaches such as precision agriculture and agroecology that are often at odds.

October 6
Title: Measuring small-scale tropical forest disturbance with GEDI
Speaker: Amelia Holcomb, University of Cambridge
Abstract: Forest disturbance, defined as a partial reduction in forest cover that does not result in conversion to non-forested land, has surpassed deforestation by area in the Brazilian Amazon. In addition to causing direct carbon emissions, disturbance also diminishes ecosystem integrity by harming forest structure, even when canopy cover remains. Recent advances using Landsat and Sentinel-1 have improved detection of disturbances at fine spatiotemporal resolution but are so far unable to quantify the changes in forest structure and biomass associated with a detected disturbance. The Global Ecosystem Dynamics Investigation (GEDI), a novel spaceborne LiDAR system, has captured billions of 25-meter diameter footprints measuring forest height, plant area, and understory structure since it began collecting data in 2019. Though there is no guaranteed repeat cycle, GEDI often measures the same location several times; some of these coincident footprints happen to fall before and after a detected disturbance. In this work, we develop a general-purpose open-source pipeline for identifying these locations and use it to find over 7,100 coincident footprint pairs with intermediate disturbance events across the Amazon basin. We also identify a control set of over 34,000 coincident footprint pairs from disturbed areas but without intermediate disturbance events. Analysis of this continent-scale dataset demonstrates that GEDI can detect canopy and biomass losses in non-stand-replacing disturbances as small as 0.09 ha. GEDI’s unique three-dimensional view of forest structure allows us to identify varied effects of different disturbance types, including areas where the upper canopy retains its height but the understory suffers substantial losses. Finally, we model the relationship between Landsat and Sentinel-1 disturbance detection parameters and GEDI-measured percent biomass loss. This is the first step towards a pan-tropical, spatially explicit estimate of carbon losses and structural changes due to forest disturbance.
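
As a rough illustration of the footprint-pairing idea described above, the sketch below (not the authors' pipeline) matches repeat GEDI shots via crude coordinate rounding and checks whether a detected disturbance date falls between two acquisitions; the file and column names are hypothetical.

```python
# Illustrative sketch: pair repeat GEDI footprints that bracket a detected
# disturbance date, and separate them from control pairs with no event.
import pandas as pd

footprints = pd.read_csv("gedi_footprints.csv",
                         parse_dates=["acq_date"])      # footprint-level shots
disturbances = pd.read_csv("disturbance_events.csv",
                           parse_dates=["event_date"])  # optical/radar alerts

# Treat shots in the same ~small grid cell as "coincident" (crude stand-in for
# proper geospatial matching of 25 m footprints).
for df in (footprints, disturbances):
    df["cell"] = df["lat"].round(4).astype(str) + "_" + df["lon"].round(4).astype(str)

pairs = []
for cell, shots in footprints.groupby("cell"):
    if len(shots) < 2:
        continue
    shots = shots.sort_values("acq_date")
    events = disturbances.loc[disturbances["cell"] == cell, "event_date"]
    # Consecutive acquisitions at the same location form a candidate pair.
    for before, after in zip(shots.itertuples(), shots.iloc[1:].itertuples()):
        bracketed = ((events > before.acq_date) & (events < after.acq_date)).any()
        pairs.append({"cell": cell,
                      "before": before.acq_date,
                      "after": after.acq_date,
                      "disturbed": bool(bracketed)})

pairs = pd.DataFrame(pairs)
print(pairs["disturbed"].value_counts())   # disturbed vs. control pair counts
```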

October 13
Title: Creating a high-resolution canopy height map of the Earth
Speaker: Konrad Schindler, ETH Zürich
Abstract: The talk will summarise our efforts to densely map vegetation height over all land surfaces of the Earth. As data sources, we have employed optical imagery from ESA’s Sentinel-2 satellite and sparse waveforms recorded by NASA’s space-borne GEDI laser ranger. These observations were fused with the help of probabilistic deep learning models to obtain a canopy height map with a ground sampling distance of 10 m, as well as an associated map of predictive uncertainty.
Moreover, I will give a short overview of other Earth observation and environmental monitoring activities at ETH’s Chair of Photogrammetry and Remote Sensing.
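
The sketch below is a minimal, hypothetical stand-in for the kind of model described (not the ETH implementation): a small convolutional network regresses canopy height from Sentinel-2 bands and additionally outputs a per-pixel variance, trained with a Gaussian negative log-likelihood against sparse reference heights such as GEDI footprints.

```python
# Minimal sketch: probabilistic canopy-height regression with per-pixel
# predictive uncertainty, supervised only at the sparse pixels where a
# reference (e.g. GEDI) height exists.
import torch
import torch.nn as nn

class HeightRegressor(nn.Module):
    def __init__(self, in_bands: int = 12):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(in_bands, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
        )
        self.mean_head = nn.Conv2d(64, 1, 1)     # canopy height (m)
        self.logvar_head = nn.Conv2d(64, 1, 1)   # log predictive variance

    def forward(self, x):
        h = self.backbone(x)
        return self.mean_head(h), self.logvar_head(h)

model = HeightRegressor()
loss_fn = nn.GaussianNLLLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Dummy batch: 4 Sentinel-2 patches; a mask marks the ~2% of pixels that have
# a reference height, standing in for sparse GEDI supervision.
x = torch.randn(4, 12, 64, 64)
target = torch.rand(4, 1, 64, 64) * 40.0
mask = torch.rand(4, 1, 64, 64) > 0.98

mean, logvar = model(x)
loss = loss_fn(mean[mask], target[mask], logvar[mask].exp())
loss.backward()
optimizer.step()
```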

October 20
Title:
Speaker: Jay Taneja, University of Massachusetts, Amherst, USA
Abstract:

October 27
Title:
Speaker:
Abstract:

November 3
Title: Tokenized Carbon Credits
Speaker: Derek Sorensen, University of Cambridge
Abstract: Carbon credits are one potential tool for climate change mitigation. For various reasons, proof-of-stake (i.e. energy-efficient) blockchains are a natural technological choice for trading carbon credits and offsetting emissions. In this talk, we will discuss tokenized carbon credits, or carbon credits which are distributed, traded, and retired on the blockchain. We will discuss challenges to their efficacy as a tool for mitigating climate change.
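
As a toy illustration of the mint–transfer–retire life cycle that tokenization puts on-chain (this is ordinary Python, not a smart contract or any real registry's API), consider:

```python
# Toy ledger: a tokenized carbon credit is minted when a verified removal or
# avoidance is issued, traded between holders, and "retired" (permanently
# removed from circulation) when it is claimed against emissions.
from dataclasses import dataclass, field

@dataclass
class CarbonCreditLedger:
    balances: dict = field(default_factory=dict)   # holder -> credits (tCO2e)
    retired: dict = field(default_factory=dict)    # holder -> credits retired

    def mint(self, issuer: str, amount: float) -> None:
        self.balances[issuer] = self.balances.get(issuer, 0.0) + amount

    def transfer(self, sender: str, receiver: str, amount: float) -> None:
        if self.balances.get(sender, 0.0) < amount:
            raise ValueError("insufficient credits")
        self.balances[sender] -= amount
        self.balances[receiver] = self.balances.get(receiver, 0.0) + amount

    def retire(self, holder: str, amount: float) -> None:
        if self.balances.get(holder, 0.0) < amount:
            raise ValueError("insufficient credits")
        self.balances[holder] -= amount            # credit leaves circulation
        self.retired[holder] = self.retired.get(holder, 0.0) + amount

ledger = CarbonCreditLedger()
ledger.mint("registry", 100.0)
ledger.transfer("registry", "acme", 40.0)
ledger.retire("acme", 40.0)   # acme claims the offset; the credit cannot be resold
```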

November 10
Title: From open source and open data to “open computation”: a climate science perspective
Speaker: Roly Perera, University of Cambridge
Abstract: Working in the open and making source code and data freely available are essential to modern scientific practice, but don’t by themselves help us understand the complex relationships between data and computational outcomes. In this talk I will present (and demonstrate) techniques from programming languages research which automate the creation of data-driven software able to reveal how outputs were computed from data. Our current prototype can automatically highlight relevant parts of an underlying dataset as visual outputs are selected; extending this to produce computational “explanations” of how those data were aggregated or transformed during the computation of the selected outputs is work in progress. I will close by talking about some longer-term challenges for this idea in the context of climate science, and the prospects for language-based transparency taking us beyond open source and open data to computations which are able to explain how they work.
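
A toy sketch of the underlying idea (not the speaker's system): if values carry the identifiers of the data cells they were derived from, any output can report which parts of the dataset it depends on.

```python
# Toy provenance tracking: each value records the set of input cells it was
# derived from, and that set propagates through arithmetic, so an output can
# "explain" which data it came from.
from dataclasses import dataclass

@dataclass(frozen=True)
class Traced:
    value: float
    sources: frozenset  # identifiers of the raw data cells used

    def __add__(self, other: "Traced") -> "Traced":
        return Traced(self.value + other.value, self.sources | other.sources)

    def __mul__(self, other: "Traced") -> "Traced":
        return Traced(self.value * other.value, self.sources | other.sources)

# Hypothetical dataset: annual temperature anomalies keyed by year.
data = {year: Traced(anom, frozenset({year}))
        for year, anom in [(2020, 1.02), (2021, 0.85), (2022, 0.89)]}

weight = Traced(1.0 / len(data), frozenset())
mean_anomaly = data[2020] * weight + data[2021] * weight + data[2022] * weight

print(mean_anomaly.value)            # the computed output
print(sorted(mean_anomaly.sources))  # [2020, 2021, 2022]: the cells it depends on
```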

November 17
Title:
Speaker: Laura Innice Duncanson, University of Maryland
Abstract:

November 24
Title: Telling tree stories through laser scans and AI
Speaker: Stefano Puliti, Norwegian Institute of Bioeconomy Research
Abstract:

December 1
Title: Batteries Beyond Power Supplies
Speaker: Liang He, University of Colorado Denver
Abstract:

December 8
Title:
Speaker: Milto Miltiadou, University of Cambridge
Abstract:

December 15
Title: The real information in climate simulations
Speaker: Milan Klöwer, Massachusetts Institute of Technology
Abstract:
