Equivariance can enhance the data efficiency of machine learning models by incorporating prior knowledge about a problem.
Thanks to their flexibility and generality, steerable CNNs are a popular design choice for equivariant networks.
By leveraging concepts from harmonic analysis, these networks model symmetries through specific constraints on their learnable weights or filters.
This framework enables the practical implementation of a wide variety of equivariant architectures - e.g. networks equivariant to most Euclidean isometry groups, including E(3), E(2), and their subgroups.
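Concretely, for a G-steerable convolution the constraint takes a well-known form from the steerable CNN literature (notation here is illustrative): a kernel \kappa mapping an input feature field of type \rho_{\text{in}} to an output field of type \rho_{\text{out}} must satisfy

$$\kappa(g \cdot x) \;=\; \rho_{\text{out}}(g)\,\kappa(x)\,\rho_{\text{in}}(g)^{-1} \qquad \text{for all } g \in G,$$

and harmonic analysis provides an explicit basis for the space of kernels solving this linear constraint.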
However, unknown or imperfect symmetries can sometimes lead to overconstrained weights and suboptimal performance.
This challenge has motivated the study of strategies for incorporating softer symmetry priors into these models.
In the second half of this talk, we will discuss a novel probabilistic approach to learning the degrees of equivariance in steerable CNNs.
The method replaces the equivariance constraint on the weights with an expectation over a learnable distribution on the group; this expectation is computed analytically by leveraging the distribution's Fourier decomposition.
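As a sketch of the idea (using the illustrative notation above; the exact construction is the subject of the talk), the hard constraint can be relaxed by averaging an unconstrained kernel \tilde{\kappa} under a learnable distribution p_\theta over the group:

$$\kappa(x) \;=\; \mathbb{E}_{g \sim p_\theta}\!\left[\,\rho_{\text{out}}(g)\,\tilde{\kappa}(g^{-1} \cdot x)\,\rho_{\text{in}}(g)^{-1}\,\right].$$

If p_\theta is the uniform (Haar) distribution over G, this averaging recovers exact equivariance; a p_\theta concentrated near the identity leaves the kernel essentially unconstrained, and intermediate distributions yield partial degrees of equivariance. Parameterizing p_\theta through its Fourier coefficients is what makes the expectation tractable in closed form.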
Link to join virtually: https://cam-ac-uk.zoom.us/j/87421957265
This talk is being recorded. If you do not wish to be seen in the recording, please avoid sitting in the front three rows of seats in the lecture theatre. Any questions asked will also be included in the recording. The recording will be made available on the Department’s webpage.