In recent work I considered discrete neural networks whose activation functions are polymorphisms of finite, discrete relational structures. The general framework I provided was not entirely categorical in nature, but it served as a stepping stone to a categorical treatment of neural nets that are definitionally incapable of overfitting. In this talk I will outline how to view neural nets as categories of functors from certain multicategories to a target multicategory. Moreover, I will show that the results of my PhD thesis allow one to systematically define polymorphic learning algorithms for such neural nets in a manner applicable to any reasonable (read: functorial) finite data structure.
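To fix intuition for the opening notion, here is a toy illustration (not the construction of the talk): an n-ary operation on a finite relational structure is a polymorphism when applying it coordinatewise to tuples in a relation always lands back in the relation. The domain, relation, and function names below are hypothetical choices for the example; the binary operation min on the 3-element chain is monotone and hence a polymorphism of the order relation, while addition mod 3 is not.

```python
from itertools import product

# Illustrative finite structure: the 3-element chain {0, 1, 2}
# with the binary order relation <=, stored as a set of pairs.
A = [0, 1, 2]
R = {(a, b) for a in A for b in A if a <= b}

def is_polymorphism(f, relation, arity):
    """Check that the arity-ary operation f preserves the binary relation:
    for any choice of arity pairs from the relation, applying f to the
    first coordinates and to the second coordinates must again yield
    a pair in the relation."""
    for pairs in product(relation, repeat=arity):
        firsts = f(*(p[0] for p in pairs))
        seconds = f(*(p[1] for p in pairs))
        if (firsts, seconds) not in relation:
            return False
    return True

# min is monotone, so it preserves <= and is a binary polymorphism;
# addition mod 3 breaks the order (e.g. on the pairs (0,1) and (2,2)).
assert is_polymorphism(min, R, 2)
assert not is_polymorphism(lambda x, y: (x + y) % 3, R, 2)
```

In this picture, using only such relation-preserving operations as activations constrains what a network can compute, which is the structural discipline behind the overfitting claim above.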