References

  1. Christopher M. Bishop (2006): Pattern Recognition and Machine Learning. Information Science and Statistics. Springer, New York, doi:10.978.0387/310732.
  2. Filippo Bonchi, Fabio Gadducci, Aleks Kissinger, Paweł Sobociński & Fabio Zanasi (2016): Rewriting modulo symmetric monoidal structure. In: Proceedings of the 31st Annual ACM/IEEE Symposium on Logic in Computer Science - LICS '16, pp. 710–719, doi:10.1145/2933575.2935316.
  3. Robin Cockett, Geoffrey Cruttwell, Jonathan Gallagher, Jean-Simon Pacaud Lemay, Benjamin MacAdam, Gordon Plotkin & Dorette Pronk (2019): Reverse derivative categories. arXiv:1910.07065 [cs, math].
  4. Matthieu Courbariaux, Yoshua Bengio & Jean-Pierre David (2015): BinaryConnect: Training Deep Neural Networks with binary weights during propagations. arXiv:1511.00363 [cs].
  5. Dheeru Dua & Casey Graff (2017): UCI Machine Learning Repository.
  6. Richard O. Duda, Peter E. Hart & David G. Stork (2000): Pattern Classification (2nd Edition). Wiley-Interscience, USA.
  7. Brendan Fong, David I. Spivak & Rémy Tuyéras (2019): Backprop as Functor: A compositional perspective on supervised learning. arXiv:1711.10455 [cs, math].
  8. Bruno Gavranović (2020): Learning Functors using Gradient Descent. Electronic Proceedings in Theoretical Computer Science 323, pp. 230–245, doi:10.4204/EPTCS.323.15. arXiv:2009.06837.
  9. Itay Hubara, Matthieu Courbariaux, Daniel Soudry, Ran El-Yaniv & Yoshua Bengio (2016): Binarized Neural Networks: Training Deep Neural Networks with Weights and Activations Constrained to +1 or -1. arXiv:1602.02830 [cs].
  10. Nathan Jacobson (2012): Basic Algebra I: Second Edition. Courier Corporation.
  11. Alex Krizhevsky (2009): Learning Multiple Layers of Features from Tiny Images. Department of Computer Science, University of Toronto.
  12. Yves Lafont (2003): Towards an algebraic theory of Boolean circuits. Journal of Pure and Applied Algebra 184(2-3), pp. 257–310, doi:10.1016/S0022-4049(03)00069-0.
  13. Yann LeCun, Léon Bottou, Yoshua Bengio & Patrick Haffner (1998): Gradient-Based Learning Applied to Document Recognition. In: Proceedings of the IEEE, pp. 2278–2324, doi:10.1109/5.726791.
  14. Rajat Raina, Anand Madhavan & Andrew Y. Ng (2009): Large-scale deep unsupervised learning using graphics processors. In: Proceedings of the 26th Annual International Conference on Machine Learning - ICML '09. ACM Press, Montreal, Quebec, Canada, pp. 1–8, doi:10.1145/1553374.1553486.
  15. A. Martín del Rey, G. Rodríguez Sánchez & A. de la Villa Cuenca (2012): On the boolean partial derivatives and their composition. Applied Mathematics Letters 25(4), pp. 739–744, doi:10.1016/j.aml.2011.10.013.
  16. Sebastian Ruder (2017): An overview of gradient descent optimization algorithms. arXiv:1609.04747 [cs].
  17. Peter Selinger (2010): A survey of graphical languages for monoidal categories. In: New Structures for Physics, Lecture Notes in Physics 813, Springer, pp. 289–355, doi:10.1007/978-3-642-12821-9_4. arXiv:0908.3347 [math].
  18. David Sprunger & Bart Jacobs (2019): The differential calculus of causal functions. arXiv:1904.10611 [cs].
  19. David Sprunger & Shin-ya Katsumata (2019): Differentiable Causal Computations via Delayed Trace. In: 2019 34th Annual ACM/IEEE Symposium on Logic in Computer Science (LICS). IEEE, Vancouver, BC, Canada, pp. 1–12, doi:10.1109/LICS.2019.8785670.
  20. Erwei Wang, James J. Davis, Peter Y. K. Cheung & George A. Constantinides (2019): LUTNet: Rethinking Inference in FPGA Soft Logic. IEEE International Symposium on Field-Programmable Custom Computing Machines, doi:10.1109/FCCM.2019.00014.
  21. Fabio Zanasi (2017): Rewriting in Free Hypergraph Categories. Electronic Proceedings in Theoretical Computer Science 263, pp. 16–30, doi:10.4204/EPTCS.263.2.
  22. Ivan Zhegalkin (1927): Sur le calcul des propositions dans la logique symbolique [On the calculus of propositions in symbolic logic].
