Machine Learning en 50 artículos for the working mathematician (Volume I)

2/2/2024
AUTHOR
Colegio de matemáticas Bourbaki

To learn Data Science, Machine Learning, Quantitative Finance, and of course Artificial Intelligence, mathematics is not an ornament but an indispensable skill: it allows us to better judge the different processes involved in developing data-driven solutions.

Since our founding, we have devoted our content and academic program to improving the mathematical skills of the very broad group of people who work in these fields.

Starting in February 2024, all of our students will receive what we regard as the essential references. We hope this list will be of great help to everyone who is beginning in this marvelous field and wishes to go deeper into it. These are the names of the reference lists we propose:

  • Ciencia de datos en 50 artículos for the working analyst
  • Machine Learning en 50 artículos for the working mathematician
  • Deep Learning en 50 artículos for the working scientist
  • Finanzas Cuantitativas en 50 artículos for the working analyst

In this edition of our newsletter we share the first half of the indispensable articles that a mathematician should know in order to approach Machine Learning.

  1. Hagerup, T. and Rüb, C., 1990. A guided tour of Chernoff bounds. Information Processing Letters, 33(6), pp.305-308.
  2. Cortes, C. and Vapnik, V., 1995. Support-vector networks. Machine Learning, 20, pp.273-297.
  3. Bottou, L. and Bousquet, O., 2007. The tradeoffs of large scale learning. Advances in Neural Information Processing Systems, 20.
  4. Mallat, S.G. and Zhang, Z., 1993. Matching pursuits with time-frequency dictionaries. IEEE Transactions on Signal Processing, 41(12), pp.3397-3415.
  5. Daubechies, I., 1992. Ten lectures on wavelets. Society for Industrial and Applied Mathematics.
  6. LeCun, Y., Touretzky, D., Hinton, G. and Sejnowski, T., 1988, June. A theoretical framework for back-propagation. In Proceedings of the 1988 Connectionist Models Summer School (Vol. 1, pp. 21-28).
  7. Power, A., Burda, Y., Edwards, H., Babuschkin, I. and Misra, V., 2022. Grokking: Generalization beyond overfitting on small algorithmic datasets. arXiv preprint arXiv:2201.02177.
  8. Tishby, N., Pereira, F.C. and Bialek, W., 2000. The information bottleneck method. arXiv preprint physics/0004057.
  9. Zhang, C., Bengio, S., Hardt, M., Recht, B. and Vinyals, O., 2021. Understanding deep learning (still) requires rethinking generalization. Communications of the ACM, 64(3), pp.107-115.
  10. Novikoff, A.B., 1962, April. On convergence proofs on perceptrons. In Proceedings of the Symposium on the Mathematical Theory of Automata (Vol. 12, No. 1, pp. 615-622).
  11. Schäfer, A.M. and Zimmermann, H.G., 2006. Recurrent neural networks are universal approximators. In Artificial Neural Networks–ICANN 2006: 16th International Conference, Athens, Greece, September 10-14, 2006. Proceedings, Part I (pp. 632-640). Springer Berlin Heidelberg.
  12. Hornik, K., Stinchcombe, M. and White, H., 1990. Universal approximation of an unknown mapping and its derivatives using multilayer feedforward networks. Neural Networks, 3(5), pp.551-560.
  13. Zhou, D.X., 2020. Universality of deep convolutional neural networks. Applied and Computational Harmonic Analysis, 48(2), pp.787-794.
  14. Cybenko, G., 1989. Approximation by superpositions of a sigmoidal function. Mathematics of Control, Signals and Systems, 2(4), pp.303-314.
  15. Levy, O. and Goldberg, Y., 2014. Neural word embedding as implicit matrix factorization. Advances in Neural Information Processing Systems, 27.
  16. Candès, E.J., Romberg, J. and Tao, T., 2006. Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information. IEEE Transactions on Information Theory, 52(2), pp.489-509.
  17. Shannon, C.E., 1948. A mathematical theory of communication. The Bell System Technical Journal, 27(3), pp.379-423.
  18. Cover, T. and Hart, P., 1967. Nearest neighbor pattern classification. IEEE Transactions on Information Theory, 13(1), pp.21-27.
  19. Auer, P., Cesa-Bianchi, N. and Fischer, P., 2002. Finite-time analysis of the multiarmed bandit problem. Machine Learning, 47, pp.235-256.
  20. Parberry, I., 1994. Circuit complexity and neural networks. MIT Press.
  21. Bartlett, P.L. and Mendelson, S., 2002. Rademacher and Gaussian complexities: Risk bounds and structural results. Journal of Machine Learning Research, 3(Nov), pp.463-482.
  22. Belkin, M., Hsu, D., Ma, S. and Mandal, S., 2019. Reconciling modern machine-learning practice and the classical bias–variance trade-off. Proceedings of the National Academy of Sciences, 116(32), pp.15849-15854.
  23. Arora, S., Liang, Y. and Ma, T., 2017. A simple but tough-to-beat baseline for sentence embeddings. In International Conference on Learning Representations.
  24. Valiant, L.G., 1984. A theory of the learnable. Communications of the ACM, 27(11), pp.1134-1142.
  25. Belkin, M. and Niyogi, P., 2003. Laplacian eigenmaps for dimensionality reduction and data representation. Neural Computation, 15(6), pp.1373-1396.
