Precision Machine Learning for HEP

In recent years, the High Energy Physics (HEP) community has been extensively exploring Machine Learning (ML) techniques for a wide variety of tasks, and a large number of ideas, applications, and tools are being published as a result. This is true not only within the experimental collaborations, but also in the phenomenology and formal theory communities.

Moreover, as we head towards the High Luminosity era of the Large Hadron Collider (HL-LHC), in which unprecedented amounts of highly complex data will need to be simulated, collected, and analyzed, building reliable and more efficient methods, techniques, and workflows for HEP is becoming essential. As the community has already realized, ML plays a crucial role in this development. In particular, ML, together with new frontiers in hardware acceleration, offers a potential solution to meet the computing demands of simulating and reconstructing the products of the collisions, and will also be essential for developing novel strategies for triggering and reconstructing data, as well as for the statistical analysis, interpretation, and preservation of such data. This constitutes a new field of research that effectively complements the HL-LHC physics program. Furthermore, building on the greatly enhanced precision expected at the HL-LHC, dedicated ML methods will open up great opportunities for data-driven approaches, e.g. anomaly detection, data quality monitoring, and efficient background estimation.

However, to ensure a systematic implementation of ML methods in HEP workflows, their properties and capabilities must be carefully studied on complex, high-dimensional data, and their ability to match the required precision, typically much higher than that of industrial and “real-life” applications, must be assessed. This program, which we refer to as “Precision ML”, cannot be separated from the development of novel techniques for hardware acceleration, the design of reliable quality metrics, and the proper assessment of the relevant uncertainties. We believe that the joint effort of experts from the INFN theory and experimental communities can help shape this Precision ML program and contribute to it.

Staff:
Riccardo Torre
Simone Marzani
Andrea Coccaro (ATLAS Collaboration)
Francesco Armando di Bello (ATLAS Collaboration)
Fabrizio Parodi (ATLAS Collaboration)
Carlo Schiavi (ATLAS Collaboration)
Federico Sforza (ATLAS Collaboration)

Postdocs:
Marco Letizia

PhD students:
Samuele Grossi