Abstract
In high energy physics, graph-based implementations have the advantage of treating the input data sets in a way similar to how they are collected by collider experiments. To expand on this concept, we propose ABCNet, a graph neural network enhanced by attention mechanisms. To exemplify the advantages and flexibility of treating collider data as a point cloud, two physically motivated problems are investigated: quark–gluon discrimination and pileup reduction. The former is an event-by-event classification, while the latter requires a classification score for each reconstructed particle. For both tasks, ABCNet shows improved performance compared to other available algorithms.
Year: 2020 PMID: 32647596 PMCID: PMC7329190 DOI: 10.1140/epjp/s13360-020-00497-3
Source DB: PubMed Journal: Eur Phys J Plus ISSN: 2190-5444 Impact factor: 3.911
Description of each feature used to define a point in the point cloud implementation for quark–gluon classification. The latter two features are the global information added to the network
| Variable | Description |
|---|---|
| Δη | Difference between the pseudorapidity of the constituent and the jet |
| Δφ | Difference between the azimuthal angle of the constituent and the jet |
| log pT | Logarithm of the constituent's transverse momentum pT |
| log E | Logarithm of the constituent's energy E |
| log pT/pT(jet) | Logarithm of the ratio between the constituent's pT and the jet pT |
| log E/E(jet) | Logarithm of the ratio between the constituent's E and the jet E |
| ΔR | Distance in the η–φ space between the constituent and the jet axis |
| PID | Particle type identifier as described in [ |
| m(jet) | Jet mass |
| pT(jet) | Jet transverse momentum |
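The per-constituent features listed in the table can be sketched in code. This is a minimal illustration, assuming constituent and jet kinematics are already available as arrays; the function name and interface are made up here, and the PID and global jet features are omitted:

```python
import numpy as np

def constituent_features(eta, phi, pt, e, jet_eta, jet_phi, jet_pt, jet_e):
    """Build the per-constituent kinematic features from the table.

    eta, phi, pt, e are 1-D arrays over the constituents of one jet;
    jet_* are the scalar kinematics of the jet itself.
    """
    deta = eta - jet_eta                                   # pseudorapidity difference
    dphi = (phi - jet_phi + np.pi) % (2 * np.pi) - np.pi   # azimuth difference, wrapped to (-pi, pi]
    dr = np.hypot(deta, dphi)                              # distance in the eta-phi plane
    return np.stack([
        deta,
        dphi,
        np.log(pt),            # log of constituent pT
        np.log(e),             # log of constituent energy
        np.log(pt / jet_pt),   # pT relative to the jet
        np.log(e / jet_e),     # energy relative to the jet
        dr,
    ], axis=-1)
```

The angular wrap keeps Δφ in (−π, π], which matters for constituents on opposite sides of the ±π boundary.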
Fig. 1 ABCNet architecture used for quark–gluon tagging. Fully connected layer and encoding node sizes are denoted inside “{}”. For each GAPLayer, the number of k-nearest neighbours (k) and heads (H) are given
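The GAPLayer named in the caption combines a k-nearest-neighbour graph with attention-weighted aggregation. A toy NumPy sketch of that idea follows; in the actual layer the attention logits come from learned MLPs, which are replaced here by a simple sum over edge features, so all names and the scoring rule are illustrative only:

```python
import numpy as np

def knn_indices(points, k):
    """Indices of the k nearest neighbours of each point (excluding the point itself)."""
    d2 = np.sum((points[:, None, :] - points[None, :, :]) ** 2, axis=-1)
    np.fill_diagonal(d2, np.inf)           # a point is not its own neighbour
    return np.argsort(d2, axis=1)[:, :k]

def gap_layer(feats, k):
    """Attention aggregation over a k-NN graph (single-head, GAPLayer-style toy)."""
    nbrs = knn_indices(feats, k)           # (N, k) neighbour indices
    edge = feats[nbrs] - feats[:, None, :] # edge features: neighbour minus centre
    scores = edge.sum(axis=-1)             # toy attention logits, one per edge
    att = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)  # softmax over neighbours
    return (att[..., None] * feats[nbrs]).sum(axis=1)  # attention-weighted neighbour sum
```

Each point thus receives a feature vector aggregated from its local neighbourhood, with the attention weights deciding how much each neighbour contributes.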
Comparison between the performance achieved with ABCNet and different available implementations
| | Acc | AUC | 1/ε_B (ε_S = 50%) | 1/ε_B (ε_S = 30%) | Parameters |
|---|---|---|---|---|---|
| ResNeXt-50 | 0.821 | 0.9060 | 30.9 | 80.8 | 1.46M |
| P-CNN | 0.827 | 0.9002 | 34.7 | 91.0 | 348k |
| PFN | – | 0.9005 | – | | 82k |
| ParticleNet-Lite | 0.835 | 0.9079 | 37.1 | 94.5 | |
| ParticleNet | | 0.9116 | | | 366k |
| ABCNet | | | | | 230k |
The uncertainty quoted corresponds to the standard deviation of nine trainings with different random weight initialisation. If the uncertainty is not quoted, then the variation is negligible compared to the expected value
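The figures of merit in the comparison table (AUC and background rejection, i.e. 1/ε_B at a fixed signal efficiency ε_S) can be computed directly from classifier scores. A minimal sketch, with function and argument names assumed here rather than taken from any library:

```python
import numpy as np

def roc_metrics(scores, labels, eff_sig=0.5):
    """AUC and background rejection 1/eps_B at a fixed signal efficiency eps_S."""
    order = np.argsort(-scores)                # sort by descending classifier score
    y = labels[order].astype(float)
    tp = np.cumsum(y) / y.sum()                # signal efficiency at each threshold
    fp = np.cumsum(1 - y) / (1 - y).sum()      # background efficiency at each threshold
    auc = np.trapz(tp, fp)                     # area under the ROC curve
    eps_b = np.interp(eff_sig, tp, fp)         # background eff. at the requested signal eff.
    return auc, 1.0 / eps_b
```

Note that for a very strong classifier ε_B can interpolate to zero at low ε_S, making the rejection diverge; real evaluations use finite test samples where this is handled by the working-point choice.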
Fig. 2 pT-scaled distribution of the jet constituents averaged over all images in the test sample. The leftmost images are the quark (top) and gluon (bottom) jet averages after the pre-processing. The first 5% of the jet constituents with the highest self-attention coefficients for the first and second GAPLayers are shown on the images in the centre and right, respectively
Variable description for each feature used to define a point in the point cloud implementation for the pileup mitigation problem. The latter two features are the global information added to the network
| Variable | Description |
|---|---|
| η | Particle's pseudorapidity |
| φ | Particle's azimuthal angle |
| log pT | Logarithm of the particle's transverse momentum |
| Charge | Boolean flag identifying if the particle is charged |
| log pT/pT(jet) | Logarithm of the ratio between the particle's pT and the jet pT |
| log E/E(jet) | Logarithm of the ratio between the particle's E and the jet E |
| w(PUPPI) | PUPPI weight for the particle |
| SK flag | Boolean flag identifying if the particle passes the SoftKiller selection |
| NPU | Number of pileup interactions |
| NPART | Number of reconstructed particles associated with jets |
Fig. 3 ABCNet architecture used for pileup identification. Fully connected layer and encoding node sizes are denoted inside “{}”. For each GAPLayer, the number of k-nearest neighbours (k) and heads (H) are given
Fig. 4 Distribution of the dijet mass using the different pileup mitigation algorithms (left) and the jet mass resolution (right). A narrower resolution peak means better performance. All distributions are normalised to unity
Resolution width for different pileup mitigation strategies. The resolution width is extracted by fitting the distributions shown in Fig. 4 (right) with a Gaussian function
| Algorithm | Resolution width |
|---|---|
| SoftKiller | 0.022 |
| PUPPI | 0.021 |
| ABCNet |
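The resolution widths quoted above come from fitting the response histograms with a Gaussian. A sketch of such an extraction, assuming SciPy is available; the bin count and initial-guess choices here are arbitrary:

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, amp, mu, sigma):
    """Unnormalised Gaussian used as the fit model."""
    return amp * np.exp(-0.5 * ((x - mu) / sigma) ** 2)

def resolution_width(values, bins=50):
    """Histogram a response distribution, fit a Gaussian, return the fitted |sigma|."""
    counts, edges = np.histogram(values, bins=bins)
    centres = 0.5 * (edges[:-1] + edges[1:])
    p0 = [counts.max(), values.mean(), values.std()]  # initial guess from sample moments
    popt, _ = curve_fit(gaussian, centres, counts, p0=p0)
    return abs(popt[2])
```

Fitting only the Gaussian core (rather than taking the raw sample standard deviation) makes the width estimate robust against non-Gaussian tails in the response.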
Fig. 5 Pearson correlation coefficient (PCC) for each pileup mitigation algorithm for different NPU. ABCNet is trained on (blue) or (orange)
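The PCC shown in Fig. 5 is the standard Pearson correlation coefficient, i.e. the covariance of two observables normalised by their standard deviations. A minimal NumPy version:

```python
import numpy as np

def pcc(x, y):
    """Pearson correlation coefficient between two observables."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    xm, ym = x - x.mean(), y - y.mean()   # centre both samples
    return (xm * ym).sum() / np.sqrt((xm ** 2).sum() * (ym ** 2).sum())
```

A PCC close to 1 between the corrected and the true (no-pileup) observable indicates effective pileup mitigation.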