Supported by the U.S. National Science Foundation (PHY-1417326, PHY-1719914) and the National Natural Science Foundation of China (11465018)

Published under licence by the Chinese Physical Society, the Institute of High Energy Physics of the Chinese Academy of Sciences, the Institute of Modern Physics of the Chinese Academy of Sciences, and IOP Publishing Ltd

As part of a recent analysis of exclusive two-photon production of μ^{+}μ^{−} pairs at the LHC, the CMS experiment used di-lepton data to obtain an “effective” photon-photon luminosity. We show how the CMS analysis of their 8 TeV data, along with some assumptions about the likelihood for events in which the proton breaks up to pass the selection criteria, can be used to significantly constrain the photon parton distribution functions, such as those from the CTEQ, MRST, and NNPDF collaborations. We compare the data with predictions using these photon distributions, as well as the new LUXqed photon distribution. We study the impact of including these data in the NNPDF2.3QED, NNPDF3.0QED and CT14QEDinc fits. We find that these data provide a useful and complementary cross-check on the photon distribution, which is consistent with the LUXqed prediction while suggesting that the NNPDF photon error band should be significantly reduced. Additionally, we propose a simple model for describing the two-photon production of μ^{+}μ^{−} pairs at the LHC. Using this model, we constrain the number of inelastic photons that remain after the experimental cuts are applied.

Article funded by SCOAP^{3}

With the start of the 13 TeV run of the Large Hadron Collider (LHC), more precise theory calculations are needed to correctly interpret the present and upcoming experimental data. Calculations at next-to-next-to-leading order (NNLO) in quantum chromodynamics (QCD) are becoming standard, so that the theoretical uncertainty can be reduced to the same order as the experimental uncertainty. At this level of precision, the leading-order electroweak correction is also important, because the square of the strong coupling, α_{s}^{2}, is comparable in size to the electromagnetic coupling α.

One particular electroweak correction of interest is that due to photons coming from the proton in the initial state. This requires the inclusion of the photon as a parton inside the proton, with an associated parton distribution function (PDF). This is necessary both for consistency when electroweak corrections are included and because photon-initiated processes can become significant at high energies. The treatment of the photon PDF in a global analysis was first performed by the MRST collaboration. The CT14QEDinc photon PDF is defined, at its initial scale Q_{0}, as the sum of the (inelastic) CT14QED photon PDF plus the “elastic” photon contribution obtained from the equivalent photon approximation (EPA), with the initial inelastic photon momentum fraction allowed to range from 10^{−4} to around 0.4%. Recently, a new determination of the photon PDF, LUXqed, was obtained from the proton structure functions measured in lepton-proton scattering.

With the large amounts of data to be collected at the LHC, photon-initiated processes will become increasingly important. For instance, a precise determination of the quartic couplings of photons and W bosons will require good knowledge of the photon-photon luminosity.

In this paper we consider the CMS studies of exclusive two-photon production of μ^{+}μ^{−} pairs.

Recently, the CMS experiment at the LHC has performed measurements of the exclusive two-photon production of μ^{+}μ^{−} pairs, and of W^{+}W^{−} pairs identified in the e^{±}μ^{∓} final state, which identified the γγ → W^{+}W^{−} signal. In the μ^{+}μ^{−} analysis, CMS compared the number of μ^{+}μ^{−} events (away from the Z resonance) with no additional associated charged tracks to that predicted from purely elastic scattering, after subtracting possible quark-initiated contamination estimated from the μ^{+}μ^{−} and e^{±}μ^{∓} data samples.

Since these predicted cross sections use their respective extracted photon-photon luminosities, they include both elastic and inelastic contributions. Therefore, they can be used to constrain the photon PDFs if we make some assumptions about the fraction of dissociative events that pass the no-additional-charged-tracks cut. For this comparison, we calculate the total cross section for pp → μ^{+}μ^{−}^{1)} via the photon-photon fusion process γγ → μ^{+}μ^{−}, with the proper experimental cuts applied: σ_{elastic} is calculated using EPA photon PDFs from both colliding protons; σ_{single-dissociative} is obtained by using one EPA photon PDF and one inelastic photon PDF; while σ_{double-dissociative} is calculated using inelastic photon PDFs from both colliding protons. The inelastic photon PDF is taken as the difference between an inclusive photon PDF (such as the CT14QEDinc, NNPDF3.0QED and LUXqed photon PDFs) and the EPA photon PDF. We note that the CT14QEDinc PDF includes both elastic and inelastic contributions to the photon PDF, and can be well approximated by the linear sum of the elastic component from the EPA and the inelastic component from CT14QED at any given scale above the initial scale Q_{0} = 1.3 GeV. The EPA photon PDF is shown as the black curve, while the two CT14QEDinc photon PDFs start at the scale Q_{0} = 1.3 GeV with an initial inelastic photon momentum fraction p_{0}^{γ} of either 0% or 0.11%.
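The counting implied by this decomposition can be sketched in a few lines. The function name, the placeholder cross-section values, and the survival probabilities eps_sd and eps_dd below are ours for illustration only; they are not CMS numbers or fitted quantities.

```python
# Hedged sketch of the cross-section model described above: the rate that
# survives the no-additional-charged-tracks requirement is the elastic
# piece plus the single- and double-dissociative pieces, each weighted by
# an (unknown) survival probability. All numerical values are placeholders.

def effective_xsec(sig_el, sig_sd, sig_dd, eps_sd, eps_dd):
    """Cross section passing the no-additional-charged-tracks cut."""
    return sig_el + eps_sd * sig_sd + eps_dd * sig_dd

# Placeholder inputs (arbitrary units); survival probabilities in [0, 1].
sigma_eff = effective_xsec(sig_el=1.0, sig_sd=1.5, sig_dd=0.8,
                           eps_sd=0.3, eps_dd=0.1)
```

Setting eps_sd = eps_dd = 1 recovers the full elastic-plus-dissociative rate, while eps_sd = eps_dd = 0 recovers the purely elastic prediction.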

(color online) Various elastic (EPA), inelastic (CT14QED) and inclusive (CT14QEDinc, LUXqed, NNPDF3.0) photon PDF distributions at (a)

(color online) The photon momentum fraction inside the proton as a function of the scale Q, down to the initial scale Q_{0}. Below that scale, extrapolation is used.

Using this approximation we can calculate the predicted cross section as a function of the initial inelastic photon momentum fraction p_{0}^{γ}.

(color online) CT14QEDinc predictions with the initial inelastic photon momentum fraction varying from 0% to 0.3%, compared with the CMS result at 8 TeV.

We can also calculate the same cross section using the other photon PDFs (assumed to be inclusive) in the same manner, as a function of

(color online) Various PDF set predictions (with their PDF uncertainty ranges) compared to the CMS result at 8 TeV, at the 68% CL.

To facilitate the comparison of theory predictions of various production rates induced by the photon-photon fusion process at the LHC, we compute the photon-photon parton luminosity for each of the PDF sets, defined as:

dL_{γγ}/dM_{γγ} = (2M_{γγ}/s) ∫_{τ}^{1} (dx_{1}/x_{1}) f_{γ}(x_{1}, μ_{F}) f_{γ}(τ/x_{1}, μ_{F}),

where τ ≡ x_{1}x_{2} = M_{γγ}^{2}/s, x_{1} and x_{2} are the momentum fractions of the photons from each proton, and the factorization scale μ_{F} is set to the invariant mass M_{γγ} of the photon pair.
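The luminosity integral above can be evaluated numerically. The sketch below uses toy power-law photon densities (placeholder shapes, not fitted PDFs; a real calculation would query a PDF library) and a midpoint rule in ln x; the split into elastic, single- and double-dissociative pieces follows from writing the inclusive density as EPA plus inelastic.

```python
import math

def f_epa(x, q):
    """Toy elastic (EPA) photon density -- placeholder shape, not a fit."""
    return 0.010 * (1.0 - x) ** 4 / x

def f_incl(x, q):
    """Toy inclusive photon density -- placeholder shape, not a fit."""
    return 0.016 * (1.0 - x) ** 4 / x

def f_inel(x, q):
    # Inelastic component = inclusive minus elastic, as in the text.
    return f_incl(x, q) - f_epa(x, q)

def lumi(fa, fb, tau, q, n=2000):
    """int_tau^1 (dx/x) fa(x, q) fb(tau/x, q), midpoint rule in ln x.
    The overall 2*M/s prefactor of dL/dM is omitted here."""
    lo = math.log(tau)
    h = -lo / n
    total = 0.0
    for i in range(n):
        x = math.exp(lo + (i + 0.5) * h)
        total += fa(x, q) * fb(tau / x, q) * h
    return total

# tau = M^2/s; q stands in for the factorization scale (unused by the toys).
tau, q = 1e-4, 100.0
L_el = lumi(f_epa, f_epa, tau, q)           # elastic
L_sd = 2.0 * lumi(f_epa, f_inel, tau, q)    # single-dissociative (x2: either proton)
L_dd = lumi(f_inel, f_inel, tau, q)         # double-dissociative
L_incl = lumi(f_incl, f_incl, tau, q)       # inclusive photon density on both sides
```

By bilinearity of the integrand, the inclusive-on-both-sides luminosity equals the sum of the elastic, single- and double-dissociative pieces, which is a useful consistency check of the decomposition.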

(color online) Photon-photon luminosity predicted by various photon PDFs for an invariant mass of 1.5 TeV to 4.5 TeV, at the LHC with 13 TeV collider energy. The lower error curves of the NNPDF2.3QED and NNPDF3.0QED predictions are below the range shown in the plot.

Next, we examine the impact of the CMS data on the CT14QEDinc, NNPDF2.3QED and NNPDF3.0QED photon PDFs. We adopt the Bayesian PDF reweighting technique to study their effect. The idea of reweighting PDFs was originally proposed by Giele and Keller: each PDF replica is assigned a weight determined by the chi-square (χ^{2}) value of the comparison between the new data and the theory prediction from that replica. The central value of any observable is then the weighted average of the values extracted from each of the PDF replicas, and its PDF error is given by the weighted root-mean-square (RMS) of those values.
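A minimal sketch of this reweighting step, assuming the commonly used NNPDF-style weight w_k ∝ (χ²_k)^{(n−1)/2} exp(−χ²_k/2) for n data points (the exact weight formula used in the analysis is not reproduced here); the replica values and χ² numbers are invented for illustration.

```python
import math

def reweight(chi2, n_data):
    """Bayesian replica weights from each replica's chi^2 against the new
    data, w_k ~ (chi2_k)^((n_data-1)/2) * exp(-chi2_k/2), normalized so
    the weights sum to the number of replicas."""
    logw = [0.5 * (n_data - 1) * math.log(c) - 0.5 * c for c in chi2]
    m = max(logw)                          # guard against underflow
    w = [math.exp(l - m) for l in logw]
    norm = len(w) / sum(w)
    return [norm * wi for wi in w]

def weighted_mean_rms(values, weights):
    """Weighted average and weighted RMS error over the replicas."""
    n = len(values)
    mean = sum(w * v for w, v in zip(weights, values)) / n
    var = sum(w * (v - mean) ** 2 for w, v in zip(weights, values)) / n
    return mean, math.sqrt(var)

# Invented chi^2 values for 4 replicas against a hypothetical 5-point data set.
weights = reweight([4.0, 6.0, 20.0, 9.0], n_data=5)
mean, rms = weighted_mean_rms([1.0, 1.1, 2.0, 1.2], weights)
```

Replicas that describe the new data poorly (large χ²) receive small weights, so their contribution to the weighted average and to the RMS error band is suppressed.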

The results of including the CMS data to reweight the different photon PDF replicas are shown in the following figures. The uncertainty bands are constructed from weighted percentiles of the replica set (e.g. R_{16}, where R_{16} is the replica at the 16^{th} percentile).
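For the percentile bands mentioned above (e.g. R_{16}), one simple convention is to sort the replicas by value and accumulate their weights until the requested fraction of the total weight is reached. The sketch below uses that convention, which is one of several possible definitions; the replica values are invented for illustration.

```python
def weighted_percentile(values, weights, pct):
    """Replica value at the given weighted percentile: sort replicas by
    value and accumulate weights until pct percent of the total weight
    is reached (one simple convention among several)."""
    pairs = sorted(zip(values, weights))
    target = pct / 100.0 * sum(weights)
    acc = 0.0
    for v, w in pairs:
        acc += w
        if acc >= target:
            return v
    return pairs[-1][0]

# With unit weights this reduces to an ordinary percentile of the replicas.
vals = [0.9, 1.0, 1.1, 1.3, 2.0]
wts = [1.0] * 5
r16 = weighted_percentile(vals, wts, 16)   # lower edge of a central band
r84 = weighted_percentile(vals, wts, 84)   # upper edge of a central band
```

After reweighting, replicas with large weights pull the percentile edges toward themselves, so a band built from (R_{16}, R_{84}) automatically reflects the updated replica distribution.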

(color online) The (a) NNPDF2.3QED and NNPDF3.0QED, and (b) CT14QEDinc photon PDF-induced uncertainties in the lepton pair invariant mass distribution, via γγ → μ^{+}μ^{−} at the 13 TeV LHC, before and after PDF-reweighting (PR).

(color online) The ratio of common PDF sets to the LUXqed result, along with the LUXqed uncertainty band (light red), after imposing the constraint from the CMS data, at the 68% C.L. The CT14 band corresponds to the range from the PDF members shown in brackets after reweighting. The NNPDF bands are calculated using the reweighted replicas, with the uncertainty given by the standard deviation of the updated replicas. Note the different vertical scales.

(color online) The ratio of common PDF sets to the LUXqed result, along with the LUXqed uncertainty band (light red), before imposing the constraint from the CMS data, at the 68% C.L. The CT14 band corresponds to the range from the PDF members shown in brackets. The NNPDF bands are calculated using the original replicas, with the uncertainty given by their standard deviation. Note the different vertical scales.

The CMS data can also be used to test the model proposed above. Based on the cuts used by CMS and the LUXqed PDF set, we extract the 95% confidence limit on the fraction of inelastic (dissociative) photons that remain after the experimental cuts are applied.

We have shown that the “effective” photon-photon luminosity obtained by the CMS Collaboration from its analysis of exclusive two-photon production of μ^{+}μ^{−} pairs at the LHC can significantly constrain some photon PDFs, particularly the NNPDF2.3QED and NNPDF3.0QED photon PDFs. On the other hand, the data are consistent with the LUXqed prediction, whose quoted uncertainty is already much smaller than the constraint these data provide.

We thank Tao Han, Lucian Harland-Lang, Joey Huston, Valery Khoze, Wayne Repko, Richard Ruiz and Misha Ryskin for helpful discussions. We also thank Tie-Jiun Hou and Pavel Nadolsky for providing the Monte Carlo replicas of CT14QEDinc photon PDFs. C.-P. Yuan is also grateful for support from the Wu-Ki Tung endowed chair in particle physics.

We emphasize that, although we are using the μ^{+}μ^{−} cross section for the comparison, it is in fact the effective photon-photon luminosity extracted from the CMS di-muon data that constrains the photon PDFs.