<p><strong>lsqfit-gui</strong> (2021-10-25)</p>
<p><a href=""><img src="./../../assets/images/git_logo.png" alt="git repo" /></a></p>
<p><em>Coming soon!</em></p>
<!--more-->
<p><strong>Determining properties of hyperons [talk]</strong> (2021-07-28)</p>
<p><a href="https://indico.cern.ch/event/1006302/contributions/4373305/">Lattice 2021 bulletin</a> [<a href="https://millernb.gitlab.io/hyperon-talk/presentation.pdf">PDF of slides</a>]</p>
<!--more-->
<h2 id="title">Title</h2>
<p>Thank my contributors.</p>
<h2 id="why-hyperons">Why hyperons?</h2>
<p>First, what are hyperons and why do we care about them?</p>
<ul>
<li>By definition, a hyperon is a baryon containing at least one strange quark but no quarks of a heavier flavor</li>
<li>Historically, hyperons were first discovered in the 1950s. Gell-Mann observed that these baryons could be classified into a baryon octet and decuplet if the baryons were composed of partons individually obeying an SU(3) flavor $\times$ SU(2) spin symmetry. Using this picture on the left, Gell-Mann was able to predict the existence of the Omega hyperon, and it was discovered a few years later.</li>
</ul>
<p>So why do we still need to study hyperons now?</p>
<ul>
<li>These days we have a more comprehensive model for understanding the particle zoo – namely, the Standard Model.</li>
<li>One prediction of the Standard Model is that the CKM matrix, which describes flavor mixing by the weak interaction, should be unitary.</li>
<li>This leads to the so-called “top-row unitarity” condition. One way to study this is through hyperons, as I will elaborate on later.</li>
<li>Hyperons might be stable on the order of millions of years in neutron stars. Understanding properties of hyperons is important for modeling the equation of state of neutron stars, which dictates how soft or squishy neutron stars are.</li>
<li>We know that $\chi$PT works well for mesons but not necessarily baryons. Testing the convergence of chiral expressions for the hyperon mass formulae and axial charges serves as an important test of heavy baryon $\chi$PT.</li>
</ul>
<p>Why do we need the lattice?</p>
<ul>
<li>Nucleon structure has been well-studied experimentally. Many body bound states of nucleons are abundant (chemistry)</li>
<li>Hypernuclear structure much harder to study – requires experiments like the LHCb</li>
<li>Difficulty comes from their instability</li>
<li>Can’t use them for scattering</li>
</ul>
<p>Notes:</p>
<ul>
<li>The baryon decuplet/octet comes from SU(6) symmetry (a superset of SU(3) flavor $\otimes$ SU(2) spin, as can be seen from either picture). We have $6 \otimes 6 \otimes 6 = 56 \oplus \cdots$, with the $56$ irrep decomposing as $56 = (10, 4) \oplus (8, 2)$ under SU(3) flavor $\otimes$ SU(2) spin. Only a single $10$ and a single $8$ survive once color symmetry is considered too.</li>
</ul>
<h2 id="experimental-determination-of-v_us">Experimental determination of $V_{us}$</h2>
<ul>
<li>To check top-row unitarity, need to calculate $V_{ud}$, $V_{us}$, $V_{ub}$</li>
<li>But $V_{ub}$ is negligible, so mostly just a relationship between $V_{ud}$ and $V_{us}$</li>
<li>Unlike determinations of $V_{ud}$, which can be obtained purely from experiment + theory, the best estimates of $V_{us}$ require LQCD.</li>
<li>Three ways to estimate $V_{us}$: kaon, hyperon, and tau decays</li>
<li>Historically hyperons were used to determine $V_{us}$, but these days the most precise determinations come from kaons</li>
<li>In fact, of these three sources, the largest uncertainty currently comes from using hyperon decays</li>
</ul>
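The top-row check itself is simple arithmetic with error propagation; here is a minimal Python sketch using illustrative CKM values close to recent world averages (assumptions for illustration, not numbers from this talk):

```python
import math

def top_row_deviation(vud, vus, vub):
    """Return |Vud|^2 + |Vus|^2 + |Vub|^2 - 1 and its propagated uncertainty.

    Each argument is a (mean, uncertainty) pair; uncertainties are
    combined in quadrature, assuming no correlations.
    """
    delta = vud[0]**2 + vus[0]**2 + vub[0]**2 - 1
    sigma = math.sqrt(sum((2 * v * dv)**2 for v, dv in (vud, vus, vub)))
    return delta, sigma

# Illustrative inputs (assumed values, not results from this talk)
delta, sigma = top_row_deviation((0.97370, 0.00014),
                                 (0.2245, 0.0008),
                                 (0.00382, 0.00024))
print(f"deviation = {delta:.5f} +/- {sigma:.5f} ({abs(delta)/sigma:.1f} sigma)")
```

With these inputs the deviation from unitarity comes out at roughly the three-sigma level, in line with the tension discussed below.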
<p>So why bother with hyperon decays?</p>
<ul>
<li>First reason: LHCb will soon give improved measurements of hyperon decay widths, which should improve this method for estimating $V_{us}$; estimates suggest this approach can become competitive if we can determine the lattice part to ~1%</li>
<li>Second: although you can estimate $V_{us}$ using either Kl2 or Kl3 decays, the two sources give different results</li>
</ul>
<p>Notes:</p>
<ul>
<li>tau exclusive: $\tau \to \{\pi, K\} + \nu$</li>
<li>tau inclusive: $\tau \to$ sum of all possible hadronic states $+ \nu$</li>
<li>tau theory problems: relies on finite energy sum rule assumptions; error is probably underestimated</li>
<li>$V_{us}$ from tau can also be estimated using exclusive tau decays + $F_K/F_\pi$</li>
<li>Historically $V_{us}$ was determined from hyperon decays, but assuming SU(3) flavor symmetry (broken at the ~15% level). This SU(3) flavor assumption allowed one to relate the form factors of the electric charge and magnetic moment of the baryons.</li>
</ul>
<h2 id="tension-between-kl2-kl3">Tension between Kl2, Kl3</h2>
<ul>
<li>See this discrepancy in the figure from the most recent FLAG review</li>
</ul>
<p>[Explain plot]</p>
<ul>
<li>$V_{us}$ can either be determined from $F_K/F_\pi$ (diagonal band) or the 0-momentum form factor (horizontal band)</li>
<li>$V_{ud}$ is determined experimentally from superallowed beta decays</li>
<li>Intersection of the vertical blue band with either the diagonal or the horizontal band yields $V_{us}$</li>
<li>The two results are clearly in disagreement</li>
<li>In fact, the problem is worse (or better, depending on your perspective) as a check of unitarity</li>
<li>If you assume the experimental value of $V_{ud}$ and calculate the top-row unitarity check, the $F_K/F_\pi$ determination gives a disagreement from unitarity at the 3.2 sigma level</li>
<li>Likewise, the Kl3 measurement gives a disagreement at the 5.6 sigma level</li>
<li>We’d like to use hyperon decays to see which of these estimates of $V_{us}$ is more likely to be correct</li>
</ul>
<p>Notes:</p>
<ul>
<li>Quoted numbers: FLAG average for $N_f = 2 + 1 + 1$; results are similar-ish for $N_f = 2 + 1$, though less drastic (2.3- and 4.3-sigma)</li>
<li>Quoted numbers assume superallowed beta decay estimate; using only the lattice gives a roughly 2-sigma deviation.</li>
<li>Dashed line: correlation between $V_{us}$ and $V_{ud}$ assuming SM unitarity.</li>
</ul>
<h2 id="project-goals">Project goals</h2>
<ul>
<li>Mass spectrum – good first check</li>
<li>Next axial charges, vector charges</li>
<li>Finally calculate form factors</li>
<li>Explain lattice</li>
</ul>
<h2 id="previous-work">Previous Work</h2>
<p>Not working in a vacuum</p>
<p>Mass spectrum:</p>
<ul>
<li>Hyperon spectrum has been analyzed numerous times</li>
<li>Well-known figure from BMW</li>
<li>We have a different lattice setup</li>
</ul>
<p>Axial charges:</p>
<ul>
<li>Less work on hyperon form factors</li>
<li>First calculation of hyperon axial charges from LQCD occurred in 2007, used only a single lattice spacing and pion mass (Lin)</li>
<li>Lin (2018): recent work</li>
<li>Lin: first extrapolation of hyperon axial charges to continuum limit; used ratio of hyperon axial charge to nucleon axial charge</li>
<li>Lin used a Taylor expansion in the pion mass, lattice spacing, and volume – we plan to perform a simultaneous chiral fit, as I’ll elaborate on later</li>
<li>Compared to Lin, we will benefit from having more ensembles available in our analysis, including more at the physical pion mass</li>
</ul>
<p>Vector form factors:</p>
<ul>
<li>Not shown here</li>
<li>Work by Sasaki (2017) and Shanahan et al (2015) on the hyperon transition vector form factors</li>
</ul>
<p>Notes:</p>
<ul>
<li>Lin used the ratio instead: (1) cancellations between numerator and denominator, (2) avoid uncertainties from renormalization of the axial-current operator</li>
</ul>
<h2 id="xi-correlator-fits">$\Xi$ correlator fits</h2>
<ul>
<li>We’ve fit most of the hyperon correlators for the ensembles we have</li>
<li>Stability plot on right</li>
<li>Masses of the $\Xi$ vs $m_\pi^2$ on various ensembles, ranging from $a=0.15$ fm to $a=0.06$ fm and including 3 ensembles at the physical pion mass</li>
</ul>
<h2 id="fit-strategy-mass-formulae">Fit strategy: mass formulae</h2>
<p>Consider the S=2 hyperons</p>
<ul>
<li>Explain $\Lambda_\chi$ – dimensionless LECs</li>
<li>Examining the chiral mass formulae, we see that the formulae have many common LECs, namely the sigma terms and the axial charges</li>
<li>This suggests that when we perform this analysis, we should fit the hyperon mass formulae simultaneously</li>
<li>But this also suggests that our fit will benefit from simultaneously fitting the hyperon masses with the axial charges, which should improve the precision of both the mass extrapolations and the axial charge extrapolations</li>
</ul>
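As a toy illustration of why sharing LECs helps, consider two observables that depend linearly on $\epsilon_\pi^2$ with a common slope standing in for a shared LEC; stacking both datasets into one least-squares system lets every point constrain the shared parameter. All numbers below are made up, and the real analysis uses the full $\chi$PT expressions rather than this linear model:

```python
import numpy as np

# Toy data: two "hyperon masses" at several pion-mass points eps2,
# generated from a shared slope sigma (the analogue of a shared LEC).
eps2 = np.array([0.02, 0.05, 0.09, 0.14])
m1 = 1.20 + 2.5 * eps2      # intercept a1, shared slope sigma = 2.5
m2 = 1.35 + 2.5 * eps2      # intercept a2, same sigma

# Simultaneous fit: unknowns (a1, a2, sigma); both datasets are
# stacked so each one constrains the shared slope.
n = len(eps2)
design = np.zeros((2 * n, 3))
design[:n, 0] = 1.0                          # a1 acts on dataset 1 only
design[n:, 1] = 1.0                          # a2 acts on dataset 2 only
design[:, 2] = np.concatenate([eps2, eps2])  # sigma acts on both
y = np.concatenate([m1, m2])

params, *_ = np.linalg.lstsq(design, y, rcond=None)
a1, a2, sigma = params
print(a1, a2, sigma)  # recovers 1.20, 1.35, 2.5
```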
<h2 id="hyperon-mass-spectrum-xi-preliminary-results">Hyperon mass spectrum: $\Xi$ preliminary results</h2>
<ul>
<li>We have calculated the mass spectrum for the hyperons; as an example, the results for the $\Xi$ are shown</li>
<li>We perform 40 Bayesian fits and take a model average over these models</li>
<li>[Explain models]</li>
</ul>
<p>[Explain plot]:</p>
<ul>
<li>Histogram of all models, sorted by what order chiral terms are included</li>
<li>We see that the models that contribute the most to the model average are those without $\chi$PT terms included</li>
<li>But we see that the models with $\chi$PT terms also agree</li>
<li>In summary, not yet sure whether we can say $\chi$PT is converging for these observables</li>
</ul>
<h2 id="summary">Summary</h2>
<p>[See slide]</p>
<p>Notes:</p>
<ul>
<li>Use Feynman-Hellmann fits</li>
</ul>
<p><strong>Scale setting with $m_\Omega$</strong> (2020-12-09)</p>
<p><a href="https://github.com/callat-qcd/project_scale_setting_mdwf_hisq"><img src="./../../assets/images/git_logo.png" alt="git repo" /></a></p>
<p>Python code for our scale setting analysis.</p>
<!--more-->
<h1 id="scale-setting-with-mω-and-w0">Scale setting with m<sub>Ω</sub> and w<sub>0</sub></h1>
<p>This repository performs the chiral, continuum and infinite volume extrapolations of <code class="language-plaintext highlighter-rouge">w_0 m_Omega</code> to perform a scale setting on the <a href="https://arxiv.org/abs/1701.07559">MDWF on gradient-flowed HISQ</a> action. The present results accompany the scale setting publication available at <a href="https://arxiv.org/abs/2011.12166">arXiv:2011.12166</a>.</p>
<p>The analysis was performed by Nolan Miller (<a href="https://github.com/millernb">millernb</a>) with the <code class="language-plaintext highlighter-rouge">master</code> branch, and Logan Carpenter (<a href="https://github.com/orgs/callat-qcd/people/loganofcarpenter">loganofcarpenter</a>) with cross checks by André Walker-Loud (<a href="https://github.com/walkloud">walkloud</a>) on the <code class="language-plaintext highlighter-rouge">andre</code> branch.</p>
<p>The raw correlation functions can be found <a href="https://a51.lbl.gov/~callat/published_results/">here</a> and the bootstrap results for the ground state masses and values of <code class="language-plaintext highlighter-rouge">Fpi</code> are contained in the file <code class="language-plaintext highlighter-rouge">data/omega_pi_k_spec.h5</code>.</p>
<h2 id="how-to-use">How to use</h2>
<p>To generate the extrapolation and interpolation results from the paper, run <code class="language-plaintext highlighter-rouge">python scale-setting.py -c [name]</code>. This will automatically create the folder <code class="language-plaintext highlighter-rouge">/results/[name]/</code>. A summary of the results is given inside <code class="language-plaintext highlighter-rouge">/results/[name]/README.md</code>. Extra options can be viewed by running <code class="language-plaintext highlighter-rouge">python scale-setting.py --help</code>, which is given below for convenience.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>usage: scale-setting.py [-h] [-c COLLECTION_NAME] [-m MODELS [MODELS ...]] [-ex EXCLUDED_ENSEMBLES [EXCLUDED_ENSEMBLES ...]] [-em {all,order,disc,alphas}] [-df DATA_FILE] [-re] [-mc] [-nf] [-na] [-d]
Perform scale setting
optional arguments:
-h, --help show this help message and exit
-c COLLECTION_NAME, --collection COLLECTION_NAME
fit with priors and models specified in /results/[collection]/{prior.yaml,settings.yaml} and save results
-m MODELS [MODELS ...], --models MODELS [MODELS ...]
fit specified models
-ex EXCLUDED_ENSEMBLES [EXCLUDED_ENSEMBLES ...], --exclude EXCLUDED_ENSEMBLES [EXCLUDED_ENSEMBLES ...]
exclude specified ensembles from fit
-em {all,order,disc,alphas}, --empirical_priors {all,order,disc,alphas}
determine empirical priors for models
-df DATA_FILE, --data_file DATA_FILE
fit with specified h5 file
-re, --reweight use charm reweightings on a06m310L
-mc, --milc use milc's determinations of a/w0
-nf, --no_fit do not fit models
-na, --no_average do not average models
-d, --default use default priors; defaults to using optimized priors if present, otherwise default priors
</code></pre></div></div>
<p>To fine-tune the results, either re-run the fits using the options above or modify <code class="language-plaintext highlighter-rouge">/results/[name]/settings.yaml</code>. Similarly, the fits can be constructed with different priors by editing <code class="language-plaintext highlighter-rouge">/results/[name]/priors.yaml</code> and re-running <code class="language-plaintext highlighter-rouge">python scale-setting.py -c [name]</code>.</p>
<p>In addition to this library, this repo contains Jupyter notebooks. The fit for a single model can be explored in <code class="language-plaintext highlighter-rouge">/notebooks/fit_model.ipynb</code>. The model average is provided in <code class="language-plaintext highlighter-rouge">/notebooks/average_models.ipynb</code>. Some miscellaneous drudgery (e.g., the paper’s sensitivity figure) is available in <code class="language-plaintext highlighter-rouge">/notebooks/bespoke_plots.ipynb</code>.</p>
<h2 id="requirements">Requirements</h2>
<p>This work makes extensive use of Peter Lepage’s Python modules <a href="https://github.com/gplepage/gvar"><code class="language-plaintext highlighter-rouge">gvar</code></a> and <a href="https://github.com/gplepage/lsqfit"><code class="language-plaintext highlighter-rouge">lsqfit</code></a>, which are used to construct the fits and model average. Further, the settings and priors are primarily tweaked by the accompanying <code class="language-plaintext highlighter-rouge">yaml</code> files loaded via <a href="https://github.com/yaml/pyyaml"><code class="language-plaintext highlighter-rouge">PyYAML</code></a>.</p>
<p><strong>Scale setting the Möbius Domain Wall Fermion on gradient-flowed HISQ action using the Omega baryon mass and the gradient-flow scales $t_0$ and $w_0$</strong> (2020-11-24)</p>
<p><a href="https://doi.org/10.1103/PhysRevD.103.054511" target="_blank">Phys. Rev. D 103, 054511 (2021)</a>
[<a href="https://arxiv.org/abs/2011.12166" target="_blank">arXiv:2011.12166</a>]</p>
<!--more-->
<p>We report on a sub-percent scale determination using the Omega baryon mass and gradient-flow methods.
The calculations are performed on 22 ensembles of $N_f=2+1+1$ highly improved, rooted staggered sea-quark configurations generated by the MILC and CalLat Collaborations. The valence quark action used is Möbius Domain-Wall fermions solved on these configurations after a gradient-flow smearing is applied with a flowtime of $t_{\rm gf}=1$ in lattice units. The ensembles span four lattice spacings in the range $0.06 \lesssim a \lesssim 0.15$ fm, six pion masses in the range $130 \lesssim m_\pi \lesssim 400$ MeV and multiple lattice volumes. On each ensemble, the gradient-flow scales $t_0/a^2$ and $w_0/a$ and the omega baryon mass $a m_\Omega$ are computed. The dimensionless product of these quantities is then extrapolated to the continuum and infinite volume limits and interpolated to the physical light, strange and charm quark mass point in the isospin limit, resulting in the determination of $\sqrt{t_0} = 0.1422(14)$ fm and $w_0 = 0.1709(11)$ fm with all sources of statistical and systematic uncertainty accounted for. The dominant uncertainty in this result is the stochastic uncertainty, providing a clear path for a few-per-mille uncertainty, as recently obtained by the Budapest-Marseille-Wuppertal Collaboration.</p>
<p><strong>Lattice calculation of $F_K/F_\pi$ from a mixed domain-wall on HISQ action [talk]</strong> (2020-10-30)</p>
<p><a href="https://meetings.aps.org/Meeting/DNP20/Session/DL.2">American Physical Society bulletin</a> [<a href="https://millernb.gitlab.io/aps-2020-10/presentation.pdf">PDF of slides</a>]</p>
<!--more-->
<h2 id="title-slide">Title slide</h2>
<p>Today I’m here to talk about my lattice determination of the ratio of the pseudoscalar decay constants $F_K$ and $F_\pi$ using a mixed domain-wall on HISQ action, which was only possible due to the work of other members of CalLat.</p>
<h2 id="why-f_kf_pi">Why $F_K/F_\pi$?</h2>
<p>As we all know, the quark eigenstates of the weak and strong interactions are different. One way this difference is manifested is through $K$-$\bar{K}$ mixing, in which we see the quarks oscillate between flavors.</p>
<p>In the standard model, the difference between the quark eigenstates of the weak and strong interactions is encoded in the CKM matrix. If the eigenstates were the same, the matrix would be diagonal. However, they are not, so we have off-diagonal entries that allow mixing between different generations and flavors.</p>
<h2 id="unitarity-of-the-ckm-matrix">Unitarity of the CKM Matrix</h2>
<p>According to the standard model, the CKM matrix is unitary. From the top row of the CKM matrix, we get the following relation. The CKM matrix entry $V_{ud}$ can be precisely determined experimentally through superallowed beta decays; however, $V_{us}$ cannot and must instead be determined through lattice methods. The last entry in this relation, $V_{ub}$, is comparably small, so this equation predominantly relates $V_{ud}$ and $V_{us}$.</p>
<p>Marciano [mar-see-an-o] has related $F_K/F_\pi$ and $\vert V_{us} \vert/\vert V_{ud} \vert$ to kaon/pion decay rates, so by combining our $F_K/F_\pi$ result with experimental results for $V_{ud}$ determined via superallowed nuclear beta decays, we can precisely determine $V_{us}$.</p>
<p>Here are the definitions of the pseudoscalar decay constants, which we can use to generate values on the lattice.</p>
<h2 id="why-lattice-qcd">Why <em>Lattice</em> QCD?</h2>
<p>As previously stated, $V_{us}$ is more easily accessed by the lattice than by experiment, unless, that is, you happen to have a few hundred million dollars to spare.</p>
<p>So what is lattice QCD? Lattice QCD is a non-perturbative approach to QCD, which is particularly useful in the low-energy limit where the coupling constant becomes greater than 1. The basic idea behind lattice QCD is to imagine what would happen if quark and gluon fields were discretized to a lattice, rather than permitting them to lie anywhere in spacetime, and then considering the limit where the lattice spacing goes to 0. Perhaps unsurprisingly, there are infinitely many ways of discretizing the QCD action, but they aren’t all equally useful.</p>
<p>In contrast with experimentalists, lattice practitioners have the advantage of being able to tune QCD parameters, allowing us to perform lattice “experiments” in “alternative” universes, thereby probing how QCD observables are impacted by their underlying parameters.</p>
<p>Lattice methods can also be used in conjunction with effective field theory, increasing the precision of our results.</p>
<h2 id="why-f_kf_pi-via-lattice-qcd">Why $F_K/F_\pi$ via Lattice QCD?</h2>
<p>$F_K/F_\pi$ is a “gold-plated” quantity. Unlike many other QCD observables, it can be easily calculated to high precision on the lattice. The quantity is dimensionless, meaning we don’t have to worry about scale setting. The numerator and denominator are correlated, further improving statistics. The quantity is mesonic, so it doesn’t have the signal-to-noise issue associated with baryonic observables. And the full chiral expansion is known to NNLO, so we’re only limited by our statistics, not theory.</p>
<h2 id="comparison-of-lattice-actions">Comparison of Lattice Actions</h2>
<p>As I previously stated, there are infinitely many ways of discretizing QCD. We use a mixed action, which is to say that we discretize the sea and valence quarks differently. Since the sea quarks are generally less important than the valence quarks, we use an action that allows us to cheaply produce many field configurations. This also means we can generate additional pion and kaon data for the same amount of computational resources. Our action, unlike some others, has no $O(a)$ discretization errors.</p>
<h2 id="f_kf_pi-models">$F_K/F_\pi$ models</h2>
<p>The goal of this work is to determine $F_K/F_\pi$ at the physical point, that is, at the physical pion and kaon masses and in the continuum, infinite volume limit. To this end, we use chiral perturbation theory to expand $F_K/F_\pi$ in terms of the pseudoscalar masses.</p>
<p>At LO we expect $F_K/F_\pi = 1$ as this is the SU(3) flavor limit. In the SU(3) flavor limit, kaons and pions are identical. The top row, therefore, offers corrections to $F_K/F_\pi$ via $\chi$PT. The terms in the bottom row are lattice artifacts that must be accounted for.</p>
<p>When we perform our extrapolation, we don’t limit ourselves to a single model. Instead we consider 24 different models and then take the model average. The 24 different models come from the following choices:</p>
<ol>
<li>At NLO, whether we (a) use the NLO expressions for $F_K$ and $F_\pi$ in the numerator and denominator or (b) take the Taylor expansion of $F_K$ and $F_\pi$. It sounds pedantic, but the latter choice removes one LEC at NLO.</li>
<li>At N2LO, whether we use the full $\chi$PT expression, which includes chiral logs, or just use a Taylor expansion. Regardless, the N3LO correction is just a Taylor series correction.</li>
<li>What we use for our renormalization/chiral cutoff</li>
<li>Whether or not we include the $\alpha_S$ term, which is a lattice correction from radiative gluons and is a quirk particular to some action discretizations.</li>
</ol>
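The 24 models are just the Cartesian product of these choices. Assuming two options each for items 1, 2, and 4 and three options for the chiral cutoff (the placeholder labels below are illustrative, but three cutoff choices is consistent with the stated 24-model count), the model space can be enumerated like so:

```python
from itertools import product

# Assumed option lists; the cutoff labels are placeholders, not the
# talk's actual choices.
nlo_choice  = ['ratio', 'taylor']   # (1) expand the ratio or not at NLO
n2lo_choice = ['xpt', 'taylor']     # (2) chiral logs vs pure Taylor at N2LO
cutoff      = ['A', 'B', 'C']       # (3) renormalization/chiral cutoff
alpha_s     = [True, False]         # (4) include the alpha_s term or not

models = [
    {'nlo': a, 'n2lo': b, 'cutoff': c, 'alpha_s': d}
    for a, b, c, d in product(nlo_choice, n2lo_choice, cutoff, alpha_s)
]
print(len(models))  # 2 * 2 * 3 * 2 = 24
```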
<h2 id="model-paramters">Model Parameters</h2>
<p>Because we’re fitting a chiral expansion, we need to determine the parameters in this expansion. At LO, there are no parameters to be determined since $F_K/F_\pi$ is 1. At NLO, there is only a single chiral LEC, assuming we Taylor-expand the ratio: the Gasser-Leutwyler [gawh-ser loot-why-lehr] constant $L_5$. But at higher orders, there are many more parameters. At N2LO, there are 11 more; and at N3LO, there are 6 more.</p>
<p>We use 18 different ensembles in our lattice calculation, each of which is a datapoint in our fit. So we have essentially 18 parameters to fit with only 18 datapoints. While a frequentist might deem the endeavor hopeless at this point, a Bayesian would not. We can constrain the parameters by assigning them prior distributions. And from the graph, we see the fit improves even as we add more parameters: the widest band has only two parameters if we include a lattice spacing correction, but the narrowest band has as many parameters as we have data points.</p>
<p>We have a rough idea of what the width of our parameters should be based on the size of our expansion parameters. Regardless, we can check whether our parameters are reasonable by using the empirical Bayes method, which uses the data to determine the most likely priors that would support that data.</p>
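For a linear model with Gaussian priors, constraining a parameter with a prior is equivalent to appending one artificial "data point" per parameter to the least-squares system, which is why 18 parameters with 18 data points is not hopeless. A schematic numpy sketch of this idea (not the analysis code, which uses lsqfit):

```python
import numpy as np

def fit_with_priors(X, y, prior_mean, prior_sdev):
    """Least squares with Gaussian priors, implemented by augmenting the
    design matrix: each prior acts as one extra weighted observation."""
    X_aug = np.vstack([X, np.diag(1.0 / prior_sdev)])
    y_aug = np.concatenate([y, prior_mean / prior_sdev])
    params, *_ = np.linalg.lstsq(X_aug, y_aug, rcond=None)
    return params

# Example: 2 data points, 3 parameters -- underdetermined without priors,
# but solvable once each parameter carries a prior. Numbers are made up.
X = np.array([[1.0, 0.5, 0.25],
              [1.0, 1.0, 1.00]])
y = np.array([1.4, 1.9])
params = fit_with_priors(X, y, prior_mean=np.zeros(3), prior_sdev=np.ones(3))
print(params)
```

Tightening the priors pulls the parameters toward their prior means; widening them recovers the unconstrained (minimum-norm) fit.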
<h2 id="model-averaging">Model Averaging</h2>
<p>Again, we have 24 different candidate models to describe our data. We give each model a different weight in accordance with the model’s Bayes factor, which we then use to average each model’s extrapolation to the physical point. The Bayes factor is calculated by marginalizing over each of the model parameters and therefore allows us to compare models with different parameters. Additionally, it automatically penalizes overcomplicated models.</p>
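In practice fitters report the log of the marginal likelihood (e.g. lsqfit's <code class="language-plaintext highlighter-rouge">logGBF</code>), and the averaged result picks up a variance term from the spread between models. A minimal numpy sketch of the weighting and averaging, with hypothetical numbers:

```python
import numpy as np

def model_average(means, sdevs, log_gbf):
    """Bayes-factor-weighted average of model results.

    The variance combines each model's own variance with the variance
    of the means across models (the model-selection uncertainty).
    """
    w = np.exp(log_gbf - np.max(log_gbf))  # subtract max for stability
    w /= w.sum()
    mean = np.sum(w * means)
    var = np.sum(w * (sdevs**2 + means**2)) - mean**2
    return mean, np.sqrt(var)

# Three hypothetical model extrapolations of F_K/F_pi (made-up numbers)
means   = np.array([1.1942, 1.1960, 1.1951])
sdevs   = np.array([0.0045, 0.0040, 0.0052])
log_gbf = np.array([101.2, 102.0, 99.5])
print(model_average(means, sdevs, log_gbf))
```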
<h2 id="comparison-of-models">Comparison of Models</h2>
<p>In this slide we see how our different model choices impact the model average.</p>
<p>In the top plot, we see that the data prefers $F_\pi$ for the cutoff. In the bottom right plot, we see that the data heavily prefers using a pure Taylor expansion at N2LO, suggesting we have insufficient data to discern the N2LO chiral logs. In the last plot on the bottom right, we see that Taylor-expanding the ratio at NLO is not preferred.</p>
<p>While not shown here, models with and without the $\alpha_S$ correction have about equal weight.</p>
<h2 id="error-budget">Error Budget</h2>
<p>Next we can break down where our sources of error come from. The plot on the right largely reiterates what I said before, but there are a few additional things we can glean from it. For example, we have a single ensemble generated at $a=0.06$ fm. If we hadn’t generated this ensemble, our uncertainty would’ve slightly increased and our extrapolation would’ve shifted down by roughly half a sigma.</p>
<p>Looking at our error budget, we find that the largest source of error came from statistics, and the second largest source came from discretization, giving us a clear path for improving our result: simply increase the number of configurations and ensembles.</p>
<p>Finally, because the up and down sea quarks are degenerate in our action, we also calculate an SU(2) isospin correction.</p>
<h2 id="previous-results">Previous Results</h2>
<p>Comparing with other collaborations, we see that our result is in good agreement. The blue band is our result, and the green band is the FLAG average, which is essentially the lattice equivalent of the PDG.</p>
<p>Again, we emphasize that each of these groups is using a different lattice action. Our goal here isn’t to determine the most precise value of $F_K/F_\pi$ but to check that our action is behaving reasonably, in much the same way that experimentalists calculate the same quantity in different ways to check that their methods are valid.</p>
<p>So while our result might not be the most precise, we have accomplished the goal we set out to do, which was to verify that our action yields reasonable results.</p>
<h2 id="v_us-from-f_k-f_pi">$|V_{us}|$ from $F_K/ F_\pi$</h2>
<p>Finally, as I mentioned at the start of my presentation, we can use $F_K/F_\pi$ to determine $V_{us}$. Using Marciano’s relation, we get the red band. The blue band is the FLAG average for $V_{us}$, which was determined by a different method using semileptonic form factors and the Ademollo–Gatto [ah-di-mall-o gat-o] theorem.</p>
<p>The green band is the experimental result for $V_{ud}$ as determined by superallowed nuclear beta decays. The intersection of the green band and the red band, therefore, yields our determination of $V_{us}$. There’s a little bit of tension between our result and the FLAG average.</p>
<p>Finally, we calculate the unitarity condition for the CKM matrix mentioned before and find that our result supports it.</p>
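Schematically, once the lattice $F_K/F_\pi$ has been divided out of the measured kaon/pion width ratio, $|V_{us}|$ follows from multiplying the resulting $|V_{us}|/|V_{ud}|$ by $|V_{ud}|$. A sketch with illustrative (assumed) inputs, not the talk's results:

```python
import math

def mul_err(x, dx, y, dy):
    """Product of two uncorrelated quantities; relative errors add in
    quadrature."""
    z = x * y
    dz = abs(z) * math.sqrt((dx / x)**2 + (dy / y)**2)
    return z, dz

# Illustrative inputs (assumptions for the sketch):
vus_over_vud = (0.2313, 0.0005)   # |Vus|/|Vud| after dividing out F_K/F_pi
vud = (0.97370, 0.00014)          # from superallowed beta decays

vus, dvus = mul_err(vus_over_vud[0], vus_over_vud[1], vud[0], vud[1])
print(f"|Vus| = {vus:.4f} +/- {dvus:.4f}")
```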
<h2 id="summary">Summary</h2>
<p>In conclusion, we can calculate $V_{us}$ from $F_K/F_\pi$, which allows us to test the unitarity condition of the CKM matrix. Further, $F_K/F_\pi$ is a <em>gold-plated</em> quantity, which we can use to compare lattice actions. We see that our action gives a result congruent with previous determinations of $F_K/F_\pi$. Finally, we see that model averaging is a method that allows us to evaluate the fitness of many models without biasing our result by committing to a single one.</p>
<p>I’d like to once again thank my collaborators in CalLat. Thanks for listening!</p>
<p><strong>spacetime-plots</strong> (2020-09-26)</p>
<p><a href="https://gitlab.com/millernb/linear-grapher/"><img src="./../../assets/images/git_logo.png" alt="git repo" /></a> <a href="https://mybinder.org/v2/gl/millernb%2Flinear-grapher/master?filepath=spacetime_plotter.ipynb"><img src="./../../assets/images/binder_logo.png" alt="Binder" /></a></p>
<p>A Python notebook for plotting points and lines, expressly written for making <a href="https://en.wikipedia.org/wiki/Spacetime_diagram">spacetime diagrams</a>. To get started with a tutorial, <a href="https://mybinder.org/v2/gl/millernb%2Flinear-grapher/master?filepath=spacetime_plotter.ipynb">launch the binder instance of the notebook</a>.</p>
<!--more-->
<h2 id="purpose">Purpose</h2>
<p>The purpose of this repo is to abstract away the details of creating a plot in <code class="language-plaintext highlighter-rouge">matplotlib</code>, a module which is notoriously tricky for first-time users (especially those with little or no programming experience). For instance, <a href="https://matplotlib.org/3.1.1/gallery/lines_bars_and_markers/simple_plot.html#sphx-glr-gallery-lines-bars-and-markers-simple-plot-py">per the documentation</a>, here is about the simplest plot one can make using <code class="language-plaintext highlighter-rouge">matplotlib</code>.</p>
<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="kn">import</span> <span class="nn">matplotlib</span>
<span class="kn">import</span> <span class="nn">matplotlib.pyplot</span> <span class="k">as</span> <span class="n">plt</span>
<span class="kn">import</span> <span class="nn">numpy</span> <span class="k">as</span> <span class="n">np</span>
<span class="c1"># Data for plotting
</span><span class="n">t</span> <span class="o">=</span> <span class="n">np</span><span class="p">.</span><span class="n">arange</span><span class="p">(</span><span class="mf">0.0</span><span class="p">,</span> <span class="mf">2.0</span><span class="p">,</span> <span class="mf">0.01</span><span class="p">)</span>
<span class="n">s</span> <span class="o">=</span> <span class="mi">1</span> <span class="o">+</span> <span class="n">np</span><span class="p">.</span><span class="n">sin</span><span class="p">(</span><span class="mi">2</span> <span class="o">*</span> <span class="n">np</span><span class="p">.</span><span class="n">pi</span> <span class="o">*</span> <span class="n">t</span><span class="p">)</span>
<span class="n">fig</span><span class="p">,</span> <span class="n">ax</span> <span class="o">=</span> <span class="n">plt</span><span class="p">.</span><span class="n">subplots</span><span class="p">()</span>
<span class="n">ax</span><span class="p">.</span><span class="n">plot</span><span class="p">(</span><span class="n">t</span><span class="p">,</span> <span class="n">s</span><span class="p">)</span>
<span class="n">ax</span><span class="p">.</span><span class="nb">set</span><span class="p">(</span><span class="n">xlabel</span><span class="o">=</span><span class="s">'time (s)'</span><span class="p">,</span> <span class="n">ylabel</span><span class="o">=</span><span class="s">'voltage (mV)'</span><span class="p">,</span>
<span class="n">title</span><span class="o">=</span><span class="s">'About as simple as it gets, folks'</span><span class="p">)</span>
<span class="n">ax</span><span class="p">.</span><span class="n">grid</span><span class="p">()</span>
<span class="n">plt</span><span class="p">.</span><span class="n">show</span><span class="p">()</span>
</code></pre></div></div>
<table>
<thead>
<tr>
<th style="text-align: center"><img src="https://gitlab.com/millernb/linear-grapher/-/raw/master/figs/matplotlib_simple.png" alt="image" /></th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: center">A simple matplotlib plot</td>
</tr>
</tbody>
</table>
<p>This “simple example”, however, is already too complicated for our purposes – we’re only interested in plotting lines and points. To that end, let’s plot a couple of lines. Recall that a line can be specified by either (1) a pair of points or (2) a point and a slope. Suppose we’re interested in plotting two lines with slope $-0.5$, one passing through $A = (1, 2)$ and the other passing through $B = (1, -1)$. Using <code class="language-plaintext highlighter-rouge">matplotlib</code>, we cannot directly plot this line – instead, we must convert this description of a line into a collection of points, which can then be plotted via <code class="language-plaintext highlighter-rouge">matplotlib.pyplot.plot</code>.</p>
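That point-and-slope conversion can be sketched in a few lines; the function name and signature below are illustrative, not the repo's actual API:

```python
import numpy as np

def line_points(point, slope, xlim, n=50):
    """Convert a (point, slope) description of a line into arrays of
    x and y values that matplotlib can plot directly."""
    x0, y0 = point
    x = np.linspace(xlim[0], xlim[1], n)
    y = y0 + slope * (x - x0)  # point-slope form of the line
    return x, y

x, y = line_points(point=(1, 2), slope=-0.5, xlim=(-5, 5))
# e.g. plt.plot(x, y) would then draw the line through A = (1, 2)
```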
<p>Using either the code in this repo or <code class="language-plaintext highlighter-rouge">matplotlib</code>, we should get the following plot.</p>
<table>
<thead>
<tr>
<th style="text-align: center"><img src="https://gitlab.com/millernb/linear-grapher/-/raw/master/figs/two_lines.png" alt="image" /></th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: center">Two lines, a grid, and a legend</td>
</tr>
</tbody>
</table>
<p>Here is the matplotlib implementation.</p>
<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="kn">import</span> <span class="nn">matplotlib</span>
<span class="kn">import</span> <span class="nn">matplotlib.pyplot</span> <span class="k">as</span> <span class="n">plt</span>
<span class="kn">import</span> <span class="nn">numpy</span> <span class="k">as</span> <span class="n">np</span>
<span class="c1"># Determine range
</span><span class="n">xlim</span> <span class="o">=</span> <span class="p">(</span><span class="o">-</span><span class="mi">5</span><span class="p">,</span> <span class="mi">5</span><span class="p">)</span>
<span class="c1"># Line a
</span><span class="n">m_a</span> <span class="o">=</span> <span class="o">-</span><span class="mf">0.5</span>
<span class="n">A</span> <span class="o">=</span> <span class="p">(</span><span class="mi">1</span><span class="p">,</span> <span class="mi">2</span><span class="p">)</span>
<span class="n">label_a</span> <span class="o">=</span> <span class="s">'a'</span>
<span class="c1"># Line b
</span><span class="n">m_b</span> <span class="o">=</span> <span class="o">-</span><span class="mf">0.5</span>
<span class="n">B</span> <span class="o">=</span> <span class="p">(</span><span class="mi">1</span><span class="p">,</span> <span class="o">-</span><span class="mi">1</span><span class="p">)</span>
<span class="n">label_b</span> <span class="o">=</span> <span class="s">'b'</span>
<span class="c1"># Convert description to arrays
</span><span class="k">for</span> <span class="n">label</span><span class="p">,</span> <span class="n">m</span><span class="p">,</span> <span class="n">P</span> <span class="ow">in</span> <span class="nb">zip</span><span class="p">([</span><span class="n">label_a</span><span class="p">,</span> <span class="n">label_b</span><span class="p">],</span> <span class="p">[</span><span class="n">m_a</span><span class="p">,</span> <span class="n">m_b</span><span class="p">],</span> <span class="p">[</span><span class="n">A</span><span class="p">,</span> <span class="n">B</span><span class="p">]):</span>
<span class="n">b</span> <span class="o">=</span> <span class="n">P</span><span class="p">[</span><span class="mi">1</span><span class="p">]</span> <span class="o">-</span> <span class="n">m</span> <span class="o">*</span><span class="n">P</span><span class="p">[</span><span class="mi">0</span><span class="p">]</span>
<span class="n">x</span> <span class="o">=</span> <span class="n">np</span><span class="p">.</span><span class="n">linspace</span><span class="p">(</span><span class="n">xlim</span><span class="p">[</span><span class="mi">0</span><span class="p">],</span> <span class="n">xlim</span><span class="p">[</span><span class="mi">1</span><span class="p">])</span>
<span class="n">y</span> <span class="o">=</span> <span class="n">m</span> <span class="o">*</span><span class="n">x</span> <span class="o">+</span> <span class="n">b</span>
<span class="c1"># Actually plot the line
</span> <span class="n">plt</span><span class="p">.</span><span class="n">plot</span><span class="p">(</span><span class="n">x</span><span class="p">,</span> <span class="n">y</span><span class="p">,</span> <span class="n">label</span><span class="o">=</span><span class="n">label</span><span class="p">,</span> <span class="n">lw</span><span class="o">=</span><span class="mi">2</span><span class="p">)</span>
<span class="c1"># Format plot to look nice
</span><span class="n">plt</span><span class="p">.</span><span class="n">xlim</span><span class="p">(</span><span class="n">xlim</span><span class="p">)</span>
<span class="n">plt</span><span class="p">.</span><span class="n">ylim</span><span class="p">(</span><span class="n">xlim</span><span class="p">)</span>
<span class="n">plt</span><span class="p">.</span><span class="n">gca</span><span class="p">().</span><span class="n">set_aspect</span><span class="p">(</span><span class="s">'equal'</span><span class="p">)</span>
<span class="n">plt</span><span class="p">.</span><span class="n">minorticks_on</span><span class="p">()</span>
<span class="n">plt</span><span class="p">.</span><span class="n">tick_params</span><span class="p">(</span><span class="n">direction</span><span class="o">=</span><span class="s">'in'</span><span class="p">,</span> <span class="n">which</span><span class="o">=</span><span class="s">'both'</span><span class="p">)</span>
<span class="n">plt</span><span class="p">.</span><span class="n">grid</span><span class="p">(</span><span class="n">which</span><span class="o">=</span><span class="s">'major'</span><span class="p">,</span> <span class="n">alpha</span><span class="o">=</span><span class="mf">0.7</span><span class="p">)</span>
<span class="n">plt</span><span class="p">.</span><span class="n">grid</span><span class="p">(</span><span class="n">which</span><span class="o">=</span><span class="s">'minor'</span><span class="p">,</span> <span class="n">alpha</span><span class="o">=</span><span class="mf">0.2</span><span class="p">)</span>
<span class="n">plt</span><span class="p">.</span><span class="n">legend</span><span class="p">(</span><span class="n">bbox_to_anchor</span><span class="o">=</span><span class="p">(</span><span class="mi">1</span><span class="p">,</span> <span class="mi">1</span><span class="p">),</span> <span class="n">loc</span><span class="o">=</span><span class="s">'upper left'</span><span class="p">,</span> <span class="n">prop</span><span class="o">=</span><span class="p">{</span><span class="s">'size'</span><span class="p">:</span> <span class="mi">16</span><span class="p">})</span>
<span class="n">plt</span><span class="p">.</span><span class="n">axhline</span><span class="p">(</span><span class="mi">0</span><span class="p">,</span> <span class="n">ls</span><span class="o">=</span><span class="s">'--'</span><span class="p">)</span>
<span class="n">plt</span><span class="p">.</span><span class="n">axvline</span><span class="p">(</span><span class="mi">0</span><span class="p">,</span> <span class="n">ls</span><span class="o">=</span><span class="s">'--'</span><span class="p">)</span>
<span class="n">plt</span><span class="p">.</span><span class="n">show</span><span class="p">()</span>
</code></pre></div></div>
<p>Compare with the much simpler implementation in this repo.</p>
<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="kn">from</span> <span class="nn">plotter</span> <span class="kn">import</span> <span class="n">plot_diagram</span>
<span class="c1"># Change these
</span><span class="n">xlim</span> <span class="o">=</span> <span class="p">(</span><span class="o">-</span><span class="mi">5</span><span class="p">,</span> <span class="mi">5</span><span class="p">)</span>
<span class="n">lines</span> <span class="o">=</span> <span class="p">[</span>
<span class="p">(</span><span class="s">'a'</span><span class="p">,</span> <span class="p">(</span><span class="mi">1</span><span class="p">,</span> <span class="mi">2</span><span class="p">),</span> <span class="o">-</span><span class="mf">0.5</span><span class="p">),</span> <span class="c1"># line a
</span> <span class="p">(</span><span class="s">'b'</span><span class="p">,</span> <span class="p">(</span><span class="mi">1</span><span class="p">,</span> <span class="o">-</span><span class="mi">1</span><span class="p">),</span> <span class="o">-</span><span class="mf">0.5</span><span class="p">),</span> <span class="c1"># line b
</span><span class="p">]</span>
<span class="c1"># Make plot
</span><span class="n">plot_diagram</span><span class="p">(</span><span class="n">lines</span><span class="p">,</span> <span class="n">xlim</span><span class="p">)</span>
</code></pre></div></div>
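<p>For intuition, here is a minimal sketch of what a wrapper like <code class="language-plaintext highlighter-rouge">plot_diagram</code> could look like internally. The <code class="language-plaintext highlighter-rouge">line_points</code> helper and the <code class="language-plaintext highlighter-rouge">slope=None</code> convention are hypothetical illustrations, not the repo’s actual implementation.</p>

```python
import matplotlib.pyplot as plt
import numpy as np

def line_points(point, slope, xlim, num=50):
    """Convert a (point, slope) description of a line into plottable arrays."""
    x = np.linspace(xlim[0], xlim[1], num)
    b = point[1] - slope * point[0]  # y-intercept from point-slope form
    return x, slope * x + b

def plot_diagram(lines, xlim):
    """Plot (label, point, slope) descriptions; slope=None marks a vertical line."""
    for label, point, slope in lines:
        if slope is None:
            # Vertical lines have no slope-intercept form, so draw them directly
            plt.axvline(point[0], lw=2, label=label)
        else:
            plt.plot(*line_points(point, slope, xlim), lw=2, label=label)
    # Shared formatting lives in one place -- the main point of a wrapper
    plt.xlim(xlim)
    plt.ylim(xlim)
    plt.gca().set_aspect('equal')
    plt.grid(alpha=0.5)
    plt.legend()
    plt.show()
```

<p>Collecting the formatting boilerplate in one function is the main simplification: each new line costs only one tuple in the list rather than another round of point-slope conversion.</p>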
<p>Of course, there are additional wrinkles to a <code class="language-plaintext highlighter-rouge">matplotlib</code> approach that must be accounted for (plotting vertical lines, specifying a line by two points instead, plotting a single point). The <code class="language-plaintext highlighter-rouge">matplotlib</code> wrapper <code class="language-plaintext highlighter-rouge">plot_diagram</code> takes care of those issues for us.</p>Nolan MillerA Python notebook for plotting points and lines, expressly written for making spacetime diagrams. To get started with a tutorial, launch the Binder instance of the notebook.$F_K/F_\pi$ from Möbius domain-wall fermions solved on gradient-flowed HISQ ensembles2020-05-10T00:00:00+00:002020-05-10T00:00:00+00:00/publications/FK_Fpi<p><a href="https://doi.org/10.1103/PhysRevD.102.034507" target="_blank">Phys. Rev. D 102, 034507 (2020)</a>
[<a href="https://arxiv.org/abs/2005.04795" target="_blank">arXiv:2005.04795</a>]</p>
<!--more-->
<p>We report the results of a lattice quantum chromodynamics calculation of $F_K/F_\pi$ using Möbius domain-wall fermions computed on gradient-flowed $N_f=2+1+1$ highly-improved staggered quark ensembles. The calculation is performed with five values of the pion mass ranging from $130 \lesssim m_\pi \lesssim 400$ MeV, four lattice spacings of $a\sim 0.15, 0.12, 0.09$ and $0.06$ fm and multiple values of the lattice volume. The interpolation/extrapolation to the physical pion and kaon mass point, the continuum, and infinite volume limits are performed with a variety of different extrapolation functions utilizing both the relevant mixed-action effective field theory expressions as well as discretization-enhanced continuum chiral perturbation theory formulas. We find that the $a\sim0.06$ fm ensemble is helpful, but not necessary to achieve a subpercent determination of $F_K/F_\pi$.
We also include an estimate of the strong isospin breaking corrections and arrive at a final result of $F_{\hat{K}^+}/F_{\hat{\pi}^+} = 1.1942(45)$ with all sources of statistical and systematic uncertainty included. This is consistent with the Flavour Lattice Averaging Group average value, providing an important benchmark for our lattice action. Combining our result with experimental measurements of the pion and kaon leptonic decays leads to a determination of $|V_{us}|/|V_{ud}| = 0.2311(10)$.</p>Nolan MillerPhys. Rev. D 102, 034507 (2020) [arXiv:2005.04795]$F_K/F_\pi$ from MDWF on HISQ2020-04-27T00:00:00+00:002020-04-27T00:00:00+00:00/software/fk_fpi<p><a href="https://github.com/callat-qcd/project_fkfpi/tree/nolan"><img src="./../../assets/images/git_logo.png" alt="git repo" /></a></p>
<p>Python code for our $F_K/F_\pi$ analysis.</p>
<!--more-->Nolan MillerPython code for our $F_K/F_\pi$ analysis.