The tropical route of QBO teleconnections in a climate model
Jorge L. Garcia-Franco, Lesley J. Gray, Scott Osprey, Robin Chadwick, and Zane Martin
Recommendation: Publishable after moderate revisions
I thank the authors for their detailed response to my initial comments, including a careful discussion of the differences between their study and that of Rao et al. 2020. I appreciate their focus on the role of the QBO level (70 hPa vs. 30 hPa) and on the signal-to-noise ratio (i.e. how many years are included in the composite). I'm going to sign my review, as it is about to become fairly obvious who wrote it (if it wasn't already obvious from the first round). I also want to apologize if my initial review was a bit too harsh and dismissive, as I really like this paper!
I have one more general comment on section 3.1, and also a few remaining comments on the difference between Rao et al 2020 and this paper.
Section 3.1 still seems overly precise when comparing the model to observations. The GPCP response shown in Figure 1a and the left column of Figure 2 reflects the fact that the data are available only from 1979 to the near present. Over this period there were more EN events during WQBO. If the observational precipitation data product were available for more years, the observed signal would be different (as the authors show shortly after for SST). In other words, there is substantial uncertainty in the observed response.
Because of this, I don't think it makes sense in the text to compare the model to observations quantitatively, nor to focus on the details of the response, as the observational signal is fundamentally unknown (e.g. Deser et al. 2017, Journal of Climate, on ENSO teleconnections) and the model SSTs are not the same as the observed SSTs.
Stated another way, I would expect the model response to be weaker than obs because the SST response shown in figure 2 is weaker in the ENSO region.
Stated a third way, if we had a gridded, observed precip product for the period 1953 to the near present, I speculate that the agreement with the model would be better.
If the authors agree with my interpretation, the text itself in section 3.1 needs to be modified, though the main conclusions will be generally unchanged (and in fact, the model would actually become more suitable for the analysis the authors subsequently perform).
I also have a few comments on the discrepancy between Rao et al. and this paper. The first is that Rao et al. considered many models in which the QBO would be ill-defined at 70 hPa, so it would be impossible in such models to consider the role of the QBO at 70 hPa on impacts outside the QBO region. While the Met Office models do indeed have a too-weak QBO at 70 hPa, this model was actually one of the better ones in this regard (though its periodicity was too long, as the authors acknowledge). In order to have a common definition for all models, Rao et al. adopted the 30 hPa level for all models.
Second, Rao et al. identified a tropical convective signal associated with the QBO at 30 hPa which differs in its pattern from the one in this paper. Rao et al. also identified a robust signal in 100 hPa buoyancy frequency for this phase of the QBO in observations and in most models, including the Met Office models, which were among the best performing (Figure 9 of Rao et al.). My interpretation is not that the winds at 30 hPa have a direct effect on buoyancy frequency and convection, but rather that this is a convenient way to pick a particular phase of the QBO whose downward extension has a direct impact on the TTL. For this specific phase of the QBO, the Met Office models struggle to represent the convective impact even though they did a reasonable job with the buoyancy frequency anomalies at 100 hPa. This could be because of biases in the QBO itself (e.g. downward propagation to the lower stratosphere, or the overly long stalling of lower-stratospheric anomalies), or because of a small signal-to-noise ratio that a single ensemble member may miss (as the authors point out).
My own speculation/intuition based on the results from Rao et al. and the current paper is that there may be multiple QBO regimes with an impact on tropical convection, but future work is clearly needed to sort out whether this is indeed the case and why. While I agree that the 70 hPa level is best for diagnosing a direct impact on the TTL, the unfortunate reality is that nearly all models still struggle with the downward extension of the QBO to the lower stratosphere, with very little progress having been made recently and with few ideas on how to improve the situation (other than substantially more resolution, as suggested in Garfinkel et al. 2022; JAMES). Hence a focus on 70 hPa necessarily excludes many models which may still have teleconnections from the QBO higher up. I would suggest that, as a community, we should consider teleconnections associated with different QBO levels (e.g. both 70 hPa and 30 hPa), so as to be able to include models with relatively larger biases in the QBO in the lowermost stratosphere.
Performing such an analysis is well outside the scope of the authors’ paper, and specifically the authors could decide to not include any of it. However, the authors may want to include more about this sensitivity to QBO level and the nature of biases in most models in their discussion section.
Line 227: please rewrite "for the most part of the simulation"
Line 242: “However, the equatorial Atlantic and Pacific MAM responses are stronger when ENSO events are included.” This isn’t obvious to me from figure 4.
Table 1: I found the caption included for this table confusing. Are the stated units (“#months EN/# months W”) correct? Shouldn’t it be (“#months ENSO/# months QBO”)? Also, “standard deviation of the PDF” is confusing as well – I think you mean you did a bootstrapping in order to quantify the uncertainty of #months ENSO/# months QBO, but maybe I misread.
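To make the bootstrap suggestion concrete, here is a minimal sketch of how the uncertainty in a count ratio like (# months EN) / (# months W) could be quantified. The month labels below are synthetic, purely for illustration, and not the authors' data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic record standing in for the real one (illustrative only):
# True marks an EN month; assume all n_months are WQBO months.
n_months = 600
is_en = rng.random(n_months) < 0.25  # ~25% EN months, an arbitrary choice

# Point estimate: (# months EN) / (# months WQBO)
ratio = is_en.sum() / n_months

# Bootstrap: resample the months with replacement, recompute the ratio
n_boot = 2000
boot = np.empty(n_boot)
for i in range(n_boot):
    resampled = rng.choice(is_en, size=n_months, replace=True)
    boot[i] = resampled.sum() / n_months

# The spread of the bootstrap distribution is what I suspect the caption
# means by "standard deviation of the PDF"; a percentile interval works too.
ratio_sd = boot.std(ddof=1)
ci = np.percentile(boot, [2.5, 97.5])
```

Note that QBO and ENSO months are strongly autocorrelated, so a block resampling (e.g. winter by winter) would be more defensible than resampling individual months.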
Section 3.4 The word "explain" on lines 373, 399, and 440 seems overstated. There is no causal explanation here, as the authors note later. Rather, the authors are establishing a self-consistent framework or schematic that allows for connecting tropical anomalies in disparate regions.
Deser, Clara, Isla R. Simpson, Karen A. McKinnon, and Adam S. Phillips. "The Northern Hemisphere extratropical atmospheric circulation response to ENSO: How well do we know it and how do we evaluate models accordingly?." Journal of Climate 30, no. 13 (2017): 5059-5082.
Garfinkel, Chaim I., Edwin P. Gerber, Ofer Shamir, Jian Rao, Martin Jucker, Ian White, and Nathan Paldor. "A QBO Cookbook: Sensitivity of the Quasi‐Biennial Oscillation to Resolution, Resolved Waves, and Parameterized Gravity Waves." Journal of Advances in Modeling Earth Systems 14, no. 3 (2022): e2021MS002568.
This work by Garcia-Franco et al. looks at the relationships between
the QBO and tropical climate in observations and centennial pre-industrial
CMIP6 simulations with one coupled climate model.
The connections are difficult to diagnose from observations, so long simulations are useful.
The paper is overall interesting and covers many topics, but the authors
should check the consistency of the symbols and names used (see comments by line).
It can be confusing to read different acronyms for the same quantities.
The units reported in the plots should be verified.
Given the central role of model simulations, more information on its
skill at simulating QBO and ENSO should be provided. For example, how realistic
is the QBO amplitude at 70 hPa for this specific model?
Apart from composite differences, some climatologies should be discussed.
In the introduction reference to Geller et al., 2016 on gravity wave changes would fit.
Model-dependence of the results should be stressed, since different
configurations of a single model are analysed and QBO/SST biases may play a big role.
The causality analysis on how the QBO influences ENSO is not very convincing as it stands.
I guess the authors should also say something about the frequency of LN/EN
events during neutral QBO (QBO-N).
The section about monsoons should be revised and maybe shortened, since QBO
surface impacts may be very dependent on any QBO bias. For example, Giorgetta
et al. 1999 (cited) nudged to the QBO, so it was realistic in their case.
The data description should be modified to provide pertinent information.
Specific comments by line
L52, maybe 'on the convective process'?
L55, define 'CMIP', rephrasing L62
L63, are GWs tied somehow to sources?
L76, both monthly means?
L82, is there a reason for not using the standard 0.25x0.25?
L83, it is a bit strange to put the (generic) link only for ERA5;
I would move to data availability with direct links for all datasets
(for ERA5 https://cds.climate.copernicus.eu/cdsapp#!/dataset/reanalysis-era5-pressure-levels-monthly-means?tab=overview) and proper citation https://confluence.ecmwf.int/pages/viewpage.action?pageId=197704114
L90seq, define 'N' and 'ORCA' for the components resolution
L91, UKESM or UKESM1?
L94, So 3 simulations in total? Would be good to state that you have two models
with lower resolution and one with better resolution, which is the main interest.
L96, I would move to 'data availability' or similar
L98, Here or later I would add something about some relevant model properties (e.g.
both models have more spectral power in 2-3 years compared to observations).
Also ocean resolution seems to be important for mean biases, and the
realism of the ITCZ should be mentioned as well.
L105, What about UKESM1? Not sure why only HadGEM is mentioned.
L111, years or months?
L117, Which levels? Above you just mention 70 hPa.
L119, '1' and '2' are subscripts
L124, Does the product you use (GPCP?) for this index provide convective and stratiform
precipitation separately, or is it total precipitation? If the latter, remove 'convective'
(here and in all following instances).
Can you explain why you use a precipitation-based IOD index rather than the standard SST-based one?
Please add a reference if it was used before.
L125, I'd use the same style for EN3.4.
L133seq, This symmetry seems strange (given the ENSO asymmetry and QBO stalling); can you provide numbers?
L135, This is 'observed' for ERA5? Can you provide the values for HadSST? It is useful to compare model/observation statistics.
L140, Maybe start with 'When estimating correlations, they are...'
Fig 1, 'mm day-1' in brackets, or move 'pr' to title
L159, Please comment on the ITCZ realism.
L162, Add reference
L206, I guess it would be useful to have a table in the methods section with the
different numbers for ENSO and QBO. Why 120, does it have a special meaning?
L209, But the wet anomaly in the Pacific and dry in the Atlantic are more marked with ENSO included.
This is also seen in Fig5.
Fig5, If regression coefficients are re-scaled (caption), then a prime is missing in a&d titles.
See Supplement as well.
L214, (1) -> (Fig. 1)
L216, it was EN3.4 before
L221, why no significance in FigS3?
L225, mention Gray et al., 1992
Fig6, I'd use E and W for QBO in (b). Moreover I would define once all the acronyms
(EN, LN, E, W) in the methods and be consistent throughout (no 'ea', EN3.4, etc.).
Suggest NE or NN for neutral ENSO. Moreover, would the plot be easier to read with the
boxplots ordered as LN/NE/EN? Why not show the E and W phases separately for the amplitude?
L238, Have you stated which level are descent rates for? From the methods I got that the
amplitude is integrated in the 10-70 layer, but descent rate is by level.
L246, See Geller et al 2016 about GW variations.
L252, So the frequency would be for example (# months EN) / (# months W)?
Maybe mention that IOD will be considered later?
L260, ENSO3.4 -> EN3.4 (or maybe ENSO)
Fig 7, brackets missing around 'mm day-1' (check other plots as well). I guess IOD-prc is the same as IOD?
L266, write months in full. Can you elaborate on how the model/obs difference
depends on the ENSO evolution in the model (e.g. Lengaigne et al., 2006)?
Also worth noting that the model index amplitudes are 2-3 times smaller than observed.
Fig8, as before, why 'convective'? Why now using a higher confidence level?
L275, but could this be model-dependent?
L280, Please avoid the mix of abbreviations and months in full
L286, Maybe the Indian Ocean sector, rather than IOD?
L293, why '.'?
L295, atmospheric circulations. However, the model biases should be noted.
L300, How are these longitudes selected?
Fig9, Only convective, stratiform rainfall removed? Is panel (b) indicating a double ITCZ bias?
Can you comment in the text?
L317, remove 'rate'
Fig10, define acronyms MSD, NAM. For more direct comparison you could mask values
over oceans? Do you know why the regions show very sharp boundaries in some cases?
Compare with Lee and Wang, 2012 their Fig4
Fig11, I am confused by the vector sizes. They are 3 or 0.3 ×10-2 Pa s-1,
but their lengths do not differ by a factor of 10. Please clarify.
Also the plots are quite busy, can you try improving them?
L330, Mention the QBO biases which may be important
L335, If you integrate to the top, then the integration bounds are swapped
and 0->p_top (or p_surf)? Gravitational acceleration (g) rather than constant (G)?
How do you compute the divergent component of zonal wind?
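For reference, the mass-weighted vertical integral I take L335 to define is (1/g) ∫ X dp from p_top to p_surf; a minimal numpy sketch (made-up pressure levels and a constant test field, not the authors' data) shows why the bound ordering and lowercase g matter:

```python
import numpy as np

g = 9.81  # gravitational acceleration (m s-2), lowercase g, not the constant G

# Pressure levels from the surface up to the model top (Pa), decreasing
p = np.array([100000.0, 85000.0, 70000.0, 50000.0, 30000.0, 10000.0])

# Constant test field X = 1 on those levels, so the integral should
# reduce to the column mass per unit area, (p_surf - p_top) / g
x = np.ones_like(p)

# Trapezoidal (1/g) * integral of X dp from p_top down to p_surf;
# written with (p[i] - p[i+1]) so the decreasing levels give a positive sum
col = np.sum(0.5 * (x[:-1] + x[1:]) * (p[:-1] - p[1:])) / g
```

(The divergent wind component itself is usually obtained from a Helmholtz decomposition of (u, v), e.g. via a spherical-harmonics package; the paper should state which method was used.)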
L351, To me some QBO/ENSO superposition can also be seen from the plots.
L406, or role of QBO bias...
L416, have you ever mentioned TRMM in the text?
L420, 'observations' -> 'variables'
L422, revise: you speak about days, but I understand the input data are monthly means,
so is this weighting already built in? Does the MOHC model use a 360_day calendar?
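To illustrate the weighting question: with a Gregorian calendar, day-weighting monthly means shifts the annual mean slightly, whereas with a 360_day calendar (equal 30-day months) it is a no-op. A small sketch with hypothetical monthly values:

```python
import numpy as np

# Hypothetical monthly-mean values for one non-leap year (illustrative only)
monthly = np.arange(1.0, 13.0)

# Month lengths: Gregorian (non-leap) year vs. a 360_day calendar
days_gregorian = np.array([31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31])
days_360 = np.full(12, 30)

annual_unweighted = monthly.mean()
annual_gregorian = np.average(monthly, weights=days_gregorian)
annual_360 = np.average(monthly, weights=days_360)  # identical to unweighted
```

So if the model runs on a 360_day calendar, the day-weighting described at L422 changes nothing for the model, and the text should say which calendar applies.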
L435, I guess the 'i' subscript is redundant with one predictor? Same in Fig S3
L440, State that summation is 'j=1...N', as X_0 appears already
L446, Is there a stray A3?
L513, why uppercase?
Geller et al JGRA 2016 https://agupubs.onlinelibrary.wiley.com/doi/10.1002/2015JD024125
Gray et al., JMSJ, 1992 https://www.jstage.jst.go.jp/article/jmsj1965/70/5/70_5_975/_article
Lee and Wang, CD, 2012 https://link.springer.com/article/10.1007/s00382-012-1564-0
Lengaigne et al., JC, 2006 https://journals.ametsoc.org/view/journals/clim/19/9/jcli3706.1.xml