The teleconnection of extreme El Niño–Southern Oscillation (ENSO) events to the tropical North Atlantic in coupled climate models
Jake W. Casselman
Joke F. Lübbecke
Tobias Bayr
Wenjuan Huo
Sebastian Wahl
Daniela I. V. Domeisen
- Final revised paper (published on 22 May 2023)
- Supplement to the final revised paper
- Preprint (discussion started on 07 Nov 2022)
- Supplement to the preprint
Interactive discussion
Status: closed
- RC1: 'Comment on wcd-2022-57', Anonymous Referee #1, 09 Jan 2023
The Teleconnection of Extreme ENSO Events to the Tropical North Atlantic in Coupled Climate Models
Casselman et al.
Recommendation: Publishable after moderate revisions
This paper continues the authors’ previous attempts to understand why the impact of ENSO on the tropical North Atlantic may be nonlinear. Several distinct mechanisms connect ENSO to the tropical North Atlantic, and the authors consider the relationship between ENSO and each of them as a function of ENSO strength. Two distinct modeling frameworks are included so as to form a more complete picture. The key result is that nonlinearities are present; however, most of them are accounted for by the nonlinear relationship between ENSO and tropical Pacific upper-level divergence.
The paper is mostly convincing, well-written, and certainly contributes to the literature. I have several comments that likely will require moderate revisions, but that hopefully will lead to an improved final version.
- In the paragraph from line 36 to 42, the authors note that the ENSO → TNA connection is affected not just by ENSO magnitude, but also by whether the event is Central Pacific vs. Eastern Pacific and by how quickly the event decays (and also possibly by an ENSO → NAO connection). The authors never return to this possibility (neither in the results nor the discussion). Are these effects truly negligible?
Even if no additional analyses are performed or included, I would expect the discussion section to return to these possibilities at the very least. In particular, some of the difference between the models could be due to differences in how each captures these possibilities, and not just model biases.
- The authors make a big deal of the one-month bias in CESM-WACCM in the ENSO → TNA connection; however, Figure 1 is based on three-month running means, and so it is hard to deduce biases of one month. Can you produce a version of Figure 1 without the 3-month running mean?
Further, the bias appears to be mostly associated with La Niña and not El Niño, and so would only affect the ENSO → TNA relationship for La Niña. However, nonlinearity is present for El Niño too.
- I had trouble following the logic behind Figure 2. The authors seem to be claiming that if a model misses the seasonal cycle of std dev of the five indices shown, then it must struggle to capture any possibility of an ENSO → TNA pathway through said mechanism. Given that the biases in Figure 2 are small in an absolute sense (i.e. the ordinate does not start at zero in Figure 2), I don’t think this argument is correct. Further, it is not clear why the seasonal cycle of std dev is particularly relevant, and not the std dev in late winter when the ENSO → TNA pathway is peaking.
While the std dev (and hence the mechanisms) could be too weak or too strong by ~10% or so, the biases are small! I don’t understand why the authors think the mechanism is grossly deficient if there is a 10% or 15% bias by this metric. Or perhaps I misunderstood the entire discussion of Figure 2…
- Discussion of Figures 3 and 5: the authors appear to be concluding that there are model deficiencies if the LOWESS grey cloud does not encompass the best-fit slope. However, shouldn't a proper hypothesis test take into consideration the uncertainty of the linear fit? That is, the linear fit also has uncertainty, and if this uncertainty is neglected, then the authors may be too easily discarding the null hypothesis that the model is doing a reasonable job.
- This is more a suggestion for future work than for the current paper: I wonder if Pacemaker type integrations (e.g. used originally for the hiatus, but now part of CMIP6) would be particularly helpful, as the tropical Pacific SSTs should be more reliable and hopefully also the Pacific divergence and Pacific mean state. If all models following this protocol have a similar tropical Pacific (both mean state and variability), then the causes of intermodel differences in teleconnections may be more easily understood. This suggestion is for future work, not the current paper which already has enough models included.
Minor comments
Line 79 “warm” misspelled
Citation: https://doi.org/10.5194/wcd-2022-57-RC1
- RC2: 'Comment on wcd-2022-57', Anonymous Referee #2, 26 Jan 2023
Title: The Teleconnection of Extreme ENSO Events to the Tropical North Atlantic in Coupled Climate Models
Authors: Casselman et al.
Remarks: In this article, the authors discuss the Pacific–Atlantic connections quantitatively through different ENSO teleconnection mechanisms. The authors use two coupled model simulations and compare them with one observational/reanalysis dataset. It is an interesting study, but in its current form it lacks clarity. The teleconnection mechanisms are quantitatively discussed without discussing the robustness of the indices simulated by the models and compared with observations, which is important to show the spatial diversity of the indices used in the study. Also, as the title says "extreme ENSO", I do not see much from which the authors can conclude nonlinearity in La Niña and linearity in El Niño. For that, they have to reconsider the entire methodology, with substantial analysis, which is currently missing in the manuscript. For example, if the authors just look at individual maps of the observational SST anomalies and precipitation anomalies for each extreme El Niño and La Niña, there will be large diversity from one event to another. Please see the detailed comments below. Unfortunately, in its present form I do not recommend this manuscript for publication in the journal, but I encourage re-submission.
Specific Comments:
- According to Figure 1 (TNA), the anomaly peaks in FMA in observations and even in the models too, except CESM. So, should the authors reconsider their analysis about MAM? The authors have used one observation and two models' output. To have a more robust peak, the authors may consider a couple of additional observational datasets, which will help them to provide a robust finding about the peak. For Figure 2b, the peak of TNA appears in MAM? Can you please clarify the difference in peak between the two figures?
- The authors have used several indices to explain teleconnections, such as TT, Secondary Gill, and the Southeastern Low index. I think it is important to see how well the models reproduce these indices in terms of spatial pattern before they are used as indices. These patterns may have spatial diversity among the models, and that is important to see first before going into detail. So, I suggest the authors show these patterns in the models and compare them with observations.
- What is the reason to shift the Niño definition from 3 months to 5 months? Because, to identify the peak in ENSO and TNA, a 3-month running mean was used.
- Figure 2: the authors looked at the seasonality of the teleconnection indices, which differs from index to index. Given that ENSO conditions are present, do you think that within each El Niño there are different mechanisms which may play a role in the teleconnection depending on seasonality? Or do they differ from one ENSO event to another?
- Figure 4: Here the authors have used a 5-month Niño3.4 index for a pointwise correlation, where they are looking at the total ENSO response without identifying the extreme ENSO events as discussed in Fig. 1? In Figure 4, the authors have used a total index without separating El Niño and La Niña. So how does this explain extreme ENSO? How does this explain the nonlinearity? Also, here the authors used the upper-level divergence index without looking at the spatial pattern; it is important to see the spatial pattern in the models and observations.
- Page 12, lines 275-280: Also, the authors are discussing teleconnections via 200 hPa divergence? How well do the models simulate the ENSO-precipitation teleconnections? It is important to see the spatial pattern before it is used as an index.
- Lines 325-335: The authors discuss the linearity of El Niño and the nonlinearity of La Niña, but it is not clear how. Also, how are linearity and nonlinearity related to the preconditioning of SSTAs? What does this mean?
- Please check for typos, for example, line 79: "..... warn troposphere ....." could be "warm troposphere"?
Citation: https://doi.org/10.5194/wcd-2022-57-RC2
- AC1: 'Comment on wcd-2022-57', Jake Casselman, 20 Mar 2023
PDF version attached in supplement
Response to Reviewers
We want to thank all reviewers for their insightful reviews of our manuscript and for the time taken to review our work. Please find our detailed responses to the reviewers’ comments and suggestions below.
The changes have been included in the manuscript (indicated in bold in the annotated manuscript). All line indications refer to the new (annotated) version of the manuscript.
The main changes to the manuscript are listed here:
- Inclusion of several new supplementary figures, which show the spatial pattern of each of the major mechanism fields for CESM-WACCM, FOCI, and reanalysis, to compare the mechanisms before moving to index creation.
- The introduction of bootstrapped linear fits, which allows for more robust descriptions of the significance of any nonlinearity (Figs. 1, 3, and 5).
- Clarifications throughout the manuscript to improve readability.
Reviewer 1:
Question 1:
In the paragraph from line 36 to 42, the authors note that the ENSO → TNA connection is affected not just by ENSO magnitude, but also by whether the event is Central Pacific vs. Eastern Pacific and also by how quickly the event decays (and also possibly by an ENSO → NAO connection). The authors never return to this possibility (neither in the results nor the discussion). Are these effects truly negligible? Even if no additional analyses are performed or included, I would expect the discussion section to return to these possibilities at the very least. In particular, some of the difference between the models could be due to differences in how each captures these possibilities, and not just model biases.
Answer:
In our manuscript, we presented a supplementary figure that displays the spatial distribution of each mechanism. Originally referenced as Figure S9 in the concluding remarks, it is now labeled as Figure S13. As previously mentioned on lines 324 and 326 (in the updated manuscript), our analysis revealed that CESM-WACCM exhibits a more eastern pattern than FOCI when compared to reanalysis. To provide further insight into this finding, we added a new supplementary figure, Figure S4, which illustrates the SST pattern associated with the Secondary Gill index for each model and reanalysis. This figure expands on the longitudinal differences and helps to support our argument. Finally, we included a concluding statement in lines 389 to 393 that emphasizes the importance of future studies in this area.
Question 2:
The authors make a big deal of the one-month bias in CESM-WACCM in the ENSO → TNA connection; however, Figure 1 is based on three-month running means and so it is hard to deduce biases of one month. Can you produce a version of Figure 1 without the 3-month running mean? Further, the bias appears to be mostly associated with La Niña and not El Niño, and so would only affect the ENSO → TNA relationship for La Niña. However, nonlinearity is present for El Niño too.
Answer:
Thank you for the suggestion. We have created the equivalent of Figure 1 using only monthly values, which you can find below. The inclusion of JRA-55 (left) shows that CESM-WACCM (center) is still early by one month (March), while FOCI (right) peaks around April. However, it is a very good point that the biases are more associated with La Niña, so we have included this on lines 173-174.
Question 3:
I had trouble following the logic behind Figure 2. The authors seem to be claiming that if a model misses the seasonal cycle of std dev of the five indices shown, then it must struggle to capture any possibility of an ENSO → TNA pathway through said mechanism. Given that the biases in Figure 2 are small in an absolute sense (i.e. the ordinate does not start at zero in Figure 2), I don’t think this argument is correct. Further, it is not clear why the seasonal cycle of std dev is particularly relevant, and not the std dev in late winter when the ENSO → TNA pathway is peaking. While the std dev (and hence the mechanisms) could be too weak or too strong by ~10% or so, the biases are small! I don’t understand why the authors think the mechanism is grossly deficient if there is a 10% or 15% bias by this metric. Or perhaps I misunderstood the entire discussion of Figure 2…
Answer:
Thank you for your detailed explanation. First, we did not mean to say that the ENSO → TNA pathway is struggling, but only that the timing of the mechanism may shift. In fact, we explicitly state in our discussion that the teleconnection between ENSO and TNA SSTAs is well represented in both models we used (CESM-WACCM and FOCI): “Using ensemble simulations from two CGCMs, namely CESM-WACCM and FOCI, we show that overall, the teleconnection between ENSO and the TNA SSTAs is well represented in both models.”
Second, we agree with your point about the small biases and have made adjustments to the language to reflect this. Additionally, we chose to show the seasonal cycle in our analysis, which includes when the pathway is peaking, instead of focusing only on the peak of the relationship. This allowed us to determine if the peaks moved forward or backward in reference to the peaks in reanalysis.
Third, we do not believe the models are ’grossly deficient’ due to biases, as this would contradict the remainder of our paper. To ensure that our manuscript does not give this impression, we have further edited the tone surrounding the discussion of Figure 2.
Overall, the changes include the following:
• On line 206, we have added an emphasis that the overall biases are small and point out to the reader that the origins of the graphs are not zero to ensure they are not misled
• On line 211-212, we made sure to emphasize that in absolute terms the bias is small
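To make the metric behind Figure 2 concrete, here is a minimal sketch of how the month-by-month (interannual) standard deviation of a teleconnection index can be compared between a model and reanalysis; the time series and variable names below are synthetic placeholders rather than the actual indices from the manuscript.
```python
# Minimal sketch of the Fig. 2 metric: the month-by-month standard deviation of a
# teleconnection index, compared between a model and reanalysis. All inputs are
# synthetic placeholders; names like tt_model and tt_reanalysis are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
n_years = 60
phase = np.linspace(0, 2 * np.pi, 12)
# Synthetic monthly index anomalies, shape (n_years, 12): a stand-in for e.g. the TT index.
tt_reanalysis = rng.normal(scale=1.0 + 0.2 * np.cos(phase), size=(n_years, 12))
tt_model = rng.normal(scale=1.1 + 0.15 * np.cos(phase - 0.5), size=(n_years, 12))

# Seasonal cycle of interannual standard deviation (one value per calendar month).
std_reanalysis = tt_reanalysis.std(axis=0)
std_model = tt_model.std(axis=0)

# Relative bias in percent; values of ~10-15% are the magnitudes discussed above.
relative_bias = 100.0 * (std_model - std_reanalysis) / std_reanalysis
months = ["Jan", "Feb", "Mar", "Apr", "May", "Jun",
          "Jul", "Aug", "Sep", "Oct", "Nov", "Dec"]
for m, b in zip(months, relative_bias):
    print(f"{m}: {b:+.1f}%")
```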
Question 4:
Discussion of Figures 3 and 5: the authors appear to be concluding that there are model deficiencies if the LOWESS grey cloud does not encompass the best-fit slope. However, shouldn’t a proper hypothesis test take into consideration the uncertainty of the linear fit? That is, the linear fit also has uncertainty, and if this uncertainty is neglected, then the authors may be too easily discarding the null hypothesis that the model is doing a reasonable job.
Answer:
Thank you for the great suggestion. We have now included a bootstrapped linear fit for Figures 1, 3, and 5, which helps to increase the robustness of our results. We have added an explanation of our methods on line 118 and have also included it within the text, including the following: On lines 184 to 191, we have updated the manuscript to compare the LOWESS and linear-fit shading, which is consistent with previous statements and enhances the robustness of our statements.
In the paragraph starting at line 225, we have included references to the shading, which allows us to discuss the significance better and increases the robustness of the concluding statement in this section (i.e., “Overall, these results clearly show that the tropical pathway towards the TNA is nonlinear, but there are inconsistencies between CESM-WACCM and FOCI. Namely, the nonlinearity for TT is much more significant in CESM-WACCM, and the nonlinearity for the Secondary Gill response is only present in FOCI, albeit it is not significantly different from the linear fit.”)
On Line 289, we are now referring to the shading directly, which makes our conclusions much more robust for the importance of the linearity with upper-level divergence.
In the paragraph starting at line 296, we have used shading to enhance the robustness of our statements about the linearity of the extratropical response in Figure 5. This figure examines the relationship between the mechanism and the 200 hPa divergence, rather than the Pacific SSTAs.
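For transparency, a minimal sketch of a bootstrapped linear fit of the kind now used for the shading in Figures 1, 3, and 5 is given below; the data, the number of resamples, and the percentile band are illustrative assumptions rather than the exact settings of the revised manuscript.
```python
# Minimal sketch of a bootstrapped linear fit, in the spirit of the confidence shading
# added to Figs. 1, 3, and 5. The data are synthetic, and the resampling choices
# (1000 samples, 5th-95th percentiles) are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(42)
x = rng.normal(size=200)                       # stand-in for a seasonal ENSO index
y = 0.4 * x + 0.15 * x**2 + rng.normal(scale=0.5, size=x.size)  # weakly nonlinear response

n_boot = 1000
x_grid = np.linspace(x.min(), x.max(), 100)
boot_fits = np.empty((n_boot, x_grid.size))
for i in range(n_boot):
    idx = rng.integers(0, x.size, size=x.size)          # resample (x, y) pairs with replacement
    slope, intercept = np.polyfit(x[idx], y[idx], deg=1)
    boot_fits[i] = slope * x_grid + intercept

# Percentile band of the linear fit; a LOWESS curve lying outside this band indicates
# a nonlinearity that the uncertainty of the linear fit alone cannot explain.
lower, upper = np.percentile(boot_fits, [5, 95], axis=0)
print(lower[:3], upper[:3])
```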
Question 5:
This is more a suggestion for future work than for the current paper: I wonder if Pacemaker type integrations (e.g. used originally for the hiatus, but now part of CMIP6) would be particularly helpful, as the tropical Pacific SSTs should be more reliable and hopefully also the Pacific divergence and Pacific mean state. If all models following this protocol have a similar tropical Pacific (both mean state and variability), then the causes of intermodel differences in teleconnections may be more easily understood. This suggestion is for future work, not the current paper which already has enough models included.
Answer:
Thank you for the suggestion, and we agree completely.
Question 6:
Line 79 “warm” misspelled
Answer:
Thank you for pointing this out. We have updated the manuscript.
Reviewer 2:
Question 1:
According to Figure 1 (TNA), the anomaly peaks in FMA in observations and even in the models too, except CESM. So, should the authors reconsider their analysis about MAM? The authors have used one observation and two models' output. To have a more robust peak, the authors may consider a couple of additional observational datasets, which will help them to provide a robust finding about the peak. For Figure 2b, the peak of TNA appears in MAM? Can you please clarify the difference in peak between the two figures?
Answer:
Figure 1 does not show the seasonal evolution of the TNA anomaly for observations, but this can be found in Figure S3, where the peak timing varies depending on the subsampled strength of ENSO. The same can be said for Figure 1, whereby depending on the strength of the subsampled ENSO event, the peaks also vary even within a model (for example, FOCI's TNA SSTAs peak in FMA during extreme La Niña, but in MAM during strong La Niña).
For our model results to be compared to reanalysis, we chose to stick with MAM even though the peaks may vary. The difference between the peaks in Figure 1 and Figure 2 is that Figure 2 does not subsample based on a specific ENSO type and uses the entire time series. We further explain this in the manuscript around line 180.
We have also included an extended comparison of the ONDJF Niño3.4 and MAM TNA SSTAs from 1854 to 2021 using ERSSTv5, with the result shown in Figure S5. This comparison shows that the robustness of the MAM peak of the TNA SSTAs increases; specifically, all categories except moderate El Niño peak in MAM.
Question 2:
The authors have used several indices to explain teleconnections, such as TT, Secondary Gill, and the Southeastern Low index. I think it is important to see how well the models reproduce these indices in terms of spatial pattern before they are used as indices. These patterns may have spatial diversity among the models, and that is important to see first before going into detail. So, I suggest the authors show these patterns in the models and compare them with observations.
Answer:
Thank you for your suggestion. We had previously plotted them as a preliminary analysis and have now included the figures in the Supplementary material (Figures S2-4). We have also now included text comparing the results within the main manuscript, on lines 160-163, to complement the methodology section where we introduce the indices.
Question 3:
What is the reason to shift the Niño definition from 3 months to 5 months? Because, to identify the peak in ENSO and TNA, a 3-month running mean was used.
Answer:
Thank you for your comment. As there is no strict definition for identifying ENSO events, we decided to use ONDJF instead of DJF for identifying events, as ONDJF has the largest variance (Wang et al., 2019). Picking the time period with the largest variance helps to reduce the chance of missed ENSO events (i.e., if the peak is early or late) and reduces the influence of intraseasonal variations in the tropical ocean. Trenberth (1997) also defines an ENSO event when the 5-month running mean of SSTAs in Niño3.4 exceeds 0.5 standard deviations for six consecutive months. Using an index based on a longer period of 5 months, as opposed to the standard 3 months for DJF, can better account for subtle variations in the onset and peak timing of ENSO when applying the filtering method.
Similarly, to achieve a higher resolution of the anomaly and accurately account for the timing of each mechanism's peak, we analyzed the anomalies over 3 months. However, unlike for ENSO events, where the timing of the peak is not a primary concern, our analysis of the 3-month averaged mechanisms places importance on accurately identifying the timing of the peak, thus justifying 3-month averages over 5-month averages.
Trenberth, K. E. (1997). The definition of El Niño. Bulletin of the American Meteorological Society, 78(12), 2771–2777. https://doi.org/10.1175/1520-0477(1997)078<2771:TDOENO>2.0.CO;2
Wang, B., Luo, X., Yang, Y. M., Sun, W., Cane, M. A., Cai, W., ... Liu, J. (2019). Historical change of El Niño properties sheds light on future changes of extreme El Niño. Proceedings of the National Academy of Sciences of the United States of America, 116(45), 22512–22517. https://doi.org/10.1073/pnas.1911130116
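To illustrate the difference between the two averaging windows, a minimal sketch of both identification approaches is given below; the monthly Niño3.4 series is a synthetic placeholder, and the thresholds simply follow the values quoted above rather than the exact criteria applied in the manuscript.
```python
# Minimal sketch of the two event-identification choices discussed above: an ONDJF-mean
# Nino3.4 index versus a Trenberth (1997)-style criterion based on a 5-month running mean
# exceeding 0.5 standard deviations for 6 consecutive months. The monthly Nino3.4 anomaly
# series is synthetic, and both thresholds here are illustrative.
import numpy as np

rng = np.random.default_rng(1)
n_years = 60
nino34 = rng.normal(scale=0.9, size=n_years * 12)   # placeholder monthly SST anomalies

# (1) ONDJF means: October of year t through February of year t+1.
ondjf = np.array([nino34[12 * t + 9: 12 * t + 14].mean() for t in range(n_years - 1)])

# (2) 5-month centered running mean, then require >= 0.5 std for 6 consecutive months.
kernel = np.ones(5) / 5.0
running = np.convolve(nino34, kernel, mode="same")
warm = running >= 0.5 * nino34.std()
event_months = np.zeros_like(warm)
for start in range(warm.size - 5):
    if warm[start:start + 6].all():
        event_months[start:start + 6] = True

print("ONDJF El Nino winters:", int((ondjf >= 0.5 * ondjf.std()).sum()))
print("Trenberth-style El Nino months:", int(event_months.sum()))
```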
Question 4:
Figure 2: the authors looked at the seasonality of the teleconnection indices, which differs from index to index. Given that ENSO conditions are present, do you think that within each El Niño there are different mechanisms which may play a role in the teleconnection depending on seasonality? Or do they differ from one ENSO event to another?
Answer:
Thank you for your comment. In Figure 2, we show the standard deviation of each index as it varies from season to season; these standard deviations are not associated with any ENSO condition, but rather use the entirety of the time series for each model. We clarify this in the updated text on line 200.
During each ENSO event, the various mechanisms are all likely active, but to varying degrees. In terms of differences between individual El Niño events, the importance of each of the different mechanisms, both in terms of the overall contribution to TNA SSTAs and the contribution from season to season, likely varies. For example, the extratropical pathway's anomaly tends to peak earlier than the two tropical pathway anomalies, which can then influence the relative importance each mechanism has in a given season (compared to the other mechanisms).
Not only does the significance of each mechanism vary across different seasons, but it also exhibits variability when comparing individual ENSO events. For example, even if we compare two supposedly equivalent El Niño events (i.e., equal SSTA magnitude and SSTA pattern), the peak magnitude of the mechanisms can change due to other factors within the atmosphere or ocean. And even if the anomalies of the mechanisms are equal, the mechanisms may still produce different influences on the TNA SSTs, as the coupling between the atmosphere and ocean can also be influenced by interannual variability.
We expand on a similar subject in our paper, Casselman et al. (2021), using an AGCM, but dissecting these differences within this current paper is outside its intended scope.
Casselman, Jake W., Bernat Jiménez-Esteve, and Daniela I. V. Domeisen. "Modulation of the El Niño teleconnection to the North Atlantic by the tropical North Atlantic during boreal spring and summer." Weather and Climate Dynamics 3.3 (2022): 1077-1096. https://wcd.copernicus.org/articles/3/1077/2022/
Question 5:
Figure 4: Here the authors have used a 5-month Niño3.4 index for a pointwise correlation, where they are looking at the total ENSO response without identifying the extreme ENSO events as discussed in Fig. 1? In Figure 4, the authors have used a total index without separating El Niño and La Niña. So how does this explain extreme ENSO? How does this explain the nonlinearity? Also, here the authors used the upper-level divergence index without looking at the spatial pattern; it is important to see the spatial pattern in the models and observations.
Answer:
Thank you for your comments. Figure 4 was created because it was unclear which regions of divergence are most important, so we first used a correlation analysis to get a better idea of where we should construct our indices (explained on line 267 of the updated manuscript). We then used the peaks of the correlation to construct the indices.
In terms of how this explains the nonlinearity, it helps to show any shifts in the linear regions, but only in a descriptive way. Regarding how Figure 4 relates to extreme ENSO events, as it is a linear correlation, one cannot comment on such factors. At its core, the intent of Figure 4 was to justify the use of specific divergence indices over the Pacific, which we then use in Figure 5. Figure 5 is where we aim to answer questions pertaining to the nonlinearity and extreme ENSO influences, not Figure 4.
Thanks for your feedback on the spatial pattern. We have now included the upper-level (200 hPa) divergence composites in the Supplementary material (Figure S8) and made reference to them in the main manuscript, including that the composites complement the observed westward shift of CESM’s correlation compared to JRA-55 and FOCI (line 275).
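For completeness, a minimal sketch of such a pointwise correlation map is given below; the divergence field and index are synthetic placeholders, and the sketch makes no claim about the exact regions selected for the indices in the paper.
```python
# Minimal sketch of the pointwise correlation behind Fig. 4: correlating a seasonal
# Nino3.4 index with 200 hPa divergence at every grid point to see where a divergence
# index could be placed. All fields here are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(7)
n_years, n_lat, n_lon = 60, 30, 72
nino34_ondjf = rng.normal(size=n_years)
# Synthetic 200 hPa divergence anomalies with one ENSO-correlated patch built in.
div200 = rng.normal(size=(n_years, n_lat, n_lon))
div200[:, 10:15, 30:40] += 0.8 * nino34_ondjf[:, None, None]

# Pointwise Pearson correlation over the time dimension.
nino_std = (nino34_ondjf - nino34_ondjf.mean()) / nino34_ondjf.std()
div_anom = div200 - div200.mean(axis=0)
corr = np.einsum("t,tij->ij", nino_std, div_anom) / (n_years * div200.std(axis=0))

# The correlation maxima suggest where to average the divergence into an index.
peak = np.unravel_index(np.abs(corr).argmax(), corr.shape)
print("peak correlation", corr[peak].round(2), "at grid point", peak)
```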
Question 6:
Page 12, lines 275-280: Also, the authors are discussing teleconnections via 200 hPa divergence? How well do the models simulate the ENSO-precipitation teleconnections? It is important to see the spatial pattern before it is used as an index.
Answer:
Thank you for your feedback. To understand how well the models simulate the ENSO upper-level response and to ensure it was sufficient before using the field as an index, we had previously created Figure S9, which shows the 200 hPa divergence biases both in a latitudinal averaged way (a-b) as well as the spatial pattern (c-d).
Question 7:
Lines 325-335: The authors discuss the linearity of El Niño and the nonlinearity of La Niña, but it is not clear how. Also, how are linearity and nonlinearity related to the preconditioning of SSTAs? What does this mean?
Answer:
Thank you for your feedback. In lines 330-333 we summarize the findings from Casselman et al. (2021) as opposed to the results within the current manuscript. In lines 333-336, we relate back to the current study to explain what happens when the preconditioning influence is removed. To clarify the paragraph, we have made several edits, including:
• Emphasizing when we are talking about the results from Casselman et al. (2021) on line 338.
• Further expanding on what is meant by preconditioning and why it is important.
• Further explaining what is meant by the nonlinearity, i.e., that a nonlinearity in SSTAs refers to a plateau in the magnitude of the TNA SSTAs between strong and extreme El Niño.
Question 8:
Please check for typos, for example, line 79: "..... warn troposphere ....." could be "warm troposphere"?
Answer:
Thank you for pointing this out, we have updated the manuscript.