Achieving realistic Arctic-midlatitude teleconnections in a climate model through stochastic process representation
- 1University of Oxford, Oxford, United Kingdom
- 2Mathematics and Logistics, Jacobs University, Bremen, Germany
- 3Alfred Wegener Institute, Helmholtz Centre for Polar and Marine Research, Bremerhaven, Germany
Abstract. The extent to which interannual variability in Arctic sea ice influences the midlatitude circulation has been extensively debated. While observational data support the existence of a teleconnection between November sea ice in the Barents-Kara region and the subsequent winter circulation, climate models do not consistently reproduce such a link, with only very weak inter-model consensus. We show, using the EC-Earth3 climate model, that while a deterministic ensemble of coupled simulations shows no evidence of such a teleconnection, the inclusion of stochastic parameterizations in the ocean and sea ice component of EC-Earth3 results in the emergence of a robust teleconnection comparable in magnitude to that observed. We show that this can be accounted for entirely by an improved ice-ocean-atmosphere coupling due to the stochastic perturbations. In particular, the inconsistent signal in existing climate model studies may be due to model biases in surface coupling, with stochastic parameterizations being one possible remedy.
-
Notice on discussion status
The requested preprint has a corresponding peer-reviewed final revised paper. You are encouraged to refer to the final revised version.
-
Journal article(s) based on this preprint
Kristian Strommen and Stephan Juricke
Interactive discussion
Status: closed
-
RC1: 'Comment on wcd-2021-61', Anonymous Referee #1, 04 Nov 2021
This study examines the effects of stochastic parameterizations on the link between November sea ice and the winter NAO in historical coupled model simulations. The authors find that in the control simulations the connection is weak and opposite of what is seen in observations. When they add the stochastic parameterizations to the ocean and sea ice model components, the connection between the November sea ice and winter NAO switches sign and becomes closer to the observed value. They attribute the differences to improved ice-ocean-atmosphere coupling. I think that this is an interesting study with potentially important results worthy of publication. However, I do have a number of issues that need to be addressed, so my recommendation is for major revisions.
Major comments:
1. The use of different sea ice regions for the model and observations is problematic. The authors have correlated the NAO with sea ice concentration at all gridpoints and cherry-picked the regions with the largest correlations (which differ between the model and observations). Given the weak correlations combined with large internal variability, there is a good chance the internal variability is contributing to the regions with the highest correlations. This means all the subsequent analysis and discussion about statistical significance is not reliable because the region was not selected a priori. The authors should use the Barents-Kara (BK) Sea for both observations and model correlations. I don’t even think this will have that large of an effect on the analysis and conclusions because there are clearly differences in correlations over just the BK Sea (Figure 4).
The justification for this is not at all convincing. The authors claim that because models have different biases, the regions with the most sea ice variability are different across different models and the real world. However, the sea ice in the BK Sea in the OCE does not look that different from that in ERA5, so I don’t see why they cannot use the same region. The leading EOF in ERA5 looks very similar around the BK region (Figure 2). I can see maybe shifting the regions slightly to account for biases (e.g. if the ice edge is 1° too far south in the model, shift the region definition 1° to the south), but to use a very different region is not justifiable and introduces additional issues.
2. The model the authors use may be an outlier and the results may not be that relevant to other models. This is very briefly mentioned in the discussion, but I think there are reasons to think this may not work as well in other models. Most models tend to have a weak connection between reduced sea ice and a negative NAO. In addition, as mentioned in the introduction, model experiments forced with reduced sea ice also tend to show a weak negative NAO response. However, the control model used here shows the opposite sign correlation compared to most models, and a previous study (Ringgaard et al. 2020, doi:10.1007/s00382-020-05174-w) shows that a version of this model shows no NAO response to reduced sea ice in the BK Sea. In addition, the improved correlations in the OCE version are still weak. Could it not be the case that the OCE is just improving the flaws in this particular model, which brings it more in line with other models? This would then mean that applying the same methods in other models may not have as large of an effect.
3. The authors claim that mean state changes cannot explain the differences, but I don’t find their arguments that convincing. They argue that the AMIP ensemble with prescribed SSTs and sea ice shows weak correlations. First of all, taking the correlations of the AMIP ensemble at face value would suggest that close to half of the difference can be explained by the mean state. Second, there are many other differences related to the coupling of sea ice and SSTs that could cancel out the improvements made by correcting the mean state biases in the AMIP experiments. It is likely that the improved mean state explains at least some of the differences and it can’t be ruled out that it is the entire explanation.
4. The authors conclude that the link between sea ice and the NAO is stronger because of improved ice-ocean-atmosphere coupling. This is a bit vague and could be investigated a little further. What about the coupling is actually being improved? Because the authors argue that coupling on short timescales can explain the difference, there could be a lot of value in doing similar analysis to what was done in Figure 7, but with other variables. For example, does the OCE ensemble have a stronger upward heat flux and temperature response following reduced sea ice?
5. The title and abstract need to be more specific. Many different links between the Arctic and the midlatitudes have been hypothesized via a number of different mechanisms. It is misleading to refer to Arctic-midlatitude links very generally, when the authors have only investigated one specific link between November Barents-Kara sea ice and the winter NAO in interannual variability. Even with this correlation, the authors have only looked at one mechanism (they have not investigated the stratospheric mechanism).
Other comments:
L35: What is meant by ‘More seriously’? Are the model experiments with imposed sea ice anomalies not serious?
L35-38: Another recent study that could be cited/discussed here is Siew et al. 2021 (doi:10.1126/sciadv.abg4893).
L30-42: Somewhere in this discussion it should be mentioned that observed correlation seems to be highly intermittent when looking at the much longer record (Kolstad and Screen 2019, doi:10.1029/2019GL083059). In the middle of the 20th century, the sign of the connection appears to be opposite compared to the recent period.
L38-42: This is not an accurate description of Blackport et al. 2019. This study has nothing to do with the connection between November BK sea ice and the winter NAO and is not that relevant for this study. A much more relevant study that argues that the correlation between November BK sea and winter NAO may not be causal is Peings, 2019 (doi:10.1029/2019GL082097).
L41: Warner et al. 2020 do not suggest tropical forcing as a common driver of sea ice and the NAO. They did suggest this may be the case for other aspects of the mid-latitude circulation, but not the NAO.
L198-207/Figure 1: The main takeaway from this is that OCE reduces the sea ice everywhere. The changes in variability are also entirely consistent with just a reduction in sea ice extent everywhere.
Figure 1 and 3: I think that it would be more useful to show plots for OCE-ERA5 as well to make the improvements easier to see.
Figure 2: What does the sea ice variability look like in the Barents-Kara sea in CTRL? There is substantially less variability connected with the EOF1, but is that because it is in other EOFs or because there is substantially less variability? I don't think it is the latter based on Figure 1.
L218: sea surface temperatures
L243: Blackport et al. 2019 did not do this and has little to do with the NAO.
L279-281: I don’t understand this. The Bering sea is a completely different region which will have different impacts on the circulation, so I don’t see how it can be the equivalent to the BK Sea.
L281-283: There has been a lot more work looking at the response/correlation to sea ice in different regions than what is portrayed here (e.g. Screen 2017, doi:10.1175/JCLI-D-16-0197.1, McKenna et al. 2017, doi:10.1002/2017GL076433, Blackport et al. 2019). The reason there has been more on the Barents-Kara is because there are stronger links in both observations and models.
L307: I don’t think any study, including Koenigk and Brodeau (2017), states that the observed signal is a spurious signal. This study, and others like it, express caution that it could be. There is a lot of internal variability, and spurious signals can arise in model simulations of similar length to the observed record even when there is no/weak signal overall. It is also the case that the recent observed correlation appears to be unusually high compared to the longer record (Kolstad and Screen 2019).
Figure 5a: The fact that all simulations start off with higher correlations than over the whole period intrigues me. Because all simulations start from the same ocean state, is it possible that they happened to be initialized in a particular state of low frequency variability that contributes to a stronger correlation?
L317-319: I don’t understand why that would suggest it is coincidental. You wouldn’t be able to rule it out, but that is very different from suggesting that it is.
L322-323: Is it actually the case that the correlation in each 30 year period is statistically significantly different from 0? I doubt that this is the case given that some 30 year periods show correlations close to 0.
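As a rough check of this doubt, the smallest correlation that reaches two-sided 5% significance in a 30-year window can be computed from the standard t-test for a correlation coefficient. This is our own back-of-envelope sketch (the critical t value is hardcoded, and autocorrelation is ignored, which would raise the threshold further):

```python
from math import sqrt

n = 30          # years in each window
t_crit = 2.048  # two-sided 5% critical t value for df = n - 2 = 28

# Invert t = r * sqrt(n - 2) / sqrt(1 - r^2) for the threshold correlation
r_crit = t_crit / sqrt((n - 2) + t_crit**2)
print(round(r_crit, 2))  # -> 0.36
```

So any 30-year correlation smaller in magnitude than roughly 0.36 would not be distinguishable from zero, supporting the reviewer's scepticism about periods with near-zero correlations.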
L328: How often do they attain correlations that exceed the observed correlation?
Figure 5b: I think it is misleading to plot it this way because the overlapping 30 year periods are obviously not independent. There are really only about 6 independent data points in the OCE distribution. I don’t doubt that the differences are statistically significant, but this plot likely exaggerates the perceived significance.
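The independence concern can be illustrated with a quick sketch (our own illustration using made-up white-noise series, not data from the paper): overlapping 30-year windows over a ~165-year record yield well over a hundred rolling correlations, but only about five non-overlapping, genuinely independent ones.

```python
import numpy as np

rng = np.random.default_rng(0)
n_years, window = 165, 30  # illustrative record length and window size

# Two independent white-noise "indices": any rolling correlation is chance
ice = rng.standard_normal(n_years)
nao = rng.standard_normal(n_years)

# Overlapping 30-year correlations, as in a sliding-window plot
rolling = [
    np.corrcoef(ice[i:i + window], nao[i:i + window])[0, 1]
    for i in range(n_years - window + 1)
]

print(len(rolling))       # 136 overlapping windows shown in such a plot
print(n_years // window)  # only 5 independent windows
```

Adjacent windows share 29 of 30 years, so the plotted points are strongly autocorrelated, which is exactly why a distribution built from them exaggerates the apparent sample size.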
L350-352: Isn’t it more relevant to know whether or not these correlations are statistically different from the correlations in OCE or CTRL?
L360-368: The regressions of November zg500 on November sea ice are likely not the response to the sea ice anomalies (at least not entirely). Instead, a large part of it is the atmospheric circulation that forces the sea ice anomalies. The sign of the NAO is opposite to what would be expected if it was the response. Unless the authors are arguing that the initial response to reduced sea ice is a positive NAO, but that contradicts what is shown in Figure 7.
L380-385: This negative feedback between the sea ice and NAO was identified in a number of studies including Strong et al. 2009, doi:10.1175/2009JCLI3100.1.
L435: This is not reproducing the result of Blackport et al. 2019. They examined the regression between winter circulation and winter sea ice, not November sea ice.
L425-456/Figure 9. I am not sure I understand the point of this analysis. The authors have already established the feedback between sea ice and the NAO, so I don’t see how the NAO forcing of the sea ice could explain the difference between OCE and CTRL. There could potentially be a stratospheric pathway where there are causality issues, as suggested by Peings 2019, but the authors have effectively argued against this being the reason for the improvement by showing that the difference can be entirely explained based on the daily coupling. The authors should more clearly explain the motivation for it, or remove it.
L463: Figure 9 -> Figure 10
L516: How would the varying model biases contribute to the inconsistencies within long simulations from a single model? Note that there also appears to be large inconsistencies between short periods in observations as well (Kolstad and Screen 2019).
-What do the trends in NAO look like? If the improved correlations represent a response to sea ice loss, it may be expected that there are more negative NAO trends in the OCE simulations. This could have implications for the midlatitude response to sea ice loss and global warming, not only for seasonal predictions. This may be a bit beyond the scope of the study, and a larger ensemble may be needed to find robust differences, but it would be really simple to check.
-
AC1: 'Reply on RC1', Kristian Strommen, 14 Feb 2022
We thank the reviewer for their constructive and insightful feedback. Before responding to the key points made, we need to point out that last November the authors were able to obtain additional supercomputing units and have since used these to double the ensemble size from 3 to 6. The new ensemble members have proven to be consistent with the original members, which adds considerable confidence to the hypothesis that the stochastic schemes are genuinely improving the teleconnection. All Figures and diagnostics in the revised version of the paper have therefore been expanded to include these members. As a result of this, some details of the discussion have changed. Because most of these changes are in line with suggestions made by the reviewers, we hope this will not inconvenience the reviewers, who might rightfully wish we had waited to submit until this larger ensemble was available. Unfortunately, it was not clear prior to submission whether the required computing units would be obtainable.
- Concerning the choice of sea ice region, we agree that a differing choice for the model and observations leaves us open to accusations of cherry-picking, and at the very least some discussion of sensitivity of results to the choice should have been included. We have now done more extensive testing of the use of different regions and can report the following. If one uses Barents-Kara for all data sets, then the conclusions are qualitatively similar, in that there is a consistent improvement of the ice-NAO correlations when adding stochasticity, and these improvements can be explained using the LIM model. However, quantitatively speaking the results are somewhat weaker, with the correlations in OCE being generally smaller (and not as comparable in magnitude to ERA5) when using Barents-Kara as opposed to Barents-Greenland. We also found that using just the Barents sea for the model gave quantitatively almost identical results to using Barents-Greenland, and the increased ensemble size now singles out the Barents sea anyway (revised Figure 4). On the other hand, the Barents November sea ice in ERA5 has zero correlation with the NAO: it is definitely necessary to extend the region out to the Kara sea for ERA5.
After careful consideration, we believe it is still justifiable to somewhat adjust the sea ice region in the model compared to observations. The results discussed above have led us to use Barents-Kara for ERA5 and Barents for EC-Earth. The difference between the two regions is therefore even smaller now, with EC-Earth simply omitting the Kara sea. An equivalent table to Table 1 which uses Barents-Kara for all data sets will be included in Supporting Information of the revised paper, and we will clearly highlight and discuss the fact that qualitatively (but not quantitatively) similar results are obtained with this uniform choice. We hope this will go a long way towards addressing the reviewer’s objections.
We now expand on our justification. There are two key points. The first is that both the mean state and the seasonal evolution of the sea ice edge are clearly different in CTRL compared to ERA5. It’s true that the bias of CTRL and OCE in the mean sea ice in the Kara sea (Figure 1a,b) is on the order of 10% less ice than in ERA5, and this is not huge on the face of it. But the biases in the standard deviation (Figure 1c) clearly point to a big change in how far equatorward the ice edge tends to extend each year: the sign of the pattern (negative near the pole, red equatorwards) says that in CTRL, the ice edge tends to extend further outwards. This is important because the heatflux anomalies are dominated by variations in the location of the ice edge: if the ice edge has moved, so will the largest heatflux anomalies. The 10% difference in the mean state is therefore in all likelihood misleadingly small, smoothing out more important interannual variations in the ice edge in the Kara sea. This change in the seasonal ice edge evolution in EC-Earth3 is further corroborated by the visibly different EOFs (Figure 2). It is true, as the reviewer states, that the local magnitudes of the patterns in the Barents-Kara region are similar between ERA5 and OCE, but clear visible differences still remain. In ERA5, the typical November pattern is evidently an increase (decrease) of ice in Barents-Kara and a decrease (increase) in the Barents sea closer to Russia as well as in the Laptev sea. In OCE, the typical behaviour is an increase/decrease along the entire ice edge from Greenland up to Chukchi. In particular, sea ice anomalies in Barents-Kara may, in the model world, be expected to often come hand-in-hand with sea ice anomalies elsewhere that don’t look anything like those in observations.
Since it has been noted in previous papers ([1,2] and others that the reviewer themselves provide) that sea ice anomalies in regions other than Barents-Kara may have different, even opposing, impacts on the atmospheric circulation, we do not think such possible effects can be considered negligible.
The second key point is, as discussed in our paper, that there is evidence in the literature that the teleconnection depends on the atmospheric mean state, in particular the position of the storm track. Since the storm track is almost always biased to some degree in climate models, it does not seem unreasonable to suggest that the sea ice region in models best placed to interact with the storm track is slightly different than that in observations.
The fundamental issue here is that external forcing, including that from teleconnections, very often projects onto the dominant modes of variability (e.g. [3,4]). Not only do these differ between models and observations (Figure 2), but in the case considered here, there is non-linearity embedded at both ends: with sea ice as discussed in [1] and with the North Atlantic Oscillation in the visible multimodal behaviour of the jet [5]. We therefore take the view that model biases, in both the mean and the variability, cannot be easily ignored, and indeed many studies have examined the influence of such biases on teleconnections (e.g. [6] for just one recent example). There are also several precedents in the literature for using sea ice EOFs to compute Arctic-NAO teleconnections (e.g. Wang, Ting and Kushner 2017, or the Strong et al paper you pointed us to in your comments), and such approaches would inevitably highlight different regions in models vs observations. It is certainly true that allowing for regions or patterns to shift in models opens up the possibility of cherry picking, and so sensitivity to such shifts should be clearly discussed, which we failed to do. But the flip side is that allowing for no model-dependent diagnostics may overly penalise models and give the impression that model skill (or inter-model consensus) is weaker than it is.
It is the authors’ impression that there has perhaps been too little consideration in the literature of potential (small) shifts in the key sea ice region, and we think this is an important point that we wish to highlight as part of our work. The revised version will expand on all the above points to better justify the choice made. Of course, we accept that the reviewer may disagree on some or indeed all of the above points, or be of the opinion that a proper justification of the above points would require more work which would likely be inappropriate to include in this paper. We hope that if this is the case, our emphasis of the qualitatively similar results obtained with Barents-Kara, and the change from using Barents-Greenland to Barents for the model, will nevertheless allow you to consider your objection adequately addressed.

We would challenge the assertion that “most models tend to have a weak connection”. The range of correlations between Barents-Kara and the NAO found across the coupled CMIP6 models is very well approximated by a normal distribution with mean 0, standard deviation 0.17 and a 95% confidence interval of 0.28. While the exact mean of 0.018 is positive, almost half the CMIP6 models have negative correlations. The EC-Earth3 CTRL ensemble, with its average correlation of -0.06, is in no way an outlier in this distribution and is in fact dead average: this was extremely briefly noted in the submitted paper (line 337), and we have now made this more clear by revising Figure 5 to include the CMIP6 distribution. The inclusion of additional ensemble members has also now produced CTRL members with slightly positive correlations in the period 1980-2015, so there seems to be even less cause to find EC-Earth3 particularly objectionable. Its biases in the mean ice state are also in no way notably worse than many other models.
Note that the slightly positive mean of the CMIP6 distribution is consistent with findings in earlier literature reviews which report that "most" models show a positive association, but it is clear that this consensus is weak. Another point here is that many of the experiments carried out in the literature are not directly comparable with each other: e.g. many model experiments analysing the role of sea ice use fixed anthropogenic forcings, while the models we consider here are using historical forcings. This may account for any remaining discrepancies.
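For concreteness, the position of the CTRL correlation within the approximate CMIP6 spread quoted above can be computed directly. This is our own back-of-envelope check using only the numbers stated in the reply, not additional data:

```python
from math import erf, sqrt

mu, sigma = 0.0, 0.17  # approximate CMIP6 inter-model distribution (quoted above)
r_ctrl = -0.06         # mean ice-NAO correlation of the EC-Earth3 CTRL ensemble

# Standard score and percentile of CTRL within the CMIP6 spread
z = (r_ctrl - mu) / sigma
percentile = 0.5 * (1.0 + erf(z / sqrt(2.0)))
print(round(z, 2), round(percentile, 2))  # -> -0.35 0.36
```

That is, CTRL sits about a third of a standard deviation below the CMIP6 mean, with roughly a third of models expected to lie below it, consistent with it being unremarkable within the distribution.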
That being said, the point that the stochastic schemes may have differing impacts in other models should have been emphasised more. There are examples from earlier work which show consensus across models in some cases and lack of consensus in others. This will be expanded on in the revised manuscript.

The potential importance of the mean state is a point raised by all of the reviewers, and upon further consideration we agree. It is actually even clearer after having increased the ensemble size that coupling alone isn’t sufficient, and it is likely a combination of coupling and mean state. This will be expanded upon further in a response to another reviewer.
References:
1. Koenigk, T., Caian, M., Nikulin, G. et al. Regional Arctic sea ice variations as predictor for winter climate conditions. Clim Dyn 46, 317–337 (2016). https://doi.org/10.1007/s00382-015-2586-1
2. Sun, L., Deser, C., & Tomas, R. A. (2015). Mechanisms of Stratospheric and Tropospheric Circulation Response to Projected Arctic Sea Ice Loss, Journal of Climate, 28(19), 7824-7845.
3. Shepherd, T. Atmospheric circulation as a source of uncertainty in climate change projections. Nature Geosci 7, 703–708 (2014). https://doi.org/10.1038/ngeo2253
4. Corti, S., Molteni, F. & Palmer, T. Signature of recent climate change in frequencies of natural atmospheric circulation regimes. Nature 398, 799–802 (1999). https://doi.org/10.1038/19745
5. Woollings, T., Hannachi, A. and Hoskins, B. (2010), Variability of the North Atlantic eddy-driven jet stream. Q.J.R. Meteorol. Soc., 136: 856-868. https://doi.org/10.1002/qj.625
6. Karpechko, AY, Tyrrell, NL, Rast, S. Sensitivity of QBO teleconnection to model circulation biases. Q J R Meteorol Soc. 2021; 147: 2147–2159. https://doi.org/10.1002/qj.4014
-
RC2: 'Comment on wcd-2021-61', Anonymous Referee #2, 19 Nov 2021
Strommen and Juricke explore the question of why Arctic-midlatitude teleconnections in climate models are generally weaker than observed by employing a modified version of EC-Earth3 which includes stochasticity in the sea ice and ocean components. The study finds that the Nov. sea ice - winter NAO teleconnection is improved in the model integrations with stochasticity and this appears to be mainly related to stochasticity in the sea ice component. I find this result very interesting; however, like the authors, I am left wondering whether OCE may be getting "the right answer for the wrong reasons". I recommend some major revision of the manuscript to address the key issues below:
1. The authors argue with respect to Figure 9 that OCE and ERA5 have similar daily timescale forcing, suggesting that OCE is getting things right for the right reasons. I'm not entirely convinced of this given that the b coefficient in OCE is larger than in ERA5. It would be interesting to see the coupling between ice and other variables using the LIM to provide a bit more evidence that OCE is getting things right, for example the relationship between ice and a variable that is more thermodynamically connected to ice. The authors also note that the difference seen in Figure 9 could be due to chance. If so, can you show similar plots as Figure 9 for each ensemble member of OCE? If chance plays a role maybe there is some evidence of this if all ensemble members are examined individually.
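For readers unfamiliar with the method, a two-variable linear inverse model (LIM) of the kind discussed here can be fitted by lag-1 least squares. The sketch below uses synthetic data with invented coupling coefficients (nothing here is taken from the paper) and recovers the propagator matrix, whose off-diagonal entries play the role of cross-coupling coefficients like b:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic daily anomalies x = (ice, NAO) obeying x_{t+1} = A x_t + noise.
# A_true is purely illustrative; off-diagonal terms are the cross-couplings.
A_true = np.array([[0.9, -0.1],   # ice persistence, NAO -> ice forcing
                   [0.2,  0.8]])  # ice -> NAO forcing (a "b"-like term), NAO persistence
n = 20000
x = np.zeros((n, 2))
for t in range(n - 1):
    x[t + 1] = A_true @ x[t] + 0.1 * rng.standard_normal(2)

# LIM fit: A_hat = C1 @ inv(C0), from lag-0 and lag-1 covariances
C0 = x[:-1].T @ x[:-1] / (n - 1)
C1 = x[1:].T @ x[:-1] / (n - 1)
A_hat = C1 @ np.linalg.inv(C0)
print(np.round(A_hat, 2))
```

The same fit applied to other variable pairs (e.g. ice and turbulent heat flux) is one way to probe whether the coupling is right for the right reasons, as suggested above.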
2. Figure 9h and 9i seem to suggest something is quite unrealistic about how this model represents fall sea ice variability. In the Blackport et al. (2019) paper, they examine a version of EC-Earth, EC-EarthV2.3, I believe. Are you able to reproduce their findings with EC-Earth3P used here for the CTRL runs (it would be great to see plots similar to their Fig. 4c, f, and i)? It seems that you are getting very different patterns (Fig. 9h), which makes me concerned about the suitability of this model for this study.
3. Could the direct effect of mean state changes be quantified using AMIP-style runs with monthly sea ice and SSTs from the coupled OCE runs? I think it is important to get a better sense of what is going on - is it the stochasticity itself or the effect of the stochasticity on the mean state. Untangling this has implications in terms of how this study informs model development.
Minor comments:
1. lines 25-30: lots of issues with parentheses that need to be tidied up.
2. line 27: You may want to say "negative NAO" rather than just "NAO" for clarity.
3. Section 2: there are many different abbreviations/acronyms for the model used in this section. After you finish describing the various configurations, can you tell the reader which name you are going to stick with throughout the paper? Something like, "Hereafter, the model will be referred to as...".
4. Line 153: What prescribed SSTs and sea ice?
5. line 162: extra parentheses
6. line 218: Figures -> Figure and ssea -> sea
7. Table 1 caption: there is a missing section number - just shows ??
8. line 405-406: I don't think this is the correlation you are showing. It's sea ice and NAO, correct?
9. Figure 9i does not really look like Figure 9g to me. And it seems a bit strange that Fig. 9h does not look anything at all like Fig. 9g.
10. line 463: Fig. 9 -> Fig. 10
-
AC2: 'Reply on RC2', Kristian Strommen, 14 Feb 2022
We thank the reviewer for the insightful comments, and bring their attention to the increased ensemble size obtained since submission: see the response to RC1 for further details.
The main concern raised is about Figure 9, which suggested that perhaps OCE was recovering a correct looking teleconnection for the wrong reasons. After doubling the ensemble size the mismatch with observational data has been notably reduced. The improved teleconnection in OCE still appears more driven by the forcing of the ice on the atmosphere, but a clear NAO signal is now also seen for years where the atmosphere drives the ice. We hope this will help reassure the reviewer.
It is perhaps also worth pointing out that we are either way still suggesting that there is “something quite unrealistic” about the CTRL model, to paraphrase the reviewer. We are suggesting that the lack of a teleconnection is unrealistic, and that its improvement in OCE is a genuine improvement. The point being that this is an important result even if CTRL is unrealistic in some ways, because it implies that the considerable intermodel spread in reproducing the observed teleconnection may to a large extent be due to model biases rather than internal variability. If that is the case, then the teleconnection may be much more robust than many studies suggest it is. But in any case, EC-Earth3 does not seem to be a particularly poor model: see the response to RC1 for more on that.
Note that the EC-Earth figures from Blackport et al. 2019 are not reproducible with our data. While the model used is closely related, the EC-Earth experiments considered in Blackport et al essentially use fixed forcings (they use 400 5-year simulations each covering the same period), while our experiments are 65 successive years with historical forcings. Identical diagnostics would not be expected as a result, so we don’t see any discrepancies here as a point of concern.
As for plots elucidating the mechanisms more clearly, we produced some additional lag correlation/regression plots between sea ice and heatfluxes (this also being suggested by RC1) as well as some other diagnostics to help clarify. While these do hint at some small improvements in OCE to the daily time-scale local coupling between ice and heatfluxes, our analysis generally suggests that the flaws in CTRL are not clearly visible in the local thermodynamic coupling. Instead, the errors in CTRL appear to be primarily due to errors in the subsequent adjustment and growth of the initial pressure anomaly across the North Atlantic and ice edge more broadly. In fact, this is already what the LIM results suggest, but this was not really made clear in the submitted manuscript. All this will be discussed (and the relevant new plots included) in the revised paper. Unfortunately, a thorough analysis of errors in the more remote response is not going to be possible to include in this already lengthy paper and will have to be left for future work (though we include some speculation).

Finally, regrettably no time or resources are available to carry out experiments of the sort you describe at present, though we agree they would help. The role of the mean state (also raised by the other reviewers) is discussed in more detail in the revised manuscript in any case, but it has not proven possible to decisively nail down the contribution of mean state vs coupling in our analysis. Besides the complications of local vs remote responses discussed above, it is likely that the inherently non-linear component of ice/heatflux coupling plays a role which our analysis, entirely based on anomalies, cannot detect. Possible non-linear diagnostics that could be explored in follow-up work are discussed in, e.g., Caian et al., An interannual link between Arctic sea-ice cover and the North Atlantic Oscillation (2018), Clim Dyn.
We hope that the extra diagnostics and discussion, including of potential future work, will satisfy the reviewer anyway.
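For readers interested in what such a diagnostic involves, a minimal sketch of a daily lag correlation between ice and heat-flux anomaly series is given below. This is illustrative only, using synthetic data; the function and variable names are ours, not those of the actual analysis code.

```python
import numpy as np

def lag_correlation(ice, flux, max_lag):
    """Correlation between standardized ice and heat-flux anomaly series
    at a range of daily lags; positive lag means the flux lags the ice."""
    ice = (ice - ice.mean()) / ice.std()
    flux = (flux - flux.mean()) / flux.std()
    n = len(ice)
    corrs = {}
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            a, b = ice[: n - lag], flux[lag:]
        else:
            a, b = ice[-lag:], flux[: n + lag]
        corrs[lag] = float(np.mean(a * b))
    return corrs

# Synthetic example: the flux responds to the ice two days later.
rng = np.random.default_rng(0)
ice = rng.standard_normal(1000)
flux = np.roll(ice, 2) + 0.5 * rng.standard_normal(1000)
corrs = lag_correlation(ice, flux, max_lag=5)
```

In such a sketch the peak of `corrs` sits at the lag where the coupling is strongest (lag 2 in the synthetic example), and comparing the location and magnitude of the peak across CTRL, OCE and ERA5 is the kind of comparison referred to above.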
-
AC2: 'Reply on RC2', Kristian Strommen, 14 Feb 2022
-
RC3: 'Comment on wcd-2021-61', Anonymous Referee #3, 01 Dec 2021
This is an interesting paper on Arctic-midlatitude teleconnections. While focusing on the role of stochastic parameterizations, the paper also provides useful and novel physical insight into the possible factors affecting the representation of this teleconnection in climate models. The paper was clearly and logically written, and the results interesting, so I enjoyed reading it. I do however have several comments regarding the physical interpretation of the findings.
Main comments:
1) Interpretation of Fig. 6
I’m not sure I fully agree with the interpretation of the lagged relationships between Z500 and sea ice – or I may have misunderstood the authors. The text L360–368 seems to imply that the Z500 anomalies are a “response” to the sea ice at all lag times. This makes sense at positive lags (December onwards, when Z500 lags the sea ice), but for the November anomalies (1st row of Fig. 6) we also need to consider the possibility that it is the circulation driving the sea ice, rather than the other way around. I think this is indeed what is happening: the Z500 anomalies are consistent with northerly flow into the Barents sea area, which would drive enhanced sea ice concentration. I believe this also explains why the November Z500 anomalies are so consistent among ERA5, CTRL and OCE. In any case, the possible two-way interaction between Z500 and the sea ice needs to be discussed in the context of Fig. 6.
2) AMIP results
I am still unclear as to why the AMIP simulations show no midlatitude response to the sea ice anomalies. I understand the result in Fig. 7 that there is two-way coupling, and the NAO → ice effect is absent from AMIP. But the ice → NAO effect should be in AMIP, so why don’t we see that? Also, is this result consistent with any prior work looking at AMIP runs with other climate models?
3) Coupling timescales
I feel some clarification is needed on the timescales at play in the sea ice–NAO coupling. Figure 7 suggests the coupling happens on daily timescales; but it’s not obvious how to reconcile this with the finding that the NAO responds to November sea ice anomalies on the timescale of a *season* (DJF). My interpretation would be that the sea ice anomalies are relatively persistent (Fig. B5), so the November anomalies are a skillful predictor of those occurring later in the winter season – and these anomalies continue forcing the NAO through the winter. Is this consistent with the authors’ thinking? Please clarify in the paper.
4) Coupling in CTRL
Figure 8b suggests the BG sea ice in CTRL does have a measurable impact on the NAO, which appears at odds with the lack of an ice → NAO relationship in Fig. 7. Is this because the BG sea ice varies so little in CTRL – so that even though the effect is there, the impact is minimal because there’s almost no forcing?
5) NAO definition
I was unclear as to the NAO metric as defined L166, and since this is key to the result, the definition seems important. I don't understand the subtraction of the daily climatology after the calculation of the PC. Why not deseasonalize the data beforehand? If using non-deseasonalized data, there is a risk that the EOFs are capturing the seasonal cycle (an externally forced signal), rather than the true internal atmospheric variability. It was also unclear to me whether the EOFs were calculated for each CTRL and OCE realization separately, or whether these realizations were concatenated prior to computing the EOFs. While it probably makes little difference, I'd favor the latter, which should give more robust EOFs – and ensures any differences among the realizations aren't due to differences in the EOF basis.
Minor comments:
1) Please fix the citation format – the parentheses are often in the wrong places. I suspect this may be due to mixing the Natbib commands \citet and \citep in LaTeX. One example is L25, where it should be “(Hoskins and Karoly 1981)”, “(Garcia-Serrano et al. 2015)”.
2) Consider clarifying the definition of the word “deterministic” – not being a stochastic parameterization expert, I initially thought this might mean “prescribed SST” as opposed to coupled, when actually this means “not stochastic”.
Typos etc:
L52: “are a manifestation”
L169: “are computed”
L208: “to reduce”
L218: “sea surface”
L229: “Examination… supports”
L297–300: This text is a repetition of L179–183, so I suggest deleting.
L405: Strictly speaking, Table 1 shows the correlations between the LIM NAO and LIM sea ice – not LIM NAO with true NAO. The latter is shown in Fig. B6.
L423: "may have changed" → I think you mean "between CTRL and OCE", but it's not entirely obvious from the phrasing.
Caption of Table 1, L3: broken link to section 5.2
Figures 4 and 6: Suggest highlighting the BK and BG regions with boxes in the maps
-
AC3: 'Reply on RC3', Kristian Strommen, 14 Feb 2022
We thank the reviewer for the insightful comments, and bring their attention to the increased ensemble size obtained since submission: see the response to RC1 for details.
- About Figure 6, yes you are absolutely right that there is a 2-way interaction there which we failed to comment on. This will be included in the revisions.
- Yes, there is evidence in prior literature that this teleconnection is weaker in AMIP models. This was mentioned in line 520, citing Blackport and Screen (2021), though we believe earlier studies (cited in their paper) had pointed to this as well. For EC-Earth in particular, the study Caian et al., An interannual link between Arctic sea-ice cover and the North Atlantic Oscillation (2018), Clim Dyn, showed that ice/NAO links are weaker in an AMIP simulation than in a coupled simulation, something they attributed to the missing coupling. Our paper provides further evidence of the importance of coupling for a good teleconnection, though several questions remain about the exact mechanisms. We show that while the initial, local ice → heat flux response appears similar for both CTRL and OCE, the subsequent growth and evolution of the anomaly is significantly better in OCE. Presumably, as you point out, the initial local anomaly would be highly realistic in the AMIP simulations, but the failure to propagate the anomaly would likely be even worse given the total lack of coupling. Caian et al. includes some further discussion of possible mechanisms here. We will discuss some simple hypotheses as well, including the alignment of the sea ice edge with the eddy-driven jet, and the importance of sea ice adjustments further afield from the source region (Barents/Barents-Kara). This will be discussed in the revised paper.
- Yes, exactly: the initial anomaly is long-lasting due to the persistence of sea ice, but is ultimately damped away by the opposing response of the NAO. We will revise the paper to make this clearer. More discussion of the initial local response vs more remote adjustments is also included, as per point 2 above.
- All reviewers have commented on the mean state, and in hindsight the minimal role we ascribed to the mean state wasn’t justified. We can’t see any meaningful difference in the November 1st initial conditions (of the ice and NAO) between CTRL and OCE, but the LIM model takes anomalies as input, which ignores any non-linear effects. Since such non-linearity is likely to be present here, our analysis can’t really address this. On balance, it is likely that the improvements in OCE are due to both the mean state and the coupling, and we will make this clearer in the revised paper.
- The NAO EOF was computed separately for each dataset, to allow the centers of NAO action to shift between each dataset according to differences in the mean state: this will be made clearer in revisions. We believe it is important to allow for some shifts between models to not obscure signals or overly penalise models (i.e. penalising both for mean state biases and changes to modes of variability). That being said, in this case there is little difference between the CTRL and OCE NAO, with a pattern correlation between the two of around 0.97. The results are therefore highly unlikely to change if using the exact same NAO pattern for CTRL and OCE. This will be mentioned in revisions.
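As an illustration of the pattern-correlation check mentioned above, here is a minimal sketch (synthetic data, our own function names; not the actual analysis code) of computing the leading EOF separately for two datasets and correlating the resulting spatial patterns:

```python
import numpy as np

def eof1(anom):
    """Leading EOF of an (ntime, nspace) anomaly matrix via SVD.
    Returns the unit-norm spatial pattern and its principal component."""
    anom = anom - anom.mean(axis=0)        # remove the time mean
    u, s, vt = np.linalg.svd(anom, full_matrices=False)
    return vt[0], u[:, 0] * s[0]

def pattern_correlation(p1, p2):
    """Centered spatial correlation between two patterns."""
    p1, p2 = p1 - p1.mean(), p2 - p2.mean()
    return float(p1 @ p2 / (np.linalg.norm(p1) * np.linalg.norm(p2)))

# Two synthetic datasets sharing a similar leading mode of variability.
rng = np.random.default_rng(1)
base = np.sin(np.linspace(0, np.pi, 50))     # shared spatial mode
a1 = rng.standard_normal((200, 1)) * base + 0.1 * rng.standard_normal((200, 50))
a2 = rng.standard_normal((200, 1)) * base + 0.1 * rng.standard_normal((200, 50))
r = abs(pattern_correlation(eof1(a1)[0], eof1(a2)[0]))
```

The sign of an EOF is arbitrary, hence the absolute value; a value near 0.97, as quoted above for the CTRL and OCE NAO patterns, indicates the two EOF bases are nearly interchangeable.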
-
EC1: 'Comment on wcd-2021-61', Camille Li, 02 Mar 2022
Thanks to the referees for their in-depth reviews and the authors for their careful consideration of the points raised. Because of rather substantial edits and a re-interpretation of the results in response to the reviews, I'm offering the referees a chance to comment further on the revised manuscript.
Interactive discussion
Status: closed
-
RC1: 'Comment on wcd-2021-61', Anonymous Referee #1, 04 Nov 2021
This study examines the effects of stochastic parameterizations on the link between November sea ice and the winter NAO in historical coupled model simulations. The authors find that in the control simulations the connection is weak and opposite to what is seen in observations. When they add the stochastic parameterizations to the ocean and sea ice model components, the connection between the November sea ice and winter NAO switches sign and becomes closer to the observed value. They attribute the differences to improved ice-ocean-atmosphere coupling. I think that this is an interesting study with potentially important results worthy of publication. However, I do have a number of issues that need to be addressed, so my recommendation is for major revisions.
Major comments:
1. The use of different sea ice regions for the model and observations is problematic. The authors have correlated the NAO with sea ice concentration at all gridpoints and cherry-picked the regions with the largest correlations (which differ between the model and observations). Given the weak correlations combined with large internal variability, there is a good chance that internal variability is contributing to the regions with the highest correlations. This means all the subsequent analysis and discussion about statistical significance is not reliable, because the region was not selected a priori. The authors should use the Barents-Kara (BK) Sea for both observations and model correlations. I don't even think this will have that large an effect on the analysis and conclusions, because there are clearly differences in correlations over just the BK Sea (Figure 4).
The justification for this is not at all convincing. The authors claim that because models have different biases, the regions with the most sea ice variability differ across models and the real world. However, the sea ice in the BK Sea in OCE does not look that different from that in ERA5, so I don't see why they cannot use the same region. The leading EOF in ERA5 looks very similar around the BK region (Figure 2). I can see maybe shifting the regions slightly to account for biases (e.g. if the ice edge is 1° too far south in the model, shift the region definition 1° to the south), but using a very different region is not justifiable and introduces additional issues.
2. The model the authors use may be an outlier and the results may not be that relevant to other models. This is very briefly mentioned in the discussion, but I think there are reasons to think this may not work as well in other models. Most models tend to have a weak connection between reduced sea ice and a negative NAO. In addition, as mentioned in the introduction, model experiments forced with reduced sea ice also tend to show a weak negative NAO response. However, the control model used here shows the opposite sign of correlation compared to most models, and a previous study (Ringgaard et al. 2020, doi:10.1007/s00382-020-05174-w) shows that a version of this model shows no NAO response to reduced sea ice in the BK Sea. In addition, the improved correlations in the OCE version are still weak. Could it not be the case that OCE is just improving the flaws in this particular model, which brings it more in line with other models? This would then mean that applying the same methods in other models may not have as large an effect.
3. The authors claim that mean state changes cannot explain the differences, but I don't find their arguments that convincing. They argue that the AMIP ensemble with prescribed SSTs and sea ice shows weak correlations. First of all, taking the correlations of the AMIP ensemble at face value would suggest that close to half of the difference can be explained by the mean state. Second, there are many other differences related to the coupling of sea ice and SSTs that could cancel out the improvements made by correcting the mean state biases in the AMIP experiments. It is likely that the improved mean state explains at least some of the differences, and it can't be ruled out that it is the entire explanation.
4. The authors conclude that the link between sea ice and the NAO is stronger because of improved ice-ocean-atmosphere coupling. This is a bit vague and could be investigated a little further. What about the coupling is actually being improved? Because the authors argue that coupling on short timescales can explain the difference, there could be a lot of value in doing similar analysis to what was done in Figure 7, but with other variables. For example, does the OCE ensemble have a stronger upward heat flux and temperature response following reduced sea ice?
5. The title and abstract need to be more specific. Many different links between the Arctic and the midlatitudes have been hypothesized via a number of different mechanisms. It is misleading to refer to Arctic-midlatitude links very generally, when the authors have only investigated one specific link between November Barents-Kara sea ice and the winter NAO in interannual variability. Even with this correlation, the authors have only looked at one mechanism (they have not investigated the stratospheric mechanism).
Other comments:
L35: What is meant by ‘More seriously’? Are the model experiments with imposed sea ice anomalies not serious?
L35-38: Another recent study that could be cited/discussed here is Siew et al. 2021 (doi:10.1126/sciadv.abg4893).
L30-42: Somewhere in this discussion it should be mentioned that observed correlation seems to be highly intermittent when looking at the much longer record (Kolstad and Screen 2019, doi:10.1029/2019GL083059). In the middle of the 20th century, the sign of the connection appears to be opposite compared to the recent period.
L38-42: This is not an accurate description of Blackport et al. 2019. This study has nothing to do with the connection between November BK sea ice and the winter NAO and is not that relevant for this study. A much more relevant study that argues that the correlation between November BK sea and winter NAO may not be causal is Peings, 2019 (doi:10.1029/2019GL082097).
L41: Warner et al. 2020 do not suggest tropical forcing as a common driver of sea ice and the NAO. They did suggest this may be the case for other aspects of the mid-latitude circulation, but not the NAO.
L198-207/Figure 1: The main takeaway from this is that OCE reduces the sea ice everywhere. The changes in variability are also entirely consistent with just a reduction in sea ice extent everywhere.
Figure 1 and 3: I think that it would be more useful to show plots for OCE-ERA5 as well to make the improvements easier to see.
Figure 2: What does the sea ice variability look like in the Barents-Kara Sea in CTRL? There is substantially less variability connected with EOF1, but is that because it is in other EOFs or because there is substantially less variability? I don't think it is the latter based on Figure 1.
L218: sea surface temperatures
L243: Blackport et al. 2019 did not do this and has little to do with the NAO.
L279-281: I don’t understand this. The Bering sea is a completely different region which will have different impacts on the circulation, so I don’t see how it can be the equivalent to the BK Sea.
L281-283: There has been a lot more work looking at the response/correlation to sea ice in different regions than what is portrayed here (e.g. Screen 2017, doi:10.1175/JCLI-D-16-0197.1, McKenna et al. 2017, doi:10.1002/2017GL076433, Blackport et al. 2019). The reason there has been more on the Barents-Kara is because there are stronger links in both observations and models.
L307: I don't think any study, including Koenigk and Brodeau (2017), states that the observed signal is spurious. This study, and others like it, express caution that it could be. There is a lot of internal variability, and spurious signals can arise in model simulations of similar length to the observed record even when there is no/weak signal overall. It is also the case that the recent observed correlation appears to be unusually high compared to the longer record (Kolstad and Screen 2019).
Figure 5a: The fact that all simulations start off with higher correlations than over the whole period intrigues me. Because all simulations start from the same ocean state, is it possible that they happened to be initialized in a particular state of low frequency variability that contributes to a stronger correlation?
L317-319: I don’t understand why that would suggest it is coincidental. You wouldn’t be able to rule it out, but that is very different from suggesting that it is.
L322-323: Is it actually the case that each 30 year period is statistically distinguishable from 0? I doubt that this is the case given that some 30 year periods show correlations close to 0.
L328: How often do they attain correlations that exceed the observed correlation?
Figure 5b: I think it is misleading to plot it this way because the overlapping 30 year periods are obviously not independent. There are really only about 6 independent data points in the OCE distribution. I don’t doubt that the differences are statistically significant, but this plot likely exaggerates the perceived significance.
L350-352: Isn’t it more relevant to know whether or not these correlations are statistically different from the correlations in OCE or CTRL?
L360-368: The regressions of November zg500 on November sea ice are likely not the response to the sea ice anomalies (at least not entirely). Instead, a large part of it is the atmospheric circulation that forces the sea ice anomalies. The sign of the NAO is opposite to what would be expected if it were the response. Unless the authors are arguing that the initial response to reduced sea ice is a positive NAO, but that contradicts what is shown in Figure 7.
L380-385: This negative feedback between the sea ice and NAO was identified in a number of studies, including Strong et al. 2009, doi:10.1175/2009JCLI3100.1.
L435: This is not reproducing the result of Blackport et al. 2019. They examined the regression between winter circulation and winter sea ice, not November sea ice.
L425-456/Figure 9: I am not sure I understand the point of this analysis. The authors have already established the feedback between sea ice and the NAO, so I don't see how the NAO forcing of the sea ice could explain the difference between OCE and CTRL. There could potentially be a stratospheric pathway where there are causality issues, as suggested by Peings 2019, but the authors have effectively argued against this being the reason for the improvement by showing that the difference can be entirely explained by the daily coupling. The authors should more clearly explain the motivation for it, or remove it.
L463: Figure 9 → Figure 10
L516: How would the varying model biases contribute to the inconsistencies within long simulations from a single model? Note that there also appears to be large inconsistencies between short periods in observations as well (Kolstad and Screen 2019).
- What do the trends in the NAO look like? If the improved correlations represent a response to sea ice loss, it may be expected that there are more negative NAO trends in the OCE simulations. This could have implications for the midlatitude response to sea ice loss and global warming, not only for seasonal predictions. This may be a bit beyond the scope of the study, and a larger ensemble may be needed to find robust differences, but it would be really simple to check.
-
AC1: 'Reply on RC1', Kristian Strommen, 14 Feb 2022
We thank the reviewer for their constructive and insightful feedback. Before responding to the key points made, we need to point out that last November the authors were able to obtain additional supercomputing units and have since used these to double the ensemble size from 3 to 6. The new ensemble members have proven to be consistent with the original members, which adds considerable confidence to the hypothesis that the stochastic schemes are genuinely improving the teleconnection. All figures and diagnostics in the revised version of the paper have therefore been expanded to include these members. As a result, some details of the discussion have changed. Because most of these changes are in line with suggestions made by the reviewers, we hope this will not inconvenience the reviewers, who might rightfully wish we had waited to submit until this larger ensemble was obtained. Unfortunately, it was not clear prior to submission whether the required computing units would be obtainable.
- Concerning the choice of sea ice region, we agree that a differing choice for the model and observations leaves us open to accusations of cherry-picking, and at the very least some discussion of sensitivity of results to the choice should have been included. We have now done more extensive testing of the use of different regions and can report the following. If one uses Barents-Kara for all data sets, then the conclusions are qualitatively similar, in that there is a consistent improvement of the ice-NAO correlations when adding stochasticity, and these improvements can be explained using the LIM model. However, quantitatively speaking the results are somewhat weaker, with the correlations in OCE being generally smaller (and not as comparable in magnitude to ERA5) when using Barents-Kara as opposed to Barents-Greenland. We also found that using just the Barents sea for the model gave quantitatively almost identical results to using Barents-Greenland, and the increased ensemble size now singles out the Barents sea anyway (revised Figure 4). On the other hand, the Barents November sea ice in ERA5 has zero correlation with the NAO: it is definitely necessary to extend the region out to the Kara sea for ERA5.
After careful consideration, we believe it is still justifiable to somewhat adjust the sea ice region in the model compared to observations. The results discussed above have led us to use Barents-Kara for ERA5 and Barents for EC-Earth. The difference between the two regions is therefore even smaller now, with EC-Earth simply omitting the Kara sea. An equivalent table to Table 1 which uses Barents-Kara for all data sets will be included in Supporting Information of the revised paper, and we will clearly highlight and discuss the fact that qualitatively (but not quantitatively) similar results are obtained with this uniform choice. We hope this will go a long way towards addressing the reviewer’s objections.
We now expand on our justification. There are two key points. The first is that both the mean state and the seasonal evolution of the sea ice edge are clearly different in CTRL compared to ERA5. It's true that the bias of CTRL and OCE in the mean sea ice in the Kara sea (Figure 1a,b) is on the order of 10% less ice than in ERA5, and this is not huge on the face of it. But the biases in the standard deviation (Figure 1c) clearly point to a big change in how far equatorward the ice edge tends to extend every year: the sign of the pattern (negative near the pole, red equatorwards) says that in CTRL, the ice edge tends to extend further outwards. This is important because the heatflux anomalies are dominated by variations in the location of the ice edge: if the ice edge has moved, so will the largest heatflux anomalies. The 10% difference in the mean state is therefore in all likelihood misleadingly small, smoothing out more important interannual variations in the ice edge in the Kara sea. This change in the seasonal ice edge evolution in EC-Earth3 is further corroborated by the visibly different EOFs (Figure 2). It is true, as the reviewer states, that the local magnitudes of the patterns in the Barents-Kara region are similar between ERA5 and OCE, but clear visible differences still remain. In ERA5, the typical November pattern is evidently an increase (decrease) of ice in Barents-Kara and a decrease (increase) in the Barents sea closer to Russia as well as in the Laptev sea. In OCE, the typical behaviour is an increase/decrease along the entire ice edge from Greenland up to Chukchi. In particular, sea ice anomalies in Barents-Kara may, in the model world, be expected to often come hand-in-hand with sea ice anomalies elsewhere that don't look anything like those of observations.
Since it has been noted in previous papers ([1,2] and others that the reviewer themselves provide) that sea ice anomalies in regions other than Barents-Kara may have different, even opposing, impacts on the atmospheric circulation, we do not think such possible effects can be considered negligible.
The second key point is, as discussed in our paper, that there is evidence in the literature that the teleconnection depends on the atmospheric mean state, in particular the position of the storm track. Since the storm track is almost always biased to some degree in climate models, it does not seem unreasonable to suggest that the sea ice region in models best placed to interact with the storm track is slightly different than that in observations.
The fundamental issue here is that external forcing, including that from teleconnections, very often projects onto the dominant modes of variability (e.g. [3,4]). Not only do these differ between models and observations (Figure 2), but in the case considered here, there is non-linearity embedded at both ends: with sea ice as discussed in [1] and with the North Atlantic Oscillation in the visible multimodal behaviour of the jet [5]. We therefore take the view that model biases, in both the mean and the variability, cannot be easily ignored, and indeed many studies have examined the influence of such biases on teleconnections (e.g. [6] for just one recent example). There are also several precedents in the literature for using sea ice EOFs to compute Arctic-NAO teleconnections (e.g. Wang, Ting and Kushner 2017, or the Strong et al paper you pointed us to in your comments), and such approaches would inevitably highlight different regions in models vs observations. It is certainly true that allowing for regions or patterns to shift in models opens up the possibility of cherry picking, and so sensitivity to such shifts should be clearly discussed, which we failed to do. But the flip side is that allowing for no model-dependent diagnostics may overly penalise models and give the impression that model skill (or inter-model consensus) is weaker than it is.
It is the authors' impression that there has perhaps been too little consideration in the literature of potential (small) shifts in the key sea ice region, and we think this is an important point that we wish to highlight as part of our work. The revised version will expand on all the above points to better justify the choice made. Of course, we accept that the reviewer may disagree on some or indeed all of the above points, or be of the opinion that a proper justification of the above points would require more work which would likely be inappropriate to include in this paper. We hope that if this is the case, our emphasis of the qualitatively similar results obtained with Barents-Kara, and the change from using Barents-Greenland to Barents for the model, will nevertheless allow you to consider your objection adequately addressed.

- We would challenge the assertion that "most models tend to have a weak connection". The range of correlations between Barents-Kara and the NAO found across the coupled CMIP6 models is very well approximated by a normal distribution with mean 0, standard deviation 0.17 and a 95% confidence interval of 0.28. While the exact mean of 0.018 is positive, almost half the CMIP6 models have negative correlations. The EC-Earth3 CTRL ensemble, with its average correlation of -0.06, is in no way an outlier in this distribution and is in fact dead average: this was extremely briefly noted in the submitted paper (line 337), and we have now made this clearer by revising Figure 5 to include the CMIP6 distribution. The inclusion of additional ensemble members has also now produced CTRL members with slightly positive correlations in the period 1980-2015, so there seems to be even less cause to find EC-Earth3 particularly objectionable. Its biases in the mean ice state are also in no way notably worse than many other models.
Note that the slightly positive mean of the CMIP6 distribution is consistent with findings in earlier literature reviews, which report that "most" models show a positive association, but it is clear that this consensus is weak. Another point here is that many of the experiments carried out in the literature are not directly comparable with each other: e.g. many model experiments analysing the role of sea ice use fixed anthropogenic forcings, while the models we consider here use historical forcings. This may account for any remaining discrepancies.
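As a rough back-of-envelope illustration (using only the approximate numbers quoted above, not a formal significance test), one can locate the CTRL ensemble-mean correlation within this fitted normal distribution:

```python
from math import erf, sqrt

def normal_cdf(x, mu=0.0, sigma=1.0):
    """Cumulative distribution function of a normal distribution."""
    return 0.5 * (1.0 + erf((x - mu) / (sigma * sqrt(2.0))))

# Approximate CMIP6 spread of Barents-Kara ice/NAO correlations
# quoted above: mean ~0, standard deviation ~0.17.
mu, sigma = 0.0, 0.17
ctrl_corr = -0.06                  # EC-Earth3 CTRL ensemble average
percentile = normal_cdf(ctrl_corr, mu, sigma)
# ctrl_corr sits around the 36th percentile: well inside the bulk
# of the distribution, i.e. in no sense an outlier.
```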
That being said, the point that the stochastic schemes may have differing impacts in other models should have been emphasised more. There are examples from earlier work which show consensus across models in some cases and lack of consensus in others. This will be expanded on in the revised manuscript.

- The potential importance of the mean state is a point raised by all of the reviewers, and upon further consideration we agree. It is actually even clearer after having increased the ensemble size that coupling alone isn't sufficient; it is likely a combination of coupling and mean state. This will be expanded upon further in a response to another reviewer.
References:
1. Koenigk, T., Caian, M., Nikulin, G. et al. Regional Arctic sea ice variations as predictor for winter climate conditions. Clim Dyn 46, 317–337 (2016). https://doi.org/10.1007/s00382-015-2586-1
2. Sun, L., Deser, C., & Tomas, R. A. (2015). Mechanisms of Stratospheric and Tropospheric Circulation Response to Projected Arctic Sea Ice Loss, Journal of Climate, 28(19), 7824-7845.
3. Shepherd, T. Atmospheric circulation as a source of uncertainty in climate change projections. Nature Geosci 7, 703–708 (2014). https://doi.org/10.1038/ngeo2253
4. Corti, S., Molteni, F. & Palmer, T. Signature of recent climate change in frequencies of natural atmospheric circulation regimes. Nature 398, 799–802 (1999). https://doi.org/10.1038/19745
5. Woollings, T., Hannachi, A. and Hoskins, B. (2010), Variability of the North Atlantic eddy-driven jet stream. Q.J.R. Meteorol. Soc., 136: 856-868. https://doi.org/10.1002/qj.625
6. Karpechko, AY, Tyrrell, NL, Rast, S. Sensitivity of QBO teleconnection to model circulation biases. Q J R Meteorol Soc. 2021; 147: 2147–2159. https://doi.org/10.1002/qj.4014
-
RC2: 'Comment on wcd-2021-61', Anonymous Referee #2, 19 Nov 2021
Strommen and Juricke explore the question of why Arctic-midlatitude teleconnections in climate models are generally weaker than observed by employing a modified version of EC-Earth3 which includes stochasticity in the sea ice and ocean components. The study finds that the Nov. sea ice - winter NAO teleconnection is improved in the model integrations with stochasticity, and this appears to be mainly related to stochasticity in the sea ice component. I find this result very interesting; however, like the authors, I am left wondering whether OCE may be getting "the right answer for the wrong reasons". I recommend some major revision of the manuscript to address the key issues below:
1. The authors argue with respect to Figure 9 that OCE and ERA5 have similar daily timescale forcing, suggesting that OCE is getting things right for the right reasons. I'm not entirely convinced of this given that the b coefficient in OCE is larger than ERA5. It would be interesting to see the coupling between ice and other variables using the LIM to provide a bit more evidence that OCE is getting things right, for example the relationship between ice and a variable that is more thermodynamically connected to ice. The authors also note that the difference seen in Figure 9 could be due to chance. If so, can you show similar plots as Figure 9 for each ensemble member of OCE? If chance plays a role maybe there is some evidence of this if all ensemble members are examined individually.
2. Figures 9h and 9i seem to suggest something is quite unrealistic about how this model represents fall sea ice variability. In the Blackport et al. (2019) paper, they examine a version of EC-Earth, EC-EarthV2.3, I believe. Are you able to reproduce their findings with EC-Earth3P used here for the CTRL runs (it would be great to see plots similar to their Fig. 4c, f, and i)? It seems that you are getting very different patterns (Fig. 9h), which makes me concerned about the suitability of this model for this study.
3. Could the direct effect of mean state changes be quantified using AMIP-style runs with monthly sea ice and SSTs from the coupled OCE runs? I think it is important to get a better sense of what is going on - is it the stochasticity itself or the effect of the stochasticity on the mean state? Untangling this has implications in terms of how this study informs model development.
Minor comments:
1. lines 25-30: lots of issues with parentheses that need to be tidied up.
2. line 27: You may want to say "negative NAO" rather than just "NAO" for clarity.
3. Section 2: there are many different abbreviations/acronyms for the model used in this section. After you finish describing the various configurations, can you tell the reader which name you are going to stick with throughout the paper? Something like, "Hereafter, the model will be referred to as...".
4. Line 153: What prescribed SSTs and sea ice?
5. line 162: extra parentheses
6. line 218: Figures -> Figure and ssea -> sea
7. Table 1 caption: there is a missing section number - just shows ??
8. line 405-406: I don't think this is the correlation you are showing. It's sea ice and NAO, correct?
9. Figure 9i does not really look like Figure 9g to me. And it seems a bit strange that Fig. 9h does not look anything at all like Fig. 9g.
10. line 463: Fig. 9 -> Fig. 10
-
AC2: 'Reply on RC2', Kristian Strommen, 14 Feb 2022
We thank the reviewer for the insightful comments, and draw their attention to the increased ensemble size obtained since submission: see the response to RC1 for further details.
The main concern raised is about Figure 9, which suggested that perhaps OCE was recovering a correct looking teleconnection for the wrong reasons. After doubling the ensemble size the mismatch with observational data has been notably reduced. The improved teleconnection in OCE still appears more driven by the forcing of the ice on the atmosphere, but a clear NAO signal is now also seen for years where the atmosphere drives the ice. We hope this will help reassure the reviewer.
It is perhaps also worth pointing out that, either way, we are still suggesting that there is "something quite unrealistic" about the CTRL model, to borrow the reviewer's phrase. We are suggesting that the lack of a teleconnection is unrealistic, and that its improvement in OCE is a genuine improvement. The point is that this result matters even if CTRL is unrealistic in some ways, because it implies that the considerable intermodel spread in reproducing the observed teleconnection may to a large extent be due to model biases rather than internal variability. If that is the case, then the teleconnection may be much more robust than many studies suggest. In any case, EC-Earth3 does not seem to be a particularly poor model: see the response to RC1 for more on that.
Note that the EC-Earth figures from Blackport et al. 2019 are not reproducible with our data. While the model used is closely related, the EC-Earth experiments considered in Blackport et al essentially use fixed forcings (they use 400 5-year simulations each covering the same period), while our experiments are 65 successive years with historical forcings. Identical diagnostics would not be expected as a result, so we don’t see any discrepancies here as a point of concern.
As for plots elucidating the mechanisms more clearly, we produced some additional lag correlation/regression plots between sea ice and heat fluxes (this also being suggested by RC1), as well as some other diagnostics to help clarify. While these do hint at some small improvements in OCE to the daily time-scale local coupling between ice and heat fluxes, our analysis generally suggests that the flaws in CTRL are not clearly visible in the local thermodynamic coupling. Instead, the errors in CTRL appear to be primarily due to errors in the subsequent adjustment and growth of the initial pressure anomaly across the North Atlantic and the ice edge more broadly. In fact, this is already what the LIM results suggest, but this was not really made clear in the submitted manuscript. All this will be discussed (and the relevant new plots included) in the revised paper. Unfortunately, a thorough analysis of errors in the more remote response is not going to be possible to include in this already lengthy paper and will have to be left for future work (though we include some speculation).

Finally, regretfully no time or resources are available to carry out experiments of the sort you describe at present, though we agree they would help. The role of the mean state (also raised by the other reviewers) is discussed in more detail in the revised manuscript in any case, but it has not proven possible to decisively nail down the contribution of mean state vs coupling in our analysis. Besides the complications of local vs remote responses discussed above, it is likely that the inherently non-linear component of ice/heatflux coupling plays a role which our analysis, entirely based on anomalies, cannot detect. Possible non-linear diagnostics that could be explored in follow-up work are discussed in, e.g., Caian et al., An interannual link between Arctic sea-ice cover and the North Atlantic Oscillation (2018), Clim. Dyn.
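The lag correlation diagnostics described above can be sketched as follows. This is a generic illustration on synthetic data, not the actual diagnostic code; the `ice` and `nao` series here are placeholders for deseasonalized anomaly time series:

```python
import numpy as np

def lag_correlation(ice, nao, max_lag):
    """Approximate Pearson correlation between ice and nao at each lag.

    Positive lag means ice leads nao. Both series are assumed to be
    deseasonalized anomalies of equal length.
    """
    ice = (ice - ice.mean()) / ice.std()
    nao = (nao - nao.mean()) / nao.std()
    corrs = {}
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            a, b = ice[: len(ice) - lag], nao[lag:]
        else:
            a, b = ice[-lag:], nao[: len(nao) + lag]
        corrs[lag] = float(np.mean(a * b))
    return corrs

# Synthetic example: nao follows ice with a 5-step delay plus noise
rng = np.random.default_rng(0)
ice = rng.standard_normal(1000)
nao = np.roll(ice, 5) + 0.5 * rng.standard_normal(1000)
corrs = lag_correlation(ice, nao, 10)
best = max(corrs, key=corrs.get)   # lag at which the correlation peaks
```

The sign and position of the peak indicate which variable leads: a peak at positive lag is consistent with the ice forcing the atmosphere, a peak at negative lag with the atmosphere driving the ice.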
We hope that the extra diagnostics and discussion, including of potential future work, will satisfy the reviewer anyway.
-
RC3: 'Comment on wcd-2021-61', Anonymous Referee #3, 01 Dec 2021
This is an interesting paper on Arctic-midlatitude teleconnections. While focusing on the role of stochastic parameterizations, the paper also provides useful and novel physical insight into the possible factors affecting the representation of this teleconnection in climate models. The paper was clearly and logically written, and the results interesting, so I enjoyed reading it. I do however have several comments regarding the physical interpretation of the findings.
Main comments:
1) Interpretation of Fig. 6
I’m not sure I fully agree with the interpretation of the lagged relationships between Z500 and sea ice – or I may have misunderstood the authors. The text L360–368 seems to imply that the Z500 anomalies are a “response” to the sea ice at all lag times. This makes sense at positive lags (December onwards, when Z500 lags the sea ice), but for the November anomalies (1st row of Fig. 6) we also need to consider the possibility that it is the circulation driving the sea ice, rather than the other way around. I think this is indeed what is happening: the Z500 anomalies are consistent with northerly flow into the Barents sea area, which would drive enhanced sea ice concentration. I believe this also explains why the November Z500 anomalies are so consistent among ERA5, CTRL and OCE. In any case, the possible two-way interaction between Z500 and the sea ice needs to be discussed in the context of Fig. 6.
2) AMIP results
I am still unclear as to why the AMIP simulations show no midlatitude response to the sea ice anomalies. I understand the result in Fig. 7 that there is two-way coupling, and the NAO → ice effect is absent from AMIP. But the ice → NAO effect should be in AMIP, so why don’t we see that? Also, is this result consistent with any prior work looking at AMIP runs with other climate models?
3) Coupling timescales
I feel some clarification is needed on the timescales at play in the sea ice–NAO coupling. Figure 7 suggests the coupling happens on daily timescales; but it’s not obvious how to reconcile this with the finding that the NAO responds to November sea ice anomalies on the timescale of a *season* (DJF). My interpretation would be that the sea ice anomalies are relatively persistent (Fig. B5), so the November anomalies are a skillful predictor of those occurring later in the winter season – and these anomalies continue forcing the NAO through the winter. Is this consistent with the authors’ thinking? Please clarify in the paper.
4) Coupling in CTRL
Figure 8b suggests the BG sea ice in CTRL does have a measurable impact on the NAO, which appears at odds with the lack of an ice → NAO relationship in Fig. 7. Is this because the BG sea ice varies so little in CTRL – so that even though the effect is there, the impact is minimal because there’s almost no forcing?
5) NAO definition
I was unclear as to the NAO metric as defined at L166, and since this is key to the result, the definition seems important. I don’t understand the subtraction of the daily climatology after the calculation of the PC. Why not deseasonalize the data beforehand? If using non-deseasonalized data, there is a risk that the EOFs are capturing the seasonal cycle (an externally forced signal) rather than the true internal atmospheric variability. It was also unclear to me whether the EOFs were calculated for each CTRL and OCE realization separately, or whether these realizations were concatenated prior to computing the EOFs. While it probably makes little difference, I’d favor the latter, which should give more robust EOFs – and ensures any differences among the realizations aren’t due to differences in the EOF basis.
Minor comments:
1) Please fix the citation format – the parentheses are often in the wrong places. I suspect this may be due to mixing the Natbib commands \citet and \citep in LaTeX. One example is L25, where it should be “(Hoskins and Karoly 1981)”, “(Garcia-Serrano et al. 2015)”.
2) Consider clarifying the definition of the word “deterministic” – not being a stochastic parameterization expert, I initially thought this might mean “prescribed SST” as opposed to coupled, when actually this means “not stochastic”.
Typos etc:
L52: “are a manifestation”
L169: “are computed”
L208: “to reduce”
L218: “sea surface”
L229: “Examination… supports”
L297–300: This text is a repetition of L179–183, so I suggest deleting.
L405: Strictly speaking, Table 1 shows the correlations between the LIM NAO and LIM sea ice – not LIM NAO with true NAO. The latter is shown in Fig. B6.
L423: “may have changed” → I think you mean “between CTRL and OCE”, but it’s not entirely obvious from the phrasing.
Caption of Table 1, L3: broken link to section 5.2
Figures 4 and 6: Suggest highlighting the BK and BG regions with boxes in the maps
-
AC3: 'Reply on RC3', Kristian Strommen, 14 Feb 2022
We thank the reviewer for the insightful comments, and draw their attention to the increased ensemble size obtained since submission: see the response to RC1 for details.
- About Figure 6, yes you are absolutely right that there is a 2-way interaction there which we failed to comment on. This will be included in the revisions.
- Yes, there is evidence in prior literature that this teleconnection is weaker in AMIP models. This was mentioned at line 520, citing Blackport and Screen (2021), though we believe earlier studies (cited in their paper) had pointed to this as well. For EC-Earth in particular, Caian et al., An interannual link between Arctic sea-ice cover and the North Atlantic Oscillation (2018), Clim. Dyn., showed that ice/NAO links are weaker in an AMIP simulation than in a coupled simulation, something they attributed to the missing coupling. Our paper provides further evidence of the importance of coupling for a realistic teleconnection, though several questions remain about the exact mechanisms. We show that while the initial, local ice->heatflux response appears similar for both CTRL and OCE, the subsequent growth and evolution of the anomaly is significantly better in OCE. Presumably, as you point out, the initial local anomaly would be highly realistic in the AMIP simulations, but the failure to propagate the anomaly would likely be even worse given the total lack of coupling. Caian et al. include some other discussion of possible mechanisms here. We will discuss some simple hypotheses as well, including the alignment of the sea ice edge with the eddy-driven jet, and the importance of sea ice adjustments further afield from the source region (Barents/Barents-Kara). This will be discussed in the revised paper.
- Yes, exactly: the initial anomaly is long-lasting due to the persistence of sea ice, but is ultimately damped away by the opposing response of the NAO. We will revise the paper to make this clearer. More discussion of the initial local response vs more remote adjustments is also included, as per point 2 above.
- All reviewers have commented on the mean state, and in hindsight the minimal role we ascribed to the mean state wasn’t justified. We can’t see any meaningful difference in the November 1st initial conditions (of the ice and NAO) between CTRL and OCE, but the LIM model takes anomalies as input, which ignores any non-linear effects. Since such non-linearity is likely to be present here, our analysis can’t really address this. On balance, it is likely that the improvements in OCE are due to both the mean state and the coupling, and we will make this clearer in the revised paper.
- The NAO EOF was computed separately for each dataset, to allow the centers of NAO action to shift between each dataset according to differences in the mean state: this will be made clearer in revisions. We believe it is important to allow for some shifts between models to not obscure signals or overly penalise models (i.e. penalising both for mean state biases and changes to modes of variability). That being said, in this case there is little difference between the CTRL and OCE NAO, with a pattern correlation between the two of around 0.97. The results are therefore highly unlikely to change if using the exact same NAO pattern for CTRL and OCE. This will be mentioned in revisions.
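The per-dataset EOF computation and the quoted pattern correlation can be illustrated with a small sketch. This is a generic SVD-based EOF calculation on synthetic data, not the code used in the paper; deseasonalization and area weighting are assumed to have been applied already:

```python
import numpy as np

def leading_eof(anom):
    """Leading EOF (spatial pattern) and PC of an anomaly matrix.

    `anom` has shape (time, space) and is assumed to be
    deseasonalized and, if needed, area-weighted.
    """
    u, s, vt = np.linalg.svd(anom, full_matrices=False)
    eof = vt[0]            # leading spatial pattern
    pc = u[:, 0] * s[0]    # corresponding principal component
    return eof, pc

def pattern_correlation(p1, p2):
    """Uncentered pattern correlation between two spatial patterns."""
    return float(np.dot(p1, p2) / (np.linalg.norm(p1) * np.linalg.norm(p2)))

# Synthetic example: two datasets sharing the same dominant pattern,
# as with the CTRL and OCE NAO patterns discussed above
rng = np.random.default_rng(1)
true_pattern = np.sin(np.linspace(0, np.pi, 50))
data_a = np.outer(rng.standard_normal(200), true_pattern) \
    + 0.1 * rng.standard_normal((200, 50))
data_b = np.outer(rng.standard_normal(200), true_pattern) \
    + 0.1 * rng.standard_normal((200, 50))
eof_a, _ = leading_eof(data_a)
eof_b, _ = leading_eof(data_b)
r = abs(pattern_correlation(eof_a, eof_b))   # EOF sign is arbitrary
```

A pattern correlation near 1 (as with the ~0.97 quoted above for CTRL vs OCE) indicates that computing the EOF separately per dataset versus on concatenated data would make little practical difference.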
-
EC1: 'Comment on wcd-2021-61', Camille Li, 02 Mar 2022
Thanks to the referees for their in-depth reviews and the authors for their careful consideration of the points raised. Because of rather substantial edits and a re-interpretation of the results in response to the reviews, I'm offering the referees a chance to comment further on the revised manuscript.