
David Randall, Colorado State University: “Model Development: The Journey is the Reward”

Mar 11, 2024
We've all seen this photo before, but I chose it anyway because he's sitting in his office, and I have very fond memories of sitting with him in his office, so I just wanted to show it. So let's talk a little bit about how, in my opinion, Arakawa approached the science, and it has to do with how we use models to learn about the real world. One way is in creating the model in the first place, because you have to think about how nature works and how to formulate that in order to build any kind of model, but especially a numerical model, and this happens before you start writing code.
Another way is to compare the results of the simulations with the observations, see what looks good and what looks bad, and then try to understand why. And the third way is to do numerical experiments, like taking out the Rocky Mountains or something like that. So there are many possibilities; I'll come back to this later, I just wanted to set it up now. Let's go back to grad school. This photo was taken in the ninth-floor penthouse of Boelter Hall at UCLA, probably in '72, maybe '73. At that time, essentially all global circulation modeling was done in this country;
it had not yet spread internationally, though that was just getting started. The work was purely academic in the sense that global models were not being used to make operational forecasts, and it was shortly before they began to be used to make climate change simulations. The effort was relatively modest: the developers of the model were the same people as the users of the model. Today we have a small group of developers and a large community of users; it was not like that back then. It took approximately two hours on a "fast" computer, and I put that in quotes because the computer was considered fast at the time, to simulate one day with a grid spacing of about 400 kilometers, which is very crude by today's standards.
This is a machine, or at least the type of machine, that I used when I was a student, at the old medical center on the east side of Westwood Boulevard. It was an IBM 360 Model 91. It had four megabytes of main memory, four megabytes, and it could do about five and a half million multiplications per second. The disk drives held two megabytes each, were about the size of a closed washing machine, and made a lot of noise. The input was through a card reader, which most of you have probably never seen, and the printers printed one line at a time, so they were called line printers, and they also made a ton of noise. We didn't really have any graphical output. So that was the computer back then. Well, computer power has since increased by about a factor of 100 billion, from five megaflops to 500 petaflops, and actually I know some of the machines are a little faster than that now. A factor of 100 billion.
You know, over the course of my career, that's absolutely amazing. We all know computers are getting faster, but this is much faster. So this is a simulation; it looks like reality, but it's a very high resolution simulation. During those same years, as the machines got faster, the models themselves improved enormously, and this guy had a lot to do with that. So I'll show you this list of innovations. I probably left something out, and I apologize if it's your favorite thing. It's not in chronological order, but maybe it's a little bit chronological, so it starts near the beginning of the UCLA program.

I won't read this list to you because it's too long, but I think a lot of these things, more than half of them, I'll mention again as the talk goes on, and this slide will appear again near the end. Okay, so here's the plan for the talk. I'm going to discuss some recent research in my group and point out the many ways in which it builds on Akio's ideas. The main theme, not the only one, but the main theme will be something called the curl-curl, which I'll explain to you. It's a global model that's formulated in a particular way, and I'm going to try to explain the reasoning for why it is the way it is. Here's a very cryptic diagram that I'll show you again at the end with some labels on things, and I hope that by the end of the talk you'll understand what this diagram means.

So when you build a model you have to choose things; you have to make many, many decisions, and certainly one of the first things, maybe not the first thing, is to choose the grid, or the way to discretize the sphere.
Let me put it that way so I can include spectral models. The particular model I'm going to describe to you uses a quasi-uniform, quasi-isotropic geodesic grid that is derived from an icosahedron by recursive subdivision, and the resolutions you see here go down to about a kilometer of grid spacing. Of course, they can go further, and this morning we heard about even higher resolution grids. I'll mention here that in the 1960s Arakawa was involved in a project to investigate the possibility of using grids like this to build global models, so he was there from the beginning, and you know this kind of grid is still used today: in Japan, in Germany, and also in this country with MPAS.
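As a rough illustration of how recursive subdivision sets the resolution, here is a small back-of-the-envelope sketch. It is not the actual grid-generation code; the cell-count formula for the hexagonal-pentagonal dual of a recursively bisected icosahedron, 10 * 4**n + 2, and the mean Earth radius are the assumptions it rests on.

```python
import math

EARTH_RADIUS_KM = 6371.0  # assumed mean Earth radius


def geodesic_grid_stats(n_bisections):
    """Rough cell count and nominal spacing for an icosahedral geodesic grid.

    Assumes the hexagonal-pentagonal dual of a recursively bisected
    icosahedron, which has 10 * 4**n + 2 cells after n bisections.
    """
    n_cells = 10 * 4 ** n_bisections + 2
    earth_area = 4.0 * math.pi * EARTH_RADIUS_KM ** 2
    cell_area = earth_area / n_cells
    spacing_km = math.sqrt(cell_area)  # treat each cell as roughly square
    return n_cells, spacing_km


for n in (5, 7, 9, 11, 13):
    cells, dx = geodesic_grid_stats(n)
    print(f"{n:2d} bisections: {cells:12,d} cells, ~{dx:6.1f} km spacing")
```

With those assumptions, eleven bisections give roughly 3.5-kilometer spacing and thirteen give a bit under a kilometer, consistent with the range described here.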
In Japan they are using it in Germany and also in this country the empaths. I didn't care, you have to choose the system of equations, now you know, you could think well, we know what the equations are, what you mean, there are options and in particular, there are systems of equations that filter the sound waves this morning. I asked John Thuburn about sound waves, so this article came out in 2008 and this archive encounter describes a particularly clever, precise and simple system of continuous sound wave filtering equations. In the equations, you filter out sound waves that propagate vertically, but not those that propagate horizontally sideways, so the ground wave that we all heard about in relation to that big underwater volcano a few months ago, the ground wave was in the model, but the waves that propagate vertically are filtered out and that's important because those are the ones that force you to take small time steps or do numerical tricks so that you can take longer time steps.
The only approximation in this unified system is that in the continuity equation which is what you see there, the density of the air is replaced by the quasistatic density and that what that means is the density that would be calculated by integrating the hydrostatic equation with a condition of contour, so all you do is replace row by row Q and everything else remains exactly the same as in the full compressible model. system, it turns out that this system, in addition to filtering vertically propagating sound waves, conserves energy which many other analysts do not and is also more accurate than the other systems, so it is a win-win, but like other systems analytics, the unified system makes you win. solving a three dimensional elliptic equation can be quite expensive, especially with a very fine grid, so I'll come back to this.
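Written out schematically (a sketch in standard notation, not the exact equations from the paper or the slide), the one change is in the continuity equation:

$$
\frac{\partial \rho_{qs}}{\partial t} + \nabla \cdot \left( \rho_{qs}\,\mathbf{v} \right) = 0,
\qquad \text{with } \rho_{qs} \text{ defined by } \frac{\partial p}{\partial z} = -\rho_{qs}\,g
$$

plus a boundary condition on that hydrostatic integration; all of the other governing equations keep their fully compressible form.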
The next thing we have to choose is the prognostic variables, that is, the variables that are actually stepped forward in time as the model runs, and here I want to connect the Arakawa and Lamb paper with this. It's actually a chapter of a book that came out in 1977, and the most cited part of that book chapter is about the A, B, C, D, and E grids, in connection with geostrophic adjustment and computational modes. So on this geodesic grid that we've selected there's a problem, one that John Thuburn didn't really dwell on this morning, he mentioned it quickly, and it's something I call the degrees-of-freedom problem. Arakawa and Lamb, in that book chapter, recommended the use of the C grid that we heard about today. The simplest way to define the C grid, the simplest general way, is to say that a C-grid model predicts the outward normal component of the wind on each cell wall. So the mass lives in each of these seven cells here, and on each cell wall, not just this one but all of them, the model predicts the outward normal velocity on the wall and not the tangential one. So if you look at this, let's look at the central dark gray cell, which is where the mass lives: it has six walls, so there would be six velocity components on the walls, six outward normal components, but each of them is shared between two cells, so there are actually only three wind variables per mass point. That is too many; there are supposed to be only two. So there are too many winds, and what that means is that there will be spurious computational modes in the wind field. This is a practical problem that makes the solution noisy and can cause trouble, especially when there is water in the model.

So in 1994 I was interested in using these geodesic grids, and I proposed a solution to this problem in which, instead of predicting the winds, you predict the vorticity and divergence. This is called the Z grid, and John mentioned it this morning. In this case it's a non-staggered grid: the mass lives in the center of the cell, but so do the vorticity and divergence, so obviously there's one divergence and one vorticity for each mass point, two degrees of freedom in the wind field for each mass point, which is the right number, so the degrees-of-freedom problem disappears. But the problem is that we still need to know the wind; it is not enough to know the vorticity and divergence, because we have to advect things. So to get the wind we have to solve a pair of two-dimensional elliptic equations at each level of the model, sketched below. That can be done, but it is a complication, and I'll return to this.

When the Z-grid formulation I just described is combined with the unified system that filters sound waves, it leads to a three-dimensional elliptic equation for pressure, and this is something most of you have probably heard of; it's characteristic of anelastic systems. So in this complicated-looking equation here, the delta-pi that you see there, the pressure departure from some reference, is the unknown, and there's a forcing that I didn't write down because it looks messy, but there's a forcing on the right-hand side. So you solve this equation for the unknown delta-pi, and the problem is the boundary conditions. The upper and lower boundary conditions, in the case of a flat surface, are that d-pi-dz is zero at the top and bottom. Those are boundary conditions that constrain the vertical derivative of the unknown rather than the unknown itself, and that can work, but it leads to slow convergence, so we would prefer boundary conditions where the unknown variable itself is constrained at the boundaries.
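The pair of two-dimensional elliptic equations just mentioned, for recovering the wind from vorticity and divergence on the Z grid, can be sketched in standard Helmholtz-decomposition notation (my notation here, not the slide's):

$$
\nabla^2 \psi = \zeta, \qquad \nabla^2 \chi = \delta, \qquad
\mathbf{v}_h = \mathbf{k} \times \nabla \psi + \nabla \chi ,
$$

where psi is a streamfunction and chi is a velocity potential, solved on each model level before the winds are needed for advection.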
The way to solve that problem is to use the vector vorticity model, so here's another reference. Before we move on to the next slide: this is of course an extratropical baroclinic eddy, and this is a large cumulus cloud. Here we have vorticity in this sense, and it's the vertical component of the vorticity that is really what you see when you look at the image. Here we also have vorticity, but there's the updraft, and then it spreads out and sinks in the clear air outside, so the vorticity is like this, and the vorticity vector associated with this convection is actually horizontal. In fact, since the cloud is somewhat cylindrical, you could imagine it's something like a vortex ring, a smoke ring, which I'll come back to, wrapped around the cloud at each level.

So what I'm going to call the vector vorticity model was published by Joon-Hee Jung and Akio Arakawa, I don't remember what year, but about ten years ago. They were interested in creating a three-dimensional cloud model in which the vorticity seen in this image was actually a primary prognostic variable of the model. The idea is that large-scale motions are controlled primarily by the vertical component of vorticity, and small-scale motions are controlled by the horizontal vorticity vector in the plane of the sphere. Vorticity is coherent, maybe not the right word, but vortices persist and can move, they stick together, and they are fascinating to watch; nobody is fascinated by divergence, because divergent motions, gravity waves, disperse. I intend this as an aphorism: realistic simulation of vorticity is key on both large and small scales.

So the VVM, this is Arakawa and Jung's idea, predicts the horizontal vorticity vector instead of a horizontal wind vector. Just to lay it out: if this is the three-dimensional vorticity vector here, it should actually be a capital V, but anyway, this is the horizontal vorticity vector, which I'll call eta, and this is the vertical component of the vorticity multiplying the unit vector that points upward. So this is what eta looks like: it involves the vertical shear of the horizontal wind, and it involves the horizontal gradient of the vertical motion, and that's important, the horizontal gradient of the vertical motion. And this is the definition of zeta, which you will know. So the VVM leads to a three-dimensional elliptic equation, but it's not an elliptic equation for pressure, it's an elliptic equation for vertical motion. I'm not showing the derivation here, but basically what you're seeing is essentially a form of the continuity equation using the definition of eta again. This looks like a three-dimensional Laplacian of w, if you ignore the densities, and what's on the right-hand side is the curl of the horizontal vorticity, del cross eta. Okay, so what the hell is that? Before we answer that question: if we assume that the boundary conditions are w equals zero at the top and bottom, for simplicity, without topography, then these are Dirichlet-type conditions, so they constrain the unknown directly instead of its derivative, which leads to much better convergence of the iterative solver, and that is a big practical advantage of this approach.

The VVM predicts the tangential component of the horizontal vorticity at each cell wall. Now I need to take a minute to explain this complicated-looking diagram. This is a grid cell, this is a vertical side on the right, and it's hexagonal, as you can see. So here's a normal component of the wind on this particular wall, pointing outward; here's the one on that wall, and so on, so we're showing three of them, we're not showing all six. And eta, the horizontal vorticity vector, or rather its tangential component, lives above and below the wind. So remember, this is horizontal vorticity, it's doing this, so this outward normal here you can think of as associated with the rotational motion around this eta.
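Written out in standard notation, as a sketch of the quantities being described rather than the exact equations on the slides: with v_h the horizontal wind and w the vertical velocity, the horizontal vorticity vector eta and the vertical vorticity zeta are

$$
\boldsymbol{\eta} = \mathbf{k} \times \frac{\partial \mathbf{v}_h}{\partial z} - \mathbf{k} \times \nabla_h w ,
\qquad
\zeta = \mathbf{k} \cdot \left( \nabla \times \mathbf{v}_h \right),
$$

and, ignoring the density factors (the constant-density form), the diagnostic equation for vertical motion takes the schematic form

$$
\nabla^2 w = -\,\mathbf{k} \cdot \left( \nabla_h \times \boldsymbol{\eta} \right),
\qquad w = 0 \ \text{at the top and bottom},
$$

which is the three-dimensional elliptic problem with Dirichlet boundary conditions described above.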
I hope you can visualize this. Predicting the tangential component of eta on the wall allows you to calculate the normal component of the wind half a level below, the blue arrow, and that means it's a C grid. So we're back where we started: with the Z grid that I described above we had too many wind components per mass point, and now we have too many vorticity components per mass point, basically the same situation. When we went from the C grid to the Z grid, we replaced u and v with the curl and divergence of the horizontal wind, so we're going to do the same thing here: we're going to replace the tangential component of the cell-wall vorticity with the divergence and curl of the horizontal vorticity vector. It's the same idea, just translated from the wind to the vorticity.

So this is how it works, and what I'm doing here is comparing the Z-grid model on the left side with curl-curl on the right. With the Z-grid model we are predicting zeta and delta, which are defined here; with curl-curl we are predicting the curl of the horizontal vorticity vector, there it is, and the divergence of the horizontal vorticity vector. But the three-dimensional vorticity vector is non-divergent, because it is a curl, which means that the divergence of the horizontal vorticity vector is minus d-zeta-dz. So what can we do? Instead of carrying the divergence of eta, we can just predict zeta and then calculate minus d-zeta-dz, and that gives us the divergence of eta. So what we're actually predicting is zeta and gamma, where gamma is the curl of the horizontal vorticity vector. Zeta relates to this, and gamma relates to this, and I'm going to say a little more about how that works in a second. But first let me comment that it's vorticity all the time, and Curl Curl is a surf town north of Sydney, Australia.

So what is the curl of the vorticity? Vorticity has a curl when the vortex lines, which you can try to imagine, make loops or rings, for example with this column, and this is like the cumulus I showed before. You know, there is upward motion here and a sinking motion that you can't really see on the outside, so there's horizontal vorticity, and it's forming a sort of loop around the cylindrical column, like a smoke ring: there's your curl. Okay, here's another way: these are cloud streets, so the air is rising where you see the cloud and descending between the clouds, so the horizontal vorticity is like this and it is changing in this direction; the horizontal vorticity vector points in this or that direction and it is changing in the direction perpendicular to its own direction, so it has a curl. These are two examples of the curl of the vorticity vector that you can visualize, and in both cases the vertical motion comes in very directly. Here we are alternating between going up and down as we move from the cloudy region to the clear region; here we have upward motion in the column and sinking around the outside. So there is a very direct physical link between the vertical velocity and the curl of the horizontal vorticity vector.

So how does it actually work? We are given the initial conditions, so we initialize zeta, and then we calculate the divergence of eta from that, and we initialize gamma, and then the way to get eta is to solve a pair of two-dimensional elliptic equations at each level, as with the Z-grid model. Once we know eta, we do exactly the same thing the VVM does; I won't explain all those steps, but that's written up in a couple of papers. That's how it works, and it's a different grid. I didn't actually mention the A grid before, but the A grid has the two horizontal components of the wind and the mass on a non-staggered grid, so they're all defined at the centers of the cells. The Z grid, as I explained, has zeta, delta, and the mass unstaggered, all at the centers of the cells. Now we have another non-staggered grid, with zeta, gamma, and the mass all at the centers of the cells, and I want to call it the Omega grid. So, going back to this diagram: a geodesic grid is used, the unified sound-wave-filtering system, the vector vorticity model, and the Omega grid.
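Put as equations, again in standard notation as a sketch rather than a transcription of the slide: because the three-dimensional vorticity is the curl of the velocity, it has zero divergence, which ties the divergence of eta to zeta and leaves zeta and gamma as the prognostic pair on the Omega grid:

$$
\nabla \cdot \left( \boldsymbol{\eta} + \zeta\,\mathbf{k} \right) = 0
\;\;\Rightarrow\;\;
\nabla_h \cdot \boldsymbol{\eta} = -\,\frac{\partial \zeta}{\partial z},
\qquad
\gamma \equiv \mathbf{k} \cdot \left( \nabla_h \times \boldsymbol{\eta} \right).
$$

Given zeta and gamma, the divergence of eta follows from the vertical derivative of zeta, and eta itself is recovered from a pair of two-dimensional Poisson equations on each level, exactly analogous to recovering the wind from zeta and delta on the Z grid.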
So that's what this diagram means, and it works. We built a global version of this using the unified system and tested it. I'll show you two slides with tests. Here is a DCMIP test; DCMIP is the dynamical core intercomparison project that was organized by Christiane and others, and in this particular case you are simulating non-hydrostatic gravity waves along the equator. The curl-curl results are shown in these left panels for four different times starting from an initial condition, and here are the ENDGame results, which is one of the models that John Thuburn contributed to the development of, and he showed some things this morning. The main point is that they look pretty similar, here and there.
In one the contours are drawn in addition to the color shading, and in the other it's just the shading, so they look slightly different, but they are basically very similar. This is the standard Held-Suarez test, and I'm not going to talk about it, but basically the results are as expected. Let me see how I'm doing on time here. Okay, now I'm going to change the subject a little. This slide was created by Akio when he was thinking about global CRMs, and he was very aware that computers are getting faster and that people are interested in replacing convection parameterizations in models with direct simulation of convection on sufficiently fine grids. So he made this little diagram in which the horizontal axis is the resolution of the model, actually the grid spacing. Here is an extremely coarse grid spacing of 1000 kilometers, and here is a grid spacing of 100 meters; this is similar to what Masaki was talking about this morning. The European Centre's operational model and the NCEP operational model are both somewhere around here now. So a GCM in the old-fashioned sense would live here, with grid spacings of hundreds of kilometers extending down to, you know, 50 or so, and a CRM, a cloud-resolving model, lives over here somewhere, with grid spacing on the order of a kilometer or two or finer.
So what Akio was thinking about was how to get from here to here. The vertical axis is the degree of parameterization, increasing downward: in the GCM with coarse grid spacing, the convection is highly parameterized, and in the CRM it is much less parameterized; you still have parameterizations of turbulence and so on, but much more is explicit. So recently, two years ago, we started a project at CSU called EarthWorks, which I'm going to tell you a little bit about here. It is led at CSU by myself and Jim Hurrell and is supported by the National Science Foundation for five years to develop a coupled global storm-resolving model that uses a single, nearly uniform global grid for the atmosphere, ocean, sea ice, and the Earth's surface, and this new model is largely based on CESM.
You know, we're not building this from scratch by any means. We're using CAM6 physics, but with the MPAS non-hydrostatic dynamical core living on a geodesic grid. We are using the MPAS ocean model developed at Los Alamos by Todd Ringler and others, and I think that was mentioned a little this morning. These two dynamical cores are formulated essentially the same way, and the grids are the same. The grids can be at various resolutions, of course, but our plan is to use the same grid spacing for the ocean and the atmosphere. There is also a sea ice model that lives on the same grid, and then of course there is the Community Land Model.

There's no dynamical core involved with that, but it has been implemented on the same grid. Then there is a coupler, which NCAR now wants to call a mediator, I don't understand the reasoning, but anyway, it fits the different pieces of the model together. And, you know, this is not a CESM or NCAR project, it is a CSU project, but we are preserving compatibility with the evolving CESM codebase and we are interacting with the CESM community. This will be open source, we will have releases, and you will be able to find compsets on GitHub and so on. So the idea is to create what amounts to an unofficial, or maybe renegade, version of CESM in which the ocean model of CESM, which will be MOM6, is replaced by MPAS-Ocean, because we want to use the same grid for everything. We have a nice logo, which you see here; it was designed by a graphic artist, you know, a professional artist at NCAR. One of these hexagons is the atmosphere, another is the ocean, and one is the land surface, and this just indicates that it is a global model.
We have t-shirts with this, and like I said, all the components will use the same grid, so one grid will rule them all, and it will be ours. The target resolution is 3.75 kilometers globally, which is in the cloud-permitting range; sometimes people call it storm-resolving. I recently learned that people call these models K-scale; K-scale apparently means kilometer-scale grid spacing. Okay, that's nice. For the atmosphere and the Earth's surface, with a K-scale model you can do, you know, thunderstorms, you can do hurricanes with realistic intensity, you can resolve the topography pretty well, so Longs Peak in Colorado is resolved, at least the longer gravity waves of the kind we heard about this morning, the orographic gravity waves, and also the gravity waves forced by convection, and other things. The coasts, of course, are well resolved, so you can think about explicitly doing estuaries, lakes, and large rivers, so the Amazon and the Mississippi would be resolved. Cities: New York City is fairly well resolved, and Fort Collins would have a handful of grid points. For the ocean you get the most energetic eddies, the mesoscales, at least away from the poles; you get deep ocean convection, which occurs only in some places but is important; of course you resolve the bottom topography; you get internal gravity waves, like in the atmosphere; and, again, estuaries. So I'll show you a couple of results from EarthWorks.
What we've done is start at low resolution and just put the parts together, and that's a non-trivial thing. You might think, okay, we'll take the ocean model, plug it in, and that's it, but it has to fit into the CESM software infrastructure, and that took a lot of time for a very experienced programmer, but it's done now. Then we start at coarse resolution and work our way down by doubling the grid, halving the grid spacing, and we run that for a while, and then we do it again, and each time you cut the grid spacing in half something breaks: something is using too much memory, or there is some other kind of software problem like that. So we find all these problems that no one knew were there, and they get fixed, and then we halve the grid spacing again and there is another problem, so we are working our way down.

We can run the atmosphere model with a 3.75-kilometer grid on Cheyenne, which is the current NCAR machine; it barely fits, and that machine is not our target machine, I'll talk about that in a minute. We are currently running the coupled model with a 30-kilometer mesh, and I'll show you that here in just a second. These are some results from the atmosphere model simulating an observed case; this is a forecast from observed initial conditions. Here are the observations, the radar data; this is the radar reflectivity calculated from a version of the model that used WRF physics; and this is the same kind of thing from a version of the model that uses CAM6 physics, which is what's in CAM6, except it's MG3, version three of the Morrison-Gettelman microphysics, the latest and greatest microphysics. And you can see that the CAM physics actually does a pretty reasonable job.
I won't show a slide on this, but CAM6's convection scheme stops working as the grid spacing gets finer; when you get to maybe seven and a half kilometers it almost doesn't do anything. Now I'm going to show you two movies, and the one on the right will play first. They are the same except that one of them was run on the 30-kilometer grid, that's the one on the right, and the other was run on the 60-kilometer grid, ocean and atmosphere on the same grid, and they started with a realistic ocean thermal structure but without currents, so the currents spin up at the beginning of the simulation. What we will see starts on day 65, counting from the initial condition on January 1st.

In the case of the 60-kilometer grid, it runs out to day 220; in the case of the 30-kilometer grid, out to day 230. That has to do with the way the movies were made; it has nothing to do with the model. So when you click on the movie on the right side it will play, and you'll see eddies appear along the equator as things spin up, tropical instability waves, or whatever oceanographers call them now, and you see that the northern hemisphere is warming because the seasons are changing, and the southern hemisphere is cooling. Now I'll do the same with the 60-kilometer, the coarser grid, on the left side, and it's the same kind of thing, but the seasonal warming in the northern hemisphere is somewhat weaker, well, noticeably weaker, so there are definitely differences between these two resolutions. The coupled 30-kilometer run shown here on the right has now completed two full years and a bit more, but I don't have the second-year results to show you here. Anyway, the point is that the plumbing works. But even with very high resolution like this, and now I'm thinking mainly about the atmosphere model, we still need parameterizations of small-scale dynamical processes.
I say dynamical processes because of course we also need microphysics and radiation, but we need turbulence and shallow convection, and that's not resolved on a three-kilometer grid. Akio was thinking about this kind of thing in one of the last big projects of his career. So this brings up the issue of the gray zone; I think this term was used today, and people talked a little about scale-aware parameterizations. The situation is this, where delta-x is the horizontal grid spacing: scales smaller than the grid spacing you have to parameterize, no doubt about it, and scales larger than about 10 delta-x are well resolved. But there are scales that are only slightly larger than the grid spacing, and we can say that they are represented, because you can see them, you can do numbers on the grid for those scales, but they are not well resolved, which means that the equations don't predict their behavior accurately, so in a sense they have to be partially parameterized. This came up a little in John's talk this morning.

So the gray zone is where you don't have enough resolution to explicitly resolve the eddies. Let's say they are storms, okay, and the grid spacing is 10 kilometers. You can't explicitly resolve storms with a grid spacing of 10 kilometers, but they can try to grow in a sort of distorted way. Yet you also can't represent them statistically, as with a parameterization, because maybe only one storm would fit in a grid cell, so you can't do statistics in that situation. So you're between a rock and a hard place; there is no good way forward. That's what is meant by the gray zone, and there has always been a gray zone: when we were running atmosphere models with 400-kilometer grid spacing, the smallest synoptic scales were in the gray zone, things like fronts and so on; with a 4-kilometer delta-x, cumulus convection is in the gray zone, and turbulence is in the gray zone at even finer grids. So we have always had it, it's always been there, and we're not going to get away from it. So we can think about creating, and this is what Akio wanted to do, resolution-independent parameterizations.
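To summarize the scale argument in one line (the factor of ten is the rough rule of thumb quoted above, not a sharp threshold):

$$
L \lesssim \Delta x : \text{ parameterized}, \qquad
\Delta x \lesssim L \lesssim 10\,\Delta x : \text{ gray zone}, \qquad
L \gtrsim 10\,\Delta x : \text{ well resolved},
$$

where L is the horizontal scale of the eddies in question; in the gray zone the eddies are represented on the grid but not simulated accurately.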
The idea is that you have one set of equations, one code, and one namelist, one set of parameter settings, and you can run it with a coarse grid or with a fine grid, and it will do the right thing within the limits of the resolution either way, so it's a very flexible tool. So we have this paper by Arakawa and Wu, and Chien-Ming is here, so that's what this paper is about. I won't say more about it here, but it's a really pioneering effort with very interesting results. The last thing I want to do on this subject is to put down some markers for what a resolution-independent parameterization should look like.
This is what is often called a scale-aware parameterization, and I'm going to call it resolution-independent, even though that has more letters. I claim that a resolution-independent parameterization has to be predictive, that is, there has to be time stepping of the parameterization variables themselves, which really means that there is memory in the parameterization itself: the current state of the parameterized physics depends in part on its recent past state. One reason for this is that the processes are not in equilibrium, particularly at high resolution; quasi-equilibrium certainly does not apply with a grid spacing of, you know, 10 kilometers. And at high resolution the eddies are almost resolved, so the eddies, which can be represented on the grid but not simulated accurately, may undergo life cycles.
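To make the requirement of memory concrete, here is a minimal sketch. The variable name, the relaxation form, and the numbers are all hypothetical illustrations, not Arakawa's scheme or anything in CAM; the point is only that the parameterization variable is stepped forward in time instead of being diagnosed fresh from the resolved state at every step.

```python
def step_prognostic_closure(mass_flux, mass_flux_equilibrium, dt, tau_memory=1800.0):
    """Advance a parameterized cloud-base mass flux by one time step.

    A purely diagnostic (quasi-equilibrium) closure would return
    mass_flux_equilibrium every step.  A prognostic closure relaxes the
    previous value toward equilibrium over a finite memory time scale,
    so the parameterized convection remembers its recent past state.
    """
    return mass_flux + dt * (mass_flux_equilibrium - mass_flux) / tau_memory


# Illustrative use: the large-scale forcing switches on at t = 0, but the
# parameterized convection spins up over roughly tau_memory seconds instead
# of jumping instantly to its equilibrium value.
mass_flux = 0.0
for step in range(10):
    mass_flux = step_prognostic_closure(mass_flux, mass_flux_equilibrium=0.02, dt=600.0)
    print(f"t = {(step + 1) * 600:5d} s   mass flux = {mass_flux:.4f} kg m-2 s-1")
```

The same idea applies to whatever quantity a scheme carries, an updraft area fraction or an eddy kinetic energy, for example; carrying it prognostically is what lets the almost-resolved eddies undergo life cycles instead of being slaved to the instantaneous large-scale state.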
You know, they can be individual clouds, so memory is obviously key. The second point is that resolution-independent parameterizations must be non-local: the state of a single column is not enough to determine what is happening in that column, because eddies can be advected, or propagate, between columns, and of course we can all visualize things like propagating storms. And a resolution-independent parameterization has to be very flexible: deep convection becomes resolved at fine grid spacing, but shallow convection and turbulence must be parameterized at all resolutions, down to basically a few kilometers. So resolution-independent parameterizations are a very challenging model development problem, and if you set this as a goal, I want to figure out how to do this, then in the process of trying you have to think about how nature works in ways that maybe you didn't before, when you were thinking about very large grid cells. So we need to think a lot.
Another question, and this connects to the EarthWorks slides that I showed: is our very high resolution model actually a good idea? If we just compute everything, do we stop thinking, because the model tells us what is happening and that is all we need to know? Here I want to present a couple of points of view that have recently appeared in the literature. Here's an article by Tim Palmer and Bjorn Stevens, whom most of you will recognize; it appeared in the Proceedings of the National Academy in 2019, and it's a perspective, an essay. There's a little quote from the article here, to the effect that simulations will stimulate theory by focusing efforts on the salient questions underlying the predicted changes, rather than on developing parameterizations with little physical basis. Now, as a parameterizer, I take those words as, you know, kind of challenging. Here's another article that came out a little later by Kerry, who's here; it's an AGU perspective, again an essay, that Kerry wrote, and here's a quote from it: are we computing too much and thinking too little? The current desire to bypass theory encoded in parameterizations in favor of brute-force resolution may turn out to be the subordination of understanding to mere simulation. These are pretty contrasting points of view, and I don't completely agree with either of them.

I think we should do both. These guys are arguing that we should do high resolution and that the parameterizations have little physical basis; Kerry says we should think more, and that theory is encoded in parameterizations. In other words, a parameterization in some sense represents what I understand about the system; it's an idea, not just a curve fit. So, getting back to this: I showed you this list of innovations earlier, and for a lot of the things on this list I mentioned the papers, I actually showed you the covers, and I showed you this before, close to the start of the talk: how do we use models to learn about the real world?
So, Arakawa did a lot of rigorous model development, thinking very carefully about what to do before writing the code. Arakawa-Schubert is a good example; Arakawa-Lamb and potential enstrophy, which John mentioned this morning, is a good example, although there are many good examples. For him, developing the model was a way to learn about nature, and I think it is a very good way to learn about nature. That is what I mean by the journey is the reward: the work of the model development process is the learning, and that was the goal. You can also learn, as I said before, by comparing the results of simulations with observations. He didn't do very much of this, especially with the global model; there are not many examples in his work of that kind of thing. But he did do numerical experiments, not so much with the GCM; he did experiments with cloud models and used the results of those to formulate parameterizations of gravity waves and convection. And so it's an interesting take, and the work that Steve Krueger did for his PhD was one of the first examples of using a cloud-resolving model to think about better ways to develop parameterizations for GCMs, so Arakawa and his group were really pioneering in this area. I wasn't going to show this picture, because I know many of you have seen it before.
This is a picture from 1972. I'm showing it because Jim McWilliams asked me to. So here's Jim, Akio is on the right, obviously, I'm in the middle, and Wayne is on the left; Wayne looks the same now. And here's another one. I really like this one because it's kind of a private conversation over lunch at a meeting, and that's what I enjoyed doing with Akio. That's the end. Any questions? Yes, you first, and then the other one.

So, the target performance at 3.75 kilometers: we think we can achieve a little more than one simulated year per wall-clock day on sufficiently powerful machines that are available right now. That's not fast enough to do IPCC or CMIP-style simulations, but it is fast enough to do a lot of interesting studies of things like mesoscale convection and internal gravity waves, physical processes that I'm personally very interested in, and that we can analyze with a very high resolution global simulation like that. You can't really analyze them with observations, because the observations don't exist. So that's one type of application; the other type of application is prediction. The team includes climate guys like me and weather guys, and we intend to make applications of both types.
You mentioned that with the EarthWorks model the physics assumes that deep convection is mostly resolved by the grid, if it is fine enough; is that right? Beyond that, have you considered any coarser configuration where the parameterizations simply get some form of scale awareness? No, what I have in mind is to replace the convection and turbulence schemes in the model with something else that I didn't talk about here, but that's the plan. You know, there's still a gray zone, and it still takes something better than Zhang-McFarlane going to sleep. That's the plan.
Sorry, no, I don't quite understand; can you lower your mask for a minute? Yeah, I was just wondering, because it's our dynamics theme. Very good. I just want to ask you to go out on a limb a little with a somewhat philosophical question. From a purely practical point of view, you could divide the reasons for increasing the resolution of climate models into two, and I'll mention both. One is that you want to resolve a phenomenon there is direct interest in predicting, such as mesoscale systems: you want to know what their frequency is and how it might change. The other is that you want to do better at things that you already simulate reasonably well and have been able to do for a long time. So my question is, is that second aim being fulfilled?
In other words, as we improve the resolution, are the results that we have been able to achieve for a long time, for example the global average temperature, improving? Is the stratocumulus cloud bias reviewed this morning improving? Are we right to be optimistic about that? Well, stratocumulus is hard, because it needs an increase in vertical resolution and probably some better physics as well, so I certainly don't claim that that's going to improve just by increasing the horizontal resolution. But things like tropical cyclones obviously get better as the horizontal resolution goes up; at 30 kilometers the intensity is not realistic, but it's getting there, and that's an example of something we simulate crudely today, with our CMIP-style simulations at 50 to 100 kilometers, that we could do a better job on in the future with models like this.

So, traditionally, a development organization implements the physical processes, then puts them into a single-column model and tests them, and then puts them into a GCM. Do you think this paradigm still applies, especially single-column tests, specifically in the single-column framework?
Oh well, you know, sometimes the single columns use parameterized physics, like Arakawa-Schubert, and sometimes they use CRMs, so in the CRM case I think there's still room to use that paradigm, but maybe not so much on the parameterized side. You mentioned a bit about the equations that you want to solve and integrate, and also, related to the scale awareness of the convection parameterizations you want to construct, do you think there is a concern that the model's parameters depend on the choice of the model, in terms of the discretization method you choose, or the variables that you choose?
What parameters are you talking about? Parameters in the parameterizations themselves, in the microphysics, let's say. Of course, yes. Well, in my opinion the correct philosophy is that if there is a parameter in your model, it should be something that is at least in principle observable; maybe you can't measure it in practice, but in principle it should be observable, and then you should stick to your best measured estimate of that thing. So microphysics would be an example, right: we are far from perfect, but it's definitely useful, and there are a lot of numerical parameters in there, and they can be set based on measurements in Kansas, and then we stick to those values and don't change them just because changing them gives us a better answer.
What I mean by this is that if you look at his published work, there aren't many examples where comparing model results to observations is the main point of the paper. He does it, but the main point of the paper is usually the first bullet point.
