A Wise Forecasting Philosophy

This is a sample lesson page from the Certificate of Achievement in Weather Forecasting offered by the Penn State Department of Meteorology. Any questions about this program can be directed to: Steve Seman


From this page, you should be able to explain why "swinging for the fences" is a risky forecasting approach that won't lead to success over the long haul. Also, you should be able to identify the key steps to making a good forecast, and be able to execute a "probabilistic" internal dialogue with yourself when making deterministic forecasts.


How do good forecasters go about making a forecast? What general approach do they use? As we've already discussed, anybody can interpret model guidance with a little bit of training, but the goal here is to put you on the path to becoming a good weather forecaster (a rare breed). To get us started in discussing the forecasting philosophy advocated in this course, I'm going to use an analogy that involves baseball (although softball would work, too). So, if you're not into baseball or softball, forgive me, but I think the analogy still gets the point across.

Photograph of a batter awaiting a pitch.
What does baseball have to do with weather forecasting? As it turns out, the prudent approach of successful "singles" hitters can serve as a basis for a successful forecasting strategy.
Credit: Steve Seman

When it comes to forecasting, you can take one of two approaches, which I associate with a batter's approach in baseball (or softball). I'll call one approach the "Dave Kingman approach" and the other the "Tony Gwynn approach." The first paragraphs of their Wikipedia biographies highlight their contrasts in hitting style. To boil down the differences, here are a few key stats from their careers:

  • Dave Kingman (played from 1971-1986): 16 seasons, 442 home runs, 1,816 strikeouts
  • Tony Gwynn (played from 1982-2001): 20 seasons, 135 home runs, 434 strikeouts

What should you note from those stats? Well, for starters, both players had fairly long, but different, careers. Dave Kingman hit a lot of home runs, but he also struck out a lot. In fact, when he retired, he had the fourth-most strikeouts in Major League Baseball history. On the other hand, Tony Gwynn didn't hit many home runs, but he rarely struck out. Tony Gwynn was the ideal "singles hitter": he had a very high batting average (top-20 all-time, in fact), so he was very effective at getting hits and getting on base (not making outs).

When it comes to forecasting, the "Tony Gwynn approach" is more successful over the long haul. When forecasting, I try to hit a lot of "singles," and I'm very careful about taking large risks, so that I avoid strikeouts (major forecast busts). When you take the "Dave Kingman approach" and always swing for the fences in weather forecasting, you might hit the occasional home run (make a risky forecast that turns out great), but more often you're going to strike out (like Dave Kingman) and end up with a bad forecast.

The name of the game in forecasting is to minimize your error as often as possible, which is akin to consistently hitting "singles" in baseball. Such an approach will allow you to minimize or eliminate huge forecast busts. Now, you might be thinking, "but I'd rather hit home runs -- home runs are better!" I assure you, however, that even great weather forecasters have a hard time hitting home runs consistently. Also, keep this in mind: Tony Gwynn, the ultimate singles hitter, was inducted into the Baseball Hall of Fame on the first ballot. Dave Kingman, the monster home-run hitter, hardly got Hall-of-Fame consideration, despite the fact that, when he retired, 400 home runs was almost a guarantee of induction.
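To make "minimizing your error" concrete, here's a minimal sketch comparing two forecasters over a handful of days. All of the numbers are made up for illustration (they don't come from any real verification data): the "conservative" forecaster hedges toward modest amounts, while the "aggressive" forecaster occasionally swings big.

```python
# Hypothetical daily rainfall forecasts (inches) from two forecasters,
# verified against what actually fell. Values are invented for illustration.
actual       = [0.30, 0.10, 1.20, 0.00, 0.50]
conservative = [0.40, 0.20, 0.90, 0.10, 0.40]  # steady "singles hitter"
aggressive   = [0.30, 0.00, 3.00, 0.00, 1.50]  # sometimes swings for the fences

def mean_absolute_error(forecast, observed):
    """Average of the absolute forecast errors."""
    return sum(abs(f - o) for f, o in zip(forecast, observed)) / len(observed)

print(round(mean_absolute_error(conservative, actual), 2))  # 0.14
print(round(mean_absolute_error(aggressive, actual), 2))    # 0.58
```

Even though the aggressive forecaster nailed two days exactly, the single big bust (3.00 inches forecast versus 1.20 inches observed) dominates the average error, which is precisely the "strikeout" the Tony Gwynn approach avoids.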

What does applying this philosophy to a weather forecast look like? Allow me to show an example: METEO 410 students had to grapple with a rather challenging forecast for Atlantic City, New Jersey, a number of years ago, when a powerful low-pressure system was slated to move up the Atlantic Coast and dump heavy rain on Atlantic City. In fact, one model predicted nearly three and a half inches for the day! You can see the basic set-up in the GFS four-panel forecast prog below.

GFS forecast prog valid at 00Z on March 17, 2007.
The GFS forecast valid at 00Z on March 17, 2007 predicted that a strong area of low pressure would bring heavy rain to Atlantic City, New Jersey. This model run was initialized at 12Z on March 14.
Credit: Penn State University / Phil Lutzak

But the daily precipitation record for the date at Atlantic City was only 1.14 inches, meaning that many models were forecasting double or even triple the daily-record rainfall. That was a red flag that predicting such huge precipitation amounts could be risky. In the final analysis, the models were too high (surprise, surprise), and students who swung for the fences along with the models had huge forecast errors (they struck out big time). Atlantic City did set a daily rainfall record, but only 1.24 inches actually fell.
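This kind of climatological sanity check is easy to make routine. The sketch below uses the numbers from the Atlantic City case (a model forecast of roughly 3.4 inches against the 1.14-inch daily record); the 1.0x threshold for raising a flag is just an illustrative choice, not an official rule.

```python
# Sanity check: how does the model's forecast compare to climatology?
model_qpf = 3.4      # inches; one model's precipitation forecast for the day
daily_record = 1.14  # inches; the station's daily rainfall record at the time

ratio = model_qpf / daily_record
if ratio > 1.0:
    # Forecasting more rain than has ever fallen on that date is a red flag
    print(f"Model forecast is {ratio:.1f}x the daily record -- proceed with caution!")
```

A ratio near 3, as in this case, doesn't automatically mean the model is wrong, but it does mean you should demand strong meteorological evidence before riding along with it.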

How could a forecaster have gone beyond mere "model reading" and come up with a prudent forecast? The key is to assess the overall weather pattern for clues that could make or break a forecast. For example, you may recall from previous studies that forecasters look for strong low-level jet streams at 850 mb to import deep moisture in synoptic-scale heavy rain events. A skilled forecaster in this case may have noticed that the strongest portion of the low-level jet stream was likely to be offshore, as suggested by this GFS forecast from the day before the event (note the 850-mb prog, bottom middle), which meant that the strongest deep moisture convergence might be east of Atlantic City. With only a peripheral encounter with the best moisture convergence in the cards, doubling or tripling the daily rainfall record was pretty unlikely, and a forecaster could have gone with a more conservative forecast. This case is a microcosm of a broader lesson: swinging for the fences and/or forecasting exactly what the models predict (or what you want to happen) in an extreme situation is a risky approach that often doesn't work well.

The attributes of a good forecaster ...

The Atlantic City case is pretty scary, isn't it? Models (and many forecasters) predicted double or triple the amount of rain that actually fell. The models can be your worst enemy if you don't use them in an insightful (and cautious) way. Of course, insight comes with experience, and to gain worthwhile forecasting experience, you need to develop specific attributes. What attributes do good forecasters possess? The list below shouldn't surprise you after all of your time in the certificate program. To become a good forecaster, you must:

  1. Possess a working knowledge of the behavior of the atmosphere.
  2. Possess a working knowledge of forecasting principles and techniques.
  3. Gain enough experience to know which principles to apply to any given situation.
  4. Develop the ability to interpret statistical and numerical model guidance.
  5. Acquire knowledge of how a general forecast must be modified to account for local effects at a specific site.

Hopefully, all the certificate courses have given you a solid foundation for attribute #1. Don't get too comfortable, though. Number 1 is a "work in progress," even for experienced forecasters! Weather forecasting is all about life-long learning.

This course provides a solid foundation for you to learn and apply a working knowledge of forecasting principles and techniques (#2), and the process of making forecasts in this course will be critical to building your body of forecasting experience (#3). This, too, will be a "work in progress." The lessons in this course will also enhance your ability to interpret statistical and numerical model guidance (#4). Finally, #5 is crucial: Know your local climatology! We'll talk more about the types of climatological information that good forecasters need to know later in the lesson.

Radar mosaic of the eastern U.S.
Observations, satellite imagery, and radar imagery, like this radar image of the eastern U.S. from 0930Z on August 27, 2020, should be an important starting point for your forecasting routine.
Credit: WSI / Penn State University

Good forecasters also develop a forecasting routine that allows them to apply the knowledge and skills described above. How exactly do good forecasters go about making a forecast? Ultimately, different forecasters will approach the forecasting process a bit differently based on personal preferences for data types and their forecasting objective in a particular situation. Still, for making a short-range forecast, good forecasters tend to follow the same basic recipe, which I've outlined below for you:

Forecast Tip

When you create your own short-range forecasts...

  1. Start with the "big picture." Look at local observations and also observations "upstream" (regions from which relevant systems will approach the local forecast area). Is the air that's moving into your area notably warmer / cooler or moister / drier?
  2. Look at surface and upper-air analyses, satellite and radar imagery -- really, any tool that helps form a clearer "big picture" of the present state of the atmosphere.
  3. Peruse the model runs to get an overall sense of the forecasting issues (shortwave troughs, fronts, jet streaks, other sources of vertical motion, etc.). Compare models. Look for consensus or a lack of consensus (that is, the degree of uncertainty in the forecast). Look for trends in the models (so you need to look at previous model runs). Also look at ensemble solutions for the overall synoptic-scale pattern, trying to identify where the models are having problems (or where there is a relatively high confidence in the forecast). In general, try to get your arms around the key issues of the forecast. The big picture is the boss!
  4. Now it's time to get down and dirty with details: statistical guidance (like the National Blend of Models), ensemble solutions for specific forecast parameters, forecast skew-T's, and other advanced techniques that we'll learn about later. Your big picture analysis should tell you which techniques will be appropriate, so this stage of the forecasting process might differ from day to day.

You will learn that statistical guidance like the National Blend of Models (NBM) and some ensemble mean forecasts tend to be good over the long haul. But they may or may not be good for tomorrow's specific forecast. Of course, you should always consider these tools, but don't automatically accept them or reject them out of hand. Indeed, you must have a clear "big picture" in mind (be one with the atmosphere) before you make any decisions about tomorrow's forecast. In other words, you must put model guidance into a proper meteorological context. Developing this kind of approach takes practice, of course, and you'll get plenty of it this semester.

While sorting out the details of the forecast, you have to keep your wits about you. The amount of data can be overwhelming, and sometimes there's little consensus among the models. Especially in cases with great model uncertainty, I think it's wise to establish an internal dialogue with yourself. For example, if the range in ensemble precipitation forecasts for tomorrow is from 0.10 inches to 0.75 inches, ask yourself questions like "Is it more likely that something closer to 0.10 inches or 0.75 inches will fall tomorrow? Why?" Then back up your answers to your internal questions with sound meteorological reasoning (perhaps based on the various lifting mechanisms available and the overall moisture characteristics of the air mass, for this example). Even though we'll focus on deterministic forecasts this semester, it's still important to think probabilistically. This will help you hit more "singles" instead of swinging for the fences in pursuit of a home run (and, more often than not, striking out with a big forecast bust).
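That internal dialogue can be sketched in code. Here's a minimal example using a made-up set of ensemble members spanning the 0.10-to-0.75-inch range from the example above (the member values and the 0.25-inch question threshold are invented for illustration):

```python
# Hypothetical ensemble precipitation forecasts (inches) for tomorrow.
members = [0.10, 0.15, 0.20, 0.20, 0.25, 0.30, 0.35, 0.45, 0.60, 0.75]

# Two numbers that help answer "closer to 0.10 or 0.75?":
ensemble_mean = sum(members) / len(members)
frac_below_quarter = sum(1 for m in members if m < 0.25) / len(members)

print(f"Ensemble mean: {ensemble_mean:.2f} in")
print(f"Fraction of members below 0.25 in: {frac_below_quarter:.0%}")
```

In this invented example, the mean sits well below the middle of the range and a sizable fraction of members cluster at the low end, so a deterministic forecast leaning toward the lower amounts would be the "singles" play. Of course, the numbers only frame the question; the meteorological reasoning (lift, moisture) still has to justify the answer.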

I hope you noticed in the forecasting process outlined above that good forecasters start with observations and the atmospheric "big picture." On that note, we're going to look at a case study to give you an example of how forecasters can use the big picture to diagnose the expected weather in a particular region. Keep reading!