
One hundred years into the live ball era, is baseball a better game?

Style of play has been a hot topic this season, with records falling left and right, and center, too. That sounds pretty exciting, but a lot of people have a lot of problems with which records are falling and what that says about the product. Which, by the way, remains wonderful. Perfect? No. But still pretty damned good.

Whether you love it, hate it or fall somewhere in the middle, you can't really debate whether baseball has changed. We know it has. We keep track of what happens and have been keeping track since Ulysses S. Grant was president. The recent changes have been startling -- more homers, more strikeouts, more hit batters, more relief pitchers, etc. Some like it, some don't.

The 2019 season is likely to eventually stand out as historically extreme. Perhaps current trends will continue for a while longer, or even accelerate, but it is just as likely that some parts of the game will begin to swing back in the other direction. It doesn't feel like that will happen because our necks are all sore from watching middle infielders hit 400-foot homers to the opposite field. But the game has always eventually rediscovered its equilibrium. Of course, sometimes it needs a nudge to do so, and those talks and debates are happening all over the place.

One such nudge, or a series of them, took place 100 years ago, when the period of baseball history we now call the dead ball era came to an end. As with all things in baseball history, there is some disagreement about the end points of the era, though the generally accepted period is from 1901 through 1919. The actual punctuation mark on it was really Aug. 17, 1920 -- the day Indians shortstop Ray Chapman was killed by a dingy ball thrown by Yankees submariner Carl Mays. That's when baseball started throwing dirty balls out of play.

However, the evolution began before that. The yarn used to wind around the ball changed. Ballpark dimensions were shrinking. Scuffing and defacing balls became illegal, though such infractions have carried on surreptitiously ever since. And a left-handed pitcher named Babe Ruth started focusing on his hitting and showed us all that people kind of dig the home run.

This season marks the 100th season of the live ball era, which, I suppose, will never end. I mean, in what circumstance would baseball ever go back to using soiled, spongy, defaced balls, no matter how tired detractors get of home runs? I've been meaning to mark this anniversary all season, and with the 2019 campaign on track to shatter all kinds of records, this seems like a good time to make a comparison: How, exactly, in a statistical sense, has big league baseball changed since the end of the dead ball era? And have all these changes been good?

I'm going to run through a number of categories and view them from the perspective of a fan attending a game in 1919, as compared with one who has turned up at the ballpark this season. Every season is but a snapshot of time, and while it might be representative of its era, it also might not be. Back in 1987, we had a sudden surge in home runs, and the very next year, pitchers dominated. Yet we don't think of those years as being a dividing line between eras.

Also, because we've drawn such a clearly defined line between the dead ball and live ball eras, it can seem as if everything changed all at once. That's not the case. It was an evolution, one that was already underway before that line was drawn. Teams averaged 0.20 homers per game in 1919; the next season it was 0.26. By 1929, we'd reached 0.55. This season, it's 1.40.

Let's get to the categories:

Average number of batters needed to see a home run: 1919 -- 188; 2019 -- 27. Which is better? I think the game is nicely balanced at about one homer per game, so neither extreme is ideal for me. But given modern sensibilities, waiting five games between homers, as was the norm a century ago, seems harsh. I'll give a grudging edge to 2019.

Games to see a home run: 1919 -- 5; 2019 -- 1. I just covered this, but if you were someone like me who, as a kid, got to attend only a few games a season, weren't you disappointed if there weren't any home runs? Of course, when there are 13 of them or something, the novelty wears off quickly.

Batters to see a hit: 1919 -- 4; 2019 -- 4. Believe it or not, this year's overall batting average (.254) isn't that different from the one in 1919 (.263). Many more of the hits are homers, but in terms of the "H" column on the scoreboard, the difference isn't anything the typical patron would notice in the typical game. That said, because it involved more non-homer hits, I'll take 1919's path to the similar batting average.

Balls in play to see a hit: 1919 -- 3.66; 2019 -- 3.37. I'm trying to stay away from decimal points, but I need them in this category or else it will make the change look more stark than it is. The point here is something I don't think a lot of people realize: Hitters might not hit as many balls into play as they did in lower-strikeout eras, but they hit the ball harder, and even though fielders have never been better, BABIP levels are quite a bit higher than they used to be. Ever since 1994, it has been around .300. From the 1940s through the 1970s, that number tended to be more in the .270-.280 range. Today's flavor is better, I think.
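If you want the math behind that framing, "balls in play to see a hit" is just the flip side of BABIP. Here's a minimal sketch of the standard calculation; the helper and the sample totals are mine for illustration, not the actual 1919 or 2019 league lines.

```python
def babip(hits, home_runs, at_bats, strikeouts, sac_flies=0):
    """Batting average on balls in play: non-homer hits divided by
    at-bats (plus sac flies) in which the ball was put in play."""
    balls_in_play = at_bats - strikeouts - home_runs + sac_flies
    return (hits - home_runs) / balls_in_play

# Illustrative totals only -- not real league numbers.
rate = babip(hits=1400, home_runs=150, at_bats=5500, strikeouts=1200)
print(round(rate, 3))        # BABIP
print(round(1 / rate, 2))    # balls in play needed to see one hit
```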

Batters to see a baserunner: 1919 -- 3; 2019 -- 3. This season's aggregate on-base percentage (.323) is virtually identical to what it was in 1919 (.322).

Baserunners to see a stolen base: 1919 -- 12; 2019 -- 24. Were they reckless back in the day? Maybe. But that would be fun to watch. Give me 1919 on the basepaths.

Steal attempts to see a caught stealing: 1919 -- 2; 2019 -- 4. With great risk comes, well, great risk. Catchers and pitchers have never been better at cutting down would-be thieves, but teams pick their spots now, and stolen base success rates are far higher than they used to be. I like to think watching 1919 base stealers would be more exhilarating, but there is a fine line between daring and stupid. I'll take the 2019 model of efficiency.

Baserunners to see a balk: 1919 -- 597; 2019 -- 355. I don't have a strong preference here, but it is interesting to me that balks are more likely to be called now than they used to be. Slight edge to 1919.

Batters to see a walk: 1919 -- 14; 2019 -- 12. Not a stark difference. No winner here. Walks aren't a great thrill generally, but sometimes they do generate a nice buzz at the ballpark. Think of instances when a home batter battles a tough pitcher in a long at-bat and earns his way on. Fans appreciate it.

Batters to see a hit by pitch: 1919 -- 158; 2019 -- 95. For all the noise old-school fans like to make about past generations of pitchers owning the inside of the plate, batters get hit by pitches far more frequently now than they used to. This year's rate (0.40) would equal the modern-day record (since 1901) that was set just last season. I don't like it when people get hit by flashing spheres, so I guess I'll go with 1919. But of course all of this has less to do with the relative machismo of pitchers and more to do with modern-day padding and armor that allow hitters (like Anthony Rizzo) to virtually stand on top of the plate.

Batters to see a strikeout: 1919 -- 12; 2019 -- 4. Probably should put this category in bold face, or in all caps. I mean -- wow. And while I enjoy a good power pitcher, going with the 1919 rate of whiffs is an easy choice.

Batters to see a double: 1919 -- 29; 2019 -- 22. Meh. Nothing to see here, really. I'm a bit surprised that they are more prevalent now, so I'll give the nod to the present.

Batters to see a triple: 1919 -- 80; 2019 -- 241. This sad development has been written about quite a bit -- triples are disappearing. That's not good. In 1919, three-baggers outnumbered homers by more than 2-to-1. I don't want to lose too many homers, but give me a triple every other game and I'll be happy.

Batters to see a sacrifice: 1919 -- 31; 2019 -- 237. They did not use win expectancy tables in 1919. I hate sacrifices and always have. The current game has it right.

Batters to see an intentional walk: 1941 -- 134; 2019 -- 245. They didn't track intentional walks until 1941, so I've subbed in the oldest available rate just for a comparison. Obviously, 2019 wins at any rate because intentional walks suck. I wouldn't have even included this category except that one proposed, but unadopted, change the owners bandied about back in 1920 was this: a ban of intentional walks.

Batters to see a double play: 1919 -- 53; 2019 -- 46. Fielders, as mentioned, are way better now, both in terms of proficiency and in technology (equipment, positioning, etc.). I don't know that the difference is stark enough to declare a winner, but I'll go with 2019. Double plays are always aesthetically pleasing.

Batters to see a wild pitch: 1919 -- 4; 2019 -- 4. Nothing wild here -- a push.

Innings to see a run scored: 1919 -- 2.3; 2019 -- 1.8. Again, I have to go to the decimal point, this time to show that there is a difference. The scoring levels in 1919 were too low, even though that season's run average (3.88) was high for the era. This year's level (4.85) is on the high side, but it's not obscene. The issue is more how runs are being scored than how many of them there are. The real winner is a season about halfway between these two campaigns. I'm declaring a stalemate in this category.

Fielding chances to see an assist: 1919 -- 3; 2019 -- 4. This season will be the fourth in a row that we've set a record low in assists per game. It's a function of all the strikeouts and fly balls. As you can see, this is one of those categories you really only notice in the composite stats. Watching any given game isn't going to scream to you that fielders aren't making as many throws as they used to. Well, I like throws, so I'll give the edge to 1919.

Fielding chances to see an error: 1919 -- 29; 2019 -- 61. Fielding percentages have improved steadily over time, though it looks like this will be the sixth straight season that the overall fielding percentage has been exactly .984. I think we've found our level in this category, and routine fielding plays have taken on the tenor of the extra point in football. In 1919, the composite fielding percentage was .966. That's a little more drama, but it's not exactly kicking the ball around, either. This is a tough call because I don't really want players to be worse, but I do like 1919's percentage better. Mistakes can be exciting.

So, who wins? My final tally is eight wins each for 1919 and 2019, with four no-decisions. What does that tell you? It's what we knew all along. Baseball might take many forms, but it's always great. It's at its best, though, when we get a mix of everything we got 100 years ago and everything we see now.

In other words, baseball is at its best when the game is balanced. Be patient. We'll get back there before our 2119 comparison.

Extra innings

1. Recently, I read this intriguing piece from Craig Edwards at FanGraphs, which looked at the narrowing gap in per-inning effectiveness between starters and relievers. He finds that when you break down 2019 pitching performance by leverage, the change can mostly be attributed to a sudden spike in low-leverage innings. Thus, he sees it largely as a competitive issue -- more rebuilding teams, more blowouts, etc., lead to more low-leverage spots, when lesser pitchers tend to be used. I still see it very much as a supply problem, as in teams are burning through relievers too quickly, but I agree with most of what Craig has written.

In any event, it got me wondering: What would the standings look like if low-leverage situations were simply ignored? And does looking at the pecking order in this way actually reveal anything? First, here are the no-lo standings, broken down by league, not division:

Pretty interesting. In the American League, by raw run differential, the Astros look like a markedly better team than the Yankees. But when you remove the low-leverage runs, the gap between them virtually disappears. Meanwhile, the Twins and Indians come out virtually even-steven. In the National League, the Dodgers remain the team to beat but take a huge hit on their actual win pace and in Pythagorean wins. The Nationals shoot up the board and suddenly look like co-favorites in the NL pennant chase.

These observations are reflected if you isolate performance in low-leverage spots. To illustrate this, I calculated Pythagorean wins per 162 games for low-leverage spots only, and compared that with each team's overall Pythagorean mark.
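For anyone who wants to run a version of this at home, here's a bare-bones sketch of the conversion: Bill James' Pythagorean formula (I'm using the classic exponent of 2 for simplicity) applied to runs scored and allowed in low-leverage spots only. The run totals below are placeholders, not any team's actual 2019 splits.

```python
def pythagorean_wins(runs_scored, runs_allowed, games=162, exponent=2.0):
    """Convert a run differential into expected wins per `games` played."""
    rs, ra = runs_scored ** exponent, runs_allowed ** exponent
    return games * rs / (rs + ra)

# Placeholder low-leverage and overall splits -- not real 2019 numbers.
lo_lev = pythagorean_wins(runs_scored=310, runs_allowed=200)
overall = pythagorean_wins(runs_scored=780, runs_allowed=640)
print(round(lo_lev, 1), round(overall, 1))  # wins per 162: low-leverage only vs. overall
```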

Here are the teams whose overall run differential has been bolstered the most by, in effect, kicking teams when they're already down:

And here are the teams whose run differentials have suffered the most by letting games get really out of hand:

By and large, the really good teams excel in low-leverage spots, just as they do overall. And vice versa -- the bad teams are still bad when games aren't close, just as they tend to struggle when the game is within reach. Perhaps this is a function of depth, or lack thereof, which would explain the placement of several of these teams.

The Dodgers have been lauded for their superior depth for several years now, and if that's playing into this effect, you can see it here: L.A. has bludgeoned teams in low-leverage spots to the tune of 118 wins per 162 games. Conversely, back before the AL Central standings achieved some separation, I bemoaned the lack of depth on the White Sox. My thought then was that if Chicago had done a better job building the back half of its roster, as well as the ready-to-help talent in Triple-A, there was enough top-shelf talent to make a run. That statement has turned out to be overblown -- you can see in the no-lo standings listed above that even if you remove low-leverage runs, the White Sox still profile as only a 73-win team. However, the gap I saw between Chicago's core players and the rest of the 40-man roster appears to have been real.

Anyway, all of this is worth chewing on, but does it mean anything, really? It just might. I made these calculations for each season from 2009 to 2018, then sought to run correlations between several flavors of won-lost records and postseason performance, which I defined as Pythagorean wins per 162 playoff games. As a reminder, the closer the correlation coefficient of two sets of numbers is to 1, the more they are correlated. The closer to 0, the worse the relationship is.
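Before we get to the numbers, here's roughly what that step looks like in code. It's just a standard Pearson correlation between each playoff team's regular-season mark and its postseason Pythagorean pace; the lists below are made-up stand-ins, not the actual 2009-18 data.

```python
from statistics import correlation  # Python 3.10+

# Made-up stand-ins: one entry per playoff team-season, 2009-18.
no_lo_pythag_wins = [98.5, 91.2, 104.7, 88.9, 95.3, 100.1]
postseason_pythag_per_162 = [110.0, 72.0, 95.0, 60.0, 101.0, 88.0]

print(round(correlation(no_lo_pythag_wins, postseason_pythag_per_162), 3))
```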

First, let's look at correlations to actual won-lost record:

Pythagorean wins*: .941
No-Lo Pythagorean wins: .940
Lo-Lev Pythagorean wins: .827

All of these approaches at turning run differentials into wins correlate well with actual won-lost records. It's no surprise -- Bill James figured out the relationship between runs and wins decades ago. However, it is telling that removing low-leverage performance appears to have virtually no effect on the relationship in these correlations.

Now here are correlations between several categories and Pythagorean wins in postseason play:

No-Lo Pythagorean wins: .433
Pythagorean wins: .292
Actual wins: .282
Lo-Lev Pythagorean wins: .041

That is a bit surprising, enough so that I'm hoping someone more advanced in statistical analysis might see this and dive in to study the issue in more depth. (Actually, I'm sure someone already has studied this. When it comes to baseball, someone somewhere has studied just about everything.) To be clear, the correlations here are all pretty low. The sample sizes of postseason matchups are small, and there is a large degree of randomness at work. There is a reason we tend to refer to October baseball as a crapshoot.

Still, while actual wins and Pythagorean wins relate to postseason performance about equally well, you get a sizable bump if you strip out low-leverage performance. That appears to be a solid tell when it comes to assessing likely playoff performance. Again, one way to interpret this is the issue of depth: You simply don't need as many players to navigate October as you do from March to September.

To me, this makes sense on an intuitive level. The entire purpose of leverage-based metrics is to account for the fact that some situations are more impactful in determining the winner of a game than others. So it follows that you get a better glimpse of the real effectiveness of teams if you strip out the runs that had little impact on win probabilities. That's especially true if the players mostly responsible for many of the low-leverage numbers aren't around when the most important games arrive each autumn.

2. Here is an alternate set of standings:

AL EAST: 1. Red Sox; 2. Yankees; 3. Rays; 4. Orioles; 5. Blue Jays.

AL CENTRAL: 1. Twins; 2. Royals; 3. White Sox; 4. Indians; 5. Tigers.

AL WEST: 1. Astros; 2. Rangers; 3. Mariners; 4. Athletics; 5. Angels.

NL EAST: 1. Mets; 2. Nationals; 3. Phillies; 4. Braves; 5. Marlins.

NL CENTRAL: 1. Cardinals; 2. Cubs; 3. Reds; 4. Pirates; 5. Brewers.

NL WEST: 1. Dodgers; 2. Diamondbacks; 3. Rockies; 4. Giants; 5. Padres.

The teams are ranked within their divisions based on a very simple measurement: the number of innings they've gotten from the four starting pitchers they have used the most this season. All that matters is volume of innings, not performance.
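Mechanically, the ranking is as simple as it sounds: add up the innings from each club's four most-used starters and sort. Here's a stripped-down sketch, with made-up pitcher workloads standing in for the real ones.

```python
# Made-up innings totals for each team's starters (rounded to whole innings).
starter_innings = {
    "Astros": [172, 160, 148, 139, 61, 40],
    "Rangers": [165, 151, 118, 97, 82],
    "Mariners": [158, 132, 110, 94, 88, 35],
}

def top_four_innings(innings):
    """Innings from the four most-used starters; performance is ignored."""
    return sum(sorted(innings, reverse=True)[:4])

for team, ip in sorted(starter_innings.items(),
                       key=lambda kv: top_four_innings(kv[1]), reverse=True):
    print(team, top_four_innings(ip))
```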

There's a chicken-and-egg dynamic underlying this leaderboard because you have to pitch with a certain amount of acuity to be allowed to accumulate many innings in the first place; otherwise, the team will try someone else. Still, the standings dovetail pretty well with the actual standings. In other words, for all the work that has shifted from the rotation to the bullpen over the years, we're still at a point where getting a lot of innings from your core starters remains a good indicator of how far you can go.

At the same time, we see some stark examples of how teams are getting it done in other ways. The Braves lead the NL East by a good margin even though it took them about half the season to settle on a stable rotation. The Brewers continue to hang near the playoff race despite once again shunning the traditional starting pitching construct. The A's also have won while tweaking their rotation on the go.

Nevertheless, while smart, budget-challenged teams can figure out ways to navigate around the lack of a bedrock, 1-through-4 top of the rotation, having that base remains the easiest path toward contention. It's hard to imagine that will change anytime soon.

3. Justin Verlander has been in the news a lot lately, both for his spat with a Detroit sportswriter and because he's still really good and doing really good stuff. Lost in all of the headline material was something Verlander accomplished the night of the infamous reporter-ban incident, Aug. 21.

You'll recall that the reason everyone wanted to talk to him after the game was that he'd gone the distance that night and allowed only two hits, both of which happened to be solo homers. That was enough for Verlander to drop a 2-1 decision to the Tigers, his former team. Well, alackaday, it happens, right?

Actually, it doesn't. When Verlander lost a nine-inning complete game, he became the first pitcher to do so since it happened to Rich Hill on Aug. 23, 2017 -- two days shy of two years earlier. Barring extra innings, you have to be on the home team to even have a chance to lose a nine-inning complete game; a visiting pitcher who takes a loss in regulation gets, at most, eight-plus innings before a lead or a walk-off ends his night. Still, it's a rare thing, and it didn't use to be.

According to the Play Index at Baseball Reference, which has data going back to 1908, we didn't have a single such game last season, the first time that had ever happened. There were 16 such games as recently as 2005. And before that, forget about it: such games were too common to be worth mentioning. The record is 302, in 1915. Here are the year-by-year totals for each season ending in 9:

Losing decisions, 9 or more IP
2019: 1
2009: 4
1999: 10
1989: 36
1979: 84
1969: 54
1959: 50
1949: 82
1939: 112
1929: 148
1919: 192
1909: 297