Climate Change Follies

6 Mar

Climate change deniers adopt the language of science while ignoring its actual methods. There’s always a “debate,” or the assertion that “scientists don’t fully understand” something inconvenient like evolution or climate change. More often, it’s these self-styled “skeptics” who don’t understand the science they attempt to refute.

Take climate change. Deniers often say that climate scientists don’t know what they’re talking about because in the 1970s they claimed we were on the brink of an ice age. However, these deniers don’t seem to read the sources they cite. The 1975 Science News article beloved by deniers is entitled “Climate Change: Chilling Possibilities” and opens with the stark statement that the “unusually beneficial climate of the past few decades may be degenerating, facing humanity with a new challenge to survival.”

Yes, an ice age would suck. Perhaps human activity contributing to global warming helped avoid it. Yay. I like summer.

But the article is actually an analysis of human activity and global warming, not the next ice age:

“Within a century or so the projected heat generation from human activity is likely to equal one percent of the heat earth absorbs from the sun. Under the simplest set of assumptions, this additional heat would raise the global temperature about a degree Celsius, but after various corresponding changes are taken into account, the overall effect might be a temperature rise as great as 3.0 degrees. …As a result, a survey of nine American cities showed increased rainfall in the vicinity ranging from 9.0 to 27.0 percent. The severity of these storms is also affected: Near Houston, Tex., hailstorms were found to increase by 430 percent. The most detailed of these studies is under way in the St. Louis area, where an urban-related 25 percent increase of thunderstorms was found to affect some 1,000 to 2,000 square miles of the surrounding area. The cumulative effect of such changes from all cities is not known, but the possible interactions increase the likelihood of severe consequences as urbanization continues.”
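That “about a degree Celsius” figure is easy to sanity-check. Here’s a back-of-envelope sketch in Python, assuming only the textbook Stefan-Boltzmann scaling (equilibrium temperature goes as the fourth root of absorbed power) plus the article’s own “simplest set of assumptions”; the temperature value is a standard figure, not something from the article:

```python
# Back-of-envelope check of the 1975 article's claim: if human heat
# generation adds 1% to the power Earth absorbs, how much does the
# equilibrium temperature rise? For a blackbody, T scales as P^(1/4).

T_SURFACE_K = 288.0         # rough mean surface temperature (~15 deg C)
EXTRA_HEAT_FRACTION = 0.01  # projected human heat output: 1% of absorbed sunlight

# Raising absorbed power by a factor (1 + f) raises temperature by
# (1 + f)^(1/4), which is approximately f/4 for small f.
t_new = T_SURFACE_K * (1 + EXTRA_HEAT_FRACTION) ** 0.25
print(f"Temperature rise: {t_new - T_SURFACE_K:.2f} K")  # ~0.72 K

# Consistent with the article's "about a degree Celsius" before
# feedbacks push the estimate higher.
```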

Check and mate.

The Right To Not Have a Boss

26 Feb

On Sunday, New York Times columnist Ross Douthat published a column about how the lower classes are working fewer hours while the rich are working longer ones. Let’s gallop past the quasi-Charles-Murray pop-sociological analysis concluding that the wealthy are more industrious and moral than the poor. Douthat attempts to link the 19th-century utopians’ wish/forecast of a world free of toil to what he perceives as yet another sign that the social fabric is eroding.

Douthat concludes that “there is a certain air of irresponsibility to giving up on employment altogether.” He admits that pundits like him “aren’t the ones stocking shelves at Walmart” or looking for a job “that probably pays less than our last one did.” Yet he exhorts the working poor to work, even at “a grinding job,” because work is its own reward. By his lights, people should happily do unpleasant, low-wage work with no hope of advancement because work is inherently ennobling.

Although there are many problems with his analysis, let’s look at his core illogic. He conflates work with drudgery. The hope and goal of utopians like Edward Bellamy was that technology and more humane forms of economic organization would eliminate exactly the kind of work that Douthat thinks the working poor should still do cheerfully. Conversely, the affluent work longer hours because what they do is intellectually or financially rewarding. This is exactly what the utopians hoped for—a world free of toil where everyone has interesting and rewarding work.

Nobody today asks what the utopians asked: What is the economy for? We blindly pursue economic growth, even when it does not increase our happiness and threatens the planet. But the utopians had an answer: The economy is our way of satisfying our material and spiritual wants sustainably with ever decreasing effort.

But Douthat is not all wrong. He notes that for Marx the ideal workday was “to hunt in the morning, fish in the afternoon, rear cattle in the evening, criticize after dinner.” Americans in the 19th century often spoke of “improving” their leisure time, by which they meant improving themselves: reading history and philosophy or writing letters to loved ones. Our modern forms of leisure would strike them as wasteful and perhaps immoral abuses of our affluence.

I wrote about the history of technological unemployment in a previous post. Because of the mass media and commercialized entertainment, our increased affluence is just as likely to give us the world of Brave New World or Player Piano. We are more likely to spend our leisure time narcotizing rather than improving ourselves.

7th Grader Sends Hello Kitty Doll into Space

9 Feb

…well, at least into the upper atmosphere.

A generation ago, weather balloons were almost always sent up (launched?) by governments.

Now, a 13-year-old is doing it for a science fair project.

What science fair projects will 13-year-olds do in another generation?

You Promised Me Mars Colonies. Instead, I Got Facebook

5 Feb


The November/December 2012 issue of MIT Technology Review ran a cover showing a distinctly unhappy Buzz Aldrin, the same man Neil Armstrong photographed on the lunar surface during Apollo 11. The issue tackled the question of “Why We Can’t Solve Big Problems.” The implication, of course, is that we are no longer able to undertake large projects and long-range planning. This is particularly disturbing in light of the civilizational challenges of resource depletion and climate change that we will have to contend with this century.

A related issue is that technological advances are notoriously hard to predict. Scientific American columnist David Pogue noted a year ago that “The Future Is For Fools.” He gave some noteworthy examples of hilariously wrong predictions about technology, like one of the Warner Brothers exclaiming in 1927, “Who the hell wants to hear actors talk?” Pogue concludes by quoting Alan Kay: “The best way to predict the future is to invent it.”

Perhaps. Still, it’s worth remembering that from the standpoint of 1969, it seemed far more likely that we would have Mars colonies (or at least a permanent lunar outpost) than handheld devices capable of retrieving any piece of information or communicating instantly with anyone else. In 1969, after all, computers filled up large rooms, were staffed by teams of programmers, and were symbols of conformity and centralized control.

The difference is a matter of scale and direction. The Internet and mobile phone revolution occurred in a decentralized way, with individual tinkerers, programmers, and firms pursuing innovations they thought were interesting and/or profitable. The space program was a huge, centralized marshaling of resources over a long period to accomplish a fairly useless goal–to land a dozen men on the moon for a couple of days. As inspiring as Apollo was, it didn’t make anyone’s life better in the long run.

This returns us to the question of solving big problems. Although we got Facebook and not Mars colonies, it’s clear that the Internet is an enormously useful technology. Try imagining life without it. Ditto cellphones. I think the Buzz Aldrin of 1969 would be impressed with the Internet and mobile phones.


Big Blue Marble

10 Dec

“Big Blue Marble” image taken on 7 Dec. 1972 by the crew of Apollo 17.

Forty years ago, the crew of the last Apollo mission took this photograph of Earth during their return flight from the moon. This is one of the most important photographs ever taken. In 1948 British astronomer Fred Hoyle stated that a photo of the earth from space would generate “a new idea as powerful as any in history.” Hoyle was right: that photo reinforced the growing ecology and environmental movements. But its most important long-run effect is that it has expanded the scale and scope of how humanity thinks about its place in the universe.

I rate the chances at about 50% that we will find life elsewhere in the universe by the end of this century. In my opinion, this is the most pressing scientific question facing humanity. We are attacking the question from two directions, and our efforts will likely accelerate over the course of this century.

The first direction is to explore our own backyard. We first placed an extended-mission vehicle on another planet in 1997 with Mars Pathfinder. Since then, a lot of hardware has been placed in Mars orbit or on the planet itself. The latest mission, Curiosity, has detected organic molecules in Martian soil, though scientists are not certain that this indicates microbial life. Within this century, it’s likely that the American, European, Japanese, and other nations’ space agencies will thoroughly explore Mars, and even other bodies like Jupiter’s and Saturn’s moons. By 2200 we should have a pretty good answer to the question of whether life exists elsewhere in the solar system, and, if so, whether it shares a common origin with life on Earth.

The more interesting question is whether life exists outside our solar system. The first confirmed discovery of an extrasolar planet came in 1995. Thanks to the Kepler space telescope, we have confirmed the existence of nearly a thousand of them, with another two thousand candidate planets awaiting confirmation. Follow-on space telescopes are poised to discover tens of thousands of extrasolar planets and to extract enough signal for a rough guess at their atmospheric composition. The presence of water vapor, oxygen, carbon dioxide, and methane would be a pretty good indicator of life with a chemistry similar to ours.
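For a sense of what Kepler is actually measuring: it finds planets by transit photometry, watching for the slight dimming when a planet crosses its star, and the depth of that dip goes as the square of the radius ratio. A quick illustrative sketch in Python (the radii are standard values; the function is mine, not anything from the Kepler pipeline):

```python
# Transit photometry: when a planet crosses the face of its star, the
# star dims by roughly (planet radius / star radius)^2.

R_SUN_KM = 695_700
R_EARTH_KM = 6_371
R_JUPITER_KM = 69_911

def transit_depth(r_planet_km: float, r_star_km: float = R_SUN_KM) -> float:
    """Fractional dimming of a star during a planetary transit."""
    return (r_planet_km / r_star_km) ** 2

# An Earth-size planet dims a Sun-like star by only ~84 parts per
# million; a Jupiter-size planet by ~1%. Hence the need for a
# photometer in space, above the atmosphere.
print(f"Earth analog:   {transit_depth(R_EARTH_KM) * 1e6:.0f} ppm")
print(f"Jupiter analog: {transit_depth(R_JUPITER_KM) * 100:.2f}%")
```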

Then what? Let’s say we find extrasolar planets with high potential for bearing life. Do we have the scientific and technological expertise to send a robotic probe across several light years to investigate? If such a mission takes a hundred years or more, can we plan that far ahead? And most importantly, how will discovering life around another star change us?

Science in the 21st Century

5 Dec

I’m presently reading two very interesting books on the future of scientific exploration. I just finished Martin Rees’s From Here to Infinity: A Vision for the Future of Science and have just started Paul Gilster’s Centauri Dreams: Imagining and Planning Interstellar Exploration. Rees is presently Britain’s Astronomer Royal but is better known for his pessimistic estimate that humanity has only a 50% chance of surviving the 21st century. (And he claims he’s an optimist…)

Let’s assume for the sake of argument that humanity makes it through the next century. What will the 21st century hold? For perspective, reflect that both relativity and quantum mechanics were young and still under development in 1912. Medicine was primitive by today’s standards. Electronics was barely on the horizon–the first industrial application of the vacuum tube was the 1915 transcontinental telephone circuit.

What will science look like in the year 2100? Gilster interviewed several scientists who think that we could launch an interstellar probe capable of reaching a nearby star in 50 years or less. There are also projects on tap to analyze the spectra of extrasolar planets’ atmospheres, and some proposed instruments would have the optical baseline to resolve features like continents and mountain ranges on those planets. Over the next century, we will probably have a sustained research presence on Mars and other targets of interest in our solar system.
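To put that 50-year figure in perspective, here’s a bit of arithmetic of my own (not from Gilster’s book), using the Alpha Centauri system, about 4.37 light-years away, as the target:

```python
# Average cruise speed needed to reach a star in a given time,
# ignoring acceleration, deceleration, and relativistic effects.

DIST_ALPHA_CEN_LY = 4.37  # distance to Alpha Centauri in light-years

def required_speed(distance_ly: float, years: float) -> float:
    """Average speed, as a fraction of c, to arrive in `years`."""
    return distance_ly / years

for mission_years in (50, 100, 500):
    v = required_speed(DIST_ALPHA_CEN_LY, mission_years)
    print(f"{mission_years:>3}-year mission: {v:.1%} of light speed")

# A 50-year mission implies cruising at roughly 9% of c. For
# comparison, Voyager 1 is moving at about 0.006% of c, so the
# scientists Gilster interviewed are contemplating propulsion more
# than a thousand times faster than anything we have flown.
```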

I’ll take better than even money that we will discover life in our solar system and beyond by the end of the 21st century.

All Watched Over By Machines of Loving Grace

29 Sep

Many commentators have called our economy’s rebound a “jobless recovery”–labor productivity, corporate profits, stock market performance, per capita GDP, and average household wealth are all up. Yet median income and wealth are down. On the surface, this indicates a skewed distribution of income and wealth: the gains are going to capital, not to labor.

Yet there’s a deeper story here. Two researchers at the National Bureau of Economic Research, Nir Jaimovich and Henry E. Siu, recently showed that recovery from the last three recessions has been “jobless” because of productivity gains from technology. One of the authors reminds us in a recent New York Times interview, “In the broad sweep of history, technology is good. We’ve been wrestling with this for 200 years. Remember the Luddites.” Progressive commentator Jim Hightower puts a more human face on the trend and urges us to ask, “For whom and to what end? Now is the time to start a national debate on the true cost of this shift — and to demand that we humans be factored into their ‘revolution.’”

Technological unemployment has a long history (and future). As early as the middle of the 19th century, John Stuart Mill noted that “demand for commodities is not demand for labor.” The Great Depression seemed to confirm Mill’s point, and many economists warned of “permanent technological unemployment.”

Another way to look at this problem is to acknowledge that it takes ever less labor to produce material goods. This has been true since the power, transportation, and communications revolutions of the 19th century, and it seems to be accelerating today. Only about 20% of our labor force is devoted to agriculture and manufacturing, and that figure is likely to keep declining, given advances in computing, communications, robotics, 3D printing, and the like.

So where does that leave us workers? Perhaps it’s time to decouple livelihood from work. Perhaps economic thought will shift from an economics of scarcity to an economics of abundance.

Fiction provides some scenarios for reflection. Edward Bellamy’s novel Looking Backward is the story of a 19th-century protagonist who wakes from a long sleep into the world of 2000. In Bellamy’s utopia, people have abundant goods and ample leisure. They serve in an Industrial Army for a couple of decades, working at an occupation of their choosing, before retiring to a life of study and contemplation.

Iain M. Banks’s wonderful Culture novels take place in a far future. In his books, humans and artificial intelligences, together forming a society called the Culture, live in a post-scarcity universe where nobody works unless they want to. In many ways this is a projection of Bellamy’s utopia. With lots more sex, drugs, and misanthropic AIs.

Then we have the dystopias. Aldous Huxley posits a global dictatorship based on mass entertainment and consumer goods; his Brave New World is materially rich but spiritually empty. Kurt Vonnegut’s first novel, Player Piano, describes a world where automation has eliminated virtually all agricultural and manufacturing jobs. The former working class is taken care of through public relief but is bored and restless. In alliance with a handful of elite managers and engineers, they launch a failed revolt against their technocratic society.

So: where does this leave us? Increased life chances for everyone, a larger wealth gap between rich and poor, or material abundance and spiritual impoverishment?
