Thursday, May 31, 2012

Hooters Girls are not Hicks Neutral

This post is a continuation of what will eventually be a long series of discussions of innovation in the economy, with a longer-term objective of gaining some clarity on how innovation is understood, directed and regulated. This post is also part of an argument that the discipline of economics has failed to provide an account of innovation that helps to guide 21st century policy making, and more fundamentally, that economics is simply incapable of providing such an account. A big claim, sure to raise the hackles of card-carrying economists. So fasten your seatbelt.

A first step is to define what I mean by “innovation.” Economics in the Schumpeterian tradition defines innovation as one component of the so-called “linear model of innovation” (see Godin, here in PDF) which lays out technological change as a process of
Invention ---> Innovation ---> Diffusion
Much has been written about this model (PDF), which I will review at another time. However the model has been used in the economic literature over the past half-century, Schumpeter himself defined innovation very precisely (and did not follow the so-called linear model):
. . . any “doing things differently” in the realm of economic life - all these are instances of what we shall refer to by the term Innovation. It should be noticed at once that that concept is not synonymous with “invention”. . . It is entirely immaterial whether an innovation implies scientific novelty or not. Although most innovations can be traced to some conquest in the realm of either theoretical or practical knowledge, there are many which cannot. Innovation is possible without anything we should identify as invention and invention does not necessarily induce innovation, but produces of itself no economically relevant effect at all. . .
In a nutshell:
We will now define innovation more rigorously by means of the production function previously introduced. This function describes the way in which quantity of product [outputs] varies if quantities of factors [inputs] vary. If, instead of quantities of factors, we vary the form of the function, we have an innovation. ... We will simply define innovation as the setting up of a new production function. This covers the case of a new commodity, as well as those of a new form of organization. ... Innovation combines factors in a new way.
I will return to the technical aspects of the so-called “production function” (illustrated below) in a later post. For now, it is only important to understand that an innovation changes the relationship between inputs and outputs in the economy. For instance, a restaurant takes employees (labor), food, appliances (and other capital) and combines them to make meals. A change in the relationship between the various inputs and meals served would be an innovation, which would be measured by a change in the rate of productivity.
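To make Schumpeter's definition concrete, here is a minimal Python sketch. The Cobb-Douglas form and all of the numbers are my own illustrative assumptions, nothing more; the point is only the distinction between varying the inputs and varying the function itself. A restaurant doubling its staff and ovens moves along the function; getting more meals out of the same staff and ovens means a new function:

```python
# Schumpeter's distinction, illustrated. The Cobb-Douglas form and the
# numbers are assumptions for illustration only.

def output(K, L, A=1.0, alpha=0.3):
    """Production function: Y = A * K^alpha * L^(1 - alpha)."""
    return A * K**alpha * L**(1 - alpha)

K, L = 100.0, 50.0                    # capital and labor inputs

baseline = output(K, L)               # the existing input-output relationship
more_inputs = output(2 * K, 2 * L)    # varying the factors: growth, not innovation
new_function = output(K, L, A=1.2)    # same inputs, new function: innovation

print(f"baseline:            {baseline:.1f}")
print(f"doubled inputs:      {more_inputs:.1f}")
print(f"same inputs, new A:  {new_function:.1f}")
```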
What Schumpeter called “innovation” seems to have been characterized as “technology” by many economists in the years since. For instance, Daron Acemoglu in his widely read textbook on economic growth explains:
Economists normally use the shorthand expression “technology” to capture factors other than physical and human capital that affect economic growth and performance. It is therefore important to remember that variations in technology across countries include not only differences in production techniques and in the quality of machines used in production but also disparities in productive efficiency.
Which of course brings us to Hooters girls, the hot young women in skimpy clothes who serve chicken wings and other delicacies at the US restaurant chain called Hooters. The company explains its innovative approach to the restaurant business as follows:
The first Hooters opened October 4, 1983, in Clearwater, Florida. During its history, the Hooters concept has undergone very little change. The current logo, uniform, menu and ambiance are all very similar to what existed in the original store. This lack of change is understandable given the tremendous success the Hooters concept has enjoyed. Hooters has continued to rank high amongst the industry's growth leaders. Hooters has proven successful in small-town America, major metropolitan areas and internationally. . .

The element of female sex appeal is prevalent in the restaurants, and the company believes the Hooters Girl is as socially acceptable as a Dallas Cowboy cheerleader, Sports Illustrated swimsuit model, or a Radio City Rockette. The Hooters system employs over 25,000 people - over 17,000 of which are Hooters Girls. The "nearly world famous" Hooters Girls are the cornerstone of the Hooters concept, and as part of their job, these all-American cheerleaders make promotional and charitable appearances in their respective communities. Hooters hires women who best fit the image of a Hooters Girl to work in this capacity. The chain hires both males and females to work in management and host, staff, service bar, and kitchen positions. The Hooters Girl uniform consists of orange shorts and a white tank top. Pantyhose and bras are required.
The company explains its name and innovative approach:
The chain acknowledges that many consider "Hooters" a slang term for a portion of the female anatomy. Hooters does have an owl inside its logo and uses an owl theme sufficiently to allow debate to occur over the meaning's intent. The chain enjoys and benefits from this debate. In the end, we hope Hooters means a great place to eat. . .

Sex appeal is legal and it sells. Newspapers, magazines, daytime talk shows, and local television affiliates consistently emphasize a variety of sexual topics to boost sales. Hooters marketing, emphasizing the Hooters Girl and her sex appeal, along with its commitment to quality operations continues to build and contributes to the chain's success. Hooters' business motto sums it up, "You can sell the sizzle, but you have to deliver the steak.”
So what Hooters has done, in Schumpeterian innovation terms, is to combine input factors in a new way. In this case the company has carefully selected its labor in a precise manner intended to increase the demand for its product. Presumably, the underlying assumption is that a different labor pool would result in a lower demand. So while all of the other inputs (food, appliances, etc.) could have remained the same as any other restaurant chain, the Hooters innovation led to a restaurant chain that (they claim) “has continued to rank high amongst the industry's growth leaders.”

The discipline of economics does have a terminology for the type of innovation represented by the Hooters girl – “technical change that is Hicks biased.” What "Hicks biased" technical change means is that unlike – say -- a new chicken wing fryer that can produce more wings per employee, but otherwise leaves labor and capital unchanged, some changes in the input-output relationship that result from "technology" are not independent of labor or capital (an example of the latter would be the substitution of low-sulfur Wyoming coal for West Virginia coal to reduce air pollution, but I digress).
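For the technically inclined, here is a small sketch of that distinction, again on an assumed Cobb-Douglas baseline with illustrative numbers. A Hicks-neutral change scales output while leaving the relative marginal products of capital and labor untouched; a biased change alters that ratio, which is exactly the sense in which it is not independent of the factor mix:

```python
# Hicks-neutral vs. Hicks-biased technical change, on an assumed
# Cobb-Douglas baseline. Neutral change scales output but leaves the
# ratio of the factors' marginal products unchanged; biased change
# alters that ratio.

def Y(K, L, A=1.0, alpha=0.3):
    return A * K**alpha * L**(1 - alpha)

def mp_ratio(f, K, L, h=1e-6):
    """Marginal product of capital divided by marginal product of labor."""
    mpk = (f(K + h, L) - f(K, L)) / h
    mpl = (f(K, L + h) - f(K, L)) / h
    return mpk / mpl

K, L = 100.0, 50.0
neutral = lambda K, L: Y(K, L, A=1.5)      # Hicks-neutral: scale everything
biased = lambda K, L: Y(K, L, alpha=0.5)   # biased: shift the factor shares

print(f"baseline MPK/MPL: {mp_ratio(Y, K, L):.3f}")
print(f"neutral  MPK/MPL: {mp_ratio(neutral, K, L):.3f}")   # unchanged
print(f"biased   MPK/MPL: {mp_ratio(biased, K, L):.3f}")    # changed
```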

It is of course hard to think of the Hooters girls as any sort of "biased technical change to the production function," and the tortured language begins to shed some light on the limits of economics in the context of Schumpeterian innovation.

Economics does well in providing a framework for understanding the effects on productivity of innovations that result from new chicken-wing fryers, but runs into trouble in providing a framework for understanding Hooters girls as innovations. This, I think, helps to explain why economics has focused on “technology” rather than “innovation” (in the original Schumpeterian sense). Make no mistake, economics is critically important, but to understand innovation -- how it happens, and crucially, how it is directed and regulated -- requires more than what economics can offer.

The Taxation Distraction

Martin Wolf, the consistently excellent FT economics columnist, has a blog post up which explains why the focus on taxation in economic policy debates is misguided. He writes:
The focus of US economic policy discussion at present is almost entirely on fiscal deficits and the level of taxes. My view is that these are second or even third order issues. What matters far more is the capacity of the economy to offer satisfactory lives for the citizenry. This depends on far more fundamental forces than deficits and taxes, such as innovation, jobs and incomes. Evidently, I am arguing that taxes and deficits do not determine these outcomes. I am suggesting this because they do not.

So I want to address two widely held, but mistaken, views. The first is that lower taxes are the principal route to better economic performance. The second is that the financial crisis is a crisis of western welfare states.
He demonstrates this argument using the figure shown at the top of this post. Wolf lays out the full argument in his blog post, which deserves a full read.

In the US the debate over the role of government takes many forms (and it seems that blog posts on any policy subject eventually arrive there no matter what the starting point or initial direction of travel). The form of the debate at present is framed in terms of a Hamiltonian vs. Jeffersonian approach to government. Snore. It is old wine in new bottles (in this case the bottles too are old).

From a policy perspective, the important questions are not simply how high taxes are and how much government spends, but how the money is spent and what that spending means for growth in GDP per capita. Moving discussion from the former to the latter is, to put it mildly, a challenge.

Wednesday, May 30, 2012

Two Summer Book Recommendations

Two of my long-time colleagues have intriguing new books out, just in time for the summer reading season.
Mike Smith has written a short analysis of the Joplin tornado from just over one year ago -- When the Sirens Were Silent: How the Warning System Failed a Community. Anyone who enjoyed Mike's first book, Warnings: The True Story of How Science Tamed the Weather, will also enjoy his eye-opening and insightful look at what happened in Joplin last year.

This is from the book description:
What if the warning system failed to provide a clear, timely notice of a major storm? Tragically, that scenario played out in Joplin, Missouri, on May 22, 2011. As a wedding, a high school graduation, and shopping trips were in progress, an invisible monster storm was developing west of the city. When it arrived, many were caught unaware. One hundred sixty-one perished and one thousand were injured. "When the Sirens Were Silent" is the gripping story of the Joplin tornado. It recounts that horrible day with a goal of insuring this does not happen again.
Have a look and while you are at it, check out Warnings as well.
John McAneney has written a crime thriller -- Shifting Sands -- set against the backdrop of New Zealand science policy of the late 1990s. The story is a great read but even more fun for a wonk like me because of the backdrop of science politics, big monied commercial interests and petty academic squabbles. I read it on the plane last week and it was a great escape.

This is from the book description:
New Zealand 1997. As a serial killer traumatises the country, Caspian is re-evaluating his career. Science has been his life, but an unorthodox approach to problem solving is out of favour with the new corporate ethos that sees science as a business. There are other pressures too as Caspian’s beautiful French wife, Marie-Claire, is becoming increasingly disenchanted with life at a remote beach on Maori tribal lands, an environment and lifestyle that Caspian is reluctant to give up. But when the body of a colleague is washed up on the sand and another, Robert, is arrested on suspicion of murder, nothing can ever be quite the same again. As he seeks to help his Maori detective friend establish Robert’s innocence, Caspian stumbles across a scam involving illegal genetic engineering experiments with a money trail leading to an international pharmaceutical company. As the suspense grows, the dirty underbelly of science for profit is revealed as a culture of corruption where the truth no longer holds any currency.
 What fun!

Congrats Mike and John;-)

Hedersdoktor at LiU

Here I am last week at the Linköping University commencement ceremony, where I received an honorary doctorate. I had a great time and was deeply honored to receive the award. The ceremony was a special occasion.

I look forward to continued collaborations with all of my friends and colleagues at Linköpings universitet. I'd especially like to get a student exchange program started, as there is a lot of complementary work going on between CU and LiU.

Thanks LiU!!

Tuesday, May 29, 2012

Hot Hands and Guaranteed Winners

In a 2009 paper I laid out an argument that explored what happens when "the guaranteed winner scam meets the hot hand fallacy" (PDF). It went as follows, drawing upon two dynamics:
The first of these dynamics is what might be called the ‘guaranteed winner scam’. It works like this: select 65,536 people and tell them that you have developed a methodology that allows for 100 per cent accurate prediction of the winner of next weekend’s big football game. You split the group of 65,536 into equal halves and send one half a guaranteed prediction of victory for one team, and the other half a guaranteed win on the other team. You have ensured that your prediction will be viewed as correct by 32,768 people. Each week you can proceed in this fashion. By the time eight weeks have gone by there will be 256 people anxiously waiting for your next week’s selection because you have demonstrated remarkable predictive capabilities, having provided them with eight perfect picks. Presumably they will now be ready to pay a handsome price for the predictions you offer in week nine.
The second,
. . . is the ‘hot hand fallacy’ which was coined to describe how people misinterpret random sequences, based on how they view the tendency of basketball players to be ‘streak shooters’ or have the ‘hot hand’ (Gilovich et al., 1985). The ‘hot hand fallacy’ holds that the probability in a random process of a ‘hit’ (i.e. a made basket or a successful hurricane landfall forecast) is higher after a ‘hit’ than the baseline probability. In other words, people often see patterns in random signals that they then use, incorrectly, to ascribe information about the future.
In the paper I used the dynamics to explain why there is not likely to be convergence on the skill of hurricane landfall forecasts anytime soon. The existence of (essentially) an infinite number of models of hurricane landfalls coupled with the certainty that unfolding experience will closely approximate a subset of available models creates a context ripe for seeing spurious relationships and chasing randomness. However, the basic argument has much more general applicability.
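The arithmetic of the scam itself is easy to verify. A few lines of Python, with the numbers taken straight from the passage quoted above:

```python
# The 'guaranteed winner scam' arithmetic: each week, send half of the
# remaining recipients one pick and half the other, so half of them
# always see a correct prediction.

recipients = 65_536
for week in range(1, 9):
    recipients //= 2    # only the half holding the winning pick remains
    print(f"week {week}: {recipients:5d} people have seen only correct picks")

# After week 8, 256 people have watched eight perfect predictions in a
# row -- none of which required any forecasting skill at all.
```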

A new paper just out from Nattavudh Powdthavee and Yohanes E. Riyanto of the Institute for the Study of Labor in Bonn, Germany provides some empirical support for this argument. The paper -- titled "Why Do People Pay for Useless Advice? Implications of Gambler’s and Hot-Hand Fallacies in False-Expert Setting" -- looks "experimentally whether people can be induced to believe in a non-existent expert, and subsequently pay for what can only be described as transparently useless advice about future chance events."

In the study the authors operationalized the dynamics of the "guaranteed winner scam meets the hot hand fallacy" using coin flips, while going to great lengths to ensure that the participants were aware that the coin being flipped was fair (i.e., the flips were random), even going so far as to have the participants furnish the coin.

They found that upon receiving an accurate "prediction" of the subsequent coin flip, many participants were willing to abandon any assumption of randomness and pay for a prediction of the next toss:
On average, the probability of buying the prediction in Round 2 for people who received a correct prediction in Round 1 was 5 percentage points higher than those who previously received an incorrect prediction in Round 1 (P=0.046). The effect is monotonic and well-defined; probabilities of buying were 15 percentage points (P=0.000), 19 percentage points (P=0.000), and 28 percentage points (P=0.000) higher in Rounds 3, 4, and 5 . . .
 The authors identify two interesting results:
The first is that observations of a short streak of successful predictions of a truly random event are sufficient to generate a significant belief in the hot hand of an agent; the perception which also did not revert even in the final round of coin flip. By contrast, the equally unlikely streak of incorrect predictions also generated a relatively weak belief in the existent of an “unlucky” agent whose luck was perceived to be likely to revert before the game finishes; evidence which was reflected in an increase in the subject’s propensity to buy in the final round of coin flip.
The study also looked at whether characteristics of the participants might be related to their behavior, finding: "there is no statistical evidence that some people are systematically more (or less) susceptible to the measured effects."

What does this study mean for how we think about science in decision making?

While the authors focus on "false" experts, the findings have much broader relevance in the context of "true" experts. The simple reason for this is that the distribution of legitimate scientific findings about many complex subjects covers an enormous range of possible outcomes. Not all of these outcomes can simultaneously be correct -- whether they are looking at the past, at causality or offering projections of the future.

In the example that I use from my paper cited above, I explain how a single scientific paper on hurricane landfalls provides 20 scientifically legitimate predictions of how many hurricanes would hit the US over the subsequent 5 years:
Consider, for example, Jewson et al. (2009) which presents a suite of 20 different models that lead to predictions of 2007–2012 landfall activity to be from more than 8 per cent below the 1900–2006 mean to 43 per cent above that mean, with 18 values falling in between. Over the next five years it is virtually certain that one or more of these models will have provided a prediction that will be more accurate than the long-term historical baseline (i.e. will be skilful). A broader review of the literature beyond this one paper would show an even wider range of predictions. The user of these predictions has no way of knowing whether the skill was the result of true predictive skill or just chance, given a very wide range of available predictions. And because the scientific community is constantly introducing new methods of prediction the ‘guaranteed winner scam’ can go on forever with little hope for certainty.
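A hedged back-of-envelope calculation shows why spurious skill is nearly guaranteed. Suppose, purely for illustration, that each model independently has some probability p of beating the baseline by luck over the forecast window (the real models are of course not independent, and p is unknown):

```python
# If each of 20 models has probability p of beating the historical
# baseline purely by chance, the chance that at least one model looks
# 'skilful' is 1 - (1 - p)^20. The values of p and the independence
# assumption are illustrative only.

def p_at_least_one(n_models, p):
    return 1 - (1 - p) ** n_models

for p in (0.10, 0.25, 0.50):
    print(f"p = {p:.2f}: P(at least 1 of 20 looks skilful) = "
          f"{p_at_least_one(20, p):.3f}")
```

Even a modest per-model luck probability makes a spuriously "skilful" model close to inevitable once the menu of available models is large enough.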
Such models are of far more than academic interest -- they guide hundreds of billions of dollars in investment and financial decisions related to insurance and reinsurance. What if such decisions rest on an intellectual house of cards? How would we know?

The general issue is that a bigger problem than discerning legitimate from illegitimate expertise is figuring out how to use all of the legitimate expertise at our disposal. The dynamics of the "guaranteed winner scam meets the hot hand fallacy" also presents a challenge for experts themselves in interpreting the results of research in light of evolving experience. As experts are people too, they will be subject to the same incentives in and obstacles to interpreting information as were found by Powdthavee and Riyanto.
The dominant strategies in political discourse used to deal with this situation of too much legitimate science are to argue that there is one true perspective (the argument from consensus) or that experts can be judged according to their non-expert characteristics (argument by association). My experiences over the past decade or so related to extreme events and climate change provide a great example of how such strategies play out in practice, among both experts and activists.

As we have learned, neither strategy is actually a good substitute for evaluating knowledge claims and understanding that uncertainty and ignorance are often irreducible, and decisions must be made accordingly.

Saturday, May 26, 2012

Some Items Not Blogged Last Week


I've been away this past week, so I had little time to blog about several blog-worthy items that crossed my desk. Below is a quick round-up of some of the most interesting ones.

Above is a music video featuring some local (to where I am now) talent. Normal service returns after the holiday, when we'll see how well I perform in the Bolder Boulder with jet lag;-)
Coming next week on this blog ... an extended series of posts on economics, innovation, and technology. Stay tuned!

Wednesday, May 23, 2012

UK GM Wheat War: Not Really About Science

In the UK there is a battle brewing over a scientific trial involving genetically modified wheat. Last weekend a protester attempted to vandalize the trial, and a larger civil action is expected on May 27.  The ongoing battle, and its close cousin in the climate wars, tell us something about what can happen to science when it becomes the central battleground over politics and technology. Unfortunately, the scientific community itself has contributed to such tactics.

Plant scientists at Rothamsted Research, a complex of buildings and fields in Hertfordshire, UK, that prides itself on being the longest-running agricultural research station in the world, have spent years preparing for their latest experiment — which will attempt to prove the usefulness of a genetically modified (GM) wheat that emits an aphid alarm pheromone, potentially reducing aphid infestation.

Yet instead of looking forward to watching their crop grow, the Rothamsted scientists are nervously counting the days until 27 May, when protesters against GM crops have promised to turn up in force and destroy the experimental plots.

The protest group, it must be acknowledged, has a great name — Take the Flour Back. And it no doubt believes that it has the sympathy of the public. The reputation of GM crops and food in Britain, and in much of mainland Europe, has yet to recover from the battering it took in the late 1990s. In Germany, the routine destruction of crops by protesters has meant that scientists there simply don't bother to conduct GM experiments any more.

The Rothamsted scientists have also attempted to win over the public, with a media campaign that explains what they are trying to do and why. After the protesters announced their plans to “decontaminate” the research site, the scientists tried to engage with their opponents, and pleaded with them to “reconsider before it is too late, and before years of work to which we have devoted our lives are destroyed forever”. The researchers say that in this case they are the true environmentalists. The modified crop, if it works, would lower the demand for environmentally damaging insecticides.
It would be a mistake to conclude that the protesters are in some way anti-science or fearful that the genetically modified crops might fail to work as advertised (though surely some protesters do have these views). Their main concern is that the crops will perform exactly as advertised, and lead to further gains in agricultural productivity.

It is not science that they fear, but the implications of scientific advances for economic and political outcomes. The organization leading the UK protests calls itself Take the Flour Back, and clearly explains its rationale as follows:
Our current political system chooses to deal with world hunger through the model of “food security”, arguing that there is not enough food to go around and that we need techno-fixes to solve this. This approach ignores the fact that there is a global food surplus – many people just can’t afford to buy food. This problem is being amplified by land grabs – communities that used to grow food for themselves are being forced out of their ancestral homes, often by corporations expanding cash crop production.

The industrial food system throws away (in the journey from farms to traders, food processors and supermarkets), between a third and a half of all the food that it produces – enough to feed the world’s hungry six times over. (2)

Free trade policies imposed by the International Monetary Fund make it much harder for governments to protect small and family farmers from big multinationals. With the expansion of free-market capitalism, agricultural systems in many countries in the global south have become focused on producing cash crops for export to rich western nations. At the same time, their markets have been opened to food imports, including imports from US and EU companies at less than the cost of production. US farmers benefit from billions of dollars in subsidies which make up as much as 40% of US net farm income. This means they can afford to export their crops at well below production cost. (3) This is ruining the livelihoods of small farmers in the global south.
This is not the statement of a group concerned primarily with the potential unanticipated risks of GM crops to the environment or people, but rather, it is the manifesto of a group concerned that GM crops will perform exactly as intended.

Like many issues where science and politics intersect, those opposed to the productivity gains made possible by agricultural innovation have sought to use science as a basis for realizing political ends. A primary strategy in such efforts is typically to argue that the science compels a particular political outcome.  In the case of GM crops, opponents of the technology (mainly in Europe) have argued that the techniques are unproven or risky. However, such tactics have not succeeded. So the next step beyond waging a political battle over science is now direct action against the technology of concern.

This situation is of course in many respects parallel to the climate debate. Efforts to compel emissions reductions through invocations that science compels certain political outcomes have borne little fruit, so some activists have taken it upon themselves to directly attack the technologies at the focus of their concern.

One difference between the climate wars and the GM wars is that some prominent scientists are participating in the direct action against technology (such as James Hansen and IPCC contributor Mark Jaccard). Another important difference is that in the case of GM crops, it is research itself that is being targeted, and the scientific community objects.

One argument invoked by scientists in support of GM technology is that the world needs more food. But the world needs more energy too. In condoning direct attacks on energy technologies, the scientific community may have opened the door to tactics that it does not much like when they are applied closer to home.

Monday, May 21, 2012

Beyond Manna from Heaven

Writing at The Breakthrough Journal blog, Ted Nordhaus and Michael Shellenberger argue that conventional economics is not up to the task of offering sound policy advice for the 21st century. They write:
In the 70 years that have passed since Joseph Schumpeter coined the term "creative destruction," economists have struggled awkwardly with how to think about growth and innovation. Born of the low-growth agricultural economies of 18th Century Europe, the dismal science to this day remains focused on the question of how to most efficiently distribute scarce resources, not on how to create new ones -- this despite two centuries of rapid economic growth driven by disruptive technologies, from the steam engine to electricity to the Internet.

There are some important, if qualified, exceptions. Sixty years ago, Nobelist Robert Solow and colleagues calculated that more than 80 percent of long-term growth derives from technological change. But neither Solow nor most other economists offered much explanation beyond that. Technological change was, in the words of one of Solow's contemporaries, "manna from heaven."
Where does that "manna from heaven" originate? In pricing incentives, of course, derived from economic theory. But once you take a closer look at both practice and the theoretical origins, you find that economics explains far less than we've been led to believe.

Nordhaus and Shellenberger revisit the climate issue to illustrate how far conventional economics has led us astray. They provide an overview of a debate that they engaged in with an economist from the Environmental Defense Fund, Gernot Wagner, who argues against evidence and common sense that by creating the right pricing incentives, drastic emissions reductions goals can be met in the near term:
 "[W]e can achieve US emissions reduction goals for 2020 and possibly even 2030 through deployment of existing technologies. . . Price goes up, demand goes down. Economists typically call it the 'law of demand'--one of the very few laws we've got."
The theory is sound, its application is not -- a point that readers of this blog and The Climate Fix will well understand.

The good news is that many are beginning to move beyond the precepts of economic theory and take a look at the simple mathematics of the real world. For instance, Ulrich Hoffmann, an economist at the UN Conference on Trade and Development, has done the math, illustrated in the figure below, showing how much the world would need to decarbonize its economic activity in order to stabilize carbon dioxide at 450 ppm.


Based on this straightforward arithmetic he concludes (Hoffmann has a more in-depth analysis here in PDF):
The arithmetic of economic and population growth, efficiency limits related to the rebound effect, as well as systemic issues call into question the hopes of decoupling economic growth from GHG growth. One should not deceive oneself into believing that such an evolutionary (and often reductionist) approach would be sufficient to cope with the complexities of climate change. “Green growth” proponents need to scrutinise the historical macro- (not micro-) economic evidence, in particular the arithmetic of economic and population growth, as well as the significant influence of the rebound effect.
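Hoffmann's arithmetic is easy to reproduce in rough form using a Kaya-style identity: emissions = GDP × (emissions/GDP). The numbers below are my illustrative assumptions, not Hoffmann's own figures:

```python
# Rough Kaya-style arithmetic: if GDP grows at g per year and emissions
# must fall by `cut` over `years`, the carbon intensity of GDP must fall
# at a rate r satisfying (1 + g)^years * (1 - r)^years = (1 - cut).
# All numbers are assumptions for illustration.

g = 0.03       # assumed annual global GDP growth
cut = 0.50     # assumed target: halve emissions over the horizon
years = 40     # assumed horizon, e.g. 2010-2050

r = 1 - ((1 - cut) ** (1 / years)) / (1 + g)
print(f"required annual fall in carbon intensity of GDP: {r:.1%}")
```

Under these assumptions the carbon intensity of GDP must fall by roughly 4.6% per year, every year, for four decades. Historically, the carbon intensity of the global economy has fallen on the order of 1 to 2 per cent per year, far short of such rates.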
Such messages are not well-received by conventional economists. In their exchange, Wagner explains to Nordhaus and Shellenberger that economic theory trumps real world evidence, and this means that debate over such issues is not necessary:
The main points from climate economics are no longer up for debate: carbon is a pollutant; we need to make polluters pay, either through a cap or a price. Virtually all economists agree--from Holtz-Eakin, Laffer and Mankiw on one side to Stiglitz, Sachs, and Krugman on the other.

Once again, this one is not up for debate. You can argue that politically we can't get there, so we need to do other things in the short term, but it's not up for debate whether this is the economically correct solution.
Like most debates on climate this one ends predictably, with Wagner appealing to the motives of Shellenberger and Nordhaus:
[Y]our entire operation seems to be geared toward propagating contrarian-sounding views that once in a while get you some attention and get picked up by an editor somewhere, but otherwise are just that: contrarian for the sake of wanting to be different from the pack
Snore. But the larger point here is that there are articles of faith in the discipline of economics which are viewed as taboo to challenge, even when they fail to represent themselves in practice with the simplicity and elegance of theory.

However, a closer look at economic theory finds a much shakier foundation than is represented within the discipline. Writing at Slate, Konstantin Kakaes has a great piece that sums up how economics went astray when it comes to innovation:
Robert Solow, winner of the 1987 Nobel Memorial Prize in Economic Sciences, is famous for, in the recent words of a high-ranking State Department official, “showing that technological innovation was responsible for over 80 percent of economic growth in the United States between 1909 and 1949.” . . . Typically, technical or technological progress isn’t explicitly defined by those invoking Solow, but people take it to mean new gadgets.

However, Solow meant something much broader. On the first page of “Technical Change and the Aggregate Production Function,” the second of his two major papers, he wrote: “I am using the phrase ‘technical change’ as a shorthand expression for any kind of shift in the production function. Thus slowdowns, speedups, improvements in the education of the labor force, and all sorts of things will appear as ‘technical change.’ ” But his willfully inclusive definition tends to be forgotten.

Solow was constructing a simple mathematical model of how economic growth takes place. On one side was output. On the other side was capital and labor. Classical economists going back to Adam Smith and David Ricardo had defined the “production function”—how much stuff you got out of the economy—in terms of capital and labor (as well as land). Solow’s point was that other factors besides capital, labor, and land were important. But he knew his limitations: He wasn’t clear on what those factors were. This is why he defined “technical change” as any kind of shift (the italics are his) in the production function. He wasn’t proving that technology was important, as economists in recent years have taken to saying he did. All Solow was saying is that the sources of economic growth are poorly understood.
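It is worth seeing how little machinery is involved. Here is growth accounting in the spirit of Solow's 1957 paper, with made-up numbers chosen only to echo the famous result; everything the measured inputs cannot explain lands in the residual:

```python
# Growth accounting in the spirit of Solow (1957), with made-up numbers.
# Whatever output growth measured capital and labor cannot account for
# lands in the residual -- the catch-all 'technical change' Kakaes
# describes, not a direct measurement of gadgets.

alpha = 0.3        # assumed capital share of income
g_output = 0.030   # assumed annual output growth
g_capital = 0.010  # assumed annual capital growth
g_labor = 0.004    # assumed annual labor growth

residual = g_output - alpha * g_capital - (1 - alpha) * g_labor
print(f"Solow residual: {residual:.2%} per year")
print(f"share of growth left unexplained: {residual / g_output:.0%}")
```

With these assumed numbers the residual accounts for about 81% of output growth, which is the sense in which "more than 80 percent of growth" is attributed to "technical change": it is the unexplained remainder, not a measurement of technology.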
Instead of "technology" Solow was really talking about "innovation." That innovation need not be understood because it was "manna from heaven" is characteristic of many arguments from conventional economists and is particularly endemic in the climate debate. From such a perspective, of course anyone asking about where innovation comes from -- other than from the magic of the invisible hand -- must either be ignorant or malign.

But as Hoffmann's essay explains, once you actually do the math of energy innovation in the context of real-world social and political forces, you see that understanding processes of innovation requires more than simply understanding the "law of demand."

Kakaes continues:
The cautionary tale of Solow is emblematic of how economists get science and technology wrong. One economist creates a highly idealized mathematical model. The model’s creator is, as Solow was, honest about its limitations. But it quickly gets passed through the mill and acquires authority by means of citation. A few years after Solow’s paper came out, Kenneth Arrow, another Nobel Prize winner, would write that Solow proved the “overwhelming importance [of technological change in economic growth] relative to capital formation.” It’s a sort of idea laundering: Solow said that we don’t know where growth in economic output comes from and, for want of a better alternative, termed that missing information “technical change.” But his admission of ignorance morphed into a supposed proof that new technologies drive economic growth.
Remarkably, in the 21st century, our policy debates reflect the fact that we do not have a good idea where innovation comes from, how it is directed, and how we prepare for its inevitable downsides. Too often conventional economics presents an obstacle to debating and discussing this topic.

As Nordhaus and Shellenberger conclude,
Over the next century, global energy demand will double, and perhaps triple. But even were energy consumption to stay flat, significantly reducing emissions from today's levels will require the creation of disruptive new technologies. It's a task for which a doctrine focused on the efficient allocation of scarce resources could hardly be more ill-suited.
Read the three essays discussed here in full at The Breakthrough Journal blog, at Bridges Trade BioRes review, and at Slate.