A fine novel, one that should make privacy critics and cybersecurity policymakers think hard.
A critique of the last 50 years of approaches to infrastructure development in many developing countries. Emphasis, data, and accounts of failures tied to getting the incentives wrong for how they are perceived locally.
I had hoped to draw some ideas from it for the modernization of telecom networks, but none came to me.
Privileges, careers, misdeeds, and revenues worthy of a multinational. An investigative report on the trade unions.
Was the author sued? Because the picture that emerges is of unions that think first of their own leadership, second of themselves, third of their members (mostly pensioners), then of workers, and not of society at large.
Recommended to me by Sebastiano Barisoni.
Italy is an extraordinary forge of material that could fill thousands of little books like this one.
In 1997 the decree for the Umbria earthquake allocated funds to restore a fortress in the province of Cuneo.
That same year, a law on cultural heritage was passed whose Article 12 dealt with paints and aerosols.
In 1995 a decree-law in support of the maritime and port sector regulated an extraordinary expedition to assist the people of Rwanda.
In 1965 one earned a university chair at 35; in 2005, at 53-59.
And so the citations go on...
Recommended to me by a reader of the blog.
An interesting book on the coming upheavals caused by the human presence on the planet.
Full of data and tables.
If I may offer one criticism: in systems this complex, cause and effect are not a given. Inputs and outputs perturb one another, and no accurate forecasts can be made about the technologies that will alter these perturbations (it takes only somewhat better batteries for the planet's profile to change, geopolitically too). In a word, the book ignores the exponential effect of research, which could enable environmental geoengineering and radically change the trends now under way.
Of course, a black swan cannot be predicted...
A gift from Roberto Siagri.
A short hagiographic essay on my friend Roberto Siagri, to learn more about his life and his thinking on business and technology.
Crichton is extraordinary in preparing the technical and historical references for his novels, and this book is no exception.
Reading about how we lived only a century ago, in the days of our parents' grandparents or great-grandparents, should make us reflect on what we take for granted today, and on how scandalized we are when not everyone in the world thinks as we do.
...were owners of lodging houses who lived in the neighborhood and often accepted stolen goods in payment of the rent, but many others were eminent citizens, landlords in absentia, who relied on a hard-nosed agent to collect the rents and maintain some semblance of order. In this period there were many rookeries of great notoriety at Seven Dials, Rosemary Lane, Jacob's Island, and Ratcliffe Highway, but none was more famous than the two and a half hectares in the heart of London that made up the rookery of St. Giles, the so-called "Holy Land." Situated close to Leicester Square with its theaters, to the Haymarket, center of prostitution, and to the elegant shops of Regent Street, this rookery occupied an ideal strategic position for any criminal who wished to "lie low."
According to contemporary accounts, the Holy Land was "a dense mass of houses so old they seem about to collapse, among which narrow and tortuous lanes wind their serpentine way. There is not the slightest chance of privacy, and whoever ventures into this place finds the streets - let us call them that out of pure courtesy - thronged with idlers, and sees, through half-glazed windows, rooms crammed to suffocation." Mention is also made of the "gutters of stagnant water," the "filth that fills the entryways," the "walls discolored by soot," the "doors falling off their hinges," and the "children swarming everywhere, relieving themselves wherever they happen to be."
At least until the Horse Guards clock struck seven and there appeared the first examples of that peculiar urban phenomenon, the commuter, heading to work on the "Marrowbone stage," that is to say, on foot. These were the armies of women and girls hired as seamstresses in the workrooms of the West End garment factories, where they toiled twelve hours a day for a few shillings a week.
At eight the shutters came off the shops along the main thoroughfares; apprentices and clerks dressed the windows for the day's trade, arranging what one sarcastic observer called "the innumerable trinkets and fripperies of fashion."
Between eight and nine was the rush hour, and the streets filled with men. They were all going to work: civil servants and bank cashiers, stockbrokers, confectioners and soap-boilers, on foot, by omnibus, in two-horse carriages, in gigs, producing a noisy, lively, intense traffic of vehicles and of drivers who cursed, swore, and laid great lashes on their horses.
In the midst of all this the street sweepers began their daily labors. In air heavy with ammonia, they gathered the first horse droppings, darting among carts and omnibuses. And they had plenty to do: according to Henry Mayhew, the average London horse deposited six tons of dung a year on the city streets, and there were at least a million horses in the city.
At the trial there was much speculation about Miss Miriam and her origins. Much of the testimony suggests that she was an actress. This would explain her ability to imitate the manners and accents of different social classes; her tendency to paint her face in an era when no respectable woman would ever spread cosmetics on her flesh; and her presence as Pierce's acknowledged mistress. In those days the line dividing an actress from a prostitute was exceedingly thin. And actors, by reason of their profession, were wandering nomads who generally consorted with criminals or belonged outright to the underworld. In any case, whatever her past, Miss Miriam appears to have been Pierce's mistress for several years.
It had taken place in October 1854. It was a time of political upheaval and military scandal. The nation's self-regard had suffered a heavy blow. The Crimean War was heading toward disaster. At the outset, J. B. Priestley observes, "the upper classes had greeted the war with enthusiasm, as a marvelous large-scale picnic in a remote and romantic place. It was almost as if the Black Sea had been opened to tourism. Wealthy officers like Lord Cardigan decided to bring their yachts. The wives of some commanders insisted on being present, accompanied by their maids. Numerous civilians canceled holidays planned elsewhere in order to follow the army and watch the show."
The show soon became a debacle. The British troops were badly trained, badly equipped, and ineptly led. Lord Raglan, who commanded them, was sixty-five and "old for his age." He often seemed to think he was still fighting at Waterloo, and called the enemy "the French," although France was now his ally.
Mr. Henry Fowler could hardly believe his eyes. In the dim light of a gas street lamp stood a delicate, rosy-cheeked creature, wonderfully young. She could not be much past twelve, the age of consent, and her bearing and the shyness of her manner revealed her fragility and inexperience.
He approached her, and the little girl answered his questions in a whisper, with much hesitation and downcast eyes, before leading him to an accommodation house not far away. Mr. Fowler eyed the building with a certain trepidation, its facade being none too attractive. It was thus a pleasant surprise when, at the child's light knock, the door was opened by a woman of extraordinary beauty, whom the girl addressed as "Miss Miriam." Entering the hall, Fowler saw at once that this house of assignation was not one of those low places where beds are let at five shillings an hour and, when the time is up, the proprietor comes round to rap his stick on the door; here the furniture was upholstered in plush, and there were sumptuous curtains, fine Persian carpets, and furnishings of taste and quality. Miss Miriam conducted herself with extraordinary dignity even when she asked him for one hundred and fifty pounds; but her manner was so refined that Fowler paid.
At that time, as indeed today, the British railways were most reluctant to operate on the Lord's Day. Sunday work was generally considered unnecessary and improper by all businesses, and the railways in particular had always shown an odd moralistic streak. Smoking on trains, for example, remained forbidden even when smoking had become an extremely widespread habit; so a gentleman who wished to enjoy a cigar had to tip the attendant, itself a thing strictly prohibited. This state of affairs persisted, despite insistent public pressure, until 1868, when Parliament finally passed a law compelling the railway companies to grant passengers the right to smoke.
Thus, although everyone admitted that even the most God-fearing men might need to travel on the Lord's Day, and although the widespread habit of spending the weekend out of town brought an ever greater demand for Sunday trains, the railway companies stubbornly resisted the trend. In 1854 the South Eastern Railway ran only four trains on Sundays, and the other company using London Bridge, the London & Greenwich Railway, kept six in service, less than half the normal number.
In 1854 many inhabitants of Victorian cities were worried by what was regarded as a wave of street crime. The subsequent periodic "epidemics" of violence would later lead, between 1862 and 1863, to a panic among pedestrians and to Parliament's passage of a "Robbery Act." This act provided unusually harsh penalties for offenders, among them flogging "in installments," to give prisoners time to recover before undergoing further lashes, and hanging. Thus more people were hanged in England in 1863 than in any of the preceding years going back to 1838.
These brutal street assaults were the crudest form of underworld activity. The toughs who carried them out were generally despised by other criminals, who liked neither crude work nor acts of violence. The most common method was to lure the victim, preferably drunk, into some dark corner with the help of an accomplice, possibly a woman; whereupon the tough "coshed" him, that is, beat and robbed him, leaving him unconscious in the gutter. It was not an elegant way to make a living.
For weeks the crowd had been gathering in the immediate vicinity of Coldbath Fields in the vague hope of catching a glimpse of the famous thief. And Pierce's house in Mayfair was invaded three times by eager souvenir hunters. A "well-born lady," not otherwise identified, was stopped while leaving the house with a man's handkerchief. She declared without the slightest embarrassment that she had wanted a memento of the man.
The "Times" declared that this attraction to a criminal was "deplorable, indeed decadent," and went so far as to suggest that the public's behavior reflected "some fatal flaw in the English mind."
It is therefore one of the curious coincidences of history that on May 29, when Pierce began his testimony, the attention of the public and the press was turned elsewhere. Quite unexpectedly, England now found itself facing a new trial of national proportions: a horrible and bloody revolt in India.
The growing British Empire - which some called "the Brutal Empire" - had suffered two great humiliations in recent decades. The first at Kabul, in Afghanistan, where in 1842 16,500 Britons, soldiers, women, and children, had died in six days. The second had been the Crimean War, recently concluded amid demands for reform of the army. This stance was so firm that even Lord Cardigan, until shortly before a national hero, fell into discredit; he was even accused (unjustly) of not having been present at the charge of the Light Brigade, and his prestige was further weakened by his marriage to the notorious horsewoman Adeline Horsey de Horsey.
Then the Indian Mutiny exploded, a third affront to Britain's world supremacy and a third blow to English self-assurance. How boldly sure of themselves the British in India were is amply demonstrated by the fact that they kept only 34,000 European troops there, commanding a quarter of a million native soldiers - the so-called sepoys - who were none too loyal to their British officers.
From 1840 onward, England had begun to behave in India in an ever more arbitrary fashion. The new evangelical fervor at home had brought ruthless religious reforms to the colony; thuggee and suttee had been done away with, and the Indians were not at all pleased to see foreigners trying to subvert their ancient religious traditions.
One of the best books I have read recently. Very clear-cut in its judgment: of the various forms of capitalism described, only capitalism based on innovative entrepreneurship can work.
Disenchanted, and very harsh in some passages.
But what is innovation, beyond something new? As we (and others) use the term, it is the marriage of new knowledge, embodied in an invention, with the successful introduction of that invention into the marketplace. Even the best inventions are useless unless they have been designed, marketed, and modified in ways that make them commercially viable. This requires someone who realizes the commercial opportunity presented by the innovation (or even a seemingly small element of the breakthrough), which sometimes is not the purpose the inventor had in mind, and then takes all the steps necessary to turn that opportunity into something many consumers will want to buy. These tasks are inherently entrepreneurial, an insight we will return to repeatedly throughout this book.
The financing problem just for richer countries is enormous. Consider the United States, where the challenge is the least acute among developed economies. As shown in figure 3, in 2004, benefit payments under the United States Social Security and Medicare programs totaled roughly … percent of GDP, accounting for about a quarter of all federal spending (which, in turn, is about 20 percent of GDP) and roughly 30 percent of federal tax revenue. In 2010, the earliest baby boomers will begin retiring, a trend that will pick up speed as the years pass. As it does, the promised income and medical benefits will soar.
Thus, the Congressional Budget Office (CBO), the United States government's neutral and official scorekeeper, has projected spending on these two programs, together with Medicaid (another entitlement program that supports health care for low-income individuals and families), to rise to 13 percent of GDP by 2025 and to 19 percent of GDP by 2050 (CBO, 2003).
In short, growth matters to aging societies because it makes it easier to afford government promises of support made to the elderly, among others. Aging, in turn, has two very different effects on the growth process. On the positive side, aging labor forces--up to a point--mean that the typical worker has more experience. More experienced workers, in turn, are more productive, so that as societies age, they should display faster productivity growth, other things being held constant.14 But in aging societies, not everything can be held constant. As societies grow older, they are likely to have a lower proportion of young adults without families or children to support, and thus the cohort of individuals that are more likely to take the risks that lead to the formation and growth of high-impact enterprises will be smaller. After some point, aging societies are likely to be less entrepreneurial, in the sense of the term as we are using it in this book: developing and growing enterprises that have high-growth potential. True, many senior citizens or near retirees in the United States are jumping off the corporate ladder to start their own consulting operations or specialty stores, the traditional retirement pursuit of the elderly in Japanese societies. But, other things being equal, it is difficult for older individuals to have acquired the knowledge needed to come up with and commercialize the kinds of breakthrough technologies and services that drive economic growth. That is one of the reasons why, we will argue in chapter 7, countries like Japan and those in Western Europe face an even steeper uphill economic climb than the United States in financing the income and medical needs of their retiring populations in the future.
Such technology-driven growth is essential, I believe, if we are not to drown in our own problems.... Without breakthroughs in medical science, it won't be possible to supply the health care to a generation of aging Americans without bankrupting the young.
Without breakthroughs in energy production and distribution, it won't be possible to bring Third World economies up to industrialized living standards without badly damaging the environment and stripping the world of natural resources. Without rapid economic growth powered by new technologies, it won't be possible to reduce poverty or ensure the next generation a better life than we have. (Mandel, 2004, xi-xii)
Just citing the hope for improvements in future technology begs the question: who comes up with it and, just as important, how does it get introduced into economies? As for the first question, economists generally agree that technological development is at least loosely tied to investment in the process of discovering new technologies, or research and development (R&D). But the more interesting question that so far has not been well studied, in our view, relates to the conditions under which new technology is introduced and used in economies. The answer to this puzzle turns very much on how an economic system is organized.
One other policy implication stands out from Romer's work, however: that technological advance is not likely to occur, at least in economies at the frontier where imitation is not an option, unless those who undertake it are assured of some reward. Hence the importance of imperfect competition, or something other than the perfectly competitive ideal where so many firms are making an identical product that they compete away any excess profits. If some extraordinary profits are not available to the individuals or firms who leap into the unknown, taking the risks to develop and commercialize something new, then technological advance would not occur. That is why economists typically defend the importance of an effective system of intellectual property rights that confers monopoly status on innovators for some limited period of time, or why market structures should not be perfectly competitive in dynamic industries, at least in the short to intermediate run. Continuing technological advance, however, competes away any short-run profits, so that over the long run they disappear.8 We draw on these key insights in our discussion of what is essential to entrepreneurial capitalism in subsequent chapters.
For many of us, November 9, 1989 - the day the Berlin Wall fell - marked the end of the terrifying cold war struggle between communism and capitalism. Capitalism had triumphed and communism was reduced to a mere historical curiosity. Looked at that way, the term "capitalism" seemed to refer to a simple and uniformly characterized form of economic organization, something we would recognize if we saw it even if we had no formal definition for it. But this view of capitalism turns out to be a seriously misleading oversimplification. As we will emphasize in this chapter, in the countries that we would all consider "capitalistic," the organization of the economy, the economic role of government, and a variety of other attributes differ profoundly. Some capitalist economies come close to being socialistic, while others are far more regulated. Moreover, the form taken by capitalism in a particular country has profound implications for its growth performance, and that is why, for our purposes here, it simply will not do to put all forms of capitalism into a single category. Rather, we will classify the economies of the different capitalist countries in four categories:
1. state-guided capitalism, in which government tries to guide the market, most often by supporting particular industries that it expects to become "winners";
2. oligarchic capitalism, in which the bulk of the power and wealth is held by a small group of individuals and families;
3. big-firm capitalism, in which the most significant economic activities are carried out by established giant enterprises; and
4. entrepreneurial capitalism, in which a significant role is played by small, innovative firms.
About the only thing these systems have in common is that they recognize the right of private ownership of property; beyond that they are very different. In particular, the economies in one category tend to have growth records very different from those in another, and that is because their mechanisms of growth, innovation, and entrepreneurship vary substantially. We will maintain that one of the most promising ways to promote growth in an economy that is currently characterized by a slow-moving form of capitalism is to adopt reforms that move it toward a type of capitalism with a more powerful growth engine. For the same reason, economies that already are characterized by a fast-growing form of capitalism must vigilantly watch out for developments that might undermine their membership in that group.
Governments can and do guide capitalism in other ways as well, for example, by favoring certain companies or sectors with tax breaks, exclusive licenses (legalized monopolies), or government contracts. Favored companies thus can become "national champions," whose success is assured by government policy. Governments can also support industries through protective measures, such as tariffs, insulating domestic companies from foreign competition. In addition, governments can guide the activities of foreign investors or partners, allowing them only in certain sectors and under certain conditions (commonly, that the foreign partner share and eventually transfer its technology and know-how to the local partner). China's joint ventures with American manufacturers and Japanese arrangements with U.S. aerospace companies are examples of this type of guidance.
State-guided capitalism can overlap to some degree with big-firm capitalism, but the two systems are fundamentally different. They overlap when, for example, national champion firms are favored by the state. These firms typically have large numbers of employees, who are managed in a highly structured way. Innovation, to the extent that it exists, is organized, separately budgeted for, and closely managed. It is rare in a state-guided system to have more than a few national champions, if only because the size of the domestic market may not allow more than a certain number. Meanwhile, other large firms may prosper, perhaps by conducting substantial business with government or by tapping into domestic and/or foreign markets that generate growth of the enterprise. Economies can then come to be dominated by big firms, but not necessarily directed toward that outcome by government policy.
It also may be tempting to equate state-driven capitalism with central planning, but the two systems also are very different. In centrally planned economies, the state not only picks winners, it also owns the means of production, sets all prices and wages, often cares little about what consumers may want and thus provides essentially no incentive for innovation that benefits the individual. On the contrary, the bureaucrats who ran the large "firms" in the former Soviet bloc countries, which were the apotheosis of central planning, were paid according to the amounts their plants produced, regardless of quality or whether consumers actually wanted the output. Central planning, by its nature, is not conducive to the adoption of breakthrough technology, the Soviet space program that launched Sputnik in 1957 being perhaps the only exception. But this effort was the kind of thing state socialism does best: a massive command-and-control activity for a specific, even limited purpose. It generated little in the way of pervasive long-run economic benefits.
It is important to note that, without adopting "state guidance" in the sense in which we use the term here, government nonetheless can play an important role in providing public goods and services whose benefits are shared widely throughout the population without necessarily seeking to decree which particular sectors or industries should prosper. For example, governments routinely provide basic infrastructure - roads, water and sanitation systems, education, police and judicial systems - and fund basic scientific research. In undertaking these activities, governments are simply providing a platform on which all economic actors can carry out their activities. Providing "public goods," or those whose benefits no single individual or firm can fully appropriate, is the basic job of governments (along with national defense). Doing so does not mean that governments are thereby "guiding" the economy. Providing public goods is normal in every form of capitalist economy, and not only in those that are guided by the state.
There are drawbacks, even dangers, to state-guided capitalism. Indeed, given our proclivity to favor the other forms of capitalism, it may not surprise readers to learn that we see many more drawbacks than advantages, especially once these successfully state-guided capitalist economies approach the per capita income levels of richer, less state-guided economies.
BELIEVING THAT STATE GUIDANCE WILL WORK FOREVER Governments that guide their economies with some success can learn the wrong lessons from the past. For countries whose economies have grown rapidly under the guiding hand of the state - one thinks of many Asian economies in particular - it can be tempting to conclude that indefinite continuation of the same approach will yield growth benefits. But the world changes. After picking the low-hanging fruit, the difficulties of harvesting grow much greater. So it is, and has been, for a number of countries where state guidance has worked for a period.
EXCESSIVE INVESTMENT A good example of what can go wrong is what happened to South Korea in the late 1990s. Long accustomed to directing its banks to provide loans to the larger South Korean conglomerates ("chaebols"), South Korea's government induced too many banks to invest excessively in the expansion of the semiconductor, steel, and chemicals industries. When the financial crisis that began in Southeast Asia during the summer of 1997 spread to South Korea, the country's banks and, more important, the companies that had borrowed to expand were so overextended that the South Korean economy came close to collapse. It was rescued only when the United States government led an international effort to prop up the country's financial institutions by extending the maturities of their deposits (Blustein, 2001). Only later would the South Korean government force a number of the chaebols to restructure and induce its banks to apply commercial, rather than government-directed, criteria to the country's lending....
PICKING THE WRONG WINNERS AND LOSERS Excess investment is not the only drawback of state-guided capitalism. As such countries approach the technological frontier, they no longer can just pick a sector or an industry, figuring, "We'll find out how the firms in that industry work and 'one up' them." Instead, once at the frontier, a country comes to the proverbial fork in the road. Which direction to choose? That is the question that firms in advanced economies face every day. They are not sure which new products and services consumers will want. They also don't know the outcome of their R&D efforts, however planned they may be.
Governments in state-guided economies are not comfortable with the seemingly chaotic, unplanned, rough-and-tumble process that is the hallmark of capitalism unconstrained by bureaucracy. Instead, having seen firsthand their initial success at picking sectors for their export prospects (with sales in the domestic economy to follow), these governments are apt to believe that the same process of guidance can continue to produce the winners of the future. But once economies are at the frontier where success is not so easy to generate - because there are no clear leaders to copy or follow - mistakes are easy to make. That is how Malaysia ended up building one of the world's largest high-technology parks in the 1990s, a multibillion-dollar venture that still does not seem to have paid off. And it is what has led Singapore to launch a major effort aimed at making the country one of the world's leaders in biotechnology, offering large salaries and perquisites to leading researchers from all over the world if they would spend significant time in Singapore. That gamble may yet work, but Singapore is not alone in believing that it can become the next Silicon Valley of biotech. South Korea has made major strides in the biotechnology field, in part because its government does not have the strict laws against cloning that are found in the United States. Meanwhile, in the United States, numerous states and localities are staking out their claims to be the center of the biotech revolution. Some will be successful in this biotech race, but not everyone.
SUSCEPTIBILITY TO CORRUPTION In economies where a business's success depends on whether it receives favors from government, there is always a danger of corruption. Firms will find subtle or not-so-subtle ways to earn those favors. China, where corruption is a well-known feature of the system, is a good example. As we will suggest shortly, although China has grown rapidly, it could grow faster were it free of corruption.
DIFFICULTY "PULLING THE PLUG" AND REDIRECTING GOVERNMENT RESOURCES A final danger of state-guided capitalism is that once a state has committed its resources and prestige to particular ventures or sectors, it can be hard to "pull the plug" if it becomes clear that major restructuring is called for or that competitors in other countries are surpassing them. Either governments don't want to lose face, or more commonly, politically powerful interests impede the ability of well-intentioned governments to abandon their interventions. The best examples of this problem are the agricultural subsidies extended by virtually all rich-country governments, despite the falling and now relatively small share of employment engaged in agriculture (in the United States, it is under 3 percent). Furthermore, despite the liberalized trading rules negotiated under GATT and then the World Trade Organization, rich countries still attempt to protect certain manufacturing industries from import competition, whether through "temporary" protection authorized by the so-called escape clause in the WTO agreement or via the more permanent variety: antidumping duties and countervailing duties to offset foreign subsidies (despite overwhelming condemnation of antidumping remedies in particular by economists). Indeed, it is ironic that political pressures often force governments to support failing industries rather than those industries with promise for the future, largely because the dying industries and their employees can be counted upon to cry most loudly for government assistance.
As already suggested, the form of capitalism we call "oligarchic" is easily confused with state-guided capitalism because under the former the state also is apt to be heavily involved in directing the economy. Capitalism is defined as "oligarchic" when, even though the economic system is nominally capitalist and property rights protect those who own substantial property, government policies are designed predominantly or exclusively to promote the interests of a very narrow (usually very wealthy) portion of the population or, what may be worse, the interests of the ruling autocrat and his (or her) friends and family (in this instance, the system is better characterized as a "kleptocracy"). This form of capitalism is, unfortunately, all too common in too many parts of the world, encompassing perhaps one billion or more of the world's population. In these societies, economic growth is not a central objective of the government, whose main goal is instead to maintain and enhance the economic position of the oligarchic few (including government leaders themselves) who own most of the country's resources. This fact distinguishes oligarchic capitalism from other autocratic, or less-than-democratic societies, where growth clearly is a central objective but where capitalism is repressively "guided" by the state. Of course, even in oligarchic economies, governments and the ruling elites to whom they respond may be and probably are interested to some degree in promoting growth, but only as a peripheral objective or a "constraint": to achieve enough growth to keep "the natives" from rebelling and overthrowing those in power as well as giving the ruling elites a larger accumulation of national wealth from which to expand their larceny.
It is these circumstances, along with the repressive powers that such governments exercise, which lead us reluctantly to conclude in chapter 6 that revolution may be the most effective (and perhaps the only) way to undo oligarchic capitalism and move toward a system where economywide growth becomes a primary goal of government.
But where do these radical, breakthrough innovations come from? The answer is that transformational technologies, and hence entrepreneurial capitalism, would not exist without entrepreneurs, who recognize an opportunity to sell something or some service that hadn't been there before and then act on it. Radical breakthroughs tend to be disproportionately developed and brought to market by a single individual or new firm, although frequently, if not generally, the ideas behind the breakthroughs originate in larger firms (or universities) that, because of their bureaucratic structures, do not exploit them (Moore and Davis, 2004, 32). As Jean-Baptiste Say noted at the beginning of the nineteenth century, without the entrepreneur, "[scientific] knowledge might possibly have lain dormant in the memory of one or two persons, or in the pages of literature" (Say, 1834, 81). Although the finding is now somewhat dated, one thorough statistical study has found that smaller, younger firms produce substantially more innovations per employee than larger, more established firms (Acs and Audretsch, 1990).
Beyond this, paradoxically, studies have found (for the United States at least) that the typical entrepreneur earns less monetary compensation than her employee counterpart. Why then do so many entrepreneurs willingly engage in what is inherently risky activity? Because the additional psychic rewards-being one's own boss, pride in self-accomplishment, and so forth-make the entrepreneurial endeavor worthwhile even if the entrepreneur does not gain the mega-prize. This, in turn, helps explain why entrepreneurs have a comparative advantage relative to large companies.
Economies characterized by entrepreneurial capitalism are also dynamic in another sense: there is a constant churning of firms in the pecking order among all firms, in contrast with greater stability in firm rankings in economies characterized by big-firm capitalism. Consider, for example, the contrasting experiences of the United States and Europe. Of the twenty-five largest firms in the United States in 1998, eight did not exist or were very small in 1960. In Europe, all twenty-five of the companies that were the largest in 1998 were already large in 1960. Moreover, the pace of the change in America seems to have accelerated. Whereas it took twenty years to replace one-third of the Fortune 500 companies in 1960, it took just four years to accomplish this task in 1998 (Commission of the European Communities, 2003).
Because radical change is so disruptive, entrepreneurial economies can benefit from properly constructed safety nets that shield some of the victims of change from its harsh impacts (without at the same time destroying their initiative to get back on their feet). This may seem paradoxical or counterintuitive. The former chief scientist of Israel once told two of the present authors in conversation that she believed one reason Israel was so entrepreneurial was that its people had a high level of discomfort, brought about largely by external threats to their physical security. In societies where individuals may be too comfortable-much of Western Europe, for example-people may be reluctant to take the risks inherent in any entrepreneurial endeavor. Indeed, in 2004, a French government employee wrote a best-selling book called Bonjour Paresse (Hello Laziness), which extolled the virtues of not working hard. This "avoidance of work" ethic is now a serious cultural issue across Western Europe, manifesting itself in a noticeable drop in average hours worked per year by employed individuals in major European countries (see chapter 7).
Thus, although it may seem counterintuitive, constructive safety nets that catch the fallen without destroying their incentive to get back up can be more important in high-income, entrepreneurial economies than in economies with lower average standards of living. This is because the potential losers from change in high-income countries have more to lose and thus greater incentive to try to stop it or slow it down.
To summarize, entrepreneurial capitalism is the system we believe is most conducive to radical innovation. But no advanced economy can survive only with entrepreneurs (just as individuals cannot survive by eating just one type of food). Big firms remain essential to refine and mass-produce the radical innovations that entrepreneurs have a greater propensity to develop or introduce. One area for future research is the optimal mix of entrepreneurial and large firms. To address this challenge, however, requires better data sets than currently exist. (Readers interested in the important but overlooked topic of what data are required to test the hypotheses advanced in this book should consult the appendix.)
Throughout most of recorded history and in almost all societies, accumulation of wealth has been a primary goal of enterprising individuals. In the vernacular familiar to American readers, individuals have pursued one of the two primary roads to acquire wealth: increasing the size of the pie and taking one's fair share from the increase, or simply taking more of the pie, whether or not it grows. Until the time of the Industrial Revolution, the second of these options-redistribution of what was already there-was pursued overwhelmingly. That fact, ultimately, explains why the economic growth achieved by industrial countries in the last two centuries is unparalleled in previous history, ancient or recent.
So manifest are the immediate advantages of wealth-grabbing activities over activities that increase the total wealth of society that it is not easy to explain what led modern free-market economies to move toward the latter. The obvious answer is the appearance of new institutions that reined in the enterprising wealth-grabbing options and limited their benefits, while at the same time offering greater reward and certainty of payoffs to the enterprising individuals who contributed to economic growth. Put that way, it becomes clear that such a revolutionary change in incentive structure must have been a piece of great good luck for societies where the revolutions occurred; indeed, something of a miracle.
An entrepreneurial economy must have entrepreneurs-not just any entrepreneurs, but innovative entrepreneurs. We submit that three preconditions are necessary to generate them. But just as important, entrepreneurial economies must have ways to ensure that the successful entrepreneurs that grow into large firms are kept on their toes. Otherwise, as suggested in chapter 8, big-firm capitalism can become sclerotic. Our fourth condition addresses this particular danger.
Easy to Start and Grow a Business
To encourage the formation of innovative entrepreneurial enterprises, governments should lower the costs of "formality" (business and property registration and ease of hiring and firing workers); have a workable bankruptcy system in place; and facilitate the formation and growth of their formal financial sectors, which channel resources to innovative entrepreneurs. The first condition should hardly be a surprise. If entrepreneurship is about starting and growing a commercial enterprise (we ignore for this purpose so-called social entrepreneurs who might have other objectives in mind), then it must be easy and inexpensive to do so-formally, that is. In other words, licensing requirements should be few (unless the business requires some kind of special expertise, such as a medical care facility), the time and the cost required to fill out the necessary applications should be kept to a minimum, and so should time required for approval. These same elements apply to registration of property and collateral (to secure loans); these steps should be easily managed.
In an age increasingly dominated by the Internet, many or all of these activities can be conducted online, and in parts of developed economies, they already are. For developing countries that lack the infrastructure for high-speed Internet communication from remote locations, the application process can be accelerated at the appropriate registry with relatively low-cost electronic kiosks or similar equipment.
BANKRUPTCY PROTECTION It may seem paradoxical, but another important, but indirect, factor affecting the costs of entry is the cost of exit or failing. In most societies and throughout history, bankruptcy has been a mark of shame, if not a criminal offense requiring the bankrupt to serve time in jail. The United States and some other countries have taken a more enlightened attitude toward debtors who cannot pay their debts when they come due (one of the definitions of bankruptcy): depending on the part of the law they invoke, those who "declare" bankruptcy are excused from some of their debts, provided they agree to repay the balance over some rescheduled time period.3 Effective bankruptcy protection is critical to promoting entrepreneurship, since without it, many would-be entrepreneurs would be unwilling to take the risks of starting a business, knowing that if they fail they could lose everything, on top of facing the severe social stigma of having declared bankruptcy. Indeed, it is safe to speculate that there is a strong negative correlation between the strength of that stigma and attitudes toward entrepreneurship in any given society: the more society penalizes failure, the less entrepreneurship it will get. (This proposition has its analogue in labor protection: the more difficult it is to fire workers, the less incentive firms have to hire them.) Those social scientists who attribute differences in entrepreneurship rates between countries to differences in cultural attitudes (a subject we will soon explore) thus may be missing an important underlying policy that influences culture, namely, the policy toward bankruptcy.
ACCESS TO FINANCE A third essential factor in starting most businesses is access to capital. J. R. Hicks, one of the great British economists, observed that the liquidity of capital markets in eighteenth-century England helped ignite the innovation associated with the Industrial Revolution by allowing inherently illiquid long-term investments in capital equipment to be financed (Hicks, 1969, 143-45). Early in his distinguished career, Joseph Schumpeter emphasized the importance of banks in funding entrepreneurs and established businesses, spurring technological innovation and hence economic growth (Schumpeter, 1911). In recent years, with more attention paid by economists to the sources of growth, there is a growing consensus that economic growth depends to at least some degree on the maturity and soundness of economies' financial systems (Levine, 2004). After all, the central role of financial systems-financial intermediaries and capital markets-is to channel funds of those with excess funds (savers) to those who are likely to earn the highest returns on those funds (investors). As banks, other financial intermediaries (insurance companies, pension funds), and capital markets (stock and bond markets) grow in size and sophistication they become more efficient in performing this critical function. The more efficient they are, the more risk that savers are likely to take with their funds, which should foster more investment and entrepreneurship. As Columbia University economists Massimiliano Amarante and Edmund Phelps succinctly put it: "Financiers are the channel through which innovations can be transformed from mere ideas to a source of economic growth" (Amarante and Phelps, 2005).
A cursory reading of history indicates that the pursuit of wealth by at least some individuals has been present in virtually every society (there are exceptions-medieval serfs, monks in monasteries, and the like-but these are the exceptions that only prove the rule). As we noted at the outset of the chapter, there are fundamentally two ways in which wealth may be acquired: by undertaking productive activities that enlarge the size of total output for any society, or by ignoring that objective and seeking instead to gain a larger share of whatever output is generated. In the vernacular, the choices are to expand the pie or to seek larger slices.
Clearly, economic growth requires activities of the first type-those that expand the pie or total output-and we will refer to this as productive entrepreneurship. In turn, we have previously identified two types of productive entrepreneurship: innovative and replicative. For entrepreneurial societies, we are interested in the former, for it is only by commercializing new products and services or by adopting new and better ways of making or delivering existing ones that the economic frontier moves out.
It is not sufficient for entrepreneurial economies to make it easy for entrepreneurs to start their businesses. Such individuals and the firms they found must be rewarded for their success. Several institutions are important in this regard: the rule of law (effectively enforced), intellectual property protection (but not too much), taxes that are not unduly onerous, and rewards and mechanisms to facilitate imitation in certain environments.
The more interesting questions are where do innovative entrepreneurs get their ideas, and do incentives matter here too? In answering these questions, it is useful first to dispel the notion that innovation is something that is entirely new. Of course, innovative products and services are new, but they could not exist without many other components or ideas that already exist. As the famous scientist Isaac Newton once said, "If I have seen further than others, it is by standing upon the shoulders of giants." So, too, with innovative entrepreneurs, or any inventor for that matter: technological breakthroughs happen only when related ideas or products, already in the marketplace, are put together in new ways (Hargadon, 2003). Successful innovative entrepreneurs are the ones who recognize and then realize the commercial opportunities that such recombinations offer.
So what does government policy have to do with all this? The answer is: plenty. Although inventors will tinker simply because they are good at it or love to do it, as with any other activity, one will get more innovation if it is actively encouraged and rewarded. European monarchs recognized this as early as 1300, providing inventors with temporary exclusive rights, or what today we call "monopoly profits," for their innovations. The concept really took hold in England several centuries later and was formally embodied in the United States Constitution by America's founding fathers (Jaffe and Lerner, 2004). Congress implemented the constitutional guarantee initially by providing seventeen years of monopoly protection, since extended to twenty years, both periods running from the date the Patent Office awards the patent (after determining it to be an advance over the "prior art"). Other nations have since introduced their own forms of patent protection, though it is common outside the United States for the protection to be awarded to the "first to file" the application, and to that date in particular.
Yet even with the temporary monopoly profits awarded to innovators under patents, the lion's share of the gains from innovation still spill over to the rest of society. This is a good thing, as long as patent holders are adequately compensated, since societies benefit most from innovation when it is rapidly diffused. For example, one noted study of one hundred American firms found that "information concerning development decisions is generally in the hands of rivals within 12 to 18 months, on the average, and information containing the detailed nature and operation of a new product or process generally leaks out within about a year" (Mansfield, Schwartz, and Wagner, 1981, 911). William Nordhaus estimates that inventors capture as little as 3 percent of the total social benefits of their inventions (Nordhaus, 2004).
Still, even with spillovers of this magnitude, having a patent remains a prized possession and thus must continue to act as a powerful force for stimulating innovation. Indeed, there is a danger that this force can be too powerful. If patents are too easy to come by-that is, temporary monopolies are awarded for developments that are not truly novel but instead are "obvious" and thus unworthy of legal protection-then society will stimulate too many "temporary" monopolies. Patents that are unjustly awarded will then discourage entrepreneurship because they will prevent others with truly novel ideas that are deserving of patent protection in their own right (or at least the ability to be left alone without fear of lawsuits) from entering markets and competing against those whose patents are not deserved. This is an increasingly serious problem in the United States, which we discuss further in chapter 8.
How then can the winners of the competitive race be motivated to keep innovating, whether incrementally or radically? Or, at the very least, how can society prevent the winners in one round of economic competition from thwarting the next generation of entrepreneurs who threaten to topple the previous winners? We consider here two institutions that would seem essential for this task: antitrust law and enforcement and openness to international trade and investment. We will take up a third important institution referred to earlier-the law and practices surrounding the transfer of new technology out of university laboratories and into the marketplace-in chapter 8, where we look ahead to ways to keep winners on their toes in all capitalist economies.
ANTITRUST Ask many microeconomists (and many plaintiffs' lawyers) how society can best assure continuation of Red-Queen-style competitive races in markets where only a few winners are left standing, and they will utter four words: "enforce the antitrust laws." We will not digress here to discuss these laws, which have become common throughout the developed and much of the developing world (though unevenly enforced), in great detail. For our purposes it is sufficient to highlight three common themes that run through these laws: that competitors should not be allowed to fix prices (except in rare circumstances where joint activity is necessary for products or services to exist, such as common royalties for copyrighted works); that mergers between firms already dominant in concentrated markets ought not to be allowed; and that firms with "market power" (those with the ability to set prices on their own rather than to accept the impersonal verdict of the market) should not be allowed to abuse that power through exclusive arrangements and other behavior having no legitimate business purpose that cements their market position.
Indeed, China owes much of its economic success to the welcome mat its leaders have put out to foreign investors. And investors have responded, pouring ever-increasing sums, talent, and know-how into the country. By 2004, China had become the leading destination in the world for foreign direct investment (FDI)-that is, "sticky" investments in plant and equipment or at least significant minority stakes in domestic firms-attracting more than $60 billion in that year alone. One of the amazing things about China's success in this regard is that foreign investors have continued to rush into China, although legal protections for contracts and property, and the courts that support them, are far from ideal, and corruption reportedly is pervasive (Wei, 2001). The best explanation we can give for this oddity is that China's large and rapidly growing domestic market makes the country "too big to pass up," so that investors appear more than willing to wait for the legal and institutional systems to improve. China's agreement to make necessary changes, and to open further parts of its economy that have been sheltered from foreign investment (notably, financial services), as part of its entry into the World Trade Organization gives investors reason to believe that their hopes will be realized (although in 2006 there were disturbing signs of a potential backlash against foreign investment, especially takeovers of Chinese firms by foreign investors).
Somewhat ironically, poor countries that want to emulate China's success in attracting foreign direct investment will have to take measures that, as a by-product, should foster domestic entrepreneurship in their own countries. Foreign direct investment has long been very unevenly distributed around the world, being concentrated in rich countries and in only a selected handful of developing or emerging market economies. For developing countries that have not been prime destinations for foreign investment to have any chance at cracking into this select circle of destination countries, their governments will have to take steps to make foreign investors feel welcome. At the top of this list are such essentials as enforceable rights of contract and property and a minimum of corruption.
The key, in our view, is that whatever set of institutions is in place must be stable and viewed widely by residents and foreign investors as trustworthy, so that all parties can reasonably expect to know what the rules are when they conduct business or go about their private lives.
Getting to this point is not something that happens with a wave of the hand or through some official pronouncement; it can take decades if not generations to establish (although the Russian experience of entrepreneurial values being handed down through family relationships in less than a generation is an encouraging sign that the transition can be much shorter). This isn't to say that growth cannot happen until this occurs, just that the circle in which commercial transactions take place can widen only when parties at both ends of any bargain have a common understanding of the rules. Since growth occurs largely through trade, which permits the specialization of labor, the more rapidly this circle of trust widens, the greater will be the opportunities for growth. In effect, trust can substitute for formal legal rights, and where it works, it can be a lot less costly than reliance on detailed legal documents (Fukuyama, 1996). This helps to explain why China, which has lacked a formal legal system, has been able, so far, to defy conventional wisdom and grow as rapidly as it has.
But, as Chinese leaders are learning, trust goes only so far. As the distance between parties grows-so that seller and buyer do not know each other or may not be engaged in repeat transactions-trust becomes an inadequate substitute for law.
Israeli government policy-beyond welcoming immigrants by providing Hebrew-language training and temporary housing and other living support-has facilitated the start-up and expansion of high-tech entrepreneurial ventures, in particular, through a government-supported venture fund that provided seed capital to enterprises that already had some private sector backing. In his exhaustive review of this program, Professor Dan Breznitz of Georgia Tech has concluded that this matching requirement, coupled with the nimble decision-making by the fund's leaders, made government support successful (Breznitz, 2005). Although it is difficult to know with precision how many companies have prospered as a result, the overall picture of entrepreneurial success is unmistakable: Israeli companies have been remarkably successful in "going public" on the New York Stock Exchange.
Third, ultimately more thought must be given to processes by which aid can be delivered directly to the intended beneficiaries-the sick, children in schools, and so forth-immunizing it from the influence or direction of local governments. This would reduce the "leakages" in the aid pipeline associated with corruption, inefficiency, or substitution. The Gates Foundation, for example, is committing huge sums to preventing and fighting diseases in third world countries, and it is doing so directly, not through government intermediaries. Circumventing the distorting influences of local governments is more difficult to do with monies provided for education or to build infrastructure, which are inherently governmental functions. We leave it to those more expert than ourselves to see whether aid supplied privately nonetheless can be delivered directly toward these uses.
First, the standard prescriptions for improved economic performance in both Japan and continental Europe, which look very much like the "Washington Consensus" prescriptions for developing countries, lack a central organizing principle. Precisely what kind of capitalism do the proponents of the standard prescriptions envision for these other countries? Our answer to this question, again not surprising to readers who have made it this far, is that continental Europe and Japan, as perhaps the leading exemplars of big-firm capitalism, need a healthy dose of what we have called "innovative entrepreneurship." Although some large firms in these parts of the world have been truly innovative-Toyota in Japan or Nokia in Europe, to take two examples-the United States experience teaches that the most reliable source of radical innovation (and that is what is required to step up growth) is to be found among new, vibrant firms that do not have a vested interest in preserving their current markets. Ironically, the European and Japanese economies were built by entrepreneurs and still have many smaller firms, indeed so small in some cases (Italy being a prime example) that they are unable to take advantage of the economies of scale necessary to match the low prices coming out of China and other low-cost producing nations, or find it difficult to grow. Nonetheless, the industrial makeup of these economies is far more stable-and stagnant-than that of the United States, where the names of companies in any list of "top" enterprises change from decade to decade. Thus, we find it fair and useful, though admittedly convenient, to characterize the continental European and Japanese economies as leading exemplars of big-firm capitalism and to suggest that only by renewing the innovative entrepreneurial spirit that once helped build these economies can each reasonably expect to grow more rapidly in the future.
Second, and of equal importance, any reform program aimed at enhancing growth over the long run must take account of fundamental political realities in both parts of the world: that abrupt, radical change is unlikely to be embraced by the majority of voters or, even if initially embraced, is not likely, given current realities, to be maintained for a sustained period. Instead, if any reform package is to have a chance at producing an enduring and constructive impact, it probably must be incremental in nature. The model we suggest here draws on the way in which China has gradually embraced capitalism, as opposed to Russia's sudden turn from central planning to something akin to Wild West capitalism.
Economic growth is also useful, if not essential, if countries want to make good on their costly promises to their citizens: promises to pay for their health care and their retirement, to cushion the economic pain of unemployment, and so forth. While European governments have promised their citizens more protection than those of most other countries, including Japan, these two parts of the world share a common demographic challenge-population aging-that will be far easier to meet with faster growing economies. The accompanying figure should make this clear. As recently as 1995, the share of the population represented by those over sixty-five was pretty much the same-roughly within two percentage points of 15 percent-in Western Europe, Japan, and the United States. By 2005, however, the share of the elderly in Japan had soared to 20 percent, while the share in the United States remained flat at about 13 percent. By 2020, all rich countries will have aged to the point where the share of the elderly will be approaching 30 percent in Japan, 20 percent in Western Europe, and 17 percent in the United States. By 2030, it is predicted that there will be only one worker for every retiree in Italy, and a ratio of 1.5 workers to each retiree in Germany (Baily and Kirkegaard, 2005). Feeding, clothing, and caring for retirees will therefore require a very large and increasing share of what employed persons produce in total.
Figure: Population over Age Sixty-five as Ratio of Total Population. Source: OECD Factbook 2005: Economic, Environmental, and Social Statistics. Paris: OECD, August 2005.
Population aging in these societies will have certain unavoidable implications. As the share of the population over sixty-five increases, the ratio of those working to those not working-and receiving pension and health care benefits in retirement-will steadily fall. Only if workers become increasingly productive, that is, only if economic output per employed person grows, will workers experience the rising living standards their parents enjoyed, unless of course retirement benefits are cut, which is highly unlikely as elderly citizens comprise an ever larger share of voting publics. More than generational warfare is at stake. If wages do not continue to rise, then those most able to leave-those with the skills necessary to prosper in an increasingly global and technological economy-will do so, making it more difficult for the economies they leave to support aging populations. Indeed, many of continental Europe's "best and brightest" have already crossed the English Channel to work in the more thriving economies of Great Britain and Ireland, or the Atlantic Ocean to reach the United States (to the extent they are able to do so given the more restrictive immigration policies pursued since the September 11 terrorist attacks).
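The arithmetic behind this squeeze can be sketched in a few lines. The snippet below is a deliberately stylized illustration, not drawn from the text: the benefit level, the worker consumption target, and the function name are all assumptions chosen only to show how a falling worker-to-retiree ratio forces output per worker up under pay-as-you-go financing.

```python
def required_output_per_worker(benefit_per_retiree: float,
                               retirees_per_worker: float,
                               consumption_per_worker: float) -> float:
    """Output each worker must produce so that, after funding retiree
    benefits on a pay-as-you-go basis, workers keep a given level of
    consumption for themselves."""
    return consumption_per_worker + benefit_per_retiree * retirees_per_worker

# Stylized numbers: each retiree receives 30 units, each worker consumes 70.
today = required_output_per_worker(30, 0.5, 70)       # two workers per retiree
italy_2030 = required_output_per_worker(30, 1.0, 70)  # one worker per retiree

print(today, italy_2030)
print(f"required productivity growth: {italy_2030 / today - 1:.0%}")
```

Even in this toy model, halving the worker-to-retiree ratio forces output per worker up by nearly a fifth just to hold everyone's consumption constant, which is the sense in which faster growth makes the aging challenge "far easier to meet."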
In principle, Japan and Western Europe would find it much easier to manage their aging challenge if they adopted substantially more liberal immigration policies, but this, too, is highly unlikely. Japan has a long history of not accepting immigrants, and for cultural reasons it is not likely to change course despite the growing fiscal pressures implied by its rapidly aging population. Meanwhile, many European countries already face considerable difficulties in absorbing their existing immigrant populations, which, as shown in table 13, are substantial in a number of nations, and which in some countries constitute a greater share of the populations than is the case in the United States, which is widely known for its welcoming attitude toward immigrants. But as the 2005 riots among Islamic immigrants living in France and the ongoing tensions between the native and immigrant Islamic populations in the historically tolerant Netherlands illustrate, many European countries have had a hard time fully accepting into the economic and political mainstreams of their societies immigrants who are often less skilled and hold different religious beliefs. As a result, immigrants typically suffer much higher unemployment rates and earn lower wages than natives, who already have their own substantial unemployment problems (Great Britain and Ireland excepted).
For these reasons, therefore, neither Japan nor the European countries are likely to be able to reduce significantly the financial burdens of their aging populations by accepting more immigrants. Both regions must find a way to grow more rapidly in the years ahead or else face wrenching generational warfare over the generosity of retirement benefits.
In some ways, this challenge will be more difficult for the countries to meet at this time than following World War II, when both parts of the world were flat on their backs and citizens worked hard simply to survive. Furthermore, the nations then were too poor to support a broad economic safety net-for those no longer working or those still in the labor force who were looking for work. Today, however, residents of both Western Europe and Japan are largely comfortable economically-at least the healthy majority who have jobs-and they perceive no immediate crisis, even though many of their children cannot locate suitable employment. The central question both parts of the world face is how soon their citizenry will wake up and realize the magnitude of the economic challenge before them and whether, when they do, a crisis already will be at hand.
Despite this thought leadership, both Europe (including Great Britain) and Japan moved away from their entrepreneurial roots toward a very different sort of capitalism, one that focused on not only preserving large firms, but also actively promoting them through various forms of state guidance: subsidies, implicit or explicit directions to banks to support particular enterprises, and other kinds of state assistance (although there has been an active debate among academics for some time over the importance of these measures for growth). In Japan, this big-firm capitalism took the form of zaibatsu, or financial-industrial conglomerates, in which the country's largest banks both loaned money to and invested in the equity of that country's emerging large enterprises. In Europe, too, banks took equity positions in large borrowers, but neither the firms nor the banks called all the shots. As Columbia University's Edmund Phelps has explained, a "corporatist" economic model evolved in the early decades of the twentieth century in continental Europe, South America, and East Asia, one in which property may have been privately owned but the fundamental decisions about how national savings were to be allocated were made by social consensus-including firms, labor unions, banks, and, we would add, government (Phelps, 2006). This corporatist model was (and is) similar to the iron triangle of "big firms, big labor, and big government," minus the banks, so well described for the United States in the first two decades following World War II by the late John Kenneth Galbraith in his then-best-selling work The New Industrial State (a description that no longer fits the U.S. economy, as we have argued at various points in this book).
The corporatist model, especially the active involvement of organized labor in firm governance, flowered after World War II, especially in Germany, the home of "codetermination," or the practice of labor representatives sitting on corporate boards (often along with bank representatives). Before the war, labor unions struggled to gain the right to strike, both in Europe and in the United States. For a time, formal labor participation in firm governance in the postwar era was viewed favorably, even in the United States, as an important instrument for gaining labor's cooperation in productivity and quality improvements in firms while avoiding strikes. In more recent times, however, labor's involvement seems to have acted as a brake on innovation, especially changes that lead to job loss. To this extent, labor board representatives have a clear conflict of interest, since their main objective-protecting existing jobs-is not coincident with the central objective of the firm, which is to maximize its current and future profitability.
The Japanese and European financial systems heavily favored well-established companies rather than start-ups or fledgling enterprises for two reasons. Banks naturally were more interested in lending to larger companies whose shares they had bought and could potentially trade. At the same time, securities markets developed more slowly in Japan and Europe than in the United States, an outcome that may have been an unintended consequence of advanced banking institutions in the former countries, which were not nearly as prevalent in the latter. In particular, it is possible, if not likely, that securities markets developed more quickly and more deeply in the United States precisely because commercial banks were prohibited from underwriting securities under the Glass-Steagall Act of 1933, enacted in the midst of the Depression. This prohibition ironically protected investment banks (that do underwrite stocks and bonds) from competition for more than six decades before Congress effectively repealed Glass-Steagall in 1999 (by enacting the Gramm-Leach-Bliley Financial Modernization Act), during which time several of these institutions (notably, Goldman Sachs, Morgan Stanley, and Merrill Lynch) grew into financial powerhouses. The same thing did not occur in Europe, very likely because the large universal banks there already had easier ways to finance the activities of their large firm customers-simply by lending to them-than by underwriting their securities.
Would-be European entrepreneurs also complain about a lack of access to capital, though the role of government policy on this issue is less clear. European governments do not guide private banks' lending decisions, but complaints about the lack of access to bank capital are more common than in the United States (European Commission, 2002). But in an age when credit card financing is widely available and is used in developed economies to finance start-ups, and many start-ups themselves do not seem to require much capital, it is not clear how significant a constraint access to capital really is, any longer, to the launching of at least some new enterprises (Hurst and Lusardi, 2004). For ventures that are more capital-intensive, however, the absence of early stage or "seed" capital is a problem that Europeans, and the French in particular, have raised as a significant barrier to innovative entrepreneurship. Later we suggest that to the extent that financing is a problem, it is closely related to other policies that inhibit growth of new enterprises in Europe.
Indeed, one significant finding of a study of OECD economies is that successful start-up enterprises in the United States add workers at a much faster rate than those in Europe (OECD, 2003). Certainly, the major reason for this must be the more restrictive labor rules in Europe. To make matters worse, generous unemployment compensation and health and disability programs that can provide payments to unemployed workers for extended periods dampen the supply of workers who are actively looking for work at any one time. Although this effect may lower the measured unemployment rate by reducing the measured labor force-which includes only those already working and unemployed individuals who are actively looking for work-it dampens total economic output and its growth.
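The measurement effect described above can be made concrete with a tiny numerical sketch. The figures are hypothetical, chosen only to illustrate the definition of the measured unemployment rate:

```python
# Hypothetical numbers: how shrinking the measured labor force lowers
# the measured unemployment rate even though employment and output
# are unchanged.

def unemployment_rate(employed, actively_looking):
    """Unemployed-and-looking as a share of the measured labor force."""
    labor_force = employed + actively_looking
    return actively_looking / labor_force

# Before: 90 employed, 10 unemployed and actively looking -> 10%.
before = unemployment_rate(90, 10)

# Generous benefits induce 5 of the 10 to stop looking; they drop out
# of the measured labor force entirely. Employment is unchanged.
after = unemployment_rate(90, 5)

print(f"before: {before:.1%}, after: {after:.1%}")
```

The rate falls from 10.0% to roughly 5.3% with no new job created, which is exactly the statistical artifact the passage warns about.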
Accordingly, contrary to what one would expect in a dynamic economy, where good firms grow and poorly performing firms do not, in several European countries the very opposite has been the case. As one study has documented, in these countries the companies in the bottom quartile of performance-the least productive firms-have grown more rapidly than the best-performing companies. As the study authors conclude, "the United States eliminates its least productive companies; the EU does not," a result they attribute to the oppressive combination of excessive product and labor market regulation and zoning rules that inhibit entry by more innovative firms (Baily and Farrell, 2006b).
Now that they are at or close to the technological frontier, both continental Europe and Japan have no choice, if they want to grow faster (as they should for reasons already outlined), except to foster more innovation. It is possible, of course, that through some combination of luck and good policy, existing large firms or perhaps foreign multinationals (large firms from abroad) will advance innovation to some degree. Yet as we have seen, large firms typically are better known for their incremental advances than for their radical breakthroughs. If the latter is what European and Japanese policy makers want, their top priority should be to promote the formation and growth of innovative enterprises.
On the surface, it might appear that European leaders have recognized this in their Lisbon agenda. In particular, the European Commission in 2003 issued a Green Paper on Entrepreneurship, which outlined a series of ways to promote small- and medium-sized enterprises, or SMEs. Japanese government officials also, from time to time, give a nod to the importance of SME growth.
But the very term "small- and medium-sized enterprises," or its acronym SME, reveals a fundamental confusion about the meaning of "entrepreneurship." There is a world of difference between what we have called "replicative" entrepreneurs and "innovative" entrepreneurs. Although replicative entrepreneurship offers those who undertake it a financial means of support, it is only through innovative entrepreneurship-commercial activities that embody some new product or service, or method of production or delivery-that societies advance their technological frontiers and thus their standards of living.
This distinction explains why self-employment data, for example, can be highly misleading, at worst, or of little use, at best, in assessing how successful economies are in promoting innovative entrepreneurship. As shown in table 16, Europe and Japan have plenty of self-employed individuals. In some European countries, the share of self-employed in the workforce exceeds that of the United States.
It may be useful and necessary to go even further to ease European workers' anxiety that changes at the margin will affect them. Denmark provides one role model. It permits firms to hire and fire without significant hurdles and also puts strict limits on its (high) unemployment compensation benefits, but at the same time the government shares in the cost of retraining and subsidizes the wages of workers who take new jobs. According to at least one media account, this system appears to account for the fact that Denmark's unemployment rate is about half that of its European neighbors, and the reported rates of anxiety about job security are far below those found in its European counterparts (Walker, 2006a). A related approach is the concept of wage insurance. Under this system, governments would compensate displaced workers for a limited period (perhaps two years) for some portion of any loss in income if they take a new job that pays less than the old one. Indeed, to its credit, Germany has adopted a limited version of wage insurance for older workers (as has the United States, but only for those who can prove they were displaced by foreign imports) and reduced the term of unemployment compensation payments instead.
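The wage-insurance mechanism just described is simple enough to sketch in a few lines. The replacement rate and period below are hypothetical (the passage only says "some portion" and "perhaps two years"), so treat this as an illustration, not the terms of any actual program:

```python
# A sketch of wage insurance: the government replaces a fraction of
# the earnings loss, for a limited period, when a displaced worker
# takes a lower-paying job. Replacement rate and period are assumed.

def wage_insurance_payment(old_wage, new_wage, replacement_rate=0.5,
                           max_years=2):
    """Annual payment and total over the insured period."""
    loss = max(old_wage - new_wage, 0)  # no payment if pay did not fall
    annual = replacement_rate * loss
    return annual, annual * max_years

annual, total = wage_insurance_payment(old_wage=50_000, new_wage=40_000)
print(annual, total)  # 5000.0 per year, 10000.0 over the two years
```

The design point is visible in the `max(..., 0)`: the payment is tied to taking a new job, so unlike open-ended unemployment compensation it subsidizes reemployment rather than continued search.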
One cannot simply dismiss the big-firm regime peril. For one thing, the U.S. economy has not always had such a nice blend of big-firm and entrepreneurial capitalism. In the 1950s and 1960s-the heyday of Big Auto, Big Steel, and Ma Bell (the old AT&T telephone system)-the economy was much closer to a regime of big-firm capitalism. Then came the oil price shock of 1973-74, the years of stagflation, and two decades of disappointing productivity growth. A resurgence of entrepreneurial innovation-largely in information technology and communications-coupled with more intense foreign competition (which forced the older big firms to become vastly more efficient and to improve the quality of their products) helped to turn the U.S. economy around. But, as the saying goes, nothing lasts forever. It is conceivable that the U.S. economy might revert to a more strictly big-firm regime, and it is wishful thinking to believe that this pattern could not reemerge.
Indeed, one farsighted economist, the late Mancur Olson, argued that something like this is likely to be the destiny of all economies, especially those in democratic societies (Olson, 1982). As economies age, Olson asserted, special-interest groups grow in number and power; as this happens, it becomes more likely that they will come into conflict. Like physical objects subject to Newtonian laws, calls for action by some special-interest groups meet with counterreactions from others, all aimed at thwarting each other's ambitions. Too often, the result can be paralysis or "rent-seeking" of the worst sort, with regulations and policies that benefit particular groups without conferring benefits on, and even detracting from, the general welfare. The proliferation of trade associations and lobbyists-which was apparent when Olson wrote his book in the early 1980s and is even more so now-is powerful confirmation of his insight. A reversion to big-firm capitalism would threaten analogous effects, leading to interest-group paralysis via a slowing of the rate of radical innovation.
The last of these issues is, in general, no longer a matter of serious contention as it used to be in earlier periods of history in some noncapitalist societies, when kings or robber barons or warlords could simply expropriate the earnings from productive activity. Today, laws defining property rights and the penalties for fraud or theft provide reasonable security for the earnings of the productive entrepreneur, at least in economically advanced nations. Only one element in the set of institutions that ensures the requisite security of these asset accumulations-the patent system-requires more extensive discussion below, and it makes most sense to do so in the United States context. Although patents are necessary to provide incentives for productive entrepreneurship, the United States system in recent years appears to have become too protective, which can not only discourage new entrepreneurs from creating new markets but also insulate existing firms from the hot breath of competition.
One other worrisome public policy development that may affect the launch of future innovative companies is the unintended effect of the recent tightening of the bankruptcy law. Congress amended the U.S. bankruptcy laws in 2005 unaware that as many as 20 percent of all personal bankruptcies may, in fact, be business-related (Lawless and Warren, 2005).
Since many entrepreneurs begin by bankrolling their activities with credit card charges, the modifications of the bankruptcy law, which force those who declare bankruptcy to repay more of their debts (seemingly a good thing), may unintentionally discourage the formation of new enterprises.
Effective bankruptcy laws are important because the more difficult it is to exit from a business, the less likely it is that innovative entrepreneurs will take the risk of getting started in the first place. We must not forget that impediments to the exit of an unsuccessful firm can be, in effect, the equivalent of an increase in the cost of entry.
In principle, additional revenue raised through taxes on consumption rather than higher income tax rates should do the least economic harm because consumption taxes should encourage private saving, which in turn should reduce interest rates and thereby increase investment (which could conceivably lead to higher growth in the long run though slower growth in the short run due to the depressing effect of consumption taxes on aggregate demand). In contrast, raising income tax rates entails some risk of discouraging work effort, although an across-the-board increase in income tax rates for everyone (employees and business owners alike) would not necessarily discourage entrepreneurial activity in particular.
Consumption taxes have their drawbacks, however. For one thing, it is difficult to design a tax on consumption that is progressive in impact, namely, one under which taxpayers with low incomes pay lesser amounts relative to their incomes than those with higher incomes. In addition, and more pertinent to our subject at hand, certain kinds of consumption taxes-a value-added tax, for example-could actually hurt entrepreneurs by requiring them to pay taxes on inputs of production before they earn revenues on the sales of their products and services (and thus become eligible for rebates on those input taxes). Entrepreneurs may also suffer an adverse impact on their cash flow from a straight sales tax on certain inputs.
Accordingly, a more progressive way of raising revenue without discouraging work effort or entrepreneurship would be to "broaden the income tax base"-that is, to keep marginal income tax rates where they are but cut back on deductions and exemptions. This may be the ideal economic outcome, but a broaden-the-base approach may be the least politically palatable way of raising additional revenue since it would target specific, identifiable groups-those that now benefit from the deductions (the home-building industry, state and local government officials, and charitable organizations, to name just a few)-rather than impose burdens across the board.
Clearly, it is not to our comparative advantage to outline here the tax package that is least politically damaging and that least hurts growth and entrepreneurship. All we can do is highlight the trade-offs involved while underscoring that the alternative, doing nothing-that is, taking no action to raise revenue, given the certain continuation of Social Security, Medicare, and Medicaid in something resembling their current
The legal protection of ideas -in the form of patents, trademarks, copyrights, and trade secrets law-has long been part of the American legal fabric. Patents and copyrights, in particular, are mentioned in the Constitution. Patent rights have their origins in Italy, dating from the fifteenth century.
In principle, so-called intellectual property rights are supposed to encourage inventors and entrepreneurs to engage in activities that generate and propagate innovations. Reality is more complicated. When it comes to promoting entrepreneurship, the protection of intellectual property rights cuts both ways. On one hand, some legal protection surely is warranted to provide incentives for innovation, though the lion's share of the rewards for innovation accrues to society as a whole, not to the inventor or original entrepreneur.9 On the other hand, too much legal protection-in particular, mistaken protection of products or methods of production or service delivery that are not truly novel-can retard innovation and entrepreneurship. Inappropriate or excessively broad legal protection raises barriers to entry by entrepreneurs, discouraging some from developing or promoting new processes or products altogether. Finding the right balance, or threading the needle between these two outcomes, is difficult and yet vitally important.
Institutions created for the protection of intellectual property give rise to a second conflict of purpose. Patents and copyrights, as the means of protecting society's interests in intellectual products, have two primary objectives. One goal is to ensure that the creators of the property have an opportunity to obtain some reward from their efforts, both as a matter of equity and as an incentive for the expenditure of further creative effort. But the second and apparently rather incompatible goal is ease of access and dissemination to others, to ensure that the benefits of the innovation to society as a whole are as substantial and as widely available as is reasonably feasible.
The conflict between these two goals is widely recognized. The lower the hurdles to accessing intellectual property, the less its creators can hope to charge for its use. If just anyone can make use of new, legally protected ideas, with no impediment whatsoever, the price of access is apt to be driven toward zero. There is a way, however, to reconcile these two goals, at least in principle. Contrary to what one might suppose, patents generally have not served primarily to impede dissemination, but in a wide range of circumstances they have facilitated and encouraged it. To appreciate this, it is useful to take a quick historical detour before returning to how patent law can promote both invention and its disclosure.
The historical oddity is that patents began not so much to reward the creation of new, commercially useful knowledge but rather to promote its transfer from one country to another. As North and Thomas (1973) have shown, so-called letters patent date from the 1300s in England and, as is true today, granted a monopoly to the recipients, for a specified period, over production and sale of the item named in the letter.10 Initially these rights were granted not to the creator or inventor of the invention, but to a foreign producer who could steal the idea from his own country and export its use to England. In other words, patents were used to induce technology transfer, not necessarily technology creation. One of the first such letters was awarded to a Flemish weaver for this purpose. In the ensuing years, England encouraged the relocation of many other activities (aside from weaving) from the Continent across the English Channel: mining, metal working, silk manufacturing, ribbon making, and so on. Indeed, of the fifty-five grants of monopoly privilege made under Elizabeth I, twenty-one were issued to aliens or naturalized subjects for a variety of products.11
The modern practice of awarding patents primarily to the inventors within a country was adopted into English law in the Statute of Monopolies of 1623 in the wake of parliamentary anger over royal misuse of letters patent to reward royal favorites, and for other purposes having no connection with incentives for generating innovations. Since then, and particularly in recent decades, the voluntary dissemination of patented material has become a major economic activity. More to the point, patent laws around the world since have required holders of patent rights to disclose the technical details that justify the patent. Thus it is that patents, rather than impeding the process, have played a key role in making efficient and voluntary dissemination possible and attractive to the patent owner. Indeed, since at least the latter half of the nineteenth century, the sale or rental of access to intellectual property has become so attractive that it has resulted in the creation of markets dedicated to such transactions with the assistance of professionals who have specialized in the required activities.
Actually, one can state a principle that describes the conditions under which a particle interferes with itself. Admittedly, to state a principle is not an entirely satisfactory solution, and definitely not a valid explanation, but at least it allows a synthetic presentation of the experiments, and thus constitutes the least 'committed' interpretation, the safest step. This principle, called the indistinguishability principle,12 can be expressed like this:
Interference appears when a particle can take several paths in order to arrive at the same detector, and the paths are indistinguishable after detection.
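The principle can be rendered numerically with a minimal sketch. The amplitudes below are those of a balanced two-path interferometer (amplitude 1/2 per path at a given output); the phase is illustrative, not from any particular apparatus:

```python
import cmath

# Indistinguishable paths: complex amplitudes add, giving interference.
# Distinguishable paths: probabilities add, and the fringes vanish.

def detection_probability(phase, distinguishable):
    a1 = 0.5                          # amplitude via path 1
    a2 = 0.5 * cmath.exp(1j * phase)  # amplitude via path 2
    if distinguishable:
        # Which-path information exists: add the probabilities.
        return abs(a1) ** 2 + abs(a2) ** 2
    # Paths indistinguishable after detection: add the amplitudes first.
    return abs(a1 + a2) ** 2

print(detection_probability(0.0, False))       # 1.0  (constructive)
print(detection_probability(cmath.pi, False))  # ~0.0 (destructive)
print(detection_probability(0.0, True))        # 0.5  (no interference)
```

Varying the phase sweeps the indistinguishable case between 0 and 1 (the fringes), while the distinguishable case stays pinned at 0.5, which is the behaviour the Vienna and Constance experiments described below exhibit.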
3.3.3 The experiment performed in Vienna
In the experimental research into the transition between the classical world and the quantum world, Zeilinger and Arndt have taken an important step. They have shown that some large molecules produce interference effects (Fig. 3.4). The molecules in question are collections of sixty carbon atoms and their symbol is C60. In these molecules, discovered in 1985, the atoms are arranged according to a particular symmetry, that of a football - indeed a traditional football, formed with hexagons and pentagons stitched together, having exactly sixty vertices, that is sixty points where three lines meet. Sixty carbon atoms arrange themselves according to the same structure in order to form C60 molecules.
Such beautiful molecules deserved a name other than that of their chemical composition. At the moment of baptism, the scientists were reminded of the work of Richard Buckminster Fuller, an American architect who had designed and built numerous glass domes whose supporting structure has the symmetry that we are talking about.28 It is therefore in memory of Buckminster Fuller that the C60 molecules are not called footballenes, but fullerenes, and sometimes buckyballs.
As far as their size is concerned, fullerenes are clearly closer to atoms than cars or footballs, and in this sense we are not surprised to see them display quantum behaviour. Nevertheless, the criterion for the observation of quantum behaviour is not the small size of the physical object, but the possibility of creating a situation of indistinguishability. Viewed from this angle, we better understand that the interference of large molecules constitutes an important result. In effect, the bigger the molecule, the more chance there is that some or other of its constituent parts will interact with the environment, and if on one of the possible paths a non-controlled interaction takes place, the interference is quickly lost. Now, a molecule with sixty carbon atoms means a system of sixty nuclei - which for carbon means 360 protons and as many neutrons - and 360 electrons. In total, 1080 'elementary' quantum particles (I am disregarding the fact that protons and neutrons are in turn composed of three quarks each, because that final composition has some characteristics that merit a separate discussion).
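The count of 1080 particles follows directly from the composition of carbon, and it is worth doing explicitly once:

```python
# Carbon has atomic number 6; the most common isotope, carbon-12,
# carries 6 protons and 6 neutrons, and a neutral atom has 6 electrons.

atoms = 60             # one C60 molecule
protons = 6 * atoms    # 360
neutrons = 6 * atoms   # 360
electrons = 6 * atoms  # 360
total = protons + neutrons + electrons
print(total)  # 1080 'elementary' quantum particles
```

(As in the text, this ignores the quark substructure of the protons and neutrons.)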
The experimental observation of fullerene interference29 should not be considered as just a further verification of the quantum behaviour of matter, but rather as a real discovery30 - we are far from demonstrating the interference of cars, but these are already large objects: it was not evident a priori that such a large collection of quantum particles would itself exhibit collective quantum behaviour.31 The way is open to investigating much bigger molecules, such as insulin and other 'biological' molecules. The rush for size has just begun.
4.2.1 The interferometry of atoms
The apparatus produced at Constance is a Mach-Zehnder interferometer, like that of Rauch. The particles used are atoms, rubidium atoms, to be precise. Atoms are quantum objects consisting of a nucleus, heavy and carrying a positive electric charge, and electrons, much lighter, negatively charged particles. We know this thanks to the well-known symbol in which the atom is represented like a small solar system with a few planet-electrons orbiting around a sun-nucleus. Again, as with the arrows we used to represent spin: the electrons and the nucleus themselves being quantum particles, this symbol is only a pale imitation of what an atom really is; but it is a useful picture to keep in mind.
From this structure of the atom some consequences ensue that are significant for the goal that interests us. On the one hand, the trajectory of the atom is essentially determined by the motion of the nucleus (if we can continue with the planetary analogy, we see clearly that the orbit of the enormous Jupiter around the Sun is only slightly affected by the presence of the satellites that gravitate around the planet). Consequently, in the experiment that we want to design, the beam splitters can be devised to act on the nucleus; and the electrons will follow the motion. On the other hand, it is relatively easy to modify the physical state of an electron, particularly that of 'external' electrons (those that are furthest from the nucleus). Therein lies the solution: it is by modifying the state of an electron on the path, or to be precise, its energy, that we will be able to introduce distinguishability without influencing the motion of the nucleus.
Figure 4.2 illustrates the results of the experiment. The part on the left of the figure represents the initial apparatus, with the number of particles detected behind each output. We observe an interference fringe characterized by the fact that the peaks of intensity are complementary on either side, that is, a peak to the right corresponds to a trough to the left and vice versa. The part on the right of the figure represents the modified apparatus - on one of the two paths, the energy of an external electron has been modified. Now, by measuring the energy of the electron we are able to learn which path it took. It is not necessary to add an instrument that effectively measures this energy to the apparatus - the important thing is that we have introduced distinguishability. The information encoded in the atom is sufficient for us to be able in principle to discern the two paths. As we see in the figure, the interference disappears. Thus, the mechanism proposed by Heisenberg does not explain the disappearance of interference completely. We must be content (for the moment at least) with the indistinguishability principle.
The word complementarity was forged by Niels Bohr. The concept that it conveys is closely linked with the indistinguishability principle that has been discussed in this book. In order to clarify the idea, let's go back to the apparatus of Fig. 1.3. Our description was this: if we do not know by which path the particles travel in the interferometer (two indistinguishable paths), all of the particles take a certain output (interference); if we detect the particles in the interferometer (distinguishability), the output will be random. Bohr would say instead that the path and the output are two pieces of complementary information - we cannot arrange it so that all of the particles take the same path and the same output. At the risk of missing something very profound, we will remember that Bohr's complementarity principle says the same thing as the indistinguishability principle, from a different angle.
The future will tell us if one of these concepts will disappear to the benefit of the other, or if they are destined to survive together, or if both will be erased by new, more precise notions.
The word uncertainty, as far as it goes, is unfortunate because in physics we already use it to describe the imprecision of measurements. If a length is measured with a ruler graduated in millimetres, the value that one reads is affected by an uncertainty of (give or take) a millimetre. In other words, with that ruler, one cannot discriminate two lengths that differ by less than a millimetre. The Heisenberg mechanism is an attempt to restore a principle of uncertainty in measurement to quantum physics, an attempt, in other words, to base the wealth of phenomena that we encounter in quantum physics on our technical limitations (essential or accidental). From experiments like that of Constance we learn that the principle of quantum physics is not a principle of uncertainty in that sense, rather a principle of indetermination - as precise as our measurements are, we will not be able to determine two pieces of information that are complementary in the Bohr sense. Quite the opposite, we have always worked under the assumption of perfect detectors - imperfect detectors could introduce so many counting errors that the quantum interference would be masked. In summary, the concept of uncertainty is ambiguous in physics; and, if we want to retain the traditional sense of 'limitation in the precision of measurement', then this concept is not adequate to describe quantum behaviour.
6.3.3 Sending a message?
I have already subjected the students to a considerable tour de force: we have reviewed the indistinguishability principle, introduced the notion of correlation and revealed interference in correlations.
Then, we saw that this prediction of quantum physics seems to throw back into question a well-known and well-established fact in physics, the fact that the speed of light is the limiting speed for communication. I cannot let it rest there, even if my audience is tired. I cannot depart leaving those who are listening to me with the impression that quantum physics could one day allow communication faster than light.
Whatever the explanation might be, the quantum particles introduced correlations at a distance. However, this phenomenon cannot be employed for communication; it cannot be used to send a message, whether faster or slower than light. The reason for this is: whether we are in a situation of perfect correlation, perfect anti-correlation, or whatever situation in between, concerning the correlation of two particles, nothing changes in the results that we observe for each particle individually. Specifically, for the Franson interferometer that we have considered, we have said that on each side, half the particles are detected at one detector, the other half at the other. Alice, who observes only the particles that have gone to the left, sees random detections; on the right, Bob may modify his interferometer at will and nothing will change for Alice. It is only when Alice and Bob speak to each other (by telephone, for example) and compare their results that they notice the existence of correlations between the particles. An ordinary medium of communication (telephone, internet, meeting at a bistro) is therefore absolutely necessary in order to become aware of the quantum correlations - these correlations by themselves do not allow communication.
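The impossibility of signalling can be checked in a toy calculation. Assuming, purely for illustration, Franson-type correlations of the form E(alpha, beta) = cos(alpha + beta) (the exact phase convention is a detail not taken from the text), the joint outcome probabilities depend on both phases, but Alice's marginal statistics never depend on Bob's setting:

```python
import math

def joint_prob(a, b, alpha, beta):
    # Toy Franson-type correlations: E(alpha, beta) = cos(alpha + beta),
    # so P(a, b) = (1 + a*b*cos(alpha + beta)) / 4 for outcomes a, b = +1/-1.
    return (1 + a * b * math.cos(alpha + beta)) / 4

def alice_marginal(a, alpha, beta):
    # Probability that Alice sees outcome a, whatever Bob's outcome is.
    return sum(joint_prob(a, b, alpha, beta) for b in (+1, -1))

alpha = 0.3
for beta in (0.0, 1.1, 2.7):  # Bob modifies his interferometer at will...
    print(round(alice_marginal(+1, alpha, beta), 3))  # ...and Alice always sees 0.5
```

The correlation term a*b*cos(alpha + beta) cancels in the marginal; it only becomes visible when the two lists of results are compared over an ordinary channel.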
Entanglement will free itself from the maze of interpretations to get closer to the laboratory mostly thanks to the work of John Bell. But before talking about Bell, this brief history of the EPR argument brings us to our first meeting with David Bohm. Bohm's name is known to most physicists through an ingenious one-particle interference phenomenon that he predicted with his student Yakir Aharonov, and that naturally bears the name Aharonov-Bohm. Most physicists, on the other hand, have not heard of the interpretation of quantum mechanics proposed by Bohm, because it
does not conform to the orthodox doctrine, and does not therefore have the right to be mentioned in institutional courses. We will talk about it in Chapter 9, because Bohm's 'mechanics of pilot waves' is the most elaborate alternative interpretation, and it is highly instructive for clearing up its problematic aspects. Here, we are concerned with Bohm's contribution to the EPR argument. This contribution is rather technical in nature - Bohm rewrote the EPR argument in terms of two particle spins, whereas Einstein, Podolsky and Rosen had used dynamic variables of position and momentum. It is an important step, because in the mathematical formalism of quantum physics the spin is the easiest system to deal with. This simplification opens the way for the work of Bell.
7.2.4 John Bell, the person
Serious and rather reserved, John Bell worked actively in a mainstream research domain (he was a particle physics theorist at CERN in Geneva), but he obtained his principal result by working on the 'philosophical' subject of quantum correlations. As we said above, the first step that he took consisted of removing the restraint of von Neumann's theorem, then of constructing an explicit local-variable model for single quantum particles. As a next step, he set out to find the local-variable model for two particles... and he came up with his own impossibility theorem. Is this theorem bound to fail like von Neumann's? This is highly improbable (in my view, utterly impossible): in his time, von Neumann's theorem was accepted almost without criticism, while Bell's theorem has already undergone forty years of intensive study and has resisted every attack - moreover, as we saw in this very text, the formulation of the theorem is not difficult.
As for philosophical preferences, John Bell would have liked to find a local-variable model reproducing the whole of quantum physics: a priori, he favoured 'local realism'. But he honestly accepted the conclusion of his theorem and of the experiments. His premature death completed his ascension to the status of cult physicist, apparent in the recollections of those who met him. We know now that for two particles the indistinguishability principle, that is, quantum theory, predicts correlations whose characteristics are the following:
1. quantum correlations do not disappear by increasing the distance between the particles, and therefore their origin cannot be the reception of a common signal;
2. quantum correlations violate Bell's inequality, and therefore their origin cannot be a common decision taken at the source either.
In other words, if quantum theory is correct, neither of the two usual mechanisms that explain correlations can be invoked! But is quantum theory correct? Will correlations be maintained over a great distance? Are they really going to violate Bell's inequality?
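The second characteristic can be made concrete with the CHSH form of Bell's inequality, which bounds a certain combination S of correlations by 2 in any local-variable model. A minimal sketch follows; the singlet-state prediction E(a, b) = -cos(a - b) and the choice of angles are standard textbook values, not taken from this chapter:

```python
import math

def E(a, b):
    # Quantum prediction for the correlation of two spins in the singlet state.
    return -math.cos(a - b)

# Standard CHSH settings: two measurement angles for Alice, two for Bob.
a1, a2 = 0.0, math.pi / 2
b1, b2 = math.pi / 4, 3 * math.pi / 4

S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
print(abs(S))  # 2*sqrt(2), about 2.83: above the local-variable bound of 2
```

Any model in which each particle carries a pre-agreed answer gives |S| <= 2; the quantum value 2*sqrt(2) is what the Innsbruck and Geneva experiments described below confirm.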
8.2.1 The Aspect experiment carried to perfection
So we have jumped sixteen years and about a thousand kilometres to find ourselves in Innsbruck in 1998. In the city of the Golden Roof, we meet up again with the group of Anton Zeilinger, the whole of which is about to move to Vienna - where, the reader will recall, they will notably demonstrate interference for the large C60 molecules.
The experiment65 performed by Zeilinger and his collaborators Gregor Weihs, Thomas Jennewein, Christoph Simon and Harald Weinfurter is the definitive version of the Aspect experiment of 1982. The photons emitted by the source - a source, moreover, of a different type and more efficient than that used in Orsay - travel along optical fibres installed in the campus of the University of Innsbruck, to the analysers, which are found at a distance of 400 metres apart (in Aspect's experiment, the whole apparatus was confined to a laboratory, so the distance between the analysers was a few metres). At such distances and with judicious electronics, it is possible to implement rapid and random changes that ensure that each particle cannot be informed about the configuration that its companion will encounter. The correlations persist in violating Bell's inequality - the locality loophole is permanently closed!
8.2.2 Correlations at 10 km
The Austrians' article appears in the December 7th 1998 edition of the journal Physical Review Letters. A month and a half prior, on October 26th, another quantum-correlation experiment had appeared in the same journal66. It was by Wolfgang Tittel, Jürgen Brendel, Hugo Zbinden and Nicolas Gisin. The Geneva group is doing it again: those who in 1996 had demonstrated the feasibility of quantum cryptography over long distances (20 km) demonstrate two years later that quantum correlations are equally stable and violate Bell's inequality over distances of kilometres. While Zeilinger's group ran their own optical fibres through the university campus at Innsbruck, the Geneva group adopted another strategy - asking the Swiss telecommunications operator to be allowed to use, for several hours, the fibres already installed between the telecom stations. On the appointed day, the physicists distribute themselves between the stations at Cornavin (in the heart of Geneva), Bernex and Bellevue (two outer suburban areas). At Cornavin they put the source of the pairs of photons, and at Bernex and Bellevue the two analysers - it is a Franson interferometer. For the non-locality, what is important is the distance between the two analysis stations, Bellevue and Bernex - 10.9 km as the crow flies. The correlations violate Bell's inequality just as in the Innsbruck experiment, without any possible ambiguity.
The Geneva physicists have not added rapid switching to their experiment. Unlike that of Innsbruck, their experiment is not designed to close the locality loophole, but to demonstrate the violation of Bell's inequality over large distances. This experiment is probably the one that has caused the most excitement. When, in the year 2000, the American Physical Society wanted to record, in ten posters, the stages marking twentieth-century physics, quantum correlations gained a place in these posters thanks to the Geneva experiment.
8.3 A curious argument
We have before us some experiments, reproduced by several independent research groups, which confirm the theoretical prediction: all of the criteria appear to be assembled so that we are able to conclude that quantum interference of distant particles is confirmed experimentally. It is in fact the conclusion drawn by the majority of physicists... and what objection could we still raise?
One objection has nevertheless been put forward, based on the imperfection of the detectors. Current photon counters have a fairly limited efficiency - they detect at best (let's be optimistic to simplify things) half of the photons. In order to understand the argument, which we call the detection loophole, I will begin with an example inspired by everyday life.
Let's suppose that police radars only measure the speed of half the cars. This could be due simply to the slowness of the electronics within the radar, which, after having measured the speed of one car, has a certain amount of dead time before being able to measure another. In this case, the statistics for offences are significant, all the same. But there could be another reason for the fact that the radar does not see half the vehicles - the police could have badly installed their radar, such that only vehicles that are tall enough send back a signal, so sports cars, always lower than average, are not seen. In this case, all of the sports cars can exceed the speed limit without being seen - the statistics for offences will be distorted, because we only measure the speed of the slower vehicles.
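The two failure modes of the radar behave very differently, and a toy simulation makes the difference visible. All the numbers below are invented for illustration: half the cars are sports cars, speeds are drawn around 140 and 100 km/h, and the limit is 120 km/h:

```python
import random

random.seed(1)

# Toy traffic: half sports cars (fast and low), half ordinary cars (slower, taller).
cars = []
for _ in range(10000):
    sports = random.random() < 0.5
    speed = random.gauss(140 if sports else 100, 10)
    cars.append((sports, speed))

def offence_rate(sample, limit=120):
    # Fraction of the sampled cars exceeding the speed limit.
    return sum(speed > limit for _, speed in sample) / len(sample)

random_half = random.sample(cars, len(cars) // 2)  # dead time: misses cars at random
tall_only = [car for car in cars if not car[0]]    # badly aimed: misses every sports car

print(round(offence_rate(cars), 2))         # true offence rate, about 0.5
print(round(offence_rate(random_half), 2))  # still about 0.5: a fair sample
print(round(offence_rate(tall_only), 2))    # near 0.02: badly distorted statistics
```

Random losses leave the statistics representative; losses correlated with the quantity being measured do not. The detection loophole, discussed next, is the second scenario applied to photons.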
The detection loophole is based on the same idea. Current detectors detect less than half the photons that are sent. This is a fact, but as in the example of the cars, it is legitimate to ask ourselves whether or not the photons detected constitute a representative sample of all of the photons. It could be that this is not the case, that only certain photons, suitably 'programmed', activate our detectors. These photons, the detection loophole argument continues, could furthermore be programmed to violate Bell's inequality, but if we detected all of the photons, we might see that Bell's inequality is not violated.
In order to grasp the weirdness of this loophole, it is necessary to remember the sessions in the school laboratory or even at university. At one time or another, every one of us obtains an experimental result that disagrees with the theoretical prediction. We have looked for the error, and if we failed to find it, we have written in our report a loose statement like, 'the instruments are too imprecise'. We have cited the uncertainty of the measurements to explain the disagreement between the experiment and theoretical calculation. The detection loophole is perhaps the first example in the history of physics where the imprecision of the measurements is cited to explain the perfect agreement between theory and experiment!
Just as for John Bell, it is difficult for me to believe that quantum theory gives precise predictions only because of the poor efficiency of the detectors, and that it is destined for a miserable failure the day our detectors are perfect69. It is equally necessary to know that techniques exist (ion traps) in which the detectors are practically perfect, and the quantum correlations do not disappear. These experiments close the detection loophole, but unfortunately the particles (ions, that is, atoms that have lost or gained one or more electrons) are very close to each other, and therefore the locality loophole stays open70. At the time of writing this book, what is lacking in order to convince the last sceptics is an experiment in which both loopholes are closed. A few proposals exist, and it is possible that, by the time the reader reads these lines, this experiment will have been performed. The two loopholes of locality and detection will have disappeared from the scene then, last witnesses to the great discussions about two-particle correlations begun by the sceptic Albert Einstein and the orthodox Niels Bohr in 1935.
9.3 Other foundations
Second on my list was the approach of those who attempt to derive the indistinguishability criterion from other principles that are judged more fundamental, but that are not of a physical nature. One example of this approach comes from the school called 'quantum logic'. The reader glimpsed the subject of quantum logic in Chapter 2, when we saw that the properties of a quantum system, unlike the properties of the sets of cars, are not connected to each other according to the rules of set theory. Let's take for example the work of the school in Geneva, a quantum logic approach initiated by Josef Jauch and continued by Constantin Piron.
Piron showed that one can derive the indistinguishability criterion from five axioms. The first three axioms are a formalization of the two following postulates. (I) If a physical system acquires a new property, it inevitably loses another that it possessed beforehand. An ordinary example: if I acquire the property 'being seated', I lose the property 'being standing' that I possessed beforehand. For a quantum example, we have already seen that the property 'exhibiting interference' can be lost to acquire the property 'being in a given path'. (II & III) Every property is the opposite of another. This simply means that if 'being seated' is a property, 'not being seated' is also a property. Each of us accepts such postulates much more easily than the indistinguishability criterion - it would be nice if we could derive the criterion only from postulates as intuitive as these. Unfortunately, things take a turn for the worse with axioms IV and V, which are strictly mathematical requirements81 for which, despite significant efforts, neither Piron nor any member of his school was able to find a simple interpretation. At this stage, then, we are faced with a choice: either we accept all five of Piron's axioms, in which case the indistinguishability principle is no longer a first
principle but a consequence; or we admire the truly remarkable effort of the Geneva school, but we retain the indistinguishability criterion as a first principle - as I did in this book.
The school of quantum logic is only one example of a much broader class of interpretations that rapidly sink into deep epistemological discourse. All of these interpretations are not incompatible with the orthodox approach, and concede that if we restrict ourselves to the framework of physics we cannot say much. The program of looking resolutely outside physics to solve the conundrum of quantum phenomena is, in principle, very sound, but in my opinion has never been carried through satisfactorily - the surprising or 'incomprehensible' side of the indistinguishability principle does not disappear, it is simply pushed a degree further away, whether into the epistemological hypotheses or into the axioms.
9.4 The mechanistic interpretation of pilot waves
Among the unorthodox interpretations, I will focus on the most complete and successful: the interpretation of the 'pilot wave' initiated by Louis de Broglie and re-elaborated by David Bohm.
We saw in the first chapters of this book that quantum particles sometimes behave like corpuscles (each particle only stimulating one detector), sometimes like waves (interference). De Broglie's ingenious idea consists of exploring the possibility that the corpuscle and the wave are both a physical reality. More precisely, quantum particles could be corpuscles, very localized, which move around guided by a wave. It is the wave that explores all the possible paths, and it is the modification of the properties of the wave that influences the 'choice' made by the corpuscle at each beam splitter. It is just like a cork floating in a river, downstream of an island: certainly, the cork passed on only one side of the island; nevertheless, its trajectory after the island is also influenced by the water that has taken the other path. This example illustrates the explanation of Young's double-slit experiment by a pilot wave.
Long, repetitive and heavy-going. A pity, because the concepts it presents are fascinating.
'I was fascinated by it,' he recalls. There he read how James Watson and Francis Crick had discovered the double-helix structure of DNA in 1953. He learned how the genetic code had been deciphered in the fifties and sixties, how scientists had little by little unravelled the complex structures of proteins and enzymes. Having always been clumsy in the laboratory - 'I disgraced myself in every laboratory I ever set foot in' - he followed with great admiration the meticulous experiments that led molecular biology to the discovery of the bases of life: the questions that had made this or that scientific demonstration necessary, the months spent designing each experiment and building the necessary apparatus, and then the sense of triumph or discouragement when the result was finally at hand. 'Judson had the ability to breathe life into the drama of science.'
But then, Arthur asked himself, what are we to make of the standard QWERTY keyboard, used on almost every typewriter and computer in the Western world? (The name QWERTY comes from the first six letters of the top row.) Is this the most functional way to arrange the characters on a typewriter keyboard? Certainly not. An engineer named Christopher Scholes designed the QWERTY layout in 1873 precisely to slow down fast typists: on the machines of the time, if the typist went too fast, the hammers of the individual keys tended to jam in the type guide. The Remington Sewing Machine Company began mass-producing machines with the QWERTY keyboard, so many typists became proficient with it. As a result, other companies started producing it too, and more typists grew familiar with that particular arrangement of the keys, which thus passed into common use. To him who hath shall be given, thought Arthur: increasing returns.
Or consider the competition between the Beta and VHS systems in video recording in the mid-seventies. By 1979 it was already clear that VHS would come to dominate the market
It is clear that arguments like these failed to convince those who had no intention of being convinced. At IIASA, in February 1982, while Arthur was answering questions from the audience after a lecture on increasing returns, a visiting American economist stood up and asked with some irritation: 'Name me one example of a technology we are locked into that is not superior to its rivals!'
Arthur glanced at the clock on the wall because his time was running out, and almost without thinking replied: 'Oh! The clock.'
The clock? Well, he explained, today's clocks have hands that move 'clockwise'. But his theory implied the existence of fossil technologies, buried deep in history, just as reliable as the ones that prevailed later. Only by chance had the former disappeared. 'For all I know, at some period in history there could have been clocks whose hands moved the other way. And perhaps just as widespread as the ones we know today.'
His questioner was not impressed. At that point another eminent American economist intervened and retorted: 'That doesn't seem to me a real example of lock-in. I, for one, have a digital watch.'
To Arthur, that meant missing the point.
In any case, time was up for that day. After all, it was only a conjecture. About three weeks later he received a postcard from his IIASA colleague James Vaupel, on holiday in Florence. The postcard showed the clock of the Cathedral of Santa Maria del Fiore in Florence, designed by Paolo Uccello in 1443, with hands that move 'counterclockwise'. (It also marked all 24 hours.) On the back, Vaupel had written simply: 'Congratulations!'
Arthur was so struck by that clock that he had a slide made of it to project in his future lectures on lock-in phenomena. The example always provoked some reaction. Once Arthur showed the slide during a lecture at Stanford, and at one point an economics graduate stood up, flipped the slide over and exclaimed triumphantly: 'See? It's a trick! The clock actually runs clockwise!' Fortunately, though, Arthur had by then gathered a fair amount of documentation, including a slide of a 'counterclockwise' clock bearing a Latin inscription.
Senior fellow Nick Metropolis liked the idea of the new Institute because of the emphasis Cowan placed on computation. And for several good reasons: at Los Alamos, Metropolis was practically 'Mister Computer'. It was he who had overseen the construction of the Laboratory's first computer in the late 1940s, based on a pioneering design by the legendary Hungarian mathematician John von Neumann of the Institute for Advanced Study in Princeton, a consultant and frequent guest at Los Alamos. (The machine had been nicknamed the Mathematical Analyzer, Numerator, Integrator, And Computer, or MANIAC.) Together with the Polish mathematician Stanislaus Ulam, Metropolis had been a pioneer of the art of computer simulation, and it was thanks to him that Los Alamos now had some of the largest and fastest supercomputers on the planet.
Yet Metropolis felt the Laboratory was not innovative enough even in this field. Together with Giancarlo Rota, an MIT mathematician who held a fellowship at Los Alamos and often stayed for fairly long periods, Metropolis pointed out to the assembled fellows that computer science was in a state of ferment comparable to that of biology and the nonlinear sciences. Revolutionary changes were taking place in hardware design alone, he said. The computers in use, which executed their computational steps in serial order, had reached the maximum possible speed, and designers were beginning to sketch new machines capable of performing hundreds, or thousands, or even millions of computational steps in parallel. This research mattered enormously: anyone who wanted to tackle seriously the kind of complex problems Cowan was talking about would in all likelihood need such a tool.
Kauffman was also irritated by the tacit and widespread assumption that the details were everything. The biomolecular details mattered, of course. And yet if the genome had to be perfectly organized and tuned in order to function, it could not have arisen from an uncertain, random evolutionary process. It would be like shuffling an honest deck of cards and then dealing yourself a bridge hand of thirteen spades: possible, but not very likely. 'It just didn't add up for me,' he comments. 'You can't ask that much of God or of natural selection. If we had to explain biological order through a succession of detailed and highly improbable cases of ad hoc selection, if everything we see had to be the fruit of a hard struggle, we would not be here. The world would not have had enough time to produce at random everything that exists today.'
There had to be something more, he felt. 'Somehow, I wanted to be able to show that order is present from the very beginning, without having to be built, or produced by evolution. I hoped to show that order in a gene regulatory system is natural, almost inevitable. It had to be free and spontaneous.' If the reasoning was right, then life's instinctive property of self-organization had to be the other face of natural selection. The specific genetic details of any given organism would be the product of random mutations and natural selection, in perfect accord with Darwin's theory. The organization of life itself, its order, would instead be something deeper and more essential. It would spring directly from the structure of the network, not from the details. Order, then, would be one of the secrets of the Old One.
Burks's course in Communication Science was the ideal environment for exploring the questions that interested him: What is emergence? What is thought? How does it work? What are its laws? What does it really mean for a system to 'adapt'? Holland filled whole reams of paper with ideas on these problems, then filed them systematically in manila folders labelled Glasperlenspiel 1, Glasperlenspiel 2, and so on.
Glas-what? 'Das Glasperlenspiel,' he smiles. It was the last novel by Hermann Hesse, published in 1943 while the author was in exile in Switzerland. Holland discovered it one day in a pile of books a roommate had borrowed from the library. The German title literally means 'the glass bead game', although in English translations the book is usually titled Master of the Game, or by its Latin equivalent Magister Ludi [Master of the Game]. Set in a society of the distant future, the novel tells of a game originally played by musicians; the idea was to set up a theme on a kind of abacus with glass beads, and then try to weave every sort of counterpoint and variation on the theme by moving the beads back and forth. Over time, however, the game lost its simple origins and became a highly sophisticated instrument, controlled by a caste of powerful priest-intellectuals. 'The beauty of it was that you could choose any combination of themes,' says Holland, 'a little astrology, a little Chinese history, a little mathematics, and then try to develop them like a musical theme.'
this seaweed will reach optimal fitness fairly quickly, and the species will then find itself in an evolutionary equilibrium.
But now let's see what happens with the 1,000-gene alga if we suppose that the genes are not independent. In that case, to be certain of reaching the maximum level of fitness, natural selection would have to examine every possible combination of genes, since each combination potentially has a different fitness. And when you work out the total number of combinations, it is not 2 times 1,000 but 2 multiplied by itself 1,000 times, that is, 2^1000: roughly 10^300, an astronomical figure next to which even the number of possible moves in chess is infinitesimal. 'Evolution cannot even begin to try that many combinations,' Holland maintains. 'And however much progress we make with computers, neither can we.' Even if every elementary particle in the observable universe were a supercomputer crunching numbers since the Big Bang, we would still be nowhere near the end of that computation. And we are talking about a simple seaweed. Humans and other mammals have roughly 100 times as many genes, and most of those genes come in far more than two varieties.
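The arithmetic of the claim is easy to verify; exact integer arithmetic makes the size of the search space explicit (the text's 'roughly 10^300' is the book's rounding, since the exact count has 302 digits):

```python
# 1,000 genes with two variants each: not 2 x 1,000 combinations but 2**1000.
combinations = 2 ** 1000
print(len(str(combinations)))  # 302: a number with 302 digits, of order 10**301
```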
So, once again, Holland points out, we have a system searching its way through an immense space of possibilities, with no realistic hope of ever finding the single 'best' place to occupy.
All evolution can do is strive for improvement, not perfection. This was the problem he had set himself back in 1962: how? Understanding the evolution of organisms with many genes did not simply mean replacing Fisher's one-variable equations with many-variable equations. Holland wanted to know how evolution could explore this immense space of possibilities and find useful combinations of genes, without having to search every square inch of the territory.
Such a dizzying explosion of possibilities was, in any case, already well known to researchers in artificial intelligence. At Carnegie Tech (now Carnegie-Mellon University) in Pittsburgh, for example, Allen Newell and Herbert Simon had been conducting, since the mid-1950s, a fundamental study of how humans solve problems. By asking a sample of people to verbalize what they were thinking while working through various puzzles and games, including chess, Newell and Simon had concluded that problem-solving always involves a step-by-step mental search through the countless possibilities of a vast 'problem space', in which each step is guided by a heuristic rule of thumb: 'In this situation, this is the step worth taking.' By turning their theory into a program known as the General Problem Solver, and then applying it to the same puzzles and games, Newell and Simon showed that this way of navigating the problem space could reproduce the human style of reasoning reasonably well. Indeed, their concept of heuristic search was already well on its way to becoming the conventional wisdom of artificial intelligence.
Of the fourteen programs submitted, eight were 'nice' and would never be the first to defect. Every one of them easily outscored the six not-nice programs. So, to settle the question, Axelrod organized a second tournament, inviting contestants to try to beat the Tit for Tat program. Sixty-two more programs were submitted, and once again Tit for Tat came out on top. The conclusion was almost inevitable. The nice guys - or rather the nice, forgiving, retaliatory and clear ones - finish first.
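The winning strategy is simple enough to sketch in a few lines. The payoff values below (3 for mutual cooperation, 1 for mutual defection, 5 and 0 for the lone defector and the exploited cooperator) are the standard ones used in Axelrod's tournaments:

```python
def play(strategy_a, strategy_b, rounds=10):
    # Iterated prisoner's dilemma with the standard payoff matrix.
    payoff = {('C', 'C'): (3, 3), ('D', 'D'): (1, 1),
              ('D', 'C'): (5, 0), ('C', 'D'): (0, 5)}
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(hist_b)   # each strategy sees the opponent's history
        move_b = strategy_b(hist_a)
        pa, pb = payoff[(move_a, move_b)]
        score_a += pa
        score_b += pb
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

def tit_for_tat(opponent_history):
    # Cooperate on the first move, then copy the opponent's last move.
    return 'C' if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return 'D'

print(play(tit_for_tat, tit_for_tat))    # (30, 30): mutual cooperation throughout
print(play(tit_for_tat, always_defect))  # (9, 14): exploited only in round one
```

Against itself Tit for Tat cooperates forever; against a relentless defector it is exploited only once and then retaliates, which is exactly the mix of niceness and retaliation described above.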
Holland and the other members of the BACH group were fascinated. 'The prisoner's dilemma had always tormented me,' Holland admits. 'It was one of those things I really didn't like. So seeing its solution was a relief, a real joy.'
No one failed to notice that the success of Tit for Tat had profound consequences both for biological evolution and for human relations. In his 1984 book The Evolution of Cooperation, Axelrod pointed out that the principles behind Tit for Tat could lead to cooperation in a wide variety of social contexts, including some of the least promising circumstances imaginable.
His favourite example was the 'live and let live' system adopted spontaneously during the First World War, when the units entrenched along the front line refrained from shooting to kill as long as the enemy did the same. Across no-man's-land the enemies had no way of communicating, and they certainly harboured no friendly feelings for one another. But the system worked because on both sides the same unit stayed bogged down in the trenches for months, and so there was a chance for mutual adaptation.
In a chapter of the book written with his fellow BACH member, the biologist William Hamilton (a condensed version of an article published in the journal Science in 1981, winner of a prize), Axelrod also pointed out how Tit for Tat interactions lead to cooperation in the natural world even without the benefit of intelligence. Examples include the lichens, in which a fungus extracts nutrients from the underlying rock and provides a substrate for algae which, in turn, give the fungus the benefits of photosynthesis; the ant-acacias, which house and feed a species of ant that in turn protects the tree; and the wild fig, whose flowers provide food for the fig wasp, a tiny hymenopteran barely a millimetre long, which in turn pollinates the flowers and spreads the seeds.
More generally, Axelrod wrote, the process of coevolution should allow the cooperative Tit for Tat style to thrive even in a world full of treacherous and ambiguous individuals.
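The strategy discussed above is simple enough to state in a few lines. As a minimal illustrative sketch (not taken from the book), here is Tit for Tat playing an iterated Prisoner's Dilemma with the standard payoff values Axelrod used in his tournaments; the strategy names and the `play` helper are my own for illustration:

```python
# Iterated Prisoner's Dilemma with Axelrod's standard payoffs.
# "C" = cooperate, "D" = defect. Each entry maps a pair of moves
# to the pair of payoffs (player A, player B).
PAYOFF = {
    ("C", "C"): (3, 3),  # mutual cooperation
    ("C", "D"): (0, 5),  # sucker's payoff vs temptation
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),  # mutual defection
}

def tit_for_tat(my_history, their_history):
    """Cooperate on the first move, then copy the opponent's last move."""
    return "C" if not their_history else their_history[-1]

def always_defect(my_history, their_history):
    """A maximally uncooperative opponent, for comparison."""
    return "D"

def play(strategy_a, strategy_b, rounds=10):
    """Play the iterated game and return the two cumulative scores."""
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a, hist_b)
        move_b = strategy_b(hist_b, hist_a)
        pay_a, pay_b = PAYOFF[(move_a, move_b)]
        score_a += pay_a
        score_b += pay_b
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

# Against a pure defector, Tit for Tat is exploited only once,
# then matches defection for the remaining rounds.
print(play(tit_for_tat, always_defect, rounds=10))  # (9, 14)
# Two Tit for Tat players cooperate throughout.
print(play(tit_for_tat, tit_for_tat, rounds=10))    # (30, 30)
```

The sketch shows why the strategy is hard to exploit yet never the first to defect: it concedes at most one round's advantage to a defector, while reaping the full benefit of cooperation against a like-minded partner.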
Global sustainability will be possible only if, over the coming decades, human society undergoes at least six fundamental transitions:
1. a demographic transition to a roughly stable world population;
2. a technological transition to a condition of minimal environmental impact per person;
3. an economic transition to a world in which a serious effort is made to charge goods and services their real costs, environmental costs included, so that the world economy is encouraged to live off nature's "income" rather than eat into its "capital";
4. a social transition to a broader sharing of that income, with greater opportunities for non-destructive work for the world's poor families;
5. an institutional transition to a set of supranational alliances that facilitate a global approach to global problems and allow the many aspects of policy to be integrated;
6. an informational transition to a world in which scientific research, education, and worldwide monitoring allow large numbers of people to understand the nature of the challenges they face.
The goal must of course be reached without running into one of Cowan's Class A global catastrophes. And if we are to have any hope of succeeding, Gell-Mann continued, the study of complex adaptive systems is surely of decisive importance. Understanding these six fundamental transitions means understanding economic, social, and political forces that are deeply intertwined and dependent on one another. We cannot simply examine each piece of the problem in isolation, as has been done in the past, and hope to describe the behavior of the system as a whole. There is no alternative to treating the world as an interconnected system, even if the models are crude.
But in moving to a sustainable society, Gell-Mann continued, it is even more important to make sure that it remains a world worth living in. A sustainable human society could all too easily turn out to be some Orwellian dystopia marked by rigid control and a narrow, confined life for almost everyone in it. It should instead be an adaptable society, robust and resistant to small disasters, able to learn from its own mistakes; a dynamic society that can grow in the quality, and not merely the quantity, of life.
Achieving that goal will certainly mean a hard fight, Gell-Mann declared. In the West, intellectuals and administrators tend to be highly rationalistic: they identify the ways in which undesirable effects occur and look for technical fixes to block them. We can then have emissions monitoring, arms-control treaties, contraceptives, and so on: all certainly important things. An effective solution, however, will require much more: renouncing, or sublimating, or transforming our traditional needs.