Texts from the Internet

30.12.04

 

Brussels demands that Portugal charge a fee in its libraries

JOSÉ MOTA

The operation of libraries may be at risk if payment of fees becomes mandatory


Sérgio Almeida

The European Commission takes the Portuguese state to court for failing to comply with an EU directive. The measure puts the Rede Nacional de Bibliotecas Públicas (National Network of Public Libraries) at risk, says Rui Pereira, president of the IPLB

Portugal is one of the European Union countries at risk of heavy sanctions from Brussels for having yet to comply with the EU directive that requires an end to the free lending of books in libraries and official institutions.

Like Spain and the Republic of Ireland, the Portuguese state will have to answer before the Court of Justice of the European Communities, in Luxembourg, for the "incomplete transposition into national law of the public lending right." What this means, according to the European Commission, is that the free lending of a book "can have negative effects on its sale," with a consequent reduction in the royalties owed to the work's rights holders.

Directive 92/100/EEC, in force for seven years in most European countries, is intended to provide a financial safeguard for authors while harmonizing copyright matters across the member states of the European Union.

The application of the measure, however, is far from consensual even within the literary world. José Manuel Mendes, president of the Associação Portuguesa de Escritores (Portuguese Writers' Association), expresses, in a personal capacity, his conviction that "it is possible to find other financing models" that would not burden the operation of libraries and would reconcile "the desired dissemination of books with the defense of authors' rights."

Investment at stake

Rui Pereira, director of the Instituto Português do Livro e das Bibliotecas (IPLB), reveals that if the measure is applied it will not be library users but the state itself, through the IPLB, the body that oversees the libraries, that bears the costs, which should come to around four million euros a year.

"Se formos nós a assumir esse pagamento, é a própria Rede Nacional de Bibliotecas Públicas que pode ser colocada em causa, pois não nos será possível manter os actuais índices de investimento", sentencia Rui Pereira, que aponta como exemplo do empenho do Estado português no combate à iliteracia os 93 milhões de euros gastos nos últimos 12 anos na construção de bibliotecas.

The Portuguese state's strategy is to request a moratorium on the decision until 2012, the year by which the goal of equipping each of the country's 308 municipalities with a renovated library should be met.

The network so far counts only 143 libraries opened, but 77 more are under construction and 42 are at the design stage.

Exceptional status

To avoid having to comply fully with the EU directive, Portugal wants Brussels to recognize the "principle of cultural diversity and subsidiarity": it invokes the specific nature of the Portuguese situation with regard to illiteracy, the highest rate in the OECD, which, as the president of the IPLB points out, "cannot be compared with that of Finland or Germany, for example."

In the document to be submitted, the incomplete transposition of the directive is justified as "a broader reading of the law," not necessarily an illegal one. Encouraging reading and the desire to defend the interests of library users are the other arguments that, according to the government, justify not implementing the measure.

LIBRARIANS WORRIED ABOUT THE FEE

The possibility that a fee will be charged on the lending of books and other documents in Portuguese libraries, whether public, school, or university, has the country's librarians up in arms. In an online petition that has already gathered 20,000 signatures, the Associação de Bibliotecários Portugueses (ABP) warns of the "catastrophic" consequences of applying the measure in a country where "economic difficulties and incipient reading habits make it hard for broad sectors of society to gain access to knowledge of culture." With the fee paid by the libraries themselves and by the body that superintends them, the Instituto Português do Livro e das Bibliotecas, the ABP believes that "the reading-promotion work now under way would be asphyxiated," since the available funds would no longer be invested in acquiring new works. The petition's authors therefore have no doubt that authors themselves, apparently the measure's main beneficiaries, will be harmed. "In the Portuguese book market, libraries probably account, in many cases, for at least 10% of sales," they add.

What the law says

EC imposes fee

Twelve years ago, the European Commission (EC) issued a directive (92/100/EEC) on rental and lending rights, with the aim of defending authors' rights in the field of intellectual property. In practice, the measure, which should have been implemented by 1997, provides that authors receive a payment whenever their works are borrowed through public institutions. In the words of the law, "authors and other rights holders have the exclusive right to authorize or prohibit the public lending of their works or other objects protected by the rights they hold."

The Portuguese exception

In the year the EU directive was transposed into national law, the Portuguese government issued Decree-Law 223/97, which exempts libraries, archives, and museums from paying the fee. It is a piece of legislative sleight of hand that the EC, after repeated warnings, is not prepared to tolerate much longer. Brussels accuses Portugal, along with Spain and the Republic of Ireland, of an "incomplete transposition into national law of the public lending right." France, Italy, and Luxembourg, which at the start of the year were also on the list of offenders, have since regularized their situation.

More threats

According to the Lusa news agency, Portugal will also be taken before the EU court for failing to apply the provisions of the same directive on rental rights. For the EC, the national legislation, by adding the new category of "video producers," "may impede the functioning of the single market."

27.12.04

 

What’s Next for Google

By Charles H. Ferguson, January 2005
For Eric Schmidt, Google’s CEO, 2004 was a very good year. His firm led the search industry, the fastest-growing major sector in technology; it went public, raising $1.67 billion; its stock price soared; and its revenues more than doubled, to $3 billion. But as the search market ripens into something worthy of Microsoft’s attention, those familiar with the software business have been wondering whether Google, apparently triumphant, is in fact headed off the cliff.

I’ve seen it happen before. In September 1995, I had breakfast with Jim Barksdale, then CEO of Netscape Communications, at Il Fornaio in Palo Alto, CA, a restaurant popular with Silicon Valley dealmakers. Netscape had gone public a few months earlier, and Netscape Navigator dominated the browser market. Vermeer Technologies, the company that Randy Forgaard and I had founded 18 months earlier, had just announced the release of FrontPage, a Windows application that let people develop their own websites. Netscape and Microsoft were both preparing to develop competing products. Our choice was to stay independent and die or sell the company to one of them.

At breakfast, and repeatedly over the following months, I tried to persuade Barksdale to take Microsoft seriously. I argued that if it was to survive, Netscape needed to imitate Microsoft’s strategy: the creation and control of proprietary industry standards. Serenely, Barksdale explained that Netscape actually invited Microsoft to imitate its products, because they would never catch up. The Internet, he said, rewarded openness and nonproprietary standards. When I heard that, I realized that despite my reservations about the monopolist in Redmond, WA, I had little choice. Four months later, I sold my company to Microsoft for $130 million in Microsoft stock*. Four years later, Netscape was effectively dead, while Microsoft’s stock had quadrupled.

Google now faces choices as fundamental as those Netscape faced in 1995. Google, whose headquarters in Mountain View, CA—familiarly called the Googleplex—is only five kilometers from Netscape’s former home, needn’t perish as Netscape did, but it could. Despite everything Google has—the swelling revenues, the cash from its initial public offering, the 300 million users, the brand recognition, the superbly elegant engineering—its position is in fact quite fragile. Google’s site is still the best Web search service, and Gmail, its new Web-based e-mail service, Google Desktop, its desktop search tool, and Google Deskbar, its toolbar, are very cool. But that’s all they are. As yet, nothing prevents the world from switching (painlessly, instantly) to Microsoft search services and software, particularly if they are integrated with the Microsoft products that people already use.

In November 2004, Microsoft launched a beta, or test, version of a search engine designed to answer questions posed in everyday language and to serve results customized to users’ geographical locations. Microsoft has also created additional search software for its Internet Explorer browser and its Office productivity applications. That Microsoft is developing its own Web search engine and desktop search tools is significant in itself. But its competition with Google will have repercussions far beyond the existing search business—or even the software industry itself. Google and Microsoft will be fighting to control the organization, search, and retrieval of all digital information, on all types of digital devices. Collectively, these markets are much larger than the existing market for search services. Over the next several decades, in the view of search industry insiders I’ve spoken with, they could generate perhaps half a trillion dollars in cumulative revenue.

Microsoft is starting late but has extraordinary resources and powers of persistence—and it joined the browser wars late, too. In contrast, Google is youthful, adventurous, and innovative, and it does some things extremely well. The contest could end in a Cold War standoff, a decisive victory for either side, or even mutual destruction, if the competition frightens away customers and investors.



Peaceful coexistence, however, seems unlikely.

The Prize and the Contestants

Eric Schmidt and Microsoft’s Bill Gates will be competing against each other for the third time. For both men, the contest is personal as well as financial.

Gates’s philanthropic ambitions depend on Microsoft’s continued health. And like a rock star who yearns to be admired for his brains, Gates wants to create new technology. Only by doing so can he overcome his reputation as the college dropout who built his empire by turning other people’s ideas into mediocre products. “Bill Gates is desperate to prove that he can innovate,” commented a Microsoft executive who prefers to remain anonymous. “And it just might kill us.” He pointed to the ambitious goals and long delays that have plagued Longhorn, Microsoft’s future (and search-centric) version of Windows.

By contrast, the three men who run Google have impeccable technology credentials. Schmidt has a PhD from the University of California, Berkeley, did research at Xerox PARC, and became chief technology officer of Sun Microsystems, where he oversaw the development of many impressive technologies. In business, however, Schmidt has twice been beaten by Gates. The first time was at Sun; the second was at Novell, where Schmidt was CEO. Both firms made enormous mistakes. Schmidt wasn’t entirely responsible, however, because his hands were tied by his superiors at Sun and by his predecessors at Novell. At Google, Schmidt must once again share power—with Larry Page and Sergey Brin, Google’s brilliant but young and possibly overconfident founders, both “on leave” from Stanford University’s PhD program in computer science. Page and Brin still call many of the shots, and the company’s unusual capital structure gives them about 30 percent of the voting shares.


Google seeks to become the gatekeeper for not only the public Web but also the “dark” or hidden Web of private databases, dynamically generated pages, controlled-access sites, and Web servers within organizations (estimated to be tens or even hundreds of times larger than the public Web); the data on personal computer hard drives; and the data on consumer devices ranging from PDAs to cell phones to iPods to digital cameras to TiVo players. Google’s founders understand the scale of the opportunity. Larry Page recently said, “Only a fraction of the world’s information is indexed on our computers. We are continually working on new ways to index more.... Thirty percent [of our engineers] are devoted to emerging businesses.” And Sergey Brin once told Technology Review’s editor in chief, “The perfect search engine would be like the mind of God.”

Until now, competition in the search industry has been limited to the Web and has been conducted algorithm by algorithm, feature by feature, and site by site. This competition has resulted in a Google and Yahoo duopoly. If nothing were to change, the growth of Microsoft’s search business would only create a broader oligopoly, similar, perhaps, to those in other media markets. But the search industry will soon serve more than just a Web-based consumer market. It will also include an industrial market for enterprise software products and services, a mass market for personal productivity and communications software, and software and services for a sea of new consumer devices. Search tools will comb through not only Microsoft Office and PDF documents, but also e-mail, instant messages, music, and images; with the spread of voice recognition, Internet telephony, and broadband, it will also be possible to index and search telephone conversations, voice mail, and video files.

All these new search products and services will have to work with each other and with many other systems. This, in turn, will require standards.

The emergence of search standards would encourage tremendous growth and provide many benefits to users. But standardization would also introduce a new and destabilizing force into the industry. Instead of competing through incremental improvements in the quality and range of their search services, Microsoft, Google, and Yahoo will be forced into a winner-take-all competition for control of industry standards. Steve Jurvetson, a venture capitalist at the firm of Draper Fisher Jurvetson in Menlo Park, CA, says, “This is something of a holy war for Microsoft*, and one they can’t bear to lose.”

In short, the search industry is ready for an architecture war.

*For examples of Bill Gates’s previous crusades against competitors, see our visual history.


Pursuing Lock-In
Architecture wars (also known as standards wars) occur because information technology markets require standards in order to manage complexity, communication, and technological change. Historically, proprietary control over a major information technology standard has created more wealth than nearly any other human activity. Architectural dominance mints money; and managed properly, it lasts forever. IBM’s mainframe architecture was introduced in 1964; Intel developed its first microprocessor in 1971; Microsoft’s first operating system was introduced in 1981; Cisco Systems marketed its first router in 1986. None shows any signs of disappearing, and each has already generated hundreds of billions of dollars in cumulative revenues.

It is only standardization that makes it possible for any browser to display any Web page, or for people to read the documents and e-mail messages they receive from each other. Standards are generally based upon the interfaces that constitute the authorized ways for software systems to communicate with each other. These include application programming interfaces, or APIs, like those Microsoft provides for developing Windows applications; communications protocols such as HTTP (the hypertext transfer protocol), which allows browsers to communicate with websites; and content or document structures, such as the HTML (hypertext markup language) standard for Web pages, or the document structure used by Microsoft Word. These standards are embedded in larger architectures used in the design of general-purpose commercial systems, or platforms, such as the Windows operating system. Platforms, in turn, are used as the starting point for specific applications, such as word processors or accounting systems.

Sometimes standardization is achieved through nonproprietary efforts managed by governments, standards bodies, or industry coalitions. Examples include the basic Internet protocols, the HDTV broadcasting standard, and most telephone standards. In other cases, like that of the Ethernet protocol invented by Bob Metcalfe while at Xerox PARC, a company donates an architecture to a standards body in the hope of creating or expanding a market. The open-source movement is an interesting variant of nonproprietary standardization based on decentralized control. In the case of open-source software like the Linux operating system, a community of creators and users in effect votes continuously on the direction of a standard.

But in most information technology markets, standardization is achieved via market competition. These contests are extremely complex, but they have a common underlying logic, which Charles Morris and I described a decade ago in our book Computer Wars. The best technology does not always win; superior strategy is often more important. Winners do tend, however, to share several important characteristics. They provide general-purpose, hardware-independent architectures, like Microsoft’s operating systems, rather than bundled hardware and software, like Apple’s and Sun’s systems. Winning architectures are proprietary and difficult to clone, but they are also externally “open”—that is, they provide publicly accessible interfaces upon which a wide variety of applications can be constructed by independent vendors and users. In this way, an architecture reaches all markets, and also creates “lock-in”—meaning that users become captive to it, unable to switch to rival systems without great pain and expense.

Architecture wars generally begin with a fierce competition for market share. Eventually, the market settles on a de facto standard, a dominant architecture under the proprietary control of one company. Subsequently, only a few rivals survive in the leader’s shadow, while the leader expands its empire into neighboring markets.

The search industry is the next place in which a vast architectural empire could be built. Some portions of the emerging search space are now occupied by Google, others by Microsoft, most by nobody. But in the end, there will probably be room for just one architecture. Google’s idyllic childhood must therefore give way to a contest much like those Microsoft has fought and won against companies ranging from IBM to Novell to Apple to Netscape. But for several reasons, this architecture war may end differently. First, many of the companies defeated by Microsoft over the past 20 years suffered as much from self-inflicted wounds as from Microsoft’s predation. In Eric Schmidt, Google may have a CEO with the technological depth and painfully acquired experience essential to surviving Bill Gates. Second, Google’s principal services run on a platform that Microsoft doesn’t control—the Web. Third, in some cases (like its fight against Linux, for example), Microsoft’s software is now the high-cost incumbent.

Fourth, some analysts believe that Microsoft has lost its edge, that its size and age have bred complacency. Commenting on the collision between Google and Microsoft, Internet industry observer John Battelle recently wrote, “Microsoft is indeed a fearsome competitor, with extraordinary resources (and I don’t mean the $50 billion in cash; I mean the ability to leverage Windows). But it’s a middle-aged company that moves far more slowly than it did ten years ago, when it first recognized the Web threat.” (For John Battelle’s views on the future of publishing, see “Megaphone,” p. 36.)

Fifth, Microsoft hasn’t always won: Adobe and Intuit are doing just fine, MSN hasn’t killed AOL or Yahoo, and the Xbox hasn’t defeated the Japanese game industry (not yet, anyway). And finally, Microsoft’s recent entry in the search wars—the beta version of MSN’s search tool—is decidedly unimpressive. (Then again, Windows 1.0 was pretty bad, too.)

So Google’s defeat is not a foregone conclusion. Indeed, if it does everything right, it could become an enormously powerful and profitable company, representing the most serious challenge Microsoft has faced since the Apple Macintosh. But if Microsoft gets serious about search—and there is every reason to believe that it will—Google will need brilliant strategy and flawless execution simply to survive.

Arming Secretly
Does Google understand the gravity of the challenges that may confront it? Does it have a strategy for winning an architectural war? The evidence is equivocal.

Google has software developers skilled enough to construct a powerful architectural position. It has hired both newly minted PhDs and experienced technologists from Netscape and even Microsoft. One of its newer employees is Adam Bosworth, famous to software developers for developing the HTML engine in Microsoft’s Internet Explorer and for his pioneering work on the Extensible Markup Language, or XML, the standard for machine-to-machine communication on the Web. Other recent hires, significant for their architectural expertise, include Rob Pike, a pioneer of the Unix operating system at Bell Labs; Joshua Bloch, a leading Java coder from Sun; and Cédric Beust, who developed the WebLogic platform at BEA Systems.

One Google manager, who preferred not to be named, said his company understands the need for proprietary control, and that future products would prove it. In late 2004, Google did release two important new APIs, for its Deskbar search tool and its advertising systems. But the Google executive declined to comment on future plans, noting that his employer had become secretive to the point of paranoia. (Indeed, Google’s senior executives refused to be interviewed for this article.)

The executive then went on to say, “Look, everyone here—right up to our CEO and board of directors—has had the shit kicked out of them over the last five years. A lot of them were at Netscape, or at failed dot coms. Nobody I work with is complacent, and they’re all very smart.” But there are two important people who haven’t had the shit kicked out of them: Google’s founders. In a Playboy interview published shortly before Google’s IPO, Brin and Page did not mention competitive threats. Rather, they talked about corporate ethics, the creation of foundations, and their efforts to make Google a great place to work.

Google is a great place to work. My friends there absolutely love the place, and in part for that reason, they work very hard. Google allows pets and provides employees with laundry service, drinks, meals, massages, car washes, and (soon) child care. Its corporate motto is “Don’t be evil.” But long ago, a professor of mine, noting my youthful idealism, remarked that the only successful neutral nations are those, like Switzerland, that are permanently armed to the teeth. And for Google, neutrality is not an option.

But what specifically should Google do? How is Microsoft likely to attack, what will the contest look like, and what will decide its outcome? Let’s begin with the current state of search.


The State of Search
For a long time, search engines were expensive luxuries for those who operated them. They never made money. Market leadership traded hands repeatedly. Sites like AltaVista rose to prominence and fell away. The entirely separate business of selling software products for text indexing and retrieval was a backwater. But then things changed. As the Internet and the Web grew, searchable digital content began to supplant conventional media, and efforts to improve the quality of search results intensified.

Early search engines ranked results largely according to crude criteria such as the number of times a page mentioned the user’s chosen keywords. But in a research collaboration that began in 1995, when they were still graduate students, Brin and Page applied a practice called citation ranking to the Web, and it turned out to be a much more reliable way to find relevant information.

For many years, reference publications like the Science Citation Index have ranked scientific papers’ “impact” by counting the number of times they were cited in other papers. Brin and Page’s insight was that if hyperlinks were viewed as citations, the same thing could be done for the Web. That insight led to the first truly superior search engine. Stanford applied for a patent on Brin and Page’s “PageRank” technique in 1998 (it was granted in 2001). Soon afterward, Brin and Page started Google and raised money from top-tier venture capital firms Sequoia Capital and Kleiner Perkins Caufield & Byers.
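
To make the citation-ranking insight concrete, here is a minimal sketch in the PageRank spirit: a page's score is the probability that a "random surfer," who mostly follows links but occasionally jumps to a random page, ends up there. The damping factor, iteration count, and toy link graph are illustrative assumptions, not details of Google's production system.

```python
# Minimal PageRank-style citation ranking (a sketch, not Google's engine).
# links maps each page to the pages it links to; a link counts as a vote.

def pagerank(links, damping=0.85, iterations=50):
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}                    # uniform start
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / n for p in pages}  # random jumps
        for page, outlinks in links.items():
            if outlinks:
                # A page passes its rank, split evenly, to the pages it cites.
                for target in outlinks:
                    new_rank[target] += damping * rank[page] / len(outlinks)
            else:
                # A dead end spreads its rank evenly over all pages.
                for p in pages:
                    new_rank[p] += damping * rank[page] / n
        rank = new_rank
    return rank

toy_web = {
    "home": ["news", "about"],
    "news": ["home"],
    "about": ["home", "news"],
    "blog": ["home"],
}
for page, score in sorted(pagerank(toy_web).items(), key=lambda kv: -kv[1]):
    print(f"{page}  {score:.3f}")
```

Pages cited by many well-cited pages float to the top, which is exactly how a heavily cited paper gains "impact" in a citation index.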

Today, the search industry has two layers. The leaders, Google and Yahoo, both provide “retail” search services on their own websites. But both firms also license, on a highly selective basis, their infrastructure and services to other companies in a “wholesale” market. For example, Google provides the underlying search services for AOL and Amazon.com’s A9 search subsidiary. LookSmart powered MSN Search for some years. Now, however, Microsoft is developing its own search engine.

Google holds nearly 40 percent of the U.S. retail search market, more than 50 percent of the U.S. wholesale market, and larger shares of the global market. Yahoo enjoys a rough parity with Google in the United States, and Baidu has been expanding in China. Interestingly, while Google operates its own service in China, it also holds an equity stake in Baidu.

Google derives nearly all of its revenues from advertising, of two distinct kinds. First, it places advertisements on pages of search results returned by its own site. Those advertisements are selected according to the words used in the search. Advertisers bid in highly complex auctions for the right to place ads on results pages for searches that use specific terms like “used cars,” “SUVs,” and so forth. Second, Google manages advertising for a wide network of external websites for which it provides ad placement services. It has combined its search engine with sophisticated text-matching and auction systems to target, price, sell, and evaluate its advertisements, both those placed on its own site and those on its affiliates’.
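
As a toy illustration of the auction mechanics, the sketch below awards a keyword's single ad slot with a second-price rule, under which the winner pays just above the runner-up's bid. The rule and the bid figures are assumptions chosen only to show the shape of the idea; the article notes only that the real auctions are "highly complex" (multiple slots, click-through estimates, and so on).

```python
# Toy single-slot keyword auction with a second-price rule (an assumed
# simplification for illustration, not a description of Google's system).

def award_slot(keyword, bids):
    """bids: advertiser -> dollars bid per click on this keyword."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner, top_bid = ranked[0]
    # The winner pays a cent more than the next-highest bid, not its own bid.
    price = (ranked[1][1] + 0.01) if len(ranked) > 1 else 0.01
    print(f'"{keyword}" slot -> {winner} at ${price:.2f} per click '
          f'(bid ${top_bid:.2f})')

award_slot("used cars", {"dealer_a": 1.50, "dealer_b": 1.20, "lot_c": 0.80})
```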

Some of these affiliates use Google’s search services, but most do not. In fact, almost half of Google’s revenue and profits come from its external advertising network, a business where its superior indexing and search capabilities play a less critical role. Google also sells a “search appliance,” a Linux server running its indexing and search software, to organizations wishing to provide search services for their internal Web servers. This business, however, is quite small.

Yahoo’s search business is similar. Like Google, Yahoo earns a substantial fraction of its total revenue through search-related advertising, both on its own site and on a network of affiliates. Yahoo’s portal offers a wider variety of information services than Google, including news, dating, chat, and shopping. But Google is rapidly diversifying: in addition to allowing users to download its free personal search tool, Google’s website has news, shopping, e-mail, and photo storage services in various stages of development.

Today, the wholesale search market has significant barriers to entry. Economies of scale have asserted themselves, secondary competitors have folded, and the creation of new search engines by startups is becoming prohibitively expensive. Consider: to crawl, index, and search more than eight billion pages—still only a fraction of the Web—Google now operates a global infrastructure of more than 250,000 Linux-based servers of its own design, according to one Google executive I spoke with, and it is becoming a major consumer of electrical power, computer hardware, and telecommunications bandwidth.

But the consolidation of the wholesale market does not mean that the search industry is mature. Quite the contrary.

First, there is no lack of new competition. This comes from any number of sources: large firms, like Amazon and its A9 subsidiary, with sufficient resources to enter the market; startups commercializing a wide variety of new search functions; information retrieval and filtering firms such as LexisNexis or Vivísimo, whose products are competitive with or complementary to Web-based search services; and, in a class by itself, Microsoft. Moreover, while basic Web crawling is a mature technology with high barriers to entry, many other search-related functions are not. Second, services that have thus far been confined almost exclusively to the public Web are now expanding to personal computers, the dark Web, and other platforms. Finally, the search arena is still unstructured and without standards. Search sites are self-contained islands. They do not interoperate, and independent developers cannot use search sites as platforms upon which to offer specialized products and services, because, with minor exceptions, the search industry lacks open APIs. For the most part, each service is confined to what it can do on its own.

But the search industry cannot resist APIs, standards, and open architectures much longer. No single company can offer users all the functions they want. Users will demand search products and services that work across many different platforms. And Microsoft will almost certainly exploit both its ownership of the Windows platform and its search engine. Indeed, Microsoft has already announced that it intends to provide third-party developers with APIs to its new search engine, enabling them to construct applications based on it.

Trends in Search: Technology
The advantage conferred on Google by its PageRank algorithm, once overwhelming, is gradually disappearing. Many other clever algorithms have been developed; indexing and searching are being applied to more data sources and data types; and ever more nuanced user interfaces and functions are being offered to users.

Some of these efforts seem quite promising. Amazon has scanned more than 100,000 books and made their text searchable for Amazon users. Google Print provides a similar service and also offers direct links to bookselling sites. PubSub, a small startup in New York City, has developed a high-performance “matching engine” that monitors online information: if you subscribe to a topic, PubSub will scan data in real time and notify you whenever there is news. For the sorting and clustering of search results, the leader is Vivísimo, a Carnegie Mellon University spinoff in Pittsburgh, with its new Clusty website. Software from Blinkx, of San Francisco, lets users search multiple information sources, including their desktops, websites, and blogs. X1 Technologies of Pasadena, CA, also provides a popular desktop search tool.

As these examples suggest, many new search functions are being introduced by startups rather than by Google or established companies. A few of these startups may become large, independent firms. But most will remain small vendors, will be acquired, or will simply fail, depending on what Google, Yahoo, or Microsoft choose to do. Many offer products that would be natural additions or complements to existing search services, since their utility depends upon access to a search engine. But Google and Yahoo do not usually provide such access, even though it would benefit users. Google’s sole Web API is laughably limited, offering little functionality and contractually restricting users to 1,000 queries per day.

Just what services could be built upon a fully open Google architecture? They could take many forms, but some of the most obvious would make indexing and searching processes on the desktop, on Web servers, and on Google’s own website work together better. A single search could then span not just Google’s index of the public Web but whatever other sources might be appropriate: a newspaper archive, a medical database, an antique-car parts catalogue, or your own hard drive. Google, or others building upon its APIs, would unify the results, explain any access restrictions on particular sources, and facilitate purchases of information. At the same time, independent firms could create services that call on Google’s search and indexing functions to retrieve information, but present that information in new and creative ways.
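
A hypothetical sketch of what such a unified search might look like to a developer follows. Every name in it — SearchSource, its query method, the sample collections — is invented for illustration; no such open Google API existed. The point is only the shape of the idea: fan a query out over many indexed sources, unify the ranked results, and surface any access restrictions.

```python
# Sketch of federated search over hypothetical open search APIs.
from dataclasses import dataclass

@dataclass
class Result:
    source: str
    title: str
    score: float       # relevance as reported by the source
    restricted: bool   # e.g., a paid archive the user has not unlocked

class SearchSource:
    """Hypothetical wrapper around one searchable collection."""
    def __init__(self, name, titles, restricted=False):
        self.name, self.titles, self.restricted = name, titles, restricted

    def query(self, text):
        terms = text.lower().split()
        for title in self.titles:
            hits = sum(t in title.lower() for t in terms)
            if hits:
                yield Result(self.name, title, hits / len(terms),
                             self.restricted)

def federated_search(text, sources):
    """Fan the query out to every source and unify the ranked results."""
    results = [r for src in sources for r in src.query(text)]
    return sorted(results, key=lambda r: r.score, reverse=True)

sources = [
    SearchSource("public web", ["Chinese economy grows", "Antique car parts"]),
    SearchSource("news archive", ["Economy week in review"], restricted=True),
    SearchSource("my hard drive", ["notes on the Chinese economy.doc"]),
]
for r in federated_search("Chinese economy", sources):
    note = " (access restricted)" if r.restricted else ""
    print(f"[{r.source}] {r.title}{note}")
```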

As the search industry evolves, it also touches upon—and often competes with—a widening array of other industries, from publishing to software, in both business and consumer markets. The search industry wants to become the starting point for a larger proportion of digital activities. Some companies are happy to oblige: Amazon, for instance, opens its databases to search services, so that search results can point directly to relevant Amazon products, bypassing the need to navigate Amazon’s own site. Others are less welcoming. Microsoft will be displeased, to put it mildly, if Google Desktop begins to supplant the traditional Windows desktop interface and file systems.

However, the most important trend in the search industry is the proliferation of new computing platforms—and the increasing cross-pollination of data between these devices, PCs, and Web services. These emerging—and merging—markets represent Google and Microsoft’s greatest opportunity for future growth and the greatest threat they pose to each other. In the absence of a common architecture, the information on these systems is almost unsearchable. Today, a user cannot possibly conduct a search such as “Show me everything about the Chinese economy that has appeared in the last month in my e-mail attachments, Word documents, bookmarked websites, corporate portal, voice mail, or Bloomberg subscription.” Many computing platforms, old and new, have no useful search facilities at all. Most existing search tools are available on only one or at most a few platforms; and due to their lack of standardization, they cannot talk to each other.

Thus, while Google provides an excellent service for searching the public Web and has made a good start on PCs with Google Desktop (the hard-drive search tool) and Google Deskbar (which performs searches without launching a browser), the search universe as a whole remains a mess, full of unexplored territories and mutually exclusive zones that a common architecture would unify. Given the size and growth rate of the markets involved, the dominant search provider a decade from now could easily have revenues of $20 or $30 billion per year.

Google Versus Microsoft
Who will win? Google certainly has impressive assets. Moreover, Microsoft does not own the server side of the Web and probably never will. Nor does it control the architectures of the newer computing platforms, whose markets are growing much faster than the PC’s. And in these newer markets, Microsoft faces a painful choice: either provide search technology that will run on, and thereby support, competing platforms such as Linux machines, or let others take the lead.

Yet Microsoft’s control of Windows, Internet Explorer, and Office is a real advantage. For instance, if desktop search tools enjoyed deeper access to the internal document structures of Word and Excel, they would be much more useful. Similarly, operating systems can potentially collect information about user behavior that could improve search tools substantially. Other recent search innovations are really enhancements to the Web browser. Google, Ask Jeeves, A9, Blinkx, Yahoo, and Microsoft are all providing search toolbars that can be downloaded into the browser, and independent developers have created many search-related enhancements to the open-source Firefox browser.

But we know who really owns the browser. Ramez Naam, group program manager for MSN Search, declined to say whether or not search functions would be integrated directly into Microsoft’s Internet Explorer. But a Microsoft executive, who asked to remain unnamed, told me that his company had recently reconstituted its browser development organization. “Microsoft effectively disbanded the Internet Explorer group after killing Netscape,” he said. “But recently, they realized that Firefox was starting to gain share and that browser enhancements would be useful in the search market.” He agreed that if Microsoft got “hard-core” about search (as Bill Gates has promised), then, yes, Google would be in for a very rough time.

Why? Because in contrast to Microsoft, Google doesn’t yet control standards for any of the platforms on which this contest will be waged—not even for its own website. Although Google has released noncommercial APIs—which programmers may use for their own purposes, but not in commercial products—until recently, it avoided the creation of commercial APIs. In late 2004, however, Google announced APIs for its advertising systems and for the Google Deskbar. The advertising APIs could help create an infrastructure of firms dependent on Google’s platform and specializing in the management of automated, Web-based advertising strategies. This could protect Google’s advertising revenues against future price competition from Microsoft. The Google Deskbar APIs, likewise, should encourage third parties to create search functions for the Windows desktop.

These steps, however, are at best half-measures. Google has not yet faced the need for full architectural competition and in some respects has arguably been moving in the wrong direction. It still has not provided open APIs for its core search engine. (Raúl Valdés-Pérez, Vivísimo’s CEO, says that he tried to license Google’s search engine services but was refused.) Furthermore, it sells its search software to enterprises only in the form of a bundled, Linux-based hardware system. This alienates other hardware and software vendors, leaves most of the non-Linux market unserved, and presents a huge opportunity for Microsoft.

Google may feel that APIs are of secondary importance in its coming war with Microsoft. Two Google employees (both of whom prefer not to be named) told me that Google’s leaders believe that the company’s expertise in infrastructure—knowing how to build and operate those 250,000 servers—constitutes a competitive advantage more important than APIs or standards. This could be a major, even fatal, error. Microsoft can certainly obtain or cultivate the skills necessary to operate large-scale computing infrastructures; indeed, it already operates MSN, with nearly 10 million users.

Worse, Google may feel that APIs can wait. Peter Norvig, the company’s director of search quality, told Technology Review, “We’ve had the API project for a few years now. Historically, it’s not been that important: it’s had one person, sometimes none. But we do think that this will be one important way to create additional search functions. Our mission is to make information available, and to that end we will create a search ecology. We know we need to provide a way for third parties to work with us. You’ll see us release APIs as they are needed.”

Those words do not convey much sense of urgency. There is, however, another possibility: Google understands that an architecture war is coming, but it wants to delay the battle. One Google executive told me that the company is well aware of the possibility of an all-out platform war with Microsoft. According to this executive, Google would like to avoid such a conflict for as long as possible and is therefore hesitant to provide APIs that would open up its core search engine services, which might be interpreted as an opening salvo. The release of APIs for the Google Deskbar may awaken Microsoft’s retaliatory instincts nonetheless. For Google to challenge Microsoft on the desktop before establishing a secure position on the Web or on enterprise servers could be unwise.

Strategies and Prescriptions
In all of Microsoft’s successful battles, it has used the same strategies. It undercuts its competitors in pricing, unifies previously separate markets, provides open but proprietary APIs, and bundles new functions into platforms it already dominates. Once it has acquired control over an industry standard, it invades neighboring markets.

In contrast, the losers in these contests have usually made one or more common mistakes. They fail to deliver architectures that cover the entire market, to provide products that work on multiple platforms from multiple companies, to release well-engineered products, or to create barriers against cloning. For example, IBM failed to retain proprietary control over its PC architecture and then, in belatedly attempting to recover it, fatally broke with established industry standards. Apple and Sun restricted their operating systems to their own hardware, alienating other hardware vendors. Netscape declined to create proprietary APIs because it thought Microsoft would never catch up. Google—and Yahoo—would do well to take note.

What will Microsoft do? Publicly, it doesn’t care about building a broad search architecture reaching across many platforms. “There will be a lot of innovation and competition around search by a broad number of vendors, but it is wishful thinking to believe it is a platform tidal wave like the initial emergence of the browser and the Web,” says Charles Fitzgerald, Microsoft’s general manager of platform strategy. And indeed, Microsoft has begun innocently enough: a decent though unspectacular search site, some software, no bundling—nothing, you know, violent. But the company will provide APIs to its Web search engine, and its long-term strategy could be brutal. If it acts logically, it will bundle better search facilities into Internet Explorer and Office; it will build advanced indexing and searching tools into both its PC and server operating systems; and it will alter its own products to make searches of many kinds more fruitful. Search tools could tailor results to a user’s interests, based upon data collected by the operating system. Microsoft could even deliberately cause failures in Google’s products—for example, altering its file formats so that Google’s crawlers could not properly index Word or Excel files. Microsoft has been accused of such conduct repeatedly in the past, notably in its battles against the DR-DOS operating system (an attempted clone of MS-DOS) and Lotus spreadsheet software.

If it acts logically, Microsoft would also perform a “cashectomy” on Google—just as it did in the browser wars when it gave away Internet Explorer. Even with nearly $2 billion in cash, Google is vulnerable to this tactic. For instance, Microsoft could offer free wholesale access to its search engine. Then it could attack Google’s advertising networks by offering free or subsidized advertising placement. These businesses are based primarily upon agreements with third-party websites, most of which have no long-term allegiance to Google. (Google’s forthcoming advertising APIs could, however, change this.) Finally, Microsoft will try to play competitors off against each other, as is its custom. Microsoft thrives when its opponents are fragmented and possess no alternative common standard.

So what should Google do? Given Microsoft’s ferocity in the past, panic might be a productive first step. Google should understand that it faces an architecture war and act accordingly. Its most urgent task must be to turn its website into a major platform, as some other firms have already done. Amazon, as we have noted, does not merely operate a retail website. It has developed proprietary but open APIs that have made it the capital of an electronic economy (see “Amazon: Giving Away the Store,” p. 28). Other merchants set up stores under the Amazon umbrella, and other websites can offer direct links to Amazon’s product pages. Recently, Amazon has gone even further, creating ways for consumers to search and find products without visiting Amazon at all.

Thus, Google should first create APIs for Web search services and make sure they become the industry standard. It should do everything it can to achieve that end—including, if necessary, merging with Yahoo. Second, it should spread those standards and APIs, through some combination of technology licensing, alliances, and software products, over all of the major server software platforms, in order to cover the dark Web and the enterprise market. Third, Google should develop services, software, and standards for search functions on platforms that Microsoft does not control, such as the new consumer devices. Fourth, it must use PC software like Google Desktop to its advantage: the program should be a beachhead on the desktop, integrated with Google’s broader architecture, APIs, and services. And finally, Google shouldn’t compete with Microsoft in browsers, except for developing toolbars based upon public APIs. Remember Netscape.

When Google’s Peter Norvig was read this list—presented not as recommendations, but as things that Google would do—he did not deny any of it. When Technology Review asked, “If we reported any of this, would we be wrong?”, Norvig answered, “We don’t like the word ‘beachhead.’ That implies a war, and we don’t want to go there.” Pressed, he said, “Factually, nothing wrong”—although he stressed that APIs were only one way Google might create a “search ecology.” But historically, proprietary APIs have been the only way to create a loyal customer base—one that can’t readily switch to a competitor.

Big Questions
Would such an architectural strategy work? I’m not sure, but I think so. I also suspect that if Google doesn’t do something like this fast, and Microsoft attacks, Google will go down. Its decline would take longer than Netscape’s precipitous descent, but it would be no less final. And at least during the second term of the George W. Bush administration, it is highly unlikely that antitrust policy would come to the rescue.

Whether Google or Microsoft wins, the implications of a single firm’s controlling an enormous, unified search industry are troubling. First, this firm would have access to an unparalleled quantity of personal information, which could represent a major erosion of privacy. Already, one can learn a surprising amount about people simply by “googling” them. A decade from now, search providers and users (not to mention those armed with subpoenas) will be able to gather far more personal information than even financial institutions and intelligence agencies can collect today. Second, the emergence of a dominant firm in the search market would aggravate the ongoing concentration of media ownership in a global oligopoly of firms such as Time Warner, Bertelsmann, and Rupert Murdoch’s News Corporation.

If the firm dominating the search industry turned out to be Microsoft, the implications might be more disturbing still. The company that supplies a substantial fraction of the world’s software would then become the same company that sorts and filters most of the world’s news and information, including the news about software, antitrust policy, and intellectual property. Moreover, Microsoft could reach a stage at which its grip on the market remains strong, but its productivity falls prey to complacency and internal politics. Dominant firms sometimes do more damage through incompetence than through predation.

Indeed, as so many have noted, much of Microsoft’s software is just plain bad. In contrast, Google’s work is often beautiful. One of the best reasons to hope that Google survives is simply that quality improves more reliably when markets are competitive. If Google dominated the search industry, Microsoft would still be a disciplining presence; whereas if Microsoft dominated everything, there would be fewer checks upon its mediocrity.

Disclosure: As the result of selling Vermeer Technologies to Microsoft in 1996, Charles Ferguson still holds a substantial quantity of Microsoft stock, a position which is partially but not completely hedged. He has no other financial interest relevant to this article.

 

Anti-ad victory on Swedish TV

Two directors sued a private channel for interrupting their films.

By Olivier TRUC

Monday, December 27, 2004 (Libération, 06:00)

From our correspondent in Stockholm


It is a major reversal that has just shaken the Swedish broadcasting landscape. TV4, the equivalent of TF1, has just lost the lawsuit brought against it by two directors who would not tolerate the broadcast of their films on the channel being interrupted by commercials. It is a first in Sweden, and one that surprised everyone, all the more so because the Swedish broadcasting authority had itself sided with the commercial channel in September 2002. "It's a small step for us, but a giant leap for mankind!" exclaimed Claes Eriksson, one of the directors, when the verdict was announced.

Change. It was during the summer of 2002 that TV4, the main private commercial channel, broadcast Claes Eriksson's Le Requin qui en savait trop (The Shark Who Knew Too Much), a satirical comedy about consumer society and the race for money in the 1980s, and Vilgot Sjöman's Alfred, a film about Alfred Nobel.

When TV4 bought the rights in 2000, with the directors' agreement, channels broadcasting from Sweden were still forbidden to insert commercial breaks. But two years later, when the films aired, a change to the radio and television law made in the meantime had legalized the slicing of programs with ads. The court, after viewing the films, ruled that the commercial breaks amounted to "an alteration of the work of a kind that infringes the author's literary and artistic distinctiveness."

TV4 said it was "surprised" by the decision. "We have reviewed all our contracts to make sure this does not happen again," said Göran Ellung of TV4. "The ruling changes nothing in our programming; it concerns only this particular case."

In November, TV4 had already received its first warning from the broadcasting regulator for an ill-timed commercial break during the broadcast, in December 2003, of Luc Besson's Léon. It came during the scene where Léon teaches Mathilda to shoot and, her eye on the rifle scope, she is about to pull the trigger for the first time. Cut to commercials. For the Swedish regulator, interrupting the film at that moment of extreme drama was a violation of the work's integrity.

Still, the two Swedish directors' symbolic victory may prove Pyrrhic. Claes Eriksson quickly learned as much. In 2002, after his first film aired, he called TV4, which was planning to broadcast a second one, to ask that it not be cut up with commercials. The film promptly disappeared from the schedule.

Financial risk. While copyright emerges strengthened from this trial, the ruling could prove heavy with consequences for films to come. "A director who is orthodox on the question of authors' rights, and who wanted to restrict his film's broadcast to noncommercial channels, could have financing problems," copyright lawyer Peter Danowsky told the daily Dagens Nyheter. In his view, there is a risk that production companies will refuse to finance films that cannot be shown just anywhere.
 

What's a Wiki?

The past several years have seen many forms of collaborative electronic communication take shape. Some are based on instant messaging, others are e-mail-centric, still others rely on HTML-based content, and the list goes on. One form that is growing in popularity—though it doesn't yet have the star power of Weblogs, which have grown by several hundred percent this year—is the wiki. A wiki (derived from the Hawaiian term for quick) is essentially a small piece of server software that allows users to freely create and edit Web content using any Web browser and no other special tools. Or, in one simpler description, a wiki is "the simplest online database that could possibly work." No HTML or programming knowledge is needed to contribute to a wiki.
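
As a rough illustration of how small that "small piece of server software" can be, here is a minimal wiki-like server in Python. It is a sketch of the concept, not any real wiki engine: every page is readable and rewritable by anyone with a browser, through a plain HTML form.

```python
# A minimal wiki-like server: pages live in a dict (the "simplest online
# database"), and any browser can read or rewrite them. Sketch only.
from html import escape
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import parse_qs, unquote

pages = {"HomePage": "Welcome. Edit me!"}

TEMPLATE = """<html><body><h1>{name}</h1>
<form method="post"><textarea name="text" rows="10" cols="60">{text}</textarea>
<br><input type="submit" value="Save"></form></body></html>"""

class WikiHandler(BaseHTTPRequestHandler):
    def _page_name(self):
        return unquote(self.path.lstrip("/")) or "HomePage"

    def _render(self):
        name = self._page_name()
        body = TEMPLATE.format(name=escape(name),
                               text=escape(pages.get(name, ""))).encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/html; charset=utf-8")
        self.end_headers()
        self.wfile.write(body)

    def do_GET(self):            # anyone can read any page...
        self._render()

    def do_POST(self):           # ...and anyone can rewrite it
        length = int(self.headers.get("Content-Length", 0))
        form = parse_qs(self.rfile.read(length).decode())
        pages[self._page_name()] = form.get("text", [""])[0]
        self._render()

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), WikiHandler).serve_forever()
```

Run it and browse to http://localhost:8000/HomePage: editing the textarea and clicking Save rewrites the page for every future visitor, which is the whole wiki idea in miniature.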

A Google search on the term wiki brings up many examples and resources. Teenagers—probably taking time out from their well-documented instant messaging sessions—are using them to collaborate. People in the scientific community make use of them to cast a wide net for discussion participants on topics of interest. One of the more robust wikis is Wikipedia, which bills itself as "the free encyclopedia." It is a multilingual, open-content encyclopedia that anyone can edit. The English version contains approximately 120,000 entries. Under Astronomy, you'll find a variety of contributions, from articles on star formation to extragalactic astronomy and more. Under Hobbies there are articles on dumpster diving and restoring antique machinery in addition to the mainstream.

Wikis have several unique properties compared with other kinds of collaborative communication forums. Any and all information being aggregated in a wiki can be changed or deleted by anyone (though many wikis preserve previous copies of posted contributions in the background). Unlike protected Web pages, articles added to a wiki are at the editorial mercy of the wiki's other participants. Ward Cunningham, coauthor of the Addison-Wesley book The Wiki Way: Collaboration and Sharing on the Internet, refers to this aspect of the wiki as "fragility." There is trust involved in the development of a wiki. Cunningham's book comes with a CD-ROM that enables you to create a wiki easily on a Web server. There's also a free-resources list on the Web.

The positive spin on the fragility of a wiki, though, is that any flames or spam can be removed immediately, so wikis imbue good participants with a kind of survival-of-the-fittest power. "You need to generate real content," says one wiki source on the Web. "Anything else will be removed. So anyone can play, but only good players last."

Cunningham also draws many distinctions between wikis and another popular means of Web communication: blogs, or Weblogs. "Blogs and wikis are polar opposites in many ways, though they're seen as similar," he says. "A blog tends to reflect the biases and opinions of an author, while a wiki is more like an open cocktail party. In a wiki you try to speak without a strong voice, seeking consensus to create something permanent, while on a blog you're developing your own voice and it's very much about your voice."

Cunningham also points out that you can go away from a wiki and come back at any time to pick up a conversation without much inconvenience, which isn't the case with e-mail-centric group discussions. "E-mail doesn't self-organize," he emphasizes.

In terms of future trends for wikis, Cunningham says, "There's a lot of interest in combining the timelessness of wikis—the fact that you can go away from them and come back—with the attention-grabbing aspect of blogs. Integrating blogs and wikis is a hot item right now."

25.12.04

 

Lost and Found Department: Chasing Shadows

Ray Carney

At the beginning of Citizen Kane the dying Charles Foster Kane whispers the word “Rosebud,” and a reporter scurries about for a few days and pieces together his life story from the two syllables.

If only life were as simple as the movies. In the late 1980s, a few years before John Cassavetes’ death, I had a series of “Rosebud” conversations with him. The American independent filmmaker told me things about his life and work that he had never previously revealed. Our discussions covered a lot of territory, but one of the things I spent the most time querying him about was the fate of alternative versions of his films. Because Cassavetes made most of his movies outside the studio system and financed them himself (paid for from the salary he made acting in other directors’ films), he was free from the constraints that limit Hollywood filmmakers. He could take as long as he wanted to shoot his projects, spend as much time as he needed to edit them, and if he was so inclined, re-shoot or re-edit them as much as he wanted. In short, Cassavetes made films the way poets write or painters paint. The result was that at various points in their creation, most of his works–including Faces, Husbands, A Woman Under the Influence, and The Killing of a Chinese Bookie–existed in wildly differing versions, with different characters, different scenes, and different running times.

The film we spent the most time talking about was Shadows. Cassavetes’ first feature, generally regarded as the beginning of the American independent movement, had had a vexed history. The filmmaker, in effect, made it twice, filming an initial version in 1957 and screening it in the fall of 1958 at New York’s Paris Theater for invited audiences. But, dissatisfied with the response, Cassavetes re-shot much of the movie in early 1959, replacing approximately half of the footage in the original print with newly created material. In late 1959 the so-called “second version” of Shadows premiered.

What made the Shadows story especially interesting was that a number of critics and viewers who saw both versions were convinced that Cassavetes had made a grievous mistake. Jonas Mekas’s “Movie Journal” column, published in The Village Voice on January 27, 1960, can stand for all:

“I have no further doubt that whereas the second version of Shadows is just another Hollywood film–however inspired, at moments–the first version is the most frontier-breaking American feature film in at least a decade. Rightly understood and properly presented, it could influence and change the tone, subject matter, and style of the entire independent American cinema.”

At the end of his piece, Mekas expressed the hope that Cassavetes would come to his senses, withdraw the second version and release the first version; but it was not to be. Cassavetes did show the first version several more times after he created the second version, but for unknown reasons the first version ceased to be available for screenings a few years later. Despite the fact that Cassavetes made several public statements that he was willing to screen and have people view the first version, its last screenings took place in the early 1960s. It was not shown again in his lifetime or following his death. For the forty-five years that followed, the only version of Shadows anyone would ever see would be the re-shot, re-edited version. The first version of the film assumed a legendary, ghost-like status. Where was it? Why, given Cassavetes' avowed willingness to have it screened, was it never shown again? Had it been lost or destroyed? Did it still exist? Did Cassavetes himself know?

When I asked Cassavetes the whereabouts of the earlier print, he said he doubted it still existed. The likelihood of its survival seemed all the more remote when one took into account the modesty of his filmmaking operations in the late 1950s. The filmmaker told me that the first version of Shadows had existed only as a single 16mm print. He had not had enough money to make a duplicate or a backup, and the negative had been cut up to make the second version.

The one small lead he offered me was that he said he vaguely remembered donating the early print to a film school. Jonas Mekas subsequently told me of a conversation he had with Cassavetes in which the filmmaker was slightly more specific and said that he had donated the print of the first version to "a school in the Midwest."

Unfortunately for my peace of mind, the damage had been done: I contacted every school in the Midwest, starting with the alma mater of Cassavetes’ wife, Gena Rowlands, the University of Wisconsin, which seemed a likely suspect. To be sure I wasn’t failing to pursue any leads, I also tracked down anyone I could locate who had been associated with the schools’ film programs at the time the gift would have been made, presumably twenty-five or thirty years earlier. I had many wonderful conversations, but came up empty-handed.

Around the time Cassavetes died, in 1989, I expanded the search. I contacted staff members at every major American film archive, museum, and university film program to see if the print had somehow been squirreled away in one of their collections. After all, the title would have been the same for the first and second versions; maybe they had the first version and didn’t realize it. I began making announcements at film events I organized or presided over. I asked friends–critics, filmmakers, and ordinary people–to spread word of the quest. In the mid-1990s, when I started a web site, I posted a notice there. My friends joked that “shadowing Shadows” had become a kind of madness.

There was no shortage of tips and leads to pursue over the course of the next decade. I communicated with hundreds of people in person, on the telephone, and, subsequently, via e-mail. I tracked down people who had been present at one of the early screenings and recorded their accounts of what had been in the first version. I talked to people who thought they knew what had become of it. There were thrilling days when it seemed that the print was within my grasp if only I could get in touch with a particular person who knew someone who knew someone who knew someone. But that final someone always eluded me. There were wild goose chases where I flew into a strange city and met with a collector who, I had been led to believe, had the print in his possession. Needless to say, each time the film turned out to be the second version.

The comedy was not lost on me–or my amused friends. So many of the accounts of what had been in the early version–including Cassavetes’ own–contradicted each other that I joked that the longer the search went on the less I knew. Everything had been much clearer when I began. I teach literature as well as film, and one day in a Henry James seminar I was leading a discussion of “The Aspern Papers” and “The Figure in the Carpet,” two comical stories about endless, pointless, maniacal scholarly searches that never get anywhere, when I began laughing so hard that I had to stop the discussion and explain to my students that I had suddenly, shockingly recognized my own particular scholarly madness in James’s characters. Was I really just crazy?

There were also comical tricks of fate. For example, while searching for the Shadows print in the Library of Congress collection, I stumbled across an uncatalogued, unrecognized, long print of Faces. Very interesting, very valuable; but, sorry, wrong movie.

I can’t say I didn’t get discouraged. Sometime in the mid-1990s, I put the search on the back burner and decided to take another tack. If I could not actually find the physical print of the first version, I would imaginatively reconstruct it by drawing on memories of the cast, crew, and people who had seen it, as well as by studying the second version, which included approximately thirty minutes of footage that had been in the earlier print. I re-interviewed the cast and crew to pick their brains for memories about the first version, then studied the composite second version shot-by-shot for tell-tale clues about which footage had been filmed in 1957 and which in 1959. It’s what scholars call using “internal evidence” to study the revision of a work.

Almost all of that research had to be done at actual movie theater screenings, since a video image doesn’t reveal the kind of detail I required to draw my conclusions. I pulled friends, dragging their feet and complaining, into 35mm screenings in theaters, handed them clipboards, and we sat together in the front row, whispering in each other’s ears and taking notes about how an actor’s socks or the part in his hair changed in two successive shots. We noted the length of the shadows on the ground to tell what time of day scenes were filmed, and the size of the leaves and the openness of buds on bushes in a park scene to decide the month. We took notes about the models of the automobiles or the names of films or plays visible on marquees in the background. That was only the raw material, the data of the experiment; the fun of it was to connect the dots, to reconstruct the entire first version out of such spider-web tangles of interconnections. The title of a film on a marquee would allow us to date an actress’s hairdo, which would then allow us to date the scarf that an actor wore in another scene and … skipping three or four more intervening steps … we could finally deduce that another scene, different from any of the preceding ones, was definitely filmed in March 1957 in the late afternoon of a day following a heavy rain.

It took scores of screenings and years of note-taking. Shadows was a gargantuan jig-saw puzzle with thousands of missing pieces, but as I and my patient and forbearing helpers put one tiny bit next to another tiny bit, large chunks of the big picture of what had been filmed in the two different periods of shooting emerged. It may not have been the most profound scholarly work I’ve ever done, but it was certainly the most fun–a little like playing Trivial Pursuit, doing the Times crossword, and lining up the colors on a Rubik’s Cube at the same time. The downs gave me the acrosses; the straight edges gave me the borders; the posters gave me the socks and scarves and hairdos. Shadows became my personal Dead Sea scrolls. Decoding the Rosetta stone or Linear B must have felt like this. What larks.

I eventually published two books of conclusions: the first a monograph about the film for the British Film Institute “Film Classics” series; the second an augmented, revised version of the BFI book that I sell on my web site. I’m embarrassed to admit that I continued going to Shadows screenings and taking notes for more than a year after the BFI book was published. I was in so deep I couldn’t stop working on the changes just because my text had gone to the printer.

Then one day two years ago, some time after the BFI book appeared, one of the friends I had told about the search called, saying he had run into a woman who might have some information. When I finally tracked her down and got in touch with her, it was your typical “good news-bad news” situation.

The good news was that she confirmed that, yes, the title sounded familiar. Her father had been a junk dealer who ran a second-hand shop in downtown Manhattan. One of the ways he replenished his stock was by attending “lost and found” sales held by the New York City Subway System. There were so many forgotten umbrellas, mittens, eyeglasses, hats, pencils, pens, and other things left on the trains that the Transit Authority annually auctioned off the unclaimed items. Though a watch or nice piece of jewelry might go for more, everything else generally went for ten to twenty dollars per “lot”–a box which might contain fifteen or twenty umbrellas, mittens, scarves, or hats. One year a long time ago (it was impossible to pin down the date), there was a fiberboard film container in one of the boxes her father bought. When he got home and opened up the carton he saw the title Shadows (or at least that was what his daughter thought she remembered him saying it was) scratched on the outer leader of one of the reels, but since he had never heard of the movie, she told me, he simply put it aside and joked that he was disappointed it was not a porno film.

In this case, the good news was also the bad news. The subway was the wrong place to find the first version. Not only did it fail to square with Cassavetes’ account of what he had done with the print, it just didn’t seem a plausible scenario. If the only print in the universe had been left on a subway car, why hadn’t whoever lost it simply claimed it the next day? The odds were also strongly against its being the right version. While there had only been a single print of the first version, dozens of prints of the later version had been struck and put in circulation once Cassavetes became an established filmmaker. A print found on the subway was far more likely to have been one of the many prints of the second version that were couriered to or from a college film society or art house booking in the 1960s or 1970s.

Even worse news was that all of this had taken place something like thirty or forty years earlier. In my very first conversation with her, the woman emphasized that even assuming her memory of the title was correct, there was virtually no chance the actual print still existed. The junk shop had gone out of business long ago. The father had died years before. The members of the family no longer lived in New York. The children had married and had their own families and had moved away to other cities. The store's contents had been sold off or thrown out years ago. Shadows was just a distantly remembered word in a comical family story. The woman put so little stock in the print’s survival that she didn’t even really want to search for it when I asked her to. She told me she had no idea where to look.

It would take almost two years of polite pestering on my part before she came up with anything; but I have to admit that even as I went through the motions of talking to her every few weeks to remind her to ask other family members if they had any idea if the print might have survived or where it might be, I privately wrote off this lead as one more dead end in a dead-end story. I resumed making announcements at Cassavetes events and posting inquiries on my web site and elsewhere, but had virtually no expectation of a positive outcome. I had given up.

That’s why when the film was finally located in the attic of one of the children’s houses in Florida, shipped to my home, and in my hands, I didn’t even bother to look at it for a while. After the Fedex man dropped it off, I peeked inside the carton and confirmed that Shadows was written on the outer film leader; but after I had done that, I put the reels back in the box, closed up the fiberboard container, and resumed doing my university homework. I was convinced that the odds were absolutely against it being the right print of the film. Indifference suddenly turned to excitement and then to terror a few hours later when I manually unspooled four or five feet of footage from the first reel and held it up to my desk lamp. All I could make out was a figure walking down a street, but that was enough. The second version of Shadows began with a crowd scene.

In ten seconds I went from being blasé to being afraid to touch the print for fear of leaving marks on it. I have a projector in my house, but although it took some self-restraint, I didn’t dare project it. If this actually were the long-lost first version of Shadows, it would be just my luck to have the projector shred or burn it. Even short of that sort of catastrophe, introducing a few microscratches by passing the print through a projector would be profoundly irresponsible. I might have spent 17 years and thirty or forty thousand dollars of my own money finding it, but the print was not really mine. It belonged to future generations. I owed them its preservation in as perfect condition as possible.

With a newly gingerly touch, I sealed up the carton and placed it carefully on my coffee table. After a few days of phone calls and emails to preservation experts, I made an appointment to have a high-quality video copy created at a professional film transfer house so that the original would never have to be run through a projector. There was an inevitable delay, of course. It took me three or four days to find the right place with the right equipment, and I then had to wait about a week for my appointment–ten days of anxious sleep, fearful with the completely irrational fear all collectors know that my house would burn down in the interim, ten days of hefting the carton to reassure myself that the film was still in it, before I was able to watch the movie from start to finish and be sure that the whole thing was not just some sort of mistake on my part. It was the first version. It was an unprecedented and almost unbelievable moment in film history: since the released film of the same title was actually Cassavetes' second movie, his oeuvre had a new work added to it, a new first film by America's greatest filmmaker.

The print consisted of two reels of 16mm black-and-white Kodak Safety Film with optical sound. The first reel was 36 minutes long; the second 42 minutes, making a total running time of 78 minutes. It exceeded my expectations in every respect. It was not a rough assembly or work-in-progress, but a finished work of art, complete in every detail, down to its innovative sound design and credits sequence. In terms of its content, there are more than forty minutes of scenes that are not in the second version. The discovery gives us a large chunk of new work by Cassavetes. It’s a little like discovering five or ten early Picassos–if film were taken as seriously as painting, that is.

Physically, although the celluloid base was shrunken and brittle from fifty years of storage, the emulsion was in superb condition. Remember that, unlike the movies we see in a movie theater, or the prints of the second version of Shadows that I myself had seen, this print was not a duplicate or a blow-up, and it had only been passed through a projector five or six times before it was lost. In film terms, it was pristine, as sharp and clear as a film can be. Not only was it custom printed directly from the original negative that had passed through Cassavetes’ camera in 1957, but it was virtually brand new, unworn and unscratched because it had not been watched. In the scenes that the two versions share, the image quality of the first version is, in fact, better even than that of the recently restored UCLA print of the second version (which is in fact of very little interest, since it contains no new material and is not really different from the long-available release prints of the standard second version).

But can I reveal an embarrassing secret? As wonderful as it was to find the first version, it was also a letdown–not only because the excitement of the years of searching was over, but because imaginatively reconstructing what had been in the print was much more intellectually stimulating than simply viewing it. Believe it or not, at moments I found myself feeling almost disappointed that I had found it.

One could ask whether the discovery proves Jonas Mekas right; but that’s the wrong question. It doesn’t really matter. The two versions of Shadows are sufficiently different from each other, with different scenes, settings, and emphases, that they deserve to be thought of as different films. Each stands on its own as an independent work of art.

The real value of the first version is that it gives us an opportunity to go behind the scenes into the workshop of the artist. Art historians X-ray Rembrandt’s work to glimpse his changing intentions. Critics study the differences between the quarto and folio versions of Shakespeare’s plays. There is almost never an equivalent to these things in film. That is the value of the first version of Shadows. It allows us to eavesdrop on Cassavetes’ creative process–to, as it were, stand behind him as he films and edits his first feature. We watch him change his understanding of his film and his characters. His revisions as he moves from one version to the next–the scenes he adds, deletes, loops new dialogue into, adds music to, or moves to new positions as he re-films and re-edits Shadows–allow an almost unprecedented glimpse into the inner workings of the heart and mind of one of the most important artists of the past fifty years.

The odds against finding the print still astonish me–not only because it was the only copy in the world and was in such an unexpected location, but because of the timing. The junk dealer’s children are themselves now in their late fifties and sixties, and the brown cardboard carton would almost certainly have been thrown in the garbage when they died. (At least one of the children has already died.) My friends used to joke that I was looking for a needle in a haystack, but after I found the print I realized that the situation was even more dire than that metaphor suggested: the haystack was not going to be there very much longer. If the print had been in an archive or museum, it could have sat there patiently for the next thousand years waiting for someone to discover it, but as a worthless object gathering dust in the corner of an attic, it would not have survived the next generation’s house clean-out. Though I had no idea that the clock was ticking while I was engaged in my search, after I found the print I realized that it had probably been the last chance to find it for all eternity.

As to why it was left on the subway, anybody’s guess is as good as mine. My own theory (based on personal knowledge of some of the individuals involved with the early screenings, though I dare not name names) is that one of the people associated with the first version was carrying the carton back from its final screening when an attractive blonde got on the train. The rest I leave to your imagination. Thank goodness for blondes. And junk collectors.

24.12.04

 

The DV To 35mm Technology Guide, Part One:
General Overview
an article by Chris Hauser





The Watchdog notes: Many thanks to Chris Hauser of Tape House Digital Film in New York City for this contribution.



Transferring Video Originated
Programs to 35mm Film




There are three methods of transferring video to film. Of the three, the Kinescope transfer (a film camera shoots off of a video monitor) is the oldest; the non-film episodes of The Honeymooners are an example. An EBR (electron beam recorder) uses an electron tube to record directly onto B&W film as separate red, green and blue frames, which are then step printed through filters to color film. While much better than a kinescope, EBRs are old technology, and both of these transfer methods have been characterized as soft, smeary, low in contrast, and prone to ghosting and strobing. The best choice is the Digital Film transfer (more on that later). Regardless of the method you choose, the two most important issues in making a 35mm motion picture from video-originated material are: Interlaced Video and Frame Rate.



Interlaced Video



Problem: The broadcast NTSC and PAL standards interlace video frames. Each second of video contains 60 NTSC fields or 50 PAL fields. If the capture medium is video, each field is recorded at a different point on the timeline: one second of NTSC video is actually 60 distinct images (fields), not 30 whole frames. For this reason, when you de-interlace the video (view field 1 and field 2 together as one frame), the frames will flicker, because the camera, the subject matter, or both have moved between fields. Field-based video cannot be viewed or transferred as frames without huge losses in quality.
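To see why, here is a minimal sketch (in Python with NumPy, using made-up image data) of "weaving" two fields into a frame; because the two fields are exposed 1/60 s apart, any motion between them shows up as comb artifacts in the woven frame:

```python
import numpy as np

def weave_fields(field_1: np.ndarray, field_2: np.ndarray) -> np.ndarray:
    """Interleave two half-height fields into one full frame.
    In NTSC the fields are exposed 1/60 s apart, so anything that
    moved between exposures produces comb/flicker artifacts here."""
    h, w = field_1.shape
    frame = np.empty((h * 2, w), dtype=field_1.dtype)
    frame[0::2] = field_1   # odd-numbered scan lines from field 1
    frame[1::2] = field_2   # even-numbered scan lines from field 2
    return frame

# Two fields of a bright vertical bar that moved 8 pixels between exposures:
f1 = np.zeros((240, 720), np.uint8); f1[:, 100:110] = 255
f2 = np.zeros((240, 720), np.uint8); f2[:, 108:118] = 255
print(weave_fields(f1, f2).shape)   # (480, 720) -- combed at the bar's edges
```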



Solution: Shoot in the Frame movie mode or progressive scan. In either system the camera captures images as frames, not fields. Instead of 60 NTSC or 50 PAL images (fields) per second, Frame movie mode/progressive scan records 30 NTSC or 25 PAL video frames per second.


Look For Yourself: There is an excellent site on the Internet where you can view the difference between interlaced and progressive scan frames: http://home.earthlink.net/~demografx/intrlce.html



Frame Rate



Problem: 35mm motion picture film runs at 24 frames per second; NTSC video runs at 30 frames per second. Shooting in progressive-scan NTSC (frame mode) solves only half the problem: there is still a 6-frame-per-second difference between film and video, and removing those 6 frames will make your video look like Max Headroom. For those of you who missed the TV series, one of its trademark visual effects was the stuttering motion whenever Max spoke or moved. Unless you desire this effect, do not shoot in NTSC.



Solution: Shoot in progressive frame mode in 25 fps PAL. One camera we’ve tested is the Canon DM-XL1. When transferring PAL video to 24 fps film, every video frame becomes a film frame. The extra frame per second makes one minute of video run 62½ seconds in theaters; the slowdown is generally imperceptible. Your camera audio is then stretched 4.1666% to match the new film length and processed to adjust for pitch. As a bonus, PAL captures more information than NTSC: the PAL standard is 625 lines versus NTSC’s 525, and those extra 100 scan lines increase resolution by about 20%. This method of shooting in PAL is not new. Advertising campaigns are frequently shot on film at 25 fps and posted in PAL so they can be broadcast worldwide while leaving an option open for cinema advertising. A PAL video master can be standards-converted to NTSC (without the huge loss of quality you get going from NTSC to PAL) and also transferred to film.
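The arithmetic behind these figures is easy to check; a quick sketch in Python (nothing here beyond the numbers the text itself quotes):

```python
PAL_FPS, FILM_FPS = 25, 24

# Every PAL frame becomes one film frame, so playback slows by 25/24.
print(60 * PAL_FPS / FILM_FPS)          # 62.5  -> one video minute runs 62.5 s
print((PAL_FPS / FILM_FPS - 1) * 100)   # 4.1666... -> audio stretch, in percent
print(100 * (625 - 525) / 525)          # ~19   -> extra PAL scan lines, in percent
```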



Shooting



Aspect Ratios: Compose with the middle 72% of your viewfinder. So long as cables, dolly track, boom mikes, and the like are not in your viewfinder, your video can be dual-purposed: the middle of your frame is the 1.85:1 aspect for cinema, and the whole frame is for broadcast.
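For the curious, the "middle 72%" figure falls out of the geometry; a sketch, assuming square pixels for simplicity (real PAL DV pixels are not square):

```python
def cinema_band_height(frame_w: int, target_ratio: float = 1.85) -> int:
    """Height of the centered band that survives a 1.85:1 extraction."""
    return round(frame_w / target_ratio)

w, h = 768, 576                   # square-pixel PAL frame (4:3)
band = cinema_band_height(w)
print(band, f"{band / h:.0%}")    # 415 lines, 72% of the frame height
```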



Lighting: Video does not have the same range as film. Film sees into shadows and holds detail in over-exposed areas; video goes black or blows out. To avoid the video/electronic look you have to approach lighting differently. For example, instead of shooting a character in direct sunlight at noon, try shooting when the sun is lower in the sky. A carefully planned back-lit shot hides the limitations of video while exploiting its strong points; even in harsh sunlight, a back-lit face against a background in open shade will look better than direct light on both subject and background. When shooting indoors, light low-key, use dimmers on light fixtures, and avoid direct light on white walls. Remember that video has less range than film.



Digital Film



What you need: Theaters need a composite print in order to show your film. The lab makes sound prints from two separate film elements: a picture negative and an optical track negative.



Picture Negative: Much of what is done in a Digital Film transfer happens in software. YUV video frames are converted to an RGB color space, and each RGB frame is made into a digital computer file. Each file is then interpolated from the incoming file size (video resolution) up to a higher resolution. Unlike Kinescopes and EBRs, the Digital Film transfer shoots frames at film resolution: projected on a big screen, unprocessed video looks very grainy, but increasing the resolution gives the video the correct look by mimicking film grain. Once the frames have been interpolated to film resolution, the computer files are output via a film recorder. This film becomes your OCN (original camera negative).
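As a rough illustration of the per-frame upscale only (not the transfer house's actual pipeline, which is proprietary), here is a sketch using Pillow; the file names and the "2K" target size are assumptions for illustration:

```python
from PIL import Image

VIDEO_FRAME = "frame_000001.png"   # hypothetical: one RGB frame off the tape
FILM_RES = (2048, 1638)            # assumed 2K target at the same 1.25:1 shape

frame = Image.open(VIDEO_FRAME).convert("RGB")   # YUV->RGB done upstream
# Interpolate the video-resolution frame up to film resolution:
film_frame = frame.resize(FILM_RES, Image.Resampling.LANCZOS)
film_frame.save("film_000001.tif")               # ready for the film recorder
```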



Audio: 90 PAL minutes will run 93¾ minutes at 24 fps, so to keep the audio in sync you need to stretch the track by 4.1666%. Try to limit the stretch to the original DV audio; music and effects should be mixed after the film transfer. The actual process of stretching audio is time-consuming, and you should not be stretching a full mix: if you do, the mix has to be broken out so that each element can be processed and checked individually. Your final mix should be made against either a work print or video off the OCN, and either way the mix has to be done for theater sound (Dolby SR, SDDS, Dolby SR-D, etc.).
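The stretch is the same 25/24 ratio discussed under Frame Rate; a minimal sketch of the sample arithmetic, assuming a hypothetical 48 kHz track:

```python
SAMPLE_RATE = 48_000
STRETCH = 25 / 24                        # ~1.041666, i.e. +4.1666%

pal_minutes = 90
n_in = pal_minutes * 60 * SAMPLE_RATE    # samples in the PAL-speed track
n_out = round(n_in * STRETCH)            # samples after stretching to film speed
print(n_out / SAMPLE_RATE / 60)          # 93.75 minutes, matching the text
```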



Reels: For features, film must be broken down into reels for the lab, post-production, and theatrical release. A feature in wide release gets spliced onto one big platter for continuous projection, but up until that point it must be in manageable reel sizes. 2000' should be the absolute maximum for one reel. 90' at 24 fps is one minute of program, which gives you about 22 minutes per reel; since film needs an academy count as well as head and tail leader, the program length of each reel must stay under 20 film minutes (19:12 in PAL). When editing a feature for theatrical release, close each reel with a straight cut, with the audio out before the last picture frame. This allows the projectionist to change reels without an audio pop or a jump cut in the middle of a scene. It is also a good idea for your outgoing scene to be in a different environment, or at least a different camera angle (same setting), than the incoming scene, to avoid a noticeable shift in color. Laboratories do not print reels in order: they print a whole batch of reel ones, then a whole batch of reel twos, and so on. Don't take the chance of having the end of one reel need to match the beginning of another. Film developing is a chemical process that is constantly changing, and changing scenes between reels hides potential differences in color and density.
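A sketch of the reel arithmetic just described, with the constants taken from the text (the 93.75-minute feature is the hypothetical figure from the Audio section above):

```python
import math

FEET_PER_MIN = 90        # 35mm at 24 fps
MAX_REEL_FEET = 2000     # absolute maximum per reel
MAX_PROGRAM_MIN = 20     # leaves room for academy count and leaders

print(MAX_REEL_FEET / FEET_PER_MIN)        # ~22.2 min of film fits on 2000'
print(MAX_PROGRAM_MIN * 24 / 25)           # 19.2 min = 19:12, the PAL equivalent
print(math.ceil(93.75 / MAX_PROGRAM_MIN))  # a 93.75-minute feature needs 5 reels
```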



Editing: If you are going to edit on an offline system, do not use the editing computer’s output as your master. Digital video is already compressed, and compressing it further on an editing system will decrease image quality. Clone your camera DV tapes to another digital component format (D-1, D-Beta, DCT, etc.) and auto-conform based on your offline cut. An exception to this rule is to FireWire directly into a desktop computer (the Mac G3/G4 with "Edit DV" or "Final Cut Pro" software is worth looking into) and FireWire your finished cut back out to a new DV tape; that way there is no further compression or generation loss. Go to Apple.com or Digitalorigin.com for computer and software specifications.



Free Advice



If you had a huge budget, you would probably shoot on film in the first place. With this in mind we offer the following suggestion: a finished feature on videotape does not have the same potential for being picked up for theatrical release as one in the can. Distributors like to see those metal ICC cases and hear film reels rattling inside; it’s conditioning. You could whack out a quick Kinescope for screenings, but why spend all your money to create a negative that no distributor would ever use? There is an excellent alternative to laying out the big bucks (yours, I might add) for a feature transfer: make a trailer. From your finished video master, cut a trailer, transfer it to film, and shop that around. Distributors may not even know it came from a tape source. Once they’re interested, have them view the feature on tape; the trailer is proof that your video will look good on film. Plenty of low-budget independents have shot on 16mm only to have the distributor pay for the optical blow-up. If your feature is commercially viable, the Digital Film transfer can be negotiated into your deal.



Conclusion: If you want to shoot and post on video then transfer to film, do so in the PAL standard with a digital camera that has a progressive scan option.



For more information contact Chris Hauser
TapeHouse Digital Film in New York City
tel. (212) 319-5084
cfh@tapehouse.com





23.12.04

 

Christmas : Origin and History

The history of Christmas dates back over 4000 years. Many of our Christmas traditions were celebrated centuries before the Christ child was born. The 12 days of Christmas, the bright fires, the yule log, the giving of gifts, carnivals (parades) with floats, carolers who sing while going from house to house, the holiday feasts, and the church processions can all be traced back to the early Mesopotamians.

Many of these traditions began with the Mesopotamian celebration of the New Year. The Mesopotamians believed in many gods, chief among them Marduk. Each year as winter arrived it was believed that Marduk would do battle with the monsters of chaos. To assist Marduk in his struggle, the Mesopotamians held a festival for the New Year. This was Zagmuk, the New Year's festival that lasted for 12 days.

The Mesopotamian king would return to the temple of Marduk and swear his faithfulness to the god. The traditions called for the king to die at the end of the year and to return with Marduk to battle at his side.

To spare their king, the Mesopotamians used the idea of a "mock" king. A criminal was chosen and dressed in royal clothes. He was given all the respect and privileges of a real king. At the end of the celebration the "mock" king was stripped of the royal clothes and slain, sparing the life of the real king.

The Persians and the Babylonians celebrated a similar festival called the Sacaea. Part of that celebration included an exchange of places: the slaves would become the masters, and the masters were to obey.

Early Europeans believed in evil spirits, witches, ghosts and trolls. As the Winter Solstice approached, with its long cold nights and short days, many people feared the sun would not return. Special rituals and celebrations were held to welcome back the sun.

In Scandinavia during the winter months the sun would disappear for many days. After thirty-five days scouts would be sent to the mountain tops to look for the return of the sun. When the first light was seen the scouts would return with the good news. A great festival would be held, called the Yuletide, and a special feast would be served around a fire burning with the Yule log. Great bonfires would also be lit to celebrate the return of the sun. In some areas people would tie apples to branches of trees to remind themselves that spring and summer would return.

The ancient Greeks held a festival similar to that of the Zagmuk/Sacaea festivals to assist their god Kronos who would battle the god Zeus and his Titans.

The Romans celebrated their god Saturn. Their festival was called Saturnalia, and it began in the middle of December and ended January 1st. With cries of "Jo Saturnalia!" the celebration would include masquerades in the streets, big festive meals, visits to friends, and the exchange of good-luck gifts called Strenae (lucky fruits).

The Romans decked their halls with garlands of laurel and green trees lit with candles. Again the masters and slaves would exchange places.

"Jo Saturnalia!" was a fun and festive time for the Romans, but the Christians though it an abomination to honor the pagan god. The early Christians wanted to keep the birthday of their Christ child a solemn and religious holiday, not one of cheer and merriment as was the pagan Saturnalia.

But as Christianity spread, Church leaders were alarmed by the continuing celebration of pagan customs and Saturnalia among their converts. At first the Church forbade this kind of celebration, but it was to no avail. Eventually it was decided that the celebration would be tamed and made into a celebration fit for the Christian Son of God.

Some legends claim that the Christian "Christmas" celebration was invented to compete against the pagan celebrations of December. The 25th was sacred not only to the Romans but also to the Persians, whose religion, Mithraism, was one of Christianity's main rivals at that time. The Church eventually succeeded in taking the merriment, lights, and gifts from the Saturnalia festival and bringing them to the celebration of Christmas.

The exact day of the Christ child's birth has never been pinpointed. Traditions say that it has been celebrated since the year 98 AD. In 137 AD the Bishop of Rome ordered the birthday of the Christ Child celebrated as a solemn feast. In 350 AD another Bishop of Rome, Julius I, chose December 25th as the observance of Christmas.
 

Deep In the Heart of Tuva

by Billy Bob Hargus (March 1997)

Tuva, a region on the Siberian/Mongolian border surrounded by mountains and desert, gets more freezing weather than the entire Great Lakes region combined. This is a land where wrestling, games with sheep bones, and carnivore appetites are standard. In 1921, a group of herders created this country after Russians, Chinese, Turks, Huns, Mongols, and other armies had over-run the land continuously. Many people outside the region would probably never have heard of it if it weren't for a unique vocal group that started up there around 1992. It's fitting that this was the same year the land experienced a resurgence of national pride: the Tuvan flag and official seal were revived. This is also when Kaagal-ool Khovalyg, Sayan Bapa, Anatoli Kuular, and Alexai Sarytlar started Huun-Huur-Tu and brought the unique Tuvan 'throat-singing' (or höömeï, pronounced her-may) to the world.

In Western music, there have been a number of musical pioneers who explored the contours of the human voice. Meredith Monk, Leon Thomas, Diamanda Galas, Joan LaBarbara, and Bobby McFerrin have made careers out of finding out what vocal cords can reverberate into besides words. Even more so than the gospel exhortations of soul singers who explore the human range of emotion, Tuvan singers have explored the extensive potential of the voice itself in a long tradition. They cultivated this tradition as communication with themselves, the spirits, and the nature around them, which they also imitated with their voices. This may not be so uncommon in a land where cloven-hoofed animals have far out-numbered humans for years. In fact, the group has said 'it's impossible that people who spend so much time around horses- one of the most rhythmic animals alive- would not have absorbed their sense of rhythm.'

As 'world music' has become a fixture in the West for a number of years now, it should be noted that Tuvan throat singing is not just an exotic novelty but part of a rich tradition. Initially, when groups started forming in Tuva, Western styles were copied, as in many other Third World cultures: Tuvan imitations of the Beatles appeared, as well as dance music with throat singing over it. Luckily, earnest practitioners of this fine art also abound in Tuva to carry on the tradition for real.



So, what exactly is 'throat singing' then? Basically, it involves the kind of overtones heard in new music, applied to voices. Höömeï is the name applied to it, though it is really only one of three or four styles, which also include sygyt and kargyraa. You hear a deep humming groan mixed with a high-pitched whistling sound. To practice this art, it's recommended not only to work carefully on proper breathing/inhaling and diet (no cold food before trying to do it) but also to, in the words of champion singer Kongar-ool Ondar, "be in a very uplifted mood; your soul, your inner spiritual voice, must be strong." Sometimes three voicings are heard in one person's singing. Usually it's done a cappella by a single voice (but with its multi-voicing style, a singer may sound like a whole group). Nothing else you've heard is like this. Even hearing this on CD is nothing like witnessing it in person, seeing a group of seated men in 'native garb' give forth with amazing sounds that dart and cut through the air, filling a whole concert hall with the sound of their voices.
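For readers who want a rough feel for the acoustics, here is a crude, purely illustrative Python sketch: a fixed low drone plus one reinforced harmonic, written to a WAV file. Real höömeï shapes the vocal tract to select and sweep overtones of the sung fundamental, which no sine-wave toy captures; the 110 Hz pitch and tenth harmonic are arbitrary assumptions.

```python
import math, struct, wave

RATE, SECONDS = 44_100, 2
F0 = 110.0        # assumed drone pitch (the "humming groan")
HARMONIC = 10     # assumed reinforced overtone (the 1100 Hz "whistle")

frames = bytearray()
for n in range(RATE * SECONDS):
    t = n / RATE
    s = 0.6 * math.sin(2 * math.pi * F0 * t)              # low drone
    s += 0.3 * math.sin(2 * math.pi * F0 * HARMONIC * t)  # high overtone
    frames += struct.pack("<h", int(s * 32_000))          # 16-bit PCM sample

with wave.open("overtone_sketch.wav", "wb") as f:
    f.setnchannels(1)
    f.setsampwidth(2)
    f.setframerate(RATE)
    f.writeframes(bytes(frames))
```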

Thanks to Huun-Huur-Tu, who have by now toured around the whole world a number of times, this music is becoming more and more well-known. The group has made appearances on MTV and the Arts and Entertainment network, recorded movie soundtracks, and participated in a jam session at Frank Zappa's house. There are now throat-singing workshops conducted around the world, as well as international throat-singing competitions held back in Tuva. To put things in perspective, blues singer Paul Pena has brought the tradition full circle to American music, linking it to Howlin' Wolf's guttural groans.

Though they claim to mix experimentation into their work, Huun-Huur-Tu remain true to their culture. They know their own history and use their voices and music to continue the link that they're now a part of. Their name literally means 'the light that breaks over the grass at the beginning or end of the day.' As percussionist Alexander Bapa explains, 'our ensemble used the name because the light rays on the steppe remind us of the separate lines of sound in throat singing.' They also speak of 'respect for ancestors' and 'naturalness and sincerity' when speaking about their music. This can be attributed to the Buddhist faith and shamanism that have existed side by side and have been a part of the land for centuries. Even their instruments are regional creations, such as the 'horsehead' fiddle (igil), conch shell, and shaman rattles. All of their CDs are representative of this great music. The best introduction is probably the wonderful DEEP IN THE HEART OF TUVA CD/booklet (with its extensive notes and background on the country); this is where you hear all manifestations of this music: not just the group itself but also young boys, old men, women, and others exhorting in this tradition. Documentation of the tradition spans back to the 1930s, with transcriptions that have only been brought to light in the last decade. Even today, the group uncovers newly found 'old' material from singers around Tuva.

What is new and experimental about Huun-Huur-Tu, though, is the whole concept of the band itself. When the Soviets ran the area, state-sponsored troupes of singers and dancers were common and were the only outlet for young, upcoming talent. After this era ended, many performers went out on their own to perform. Traditionally, throat singing is done by one person; the idea of a throat-singing group with instruments is new in Tuva. Unfortunately, it has also (ironically) meant that Huun-Huur-Tu finds it much easier to perform in the West than in their own homeland. Endless bureaucracy and a lack of appropriate concert spaces mean that you will usually not find the group doing a show in Tuva. This doesn't mean that they pander to Western audiences, only that their own country (or at least its government and promoters) isn't ready for such innovations yet.

The best that can be hoped for with what Westerners call 'world music' is that it becomes common and familiar enough that it is no longer 'foreign.' The tradition of höömeï is certainly firmly understood and practiced within Tuva and is anything but 'foreign' there. Thanks to Huun-Huur-Tu and other practitioners, it will also become something familiar, common, and welcome in the West. Hopefully this won't mean 'legitimization' merely because another hemisphere is able to appreciate it; it will truly be 'world music', music that is known and appreciated around the world.



--------------------------------------------------------------------------------

Special thanks to Andrew Seidenfeld of No Problem Productions who provided assistance with this article.



--------------------------------------------------------------------------------

DISCOGRAPHY:
60 HORSES IN MY HERD (SHANACHIE) 1993
ORPHAN'S LAMENT (SHANACHIE) 1994
IF I HAD BEEN BORN AN EAGLE (SHANACHIE) 1997

WITH THE BULGARIAN VOICES/ANGELITE
FLY, FLY MY SADNESS (SHANACHIE, originally released with JARO, Germany) 1996

VARIOUS ARTISTS
DEEP IN THE HEART OF TUVA (ELLIPSIS ARTS) 1996
