
Saturday, September 12, 2009

Memory-Editing Drugs

Did you find the premise of director Michel Gondry's "Eternal Sunshine of the Spotless Mind" too far-fetched?

Well... read on.


The Messy Future of Memory-Editing Drugs | Wired Science | Wired.com


The development of a drug that controls a chemical used to form memories sparked heady scientific and philosophical speculation this week.

Granted, the drug has only been tested in rats, but other memory-blunting drugs are being tried in soldiers with post-traumatic stress disorder. It might not be long before memories are pharmaceutically targeted, just as moods are now.

Some think this represents an opportunity to eliminate the crippling psychic effects of past trauma. Others see an ill-advised chemical intrusion into an essential human faculty that threatens to replace our ability to understand and cope with life's inevitabilities.

Oxford University neuroethicist Anders Sandberg spoke with Wired.com about the future of memory-editing drugs. In some ways, said Sandberg, our memories are already being altered. We just don't realize it.

Wired.com: Will these drugs, when they become available, work as expected?

Anders Sandberg: A lot of discussion is based on the false premise that they'll work as well as they would in a science fiction story. In practice, well-studied, well-understood drugs like aspirin have side effects that can be annoying or even dangerous. I think the same thing will go for memory editing.

Wired.com: How selective will memory editing be?

Sandberg: Current research seems to suggest that it can be pretty specific, but there will be side effects. It may not even be that you forget other memories. Small, false memories could be created. And we're probably not going to be able to predict that before we actually try them.

Wired.com: What's the right way to test the drugs?

Sandberg: The cautious approach works. Right now, there are small clinical trials using propranolol to reduce post-traumatic stress disorder, which is a good start. We should also find better ways of doing the trials, because we don't really know what we're looking for.

When testing a cancer drug, we look at side effects in terms of toxicity. Here we might want to look at all aspects of thinking, which is really hard, because you can't test for all of them.

In the future, as we adopt more technological ways of recording and documenting our lives, those records will play a bigger part in testing the drugs. We'll be able to ask: How does this help in everyday life? How often do you get "tip of the tongue" phenomena? Does that increase in relation to the drug?

Wired.com: It seems that it would be easy to test "tip of the tongue" drug effects on the sorts of small things one recalls on an everyday basis. But what if it's old, infrequently recalled but still-important memories that are threatened by side effects?

Sandberg: It's pretty messy to determine which memories are important to us. They quite often crop up without our consciously realizing that we're thinking of them. That's probably good news, as every time you recall a memory, you also tend to strengthen it.


Wired.com: How likely is the manipulation of these fundamental memories?

Sandberg: Big memories, with lots of connections to other things we've done, will probably be messy to deal with. But I don't think those are the memories that people want to give up. Most people would want to edit memories that impair them.

Of course, if we want to tweak memories to look better to ourselves, we might get a weird concept of self.


Wired.com: I've asked about memory removal — but should the discussion involve adding memories, too?

Sandberg: People are more worried about deletion. We have a preoccupation with amnesia, and are more fearful of losing something than of adding falsehoods.

The problem is that it's the falsehoods that really mess you up. If you don't know something, you can look it up and remedy your lack of information. But if you believe something false, that can lead you to act far more wrongly.

You can imagine someone modifying their memories of war to make them look less cowardly and more brave. Now they'll think they're a brave person. At that point, you end up with the interesting question of whether, in a crisis situation, they would now be brave.

Wired.com: In your article with S. Matthew Liao, you use another example of memory-editing drugs for soldiers: if the memory of a mistaken action is erased, a soldier might not learn from his remorse.

Sandberg: To some extent, we already have to deal with this. My grandfather's story of having been a volunteer in the Finnish Winter War shifted over time. He didn't become much braver from year to year, but there was a difference between the earlier and later versions.

We can't trust our memories. But on the other hand, our memories are the basis for most of our decisions. We take it as a given that we can trust them, which is problematic.


Wired.com: But this fluidity of memory at least exists in an organic framework. Might we lose something in the transition to an abrupt, directed fluidity?

Sandberg: There's some truth to that. We have authentic fake memories, in a sense. My grandfather might have made his memories a bit more brave over time, but that was affected by his personality and his other circumstances, and tied to who he was. If he just went to the memory clinic and wanted to have won the battle, that would be more jarring.

If you do that kind of jarring change, and it doesn't connect to anything else in the personality, it's probably not going to work that well.

Wired.com: In your article, you also bring up forgiveness. If we no longer remember when someone has wronged us, we might not learn to forgive them, and that's an important social ability.

Sandberg: My co-author is more concerned than I am, but I do think there's something interesting going on with forgiveness. It's psychological, emotional and moral — a complex can of worms.

I can see problems, not from a moral standpoint, but a legal one. What if I hit you with my car, and to prevent PTSD you take propranolol, and afterwards, in court, say it wasn't too serious? A clever lawyer might argue that the victim's lack of concern means the crime should be disregarded.

I'm convinced that we're going to see a lot of interesting legal cases in the next few years, as neuroscience gets involved. People tend to believe witnesses. Suppose a witness says, "I'd just been taking my Ritalin" — should we believe him more, because we've got an enhanced memory? And if a witness has been taking a drug to impair memory, is that a reason to believe that her account is not true?

With this kind of neuroscientific evidence, it's very early to tell what we can trust. We need to do actual experiments and measure how drugs enhance or impair memory, or, more problematically, introduce a bias. Some drugs might enhance emotional memories over unemotional ones, or vice versa.

Wired.com: Is it paranoid to worry that someday people will be stuck drifting in a sea of shifting and unreliable memories?

Sandberg: I think we're already in this sea, but we don't notice it most of the time. Most people think, "I've got a slightly bad memory." Then they completely trust what they remember, even when it's completely unreliable.

Maybe all this is good, because it forces us to recognize that the nature of our memory is quite changeable.

Sunday, May 10, 2009

The Grid, Our Cars and the Net: One Idea to Link Them All

The Grid, Our Cars and the Net: One Idea to Link Them All | Autopia
By David Weinberger
May 8, 2009


Editor's note: Robin Chase thinks a lot about transportation and the internet, and how to link them. She connected them when she founded Zipcar, and she wants to do it again by making our electric grid and our cars smarter. Time magazine recently named her one of the 100 most influential people of the year. David Weinberger sat down with Chase to discuss her idea.

Robin Chase considers the future of electricity, the future of cars and the internet as three terms in a single equation, even if most of us don't yet realize they're on the same chalkboard. Solve the equation correctly, she says, and we create a greener future where innovation thrives. Get it wrong, and our grandchildren will curse our names.

Chase thinks big, and she's got the cred to back it up. She created an improbable network of automobiles called Zipcar. Getting it off the ground required not only buying a fleet of cars, but convincing cities to dedicate precious parking spaces to them. It was a crazy idea, and it worked. Zipcar now has 6,000 cars and 250,000 users in 50 towns.

Now she's moving on to the bigger challenge of integrating a smart grid with our cars – and then everything else. The kicker is how they come together. You can sum it up as a Tweet: The intelligent network we need for electricity can also turn cars into nodes. Interoperability is a multiplier. Get it right!


Chase starts by explaining the smart grid. There's broad consensus that our electrical system should do more than carry electricity. It should carry information. That would allow a more intelligent, and efficient, use of power.

"Our electric infrastructure is designed for the rare peak of usage," Chase says. "That's expensive and wasteful."

Changing that requires a smart grid. What we have is a dumb one. We ask for electricity and the grid provides it, no questions asked. A smart grid asks questions and answers them. It makes the meter on your wall a sensor that links you to a network that knows how much power you're using, when you're using it and how to reduce your energy needs – and costs.

Such a system will grow more important as we become energy producers, not just consumers. Electric vehicles and plug-in hybrids will return power to the grid. Rooftop solar panels and backyard wind turbines will, at times, produce more energy than we can store. A smart grid generates what we need and lets us use what we generate. That's why the Obama Administration allocated $4.5 billion in the stimulus bill for smart grid R&D.

This pleases Chase, but it also makes her nervous. The smart grid must be an information network, but we have a tradition of getting such things wrong. Chase is among those trying to convince the government that the safest and most robust network will use open internet protocols and standards. For once the government seems inclined to listen.

Chase switches gears to talk about how cars fit into the equation. She sees automobiles as just another network device, one that, like the smart grid, should be open and net-based.

"Cars are network nodes," she says. "They have GPS and Bluetooth and toll-both transponders, and we're all on our cell phones and lots of cars have OnStar support services."

That's five networks. Automakers and academics will bring us more. They're working on smart cars that will communicate with us, with one another and with the road. How will those cars connect to the network? That's the third part of Chase's equation: Mesh networking.

In a typical Wi-Fi network, there's one router and a relatively small number of devices using it as a gateway to the internet. In a mesh network, every device is also a router. Bring in a new mesh device and it automatically links to any other mesh devices within radio range. It is an example of what internet architect David Reed calls "cooperative gain": the more devices, the more bandwidth across the network. Chase offers an analogy to explain it.

"Wi-Fi is like a bridge that connects the highways on either side of the stream," she says. "You build it wide enough to handle the maximum traffic you expect. If too much comes, it gets congested. When not enough arrives, you've got excess capacity. Mesh takes a different approach: Each person who wants to cross throws in a flat rock that's above the water line. The more people who do that, the more ways there are to get across the river."

Cooperative gain means more users bring more capacity, not less. It's always right-sized. Of course, Chase points out, if you're trying to go a long distance, you're ultimately forced back onto the broadband bridge where the capacity is limited. But for local intra-mesh access, it's a brilliant and counter-intuitive strategy.
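To make "cooperative gain" concrete, here is a minimal sketch, not Chase's or Reed's actual model, of a mesh as a plain graph in which every node also forwards traffic. The node names and radio links are invented; the point is only that adding one device can open a new multi-hop route to a node that was previously unreachable.

```python
# Toy mesh: every node is also a router, so reachability is just a question
# of whether a chain of radio links exists between two devices.
from collections import deque

def hops(links, src, dst):
    """Breadth-first search: fewest radio hops from src to dst, or None."""
    seen, queue = {src}, deque([(src, 0)])
    while queue:
        node, dist = queue.popleft()
        if node == dst:
            return dist
        for nxt in links.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, dist + 1))
    return None

# Hypothetical devices A..E (meters, cars) and the links within radio range.
mesh = {"A": {"B"}, "B": {"A", "C"}, "C": {"B"}}
print(hops(mesh, "A", "C"))   # 2 hops, relayed through B
print(hops(mesh, "A", "E"))   # None: E is not on the mesh yet

# A new car D parks between C and a previously unreachable meter E.
mesh.update({"C": {"B", "D"}, "D": {"C", "E"}, "E": {"D"}})
print(hops(mesh, "A", "E"))   # 4 hops: the new node added capacity, not congestion
```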

Mesh networking is growing as a broad-based approach to connectivity. A mesh network with 240 nodes covers Vienna. Similar projects are underway in Barcelona, Athens, the Czech Republic and, before long, in two areas of Boston not far from the cafe we're sitting in. But the most dramatic examples are the battlefields of Iraq and Afghanistan.

"Today in Iraq and Afghanistan, soldiers and tanks and airplanes are running around using mesh networks," said Chase. "It works, it's secure, it's robust. If a node or device disappears, the network just reroutes the data."

And, perhaps most important, it's in motion. That's what allows Chase's plural visions to go singular. Build a smart electrical grid that uses Internet protocols and puts a mesh network device in every structure that has an electric meter. Sweep out the half dozen networks in our cars and replace them with an open, Internet-based platform. Add a mesh router. A nationwide mesh cloud will form, linking vehicles that can connect with one another and with the rest of the network. It's cooperative gain gone national, gone mobile, gone open.

Chase's mesh vision draws some skepticism. Some say it won't scale up; the fact that it's already being used in places like Afghanistan and Vienna suggests it can. Others say moving vehicles may not be able to hook into and out of mesh networks quickly enough. Chase argues it's already possible to do so in less than a second, and that time will only come down. But even if every car and every electric meter were meshed, there's still a lot of highway out there that wouldn't be served, right? Chase has an answer for that, too.

"Cars would have cellular and Wi-Fi as backups," she said.

The economics are right, she argues. Rather than over-building to handle peak demand and letting capacity go unused, we would right-size our infrastructure to provide exactly what we need, when we need it, with minimum waste and maximum efficiency.

"There's an economy of network scale here," she says. "The traffic-light guys should be interested in this for their own purposes, and so should the power-grid folks and the emergency responders and the Homeland Security folks and, well, everyone. Mesh networks based on open standards are economically justifiable for any one of these things. Put them together - network the networks – and for the same exact infrastructure spend, you get a ubiquitous, robust, resilient, open communication platform — ripe for innovation — without spending a dollar more."

The time is right, too. There's $7.2 billion in the stimulus bill for broadband, $4.5 billion for the smart grid and about $5 billion for transportation technology. The Transportation Reauthorization bill is coming up, too. At $300 billion it is second only to education when it comes to federal discretionary spending. We are about to make a huge investment in a set of networks. It will be difficult to gather the political and economic will to change them once they are deployed.

"We need to get this right, right now," Chase says.

Build each of these infrastructures using open networking standards and we enable cooperative gain at the network level itself. Get it wrong and we will have paved over a generational opportunity.

David Weinberger is a fellow at Harvard's Berkman Center for Internet and Society. E-mail him at self@evident.com.

Friday, March 13, 2009

Vindicating Lenin... sort of

“When we are victorious on a world scale, I think we shall use gold for the purpose of building public lavatories in the streets of some of the largest cities of the world.” - V.Lenin, 1921.

Solid 24k gold toilet. Hong Kong (mecca of liberal capitalism), 2001.

Monday, November 24, 2008

The Screen People of Tomorrow



Published: November 21, 2008

Everywhere we look, we see screens. The other day I watched clips from a movie as I pumped gas into my car. The other night I saw a movie on the seatback screen of a plane. We will watch anywhere. Screens playing video pop up in the most unexpected places — like A.T.M. machines and supermarket checkout lines and tiny phones; some movie fans watch entire films in between calls. These ever-present screens have created an audience for very short moving pictures, as brief as three minutes, while cheap digital creation tools have empowered a new generation of filmmakers, who are rapidly filling up those screens. We are headed toward screen ubiquity.

When technology shifts, it bends the culture. Once, long ago, culture revolved around the spoken word. The oral skills of memorization, recitation and rhetoric instilled in societies a reverence for the past, the ambiguous, the ornate and the subjective. Then, about 500 years ago, orality was overthrown by technology. Gutenberg’s invention of metallic movable type elevated writing into a central position in the culture. By the means of cheap and perfect copies, text became the engine of change and the foundation of stability. From printing came journalism, science and the mathematics of libraries and law. The distribution-and-display device that we call printing instilled in society a reverence for precision (of black ink on white paper), an appreciation for linear logic (in a sentence), a passion for objectivity (of printed fact) and an allegiance to authority (via authors), whose truth was as fixed and final as a book. In the West, we became people of the book.

Video Citing: TimeTube, on the Web, gives a genealogy of the most popular videos and their descendants, and charts their popularity in time-line form.

Now invention is again overthrowing the dominant media. A new distribution-and-display technology is nudging the book aside and catapulting images, and especially moving images, to the center of the culture. We are becoming people of the screen. The fluid and fleeting symbols on a screen pull us away from the classical notions of monumental authors and authority. On the screen, the subjective again trumps the objective. The past is a rush of data streams cut and rearranged into a new mashup, while truth is something you assemble yourself on your own screen as you jump from link to link. We are now in the middle of a second Gutenberg shift — from book fluency to screen fluency, from literacy to visuality.

The overthrow of the book would have happened long ago but for the great user asymmetry inherent in all media. It is easier to read a book than to write one; easier to listen to a song than to compose one; easier to attend a play than to produce one. But movies in particular suffer from this user asymmetry. The intensely collaborative work needed to coddle chemically treated film and paste together its strips into movies meant that it was vastly easier to watch a movie than to make one. A Hollywood blockbuster can take a million person-hours to produce and only two hours to consume. But now, cheap and universal tools of creation (megapixel phone cameras, Photoshop, iMovie) are quickly reducing the effort needed to create moving images. To the utter bafflement of the experts who confidently claimed that viewers would never rise from their reclining passivity, tens of millions of people have in recent years spent uncountable hours making movies of their own design. Having a ready and reachable audience of potential millions helps, as does the choice of multiple modes in which to create. Because of new consumer gadgets, community training, peer encouragement and fiendishly clever software, the ease of making video now approaches the ease of writing.

This is not how Hollywood makes films, of course. A blockbuster film is a gigantic creature custom-built by hand. Like a Siberian tiger, it demands our attention — but it is also very rare. In 2007, 600 feature films were released in the United States, or about 1,200 hours of moving images. As a percentage of the hundreds of millions of hours of moving images produced annually today, 1,200 hours is tiny. It is a rounding error.

We tend to think the tiger represents the animal kingdom, but in truth, a grasshopper is a truer statistical example of an animal. The handcrafted Hollywood film won’t go away, but if we want to see the future of motion pictures, we need to study the swarming food chain below — YouTube, indie films, TV serials and insect-scale lip-sync mashups — and not just the tiny apex of tigers. The bottom is where the action is, and where screen literacy originates.

Thursday, November 13, 2008

Google Insights Applications - 1

clipped from www.google.com

The examples below showcase some different ways of using Google Insights for Search. Whether you’re an advertising agency, a small business owner, a multinational corporation, or an academic researcher, Insights for Search can help you gauge interest in pertinent search terms.

Choosing advertising messages

Insights can help you determine which messages resonate best. For example, an automobile manufacturer may be unsure of whether it should highlight fuel efficiency, safety, or engine performance to market a new car model.

When the three features are entered into Insights, we can see that there's a considerable amount of interest in car safety. With this information, the manufacturer may want to consider incorporating car safety into its marketing strategy.

Examining seasonality

Insights can be used to determine seasonality. For example, a ski resort may want to find out when people search for ski-related terms most often.

In this example, the same time frame (June through May) is being compared across several years.

The results are fairly consistent throughout the years: interest picks up in August and peaks in December and January. With this information, the ski resort can anticipate demand and make informed decisions about the appropriate allocation of everything from its advertising budget to staffing to resort resources.
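As a rough illustration of the seasonality check described above, the sketch below assumes the weekly interest figures have already been downloaded from Insights for Search as a CSV; the file name and the week/interest column names are placeholders, not anything Google specifies.

```python
# Average the weekly search-interest numbers by calendar month to find the
# seasonal peak (for ski terms, this lands in December or January).
import pandas as pd

df = pd.read_csv("ski_interest.csv", parse_dates=["week"])   # hypothetical export
monthly = df.groupby(df["week"].dt.month)["interest"].mean()

print(monthly.round(1))                  # mean interest per calendar month
print("Peak month:", monthly.idxmax())   # month number with the highest interest
```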

Creating brand associations

Insights can be a helpful tool in creating brand associations. Take, for example, an advertising agency that needs to build a compelling advertising campaign for its client, a computer hardware company. The agency needs to know what competing brands are doing: how should they position their client's product against them?


When comparing laptops and notebook as search terms, it's useful to apply the Category filter, which narrows the data down to just Computers & Electronics.

Carefully examining the resulting top related searches and the rising searches can help the agency better understand competitors' offers, thereby creating a campaign to differentiate their client's brand.

Entering new markets

Insights can be useful in identifying a new market. A wine distributor may be looking to expand into new markets. By entering wine + vino, and comparing the data across multiple countries, such as Argentina, Mexico, Spain, and Venezuela, the distributor can get a sense of where interest is more prevalent.

The resulting graph indicates greater interest in Spain and Argentina. Choosing Spain, for example, the distributor can examine the subregions and consider centralizing distribution in the La Rioja region, where interest appears to be the highest.


Using Google Trends and Google Insights

Google on Flu



Google Uses Searches to Track Flu’s Spread

Published: November 11, 2008

SAN FRANCISCO — There is a new common symptom of the flu, in addition to the usual aches, coughs, fevers and sore throats. Turns out a lot of ailing Americans enter phrases like “flu symptoms” into Google and other search engines before they call their doctors.

That simple act, multiplied across millions of keyboards in homes around the country, has given rise to a new early warning system for fast-spreading flu outbreaks, called Google Flu Trends.

Tests of the new Web tool from Google.org, the company’s philanthropic unit, suggest that it may be able to detect regional outbreaks of the flu a week to 10 days before they are reported by the Centers for Disease Control and Prevention.

In early February, for example, the C.D.C. reported that flu cases had recently spiked in the mid-Atlantic states. But Google says its search data show a spike in queries about flu symptoms two weeks before that report was released. Its new service at google.org/flutrends analyzes those searches as they come in, creating graphs and maps of the country that, ideally, will show where the flu is spreading.

The C.D.C. reports are slower because they rely on data collected and compiled from thousands of health care providers, labs and other sources. Some public health experts say the Google data could help accelerate the response of doctors, hospitals and public health officials to a nasty flu season, reducing the spread of the disease and, potentially, saving lives.

“The earlier the warning, the earlier prevention and control measures can be put in place, and this could prevent cases of influenza,” said Dr. Lyn Finelli, lead for surveillance at the influenza division of the C.D.C. From 5 to 20 percent of the nation’s population contracts the flu each year, she said, leading to roughly 36,000 deaths on average.

The service covers only the United States, but Google is hoping to eventually use the same technique to help track influenza and other diseases worldwide.

“From a technological perspective, it is the beginning,” said Eric E. Schmidt, Google’s chief executive.

The premise behind Google Flu Trends — what appears to be a fruitful marriage of mob behavior and medicine — has been validated by an unrelated study indicating that the data collected by Yahoo, Google’s main rival in Internet search, can also help with early detection of the flu.

“In theory, we could use this stream of information to learn about other disease trends as well,” said Dr. Philip M. Polgreen, assistant professor of medicine and epidemiology at the University of Iowa and an author of the study based on Yahoo’s data.

Still, some public health officials note that many health departments already use other approaches, like gathering data from visits to emergency rooms, to keep daily tabs on disease trends in their communities.

“We don’t have any evidence that this is more timely than our emergency room data,” said Dr. Farzad Mostashari, assistant commissioner of the Department of Health and Mental Hygiene in New York City.

If Google provided health officials with details of the system’s workings so that it could be validated scientifically, the data could serve as an additional, free way to detect influenza, said Dr. Mostashari, who is also chairman of the International Society for Disease Surveillance.

A paper on the methodology of Google Flu Trends is expected to be published in the journal Nature.

Researchers have long said that the material published on the Web amounts to a form of “collective intelligence” that can be used to spot trends and make predictions.

But the data collected by search engines is particularly powerful, because the keywords and phrases that people type into them represent their most immediate intentions. People may search for “Kauai hotel” when they are planning a vacation and for “foreclosure” when they have trouble with their mortgage. Those queries express the world’s collective desires and needs, its wants and likes.

Internal research at Yahoo suggests that increases in searches for certain terms can help forecast what technology products will be hits, for instance. Yahoo has begun using search traffic to help it decide what material to feature on its site.

Two years ago, Google began opening its search data trove through Google Trends, a tool that allows anyone to track the relative popularity of search terms. Google also offers more sophisticated search traffic tools that marketers can use to fine-tune ad campaigns. And internally, the company has tested the use of search data to reach conclusions about economic, marketing and entertainment trends.

“Most forecasting is basically trend extrapolation,” said Hal Varian, Google’s chief economist. “This works remarkably well, but tends to miss turning points, times when the data changes direction. Our hope is that Google data might help with this problem.”
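Varian's point about turning points is easy to see with a toy example (a sketch, not his method): fit a straight line to the early weeks of a series, and the extrapolation keeps rising even after the real numbers have turned down.

```python
# Trend extrapolation: a linear fit to weeks 0-7 projected onto weeks 8-11,
# where the (made-up) series has already turned downward.
import numpy as np

weeks = np.arange(12)
actual = np.array([10, 12, 14, 16, 18, 20, 22, 24, 22, 19, 15, 11])

slope, intercept = np.polyfit(weeks[:8], actual[:8], deg=1)   # fit the rising part
forecast = slope * weeks + intercept                          # extrapolate forward

for w in range(8, 12):
    print(f"week {w}: forecast {forecast[w]:.0f}, actual {actual[w]}")
```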

Prabhakar Raghavan, who is in charge of Yahoo Labs and the company’s search strategy, also said search data could be valuable for forecasters and scientists, but privacy concerns had generally stopped the company from sharing it with outside academics.

Google Flu Trends avoids privacy pitfalls by relying only on aggregated data that cannot be traced to individual searchers. To develop the service, Google’s engineers devised a basket of keywords and phrases related to the flu, including thermometer, flu symptoms, muscle aches, chest congestion and many others.

Google then dug into its database, extracted five years of data on those queries and mapped it onto the C.D.C.’s reports of influenzalike illness. Google found a strong correlation between its data and the reports from the agency, which advised it on the development of the new service.
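The core of the approach, as described here, is to map the aggregated volume of flu-related queries onto the C.D.C.'s influenza-like-illness (ILI) reports and check how well the two track each other. The sketch below shows that correlation step with invented weekly numbers; Google's actual keyword basket and model are not reproduced.

```python
# Correlate the weekly share of flu-related queries with the CDC's ILI rate.
import numpy as np

query_share = np.array([0.8, 0.9, 1.4, 2.1, 3.0, 2.6, 1.7, 1.1])  # % of searches (made up)
cdc_ili     = np.array([1.0, 1.1, 1.5, 2.3, 3.2, 2.9, 2.0, 1.3])  # % ILI visits (made up)

r = np.corrcoef(query_share, cdc_ili)[0, 1]
print(f"Correlation between query share and ILI rate: {r:.2f}")
```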

“We know it matches very, very well in the way flu developed in the last year,” said Dr. Larry Brilliant, executive director of Google.org. Dr. Finelli of the C.D.C. and Dr. Brilliant both cautioned that the data needed to be monitored to ensure that the correlation with flu activity remained valid.

Google also says it believes the tool may help people take precautions if a disease is in their area.

Others have tried to use information collected from Internet users for public health purposes. A Web site called whoissick.org, for instance, invites people to report what ails them and superimposes the results on a map. But the site has received relatively little traffic.

HealthMap, a project affiliated with the Children’s Hospital Boston, scours the Web for articles, blog posts and newsletters to create a map that tracks emerging infectious diseases around the world. It is backed by Google.org, which counts the detection and prevention of diseases as one of its main philanthropic objectives.

But Google Flu Trends appears to be the first public project that uses the powerful database of a search engine to track a disease.

“This seems like a really clever way of using data that is created unintentionally by the users of Google to see patterns in the world that would otherwise be invisible,” said Thomas W. Malone, a professor at the Sloan School of Management at M.I.T. “I think we are just scratching the surface of what’s possible with collective intelligence.”

A version of this article appeared in print on November 12, 2008, on page A1 of the New York edition.

Friday, November 24, 2000

The Screen People of Tomorrow (cont.)

An emerging set of cheap tools is now making it easy to create digital video. There were more than 10 billion views of video on YouTube in September. The most popular videos were watched as many times as any blockbuster movie. Many are mashups of existing video material. Most vernacular video makers start with the tools of Movie Maker or iMovie, or with Web-based video editing software like Jumpcut. They take soundtracks found online, or recorded in their bedrooms, cut and reorder scenes, enter text and then layer in a new story or novel point of view. Remixing commercials is rampant. A typical creation might artfully combine the audio of a Budweiser “Wassup” commercial with visuals from “The Simpsons” (or the Teletubbies or “Lord of the Rings”). Recutting movie trailers allows unknown auteurs to turn a comedy into a horror flick, or vice versa.

Rewriting video can even become a kind of collective sport. Hundreds of thousands of passionate anime fans around the world (meeting online, of course) remix Japanese animated cartoons. They clip the cartoons into tiny pieces, some only a few frames long, then rearrange them with video editing software and give them new soundtracks and music, often with English dialogue. This probably involves far more work than was required to edit the original cartoon but far less work than editing a clip a decade ago. The new videos, called Anime Music Videos, tell completely new stories. The real achievement in this subculture is to win the Iron Editor challenge. Just as in the TV cookoff contest “Iron Chef,” the Iron Editor must remix videos in real time in front of an audience while competing with other editors to demonstrate superior visual literacy. The best editors can remix video as fast as you might type.

In fact, the habits of the mashup are borrowed from textual literacy. You cut and paste words on a page. You quote verbatim from an expert. You paraphrase a lovely expression. You add a layer of detail found elsewhere. You borrow the structure from one work to use as your own. You move frames around as if they were phrases.

Digital technology gives the professional a new language as well. An image stored on a memory disc instead of celluloid film has a plasticity that allows it to be manipulated as if the picture were words rather than a photo. Hollywood mavericks like George Lucas have embraced digital technology and pioneered a more fluent way of filmmaking. In his “Star Wars” films, Lucas devised a method of moviemaking that has more in common with the way books and paintings are made than with traditional cinematography.

In classic cinematography, a film is planned out in scenes; the scenes are filmed (usually more than once); and from a surfeit of these captured scenes, a movie is assembled. Sometimes a director must go back for “pickup” shots if the final story cannot be told with the available film. With the new screen fluency enabled by digital technology, however, a movie scene is something more flexible: it is like a writer’s paragraph, constantly being revised. Scenes are not captured (as in a photo) but built up incrementally. Layers of visual and audio refinement are added over a crude outline of the motion, the mix constantly in flux, always changeable. George Lucas’s last “Star Wars” movie was layered up in this writerly way. He took the action “Jedis clashing swords — no background” and laid it over a synthetic scene of a bustling marketplace, itself blended from many tiny visual parts. Light sabers and other effects were digitally painted in later, layer by layer. In this way, convincing rain, fire and clouds can be added in additional layers with nearly the same kind of freedom with which Lucas might add “it was a dark and stormy night” while writing the script. Not a single frame of the final movie was left untouched by manipulation. In essence, a digital film is written pixel by pixel.

The recent live-action feature movie “Speed Racer,” while not a box-office hit, took this style of filmmaking even further. The spectacle of an alternative suburbia was created by borrowing from a database of existing visual items and assembling them into background, midground and foreground. Pink flowers came from one photo source, a bicycle from another archive, a generic house roof from yet another. Computers do the hard work of keeping these pieces, no matter how tiny and partial they are, in correct perspective and alignment, even as they move. The result is a film assembled from a million individual existing images. In most films, these pieces are handmade, but increasingly, as in “Speed Racer,” they can be found elsewhere.

In the great hive-mind of image creation, something similar is already happening with still photographs. Every minute, thousands of photographers are uploading their latest photos on the Web site Flickr. The more than three billion photos posted to the site so far cover any subject you can imagine; I have not yet been able to stump the site with a request. Flickr offers more than 200,000 images of the Golden Gate Bridge alone. Every conceivable angle, lighting condition and point of view of the Golden Gate Bridge has been photographed and posted. If you want to use an image of the bridge in your video or movie, there is really no reason to take a new picture of this bridge. It’s been done. All you need is a really easy way to find it.

Similar advances have taken place with 3D models. On Google SketchUp’s 3D Warehouse, you can find insanely detailed three-dimensional virtual models of most major building structures of the world. Need a street in San Francisco? Here’s a filmable virtual set. With powerful search and specification tools, high-resolution clips of any bridge in the world can be circulated into the common visual dictionary for reuse. Out of these ready-made “words,” a film can be assembled, mashed up from readily available parts. The rich databases of component images form a new grammar for moving images.

After all, this is how authors work. We dip into a finite set of established words, called a dictionary, and reassemble these found words into articles, novels and poems that no one has ever seen before. The joy is recombining them. Indeed it is a rare author who is forced to invent new words. Even the greatest writers do their magic primarily by rearranging formerly used, commonly shared ones. What we do now with words, we’ll soon do with images.

For directors who speak this new cinematographic language, even the most photo-realistic scenes are tweaked, remade and written over frame by frame. Filmmaking is thus liberated from the stranglehold of photography. Gone is the frustrating method of trying to capture reality with one or two takes of expensive film and then creating your fantasy from whatever you get. Here reality, or fantasy, is built up one pixel at a time as an author would build a novel one word at a time. Photography champions the world as it is, whereas this new screen mode, like writing and painting, is engineered to explore the world as it might be.

But merely producing movies with ease is not enough for screen fluency, just as producing books with ease on Gutenberg’s press did not fully unleash text. Literacy also required a long list of innovations and techniques that permit ordinary readers and writers to manipulate text in ways that make it useful. For instance, quotation symbols make it simple to indicate where one has borrowed text from another writer. Once you have a large document, you need a table of contents to find your way through it. That requires page numbers. Somebody invented them (in the 13th century). Longer texts require an alphabetic index, devised by the Greeks and later developed for libraries of books. Footnotes, invented in about the 12th century, allow tangential information to be displayed outside the linear argument of the main text. And bibliographic citations (invented in the mid-1500s) enable scholars and skeptics to systematically consult sources. These days, of course, we have hyperlinks, which connect one piece of text to another, and tags, which categorize a selected word or phrase for later sorting.

All these inventions (and more) permit any literate person to cut and paste ideas, annotate them with her own thoughts, link them to related ideas, search through vast libraries of work, browse subjects quickly, resequence texts, refind material, quote experts and sample bits of beloved artists. These tools, more than just reading, are the foundations of literacy.

If text literacy meant being able to parse and manipulate texts, then the new screen fluency means being able to parse and manipulate moving images with the same ease. But so far, these “reader” tools of visuality have not made their way to the masses. For example, if I wanted to visually compare the recent spate of bank failures with similar events by referring you to the bank run in the classic movie “It’s a Wonderful Life,” there is no easy way to point to that scene with precision. (Which of several sequences did I mean, and which part of them?) I can do what I just did and mention the movie title. But even online I cannot link from this sentence to those “passages” in an online movie. We don’t have the equivalent of a hyperlink for film yet. With true screen fluency, I’d be able to cite specific frames of a film, or specific items in a frame. Perhaps I am a historian interested in oriental dress, and I want to refer to a fez worn by someone in the movie “Casablanca.” I should be able to refer to the fez itself (and not the head it is on) by linking to its image as it “moves” across many frames, just as I can easily link to a printed reference of the fez in text. Or even better, I’d like to annotate the fez in the film with other film clips of fezzes as references.
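Nothing like a hyperlink for film exists yet, but a sketch of the data such a citation would have to carry makes the idea concrete; the field names and frame numbers below are invented purely for illustration.

```python
# A hypothetical "film citation": a work, a span of frames, an optional region
# within the frame (for the fez rather than the head it is on), and a note.
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class FilmCitation:
    title: str                                           # work being cited
    start_frame: int                                     # first frame of the passage
    end_frame: int                                       # last frame of the passage
    region: Optional[Tuple[int, int, int, int]] = None   # (x, y, w, h) box in the frame
    note: str = ""                                       # annotation by the citing author

bank_run = FilmCitation("It's a Wonderful Life", 61200, 63550,
                        note="the run on the Building and Loan")
fez = FilmCitation("Casablanca", 10440, 10620, region=(412, 88, 60, 70),
                   note="the fez itself, not the head it is on")
print(bank_run)
print(fez)
```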

With full-blown visuality, I should be able to annotate any object, frame or scene in a motion picture with any other object, frame or motion-picture clip. I should be able to search the visual index of a film, or peruse a visual table of contents, or scan a visual abstract of its full length. But how do you do all these things? How can we browse a film the way we browse a book?

It took several hundred years for the consumer tools of text literacy to crystallize after the invention of printing, but the first visual-literacy tools are already emerging in research labs and on the margins of digital culture. Take, for example, the problem of browsing a feature-length movie. One way to scan a movie would be to super-fast-forward through the two hours in a few minutes. Another way would be to digest it into an abbreviated version in the way a theatrical-movie trailer might. Both these methods can compress the time from hours to minutes. But is there a way to reduce the contents of a movie into imagery that could be grasped quickly, as we might see in a table of contents for a book?

Academic research has produced a few interesting prototypes of video summaries but nothing that works for entire movies. Some popular Web sites with huge selections of movies (like porn sites) have devised a way for users to scan through the content of full movies quickly in a few seconds. When a user clicks the title frame of a movie, the window skips from one key frame to the next, making a rapid slide show, like a flip book of the movie. The abbreviated slide show visually summarizes a few-hour film in a few seconds. Expert software can be used to identify the key frames in a film in order to maximize the effectiveness of the summary.
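One plausible way to build such a flip-book summary, sketched here as a general approach rather than any particular site's software, is to sample frames and keep one whenever it differs enough from the last frame kept; the threshold and sampling step below are arbitrary.

```python
# Pick "key frames" for a slide-show summary: sample roughly once a second and
# keep a frame when it differs enough from the previously kept frame.
import cv2

def key_frames(path, threshold=30.0, step=24):
    cap = cv2.VideoCapture(path)
    kept, last, index = [], None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % step == 0:
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            if last is None or cv2.absdiff(gray, last).mean() > threshold:
                kept.append(index)
                last = gray
        index += 1
    cap.release()
    return kept

print(key_frames("some_movie.mp4"))   # placeholder file name; prints kept frame numbers
```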

The holy grail of visuality is to search the library of all movies the way Google can search the Web. Everyone is waiting for a tool that would allow them to type key terms, say “bicycle + dog,” which would retrieve scenes in any film featuring a dog and a bicycle. In an instant you could locate the moment in “The Wizard of Oz” when the witchy Miss Gulch rides off with Toto. Google can instantly pinpoint desirable documents out of billions on the Web because computers can read text, but computers are only starting to learn how to read images.

It is a formidable task, but in the past decade computers have gotten much better at recognizing objects in a picture than most people realize. Researchers have started training computers to recognize a human face. Specialized software can rapidly inspect a photograph’s pixels searching for the signature of a face: circular eyeballs within a larger oval, shadows that verify it is spherical. Once an algorithm has identified a face, the computer could do many things with this knowledge: search for the same face elsewhere, find similar-looking faces or substitute a happier version.
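A minimal sketch of that kind of face-finding, using OpenCV's stock Haar-cascade detector rather than whatever specialized software the article has in mind; the photo file name is a placeholder.

```python
# Scan a photo at several scales for the face "signature" described above.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

image = cv2.imread("photo.jpg")          # placeholder path
if image is None:
    raise SystemExit("put an image at photo.jpg first")

gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
for (x, y, w, h) in faces:
    print(f"face at x={x}, y={y}, size {w}x{h}")
```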

Of course, the world is more than faces; it is full of a million other things that we’d like to have in our screen vocabulary. Currently, the smartest object-recognition software can detect and categorize a few dozen common visual forms. It can search through Flickr photos and highlight the images that contain a dog, a cat, a bicycle, a bottle, an airplane, etc. It can distinguish between a chair and sofa, and it doesn’t identify a bus as a car. But each additional new object to be recognized means the software has to be trained with hundreds of samples of that image. Still, at current rates of improvement, a rudimentary visual search for images is probably only a few years away.

What can be done for one image can also be done for moving images. Viewdle is an experimental Web site that can automatically identify select celebrity faces in video. Hollywood postproduction companies routinely “read” sequences of frames, then “rewrite” their content. Their custom software permits human operators to eradicate wires, backgrounds, unwanted people and even parts of objects as these bits move in time simply by identifying in the first frame the targets to be removed and then letting the machine smartly replicate the operation across many frames.

The collective intelligence of humans can also be used to make a film more accessible. Avid fans dissect popular movies scene by scene. With maniacal attention to detail, movie enthusiasts will extract bits of dialogue, catalog breaks in continuity, tag appearances of actors and track a thousand other traits. To date most fan responses appear in text form, on sites like the Internet Movie Database. But increasingly fans respond to video with video. The Web site Seesmic encourages “video conversations” by enabling users to reply to one video clip with their own video clip. The site organizes the sprawling threads of these visual chats so that they can be read like a paragraph of dialogue.

The sheer number of user-created videos demands screen fluency. The most popular viral videos on the Web can reach millions of downloads. Success garners parodies, mashups or rebuttals — all in video form as well. Some of these offspring videos will earn hundreds of thousands of downloads themselves. And the best parodies spawn more parodies. One site, TimeTube, offers a genealogical view of the most popular videos and their descendants. You can browse a time line of all the videos that refer to an original video on a scale that measures both time and popularity. TimeTube is the visual equivalent of a citation index; instead of tracking which scholarly papers cite other papers, it tracks which videos cite other videos. All of these small innovations enable a literacy of the screen.

As moving images become easier to create, easier to store, easier to annotate and easier to combine into complex narratives, they also become easier for the audience to remanipulate. This gives images a liquidity similar to words. Fluid images made up of bits flow rapidly onto new screens and can be put to almost any use. Flexible images migrate into new media and seep into the old. Like alphabetic bits, they can be squeezed into links or stretched to fit search engines, indexes and databases. They invite the same satisfying participation in both creation and consumption that the world of text does.

We are people of the screen now. Last year, digital-display manufacturers cranked out four billion new screens, and they expect to produce billions more in the coming years. That’s one new screen each year for every human on earth. With the advent of electronic ink, we will start putting watchable screens on any flat surface. The tools for screen fluency will be built directly into these ubiquitous screens.

With our fingers we will drag objects out of films and cast them in our own movies. A click of our phone camera will capture a landscape, then display its history, which we can use to annotate the image. Text, sound, motion will continue to merge into a single intermedia as they flow through the always-on network. With the assistance of screen fluency tools we might even be able to summon up realistic fantasies spontaneously. Standing before a screen, we could create the visual image of a turquoise rose, glistening with dew, poised in a trim ruby vase, as fast as we could write these words. If we were truly screen literate, maybe even faster. And that is just the opening scene.

Kevin Kelly is senior maverick at Wired and the author of “Out of Control” and a coming book on what technology wants.