Tuesday, September 2, 2008

A Chinese Challenge to Intel

Researchers have revealed details of China's latest homegrown microprocessor.


Enter the dragon: This single-core central processing unit, known as Loongson, or "dragon chip," was designed and manufactured in China. Chinese engineers have the goal of deploying quad-core chips by 2009.


In California last week, Chinese researchers unveiled details of a microprocessor that they hope will bring personal computing to most ordinary people in China by 2010. The chip, code-named Godson-3, was developed with government funding by more than 200 researchers at the Chinese Academy of Sciences' Institute of Computing Technology (ICT).

China is making a late entry into chip making, admits Zhiwei Xu, deputy director of ICT. "Twenty years ago in China, we didn't support R&D for microprocessors," he said during a presentation last week at the Hot Chips conference, in Palo Alto. "The decision makers and [Chinese] IT community have come to realize that CPUs [central processing units] are important."

Tom Halfhill, an analyst at research firm In-Stat, says that the objective for China is to take control of the design and manufacture of vital technology. "Like America wants to be energy independent, China wants to be technology independent," Halfhill says. "They don't want to be dependent on outside countries for critical technologies like microprocessors, which are, nowadays, a fundamental commodity." Federal laws also prohibit the export of state-of-the-art microprocessors from the United States to China, meaning that microchips shipped to China are usually a few generations behind the newest ones in the West.

Despite its late start, China is making rapid progress. The ICT group began designing a single-core CPU in 2001, and by the following year had developed Godson-1, China's first general-purpose CPU. In 2003, 2004, and 2006, the team introduced ever faster versions of a second chip--Godson-2--based on the original design. According to Xu, each new chip tripled the performance of the previous one.

Godson chips are manufactured in China by STMicroelectronics, a chipmaker headquartered in Geneva, and are available commercially under the brand name Loongson, meaning "dragon chip." Loongson chips already power some personal computers and servers on the Chinese market, which come with the Linux operating system and other open-source software. "They use a lot of open-source software because it's free," says Halfhill. "The Chinese government wants to get as many PCs into schools and as many workplaces as they can."

The latest Godson chips will also have a number of advanced features. Godson-3, a chip with four cores--processing units that work in parallel--will appear in 2009, according to Xu, and an eight-core version is also under development. Both versions will be built using 65-nanometer lithography processes, which are a generation older than Intel's current 45-nanometer processes. Importantly, Godson-3 is scalable, meaning that more cores can be added to future generations without significant redesign. Additionally, the architecture allows engineers to precisely control the amount of power that it uses. For instance, parts of the chip can be shut down when they aren't in use, and cores can operate at various frequencies, depending on the tasks that they need to perform. The four-core Godson-3 will consume 10 watts of power, and the eight-core chip will consume 20 watts, says Xu.

This latest chip will also be fundamentally different from those made before. Neither Godson-1 nor -2 is compatible with Intel's so-called x86 architecture, meaning that most commercial software will not run on them. But engineers have added 200 instructions to Godson-3 that help it simulate an x86 chip, which allows Godson-3 to run more software, including the Windows operating system. And because the chip architecture is only simulated, there is no need to obtain a license from Intel.
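The x86 "simulation" described here is essentially binary translation: foreign instructions are rewritten as sequences of the chip's own instructions, and the added hardware instructions exist to keep those sequences short and fast. The Python sketch below is purely conceptual; the instruction names and mappings are invented for illustration and are not Godson-3's or Intel's.

    # Conceptual sketch only (not Godson-3's actual design): binary translation
    # rewrites "guest" x86-style instructions as sequences of native instructions.
    # The 200 added instructions mentioned above serve to make such sequences
    # short and fast; the mapping below is invented purely for illustration.

    GUEST_TO_NATIVE = {
        "ADD": ["native_add"],                      # hypothetical one-to-one mapping
        "PUSH": ["native_sub_sp", "native_store"],  # hypothetical one-to-many mapping
    }

    def translate(guest_program):
        """Expand each guest instruction into native instructions."""
        native_program = []
        for op in guest_program:
            native_program.extend(GUEST_TO_NATIVE.get(op, [f"emulate_{op.lower()}"]))
        return native_program

    print(translate(["PUSH", "ADD"]))
    # ['native_sub_sp', 'native_store', 'native_add']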

Erik Metzger, a patent attorney at Intel, says that the chip will only perform at about 80 percent of the speed of an actual x86 chip. "That implies that [the Chinese government] is going after a low-end market," he says. This is the same market that Intel is targeting with its Classmate PC and low-power Atom microprocessor. Metzger adds that the inner workings of the chip, known as its instruction set, have not yet been disclosed, making it difficult to know if or how any x86 patents may have been breached.

The Chinese team hopes to further boost its chip program through collaboration with other companies and researchers. "We still lag behind the international partners a lot," says Xu. "But we are doing our best to join the international community."

Stretchy, High-Quality Conductors

Materials made from nanotubes could lead to conformable computers that stretch around any shape.

Malleable matrix: A researcher stretches a mesh of transistors connected by elastic conductors that were made at the University of Tokyo.

By adding carbon nanotubes to a stretchy polymer, researchers at the University of Tokyo made a conductive material that they used to connect organic transistors in a stretchable electronic circuit. The new material could be used to make displays, actuators, and simple computers that wrap around furniture, says Takao Someya, a professor of engineering at the University of Tokyo. The material could also lead to electronic skin for robots, he says, which could use pressure sensors to detect touch while accommodating the strain at the robots' joints. Importantly, the process that the researchers developed for making long carbon nanotubes could work on the industrial scale.

"The measured conductivity records the world's highest value among soft materials," says Someya. In a paper published last week in Science, Someya and his colleagues claim a conductivity of 57 siemens per centimeter, which is lower than that of copper, the metal normally used to connect transistors, but two orders of magnitude higher than that of previously reported polymer-carbon-nanotube composites. Someya says that the material is able to stretch up to about 134 percent of its original shape without significant damage.

Electronics that can bend and flex are already used in some applications, but they can't be wrapped around irregular shapes, such as the human body or complex surfaces, says John Rogers, a professor of materials science and engineering at the University of Illinois at Urbana-Champaign. Rogers, who recently demonstrated a spherical camera sensor using his own version of an elastic circuit, says that Someya's approach is a creative addition to the science of stretchable electronic materials. "It's a valuable contribution to an important, emerging field of technology," he says.

To make the stretchable polymer conductive, Someya's group combined a batch of millimeter-long, single-walled carbon nanotubes with an ionic liquid--a liquid containing charged molecules. The resulting black, paste-like substance was then slowly added to a liquid polymer mixture. This produced a gel-like substance that was poured into a cast and air-dried for 24 hours.

The benefit of adding the nanotubes to a polymer before it is cast, says Someya, is that the nanotubes, which make up about 20 percent of the weight of the total mixture, are more evenly distributed. And because each nanotube is about a millimeter in length, there's a high likelihood that in aggregate they will form an extensive network that allows electrical charge to propagate reliably throughout the polymer.

Previously, researchers have added micrometer-length carbon nanotubes to polymers, says Ray Baughman, a professor of materials science at the University of Texas. Most often, they would simply coat the polymer with nanotubes. Baughman says that Someya's work is exciting, but he notes that he would have expected adding higher percentages of carbon nanotubes to a polymer to reduce its stretchiness.

According to Someya, the initial air-dried nanotube-polymer film is flexible but not that stretchable. In order to improve its stretchiness, a machine perforates it into a net-shaped structure that is then coated with a silicone-based material. This enables the material to stretch much farther without compromising its conductivity.

Baughman says that one of the main contributions of the University of Tokyo team's work is to demonstrate a way to make this sort of elastic conductor material in bulk. "This and so many other applications depend on the landmark advance of a team scaling up their production of ultralong carbon nanotubes," he says. The University of Tokyo group claims that from one furnace, it can make 10 tons of nanotubes per year. "It's nice work," Baughman says.

More-Efficient Solar Cells

A new solar panel could lower costs and improve efficiency.

Better cells: A new design for solar panels (top) improves their efficiency. Each panel is made of arrays of square solar cells. A conventional solar cell (bottom) requires thick silver contacts that block light and reduce cell performance. The new design uses a novel electrode that eliminates these silver contacts.


By changing the way that conventional silicon solar panels are made, Day4 Energy, a startup based in Burnaby, British Columbia, has found a way to cut the cost of solar power by 25 percent, says George Rubin, the company's president.

The company has developed a new electrode that, together with a redesigned solar-cell structure, allows solar panels to absorb more light and operate at a higher voltage. This increases the efficiency of multicrystalline silicon solar panels from an industry standard of about 14 percent to nearly 17 percent. Because of this higher efficiency, Day4's solar panels generate more power than conventional panels do, yet they will cost the same, Rubin says. He estimates the cost per watt of solar power would be about $3, compared with $4 for conventional solar cells. That will translate into electricity prices of about 20 cents per kilowatt-hour in sunny areas, down from about 25 cents per kilowatt-hour, he says.

Other experimental solar technologies could lead to much lower prices--indeed, they promise to compete with the average cost of electricity in the United States, which is about 10 cents per kilowatt-hour. But such technologies, including advanced solar concentrators and some thin-film semiconductor solar cells, probably won't be available for years. Day4's technology could be for sale within 18 months, the company says.

In conventional solar panels, the silicon that converts light into electricity is covered with a network of silver lines that conduct electrons and serve as connection points for soldering together the individual solar cells that make up a panel. The network consists of rows of thin silver lines that feed into thicker wires called bus bars. Day4 replaces these bus bars with a new electrode that consists of rows of fine copper wires coated with an alloy material. The wires are embedded in an adhesive and aligned on a plastic film. The coated copper wires run on top of and perpendicular to the thin silver lines, connecting them to neighboring cells. The new electrode conducts electricity better than the silver lines, resulting in less power loss. It also covers up less of the silicon than the bus bars, leaving more area for absorbing light.

What's more, the new electrode allowed Day4 to redesign solar cells to absorb more of the solar spectrum and convert this light into electricity more efficiently. Solar cells comprise two layers of silicon. For light to be converted into electricity, it has to pass through the first layer and reach the second. The thinner the top layer, the more light reaches the second layer to be converted into electricity. In a conventional cell, the silver lines are deposited and then heated to high temperatures, which causes the metal to diffuse into the silicon. The top layer must be thick enough that the silver does not diffuse through it and create a short circuit between the layers of the solar cell. By replacing the large bus bars with the new electrode, Day4 was able to make the top layer of the solar cells thinner, increasing the amount of light that can be converted into electricity. Also, since the silver can damage the silicon, replacing it with the new electrode increases the solar cell's power output.

The technology "sounds pretty exciting," says Travis Bradford, a solar-industry analyst with the Prometheus Institute for Sustainable Development, an energy research firm based in Cambridge, MA. The question, Bradford says, is whether the company can translate the latest advances from its lab to large-scale production without increasing costs.

Day4 has already started producing solar panels using its new electrode material--though not its new solar-cell designs. The company recently announced that it has the capacity to produce enough solar panels every year to generate 47 megawatts of electricity. These first-generation panels, which use conventional solar cells, have an efficiency of 14.7 percent. The company's next step is to put its new cell design into production and incorporate these cells into its solar panels, with the goal of improving their efficiency to 17 percent.

How (Not) to Fix a Flaw

Experts say disclosing bugs prevents security flaws from festering.

Efforts to censor three MIT students who found security flaws in the Boston subway's payment system have been roundly criticized by experts, who argue that suppressing such research could ultimately make the system more vulnerable.

The students were served with a temporary restraining order this weekend at the Defcon security conference in Las Vegas, preventing them from giving their planned talk on the Boston subway's payment system.

According to slides submitted before the conference, which have also been posted online, their presentation "Anatomy of a Subway Hack" would have revealed ways to forge or copy both the old magnetic-stripe passes and the newer radio-frequency identification (RFID) cards used on Boston's subway, making it possible to travel for free. The restraining order was filed on behalf of the Massachusetts Bay Transportation Authority (MBTA), which spent more than $180 million to install the system, according to court documents. The MBTA has also brought a larger lawsuit accusing the students of violating the Computer Fraud and Abuse Act and accusing MIT of being negligent in its supervision of them.

One of the students involved, Zack Anderson, says his team had never intended to give real attackers an advantage. "We left out some details in the work we did, because we didn't want anyone to be able to attack the ticketing system; we didn't want people to be able to circumvent the system and get free fares," he says.

Marcia Hoffman, staff attorney with the Electronic Frontier Foundation, a digital-rights group that is assisting the MIT team with its defense, argues that researchers need to be protected as they investigate these types of flaws. "It's extremely rare for a court to bar anyone from speaking before that person has even had a chance to speak," she says. "We think this sets a terrible precedent that's very dangerous for security research."

The MBTA says it isn't trying to stop research, just buy time to deal with whatever flaws the students might have found. The agency also expressed skepticism about whether the MIT students had indeed found real flaws. "They are telling a terrific tale of widespread security problems, but they still have not provided the MBTA with credible information to support such a claim," says Joe Pesaturo, a spokesman for the MBTA. "It's that simple."

It is unclear, though, whether the MBTA can realistically buy the time it needs. Karsten Nohl, a University of Virginia PhD student who was one of the first to publish details of security vulnerabilities in MiFare Classic, the brand of wireless smart card used in Boston's system, says solving the problems could take a year or two and might even involve replacing all card readers and all cards in circulation.

This is not the first lawsuit to hit researchers who have studied the security of MiFare Classic. Last month, Dutch company NXP Semiconductors, which makes the MiFare cards, sued a Dutch university in an attempt to prevent researchers there from publishing details of similar security flaws. The injunction did not succeed, but as RFID technology continues to proliferate, other security experts are concerned about being able to discuss relevant security research openly.

Bruce Schneier, chief security technology officer at BT Counterpane, says the latest lawsuit only distracts from what's really at stake. "MiFare sold a lousy product to customers who didn't know how to ask for a better product," he says. "That will never get fixed as long as MiFare's shoddy security is kept secret." He adds, "The reason we publish vulnerabilities is because there's no other way for security to improve."

The same brand of RFID card is used on transport networks in other cities, including London, Los Angeles, Brisbane, and Shanghai, as well as for corporate and government identity passes. The technology has even been incorporated into some credit cards and cell phones.

Nohl says the industry should view the MIT students' work as a free service that could ultimately lead to better security. Although there has been plenty of academic research on the security of RFID, he says, little has yet made its way into products. "The core of the problem is still industry's belief that they should build security themselves, and that what they've built themselves will be stronger if they keep it secret," Nohl says.

Meanwhile, independent researchers have come up with a number of ideas for improving the security of RFID cards. Nohl and others are researching better ways of encrypting the information stored on the cards. But part of the problem is that the cards are passive, meaning that they will return a signal to any reader that sends a request. Tadayoshi Kohno and colleagues at the University of Washington are working on a motion-sensing system that would let users activate their cards with a specific gesture, so that the cards do not respond to readers' requests unless deliberately activated. Karl Koscher, one of the researchers who worked on the project, says their system is aimed at increasing security without destroying the convenience that has made the cards so popular.

First All-Nanowire Sensor

Researchers integrate nanowire sensors and electronics on a chip.

Squared away: University of California, Berkeley, researchers were able to create an orderly circuit array from two types of tiny nanowires, which can function as optical sensors and transistors. Each of the circuits on the 13-by-20 array serves as a single pixel in an all-nanowire image sensor.

Researchers at the University of California, Berkeley, have created the first integrated circuit that uses nanowires as both sensors and electronic components. With a simple printing technique, the group was able to fabricate large arrays of uniform circuits, which could serve as image sensors. "Our goal is to develop all-nanowire sensors" that could be used in a variety of applications, says Ali Javey, an electrical-engineering professor at UC Berkeley, who led the research.

Nanowires make good sensors because their small dimensions enhance their sensitivity. Nanowire-based light sensors, for example, can detect just a few photons. But to be useful in practical devices, the sensors have to be integrated with electronics that can amplify and process such small signals. This has been a problem, because the materials used for sensing and electronics cannot easily be assembled on the same surface. What's more, a reliable way of aligning the tiny nanowires that could be practical on a large scale has been hard to come by.

A printing method developed by the Berkeley group could solve both problems. First, the researchers deposit a polymer on a silicon substrate and use lithography to etch out patterns where the optical sensing nanowires should be. They then print a single layer of cadmium selenide nanowires over the pattern; removing the polymer leaves only the nanowires in the desired location for the circuit. They repeat the process with the second type of nanowires, which have germanium cores and silicon shells and form the basis of the transistors. Finally, they deposit electrodes to complete the circuits.

The printed nanowires are first grown on separate substrates, which the researchers press onto and slide across the silicon. "This type of nanowire transfer is good for aligning the wires," says Deli Wang, a professor of electrical and computer engineering at the University of California, Santa Barbara, who was not involved in the research. Good alignment is necessary for the device to work properly, since the optical signal depends on the polarization of light, which in turn depends on the orientation of the nanowires. Similarly, transistors require a high degree of alignment to switch on and off well.

Another potential advantage of the printing method is that the nanowires could be printed not only onto silicon, but also onto paper or plastics, says Javey. He foresees such applications as "sensor tapes"--long rolls of printed sensors used to test air quality or detect minute concentrations of chemicals. "Our next challenge is to develop a wireless component" that would relay the signals from the circuit to a central processing unit, he says.

But for now, the researchers have demonstrated the technique as a way to create an image sensor. They patterned the nanowires onto the substrate to make a 13-by-20 array of circuits, in which each circuit acts as a single pixel. The cadmium selenide nanowires convert incoming photons into electrons, and two different layers of germanium-silicon nanowire transistors amplify the resulting electrical signal by up to five orders of magnitude. "This demonstrates an outstanding application of nanowires in integrated electronics," says Zhong Lin Wang, director of the Center for Nanostructure Characterization at Georgia Tech.

After putting the device under a halogen light and measuring the output current from each circuit, the group found that about 80 percent of the circuits successfully registered the intensity of the light shining on them. Javey attributes the failure of the other 20 percent to such fabrication defects as shorted electrodes and misprints that resulted in poor nanowire alignment. He notes that all of these issues can be resolved with refined manufacturing methods.
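In practice, that measurement amounts to reading the amplified output current of each of the 13-by-20 circuits and treating it as a pixel value. The Python sketch below illustrates only that idea; the current values, the 20 percent failure rate applied at random, and the threshold are stand-ins, not the Berkeley group's data.

    import random

    ROWS, COLS = 13, 20   # circuit array size reported for the prototype sensor

    def read_pixel(row, col):
        """Stand-in for measuring one circuit's amplified output current
        (arbitrary units); roughly 20 percent of circuits are made to fail here,
        mirroring the reported yield."""
        if random.random() < 0.2:
            return 0.0    # e.g., shorted electrodes or misaligned nanowires
        return random.uniform(0.8, 1.2)   # working pixel under uniform light

    image = [[read_pixel(r, c) for c in range(COLS)] for r in range(ROWS)]
    working = sum(1 for row in image for value in row if value > 0)
    print(f"{working}/{ROWS * COLS} circuits registered the light")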

The researchers also plan to work toward shrinking the circuit to improve resolution and sensitivity. Eventually, says Javey, they want everything on the circuit to be printable, including the electrodes and contacts, which could help further reduce costs.

Bringing Invisibility Cloaks Closer

The fabrication of two new materials for manipulating light is a key step toward realizing cloaking.


Invisible net: A new material that can bend near-infrared light in a unique way has a fishnet structure. These images of a prism made from the material were taken with a scanning electron microscope. The holes in the net let the material interact with the magnetic component of light, which makes the unusual bending possible and demonstrates the material's promise for use in future invisibility cloaks. In the inset, the layers of metal and insulating material that make up the metamaterial are visible.
Credit: Jason Valentine et al.

In an important step toward the development of practical invisibility cloaks, researchers have engineered two new materials that bend light in entirely new ways. These materials are the first that work in the optical band of the spectrum, which encompasses visible and infrared light; existing cloaking materials only work with microwaves. Such cloaks, long depicted in science fiction, would allow objects, from warplanes to people, to hide in plain sight.

Both materials, described separately in the journals Science and Nature this week, exhibit a property called negative refraction that no natural material possesses. As light passes through the materials, it bends backward. One material works with visible light; the other has been demonstrated with near-infrared light.
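A one-line way to see what "bends backward" means: refraction at an interface follows Snell's law, and a negative index flips the sign of the refracted angle. Stated generally (this is a textbook property of negative-index media, not a figure specific to the Berkeley materials):

    n_1 \sin\theta_1 = n_2 \sin\theta_2, \qquad n_2 < 0 \;\Rightarrow\; \theta_2 < 0,

so the transmitted beam emerges on the same side of the surface normal as the incoming beam, rather than on the opposite side as in ordinary refraction.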

The materials, created in the lab of University of California, Berkeley, engineer Xiang Zhang, could show the way toward invisibility cloaks that shield objects from visible light. But Steven Cummer, a Duke University engineer involved in the development of the microwave cloak, cautions that there is a long way to go before the new materials can be used for cloaking. Cloaking materials must guide light in a very precisely controlled way so that it flows around an object, re-forming on the other side with no distortion. The Berkeley materials can bend light in the fundamental way necessary for cloaking, but they will require further engineering to manipulate light so that it is carefully directed.

One of the new Berkeley materials is made up of alternating layers of metal and an insulating material, both of which are punched with a grid of square holes. The total thickness of the device is about 800 nanometers; the holes are even smaller. "These stacked layers form electrical-current loops that respond to the magnetic field of light," enabling its unique bending properties, says Jason Valentine, a graduate student in Zhang's lab. Naturally occurring materials, by contrast, don't interact with the magnetic component of electromagnetic waves. By changing the size of the holes, the researchers can tune the material to different frequencies of light. So far, they've demonstrated negative refraction of near-infrared light using a prism made from the material.

Researchers have been trying to create such materials for nearly 10 years, ever since it occurred to them that negative refraction might actually be possible. Other researchers have only been able to make single layers that are too thin--and much too inefficient--for device applications. The Berkeley material is about 10 times thicker than previous designs, which helps increase how much light it transmits while also making it robust enough to be the basis for real devices. "This is getting close to actual nanoscale devices," Cummer says of the Berkeley prism.

The second material is made up of silver nanowires embedded in porous aluminum oxide. "The nanowire medium works like optical-fiber bundles, so in principle, it's quite different," says Nicholas Fang, a professor of mechanical science and engineering at the University of Illinois at Urbana-Champaign, who was not involved in the research. The layered grid structure not only bends light in the negative direction; it also causes it to travel backward. Light transmitted through the nanowire structure also bends in the negative direction, but without traveling backward. Because the work is still in the early stages, it's unclear which optical metamaterial will work best, and for what applications. "Maybe future solutions will blend these two approaches," says Fang.

Making an invisibility cloak will pose great engineering challenges. For one thing, the researchers will need to scale up the material even to cloak a small object: existing microwave cloaking devices, and theoretical designs for optical cloaks, must be many layers thick in order to guide light around objects without distortion. Making materials for microwave cloaking was easier because these wavelengths can be controlled by relatively large structural features. To guide visible light around an object will require a material whose structure is controlled at the nanoscale, like the ones made at Berkeley.

Developing cloaking devices may take some time. In the short term, the Berkeley materials are likely to be useful in telecommunications and microscopy. Nanoscale waveguides and other devices made from the materials might overcome one of the major challenges of scaling down optical communications to chip level: allowing fine control of parallel streams of information-rich light on the same chip so that they do not interfere with one another. And the new materials could also eventually be developed into lenses for light microscopes. So-called superlenses for getting around fundamental resolution limitations on light microscopes have been developed by Fang and others, revealing the workings of biological molecules with nanoscale resolution using ultraviolet light, which is damaging to living cells in large doses. But it hasn't been possible to make superlenses that work in the information-rich and cell-friendly visible and near-infrared parts of the spectrum.

Commanding Your Browser

A new interface bypasses the mouse for some complex tasks.

The beauty of today's search engines is their simplicity. Type a few keywords into an empty box, and see the 10 most relevant results. This week, Mozilla Labs expects to launch a similar interface for its Firefox Web browser. The new interface, called Ubiquity, lets users carry out all sorts of complex tasks simply by typing instructions, in the form of ordinary sentences, into a box in the browser.

For example, to e-mail a paragraph or picture from a Technology Review article to a friend using Ubiquity, simply select the text or image, press a keyboard shortcut to reveal an input box, and type "e-mail to Max."

"You just type in things that feel natural to you," says Chris Beard, vice president and general manager of Mozilla Labs. Ubiquity, which is based on the Javascript programming language, will open an e-mail client and paste the highlighted text or image into a message. It will even guess which Max in an address book the snippet should be sent to, based on previous e-mailing patterns.

The idea, says Beard, is to make it easier to find and share information on the Web while avoiding cumbersome copy-and-paste instructions. Traditionally, if you want to e-mail a picture or a piece of text to a friend, look up a word in an online dictionary, or map an address, you have to follow a series of well-worn steps: copy the information, open a new browser tab or an external program, paste in the text, and run the program.

A common work-around is to use browser plug-ins--tiny programs that connect to other applications and can be added to the browser toolbar. For instance, StumbleUpon, a Web service that lets users bookmark and share interesting Web pages, offers a plug-in for Firefox so that new sites can be added or discovered with a single click. But adding multiple browser plug-ins takes up valuable screen space.

Ubiquity aims to eliminate both tiresome mouse movements and the need for multiple browser plug-ins.

The idea isn't unique to Mozilla Labs. Researchers at MIT have published work on a similar interface, called Inky. Another project, called Yubnub, allows people to quickly perform different online operations, such as searching for stock quotes, images, or items on eBay using the same text field.

What distinguishes Ubiquity is that it's being released as a Mozilla Labs project, which immediately makes both the program and its underlying code available to people eager to test the interface and contribute design and programming ideas to improve its functionality. Also, notes Mozilla's Beard, Ubiquity is highly customizable. From the start, the interface will come with built-in instructions or "verbs," such as "e-mail," "Twitter," and "Digg," but Beard expects people to add many new ones.

The project is being released in an early form--version 0.1--so it's not expected to work perfectly straightaway. Also, Beard doesn't assume that it will change the way people interact with their browser overnight. "Most people in the world will continue to use mouse-based interfaces," he says. But a language-based interface like Ubiquity could ultimately supplement the mouse, much as shortcut keys already do, he says.

A Plastic That Chills

Materials that change temperature in response to electric fields could keep computers--and kitchen fridges--cool.

Cool spool: Films of a specially designed polymer, just 0.4 to 2.0 micrometers thick, can get colder or hotter by 12 °C when an electric field is removed or applied across them.
Credit: Qiming Zhang, Penn State

Thin films of a new polymer developed at Penn State change temperature in response to changing electric fields. The Penn State researchers, who reported the new material in Science last week, say that it could lead to new technologies for cooling computer chips and to environmentally friendly refrigerators.

Changing the electric field rearranges the polymer's atoms, changing its temperature; this is called the electrocaloric effect. In a cooling device, a voltage would be applied to the material, which would then be brought in contact with whatever it's intended to cool. The material would heat up, passing its energy to a heat sink or releasing it into the atmosphere. Reducing the electric field would bring the polymer back to a low temperature so that it could be reused.

In a 2006 paper in Science, Cambridge University researchers led by materials scientist Neil Mathur described ceramic materials that also exhibited the electrocaloric effect, but only at temperatures of about 220 °C. The operating temperature of a computer chip is significantly lower--usually somewhere around 85 °C--and a kitchen refrigerator would have to operate at lower temperatures still. The Penn State polymer shows the same 12-degree swing that the ceramics did, but it works at a relatively low 55 °C.

The polymer also absorbs heat better. "In a cooling device, besides temperature change, you also need to know how much heat it can absorb from places you need to cool," says Qiming Zhang, an electrical-engineering professor at Penn State, who led the new work. The polymer, Zhang says, can absorb seven times as much heat as the ceramic.

Zhang attributes these qualities to the more flexible arrangement of atoms in polymers. "In a ceramic, atoms are more rigid, so it's harder to move them," he says. "Atoms can be moved in polymers much more easily using an electric field, so the electrocaloric effect in polymer is much better than ceramics."

The material's properties make it an attractive candidate for laptop cooling applications, says Intel engineer Rajiv Mongia, who studies refrigeration technologies. Computer manufacturers are looking for less bulky alternatives to the heat sinks and noisy fans currently used in laptops and desktop computers. The ideal technology would be small enough to be integrated into a computer chip.

Until now, says Mongia, exploring the electrocaloric effect for chip cooling had not made sense. The first ceramic materials didn't exhibit large enough temperature changes--chip cooling requires reductions of at least 10 °C--and the more recent ceramics don't work at low enough temperatures. They also contain lead, a hazardous material that is hard to dispose of safely. The polymers do not have those drawbacks. "The fact that they've been able to develop a polymer-type material that can be used in a relatively thin film is worth a second look," Mongia says. "Also, it's working in a temperature range that is of interest to us."

But chip-cooling devices will take a while to arrive. It now takes 120 volts to get the polymer to change its atomic arrangement, and that figure would need to be much lower if the material is to be used in laptops. "Ideally, you want it to work at voltages common within the realm of a notebook, in the tens of volts or less," Mongia says. The researchers will also need to engineer a working device containing the thin films.

Electrocaloric materials could make fridges greener. Current household fridges use a vapor-compression cycle, in which a refrigerant is converted back and forth between liquid and vapor to absorb heat from the insulated compartment. The need for mechanical compression lowers the fridge's efficiency. "Vapor-cooled fridges are 30 to 40 percent efficient," Mathur says. But because electrocaloric materials have no moving parts, they could lead to cooling devices that are more energy efficient than current fridges. What's more, current hydrofluorocarbon refrigerants contribute to global warming.

Refrigerators that use electrocaloric materials would have an advantage over the magnetic cooling systems that some companies and research groups are developing. Electric fields large enough to produce substantial temperature changes in electrocaloric materials are much easier and cheaper to produce than the magnetic fields used in experimental refrigeration systems, which require large superconducting magnets or expensive permanent magnets. However, refrigerators need temperature spans of 40 °C, which is a tall order for electrocaloric materials right now, Mathur says. "The main sticking point in terms of the technology is that we have thin films, and you can't cool very much with a thin film."

Zhang and his colleagues are now trying to design better electrocaloric polymers. They plan to study polymers made from liquid crystals, which are used in flat-panel displays. Liquid crystals contain rod-shaped molecules that will align with an electric field and revert to their original arrangement when the field is removed. Zhang says that this property could be exploited to make materials that absorb and release large amounts of heat in response to electric fields.

A Bridge between Virtual Worlds

Second Life's new program links virtual environments.

Linking worlds: Two avatars, Brian White (left) and Butch Arnold, meet in 3rd Rock Grid, an independent OpenSim-based server.
Credit: Brian White

The first steps to developing virtual-world interoperability are now being tested between Second Life and other independent virtual worlds, thanks to the launch of Linden Lab's Open Grid Beta, a program designed for developers to test new functionality. The beta program will allow users to move between a Second Life test grid--a set of servers simulating a virtual world--and other non-Linden Lab grids running the OpenSim software. OpenSim is an independent open-source project to create a virtual-world server.

The discussion of linking together today's virtual worlds is not new, but this is the first running code that demonstrates previously hypothetical approaches--another tangible sign that Linden Lab is serious about interoperability. "We are still early in the game. The point of the beta is to give the rest of the development community the chance to try the protocols themselves," says Joe Miller, Linden Lab's vice president of platform and development. More than 200 users have signed up for the beta program, and currently 15 worlds have been connected.

In order to test virtual-world interoperability, a person needs at least two virtual worlds. For Linden Lab, the OpenSim project was a natural choice. It began in January 2007 at the nexus of two open-source projects--one to reverse-engineer the Second Life server APIs, and the other Linden Lab's open-source viewer initiative. The goal of the OpenSim project is to build a virtual-world server that supports the Linden Lab viewer or a derivative.

Today, there is a flourishing OpenSim community with 26 registered grids hosting approximately 2,300 regions. While this is certainly a small number compared with the 28,070 regions that make up the Second Life main grid, it still represents a significant number of independent virtual worlds. The open-source nature of the project, combined with the number of participants and the shared support of a common viewer, makes OpenSim-based worlds ideal for interoperability tests.

Interoperability is the future of the Web, says Terry Ford, the owner and operator of an OpenSim-based world called 3rd Rock Grid. Ford is also participating in the program. "It may be [in] OpenSim's future, or maybe another package will spring up, but just as links from a Web page take you to another site, people will come to expect the ability to navigate between virtual worlds," he says.

Ford is Butch Arnold in Second Life, Butch Arnold in 3rd Rock Grid, and Butch Arnold in the OpenLife grid, and that's kind of the point. No one wants to have as many avatars as they do website accounts, but there is a fundamental difference between accounts, which hold data like a shopping cart, and avatars, which contain data regarding a person's virtual-world appearance. IBM's David Levine, who has been closely collaborating with Linden Lab on the interoperability protocols, says, "You don't care if your shopping-cart contents in your Amazon account [are] the same as other shopping carts. However, if you were moving region to region and had very different assets in each, that would be a problem."

Yet many efforts to let users share their avatars on the Web have not been successful. Levine says that the Open Grid Protocol has a chance because it is less ambitious. "We are not trying to do it across the entire Web. The focus is on the Linden main grid and a set of broadly similar grids."

To use the beta program, a participant starts an application called a viewer, the best example being the Second Life client. The viewer renders the virtual world and provides the controls for the avatar. Just like using a Web browser to log in to a website, the viewer is where a log-in request is initiated.

The log-in request is sent to the agent service, which stores things like the avatar's profile, password, and current location. As part of the beta, Linden Lab has implemented a proprietary version of the agent service running on a test grid. The agent service then contacts the region service for the right placement of the avatar in the virtual world.

The region service is basically the Web server of virtual worlds. It is responsible for simulating a piece of the virtual landscape and providing a shared perspective to all avatars occupying the same virtual space. A collection of regions is called a grid. Linden Lab has proprietary code running all the Second Life regions. The OpenSim project provides source code that, when built, allows anyone to run his or her own region service.

From that point on, there is a three-way communication between viewer, agent service, and region service to provide the user's in-world experience. When the user wants to move to another region, he issues a teleport command in the viewer, and the same process happens. But in this case, the user is not required to log in again, even if the destination region is running on a non-Linden Lab server.
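A highly simplified Python sketch of that log-in and teleport flow is below. The class and method names are invented for illustration; the real Open Grid Protocol exchanges structured network messages between these services and is far richer.

    # Conceptual sketch of the flow described above: viewer -> agent service ->
    # region service, with teleport requiring no second log-in.

    class RegionService:
        """Simulates one piece of the virtual landscape; a grid is many of these."""
        def __init__(self, name):
            self.name = name
            self.avatars = []

        def place_avatar(self, agent):
            self.avatars.append(agent)
            print(f"avatar placed in region: {self.name}")

    class AgentService:
        """Stores the avatar's profile, credentials, and current location."""
        def __init__(self, password):
            self.password = password
            self.current_region = None

        def login(self, password, region):
            if password != self.password:
                raise PermissionError("bad credentials")
            self.current_region = region
            region.place_avatar(self)

        def teleport(self, destination):
            # No second log-in: the avatar is simply handed to the destination
            # region, even one running on a non-Linden Lab server.
            if self.current_region is not None:
                self.current_region.avatars.remove(self)
            self.current_region = destination
            destination.place_avatar(self)

    linden_region = RegionService("Second Life test grid")
    opensim_region = RegionService("3rd Rock Grid (OpenSim)")
    agent = AgentService(password="secret")
    agent.login("secret", linden_region)   # viewer-initiated log-in
    agent.teleport(opensim_region)         # moves grids without logging in again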

Last fall, Linden Lab formed the Architecture Working Group (AWG), which is the driving force behind the Open Grid Protocol--the architectural definition of interoperability. The team decided that the first step was to focus on the areas of log in and teleport. "We started with authentication information and being able to seamlessly pass the log-in credentials between two grids run by different companies," says Levine. "Many people ask me, 'Why did you start there?' Well, you can't do all the rest until you get logged in."

Miller says that in the next 18 months, a user can expect to see a lot of activity in the area of content movement. "How do I move content that is mine, purchased or created, between worlds safely and securely? The AWG has a lot of great thoughts on how this could work," he says.

Internet Security Hole Revealed

A researcher discloses the details of the major flaw he discovered earlier this year.

On Wednesday, at the Black Hat computer security conference in Las Vegas, Dan Kaminsky, director of penetration testing at IOActive, released the full details of the major design flaw he found earlier this year in the domain name server system, which is a key part of directing traffic over the Internet. Kaminsky had already revealed that the flaw could allow attackers to control Internet traffic, potentially directing users to phishing sites--bogus sites that try to elicit credit-card information--or to sites loaded with malicious software. On Wednesday, he showed that the flaw had even farther-reaching implications, demonstrating that attackers could use it to gain access to e-mail accounts or to infiltrate the systems in place to make online transactions secure.

Kaminsky first announced the flaw in the domain name system in July, at a press conference timed to coincide with the massive coordinated release of a temporary fix, which involved vendors such as Microsoft, Cisco, and Sun. He didn't release details of the flaw, hoping to give companies time to patch it before giving attackers hints about how to exploit it. Although the basics of the flaw did leak before Kaminsky's Black Hat presentation, he says he's relieved that not all of its implications were publicly discovered.

The domain name system is, as its name might imply, responsible for matching domain names--such as technologyreview.com--to the numerical addresses of the corresponding Web servers--such as 69.147.160.210. A request issued by an e-mail server or Web browser might pass through several domain name servers before getting the address information that it needs.
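That lookup is the same one every networked program performs. In Python, for instance, a few lines of the standard library ask the system's resolver, and through it the domain name system, for a site's numerical addresses:

    import socket

    # Ask the operating system's resolver -- and, behind it, the domain name
    # system -- for the numerical addresses matching a host name.
    for family, _, _, _, sockaddr in socket.getaddrinfo("technologyreview.com", 80):
        print(sockaddr[0])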

Kaminsky says that the flaw he discovered is a way for an attacker to impersonate a domain name server. Imagine that the attacker wants to hoodwink Facebook, for instance. He would start by opening a Facebook account. Then he would try to log in to the account but pretend to forget his password. Facebook would then try to send a new password to the e-mail address that the attacker used to create the account.

The attacker's server, however, would claim that Facebook got the numerical address of its e-mail server wrong. It then tells Facebook the name of the domain name server that--supposedly--has the right address. Facebook has to locate that server on its own; this is actually a safety feature, to prevent an attacker from simply routing traffic to his own fake domain name server in the first place.

At this point, the attacker knows that Facebook's server is about to look up where to find the domain name server. If he can supply a false answer before the real answer arrives, he can trick Facebook into looking up future addresses on his own server, rather than on the domain name server. He can then direct messages sent by Facebook anywhere he chooses.

The problem for the attacker is that the false answer needs to carry the correct authenticating transaction ID--and there are 65,536 possibilities. Moreover, once Facebook's server gets an answer, it will store the domain name server's numerical address for a certain period of time, perhaps a day. The flaw that Kaminsky discovered, however, allows the attacker to trigger requests for the domain name server's address as many times as he wants. If the attacker includes a random transaction ID with each of his false responses, he'll eventually luck upon the correct one. In practice, Kaminsky says, it takes the attacker's computer about 10 seconds to fool a server into accepting its false answer.
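The odds in that race can be sketched in a few lines of Python. The figures below for forged answers per forced lookup and for the number of lookups are arbitrary stand-ins, and a real attack must also beat the legitimate reply in time, but the simulation shows why an unpatched 16-bit transaction ID falls quickly.

    import random

    TXID_SPACE = 2 ** 16       # 65,536 possible DNS transaction IDs
    SPOOFS_PER_LOOKUP = 100    # forged answers sent per forced lookup (assumed figure)

    def lookups_until_poisoned(max_lookups=10_000):
        """Force lookups until one forged answer carries the matching ID."""
        for attempt in range(1, max_lookups + 1):
            real_id = random.randrange(TXID_SPACE)
            guesses = {random.randrange(TXID_SPACE) for _ in range(SPOOFS_PER_LOOKUP)}
            if real_id in guesses:
                return attempt
        return None

    print(lookups_until_poisoned())   # typically a few hundred forced lookups

    # The stopgap patch adds a second random value (a randomized UDP source port),
    # multiplying the space the attacker must guess and slowing the attack greatly.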

Fooling Facebook's server would mean that the attacker could intercept messages that Facebook intended to send to users, which could allow him to get control of large numbers of accounts. The attacker could use similar techniques to intercept e-mail from other sources, or to get forged security certificates that could be used to more convincingly impersonate banking sites. "We haven't had a bug like this in a decade," Kaminsky says.

Because the attack takes advantage of an extremely common Internet transaction, the flaw is difficult to repair. "If you destroy this behavior, you destroy [the domain name system], and therefore you destroy the way the Internet works," Kaminsky says. But the temporary fix that's being distributed will keep most people safe for now. That fix helps by adding an additional random number that gives the attacker a much smaller chance of being able to guess correctly and pull off the impersonation. In the past month, he says, more than 120 million broadband consumers have been protected by patches, as have 70 percent of Fortune 500 companies. "If they're big and vulnerable, and I thought so, I've contacted them and raised holy hell," Kaminsky says. Facebook has applied the patch, as have Apple, LinkedIn, MySpace, Google, Yahoo, and others.

But it's still uncertain how to put a long-term solution in place. Kaminsky calls the current patch a "stopgap," which he hopes will hold off attackers while the security community seeks a more permanent fix. Jerry Dixon, director of analysis for Team Cymru and former executive director of the National Cyber Security Division and US-CERT, says that "longer-term fixes will take a lot of effort." Changes to the domain name system must be made cautiously, he says, adding, "It's the equivalent of doing heart surgery." It would be easy for a fix to cause unintended problems to the system. In the meantime, Dixon says, "if I were asked by the White House to assess this, I would say it's a bad vulnerability. People need to patch this."

Finding Evidence in Fingerprints

A technique reveals drugs and explosives on the scene.

Next on CSI: This series of images shows that fingerprint images made using mass spectrometry are comparable to those made using traditional means. In (A), mass spectrometry is used to produce a fingerprint by imaging the presence of cocaine; the mass-spectrometry fingerprint can be employed as a starting point for a computerized image (B) generated using commercial fingerprint-analysis software. Below, (C) and (D) show a traditional ink print made with the same fingertip, and the corresponding computer image. (Red and blue circles in the computer-generated images correspond to features of interest, such as where ridges intersect.)

A new method for examining fingerprints provides detailed maps of their chemical composition while creating traditional images of their structural features. Instead of taking samples back to the lab, law-enforcement agents could use the technique, a variation on mass spectrometry, to reveal traces of cocaine, other drugs, and explosives on the scene.

Fingerprints are traditionally imaged after coating crime-scene surfaces with chemicals that make them visible. These techniques can be destructive, and different methods must be used, depending on the surface under study, says John Morgan, deputy director of science and technology at the National Institute of Justice, the research branch of the U.S. Department of Justice. "Mass-spectrometric imaging could be a useful tool to image prints nondestructively on a wide variety of surfaces," says Morgan.

Traditional mass spectrometry, the gold standard for identifying chemicals in the lab, parses out the chemical components of a sample from measurements of their mass and charge, but it typically involves intensive sample preparation. It must be done in a vacuum, and the sample is destroyed during the process, making further examination impossible and eliminating the information about the spatial location of different molecules in the sample that is needed to create an image.

R. Graham Cooks, a professor of analytical chemistry at Purdue University who led the fingerprint research, and his group used a sample-collection technique that he developed in 2004 and that can be used with any commercial mass spectrometer. Desorption electrospray ionization uses a stream of electrically charged solvent, usually water, to dissolve chemicals in a fingerprint or any other sample on a hard surface. "The compounds dissolve, secondary droplets splash up and are then sucked into the mass spectrometer," explains Cooks. As the instrument scans over a surface, it collects thousands of data points about the chemical composition, each of which serves as a pixel. The mass-spectrometry method can create images of the characteristic ridges of fingerprints that also serve as maps of their chemical composition.
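In image terms, each scan position becomes a pixel whose brightness is the signal strength at a chosen chemical's mass-to-charge value. The Python toy below illustrates only that binning step; the scan data, grid size, and intensity scale are fabricated placeholders, not the Purdue group's data or software.

    import random

    GRID = 50   # pixels per side of the reconstructed image (arbitrary)

    # Stand-in scan: (x, y, intensity) triples, e.g. the signal strength at
    # cocaine's mass-to-charge value as the spray rasters across the print.
    scan = [(random.random(), random.random(), random.random()) for _ in range(5000)]

    image = [[0.0] * GRID for _ in range(GRID)]
    for x, y, intensity in scan:
        row, col = min(int(y * GRID), GRID - 1), min(int(x * GRID), GRID - 1)
        image[row][col] = max(image[row][col], intensity)  # brightest reading per pixel

    print(f"peak pixel value: {max(max(row) for row in image):.2f}")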

In a paper published in the journal Science this week, the Purdue researchers describe using the method to image clean fingerprints and prints made after subjects dipped their fingers in cocaine, the explosive RDX, ink, and two components of marijuana. Because the technique captures the fingers' ridges and whorls, "we know in the old-fashioned way who it was," says Cooks. The technique could also address the problem of overlapping fingerprints, which can be difficult to tell apart: fingerprints made by different individuals should have a different chemical composition. And "you also get information about what the person has been dealing with in terms of chemicals," says Nicholas Winograd, a chemist at Pennsylvania State University, who was not involved in the research.

Some of the chemicals found in fingerprints come from things people have handled; others are made by the body. The metabolites found in sweat are not well understood, but it's likely that they differ with age, gender, and other characteristics that would help identify suspects, says Cooks. Mass spectrometry could help uncover these variations. And Winograd says that the chemicals found in fingerprints might also provide information about drug metabolism and other medically interesting processes. Winograd, Cooks, and many others have recently begun using mass spectrometry to study the molecular workings of cancerous tissues and cells. Mass spectrometry might reveal that diagnostic information exists in sweat as well, says Winograd.

However, Morgan cautions that the work is preliminary and that the technology may prove too expensive for widespread adoption by law-enforcement agencies. Indeed, Cooks has not developed a commercial version of the fingerprint-analysis instrument.

"They have a long way to go," agrees Michael Cherry, vice chairman of the digital technology committee at the National Association of Criminal Defense Lawyers, who has extensive experience interpreting fingerprints. He says that Cooks's group has demonstrated the potential of the technology. However, after examining some fingerprint images made using mass spectrometry, Cherry says that the technology will require further development to be good enough to hold up in court.

An Artificial Pancreas

A device that reads glucose levels and delivers insulin may be close at hand.

Artificial pancreas: Scientists are pairing continuous glucose monitors, such as the device pictured here (white device, top), with insulin pumps, such as the one pictured here (pagerlike device, bottom), to create an artificial pancreas for people with diabetes. In this commercial system by Medtronic, the glucose monitor wirelessly transmits data to the pump via a meter (not pictured). However, the user must still decide how much insulin he needs and dose it out himself. In an artificial pancreas, specially designed algorithms would calculate how much insulin is required, and how quickly, and then signal the drug's delivery without human intervention.


Today, people with diabetes have a range of technologies to help keep their blood sugar in check, including continuous monitors that can keep tabs on glucose levels throughout the day and insulin pumps that can deliver the drug. But the diabetic is still responsible for making executive decisions--when to test his blood or give himself a shot--and the system has plenty of room for human error. Now, however, researchers say that the first generations of an artificial pancreas, which would be able to make most dosing decisions without the wearer's intervention, could be available within the next few years.

Type 1 diabetes develops when the islet cells of the human pancreas stop producing adequate amounts of insulin, leaving the body unable to regulate blood-sugar levels on its own. Left unchecked, glucose fluctuations over the long term can lead to nerve damage, blindness, stroke, and heart attacks. Even among the most vigilant diabetics, large dips and surges in glucose levels are still common occurrences. "We have data on hand today that suggests that you could get much better diabetes outcomes with the computer taking the lead instead of the person with diabetes doing it all themselves," says Aaron Kowalski, research director of the Juvenile Diabetes Research Foundation's Artificial Pancreas Project.

At its most basic level, an artificial pancreas consists of three components: a continuous sensor to detect glucose levels in real time, a miniature computer that can take those readings and use an algorithm to predict what will happen next and determine how much insulin is necessary to keep the levels steady, and an insulin pump driven by the computer that doses out the appropriate amount of the drug.

Two of the components--insulin pumps and continuous glucose monitors--are already on the commercial market (the latter received marketing approval by the U.S. Food and Drug Administration just a few years ago). "In the near term, you could probably create a pretty robust system with today's technologies," says Kowalski, whose group has spearheaded a coalition aimed at bringing an artificial pancreas to market as soon as possible.

Members of the consortium are experimenting with variations of this closed-loop system, so named because the computer algorithm connects the insulin pump and the glucose monitor, closing the loop. Perhaps the person closest to developing a commercial system is Roman Hovorka, a principal research associate at the University of Cambridge, in the U.K., where he leads the Diabetes Modelling Group. His first closed-loop study examined the effectiveness of the system when used overnight, during the hours when blood-sugar levels are likely to drop precipitously and complications can occur. "I want to move to an approach that could be commercialized, and the simplest is just to close the loop overnight, at a time when one cannot do too much about insulin anyway."


Hovorka used two devices, both commercially available. The first, a continuous glucose monitor, consists of a subcutaneous sensor that measures glucose levels in tissue beneath the skin and a device that communicates wirelessly with the sensor to download its data. The second is the pump itself, a pager-size device with an insulin reservoir that delivers the drug through a thin tube to a subcutaneous needle. Hovorka and his collaborators added an algorithm that not only put the pump and sensor in communication with each other, but also took the (sleeping) user out of the picture by determining precisely how much insulin to mete out every 15 minutes.

When tested in 12 children with type 1 diabetes, the closed-loop system brought the kids' blood-glucose levels into the target range 61 percent of the time, up from 23 percent for those who followed their normal routine. "With the closed loop, we are able to avoid the extremes--the extreme bad low and the extreme bad high," Hovorka says. He's currently working with device makers in the industry to create a marketable commercial product.

Technologically, the remaining obstacles for researchers are those of refinement--for example, constructing algorithms that are exquisitely honed to predict in which direction glucose levels are moving and at what rate. Other researchers are working on sensors that can monitor blood glucose over an extended period of time (currently, sensors must be replaced every three to eight days) and with improved accuracy.
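
As a toy illustration of what such prediction involves at its simplest, the sketch below fits a straight line to recent glucose readings and extrapolates ahead. Real control algorithms are far more sophisticated; the function and numbers here are purely hypothetical.

```python
# Illustrative sketch of trend prediction: fit a line to the last few readings
# and extrapolate. This is not a clinical algorithm, just the basic idea.
def predict_glucose(readings, minutes_ahead=30, sample_interval=5):
    """readings: recent glucose values (mg/dL), oldest first, one per sample_interval minutes."""
    n = len(readings)
    xs = [i * sample_interval for i in range(n)]
    mean_x, mean_y = sum(xs) / n, sum(readings) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, readings)) / \
            sum((x - mean_x) ** 2 for x in xs)
    intercept = mean_y - slope * mean_x
    return intercept + slope * (xs[-1] + minutes_ahead)


print(predict_glucose([140, 132, 127, 121, 118]))  # falling trend -> lower predicted value
```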

Despite the fact that much of the technology is on the market, researchers must still prove to the FDA that their system is safe when combined with the algorithms, and that if anything goes wrong--if a sensor goes wonky or the insulin pump clogs up--the computer can sense it and either set off an alarm or turn the whole system off.

"You don't have to get the perfect system to make a tremendous advance and make it considerably easier to live with diabetes," says William Tamborlane, chief of pediatric endocrinology at Yale School of Medicine, who invented insulin-pump therapy in the late 1970s. As a clinician, he's more interested in seeing these incremental advances make their way to the patients than in waiting for a perfect system to be created. "We now have sensors that can say what the blood sugar's doing every minute," Tamborlane says. "And we have insulin pumps that can change how much insulin it gives on a minute-to-minute basis. We have the technology right now to come pretty close to what might be considered the ultimate solution."

Tuesday, August 19, 2008

A Spherical Camera Sensor

A stretchable circuit allows researchers to make simple, high-quality camera sensors.

The eyes have it: This camera consists of a hemisphere-shaped array of photodetectors (white square with gold-colored dots) and a single lens atop a transparent globe. The curved shape of the photodetector array provides a wide field of view and high-quality images in a compact package.

Today's digital cameras are remarkable devices, but even the most advanced cameras lack the simplicity and quality of the human eye. Now, researchers at the University of Illinois at Urbana-Champaign have built a spherical camera that mimics the form and function of the human eye by fabricating a circuit on a curved surface.

The curved sensor has properties that are found in eyes, such as a wide field of view, that can't be produced in digital cameras without a lot of complexity, says John Rogers, lead researcher on the project. "One of the most prominent [features of the human eye] is that the detector surface of the retina is not planar like the digital chip in a camera," he says. "The consequence of that is [that] the optics are well suited to forming high-quality images even with simple imaging elements, such as the single lens of the cornea."

Electronic devices have been, for the most part, built on rigid, flat chips. But over the past decade, engineers have moved beyond stiff chips and built circuits on bendable sheets. More recently, researchers have gone a step beyond simple bendable electronics and put high-quality silicon circuits on stretchable, rubberlike surfaces. The advantage of a stretchable circuit, says Rogers, is that it can conform over curvy surfaces, whereas bendable devices can't.

The key to the spherical camera is a sensor array that can be stretched or compressed by roughly 50 percent without breaking, allowing it to be removed from the stiff wafer on which it was originally fabricated and transferred to a rubberlike surface. "Doing that requires more than just making the detector flexible," says Rogers. "You can't just wrap a sphere with a sheet of paper. You need stretchability in order to do a geometry transformation."

The array, which consists of a collection of tiny squares of silicon photodetectors connected by thin ribbons of polymer and metal, is initially fabricated on a silicon wafer. It is then removed from the wafer with a chemical process and transferred to a piece of rubberlike material that was previously formed into a hemisphere shape. At the time of transfer, the rubber hemisphere is stretched out flat. Once the array has been successfully adhered to the rubber, the hemisphere is relaxed into its natural curved shape.

Because the ribbons that connect the tiny islands of silicon are so thin, they are able to bend easily without breaking, Rogers says. If two of the silicon squares are pressed closer together, the ribbons buckle, forming a bridge. "They can accommodate strain without inducing any stretching in the silicon," he says.

To complete the camera, the sensor array is wired to a circuit board, which in turn connects to a computer that controls the camera. The array is capped with a globelike cover fitted with a lens. In this setup, the sensor array mimics the retina of a human eye while the lens mimics the cornea.

Stretchable mesh: The square silicon photodetectors, connected by thin ribbons of metal and polymer, are mounted on a hemisphere-shaped rubber surface. The entire device is able to conform to any curvilinear shape due to the flexibility of the ribbons that connect the silicon islands. Credit: Beckman Institute, University of Illinois

"This technology heralds the advent of a new class of imaging devices with wide-angle fields of view, low distortion, and compact size," says Takao Someya, a professor of engineering at the University of Tokyo, who was not involved in the research. "I believe this work is a real breakthrough in the field of stretchable electronics."

Rogers isn't the first to use the concept of a stretchable electronic mesh, but unlike earlier meshes, his is not constrained to stretching in limited directions. And importantly, it is the first stretchable mesh to be implemented in an artificial-eye camera.

The camera's resolution is 256 pixels. At the moment, it's difficult to improve resolution due to the limitations of the fabrication facilities at the University of Illinois, says Rogers. "At some level, it's a little frustrating because you have this neat electronic eye and everything's pixelated," he says. But his team has sidestepped the problem by taking another cue from biology. The human eye dithers from side to side, constantly capturing snippets of images; the brain pieces the snippets together to form a complete picture. In the same way, Rogers's team runs a computer program that makes the images crisper by interpolating multiple images taken from different angles.
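
The sketch below illustrates the general shift-and-add idea behind that approach, assuming frames captured at known sub-pixel offsets; it is not the Illinois group's actual reconstruction software.

```python
# Rough sketch of the "dithering" idea: several coarse frames captured at known
# sub-pixel offsets are accumulated onto a finer grid (generic shift-and-add).
import numpy as np

def shift_and_add(frames, offsets, upscale=4):
    """frames: list of 2-D arrays (e.g., 16x16 for a 256-pixel sensor).
    offsets: matching list of (dy, dx) shifts in units of a coarse pixel."""
    h, w = frames[0].shape
    accum = np.zeros((h * upscale, w * upscale))
    counts = np.zeros_like(accum)
    for frame, (dy, dx) in zip(frames, offsets):
        ys = (np.arange(h) * upscale + int(round(dy * upscale))) % (h * upscale)
        xs = (np.arange(w) * upscale + int(round(dx * upscale))) % (w * upscale)
        accum[np.ix_(ys, xs)] += frame   # drop each coarse pixel onto the fine grid
        counts[np.ix_(ys, xs)] += 1
    return accum / np.maximum(counts, 1)

# Example: four 16x16 frames offset by quarter-pixel steps.
rng = np.random.default_rng(0)
frames = [rng.random((16, 16)) for _ in range(4)]
offsets = [(0, 0), (0, 0.25), (0.25, 0), (0.25, 0.25)]
print(shift_and_add(frames, offsets).shape)  # (64, 64)
```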

The most immediate application for these eyeball cameras, says Rogers, is most likely with the military. The simple, compact design could be used in imaging technology in the field, he suggests. And while the concept of an electronic eye conjures up images of eye implants, Rogers says that at this time he is not collaborating with other researchers to make these devices biocompatible. However, he's not ruling out the possibility in the future.

Shape Matters for Nanoparticles

Particles the size and shape of bacteria could more effectively deliver medicine to cells.

Cell invaders: Cylindrical nanoparticles slip easily into cells. They could be used to deliver drugs to cancerous tissues.

Nanoparticles shaped to resemble certain bacteria can more easily infiltrate human cells, according to a new study. The results suggest that altering the shape of nanoparticles can make them more effective at treating disease.

Joseph DeSimone, a professor of chemistry and chemical engineering at the University of North Carolina at Chapel Hill and at North Carolina State University, tested how nano- and microparticles shaped like cubes, squat cylinders, and long rods were taken up into human cells in culture. He found that long, rod-shaped particles slipped into cells at four times the rate of short, cylindrical shapes with similar volumes. DeSimone, who reported the findings this week in the Proceedings of the National Academy of Sciences, notes that the faster nanoparticles resemble certain types of bacteria that are good at infecting cells. "A lot of rodlike bacteria get into cells quickly," he says. "Using the same size and shape, our particles get in very quickly too."

Researchers have long suspected that mimicking the distinctive shapes of bacteria, fungi, blood cells--even pollen--could improve the ability of nanoparticles to deliver drugs to diseased cells in the body. But it has been difficult to test this suspicion. What's needed is a way to quickly make billions of particles of identical size, chemistry, and shape, and then systematically vary these parameters to learn what effect they have.

DeSimone developed a way to easily design and test a wide variety of particle shapes, while at the same time controlling for size and chemical composition. For example, he can make particles of various shapes--boomerangs, donuts, hex nuts, cylinders, cubes--while keeping the size constant. He can also make boomerang-shaped particles of various sizes, or keep size and shape constant and vary only the chemical composition of the particles. The process gives researchers an unprecedented level of control, he says, which makes it easy to quickly test how changing various parameters of the nanoparticles, including shape, affect how they behave in tissues.

"Historically, most of the work with particles has been with spherical particles because making particles of different shapes has been very challenging," says Samir Mitragotri, a professor of chemical engineering at the University of California, Santa Barbara. DeSimone "demonstrates a very powerful technology that shows [that] particles of different shapes and materials can be prepared," Mitragotri says. "It goes well beyond current tools." He adds that the paper shows that "shape makes a big difference in biological response."

DeSimone also identified the precise mechanisms by which cells take in particles of different shapes. These mechanisms determine where the particles end up inside the cell. This new data could help researchers design particles that reach particular compartments within a cell that have a known level of acidity. The researchers could then fine-tune the particles so that they break down and release their cargo only once they reach the desired compartment. That way, the particles will only release drugs inside targeted cells, leaving healthy cells unharmed.

DeSimone is using his manufacturing technique to produce nanoparticles that deliver drugs to cancer cells. He's starting trials in mice for a number of cancer types--breast, ovarian, cervical, lung, prostate--and lymphoma. He's able to conduct so many trials because it's easy to add different treatment molecules to his particles. Particles developed for targeting breast cancer can easily be changed to target lung cancer, for example. During the tests, DeSimone will systematically vary doses, sizes, and so on to determine the least toxic, most effective combinations. "You can now barrage a lot of different cancers and look at what's the most efficacious design parameters you can put in the system," he says.

DeSimone has developed particles that resemble red blood cells in size, shape, and flexibility to help them circulate in the bloodstream without being removed by biological barriers. (He's testing these in animals as a potential basis for artificial blood.) He is also testing long, wormlike particles that can't easily be consumed by macrophages. "The particle has to overcome so many hurdles before it reaches its destination," Mitragotri says. Previously, researchers have been limited to changing the size and chemistry of particles. Adding the ability to control shape provides a "big boost in overcoming these hurdles," Mitragotri says.

Cloud Computing's Perfect Storm?

An Intel, Yahoo, and HP initiative will use large-scale research projects to test a new Internet-based computing infrastructure.

Last week, Intel, Yahoo, HP, and an international trio of research institutions announced a joint cloud-computing research initiative. The ambitious six-site project is aimed at developing an Internet-based computer infrastructure stable enough to host companies' most critical data-processing tasks. The project also holds an unusual promise for advances in fields as diverse as climate change modeling and molecular biology.

The new array of six linked data centers, one operated by each project sponsor, will be one of the largest experiments to date focusing on cloud computing--an umbrella term for moving complex computing tasks, such as data processing and storage, into a network-connected "cloud" of external data centers, which might perform similar tasks for multiple customers.

The project's large scope will allow researchers to test and develop security, networking, and infrastructure components on a large scale simulating an open Internet environment. But to test this infrastructure, academic researchers will also run real-world, data-intensive projects that, in their own right, could yield advances in fields as varied as data mining, context-sensitive Web search, and communication in virtual-reality environments.

"Making this marriage of substantial processing power, computing resources, and data resources work efficiently, seamlessly, and transparently is the challenge," says Michael Heath, interim head of the computer-science department at the University of Illinois at Urbana-Champaign, an institute that is part of the alliance. Heath says that for the project to be successful, the team, which also includes Germany's Karlsruhe Institute of Technology and Singapore's Infocomm Development Authority, needs "to be running realistic applications."

Much of the technology industry has recently focused on cloud computing as a next critical architectural advance, but even backers say that the model remains technologically immature.

Web-based software and the ability to "rent" processing power or data storage from outside companies are already common. The most ambitious visions of cloud computing expand on this, predicting that companies will ultimately use remotely hosted cloud services to perform even their most complex computing activities. However, creating an online environment where these complicated tasks are secure, fast, reliable, and simple still presents considerable challenges.

Virtually every big technology company, including Google, IBM, Microsoft, and AT&T, already has a cloud-computing initiative. Farthest along commercially may be Amazon, whose Web Services division already hosts computing, storage, databases, and other resources for some customers.

The new cloud-computing project will consist of six computing clusters, one housed with each founding member of the partnership and each containing between 1,000 and 4,000 processors. Each of the companies involved has a specific set of research projects planned, many broadly focused on operational issues such as security, load balancing, managing parallel processes at very large scale, and configuring and securing virtual machines across different locations.

Researchers will be given unusually broad latitude to modify the project's architecture from top to bottom, developing and experimenting with ideas applying to hardware, software, networking functions, and applications. Project managers say that one goal is to see how changes at one technical level affect others.

"In the cloud, we have the opportunity for integrated design, where one entity can make design choices across an entire environment," says Russ Daniels, chief technology officer of HP's cloud-services division. "This way, we can understand the impact of design choices that we make at the infrastructure level, as well as the impact they have on higher-level systems."

HP, for example, will be focusing in part on an ongoing project called Cells as a Service, an effort to create secure virtual "containers" that are composed of virtual machines, virtual storage volumes, and virtual networks. The containers can be split between separate data centers but still treated by consumers as a traditional, real-world collection of hardware.

Among Yahoo's specific projects will be the development of Hadoop, an open-source software platform for creating large-scale data-processing and data-querying applications. Yahoo has already built one big cloud-computing facility called M45 that is operated in conjunction with Carnegie Mellon University. M45 will also be folded into this new project.

Running in parallel with this systems-level research will be an assortment of other research projects designed to test the cloud infrastructure.

Computer scientists at the Illinois facility have a handful of data- and processing-intensive projects under way that are likely to be ported to the cloud facilities. According to Heath, one key thrust will be "deep search" and information extraction, such as allowing a computer to understand the real-world context of the contents found in a Web page. For example, today's search engines have difficulty understanding that a phone number is in fact an active phone number, rather than just a series of digits. A project run by Urbana-Champaign professor Kevin Chang is exploring the idea of using the massive quantities of data collected by Web-wide search engines as a kind of cross-reference tool, so that the joint appearance of "555-1212" with "John Borland" multiple times online might identify the number as a phone number and associate it with that particular name.
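
As a toy illustration of the co-occurrence idea (at Web scale this kind of counting would run as a large distributed job, for instance on Hadoop), the sketch below tallies how often a phone-number-like string appears on the same page as a name. The patterns and sample pages are purely illustrative.

```python
# Toy sketch of co-occurrence mining: count (name, number) pairs seen on the same page.
import re
from collections import Counter

PHONE = re.compile(r"\b\d{3}-\d{4}\b")             # simplistic phone-number pattern
NAME = re.compile(r"\b[A-Z][a-z]+ [A-Z][a-z]+\b")  # simplistic "First Last" pattern

def cooccurrences(pages):
    counts = Counter()
    for text in pages:
        for number in set(PHONE.findall(text)):
            for name in set(NAME.findall(text)):
                counts[(name, number)] += 1
    return counts

pages = [
    "You can reach John Borland at 555-1212 for details.",
    "John Borland (555-1212) wrote the cloud-computing story.",
]
print(cooccurrences(pages).most_common(1))  # [(('John Borland', '555-1212'), 2)]
```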

Heath says that other projects might include experiments with tele-immersive communication--virtual-reality-type environments that let computers provide physical, or haptic, feedback to users as they communicate or engage in real-world activities controlled remotely over the Web.

In an e-mail, Intel Research vice president Andrew Chien said that other topics could include climate modeling, molecular biology, industrial design, and digital library research.

"By looking at what people are really doing, we will learn about what is really important from an infrastructure perspective," says Raghu Ramakrishnan, chief scientist for Yahoo's Cloud Computing and Data Infrastructure Group. "We already know enough to put forth systems that are usable today, but not enough that we can deliver on all the promise that people see in the paradigm."

The Brain Unmasked

New imaging technologies reveal the intricate architecture of the brain, creating a blueprint of its connectivity.

Brain mapping: A variation on MRI called diffusion spectrum imaging allows scientists to map the neural fibers that relay signals in the brain. Each fiber in the image represents hundreds to thousands of fibers in the brain, each traveling along the same path. Credit: George Day, Ruopeng Wang, Jeremy Schmahmann, Van Wedeen, MGH

The typical brain scan shows a muted gray rendering of the brain, easily distinguished by a series of convoluted folds. But according to Van Wedeen, a neuroscientist at Massachusetts General Hospital, in Boston, that image is just a shadow of the real brain. The actual structure--a precisely organized tangle of nerve cells and the long projections that connect them--has remained hidden until relatively recently.

Traditional magnetic resonance imaging, or MRI, can detect the major anatomical features of the brain and is often used to diagnose strokes and brain tumors. But advances in computing power and novel processing algorithms have allowed scientists to analyze the information captured during an MRI in completely new ways.

Diffusion spectrum imaging (DSI) is one of these twists. It uses magnetic resonance signals to track the movement of water molecules in the brain: water diffuses along the length of neural wires, called axons. Scientists can use these diffusion measurements to map the wires, creating a detailed blueprint of the brain's connectivity.
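
The sketch below gives a highly simplified picture of how such a map becomes fiber paths: starting from a seed point, a streamline follows the locally dominant diffusion direction voxel by voxel. Real DSI reconstruction is far more involved, and the direction field here is synthetic.

```python
# Highly simplified streamline fiber tracking over a synthetic 2-D direction field.
import numpy as np

def track_fiber(directions, seed, step=0.5, n_steps=200):
    """directions: array of shape (X, Y, 2) holding a unit vector per voxel (2-D for brevity).
    seed: starting (x, y) position in voxel coordinates."""
    path = [np.asarray(seed, dtype=float)]
    for _ in range(n_steps):
        x, y = np.round(path[-1]).astype(int)
        if not (0 <= x < directions.shape[0] and 0 <= y < directions.shape[1]):
            break  # left the imaged volume
        path.append(path[-1] + step * directions[x, y])
    return np.array(path)

# Toy direction field: fibers curving gently across a 32x32 grid.
xs, ys = np.meshgrid(np.arange(32), np.arange(32), indexing="ij")
angles = 0.05 * ys
field = np.stack([np.cos(angles), np.sin(angles)], axis=-1)
fiber = track_fiber(field, seed=(1.0, 1.0))
print(fiber.shape)  # one traced path, a list of (x, y) points
```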

On the medical side, radiologists are beginning to use the technology to map the brain prior to surgery, for example, to avoid important fiber tracts when removing a brain tumor. Wedeen and others are now using diffusion imaging to better understand the structures that underlie our ability to see, to speak, and to remember. Scientists also hope that the techniques will grant new insight into diseases linked to abnormal wiring, such as schizophrenia and autism.

An accompanying animation shows the wiring of a marmoset monkey's brain.

The marmoset brain is about the size of a plum. By scanning a dissected brain for 24 hours, scientists were able to generate a map with a spatial resolution of 400 microns. "The image quality and resolution are much higher than we can obtain in a living subject," says Wedeen.

As the brain rotates, you can see that all the neural fibers are visualized in half of the brain: the spiky fibers that look like pins in a pincushion are part of the cerebral cortex. The sparser half of the image displays only the fibers originating in the opposite side.

It's easy to see that this brain lacks the folding that is characteristic of the human brain. "The human brain would look 25 times as complicated," says Wedeen. "Every gyrus [fold] has its own story to tell."

Compressing Light

A new way to confine light could enable better optical communications and computing.

Guiding light: Light can be compressed between a semiconductor nanowire and a smooth sheet of silver, depending on the nanowire’s diameter and its height above the metal surface. Here, light is confined in a 100-nanometer gap by a nanowire with a 200-nanometer diameter.

A new way to compress light, designed by researchers at the University of California, Berkeley, could make optical communications on computer chips more practical. The researchers developed computer simulations that suggest that it is possible to confine infrared light to a space 10 nanometers wide. What's more, unlike other techniques for compressing light, the configuration will allow light to travel up to 150 microns without losing its energy, which is key for small optical systems.

Scaling down optical devices is important for future optical communications and computing. Light-based communications use wavelengths on the order of microns to carry information, and they are successful in large-scale applications such as optical fiber networks that span oceans. But to transmit data over short distances, like between circuit components on a microchip, long-wavelength light must be squeezed into tiny spaces.

Previously, scientists have effectively shrunk light by converting it into waves that travel along the surface of metals. But these waves lose their energy before they can carry information over useful distances. Optical fiber, on the other hand, carries light for kilometers with little energy loss, but it cannot be miniaturized to much less than half the wavelength of the light it carries.

The Berkeley researchers combined these techniques to both compress the light and allow it to travel far enough to transmit information on computer chips. They place a semiconductor nanowire, made of a material such as gallium arsenide, within nanometers of a thin sheet of silver. Without the nanowire, light converted into surface waves would spread out over the silver sheet and its energy would quickly dissipate. With the nanowire present, charges pile up on both the silver and the nanowire surfaces, trapping light energy between them; in effect, the nanowire confines and guides the surface waves, preventing them from spreading out over the metal.

Using computer simulations to tune both the diameter of the nanowire and the distance between the nanowire and the metal, the researchers found an optimal arrangement that would allow light to be squeezed into the smallest space possible while still retaining a sufficient amount of energy: a nanowire with a 200-nanometer diameter placed 10 nanometers above the silver surface would give the best combination of results for communications wavelengths of about 1.5 microns.
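
The group's electromagnetic simulations are not described in detail, but a design sweep of the kind reported might be organized as in the sketch below; solve_mode is a hypothetical placeholder for a full-wave mode solver, not working physics.

```python
# Sketch of how a design sweep over nanowire diameter and gap might be organized.
# solve_mode is a hypothetical stand-in that would wrap a real mode solver.
from itertools import product

def solve_mode(diameter_nm, gap_nm, wavelength_nm=1500):
    """Placeholder: a real implementation would return the mode's confinement
    (spot size, nm) and propagation length (microns) from a full-wave simulation."""
    raise NotImplementedError

def sweep(diameters_nm, gaps_nm, min_propagation_um=150):
    best = None
    for d, g in product(diameters_nm, gaps_nm):
        spot_nm, propagation_um = solve_mode(d, g)
        if propagation_um < min_propagation_um:
            continue  # discard designs that lose energy too quickly
        if best is None or spot_nm < best[0]:
            best = (spot_nm, d, g)
    return best  # tightest confinement that still propagates far enough

# e.g., sweep(diameters_nm=range(100, 500, 50), gaps_nm=(2, 5, 10, 25, 100))
```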


Shape shifting: Light is confined to different parts of the waveguide when the diameter or height of the nanowire changes. From left to right: light travels inside a 400-nanometer nanowire placed 100 nanometers above the surface; some light begins to travel between the nanowire and the surface when the diameter is reduced to 200 nanometers; when the nanowire is just two nanometers above the surface, light is trapped in the tiny gap for both 200-nanometer and 400-nanometer nanowires.

"This could truly enable a revolution in the [nanophotonics] field," says Marin Soljacic, a physics professor at MIT. For example, the resolutions of sensing and imaging techniques are limited by the wavelength of light they use to measure objects; anything beneath the resolution can't be seen. A device that confines light beyond its natural wavelength, however, could measure and return information about what lies beyond these limits.

The group is cautiously optimistic about its innovation. "This is probably our biggest breakthrough in the last seven or eight years," says Xiang Zhang, a professor of mechanical engineering at UC Berkeley, who led the research. "But we still have a long way to go." The researchers have already started to demonstrate in experimental devices the performance that their simulations predicted. However, they have only tested the devices at visible wavelengths, which are hundreds of nanometers shorter than the infrared wavelengths used in communications. And while a propagation distance of 150 microns is good, says Zhang, they want a distance of at least a millimeter for practical devices on integrated chips.

With continued refinement, the technique could play several roles in optical computing. The setup could be used to steer light through certain paths on chips. The group is even toying with the idea of using the device to produce an ultrasmall light source. Still, any practical devices are several years away. "They will have to master the fabrication," says Soljacic. "But the simulations seem convincing, and I have complete faith that it will work."

Spit Sensor Spots Oral Cancer

An ultrasensitive optical protein sensor analyzes saliva.

Analyzing spit: Leyla Sabet, a member of the UCLA research team that built the new optical protein sensor, sits in front of the device. Based on a confocal microscope, the ultrasensitive system is being used by the researchers to detect biomarkers in saliva samples that are linked to oral cancer.

An optical sensor developed by researchers at the University of California, Los Angeles (UCLA), can for the first time measure proteins in saliva that are linked to oral cancer. The device is sensitive enough that it could allow doctors and dentists to detect the disease early, when patient survival rates are high.

The researchers are currently working with the National Institutes of Health (NIH) to push the technology to clinical tests so that it can be developed into a device that can be used in dentists' offices. Chih-Ming Ho, a scientist at UCLA and principal investigator for the sensor, says that it is a versatile instrument and can be used to detect other disease-specific biomarkers.

When oral cancer is identified in its early stages, patient survival rate is almost 90 percent, compared with 50 percent when the disease is advanced, says Carter Van Waes, chief of head and neck surgery at the National Institute on Deafness and Other Communication Disorders (NIDCD). The American Cancer Society estimates that there will be 35,310 new cases of oral cancer in the United States in 2008. Early forms are hard to detect just by visual examination of the mouth, says Van Waes, so physicians either have to perform a biopsy--remove tissue for testing--or analyze proteins in blood.

Detecting cancer biomarkers in saliva would be a much easier test to perform, but it is also technically more challenging: protein markers are harder to spot in saliva than in blood. To create the ultrasensitive sensor, researchers started with a glass substrate coated with a protein called streptavidin that enables other biomolecules to bind to the substrate and to one another. The researchers then added a molecule that would catch and bind the cancer biomarker--a protein in saliva called IL-8 that previous research has proved to be related to oral cancer. They also added molecules designed to keep the glass surface free of other proteins that might muddy detection of the biomarker. To visualize the target molecules, Ho's team then added a set of fluorescently tagged proteins designed to attach to the captured IL-8 markers.

Because saliva has a lower concentration of proteins than blood does, the team needed a highly sensitive method to detect the tagged proteins among the background noise, stray molecules in saliva that also fluoresce. So the researchers used a confocal microscope--an imaging system that employs a laser to collect the light generated from a sample--to analyze the saliva. Ho and his team found that focusing the laser light on a specific part of the sample resulted in a higher signal-to-noise ratio, allowing them to detect lower concentrations of the cancer biomarker.

Better Batteries Charge Up

A startup reports progress on a battery that stores more energy than lithium-ion ones.

A Texas startup says that it has taken a big step toward high-volume production of an ultracapacitor-based energy-storage system that, if claims hold true, would far outperform the best lithium-ion batteries on the market.

Dick Weir, founder and chief executive of EEStor, a startup based in Cedar Park, TX, says that the company has manufactured materials that have met all certification milestones for crystallization, chemical purity, and particle-size consistency. The results suggest that the materials can be made at a high-enough grade to meet the company's performance goals. The company also said a key component of the material can withstand the extreme voltages needed for high energy storage.

"These advancements provide the pathway to meeting our present requirements," Weir says. "This data says we hit the home run."

EEStor claims that its system, called an electrical energy storage unit (EESU), will have more than three times the energy density of the top lithium-ion batteries today. The company also says that the solid-state device will be safer and longer lasting, and will have the ability to recharge in less than five minutes. Toronto-based ZENN Motor, an EEStor investor and customer, says that it's developing an EESU-powered car with a top speed of 80 miles per hour and a 250-mile range. It hopes to launch the vehicle, which the company says will be inexpensive, in the fall of 2009.

But skepticism in the research community is high. At the EESU's core is a ceramic material consisting of a barium titanate powder that is coated with aluminum oxide and a type of glass material. At a materials-research conference earlier this year in San Francisco, the question was raised of whether such an energy-storage device is even possible. "The response was not very positive," said one engineering professor who attended.

Many have questioned EEStor's claims, pointing out that the high voltages needed to approach the targeted energy storage would cause the material to break down and the storage device to short out. There would be little tolerance for impurities or imprecision--something difficult to achieve in a high-volume manufacturing setting, skeptics say.

But Weir is dismissive of such reactions. "EEStor is not hyping," he says. Representatives of the company said in a press release that certification data proves that voltage breakdown of the aluminum oxide occurs at 1,100 volts per micron--more than three times EEStor's target of 350 volts per micron. "This provides the potential for excellent protection from voltage breakdown," the company said.
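
For context on why the breakdown field matters, the back-of-the-envelope sketch below applies the textbook energy-density formula for a linear dielectric (one-half times the permittivity times the square of the field). The relative permittivity used is an arbitrary placeholder, not a figure claimed by EEStor.

```python
# Why breakdown field and permittivity both matter: for an ideal linear dielectric,
# stored energy density is u = 1/2 * eps0 * eps_r * E^2.
EPS0 = 8.854e-12          # vacuum permittivity, F/m
eps_r = 1000              # hypothetical relative permittivity (placeholder value)
field_v_per_m = 350e6     # EEStor's stated operating target: 350 volts per micron

u = 0.5 * EPS0 * eps_r * field_v_per_m ** 2   # joules per cubic meter
wh_per_liter = u / 3.6e6                      # divide by 3,600 J/Wh and 1,000 L/m^3
print(f"{wh_per_liter:.0f} Wh per liter at these assumed values")
```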

Jeff Dahn, a professor of advanced materials in the chemistry and physics departments at Dalhousie University, in Nova Scotia, Canada, says the data suggests that EEStor has developed an "amazingly robust" material. "If you're going to have a one-micron dielectric, it's got to be pretty pure," he says.

Ian Clifford, CEO of ZENN Motor, says that the news "bodes well" for EEStor's next milestone: third-party verification that the powders achieve the desired high level of permittivity, which will help determine whether the materials can meet the company's energy-storage goals.

Weir says that EEStor's latest production milestones lay the foundation for what follows. It has taken longer than originally expected, he says, but the company is now in a position to deploy more-advanced technologies for the production of military-grade applications, alluding to EEStor's partnership with Lockheed Martin.

Weir says that momentum is building and that he'll start coming out with information about the company's progress on a "more rapid basis." Plans are also under way for a major expansion of EEStor's production lines. "There's nothing complex in this," he says, pointing to his past engineering days at IBM. "It's nowhere near the complexity of disk-drive fabrication."

Despite its critics, EEStor has won support from some significant corners. In addition to Lockheed Martin, venture-capital firm Kleiner Perkins Caufield & Byers is an investor, and former Dell Computer chairman Morton Topfer sits on EEStor's board.

The company is also in serious talks with potential partners in the solar and wind industry, where EEStor's technology can, according to Weir, help put 45 percent more energy into the grid. He says that the company is working toward commercial production "as soon as possible in 2009," although when asked, he gave no specific date. "I'm not going to make claims on when we're going to get product out there. That's between me and the customer. I don't want to tell the industry."

Dahn says that he hopes EEStor will succeed. "I hope it works like a charm, because it will be a lot easier than fuel cells and batteries if it comes to pass."