Researchers have revealed details of China's latest homegrown microprocessor.
Enter the dragon: This single-core central processing unit, known as Loongson, or "dragon chip," was designed and manufactured in China. Chinese engineers aim to deploy quad-core chips by 2009.
In California last week, Chinese researchers unveiled details of a microprocessor that they
hope will bring personal computing to most ordinary people in China by 2010. The chip,
code-named Godson-3, was developed with government funding by more than 200 researchers at
the Chinese Academy of Sciences' Institute of Computing Technology (ICT).
China is making a late entry into chip making, admits Zhiwei Xu, deputy director of ICT.
"Twenty years ago in China, we didn't support R&D for microprocessors," he said during a
presentation last week at the Hot Chips conference, in Palo Alto. "The decision makers and
[Chinese] IT community have come to realize that CPUs [central processing units] are
important."
Tom Halfhill, an analyst at research firm In-Stat, says that the objective for China is to
take control of the design and manufacture of vital technology. "Like America wants to be
energy independent, China wants to be technology independent," Halfhill says. "They don't
want to be dependent on outside countries for critical technologies like microprocessors,
which are, nowadays, a fundamental commodity." Federal laws also prohibit the export of
state-of-the-art microprocessors from the United States to China, meaning that microchips
shipped to China are usually a few generations behind the newest ones in the West.
Despite its late start, China is making rapid progress. The ICT group began designing a
single-core CPU in 2001, and by the following year had developed Godson-1, China's first
general-purpose CPU. In 2003, 2004, and 2006, the team introduced ever faster versions of a
second chip--Godson-2--based on the original design. According to Xu, each new chip tripled
the performance of the previous one.
Godson chips are manufactured in China by a Swiss company called ST Microelectronics and are
available commercially under the brand name Loongson, meaning "dragon chip." Loongson chips
already power some personal computers and servers on the Chinese market, which come with the
Linux operating system and other open-source software. "They use a lot of open-source
software because it's free," says Halfhill. "The Chinese government wants to get as many PCs
into schools and as many workplaces as they can."
The latest Godson chips will also have a number of advanced features. Godson-3, a chip with
four cores--processing units that work in parallel--will appear in 2009, according to Xu,
and an eight-core version is also under development. Both versions will be built using
65-nanometer lithography processes, which are a generation older than Intel's current
45-nanometer processes. Importantly, Godson-3 is scalable, meaning that more cores can be
added to future generations without significant redesign. Additionally, the architecture
allows engineers to precisely control the amount of power that it uses. For instance, parts
of the chip can be shut down when they aren't in use, and cores can operate at various
frequencies, depending on the tasks that they need to perform. The four-core Godson-3 will
consume 10 watts of power, and the eight-core chip will consume 20 watts, says Xu.
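For a rough sense of how those figures relate to the power-management features described above, the short Python sketch below models a chip whose cores can be gated off or run at reduced frequency. The 2.5-watt-per-core number is just the quoted 10 watts divided by four cores; the idle fraction and the linear frequency scaling are illustrative assumptions, not disclosed Godson-3 parameters.

    # Illustrative power-budget model for a multicore chip with per-core clock
    # gating and frequency scaling. Only the 10 W / 4-core and 20 W / 8-core
    # totals come from the article; everything else is an assumption.

    FULL_SPEED_WATTS_PER_CORE = 10.0 / 4   # 2.5 W, from the quoted 4-core budget
    IDLE_FRACTION = 0.1                    # assumed residual draw of a gated core

    def core_power(active, relative_frequency):
        """Rough power estimate for one core; gated cores draw a small residual."""
        if not active:
            return FULL_SPEED_WATTS_PER_CORE * IDLE_FRACTION
        return FULL_SPEED_WATTS_PER_CORE * relative_frequency

    def chip_power(core_states):
        """core_states: list of (active, relative_frequency) pairs."""
        return sum(core_power(active, freq) for active, freq in core_states)

    # All eight cores of the planned eight-core part at full speed:
    print(chip_power([(True, 1.0)] * 8))   # 20.0 W, matching the quoted figure
    # Four cores gated off, two at half speed, two at full speed:
    print(chip_power([(False, 0.0)] * 4 + [(True, 0.5)] * 2 + [(True, 1.0)] * 2))
    # -> 8.5 W under these assumptions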
This latest chip will also be fundamentally different from those made before. Neither
Godson-1 nor -2 is compatible with Intel's so-called x86 architecture, meaning that most
commercial software will not run on them. But engineers have added 200 extra instructions to Godson-3 that allow it to emulate an x86 chip, letting it run more
software, including the Windows operating system. And because the chip architecture is only
simulated, there is no need to obtain a license from Intel.
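The article does not say how those extra instructions are used in practice, but the general technique they support, translating foreign x86 instructions into native ones before execution, can be sketched in a few lines of Python. The opcode names and mappings below are invented for the illustration and are not Godson-3's actual instruction set.

    # Toy illustration of binary translation: x86-style operations are mapped onto
    # native instruction sequences, with extra native instructions added to handle
    # cases that would otherwise need long emulation routines. All opcode names
    # here are made up for the example.

    NATIVE_EQUIVALENTS = {
        "ADD": ["n.add"],                  # direct one-to-one mapping
        "MOV": ["n.load", "n.store"],      # one foreign op becomes a short sequence
        "FLAG_UPDATE": ["n.setflags"],     # hypothetical helper added to speed emulation
    }

    def translate(x86_stream):
        """Translate a list of x86-style mnemonics into native mnemonics."""
        native = []
        for op in x86_stream:
            if op not in NATIVE_EQUIVALENTS:
                raise NotImplementedError("no translation for " + op)
            native.extend(NATIVE_EQUIVALENTS[op])
        return native

    print(translate(["MOV", "ADD", "FLAG_UPDATE"]))
    # ['n.load', 'n.store', 'n.add', 'n.setflags']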
Erik Metzger, a patent attorney at Intel, says that the chip will only perform at about 80
percent of the speed of an actual x86 chip. "That implies that [the Chinese government] is
going after a low-end market," he says. This is the same market that Intel is targeting with
its Classmate PC and low-power Atom microprocessor. Metzger adds that the inner workings of
the chip, known as its instruction set, have not yet been disclosed, making it difficult to
know if or how any x86 patents may have been breached.
The Chinese team hopes to further boost its chip program through collaboration with other
companies and researchers. "We still lag behind the international partners a lot," says Xu.
"But we are doing our best to join the international community."
Stretchy, High-Quality Conductors
Materials made from nanotubes could lead to conformable computers that stretch around any shape.
Malleable matrix: A researcher stretches a mesh of transistors connected by elastic conductors that were made at the University of Tokyo.
By adding carbon nanotubes to a stretchy polymer, researchers at the University of Tokyo made a conductive material that they used to connect organic transistors in a stretchable electronic circuit. The new material could be used to make displays, actuators, and simple computers that wrap around furniture, says Takao Someya, a professor of engineering at the University of Tokyo. The material could also lead to electronic skin for robots, he says, which could use pressure sensors to detect touch while accommodating the strain at the robots' joints. Importantly, the process that the researchers developed for making long carbon nanotubes could work on the industrial scale.
"The measured conductivity records the world's highest value among soft materials," says Someya. In a paper published last week in Science, Someya and his colleagues claim a conductivity of 57 siemens per centimeter, which is lower than that of copper, the metal normally used to connect transistors, but two orders of magnitude higher than that of previously reported polymer-carbon-nanotube composites. Someya says that the material is able to stretch up to about 134 percent of its original shape without significant damage.
Electronics that can bend and flex are already used in some applications, but they can't be wrapped around irregular shapes, such as the human body or complex surfaces, says John Rogers, a professor of materials science and engineering at the University of Illinois at Urbana-Champaign. Rogers, who recently demonstrated a spherical camera sensor using his own version of an elastic circuit, says that Someya's approach is a creative addition to the science of stretchable electronic materials. "It's a valuable contribution to an important, emerging field of technology," he says.
To make the stretchable polymer conductive, Someya's group combined a batch of millimeter-long, single-walled carbon nanotubes with an ionic liquid--a liquid containing charged molecules. The resulting black, paste-like substance was then slowly added to a liquid polymer mixture. This produced a gel-like substance that was poured into a cast and air-dried for 24 hours.
The benefit of adding the nanotubes to a polymer before it is cast, says Someya, is that the nanotubes, which make up about 20 percent of the weight of the total mixture, are more evenly distributed. And because each nanotube is about a millimeter in length, there's a high likelihood that in aggregate they will form an extensive network that allows electrical charge to propagate reliably throughout the polymer.
Previously, researchers have added micrometer-length carbon nanotubes to polymers, says Ray Baughman, a professor of materials science at the University of Texas. Most often, they would simply coat the polymer with nanotubes. Baughman says that Someya's work is exciting, though he notes that he would have expected such a high loading of carbon nanotubes to reduce the polymer's stretchiness.
According to Someya, the initial air-dried nanotube-polymer film is flexible but not that stretchable. In order to improve its stretchiness, a machine perforates it into a net-shaped structure that is then coated with a silicone-based material. This enables the material to stretch much farther without compromising its conductivity.
Baughman says that one of the main contributions of the University of Tokyo team's work is to demonstrate a way to make this sort of elastic conductor material in bulk. "This and so many other applications depend on the landmark advance of a team scaling up their production of ultralong carbon nanotubes," he says. The University of Tokyo group claims that from one furnace, it can make 10 tons of nanotubes per year. "It's nice work," Baughman says.
More-Efficient Solar Cells
A new solar panel could lower costs and improve efficiency.
Better cells: A new design for solar panels (top) improves their efficiency. Each panel is made of arrays of square solar cells. A conventional solar cell (bottom) requires thick silver contacts that block light and reduce cell performance. The new design uses a novel electrode that eliminates these silver contacts.
By changing the way that conventional silicon solar panels are made, Day4 Energy, a startup based in Burnaby, British Columbia, has found a way to cut the cost of solar power by 25 percent, says George Rubin, the company's president.
The company has developed a new electrode that, together with a redesigned solar-cell structure, allows solar panels to absorb more light and operate at a higher voltage. This increases the efficiency of multicrystalline silicon solar panels from an industry standard of about 14 percent to nearly 17 percent. Because of this higher efficiency, Day4's solar panels generate more power than conventional panels do, yet they will cost the same, Rubin says. He estimates the cost per watt of solar power would be about $3, compared with $4 for conventional solar cells. That will translate into electricity prices of about 20 cents per kilowatt-hour in sunny areas, down from about 25 cents per kilowatt-hour, he says.
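The relationship between efficiency and cost per watt behind those numbers can be sketched roughly as follows. Only the 14 and 17 percent efficiencies and the quoted dollar-per-watt figures come from the article; the panel area and manufacturing cost are invented for the illustration.

    # Why higher efficiency lowers cost per watt: the same panel area at the same
    # manufacturing cost delivers more watts. Panel size and cost are assumptions.

    SOLAR_IRRADIANCE_W_PER_M2 = 1000.0   # standard test-condition sunlight
    panel_area_m2 = 1.5                  # assumed panel size
    panel_cost_dollars = 900.0           # assumed all-in manufacturing cost

    def cost_per_watt(efficiency):
        watts = SOLAR_IRRADIANCE_W_PER_M2 * panel_area_m2 * efficiency
        return panel_cost_dollars / watts

    print(f"14% efficient: ${cost_per_watt(0.14):.2f} per watt")   # about $4.29/W here
    print(f"17% efficient: ${cost_per_watt(0.17):.2f} per watt")   # about $3.53/W here
    print(f"saving: {100 * (1 - cost_per_watt(0.17) / cost_per_watt(0.14)):.0f}%")

The efficiency gain alone accounts for roughly an 18 percent saving under these assumptions; Rubin's 25 percent figure presumably also reflects other manufacturing savings, such as eliminating the silver bus bars, along with rounding in the $4-to-$3 comparison.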
Other experimental solar technologies could lead to much lower prices--indeed, they promise to compete with the average cost of electricity in the United States, which is about 10 cents per kilowatt-hour. But such technologies, including advanced solar concentrators and some thin-film semiconductor solar cells, probably won't be available for years. Day4's technology could be for sale within 18 months, the company says.
In conventional solar panels, the silicon that converts light into electricity is covered with a network of silver lines that conduct electrons and serve as connection points for soldering together the individual solar cells that make up a panel. The network consists of rows of thin silver lines that feed into thicker wires called bus bars. Day4 replaces these bus bars with a new electrode that consists of rows of fine copper wires coated with an alloy material. The wires are embedded in an adhesive and aligned on a plastic film. The coated copper wires run on top of and perpendicular to the thin silver lines, connecting them to neighboring cells. The new electrode conducts electricity better than the silver lines, resulting in less power loss. It also covers up less of the silicon than the bus bars, leaving more area for absorbing light.
What's more, the new electrode allowed Day4 to redesign solar cells to absorb more of the solar spectrum and convert this light into electricity more efficiently. Solar cells comprise two layers of silicon. For light to be converted into electricity, it has to pass through the first layer and reach the second. The thinner the top layer, the more light reaches the second layer to be converted into electricity. In a conventional cell, the silver lines are deposited and then heated to high temperatures, which causes the metal to diffuse into the silicon. The top layer must be thick enough that the silver does not diffuse through it and create a short circuit between the layers of the solar cell. By replacing the large bus bars with the new electrode, Day4 was able to make the top layer of the solar cells thinner, increasing the amount of light that can be converted into electricity. Also, since the silver can damage the silicon, replacing it with the new electrode increases the solar cell's power output.
The technology "sounds pretty exciting," says Travis Bradford, a solar-industry analyst with the Prometheus Institute for Sustainable Development, an energy research firm based in Cambridge, MA. The question, Bradford says, is whether the company can translate the latest advances from its lab to large-scale production without increasing costs.
Day4 has already started producing solar panels using its new electrode material--though not its new solar-cell designs. The company recently announced that it has the capacity to produce enough solar panels every year to generate 47 megawatts of electricity. These first-generation panels, which use conventional solar cells, have an efficiency of 14.7 percent. The company's next step is to put its new cell design into production and incorporate these cells into its solar panels, with the goal of improving their efficiency to 17 percent.
How (Not) to Fix a Flaw
Experts say disclosing bugs prevents security flaws from festering.
Efforts to censor three MIT students who found security flaws in the Boston subway's payment system have been roundly criticized by experts, who argue that suppressing such research could ultimately make the system more vulnerable.
The students were served with a temporary restraining order this weekend at the Defcon security conference in Las Vegas, preventing them from giving their planned talk on the Boston subway's payment system.
According to slides submitted before the conference, which have also been posted online, their presentation "Anatomy of a Subway Hack" would have revealed ways to forge or copy both the old magnetic-stripe passes and the newer radio-frequency identification (RFID) cards used on Boston's subway, making it possible to travel for free. The restraining order was filed on behalf of the Massachusetts Bay Transportation Authority (MBTA), which spent more than $180 million to install the system, according to court documents. The MBTA has also brought a larger lawsuit accusing the students of violating the Computer Fraud and Abuse Act and accusing MIT of being negligent in its supervision of them.
One of the students involved, Zack Anderson, says his team had never intended to give real attackers an advantage. "We left out some details in the work we did, because we didn't want anyone to be able to attack the ticketing system; we didn't want people to be able to circumvent the system and get free fares," he says.
Marcia Hoffman, staff attorney with the Electronic Frontier Foundation, a digital-rights group that is assisting the MIT team with its defense, argues that researchers need to be protected as they investigate these types of flaws. "It's extremely rare for a court to bar anyone from speaking before that person has even had a chance to speak," she says. "We think this sets a terrible precedent that's very dangerous for security research."
The MBTA says it isn't trying to stop research, just buy time to deal with whatever flaws the students might have found. The agency also expressed skepticism about whether the MIT students had indeed found real flaws. "They are telling a terrific tale of widespread security problems, but they still have not provided the MBTA with credible information to support such a claim," says Joe Pesaturo, a spokesman for the MBTA. "It's that simple."
It is unclear, though, whether the MBTA can realistically buy the time it needs. Karsten Nohl, a University of Virginia PhD student who was one of the first to publish details of security vulnerabilities in MiFare Classic, the brand of wireless smart card used in Boston's system, says solving the problems could take a year or two and might even involve replacing all card readers and all cards in circulation.
This is not the first lawsuit to hit researchers who have studied the security of MiFare Classic. Last month, Dutch company NXP Semiconductors, which makes the MiFare cards, sued a Dutch university in an attempt to prevent researchers there from publishing details of similar security flaws. The injunction did not succeed, but as RFID technology continues to proliferate, other security experts are concerned about being able to discuss relevant security research openly.
Bruce Schneier, chief security technology officer at BT Counterpane, says the latest lawsuit only distracts from what's really at stake. "MiFare sold a lousy product to customers who didn't know how to ask for a better product," he says. "That will never get fixed as long as MiFare's shoddy security is kept secret." He adds, "The reason we publish vulnerabilities is because there's no other way for security to improve."
The same brand of RFID card is used on transport networks in other cities, including London, Los Angeles, Brisbane, and Shanghai, as well as for corporate and government identity passes. The technology has even been incorporated into some credit cards and cell phones.
Nohl says the industry should view the MIT students' work as a free service that could ultimately lead to better security. Although there has been plenty of academic research on the security of RFID, he says, little has yet made its way into products. "The core of the problem is still industry's belief that they should build security themselves, and that what they've built themselves will be stronger if they keep it secret," Nohl says.
Meanwhile, independent researchers have come up with a number of ideas for improving the security of RFID cards. Nohl and others are researching better ways of encrypting the information stored on the cards. But part of the problem is that the cards are passive, meaning that they will return a signal to any reader that sends a request. Tadayoshi Kohno and colleagues at the University of Washington are also working on a motion-sensing system that would let users activate their cards with a specific gesture, so that a card does not normally respond to requests. Karl Koscher, one of the researchers who worked on the project, says their system is aimed at increasing security without destroying the convenience that has made the cards so popular.
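To illustrate the idea of gesture activation in the most general terms, the sketch below shows a card that answers a reader only if its accelerometer has recently registered a deliberate shake. This is a conceptual illustration, not the University of Washington group's design; the threshold, time window, and gesture are invented for the example.

    # Conceptual sketch of a gesture-gated RFID card: the card stays silent unless
    # its accelerometer has recently seen a deliberate activation gesture. The
    # threshold, window, and gesture are invented for this illustration.

    import time

    ACTIVATION_WINDOW_SECONDS = 5.0
    SHAKE_THRESHOLD_G = 2.0      # assumed acceleration marking a deliberate shake

    class GestureGatedCard:
        def __init__(self, card_id):
            self.card_id = card_id
            self.last_gesture_time = None

        def on_accelerometer_sample(self, acceleration_g):
            # Record the time of any sufficiently vigorous movement.
            if acceleration_g >= SHAKE_THRESHOLD_G:
                self.last_gesture_time = time.monotonic()

        def on_reader_request(self):
            # Respond only if the user gestured within the activation window.
            recently_activated = (
                self.last_gesture_time is not None
                and time.monotonic() - self.last_gesture_time < ACTIVATION_WINDOW_SECONDS
            )
            return self.card_id if recently_activated else None

    card = GestureGatedCard("commuter-0001")
    print(card.on_reader_request())      # None: no gesture yet, the card stays silent
    card.on_accelerometer_sample(2.5)    # the user shakes the card
    print(card.on_reader_request())      # 'commuter-0001': request answered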
First All-Nanowire Sensor
Researchers integrate nanowire sensors and electronics on a chip.
Squared away: University of California, Berkeley, researchers were able to create an orderly circuit array from two types of tiny nanowires, which can function as optical sensors and transistors. Each of the circuits on the 13-by-20 array serves as a single pixel in an all-nanowire image sensor.
Researchers at the University of California, Berkeley, have created the first integrated circuit that uses nanowires as both sensors and electronic components. With a simple printing technique, the group was able to fabricate large arrays of uniform circuits, which could serve as image sensors. "Our goal is to develop all-nanowire sensors" that could be used in a variety of applications, says Ali Javey, an electrical-engineering professor at UC Berkeley, who led the research.
Nanowires make good sensors because their small dimensions enhance their sensitivity. Nanowire-based light sensors, for example, can detect just a few photons. But to be useful in practical devices, the sensors have to be integrated with electronics that can amplify and process such small signals. This has been a problem, because the materials used for sensing and electronics cannot easily be assembled on the same surface. What's more, a reliable way of aligning the tiny nanowires that could be practical on a large scale has been hard to come by.
A printing method developed by the Berkeley group could solve both problems. First, the researchers deposit a polymer on a silicon substrate and use lithography to etch out patterns where the optical sensing nanowires should be. They then print a single layer of cadmium selenide nanowires over the pattern; removing the polymer leaves only the nanowires in the desired location for the circuit. They repeat the process with the second type of nanowires, which have germanium cores and silicon shells and form the basis of the transistors. Finally, they deposit electrodes to complete the circuits.
The printed nanowires are first grown on separate substrates, which the researchers press onto and slide across the silicon. "This type of nanowire transfer is good for aligning the wires," says Deli Wang, a professor of electrical and computer engineering at the University of California, Santa Barbara, who was not involved in the research. Good alignment is necessary for the device to work properly, since the optical signal depends on the polarization of light, which in turn is dependent on the orientation of the nanowires. Similarly, transistors require a high degree of alignment to switch on and off well.
Another potential advantage of the printing method is that the nanowires could be printed not only onto silicon, but also onto paper or plastics, says Javey. He foresees such applications as "sensor tapes"--long rolls of printed sensors used to test air quality or detect minute concentrations of chemicals. "Our next challenge is to develop a wireless component" that would relay the signals from the circuit to a central processing unit, he says.
But for now, the researchers have demonstrated the technique as a way to create an image sensor. They patterned the nanowires onto the substrate to make a 13-by-20 array of circuits, in which each circuit acts as a single pixel. The cadmium selenide nanowires convert incoming photons into electrons, and two different layers of germanium-silicon nanowire transistors amplify the resulting electrical signal by up to five orders of magnitude. "This demonstrates an outstanding application of nanowires in integrated electronics," says Zhong Lin Wang, director of the Center for Nanostructure Characterization at Georgia Tech.
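A toy readout model makes those numbers concrete: each of the 13-by-20 circuits produces a photocurrent proportional to the local light intensity, which the on-pixel nanowire transistors then amplify. The photocurrent scale and the single gain value below are illustrative stand-ins, not measurements from the Berkeley device.

    # Toy model of reading out a 13-by-20 all-nanowire image sensor: each pixel's
    # photocurrent tracks local light intensity and is amplified on-pixel.
    # Photocurrent scale and gain are illustrative values only.

    ROWS, COLS = 13, 20
    PHOTOCURRENT_PER_UNIT_INTENSITY = 1e-12   # amps, assumed sensor response
    ON_PIXEL_GAIN = 1e5                       # "up to five orders of magnitude"

    def read_frame(intensity):
        """intensity: ROWS x COLS list of relative light levels (0.0 to 1.0)."""
        return [[intensity[r][c] * PHOTOCURRENT_PER_UNIT_INTENSITY * ON_PIXEL_GAIN
                 for c in range(COLS)]
                for r in range(ROWS)]

    # A simple test pattern: bright left half, dim right half.
    pattern = [[1.0 if c < COLS // 2 else 0.1 for c in range(COLS)]
               for r in range(ROWS)]
    frame = read_frame(pattern)
    print(f"{frame[0][0]:.2e} A vs {frame[0][COLS - 1]:.2e} A")
    # 1.00e-07 A vs 1.00e-08 A: the amplified output currents track the incident light.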
After putting the device under a halogen light and measuring the output current from each circuit, the group found that about 80 percent of the circuits successfully registered the intensity of the light shining on them. Javey attributes the failure of the other 20 percent to such fabrication defects as shorted electrodes and misprints that resulted in poor nanowire alignment. He notes that all of these issues can be resolved with refined manufacturing methods.
The researchers also plan to work toward shrinking the circuit to improve resolution and sensitivity. Eventually, says Javey, they want everything on the circuit to be printable, including the electrodes and contacts, which could help further reduce costs.
Bringing Invisibility Cloaks Closer
The fabrication of two new materials for manipulating light is a key step toward realizing cloaking.
Invisible net: A new material that can bend near-infrared light in a unique way has a fishnet structure. These images of a prism made from the material were taken with a scanning electron microscope. The holes in the net enable the material to interact with the magnetic component of the light, which enables the unusual bending and demonstrates its promise for use in future invisibility cloaks. In the inset, the layers of metal and insulating material that make up the metamaterial are visible.
Credit: Jason Valentine et al.
In an important step toward the development of practical invisibility cloaks, researchers have engineered two new materials that bend light in entirely new ways. These materials are the first that work in the optical band of the spectrum, which encompasses visible and infrared light; existing cloaking materials only work with microwaves. Such cloaks, long depicted in science fiction, would allow objects, from warplanes to people, to hide in plain sight.
Both materials, described separately in the journals Science and Nature this week, exhibit a property called negative refraction that no natural material possesses. As light passes through the materials, it bends backward. One material works with visible light; the other has been demonstrated with near-infrared light.
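In terms of Snell's law, negative refraction means that the refracted beam emerges on the same side of the surface normal as the incoming beam. The relation below is standard optics rather than anything specific to the two papers, but it makes the geometry concrete:

    n_1 \sin\theta_1 = n_2 \sin\theta_2, \qquad n_2 < 0 \implies \theta_2 < 0

A beam entering such a material from air (n_1 is about 1) is therefore bent to the "wrong" side of the normal, which is the kind of bending the Berkeley prism experiments were designed to reveal.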
The materials, created in the lab of University of California, Berkeley, engineer Xiang Zhang, could show the way toward invisibility cloaks that shield objects from visible light. But Steven Cummer, a Duke University engineer involved in the development of the microwave cloak, cautions that there is a long way to go before the new materials can be used for cloaking. Cloaking materials must guide light in a very precisely controlled way so that it flows around an object, re-forming on the other side with no distortion. The Berkeley materials can bend light in the fundamental way necessary for cloaking, but they will require further engineering to manipulate light so that it is carefully directed.
One of the new Berkeley materials is made up of alternating layers of metal and an insulating material, both of which are punched with a grid of square holes. The total thickness of the device is about 800 nanometers; the holes are even smaller. "These stacked layers form electrical-current loops that respond to the magnetic field of light," enabling its unique bending properties, says Jason Valentine, a graduate student in Zhang's lab. Naturally occurring materials, by contrast, don't interact with the magnetic component of electromagnetic waves. By changing the size of the holes, the researchers can tune the material to different frequencies of light. So far, they've demonstrated negative refraction of near-infrared light using a prism made from the material.
Researchers have been trying to create such materials for nearly 10 years, ever since it occurred to them that negative refraction might actually be possible. Other researchers have only been able to make single layers that are too thin--and much too inefficient--for device applications. The Berkeley material is about 10 times thicker than previous designs, which helps increase how much light it transmits while also making it robust enough to be the basis for real devices. "This is getting close to actual nanoscale devices," Cummer says of the Berkeley prism.
The second material is made up of silver nanowires embedded in porous aluminum oxide. "The nanowire medium works like optical-fiber bundles, so in principle, it's quite different," says Nicholas Fang, mechanical-science and -engineering professor at the University of Illinois at Urbana-Champaign, who was not involved in the research. The first material's layered fishnet structure not only bends light in the negative direction; it also causes it to travel backward. Light transmitted through the nanowire structure also bends in the negative direction, but without traveling backward. Because the work is still in the early stages, it's unclear which optical metamaterial will work best, and for what applications. "Maybe future solutions will blend these two approaches," says Fang.
Making an invisibility cloak will pose great engineering challenges. For one thing, the researchers will need to scale up the material even to cloak a small object: existing microwave cloaking devices, and theoretical designs for optical cloaks, must be many layers thick in order to guide light around objects without distortion. Making materials for microwave cloaking was easier because these wavelengths can be controlled by relatively large structural features. To guide visible light around an object will require a material whose structure is controlled at the nanoscale, like the ones made at Berkeley.
Developing cloaking devices may take some time. In the short term, the Berkeley materials are likely to be useful in telecommunications and microscopy. Nanoscale waveguides and other devices made from the materials might overcome one of the major challenges of scaling down optical communications to chip level: allowing fine control of parallel streams of information-rich light on the same chip so that they do not interfere with one another. And the new materials could also eventually be developed into lenses for light microscopes. So-called superlenses for getting around fundamental resolution limitations on light microscopes have been developed by Fang and others, revealing the workings of biological molecules with nanoscale resolution using ultraviolet light, which is damaging to living cells in large doses. But it hasn't been possible to make superlenses that work in the information-rich and cell-friendly visible and near-infrared parts of the spectrum.
Commanding Your Browser
A new interface bypasses the mouse for some complex tasks.
The beauty of today's search engines is their simplicity. Type a few keywords into an empty box, and see the 10 most relevant results. This week, Mozilla Labs expects to launch a similar interface for its Firefox Web browser. The new interface, called Ubiquity, lets users carry out all sorts of complex tasks simply by typing instructions, in the form of ordinary sentences, into a box in the browser.
For example, to e-mail a paragraph or picture from a Technology Review article to a friend using Ubiquity, simply select the text or image, press a keyboard shortcut to reveal an input box, and type "e-mail to Max."
"You just type in things that feel natural to you," says Chris Beard, vice president and general manager of Mozilla Labs. Ubiquity, which is based on the Javascript programming language, will open an e-mail client and paste the highlighted text or image into a message. It will even guess which Max in an address book the snippet should be sent to, based on previous e-mailing patterns.
The idea, says Beard, is to make it easier to find and share information on the Web while avoiding cumbersome copy-and-paste instructions. Traditionally, if you want to e-mail a picture or a piece of text to a friend, look up a word in an online dictionary, or map an address, you have to follow a series of well-worn steps: copy the information, open a new browser tab or an external program, paste in the text, and run the program.
A common work-around is to use browser plug-ins--tiny programs that connect to other applications and can be added to the browser toolbar. For instance, StumbleUpon, a Web service that lets users bookmark and share interesting Web pages, offers a plug-in for Firefox so that new sites can be added or discovered with a single click. But adding multiple browser plug-ins takes up valuable screen space.
Ubiquity aims to eliminate both tiresome mouse movements and the need for multiple browser plug-ins.
The idea isn't unique to Mozilla Labs. Researchers at MIT have published work on a similar interface, called Inky. Another project, called Yubnub, allows people to quickly perform different online operations, such as searching for stock quotes, images, or items on eBay using the same text field.
What distinguishes Ubiquity is that it's being released as a Mozilla Labs project, which immediately makes both the program and its underlying code available to people eager to test the interface and contribute design and programming ideas to improve its functionality. Also, notes Mozilla's Beard, Ubiquity is highly customizable. From the start, the interface will come with built-in instructions or "verbs," such as "e-mail," "Twitter," and "Digg," but Beard expects people to add many new ones.
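Real Ubiquity commands are written in JavaScript against Mozilla's own command API, but the basic interaction model, a registry of verbs that each parse a short typed instruction and act on the current selection, can be sketched in a few lines of Python. The verb names and handler behavior here are illustrative only.

    # Minimal sketch of a verb-based command interface in the spirit of Ubiquity:
    # the user types "verb arguments" and a registered handler acts on the current
    # selection. Illustrative only; real Ubiquity commands are JavaScript written
    # against Mozilla's API.

    COMMANDS = {}

    def command(verb):
        """Decorator that registers a handler under a verb name."""
        def register(handler):
            COMMANDS[verb] = handler
            return handler
        return register

    @command("email")
    def email(selection, argument):
        # The real interface would open the mail client with the selection pasted
        # in and the recipient guessed from the address book.
        recipient = argument[3:] if argument.startswith("to ") else argument
        return f"draft to {recipient}: {selection!r}"

    @command("map")
    def map_address(selection, argument):
        return f"map lookup for {selection!r}"

    def run(instruction, selection):
        verb, _, argument = instruction.partition(" ")
        handler = COMMANDS.get(verb)
        return handler(selection, argument) if handler else f"unknown verb {verb!r}"

    print(run("email to Max", "a paragraph from a Technology Review article"))
    print(run("map", "1 Main St, Cambridge, MA"))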
The project is being released in an early form--version 0.1--so it's not expected to work perfectly straightaway. Also, Beard doesn't assume that it will change the way people interact with their browser overnight. "Most people in the world will continue to use mouse-based interfaces," he says. But a language-based interface like Ubiquity could ultimately supplement the mouse, much as shortcut keys already do, he says.
A Plastic That Chills
Materials that change temperature in response to electric fields could keep computers--and kitchen fridges--cool.
Cool spool: Films of a specially designed polymer, just 0.4 to 2.0 micrometers thick, can get colder or hotter by 12 °C when an electric field is removed or applied across them.
Credit: Qiming Zhang, Penn State
Thin films of a new polymer developed at Penn State change temperature in response to changing electric fields. The Penn State researchers, who reported the new material in Science last week, say that it could lead to new technologies for cooling computer chips and to environmentally friendly refrigerators.
Changing the electric field rearranges the polymer's atoms, changing its temperature; this is called the electrocaloric effect. In a cooling device, a voltage would be applied to the material, which would then be brought in contact with whatever it's intended to cool. The material would heat up, passing its energy to a heat sink or releasing it into the atmosphere. Reducing the electric field would bring the polymer back to a low temperature so that it could be reused.
In a 2006 paper in Science, Cambridge University researchers led by materials scientist Neil Mathur described ceramic materials that also exhibited the electrocaloric effect, but only at temperatures of about 220 °C. The operating temperature of a computer chip is significantly lower--usually somewhere around 85 °C--and a kitchen refrigerator would have to operate at lower temperatures still. The Penn State polymer shows the same 12-degree swing that the ceramics did, but it works at a relatively low 55 °C.
The polymer also absorbs heat better. "In a cooling device, besides temperature change, you also need to know how much heat it can absorb from places you need to cool," says Qiming Zhang, an electrical-engineering professor at Penn State, who led the new work. The polymer, Zhang says, can absorb seven times as much heat as the ceramic.
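For a rough sense of scale, the short calculation below combines the reported 12 °C swing with an assumed film mass and an assumed specific heat typical of polymers to estimate how much heat one small patch of material could move per cycle. Every figure except the 12 °C swing is an illustrative guess, not a measured value.

    # Back-of-the-envelope estimate of heat moved per electrocaloric cycle.
    # Only the 12 K swing comes from the reported results; the mass, specific
    # heat, and cycle rate below are assumed, illustrative values.

    delta_T = 12.0          # K, reported electrocaloric temperature change
    specific_heat = 1500.0  # J/(kg*K), assumed value in the range of polymers
    film_mass = 1e-6        # kg, assumed mass of a small, micrometer-thick patch
    cycles_per_second = 10  # assumed drive frequency

    heat_per_cycle = film_mass * specific_heat * delta_T   # joules per cycle
    cooling_power = heat_per_cycle * cycles_per_second     # watts

    print(f"heat per cycle: {heat_per_cycle * 1000:.1f} mJ")
    print(f"cooling power:  {cooling_power * 1000:.0f} mW")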
Zhang attributes these qualities to the more flexible arrangement of atoms in polymers. "In a ceramic, atoms are more rigid, so it's harder to move them," he says. "Atoms can be moved in polymers much more easily using an electric field, so the electrocaloric effect in polymer is much better than ceramics."
The material's properties make it an attractive candidate for laptop cooling applications, says Intel engineer Rajiv Mongia, who studies refrigeration technologies. Computer manufacturers are looking for less bulky alternatives to the heat sinks and noisy fans currently used in laptops and desktop computers. The ideal technology would be small enough to be integrated into a computer chip.
Until now, says Mongia, exploring the electrocaloric effect for chip cooling had not made sense. The first ceramic materials didn't exhibit large enough temperature changes--chip cooling requires reductions of at least 10 °C--and the more recent ceramics don't work at low enough temperatures. They also contain lead, a hazardous material that is hard to dispose of safely. The polymers do not have those drawbacks. "The fact that they've been able to develop a polymer-type material that can be used in a relatively thin film is worth a second look," Mongia says. "Also, it's working in a temperature range that is of interest to us."
But chip-cooling devices will take a while to arrive. It now takes 120 volts to get the polymer to change its atomic arrangement, and that figure would need to be much lower if the material is to be used in laptops. "Ideally, you want it to work at voltages common within the realm of a notebook, in the tens of volts or less," Mongia says. The researchers will also need to engineer a working device containing the thin films.
Electrocaloric materials could make fridges greener. Current household fridges use a vapor-compression cycle, in which a refrigerant is converted back and forth between liquid and vapor to absorb heat from the insulated compartment. The need for mechanical compression lowers the fridge's efficiency. "Vapor-cooled fridges are 30 to 40 percent efficient," Mathur says. But because electrocaloric materials have no moving parts, they could lead to cooling devices that are more energy efficient than current fridges. What's more, current hydrofluorocarbon refrigerants contribute to global warming.
Refrigerators that use electrocaloric materials would have an advantage over the magnetic cooling systems that some companies and research groups are developing. Electric fields large enough to produce substantial temperature changes in electrocaloric materials are much easier and cheaper to produce than the magnetic fields used in experimental refrigeration systems, which require large superconducting magnets or expensive permanent magnets. However, refrigerators need temperature spans of 40 °C, which is a tall order for electrocaloric materials right now, Mathur says. "The main sticking point in terms of the technology is that we have thin films, and you can't cool very much with a thin film."
Zhang and his colleagues are now trying to design better electrocaloric polymers. They plan to study polymers made from liquid crystals, which are used in flat-panel displays. Liquid crystals contain rod-shaped molecules that will align with an electric field and revert to their original arrangement when the field is removed. Zhang says that this property could be exploited to make materials that absorb and release large amounts of heat in response to electric fields.
A Bridge between Virtual Worlds
Second Life's new program links virtual environments
Linking worlds: Two avatars, Brian White (left) and Butch Arnold, meet in 3rd Rock Grid, an independent OpenSim-based server.
Credit: Brian White
The first steps to developing virtual-world interoperability are now being tested between Second Life and other independent virtual worlds, thanks to the launch of Linden Lab's Open Grid Beta, a program designed for developers to test new functionality. The beta program will allow users to move between a Second Life test grid--a set of servers simulating a virtual world--and other non-Linden Lab grids running the OpenSim software. OpenSim is an independent open-source project to create a virtual-world server.
The discussion of linking together today's virtual worlds is not new, but this is the first running code that demonstrates previously hypothetical approaches--another tangible sign that Linden Lab is serious about interoperability. "We are still early in the game. The point of the beta is to give the rest of the development community the chance to try the protocols themselves," says Joe Miller, Linden Lab's vice president of platform and development. More than 200 users have signed up for the beta program, and currently 15 worlds have been connected.
In order to test virtual-world interoperability, a person needs at least two virtual worlds. For Linden Lab, the OpenSim project was a natural choice. It began in January 2007 at the nexus of two open-source projects--one to reverse-engineer the Second Life server APIs, and the other Linden Lab's open-source viewer initiative. The goal of the OpenSim project is to build a virtual-world server that supports the Linden Lab viewer or a derivative.
Today, there is a flourishing OpenSim community with 26 registered grids hosting approximately 2,300 regions. While this is certainly a small number compared with the 28,070 regions that make up the Second Life main grid, it still represents a significant number of independent virtual worlds. The open-source nature of the project, combined with the number of participants and the shared support of a common viewer, makes OpenSim-based worlds ideal for interoperability tests.
Interoperability is the future of the Web, says Terry Ford, the owner and operator of an OpenSim-based world called 3rd Rock Grid. Ford is also participating in the program. "It may be [in] OpenSim's future, or maybe another package will spring up, but just as links from a Web page take you to another site, people will come to expect the ability to navigate between virtual worlds," he says.
Ford is Butch Arnold in Second Life, Butch Arnold in 3rd Rock Grid, and Butch Arnold in the OpenLife grid, and that's kind of the point. No one wants to have as many avatars as they do website accounts, but there is a fundamental difference between accounts, which hold data like a shopping cart, and avatars, which contain data regarding a person's virtual-world appearance. IBM's David Levine, who has been closely collaborating with Linden Lab on the interoperability protocols, says, "You don't care if your shopping-cart contents in your Amazon account [are] the same as other shopping carts. However, if you were moving region to region and had very different assets in each, that would be a problem."
Yet many efforts to let users share their avatars on the Web have not been successful. Levine says that the Open Grid Protocol has a chance because it is less ambitious. "We are not trying to do it across the entire Web. The focus is on the Linden main grid and a set of broadly similar grids."
To use the beta program, a participant starts an application called a viewer, the best example being the Second Life client. The viewer renders the virtual world and provides the controls for the avatar. Just like using a Web browser to log in to a website, the viewer is where a log-in request is initiated.
The log-in request is sent to the agent service, which stores things like the avatar's profile, password, and current location. As part of the beta, Linden Lab has implemented a proprietary version of the agent service running on a test grid. The agent service then contacts the region service to place the avatar correctly in the virtual world.
The region service is basically the Web server of virtual worlds. It is responsible for simulating a piece of the virtual landscape and providing a shared perspective to all avatars occupying the same virtual space. A collection of regions is called a grid. Linden Lab has proprietary code running all the Second Life regions. The OpenSim project provides source code that, when built, allows anyone to run his or her own region service.
From that point on, there is a three-way communication between viewer, agent service, and region service to provide the user's in-world experience. When the user wants to move to another region, he issues a teleport command in the viewer, and the same process happens. But in this case, the user is not required to log in again, even if the destination region is running on a non-Linden Lab server.
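The actual wire protocol is defined by Linden Lab and its partners; purely as an illustration of the division of labor just described, the Python sketch below models the viewer, agent service, and region service as three small classes and walks an avatar through a log-in and a teleport. The class and method names are hypothetical, not the real protocol.

    # Toy model of the log-in/teleport flow described above.
    # Class and method names are hypothetical, not the actual protocol.

    class AgentService:
        """Stores the avatar's profile, password, and current location."""
        def __init__(self, accounts):
            self.accounts = accounts            # avatar name -> password

        def login(self, name, password, region):
            if self.accounts.get(name) != password:
                raise PermissionError("bad credentials")
            return region.place_avatar(name)    # ask the region to place the avatar

    class RegionService:
        """Simulates one piece of landscape; a collection of regions is a grid."""
        def __init__(self, grid_name):
            self.grid_name = grid_name
            self.avatars = {}

        def place_avatar(self, name):
            self.avatars[name] = (128, 128)     # drop the avatar at a default spot
            return f"{name} is now in a region on {self.grid_name}"

    class Viewer:
        """Renders the world and initiates log-in and teleport requests."""
        def __init__(self, agent_service):
            self.agent = agent_service

        def log_in(self, name, password, region):
            return self.agent.login(name, password, region)

        def teleport(self, name, region):
            # No second log-in: the agent service already vouches for the avatar.
            return region.place_avatar(name)

    agent = AgentService({"Butch Arnold": "secret"})
    linden_region = RegionService("Second Life test grid")
    opensim_region = RegionService("3rd Rock Grid (OpenSim)")

    viewer = Viewer(agent)
    print(viewer.log_in("Butch Arnold", "secret", linden_region))
    print(viewer.teleport("Butch Arnold", opensim_region))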
Last fall, Linden Lab formed the Architecture Working Group (AWG), which is the driving force behind the Open Grid Protocol--the architectural definition of interoperability. The team decided that the first step was to focus on the areas of log-in and teleport. "We started with authentication information and being able to seamlessly pass the log-in credentials between two grids run by different companies," says Levine. "Many people ask me, 'Why did you start there?' Well, you can't do all the rest until you get logged in."
Miller says that in the next 18 months, a user can expect to see a lot of activity in the area of content movement. "How do I move content that is mine, purchased or created, between worlds safely and securely? The AWG has a lot of great thoughts on how this could work," he says.
Internet Security Hole Revealed
A researcher discloses the details of the major flaw he discovered earlier this year.
On Wednesday, at the Black Hat computer security conference in Las Vegas, Dan Kaminsky, director of penetration testing at IOActive, released the full details of the major design flaw he found earlier this year in the domain name system, a key part of directing traffic over the Internet. Kaminsky had already revealed that the flaw could allow attackers to control Internet traffic, potentially directing users to phishing sites--bogus sites that try to elicit credit-card information--or to sites loaded with malicious software. On Wednesday, he showed that the flaw had even further-reaching implications, demonstrating that attackers could use it to gain access to e-mail accounts or to infiltrate the systems in place to make online transactions secure.
Kaminsky first announced the flaw in the domain name system in July, at a press conference timed to coincide with the massive coordinated release of a temporary fix, which involved vendors such as Microsoft, Cisco, and Sun. He didn't release details of the flaw, hoping to give companies time to patch it before giving attackers hints about how to exploit it. Although the basics of the flaw did leak before Kaminsky's Black Hat presentation, he says he's relieved that not all of its implications were publicly discovered.
The domain name system is, as its name might imply, responsible for matching domain names--such as technologyreview.com--to the numerical addresses of the corresponding Web servers--such as 69.147.160.210. A request issued by an e-mail server or Web browser might pass through several domain name servers before getting the address information that it needs.
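That lookup is exactly what a resolver library performs. For instance, the few lines of Python below ask the local resolver, which may in turn query several domain name servers, to map the hostname mentioned above to a numerical address; the address returned depends on which servers answer, so it need not match the example in the text.

    import socket

    # Ask the system's resolver to map a domain name to a numerical (IPv4)
    # address; behind the scenes the request may pass through several
    # domain name servers before an answer comes back.
    hostname = "technologyreview.com"
    address = socket.gethostbyname(hostname)
    print(f"{hostname} resolves to {address}")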
Kaminsky says that the flaw he discovered is a way for an attacker to impersonate a domain name server. Imagine that the attacker wants to hoodwink Facebook, for instance. He would start by opening a Facebook account. Then he would try to log in to the account but pretend to forget his password. Facebook would then try to send a new password to the e-mail address that the attacker used to create the account.
The attacker's server, however, would claim that Facebook got the numerical address of its e-mail server wrong. It then tells Facebook the name of the domain name server that--supposedly--has the right address. Facebook has to locate that server on its own; this is actually a safety feature, to prevent an attacker from simply routing traffic to his own fake domain name server in the first place.
At this point, the attacker knows that Facebook's server is about to look up where to find the domain name server. If he can supply a false answer before the real answer arrives, he can trick Facebook into looking up future addresses on his own server, rather than on the domain name server. He can then direct messages sent by Facebook anywhere he chooses.
The problem for the attacker is that the false answer needs to carry the correct authenticating transaction ID--a 16-bit number, so there are about 65,000 (precisely 65,536) possibilities. Moreover, once Facebook's server gets an answer, it will store the domain name server's numerical address for a certain period of time, perhaps a day. The flaw that Kaminsky discovered, however, allows the attacker to trigger requests for the domain name server's address as many times as he wants. If the attacker includes a random transaction ID with each of his false responses, he'll eventually luck upon the correct one. In practice, Kaminsky says, it takes the attacker's computer about 10 seconds to fool a server into accepting its false answer.
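To get a feel for the arithmetic, the sketch below treats the transaction ID as one of 65,536 equally likely values and computes the chance that at least one forged reply guesses it, both for a single race and across many repeated lookups. The number of forged replies per race and the number of lookups are assumed, illustrative figures; they simply show why repeated attempts succeed so quickly.

    # Rough odds of guessing the 16-bit DNS transaction ID described above.
    # The forged-reply and lookup counts are assumed, illustrative figures.

    ID_SPACE = 65_536        # possible transaction IDs (16-bit field)
    forged_per_race = 100    # assumed forged replies sent before the real answer
    lookups = 2_000          # lookups the attacker triggers (the flaw allows many)

    p_one_race = 1 - (1 - 1 / ID_SPACE) ** forged_per_race
    p_overall = 1 - (1 - p_one_race) ** lookups

    print(f"chance of winning a single race: {p_one_race:.4%}")
    print(f"chance after {lookups} lookups:  {p_overall:.1%}")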
Fooling Facebook's server would mean that the attacker could intercept messages that Facebook intended to send to users, which could allow him to get control of large numbers of accounts. The attacker could use similar techniques to intercept e-mail from other sources, or to get forged security certificates that could be used to more convincingly impersonate banking sites. "We haven't had a bug like this in a decade," Kaminsky says.
Because the attack takes advantage of an extremely common Internet transaction, the flaw is difficult to repair. "If you destroy this behavior, you destroy [the domain name system], and therefore you destroy the way the Internet works," Kaminsky says. But the temporary fix that's being distributed will keep most people safe for now. That fix helps by adding an additional random number that gives the attacker a much smaller chance of being able to guess correctly and pull off the impersonation. In the past month, he says, more than 120 million broadband consumers have been protected by patches, as have 70 percent of Fortune 500 companies. "If they're big and vulnerable, and I thought so, I've contacted them and raised holy hell," Kaminsky says. Facebook has applied the patch, as have Apple, LinkedIn, MySpace, Google, Yahoo, and others.
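The stopgap's extra random number comes from randomizing the UDP source port of each query in addition to the transaction ID, so a forged reply must guess both values at once. Repeating the earlier illustrative calculation with that extra factor, and approximating the usable port range, shows how sharply the attacker's odds drop; the figures remain assumptions made for illustration.

    # Same illustrative calculation, but with the patch: a forged reply must
    # now match both the 16-bit ID and a randomized source port. The port
    # count is an approximation of the usable ephemeral range.

    ID_SPACE = 65_536
    PORT_SPACE = 60_000
    forged_per_race = 100
    lookups = 2_000

    p_one_race = 1 - (1 - 1 / (ID_SPACE * PORT_SPACE)) ** forged_per_race
    p_overall = 1 - (1 - p_one_race) ** lookups

    print(f"chance after {lookups} lookups with the patch: {p_overall:.5%}")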
But it's still uncertain how to put a long-term solution in place. Kaminsky calls the current patch a "stopgap," which he hopes will hold off attackers while the security community seeks a more permanent fix. Jerry Dixon, director of analysis for Team Cymru and former executive director of the National Cyber Security Division and US-CERT, says that "longer-term fixes will take a lot of effort." Changes to the domain name system must be made cautiously, he says, adding, "It's the equivalent of doing heart surgery." It would be easy for a fix to cause unintended problems to the system. In the meantime, Dixon says, "if I were asked by the White House to assess this, I would say it's a bad vulnerability. People need to patch this."
Finding Evidence in Fingerprints
A technique reveals drugs and explosives on the scene
Next on CSI: This series of images shows that fingerprint images made using mass spectrometry are comparable to those made using traditional means. In (A), mass spectrometry is used to produce a fingerprint by imaging the presence of cocaine; the mass-spectrometry fingerprint can be employed as a starting point for a computerized image (B) generated using commercial fingerprint-analysis software. Below, (C) and (D) show a traditional ink print made with the same fingertip, and the corresponding computer image. (Red and blue circles in the computer-generated images correspond to features of interest, such as where ridges intersect.)
A new method for examining fingerprints provides detailed maps of their chemical composition while creating traditional images of their structural features. Instead of taking samples back to the lab, law-enforcement agents could use the technique, a variation on mass spectrometry, to reveal traces of cocaine, other drugs, and explosives on the scene.
Fingerprints are traditionally imaged after coating crime-scene surfaces with chemicals that make them visible. These techniques can be destructive, and different methods must be used, depending on the surface under study, says John Morgan, deputy director of science and technology at the National Institute of Justice, the research branch of the U.S. Department of Justice. "Mass-spectrometric imaging could be a useful tool to image prints nondestructively on a wide variety of surfaces," says Morgan.
Traditional mass spectrometry--the gold standard for identifying chemicals in the lab, which uses mass and charge measurements to parse out the chemical components of a sample--typically involves intensive sample preparation. It must be done in a vacuum, and the sample is destroyed during the process, making further examination impossible and eliminating the information about the spatial locations of different molecules that is needed to create an image.
R. Graham Cooks, a professor of analytical chemistry at Purdue University who led the fingerprint research, and his group used a sample-collection technique that Cooks developed in 2004 and that can be used with any commercial mass spectrometer. Desorption electrospray ionization uses a stream of electrically charged solvent, usually water, to dissolve chemicals in a fingerprint or any other sample on a hard surface. "The compounds dissolve, secondary droplets splash up and are then sucked into the mass spectrometer," explains Cooks. As the instrument scans over a surface, it collects thousands of data points about the chemical composition, each of which serves as a pixel. The mass-spectrometry method can create images of the characteristic ridges of fingerprints that also serve as maps of their chemical composition.
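Schematically, building such an image amounts to recording, for every point in the scan, the signal intensity at the mass-to-charge value of the chemical of interest and treating that reading as one pixel. The NumPy sketch below mimics that assembly; the scan grid, the stand-in "instrument" function, and the intensity values are all hypothetical, not real instrument output.

    import numpy as np

    # Schematic of assembling a chemical image from a raster scan: one reading
    # per (x, y) position becomes one pixel. The "instrument" below is a
    # random-number stand-in for real mass-spectrometer output.

    rng = np.random.default_rng(0)
    width, height = 64, 64     # assumed scan grid
    target_mz = 304.2          # mass-to-charge value of interest (illustrative)

    def read_intensity_at(x, y, mz):
        """Hypothetical stand-in for sampling the spectrometer at one spot."""
        on_ridge = np.sin(x / 3.0) > 0          # fake ridge-like pattern
        signal = 100.0 if on_ridge else 0.0
        return signal + rng.normal(5.0, 2.0)    # plus a little background noise

    image = np.zeros((height, width))
    for y in range(height):
        for x in range(width):
            image[y, x] = read_intensity_at(x, y, target_mz)

    print("pixels above threshold:", int((image > 50).sum()))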
In a paper published in the journal Science this week, the Purdue researchers describe using the method to image clean fingerprints and prints made after subjects dipped their fingers in cocaine, the explosive RDX, ink, and two components of marijuana. "We know in the old-fashioned way who it was" by providing information about the fingers' ridges and whorls, says Cooks of the fingerprint-imaging technique. The technique could also address the problem of overlapping fingerprints, which can be difficult to tell apart: fingerprints made by different individuals should have a different chemical composition. And "you also get information about what the person has been dealing with in terms of chemicals," says Nicholas Winograd, a chemist at Pennsylvania State University, who was not involved in the research.
Some of the chemicals found in fingerprints come from things people have handled; others are made by the body. The metabolites found in sweat are not well understood, but it's likely that they differ with age, gender, and other characteristics that would help identify suspects, says Cooks. Mass spectrometry could help uncover these variations. And Winograd says that the chemicals found in fingerprints might also provide information about drug metabolism and other medically interesting processes. Winograd, Cooks, and many others have recently begun using mass spectrometry to study the molecular workings of cancerous tissues and cells. Mass spectrometry might reveal that diagnostic information exists in sweat as well, says Winograd.
However, Morgan cautions that the work is preliminary and that the technology may prove too expensive for widespread adoption by law-enforcement agencies. Indeed, Cooks has not developed a commercial version of the fingerprint-analysis instrument.
"They have a long way to go," agrees Michael Cherry, vice chairman of the digital technology committee at the National Association of Criminal Defense Lawyers, who has extensive experience interpreting fingerprints. He says that Cooks's group has demonstrated the potential of the technology. However, after examining some fingerprint images made using mass spectrometry, Cherry says that the technology will require further development to be good enough to hold up in court.
An Artificial Pancreas
A device that reads glucose levels and delivers insulin may be close at hand.
Artificial pancreas: Scientists are pairing continuous glucose monitors, such as the device pictured here (white device, top), with insulin pumps, such as the one pictured here (pagerlike device, bottom), to create an artificial pancreas for people with diabetes. In this commercial system by Medtronic, the glucose monitor wirelessly transmits data to the pump via a meter (not pictured). However, the user must still decide how much insulin he needs and dose it out himself. In an artificial pancreas, specially designed algorithms would calculate how much insulin is required, and how quickly, and then signal the drug's delivery without human intervention.
Today, people with diabetes have a range of technologies to help keep their blood sugar in check, including continuous monitors that can keep tabs on glucose levels throughout the day and insulin pumps that can deliver the drug. But the diabetic is still responsible for making executive decisions--when to test his blood or give himself a shot--and the system has plenty of room for human error. Now, however, researchers say that the first generations of an artificial pancreas, which would be able to make most dosing decisions without the wearer's intervention, could be available within the next few years.
Type 1 diabetes develops when the islet cells of the human pancreas stop producing adequate amounts of insulin, leaving the body unable to regulate blood-sugar levels on its own. Left unchecked, glucose fluctuations over the long term can lead to nerve damage, blindness, stroke and heart attacks. Even among the most vigilant diabetics, large dips and surges in glucose levels are still common occurrences. "We have data on hand today that suggests that you could get much better diabetes outcomes with the computer taking the lead instead of the person with diabetes doing it all themselves," says Aaron Kowalski, research director of the Juvenile Diabetes Research Foundation's Artificial Pancreas Project.
At its most basic level, an artificial pancreas consists of three components: a continuous sensor to detect glucose levels in real time, a miniature computer that can take those readings and use an algorithm to predict what will happen next and determine how much insulin is necessary to keep the levels steady, and an insulin pump driven by the computer that doses out the appropriate amount of the drug.
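In code, that loop has a very simple shape: read the sensor, estimate where glucose is heading, and command a dose. The sketch below uses a deliberately crude proportional rule with made-up constants, intended only to show the structure of such a loop; it does not reflect the algorithms used in any actual device, and real dosing logic is far more careful.

    # Toy closed-loop sketch: sensor reading -> dosing rule -> pump command.
    # Every constant and the control rule itself are made up for illustration;
    # this is not a clinical algorithm.

    TARGET = 110.0   # mg/dL, assumed glucose target
    GAIN = 0.02      # insulin units per mg/dL above target (made-up constant)
    INTERVAL = 15    # minutes between dosing decisions

    def dose_for(reading, previous_reading):
        """Crude proportional rule with a simple trend correction."""
        trend = (reading - previous_reading) / INTERVAL      # mg/dL per minute
        predicted = reading + trend * INTERVAL               # projected level
        excess = max(0.0, predicted - TARGET)
        return round(GAIN * excess, 2)                       # units to deliver

    overnight = [150, 165, 172, 160, 140, 118]   # fake sensor readings, mg/dL
    for prev, now in zip(overnight, overnight[1:]):
        print(f"glucose {now} mg/dL -> dose {dose_for(now, prev)} U")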
Two of the components--insulin pumps and continuous glucose monitors--are already on the commercial market (the latter received marketing approval by the U.S. Food and Drug Administration just a few years ago). "In the near term, you could probably create a pretty robust system with today's technologies," says Kowalski, whose group has spearheaded a coalition aimed at bringing an artificial pancreas to market as soon as possible.
Members of the consortium are experimenting with variations of this closed-loop system, so named because the computer algorithm connects the insulin pump and the glucose monitor, closing the loop. Perhaps the person closest to developing a commercial system is Roman Hovorka, a principal research associate at the University of Cambridge, in the U.K., where he leads the Diabetes Modelling Group. His first closed-loop study examined the effectiveness of the system when used overnight, during the hours when blood-sugar levels are likely to drop precipitously and complications can occur. "I want to move to an approach that could be commercialized, and the simplest is just to close the loop overnight, at a time when one cannot do too much about insulin anyway."
Hovorka used two devices, both commercially available. The first, a continuous glucose monitor, consists of a subcutaneous sensor that measures glucose levels in tissue beneath the skin and a device that communicates wirelessly with the sensor to download its data. The second is the pump itself, a pager-size device with an insulin reservoir that delivers the drug through a thin tube to a subcutaneous needle. Hovorka and his collaborators added an algorithm that not only put the pump and sensor in communication with each other, but also took the (sleeping) user out of the picture by determining precisely how much insulin to mete out every 15 minutes.
When tested in 12 children with type 1 diabetes, the closed-loop system brought the kids' blood-glucose levels into the target range 61 percent of the time, up from 23 percent for those who followed their normal routine. "With the closed loop, we are able to avoid the extremes--the extreme bad low and the extreme bad high," Hovorka says. He's currently working with device makers in the industry to create a marketable commercial product.
Technologically, the remaining obstacles for researchers are those of refinement--for example, constructing algorithms that are exquisitely honed to predict in which direction glucose levels are moving and at what rate. Other researchers are working on sensors that can monitor blood glucose over an extended period of time (currently, sensors must be replaced every three to eight days) and with improved accuracy.
Despite the fact that much of the technology is on the market, researchers must still prove to the FDA that their system is safe when combined with the algorithms, and that if anything goes wrong--if a sensor goes wonky or the insulin pump clogs up--the computer can sense it and either set off an alarm or turn the whole system off.
"You don't have to get the perfect system to make a tremendous advance and make it considerably easier to live with diabetes," says William Tamborlane, chief of pediatric endocrinology at Yale School of Medicine, who invented insulin-pump therapy in the late 1970s. As a clinician, he's more interested in seeing these incremental advances make their way to the patients than in waiting for a perfect system to be created. "We now have sensors that can say what the blood sugar's doing every minute," Tamborlane says. "And we have insulin pumps that can change how much insulin it gives on a minute-to-minute basis. We have the technology right now to come pretty close to what might be considered the ultimate solution."