A stretchable circuit allows researchers to make simple, high-quality camera sensors.
The eyes have it: This camera consists of a hemisphere-shaped array of photodetectors (white square with gold-colored dots) and a single lens atop a transparent globe. The curved shape of the photodetector array provides a wide field of view and high-quality images in a compact package.
Today's digital cameras are remarkable devices, but even the most advanced cameras lack the simplicity and quality of the human eye. Now, researchers at the University of Illinois at Urbana-Champaign have built a spherical camera that mimics the form and function of the human eye by fabricating a circuit on a curved surface.
The curved sensor has properties that are found in eyes, such as a wide field of view, that can't be produced in digital cameras without a lot of complexity, says John Rogers, lead researcher on the project. "One of the most prominent [features of the human eye] is that the detector surface of the retina is not planar like the digital chip in a camera," he says. "The consequence of that is [that] the optics are well suited to forming high-quality images even with simple imaging elements, such as the single lens of the cornea."
Electronic devices have been, for the most part, built on rigid, flat chips. But over the past decade, engineers have moved beyond stiff chips and built circuits on bendable sheets. More recently, researchers have gone a step beyond simple bendable electronics and put high-quality silicon circuits on stretchable, rubberlike surfaces. The advantage of a stretchable circuit, says Rogers, is that it can conform over curvy surfaces, whereas bendable devices can't.
The key to the spherical camera is a sensor array that can be deformed by about 50 percent from its original shape without breaking, allowing it to be removed from the stiff wafer on which it was originally fabricated and transferred to a rubberlike surface. "Doing that requires more than just making the detector flexible," says Rogers. "You can't just wrap a sphere with a sheet of paper. You need stretchability in order to do a geometry transformation."
The array, which consists of a collection of tiny squares of silicon photodetectors connected by thin ribbons of polymer and metal, is initially fabricated on a silicon wafer. It is then removed from the wafer with a chemical process and transferred to a piece of rubberlike material that was previously formed into a hemisphere shape. At the time of transfer, the rubber hemisphere is stretched out flat. Once the array has been successfully adhered to the rubber, the hemisphere is relaxed into its natural curved shape.
Because the ribbons that connect the tiny islands of silicon are so thin, they are able to bend easily without breaking, Rogers says. If two of the silicon squares are pressed closer together, the ribbons buckle, forming a bridge. "They can accommodate strain without inducing any stretching in the silicon," he says.
To complete the camera, the sensor array is connected to a circuit board that connects to a computer that controls the camera. The array is capped with a globelike cover fitted with a lens. In this setup, the sensor array mimics the retina of a human eye while the lens mimics the cornea.
Stretchable mesh: The square silicon photodetectors, connected by thin ribbons of metal and polymer, are mounted on a hemisphere-shaped rubber surface. The entire device is able to conform to any curvilinear shape due to the flexibility of the ribbons that connect the silicon islands. Credit: Beckman Institute, University of Illinois
"This technology heralds the advent of a new class of imaging devices with wide-angle fields of view, low distortion, and compact size," says Takao Someya, a professor of engineering at the University of Tokyo, who was not involved in the research. "I believe this work is a real breakthrough in the field of stretchable electronics."
Rogers isn't the first to use the concept of a stretchable electronic mesh, but unlike earlier meshes, his is not constrained to stretching in limited directions. And importantly, it is the first stretchable mesh to be implemented in an artificial-eye camera.
The camera's resolution is 256 pixels. At the moment, it's difficult to improve resolution due to the limitations of the fabrication facilities at the University of Illinois, says Rogers. "At some level, it's a little frustrating because you have this neat electronic eye and everything's pixelated," he says. But his team has sidestepped the problem by taking another cue from biology. The human eye dithers from side to side, constantly capturing snippets of images; the brain pieces the snippets together to form a complete picture. In the same way, Rogers's team runs a computer program that makes the images crisper by interpolating multiple images taken from different angles.
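Rogers's reconstruction software isn't described in detail, but the basic idea of sharpening a coarse image from several slightly offset exposures can be sketched with a simple shift-and-add scheme. The code below is only a hypothetical illustration (the frame data, the known sub-pixel shifts, and the upsample factor are all assumptions), not the team's actual program:

```python
import numpy as np

def combine_dithered_frames(frames, shifts, upsample=4):
    """Shift-and-add sketch: fuse low-resolution frames captured at known
    sub-pixel offsets onto a finer grid to recover extra detail.

    frames   -- list of 2-D arrays, all the same (low) resolution
    shifts   -- list of (dy, dx) offsets in low-res pixel units
    upsample -- number of fine-grid pixels per coarse pixel (assumed)
    """
    h, w = frames[0].shape
    acc = np.zeros((h * upsample, w * upsample))
    weight = np.zeros_like(acc)
    for frame, (dy, dx) in zip(frames, shifts):
        # Map each coarse pixel to its (shifted) location on the fine grid.
        oy = int(round(dy * upsample))
        ox = int(round(dx * upsample))
        for i in range(h):
            for j in range(w):
                y = i * upsample + oy
                x = j * upsample + ox
                if 0 <= y < acc.shape[0] and 0 <= x < acc.shape[1]:
                    acc[y, x] += frame[i, j]
                    weight[y, x] += 1
    weight[weight == 0] = 1          # avoid dividing empty fine-grid cells
    return acc / weight
```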
The most immediate application for these eyeball cameras, says Rogers, is most likely with the military. The simple, compact design could be used in imaging technology in the field, he suggests. And while the concept of an electronic eye conjures up images of eye implants, Rogers says that at this time he is not collaborating with other researchers to make these devices biocompatible. However, he's not ruling out the possibility in the future.
Shape Matters for Nanoparticles
Particles the size and shape of bacteria could more effectively deliver medicine to cells.
Cell invaders: Cylindrical nanoparticles slip easily into cells. They could be used to deliver drugs to cancerous tissues.
Nanoparticles shaped to resemble certain bacteria can more easily infiltrate human cells, according to a new study. The results suggest that altering the shape of nanoparticles can make them more effective at treating disease.
Joseph DeSimone, a professor of chemistry and chemical engineering at the University of North Carolina at Chapel Hill and at North Carolina State University, tested how nano- and microparticles shaped like cubes, squat cylinders, and long rods were taken up into human cells in culture. He found that long, rod-shaped particles slipped into cells at four times the rate of short, cylindrical shapes with similar volumes. DeSimone, who reported the findings this week in the Proceedings of the National Academy of Sciences, notes that the faster nanoparticles resemble certain types of bacteria that are good at infecting cells. "A lot of rodlike bacteria get into cells quickly," he says. "Using the same size and shape, our particles get in very quickly too."
Researchers have long suspected that mimicking the distinctive shapes of bacteria, fungi, blood cells--even pollen--could improve the ability of nanoparticles to deliver drugs to diseased cells in the body. But it has been difficult to test this suspicion. What's needed is a way to quickly make billions of particles of identical size, chemistry, and shape, and then systematically vary these parameters to learn what effect they have.
DeSimone developed a way to easily design and test a wide variety of particle shapes, while at the same time controlling for size and chemical composition. For example, he can make particles of various shapes--boomerangs, donuts, hex nuts, cylinders, cubes--while keeping the size constant. He can also make boomerang-shaped particles of various sizes, or keep size and shape constant and vary only the chemical composition of the particles. The process gives researchers an unprecedented level of control, he says, which makes it easy to quickly test how changing various parameters of the nanoparticles, including shape, affect how they behave in tissues.
"Historically, most of the work with particles has been with spherical particles because making particles of different shapes has been very challenging," says Samir Mitragotri, a professor of chemical engineering at the University of California, Santa Barbara. DeSimone "demonstrates a very powerful technology that shows [that] particles of different shapes and materials can be prepared," Mitragotri says. "It goes well beyond current tools." He adds that the paper shows that "shape makes a big difference in biological response."
DeSimone also identified the precise mechanisms by which cells take in particles of different shapes. These mechanisms determine where the particles end up inside the cell. This new data could help researchers design particles that reach particular compartments within a cell that have a known level of acidity. The researchers could then fine-tune the particles so that they break down and release their cargo only once they reach the desired compartment. That way, the particles will only release drugs inside targeted cells, leaving healthy cells unharmed.
DeSimone is using his manufacturing technique to produce nanoparticles that deliver drugs to cancer cells. He's starting trials in mice for a number of cancer types--breast, ovarian, cervical, lung, prostate--and lymphoma. He's able to conduct so many trials because it's easy to add different treatment molecules to his particles. Particles developed for targeting breast cancer can easily be changed to target lung cancer, for example. During the tests, DeSimone will systematically vary doses, sizes, and so on to determine the least toxic, most effective combinations. "You can now barrage a lot of different cancers and look at what's the most efficacious design parameters you can put in the system," he says.
DeSimone has developed particles that resemble red blood cells in size, shape, and flexibility to help them circulate in the bloodstream without being removed by biological barriers. (He's testing these in animals as a potential basis for artificial blood.) He is also testing long, wormlike particles that can't easily be consumed by macrophages. "The particle has to overcome so many hurdles before it reaches its destination," Mitragotri says. Previously, researchers have been limited to changing the size and chemistry of particles. Adding the ability to control shape provides a "big boost in overcoming these hurdles," Mitragotri says.
Cloud Computing's Perfect Storm?
An Intel, Yahoo, and HP initiative will use large-scale research projects to test a new Internet-based computing infrastructure.
Last week, Intel, Yahoo, HP, and an international trio of research institutions announced a joint cloud-computing research initiative. The ambitious six-site project is aimed at developing an Internet-based computer infrastructure stable enough to host companies' most critical data-processing tasks. The project also holds unusual promise for advances in fields as diverse as climate change modeling and molecular biology.
The new array of six linked data centers, one operated by each project sponsor, will be one of the largest experiments to date focusing on cloud computing--an umbrella term for moving complex computing tasks, such as data processing and storage, into a network-connected "cloud" of external data centers, which might perform similar tasks for multiple customers.
The project's large scope will allow researchers to test and develop security, networking, and infrastructure components on a large scale simulating an open Internet environment. But to test this infrastructure, academic researchers will also run real-world, data-intensive projects that, in their own right, could yield advances in fields as varied as data mining, context-sensitive Web search, and communication in virtual-reality environments.
"Making this marriage of substantial processing power, computing resources, and data resources work efficiently, seamlessly, and transparently is the challenge," says Michael Heath, interim head of the computer-science department at the University of Illinois at Urbana-Champaign, an institute that is part of the alliance. Heath says that for the project to be successful, the team, which also includes Germany's Karlsruhe Institute of Technology and Singapore's Infocomm Development Authority, needs "to be running realistic applications."
Much of the technology industry has recently focused on cloud computing as a next critical architectural advance, but even backers say that the model remains technologically immature.
Web-based software and the ability to "rent" processing power or data storage from outside companies are already common. The most ambitious visions of cloud computing expand on this, predicting that companies will ultimately use remotely hosted cloud services to perform even their most complex computing activities. However, creating an online environment where these complicated tasks are secure, fast, reliable, and simple still presents considerable challenges.
Virtually every big technology company, including Google, IBM, Microsoft, and AT&T, already has a cloud-computing initiative. Farthest along commercially may be Amazon, whose Web Services division already hosts computing, storage, databases, and other resources for some customers.
The new cloud-computing project will consist of six computing clusters, one housed with each founding member of the partnership, with each containing between 1,000 and 4,000 processors. Each of the companies involved has a specific set of research projects planned, with many broadly focusing on operational issues such as security, load balancing, managing parallel processes on a very large scale, and how to configure and secure virtual machines across different locations.
Researchers will be given unusually broad latitude to modify the project's architecture from top to bottom, developing and experimenting with ideas applying to hardware, software, networking functions, and applications. Project managers say that one goal is to see how changes at one technical level affect others.
"In the cloud, we have the opportunity for integrated design, where one entity can make design choices across an entire environment," says Russ Daniels, chief technology officer of HP's cloud-services division. "This way, we can understand the impact of design choices that we make at the infrastructure level, as well as the impact they have on higher-level systems."
HP, for example, will be focusing in part on an ongoing project called Cells as a Service, an effort to create secure virtual "containers" that are composed of virtual machines, virtual storage volumes, and virtual networks. The containers can be split between separate data centers but still treated by consumers as a traditional, real-world collection of hardware.
Among Yahoo's specific projects will be the development of Hadoop, an open-source software platform for creating large-scale data-processing and data-querying applications. Yahoo has already built one big cloud-computing facility called M45 that is operated in conjunction with Carnegie Mellon University. M45 will also be folded into this new project.
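Hadoop itself is written in Java, but the map-and-reduce programming pattern it provides can be sketched in a few lines of Python. The word-count task and function names below are assumptions chosen for illustration; this shows the shape of the programming model, not Yahoo's production code:

```python
from collections import defaultdict
from itertools import chain

def map_phase(document):
    # Emit (key, value) pairs -- here, one count per word.
    for word in document.split():
        yield word.lower(), 1

def reduce_phase(key, values):
    # Combine all values emitted for one key.
    return key, sum(values)

def run_job(documents):
    # Shuffle step: group intermediate pairs by key, as Hadoop does
    # between the map and reduce phases.
    grouped = defaultdict(list)
    for key, value in chain.from_iterable(map_phase(d) for d in documents):
        grouped[key].append(value)
    return dict(reduce_phase(k, v) for k, v in grouped.items())

print(run_job(["the cloud stores data", "the cloud processes data"]))
# {'the': 2, 'cloud': 2, 'stores': 1, 'data': 2, 'processes': 1}
```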
Running in parallel with this systems-level research will be the assortment of other research projects designed to test the cloud infrastructure.
Computer scientists at the Illinois facility have a handful of data- and processing-intensive projects under way that are likely to be ported to the cloud facilities. According to Heath, one key thrust will be "deep search" and information extraction, such as allowing a computer to understand the real-world context of the contents found in a Web page. For example, today's search engines have difficulty understanding that a phone number is in fact an active phone number, rather than just a series of digits. A project run by Urbana-Champaign professor Kevin Chang is exploring the idea of using the massive quantities of data collected by Web-wide search engines as a kind of cross-reference tool, so that the joint appearance of "555-1212" with "John Borland" multiple times online might identify the number as a phone number and associate it with that particular name.
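Chang's system is not described in detail, but the cross-referencing idea can be sketched as a simple co-occurrence count over pages: pairs that keep appearing together accumulate evidence. Everything in the sketch below -- the toy regular expressions, the sample pages, and the scoring -- is an assumption for illustration only:

```python
import re
from collections import Counter

PHONE = re.compile(r"\b\d{3}-\d{4}\b")             # toy pattern for a local number
NAME = re.compile(r"\b[A-Z][a-z]+ [A-Z][a-z]+\b")  # toy pattern for a person's name

def associate(pages):
    """Count how often each (number, name) pair appears on the same page.

    Repeated co-occurrence across many pages is weak evidence that the
    number belongs to that person -- the cross-referencing idea above.
    """
    pairs = Counter()
    for text in pages:
        numbers = set(PHONE.findall(text))
        names = set(NAME.findall(text))
        for num in numbers:
            for name in names:
                pairs[(num, name)] += 1
    return pairs

pages = [
    "John Borland can be reached at 555-1212.",
    "The number 555-1212 belongs to John Borland.",
    "For support, dial 555-9876 and ask for Jane Smith.",
]
print(associate(pages).most_common(2))
# [(('555-1212', 'John Borland'), 2), (('555-9876', 'Jane Smith'), 1)]
```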
Heath says that other projects might include experiments with tele-immersive communication--virtual-reality-type environments that let computers provide physical, or haptic, feedback to users as they communicate or engage in real-world activities controlled remotely over the Web.
In an e-mail, Intel Research vice president Andrew Chien said that other topics could include climate modeling, molecular biology, industrial design, and digital library research.
"By looking at what people are really doing, we will learn about what is really important from an infrastructure perspective," says Raghu Ramakrishnan, chief scientist for Yahoo's Cloud Computing and Data Infrastructure Group. "We already know enough to put forth systems that are usable today, but not enough that we can deliver on all the promise that people see in the paradigm."
The Brain Unmasked
New imaging technologies reveal the intricate architecture of the brain, creating a blueprint of its connectivity.
Brain mapping: A variation on MRI called diffusion spectrum imaging allows scientists to map the neural fibers that relay signals in the brain. Each fiber in the image represents hundreds to thousands of fibers in the brain, each traveling along the same path. Credit: George Day, Ruopeng Wang, Jeremy Schmahmann, Van Wedeen, MGH
The typical brain scan shows a muted gray rendering of the brain, easily distinguished by a series of convoluted folds. But according to Van Wedeen, a neuroscientist at Massachusetts General Hospital, in Boston, that image is just a shadow of the real brain. The actual structure--a precisely organized tangle of nerve cells and the long projections that connect them--has remained hidden until relatively recently.
Traditional magnetic resonance imaging, or MRI, can detect the major anatomical features of the brain and is often used to diagnose strokes and brain tumors. But advances in computing power and novel processing algorithms have allowed scientists to analyze the information captured during an MRI in completely new ways.
Diffusion spectrum imaging (DSI) is one of these twists. It uses magnetic resonance signals to track the movement of water molecules in the brain: water diffuses along the length of neural wires, called axons. Scientists can use these diffusion measurements to map the wires, creating a detailed blueprint of the brain's connectivity.
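Full diffusion spectrum imaging recovers a distribution of fiber orientations in every voxel; a much-simplified version of the fiber-mapping step can be sketched as streamline tracking that repeatedly steps along the locally dominant diffusion direction. The direction field, seed point, and step size below are assumptions, and the sketch stands in for the MGH group's far more sophisticated reconstruction:

```python
import numpy as np

def track_fiber(direction_field, seed, step=0.5, n_steps=200):
    """Very simplified streamline tracking: from a seed point, repeatedly
    step along the locally dominant diffusion direction.

    direction_field -- array of shape (X, Y, Z, 3): one unit vector per voxel
                       giving the principal direction water diffuses (an
                       assumption; real DSI recovers a full orientation
                       distribution in each voxel)
    seed            -- starting coordinate (x, y, z) in voxel units
    """
    point = np.asarray(seed, dtype=float)
    path = [point.copy()]
    shape = direction_field.shape[:3]
    for _ in range(n_steps):
        idx = tuple(np.clip(np.round(point).astype(int), 0, np.array(shape) - 1))
        direction = direction_field[idx]
        if np.linalg.norm(direction) == 0:   # no coherent diffusion: stop tracking
            break
        point = point + step * direction
        path.append(point.copy())
    return np.array(path)
```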
On the medical side, radiologists are beginning to use the technology to map the brain prior to surgery, for example, to avoid important fiber tracts when removing a brain tumor. Wedeen and others are now using diffusion imaging to better understand the structures that underlie our ability to see, to speak, and to remember. Scientists also hope that the techniques will grant new insight into diseases linked to abnormal wiring, such as schizophrenia and autism.
An accompanying animation maps the wiring of a marmoset monkey brain, which is about the size of a plum. By scanning a dissected brain for 24 hours, scientists were able to generate a map with a spatial resolution of 400 microns. "The image quality and resolution are much higher than we can obtain in a living subject," says Wedeen.
As the brain rotates, you can see that all the neural fibers are visualized in half of the brain: the spiky fibers that look like pins in a pincushion are part of the cerebral cortex. The sparser half of the image displays only the fibers originating in the opposite side.
It's easy to see that this brain lacks the folding that is characteristic of the human brain. "The human brain would look 25 times as complicated," says Wedeen. "Every gyrus [fold] has its own story to tell."
Compressing Light
A new way to confine light could enable better optical communications and computing.
Guiding light: Light can be compressed between a semiconductor nanowire and a smooth sheet of silver, depending on the nanowire’s diameter and its height above the metal surface. Here, light is confined in a 100-nanometer gap by a nanowire with a 200-nanometer diameter.
A new way to compress light, designed by researchers at the University of California, Berkeley, could make optical communications on computer chips more practical. The researchers developed computer simulations that suggest that it is possible to confine infrared light to a space 10 nanometers wide. What's more, unlike other techniques for compressing light, the configuration will allow light to travel up to 150 microns without losing its energy, which is key for small optical systems.
Scaling down optical devices is important for future optical communications and computing. Light-based communications use wavelengths on the order of microns to carry information, and they are successful in large-scale applications such as optical fiber networks that span oceans. But to transmit data over short distances, like between circuit components on a microchip, long-wavelength light must be squeezed into tiny spaces.
Previously, scientists have effectively shrunk light by converting it into waves that travel along the surface of metals. But these waves lose their energy before they can carry information over useful distances. Optical fiber, on the other hand, carries light over several kilometers with little energy loss, but it cannot be miniaturized to dimensions smaller than about half the wavelength of the light it carries.
The Berkeley researchers combined these techniques to both compress the light and allow it to travel far enough to transmit information on computer chips. They place a semiconductor nanowire, such as gallium arsenide, within nanometers of a thin sheet of silver. Without the nanowire, light converted into surface waves would spread out over the silver sheet, and the light energy would be quickly dissipated. But with the nanowire present, charges pile up on both the silver and the nanowire surfaces, trapping light energy between them. The nanowire has the effect of confining and guiding surface waves, preventing them from spreading out over the metal and dissipating the light energy.
Using computer simulations to tune both the diameter of the nanowire and the distance between the nanowire and the metal, the researchers found an optimal arrangement that would allow light to be squeezed into the smallest space possible while still retaining a sufficient amount of energy: a nanowire with a 200-nanometer diameter placed 10 nanometers above the silver surface would give the best combination of results for communications wavelengths of about 1.5 microns.
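The researchers' electromagnetic solver isn't reproduced here, but the structure of the search they describe -- sweep nanowire diameter and gap height, discard combinations that are too lossy, and keep the tightest confinement that remains -- can be sketched as follows. The two placeholder functions encode only assumed trends, so the numbers they return (and the optimum the sweep finds) are illustrative, not the 200-nanometer/10-nanometer result reported above:

```python
import numpy as np

def mode_confinement(diameter_nm, gap_nm):
    # Placeholder for a full electromagnetic simulation: smaller gaps are
    # assumed to confine the mode more tightly (smaller score is better).
    return gap_nm + 0.05 * diameter_nm

def propagation_length(diameter_nm, gap_nm):
    # Placeholder: tighter confinement is assumed to cost propagation length.
    return 5.0 * gap_nm + 0.5 * diameter_nm

def sweep(diameters, gaps, min_propagation_um=150.0):
    """Return the (score, diameter, gap) giving the tightest confinement
    while still meeting a minimum propagation-length requirement."""
    best = None
    for d in diameters:
        for g in gaps:
            if propagation_length(d, g) < min_propagation_um:
                continue                      # too lossy for on-chip links
            score = mode_confinement(d, g)    # smaller is better
            if best is None or score < best[0]:
                best = (score, d, g)
    return best

print(sweep(diameters=np.arange(100, 500, 50), gaps=np.arange(2, 101, 2)))
```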
Shape shifting: Light is confined to different parts of the waveguide when the diameter or height of the nanowire changes. From left to right: light travels inside a 400-nanometer nanowire placed 100 nanometers above the surface; some light begins to travel between the nanowire and the surface when the diameter is reduced to 200 nanometers; when the nanowire is just two nanometers above the surface, light is trapped in the tiny gap for both 200-nanometer and 400-nanometer nanowires.
"This could truly enable a revolution in the [nanophotonics] field," says Marin Soljacic, a physics professor at MIT. For example, the resolutions of sensing and imaging techniques are limited by the wavelength of light they use to measure objects; anything beneath the resolution can't be seen. A device that confines light beyond its natural wavelength, however, could measure and return information about what lies beyond these limits.
The group is cautiously optimistic about its innovation. "This is probably our biggest breakthrough in the last seven or eight years," says Xiang Zhang, a professor of mechanical engineering at UC Berkeley, who led the research. "But we still have a long way to go." The researchers have already started to demonstrate in experimental devices the performance that their simulations predicted. However, they have only tested the devices with visible light frequencies, which are still hundreds of nanometers smaller than the infrared frequencies used in communications. And while a propagation distance of 150 microns is good, says Zhang, they want a distance of at least a millimeter for practical devices on integrated chips.
With continued refinement, the technique could play several roles in optical computing. The setup could be used to steer light through certain paths on chips. The group is even toying with the idea of using the device to produce an ultrasmall light source. Still, any practical devices are several years away. "They will have to master the fabrication," says Soljacic. "But the simulations seem convincing, and I have complete faith that it will work."
Spit Sensor Spots Oral Cancer
An ultrasensitive optical protein sensor analyzes saliva.
Analyzing spit: Leyla Sabet, a member of the UCLA research team that built the new optical protein sensor, sits in front of the device. Based on a confocal microscope, the ultrasensitive system is being used by the researchers to detect biomarkers in saliva samples that are linked to oral cancer.
An optical sensor developed by researchers at the University of California, Los Angeles (UCLA), can, for the first time, measure proteins in saliva that are linked to oral cancer. The device is highly sensitive, allowing doctors and dentists to detect the disease early, when patient survival rates are high.
The researchers are currently working with the National Institutes of Health (NIH) to push the technology to clinical tests so that it can be developed into a device that can be used in dentists' offices. Chih-Ming Ho, a scientist at UCLA and principal investigator for the sensor, says that it is a versatile instrument and can be used to detect other disease-specific biomarkers.
When oral cancer is identified in its early stages, patient survival rate is almost 90 percent, compared with 50 percent when the disease is advanced, says Carter Van Waes, chief of head and neck surgery at the National Institute on Deafness and Other Communication Disorders (NIDCD). The American Cancer Society estimates that there will be 35,310 new cases of oral cancer in the United States in 2008. Early forms are hard to detect just by visual examination of the mouth, says Van Waes, so physicians either have to perform a biopsy--remove tissue for testing--or analyze proteins in blood.
Detecting cancer biomarkers in saliva would be a much easier test to perform, but it is also technically more challenging: protein markers are harder to spot in saliva than in blood. To create the ultrasensitive sensor, researchers started with a glass substrate coated with a protein called streptavidin that enables other biomolecules to bind to the substrate and to one another. The researchers then added a molecule that would catch and bind the cancer biomarker--a protein in saliva called IL-8 that previous research has proved to be related to oral cancer. They also added molecules designed to keep the glass surface free of other proteins that might muddy detection of the biomarker. To visualize the target molecules, Ho's team then added a set of fluorescently tagged proteins designed to attach to the captured IL-8 markers.
Because saliva has a lower concentration of proteins than blood does, the team needed a highly sensitive method to detect the tagged proteins among the background noise, stray molecules in saliva that also fluoresce. So the researchers used a confocal microscope--an imaging system that employs a laser to collect the light generated from a sample--to analyze the saliva. Ho and his team found that focusing the laser light on a specific part of the sample resulted in a higher signal-to-noise ratio, allowing them to detect lower concentrations of the cancer biomarker.
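The benefit of restricting the analysis to the focused region can be illustrated with a toy signal-to-noise calculation. The synthetic image, region sizes, and noise levels below are invented for illustration and are not the UCLA group's data or method:

```python
import numpy as np

def snr(region, background):
    """Signal-to-noise ratio: mean signal above background, over background noise."""
    return (region.mean() - background.mean()) / background.std()

rng = np.random.default_rng(0)

# Toy fluorescence image: broad autofluorescence background plus a small
# bright spot where tagged markers are concentrated at the laser focus.
image = rng.normal(loc=100.0, scale=10.0, size=(64, 64))
image[30:34, 30:34] += 25.0                      # the focused signal

background = image[:16, :16]                     # a corner with no signal
whole_field = image                              # naive: average everything
focused_spot = image[30:34, 30:34]               # confocal-style: small region

print("SNR over whole field:", snr(whole_field, background))
print("SNR at focused spot: ", snr(focused_spot, background))
```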
Better Batteries Charge Up
A startup reports progress on a battery that stores more energy than lithium-ion ones.
A Texas startup says that it has taken a big step toward high-volume production of an ultracapacitor-based energy-storage system that, if claims hold true, would far outperform the best lithium-ion batteries on the market.
Dick Weir, founder and chief executive of EEStor, a startup based in Cedar Park, TX, says that the company has manufactured materials that have met all certification milestones for crystallization, chemical purity, and particle-size consistency. The results suggest that the materials can be made at a high-enough grade to meet the company's performance goals. The company also said a key component of the material can withstand the extreme voltages needed for high energy storage.
"These advancements provide the pathway to meeting our present requirements," Weir says. "This data says we hit the home run."
EEStor claims that its system, called an electrical energy storage unit (EESU), will have more than three times the energy density of the top lithium-ion batteries today. The company also says that the solid-state device will be safer and longer lasting, and will have the ability to recharge in less than five minutes. Toronto-based ZENN Motor, an EEStor investor and customer, says that it's developing an EESU-powered car with a top speed of 80 miles per hour and a 250-mile range. It hopes to launch the vehicle, which the company says will be inexpensive, in the fall of 2009.
But skepticism in the research community is high. At the EESU's core is a ceramic material consisting of a barium titanate powder coated with aluminum oxide and a type of glass. At a materials-research conference earlier this year in San Francisco, attendees were asked whether such an energy-storage device was possible. "The response was not very positive," said one engineering professor who attended the conference.
Many have questioned EEStor's claims, pointing out that the high voltages needed to approach the targeted energy storage would cause the material to break down and the storage device to short out. There would be little tolerance for impurities or imprecision--something difficult to achieve in a high-volume manufacturing setting, skeptics say.
But Weir is dismissive of such reactions. "EEStor is not hyping," he says. Representatives of the company said in a press release that certification data proves that voltage breakdown of the aluminum oxide occurs at 1,100 volts per micron--nearly three times higher than EEStor's target of 350 volts. "This provides the potential for excellent protection from voltage breakdown," the company said.
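Why the breakdown field matters so much follows from the textbook expression for the energy stored per unit volume in a dielectric (the scaling argument in the comments is general physics, not an EEStor figure):

```latex
% Energy density stored in a linear dielectric of relative permittivity eps_r
% under an electric field E (eps_0 is the vacuum permittivity):
\[
  u \;=\; \tfrac{1}{2}\,\varepsilon_0\,\varepsilon_r\,E^{2}
\]
% Because u grows with the square of E, a dielectric that survives roughly
% three times the target field could, all else being equal, store roughly
% nine times the energy per unit volume -- which is why the breakdown-voltage
% figure is central to EEStor's claims.
```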
Jeff Dahn, a professor of advanced materials in the chemistry and physics departments at Dalhousie University, in Nova Scotia, Canada, says the data suggests that EEStor has developed an "amazingly robust" material. "If you're going to have a one-micron dielectric, it's got to be pretty pure," he says.
Ian Clifford, CEO of ZENN Motor, says that the news "bodes well" for EEStor's next milestone: third-party verification that the powders achieve the desired high level of permittivity, which will help determine whether the materials can meet the company's energy-storage goals.
Weir says that EEStor's latest production milestones lay the foundation for what follows. It has taken longer than originally expected, he says, but the company is now in a position to deploy more-advanced technologies for the production of military-grade applications, alluding to EEStor's partnership with Lockheed Martin.
Weir says that momentum is building and that he'll start coming out with information about the company's progress on a "more rapid basis." Plans are also under way for a major expansion of EEStor's production lines. "There's nothing complex in this," he says, pointing to his past engineering days at IBM. "It's nowhere near the complexity of disk-drive fabrication."
Despite its critics, EEStor has won support from some significant corners. In addition to Lockheed Martin, venture-capital firm Kleiner Perkins Caufield & Byers is an investor, and former Dell Computer chairman Morton Topfer sits on EEStor's board.
The company is also in serious talks with potential partners in the solar and wind industry, where EEStor's technology can, according to Weir, help put 45 percent more energy into the grid. He says that the company is working toward commercial production "as soon as possible in 2009," although when asked, he gave no specific date. "I'm not going to make claims on when we're going to get product out there. That's between me and the customer. I don't want to tell the industry."
Dahn says that he hopes EEStor will succeed. "I hope it works like a charm, because it will be a lot easier than fuel cells and batteries if it comes to pass."
A New View for Documents
Browser-based technologies aim to make it easier to view documents online.
Web words: Documents can be embedded in Web pages using Scribd's iPaper, which allows users to quickly navigate its pages, search, and copy and paste.
A new tool for embedding documents on Web pages is cropping up on sites as diverse as the storage service Drop.io; LabMeeting, a social network for scientists; and the Obama campaign's official blog. Launched earlier this year, the format, called iPaper, is technology from Scribd, a company that hopes to become the sort of clearinghouse for documents that YouTube is for videos. With iPaper, the company offers a browser-based system for viewing documents that retains their original formatting and can be employed by the 98 percent of Internet users who have installed Adobe Flash.
Although most Web pages are documents, they often don't display consistently from one browser to another, and a large document can be awkward to navigate if it's presented as a series of linked pages on the Web. Similarly, when people share documents with one another, they can run into compatibility problems. For example, the new .docx format created by Microsoft's Office 2007 can't be opened by many other programs, including earlier versions of Office. One traditional solution to both problems has been the Adobe PDF, which preserves formatting and can be opened by most computer users.
However, Jared Friedman, chief technology officer of Scribd, sees a need for a solution to the problem that's built specifically for use through the browser. He says that browser-based versions have been built for most essential desktop programs. "In some sense, Adobe Acrobat is among the last programs to migrate online in a Web-based version," Friedman says.
Web-based software is typically stripped of some of the specialized features found in desktop versions but adds social features, and iPaper is no exception. Users can convert documents, including PDFs, Word documents, and rich text files, into iPaper by uploading them to the Scribd website or to a website that supports Scribd's system. Readers can navigate documents by scrolling or flipping to a tile view, search them, and copy and paste. They can also share documents, embed them on other sites, and, if the publisher allows it, download them in their original format to view offline.
FlashPaper, an earlier technology from Macromedia, inspired Scribd and iPaper, according to CEO and cofounder Trip Adler. Since Adobe didn't continue to support the product after it acquired Macromedia, Scribd decided to build its own version from scratch. The iPaper technology is built using Adobe Flash, and it streams documents to a Web page. This allows a reader to jump smoothly to page 500 of a document, for example, even if the rest of the document is still loading. Although Flash has recently become easier for search engines to index, Friedman says that streaming documents can still be a problem. Scribd supplements iPaper documents with a searchable format that crawlers can read.
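Friedman doesn't describe how the streaming works internally, but the general pattern behind "jump to page 500 before the rest loads" is straightforward: fetch and cache pages on demand rather than in order. The TypeScript sketch below illustrates only that general idea; the class, the page-URL scheme, and the method names are hypothetical and are not Scribd's API or implementation.

```typescript
// Illustrative sketch only -- not Scribd's code. Pages are fetched lazily and
// cached, so jumping to page 500 costs only the requests for that neighborhood
// of the document, not a sequential download of pages 1-499.
type PageImage = Blob;

class StreamingDocument {
  private cache = new Map<number, Promise<PageImage>>();

  constructor(private baseUrl: string, readonly pageCount: number) {}

  // Fetch a single page on demand; repeated requests reuse the same promise.
  getPage(pageNumber: number): Promise<PageImage> {
    if (pageNumber < 1 || pageNumber > this.pageCount) {
      return Promise.reject(new Error(`page ${pageNumber} out of range`));
    }
    if (!this.cache.has(pageNumber)) {
      const request = fetch(`${this.baseUrl}/pages/${pageNumber}`) // hypothetical endpoint
        .then((response) => {
          if (!response.ok) throw new Error(`failed to load page ${pageNumber}`);
          return response.blob();
        });
      this.cache.set(pageNumber, request);
    }
    return this.cache.get(pageNumber)!;
  }

  // Prefetch a few pages around the one being viewed so scrolling feels smooth.
  prefetchAround(pageNumber: number, radius = 2): void {
    for (let p = pageNumber - radius; p <= pageNumber + radius; p++) {
      if (p >= 1 && p <= this.pageCount) void this.getPage(p);
    }
  }
}

// Usage: jumping straight to page 500 triggers only the nearby fetches.
const doc = new StreamingDocument("https://example.com/docs/abc123", 600);
doc.prefetchAround(500);
doc.getPage(500).then((blob) => console.log(`page 500 loaded, ${blob.size} bytes`));
```

The point of such a design is that the cost of a jump scales with the pages actually viewed, not with the length of the document.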
Adler says that Scribd is still experimenting with business models, although the company has seen its technology adopted fairly widely. Storage companies such as Drop.io and Box use iPaper to allow their customers to view the items they have in storage without having to download them. Adler says that the Scribd site currently gets 21 million visitors a month. He notes that the company may make money through ads embedded in documents (a feature that's already available) or through buying and selling documents.
But Scribd may have more to worry about from Adobe than it thinks it does. Al Hilwa, program director for IDC's application development software research, says that Adobe has been working to fuse documents with Web presentation. He adds that the company has begun incorporating Flash into PDFs and making its various document technologies available through Acrobat.com.
Indeed, Adobe says that FlashPaper is not abandoned technology. Erik Larson, director of product management and marketing for Acrobat.com and the former product manager for FlashPaper at Macromedia, says, "FlashPaper as a product is no longer being developed, but FlashPaper as a concept is alive and well." He adds, "FlashPaper has become a set of Web services on a set of servers in the cloud."
A Cool Fuel Cell
A novel low-temperature electrolyte could make solid-oxide fuel cells more practical.
Conductive crystals: A scanning transmission electron microscope image shows the crystal structure of a new electrolyte material for solid-oxide fuel cells that works well at room temperature.
A new electrolyte for solid-oxide fuel cells, made by researchers in Spain, operates at temperatures hundreds of degrees lower than those of conventional electrolytes, which could help make such fuel cells more practical.
Jacobo Santamaria, of the applied-physics department at the Universidad Complutense de Madrid, in Spain, and his colleagues have modified a yttria-stabilized zirconia electrolyte, a common type of electrolyte in solid-oxide fuel cells, so that it works at just above room temperature. Ordinarily, such electrolytes require temperatures of more than 700 °C. Combined with improvements to the fuel-cell electrodes, this could lower the temperature at which these fuel cells operate.
Solid-oxide fuel cells are promising for next-generation power plants because they are more efficient than conventional generators, such as steam turbines, and they can use a greater variety of fuels than other fuel cells. They can generate electricity with gasoline, diesel, natural gas, and hydrogen, among other fuels. But the high temperatures required for efficient operation make solid-oxide fuel cells expensive and limit their applications. The low-temperature electrolyte reported by the Spanish researchers could be a "tremendous improvement" for solid-oxide fuel cells, says Eric Wachsman, director of the Florida Institute for Sustainable Energy, at the University of Florida.
In a solid-oxide fuel cell, oxygen is fed into one electrode, and fuel is fed into the other. The electrolyte allows oxygen ions to migrate from one electrode to the other, where they combine with the fuel; in the simplest case, in which hydrogen is the fuel, this produces water and releases electrons. The electrolyte prevents the electrons from traveling directly back to the oxygen side of the fuel cell, forcing them instead to travel through an external circuit, generating electricity. Via this circuitous route, they eventually find their way to the oxygen electrode, where they combine with oxygen gas to form oxygen ions, perpetuating the cycle.
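For reference, the textbook electrode reactions for a hydrogen-fueled solid-oxide cell, which the paragraph above describes in words, can be written as:

```latex
% Cathode (air electrode): oxygen gas picks up electrons to form oxide ions
\tfrac{1}{2}\,\mathrm{O_2} + 2e^- \rightarrow \mathrm{O^{2-}}
% Anode (fuel electrode): oxide ions arriving through the electrolyte oxidize the fuel
\mathrm{H_2} + \mathrm{O^{2-}} \rightarrow \mathrm{H_2O} + 2e^-
% Overall cell reaction
\mathrm{H_2} + \tfrac{1}{2}\,\mathrm{O_2} \rightarrow \mathrm{H_2O}
```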
The electrolyte--which is a solid material--typically conducts ions only at high temperatures. Santamaria, drawing on earlier work by other researchers, found that ionic conductivity at low temperatures could be greatly improved by combining layers of the standard electrolyte material with 10-nanometer-thick layers of strontium titanate. Because the two materials have different crystal structures, a large number of oxygen vacancies--sites in the crystal lattice that would ordinarily host an oxygen atom--form where they meet. These vacancies create pathways that allow oxygen ions to move through the material, improving its conductivity at room temperature by a factor of 100 million.
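As general background (and not a claim about this particular study's analysis), ionic transport in oxide electrolytes is usually described by a thermally activated, Arrhenius-type law, which makes clear why adding mobile oxygen vacancies helps most at low temperature:

```latex
% Standard textbook form for thermally activated ionic conduction
\sigma(T) \;=\; \frac{A}{T}\,\exp\!\left(-\frac{E_a}{k_B T}\right)
```

Here sigma is the ionic conductivity, E_a the activation energy for an ion (or vacancy) to hop between lattice sites, k_B Boltzmann's constant, and A a prefactor proportional to the concentration of mobile carriers--in this case, oxygen vacancies. More vacancies raise A directly, and any reduction in E_a is amplified exponentially as the temperature falls.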
The material is still some way from being incorporated into commercial fuel cells. For one thing, the large improvement in ionic conductivity will require further verification, Wachsman says, especially given the difficulty of measuring the performance of extremely thin materials. Second, the direction of the improved conductivity--along the plane of the material rather than perpendicular to it--will require a redesign of today's fuel cells. What's more, the limiting factor for operating temperature in today's fuel cells is the electrode materials; before room-temperature solid-oxide fuel cells are possible, these will also need to be improved.
Yet if initial results are confirmed by future research, the new materials will represent a significant advance. Ivan Schuller, a professor of physics at the University of California, San Diego, says that this represents a major change in the performance of electrolytes. He adds, "It will surely motivate much new work by others."
Video Microblogging Has Arrived
A San Francisco-based startup called 12seconds is a video version of Twitter, but how useful will it be?
In late July, a startup called 12seconds launched an early version of a product that lets people publicly post 12-second-long videos on the Internet about what they are doing. Using a Web camera or a cell-phone video camera, people record themselves doing anything--watching a football game at a bar, telling jokes, buying new shoes, playing with their child--and can upload it immediately to the Web, where others who subscribe to their videos get the update.
12seconds borrows heavily from the concepts of Twitter, an increasingly popular tool for so-called microblogging, in which people write pithy, 140-character updates on the status of their daily lives. A posted "tweet" can be published on Twitter's main page and sent directly to people who are following the person who posted. While initially laughed off as a waste of time, Twitter, founded in 2006, has slowly been gaining traction as more and more people and companies are finding it a useful way to quickly share information with a broad audience.
"Microblogging is really starting to take off," says Sol Lipman, founder of 12seconds. But in some instances, he says, short text updates just aren't as compelling as video. "I think video as a medium is significantly more engaging than text," Lipman notes. "If I'm at the bar with my friends, I want to show us having fun at the bar, not just text it."
The startup, based in San Francisco, was founded about five months ago and has no outside funding. Its ranks fluctuate between seven and ten people, depending on the workload, and about five of those employees work part time, says Lipman. 12seconds launched its "alpha" version of the product (alpha versions typically have fewer features than beta versions) on July 24, by providing four popular blogs, including TechCrunch, with 500 invitations to give out to their readers. Those invitations were snapped up quickly, says Lipman, leading to 7000 video uploads in just the first few days. In the coming weeks, the company will dole out additional invitations to the long queue of people turned away from the first round.
It's unsurprising that 12seconds has had such immediate small-scale success. Millions of people use Twitter, and many of them are interested in testing out new ways to update their status. Liz Lawley, a Twitter user and director of the Lab of Social Computing at Rochester Institute of Technology, says that she has seen a growing number of Twitter posts with links to 12seconds videos.
"I find it intriguing . . . I love the idea of enforced constraints," Lawley says, referring to the 140-character limit on Twitter and the 12-second limit on 12seconds. "I think constraints bring out wonderful creativity. Without constraints, what we do and think isn't as interesting."
But, Lawley notes, video microblogging isn't necessarily the next phase of microblogging. 12seconds suffers from the same problem that has kept video blogs from usurping the popularity of text blogs: it simply takes too long to get to the point. Lawley says that she can scan a page of 25 tweets in about six seconds and have a good idea of what they're about. Additionally, she can scan tweets while she's occupied with other tasks, such as sitting in a meeting or attending a talk at a conference. Video, however, requires that a viewer focus her aural and visual attention, and it's impossible to quickly scan large numbers of videos--25 twelve-second clips add up to five minutes of viewing. "This is where video and audio really fall apart," Lawley says. "That 12 seconds is much more of a commitment. It's something we might be willing to do for our most intimate ties, but it's unscalable."
Lipman hopes that the early interest in 12seconds will translate into continued growth for the company. In the coming weeks, 12seconds will offer software that will let outside programmers build applications using its technology. Allowing programmers to use its platform is one of the important reasons that Twitter caught on as it has: the more people write programs for the service, the more visible it becomes. And visibility leads to more users, which is the name of the game in the social Web industry.
Another lesson learned from Twitter, says Lipman, is to be aware, from the beginning, of the challenges of adding more users to the service. Over the past year, Twitter's service has crashed innumerable times. One of the culprits is Ruby on Rails, the Web framework on which the service was built--it simply wasn't designed for the large-scale communication infrastructure that Twitter has become. Lipman says that his team has picked a different technology that scales well for the application that 12seconds intends, but this still doesn't mean that the service will be without its hiccups. "That's why we have this alpha stage," he says. "As we're going through this, we're watching what causes problems."