Future Energy eNews IntegrityResearchInstitute.org Feb. 9, 2005

1) New Solar Cell - Flexible material generates electricity from infrared light at a reported 30% efficiency

2) Glaciers Melting at Alarming Rate - Alaska, Alps, Andes all report dramatic global warming effects

3) Cheap Oil is Gone - Oxford Institute for Energy Studies predicts sustained high prices

4) Cold Fusion Gets Activated - Near-unanimous recommendation that DOE should fund worthy proposals - IRI calls for Senate Hearing

5) Antigravity Report from European Space Agency - Good summary of theories in 20-page report (arxiv link here) though not exhaustive in its scope (e.g. electrogravitics overlooked). Nature magazine review below.

6) Super Charged Ultracapacitors - Storage time and energy density rival batteries for electric cars

7) Daintiest Dynamos - Nuclear betavoltaic batteries are the longest lasting since they continually produce electrons


1) Canadian Researcher Invents New Solar Cell

Astrid Poei, Science - Reuters, Thu Jan 13, 2005 http://story.news.yahoo.com/news?tmpl=story&u=/nm/20050113/sc_nm/energy_canada_solar_dc

TORONTO (Reuters) - It may only be a matter of time before we will be using our shirts to charge our cellphones.

Researchers at the University of Toronto have invented a flexible plastic solar cell that is said to be five times more efficient than current methods in converting energy from the sun into electrical energy.

Team leader Ted Sargent, a professor of electrical and computer engineering at the university, said the cell harnesses infrared light from the sun and can form a flexible film on the surface of cloth, paper or other materials.

And the film can turn 30 percent of the sun's power into usable electrical energy -- a far better performance than the 6 percent gleaned from the best plastic solar cells now in use.
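To put those efficiency figures in perspective, here is a back-of-envelope calculation (the ~1,000 W/m² peak solar irradiance value is a standard rule of thumb, not a number from the article):

```python
# Peak solar irradiance at the Earth's surface is roughly 1,000 W/m^2
# (a common rule-of-thumb value; actual irradiance varies with latitude,
# weather, and time of day).
PEAK_IRRADIANCE_W_PER_M2 = 1000.0

def panel_output_watts(area_m2: float, efficiency: float) -> float:
    """Electrical output of a solar cell of the given area and efficiency."""
    return PEAK_IRRADIANCE_W_PER_M2 * area_m2 * efficiency

# One square metre of the claimed 30%-efficient infrared-harvesting film
# versus the best existing plastic cells at 6%:
new_cell = panel_output_watts(1.0, 0.30)   # 300 W
old_cell = panel_output_watts(1.0, 0.06)   # 60 W
print(f"30% cell: {new_cell:.0f} W/m^2, 6% cell: {old_cell:.0f} W/m^2")
```

The five-fold gap between the two figures is exactly the "five times more efficient" claim in the article.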

"The fact that these materials harness the sun's energy using flexible materials potentially could allow you to weave the plastics into fibers, sort of the way we have synthetic fibers already, and to weave those into clothing and make something that's a wearable solar cell," Sargent said from Boston, where he is working until the summer.

"That's sort of portable electricity."

Sargent said the coating could be woven into a shirt or sweater and used to charge an item like a cellphone.

"We expect that our cellphones or our e-mail can go anywhere with us, but we don't have that expectation of a continuous supply of power. The best that we have is batteries, which run out," he said.

"So if we could have a wireless source of power like how the sun would provide, this would be pretty exciting."

Research about the new cell was published in the Sunday online edition of the scientific journal Nature Materials, and Sargent said he was now looking for investors who could turn the invention into a commercially viable product.

Terry White, president of the Solar Energy Society of Canada, said solar cells along these lines could transform the industry.

"If they make (solar cells) both less expensive and the potential applications more varied, then it's a major breakthrough," he said.

Sargent said the technology could be available to the average consumer within five to ten years. But it already has Wall Street venture capitalists interested.

"The technology really caught my eye both in the scientific literature and the business prospects," said Josh Wolfe, managing partner at Lux Capital in Manhattan, a venture capital firm that makes an estimated $1- to $2-million U.S. investment per project in early stage nanotechnology.

"So the concept of having rollable newspaper displays or other things that could power your laptop or portable devices or create new products that are best left to the creativity of the engineers, to me, it represents a pretty sea-change."

So what happens if the sun doesn't shine?

"There is obviously no power in the visible (light)," said Sargent. "But in the infrared, it's not completely zero power. It doesn't have to be as hot as the sun, but everything that's warm, gives off some heat. Even people and animals give off heat. So there actually is some power remaining in the infrared even when it appears to us to be dark outside."
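Sargent's point that "everything that's warm gives off some heat" can be quantified with the Stefan-Boltzmann law (a standard physics sketch, not taken from the article):

```python
# Stefan-Boltzmann law: power radiated per unit area of a body at
# absolute temperature T is P = epsilon * sigma * T^4.
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def radiated_power_w_per_m2(temp_kelvin: float, emissivity: float = 1.0) -> float:
    """Radiated power density of a body at the given temperature."""
    return emissivity * SIGMA * temp_kelvin ** 4

# Human skin at ~305 K radiates on the order of 490 W/m^2 in the
# infrared (net heat loss is far smaller, since the surroundings
# radiate back) -- which is why some infrared power remains available
# even when it looks dark outside.
print(f"{radiated_power_w_per_m2(305):.0f} W/m^2")
```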


2) Alpinists' Ice-Dreamy Mountains Melting Away

By Katy Human, The Denver Post, 12 January 2005 http://www.denverpost.com/Stories/0,1413,36%257E53%257E2648220,00.html?search=filter

Where there was once cold, hard ice, there is now dirty slush and crumbling rock.

From the peaks and slopes of many of the world's most challenging mountains, ice and snow are dripping away, reshaping the century-old sport of alpinism and disquieting longtime mountain climbers.

"Among alpinists who have been climbing for 20, 30 years, there is this sense of urgency that these climbs are going away," said John Bicknell, a guide, co-director of the Colorado Mountain School in Estes Park and a former geologist. "For me, it'll be an immense loss. It's where I've spent most of my life. It's the terrain I most love."

Around the world, high-altitude regions are warming and melting. Kilimanjaro's glaciers have all but disappeared. Glacier National Park's are melting so fast that federal computer models predict they'll be gone by 2030.

Mark Dyurgerov, a University of Colorado glacier expert and former alpinist, calculated that the volume of the world's glaciers has dropped by about 10 percent in the past four decades. The decline is even faster in some places, he said, including the popular climbing meccas of Alaska, the Andes and the Alps.

Regardless of whether people, natural cycles or both are to blame, the effects are clear to climbers and guides. They're watching more rocks tumble down cliffs, throwing away useless old books and maps, and picking their way through miles of crumbly rock only to find climbs too dangerous to attempt.

"There are routes you cannot do anymore," said Jose Garcia, a Venezuelan who lives and works in Boulder and climbs around the world.

Three years ago, Garcia ventured into the ranges surrounding Piramide, a 19,000-foot-plus mountain in Peru. Avalanches constantly rumbled down the peak, loosened by warm temperatures and the changing structure of snow and ice hugging the mountain. "That used to be a very challenging, very interesting mountain," Garcia said. "You cannot climb it anymore. You can expect to die."

As glaciers draping the slopes of high mountains retreat, the ice moves, Garcia explained. Crevasses yawn wider and deeper, giant cliffs of ice called séracs break away, and melting ice or permafrost loosens boulders, which tumble down slopes.

"Books are now obsolete," Garcia said. "Maps also."

When considering a climb, he checks the Internet for new route descriptions and pictures, or he contacts colleagues who have been there recently.

Although mountain climbers are often labeled as risk-takers, most say the new risks do not make climbing more fun.

"Climbers seldom look for dangerous routes," said Gary Neptune, owner of Neptune Mountaineering in Boulder and a lifetime alpinist. "Challenging, yes, but minimizing danger, that's part of the game."

Climbing in Africa several years ago, he and his colleagues searched for a glacier - the known access route to a peak in the Rwenzori Mountains. It had disappeared sometime in the past 30 years - the age of the photographs in his guidebook, Neptune said.

He and his colleagues abandoned the climb. "It's all getting less predictable or more extreme," Neptune said.

In the Alps, paths to some peaks have morphed from smooth glacial hikes into dangerous scrambles up rock-strewn slopes. Grosses Wiesbachhorn - the pitch in Austria where alpinists first used ice pitons in the 1920s - hasn't been icy in years, Neptune said.

A few decades ago, Boulder climber and guide Bob Culp loved to practice ice climbing at the foot of a glacier coming off France's Mont Blanc. He took his son there a few years ago, he said, but the trail to the glacier was closed. The two took another route to the glacier's edge.

"While we were standing there looking, a baseball-sized rock came tumbling down and hit me in the hip," Culp said. "I wasn't hurt, but I thought, 'This is not a place we want to be right now."'


3) With Geopolitics, Cheap Oil Recedes Into Past

By JAD MOUAWAD New York Times, January 3, 2005 http://www.nytimes.com/2005/01/03/business/03oil.html?ex=1107072526&ei=1&en=abeb977dee80b732

IT was a year that people in the oil markets are unlikely to forget - a year that prices set records, forecasts lost touch with reality, and almost everything that could go wrong, did. It was also a year that politics returned to the oil market.

And the trend is likely to continue this year. While oil prices have declined since October, many of the issues that have vexed the oil industry in 2004 are expected to recur. Cheap oil increasingly looks like a thing of the past.

Through the 1990's, prices were stable, supplies were secure and there was plenty of extra capacity to keep energy costs low and world growth buzzing. At an average of $20 a barrel, oil was viewed as just another commodity.

But then came ethnic and labor troubles in Nigeria; chaos and protests in Venezuela before President Hugo Chávez won a referendum allowing him to stay in power; hardball energy politics in Russia; and the continuing insurgency in Iraq.

While supplies of oil to the world markets were rarely interrupted, the uncertainties created by these events raised crude oil prices in New York by two-thirds this year, to a high of more than $55 a barrel in October. And as energy costs surged, many analysts, traders and politicians woke up to the reality that oil was different from cocoa or coffee.

"Oil is a political commodity," said Robert Mabro, president of the Oxford Institute for Energy Studies, one of the world's foremost energy experts. "Geopolitics is the most fundamental issue if you're looking at oil markets. People seem to have forgotten that since the 1980's."

Of course, this is not the first time that oil and politics have mixed.

Decades ago, militant governments in Iran and Libya, for example, nationalized their oil sectors, forcing American and European companies out and taking charge of their natural resources. Then came the oil embargo and the price shocks of 1973-74 and 1978-81, with long lines for gasoline and steep rises in inflation.

But for the most part, politics had dropped off the energy map since then. In the 1980's, energy experts largely discounted a war between two of the Persian Gulf's top oil producers, Iran and Iraq, because Saudi Arabia and some other OPEC nations could simply crank up their production to make up for losses.

Even the invasion of Kuwait by Iraq in the summer of 1990, and the subsequent embargo on their oil exports, roiled energy markets for only a few weeks.

But in recent years, the oil industry has undergone a fundamental change. While demand has steadily increased each year, the industry's exploration efforts have not kept pace in new discoveries.

Now that worldwide production is running at full speed to meet increased demand, there is no cushion left in the system to weather a potential blow to producers like Iraq, Venezuela, Iran, Russia or Nigeria.

So, once again for oil markets, politics matters.

For instance, said Amy Myers Jaffe, the associate director of Rice University's energy program, Saudi Arabia's oil industry is no longer seen as being impenetrable to terrorist attacks; tensions in the Persian Gulf could swell over Iran's nuclear program; Nigerian factions may erupt in violence; and the fighting in Iraq goes on.

"All kinds of things can affect this market," Ms. Jaffe said, "especially when you're in a razor-thin situation. The only thing that could dramatically alter the outlook is a major economic recession."

The heightened geopolitical risk has translated into higher prices, something analysts call a "risk premium." Crude oil prices have averaged $30 a barrel since 2000, but last year crude oil in New York climbed to an average of $41 a barrel. While energy prices are high, adjusted for inflation they are below the level in March 1981, when crude oil approached $70 a barrel in today's dollars. Still, analysts do not expect prices to fall anytime soon.

High world prices since mid-2002 have helped sustain the economic recovery of Russia, which is raising output, according to the Energy Information Administration of the Department of Energy.

The former Soviet Union, of which Russia is by far the biggest country, is the world's largest producer, the agency says, followed by Saudi Arabia and the United States. The biggest consumers are the United States, which imports over half its needs; China; Japan; and the former Soviet Union, which uses about a third as much as it produces.

Leo Drollas, chief economist for the Center for Global Energy Studies in London, expects oil prices to be higher in 2005, on average, than they have been this year. The center was founded in 1990 by Sheik Ahmed Zaki Yamani, the former Saudi oil minister.

Even oil companies, which are usually extremely conservative about their price outlook, are coming around to that realization. Lord Browne, the chief executive of BP, now sees a new bottom of $30 a barrel for the next few years.

"There is something fundamental holding prices up, whether that's at $45, $40 or $35 a barrel," Mr. Mabro of the Oxford Institute said. "And politics won't improve things. Except if you believe a miracle is going to happen in Iraq."


4) US review rekindles cold fusion debate

Geoff Brumfiel, Nature, December 2, 2004 http://www.nature.com/news/2004/041129//full/041129-11.html

Energy panel split over whether experiments produced power.

Claims of cold fusion are intriguing, but not convincing. That is the conclusion of an 18-member scientific panel tasked with reviewing research in the area.

The findings, which were released on 1 December by the US Department of Energy, rekindle a 15-year-old debate over whether nuclear fusion can occur at room temperature.

According to the report, the panel was "split approximately evenly" on the question of whether cold fusion experiments were actually producing power in the form of heat. But members agreed that there is not enough evidence to prove that cold fusion has occurred, and they complained that much of the published work was poorly documented.

The review is a positive step for the field of cold fusion, according to David Nagel at George Washington University in Washington DC, who co-authored the summary of cold-fusion work that the panel reviewed. "Most scientists think that cold fusion is laughable, but when the dust settled, the researchers reviewing our work were evenly split," he says.


Others remain sceptical, however. "It is astonishing that the panel didn't find cold fusion convincing after almost 15 years of additional research," says Bob Park, a professor of physics at the University of Maryland, College Park, and author of Voodoo Science, a book about junk science. Park says that although the quality of research has improved, no one should buy into cold fusion just yet.

Hot stuff

Fusion commonly occurs in stars like the Sun, where hydrogen atoms meld together to form helium and release huge amounts of energy in the process. Scientists have long believed that fusion has the potential to be an enormous source of power here on Earth. However, no one has yet been able to control fusion reactions because they only occur at temperatures and pressures similar to those found in stars.

Or so scientists thought until 1989, when Stanley Pons and Martin Fleischmann of the University of Utah claimed to have created a new kind of fusion inside a small canister of water. Pons and Fleischmann claimed that when they ran an electrical current between two palladium plates separated by water containing deuterium, a heavy isotope of hydrogen, it created a small but measurable fusion reaction.


In a highly publicized press conference in Utah, the scientists claimed that this 'cold fusion' had the potential to revolutionize the world's energy production.

Pons and Fleischmann's claims were quickly debunked by other scientists, who pointed out numerous experimental errors in the measurements. But the idea of cold fusion lives on in movies and science fiction, and among a small cadre of researchers.

Those researchers finally caught the ear of the US energy secretary, Spencer Abraham, who commissioned the review in August 2003 from the department's science directorate.

Although the reviewers remained sceptical, they were nearly unanimous in their opinion that the energy department should fund well-thought-out proposals for cold fusion. Nagel says that he expects many in the long neglected field to submit research plans in the coming months. "I will be among them," he adds.


Executive Summary of DOE Cold Fusion Nuclear Reactions Report - DOE Office of Science


US Navy's Space and Naval Warfare Systems Center in San Diego two-volume Cold Fusion report. "Thermal and nuclear aspects of the Pd/D2O system: a decade of research at Navy laboratories" - Dr. Scott Chubb was one of the main authors, with Introduction by Dr. Frank Gordon - Volume I, 3.5 Meg ~ 121 pages in PDF format http://www.spawar.navy.mil/sti/publications/pubs/tr/1862/tr1862-vol1.pdf

"US Gives Cold Fusion a Second Look After Fifteen Years" - New York Times - 2004


"DOE Warms to Cold Fusion" - Physics Today - 2004


"Cold Fusion Isn't Dead, It's Just Withering from Scientific Neglect"

Sharon Begley, Wall Street Journal, Science Journal, Sept. 5, 2003 http://online.wsj.com/article/0,,SB106270936017252700,00.html

"Reasonable Doubt Doesn't Stop Progress"

New Scientist Vol 177 Issue 2388 - 29 March 2003, page 36


Additional web sites for information on cold fusion:





5) Antigravity has Feet of Clay

Philip Ball , Nature, 26 January 2005; http://www.nature.com/news/2005/050124/full/050124-8.html

Space agency report is a downer for gravity-control researchers. Interstellar spacecraft powered by warp drives are still the stuff of science fiction.

Could astronauts take a leaf out of H. G. Wells's book The First Men in the Moon, and use spacecraft propelled by antigravity devices? Some see the idea as science fiction, but major space agencies take it seriously.

In 2001, the European Space Agency (ESA) commissioned two scientists to evaluate schemes for gravity control. They have concluded that, even if such control were possible, the benefits for lifting spacecraft out of the Earth's gravitational field would probably not be worth the effort.

But scientists working on such propulsion schemes dispute the report. "I regard the conclusion, even if correct, as uninteresting and, frankly, irrelevant", says James Woodward of California State University at Fullerton, who has worked for NASA on gravity-control propulsion.

NASA ran a research programme on speculative propulsion methods, called Breakthrough Propulsion Physics, from 1996 until its funding was cut in 2003. The project's founder and former manager, Marc Millis of NASA's Glenn Research Center in Cleveland, Ohio, says that the ESA report corrects some misconceptions in the field of gravity control. But he thinks its scope is too limited to rule out future research in the area.

"The risk of this paper is that the casual reader will more broadly interpret the negative findings to apply to all inquiries into gravitational or inertial manipulation," says Millis.

The report is not meant to kill off all such ideas, says one of its authors, cosmologist Orfeu Bertolami of Lisbon's Technical University in Portugal. "Our recommendation to ESA was to keep a critical eye on them," he says. But, he adds, "this should be a low-intensity activity. Our estimates show that conventional ideas [for propulsion] are much more effective."

Down to Earth

Wells's fantasy hinges on the invention of a substance that shields any object placed above it from the Earth's gravity. But can such a material really exist? Antigravity seems to violate the law of conservation of energy, which prohibits perpetual motion. Place a wheel half over such a gravity shield and the shielded segment will rise, causing the wheel to rotate forever without a power source.
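The wheel objection can be made explicit with a short energy-bookkeeping sketch (our illustration of the standard argument, not taken from the ESA report):

```latex
% A wheel of radius r straddles the edge of a gravity shield, so a rim
% mass m feels weight mg on the unshielded side and zero weight on the
% shielded side. The net torque about the axle never vanishes:
\tau \approx m g r \neq 0
% Work extracted per revolution, with no energy supplied to the system:
W = \oint \tau \, d\theta \sim 2\pi m g r > 0
% Positive work output at zero input violates conservation of energy,
% which is why a passive gravity shield of this kind cannot exist.
```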

"Conventional ideas for propulsion are much more effective" says Orfeu Bertolami, author of ESA report on antigravity. What's more, gravity cannot be screened out in the same way as light or sound: Einstein's general theory of relativity explains that gravity results from the way mass distorts space-time itself.

But relativity is not the last word on the subject. "Gravity does not fit into the standard model of particle physics," says Clovis de Matos, technical officer in charge of the ESA study. "And we do not understand the gravitational interaction at the quantum level."

De Matos explains that ESA commissioned the survey of gravity control partly to establish whether a quantum theory of gravity might expose loopholes in our current understanding that space technology could exploit.

Bertolami and his co-author, Martin Tajmar of the space technology company ARC Seibersdorf in Austria, looked at proposals for assisting spacecraft launch by weakening gravity. They were not impressed. "None of the proposals seemed convincing and detailed enough," says Bertolami. "Experimentally and theoretically they do not seem to meet a standard we could qualify as scientific."

Floating ideas

All the same, the researchers did feel that some ideas for modifying gravity are worth exploring. For example, as they are reaching the edge of the Solar System, NASA's Pioneer spacecraft are deviating from their expected trajectories. This has led some scientists to suggest that the current theory of gravity is incomplete.

There have also been suggestions that magnetic effects in materials whose behaviour is dominated by quantum effects, such as superconductors, might induce a kind of artificial gravity. NASA scientists have studied claims by Russian physicist Eugene Podkletnov that a spinning superconductor can act as a gravity shield, reducing the weight of an object placed above it by about 2%.

Independent scientists have been unable to reproduce this and similar claims, says Tajmar. He and Bertolami conclude that there are currently no good grounds for taking such effects seriously. All the same, they don't rule out the possibility of gravitational anomalies in quantum materials.

Other options involve the gravitational and inertial masses of objects. Gravitational mass determines the force of gravity experienced by the object; inertial mass determines how much force is needed to set it in motion. General relativity says that the two definitions are identical, but some theories of quantum gravity suggest that they differ.

Tajmar and Bertolami looked at schemes to alter one kind of mass, leaving the other unchanged. They found that reducing the inertial mass has no effect on the amount of fuel needed to launch a spacecraft. And altering the gravitational mass alone, by gravity shielding for example, doesn't help unless the shielding is almost total.


Tajmar, M. & Bertolami, O. "Hypothetical Gravity Control and Possible Influence on Space Propulsion" Report Preprint at http://xxx.arxiv.org/abs/physics/0412176 (2005).


6) Super Charged

By Glenn Zorpette, IEEE Spectrum Online, January, 2005 http://www.spectrum.ieee.org/WEBONLY/publicfeature/jan05/0105wcap.html#f3

A tiny South Korean company is out to make capacitors powerful enough to propel the next generation of hybrid-electric cars

Let's say it's 2010, and you're boiling off midlife ennui or burnishing your golden years in time-honored fashion: by zooming around in a high-performance road machine. The car accelerates powerfully, and yet it moves quietly and nimbly, slaloming through curves like a go-cart. Best of all, it sips gas like a connoisseur enjoying 40-year-old Armagnac. Would you believe you owe these rejuvenating, guilt-free thrills to a bunch of capacitors?

Not just any capacitors, of course. To understand what's going on under the hood of this car, you'll need to leave behind the Lilliputian world of the picofarad and the microfarad and enter the realm of the kilofarad. It is a place where NessCap Co., in Yongin, South Korea, holds sway.

NessCap is one of about 10 makers of ultracapacitors, devices that can store so much charge that they are beginning to blur the functional distinction between the capacitor and the battery. And according to some experts, nobody does it better than NessCap, which offers a unit rated at an impressive 5000 farads at 2.7 volts in a package a little bigger than a half-liter soda bottle. NessCap's capacitors "perform as well as or better than any others we've ever tested, in terms of energy and power density," says Marshall Miller, a research engineer at the University of California at Davis, where he specializes in testing advanced capacitors and other devices.
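Converting NessCap's headline numbers into stored energy uses only the textbook formula E = ½CV²; the cell mass below is our rough assumption for a package a little bigger than a half-liter bottle, not a NessCap specification:

```python
def capacitor_energy_joules(capacitance_f: float, voltage_v: float) -> float:
    """Energy stored in a capacitor: E = 1/2 * C * V^2."""
    return 0.5 * capacitance_f * voltage_v ** 2

energy_j = capacitor_energy_joules(5000.0, 2.7)   # ~18.2 kJ
energy_wh = energy_j / 3600.0                     # ~5.1 Wh

# Assuming roughly 0.5 kg for the cell (our guess, not a datasheet
# value), the specific energy is on the order of 10 Wh/kg -- well below
# lithium-ion batteries, which is exactly the energy-density gap the
# article goes on to describe.
specific_energy = energy_wh / 0.5
print(f"{energy_j/1000:.1f} kJ = {energy_wh:.1f} Wh, ~{specific_energy:.0f} Wh/kg")
```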

Ultracapacitors made by NessCap and others are just now starting to show up in products ranging from toys to experimental buses, basically as alternatives to batteries. The worldwide market isn't large; it was just US $38 million in 2002, the most recent year for which figures are available, according to the research firm Frost & Sullivan, in San Antonio. But NessCap and the handful of other makers of the largest ultracapacitors all have their sights set on the automotive market, which could do for their business what the iPod did for sales of MP3 songs. Frost & Sullivan, at least, is a believer; the company optimistically predicts total 2007 revenues for ultracapacitors of $355 million.

On paper, anyway, the idea is not far-fetched. In comparison with batteries, ultracapacitors can put out much more power for a given weight, can be charged in seconds rather than hours, and can function at more extreme temperatures. They're also more efficient, and they last much longer—in tests at the Idaho National Engineering and Environmental Laboratory, in Idaho Falls, upwards of 500 000 charge-discharge cycles have been recorded. Automotive traction batteries, for comparison, have much shorter lifetimes, particularly if they are discharged deeply.

Pondering the relative strengths of capacitors and batteries, Joel Schindall, associate director of the Laboratory for Electromagnetic and Electronic Systems at the Massachusetts Institute of Technology, in Cambridge, says: "In all ways other than energy density, an electric field is superior to chemistry for storing energy regeneratively, because it is completely reversible" and therefore intrinsically efficient and durable. Part of Schindall's research focuses on advanced materials that could be used as electrodes in future ultracapacitors.

Ultracapacitors are now establishing themselves in niches demanding a power source that can recharge quickly, be sealed into a system that has to last for years, or put out prodigious amounts of power in short bursts. Tokyo-based Ricoh Co. is using them in copier machines to store the energy needed to warm up the machines quickly, minimizing time spent in the energy-wasting standby mode. Makers of high-end car stereo amplifiers are using ultracapacitors to deliver the surges of power demanded by musical crescendos, without straining the vehicle's battery.

Another use is in solar tiles; a new twist in landscape architecture, they're used to guide pedestrians at night, by storing solar-generated electricity during the day and using it to power a small light-emitting diode panel after dark [see photo, "Bright Idea"]. Sealed into a walkway, wall, or staircase, these clear, rugged tiles have to last for a decade or more, working without fail night after night, withstanding subfreezing and sweltering temperatures alike—criteria only ultracapacitors can fulfill.

And then there are cars. The hybrid-electric vehicle, in its various forms, is poised for an increasing share of the automotive market in several parts of the world, including the United States. And ultracapacitors have already found their way into hybrids, albeit in a minor role: hardly noticed among the Toyota Prius's many celebrated technical breakthroughs is the fact that it uses ultracapacitors, from Panasonic, to power an electric-hydraulic pump in the mechanical braking system.

It's just the start of what some experts say ultra-capacitors will do for hybrids. For example, with their lightning-fast charge and discharge capability, ultracapacitors could handle the power surges needed for accelerating, allowing engineers to use a smaller battery pack in the vehicle (and eventually, perhaps, no battery pack at all). Shielded from high-current pulses, the batteries would last longer, too.

There are other intriguing possibilities, such as using the devices to give more or less ordinary cars "stop-and-go" operation, in which the gasoline engine is extinguished at stops and started instantly when the brake pedal is released. Ultracapacitors and a powerful starter motor would instantly jolt the engine back to life. Such vehicles would also make use of regenerative braking, converting into electricity the kinetic energy otherwise thrown off as heat in the brakes and storing that electricity in the ultracapacitors.
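A rough sense of the energy regenerative braking has to capture (the vehicle mass and speed here are illustrative assumptions, not figures from the article):

```python
def kinetic_energy_joules(mass_kg: float, speed_m_per_s: float) -> float:
    """Kinetic energy: E = 1/2 * m * v^2."""
    return 0.5 * mass_kg * speed_m_per_s ** 2

# A 1,500 kg car braking to a stop from 60 km/h (~16.7 m/s) sheds about
# 208 kJ (~58 Wh) of kinetic energy -- ordinarily lost as brake heat.
speed = 60 / 3.6   # km/h -> m/s
energy_j = kinetic_energy_joules(1500.0, speed)
print(f"{energy_j/1000:.0f} kJ = {energy_j/3600:.0f} Wh")

# Even a partial recapture arrives as a brief, very high-current pulse:
# exactly the load profile ultracapacitors handle well and batteries
# tolerate poorly.
```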

SO WHAT WILL IT TAKE FOR ULTRACAPACITORS to find a home under the hood? First, they've got to be a lot cheaper. Today, at roughly $9500 per kilowatthour, ultracapacitors are too expensive by a factor of five, at least, for cost-conscious carmakers. Second, automotive engineers would like to see the devices store more energy (as opposed to power) per unit weight, which would let the devices take over more of the energy-storage burden from batteries in future vehicles.

If NessCap and its competitors can achieve those goals and crack this market, the long-term future looks good. No one knows when, or even if, the fuel-cell car will become a mass-market reality—the estimates range from 10 to 30 years. But if it does happen, it's likely that ultracapacitors will be a big part of the reason. Fuel cells, by themselves, deliver power too sluggishly to briskly accelerate a full-size car. They must be mated to a faster-acting energy-storage device, and for this coupling, ultracapacitors are superior in many respects to batteries.

"Capacitors and fuel cells are made for each other," insists Andrew Burke, a specialist on ultracapacitors and a research engineer at the University of California, Davis. Honda, for example, used only ultracapacitors to supplement the fuel cell in its experimental FCX-V3 and FCX-V4 vehicles, several of which have been leased in California and Japan [see illustration, "Fueling Around"]. For these vehicles, Honda used its own ultracapacitors.

At first glance, at least, NessCap may seem an unlikely candidate to get ultracapacitors into a production car. NessCap's three main competitors—Maxwell Technologies in San Diego; Epcos in Munich, Germany; and Panasonic in Osaka, Japan—all have either deep-pocketed parent companies or revenue from other product lines with which to support their ultracapacitor development. (Panasonic ultracapacitors are manufactured by Matsushita Electronic Components Co., in Kadoma City, Japan.)

But what NessCap lacks in resources, it makes up in resourcefulness and determination. The company was founded in 2001 by Sun-wook Kim, a Korean entrepreneur and former research director at the Daewoo Group. Although Kim has a few other ventures, including a new organic-LED display factory in Singapore, NessCap is basically a stand-alone enterprise that will either succeed or fail on the strength of its ultracapacitors and on its executives' ingenuity in promoting them.

Certainly, the company is efficient: all of NessCap's 65 employees work in a boxy, yellowish, blue-trimmed building in a gritty suburb outside the Korean industrial city of Suwon. It houses NessCap's factory, offices, and R&D laboratories and its quality-control, testing, and shipping and receiving departments, as well as a subsidiary consumer-electronics spinoff and a warehouse. And though it's a small company, NessCap makes all its own electrodes for its capacitors. Among the company's closest competitors, only Panasonic can also make that claim, says NessCap's chairman, Inho Kim (who is not related to Sun-wook Kim).

This distinction is important, he says, because he expects electrode refinements to be the main source of future improvements in ultracapacitor performance—greater energy storage, for example—and decreases in cost. Electrode technology, Inho Kim estimates, determines "70 or 80 percent" of the capacitor's performance. "If you own the electrode-manufacturing technology, you can basically do anything," he argues.



GOAL: Cut the cost of ultracapacitors, which are superior to batteries in many respects and will almost certainly be used increasingly in hybrid-electric and fuel-cell cars


CENTER OF ACTIVITY: NessCap's facility in Yongin, South Korea


BUDGET: Approximately US $2 million


TO GET AN IDEA of where these improvements will come from, you've got to understand what separates an ultracapacitor from an ordinary capacitor (other than a whole lot of farads). First, consider the classic parallel-plate capacitor, a sandwich of two conductive plates separated by an insulator, or dielectric. When the plates are connected to the positive and negative terminals of a battery, opposite charges separate from each other and accumulate on the plates. Driven by the battery's voltage, an electric field permeates the dielectric. Associated with that field is a voltage that opposes the battery's voltage.

The field holds the accumulated, opposing charges apart; in doing so, it stores energy. So, unlike a battery, which stores energy in chemical form, a capacitor stores energy in an electric field; there are no moving parts and no chemical changes of state. To use a capacitor's energy, you just let its accumulated charges flow through a circuit, driven by the voltage associated with the field.

Capacitance is simply a measure of how much charge a capacitor can store for a given voltage. In mathematical terms, the capacitance equals the charge on the plates divided by the voltage difference between them. The charge, however, is proportional to the area of the plates; larger plates can hold more charge. And the voltage is related to the distance between the two plates; less separation allows more charge to accumulate for a given voltage. So to wring the most capacitance from a device, you want plates, or electrodes, that have a large area, and you want to separate those plates by a very small distance.
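The area-and-separation argument can be checked with the textbook formula C = eps0 * eps_r * A / d. The numbers below are illustrative, not from the article; in particular, the 1-nanometer gap merely stands in for the near-atomic charge separation of a double layer.

```python
# Illustrative numbers (not from the article). C = eps0 * eps_r * A / d:
# capacitance grows with plate area A and shrinks with plate gap d.
EPS0 = 8.854e-12  # vacuum permittivity, farads per meter

def capacitance(area_m2, gap_m, eps_r=1.0):
    """Ideal parallel-plate capacitance in farads."""
    return EPS0 * eps_r * area_m2 / gap_m

def stored_energy(cap_f, volts):
    """Energy in joules held in the capacitor's field: E = C * V^2 / 2."""
    return 0.5 * cap_f * volts ** 2

# An ordinary film capacitor: centimeter-scale plates, micrometer gap.
c_film = capacitance(area_m2=1e-4, gap_m=1e-6, eps_r=3.0)  # ~2.7 nF

# A double-layer device: the article's 375,000 m^2 of carbon surface,
# with ~1 nm standing in for the near-atomic charge separation.
c_ultra = capacitance(area_m2=375_000, gap_m=1e-9)  # ~3300 F

print(f"{c_film * 1e9:.1f} nF vs {c_ultra:.0f} F")
```

Roughly twelve orders of magnitude of capacitance, gained entirely by enlarging the area and shrinking the separation.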

In the early 1960s, at the once mighty research laboratories of Standard Oil of Ohio (Sohio), researchers discovered that two pieces of activated carbon immersed in a liquid electrolyte formed an amazingly good capacitor, owing mainly to the fact that the activated carbon's myriad microscopic nodules had enormous surface area. Sohio licensed the technology to NEC Corp., Tokyo, in 1971, but it was Panasonic that pushed the concept hardest in the 1980s, followed by various projects sponsored by the U.S. Department of Energy in the 1990s.

Since Sohio's initial experiments 40 years ago, the basic concept has not changed much. Coat two metal-foil electrodes with activated carbon and put a paper separator between them. Immerse the whole thing in a liquid electrolyte.

Attach wires from the terminals of a battery to the two metal foils, and electrons immediately start accumulating in the carbon coated on the foil attached to the battery's negative terminal. Those electrons, in turn, attract positive ions from the electrolyte into the pores of the carbon on that foil. In the other electrode, meanwhile, positive charges accumulate, attracting negative ions from the electrolyte into the pores of the carbon. Both kinds of ions migrate freely through the paper separator that prevents the electrodes from touching each other and conducting current.

Notice that this so-called capacitor is actually a pair of capacitors in series with each other. At each electrode, there is a separation of charges—electrons and positive ions at the negative electrode, and positive charges and negative ions at the positive electrode. So at each electrode there are two layers of charge, which is why ultracapacitors are also known as electric double-layer capacitors.

The activated carbon's huge surface area comes from the great porosity of its microscopic nodules. It enables the positive and the negative ions migrating through the electrolyte to find plenty of nooks and crannies to occupy as they insinuate themselves as closely as possible into the oppositely charged carriers inside the carbon. Basically, as an electrode material, the activated carbon provides exactly the characteristics you want for high capacitance: vast surface area and the opportunity for the oppositely charged carriers to get atomically close to each other.

The surface area of the carbon varies, but 1500 square meters per gram is not unusual. So for typical electrodes weighing 250 grams, the total area would be 375 000 square meters—or roughly 50 soccer fields.
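The arithmetic is easy to verify; the soccer-field area (a standard 105 m by 68 m pitch) is our assumption, not the article's.

```python
# Checking the article's surface-area arithmetic. The pitch size
# (105 m x 68 m, a standard full-size field) is our assumption.
specific_area_m2_per_g = 1500
electrode_mass_g = 250

total_area_m2 = specific_area_m2_per_g * electrode_mass_g
fields = total_area_m2 / (105 * 68)

print(total_area_m2, round(fields))  # 375000 m^2, about 53 fields
```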

THE TRICK, OF COURSE, is getting that carbon onto the metal foil as uniformly and efficiently as possible. It is the first step in NessCap's manufacturing process—and the first topic of discussion on a tour of the company's small but spotless factory. All manufacturing at NessCap goes on in a series of three brightly lit rooms, whose Kelly green floors are all marked with yellow lines to show visitors where to walk.

In big, shiny, stainless steel mixers—think Cuisinarts on steroids—technicians mix several types of activated carbon with water and with binding agents that cause the carbon-powder particles to stick to each other and to the long strips of aluminum foil electrodes. The resulting slurry gets coated onto one side of the aluminum, dried in a kiln, and then coated onto the other side. After more drying, the coated strips are run through a hot press to increase the density of their carbon layers and give those layers a uniform thickness.

In the next room, machines scratch the carbon off the aluminum precisely and at regular intervals to make places where electrical leads are attached. Then the same machine winds together two long strips of the carbon-coated metal—one will be the anode, the other the cathode—with a strip of paper in between. "No other such machine exists in the world," says Inho Kim proudly.

In the third room, the wound electrode-separator assemblies are dried in a kiln and inserted in aluminum cases that are filled with electrolyte and welded shut. The finished capacitors are tested in a room across the hall; every single capacitor is tested before leaving the factory.

Upstairs, NessCap's R&D department occupies a couple of rooms that take up about the same total area as a decent restaurant kitchen. As in an old-time apothecary, glass cabinets filled with bottles of powders and reagents line the walls.

Not surprisingly, ultracapacitor researchers are mainly interested in two things: electrolytes and carbon. In virtually all high-performance ultracapacitors, the electrolyte is acetonitrile. It's great stuff, in the one way that really matters: it has terrifically low ionic resistance, roughly 15 ohm-centimeters, and that means high power density. But when acetonitrile burns, it can release cyanide, a fact that makes automakers unhappy. "Everybody's looking for a replacement for acetonitrile," says Burke at UC Davis. Several organic compounds, notably propylene carbonate, show promise, but none at the moment has ionic resistance as low as acetonitrile. (Honda used propylene carbonate in its own ultracapacitors, in the FCX fuel-cell cars.)
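To see why ionic resistance matters so much, a rough sketch: for the same cell geometry, peak deliverable power into a matched load scales inversely with the electrolyte's resistivity. The geometry here is invented, the propylene-carbonate resistivity is a representative assumed value rather than a figure from the article, and the calculation ignores the electrode and contact resistances that dominate real cells.

```python
# Rough feel for why low ionic resistivity means high power density.
# Geometry is hypothetical (not NessCap's), and this counts only the
# electrolyte slab's contribution to internal resistance.
def electrolyte_resistance(resistivity_ohm_cm, gap_cm, area_cm2):
    """R = rho * L / A for the electrolyte between the electrodes."""
    return resistivity_ohm_cm * gap_cm / area_cm2

def matched_load_power(volts, esr_ohm):
    """Peak deliverable power into a matched load: P = V^2 / (4R)."""
    return volts ** 2 / (4 * esr_ohm)

area_cm2 = 1000.0  # wound-electrode area (hypothetical)
gap_cm = 0.003     # ~30-um separator gap (hypothetical)

r_acn = electrolyte_resistance(15.0, gap_cm, area_cm2)  # acetonitrile
r_pc = electrolyte_resistance(60.0, gap_cm, area_cm2)   # propylene carbonate (assumed)

ratio = matched_load_power(2.7, r_acn) / matched_load_power(2.7, r_pc)
print(ratio)  # quadruple the resistivity, quarter the peak power
```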

Still, it is the carbon challenge that most consumes ultracapacitor researchers now, because it is the key to the two main goals: getting costs down and improving the energy (as opposed to power) density. In a typical ultracapacitor, the electrode materials—the carbons, essentially—account for more than half the cost of the device, Sun-wook Kim says.

During a tour of the laboratory, NessCap's R&D director, Young-ho Kim, casually mentions that he's in the midst of running tests on no fewer than 10 mixtures of activated carbons, looking for a combination of low cost, high performance, and durability that has so far eluded ultracapacitor makers.

It all comes down to pores, he explains, drawing little circles on a piece of paper. You want pores that are all about 20 to 30 angstroms in diameter. Pores that are smaller than that aren't big enough to allow the ions to move in and out freely, which hurts performance. Lots of big pores, on the other hand, mean that the overall surface area is less than it should be, which also limits performance.

Ultracapacitor makers are working with two main types of carbon, phenolic-resin based and pitch based. Phenolic-resin carbons perform better and are the standard now. But the attraction of pitch-based carbons, which are derived from coke and are used in asphalt, is their low cost—about one-fifth to one-tenth that of phenolic-resin carbons.

The problem, Young-ho Kim says, is that it's harder to control the pore-size distributions in the pitch-based carbons, so they wind up with poorer characteristics. Their capacitance is usually about 30 percent less than that of the phenolic-resin-carbon devices, he explains. That means that 30 percent more material must be used, which, of course, detracts from the cost savings and makes the finished devices larger. Still, Sun-wook Kim is confident that work on the pitch-based carbons will be a key factor in reducing the overall cost per farad of the devices.

In the next breath, though, he dismisses the conventional wisdom that the carbons have to get down to $10 a kilogram to make ultracapacitors cost-competitive, from about $100 today (for the phenolic carbons). He insists that getting costs down will depend as much on manufacturing as on carbon prices. He points out that NessCap is now changing its manufacturing process to put its largest capacitors in cylindrical rather than rectangular cans. The simple shape change allows the electrode assembly to be wound more quickly, which in turn shaves more than 20 percent off the cost of making the capacitors, Inho Kim estimates.

WHILE NESSCAP AND ITS COMPETITORS FOCUS on getting the cost of the carbons down, a few other researchers are investigating exotic, pricey forms of carbon that could eliminate the one clear drawback of ultracapacitors—low energy density—and let them mount a serious challenge to batteries. Commercially available ultracapacitors generally can be counted on to store about 3 or 4 watthours per kilogram, Burke says. That's a far cry from the 60 or 70 Wh/kg typical of nickel-metal hydride batteries or the 110 to 130 Wh/kg delivered by lithium-ion batteries.
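A quick mass comparison makes those densities concrete; the values below are midpoints of the ranges Burke cites.

```python
# Mass needed to store one kilowatthour at each energy density
# (midpoints of the ranges cited in the article).
densities_wh_per_kg = {
    "ultracapacitor": 3.5,
    "NiMH battery": 65.0,
    "Li-ion battery": 120.0,
}

for name, density in densities_wh_per_kg.items():
    print(f"{name}: {1000 / density:,.0f} kg per kWh")
```

Storing a single kilowatthour takes well over a quarter of a tonne of today's ultracapacitors, versus under 10 kilograms of lithium-ion cells.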

An ultracapacitor with batterylike energy density would be almost irresistible to automakers, to say nothing of countless other manufacturers, says John M. Miller, a retired Ford Motor Co. researcher. With high enough energy density, ultracapacitors could reduce or even eliminate the need for traction batteries in a hybrid car. "It's a pivotal time for energy-storage systems," he concludes.

Tantalizing claims have surfaced of exotic carbon-based technologies that could boost the energy density of ultracapacitors 10- or even 100-fold—well into the realm of advanced batteries. But so far, these claims have not held up to independent scrutiny, say both Burke and Marshall Miller at UC Davis. An independent Japanese researcher, Michio Okamura, claims to have developed a carbon-based electrode material that he calls nanogate, which is nonporous and can deliver energy densities well above 50 Wh/kg. But solid, independent verification of his claims is not yet available, according to Burke.

At MIT's electromagnetic laboratory, Joel Schindall and lab director John Kassakian, with Ph.D. student Riccardo Signorelli, are leading a project to investigate the use of carbon nanotubes, the latest miracle material, in electrodes. They are creating materials in which the nanotubes grow out perpendicularly from a substrate, like hair on a piece of scalp. The nanotubes would become electrically charged, just as the activated carbon does, so they would attract oppositely charged ions in the electrolyte. The nanotubes would also be spaced so as to hold these ions, much as a sea anemone grips small sea creatures in its tentacles. The advantage is that this arrangement can in theory trap many more ions than even the pores of activated carbon—enough perhaps to raise the energy density of an ultracapacitor 100-fold, Schindall estimates.

So far, he and Signorelli have demonstrated technology that can grow the right kind of nanotubes and space them appropriately. By next summer, they hope to grow a patch of electrode big enough to test in an electrolyte, in order to assess its capacitance characteristics. If it works as well as their studies suggest, and if it can be easily manufactured—two big ifs—the dream of a near-ideal energy storage device will be that much closer to realization. "Suddenly, electrical energy storage turns on its head—potentially," Schindall says.

MEANWHILE, FOR NESSCAP and its competitors, the game is basically this: find enough niche markets to stay afloat until technology advances make ultracapacitors even more attractive and automotive markets develop. And NessCap isn't waiting for the niche markets to come to it. Last year, the company started its own consumer-electronics firm, Infinity Inc., which is selling everything from crank-powered radios to solar tiles, all outfitted with NessCap capacitors.

NessCap is also working with several other companies on niche automotive applications. A well-known courier company, for example, is about to start using NessCap's ultracapacitors in 200 of its delivery vans. As they go about dropping off packages in densely populated areas, these vans must stop and restart their engines as many as 200 times a day. The short distances between stops mean that the starter batteries can't recharge sufficiently and soon wear out. But the short distances are not a problem for ultracapacitors, which recharge in seconds and can easily store enough energy to fire up the engine. So the delivery-van system couples ultracapacitors for short-term energy storage with lead-acid batteries for longer-term storage.

Looking beyond these niche applications, Inho Kim has high hopes for "micro-hybrids," which would have a 12-V battery, as in a conventional car. Micro-hybrids are basically a very mild form of mild hybrid, propelled mainly by a gasoline engine but with a beefed-up electric starter motor fed by a small rack of ultracapacitors. The capacitors and motor provide the stop-and-go operation described above; the car could also make use of regenerative braking.

The guilt-free ultracapacitor-based roadster is probably more than a couple of years away. But a conventional car with a more reliable starter system, or even a micro-hybrid with an ultracapacitor boost, could be in your immediate future. If so, the revolution in energy storage will be well under way.


Kilofarad International, a trade group formed to promote the ultracapacitor industry, is an affiliate of the Electronic Components, Assemblies, and Materials Association. Its Web site is at http://www.kilofarad.org/.

Andrew Burke of the University of California, Davis, has written numerous technical articles on ultracapacitors. Several are available online, including a survey from 2000: http://repositories.cdlib.org/cgi/viewcontent.cgi?article=1050&context=itsdavis.

Menahem Anderman, president of the consulting firm Advanced Automotive Batteries, plans to release a report on ultracapacitors for automotive uses in February. You can order the US $7200 report at http://www.advancedautobat.com/Ultracapacitor/index.html


7) The Daintiest Dynamos

By Amit Lal & James Blanchard, IEEE Spectrum, September 2004 http://www.spectrum.ieee.org/WEBONLY/publicfeature/sep04/0904nuct1.html

By harvesting energy from radioactive specks, nuclear microbatteries could power tomorrow's microelectromechanical marvels—and maybe your cellphone, too

For several decades, electronic circuitry has been shrinking at a famously dizzying pace. Too bad the batteries that typically power those circuits have not managed to get much smaller at all.

In today's wrist-worn GPS receivers, matchbox-size digital cameras, and pocketable personal computers, batteries are a significant portion of the volume. And yet they don't provide nearly enough energy, conking out seemingly at the worst possible moment.

The reason is simple: batteries are still little cans of chemicals. They function in essentially the same way they did two centuries ago, when the Italian physicist Alessandro Volta sandwiched zinc and silver disks to create the first chemical battery, which he used to make a frog's leg kick.

Now, with technologists busily ushering in a new age of miniaturization based on microelectromechanical systems (MEMS), batteries have arrived at a critical juncture. MEMS are finding applications in everything from the sensors in cars that trigger air bags to injectable drug delivery systems to environmental monitoring devices. Many of these systems ideally have to work for long periods, and it is not always easy to replace or recharge their batteries. So to let these miniature machines really hit their stride, we'll need smaller, longer-lasting power sources.

For several years our research groups at Cornell University and the University of Wisconsin-Madison have been working on a way around this power-source roadblock: harvesting the incredible amount of energy released naturally by tiny bits of radioactive material.

The microscale generators we are developing are not nuclear reactors in miniature, and they don't involve fission or fusion reactions. All energy comes from high-energy particles spontaneously emitted by radioactive elements. These devices, which we call nuclear microbatteries, use thin radioactive films that pack in energy at densities thousands of times greater than those of lithium-ion batteries [see table, "Energy Content"].

A speck of a radioisotope like nickel-63 or tritium, for example, contains enough energy to power a MEMS device for decades, and to do it safely. The particles these isotopes emit, unlike more energetic particles released by other radioactive materials, are blocked by the layer of dead skin that covers our bodies. They penetrate no more than 25 micrometers in most solids or liquids, so in a battery they could safely be contained by a simple plastic package [see sidebar, "Not All Radioisotopes Are Equal."]

Our current prototypes are still relatively big, but like the first transistors they will get smaller, going from macro- to microscale devices. And if the initial applications powering MEMS devices go well, along with the proper packaging and safety considerations, lucrative uses in handheld devices could be next. The small nuclear batteries may not be able to provide enough electric current for a cellphone or a PDA, but our experiments so far suggest that several of these nuclear units could be used to trickle charges into the conventional chemical rechargeable batteries used in handheld devices. Depending on the power consumption of these devices, this trickle charging could enable batteries to go for months between recharges, rather than days, or possibly even to avoid recharges altogether.

"IT IS A STAGGERINGLY SMALL WORLD THAT IS BELOW," said physicist Richard P. Feynman in his famous 1959 talk to the American Physical Society, when he envisioned that physical laws allowed for the fabrication of micro- and nanomachines and that one day we would be able to write the entire Encyclopaedia Britannica on the head of a pin.

Feynman's vision has finally begun to materialize, thanks to ever more sophisticated microelectronics. Micro- and nanoscale machines are poised to become a multibillion-dollar market as they are incorporated in all kinds of electronic devices. Among the revolutionary applications in development are ultradense memories capable of storing hundreds of gigabytes in a fingernail-size device, micromirrors for enhanced displays and optical communications equipment, and highly selective RF filters to reduce cellphone size and improve the quality of calls.

But, again, at very small scales, chemical batteries can't provide enough juice to power these micromachines. As you shrink such a battery, the stored energy falls with the cube of its linear dimensions: reduce each side of a cubic battery by a factor of 10 and you reduce the volume—and therefore the energy you can store—by a factor of 1000. In fact, researchers developing sensors the size of a grain of sand had to attach them to batteries they couldn't make smaller than a shirt button.
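The factor-of-1000 example is just volume scaling, which is easy to state directly (an idealization: it assumes the cell's chemistry and packaging shrink uniformly).

```python
# Stored chemical energy tracks volume, so it falls with the cube of
# the linear shrink factor (idealized: uniform chemistry and packaging).
def energy_scale(shrink_factor):
    """Fraction of stored energy left after dividing each side of a
    cubic battery by shrink_factor."""
    return shrink_factor ** -3.0

print(energy_scale(10))  # 0.001
```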

IN THE QUEST TO BOOST MICROSCALE POWER GENERATION, several groups have turned their efforts to well-known energy sources, namely hydrogen and hydrocarbon fuels such as propane, methane, gasoline, and diesel. Some groups are developing microfuel cells that, like their macroscale counterparts, consume hydrogen to produce electricity. Others are developing on-chip combustion engines, which actually burn a fuel like gasoline to drive a minuscule electric generator.

There are three major challenges for these approaches. One is that these fuels have relatively low energy densities, only about five to 10 times that of the best lithium-ion batteries. Another is the need to keep replenishing the fuel and eliminating byproducts. Finally, the packaging to contain the liquid fuel makes it difficult to significantly scale down these tiny fuel cells and generators.

The nuclear microbatteries we are developing won't require refueling or recharging and will keep producing power for roughly the half-life of the radioactive source, at which point the output will have fallen to half its initial value. And even though their efficiency in converting nuclear to electrical energy isn't high—about 4 percent for one of our prototypes—the extremely high energy density of the radioactive materials makes it possible for these microbatteries to produce relatively significant amounts of power.

For example, with 10 milligrams of polonium-210 (contained in about 1 cubic millimeter), a nuclear microbattery could produce 50 milliwatts of electric power for more than four months (the half-life of polonium-210 is 138 days). With that level of power, it would be possible to run a simple microprocessor and a handful of sensors for all those months.
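Taking the article's 50-milliwatt figure at face value, and assuming the electrical output simply tracks the exponential decay of the source, the power profile looks like this:

```python
# Output of the hypothetical polonium-210 microbattery over time,
# assuming electrical output tracks the source's exponential decay.
HALF_LIFE_DAYS = 138.0  # polonium-210
P0_MILLIWATTS = 50.0    # initial output cited in the article

def power_mw(days):
    """Electrical output in milliwatts after the given number of days."""
    return P0_MILLIWATTS * 2.0 ** (-days / HALF_LIFE_DAYS)

for day in (0, 69, 138, 276):
    print(f"day {day}: {power_mw(day):.1f} mW")
```

After one half-life (138 days, about four and a half months) the battery still delivers 25 milliwatts.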

And the conversion efficiency won't be stuck at 4 percent forever. Beginning this past July we started working to boost the efficiency to 20 percent, as part of a new Defense Advanced Research Projects Agency program called Radio Isotope Micro-power Sources.

Space agencies such as NASA in the United States have long recognized the extraordinary potential of radioactive materials for generating electricity. NASA has been using radioisotope thermoelectric generators, or RTGs, since the 1960s in dozens of missions, like Voyager and, more recently, the Cassini probe, now in orbit around Saturn. Space probes like these travel too far away from the sun to power themselves with photovoltaic arrays.

RTGs convert heat into electricity through a process known as the Seebeck effect: when you heat one end of a metal bar, electrons in this region will have more thermal energy and flow to the other end, producing a voltage across the bar. Most of NASA's washing-machine-size RTGs use plutonium-238, whose high-energy radiation can produce enormous heat.

But as it turns out, RTGs don't scale down well. At the diminutive dimensions of MEMS devices, the ratio between an object's surface and its volume gets very high. This relatively large surface makes it difficult to sufficiently reduce heat losses and maintain the temperatures necessary for RTGs to work. So we had to find other ways of converting nuclear into electric energy.

ONE OF THE MICROBATTERIES WE DEVELOPED early last year directly converted the high-energy particles emitted by a radioactive source into an electric current. The device consisted of a small quantity of nickel-63 placed near an ordinary silicon p-n junction—a diode, basically. As the nickel-63 decayed, it emitted beta particles, which are high-energy electrons that spontaneously fly out of the radioisotope's unstable nucleus. The emitted beta particles ionized the diode's atoms, creating paired electrons and holes that are separated in the vicinity of the p-n interface. These separated electrons and holes streamed away from the junction, producing the current.

Nickel-63 is ideal for this application because its emitted beta particles travel a maximum of 21 µm in silicon before being absorbed; if the particles were more energetic, they would travel longer distances, thus escaping the battery. The device we built was capable of producing about 3 nanowatts with 0.1 millicurie of nickel-63, a small amount of power but enough for applications such as nanoelectronic memories and the simple processors for environmental and battlefield sensors that some groups are currently developing.
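For scale, one can estimate the total kinetic power carried by the betas from a 0.1-millicurie source. The average beta energy of nickel-63 (about 17.4 keV) is a literature value we supply, not a figure from the article; the comparison suggests the 3-nanowatt electrical output is a fraction of the roughly 10 nanowatts the source emits.

```python
# Total kinetic power of the betas from 0.1 millicurie of nickel-63.
# The average beta energy (~17.4 keV) is a literature value, not a
# figure from the article.
CURIE_TO_BECQUEREL = 3.7e10  # decays per second per curie
EV_TO_JOULE = 1.602e-19

activity_bq = 0.1e-3 * CURIE_TO_BECQUEREL  # 3.7e6 decays per second
avg_beta_ev = 17.4e3

source_power_w = activity_bq * avg_beta_ev * EV_TO_JOULE
print(f"{source_power_w * 1e9:.0f} nW of beta power emitted")
```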

The new types of microbatteries we are working on now can generate substantially more power. These units produce electricity indirectly, like minute generators. Radiation from the sample is converted first to mechanical energy and then to oscillating pulses of electric energy. Even though the energy has to go through the intermediate, mechanical phase, the batteries are no less efficient; they tap a significant fraction of the kinetic energy of the emitted particles for conversion into mechanical energy. By releasing this energy in brief pulses, they provide much more instantaneous power than the direct-conversion approach.

For these batteries, which we call radioactive piezoelectric generators, the radioactive source is a 4-square-millimeter thin film of nickel-63 [see illustration, "Power From Within"]. On top of it, we cantilever a small rectangular piece of silicon, its free end able to move up and down. As the electrons fly from the radioactive source, they travel across the air gap and hit the cantilever, charging it negatively. The source, which is positively charged, then attracts the cantilever, bending it down.

A piece of piezoelectric material bonded to the top of the silicon cantilever bends along with it. The mechanical stress of the bend unbalances the charge distribution inside the piezoelectric crystal structure, producing a voltage in electrodes attached to the top and bottom of the crystal.

After a brief period—whose length depends on the shape and material of the cantilever and the initial size of the gap—the cantilever comes close enough to the source to discharge the accumulated electrons by direct contact. The discharge can also take place through tunneling or gas breakdown. At that moment, electrons flow back to the source, and the electrostatic attractive force vanishes. The cantilever then springs back and oscillates like a diving board after a diver jumps, and the recurring mechanical deformation of the piezoelectric plate produces a series of electric pulses.

The charge-discharge cycle of the cantilever repeats continuously, and the resulting electric pulses can be rectified and smoothed to provide direct-current electricity. Using this cantilever-based power source, we recently built a self-powered light sensor [see photo, "It's Got the Power"]. The device contains a simple processor connected to a photodiode that detects light variations.
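The cycle behaves like a relaxation oscillator: a tiny, steady beta current charges the cantilever until a pull-in threshold is reached, then contact discharges it and the cycle restarts. A toy model, with entirely hypothetical numbers:

```python
# Toy relaxation-oscillator model of the cantilever (all numbers
# hypothetical): a steady beta current charges the gap until a pull-in
# charge threshold is reached, then contact discharges it.
ELECTRON_CHARGE = 1.602e-19  # coulombs

def cycle_period_s(captured_electrons_per_s, pull_in_charge_c):
    """Seconds per charge-discharge cycle for a steady beta current."""
    beta_current_a = captured_electrons_per_s * ELECTRON_CHARGE
    return pull_in_charge_c / beta_current_a

# e.g. 1e7 captured electrons per second and a 1.6-picocoulomb threshold:
period = cycle_period_s(1e7, 1.6e-12)
print(f"about {period:.1f} s per pulse")
```

Stiffer cantilevers, wider gaps, or weaker sources lengthen the period, which is why real devices can pulse anywhere from many times a second to once in minutes.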


Nuclear batteries can pack in energy at densities THOUSANDS OF TIMES greater than those of lithium-ion batteries


Also using the cantilever system, we developed a pressure sensor that works by "sensing" the gas molecules in the gap between the cantilever and the source. The higher the ambient pressure, the more gas molecules in the gap. As a result, it is more difficult for electrons to reach and charge the cantilever. Hence, by tracking changes in the cantilever's charging time, the sensor even detects millipascal variations in a low-pressure environment like a vacuum chamber.

To get the measurements at a distance, we made the cantilever work as an antenna and emit radio signals, which we could receive meters away—in this application the little machine was "radio active" in more ways than one. The cantilever, built from a material with a high dielectric constant, had metal electrodes on its top and bottom. An electric field formed inside the dielectric as the bottom electrode charged. When it discharged, a charge imbalance appeared in the electrodes, making the electric field propagate along the dielectric material. The cantilever thus acted like an antenna that periodically emitted RF pulses, the interval between pulses varying according to the pressure.

What we'd like to do now is add a few transistors and other electronic components to this system so that it can not only send simple pulses but also modulate signals to carry information. That way, we could make MEMS-based sensors that could communicate with each other wirelessly without requiring complex, energy-demanding communications circuitry.

NUCLEAR MICROBATTERIES MAY ULTIMATELY CHANGE the way we power many electronic devices. The prevalent power source paradigm is to have all components in a device's circuitry drain energy from a single battery. Here's another idea: give each component—sensor, actuator, microprocessor—its own nuclear microbattery. In such a scheme, even if a main battery is still necessary for more power-hungry components, it could be considerably smaller, and the multiple nuclear microbatteries could run a device for months or years, rather than days or hours.

One example is the RF filters in cellphones, which now take up a lot of space in handsets. Researchers are developing MEMS-based RF filters with better frequency selectivity that could improve the quality of calls and make cellphones smaller. These MEMS filters, however, may require relatively high dc voltages, and getting these from the main battery would require complicated electronics. Instead, a nuclear microbattery designed to generate the required voltage—in the range of 10 to 100 volts—could power the filter directly and more efficiently.

Another application might be to forgo the electrical conversion altogether and simply use the mechanical energy. For example, researchers could use the motion of a cantilever-based system to drive MEMS engines, pumps, and other mechanical devices. A self-powered actuator could be used, for instance, to move the legs of a microscopic robot. The actuator's motion—and the robot's tiny steps—would be adjusted according to the charge-discharge period of the cantilever and could vary from hundreds of times every second to once per hour, or even once per day.

THE FUTURE OF NUCLEAR MICROBATTERIES depends on several factors, such as safety, efficiency, and cost. If we keep the amount of radioactive material in the devices small, they emit so little radiation that they can be safe with only simple packaging. At the same time, we have to find ways of increasing the amount of energy that nuclear microbatteries can produce, especially as the conversion efficiency begins approaching our targeted 20 percent. One possibility for improving the cantilever-based system would be to scale up the number of cantilevers by placing several of them horizontally, side by side. In fact, we are already developing an array about the size of a postage stamp containing a million cantilevers. These arrays could then be stacked to achieve even greater integration.

Another major challenge is producing inexpensive radioisotope power supplies that can be easily integrated into electronic devices. For example, in our experimental systems we have been using 1 millicurie of nickel-63, which costs about US $25—too much for a mass-produced device. A potentially cheaper alternative would be tritium, which some nuclear reactors produce in huge quantities as a byproduct. There's no reason the amount of tritium needed for a microbattery couldn't cost just a few cents.

Once these challenges are overcome, a promising use for nuclear microbatteries would be in handheld devices like cellphones and PDAs. As mentioned above, the nuclear units could trickle-charge conventional batteries. Our one-cantilever system generated pulses with a peak power of 100 milliwatts; with many more cantilevers, and by using the energy of pulses accumulated over periods of hours, a nuclear battery could inject a significant amount of current into the handheld's battery.

How much that current could increase the device's operation time depends on many factors. For a cellphone used for hours every day or for a power-hungry PDA, the nuclear energy boost won't help much. But for a cellphone used two or three times a day for a few minutes, it could mean the difference between recharging the phone every week or so and recharging it once a month. And for a simple PDA used mainly for checking schedules and phone numbers, the energy boost might keep the batteries perpetually charged for as long as the nuclear material lasts.
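The week-versus-month scenario can be sanity-checked with a little arithmetic: given an assumed battery capacity, the gap between the two recharge intervals implies the average trickle power the nuclear array would have to supply. The 3000 mWh capacity below is a hypothetical figure for a 2005-era handset; only the two recharge intervals come from the text:

```python
# Sanity check on the week-vs-month claim. Battery capacity is an
# assumed figure; the 7- and 30-day intervals come from the scenario.

BATTERY_MWH = 3000.0            # assumed handset battery capacity
HOURS_PER_DAY = 24.0

def avg_drain_mw(days_per_charge: float) -> float:
    """Average power draw (mW) that empties the battery in that time."""
    return BATTERY_MWH / (days_per_charge * HOURS_PER_DAY)

weekly = avg_drain_mw(7)        # lightly used phone, no nuclear boost
monthly = avg_drain_mw(30)      # same phone stretched to a month
print(f"implied average trickle ~ {weekly - monthly:.0f} mW")
```

Under these assumptions the scenario implies a steady trickle of roughly 14 mW, which gives a feel for how far beyond a single cantilever's output a practical array would have to scale.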

Nuclear microbatteries won't replace chemical batteries. But they're going to power a whole new range of gadgetry, from nanorobots to wireless sensors. Feynman's "staggeringly small world" awaits.


See the Nuclear Solutions website for details of the "betavoltaic battery" invented by Dr. Paul Brown. Brown's patents include "Layered Metal Foil Semiconductor Power Device" (#6,118,204), "Isotopic Semiconductor Batteries" (#6,236,812), and "Apparatus for Direct Conversion of Radioactive Decay Energy to Electrical Energy" (#4,835,433), all searchable at www.uspto.gov.

Other nuclear beta particle battery patents include:


Become a Member of www.IntegrityResearchInstitute.org during our 2005 Membership Drive and receive two (2) free bonus products, besides the framable certificate, newsletters, annual report, calendar and catalog. See our website for more details. Join today!
