Low Tech Mag
Before the Industrial Revolution, people adjusted their energy demand to a variable energy supply. Our global trade and transport system -- which relied on sailboats -- operated only when the wind blew, as did the mills that supplied our food and powered many manufacturing processes.
The same approach could be very useful today, especially when improved by modern technology. In particular, factories and cargo transportation -- such as ships and even trains -- could be operated only when renewable energy is available. Adjusting energy demand to supply would make switching to renewable energy much more realistic than it is today.
Stoneferry (detail), a painting by John Ward of Hull.
Renewable Energy in Pre-Industrial Times
Before the Industrial Revolution, both industry and transportation were largely dependent on intermittent renewable energy sources. Water mills, windmills and sailing boats have been in use since Antiquity, but the Europeans brought these technologies to full development from the 1400s onwards.
At their peak, right before the Industrial Revolution took off, there were an estimated 200,000 wind powered mills and 500,000 water powered mills in Europe. Initially, water mills and windmills were mainly used for grinding grain, a laborious task that had been done by hand for many centuries, first with the aid of stones and later with a rotary hand mill.
"Een zomers landschap" ("A summer landscape"), a painting by Jan van Os.
However, water and wind powered mills were soon adapted to industrial processes like sawing wood, polishing glass, making paper, boring pipes, cutting marble, slitting metal, sharpening knives, crushing chalk, grinding mortar, making gunpowder, minting coins, and so on. [1-3] Wind and water mills also processed a host of agricultural products: they pressed olives, hulled barley and rice, ground spices and tobacco, and crushed linseed, rapeseed and hempseed for cooking and lighting oil.
Even though it relied on intermittent wind sources, international trade was crucial to many European economies before the Industrial Revolution.
So-called 'industrial water mills' had been used in Antiquity and were widely adopted in Europe by the fifteenth century, but 'industrial windmills' appeared only in the 1600s in the Netherlands, a country that took wind power to the extreme. The Dutch even applied wind power to reclaim land from the sea, and the whole country was kept dry by intermittently operating wind mills until 1850. [1-3]
Abraham Storck: A river landscape with fishermen in rowing boats, 1679.
The use of wind power for transportation – in the form of the sailboat – also boomed from the 1500s onwards, when Europeans 'discovered' new lands. Wind powered transportation supported a robust, diverse and ever expanding international trading system in bulk goods (such as grain, wine, wood, metals, ceramics, and preserved fish), luxury items (such as precious metals, furs, spices, ivory, silks, and medicines), and human slaves.
Even though it relied on intermittent wind sources, international trade was crucial to many European economies. For example, the Dutch shipbuilding industry, which was centred around some 450 wind-powered saw mills, imported virtually all its naval stores from the Baltic: wood, tar, iron, hemp and flax. Even the food supply could depend on wind-powered transportation. Towards the end of the 1500s, the Dutch imported two thousand shiploads of grain per year from Gdansk.  Sailboats were also important for fishing.
Dealing with Intermittency in Pre-Industrial Times
Although variable renewable energy sources were critical to European society for some 500 years before fossil fuels took over, there were no chemical batteries, no electric transmission lines, and no balancing capacity of fossil fuel power plants to deal with the variable energy output of wind and water power. So, how did our ancestors deal with the large variability of renewable power sources?
To some extent, they were counting on technological solutions to match energy supply to energy demand, just as we do today. The water level in a river depends on the weather and the seasons. Boat mills and bridge mills were among the earliest technological fixes to this problem. They went up and down with the water level, which allowed them to maintain a more predictable operating regime. [1-2]
To some extent, our ancestors were counting on technological solutions to match energy supply to energy demand, just as we do today.
However, water power could also be stored for later use. Starting in the Middle Ages, dams were built to create mill ponds, a form of energy storage similar to today's hydropower reservoirs. The storage reservoirs evened out the flow of streams and ensured that water was available when it was needed.
But rivers could still dry out or freeze over for prolonged periods, rendering dams and adjustable water wheels useless. Furthermore, for those who relied on windmills, no such technological fixes were available. [6-7]
A technological solution to the intermittency of both water and wind power was the 'beast mill' or 'horse mill'. In contrast to wind and water power, horses, donkeys or oxen could be counted on to supply power whenever it was required. However, beast mills were expensive and energy inefficient to operate: feeding a horse required a land area capable of feeding eight humans. Consequently, the use of animal power in large-scale manufacturing processes was rare. Beast mills were mostly used for milling grain or as a power source in small workshop settings.
Obviously, beast mills were not a viable backup power source for sailing ships either. In principle, sailing boats could revert to human power when wind was not available. However, a sufficiently large rowing crew needed extra water and food, which would have limited the range of the ship, or its cargo capacity. Therefore, rowing was mainly restricted to battleships and smaller boats.
Adjusting Demand to Supply: Factories
Because of their limited technological options for dealing with the variability of renewable energy sources, our ancestors mainly resorted to a strategy that we have largely forgotten about: they adapted their energy demand to the variable energy supply. In other words, they accepted that renewable energy was not always available and acted accordingly. For example, windmills and sailboats were simply not operated when there was no wind.
Painting: Mills in the Westzijderveld near Zaandam, a painting by Claude Monet.
In industrial windmills, work was done whenever the wind blew, even if that meant that the miller had to work night and day, taking only short naps. For example, a document reveals that at the Union Mill in Cranbrook, England, the miller once had only three hours' sleep during a windy period lasting 60 hours. A 1957 book about windmills, partly based on interviews with the last surviving millers, reveals the urgency of using wind when it was available:
Often enough when the wind blew in autumn, the miller would work from Sunday midnight to Tuesday evening, Wednesday morning to Thursday night, and Friday morning to Saturday midnight, taking only a few snatches of sleep; and a good windmiller always woke up in bed when the wind rose, getting up in the middle of the night to set the mill going, because the wind was his taskmaster and must be taken advantage of whenever it blew. Many a village has at times gone short of wheaten bread because the local mill was becalmed in a waterless district before the invention of the steam engine; and barley-meal bread or even potato bread had to suffice in the crisis of a windless autumn. 
In earlier, more conservative times, the miller was punished for working on Sunday, but he didn't always care. When a protest against Sunday work was made to Mr. Wade of Wicklewood towermill, Norfolk, he retorted: "If the Lord is good enough to send me wind on a Sunday, I'm going to use it".  On the other hand, when there was no wind, millers did other work, like maintaining their machinery, or took time off. Noah Edwards, the last miller of Arkley tower mill, Hertfordshire, would “sit on the fan stage of a fine evening and play his fiddle”. 
Adjusting Demand to Supply: Sailboats
A similar approach existed for overseas travel, using sail boats. When there was no wind, sailors stayed ashore, maintained and repaired their ships, or did other things. They planned their trips according to the seasons, making use of favourable seasonal winds and currents. Winds at sea are not only much stronger than those over land, but also more predictable.
Sailors planned their trips according to the seasons, making use of favourable seasonal winds and currents.
The lower atmosphere of the planet is encircled by six major wind belts, three in each hemisphere. From Equator to poles these 'prevailing winds' are the trade winds, the westerlies, and the easterlies. The six wind belts move north in the northern summer and south in the northern winter. Five major sea current gyres are correlated with the dominant wind flows.
The Maas at Dordrecht, a painting by Aelbert Cuyp, 1660.
Gradually, European sailors deciphered the global pattern of winds and currents and took full advantage of them to establish new sea routes all over the world. Before 1500, Christopher Columbus had already figured out that the combination of trade winds and westerlies enabled a round-trip route for sailing ships crossing the Atlantic Ocean.
The trade winds reach their northernmost latitude at or after the end of the northern summer, bringing them within reach of Spain and Portugal. These summer trade winds made it easy to sail from Southern Europe to the Caribbean and South America, because the wind was blowing in that direction along the route.
Wind map of the Atlantic, September 9, 2017. Source: Windy.
Taking the same route back would be nearly impossible. Instead, Iberian sailors first sailed north to catch the westerlies, which reach their southernmost location at or after the end of winter and carry ships straight back to Southern Europe. In the 1560s, Basque explorer Andrés de Urdaneta discovered a similar round-trip route in the Pacific Ocean.
The use of favourable winds made travel times of sailboats relatively reliable. The fastest Atlantic crossing was 21 days, the slowest 29 days.
The use of favourable winds made the travel times of sailboats relatively predictable. Ocean Passages for the World mentions that the typical passage time from New York to the English Channel for a mid-19th to early 20th century sailing vessel was 25 to 30 days. From 1818 to 1832, the fastest crossing took 21 days, the slowest 29 days.
The journey from the English Channel to New York took 35-40 days in winter and 40-50 days in summer. To Cape Town, Melbourne, and Calcutta took 50-60 days, 80-90 days, and 100-120 days, respectively.  These travel times are double to triple those of today's container ships, which vary their speed based on oil prices and economic demand.
Old Approach, New Technology
As a strategy to deal with variable energy sources, adjusting energy demand to renewable energy supply is just as valuable a solution today as it was in pre-industrial times. However, this does not mean that we need to go back to pre-industrial means. We have better technology available, which makes it much easier to synchronise the economic demands with the vagaries of the weather.
Shipping in a calm, a painting by Charles Brooking, first half 18th century.
In the following paragraphs, I investigate in more detail how industry and transportation could be operated on variable energy sources alone, and demonstrate how new technologies open new possibilities. I then conclude by analysing the effects on consumers, workers, and economic growth.
On a global scale, industrial manufacturing accounts for nearly half of all energy end use. Many mechanical processes that were run by windmills are still important today, such as sawing, cutting, boring, drilling, crushing, hammering, sharpening, polishing, milling, turning, and so on. All these production processes can be run with an intermittent power supply.
The same goes for food production processes (mincing, grinding or hulling grains, pressing olives and seeds), mining and excavation (picking and shovelling, rock and ore crushing), or textile production (fulling cloth, preparing fibres, knitting and weaving). In all these examples, intermittent energy input does not affect the quality of the production process, only the production speed.
Many production processes are not strongly disadvantaged by an intermittent power supply.
Running these processes on variable power sources has become a lot easier than it was in earlier times. For one thing, wind power plants are now completely automated, while the traditional windmill required constant attention. 
Image: “Travailler au moulin / Werken met molens”, Jean Bruggeman, 1996.
However, not only are wind turbines (and water turbines) more practical and powerful than in earlier times, we can now make use of solar energy to produce mechanical energy. This is usually done with solar photovoltaic (PV) panels, which convert sunlight into electricity to run an electric motor.
Consequently, a factory that requires mechanical energy can be run on a combination of wind and solar power, which increases the chances that there's sufficient energy to run its machinery. The ability to harvest solar energy is important because it's by far the most widely available renewable power source. Most of the potential capacity for water power is already taken. 
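As a rough illustration of this demand-adjustment strategy, the sketch below counts the hours in which a factory's machinery could run at full load on combined wind and solar power. All hourly figures and the load rating are made-up assumptions, not data from the article:

```python
# Sketch: run factory machinery only when combined wind + solar output
# covers the load. Hourly power values (kW) are illustrative assumptions.
def hours_of_operation(wind_kw, solar_kw, machine_load_kw):
    """Count the hours in which the factory can run at full load."""
    return sum(
        1 for w, s in zip(wind_kw, solar_kw)
        if w + s >= machine_load_kw
    )

# Hypothetical half-day: wind strongest at night, sun peaking at midday.
wind = [30, 35, 40, 20, 10, 5, 5, 10, 15, 20, 25, 30]
solar = [0, 0, 0, 5, 20, 40, 50, 45, 30, 10, 0, 0]

print(hours_of_operation(wind, solar, 40))  # -> 5 hours on this hypothetical day
```

Combining the two sources matters: neither wind nor solar alone would cross the 40 kW threshold during most of those five hours.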
Another crucial difference with pre-industrial times is that we can apply the same strategy to basic industrial processes that require thermal energy instead of mechanical energy. Heat dominates industrial energy use, for instance, in the making of chemicals or microchips, or in the smelting of metals.
In pre-industrial times, manufacturing processes that required thermal energy were powered by the burning of biomass, peat and/or coal. The use of these energy sources caused grave problems, such as large-scale deforestation, loss of land, and air pollution. Although solar energy was used in earlier times, for instance, to evaporate salt along seashores, to dry crops for preservation, or to sunbake clay bricks, its use was limited to processes that required relatively low temperatures.
We can apply the same strategy to basic industrial processes that require thermal energy instead of mechanical energy, which was not possible before the Industrial Revolution.
Today, renewable energy other than biomass can be used to produce thermal energy in two ways. First, we can use wind turbines, water turbines or solar PV panels to produce electricity, which can then be used to produce heat by electrical resistance. This was not possible in pre-industrial times, because there was no electricity.
Augustin Mouchot's solar powered printing press, 1882.
Second, we can apply solar heat directly, using water-based flat plate collectors or evacuated tube collectors, which collect solar radiation from all directions and can reach temperatures of 120 degrees Celsius. We also have solar concentrator collectors, which track the sun, concentrate its radiation, and can generate temperatures high enough to melt metals or produce microchips and solar cells. These solar technologies only became available in the late 19th century, following advances in the manufacturing of glass and mirrors.
Limited Energy Storage
Running factories on variable power sources doesn't exclude the use of energy storage or a backup of dispatchable power plants. Adjusting demand to supply should take priority, but other strategies can play a supportive role. First, energy storage or backup power generation capacity could be useful for critical production processes that can't be halted for prolonged periods, such as food production.
Second, short-term energy storage is also useful to run production processes that are disadvantaged by an intermittent power supply.  Third, short-term energy storage is crucial for computer-controlled manufacturing processes, allowing these to continue operating during short interruptions in the power supply, and to shut down safely in case of longer power cuts. 
Binnenshaven Rotterdam, a painting by Jongkind Johan Berthold (1857)
Compared to pre-industrial times, we now have more and better energy storage options available. For example, we can use biomass as a backup power source for mechanical energy production, something pre-industrial millers could not do – before the arrival of the steam engine, there was no way of converting biomass into mechanical energy.
Before the arrival of the steam engine, there was no way of converting biomass into mechanical energy.
We also have chemical batteries, and we have low-tech systems like flywheels, compressed air storage, hydraulic accumulators, and pumped storage plants. Heat energy can be stored in well-insulated water reservoirs (up to 100 degrees Celsius) or in salt, oil or ceramics (for much higher temperatures). All of these storage solutions fall short, in one way or another, when tasked with storing a large share of renewable energy production. However, they can be very useful on a smaller scale in support of demand adjustment.
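The capacity of a hot water reservoir follows directly from the heat equation Q = m·c·ΔT. A minimal sketch, with an assumed tank size and temperature swing:

```python
# Sketch: how much heat a well-insulated water tank can hold, using
# Q = m * c * delta_T. Tank size and temperature swing are illustrative.
def stored_heat_kwh(volume_m3, delta_t_c):
    """Thermal energy stored in water over a given temperature swing."""
    mass_kg = volume_m3 * 1000          # 1 m3 of water is about 1000 kg
    specific_heat = 4186                # J/(kg*K) for water
    joules = mass_kg * specific_heat * delta_t_c
    return joules / 3.6e6               # convert J to kWh

# A hypothetical 10 m3 tank heated from 40 to 95 degrees Celsius:
print(round(stored_heat_kwh(10, 55)))  # -> 640 kWh
```

For scale, 640 kWh is roughly what a small wind turbine produces in a good day, which is why water tanks are attractive as short-term buffers for heat demand.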
The New Age of Sail
Cargo transportation is another candidate for using renewable power when it's available. This is most obvious for shipping. Ships still carry about 90 percent of the world's trade, and although shipping is the most energy efficient mode of transport per tonne-kilometre, total energy use is high and today's oil powered vessels are extremely polluting.
Image by Arne List [CC BY-SA 2.0], via Wikimedia Commons
A common high-tech idea is to install wind turbines off-shore, convert the electricity they generate into hydrogen, and then use that hydrogen to power seagoing vessels. However, it's much more practical and energy efficient to use wind to power ships directly, like we have done for thousands of years. Furthermore, oil powered cargo ships often float idle for days or even weeks before they can enter a port or leave it, which makes the relative unpredictability of sailboats less problematic.
It's much more practical and energy efficient to use wind to power ships directly.
As with industrial manufacturing, we now have much better technology and knowledge available to base a worldwide shipping industry on wind power alone. We have new materials to build better and longer-lasting ships and sails, we have more accurate navigation and communication instruments, we have more predictable weather forecasts, we can make use of solar panels for backup engine power, and we have more detailed knowledge about winds and currents.
Thomas W. Lawson was a seven-masted, steel-hulled schooner built in 1902 for the Pacific trade. It had a crew of 18.
In fact, the global wind and current patterns were only fully understood when the age of sail was almost over. Between 1842 and 1861, American navigator Matthew Fontaine Maury collected an extensive array of ship logs which enabled him to chart prevailing winds and sea currents, as well as their seasonal variations. 
Maury's work enabled seafarers to shorten sailing time considerably, by simply taking better advantage of prevailing winds and sea currents. For instance, a journey from New York to Rio de Janeiro was reduced from 55 to 23 days, while the duration of a trip from Melbourne to Liverpool was halved, from 126 to 63 days. 
More recently, yacht racing has generated many innovations that have never been applied to commercial shipping. For example, in the 2017 America's Cup, the Emirates Team New Zealand introduced stationary bikes instead of hand cranks to power the hydraulic system that steers the boat. Because our legs are stronger than our arms, pedal powered 'grinding' allows for quicker tacking and gybing in a race, but it could also be useful to reduce the required manpower for commercial sailing ships. 
Speed sailing records are also telling. The fastest sailboat in 1972 did not even reach 50 km/h, while the current record holder -- the Vestas Sailrocket 2 -- sailed at 121 km/h in 2012. While these types of boats are not practical for carrying cargo, they could inspire other designs that are.
Wind & Solar Powered Trains
We could follow a similar approach for land-based transportation, in the form of wind and solar powered trains. Like sailing boats, trains could be running whenever there is renewable energy available. Not by putting sails on trains, of course, but by running them on electricity made by solar PV panels or wind turbines along the tracks. This would be an entirely new application of a centuries-old strategy to deal with variable energy sources, only made possible by the invention of electricity.
Wind and solar powered trains would be an entirely new application of a centuries-old strategy to deal with variable energy sources.
Running cargo trains on renewable energy is a great use of intermittent wind power because they are usually operated at night, when wind power is often at its best and energy demand is at its lowest. Furthermore, just like cargo ships, cargo trains already have unreliable schedules because they often sit stationary in train-yards for days, waiting to become fully loaded.
Cardiff Docks, a painting by Lionel Walden, 1894
Even the speed of the trains could be regulated by the amount of renewable energy that is available, just as the wind speed determines the speed of a sailing ship. A similar approach could also work with other electrical transportation systems, such as trolleytrucks, trolleyboats or aerial ropeways.
Combining solar and wind powered cargo trains with solar and wind powered factories creates extra possibilities. For example, at first sight, solar or wind powered passenger trains appear to be impossible, because people are less flexible than goods. If a solar powered train is not running or is running too slow, an appointment may have to be rescheduled at the last minute. Likewise, on cloudy days, few people would make it to the office.
Solar PV panels cover a railway in Belgium, 2016. Image: Infrabel.
However, this could be solved by using the same renewable power sources for factories and passenger trains. Solar panels along the railway lines could be sized for cloudy days, and thus guarantee a minimum level of energy for a minimum service of passenger trains (but no industrial production). During sunny days, the extra solar power could be used to run the factories along the railway line, or to run extra passenger (or cargo) trains.
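The allocation rule described above can be sketched in a few lines: guarantee the passenger-train minimum first, then hand any surplus to the factories along the line. The power figures are illustrative assumptions:

```python
# Sketch: divide the solar output along a railway line between a guaranteed
# minimum passenger service and opportunistic industrial loads.
# All numbers are illustrative assumptions, not real ratings.
def allocate(available_kw, passenger_min_kw):
    """Passenger trains get their minimum first; factories get the surplus."""
    passenger = min(available_kw, passenger_min_kw)
    surplus = max(0, available_kw - passenger)
    return passenger, surplus

cloudy, sunny = 200, 900      # kW from panels sized for a cloudy day
print(allocate(cloudy, 200))  # (200, 0): minimum service, no factory power
print(allocate(sunny, 200))   # (200, 700): surplus runs the factories
```

Because the panels are sized so that even a cloudy day yields the passenger minimum, the first branch of the rule is almost always satisfied; everything above it is a bonus for industry or extra trains.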
Consequences for Society: Consumption & Production
As we've seen, if industrial production and cargo transportation became dependent on the availability of renewable energy, we would still be able to produce a diverse range of consumer goods, and transport them all over the globe. However, not all products would be available all the time. If I want to buy new shoes, I might have to wait for the right season to get them manufactured and delivered.
Production and consumption would depend on the weather and the seasons. Solar powered factories would have higher production rates in the summer months, while wind powered factories would have higher production rates in the winter months. Sailing seasons also need to be taken into account.
If I want to buy new shoes, I might have to wait for the right season to get them manufactured and delivered.
But running an economy on the rhythms of the weather doesn't necessarily mean that production and consumption rates would go down. If factories and cargo transportation adjust their energy use to the weather, they can use the full annual power production of wind turbines and solar panels.
A Windmill at Zaandam, a painting by Claude Monet, 1871.
Manufacturers could counter seasonal production shortages by producing items 'in season' and then stocking it close to consumers for sale during low energy periods. In fact, the products themselves would become 'energy storage' in this scenario. Instead of storing energy to manufacture products in the future, we would manufacture products whenever there is energy available, and store the products for later sale instead.
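This 'products as energy storage' idea can be illustrated with a toy inventory model. The monthly figures, the steady demand, and the carried-over stock are all made-up assumptions:

```python
# Sketch: finished goods as 'storage'. A solar-powered factory overproduces
# in summer; a warehouse buffers the output so that steady demand can be
# met year-round. All figures are illustrative assumptions.
def simulate(production, demand, stock):
    """Return end-of-month stock levels; a negative value means a shortage."""
    levels = []
    for p in production:
        stock += p - demand
        levels.append(stock)
    return levels

# January to December output (units): high in summer, low in winter.
monthly_output = [40, 50, 80, 100, 130, 150, 150, 130, 100, 80, 50, 40]

# 100 units carried over from last summer cover the winter shortfall.
levels = simulate(monthly_output, demand=90, stock=100)
print(min(levels))  # -> 0: the buffer is just large enough
```

Without the carried-over stock, the same figures would produce shortages from January to May, which is exactly the sizing compromise the next paragraph discusses.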
However, seasonal production may well lead to lower production and consumption rates. Overproducing in high energy times requires large production facilities and warehouses, which would be underused for the rest of the year. To produce cost-efficiently, manufacturers will need to make compromises. From time to time, these compromises will lead to product shortages, which in turn could encourage people to consider other solutions, such as repair and re-use of existing products, crafted products, DIY, or exchanging and sharing goods.
Consequences for the Workforce
Adjusting energy demand to energy supply also implies that the workforce adapts to the weather. If a factory runs on solar power, the availability of power corresponds very well with human rhythms. The only downside is that workers would be off work mainly in winter and on cloudy days.
However, if a factory or a cargo train runs on wind power, then people will also have to work during the night, which is considered unhealthy. The upside is that they would have holidays in summer and on good weather days.
Nachtelijk werk in de dokken (Night work at the docks), a painting by Henri Adolphe Schaep, 1856.
If a factory or a transportation system is operated by wind or solar energy alone, workers would also have to deal with uncertainty about their work schedules. Although we have much better weather forecasts than in pre-industrial times, it remains difficult to make accurate predictions more than a few days ahead.
However, it is not only renewable power plants that are now completely automated. The same goes for factories. The last century has seen increasing automation of production processes, based on computers and robots. So-called “dark factories” are already completely automated (they need no lights because there is nobody there).
It's not only renewable power plants that are now completely automated. The same goes for factories.
If a factory has no workers, it doesn't matter when it's running. Furthermore, many factories already run for 24 hours per day, partly operated by millions of night shift workers. In these cases, night work would actually decrease, because such factories would only run through the night when it's windy.
Finally, we could also limit the main share of industrial manufacturing and railway transportation to normal working hours, and curtail the oversupply during the night. In this scenario, we would simply have fewer material goods and more holidays. On the other hand, there would be an increased need for other types of jobs, such as craftsmanship and sailing.
What About the Internet?
In conclusion, industrial manufacturing and cargo transportation -- both over land and over sea -- could be run almost entirely on variable renewable power sources, with little need for energy storage, transmission networks, balancing capacity or overbuilding renewable power plants. In contrast, the modern high-tech approach of matching energy supply to energy demand at all times requires a lot of extra infrastructure which makes renewable power production a complex, slow, expensive and unsustainable undertaking.
Adjusting energy demand to supply would make switching to renewable energy much more realistic than it is today. There would be no curtailment of energy, and no storage and transmission losses. All the energy produced by solar panels and wind turbines would be used on the spot and nothing would go to waste.
Marina, a painting by Carol Popp de Szathmary, 1800s.
Admittedly, adjusting energy demand to energy supply can be less straightforward in other sectors. Although the internet could be entirely operated on variable power sources -- using asynchronous networks and delay-tolerant software -- many newer internet applications would then disappear.
At home, we probably can’t expect people to sit in the dark or not to cook meals when there is no renewable energy. Likewise, people will not come to hospitals only on sunny days. In such instances, there is a larger need for energy storage or other measures to counter an intermittent power supply. That's for a next post.
Kris De Decker. Edited by Jenna Collett.
Part of the research for this article happened during a fellowship at the Demand Centre, Lancaster, UK.
- How (Not) to Run a Modern Society on Solar and Wind Power Alone
- Back to Basics: Direct Hydro Power
- Wind Powered Factories: History (and Future) of Industrial Windmills
- Boat Mills: Water-Powered, Floating Factories
- Medieval Smokestacks: Fossil Fuels in Pre-Industrial Times
- The Bright Future of Solar Powered Factories
- How to Build a Low-Tech Internet
- How to Get Your Apartment Off the Grid
- Could we Run Modern Society on Human Power Alone?
 Lucas, Adam. Wind, Water, Work: Ancient and Medieval Milling Technology. Vol. 8. Brill, 2006.
 Reynolds, Terry S. Stronger than a hundred men: a history of the vertical water wheel. Vol. 7. JHU Press, 2002.
 Hills, Richard Leslie. Power from wind: a history of windmill technology. Cambridge University Press, 1996.
 Paine, Lincoln. The sea and civilization: a maritime history of the world. Atlantic Books Ltd, 2014.
 One of the earliest large hydropower dams was the Cento dam in Italy (1450), which was 71 m long and almost 6 m high. By the 18th century, the largest dams were up to 260 m long and 25 m high, with power canals leading to dozens of water wheels. 
 Although windmills had all kinds of internal mechanisms to adapt to sudden changes in wind speed and wind direction, wind power had no counterpart for the dam in water power.
 This explains why windmills became especially important in regions with dry climates, in flat countries, or in very cold areas, where water power was not available. In countries with good water resources, windmills only appeared when the increased demand for power created a crisis because the best waterpower sites were already occupied.
 Tide mills were technically similar to water mills, but they were more reliable because the sea is less prone to dry out, freeze over, or change its water level than a river.
 Sieferle, Rolf Peter, and Michael P. Osman. The subterranean forest: energy systems and the industrial revolution. Cambridge: White Horse Press, 2001.
 Freese, Stanley. Windmills and millwrighting. Cambridge University Press, 1957
 Wailes, Rex. The English windmill. London, Routledge & K. Paul, 1954
 The global wind pattern is complemented by regional wind patterns, such as land and sea breezes. The Northern Indian Ocean has semi-annually reversing Monsoon winds. These blow from the southwest from June to November, and from the northeast from December to May. Maritime trade in the Indian Ocean started earlier than in other seas, and the established trade routes were entirely dependent on the season.
 Jenkins, H. L. C. "Ocean passages for the world." The Royal Navy, Somerset (1973).
 Windmillers had to be alert to keep the gap between the stones constant however choppy the wind, and before the days of the centrifugal governor this was done by hand. The miller had to watch the power of the wind, to judge how much sail cloth to spread, and to be prepared to stop the mill under sail and either take in or let out more cloth, for there were no patent sails. And before the fantail came into use, he had to watch the direction of the wind as well and keep the sails square into the wind's eye. 
 Apart from electricity, the Industrial Revolution also brought us compressed air, water under pressure, and improved mechanical power transmission, which can all be valuable alternatives for electricity in certain applications.
 A similar distinction was made in the old days. For example, when spinning cloth, a constant speed was required to avoid gearwheels hunting and causing the machines to deliver thick and thin parts in rovings or yarns.  That's why spinning was only mechanised using water power, which could be stored to guarantee a more regular power supply, and not wind power. Wind power was also unsuited for processes like papermaking, mine haulage, or operating blast furnace bellows in ironworks.
 Very short-term energy storage is required for many mechanical production processes running on variable power sources, in order to smooth out small and sudden variations in energy supply. Such mechanical systems were already used in pre-industrial windmills.
 Leighly, J. (ed) (1963) The Physical Geography of the Sea and its Meteorology by Matthew Fontaine Maury, 8th Edition, Cambridge, MA: Belknap Press. Cited by Knowles, R.D. (2006) "Transport shaping space: the differential collapse of time/space", Journal of Transport Geography, 14(6), pp. 407-425.
 Rival teams rejected pedal power because they feared radical change, says Team New Zealand designer. The Telegraph, May 24, 2017.
While the potential of wind and solar energy is more than sufficient to supply the electricity demand of industrial societies, these resources are only available intermittently. To ensure that supply always meets demand, a renewable power grid needs an oversized power generation and transmission capacity of up to ten times the peak demand. It also requires a balancing capacity of fossil fuel power plants, or its equivalent in energy storage.
Consequently, matching supply to demand at all times makes renewable power production a complex, slow, expensive and unsustainable undertaking. Yet, if we would adjust energy demand to the variable supply of solar and wind energy, a renewable power grid could be much more advantageous. Using wind and solar energy only when they're available is a traditional concept that modern technology can improve upon significantly.
Image: Eye of the wind.
100% Renewable Energy
It is widely believed that in the future, renewable energy production will allow modern societies to become independent from fossil fuels, with wind and solar energy having the largest potential. An oft-stated fact is that there's enough wind and solar power available to meet the energy needs of modern civilisation many times over.
For instance, in Europe, the practical wind energy potential for electricity production on- and off-shore is estimated to be at least 30,000 TWh per year, or ten times the annual electricity demand.  In the USA, the technical solar power potential is estimated to be 400,000 TWh, or 100 times the annual electricity demand. 
Such statements, although theoretically correct, are highly problematic in practice. This is because they are based on annual averages of renewable energy production, and do not address the highly variable and uncertain character of wind and solar energy.
Annual averages of renewable energy production do not address the highly variable and uncertain character of wind and solar energy
Demand and supply of electricity need to be matched at all times, which is relatively easy to achieve with power plants that can be turned on and off at will. However, the output of wind turbines and solar panels is totally dependent on the whims of the weather.
Therefore, to find out if and how we can run a modern society on solar and wind power alone, we need to compare time-synchronised electricity demand with time-synchronised solar or wind power availability.   In doing so, it becomes clear that supply correlates poorly with demand.
Above: a visualisation of 30 days of superimposed power demand time series data (red), wind energy generation data (blue), and solar insolation data (yellow). Average values are shown as colour-highlighted black lines. Data obtained from Bonneville Power Administration, April 2010.
The Intermittency of Solar Energy
Solar power is characterised by both predictable and unpredictable variations. There is a predictable diurnal and seasonal pattern, where peak output occurs in the middle of the day and in the summer, depending on the apparent motion of the sun in the sky.  
When the sun is lower in the sky, its rays have to travel through a larger air mass, which reduces their strength because they are absorbed by particles in the atmosphere. The sun's rays are also spread out over a larger horizontal surface, decreasing the energy transfer per unit of horizontal surface area.
When the sun is 60° above the horizon, the radiation striking a horizontal surface is still 87% of its maximum. At lower angles, however, the intensity decreases quickly: at a solar elevation of 15°, the radiation that strikes a horizontal surface is only 25% of its maximum.
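These two figures follow, to a first approximation, from geometry alone: a horizontal surface receives the direct beam in proportion to the sine of the solar elevation angle. A minimal sketch of that relationship (ignoring atmospheric absorption, which accounts for the small remaining difference at low angles):

```python
import math

# Fraction of direct-beam radiation received by a horizontal surface,
# from geometry alone: the beam is spread over 1/sin(elevation) as much area.
def horizontal_fraction(elevation_deg):
    return math.sin(math.radians(elevation_deg))

for elev in (90, 60, 30, 15):
    print(f"{elev}° elevation: {horizontal_fraction(elev):.0%} of maximum")
```

Geometry alone gives about 26% at 15° elevation; the longer path through the atmosphere brings the real-world figure closer to 25%.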
On a seasonal scale, the solar elevation angle also correlates with the number of daylight hours, which reduces the amount of solar energy received over the course of a day at times of the year when the sun is already lower in the sky. And, last but not least, there's no solar energy available at night.
Image: Average cloud cover 2002 - 2015. Source: NASA.
Likewise, the presence of clouds adds unpredictable variations to the solar energy supply. Clouds scatter and absorb solar radiation, reducing the amount of insolation that reaches the ground below. Solar output is roughly 80% of its maximum with a light cloud cover, but only 15% of its maximum on a heavy overcast day. 
Due to a lack of thermal or mechanical inertia in solar photovoltaic (PV) systems, the changes due to clouds can be dramatic. For example, under fluctuating cloud cover, the output of multi-megawatt PV power plants in the Southwest USA was reported to have variations of roughly 50% in a 30 to 90 second timeframe and around 70% in a timeframe of 5 to 10 minutes. 
In London, a solar panel produces 65 times less energy on a heavy overcast day in December at 10 am than on a sunny day in June at noon.
The combination of these predictable and unpredictable variations in solar power makes it clear that the output of a solar power plant can vary enormously over time. In Phoenix, Arizona, the sunniest place in the USA, a solar panel produces on average 2.7 times less energy in December than in June. Comparing a sunny day at midday in June with a heavy overcast day at 10 am in December, the difference in solar output is almost twentyfold. 
In London, UK, which is a moderately suitable location for solar power, a solar panel produces on average 10 times less energy in December than in June. Comparing a sunny day in June at noon with a heavy overcast day in December at 10 am, the solar output differs by a factor of 65.
The Intermittency of Wind Energy
Compared to solar energy, wind energy is even more variable. On the one hand, wind energy can be harvested both day and night; on the other hand, it's less predictable and less reliable than solar energy. During daylight hours, there's always a minimum amount of solar power available, but this is not the case for wind, which can be absent or too weak for days or even weeks at a time. There can also be too much wind, in which case wind turbines have to be shut down to avoid damage.
On average throughout the year, and depending on location, modern wind farms produce 10-45% of their rated maximum power capacity, roughly double the annual capacity factor of the average solar PV installation (5-30%).   In practice, however, wind turbines can operate between 0 and 100% of their maximum power at any moment.
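A capacity factor translates directly into an annual energy yield. A quick sketch with illustrative numbers (the 600 MW rating and 35% capacity factor below are assumptions chosen for the example, not figures from the text):

```python
HOURS_PER_YEAR = 8760

# Annual energy yield implied by a capacity factor: average output is the
# rated power scaled by the capacity factor, sustained over the whole year.
def annual_energy_gwh(rated_mw, capacity_factor):
    return rated_mw * capacity_factor * HOURS_PER_YEAR / 1000

# A hypothetical 600 MW wind farm at a 35% capacity factor:
print(annual_energy_gwh(600, 0.35))  # far below the 5256 GWh it would
                                     # produce running flat-out all year
```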
Hourly wind power output on 29 different days in April 2005 at a wind plant in California. Source: 
For many locations, only average wind speed data is available. However, the chart above shows the daily and hourly wind power output on 29 different days at a wind farm in California. At any given hour of the day and any given day of the month, wind power production can vary between zero and 600 megawatts, which is the maximum power production of the wind farm. 
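One reason output swings so widely is that the power available in the wind grows with the cube of the wind speed. A minimal illustration of that cube law:

```python
# Power available in the wind scales with the cube of wind speed:
# P ∝ v³, so the ratio of two outputs is (v_new / v_old)³.
def power_ratio(v_new, v_old):
    return (v_new / v_old) ** 3

print(power_ratio(5, 10))   # halving the wind speed leaves 1/8 of the power
print(power_ratio(12, 10))  # a 20% speed increase yields ~73% more power
```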
Even relatively small changes in wind speed have a large effect on wind power production: if the wind speed decreases by half, power production decreases by a factor of eight. Wind resources also vary from year to year: Germany, the Netherlands and Denmark show an inter-annual variability in wind speed of up to 30%. Yearly differences in solar power can also be significant.
How to Match Supply with Demand?
To some extent, wind and solar energy can compensate for each other. For example, wind is usually twice as strong during the winter months, when there is less sun.  However, this concerns average values again. At any particular moment of the year, wind and solar energy may be weak or absent simultaneously, leaving us with little or no electricity at all.
Electricity demand also varies throughout the day and the seasons, but these changes are more predictable and much less extreme. Demand peaks in the morning and in the evening, and is at its lowest during the night. However, even at night, electricity use is still close to 60% of the maximum.
At any particular moment of the year, wind and solar energy may be weak or absent simultaneously, leaving us with little or no electricity at all.
Consequently, if renewable power capacity is calculated based on the annual averages of solar and wind energy production and in tune with the average power demand, there would be huge electricity shortages for most of the time. To ensure that electricity supply always meets electricity demand, additional measures need to be taken.
First, we could count on a backup infrastructure of dispatchable fossil fuel power plants to supply electricity when there's not enough renewable energy available. Second, we could oversize the renewable generation capacity, adjusting it to the worst case scenario. Third, we could connect geographically dispersed renewable energy sources to smooth out variations in power production. Fourth, we could store surplus electricity for use in times when solar and/or wind resources are low or absent.
As we shall see, all of these strategies are self-defeating on a large enough scale, even when they're combined. If the energy used for building and maintaining the extra infrastructure is accounted for in a life cycle analysis of a renewable power grid, that grid would be just as CO2-intensive as the present-day power grid.
Strategy 1: Backup Power Plants
Up to now, the relatively small share of renewable power sources added to the grid has been balanced by dispatchable forms of electricity, mainly rapidly deployable gas power plants. Although this approach completely "solves" the problem of intermittency, it results in a paradox because the whole point of switching to renewable energy is to become independent of fossil fuels, including gas. 
Most scientific research focuses on Europe, which has the most ambitious plans for renewable power. For a power grid based on 100% solar and wind power, with no energy storage and assuming interconnection at the national European level only, the balancing capacity of fossil fuel power plants needs to be just as large as peak electricity demand.  In other words, there would be just as many non-renewable power plants as there are today.
Every power plant in the USA. Visualisation by The Washington Post.
Such a hybrid infrastructure would lower the use of carbon fuels for the generation of electricity, because renewable energy can replace them if there is sufficient sun or wind available. However, lots of energy and materials need to be invested into what is essentially a double infrastructure. The energy that's saved on fuel is spent on the manufacturing, installation and interconnection of millions of solar panels and wind turbines.
Although balancing renewable power sources with fossil fuels is widely regarded as a temporary fix that's unsuited for larger shares of renewable energy, most of the other technological strategies (described below) can only partially reduce the need for balancing capacity.
Strategy 2: Oversizing Renewable Power Production
Another way to avoid energy shortages is to install more solar panels and wind turbines. If solar power capacity is tailored to match demand during even the shortest and darkest winter days, and wind power capacity is matched to the lowest wind speeds, the risk of electricity shortages could be reduced significantly. However, the obvious disadvantage of this approach is an oversupply of renewable energy for most of the year.
During periods of oversupply, the energy produced by solar panels and wind turbines is curtailed in order to avoid grid overloading. Problematically, curtailment has a detrimental effect on the sustainability of a renewable power grid. It reduces the electricity that a solar panel or wind turbine produces over its lifetime, while the energy required to manufacture, install, connect and maintain it remains the same. Consequently, the capacity factor and the energy returned for the energy invested in wind turbines and solar panels decrease. 
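The effect of curtailment on the energy return can be sketched with a toy calculation. The output and embodied-energy numbers below are hypothetical, chosen only to show the mechanism:

```python
# Illustrative only: curtailment reduces the energy a generator delivers
# over its lifetime, while the energy invested to build it stays fixed,
# so the energy return on investment (EROI) falls proportionally.
def eroi(lifetime_output_kwh, embodied_energy_kwh, curtailed_fraction):
    delivered = lifetime_output_kwh * (1 - curtailed_fraction)
    return delivered / embodied_energy_kwh

# Hypothetical panel: 10,000 kWh potential lifetime output, 1,000 kWh to build.
print(eroi(10_000, 1_000, 0.0))   # no curtailment
print(eroi(10_000, 1_000, 0.4))   # 40% of output curtailed
```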
Installing more solar panels and wind turbines reduces the risk of shortages, but it produces an oversupply of electricity for most of the year.
Curtailment rates increase spectacularly as wind and solar comprise a larger fraction of the generation mix, because overproduction grows exponentially with the share of renewables. Scientists calculated that a European grid comprised of 60% solar and wind power would require a generation capacity that's double the peak load, resulting in 300 TWh of excess electricity every year (roughly 10% of the current annual electricity consumption in Europe).
In the case of a grid with 80% renewables, the generation capacity needs to be six times larger than the peak load, while the excess electricity would be equal to 60% of the EU's current annual electricity consumption. Lastly, in a grid with 100% renewable power production, the generation capacity would need to be ten times larger than the peak load, and excess electricity would surpass the EU annual electricity consumption.   
This means that up to ten times more solar panels and wind turbines need to be manufactured. The energy that's needed to create this infrastructure would make the switch to renewable energy self-defeating, because the energy payback times of solar panels and wind turbines would increase six- or ten-fold.
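The six- to ten-fold stretching of payback times follows directly from the overbuild factor. A back-of-envelope sketch, assuming a nominal solar-panel payback of 2-4 years:

```python
# Back-of-envelope: if overbuilding multiplies the required infrastructure,
# energy payback times scale by the same factor (nominal values assumed).
nominal_payback_years = (2, 4)  # assumed range for a solar panel today

for overbuild in (6, 10):
    lo, hi = (overbuild * y for y in nominal_payback_years)
    print(f"{overbuild}x overbuild: payback stretches to {lo}-{hi} years")
```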
In a power grid with 80% renewables, solar panels would pay back their energy investment only after 12-24 years, and in a grid with 100% renewables only after 20-40 years. Because the life expectancy of a solar panel is roughly 30 years, a solar panel may never produce the energy that was needed to manufacture it. Wind turbines would remain net energy producers because they have shorter energy payback times, but their advantage compared to fossil fuels would decrease.
Strategy 3: Supergrids
The variability of solar and wind power can also be reduced by interconnecting renewable power plants over a wider geographical region. For example, electricity can be overproduced where the wind is blowing but transmitted to meet demand in becalmed locations. 
Interconnection also allows the combination of technologies that utilise different variable power resources, such as wave and tidal energy.  Furthermore, connecting power grids over large geographical areas allows a wider sharing of backup fossil fuel power plants.
Wind map of Europe, September 2, 2017, 23h48. Source: Windy.
Although today's power systems in Europe and the USA stretch out over a large enough area, these grids are currently not strong enough to allow interconnection of renewable energy sources. This can be solved with a powerful overlay high-voltage DC transmission grid. Such "supergrids" form the core of many ambitious plans for 100% renewable power production, especially in Europe.  The problem with this strategy is that transmission capacity needs to be overbuilt, over very long distances. 
For a European grid with a share of 60% renewable power (an optimal mix of wind and solar), grid capacity would need to be increased at least sevenfold. If individual European countries would disregard national concerns about security of supply, and backup balancing capacity would be optimally distributed throughout the continent, the necessary grid capacity extensions can be limited to about triple the existing European high-voltage grid. For a European power grid with a share of 100% renewables, grid capacity would need to be up to twelve times larger than it is today.  
Even in the UK, which has one of the best renewable energy resources in the world, combining wind, sun, wave and tidal power would still leave electricity shortages for 65 days per year.
The problems with such grid extensions are threefold. Firstly, building infrastructure such as transmission towers and their foundations, power lines, substations, and so on, requires a significant amount of energy and other resources. This will need to be taken into account when making a life cycle analysis of a renewable power grid. As with oversizing renewable power generation, most of the oversized transmission infrastructure will not be used for most of the time, driving down the transmission capacity factor substantially.
Secondly, a supergrid involves transmission losses, which means that more wind turbines and solar panels will need to be installed to compensate for this loss. Thirdly, the acceptance of and building process for new transmission lines can take up to ten years.  This is not just bureaucratic hassle: transmission lines have a high impact on the land and often face local opposition, which makes them one of the main obstacles for the growth of renewable power production.
Even with a supergrid, low power days remain a possibility over areas as large as Europe. With a share of 100% renewable energy sources and 12 times the current grid capacity, the balancing capacity of fossil fuel power plants can be reduced to 15% of the total annual electricity consumption, which represents the maximum possible benefit of transmission for Europe. 
Even in the UK, which has one of the best renewable energy resources in the world, interconnecting wind, sun, wave and tidal power would still leave electricity shortages for 18% of the time (roughly 65 days per year).
Strategy 4: Energy Storage
A final strategy to match supply to demand is to store an oversupply of electricity for use when there is not enough renewable energy available. Energy storage avoids curtailment and it's the only supply-side strategy that can make a balancing capacity of fossil fuel plants redundant, at least in theory. In practice, the storage of renewable energy runs into several problems.
First of all, while there's no need to build and maintain a backup infrastructure of fossil fuel power plants, this advantage is negated by the need to build and maintain an energy storage infrastructure. Second, all storage technologies have charging and discharging losses, which results in the need for extra solar panels and wind turbines to compensate for this loss.
The energy required to build and maintain the storage infrastructure and the extra renewable power plants needs to be taken into account when conducting a life cycle analysis of a renewable power grid. In fact, research has shown that it can be more energy efficient to curtail renewable power from wind turbines than to store it, because the energy needed to manufacture storage and operate it (which involves charge-discharge losses) surpasses the energy that is lost through curtailment. 
If we count on electric cars to store the surplus of renewable electricity, their batteries would need to be 60 times larger than they are today
It has been calculated that for a European power grid with 100% renewable power plants (670 GW wind power capacity and 810 GW solar power capacity) and no balancing capacity, the energy storage capacity needs to be 1.5 times the average monthly load and amounts to 400 TWh, not including charging and discharging losses.   
To give an idea of what this means: the most optimistic estimation of Europe's total potential for pumped hydropower energy storage is 80 TWh, while converting all 250 million passenger cars in Europe to electric drives with a 30 kWh battery would result in a total energy storage of 7.5 TWh. In other words, if we count on electric cars to store the surplus of renewable electricity, their batteries would need to be 60 times larger than they are today (and that's without allowing for the fact that electric cars will substantially increase power consumption).
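These figures can be checked with simple arithmetic; the raw ratio is about 53, and it grows to roughly 60 once charge/discharge losses are included:

```python
# Rough check of the electric-car storage comparison in the text.
cars = 250e6            # passenger cars in Europe
battery_kwh = 30        # assumed battery size per car
fleet_storage_twh = cars * battery_kwh / 1e9   # kWh -> TWh

storage_needed_twh = 400   # required storage from the study cited above
print(fleet_storage_twh)                        # 7.5 TWh
print(storage_needed_twh / fleet_storage_twh)   # ~53x before accounting for
                                                # charge/discharge losses
```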
Taking into account a charging/discharging efficiency of 85%, manufacturing 460 TWh of lithium-ion batteries would require 644 million terajoules of primary energy, which is equal to 15 times the annual primary energy use in Europe. This energy investment would be required at minimum every twenty years, which is the most optimistic life expectancy of lithium-ion batteries. There are many other technologies for storing excess electricity from renewable power plants, but all have unique disadvantages that make them unattractive on a large scale.
Matching Supply to Demand = Overbuilding the Infrastructure
In conclusion, calculating only the energy payback times of individual solar panels or wind turbines greatly overestimates the sustainability of a renewable power grid. If we want to match supply to demand at all times, we also need to factor in the energy use for overbuilding the power generation and transmission capacity, and the energy use for building the backup generation capacity and/or the energy storage. The need to overbuild the system also increases the costs and the time required to switch to renewable energy.
Calculating only the energy payback times of individual solar panels or wind turbines greatly overestimates the sustainability of a renewable power grid.
Combining different strategies is a more synergistic approach which improves the sustainability of a renewable power grid, but these advantages are not large enough to provide a fundamental solution.   
Building solar panels, wind turbines, transmission lines, balancing capacity and energy storage using renewable energy instead of fossil fuels doesn't solve the problem either, because it also assumes an overbuilding of the infrastructure: we would need to build an extra renewable energy infrastructure to build the renewable energy infrastructure.
Adjusting Demand to Supply
However, this doesn't mean that a sustainable renewable power grid is impossible. There's a fifth strategy, which does not try to match supply to demand, but instead aims to match demand to supply. In this scenario, renewable energy would ideally be used only when it's available.
If we could manage to adjust all energy demand to variable solar and wind resources, there would be no need for grid extensions, balancing capacity or overbuilding renewable power plants. Likewise, all the energy produced by solar panels and wind turbines would be utilised, with no transmission losses and no need for curtailment or energy storage.
Windmill in Moulbaix, Belgium, 17th/18th century. Image: Jean-Pol GrandMont.
Of course, adjusting energy demand to energy supply at all times is impossible, because not all energy-using activities can be postponed. However, the adjustment of energy demand to supply should take priority, while the other strategies play a supportive role. If we let go of the need to meet energy demand 24 hours a day, 365 days a year, a renewable power grid could be built much faster and at a lower cost, making it more sustainable overall.
If we could manage to adjust all energy demand to variable solar and wind resources, there would be no need for energy storage, grid extensions, balancing capacity or overbuilding renewable power plants.
With regards to this adjustment, even small compromises yield very beneficial results. For example, if the UK would accept electricity shortages for 65 days a year, it could be powered by a 100% renewable power grid (solar, wind, wave & tidal power) without the need for energy storage, a backup capacity of fossil fuel power plants, or a large overcapacity of power generators. 
If demand management is discussed at all these days, it's usually limited to so-called 'smart' household devices, like washing machines or dishwashers that automatically turn on when renewable energy supply is plentiful. However, these ideas are only scratching the surface of what's possible.
Before the Industrial Revolution, both industry and transportation were largely dependent on intermittent renewable energy sources. The variability in the supply was almost entirely solved by adjusting energy demand. For example, windmills and sailing boats only operated when the wind was blowing. In the next article, I will explain how this historical approach could be successfully applied to modern industry and cargo transportation.
Kris De Decker (edited by Jenna Collett)
 Swart, R. J., et al. Europe's onshore and offshore wind energy potential, an assessment of environmental and economic constraints. No. 6/2009. European Environment Agency, 2009.
 Lopez, Anthony, et al. US renewable energy technical potentials: a GIS-based analysis. NREL, 2012. See also Here's how much of the world would need to be covered in solar panels to power Earth, Business Insider, October 2015.
 Hart, Elaine K., Eric D. Stoutenburg, and Mark Z. Jacobson. "The potential of intermittent renewables to meet electric power demand: current methods and emerging analytical techniques." Proceedings of the IEEE 100.2 (2012): 322-334.
 Ambec, Stefan, and Claude Crampes. Electricity production with intermittent sources of energy. No. 10.07. 313. LERNA, University of Toulouse, 2010.
 Mulder, F. M. "Implications of diurnal and seasonal variations in renewable energy generation for large scale energy storage." Journal of Renewable and Sustainable Energy 6.3 (2014): 033105.
 MIT Energy Initiative. "Managing large-scale penetration of intermittent renewables." 2012.
 Richard Perez, Mathieu David, Thomas E. Hoff, Mohammad Jamaly, Sergey Kivalov, Jan Kleissl, Philippe Lauret and Marc Perez (2016), "Spatial and temporal variability of solar energy", Foundations and Trends in Renewable Energy: Vol. 1: No. 1, pp 1-44. http://dx.doi.org/10.1561/2700000006
 Sun Angle and Insolation. FTExploring.
 Sun position calculator, Sun Earth Tools.
 Burgess, Paul. "Variation in light intensity at different latitudes and seasons, effects of cloud cover, and the amounts of direct and diffused light." Forres, UK: Continuous Cover Forestry Group, 2009. Available online at http://www.ccfg.org.uk/conferences/downloads/P_Burgess.pdf
 Solar output can be increased, especially in winter, by tilting solar panels so that they make a 90 degree angle with the sun's rays. However, this only addresses the spreading out of solar irradiation and has no effect on the energy lost because of the greater air mass, nor on the amount of daylight hours. Furthermore, tilting the panels is always a compromise. A panel that's ideally tilted for the winter sun will be less efficient in the summer sun, and the other way around.
 Schaber, Katrin, Florian Steinke, and Thomas Hamacher. "Transmission grid extensions for the integration of variable renewable energies in europe: who benefits where?." Energy Policy 43 (2012): 123-135.
 German offshore wind capacity factors, Energy Numbers, July 2017
 What are the capacity factors of America's wind farms? Carbon Counter, 24 July 2015.
 Sorensen, Bent. Renewable Energy: physics, engineering, environmental impacts, economics & planning; Fourth Edition. Elsevier Ltd, 2010.
 Jerez, S., et al. "The Impact of the North Atlantic Oscillation on Renewable Energy Resources in Southwestern Europe." Journal of applied meteorology and climatology 52.10 (2013): 2204-2225.
 Eerme, Kalju. "Interannual and intraseasonal variations of the available solar radiation." Solar Radiation. InTech, 2012.
 Archer, Cristina L., and Mark Z. Jacobson. "Geographical and seasonal variability of the global practical wind resources." Applied Geography 45 (2013): 119-130.
 Rugolo, Jason, and Michael J. Aziz. "Electricity storage for intermittent renewable sources." Energy & Environmental Science 5.5 (2012): 7151-7160.
 Even at today's relatively low shares of renewables, curtailment is already happening, caused by either transmission congestion, insufficient transmission availability, or minimal operating levels on thermal generators (coal and atomic power plants are designed to operate continuously). See: “Wind and solar curtailment”, Debra Lew et al., National Renewable Energy Laboratory, 2013. For example, in China, now the world's top wind power producer, nearly one-fifth of total wind power is curtailed. See: Chinese wind earnings under pressure with fifth of farms idle, Sue-Lin Wong & Charlie Zhu, Reuters, May 17, 2015.
 Barnhart, Charles J., et al. "The energetic implications of curtailing versus storing solar- and wind-generated electricity." Energy & Environmental Science 6.10 (2013): 2804-2810.
 Schaber, Katrin, et al. "Parametric study of variable renewable energy integration in europe: advantages and costs of transmission grid extensions." Energy Policy 42 (2012): 498-508.
 Schaber, Katrin, Florian Steinke, and Thomas Hamacher. "Managing temporary oversupply from renewables efficiently: electricity storage versus energy sector coupling in Germany." International Energy Workshop, Paris. 2013.
 Underground cables can partly overcome this problem, but they are about 6 times more expensive than overhead lines.
 Szarka, Joseph, et al., eds. Learning from wind power: governance, societal and policy perspectives on sustainable energy. Palgrave Macmillan, 2012.
 Rodriguez, Rolando A., et al. "Transmission needs across a fully renewable european storage system." Renewable Energy 63 (2014): 467-476.
 Furthermore, new transmission capacity is often required to connect renewable power plants to the rest of the grid in the first place -- solar and wind farms must be co-located with the resource itself, and often these locations are far from the place where the power will be used.
 Becker, Sarah, et al. "Transmission grid extensions during the build-up of a fully renewable pan-European electricity supply." Energy 64 (2014): 404-418.
 Zero Carbon Britain: Rethinking the Future, Paul Allen et al., Centre for Alternative Technology, 2013.
 Wave energy often correlates with wind power: if there's no wind, there's usually no waves.
 Building even larger supergrids to take advantage of even wider geographical regions, or even the whole planet, could make the need for balancing capacity largely redundant. However, this could only be done at very high cost and with increased transmission losses. Transmission costs increase faster than linearly with distance, because the amount of peak power to be transported also grows with the surface area that is connected. Practical obstacles abound as well. For example, supergrids assume peace and good understanding between and within countries, as well as equal interests, while in reality some benefit much more from interconnection than others. 
 Heide, Dominik, et al. "Seasonal optimal mix of wind and solar power in a future, highly renewable Europe." Renewable Energy 35.11 (2010): 2483-2489.
 Rasmussen, Morten Grud, Gorm Bruun Andresen, and Martin Greiner. "Storage and balancing synergies in a fully or highly renewable pan-european system." Energy Policy 51 (2012): 642-651.
 Weitemeyer, Stefan, et al. "Integration of renewable energy sources in future power systems: the role of storage." Renewable Energy 75 (2015): 14-20.
 Assessment of the European potential for pumped hydropower energy storage, Marcos Gimeno-Gutiérrez et al., European Commission, 2013
 The calculation is based on the data in this article: How sustainable is stored sunlight? Kris De Decker, Low-tech Magazine, 2015.
 Evans, Annette, Vladimir Strezov, and Tim J. Evans. "Assessment of utility energy storage options for increased renewable energy penetration." Renewable and Sustainable Energy Reviews 16.6 (2012): 4141-4147.
 Zakeri, Behnam, and Sanna Syri. "Electrical energy storage systems: A comparative life cycle cost analysis." Renewable and Sustainable Energy Reviews 42 (2015): 569-596.
 Steinke, Florian, Philipp Wolfrum, and Clemens Hoffmann. "Grid vs. storage in a 100% renewable Europe." Renewable Energy 50 (2013): 826-832.
 Heide, Dominik, et al. "Reduced storage and balancing needs in a fully renewable European power system with excess wind and solar power generation." Renewable Energy 36.9 (2011): 2515-2523.
Unlike solar and wind energy, human power is always available, no matter the season or time of day. Unlike fossil fuels, human power can be a clean energy source, and its potential increases as the human population grows. In the Human Power Plant, Low-tech Magazine and artist Melle Smets investigate the feasibility of human energy production in the 21st century.
To find out if human power can sustain a modern lifestyle, we are designing plans to convert a vacant 22-floor tower building on the campus of Utrecht University in the Netherlands into an entirely human powered student community for 750 people. We're also constructing a working prototype of the human power plant that supplies the community with energy.
The Human Power Plant is both a technical and a social challenge. A technical challenge, because there's a lack of scientific and technological research into human power production. A social challenge, because unlike a wind turbine, a solar panel or an oil barrel, a human needs to be motivated in order to produce energy.
Image: A human powered student room. Golnar Abbasi.
The Rise and Fall of Human Power
Throughout most of history, humans have been the most important source of mechanical energy. Building cities, digging canals, producing food, washing clothes, communication and transportation: it all happened with human muscle power as the main source of energy. Human power was complemented with animal power, and windmills and watermills became increasingly important from the middle ages onwards. Most work, however, was carried out by humans themselves.
These days, human power plays virtually no role anymore. We have automated and motorised even the smallest physical efforts. Mechanical energy is now largely provided by fossil fuels, either as a primary fuel or converted to electricity. This 'progress' comes at a price. Industrial society is totally dependent on a steady supply of fossil fuels and electricity, which makes it very vulnerable to an interruption in this supply.
Digging the Panama Canal. Picture: National Archives.
Furthermore, fossil fuels are not infinitely available and their large-scale use causes a host of other problems. On the other hand, renewable energy sources such as wind and solar power are not always available, and their manufacturing is also dependent on fossil fuels. Meanwhile, in order to keep in shape and stay healthy, people go to the gym to exercise, generating energy that's wasted. The Human Power Plant wants to restore the connection between energy demand and energy supply.
Compared with fossil fuels and renewable energy sources, human power has a lot of advantages. A human can generate at least as much power as a 1 m2 solar PV panel on a sunny day -- and as much as 10 m2 of solar PV panels on a heavy overcast day. Human power is a dispatchable energy source, just like fossil fuels. Its power output is not dependent on the season, the weather or the time of the day. In fact, humans can be considered renewable energy sources and batteries at the same time.
Unlike fossil fuels, human power can be a clean energy source, which produces little or no air pollution and soil contamination. Moreover, the potential of human power increases as the human population grows, while all other energy sources need to be shared among an ever-growing number of people. Furthermore, unlike solar panels, wind turbines, and batteries, humans don't need to be manufactured in a factory. In combination with the right diet, human power is carbon neutral.
The potential of human power increases as the human population grows, while all other energy sources need to be shared among an ever-growing number of people.
Finally, humans are all-round power sources, just like fossil fuels. They not only supply muscle power that can be converted to mechanical energy or electricity, but also thermal energy, especially during exercise: a physically active human being can generate up to 500 watts of body heat. Furthermore, human waste can be converted to biogas and fertiliser. Arguably, human power is the most versatile and most sustainable power source on Earth.
Detail from the communal shower and laundry floor. Image: Golnar Abbasi.
Modern technology has greatly improved the potential of human power production. On the one hand, many electric devices have become very energy efficient. For example, solid state lighting consumes roughly one-tenth the power of old-fashioned lightbulbs, so that a quick workout can supply many hours of light. On the other hand, we now have much better technology for human power production, ranging from sophisticated exercise machines to biogas power plants.
Lessons from the Gym
The power output of a human being is determined by three factors: the person, the duration of the effort, and the mechanical device that is used to convert human power into useful energy -- human power generation is often a symbiosis between man and tool or machine. Our legs are roughly four times stronger than our arms, which means that a human on a stationary bicycle machine can produce more power (75 to 100 watts) than a human operating a small hand crank (10 to 30 watts).
During shorter efforts, the mechanical power output of a human being can increase substantially: up to 500 watts on a bicycle and up to 150 watts while operating a hand crank over a period of one minute. However, age, gender and fitness also play an important role. Athletes can generate more power for a longer period of time -- up to 2,000 watts during three seconds, or up to 400 watts during one hour. So much for the theory, which is far from complete.
Exercise machines for strength training are an interesting addition to stationary cycling machines for human power production.
During the research phase for the Human Power Plant, we followed a fitness programme to become better human power sources. This was a very instructive experience. One of the first things we learned is that there are important differences between individuals, even if they have similar age, gender and fitness.
Melle, the powerhouse in our team, could lift a heavier weight on almost any exercise machine. Kris, on the other hand, appeared to have better endurance, and could beat Melle with triceps and shoulder exercises. Such differences should be taken into account in order to achieve optimal energy production -- there is no ready-made solution.
We also found out that exercise machines for strength training can produce a lot of power in a very short time, making them an interesting addition to stationary cycling machines for human power production. A five minute workout (including two breaks of one minute each) can supply more than 15 Wh of electricity, enough to charge a quarter of a laptop's battery or to power a desk lamp for 3 hours.
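The arithmetic behind these figures is easy to verify. A minimal sketch, assuming a typical 60 Wh laptop battery and a 5 W LED desk lamp (both are assumed values, not figures from the article):

```python
# Checking the strength-training figures above.
# Assumed values (not from the article): a 60 Wh laptop battery, a 5 W LED desk lamp.
workout_energy_wh = 15        # electricity from a five-minute workout, per the text

laptop_battery_wh = 60        # assumed typical laptop battery capacity
lamp_power_w = 5              # assumed LED desk lamp draw

battery_fraction = workout_energy_wh / laptop_battery_wh
lamp_hours = workout_energy_wh / lamp_power_w

print(f"Charges {battery_fraction:.0%} of the laptop battery")  # 25% -- a quarter
print(f"Powers the desk lamp for {lamp_hours:.0f} hours")       # 3 hours
```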
Finally, we quickly discovered that gyms are pretty boring places. The exercise equipment is often positioned in such a way that people all look in the same direction, which excludes all but the most primitive communication. And, while a stationary bicycle is considered to be the most energy-efficient human power machine, we found out that stationary cycling is no fun at all.
How to Motivate Human Power?
The last point deserves extra attention. Unlike a windmill, a solar panel or an oil barrel, human power needs to be motivated in order to produce energy. If we make a switch to human power production, would everybody generate their own power for the sake of sustainability? Would people pay others to do it for them? Or, would people force others to do it for them?
A financial reward won’t do the trick, because at the current energy prices in the Netherlands, a human generating electricity would earn only 0.015€ per hour. Consequently, unless environmental awareness increases dramatically, the use of human power could open the door to new forms of slavery. Is such slavery justified for a reduction in CO2-emissions? Could we force refugees or criminals to produce power?
At the current energy prices in the Netherlands, a human generating electricity would earn only 0.015€ per hour.
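The wage figure follows directly from electricity prices. A minimal sketch, assuming a sustained output of 100 watts and a retail price of 0.15 euro per kWh (both are assumptions; the article gives only the result):

```python
# Reproducing the 0.015 euro/hour figure under stated assumptions.
power_output_kw = 0.100     # assumed sustained human power output: 100 W
price_per_kwh = 0.15        # assumed Dutch retail electricity price, in euros

hourly_wage = power_output_kw * price_per_kwh
print(f"{hourly_wage:.3f} euro per hour")  # prints "0.015 euro per hour"
```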
These are disturbing questions, because the history of human power is -- broadly -- also the history of slavery. These days we import oil, coal and uranium; in the past, we imported slaves. Luckily, there may be a third possibility. We can try to motivate people by making human energy production more fun, social, and exciting.
The few commercially available devices for human energy production are entirely focused on energy efficiency -- there's no attention to fun or motivation. They are also designed for emergency purposes, not for prolonged and daily use. For example, most hand cranks are made as compact as possible, while a larger device would be much more comfortable to use.
For the design of our prototype human power plant, we wanted to address these issues. We teamed up with makers and sports coaches to develop fitness machines that are suited for different types of human power sources, are fun to use, and produce a maximum amount of power.
To make power production more social, we decided that power producers should be able to talk to each other. They can even bring their pets to help with power production, creating a cosy and home-like atmosphere. This is not a new idea: dogs were commonly used as a source of mechanical power in pre-industrial times, and also provided their owners with a source of warmth.
Water Under Pressure
For extra motivation, all exercise machines in our prototype human power plant are facing a jacuzzi & shower where girls are invited to encourage the boys to flex their muscles and generate more power. Of course, the gender roles could be reversed, but during the first experiments we discovered that this is less energy-efficient. Girls don't seem to get motivated by guys in jacuzzis, at least not to the extent that guys get motivated by girls in jacuzzis.
The jacuzzi is not a gimmick, but an essential part of the prototype human power plant. That's because we opted for water under pressure as the energy carrier. Humans and their pets pump water into a pressure vessel; the pressurised water is then led to water turbines that supply mechanical energy and electricity. The jacuzzi is the receiving reservoir of this closed system.
By choosing water under pressure, we want to make energy more visible and audible. More importantly, however, it allows us to produce electricity without the use of batteries and electronics -- which are not sustainable components. In our human power plant, the hydraulic accumulator takes the place of the battery and the voltage regulator. Small variations in human power production can be smoothed out, keeping the voltage constant. Longer-term energy storage is provided by the humans themselves.
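To get a feel for the scale, the energy held by such an accumulator can be approximated as pressure times displaced volume. The pressure and volume below are illustrative assumptions; the article gives no specifications:

```python
# Illustrative energy content of a small hydraulic accumulator.
# Both figures below are assumptions -- the article gives no specifications.
pressure_pa = 10 * 100_000   # 10 bar working pressure, in pascals
volume_m3 = 0.010            # 10 litres of water displaced per discharge

energy_j = pressure_pa * volume_m3   # E = p * dV (constant-pressure approximation)
energy_wh = energy_j / 3600

print(f"{energy_wh:.1f} Wh per discharge")  # about 2.8 Wh
```

At these assumed values the accumulator holds only a few minutes' worth of pedalling, which is why it serves as a smoothing device rather than a battery: longer-term storage remains with the humans.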
To find out if we could sustain a modern lifestyle with human power alone, we teamed up with architects to design plans for the conversion of a 22-floor tower building into an entirely human powered student community of 750 people.
The Willem C. Van Unnik building is the tallest building on the campus of Utrecht University. The concrete, steel and glass monolith, which occupies a central position on the campus, was built in the late 1960s and has been mostly empty for years. Maintaining it is a significant cost for the university, which owns the building.
A time schedule tells the students when they have to produce electricity and heat, and when to perform other services for the community.
Because the university has the ambition to become carbon neutral by 2030, we propose to turn a problem into an opportunity. The ecological footprint of the human powered Van Unnik student community will be close to zero, and the building is already there.
Each student in the human powered Van Unnik student building is responsible for generating the electricity that’s used in his or her individual room. The lower floors of the building are reserved for communal energy production, providing both electricity and warmth. This energy is used to heat the building, prepare food, wash clothes, take showers, and so on.
More energy is supplied by a biogas plant, which is operated by the students and runs on their food waste and excrements. A time schedule tells every student when he or she has to produce electricity and heat, and when to perform other services for the community.
Power Generation Schedule
According to our preliminary calculations, an entirely human powered student building is achievable. The students would maintain a modern lifestyle, including hot showers, computers, and washing machines. On the other hand, they would have to produce energy for 2 to 6 hours per day, depending on the season and their individual and communal preferences.
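Those preliminary calculations can be roughed out as follows. The 2 to 6 hours and the 750 students come from the text; the 75 to 100 watts of sustained output per student is an assumption based on the cycling figures earlier in the article:

```python
# Rough range of daily human power production for the whole building.
students = 750
hours_per_day = (2, 6)            # from the text: best and worst case
power_watts = (75, 100)           # assumed sustained output per student

low_kwh = students * hours_per_day[0] * power_watts[0] / 1000
high_kwh = students * hours_per_day[1] * power_watts[1] / 1000

print(f"{low_kwh:.1f} to {high_kwh:.1f} kWh per day")  # 112.5 to 450.0 kWh per day
```

Even the high end is a modest energy budget for 750 people, which is why the design also leans heavily on reducing demand.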
A human powered student community has enormous potential for a reduction in energy use. If students have to generate their own power, they are much less likely to waste it. How far would students go to reduce their efforts? Would hot showers go out of fashion? Would salads be the next culinary trend? Would typewriters make a comeback?
Energy use is also lowered by encouraging the communal organisation of daily household tasks, just like in the old days. Finally, the human powered student community applies low-tech solutions, such as fireless cookers, thermal underclothing, and heat exchange showers, which all maximize comfort in the context of a limited energy supply.
The design of the building and the construction of the prototype human power plant is documented on a separate blog: Human Power Plant. It's a work-in-progress, and comments are welcome. Once the project is complete, we will post an update on Low-tech Magazine.
Kris De Decker & Melle Smets.
- The forgotten future of the stationary bicycle
- The short history of early pedal powered machines
- Hand powered drilling tools and machines
- Human powered cranes and lifting devices
- The velomobile: high-tech bike or low-tech car?
- Power water networks
- How to get your apartment off the grid
The Romans are credited with the invention of the first smoke-free heating system in Western Europe: the hypocaust. Until recently, historians had assumed that its technology was largely lost after the collapse of the Roman Empire. In fact, however, it lived on in large parts of Europe, and was further developed into the “heat storage hypocaust”, an underground furnace on top of which granite stones would be piled, to then release hot air through vents in the floor. By this means, a room could be kept warm for days with just one firing of the hypocaust's furnace.
Hypocausts were heating systems that distributed the heat from an underground fire throughout a space beneath the floor. The heat was absorbed by the floor and then radiated into the room above. The effect on thermal comfort must have been similar to that of a modern-day hot water or electricity-based radiant floor heating system. The Roman hypocaust was characterised by its under-floor flue passages, created by small pillars bearing the floor's paving slabs. Sometimes, the heat was also fed through cavities in the walls before escaping from the building, thereby warming up the walls, too.
The Romans were not the first to develop a heating system in which the heat from a fire was fed under the floor from one side of a room to the other. The Chinese kang and dikang, the Korean ondol and the Afghan tawakhaneh were based on similar principles and date back to even earlier times. What's more, the Romans probably learned the technology from the Greeks. Nevertheless, it was the Romans who developed the hypocaust into a more sophisticated heating system, especially in their public bath houses, which were built all across Europe and around the Mediterranean.
For a long time, historians believed that the fall of the Roman Empire in around 500 AD marked the start of a hiatus in Europe's use of smoke-free heating. In reality, although most public baths fell into disrepair in the Western Roman Empire, hypocausts continued to be built and used in the Early Middle Ages, especially in monasteries. The technology also lived on in the Eastern Roman (Byzantine) Empire and was adopted in the hammams of the Arabs, who reintroduced the hypocaust to Western Europe when they built the Alhambra palace in the 13th century.
Smaller and cheaper systems, using ducts instead of pillars, also continued to be used, especially in smaller buildings. These hypocausts only heated part of the floor, but were much easier to build. We found just such a hypocaust in a remote village in Spain, which is still in use today.
Heat Storage Hypocausts
With the spread of Christianity and its monasteries to Northern Europe, the Roman hypocaust proved too inefficient for the region's colder climes. The first half of the 14th century, or possibly even earlier, saw the start of the practice of piling up granite stones on top of the furnace vault to accumulate heat. [1, 2] Far from a simplified medieval imitation, the heat storage hypocaust represented a further stage in the development of this ancient technology.
When the firing was complete, the vents in the hot plate were opened and hot air rose from the pile of stones into the room to be heated.
Unlike the Roman hypocaust, which was based on radiant heating, the heat storage hypocaust provided convective heating. The room to be heated featured a perforated “hot plate” above the pile of granite stones. Its perforations remained closed while the fire was burning, so that the smoke was kept out of the room and could escape through the chimney or a cavity in the wall. When the firing was complete and the furnace had been cleaned, the smoke flue was closed by means of a damper, the vents in the hot plate were opened and hot air rose from the pile of stones into the room. [2, 3]
Air vents in the floor of the Malbork castle in Poland. Picture: Robert Young. The hot air vents were usually round in shape and 10 to 12 cm in diameter.
Because of their poor heat storage capacity, Roman hypocausts had to be fired continuously. Adding a stone chamber to create the heat storage hypocaust made it easier to accumulate heat, meaning it was no longer necessary to keep the furnace constantly lit. In 1822, a number of experiments were conducted to establish the effectiveness of a then 400 year-old heat storage hypocaust in Poland's Malbork Castle. One such experiment involved heating the castle's 850 square-metre banqueting hall. [1-3]
A Weekly Fire
On 3 April, a cold furnace was lit for three and a half hours using 0.7 cubic metres of spruce wood. When the vents in the hot plate were opened, hot (200°C) air rushed into the banqueting hall, raising its temperature from 6 to 22.5°C in just 20 minutes. The air vents were then closed. By the following morning (4 April), the room's air temperature had fallen to 14°C. The air vents were opened and the temperature rose to 19°C in one hour, without any additional fire being lit.
A full six days after the fire was extinguished, the air rising from the vents had a temperature of 46°C
On 5 April, the temperature of the air escaping through the vents was 94°C and the room temperature rose from 10 to 16°C in half an hour. On 6 April, three days after the fire was extinguished, the air was still hot enough to raise the room's temperature from 10 to 12°C. Even on 9 April, a full six days later, the warm (46°C) air rising from the vents managed to lift the temperature in the hall from 8 to 10°C.
During his 1438 trip through Europe, the Spanish traveller Pero Tafur wrote that people placed "seats above the holes, also with holes in them. The people then sit down on those seats and unstop the holes and the heat rises between the legs to each one".  This is reminiscent of the footstoves used in Northern Europe during the Middle Ages.
Above: The heat storage hypocaust in the Malbork castle in Poland. Source: J. Kacperska.
Baltic Sea Region
The heat storage hypocaust was mainly used in the Baltic Region: Northern Germany, Denmark, Sweden, Finland, Estonia, Latvia, Lithuania, and Poland. To a lesser extent, it has been found further to the south and east, in places such as Western and Southern Germany, Switzerland, Austria, the Czech Republic, Hungary and Russia. Most were built in the 1400s and 1500s.
Research into the history of heat storage hypocausts continues today. In his groundbreaking 1998 study, Klaus Bingenheimer estimated that Medieval Europe boasted a total of 500 hypocausts, of which 154 were of the heat storage variety.  Since then, however, many more have been discovered. For example, while Bingenheimer had evidence for only two heat storage hypocausts in Estonia, a 2009 paper by Andres Tvauri listed 95 heat storage hypocausts, either still standing or whose location had been documented. 
According to the latest estimates, there must have been at least 800-1,000 heat storage hypocausts around the Baltic Sea
In total, around 500 heat storage hypocausts have now been documented in the Baltic Region and, according to the latest estimates, there must have been at least 800-1,000 of them by the end of the 15th century, their use spreading from monasteries and castles to other public buildings, such as almshouses, town halls, guildhalls and hospitals. In Old Livonia, which covered present-day Estonia and Latvia, the technology also found its way into private homes. In Tallinn, Estonia's capital, a heat storage hypocaust was not the exception but the rule, and at least 54 such systems have been discovered there.
Hypocausts in Tallinn
Andres Tvauri's overview of the heat storage hypocaust in Estonia, one of the few resources available in English, provides a wealth of technical details. Special covers or plugs, made of metal, stone or fired clay, were made to seal the hot air vents in the floor's “hot plates”. Small ceramic dishes have been found, placed on the hot stones directly under these venting holes. It is assumed that water was poured on them, to produce steam and thereby increase the air humidity level.
Remains of heat storage hypocausts in Tallinn, Estonia. Source:  Kaarel Truu, 2016. In Tallinn's homes, the subterranean stoker's room of the hypocaust and the heated bedroom on the ground floor were usually connected by a flight of stairs.
The furnace was covered with a barrel vault on which the stones, with diameters of 40 to 50 cm, were piled to accumulate heat. The vault's bricks were laid to form three or four arches, with intervals of about 20 cm between them. Medieval builders probably used an old vat to help shape the arches of the vault; when the furnace was completed, a fire was built in the vat.
A furnace's dimensions would depend on the size of the room to be heated. In private homes, where only the bedroom was heated, it would be one to two metres long, a little more than a metre wide and 50 to 60 cm high. In public buildings and monasteries, where large halls and rooms had to be heated, the furnaces would be much larger.
Heat storage hypocausts were only used for a fairly short period of time. By the fifteenth century, glazed tile stoves were already spreading through the Baltic countries. The tile stove is a radiant heating system with an interior maze of brick or stone channels designed to accumulate a fire's heat. It was more convenient to use and to build than the hypocaust, not to mention more energy efficient, as it takes less energy to heat people than to heat spaces.
Although it was possible to heat at least two separate rooms by means of one furnace, as a rule, the hypocaust was located under the heated room or rooms, which were always on the ground floor. Tile stoves could be built anywhere, even on a building's upper floors. Over the course of the 16th century, Old Livonia stopped using the heat storage hypocaust, which was replaced by a glazed tile stove, often built exactly where the hypocaust's furnace had previously stood. Elsewhere, in Poland for example, some heat storage hypocausts remained in use until the 18th and 19th centuries.
Kris De Decker. Edited by Roly Osborne.
 Atzbach, R. "The 'Stube' and its Heating: Archaeological Evidence for a Smoke-Free Living Room between Alps and North Sea". In: Svart Kristiansen, M. & Giles, K. (eds.), Dwellings, Identities and Homes: European Housing Culture from the Viking Age to the Renaissance, 2014.
 Bingenheimer K. "Die Luftheizungen des Mittelalters. Zur Typologie und Entwicklung eines Technikgeschichtlichen Phänomens", 1998
 Truu, K. "Keskaegsed kerishüpokaustid Tallinna vanalinnas" ("Medieval heat storage hypocausts in Tallinn's Old Town"), 2016
- Restoring the old way of warming: Heating people, not spaces
- Radiant and conductive heating systems
- The solar envelope: how to heat and cool cities without fossil fuels
- Insulation: first the body, then the home
- Medieval heating system lives on in Spain
Aaron Vansintjan takes to the streets of Hanoi, where the Vietnamese practice a food culture based largely on fermentation.
Although food spoils much faster in a tropical climate, the Vietnamese will often store it without refrigeration, and instead take advantage of controlled decay. Vietnam's decentralised food system has low energy inputs and reduced food waste, giving us a glimpse of what an alternative food system might look like.
Picture: Street food in Hanoi, Vietnam. Maxime Guilbot.
In a tropical climate, everything decays faster. Bread gets soft and mushy, milk spoils, the walls get moldy just months after a layer of fresh paint. Food poisoning is a constant concern. The heat and moisture make for an ideal breeding ground for bacteria and fungi. In this environment, you’d think people would be wary of any food product that smells funny. But in tropical Vietnam, food can get pretty pungent.
Take mắm tôm, a purplish paste made of fermented pureed shrimp. Cracking open a jar will result in a distinct smell of ‘there’s something wrong here’, with hints of marmite, that overwhelms the whole room. Then there’s chao, a stinky fermented tofu, so rank that the smallest bite shot up my nose and incinerated my taste buds for an hour (‘Clears the palate!’ said the waiter encouragingly).
Consider rượu nếp, which is sticky rice mixed with yeast and left to ferment for several days ‘in a warm place’ — i.e. the counter. The result is a funky-smelling dessert — literally rice left to rot until it turns into a sweet wine pudding. On the 5th of May of the lunar calendar, Vietnamese people will eat rượu nếp in the morning to celebrate ‘inner parasite killing day’. Bonus: day-drunk by the time you arrive at work.
We shouldn’t forget Vietnam’s world-famous fish sauce — nước mắm — made from diluted fermented fish, a flavour that many people around the world continue to find totally intolerable.
In Vietnam, putrefaction is accepted as a part of life, even encouraged. But fermentation in Vietnam isn’t just an odd quirk in a tropical diet. To understand why fermentation is so integral to Vietnamese culture, you have to consider how it is embedded within people’s livelihoods, local agricultural systems, food safety practices, and a culture obsessed with gastronomy; where food is seen as a social glue. And when you bring together all these different puzzle pieces, an enchanting picture emerges: one in which fermentation can be a fundamental component of a sustainable food system.
Unlike many high-tech proposals like ‘smart’ food recycling apps, highly efficient logistics systems, and food packaging innovations, fermentation is both low-tech and democratic—anyone can do it. What’s more, it has low energy inputs, brings people together, is hygienic and healthy, and can reduce food waste.
Rotting Food can be Safe and Healthy
At the entrance of a market in Hanoi, a woman with a dưa chua stand tells us that making ‘sour vegetables’ is easy: you just add salt to some cabbage and let it sit for a couple of days. As we talk, several customers come by, eager to scoop some brine and cabbage into a plastic bag. Worried that we’re discouraging her customers, she shoos us away. She isn’t lacking business.
Is fermentation really so effortless? The short answer is yes. Many recipes call for just two things: water and salt. At a 1:50 ratio (2%) of salt to food, you create an environment undesirable for all the bad bacteria and encourage all the good ones. Sauerkraut, kimchi, fish sauce, sriracha, and kosher dill pickles are all made according to this principle.
Yet other types of fermentation are a bit more complicated. They call for sugar (e.g. wild fermented alcohol like Ethiopian honey wine), yeast starters (rượu nếp, most wines and beers), special fungi (tempeh, miso), or some combination of fungi, bacteria, salt, or sugar (kombucha). Yet others are simpler: to make cooking vinegar, just let that bottle of bad wine sit for a couple of days; to make sourdough, just mix water and flour and leave it on your counter.
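The basic salt ferment described above comes down to a single ratio. A minimal sketch (the quantities are illustrative, not from the article):

```python
# The 2% (1:50) salt-to-food rule of thumb for plain salt ferments.
def salt_for_ferment(food_grams, ratio=0.02):
    """Return grams of salt for the given weight of vegetables."""
    return food_grams * ratio

cabbage_g = 1000  # one kilogram of cabbage, an illustrative quantity
print(f"{salt_for_ferment(cabbage_g):.0f} g of salt")  # prints "20 g of salt"
```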
Fermentation is both low-tech and democratic. It can be a fundamental component of a sustainable food system
All in all, fermentation is just controlled decay: your most important ingredient is time. This can sound like a bit too much, too fast. Take the woman I met at the entrance of the market. Her dưa chua, while in great demand, looks like wilted cabbage, soppy, floating in murky brine. Some bubbles are forming on the edges of the plastic container—for the trained eye a sign of an active fermentation process, but for the uninitiated, an alarm bell.
There’s no use beating about the bush. That dưa chua is in fact rotting, in much the same way that a peat swamp is constantly rotting, belching large doses of methane into the world. What’s happening is an anaerobic fermentation—that is, one without significant amounts of oxygen. The absence of oxygen and the high levels of salt create an environment supportive of several bacteria that also find their home in our own digestive systems.
Those bubbles forming in the container are by-products of these bacteria: CO2 and methane. The bacteria also lower the pH and start breaking down raw food—essentially pre-digesting it for you. And, once the pH goes down even lower, you’ve created a monster so voracious that no other fungus, bacteria, or parasite with bad intentions will dare to enter its domain. So yes, it’s rotting just like a stinky swamp, and that’s a good thing.
It’s a good thing especially in a climate like that of Vietnam. Every fermentation is a small victory in the constant war against heat and humidity, which destroy all edibles in their path. Instead of eating raw cabbage and risking death by a thousand E. coli, you can eat fermented cabbage and know, for a fact, that it won’t have you hunkering by the toilet bowl any time soon.
Not only that, but eating fermented food has significant health benefits. You might’ve noticed the new fad of ‘probiotic’ foods—all that really means is that the product contains some kind of active bacterial culture that resembles the flora in your own gut. That would include not just Go-gurt, Yoplait, Chobani, and Danone, but also several kinds of cheese, pickles, beer, and just about any other fermented product.
Eat about a tablespoon of any of these at the end of every meal, and you inoculate your stomach with a fresh batch of microbes that help you digest—all the more necessary when we eat antibiotics in our meat and bland diets of white bread and peanut butter, and drink chlorine in most municipal water systems.
Further, products like fish sauce and shrimp paste provide many impoverished Vietnamese with micro-nutrients, B-12 vitamin, proteins, and omega 3 fatty acids—comprising a significant part of people’s nutritional requirements. For a country that still remembers hunger and starvation, this is no small fry.
A Diverse Food System
In the same market we talk to a vegetable vendor. Real estate in the neighborhood is getting more expensive, and rents are going up. She’s having a hard time making ends meet. On her street many elderly residents have sold their farmland—on which they used to grow vegetables and decorative flowers—and now, unemployed, they spend their time selling home-made fermented vegetables out of their front doors.
In the same neighborhood, we meet Tuan, an elderly woman growing vegetables on the banks of a drained pond. She rarely goes to the market—she can grow much of her own food in this little patch. We ask her if she ever ferments her vegetables. Of course, but she doesn’t sell them—they’re just for herself and her family.
If you want a localised food system, you need to be able to store your food for long periods. Fermentation makes that possible.
After several months of studying Hanoi’s food system and the people who make their living off of it, Vân (my Vietnamese collaborator) and I are starting to see some patterns. In Western countries, the food system is shaped a bit like an hourglass: industrial farmers send their food to a supplier, who then engages with a handful of supermarket companies, who then sell to consumers.
In Vietnam, on the other hand, it looks more like an intricate web: wholesale night markets, mobile street vendors, covered markets, food baskets organized by office workers with family connections to farmers, guerilla gardening on vacant land. Food is grown, sold, and bought all over the place, and supermarkets are just a small (albeit growing) node in the complex latticework. Most people still get food at the market, but many also source their food from family connections.
In Vietnam, many people might have one ‘profession’, but when you ask a few more questions it turns out that they have half a dozen other jobs for ‘extra income’. There’s a generalised ‘hustle’: everyone is a bit of an entrepreneur. After talking with Tuan for several hours, we learned that she has, throughout her long life, fished; grown vegetables, corn, and fruit trees; and sold rice noodles, bread, ice cream, roses, and silk worms. Now, aged 68, she grows decorative peach trees and vegetables when she can.
With an economy just decades removed from a highly regulated communist regime, in which the only food you could get was through rations, and with the memory of famine still fresh in people’s minds, this is entirely understandable: with a finger in every pot, you can just about manage to survive. These two factors, a highly distributed food system and diversified livelihoods, make for a fertile environment for fermentation practices. With easy access to wholesale produce, many can turn to small-scale fermentation to complement their income—or, in the case of Tuan, to spend less on food at the market.

Preserving the Harvest, Bringing People Together
Vietnam hosts both the Red River delta and the Mekong delta—two of the most productive agricultural regions in the world. The heat and the vast water supply allow some areas of Vietnam to have three full growing seasons. That means three harvests, and that means lots of food at peak times, and sometimes so much that you can’t eat it all. That’s another bonus of fermentation: if your food system is local, you’re bound to stick to seasonal consumption. But by fermenting your harvest you can eat it slowly, over a long time period. It’s this principle that underlies much of fermentation culture in East Asia.
Take kim chi, a spicy fermented cabbage from Korea. Traditionally, the whole village would come together to chop, soak, salt, and spice the cabbage harvest every year. Then, these mass quantities of salted spicy cabbage were stored in large earthenware pots underground—where cooler temperatures lead to a more stable fermentation process. As a result, you can have your cabbage all year. If you want a localised food system, you need to be able to store your food for long periods. Fermentation makes that possible.
Food fermentation is a strange thing: it inverts what many regard as waste and turns it into a social, living, edible object.
Fermentation is also social. Fermenting large batches of summer’s bounty typically requires hours of chopping—the more the merrier. And chopping is the perfect time for sharing cooking tips, family news, and the latest gossip. In South Korea, now that kim chi production has been largely industrialized, people try to relive the social aspect of making it through massive kim chi parties in public spaces.
In a country like Vietnam, where a traditional food system still exists in large part, fermentation remains embedded in social relations. Relatives and neighbors constantly gift each other fermented vegetables, and many dinners end with a batch of someone’s homebrewed rice wine—rượu men. Fermentation lends itself well to a gift economy: there is pride in your own creation, but there is also no shame in re-gifting. And because of its low costs, anyone can take part in it.

Gastronomy, Tested with Time
It is a bit disingenuous to caricature Vietnam’s food culture as obsessed with rotting, and to suggest that this is largely the result of a tropical climate. Rather, what we’re dealing with here is a difference in taste: what may seem strange and pungent to one culture is highly appreciated in another. In fact, one of the greatest impressions I have of Vietnamese culture is its deep appreciation for gastronomy: subtle, complex flavours, considered textures, modest spicing and well-balanced contrasts define Vietnamese cuisine.
Fermentation is a crucial part of this culture: the art of fermentation requires paying attention to how flavours change as food transforms, understanding these chemical shifts and using them to achieve a desired effect. It’s also clear that Vietnamese gastronomy is popular: it takes place in street food stalls, run by enterprising matriarchs, constantly experimenting with modern products and traditional flavors. It is cheap and, to ensure customer loyalty, it is surprisingly hygienic.
Street vendors rarely have fridges, nor do they have large cooking surfaces, dishwashing machines, or ovens. By and large, they make do with some knives, two bowls to wash fresh vegetables in, a large pot, a frying pan, coals or gas burners and — for products that may go bad during the day — fermentation. Having limited access to capital and consumer electronics, these vendors — most often women — ply their trade in a way that has stood the test of time.
They know the rules of hygiene and food safety, and, because they have to be careful with their money, they know exactly what kinds of food will go bad, and what kinds of food can be preserved. In doing so, they practice a food culture that has been passed down through generations—from a time before fridges and before a global food system powered by container shipping, factory trawlers, and produce delivered to far-off markets by airplane.
While modern technology has provided many benefits for our diets, there are many innovations from the past that have been abandoned as the global food system was transformed by the availability of cheap fuel. One such innovation was the fish sauce industry that flourished during Ancient Roman times. For Romans, fermenting fish was a crucial aspect of a low-tech and seasonally-bound food system. In fact, it so happens that research now suggests Vietnamese fish sauce may actually have its origins in the Roman variant produced over 2,000 years ago.
Today, however, fermentation doesn’t fit so easily within the global food system. Harold McGee at Lucky Peach tells the story of how canned products were notoriously difficult to transport in the newly industrialized food system of the 19th century. Apparently, until the 20th century, metal cans would regularly explode, sending shrapnel and preserved tuna flying through the decks of transport ships. This was due to heat-resistant bacteria, which continued fermenting the product long after it was heat-treated.
Fermented food has to be produced locally: transporting it will risk explosions on the high seas
The solution was to subject the canned product to high temperatures over a long period of time, killing all remaining cultures, in turn changing their flavor. But in the case of fermented food, the problem has not gone away: if you want it to be actively fermenting, transporting it will risk explosions on the high seas. But heating stops the fermentation process, and kills its unique flavor.
It’s for this reason that products like kim chi, kombucha, and sauerkraut often have to be produced locally, despite increasing global demand. In some way, fermentation defies the industrial food system: the fact that it is alive means that it doesn’t quite fit in. You either have to kill it, and thereby change it, or it will keep bubbling through the cracks.

A Low-tech Food System is Possible
Fermentation cultures in Vietnam give us a glimpse of what an alternative food system might look like: one that is decentralized, and one that doesn’t depend on high inputs of fossil fuel energy, high waste, and high-tech equipment to preserve food. Why does this matter? Well, in a world facing climate change, we need a low-impact food system, and fast.
But there are other reasons: with increasing concern over the health effects of common chemicals such as BPA, found in almost all cans and pasta sauce jars, people are looking for safer kinds of preservation that aren’t slowly poisoning them and their families. And with the rise of the local food and food sovereignty movements, many are realising that we need food systems that support everyone: from small farmers to low-income families.
Because of its low investment costs, fermentation lends itself well to supporting small businesses, allowing them to take advantage of seasonality while practicing a time-tested low-tech method of food preparation. Today, in response to increasing food insecurity, we are hearing increasing calls for a smarter, more efficient food system. Proposals such as intensive hydroponic and vertical farming, big data-powered logistics systems, smart agriculture technologies, and food waste recycling apps clog the news.
But we already have a low-tech innovation that works very well. Fermentation, because it is accessible to everyone, because of its low energy requirements, and because it fits right in to a more sustainable food system, should not be abandoned in the search for global food security.
A fish sauce factory in Vietnam. Source: Mui Ne info & events.
It’s easy to get the impression that we live in a world of scarcity, where there just isn’t enough food to go around, and food production all around the world is limited by technological backwardness. On the other hand, many of us are more and more concerned with the increasing problem of food waste in Western food systems. We seem to live in a world of both scarcity and abundance at the same time.
Food fermentation is a strange thing: it inverts what many regard as waste and turns it into a social, living, edible object. As a friend of mine once said, if you have too many grapes, you make wine. If you have too much wine, you throw a party. If you still have too much wine, you make vinegar. Fermentation turns scarcity and abundance on its head, belying easy categories of what is waste and what is too much.
Sustainability advocates worry a lot about making the ‘supply chain’ more ‘efficient’ — that is, increasing profit margins while making sure all food reaches consumers in a perfectly fresh state. Instead, we could consider taking advantage of decay. This isn’t hard: you just have to add some salt and water. We’ve done it for thousands of years, and, if we follow the example of food cultures like those in Vietnam, we can do it again.
The information society promises to dematerialise society and make it more sustainable, but modern office and knowledge work has itself become a large and rapidly growing consumer of energy and other resources.
Fantasy skyline. Image credit: Skyscrapercity.

Welcome to the Office
These days, it's rather easy to define an "office worker": it's someone who sits in front of a computer screen for most of the working day, often in a space where others are doing the same, but sometimes alone in a "home office" or with a few others in a "shared office". In earlier times, many office workers were used not for their knowledge or intelligence, but for the mere objective capacity of their brains to store and process information. For example, "computers" were office workers who made endless calculations with the help of mechanical calculating machines. This category of office workers has become comparatively less important, because inanimate computers have taken over many of their jobs. Most office workers -- so-called "knowledge workers" -- are now paid to actually think and be creative.
There's a good chance that you are one of them. Roughly 70% of those in employment in industrial nations now have office jobs. The share of office workers in the total workforce has increased continuously throughout the twentieth century. For example, in the USA, the information sector employed 13% of workers in 1900, about 40% of workers in 1950, and more than 60% of workers in 2000. The spectacular and so far unstoppable growth in the number of office workers is believed to have led to a so-called information society, an idea popularised by Fritz Machlup in his 1962 book The Production and Distribution of Knowledge in the United States, and since then repeated by many others.
Downtown Chicago. Photo credit: Charles Voogd, Wikipedia Commons.
Interestingly, there's no agreement as to what an information society actually is, but the most widely accepted definition is a society where more than half of the labour force engages in informational activities and where more than half of the GNP is generated from informational goods and services. Some say that the information society is characterised by the use of modern IT equipment, but that does not explain the growth of office work during the first half of the twentieth century. Others have argued that there is a transition from an economy based on material goods to one based on knowledge. Their claim is that this shift from the "industrial society" to the "information society" would make the economy less resource intensive. 
Indeed, unlike workers in manufacturing, service or agricultural industries, office workers don't really produce anything besides paper documents, electronic files, and a lot of chatter during formal and informal meetings. However, the rise of office work has not lowered resource use -- on the contrary. For one thing, supporters of the sustainable information society ignore the fact that we have moved most of our manufacturing industries (and our waste) to low wage countries. We are producing and consuming more material goods than ever before, but the energy use of these activities has vanished from national energy statistics. Second, modern office work has itself become a large and rapidly growing consumer of energy and resources.

The Energy Footprint of Office Work
The energy use of office work consists of multiple components: the energy use of the building itself (office equipment, heating, cooling and lighting), the energy used for commuting to and from the office, and the energy used by the communications networks that office work depends on. It also includes the energy used by people who plug in their laptops somewhere outside the office, in a place that is likewise lit, heated or cooled. As far as I could find out, nobody has ever tried to calculate the energy footprint of office work, taking all these components into account. We know more or less how much energy is used by commuting and telecommunication, but we don't know how much of that is due to office work.
Most information is available for the energy use of office buildings -- the icons of today's global knowledge economy. However, even in this case information is limited because most national statistics do not distinguish between different types of commercial buildings. The main exception is the US Commercial Buildings Energy Consumption Survey (CBECS), which has been undertaken since 1979 and is the most comprehensive dataset of its type in the world. It further categorises offices into administrative or professional offices (such as real estate sales offices and university administration buildings), government offices (such as state agencies and city halls), banks and financial offices, and health service administrative centers.
The modern, American-style office building -- a design increasingly copied all over the world -- is an insult to sustainability. Per square metre of floorspace, US office buildings are twice as energy-intensive as US residential buildings (which are no examples of energy efficiency either). [5-10]
In 2003, the most recent year for which a detailed analysis of office buildings was presented (published in 2010), there were 824,000 office buildings in the USA, which consumed 300 trillion Btu of heat and 719 trillion Btu of electricity. The electricity use alone corresponds to 210 TWh, which equals a quarter of the total US electricity produced by nuclear power in 2015 (797 TWh with 99 reactors). In other words, the US needs 25 nuclear reactors to power its office buildings. From 2003 to 2012, the number of US office buildings grew by more than 20%.

How Did We Get Here?
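The conversion behind these figures is straightforward to verify. A quick sanity check, using the standard conversion factors (1 Btu ≈ 1,055 joules; 1 TWh = 3.6 × 10^15 joules), arrives at roughly the same numbers as the article; the small differences come from rounding:

```python
# Sanity check of the CBECS office-building electricity figures.
# Conversion factors: 1 Btu = 1055.06 J, 1 TWh = 3.6e15 J.
BTU_TO_J = 1055.06
TWH_TO_J = 3.6e15

electricity_btu = 719e12  # 719 trillion Btu of electricity (2003)
electricity_twh = electricity_btu * BTU_TO_J / TWH_TO_J
print(f"Office electricity use: {electricity_twh:.0f} TWh")  # ~211 TWh

nuclear_twh = 797  # US nuclear generation in 2015, from 99 reactors
print(f"Share of nuclear output: {electricity_twh / nuclear_twh:.0%}")  # ~26%, i.e. about a quarter
print(f"Equivalent reactors: {electricity_twh * 99 / nuclear_twh:.0f}")  # ~26, close to the 25 cited
```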
The US office building, which appeared with the arrival of the Industrial Revolution, was initially quite energy efficient. From the 1880s until the 1930s, sunlight was the principal means of illuminating the workplace and the most important factor in setting the dimensions and layout of the standard office building in the US. According to the NYC-based Skyscraper Museum:
"Rentability depended on large windows and high ceilings that allowed daylight to reach as deeply as possible into the interior. The distance from exterior windows to the corridor wall was never more than 28 feet (8.5 m), which was the depth some daylight penetrated. Ceilings were at least 10 to 12 feet (3 - 3.65 m) in height, and windows were as big as possible without being too heavy to open, generally about 4 to 5 feet (1.2 - 1.5 m) wide and 6 to 8 feet (1.8 - 2.4 m) high. If the office was subdivided, partitions were made of translucent glass to transmit light." 
Many office buildings had H-, T-, and L-shaped footprints that accommodated windows to encourage natural lighting, ventilation, and cooling. This changed after the introduction of fluorescent light bulbs and air conditioning. Produced at an affordable price in the late 1930s, fluorescent lighting provided high levels of illumination without excessive heat and cost. The first fully air-conditioned American office buildings appeared around the 1930s. The combination of artificial lighting and air-conditioning made it possible to design office space much deeper than the old standard of 28 feet. Light courts and high ceilings were ditched, and office buildings were reconceived as massive cubes -- which were much cheaper to build and which maximised floor space.
Air-conditioning also enabled the most characteristic feature of the modern office building: its glazed façade. From the 1950s onwards, under the influence of Modernist architecture, glass came to dominate in America -- early examples of this trend are the Lever Building (1952) and the Seagram building (1958). The US Modernist office building, a cube with a steel skeleton and glass curtain walls, is essentially a massive greenhouse that would be unbearable for most of the year without artificial cooling. Because glazed façades don't insulate well, energy use for heating is also high. In spite of all the glass, most US office buildings require artificial lighting throughout the day because many office workers are too far from a window to receive enough natural light.
Canary Wharf, London. Photo credit: David Iliff, Wikipedia Commons.
The arrival of electric office equipment from the 1950s onwards further increased energy use. According to the CEBECS survey, "more computers, dedicated servers, printers, and photocopiers were used in office buildings than in any other type of commercial building". According to the latest analysis, concerning the year 2003, American office buildings were using 27.6 million computers, 11.6 million printers, 2.1 million photocopiers, and 2.5 million dedicated servers. In addition to electricity consumed directly, this electronic equipment requires additional cooling, humidity control, and/or ventilation that also increase energy use. [5, 8]
While heating was the main energy use in pre-1950s office buildings, today cooling, lighting and electronic equipment (all operated by electricity) use 70% of all energy on-site. Note that this ratio doesn't include the energy that is lost during the generation and distribution of electricity. Depending on how electricity is produced, energy use at the source can be up to three times higher than on-site. Assuming thermal generation of electricity (coal or natural gas), the average US office building consumes up to twice as much energy for electricity as for heating.

Cultural Differences
Technology alone, however, does not explain the rise of the typical air-conditioned office building, nor its high energy use today. Although fluorescent light bulbs and air conditioning soon became available in Europe, the all-glass, cube-like office building remained for a long time a uniquely North American phenomenon. In the 1920s, office work in the USA came under the influence of Frederick Taylor's 'Scientific Management'. Time and motion studies, which had been carried out in factories since the 1880s, were now applied to office work as well. Men with stopwatches recorded the actions of (mostly female) employees with the aim of improving labour productivity.
Taylor's ideas were translated into office design through the concept of large, open floor spaces with an orderly arrangement of desks, all facing the direction of the supervisor. Private office rooms were abolished. By the late 1940s, American offices resembled factories in their appearance and methods. Although Taylorism left its mark on European offices, it was taken up with less enthusiasm and faced more resistance rooted in tradition than in the US. In the 1960s and 1970s, the Europeans rejected the application of Taylorist principles to office work more strongly, and developed their own type of office building. British office expert Frank Duffy calls it the "social democratic" office. [15, 16]
These buildings, "groundscrapers" rather than "skyscrapers", were designed like small cities, cut into separate "houses" that are united by internal "streets" or "squares". They were built with corridors and spacious rooms on either side, all naturally lit and ventilated, with employees working next to a window. The social democratic office building focuses on user comfort, a consequence of the fact that office workers in Europe, unlike those in the USA, obtained the right to form democratically elected workers' councils that could participate in organisational decision making. The UK, with its more American style of business, embraced the US approach in the 1980s. [15, 16]
An important difference between the "social democratic" office building and the "Taylorist" US/UK office building is that the former is usually owner-occupied, while the latter is generally a speculative building: it is built or refurbished to provide a return on investment, and rented by the room or floor. The speculative model is gaining ground: over the last two decades, US/UK-style office buildings have finally started spreading all over Europe, and beyond. Roughly 50% of new office buildings under construction in France and Germany -- the largest European markets outside the UK -- are now speculative buildings, roughly double their share in the 1980s.
This is bad news, because speculative office buildings exclude lower energy alternatives and raise energy use. First, in order to maximise the return on investment, they are usually designed as square or rectangular buildings with deep floor plans and low ceilings, and built as high as planning regimes allow. Naturally lit and cooled buildings require a more horizontal build and higher ceilings, both aspects that conflict with maximising floorspace. Second, those who design speculative office buildings don't know who will occupy the finished spaces, which leads to an over-provision of services.
"Developers and letting agents focus on the 'needs' of the most demanding tenants, and hence what is required for an office to be marketable to any tenant", write the authors of a recent study that looks into the energy demand of UK office buildings -- and concludes that 92% of such buildings are over-provisioned. Lighting, cooling and heating systems are attuned to unrealistic occupancy rates and are consequently producing more light, heat and cold than is necessary.

The Promise of Remote Working
If the high energy use of office work is questioned at all, it's usually followed by the proposal to work outside the office building. At least since the 1980s, home working has been touted as a trend with potential environmental benefits. Alvin Toffler's The Third Wave (1980) predicted that in the near future it would no longer be necessary to build offices because computers would enable people to work anywhere they wanted. In 1984, when personal computers had become common equipment in offices, Frank Duffy stated that "many office buildings quite suddenly are becoming obsolete". 
Financial District, Downtown Toronto. Paul Dex, Wikipedia Commons.
Obviously, no such thing happened: in spite of the personal computer, there are now more office buildings than ever before. However, the utopian vision of a radically changed work environment is still among us. Since the arrival of mobile phones, portable computers and the internet in the 1990s, the focus has shifted to "remote" or "agile" working, which includes working at home but also on the road and in so-called third places: coffeeshops, libraries or co-working offices. These concepts suggest that offices will become meeting places for 'nomadic' employees equipped with mobile phones and laptops, that the office will become a more diverse and informal environment, or that in the near future offices may no longer be necessary because we can work anywhere and at any time. According to a 2014 consultancy report:
"The term 'office' will become obsolete in the coming years. The modern workplace evolves into more of a shared workspace with flexible working arrangements that acts as more of a hub for workers on the go than an official place of work. The vast majority of jobs in most organisations can be accomplished from virtually any PC or mobile device, from just about anywhere". 
Frank Duffy, building further upon his 1980s predictions, writes in Work and the City (2008):
"The development of the knowledge economy and achievement of sustainability will both be made possible by the power of information technology... Office work can be carried out anywhere... In the knowledge economy more and more businesses, both large and small, will be operated as networks, depending at least as much on virtual communications as on face-to-face interactions. Networked organisations do not need to operate, manage or define themselves within conventional categories of workplaces or conventional working hours."

Does it Matter Where We Work?
On the face of it, more people working outside the office has obvious potential for energy savings. Home workers don't have to travel to and from the office, which can save energy -- after all, commuting has also become energy-intensive since the democratisation of the car in the 1950s. Furthermore, home office workers tend to use less energy for heating, cooling and lighting than they do in the office, a finding that corresponds with the fact that office buildings consume double the energy per square metre of floorspace compared to residential homes. 
However, there are many ways in which the environmental advantages of remote working can disappear or become disadvantages. First, remote workers make use of the same office equipment, the same data centers and the same internet and phone infrastructure as people working in an office -- and these are now the main drivers behind the increasing energy use of office buildings. In fact, a networked office would surely increase energy use by communication services, because face-to-face meetings at the office are replaced and complemented by virtual meetings and other forms of electronic communication.
In Work and the City, Frank Duffy recalls his participation in a videoconferencing talk, expressing his awe for the quality of the experience. What he doesn't seem to realise is that the Cisco Telepresence system that he was using requires between 1 and 3 kW of power (and 200 W in standby) at either side, plus the energy used for routing and switching all those data through the network infrastructure.
Second, if work is done not at home but in third places, people might actually increase their energy use for transport when they visit different working spaces during the day. They might work from home in the morning and drive to the office in the afternoon, or they might go to the office in morning and to a co-working space later in the day. Likewise, if organisations shorten the distance between the office and the office worker by inviting them to work in shared spaces closer to their home, employees might actually decide to go live further away from their new working space, and keep the same time budget for commuting. 
Third, for an employee working at home, on the road, or in a third place, the heating, cooling and lighting of that alternative workspace is now often an extra load, because his or her now empty space in the office is still being heated, cooled and lit. In most cases, today's home and remote workers occasion additional energy consumption. This problem is recognised by the supporters of remote working, who stress that office buildings have to adapt to the new reality of the networked office by reducing floorspace and increasing occupancy rates. This can happen through "hot-desking": sharing a smaller number of desks between the office workers who decide not to work at home -- and hoping that not everybody will show up at the same time.
Noel Cass, who investigates energy demand in offices for the UK's Demand Centre at Lancaster University, has his doubts about this approach:
"Hot-desking" requires the depersonalisation of the desk, as if it was a coffee bar or a library, and that's easier said than done. Internet companies such as Google and Yahoo, who pioneered hot-desking arrangements and whose productivity is the rationale behind this trend, have gone back to giving each employee their personal space. In fact, these companies not only left behind the "non-territorial" office, they also have recognised that productivity is best secured by physical co-presence, discouraging telecommuting.
Office spaces now tend to be conceptualised as a 'destination' with increasing amenities on the job, in an effort to attract and retain talent and encourage them to spend more time there. Examples are domestic-like interiors, gym facilities, indoor swimming pools, dry cleaners, or dentists on site. So, who knows, instead of working at home, the future could be living at the office. Obviously, increasing amenities at the office might negate the energy savings obtained by fewer and shared office desks. 
In sum, office work will always include buildings, commuting, office equipment and a communication infrastructure. The focus on the location of office work -- at home, in the office, or elsewhere -- conceals the real cause that impacts energy use: the high energy use of all its components.
Lujiazui, Shanghai. Patrick Fischer, Wikipedia Commons.
If the commute happens, or could happen, by walking, biking, or taking a commuter train, instead of by car, the energy use advantage of working at home would be zero or insignificant. Similarly, if an office building is designed in such a way that it can be naturally lit and cooled, like in the old days, working from home would not save energy for cooling and lighting. Finally, the use of low energy office equipment and a low energy internet infrastructure would lower the energy use regardless of where people are working. In short, for energy use it doesn't matter so much where office work happens. What really matters is what happens at these places and in between them.
How Much Office Work Do We Need?
In his 1986 book The Control Revolution, James Beniger states that there is a tight relationship between the volume and speed of energy conversion and material processing in an industrial system on the one hand, and the importance of bureaucratic organisation and information processing, in other words, office work, on the other hand:
Innovations in matter and energy processing create the need for further innovation in information processing and communication -- an increased need for control. Until the nineteenth century, the extraction of resources, even in the largest and most developed national economies, was still carried out with processing speeds enhanced only slightly by draft animals and wind and water power.
So long as the energy used to process and move material throughputs did not much exceed that of human labour, individual workers could provide the information processing required for its control. The Industrial Revolution sped up society's entire material processing system, thereby precipitating a crisis of control.
As the crisis in control spread through the material economy, it inspired a continuing stream of innovations in control technology -- a steady development of organisational, information-processing, and communication technology that lags industrialisation by perhaps 10 to 20 years. By the 1930s, the crisis of control had been largely contained. 
Although Beniger makes no reference whatsoever to sustainability issues, what he suggests here is another strategy to lower the energy use of office work: reduce the demand for it. If office work depends on the material and energy throughput in the industrial system, it follows that reducing this throughput will lower the need for office work. A slower, low energy, and more low-tech industrial system would decrease the need for control and thus for office work. An economy with smaller organisations operating more locally, would need less office work.
Cuatro Torres Business Area, Madrid. Xauxa Hakan Svensson, Wikipedia Commons.
By the 1900s, all management techniques and office tools that would be used for the next 70 years had been invented. James Beniger was not impressed by the arrival of the digital computer, which was becoming ubiquitous in offices when he wrote his book:
Contrary to prevailing views, which locate the origins of the information society in WWII or in the commercial development of television or computers, the basic societal transformation from industrial to information society had been essentially completed by the late 1930s.
Microprocessing and computer technology, contrary to currently fashionable opinion, do not represent a new force recently unleashed on an unprepared society but merely the most recent installment in the continuing development of the control revolution.
Energy utilisation, processing speeds, and control technologies have continued to co-evolve in a positive spiral, advances in any one factor causing, or at least enabling, improvements in the other two. Furthermore, information processes and flows need themselves to be controlled, so that informational technologies must continue to be applied at higher and higher layers of control -- certainly an ironic twist to the control revolution. 
Our so-called information economy mainly serves to manage an ever faster, larger and more complex production and consumption system, of which we have only outsourced the manufacturing part. Consequently, without the information economy -- without the office -- the industrial system would collapse. Without the industrial system, there would be no need for the information society or the office -- in fact, office work could be like it was before 1850, when the biggest bank in the US was run by just three people with a quill. 
The sustainable image of the information society -- as contrasted to the dirty image of the industrial society -- is built on an obsession with dividing energy use into different statistical categories, fiddling around with figures on electronic calculating tools. In other words, it's a product of office work, hiding the true nature of office work.
Kris De Decker
This article was written for The Demand Centre, one of six academic research centres funded by the Research Councils UK to address "End Use Energy Demand Reduction". This article is a shortened version of the original piece, which is on Demand's website. The Demand Centre focuses on the use of energy as part of accomplishing social practices at home, at work and in moving around. It investigates how energy demand is shaped by material infrastructures and institutional arrangements, and how these systems reproduce interpretations of normal and acceptable ways of life.
- Why the Office Needs a Typewriter Revolution
- How to Get your Apartment Off the Grid
- Slow Electricity: The Return of DC Power?
- How to Build a Low-tech Internet
- Why we Need a Speed Limit for the Internet
- The Revenge of the Circulating Fan
 The Control Revolution: Technological and Economic Origins of the Information Society, James Beniger, 1986.
 The Growth of Information Workers in the US Economy, Edward N. Wolff, in Communications of the ACM, Vol. 48, No. 10, October 2005.
 Theories of the Information Society (Third Edition) [PDF], Frank Webster, 2006.
 Sustainability and the Information Society [PDF], Christian Fuchs, IFIP International Conference on Human Choice and Computers, 2006.
 2012 Commercial Buildings Energy Consumption Survey (CBECS), U.S. Energy Information Administration.
 A Review on Buildings Energy Consumption Information [PDF], Luis Pérez-Lombard, José Ortiz, Christine Pout. In Energy and Buildings, 40 (2008), pp. 394-398.
 Power Density: A Key to Understanding Energy Sources and Uses (MIT Press), Vaclav Smil, 2015.
 Office Buildings, CBECS, 2010.
 BSD-152: Building Energy Performance Metrics. Building Science Corporation, 2010.
 U.S. Energy Use Intensity by Property Type, Technical Reference [PDF]. Energy Star, 2016.
 US Nuclear Power Plants, Nuclear Energy Institute.
 A Look at the US Commercial Building Stock: Results from EIA's 2012 Commercial Buildings Energy Consumption Survey (CBECS). US Energy Information Administration, 2015.
 Downtown New York: The Architecture of Business / The Business of Buildings. Virtual Exhibition. The Skyscraper Museum.
 The European Office: Office Design and National Context, Juriaan van Meel, 2000.
 Work and the City (Edge Futures), Frank Duffy, 2008.
 White Collar: The American Middle Classes. C. Wright Mills, 1951.
 Office Buildings Go Up on Mere Speculation, Alessio Pirolo, The Wall Street Journal, October 7, 2014.
 Standards, Design and Energy Demand: The Case of Commercial Offices. James Faulconbridge & Noel Cass, 2016. Paper prepared for DEMAND Centre Conference, Lancaster, 13-15 April 2016.
 Papers in preparation, Noel Cass, The Demand Centre, Lancaster University, UK.
 Study: The Traditional Office Will Soon be Extinct. PC World, Tony Bradley, June 17, 2014.
 The Practice of Working from Home and the Place of Energy [PDF], Sam Hampton. Paper prepared for DEMAND Centre Conference, Lancaster, 13-15 April, 2016.
 Characteristics of Home Workers, 2014 [PDF]. Office for National Statistics, June 2014.
 Four million people are now homeworkers but more want to join them. TUC, June 5, 2015.
 Immersive TelePresence. Cisco.
Artificial cooling and digital equipment are the main drivers behind the quickly growing energy use of modern office work. To lower the energy use of the typical glass office building, many agree that we need to revert to earlier forms of architecture that were common up to the 1950s: T-, H- and L-shaped buildings, light wells, natural ventilation, and radiant heating and cooling systems.
Would the same hold true for office equipment? Should we revert to pre-1950s machines like manual typewriters and calculators, carbon paper, vertical filing cabinets, and the telegraph? Such a radical solution would lower energy use dramatically, but could we obtain equally good results by rethinking and redesigning office equipment, combining the best of mechanical and digital devices?
The Olivetti Sottsass mechanical typewriter, 1969. Source: eBay.
The Artisanal Office (Antiquity - 1870s)
Office work has accompanied humankind since the emergence of social, economic and political organisation, state administration, and trade. The first office institutions were founded in Antiquity, for example in Egypt, Rome, Byzantium, and China. The period from these early civilisations up to the beginning of the Industrial Revolution was marked by the stability of institutional forms and means of office work.
The bulk of office work involved writing -- copying out letters and documents, adding up columns of figures, computing and sending out bills, keeping accurate records of financial transactions. The only tools were pen and paper -- or rather the quill (the steel pen was invented only in the 1850s) and, before paper reached the Western world around the 1100s, stone or clay tablets, papyrus, or parchment.
Consequently, all writing -- and copying -- was done by hand. To copy a document, one simply wrote it again. Sometimes, letters were copied twice: one for the record, and the other to guard against the possibility that the first might get lost. The invention of the printing press in the late middle ages freed scribes from copying books, but the printing press was not suited for copying a few office documents. 
Source: Early Office Museum.
Communication was largely human-powered, too, using the feet rather than the hands: people ran around to bring oral or written information from one person to another, either inside buildings or across countries and continents. Finally, all calculating was done in the head, aided only by mathematical charts and tables (which were composed by mental reckoning), or by simple tools like the abacus (not a calculation machine but a memory aid, similar to writing down a calculation).
The Mechanised Office (1870s - 1950s)
Before the Industrial Revolution, businesses operated mostly in local or regional markets, and their internal operations were controlled and coordinated through informal communication, principally by word of mouth except when letters were needed to span distances. From the 1840s onwards, the expansion of the railway and telegraph networks in North America encouraged business to grow and serve larger markets, at a time when improvements in manufacturing technology created potential economies of scale. 
The informal and primarily oral mode of communication broke down and gave way to a complex and extensive formal communication system depending heavily on written documents of various sorts, not just in business but also in government.  Between the 1870s and the 1920s, writing, copying, and other office activities were mechanised to handle this flow of information.
The birth of office equipment and systematic management was accompanied by three other trends. The first was the spectacular growth in the number of office workers, mainly women, who would come to operate these machines. The second was the rise of proper office buildings, which would house the quickly growing number of workers and machines. The third was a division of labour, mirroring the evolution in factories. Instead of performing a diverse set of activities, clerks became responsible for clearly defined sub-activities, such as typing, filing, or mail handling.
This article focuses exclusively on the machinery of office work, and more specifically on its evolution in relation to energy use. While it's impossible to write a complete history of the office without taking into account the social and economic context, this narrow focus on machines reveals important issues that have not been dealt with in historical accounts of office work.
Typewriters
Of central importance in the nineteenth-century information revolution was the typewriter, which appeared in 1874 and became widespread by 1900. (All dates are for the US, where modern office work originated). The "writing machine" made full-time handwriting obsolete. Typing is roughly five times quicker than handwriting and produces uniform text. However, the typewriter's influence went far beyond the writing process itself.
Underwood portable typewriter, 1930s. Source: Typewriter Heaven.
For copying, an even larger gain in speed was obtained by combining the typewriter with carbon paper, an earlier nineteenth-century invention. This thin paper, coated with a layer of pigment, was placed in between normal paper sheets. Unlike a quill or pen, the typewriter provided enough pressure to produce up to 10 copies of a document without the need to type the text more than once. The typewriter was also made compatible with the stencil duplicator, which appeared around the same time and could make a larger number of copies. Considering the importance of writing and copying, the "writing machine" was a true revolution. [4-7]
The typewriter didn't reduce the amount of time that clerks spent writing and copying. Rather, the time spent writing and copying remained the same, while the production of paper documents increased. By the early years of the twentieth century, it became clear that old methods of storing documents -- stacked up in drawers or impaled on spikes -- could not cope with the increasing mounds of paper. This led to the invention of the vertical filing cabinet, which would radically expand the information that could be stored in a given space. [4-8]
Mechanical Calculators
The typewriter quickly evolved into a diverse set of general and special purpose machines, just like the computer would one hundred years later. There appeared shorthand or stenographic typewriters (which further increased writing speed), book typewriters (which typed on bound books that lay flat when opened), automatic typewriters (which were designed to type form letters controlled by a perforated strip of paper), ultraportable and pocket typewriters (for writing short letters and notes while on the road), bookkeeping typewriters (which could count and write), and teletypewriters (which could activate another typewriter at a distance through the telegraph network). [4-7] The latter two will be dealt with in more detail below.
Mechanical calculating machines were another important tool in the new, mechanised office. "To clerks, mathematical machines are what the rock drill is to the subway labourer", stated an office management manual from 1919.  Mechanical calculating machines could add, subtract, multiply and divide through the motion of their parts. Many of these machines had a typewriter-style keyboard with a column for each digit entered (a "full keyboard"). This allowed numbers to be entered more quickly than on a more compact ten-key device, which became common only from the 1950s. 
Monroe Model K-20 Calculating Machine, 1921. The Smithsonian.
Devices designed especially for addition (and sometimes subtraction) were known as adding machines. Adding up long lists of numbers was typical for many business applications, and in mathematical terms many offices didn't need to function at any more sophisticated level. The first practical adding machine for routine office work -- the Comptometer -- was introduced in 1886. At the beginning of the 1900s, the typewriter and the adding machine were combined into the adding typewriter or bookkeeping machine, which became central to the processing of all financial data.
Teletypewriters
Obviously, the telegraph (1840s) and the telephone (1870s) also had an enormous impact on office work. The typewriter, beyond its use in business and government offices, also became an essential machine in telegraph offices. Initially, the telegrapher listened to the Morse sounder and wrote the received messages directly in plain language with a typewriter.  In the early 1900s, a special typewriter -- the "teletypewriter" or "teletype" -- was designed to transmit and receive telegraphic messages without the need for an operator trained in the Morse code. 
When a telegraphist typed a message, the teletypewriter sent electrical impulses to another teletypewriter at the other end of the line, which typed the same message automatically. From the 1920s onwards, teletypewriters became common in the offices of companies, governmental organisations, banks, and press associations. They were used for exchanging data over private networks between different departments of an organisation, a job previously done by messenger boys. 
A newly made telegraph key for radio amateurs. Source: Milestone Technologies.
Starting in the 1930s, central switching exchanges were established through which a subscriber could communicate by teletypewriter with any other subscriber to the service, similar to the telephone network but for the purpose of sending text-based messages. This became the worldwide telex network, now largely dismantled. Telex allowed the instantaneous and synchronous transmission of written messages, like today's chat or email over the internet, or like the exchange of text messages over the mobile phone network (teletypewriters could use the wireless telegraph infrastructure). Telex was also used for broadcasting news and other information, which was received on print-only teletypewriters.
The Energy Footprint of the Mechanised Office
The office equipment that appeared in the late nineteenth century was in use until the 1970s, when it was replaced by computers. It is now considered obsolete, but upon a closer look, the superiority of today's computerised machines isn't as obvious as you would think. This is especially true when you take into account the energy that is required to make both alternatives work. Although it offered spectacular improvements over earlier methods, and although it could perform similar functions as today's digital information technology, much of the office equipment described above remained manually powered for decades. 
The first successful electro-mechanical typewriter -- the IBM Electromatic -- was introduced in 1935, and the breakthrough came only in 1961, with the highly successful IBM Selectric typewriters. Unlike a traditional typewriter, this machine used an interchangeable typing element, nicknamed the "golf ball", which spins to the right character and moves across the page as you type. 
Although electric motors were used on some of the mechanical calculators already in 1901, electrically driven calculators became common only between the 1930s and the 1950s, depending on the type. Pinwheel calculators remained manually operated until their demise in the 1970s. 
Unlike typewriters and calculating machines, the telephone and the telegraph could not function without electricity, which forms the basis of their operation. However, compared to today's communications networks, power use was small: until the late 1950s, almost all routing and switching in the telephone and telegraph infrastructure was done by human operators plugging wires into boards.
The Digital Office (1950s - today)
With the arrival of the computer, eventually all office activities became electrically powered. The business computer appeared in the 1950s, although it was not until the mid-1980s that this 'machine' became a common office tool. Reading, writing, copying, data processing, communication, and information storage became totally dependent on electricity.
Screens, printers & scanners
The computer took over the tasks of other machines in the office such as calculating machines, bookkeeping machines, teletypewriters, and vertical filing cabinets. In fact, on the surface, one could say that the computer is the office. After all, its dominant metaphor is taken from office work: it's got a "desktop", "files", "folders", "documents", and a "paper bin". Furthermore, it can send and receive "mail", make phone calls and accommodate (virtual) face-to-face meetings.
On closer inspection, however, it becomes clear that the arrival of the computer also led to the appearance of new office equipment, which is just as essential to office work as the computer itself. The most important of these devices are printers, scanners, monitors, and new types of computers (data servers, smartphones, tablets). All these machines require electricity.
Monitors and data servers appeared because the computer introduced an alternative information medium to paper, the electronic format. Printers and scanners appeared because this new medium, contrary to expectations, did not replace the paper format. Although documents can be read, written, transmitted, stored and retrieved in a digital format, in practice both formats are used alongside each other, depending on the task at hand.
In spite of the computer, and later the internet, paper has stubbornly remained a key feature of office life. A 2012 study concluded that "most of the offices we visited were more or less full of paper".  This means that the use of resources further increases: to the electricity use of the digital devices, we also have to add the resources involved in making paper.
In their 2002 book The Myth of the Paperless Office, Abigail Sellen and Richard Harper investigate why and how office workers -- especially the growing group of knowledge workers -- are still using paper while new, digital technologies have become so widely available. 
They argue that office workers' reluctance to change is not simply a matter of irrational resistance: "These individuals use paper at certain stages in their work because the technology they are provided with as an alternative does not offer all they need." Obviously, digital documents have important advantages over paper documents. However, paper documents also have unique advantages, which are all too often ignored.
For example, it was found that office workers actively build up different kinds of paper arrangements on or near their office desks, reminding them of different matters and preparing them for specific tasks. Computers do not reproduce this kind of physical accumulation. Information exchange, for example in meetings, is another common office practice in which paper is used. Actions performed in relation to paper are, to a large extent, made visible to one's colleagues, facilitating social interaction. When using a laptop, it's impossible to know what other people in a meeting are looking at.
Welcome to the Paperless Office
Most important, however, is the point that paper tends to be the preferred medium for reading documents. Paper helps reading because it allows quick and flexible navigation through and around documents, reading across more than one document, marking up a document while reading, and interweaving reading and writing -- all important activities of modern knowledge work. 
Although some electronic document systems support annotation, this is never as flexible as pen and paper. Likewise, moving through online documents can be slow and frustrating -- it requires breaking away from ongoing activity, because it relies heavily on visual, spatially constrained cues and one-handed input. Opening multiple windows on a computer screen doesn't work for back-and-forth cross-referencing of other material during authoring work, both because of slow visual navigation and because of the limited space on the computer screen. 
The use of multiple computer screens (and the use of multiple computers at the same time) is an attempt to overcome the inherent limits of the digital medium and make it more "paper-like". With multiple screens, it becomes possible to interweave reading and writing, or to read across more than one document. Research has shown that work productivity increases when office workers have access to multiple screens -- a result that mirrors Sellen and Harper's findings about the importance of paper. [18-21]
The use of multiple monitors is rapidly increasing in the workplace, and the increase in "screen real estate" is not limited to two screens per office worker. Fully integrated display sets of twelve individual screens are now selling for around $3,000. A recent innovation is the USB-powered portable monitor, aimed at travelling knowledge workers but just as handy at the office. Because these monitors have their own dedicated hardware, rather than putting all the work of another screen on the computer itself, it's possible to connect up to five portable screens to a laptop. A multi-touchscreen keyboard, already on the market, could solve the annotation issue.
The Energy Footprint of the Digital Office
The problem with extra screens is that they increase energy use considerably. Adding a second monitor to a laptop roughly doubles its electricity use, adding five portable screens triples it. A 12-screen display with a suited computer to run it consumes more than 1,000 watts of power. If paper use can be reduced by introducing more and more computer screens, then the lower resource consumption associated with paper will be compensated for by a higher resource consumption for digital devices.
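A back-of-the-envelope calculation shows how quickly screens add up. All wattages in the sketch below are assumed round figures for illustration, not measurements from this article:

```python
# Rough power arithmetic for multi-screen office setups.
# All wattages are illustrative assumptions, not measured values.
LAPTOP_W = 30            # assumed draw of a laptop on its own
EXTERNAL_MONITOR_W = 30  # assumed draw of a desktop monitor
USB_MONITOR_W = 12       # assumed draw of a USB-powered portable monitor

laptop_only = LAPTOP_W
with_second_monitor = LAPTOP_W + EXTERNAL_MONITOR_W   # roughly double
with_five_usb_screens = LAPTOP_W + 5 * USB_MONITOR_W  # roughly triple

print(with_second_monitor / laptop_only)    # 2.0
print(with_five_usb_screens / laptop_only)  # 3.0
```

Under these assumed figures the ratios come out at exactly two and three; with real hardware the wattages shift, but the pattern of multiplication stays the same.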
A similar switcheroo happened with information storage and communication. Digital storage saves paper, storage space and transportation, but in order to make digital information readily accessible, data servers (the filing cabinets of the digital age) have to be fed with energy 24 hours a day. And just as the typewriter and carbon paper increased the production of documents, so did the computer. Especially since the arrival of the internet, people can access more information more easily than ever before, resulting in an increase of both digital and paper documents. Ever cheaper, faster and better quality printers and copiers -- all digital devices -- keep encouraging the reproduction of paper documents. 
More screens. Live Wall Media.
The computer increases energy use in many different ways. First of all, digital technology entails extra energy use for cooling -- the main energy use in office buildings. A 2011 study, which calculated the energy use of two future scenarios, concluded that if the use of digital technology in the office keeps increasing, it would become impossible to design an office building that can be cooled without air-conditioning.  In the "techno-explosion" scenario, all office workers would have two 24'' computer screens, a 27'' touchscreen keyboard, and a tablet. The perhaps extreme scenario also includes one media wall per 20 employees in the office break zone.
On top of operational energy use and cooling comes a higher energy use during the manufacturing phase. The energy used for making a typewriter was spread out over many decades of use. The energy required for the production of a computer, on the other hand, is a regularly recurring cost, because computers are replaced every three years or so. The internet, which has largely engulfed the telephone and telegraph infrastructure, has become another major source of power demand. The network infrastructure, which takes care of the routing and switching of digital information, uses roughly as much energy as all end-use computers connected to the internet combined.
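The effect of short replacement cycles can be made concrete with a simple amortisation sketch. The embodied-energy figures below are hypothetical round numbers chosen purely for illustration:

```python
# Amortised manufacturing energy per year of service.
# Embodied-energy figures are hypothetical, for illustration only.
def annual_embodied_energy(manufacturing_mj, service_years):
    """Spread a one-off manufacturing energy cost over the device's lifetime."""
    return manufacturing_mj / service_years

# Assumed: a typewriter embodying 1,000 MJ, used for 50 years.
typewriter_mj_per_year = annual_embodied_energy(1000, 50)
# Assumed: a computer embodying 6,000 MJ, replaced every 3 years.
computer_mj_per_year = annual_embodied_energy(6000, 3)

print(typewriter_mj_per_year)  # 20.0
print(computer_mj_per_year)    # 2000.0
```

Even if the absolute numbers are debatable, the three-year replacement cycle dominates the result: the computer's manufacturing energy per year of service ends up two orders of magnitude higher in this sketch.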
The Lower Energy Office of the Future
The typewriter was just as revolutionary in the 1900s as is the computer today. Both machines transformed the office environment. However, when we consider energy use, the obvious difference is that the second information revolution was accomplished at much higher costs in terms of energy. So, maybe we should have a good look at pre-digital office equipment and find out what we can learn from it.
During the last ten years or so, the typewriter has seen a remarkable revival with artists and writers, a trend that was recently documented in The Typewriter Revolution: A Typist's Companion for the 21st Century (2015). Like paper, the typewriter has many unique benefits. Obviously, a manual typewriter requires no electricity to operate. If it was built before the 1960s, it's built to outlast a human life. A typewriter doesn't become obsolete because its operating system is no longer supported, and it can be repaired relatively easily using common tools. If we compare energy input with a simple measure of performance, the typewriter gets a better score than the computer.
There are also practical advantages. A typewriter is always immediately ready for use. It needs no virus protection or software updates. It can't be hacked or spied upon. Finally, and this is what explains its success with writers and poets: it's a distraction-free, single-purpose machine that forces its user to focus on writing. There are no emails, no news alerts, no chat messages, no search engines and no internet shops.
For office workers, and for knowledge workers in particular, a typewriter could be just as useful as for a poet. Computers may have increased work productivity, but nowadays they are "connected to the biggest engine of distraction ever invented", the internet. Studies indicate that web activities are among the main distractions that keep office workers away from productive work. Many online applications are especially designed to be addictive. 
A typewriter also forces people to write differently, combating distraction within the writing process itself. There is no delete key, no copy-and-paste function. With the computer, editing "became a part of writing from the very start, making the writer ever anxious about anything that just took place".  The typewriter, on the other hand, forces the writer to think out sentences carefully before committing them to paper, and to keep going forward instead of rewriting what was already written. 
The "Back-in-Time" Sustainable Office
How can we insert the common sense of the typewriter -- and other pre-digital equipment -- into the modern office? Basically, there are three strategies. The most radical is to replace all our digital devices with mechanical ones, and to replace all data servers with paper stacked in vertical filing cabinets; in other words, we could go back in time.
This would surely lower energy use, and it's the most resilient option: for all their wonders, computers serve absolutely no purpose when there's no electricity. Nevertheless, this is not an optimal strategy, because we would lose all the good things that the computer has to offer. "The enemy isn't computers themselves: it's an all-embracing, exclusive computing mentality", writes Richard Polt in The Typewriter Revolution. 
Royal Quiet DeLuxe, 1953. Machines of Loving Grace.
Another strategy is to use mechanical office equipment alongside digital office equipment. There's some potential for energy reduction in the combined use of both technologies. For interweaving reading and writing, the typewriter could be used for writing and the computer screen for reading, which saves an extra screen and a printer. A typewriter could also be combined with a low energy tablet instead of a laptop or desktop computer, because in this configuration the computer's keyboard is less important.
Once finished, or once ready for final editing in a digital format, a typewritten text can be transferred to a computer by scanning the typewritten pages. The typewritten text can be displayed as an image ("typecasting"), or it can be scanned with optical character recognition (OCR) software, which converts it into a digital format. This process implies the use of a scanner or a digital camera, but these devices use much less energy than a printer, a second screen, or a laptop. By reintroducing the typewriter into the digital office, the time spent on the computer could thus be reduced, while the 'need' for a second screen disappears.
The Low-tech Sustainable Office
The third strategy is to rethink and redesign office equipment, combining the best of mechanical and digital devices. This would be the most intelligent strategy, because it offers a high degree of sustainability and resilience while keeping as much of the digital accomplishments as possible. Such a low-tech office requires a redesign of office equipment, and could be combined with a low-tech internet and electricity infrastructure.
E-Typewriters
For low-tech writing, a couple of devices are available. A first example is the Freewrite, a machine that came on the market earlier this year after a successful crowdfunding campaign. Like a typewriter, it's a distraction-free machine that can only be used for writing, and it's always instantly ready to use. Unlike a typewriter, however, it has a 5.5'' e-paper screen, it can store a million pages, and it offers a WiFi connection for cloud backups. Files are saved in plain text format for maximum reliability, minimal file size, and longest anticipated support.
Apart from a backspace key, there is no way to navigate through the text, and the small screen only displays ten lines of text. Drafting and editing have been separated with the intent to force the writer to keep going. For editing or printing, the text is then transferred to a computer using the WiFi connection.
The device is stated to have a "4+ week battery life with typical usage", which is defined as half an hour of writing each day with WiFi turned off. That's a strange way of communicating that the machine runs 14 hours on one battery charge, and when I asked the makers how much power it needs, they answered that they "don't communicate this information". Nevertheless, enabling 14 hours of writing already beats the potential of the average laptop by a factor of three.
Hardware Word Processors
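The conversion behind that figure is simple arithmetic; as a quick sanity check (the laptop runtime used for comparison is my own ballpark assumption):

```python
# Freewrite marketing claim: "4+ week battery life with typical usage",
# where typical usage is defined as 0.5 hours of writing per day.
weeks = 4
hours_per_day = 0.5
battery_hours = weeks * 7 * hours_per_day  # total writing hours per charge
print(battery_hours)  # 14.0 hours

# Compared with a laptop that manages ~4-5 hours per charge (an assumed
# ballpark figure), the Freewrite lasts roughly three times longer.
laptop_hours = 4.5
print(round(battery_hours / laptop_hours, 1))  # ~3.1
```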
Another type of digital typewriter is the hardware word processor. Before word processing became software on a personal computer in the 1980s, the word processor was a stand-alone device. Like a typewriter, a hardware word processor is only useful to write on, but it has the added capability of editing the text before printing. Although hardware word processors work and look like computers, they are non-programmable, single-purpose devices.  
The great advantage of a hardware word processor is that both writing and editing can happen on the same machine -- a typewriter or a machine like the Freewrite requires another machine to do the editing (unless you write multiple versions of the same text). The hardware word processor virtually disappeared when the general-purpose computer appeared. One notable exception is the Alphasmart, which was produced from 1992 until 2013.
This rugged portable machine is still widely traded on the internet and has developed a cult following, especially among writers. The Alphasmart was conceived as an affordable computer for schools, but the low price was not its only appeal. The machine responded to the need for a tool that would make kids concentrate on writing, and not on editing or formatting text. Although it has full editing capabilities, the small screen (showing 6 lines in the latest model) invites writing rather than excessive editing.
The Alphasmart is especially notable for its energy efficiency, using as little electricity as an electronic calculator. The latest model could run for more than 700 hours on just three AA-batteries, which corresponds to a power use of 0.01 watt. The machine has a full-sized keyboard but a small, electronic calculator-like display screen, which requires little electricity. It has limited memory and goes into sleep-mode between keystrokes. The Alphasmart can be connected directly to a printer via a USB-cable, bypassing a computer entirely if the aim is to produce a paper document. Transferring texts to the computer for digital transmission, storage or further editing also happens via cable. 
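That power figure can be sanity-checked from the stated runtime, assuming a typical alkaline AA cell holds roughly 3 Wh (an assumed capacity, not a figure from the text):

```python
# Back-of-the-envelope check of the Alphasmart's power draw.
aa_capacity_wh = 3.0   # assumed energy content of one alkaline AA cell
cells = 3
runtime_h = 700        # stated runtime of the latest model
power_w = cells * aa_capacity_wh / runtime_h
print(round(power_w, 3))  # ~0.013 W, i.e. on the order of 0.01 watt
```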
Interestingly, Alphasmart released a more high-tech version of the device in 2002, the Alphasmart Dana. It was equipped with WiFi for transmitting documents, it had 40 times more memory than its predecessor, and it featured a touchscreen. The result was that battery life dropped nearly thirtyfold to 25 hours, clearly showing how quickly the energy use of digital technology can spiral out of control -- although even this machine still used only 0.14 watts of power, roughly 100 times less than the average laptop.
Of course, a low-tech office doesn't exclude a real computer, a device that does it all. A small tablet with a wireless keyboard runs on as little as 3W of electricity and offers many of the capabilities of a laptop (including the distractions). An alternative to a tablet is a Raspberry Pi computer, combined with a portable USB screen. Depending on the model, a Raspberry Pi draws 0.5 to 2.5 watts of power, with an extra 6 or 7 watts for the screen. A Pi can serve as a fully functional computer with internet access, but it's also very well suited to being a single-purpose, distraction-less word processing machine without internet access. Such machines could be powered by a solar system small enough to fit on the corner of a desk.
Dot-Matrix Printers
Unless we revert to the typewriter, the office also needs a more sustainable way of printing. Since the 1980s, most printing in offices has been done with a laser printer. These machines require a lot of energy: even when we take into account their higher printing speed, a laser printer uses 10 to 20 times as much electricity as an inkjet printer. Unfortunately, inkjet printers are much more expensive to use, because the industry makes a profit by selling overpriced ink cartridges.
Until the arrival of the laser printer, all printing in offices was done by dot-matrix printers. Their power use and printing speed are comparable to those of inkjet printers, but they are much cheaper to use -- in fact, dot-matrix printing is the cheapest printing technology available. Like a typewriter, a dot-matrix printer is an impact printer that makes use of an ink ribbon. These ribbons are sold as commodities and cost very little. Unlike on a typewriter, the individual characters of a dot-matrix printer are composed of small dots.
Dot-matrix printers are still for sale, for applications where printing costs are critical. Although they're not suited for printing images or colours, they are perfect for printing text. They are relatively noisy, which is why they were sometimes placed under a sound-absorbing hood. There is no practical low-tech alternative to the copier machine, which only appeared in the 1950s. However, since a photocopier is a combination of a scanner and a laser printer, paper documents could be copied using a computer with a scanner and a dot-matrix or inkjet printer.
The information society promises to dematerialise society and make it more sustainable, but modern office and knowledge work has itself become a large and rapidly growing consumer of energy and other resources. Choosing low-tech office equipment would be a great start in addressing this problem. Such a strategy is especially significant because the energy use of office equipment goes far beyond its operational electricity use on-site: manufacturing all that equipment takes energy and resources too.
Kris De Decker
Thanks to Elizabeth Shove, who pointed me to some of the most important references, and to Karolien Buurman and Thomas Op de Beeck, who made me (re)discover the dot-matrix printer.
- The Curse of the Modern Office
- How to Get your Apartment Off the Grid
- Slow Electricity: The Return of DC Power?
- How to Build a Low-tech Internet
- Why we Need a Speed Limit for the Internet
 Evolution of the office building in the course of the 20th century: Towards an intelligent building, Elzbieta Niezabitowska & Dorota Winnicka-Jaskowska, in Intelligent Buildings International, 3:4, 238-249, 2011.
 Economy and Society, Max Weber, 1922.
 Machines in the Office, Rodney Dale and Rebecca Weaver, 1993.
 Innovation Junctions: Office Technologies in the Netherlands, 1880-1980 (PDF), Onno de Wit, Jan van den Ende, Johan Schot and Ellen van Oost, in Technology and Culture, Vol. 43, No. 1 (Jan., 2002), pp. 50-72
 Early Office Museum, website.
 The Myth of the Paperless Office (MIT Press), Abigail Sellen and Richard Harper, 2003.
 Office Management, Geoffrey S. Childs, Edwin J. Clapp, Bernard Lichtenberg, 1919.
 The Worldwide History of Telecommunications, Anton A. Huurdeman, 2003.
 Teleprinter, Encyclopedia Britannica.
 Nobody seems to have researched the energy use of pre-digital office equipment, so this information is partly derived from an online search through the databases of eBay, the Smithsonian Institution, and the Early Office Museum, and partly from fragmentary information in secondary sources. For example, a 1949 survey of the equipment in high school office machine courses in the state of Massachusetts shows that the majority of typewriters, calculators, adding machines, duplicators and addressing machines were manually operated, although most of these machines were less than 10 years old.
 The Typewriter Revolution: A Typist's Companion for the 21st Century, Richard Polt, 2015
 Gift of Fire, A: Social, Legal, and Ethical Issues in Computing, Sara Baase, 1997
 How the computer changed the office forever, BBC News, August 2013.
 Mundane Materials at Work: Paper in Practice, Sari Yli-Kauhaluoma, Mika Pantzar and Sammy Toyoki, Third International Symposium on Process Organization Studies, Corfu, Greece, 16-18 June, 2011.
 Productivity and multi-screen computer displays (PDF), Janet Colvin, Nancy Tobler, James A. Anderson, Rocky Mountain Communication Review, Volume 2:1, Summer 2004, Pages 31-53.
 Evaluating user expectations for widescreen content layout (PDF), Joseph H. Goldberg & Jonathan Helfman, Oracle, 2007
 Are two monitors better than one?, J.W. Owens, J. Teves, B. Nguyen, A. Smith, M.C. Phelps, Software Usability Research Laboratory, August 2012
 Are two better than one? A comparison between single and dual monitor work stations in productivity and user's windows management style. Chen Ling, Alex Stegman, Chintan Barhbaya, Randa Shehab, International Journal of Human-Computer Interaction, September 2016
 The best USB-powered portable monitors, Nerd Techy, 2016
 Trends in office internal gains and the impact on space heating and cooling (PDF), James Johnston et al., CIBSE Technical Symposium, September 2011
 Employees waste 759 hours each year due to workplace distractions, The Telegraph, June 2015
 Internet Addiction: A New Clinical Phenomenon and Its Consequences (PDF), Kimberly S. Young, American Behavioral Scientist, Vol. 48 No.4, December 2004.
 The Binge Breaker, The Atlantic, November 2016.
 The future of writing looks like the past, Ian Bogost, The Atlantic, May 2016.
 Freewrite, website.
 Word Processing (History of) (PDF), Encyclopedia of Library and Information Science, Vol. 49, pp. 268-78, 1992.
 A brief history of word processing (through 1986), Brian Kunde.
 AlphaSmart: a history of one of Ed-Tech's Favorite (Drop-Kickable) Writing tools, Audrey Watters, Hackeducation, July 2015.
 AlphaSmart: Providing a Smart Solution for one Classroom-Computing "Job" (PDF), James Sloan, Inno Sight Institute, April 2012.
 Zeven instap zwart-wit laserprinters vergelijktest ("Comparison test of seven entry-level black-and-white laser printers", in Dutch), Hardware.info, December 2014. The data were corrected for the higher printing speed of the laser printer.
The typical solar PV power installation requires access to a private roof and a big budget. However, wouldn't it be possible to get around these obstacles by installing small solar panels on window sills and balconies, connected to a low-voltage direct current (DC) distribution network? To put this theory to the test, I decided to power Low-tech Magazine's home office in Spain with solar energy, and write my articles off the grid.
Picture: Low-tech Magazine's solar powered office.
Solar panels have become cheaper and more efficient in recent years, but they are far from a universal solution, even in sunny regions. One reason is that a typical solar photovoltaic (PV) installation is still beyond the budget of many people. The average price for a 5kW residential PV system completed in 2014 varied from $11,000 in Germany to $16,450 in the USA. [1, 2] Roughly half of that amount covers installation costs.
A second obstacle for solar power is that not everybody lives in a single-family dwelling with access to a private roof. Those who reside in apartment buildings have little chance of harvesting solar power with a conventional roof-mounted system. Furthermore, in apartment buildings, the roof would quickly become too crowded to cover the electricity use of all residents, a problem that grows larger the more floors there are in a building. Lastly, a typical solar installation is problematic when you're renting a place, whether it's a house or an apartment.
I'm one of those people who runs into every one of these obstacles: I live in a flat, I rent the place, and I don't have the budget for a conventional solar system. However, I receive a lot of sunshine. My apartment is located near Barcelona in Spain, a city with an average solar insolation of almost 1,700 kWh/m2/year (which is also the average figure across the USA). Furthermore, the 60 m2 apartment has the balcony and all windows facing south-south-west, and there is no shading by trees or other buildings.
The view from my home office.
These conditions allow me to get through the winter without a heating system, relying only on solar heat and thermal underclothing. Hot water is supplied by a solar boiler, which was installed by the landlord. Clothes are dried on the balcony. While tinkering with solar panels for an art project, I got an idea: with the sun already powering so much of my living space, wouldn't it also be possible to harvest solar power from the window sills and the balcony and take my apartment off the electricity grid? Such a PV installation would solve my problems:
- I don't need access to the roof.
- I can install the system myself, which makes it much cheaper.
- I can take the solar installation with me if I move to another place.
Obviously, the big question is whether or not such an unconventional solar system could generate the necessary electricity. As a first experiment, I decided to power my 10 m2 home office with solar panels placed on the 2.8 m long window ledge that runs along the windows of the office and the adjacent bedroom.
Solar Powered Home Office
The window in my office is quite small (at 1.5 m2, it takes up only half of one wall). However, there's no need for power in the bedroom, which has been lit by three solar-charged IKEA SUNNAN lamps for years. Consequently, the full window ledge is available to power the home office. It offers enough space for five solar panels of 10W each, providing me with 50 watt-peak of solar power. The balcony will serve to power the rest of the apartment, and the plans for that second project are outlined at the end of this article.
With their placement on the window sill, the panels are shaded by the building itself in the morning. They receive direct sunlight from about 10 am to 5 pm in the pit of winter (a total of 7 hours), and from roughly 1 pm to 9 pm in the height of summer (a total of 8 hours). The maximum energy production is thus roughly 400 Wh per day.
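The production estimate follows directly from the peak power and the hours of direct sunlight; a best-case sketch that ignores temperature, wiring and orientation losses:

```python
# Rough daily production estimate for the window-ledge system.
peak_power_w = 50        # five 10 W panels
sun_hours_winter = 7     # direct sun from ~10 am to 5 pm
sun_hours_summer = 8     # direct sun from ~1 pm to 9 pm

print(peak_power_w * sun_hours_winter)  # 350 Wh per winter day
print(peak_power_w * sun_hours_summer)  # 400 Wh per summer day
```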
The solar panels are connected in parallel and coupled to a solar charge controller and 550 Wh of lead-acid batteries. Assuming a 33% Depth-Of-Discharge (DoD) and a round-trip battery efficiency of 80%, this gives me a maximum energy storage of roughly 150 Wh.
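The usable storage figure follows from the nominal capacity, the depth of discharge and the round-trip efficiency:

```python
# Usable energy storage, given nominal battery capacity, depth of
# discharge (DoD) and round-trip charge/discharge efficiency.
def usable_storage_wh(nominal_wh, dod, round_trip_eff):
    return nominal_wh * dod * round_trip_eff

print(round(usable_storage_wh(550, 0.33, 0.80)))  # ~145 Wh, "roughly 150 Wh"
```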
Can you power a home office with 50 watt-peak solar panels and 150 Wh of energy storage?
Now let's look at the energy use of my home office, before it was solar powered. I sit here working most of the days, either researching, writing, or building and repairing stuff. Devices that regularly use electricity are:
- A laptop, which requires an average 20 watts of power.
- An external computer screen, which needs 16.5W of power.
- Two CFL lamps (20W & 12W) and one LED-lamp (3W).
This adds up to 35W of power during the day (with only the laptop and the screen in use) and 70W after sunset (the laptop, the screen, and the lights). I usually work in the mornings and evenings, roughly from 10 am to 2 pm and from 8 pm to 1 am. During the afternoon, I do other stuff or I work in the library.
Total electricity use in my office is thus (on average) 500 Wh per day, with little variation between winter and summer. On cloudy days I also use lights in the morning, which can raise energy use to 640 Wh per day. Then there are some devices that occasionally need power:
- A laser printer, which uses 4Wh of energy for warming up and printing eight text pages. This corresponds to operating my desk lamp (5W) for more than 45 minutes.
- A pair of PC loudspeakers (1.5W of power).
- Three USB bicycle lights (each use 1.4W of power while charging).
- A digital camera, which uses 3W while charging.
- A fan, which uses 30-40 watts of power.
- A mobile phone (a dumb one) that's charged once every few weeks.
Obviously, my solar PV system doesn't produce enough energy to power my home office. While regular electricity use is at least 500 Wh on a 9-hour working day, the window sills give me a maximum of 400 Wh per day. On overcast days, energy production can be as low as 40 to 200 Wh per day, depending on the type of cloud cover. Furthermore, energy storage is only 150 Wh under ideal circumstances, while most energy use (350 Wh) is after sunset.
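Putting the figures from the preceding paragraphs together makes the mismatch explicit:

```python
# Daily energy balance of the home office, before any demand reduction.
day_hours, evening_hours = 4, 5       # 10 am - 2 pm and 8 pm - 1 am
day_load_w, evening_load_w = 35, 70   # laptop + screen; plus lights at night

demand_wh = day_hours * day_load_w + evening_hours * evening_load_w
print(demand_wh)  # 490 Wh -> "on average 500 Wh per day"

production_wh = 400   # best case; only 40-200 Wh on overcast days
storage_wh = 150      # usable battery storage under ideal circumstances
evening_wh = evening_hours * evening_load_w
print(evening_wh)     # 350 Wh needed after sunset, far above 150 Wh of storage
```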
And yet, here I am, typing this article on a solar powered laptop in a room that's lit by solar power. How is this possible? By following these strategies:
- Maximize solar power production by tilting the panels according to the season.
- Minimize power use by installing a low-voltage DC grid and using DC appliances.
- Force yourself to lower energy demand on dark days by going off the grid.
Below, we look at these points in more detail. My solar system has been in operation since November 2015, initially with only two 10W panels. Three more panels were added in early spring.
1. Adjust the Tilt of the Solar Panels
Roof-mounted solar panels usually have a fixed angle in relation to the sun. Because the elevation of the sun varies throughout the year, a fixed angle is always a compromise. Panels that lie horizontally on a flat roof are relatively well positioned for energy production in summer, but much less so in winter. Tilted solar panels, on the other hand, perform much better in winter but not as well in summer. On sloped roofs, the angle of the panels is often determined by the angle of the roof, which isn't necessarily the best angle for solar power production.
A PV panel that's optimally tilted towards the winter sun can triple electricity generation compared to a horizontally placed panel
Adjusting the angle of a solar panel according to the season can increase electricity production significantly in winter. In December, a PV panel in Barcelona that's optimally tilted towards the winter sun can triple electricity generation compared to a horizontally placed panel. Because the advantage is much smaller in other seasons, the average annual increase in power production is less than 10%. However, tilting the panels is the key to harvesting enough solar power during the winter months, when power shortages are most probable.
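The size of the winter tilt advantage can be approximated with a simple noon-only, direct-beam model. This is my own rough sketch: it ignores diffuse light and the rest of the day, so it only indicates the order of magnitude of the gain.

```python
from math import sin, radians

# Assumptions: Barcelona at ~41.4 deg N, winter-solstice solar
# declination of -23.4 deg, direct beam only.
latitude = 41.4
declination = -23.4
elevation = 90 - latitude + declination  # solar elevation at noon, degrees

horizontal = sin(radians(elevation))  # relative yield of a flat panel
tilted = 1.0                          # panel facing the sun directly

print(round(tilted / horizontal, 1))  # roughly 2-3x more in December
```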
In the case of a balcony or window sill solar PV system, adjusting the angle of the solar panels is as simple as watering the plants. Although you could make small adjustments every hour, day or month, adapting the angle two to four times per year is as far as you should go.
There's another advantage to having the solar panels so close at hand: they can be cleaned regularly. Roof-mounted solar panels rarely get cleaned because the roof usually isn't very accessible. Losses due to dust and dirt are assumed to be 1% of generated energy, but in dry and dusty regions, as well as in traffic-heavy areas, they can be as high as 4-6% if washing is not undertaken on a regular basis. 
Adjusting the angle of a window sill solar panel is as simple as watering the plants
Obviously, it's crucial that the panels don't fall off the window ledge, no matter what happens. Window sills differ in shapes and sizes, which calls for a custom-made supporting structure. I have a fixed metal bar at my window sill, aimed at protecting plant containers, which allows me to securely lock the solar panels in place. I guess I'm lucky to have this, but it also shows how small design changes can make a big difference. As an additional safety measure, I loaded the wooden base of each panel with some heavy rocks.
Adding a mechanism to vary the tilt of the panels complicates the design, because the moving part has to be just as sturdy as the base. Following some failed attempts, I found a mechanism that seems to work, using vintage Meccano rods (2-3 layers thick and with larger nuts and bolts). One rod is connected to the base of the structure, while another is connected to the wooden board that carries the panel. Both rods are connected to each other in the middle. Loosening this connection allows me to adjust the length of the supports and thus the angle of the solar panels.
Solar PV Windows?
Some readers might consider my approach soon-to-be-obsolete, because several companies are working on "solar PV windows": glass that doubles as an electricity generator. However, this technology would not perform as well as adjustable solar panels on window sills, for several reasons.
The panels on the left are optimally tilted for spring, the two panels on the right are still in winter position.
First of all, solar PV windows are most often entirely vertical, which is never an efficient angle to generate solar power -- their power generation is about 3 times lower than horizontal panels.  Secondly, in summer it would be impossible to open the windows or lower the shutters, which would quickly overheat my office and introduce a need for air-conditioning.
My solar PV installation, on the other hand, can produce power when the shutters are closed and when the windows are open. Last but not least, a window-integrated solar panel can't be taken with you when you move, while my system is entirely mobile.
2. Opt for a Low-Voltage DC System
Typical solar PV systems convert the direct current (DC) electricity produced by solar panels into alternating current (AC) in order to make it compatible with the AC distribution system in a building. Because many modern appliances operate internally on DC, the AC electricity is then converted back to DC. The DC/AC-conversion is done by an inverter, which sits between the solar charge controller and the load. The second conversion happens in the (external or internal) AC/DC adapter of the devices that are being used.
The trouble with this double energy conversion is that it generates substantial energy losses. This is especially true in the case of solid-state devices such as LEDs and computers, where the combined losses of the DC/AC/DC conversion amount to roughly 30% -- see our previous article for further detail. Because these are also the devices that make up most of the load in my home office, it makes a lot of sense to avoid these losses by building a low-voltage DC system instead.
Like in a boat or a camper, the 12V DC electricity of my solar panels is used directly by 12V DC appliances, or stored in 12V DC batteries. If my solar panels generate their maximum output of 50W, my devices have 50W available. When battery power is involved, charging and discharging the battery adds 20% of energy loss, which leaves 40W available for the appliances.
The choice for a low voltage DC system raises energy efficiency by 40%
On the other hand, in a typical solar PV installation where a DC/AC/DC energy conversion takes place, the devices would only have 35W available; the rest would be lost as heat during energy conversion. If lead-acid battery storage is used in such a system, only 28W of power remains. In short, in my specific case, choosing a DC system increases the available power by a factor of 1.4.
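The comparison between the two configurations boils down to multiplying the loss factors given above:

```python
# Power available to the appliances under each configuration, using the
# loss figures from the text: ~30% for the DC/AC/DC double conversion
# and ~20% for charging and discharging lead-acid batteries.
panel_w = 50

dc_direct = panel_w                   # 50 W straight from the panels
dc_battery = panel_w * 0.80           # 40 W via the battery
ac_direct = panel_w * 0.70            # 35 W after DC/AC/DC conversion
ac_battery = panel_w * 0.70 * 0.80    # 28 W with conversion and battery

print(round(dc_battery / ac_battery, 1))  # 1.4x more power with DC
```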
The choice of a DC system saves not only energy but also space and costs. Fewer solar panels are needed, and there's no need to buy a DC/AC inverter, which is a costly device that needs to be replaced at least once during the life of a solar system. Most importantly, you can build a DC solar power system yourself, even if you're as clumsy as I am. A low-voltage DC grid (up to 24V) is safe to handle because it carries no risk of electric shock. Adding up all costs, I took my home office off the grid for less than 400 euro.
Where to Find DC Appliances
Mounting a DC system implies the use of DC-compatible devices. However, because so many modern appliances operate internally on DC, this doesn't mean that you have to buy everything anew. To adapt the lighting in my office, I simply cut the mains plugs from the power cords, replaced them with DC-compatible plugs that fit straight into my solar charge controller, and swapped the light bulbs for 12V LED-bulbs. To run the laptop on DC, I replaced the power adapter with a DC-compatible power cord, which is available for use in cars. These power cords can be bought for every laptop model you can imagine.
My laptop with DC power cord.
Other devices are harder to adapt because the AC/DC adapter is located in the device itself. For example, I haven't figured out yet how to convert my external computer screen to operate directly on DC power.
Appliances that cannot be converted are usually available in a 12V DC version. Examples are refrigerators, slow cookers, televisions, air compressors, or power tools. These can be more expensive than their AC counterparts, because they are produced in much smaller quantities. DC refrigerators are very expensive because they use vacuum insulation. While this makes sense in a camper or sailboat where space is restricted, it's a needless cost in a common building.
The cigarette lighter receptacle in cars, initially designed to power an electrically heated cigarette lighter, has been the de facto standard DC connector for decades. More recently, it has been joined by another low voltage DC distribution system, the USB connector. USB cables operate on 5V DC and can transfer both data and energy. Many consumer electronics are now powered by them.
Currently, these devices are charged by the USB-port of a laptop or desktop computer, but they could be plugged straight into a solar PV system. While the standard USB-cable carries a maximum power of only 10 watts, the newer USB-PD standard accommodates devices with a power consumption of up to 100 watts.
Overcast Days
The choice for a DC system has lowered power consumption in my home office considerably. My laptop's energy use has decreased by about 20%. Switching to DC-direct LED-lamps has halved power use for lighting from 35 to 16W. Based on the 9-hour working day described earlier, daily energy use of regularly used devices in my home office has come down from 500 to 350 Wh/day. This brings average energy use below energy production on sunny days (400 Wh), which are plentiful where I live.
Three 10W solar panels on the window sill of the bedroom.
In reality, the external computer screen and the laser printer are still running on grid power. The 350 Wh of energy use mentioned above includes the hypothetical use of a DC external screen (saving 15% of power compared to the AC version), but not the energy use of the printer. However, on sunny days, I have a significant surplus of electricity, which suggests that I could also operate the external screen and the printer. Even on partly cloudy days energy is abundant.
However, energy use remains too high during overcast days, when power production is between 40 and 200 Wh per day. Obviously, adding more solar panels and batteries would solve the issue, but that's not the way to go because the solar PV system would become more expensive, less practical, and less sustainable.
On sunny or partly cloudy days, I have more than enough electricity. On overcast days, I have to reduce energy demand.
To guarantee a daily 350 Wh of electricity during three consecutive heavy overcast days in December (a worst case scenario of only 40 Wh energy production per day), I would need to increase solar power capacity fourfold, from 50 to 200W peak capacity, and provide at least five times more batteries.
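The worst-case sizing exercise, using the figures from the text:

```python
# Three consecutive heavily overcast December days, worst case.
days = 3
demand_wh_day = 350
overcast_production_wh_day = 40

deficit_wh = days * (demand_wh_day - overcast_production_wh_day)
print(deficit_wh)  # 930 Wh must come from storage

current_storage_wh = 150  # usable storage of the existing battery bank
print(deficit_wh / current_storage_wh)  # ~6x, i.e. "at least five times more"
```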
Although it would be possible to install 200W on the window sills, the solar panels would then block solar light and heat from entering the rooms, which would be counterproductive. Furthermore, I would produce way too much electricity for most of the year.
3. Adjust Energy Demand to Meet Available Supply
The solar charge controller and half of the home office battery storage.
There's another option to make the numbers match if there's not enough sun available, and that's using less energy. Suggesting a reduction in energy use is rather controversial, but there are a surprisingly large number of ways to reduce energy use, without having to revert to a typewriter and candles. Here are some possibilities for my home office:
- I could install a second working desk right next to the window. This eliminates the need for artificial lighting on dark winter days, which saves me at least another 40 Wh on days that electricity production is at its lowest.
- I could use fewer lights in the evening on low solar power days. For most of the year, I have sufficient energy available to use all the lights in the room. However, most days I get by with only two lamps, and if necessary I could use a single 5W or even 3W lamp. When solar production is at its lowest, the latter still gives me more than 13 hours of light. I will never have to spend a night in the dark.
- I could shift loads towards sunny afternoons. Even in winter, the batteries can already be fully charged by around 2 or 3 pm on sunny days. Adding extra load to the system during these periods takes advantage of solar energy that would otherwise get wasted. This is when I can charge the bicycle lights, the digital camera or the phone, or when I can use the 12V soldering iron (my only power tool) or the printer. In summer I can use the surplus of energy to power two small USB-fans, and of course that's the time when I need the fans the most.
There are a surprisingly large number of ways to reduce energy use, without having to revert to a typewriter and candles
- I could change my working schedule. If I managed to work from 9 am to 6 pm instead of in the morning and evening, I would obtain a double energy saving. First, I would need no more lighting, except for an hour or so in winter (which saves 70 to 80 Wh/day). Secondly, I would use more electricity while it's being generated, avoiding the 20% battery charging and discharging losses incurred by operating the laptop at night and in the morning (which saves another 30 Wh). Changing my working schedule would lower daily electricity use to roughly 125 Wh, less than half of maximum power production. Furthermore, all battery capacity would be available for overcast days, because there would be no energy use at night.
- I could adapt computer work to solar conditions. There's a remarkable difference in power use for the laptop between writing (roughly 15W) and surfing the web (roughly 25W). In other words, I can work almost twice as long when I'm writing, which I could do whenever available energy is low.
- I could ditch my external computer screen. It can be very handy for some work, giving both a screen to read and a screen to write, but most of the time it's just wasting energy without being very useful. Ditching the external screen would save me another 150 Wh per day. However, it would probably increase the use of the printer, so it remains to be seen if this option really saves energy.
- During consecutive, heavy overcast days, I could revert to more drastic measures, like working in the library or not working at all. Or, I could do work that doesn't involve any energy use during the day, such as reading books and taking notes by hand. This would bring extra advantages; it can be refreshing to disconnect and concentrate on something in the old-fashioned way. Going out one evening is a fun and easy way to keep the power level high enough during periods of bad weather.
- I could build a pedal powered generator for when I really need more electricity during overcast days. Strictly speaking, this is not a reduction of energy demand, but of course it implies an effort from my side. Pedalling for 1 to 1.5 hours would generate roughly 100 Wh of electricity, which would allow me to work on the computer for 3 to 5 hours, or to operate the 5W LED-light throughout the night.
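The arithmetic behind that estimate can be sketched as follows; the roughly 20% battery round-trip loss and the 15 to 25 watt laptop draw are figures mentioned elsewhere in this article, not precise measurements:

```python
# Rough estimate of laptop runtime gained from 1 to 1.5 hours of pedalling.
# Assumes ~100 Wh generated and a ~20% battery charge/discharge loss.
generated_wh = 100
battery_efficiency = 0.8                       # ~20% round-trip loss (lead-acid)
usable_wh = generated_wh * battery_efficiency  # 80 Wh actually available

laptop_writing_w = 15   # power draw while writing
laptop_surfing_w = 25   # power draw while browsing the web

hours_surfing = usable_wh / laptop_surfing_w   # ~3.2 hours
hours_writing = usable_wh / laptop_writing_w   # ~5.3 hours
print(round(hours_surfing, 1), round(hours_writing, 1))
```

Depending on how the laptop is used, the result is indeed in the range of 3 to 5 hours of computer work per charge.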
By keeping an eye on my barometer and being a bit flexible, it's not that hard to plan work according to the weather. However, until now I have managed to take advantage of these opportunities mostly when it comes to lighting, and less so when using the laptop. This has nothing to do with computer use being less flexible than lighting; rather, it's a consequence of how the system is built.
This became clear due to the rather clumsy way that I set up my experiment. Obviously, I wanted to test the installation in the depth of winter before writing about it. However, I only had two solar panels at the time. Therefore, I first tested my solar powered home office by running the laptop on solar energy for two weeks (while running the lights on grid power), followed by a two-week test of running the lights on solar energy (and the computer on grid power).
For lighting, it was impossible to fall back on grid power, because I had to cut the power cords of all lamps to make them compatible with the 12V DC grid
The results were remarkably different for both periods. With the laptop, I could always fall back on grid power by simply switching the power cord. Consequently, there were no external factors that forced me to change my way of working in order to remain within the limits of the energy budget on a dark day. For lighting, however, it was impossible to fall back on grid power. I had to cut the power cords of all lamps to make them compatible with the 12V DC grid, which meant that I could not run them on AC grid power anymore.
During low power periods, I had no choice but to lower energy demand for lighting, and that's exactly what I did, quite effortlessly I must say. I quickly set up an extra desk at the window to avoid using artificial lights in the morning, I switched the lights off whenever I left the room, and I worked with just a 5W or even a 3W light bulb if necessary.
Five months later, I have become totally accustomed to adjusting light levels to available solar power. On the other hand, I keep plugging my laptop into the grid if energy runs low. Why? Because I can. 
The new desk at the window.
Consequently, going off-grid seems to be the key to lowering energy demand considerably. Having a limited energy supply also encourages the use of more energy efficient technology. For example, the energy savings I made by replacing the two remaining CFL lamps with LEDs could also have been achieved without building a solar PV system. However, the option only occurred to me after I took up the challenge of powering the office with solar energy.
Progress in energy efficient technology will steadily increase the possibilities of my off-grid system, with no risks of rebound effects
If I were not able to fall back on grid power, I would probably also get a more energy efficient laptop. In the future, I could also switch to lithium-ion batteries, which have lower losses than lead-acid batteries. Investing in more energy efficient technology would allow me to run the computer and the lights longer with the same number of solar panels. With a limited power supply, there's no risk of rebound effects that negate these benefits.

Build Multiple Solar PV Systems
As mentioned at the beginning of the article, the solar powered home office is only the first step towards converting the whole apartment to solar power and going totally off-grid. The second project will be the installation of a solar system on the 6-metre long balcony in front of the living room and the (open) kitchen. It will power the lights, the stereo-installation, the Wifi-router, all computer use outside the office, and all kitchen appliances.
The bedroom's electricity system.
This second experiment is much more challenging for two reasons. First, the living room and kitchen will also be used by the second person in this household, which will make it more complicated to manage energy use. Second, although we don't have a toaster, a coffeemaker, or a microwave, the kitchen houses a much-used high-power appliance: the electric cookstove.
Because it would take too many solar panels and batteries to power the electric cookstove by solar PV panels, the plan is to replace it with non-electric alternatives: one or two solar cookers, a fireless cooker, and a rocket stove for the morning coffee. By using direct solar heat, we can make much more efficient use of the space on the balcony. Another plan is to build a low-tech food storage system that can store most of the food outside the refrigerator, keeping this energy-intensive appliance as small as possible or eliminating it altogether.
The balcony solar PV system will be totally independent of the window sill solar PV system
The balcony solar PV system will be totally independent of the window sill solar PV system. There are several advantages to this approach. As we have seen in the previous article, cable losses are relatively high in a low-voltage DC system. Setting up several independent systems greatly limits cable length (and cable clutter).
Secondly, installing separate systems allows total power use to surpass 150 watts -- the safety limit for a 12V DC system. Thirdly, multiple systems make it possible to start small and gradually expand the system. This avoids large upfront costs and allows you to learn from your mistakes.

Learning from your Mistakes
In fact, it was one such mistake that made me decide to install two separate systems even in my relatively small 10 m2 home office. The two solar panels in front of the office are connected to half of the batteries (powering the lights), while the three solar panels in front of the bedroom are connected to the other half (powering the laptop).
This is because I short-circuited my first solar charge controller and had to buy a second one while the first one was being repaired. It was that or go without lights for three weeks. Thus, a final advantage of multiple systems is increased reliability: if one system fails, there is still electricity.
If the second experiment succeeds, and of course this remains to be seen, the plan is to stop the contract with our power provider, which is to be renewed in December. Obviously, it would be handy to keep a connection to the grid, but there are two important reasons not to do this. The first has been outlined above: going off-grid unleashes the creativity and willingness to lower energy demand.
Secondly, installing a solar system and holding on to a grid connection is financially disadvantageous. At least here in Spain, more than two-thirds of the electricity bill consists of fixed costs. Even if we used much less grid electricity because of the solar system, our bill would remain more or less the same.
If the second experiment succeeds, and of course this remains to be seen, the plan is to stop the contract with our power provider
Some important challenges remain, most notably the washing machine, the bathroom and the laser printer. The problem with the washing machine and the bathroom is that they're on the north side of the building, far from the solar panels. We could go to a laundromat, but there are none in town. A pedal powered washing machine requires space that we don't have.
The laser printer could be operated with an inverter, which can also be handy to power any other occasional device that doesn't run on 12V DC power. However, a relatively large and expensive inverter would be needed, because the startup power of the machine is above 400 watts. Luckily, I found that out before I fried another costly device.

Before You Start
There are some things to keep in mind before you decide to install a low-tech solar PV system:
- You need enough sun. Solar panels on balconies and window ledges won't work everywhere. A system similar to mine, but 1,000 km further north, would produce on average only half the electricity, with a much larger difference between winter and summer.
- You need the right exposure. Even if you're in a sunny climate, don't think of harvesting solar power if windows or balconies are oriented towards the north, the northwest, or the northeast. Shading by other buildings or trees can also smother your ambitions. You need at least 4 hours of direct sunshine on the panels each day.
- You need to be prepared to lower your energy use. Few apartment dwellers will have enough space available to generate sufficient solar power for an energy-intensive lifestyle.
- It may be impossible to close some windows completely. The cables from the panels enter my apartment by slightly opening the sliding window of the office. In winter, I cover this gap with cork. I don't use heating so no energy gets lost, but this might be problematic in other circumstances. You probably shouldn't drill a hole through the window or the wall if you are renting the place.
- Converting your apartment to solar power doesn't make you "100% sustainable". Fossil fuels are used to produce solar panels and batteries. The electricity I generate is likely more CO2-intensive per kWh than Spanish grid electricity, especially since my panels and batteries are manufactured in China. The only reason why my system is more sustainable than using grid electricity is because it forces me to lower electricity use considerably.
- Slow electricity: the comeback of DC power?
- Reinventing the greenhouse
- Off-grid: how sustainable is stored sunlight?
- How sustainable is Solar PV power?
- Restoring the old way of warming: heating people, not places
- If we insulate our houses, why not our cooking pots?
- How to make everything ourselves: open modular hardware
- The solar envelope: how to heat and cool cities without fossil fuels
Notes & Sources:
 Renewable Power Generation Costs in 2014 (PDF), International Renewable Energy Agency (IRENA), January 2015
 Photovoltaic System Pricing Trends: Historical, Recent, and Near-Term Projections (PDF), 2014 Edition, SunShot, U.S. Department of Energy, September 2014
 Soft costs account for most of PV residential installation costs, PV Magazine, December 2013
 Spain's Photovoltaic Revolution: The Energy Return on Investment (SpringerBriefs in Energy), Pedro A. Prieto & Charles A. Hall, 2013
 Power Density: A Key to Understanding Energy Sources and Uses (MIT Press), Vaclav Smil, 2015
 Provided that total power use is below 100-150 watts (which corresponds to between 8 and 12 amperes for a 12V system). Also make sure to properly fuse your solar PV system to avoid electric fires.
 Laptop use is further complicated by the laptop battery. If the battery is not 100% charged, the computer will automatically try to charge it when you connect it to the solar system. However, power use of the laptop triples during charging, and unless there is full sun on the panels my system refuses to provide this amount of power. I "solved" this by keeping the battery 100% charged.
 There is interesting academic research about the relationship between energy infrastructures and energy demand, which we will discuss in a forthcoming article.
 Note that most energy use of a laptop is in the manufacturing. Switching to a more energy-efficient laptop isn't always a sustainable choice. Buying a second-hand device could be a solution.
A solar panel produces direct current (DC) electricity, which is converted to alternating current (AC) for distribution in the building. Because many modern devices operate internally on DC, the AC electricity is then converted back to DC electricity by the adapter of each device.
This double energy conversion, which generates up to 30% of energy losses, can be eliminated if the building's electrical distribution is converted to DC. Directly coupling DC power sources with DC loads can result in a significantly cheaper and more sustainable solar system. However, some important conditions need to be met in order to achieve this goal.
Picture: Brighton Electric Light Station, 1887. Stationary steam engines drive DC generators by means of leather belts. Source.
Electricity can be produced and distributed using alternating current or direct current. In the case of AC electricity, the current changes direction periodically, while the voltage reverses along with the current. In the case of DC electricity, the current flows in one direction and voltage remains constant. When electrical power transmission was introduced in the last quarter of the nineteenth century, AC and DC were competing to become the standard power distribution system -- a period in history known as the "war of currents".
AC won, mainly because of its higher efficiency when transported over long distances. Electric power (expressed in watts) equals current (expressed in amperes) multiplied by voltage (expressed in volts). Consequently, a given amount of power can be produced by a low voltage with a higher current or by a high voltage with a lower current. However, power loss due to resistance is proportional to the square of the current. Therefore, high voltages are the key to energy efficient power transmission over longer distances.
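A quick back-of-the-envelope calculation illustrates the point; the line resistance and power figures below are arbitrary illustrative values, not data from any particular grid:

```python
# Resistive loss in a transmission line: P_loss = I^2 * R, with I = P / V.
def line_loss_watts(power_w, voltage_v, resistance_ohm):
    current_a = power_w / voltage_v
    return current_a ** 2 * resistance_ohm

resistance = 1.0    # ohms, illustrative line resistance
power = 10_000      # watts to deliver

low_v = line_loss_watts(power, 100, resistance)     # 10 kW at 100 V
high_v = line_loss_watts(power, 1_000, resistance)  # same power at 1,000 V
print(low_v, high_v)  # a tenfold higher voltage cuts losses a hundredfold
```

Stepping the voltage up by a factor of ten reduces the current tenfold and the resistive loss a hundredfold, which is exactly what the AC transformer made easy.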
The invention of the AC transformer in the late 1800s made it possible to easily step up the voltage in order to carry power over long distances, and then step it back down again for local use. DC electricity, on the other hand, couldn't be converted efficiently to high voltages until the 1960s. Consequently, it was impossible to transmit power effectively over long distances (> 1-2 km).
Illustration: Brush Electric Company's central power plant dynamos powered arc lamps for public lighting in New York. Beginning operation in December 1880 at 133 West Twenty-Fifth Street, it powered a 2-mile (3.2 km) long circuit. Source: Wikipedia Commons.
A DC power network implied the installation of relatively small power plants in every neighbourhood. This was not ideal because the efficiency of the steam engines that powered the dynamos depended on their size -- the larger a steam engine, the more efficient it becomes. Furthermore, steam engines were noisy and produced air pollution, while the low transport efficiency of DC power excluded the use of more distant, clean hydro power sources.
More than a hundred years later, AC still constitutes the basis of our power infrastructure. Although high-voltage DC has been gaining ground for long-distance transmission, all electrical distribution in buildings is based on alternating current, either at 110V or 220V. Low voltage DC systems have survived in cars, trucks, motorhomes, caravans and boats, as well as in telecommunication offices, remote scientific stations, and emergency shelters. In most of these examples, devices are powered by batteries that operate on 12V, 24V or 48V DC.
Renewed Interest in DC Power
Recently, two converging factors have renewed interest in DC power distribution. First, we now have better alternatives for decentralized power generation, the most significant of these being solar PV panels. They don't produce pollution and their efficiency is independent of their size. Because solar panels can be located right where energy demand is, long distance power transmission isn't a requirement. Furthermore, solar panels "naturally" produce DC power, and so do chemical batteries, which are the most practical storage technology for PV systems.
Solar PV panels naturally produce DC power, and a growing share of our electric appliances operate internally on direct current
Secondly, a growing share of our electrical appliances operate internally on DC power. This is true for computers and all other electronic gadgets, as well as for solid state lighting (LEDs), flat screen televisions, stereo equipment, microwave ovens, and an increasing number of devices operated on DC motors with variable speed operation (fans, pumps, compressors, and traction systems). Within the next 20 years, we could see as much as 50% of the total loads in households being made up of DC consumption.
In a building that generates solar PV power but distributes it indoors over an AC electrical system, a double energy conversion is required. First, the DC power from the solar panel is converted to AC power using an inverter. Then, AC power is converted back to DC power by the adapters of DC-internal appliances like computers, LEDs and microwaves. These energy conversions imply power losses, which could be avoided if a solar powered building would be equipped with DC distribution. In other words, a DC electrical system could make a solar PV system more energy efficient.
More Solar Power for Less Money
Because the operational energy use and costs of a solar PV system are nil, a higher energy efficiency translates into lower capital costs, as fewer solar panels are needed to generate a given amount of electricity. Furthermore, there is no need to install an inverter, which is a costly device that has to be replaced at least once during the life of a solar PV system. Lower capital costs also imply lower embodied energy: if fewer solar panels and no inverter are required, it takes less energy to produce the solar PV installation, which is crucial to improve the sustainability of the technology.
Fewer solar panels are needed to generate a given amount of electricity
A similar advantage would apply to electrical devices. In a building with DC power distribution, DC-internal electric devices can do away with all the components that are necessary for AC to DC conversion. This would make them simpler, cheaper, more reliable, and less energy-intensive to produce. The AC/DC adapters (which can be housed in an external power supply or in the device itself) are often the life-limiting component of DC-internal devices, and they are quite substantial in size. 
For example, in an LED light, approximately 40% of the printed circuit board is occupied by components necessary for AC to DC conversion. AC/DC adapters have more disadvantages. As a result of a dubious commercial strategy, they are usually specific to a device, resulting in a waste of resources, money, and space. Furthermore, an adapter continues to use energy when the device is not operating, and even when the device is not connected to it.
DC power distribution would make devices simpler, cheaper, more reliable, and less energy-intensive to produce
Last but not least, low-voltage DC grids (up to 24V) are considered safe from shock or fire hazard, which allows electricians to install relatively simple wiring, without grounding or metal junction boxes, and without protection against direct contact. [4, 5, 6] This further increases cost savings, and it allows you to install a solar system all by yourself. We demonstrate such a DIY system in the next article, where we also explain how to obtain DC appliances or convert AC devices to DC.
How Much Energy Can Be Saved?
It's important to note, however, that the energy efficiency advantage of a DC grid is not a given. Energy savings can be significant, but they can also be very small or even turn negative. Whether or not DC is a good choice, depends mainly on five factors: the specific conversion losses in the AC/DC-adapters of all devices, the timing of the "load" (the energy use), the availability of electric storage, the length of the distribution cables, and the power use of the electrical appliances.
Eliminating the inverter results in quite predictable energy savings. It concerns only one device with a rather fixed efficiency (around 90%, although efficiency can plummet to about 50% at low load). However, the same cannot be said of AC/DC-adapters. Not only are there as many adapters as there are DC-internal devices, but their conversion efficiencies also vary wildly, from less than 50% for low power devices to more than 90% for high power devices. [6, 7, 8]
Consequently, the total energy loss of AC/DC-adapters can be very different depending on what kind of appliances are used in a building -- and how they are used. Just like inverters, adapters waste relatively more energy when little power is used, for instance in standby or low power modes. 
The conversion losses in adapters are highest for DVDs/VCRs (31%), home audio (21%), personal computers and related equipment (20%), rechargeable electronics (20%), lighting (18%) and televisions (15%). The electricity losses are lower (10-13%) for more mundane appliances like ceiling fans, coffee makers, dishwashers, electric toasters, space heaters, microwave ovens, refrigerators, and so on.
Lighting and computers (which have high AC/DC-losses) usually make up a great share of total electricity use in offices, shops and institutional buildings. Households have more diverse appliances, including devices with lower AC/DC-losses. Consequently, a DC system brings higher energy savings in offices than in residential buildings.
The largest advantage is in data centers, where computers are the main load. Some data centers have already switched to DC systems, even if they're not powered by solar energy. Because a large adapter is more efficient than a multitude of small adapters, converting AC to DC at a local level (using a bulk rectifier) rather than at the individual servers, can bring energy savings between 5 and 30%. [6, 9] [10, 11]
The Importance of Energy Storage
If we assume an energy loss of 10% in the inverter and an average loss of 15% for all the AC/DC adapters, we would expect energy savings of about 25% when switching to DC distribution in a solar PV powered building. However, such a significant saving isn't guaranteed. To start with, most solar powered buildings are grid-connected. They don't store solar power in on-site batteries, but rely on the grid to deal with surpluses and shortages.
In a net-metered solar powered building, only loads coincident with solar PV output can benefit from a DC grid
This means that excess solar power needs to be converted from DC to AC in order to send it to the electric grid, while power taken from the grid needs to be converted from AC to DC in order to be compatible with the electrical distribution system of the building. Consequently, in a net-metered solar PV powered building, only loads coincident with solar PV output can benefit from a DC grid.
Once again, this means that the efficiency advantages of a DC system are usually larger in commercial buildings, where most electricity use coincides with the DC output from the solar system. In residential buildings, on the other hand, energy use often peaks in mornings and evenings, when little or no solar power is available.
Consequently, there is only a small advantage to obtain from a DC system in a net-metered residential building, as most electricity will be converted to or from AC anyway. A recent study calculated that a DC system could improve the energy efficiency of a solar-powered, net-metered American home on average by only 5% -- the figure is an average for 14 houses across the USA.  
Off-Grid Solar Systems
To realize the full potential of a DC grid, especially when it concerns a residential building, we need to store solar energy in on-site batteries. In this way, the system can store and use power in DC form. Energy storage can happen in an off-grid system, which is fully independent of the grid, but adding some battery storage to a net-metered building also improves the advantage of a DC system. However, energy storage adds another type of energy loss: the charging and discharging losses of the batteries. The round-trip efficiency for lead-acid batteries is 70-80%, while for lithium-ion it's about 90%.
Unfortunately, energy storage adds another type of energy loss -- the charging and discharging losses of the batteries -- and negates the cost advantages of a DC system
Exactly how much energy can be saved with on-site battery storage again depends on the timing of the load. Electricity used during the day -- when the batteries are full -- doesn't involve any battery charging and discharging losses. In that case, the energy savings of a DC system can be 25% (10% for eliminating the inverter and 15% for eliminating the adapters).
However, electricity used after sunset lowers the energy savings to 15% for lithium-ion batteries and between -5% and +5% for lead-acid batteries. In reality, electricity will probably be used both before and after sunset, so that efficiency improvements will be somewhere between those extremes (-5% to 25% for lead-acid, and 15-25% for lithium-ion).
Kensington Court Station: steam engine, dynamo and batteries. Source: Central-Station Electric Lighting, Killingworth Hedges, 1888.
On the other hand, battery storage brings an additional advantage: there are fewer or -- in a totally independent system -- no additional energy losses for the long-distance transmission and distribution of AC electricity. These losses vary a lot depending on the location. For example, average transmission losses are only 4% in Germany and the Netherlands, but 6% in the US and China, and between 15 and 20% in Turkey and India.
If we add another 7% of energy savings due to avoided transmission losses, an off-grid DC system can bring energy savings between 2% and 32% for lead-acid batteries, and between 22% and 32% for lithium-ion batteries, depending on the timing of the load.
In an off-grid DC system, electricity use can be met with a solar system that's one-fifth to one-third smaller, depending on the type of batteries used
Assuming 50% energy use during the day and 50% energy use during the night, we arrive at a gain of 17% for an off-grid system using lead-acid batteries, and 27% for lithium-ion storage. This means that electricity use can be met with a solar system that is one-fifth to one-third smaller, respectively. Total cost savings will remain a bit larger, because we still don't need an inverter, and installation costs are lower or non-existent.
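The bookkeeping behind these figures can be sketched as follows, using the loss percentages assumed above (10% inverter, 15% adapters, 7% avoided transmission, and battery round-trip losses of up to 30% for lead-acid and about 10% for lithium-ion):

```python
# Net energy savings of an off-grid DC system compared to AC distribution,
# using the loss figures assumed in the article.
inverter = 0.10      # avoided inverter loss
adapters = 0.15      # avoided average AC/DC-adapter loss
transmission = 0.07  # avoided long-distance transmission loss

def savings(battery_loss, night_fraction):
    """Avoided conversion and transmission losses, minus battery losses
    on the share of electricity used after sunset."""
    return inverter + adapters + transmission - battery_loss * night_fraction

# 50% of electricity used at night, worst-case battery losses:
lead_acid = savings(battery_loss=0.30, night_fraction=0.5)
lithium = savings(battery_loss=0.10, night_fraction=0.5)
print(round(lead_acid * 100), round(lithium * 100))  # -> 17 27
```

With all electricity used during the day (`night_fraction=0`) both battery types reach the 32% upper bound; shifting all use to the night brings lead-acid down to 2% and lithium-ion to 22%, matching the ranges given above.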
Unfortunately, introducing on-site electricity storage raises capital costs again, because we need to invest in batteries. This negates the cost advantage we obtained by choosing a DC system. The same goes for the energy invested in the production process: an off-grid DC system requires less energy for the manufacturing of solar panels, but it instigates at least as much energy use for the manufacturing of batteries.
However, we should compare apples to apples: a DC off-grid solar system is cheaper and more energy efficient than an AC off-grid system, and that's what counts. The life cycle analyses of net-metered solar systems do not represent reality, because they ignore an essential component of solar energy systems.
There's one more important thing to consider, though. As we have seen, power loss due to resistance is proportional to the square of the current. Consequently, low-voltage DC grids have relatively high cable losses within the building. There are two ways in which cable losses can make a choice for a DC system counterproductive. The first is the use of high power devices, and the second is the use of very long cables.
The energy loss in the cables equals the square of the current (in amperes), multiplied by the resistance (in ohms). The resistance is determined by the length, the diameter, and the conducting material of the cables. A copper wire with a cross section of 10 mm2, distributing 100 watts of power at 12V (8.33 A) over a distance of 10 metres, yields an acceptable energy loss of 3%. However, with a cable length of 50 metres, the energy loss becomes 16%, and at a length of 100 metres it adds up to 32% -- enough to negate the efficiency advantages of a DC grid even in the most optimistic scenario.
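These figures can be reproduced with a short calculation; it assumes a copper resistivity of roughly 0.023 ohm·mm²/metre (a warm wire -- the exact value depends on temperature) and counts the cable run twice, because the current travels out and back:

```python
# Cable loss in a low-voltage DC circuit: P_loss = I^2 * R,
# with R = resistivity * wire_length / cross_section.
RESISTIVITY = 0.023  # ohm * mm^2 / m, copper (assumed value, warm wire)

def cable_loss_fraction(power_w, voltage_v, distance_m, cross_section_mm2):
    wire_length_m = 2 * distance_m  # current travels out and back
    resistance = RESISTIVITY * wire_length_m / cross_section_mm2
    current_a = power_w / voltage_v
    return current_a ** 2 * resistance / power_w

# 100 W at 12 V over a 10 mm2 copper cable:
for metres in (10, 50, 100):
    loss = cable_loss_fraction(100, 12, metres, 10)
    print(f"{metres} m: {loss:.0%}")  # -> 3%, 16%, 32%
```

The same function shows why thicker cables or shorter runs matter so much in a 12V system: the loss grows linearly with length but with the square of the current.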
The relatively high energy losses in the cables limit the use of high power appliances
The relatively high cable losses also limit the use of high power appliances. If you want to run a 1,000 watt microwave on a 12V DC grid, the energy losses add up to 16% with a cable length of only 1 metre, and jump to 47% with a cable length of 3 metres.
Obviously, a low-voltage DC grid is not suited to power devices such as washing machines, dish washers, vacuum cleaners, electric cookers, electric ovens, or warm water boilers. Note that power use and not energy use is important in this regard. Energy use equals power use multiplied by time. A refrigerator uses much more energy than a microwave, because it's on 24 hours per day, but its power use can be small enough to be operated on a DC grid.
Cable losses also limit the combined power use of low power devices. If we assume a 12V cable distribution length of 12 metres, and we want to keep cable losses below 10%, then the combined power use of all appliances is limited to about 150 watts (8.5% cable loss). For example, this allows the simultaneous use of two laptops (20 watts of power each), a DC refrigerator (45 watts), and five 8 watt LED-lamps (40 watts in total), which leaves another 25 watts of power for a couple of smaller devices.
How to Limit Cable Losses
There are several ways to get around the distribution losses of a low-voltage DC system. If it concerns a new building, its spatial layout could significantly limit the distribution cable length. For example, Dutch researchers managed to reduce total cable length in a house from 40 metres to 12 metres. They did this by moving the kitchen and the living room (where most electricity is used) to the first floor, just below the roof (where the solar panels are), while moving the bedrooms to the ground floor. They also clustered most appliances in the central part of the building, right below the solar panels (see the illustration below).
Another way to reduce cable losses is to set up several independent solar systems, each serving one or two rooms. This might be the only way to solve the issue in a larger, existing building that's designed without a DC system in mind. While this strategy implies the use of extra solar charge controllers, it can greatly reduce the cable losses. This approach also allows the combined power use of all appliances to surpass 150 watts.
Setting up independent solar systems per one or two rooms is one way to limit cable losses and increase total power use
A third way to limit cable losses is to choose a higher voltage: 24 or 48V instead of 12V. Because the energy losses increase with the square of the current, doubling the voltage from 12 to 24V makes cable losses four times smaller, and switching to 48V decreases them by a factor of sixteen. This approach also allows the use of higher power devices and increases the total power that can be used by a DC system. However, higher voltages also have some disadvantages.
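A minimal sketch of this scaling: for a given power, the current is inversely proportional to the voltage, so the resistive loss falls with the square of the voltage ratio.

```python
# Cable losses scale with the square of the current, i.e. with (1/V)^2
# for a given power. Loss reduction relative to a 12V system:
base_voltage = 12
for voltage in (12, 24, 48):
    factor = (voltage / base_voltage) ** 2
    print(f"{voltage} V: losses {factor:.0f}x smaller")
```

Doubling the voltage quarters the losses; quadrupling it (12 to 48V) divides them by sixteen.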
First, most low-voltage DC appliances currently on the market operate on 12V, so that the use of a 24 or 48V DC network involves the use of more DC/DC-adapters, which step down the voltage and also have conversion losses. Second, higher voltages (above 24V) eliminate the safety advantages of a DC system. In data centers and offices, as well as in the American residential buildings in the study mentioned earlier, DC electricity is distributed throughout the building at 380V, but this requires just as stringent safety measures as with 110V or 220V AC electricity. 
Shortening cable length or doubling the voltage to 24V still doesn't allow for the use of high power devices like a microwave or a washing machine. There are two ways to solve this issue. The first is to install a hybrid AC/DC-system. In this case, a DC grid is set up for low power devices, such as LED-lights (< 10 watt), laptops (< 20 watt), a television (30-90 watt) and a refrigerator (<50 watt), while a separate AC grid is set up for high power devices. This is the approach for homes and small offices that's promoted by the EMerge Alliance, a consortium of manufacturers of DC products, which devised a standard for a 24V DC / 110-220V AC hybrid system. 
Low power devices are (on average) responsible for 35-50% of total electricity use in a home. Even in the best-case scenario (50% of the load), a hybrid system halves the energy efficiency gains we calculated above, which leaves us with an energy savings of only 8.5% to 13.5%, depending on the types of batteries used. These figures will be lower still due to cable losses. In short, a hybrid AC/DC system brings rather small energy savings that could easily be erased by rebound effects.
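The arithmetic behind these figures can be checked in a couple of lines (the percentages are the ones quoted above):

```python
# A check of the savings arithmetic: if low power devices are at most
# half of the load, an all-DC saving of 17-27% shrinks proportionally.
full_dc_savings = (0.17, 0.27)  # savings range for an all-DC house
dc_share = 0.5                  # best case: half the load runs on DC

hybrid_savings = [s * dc_share for s in full_dc_savings]
print([round(x, 3) for x in hybrid_savings])  # [0.085, 0.135] -> 8.5% to 13.5%
```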
The second way to solve the problem of high power devices is simply not to use them. This is the approach that's followed in sailboats, motorhomes and caravans, where a supporting AC distribution system is simply not an option. This is the most sustainable solution to the limits of DC power, because in this case the choice for DC also results in a reduction of energy demand. Total energy savings could thus become much larger than the 17-27% calculated above, and then we finally have a radically better solution that could make a difference.
One way to solve the problem of high power devices is simply not to use them -- this is the approach that's followed in sailboats, motorhomes and caravans
Obviously, this strategy implies a change in our way of life. It would mean that electricity is used only for lighting, electronics and refrigeration, while non-electric alternatives are chosen for all other appliances. Not coincidentally, this is quite similar to how DC grids were operated in the late nineteenth century, when the only electric load was for lighting -- first arc lamps and later incandescent bulbs.
Thus, no dishwasher, but doing the dishes by hand. No washing machine, but doing the laundry in a laundromat or with a manually operated machine. No tumble dryer, but a clothes line. No convenient and time-saving kitchen appliances like electric kettles, microwaves and coffee machines, but a traditional cooking stove operated by (bio)gas, a solar cooker, or a rocket stove. No vacuum cleaner, but a broom and a carpet-beater. No freezer, but fresh ingredients. No electric warm water boiler, but a solar boiler and a small wash at the sink if the sun doesn't shine. No electric car, but a bicycle.
To figure out what's possible, we're converting Low-tech Magazine's headquarters into an off-grid 12V DC system -- more about that in the next post.
- Power Water Networks
- How to Build a Low-tech Internet
- Off-grid: How Sustainable is Stored Sunlight
- Solar PV Power: Why Location Matters
- Back to Basics: Direct Hydropower
- Bike Powered Electricity Generators are not Sustainable
- How to Make Everything Ourselves: Open Modular Hardware
SOURCES & NOTES
 There is an analogy with hydraulic power: electric voltage corresponds to water pressure, while electric current corresponds to water flow. The invention of the hydraulic accumulator in the 1850s allowed higher water pressure and thus efficient transportation of water power over long distances.
 Study and simulation of a DC microgrid with focus on efficiency, use of materials and economic constraints (PDF), Simon Willems & Wouter Aerts, 2013-14
 Direct Current supply grids for LED lighting, LED professional
 DC microgrids scoping study: estimate of technical and economic benefits, Scott Backhaus et al., March 2015
 DC microgrids and the virtues of local electricity, Rajendra Singh & Krishna Shenai, IEEE Spectrum, 2014
 Comparison of cost and efficiency of DC versus AC in office buildings (PDF), Giuseppe Laudani, 2014
 Edison's Revenge, The Economist, 2013
 Catalog of DC appliances and power systems, Karina Garbesi, Vagelis Vossos and Hongxia Shen, 2011
 DC building network and storage for BIPV integration, J. Hofer et al., CISBAT 2015, 2015
 However, DC power in data centers will not bring us a less energy-hungry internet -- on the contrary.
 Also note that the efficiency of AC/DC adapters could be improved significantly, especially for low power devices. Many "wall warts" are needlessly wasteful because manufacturers of electric appliances want to keep costs down. If this were to change, for example because of new laws, the advantage of switching to a DC grid would become smaller.
 Energy savings from direct-DC in US residential buildings, Vagelis Vossos et al., in Energy and Buildings, 2014
 In this study, the buildings use a combination of 24V DC for low power loads, and 380V DC for high-power devices and for distributing DC power throughout the house to limit cable losses.
 Electric power transmission and distribution losses (% of output), World Bank, 2014
 Rural areas usually have higher losses than urban areas, and a lone subdivision line that radiates out into the countryside can introduce very high losses.
 Concept for a DC low voltage house (PDF), Maaike Friedeman et al, Sustainable building 2002 conference
 A last -- but rather desperate -- way to lower distribution losses is to use thicker cables. The resistance of an electric wire can be decreased not only by shortening the cable, but also by increasing its cross-sectional area (the area of the copper core). For example, if we use 100 mm2 instead of 10 mm2 cables, the cables can be ten times longer for the same energy loss. Distributing 12V DC electricity across 100 metres of cable would then yield an energy loss of only 3%. One problem with this approach is that the cost of electric cable increases linearly with the cross-section. One metre of 100 mm2 cable will cost you about 50 euro, compared to 5 euro for a 10 mm2 cable. Sustainability also suffers, because the higher use of copper has a significant environmental cost. Thick cables are heavy and less manageable, too. Thanks to Herman van Munster and Arie van Ziel for making this clear.
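A quick check of this note's arithmetic, using the standard resistivity of copper (the ~130 W load at 12 V is an assumed figure):

```python
# Arithmetic behind the note: cable resistance is rho * length / area,
# so ten times the copper cross-section permits ten times the length
# for the same loss. The ~130 W load at 12 V is an assumed figure.
RHO_COPPER = 1.68e-8  # ohm-metre, resistivity of copper

def loss_fraction(load_watts, voltage, length_m, area_mm2):
    """Fraction of the load dissipated in a two-conductor cable run."""
    resistance = RHO_COPPER * (2 * length_m) / (area_mm2 * 1e-6)
    current = load_watts / voltage
    return current ** 2 * resistance / load_watts

print(round(loss_fraction(130, 12, 100, 100), 2))  # 0.03 -> ~3% over 100 m
print(round(loss_fraction(130, 12, 10, 10), 2))    # 0.03 -> same loss at 10 m
```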
 Our standards, EMerge Alliance, retrieved April 2016
During the second half of the nineteenth century, water motors were widely used in Europe and America. These small water turbines were connected to the tap and could power any machine that is now driven by electricity.
As we have seen in a previous article, operating motors with tap water was not very sustainable. Because of the low and irregular water pressure of the town mains, these motors used unacceptably high amounts of drinking water.
While the use of water motors in the US came to an end early in the twentieth century, the Europeans found a solution for the high water use of water motors and took hydraulic power transmission one step further.
They set up special "power water" networks, which distributed water under pressure for motive power purposes only, and switched to a much higher and more regular water pressure, made possible by the invention of the hydraulic accumulator.
Almost all these power water networks remained in service until the 1960s and 1970s. Hydraulic power transmission is very efficient compared to electricity when it is used to operate powerful but infrequently used machines, which can be distributed over a geographical area the size of a city.
A hydraulic accumulator. Picture: Les Chatfield.
"The use of water is a curiously neglected subject in the literature of engineering. As a romantic or popular facet of engineering, hydraulic power has never caught the public eye like the steam engine, the locomotive or even the internal combustion engine".
Ian McNeil, Hydraulic Power, 1972
The theoretical basis for hydraulic power transmission was laid in 1647 by French whizz-kid Blaise Pascal. By means of experiments, he discovered that water -- unlike air -- is virtually incompressible and transmits pressure equally in all directions.
The implications of the "hydrostatic paradox" were demonstrated in Pascal's "machine for multiplying forces", illustrated below. It consists of two upright cylinders, connected together by a pipe. The whole system is filled with water and sealed water-tight. One cylinder contains a small diameter plunger, while the other cylinder contains a plunger that has a cross-sectional area 100 times larger.
Pascal demonstrated that if a weight is placed on top of the small piston, it will be able to raise a weight placed on top of the larger piston that is 100 times heavier. Pascal's machine thus allowed forces to be multiplied -- in the example above, the ratio of force output to force input is 100 to 1. In other words, you can produce an output force of 100 kg for an input force of only 1 kg.

A Machine for Multiplying Forces
Force multiplication was anything but new in the 1600s. Simpler devices such as pulleys, gear trains, capstans, winches and treadwheels -- all variations on the 7,000 year old lever -- could also derive a high output force from a small input force. For example, the Romans built cranes with a mechanical advantage of up to 70 to one, meaning that one man exerting a force of only 25 kg could raise a weight of 1.75 tonnes.
However, the hydraulic version of the lever has one outstanding advantage over earlier mechanisms: the friction loss is very small and independent of the mechanical advantage. Therefore, the possible multiplication ratio is almost infinitely greater and both pistons may be a considerable distance apart -- up to about 25 km, as we shall see.
In hydraulics, friction loss is independent of the mechanical advantage, so the possible force multiplication ratio is almost infinite
Force multiplication could be increased either by enlarging the ratio between the diameters of the two plungers, or by applying greater power to the smaller piston. In common with the earlier mechanisms, what is gained in mechanical advantage is lost in velocity ratio.
If a small hydraulic force is converted into a larger force, its speed of operation will be reduced in exactly the inverse proportion, because the distance traversed increases in the same proportion as the force. For example, a person pressing down the small piston 10 centimetres would move the other piston up only 1/100th of that distance.
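The trade-off can be expressed in a few lines; the numbers mirror the 100-to-1 example in the text:

```python
# Pascal's hydraulic lever in numbers: pistons whose areas differ
# 100-fold multiply force 100 times and divide travel 100 times.

def hydraulic_lever(input_force, input_stroke, area_ratio):
    """Output force and stroke of an ideal, loss-free hydraulic press."""
    return input_force * area_ratio, input_stroke / area_ratio

force_out, stroke_out = hydraulic_lever(input_force=1, input_stroke=10, area_ratio=100)
print(force_out)   # 100 -> 1 kg of input force balances 100 kg
print(stroke_out)  # 0.1 -> a 10 cm stroke moves the load only 1 mm
```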
Consequently, in a closed system, the heavier weight could be lifted only over a very limited distance, depending on the length of the plunger. However, this limit is removed when more water is added to the system and the smaller piston, instead of coming down just once, makes a number of strokes -- in other words, when it functions as a pump. In this case, the larger piston will keep rising.

The Hydraulic Press
Pascal could only prove his point indirectly, as the available materials at the time were not strong enough to withstand the pressure. It would take another century and a half before hydraulic force multiplication was put into practice. Its first use was not a lifting device, but rather the opposite: the hydraulic press, which generates a compressive force.
The conventional screw press of the time, little developed since the Romans had used it for pressing olives and grapes, required a great effort to operate, had large frictional energy losses (over 80%), and could not exert more than about 25 tonnes of load. (The screw, which converts rotational motion into linear motion, is basically an inclined plane wrapped around a cylinder.)
Left: The screw press. Picture credit: Bruce K. Satterfield. Right: The hydraulic press.
The hydraulic press was invented in 1796 by English locksmith and carpenter Joseph Bramah. It was entirely based on the theoretical work of Pascal. Bramah's hydraulic press, which was driven by a hand-operated pump, brought a large increase in the load that could be exerted by a human.
With the available materials at the time, Bramah achieved an overall ratio of 1,000 to 1, which means that an effective load of 60 tonnes on the lifting piston could be balanced by a mere 60 kg on the pump handle. The efficiency of the hydraulic press was over 90%.

Harbours and Dockyards
In spite of its eminent suitability for crane operation, hydraulics made little progress in this field during the first half of the nineteenth century. This was largely due to the problem of reliably and efficiently translating the linear motion of a ram to rotary motion of the crane barrel or drum. During the first half of the nineteenth century, cargo handling in harbours, dockyards and railway yards was still done by means of human powered cranes, but the need for taller and stronger cranes was great.
Starting in the 1830s, iron began to be used as a material for ship building, with a parallel growth in the dimensions of ships. Conventional lifting systems were no longer adequate. In most countries, the solution was found in the steam powered crane, which appeared in the 1850s. However, in harbours and dockyards in Britain, a worthy alternative appeared: the water powered crane.
During the first half of the nineteenth century, cargo handling in harbours, dockyards and railway yards was still done by means of human powered cranes
British engineer William Armstrong started designing and operating powerful hydraulic cranes in the 1840s. Fully aware that hydraulics was best adapted to giving a slow, steady motion, Armstrong devised a method of lifting the load in one stroke of a ram or piston, multiplying the motion sufficiently by means of pulleys.
However, his efforts were complicated by the low and irregular pressure of the town mains, which was the power source for these machines. The maximum power output of a water powered machine is determined by water pressure and water flow. In the town mains, water pressure was (and often still is) supplied by a water tower. Because the practical height of a water tower is limited, so is the water pressure. A 50 m (165 ft.) tall water tower can produce a water pressure of about 70 pounds per square inch (psi).
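That figure follows directly from the hydrostatic pressure of a water column, p = ρgh. A quick check:

```python
# Hydrostatic pressure of a water column: p = rho * g * h.
RHO_WATER = 1000.0    # kg/m3
G = 9.81              # m/s2
PA_PER_PSI = 6894.76  # pascals per pound-per-square-inch

def column_pressure_psi(height_m):
    return RHO_WATER * G * height_m / PA_PER_PSI

print(round(column_pressure_psi(50)))  # 71 -> roughly the 70 psi in the text
```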
Consequently, the only way to further increase the power output of a crane running on water from the town mains is to increase the water flow. However, this raises potable water consumption and increases the size and costs of pipes, valves, cylinders, and other parts of the system. Moreover, if there is a higher than average demand for potable water from other users, the water level in a water tower will fall, and so will the water pressure and the power output of the machine.

The Hydraulic Accumulator
In 1851, Armstrong came up with an alternative solution that solved these issues: the hydraulic accumulator. Although much more compact than a water tower, it could produce a regular water pressure of 700 psi or higher -- at least ten times the water pressure in the town mains. This made it possible to produce an order of magnitude more power without raising water consumption or increasing the size of system components.
Armstrong's hydraulic accumulator was a contraption in which a ram or piston exerted pressure on the water in a vertical cylinder. The piston was loaded by dead weight ballast, which generally took on the form of a cylindrical ballast container surrounding the central cylinder (image below, on the left). The container was filled with crushed rock, scrap iron or other ballast material.
Hydraulic Accumulator in Bristol Harbour. Wikipedia Commons. Hydraulic Accumulator, Walsh Bay, Sydney. Source: NSW HSC Online.
For a water pressure of 700 psi the ballast was about 100 tonnes, acting on a ram of about 45 cm in diameter with a vertical stroke of 6 to 7 meters. Another type of accumulator utilised a rectangular platen to support a brickwork ballast (image above, on the right) or steel slabs. Hydraulic accumulators could be set up outdoors, or housed in a purpose designed building.
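These figures can be roughly cross-checked: the pressure under a dead-weight accumulator is simply the ballast weight divided by the ram area. With the approximate numbers quoted (100 tonnes on a 45 cm ram), the result lands in the same high-pressure range as the networks described below; the exact value depends on figures that are only approximate here:

```python
import math

# Pressure under a dead-weight accumulator: ballast weight / ram area.
# Ballast mass and ram diameter are the approximate figures from the text.
def accumulator_pressure_psi(ballast_kg, ram_diameter_m):
    area_m2 = math.pi * (ram_diameter_m / 2) ** 2
    pressure_pa = ballast_kg * 9.81 / area_m2
    return pressure_pa / 6894.76

print(round(accumulator_pressure_psi(100_000, 0.45)))  # ~895 psi
```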
In comparison with a water tower, a hydraulic accumulator could deliver ten times more power, and maintain an even pressure all over the network
The workings of the hydraulic accumulator are somewhat similar to those of a water tower. The central cylinder has a water inlet and outlet at the bottom. Water from the docks could be pumped in through the inlet by a steam powered pump, raising the piston, while it could be pushed out through the outlet into the mains for distribution, lowering the piston.
Energy was stored by the upward movement of the ram and recovered upon its descent. The pumping rate of the steam engine was regulated as a function of the water level in the accumulator, either automatically via mechanical linkages or by a human operator.
Contrary to a water tower, however, the accumulator could maintain an even pressure all over the system regardless of the volume of water in the cylinder, because it's the weight of the ballast and not the weight of the water that creates the pressure -- in other words, the hydraulic accumulator gives pressure by load instead of by elevation.
With a charging/discharging efficiency above 98%, and no self-discharge, the hydraulic accumulator was an extremely energy efficient device.

Water Powered Factory Machinery
The introduction of the hydraulic accumulator had two important effects. First, it greatly expanded the range of hydraulically operated machines. The water motors connected to the town mains were household devices and workshop tools. But Armstrong and other engineers adapted high pressure water to a variety of industrial applications that required great power, such as forging, punching, stamping, flanging, shearing and riveting (the predecessor of welding).
In harbours, high pressure water not only operated cranes and hoisting machines handling cargo on docks and in warehouses, but also lock gates, swing bridges, boat lifts, and graving docks. At railway yards, hydraulic power transmission was used for freight handling and for moving railway cars (using hydraulic capstans), as well as for operating turntables, elevators and traversing mechanisms. All these applications of hydraulic power would have been impossible with the low and irregular pressure prevailing on the town mains.
To give an idea of the importance of hydraulic power, it suffices to look once more at the evolution of lifting devices. In 1586, a 344 ton obelisk was moved between squares in Rome. Domenico Fontana, master builder of the Vatican, raised the obelisk with the help of 40 capstans worked by 400 men and 75 horses. In 1878, John Dixon raised another obelisk -- Cleopatra's Needle, weighing 209 tons -- using four hydraulic lifting jacks, worked by four men.

Power Water Networks
Secondly, the hydraulic accumulator made it possible to transmit power efficiently over large distances. For a 30 cm diameter pipeline, the pressure drop in water distribution amounts to about 10 psi per mile, a figure that is independent of water pressure. Thus, if you transmit water with a pressure of 70 psi over a distance of 7 miles (12 km), all energy is lost. But if you transmit water over the same distance with a pressure of 700 psi, a water pressure of 630 psi remains, which comes down to a transmission efficiency of 90%.
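This calculation can be sketched in a few lines (the 10 psi per mile figure is the one quoted above for a 30 cm main):

```python
# Pressure-drop arithmetic from the text: ~10 psi lost per mile in a
# 30 cm main, independent of the working pressure.
def transmission_efficiency(supply_psi, miles, drop_psi_per_mile=10):
    remaining_psi = max(supply_psi - drop_psi_per_mile * miles, 0)
    return remaining_psi / supply_psi

print(transmission_efficiency(70, 7))   # 0.0 -> at 70 psi, nothing arrives
print(transmission_efficiency(700, 7))  # 0.9 -> at 700 psi, 90% arrives
```

Because the drop in psi per mile is roughly constant, raising the working pressure tenfold turns a total loss into a 90% efficient line over the same distance.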
The high transmission efficiency of high-pressure water led to the construction of at least a dozen public power water networks with accumulator storage, half of them in Britain, in which centrally located steam engines pumped water into hydraulic accumulators that distributed high pressure water over a large geographical area. One or more accumulators would be installed at each hydraulic power station and others could be sited at strategic points along the supply main as sub-stations.
The idea of a truly hydraulic power network -- analogous to the electric grid that came a bit later -- was already outlined in an 1812 patent by Joseph Bramah, the inventor of the hydraulic press.
From the 1870s to the 1890s, hydraulic power networks were established in the leading industrial cities of Britain: Kingston upon Hull, London, Liverpool, Birmingham, Grimsby, Manchester and Glasgow. Dock and railway companies pioneered the technology, and remained the most important users for decades.
However, power water also ran manufacturing processes in factories, operated elevators in public, private and commercial buildings, and activated household devices and workshop tools. Anybody who was lucky enough to have a main running through the street could connect to the public network. Power water consumption was metered, as happens today with potable water and electricity.
The idea of a truly hydraulic power network -- analogous to the electric grid that came a bit later -- was already outlined in an 1812 patent by Joseph Bramah, the inventor of the hydraulic press. But Bramah, who also conceived the hydraulic accumulator and the hydraulic crane, was ahead of his time. It took another sixty years before his ideas were brought into practice by Armstrong and his contemporaries.

London Hydraulic Power Company
The most extensive hydraulic power network was built in London, operated by the London Hydraulic Power Company. At the company's peak in 1917, five interconnected central power stations pumped high pressure water into about a dozen hydraulic accumulators and almost 300 km of supply mains, powering more than 8,000 machines and serving most of the city. In London theatres and other cultural buildings, power water moved floors, organ consoles, fire curtains and stages. Water under pressure worked water pumps and lifted the bascules of Tower Bridge.
Fire hydrants were also advantageously served by the high pressure system, and several hundred of them were connected to the London Hydraulic Power Company's mains. These fire-fighting systems increased the pressure of the domestic water mains by injecting a small amount of high pressure water into them, using a jet pump. By itself, water at high pressure from the hydraulic power mains could not be supplied in adequate quantity to have an effect on a large fire, while the domestic supply mains had enough quantity but not enough pressure to reach the top floors of buildings. 
In London, five interconnected central power stations pumped high pressure water into a dozen hydraulic accumulators and almost 300 km of supply mains, powering more than 8,000 machines and serving most of the city.
Another remarkable application of high pressure water in London was the Silent Dustman, a water powered vacuum cleaning system that came on the market in 1910. Several large hotels were completely "wired" for this system: water from the town mains was used in a jet pump to produce a vacuum in a pipe to which the system was to be fitted. Along these pipes were a number of nozzles to which flexible hoses could be fixed. Thus the dirt from the sweepers was drawn into the hydraulic pipe and carried away into the drains. The system, which operated silently and efficiently, remained in operation until 1937.
In London, however, hydraulic power does not seem to have made a great impact on the domestic scene. In The Hydraulic Age (1980), B. Pugh notes that this was "possibly due to the fact that in its day domestic labour was cheap and in plentiful supply. Had present-day conditions operated then possibly the story would have been different since the potentialities of hydraulic power were not less than those of electricity today."
Most public power water networks supplied water under a pressure of 700 to 800 psi (48 to 55 bar), with the exception of Manchester and Glasgow, where water was pressurized to 1120 psi. In these cities, there was a heavy demand for power for hydraulic presses used for baling, an application that required a higher pressure.

Power Networks Outside Britain
The British power systems inspired similar networks elsewhere: Antwerp in Belgium, Buenos Aires in Argentina, and Melbourne and Sydney in Australia. While the Australian systems were reminiscent of those in Britain (with 80 km of mains, the one in Melbourne was the second largest ever built), the Argentinian system was used to pump sewage, and the network in Antwerp was aimed at the combined production of mechanical power and electricity. The latter was an attempt to overcome the very high transmission losses of electricity at the time.
In The Hydraulic Age, B. Pugh writes that:
"For power transmission, the early electric stations were faced with the same difficulties as the hydraulic power stations, their voltage being analogous to working pressure, and voltage drop due to mains resistance analogous to pressure drop due to pipe friction. The early electric public power stations were direct or continuous current stations, the voltage of generation essentially being only slightly higher (by the voltage drop in the cables) than at the consumer's premises which for safety reasons had to be less than 250 volts. Due to voltage limitation, the area of supply as well as the amount of power that could be transmitted was limited."
The network in Antwerp was aimed at the combined production of mechanical power and electricity
Since 1865, Antwerp had been using a high pressure hydraulic network for powering cranes, bridges and sluices in the harbour. To this was added a second network in 1893, which distributed high pressure water to electric substations scattered across the city (twelve according to the plan, but only three were built). There, water turbines generated electricity which was distributed in a radius of 500 m via underground electric conduits -- this was about the distance at which low voltage could be distributed efficiently.
The Antwerp system, which was used for operating street lighting, thus did on a large scale what water motors connected to dynamos did on a small scale with water from the town mains (see the previous article). About 66% of the hydraulic energy was converted to electricity. At its peak, the network reached a length of 23 km with an output of 1200 hp. There were also a number of places in London where consumers ran small electric generators from the hydraulic supply.

Power Water Versus Electricity
The breakthrough in high voltage electric transmission at the turn of the century made systems like those in Antwerp immediately obsolete. The electricity generating part of the network disappeared in 1900. Producing water under pressure in order to produce electricity involves a fourfold energy conversion, which is needlessly wasteful if you can just produce electricity and transport it efficiently.
The expansion of efficient electrical transmission also stopped the construction of other large-scale power water networks before the century was over. "Had these systems been started some years earlier, they might have become vastly more popular", writes Ian McNeil in Hydraulic Power (1972). "A few years later, and they would probably never have been built at all."
However, almost all public power water systems that were built between the 1870s and 1890s remained in service until the 1960s and 1970s, eventually using electric motors instead of steam engines for pumping. The power water network operated by the London Hydraulic Power Company, the last to survive, worked until 1977. Most of the public power water networks kept growing during the first decades of the twentieth century, reaching their heyday at the end of the 1920s. The fatal decline came only when factories started leaving the cities in the 1960s and 1970s.
If electricity is the most efficient and practical way of transmitting and distributing power, then why did almost all power water networks remain in service for almost a century?
This raises two questions. First, why didn't power water become the universal method of power distribution that Joseph Bramah and William Armstrong had envisioned? And second, if electricity is the most efficient and practical way of transmitting and distributing power, then why did almost all power water networks remain in service for almost a century?
As a power transmission technology, power water has three important disadvantages in comparison to electricity. First of all, electricity can be transported efficiently over much longer distances. Hydraulic power transmission was (and still is) at least as efficient as electric power transmission up to distances of 15 to 25 km. Beyond those distances, however, electric transmission is a clear winner.
Greenland dock hydraulic lock gates in London, built in the 1880s. Picture credit: Chris Allen.
A second shortcoming of hydraulic transmission is that a complex distribution network introduces additional energy losses. Every curve or bend in the mains increases friction losses: the more intricate the network, the less efficient it becomes. Electric transmission doesn't have this problem, at least not to a significant degree. The friction losses in the water mains limit the number of machines that can be attached to a power water network, while electricity can be subdivided almost infinitely.
The third limitation of power water is the limited capacity of a hydraulic transmission line. Water under pressure can only be moved through narrow pipes at walking speed if excessive friction losses are to be avoided. At higher speeds, friction losses increase with the square of the velocity and efficiency drops quickly, even over relatively short distances. This limits the flow rate, and thus the power that can be delivered by a hydraulic transmission line.
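The power a main can carry is simply the working pressure times the volumetric flow. A rough sketch with assumed but plausible figures (a 12 cm pipe, water moving at a walking speed of about 1.5 m/s, 800 psi) lands in the range quoted below:

```python
import math

# Power carried by a hydraulic main: working pressure times flow rate.
# Pipe size, flow speed and pressure are assumed, plausible figures.
def line_power_kw(pressure_psi, pipe_diameter_m, velocity_m_s):
    pressure_pa = pressure_psi * 6894.76
    flow_m3_s = math.pi * (pipe_diameter_m / 2) ** 2 * velocity_m_s
    return pressure_pa * flow_m3_s / 1000

print(round(line_power_kw(800, 0.12, 1.5)))  # ~94 kW
```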
Using a 10 to 12 cm diameter pipe -- a common size in most high pressure systems at the time -- a hydraulic transmission line could deliver a maximum continuous power of 115 to 205 horsepower (85 to 150 kW). High voltage electric transmission lines of similar size can carry amounts of power that are orders of magnitude greater.

Advantages of Power Water
However, none of these disadvantages mattered for the power water networks that we have discussed. These were all decentralized systems, with machines no more than 15-25 km away from the power source. Secondly, because the hydraulically operated machinery in harbours, railway yards, factories and buildings was characterized by slow motion and infrequent use, the slow transmission speed of power water presented no obstacle.
With the exception of the short-lived electricity generating system in Antwerp, none of the Armstrong-type power water networks supplied power to a large number of continuously operating machines. (But note the medium pressure power water networks in Switzerland). Lastly, because a power water network operated relatively few (but very powerful) machines, friction loss through bends and curves in the network was limited.
Hydraulic pump, accumulator and press. Source: Portefeuille économique des machines, de l'outillage et du matériel, December 1864, Bibliothèque nationale de France.
The limitations of hydraulic transmission were very well understood at the end of the nineteenth century. However, engineers also grasped the unique benefits of the technology, which still hold today. For example, Robert Zahner, an advocate of yet another alternative to electricity, compressed air, wrote in The Transmission of Power by Compressed Air (1890) that:
"The practical incompressibility of water renders the hydraulic method unfit for transmitting regularly a constant amount of power. It can be used to advantage only where motive power is to be accumulated and applied at intervals, such as raising weights, operating punches, compressive forging and other work of intermittent character, requiring a great force through a small distance."
Hydraulic transmission is "admirably adapted for use with heavy machinery and equipment in operations requiring marked concentration of power, reciprocating straight-line motion, and intermittent action", wrote Louis Hunter in The Transmission of Power (1991). The great strength of the hydraulic accumulator is that it makes it possible to operate machines that require much more power than the energy source can supply -- Pascal's "force multiplication".
When high force or torque are needed, hydraulic power systems are a much more compact and energy efficient solution than mechanical or electric drives. Both electric motors and combustion engines often need mechanical power transmission (gears, chains, belts) to convert their high rotational speed to a slower speed with higher torque.
Likewise, hydraulic power systems easily produce linear motion using hydraulic cylinders, while electric power requires costly linear motors or mechanical power transmissions such as rack-and-pinion assemblies. Hydraulic and electric power are complementary in this sense: one of the limitations of power water transmission was the relative difficulty of converting linear motion to rotary motion.
Pelton wheels were the most obvious choice, but their high rotational speed required gearing for the operation of slow speed machinery. A number of hydraulic engines of the ram type were available to supply rotative power at variable or slow speeds, but these engines had few advantages compared to electric or mechanical drives.
A third important advantage of hydraulics is that the power is always readily available in the pipes and in the accumulator, but when there is no demand there is no waste. When none of the machines in a power water network was in operation, the hydraulic accumulators kept the lines pressurized without using any energy. This advantage is especially relevant when machines are used intermittently.
Hydraulics Today
Hydraulic power is still in use today, especially in heavy industrial equipment that requires a slow but powerful linear motion, and in mobile construction machinery such as excavators. However, the raised-weight hydraulic accumulator and the power water networks have disappeared.
The pressurized fluid is no longer water but oil, mixed with additives. (Vegetable oil had already been used as a hydraulic medium in the 19th century). Unlike water, oil doesn't freeze and is not corrosive. However, it makes hydraulic power more expensive, and the exhaust fluid obviously can no longer be discharged into the sewer network, the docks or the sea.
Partly as a consequence of the switch to oil, the self-contained hydraulic power pack evolved: pump, hydraulic accumulator and return flow system in one unit, ready to be coupled to an electric motor or a diesel engine. The hydraulic accumulators in these systems are much smaller, they use a gas to compress the fluid, and they do not maintain a steady pressure.
Today's hydraulic accumulators (usually compressed gas types) have little in common with the raised-weight accumulators in power water networks. Picture: HYD.
While the practical benefits of hydraulics remain -- a large amount of power can be transferred and controlled precisely using very compact components -- the modern approach erases an important efficiency advantage specific to the more centralized power water networks of the nineteenth and twentieth centuries. In a city-wide power water network, a comparatively small central power source -- a handful of hydraulic accumulators -- could operate a large number of very powerful machines. The pumping engines didn't have to be dimensioned for peak loads.
A great advantage of power water networks was that comparatively little power capacity was required to operate a large number of powerful machines over a wide area.
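The advantage stated above can be illustrated with a toy calculation; the machines and their load profiles below are invented for illustration:

```python
# Load diversity: a central plant only needs to cover the maximum
# *simultaneous* demand, never the sum of the individual peak loads.
# Hypothetical demand (in kW) of three hydraulic machines over five
# time slots.
machines = [
    [0, 40, 0, 10, 0],    # hydraulic press
    [30, 0, 0, 40, 0],    # crane
    [0, 0, 50, 0, 20],    # dock gate
]

# Independent power packs: each must be sized for its own peak.
sum_of_peaks = sum(max(m) for m in machines)                    # 40 + 40 + 50

# One central plant: sized only for the largest combined demand.
simultaneous_peak = max(sum(loads) for loads in zip(*machines))

print(sum_of_peaks, simultaneous_peak)   # -> 130 50
```

Because the peaks rarely coincide, a central plant of 50 kW serves machines that would need 130 kW of individual pumping capacity.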
B. Pugh laments this evolution in The Hydraulic Age (1980):
"One century ago, only a few very large machines -- swing bridges and an occasional hydraulic press -- had their own individual pumping equipment. More recently, this trend spread throughout hydraulically operated machinery of all types and sizes, and is accepted practice today. With unit hydraulic power packs each piece of equipment will be driven by its own motor and will have its own instrumentation, filters, etcetera, which will call for periodic inspection and maintenance."
"The motor will run continuously while the unit is in use regardless of the load on the pump it drives. In the case of a number of such units not all will be working to capacity all the time. Appreciable economy could be effected by having a central pumping plant to supply a number of units and due to the diversification of the load the maximum load at any one time will be less than the sum of the individual maximum loads."
"An advantage of a large station over a number of smaller ones lies in the ability to meet diversity of demand. A number of small, independent power stations must each have sufficient capacity to meet the peak demand of its own area of supply and the peaks will not occur at the same time. A large station, embracing the total area of a number of small stations, will need only to meet the maximum simultaneous demand and this will normally be less than the sum total of the local peaks."Alternatives to Electricity
Just like mechanical power transmission technologies -- such as jerker line systems and endless rope drives -- power water networks have disappeared largely because electric transmission has superior efficiency over long distances. However, in a more decentralized energy system based on renewable energy, all these forgotten alternatives to electricity deserve to be reconsidered for specific purposes. Raised-weight hydraulic accumulators could be solar, wind or even pedal powered.
Around 1900, the superiority of electricity for transmitting power over very long distances was not disputed. For moderate distances, however, quite a few authors doubted its usefulness. For example, R. Kennedy wrote in Modern Engines and Power Generators (1905):
"Electricity offers paramount advantages for power transmission to a distance in most cases. Electrical engineers, however, claim far too much for it. They are apt to forget other means for transmitting power, which means have paramount advantages over electricity in a good many cases."
W.C. Unwin, the author of the most complete nineteenth-century book on power transmission (On the Development and Transmission of Power from Central Stations), expressed a similar concern in 1894:
"Granting that electrical distribution will play an important part before long in the development of systems of power distribution, there is a popular tendency at the moment to regard too exclusively electrical methods, and to overlook other means of power distribution which have been usefully applied in the past, and will, in suitable conditions, be still employed in the future... For transmission to moderate distances there is a choice of several means of transmission, and electrical distribution has not in such cases and up to the present established any universal superiority."
In the next installment of our power transmission series, we will discuss compressed air, which is probably the most usable alternative to electricity.
Kris De Decker
- Power from the Tap: Water Motors
- Back to Basics: Direct Hydropower
- The Mechanical Transmission of Power (3): Endless Rope Drives
- The Mechanical Transmission of Power (2): Jerker Line Systems
- The Mechanical Transmission of Power (1): Stangenkunst
Sources (in order of importance):
- "The Hydraulic Age", B. Pugh, 1980
- "Hydraulic Power (Industrial Archaeology)", Ian McNeil, 1972
- "On the Development and Transmission of Power from Central Stations", W.C. Unwin, 1894. Also here.
- "Hydraulic Machinery, with an introduction to hydraulics", R.G. Blaine, 1897
- "A History of Industrial Power in the U.S., 1780-1930: Vol 3: The Transmission of Power", Louis C. Hunter and Lynwood Bryant (1991)
- "Modern Engines and Power Generators; a Practical Work on Prime Movers and the Transmission of Power, Steam, Electric, Water and Hot Air -- Volume One", R. Kennedy, 1905
- "Modern Engines and Power Generators; a Practical Work on Prime Movers and the Transmission of Power, Steam, Electric, Water and Hot Air -- Volume Six", R. Kennedy, 1905
- "Power and Power Transmission", E.W. Kerr, 1908
- "Remnants of Early Hydraulic Power Systems" (PDF), J.W. Gibson, 3rd Australasian Engineering Heritage Conference 2009
- "L'eau à Genève et dans la région Rhône-Alpes: XIXe-XXe siècles", Serge Paquier, 2007
- "L'eau des villes: Aux sources des empires municipaux", Géraldine Pflieger, 2009
- "Revue technique de l'Exposition universelle de 1889, Section II, récepteurs hydrauliques" (PDF), 1893
- "Revue technique de l'Exposition universelle de 1889, Volume 9. Septième partie. Mécanique générale. Machins outils. Hydraulique générale. Travail du bois. Travail des métaux. Machineries industrielles.", 1893
- "L'usine des forces motrices de la Coulouvrenière à 100 ans: 1886-1986", Services industriels, 1986
- "Waterdruk in Antwerpen. Een stroom van elektriciteit", Dirk De Vleesschauwer and Noël Kerckhaert, 1993
- "Kroniek van de stroomverdeling van Antwerpen-stad tot de Rupelstreek tot de Eerste Wereldoorlog", Geschiedkundige Studiegroep Ten Boome. (website)
- "Het Zuiderpershuis, een monument. Brochure bij de tentoonstelling n.a.v. Open Monumentendag 2010" (PDF), Steunpunt Industrieel en Wetenschappelijk Erfgoed, 2010.
- "The Centrifugal Pump, Turbines, and Water Motors, Including the Theory and Practice of Hydraulics", Charles Herbert Innes, 1898
- "Metropolitan Works: Collected Papers on London History", Ralph Turvey, date unknown.
- "Hydraulic Power Company", The Vauxhall Society, 2012 (website)
- "London Hydraulic Power Co", Grace's Guide, date unknown (website)
- "Hydraulic Power", NSW HSC Online (website)
- "The Transmission of Power by Compressed Air", Robert Zahner, 1890
- "Water Engines", The Museum of Retrotechnology, 2011 (website)
- "The History of Cranes (The Classic Construction Series)", Oliver Bachmann, 1997
- "On the employment of a column of water as a motive power for propelling machinery", William Armstrong, 1840
One of the constraints of solar power is that it is not always available: it is dependent on daylight hours and clear skies. In order to fill these gaps, a storage solution or a backup infrastructure of fossil fuel power plants is required -- a factor that is often ignored when scientists investigate the sustainability of PV systems.
Whether or not to include storage is no longer just an academic question. Driven by better battery technology and the disincentivization of grid-connected solar panels, off-grid solar is about to make a comeback. How sustainable is a solar PV system if energy storage is taken into account?
Picture: Tesla's lithium-ion home storage system.
In the previous article, we have seen that many life cycle analyses (LCAs) of solar PV systems have a positive bias. Most LCAs base their studies on the manufacturing of solar cells in Europe or the USA. However, most panels are now produced in China, where the electric grid is about twice as carbon-intensive and about 50% less energy efficient.  Likewise, most LCAs investigate solar PV systems in regions with a solar insolation typical of the Mediterranean region, while the majority of solar panels have been installed in places with only half as much sunshine.
As a consequence, the embodied greenhouse gas emissions of a kWh of electricity generated by solar PV is two to four times higher than most LCAs indicate. Instead of the oft-cited 30-50 grams of CO2-equivalents per kilowatt-hour of generated electricity (gCO2e/kWh), we calculated that the typical solar PV system installed between 2008 and 2014 produces close to 120 gCO2e/kWh. This makes solar PV only four times less carbon-intensive than conventional grid electricity in most western countries.
However, even this result is overly optimistic. In the previous article, we didn't take into account "one of the potentially largest missing components"  of the usual life cycle analysis of PV systems: the embodied energy of the infrastructure that deals with the intermittency of solar power. Solar insolation varies throughout the day and throughout the season, and of course solar energy is not available after sunset.
Off-grid Solar Power is Back
Until the end of the 1990s, most solar installations were off-grid systems. Excess power during the day was stored in an on-site bank of lead-acid batteries for use during the night and on cloudy days. Today, almost all solar systems are grid-connected. These installations use the grid as if it was a battery, "storing" excess energy during the day for use at night and on cloudy days.
Obviously, this strategy requires a backup of fossil fuel or nuclear power plants that steps in when the supply of solar energy is low or nonexistent. To make a fair comparison with conventional grid electricity, including electricity generated by biomass, this "hidden" part of the solar PV system should also be taken into account. However, every single life cycle analysis of a solar PV system ignores it. [3, 2]
Until now, whether or not to include backup power or storage systems was mainly an academic question. This might change soon, because off-grid solar is about to make a comeback. Several manufacturers have presented storage systems based on lithium-ion batteries, the technology that also powers our gadgets and electric cars. [4, 5, 6, 7] Lithium-ion batteries are a superior technology compared to the lead-acid batteries commonly used in off-grid solar PV systems: they last longer, are more compact, more efficient, easier to maintain, and more sustainable.
Lithium-ion batteries are more expensive than lead-acid batteries, but Morgan Stanley's 2014 report on solar energy predicts that the price of storage will come down to $125-$150 per kWh by 2020. According to the report, this would make solar PV plus battery storage commercially viable in some European countries (Germany, Italy, Portugal, Spain) and across most of the United States. Morgan Stanley expects a lot from electric vehicle manufacturer Tesla, which announced a home storage system for solar power a few days ago (costing $350 per kWh). Tesla is building a factory in Nevada that will produce as many lithium-ion batteries as are currently produced by all manufacturers in the world combined, introducing economies of scale that can push costs further down.
Morgan Stanley expects off-grid solar PV to be commercially viable in some European countries and across most of the USA by 2020
Other factors also come into play when it comes to home storage for PV power. Solar panels have become so much cheaper in recent years that government subsidies and tax credits for grid-connected systems have come under pressure. In many countries, owners of a grid-connected solar PV system have received a fixed price for the surplus electricity they provide to the grid, without having to pay fixed grid rates. These so-called "net metering rules" or "feed-in tariffs" were recently abolished in several European countries, and are now under pressure in some US states. In its report, Morgan Stanley predicts that, in the coming years, net metering rules and solar tax credits will disappear altogether.
A 5 kWh lithium-ion battery pack from Powertech Systems.
Utility companies are successfully fighting the incentivization of PV power with the argument that solar customers make use of the grid but don't pay for it, raising the costs for non-solar customers. The irony is that the disincentivization of grid-connected solar panels makes off-grid systems more attractive, and that utilities might be chasing away their own customers. If a grid-connected solar customer has to pay fixed grid fees and doesn't receive a good price for his or her excess power, it might make financial sense to install a bank of batteries instead. The more customers do this, the higher the costs will become for the remaining consumers, encouraging even more people to adopt off-grid systems.
Lead-Acid Battery Storage
Being totally independent of the grid might sound attractive to many, but how sustainable is a solar PV system when battery storage is taken into account? Because a life cycle analysis of an off-grid solar system with lithium-ion batteries has not yet been done, we made one ourselves, based on some LCAs of stand-alone solar PV systems with lead-acid battery storage.
One of the most complete studies to date is a 2009 LCA of a 4.2 kW off-grid system in Murcia, Spain. The 35 m2 PV solar array is mounted on a building rooftop and supplies a programmed lighting system with a daily constant load pattern of 13.8 kWh. The solar panels are connected to 24 open lead-acid batteries with a storage capacity of 110.4 kWh, offering three days of autonomy.  The study found an energy payback time of 9.08 years and specific greenhouse gas emissions of 131 gCO2e/kWh, which makes the system twice as energy efficient and 2.5 times less carbon-intensive than conventional grid electricity in Spain (337 gCO2/kWh). Manufacturing the batteries accounts for 45% of the embodied CO2, and 49% of the life cycle energy use of the solar system.
Lead-acid batteries easily double the energy and CO2 payback times of a solar PV system
This doesn't sound too bad, but unfortunately the researchers made some pretty optimistic assumptions. First of all, the results are valid for a solar insolation of 1,932 kWh/m2/yr -- Murcia is one of the sunniest places in Spain. At lower solar insolation, more solar panels would be needed to produce as much electricity, so the embodied energy of the total system will increase. If we assume a solar insolation of 1,700 kWh/m2/yr, the average in Southern Europe, GHG emissions would increase to 139 gCO2e/kWh. If we assume a solar insolation of 1,000 kWh/m2/yr, the average in Germany, emissions amount to 174 gCO2/kWh.
Secondly, the researchers assume the lifespan of the lead-acid batteries to be 10 years. For the solar panels, they assume a lifetime of 20 years, which means that they included double the amount of batteries in the life cycle analysis. A lifespan of ten years is very optimistic for a lead-acid battery -- a fact that the scientists admit. Most other LCAs looking at off-grid systems assume a battery life of 3 or 5 years. [14, 15] However, the lifetime of a lead-acid battery depends strongly on use and maintenance. Because of the low load of the system under discussion, a battery lifespan of 10 years is not completely unrealistic.
On the other hand, if the batteries are used for higher loads -- for example, in a common household -- their lifetime would shorten considerably. Because almost 50% of embodied CO2 and life cycle energy use of a PV solar system is due to the batteries alone, the expected lifespan of the 2.4 ton battery pack has a profound effect on the sustainability of the system.
A lead-acid battery system. SuperiorSolar.
If we assume a battery lifespan of 5 instead of 10 years, and keep the other parameters the same, the GHG emissions increase to 198 and 233 gCO2e/kWh for a solar insolation of 1,700 and 1,000 kWh/m2/yr, respectively. In grid-connected solar PV systems, assuming a longer life expectancy for the solar panels improves the sustainability of the system: the embodied energy and CO2 can be spread over a longer period of time. With off-grid systems, this effect is countered by the need for one or more replacements of the batteries.
If we increase the life expectancy of the solar panels from 20 to 30 years, and keep the battery lifespan at 10 years, CO2e emissions per kWh remain more or less the same. However, if we assume a battery lifespan of only 5 years and extend the lifespan of the solar panels to 30 years, GHG emissions would increase to 206 gCO2e/kWh for a solar insolation of 1,700 kWh/m2/yr, and decrease to 232 gCO2e/kWh for a solar insolation of 1,000 kWh/m2/yr.
Made in China
Thirdly, the researchers assume that all components -- PV cells, batteries, electronics -- are made in Spain, while we have seen in the previous article that manufacturing of solar PV systems has moved to China. Spain's electricity grid is 2.7 times less carbon-intensive (337 gCO2/kWh) than China's electric infrastructure (900 gCO2e/kWh), which means that the GHG emissions of all components of our system can be multiplied by 2.7. This results in specific carbon emissions of 353 and 471 gCO2e/kWh for a solar insolation of 1,700 and 1,000 kWh/m2/yr, respectively, which is higher than the carbon-intensity of the Spanish grid. Considering a battery lifespan of 5 instead of 10 years, emissions would rise to 513 and 631 gCO2e/kWh for a solar insolation of 1,700 and 1,000 kWh/m2/yr, respectively.
If solar panels and batteries are produced in China, the CO2-emissions are double those of conventional grid electricity
Although there are some assumptions by the researchers that are less optimistic -- such as a battery recycling rate of only 50% instead of the more commonly assumed +90% -- it's obvious that an off-grid system with lead-acid batteries is not sustainable, and definitely not when the components are manufactured in China. That doesn't make off-grid solar with lead-acid batteries pointless: compared to a diesel generator, a solar PV system with lead-acid batteries is often the better choice, which makes it a good solution for remote areas without access to the power grid. As an alternative for the centralized electricity infrastructure in western countries, however, it makes little sense.
Lithium-ion Battery Storage System
When we replace the lead-acid batteries with lithium-ion batteries, the sustainability of a stand-alone solar PV system improves considerably. At first glance this may seem counterintuitive, because it takes more energy to produce 1 kWh of lithium-ion battery storage than it takes to manufacture 1 kWh of lead-acid battery storage. According to the latest LCAs, aimed at electric vehicle storage, the making of a lithium-ion battery requires between 1.4 and 1.87 MJ/Wh, [16, 17, 18] while the energy requirements for the manufacture of a lead-acid battery are between 0.87 and 1.19 MJ/Wh. [18, 12]
6.6 kWh Energy management and lithium-ion storage solution in one. Bosch Power Tec.
Despite this, the higher overall performance of the lithium-ion battery means that considerably less storage is required. For a prolonged lifetime, lead-acid batteries demand a limited "Depth of Discharge" (DoD). If a lead-acid battery is fully discharged (DoD of 100%) its lifespan becomes very short (300 to 800 cycles, or roughly one to two years, depending on battery chemistry). The lifespan increases to between 400 and 1,000 cycles (1-3 years, assuming 365 cycles per year) at a DoD of 80%, and to between 900 and 2,000 cycles (2.5-5.5 years) at a DoD of 33%. This means that, in order to get a decent lifespan, a lead-acid battery system should be oversized. For example, three times more battery capacity is needed at a DoD of 33%, because two thirds of the battery capacity cannot be used.
Although the lifespan of a lithium-ion battery also decreases when the depth of discharge increases, this effect is less pronounced than with its lead-acid counterpart. A lithium-ion battery lasts 3,000 to 5,000 cycles (8-14 years) at a DoD of 100%, 5,000 to 7,000 cycles (14-19 years) at a DoD of 80%, and 7,000 to 10,000 cycles (19-27 years) at a DoD of 33%.  As a consequence, lithium-ion storage usually has a DoD of 80%, while lead-acid storage usually has a DoD of 33 or 50%. In the LCA of the Spanish off-grid system discussed above, the assumption of three days of autonomy implies that 41 kWh of storage is required (3 x 13.8 kWh per day). Because the DoD is 33%, total storage capacity should be multiplied by three, which results in 123 kWh of batteries. If we would replace these by lithium-ion batteries with a DoD of 80%, only 50 kWh of storage is needed, or 2.5 times less.
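The sizing arithmetic above can be sketched in a few lines of Python; the article rounds 3 x 13.8 = 41.4 kWh down to 41 kWh, which is why it arrives at the slightly lower figures of 123 and 50 kWh:

```python
# Sizing the battery bank of the Spanish off-grid example:
# 3 days of autonomy at 13.8 kWh/day, oversized for the usable
# depth of discharge (DoD) of each battery chemistry.
daily_load_kwh = 13.8
days_of_autonomy = 3
usable_kwh = daily_load_kwh * days_of_autonomy   # ~41 kWh must be usable

lead_acid_kwh = usable_kwh / 0.33   # DoD 33% -> ~125 kWh installed
lithium_kwh = usable_kwh / 0.80     # DoD 80% -> ~52 kWh installed

print(lead_acid_kwh, lithium_kwh)
```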
6 Times Fewer Batteries Needed
For completeness, we should mention that the lifespan of a battery isn't necessarily limited by its cycle life. When batteries are used in applications with shallow cycling, their service life will normally be limited by float life. In this case, the difference between lead-acid and lithium-ion is less pronounced: at no cycling (float charge), lithium-ion lasts 14-16 years and lead-acid 8-12 years. Battery life will be limited by either the cycle life or the float service life, depending on which limit is reached first. Nevertheless, if we focus on off-grid systems for households, the assumption of deep daily cycling better reflects reality, although there will be periods of float charge, for example during holidays.
The total storage capacity to be manufactured over the complete lifetime of a solar PV system is 6 times lower for lithium-ion than for lead-acid
If we also factor in the lifespan of the batteries, the advantage of lithium-ion becomes even larger. Assuming a lifespan of 20 years for the solar PV system and a DoD of 80%, the lithium-ion batteries will last as long as the PV panels. The lead-acid batteries, on the other hand, have to be replaced at least 2-4 times over a period of 20 years. This further widens the gap in manufacturing energy between lead-acid and lithium-ion batteries. In the original LCA, a total storage capacity of about 240 kWh is needed over a lifespan of 20 years, while the cycle life of the lithium-ion battery is 19-27 years, meaning that no replacement may be needed. Consequently, the total storage capacity to be manufactured over the complete lifetime of the system is 6 times lower for lithium-ion than for lead-acid.
E3DC lithium-ion battery system. Picture: Thomas Salzmann.
If we take the most optimistic values for manufacturing energy, being 0.87 MJ/Wh for lead-acid and 1.4 MJ/Wh for lithium-ion, and multiply them by total battery capacity over a lifetime of 20 years (248,000 Wh for lead-acid and 42,000 Wh for lithium-ion), this results in an embodied energy of 60 MWh for lead-acid (the value in the original LCA) and only 16.5 MWh for lithium-ion. In conclusion, the energy requirements for the manufacturing of the batteries are 3.6 times lower for lithium-ion than for lead-acid.
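The same calculation in Python (the small differences from the 16.5 MWh and 3.6x figures in the text are due to rounding):

```python
# Embodied energy of the battery banks over the system's 20-year
# lifetime, using the most optimistic manufacturing figures above.
MJ_PER_MWH = 3600  # 1 MWh = 3,600 MJ

lead_acid_mwh = 0.87 * 248_000 / MJ_PER_MWH   # MJ/Wh x lifetime Wh of storage
lithium_mwh = 1.40 * 42_000 / MJ_PER_MWH

print(lead_acid_mwh, lithium_mwh)    # ~59.9 and ~16.3 MWh
print(lead_acid_mwh / lithium_mwh)   # lithium-ion needs ~3.7x less energy
```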
Another advantage of lithium-ion batteries is that they have a higher efficiency than lead-acid batteries: 85-95% for lithium-ion, compared to 70-85% for lead-acid. Because losses in the battery must be compensated with higher energy input, a higher battery efficiency results in a smaller PV array, lowering the energy requirements to manufacture the solar cells. In the original LCA, 4.2 kW of solar panels (35 m2) are needed to produce 13.8 kWh per day. If we assume the lead-acid batteries to be 77% efficient, and the lithium-ion batteries to be 90% efficient, the choice for lithium-ion would resize the solar PV array from 4.2 kW to 3.55 kW. We now have all the data to calculate the greenhouse gas emissions per kWh of electricity produced by an off-grid solar PV system using lithium-ion batteries.
GHG Emissions of the Off-grid System with Lithium-ion Batteries
In the original LCA, the batteries and the solar panels (including frames and supports) account for 59 and 62 gCO2e/kWh, respectively. The rest of the components add another 10 gCO2e/kWh, resulting in a total of 131 gCO2e/kWh. If we switch to lithium-ion battery storage, the greenhouse gas emissions for the batteries come down from 59 to 20 gCO2e/kWh. Because of the higher efficiency of the lithium-ion batteries, the greenhouse gas emissions for the solar panels come down from 62 to 55 gCO2e/kWh. This brings the total greenhouse gas emissions of the off-grid system using lithium-ion batteries to 85 gCO2e/kWh, compared to 131 gCO2e/kWh for a similar system with lead-acid storage.
While this result is an improvement, it's dependent on the assumptions of the researchers -- most notably, a solar insolation of 1,932 kWh/m2/yr and the manufacturing of all components in Spain. If we adjust the value for a solar insolation of 1,700 kWh/m2/yr in order to compare with the other results, total GHG emissions become 92.5 gCO2e/kWh (assuming battery capacity remains the same). If we correct for a solar insolation of 1,000 kWh/m2/yr, the average in Germany, GHG emissions become 123.5 gCO2e/kWh. Furthermore, if we assume that the solar panels (but not the batteries or the other components) are manufactured in China, which is most likely the case, GHG emissions rise to 155 and 217 gCO2e/kWh for a solar insolation of 1,700 and 1,000 kWh/m2/yr, respectively.
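One simplified way to reproduce the Southern European adjustment above, assuming (as the text does) that battery capacity stays constant, so that only the panels' share of the footprint scales inversely with insolation; the article's other adjusted figures involve further corrections not derived here:

```python
# Component shares (gCO2e/kWh) for the lithium-ion system in the
# original LCA, at Murcia's 1,932 kWh/m2/yr of solar insolation.
panels = 55          # solar panels, frames and supports
batteries = 20
rest = 10

def emissions(insolation_kwh_m2_yr):
    """Total gCO2e/kWh: only the panel share scales with insolation."""
    return panels * 1932 / insolation_kwh_m2_yr + batteries + rest

print(emissions(1700))   # ~92.5 gCO2e/kWh, the figure in the text
```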
Testing a lithium-ion battery. Picture: A123 Systems.
In conclusion, lithium-ion battery storage makes off-grid solar PV less carbon-intensive than conventional grid electricity in most western countries, even if the manufacturing of solar panels in China is taken into account. However, the advantage is rather small, which affects the speed at which solar PV systems can be deployed in a sustainable way. In the previous article, we have seen that the energy and CO2 savings made by the cumulative installed capacity of solar PV systems are cancelled out to some extent by the energy use and CO2 emissions from the production of new installed capacity. For the deployment of solar systems to grow while remaining net greenhouse gas mitigators, they must grow at a rate slower than the inverse of their CO2 payback time. [20, 21, 22]
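The growth rule can be expressed directly; the payback times below are illustrative, not the article's exact figures:

```python
# Rule cited above: a growing PV industry remains a net CO2 mitigator
# only while its annual growth rate stays below the inverse of the
# CO2 payback time of newly installed capacity.

def max_sustainable_growth(co2_payback_years):
    """Maximum annual growth rate (as a fraction) for net mitigation."""
    return 1.0 / co2_payback_years

# A 5-year CO2 payback caps growth at 20%/yr, a 10-year payback at 10%/yr:
print(max_sustainable_growth(5), max_sustainable_growth(10))  # -> 0.2 0.1
```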
Off-grid solar systems with lithium-ion battery storage can have GHG emissions below 30 gCO2e/kWh if they are produced in countries with clean electricity grids, and installed in countries with high solar insolation and carbon-intensive grids.
For solar panels manufactured in China and installed in countries like Germany, the maximum sustainable growth rate is only 16-23% (depending on solar insolation), roughly 3 times lower than the actual annual growth of the industry between 2008 and 2014. If we also take lithium-ion battery storage into account, the maximum sustainable growth rate comes down to 4-14%. In other words, including energy storage further limits the maximum sustainable growth rate of the solar PV industry.
On the other hand, if we were to produce solar panels in countries with very clean electricity grids (France, Canada, etc.) and install them in countries with carbon-intensive grids and high solar insolation (China, Australia, etc.), even off-grid systems with lithium-ion batteries would have GHG emissions of only 26-29 gCO2/kWh, which would allow solar PV to grow sustainably by almost 60% per year. This result is remarkable and shows the importance of location if we want solar PV to be a solution instead of a problem. Of course, whether or not there's enough lithium available to deploy battery storage on a large scale is another question.
Battery Production Powered by Renewable Energy?
Another way to improve the sustainability of battery storage is to produce the batteries using renewable energy. For example, Tesla announced that its "GigaFactory", which will produce lithium-ion batteries for vehicles and home storage, will be powered by renewable energy. [23, 24] To support their claim, Tesla published an illustration of the factory with the roof covered in solar panels and a few dozen windmills in the distance.
However, the final manufacturing process in the factory consumes only a small portion of the total energy cost of the entire production cycle -- much more energy is used during material extraction (mining). It's stated that the GigaFactory will produce 50 GWh of battery capacity per year by 2020. Because the making of 1 kWh of lithium-ion battery storage requires 400 kWh of energy [16, 17, 18], producing 50 GWh of batteries would require 20,000 GWh of energy per year.
If we assume an average solar insolation of 2,000 kWh/m2/yr and a solar PV efficiency of 15%, one m2 of solar panels would generate at most 295 kWh per year. This means that it would take 6,800 hectares (ha) of solar panels to run the complete production process of the batteries on solar power, while the solar panels on the roof cover an area of only 1 to 40 ha (there is some controversy over the actual surface area of the factory under construction). Tesla's claim, though potentially factually accurate, is an obvious example of greenwashing -- and everyone seems to buy it.
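The arithmetic above can be checked in a few lines of Python, using the text's own figures; it lands a little under the article's rounded 6,800 ha:

```python
# Back-of-the-envelope arithmetic behind the GigaFactory figures.
battery_output_gwh = 50       # planned yearly battery production
embodied_kwh_per_kwh = 400    # energy to make 1 kWh of li-ion storage

energy_needed_gwh = battery_output_gwh * embodied_kwh_per_kwh
print(energy_needed_gwh)      # -> 20000 GWh per year

pv_yield_kwh_m2 = 295         # yearly yield per m2 at 2,000 kWh/m2/yr, 15%
area_ha = energy_needed_gwh * 1e6 / pv_yield_kwh_m2 / 10_000
print(round(area_ha))         # -> 6780 hectares of solar panels
```

Set against the 1 to 40 ha of factory roof, the mismatch is three orders of magnitude.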
There are other ways to improve the sustainability of solar PV when storage is taken into account. Most of these solutions require that solar systems remain connected to the grid, even if they have a (more limited) local storage system. In this scenario, chemical batteries could help to balance the grid system, acting as peak-shaving and load-shifting devices. The electric grid has to be sized to meet peak demand, and battery storage could mean that fewer power plants are needed for that. Decentralized, grid-connected energy storage could also increase the share of renewables that the electricity infrastructure can handle. Of course, this "smart grid" approach should also be subjected to a life cycle analysis, including all electronic components.
Kris De Decker (edited by Jenna Collett)
EDIT: the paragraph about Tesla's GigaFactory was rewritten to reflect the fact that most energy is consumed during material extraction.
Sources & Notes:
 Domestic and overseas manufacturing scenarios of silicon-based photovoltaics: life cycle energy and environmental comparative analysis. Dajun Yue, Fengqi You, Seth B. Darling, in Solar Energy, May 2014
 Energy Payback for Energy Systems Ensembles During Growth (PDF), Timothy Gutowski, Stanley Gershwin and Tonio Bounassisi, IEEE, International Symposium on Sustainable Systems and Technologies, Washington D.C., May 16-19, 2010
 "Current State of Development of Electricity-Generating Technologies: A Literature Review", Manfred Lenzen, Energies, Volume 3, Issue 3, 2010.
 "Storage is the new solar: will batteries and PV create an unstoppable hybrid force?", Stephen Lacey, Greentechmedia, 2015
 "Report: Solar Paired with Storage is a 'Real, Near and Present' Threat to Utilities", Stephen Lacey, greentechmedia, 2014
 "Australia to pilot new power plan", Gregg Borschmann, ABC, May 2014
 "SolarCity Launches Energy Storage for Business Using Tesla Battery Packs", Eric Wesoff, greentechmedia, December 2013
 "Solar Power & Energy Storage: Policy Factors vs. Improving Economics" (PDF), Morgan Stanley Blue Paper, July 28, 2014
 "Tesla announces home battery system", Slashdot, May 1, 2015
 "Utilities wage campaign against rooftop solar", Joby Warrick, The Washington Post, March 2015
 "Disruptive Challenges: Financial Implications and Strategic Responses to a Changing Retail Electric Business" (PDF), Peter Kind, Energy Infrastructure Advocates, Edison Electric Institute, January 2013
 You could argue that you also need more battery storage, because there is a bigger chance of cloudy days. However, we assume battery capacity to remain the same.
 "Optimal Sizing and Life Cycle Assessment of Residential Photovoltaic Energy Systems With Battery Storage", A. Celik, in "Progress in Photovoltaics: Research and Applications", 2008.
 "Energy pay-back time of photovoltaic energy systems: present status and prospects", E.A. Alsema, in "Proceedings of the 2nd World Conference and Exhibition on photovoltaics solar energy conversion", July 1998.
 "Towards greener and more sustainable batteries for electrical energy storage", D. Larcher and J.M. Tarascon, Nature Chemistry, November 2014
 "Application of Life-Cycle Assessment to Nanoscale Technology: Lithium-ion Batteries for Electric Vehicles" (PDF), Environmental Protection Agency (EPA), 2013
 "Energy Analysis of Batteries in Photovoltaic systems. Part one (Performance and energy requirements)" (PDF) and "Part two (Energy Return Factors and Overall Battery Efficiencies)" (PDF). Energy Conversion and Management 46, 2005.
 The lifespan of the lithium-ion battery will probably be closer to 14-16 years (float charge lifespan) because of the shallow cycling assumption in the original LCA. However, since the assumed lifespan of 10 years for the lead-acid batteries is very optimistic, and because deep cycling is more common for household off-grid systems, we assume that no replacement of lithium-ion batteries is needed.
 "The climate change mitigation potential of the solar PV industry: a life cycle perspective", Greg Briner, 2009
 "Optimizing Greenhouse Gas Mitigation Strategies to Suppress Energy Cannibalism" (PDF). J.M. Pearce. 2nd Climate Change Technology Conference, May 12-15, 2009, Hamilton, Ontario, Canada.
 "Towards Real Energy Economics: Energy Policy Driven by Life-Cycle Carbon Emission", R. Kenny, C. Law, J.M. Pearce, Energy Policy 38, pp. 1969-1978, 2010
 "Construction of Tesla's $5B solar-powered Gigafactory in Nevada is progressing nicely", Michael Graham Richard, Treehugger 2014
 "Tesla's $5bn Gigafactory looks even cooler than expected, will create 22,000 jobs", Michael Graham Richard, Treehugger 2015
It's generally assumed that it only takes a few years before solar panels have generated as much energy as it took to make them, resulting in very low greenhouse gas emissions compared to conventional grid electricity.
However, a more critical analysis shows that the cumulative energy and CO2 balance of the industry is negative, meaning that solar PV has actually increased energy use and greenhouse gas emissions instead of lowering them.
The problem is that we use and produce solar panels in the wrong places. By carefully selecting the location of both manufacturing and installation, the potential of solar power could be huge.
Picture: Jonathan Potts.// //
There's nothing but good news about solar energy these days. The average global price of PV panels has plummeted by more than 75% since 2008, and this trend is expected to continue in the coming years, though at a lower rate. [1-2] According to the 2015 solar outlook by investment bank Deutsche Bank, solar systems will be at grid parity in up to 80% of the global market by the end of 2017, meaning that PV electricity will be cost-effective compared to electricity from the grid. [3-4]
Lower costs have spurred an increase in solar PV installations. According to the Renewables 2014 Global Status Report, a record of more than 39 gigawatts (GW) of solar PV capacity was added in 2013, bringing total (peak) capacity worldwide to 139 GW at the end of 2013. While this is not even enough to generate 1% of global electricity demand, the growth is impressive: almost half of all PV capacity in operation today was added in the past two years (2012-2013). In 2014, an estimated 45 GW was added, bringing the total to 184 GW.
Meanwhile, solar cells are becoming more energy efficient, and the same goes for the technology used to manufacture them. For example, the polysilicon content in solar cells -- the most energy-intensive component -- has come down to 5.5-6.0 grams per watt peak (g/wp), a number that will further decrease to 4.5-5.0 g/wp in 2017. Both trends have a positive effect on the sustainability of solar PV systems. According to the latest life cycle analyses, which measure the environmental impact of solar panels from production to decommissioning, greenhouse gas emissions have come down to around 30 grams of CO2-equivalents per kilowatt-hour of electricity generated (gCO2e/kWh), compared to 40-50 grams of CO2-equivalents ten years ago. [7-11]
According to these numbers, electricity generated by photovoltaic systems is 15 times less carbon-intensive than electricity generated by a natural gas plant (450 gCO2e/kWh), and at least 30 times less carbon-intensive than electricity generated by a coal plant (+1,000 gCO2e/kWh). The most-cited energy payback times (EPBT) for solar PV systems are between one and two years. It seems that photovoltaic power, around since the 1970s, is finally ready to take over the role of fossil fuels.
Manufacturing has Moved to China
Unfortunately, a critical review of the PV solar industry paints a very different picture. Many commentators attribute the plummeting cost of solar PV to more efficient manufacturing processes and economies of scale. However, if we look at the graph below, we see that the decline in costs accelerates sharply from 2009 onwards. This acceleration has nothing to do with more efficient manufacturing processes or a technological breakthrough. Instead, it's the consequence of moving almost the entire PV manufacturing industry from western countries to Asian countries, where labour and energy are cheaper and environmental restrictions are looser.
Less than 10 years ago, almost all solar panels were produced in Europe, Japan, and the USA. In 2013, Asia accounted for 87% of global production (up from 85% in 2012), with China producing 67% of the world total (62% in 2012). Europe's share continued to fall, to 9% in 2013 (11% in 2012), while Japan's share remained at 5% and the US share was only 2.6%. 
Compared to Europe, Japan and the USA, the electric grid in China is about twice as carbon-intensive and about 50% less energy efficient. [13-15] Because the manufacture of solar PV cells relies heavily on electricity (which accounts for more than 95% of the energy input), this means that in spite of the lower prices and the increasing efficiency, the production of solar cells has become more energy-intensive, resulting in longer energy payback times and higher greenhouse gas emissions. The geographical shift in manufacturing has made almost all life cycle analyses of solar PV panels obsolete, because they are based on a scenario of domestic manufacturing, either in Europe or in the United States.
LCA of Solar Panels Manufactured in China
We could find only one study that investigates the manufacturing of solar panels in China, and it's very recent. In 2014, a team of researchers performed a comparative life cycle analysis between domestic and overseas manufacturing scenarios, taking into account geographic diversity by utilizing localized inventory data for processes and materials.  In the domestic manufacturing scenario, silicon PV modules (mono-si with 14% efficiency and multi-si with 13.2% efficiency) are made and installed in Spain. In the overseas manufacturing scenario, the panels are made in China and installed in Spain.
For solar panels manufactured in China, the carbon footprint and the energy payback time are almost doubled
Compared to the domestic manufacturing scenario, the carbon footprint and the energy payback time are almost doubled in the overseas manufacturing scenario. The carbon footprint of the modules made in Spain (which has a cleaner grid than the average in Europe) is 37.3 and 31.8 gCO2e/kWh for mono-si and multi-si, respectively, while the energy payback times are 1.9 and 1.6 years. However, for the modules made in China, the carbon footprint is 72.2 and 69.2 gCO2e/kWh for mono-si and multi-si, respectively, while the energy payback times are 2.4 and 2.3 years. 
At least as important as the place of manufacturing is the place of installation. Almost all LCAs -- including the one that deals with manufacturing in China -- assume a solar insolation of 1,700 kilowatt-hour per square meter per year (kWh/m2/yr), typical of Southern Europe and the southwestern USA. If solar modules manufactured in China are installed in Germany, then the carbon footprint increases to about 120 gCO2e/kWh for both mono- and multi-si -- which makes solar PV only 3.75 times less carbon-intensive than natural gas, not 15 times.
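The ~120 gCO2e/kWh figure follows directly from the fact that a panel's footprint per kWh scales inversely with its lifetime output, which is roughly proportional to insolation at the installation site. A quick sketch, where the German insolation value (~1,000 kWh/m2/yr) is our own rough assumption, not a number from the study:

```python
# Carbon footprint per kWh scales inversely with lifetime output,
# which is roughly proportional to local insolation.
footprint_at_1700 = 72.2     # gCO2e/kWh, mono-si made in China (from the study)
insolation_study = 1700      # kWh/m2/yr assumed in almost all LCAs
insolation_germany = 1000    # kWh/m2/yr, our rough assumption for Germany

footprint_germany = footprint_at_1700 * insolation_study / insolation_germany
# roughly 120 gCO2e/kWh, matching the figure quoted in the text
```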
Considering that at the end of 2014, Germany had more solar PV installed than all Southern European nations combined, and twice as much as the entire United States, this number is not a worst-case scenario. It reflects the carbon intensity of most solar PV systems installed between 2009 and 2014. More critical researchers had already anticipated these results. A 2010 study refers to the 2008 consensus figure of 50 gCO2e/kWh mentioned above, and adds that "in less sunny locations, or in carbon-intensive economies, these emissions can be up to 2-4 times higher".  Taking the more recent figure of 30 gCO2e/kWh as a starting point, which reflects improvements in solar cell and manufacturing efficiency, this would be 60-120 gCO2e/kWh, which corresponds neatly with the numbers of the 2014 study.
Solar insolation in Europe and the USA. Source: SolarGIS.
These results don't include the energy required to ship the solar panels from China to Europe. Transportation is usually ignored in LCAs that assume domestic production, and energy requirements for transport are very case-specific, which makes a fair comparison difficult. It should also be kept in mind that these results are based on a solar PV lifespan of 30 years. This might be over-optimistic, because the relocation of manufacturing to China has been associated with a decrease in the quality of PV solar panels. Research has shown that the percentage of defective or under-performing PV cells has risen substantially in recent years, which could shorten the lifespan of the average solar panel, decreasing its sustainability.
Solar PV electricity remains less carbon-intensive than conventional grid electricity, even when solar cells are manufactured in China and installed in countries with relatively low solar insolation. This seems to suggest that solar PV remains a good choice no matter where the panels are produced or installed. However, if we take into account the growth of the industry, the energy and carbon balance can quickly turn negative. That's because at high growth rates, the energy and CO2 savings made by the cumulative installed capacity of solar PV systems can be cancelled out by the energy use and CO2 emissions from the production of new installed capacity.  [19-20]
At high growth rates, the energy and CO2 savings made by the cumulative installed capacity of solar PV systems can be cancelled out by the energy use and CO2 emissions from the production of new installed capacity
A life cycle analysis that takes into account the growth rate of solar PV is called a "dynamic" life cycle analysis, as opposed to a "static" LCA, which looks only at an individual solar PV system. The two factors that determine the outcome of a dynamic life cycle analysis are the growth rate on the one hand, and the embodied energy and carbon of the PV system on the other hand. If the growth rate or the embodied energy or carbon increases, so does the "erosion" or "cannibalization" of the energy and CO2 savings made due to the production of newly installed capacity. 
For the deployment of solar PV systems to grow while remaining net greenhouse gas mitigators, they must grow at a rate slower than the inverse of their CO2 payback time.  For example, if the average energy and CO2 payback times of a solar PV system are four years and the industry grows at a rate of 25%, no net energy is produced and no greenhouse gas emissions are offset.  If the growth rate is higher than 25%, the aggregate of solar PV systems actually becomes a net CO2 and energy sink. In this scenario, the industry expands so fast that the energy savings and GHG emissions prevented by solar PV systems are negated to fabricate the next wave of solar PV systems. 
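The break-even rule above can be written down in a couple of lines. A minimal sketch of the relationship between payback time and sustainable growth:

```python
def max_sustainable_growth(payback_years):
    """Break-even annual growth rate for a net energy/CO2 saving:
    the inverse of the payback time. Grow faster than this and the
    industry consumes more energy (and emits more CO2) than it saves."""
    return 1.0 / payback_years

# With a four-year payback, growth above 25% per year turns the
# aggregate of solar PV systems into a net energy and CO2 sink.
rate = max_sustainable_growth(4)   # 0.25
```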
The CO2 Balance of Solar PV
Several studies have undertaken a dynamic life cycle analysis of renewable energy technologies. The results -- which are valid for the period between 1998 and 2008 -- are very sobering for those who have put their hopes on the carbon mitigation potential of solar PV power. A 2009 paper, which takes into account the geographical distribution of global solar PV installations, sets the maximum sustainable annual growth rate at 23%, while the actual average annual growth rate of solar PV between 1998 and 2008 was 40%.
This means that the net CO2 balance of solar PV was negative for the period 1998-2008. Solar PV power was growing too fast to be sustainable, and the aggregate of solar panels actually increased GHG emissions and energy use. According to the paper, the net CO2 emissions of the solar PV industry during those 10 years amounted to 800,000 tonnes of CO2. These figures take into account the fact that, as a consequence of a cleaner grid and better manufacturing processes, the production of solar PV panels becomes more energy efficient and less carbon-intensive over time.
Between 2009 and 2014, solar PV grew four times too fast to be sustainable
The sustainability of solar PV has further deteriorated since 2008. On the one hand, industry growth rates have accelerated: solar PV grew on average by 59% per year between 2008 and 2014, compared to an annual growth rate of 40% between 1998 and 2008. On the other hand, manufacturing has become more carbon-intensive. For its calculations of the CO2 balance in 2008, the study discussed above considers the carbon intensity of production worldwide to be 500 gCO2e/kWh. In 2013, with 87% of the production in Asia, this number had risen to about 950 gCO2e/kWh, which halves the maximum sustainable growth rate to about 12%.
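The halving follows from the same inverse relationship as before: the CO2 payback time scales with the carbon intensity of the manufacturing grid, so the break-even growth rate scales inversely with it. A quick sketch using the figures from the text:

```python
# Break-even growth rate scales inversely with the carbon intensity
# of the grid that powers PV manufacturing.
rate_2008 = 0.23          # max sustainable growth rate for 2008 (from the study)
intensity_2008 = 500      # gCO2e/kWh, worldwide production grid, 2008
intensity_2013 = 950      # gCO2e/kWh, with 87% of production in Asia, 2013

rate_2013 = rate_2008 * intensity_2008 / intensity_2013   # about 0.12
```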
If we also take into account the changes in the geographic distribution of solar panels, with an increasing percentage installed in regions with higher solar insolation, the maximum sustainable growth rate rises to about 16%. [23-24] Although more recent research is not available, it's obvious that the CO2 emissions of the solar PV industry have further increased during the period 2009-2014. If we considered all solar panels in the world as one large energy generating plant, it would not have generated any net energy or CO2 savings.
The Solution: Rethink the Manufacture and Use of Solar PV
Obviously, the net CO2 balance of solar PV could be improved by limiting the growth of the industry, but that would be undesirable. If we want solar PV to become important, it has to grow fast. Therefore, it's much more interesting to focus on lowering the embodied energy of solar PV power systems, which automatically results in higher sustainable growth rates. The shorter the energy and CO2 payback times, the faster the industry can grow without becoming a net producer of CO2.
Annual net CO2 balance of the crystalline silicon PV industry at different growth rates for different combinations of countries of production and installation. Source: Briner 2009.
Embodied energy and CO2 will gradually decrease because of technological advances such as higher solar cell efficiencies and more efficient manufacturing techniques, and also as a consequence of the recycling of solar panels, which is not yet a reality. However, what matters most is where solar panels are manufactured, and where they are installed. The location of production and installation is a decisive factor because there are three parameters in a life cycle analysis that are location dependent: the carbon intensity of the electricity used in production, the carbon intensity of the displaced electricity mix at the place of installation, and the solar insolation in the place of installation. 
By carefully selecting the locations for production and installation we could improve the sustainability of solar PV power in a spectacular way. For PV modules produced in countries with low-carbon energy grids -- such as France, Norway, Canada or Belgium -- and installed in countries with high insolation and carbon-intensive grids -- such as China, India, the Middle East or Australia -- greenhouse gas emissions can be as low as 6-9 gCO2/kWh of generated electricity.   [14-15] That's 13 to 20 times less CO2 per kWh than solar PV cells manufactured in China and installed in Germany. 
Sustainable growth rates of 300-460% are possible when PV modules are produced in countries with low-carbon energy grids and installed in countries with high insolation and carbon-intensive grids
This would allow sustainable growth rates of up to 300-460%, far above what's even necessary. If solar PV grew on average at a rate of 100% per year, it would take less than 10 years to meet today's electricity demand. If it grew at the 16% maximum sustainable growth rate we calculated above, meeting today's electricity demand would take until 2045 -- with no net CO2 savings. By that time, according to the forecasts, total global electricity demand will have more than doubled.
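These timelines follow from simple compound growth. A sketch under assumptions of our own: global electricity demand of roughly 20,000 TWh/yr and an average PV yield of 1,200 kWh per kWp (neither figure is stated in the article), starting from the 184 GWp installed at the end of 2014:

```python
import math

installed_gwp = 184         # global PV capacity, end of 2014 (from the article)
yield_kwh_per_kwp = 1200    # assumed average annual yield per kWp
demand_twh = 20000          # assumed global electricity demand, TWh/yr

# Capacity needed to cover demand: ~16,700 GWp.
target_gwp = demand_twh * 1e9 / (yield_kwh_per_kwp * 1e6)

def years_to_target(growth):
    """Years of compound growth to reach the target capacity."""
    return math.log(target_gwp / installed_gwp) / math.log(1 + growth)

fast = years_to_target(1.00)   # at 100%/yr: under 10 years
slow = years_to_target(0.16)   # at 16%/yr: roughly 30 years, i.e. ~2045
```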
Of course, producing and installing solar panels in the right places presupposes international cooperation and a sound economic system, neither of which exists today. Manufacturing solar panels in Europe or the USA would also make them more expensive again, while many countries with the right conditions for solar power don't have the money to install them in large numbers.
CO2 mitigation potential for crystalline silicon PV modules produced in China and installed in different countries. Source: Briner 2009.
An alternative solution is to use on-site renewable generation to meet a greater proportion of the electricity demand of PV manufacturing facilities -- which can also happen in a country with a carbon-intensive grid. For example, if the electricity for the manufacturing of solar cells were supplied by other solar cells, the greenhouse gas emissions of solar PV systems could be reduced by 50-70%, depending on where they are produced (Europe or the USA). In China, the decrease in CO2 emissions would be even greater.
In yet another scenario, we could dedicate nuclear plants exclusively to the manufacture of solar cells. Because nuclear power is less carbon-intensive than solar PV, this sounds like the fastest, cheapest and easiest way to start producing a massive amount of solar cells without raising energy use and greenhouse gas emissions. But don't underestimate the task ahead. A 1 GW nuclear power plant can produce about 11 million square metres of solar panels per year, which corresponds to 1.66 GWp of solar power (based on the often cited average of 150 W/m2). We would have needed 24 nuclear plants -- or 1 in 20 atomic plants worldwide -- working full-time to produce the solar panels manufactured in 2013.
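The plant count above can be reproduced with the numbers used elsewhere in this article: 2,850 MJ per m2 of solar cells (the halved 1998 figure from the notes), and the assumption that the 1 GW plant runs flat out all year:

```python
# Assumption: the 1 GW plant runs continuously, 8,760 hours per year.
plant_output_kwh = 1e6 * 8760        # 8.76 billion kWh per year

# 2,850 MJ per m2 of solar cells, converted to kWh (1 kWh = 3.6 MJ).
energy_per_m2_kwh = 2850 / 3.6       # ~792 kWh per m2

m2_per_year = plant_output_kwh / energy_per_m2_kwh    # ~11 million m2
gwp_per_year = m2_per_year * 150 / 1e9                # at 150 W/m2: ~1.66 GWp
plants_for_2013 = 39.7 / gwp_per_year                 # ~24 plants for 2013's output
```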
What About Storage?
Why does the production of solar PV require so much energy? Because the low power density -- several orders of magnitude below that of fossil fuels -- and the intermittency of solar power require a much larger energy infrastructure than fossil fuels do. It's important to realize that the intermittency of solar power is not taken into account in our analysis. Solar power is not always available, which means that we need a backup source of power or a storage system to jump in when needed. This component is usually not considered in LCAs of solar PV, even though it has a large influence on the sustainability of solar power.
Storage is no longer an academic question, because several manufacturers -- most notably Tesla -- are pushing lithium-ion battery storage as an alternative to a grid-connected solar PV system. Lithium-ion batteries are more compact and technically superior to the lead-acid batteries commonly used in off-grid solar systems. Furthermore, a growing number of countries are discouraging grid-connected solar systems, which makes off-grid systems more attractive.
In the next article, we investigate the sustainability of a PV-system with a lithium-ion battery. Meanwhile, enjoy the sun and stay tuned.
Kris De Decker (edited by Aaron Vansintjan)
Sources & Notes
 Utilities wage campaign against rooftop solar, Joby Warrick, The Washington Post, March 2015
 Solar Power & Energy Storage: Policy Factors vs. Improving Economics (PDF), Morgan Stanley Blue Paper, July 28, 2014
 Renewables 2014 Global Status Report, REN21, 2014
 Deutsche bank anticipates 2015 global solar PV demand at 54 GW. Solar Server. January 2015.
 Emissions from Photovoltaic Life Cycles, Vasilis M. Fthenakis, Hyung Chul Kim, Erik Alsema, in Environmental Science & Technology, 2008, 42 (6), pp. 2168-2174
 Renewable and Sustainable. Presentation at the Crystal Clear final event, Munich, M.J. De Wild-Scholten
 Update of PV energy payback times and life-cycle greenhouse gas emissions (PDF), In: 24th European Photovoltaic Solar Energy Conference. Hamburg, Germany. Fthenakis V., Kim, H.C., Held, M., Raugei, M., Krones, J.
 Life Cycle Inventories and Life Cycle Assessments of Photovoltaic Systems (PDF). IEA International Energy Agency, Report IEA-PVPS T12-02:2011. Vasilis Fthenakis. October 2011.
 Crystalline Silicon and Thin Film Photovoltaic Results -- Life Cycle Assessment Harmonization. National Renewable Energy Laboratory, 2013
 It should be noted that the latest data are not yet confirmed because they are not yet in the public domain, but we nevertheless assume the value of 30 grams CO2e/kWh.
 Domestic and overseas manufacturing scenarios of silicon-based photovoltaics: life cycle energy and environmental comparative analysis. Dajun Yue, Fengqi You, Seth B. Darling, in Solar Energy, May 2014
 Technical Paper: Electricity-specific Emission Factors for Grid Electricity (PDF). Matthew Brander, Aman Sood, Charlotte Wylie, Amy Haughton, and Jessica Lovell. Ecometrica, August 2011
 Life Cycle Inventories of Electricity Mixes and Grid, Version 1.3 (PDF). René Itten, Rolf Frischknecht, Matthias Stucki, Paul Scherrer Institut (PSI). June 2014.
 The climate change mitigation potential of the solar PV industry: a life cycle perspective, Greg Briner, 2009
 "Current State of Development of Electricity-Generating Technologies: A Literature Review", Manfred Lenzen, Energies, Volume 3, Issue 3, 2010.
 Solar Crisis: Cheap Chinese Solar Panels Prove Defective, Wolf Richter, Oil Price, May 2013
 Optimizing Greenhouse Gas Mitigation Strategies to Suppress Energy Cannibalism (PDF). J.M. Pearce. 2nd Climate Change Technology Conference, May 12-15, 2009, Hamilton, Ontario, Canada.
 Towards Real Energy Economics: Energy Policy Driven by Life-Cycle Carbon Emission, R. Kenny, C. Law, J.M. Pearce, Energy Policy 38, pp. 1969-1978, 2010
 A 2009 paper sets the maximum sustainable growth rate at 32%, while a 2010 paper sets it at 41%. However, these figures are based on a solar insolation of 1,700 kWh/m2/yr, the average in Southern Europe, not on the actual geographical distribution of solar panels.
 Energy Payback for Energy Systems Ensembles During Growth (PDF), Timothy Gutowski, Stanley Gershwin and Tonio Bounassisi, IEEE, International Symposium on Sustainable Systems and Technologies, Washington D.C., May 16-19, 2010
 In 2013, China alone accounted for almost one-third of new installations worldwide, adding a record 12.9 GW and bringing its total PV capacity to 20 GW. Solar panels manufactured and installed in China save as much greenhouse gas as solar panels manufactured and installed in Europe; the carbon intensity of manufacturing is higher than in Europe, but so is the amount of carbon displaced from the local electricity grid. Unfortunately, the second largest grower in solar PV in 2013 was Japan (7 GW new capacity), which has both a relatively clean energy grid and relatively little sunshine. For its calculations of the CO2 balance in 2008, the paper discussed above considered a weighted average solar insolation of 1,200 kWh/m2/yr (reflecting the large share of PV power installed in Germany) and a weighted average displaced carbon intensity of 500 gCO2e/kWh (reflecting the importance of the German electric grid). We made the same calculation for the year 2013 and arrived at an average displaced carbon intensity of 583 gCO2/kWh (only 15% higher than in the 1998-2008 period) and an average weighted solar insolation of about 1,250 kWh/m2/yr (only slightly above the 1,200 kWh/m2/yr between 1998 and 2008). This results in a sustainable growth rate of 16% for 2013. This figure is an approximation, as we don't know the exact location of solar panel systems. Most notably, the solar insolation in China varies considerably throughout the country. If we chose the maximum solar insolation (2,185 kWh/m2/yr) instead of the average (1,577 kWh/m2/yr), the average weighted solar insolation of the global solar PV capacity added in 2013 would rise from 1,250 to 1,465 kWh/m2/yr.
 These numbers don't take into account the energy used for building the PV factories, which can be substantial at high growth rates. To make a fair comparison, the same should be done for electricity produced by fossil fuels. However, including these data would lower the comparative advantage of solar PV because it takes much more energy to manufacture a 1 GW solar system than a 1 GW fossil fuel plant -- and the latter also has a longer lifespan. Furthermore, a higher CO2-intensity of the conventional grid would also raise the CO2-intensity of PV manufacture.
 For modules manufactured in China and placed in France or Norway, the CO2 balance is negative.
 This is not to suggest that solar PV should supply all electricity, because we also have other renewable power sources available. What we aim to show here is that energy and CO2 payback times define whether solar PV power is a solution or a problem, and to what extent.
 This calculation is based on an energy use of 5,700 MJ for the manufacturing of one m2 of solar cells. Since the source for this number dates from 1998, we have halved this figure to compensate for technological progress. This is over-optimistic, but the energy efficiency of manufacturing will further improve, albeit with diminishing returns.
 Energy pay-back time of photovoltaic energy systems: present status and prospects, E.A. Alsema, in "Proceedings of the 2nd World Conference and Exhibition on photovoltaics solar energy conversion", July 1998.
The modern glass greenhouse requires massive inputs of energy to grow crops out of season. That's because each square metre of glass, even if it's triple glazed, loses ten times as much heat as a wall.
However, growing fruits and vegetables out of season can also happen in a sustainable way, using the energy from the sun. Contrary to its fully glazed counterpart, a passive solar greenhouse is designed to retain as much warmth as possible.
Research shows that it's possible to grow warmth-loving crops all year round with solar energy alone, even if it's freezing outside. The solar greenhouse is especially successful in China, where many thousands of these structures have been built during the last decades.
A Chinese greenhouse. Picture: Chris Buhler, Indoor Garden HQ.// //
The quest to produce warm-loving crops in temperate regions initially didn't involve any glass at all. In Northwestern Europe, Mediterranean crops were planted close to specially built "fruit walls" with high thermal mass, creating a microclimate that could be 8 to 12°C (14 to 22°F) warmer than an unaltered climate.
Later, greenhouses built against these fruit walls further improved yields from solar energy alone. It was only at the very end of the nineteenth century that the greenhouse turned into a fully glazed and artificially heated building where heat is lost almost instantaneously -- the complete opposite of the technology it evolved from.
During the oil crises of the 1970s, there was renewed interest in the passive solar greenhouse. However, the attention quickly faded when energy prices came down again, and the all-glass greenhouse remained the horticultural workhorse of the Northwestern world. The Chinese, on the other hand, have built 800,000 hectares of passive solar greenhouses during the last three decades -- 80 times the surface area of the largest glasshouse industry in the world, that of the Netherlands.
The Chinese Greenhouse
The Chinese passive solar greenhouse has three walls of brick or clay. Only the southern side of the building consists of transparent material (usually plastic foil) through which the sun can shine. During the day the greenhouse captures heat from the sun in the thermal mass of the walls, which is released at night.
At sunset, an insulating sheet -- made of straw, pressed grass or canvas -- is rolled out over the plastic, increasing the insulating capacity of the structure. The walls also block the cold, northern winds, which would otherwise speed up the heat loss of the greenhouse.
Chinese greenhouses. Picture: HortTechnology.
Being the opposite of the energy-intensive glass greenhouse, the Chinese passive solar greenhouse is heated all year round with solar energy alone, even when the outdoor temperature drops below freezing. The indoor temperature of the structure can be up to 25°C (45°F) higher than the outdoor temperature.
The incentive policy of the Chinese government has made the solar greenhouse a cornerstone of food production in central and northern China. One fifth of the total greenhouse area in China now consists of solar greenhouses. By 2020, they are expected to cover at least 1.5 million hectares.
Improving the Chinese Greenhouse
The first Chinese-style greenhouse was built in 1978. However, the technology only took off during the 1980s, following the arrival of transparent plastic foil. Not only is foil cheaper than glass, it is also lighter and doesn't require a strong supporting structure, which makes construction much cheaper. Since then, the design has continuously been improved upon. The structure became deeper and taller, distributing sunlight more evenly and reducing temperature fluctuations.
A: The original design from the 1980s with a glass canopy. B: An improved design from the mid-1980s, with plastic foil, a night curtain, and better insulated walls. This design is the most widespread. C: An improved design from 1995. The walls are thinner because they are insulated with modern materials. Automatic handling of the night curtain. D: The most recent design from 2007, which has a double roof for extra insulation.
In addition, cultivators are increasingly opting for modern insulation materials instead of rammed earth or air cavities to insulate the walls, which saves space and/or improves the heat absorption characteristics of the structure. Synthetic insulation blankets, which are better suited to dealing with moisture, are also seeing increased use: old-fashioned straw mats become heavier and insulate less when wet.
In some of the more recent greenhouses, the insulation blankets are rolled up and down automatically, and more sophisticated ventilation systems are used. Some greenhouses have a double roof or reflecting insulation installed. In addition, the plastic foil used for the greenhouses — obviously the least sustainable component of the system — is continuously being improved, resulting in a longer lifespan.
Performance of the Chinese Greenhouse
The performance of the Chinese greenhouse depends on its design, the latitude, and the local climate. A recent study observed three types of greenhouses in Shenyang, the capital of Liaoning province. The city lies at 41.8°N, making it one of the northernmost areas where the Chinese-style greenhouse is built (between latitudes 32°N and 43°N).
The research was conducted from the beginning of November to the end of March, the period during which the outside temperature drops below freezing. The average temperature in the coldest month is between -15°C and -18°C (5 to -0.4°F). 
Air cavities in a ruined solar greenhouse. Picture: Chris Buhler, Indoor Garden HQ.
The three greenhouses studied all have the same shape and dimensions (60 x 12.6 x 5.5 m), but the walls, the plastic foil, and the transparent layer vary. The simplest construction has walls of rammed earth with an inside layer of brick to increase the structure's stability. The roof is a thin plastic film, covered at night with a straw blanket.
The two other greenhouses have a northern wall of brick with extruded polystyrene foam as insulating material, which allows the wall to be half as thick. They are also covered with a thicker PVC plastic foil. The best greenhouse adds to this a reflective coating on the insulation blanket, further reducing heat loss at night.
A Chinese greenhouse. Picture: Chris Buhler, Indoor Garden HQ.
The night curtain of a solar greenhouse. Picture: Energy Farms.
In the simplest greenhouse the temperatures dropped below the freezing point from early December until mid-January. Without extra heating, this greenhouse cannot grow any produce at this latitude. Only the most sophisticated greenhouse – with its reflecting insulation layer – succeeded in keeping the inside temperature above freezing at all times, using only solar energy.
What’s more, the temperature stayed above 10°C most of the time, which is the minimum temperature for the cultivation of warm season plants, like tomatoes and cucumbers. Of course, passive solar greenhouses in more southern locations would require less sophisticated insulation techniques to be operated without additional heating.
Solar Greenhouses in Northern Climates
If we go further north, similar passive solar greenhouses would require extra heating during the coldest months of the year, no matter how well they are insulated. Note that the farther north the greenhouse is located, the steeper its roof: the slope is angled to be perpendicular to the sun's rays when the sun is lowest on the horizon.
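This rule of thumb can be made concrete: at winter solstice the midday sun stands roughly 90° minus latitude minus 23.4° above the horizon, so a roof that meets those rays head-on must be tilted by latitude plus 23.4°. A minimal sketch (function names are illustrative, not from the cited studies):

```python
# Sketch: how steep the transparent south roof must be to face the midday
# sun head-on at winter solstice. Function names are illustrative.

AXIAL_TILT = 23.44  # Earth's axial tilt, in degrees

def winter_noon_elevation(latitude_deg):
    """Solar elevation at solar noon on the winter solstice (degrees)."""
    return 90.0 - latitude_deg - AXIAL_TILT

def optimal_roof_slope(latitude_deg):
    """Roof tilt from horizontal that is perpendicular to those rays."""
    return 90.0 - winter_noon_elevation(latitude_deg)  # = latitude + 23.44

# Shenyang (41.8 N): the midwinter noon sun climbs only ~24.8 degrees,
# so a roof would need a ~65 degree slope to meet it at a right angle.
print(round(winter_noon_elevation(41.8), 1))  # 24.8
print(round(optimal_roof_slope(41.8), 1))     # 65.2
```

In practice the roofs are shallower than this ideal, since a steep slope costs interior space; the calculation only shows why designs grow steeper with latitude.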
In 2005, a Chinese-style greenhouse was tested in Manitoba, Canada, at a latitude of 50°N. A greenhouse of 30 x 7 metres with a well-insulated northern wall (3.6 RSI glass fibre) and an insulation blanket (1.2 RSI cotton) was observed from January to April. During the coldest month (February) the outside temperature varied between +4.5°C and -29°C (40 to -20°F). While the interior temperature was on average 18°C (32.4°F) higher than the exterior, it turned out to be impossible to cultivate plants without extra heating during the winter.
Cucumbers in a Chinese solar greenhouse. Picture: Energy Farms.
Nevertheless, energy savings can be huge in comparison to a glass greenhouse. To keep the temperature above ten degrees at all times, the heating system of the Canadian structure must deliver a maximum of 17 W/m2, or 3.6 kW for the building. In comparison, a glass greenhouse of equal proportions at the same interior and exterior temperatures would require a maximum capacity of 125 to 155 kW.
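These figures are easy to check: 17 W/m2 over the 30 x 7 m floor area works out to the quoted peak capacity, and dividing the glass greenhouse's demand by it shows the scale of the saving. A quick sanity check (variable names are mine):

```python
# Back-of-the-envelope check of the Manitoba heating figures.
length_m, width_m = 30, 7
area_m2 = length_m * width_m              # 210 m2 floor area
solar_peak_w_per_m2 = 17                  # peak demand, passive solar design
solar_peak_kw = solar_peak_w_per_m2 * area_m2 / 1000
glass_peak_kw = (125, 155)                # equivalent all-glass greenhouse

print(round(solar_peak_kw, 2))            # 3.57 -- the ~3.6 kW in the text
print(round(glass_peak_kw[0] / solar_peak_kw))  # glass needs ~35x more power
```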
Note that these results can't be applied to all locations at 50°N. The Canadian research shows that solar output has a greater impact on the inside temperature of the structure than does the outside temperature. The correlation between inside temperature and sunlight is almost four times greater than the correlation between inside temperature and outside temperature.  For example, while Brussels lies at the same latitude as Manitoba, the latter has on average 1.5 times more sun.
Thermal capacity can be further improved by placing black painted water storage tanks against the north wall inside the structure. These capture extra solar energy during the day and release it during the night. A different method to improve the heat retention of a greenhouse is by earth berming the north, east and west walls. Yet another solution to improve insulation is the underground or "pit greenhouse".  However, this greenhouse receives less sunlight and is prone to flooding.
More Space Needed
The passive greenhouse could save a lot of energy, but a price would have to be paid: the profits generated by the Chinese greenhouse are two to three times lower per square metre than those of its fully glazed counterpart. In the more efficient Chinese greenhouses, an average of 30 kg of tomatoes and 30 kg of cucumbers can be grown per square metre (numbers from 2005), while the average production in a glass greenhouse is about 60 kg of tomatoes and 100 kg of cucumbers (numbers from 2003).
A Chinese solar greenhouse. Picture: Energy Farms.
A passive greenhouse industry would thus take up two to three times as much space to produce the same amount of food. This could be viewed as a problem, but of course what really eats space in agriculture is meat production. A more diverse and attractive supply of vegetables and fruits could make it more viable to reduce meat consumption, so land use shouldn't be a problem.
Compost Heated Greenhouses
Another issue with a solar powered greenhouse is the lack of a CO2-source. In modern greenhouses, operators aim for a CO2-level at least three times the level outdoors to increase crop yield. This CO2 is produced as a byproduct of the fossil fuel based heating systems inside the greenhouses. However, when no fossil fuels are used, another source of CO2 has to be found. This is not only an issue for solar greenhouses; it's also one of the main reasons why geothermal energy and electric heat pumps are making little headway in the modern glasshouse industry.
In Chinese solar greenhouses, this issue is sometimes solved by raising produce and animals together. Pigs, chickens, and fish all produce CO2 that can be absorbed by the plants, while the plants produce oxygen (and green waste) for the animals. The animals and their manure also contribute to the heating of the structure. Research on such integrated greenhouse systems has shown that the combined production of vegetables, meat, milk, and eggs raises yields quite substantially.
Detail of a compost-heated greenhouse. Source: Pelaf.
Justin Walker, an American now living in Siberia, is building an integrated system using horses, goats and sheep in a monastery in Siberia. Considering the harsh climate, the structure is partly built below-ground, while its protruding parts are earth-bermed. Above the barn area is a hayloft that provides further winter insulation as well as ventilation in the summer when it is empty. His compost heat recovery system produces hot water that is piped through radiant floor heating zones in the floor of the greenhouse. The CO2 is supplied by the animals. 
Heating and CO2-production can also be done without housing animals in the greenhouse: their manure suffices. As we have seen in the previous article, the use of horse manure for heating small-scale greenhouses dates back several centuries in Europe, and in China it was practised 2,000 years ago. Since the 1980s, several compost-heated greenhouses have been built in the USA. These have shown that a greenhouse can be entirely heated by compost if it is well-insulated, and that the method drastically enriches the CO2-levels in the soil and in the greenhouse air. In addition, the compost serves to increase soil fertility.
Kris De Decker
 Energy performance optimization of typical Chinese solar greenhouses by means of dynamic simulation (PDF), Alessandro Deiana et al., International Conference of Agricultural Engineering, 2014, Zurich.
 Winter performance of a solar energy greenhouse in southern Manitoba (PDF), Canadian Biosystems Engineering. 2006.
 The solar greenhouse: state of the art in energy saving and sustainable energy supply. G. Bot et al., 2005
 Structure, function, application, and ecological benefit of a single-slope, energy-efficient solar greenhouse in China. HortTechnology, June 2010
 Integrated energy self-served animal and plant complementary ecosystem in China, in "Integrated energy systems in China -- the cold northwestern region experience", FAO, 1994
 See for example "The Solar Greenhouse Book" (PDF), published by Rodale Press in 1978
 The Earth Sheltered Solar Greenhouse Book, Mike Oehler, 2007
Fruit Walls: Urban Farming in the 1600s
We are being told to eat local and seasonal food, either because other crops have been transported over long distances (usually by plane), or because they are grown in energy-intensive greenhouses. But it wasn't always like that. From the sixteenth to the twentieth century, urban farmers grew Mediterranean fruits and vegetables as far north as England and the Netherlands, using only renewable energy.
These crops were grown surrounded by massive "fruit walls", which stored the heat from the sun and released it at night, creating a microclimate that could increase the temperature by more than 10°C (18°F).
Later, greenhouses built against the fruit walls further improved yields from solar energy alone. It was only at the very end of the nineteenth century that the greenhouse turned into a fully glazed and artificially heated building where heat is lost almost instantaneously -- the complete opposite of the technology it evolved from.
Picture: fruit walls in Montreuil, a suburb of Paris.// //
The modern glass greenhouse, often located in temperate climates where winters can be cold, requires massive inputs of energy, mainly for heating but also for artificial lighting and humidity control.
According to the FAO, crops grown in heated greenhouses have energy intensity demands around 10 to 20 times those of the same crops grown in open fields. A heated greenhouse requires around 40 megajoules of energy to grow one kilogram of fresh produce, such as tomatoes and peppers. [source - page 15] This makes greenhouse-grown crops as energy-intensive as pork (40-45 MJ/kg in the USA). [source]
Dutch-style all-glass greenhouses. Picture: Wikipedia Commons.
In the Netherlands, which is the world's largest producer of glasshouse grown crops, some 10,500 hectares of greenhouses used 120 petajoules (PJ) of natural gas in 2013 -- that's about half the amount of fossil fuels used by all Dutch passenger cars. [source: 1/2]
The high energy use is hardly surprising. Heating a building that's entirely made of glass is very energy-intensive, because glass has a very limited insulation value. Each square metre of glass, even if it's triple glazed, loses ten times as much heat as a well-insulated wall.
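The comparison can be illustrated with the standard steady-state heat loss formula, Q = U x A x dT. The U-values below are typical textbook figures chosen to match the roughly tenfold difference (good triple glazing versus a passive-house-grade wall); they are assumptions, not numbers from the article:

```python
# Illustrative steady-state heat loss, Q = U * A * dT. The U-values are
# assumed typical figures (not from the article).

def heat_loss_w(u_value, area_m2, delta_t_k):
    """Conductive heat loss in watts through a building element."""
    return u_value * area_m2 * delta_t_k

AREA_M2, DELTA_T_K = 1.0, 20.0  # per square metre, 20 K inside-outside

U_SINGLE_GLASS = 5.8     # single glazing, W/(m2*K)
U_TRIPLE_GLASS = 0.8     # good triple glazing
U_PASSIVE_WALL = 0.08    # passive-house-grade insulated wall

for u in (U_SINGLE_GLASS, U_TRIPLE_GLASS, U_PASSIVE_WALL):
    print(round(heat_loss_w(u, AREA_M2, DELTA_T_K), 1))
# Even triple glazing (~16 W) loses ten times as much as the wall (~1.6 W).
```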
The design of the modern greenhouse is strikingly different from its origins in the Middle Ages [*]. Initially, the quest to produce warm-loving crops in temperate regions (and to extend the growing season of local crops) didn't involve any glass at all. In 1561, Swiss botanist Conrad Gessner described the effect of sun-heated walls on the ripening of figs and currants, which mature faster there than when they are planted further from the wall.
Gessner's observation led to the emergence of the "fruit wall" in Northwestern Europe. By planting fruit trees close to a specially built wall with high thermal mass and southern exposure, a microclimate is created that allows the cultivation of Mediterranean fruits in temperate climates, such as those of Northern France, England, Belgium and the Netherlands.
The fruit wall reflects sunlight during the day, improving growing conditions. It also absorbs solar heat, which is slowly released during the night, preventing frost damage. Consequently, a warmer microclimate is created on the southern side of the wall for 24 hours per day.
Fruit walls also protect crops from cold, northern winds. Protruding roof tiles or wooden canopies often shielded the fruit trees from rain, hail and bird droppings. Sometimes, mats could be suspended from the walls in case of bad weather.
The fruit wall appeared around the start of the so-called Little Ice Age, a period of exceptional cold in Europe that lasted from about 1550 to 1850. The French quickly started to refine the technology by pruning the branches of fruit trees in such ways that they could be attached to a wooden frame on the wall.
This practice, which is known as "espalier", allowed them to optimize the use of available space and to further improve upon the growth conditions. The fruit trees were placed some distance from the wall to give sufficient space for the roots underground and to provide for good air circulation and pest control above ground.
Peach Walls in Paris
Initially, fruit walls appeared in the gardens of the rich and powerful, such as in the palace of Versailles. However, some French regions later developed an urban farming industry based on fruit walls. The most spectacular example was Montreuil, a suburb of Paris, where peaches were grown on a massive scale.
Established during the seventeenth century, Montreuil had more than 600 km of fruit walls in the 1870s, when the industry reached its peak. The 300 hectare maze of jumbled-up walls was so confusing for outsiders that the Prussian army went around Montreuil during the siege of Paris in 1870.
Peaches are normally grown in France's Mediterranean regions, but Montreuil produced up to 17 million fruits per year, renowned for their quality. Building many fruit walls close to each other further boosted the effectiveness of the technology, because more heat was trapped and wind was kept out almost completely. Within the walled orchards, temperatures were typically 8 to 12°C (14-22°F) higher than outside.
The 2.5 to 3 metre high walls were more than half a metre thick and coated in limestone plaster. Mats could be pulled down to insulate the fruits on very cold nights. In the central part of the gardens, crops were grown that tolerated lower temperatures, such as apples, pears, raspberries, vegetables and flowers.
Grapes in Thomery
In 1730, a similar industry was set up for the cultivation of grapes in Thomery, which lies some 60 km southeast of Paris -- a very northern area to grow these fruits. At the production peak in the early twentieth century, more than 800 tonnes of grapes were produced on some 300 km of fruit walls, packed together on 150 hectares of land.
The walls, built of clay with a cap of thatch, were 3 metres high and up to 100 metres long, spaced 9 to 10 metres apart. All were topped with tile copings, and some had a small glass canopy.
Because vines demand a dry and warm climate, most fruit walls had a southeastern exposure. A southern exposure would have been the warmest, but in that case the vines would have been exposed to the damp winds and rains coming from the southwest. The western and southwestern walls were used to produce grapes of lower quality.
Some cultivators in Thomery also constructed "counter-espaliers", which were lesser walls opposite the principal fruit walls. These were only 1 metre high and were placed about 1 to 2.5 metres from the fruit wall, further improving the microclimate. In the 1840s, Thomery became known for its advanced techniques to prune the grape vines and attach them to the walls. The craft spread to Montreuil and to other countries.
Storage system for grapes in Thomery. Picture: Topic Tops.
The cultivators of Thomery also developed a remarkable storage system for grapes. The stem was submerged in water-filled bottles, which were stored in large wooden racks in basements or attics of buildings. Some of these storage places had up to 40,000 bottles each holding one or two bunches of grapes. The storage system allowed the grapes to remain fresh for up to six months.
Serpentine Fruit Walls
Fruit wall industries in the Low Countries (present-day Belgium and the Netherlands) were also aimed at producing grapes. From the 1850s onwards, Hoeilaart (nearby Brussels) and the Westland (the region which is now Holland's largest glasshouse industry) became important producers of table grapes. By 1881, the Westland had 178 km of fruit walls.
A Serpentine Fruit Wall in the Netherlands. Wikipedia Commons.
A different type of fruit wall. Wikipedia Commons.
The Dutch also contributed to the development of the fruit wall. They began building fruit walls during the first half of the eighteenth century, initially only in the gardens of castles and country houses. Many of these had unique forms. Most remarkable was the serpentine or "crinkle crankle" wall.
Although it's actually longer than a linear wall, a serpentine wall economizes on materials because the wall can be made strong enough at just one brick thick. The alternate convex and concave curves provide stability and help to resist lateral forces. Furthermore, the curves create a warmer microclimate than a flat wall. This was obviously important for the Dutch, who are almost 400 km north of Paris.
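The material economy can be checked numerically: a gently curving wall is only slightly longer than the straight frontage it covers, yet needs half the thickness. A rough sketch with guessed (not historical) amplitude and wavelength:

```python
# Rough numerical check of the serpentine wall's material economy: the
# curving wall is longer, but only one brick thick instead of two.
# Amplitude and wavelength are illustrative guesses, not measurements.
import math

def arc_length(amplitude, wavelength, span, steps=10_000):
    """Length of the curve y = A*sin(2*pi*x/L), integrated over `span`."""
    total, prev_y = 0.0, 0.0
    for i in range(1, steps + 1):
        x = span * i / steps
        y = amplitude * math.sin(2 * math.pi * x / wavelength)
        total += math.hypot(span / steps, y - prev_y)
        prev_y = y
    return total

span_m = 100.0                          # metres of straight garden frontage
serpentine_m = arc_length(amplitude=1.0, wavelength=10.0, span=span_m)

bricks_straight = span_m * 2            # straight wall: two bricks thick
bricks_serpentine = serpentine_m * 1    # serpentine wall: one brick thick

print(round(serpentine_m, 1))                # ~109 m of wall for 100 m of frontage
print(bricks_serpentine < bricks_straight)   # True: fewer bricks overall
```

With these assumed proportions the curving wall is about 9% longer but uses roughly 45% fewer bricks, which is the whole point of the design.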
Variants of the serpentine wall had recessed and protruding parts with more angular forms. Few of these seem to have been built outside the Netherlands, with the exception of those erected by the Dutch in the eastern parts of England (two thirds of them in Suffolk county). In their own country, the Dutch built fruit walls as far north as Groningen (53°N).
Another variation on the linear fruit wall was the sloped wall. It was designed by Swiss mathematician Nicolas Fatio de Duillier, and described in his 1699 book "Fruit Walls Improved". A wall built at an incline of 45 degrees from the northern horizon and facing south absorbs the sun's energy for a longer part of the day, increasing plant growth.
Heated Fruit Walls
In Britain, no large-scale urban farming industries appeared, but the fruit wall became a standard feature of the country house garden from the 1600s onwards. The English developed heated fruit walls in the eighteenth and nineteenth centuries, to ensure that the fruits were not killed by frost and to assist in ripening fruit and maturing wood.
A heated fruit wall of Croxteth Hall Walled Kitchen Garden in Liverpool. Picture: The Horticultural Therapist.
In these "hot walls", horizontal flues ran to and fro, opening into chimneys on top of the wall. Initially, the hollow walls were heated by fires lit inside, or by small furnaces located at the back of the wall. During the second half of the nineteenth century, more and more heated fruit walls were warmed by hot water pipes.
The decline of the European fruit wall started in the late nineteenth century. Maintaining a fruit wall was labour-intensive work that required a lot of craftsmanship in pruning, thinning, removing leaves, and so on. The extension of the railways favoured the import of produce from the south, which was less labour-intensive and thus cheaper to produce. Artificially heated glasshouses could also achieve similar or larger yields with much less skilled labour involved.
The Birth of the Greenhouse
Large transparent glass plates were hard to come by during the Middle Ages and early modern period, which limited the use of the greenhouse effect for growing crops. Window panes were usually made of hand-blown plate glass, which could only be produced in small dimensions. To make a large glass surface, the small panes were joined together with rods or glazing bars.
Nevertheless, European growers had made use of small-scale greenhouse methods since the early 1600s. The simplest forms of greenhouses were the "cloche", a bell-shaped jar or bottomless glass jug that was placed on top of the plants, and the cold- or hotframe, a small seedbed enclosed in a glass-topped box. In the hotframe, decomposing horse manure was added for additional heating.
How the greenhouse was born. Photo: Rijksdienst voor het Cultureel Erfgoed.
In the 1800s, some Belgian and Dutch cultivators started experimenting with the placement of glass plates against fruit walls, and discovered that this could further boost crop growth. This method gradually developed into the greenhouse built against a fruit wall. In the Dutch Westland region, the first of these greenhouses were built around 1850. By 1881, some 22 km of the 178 km of fruit walls in the Westland were under glass.
These greenhouse structures became larger and more sophisticated over time, but they all kept benefitting from the thermal mass of the fruit wall, which stored heat from the sun for use at night. In addition, many of these structures were provided with insulating mats that could be rolled out over the glass cover at night or during cold, cloudy weather. In short, the early greenhouse was a passive solar building.
Greenhouse built against a serpentine fruit wall. Source: Rijksdienst voor het culturele erfgoed.
A Dutch greenhouse from the 1930s, built against a brick wall. Picture: Naaldwijk in oude ansichten.
The first all glass greenhouses were built only in the 1890s, first in Belgium, and shortly afterwards in the Netherlands. Two trends played into the hands of the fully glazed greenhouse. The first was the invention of the plate glass production method, which made larger window panes much more affordable. The second was the advance of fossil fuels, which made it possible to keep a glass building warm in spite of the large heat losses.
Consequently, at the start of the twentieth century, the greenhouse became a structure without thermal mass. The fruit wall that had started it all was now gone.
During the oil crises of the 1970s, there was a renewed interest in the passive solar greenhouse. However, the attention quickly faded when energy prices came down, and the fully glazed greenhouse remained the horticultural workhorse of the Northwestern world. The Chinese, on the other hand, built 800,000 hectares of passive solar greenhouses during the last three decades -- that's 80 times the surface area of all glass greenhouses in the Netherlands. We discuss the Chinese greenhouse in the second part of this article.
Kris De Decker
[*] The greenhouse was invented by the Romans in the second century AD. Unfortunately, the technology disappeared when the Western Roman Empire collapsed. The Romans could produce large glass plates, and built greenhouses against brick walls. Their technology was only surpassed by the Dutch in the 1800s. However, the Roman greenhouse remained a toy for the rich and never became an important food supply. The Chinese and Koreans also built greenhouses before or during the middle ages. Oiled paper was used as a transparent cover. All of these greenhouses had thick walls to retain the heat from the sun and/or a radiant heating system (such as the Chinese Kang or the Korean ondol).
Sources & more information:
- Open Air Grape Culture, John Phin, 1862
- The last peach orchards of Paris, Messy Nessy, 2014
- Geschiedenis van het leifruit in de Lage Landen, Wybe Kuitert, 2004
- Onzichtbaar achter glas, Ahmed Benseddik & Marijke Bijl, 2004
- Chasselas de Thomery, French Wikipedia
- Murs à pêches, French Wikipedia
- L'histoire des murs, website Murs à Pêches
- Food-Producing Solar Greenhouses, in "An assessment of technology for local development", 1980
- The development and history of horticulture, Edwinna von Bayer
- Geschiedenis van Holland, Volume 3, deel 1. Thimo de Nijs, 2003
- A Golden Thread: 2500 years of solar architecture and technology, Ken Butti & John Perlin, 2009
- Une histoire des serres: de l'orangerie au palais de cristal, Yves-Marie Allain, 2010
- Manual complet du jardinier, Louis Claude Noisette, 1862
- Onderhoud en restauratie van historische plantenkassen, Ben Kooij, 2011
- Leifruit: toekomst voor eeuwenoude hovernierskunst, Julia Voskuil, 2011
- The magic of Britain's walled gardens, Bunny Guinness, 2014
- Visiting the palace of Versailles' kitchen garden, Janet Eastman, 2015
- Hot Walls: An Investigation of Their Construction in Some Northern Kitchen Gardens, Elisabeth Hall, 1989
- History of fruit growing, Tom La Dell
- Fences of Fruit Trees, Brian Kaller, 2011
These days, so many households have a WiFi-router installed that sharing the signal of these devices could provide free mobile internet access across densely populated cities.
Image: WiFi-routers (green & red) and cell towers (blue) in London, 2014-15. Source: Wigle.
Telecom operators are deploying 4G-networks at a rapid rate. These mobile networks provide mobile internet access for smartphones at a "speed" that is comparable to that of a WiFi-connection. However, wireless internet access through 4G is expensive -- you need a paid subscription -- and it's energy-intensive: 4G access consumes twenty times more energy than making a connection through WiFi.  But do we really need those mobile networks?
At home, few of us access the internet through a cable these days. In industrialized countries, WiFi-routers now provide a wireless connection throughout the house. In cities, many thousands of these are deployed. Because the range of a WiFi-router can be 30 metres or more, the signal often reaches the street. Sharing the resources of these WiFi routers could make 4G (and 3G) mobile networks redundant, at least in densely populated areas.
Images: WiFi-routers in and around San Francisco, USA, 2014-15. The images are made by "wardrivers": people that drive through streets to record the location of wireless networks and then upload the data to maps. The colour differences indicate the density of nodes or, in the more detailed maps, the quality of the access points (green is high, red is low). Blue dots represent cell towers. Source: Wigle.
Research in French and British cities has revealed that downtown and residential neighbourhoods have more than enough access points and bandwidth available to make free and ubiquitous WiFi-access a reality, without the need for extra infrastructure. [2,3] Add to this that most broadband connections remain unexploited for much of the day and one can begin to see the logic of an open network. 
Nowadays, most in-house WiFi-networks are locked down with a password to protect privacy and security, and to prevent others from slowing down the home network. But while these issues are of real concern, they could be solved without denying access to freeloaders. It's perfectly possible to create two separate networks on a WiFi-router: a network for private use and a network for public use. The router can wall off one connection from the other, preventing snooping and security risks. The router can also prioritize the network owner's traffic over that of others, ensuring minimum download and upload speeds.
This so-called "shared-wireless" approach is not new. Some companies (most notably FON) develop and sell routers with a dual access function. People who buy such a router (FON often works together with internet providers) gain access to all the routers associated with the community. However, there's not really a need for a commercial company to organize such a service. For example, the Electronic Frontier Foundation (EFF) designed open-source firmware called Open Wireless Router, which performs exactly the same function. [5,6]
An approach that's not economically driven would bring a lot of benefits. First of all, we would be able to connect to any WiFi-router, not only to those from our own internet service provider.  This results in multiple access points, which makes shared-wireless a viable alternative to mobile networks. Secondly, it would be a free service. By sharing a small part of the bandwidth of our home router, we gain free access to mobile internet whenever we step out of the door. Last but not least, a community approach would stimulate innovation to get the best out of the available resources.
A Surplus of Bandwidth
A 2014 experimental set-up of a shared-wireless network in a British city found that broadband users on fibre contracts have so much spare capacity available that it's not necessary to limit the bandwidth of freeloaders. However, those with DSL connections don't have this spare capacity, especially when it comes to upload speeds. In that case, sharers must be given priority over freeloaders, and the bandwidth of the public network must be limited. This could make it hard for freeloaders to use bandwidth-intensive applications such as video streaming.
You could argue that this is a good thing, because it's precisely these bandwidth-hungry applications that push the power usage of the internet higher and higher. There's also ample opportunity for technical improvement. For example, routers could be configured so that the availability of public bandwidth varies with the activities of the owner. A DSL connection could then be made fully available to passers-by while its owner is at work or on holiday, for instance.
WiFi-routers in Brussels, Belgium, 2014-15. Blue dots are cell towers. Source: Wigle.
Home WiFi routers could also be equipped with storage capabilities, which would increase connectivity opportunities for mobile internet users. The stored packets can be forwarded to another home router in range, or relayed to a mobile user who may find another connectivity point. Research has shown that this approach -- using only 30 MB of storage per home router -- significantly improves the service quality for mobile users. [4,9] Another idea is WiFi-Direct, which connects two WiFi-enabled devices without the need for a WiFi-router -- similar to Bluetooth but much faster and with a wider range.
The range of a WiFi-router can also be increased in a spectacular way through protocol changes and the use of antennas, which is especially interesting for remote regions. That's the topic of the next post.
Kris De Decker (edited by Jenna Collett)
[1] "A close examination of performance and power characteristics of 4G LTE networks" (PDF), Junxian Huang, June 2012. See also: "Emerging trends in electricity consumption for consumer ICT", Peter Corcoran, 2013, and "Energy consumption in mobile phones: a measurement study and implications for network applications" (PDF), Niranjan Balasubramanian, 2009. For an in-depth review of the internet's growing energy use, see our previous article: "Why we need a speed limit for the internet".
[2] Global Access to the Internet for All, internet draft, Internet Engineering Task Force (IETF), 2015
[3] "An evaluation of IEEE 802.11 community networks deployments", German Castignani, Lucien Loiseau, Nicolas Montavont, International Conference on Information Networking, 2011
[4] "Storage-enabled access points for improved mobile performance: an evaluation study", E. Koutsogiannis, 2011
[5] Why we need an open wireless movement, EFF, 2011
[6] New open-source router firmware opens your wi-fi network to strangers, Ars Technica, 2014
[7] "A feasibility study of an in-the-wild experimental public access WiFi network", Arjuna Sathiaseelan, 2014
[8] "Virtual Public Networks", Arjuna Sathiaseelan, 2014
[9] "A survey of Delay- and Disruption-Tolerant Networking Applications", Artemios G. Voyiatzis, 2012
[10] "WiFi Direct", WiFi Alliance.
In rich countries, the focus is on always-on connectivity and ever higher access speeds. In poor countries, on the other hand, connectivity is often achieved through much more low-tech, asynchronous networks.
While the high-tech approach pushes the costs and energy use of the internet higher and higher, the low-tech alternatives result in much cheaper and very energy efficient networks that combine well with renewable power production and are resistant to disruptions.
If we want the internet to keep working in circumstances where access to energy is more limited, we can learn important lessons from alternative network technologies. Best of all, there's no need to wait for governments or companies to facilitate: we can build our own resilient communication infrastructure if we cooperate with one another. This is demonstrated by several community networks in Europe, of which the largest has more than 35,000 users already.
Picture: A node in the Scottish Tegola Network.
More than half of the global population does not have access to the "worldwide" web. Up to now, the internet is mainly an urban phenomenon, especially in "developing" countries. Telecommunication companies are usually reluctant to extend their network outside cities due to a combination of high infrastructure costs, low population density, limited ability to pay for services, and an unreliable or non-existent electricity infrastructure. Even in remote regions of "developed" countries, internet connectivity isn't always available.
Internet companies such as Facebook and Google regularly make headlines with plans for connecting these remote regions to the internet. Facebook tries to achieve this with drones, while Google counts on high-altitude balloons. There are major technological challenges, but the main objection to these plans is their commercial character. Obviously, Google and Facebook want to connect more people to the internet because that would increase their revenues. Facebook in particular has received much criticism, because its network promotes the company's own site and blocks most other internet applications.
Meanwhile, several research groups and network enthusiasts have developed and implemented much cheaper alternative network technologies to solve these issues. Although these low-tech networks have proven their worth, they have received much less attention. Contrary to the projects of internet companies, they are set up by small organisations or by the users themselves. This guarantees an open network that benefits the users instead of a handful of corporations. At the same time, these low-tech networks are very energy efficient.
WiFi-based Long Distance Networks
Most low-tech networks are based on WiFi, the same technology that allows mobile access to the internet in most western households. As we have seen in the previous article, sharing these devices could provide free mobile access across densely populated cities. But the technology can be equally useful in sparsely populated areas. Although the WiFi-standard was developed for short-distance data communication (with a typical range of about 30 metres), its reach can be extended through modifications of the Media Access Control (MAC) layer in the networking protocol, and through the use of range extender amplifiers and directional antennas. 
Although the WiFi-standard was developed for short-distance data communication, its reach can be extended to cover distances of more than 100 kilometres.
The longest unamplified WiFi link is a 384 km wireless point-to-point connection between Pico El Águila and Platillón in Venezuela, established a few years ago. [3,4] However, WiFi-based long distance networks usually consist of a combination of shorter point-to-point links, each between a few kilometres and a hundred kilometres long at most. These are combined to create larger, multihop networks. Point-to-point links, which form the backbone of a long range WiFi network, are combined with omnidirectional antennas that distribute the signal to individual households (or public institutions) of a community.
Picture: A relay with three point-to-point links and three sectoral antennae. Tegola.
Long-distance WiFi links require line of sight to make a connection -- in this sense, the technology resembles the 18th century optical telegraph. If there's no line of sight between two points, a third relay is required that can see both points, and the signal is sent to this intermediate relay first. Depending on the terrain and particular obstacles, more relays may be necessary.
Point-to-point links typically consist of two directional antennas, one focused on the next node and the other on the previous node in the network. Nodes can have multiple antennas, with one antenna per fixed point-to-point link to each neighbour. This allows mesh routing protocols to dynamically select which of the available links to use for routing.
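Such dynamic link selection usually comes down to a shortest-path computation over link-quality metrics. The sketch below uses Dijkstra's algorithm with ETX-style weights (the expected number of transmissions per packet, lower being better); the topology and metric values are made up for illustration and don't describe any specific mesh protocol:

```python
import heapq

def best_route(links, src, dst):
    """Dijkstra over a mesh graph. `links` maps a node to a list of
    (neighbour, etx) pairs; the route with the lowest total ETX wins.
    Returns the node list from src to dst, or None if unreachable."""
    dist = {src: 0.0}
    prev = {}
    heap = [(0.0, src)]
    visited = set()
    while heap:
        d, node = heapq.heappop(heap)
        if node in visited:
            continue
        visited.add(node)
        if node == dst:
            break
        for nbr, etx in links.get(node, []):
            nd = d + etx
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                prev[nbr] = node
                heapq.heappush(heap, (nd, nbr))
    if dst not in dist:
        return None
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return path[::-1]

# Two ways from A to D; the two-hop route via B has the lower total ETX (2.3).
links = {"A": [("B", 1.2), ("C", 1.0)], "B": [("D", 1.1)], "C": [("D", 2.5)]}
print(best_route(links, "A", "D"))   # ['A', 'B', 'D']
```

In a live mesh the ETX values would be measured continuously, so the chosen path shifts automatically when a link degrades.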
Long-distance WiFi links require line of sight to make a connection -- in this sense, the technology resembles the 18th century optical telegraph.
Distribution nodes usually consist of a sectoral antenna (a small version of the things you see on mobile phone masts) or a conventional WiFi-router, together with a number of receivers in the community.  For short distance WiFi-communication, there is no requirement for line of sight between the transmitter and the receiver. 
To provide users with access to the worldwide internet, a long range WiFi network should be connected to the main backbone of the internet using at least one "backhaul" or "gateway node". This can be a dial-up or broadband connection (DSL, fibre or satellite). If such a link is not established, users would still be able to communicate with each other and view websites set up on local servers, but they would not be able to access the internet. 
Advantages of Long Range WiFi
Long range WiFi offers high bandwidth (up to 54 Mbps) combined with very low capital costs. Because the WiFi standard enjoys widespread acceptance and has huge production volumes, off-the-shelf antennas and wireless cards can be bought for very little money. Alternatively, components can be put together from discarded materials such as old routers, satellite dish antennas and laptops. Protocols like WiLDNet run on a 266 MHz processor with only 128 MB memory, so an old computer will do the trick.
The WiFi-nodes are lightweight and don't need expensive towers -- further decreasing capital costs and minimizing the impact of the structures to be built. More recently, single units that combine antenna, wireless card and processor have become available. These are very convenient for installation. To build a relay, one simply connects such units together with ethernet cables that carry both signal and power. The units can be mounted on towers or slim masts, since they present little wind load. Examples of suppliers of long range WiFi components are Ubiquiti, Alvarion, MikroTik and simpleWiFi.
Long Range WiFi makes use of unlicensed spectrum and offers high bandwidth, low capital costs, easy installation, and low power requirements.
Long range WiFi also has low operational costs due to low power requirements. A typical mast installation consisting of two long distance links and one or two wireless cards for local distribution consumes around 30 watts. [6,12] In several low-tech networks, nodes are entirely powered by solar panels and batteries. Another important advantage of long range WiFi is that it makes use of unlicensed spectrum (2.4 and 5 GHz), and thus avoids negotiations with telecom operators and government. This adds to the cost advantage and allows basically anyone to start a WiFi-based long distance network. 
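A 30-watt node is modest enough to run from a small off-grid solar system. The back-of-the-envelope sizing below assumes four hours of peak sun per day, two days of battery autonomy, a 12 V battery cycled to 50% depth of discharge and 75% overall system efficiency -- all illustrative assumptions, not figures from the networks described here:

```python
def size_solar_node(load_w=30, sun_hours=4.0, autonomy_days=2,
                    battery_v=12.0, max_depth_of_discharge=0.5,
                    system_efficiency=0.75):
    """Back-of-the-envelope solar sizing for a continuously powered relay.

    Returns (panel_watts, battery_amp_hours), rounded. Every default
    value here is an assumption chosen for illustration.
    """
    daily_wh = load_w * 24                              # 720 Wh/day at 30 W
    panel_w = daily_wh / (sun_hours * system_efficiency)
    battery_wh = daily_wh * autonomy_days / max_depth_of_discharge
    battery_ah = battery_wh / battery_v
    return round(panel_w), round(battery_ah)

print(size_solar_node())   # (240, 240): a ~240 W panel and a ~240 Ah battery
```

Even under these conservative assumptions the hardware stays small and cheap, which is why solar-powered nodes are common in these networks.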
Long Range WiFi Networks in Poor Countries
The first long range WiFi networks were set up ten to fifteen years ago. In poor countries, two main types have been built. The first is aimed at providing internet access to people in remote villages. An example is the Akshaya network in India, which covers the entire Kerala State and is one of the largest wireless networks in the world. The infrastructure is built around approximately 2,500 "computer access centers", which are open to the local population -- direct ownership of computers is minimal in the region. 
Other examples, also in India, are the AirJaldi networks, which provide internet access to approximately 20,000 users in six states, all in remote regions and on difficult terrain. Most nodes in this network are solar-powered and the distance between them can range up to 50 km or more. In some African countries, local WiFi-networks distribute internet access from a satellite gateway. [15,16]
A node in the AirJaldi network. Picture: AirJaldi.
A second type of long distance WiFi network in poor countries is aimed at providing telemedicine to remote communities. In remote regions, health care is often provided through scarcely equipped health posts attended by minimally trained health technicians. Long-range WiFi networks can connect urban hospitals with these outlying health posts, allowing doctors to remotely support health technicians using high-resolution file transfers and real-time communication tools based on voice and video.
An example is the link between Cabo Pantoja and Iquitos in the Loreto province in Peru, which was established in 2007. The 450 km network consists of 17 towers which are 16 to 50 km apart. The line connects 15 medical outposts in remote villages with the main hospital in Iquitos and is aimed at remote diagnosis of patients. [17,18] All equipment is powered by solar panels. [18,19] Other successful examples of long range WiFi telemedicine networks have been built in India, Malawi and Ghana. [20,21]
WiFi-Based Community Networks in Europe
The low-tech networks in poor countries are set up by NGOs, governments, universities or businesses. In contrast, most of the WiFi-based long distance networks in remote regions of rich countries are so-called "community networks": the users themselves build, own, power and maintain the infrastructure. Similar to the shared wireless approach in cities, reciprocal resource sharing forms the basis of these networks: participants can set up their own node and connect to the network (for free), as long as their node also allows traffic of other members. Each node acts as a WiFi routing device that provides IP forwarding services and a data link to all users and nodes connected to it. [8,22]
In a community network, the users themselves build, own, power and maintain the infrastructure.
Consequently, with each new user, the network becomes larger. There is no a priori overall planning. A community network grows bottom-up, driven by the needs of its users, as nodes and links are added or upgraded following demand patterns. The only consideration is to connect a node from a new participant to an existing one. When a node is powered on, it discovers its neighbours, assigns itself a unique IP address, and then establishes the most appropriate routes to the rest of the network, taking into account the quality of the links. Community networks are open to participation by everyone, sometimes according to an open peering agreement. [8,9,19,22]
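The self-configuration step can be illustrated with a toy address-assignment scheme: hash the node's identifier into an address range and probe forward on collision. The subnet, the hash and the probing policy below are illustrative assumptions, not the mechanism actually used by Guifi.net or any other community network:

```python
import hashlib

def join_network(node_id, taken_addresses, subnet="10.42"):
    """Decentralised address self-assignment, sketched.

    The node hashes its own identifier into a 16-bit slot of an
    (assumed) 10.42.0.0/16 range; if that address is already taken by
    another node, it probes the next slot until a free one is found.
    """
    h = int(hashlib.sha256(node_id.encode()).hexdigest(), 16)
    for offset in range(65536):
        n = (h + offset) % 65536
        addr = f"{subnet}.{n >> 8}.{n & 0xFF}"
        if addr not in taken_addresses:
            return addr
        # collision with an existing node: probe the next slot
    raise RuntimeError("address space exhausted")

# A new participant picks an address without asking any central authority:
addr = join_network("antenna-on-my-roof", set())
print(addr.startswith("10.42."))   # True
```

The point of the sketch is that no central registry is needed: any node can compute a non-conflicting address from information it can observe locally.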
Wireless links in the Spanish Guifi network. Credit.
Despite the lack of reliable statistics, community networks seem to be rather successful, and there are several large ones in Europe, such as Guifi.net (Spain), Athens Wireless Metropolitan Network (Greece), FunkFeuer (Austria), and Freifunk (Germany). [8,22,23,24] The Spanish network is the largest WiFi-based long distance network in the world with more than 50,000 kilometres of links, although a small part is based on optic fibre links. Most of it is located in the Catalan Pyrenees, one of the least populated areas in Spain. The network was initiated in 2004 and now has close to 30,000 nodes, up from 17,000 in 2012. [8,22]
Guifi.net provides internet access to individuals, companies, administrations and universities. In principle, the network is installed, powered and maintained by its users, although volunteer teams and even commercial installers are present to help. Some nodes and backbone upgrades have been successfully crowdfunded by indirect beneficiaries of the network. [8,22]
Performance of Low-tech Networks
So how about the performance of low-tech networks? What can you do with them? The available bandwidth per user can vary enormously, depending on the bandwidth of the gateway node(s) and the number of users, among other factors. The long-distance WiFi networks aimed at telemedicine in poor countries have few users and a good backhaul, resulting in high bandwidth (over 40 Mbps). This gives them a similar performance to fibre connections in the developed world. A study of (a small part of) the Guifi.net community network, which has dozens of gateway nodes and thousands of users, showed an average throughput of 2 Mbps, which is comparable to a relatively slow DSL connection. Actual throughput per user varies from 700 kbps to 8 Mbps.
The available bandwidth per user can vary enormously, depending on the bandwidth of the gateway node(s) and the number of users, among other factors
However, the low-tech networks that distribute internet access to a large user base in developing countries can have much more limited bandwidth per user. For example, a university campus in Kerala (India) uses a 750 kbps internet connection that is shared across 3,000 faculty members and students operating from 400 machines, where during peak hours nearly every machine is being used.
Therefore, the worst-case average bandwidth available per machine is approximately 1.9 kbps, which is slow even in comparison to a dial-up connection (56 kbps). And this can be considered really good connectivity compared to typical rural settings in poor countries. To make matters worse, such networks often have to deal with an intermittent power supply.
A node in the Spanish Guifi community network.
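The arithmetic behind that worst-case figure is simply the shared link divided evenly across all active machines:

```python
def worst_case_kbps(link_kbps=750, machines=400):
    # Peak hours: every machine active at once, each gets an equal slice.
    return link_kbps / machines

print(worst_case_kbps())   # 1.875 -- rounded to ~1.9 kbps in the text
```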
Under these circumstances, even the most common internet applications have poor performance, or don't work at all. The communication model of the internet is based on a set of network assumptions, called the TCP/IP protocol suite. These include the existence of a bi-directional end-to-end path between the source (for example a website's server) and the destination (the user's computer), short round-trip delays, and low error rates.
Many low-tech networks in poor countries do not conform to these assumptions. They are characterized by intermittent connectivity or "network partitioning" -- the absence of an end-to-end path between source and destination -- long and variable delays, and high error rates. [21,27,28]
Nevertheless, even in such conditions the internet could work perfectly well. The technical issues can be solved by moving away from the always-on model of traditional networks, and instead designing networks based on asynchronous communication and intermittent connectivity. These so-called "delay-tolerant networks" (DTNs) have their own specialized protocols overlaid on top of the lower protocols and do not use TCP. They overcome the problems of intermittent connectivity and long delays by using store-and-forward message switching.
Information is forwarded from a storage place on one node to a storage place on another node, along a path that eventually reaches its destination. In contrast to traditional internet routers, which only store incoming packets for a few milliseconds on memory chips, the nodes of a delay-tolerant network have persistent storage (such as hard disks) that can hold information indefinitely. [27,28]
Delay-tolerant networks combine well with renewable energy: solar panels or wind turbines could power network nodes only when the sun shines or the wind blows, eliminating the need for energy storage.
Delay-tolerant networks don't require an end-to-end path between source and destination. Data is simply transferred from node to node. If the next node is unavailable because of long delays or a power outage, the data is stored on the hard disk until the node becomes available again. While it might take a long time for data to travel from source to destination, a delay-tolerant network ensures that it will eventually arrive.
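Store-and-forward behaviour is easy to model: each node keeps undeliverable bundles in persistent storage and pushes them onward only when the next hop is reachable. Below is a toy in-memory model in Python; the node names, the "bundle" dictionaries and the `online` flag are illustrative stand-ins for real DTN machinery (actual DTNs use dedicated bundle protocols and disk storage):

```python
import collections

class DTNNode:
    """A delay-tolerant node: bundles wait in storage until the next hop
    is reachable. A toy model of store-and-forward message switching."""
    def __init__(self, name):
        self.name = name
        self.storage = collections.deque()   # stands in for a hard disk
        self.online = True
        self.delivered = []

    def receive(self, bundle):
        if bundle["dst"] == self.name:
            self.delivered.append(bundle["data"])
        else:
            self.storage.append(bundle)      # hold until we can forward

    def forward(self, next_hop):
        """Push stored bundles one hop onward, but only if the next node
        is reachable; otherwise they simply stay in storage."""
        if not next_hop.online:
            return
        while self.storage:
            next_hop.receive(self.storage.popleft())

# A three-hop path where the last node is temporarily down:
a, b, c = DTNNode("A"), DTNNode("B"), DTNNode("C")
a.receive({"dst": "C", "data": "hello"})
c.online = False
a.forward(b)       # the bundle reaches B and waits there
b.forward(c)       # C unreachable: nothing happens, nothing is lost
c.online = True    # e.g. the sun comes back up on C's solar panel
b.forward(c)
print(c.delivered)   # ['hello']
```

The message takes longer than it would on an always-on network, but it arrives; no end-to-end path ever needs to exist at any single moment.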
Delay-tolerant networks further decrease capital costs and energy use, leading to the most efficient use of scarce resources. They keep working with an intermittent energy supply and they combine well with renewable energy sources: solar panels or wind turbines could power network nodes only when the sun shines or the wind blows, eliminating the need for energy storage.
Delay-tolerant networking can take surprising forms, especially when it takes advantage of non-traditional means of communication, such as "data mules". [11,29] In such networks, conventional transportation technologies -- buses, cars, motorcycles, trains, boats, airplanes -- are used to ferry messages from one location to another in a store-and-forward manner.
Examples are DakNet and KioskNet, which use buses as data mules. [30-34] In many developing regions, rural bus routes regularly visit villages and towns that have no network connectivity. By equipping each vehicle with a computer, a storage device and a mobile WiFi-node on the one hand, and by installing a stationary WiFi-node in each village on the other hand, the local transport infrastructure can substitute for a wireless internet link. 
Outgoing data (such as sent emails or requests for webpages) is stored on local computers in the village until the bus comes within range. At this point, the fixed WiFi-node of the local computer automatically transmits the data to the mobile WiFi-node of the bus. Later, when the bus arrives at a hub that is connected to the internet, the outgoing data is transmitted from the mobile WiFi-node to the gateway node, and then to the internet. Data sent to the village takes the opposite route. The bus -- or data -- driver doesn't require any special skills and is completely oblivious to the data transfers taking place. He or she does not need to do anything other than come in range of the nodes. [30,31]
In a data mules network, the local transport infrastructure substitutes for a wireless internet link.
The use of data mules offers some extra advantages over more "sophisticated" delay-tolerant networks. A "drive-by" WiFi network allows small, low-cost and low-power radio devices to be used, which don't require line of sight and consequently need no towers -- further lowering capital costs and energy use compared to other low-tech networks. [30,31,32]
The use of short-distance WiFi-links also results in a higher bandwidth compared to long-distance WiFi-links, which makes data mules better suited to transfer larger files. On average, 20 MB of data can be moved in each direction when a bus passes a fixed WiFi-node. [30,32] On the other hand, latency (the time interval between sending and receiving data) is usually higher than on long-range WiFi-links. A single bus passing by a village once a day gives a latency of 24 hours.
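Those two figures -- 20 MB per pass and one pass per day -- translate into an average effective bandwidth that is easy to work out (taking 1 MB as one million bytes):

```python
def mule_throughput_kbps(mb_per_pass=20, passes_per_day=1):
    """Average effective bandwidth of a bus-based data mule, in kbps.

    20 MB exchanged once a day works out to roughly 1.85 kbps on
    average -- with a latency of a full day.
    """
    bits_per_day = mb_per_pass * passes_per_day * 8 * 1_000_000
    return bits_per_day / 86_400 / 1_000   # seconds per day, bits -> kbits

print(round(mule_throughput_kbps(), 2))   # 1.85
```

The burst bandwidth while the bus is in range is much higher; it is the once-a-day schedule that drags the average down and sets the 24-hour latency.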
Obviously, a delay-tolerant network (DTN) -- whatever its form -- also requires new software: applications that function without a connected end-to-end networking path.  Such custom applications are also useful for synchronous, low bandwidth networks. Email is relatively easy to adapt to intermittent connectivity, because it's an asynchronous communication method by itself. A DTN-enabled email client stores outgoing messages until a connection is available. Although emails may take longer to reach their destination, the user experience doesn't really change.
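The store-until-connected behaviour of such an email client can be modelled in a few lines. The class below is a toy illustration of the idea, not the interface of any real DTN mail software:

```python
class DTNMailClient:
    """Outgoing mail waits in a local outbox until a link is available --
    a toy model of the delay-tolerant email client described above."""
    def __init__(self):
        self.outbox = []   # stands in for persistent local storage
        self.sent = []

    def send(self, message):
        # From the user's point of view the message is "sent" immediately.
        self.outbox.append(message)

    def on_connection(self):
        # Flush everything the moment connectivity returns.
        self.sent.extend(self.outbox)
        self.outbox.clear()

client = DTNMailClient()
client.send("hello from the village")
client.send("second message")
# ... hours pass with no connectivity ...
client.on_connection()
print(len(client.sent))   # 2
```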
A Freifunk WiFi-node is installed in Berlin, Germany. Picture: Wikipedia Commons.
Browsing and searching the web requires more adaptations. For example, most search engines optimize for speed, assuming that a user can quickly look through the returned links and immediately run a second modified search if the first result is inadequate. However, in intermittent networks, multiple rounds of interactive search would be impractical. [26,35] Asynchronous search engines optimize for bandwidth rather than response time. [26,30,31,35,36] For example, RuralCafe desynchronizes the search process by performing many search tasks in an offline manner, refining the search request based on a database of similar searches. The actual retrieval of information using the network is only done when absolutely necessary.
Many internet applications could be adapted to intermittent networks, such as webbrowsing, email, electronic form filling, interaction with e-commerce sites, blogsoftware, large file downloads, or social media.
Some DTN-enabled browsers download not only the explicitly requested webpages but also the pages that are linked to by the requested pages. Others are optimized to return low-bandwidth results, which is achieved by filtering, analysis and compression on the server side. A similar effect can be achieved through the use of a service like Loband, which strips webpages of images, video, advertisements, social media buttons, and so on, presenting only the textual content.
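The stripping idea itself is simple enough to sketch with Python's standard-library HTML parser. This illustrates the principle only; it is not Loband's actual implementation:

```python
from html.parser import HTMLParser

class TextOnly(HTMLParser):
    """Reduce a page to its text: scripts and styles are skipped, and
    tags such as images simply disappear because only text data is kept."""
    SKIP = {"script", "style"}

    def __init__(self):
        super().__init__()
        self.parts = []
        self._skipping = 0

    def handle_starttag(self, tag, attrs):
        if tag in self.SKIP:
            self._skipping += 1

    def handle_endtag(self, tag):
        if tag in self.SKIP and self._skipping:
            self._skipping -= 1

    def handle_data(self, data):
        if not self._skipping and data.strip():
            self.parts.append(data.strip())

def strip_page(html):
    parser = TextOnly()
    parser.feed(html)
    return " ".join(parser.parts)

page = "<h1>News</h1><script>track()</script><p>All text, no <img src='x.jpg'> images.</p>"
print(strip_page(page))   # News All text, no images.
```

Text compresses well and tolerates delay, so a filter like this shrinks a page to a small fraction of its original size before it ever crosses the slow link.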
Browsing and searching on intermittent networks can also be improved by local caching (storing already downloaded pages) and prefetching (downloading pages that might be retrieved in the future).  Many other internet applications could also be adapted to intermittent networks, such as electronic form filling, interaction with e-commerce sites, blogsoftware, large file downloads, social media, and so on. [11,30] All these applications would remain possible, though at lower speeds.
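Caching and prefetching can be combined in a small local proxy: pages already seen are served from local storage, and each fetch also pulls in the pages it links to, so they remain available even when the connection drops. The sketch below is deliberately naive; the page store and link graph are stand-ins for real HTTP requests:

```python
class CachingProxy:
    """Local cache with one-hop prefetching, as described above.

    `fetch` is any callable mapping a URL to page content; `links` maps
    a URL to the URLs it links to. Both are illustrative stand-ins.
    """
    def __init__(self, fetch, links):
        self.fetch = fetch
        self.links = links
        self.cache = {}

    def get(self, url, online=True):
        if url in self.cache:
            return self.cache[url]          # served locally, zero bandwidth
        if not online:
            raise LookupError("offline and not cached: " + url)
        self.cache[url] = self.fetch(url)
        for linked in self.links.get(url, []):   # prefetch one hop ahead
            self.cache.setdefault(linked, self.fetch(linked))
        return self.cache[url]

pages = {"/home": "welcome", "/news": "headlines"}
proxy = CachingProxy(pages.__getitem__, {"/home": ["/news"]})
proxy.get("/home")                        # fetches /home and prefetches /news
print(proxy.get("/news", online=False))   # headlines -- served from the cache
```

On an intermittent link, prefetching trades cheap bandwidth during the connected window for availability during the disconnected one.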
Obviously, real-time applications such as internet telephony, media streaming, chatting or videoconferencing are impossible to adapt to intermittent networks, which provide only asynchronous communication. These applications are also difficult to run on synchronous networks that have limited bandwidth. Because these are the applications that are in large part responsible for the growing energy use of the internet, one could argue that their incompatibility with low-tech networks is actually a good thing (see the previous article).
Furthermore, many of these applications could be organized in different ways. While real-time voice or video conversations won't work, it's perfectly possible to send and receive voice or video messages. And while streaming media can't happen, downloading music albums and video remains possible. Moreover, these files could be "transmitted" by the most low-tech internet technology available: a sneakernet. In a sneakernet, digital data is "wirelessly" transmitted using a storage medium such as a hard disk, a USB-key, a flash card, or a CD or DVD. Before the arrival of the internet, all computer files were exchanged via a sneakernet, using tape or floppy disks as a storage medium.
Stuffing a cargo train full of digital storage media would beat any digital network in terms of speed, cost and energy efficiency. Picture: Wikipedia Commons.
Just like a data mules network, a sneakernet involves a vehicle, a messenger on foot, or an animal (such as a carrier pigeon). However, in a sneakernet there is no automatic data transfer between the mobile node (for instance, a vehicle) and the stationary nodes (sender and recipient). Instead, the data first have to be transferred from the sender's computer to a portable storage medium. Then, upon arrival, the data have to be transferred from the portable storage medium to the receiver's computer.  A sneakernet thus requires manual intervention and this makes it less convenient for many internet applications.
There are exceptions, though. For example, a movie doesn't have to be transferred to the hard disk of your computer in order to watch it. You play it straight from a portable hard disk or slide a disc into the DVD-player. Moreover, a sneakernet also offers an important advantage: of all low-tech networks, it has the most bandwidth available. This makes it perfectly suited for the distribution of large files such as movies or computer games. In fact, when very large files are involved, a sneakernet even beats the fastest fibre internet connection. At lower internet speeds, sneakernets can be advantageous for much smaller files.
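How a sneakernet "beats fibre" is a matter of simple arithmetic: the shipping time is roughly constant, while the transfer time over a wire grows with file size. The figures below (a 10 TB disk, a 100 Mbps line, a 24-hour trip, decimal units) are illustrative assumptions:

```python
def transfer_hours_network(gigabytes, mbps):
    """Hours needed to move `gigabytes` over a link of `mbps` megabits
    per second (decimal units: 1 GB = 8,000 megabits)."""
    return gigabytes * 8_000 / mbps / 3_600

def sneakernet_wins(gigabytes, mbps, trip_hours):
    """Does physically carrying the data beat sending it over the wire?"""
    return trip_hours < transfer_hours_network(gigabytes, mbps)

# 10 TB over a 100 Mbps line takes about 222 hours; a one-day trip wins easily.
print(round(transfer_hours_network(10_000, 100)))    # 222
print(sneakernet_wins(10_000, 100, trip_hours=24))   # True
```

The break-even point shifts with link speed, but for sufficiently large payloads physical transport always wins eventually, since its "bandwidth" scales with storage density rather than with the wire.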
Technological progress is unlikely to erode this advantage: digital storage media improve at least as fast as internet connections, so both modes of communication advance roughly in step.
While most low-tech networks are aimed at regions where the alternative is often no internet connection at all, their usefulness for well-connected areas cannot be overlooked. The internet as we know it in the industrialized world is a product of an abundant energy supply, a robust electricity infrastructure, and sustained economic growth. This "high-tech" internet might offer some fancy advantages over the low-tech networks, but it cannot survive if these conditions change. This makes it extremely vulnerable.
The internet as we know it in the industrialized world is a product of an abundant energy supply, a robust electricity infrastructure, and sustained economic growth. It cannot survive if these conditions change.
Depending on their level of resilience, low-tech networks can remain in operation when the supply of fossil fuels is interrupted, when the electricity infrastructure deteriorates, when the economy grinds to a halt, or if other calamities should hit. Such a low-tech internet would allow us to surf the web, send and receive e-mails, shop online, share content, and so on. Meanwhile, data mules and sneakernets could serve to handle the distribution of large files such as videos. Stuffing a cargo vessel or a train full of digital storage media would beat any digital network in terms of speed, cost and energy efficiency. And if such a transport infrastructure would no longer be available, we could still rely on messengers on foot, cargo bikes and sailing vessels.
Such a hybrid system of online and offline applications would remain a very powerful communication network -- unlike anything we had even in the late twentieth century. Even if we envision a doom scenario in which the wider internet infrastructure would disintegrate, isolated low-tech networks would still be very useful local and regional communication technologies. Furthermore, they could obtain content from other remote networks through the exchange of portable storage media. The internet, it appears, can be as low-tech or high-tech as we can afford it to be.
Kris De Decker (edited by Jenna Collett)
Sources & Notes:
DIY: Wireless networking in the developing world (Third Edition) is a free book about designing, implementing and maintaining low-cost wireless networks. Available in English, French, and Spanish.
 Connecting the unwired world with balloons, satellites, lasers & drones, Slashdot, 2015
 A QoS-aware dynamic bandwidth allocation scheme for multi-hop WiFi-based long distance networks, Iftekhar Hussain et al., 2015
 Long-distance, Low-Cost Wireless Data Transmission (PDF), Ermanno Pietrosemoli, 2011
 This link could only be established thanks to the height of the endpoints (4,200 and 1,500 metres) and the flatness of the middle ground. The curvature of the Earth makes longer point-to-point WiFi-links difficult to achieve, because line of sight between the two points is required.
 Radio waves occupy a volume around the optical line, which must be unencumbered by obstacles. This volume is known as the Fresnel ellipsoid, and its size grows with the distance between the two end points and with the wavelength of the signal, which is in turn inversely proportional to the frequency. Thus, it is required to leave extra "elbow room" for the Fresnel zone.
 A Brief History of the Tegola Project, Tegola Project, retrieved October 2015
 WiLDNet: Design and Implementation of High Performance WiFi based Long Distance Networks (PDF), Rabin Patra et al., 2007
 Topology Patterns of a Community Network: Guifi.net (PDF), Davide Vega et al., 2012
 Global Access to the Internet for All, internet draft, Internet Engineering Task Force (IETF), 2015
 This is what happened to Afghanistan's JLINK network when funding for the network's satellite link ran dry in 2012.
 The case for technology in developing regions (PDF), Eric Brewer et al., 2005
 Beyond Pilots: Keeping Rural Wireless Networks Alive (PDF), Sonesh Surana et al., 2008
 VillageCell: Cost Effective Cellular Connectivity in Rural Areas (PDF), Abhinav Anand et al., 2012
 Deployment and Extension of a Converged WiMAX/WiFi Network for Dwesa Community Area South Africa (PDF), N. Ndlovu et al., 2009
 "A telemedicine network optimized for long distances in the Amazonian jungle of Peru" (PDF), Carlos Rey-Moreno, ExtremeCom '11, September 2011
 "Telemedicine networks of EHAS Foundation in Latin America", Ignacio Prieto-Egido et al., in "Frontiers in Public Health", October 15, 2014.
 "The design of a wireless solar-powered router for rural environments isolated from health facilities" (PDF), Francisco Javier Simo Reigadas et al., in "IEEE Wireless Communications", June 2008.
 On a long wireless link for rural telemedicine in Malawi (PDF), M. Zennaro et al., 2008
 A Survey of Delay- and Disruption-Tolerant Networking Applications, Artemios G. Voyiatzis, 2012
 Supporting Cloud Deployment in the Guifi Community Network (PDF), Roger Baig et al., 2013
 A Case for Research with and on Community Networks (PDF), Bart Braem et al., 2013
 There are smaller networks in Scotland (Tegola), Slovenia (wlan slovenija), Belgium (Wireless Antwerpen), and the Netherlands (Wireless Leiden), among others. Australia has Melbourne Wireless. In Latin America, numerous examples exist, such as Bogota Mesh (Colombia) and Monte Video Libre (Uruguay). Some of these networks are interconnected. This is the case for the Belgian and Dutch community networks, and for the Slovenian and Austrian networks. [8,22,23]
 Proxy performance analysis in a community wireless network, Pablo Pitarch Miguel, 2013
 RuralCafe: Web Search in the Rural Developing World (PDF), Jay Chen et al., 2009
 A Delay-Tolerant Network Architecture for Challenged Networks (PDF), Kevin Fall, 2003
 Delay- and Disruption-Tolerant Networks (DTNs) -- A Tutorial (version 2.0) (PDF), Forrest Warthman, 2012
 Healthcare Supported by Data Mule Networks in Remote Communities of the Amazon Region, Mauro Margalho Coutinho et al., 2014
 First Mile Solutions' Daknet Takes Rural Communities Online (PDF), Carol Chyau and Jean-Francois Raymond, 2005
 DakNet: A Road to Universal Broadband Connectivity (PDF), Amir Alexander Hasson et al., 2003
 DakNet: Architecture and Connectivity in Developing Nations (PDF), Madhuri Bhole, 2015
 Delay Tolerant Networks and Their Applications, Longxiang Gao et al., 2015
 Low-cost communication for rural internet kiosks using mechanical backhaul, A. Seth et al., 2006
 Searching the World Wide Web in Low-Connectivity Communities (PDF), William Thies et al., 2002
 Slow Search: Information Retrieval without Time Constraints (PDF), Jaime Teevan, 2013
 Potential for Collaborative Caching and Prefetching in Largely-Disconnected Villages (PDF), Sibren Isaacman et al., 2008
In terms of energy conservation, the leaps in energy efficiency made by the infrastructure and devices we use to access the internet have allowed many online activities to be viewed as more sustainable than their offline counterparts.
On the internet, however, advances in energy efficiency have a reverse effect: as the network becomes more energy efficient, its total energy use increases. This trend can only be stopped when we limit the demand for digital communication.
Although it's a strategy we apply elsewhere -- for instance, by encouraging people to eat less meat, or to lower the thermostat of the heating system -- limiting demand remains controversial because it goes against the belief in technological progress. It's even more controversial when applied to the internet, in part because few people make the connection between data and energy.
Picture: Matthew G. CC.
How much energy does the internet consume? Due to the complexity of the network and its fast-changing nature, nobody really knows. Estimates for the internet's total electricity use vary by an order of magnitude. One reason for the discrepancy between results is that many researchers only investigate a part of the infrastructure that we call the internet.
In recent years, the focus has been mostly on the energy use of data centers, which host the computers (the "servers") that store all information online. However, in comparison, more electricity is used by the combination of end-use devices (the "clients", such as desktops, laptops and smartphones), the network infrastructure (which transmits digital information between servers and clients), and the manufacturing process of servers, end-use devices, and networking devices. 
A second factor that explains the large differences in results is timing. Because the internet infrastructure grows and evolves so fast, results concerning its energy use apply only to the year under study. Finally, as with all scientific studies, the models, methods and assumptions on which researchers base their calculations vary, and are sometimes biased by beliefs or conflicts of interest. For example, it won't surprise anyone that an investigation of the internet's energy use by the American Coalition for Clean Coal Electricity sees much higher electricity consumption than a report written by the information and communication technology industry itself. [2,3]
Eight Billion Pedallers
Keeping all this in mind, we selected what seems to be the most recent, complete, honest and transparent report of the internet's total footprint. It concludes that the global communications network consumed 1,815 TWh of electricity in 2012. This corresponds to 8% of global electricity production in the same year (22,740 TWh). [5,6]
If we were to try to power the (2012) internet with pedal-powered generators, each producing 70 watts of electric power, we would need 8.2 billion people pedalling in three shifts of eight hours for 365 days per year. (Electricity consumption of end-use devices is included in these numbers, so the pedallers can use their smartphones or laptops while on the job). Solar or wind power is not much of a solution, either: 1,815 TWh equals three times the electricity supplied by all wind and solar energy plants in 2012, worldwide.
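As a sanity check, the arithmetic behind the pedal-power figure can be reproduced in a few lines of Python. The 70-watt output and the three-shift, 365-day schedule are the assumptions stated above; with these round numbers the count comes out at roughly 8.9 billion, in the same ballpark as the quoted 8.2 billion (a slightly higher output per pedaller closes the gap).

```python
# Back-of-the-envelope check of the pedal-power figure (illustrative only).
internet_use_twh = 1815       # global internet electricity use in 2012 (TWh)
watts_per_pedaller = 70       # assumed electric output per person (W)

# Average power the internet draws over a full year, in watts.
hours_per_year = 24 * 365
average_power_w = internet_use_twh * 1e12 / hours_per_year

# Pedallers needed at any given moment, times three to cover 24 hours
# in eight-hour shifts.
pedallers_per_shift = average_power_w / watts_per_pedaller
total_pedallers = 3 * pedallers_per_shift

print(f"{total_pedallers / 1e9:.1f} billion pedallers")  # -> 8.9 billion
```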
Today's internet can't be powered by renewable energy. Picture: Wikipedia Commons.
These researchers estimate that by 2017, the electricity use of the internet will rise to between 2,547 TWh (expected growth scenario) and 3,422 TWh (worst case scenario). If the worst-case scenario materializes, internet-related energy use will almost double in just five years' time. Note that further improvements in energy efficiency are already included in these results. Without advances in efficiency, the internet's energy use would double every two years, following the increase in data traffic.
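Expressed as compound annual growth rates, these scenarios are easy to put side by side. The short Python sketch below is illustrative arithmetic only, using the TWh figures quoted above.

```python
# Compound annual growth rates implied by the 2012-2017 scenarios.
def cagr(start, end, years):
    """Compound annual growth rate between two values."""
    return (end / start) ** (1 / years) - 1

base_2012 = 1815        # TWh in 2012
expected_2017 = 2547    # expected growth scenario (TWh)
worst_2017 = 3422       # worst case scenario (TWh)

print(f"expected:   {cagr(base_2012, expected_2017, 5):.1%} per year")  # ~7.0%
print(f"worst case: {cagr(base_2012, worst_2017, 5):.1%} per year")     # ~13.5%

# Doubling every two years -- the no-efficiency-gains path -- would mean:
print(f"traffic-driven: {2 ** 0.5 - 1:.0%} per year")                   # ~41%
```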
Increasing Energy Consumption per User
Importantly, the increasing energy consumption of the internet is not so much due to a growing number of people using the network, as one would assume. Rather, it's caused by a growing energy consumption per internet user. The network's data traffic rises much faster than the number of internet users (45% versus 6-7% annually). There are two main reasons for this. The first is the evolution towards portable computing devices and wireless internet access. The second is the increasing bit rate of the accessed content, mainly caused by the digitalization of TV and the popularity of video streaming.
The increasing energy consumption of the internet is not so much due to a growing number of people using the network, as one would assume. Rather, it's caused by a growing energy consumption per internet user.
In recent years we have seen a trend towards portable alternatives for the desktop computer: first with the laptop, then the tablet and the smartphone. The latter is on its way to 100% adoption: in rich countries, 84% of the population now uses a smartphone. [9,4] These devices consume significantly less electricity than desktop computers, both during operation and manufacture, which has given them an aura of sustainability. However, they have other effects that more than offset this advantage.
First of all, smartphones move much of the computational effort (and thus the energy use) from the end-device to the data center: the rapid adoption of smartphones is coupled with the equally rapid growth in cloud-based computer services, which allow users to overcome the memory capacity and processing power limitations of mobile devices. [4,11] Because the data that is to be processed, and the resulting outcome must be transmitted from the end-use device to the data center and back again, the energy use of the network infrastructure also increases.
High-Speed Wireless Internet
Robbing Peter to pay Paul can improve the total efficiency of some computational tasks and thus reduce total energy use, because servers in datacenters are managed more energy efficiently than our end-use devices. However, this advantage surely doesn't hold for smartphones that connect wirelessly to the internet using 3G or 4G broadband. Energy use in the network is highly dependent on the local access technology: the "last mile" that connects the user to the backbone of the internet.
A wired connection (DSL, cable, fibre) is the most energy efficient method to access the network. Wireless access through WiFi increases the energy use, but only slightly. [12,13] However, if wireless access is made through a cellular network tower, energy use soars. Wireless traffic through 3G uses 15 times more energy than WiFi, while 4G consumes 23 times more. [See also 4, 15] Desktop computers were (and are) usually connected to the internet via a wired link, but laptops, tablets and smartphones are wirelessly connected, either through WiFi or via a cellular network.
Wireless traffic through 3G uses 15 times more energy than WiFi, while 4G consumes 23 times more. Picture: jerry0984. CC.
Growth in mobile data traffic has been somewhat restrained by WiFi "offloading": users restrict data connectivity on the 3G interface because of its significantly higher costs and lower network performance. Instead, they connect to WiFi networks that have become increasingly available. With the advance of 4G networks, the speed advantage of WiFi disappears: 4G has comparable or better network throughput than WiFi. Most network operators are in the process of large-scale rollouts of 4G networks. The number of global 4G connections more than doubled from 200 million at the end of 2013 to 490 million at the end of 2014, and is forecast to reach 875 million by the end of 2015. [11,16,17]
More Time Online
The combination of portable computing devices and wireless internet access also increases the time we spend online.  This trend did not start with smartphones. Laptops were expected to lower the energy consumption of the internet, but they raised it because people took advantage of the laptop's convenience and portability to be online far more often. "It was only with the laptop that the computer entered the living room". 
Smartphones are the next step in this evolution. They allow data to be consumed in many places in and outside the home, alongside more conventional computing. For example, field research has revealed that smartphones are used intensively to fill 'dead time' -- small pockets of time not focused on one specific activity and often perceived as unproductive: waiting, commuting, being bored, coffee breaks, or "social situations that are not stimulating enough". Smartphones have also come to play an important bedtime role, being called upon last thing at night and first thing in the morning.
We are using our increasingly energy efficient devices for longer hours as we send more and more data over a worldwide infrastructure.
Noting these trends, it is clear that not every smartphone is a substitute for a laptop or desktop computer. Both are used alongside each other and even simultaneously. In conclusion, thanks to smartphones and wireless internet, we are now connected anywhere and anytime, using our increasingly energy efficient devices for longer hours as we send more and more data over a worldwide infrastructure. [19,20]
The result is more energy use, from the mobile devices themselves, and -- much more importantly -- in the datacenters and in the network infrastructure. Also, let's not forget that calling someone using a smartphone costs more energy than calling someone using a dumbphone.
Increasing Bit Rates: Music & Video
A second key driver behind the growing energy consumption per internet user is the increasing bit rate of content. The internet started as a text-medium, but images, music and video have become just as important. Downloading a text page requires very little energy. To give an example, all the text on this blog, some 100 articles, can be packed into less than 9 megabytes (MB) of data. Compare this to a single high-resolution image, which easily gets to 3 MB, or a standard quality 8-minute YouTube video, which weighs in at 30 MB -- three times the data required for all the words on this blog.
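Put side by side, the sizes quoted above make the point plainly (a minimal sketch using only the figures from this paragraph):

```python
# Data-size comparison, using the figures quoted in the text (megabytes).
blog_text_mb = 9        # all ~100 articles on this blog, as plain text
image_mb = 3            # one high-resolution image
video_8min_mb = 30      # one standard-quality 8-minute YouTube video

# One short video outweighs the blog's entire written archive threefold.
print(video_8min_mb / blog_text_mb)  # -> 3.33...
print(video_8min_mb / image_mb)      # -> 10.0
```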
Because energy use rises with every bit of data, it matters a lot what we're doing online. And as it turns out, we are increasingly using the network for content with high bit rates, especially video. In 2012, video traffic was 57% of all internet traffic (excluding video exchanged through P2P-networks). It's expected to increase to 69% in 2017. 
Trains are energy efficient. But mobile computing is not. Picture: Nicolas Nova.
If video and wireless internet access are the key drivers behind the increasing energy use of the internet, then of course wireless video is the worst offender. And it's exactly that share of traffic that's growing the fastest. According to the latest Cisco Visual Networking Index, mobile video traffic will grow to 72% of total mobile data traffic in 2019: 
"When device capabilities are combined with faster, higher bandwith, it leads to wide adoption of video applications that contribute to increased data traffic over the network. As mobile network connection speeds increase, the average bit rate of content accessed through the mobile network will increase. High-definition video will be more prevalent, and the proportion of streamed content, as compared to side-loaded content, is also expected to increase. The shift towards on-demand video will affect mobile networks as much as it will affect fixed networks".
Power consumption is not only influenced by data rates but also by the type of service provided. For applications such as email, web browsing, and video and audio downloads, short delays are acceptable. However, for real-time services -- videoconferencing, and audio and video streaming -- delay cannot be tolerated. This requires a higher-performing network, and thus more energy use.
Does the Internet Save Energy?
The growing energy use of the internet is often explained away with the argument that the network saves more energy than it consumes. This is attributed to substitution effects in which online services replace other more energy-intensive activities.  Examples are videoconferencing, which is supposed to be an alternative for the airplane or the car, or the downloading or streaming of digital media, which is supposed to be an alternative for manufacturing and shipping DVDs, CDs, books, magazines or newspapers.
Some examples: a 2011 study concluded that "by replacing one in four plane trips with videoconferencing, we save about as much power as the entire internet consumes", while a 2014 study found that "videoconferencing takes at most 7% of the energy of an in-person meeting". [22,23] Concerning digital media, a 2014 study concludes that shifting all DVD viewing to video streaming in the US would represent savings equivalent to the primary energy used to meet the electricity demand of nearly 200,000 US households per year. A 2010 study found that streaming a movie consumed 30 to 78% of the energy of traditional DVD rental networks (where a DVD is sent over the mail to the customer who has to send it back later).
Because the estimates for the energy intensity of the internet vary by four orders of magnitude, it's easy to engineer the end result you want.
There are some fundamental problems with these claims. First of all, the results are heavily influenced by how you calculate the energy use of the internet. If we look at the energy use per bit of data transported (the "energy intensity" of the internet), results vary from 0.0064 to 136 kilowatt-hours per gigabyte (kWh/GB), a difference of four orders of magnitude. [13,19] The researchers who made this observation conclude that "whether and to what extent it is more energy efficient to download a movie rather than buying a DVD, or more sustainable to meet via videoconferencing instead of travelling to a face-to-face meeting are questions that cannot be satisfyingly answered with such diverging estimates of the substitute's impact."
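To make the spread concrete, here is a single movie download priced at both extremes of the published range. The 4.7 GB movie size (one single-layer DVD's worth of data) is our assumption; the kWh/GB extremes are the ones quoted above.

```python
# One movie download, priced at the extremes of the energy-intensity range.
movie_gb = 4.7                # assumed: one single-layer DVD's worth of data
low_kwh_per_gb = 0.0064       # lowest published estimate (kWh/GB)
high_kwh_per_gb = 136         # highest published estimate (kWh/GB)

low_kwh = movie_gb * low_kwh_per_gb     # ~0.03 kWh
high_kwh = movie_gb * high_kwh_per_gb   # ~639 kWh

# The same download is either negligible or very substantial, depending
# entirely on which estimate you pick -- a roughly 21,000-fold spread.
print(f"{low_kwh:.2f} kWh to {high_kwh:.0f} kWh")
```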
To make matters worse, researchers have to make a variety of additional assumptions that can have a major impact on the end result. If videoconferencing is compared to a plane trip, what's the distance travelled? Is the plane full or not? In what year was it built? On the other hand, how long does the videoconference take? Does it happen over a wired or a wireless access network? Do you use a laptop or a high-end telepresence system? When you're streaming music, do you listen to a song once or twenty times? If you buy a DVD, do you go to the store by car or by bike? How long is the trip? Do you only buy the DVD or do you also shop for other stuff?
Time and Distance
All these questions can be answered in such a way that you can engineer the end result you want. That's why it's better to focus on the mechanisms that favour the energy efficiency of online and offline services, what scientists call a "sensitivity analysis". To be fair, most researchers perform such an analysis, but its results usually don't make it into the introduction of the paper, let alone into the accompanying press release.
One important difference between online and offline services is the role of time. Online, energy use increases with the time of the activity. If you read two articles instead of one article on a digital news site, you consume more energy. But if you buy a newspaper, the energy use is independent of the number of articles you read. A newspaper could even be read by two people so that energy use per person is halved.
High-end telepresence system. Source: Wikipedia Commons. Courtesy of Tandberg Cooperation.
Next to time there is the factor of distance. Offline, the energy use increases with the distance, because transportation of a person or product makes up the largest part of total offline energy consumption. This is not the case with online activities, where distance has little or no effect on energy consumption.
A sensitivity analysis generates very different conclusions from the ones that are usually presented. For example: streaming a music album over the internet 27 times can use more energy than manufacturing and transporting its CD equivalent. Or: reading a digital newspaper on a desktop PC uses more energy than reading a paper version once the reading time exceeds an hour and a quarter, assuming the paper is read by one person. Or: in the earlier mentioned study about the energy advantage of videoconferencing, reducing the international participant's travel distance from 5,000 to 333 km makes travelling in person more energy efficient than videoconferencing with a high-end telepresence system. Similarly, if the online conference takes not 5 but 75 hours, it's more energy efficient to fly 5,000 km.
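The role of time in these comparisons can be captured in a minimal break-even model: a CD carries a one-off energy cost, while streaming costs energy per play. The input numbers below are illustrative assumptions of ours, chosen only to reproduce the shape of the 27-listens result; they are not figures from the cited studies.

```python
# Minimal break-even model for the "time factor" of online services.
def break_even_plays(cd_energy_kwh, stream_energy_kwh_per_play):
    """Plays after which streaming has used more energy than the CD."""
    return cd_energy_kwh / stream_energy_kwh_per_play

# If making and shipping a CD costs 2.7 kWh (assumed) and one streamed
# playback of the album costs 0.1 kWh (assumed), the crossover comes
# after 27 plays -- the same shape as the result quoted above.
print(break_even_plays(2.7, 0.1))  # -> 27.0
```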
The energy efficiency advantage of videoconferencing looks quite convincing, because 75-hour meetings are not very common. However, we still have to discuss what is the most important problem with studies that claim energy efficiency advantages for online services: they usually don't take into account rebound effects. A rebound effect refers to the situation in which the positive effect of technologies with improved efficiency levels is offset by systematic factors or user behaviour. For example, new technologies rarely replace existing ones outright, but instead are used in conjunction with one another, thereby negating the proposed energy savings. 
Not every videoconference call is a substitute for physical travel. It can also replace a phone call or an email, and in these cases energy use goes up, not down. Likewise, not every streamed video or music album is a substitute for a physical DVD or CD. The convenience of streaming and the advance of portable end-use devices with wireless access leads to more video viewing and music listening hours, at the expense of other activities which could include reading, observing one's environment, or engaging in a conversation.
A videoconference can also replace a phone call or an email, and in these cases energy use goes up, not down.
Because the network infrastructure of the internet is becoming more energy efficient every year -- the energy use per bit of data transported continues to decrease -- it's often stated that online activities will become more energy efficient over time, compared to offline activities.  However, as we have seen, the bit rate of digital content online is also increasing.
This is not only due to the increasing popularity of video applications, but also because of the increasing bit rate of the videos themselves. Consequently, future efficiency improvements in the network infrastructure will bring higher quality movies and videoconferencing, not energy savings. According to several studies, bit rates increase faster than energy efficiency so that green gains of online alternatives are decreasing. [23,24,25]
Efficiency Drives Energy Use
The rebound effect is often presented as a controversial issue, something that may or may not exist. But at least when it comes to computing and the internet, it's an ironclad law. The rebound effect manifests itself undoubtedly in the fact that the energy intensity of the internet (energy used per unit of information sent) is decreasing while total energy use of the internet is increasing.
It's also obvious in the evolution of microprocessors. The electricity use in fabricating a microprocessor has fallen from 0.028 kWh per MHz in 1995 to 0.001 kWh per MHz in 2006 as a result of improvements in manufacturing processes.  However, this has not caused a corresponding reduction of energy use in microprocessors. Increased functionality -- faster microprocessors -- has cancelled out the efficiency gains per MHz. In fact, this rebound effect has become known as Moore's Law, which drives progress in computing. [28,29]
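The rebound can be sketched numerically. The kWh-per-MHz figures come from the text; the typical clock speeds are rough assumptions of ours, used only for illustration.

```python
# Efficiency gains per MHz versus faster chips (rebound effect sketch).
kwh_per_mhz = {1995: 0.028, 2006: 0.001}     # fabrication energy (from text)
assumed_clock_mhz = {1995: 100, 2006: 3000}  # typical desktop CPUs (assumed)

for year in (1995, 2006):
    kwh = kwh_per_mhz[year] * assumed_clock_mhz[year]
    print(year, round(kwh, 1), "kWh per processor")

# A 28-fold efficiency gain per MHz, yet the per-chip figure barely moves
# (2.8 vs 3.0 kWh), because clock speeds grew about as fast as efficiency.
```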
In other words, while energy efficiency is almost universally presented as a solution for the growing energy use of the internet, it's actually the cause of it. When computers were still based on vacuum tubes instead of transistors on a chip, the power used by one machine could be as high as 140 kilowatts. Today's computers are at least a thousand times more energy efficient, but it's precisely because of this improved energy efficiency that they are now on everybody's desk and in everybody's pocket. Meanwhile, the combined energy use of all these more efficient machines exceeds the combined energy use of all vacuum tube computers by several orders of magnitude.
In conclusion, we see that the internet affects energy use on three levels. The primary level is the direct impact through the manufacturing, operation and disposal of all devices that make up the internet infrastructure: end-use devices, data centers, the network, and manufacturing. On a second level, there are indirect effects on energy use due to the internet's power to change things, such as media consumption or physical travel, resulting in a decrease or increase of energy use. On a third level, the internet shifts consumption patterns, brings technological and societal change, and contributes to economic growth. [28,29] The higher system levels are vastly more important than the direct impacts, despite receiving very little attention.
"[The internet] entails a progressive globalisation of the economy that has thus far caused increasing transportation of material products and people... The induction effect arising from the globalisation of markets and distributed forms of production due to telecommunication networks clearly leads away from the path of sustainability... Finally, the information society also means acceleration of innovation processes, and thus ever faster devaluation of the existing by the new, whether hardware or software, technical products or human skills and knowledge." 
Nobody can deny that the internet can save energy in particular cases, but in general the overwhelming trend is towards ever-higher energy use. This trend will continue unabated if we don't act. There's no constraint on the bit rate of digital data. Blu-ray provides a superior viewing experience, with data sizes ranging between 25 and 50 GB -- five to ten times the size of an HD video. With viewers watching 3D movies at home, we can imagine future movie sizes of 150 GB, while holographic movies go towards 1,000 GB.
Nor is there any constraint on the bit rate of wireless internet connections. Engineers are already preparing the future launch of 5G, which will be faster than 4G but also use more energy. There's not even a constraint on the number of internet connections. The concept of the "internet of things" foresees that in the future all devices could be connected to the internet, a trend that's already happening. [4,11] And let's not forget that for the moment only 40% of the global population has access to the internet.
There are no limits to growth when it comes to the internet, except for the energy supply itself. Picture: Gongashan.
In short, there are no limits to growth when it comes to the internet, except for the energy supply itself. This makes the internet rather unique. For example, while the rebound effect is also very obvious in cars, there are extra limits which impede their energy use from increasing unabated. Cars can't get larger or heavier ad infinitum, as that would require a new road and parking infrastructure. And cars can't increase their speed indefinitely, because we have imposed maximum speed limits for safety. The result is that the energy use of cars has more or less stabilized. You could argue that cars have achieved a status of "sufficiency":
"A system consuming some inputs from its environment can either increase consumption whenever it has the opportunity to do so, or keep its consumption within certain limits. In the latter case, the system is said to be in a state of sufficiency... A sufficient system can improve its outputs only by improving the efficiency of its internal process." 
The performance of cars has only increased within the limits of the energy efficiency progress of combustion engines. A similar effect can be seen in mobile computing devices, which have reached a state of sufficiency with regard to electricity consumption -- at least for the device itself.  In smartphones, energy use is limited by a combination of battery constraints: energy density of the battery, acceptable weight of the battery, and required battery life. The consequence is that the per-device energy use is more or less stable. The performance of smartphones has only increased within the limits of the energy efficiency progress of computing (and to some extent the energy density progress of batteries). 
A Speed Limit for the Internet
In contrast, the internet has very low sufficiency. Online, greater size and speed are neither impractical nor dangerous. Batteries limit the energy use of mobile computing devices, but not the energy use of all the other components of the network. Consequently, the energy use of the internet can only stop growing when energy sources run out, unless we impose self-chosen limits, similar to those for cars or mobile computing devices. This may sound strange, but it's a strategy we also apply quite easily to thermal comfort (lower the thermostat, dress better) or transportation (take the bike, not the car).
Limiting the demand for data could happen in many ways, some of which are more practical than others. We could outlaw the use of video and turn the internet back into a text and image medium. We could limit the speed of wireless internet connections. We could allocate a specific energy budget to the internet. Or, we could raise energy prices, which would simultaneously affect the offline alternatives and thus level the playing field. The latter strategy is preferable because it leaves it to the market to decide which applications and devices will survive.
Setting a limit would not stop technological progress. Advances in energy efficiency will continue to give room for new devices and applications to appear.
Although none of these options may sound attractive, it's important to note that setting a limit would not stop technological progress. Advances in energy efficiency will continue to give room for new devices and applications to appear. However, innovation will need to happen within the limits of energy efficiency improvements, as is now the case with cars and mobile computing devices. In other words: energy efficiency can be an important part of the solution if it is combined with sufficiency.
Limiting demand would also imply that some online activities move back to the off-line world -- streaming video is candidate number one. It's quite easy to imagine offline alternatives that give similar advantages for much less energy use, such as public libraries with ample DVD collections. Combined with measures that reduce car traffic, so that people could go to the library using bikes or public transportation, such a service would be both convenient and efficient. Rather than replacing physical transportation by online services, we should fix the transport infrastructure.
In the next articles, we investigate the low-tech information networks that are being developed in poor countries. There, "sufficiency" is ingrained in society, most notably in the form of a non-existing or non-reliable energy infrastructure and limited purchasing power. We also discuss the community networks that have sprung up in remote regions of rich countries, and the designs for shared networks in cities. These alternative networks provide much more energy efficient alternatives for digital communication in exchange for a different use of the internet.
Kris De Decker (Edited by Jenna Collett)
 Even the most complete studies about the internet's energy use do not take into account all components of the infrastructure. For example, the embodied energy of the energy plants which are used to power the internet is completely ignored. However, if you run a data center or cellular tower on solar energy, it's obvious that the energy it took to produce the solar panels should be included as well. The same goes for the batteries that store solar energy for use during the night or on cloudy days.
 "The cloud begins with coal: big data, big networks, big infrastructure, and big power" (PDF), Mark P. Mills, National Mining Association / American Coalition for Clean Coal Electricity, augustus 2013
 "SMARTer2030 -- ICT Solutions for 21st Century Challenges" (PDF), Global e-Sustainability Initiative, 2015
 "Emerging trends in electricity consumption for consumer ICT", Peter Corcoran, 2013
 "Key Electricity Trends" (PDF), IEA Statistics, 2015
 Of the total, 852 TWh was consumed by end-use devices, 352 TWh by networks, 281 TWh by data centers, and 330 TWh during the manufacturing stage.
 The researchers also provide a "best case scenario" in which energy use increases only slightly. However, this scenario is already superseded by reality. It supposes slow growth of wireless data traffic and digital TVs, but the opposite has happened, as the Cisco Visual Networking Index shows. Furthermore, the best-case scenario supposes a year-on-year improvement in energy efficiency of 5% for most device categories and an annual improvement in efficiency of the core network of 15%. These figures are well above those of past years and thus not very likely to materialize. The expected growth scenario supposes wireless traffic to grow to 9% of total network electricity consumption, and digital TV to stabilize at 2.1 billion units. In this scenario, energy efficiency improvements for devices are limited to 2% per year, while energy efficiency in the core network is limited to 10% per year. In the worst case scenario, wireless traffic grows to 15% of total network electricity consumption, digital TV will keep growing, and improvements in energy efficiency are limited to 1-5% annually for devices and to 5% in the core network.
 "Measuring the Information Society Report 2014" (PDF), International Telecommunication Union (ITU), 2014
 "Cisco Visual Networking Index: Global Mobile Data Traffic Forecast Update, 2014-2019", CISCO, 2015.
 "Small network equipment key product criteria", Energy Star, retrieved September 2015.
 "The energy intensity of the internet: home and access networks" (PDF), Vlad Coroama, 2014
 "A close examination of performance and power characteristics of 4G LTE networks" (PDF), Junxian Huang, June 2012.
 "Energy consumption in mobile phones: a measurement study and implications for network applications" (PDF), Niranjan Balasubramanian, 2009
 "4G networks to cover more than a third of the global population this year, according to new GSMA intellligence data", GSMA Intelligence, 2015
 Network equipment manufacturer Cisco notes in its 2015 report that "as mobile network capacity improves and the number of multiple device users grow, operators are more likely to offer mobile broadband packages comparable in price and speed to those of fixed broadband."  If this becomes true, and a majority of internet users would routinely connect to the internet through 4G broadband, the energy use of the network infrastructure would more than double, assuming data traffic would remain the same.  That's because from an energy perspective, the access network is the greedy part of any service provider's network. The core network of optic cables is much more energy efficient. 
 "Are we sitting comfortably? Domestic imaginaries, laptop practices, and energy use". Justin Spinney, 2012
 "Demand in my pocket: mobile devices and the data connectivity marshalled in support of everyday practice" (PDF), Caolynne Lord, Lancaster University, april 2015
 "Towards a holistic view of the energy and environmental impacts of domestic media and IT", Oliver Bates et al., 2014
 "Cisco Visual Networking Index 2012-2017", Cisco, 2013
 "The energy and emergy of the internet" (PDF), Barath Raghavan and Justin Ma, 2011
 "Comparison of the energy, carbon and time costs of videoconferencing and in-person meetings", Dennis Ong, 2014
 "Shipping to streaming: is this shift green?", Anand Seetharam, 2010
 "MusicTank report focuses on environmental impact of streaming platforms", CMU, 2012
 "Screening environmental life cycle assessment of printed, web based and tablet e-paper newspaper", Second Edition, Asa Moberg et al, 2009
 "Information Technology and Sustainability: essays on the relationship between ICT and sustainable development", Lorenz M. Hilty, 2008
 "Environmental effects of informantion and communications technologies", Eric Williams, Nature, 2011
 "Computing Efficiency, Sufficiency, and Self-Sufficiency: A Model for Sustainability?" (PDF), Lorenz M. Hilty, 2015
- The Monster Footprint of Digital Technology.
- How Sustainable is PV Solar Power?
- How Sustainable is Stored Sunlight?
Artwork by Grace Grothaus.
Most modern heating systems are primarily based on the heating of air. The old way of warming was based upon radiation and conduction, which have the potential to be more energy-efficient than convection.
While convection implies the warming of each cubic centimetre of air in a space in order to keep people comfortable, radiation and conduction can directly transfer heat to people, making energy use independent of the size of a room or building.
However, restoring the old way of warming would not make sense without new technology. Most heating systems from the old days were inefficient, polluting, dangerous and impractical. Today, we have better options available, which can be combined in interesting ways.
Picture: An electric radiant heating panel or "dark radiator" mounted to the wall. Source: EasyTherm.
As we have seen in the previous article, heat can be transferred through convection (warming the air), conduction (direct physical contact) and radiation (electromagnetic energy). Most modern heating systems heat by convection, but it's important to note that conductive and radiant heat sources also heat the air.
This is especially relevant for radiant heating sources. A 100% radiant heater doesn't exist. The sun produces 100% radiation, but it sits in a vacuum. On earth, the surface of a heating system will always make contact with air, which is warmed by conduction and rises. Therefore, a radiant heating device is defined as a heating device in which the share of radiant heat in the total heat transfer is equal to or larger than 50%.
Tile Stoves or Masonry Heaters
There's one heating system from earlier times that's still very much recommendable: the tile stove or masonry heater. In fact, there exists no modern heating device that can match the comfort and energy efficiency of a tile stove with integrated bench seat or sleeping platform, an appliance that combines heat transfer by radiation and conduction. A tile stove provides a central place of comfort and coziness that modern air heating systems no longer offer us.
A tile stove or masonry heater is also the most efficient and least polluting way to heat with wood. It achieves this by a high thermal mass, which allows wood combustion at very high temperatures without overheating the room. Because of their high combustion efficiency (close to 100%), and high heating efficiency (up to 90%), tile stoves use much less wood and produce much less air pollution than a common wood stove or a fireplace. 
A modern tile stove, made by a craftsman. Source: De Meiboom.
Furthermore, they have to be fired only once or twice a day and keep radiating heat for about 12 to 24 hours, greatly reducing the work required to heat a building by wood. In contrast, a wood stove or a fireplace demand constant attention. Lastly, because it burns wood cleanly, and because most of the heat is given off to the masonry structure, a chimney fire is almost impossible.
The tile stoves of today are not comparable to those of yesteryear. Great improvements were made in the eighteenth century, resulting in a more efficient heating device -- the Swedish "kakelugn". The design was further improved in the 1970s by the Finns, and the technology keeps evolving. These days, masonry heaters can be built by craftsmen, or put together from premanufactured parts. The second option is much cheaper, but it limits you to the available forms and sizes. When a tile stove is built by a craftsman, any form is possible.
The Drawbacks of Tile Stoves
The superior comfort and efficiency of the tile stove is not for everybody, though. First of all, they are by far the largest and heaviest heating systems around. Consequently, ample space and a sturdy floor should be available. Logically, a tile stove or masonry heater also requires a chimney. Furthermore, although tile stoves made from modular parts can be moved, those built by craftsmen are forever attached to the house they were built in, so they're not such an attractive option for tenants.
Another disadvantage of a tile stove is that it can't provide heat quickly. The high thermal mass of the structure delays the heat transfer to the room. This is not a problem for spaces which are frequently used, because thermal comfort can be maintained by firing the stove once or twice per day, depending on the weather conditions. However, if there's nobody around to fire the stove at least once a day, people arrive in a cold house with a heating appliance that will work on full capacity only many hours later. The convenience of our modern lifestyle might make us see this as a bigger problem than we would have 150 years ago.
The high thermal mass of a tile stove makes it best suited for frequently used rooms and for persistently cold weather
Because of its slow responsiveness, a tile stove or masonry heater is also better suited for persistently cold weather than for a rapidly fluctuating climate. If you have burnt too much wood in the morning, there's no way to lower the heat production of the tile stove in the afternoon; for instance, when the sun unexpectedly breaks through the clouds and quickly warms up the house.
A tile stove, made by a craftsman. Source: Lehm und Feuer.
Likewise, if you have burnt insufficient wood, there's no way to raise heat production in case the outside temperature drops unexpectedly. You always have to wait for the next fire cycle to adapt the heat output of the stove, which means you have to guess what the weather is going to be like in the next 12 to 24 hours.
Finally, like any other radiant heating source, a tile stove only provides warmth in the space that it's built in, not in other rooms. To counter this issue, tile stoves can be constructed in each room, and large tile stoves can be built through floors or walls to distribute warmth throughout a building. However, since it concerns a large and heavy heating device, these plans would require lots of money, time, and a very sturdy building. The tile stove is thus better suited for large spaces than for buildings with many smaller rooms.
Rocket Mass Heaters
Many features of the tile stove also apply to the rocket mass heater, its low-tech cousin. The rocket mass heater only appeared in the 1980s, resulting from research into more efficient cooking stoves. It heats more by conduction than by radiation; it uses the benchwork around the heater to guide hot smoke gases towards the chimney. The heater itself -- usually a metal barrel -- is rather small. Most of the heat of the fire is stored in the masonry mass of the benchwork, from where it's slowly released.
Rocket mass heaters have some important advantages over tile stoves. They are less heavy and bulky, and they are much cheaper and easier to build. On the other hand, they are less efficient, have to be fired more regularly, require more maintenance than tile stoves, and they are just as slow to respond. This makes them best suited for frequently used spaces and persistently cold climates. Rocket mass heaters also require long, straight wood.
Despite these drawbacks, a rocket mass heater is still a much more efficient and comfortable choice than a common wood stove. If conditions are right and you can't afford a masonry heater, building yourself a rocket mass heater is still a viable option. A warning, though, from the man who invented the device, Ianto Evans: "These stoves have not been in regular use long enough to determine the real risk of chimney fires, so inspect your chimney often." 
Thermally Active Building Surfaces
With the arrival of the public water supply in the nineteenth century, a new radiant heating system appeared: building surfaces heated by hot water running through a circuit of metal pipes. While these systems are generally known as radiant or heated floors, we prefer the term "thermally active building surfaces", as this technology also works in walls and ceilings, and it can not only heat but also cool a building, which is achieved by running cold water through the pipes. 
The heating of building surfaces, of course, took place long before the nineteenth century. The Romans warmed their bath houses and large villas with hypocausts, central heating systems that distributed the heat from an underground fire through flues in floors and (sometimes) walls. However, because of its high energy density, water is a better medium for heat transfer than smoke. The water pipes can be much smaller than the flue pipes, and the fire risk is greatly reduced. Nowadays, both copper pipes and polyethylene ("PEX") tubes can be used to distribute hot (and cold) water.
Unlike tile stoves, thermally active building surfaces distribute warmth evenly throughout a space.
Like tile stoves, thermally active building surfaces have a high thermal mass, which means that they can't give off heat quickly. Because of this, they are best suited for frequently used spaces and steady, cold temperatures. Unlike tile stoves, however, they distribute their warmth evenly throughout a space, which means they're not suited to localised heating. With a thermally active building surface, the whole room will be comfortable, regardless of how many people are inside and how much space is being occupied.
A radiant floor under construction. Source: Wikipedia Commons.
The main advantage of thermally active building surfaces is that they eliminate radiant temperature asymmetry: there are no large differences in temperature throughout the space. There's no need for local insulation (hooded chairs, folding screens -- see the previous article) to provide thermal comfort. On the downside, when compared with air heating systems, energy savings are usually rather small, unless the water is heated by a solar collector or a heat pump.
Because of the large heating surface, water temperatures can be relatively low, usually less than 30ºC (86ºF). Heat pumps and solar collectors are very efficient in delivering these low temperatures. The water for a heated building surface can also be warmed by a tile stove (provided it has enough heating capacity), which is yet another way to distribute the heat from a tile stove throughout a building.
Interestingly, thermally active building surfaces don't really heat us through radiation. Because the temperature of the heating surface is usually lower than our skin temperature, it is in fact us who are heating the building surfaces by radiation. However, heated building surfaces limit the radiant heat loss from our body to the environment, providing thermal comfort in another way. They also produce a significant share of convection (especially for floor heating), while heated floors and walls can provide warmth through conduction.
The West facade of the Terrence Donnelly Centre for Cellular and Biomolecular Research in Toronto, Canada, a building that is heated and cooled by thermally active building surfaces. Picture: University of Toronto.
The main disadvantage of heated building surfaces is that they require radical building renovation, because the floor, the wall or the ceiling has to be broken away and built up again. Furthermore, thermal insulation is a necessity for outer walls or a great deal of heat will be lost to the outside.
Thermally active building surfaces are almost a logical choice for new buildings, at least for frequently used spaces and especially in climates that also need cooling during the summer months. However, if we are looking for solutions to decrease energy use in the buildings that we already have, and to save energy in temporarily heated spaces, we should look for other options.
Infrared Heating Panels
The most recent radiant heating systems are infrared panels, which can be operated by electricity or hot water. They can be useful both as an alternative or as a complement to a tile stove or a heated building surface. Hydronic (water-based) radiant heating panels came on the market some 50 years ago, while electric radiant heating panels date from the late 1990s. Both technologies have evolved a lot in recent years. 
Radiant heating panels have little or no thermal mass and can produce heat very quickly
Like tile stoves, radiant panels heat locally, creating warmer micro-climates within a cooler space. However, because infrared heating panels have a thin metal heating surface with little or no thermal mass, they can produce heat quickly. This makes them interesting options for use in less frequently used spaces and in more changeable climates, situations in which tile stoves, rocket mass heaters, and thermally active building surfaces are less beneficial. Because radiant heating panels can provide warmth quickly, a room need only be heated when somebody enters it.
Electric longwave infrared heating panels. Source: EasyTherm.
Radiant heating panels have more advantages over older systems. For example, they are as light and compact as a tile stove is heavy and bulky, and, unlike heated building surfaces, they are easy to install in an existing building. Radiant panels can be mounted on the walls or the ceiling, they can be free-hanging, or recessed into a suspended ceiling system.
This makes it practical to use them in multiple rooms, and it also makes them suitable for tenants, who can take their heating system with them when they move to another place. On the downside, the heating surface of a radiant panel cannot be touched safely because burns would occur immediately. This means that heat transfer through conduction is impossible.
Hydronic or Electric?
In hydronic panels, heated water flows through plastic or copper tubes attached to a metal plate, which then radiates the heat into the space. Electric panels look very similar, but the heat is produced by electric resistance. Like water-based thermally active building surfaces, hydronic radiant panels can also cool a building, something that electric radiant heating panels can't do. On the other hand, electric panels are easier to install and even more responsive than hydronic panels -- it takes less than 5 minutes before an electric panel radiates heat at full power. 
Hydronic radiant heating panels should not be confused with the so-called "radiators" that are common in many European buildings. While these are hydronic heating systems too, their design is aimed at producing the largest share of convection possible (which is why they should actually be called "convectors"). The radiant metal surfaces of a "radiator" are facing each other, so that most of the heating surface can't radiate energy to people directly.
Hydronic heating and cooling radiant panels in a sports hall, aimed at the audience. Source: Zehnder ZBN.
Instead, they radiate energy to each other, heating the air in between the panels through conduction, which then rises and heats a space by convection. Another difference is that "radiators" have lower surface temperatures than infrared panels. As a consequence, the share of radiant heat in the total heat transfer is only 20-30%. The same goes for electric "radiators". 
Concerning electric radiant heating panels, it's important to note that we are talking about electric longwave infrared heaters. These are not to be confused with the older -- and much better known -- electric shortwave infrared heaters, which produce a glowing red light when in operation. Longwave radiant heaters produce no visible light (they are "dark radiators") and have much lower surface temperatures. Both technologies have a different effect on health, which we will discuss at the end of this article.
Electric longwave infrared heaters are not to be confused with the older and much better known electric shortwave infrared heaters, which produce a glowing red light when in operation
Infrared heating panels are the perfect addition to a high mass radiant heating system. For instance, an infrared heating panel can heat up (part of) a room quickly while the tile stove comes to speed, which solves the comfort problem for people who keep irregular schedules. Likewise, the combination of a "fast" and a "slow" radiant heating source offers more possibilities when dealing with changeable weather conditions. Different radiant heating sources can also complement each other in different rooms of the same building. For instance, a tile stove in the living room can be combined with radiant heating panels in less frequently used bedrooms and bathrooms.
Hydronic radiant heating and cooling panels. Source: Zehnder ZBN.
However, it's important to keep in mind that radiant heating panels lose part of their efficiency advantage over high mass radiant systems when they are used continuously in frequently occupied rooms. This is especially true for electric radiant panels, which experience large energy conversion losses in the power plant. Electric radiant heating panels might also lose their efficiency advantage over air heating systems if they are used to heat a whole space instead of creating micro-climates (see the next article).
Hybrid Heating Systems
Some radiant heating technologies blur the lines between the systems we have discussed. For example, some electric and hydronic radiant heating panels have a high thermal mass of natural stone, which basically turns them into an electric or hydronic masonry heater. The high thermal mass lowers the surface temperature, so these heating elements can also provide heat transfer through conduction when we lean against them.
A hydronic stone radiator. Source: The-Radiators.
Conversely, some electric and hydronic heating systems create thermally active building surfaces with little or no thermal mass, using mats (electricity) or interconnected prefabricated lightweight panels (water) that can be attached to a building surface. These systems can be just as responsive as radiant panels, but they distribute warmth throughout a space rather than locally. They are also easier to install than high thermal mass systems.
Modular radiant heating panels that can be interconnected to build a thermally active building surface. Source: Ray Magic.
Vertical or Horizontal Radiant Heat?
As was noted at the beginning of the article, every radiant heating source also warms the air. However, the share of radiation in the total heat transfer of a radiant heat source can vary between 50 and 95%, mainly depending on the orientation of the radiant heating surface. Downward-facing radiant heating surfaces reach the highest share of radiation (up to 95%), sideways-facing surfaces obtain 60-70% radiant heat transfer, and upward-facing surfaces reach only a 50-60% share.
The large influence of surface orientation has everything to do with the natural, upward movement of hot air. Because downward convection doesn't exist -- warm air always rises -- a downward-facing radiant heat surface produces almost no heating of the air. As a consequence, ceiling-mounted radiant heating surfaces are the most energy efficient: to produce a similar amount of radiation as a downward-facing radiant heating panel of 250 watts, a sideways-oriented panel requires 325 watts and an upward-facing panel 350 watts.
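As a rough sanity check, the relation between a panel's total power and its radiant output can be sketched in a few lines of Python. The radiant fractions below are the approximate shares quoted above; the small gap between this simple ratio and the 325 W and 350 W figures presumably reflects rounding and the measurement conditions behind those numbers.

```python
# Back-of-the-envelope check: total panel power needed to match the
# radiant output of a 250 W ceiling panel, given each orientation's
# approximate radiant fraction (figures quoted in the article).

RADIANT_FRACTION = {
    "ceiling (downward)": 0.95,
    "wall (sideways)": 0.70,
    "floor (upward)": 0.60,
}

def required_power(target_radiant_watts, radiant_fraction):
    """Total panel power needed to emit a given radiant output."""
    return target_radiant_watts / radiant_fraction

# Radiant output of a 250 W ceiling panel:
target = 250 * RADIANT_FRACTION["ceiling (downward)"]

for orientation, fraction in RADIANT_FRACTION.items():
    print(f"{orientation}: {required_power(target, fraction):.0f} W")
```

The ordering comes out the same as in the text: the lower the radiant fraction, the more total power a panel must draw to deliver the same radiation.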
While a ceiling-mounted panel maximizes radiant heat production, a vertically positioned panel maximizes radiant heat reception
However, the high share of radiant heat for downward facing heating panels doesn't mean that the ceiling is by definition the most appropriate place for a radiant heat source. Humans find themselves mostly in a vertical position during waking hours, either standing up or sitting down. So while a ceiling-mounted panel maximizes radiant heat production, a vertically positioned panel maximizes radiant heat reception. 
A section of silhouettes of a subject in various postures, corresponding to the areas illuminated by the sun's rays at the angles of altitude and azimuth shown. From: "Human Thermal Environments: The Effects of Hot, Moderate, and Cold Environments on Human Health, Comfort, and Performance, Third Edition", Ken Parsons, 2014
A larger part of the body will be irradiated directly when the heating surface is vertical -- see the illustration above. If the heating surface is aimed upwards or downwards, most radiant heat will pass along the body, limiting the direct heating effect. A ceiling-mounted panel only maximizes radiant heat reception when we are lying down -- but then we are mostly sleeping and under the covers.
Another reason to opt for a vertically oriented radiant heating surface is radiant temperature asymmetry. In the previous article we have seen that the human body can experience large differences in temperature when it's warmed by a local radiant heating source. A person sitting in front of an open fire will receive sufficient radiant heat on one side of their body, while the other side loses heat to the cold air and surfaces at the opposite half of the room. However, the sensitivity to radiant temperature asymmetry is heavily influenced by the orientation of the heating source.
Humans are least sensitive to the radiant temperature asymmetry caused by a warm, vertical surface such as a tile stove or a wall-mounted infrared heating panel. The difference in radiant temperature can reach up to 35ºC (63ºF) before 1 in 10 people will complain about thermal discomfort. However, in the case of a warm, downward-facing radiant heat source, complaints have been noted at a temperature difference of only 4-7ºC (7-13ºF). When the temperature difference amounts to 15ºC (27ºF), 50% of subjects report thermal discomfort. This is because the head is the body part that is most sensitive to heat.
The sensitivity to a hot surface above our heads is not a problem when the whole ceiling is converted into a radiant heating source, as is the case with a thermally active ceiling. Because of the large heating surface, the radiant temperature of such a system can be very low, often below skin temperature. However, the much higher temperatures of electric or hydronic radiant heating panels could make temperature asymmetry problematic for some people.
Are Radiant Heating Systems Safe?
There's an important difference between the radiation originating from the sun, and the radiation that's produced by the radiant heating systems discussed here. The sun is much hotter, and it's the temperature of an object that determines which wavelengths of the electromagnetic spectrum dominate: the higher the temperature, the higher the share of shortwave radiation. Because the sun has a very high surface temperature, solar radiation also produces significant amounts of harmful ultraviolet and shortwave infrared waves, which is why we are advised against spending too much time in the sun. 
Longwave infrared radiation doesn't penetrate the skin and is harmless. However, excessive use of shortwave infrared heaters or conductive heating systems could lead to a skin condition called Erythema ab igne.
However, if the surface temperature remains below 100ºC (212ºF), as is the case with all the radiant heating systems we have discussed, longwave infrared dominates the heat transfer. Longwave infrared radiation doesn't penetrate the skin and is harmless, which is fortunate considering our bodies constantly exchange longwave infrared radiation with other bodies around us. 
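The link between surface temperature and the dominant wavelength can be made concrete with Wien's displacement law, which says the peak emission wavelength of a (black-body) surface is inversely proportional to its absolute temperature. A minimal sketch, using the standard Wien constant and illustrative temperatures:

```python
# Wien's displacement law: peak emission wavelength (µm) = 2898 / T (K).
# Hotter surfaces peak at shorter wavelengths; below roughly 100 ºC the
# peak sits firmly in the longwave infrared.

WIEN_CONSTANT_UM_K = 2898.0  # µm·K

def peak_wavelength_um(celsius):
    """Peak emission wavelength of a black body at the given temperature."""
    kelvin = celsius + 273.15
    return WIEN_CONSTANT_UM_K / kelvin

print(f"Sun (~5500 ºC surface):  {peak_wavelength_um(5500):.2f} µm")  # visible light
print(f"Radiant panel at 100 ºC: {peak_wavelength_um(100):.1f} µm")   # longwave infrared
print(f"Human skin at ~33 ºC:    {peak_wavelength_um(33):.1f} µm")    # longwave infrared
```

A panel at 100ºC peaks around 8 µm, close to the roughly 9.5 µm our own skin radiates at, which is why its emission stays in the harmless longwave band.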
Fireplaces, wood stoves, and shortwave radiant space heaters are another matter, though. While their surface temperature is not as high as that of the sun, it's much higher than that of tile stoves, infrared panels or heated building surfaces. This means that they also emit shortwave radiation, and can have health consequences. 
A 22 year old female with toasted skin syndrome, caused by a shortwave radiant heating source. From "Diseases of the Skin", James H. Sequeira, 1915.
Erythema ab igne, also known as "thermal keratosis" or "toasted skin syndrome", is a skin condition that's caused by repeated and prolonged exposure to a heat source, resulting in patches on the skin. It's a benign dermatitis and the patches usually disappear a few months after the heat exposure ends. However, if the heat exposure continues, the patches can become permanent. These cases can eventually develop into skin cancer many years later, although this is rare. The main issue is cosmetic, but the effect is quite spectacular -- you might be tricked into thinking that it's a tattoo.
The condition was first described in the early 1900s, but it must have existed before this. In a 1920 medical handbook it's described as "a rare disease of the anterior surface of the legs, leading to permanent pigmentation and found in senile, weak or alcoholic individuals exposed to intense heat (firemen, stokers)".  Recent papers about the condition say that it was "fairly common in the elderly who stood or sat closely to open fires or wood stoves". The condition would typically present on the anterior shins and interior thighs because people would warm themselves right in front of the stove. 
Are Conductive Heating Systems Safe?
Nowadays, Erythema ab igne caused by a radiant heat source can appear in chefs and bakers (on the arms) and in jewellers, silversmiths and glassblowers (on the face) as an occupational disease. It has also been observed on the legs of women who cook daily on open fires while seated on the ground. Medical cases caused by sitting too close to a shortwave radiant heat source are still recorded, although nowadays it usually concerns red-glowing shortwave electric space heaters instead of fireplaces or wood stoves. There are no reports of Erythema ab igne being caused by longwave radiant heating sources.
However, modern conductive heating sources seem to present a risk. Electric and hydronic heating elements with lower surface temperatures can also be inserted in desks, tables, chairs or benches, or they can be used as portable heating pads. If you can't afford a radiant floor, you can opt for a water-heated carpet, for instance. Some of these technologies cross the boundary between furniture and apparel, such as heated bracelets or electrically heated clothes. Recent reports show Erythema ab igne appearing following the use of heating pads, car seat heaters, heating blankets, hot water bottles, and even laptops, hot baths and showers.
A cordless heated desk chair. Source: Hammacher.
All reported cases are due to very frequent use of a heat source. For example, two incidents concern a man who took 5 to 6 hot showers daily, and a girl who took a daily hot bath of 60-90 minutes.  A 16 year old boy developed patches on the neck after sleeping on a heated pillow each night for two months, with the first patches appearing after four weeks.  A case was reported of a woman who consistently used her car's heated seats while driving 2 to 4 hours per day for several years.  Most cases have been reported due to the use of hot water bottles to relieve chronic pains.
While some of these incidents are obviously the consequence of an excessive use of conductive heating, others are not. For example, the use of a conductive heat source for 2-4 hours per day -- like the woman in her car -- is not unlikely in a space where conductive heating plays an important role in providing thermal comfort. At the moment, only individual cases have been reported, and we don't know if Erythema ab igne appears in all people who use conductive heating sources for prolonged times, or just in exceptional cases. Nevertheless, it's obvious that conductive heating systems can have an effect on the skin, so caution is advised.
In the next article, we investigate the influence of local heating systems on energy use, thermal comfort, and indoor health.
 "Masonry Heaters: Designing, Building, and Living with a Piece of the Sun", Ken Matesz, 2010
 "Poêles à accumulation: le meilleur du chauffage au bois", Vital Bies and Marie Milesi, 2011
 "The Book of Masonry Stoves: Rediscovering an Old Way of Warming", David Lyle, 1984
 "Rocket Mass Heaters, Third Edition", Ianto Evans, Leslie Jackson, 2013
 "Thermally Active Surfaces in Architecture", Kiel Moe, 2010
 Radiant Heating and Cooling Handbook (Mcgraw-Hill Handbooks), Richard Watson, 2008
 Personal communication, Leo & Richard de Mos, Li-tech/Prestyl.
 "Faber & Kell's Heating & Air-conditioning of Buildings", Doug Oughton & Stephen Hodkinson, 2008
 "Beispielhafte Vergleichsmessung zwischen Infrarothstrahlungsheizung und Gasheizung im Altbaubereich", Peter Kosack, TU Kaiserslautern, 2009
 "Thermisch Binnenklimaat" (PDF), Atze Boerstra et al., 2008
 "Modeling Thermal Comfort with Radiant Floors and Ceilings", Z. Wang, 4th International Building Physics Conference 2009, June 15-18, Istanbul
 "ICNIRP Statement on Far Infrared Radiation Exposure", ICNIRP, 2006
 "ICNIRP Guidelines on Limits of Exposure to Incoherent Visible and Infrared Radiation" (PDF), ICNIRP, 2013
 "Dermatology: the essentials of cutaneous medicine", Walter James Highman, 1921
 "Some like it hot: Erythema Ab Igne due to Cannabinoid Hyperemesis", Ryan R. Kramer, 2014
 "Erythema ab igne caused by frequent hot bathing", Sung-Jan Ling, 2002
 "Thermal pillow: an unusual causative agent of erythema ab igne" (PDF), Enver Turan et al., 2013
 "Erythema ab igne: evolving technology, evolving presentation", Katarine Kesti et al., 2014
 "Erythema ab igne: a case report" (PDF), Melinda Mohr et al., 2005
- Restoring the Old Way of Warming: Heating People, not Places
- How to Keep Warm in a Cool House
- The Revenge of the Ceiling Fan
- Insulation: First the Body, then the Home
- Heat your Clothes, not your House
- The Solar Envelope: How to Heat and Cool Cities without Fossil Fuels
- Don't Heat your Room with Tea Candles
Combining the old local heating practices with modern radiant and conductive heating systems could lower energy consumption, improve health, and increase thermal comfort. This is especially true for uninsulated buildings, where air heating is particularly disadvantageous. Local heating sources can be applied on their own, but can also be used in combination with air heating. This may mean, however, that thermal comfort standards need to be redefined.
Illustration by Diego Marmolejo.
Heating is a huge source of fossil energy use in cooler climates. In the Netherlands, for instance, heating accounts for 20 to 25% of total primary energy use, despite relatively mild winters. This means heat supply guzzles as much fuel as transportation.  According to many, the solution to the high energy use of heating systems is to be found in better strategies for thermal insulation.
A well-insulated building can indeed lower energy use spectacularly, to the point that there's no need for a heating system: the heat produced by people, electric devices and the sun can ensure thermal comfort. Orientating a building (or a whole city) around the sun is another important design element that can render heating redundant. For new buildings, the design and the orientation are much more important factors for energy efficiency than the choice of the heating system, if that's needed at all.
When we talk about existing buildings, however, things look very different. There are several methods for insulating older buildings, but their effect on energy use is usually limited in comparison to what a new building can achieve. What's more, insulating existing buildings can be expensive and some of the easier-to-apply methods can cause problems, such as crack formation, frost damage, mould and rot.  And, of course, it's not easy to re-orientate an existing building toward the sun.
If we rely solely on insulation, solar energy and sustainable architecture, it would take too much time to address the high energy use of buildings. Based on the current yearly rate of new construction in the Netherlands, for example, it would take 88 years before the Dutch building stock would meet today's strict insulation standards. And that doesn't take into account the energy required to demolish old buildings and build new ones. If we are serious about reducing our dependence on fossil fuels, we'll also need to find affordable short-term solutions that can lower energy use in existing buildings.
Local Heating as an Alternative to Insulation
One previously discussed solution is clothing. Insulating the body is more efficient than insulating a building, and thermal underclothing is particularly effective. In this article, we discuss another solution, which can be applied on its own or in combination with better clothing: local heating.
Contrary to air heating, which distributes warmth throughout a space, radiant and conductive heating systems act much more locally; they can make people comfortable without having to warm the whole space. Radiant heating systems transfer energy through electromagnetic waves (in many ways similar to the energy coming from the sun), which are converted to heat when absorbed by the skin. Conductive heating systems warm the body through direct physical contact.
Infrared thermal image showing the heat loss of a building. Source.
While local heating systems could improve thermal comfort and energy efficiency in most types of buildings, they are extra advantageous in older, uninsulated buildings. This is because they provide comfort at colder air temperatures, decreasing the heat loss from the building and thus making thermal insulation relatively less important.
If we are looking for quick and substantial energy savings for existing buildings, then local heating systems deserve our closest attention
Insulation can further improve the energy efficiency and thermal comfort of radiant and conductive heating systems, so this is certainly not a plea against insulation. But if we are looking for quick and substantial energy savings for the large share of uninsulated buildings, then local heating systems deserve our closest attention. Switching from air heating to radiant and conductive heating, or combining them in a hybrid system, can bring energy savings that are at least as large as insulating an existing building.
Hybrid Systems: The Best of Both Worlds?
In a report for Historic Scotland titled "Keeping Warm in a Cool House" (PDF), researcher Michael Humphreys advocates a return to the old-fashioned way of heating, combined with modern heating devices, in historic buildings throughout Scotland (some 20% of the total housing stock). Humphreys argues that this approach should be considered more often at the expense of thermal insulation, which is more expensive and changes the character of a building. 
He proposes a hybrid system in which an air heating system delivers a "background temperature" of about 16ºC (61ºF), a sufficiently high temperature for household activities. For sedentary activities like reading, studying or watching television, local heating systems provide thermal micro-climates of 21-23ºC (70-73ºF) using radiant heat sources.
A hybrid system has interesting advantages. Because air heating is so inefficient -- the whole volume of air in a space has to be warmed -- large energy savings can be obtained even if the thermostat is turned down just a few degrees. At the same time, the background temperature delivered by the air heating system improves thermal comfort because the difference in climate between local hot spots and the rest of the room (the "radiant temperature asymmetry") becomes smaller.
A hooded chair protects against radiant temperature asymmetry. "Ahrend Kaigan Chair", Marijn van der Pol
Local insulation, in the form of hooded chairs and folding screens, can further protect the body from the colder parts of the space, increasing comfort in an uninsulated building. Finally, in a hybrid system, the local heating sources need not be dimensioned for exceptionally cold periods, and the air heating can be of a lower capacity.
For every 1ºC (1.8ºF) that the thermostat is lowered, 7-10% of heating energy can be saved.  If the temperature in the space is lowered from 21 to 16ºC (70 to 61ºF), the energy savings can be as high as 35-50%. The heating sources that produce warmer microclimates introduce extra energy use that, of course, should also be taken into account.
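The arithmetic behind these figures is straightforward: the per-degree rule of thumb is applied over the full setback. A minimal sketch, treating the 7-10% per degree as additive, as the article's 35-50% range implies:

```python
def thermostat_savings(degrees_lowered, per_degree=(0.07, 0.10)):
    """Estimated heating-energy savings from lowering the thermostat,
    using the 7-10% per degree C rule of thumb (treated as additive)."""
    low, high = per_degree
    return degrees_lowered * low, degrees_lowered * high

low, high = thermostat_savings(21 - 16)
print(f"21 -> 16 C: roughly {low:.0%} to {high:.0%} saved")
# prints: 21 -> 16 C: roughly 35% to 50% saved
```

Real savings depend on the building; the rule of thumb is only a first approximation, and the energy used by the local heat sources still has to be subtracted.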
According to Humphreys, local heating can save 30-40% of energy compared to air-heating alone, taking into account the (primary) energy use of the local heating sources. In an old, uninsulated building in Scotland, he calculated, a vertically positioned radiant heat source needs to provide 425 watts per person to achieve the desired microclimate at a background temperature of 16ºC (61ºF). 
Using local insulation -- an antique hooded chair -- this comes down to 340 watts per person, which can be provided by a radiant heating panel of only 60x60 cm. In his experiments, Humphreys makes use of outdated radiant heating systems from the 1970s, so that his results may be on the conservative side. 
Local heating can save 30-40% of energy compared to air-heating alone, taking into account the energy use of the local heating sources
Conductive heating systems can be even more energy efficient. According to a recent study, a heated office chair can keep 92% of subjects (with clothing insulation of 0.8 clo) comfortable at an operative temperature of 18ºC (64ºF), while 74% of subjects are still comfortable at only 16ºC (61ºF). The desk chair itself uses only 16 watts (around 30-40 watts primary energy if electricity is generated by fossil fuels), demonstrating the effectiveness of heat transfer through conduction. 
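The step from the chair's 16 watts of electricity to 30-40 watts of primary energy follows from power-plant efficiency. A quick sketch, assuming fossil plants in the 40-50% efficiency range (our assumption, chosen to reproduce the figures above):

```python
def primary_power_w(electric_w, plant_efficiency):
    """Primary fuel power needed to deliver a given electric power,
    assuming the electricity comes from a thermal power plant."""
    return electric_w / plant_efficiency

# The 16 W heated chair, with fossil plants at 40-50% efficiency:
print(f"{primary_power_w(16, 0.50):.0f} to {primary_power_w(16, 0.40):.0f} W primary")
# prints: 32 to 40 W primary
```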
The numbers Humphreys provides are in line with research investigating summer comfort in offices, which showed that ventilating fans and other personal cooling devices are preferred over air-conditioning, while using much less energy.
Few People, Lots of Space
While local heating systems have the potential to be more sustainable and save large amounts of energy, this effect is not guaranteed. There are situations in which local heating will use more energy than air heating. Likewise, exactly how much energy can be saved by local heating depends on many factors: the interior volume of a space, the number of people in it, how frequently the space is used, the insulation level of the building, the ventilation requirements, the efficiency of the local heating system, and the efficiency of the air heating system.
The most important factors are the volume of the space and the number of people who use it. Obviously, the larger the space and the fewer people inside it, the more attractive local heating becomes compared to an air heating system. Local heating systems also become comparatively more efficient as ceilings get higher. Hot air rises, and so the energy efficiency of air heating deteriorates further in spaces with high ceilings. It's no coincidence that churches in northern countries have been heated by giant tile stoves for ages.
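A back-of-the-envelope calculation shows how strongly air heating scales with room volume. The sketch below computes only the energy needed to warm the air itself once, ignoring wall losses, ventilation and thermal mass; the room volumes are illustrative, while the density and specific heat are standard properties of air:

```python
def energy_to_warm_air(volume_m3, delta_t_c, rho=1.2, c_p=1005.0):
    """Energy (joules) to raise the temperature of a volume of air once.

    rho: air density (kg/m3); c_p: specific heat of air (J/kg/K).
    Ignores wall losses, ventilation, and the room's thermal mass.
    """
    return rho * volume_m3 * c_p * delta_t_c

# A small living room versus a high-ceilinged church nave, both warmed
# by 5 C (volumes are illustrative):
for name, volume in [("living room (50 m3)", 50), ("church nave (5000 m3)", 5000)]:
    kwh = energy_to_warm_air(volume, 5) / 3.6e6  # joules -> kWh
    print(f"{name}: {kwh:.2f} kWh just to warm the air")
```

A radiant or conductive source sized per person has no such volume term, which is why the gap widens as the space grows.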
Illustration by Diego Marmolejo.
Another factor is the local heating system. Radiant heating is not tied to a specific primary energy source. For instance, hot water for a hydronic heating panel can be delivered by a solar collector, electricity, a heat pump or a gas, coal, or wood-fired boiler. Naturally, the choice of the primary energy source will affect the energy efficiency of the heating system. In particular, the use of electric radiant panels may raise eyebrows, because electric heating isn't considered sustainable: burning fossil fuels to make electricity and then converting it back to heat entails large energy conversion losses, which can be avoided if you heat a space directly with fossil fuels.
However, things aren't as simple as they might seem. Electric radiant heating panels can offer energy savings even if the electricity is generated by fossil fuels, because they are able to heat locally and quickly. Since it takes less than five minutes for an electric radiant heating panel to produce maximum output, it can be used only when and where it is needed. Air heating systems, tile stoves, or radiant building surfaces need considerably more time to bring a space to a comfortable temperature, and therefore they have to work continuously throughout the day (or they have to be oversized) in order to provide instant comfort.
How much energy can be saved depends on -- among other things -- the interior volume of a space, the number of people in it, and how frequently the space is used
Do the advantages of electric heating panels outweigh the disadvantages? This depends mainly on how frequently the space is used. Their efficiency advantage is largest for rooms that are less frequently used. Many spaces are only used intermittently, and it's these spaces that could benefit the most from electric radiant heating panels. On the other hand, if electric heating panels are used continuously throughout the day, their quick heating capacity brings no efficiency advantage and they might end up using more energy than an air heating system. 
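The trade-off can be sketched as a break-even calculation on occupied hours. All power ratings and conversion factors below are illustrative assumptions, not measured values:

```python
def primary_kwh_per_day(power_kw, hours, primary_factor):
    """Daily primary energy (kWh) for a heater drawing power_kw for
    `hours` hours, scaled by a primary-energy conversion factor."""
    return power_kw * hours * primary_factor

# Assumed conversion factors (illustrative):
ELECTRIC_PRIMARY = 2.5   # fossil power plant at 40% efficiency
GAS_PRIMARY = 1 / 0.9    # gas boiler at 90% efficiency

# Gas-fired air heating: 2 kW, running 14 h/day so the room is always ready.
air = primary_kwh_per_day(2.0, 14, GAS_PRIMARY)

# A 1 kW electric radiant panel, on only while the room is occupied:
for occupied in (2, 8, 14):
    panel = primary_kwh_per_day(1.0, occupied, ELECTRIC_PRIMARY)
    verdict = "panel wins" if panel < air else "air heating wins"
    print(f"{occupied:2d} h occupied: {verdict}")
```

With these assumed figures, the panel wins easily for a rarely used room but loses once it runs all day, which is the pattern the text describes.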
Open the Windows
Local heating can provide a healthier indoor climate compared to air heating. Indoor air pollution is a growing problem for two reasons. Firstly, people spend more and more time indoors: up to 90% of their lives in the western world. Secondly, building materials and household items have become increasingly polluting. Harmful chemicals can be emitted by building materials, furniture, and household cleaning products, while additional pollution is generated by human activities (mainly cooking and smoking), and by the intrusion of outdoor pollutants.
Local heating combines better with natural ventilation than air heating does. When we heat a space by air, the medium for heat storage is also the medium for ventilation. Measures that improve the efficiency and comfort of air heating, such as making a building air-tight, have a negative impact on the health of the indoor environment, while measures that promote a healthier indoor climate, such as regularly opening the windows, are detrimental to the efficiency and comfort of the heating system.
Illustration by Diego Marmolejo.
With local heating, the air is not the medium for heat storage. Heat is directly transferred to people. Every radiant or conductive heating source also heats the air, so it will still cost energy to open the windows and bring in fresh air. However, since local heating provides thermal comfort at cooler air temperatures, it will cost less energy to bring in more fresh air. It's an alternative to complex and costly mechanical ventilation systems, which work well if they are built, used and maintained as they should be, but can actually worsen the indoor climate if that's not the case.
Local heating also minimizes the continuous air circulation that is typical in air heating systems: warm air rises, cools and comes down again, is warmed up and rises, and so on. This turbulence causes a circulation of dust particles that can cause or aggravate allergies or mucous membrane infections. If the air temperature is reduced, these effects are minimized. Cooler indoor temperatures also reduce the prevalence of house dust mites. 
Improving Thermal Comfort
The obvious downside of local heating is that you are tied to a certain space when you want to be comfortable. The great advantage of air heating -- at the expense of very high energy consumption -- is that the warmth is distributed uniformly across the space, at least on the horizontal plane, so that thermal comfort is independent of your location. However, the fact that local heating fixes you at a certain point in space is not as disadvantageous as it might seem, and it actually brings an important and unexpected benefit: more comfort, at least in shared spaces.
The uniform comfort temperature prescribed by international comfort standards -- 23.3ºC (74ºF) with a clothing insulation of 1 clo -- is actually aimed at people at rest (activity level of 1 "met", which corresponds to "seated, quiet"). Using the CBE Thermal Comfort Tool, we can see that an increase in activity has a profound effect on comfort. If the metabolism increases from 1 to 2.2 met ("seated, heavy limb movement") or 2.7 met ("house cleaning"), the ideal comfort temperature decreases to 13ºC (55ºF) and 9ºC (48ºF), respectively. Even a slight increase from 1 to 1.1 met ("typing") already lowers the comfort temperature from 23.3 to 22.4ºC (74 to 72ºF).
People are different, wear different clothes, and perform different activities, while air heating creates a thermal environment that's for everyone the same
In an air-heated space with a uniform temperature of 23.3ºC, the person sitting on the couch watching TV could be comfortable, but the person typing might be slightly hot, and the person cleaning the room or having an animated conversation could be sweating. In a space that is heated by radiant and conductive heating sources, everybody can find the thermal comfort that suits their needs best.
While this still implies that you are tied to a certain spot in order to be comfortable if you are inactive or performing light activity, it's very common even with air heating to be in a specific location for extended periods: on the couch, at a desk, at the kitchen table. There are many places in a room where we are never at rest, and so there is no need to heat them to the same temperature.
A modern "kotatsu", a heated table from Japan, using an electric heater instead of glowing fuel. Source: Rakuten.
People not only differ in their activities, but also in their clothing and personality. When performing similar activities and wearing similar clothes, the difference in neutral temperature between individuals can still be as high as 5ºC.  In an air-heated space, these people are condemned to a thermal climate that's the same for everyone -- a compromise. This fact is recognised by international comfort standards, which state that even at a "perfect" temperature a maximum of 80% of users will be comfortable. In other words, using modern heating systems, one in five will be too warm or too cold in the best case scenario. 
In a space that's warmed by local heating sources, occupants who are more active or better dressed can find a cooler spot, while those at rest, dressed lightly or extra sensitive to cold can find a warmer micro-climate. 100% of occupants would be able to find their ideal environment. Personal control of the thermal environment can be organised in two ways: everyone regulates their individual comfort by means of a personal radiant and/or conductive heating source, or everyone "migrates" through a space that is heated by a central radiant heating source. Both methods can also be combined, as they were in the old days.
Comfort Studies in Office Buildings
The performance of local radiant and conductive heating systems, combined with a lower background temperature provided by air heating, has been most extensively researched in office environments. Most of these studies have concluded that personalised heating systems can lower energy use and simultaneously improve thermal comfort and overall performance.  In offices, a multitude of people share the same space for an extended period of time, without much or any personal control over their thermal environment. Research has shown that about one in two office workers is -- year-round -- unhappy with the thermal climate. 
In offices, personalized heating systems can lower energy use and simultaneously improve thermal comfort and working performance.
By providing each office worker with personal heating sources, everyone can decide the thermal environment they prefer. The systems under study are usually electric or hydronic radiant panels, which can be built into the walls of privacy cubicles, hung at the ceiling above office workers, or attached below the desk surface.
These can be combined with conductive heating elements embedded into furniture. Systems that warm the hands and the feet usually work best, because these body parts are most sensitive to cold. Because personal heating systems can produce heat very quickly, they can be turned off automatically when the office worker leaves the desk, using energy only when it's necessary.
Illustration by Diego Marmolejo.
Of course, to be advantageous, the energy use of the personal heating sources should be smaller than the energy saved by turning down the thermostat a few degrees. Otherwise, thermal comfort might improve but energy savings won't materialise. This can happen when the air heating system has too little capacity (in which case extra energy use is expected), but it's also possible that people start dressing more lightly because of their personal heating sources, which can lead to more energy consumption.
Adaptive Thermal Comfort
Restoring the old concept of "heating people, not places" requires a new definition of thermal comfort. For all its advantages, the use of local heating systems doesn't comply with international comfort standards, because the average temperature in the room will not reach the minimum recommended values. As discussed in a previous article, the same goes for cooling: a space that is cooled by local cooling systems (such as fans) exceeds the maximum temperature values for summer comfort.
Modern comfort standards don't recognise the freedom to actively move throughout a space in search of thermal comfort, although this could have profound consequences for energy use while maintaining thermal comfort, write Humphreys and two of his colleagues in "Adaptive Thermal Comfort: Principles and Practice".  We have been conditioned by ideas that comfort implies a steady temperature throughout a space, but this is an intrinsic feature of modern air heating and cooling systems, not a condition for feeling comfortable.
A steady temperature throughout a space is an intrinsic feature of modern air heating and cooling systems, not a condition for feeling comfortable.
In reality, we constantly adapt ourselves to the thermal environment, not only by moving between different thermal environments, but also by changing clothes or our activities, by opening or closing windows or curtains, by consuming hot or cold drinks, by changing posture, and so on. Field studies have demonstrated that people can be comfortable in much wider temperature ranges than those prescribed by comfort standards if they have the freedom to react to changing conditions. This "Adaptive Thermal Comfort" model, which leans heavily on local heating/cooling sources and clothing insulation, is at odds with the established comfort standards, which are based on research in climate chambers. 
Climate chambers are special laboratories in which the temperature, humidity and air speed are precisely controlled, while the subjects' thermal comfort is measured. All subjects are made to perform the same task, wearing the same clothes, and sitting in a fixed location. They can't change clothes or activity or move closer to a heating or cooling source, while these actions could have large consequences for their thermal comfort. Comfort standards -- which are the guidelines for most architects and engineers -- treat us as if we are passive beings living in climate chambers. We have come to believe that we are.
 Stralingsverwarming: Gezonde Warmte met Minder Energie, Kris De Decker, 2015
 Keeping Warm in a Cooler House. Creating Comfort with Background Heating and Local Supplementary Warmth (PDF). Historical Scotland Technical Paper 14, Michael Humphreys, Historic Scotland, 2011
 Adaptive Thermal Comfort: Principles and Practice, Fergus Nicol, Michael Humphreys & Susan Roaf, 2012
 Energy-efficient comfort with a heated/cooled chair, Center for the Built Environment, UC Berkeley, Wilmer Pasut, 2014
 Beispielhafte Vergleichsmessung zwischen Infrarothstrahlungsheizung und Gasheizung im Altbaubereich, Peter Kosack, TU Kaiserslautern, 2009
 Indoor Pollutants, Committee on Indoor Pollutants, National Research Council, 1981
 Individual control at each workplace: the means and the potential benefits, David Wyon, in "Creating the productive workplace", Derek Croome, 2000
 Persoonlijke beïnvloeding als sleutel tot een A+ klimaat (PDF), Atze Boerstra, in TVVL Magazine, 04, 2010
 Comfort, perceived air quality, and work performance in a low-power task-ambient conditioning system (PDF), Hui Zhang et al., Center for the Built Environment, 2008
 Air quality and thermal comfort in office buildings: results of a large indoor environmental quality survey (PDF), in "Proceedings of healthy buildings 2006, Lisbon, Vol.III, 393-397
- Restoring the Old Way of Warming: Heating People, not Places
- Radiant and Conductive Heating Systems
- The Revenge of the Ceiling Fan
- Insulation: First the Body, then the Home
- Heat your Clothes, not your House
- The Solar Envelope: How to Heat and Cool Cities without Fossil Fuels
- Don't Heat your Room with Tea Candles
These days, we provide thermal comfort in winter by heating the entire volume of air in a room or building. In earlier times, our forebears' concept of heating was more localized: heating people, not places.
They used radiant heat sources that warmed only certain parts of a room, creating micro-climates of comfort. These people countered the large temperature differences with insulating furniture, such as hooded chairs and folding screens, and they made use of additional, personal heating sources that warmed specific body parts.
It would make a lot of sense to restore this old way of warming, especially since modern technology has made it so much more practical, safe and efficient.
Illustration: People gathering around a tile stove. Die Bauern und die Zeitung, a painting by Albert Anker, 1867.
Most modern heating systems are primarily based on the heating of air. This seems an obvious choice, but there are far worthier alternatives. There are three types of (sensible) heat transfer: convection (the heating of air), conduction (heating through physical contact), and radiation (heating through electromagnetic waves).
The old way of warming was based upon radiation and conduction, which are more energy-efficient than convection. While convection implies the warming of each cubic centimetre of air in a space in order to keep people comfortable, radiation and conduction can directly transfer heat to people, making energy use independent of the size of a room or building.
Conduction, Convection, Radiation
First, let's have a look at the different methods of heat transfer in some more detail. Conduction and convection are closely related. Conduction concerns the transfer of energy due to the physical contact between two objects: heat will flow from the warmer to the cooler object. The speed at which this happens depends on the thermal resistance of the substance. For example, heat is transferred much faster through metal than through wood, because metal has a lower thermal resistance. This explains why, for instance, a cold metal object feels much colder than a cold wooden object, even though they both have the same temperature.
Conduction not only occurs between physical objects, but also between physical objects and gases (like air), and among gases themselves. Each physical object that is warmer than the air that surrounds it heats up the air in the immediate vicinity through conduction. By itself, this effect is limited, because air has a high thermal resistance -- that's why it forms the basis of most thermal insulation materials. However, the air that is warmed by conduction expands and rises. Its place is taken by cold air, which is heated in turn, expands, rises, and so on. This plume of warm air rising from every object that is warmer than the surrounding air is called convection.
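The difference in thermal resistance can be put in numbers. The sketch below uses typical handbook conductivity values (in W/mK) and the fact that, per Fourier's law, conductive heat flow scales linearly with conductivity for the same geometry and temperature difference:

```python
# Typical thermal conductivities, in W/mK (handbook values):
conductivity = {"steel": 50.0, "oak": 0.17, "still air": 0.026}

# For identical geometry and temperature difference, conductive heat
# flow is proportional to conductivity:
air_k = conductivity["still air"]
for material, k in conductivity.items():
    print(f"{material} conducts heat {k / air_k:,.0f}x as fast as still air")
```

The roughly three-orders-of-magnitude gap between steel and still air is why a cold metal object feels so much colder than a cold wooden one, and why trapped air makes a good insulator.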
Radiation, the third form of sensible heat transfer, works in a very different way from conduction and convection. Radiant energy is transferred through electromagnetic waves, similar to light or radio waves. More precisely, it concerns the part of the electromagnetic spectrum that's called infrared radiation. Radiation doesn't need a medium (like air or water) for heat transfer. It also works in a vacuum and it's the most important form of heat transfer in outer space. The primary source of radiant energy is the sun, but every object on earth radiates infrared energy as long as it has mass and a temperature above absolute zero. This energy can be absorbed by other objects with a lower temperature. Radiant energy doesn't have a temperature; only when it hits the surface of an object with mass is the energy absorbed and converted into heat.
Thermal Comfort at Low Air Temperatures
Because of the general use of central air heating (and cooling) systems, we have come to believe that our indoor thermal comfort depends mainly on air temperature. However, the human body exchanges heat with its environment through convection, radiation, conduction and evaporation (a form of "latent" heat transfer). Convection relates to the heat exchange between the skin and the surrounding air, radiation is the heat exchange between the skin and the surrounding surfaces, evaporation concerns the moisture loss from the skin, and conduction relates to the heat exchange between a part of the human body and another object that it's in contact with.
If the share of radiation or conduction in the total heat transfer increases, people can be perfectly comfortable at a lower air temperature during the heating season
In winter we can remain comfortable in lower air temperatures by increasing the share of radiation or conduction in the total heat transfer of a space. The opposite is also true: conduction and radiation can make people feel uncomfortable in spite of a high air temperature. For example, a person standing on a cold floor with bare feet will feel cold, even if the air temperature is a comfortable 21ºC (70ºF). This is because the body loses heat to the floor through conduction. A hot cup of soup in the hand, floor heating, or a heated bench have the opposite effect, because heat is transferred from the warm object to the body through conduction.
Radiant heat can make people comfortable at a lower air temperature, too. The obvious example is direct sunlight. In spring or autumn, we can sit comfortably outside in the sun wearing only a T-shirt, even if the air temperature is relatively low. A metre away, in the shade, it can be cold enough to need a jacket, although the air temperature is more or less the same. In summer, we prefer the shade. The difference is explained by the radiant energy of the sun, which heats the body directly when it is exposed to sunlight. This higher "radiant temperature", which can be measured with a black-globe thermometer, allows thermal comfort at a colder air temperature in winter.
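For reference, a black-globe reading can be converted into a mean radiant temperature with a standard formula for a 150 mm globe under forced convection (found, for example, in ISO 7726); the sketch below applies it to illustrative readings:

```python
def mean_radiant_temp(t_globe_c, t_air_c, air_speed_ms):
    """Mean radiant temperature (C) from a standard 150 mm black-globe
    thermometer: forced-convection formula, with the globe reading,
    the air temperature, and the air speed (m/s) as inputs."""
    tr_fourth = (t_globe_c + 273) ** 4 \
        + 2.5e8 * air_speed_ms ** 0.6 * (t_globe_c - t_air_c)
    return tr_fourth ** 0.25 - 273

# Sitting in the sun on a cool day (illustrative readings): the globe
# reads well above the air temperature, so the radiant temperature is
# higher still.
print(round(mean_radiant_temp(25.0, 15.0, 0.5), 1))
```

When the globe and the air read the same, the radiant temperature equals both; the larger the gap, the more the radiant environment dominates comfort.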
Radiant heating systems compensate a lower air temperature with a higher radiant temperature, while air heating systems compensate a lower radiant temperature with a higher air temperature. The operative temperature -- a weighted average of both -- can be the same. Source: Radiant Heating & Cooling Handbook, Richard Watson, 2008.
It should be noted that on earth, radiation always goes hand in hand with convection. Because air has little mass, the radiant energy of the sun doesn't heat the air directly. However, it does so indirectly. The radiant energy of the sun is absorbed by the earth's surface, where it is converted to heat. The warmer earth's surface then slowly releases this heat to the air through the earlier described mechanisms of conduction and convection. In other words, it's not the sun but the earth's surface that heats the air on our planet.
The radiant temperature is equally important when heating a building, no matter which heating system is used. Indoors, the radiant temperature represents the total infrared radiation that is exchanged between all surfaces in a room. Radiant heating systems, which we will discuss later on, work in a similar manner as the sun: they don't heat the air but the surfaces in a space, including human skin, raising the radiant temperature and providing thermal comfort at a colder air temperature. The use of radiant heating is more practical indoors, where environmental factors are under control. If a wind picks up outside, for example, the warming effect of the sun quickly disappears.
It's not the sun but the earth's surface that heats the air on our planet
A 100% radiant heating system doesn't exist, because both the radiant heating surface and the irradiated surfaces make contact with the air and warm it by conduction and convection. However, this heating of the air has a delayed onset and is more limited than in the case of a direct air heating system. Likewise, an air heating system will also raise the radiant temperature in a space, because the hot air warms the building's surfaces through conduction. But again, the increase of the radiant temperature is slow and limited in comparison to a radiant heating system.
As with conduction, radiation can also make people uncomfortable in spite of a warm air temperature. If we are seated next to a cold window, our body will radiate heat to this cold surface, making us feel cold even when the air temperature is a comfortable 21ºC (70ºF). In short, neither a high air temperature nor a high radiant temperature is a guarantee of thermal comfort. The best understanding of the thermal environment in a space is given by the "operative temperature", which is a weighted average of both.
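The weighting can be sketched explicitly. The heat-transfer coefficients below are typical indoor values for a seated person at low air speed (our assumption); in that regime the operative temperature is close to a simple mean of air and radiant temperature:

```python
def operative_temperature(t_air_c, t_radiant_c, h_c=3.1, h_r=4.7):
    """Operative temperature as a weighted average of air temperature
    and mean radiant temperature. The default convective (h_c) and
    radiative (h_r) coefficients, in W/m2K, are typical indoor values
    for a seated person at low air speed (assumed here)."""
    return (h_c * t_air_c + h_r * t_radiant_c) / (h_c + h_r)

# Radiant heating: cool air, warm surfaces.
print(round(operative_temperature(16, 25), 1))
# Air heating: warm air, cooler surfaces.
print(round(operative_temperature(23, 17), 1))
```

Both configurations can land near the same operative temperature, which is the trade-off the figure above illustrates.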
The Old Way of Warming
Before the arrival of central air heating systems in the twentieth century, buildings were mainly heated by a central radiant heat source, such as a fireplace or a wood, coal or gas stove. Usually, only one of the rooms in a building was heated. But even within this room, there were large differences in comfort depending on your exact location in the space. While air heating distributes warmth relatively evenly throughout an area, a radiant heating source creates a local microclimate that can be radically different from the rest of the room.
This is because the intensity of a radiant heat source decreases with distance. It's not that the infrared waves become weaker, but that they become more dispersed as they fan out from their source. This is shown in the two illustrations below, which appear in Richard Watson's "Radiant Heating and Cooling Handbook". The drawing on the left shows the radiant heat distribution (or "radiant landscape") in a room, seen from above, which is warmed by a forced-air heating system. The average radiant temperature in the space is 20ºC (68ºF). Except for the influence of a cold window surface (at the top of the illustration), the radiant temperature is relatively constant throughout the room.
Source: Radiant Heating and Cooling Handbook. Richard Watson, 2008
The illustration on the right shows the same room, again with a mean radiant temperature of 20ºC (68ºF), but now heated with a radiant heat source which is located at the centre of the ceiling. The heat source is an electric longwave infrared panel, a new technology that we will explain in the second part of this article, but a fireplace in the middle of the room would give a similar result. The radiant landscape is now very different. The highest radiant temperature is measured in the middle of the room, right below the heating panel. The radiant temperature then decreases rapidly in concentric circles towards the sides of the room. The difference between minimum and maximum radiant temperature is much larger than in the case of an air heating system.
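The dispersion effect follows the textbook inverse-square law. This is a deliberately idealized sketch: it treats the heater as a point source radiating equally in all directions, whereas a real panel or fireplace is an extended source and room surfaces reflect radiation, so the actual indoor falloff is less steep.

```python
import math

def irradiance(power_w, distance_m):
    """Irradiance (W/m2) at a given distance from an idealized point
    source: the same radiated power spreads over a sphere of area
    4*pi*r^2, so it falls with the square of the distance -- the
    infrared waves don't weaken, they disperse."""
    return power_w / (4 * math.pi * distance_m ** 2)

# Doubling the distance to the source quarters the received radiation.
print(irradiance(1000, 1.0) / irradiance(1000, 2.0))  # 4.0
```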
In an air-heated room, it doesn't matter much where you are. In a room heated by a radiant heating source, location is everything.
Of course, a different location of the radiant heating surface, or a combination of two or more radiant heating surfaces, would again present a very different radiant landscape. Furthermore, as with solar radiation, other objects can cast shadows, which means that even the location of the furniture can have an effect on the heat distribution in a room. Also note that the heterogeneous distribution of the radiant temperature will be somewhat tempered by the homogeneous character of the air temperature, no matter which heating system is being used.
In an air-heated room, it doesn't matter much where you are. In a room heated by a central radiant heating source, location is everything. The mean radiant temperature can be optimal, but the radiant temperature in parts of the space may be too low. But the opposite is also possible: the mean radiant temperature can be too low, while at certain locations the room is perfectly comfortable. This is the ancient principle of spot or zone heating, which is impossible to realize with an air heating system. Instead of heating the entire space, our forefathers only heated the occupied parts of a building.
Air heating (left) versus radiant heating (right) in a church building. Source: Fabric-friendly heating, Dario Camuffo.
A similar thing happens on the vertical plane. Warm air rises, so that most heat ends up under the ceiling, where it is of little use. With radiant heating, it's perfectly possible to only heat the lower part of a space, no matter how high the ceiling is. Radiant heat doesn't rise, unless the radiant heating surface is aimed upwards. In conclusion, instead of heating the entire volume of air in a space, a radiant heating system can heat only that part of a space which is occupied, which is of course much more energy efficient.
Unless the room is very small or very crowded, only a very small part of the energy used by an air heating system benefits people. On the other hand, almost all the energy used by a radiant heating system is effectively heating humans.
A problem with the heterogeneous indoor climate of old times was radiant asymmetry -- the difference in radiant temperature between distinct parts of the body. A person sitting in front of an open fire will receive sufficient radiant heat on one side of their body, while the other side loses heat to the cold air and surfaces at the opposite half of the room. The body can be in thermal balance -- the heat loss on one side equals the heat gain on the other -- but if the temperature differences are too large, thermal comfort will not be obtained.
A bench with adjustable backrest. Source: Dictionnaire de l'ameublement et de la décoration depuis le XIII siècle, 1887-1890
The problem is illustrated in the engraving above. The back of the bench could be switched from side to side. By regularly turning the body towards the fire and then away from it, both the front and the back of the body could be heated alternately. Although radiant asymmetry can be an issue with forced-air heating systems, it's much more likely to appear in spaces that are warmed by a radiant heat source. In historical buildings, the difference in surface temperatures was aggravated by the fact that building surfaces were not insulated. Drafts, another cause of local thermal discomfort, were also a problem in old buildings, because they were anything but air-tight.
To create a comfortable microclimate without radiant asymmetry or drafts, our forefathers supplemented local heating with local insulation
To create a comfortable microclimate without radiant asymmetry or drafts, our ancestors supplemented local heating with local insulation. One example was the hooded chair. This chair, which could be upholstered or covered with leather or wool blankets, fully exposed people to a radiant heat source, while protecting their back from the drafts and the low surface temperatures behind them.
At the same time, the shape of the furniture ensured that a greater share of the radiant heat emitted by the fire was effectively used: the chair was heated directly by the fire through radiation, and this heat was transferred to the person sitting in it. Recent research has shown that the insulation value of these types of chair amounted to at least 0.4 clo, which corresponds to the insulation value of a heavy pullover or coat. Some hooded chairs could host more than one person.
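The clo figure can be put in more familiar units. As a small illustrative conversion (the 0.155 factor is the standard definition of the clo unit; the 0.4 clo value is the one reported above):

```python
CLO_TO_SI = 0.155  # by definition, 1 clo = 0.155 m2.K/W of thermal resistance

def clo_to_r_value(clo):
    """Convert an insulation value in clo units to m2.K/W."""
    return clo * CLO_TO_SI

# The hooded chair's reported insulation of at least 0.4 clo, in SI units:
print(round(clo_to_r_value(0.4), 3))  # 0.062 m2.K/W
```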
An additional solution, which could also be used alone, was the folding screen. The folding screens used as winter furniture were insulated with fabrics or built with heavy wood panels. They could be placed behind an insulated chair, or behind a table, for instance. Like the hooded chair, the folding screen protected the back of a person against drafts and cold temperatures, creating a comfortable microclimate.
A third example of local insulation was the special sitting area close to the fireplace. This could be a bench placed between the fire and the side walls of the fireplace, or a niche in the wall with a built-in seat. In both cases, a person would lean against a wall that was warmed by the fire and was protected from drafts. In some cases, the fireplace itself was placed in a room-inside-a-room. In the bedroom, which often remained unheated, yet another piece of furniture was aimed at providing a microclimate: the four poster bed, which had a canopy and thick curtains. When the curtains were closed, drafts were eliminated and body heat was trapped inside.
Portable Heating Systems
The apparent downside of spot heating is that you have to be in a specific location in order to be comfortable. In earlier times, the family gathered around the fireplace or the stove when no physical work had to be done, or when the body had to be warmed up after a long stay in a cold environment. Other locations in the room, as well as unheated rooms, were better suited for activities which required a higher metabolism. People were "migrating" throughout the room and throughout the house in search of the climate that suited their needs best.
Familienszene in einem Interieur, a painting by Albert Anker, 1910
However, the use of radiant heat sources and local insulation were also complemented by portable heating sources which transferred heat through radiation, convection and/or conduction. These could be used to further increase thermal comfort in the presence of a central heat source, and were also helpful in bringing warmth to other locations. Portable heating systems were designed especially to heat the feet or the hands: the parts of the body that are most sensitive to cold.
Personal heating sources allowed people to enjoy the heat from the central fireplace in unheated rooms, or even outside the house
An example is the foot stove, a box with one or more perforated partitions, which contained a metal or earthenware bowl or pan filled with embers from the fireplace. The feet were placed on top of the stove, and the often long garments worn in those days increased the effect of the small heating device: the warmth was guided through a skirt or a chamber coat along the legs to the upper body. The upper part of the stove was made of wood or stone, materials with low thermal conductivity, so that the feet would not get burned.
In many cultures worldwide, similar heat sources were used for warming the hands. They were made from metal or ceramics and were filled with embers from the fireplace, or with coal or peat. These personal heating sources also allowed people to enjoy the heat from the central fireplace or stove outside the house. They were taken in unheated coaches and railcars, or to Sunday Mass. Poor people made use of heated stones or bricks, or even heated potatoes put in coat pockets.
For heating the bed, people made use of brass bedpans with a long handle which were shoved underneath the mattress. Some beds had a bed wagon: a large, wooden frame designed to hold a pot of glowing fuel in the centre of the bed. In the 19th century, following the arrival of the public water supply, the use of ceramic hot water bottles became common -- water is a much safer heat medium than smouldering fire. These devices, which were often protected by a fabric cover, were used as foot warmers, hand warmers, or bed warmers.
An Afghan "Korsi". Source unknown.
Some peoples took the concept of the foot stove a step further. The Japanese had their "kotatsu", a movable low table with a charcoal heater underneath. A thick cloth or quilt was placed over the table to trap the heat and the whole family slid their legs under the table, sitting on the floor. As with the European and American foot stoves, contemporary clothing increased the effect of the device. The heat of the charcoal burner was transferred through the traditional Japanese kimono, warming the whole body. Similar heating devices were used in Afghanistan (such as the "korsi"), as well as in Iran, Spain and Portugal.
Conductive Heating Systems
Some historical radiant heating systems also transferred heat through conduction, further improving efficiency and comfort. More than 3,000 years ago, the Chinese and the Koreans built heating systems which were based on trapping smoke gases in a thermal mass. The northern Chinese "kang" ("heated bed") was a raised platform made from stone, masonry or adobe, which occupied about half of the room. As the name indicates, the kang was first and foremost a heated bed, but the platform was also used during the day as a heated work and living space. The "dikang" ("warmed floor"), which was typical in North-Eastern China, worked in the same way as the kang, but had a larger floor area.
Above: a Chinese Kang, photographed in the 1920s. Source: Wandering in Northern China, Harry A. Franck.
The Koreans used the "ondol" ("heated stone"), which was a wall-to-wall platform. A similar heating system in Afghanistan, the "tawakhaneh" ("hot room"), is possibly the oldest of these systems: its use may date back 4,000 years. In all these systems, the heat of an open fire was led underneath the platform to a chimney at the other side of the room. Both the fireplace and the chimney could be in the room or in adjacent rooms. The heat of the hot smoke gases was transferred to the thermal mass of the platform, which slowly released the warmth to the space. Conduction was as important as radiation and convection in the total heat transfer.
Above: Blick in eine Schwarzwaldstube mit kleinem Mädchen auf der Ofenbank, a painting by Georg Saal, 1861. Below: Auf dem Ofen, a painting by Albert Anker, 1895
These ancient Eastern heating systems are somewhat reminiscent of the European tile stoves that appeared in the middle ages. Tile stoves (or "masonry heaters" as they are known in the USA) are heat accumulating wood stoves that make use of a high thermal mass to burn wood at very high temperatures, which is cleaner and more efficient. The smoke gases are trapped in a labyrinth of smoke channels, transferring most of the heat to the masonry structure before leaving the chimney.
Tile stoves produce a large share of radiant heat, but on top of this they allow heat transfer through conduction, as many tile stoves had built-in platforms to sit or sleep on. Even if these platforms were not there, wooden benches were placed next to the stove so that one could lean against the warm (but not too hot) surface.
Why We Also Need Modern Technology
In conclusion, all historic heating systems used radiation and/or conduction as the primary modes of heat transfer, while convection was merely a by-product. It makes good sense to return to this concept of heating, but that doesn't mean that we have to go back to using fireplaces and carrying burning embers around the house. While the old concept of heating is more energy-efficient, the same cannot be said of most of the old heating devices.
While the old concept of heating is more energy efficient, the same cannot be said of most of the old heating devices.
Fireplaces, for one thing, are hugely inefficient, because most of the heat escapes through the chimney. They also suck in large amounts of cold air through cracks and gaps in the building envelope, which cools the air indoors and introduces strong drafts. Owing to this, fireplaces can even have negative efficiency as far as the air temperature is concerned: they can make the room colder instead of warmer. Stoves do better, but they remain relatively inefficient and have to be fired regularly, just like a fireplace. And for both options, air pollution can be substantial.
The (improved) tile stove is the only ancient heating system that can still be recommended, but we have far more options now, such as electric and hydronic radiant and conductive heating systems. These are more efficient, more practical, and safer than the heating sources of yesteryear. In the next two articles, we investigate how the old way of warming can be improved upon by modern technology, and how much energy could be saved.
Sources (in order of importance):
- Stralingsverwarming: Gezonde Warmte met Minder Energie, Kris De Decker, 2015
- Keeping Warm in a Cooler House. Creating Comfort with Background Heating and Local Supplementary Warmth (PDF). Historical Scotland Technical Paper 14, Michael Humphreys, Historic Scotland, 2011
- Radiant Heating and Cooling Handbook (Mcgraw-Hill Handbooks), Richard Watson, 2008
- Human Thermal Environments: The Effects of Hot, Moderate, and Cold Environments on Human Health, Comfort, and Performance, Third Edition, Ken Parsons, 2014
- The Book of Masonry Stoves: Rediscovering an Old Way of Warming, David Lyle, 1984
- History of radiant heating and cooling systems, part one. Robert Bean, Bjarne W. Olesen, Kwang Woo Kim, in "ASHRAE Journal", January 2010, pp. 40-47
- Adaptive Thermal Comfort: Principles and Practice, Fergus Nicol, Michael Humphreys & Susan Roaf, 2012
- Dictionnaire de l'Ameublement et de la Décoration depuis le XIII siècle, Henry Havard, 1887-1890.
- Foot warmers: hot coals, hot water. Home Things Past.
- Bed warmers. Old & Interesting.
- Muff warmers & other antique hand warmers. Home Things Past.
- Körperwärmespender. Website.
- Radiant and Conductive Heating Systems
- How to Keep Warm in a Cool House
- Fruit Walls: Urban Farming in the 1600s
- The Revenge of the Ceiling Fan
- Insulation: First the Body, then the Home
- The Solar Envelope: How to Heat and Cool Cities without Fossil Fuels
- Don't Heat your Room with Tea Candles