Archive | Geosciences column

Geosciences column: For permafrost, (sediment) size does matter

4 Jul

In this month’s Geosciences Column, David Bressan – now a regular EGU contributor – highlights a recent result published in The Cryosphere with implications for the occurrence and preservation of alpine permafrost.

The last 150 years saw an increase of 0.8°C in the Earth’s mean global temperature. In mountain ranges like the European Alps, however, this warming trend is even more pronounced, with an increase of 1.5°C observed over the same period. The rise not only accelerates the retreat of alpine glaciers, but also causes the slow degradation of alpine permafrost – ground that remains below the freezing point of water (0°C) year-round – and of rock glaciers – lobes composed of a mixture of ice and rock debris, covered by a few meters of ice-free debris.

At first glance, the permafrost seems relatively well insulated from surface changes by its cover of ice-free soil and rocks. However, measurements in recent years have shown that the melting rate of the ice inside rock glaciers has increased significantly over the last decades, which could destabilize mountain slopes and influence the hydrology of nearby springs.

To understand the behavior of permafrost in a warmer future, it is therefore essential to understand how the material (for example, different rock types) and the texture (such as grain size and grain-size distribution) of the layers covering the permafrost transport heat from the warming surface into the frozen underground. Heat transfer in these layers can occur in two ways: conduction through the rocky material, or convection by mobile phases, like air or water, in the voids and pores of the material.
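For readers who want to picture the first of these mechanisms, conduction on its own follows the one-dimensional heat equation. The minimal Python sketch below uses assumed, purely illustrative values for the debris diffusivity and the surface warming (they are not figures from the study) to show how a step warming at the surface diffuses slowly downward through a debris layer; convection by air or water in the pore space, where present, adds to this transport.

```python
import numpy as np

# Minimal 1D heat-conduction sketch (explicit finite differences).
# Illustrative only: diffusivity and surface warming are assumed values,
# not parameters from Schneider et al. (2012).
alpha = 1.0e-6        # thermal diffusivity of the debris, m^2/s (assumed)
dz, dt = 0.1, 600.0   # grid spacing (m) and time step (s)
depth, days = 5.0, 30
n = int(depth / dz) + 1

T = np.zeros(n)       # initial profile: 0 degrees C everywhere
T_surface = 10.0      # step warming applied at the surface (assumed)

r = alpha * dt / dz**2   # explicit scheme is stable for r <= 0.5
assert r <= 0.5

for _ in range(int(days * 86400 / dt)):
    T[0] = T_surface                                        # fixed surface temperature
    T[1:-1] = T[1:-1] + r * (T[2:] - 2 * T[1:-1] + T[:-2])  # conduction step
    T[-1] = T[-2]                                           # no-flux lower boundary

print(f"Temperature at 1 m depth after {days} days: {T[int(1 / dz)]:.1f} C")
```

Even in this crude picture, a month of surface warming has already reached a metre down; the study’s point is that the real rate depends strongly on whether the pores are large enough for circulating air and infiltrating water to short-circuit pure conduction.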

Scientists first investigated the site of the Murtel rock glacier, located in the Engadin valley of the Swiss canton of Graubünden, in 1970. The area has since become one of the best-documented sites for mountain permafrost in the world. Permafrost occurs here below very heterogeneous substrates: barren bedrock, vegetated bedrock, the debris of a talus slope, and the fine-grained and coarse-grained debris of two rock glaciers. In a long-term project, geographer Sina Schneider and her team analyzed the temperature profiles of these different substrates, recorded in five boreholes over a period of eight years (2002–2010). The study was published last year in The Cryosphere, an open access journal of the European Geosciences Union.

Photographs of the different surfaces at the borehole locations in the investigation area, taken in summer 2009 (from Schneider et al. 2012).

The results confirmed previous observations in part, but also revealed intriguing new findings. Freezing during late autumn and early winter significantly influences the temperature profile in all boreholes, as without an insulating snow cover cold air can penetrate into the pores of the material. Winter conditions – whether a thin snow cover lets the ground freeze deeply or a thick one keeps the cold out – can therefore matter more for the fate of the permafrost than a hot summer.

The covering material can also influence the behavior of the subsurface permafrost: sites with an insulating cover of soil and vegetation showed significantly smaller temperature fluctuations between winter and summer.

Borehole temperature data from 2002–2010 compared to monthly mean air temperature and snow cover. In bedrock, the superficial heat (red and green colours) is conducted deep into the underground. Coarse-grained debris (like that found at the talus slope or on the rock glaciers) is a relatively effective insulator; in fine-grained debris, however, heat penetrates deep into the underground, causing thawing of permafrost (from Schneider et al. 2012).

In coarse-grained material, air circulation in the voids plays a major role, cooling the underground effectively. In the coarse-grained debris of both the talus slope and the rock glacier, no significant temperature increase was observed along the contact between cover and permafrost over the eight years. It was in fine-grained material that most permafrost thawing occurred: numerous complex processes, like water infiltration, freezing and air circulation, seem to be especially effective in the small pores of fine-grained sediments, transporting heat deep into the underground.

Computer models used to calculate the changes and future distribution of permafrost and permafrost-related natural hazards often neglect the composition of the surface – in part because of the difficulty of gaining detailed information in rugged mountain terrain. However, the study by Schneider et al. shows that sediment type and size do matter, and that detailed field surveys and data collection play an essential part in understanding how the hidden permafrost will react to a warmer climate.

By David Bressan, freelance geologist based in Italy


Geosciences column: What drives changes in flood risk?

6 Jun

After a couple of months of absence, GeoLog is once again hosting the Geosciences column. This month we have no less than two posts highlighting recent research in the Earth sciences. In the second of this month’s columns, Eline Vanuytrecht writes about recent research on flood risk published in the EGU journal Natural Hazards and Earth System Sciences.

If you’d like to contribute to GeoLog, please contact EGU’s Media and Communications Officer, Bárbara T. Ferreira at media@egu.eu.


Floods can cause serious damage in residential areas. Recent records show that the damage has increased over the last decades, making floods one of the most severe natural hazards. But what exactly was the main cause of this increase in damage? And how will the relative contributions of the drivers of flood-risk change – meteorological phenomena, land use and socio-economic developments – evolve in the future?

Heavy precipitation, flood and measurements, by Bibiana Groppelli. Distributed by EGU under a Creative Commons licence

To answer these questions, a group of German researchers, led by Florian Elmer of the GFZ Research Centre for Geosciences in Potsdam, analyzed what drives changes in flood risk in the lower part of the Mulde River basin in Eastern Germany. They concluded, rather surprisingly, that land-use changes, not meteorological phenomena, are the main drivers of flood risk change.

“Consequently, the potential influence of local and regional land-use policies is substantial and could contribute significantly to (…) risk mitigation,” the authors write in the Natural Hazards and Earth System Sciences paper.

The study focused on the lower Mulde River catchment, but it can serve as a benchmark for complete risk analyses of other river systems. Flooding is common in this catchment, which comprises several municipalities and an area of approximately 1000 km², of which around 8% is inhabited.

The scientists surveyed the change in flood risk, in terms of expected annual damage in residential areas, between 1990 and 2020 in 10-year time steps. The analysis was based both on observations (for the past) and on model projections (for the future). They further quantified how the cost and impact of floods are modified by each of these drivers: climate change, land-use change and changes in building values.

The results of the study show that, under changing meteorological conditions, including altered rainfall patterns and rising temperatures, only small changes in the return times of floods of different magnitudes are expected. The return time of a flood is a measure of its frequency: the estimated average time between two events of at least a given magnitude.
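To make the notion concrete, the sketch below shows how empirical return times can be read off a record of annual maximum discharges using the Weibull plotting position, a standard textbook estimator (the discharge values are invented for illustration, and this is not necessarily the procedure used in the paper).

```python
import numpy as np

# Empirical return times from a series of annual maximum discharges,
# using the Weibull plotting position T = (n + 1) / rank.
# Discharge values are invented for illustration.
annual_max = np.array([120., 95., 310., 150., 210., 180., 90., 260., 140., 200.])

n = annual_max.size
ranks = n - np.argsort(np.argsort(annual_max))  # rank 1 = largest flood
return_time = (n + 1) / ranks                   # average years between exceedances

for q, T in sorted(zip(annual_max, return_time)):
    print(f"discharge {q:6.1f} m3/s  ->  return time {T:4.1f} yr")
```

In a ten-year record, the largest flood gets a return time of about eleven years, while a flood exceeded every other year gets about two.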

Predicted increases in flood risk are thus mainly related to land-use changes, including the paving of previously permeable surfaces. Since 1990, following the reunification of Germany, the region has undergone major socio-economic changes, including a population decrease. At the same time, however, the area saw urban sprawl and a shift in residential structure towards single-family dwellings. This expansion of urban areas increases the area covered by impermeable pavements, and hence flood risk. The scientists expect to see more urbanization, and thus increased risk, in the future.

Interestingly, the monetary value of the estimated annual flood damage decreased from 1990 to 2000. This is due to a combination of factors following Germany’s reunification, which produced an exceptional situation of high inflation after 1990. For the future, however, the authors predict increasing damage values.

Another interesting conclusion is that small to moderate flood events dominate the risk expectation. These events combine relatively small flood volumes with a short return time (less than 20 years), and thus a high frequency.

The results of the research hold an important message for flood-risk policy. Since land-use change is identified as the main driver of flood-damage change, a key role is reserved for land-use policies in risk mitigation. Further, since the majority of the annually expected damage can be attributed to small to moderate floods that occur frequently, relatively easy-to-install protection measures could prevent a substantial part of the damage.

By Eline Vanuytrecht, freelance writer & PhD student, KU Leuven

Geosciences column: The evolution of the air

5 Jun

After a couple of months of absence, GeoLog is once again hosting the Geosciences column. This month we have no less than two posts highlighting recent research in the Earth sciences. In the first of this month’s columns, Amanda Gläser-Bligh writes about recent research on the regulation of the air published in the EGU journal Solid Earth.

If you’d like to contribute to GeoLog, please contact EGU’s Media and Communications Officer, Bárbara T. Ferreira at media@egu.eu.


What if the Earth had a thermostat that kept temperatures in the correct range to maintain life? In a recent paper published in Solid Earth, Euan Nisbet of London’s Royal Holloway and his collaborators propose just that.

Despite ups and downs in the atmospheric temperatures over geologic time, the Earth system seems to have maintained an equilibrium temperature of around 15°C. The global temperature appears to be able to correct itself up and down, much like an airplane constantly corrects its route, or a thermostat maintains a given temperature in a room. But how is the atmosphere being regulated on the planet, and who or what is selecting the temperature? An enzyme named rubisco seems to hold the answer.

What regulates the atmospheric composition of the Earth? (Photo: The sky from a plane after sunset, by Konstantinos Kourtidis. Distributed by EGU under a Creative Commons licence.)

Rubisco is one of the most common enzymes on the planet: “When you eat your breakfast, you are most likely eating some rubisco,” Nisbet points out. The enzyme occurs in most green matter and has the important role of regulating the intake of CO2 used in photosynthesis, which in turn helps control the composition of the atmosphere by changing its CO2:O2 ratio.

The enzyme is responsible for the amount of carbon captured from CO2, and it is possible that rubisco’s selection of CO2 in preference to O2 controls the amount of CO2 taken up by plants and algae, an uptake that changes with atmospheric conditions. This implies that rubisco itself determines the balance of CO2 and O2 in the air. The theory proposed by Nisbet and collaborators is that natural selection comes into play and causes photosynthesizing organisms to draw enough, but not too much, CO2 out of the atmosphere. This creates what is known as the Goldilocks effect: a planet that is not too hot and not too cold, but just right to sustain life.

There is a strong debate as to whether the atmospheric composition has been controlled by organic or inorganic processes. Nisbet and his team claim that while the system is biologically led, the process is a mixture of the two, with inorganic chemistry working together with underlying biological trends.

The Sun was much weaker in the past and has grown steadily brighter; over the same time, the greenhouse effect has weakened, compensating for the increase in solar heating. The suggestion is that rubisco evolution is the main driver of this shift and has created a stable system that maximizes biological survival.

Additionally, because rubisco is a biological control, the response to challenging changes in atmospheric conditions (such as a massive volcanic outpouring of CO2) can occur rather rapidly compared with inorganic processes, such as silicate weathering, which also absorbs CO2.

To view what the scientific community had to say about the paper, check the article’s interactive discussion. ‘The regulation of the air: a hypothesis’ is the all-time most commented paper on Solid Earth Discussions.

By Amanda Gläser-Bligh, geologist and freelance writer

Geosciences column: Promise and challenges of space elevators for tourism

5 Mar

From Star Trek to Arthur C. Clarke, machines that carry humans into space inside a cable-driven chamber – space elevators – have remained in the realm of science fiction. However, recently a Japanese construction company revealed it has aspirations to actually build such a device, claiming it could be operational as early as 2050. Despite assurances from its backers, the project remains scientifically implausible for a number of reasons, not least because travelers would face deadly doses of radiation as they climb through Earth’s atmosphere.

NASA concept model of a space elevator, viewed from space (Source: Wikimedia)

Tokyo-based Obayashi Corp. proposes to carry space tourists to a station a tenth of the distance to the moon, or roughly 36,000 km above Earth’s surface, using a solar-powered space elevator attached to a carbon nanotube pulley. The device would also allow researchers to travel yet further into space, likely as far as the counterweight at the end of the cable, a whopping 96,000 km from Earth.

Such altitudes are considered very high, even in the context of the most ambitious space projects. For example, the International Space Station orbits at 330km above Earth, whereas the forthcoming Virgin Galactic shuttle will briefly fly customers at an altitude of 110km. In comparison, the average passenger jet cruises at a height of 10km.

But Obayashi is no stranger to ambitious construction projects. They are the main contractor on Tokyo Sky Tree, the world’s tallest self-supporting tower (635m), and its international portfolio includes the Dubai Metro system and Stadium Australia, used for the Sydney Olympics.

“Not simply a dream”

Renewed enthusiasm for the space elevator project hinges on a number of crucial technological developments and methodological insights, including the carbon nanotube technology used to construct the cable. Developed in the 1990s, carbon nanotubes are many times stronger and more flexible than steel.

The elevator car, or climber, would travel along the cable using magnetic linear motors, in which an alternating magnetic field propels a coil. As for the station, it would have to be placed in geosynchronous orbit, that is, circling in sync with the spinning of the Earth, and thereby always remaining in the same spot relative to its base on the ground.
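The roughly 36,000 km figure quoted earlier follows directly from this requirement: it is the altitude at which one orbit takes exactly one sidereal day. A quick back-of-the-envelope check, using standard constants rather than figures from the article:

```python
import math

# Geostationary altitude from Kepler's third law: GM = omega^2 * r^3.
GM = 3.986004418e14      # Earth's gravitational parameter, m^3/s^2
T_sidereal = 86164.1     # one sidereal day, s
R_earth = 6.371e6        # mean Earth radius, m

omega = 2 * math.pi / T_sidereal
r = (GM / omega**2) ** (1 / 3)     # orbital radius from Earth's centre
print(f"geostationary altitude: {(r - R_earth) / 1e3:,.0f} km")  # ~35,800 km
```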

Little is known about many of the project’s finer practical details, including the potential cost, likely sponsors, and where to build it. “At this moment, we cannot estimate the cost for the project,” an Obayashi official said to Wired. “However, we’ll try to make steady progress so that it won’t end up as simply a dream.”

Space tourism has recently featured heavily in the news, with Virgin Galactic expecting to test its first spacecraft beyond Earth’s atmosphere this year and promising commercial suborbital passenger services as soon as 2014. Whereas Virgin Galactic will carry only six passengers at a time, however, the elevator’s designers claim it could carry up to 30 people, travelling at a maximum speed of 200 km/h. For comparison, the Space Shuttle travelled at 28,000 km/h.

Overview diagram featuring an elevator car (climber) traveling along a reinforced cable towards the counterweight. A recent proposal would place a space station approximately a third of the distance between Earth and the counterweight at the cable's end (Source: Wikimedia)

Avoiding lethal radiation a major challenge

Although the space elevator concept is alluring, the project faces important scientific challenges. For example, without improved protection for travelers, they would be subjected to lethal doses of ionising radiation as they travel through two concentric rings of charged particles surrounding the Earth, known as Van Allen belts.

The most intense region of the Van Allen belts spans altitudes of approximately 1,000–20,000 km above Earth’s surface. In the proposed space elevator, passengers would therefore spend several days within the belts, exposing them to over 200 times the radiation experienced by the Apollo astronauts. “They would die on the way through the radiation belts if they were unshielded,” Anders Jorgensen, author of a new study on the subject and a technical staff member at Los Alamos National Laboratory, New Mexico, USA, told New Scientist.

Jorgensen’s sentiments are echoed by Iannis Dandouras of the European Space Agency. “The most intense radiation levels are in the Inner Radiation Belt (IRB), which extends typically in altitudes from ~1,000 to ~20,000 km above the equator. At the announced ascension speed of 200km/h, it would take the space elevator passengers almost four days to go through the IRB, receiving during this time an extremely high accumulated radiation dose, which would present a very high risk for their health (or even for their survival, if not properly shielded). The IRB contains a very intense population of energetic ions, trapped in the Earth’s magnetic field, each of these ions having an energy of typically several tens of MeV (megaelectronvolts).  In addition to this, there is also the Outer Radiation Belt, populated mainly by energetic electrons having an energy of typically several MeV, and extending out to almost the geostationary orbit. In the 1960s and 1970s, the Apollo astronauts did not face such a hazard, due to the very quick transit time through the radiation belts (transit through the IRB was less than an hour),” he commented in a recent email interview.
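The four-day figure is easy to verify from the numbers in the quote: the Inner Radiation Belt is roughly 19,000 km thick, and the climber’s announced speed is 200 km/h.

```python
# Back-of-the-envelope check of the transit time through the IRB,
# using the belt extent and climbing speed quoted above.
belt_bottom_km = 1_000    # approximate lower edge of the IRB
belt_top_km = 20_000      # approximate upper edge of the IRB
speed_kmh = 200           # announced ascension speed

hours = (belt_top_km - belt_bottom_km) / speed_kmh
print(f"{hours:.0f} hours in the IRB, i.e. about {hours / 24:.1f} days")
# -> 95 hours, about 4 days, matching Dandouras' estimate
```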

“Humans have long adored high towers”

Proposed solutions to the radiation problem come with important consequences. Moving the elevator’s base off the equator would avoid the most intense part of the radiation belts. However, centrifugal forces would cause the elevator’s cable to veer back towards the equator – if based, for example, at a latitude of 45° North, the cable would run nearly horizontally for thousands of kilometres through Earth’s atmosphere and thus be weakened by weather-related stresses, such as high winds, hurricanes, and tornadoes.

Another option would be to have a radiation shield stationed along the cable, to be picked up by the elevator when it reaches the belts, but such a shield would be heavy and disrupt the natural motion of the cable.

A further option is to generate magnetic fields around the climber that could shield the habitat module as it climbs through space. However, this would require a great deal of power, difficult to transfer to such altitudes.

NASA has also toyed with the idea of space elevators. According to their concept, the base tower would be approximately 50km tall, with a counterweight placed beyond geostationary orbit – possibly even attached to an asteroid.

Despite the daunting task of overcoming these major practical challenges, Obayashi Corp. remains confident of their ability to deliver humanity’s first space elevator. “We were inspired by the construction of Sky Tree. Our experts on construction, climate, wind patterns, design, they say it’s possible. Humans have long adored high towers. Rather than building it from Earth we will construct it from space,” commented Satomi Katsuyama, the project’s leader, at a recent press conference.

By Edvard Glücksman, EGU Science Communications Fellow  

Geosciences column: Life in the aftermath of hydrothermal vents

8 Feb

Pioneering new study explores the structure and function of microbial communities at expired hydrothermal vent sites

Undiscovered lifeforms abound in Earth’s most seemingly inhospitable environments, as demonstrated by the recent discovery of bacteria living deep beneath the seafloor. An equally extreme environment can be found in the vicinity of hydrothermal vents, where water that has percolated down through cracks at the boundaries between tectonic plates is expelled from the Earth’s crust at temperatures exceeding 400°C. We now know that active vents teem with life, yet little is known about these habitats after venting eventually stops.

A study recently published in the Open Access journal mBio explored the inactive mineral deposits left behind by expired vents along the floor of the deep sea, showing they serve as long-term microhabitats for a succession of unique bacterial communities with potentially important roles within the broader marine food chain. This work is the first to demonstrate that life continues even as vent activity drops off.

Active hydrothermal vents and their distinctive chimneys (source: Wikipedia)

Searching for life at inactive vent sites: a genetic approach

The study, co-authored by researchers at the University of Southern California and the University of Minnesota, characterised microbial communities using advanced genetic sequencing techniques. The scientists obtained samples from expired vent chimneys on the East Pacific Rise, a tectonic plate boundary running along the floor of the Pacific Ocean, using the US Navy deep-sea submersible Alvin — famous for its use in the exploration of the Titanic wreck in 1986.

The genetic sequences provide a snapshot of the bacterial community at each sampling location. These data include the type of species present and, based on similarities with described species, their potential role within the local microbial food chain; in other words, what contribution their uptake of nutrients could make to the wider marine ecosystem. By obtaining similar data from active vents, the researchers could predict how relative counts of bacterial species change as venting ceases and the environment cools.

Microbes vital for marine ecosystems

The chimneys are formed by minerals carried by the vent emissions as they emerge from deep within the hot crust and collide with seawater. In the context of the study now published, they are of particular interest as long-term habitats for microbes because they are both widespread and resilient, remaining for up to 20,000 years following the expiry of a vent.

As venting ceases, the microbial community composition within chimneys changes drastically and the remaining bacterial biomass, the size of the community, grows by up to five times. Furthermore, the new community comprises several species that can take up and distribute essential elements back into their global cycles. In other words, not only does this study show that inactive vent sites contain unique communities of microbes, but the results also suggest these species are particularly important for the healthy functioning of the marine ecosystem.

Microbial communities serve as an important link in the global cycling of elements vital for life, such as carbon and nitrogen. Bacteria break down dead plant and animal matter, taking in carbon and reintroducing it to the food chain when they are in turn consumed by larger organisms. Through a process known as nitrogen fixation, some bacteria also convert – or ‘fix’ – atmospheric nitrogen into a form usable for the growth of plants.

Understanding these processes in the deep sea gives us unprecedented insight into how entire marine ecosystems function. “There are all these organisms down there making biomass, and that’s not at all accounted for in our carbon cycle,” commented senior author, Katrina Edwards, in an interview with OurAmazingPlanet.

Bacterial lifestyle shift

Apart from demonstrating the important ecological role of life at expired vents, this study also complements previous work by illustrating how the drastic environmental change that accompanies vent expiry favours organisms with an entirely different lifestyle. At active vent sites, bacteria get the energy they need to survive from the heat and chemical content of the fluids coming from deep within the Earth. At inactive vents, the two most commonly found bacterial groups stay alive using a different mechanism, generating energy from the minerals freed up by the natural weathering of the chimneys. “Seeing the shift in the microbial population – seeing who actually came and left, was fairly illuminating for me,” said Edwards.

This exploratory study will be followed up by more work on the mechanisms of bacterial community succession, both at hydrothermal vent sites and also, as Edwards explains to AstroBio Magazine, in microbes existing beneath rock surfaces. The authors conclude by explaining that studies focusing on other microscopic species have yet to be undertaken, but may hold equally great potential in understanding the vital ecological contribution of deep-sea microorganisms.

By Edvard Glücksman, EGU Science Communications Fellow  

Geosciences column special: Planetary science, part 2

13 Jan

This month we have a special edition of our Geosciences column with two pieces on planetary science written by external contributors. Whereas the first piece, published yesterday, focused on Martian water, this second article examines the internal structure of the Moon.

If you’d like to contribute to GeoLog, please contact EGU’s Media and Communications Officer, Bárbara T. Ferreira at media@egu.eu.


Moon not made of cheese!

A Science paper published last year re-examines previously obtained lunar seismograms to provide evidence that the moon’s core, like Earth’s, has a fluid outer layer and a solid interior.

Although its surface is barren, the moon's internal configuration is multilayered and resembles Earth's (Source: Wikipedia)

It is likely that, as generations of star-crossed lovers gazed towards the round face of the moon in the night sky, they could not help but wonder what it was made of. “Green cheese” (then referring to freshly made, or immature), as John Heywood proposed in 1546, was probably as good a guess as any. When telescopes allowed a closer view it came with a big disappointment for cheese lovers: all this time, humankind had been staring at a rocky sphere that, furthermore, appeared passive and sterile.

But things are not always as they seem. By re-evaluating data obtained decades ago by the Apollo missions, Renee Weber and her team of planetary scientists at NASA provide, for the first time, a detailed picture of the moon’s interior. Moreover, they show that it is remarkably similar to Earth’s: a small inner core enclosed by a slightly larger fluid outer core, both surrounded by an even larger partially molten zone, a solid mantle and, finally, a crust.

The Science study relates the polarization of seismic waves to the likely internal configuration of the moon. The waves, recorded in the early 1970s, were digitally stacked and filtered, enabling the researchers to better pinpoint where each shock event took place and to determine the waves’ velocity and direction. By identifying waves that may have been directed towards the centre of the moon from each area of seismic activity, or cluster zone, and reflected back from the lunar core, the researchers obtained indirect information on the boundaries separating each of the moon’s layers.
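Stacking works because deep moonquakes from a given cluster repeat almost identically: averaging many aligned recordings reinforces the common signal while random noise partially cancels. The toy example below illustrates the principle on synthetic traces; it is a sketch of the general technique, not the actual Apollo processing chain.

```python
import numpy as np

# Stacking synthetic seismograms: N aligned noisy copies of the same weak
# arrival are averaged, improving the signal-to-noise ratio by ~sqrt(N).
rng = np.random.default_rng(0)
t = np.linspace(0, 10, 1000)
signal = np.exp(-(t - 5) ** 2) * np.sin(40 * t)   # a weak, repeatable arrival

traces = [signal + rng.normal(0, 2, t.size) for _ in range(100)]
stack = np.mean(traces, axis=0)                   # the stacked seismogram

snr_single = np.abs(signal).max() / np.std(traces[0] - signal)
snr_stacked = np.abs(signal).max() / np.std(stack - signal)
print(f"SNR single trace: {snr_single:.2f}   stacked: {snr_stacked:.2f}")
```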

Seismic events create different types of waves. Longitudinal, or primary (p-), waves are fast and weak; the ground moves back and forth along the direction of travel, superficially resembling the locomotion of a caterpillar. Shear, or secondary (s-), waves are slower but stronger, and the ground moves from side to side, as in the movement of a snake. When p- and s-waves travel in the same direction, their motions are perpendicular to each other, and the waves are polarized.

The waves also differ according to the medium through which they are moving. In fluids, shear waves are attenuated, rapidly losing their energy (which is why, on Earth, we cannot measure direct shear waves from quakes occurring on the other side of the planet: they would have to pass through the outer core, which comprises mostly molten iron). This is how Weber and her colleagues were able to indirectly investigate the density of each of the moon’s internal layers.

The recent study would have been impossible without data from the Apollo missions, the last of which left the moon in 1972. The astronauts left behind strategically placed seismic detectors forming a triangle with edges over 1,000 km long – far enough apart to pinpoint the location of underground tremors, which were previously not known to exist. The lunar Passive Seismic Experiments (PSE), as they were called, continuously recorded five years of data and sent them back to Earth, identifying over 12,000 seismic events, including likely meteorite impacts and moonquakes. These data were reinforced by later research showing that most deep moonquakes occurred repeatedly in particular source regions, located around 1,000 km below the surface, and were associated with the constant tidal pressure changes that the moon experiences as it orbits Earth and, with it, the sun.

Plans to place additional seismometers on the moon have thus far been postponed or scrapped and, therefore, the PSE catalogue has remained the only data source for moonquakes. However, as lovers may today still gaze at the moon, at least they can be certain it is not made of cheese.

By Till F. Sonnemann, researcher at the University of Sydney

Geosciences column special: Planetary science, part 1

12 Jan

This month we have a special edition of our Geosciences column with two pieces on planetary science written by external contributors. The first article, published today, focuses on Martian water while the second, to be published tomorrow, examines the interior structure of the Moon.

If you’d like to contribute to GeoLog, please contact EGU’s Media and Communications Officer, Bárbara T. Ferreira at media@egu.eu.


Martian water lasted longer than previously suspected

There is no liquid water on Mars today, but an article published in Geology late last year suggests some areas of the Red Planet may have held water for longer, and more recently, than scientists previously believed. Mineralogical data gathered by the Mars Reconnaissance Orbiter (MRO) from two troughs in the Noctis Labyrinthus area of Valles Marineris show evidence of the continuous presence of water as recently as two billion years ago. Most other traces of Martian water date back at least three and a half billion years, a significant difference even in geologic time.

“It’s twice as young as other places where water used to exist,” said Catherine Weitz, lead author of the paper and a researcher at the Planetary Science Institute in Tucson, Arizona.

Valles Marineris is a system of canyons running along the equator of Mars. (Source: Wikipedia)

Four billion years ago, Mars was a very different place from the arid red planet we know today. Liquid water could have flowed across the surface of the planet and there may even have been rudimentary life. But unlike Earth, Mars could not sustain this kind of environment and began to dry out. Surface water, along with oxygen and other atmospheric constituents, escaped into space and temperatures dropped. Now, any water left on Mars is either deep underground or frozen in the small polar ice caps. “It didn’t happen all at once,” Weitz said. “It was a slow process, taking place over hundreds of millions of years.”

Although several deep valleys and troughs on Mars show signs of ancient water, only the two discussed in the Geology article show evidence that water lasted there for more than a very brief time. “It’s the observation of the clays over the sulfates that is interesting and unique to this region,” Weitz said. “It means that water was persistent and in a liquid state for months to years.”

Aside from the clay layering, the researchers also found other lines of evidence that water may once have existed in Noctis Labyrinthus. For example, the presence of hydrated minerals, including silicates such as opal, as well as other types of clay, suggests water may once have existed there and at other locations on Mars.

Furthermore, the two troughs in question lie close to Tharsis, a volcanic plateau home to one of the largest volcanoes in the Solar System. This is important because volcanic and tectonic activity is often associated with mineral hydration, as groundwater is pushed up around and through ore and mineral beds by these powerful events. Even when such water ultimately disappears, traces remain embedded in the area’s mineral structure.

Apart from revealing a unique time frame for the presence of water in the two Noctis Labyrinthus troughs, the recent study also suggests this water had a neutral pH, unlike the acidic water suspected to have been present elsewhere on Mars at the same time. This could be determined by examining the unique mineral layering within the troughs. The finding is significant because water with a neutral pH would, in theory, be more likely to sustain life.

Putting together the ancient geological history of Mars remotely from Earth is not an easy task, yet MRO’s High Resolution Imaging Science Experiment (HiRISE) and Compact Reconnaissance Imaging Spectrometer for Mars (CRISM) provide such high-quality images of the Martian surface that researchers can examine, centimetre by centimetre, the minerals in troughs, measuring their spectra for information on their history. “The troughs are in a pretty rugged area high up,” Weitz said. “We wouldn’t be able to get a lander there.”

The study also used data collected by the Viking Mars missions in the 1970s and 80s to measure the age of the area around the troughs by mapping its geology and counting craters. The relatively few craters present indicate that the local geology stabilized only very recently.

The next step for Weitz and her colleagues will be to try to find other Martian locations with characteristics similar to those observed at Noctis Labyrinthus. “We have revised some of what we know about water on Mars,” Weitz said. “Now we want to find out more.”

By Eric Hal Schwartz, science writer at the US Environmental Protection Agency

Geosciences column: Why are jet streams not good wind energy sources?

2 Dec

Commercial airlines know jet streams well. Planes often hitch a ride on these strong, high-altitude atmospheric winds, which blow from west to east, to fly faster, and they are the reason why long-haul easterly flights (such as those between the US and Europe) are quicker than the corresponding westerly journeys.

Scientists are also familiar with these fierce and persistent winds, which occur at altitudes of 7 to 16 kilometres and have velocities from 90 to several hundred kilometres per hour. Some have even suggested we could harvest wind power from jet streams by developing appropriate airborne technology, such as large kite-like wind-power generators. A group of researchers from the US and Australia estimated in 2007 that this potential renewable energy source could provide roughly 100 times the global energy demand.

But research published this week in Earth System Dynamics, a journal of the European Geosciences Union, challenges this assumption. Lee Miller and collaborators from the Max Planck Institute for Biogeochemistry in Jena, Germany, calculated the maximum extractable energy from these streams to be about 200 times less than previously reported. They also warned that extracting wind power in this way can result in significant climate impacts.

Airborne wind-power generators: to remain science fiction? (Source: AlphaGalileo)

The scientists point out that, as is well known in meteorology, the high velocities of jet streams are not the result of a strong power source but a consequence of the near absence of friction high up in the atmosphere. Their calculations show that, in fact, it takes very little power to accelerate and sustain these winds.

“It is this low energy generation rate that ultimately limits the potential use of jet streams as a renewable energy resource,” said Axel Kleidon, the study’s leader, in a press release.

A maximum of 7.5 terawatts (7.5 trillion watts) – less than half of the 2010 global energy demand of 17 terawatts – can be extracted from jet streams, they determined. Previous studies arrived at much higher values because they used wind velocity to estimate wind power, a method the Max Planck researchers claim is flawed.
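The two headline figures are mutually consistent: 100 times the 17-terawatt global demand puts the 2007 estimate near 1,700 terawatts, roughly 230 times the new 7.5-terawatt maximum – hence “about 200 times less than previously reported”. The check, in a few lines:

```python
# Consistency check of the figures quoted in the article.
global_demand_TW = 17                       # 2010 global energy demand
old_estimate_TW = 100 * global_demand_TW    # "~100 times global demand" (2007)
new_estimate_TW = 7.5                       # Miller et al., maximum extractable

print(f"previous estimate: {old_estimate_TW} TW")
print(f"ratio old/new: {old_estimate_TW / new_estimate_TW:.0f}")  # ~230
```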

As with other weather systems, jet streams arise in part because equatorial regions are warmer than the poles, which are less strongly heated by the sun. The differences in temperature and air pressure between these regions set the atmosphere in motion, creating the strong winds. It is these differences, rather than wind speeds, that control how much of the generated wind power can be used as an energy resource.

The authors also estimated the climate impacts of extracting energy from jet streams. Wind turbines create resistance when harvesting energy, which alters the flow of the wind. When substantial amounts of energy are extracted, this disruption can slow down the entire climate system of our planet.

If 7.5 terawatts of energy were extracted from jet streams “the atmosphere would generate 40 times less wind energy than what we would gain from the wind turbines,” said Miller in a press release.

“This results in drastic changes in temperature and weather.”

By Bárbara Ferreira, EGU’s Media and Communications Officer

Geosciences column: Iceland spar, or how Vikings used sunstones to navigate

2 Nov

Nowadays, we can rely on GPS receivers or magnetic compasses to tell us how to reach our destination. Some 1000 years ago, Vikings had none of these advanced navigation tools. Yet they successfully sailed from Scandinavia to America through near-polar regions where it can be hard to use the Sun and the stars as a compass: clouds or fog and the long twilights characteristic of polar summers complicate direct observations of these celestial bodies. So how did they find their bearings? A new study published in Proceedings of the Royal Society A shows that they probably used Iceland spar, a “sunstone”.

Centuries-old Viking legends tell of glowing sunstones that navigators used to find the position of the Sun and set the ship’s course even on cloudy days. In 1967, a Danish archaeologist named Thorkild Ramskou speculated that the Viking sunstone could have been Iceland spar, a clear variety of calcite common in Iceland and parts of Scandinavia.

This crystal has an interesting property called birefringence: a light ray falling on calcite is divided in two, forming a double image on its far side. (This double image is easily seen by placing transparent calcite on printed text.) Further, Iceland spar is a polarising crystal, meaning the two images will have different brightnesses depending on the polarisation of the light.

Birefringence of Iceland Spar seen by placing it upon a paper with written text. Source: Wikimedia Commons.

Light is made up of electromagnetic waves with component electric and magnetic fields. If these components have a specific orientation, the light is said to be polarised, while in unpolarised light the orientation of these fields has no preferred direction. Calcite can appear dark or light depending on the polarisation of light that falls upon it.

Sunlight becomes polarised as it crosses the Earth’s atmosphere, and the sky forms a pattern of rings of polarised light centred on the Sun. Changing the orientation of calcite as light passes through it will change the relative brightness of the projections of the split beams, even when the Sun is hiding behind clouds or just below the horizon. The beams are equally bright when the crystal is aligned to the Sun.
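The equal-brightness condition can be illustrated with Malus’s law, which gives the intensity of each split beam as the crystal rotates. The sketch below is idealised – it assumes fully polarised incoming light, whereas real skylight is only partially polarised, and the polarisation angle is an arbitrary example value – but it shows how rotating the crystal until the two images match recovers the polarisation direction.

```python
import numpy as np

# Idealised sunstone sketch. Calcite splits light into two beams whose
# intensities follow Malus's law: I_o ~ cos^2(theta), I_e ~ sin^2(theta),
# where theta is the angle between the sky's polarisation direction and
# the crystal axis. The beams are equally bright at theta = 45 degrees.
pol_direction = 37.0   # 'unknown' sky polarisation angle, deg (example value)

def beam_intensities(crystal_angle_deg):
    theta = np.radians(crystal_angle_deg - pol_direction)
    return np.cos(theta) ** 2, np.sin(theta) ** 2

# Rotate the crystal and find where the two images are equally bright.
angles = np.linspace(0, 180, 1801)
diff = [abs(a - b) for a, b in (beam_intensities(x) for x in angles)]
crystal_at_balance = angles[int(np.argmin(diff))]
# (A second balance point lies 90 degrees away; the device's pointer and
# the sky's ring-shaped polarisation pattern resolve that ambiguity.)
print(f"balance at {crystal_at_balance:.1f} deg "
      f"-> polarisation near {crystal_at_balance - 45:.1f} deg")
```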

It can be hard to determine when exactly these split beams have equal brightness. But the new study, led by Guy Ropars at the University of Rennes 1 in France, suggests Vikings could have built a simple device to better use the sunstone.

The technique consists of covering the Iceland spar with an opaque screen that has a small hole in its centre and a pointer. As light passes through the hole onto the crystal, a dark surface below receives the projection of the double image for comparison.

The authors of the Proceedings of the Royal Society study believe Vikings could have used a device like this to navigate. The crystal is inside, and the projection of a double image is seen below it. Credit: Guy Ropars. Source: ScienceNOW.

By rotating the apparatus and determining the direction at which the two images were equal in brightness, the team managed to pinpoint the Sun’s position on a cloudy day with an accuracy of one degree on either side. Researchers also conducted tests when the Sun was largely below the horizon. “We have verified that the human eye can reliably guess clearly the Sun direction in dark twilights, even until the stars become observable,” Ropars’ team writes in the paper.

Although archaeologists have not yet found Iceland spar among Viking shipwrecks, the new study adds credence to the idea that Viking seafarers used the crystal in their travels.

Further, the recent finding of a calcite crystal on a sixteenth-century Elizabethan shipwreck shows that navigators could have used Iceland spar even after the appearance of the magnetic compass. Cannons on ships could perturb a magnetic compass orientation by 90 degrees, so a crystal serving as an optical compass could have been crucial in avoiding navigational errors and getting sailors safely to port.

By Bárbara Ferreira, EGU’s Media and Communications Officer